Hadoop is a popular framework based on the MapReduce programming model that enables distributed processing of large datasets across clusters with varying numbers of compute nodes. Just like any dynamic computational envir...
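The MapReduce model that Hadoop implements can be illustrated with a minimal, single-process word-count sketch: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This is only a conceptual sketch of the programming model, not Hadoop's Java API, and Hadoop itself distributes these phases across cluster nodes.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group emitted values by key, as Hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values; here, a simple word count."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # "the" occurs three times across the documents
```

In a real Hadoop job the mapper and reducer run on different nodes and the shuffle moves data over the network; the dataflow, however, is exactly this three-stage pipeline.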
Recently, nano-systems based on molecular communication via diffusion (MCvD) have been implemented in a variety of nanomedical applications, most notably in targeted drug delivery systems (TDDS). However, because the MCvD channel is unreliable and subject to molecular noise and inter-symbol interference (ISI), cooperative nano-relays can provide the reliability required for drug delivery to targeted diseased cells, especially when the separation distance between the nano-transmitter and nano-receiver is large. In this work, we propose an approach for optimizing the performance of the nano-system using cooperative molecular communication with a nano-relay scheme, while accounting for blood flow effects in terms of drift velocity. The fractions of the molecular drug budget that should be allocated to the nano-transmitter and the nano-relay position are computed using a collaborative optimization problem solved by the Modified Central Force Optimization (MCFO) algorithm. Unlike the previous work, the probability of bit error is expressed in a closed-form expression. It is used as an objective function to determine the optimal velocity of the drug molecules and the detection threshold at the nano-receiver. The simulation results show that the probability of bit error can be dramatically reduced by optimizing the drift velocity, detection threshold, location of the nano-relay in the proposed nano-system, and molecular drug budget.
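The paper's closed-form bit-error expression is not reproduced in the abstract, but the detection model it optimizes over can be sketched generically: the receiver counts absorbed molecules per slot, the count is approximated as Gaussian, one previous slot contributes ISI, and a count threshold decides the bit. Everything below is an illustrative assumption, not the paper's formula: the hit probabilities `p_cur` and `p_isi` (which in MCvD depend on diffusion, drift velocity, and transmitter-receiver distance), the molecule budget, and the threshold values.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bit_error_probability(n_molecules, p_cur, p_isi, tau):
    """Average bit-error probability under a Gaussian approximation of the
    absorbed-molecule count, with one-slot ISI and equiprobable bits.
    p_cur: per-molecule hit probability in the current slot;
    p_isi: residual hit probability from the previous slot's release."""
    total = 0.0
    for b_prev in (0, 1):
        for b in (0, 1):
            mean = n_molecules * (b * p_cur + b_prev * p_isi)
            var = n_molecules * (b * p_cur * (1 - p_cur)
                                 + b_prev * p_isi * (1 - p_isi))
            if var == 0.0:
                # Deterministic count (e.g. both bits are 0): error iff the
                # threshold decision disagrees with the transmitted bit.
                err = 1.0 if (mean >= tau) != (b == 1) else 0.0
            else:
                sigma = math.sqrt(var)
                if b == 1:   # error: count falls below the threshold
                    err = 1.0 - q_func((tau - mean) / sigma)
                else:        # error: ISI pushes the count above the threshold
                    err = q_func((tau - mean) / sigma)
            total += 0.25 * err  # equiprobable (b_prev, b) combinations
    return total

# A threshold between the ISI-only mean (20) and the signal mean (100)
# gives a far lower error rate than one sitting inside the ISI cloud.
print(bit_error_probability(1000, 0.1, 0.02, 60))
print(bit_error_probability(1000, 0.1, 0.02, 5))
```

This is the kind of objective a metaheuristic such as MCFO would minimize over drift velocity (which shifts `p_cur`/`p_isi`) and `tau`.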
Reinforcement learning (RL) depends critically on how well the reward function captures the aim of the application. In safety-critical systems, several safety properties must be met during the RL process in additi...
Nowadays, cloud computing provides easy access to a set of variable and configurable computing resources on user demand through the Internet. Cloud computing services are available through common internet protocols and network standards. In addition to the unique benefits of cloud computing, insecure communication and attacks on cloud networks cannot be ignored. There are several techniques for dealing with network attacks. To this end, network anomaly detection systems are widely used as an effective countermeasure against network anomalies. The anomaly-based approach generally learns normal traffic patterns in various ways and identifies patterns of anomalies. Network anomaly detection systems have gained much attention for intelligently monitoring network traffic using machine learning methods. This paper presents an efficient model based on autoencoders for anomaly detection in cloud computing networks. The autoencoder learns a basic representation of the normal data and its reconstruction with minimum error. Then, the reconstruction error is used as an anomaly or classification criterion. In addition to detecting anomalous data among normal data, the classification of anomaly types has also been considered. We have proposed a new approach by examining an autoencoder's anomaly detection method based on data reconstruction error. Unlike the existing autoencoder-based anomaly detection techniques that consider the reconstruction error of all input features as a single value, we assume that the reconstruction error is a vector. This enables our model to use the reconstruction error of every input feature as an anomaly or classification criterion. We further propose a multi-class classification structure to classify the anomalies. We use the CIDDS-001 dataset, a commonly accepted dataset in the literature. The evaluations show that the performance of the proposed method has improved considerably compared to the existing ones in terms of accuracy, recall, false-positive rate, and F1-score.
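The per-feature reconstruction-error idea can be sketched with a linear autoencoder, whose optimal bottleneck is known to coincide with PCA, so it can be fit in closed form via the SVD. This is a toy stand-in for the paper's model (the actual architecture, data, and scoring rule are not given in the abstract); the point it illustrates is that keeping the error as a vector, rather than collapsing it to one scalar, both flags the anomaly and hints at which feature is aberrant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal traffic": 200 samples, 5 correlated features
# generated from a 2-dimensional latent structure plus small noise.
base = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 5))
normal = base @ mix + 0.05 * rng.normal(size=(200, 5))

# A linear autoencoder with a 2-unit bottleneck has the same optimal
# solution as PCA, so we obtain its weights from the SVD directly.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
encoder = vt[:2]  # 2 x 5 projection onto the learned subspace

def per_feature_error(x):
    """Reconstruction error of each input feature (a vector, not a scalar)."""
    code = (x - mean) @ encoder.T
    recon = code @ encoder + mean
    return np.abs(x - recon)

# An anomaly: a normal-looking sample with feature 3 perturbed.
anomaly = normal[0].copy()
anomaly[3] += 10.0

err_normal = per_feature_error(normal[1])
err_anom = per_feature_error(anomaly)
# Scoring can use the vector's sum, max, or a per-feature classifier.
print(err_normal.sum(), err_anom.sum())
```

A multi-class head, as proposed in the paper, would take the error vector (rather than its norm) as input features for classifying the anomaly type.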
This work focuses on the problem of distributed optimization in multi-agent cyberphysical systems, where a legitimate agent's iterates are influenced both by the values it receives from potentially malicious neigh...
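One standard defense in this adversarial setting is coordinate-wise trimmed-mean aggregation: each legitimate agent sorts the values received from its neighbors and discards the `f` most extreme on each side before averaging, bounding the influence of up to `f` malicious reports per side. This is a hedged sketch of a common resilient-aggregation rule, not necessarily the algorithm analyzed in the work above.

```python
import numpy as np

def trimmed_mean(values, f):
    """Discard the f smallest and f largest entries per coordinate,
    then average the remainder. Tolerates up to f outliers per side."""
    v = np.sort(np.asarray(values, dtype=float), axis=0)
    return v[f:len(v) - f].mean(axis=0)

# Honest neighbors report values near the true state 1.0;
# one malicious neighbor injects an extreme value.
reports = [1.02, 0.98, 1.01, 0.99, 50.0]
robust = trimmed_mean(reports, f=1)
naive = np.mean(reports)
print(robust, naive)  # the naive mean is dragged far from 1.0
```

In a distributed-optimization iteration, the aggregated value would then be combined with the agent's local gradient step.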
The Internet of Things (IoT) seeks to establish a vast network comprising billions of interconnected devices to enable seamless data exchange and intelligent interactions between people and objects. This network is ch...
This study presents a novel ultra-high step-up (UHSU) DC-DC topology tailored for applications in DC microgrids. The proposed configuration utilizes a quadratic-based topology, achieving a remarkably high voltage gain ...
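To see why quadratic-based stages reach ultra-high step-up ratios, compare the ideal (lossless, continuous-conduction-mode) textbook gains: a conventional boost converter gives Vout/Vin = 1/(1-D), while the quadratic boost family squares that factor. These are illustrative textbook formulas; the exact gain expression of the proposed UHSU converter is not reproduced in the abstract.

```python
def boost_gain(d):
    """Ideal CCM voltage gain of a conventional boost converter."""
    return 1.0 / (1.0 - d)

def quadratic_boost_gain(d):
    """Ideal CCM voltage gain of a quadratic boost stage."""
    return 1.0 / (1.0 - d) ** 2

# The gap widens rapidly as the duty cycle D approaches 1.
for d in (0.5, 0.75, 0.9):
    print(d, boost_gain(d), quadratic_boost_gain(d))
```

At D = 0.9 the quadratic stage already reaches a gain of 100 versus 10 for the plain boost, without resorting to extreme duty cycles.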
The traditional method of images is routinely applied to electrostatic problems with infinite ground planes. However, practical situations almost always involve finite ground planes. This causes the prediction to be ine...
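The infinite-plane construction the abstract starts from is the textbook one: a point charge q at height h above a grounded plane is replaced by the charge plus an image charge -q mirrored below the plane, which forces the potential to vanish exactly on the plane. A minimal numerical check of that cancellation (a finite ground plane, as the abstract notes, breaks it):

```python
import math

K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def potential(q, h, x, y, z):
    """Potential of a point charge q at (0, 0, h) above an infinite grounded
    plane at z = 0, via the classical image charge -q at (0, 0, -h)."""
    r_real = math.dist((x, y, z), (0.0, 0.0, h))
    r_image = math.dist((x, y, z), (0.0, 0.0, -h))
    return K * q / r_real - K * q / r_image

# On the grounded plane (z = 0) the two terms cancel exactly;
# above the plane the real charge dominates and V > 0 for q > 0.
v_plane = potential(1e-9, 0.1, 0.3, -0.2, 0.0)
v_above = potential(1e-9, 0.1, 0.0, 0.0, 0.05)
print(v_plane, v_above)
```

With a finite plane the induced surface charge is truncated, the boundary condition V = 0 no longer holds everywhere, and the simple image solution becomes an approximation whose error grows near the plane's edges.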
In addition to its use in buildings and agriculture, global solar irradiance is one of the most critical factors in designing and sizing any solar station. Because the Iraqi meteorological organization ...
The phenomenon of atmospheric haze arises due to the scattering of light by minute particles suspended in the atmosphere. This optical effect gives rise to visual degradation in images and videos. The degradation is primarily influenced by two key factors: atmospheric attenuation and scattered light. Scattered light covers an image with a whitish veil, while attenuation diminishes the image's inherent contrast. Efforts to enhance image and video quality necessitate the development of dehazing techniques capable of mitigating the adverse impact of haze. This scholarly endeavor presents a comprehensive survey of recent advancements in the domain of dehazing techniques, encompassing both conventional methodologies and those founded on machine learning principles. Traditional dehazing techniques leverage a haze model to deduce a dehazed rendition of an image or frame. In contrast, learning-based techniques employ sophisticated mechanisms such as Convolutional Neural Networks (CNNs) and various deep Generative Adversarial Networks (GANs) to create models that can discern dehazed representations by learning intricate parameters like transmission maps, atmospheric light conditions, or their combined effects. Furthermore, some learning-based approaches facilitate the direct generation of dehazed outputs from hazy inputs by assimilating the non-linear mapping between the two. This review study delves into a comprehensive examination of datasets utilized within learning-based dehazing methodologies, elucidating their characteristics and relevance. Furthermore, a systematic exposition of the merits and demerits inherent in distinct dehazing techniques is presented. The discourse culminates in the synthesis of the primary quandaries and challenges confronted by prevailing dehazing techniques. The assessment of dehazed image and frame quality is facilitated through the application of rigorous evaluation metrics, a discussion of which is incorporated. To provide empiri
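The haze model that traditional dehazing techniques invert is the standard atmospheric scattering model, I(x) = J(x)*t(x) + A*(1 - t(x)), where I is the hazy observation, J the scene radiance, t the transmission map, and A the global atmospheric light. A minimal sketch on a synthetic image with known t and A (real methods must estimate both, e.g. via the dark channel prior or a learned network):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.uniform(0.0, 1.0, size=(4, 4, 3))   # clear scene radiance
t = rng.uniform(0.3, 0.9, size=(4, 4, 1))   # per-pixel transmission map
A = 0.95                                    # global atmospheric light

# Forward haze model: attenuation term + scattered-light (airlight) term.
I = J * t + A * (1.0 - t)

# Inversion: recover the scene radiance from I given t and A.
t_clamped = np.maximum(t, 0.1)              # avoid division blow-up as t -> 0
J_hat = (I - A) / t_clamped + A

print(np.abs(J_hat - J).max())  # recovery is exact when t and A are known
```

The whitish veil corresponds to the A*(1 - t) term growing as t falls, and the contrast loss to the multiplicative t; dehazing quality therefore hinges almost entirely on how well t and A are estimated.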