Digital images have been used in various fields as an essential carrier. Many color images are constantly being produced because of their more realistic depiction, which takes up much storage space and network bandwidth. T...
The Internet of Things (IoT) is a large-scale network of devices capable of sensing, data processing, and communicating with each other through different communication protocols. In today's technology ecosystem, IoT interacts with many application areas such as smart cities, smart buildings, security, traffic, remote monitoring, health, energy, disaster management, agriculture, and industry. The IoT network in these scenarios comprises tiny devices, gateways, and cloud platforms. An IoT network is able to keep these fundamental components communicating under many conditions by using lightweight communication protocols that take into account the limited hardware features (memory, processor, energy, etc.) of tiny devices. These lightweight communication protocols affect the network traffic, reliability, bandwidth, and energy consumption of the IoT application. Therefore, determining the most appropriate communication protocol emerges as an important engineering problem for application developers. This paper presents a straightforward overview of lightweight communication protocols and technological advancements in the application layer of the IoT ecosystem. The survey then analyzes various recent lightweight communication protocols and reviews their strengths and limitations. In addition, the paper presents an experimental comparison of the Constrained Application Protocol (CoAP), Message Queuing Telemetry Transport (MQTT), and WebSocket protocols, which are the most suitable for tiny IoT devices. Finally, we discuss future research directions for IoT communication protocols.
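One reason CoAP suits constrained devices is its compact fixed header of only 4 bytes (RFC 7252). As a minimal sketch (the function name is ours, not from the survey), the header can be packed as follows:

```python
def encode_coap_header(msg_type: int, code: int, message_id: int, token: bytes = b"") -> bytes:
    """Pack a CoAP fixed header (RFC 7252, Section 3) plus optional token.

    msg_type: 0=CON, 1=NON, 2=ACK, 3=RST. code: e.g. 0x01 for GET (0.01)."""
    version = 1                      # CoAP version is always 1
    tkl = len(token)                 # token length, 0..8 bytes
    if not 0 <= tkl <= 8:
        raise ValueError("token must be 0-8 bytes")
    first = (version << 6) | (msg_type << 4) | tkl
    return bytes([first, code]) + message_id.to_bytes(2, "big") + token

# A confirmable GET (code 0.01) with message ID 0x1234: just 4 bytes on the wire
header = encode_coap_header(msg_type=0, code=0x01, message_id=0x1234)
```

By contrast, MQTT trades this per-message compactness for a stateful broker-based publish/subscribe model, which is one axis of the experimental comparison above.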
In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow applications. Compared with cloud data centers, fog computing takes more time to run workflow applications. Hence, it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog computing environments. Optimal task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes. This process ensures that the tasks are executed with minimal energy consumption, which reduces the chances of resource wastage. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach and (ii) VM allocation via an algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance global exploration and local exploitation. This balance enables the use of a wide range of solutions, leading to a lower total cost and makespan in comparison to other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures, namely Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. In relation to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated for 50 tasks.
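To illustrate the chaotic ingredient of such a scheme, the sketch below runs a plain particle swarm optimizer whose initial positions come from the logistic map, a common way to inject chaos into PSO. This is only an illustration under our own assumptions: the paper's FSCPSO additionally uses fitness sharing and a fractional-selectivity scheduler, which are omitted here, and the benchmark function is a toy sphere rather than a fog scheduling cost.

```python
import random

def chaotic_pso(fitness, dim=5, particles=20, iters=200, seed=42):
    """Minimal PSO with logistic-map chaotic initialization (illustrative only)."""
    rng = random.Random(seed)
    x = rng.uniform(0.1, 0.9)
    def chaos():
        # Logistic map x_{n+1} = 4x(1-x) yields chaotic values in (0, 1)
        nonlocal x
        x = 4.0 * x * (1.0 - x)
        return x
    pos = [[chaos() * 10 - 5 for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda v: sum(t * t for t in v)   # toy stand-in for a scheduling cost
best = chaotic_pso(sphere)
```

The chaotic initialization spreads particles over the search space more irregularly than uniform sampling, which is the exploration benefit the abstract alludes to.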
As a crucial data preprocessing method in data mining, feature selection (FS) can be regarded as a bi-objective optimization problem that aims to maximize classification accuracy and minimize the number of selected features. Evolutionary computing (EC) is promising for FS owing to its powerful search capability. However, in traditional EC-based methods, feature subsets are represented via a length-fixed individual encoding. This is ineffective for high-dimensional data, because it results in a huge search space and prohibitive training time. This work proposes a length-adaptive non-dominated sorting genetic algorithm (LA-NSGA) with a length-variable individual encoding and a length-adaptive evolution mechanism for bi-objective high-dimensional FS. In LA-NSGA, an initialization method based on correlation and redundancy is devised to initialize individuals of diverse lengths, and a Pareto dominance-based length change operator is introduced to guide individuals to explore promising regions of the search space. Moreover, a dominance-based local search method is employed for further improvement. Experimental results based on 12 high-dimensional gene datasets show that the Pareto front of feature subsets produced by LA-NSGA is superior to those of existing algorithms.
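The bi-objective framing rests on Pareto dominance: one feature subset beats another only if it is no worse on both objectives and strictly better on at least one. A minimal sketch, with toy (classification error, subset size) pairs of our own invention rather than results from the paper:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b; each solution is a tuple of
    objectives to minimize, here (classification error, number of features)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Toy feature subsets: (error rate, number of selected features)
subsets = [(0.08, 40), (0.10, 12), (0.08, 55), (0.15, 5)]
front = pareto_front(subsets)   # (0.08, 55) is dominated by (0.08, 40)
```

NSGA-style algorithms such as LA-NSGA repeatedly sort the population by this relation; the length-change operator then exploits the trade-off front to decide whether an individual should grow or shrink its encoding.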
Due to the importance of Critical Infrastructure (CI) in a nation's economy, they have been lucrative targets for cyber attacks. These critical infrastructures are usually Cyber-Physical Systems, such as power grids, water and sewage treatment facilities, and oil and gas pipelines. In recent times, these systems have suffered from cyber attacks numerous times. Researchers have been developing cyber security solutions for CIs to avoid lasting damage. According to standard frameworks, identification, protection, detection, response, and recovery are at the core of these cyber security solutions. Detection of an ongoing attack that escapes standard protection, such as firewalls, anti-virus software, and host/network intrusion detection, has gained importance, as such attacks eventually affect the physical dynamics of the system. Therefore, anomaly detection in the physical dynamics proves an effective means of implementing defense. PASAD is one example of anomaly detection in the sensor/actuator data representing such systems' physical dynamics. We present EPASAD, which improves the detection technique used in PASAD to detect micro-stealthy attacks, as our experiments show that PASAD's spherical boundary-based detection fails to detect them. Our method EPASAD overcomes this by using ellipsoid boundaries, thereby tightening the boundary in each dimension, whereas a spherical boundary treats all dimensions equally. We validate EPASAD using the datasets produced by the TE-process simulator and the C-town network. The results show that EPASAD improves PASAD's average recall by 5.8% and 9.5% for the two datasets, respectively.
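The geometric intuition can be sketched in a few lines: a sphere uses one radius for every dimension, while an axis-aligned ellipsoid gets a per-dimension radius, so a micro-deviation along a tight dimension is flagged even when the overall Euclidean distance stays small. This toy example is our own and is not the actual EPASAD detector, which operates in a lagged signal subspace:

```python
import math

def inside_sphere(x, center, radius):
    """Spherical boundary: a single radius for every dimension."""
    return math.dist(x, center) <= radius

def inside_ellipsoid(x, center, radii):
    """Axis-aligned ellipsoid boundary: sum((x_i - c_i)^2 / r_i^2) <= 1."""
    return sum((xi - ci) ** 2 / ri ** 2
               for xi, ci, ri in zip(x, center, radii)) <= 1.0

center = (0.0, 0.0)
point = (0.5, 2.5)   # small shift in dim 0, large shift in the tight dim 1
sphere_ok = inside_sphere(point, center, radius=3.0)          # stays inside
ellipsoid_ok = inside_ellipsoid(point, center, radii=(3.0, 1.0))  # flagged
```

Here the stealthy point sits well inside the sphere (distance ≈ 2.55 < 3) yet far outside the ellipsoid, mirroring why tightening the boundary per dimension raises recall on micro-stealthy attacks.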
Glaucoma is currently one of the most significant causes of permanent blindness. Fundus imaging is the most popular glaucoma screening method because of the favorable trade-offs it offers in portability, size, and cost. In recent years, convolutional neural networks (CNNs) have revolutionized computer vision. Convolution is a "local" operation that is applied only to a small region of an image. Vision Transformers (ViT) use self-attention, which is a "global" operation, since it collects information from the entire image. As a result, a ViT can successfully capture long-range semantic relevance in an image. This study examined several optimizers, including Adamax, SGD, RMSprop, Adadelta, Adafactor, Nadam, and Adagrad. With 1750 healthy and glaucoma images in the IEEE fundus image dataset and 4800 healthy and glaucoma images in the LAG fundus image dataset, we trained and tested the ViT model on these datasets. Additionally, the datasets underwent image scaling, auto-rotation, and auto-contrast adjustment via adaptive equalization during preprocessing. The results demonstrated that preprocessing the datasets and tuning the optimizer improved accuracy and other performance metrics. According to the results, the Nadam optimizer improved accuracy up to 97.8% with adaptive-equalization preprocessing of the IEEE dataset and up to 92% with adaptive-equalization preprocessing of the LAG dataset, followed in both cases by the auto-rotation and image-resizing variants. In addition to integrating our Vision Transformer model with the shift tokenization model, we also combined ViT with a hybrid model that consisted of six different models, namely SVM, Gaussian NB, Bernoulli NB, Decision Tree, KNN, and Random Forest, based on which optimizer was most successful for each dataset. Empirical results show that the SVM model worked well and improved accuracy by up to 93% with precision of up to 94% in the adaptive equalization preprocess
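The "global" behavior of self-attention can be made concrete with a tiny scaled dot-product sketch over a handful of toy patch embeddings. This is a generic single-head attention under our own simplifying assumptions (identity projection matrices, no multi-head or positional encoding), not the specific ViT configuration of the study:

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over n patch embeddings (n x d lists).

    Every output row is a weighted mix of *all* patches' values -- the
    global receptive field that distinguishes ViT from local convolution."""
    matmul = lambda A, B: [[sum(a * b for a, b in zip(row, col))
                            for col in zip(*B)] for row in A]
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    dk = len(K[0])
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(dk) for kr in K]
              for qr in Q]
    weights = [softmax(r) for r in scores]   # each row sums to 1 over all patches
    return matmul(weights, V), weights

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three toy "patches"
I = [[1.0, 0.0], [0.0, 1.0]]               # identity Wq/Wk/Wv for simplicity
out, w = self_attention(X, I, I, I)
```

Because every attention weight row spans all patches, even the first patch's output depends on the last patch, whereas a convolution of small kernel size could never connect them in one layer.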
Accurately diagnosing Alzheimer's disease is essential for improving elderly healthcare. Meanwhile, accurate prediction of the mini-mental state examination score can also measure cognitive impairment and track the progression of Alzheimer's disease. However, most of the existing methods perform Alzheimer's disease diagnosis and mini-mental state examination score prediction separately and ignore the relation between these two tasks. To address this challenging problem, we propose a novel multi-task learning method, which uses feature interaction to explore the relationship between Alzheimer's disease diagnosis and mini-mental state examination score prediction. In our proposed method, features from each task branch are first decoupled into candidate and non-candidate parts for interaction. Then, we propose a feature sharing module to obtain shared features from the candidate features and return the shared features to the task branches, which promotes the learning of each task. We validate the effectiveness of our proposed method on multiple datasets. On the Alzheimer's Disease Neuroimaging Initiative 1 dataset, the accuracy in the diagnosis task and the root mean squared error in the prediction task of our proposed method are 87.86% and 2.5, respectively. Experimental results show that our proposed method outperforms most state-of-the-art methods. Our proposed method enables accurate Alzheimer's disease diagnosis and mini-mental state examination score prediction. Therefore, it can be used as a reference for the clinical diagnosis of Alzheimer's disease, and can also help doctors and patients track disease progression in a timely manner.
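A common way such multi-task methods couple a classification task with a regression task is a weighted joint loss. The sketch below shows only that coupling for a single sample, under our own assumptions (a weight alpha and toy numbers); the paper's feature decoupling and sharing modules are not modeled here:

```python
import math

def multi_task_loss(diag_probs, diag_label, mmse_pred, mmse_true, alpha=0.5):
    """Joint loss: cross-entropy for the diagnosis task plus squared error
    for MMSE score regression, mixed by the task weight alpha."""
    ce = -math.log(diag_probs[diag_label])   # classification term
    mse = (mmse_pred - mmse_true) ** 2       # regression term
    return alpha * ce + (1 - alpha) * mse

# Toy sample: 90% confidence in the correct class, MMSE predicted 24 vs true 26
loss = multi_task_loss([0.1, 0.9], 1, mmse_pred=24.0, mmse_true=26.0, alpha=0.5)
```

Training both branches through one loss is what lets gradients from the MMSE regression inform the diagnosis features and vice versa, which is the relation the abstract says separate models ignore.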
Mushroom categorization is a difficult process, since there are so many different species and their visual characteristics vary widely. In this paper, we investigate the use of transfer learning techniques fo...
Coronary artery disease (CAD) is a common cardiovascular illness with high fatality rates. Researchers have been exploring alternative methods to diagnose and assess the severity of CAD that are less invasive, cost-ef...
AI (Artificial Intelligence) workloads are proliferating in modern real-time systems. As the tasks of AI workloads fluctuate over time, resource planning policies used for traditional fixed real-time tasks should be revised. In particular, it is difficult to immediately handle changes in real-time tasks without violating the deadline constraints. To cope with this situation, this paper analyzes the task situations of AI workloads and makes the following two observations. First, resource planning for AI workloads is a complicated search problem that requires much time for optimization. Second, although the task set of an AI workload may change over time, the possible combinations of the task sets are known in advance. Based on these observations, this paper proposes a new resource planning scheme for AI workloads that supports the re-planning of resources. Instead of generating resource plans on the fly, the proposed scheme pre-determines resource plans for the various combinations of tasks. Thus, in any case, the workload is immediately executed according to a pre-determined resource plan. Specifically, the proposed scheme maintains an optimized CPU (Central Processing Unit) and memory resource plan using genetic algorithms and applies it as soon as the workload changes. The proposed scheme is implemented in the open-source simulator SimRTS for the validation of its effectiveness. Experimental results show that the proposed scheme reduces the energy consumption of CPU and memory by 45.5% on average without deadline misses.
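The core idea, offline search plus instant online lookup, can be sketched as follows. All names here are hypothetical, and the toy heuristic stands in for the paper's genetic-algorithm search; the point is only the split between the slow offline phase and the O(1) online phase:

```python
def optimize_plan(task_set):
    """Stand-in for the slow offline GA search: returns a toy resource plan
    (a CPU frequency level scaled by task count)."""
    return {"cpu_level": min(len(task_set), 4), "tasks": sorted(task_set)}

# Offline phase: all possible task-set combinations are known in advance,
# so a plan is pre-computed for each one.
known_task_sets = [{"infer"}, {"infer", "preprocess"}, {"infer", "preprocess", "log"}]
plans = {frozenset(ts): optimize_plan(ts) for ts in known_task_sets}

def on_workload_change(active_tasks):
    """Online phase: no search at runtime, just apply the pre-computed plan."""
    return plans[frozenset(active_tasks)]

plan = on_workload_change({"preprocess", "infer"})
```

Because the online path never runs the search, a task-set change cannot stall long enough to cause a deadline miss, which is what the scheme's evaluation in SimRTS measures.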