In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow applications. Compared with cloud data centers, fog computing takes more time to run workflow tasks. Hence, it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation altogether optimize the use of computational resources across different fog nodes. This process ensures that the tasks are executed with minimal energy consumption, which reduces the chances of resource wastage. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach and (ii) VM allocation using a proposed algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The proposed FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance global exploration and local exploitation. This balance enables the use of a wide range of solutions, leading to a lower total cost and makespan in comparison with other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures, namely Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. In relation to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated for 50 tasks.
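The following is a minimal sketch of the fitness-sharing chaotic PSO idea the abstract describes: a PSO loop whose inertia is driven by a chaotic map and whose fitness is inflated for crowded particles to preserve diversity. The parameter names, the logistic map, the sharing radius, and the toy objective are illustrative assumptions, not the paper's actual FSCPSO implementation.

```python
# Sketch of a fitness-sharing chaotic PSO (FSCPSO-style) loop.
# All constants and the logistic chaotic map are illustrative assumptions.
import numpy as np

def fscpso(cost, dim, n_particles=30, iters=100, sigma_share=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))   # candidate VM/task assignments (normalized)
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    chaos = 0.7                                        # logistic-map state drives randomness

    for _ in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)            # chaotic map, stays in (0, 1)
        w = 0.4 + 0.5 * chaos                          # chaos-modulated inertia weight
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)

        raw = np.array([cost(p) for p in pos])
        # Fitness sharing: inflate the cost of particles crowded within sigma_share,
        # which keeps the swarm diverse and balances exploration/exploitation.
        dists = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        niche = np.sum(np.maximum(0.0, 1.0 - dists / sigma_share), axis=1)
        shared = raw * niche

        improved = shared < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], shared[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, cost(gbest)

# Toy usage: minimize a stand-in makespan/total-cost objective.
best, best_cost = fscpso(lambda x: float(np.sum((x - 0.3) ** 2)), dim=5)
```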
Reduplication is a highly productive process in Bengali word formation, with significant implications for various natural language processing (NLP) applications, such as part-of-speech tagging and sentiment analysis....
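As a rough illustration of the phenomenon, the sketch below flags full (exact) reduplication in a token sequence, the kind of surface pattern a POS tagger or sentiment model could consume as a feature. The example tokens and the exact-match rule are assumptions for illustration only; the paper's treatment of Bengali partial and echo reduplication is not reproduced here.

```python
# Flag full (exact) reduplication: a token immediately repeated in sequence.
def full_reduplications(tokens):
    """Return (index, token) pairs where a token is immediately repeated."""
    return [(i, t) for i, (t, nxt) in enumerate(zip(tokens, tokens[1:])) if t == nxt]

# e.g. "বড় বড় বাড়ি" ("big big houses", i.e. intensified/pluralized "big houses")
print(full_reduplications(["বড়", "বড়", "বাড়ি"]))   # [(0, 'বড়')]
```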
This research work presents a novel language intervention system for Tamil-speaking children with autism spectrum disorder (ASD). The system addresses the considerable need for tools aimed at one more section o...
Dear Editor, The distributed constraint optimization problems (DCOPs) [1]-[3] provide an efficient model for solving the cooperative problems of multi-agent systems, and they have been successfully applied to model real-world problems such as distributed scheduling [4], sensor network management [5], [6], multi-robot coordination [7], and the smart grid [8]. However, DCOPs are not well suited to problems with continuous variables and constraint costs in functional form, such as target tracking sensor orientation [9], air and ground cooperative surveillance [10], and sensor network coverage [11].
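For readers unfamiliar with the model, the sketch below shows a tiny discrete DCOP: variables with finite domains, binary cost functions between pairs of variables, and the goal of finding an assignment minimizing total cost. The variable names, domains, and costs are illustrative assumptions, the brute-force solver only stands in for real distributed algorithms, and the continuous-variable setting the letter targets is not modeled here.

```python
# Tiny discrete DCOP: variables, finite domains, binary cost functions,
# and a brute-force search for the minimum-cost assignment.
from itertools import product

variables = ["x1", "x2", "x3"]
domains = {v: [0, 1, 2] for v in variables}
# Binary constraint costs, e.g. neighboring sensors prefer different orientations.
costs = {
    ("x1", "x2"): lambda a, b: 0 if a != b else 2,
    ("x2", "x3"): lambda a, b: abs(a - b),
}

def total_cost(assign):
    return sum(f(assign[u], assign[v]) for (u, v), f in costs.items())

# Enumerating the joint domain is fine for 3 variables; actual DCOP algorithms
# (e.g. DPOP, Max-Sum) distribute this computation across the agents.
best = min(
    (dict(zip(variables, vals)) for vals in product(*(domains[v] for v in variables))),
    key=total_cost,
)
print(best, total_cost(best))
```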
Internet of Things (IoT)-enabled Wireless Sensor Networks (WSNs) not only constitute an encouraging research domain but also represent a promising industrial trend that permits the development of various IoT-based ...
Heart disease includes a multiplicity of medical conditions that affect the structure, blood vessels, and general operation of the heart. Many researchers have made progress in correcting and predicting early heart disease, but more remains to be done. The diagnostic accuracy of many current studies is inadequate because they attempt to predict heart disease patients using traditional techniques. By using data fusion from several regions of the country, we intend to increase the accuracy of heart disease prediction. We propose a statistical approach that promotes insights triggered by feature interactions to reveal intricate patterns in the data that cannot be adequately captured by a single feature. We processed the data using techniques including feature scaling, outlier detection and replacement, and null and missing value imputation to improve the data quality. Then, the proposed feature engineering method uses the correlation test for numerical features and the chi-square test for categorical features to interact with the target. To reduce the dimensionality, we subsequently used PCA with 95% variance. To identify patients with heart disease, hyperparameter-tuned machine learning algorithms such as RF, XGBoost, Gradient Boosting, LightGBM, CatBoost, SVM, and MLP are utilized, along with ensemble techniques. The model's overall prediction performance ranges from 88% to 92%. To attain cutting-edge results, we then used a 1D CNN model, which significantly enhanced the prediction with an accuracy score of 96.36%, precision of 96.45%, recall of 96.36%, a specificity score of 99.51%, and an F1 score of 96.34%. The RF model produces the best results among all the classifiers evaluated without feature interaction, with an accuracy of 90.21%, precision of 90.40%, recall of 90.86%, specificity of 90.91%, and F1 score of 90.63%. Our proposed 1D CNN model is 7% superior to the one without feature engineering when compared to the suggested framework. This illustrates how interaction-focu
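A minimal sketch of the kind of tabular pipeline this abstract describes is shown below: imputation and scaling, PCA retaining 95% of the variance, and a hyperparameter-tuned Random Forest. The synthetic data, column split, and parameter grid are placeholders assumed for illustration; the paper's feature-interaction step and 1D CNN are not reproduced here.

```python
# Sketch: preprocessing -> PCA(95% variance) -> hyperparameter-tuned RF.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
X_num = rng.normal(size=(300, 8))                       # stand-in numerical features
X_cat = rng.integers(0, 3, size=(300, 2))               # stand-in categorical features
X = np.hstack([X_num, X_cat])
y = (X_num[:, 0] + 0.5 * X_num[:, 1] > 0).astype(int)   # synthetic stand-in label

num_idx, cat_idx = list(range(8)), [8, 9]
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), num_idx),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), cat_idx),
], sparse_threshold=0.0)                                # force dense output for PCA

pipe = Pipeline([
    ("prep", preprocess),
    ("pca", PCA(n_components=0.95)),                    # keep components explaining 95% variance
    ("clf", RandomForestClassifier(random_state=0)),
])

search = GridSearchCV(pipe, {"clf__n_estimators": [100, 300],
                             "clf__max_depth": [None, 10]}, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```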
This study investigates the application of Learnable Memory Vision Transformers (LMViT) for detecting metal surface flaws, comparing their performance with traditional CNNs, specifically ResNet18 and ResNet50, as well as other transformer-based models including Token to Token ViT, ViT without memory, and Parallel ViT. Using a widely-used steel surface defect dataset, the research applies data augmentation and t-distributed stochastic neighbor embedding (t-SNE) to enhance feature extraction and visualization. These techniques mitigated overfitting, stabilized training, and improved generalization. The LMViT model achieved a test accuracy of 97.22%, significantly outperforming ResNet18 (88.89%) and ResNet50 (88.90%), as well as the Token to Token ViT (88.46%), ViT without memory (87.18%), and Parallel ViT (91.03%). Furthermore, LMViT exhibited superior training and validation performance, attaining a validation accuracy of 98.2% compared to 91.0% for ResNet18, 96.0% for ResNet50, and 89.12%, 87.51%, and 91.21% for Token to Token ViT, ViT without memory, and Parallel ViT, respectively. These findings highlight the LMViT's ability to capture long-range dependencies in images, an area where CNNs struggle due to their reliance on local receptive fields and hierarchical feature extraction. The additional transformer-based models also demonstrate improved performance in capturing complex features over CNNs, with LMViT excelling particularly at detecting subtle and complex defects, which is critical for maintaining product quality and operational efficiency in industrial settings. For instance, the LMViT model successfully identified fine scratches and minor surface irregularities that CNNs often miss. This study not only demonstrates LMViT's potential for real-world defect detection but also underscores the promise of other transformer-based architectures like Token to Token ViT, ViT without memory, and Parallel ViT in industrial scenarios where complex spatial relationships are critical. This research m
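To make the "learnable memory" idea concrete, the PyTorch sketch below concatenates a small set of trainable memory tokens with the patch tokens before self-attention, so the model can store dataset-level context alongside image content. The dimensions, the single attention block, the 6-class head, and all names are assumptions for illustration, not the LMViT paper's architecture.

```python
# Sketch: a ViT-style block with learnable memory tokens prepended to the sequence.
import torch
import torch.nn as nn

class TinyMemoryViTBlock(nn.Module):
    def __init__(self, dim=64, n_heads=4, n_memory=4, n_classes=6):
        super().__init__()
        self.memory = nn.Parameter(torch.zeros(1, n_memory, dim))  # learnable memory tokens
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))            # class token
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, patch_tokens):                  # (B, N_patches, dim)
        b = patch_tokens.size(0)
        tokens = torch.cat([self.cls.expand(b, -1, -1),
                            self.memory.expand(b, -1, -1),
                            patch_tokens], dim=1)     # prepend cls + memory tokens
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)         # residual + norm, as in a ViT block
        return self.head(tokens[:, 0])                # classify from the cls token

# Toy usage on random "patch embeddings" for a 6-class defect problem.
logits = TinyMemoryViTBlock()(torch.randn(2, 196, 64))
print(logits.shape)                                   # torch.Size([2, 6])
```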
Nowadays, online news websites are one of the quickest ways to get information. However, the credibility of news from these sources is sometimes questioned. One common problem with online news is the prevalence of clic...
Satellite image classification is the most significant remote sensing method for computerized analysis and pattern detection of satellite data. This method relies on the image's diversity structures and necessitat...
On July 18, 2021, the PKU-DAIR Lab (Data and Intelligence Research Lab at Peking University) openly released the source code of Hetu, a highly efficient and easy-to-use distributed deep learning (DL) framework. Hetu is the first distributed DL system developed by academic groups in Chinese universities, and it takes into account both high availability in industry and innovation in academia. Through independent research and development, Hetu is completely decoupled from existing DL systems and has unique characteristics. The public release of the Hetu system will help researchers and practitioners carry out frontier MLSys (machine learning system) research and promote innovation and industrial upgrading.