This paper provides an in-depth exploration of the cutting-edge developments in knowledge-based parametric modeling within the domain of mechanical engineering. It examines the latest advancements that leverage domain...
This paper presents a novel medical imaging framework, Efficient Parallel Deep Transfer SubNet+-based Explainable Model (EPDTNet+-EM), designed to improve the detection and classification of abnormalities in medical...
Background: Epilepsy is a neurological disorder that leads to seizures. This occurs due to excessive electrical discharge by the brain cells. An effective seizure prediction model can aid in improving the lifestyle of...
Communication became vital to human expansion. The human desire to communicate has continually given rise to new outlooks on ways of communicating [1], enabled by satellite communication. Satellite communications ...
Most of the search-based software remodularization (SBSR) approaches designed to address the software remodularization problem (SRP) utilize only structural-information-based coupling and cohesion quality criteria. In practice, however, other aspects of coupling and cohesion, such as lexical and change-history information, are also required when designing the modules of a software system. Hence, considering only limited aspects of software information in SBSR may generate a sub-optimal modularization solution; such a modularization can be good from the quality-metrics perspective but may not be acceptable to the developers. To produce a remodularization solution acceptable from both the quality-metrics and the developers' perspectives, this paper exploits more dimensions of software information to define the quality criteria as modularization objectives. Further, these objectives are simultaneously optimized using a tailored many-objective artificial bee colony (MaABC) algorithm to produce a remodularization solution. To assess the effectiveness of the proposed approach, we applied it to five software systems. The obtained remodularization solutions are evaluated with software quality metrics and the developers' view of remodularization. Results demonstrate that the proposed software remodularization is an effective approach for generating good-quality modularization solutions.
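The structural coupling and cohesion criteria that such approaches optimize can be sketched as edge counts over a dependency graph. The entities, edges, and module assignment below are illustrative toys, not the paper's subject systems or its full many-objective formulation:

```python
def coupling_cohesion(deps, assignment):
    """deps: list of (src, dst) dependency edges between code entities;
    assignment: dict mapping entity -> module id.
    Returns (coupling, cohesion): inter- vs intra-module edge counts."""
    coupling = cohesion = 0
    for src, dst in deps:
        if assignment[src] == assignment[dst]:
            cohesion += 1   # dependency stays inside one module
        else:
            coupling += 1   # dependency crosses a module boundary
    return coupling, cohesion

# Hypothetical 5-entity system and a candidate 3-module assignment
deps = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D"), ("D", "E")]
modules = {"A": 0, "B": 0, "C": 1, "D": 1, "E": 2}
print(coupling_cohesion(deps, modules))  # → (3, 2)
```

In a search-based setting, a metaheuristic such as the paper's MaABC would mutate `assignment` while minimizing coupling and maximizing cohesion alongside the lexical and change-history objectives.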
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by challenges in social interaction, communication difficulties, repetitive behaviors, and a range of strengths and differences in cognitive abilities. Early ASD diagnosis using machine learning and deep learning techniques is crucial for preventing its severity and long-term effects. Articles published in this area have only applied different machine learning algorithms, and a notable gap is the absence of in-depth analysis in terms of hyperparameter tuning and the type of dataset used. This study investigated predictive modeling for ASD traits by leveraging two distinct datasets: (i) a raw CSV dataset with tabular data and (ii) an image dataset of facial expressions. It aims to conduct an in-depth analysis of ASD trait prediction in adults and toddlers by performing hyperparameter optimization and interpreting the results through explainable AI. On the CSV dataset, a comprehensive exploration of machine learning and deep learning algorithms, including decision trees, naive Bayes, random forests, support vector machines (SVM), k-nearest neighbors (KNN), logistic regression, XGBoost, and an artificial neural network (ANN), was conducted. XGBoost emerged as the most effective machine learning algorithm, achieving an accuracy of 96.13%. The deep learning ANN model outperformed the traditional machine learning algorithms with an accuracy of 99%. Additionally, an ensemble model combining a decision tree, random forest, SVM, KNN, and logistic regression demonstrated superior performance, yielding an accuracy of 96.67%. With hyperparameter optimization on the CSV data, the XGBoost model exhibited a substantial accuracy increase, reaching 98%. For the image dataset, advanced deep learning models, such as ResNet50, VGG16, boosting, and bagging, were employed. The bagging model outperformed the others, achieving an impressive accuracy of 99%. Subsequent hyperparameter optimization was conducted...
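The ensemble described above combines the base classifiers' outputs; a common way to do this is majority voting over per-model predictions. A minimal stdlib-only sketch, with made-up binary labels rather than actual ASD results:

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of class labels per base model.
    Returns the per-sample majority label across models."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical binary predictions from three base classifiers on four samples
tree_preds = [1, 0, 1, 1]
svm_preds  = [1, 1, 1, 0]
knn_preds  = [0, 0, 1, 1]
print(majority_vote([tree_preds, svm_preds, knn_preds]))  # → [1, 0, 1, 1]
```

With an odd number of binary voters, ties cannot occur; for the five-model ensemble the paper describes, the same column-wise vote applies directly.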
Multi-exposure image fusion (MEF) involves combining images captured at different exposure levels to create a single, well-exposed fused image. MEF has a wide range of applications, including low light, low contrast, ...
In telemedicine applications, it is crucial to ensure the authentication, confidentiality, and privacy of medical data due to its sensitive nature and the importance of the patient information it contains. Communication through open networks is insecure and has many vulnerabilities, making it susceptible to unauthorized access and misuse. Encryption models are used to secure medical data from unauthorized access. In this work, we propose a bit-level encryption model with three phases: preprocessing, confusion, and diffusion. The model is designed for different types of medical data, including patient information, clinical data, medical signals, and images of different modalities, and is effectively implemented for grayscale and color images with varying aspect ratios. Preprocessing is applied based on the type of medical data. A random permutation is used to scramble the data values to remove correlation, and multilevel chaotic maps are fused with the cyclic redundancy check (CRC) method. A circular shift is used in the diffusion phase to increase randomness and security, providing protection against potential attacks. The CRC method is further used at the receiver side for error detection. The performance of the proposed encryption model is demonstrated in terms of histogram analysis, information entropy, correlation analysis, signal-to-noise ratio, peak signal-to-noise ratio, number of pixels changing rate (NPCR), and unified average changing intensity (UACI). The proposed bit-level encryption model achieves information entropy values ranging from 7.9669 to 8.000, close to the desired value of 8. Correlation coefficients of the encrypted data approach zero or are negative, indicating minimal correlation in the encrypted data. Resistance against differential attacks is demonstrated by NPCR and UACI values exceeding 0.9960 and 0.3340, respectively. The key space of the proposed model is 10^96, which is substantially more...
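A toy sketch of the confusion/diffusion/CRC structure described above, assuming a seeded pseudo-random permutation in place of the paper's multilevel chaotic maps; the function names, key, and sample record are all hypothetical:

```python
import random
import zlib

def encrypt(data: bytes, key: int):
    rng = random.Random(key)
    # bit-level representation, LSB first within each byte
    bits = [(b >> i) & 1 for b in data for i in range(8)]
    # confusion: key-driven random permutation of the bit sequence
    perm = list(range(len(bits)))
    rng.shuffle(perm)
    scrambled = [bits[p] for p in perm]
    # diffusion: circular shift by a key-derived offset
    shift = rng.randrange(1, len(scrambled))
    shifted = scrambled[shift:] + scrambled[:shift]
    cipher = bytes(sum(shifted[i * 8 + j] << j for j in range(8))
                   for i in range(len(data)))
    return cipher, zlib.crc32(data)  # CRC travels along for receiver-side checks

def decrypt(cipher: bytes, key: int) -> bytes:
    rng = random.Random(key)  # replay the same key-driven sequence
    bits = [(b >> i) & 1 for b in cipher for i in range(8)]
    perm = list(range(len(bits)))
    rng.shuffle(perm)
    shift = rng.randrange(1, len(bits))
    unshifted = bits[-shift:] + bits[:-shift]   # undo the circular shift
    restored = [0] * len(bits)
    for p, bit in zip(perm, unshifted):         # undo the permutation
        restored[p] = bit
    return bytes(sum(restored[i * 8 + j] << j for j in range(8))
                 for i in range(len(cipher)))

msg = b"patient-042: BP 120/80"
ct, crc = encrypt(msg, key=0xC0FFEE)
assert decrypt(ct, key=0xC0FFEE) == msg  # lossless recovery with the correct key
```

The receiver recomputes `zlib.crc32` over the decrypted bytes and compares it with the transmitted `crc` to detect transmission errors, mirroring the error-detection role CRC plays in the proposed model.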
Fruit safety is a critical component of the global economy, particularly within the agricultural sector. There has been a recent surge in the incidence of diseases affecting fruits, leading to economic setbacks in agr...
Glaucoma is currently one of the most significant causes of permanent blindness. Fundus imaging is the most popular glaucoma screening method because of the favorable trade-offs it offers in portability, size, and cost. In recent years, convolutional neural networks (CNNs) have revolutionized computer vision. Convolution is a "local" operation, applied only to a small region of an image. Vision Transformers (ViT) use self-attention, a "global" operation that collects information from the entire image; as a result, a ViT can successfully gather distant semantic relevance from an image. This study examined several optimizers, including Adamax, SGD, RMSprop, Adadelta, Adafactor, Nadam, and Adagrad. With 1750 healthy and glaucoma images in the IEEE fundus image dataset and 4800 healthy and glaucoma images in the LAG fundus image dataset, we trained and tested the ViT model on these datasets. The datasets also underwent image scaling, auto-rotation, and auto-contrast adjustment via adaptive equalization during preprocessing. The results demonstrated that preprocessing the datasets and training with various optimizers improved accuracy and other performance metrics. The Nadam optimizer improved accuracy up to 97.8% with adaptive-equalization preprocessing of the IEEE dataset and up to 92% with adaptive-equalization preprocessing of the LAG dataset, both followed by auto-rotation and image-resizing steps. In addition to integrating our Vision Transformer model with the shift tokenization model, we also combined ViT with a hybrid model consisting of six different models (SVM, Gaussian NB, Bernoulli NB, decision tree, KNN, and random forest), based on which optimizer was most successful for each dataset. Empirical results show that the SVM model worked well, improving accuracy up to 93% with precision of up to 94% in the adaptive-equalization preprocessing...
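The "global" mixing that self-attention performs, in contrast to a convolution's small local neighborhood, can be shown with a minimal single-head attention over token vectors. This sketch uses identity query/key/value projections and toy inputs; it is not the actual ViT architecture used in the study:

```python
import math

def self_attention(x):
    """x: list of n token vectors of dimension d.
    Each output token is a softmax-weighted average of *all* input
    tokens (scaled dot-product attention, identity projections)."""
    n, d = len(x), len(x[0])
    # pairwise similarity scores between every token and every other token
    scores = [[sum(q * k for q, k in zip(x[i], x[j])) / math.sqrt(d)
               for j in range(n)] for i in range(n)]
    out = []
    for row in scores:
        m = max(row)                              # for numerical stability
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        w = [e / z for e in exps]                 # softmax attention weights
        out.append([sum(w[j] * x[j][k] for j in range(n)) for k in range(d)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(tokens)
# every output token is a convex combination of all three input tokens
```

Because the weights in each row are non-negative and sum to one, every output coordinate stays within the range of the corresponding input coordinates, which is one way to see that information from the whole image (all tokens) reaches each position in a single layer.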