Android malware poses a significant challenge for mobile platforms. To evade detection, contemporary malware variants use API substitution or obfuscation techniques to hide malicious activities and mask their shallow ...
Finding appropriate information on the web is a tedious task and thus demands an intelligent mechanism to assist users. Students suffer most from information overload on the internet, as...
Yoga is a healthy exercise that focuses on physical, psychological, and divine connections. However, engaging in yoga while adopting poor postures might result in health issues like muscle discomfort and sprains. Disc...
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy. Existing research emphasizes data security and user privacy concerns within smart grids; however, existing methods struggle with efficiency and security when processing large-scale data. Achieving efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent problem. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data types. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
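The abstract does not specify which homomorphic scheme is used, so as an illustration only, here is a toy additively homomorphic Paillier sketch in Python showing why an aggregator can sum meter readings without decrypting any individual one. The primes, the sample readings, and the function names are hypothetical, and real deployments would use moduli of 2048 bits or more:

```python
import math
import random

def keygen(p=293, q=433):
    """Toy Paillier key generation; p and q are illustrative small primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # mu is the modular inverse of L(g^lam mod n^2), with L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:          # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pk, sk = keygen()
readings = [17, 25, 8]                  # hypothetical per-meter readings
ciphers = [encrypt(pk, m) for m in readings]
agg = 1
for c in ciphers:
    agg = (agg * c) % (pk[0] ** 2)      # ciphertext product = plaintext sum
print(decrypt(pk, sk, agg))             # → 50
```

The aggregator only ever multiplies ciphertexts; only the key holder can recover the total.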
This research focuses on improving the Harris Hawks Optimization algorithm (HHO) by tackling several of its shortcomings, including insufficient population diversity, an imbalance between exploration and exploitation, and a lack of thorough exploitation strategies. To address these shortcomings, it proposes enhancements from three distinct perspectives: an initialization technique for populations grounded in opposition-based learning, a strategy for updating escape energy factors to improve the equilibrium between exploitation and exploration, and a comprehensive exploitation approach that utilizes variable neighborhood search along with mutation operators. The effectiveness of the Improved Harris Hawks Optimization algorithm (IHHO) is assessed by comparing it to five leading algorithms across 23 benchmark test functions. The findings indicate that the IHHO surpasses several contemporary algorithms in its problem-solving ability. Furthermore, this paper introduces a feature selection method leveraging the IHHO algorithm (IHHO-FS) to address challenges such as low efficiency in feature selection and high computational costs (time to find the optimal feature combination and model response time) associated with high-dimensional data. Comparative analyses between IHHO-FS and six other advanced feature selection methods are conducted across eight datasets. The results demonstrate that IHHO-FS significantly reduces the computational costs associated with classification models by lowering data dimensionality, while also enhancing the efficiency of feature selection. Overall, IHHO-FS shows strong competitiveness relative to numerous algorithms.
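Of the three enhancements, the opposition-based population initialization is simple enough to sketch: sample a random population, form each candidate's opposite lb + ub - x in every dimension, and keep the fitter half of the combined pool. The function name obl_init and the sphere test objective below are illustrative stand-ins, not the paper's code:

```python
import random

def obl_init(pop_size, dim, lb, ub, fitness):
    """Opposition-based learning initialization: draw a random population,
    add each individual's opposite, keep the pop_size fittest candidates."""
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    opposites = [[lb + ub - x for x in ind] for ind in pop]
    combined = sorted(pop + opposites, key=fitness)   # minimization
    return combined[:pop_size]

# toy objective: sphere function, minimum at the origin
sphere = lambda ind: sum(x * x for x in ind)
population = obl_init(pop_size=10, dim=5, lb=-10.0, ub=10.0, fitness=sphere)
```

Compared with purely random initialization, this doubles the candidate pool at no extra sampling cost and biases the starting population toward better regions.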
The extraction of atomic-level material features from electron microscope images is crucial for studying structure-property relationships and discovering new materials. However, traditional electron microscope analyses rely on time-consuming and complex human operations; thus, they are only applicable to images with a small number of atoms. In addition, the analysis results vary with individual observers' biases. Although automated methods have been proposed previously, many of them lack sufficient labeled data or impose detection conditions that apply only to the target material. Thus, in this study, we developed AtomGAN, a robust, unsupervised learning method that automatically segments defects in classical 2D material systems and MoS2/WS2 heterostructures. To solve the data scarcity problem, the proposed model is trained on unpaired simulated data that contain point and line defects for MoS2/WS2. AtomGAN was evaluated on both simulated and real electron microscope images. The results demonstrate that point defects and line defects are segmented cleanly in the resulting figures, with a measurement precision of 96.9%. In addition, the cycled structure of AtomGAN can quickly generate a large number of simulated electron microscope images.
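The "cycled structure" trained on unpaired data suggests a CycleGAN-style setup. Assuming that reading (the abstract does not spell out the loss), the cycle-consistency term rewards generators whose round trip reproduces the input. The generators G and F below are toy invertible lambdas standing in for the two networks, and the weight lam is illustrative:

```python
def l1(a, b):
    """Mean absolute difference between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_loss(G, F, x, y, lam=10.0):
    """CycleGAN-style cycle-consistency term: F(G(x)) should recover x
    and G(F(y)) should recover y; lam weights the term."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

# toy stand-ins for the two generators (perfectly invertible here)
G = lambda img: [v + 1.0 for v in img]    # simulated -> "real" style
F = lambda img: [v - 1.0 for v in img]    # "real" -> simulated style
sim, real = [0.25, 0.5, 0.75], [0.125, 0.375, 0.625]
print(cycle_loss(G, F, sim, real))        # → 0.0
```

Because the toy generators invert each other exactly, the round trip is lossless and the term vanishes; imperfect generators would be penalized in proportion to the reconstruction error.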
Healthcare systems nowadays depend on IoT sensors for sending data over the internet as a common practice. Encryption of medical images is very important to secure patient information, but encrypting these images consumes a lot of time on edge computing; therefore, the use of an auto-encoder for compression before encoding will solve such a problem. In this paper, we use an auto-encoder to compress a medical image before encryption, and the encrypted output (vector) is sent over the network. On the other hand, a decoder is used to reproduce the original image after the vector is received and decrypted. Two convolutional neural networks were built to evaluate our proposed approach: the first one is the auto-encoder, which is utilized to compress and encrypt the images, and the other assesses the classification accuracy of the image after decryption and decoding. Different hyperparameters of the encoder were tested, followed by classification of the image to verify that no critical information was lost, to test the encryption and encoding processes. In this approach, sixteen hyperparameter permutations are utilized, but this research discusses three main cases in detail. The first case shows that the combination of Mean Square Logarithmic Error (MSLE), ADAgrad, two layers for the auto-encoder, and ReLU had the best auto-encoder results, with a Mean Absolute Error (MAE) = 0.221 after 50 epochs and 75% classification accuracy, the best result among the classification cases. The second case shows the reflection of the auto-encoder results on the classification results: the combination of Mean Square Error (MSE), RMSprop, three layers for the auto-encoder, and ReLU had the best classification accuracy of 65%, while the auto-encoder gives MAE = 0.31 after 50 epochs. The third case is the worst: the combination of hinge loss, RMSprop, three layers for the auto-encoder, and ReLU, providing an accuracy of 20% and MAE = 0.485.
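The three cases compare training losses (MSLE, MSE, hinge) and report MAE as the reconstruction metric. For reference, minimal plain-Python definitions of the three regression metrics named in the abstract (the sample pixel values below are made up):

```python
import math

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean Square Error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def msle(y_true, y_pred):
    """Mean Square Logarithmic Error; assumes non-negative inputs."""
    return sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

# hypothetical normalized pixel intensities
pixels_true = [0.2, 0.5, 0.9]
pixels_pred = [0.25, 0.45, 0.8]
print(round(mae(pixels_true, pixels_pred), 3))   # → 0.067
```

MSLE's log1p transform penalizes relative rather than absolute error, which is one reason loss choice interacts with the reconstruction quality reported above.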
Emotion recognition from biological brain signals requires reliable signal processing and feature extraction techniques. The impact of emotions on interpretations, conversations, and decision-making has made automatic emotion recognition and analysis a significant topic in the field of psychiatric disease treatment. A key problem is the limited spatial resolution of EEG recorders: existing algorithms use predetermined sets of electroencephalography (EEG) channels and combine several methods to extract significant data. The major intention of this study was to enhance the efficiency of recognizing emotions from brain signals through an experimental, adaptive channel selection approach, which recognizes that brain activity shows distinctive patterns that vary from one individual to another and from one emotional state to another. To resolve this emotion-mapping problem, we apply a Bernoulli-Laplace-based Bayesian model to map each emotion from the scalp sensors to brain sources. The standardized low-resolution brain electromagnetic tomography (sLORETA) technique is employed to estimate the source signals. We employed a progressive graph convolutional neural network (PG-CNN), with the sources identified by the proposed localization model and the emotional EEG serving as the main graph nodes. In this framework, a PG-CNN adjacency matrix expresses the connectivity between the EEG source signals. An EEG dataset of parents of a child with ASD (autism spectrum disorder) was used to investigate the parenting styles of the child's mother and father; we identify the character of parental behaviors when regulating the child and supervising his or her daily activities. From these recorded datasets, the proposed method identifies five emotions from brain source modeling, which significantly improves the accuracy...
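The PG-CNN's adjacency matrix encodes connectivity between source signals. The abstract does not give its exact construction, so as an assumption, one common way such a matrix is built for EEG graphs is thresholded absolute Pearson correlation, sketched here on made-up source traces:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def adjacency(signals, threshold=0.5):
    """Symmetric adjacency matrix: |Pearson r| if above threshold, else 0."""
    n = len(signals)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = abs(pearson(signals[i], signals[j]))
            if r >= threshold:
                A[i][j] = A[j][i] = r
    return A

sources = [
    [0.1, 0.4, 0.3, 0.9],   # hypothetical source trace 1
    [0.2, 0.8, 0.6, 1.8],   # scaled copy of trace 1: correlation 1
    [0.9, 0.1, 0.8, 0.2],   # a third hypothetical trace
]
A = adjacency(sources)
```

Nodes (sources) connected by strong correlations keep weighted edges; weak correlations are pruned, which is what lets a graph convolution aggregate information only along plausible functional links.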
The accurate and early detection of abnormalities in fundus images is crucial for the timely diagnosis and treatment of various eye diseases, such as glaucoma and diabetic retinopathy. Detecting abnormalities in fundus images with traditional methods is often challenging due to high computational demands, scalability issues, and the requirement of large labeled datasets for effective training. To address these limitations, a new method called triplet-based orchard search (Triplet-OS) is proposed in this paper. In this study, GoogleNet (Inception) is utilized for feature extraction from fundus images, and a residual network is employed to detect abnormalities. Triplet-OS uses a fundus photography dataset: a medical imaging technique that captures detailed images of the interior surface of the eye, known as the fundus, which includes the retina, optic disc, macula, and blood vessels. To enhance the performance of the Triplet-OS method, the orchard optimization algorithm is implemented with an initial search strategy for hyperparameter optimization. The performance of Triplet-OS is evaluated with metrics such as F1-score, specificity, AUC-ROC, recall, precision, and accuracy, and is compared with existing methods. Few-shot learning refers to a process where models can learn from just a small number of examples; it has been applied to reduce the dependency on deep learning [1]. The goal is for machines to become as intelligent as humans. Today, numerous computing devices, extensive datasets, and advanced methods such as CNNs and LSTMs have been developed. AI has achieved human-like performance and, in many fields, surpasses human abilities. AI has become part of our daily lives, but it generally relies on large-scale data. In contrast, humans can often apply past knowledge to quickly learn new tasks [2]. For example, if given...
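The "triplet-based" part of Triplet-OS presumably rests on a triplet objective over learned embeddings. The abstract does not define it, so as a generic sketch, here is the standard triplet hinge loss on squared distances, with toy 2-D embeddings and an illustrative margin:

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: pull the positive closer to the anchor than the
    negative by at least `margin`, otherwise incur no loss."""
    return max(sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin, 0.0)

# toy 2-D embeddings (hypothetical)
a, p, n = [0.0, 0.0], [0.1, 0.0], [1.0, 0.0]
print(triplet_loss(a, p, n))   # → 0.0 (negative is already far enough)
```

Trained this way, images of the same abnormality cluster in embedding space while different classes are pushed apart, which is what makes the downstream detector work with few labeled examples.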
Point cloud completion concentrates on completing geometric and topological shapes from incomplete 3D shapes. Nevertheless, the unordered nature of point clouds hampers the generation of high-quality point clouds unless the structured and topological information of the complete shapes is predicted, and it can introduce noisy points. To effectively address the challenges posed by missing topology and noisy points, we introduce SPOFormer, a novel topology-aware model that utilizes surface-projection optimization in a progressive-growth manner. SPOFormer completes the missing topology in three distinct steps: (1) Missing Keypoint Prediction: a topology-aware transformer auto-encoder is integrated for missing keypoint prediction. (2) Skeleton Generation: the skeleton generation module produces a new type of representation named skeletons with the aid of the keypoints predicted by the topology-aware transformer auto-encoder and the partial input. (3) Progressive Growth: we design a progressive growth module to predict the final output under Multi-scale Supervision and Surface-projection Optimization. Surface-projection Optimization is devised for point cloud completion for the first time, aiming to force the generated points to align with the underlying object surface. Experimentally, the SPOFormer model achieves an impressive Chamfer Distance-$\ell _{1}$ (CD) score of 8.11 on the PCN dataset. Furthermore, it attains average CD-$\ell _{2}$ scores of 1.13, 1.14, and 1.70 on the ShapeNet-55, ShapeNet-34, and ShapeNet-Unseen21 datasets, respectively. Additionally, the model achieves a Maximum Mean Discrepancy (MMD) of 0.523 on the real-world KITTI dataset. These qualitative and quantitative results surpass previous approaches by a significant margin, firmly establishing new state-of-the-art performance across various benchmark datasets. Our code is available at https://***/kiddoray/SPOFormer
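Chamfer Distance, the metric reported above, averages nearest-neighbor distances in both directions between the predicted and ground-truth clouds. A minimal sketch under one common convention (papers differ on summing versus averaging the two directional terms, and on the ℓ1/ℓ2 variants; the toy clouds are made up):

```python
import math

def chamfer_l1(P, Q):
    """Symmetric Chamfer Distance: mean nearest-neighbor Euclidean
    distance from P to Q and from Q to P, averaged."""
    def nn(a, cloud):
        return min(math.dist(a, b) for b in cloud)
    return (sum(nn(p, Q) for p in P) / len(P) +
            sum(nn(q, P) for q in Q) / len(Q)) / 2

# toy clouds: four cube corners and the same corners shifted by 0.1 in x
cube = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
shifted = [(x + 0.1, y, z) for x, y, z in cube]
print(round(chamfer_l1(cube, shifted), 3))   # → 0.1
```

Because the metric is permutation-invariant, it is well suited to unordered point sets; the brute-force nearest-neighbor search here is O(|P||Q|), and real pipelines use k-d trees or GPU batching instead.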