This research work focuses on food recognition, specifically the identification of ingredients from food images. The developed model includes two stages, namely 1) feature extraction and 2) classification. Initia...
The increasing use of cloud-based image storage and retrieval systems has made ensuring security and efficiency crucial. The security enhancement of image retrieval and image archival in cloud computing has received considerable attention in transmitting data and ensuring data confidentiality among cloud servers and users. Various traditional secure image retrieval techniques have been developed in recent years, but they do not scale to large environments. This paper introduces a new approach called Triple Network-based Adaptive Grey Wolf (TN-AGW) to address these challenges. The TN-AGW framework combines the adaptability of the Grey Wolf Optimization (GWO) algorithm with the resilience of a Triple Network (TN) to enhance image retrieval on cloud servers while maintaining robust security measures. By using adaptive mechanisms, TN-AGW dynamically adjusts its parameters to improve the efficiency of image retrieval, reducing latency and resource utilization. The retrieval itself is performed by the triple network, whose parameters are optimized by Adaptive Grey Wolf (AGW) optimization. Imputation of missing values, Min–Max normalization, and Z-score standardization are used to preprocess the images, and feature extraction is carried out by a modified convolutional neural network (MCNN). Input images are drawn from the Landsat 8 dataset and the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset. Performance is evaluated in terms of accuracy, precision, recall, specificity, F1-score, and false alarm rate (FAR): accuracy reaches 98.1%, precision 97.2%, recall 96.1%, and specificity 91.72%. The TN-AGW approach also converges faster, achieving greater efficiency in image retrieval than other existing methods.
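The abstract does not spell out the adaptive variant, but the baseline Grey Wolf Optimization loop it extends is standard. A minimal sketch follows, with a sphere function standing in for the unpublished retrieval loss; the population size, iteration count, and bounds are illustrative assumptions, not the authors' settings:

```python
# Hedged sketch: canonical Grey Wolf Optimization, minimizing a stand-in
# objective. The paper's adaptive TN-AGW variant is not specified in the
# abstract, so this shows only the baseline GWO mechanics it builds on.
import numpy as np

def gwo_minimize(objective, dim, n_wolves=20, n_iters=100, bounds=(-1.0, 1.0)):
    lo, hi = bounds
    X = np.random.uniform(lo, hi, (n_wolves, dim))   # wolf positions
    for t in range(n_iters):
        fitness = np.apply_along_axis(objective, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / n_iters)                  # exploration factor decays 2 -> 0
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])        # distance to this leader
                X_new += leader - A * D              # pull toward the leader
            X[i] = np.clip(X_new / 3.0, lo, hi)      # average of the three pulls
    fitness = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(fitness)]

# Example: a sphere function stands in for the (unpublished) retrieval loss.
best = gwo_minimize(lambda x: np.sum(x ** 2), dim=8)
```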
App reviews are crucial in influencing user decisions and providing essential feedback for developers to improve their apps. Thus, the analysis of these reviews is vital for efficient review management. While traditional machine learning (ML) models rely on basic word-based feature extraction, deep learning (DL) methods, enhanced with advanced word embeddings, have shown superior performance. This research introduces a novel aspect-based sentiment analysis (ABSA) framework to classify app reviews based on key non-functional requirements, focusing on usability factors: effectiveness, efficiency, and satisfaction. We propose a hybrid DL model, combining BERT (Bidirectional Encoder Representations from Transformers) with BiLSTM (Bidirectional Long Short-Term Memory) and CNN (Convolutional Neural Network) layers, to enhance classification accuracy. Comparative analysis against state-of-the-art models demonstrates that our BERT-BiLSTM-CNN model achieves exceptional performance, with precision, recall, F1-score, and accuracy of 96%, 87%, 91%, and 94%, respectively. Key contributions of this work include a refined ABSA-based relabeling framework, the development of a high-performance classifier, and the comprehensive relabeling of the Instagram App Reviews dataset. These advancements provide valuable insights for software developers to enhance usability and drive user-centric application development.
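A minimal sketch of how the three named components typically stack; the hidden sizes, kernel width, checkpoint name, and three-way aspect output are assumptions, not the authors' configuration:

```python
# Hedged sketch of a BERT -> BiLSTM -> CNN classifier, per the abstract's
# description. Assumes bert-base-uncased; all dimensions are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel

class BertBiLstmCnn(nn.Module):
    def __init__(self, n_classes=3, lstm_hidden=128, n_filters=64, kernel=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * lstm_hidden, n_filters, kernel, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h, _ = self.bilstm(h)                 # contextual sequence features
        h = self.conv(h.transpose(1, 2))      # local n-gram patterns
        h = torch.relu(h).max(dim=2).values   # global max pooling over time
        return self.fc(h)                     # per-aspect logits
```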
Matrix minimization techniques that employ the nuclear norm have gained recognition for their applicability in tasks like image inpainting, clustering, classification, and reconstruction. However, they come with inherent biases and computational burdens, especially when used to relax the rank function, making them less effective and efficient in real-world scenarios. To address these challenges, our research focuses on generalized nonconvex rank regularization problems in robust matrix completion, low-rank representation, and robust matrix regression. We introduce innovative approaches for effective and efficient low-rank matrix learning, grounded in generalized nonconvex rank relaxations inspired by various substitutes for the ℓ0-norm relaxed functions. These relaxations allow us to more accurately capture low-rank structures. Our optimization strategy employs a nonconvex and multi-variable alternating direction method of multipliers, backed by rigorous theoretical analysis of complexity and convergence. The algorithm iteratively updates blocks of variables, ensuring efficient convergence. Additionally, we incorporate the randomized singular value decomposition technique and/or other acceleration strategies to enhance the computational efficiency of our approach, particularly for large-scale constrained minimization problems. In conclusion, our experimental results across a variety of image vision-related application tasks unequivocally demonstrate the superiority of our proposed methodologies in terms of both efficacy and efficiency when compared to most other related learning methods.
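The abstract names generalized nonconvex rank relaxations combined with randomized SVD; a common building block consistent with that description is a reweighted singular-value thresholding step. A sketch, assuming a log-sum-style weight 1/(σ+ε); the paper's exact surrogate and block updates are not given in the abstract:

```python
# Hedged sketch: one reweighted singular-value thresholding step, the core
# proximal update in ADMM-style low-rank solvers. The weight 1/(s + eps) and
# target rank are illustrative choices, not the paper's specification.
import numpy as np
from sklearn.utils.extmath import randomized_svd

def nonconvex_svt(M, lam=1.0, eps=1e-3, rank=20):
    # Randomized SVD keeps the step cheap on large matrices.
    U, s, Vt = randomized_svd(M, n_components=rank)
    # Reweighted shrinkage: small singular values are penalized more heavily
    # than under the (uniform) nuclear-norm soft threshold.
    shrunk = np.maximum(s - lam / (s + eps), 0.0)
    return (U * shrunk) @ Vt
```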
Drug-target interaction (DTI) prediction plays an important role in the process of drug discovery. Most computational methods treat it as a binary prediction problem, determining whether there are connections between drugs and targets while ignoring relational type information. Considering that the positive or negative effects of DTIs will facilitate the study of the comprehensive mechanisms of multiple drugs on a common target, in this work we model DTIs on signed heterogeneous networks, by categorizing the interaction patterns of DTIs and additionally extracting interactions within drug pairs and target protein pairs. We propose signed heterogeneous graph neural networks (SHGNNs) and further put forward an end-to-end framework for signed DTI prediction, called SHGNN-DTI, which not only adapts to signed bipartite networks but can also naturally incorporate auxiliary information from drug-drug interactions (DDIs) and protein-protein interactions (PPIs). For the framework, we solve the message passing and aggregation problem on signed DTI networks and consider different training modes on whole networks consisting of DTIs, DDIs, and PPIs. Experiments are conducted on two datasets extracted from DrugBank and related databases, under different settings of initial inputs, embedding dimensions, and training modes. The prediction results show excellent performance in terms of metric indicators, and feasibility is further verified by a case study with two drugs on breast cancer.
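A minimal sketch of signed message passing in the spirit described: positive and negative edges are aggregated through separate transforms and then combined. The additive combination rule, homogeneous node view, and dimensions are assumptions, not the authors' layer:

```python
# Hedged sketch: one signed message-passing layer. Positive edges attract,
# negative edges repel; the real SHGNN also handles drug/protein node types.
import torch
import torch.nn as nn

class SignedLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_pos = nn.Linear(dim, dim)   # transform for positive-edge messages
        self.w_neg = nn.Linear(dim, dim)   # transform for negative-edge messages
        self.w_self = nn.Linear(dim, dim)  # self-loop transform

    def forward(self, x, adj_pos, adj_neg):
        # adj_pos / adj_neg: row-normalized adjacency over each edge sign.
        m_pos = adj_pos @ self.w_pos(x)
        m_neg = adj_neg @ self.w_neg(x)
        return torch.relu(self.w_self(x) + m_pos - m_neg)
```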
Thyroid nodules, a common disorder in the endocrine system, require accurate segmentation in ultrasound images for effective diagnosis and treatment. However, achieving precise segmentation remains a challenge due to various factors, including scattering noise, low contrast, and limited resolution in ultrasound imaging. Although existing segmentation models have made progress, they still suffer from several limitations, such as high error rates, low generalizability, overfitting, and limited feature learning capability. To address these challenges, this paper proposes a Multi-level Relation Transformer-based U-Net (MLRT-UNet) to improve thyroid nodule segmentation. The MLRT-UNet leverages a novel Relation Transformer, which processes images at multiple scales, overcoming the limitations of traditional encoding methods. This transformer integrates both local and global features effectively through self-attention and cross-attention units, capturing intricate relationships within the image. The approach also introduces a Co-operative Transformer Fusion (CTF) module to combine multi-scale features from different encoding layers, enhancing the model's ability to capture complex patterns. Additionally, the Relation Transformer block enhances long-distance dependencies during the decoding process, improving segmentation accuracy. Experimental results show that the MLRT-UNet achieves high segmentation accuracy, reaching 98.2% on the Digital Database of Thyroid Images (DDTI) dataset, 97.8% on the Thyroid Nodule 3493 (TG3K) dataset, and 98.2% on the Thyroid Nodule 3K (TN3K) dataset. These findings demonstrate that the proposed method significantly enhances the accuracy of thyroid nodule segmentation, addressing the limitations of existing models.
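A sketch of a self-attention plus cross-attention unit of the kind the abstract attributes to the Relation Transformer, fusing a local feature stream with a global one; the head count, dimensions, and residual layout are assumptions:

```python
# Hedged sketch: self-attention refines the local stream, cross-attention
# injects global context. The published Relation Transformer may differ.
import torch
import torch.nn as nn

class RelationUnit(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, local_feats, global_feats):
        # Relations within the local scale.
        h, _ = self.self_attn(local_feats, local_feats, local_feats)
        h = self.norm1(local_feats + h)
        # Global context attended from the other stream.
        g, _ = self.cross_attn(h, global_feats, global_feats)
        return self.norm2(h + g)
```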
GPT is widely recognized as one of the most versatile and powerful large language models, excelling across diverse domains. However, its significant computational demands often render it economically unfeasible for in...
In the realm of deep learning, Generative Adversarial Networks (GANs) have emerged as a topic of significant interest for their potential to enhance model performance and enable effective data augmentation. This paper...
This study proposes a malicious code detection model, DTL-MD, based on deep transfer learning, which aims to improve on the detection accuracy of existing methods for complex malicious code and under data scarcity. In the feature...
The development of the Internet of Things (IoT) technology is leading to a new era of smart applications such as smart transportation, buildings, and smart homes. Together, these applications act as the building blocks of IoT-enabled smart cities. The high volume and high velocity of data generated by various smart city applications are sent to flexible and efficient cloud computing resources for processing. However, there is high computation latency due to the presence of a remote cloud server. Edge computing, which brings the computation close to the data source, is introduced to overcome this issue. In an IoT-enabled smart city environment, one of the main concerns is to consume the least amount of energy while executing tasks that satisfy the delay constraint. Efficient resource allocation at the edge helps to address this concern. In this paper, an energy and delay minimization problem in a smart city environment is formulated as a bi-objective edge resource allocation problem. First, we presented a three-layer network architecture for IoT-enabled smart cities. Then, we designed a learning automata-based edge resource allocation approach considering the three-layer network architecture to solve the said bi-objective minimization problem. Learning Automata (LA) is a reinforcement-based adaptive decision-maker that helps to find the best task and edge resource mapping. An extensive set of simulations is performed to demonstrate the applicability and effectiveness of the LA-based approach in the IoT-enabled smart city environment.
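A minimal sketch of a linear reward-penalty learning automaton choosing among candidate edge resources, the reinforcement scheme the abstract builds on; the reward signal here is simulated, standing in for measured energy and delay feedback, and the learning rates are illustrative:

```python
# Hedged sketch: L_RP learning automaton over action probabilities.
import numpy as np

def la_update(p, i, reward, a=0.1, b=0.05):
    """One linear reward-penalty update for chosen action i."""
    p = p.copy()
    n = len(p)
    if reward:                      # reinforce the chosen action
        p = (1 - a) * p
        p[i] += a
    else:                           # redistribute probability to the others
        old = p[i]
        p = b / (n - 1) + (1 - b) * p
        p[i] = (1 - b) * old
    return p

# Toy environment: resource 2 satisfies the energy/delay objective most often.
p = np.full(4, 0.25)                # four candidate edge resources
for _ in range(200):
    i = np.random.choice(len(p), p=p)
    reward = (np.random.rand() < 0.8) if i == 2 else (np.random.rand() < 0.3)
    p = la_update(p, i, reward)     # probability mass drifts toward resource 2
```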