In high-risk industrial environments like nuclear power plants, precise defect identification and localization are essential for maintaining production stability and safety. However, the complexity of such a harsh environment leads to significant variations in the shape and size of the defects. To address this challenge, we propose the multivariate time series segmentation network (MSSN), which adopts a multiscale convolutional network with multi-stage and depthwise separable convolutions for efficient feature extraction through variable-length templates. To tackle the classification difficulty caused by structural signal variance, MSSN employs logarithmic normalization to adjust instance distributions. Furthermore, it integrates classification with smoothing loss functions to accurately identify defect segments amid similar structural and defect signal subsequences. Evaluated on both the Mackey-Glass dataset and an industrial dataset, our algorithm achieves over 95% localization on the synthetic dataset, demonstrating its capture capability. On a nuclear plant's heat transfer tube dataset, it captures 90% of defect instances with a 75% middle localization F1 score.
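As a rough illustration of the multiscale, depthwise-separable convolution and logarithmic normalization described in this abstract, the following is a minimal PyTorch sketch. The module name, kernel sizes, and the sign-preserving form of the normalization are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch in the spirit of MSSN's feature extractor; names are illustrative.
import torch
import torch.nn as nn

class MultiScaleDSBlock(nn.Module):
    """Extracts features with several variable-length (kernel-size) templates."""
    def __init__(self, in_ch: int, out_ch: int, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in kernel_sizes:
            self.branches.append(nn.Sequential(
                # depthwise conv: one filter per input channel
                nn.Conv1d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),
                # pointwise conv: mixes channels
                nn.Conv1d(in_ch, out_ch, 1),
                nn.ReLU(),
            ))

    def forward(self, x):            # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

def log_normalize(x, eps=1e-6):
    """Sign-preserving logarithmic normalization to compress instance-scale variance (assumed form)."""
    return torch.sign(x) * torch.log1p(x.abs() + eps)
```

A usage sketch: `MultiScaleDSBlock(4, 8)(log_normalize(torch.randn(2, 4, 128)))` yields a `(2, 24, 128)` feature map, one 8-channel branch per template length.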
Federated recommender systems (FedRecs) have garnered increasing attention recently, thanks to their privacy-preserving benefits. However, the decentralized and open characteristics of current FedRecs present at least two challenges. First, the performance of FedRecs is compromised due to highly sparse on-device data for each client. Second, the system's robustness is undermined by its vulnerability to model poisoning attacks launched by malicious users. In this paper, we introduce a novel contrastive learning framework designed to fully leverage the client's sparse data through embedding augmentation, referred to as CL4FedRec. Unlike previous contrastive learning approaches in FedRecs that necessitate clients to share their private parameters, our CL4FedRec aligns with the basic FedRec learning protocol, ensuring compatibility with most existing FedRec implementations. We then evaluate the robustness of FedRecs equipped with CL4FedRec by subjecting it to several state-of-the-art model poisoning attacks. Surprisingly, our observations reveal that contrastive learning tends to exacerbate the vulnerability of FedRecs to these attacks. This is attributed to the enhanced embedding uniformity, which makes the polluted target item embedding easily proximate to popular items. Based on this insight, we propose an enhanced and robust version of CL4FedRec (rCL4FedRec) by introducing a regularizer to maintain the distance among item embeddings with different popularity levels. Extensive experiments conducted on four commonly used recommendation datasets demonstrate that rCL4FedRec significantly enhances both the model's performance and the robustness of FedRecs.
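To make the popularity-distance regularizer concrete, here is a hedged Python sketch: item embeddings are grouped into popularity buckets and a hinge penalty keeps the bucket centroids apart, so a poisoned target item cannot drift arbitrarily close to popular items. The bucketing scheme, margin, and function name are assumptions; the paper's exact formulation may differ.

```python
# Illustrative popularity-gap regularizer (not the authors' code).
import torch
import torch.nn.functional as F

def popularity_gap_regularizer(item_emb, popularity, n_buckets=3, margin=1.0):
    # item_emb: (n_items, d) item embeddings; popularity: (n_items,) interaction counts
    qs = torch.linspace(0, 1, n_buckets + 1)[1:-1]
    boundaries = torch.quantile(popularity.float(), qs)
    labels = torch.bucketize(popularity.float(), boundaries)   # popularity bucket per item
    centroids = [item_emb[labels == b].mean(dim=0)
                 for b in range(n_buckets) if (labels == b).any()]
    loss = item_emb.new_zeros(())
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            d = torch.norm(centroids[i] - centroids[j])
            loss = loss + F.relu(margin - d)                    # penalize centroids closer than margin
    return loss

# Example: 100 items with 16-dim embeddings and random interaction counts.
emb = torch.randn(100, 16)
pop = torch.randint(1, 500, (100,))
print(popularity_gap_regularizer(emb, pop))
```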
The increasing use of cloud-based image storage and retrieval systems has made ensuring security and efficiency crucial. The security enhancement of image retrieval and image archival in cloud computing has received considerable attention for transmitting data and ensuring data confidentiality among cloud servers and users. Various traditional image retrieval techniques addressing security have been developed in recent years, but they do not apply to large-scale environments. This paper introduces a new approach called the Triple network-based adaptive grey wolf (TN-AGW) to address these challenges. The TN-AGW framework combines the adaptability of the Grey Wolf Optimization (GWO) algorithm with the resilience of a Triple Network (TN) to enhance image retrieval in cloud servers while maintaining robust security measures. By using adaptive mechanisms, TN-AGW dynamically adjusts its parameters to improve the efficiency of image retrieval, reducing latency and resource utilization. The image retrieval process is performed by the triple network, and the parameters employed in the network are optimized by Adaptive Grey Wolf (AGW) optimization. Imputation of missing values, Min–Max normalization, and Z-score standardization are used to preprocess the images. Feature extraction is undertaken by a modified convolutional neural network (MCNN). Input images are taken from the Landsat 8 dataset, and the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset is employed for image retrieval. Performance is evaluated in terms of accuracy, precision, recall, specificity, F1-score, and false alarm rate (FAR): accuracy reaches 98.1%, precision 97.2%, recall 96.1%, and specificity 91.7%. The convergence speed is also enhanced by the TN-AGW approach. Therefore, the proposed TN-AGW approach achieves greater efficiency in image retrieval than other existing methods.
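The preprocessing chain described here (missing-value imputation, Min–Max normalization, Z-score standardization) can be sketched with scikit-learn as below. Only the preprocessing stage is shown; the triple network, MCNN feature extractor, and AGW optimizer are not reproduced, and the ordering of the steps is an assumption.

```python
# Minimal sketch of the stated preprocessing steps, assuming scikit-learn.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.pipeline import Pipeline

preprocess = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),   # fill missing feature values
    ("minmax", MinMaxScaler()),                   # rescale features to [0, 1]
    ("zscore", StandardScaler()),                 # zero mean, unit variance
])

# Example: rows are flattened image feature vectors with a missing entry.
features = np.array([[0.2, np.nan, 0.9],
                     [0.5, 0.1, 0.4]])
print(preprocess.fit_transform(features))
```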
This study examines the effectiveness of artificial intelligence techniques in generating high-quality environmental data for species introduction site selection systems. Combining Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis data with a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN), a network framework model (SAE-GAN) is proposed for environmental data generation. The model combines two popular generative models, GAN and VAE, to generate features conditional on categorical data embeddings after SWOT analysis. The model is capable of generating features that resemble real feature distributions and adding sample factors to more accurately track individual sample characteristics. The data are used to retain more semantic information in generation. The model was applied to species in Southern California, USA, citing SWOT analysis data to train the model. Results show that the model is capable of integrating data from more comprehensive analyses than traditional methods and generating high-quality reconstructed data from them, effectively solving the problem of insufficient data collection in development areas. The model is further validated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) classification assessment commonly used in the environmental data field. This study provides a reliable and rich source of training data for species introduction site selection systems and makes a significant contribution to ecological and sustainable development.
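The following is a hedged sketch of the general VAE-plus-GAN idea the abstract describes: a VAE whose decoder doubles as a GAN generator, conditioned on an embedding of categorical SWOT labels. The layer sizes, class name, and conditioning scheme are assumptions for illustration; the actual SAE-GAN architecture is not specified in the abstract.

```python
# Illustrative conditional VAE-GAN skeleton (assumed structure, not the authors' model).
import torch
import torch.nn as nn

class CondVAEGAN(nn.Module):
    def __init__(self, feat_dim=16, latent_dim=8, n_categories=4, emb_dim=4):
        super().__init__()
        self.cat_emb = nn.Embedding(n_categories, emb_dim)        # categorical SWOT embedding
        self.encoder = nn.Linear(feat_dim + emb_dim, 2 * latent_dim)
        self.decoder = nn.Sequential(                             # also serves as the GAN generator
            nn.Linear(latent_dim + emb_dim, 32), nn.ReLU(), nn.Linear(32, feat_dim))
        self.discriminator = nn.Sequential(
            nn.Linear(feat_dim + emb_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x, cat):
        c = self.cat_emb(cat)
        mu, logvar = self.encoder(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        x_hat = self.decoder(torch.cat([z, c], dim=-1))           # generated/reconstructed features
        real_score = self.discriminator(torch.cat([x, c], dim=-1))
        fake_score = self.discriminator(torch.cat([x_hat, c], dim=-1))
        return x_hat, mu, logvar, real_score, fake_score

# Usage sketch: 5 environmental feature vectors with hypothetical SWOT category ids.
model = CondVAEGAN()
x_hat, mu, logvar, rs, fs = model(torch.randn(5, 16), torch.randint(0, 4, (5,)))
```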
The manual process of evaluating answer scripts is strenuous. Evaluators use the answer key to assess the answers in the answer scripts. Advancements in technology and the introduction of new learning paradigms need a...
Cloud Computing (CC) is widely adopted in sectors like education, healthcare, and banking due to its scalability and cost-effectiveness. However, its internet-based nature exposes it to cyber threats, necessitating ad...
Understanding and predicting air quality is pivotal for public health and environmental management, especially in urban areas like Delhi. This study utilizes a comprehensive dataset from the Central Pollution Control ...
Emotion recognition plays a crucial role in various fields and is a key task in natural language processing (NLP). The objective is to identify and interpret emotional expressions in text. However, traditional emotion recognition approaches often struggle in few-shot cross-domain scenarios due to their limited capacity to generalize semantic features across different domains. Additionally, these methods face challenges in accurately capturing complex emotional states, particularly those that are subtle or implicit. To overcome these limitations, we introduce a novel approach called Dual-Task Contrastive Meta-Learning (DTCML). This method combines meta-learning and contrastive learning to improve emotion recognition. Meta-learning enhances the model's ability to generalize to new emotional tasks, while instance contrastive learning further refines the model by distinguishing unique features within each category, enabling it to better differentiate complex emotional expressions. Prototype contrastive learning, in turn, helps the model address the semantic complexity of emotions across different domains, enabling it to learn fine-grained emotion expressions. By leveraging dual tasks, DTCML learns from two domains simultaneously, which encourages the model to learn more diverse and generalizable emotion features, thereby improving its cross-domain adaptability, robustness, and generalization ability. We evaluated the performance of DTCML across four cross-domain settings, and the results show that our method outperforms the best baseline by 5.88%, 12.04%, 8.49%, and 8.40% in terms of accuracy.
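A minimal sketch of the prototype-contrastive component described above: each text embedding is pulled toward the mean embedding (prototype) of its emotion class and pushed away from other classes' prototypes. The temperature, normalization choices, and function name are assumptions; the instance-contrastive loss and the meta-learning outer loop are omitted.

```python
# Illustrative prototype contrastive loss (assumed form, not the authors' implementation).
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(embeddings, labels, temperature=0.1):
    # embeddings: (batch, d) text encodings; labels: (batch,) emotion class ids
    embeddings = F.normalize(embeddings, dim=-1)
    classes = labels.unique()
    protos = torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])
    protos = F.normalize(protos, dim=-1)
    logits = embeddings @ protos.T / temperature               # similarity to each class prototype
    targets = torch.stack([(classes == y).nonzero().squeeze() for y in labels])
    return F.cross_entropy(logits, targets)                    # attract own prototype, repel others

# Example: a batch of 8 embeddings over 3 hypothetical emotion classes.
loss = prototype_contrastive_loss(torch.randn(8, 32), torch.randint(0, 3, (8,)))
```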
This paper investigates the input-to-state stabilization of discrete-time Markov jump systems. A quantized control scheme that includes coding and decoding procedures is proposed. The relationship between the error in...
In the field of object detection for remote sensing images, especially in applications such as environmental monitoring and urban planning, significant progress has been made. This paper addresses the common challenge...