As gravitational wave astronomy has advanced, the need for fast and effective signal processing has never been more critical. New detectors such as the Laser Interferometer Gravitational-Wave Observatory (LIGO) produce ...
People-centric activity recognition is one of the most critical technologies in a wide range of real-world applications, including intelligent transportation systems, healthcare services, and brain-computer interfaces. Large-scale data collection and annotation make the application of machine learning algorithms prohibitively expensive when adapting to new tasks. One way of circumventing this limitation is to train the model in a semi-supervised learning manner that utilizes a percentage of unlabeled data to reduce the labeling burden in prediction tasks. Despite their appeal, these models often assume that labeled and unlabeled data come from similar distributions, which leads to the domain shift problem caused by the presence of distribution gaps. To address these limitations, we propose herein a novel method for people-centric activity recognition, called domain generalization with semi-supervised learning (DGSSL), that effectively enhances the representation learning and domain alignment capabilities of a model. We first design a new autoregressive discriminator for adversarial training between unlabeled and labeled source domains, extracting domain-specific features to reduce the distribution gaps. Second, we introduce two reconstruction tasks to capture the task-specific features to avoid losing information related to representation learning while maintaining task-specific consistency. Finally, benefiting from the collaborative optimization of these two tasks, the model can accurately predict both the domain and category labels of the source domains for the classification task. We conduct extensive experiments on three real-world sensing datasets. The experimental results show that DGSSL surpasses three state-of-the-art methods with better performance and generalization.
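As a rough illustration of the adversarial-training idea described in this abstract, the sketch below uses a gradient-reversal layer in front of a plain MLP discriminator that separates the labeled and unlabeled source splits. This is not the authors' DGSSL architecture (their discriminator is autoregressive); the feature dimension, layer sizes, and two-way domain labels are assumptions.

```python
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    """Gradient reversal: identity in the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature comes from the labeled or the unlabeled source split."""
    def __init__(self, feat_dim=128, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),        # 2 "domains": labeled vs. unlabeled (assumed)
        )

    def forward(self, feats):
        reversed_feats = GradReverse.apply(feats, self.lam)
        return self.net(reversed_feats)

# Usage: the encoder is trained to fool the discriminator, aligning the two splits.
feats = torch.randn(32, 128, requires_grad=True)   # encoder output (assumed 128-d)
domain_labels = torch.randint(0, 2, (32,))          # 0 = labeled, 1 = unlabeled
logits = DomainDiscriminator()(feats)
adv_loss = nn.CrossEntropyLoss()(logits, domain_labels)
adv_loss.backward()                                  # encoder receives reversed gradients
```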
In high-risk industrial environments such as nuclear power plants, precise defect identification and localization are essential for maintaining production stability and safety. However, the complexity of such a harsh environment leads to significant variations in the shape and size of the defects. To address this challenge, we propose the multivariate time series segmentation network (MSSN), which adopts a multiscale convolutional network with multi-stage and depth-separable convolutions for efficient feature extraction through variable-length templates. To tackle the classification difficulty caused by structural signal variance, MSSN employs logarithmic normalization to adjust instance distributions. Furthermore, it integrates classification with smoothing loss functions to accurately identify defect segments amid similar structural and defect signal subsequences. Evaluated on both the Mackey-Glass dataset and an industrial dataset, our algorithm achieves over 95% localization and demonstrates its capture capability on the synthetic dataset. On a nuclear plant's heat transfer tube dataset, it captures 90% of defect instances with a 75% middle localization F1 score.
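Two building blocks named in the abstract, logarithmic normalization of instances and depth-separable multiscale convolutions, can be sketched as follows. This is an illustrative reading rather than the exact MSSN layers; the signed-log formulation, kernel sizes, and channel counts are assumptions.

```python
import torch
import torch.nn as nn

def log_normalize(x, eps=1e-8):
    """Compress the dynamic range of a series (batch, channels, time) with a signed
    logarithm, then standardize each instance along the time axis."""
    x = torch.sign(x) * torch.log1p(x.abs() + eps)
    mean = x.mean(dim=-1, keepdim=True)
    std = x.std(dim=-1, keepdim=True) + eps
    return (x - mean) / std

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv over each channel followed by a pointwise 1x1 mixing conv."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Multiscale branch: several kernel sizes act as variable-length templates.
x = log_normalize(torch.randn(4, 8, 512))               # 4 series, 8 channels, 512 steps
branches = [DepthwiseSeparableConv1d(8, 16, k) for k in (3, 7, 15)]
features = torch.cat([b(x) for b in branches], dim=1)   # (4, 48, 512)
```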
Point cloud completion aims to infer complete point clouds based on partial 3D point cloud inputs. Most previous methods apply coarse-to-fine strategy networks for generating complete point clouds. However, such methods are not only relatively time-consuming but also cannot provide representative complete shape features based on partial inputs. In this paper, a novel feature alignment fast point cloud completion network (FACNet) is proposed to directly and efficiently generate the detailed shapes of objects. FACNet aligns high-dimensional feature distributions of both partial and complete point clouds to maintain global information about the complete shape. During its decoding process, the local features from the partial point cloud are incorporated along with the maintained global information to ensure complete and time-saving generation of the complete point cloud. Experimental results show that FACNet outperforms the state-of-the-art on the PCN, Completion3D, and MVP datasets, and achieves competitive performance on ShapeNet-55 and KITTI. Moreover, FACNet and a simplified version, FACNet-slight, achieve a significant speedup of 3–10 times over other state-of-the-art methods.
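One simple way to read the feature-alignment idea above is as a penalty on the gap between the global codes produced by the partial-cloud and complete-cloud encoders. The sketch below combines a pairwise L2 term with crude first- and second-moment matching across the batch; this loss form is an assumption, not FACNet's actual objective.

```python
import torch

def feature_alignment_loss(partial_feat, complete_feat):
    """Penalize the gap between global codes of partial and complete point clouds.

    partial_feat, complete_feat: (batch, feat_dim) global features from the two
    encoders (e.g. after max-pooling per-point features).
    """
    # Direct pairwise alignment of matching shapes.
    l2_term = (partial_feat - complete_feat).pow(2).sum(dim=1).mean()
    # First/second-moment matching across the batch (a crude distribution alignment).
    mean_gap = (partial_feat.mean(0) - complete_feat.mean(0)).pow(2).sum()
    var_gap = (partial_feat.var(0) - complete_feat.var(0)).pow(2).sum()
    return l2_term + mean_gap + var_gap

# Usage with dummy 1024-d global codes for a batch of 16 shapes.
loss = feature_alignment_loss(torch.randn(16, 1024), torch.randn(16, 1024))
```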
Owing to their extensive applications in many areas, such as networked systems, formation flying of unmanned air vehicles, and coordinated manipulation of multiple robots, distributed containment control for nonlinear multiagent systems (MASs) has received considerable attention, for example [1,2]. Although the valued studies in [1,2] investigate containment control problems for MASs subject to nonlinearities, the proposed distributed nonlinear protocols only achieve asymptotic convergence. As a crucial performance indicator for distributed containment control of MASs, fast convergence is conducive to achieving better control accuracy [3]. The work in [4] first addresses the backstepping-based adaptive fuzzy fixed-time containment tracking problem for nonlinear high-order MASs with unknown external disturbances. However, the fixed-time control protocol designed in [4] cannot escape the singularity problem in the backstepping-based adaptive control design. As is well known, the singularity problem has become an inherent problem in adaptive fixed-time control design, which may cause unbounded control inputs and even instability of the controlled system. Therefore, how to solve the nonsingular fixed-time containment control problem for nonlinear MASs is still open and awaits a breakthrough, to the best of our knowledge.
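For background on the fixed-time convergence being discussed, the standard Lyapunov-type fixed-time stability condition and its settling-time bound (a classical result, not the specific protocol of [4]) can be stated as follows:

```latex
% Standard fixed-time stability condition (background, not the protocol of [4]):
% if a Lyapunov function V satisfies, for some \alpha,\beta > 0 and 0 < p < 1 < q,
\dot{V}(x) \le -\alpha V^{p}(x) - \beta V^{q}(x),
% then the origin is fixed-time stable and the settling time is bounded,
% independently of the initial condition, by
T \le \frac{1}{\alpha\,(1-p)} + \frac{1}{\beta\,(q-1)}.
```

The singularity issue mentioned above typically arises when fractional powers of the tracking errors appear in the virtual control laws and their derivatives blow up as the errors approach zero.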
In task offloading, the movement of vehicles causes switching of the connected roadside units (RSUs) and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movement on task offloading and show that the data preparation time for task execution can be minimized via forward-looking scheduling. Then, a Bi-LSTM-based model is proposed to predict vehicle trajectories. The service area is divided into several equal-sized grids; if the actual position of the vehicle and the position predicted by the model fall in the same grid, the prediction is considered correct, which reduces the difficulty of vehicle trajectory prediction. Moreover, we propose a scheduling strategy for delay optimization based on the predicted trajectories. Considering the inevitable prediction error, we take some edge servers around the predicted area as candidate execution servers and back up the data required for task execution to these candidate servers, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into a reduction in offloading delay. Simulation results show that, compared with other classical schemes, the proposed strategy achieves lower average task offloading delay.
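The grid-based correctness criterion described above (a prediction counts as correct when the true and predicted positions fall in the same grid cell) can be sketched as below. The Bi-LSTM here is a generic PyTorch model rather than the paper's exact architecture, and the cell size, history length, and coordinate units are assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Bi-LSTM that maps a history of (x, y) positions to the next (x, y) position."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, history):                  # history: (batch, steps, 2)
        out, _ = self.lstm(history)
        return self.head(out[:, -1, :])           # predicted next position, (batch, 2)

def grid_accuracy(pred_xy, true_xy, cell_size=100.0):
    """Grid-level accuracy: a prediction landing in the true cell counts as correct."""
    pred_cell = torch.div(pred_xy, cell_size, rounding_mode='floor')
    true_cell = torch.div(true_xy, cell_size, rounding_mode='floor')
    return (pred_cell == true_cell).all(dim=1).float().mean()

# Usage with dummy trajectories (positions in meters, 10-step history, assumed values).
history = torch.rand(32, 10, 2) * 1000.0
true_next = torch.rand(32, 2) * 1000.0
accuracy = grid_accuracy(TrajectoryPredictor()(history), true_next)
```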
Federated recommender systems (FedRecs) have garnered increasing attention recently, thanks to their privacy-preserving benefits. However, the decentralized and open characteristics of current FedRecs present at least two challenges. First, the performance of FedRecs is compromised due to highly sparse on-device data for each client. Second, the system's robustness is undermined by its vulnerability to model poisoning attacks launched by malicious users. In this paper, we introduce a novel contrastive learning framework designed to fully leverage each client's sparse data through embedding augmentation, referred to as CL4FedRec. Unlike previous contrastive learning approaches in FedRecs that require clients to share their private parameters, our CL4FedRec aligns with the basic FedRec learning protocol, ensuring compatibility with most existing FedRec implementations. We then evaluate the robustness of FedRecs equipped with CL4FedRec by subjecting it to several state-of-the-art model poisoning attacks. Surprisingly, our observations reveal that contrastive learning tends to exacerbate the vulnerability of FedRecs to these attacks. This is attributed to the enhanced embedding uniformity, which makes the polluted target item embedding easily proximate to popular items. Based on this insight, we propose an enhanced and robust version of CL4FedRec (rCL4FedRec) by introducing a regularizer to maintain the distance among item embeddings with different popularity levels. Extensive experiments conducted on four commonly used recommendation datasets demonstrate that rCL4FedRec significantly enhances both the model's performance and the robustness of FedRecs.
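As a rough sketch of the regularization idea described above (keeping item embeddings of different popularity levels apart), the penalty below pushes a "popular" group and an "unpopular" group of item embeddings at least a margin apart. The median-based grouping, the margin, and the hinge form are assumptions, not the paper's exact regularizer.

```python
import torch

def popularity_distance_regularizer(item_emb, popularity, margin=1.0):
    """Penalize popular/unpopular item pairs whose embeddings are closer than `margin`.

    item_emb:   (num_items, dim) item embedding table
    popularity: (num_items,) interaction counts per item
    """
    threshold = popularity.float().median()
    popular = item_emb[popularity >= threshold]
    unpopular = item_emb[popularity < threshold]
    # Pairwise distances between the two popularity groups.
    dists = torch.cdist(popular, unpopular)              # (n_pop, n_unpop)
    # Hinge penalty on pairs that violate the margin.
    return torch.clamp(margin - dists, min=0.0).mean()

# Usage with a dummy embedding table of 1000 items (dimensions assumed).
item_emb = torch.randn(1000, 64, requires_grad=True)
popularity = torch.randint(0, 500, (1000,))
reg = popularity_distance_regularizer(item_emb, popularity)
```

In training, such a term would be added to the recommendation and contrastive losses so that a poisoned target item cannot drift arbitrarily close to the popular-item region of the embedding space.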
The increasing use of cloud-based image storage and retrieval systems has made ensuring security and efficiency crucial. The security enhancement of image retrieval and image archival in cloud computing has received considerable attention in transmitting data and ensuring data confidentiality among cloud servers and users. Various traditional image retrieval techniques concerning security have been developed in recent years, but they do not scale to large environments. This paper introduces a new approach called the Triple network-based adaptive grey wolf (TN-AGW) framework to address these challenges. TN-AGW combines the adaptability of the Grey Wolf Optimization (GWO) algorithm with the resilience of a Triple Network (TN) to enhance image retrieval on cloud servers while maintaining robust security measures. By using adaptive mechanisms, TN-AGW dynamically adjusts its parameters to improve the efficiency of image retrieval, reducing latency and resource utilization. Specifically, the image retrieval process is performed by a triple network, and the parameters of the network are optimized by Adaptive Grey Wolf (AGW) optimization. Imputation of missing values, Min–Max normalization, and Z-score standardization are used to preprocess the images, and feature extraction is undertaken by a modified convolutional neural network (MCNN). Input images are taken from datasets such as the Landsat 8 dataset, and the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset is employed for image retrieval. Performance is evaluated in terms of accuracy, precision, recall, specificity, F1-score, and false alarm rate (FAR): accuracy reaches 98.1%, precision 97.2%, recall 96.1%, and specificity 91.7%. The convergence speed is also improved by the TN-AGW approach. Therefore, the proposed TN-AGW approach achieves greater efficiency in image retrieval than other existing approaches.
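The Min–Max normalization and Z-score standardization named in the preprocessing step are standard transforms; a small NumPy sketch follows. The simple mean imputation of missing values is an assumed choice, since the abstract does not specify the imputation method.

```python
import numpy as np

def preprocess(images):
    """images: (n, h, w) float array, possibly containing NaNs for missing pixels."""
    # Mean imputation of missing values (an assumed, simple choice).
    fill_value = np.nanmean(images)
    images = np.where(np.isnan(images), fill_value, images)
    # Min-Max normalization to [0, 1].
    minmax = (images - images.min()) / (images.max() - images.min() + 1e-8)
    # Z-score standardization (zero mean, unit variance).
    zscore = (images - images.mean()) / (images.std() + 1e-8)
    return minmax, zscore

# Usage with a dummy batch of 8 images of size 64x64.
minmax, zscore = preprocess(np.random.rand(8, 64, 64))
```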
This study examines the effectiveness of artificial intelligence techniques in generating high-quality environmental data for species introduction site selection systems. Combining Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis data with a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN), a network framework model (SAE-GAN) is proposed for environmental data generation. The model combines two popular generative models, GAN and VAE, to generate features conditional on categorical data embedding after SWOT analysis. The model is capable of generating features that resemble real feature distributions and adding sample factors to more accurately track individual sample data. Categorical data are used to retain more semantic information to generate features. The model was applied to species in Southern California, USA, citing SWOT analysis data to train the model. Results show that the model is capable of integrating data from more comprehensive analyses than traditional methods and generating high-quality reconstructed data from them, effectively solving the problem of insufficient data collection in development areas. The model is further validated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) classification assessment commonly used in environmental data analysis. This study provides a reliable and rich source of training data for species introduction site selection systems and makes a significant contribution to ecological and sustainable development.
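The combination of a VAE with a GAN, conditioned on an embedding of categorical SWOT labels, can be read roughly as below. Layer sizes, the conditioning scheme, and the loss weights are all assumptions for illustration, not the SAE-GAN specification.

```python
import torch
import torch.nn as nn

class ConditionalVAEGenerator(nn.Module):
    """VAE whose encoder/decoder are conditioned on a categorical embedding;
    the decoder doubles as the GAN generator."""
    def __init__(self, feat_dim=16, n_categories=4, emb_dim=8, latent_dim=8):
        super().__init__()
        self.embed = nn.Embedding(n_categories, emb_dim)
        self.enc = nn.Linear(feat_dim + emb_dim, 2 * latent_dim)   # -> (mu, logvar)
        self.dec = nn.Linear(latent_dim + emb_dim, feat_dim)

    def forward(self, x, category):
        c = self.embed(category)
        mu, logvar = self.enc(torch.cat([x, c], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterization
        recon = self.dec(torch.cat([z, c], dim=1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return recon, kl

discriminator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

# One generator-side step: reconstruction + KL + "fool the discriminator" terms.
x = torch.randn(32, 16)                        # SWOT-derived feature vectors (assumed 16-d)
cat = torch.randint(0, 4, (32,))               # assumed categorical labels
gen = ConditionalVAEGenerator()
recon, kl = gen(x, cat)
adv = nn.functional.binary_cross_entropy_with_logits(
    discriminator(recon), torch.ones(32, 1))   # generator wants "real" labels
loss = nn.functional.mse_loss(recon, x) + kl + 0.1 * adv
```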
The manual process of evaluating answer scripts is strenuous. Evaluators use the answer key to assess the answers in the answer scripts. Advancements in technology and the introduction of new learning paradigms need a...