Data in real-world applications often come with multiple views. Fully exploiting the information in each view is significant for making data more representative. However, due to various limitations a...
Cell association is a significant research issue in future mobile communication systems due to the unacceptably large computational time of traditional schemes. This article proposes a polynomial-time cell association scheme which not only completes the association in polynomial time but also fits a generic optimization objective function. On the one hand, traditional cell association, a non-deterministic polynomial (NP) hard problem with a generic utility function, is heuristically transformed into a 2-dimensional assignment optimization and solved by a certain polynomial-time algorithm, which significantly saves computational time. On the other hand, the scheme jointly considers utility maximization and load balancing among multiple base stations (BSs) by maintaining an experience pool storing a set of weighting factor values and their corresponding results. When an association optimization is required, a suitable weighting factor value is taken from the pool to calculate a long square utility matrix, and a certain polynomial-time algorithm is applied for the association. Compared with several representative schemes, the proposed scheme achieves large system capacity and high fairness within a relatively short computational time.
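The abstract does not name the polynomial-time solver or the utility function, so the sketch below is a minimal illustration under assumptions: the Hungarian algorithm (via `scipy.optimize.linear_sum_assignment`) as the assignment solver, random numbers standing in for the utility values, and a hypothetical per-BS quota to show how replicating BS columns yields the "long square" utility matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_users, n_bs, quota = 6, 3, 2  # hypothetical sizes; quota = max users per BS

# Hypothetical per-(user, BS) utility; in the scheme this would be derived
# from the chosen weighting factor and the generic utility function.
utility = rng.random((n_users, n_bs))

# Build the "long square" matrix: replicate each BS column `quota` times so
# that a one-to-one assignment on the 6x6 matrix encodes the per-BS load limit.
long_square = np.repeat(utility, quota, axis=1)

# Polynomial-time optimal assignment (Hungarian algorithm), maximizing utility.
rows, cols = linear_sum_assignment(long_square, maximize=True)
assignment = {int(u): int(c // quota) for u, c in zip(rows, cols)}
print(assignment)  # user -> associated BS, at most `quota` users per BS
```

Mapping each replicated column back with integer division recovers the BS index, so load balancing falls out of the one-to-one assignment rather than needing explicit constraints.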
Federated learning-based Named Entity Recognition (FNER) has attracted widespread attention through decentralized training on local clients. However, most FNER models assume that entity types are pre-fixed, so in prac...
ISBN: (Print) 9798400712746
Federated Graph Learning (FedGL) is an emerging Federated Learning (FL) framework that learns graph data from various clients to train better Graph Neural Network (GNN) models. Owing to concerns regarding the security of such frameworks, numerous studies have attempted to execute backdoor attacks on FedGL, with a particular focus on distributed backdoor attacks. However, all existing methods mounting distributed backdoor attacks on FedGL focus only on injecting distributed backdoor triggers into the training data of each malicious client, which degrades model performance on the original task and is not always effective against robust federated learning defense algorithms, leading to a low attack success rate. What is more, the backdoor signals introduced by the malicious clients may be smoothed out by other clean signals from the honest clients, potentially undermining the performance of the attack. To address these shortcomings, we propose a non-intrusive graph distributed backdoor attack (NI-GDBA) that does not require backdoor triggers to be injected into the training data. Our attack trains an adaptive perturbation trigger generator model for each malicious client to learn the natural backdoor from the GNN model downloaded from the server using the malicious client's local data. In contrast to traditional distributed backdoor attacks on FedGL via trigger injection in training data, our attack on different datasets such as Molecules and Bioinformatics achieves a higher attack success rate, stronger persistence and stealth, and has no negative impact on the performance of the global GNN model. We also explore the robustness of NI-GDBA under different defense strategies and, based on our extensive experimental studies, show that our attack method is robust to current federated learning defense methods; it is therefore necessary to consider non-intrusive distributed backdoor attacks on FedGL as a novel threat that requires custom defenses.
Existing methods on knowledge base question generation (KBQG) learn a one-size-fits-all model by training on all subgraphs together, without distinguishing the diverse semantics of subgraphs. In this work, we show that ma...
Personalized learner modeling uses learners’ historical behavior data to diagnose their cognitive abilities, a process known as Cognitive Diagnosis (CD). This is essential for web-based learning services such as lear...
ISBN: (Digital) 9798331543143
ISBN: (Print) 9798331543150
Cognitive diagnosis is a critical task in intelligent education, aimed at inferring students’ mastery of knowledge concepts from their response logs. Although existing cognitive diagnosis models achieve excellent performance, they underestimate the difficulty of easy exercises and overestimate the difficulty of hard exercises. We attribute this to the class imbalance in the response logs of easy and hard exercises. Moreover, the convergence speed varies from exercise to exercise during model training, which further challenges generalization. To address these problems, we propose a logit adjustment approach based on each exercise's correct rate, applicable to a wide range of cognitive diagnosis models. Specifically, we enforce logit adjustment in the loss during training to overcome the class imbalance in response logs. Then, we apply group distributionally robust optimization for generalization. Finally, extensive experiments demonstrate the effectiveness of our model, especially on easy and hard exercises.
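The abstract does not give the adjustment formula; the sketch below assumes a class-prior-style offset, shifting the logit by the log-odds of the exercise's empirical correct rate before the binary cross-entropy loss, with a hypothetical scaling factor `tau`. It is an illustration of the general logit-adjustment idea, not the paper's exact loss.

```python
import numpy as np

def adjusted_bce_loss(logit, label, correct_rate, tau=1.0, eps=1e-6):
    """Binary cross-entropy with a logit offset derived from the exercise's
    empirical correct rate (hypothetical form, in the spirit of class-prior
    logit adjustment)."""
    p = np.clip(correct_rate, eps, 1 - eps)
    # Shift the logit toward the majority outcome, so fitting the minority
    # outcome (e.g. a wrong answer on an easy exercise) costs more.
    z = logit + tau * np.log(p / (1 - p))
    # Numerically stable BCE on the adjusted logit.
    return np.log1p(np.exp(-z)) * label + np.log1p(np.exp(z)) * (1 - label)

# On an easy exercise (correct rate 0.9) a wrong answer (label 0) is the
# minority outcome, so its adjusted loss exceeds the balanced case (0.5).
print(adjusted_bce_loss(0.0, 0, 0.9), adjusted_bce_loss(0.0, 0, 0.5))
```

The larger penalty on minority outcomes counteracts the tendency to underestimate easy-exercise difficulty described above.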
Enterprises currently face the challenge of reducing production cycles and costs, and utilizing existing cases for making changes and iterations has emerged as a viable solution. However, the acquisition and modificati...
Partial label learning is a weakly supervised learning framework in which each instance is associated with multiple candidate labels, among which only one is the ground-truth label. This paper proposes a unified formulation that employs proper label constraints for training models while simultaneously performing pseudo-labeling. Unlike existing partial label learning approaches that only leverage similarities in the feature space without utilizing label constraints, our pseudo-labeling process leverages similarities and differences in the feature space using the same candidate label constraints and then disambiguates noisy labels. Extensive experiments on artificial and real-world partial label datasets show that our approach significantly outperforms state-of-the-art counterparts on classification prediction.
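To make the disambiguation idea concrete, here is a minimal sketch of similarity-based pseudo-labeling under candidate label constraints: neighbours vote over their candidate sets, but only labels inside an instance's own candidate set may be chosen. The k-NN voting scheme is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def pseudo_label(X, candidates, n_classes, k=3):
    """Disambiguate partial labels: each instance collects votes from its k
    nearest neighbours' candidate sets, then picks the best-supported label
    from its OWN candidate set (the candidate label constraint)."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)  # exclude self from neighbours
    labels = []
    for i in range(n):
        neigh = np.argsort(dist[i])[:k]
        votes = np.zeros(n_classes)
        for j in neigh:  # spread each neighbour's vote over its candidates
            for c in candidates[j]:
                votes[c] += 1.0 / len(candidates[j])
        cand = sorted(candidates[i])  # constraint: restrict to i's candidates
        labels.append(cand[int(np.argmax(votes[cand]))])
    return labels

# Two toy clusters with overlapping candidate sets.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
candidates = [{0, 1}, {0}, {0, 1}, {1, 2}, {2}, {1, 2}]
print(pseudo_label(X, candidates, n_classes=3, k=2))  # [0, 0, 0, 2, 2, 2]
```

Instances with a singleton candidate set anchor their cluster, and the constraint prevents a neighbour's spurious vote (e.g. label 1 here) from ever being assigned outside an instance's candidates.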
Deep learning has shown significant improvements on various machine learning tasks by introducing a wide spectrum of neural network structures. However, for these neural network models, it is necessary to label a tremendous amount of training data, which is prohibitively expensive in practice. In this paper, we propose the OnLine Machine Learning (OLML) database, which stores trained models and reuses these models in a new training task to achieve a better training effect with a small amount of training data. An efficient model reuse algorithm, AdaReuse, is developed in the OLML database. Specifically, AdaReuse first estimates the reuse potential of trained models from domain relatedness and model quality, through which a group of trained models with high reuse potential for the training task can be selected. Then, the selected models are trained iteratively to encourage diverse models, with which a better training effect can be achieved by ensembling. We evaluate AdaReuse on two types of natural language processing (NLP) tasks, and the results show AdaReuse can improve the training effect significantly compared with models trained from scratch when the training data is limited. Based on AdaReuse, we implement an OLML database prototype system which can accept a training task as an SQL-like query and automatically generate a training plan by selecting and reusing trained models. Extensive studies are conducted to illustrate that the OLML database can properly store trained models and reuse them efficiently in new training tasks.
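The reuse-potential estimate combines domain relatedness and model quality; the abstract does not specify how, so the sketch below assumes a simple product of cosine relatedness (over hypothetical domain embeddings) and held-out accuracy. All names and the scoring form are illustrative assumptions, not AdaReuse's actual formula.

```python
import math

def select_models(models, task_domain_vec, top_k=2):
    """Rank stored models by domain-relatedness x quality and return the
    names of the top_k candidates for reuse. The multiplicative score and
    cosine relatedness are assumptions for illustration only."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = [(cosine(m["domain_vec"], task_domain_vec) * m["accuracy"], m["name"])
              for m in models]
    scored.sort(reverse=True)  # highest reuse potential first
    return [name for _, name in scored[:top_k]]

# Hypothetical model store: domain embeddings and held-out accuracies.
models = [
    {"name": "news_ner", "domain_vec": [1.0, 0.0], "accuracy": 0.90},
    {"name": "bio_ner",  "domain_vec": [0.0, 1.0], "accuracy": 0.95},
    {"name": "weak",     "domain_vec": [1.0, 0.0], "accuracy": 0.30},
]
print(select_models(models, task_domain_vec=[1.0, 0.0]))
```

A selected group like this would then feed the iterative training-and-ensembling stage described above; an SQL-like front end would only need to map a query's task description onto `task_domain_vec`.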