This paper presents a collision-resilient trajectory tracking algorithm that enables autonomous quadrotors to navigate complex dynamic environments. Since it combines Model Predictive Contouri...
Query optimization is a critical task in database systems, focused on determining the most efficient way to execute a query from an enormous set of possible strategies. Traditional approaches rely on heuristic search ...
ISBN: (Print) 9798400712746
Federated Graph Learning (FedGL) is an emerging Federated Learning (FL) framework that learns from graph data held by various clients to train better Graph Neural Network (GNN) models. Owing to concerns about the security of such frameworks, numerous studies have attempted backdoor attacks on FedGL, with a particular focus on distributed backdoor attacks. However, all existing distributed backdoor attacks on FedGL focus on injecting distributed backdoor triggers into the training data of each malicious client, which degrades model performance on the original task and is not always effective against robust federated learning defense algorithms, leading to a low attack success rate. Moreover, the backdoor signals introduced by the malicious clients may be smoothed out by clean signals from the honest clients, potentially undermining the attack's performance. To address these significant shortcomings, we propose a non-intrusive graph distributed backdoor attack (NI-GDBA) that does not require backdoor triggers to be injected into the training data. Our attack trains an adaptive perturbation trigger generator model for each malicious client, which learns a natural backdoor from the GNN model downloaded from the server using the malicious client's local data. In contrast to traditional distributed backdoor attacks on FedGL that inject triggers into training data, our attack achieves a higher attack success rate, stronger persistence, and greater stealth on datasets such as Molecules and Bioinformatics, with no negative impact on the performance of the global GNN model. We also explore the robustness of NI-GDBA under different defense strategies; based on extensive experimental studies, we show that our attack is robust to current federated learning defense methods, so non-intrusive distributed backdoor attacks on FedGL must be considered a novel threat that requires custom defenses.
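The mechanism this abstract describes can be sketched in miniature: the downloaded global model is kept frozen, and the malicious client fits only a perturbation "trigger" against it using its own local data, never touching the training set. The following is a minimal illustrative sketch, not the paper's actual method: it assumes a toy linear stand-in for the GNN (`toy_model`), graphs reduced to feature vectors, a finite-difference gradient in place of backpropagation, and a hypothetical L-infinity "stealth budget" on the trigger.

```python
# Hedged sketch of the non-intrusive trigger idea: optimize a perturbation
# against a FROZEN downloaded model using only the client's local data.
# All names (toy_model, learn_trigger, budget) are illustrative assumptions.

def toy_model(x, w):
    # Stand-in for the downloaded global GNN: one linear score per class.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def margin(x, w, target):
    # Target-class score minus the best competing class score.
    scores = toy_model(x, w)
    others = [s for k, s in enumerate(scores) if k != target]
    return scores[target] - max(others)

def learn_trigger(w, local_graphs, target, steps=100, lr=0.1,
                  eps=1e-4, budget=1.0):
    """Fit an additive feature perturbation (the 'trigger') that steers the
    frozen model toward `target` on the client's local graphs."""
    dim = len(local_graphs[0])
    delta = [0.0] * dim
    for _ in range(steps):
        grad = [0.0] * dim
        for x in local_graphs:
            xp = [a + d for a, d in zip(x, delta)]
            base = margin(xp, w, target)
            # Finite-difference gradient of the margin w.r.t. the trigger.
            for j in range(dim):
                xq = list(xp)
                xq[j] += eps
                grad[j] += (margin(xq, w, target) - base) / eps
        # Gradient-ascent step averaged over local graphs, then clipped
        # to an L-infinity budget so the trigger stays small ("stealth").
        delta = [max(-budget, min(budget, d + lr * g / len(local_graphs)))
                 for d, g in zip(delta, grad)]
    return delta
```

Because no poisoned examples enter training, clean inputs are classified as before; only inputs with the learned trigger added are pushed toward the attacker's target class, which mirrors the "no negative impact on the global model" property the abstract claims.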
The computer vision community has witnessed an extensive exploration of vision transformers in the past two years. Drawing inspiration from traditional schemes, numerous works focus on introducing vision-specific indu...
Integrating diverse Consumer Electronics (CE) information is essential to enhance and optimize user experiences, but CE Information Integration (CEII) faces challenges arising from differences in entity descriptions...
Knowledge Graph Query Embedding (KGQE) aims to embed First-Order Logic (FOL) queries in a low-dimensional KG space for complex reasoning over incomplete KGs. To enhance the generalization of KGQE models, recent studie...
Index recommendation is essential for improving query performance in database management systems (DBMSs) through creating an optimal set of indexes under specific constraints. Traditional methods, such as heuristic an...
Text-to-SQL, the task of translating natural language questions into SQL queries, plays a crucial role in enabling non-experts to interact with databases. While recent advancements in large language models (LLMs) have...
In wireless networks, utilizing sniffers for fault analysis, traffic traceback, and resource optimization is a crucial task. However, existing centralized algorithms cannot be applied to high-density wireless networks...
Existing low-rank adaptation (LoRA) methods face challenges on sparse large language models (LLMs) due to the inability to maintain sparsity. Recent works introduced methods that maintain sparsity by augmenting LoRA t...