
Dual-Layered Model Protection Scheme Against Backdoor Attacks in Fog Computing-Based Federated Learning

Authors: Gu, Ke; Zuo, Yiming; Tan, Jingjing; Yin, Bo; Yang, Zheng; Li, Xiong

Affiliations: School of Computer & Communication Engineering, Changsha University of Science & Technology, Changsha 410114, China; Institute for Cyber Security, School of Computer Science & Engineering, University of Electronic Science & Technology of China, Chengdu 611731, China

Published in: IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT (IEEE Trans. Netw. Serv. Manage.)

Year/Volume/Issue: 2025, Vol. 22, No. 2

Pages: 2000-2016


Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degree conferrable in Engineering or Science)]

Funding: National Social Science Foundation of China [20BTQ058]; Natural Science Foundation of Hunan Province [2023JJ50033]; National Natural Science Foundation of China [62072078, 62332018]

Keywords: Servers; Computational modeling; Training; Data models; Adaptation models; Protection; Federated learning; Security; Image edge detection; Analytical models; backdoor attacks; defence; dual-layered model protection

Abstract: With the growing popularity of federated learning, securing training models against backdoor attacks has become a key challenge. Existing defense schemes often fail to address the complexity and diversity of such attacks, leaving training models vulnerable. In this paper, we propose a comprehensive dual-layered model protection scheme for a fog computing-based federated learning framework. In our scheme, we first introduce a multi-metric defense mechanism deployed on fog servers to defend against malicious backdoor attacks from edge devices. The proposed defense mechanism employs multiple detection indicators to simultaneously evaluate gradient and model-training attributes, so that abnormal local gradients are identified effectively. Further, we construct a second-layer defense scheme deployed on aggregation servers to regularly monitor the participation status of fog servers; its purpose is to detect the distribution of uploaded gradients and eliminate malicious gradients from compromised fog servers. Additionally, we design an adaptive gradient adjustment method to mitigate the influence of deleting malicious gradients on the global model training process. Experimental results show that our dual-layered model protection scheme performs well against three types of backdoor attacks (BadNet, Blended, and WaNet).
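The abstract does not specify which detection indicators the multi-metric mechanism uses. Purely as an illustration of the general idea (screening local gradients on several statistics at once before aggregation), the following Python sketch flags updates whose L2 norm or cosine similarity to the mean update are z-score outliers. The function name, the choice of metrics, and the thresholds are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def filter_local_gradients(gradients, norm_z=2.0, cos_z=2.0):
    """Keep local gradients that pass two anomaly checks (z-score tests):
    (1) L2 norm, since backdoored updates are often scaled up, and
    (2) cosine similarity to the mean update, since poisoned updates
    often point in an unusual direction.
    `gradients` is a list of 1-D numpy arrays, one per edge device.
    Returns indices of gradients treated as benign."""
    G = np.stack(gradients)                       # (n_devices, dim)
    mean_update = G.mean(axis=0)

    norms = np.linalg.norm(G, axis=1)             # metric 1: magnitude
    cos = G @ mean_update / (norms * np.linalg.norm(mean_update) + 1e-12)

    def zscores(x):
        return (x - x.mean()) / (x.std() + 1e-12)

    ok = (np.abs(zscores(norms)) < norm_z) & (np.abs(zscores(cos)) < cos_z)
    return [i for i, keep in enumerate(ok) if keep]

# Toy example: nine similar benign updates plus one heavily scaled one.
rng = np.random.default_rng(0)
benign_updates = [rng.normal(0.0, 1.0, 100) + 1.0 for _ in range(9)]
poisoned = -20.0 * np.ones(100)
kept = filter_local_gradients(benign_updates + [poisoned])
```

In the paper's two-layer setting, a filter of this kind would run on fog servers against edge-device updates, while the aggregation layer separately inspects the distribution of gradients forwarded by each fog server.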
