Practical Private Aggregation in Federated Learning Against Inference Attack

Authors: Zhao, Ping; Cao, Zhikui; Jiang, Jin; Gao, Fei

Author affiliations: College of Information Science and Technology, Donghua University, Shanghai 200051, China; State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China

Published in: IEEE INTERNET OF THINGS JOURNAL (IEEE Internet Things J.)

Year/Volume/Issue: 2023, Vol. 10, No. 1

Pages: 318-329


Subject classification: 0810 [Engineering - Information and Communication Engineering]; 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology]

Funding: Open Foundation of State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications [SKLNST-2021-1-06]; National Natural Science Foundation of China

Keywords: Computational modeling; Costs; Data models; Servers; Training; Homomorphic encryption; Differential privacy; Computational Diffie-Hellman (CDH); data privacy; federated learning (FL); inference attacks

Abstract: Federated learning (FL) enables multiple worker devices to collaboratively train a machine learning model by sharing local models trained on their private data. However, local models have been shown to leak information about the private data, exposing FL to inference attacks in which an adversary reconstructs or infers sensitive information about the private data (e.g., labels or memberships) from the local models. To address this issue, existing works have proposed homomorphic encryption, secure multiparty computation (SMC), and differential privacy. However, homomorphic encryption and SMC-based approaches are not applicable to large-scale FL scenarios, as they incur substantial additional communication and computation costs and require secure channels to deliver keys. Differential privacy, in turn, imposes a substantial tradeoff between privacy budget and model performance. In this article, we propose a novel FL framework that protects the data privacy of worker devices against inference attacks with minimal accuracy loss and low computation and communication costs, and that does not rely on secure pairwise communication channels. The main idea is to generate lightweight keys based on the computational Diffie-Hellman (CDH) problem to encrypt the local models, so that the FL server obtains only the sum of all worker devices' local models without learning the exact local model of any specific worker device. Extensive experimental results on three real-world data sets validate that the proposed FL framework protects the data privacy of worker devices while incurring only a small constant computation and communication overhead and a drop in test accuracy of no more than 1%.
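The abstract's core idea (the server sees only the sum of local models, not any individual one) is the pairwise-masking pattern of secure aggregation: each pair of workers derives a shared Diffie-Hellman secret, seeds a pseudorandom mask from it, and the two workers add the mask with opposite signs so all masks cancel in the server's sum. The sketch below illustrates that cancellation with toy integer "models"; the prime, generator, and hash-based PRG are illustrative assumptions, not the authors' actual construction.

```python
# Illustrative pairwise-mask aggregation: masks derived from DH shared
# secrets cancel in the sum, so the server learns only the aggregate.
import hashlib
import random

P = 2**127 - 1  # a known Mersenne prime; illustrative group modulus
G = 3           # illustrative base

def mask_stream(seed: bytes, n: int):
    """Derive n pseudorandom integers from a shared seed via SHA-256."""
    out = []
    for ctr in range(n):
        h = hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        out.append(int.from_bytes(h, "big"))
    return out

class Worker:
    def __init__(self, wid, model):
        self.wid = wid
        self.model = model                      # toy "model" = list of ints
        self.sk = random.randrange(2, P - 1)    # DH secret key
        self.pk = pow(G, self.sk, P)            # DH public key

    def masked_model(self, peers):
        masked = list(self.model)
        for peer in peers:
            if peer.wid == self.wid:
                continue
            # Shared secret g^{sk_i * sk_j} mod P is symmetric for the pair,
            # so both workers derive the identical mask stream.
            secret = pow(peer.pk, self.sk, P)
            seed = hashlib.sha256(str(secret).encode()).digest()
            sign = 1 if self.wid < peer.wid else -1  # opposite signs cancel
            for k, m in enumerate(mask_stream(seed, len(masked))):
                masked[k] += sign * m
        return masked

workers = [Worker(i, [i + 1, 10 * (i + 1)]) for i in range(3)]
# The server sums the masked models; every pairwise mask cancels exactly.
server_sum = [sum(col) for col in zip(*(w.masked_model(workers) for w in workers))]
true_sum = [sum(w.model[k] for w in workers) for k in range(2)]
assert server_sum == true_sum  # server recovers only the aggregate
```

Note that any single masked model is statistically hidden by the large pseudorandom masks; only the full sum is recoverable. The paper's contribution beyond this basic pattern is making the key generation lightweight under the CDH assumption, without secure pairwise channels.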
