
FedComm: Federated Learning as a Medium for Covert Communication

Authors: Hitaj, Dorjan; Pagnotta, Giulio; Hitaj, Briland; Perez-Cruz, Fernando; Mancini, Luigi V.

Affiliations: Department of Computer Science, Sapienza University of Rome, Italy; Computer Science Laboratory, SRI International, United States; Swiss Data Science Center, Computer Science Department, ETH Zürich, Switzerland

Publication: arXiv

Year: 2022


Subject: Deep neural networks

Abstract: Proposed as a solution to mitigate the privacy implications related to the adoption of deep learning, Federated Learning (FL) enables large numbers of participants to successfully train deep neural networks without revealing the actual private training data. To date, a substantial amount of research has investigated the security and privacy properties of FL, resulting in a plethora of innovative attack and defense strategies. This paper thoroughly investigates the communication capabilities of an FL scheme. In particular, we show that a party involved in the FL training process can use FL as a covert communication medium to send an arbitrary message. We introduce FedComm, a novel covert-communication technique that enables robust sharing and transfer of targeted payloads within the FL framework. Our extensive theoretical and empirical evaluations show that FedComm provides a stealthy communication channel, with minimal disruptions to the training process. Our experiments show that FedComm successfully delivers 100% of a payload on the order of kilobits before the FL procedure converges. Our evaluation also shows that FedComm is independent of the application domain and the neural network architecture used by the underlying FL scheme. © 2022, CC BY-NC-SA.
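The abstract describes model updates in FL being repurposed as a covert channel, but does not specify FedComm's actual encoding scheme. The sketch below is only a minimal illustration of that general idea, under assumed details: a covert sender hides payload bits in the signs of small perturbations added to its update at positions derived from a secret shared with the receiver, and the receiver reads the bits back out of the FedAvg-aggregated model. All function names, the sign-based encoding, and the parameter values are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical sketch of a covert channel in federated averaging.
# NOT FedComm's actual technique (the abstract does not describe it).

SECRET_SEED = 7   # secret shared by covert sender and receiver (assumed)
EPSILON = 1e-2    # perturbation amplitude, chosen >> honest-update noise

def payload_positions(shape, n_bits):
    """Derive the agreed embedding positions from the shared secret."""
    rng = np.random.default_rng(SECRET_SEED)
    return rng.choice(int(np.prod(shape)), size=n_bits, replace=False)

def embed_payload(update, bits):
    """Encode one bit per position: +EPSILON for 1, -EPSILON for 0."""
    flat = update.ravel().copy()
    for pos, b in zip(payload_positions(update.shape, len(bits)), bits):
        flat[pos] += EPSILON if b else -EPSILON
    return flat.reshape(update.shape)

def extract_payload(aggregated, n_bits):
    """Decode bits from the signs of the aggregated parameters."""
    flat = aggregated.ravel()
    return [1 if flat[pos] > 0 else 0
            for pos in payload_positions(aggregated.shape, n_bits)]

# Toy round: nine honest clients send small zero-mean updates, one covert
# sender embeds an 8-bit payload; the server averages all ten (FedAvg).
bits = [1, 0, 1, 1, 0, 0, 1, 0]
noise = np.random.default_rng(0)
honest = [noise.normal(0.0, 1e-4, (8, 8)) for _ in range(9)]
covert = embed_payload(np.zeros((8, 8)), bits)
aggregated = (sum(honest) + covert) / 10.0
assert extract_payload(aggregated, len(bits)) == bits
```

In this toy setting the decode succeeds because the embedded perturbation (EPSILON / 10 after averaging) dominates the mean of the honest clients' noise at the agreed positions; stealthiness and robustness to real training dynamics are exactly what the paper evaluates, and are not captured here.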
