MUD-PQFed: Towards Malicious User Detection in Privacy-Preserving Quantized Federated Learning

Authors: Ma, Hua; Li, Qun; Zheng, Yifeng; Zhang, Zhi; Liu, Xiaoning; Gao, Yansong; Al-Sarawi, Said F.; Abbott, Derek

Affiliations: The School of Electrical and Electronic Engineering, The University of Adelaide, Australia; Data61, CSIRO, Australia; The School of Computer Science and Engineering, Nanjing University of Science and Technology, China; The School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Guangdong, China; The School of Computing Technologies, RMIT University, Australia

Publication: arXiv

Year/Volume/Issue: 2022

Keywords: Sensitive data

Abstract: Federated learning (FL), as a distributed machine learning paradigm, has been adopted to mitigate clients' privacy concerns. Owing to this advantage, institutions with sensitive data (e.g., hospitals) leverage FL to collaboratively train a global model without transmitting their raw data. Despite its attractiveness, various inference attacks can exploit the shared plaintext model updates, which embed traces of clients' private information, causing severe privacy concerns. To alleviate these concerns, cryptographic techniques such as secure multi-party computation and homomorphic encryption have been incorporated into privacy-preserving FL. However, this inevitably exacerbates security concerns when clients are malicious and launch attacks, in particular attacks that corrupt the model and thereby ruin the main incentive of the benign clients, who contribute their computational and communication resources and their local data in order to obtain a better global model. Such security issues in privacy-preserving FL, however, remain under-explored. This work presents the first attempt to elucidate how trivial it is to perform model corruption attacks against lightweight secret-sharing-based privacy-preserving FL. We consider the scenario where model updates are quantized to reduce communication overhead; in this case, the adversary can simply submit local parameters outside the small legitimate range to corrupt the model. We then propose MUD-PQFed, a protocol that can precisely detect the malicious clients who performed the attack and enforce fair punishment. By deleting the contributions of the detected malicious clients, the utility of the global model remains comparable to that of the baseline global model trained in the absence of the attack. Extensive experiments validate the protocol's efficacy in retaining the baseline accuracy and its effectiveness in detecting malicious clients in a fine-grained manner.
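To make the attack surface described in the abstract concrete, here is a minimal sketch (hypothetical, not the authors' implementation; the client count, 8-bit quantization range, and two-server additive secret sharing below are illustrative assumptions) of why an aggregation server that only sees additive shares cannot range-check a client's quantized update, so a single out-of-range submission corrupts the aggregate:

import numpy as np

# Hypothetical sketch, not the paper's protocol. Assumptions (ours, for
# illustration): 8-bit quantized updates, two aggregation servers, and
# additive secret sharing modulo 2**32.
rng = np.random.default_rng(0)
MODULUS = 2 ** 32
NUM_CLIENTS, DIM = 5, 4
LEVELS = 2 ** 8  # legitimate quantized range is [0, 256)

# Four benign clients quantize honestly; the fifth submits values far
# outside the legitimate range.
updates = [rng.integers(0, LEVELS, size=DIM) for _ in range(NUM_CLIENTS - 1)]
updates.append(np.full(DIM, 10 ** 6))

def share(x):
    """Split x into two additive shares mod MODULUS; each share alone is
    uniformly random, so a single server learns nothing about x."""
    s0 = rng.integers(0, MODULUS, size=x.shape)
    s1 = (x - s0) % MODULUS
    return s0, s1

# Each server sums the shares it receives. Neither server can range-check
# the plaintext update, because it only ever sees random-looking shares.
sum0 = np.zeros(DIM, dtype=np.int64)
sum1 = np.zeros(DIM, dtype=np.int64)
for u in updates:
    s0, s1 = share(np.asarray(u, dtype=np.int64))
    sum0 = (sum0 + s0) % MODULUS
    sum1 = (sum1 + s1) % MODULUS

# Reconstructing the aggregate exposes the corruption: every coordinate of
# an honest sum would be below NUM_CLIENTS * LEVELS, yet the malicious
# 10**6 entries dominate the result.
aggregate = (sum0 + sum1) % MODULUS
print("aggregate:", aggregate)                # ~1,000,000+ per coordinate
print("honest bound:", NUM_CLIENTS * LEVELS)  # 1280

Because each share is uniformly random modulo 2**32, any per-share range check at a single server is uninformative; per the abstract, MUD-PQFed's contribution is to detect exactly such out-of-range clients without giving up the privacy guarantee.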
