
TEVA: Training-Efficient and Verifiable Aggregation for Federated Learning for Consumer Electronics in Industry 5.0

Authors: Xia, Yuanjun; Liu, Yining; Chen, Jingxue; Liang, Yangfan; Khan, Fazlullah; Alturki, Ryan; Wang, Xiaopei

Affiliations: Guangxi Key Laboratory of Trusted Software, School of Computer and Information Security, Guilin University of Electronic Technology, Guilin 541004, China; School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou 325027, China; School of Mathematical and Physical Sciences, University of Technology Sydney, NSW 2007, Australia; Provincial Key Laboratory of Multimodal Perceiving and Intelligent Systems, The Key Laboratory of Medical Electronics and Digital Health of Zhejiang Province, The Engineering Research Center of Intelligent Human Health Situation Awareness of Zhejiang Province, Jiaxing University, Jiaxing 314001, China; School of Computer Science, Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315104, Zhejiang, China; Department of Software Engineering, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia; Department of Computer Science and Engineering, University of California, Riverside, CA 92521, United States

Publication: IEEE Transactions on Consumer Electronics (IEEE Trans Consum Electron)

Year/Volume/Issue: 2024


Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]

Keywords: Differential privacy

Abstract: Federated learning (FL) has been widely used for privacy-preserving model updates in Industry 5.0, facilitated by 6G networks. Despite FL's privacy-preserving advantages, it remains vulnerable to attacks where adversaries can infer private data from local models or manipulate the central server (CS) to deliver falsified global models. Current privacy-preserving approaches, primarily based on the FedAvg algorithm, fail to optimize training efficiency for non-independent and identically distributed (non-IID) data. This article proposes training-efficient and verifiable aggregation (TEVA) for FL to resolve these issues. This scheme combines threshold Paillier homomorphic encryption (TPHE), verifiable aggregation, and an optimized double momentum update mechanism (OdMum). TEVA not only leverages TPHE to protect the privacy of local models but also ensures the integrity of the global model through a verifiable aggregation mechanism. Additionally, TEVA integrates the OdMum algorithm to effectively address the challenges posed by non-IID data, promoting rapid model convergence and significantly enhancing overall training efficiency. Security analysis indicates that TEVA meets the requirements for privacy protection. Extensive experimental results demonstrate that TEVA can accelerate model convergence while incurring lower computational and communication overheads. © 1975-2011 IEEE.
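
To make the encrypted-aggregation idea in the abstract concrete, below is a minimal sketch of homomorphic-encryption-based federated averaging in Python. It is an illustration under simplifying assumptions, not the paper's actual protocol: it uses the open-source `phe` (python-paillier) library with standard, non-threshold Paillier encryption, a single key holder stands in for TEVA's threshold decryption, and the verifiable-aggregation step and the OdMum update rule are omitted entirely.

```python
# Sketch: privacy-preserving federated averaging with additive (Paillier)
# homomorphic encryption. Assumption: standard Paillier via the `phe` library,
# not the threshold Paillier (TPHE) or verifiable aggregation used by TEVA.
from phe import paillier

# Key setup. In a threshold scheme the decryption capability would be split
# across several parties; here one private key plays that role.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client's "local model" is reduced to a short list of weights.
client_updates = [
    [0.10, -0.20, 0.30],
    [0.05,  0.15, -0.10],
    [0.20, -0.05, 0.25],
]
num_clients = len(client_updates)

# Clients encrypt their weights before sending them to the central server.
encrypted_updates = [
    [public_key.encrypt(w) for w in update] for update in client_updates
]

# The server adds ciphertexts coordinate-wise. Paillier is additively
# homomorphic, so the server aggregates without seeing any individual update.
encrypted_sum = encrypted_updates[0]
for update in encrypted_updates[1:]:
    encrypted_sum = [acc + c for acc, c in zip(encrypted_sum, update)]

# Decrypt the aggregate and average it to obtain the global update.
global_update = [private_key.decrypt(c) / num_clients for c in encrypted_sum]
print(global_update)
```

In this sketch the server only ever handles ciphertexts, which mirrors the privacy goal stated in the abstract; TEVA additionally lets clients verify that the returned global model was aggregated honestly, a property this simplified example does not provide.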
