Ternary Compression for Communication-Efficient Federated Learning

Authors: Xu, Jinjin; Du, Wenli; Jin, Yaochu; He, Wangli; Cheng, Ran

Author Affiliations: East China Univ Sci & Technol, Minist Educ Key Lab Adv Control & Optimizat Chem Proc, Shanghai 200237, Peoples R China; Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 200092, Peoples R China; Univ Surrey, Dept Comp Sci, Guildford GU2 7XH, Surrey, England; Southern Univ Sci & Technol, Dept Comp Sci & Engn, Guangdong Prov Key Lab Brain Inspired Intelligen, Shenzhen 518055, Peoples R China

Publication: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (IEEE Trans. Neural Networks Learn. Sys.)

Year/Volume/Issue: 2022, Vol. 33, No. 3

Pages: 1162-1176

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: National Natural Science Foundation of China under the Basic Science Center Program; National Natural Science Fund for Distinguished Young Scholars; International (Regional) Cooperation and Exchange Program of the National Natural Science Foundation of China; China Scholarship Council

Keywords: Communication efficiency; deep learning; federated learning; ternary coding

Abstract: Learning over massive data stored in different locations is essential in many real-world applications. However, sharing data is fraught with challenges due to the increasing demands for privacy and security with the growing use of smart mobile and Internet of Things (IoT) devices. Federated learning provides a potential solution to privacy-preserving and secure machine learning by jointly training a global model without uploading the data distributed on multiple devices to a central server. However, most existing work on federated learning adopts machine learning models with full-precision weights, and almost all these models contain a large number of redundant parameters that do not need to be transmitted to the server, incurring excessive communication cost. To address this issue, we propose a federated trained ternary quantization (FTTQ) algorithm, which optimizes the quantized networks on the clients through a self-learning quantization factor. Theoretical proofs are given for the convergence of the quantization factors, the unbiasedness of FTTQ, and a reduced weight divergence. On the basis of FTTQ, we propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems. Empirical experiments are conducted to train widely used deep learning models on publicly available data sets, and our results demonstrate that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data than the canonical federated learning algorithms.
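
The abstract's core idea is that each client transmits ternary-coded weights plus a quantization factor instead of full-precision weights. As a rough illustration of ternary weight coding in general (not the paper's exact FTTQ procedure, which learns the quantization factor during training), the Python sketch below thresholds a weight tensor into {-1, 0, +1} codes paired with a single scaling factor; the threshold rule and the closed-form scaling factor are assumptions borrowed from the broader ternary-quantization literature and are used here only for demonstration.

```python
import numpy as np

def ternarize(weights, delta_ratio=0.05):
    """Illustrative ternary coding of a weight tensor (not the paper's FTTQ).

    Maps full-precision weights to codes in {-1, 0, +1} plus one scaling
    factor alpha, so that alpha * codes approximates the original tensor.
    The threshold (delta_ratio * max|w|) and the mean-magnitude alpha are
    assumed heuristics for demonstration purposes.
    """
    delta = delta_ratio * np.max(np.abs(weights))   # sparsity threshold
    mask = np.abs(weights) > delta                  # positions kept as +/-1
    codes = np.sign(weights) * mask                 # values in {-1, 0, +1}
    alpha = float(np.abs(weights[mask]).mean()) if mask.any() else 0.0
    return alpha, codes.astype(np.int8)

# A client would upload only alpha (one float) and the int8 codes instead
# of 32-bit weights; the server can reconstruct alpha * codes per client
# before averaging, which is the kind of communication saving T-FedAvg targets.
w = np.random.randn(4, 4).astype(np.float32)
alpha, codes = ternarize(w)
print(alpha)
print(codes)
```

In this sketch the per-tensor factor alpha is computed in closed form after training; the FTTQ algorithm described in the abstract instead treats the quantization factor as a self-learned parameter optimized on each client.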
