Author Affiliations: Nanjing Univ Informat Sci & Technol, Jiangsu Key Lab Big Data Anal Technol, Jiangsu Collaborat Innovat Ctr Atmospher Environm, Coll Automat, Nanjing 210044, Peoples R China; Southeast Univ, Sch Cyber Sci & Engn, Nanjing 211189, Peoples R China
Publication: NEUROCOMPUTING
Year/Volume: 2024, Vol. 572
Subject Classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Funding: National Natural Science Foundation of China [U23B2061, 62033010]; Natural Science Foundation of Jiangsu Province, China [BK20200824]; Postgraduate Research and Practice Innovation Program of Jiangsu Province, China [SJCX23 0391]
Keywords: Distributed online optimization; Differential privacy; One-point gradient estimate; Decentralized federated learning
Abstract: This paper focuses on the distributed online optimization problem in multi-agent systems with privacy preservation. Each agent exchanges local information with neighboring agents over strongly connected time-varying directed graphs. Since the process of information transmission is prone to information leakage, a distributed push-sum dual averaging algorithm based on a differential privacy mechanism is proposed to protect the privacy of the data. In addition, to handle situations where the gradient information of the node cost function is unknown, a one-point gradient estimator is designed to approximate the true gradient and guide the update of the decision variables. With an appropriate choice of the step sizes and the exploration parameters, the algorithm effectively protects the privacy of agents while achieving sublinear regret with a convergence rate of O(T^{3/4}). Furthermore, this paper also explores the effect of the one-point estimation parameters on the regret in the online setting and investigates the relation between the convergence of the individual regret and the differential privacy level. Finally, several federated learning experiments were conducted to verify the efficacy of the algorithm.
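The two ingredients named in the abstract, a one-point (bandit) gradient estimate and a differential-privacy perturbation of the information shared between agents, can be illustrated with a minimal sketch. The function names, the Laplace noise mechanism, and the quadratic loss below are illustrative assumptions; the paper's actual push-sum dual averaging update is not reproduced here.

import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    # One-point estimate: sample a direction u uniformly on the unit sphere and
    # return (d / delta) * f(x + delta * u) * u, an unbiased estimate of the
    # gradient of a smoothed version of f (delta is the exploration parameter).
    d = x.shape[0]
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return (d / delta) * f(x + delta * u) * u

def privatize(z, noise_scale, rng):
    # Add Laplace noise to a local state before it is transmitted to neighbors,
    # a standard differential-privacy perturbation (assumed mechanism).
    return z + rng.laplace(scale=noise_scale, size=z.shape)

# Illustrative usage on a simple quadratic loss.
rng = np.random.default_rng(0)
f = lambda x: float(np.dot(x, x))
x = np.ones(5)
g_hat = one_point_gradient_estimate(f, x, delta=0.1, rng=rng)
z_shared = privatize(x, noise_scale=0.5, rng=rng)

In such one-point estimators, a smaller exploration parameter reduces the smoothing bias but increases the variance of the estimate, which is the kind of trade-off the abstract's discussion of exploration parameters and the O(T^{3/4}) regret rate refers to.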