Decentralized stochastic gradient algorithms efficiently solve large-scale finite-sum optimization problems when all agents in the network are reliable. However, most of these algorithms are not resilient to adverse conditions such as malfunctioning agents, software bugs, and cyber attacks. This article addresses a class of general composite optimization problems over multiagent systems (MASs) in the presence of an unknown number of Byzantine agents. Building on a resilient aggregation mechanism and the proximal-gradient mapping method, a Byzantine-resilient decentralized stochastic proximal-gradient algorithmic framework, dubbed Prox-DBRO-VR, is proposed; it achieves the optimization and control goal using only local computations and communications. To asymptotically reduce the noise variance arising from local gradient estimation and to accelerate convergence, two localized variance-reduced (VR) techniques (SAGA and LSVRG) are incorporated into Prox-DBRO-VR, yielding Prox-DBRO-SAGA and Prox-DBRO-LSVRG. By analyzing the contraction relationships among the gradient-learning error, the resilient consensus condition, and the convergence error in a unified theoretical framework, it is proved that both Prox-DBRO-SAGA and Prox-DBRO-LSVRG, with a well-designed constant (resp., decaying) step size, converge linearly (resp., sublinearly) to an error ball around the optimal solution of the original problem under standard assumptions. A tradeoff between convergence accuracy and Byzantine resilience is characterized in both the linear and sublinear cases. Numerical experiments demonstrate the effectiveness and practicability of the proposed algorithms on a decentralized sparse machine-learning problem under various Byzantine attacks.
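To make the ingredients of the framework concrete, the following is a minimal sketch (not the authors' implementation) of one Prox-DBRO-SAGA-style update at a single regular agent. It assumes an l1-regularized least-squares local objective (in the spirit of the sparse machine-learning experiments), uses a coordinate-wise trimmed mean as an illustrative resilient aggregation rule, and introduces hypothetical helper names (`trimmed_mean`, `soft_threshold`, `prox_dbro_saga_step`) that do not appear in the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def trimmed_mean(states, b):
    """Coordinate-wise trimmed mean: drop the b largest and b smallest values
    in each coordinate before averaging. This is one possible resilient
    aggregation rule; the paper's mechanism may differ."""
    stacked = np.sort(np.stack(states, axis=0), axis=0)
    return stacked[b:stacked.shape[0] - b].mean(axis=0)

def prox_dbro_saga_step(x, neighbor_states, A, y, grad_table, grad_avg,
                        step_size, l1_weight, trim_b, rng):
    """One local iteration at a regular agent for the illustrative objective
    f_i(x) = (1/(2n)) * ||A x - y||^2 + l1_weight * ||x||_1:
    resilient aggregation + SAGA gradient estimate + proximal-gradient step."""
    n = A.shape[0]
    # 1) Resilient aggregation of the agent's own and its neighbors' iterates.
    x_agg = trimmed_mean(neighbor_states + [x], trim_b)
    # 2) SAGA-style variance-reduced stochastic gradient of the smooth part.
    j = rng.integers(n)
    g_new = A[j] * (A[j] @ x - y[j])           # fresh component gradient
    v = g_new - grad_table[j] + grad_avg       # SAGA estimator
    grad_avg = grad_avg + (g_new - grad_table[j]) / n
    grad_table[j] = g_new                      # update stored gradient
    # 3) Proximal-gradient (soft-thresholding) step from the aggregated point.
    x_next = soft_threshold(x_agg - step_size * v, step_size * l1_weight)
    return x_next, grad_table, grad_avg
```

The LSVRG variant would replace the stored gradient table with a periodically refreshed full local gradient at a reference point; both estimators drive the local gradient-learning error to zero, which is what enables the linear rate with a constant step size.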