Author affiliations: Semnan Univ, Fac Mech Engn, Semnan ***, Iran; Tufts Univ, Dept Elect & Comp Engn, Medford, MA 02155, USA; Sharif Univ Technol, Comp Engn Dept, Tehran ***, Iran
Publication: IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (IEEE Trans. Autom. Sci. Eng.)
Year/Volume: 2025, Vol. 22
Pages: 10948-10959
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]
Funding: Tufts University
Keywords: Quantization (signal); Convergence; Optimization; Cost function; Costs; Heuristic algorithms; Distributed databases; Machine learning algorithms; Ad hoc networks; Training; Distributed algorithm; data classification; quantization; graph theory; optimization
Abstract: Decentralized strategies are of interest for learning from large-scale data over networks. This paper studies learning over a network of geographically distributed nodes/agents subject to quantization. Each node possesses a private local cost function, and these collectively form a global cost function that the considered methodology aims to minimize. In contrast to many existing works, the information exchanged among nodes is log-quantized to address the limited network bandwidth of practical settings. We consider a first-order, computationally efficient distributed optimization algorithm (with no extra inner consensus loop) that leverages node-level gradient correction based on local data and network-level gradient aggregation only over nearby nodes. The method requires only balanced networks, with no need for stochastic weight design, and can handle log-scale quantized data exchange over possibly time-varying and switching network setups. We study convergence over both structured networks (for example, training over data centers) and ad-hoc multi-agent networks (for example, training over dynamic robotic networks). Through experimental validation, we show that (i) structured networks generally result in a smaller optimality gap, and (ii) log-scale quantization leads to a smaller optimality gap compared to uniform quantization.

Note to Practitioners: Motivated by recent developments in cloud computing, parallel processing, and the availability of low-cost CPUs and communication networks, this paper considers distributed and decentralized algorithms for machine learning and optimization. These algorithms are particularly relevant for decentralized data mining, where data sets are distributed across a network of computing nodes; a practical example is image classification over a networked data center. In real-world scenarios, practical model nonlinearities such as data quantization must be addressed for information exchange among the computing nodes.
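The log-scale quantizer referenced in the abstract maps each value's magnitude onto a geometric grid, so small values are resolved more finely than large ones; this is the usual intuition for why it distorts near-optimal gradients less than uniform quantization. Below is a minimal sketch of such a quantizer; the function name log_quantize and the level parameter rho are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def log_quantize(x, rho=0.95):
    """Logarithmically quantize each entry of x (illustrative sketch).

    Maps |x| onto the geometric grid {rho**k : k integer} by rounding
    log_rho(|x|) to the nearest integer, preserving sign. Zeros pass
    through unchanged. `rho` in (0, 1) controls grid density and is an
    assumed parameter name, not taken from the paper.
    """
    x = np.asarray(x, dtype=float)
    sign = np.sign(x)
    mag = np.abs(x)
    out = np.zeros_like(x)
    nz = mag > 0
    out[nz] = sign[nz] * rho ** np.round(np.log(mag[nz]) / np.log(rho))
    return out
```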
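The first-order method the abstract describes (node-level gradient correction plus network-level aggregation over nearby nodes, no inner consensus loop) is in the family of gradient-tracking schemes. The sketch below, reusing log_quantize from above, shows one plausible form of such an update with quantized exchange, assuming a balanced weight matrix W; the paper's exact update rule, weight conditions, and step-size choices may differ.

```python
import numpy as np

def quantized_gradient_tracking(grads, W, x0, alpha=0.01, iters=500, rho=0.95):
    """Sketch of a gradient-tracking style distributed method with
    log-quantized exchange (assumed form, not the paper's exact update).

    grads : list of callables, grads[i](x) = gradient of node i's local cost
    W     : (n, n) balanced weight matrix over the network graph
    x0    : (n, d) initial states, one row per node
    """
    n, d = x0.shape
    x = x0.copy()
    # The tracker y_i estimates the average of the local gradients.
    y = np.stack([grads[i](x[i]) for i in range(n)])
    g_old = y.copy()
    for _ in range(iters):
        qx = log_quantize(x, rho)   # nodes exchange only quantized states...
        qy = log_quantize(y, rho)   # ...and quantized gradient trackers
        x = W @ qx - alpha * y      # consensus step + descent along local tracker
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        y = W @ qy + g_new - g_old  # node-level gradient correction
        g_old = g_new
    return x

# Toy usage: each node holds f_i(x) = 0.5*||x - b_i||^2, so the global
# minimizer is the mean of the b_i. Fully connected, balanced weights.
n, d = 8, 3
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.full((n, n), 1.0 / n)
x_final = quantized_gradient_tracking(grads, W, np.zeros((n, d)))
```

Note that only the exchanged quantities (qx, qy) are quantized; each node's descent direction uses its own unquantized tracker, since quantization models the communication channel, not local computation.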