
Log-Scale Quantization in Distributed First-Order Methods: Gradient-Based Learning From Distributed Data

Authors: Doostmohammadian, Mohammadreza; Qureshi, Muhammad I.; Khalesi, Mohammad Hossein; Rabiee, Hamid R.; Khan, Usman A.

Author affiliations: Semnan Univ, Fac Mech Engn, Semnan ***, Iran; Tufts Univ, Dept Elect & Comp Engn, Medford, MA 02155, USA; Sharif Univ Technol, Comp Engn Dept, Tehran ***, Iran

Published in: IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING (IEEE Trans. Autom. Sci. Eng.)

Year/Volume: 2025, Vol. 22

Pages: 10948-10959


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]

Funding: Tufts University

Keywords: Quantization (signal); Convergence; Optimization; Cost function; Costs; Heuristic algorithms; Distributed databases; Machine learning algorithms; Ad hoc networks; Training; Distributed algorithms; data classification; quantization; graph theory; optimization

Abstract: Decentralized strategies are of interest for learning from large-scale data over networks. This paper studies learning over a network of geographically distributed nodes/agents subject to quantization. Each node possesses a private local cost function, collectively contributing to a global cost function, which the considered methodology aims to minimize. In contrast to many existing papers, the information exchange among nodes is log-quantized to address limited network bandwidth in practical situations. We consider a first-order, computationally efficient distributed optimization algorithm (with no extra inner consensus loop) that leverages node-level gradient correction based on local data and network-level gradient aggregation only over nearby nodes. This method only requires balanced networks, with no need for stochastic weight design, and can handle log-scale quantized data exchange over possibly time-varying and switching network setups. We study convergence over both structured networks (for example, training over data centers) and ad-hoc multi-agent networks (for example, training over dynamic robotic networks). Through experimental validation, we show that (i) structured networks generally result in a smaller optimality gap, and (ii) log-scale quantization leads to a smaller optimality gap compared to uniform quantization.

Note to Practitioners: Motivated by recent developments in cloud computing, parallel processing, and the availability of low-cost CPUs and communication networks, this paper considers distributed and decentralized algorithms for machine learning and optimization. These algorithms are particularly relevant for decentralized data mining, where data sets are distributed across a network of computing nodes. A practical example of this is the classification of images over a networked data center. In real-world scenarios, practical model nonlinearities such as data quantization must be addressed for information exchange among the computing nodes.
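The abstract describes log-scale (logarithmic) quantization of the values exchanged between nodes in a first-order distributed method. The Python sketch below is only an illustration of that idea, not the paper's algorithm: a generic decentralized gradient-descent loop over a ring network in which each node sees only log-quantized copies of its neighbors' states. The quadratic local costs, the mixing matrix W, the step size alpha, and the quantizer granularity delta are hypothetical choices made for this example.

# Minimal illustrative sketch (not the paper's exact method): decentralized
# gradient descent over a ring network with log-scale quantized state exchange.
import numpy as np

def log_quantize(x, delta=0.1, eps=1e-12):
    """Elementwise log-scale quantization: round log|x| to a grid of width `delta`."""
    x = np.asarray(x, dtype=float)
    mag = np.abs(x)
    return np.where(
        mag > eps,
        np.sign(x) * np.exp(np.round(np.log(np.maximum(mag, eps)) / delta) * delta),
        0.0,
    )

def run_demo(n_nodes=8, dim=3, iters=300, alpha=0.05, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    b = rng.normal(size=(n_nodes, dim))   # local costs f_i(x) = 0.5 * ||x - b_i||^2
    x = np.zeros((n_nodes, dim))          # local estimates, one row per node
    x_star = b.mean(axis=0)               # minimizer of the global (summed) cost

    # Doubly stochastic mixing matrix for an undirected ring (a balanced network).
    W = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        W[i, i] = 0.5
        W[i, (i - 1) % n_nodes] = 0.25
        W[i, (i + 1) % n_nodes] = 0.25

    for _ in range(iters):
        x_q = log_quantize(x, delta)       # neighbors only receive quantized states
        grad = x - b                       # local gradients of the quadratic costs
        x = W @ x_q - alpha * grad         # consensus on quantized values + local step

    gap = np.linalg.norm(x - x_star, axis=1).max()
    print(f"max distance to the global minimizer after {iters} iterations: {gap:.4f}")

if __name__ == "__main__":
    run_demo()

Because the quantizer's relative error is bounded (grid spacing is uniform in the logarithm of the magnitude), the iterates in this toy setup settle into a neighborhood of the global minimizer whose size shrinks as delta decreases, which is the qualitative behavior the abstract contrasts with uniform quantization.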
