This article focuses on distributed nonconvex optimization, in which agents exchange information to minimize the average of local nonconvex cost functions. The communication channel between agents is typically constrained by limited bandwidth, and gradient information is often unavailable. To overcome these limitations, we propose a quantized distributed zeroth-order algorithm that integrates a deterministic gradient estimator, a standard uniform quantizer, and the distributed gradient tracking algorithm. We establish linear convergence of the proposed algorithm to a global optimal point under the Polyak-Łojasiewicz condition on the global cost function and a smoothness condition on the local cost functions. Moreover, the proposed algorithm maintains linear convergence at low data rates with a proper selection of algorithm parameters. Numerical simulations validate the theoretical results.
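To make the three ingredients concrete, the following minimal Python sketch combines a coordinate-wise finite-difference estimator, a uniform quantizer, and gradient tracking over a doubly stochastic mixing matrix. Function and parameter names (zo_gradient, quantize, eta, step) are illustrative assumptions; the paper's exact estimator, quantizer encoding, and step-size conditions may differ.

```python
import numpy as np

def zo_gradient(f, x, delta=1e-4):
    # Deterministic zeroth-order estimate: coordinate-wise central
    # differences, costing 2*d function evaluations per call.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        g[i] = (f(x + e) - f(x - e)) / (2.0 * delta)
    return g

def quantize(v, step=0.01):
    # Standard uniform quantizer; in practice only the integer indices
    # round(v / step) are transmitted over the limited-bandwidth channel.
    return step * np.round(v / step)

def quantized_zo_gradient_tracking(fs, W, x0, eta=0.02, step=0.01, iters=500):
    # fs: local cost functions; W: doubly stochastic mixing matrix (n x n).
    n = len(fs)
    x = [x0.copy() for _ in range(n)]
    g = [zo_gradient(fs[i], x[i]) for i in range(n)]
    y = [gi.copy() for gi in g]  # trackers of the network-average gradient
    for _ in range(iters):
        qx = [quantize(xi, step) for xi in x]  # neighbors receive quantized states
        qy = [quantize(yi, step) for yi in y]
        x_new = [sum(W[i][j] * qx[j] for j in range(n)) - eta * y[i]
                 for i in range(n)]
        g_new = [zo_gradient(fs[i], x_new[i]) for i in range(n)]
        y = [sum(W[i][j] * qy[j] for j in range(n)) + g_new[i] - g[i]
             for i in range(n)]
        x, g = x_new, g_new
    return x
```

The tracker update for y uses the standard dynamic-average-consensus correction (g_new minus g), which is what allows gradient tracking methods to retain linear convergence even when the exchanged states and trackers are quantized.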
This article addresses the distributed aggregative optimization (DAO) problem via compressed gradient tracking algorithms, where the cost function of each agent depends on an aggregate of the other agents' decisions as well as its own decision. To this end, a new distributed aggregative gradient tracking algorithm with compressed communication is developed, built on the gradient tracking algorithm and communication-compression techniques. For a time-invariant balanced graph, it is shown that the proposed algorithm achieves a linear convergence rate (in the mean-square-error sense) when the cost functions are strongly convex and smooth. Furthermore, the result is extended to the more general case of time-varying graphs. Specifically, it is proven that the developed algorithm converges linearly to the optimal solution of the DAO problem (in the mean-square-error sense) provided that the time-varying balanced graph is jointly strongly connected and some suitable conditions are satisfied. Numerical simulations on an optimal placement problem validate the theoretical results.
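For context, the standard DAO formulation reads as follows (the notation is an assumption here, since the abstract does not fix symbols):

```latex
\min_{x=(x_1,\dots,x_n)} \ \frac{1}{n}\sum_{i=1}^{n} f_i\bigl(x_i,\ \sigma(x)\bigr),
\qquad
\sigma(x) = \frac{1}{n}\sum_{j=1}^{n} \phi_j(x_j),
```

where agent i knows only its local cost f_i and local aggregation map phi_i. Because neither the aggregate sigma(x) nor the global gradient term involving all agents' partial gradients with respect to sigma is locally available, both must be tracked over the network, and it is these tracking variables whose transmissions the algorithm compresses.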
In this paper, we study the distributed nonconvex optimization problem, aiming to minimize the average value of the local nonconvex cost functions using local information exchange. To reduce the communication overhead, we introduce three general classes of compressors: compressors with bounded relative compression error, compressors with globally bounded absolute compression error, and compressors with locally bounded absolute compression error. By integrating each class with the distributed gradient tracking algorithm, we then propose three corresponding compressed distributed nonconvex optimization algorithms. Motivated by the state-of-the-art BEER algorithm proposed in Zhao et al. (2022), an efficient compressed algorithm integrating gradient tracking with biased and contractive compressors, our first proposed algorithm extends BEER to accommodate both biased and non-contractive compressors. For each algorithm, we design a novel Lyapunov function to demonstrate its sublinear convergence to a stationary point when the local cost functions are smooth. Furthermore, when the global cost function satisfies the Polyak-Łojasiewicz (P-Ł) condition, we show that our proposed algorithms converge linearly to a global optimal point. It is worth noting that, for compressors with bounded relative compression error and globally bounded absolute compression error, our proposed algorithms' parameters do not require prior knowledge of the P-Ł constant.
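Illustrative members of each compressor class are sketched below in Python. The paper's class definitions are more general; the operators and error bounds here are only the standard ones for these specific examples.

```python
import numpy as np

def top_k(v, k=3):
    # Bounded relative compression error (contractive): keeping the k
    # largest-magnitude entries of v in R^d satisfies
    # ||C(v) - v||^2 <= (1 - k/d) * ||v||^2 for all v.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def uniform_quantizer(v, step=0.01):
    # Globally bounded absolute compression error:
    # ||C(v) - v||_inf <= step / 2 uniformly over all of R^d.
    return step * np.round(v / step)

def saturating_quantizer(v, step=0.01, vmax=10.0):
    # Locally bounded absolute compression error: the bound
    # ||C(v) - v||_inf <= step / 2 holds only while ||v||_inf <= vmax;
    # the quantizer saturates outside that range.
    return step * np.round(np.clip(v, -vmax, vmax) / step)
```

Note that top_k is biased but contractive, while the quantizers are non-contractive for small inputs, which is why accommodating both behaviors within one gradient tracking framework requires separate analyses.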
Almost all systems are inevitably subject to uncertainties or disturbances from the external environment in practical applications. Taking these factors into consideration, this paper proposes a distributed algorithm with state noise and gradient disturbance for solving a distributed optimization problem with a closed convex set constraint over a multi-agent system on a weight-balanced graph. Based on gradient tracking and projection methods, the proposed algorithm improves the convergence rate by introducing a projection error term and an auxiliary parameter. In contrast to some existing constrained distributed gradient algorithms, the proposed one achieves faster convergence. Two simulation examples illustrate the algorithm's effectiveness and robustness.
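A simplified Python sketch of projected gradient tracking under additive noise is given below. The box projection, the Gaussian noise model, and all names are assumptions for illustration; in particular, the paper's projection error term and auxiliary parameter (the ingredients credited with the faster convergence) are omitted here.

```python
import numpy as np

def project(v, lo=-1.0, hi=1.0):
    # Euclidean projection onto a box, standing in for a general
    # closed convex constraint set.
    return np.clip(v, lo, hi)

def noisy_projected_tracking(grads, W, x0, eta=0.05, noise=0.01,
                             iters=300, seed=0):
    # grads: local gradient oracles; W: mixing matrix of a weight-balanced
    # graph. State noise and gradient disturbance are modeled as additive
    # Gaussian terms.
    rng = np.random.default_rng(seed)
    n, d = len(grads), x0.size
    x = [x0.copy() for _ in range(n)]
    g = [grads[i](x[i]) + noise * rng.standard_normal(d) for i in range(n)]
    y = [gi.copy() for gi in g]  # trackers of the average (disturbed) gradient
    for _ in range(iters):
        x_new = [project(sum(W[i][j] * x[j] for j in range(n)) - eta * y[i])
                 + noise * rng.standard_normal(d)   # state noise
                 for i in range(n)]
        g_new = [grads[i](x_new[i])
                 + noise * rng.standard_normal(d)   # gradient disturbance
                 for i in range(n)]
        y = [sum(W[i][j] * y[j] for j in range(n)) + g_new[i] - g[i]
             for i in range(n)]
        x, g = x_new, g_new
    return x
```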