This paper addresses the distributed optimization problem of discrete-time multiagent systems with nonconvex control input constraints and switching topologies. We introduce a novel distributed optimization algorithm with a switching mechanism to guarantee that all agents eventually converge to an optimal solution point, while their control inputs are constrained in their own nonconvex sets. It is worth noting that the mechanism is designed to tackle the coexistence of the nonconvex constraint operator and the optimization gradient term. Based on the dynamic transformation technique, the original nonlinear dynamic system is transformed into an equivalent one with a nonlinear error term. By utilizing nonnegative matrix theory, it is shown that the optimization problem is solved when the union of the switching communication graphs is jointly strongly connected. Finally, a numerical simulation example is used to demonstrate the acquired theoretical results.
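The coexistence the abstract points to — a consensus (mixing) step, a diminishing gradient step, and an operator that forces each input into a nonconvex set — can be illustrated with a minimal scalar sketch. This is not the paper's algorithm; the quadratic costs, the annulus-with-zero constraint set, the two alternating graphs, and the step sizes are all assumptions for illustration:

```python
import math

def metropolis(edges, n):
    """Doubly stochastic mixing weights for an undirected graph."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        w = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][j] = W[j][i] = w
    for i in range(n):
        W[i][i] = 1.0 - sum(W[i])
    return W

def constrain(d, r=1e-6, R=0.5):
    """Map a desired input into the nonconvex set {0} U {u : r <= |u| <= R}."""
    if abs(d) < r:
        return 0.0
    return math.copysign(min(abs(d), R), d)

n, a = 4, [1.0, 2.0, 3.0, 4.0]             # f_i(x) = (x - a_i)^2, optimum 2.5
graphs = [metropolis([(0, 1), (2, 3)], n),  # neither graph is connected alone,
          metropolis([(1, 2), (3, 0)], n)]  # but their union is a ring
x = [0.0] * n
for k in range(4000):
    W = graphs[k % 2]                       # switching communication topology
    mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    alpha = 0.5 / (k + 2)                   # diminishing step size
    x = [mixed[i] + constrain(-alpha * 2 * (x[i] - a[i])) for i in range(n)]
```

Because the union of the two alternating graphs is connected, the agents still agree on (approximately) the minimiser 2.5 of the summed costs even though no single graph is connected.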
In the past decade, multi-robot collaborative object transport has garnered significant attention, with the majority of research targeting transport strategies. This study recasts the collaborative object-lifting challenge as an optimization problem. In this setup, each robot uses a local evaluation function to determine its lifting location, and the robots collectively optimize a unified evaluation function. A coupled equality constraint is embedded in the optimization problem to ensure that the system's mass center remains stable throughout the lifting process. We further impose local feasibility constraints that restrict the optimal lifting location to a specified region. The study introduces several algorithms, differentiated by the constraints applied to robot velocity, with which robots can autonomously determine the lifting location that best satisfies the predetermined criteria. The method requires a robot to exchange auxiliary variables only with its immediate neighbors. Notably, parameters such as location, velocity, and mass are accessed locally, preserving data privacy and reducing the communication burden. The paper concludes with a rigorous mathematical proof of asymptotic convergence to the exact optimal lifting location, supported by numerical simulations that attest to the effectiveness of the proposed algorithms.
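The structure described above — local costs coupled by a mass-centre equality constraint — can be sketched as a centralized projected gradient iteration. The masses, weights, and local target locations below are made-up illustrative values, and the projection step stands in for the distributed coordination the paper actually develops:

```python
# minimise sum_i w_i * (p_i - q_i)^2  subject to  sum_i m_i * p_i = C
w = [1.0, 2.0, 4.0]      # local cost weights (assumed)
m = [1.0, 2.0, 3.0]      # robot masses (assumed)
q = [0.0, 1.0, 2.0]      # each robot's preferred lifting location (assumed)
C = 10.0                 # required weighted sum fixing the mass centre

p = q[:]                 # start from the unconstrained optimum
mm = sum(mi * mi for mi in m)
for _ in range(300):
    # gradient step on the separable local costs
    p = [p[i] - 0.1 * 2 * w[i] * (p[i] - q[i]) for i in range(3)]
    # Euclidean projection back onto the affine set sum m_i p_i = C
    r = (sum(m[i] * p[i] for i in range(3)) - C) / mm
    p = [p[i] - r * m[i] for i in range(3)]
```

At the fixed point the gradients 2*w_i*(p_i - q_i) are proportional to the masses m_i, which is exactly the KKT condition of the equality-constrained problem.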
This article focuses on developing distributed optimization strategies for a class of machine learning problems over a directed network of computing agents. In these problems, the global objective function is the sum of local objective functions, each of which is convex and known only to the corresponding computing agent. A second-order Nesterov accelerated dynamical system with a time-varying damping coefficient is developed to address such problems. To handle the constraints effectively, the projected primal-dual method is incorporated into the Nesterov accelerated system. By means of the cocoercive maximal monotone operator, it is shown that the trajectories of the Nesterov accelerated dynamical system reach consensus at the optimal solution, provided that the damping coefficient and gains meet technical conditions. Finally, the theoretical results are validated on an email classification problem and a logistic regression problem in machine learning.
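As a rough, centralized illustration of the accelerated dynamics (without the projection and primal-dual machinery the article adds), a Nesterov iteration with the (k-1)/(k+2) momentum induced by 3/t-type damping can be applied to a tiny logistic regression problem; the data set and ridge weight below are assumptions:

```python
import math

# tiny 1-D logistic regression:
#   f(w) = sum_i log(1 + exp(-b_i * w * a_i)) + (lam/2) * w**2
A = [1.0, 2.0, -1.0, -2.0]   # features (assumed data)
B = [1.0, 1.0, -1.0, -1.0]   # labels
lam = 0.1                    # ridge weight

def grad(w):
    g = lam * w
    for a, b in zip(A, B):
        g += -b * a / (1.0 + math.exp(b * w * a))
    return g

s = 0.2                      # step size, about 1/L with L = sum(a_i^2)/4 + lam
w_prev = w = 0.0
for k in range(1, 1000):
    y = w + (k - 1) / (k + 2) * (w - w_prev)   # momentum from 3/t damping
    w_prev, w = w, y - s * grad(y)
```

The (k-1)/(k+2) coefficient is the standard discretization of the continuous-time damping 3/t; the article's time-varying damping generalises this choice.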
Distributed optimization is a powerful paradigm for solving various machine learning problems over networked systems. Existing first-order optimization methods perform cheap gradient descent steps by exchanging information per iteration only with single-hop neighbours in a network. However, in many agent networks, such as sensor and robotic networks, it is common for each agent to interact with other agents over multi-hop communication. Distributed optimization over multi-hop networks is therefore an important but overlooked topic that clearly needs to be developed. Motivated by this observation, in this paper we apply multi-hop transmission to the well-known distributed gradient descent (DGD) method and propose two typical versions (i.e., consensus and diffusion) of the multi-hop DGD method, which we call CM-DGD and DM-DGD, respectively. Theoretically, we present convergence guarantees for the proposed methods under mild assumptions. Moreover, we show that the multi-hop strategy is more likely to improve the spectral gap of the underlying network, which has been shown to be a critical factor in the performance of distributed optimization, and thus achieves better convergence metrics. Experimentally, two distributed machine learning problems are used to verify the theoretical analysis and show the effectiveness of CM-DGD and DM-DGD on synthetic and real data sets.
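A small sketch can make the spectral-gap point concrete: for a doubly stochastic matrix W, two-hop mixing uses W², whose second eigenvalue is the square of that of W, so the gap 1 - λ₂ widens. The path graph, Metropolis weights, and quadratic costs below are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

# Metropolis weights for a 5-node path graph
n = 5
deg = [1, 2, 2, 2, 1]
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0 / (1 + max(deg[i], deg[i + 1]))
W += np.diag(1.0 - W.sum(axis=1))

lam2 = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]
# two-hop mixing W @ W has second eigenvalue lam2**2 < lam2,
# so the spectral gap 1 - lam2 grows

# consensus-style multi-hop DGD: mix over m = 2 hops, then a gradient step
a = np.arange(n, dtype=float)   # f_i(x) = (x - a_i)^2, optimum = mean(a) = 2
x = np.zeros(n)
W2 = W @ W
for k in range(4000):
    x = W2 @ x - (0.5 / (k + 2)) * 2 * (x - a)
```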
This paper considers distributed optimization for learning problems over networks with heterogeneous agents having different computational capabilities. The heterogeneity of computational capabilities implies that a subset of the agents may run computationally intensive learning algorithms, such as Newton's method or full gradient descent, while the other agents can only run lower-complexity algorithms such as stochastic gradient descent. This leads to opportunities for designing hybrid distributed optimization algorithms that rely on cooperation among the network agents to enhance overall performance, improve the rate of convergence, and reduce the communication overhead. We show in this work that hybrid learning with cooperation among heterogeneous agents attains a stable solution. For small step sizes mu, the proposed approach leads to a small estimation error on the order of O(mu). We also provide a theoretical analysis of the stability of the first-, second-, and fourth-order error moments for learning over networks with heterogeneous agents. Finally, results are presented and analyzed for case-study scenarios to demonstrate the effectiveness of the proposed approach.
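A hedged sketch of the hybrid idea, with assumed data and a ring topology: two "powerful" agents take full-batch gradient steps, two "light" agents take single-sample stochastic steps, and all agents then combine with their neighbours (a diffusion-style adapt-then-combine step):

```python
import random
random.seed(0)

# each of 4 agents holds 50 samples of d = 2*h + small noise (assumed model)
W0 = 2.0
data = [[(h, W0 * h + 0.1 * random.gauss(0, 1))
         for h in (random.gauss(0, 1) for _ in range(50))]
        for _ in range(4)]

# ring combination: equal weights over self and the two neighbours
neigh = {0: [3, 0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3, 0]}

mu = 0.05
w = [0.0] * 4
for _ in range(1500):
    psi = []
    for i in range(4):
        if i < 2:   # "powerful" agents: full-batch gradient of the LS cost
            g = sum(2 * h * (h * w[i] - d) for h, d in data[i]) / len(data[i])
        else:       # "light" agents: single-sample stochastic gradient
            h, d = random.choice(data[i])
            g = 2 * h * (h * w[i] - d)
        psi.append(w[i] - mu * g)          # adapt
    w = [sum(psi[j] for j in neigh[i]) / 3 for i in range(4)]  # combine
```

With a small constant step size mu, all estimates hover within an O(mu) neighbourhood of the true parameter, which is the flavour of the stability result stated in the abstract.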
In this study, we propose a predefined-time multi-agent approach for multi-objective optimization. Predefined-time optimization is an approach that converges to a state extremely close to an optimal solution at a prescribed time. A time-base generator is derived and applied to the optimization approaches to achieve predefined-time optimization. The multi-objective optimization problem is reformulated as a distributed optimization problem and thus solved in a private and safe manner. For distributed optimization, a multi-agent system with time-base generators is developed for predefined-time optimization, and its convergence and convergence speed are proven. Several examples confirm the validity of the results.
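A time-base generator can be sketched in one dimension: a smooth profile ξ(t) rises from 0 to 1 over a prescribed window [0, T], and the gain ξ̇/(1 - ξ + ε) drives a gradient flow so that the error is scaled down to roughly a factor ε of its initial value at t = T. The quintic ξ and all constants below are illustrative assumptions:

```python
# predefined-time gradient flow driven by a time-base generator (TBG)
T, eps, dt = 1.0, 1e-3, 1e-3

def xi(s):                    # smooth 0 -> 1 profile, zero slope at both ends
    return 10 * s**3 - 15 * s**4 + 6 * s**5

def dxi_dt(s):                # d(xi)/dt for s = t/T
    return (30 * s**2 - 60 * s**3 + 30 * s**4) / T

x, target = 0.0, 5.0          # minimise f(x) = (x - target)**2 / 2
t = 0.0
while t < T:
    s = t / T
    kappa = dxi_dt(s) / (1.0 - xi(s) + eps)   # TBG gain, grows as t -> T
    x += dt * (-kappa * (x - target))         # Euler step of the gradient flow
    t += dt
```

Along the flow the error satisfies e(t) = e(0) * (1 - ξ(t) + ε)/(1 + ε), so the residual at t = T is about ε times the initial error, independent of the initial condition.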
This article studies two fundamental problems in power systems: 1) the economic dispatch problem (EDP) and 2) load shedding. In particular, convex optimization problems are formulated for both the EDP and the load shedding problem. For the EDP, an extension of the problem considering transmission losses is presented. Furthermore, emphasis is placed on scheduling the load shedding when there are priorities among the loads. To solve the EDP and the load shedding problem in a distributed setting, we develop a method that combines the dual decomposition approach and the extragradient-based strategy. Notably, this work provides a fixed-step-size scheme for a strongly convex resource allocation problem with general nonaffine coupled constraints. We show that the proposed algorithm converges to the optimal solution under the assumption that the underlying graph is undirected. In addition, the method has an ergodic convergence rate of O(1/k) in terms of the optimality residuals and the constraint violations. Simulation results are presented to demonstrate the effectiveness of the proposed optimization problems and distributed algorithm.
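The dual-decomposition part of the method can be sketched on a toy dispatch problem: given a price λ, each generator solves its own box-constrained subproblem, and λ is updated with an extragradient (predictor-corrector) step on the dual function. The quadratic costs, demand, and bounds below are assumptions, and this centralized sketch omits the distributed agreement on λ:

```python
# toy economic dispatch: minimise sum a_i * p_i^2  s.t.  sum p_i = D, 0 <= p_i <= 4
a = [1.0, 2.0, 4.0]   # generator cost coefficients (assumed)
D = 5.0               # total demand (assumed)

def p_of(lmbda):
    # per-generator best response to the price lambda (closed form + clipping)
    return [min(max(lmbda / (2 * ai), 0.0), 4.0) for ai in a]

def g(lmbda):
    # gradient of the dual function: demand minus supplied power
    return D - sum(p_of(lmbda))

s, lmbda = 0.5, 0.0
for _ in range(200):
    mid = lmbda + s * g(lmbda)       # extragradient predictor
    lmbda = lmbda + s * g(mid)       # corrector, evaluated at the predicted point
p = p_of(lmbda)
```

At convergence the supplied powers exactly meet the demand, and cheaper generators (smaller a_i) carry proportionally more load.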
In this paper, distributed optimization for a class of interconnected nonlinear multi-agent systems is considered via output feedback control. The problem is challenging because only the system's output is available. The system's states and the optimal values are estimated by two reduced-order filters and an optimal observer, respectively. The final optimization algorithm is then designed using a backstepping method, and the convergence analysis is carried out using Lyapunov stability theory and perturbation system theory. Several examples are presented to illustrate the effectiveness of the designed algorithm.
ISBN (print): 9781665454681
This paper studies distributed optimization of an uplink cell-free massive MIMO (CF-mMIMO) network. Observing some interesting analogies between the CF network and an artificial neural network (ANN), we propose to relate the uplink CF network to a so-called quasi-neural network. Borrowing the idea of the back-propagation (BP) algorithm, we propose a novel scheme to optimize the central processing unit (CPU) and the access points (APs) of the network. The proposed scheme can achieve multi-AP cooperation using only the pilot sequences, without channel state information (CSI). To reduce the required fronthaul throughput, we let each AP beamform the received vector signals into scalar ones before passing them to the CPU. The effectiveness of the proposed scheme is verified by simulations.
Inspired and underpinned by the idea of integral feedback, a distributed constant-gain algorithm is proposed for multiagent networks to solve convex optimization problems with local linear constraints. Assuming that agent interactions are modeled by an undirected graph, the algorithm achieves the optimal solution with an exponential convergence rate. Furthermore, owing to the beneficial integral feedback, the proposed algorithm has modest communication bandwidth requirements and good robustness against disturbances. Both analytical proofs and numerical simulations are provided to validate the effectiveness of the proposed distributed algorithm in solving constrained optimization problems.
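The integral-feedback idea can be sketched with a standard proportional-integral consensus-optimization flow (an unconstrained simplification, not the paper's algorithm): each agent descends its local gradient while an integral state accumulates disagreement and cancels the steady-state gradient mismatch. The ring graph and quadratic costs are assumptions:

```python
import numpy as np

# ring of 4 agents, f_i(x) = (x - a_i)^2, global optimum = mean(a) = 2.5
a = np.array([1.0, 2.0, 3.0, 4.0])
L = np.array([[ 2, -1,  0, -1],      # graph Laplacian of the 4-cycle
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

h = 0.02                             # Euler step
x = np.zeros(4)
v = np.zeros(4)                      # integral state accumulating disagreement
for _ in range(10000):
    grad = 2 * (x - a)
    x_dot = -grad - L @ x - L @ v    # proportional + integral feedback
    v_dot = L @ x
    x, v = x + h * x_dot, v + h * v_dot
```

At equilibrium L @ x = 0 forces consensus, and summing the x-dynamics over agents forces the total gradient to vanish, so the common value is the global minimiser; the constant gains deliver the exponential rate the abstract describes.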