The output feedback distributed optimization problem is studied for higher-order multi-agent systems with uncertain nonlinearities, which are assumed to satisfy linear growth conditions. The agents' dynamics are permitted to be heterogeneous, with different nonlinearities. The problem is solved by output feedback distributed optimization algorithms based on the embedded control approach. First, a first-order optimal signal generator is designed whose outputs converge to the minimizer of the global cost function. Second, by embedding the generator into the feedback loop and taking its outputs as the reference outputs of the agents, output feedback tracking controllers are designed for the higher-order multi-agent systems on the basis of the non-separation principle and the feedback domination technique. Under the proposed control algorithms, all the agents' outputs asymptotically approach a bounded neighborhood of the global minimizer. Simulations demonstrate the effectiveness of the proposed output feedback distributed optimization algorithms.
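As a rough illustration of the first step, the sketch below implements a standard gradient-plus-consensus optimal signal generator of the kind described above, discretized by forward Euler. The quadratic local costs, the ring graph, and all gains are assumptions made for the example, not data from the paper.

```python
import numpy as np

# Hypothetical local cost gradients; the paper's cost functions are not given.
def grad_f(i, x):
    return x - (i + 1.0)          # f_i(x) = 0.5 * (x - (i+1))^2

N = 4
A = np.array([[0, 1, 0, 1],       # assumed undirected ring graph (adjacency is an assumption)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

x = np.random.randn(N)            # generator states (reference outputs for the agents)
v = np.zeros(N)                   # integral states enforcing consensus
h = 1e-2                          # Euler step size

for _ in range(20000):
    lap_x = A.sum(1) * x - A @ x  # Laplacian action L x, using only neighbor data
    lap_v = A.sum(1) * v - A @ v
    x_dot = -np.array([grad_f(i, x[i]) for i in range(N)]) - lap_x - lap_v
    v_dot = lap_x
    x, v = x + h * x_dot, v + h * v_dot

print(x)  # all entries approach the global minimizer (mean of 1..4 = 2.5 here)
```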
This paper focuses on distributed consensus optimization problems with coupled constraints over time-varying multi-agent networks, where the global objective is the finite sum of all agents' private local objective functions, and the agents' decision variables are subject to coupled equality and inequality constraints and a compact convex subset. Each agent exchanges information with its neighbors and processes local data. The agents cooperate to agree on a consensual decision vector that is an optimal solution to the considered optimization problems. We integrate ideas behind dynamic average consensus and primal-dual methods to develop a distributed algorithm and establish its sublinear convergence rate. In numerical simulations, we compare the proposed algorithm with related methods on the Neyman-Pearson classification problem to illustrate its effectiveness.
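The dynamic average consensus building block that the algorithm combines with primal-dual updates can be illustrated in isolation; the weight matrix and the local time-varying signals below are assumptions, not the paper's data.

```python
import numpy as np

# Doubly stochastic Metropolis weights for a 3-agent path graph (an assumption).
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

def u(k):
    # Locally measured time-varying signals whose network average each agent must track.
    return np.array([np.sin(0.01 * k), 1.0, 0.5 * np.cos(0.01 * k)])

y = u(0)                          # standard initialization: y_i^0 = u_i^0
for k in range(1, 5000):
    y = W @ y + u(k) - u(k - 1)   # dynamic average consensus update

print(y, u(4999).mean())          # each y_i stays close to the current network average
```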
This paper considers a class of multi-agent distributed convex optimization problems with a common set of constraints and provides several continuous-time neurodynamic approaches. In the problem transformation, ℓ1 and ℓ2 penalty methods are used, respectively, to cast the linear consensus constraint into the objective function, which avoids introducing auxiliary variables and only involves information exchange among primal variables in the process of solving the problem. For nonsmooth cost functions, two differential inclusions with projection operators are proposed. Without convexity assumptions on the differential inclusions, their asymptotic behavior and convergence properties are explored. For smooth cost functions, by harnessing the smoothness of the ℓ2 penalty function, finite- and fixed-time convergent algorithms are provided via a specifically designed average consensus estimator. Finally, several numerical examples in a multi-agent simulation environment are conducted to illustrate the effectiveness of the proposed neurodynamic approaches.
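A minimal sketch of the quadratic (ℓ2-type) penalty variant, discretized by forward Euler, is given below; the local costs, constraint set, graph, and penalty weight are assumptions, and with a finite penalty weight consensus is only enforced approximately.

```python
import numpy as np

# Illustrative data (assumptions): f_i(x) = 0.5*(x - c_i)^2 on the common set Omega = [0, 3].
c = np.array([1.0, 4.0, 2.5, 0.5])
N = len(c)
L = np.array([[ 2, -1,  0, -1],     # Laplacian of an assumed 4-agent ring
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

proj = lambda z: np.clip(z, 0.0, 3.0)   # projection onto Omega
rho, h = 50.0, 1e-3                     # penalty weight and Euler step size
x = np.random.rand(N)

for _ in range(100000):
    grad = (x - c) + rho * (L @ x)      # local gradient + gradient of the quadratic consensus penalty
    x = x + h * (proj(x - grad) - x)    # projected neurodynamic update (forward Euler)

print(x)  # near-consensus close to the projected global minimizer (mean of c = 2.0 here)
```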
This article investigates distributed optimization feedback control for a family of multi-agent systems with external disturbances. For physical implementation in the scenario where only sampled-state information is available, a novel disturbance-compensation distributed optimization control strategy is proposed by designing a sampled-data-based distributed protocol and a sampled-data-based disturbance compensator. The disturbance in the current sampling interval is compensated using its exact value at a time instant in the previous sampling interval, reconstructed from the sampled data. It is proved that the states of the agents converge to an arbitrarily small neighborhood of the optimal point of the global cost function if the disturbances and their derivatives are bounded and the sampling period is short enough. Moreover, when the disturbances are constant, all the agents' states converge to the optimal point asymptotically. Simulations confirm the validity of the proposed method.
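A toy single-integrator version of the sampled-data disturbance-compensation idea is sketched below; the protocol structure, graph, costs, and disturbances are assumptions, and the paper's full design is not reproduced.

```python
import numpy as np

# Toy single-integrator agents dx_i/dt = u_i + d_i(t); the data below are assumptions.
N, T, steps = 3, 0.01, 4000              # agents, sampling period, number of sampling instants
c = np.array([0.0, 2.0, 4.0])            # f_i(x) = 0.5*(x - c_i)^2, global optimum at mean(c) = 2.0
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)
d = lambda t: np.array([0.5, -0.3, 0.2]) + 0.1 * np.sin(t)   # bounded, slowly varying disturbances

x = np.array([3.0, -1.0, 5.0])           # plant states (only sampled values reach the controller)
v = np.zeros(N)                          # integral (consensus) states of the optimization protocol
x_prev, u_prev = x.copy(), np.zeros(N)

for k in range(steps):
    # Disturbance estimate from the previous interval: (x_k - x_{k-1})/T - u_{k-1}.
    d_hat = (x - x_prev) / T - u_prev if k > 0 else np.zeros(N)
    # Sampled-data protocol: gradient + consensus + integral terms, with disturbance compensation.
    u = -(x - c) - L @ x - L @ v - d_hat
    v = v + T * (L @ x)
    x_prev, u_prev = x.copy(), u.copy()
    # Zero-order-hold input applied to the continuous-time plant over one sampling period.
    for m in range(10):
        x = x + (T / 10) * (u + d(k * T + m * T / 10))

print(x)  # states settle in a small neighborhood of the optimum 2.0 despite the disturbances
```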
Distributed optimization with gradient tracking is particularly notable for its superior convergence results among the various distributed algorithms, especially in the context of directed graphs. However, privacy concerns arise when gradient information is transmitted directly, which induces more information leakage. Surprisingly, the literature has not adequately addressed the associated privacy issues. In response to this gap, our article proposes a privacy-preserving distributed optimization algorithm with gradient tracking that adds noise to the transmitted messages, namely, the decision variables and the estimate of the aggregated gradient. We prove two dilemmas for this kind of algorithm. In the first dilemma, we reveal that a distributed optimization algorithm with gradient tracking cannot achieve epsilon-differential privacy (DP) and exact convergence simultaneously. Building on this, we subsequently show that the algorithm fails to achieve epsilon-DP when employing nonsummable stepsizes in the presence of Laplace noise. It is crucial to emphasize that these findings hold regardless of the size of the privacy metric epsilon. After that, we rigorously analyze the convergence performance and privacy level for summable stepsize sequences under the Laplace distribution, since only summable stepsizes are meaningful to study. We derive sufficient conditions under which stochastically bounded accuracy and epsilon-DP hold simultaneously. Recognizing that several options can meet these conditions, we further derive an upper bound on the variance of the mean error and specify the mathematical expression of epsilon under such conditions. Numerical simulations are provided to demonstrate the effectiveness of our proposed algorithm.
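The sketch below illustrates the general idea of gradient tracking with Laplace noise injected into the transmitted variables, in a simplified undirected setting with a doubly stochastic weight matrix (the paper treats directed graphs); the costs, stepsize schedule, and noise schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified undirected setting with a doubly stochastic W (the paper handles directed graphs).
W = np.array([[0.6, 0.4, 0.0],
              [0.4, 0.2, 0.4],
              [0.0, 0.4, 0.6]])
c = np.array([1.0, 3.0, 5.0])                 # f_i(x) = 0.5*(x - c_i)^2 (illustrative costs)
grad = lambda x: x - c

x = np.zeros(3)
y = grad(x)                                   # gradient-tracking initialization: y_i^0 = grad f_i(x_i^0)

for k in range(2000):
    alpha = 1.0 / (k + 1) ** 1.1              # summable stepsizes (assumed schedule)
    b = 0.5 * 0.99 ** k                       # decaying Laplace noise scale (assumed schedule)
    noise_x = rng.laplace(scale=b, size=3)    # noise on the transmitted decision variables
    noise_y = rng.laplace(scale=b, size=3)    # noise on the transmitted gradient-tracking variable
    x_new = W @ (x + noise_x) - alpha * y
    y = W @ (y + noise_y) + grad(x_new) - grad(x)
    x = x_new

print(x)  # with summable stepsizes the iterates stay stochastically bounded; accuracy depends on the schedules
```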
This paper introduces output feedback distributed optimization algorithms designed specifically for second-order nonlinear multi-agent systems. The agents are allowed to have heterogeneous dynamics, characterized by distinct nonlinearities, as long as they satisfy a Lipschitz continuity condition. For the case with unknown states, nonlinear state observers are first designed for each agent to reconstruct the agent's unknown states. It is proven that the agents' unknown states are estimated accurately by the developed state observers. Then, based on the agents' state estimates and the gradient of each agent's local cost function, a class of output feedback distributed optimization algorithms is proposed for the considered multi-agent systems. Under the proposed distributed optimization algorithms, all the agents' outputs asymptotically approach the minimizer of the global cost function, which is the sum of all the local cost functions. By using Lyapunov stability theory, convex analysis, and input-to-state stability theory, the asymptotic convergence of the output feedback distributed optimization closed-loop system is proven. Simulations are conducted to validate the efficacy of the proposed algorithms.
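A compact sketch of an observer-plus-tracking architecture of this flavor is given below for scalar-output double integrators with Lipschitz nonlinearities; the observer structure, controller gains, graph, and cost data are assumptions rather than the paper's exact design.

```python
import numpy as np

# Illustrative second-order agents: output y_i = x_{i,1}, with x_{i,1}' = x_{i,2}, x_{i,2}' = u_i + phi_i(x_{i,1}).
N = 3
c = np.array([1.0, 3.0, 5.0])                      # f_i(r) = 0.5*(r - c_i)^2; global minimizer = 3.0
L = np.array([[1, -1, 0], [-1, 2, -1], [0, -1, 1]], dtype=float)
phi = lambda i, y: 0.1 * (i + 1) * np.sin(y)       # Lipschitz nonlinearities (heterogeneous across agents)
phi_vec = lambda y: np.array([phi(i, y[i]) for i in range(N)])

h = 1e-3
r, v = np.zeros(N), np.zeros(N)                    # reference layer: gradient + consensus signal generator
x1, x2 = np.random.randn(N), np.zeros(N)           # true plant states (x2 is not measured)
z1, z2 = x1.copy(), np.zeros(N)                    # observer states, driven by the measured output x1
l1, l2 = 20.0, 100.0                               # observer gains
k1, k2 = 4.0, 4.0                                  # output feedback tracking gains

for _ in range(200000):
    r_dot = -(r - c) - L @ r - L @ v               # reference converges to the global minimizer
    v_dot = L @ r
    u = -k1 * (z1 - r) - k2 * (z2 - r_dot) - phi_vec(z1)   # controller uses estimates, not true states
    e = x1 - z1                                    # output estimation error
    z1_dot = z2 + l1 * e                           # Luenberger-type observer with output injection
    z2_dot = u + phi_vec(z1) + l2 * e
    x1_dot, x2_dot = x2, u + phi_vec(x1)           # true plant dynamics
    r, v = r + h * r_dot, v + h * v_dot
    z1, z2 = z1 + h * z1_dot, z2 + h * z2_dot
    x1, x2 = x1 + h * x1_dot, x2 + h * x2_dot

print(x1)  # agent outputs approach the global minimizer 3.0
```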
Distributed optimization, combining cooperative control and optimization objectives, is vital for practical applications. While current research predominantly focuses on linear multi-agent systems, real-world demands often involve nonlinear systems with finite- or fixed-time constraints. This paper addresses finite-time/fixed-time distributed optimization for nonlinear multi-agent systems with time-varying cost functions. Under appropriate assumptions, we propose a finite-time distributed protocol and a fixed-time distributed protocol for the nonlinear multi-agent systems that ensure all agents achieve consensus and minimize the time-varying cost function, relaxing the conditions of strong convexity and bounded gradient difference on the cost function. Simulation experiments validate the effectiveness of these distributed protocols.
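The sign-power consensus term that fixed-time protocols of this kind typically build on can be sketched as follows; the full protocols in the paper additionally handle the nonlinear dynamics and the time-varying cost, which this toy example omits.

```python
import numpy as np

# Fixed-time consensus building block (a sketch, not the paper's protocol):
# sig(z, a) = sign(z) * |z|**a, with exponents 0 < p < 1 < q.
sig = lambda z, a: np.sign(z) * np.abs(z) ** a

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)             # assumed undirected path graph
x = np.array([5.0, -2.0, 1.0])
p, q, h = 0.5, 1.5, 1e-3

for _ in range(20000):
    x_dot = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            if A[i, j] > 0:
                e = x[i] - x[j]
                x_dot[i] -= sig(e, p) + sig(e, q)  # low power dominates near consensus, high power far away
    x = x + h * x_dot

print(x)  # agents reach (practical) consensus within a time bound independent of the initial states
```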
This paper proposes an accelerated strategy aimed at safeguarding the privacy of the multi-agent system (MAS) and enhancing the optimization algorithm's convergence rate. First, a local penalty factor and an auxiliary variable are introduced to reformulate the original distributed optimization (DO) problem, which involves a nonsmooth objective function and set constraints. To the best of our knowledge, this is the first work in DO to introduce the concept of a local penalty factor. Subsequently, we use Nesterov's acceleration approach to develop a distributed continuous-time primal-dual accelerated algorithm while guaranteeing that local decision variables are not shared. It is demonstrated that this algorithm is convergent and achieves a convergence rate of O(1/t^2). Finally, the superiority and effectiveness of the proposed strategy are substantiated by three numerical simulations.
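As a hedged illustration of the acceleration ingredient only, the sketch below applies Nesterov's accelerated method to a consensus-penalized surrogate of a distributed problem; it is not the paper's privacy-preserving primal-dual algorithm, and all problem data are assumptions.

```python
import numpy as np

# Nesterov's accelerated method on a consensus-penalized surrogate objective
# F(x) = sum_i f_i(x_i) + (rho/2) x^T L x   (illustrative; not the paper's primal-dual design).
c = np.array([1.0, 2.0, 6.0])
L = np.array([[1, -1, 0], [-1, 2, -1], [0, -1, 1]], dtype=float)
rho = 20.0
gradF = lambda x: (x - c) + rho * (L @ x)

eta = 1.0 / (1.0 + rho * 3.0)              # step size ~ 1/L_F, with L_F bounded by 1 + rho*lambda_max(L)
x = np.zeros(3)
x_prev = x.copy()

for k in range(1, 3000):
    beta = (k - 1) / (k + 2)               # Nesterov momentum, the discrete analogue of vanishing damping
    y = x + beta * (x - x_prev)            # extrapolation step
    x_prev = x
    x = y - eta * gradF(y)                 # gradient step at the extrapolated point

print(x)  # near-consensus around the global minimizer (mean of c = 3.0), up to the penalty-induced error
```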
This paper presents a distributed control framework for grid-forming (GFM) distributed generations (DGs), considering the objectives of active/reactive power sharing and load feeder voltage regulation in inverter-based microgrids (MGs). Battery energy storage systems (BESSs) are controlled as GFM sources, while solar-powered DGs operate in grid-following (GFL) mode to provide active power support. The proposed method simplifies the global optimization problem into multiple sub-problems by virtually segregating each GFM source into two decoupled sources. Each sub-problem, containing at most two GFM sources and multiple GFL sources with a single distributed agent, is solvable independently using local and neighboring node information. This feature substantially reduces the required computational and communication resources, allowing solutions to run on low-cost digital signal processors (DSPs). Moreover, the highly distributed nature of the proposed search algorithm ensures fast solution convergence and real-time implementation in multi-agent systems (MAS). Compared to existing segregation methods such as the Alternating Direction Method of Multipliers (ADMM) and the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) scheme, which segregate the network into complex sub-networks involving several GFM sources, the proposed approach is more suitable for small-scale inverter-based MGs. The framework's effectiveness is validated through analytical formulation, MATLAB simulations, and realistic experimental results within a multi-feeder test MG system, which includes numerous load feeders and sparsely available GFM DGs, a scenario that has received limited attention in the existing literature.
In this paper, we explore accelerated continuous-time dynamic approaches with a vanishing damping a/t, driven by a quadratic penalty function designed for linearly constrained convex optimization problems. We replace the linear constraints with penalty terms incorporated into the objective function, where the penalty coefficient grows to +infinity as t tends to infinity. With appropriate penalty coefficients, we establish convergence rates of O(1/t^{min{2a/3, 2}}) for the objective residual and the feasibility violation when a > 0, and demonstrate the robustness of these convergence rates against external perturbations. Furthermore, we apply the proposed dynamic approach to three distributed optimization problems: a distributed constrained consensus problem, a distributed extended monotropic optimization problem, and a distributed optimization problem with separated equations, resulting in three variant distributed dynamic approaches. Numerical examples are provided to show the effectiveness of the proposed quadratic penalty dynamic approaches.
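A possible discretization of such dynamics on a toy equality-constrained problem is sketched below; the damping parameter, penalty schedule, and problem data are assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative discretization of accelerated dynamics with vanishing damping a/t and a growing
# quadratic penalty:  x''(t) + (a/t) x'(t) + grad f(x(t)) + beta(t) A^T (A x(t) - b) = 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
f_grad = lambda x: x - np.array([2.0, 0.0])     # f(x) = 0.5*||x - (2, 0)||^2
a = 4.0
beta = lambda t: t                              # penalty coefficient growing to +infinity (assumed schedule)

h = 1e-2
x = np.zeros(2)
vel = np.zeros(2)
t = 1.0                                         # start at t = 1 to avoid the a/t singularity

for _ in range(200000):
    acc = -(a / t) * vel - f_grad(x) - beta(t) * (A.T @ (A @ x - b))
    vel += h * acc                              # semi-implicit Euler: update velocity, then position
    x += h * vel
    t += h

print(x, A @ x - b)  # x tends toward the constrained minimizer (1.5, -0.5); the feasibility residual shrinks
```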