In this paper, a distributed convex optimization algorithm under persistent attacks is investigated in the framework of hybrid dynamical systems. Attacks may alter the behavior of an algorithm that solves the optimization problem, which raises the question of under what conditions the optimal solution can still be found. To explore this problem, we first use differential inclusions to model attack modes and then use an average dwell-time automaton and a time-ratio monitor to constrain attacks. Based on these constraints, an inequality condition is given to ensure exponential stability of the optimal solution. Finally, a switched algorithm is modeled as a hybrid dynamical system and a Lyapunov function is constructed to show that the optimal solution can be achieved exponentially under persistent attacks. (C) 2019 Elsevier Ltd. All rights reserved.
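A rough numerical illustration of the time-ratio idea described in this abstract: a stable nominal flow is interrupted by a destabilizing attack mode, and the trajectory still decays exponentially when the attack's activation ratio satisfies an inequality condition. The scalar dynamics, attack gain, and ratio below are hypothetical stand-ins, not the paper's model.

```python
# Scalar sketch: nominal flow x' = -x (gradient flow of f(x) = x^2/2 toward the
# optimum 0), attack mode x' = +a*x. If the attack is active a fraction rho of
# the time, the average exponent per unit time is rho*a - (1 - rho); when it is
# negative, exponential stability survives the attacks.
a, rho = 1.0, 0.3
assert rho * a - (1 - rho) < 0   # the time-ratio (inequality) condition

h = 0.001      # Euler step
x = 5.0        # initial condition
T = 20.0
for k in range(int(T / h)):
    t = k * h
    attacked = (t % 1.0) < rho   # periodic attacks respecting the ratio rho
    x += h * (a * x if attacked else -x)
print(abs(x))  # roughly 5 * exp((rho*a - (1 - rho)) * T)
```

The printed value is close to the continuous-time prediction `5 * exp(-0.4 * 20)`, i.e. the attacks slow the decay but do not destroy it.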
A distributed convex optimization problem over a weight-unbalanced directed network is studied in this brief, where the global objective function is equal to the sum of strongly convex objective functions with globally Lipschitz gradients. With respect to the optimization problem, a new continuous-time coordination algorithm is proposed to compute its optimal solution in a distributed manner. The asymptotic convergence of the proposed algorithm is guaranteed by resorting to the direct sum decomposition technique, Kronecker matrix algebra, the stability of perturbed systems, and input-to-state stability. Finally, some simulations are performed to illustrate the theoretical result.
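To give a flavor of continuous-time distributed optimization, here is an Euler discretization of a standard proportional-integral consensus-optimization flow over an undirected graph. This is a generic sketch with hypothetical quadratic objectives, not this brief's algorithm, which additionally handles weight-unbalanced digraphs.

```python
import numpy as np

# Hypothetical local objectives f_i(x) = 0.5*a_i*(x - c_i)^2: strongly convex
# with Lipschitz gradients; their sum is minimized at x* = sum(a*c)/sum(a).
a = np.array([1.0, 2.0, 3.0, 4.0])
c = np.array([4.0, 2.0, 1.0, -1.0])
x_star = np.sum(a * c) / np.sum(a)

# Undirected 4-cycle (weight-balanced) for simplicity.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A

# Euler discretization of the PI consensus-optimization flow
#   x' = -L x - grad f(x) - L v,   v' = L x
x, v, h = c.copy(), np.zeros(4), 0.01
for _ in range(20000):
    g = a * (x - c)   # local gradients
    x, v = x + h * (-L @ x - g - L @ v), v + h * (L @ x)
print(x)   # all agents near x_star
```

The integral state `v` is what forces the consensus value to the exact optimizer rather than a weighted compromise.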
This paper considers an unconstrained collaborative optimization of a sum of convex functions, where agents make decisions using local information in the presence of random interconnection topologies. We recast the problem as minimization of the sum of convex functions over a constraint set defined as the set of fixed-value points of a random operator derived from weighted matrices of random graphs. We show that the derived random operator has the nonexpansivity property; therefore, this formulation does not need the distribution of random communication topologies. Hence, it includes random networks with or without asynchronous protocols. As an extension of the problem, we define a novel optimization problem, namely minimization of a convex function over the fixed-value point set of a nonexpansive random operator. We propose a discrete-time algorithm using diminishing step size for converging almost surely and in mean square to the global solution of the optimization problem under suitable assumptions. Consequently, as a special case, it reduces to a totally asynchronous algorithm for the distributed optimization problem. We show that the fixed-value point set serves as a bridge from deterministic analysis to random analysis of the algorithm. Finally, a numerical example illustrates the convergence of the proposed algorithm.
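The diminishing-step-size idea over random topologies can be sketched with random pairwise gossip (each gossip exchange is one draw of a random doubly stochastic matrix). The objectives, network model, and step-size exponent below are hypothetical illustrations, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # f_i(x) = |x - c_i|; the sum is
n = len(c)                               # minimized at the median, x* = 2
x = rng.normal(size=n) * 3               # arbitrary initial estimates

for k in range(40000):
    alpha = 1.0 / (k + 1) ** 0.6         # diminishing step size
    x = x - alpha * np.sign(x - c)       # local subgradient steps
    i, j = rng.choice(n, size=2, replace=False)
    x[i] = x[j] = 0.5 * (x[i] + x[j])    # random gossip: a random doubly
                                         # stochastic averaging matrix W(k)
print(x)   # all agents near the median 2
```

Because each random averaging matrix is doubly stochastic, the operator is nonexpansive regardless of which pair is drawn, which is the property the abstract exploits to avoid needing the topology distribution.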
This note considers the distributed optimization problem on directed graphs with nonconvex local objective functions and unknown network connectivity. A new adaptive algorithm is proposed to minimize a differentiable global objective function. By introducing dynamic coupling gains and updating the coupling gains using relative information of system states, the nonconvexity of local objective functions, unknown network connectivity, and the uncertain dynamics caused by locally Lipschitz gradients are tackled concurrently. Consequently, global asymptotic convergence is established when the global objective function is strongly convex and the gradients of local objective functions are only locally Lipschitz. When the communication graph is strongly connected and weight-balanced, the algorithm is independent of any global information. Then, the algorithm is naturally extended to unbalanced directed graphs by using the left eigenvector of the Laplacian matrix associated with the zero eigenvalue. Several numerical simulations are presented to verify the results.
ISBN:
(print) 9781538629185
Target localization is important in monitoring and tracking systems. This paper employs the distributed convex optimization technique to investigate the vision-based target localization problem. First, the target localization problem is transformed into a convex optimization problem under the Minimum Squared Error criterion. Then a modified consensus-based distributed subgradient algorithm is proposed to solve the convex optimization problem. Finally, a practical experiment is included to verify the effectiveness of the algorithm.
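A minimal sketch of consensus-based localization under a squared-error criterion: each sensor holds a noisy measurement of the target position, and a consensus step mixed with local gradient steps drives all local estimates toward the centralized least-squares solution. The measurement model, graph, weights, and step size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = np.array([3.0, -1.0])               # unknown 2-D target position
n = 6
y = p_true + 0.1 * rng.normal(size=(n, 2))   # noisy local measurements
p_hat = y.mean(axis=0)                       # centralized MSE solution (reference)

# Metropolis weights for a ring of 6 sensors (doubly stochastic)
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
W += np.eye(n) * (1 - W.sum(1))

# Consensus + local (sub)gradient steps on f_i(x) = ||x - y_i||^2
x = np.zeros((n, 2))     # each row: one sensor's estimate of the target
alpha = 0.05
for _ in range(2000):
    grad = 2 * (x - y)   # gradient of each local squared error
    x = W @ x - alpha * grad
print(x)   # every row close to p_hat
```

With a constant step size the estimates settle within an O(alpha) neighborhood of the centralized least-squares answer; a diminishing step size would remove that residual gap.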
This paper studies the distributed convex optimization problem for multi-agent systems over undirected and connected networks. Motivated by practical considerations, we propose a new distributed optimization algorithm with event-triggered communication. The proposed event detection is decentralized and sampled-data, and does not require periodic communication among agents to calculate the threshold. Based on Lyapunov approaches, we show that the proposed algorithm converges asymptotically to the unknown optimizer if the design parameters are chosen properly. We also give an upper bound on the convergence rate. Finally, we illustrate the effectiveness of the proposed algorithm by a numerical simulation.
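The event-triggered mechanism can be illustrated with a simple Laplacian-plus-gradient update where each agent broadcasts its state only when it has drifted from its last broadcast by more than a decaying threshold. This toy scheme converges to a penalty-type equilibrium rather than the exact optimizer (the paper's algorithm is exact); the objectives, graph, and threshold schedule are hypothetical.

```python
import numpy as np

# Local objectives f_i(x) = 0.5*(x - c_i)^2 on a 4-cycle
c = np.array([1.0, 3.0, 5.0, 7.0])
n = len(c)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A

x = np.zeros(n)
xb = x.copy()            # last broadcast values seen by neighbors
h, alpha = 0.05, 1.0
events = 0
for k in range(4000):
    thr = 0.5 * 0.995 ** k             # decaying trigger threshold
    trig = np.abs(x - xb) > thr
    xb = np.where(trig, x, xb)         # broadcast only when the error exceeds thr
    events += trig.sum()
    x = x + h * (-L @ xb - alpha * (x - c))
print(x, events)   # far fewer broadcasts than the 4*4000 a periodic scheme needs
```

The count `events` stays well below one broadcast per agent per step, which is the communication saving the event-triggered design is after.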
Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel subgradient calculations and a global subgradient update performed by a master node. In this paper, we propose a consensus-based dual decomposition to remove the need for such a master node and still enable the computing nodes to generate an approximate dual solution for the underlying convex optimization problem. In addition, we provide a primal recovery mechanism to allow the nodes to have access to approximate near-optimal primal solutions. Our scheme is based on a constant stepsize choice, and convergence of the dual and primal objectives is achieved up to a bounded error floor dependent on the stepsize and on the number of consensus steps among the nodes.
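The master-free idea can be sketched on a separable quadratic problem with one coupling constraint: each node keeps its own dual copy, and a few consensus rounds on the constraint residual replace the master's global subgradient update, followed by ergodic primal recovery. The problem data, weights, stepsize, and number of consensus rounds are hypothetical illustrations of the scheme's structure.

```python
import numpy as np

# min sum_i 0.5*(x_i - d_i)^2  subject to  sum_i x_i = b
d = np.array([1.0, 2.0, 3.0, 4.0]); b = 6.0; n = len(d)

# Metropolis weights on a ring of 4 nodes (doubly stochastic)
W = np.array([[1/3, 1/3, 0.0, 1/3],
              [1/3, 1/3, 1/3, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [1/3, 0.0, 1/3, 1/3]])

lam = np.zeros(n)        # each node's copy of the dual variable
x_avg = np.zeros(n)      # running primal average (primal recovery)
s, phi = 0.1, 5          # constant stepsize, consensus rounds per iteration
for k in range(2000):
    x = d - lam                          # local primal minimizers of the Lagrangian
    r = x - b / n                        # local share of the constraint residual
    for _ in range(phi):
        r = W @ r                        # consensus averaging replaces the master
    lam = W @ lam + s * n * r            # approximate dual subgradient step
    x_avg = (k * x_avg + x) / (k + 1)    # ergodic primal recovery
print(lam, x_avg)
```

Consistent with the abstract, the residual error of this sketch shrinks with the stepsize `s` and with more consensus rounds `phi`, but does not vanish for fixed choices.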
This paper studies the robustness properties against additive persistent noise of a class of distributed continuous-time algorithms for convex optimization. A team of agents, each with its own private objective function, seeks to collectively determine the global decision vector that minimizes the sum of all the objectives. The team communicates over a weight-balanced, strongly connected digraph and both interagent communication and agent computation are corrupted by noise. Under the proposed distributed algorithm, each agent updates its estimate of the global solution using the gradient of its local objective function while, at the same time, seeking to agree with its neighbors' estimates via proportional-integral feedback on their disagreement. Under mild conditions on the local objective functions, we show that this strategy is noise-to-state exponentially stable in the second moment with respect to the optimal solution. Our technical approach combines notions and tools from graph theory, stochastic differential equations, Lyapunov stability analysis, and cocoercivity of vector fields. Simulations illustrate our results.
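The noise-to-state property can be probed numerically with an Euler-Maruyama simulation of a proportional-integral consensus-optimization flow driven by additive noise: the second moment of the error settles at a bounded level set by the noise intensity instead of diverging. The objectives, graph, noise model (noise on the state update only), and intensity below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
a = np.array([1.0, 2.0, 3.0, 4.0])
c = np.array([4.0, 2.0, 1.0, -1.0])
x_star = np.sum(a * c) / np.sum(a)   # minimizer of sum 0.5*a_i*(x - c_i)^2

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A            # weight-balanced (undirected) graph

h, sigma = 0.01, 0.2                 # Euler step, noise intensity
x, v = c.copy(), np.zeros(4)
mse = []
for k in range(50000):
    w = sigma * rng.normal(size=4)
    g = a * (x - c)
    # Euler-Maruyama step of the noisy PI flow:
    #   dx = (-L x - grad f - L v) dt + sigma dW,   dv = (L x) dt
    x, v = (x + h * (-L @ x - g - L @ v) + np.sqrt(h) * w,
            v + h * (L @ x))
    if k > 25000:                    # discard the transient
        mse.append(np.mean((x - x_star) ** 2))
mse_ss = float(np.mean(mse))
print(mse_ss)   # small but nonzero: bounded second moment under persistent noise
```

Rerunning with a smaller `sigma` shrinks `mse_ss` roughly quadratically, which is the qualitative signature of noise-to-state stability in the second moment.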