In this paper, we consider a multi-agent convex optimization problem whose goal is to minimize a global convex objective function, given as the sum of local convex objective functions, subject to global convex inequality constraints and several randomly occurring local convex state constraint sets. A distributed primal-dual random projection subgradient (DPDRPS) algorithm with diminishing stepsize, using only local communications and computations, is proposed to solve this problem. By employing iterative inequality techniques, the proposed DPDRPS algorithm is proved to converge almost surely. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical analysis.
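The abstract does not state the update equations, but a primal-dual subgradient step with random projection typically takes the form sketched below. Everything here (the toy problem instance, the occurrence probability, the function names) is an illustrative assumption, not the paper's actual DPDRPS iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_ball(x, radius=1.0):
    """Euclidean projection onto a ball, standing in for one randomly
    observed local constraint set."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

f_grad = lambda x: 2.0 * x          # subgradient of a toy local objective ||x||^2
g = lambda x: np.sum(x) - 1.0       # toy global inequality constraint g(x) <= 0
g_grad = lambda x: np.ones_like(x)

x, mu = rng.standard_normal(3), 0.0  # primal iterate and dual multiplier
for k in range(1, 1001):
    step = 1.0 / k                   # diminishing stepsize
    # In the distributed setting, x would first be mixed with neighbors' iterates.
    x = x - step * (f_grad(x) + mu * g_grad(x))   # primal subgradient step
    if rng.random() < 0.5:           # a local constraint set occurs at random
        x = project_ball(x)          # random projection onto the observed set
    mu = max(0.0, mu + step * g(x))  # projected dual ascent on the multiplier
```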
This paper proposes a distributed multi-agent optimization protocol that minimizes the average of the agents' objective functions over the network while satisfying each agent's equality and inequality constraints. The exact penalty method is adopted to obtain a linear distributed optimization protocol. The proposed protocol works only with the decision variables and does not need any additional variables. Proofs of consensus, convergence, and boundedness of the proposed protocol are provided under mild assumptions. The protocol is also illustrated by a numerical example.
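For reference, an exact penalty reformulation of this kind (with a sufficiently large penalty weight $c$; the notation here is assumed, not taken from the paper) replaces the constrained problem by an unconstrained one whose minimizers coincide with those of the original:
$$\min_{x}\;\frac{1}{N}\sum_{i=1}^{N} f_i(x) \;+\; c\sum_{i=1}^{N}\Big(\sum_{j}\max\{0,\,g_{ij}(x)\} \;+\; \sum_{l}\big|h_{il}(x)\big|\Big),$$
where $f_i$, $g_{ij}(x)\le 0$, and $h_{il}(x)=0$ denote agent $i$'s objective, inequality constraints, and equality constraints, respectively.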
This paper proposes a protocol for a distributed optimization problem that minimizes the average of the agents' objective functions over the network while satisfying each agent's constraints. The protocol can handle constraints that are not common to all agents. Instead of invoking dual functions, each agent transmits only its decision variable together with 1-bit information indicating whether its own constraint is fulfilled. A proof of consensus and convergence is provided based on the constrained subgradient method. A numerical example illustrates how the proposed protocol works.
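How the received bits enter the paper's update rule is not specified in the abstract; the sketch below only illustrates the message format and a generic constrained-subgradient-style step, with all names assumed for illustration.

```python
import numpy as np

def make_message(x_i, g_i):
    """Agent i transmits its decision variable plus a single bit:
    is its local constraint g_i(x_i) <= 0 fulfilled?"""
    return x_i, bool(g_i(x_i) <= 0.0)

def local_update(x_i, neighbor_msgs, f_grad_i, g_i, g_grad_i, step):
    """One constrained-subgradient-style step: average the received
    decision variables, then step toward feasibility if the agent's own
    constraint is violated, and toward optimality otherwise."""
    x_avg = np.mean([x_i] + [x for x, _ in neighbor_msgs], axis=0)
    if g_i(x_avg) > 0.0:
        return x_avg - step * g_grad_i(x_avg)
    return x_avg - step * f_grad_i(x_avg)
```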
ISBN (print): 9784888983006
This paper is concerned with distributed multi-agent optimization with inequality and equality constraints based on exact penalty methods. In the literature on exact penalty methods, a constrained optimization problem is solved by optimizing a penalized objective function and, under mild assumptions, the set of optimal solutions of the penalized objective function coincides with that of the original constrained problem. Exploiting this exactness, the paper proposes a new distributed multi-agent optimization protocol that simplifies the update law of previous exact-penalty-based protocols with equality constraints. A more rigorous proof of the convergence of the protocol is also provided under the assumption that the penalized objective function is exact.
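A minimal numerical sketch of consensus-plus-subgradient iterations on a penalized objective follows; the toy problem, mixing weights, and penalty value are assumptions for illustration, not the protocol proposed in the paper.

```python
import numpy as np

# Toy instance: N agents minimize sum of f_i(x), f_i(x) = (x - a_i)^2,
# with one shared equality constraint h(x) = x - 1 = 0, via exact penalty.
N, c, a = 4, 10.0, np.array([0.0, 1.0, 2.0, 3.0])
W = np.full((N, N), 1.0 / N)     # doubly stochastic mixing matrix
x = np.zeros(N)                  # the agents' scalar estimates

def penalized_subgrad(xi, ai):
    """Subgradient of agent i's penalized objective f_i(x) + c*|x - 1|."""
    return 2.0 * (xi - ai) + c * np.sign(xi - 1.0)

for k in range(1, 2001):
    # Mix with neighbors, then take a diminishing subgradient step.
    x = W @ x - (1.0 / k) * np.array([penalized_subgrad(x[i], a[i]) for i in range(N)])

# With c large enough, the estimates settle near the constrained optimum x* = 1.
print(x)
```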
This paper explores a class of distributed constrained convex optimization problems where the objective function is a sum of $N$ convex local objective functions. These functions may be non-smooth but are Lipschitz continuous, and the optimization is further constrained by $N$ distinct closed convex sets. To describe the structure of information exchange among agents, a sequence of time-varying weight-unbalanced directed graphs is introduced. The study then proposes a novel distributed randomized gradient-free constrained optimization algorithm, which replaces the conventional requirement for exact gradient or subgradient information in each iterative update with a random gradient-free oracle, thereby addressing scenarios where accurate gradient information is hard to obtain. A thorough convergence analysis is provided based on the smoothing parameters, the Lipschitz constants of the local objective functions, and a series of standard assumptions. Notably, the proposed algorithm converges to an approximate optimal solution within a predetermined error threshold for the considered optimization problem, achieving the same convergence rate of ${\mathcal O}(\frac{\ln (k)}{\sqrt{k}})$ as general randomized gradient-free algorithms when the decaying step size is selected appropriately. When at least one of the local objective functions is strongly convex, the proposed algorithm achieves a faster convergence rate of ${\mathcal O}(\frac{1}{k})$. Finally, rigorous simulation results verify the theoretical findings.
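The abstract does not specify the oracle's exact form; a standard two-point Gaussian-smoothing oracle, of the kind usually meant by "random gradient-free," looks like the following sketch (the smoothing parameter mu and the objective are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

def gradient_free_oracle(f, x, mu=1e-4):
    """Two-point random gradient-free estimate: an unbiased estimator of the
    gradient of the smoothed surrogate f_mu(x) = E[f(x + mu*u)], u ~ N(0, I).
    Only function values of f are needed, never its (sub)gradient."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# Example with a nonsmooth but Lipschitz-continuous objective.
f = lambda x: np.abs(x).sum()
g = gradient_free_oracle(f, np.array([1.0, -2.0]))
```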
ISBN (print): 9781509046584
In this paper, we study a convex optimization problem that arises in a network where multiple agents cooperatively optimize the sum of nonsmooth but Lipschitz continuous functions, subject to a convex and compact constraint set. Under the additional restriction that each agent can only transmit quantized information, we develop a distributed quantized gradient-free algorithm for solving the multi-agent convex optimization problem over a time-varying network. In particular, we provide a convergence rate analysis of the proposed algorithm and highlight the dependence of the error bound on the smoothing parameter and the quantization resolution.
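The paper's quantizer is not described in the abstract; a uniform midpoint quantizer is the usual assumption in such analyses, and its resolution then enters the error bound alongside the smoothing parameter. A sketch under that assumption:

```python
import numpy as np

def uniform_quantize(x, delta=0.1):
    """Uniform quantizer: each coordinate is rounded to the nearest multiple
    of delta, so the per-coordinate quantization error is at most delta/2."""
    return delta * np.round(np.asarray(x) / delta)

# Agents would transmit uniform_quantize(x_i) in place of x_i; the resulting
# error bound depends on both the smoothing parameter and delta.
print(uniform_quantize([0.237, -1.49]))   # -> [ 0.2 -1.5]
```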
This paper extends the idea of Brezinski's hybrid acceleration procedure, for the solution of a system of linear equations with a symmetric coefficient matrix of dimension n, to a new context called cooperative computation, involving m agents (m << n), each one concurrently computing the solution of the whole system using an iterative method. Cooperation occurs between the agents through communication, periodic or probabilistic, of each agent's estimate to one randomly chosen agent, thus characterizing the computation as concurrent and asynchronous. Every time an agent receives solution estimates from the others, it carries out a least squares computation, involving a small linear system of dimension m, in order to replace its current solution estimate by an affine combination of the received estimates; the algorithm continues until a stopping criterion is met. In addition, an autocooperative algorithm, in which estimates are updated using affine combinations of current and past estimates, is also proposed. Numerical experiments show the proposed algorithms to be efficient for certain matrices, in particular in comparison with the popular Barzilai-Borwein algorithm.
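The least squares step described above can be sketched as follows for the system Ax = b; the parametrization and the names are illustrative, not taken from the paper.

```python
import numpy as np

def affine_combine(A, b, X):
    """Combine the m estimates (columns of X) into the affine combination
    X @ alpha, with sum(alpha) = 1, minimizing the residual ||A(X alpha) - b||.
    The affine constraint is enforced by writing alpha = e_1 + B t, where the
    columns of B sum to zero, leaving a small least squares problem in t."""
    m = X.shape[1]
    B = np.vstack([-np.ones((1, m - 1)), np.eye(m - 1)])  # basis of {sum = 0}
    R = A @ X                                             # residual directions
    t, *_ = np.linalg.lstsq(R @ B, b - R[:, 0], rcond=None)
    alpha = np.concatenate(([1.0], np.zeros(m - 1))) + B @ t
    return X @ alpha
```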