We review a class of recently proposed linear-cost network flow methods which are amenable to distributed implementation. All the methods in the class use the notion of ε-complementary slackness, and most do not explicitly manipulate any “global” objects such as paths, trees, or cuts. Interestingly, these methods have stimulated a large number of new serial computational complexity results. We develop the basic theory of these methods and present two specific methods, the ε-relaxation algorithm for the minimum-cost flow problem and the auction algorithm for the assignment problem. We show how to implement these methods with serial complexities of O(N³ log NC) and O(NA log NC), respectively. We also discuss practical implementation issues and computational experience to date. Finally, we show how to implement ε-relaxation in a completely asynchronous, “chaotic” environment in which some processors compute faster than others, some processors communicate faster than others, and there can be arbitrarily large communication delays.
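The auction algorithm mentioned in this abstract admits a very compact implementation. The sketch below is a minimal forward auction for a dense assignment problem, not the paper's O(NA log NC) implementation; with integer benefits and eps < 1/N it terminates with an optimal assignment satisfying ε-complementary slackness. The function name and the dense-matrix representation are illustration choices, not from the paper.

```python
import numpy as np

def auction_assignment(benefit, eps=1e-3):
    """Forward auction for the (maximization) assignment problem.

    benefit[i, j] is the benefit of assigning person i to object j.
    Each unassigned person bids for its best object, raising that
    object's price by (best value - second-best value + eps).
    """
    n = benefit.shape[0]
    prices = np.zeros(n)
    owner = [-1] * n                 # owner[j] = person currently holding object j
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = benefit[i] - prices             # net value of each object to i
        j = int(np.argmax(values))
        best = values[j]
        values[j] = -np.inf
        second = values.max()
        prices[j] += best - second + eps         # the bid raises object j's price
        if owner[j] != -1:
            unassigned.append(owner[j])          # previous owner is displaced
        owner[j] = i
    return owner, prices
```

Each person ends up assigned to an object within eps of maximizing its net value, which is exactly ε-complementary slackness for the assignment problem.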
It is shown here that stability of the stochastic approximation algorithm is implied by the asymptotic stability of the origin for an associated ODE. This in turn implies convergence of the algorithm. Several specific classes of algorithms are considered as applications. It is found that the results provide (i) a simpler derivation of known results for reinforcement learning algorithms; (ii) a proof, for the first time, that a class of asynchronous stochastic approximation algorithms is convergent without any a priori assumption of stability; and (iii) a proof, for the first time, that asynchronous adaptive critic and Q-learning algorithms are convergent for the average-cost optimal control problem.
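The stochastic approximation recursion under discussion is the classical Robbins-Monro scheme. A minimal sketch follows; the choice of h, the 1/n step sizes, and the Gaussian noise model are illustrative (the paper's results cover general martingale-difference noise and asynchronous variants).

```python
import random

def stochastic_approximation(h, x0, steps=20000, seed=0):
    """Robbins-Monro iteration x_{n+1} = x_n + a_n * (h(x_n) + noise).

    By the ODE method, if the origin of dx/dt = h(x) is globally
    asymptotically stable, the iterates remain bounded and converge
    to a zero of h.
    """
    rng = random.Random(seed)
    x = x0
    for n in range(1, steps + 1):
        a_n = 1.0 / n                    # sum a_n = inf, sum a_n^2 < inf
        noise = rng.gauss(0.0, 1.0)      # martingale-difference noise
        x = x + a_n * (h(x) + noise)
    return x

# h(x) = -x: the associated ODE dx/dt = -x has a globally
# asymptotically stable origin, so the iterates converge to 0
x = stochastic_approximation(lambda x: -x, 5.0)
```

With h(x) = -x and a_n = 1/n the recursion reduces to a running average of the noise, which illustrates why stability of the ODE yields both boundedness and convergence of the iterates.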
This article presents a family of algorithms for decentralized convex composite problems. We consider the setting of a network of agents that cooperatively minimize a global objective function composed of a sum of local functions plus a regularizer. Through the use of intermediate consensus variables, we remove the need for inner communication loops between agents when computing curvature-guided updates. A general scheme is presented, which unifies the analysis for a plethora of computing choices, including gradient descent, Newton updates, and Broyden-Fletcher-Goldfarb-Shanno (BFGS) updates. Our analysis establishes sublinear convergence rates under convex objective functions with Lipschitz continuous gradients, as well as linear convergence rates when the local functions are further assumed to be strongly convex. Moreover, we explicitly characterize the acceleration due to curvature information. Last but not least, we present an asynchronous implementation for the proposed algorithms, which removes the need for a central clock, with linear convergence rates established in expectation under strongly convex objectives. We ascertain the effectiveness of the proposed methods with numerical experiments on benchmark datasets.
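As a point of reference for the family described above, here is a sketch of the simplest member one could write down: plain decentralized gradient descent with a doubly stochastic mixing matrix and no curvature term or regularizer. All names, the toy problem, and the parameters are illustrative, not from the article.

```python
import numpy as np

def decentralized_gradient_descent(grads, W, x0, step=0.05, iters=500):
    """Decentralized gradient descent over a network.

    Each agent i mixes its iterate with its neighbours through the
    doubly stochastic matrix W, then steps along its local gradient:
        x_i <- sum_j W[i, j] * x_j - step * grad f_i(x_i).
    """
    n = len(grads)
    x = np.tile(np.asarray(x0, dtype=float), (n, 1))
    for _ in range(iters):
        g = np.stack([grads[i](x[i]) for i in range(n)])
        x = W @ x - step * g          # consensus mixing + local gradient step
    return x

# three agents minimizing f_i(x) = (x - t_i)^2 over a complete graph;
# the minimizer of the sum is the mean of the targets
targets = [0.0, 3.0, 6.0]
grads = [lambda z, t=t: 2.0 * (z - t) for t in targets]
W = np.full((3, 3), 1.0 / 3.0)        # doubly stochastic mixing matrix
x = decentralized_gradient_descent(grads, W, [0.0])
```

With a constant step size the agents converge only to a neighbourhood of the optimum whose radius scales with the step; the consensus-variable and curvature-guided constructions of the article are designed to remove this bias and accelerate convergence.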
We are interested in solving linear time-dependent index-one differential algebraic equations (DAEs) by parallel asynchronous algorithms. We give a new class of parallel iterative methods whose convergence is established through the study of a special fixed-point mapping. These new algorithms can be described as asynchronous multisplitting waveform relaxation methods for linear differential algebraic systems. (C) 2001 Elsevier Science Inc. All rights reserved.
The random projection algorithm is of interest for constrained optimization when the constraint set is not known in advance or when projection onto the whole constraint set is computationally prohibitive. This paper presents a distributed random projection algorithm for constrained convex optimization problems that can be used by multiple agents connected over a time-varying network, where each agent has its own objective function and its own constraint set. We prove that the iterates of all agents converge to the same point in the optimal set almost surely. Experiments on distributed support vector machines demonstrate good performance of the algorithm.
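The core step, a gradient step followed by projection onto one randomly sampled component of the constraint set, can be sketched in a single-agent form (the paper's algorithm additionally mixes iterates with network neighbours; the toy problem and names below are ours).

```python
import random

def random_projection_method(grad, constraints, x0, iters=2000, seed=1):
    """Single-agent simplification of the random projection method.

    Instead of projecting onto the full (possibly expensive)
    intersection, each iteration projects onto ONE randomly chosen
    component set, with diminishing step sizes.
    """
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, iters + 1):
        step = 1.0 / k                              # diminishing steps
        x = [xi - step * gi for xi, gi in zip(x, grad(x))]
        x = rng.choice(constraints)(x)              # random component projection
    return x

# minimize ||x - (2, 2)||^2 over the box [0, 1]^2, written as the
# intersection of two component sets, one per coordinate bound
grad = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0)]
proj0 = lambda x: [min(max(x[0], 0.0), 1.0), x[1]]
proj1 = lambda x: [x[0], min(max(x[1], 0.0), 1.0)]
x = random_projection_method(grad, [proj0, proj1], [0.0, 0.0])
```

Because each component set is sampled infinitely often and the steps diminish, the iterates approach the constrained minimizer (1, 1) even though the full box is never projected onto in a single step.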
This paper proposes the Triangularly Preconditioned Primal-Dual algorithm, a new primal-dual algorithm for minimizing the sum of a Lipschitz-differentiable convex function and two possibly nonsmooth convex functions, one of which is composed with a linear mapping. We devise a randomized block-coordinate (BC) version of the algorithm which converges under the same stepsize conditions as the full algorithm. It is shown that both the original and the BC scheme feature a linear convergence rate when the functions involved are either piecewise linear-quadratic or satisfy a certain quadratic growth condition (which is weaker than strong convexity). Moreover, we apply the developed algorithms to the problem of multiagent optimization on a graph, thus obtaining novel synchronous and asynchronous distributed methods. The proposed algorithms are fully distributed in the sense that the updates and the stepsizes of each agent depend only on local information. In fact, no prior global coordination is required. Finally, we showcase an application of our algorithm in distributed formation control.
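For orientation, a closely related unpreconditioned primal-dual scheme of Condat-Vũ type for min_x f(x) + g(x) + h(Lx) can be sketched as follows; the paper's algorithm differs by its triangular preconditioning, and the function names and toy instance here are purely illustrative.

```python
def primal_dual(grad_f, prox_g, prox_h_conj, L, Lt, x0, y0, tau, sigma, iters=100):
    """Condat-Vu-type primal-dual iteration for min_x f(x) + g(x) + h(Lx).

    x-update: forward step on f, backward step on g.
    y-update: ascent on the dual of h, using the extrapolated primal point.
    """
    x, y = x0, y0
    for _ in range(iters):
        x_new = prox_g(x - tau * (grad_f(x) + Lt(y)), tau)
        y = prox_h_conj(y + sigma * L(2 * x_new - x), sigma)
        x = x_new
    return x, y

# toy scalar instance: min_x 0.5*(x - 3)^2 + |x|
# (soft-thresholding gives the solution x* = 2)
x, y = primal_dual(
    grad_f=lambda x: x - 3.0,
    prox_g=lambda v, t: v,                              # g = 0
    prox_h_conj=lambda v, s: max(-1.0, min(1.0, v)),    # h = |.|, h* = indicator of [-1, 1]
    L=lambda x: x, Lt=lambda y: y,
    x0=0.0, y0=0.0, tau=0.5, sigma=0.5,
)
```

The stepsizes satisfy the usual condition tau*(Lf/2 + sigma*||L||^2) <= 1 for this instance; in the distributed application of the paper, L encodes the graph coupling and each agent holds one block of x and y.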
A method is presented for defining a quantity that measures the asynchronicity of linear algorithms. It is demonstrated that every linear algorithm that calculates a set of Q linearly independent linear forms in k variables must involve additions and subtractions. The result is also applied to the evaluation of a set of bilinear forms. This study is applied to discrete Fourier transform computational problems and to matrix multiplication. The extension to convolution, cyclic convolution, and several other linear and bilinear computational problems is immediate. The presentation that leads to the definition of asynchronicity is modified, simplified, and substantially shortened.
We consider iterative algorithms of the form x := f(x), executed by a parallel or distributed computing system. We first consider synchronous executions of such iterations and study their communication requirements, as well as issues related to processor synchronization. We also discuss the parallelization of iterations of the Gauss-Seidel type. We then consider asynchronous implementations whereby each processor iterates on a different component of x, at its own pace, using the most recently received (but possibly outdated) information on the remaining components of x. While certain algorithms may fail to converge when implemented asynchronously, a large number of positive convergence results are available. We classify asynchronous algorithms into three main categories, depending on the amount of asynchronism they can tolerate, and survey the corresponding convergence results. We also discuss issues related to their termination.
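The asynchronous model described above is easy to simulate: at each event one component of x is updated from possibly outdated snapshots of the others. Below is a sketch, assuming f is a max-norm contraction, in which case the classical convergence conditions hold for any bounded delays; the delay model and the example f are illustrative.

```python
import random

def asynchronous_iteration(f, x0, iters=200, max_delay=3, seed=0):
    """Simulated asynchronous execution of x := f(x).

    At each step one component is updated using stale (bounded-delay)
    values of the other components.  For a max-norm contraction f the
    iterates still converge to the unique fixed point.
    """
    rng = random.Random(seed)
    n = len(x0)
    history = [list(x0)]              # past iterates, to draw stale reads from
    x = list(x0)
    for _ in range(iters):
        i = rng.randrange(n)          # processor i wakes up
        d = rng.randrange(max_delay + 1)
        stale = list(history[max(0, len(history) - 1 - d)])
        stale[i] = x[i]               # a processor's own component is current
        x[i] = f(stale)[i]
        history.append(list(x))
    return x

# f(x) = A x + b with max-norm ||A|| < 1 (row sums 0.5); fixed point (2, 2)
f = lambda x: [0.5 * x[0] + 1.0, 0.25 * x[0] + 0.25 * x[1] + 1.0]
x = asynchronous_iteration(f, [0.0, 0.0])
```

Despite the stale reads and arbitrary interleaving, the contraction property forces convergence, which is the prototype of the positive results surveyed in the paper.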
We consider algorithms for solving structured convex optimization problems over a network of agents with communication delays. It is assumed that each agent performs its local updates using possibly outdated information from its neighbours, under the assumption that the delay with respect to each neighbour is bounded but otherwise arbitrary. The private objective of each agent is represented by the sum of two possibly nonsmooth functions, one of which is composed with a linear mapping. The global optimization problem is the aggregate of local cost functions and a common Lipschitz-differentiable function. When the coupling between agents is represented only through the common function, the Vũ-Condat primal-dual algorithm is studied. In the case when the linear maps introduce additional coupling between agents, a new algorithm is developed. Moreover, a randomized variant of this algorithm is presented that allows the agents to wake up at random and independently from one another. The convergence of the proposed algorithms is established under strong convexity assumptions.
We consider the recently proposed parallel variable distribution (PVD) algorithm of Ferris and Mangasarian [4] for solving optimization problems in which the variables are distributed among p processors. Each processor has the primary responsibility for updating its block of variables while allowing the remaining “secondary” variables to change in a restricted fashion along some easily computable directions. We propose useful generalizations that consist, for the general unconstrained case, of replacing exact global solution of the subproblems by a certain natural sufficient descent condition, and, for the convex case, of inexact subproblem solution in the PVD algorithm. These modifications are key features of the algorithm that have not been analyzed before. The proposed modified algorithms are more practical and make it easier to achieve good load balancing among the parallel processors. We present a general framework for the analysis of this class of algorithms and derive some new and improved linear convergence results for problems with weak sharp minima of order ? and for strongly convex problems. We also show that nonmonotone synchronization schemes are admissible, which further improves the flexibility of the PVD approach.
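A much-simplified PVD step can be sketched as follows, with the “secondary” variables held fixed rather than moved along directions (the forgetful special case, not the full Ferris-Mangasarian scheme), and with a synchronization step that simply keeps the best candidate; all names and the toy problem are illustrative.

```python
def pvd_step(f, block_min, x, blocks):
    """One simplified parallel variable distribution step.

    Each 'processor' l minimizes f over its own block of variables,
    with the other blocks frozen at their current values.  The
    synchronization step keeps the candidate with the lowest cost,
    which guarantees monotone descent.
    """
    candidates = []
    for blk in blocks:                    # embarrassingly parallel in practice
        y = block_min(list(x), blk)       # exact minimization over block blk
        candidates.append(y)
    return min(candidates, key=f)         # synchronization: best point wins

# separable toy problem: f(x) = sum_i (x_i - t_i)^2 with two blocks
targets = [0.0, 1.0, 2.0, 3.0]
f = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, targets))

def block_min(y, blk):                    # exact minimizer over the block
    for i in blk:
        y[i] = targets[i]
    return y

x = [10.0] * 4
for _ in range(2):
    x = pvd_step(f, block_min, x, blocks=[[0, 1], [2, 3]])
```

The generalizations in the paper replace the exact block minimization by a sufficient descent condition (or inexact solves) and allow nonmonotone synchronization, which this monotone best-candidate sketch does not capture.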