We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results.
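The broadcast protocol described above lends itself to a short simulation. The sketch below is a minimal illustration, not the paper's exact method: it assumes scalar quadratic agent objectives f_i(x) = (x - b_i)^2, additive Gaussian gradient noise, an invented link-failure probability p_fail, and per-agent diminishing stepsizes driven by local update counters.

```python
import numpy as np

# Hedged sketch of a broadcast-based asynchronous subgradient method with
# random link failures; all problem data and parameters are illustrative.
rng = np.random.default_rng(0)
n_agents = 5
b = rng.normal(size=n_agents)          # each agent's private target
x = np.zeros(n_agents)                 # local iterates
counts = np.zeros(n_agents, dtype=int) # per-agent update counters
p_fail = 0.2                           # assumed link-failure probability

def noisy_grad(i, xi):
    # stochastic gradient of f_i(x) = (x - b_i)^2 with bounded-variance noise
    return 2.0 * (xi - b[i]) + rng.normal(scale=0.1)

for _ in range(20000):
    i = rng.integers(n_agents)         # a random agent wakes up and broadcasts
    for j in range(n_agents):
        if j == i or rng.random() < p_fail:
            continue                   # link failure: agent j misses the broadcast
        x[j] = 0.5 * (x[j] + x[i])     # mix received value with own iterate
        counts[j] += 1
        x[j] -= (1.0 / counts[j]) * noisy_grad(j, x[j])  # local diminishing step

print("agent estimates:", x, " optimum:", b.mean())
```

Each agent's stepsize depends only on its own update count, mirroring the requirement that stepsizes are chosen independently, without coordination.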
In this article, we study the problem of minimizing the sum of potentially nondifferentiable convex cost functions with partially overlapping dependencies in an asynchronous manner, where communication in the network is not coordinated. We study the behavior of an asynchronous algorithm based on dual decomposition and block coordinate subgradient methods under assumptions weaker than those used in the literature. At the same time, we allow different agents to use local stepsizes with no global coordination. Sufficient conditions are provided for almost sure convergence to the solution of the optimization problem. Under additional assumptions, we establish a sublinear convergence rate that, in turn, can be strengthened to a linear rate if the problem is strongly convex and has Lipschitz gradients. We also extend available results in the literature by allowing multiple and potentially overlapping blocks to be updated at the same time, with nonuniform and potentially time-varying probabilities assigned to different blocks. A numerical example is provided to illustrate the effectiveness of the algorithm.
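As a concrete stand-in for this kind of scheme, the following hedged sketch runs randomized block-coordinate gradient ascent on the dual of an equality-constrained quadratic program, with overlapping blocks, nonuniform selection probabilities, and per-block local stepsizes; the primal problem, block structure, and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Dual decomposition sketch: min 0.5*||x||^2 s.t. Ax = b, with randomized
# block-coordinate ascent on the dual variables lam (one per constraint).
rng = np.random.default_rng(1)
m, n = 6, 10
A = rng.normal(size=(m, n))
b_vec = rng.normal(size=m)
lam = np.zeros(m)                              # dual variables
blocks = [np.arange(0, 3), np.arange(2, 6)]    # overlapping constraint blocks
probs = [0.7, 0.3]                             # nonuniform block probabilities
steps = [0.02, 0.05]                           # per-block local stepsizes

for _ in range(5000):
    k = rng.choice(len(blocks), p=probs)       # random, possibly overlapping block
    x = -A.T @ lam                             # primal minimizer of the Lagrangian
    r = A @ x - b_vec                          # dual gradient = constraint residual
    lam[blocks[k]] += steps[k] * r[blocks[k]]  # ascent on the chosen block only

print("constraint residual norm:", np.linalg.norm(A @ (-A.T @ lam) - b_vec))
```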
Asynchronous algorithms have the potential to be more efficient than synchronized algorithms in multiprocessors because the overheads associated with synchronization are removed. Sufficient conditions for the convergence of numerical asynchronous iterations have been established; however, relaxation procedures are also common in non-numerical applications. In this paper, we introduce sufficient conditions for the convergence of asynchronous iterations defined on any set of data, finite or infinite, countable or not. The sufficient conditions are then applied to the scene-labeling problem. A multitasked implementation of the scene-labeling algorithm in a distributed global memory system is detailed. In this implementation, neither synchronization nor critical sections are necessary to enforce correctness of execution.
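A toy instance of such a non-numerical asynchronous iteration is discrete relaxation for labeling: each node keeps a shrinking set of candidate labels, and because the discarding operator is monotone, any chaotic update order reaches the same fixed point, which is why no synchronization is needed. The graph, label set, and compatibility relation below are invented for illustration.

```python
import random

# Toy scene-labeling relaxation: labels[i] only ever shrinks, so updates
# may be applied in any order without locks. All data are made up.
nodes = [0, 1, 2]
neighbors = {0: [1], 1: [0, 2], 2: [1]}
labels = {0: {"A"}, 1: {"A", "B", "C"}, 2: {"A", "B", "C"}}
compatible = {("A", "B"), ("B", "A"), ("C", "C")}  # allowed neighbor pairs

def relax(i):
    # keep a label only if every neighbor still offers a compatible label
    keep = {l for l in labels[i]
            if all(any((l, m) in compatible for m in labels[j])
                   for j in neighbors[i])}
    changed = keep != labels[i]
    labels[i] = keep
    return changed

random.seed(0)
changed = True
while changed:                     # chaotic sweeps in random order
    order = nodes[:]
    random.shuffle(order)
    changed = False
    for i in order:
        changed = relax(i) or changed

print(labels)   # the fixed point is independent of the update order
```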
One of the most important problems in the field of distributed optimization is the problem of minimizing a sum of local convex objective functions over a networked system. Most of the existing work in this area focuses on developing distributed algorithms in a synchronous setting under the presence of a central clock, where the agents need to wait for the slowest one to finish its update before proceeding to the next iterate. Asynchronous distributed algorithms remove the need for a central coordinator, reduce the synchronization wait, and allow some agents to compute faster and execute more iterations. In the asynchronous setting, the only known algorithms for solving this problem achieve either a linear or a sublinear rate of convergence. In this paper, we build upon the existing literature to develop and analyze an asynchronous Newton-based method to solve a penalized version of the problem. We show that this algorithm guarantees almost sure convergence with a global linear and a local quadratic rate in expectation. Numerical studies confirm the superior performance of our algorithm against other asynchronous methods.
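To make the flavor of such updates concrete, here is a heavily hedged serial sketch of one-agent-at-a-time Newton-type steps on a penalized consensus objective F(x) = sum_i (x_i - b_i)^2 + (rho/2) x^T L x; the penalty, graph, and update rule are illustrative assumptions, not the algorithm analyzed in the abstract.

```python
import numpy as np

# One randomly chosen agent per event takes a local Newton step on the
# penalized consensus objective; all data are illustrative.
rng = np.random.default_rng(2)
n = 4
b = rng.normal(size=n)
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])            # path-graph Laplacian
rho = 5.0                                       # assumed penalty weight
x = np.zeros(n)

for _ in range(200):
    i = rng.integers(n)                         # one agent wakes up
    g = 2.0 * (x[i] - b[i]) + rho * (L[i] @ x)  # local gradient of F
    h = 2.0 + rho * L[i, i]                     # local diagonal Hessian entry
    x[i] -= g / h                               # local Newton step

print("penalized solution:", x)
```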
We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms. The results are simple to apply and give explicit estimates for how the degree of asynchrony impacts the convergence rates of the iterates. Our results shorten, streamline and strengthen existing convergence proofs for several asynchronous optimization methods and allow us to establish convergence guarantees for popular algorithms that were thus far lacking a complete theoretical understanding. Specifically, we use our results to derive better iteration complexity bounds for proximal incremental aggregated gradient methods, to obtain tighter guarantees depending on the average rather than maximum delay for the asynchronous stochastic gradient descent method, to provide less conservative analyses of the speedup conditions for asynchronous block-coordinate implementations of Krasnosel'skii-Mann iterations, and to quantify the convergence rates for totally asynchronous iterations under various assumptions on communication delays and update rates.
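The prototypical iteration these results cover is a gradient step evaluated at a stale iterate x_{k-d_k} with bounded delay. The sketch below simulates such a delayed-gradient method on an invented least-squares problem, with an assumed delay-aware stepsize of order 1/(L(tau+1)).

```python
import numpy as np

# Delayed-gradient iteration: each step uses the gradient at a stale
# iterate with random bounded delay. Problem data are illustrative.
rng = np.random.default_rng(3)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
grad = lambda z: A.T @ (A @ z - b)      # gradient of 0.5*||Az - b||^2

tau_max = 5                              # assumed maximum staleness
history = [np.zeros(5)]                  # iterate history for stale reads
step = 0.5 / np.linalg.norm(A, 2) ** 2 / (tau_max + 1)  # delay-aware stepsize

for k in range(2000):
    d = rng.integers(min(tau_max, k) + 1)      # random delay d_k
    stale = history[len(history) - 1 - d]      # read x_{k - d_k}
    history.append(history[-1] - step * grad(stale))

print("gradient norm at final iterate:", np.linalg.norm(grad(history[-1])))
```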
We introduce and analyze stochastic optimization methods where the input to each update is perturbed by bounded noise. We show that this framework forms the basis of a unified approach to analyzing asynchronous implementations of stochastic optimization algorithms, by viewing them as serial methods operating on noisy inputs. Using our perturbed iterate framework, we provide new analyses of the HOGWILD! algorithm and asynchronous stochastic coordinate descent that are simpler than earlier analyses, remove many assumptions of previous models, and in some cases yield improved upper bounds on the convergence rates. We proceed to apply our framework to develop and analyze KROMAGNON: a novel, parallel, sparse stochastic variance-reduced gradient (SVRG) algorithm. We demonstrate experimentally on a 16-core machine that the sparse and parallel version of SVRG is in some cases more than four orders of magnitude faster than the standard SVRG algorithm.
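A minimal lock-free sketch in the spirit of HOGWILD! follows: several threads update a shared parameter vector without locks, using sparse stochastic gradients of a least-squares loss. Problem sizes, sparsity pattern, and stepsize are made up, and Python's GIL means this only caricatures true hardware-level races.

```python
import threading
import numpy as np

# Lock-free parallel SGD on sparse least-squares data; each thread reads
# and writes the shared vector w without synchronization.
rng = np.random.default_rng(4)
n_features, n_samples = 50, 400
rows = [rng.choice(n_features, size=3, replace=False) for _ in range(n_samples)]
vals = [rng.normal(size=3) for _ in range(n_samples)]
y = rng.normal(size=n_samples)
w = np.zeros(n_features)                 # shared, updated without locking

def worker(seed):
    r = np.random.default_rng(seed)
    for _ in range(20000):
        i = r.integers(n_samples)
        idx, a = rows[i], vals[i]
        err = a @ w[idx] - y[i]          # possibly inconsistent read is tolerated
        w[idx] -= 0.01 * err * a         # sparse write touches 3 coordinates

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("trained without locks; ||w|| =", np.linalg.norm(w))
```

Sparsity is what makes this work well in practice: concurrent updates rarely touch the same coordinates, so the "perturbation" from races stays small.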
The asymptotic behavior of a distributed, asynchronous stochastic approximation scheme is analyzed in terms of a limiting nonautonomous differential equation. The relation between the latter and the relative values of suitably rescaled relative frequencies of updates of different components is underscored.
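For orientation, the standard form of such a scheme and its ODE limit can be written as follows; this is a sketch with assumed Borkar-style notation, where Y_n denotes the set of components updated at step n and the lambda_i(t) are the rescaled relative update frequencies.

```latex
% Hedged sketch of an asynchronous stochastic approximation recursion and
% its limiting nonautonomous ODE; notation is assumed, not quoted.
\[
  x_i(n+1) = x_i(n) + a(n)\,\mathbf{1}\{i \in Y_n\}
             \bigl[h_i(x(n)) + M_i(n+1)\bigr],
\]
\[
  \dot{x}(t) = \Lambda(t)\, h(x(t)),
  \qquad
  \Lambda(t) = \mathrm{diag}\bigl(\lambda_1(t), \dots, \lambda_d(t)\bigr),
\]
% where a(n) are stepsizes, M_i(n+1) is martingale noise, and each
% lambda_i(t) is the suitably rescaled relative frequency with which
% component i is updated.
```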
In this paper, we study the convergence properties of a randomized block-coordinate descent algorithm for the minimization of a composite convex objective function, where the block-coordinates are updated asynchronously and randomly according to an arbitrary probability distribution. We prove that the iterates generated by the algorithm form a stochastic quasi-Fejér sequence and thus converge almost surely to a minimizer of the objective function. Moreover, we prove a general sublinear rate of convergence in expectation for the function values and a linear rate of convergence in expectation under an error bound condition of Tseng type. Under the same condition, strong convergence of the iterates is provided as well as their linear convergence rate.
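Here is a hedged sketch of the kind of iteration being analyzed, instantiated for the composite objective 0.5*||Ax - b||^2 + lam*||x||_1 with coordinates drawn from a nonuniform distribution; the data, probabilities, and stepsizes are illustrative.

```python
import numpy as np

# Randomized proximal coordinate descent on a lasso-type composite
# objective; coordinate probabilities follow an arbitrary distribution.
rng = np.random.default_rng(5)
A = rng.normal(size=(30, 8))
b = rng.normal(size=30)
lam = 0.5
Li = (A ** 2).sum(axis=0)                # coordinate-wise Lipschitz constants
p = Li / Li.sum()                        # nonuniform selection probabilities
x = np.zeros(8)

soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # prox of t*|.|

for _ in range(5000):
    j = rng.choice(8, p=p)               # random coordinate, arbitrary distribution
    g = A[:, j] @ (A @ x - b)            # partial gradient of the smooth part
    x[j] = soft(x[j] - g / Li[j], lam / Li[j])  # proximal step on coordinate j

print("objective:", 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum())
```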
Asynchronous iterations often converge under different conditions than their synchronous counterparts. In this paper we will study the global convergence of Jacobi-Newton-like methods for nonlinear equations Fx = 0. It is a known fact that the synchronous algorithm converges monotonically if F is a convex M-function and the starting values x(0) and y(0) meet the condition Fx(0) ≤ 0 ≤ Fy(0). In the paper it will be shown which modifications are necessary to guarantee a similar convergence behavior for an asynchronous computation.
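The following toy sketch shows a chaotic-order Jacobi-Newton iteration for an invented system F(x) = Ax + x^3 - c = 0, where A is an M-matrix and the cubic acts componentwise; starting from x = 0 gives Fx(0) = -c ≤ 0, mimicking the subsolution condition above. This is an illustration, not the method studied in the paper.

```python
import numpy as np

# Asynchronous (random component order) Jacobi-Newton iteration for
# F(x) = Ax + x^3 - c = 0 with an M-matrix A; all data are illustrative.
rng = np.random.default_rng(6)
A = np.array([[ 3., -1.,  0.],
              [-1.,  3., -1.],
              [ 0., -1.,  3.]])          # diagonally dominant M-matrix
c = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)                          # F(0) = -c <= 0: a subsolution start

for _ in range(300):
    i = rng.integers(3)                  # components updated in chaotic order
    Fi = A[i] @ x + x[i] ** 3 - c[i]     # i-th component of F
    x[i] -= Fi / (A[i, i] + 3.0 * x[i] ** 2)  # one Newton step on component i

print("x =", x, " residual =", np.linalg.norm(A @ x + x ** 3 - c))
```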
In this paper we propose an algorithm for vehicle coordination at intersections in order to avoid collisions within the intersection area while optimising an objective given as the sum of individual costs associated w...