An asynchronous algorithm for the integration of reaction-diffusion equations for inhomogeneous excitable media is described. Since many physical systems are inhomogeneous, with either the local kinetics or the diffusion or conduction properties varying significantly in space, integration schemes must be able to account for wide variations in the temporal and spatial scales of the solutions. The asynchronous algorithm utilizes a fixed spatial grid and automatically adjusts the time step locally to achieve an efficient simulation where the errors in the solution are controlled. The scheme does not depend on the specific form of the local kinetics and is easily applied to systems with complex geometries. (C) 2000 American Institute of Physics. [S1054-1500(00)00304-9]
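The key idea, local time steps on a fixed spatial grid, can be sketched as follows. This is a hypothetical minimal illustration (the function `async_diffuse`, the event queue, and the per-cell step sizes `dt_local` are our own constructions, not the paper's scheme): each grid cell carries its own clock, and the most-lagging cell is advanced first with a forward-Euler step using whatever neighbor values are currently available.

```python
import heapq
import numpy as np

def async_diffuse(u, D, dt_local, t_end):
    """Asynchronous explicit diffusion on a periodic 1D grid: each cell keeps
    its own clock and is advanced with its own time step (toy local
    time-stepping sketch, not the paper's exact algorithm)."""
    n = len(u)
    t = np.zeros(n)                      # per-cell local time
    heap = [(0.0, i) for i in range(n)]  # advance the most-lagging cell first
    heapq.heapify(heap)
    while heap:
        ti, i = heapq.heappop(heap)
        if ti >= t_end:
            continue                     # this cell has reached the end time
        lap = u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]  # periodic Laplacian
        dt = min(dt_local[i], t_end - ti)
        u[i] += dt * D * lap             # forward-Euler update for this cell
        t[i] = ti + dt
        heapq.heappush(heap, (t[i], i))
    return u
```

In the paper's setting, the per-cell steps would be chosen adaptively from a local error estimate rather than prescribed in advance, which is where the error control enters.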
ISBN:
(Print) 9789881563903
This paper proposes an asynchronous algorithm for the distributed optimization problem (Asy-DOP) in a multi-agent network with gradient noise. The algorithm can be implemented in an asynchronous, distributed way. Its objective function is the sum of the local functions of multiple nodes in the network; each node knows only its own local objective function and can exchange information with its neighbors. The algorithm assumes an undirected connected graph and requires that the objective function be Lipschitz continuous. The step-size of the proposed algorithm is homogeneous, and when the step-size lies in a suitable range, the convergence rate is proved to be O(1/√k), where k is the number of iterations.
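As a rough illustration of this setting (a hedged sketch, not the Asy-DOP algorithm itself; the quadratic local objectives, the mixing rule, and the noise level are our assumptions), consider nodes minimizing a sum of local quadratics over an undirected graph with a homogeneous O(1/√k) step-size, where one node at a time wakes up asynchronously:

```python
import random
import numpy as np

def asy_dop(a, edges, iters=20000, seed=0):
    """Toy asynchronous distributed gradient sketch: node i minimizes
    f_i(x) = (x - a[i])**2 / 2, so the network-wide optimum is mean(a).
    At each event one node mixes with its neighbors' current values and
    takes a noisy gradient step with a diminishing 1/sqrt(k) step-size."""
    rng = random.Random(seed)
    noise = np.random.default_rng(seed)
    n = len(a)
    nbrs = {i: [] for i in range(n)}
    for u, v in edges:
        nbrs[u].append(v); nbrs[v].append(u)
    x = [0.0] * n
    for k in range(1, iters + 1):
        i = rng.randrange(n)                         # node i wakes asynchronously
        mix = (x[i] + sum(x[j] for j in nbrs[i])) / (1 + len(nbrs[i]))
        grad = (x[i] - a[i]) + noise.normal(0, 0.1)  # noisy local gradient
        x[i] = mix - grad / np.sqrt(k)               # homogeneous diminishing step
    return x
```

With enough events, all nodes should cluster near the minimizer of the sum, here mean(a), despite the gradient noise and the uncoordinated update order.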
In this paper, we propose a fully distributed algorithm for joint clock skew and offset estimation in wireless sensor networks based on belief propagation. In the proposed algorithm, each node can estimate its clock skew and offset in a completely distributed and asynchronous way: some nodes may update their estimates more frequently than others, using outdated messages from neighboring nodes. In addition, the proposed algorithm is robust to random packet loss. The algorithm does not require any centralized information processing or coordination, and is scalable with network size. It represents a unified framework that encompasses both synchronous and asynchronous algorithms for network-wide clock synchronization. It is shown analytically that the proposed asynchronous algorithm converges to the optimal estimates, with the estimation mean-square error at each node approaching the centralized Cramér-Rao bound under any network topology. Simulation results further show that the convergence speed is faster than that of a corresponding synchronous algorithm.
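A toy version of the asynchronous estimation loop might look as follows (offsets only, no skew; the Gauss-Seidel-style local least-squares update stands in for the paper's Gaussian belief-propagation messages, and all names here are hypothetical):

```python
import random

def async_clock_sync(meas, n, sweeps=2000, seed=1):
    """Hypothetical sketch of asynchronous clock-offset estimation: given
    noisy pairwise measurements meas[(i, j)] ~ theta_j - theta_i, each node
    repeatedly re-solves its local least-squares condition using its
    neighbors' current (possibly outdated) estimates."""
    nbrs = {i: [] for i in range(n)}
    for (i, j), d in meas.items():
        nbrs[i].append((j, d)); nbrs[j].append((i, -d))
    est = [0.0] * n                      # node 0 serves as the reference clock
    rng = random.Random(seed)
    for _ in range(sweeps):
        i = rng.randrange(1, n)          # the reference node never updates
        # local LS: theta_i = average over neighbors of (est_j - d_ij)
        est[i] = sum(est[j] - d for j, d in nbrs[i]) / len(nbrs[i])
    return est
```

Because each update uses whatever neighbor estimates happen to be current, the loop needs no global schedule, mirroring the asynchronous operation described in the abstract.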
Author:
Eckstein, Jonathan
Rutgers State Univ, Dept Management Sci & Informat Syst (MSIS), 100 Rockafeller Rd, Livingston Campus, Piscataway, NJ 08854, USA
Rutgers State Univ, RUTCOR, 100 Rockafeller Rd, Livingston Campus, Piscataway, NJ 08854, USA
This paper develops what is essentially a simplified version of the block-iterative operator splitting method already proposed by the author and P. Combettes, but with more general initialization conditions. It then describes one way of implementing this algorithm asynchronously under a computational model inspired by modern high-performance computing environments, which consist of interconnected nodes, each having multiple processor cores sharing a common local memory. The asynchronous implementation framework is then applied to derive an asynchronous algorithm resembling the alternating direction method of multipliers (ADMM) with an arbitrary number of blocks of variables. Unlike earlier proposals for asynchronous ADMM variants, the algorithm relies neither on probabilistic control nor on restrictive assumptions about the problem instance, instead making only standard convex-analytic regularity assumptions. It also allows the proximal parameters to range freely between arbitrary positive bounds, possibly varying with both iterations and subproblems.
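For intuition only, here is a minimal block-iterative consensus-ADMM sketch in which a random subset of blocks updates at each step. This is a toy stand-in under our own assumptions (quadratic blocks, scalar consensus variable), not Eckstein's operator-splitting scheme, which notably avoids the probabilistic control used below:

```python
import random

def block_admm(a, rho=1.0, iters=300, seed=2):
    """Toy block-iterative consensus ADMM for min_x sum_i (x - a[i])**2 / 2:
    at each outer step only a random subset of blocks refreshes its
    primal/dual pair, then the consensus variable z is recomputed from
    whatever iterates are currently available."""
    n = len(a)
    x = [0.0] * n; u = [0.0] * n; z = 0.0
    rng = random.Random(seed)
    for _ in range(iters):
        active = [i for i in range(n) if rng.random() < 0.5] or [rng.randrange(n)]
        for i in active:
            # prox of f_i: argmin (x - a_i)**2/2 + (rho/2)(x - z + u_i)**2
            x[i] = (a[i] + rho * (z - u[i])) / (1 + rho)
        z = sum(x[i] + u[i] for i in range(n)) / n   # consensus update
        for i in active:
            u[i] += x[i] - z                         # dual update, active blocks only
    return z
```

At the fixed point the consensus variable equals the minimizer of the sum, mean(a); the abstract's point is that convergence of the real method is proved without the random-activation assumption this toy relies on.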
Author:
Eckstein, Jonathan
Rutgers State Univ, Dept Management Sci & Informat Syst (MSIS), 100 Rockafeller Rd, Livingston Campus, Piscataway, NJ 08854, USA
Rutgers State Univ, RUTCOR, 100 Rockafeller Rd, Livingston Campus, Piscataway, NJ 08854, USA
This note gives an improved version of the proof of Proposition 4.2 in "A Simplified Form of Block-Iterative Operator Splitting, and an Asynchronous Algorithm Resembling the Multi-Block Alternating Direction Method of Multipliers," Journal of Optimization Theory and Applications 173(1): 155-182 (2017).
ISBN:
(Print) 9781467380409
We present a distributed asynchronous algorithm for solving the two-stage stochastic unit commitment problem. The algorithm uses Lagrangian relaxation to decompose the problem by scenarios and applies an incremental method to solve the dual problem. At each incremental dual iteration, the algorithm evaluates the dual function, providing a lower bound, and recovers a feasible commitment for the first-stage units, which (through a feasibility recovery process) yields an upper bound. Both the incremental dual iterations and the feasibility recovery are executed asynchronously, resulting in more efficient utilization of parallel processors. The method is tested on a model of the Central Western European system, for which it achieved convergence three times faster than an equivalent distributed synchronous algorithm.
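The incremental dual idea, updating the multiplier after each scenario subproblem instead of after a full dual-function evaluation, can be sketched on a toy problem (quadratic subproblems coupled by a single budget constraint; all names and the step-size rule are our assumptions, not the paper's unit-commitment formulation):

```python
def incremental_dual(a, B, iters=2000):
    """Toy incremental dual ascent in the spirit of scenario decomposition:
    relax the coupling constraint sum_s x_s = B for the subproblems
    min (x_s - a[s])**2 / 2, and update the multiplier lam after solving
    ONE scenario subproblem at a time."""
    n = len(a)
    lam = 0.0
    for k in range(1, iters + 1):
        s = (k - 1) % n                    # cycle through the scenarios
        x_s = a[s] + lam                   # argmin of f_s(x) - lam * x
        g = B / n - x_s                    # incremental subgradient component
        lam += g / k                       # diminishing step-size
    return lam
```

Here the dual optimum is lam* = (B - sum(a)) / n; each incremental step uses only one scenario's subproblem, which is what makes an asynchronous, per-scenario distribution of the work natural.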
Given a nonnegative, irreducible matrix P of spectral radius unity, there exists a positive vector π such that π = πP. If P also happens to be stochastic, then π gives the stationary distribution of the Markov chain whose state-transition probabilities are given by the elements of P. This paper gives an algorithm for computing π that is particularly well suited for parallel processing. The main attraction of our algorithm is that the timing and sequencing restrictions on individual processors are almost entirely eliminated; consequently, the necessary coordination between processors is negligible, and the enforced idle time is also negligible. Under certain mild and easily satisfied restrictions on P and on the implementation of the algorithm, the vectors x(·) of computed values are proved to converge to within a positive, finite constant of proportionality of π. It is also proved that a natural measure of the projective distance of x(·) from π vanishes geometrically fast, and at a rate for which a lower bound is given. We have conducted extensive experiments on random matrices P, and the results show that the improvement over the parallel implementation of the synchronous version of the algorithm is substantial, sometimes exceeding the synchronization penalty to which the latter is always subject.
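A minimal sketch of such an asynchronous iteration (assuming a stochastic P, so that normalizing the limit recovers the stationary distribution; the randomized single-component update order below is our illustration of "no timing restrictions", not the paper's implementation):

```python
import random
import numpy as np

def async_stationary(P, sweeps=500, seed=3):
    """Sketch of asynchronous computation of the left eigenvector pi = pi P
    of a stochastic matrix: each "processor" owns one component and
    repeatedly recomputes it from the current (possibly stale) values of the
    others, x_j <- sum_i x_i P[i, j], with no global synchronization. The
    iterate converges only up to a positive scale factor, so normalize at
    the end."""
    n = P.shape[0]
    x = np.ones(n)
    rng = random.Random(seed)
    for _ in range(sweeps * n):
        j = rng.randrange(n)             # component j is updated asynchronously
        x[j] = float(x @ P[:, j])        # uses whatever values are current
    return x / x.sum()                   # fix the constant of proportionality
```

The final normalization is exactly the "positive, finite constant of proportionality" of the abstract: the asynchronous iterates settle onto the ray spanned by π rather than onto π itself.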
ISBN:
(纸本)9781538637906
Deep neural networks (DNNs) learn hierarchical representations from big data in a multilayer network structure and have achieved great success in many fields such as computer vision and speech analysis. Since a DNN usually contains several billion parameters, the asynchronous stochastic gradient descent (ASGD) algorithm is often used to train an effective DNN model on a computer cluster. However, as the number of computing nodes and the data size grow, ASGD suffers from seriously slow convergence because the parameters may be wrongly updated by long-delayed gradients. In this paper, we propose a delay-compensated asynchronous Adam (DC-Adam) algorithm for training DNNs. In particular, DC-Adam updates the parameters with the moment increment, the ratio of the first moment to the second moment, to retain the advantages of the original Adam algorithm, and compensates each delayed gradient with the first-order term of its Taylor expansion. Since the delay-compensation technique reduces the error of delayed gradients, and the moment increment further counteracts the influence of the approximate compensation, DC-Adam converges much more rapidly than ASGD on a computer cluster with a moderate number of computing nodes. We theoretically analyze the ergodic convergence rate of DC-Adam and compare it with that of DC-ASGD. We implement DC-Adam on 61 computing nodes of a computer cluster and conduct image classification with LeNet and ResNet on the MNIST and CIFAR-10 datasets, respectively. The experimental results demonstrate that DC-Adam greatly accelerates training and achieves an almost linear speedup as the number of computing nodes increases.
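The delay-compensation step can be illustrated in isolation. The sketch below shows the DC-ASGD-style first-order Taylor correction that DC-Adam builds on (the diagonal Hessian approximation lam * g * g and all parameter values here are assumptions for illustration; DC-Adam additionally routes the compensated gradient through Adam's first/second-moment estimates):

```python
import numpy as np

def dc_update(w_now, w_stale, grad_stale, lr=0.1, lam=0.5):
    """Delay-compensated gradient step: a worker computed grad_stale at the
    old parameters w_stale, but the server has since moved to w_now. The
    first-order Taylor term g(w_now) ~ g(w_stale) + H (w_now - w_stale) is
    applied with the Hessian H approximated by the cheap elementwise
    estimate lam * g * g (an assumption of this sketch)."""
    comp = grad_stale + lam * grad_stale * grad_stale * (w_now - w_stale)
    return w_now - lr * comp
```

On a quadratic f(w) = w**2 / 2 with w_stale = 1.0 and w_now = 0.8, the stale gradient 1.0 is corrected to 0.9, moving it toward the true gradient 0.8, which is the error reduction the abstract attributes to delay compensation.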