The alternating direction method of multipliers (ADMM) is a popular convex optimisation algorithm that can be implemented in a distributed manner. Applying it to the consensus optimisation problem, where a number of agents cooperatively solve an optimisation problem using locally available data, leads to a fully distributed algorithm which relies on local computations and communication between neighbours. In this study, the authors analyse the convergence of the distributed ADMM algorithm for solving a consensus optimisation problem over a lossy network, whose links are subject to failure. They present and analyse two different distributed ADMM-based algorithms, which differ in their network connectivity, storage and computational resource requirements. The first converges over a sequence of networks which are not identical but remain connected at every iteration. The second converges over a sequence of different networks whose union is connected. The former algorithm has lower computational complexity and storage requirements than the latter. Numerical experiments confirm the proposed theoretical analysis.
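The consensus scheme the abstract describes can be illustrated with a minimal sketch of standard consensus ADMM on a reliable, fully connected network (not the lossy-network variants the paper analyses). Each agent holds a scalar quadratic objective 0.5*(x - a[i])**2, so the consensus optimum is the mean of a; the closed-form x-update below is specific to this toy objective.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    """Consensus ADMM for min_x sum_i 0.5*(x - a[i])^2.

    Each agent i keeps a local copy x[i] and a scaled dual u[i];
    the consensus variable z is the average of x[i] + u[i].
    The optimum of this toy problem is the mean of a.
    """
    n = len(a)
    x = np.zeros(n)   # local primal variables
    u = np.zeros(n)   # scaled dual variables
    z = 0.0           # global consensus variable
    for _ in range(iters):
        # local x-update (closed form for the quadratic objective)
        x = (a + rho * (z - u)) / (1.0 + rho)
        # consensus z-update: average the local estimates
        z = np.mean(x + u)
        # dual ascent step
        u = u + (x - z)
    return z

a = np.array([1.0, 2.0, 3.0, 10.0])
print(consensus_admm(a))  # ≈ 4.0, the mean of a
```

In the fully distributed setting of the paper, the z-update averaging is replaced by communication with neighbours only, which is exactly the step affected by link failures.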
ISBN: (Print) 9781665464970
The distributed alternating direction method of multipliers (ADMM) is an effective algorithm for solving large-scale optimization problems. However, distributed ADMM still incurs massive computation and communication costs when processing high-dimensional data. To address this problem, we propose a distributed ADMM with sparse computation and Allreduce communication (SCAC-ADMM) which can process high-dimensional data effectively. In the algorithm, each node optimizes a sub-model of the target model in parallel; the target model is then obtained by aggregating all sub-models. The features in a sub-model are called associated features. In SCAC-ADMM, we first design a method for selecting associated features to determine the composition of each sub-model. This method can limit the dimension of the sub-model by setting appropriate parameters, and thereby limit the computation cost. Secondly, to reduce the communication traffic caused by transmitting high-dimensional parameters, we propose a novel Allreduce communication model which aggregates only the associated parameters in the sub-models. Experiments on high-dimensional datasets show that SCAC-ADMM has lower computation cost and higher communication efficiency than traditional distributed ADMM. When solving a large-scale logistic regression problem, SCAC-ADMM reduces the system time by 73% compared with traditional distributed ADMM.
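The sub-model aggregation idea can be sketched as follows. This is a hypothetical stand-in for the paper's Allreduce over sub-models: each node communicates only the parameters of its associated features (here a sparse index-to-value dict), and the global model is assembled by averaging wherever sub-models overlap. The function name and the averaging rule are illustrative assumptions, not SCAC-ADMM's actual aggregation.

```python
import numpy as np

def aggregate_submodels(submodels, dim):
    """Assemble a global model from sparse per-node sub-models.

    `submodels` is a list of dicts mapping feature index -> value;
    only these "associated" parameters would be transmitted.
    Features covered by several nodes are averaged (an assumed rule).
    """
    total = np.zeros(dim)
    count = np.zeros(dim)
    for sub in submodels:
        for j, v in sub.items():
            total[j] += v
            count[j] += 1
    out = np.zeros(dim)
    covered = count > 0
    out[covered] = total[covered] / count[covered]
    return out

# two nodes, overlapping on feature 2
subs = [{0: 1.0, 2: 3.0}, {2: 5.0, 4: 2.0}]
print(aggregate_submodels(subs, 5))  # [1. 0. 4. 0. 2.]
```

The communication saving is visible in the data structure: each node ships a handful of (index, value) pairs instead of a dense length-`dim` vector.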
ISBN: (Print) 9781665435741
The alternating direction method of multipliers (ADMM) is an efficient algorithm for solving large-scale machine learning problems in a distributed environment. To make full use of the hierarchical memory model in modern high-performance computing systems, this paper implements a hybrid MPI/OpenMP parallelization of the asynchronous ADMM algorithm (AH-ADMM). The AH-ADMM algorithm updates local variables in parallel with OpenMP threads and exchanges information between MPI processes, relieving memory and communication pressure by replacing multi-processing with multi-threading. Furthermore, for the SVM problem, AH-ADMM speeds up the solution of the sub-problems through an efficient parallel optimization strategy. The paper effectively combines the features of both algorithm design and the programming model. Experiments on the Ziqiang4000 high-performance cluster demonstrate that AH-ADMM scales better and runs faster than existing distributed ADMM algorithms implemented in pure MPI. AH-ADMM reduces communication overhead by up to 91.8% and increases the convergence rate by up to 36x. For large datasets, AH-ADMM scales well on clusters with over 129 cores.
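The intra-process layer of this hybrid design can be sketched with a thread pool standing in for OpenMP; the MPI inter-process exchange is omitted entirely. The sketch parallelizes the local x-updates of a consensus ADMM iteration over chunks of agents, using the same closed-form quadratic sub-problem as a placeholder for the paper's SVM sub-problems.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_x_update(a, z, u, rho=1.0, workers=4):
    """Thread-parallel local x-updates for one consensus ADMM step.

    A stand-in for the OpenMP layer of AH-ADMM: each thread handles
    a chunk of agents and writes into a disjoint slice of x.
    """
    idx_chunks = np.array_split(np.arange(len(a)), workers)
    x = np.empty_like(a)

    def update(idx):
        # closed-form solution of each local quadratic sub-problem
        x[idx] = (a[idx] + rho * (z - u[idx])) / (1.0 + rho)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(update, idx_chunks))  # drain to propagate errors
    return x

a = np.linspace(0.0, 1.0, 8)
print(parallel_x_update(a, z=0.5, u=np.zeros(8)))
```

Because the chunks index disjoint slices, the threads never write to the same element, which is the property that makes the shared-memory update safe.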
The number of devices connected to the internet is increasing at a tremendous rate. Connecting intelligent devices has created a new paradigm called the Internet of Things (IoT). Due to the significant volume of data generated in the IoT, cloud infrastructure alone cannot process this data, so fog computing has been proposed to supplement the cloud. One of the critical issues in fog computing is deciding how much of the workload should be offloaded to the cloud. In this paper, a multi-objective optimization problem that considers cooperation between fog nodes is presented. The goal is to minimize the response time, the energy cost, and the rental cost of cloud resources. This multi-objective problem is transformed into a single-objective optimization problem which is solved using the distributed alternating direction method of multipliers (ADMM) algorithm. Results are presented which show that this distributed algorithm converges to an optimal solution providing a trade-off between the total response time, the total energy cost in the fog nodes, and the total rental cost of cloud resources. It is also shown to outperform the well-known adversary algorithm.
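The transformation step can be illustrated with a weighted-sum scalarization of a toy offloading trade-off. The cost models below (response time falling with the offload fraction p, fog energy falling with p, cloud rent rising with p) and the weights are illustrative assumptions, not the paper's formulation, and the distributed ADMM solver itself is omitted; only the multi-objective-to-single-objective reduction is shown.

```python
def scalarized_offload(w_time=1.0, w_energy=1.0, w_rent=1.0, grid=10001):
    """Weighted-sum scalarization of a toy fog-offloading trade-off.

    p in [0, 1] is the fraction of workload offloaded to the cloud.
    Assumed toy costs: response time (1-p)^2, fog energy (1-p),
    cloud rental p^2. Returns the p minimizing the weighted sum.
    """
    def J(p):
        return (w_time * (1 - p) ** 2
                + w_energy * (1 - p)
                + w_rent * p ** 2)
    # brute-force grid search; the scalarized objective is convex,
    # so any 1-D convex solver would also work
    best_cost, best_p = min((J(i / (grid - 1)), i / (grid - 1))
                            for i in range(grid))
    return best_p

print(scalarized_offload())  # 0.75 for equal weights
```

Varying the weights traces out different points on the trade-off surface, which is what lets a single-objective solver such as distributed ADMM explore the multi-objective problem.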