In this paper, we propose a proximal parallel decomposition algorithm for solving optimization problems in which the objective function is the sum of m separable functions (i.e., they share no coupled variables) and the constraint set is the intersection of Cartesian products of some simple sets and a linear manifold. At each iteration, the m subproblems are solved simultaneously; each consists of a decomposed subproblem of the augmented Lagrangian function plus a quadratic proximal term. Hence the algorithm is named the 'proximal parallel splitting method'. We prove the global convergence of the proposed algorithm under the mild conditions that the underlying functions are convex and the solution set is nonempty. To make the subproblems easier to solve, some linearized versions of the proposed algorithm are also presented, together with their global convergence analysis. Finally, some preliminary numerical results are reported to support the efficiency of the new algorithms. (C) 2013 Elsevier B.V. All rights reserved.
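The iteration described above (all m subproblems solved simultaneously from the same iterate, each regularized by a quadratic proximal term) can be sketched on a toy two-block problem. This is a hypothetical instance chosen so the subproblems have closed-form minimizers, not an example from the paper; the parameter values beta and tau are assumptions.

```python
# Proximal parallel (Jacobi) splitting of the augmented Lagrangian, sketched on
#   minimize (x1 - 1)^2 + (x2 - 3)^2   subject to  x1 + x2 = 2,
# whose solution is (x1, x2) = (0, 2).  Both subproblems minimize the
# decomposed augmented Lagrangian plus a proximal term (tau/2)*(xi - xi_k)^2;
# here the functions are quadratic, so each update is in closed form.

beta = 1.0   # augmented Lagrangian penalty parameter (assumed)
tau = 2.0    # proximal parameter, large enough to make the parallel step stable

x1, x2, lam = 0.0, 0.0, 0.0
for _ in range(200):
    # Fully parallel step: both updates use the OLD iterate (x1, x2).
    x1_new = (2 * 1.0 + lam - beta * (x2 - 2.0) + tau * x1) / (2 + beta + tau)
    x2_new = (2 * 3.0 + lam - beta * (x1 - 2.0) + tau * x2) / (2 + beta + tau)
    x1, x2 = x1_new, x2_new
    # Multiplier update from the constraint residual x1 + x2 - 2.
    lam -= beta * (x1 + x2 - 2.0)
# (x1, x2) converges to (0, 2)
```

Without the proximal term (tau = 0), a plain Jacobi decomposition of this kind need not converge; the quadratic term is what makes the simultaneous updates safe.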
The augmented Lagrangian method (ALM) is a well-regarded algorithm for solving convex optimization problems with linear constraints. Recently, in He et al. [On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming, SIAM J. Optim. 25(4) (2015), pp. 2274-2312], it has been demonstrated that a straightforward Jacobian decomposition of ALM is not necessarily convergent when the objective function is the sum of functions without coupled variables. Then, Wang et al. [A note on augmented Lagrangian-based parallel splitting method, Optim. Lett. 9 (2015), pp. 1199-1212] proved the global convergence of the augmented Lagrangian-based parallel splitting method under the assumption that all objective functions are strongly convex. In this paper, we extend these results and derive the worst-case O(1/t) convergence rate of this method under both ergodic and non-ergodic conditions, where t represents the number of iterations. Furthermore, we show that the convergence rate can be improved from O(1/t) to o(1/t), and finally, we also demonstrate that this method can achieve global linear convergence when the involved functions satisfy some additional conditions.
Extrapolation with a parallel splitting method is discussed. The parallel splitting method reduces a multidimensional problem into independent one-dimensional problems and can improve the convergence order of space va...
Alternating directions methods (ADMs) are very effective for solving convex optimization problems with separable structure. However, when these methods are applied to solve convex optimization problems with three separable operators, their convergence has not yet been established. In this paper, we consider a class of constrained matrix optimization problems. The problem is first reformulated into a convex optimization problem with three separable operators, and then it is solved by a proposed partial parallel splitting method. The proposed method combines the parallel splitting (augmented Lagrangian) method (PSALM) and the alternating directions method (ADM), and it is referred to as PADALM for short. The main difference between PADALM and PSALM is that in PADALM, two operators are handled first by a parallel method, and then the third operator and the former two are dealt with by an alternating method. Finally, the convergence result for PADALM is established and numerical results are provided to show the efficacy of PADALM and its superiority over PSALM. Crown Copyright (C) 2010 Published by Elsevier Ltd. All rights reserved.
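The update order that distinguishes PADALM (two blocks in parallel, then the third block against the fresh values of the first two) can be sketched on a three-block toy problem. This is a hypothetical quadratic instance, not the matrix problem from the paper, and the proximal safeguard tau on the two parallel blocks is an assumption of this sketch.

```python
# PADALM-style update order on
#   minimize (x1-1)^2 + (x2-2)^2 + (x3-3)^2   s.t.  x1 + x2 + x3 = 3,
# solution (0, 1, 2).  Blocks x1, x2 are updated in parallel (Jacobi)
# from the old iterate; x3 is then updated with the NEW x1, x2
# (the alternating, Gauss-Seidel-like step); the multiplier comes last.

beta = 1.0   # augmented Lagrangian penalty (assumed)
tau = 1.0    # proximal term on the two parallel blocks (assumed safeguard)

x1, x2, x3, lam = 0.0, 0.0, 0.0, 0.0
for _ in range(300):
    # Parallel step: both updates read the old (x1, x2, x3).
    n1 = (2 * 1.0 + lam - beta * (x2 + x3 - 3.0) + tau * x1) / (2 + beta + tau)
    n2 = (2 * 2.0 + lam - beta * (x1 + x3 - 3.0) + tau * x2) / (2 + beta + tau)
    x1, x2 = n1, n2
    # Alternating step: x3 sees the freshly updated x1, x2.
    x3 = (2 * 3.0 + lam - beta * (x1 + x2 - 3.0)) / (2 + beta)
    # Multiplier update from the full constraint residual.
    lam -= beta * (x1 + x2 + x3 - 3.0)
```

Treating only two of the three blocks in parallel is exactly what sidesteps the unresolved three-operator case mentioned at the start of the abstract.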
This paper addresses a novel solution scheme for a special class of variational inequality problems which can be applied to model a Stackelberg game with one leader and three or more followers. In the scheme, the leader makes his decision first, and then the followers reveal their choices simultaneously based on the leader's strategy. Under mild conditions, we theoretically prove that the scheme obtains an equilibrium. The proposed approach is applied to solve a simple game and a traffic problem, and numerical results on the performance of the new method are reported.
The Douglas-Rachford algorithm (DRA) is a powerful optimization method for minimizing the sum of two convex (not necessarily smooth) functions. The vast majority of previous research dealt with the case when the sum has at least one minimizer. In the absence of minimizers, it was recently shown that for the case of two indicator functions, the DRA converges to a best approximation solution. In this paper, we present a new convergence result on the DRA applied to the problem of minimizing a convex function subject to a linear constraint. Indeed, a normal solution may be found even when the domain of the objective function and the linear subspace constraint have no point in common. As an important application, a new parallel splitting result is provided. We also illustrate our results through various examples.
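The standard DRA iteration for minimizing a sum f + g alternates proximal steps on f and g around a governing sequence z. A minimal sketch on a hypothetical one-dimensional example (this instance and the step size gamma = 1 are assumptions, and the sum here does have a minimizer, unlike the boundary cases the paper studies):

```python
# Douglas-Rachford for  minimize f(x) + g(x)  with
#   f(x) = |x|,   g(x) = 0.5 * (x - 3)^2;
# the minimizer of f + g is x* = 2.  With gamma = 1,
# prox_f is soft-thresholding and prox_g has a simple closed form.

def prox_f(v):
    # prox of |x| with gamma = 1: soft threshold at 1
    return max(abs(v) - 1.0, 0.0) * (1.0 if v > 0 else -1.0)

def prox_g(v):
    # prox of 0.5*(x-3)^2 with gamma = 1
    return (v + 3.0) / 2.0

z = 0.0
for _ in range(100):
    x = prox_f(z)            # proximal step on f
    y = prox_g(2 * x - z)    # proximal step on g at the reflected point
    z = z + y - x            # update of the governing sequence
# x converges to the minimizer 2 of f + g
```

Note that it is the shadow sequence x = prox_f(z), not z itself, that converges to the minimizer; z converges to a fixed point of the DRA operator (here z -> 3).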
In this paper, a hybrid splitting method is proposed for solving a smoothing Tikhonov regularization problem. At each iteration, the proposed method solves three subproblems. First of all, two subproblems are solved in parallel, and the multiplier associated with these two block variables is updated immediately afterwards. Then the third subproblem is solved in an alternating fashion with respect to the former two subproblems. Finally, the multiplier associated with the last two block variables is updated. Global convergence of the proposed method is proven under some suitable conditions. Some numerical experiments on discrete ill-posed problems (DIPPs) show the validity and efficiency of the proposed hybrid splitting method.
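The problem class targeted above, discrete ill-posed problems stabilized by Tikhonov regularization, can be illustrated independently of the splitting scheme. A minimal sketch on a hypothetical 2x2 instance (the matrix, noise, and weight mu are all assumptions, not data from the paper): a tiny perturbation of the right-hand side wrecks the naive solve, while the regularized normal equations stay stable.

```python
# Tikhonov regularization on a tiny discrete ill-posed problem.

def solve2(m, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((rhs[0] * d - b * rhs[1]) / det,
            (a * rhs[1] - rhs[0] * c) / det)

A = ((1.0, 1.0), (1.0, 1.0001))   # nearly singular matrix
b = (2.0001, 2.0000)              # noisy data; the noise-free solution is (1, 1)

x_naive = solve2(A, b)            # unstable: roughly (3, -1)

mu = 1e-2                         # regularization weight (assumed)
# Regularized normal equations  (A^T A + mu I) x = A^T b.
AtA = ((2.0 + mu, 1.0 + 1.0001),
       (1.0 + 1.0001, 1.0 + 1.0001 ** 2 + mu))
Atb = (b[0] + b[1], b[0] + 1.0001 * b[1])
x_reg = solve2(AtA, Atb)          # close to (1, 1)
```

The mu * I term bounds the smallest eigenvalue of the system away from zero, which is what damps the noise amplification; choosing mu (and adding smoothing terms) is where formulations like the paper's come in.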