Author: He, Xin (Xihua University, School of Science, Chengdu 610039, Sichuan, People's Republic of China)
In this paper, we introduce two accelerated primal-dual methods tailored to linearly constrained composite convex optimization problems, where the objective function is expressed as the sum of a possibly nondifferentiable function and a differentiable function with Lipschitz continuous gradient. The first method is the accelerated linearized augmented Lagrangian method (ALALM), which permits linearization of the differentiable function; the second method is the accelerated linearized proximal point algorithm (ALPPA), which enables linearization of both the differentiable function and the augmented term. By incorporating adaptive parameters, we demonstrate that ALALM achieves the O(1/k^2) convergence rate and the linear convergence rate under the assumptions of convexity and strong convexity, respectively. Additionally, we establish that ALPPA enjoys the O(1/k) convergence rate in the convex case and the O(1/k^2) convergence rate in the strongly convex case. We provide numerical results to validate the effectiveness of the proposed methods.
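To make the setup concrete, here is a minimal sketch of the basic (non-accelerated) linearized iteration that such methods build on, for min_x f(x) + g(x) subject to Ax = b, with f = mu*||.||_1 and g a simple quadratic. It linearizes both the smooth term and the augmented term (closer in spirit to ALPPA than to ALALM) and omits the paper's adaptive parameters and acceleration; all names and problem data are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_alm(A, b, c, mu=0.1, beta=1.0, iters=500):
    """Plain linearized ALM sketch for
        min_x  mu*||x||_1 + 0.5*||x - c||^2   s.t.  Ax = b.
    The smooth term g(x) = 0.5*||x - c||^2 and the augmented term are
    both linearized at x^k, so each step is one soft-thresholding."""
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    # Step size must respect the combined Lipschitz constant L_g + beta*||A||^2.
    tau = 1.0 / (1.0 + beta * np.linalg.norm(A, 2) ** 2)
    for _ in range(iters):
        grad = (x - c) + A.T @ (lam + beta * (A @ x - b))  # smooth part, linearized
        x = soft_threshold(x - tau * grad, tau * mu)        # prox of the nonsmooth f
        lam = lam + beta * (A @ x - b)                      # multiplier update
    return x, lam

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20)); b = A @ rng.standard_normal(20)
x, lam = linearized_alm(A, b, c=np.zeros(20))
print("feasibility ||Ax - b|| =", np.linalg.norm(A @ x - b))
```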
The augmented Lagrangian method (ALM) is a benchmark for solving convex minimization problems with linear constraints. When the objective function of the model under consideration is representable as the sum of some functions without coupled variables, a Jacobian or Gauss-Seidel decomposition is often implemented to decompose the ALM subproblems so that the functions' properties can be used more effectively in algorithmic design. The Gauss-Seidel decomposition of ALM has resulted in the very popular alternating direction method of multipliers (ADMM) for two-block separable convex minimization models, and recently it was shown in He et al. (Optimization Online, 2013) that the Jacobian decomposition of ALM is not necessarily convergent. In this paper, we show that if each subproblem of the Jacobian decomposition of ALM is regularized by a proximal term and the proximal coefficient is sufficiently large, the resulting scheme, to be called the proximal Jacobian decomposition of ALM, is convergent. We also show that an interesting application of the ADMM in Wang et al. (Pac J Optim, to appear), which first reformulates a multiple-block separable convex minimization model as a two-block counterpart and then applies the original ADMM directly, is closely related to the proximal Jacobian decomposition of ALM. Our analysis is conducted in the variational inequality context and is rooted in a good understanding of the proximal point algorithm.
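As a concrete reference point, the following sketch runs a proximal Jacobian decomposition of ALM on a toy separable problem with quadratic blocks, so each parallel subproblem reduces to a linear solve. The proximal coefficient tau is chosen by a crude heuristic rather than by the paper's analysis, and all problem data are made up for illustration.

```python
import numpy as np

def proximal_jacobian_alm(As, b, cs, beta=1.0, tau=None, iters=300):
    """Proximal Jacobian ALM sketch for
        min  sum_i 0.5*||x_i - c_i||^2   s.t.  sum_i A_i x_i = b.
    All blocks are updated in parallel (Jacobian style) from the old
    iterate, each with a proximal term (tau/2)*||x_i - x_i^k||^2;
    convergence requires tau sufficiently large."""
    m = b.shape[0]
    xs = [np.zeros(A.shape[1]) for A in As]
    lam = np.zeros(m)
    if tau is None:
        # Heuristic: scale with the number of blocks and beta*||A_i||^2.
        tau = beta * len(As) * max(np.linalg.norm(A, 2) ** 2 for A in As)
    for _ in range(iters):
        Ax = sum(A @ x for A, x in zip(As, xs))
        new_xs = []
        for A, x, c in zip(As, xs, cs):
            # Residual seen by block i uses only the OLD iterates of the others.
            r = Ax - A @ x - b + lam / beta
            # Optimality: ((1+tau)*I + beta*A^T A) x = c + tau*x_old - beta*A^T r
            H = (1.0 + tau) * np.eye(A.shape[1]) + beta * A.T @ A
            new_xs.append(np.linalg.solve(H, c + tau * x - beta * A.T @ r))
        xs = new_xs
        lam = lam + beta * (sum(A @ x for A, x in zip(As, xs)) - b)
    return xs, lam

A1 = np.array([[1.0, 0.0]]); A2 = np.array([[0.0, 2.0]])
xs, lam = proximal_jacobian_alm([A1, A2], np.array([2.0]),
                                [np.zeros(2), np.zeros(2)])
print("feasibility:", A1 @ xs[0] + A2 @ xs[1] - 2.0)  # residual shrinks with iters
```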
We propose two algorithms for finding (common) zeros of finitely many maximal monotone mappings in reflexive Banach spaces. These algorithms are based on the Bregman distance related to a well-chosen convex function and improve previous results. Finally, we mention two applications of our algorithms for solving equilibrium problems and convex feasibility problems.
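For orientation, the classical Euclidean case (Bregman function h(x) = 0.5*||x||^2, so the Bregman resolvent reduces to the usual resolvent (I + lam*T)^{-1}) can be sketched as follows for a monotone affine mapping; the reflexive-Banach-space machinery of the paper is not reproduced here.

```python
import numpy as np

def proximal_point_zero(M, q, lam=1.0, iters=200):
    """Classical proximal point sketch for a zero of the monotone affine
    mapping T(x) = M x + q (M positive semidefinite).  Each step applies
    the resolvent (I + lam*T)^{-1}, i.e. solves (I + lam*M) x = x^k - lam*q.
    This is the Euclidean special case of the Bregman resolvents above."""
    n = q.shape[0]
    x = np.zeros(n)
    R = np.eye(n) + lam * M
    for _ in range(iters):
        x = np.linalg.solve(R, x - lam * q)
    return x

M = np.array([[2.0, 0.0], [0.0, 1.0]]); q = np.array([-2.0, 3.0])
x = proximal_point_zero(M, q)
print(x, "residual:", M @ x + q)   # approaches the zero x* = -M^{-1} q
```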
We study a conical extension of averaged nonexpansive operators and the role it plays in the convergence analysis of fixed point algorithms. Various properties of conically averaged operators are systematically investigated, in particular their stability under relaxations, convex combinations, and compositions. We derive conical averagedness properties of resolvents of generalized monotone operators. These properties are then utilized to analyze the convergence of the proximal point algorithm, the forward-backward algorithm, and the adaptive Douglas-Rachford algorithm. Our study unifies, improves, and casts new light on recent studies of these topics.
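A small numerical illustration of the averagedness idea (the conical extension simply allows the relaxation parameter theta to exceed 1): relax a nonexpansive map and iterate it. The example map and parameters below are ours, not the paper's.

```python
import numpy as np

def relax(N, theta):
    """theta-relaxation of an operator: (1 - theta)*Id + theta*N.
    If N is nonexpansive, this map is theta-conically averaged by
    definition; for theta in (0, 1) it is averaged in the usual sense,
    while the conical notion also admits theta > 1."""
    return lambda x: (1 - theta) * x + theta * N(x)

# A nonexpansive map: metric projection onto the unit ball in R^2.
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

T = relax(proj_ball, 0.5)           # a 0.5-averaged operator
x = np.array([3.0, 4.0])
for _ in range(50):
    x = T(x)                         # Krasnoselskii-Mann iteration
print(x, np.linalg.norm(x))         # converges to a fixed point of proj_ball
```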
A forward-backward inertial procedure for solving the problem of finding a zero of the sum of two maximal monotone operators is proposed, and its convergence is established under a cocoercivity condition with respect to the solution set.
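A minimal sketch of such an inertial forward-backward iteration, specialized to 0 in grad f(x) + dg(x) with f a least-squares term (whose gradient is cocoercive) and g an l1 penalty; the inertial parameter alpha below is an illustrative constant rather than one chosen by the procedure's convergence conditions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_forward_backward(A, b, mu=0.1, alpha=0.3, iters=500):
    """Inertial forward-backward sketch for 0 in grad f(x) + dg(x) with
    f(x) = 0.5*||Ax - b||^2 and g = mu*||.||_1.  The extrapolation
    y = x + alpha*(x - x_prev) is the inertial ('heavy-ball') step."""
    n = A.shape[1]
    lam = 1.0 / np.linalg.norm(A, 2) ** 2   # step within the cocoercivity bound
    x = x_prev = np.zeros(n)
    for _ in range(iters):
        y = x + alpha * (x - x_prev)         # inertial extrapolation
        x_prev = x
        # forward (gradient) step, then backward (proximal) step
        x = soft_threshold(y - lam * A.T @ (A @ y - b), lam * mu)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[0] = 3.0
x = inertial_forward_backward(A, A @ x_true)
print("residual:", np.linalg.norm(A @ x - A @ x_true))
```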
This article concerns a proximal-pointalgorithm with time penalization. The case where the cost of moving from one position to a better one is penalized by the time taken by the agent for the decision-making is studied and the restriction employing the penalty method is incorporated. It is shown that the method converges monotonically with respect to the minimal weighted norm to a unique minimal point under mild assumptions. The gradient method is employed for solving the objective function, and its convergence is proven. The rate of convergence of the method is also estimated by computing the optimal parameters. The effectiveness of the method is illustrated by a numerical optimization example employing continuous-time Markov chains.
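The outer/inner structure described here (a proximal-point outer loop whose subproblems are minimized by a gradient method) can be sketched generically as follows; the time-penalization model itself is not reproduced, and all step sizes and iteration counts are placeholders.

```python
import numpy as np

def prox_point_with_gradient_inner(grad_F, x0, lam=1.0, outer=50, inner=100, lr=0.1):
    """Generic proximal-point outer loop: each subproblem
        min_z  F(z) + (1/(2*lam)) * ||z - x_k||^2
    is solved approximately by gradient descent, mirroring the use of an
    inner gradient method for the penalized objective."""
    x = x0.copy()
    for _ in range(outer):
        z = x.copy()
        for _ in range(inner):
            # gradient of the proximal subproblem's objective
            z = z - lr * (grad_F(z) + (z - x) / lam)
        x = z
    return x

# Toy objective F(x) = 0.5*||x - target||^2, so grad_F(x) = x - target.
target = np.array([1.0, -2.0])
print(prox_point_with_gradient_inner(lambda z: z - target, np.zeros(2)))
```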
The purpose of this article is to prove a strong convergence result, under minimal assumptions on the control parameters involved, for a generalization of the method of alternating resolvents introduced by the authors in [4]. Thus, this article represents a significant improvement over the article mentioned above.
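For readers new to the scheme, a bare-bones sketch of the method of alternating resolvents: when the two monotone operators are normal cones of convex sets, their resolvents are metric projections and the method reduces to alternating projections. The sets below are chosen only for illustration.

```python
import numpy as np

def alternating_resolvents(JA, JB, x0, iters=100):
    """Sketch of the method of alternating resolvents: apply the
    resolvents of two maximal monotone operators in turn."""
    x = x0.copy()
    for _ in range(iters):
        x = JB(JA(x))
    return x

# Resolvents = projections onto a line and a disk in R^2.
proj_line = lambda x: np.array([x[0], 1.0])                       # line y = 1
proj_disk = lambda x: x if np.linalg.norm(x) <= 2 else 2 * x / np.linalg.norm(x)
print(alternating_resolvents(proj_line, proj_disk, np.array([5.0, 5.0])))
# converges to a point in the intersection of the line and the disk
```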
In this paper, we introduce a proximal-proximal majorization-minimization (PPMM) algorithm for nonconvex rank regression problems. The basic idea is to apply the proximal majorization-minimization algorithm to the nonconvex problem, with the inner subproblems solved by a proximal point algorithm (PPA) based on a sparse semismooth Newton (SSN) method. It deserves mentioning that we adopt a sequential regularization technique and design an implementable stopping criterion to overcome the singularity of the inner subproblems; the stopping criterion, in particular, plays a very important role in the success of the algorithm. Furthermore, we prove that the PPMM algorithm converges to a stationary point, and, owing to the Kurdyka-Lojasiewicz (KL) property of the problem, we present its convergence rate. Numerical experiments demonstrate that the proposed algorithm outperforms existing state-of-the-art algorithms.
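A generic (and much simplified) proximal majorization-minimization loop, shown on a toy DC-type objective with a closed-form inner solve standing in for the SSN-based PPA; the proximal coefficient sigma plays the role of the regularization that keeps the inner subproblem well-posed.

```python
import numpy as np

def proximal_mm(grad_h, solve_convex, x0, sigma=1.0, outer=30):
    """Proximal majorization-minimization sketch for an objective
    g(x) - h(x) with g, h convex: at x_k the concave part -h is
    majorized by its linearization, and a proximal term
    (sigma/2)*||x - x_k||^2 regularizes the convex surrogate.
    solve_convex(w, x_k, sigma) must return
        argmin_x  g(x) - <w, x> + (sigma/2)*||x - x_k||^2."""
    x = x0.copy()
    for _ in range(outer):
        w = grad_h(x)                  # (sub)gradient majorizing -h at x_k
        x = solve_convex(w, x, sigma)  # convex inner subproblem
    return x

# Toy instance: g(x) = ||x||^2, h(x) = 2*||x||_1; stationary points are +-1
# per coordinate, and the inner argmin has the closed form below.
solve = lambda w, xk, s: (w + s * xk) / (2.0 + s)
print(proximal_mm(lambda x: 2 * np.sign(x), solve, np.array([3.0, -0.5])))
```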
The concept of a stochastic variational inequality has recently been articulated in a new way that is able to cover, in particular, the optimality conditions for a multistage stochastic programming problem. One of the long-standing methods for solving such an optimization problem under convexity is the progressive hedging algorithm. That approach is demonstrated here to be applicable also to solving multistage stochastic variational inequality problems under monotonicity, thus increasing the range of applications for progressive hedging. Stochastic complementarity problems as a special case are explored numerically in a linear two-stage formulation.
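A minimal progressive hedging sketch on a toy two-stage quadratic problem, showing the three ingredients of each pass: per-scenario subproblems, averaging to restore nonanticipativity, and multiplier updates. The variational-inequality extension developed in the paper is not reproduced.

```python
import numpy as np

def progressive_hedging(targets, probs, rho=1.0, iters=100):
    """Progressive hedging sketch for the toy problem
        min_x  E_s [ 0.5*(x - target_s)^2 ],
    whose solution is the probability-weighted mean of the targets."""
    S = len(targets)
    x = np.zeros(S)      # per-scenario copies of the first-stage decision
    w = np.zeros(S)      # multipliers enforcing nonanticipativity
    xbar = 0.0
    for _ in range(iters):
        # Scenario subproblem: min_x 0.5*(x-t)^2 + w*x + (rho/2)*(x-xbar)^2
        x = (targets - w + rho * xbar) / (1.0 + rho)
        xbar = float(np.dot(probs, x))    # implementable (averaged) decision
        w = w + rho * (x - xbar)          # price of nonanticipativity
    return xbar

print(progressive_hedging(np.array([1.0, 2.0, 6.0]), np.array([0.5, 0.25, 0.25])))
# expected: 0.5*1 + 0.25*2 + 0.25*6 = 2.5
```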
In this paper, we introduce and analyze a new unified hybrid iterative method to compute the approximate solution of the general optimization problem defined over the set D = Fix(T) ∩ Ω[GMEP(Φ, Ψ, φ)], where Fix(T) is the set of common fixed points of a family T = {T(t) : 0 ≤ t < ∞} of nonexpansive self-mappings on a Hilbert space H, and Ω[GMEP(Φ, Ψ, φ)] is the set of solutions of the generalized mixed equilibrium problem (in short, GMEP). This type of minimization problem is called a hierarchical minimization problem. We establish the strong convergence of the sequences generated by the proposed algorithm. Our strong convergence theorem extends, improves, and unifies previously known results in the literature. We also give a numerical example to illustrate our algorithm and results.
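The basic strong-convergence mechanism behind hierarchical schemes of this kind can be illustrated with a Halpern-type iteration toward the fixed-point set of a single nonexpansive map; the GMEP component and the semigroup {T(t)} are omitted, so this is only a sketch of the anchoring idea.

```python
import numpy as np

def halpern(T, u, x0, iters=2000):
    """Halpern-type iteration: x_{n+1} = a_n*u + (1 - a_n)*T(x_n) with
    a_n -> 0 and sum a_n = infinity.  For a nonexpansive T this converges
    strongly to the fixed point of T closest to the anchor u."""
    x = x0.copy()
    for n in range(1, iters + 1):
        a = 1.0 / (n + 1)
        x = a * u + (1 - a) * T(x)
    return x

# T = projection onto the unit ball (nonexpansive); anchor u lies outside.
proj_ball = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)
u = np.array([2.0, 0.0])
print(halpern(proj_ball, u, np.array([0.0, 3.0])))  # tends to (1, 0)
```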