We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the constructive approximation approach of the recently proposed hybrid projection-proximal and extragradient-proximal methods. Specifically, we introduce an even more flexible error tolerance criterion, as well as provide a unified view of these two algorithms. Our general method possesses global convergence and local (super)linear rate of convergence under standard assumptions, while using a constructive approximation criterion suitable for a number of specific implementations. For example, we show that close to a regular solution of a monotone system of semismooth equations, two Newton iterations are sufficient to solve the proximal subproblem within the required error tolerance. Such systems of equations arise naturally when reformulating the nonlinear complementarity problem.
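The claim that a couple of Newton iterations suffice near a regular solution can be illustrated on a toy monotone equation. The sketch below is a hypothetical scalar instance (the equation F(x) = x^3 + x and all parameters are illustrative choices, not the paper's implementation): an outer proximal-point loop whose subproblem F(x) + (1/c)(x - x_k) = 0 is solved approximately by two warm-started Newton steps.

```python
def newton_prox_step(F, dF, x_k, c=1.0, inner_iters=2):
    """Approximately solve the proximal subproblem
    F(x) + (1/c)(x - x_k) = 0 with a few Newton iterations,
    warm-started at the current outer iterate x_k."""
    x = x_k
    for _ in range(inner_iters):
        g = F(x) + (x - x_k) / c   # residual of the subproblem
        J = dF(x) + 1.0 / c        # its derivative (scalar case)
        x = x - g / J              # Newton update
    return x

# Monotone smooth equation F(x) = x^3 + x = 0 with solution x* = 0.
F = lambda x: x**3 + x
dF = lambda x: 3 * x**2 + 1

x = 0.8
for _ in range(30):                # outer proximal-point loop
    x = newton_prox_step(F, dF, x)
```

Even with the subproblems solved only approximately, the outer iterates contract toward the solution, which is the behavior the error-tolerance criterion is designed to preserve.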
The proximal point algorithm (PPA) is a useful algorithmic framework with good convergence properties. A main difficulty is that the subproblems usually admit only iterative solutions. In this paper, we propose an inexact customized PPA framework for the two-block separable convex optimization problem with linear constraints. We design two types of inexact error criteria for the subproblems. The first is an absolutely summable error criterion, under which both subproblems can be solved inexactly. When one of the two subproblems is easily solved, we propose another, easier-to-implement criterion, namely a relative error criterion. The relative error criterion involves only one parameter, which makes it more attractive. We establish global convergence and a sub-linear convergence rate in the ergodic sense for the proposed algorithms. Numerical experiments on LASSO regression problems and a total variation-based image denoising problem illustrate that our new algorithms outperform the corresponding exact algorithms.
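The reason one subproblem in such splittings is often "easily solved" is that the l1 proximal map has a closed form (soft-thresholding). The sketch below uses it on a small LASSO instance; the data, the proximal-gradient loop, and all parameters are illustrative stand-ins, not the paper's customized PPA.

```python
import numpy as np

def soft_threshold(z, tau):
    """Closed-form proximal operator of tau * ||.||_1:
    prox(z) = sign(z) * max(|z| - tau, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

# Toy LASSO: min 0.5*||Ax - b||^2 + mu*||x||_1, solved with a
# proximal-gradient loop built on the closed-form l1 prox.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.0])   # sparse ground truth
mu = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / spectral_norm(A)^2
x = np.zeros(5)
for _ in range(500):
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * mu)
```

The loop recovers the sparse signal up to the usual l1 shrinkage bias; in an inexact PPA, the criterion in the abstract governs how accurately the harder of the two subproblems must be solved at each iteration.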
Several strong convergence results involving two distinct four parameter proximal point algorithms are proved under different sets of assumptions on these parameters and the general condition that the error sequence converges to zero in norm. Thus our results address the two important problems related to the proximal point algorithm - one being that of strong convergence (instead of weak convergence) and the other one being that of acceptable errors. One of the algorithms discussed was introduced by Yao and Noor (2008) [7] while the other one is new and it is a generalization of the regularization method initiated by Lehdili and Moudafi (1996) [9] and later developed by Xu (2006) [8]. The new algorithm is also ideal for estimating the convergence rate of a sequence that approximates minimum values of certain functionals. Although these algorithms are distinct, it turns out that for a particular case, they are equivalent. The results of this paper extend and generalize several existing ones in the literature. (C) 2010 Elsevier Ltd. All rights reserved.
The proximal point algorithm (PPA) is a fundamental method for convex programming. When applying the PPA to solve linearly constrained convex problems, we may prefer to choose an appropriate metric matrix to define the proximal regularization, so that the computational burden of the resulting PPA can be reduced, and the subproblems sometimes even admit closed-form or efficient solutions. This idea results in the so-called customized PPA (also known as preconditioned PPA), and it covers the linearized ALM, the primal-dual hybrid gradient algorithm, and ADMM as special cases. Since each customized PPA has its own special structure and popular applications, it is interesting to ask whether we can design a simple relaxation strategy for these algorithms. In this paper we treat these customized PPA algorithms uniformly by a mixed variational inequality approach, and propose a new relaxation strategy for them. Our idea is based on correcting the dual variables individually and does not rely on relaxing the primal variables, which is very different from previous works. From the variational inequality perspective, we prove global convergence and establish a worst-case convergence rate for these relaxed PPA algorithms. Finally, we demonstrate the performance improvements with some numerical results.
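In its simplest form, a relaxed PPA predicts with a resolvent step and then relaxes the update by a factor gamma in (0, 2). The sketch below applies this generic scheme to minimizing 0.5*x^2, a hypothetical scalar example; note the paper's strategy relaxes only the dual variables, which this one-variable sketch does not model.

```python
def resolvent(x, c):
    """Resolvent J_c = (I + c*T)^{-1} for T = grad of 0.5*x^2,
    i.e. T(x) = x, so J_c(x) = x / (1 + c)."""
    return x / (1.0 + c)

def relaxed_ppa(x0, c=1.0, gamma=1.8, iters=40):
    """Generic relaxed PPA: predict with the resolvent,
    then relax the update with factor gamma in (0, 2)."""
    x = x0
    for _ in range(iters):
        x_tilde = resolvent(x, c)       # PPA prediction
        x = x + gamma * (x_tilde - x)   # relaxation (correction) step
    return x

x = relaxed_ppa(5.0)
```

With gamma = 1.8 each step here multiplies the error by 0.1, versus 0.5 for the unrelaxed step (gamma = 1), which is the kind of acceleration a relaxation strategy aims for.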
Two modified double inertial proximal point algorithms are proposed for solving variational inequality problems with a pseudomonotone vector field in the setting of a Hadamard manifold. Weak convergence of the proposed methods is attained without requiring Lipschitz continuity conditions. The convergence efficiency of the proposed algorithms is improved with the help of the double inertial technique and the non-monotonic self-adaptive step size rule. We present a numerical experiment to demonstrate the effectiveness of the proposed algorithm compared to several existing ones. The results extend and generalize many recent methods in the literature.
This paper focuses on some customized applications of the proximal point algorithm (PPA) to two classes of problems: the convex minimization problem with linear constraints and a generic or separable objective function, and a saddle-point problem. We treat these two classes of problems uniformly by a mixed variational inequality, and show how the application of PPA with customized metric proximal parameters can yield favorable algorithms which are able to make use of the models' structures effectively. Our customized PPA revisit turns out to unify some algorithms including some existing ones in the literature and some new ones to be proposed. From the PPA perspective, we establish the global convergence and a worst-case O(1/t) convergence rate for this series of algorithms in a unified way.
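One concrete member of this family is a PDHG-style primal-dual iteration for a linearly constrained quadratic program. The sketch below is an illustration only (the toy problem min 0.5*||x||^2 s.t. x1 + x2 = 1 and the step sizes r, s are choices of ours, not taken from the paper); it shows the "customized metric" pattern of a primal prox step followed by a dual step with extrapolation.

```python
import numpy as np

# Saddle-point sketch: min 0.5*||x||^2  s.t.  x1 + x2 = 1,
# written as min_x max_y 0.5*||x||^2 + y^T (A x - b).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
r, s = 2.0, 0.5            # primal/dual proximal parameters (r*s > ||A||^2 * s^2 here)
x = np.zeros(2)
y = np.zeros(1)
for _ in range(200):
    x_new = (r * x - A.T @ y) / (1.0 + r)     # primal prox step
    y = y + s * (A @ (2 * x_new - x) - b)     # dual step with extrapolation
    x = x_new
```

The iterates converge to the KKT point x = (0.5, 0.5), y = -0.5; in the mixed variational inequality view of the paper, the whole update is a single PPA step in a customized metric.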
This work is motivated by a recent work on an extended linear proximal point algorithm (PPA) [B.S. He, X.L. Fu, and Z.K. Jiang, Proximal-point algorithm using a linear proximal term, J. Optim. Theory Appl. 141 (2009), pp. 299-319], which aims at relaxing the requirement of the linear proximal term of classical PPA. In this paper, we make further contributions along this line. First, we generalize the linear PPA-based contraction method by using a nonlinear proximal term instead of the linear one. A notable superiority over traditional PPA-like methods is that the nonlinear proximal term of the proposed method need not be the gradient of any function. In addition, the nonlinearity of the proximal term makes the new method more flexible. To avoid solving a variational inequality subproblem exactly, we then propose an inexact version of the developed method, which may be more computationally attractive in terms of requiring lower computational cost. Finally, we gainfully employ our new methods to solve linearly constrained convex minimization and variational inequality problems.
In this note we apply a lemma due to Sabach and Shtern to compute linear rates of asymptotic regularity for Halpern-type nonlinear iterations studied in optimization and nonlinear analysis.
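A Halpern-type iteration anchors every step to the starting point with vanishing weights: x_{k+1} = a_k * x0 + (1 - a_k) * T(x_k). The sketch below uses the illustrative choices T(x) = -x (a nonexpansive map with fixed point 0) and a_k = 1/(k+2), a standard coefficient schedule for which such rates of asymptotic regularity are computed; neither choice is taken from the note itself.

```python
import numpy as np

def halpern(T, x0, iters=2000):
    """Halpern iteration x_{k+1} = a_k*x0 + (1 - a_k)*T(x_k)
    with anchor coefficients a_k = 1/(k+2)."""
    x = x0
    for k in range(iters):
        a = 1.0 / (k + 2)
        x = a * x0 + (1.0 - a) * T(x)
    return x

# Nonexpansive map with fixed point 0 (illustrative choice).
T = lambda x: -x
x = halpern(T, np.array([4.0, -3.0]))
```

For this map the distance to the fixed point decays like O(1/k), matching the flavor of the linear rates of asymptotic regularity discussed in the note.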
In this paper, we first characterize finite convergence of an arbitrary iterative algorithm for solving the variational inequality problem (VIP), where the finite convergence means that the algorithm can find an exact solution of the problem in a finite number of iterations. By using this result, we obtain that the well-known proximal point algorithm possesses finite convergence if the solution set of VIP is weakly sharp. As an extension, we show finite convergence of the inertial proximal method for solving the general variational inequality problem under the condition of weak g-sharpness. (c) 2005 Elsevier Inc. All rights reserved.
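Finite convergence under weak sharpness can be seen on the textbook model f(x) = |x|, whose minimizer 0 is weakly sharp: the exact PPA step for f is soft-thresholding, so the iterates land exactly on 0 after finitely many steps. This is a standard illustration, not an example from the paper.

```python
def prox_abs(x, c):
    """Exact proximal step for f = |.|: soft-thresholding with parameter c."""
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

# Exact PPA on f(x) = |x|: each step moves distance c toward 0,
# and the final step inside [-c, c] lands on 0 exactly,
# so ceil(|x0| / c) iterations suffice.
x, c, steps = 3.7, 1.0, 0
while x != 0.0:
    x = prox_abs(x, c)
    steps += 1
```

Here the iterates are 3.7 -> 2.7 -> 1.7 -> 0.7 -> 0.0: four steps to the exact solution, in contrast to the merely asymptotic convergence the PPA gives without weak sharpness.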
We consider the regularization of two proximal point algorithms (PPA) with errors for a maximal monotone operator in a real Hilbert space, previously studied, respectively, by Xu, and by Boikanyo and Morosanu, where they assumed the zero set of the operator to be nonempty. We provide a counterexample showing an error in Xu's theorem, and then we prove its correct extended version by giving a necessary and sufficient condition for the zero set of the operator to be nonempty and showing the strong convergence of the regularized scheme to a zero of the operator. This will give a first affirmative answer to the open question raised by Boikanyo and Morosanu concerning the design of a PPA, where the error sequence tends to zero and a parameter sequence remains bounded. Then, we investigate the second PPA with various new conditions on the parameter sequences and prove similar theorems as above, providing also a second affirmative answer to the open question of Boikanyo and Morosanu. Finally, we present some applications of our new convergence results to optimization and variational inequalities.