We study the issue of strong convergence of inexact proximal point algorithms (introduced by Rockafellar in [SIAM J. Control Optim., 14 (1976), pp. 877-898]) for maximal monotone operators on Hilbert spaces. A unified global/local strong convergence result for inexact proximal point algorithms is established under the Hölder metric subregularity condition. Furthermore, quantitative estimates of the convergence rate of inexact proximal point algorithms are provided. When applied to the special case of the classical (exact) proximal point algorithm, our results improve the corresponding ones in [G. Li and B. S. Mordukhovich, SIAM J. Optim., 22 (2012), pp. 1655-1684]. Finally, as applications, global/local strong convergence and convergence-rate estimates for inexact proximal point algorithms for optimization problems are presented.
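As a concrete illustration of the kind of scheme analyzed above, the minimal sketch below runs an inexact proximal point iteration on a simple quadratic, solving each proximal subproblem only approximately under a summable sequence of inner tolerances in the spirit of Rockafellar's criterion; the test function, step-size rule, and tolerance schedule are illustrative assumptions, not taken from the paper.

import numpy as np

# Illustrative assumption: the operator is T = grad f for a strongly convex
# quadratic f, so T is maximal monotone and its unique zero is the minimizer.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, -1.0])

def grad_f(x):
    return A @ x - b

def inexact_prox(x, c, eps, max_inner=10_000):
    # Approximately solve the proximal subproblem
    #   c * grad_f(z) + z - x = 0   (i.e. z = (I + c T)^{-1} x)
    # by gradient descent, stopping once the residual norm is below eps.
    z = x.copy()
    lip = np.linalg.norm(A, 2)           # Lipschitz constant of grad_f
    step = 1.0 / (c * lip + 1.0)         # safe step size for the subproblem
    for _ in range(max_inner):
        r = c * grad_f(z) + (z - x)      # subproblem residual
        if np.linalg.norm(r) <= eps:
            break
        z = z - step * r
    return z

x = np.array([5.0, -3.0])
c = 1.0                                   # proximal parameter (kept constant here)
for k in range(50):
    eps_k = 1.0 / (k + 1) ** 2            # summable inner tolerances
    x = inexact_prox(x, c, eps_k)

print("inexact PPA iterate:", x)
print("exact minimizer:    ", np.linalg.solve(A, b))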
In this paper, an inexact proximal point algorithm concerned with the singularity of maximal monotone vector fields is introduced and studied on Hadamard manifolds, in which a relative error tolerance with square-summable error factors is considered. It is proved that the sequence generated by the proposed method converges to a solution of the problem. Moreover, an application to optimization problems on Hadamard manifolds is given. The main results presented in this paper generalize and improve some corresponding known results in the literature. (C) 2013 Elsevier B.V. All rights reserved.
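For orientation, one common way to formalize an inexact proximal step with a relative error tolerance on a Hadamard manifold M (stated here as an assumption about the standard formulation, not as the paper's exact criterion) is: given x_k and lambda_k > 0, find x_{k+1} and an error vector e_k with
\[
e_k \in X(x_{k+1}) - \frac{1}{\lambda_k}\,\exp^{-1}_{x_{k+1}}(x_k), \qquad
\|e_k\| \le \varepsilon_k\,\bigl\|\exp^{-1}_{x_{k+1}}(x_k)\bigr\|, \qquad
\sum_{k\ge 0}\varepsilon_k^{2}<\infty,
\]
where X is the maximal monotone vector field whose singularity (a point with 0 \in X(x)) is sought and exp is the exponential map of M; the exact iteration (e_k = 0) recovers the classical proximal point method on manifolds.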
This paper introduces a second-order differential inclusion for unconstrained convex optimization. At the continuous level, existence of solutions in a suitable sense is obtained, and exponential decay of a novel Lyapunov function along the solution trajectory is derived. At the discrete level, based on numerical discretizations of the continuous model, two inexact proximal point algorithms are proposed, and some new convergence rates are established via a discrete Lyapunov function.
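For context (a standard observation rather than the paper's specific model): implicit time discretization is exactly what turns gradient-type flows into proximal point iterations. For the first-order inclusion \(\dot x(t) \in -\partial f(x(t))\), an implicit Euler step with step size h > 0 reads
\[
\frac{x_{k+1}-x_k}{h} \in -\partial f(x_{k+1})
\quad\Longleftrightarrow\quad
x_{k+1} = \operatorname{prox}_{hf}(x_k)
  = \arg\min_u \Bigl\{ f(u) + \tfrac{1}{2h}\|u-x_k\|^2 \Bigr\},
\]
and the paper plays the analogous game for its second-order differential inclusion, obtaining inexact proximal point algorithms together with discrete Lyapunov estimates.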
The present paper first provides sufficient conditions and characterizations for linearly conditioned bifunctions associated with an equilibrium problem. It then introduces the notion of a weak sharp solution for equilibrium problems, which is analogous to the linear conditioning notion. This new notion generalizes and unifies the notion of weak sharp minima for optimization problems as well as the notion of weak sharp solutions for variational inequality problems. Some sufficient conditions and characterizations of weak sharpness are also presented. Finally, we study the finite convergence property of sequences generated by some algorithms for solving equilibrium problems under the linear conditioning and weak sharpness assumptions. An upper bound on the number of iterations by which the sequence generated by the proximal point algorithm converges to a solution of the equilibrium problem is also given.
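To fix ideas, recall the notion that this new concept generalizes: a nonempty solution set S = argmin_C f is a set of weak sharp minima with modulus alpha > 0 if
\[
f(x) - \min_{C} f \;\ge\; \alpha\, \operatorname{dist}(x, S) \qquad \text{for all } x \in C .
\]
The weak sharpness notion for equilibrium problems introduced in the paper plays the analogous role for a bifunction and, as with weak sharp minima in optimization, is the key ingredient behind finite termination of proximal-type methods; the displayed inequality is the standard definition, included here only for reference.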
The proximal point algorithm has many interesting applications, such as signal recovery and signal processing. In recent years, the proximal point method has been extended to Riemannian manifolds. The main advantages of these extensions are that nonconvex problems in the classical sense may become geodesic convex by introducing an appropriate Riemannian metric, and constrained optimization problems may be seen as unconstrained ones. In this paper, we propose an inexact proximal point algorithm for geodesic convex vector functions on Hadamard manifolds. Under the assumption that the objective function is coercive, the sequence generated by this algorithm converges to a Pareto critical point. If the objective function is coercive and strictly geodesic convex, the sequence generated by this algorithm converges to a Pareto optimal point. Finally, under a weaker growth condition, we prove that the inexact proximal point algorithm has a linear/superlinear convergence rate.
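For reference, the notion of Pareto criticality used in this vector-valued setting can be stated as follows (the standard definition for a smooth objective, included as an assumption about the paper's terminology): for F = (F_1, ..., F_m) : M -> R^m defined on a Hadamard manifold M, a point x is Pareto critical if for every tangent direction v in T_x M there is some index i with
\[
\langle \operatorname{grad} F_i(x),\, v\rangle \;\ge\; 0,
\]
i.e. no direction is a descent direction for all components simultaneously; this is consistent with the convergence dichotomy (critical versus optimal points) described above.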
In this paper, an estimate of the convergence rate of an inexact proximal point algorithm for finding singularities of maximal monotone vector fields on Hadamard manifolds is discussed. We introduce a weaker growth condition, which extends that of Luque from Euclidean spaces to Hadamard manifolds. Under this growth condition, we prove that the inexact proximal point algorithm has a linear/superlinear convergence rate. The main results presented in this paper generalize and improve some corresponding known results. (C) 2014 Elsevier B.V. All rights reserved.
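In the Euclidean case, Luque's growth condition for a maximal monotone operator T with nonempty zero set T^{-1}(0) requires constants a, b > 0 such that
\[
\operatorname{dist}\bigl(z,\,T^{-1}(0)\bigr) \;\le\; a\,\|w\|
\qquad \text{whenever } w \in T(z),\ \|w\| \le b .
\]
The condition used in the paper is a weaker analogue on Hadamard manifolds, with the Riemannian distance in place of the Euclidean one; the display above is the classical Euclidean statement, given here only for comparison.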
In this article, a new method is proposed for solving a class of structured variational inequalities (SVIs). The proposed method is referred to as the partial inexact proximal alternating direction (piPAD) method. In the method, two subproblems are solved independently: one is handled by an inexact proximal point method and the other is solved directly. This feature is the major difference between the proposed method and some existing alternating direction-like methods. The convergence of the piPAD method is proved. Two examples of modern convex optimization problems arising from engineering and information sciences, which can be reformulated as the considered SVIs, are presented to demonstrate the applicability of the piPAD method. Some preliminary numerical results are also reported to validate the feasibility and efficiency of the piPAD method.
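A typical member of the class of structured VIs targeted by alternating direction-like methods (given here only as a plausible example of the class; the paper's precise setting may differ) couples two blocks through a linear constraint: find w* = (x*, y*, lambda*) in X x Y x R^m such that, for all (x, y) in X x Y,
\[
\begin{aligned}
&\langle x - x^{\ast},\, f(x^{\ast}) - A^{\top}\lambda^{\ast}\rangle \ge 0,\\
&\langle y - y^{\ast},\, g(y^{\ast}) - B^{\top}\lambda^{\ast}\rangle \ge 0,\\
&A x^{\ast} + B y^{\ast} - b = 0,
\end{aligned}
\]
with f and g monotone. Splitting such a system into its two blocks is what allows one subproblem to be handled by an inexact proximal point method while the other is solved directly.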
In this paper we present a relaxed version of an extragradient-proximal point algorithm recently proposed by Solodov and Svaiter for finding a zero of a maximal monotone operator defined on a Hilbert space. The aim is to introduce a family of parameters in order to accelerate the rate of convergence of this algorithm. We first study the convergence and rate of convergence of the relaxed algorithms and then apply them to the generalized variational inequality problem. For this problem, the operator is a sum of two operators: the first is single-valued, monotone, and continuous, and the second is the subdifferential of a nonsmooth, lower semicontinuous, proper convex function phi. To make the subproblems easier to solve, we consider, as in bundle methods, piecewise linear convex approximations of phi. We explain how to construct these approximations and how the subproblems fall within the framework of our relaxed extragradient-proximal point algorithm. We prove the convergence of the resulting algorithm without assuming a Dunn property on the single-valued operator. Finally, we report some numerical experiments to illustrate the behavior of our implementable algorithm for different values of the relaxation factor. A comparison with another algorithm is also given.
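In rough outline (a sketch of one common form of the Solodov-Svaiter hybrid scheme, given only for orientation; the precise error tests and relaxation rules should be taken from the paper itself): given x^k and c_k > 0, an approximate proximal step produces y^k and v^k in T(y^k) satisfying
\[
c_k v^k + y^k - x^k = r^k, \qquad \|r^k\| \le \sigma\,\|y^k - x^k\|, \quad \sigma\in[0,1),
\]
and an extragradient-type correction then moves from x^k along -v^k, which a relaxed family scales by a factor rho_k > 0:
\[
x^{k+1} = x^k - \rho_k\, c_k\, v^k .
\]
Choosing the relaxation parameters rho_k appropriately is what accelerates the method; for the generalized variational inequality, T splits as the single-valued monotone part plus \(\partial\phi\), with phi replaced by piecewise linear convex models as in bundle methods.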
Proximal point algorithms (PPA) are attractive methods for monotone variational inequalities, and approximate versions of the PPA are more applicable in practice. A modified approximate proximal point algorithm (APPA) presented by Solodov and Svaiter [Math. Programming, Ser. B 88 (2000) 371-389] relaxes the inexactness criterion significantly. This paper presents an extended version of Solodov-Svaiter's APPA. Using the direction from the current iterate to the new iterate obtained by Solodov-Svaiter's APPA, the proposed method improves the progress made at each iteration by choosing an optimal step length along this direction. In addition, the inexactness restriction is relaxed further. A numerical example indicates the improvement of the proposed method. (C) 2004 Elsevier Inc. All rights reserved.
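Schematically (an illustrative reading of the construction, not the paper's exact formulas): if \(\tilde x^k\) denotes the iterate produced from x^k by Solodov-Svaiter's APPA, the extended method takes
\[
d^k = \tilde x^k - x^k, \qquad x^{k+1} = x^k + \alpha_k d^k,
\]
with the step length alpha_k chosen to maximize a computable lower bound on the decrease \(\|x^k - x^\ast\|^2 - \|x^{k+1} - x^\ast\|^2\) of the distance to the solution set, instead of simply accepting alpha_k = 1.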