Several optimization schemes are known for convex optimization problems. A significant step beyond convexity was made by considering the class of functions representable as a difference of convex functions, which constitutes the backbone of nonconvex programming and global optimization. In this article, we introduce a new algorithm to minimize the difference of a continuously differentiable function and a convex function that accelerates the convergence of the classical proximal point algorithm. We prove that the point computed by the proximal point algorithm can be used to define a descent direction for the objective function evaluated at this point. Our algorithms combine the proximal point algorithm with a line search step that uses this descent direction. Convergence of the algorithms is proved, and the rate of convergence is analyzed under the strong Kurdyka-Łojasiewicz property of the objective function.
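As a rough illustration of the scheme described above (a proximal-type step on the DC objective followed by a line search along the direction that step defines), here is a minimal Python sketch. The quadratic smooth part, the l1 concave part, the proximal parameter, and the Armijo-type constants are illustrative assumptions, not the setting of the article.

```python
import numpy as np

# Minimal sketch of a "prox step + line search" scheme for a DC objective
# phi(x) = g(x) - h(x), with g continuously differentiable and h convex.
# Problem data, step size and line-search constants are illustrative only.

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
Q = A @ A.T / n + np.eye(n)           # g(x) = 0.5 x'Qx - c'x  (smooth)
c = rng.standard_normal(n)
mu = 0.5                              # h(x) = mu * ||x||_1    (convex)

g = lambda x: 0.5 * x @ Q @ x - c @ x
h = lambda x: mu * np.sum(np.abs(x))
phi = lambda x: g(x) - h(x)

t = 1.0                               # proximal parameter
x = np.zeros(n)
for k in range(100):
    s = mu * np.sign(x)               # a subgradient of h at x
    # Proximal/DC step: y = argmin_y g(y) - s'y + ||y - x||^2 / (2t)
    y = np.linalg.solve(Q + np.eye(n) / t, c + s + x / t)
    d = y - x                         # direction defined by the prox point
    # Backtracking line search from y along d (keep y if no improvement)
    beta, x_next = 1.0, y
    while beta > 1e-8:
        trial = y + beta * d
        if phi(trial) < phi(y) - 1e-4 * beta * (d @ d):
            x_next = trial
            break
        beta *= 0.5
    if np.linalg.norm(x_next - x) < 1e-10:
        break
    x = x_next

print("final objective:", phi(x))
```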
The proximal point algorithm (PPA) has been well studied in the literature. In particular, its linear convergence rate was studied by Rockafellar in 1976 under a certain condition. We consider a generalized PPA in the generic setting of finding a zero of a maximal monotone operator, and show that the condition proposed by Rockafellar also suffices to ensure the linear convergence rate of this generalized PPA. Indeed, we show that these linear convergence rates are optimal. Both the exact and inexact versions of this generalized PPA are discussed. The motivation for considering this generalized PPA is that it includes as special cases the relaxed versions of some splitting methods that originate from the PPA. Thus, linear convergence results for this generalized PPA can be used to better understand the convergence of some widely used algorithms in the literature. We focus on the particular convex minimization context and specify Rockafellar's condition to see how to ensure the linear convergence rate for some efficient numerical schemes, including the classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and its relaxed version, and the original alternating direction method of multipliers (ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions weaker than existing ones are proposed in these particular contexts.
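For orientation, a minimal numerical sketch of the relaxed (generalized) PPA iteration for finding a zero of a maximal monotone operator follows; the affine monotone operator, the proximal parameter, and the relaxation factor below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch of the generalized (relaxed) PPA for finding a zero of a maximal
# monotone operator T:  x+ = x + rho * (J(x) - x),  J = (I + lam*T)^(-1),
# with relaxation factor rho in (0, 2). The affine operator used here,
# T x = M x + q with M + M' positive definite, is an illustrative choice.

rng = np.random.default_rng(1)
n = 10
S = rng.standard_normal((n, n))
M = S @ S.T / n + np.eye(n) + (S - S.T)   # PD symmetric part + skew part
q = rng.standard_normal(n)
T_zero = np.linalg.solve(M, -q)           # the unique zero of T, for reference

lam, rho = 1.0, 1.8                       # proximal parameter and relaxation
x = np.zeros(n)
for k in range(200):
    # resolvent of the affine operator: solve (I + lam*M) y = x - lam*q
    y = np.linalg.solve(np.eye(n) + lam * M, x - lam * q)
    x = x + rho * (y - x)                 # relaxed proximal point update

print("distance to the zero of T:", np.linalg.norm(x - T_zero))
```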
We introduce and investigate a new generalized convexity notion for functions called prox-convexity. The proximity operator of such a function is single-valued and firmly nonexpansive. We provide examples of (strongly) quasiconvex, weakly convex, and DC (difference of convex) functions that are prox-convex; however, none of these classes fully contains the class of prox-convex functions, nor is it contained in it. We show that the classical proximal point algorithm remains convergent when the convexity of the proper lower semicontinuous function to be minimized is relaxed to prox-convexity.
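For reference, the proximity operator in question is prox_f(x) = argmin_y { f(y) + ||y - x||^2 / 2 }, and firm nonexpansiveness means ||prox_f(x) - prox_f(y)||^2 <= <prox_f(x) - prox_f(y), x - y>. The short check below uses the standard convex example f = |.| (whose prox is soft-thresholding) rather than one of the prox-convex examples constructed in the paper.

```python
import numpy as np

# The proximity operator of f at x is prox_f(x) = argmin_y f(y) + 0.5*(y - x)^2.
# For the convex model f(y) = lam*|y| it is soft-thresholding. Below we check
# firm nonexpansiveness numerically on random pairs of points:
#   |prox(x) - prox(y)|^2 <= (prox(x) - prox(y)) * (x - y).

def prox_abs(x, lam=1.0):
    """Proximity operator of lam*|.| (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(2)
xs = rng.uniform(-5, 5, size=1000)
ys = rng.uniform(-5, 5, size=1000)
px, py = prox_abs(xs), prox_abs(ys)
lhs = (px - py) ** 2
rhs = (px - py) * (xs - ys)
print("firmly nonexpansive on all samples:", np.all(lhs <= rhs + 1e-12))
```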
In the literature, there are only a few studies on designing the parameters in the proximal point algorithm (PPA), especially for multi-objective convex optimization. Introducing parameters into the PPA can make it more flexible and attractive. Motivated mainly by our recent work [Bai et al. A parameterized proximal point algorithm for separable convex optimization. Optim Lett. (2017) doi: 10.1007/s11590-017-1195-9], in this paper we develop a general parameterized PPA with a relaxation step for solving multi-block separable structured convex programming. By making use of the variational inequality and some mathematical identities, the global convergence and the worst-case O(1/t) convergence rate of the proposed algorithm are established. Preliminary numerical experiments on a sparse matrix minimization problem from statistical learning validate that our algorithm is more efficient than several state-of-the-art algorithms.
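For context, here is a minimal sketch of the variational inequality framework in which worst-case O(1/t) rates for PPA-type methods are typically stated; the constant C and the parameterized proximal metric of the proposed method are not reproduced, so this only shows the generic shape of such a bound.

```latex
% Generic VI framework for O(1/t) rates of PPA-type methods. A solution w^*
% of the separable model is characterized by
%   \theta(x) - \theta(x^*) + (w - w^*)^{\top} F(w^*) \ge 0
%   \quad \text{for all } w \in \Omega,
% and the ergodic average \bar{w}_t of the first t iterates satisfies
\[
  \theta(\bar{x}_t) - \theta(x) + (\bar{w}_t - w)^{\top} F(w) \;\le\; \frac{C}{t}
  \qquad \text{for all } w \in \Omega,
\]
% where C depends only on the starting point and the algorithmic parameters.
```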
The proximal point algorithm (PPA) for the convex minimization problem min_{x ∈ H} f(x), where f: H → ℝ ∪ {+∞} is a proper, lower semicontinuous (lsc) function on a Hilbert space H, is considered. Under this minimal assumption on f, it is proved that the PPA with positive parameters {λ_k}_{k=1}^∞ converges in general if and only if σ_n = Σ_{k=1}^n λ_k → ∞. Global convergence rate estimates are given for the residual f(x_n) − f(u), where x_n is the nth iterate of the PPA and u ∈ H is arbitrary. An open question of Rockafellar is settled by giving an example of a PPA for which x_n converges weakly but not strongly to a minimizer of f.
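The role of the divergent-sum condition is easy to see in one dimension: for f(x) = |x| each proximal step moves the iterate toward the minimizer by at most λ_k, so the total progress is bounded by σ_n. The toy script below (my own example, not from the paper) contrasts a summable and a non-summable parameter sequence.

```python
import numpy as np

# PPA iterate for f(x) = |x|: x_{k+1} = prox_{lam_k |.|}(x_k), which moves
# x_k toward 0 by at most lam_k. Convergence to the minimizer therefore
# requires sigma_n = sum_k lam_k -> infinity, matching the statement above.

def ppa(x0, lams):
    x = x0
    for lam in lams:
        x = np.sign(x) * max(abs(x) - lam, 0.0)   # prox of lam*|.|
    return x

x0 = 10.0
summable = [2.0 ** (-k) for k in range(1, 200)]   # sum < 1: iterates stall near 9
divergent = [0.5] * 200                           # sum -> infinity: reaches 0
print("summable parameters, final iterate:", ppa(x0, summable))   # approx 9.0
print("divergent parameters, final iterate:", ppa(x0, divergent)) # 0.0
```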
We present a new barrier-based method of constructing smoothing approximations for the Euclidean projector onto closed convex cones. These smoothing approximations are used in a smoothing proximal point algorithm to solve monotone nonlinear complementarity problems (NCPs) over a convex cone via the normal map equation. The smoothing approximations allow for the solution of the smoothed normal map equations with Newton's method and do not require additional analytical properties of the Euclidean projector. The use of proximal terms in the algorithm adds stability to the solution of the smoothed normal map equation and avoids numerical issues due to ill-conditioning at iterates near the boundary of the cones. We prove a sufficient condition on the barrier used that guarantees the convergence of the algorithm to a solution of the NCP. The sufficient condition is satisfied by all logarithmically homogeneous barriers. Preliminary numerical tests on semidefinite programming problems show that our algorithm is comparable with the Newton-CG augmented Lagrangian algorithm proposed in [X. Y. Zhao, D. Sun, and K.-C. Toh, SIAM J. Optim., 20 (2010), pp. 1737-1765].
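To make the construction concrete for the simplest cone, the nonnegative orthant with the logarithmic barrier: smoothing the projection of z via min_{x>0} { (x − z)^2/2 − μ log x } gives p_μ(z) = (z + sqrt(z^2 + 4μ))/2, which tends to max(z, 0) as μ → 0. The sketch below applies a damped Newton method to the smoothed normal-map equation for a small monotone linear complementarity problem; the data, the fixed μ schedule, and the absence of a proximal term are simplifications of the algorithm described in the abstract.

```python
import numpy as np

# Barrier-smoothed projector onto the nonnegative orthant and Newton's method
# on the smoothed normal-map equation  F(p_mu(z)) + z - p_mu(z) = 0
# for a monotone linear problem F(x) = M x + q (illustrative data only).

rng = np.random.default_rng(3)
n = 8
B = rng.standard_normal((n, n))
M = B @ B.T / n + np.eye(n)                  # positive definite => monotone F
q = rng.standard_normal(n)
F = lambda x: M @ x + q

def smoothed_proj(z, mu):
    """Log-barrier smoothing of max(z, 0) and its elementwise derivative."""
    r = np.sqrt(z * z + 4.0 * mu)
    return (z + r) / 2.0, (1.0 + z / r) / 2.0

z = np.zeros(n)
for mu in [1.0, 1e-2, 1e-4, 1e-8]:           # drive the smoothing to zero
    for _ in range(50):                       # damped Newton for fixed mu
        x, dp = smoothed_proj(z, mu)
        G = F(x) + z - x                      # smoothed normal-map residual
        if np.linalg.norm(G) < 1e-10:
            break
        J = M @ np.diag(dp) + np.eye(n) - np.diag(dp)
        step = np.linalg.solve(J, -G)
        t = 1.0
        while t > 1e-10:                      # backtrack on the residual norm
            xt, _ = smoothed_proj(z + t * step, mu)
            if np.linalg.norm(F(xt) + (z + t * step) - xt) < np.linalg.norm(G):
                break
            t *= 0.5
        z = z + t * step

x_sol = np.maximum(z, 0.0)                    # recover the NCP solution
print("min F(x):", F(x_sol).min(), " complementarity:", abs(x_sol @ F(x_sol)))
```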
We apply proof mining methods to analyse a result of Boikanyo and Morosanu on the strong convergence of a Halpern-type proximal point algorithm. As a consequence, we obtain quantitative versions of this result, providing uniform effective rates of asymptotic regularity and metastability.
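For orientation, the Halpern-type proximal point iteration has the general form shown below (error terms and the exact parameter conditions of the Boikanyo and Morosanu scheme are omitted), and a rate of metastability in the sense of Tao is a bound Φ as in the second display.

```latex
% General form of the Halpern-type proximal point iteration:
\[
  x_{n+1} \;=\; \alpha_n u + (1 - \alpha_n)\, J_{\beta_n} x_n,
  \qquad J_{\beta} = (\mathrm{Id} + \beta A)^{-1}.
\]
% A rate of metastability is a functional \Phi such that
\[
  \forall \varepsilon > 0 \;\; \forall g\colon \mathbb{N} \to \mathbb{N} \;\;
  \exists N \le \Phi(\varepsilon, g) \;\;
  \forall i, j \in [N, N + g(N)] : \;\; \| x_i - x_j \| \le \varepsilon .
\]
```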
Author: Sipos, Andrei (Univ Bucharest, Fac Math & Comp Sci, Res Ctr Log Optimizat & Secur LOS, Dept Comp Sci, Acad 14, Bucharest 010014, Romania; Romanian Acad, Simion Stoilow Inst Math, Calea Grivitei 21, Bucharest 010702, Romania)
We prove an abstract form of the strong convergence of the Halpern-type and Tikhonov-type proximal point algorithms in CAT(0) spaces. In addition, we derive uniform and computable rates of metastability (in the sense of Tao) for these iterations using proof mining techniques.
The proximal point algorithm (PPA) is a fundamental method in optimization, and it has been well studied in the literature. Recently, a generalized version of the PPA with a step size in (0, 2) has been proposed. Inheriting all the important theoretical properties of the original PPA, the generalized PPA has some numerical advantages that have been well verified in the literature through various applications. It is commonly accepted that larger step sizes are preferred whenever convergence can be theoretically ensured; thus it is interesting to know whether or not the step size of the generalized PPA can be as large as 2. We give a negative answer to this question. Some counterexamples are constructed to illustrate the divergence of the generalized PPA with step size 2 in both generic and specific settings, including the generalized versions of the very popular augmented Lagrangian method and the alternating direction method of multipliers. A by-product of our analysis is the failure of convergence of the Peaceman-Rachford splitting method and of a generalized version of the forward-backward splitting method with step size 1.5.
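A tiny numerical illustration of why step size 2 can fail (my own simple choice of operator, not one of the paper's counterexamples): for the rotation operator T(x1, x2) = (-x2, x1), which is maximal monotone with unique zero at the origin, the generalized PPA update with step size 2 is an orthogonal map, so the iterates keep a constant norm and never approach the zero, while any step size below 2 contracts.

```python
import numpy as np

# T(x) = R x with R a 90-degree rotation is maximal monotone with unique
# zero 0. The generalized PPA step
#   x+ = x + rho * (J x - x),   J = (I + T)^(-1),
# keeps ||x|| constant when rho = 2, so the iterates never converge,
# while any rho < 2 drives the iterates to the zero of T.

R = np.array([[0.0, -1.0], [1.0, 0.0]])      # monotone (skew-symmetric)
J = np.linalg.inv(np.eye(2) + R)              # resolvent with lambda = 1

def run(rho, iters=500):
    x = np.array([1.0, 0.0])
    for _ in range(iters):
        x = x + rho * (J @ x - x)
    return np.linalg.norm(x)

print("rho = 1.9, ||x_500|| =", run(1.9))     # decays toward 0
print("rho = 2.0, ||x_500|| =", run(2.0))     # stays at 1: no convergence
```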
Using proof-theoretical techniques, we analyze a proof by Hong-Kun Xu regarding a strong convergence result for the Halpern-type proximal point algorithm. We obtain a rate of metastability (in the sense of Terence Tao) and also a rate of asymptotic regularity for the iteration. Furthermore, our final quantitative result bypasses the sequential weak compactness argument present in the original proof. This elimination is reflected in the extraction of primitive recursive quantitative information. This work builds on recent results in proof mining regarding the elimination of sequential weak compactness arguments.