We introduce and investigate a new generalized convexity notion for functions called prox-convexity. The proximity operator of such a function is single-valued and firmly nonexpansive. We provide examples of (strongly) quasiconvex, weakly convex, and DC (difference of convex) functions that are prox-convex; however, none of these classes fully contains the class of prox-convex functions, nor is any of them included in it. We show that the classical proximal point algorithm remains convergent when the convexity of the proper lower semicontinuous function to be minimized is relaxed to prox-convexity.
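The classical proximal point algorithm the abstract refers to can be sketched in a few lines. The function below, f(x) = x²/2, is an illustrative convex choice with a closed-form proximity operator, not one of the paper's prox-convex examples; the step size is likewise arbitrary.

```python
# Minimal sketch of the classical proximal point algorithm (PPA),
# run on the illustrative convex function f(x) = x**2 / 2, whose
# proximity operator has the closed form prox_{lam*f}(x) = x / (1 + lam).

def prox(x, lam):
    # argmin_y { y**2/2 + (y - x)**2 / (2*lam) } = x / (1 + lam)
    return x / (1.0 + lam)

def proximal_point(x0, lam=1.0, iters=60):
    # x_{k+1} = prox_{lam*f}(x_k); for convex (and, per the paper,
    # prox-convex) f this drives x_k toward a minimizer of f.
    x = x0
    for _ in range(iters):
        x = prox(x, lam)
    return x
```

Starting from any x0, the iterates approach the unique minimizer 0 of this f.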
Author:
Sipoș, Andrei
University of Bucharest, Faculty of Mathematics and Computer Science, Research Center for Logic, Optimization and Security (LOS), Department of Computer Science, Academiei 14, Bucharest 010014, Romania
Romanian Academy, Simion Stoilow Institute of Mathematics, Calea Griviței 21, Bucharest 010702, Romania
We prove an abstract form of the strong convergence of the Halpern-type and Tikhonov-type proximal point algorithms in CAT(0) spaces. In addition, we derive uniform and computable rates of metastability (in the sense of Tao) for these iterations using proof mining techniques.
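For intuition, a Halpern-type proximal point iteration can be sketched on the real line rather than a general CAT(0) space. The resolvent of the illustrative function f(x) = x²/2 stands in for the proximal step, u is the anchor point, and the weights a_n = 1/(n + 2) are a standard illustrative choice, not the paper's setting.

```python
# Hedged sketch of a Halpern-type proximal point iteration in R:
# x_{n+1} = a_n * u + (1 - a_n) * J(x_n), where J is the resolvent
# of f(x) = x**2 / 2, i.e. J(x) = x / (1 + lam).

def halpern_ppa(u, x0, lam=1.0, iters=4000):
    x = x0
    for n in range(iters):
        a = 1.0 / (n + 2)               # vanishing anchor weights
        # convex combination of the anchor u and the proximal step
        x = a * u + (1.0 - a) * (x / (1.0 + lam))
    return x
```

As the anchor weights vanish, the iterates converge (here slowly, at a Halpern-type rate) to the zero of the operator nearest the anchor, which for this f is 0.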
Efficient sampling from a high-dimensional Gaussian distribution is an old but high-stakes issue. Vanilla Cholesky samplers imply a computational cost and memory requirements that can rapidly become prohibitive in high dimensions. To tackle these issues, multiple methods have been proposed from different communities ranging from iterative numerical linear algebra to Markov chain Monte Carlo (MCMC) approaches. Surprisingly, no complete review and comparison of these methods has been conducted. This paper aims to review all these approaches by pointing out their differences, close relations, benefits, and limitations. In addition to reviewing the state of the art, this paper proposes a unifying Gaussian simulation framework by deriving a stochastic counterpart of the celebrated proximal point algorithm in optimization. This framework offers a novel and unifying revisiting of most of the existing MCMC approaches while also extending them. Guidelines to choosing the appropriate Gaussian simulation method for a given sampling problem in high dimensions are proposed and illustrated with numerical examples.
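The "vanilla Cholesky sampler" mentioned above is easy to state: factor the covariance once at O(d³) cost, then each sample costs one matrix-vector product. The dimensions, seed, and covariance below are arbitrary illustrative choices.

```python
import numpy as np

def cholesky_sampler(mu, Sigma, n_samples, rng):
    # Factor Sigma = L L^T once; this O(d^3) step (and storing the dense
    # factor) is what becomes prohibitive in high dimensions.
    L = np.linalg.cholesky(Sigma)
    z = rng.standard_normal((n_samples, len(mu)))
    return mu + z @ L.T                  # x = mu + L z  ~  N(mu, Sigma)

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
samples = cholesky_sampler(np.zeros(2), Sigma, 200_000, rng)
```

The empirical covariance of the samples matches Sigma up to Monte Carlo error, which is what iterative and MCMC alternatives aim to reproduce without the dense factorization.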
We apply proof mining methods to analyse a result of Boikanyo and Moroşanu on the strong convergence of a Halpern-type proximal point algorithm. As a consequence, we obtain quantitative versions of this result, providing uniform effective rates of asymptotic regularity and metastability.
Using proof-theoretical techniques, we analyze a proof by Hong-Kun Xu of a strong convergence result for the Halpern-type proximal point algorithm. We obtain a rate of metastability (in the sense of Terence Tao) and a rate of asymptotic regularity for the iteration. Furthermore, our final quantitative result bypasses the sequential weak compactness argument present in the original proof; this elimination is reflected in the extraction of primitive recursive quantitative information. This work builds on recent results in proof mining on the removal of sequential weak compactness arguments.
The tight sublinear convergence rate of the proximal point algorithm for maximal monotone inclusion problems is established in terms of the squared fixed point residual. Using the performance estimation framework, the tight sublinear rate problem is written as an infinite-dimensional nonconvex optimization problem, which is then equivalently reformulated as a finite-dimensional semidefinite programming (SDP) problem. By solving the SDP, the exact sublinear rate is computed numerically. Theoretically, by constructing a feasible solution to the dual SDP, an upper bound is obtained for the tight sublinear rate. On the other hand, an example in two-dimensional space is constructed to provide a lower bound. The lower bound matches exactly the upper bound obtained from the dual SDP, which also coincides with the numerically computed rate. Hence, the worst-case sublinear convergence rate is established, tight in both the order and the constants involved.
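The "squared fixed point residual" measured in this rate is ||x_k − J x_k||², where J is the resolvent. The sketch below computes it for a PPA run on an illustrative 2-D skew (rotation) operator; this operator is maximal monotone but is not the paper's worst-case instance, so the residual here in fact decays linearly. The example only shows how the quantity is formed, not the tight sublinear rate.

```python
import numpy as np

# Illustrative maximal monotone operator: a skew-symmetric (rotation)
# vector field on R^2, which has 0 as its unique zero.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
lam = 1.0
J = np.linalg.inv(np.eye(2) + lam * A)   # resolvent (I + lam*A)^{-1}

x = np.array([1.0, 0.0])
residuals = []
for _ in range(100):
    x_next = J @ x                        # one PPA step
    residuals.append(np.linalg.norm(x - x_next) ** 2)   # squared residual
    x = x_next
```

The residual sequence is nonincreasing along the iterates, the qualitative behavior that the tight rate quantifies in the worst case.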
In this article we use techniques of proof mining to analyse a result, due to Yonghong Yao and Muhammad Aslam Noor, concerning the strong convergence of a generalized proximal point algorithm involving multiple parameters. Yao and Noor's result ensures the strong convergence of the algorithm to the projection onto the set of zeros of the operator. Our quantitative analysis, guided by Fernando Ferreira and Paulo Oliva's bounded functional interpretation, provides a primitive recursive bound on the metastability for the convergence of the algorithm, in the sense of Terence Tao. Furthermore, we obtain quantitative information on the asymptotic regularity of the iteration. The results of this paper are made possible by an arithmetization of the lim sup.
We study a general convex optimization problem, which covers various classic problems in different areas and particularly includes many optimal transport related problems arising in recent years. To solve this problem, we revisit the classic Bregman proximal point algorithm (BPPA) and introduce a new inexact stopping condition for solving the subproblems, which can circumvent the underlying feasibility difficulty often appearing in existing inexact conditions when the problem has a complex feasible set. Our inexact condition also covers several existing inexact conditions as special cases and hence makes our inexact BPPA (iBPPA) more flexible to fit different scenarios in practice. As an application to the standard optimal transport (OT) problem, our iBPPA with the entropic proximal term can bypass some numerical instability issues that usually plague the popular Sinkhorn's algorithm in the OT community, since our iBPPA does not require the proximal parameter to be very small for obtaining an accurate approximate solution. The iteration complexity of O(1/k) and the convergence of the sequence are also established for our iBPPA under some mild conditions. Moreover, inspired by Nesterov's acceleration technique, we develop an inertial variant of our iBPPA, denoted by V-iBPPA, and establish the iteration complexity of O(1/k^λ), where λ ≥ 1 is a quadrangle scaling exponent of the kernel function. In particular, when the proximal parameter is a constant and the kernel function is strongly convex with Lipschitz continuous gradient (hence λ = 2), our V-iBPPA achieves a faster rate of O(1/k^2) just as existing accelerated inexact proximal point algorithms. Some preliminary numerical experiments for solving the standard OT problem are conducted to show the convergence behaviors of our iBPPA and V-iBPPA under different inexactness settings. The experiments also empirically verify the potential of our V-iBPPA for improving the convergence speed.
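The Sinkhorn baseline the abstract contrasts against is a short textbook iteration (this is not the paper's iBPPA). The cost matrix, marginals, and regularization eps below are illustrative; note that the Gibbs kernel exp(−C/eps) under/overflows as eps shrinks, which is the instability the entropic iBPPA is said to bypass by not needing a small proximal parameter.

```python
import numpy as np

def sinkhorn(C, a, b, eps, iters=500):
    # Textbook Sinkhorn iterations for entropic-regularized OT:
    # min_P <C, P> + eps * sum(P * log P)  s.t.  P 1 = a, P^T 1 = b.
    K = np.exp(-C / eps)            # Gibbs kernel; unstable for tiny eps
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):
        v = b / (K.T @ u)           # rescale to match column marginals
        u = a / (K @ v)             # rescale to match row marginals
    return u[:, None] * K * v[None, :]   # transport plan P

rng = np.random.default_rng(1)
C = rng.random((5, 5))              # arbitrary cost matrix
a = np.full(5, 0.2)                 # uniform source marginal
b = np.full(5, 0.2)                 # uniform target marginal
P = sinkhorn(C, a, b, eps=0.5)
```

After convergence the plan's marginals match a and b; an accurate *unregularized* OT solution, however, requires driving eps down, which is where the kernel exponentials blow up.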
In this paper, we modify the proximal point algorithm for finding common fixed points in CAT(0) spaces for nonlinear multivalued mappings and a minimizer of a convex function and prove Delta-convergence of the proposed algorithm. A numerical example is presented to illustrate the convergence result. Our results improve and extend the corresponding results in the literature.
The proximal point algorithm (PPA) is a powerful tool for solving monotone inclusion problems. Recently, Tao and Yuan [On the optimal linear convergence rate of a generalized proximal point algorithm, J. Sci. Comput. 74 (2018), 826-850] proposed a generalized PPA (GPPA) for finding a zero point of a maximal monotone operator, and obtained its linear convergence rate. In this paper, we consider accelerating the GPPA with the aid of inertial extrapolation. We propose a generalized proximal point algorithm with alternating inertial steps for solving the monotone inclusion problem, and obtain weak convergence results under some mild conditions. When the inverse of the involved monotone operator is Lipschitz continuous at the origin, we prove that the iterative sequence generated by our generalized proximal point algorithm is linearly convergent. The Fejér monotonicity of the even subsequences of the iterative sequence is also recovered. Finally, we give some a priori and a posteriori error estimates for the generated sequences.
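The alternating-inertia idea can be sketched in one dimension, with the resolvent of the illustrative function f(x) = x²/2 standing in for a general maximal monotone operator; the values of theta and lam are arbitrary, not the paper's conditions.

```python
# Hedged sketch of a PPA with alternating inertial steps: the inertial
# extrapolation y_k = x_k + theta*(x_k - x_{k-1}) is applied only on
# even iterations, followed by the resolvent step x_{k+1} = y_k/(1+lam)
# (the resolvent of f(x) = x**2 / 2).

def alternating_inertial_ppa(x0, lam=1.0, theta=0.3, iters=200):
    x_prev, x = x0, x0
    for k in range(iters):
        if k % 2 == 0:
            y = x + theta * (x - x_prev)   # inertial step (even k only)
        else:
            y = x                          # plain PPA step (odd k)
        x_prev, x = x, y / (1.0 + lam)     # resolvent step
    return x
```

With theta = 0 this reduces to the plain PPA; restricting inertia to even iterations is what preserves Fejér monotonicity of the even subsequence in the paper's analysis.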