This note is a reaction to the recent paper by Rouhani and Moradi (J Optim Theory Appl 172:222-235, 2017), where a proximal point algorithm proposed by Boikanyo and Moroşanu (Optim Lett 7:415-420, 2013) is discussed. Noticing the inappropriate formulation of that algorithm, we propose a more general algorithm for approximating zeros of a maximal monotone operator on a Hilbert space. Besides the main result on the strong convergence of the sequences generated by this new algorithm, we discuss some particular cases, including the approximation of minimizers of convex functionals, and present two examples to illustrate the applicability of the algorithm. The note clarifies and extends both papers quoted above.
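For orientation, the baseline iteration that such algorithms refine is the classical proximal point step x_{k+1} = J_beta(x_k) with resolvent J_beta = (I + beta*A)^{-1}. A minimal sketch, using a purely illustrative linear maximal monotone operator A(x) = Mx (an assumption for this example, not the setting of the note):

```python
import numpy as np

# A minimal sketch of the classical proximal point iteration
# x_{k+1} = J_beta(x_k), J_beta = (I + beta*A)^{-1}, for the purely
# illustrative linear maximal monotone operator A(x) = M x with M
# positive semidefinite (the zeros of A are then the null space of M).
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 5))
M = B.T @ B                               # PSD, rank 3: nontrivial zero set

beta = 1.0
J = np.linalg.inv(np.eye(5) + beta * M)   # resolvent of beta*A

x = rng.standard_normal(5)
for _ in range(2000):
    x = J @ x                             # proximal point step

print(np.linalg.norm(M @ x))              # residual ||A(x)||, near zero
```

For this linear choice the resolvent is a single matrix; in general each step requires solving a regularized subproblem.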
Several optimization schemes are known for convex optimization problems. However, numerical algorithms for solving nonconvex optimization problems are still underdeveloped. Significant progress beyond convexity was made by considering the class of functions representable as differences of convex functions. In this paper, we introduce a generalized proximal point algorithm to minimize the difference of a nonconvex function and a convex function. We also study convergence results of this algorithm under the main assumption that the objective function satisfies the Kurdyka-Łojasiewicz property.
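A proximal-style DC step can be illustrated on a toy one-dimensional instance (the functions g(x) = x^2 and h(x) = |x| and the step size below are assumptions for illustration, not taken from the paper): linearize the subtracted convex part at a subgradient, then take a proximal step on the remaining part.

```python
# Toy proximal-DC sketch for f = g - h with the illustrative choice
# g(x) = x**2, h(x) = |x|; the critical points of f are x = +-1/2.
# Each step linearizes h at a subgradient y_k in dh(x_k) and applies
# the proximal operator of g.
lam = 0.5                            # proximal step size (assumption)

def prox_g(v):
    # prox_{lam*g}(v) = argmin_x x**2 + (x - v)**2 / (2*lam)
    return v / (1.0 + 2.0 * lam)

x = 2.0
for _ in range(100):
    y = 1.0 if x >= 0 else -1.0      # subgradient of h(x) = |x|
    x = prox_g(x + lam * y)          # proximal-DC update

print(x)                             # -> 0.5
```

Starting from x = 2.0 the iterates decrease monotonically to the critical point x = 1/2, mirroring the subsequential convergence the paper establishes under the Kurdyka-Łojasiewicz property.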
In this note, a small gap is corrected in the proof of H. K. Xu [Theorem 3.3, A regularization method for the proximal point algorithm, J. Glob. Optim. 36, 115-125 (2006)], and a strict restriction is also removed.
This paper illustrates that the main result of the paper [R.U. Verma, Generalized over-relaxed proximal algorithm based on A-maximal monotonicity framework and applications to inclusion problems, Mathematical and Computer Modelling 49 (2009) 1587-1594] is incorrect. The convergence rate of the over-relaxed proximal point algorithm should be greater than 1. Moreover, the strong convergence and the uniqueness of the solution cannot be established as claimed in Verma's paper. (C) 2012 Elsevier Ltd. All rights reserved.
Efficient sampling from a high-dimensional Gaussian distribution is an old but high-stakes issue. Vanilla Cholesky samplers entail a computational cost and memory requirements that can rapidly become prohibitive in high dimensions. To tackle these issues, multiple methods have been proposed from different communities ranging from iterative numerical linear algebra to Markov chain Monte Carlo (MCMC) approaches. Surprisingly, no complete review and comparison of these methods has been conducted. This paper aims to review all these approaches by pointing out their differences, close relations, benefits, and limitations. In addition to reviewing the state of the art, this paper proposes a unifying Gaussian simulation framework by deriving a stochastic counterpart of the celebrated proximal point algorithm in optimization. This framework offers a novel and unifying revisiting of most of the existing MCMC approaches while also extending them. Guidelines for choosing the appropriate Gaussian simulation method for a given sampling problem in high dimensions are proposed and illustrated with numerical examples.
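The "vanilla Cholesky sampler" baseline discussed above can be sketched in a few lines; its one-time O(d^3) factorization and O(d^2) storage are exactly what becomes prohibitive in high dimension. The covariance below is an arbitrary assumption for illustration:

```python
import numpy as np

# Vanilla Cholesky sampling from N(mu, Sigma): factor Sigma = L L^T once
# (O(d^3) time, O(d^2) memory), then map i.i.d. standard normals through L.
rng = np.random.default_rng(1)
d = 4
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)      # an arbitrary SPD covariance (assumption)
mu = np.zeros(d)

L = np.linalg.cholesky(Sigma)        # Sigma = L @ L.T
samples = mu + rng.standard_normal((100_000, d)) @ L.T

print(np.abs(np.cov(samples.T) - Sigma).max())   # small sampling error
```

The iterative and MCMC alternatives surveyed in the paper avoid the explicit factorization, trading exactness per sample for per-iteration costs that scale with matrix-vector products.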
Authors: Liu, Yong-Jin; Yu, Jing
Affiliation: Fuzhou Univ, Ctr Appl Math Fujian Prov, Sch Math & Stat, Fuzhou 350108, Fujian, Peoples R China
The maximum eigenvalue problem is to minimize the maximum eigenvalue function over an affine subspace in a symmetric matrix space, which has many applications in structural engineering, such as combinatorial optimization, control theory and structural design. Based on classical analysis of the proximal point algorithm (PPA) and semismooth analysis of nonseparable spectral operators, we propose an efficient semismooth Newton based dual proximal point (SSNDPPA) algorithm to solve the maximum eigenvalue problem, in which an inexact semismooth Newton (SSN) algorithm is applied to solve the inner subproblem of the dual proximal point (d-PPA) algorithm. Global convergence and locally asymptotically superlinear convergence of the d-PPA algorithm are established under very mild conditions, and fast superlinear or even quadratic convergence of the SSN algorithm is obtained when the primal constraint nondegeneracy condition holds for the inner subproblem. Computational costs of the SSN algorithm for solving the inner subproblem can be reduced by fully exploiting the low-rank or high-rank property of a matrix. Numerical experiments on max-cut problems and randomly generated maximum eigenvalue optimization problems demonstrate that the SSNDPPA algorithm substantially outperforms the SDPNAL+ solver and several state-of-the-art first-order algorithms.
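The SSNDPPA of the paper is a second-order method; for contrast, the objective itself can be attacked with a plain subgradient sketch (NOT the paper's algorithm, and the random problem data below are assumptions): minimize lambda_max(C + y1*A1 + y2*A2) over y, using the fact that u u^T is a subgradient of lambda_max at X for a top eigenvector u.

```python
import numpy as np

# Plain subgradient sketch (not the SSNDPPA of the paper) for
# min over y in R^2 of lambda_max(C + y1*A1 + y2*A2).
# The subgradient in y is (u' A1 u, u' A2 u) with u a top eigenvector.
rng = np.random.default_rng(2)
sym = lambda M: (M + M.T) / 2
C, A1, A2 = (sym(rng.standard_normal((5, 5))) for _ in range(3))

def lam_max(y):
    w, V = np.linalg.eigh(C + y[0] * A1 + y[1] * A2)
    return w[-1], V[:, -1]                   # eigh sorts eigenvalues ascending

y = np.zeros(2)
best = lam_max(y)[0]                         # objective at the starting point
for k in range(2000):
    val, u = lam_max(y)
    best = min(best, val)
    g = np.array([u @ A1 @ u, u @ A2 @ u])   # subgradient in y
    y -= (1.0 / (k + 1)) * g                 # diminishing step size

print(best)                                  # best maximum eigenvalue found
```

This first-order scheme is exactly the kind of slowly converging baseline that the paper's numerical experiments compare against.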
We present several strong convergence results for the modified, Halpern-type, proximal point algorithm $x_{n+1} = \alpha_n u + (1 - \alpha_n) J_{\beta_n} x_n + e_n$ ($n = 0, 1, \ldots$; $u, x_0 \in H$ given, and $J_{\beta_n} = (I + \beta_n A)^{-1}$ for a maximal monotone operator $A$) in a real Hilbert space, under new sets of conditions on $\alpha_n \in (0, 1)$ and $\beta_n \in (0, \infty)$. These conditions are weaker than those known to us, and our results extend and improve some recent results such as those of H. K. Xu. We also show how to apply our results to approximate minimizers of convex functionals. In addition, we give convergence rate estimates for a sequence approximating the minimum value of such a functional.
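The Halpern-type step above can be run as written, with zero errors e_n, for an illustrative linear monotone operator (the operator and the parameter choices are assumptions, not from the paper); the iterates then converge strongly to the zero of the operator closest to the anchor u:

```python
import numpy as np

# Halpern-type proximal point sketch,
#   x_{n+1} = alpha_n*u + (1 - alpha_n)*J_{beta_n}(x_n),
# with e_n = 0, for the illustrative monotone operator A(x) = M x (M PSD).
# alpha_n = 1/(n+2) and beta_n = 1 are assumed parameter choices.
rng = np.random.default_rng(3)
B = rng.standard_normal((2, 4))
M = B.T @ B                              # PSD with a 2-dimensional null space

u = rng.standard_normal(4)
x = np.zeros(4)
for n in range(5000):
    alpha, beta = 1.0 / (n + 2), 1.0
    Jx = np.linalg.solve(np.eye(4) + beta * M, x)   # resolvent J_beta(x)
    x = alpha * u + (1 - alpha) * Jx                # Halpern anchoring to u

print(np.linalg.norm(M @ x))             # residual ||A(x)||, tending to 0
```

The anchoring term alpha_n*u is what upgrades the weak convergence of the classical iteration to strong convergence toward the projection of u onto the zero set.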
The purpose of this paper is to show that the iterative scheme recently studied by Xu (J Glob Optim 36(1):115-125, 2006) is the same as the one studied by Kamimura and Takahashi (J Approx Theory 106(2):226-240, 2000) and to give a supplement to these results. With the new technique proposed by Maingé (Comput Math Appl 59(1):74-79, 2010), we show that the convergence of the iterative scheme can be established under another assumption. It is noted that if the computation error is zero, i.e., the approximate computation is exact, our new result is a genuine generalization of Xu's result and Kamimura-Takahashi's result.
In this paper, for a monotone operator $T$, we shall show strong convergence of the regularization method for Rockafellar's proximal point algorithm under more relaxed conditions on the sequences $\{r_k\}$ and $\{t_k\}$, namely $\lim_{k \to \infty} t_k = 0$, $\sum_{k} t_k = \infty$, and $\liminf_{k \to \infty} r_k > 0$. Our results unify and improve some existing results.
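The regularized step x_{k+1} = J_{r_k}(t_k*u + (1 - t_k)*x_k) applies the resolvent after the convex combination with the anchor, unlike the Halpern form. A runnable sketch for an illustrative linear monotone operator (the operator and parameters are assumptions; t_k = 1/(k+2), r_k = 1 satisfy t_k -> 0, sum t_k = infinity, liminf r_k > 0):

```python
import numpy as np

# Regularization method sketch, x_{k+1} = J_{r_k}(t_k*u + (1 - t_k)*x_k),
# for the illustrative monotone operator T(x) = M x with M PSD.
# t_k = 1/(k+2) and r_k = 1 are assumed parameter choices.
rng = np.random.default_rng(4)
B = rng.standard_normal((2, 4))
M = B.T @ B                          # PSD with a nontrivial zero set

u = rng.standard_normal(4)
x = np.zeros(4)
for k in range(5000):
    t, r = 1.0 / (k + 2), 1.0
    x = np.linalg.solve(np.eye(4) + r * M, t * u + (1 - t) * x)

print(np.linalg.norm(M @ x))         # residual ||T(x)||, tending to 0
```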
An extension of a proximal point algorithm for the difference of two convex functions is presented in the context of Riemannian manifolds of nonpositive sectional curvature. If the sequence generated by our algorithm is bounded, it is proved that every cluster point is a critical point of the function (not necessarily convex) under consideration, even if minimizations are performed inexactly at each iteration. An application to constrained maximization problems within the framework of Hadamard manifolds is presented.