A new interior proximal method for variational inequalities with generalized monotone operators is developed. It transforms a given variational inequality (which may be constrained and ill-posed) into unconstrained and well-posed equations, plus, at each iteration, a single additional extragradient step with rather small numerical effort. Convergence is established under mild assumptions: the frequently assumed maximal monotonicity is weakened to pseudo- and quasimonotonicity with respect to the solution set, and a wide class of even nonlinearly constrained feasible sets is allowed. In this general setting, the presented scheme constitutes the first interior proximal method that works without the so-called cutting plane property. That demanding assumption is dropped entirely, which allows wide classes of saddle point and equilibrium problems to be solved by an interior proximal method for the first time. As another application, we study variational inequalities derived from quasiconvex optimization problems.
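The extragradient step referred to above can be illustrated in its classical projection form (Korpelevich's method) rather than the paper's interior proximal variant; the affine operator, box constraint, and stepsize below are illustrative choices, not taken from the paper:

```python
import numpy as np

def extragradient_vi(F, proj, x0, step=0.1, iters=200):
    """Classical extragradient method for VI(F, C):
    find x* in C with <F(x*), y - x*> >= 0 for all y in C.
    `proj` projects onto the feasible set C."""
    x = x0.copy()
    for _ in range(iters):
        y = proj(x - step * F(x))   # predictor (gradient-projection) step
        x = proj(x - step * F(y))   # corrector (extragradient) step
    return x

# Example: affine monotone operator F(x) = A x + b over the box [0, 1]^2.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])  # positive definite symmetric part
b = np.array([-1.0, -1.0])
F = lambda x: A @ x + b
proj = lambda x: np.clip(x, 0.0, 1.0)
sol = extragradient_vi(F, proj, np.zeros(2))
```

For this toy problem the solution lies in the interior of the box, so it coincides with the root of F; the stepsize satisfies the usual requirement step < 1/L for the Lipschitz constant L of F.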
A unified efficient algorithm framework of proximal-based decomposition methods was proposed for monotone variational inequalities in 2012, while only global convergence was proved at that time. In this paper, we give a unified proof of the O(1/t) iteration complexity, together with the linear convergence rate, for this kind of proximal-based decomposition methods. Besides the ε-optimal iteration complexity result defined by the variational inequality, the non-ergodic relative error of adjacent iteration points is also proved to decrease at the same rate. Furthermore, the linear convergence rate of this algorithm framework can be established based on some special variational inequality properties, without requiring strong monotonicity conditions.
In this paper, two iteration processes are used to find solutions of the mathematical programming problem for the sum of two convex functions. In an infinite-dimensional Hilbert space, we establish two strong convergence theorems for this problem. As applications of our results, we give strong convergence theorems for the split feasibility problem with a modified CQ method, a strong convergence theorem for the lasso problem, and strong convergence theorems for the mathematical programming problem with a modified proximal point algorithm and a modified gradient-projection method in an infinite-dimensional Hilbert space. We also apply our result on the lasso problem to the image deblurring problem. Some numerical examples are given to demonstrate our results. The main result of this paper entails a unified study of many types of optimization problems. Our algorithms for solving these problems differ from any in the literature. Some results of this paper are original, and some improve, extend, and unify comparable results existing in the literature.
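The lasso application can be sketched with a plain (unmodified) proximal gradient iteration, ISTA, in finite dimensions; the strong-convergence modifications studied in the paper are omitted, and the problem data below are made up for illustration:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_lasso(A, b, lam, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5 ||Ax - b||^2 + lam ||x||_1:
    a forward gradient step on the smooth part, then the l1 prox."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny sanity check: with A = I the lasso solution is soft_threshold(b, lam).
x_hat = ista_lasso(np.eye(3), np.array([3.0, 0.5, -2.0]), 1.0)
```

The shrinkage operator is exactly the proximal map of the ℓ1 penalty, which is why the ℓ1 part never needs to be differentiated.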
The Bregman function-based proximal point algorithm (BPPA) is an efficient tool for solving equilibrium problems and fixed-point problems. Extending rather classical proximal regularization methods, its main additional feature is the application of zone coercive regularizations. The latter allow one to treat the generated subproblems as unconstrained ones, albeit with a certain precaution in numerical experiments. However, compared to the (classical) proximal point algorithm for equilibrium problems, the convergence results require additional assumptions, which may be seen as the price to pay for unconstrained subproblems. Unfortunately, these assumptions are quite demanding; for instance, they imply a sort of unique solvability of the given problem. The main purpose of this paper is to develop a modification of the BPPA involving an additional extragradient step with an adaptive (and explicitly given) stepsize. We prove that this extragradient step allows all of the additional assumptions mentioned above to be dropped. Hence, though still of interior proximal type, the suggested method is applicable to an essentially larger class of equilibrium problems, especially including non-uniquely solvable ones.
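A minimal sketch of a Bregman proximal step with a zone coercive kernel: for a linear objective over the unit simplex, the entropy (Kullback-Leibler) kernel gives a closed-form multiplicative update whose iterates stay in the interior, so the constraint never has to be handled explicitly. This is a toy illustration of the zone coercivity idea, not the paper's equilibrium-problem scheme or its adaptive extragradient stepsize; the cost vector and stepsize are made up:

```python
import numpy as np

def bregman_ppa_simplex(c, x0, lam=0.5, iters=200):
    """Bregman proximal point steps for min <c, x> over the unit simplex,
    using the entropy kernel. Each subproblem
        argmin_x <c, x> + (1/lam) * KL(x, x_k)
    has the closed-form multiplicative update below, and every iterate
    remains strictly positive (interior of the simplex)."""
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-lam * c)   # closed-form Bregman prox for a linear cost
        x /= x.sum()               # renormalize onto the simplex
    return x

c = np.array([1.0, 0.2, 0.5])
x = bregman_ppa_simplex(c, np.ones(3) / 3)
# the mass concentrates on the coordinate with the smallest cost
```

Writing out the optimality condition c_i + (1/lam) * log(x_i / x_k,i) + nu = 0 and normalizing recovers exactly the update in the loop, which is why no projection step appears.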
Linearly constrained convex optimization has many applications. The first-order optimality condition of linearly constrained convex optimization is a monotone variational inequality (VI). For solving a VI, the proximal point algorithm (PPA) in the Euclidean norm is classical but abstract; in fact, the classical PPA only plays an important theoretical role and is rarely used in practical scientific computing. In this paper, we give a review of the recently developed customized PPA in the H-norm (H is a positive definite matrix). In the framework of the customized PPA, it is easy to construct contraction-type methods for convex optimization with different linear constraints. At each iteration of the proposed methods, we need only solve proximal subproblems which have closed-form solutions or can be efficiently solved to high precision. Some novel applications and numerical experiments are reported. In particular, the original primal-dual hybrid gradient method is modified into a convergent algorithm by using a prediction-correction uniform framework. With the variational inequality approach, the contractive convergence and convergence-rate proofs of the framework are more general and quite simple.
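For a quadratic program with linear equality constraints, a PPA step in an H-norm reduces to one linear solve per iteration. The sketch below uses the simplest positive definite choice H = diag(rI, sI) rather than the customized H of the review, and the problem data are illustrative, not from the paper:

```python
import numpy as np

def ppa_hnorm(Q, A, c, b, r=1.0, s=1.0, iters=300):
    """PPA in the H-norm for the monotone VI reformulation of
        min 0.5 x'Qx + c'x  s.t.  Ax = b.
    With w = (x, y) and F(w) = M w + q, each PPA step solves
        F(w_next) + H (w_next - w_k) = 0,
    i.e. one linear system per iteration. Any positive definite H
    yields a convergent scheme since M is monotone."""
    n, m = Q.shape[0], A.shape[0]
    M = np.block([[Q, A.T], [-A, np.zeros((m, m))]])
    q = np.concatenate([c, b])
    H = np.diag(np.concatenate([r * np.ones(n), s * np.ones(m)]))
    w = np.zeros(n + m)
    for _ in range(iters):
        w = np.linalg.solve(M + H, H @ w - q)
    return w[:n], w[n:]

# Example: min 0.5 ||x||^2  s.t.  x1 + x2 = 1  (solution x = (0.5, 0.5)).
x_sol, y_sol = ppa_hnorm(np.eye(2), np.array([[1.0, 1.0]]),
                         np.zeros(2), np.array([1.0]))
```

The point of the customized variants reviewed in the paper is that a structured H can decouple this linear system into primal and dual updates with closed-form solutions; the diagonal H here only illustrates the plain H-norm iteration.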
The problem of minimizing least squares functionals with ℓ1 penalties is considered in an infinite-dimensional Hilbert space setting. Though there are several algorithms available in the finite-dimensional setting, only a few of them come with a proper convergence analysis in the infinite-dimensional setting. In this work we provide an algorithm from a class that has not been considered for ℓ1 minimization before, namely, a proximal-point method in combination with a projection step. We show that this idea gives a simple and easy-to-implement algorithm. We present experiments indicating that the algorithm may perform better than other algorithms if those are employed without any special tricks. Hence, we may conclude that the projection proximal-point idea is promising in the context of ℓ1 minimization.
In this paper, we concentrate on the maximal inclusion problem of locating the zeros of the sum of maximal monotone operators in the framework of the proximal point method. Such problems arise widely in applied mathematical fields such as signal and image processing. We define two new maximal monotone operators and characterize the solutions of the considered problem via the zeros of the new operators. The maximal monotonicity and the resolvents of both defined operators are proved and calculated, respectively. The traditional proximal point algorithm can therefore be applied to the considered maximal inclusion problem, and its convergence is ensured. Furthermore, by exploring the relationship between the proposed method and the generalized forward-backward splitting algorithm, we point out that the latter is essentially the proximal point algorithm when the operator corresponding to the forward step is the zero operator. Copyright (c) 2013 John Wiley & Sons, Ltd.
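The forward-backward splitting the abstract relates to the PPA can be sketched as follows: for 0 ∈ A(x) + B(x), with A the normal cone of a box (whose resolvent is the projection) and B the gradient of a smooth convex quadratic, the iteration alternates a forward step on B with the resolvent of A. All operators and parameters below are illustrative choices, not the paper's construction:

```python
import numpy as np

def forward_backward(resolvent_A, B, x0, gamma, iters=500):
    """Forward-backward splitting for 0 in A(x) + B(x):
        x+ = J_{gamma A}(x - gamma * B(x)),   J_{gamma A} = (I + gamma A)^{-1}.
    Setting B = 0 reduces this to the plain proximal point algorithm
    x+ = J_{gamma A}(x), as the abstract observes."""
    x = x0.copy()
    for _ in range(iters):
        x = resolvent_A(x - gamma * B(x))  # backward step after the forward step
    return x

# Example: A = normal cone of the box [0, 1]^2 (resolvent = projection),
# B = gradient of the smooth convex quadratic 0.5 x'Qx + c'x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([-4.0, -1.0])
B = lambda x: Q @ x + c
resolvent_A = lambda x: np.clip(x, 0.0, 1.0)
x_star = forward_backward(resolvent_A, B, np.zeros(2), gamma=0.2)
```

With this choice the scheme is projected gradient descent, the simplest member of the forward-backward family; convergence holds for gamma below 2 over the Lipschitz constant of B.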
In this work, we propose a proximal algorithm for unconstrained optimization on the cone of symmetric positive semidefinite matrices. It appears to be the first in the proximal class among methods that convert a symmetric positive definite optimization problem into a nonlinear optimization problem. It replaces the main iteration of the conceptual proximal point algorithm by a sequence of nonlinear programming problems on the cone of diagonal positive definite matrices, which has the structure of the positive orthant of a Euclidean vector space. We are motivated by results on the classical proximal algorithm extended to Riemannian manifolds with nonpositive sectional curvature. An important example of such a manifold is the space of symmetric positive definite matrices, where the metric is given by the Hessian of the standard barrier function −ln det(X). Observing the obvious fact that proximal algorithms do not depend on the geodesics, we apply those ideas to develop a proximal point algorithm for convex functions in this Riemannian metric. (c) 2009 Elsevier Inc. All rights reserved.
This paper develops an implementation of a predual proximal point algorithm (PPPA) solving a Non Negative Basis Pursuit Denoising model. The model imposes a constraint on the ℓ2 norm of the residual instead of penalizing it. The PPPA solves the predual of the problem with a proximal point algorithm (PPA); moreover, the minimization that needs to be performed at each iteration of the PPA is solved with a dual method. We prove that these dual variables converge to a solution of the initial problem. Our analysis shows that we turn a constrained nondifferentiable convex problem into a short sequence of nice concave maximization problems; by nice, we mean that the functions being maximized are differentiable with Lipschitz gradients. The algorithm is easy to implement, easier to tune, and more general than the algorithms found in the literature. In particular, it can be applied to the Basis Pursuit Denoising (BPDN) and the Non Negative Basis Pursuit Denoising (NNBPDN) models, and it makes no assumption on the dictionary. We prove its convergence to the set of solutions of the model and provide some convergence rates. Experiments on image approximation show that the performance of the PPPA is at the current state of the art for BPDN.
In this note, a small gap in the proof of H. K. Xu [Theorem 3.3, A regularization method for the proximal point algorithm, J. Glob. Optim. 36, 115-125 (2006)] is corrected, and a strict restriction is also removed.