Compressive sensing (CS) is a new framework for simultaneous sensing and compression. How to reconstruct a sparse signal from limited measurements is the key problem in CS. For solving the reconstruction problem of a sparse signal, we propose a self-adaptive proximal point algorithm (PPA). This algorithm handles the sparse signal reconstruction by solving a substituted problem, the l1 problem. Finally, the numerical results show that the proposed method is more effective than the compressive sampling matching pursuit (CoSaMP).
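As a rough illustration of the l1 machinery behind abstracts of this kind, the sketch below shows the soft-thresholding proximal operator and a plain proximal-gradient loop for the l1-regularized least-squares surrogate; it is not the paper's self-adaptive PPA, and the fixed step size 1/L is an assumption.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l1_least_squares(A, b, lam=0.1, iters=500):
    # Plain proximal-gradient loop for min 0.5*||Ax - b||^2 + lam*||x||_1.
    # Illustrative only; the fixed step size 1/L (L = ||A||_2^2) is an assumption.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```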
Very recently, the author gave an upper bound on a decreasing positive sequence and used it to improve a classical result of Brézis and Lions concerning the proximal point algorithm for monotone inclusions in an infinite-dimensional Hilbert space. That improvement, however, assumed the strong convergence of the algorithm. In this paper, we derive a new upper bound on this decreasing positive sequence and thus achieve the same improvement without requiring this assumption.
An extension of a proximal point algorithm for the difference of two convex functions is presented in the context of Riemannian manifolds of nonpositive sectional curvature. If the sequence generated by our algorithm is bounded, it is proved that every cluster point is a critical point of the (not necessarily convex) function under consideration, even if the minimizations are performed inexactly at each iteration. An application to constrained maximization problems within the framework of Hadamard manifolds is presented.
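For readers who want the shape of a difference-of-convex (DC) proximal step, here is a minimal Euclidean sketch; the paper's setting is a Riemannian manifold of nonpositive curvature, so this flat-space version with user-supplied prox_g and grad_h (hypothetical names) only conveys the idea.

```python
import numpy as np

def dc_proximal_point(prox_g, grad_h, x0, lam=1.0, iters=100):
    # Euclidean sketch of a proximal point step for f = g - h (DC form).
    # With w_k a (sub)gradient of h at x_k, the update
    #   x_{k+1} = argmin_x  g(x) - <w_k, x> + (1/(2*lam))*||x - x_k||^2
    # is exactly prox_{lam*g}(x_k + lam*w_k).
    # prox_g(v, lam) and grad_h(x) are user-supplied (hypothetical) callables.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        w = grad_h(x)
        x = prox_g(x + lam * w, lam)
    return x
```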
In this paper, a fast proximal point algorithm (PPA) is proposed for solving the l1-minimization problem arising from compressed sensing. The proposed algorithm can be regarded as a new adaptive version of the customized proximal point algorithm, which is based on a novel decomposition of the given nonsymmetric proximal matrix M. Since the proposed method is also a special case of the PPA-based contraction method, its global convergence can be established using the framework of a contraction method. Numerical results illustrate that the proposed algorithm outperforms some existing proximal point algorithms for sparse signal reconstruction. (C) 2015 Elsevier Inc. All rights reserved.
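A minimal primal-dual sketch in the customized-PPA style for min ||x||_1 subject to Ax = b is given below; the fixed parameters r and s with r*s > ||A||^2 are assumptions, and the paper's adaptive proximal matrix M is not reproduced.

```python
import numpy as np

def customized_ppa_l1(A, b, r=None, s=None, iters=1000):
    # Illustrative customized-PPA-style primal-dual iteration for
    #   min ||x||_1  s.t.  Ax = b,
    # with fixed proximal parameters r, s chosen so that r*s > ||A||_2^2.
    m, n = A.shape
    if r is None or s is None:
        r = s = 1.05 * np.linalg.norm(A, 2)   # simple choice with r*s > ||A||_2^2
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # primal step: prox of ||.||_1 around x^k + (1/r)*A^T lam^k
        v = x + (A.T @ lam) / r
        x_new = np.sign(v) * np.maximum(np.abs(v) - 1.0 / r, 0.0)
        # dual step uses the extrapolated point 2*x_new - x
        lam = lam - (A @ (2 * x_new - x) - b) / s
        x = x_new
    return x
```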
We study the contraction-proximal point algorithm with the iterative form x_{n+1} = α_n u + β_n x_n + γ_n J_{λ_n} x_n, where u is a fixed element, J_{λ_n} is the resolvent operator, and (α_n), (β_n), (γ_n), (λ_n) are real sequences. The algorithm is known to converge strongly under the assumption that (β_n) is bounded below away from zero and above away from one. In this paper, we show that this condition can be further relaxed.
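A small sketch of this iteration for a monotone linear operator (given as a positive semidefinite matrix, so the resolvent is a linear solve) may help fix notation; the parameter schedules alpha, beta, c are user-supplied assumptions.

```python
import numpy as np

def contraction_ppa(A_mat, u, x0, alpha, beta, c, iters=200):
    # Sketch of the contraction-proximal point iteration
    #   x_{n+1} = a_n*u + b_n*x_n + (1 - a_n - b_n)*J_{c_n}(x_n),
    # where J_c = (I + c*A)^(-1) is the resolvent.  Here A is a monotone
    # linear operator given as a positive semidefinite matrix A_mat, and
    # alpha(n), beta(n), c(n) are user-supplied parameter schedules.
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for n in range(iters):
        a, b, cn = alpha(n), beta(n), c(n)
        Jx = np.linalg.solve(I + cn * A_mat, x)    # resolvent applied to x
        x = a * u + b * x + (1.0 - a - b) * Jx
    return x
```

A typical choice would be alpha = lambda n: 1.0/(n + 2), beta = lambda n: 0.5 and a constant c = lambda n: 1.0.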
In the context of finite sums minimization, variance reduction techniques are widely used to improve the performance of state-of-the-art stochastic gradient methods. Their practical impact is clear, as well as their theoretical properties. Stochastic proximal point algorithms have been studied as an alternative to stochastic gradient algorithms since they are more stable with respect to the choice of the step size. However, their variance-reduced versions are not as well studied as the gradient ones. In this work, we propose the first unified study of variance reduction techniques for stochastic proximal point algorithms. We introduce a generic stochastic proximal-based algorithm that can be specified to give the proximal version of SVRG, SAGA, and some of their variants. For this algorithm, in the smooth setting, we provide several convergence rates for the iterates and the objective function values, which are faster than those of the vanilla stochastic proximal point algorithm. More specifically, for convex functions, we prove a sublinear convergence rate of O(1/k). In addition, under the Polyak-Łojasiewicz condition, we obtain linear convergence rates. Finally, our numerical experiments demonstrate the advantages of the proximal variance reduction methods over their gradient counterparts in terms of the stability with respect to the choice of the step size in most cases, especially for difficult problems.
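The following is a hedged sketch of one SVRG-flavoured specialization of such a variance-reduced stochastic proximal point scheme, assuming the prox of each component f_i is computable; the callables prox_fi and grad_fi are hypothetical names, and the exact correction term used in the paper may differ.

```python
import numpy as np

def svrg_proximal_point(prox_fi, grad_fi, n, x0, gamma=0.1,
                        epochs=20, inner=None, rng=None):
    # Sketch of an SVRG-style variance-reduced stochastic proximal point
    # method for F(x) = (1/n)*sum_i f_i(x).  At each inner step a component
    # i is sampled and the update
    #   x <- prox_{gamma*f_i}( x - gamma*(full_grad - grad_fi(i, snapshot)) )
    # shifts the prox argument by an SVRG control variate.
    rng = rng or np.random.default_rng(0)
    inner = inner or n
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = sum(grad_fi(i, snapshot) for i in range(n)) / n
        for _ in range(inner):
            i = rng.integers(n)
            shift = full_grad - grad_fi(i, snapshot)   # control variate
            x = prox_fi(i, x - gamma * shift, gamma)
    return x
```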
In this work, we propose and study a framework of generalized proximal point algorithms associated with a maximally monotone operator. We give sufficient conditions on the regularization and relaxation parameters under which the boundedness of the sequence of iterates generated by the algorithm is equivalent to the non-emptiness of the zero set of the maximally monotone operator, and under which the algorithm converges weakly or strongly. Our results cover or improve many results on generalized proximal point algorithms in the references, and the improvements are illustrated by comparison with related known results.
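A generic relaxed proximal point iteration of the kind this framework covers can be sketched as follows, with resolvent, c and rho supplied by the user (hypothetical names).

```python
import numpy as np

def relaxed_ppa(resolvent, x0, c, rho, iters=200):
    # Minimal sketch of a relaxed (generalized) proximal point iteration
    #   x_{k+1} = (1 - rho_k)*x_k + rho_k * J_{c_k}(x_k),
    # where J_c is the resolvent (I + c*T)^(-1) of a maximally monotone
    # operator T.  'resolvent(x, c)' is a user-supplied callable; c(k) and
    # rho(k) are the regularization and relaxation schedules whose
    # conditions the abstract discusses.
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        x = (1.0 - rho(k)) * x + rho(k) * resolvent(x, c(k))
    return x
```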
The proximal point algorithm (PPA) is a useful algorithmic framework with good convergence properties. A difficulty is that the subproblems usually only have iterative solutions. In this paper, we propose an inexact customized PPA framework for the two-block separable convex optimization problem with linear constraints. We design two types of inexact error criteria for the subproblems. The first is an absolutely summable error criterion, under which both subproblems can be solved inexactly. When one of the two subproblems is easily solved, we propose another novel error criterion that is easier to implement, namely a relative error criterion. The relative error criterion involves only one parameter, which makes it more practical. We establish the global convergence and a sub-linear convergence rate in the ergodic sense for the proposed algorithms. Numerical experiments on LASSO regression problems and a total variation-based image denoising problem illustrate that our new algorithms outperform the corresponding exact algorithms.
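To make the relative error criterion concrete, here is an illustrative single-block sketch: each proximal subproblem is solved by inner gradient steps until its gradient residual is small relative to the step just taken. The one-block setting, the smoothness constant L and the parameter sigma are assumptions; the paper's framework is for two-block linearly constrained problems.

```python
import numpy as np

def inexact_ppa(grad_f, L, x0, lam=1.0, sigma=0.5, outer=50, inner_max=1000):
    # Illustrative single-block inexact PPA with a relative error criterion.
    # Each subproblem  min_y f(y) + (1/(2*lam))*||y - x_k||^2  is solved by
    # inner gradient steps until its gradient residual r satisfies
    #   ||r|| <= (sigma/lam)*||y - x_k||,
    # i.e. inexactness is measured relative to the proximal step just taken.
    x = np.asarray(x0, dtype=float)
    step = 1.0 / (L + 1.0 / lam)           # safe step for the smooth subproblem
    for _ in range(outer):
        y = x.copy()
        for _ in range(inner_max):
            r = grad_f(y) + (y - x) / lam  # gradient of the proximal subproblem
            if np.linalg.norm(r) <= sigma * np.linalg.norm(y - x) / lam:
                break
            y = y - step * r
        x = y
    return x
```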
In this paper, we combine the S-iteration process introduced by Agarwal et al. (J. Nonlinear Convex Anal., 8(1), 61-79, 2007) with the proximal point algorithm introduced by Rockafellar (SIAM J. Control Optim., 14, 877-898, 1976) to propose a new modified proximal point algorithm based on the S-type iteration process for approximating a common element of the set of solutions of convex minimization problems and the set of fixed points of nearly asymptotically quasi-nonexpansive mappings in the framework of CAT(0) spaces, and we prove the Δ-convergence of the proposed algorithm for solving the common minimization problem and the common fixed point problem. Our result generalizes, extends and unifies the corresponding results of Dhompongsa and Panyanak (Comput. Math. Appl., 56, 2572-2579, 2008), Khan and Abbas (Comput. Math. Appl., 61, 109-116, 2011), Abbas et al. (Math. Comput. Modelling, 55, 1418-1427, 2012) and many more.
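A flat-space sketch of an S-type iteration that mixes a proximal mapping with a second mapping T is given below; in the paper the convex combinations are geodesic ones in a CAT(0) space and T is nearly asymptotically quasi-nonexpansive, so this Euclidean version (with hypothetical prox_f, T, alpha, beta) is only indicative of the structure.

```python
import numpy as np

def s_type_proximal_iteration(prox_f, T, x0, alpha, beta, iters=100):
    # Euclidean sketch of an S-type proximal iteration:
    #   y_n     = (1 - beta_n)*x_n + beta_n*prox_f(x_n)
    #   x_{n+1} = (1 - alpha_n)*prox_f(x_n) + alpha_n*T(y_n)
    # 'prox_f' stands in for the minimization part, 'T' for the fixed-point
    # part; alpha(n), beta(n) are user-supplied sequences in (0, 1).
    x = np.asarray(x0, dtype=float)
    for n in range(iters):
        p = prox_f(x)
        y = (1.0 - beta(n)) * x + beta(n) * p
        x = (1.0 - alpha(n)) * p + alpha(n) * T(y)
    return x
```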
In this paper we introduce an extension of the proximal point algorithm proposed by Güler for solving convex minimization problems. This extension is obtained by substituting the usual quadratic proximal term with a class of convex nonquadratic entropy-like distances, called phi-divergences. A study of the convergence rate of this new proximal point method under mild assumptions is given, and it is further shown that this rate estimate is better than the one available for proximal-like methods. Some applications are given concerning general convex minimization, linearly constrained convex programs and variational inequalities.
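As a worked special case of an entropy-like (phi-divergence) proximal step, the sketch below uses the Kullback-Leibler divergence in place of the quadratic term for a linear objective over the probability simplex, where the update has a closed form; this is an illustration only, not Güler-type analysis.

```python
import numpy as np

def entropic_prox_linear(c, x0, lam=0.5, iters=100):
    # Entropy-like proximal iteration for the special case f(x) = <c, x>
    # over the probability simplex, with the KL divergence replacing the
    # quadratic proximal term:
    #   x_{k+1} = argmin_{x in simplex} <c, x> + (1/lam)*KL(x, x_k),
    # whose minimizer is proportional to x_k * exp(-lam*c).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-lam * np.asarray(c))
        x = x / x.sum()                       # renormalize onto the simplex
    return x
```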