The Bregman function-based proximal point algorithm (BPPA) is an efficient tool for solving equilibrium problems and fixed-point problems. Extending the classical proximal regularization methods, its main additional feature is the use of zone-coercive regularizations, which allow the generated subproblems to be treated as unconstrained ones, albeit with a certain precaution in numerical experiments. However, compared to the (classical) proximal point algorithm for equilibrium problems, the convergence results require additional assumptions, which may be seen as the price to pay for unconstrained subproblems. Unfortunately, these assumptions are quite demanding; for instance, they imply a sort of unique solvability of the given problem. The main purpose of this paper is to develop a modification of the BPPA involving an additional extragradient step with an adaptive (and explicitly given) stepsize. We prove that this extragradient step makes it possible to drop all of the additional assumptions mentioned above. Hence, though still of interior proximal type, the suggested method is applicable to an essentially larger class of equilibrium problems, in particular including non-uniquely solvable ones.
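To fix ideas, a schematic version of one iteration reads as follows, for the equilibrium problem of finding $x^* \in C$ with $f(x^*, y) \ge 0$ for all $y \in C$; here $h$ is the (zone-coercive) Bregman function, and the stepsizes $\lambda_k$, $\sigma_k$ and the correction direction $g^k$ are generic placeholders rather than the paper's exact choices:

    \begin{align*}
      &\text{proximal step: find } \tilde{x}^k \text{ such that } f(\tilde{x}^k, y) + \tfrac{1}{\lambda_k}\langle \nabla h(\tilde{x}^k) - \nabla h(x^k),\, y - \tilde{x}^k\rangle \ge 0 \quad \forall\, y, \\
      &\text{extragradient step: } x^{k+1} = (\nabla h)^{-1}\!\big(\nabla h(x^k) - \sigma_k\, g^k\big).
    \end{align*}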
Linearly constrained convex optimization has many applications. The first-order optimality condition of a linearly constrained convex optimization problem is a monotone variational inequality (VI). For solving VIs, the proximal point algorithm (PPA) in the Euclidean norm is classical; however, the classical PPA has mainly played an important theoretical role and is rarely used in practical scientific computation. In this paper, we review the recently developed customized PPA in the H-norm (where H is a positive definite matrix). In the framework of the customized PPA, it is easy to construct contraction-type methods for convex optimization with different linear constraints. In each iteration of the proposed methods, we need only solve proximal subproblems that have closed-form solutions or can be solved efficiently to high precision. Some novel applications and numerical experiments are reported. In particular, the original primal-dual hybrid gradient method is modified into a convergent algorithm by means of a prediction-correction uniform framework. With the variational inequality approach, the proofs of contractive convergence and of the convergence rate of the framework are more general and quite simple.
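For reference, the customized PPA iteration for the variational inequality VI$(\Omega, F)$, i.e. finding $w^* \in \Omega$ with $(w - w^*)^{\top} F(w^*) \ge 0$ for all $w \in \Omega$, takes the following standard form (the art lies in choosing the positive definite $H$ so that the subproblem is easy):

    \begin{align*}
      \text{find } w^{k+1} \in \Omega \text{ such that } (w - w^{k+1})^{\top}\big(F(w^{k+1}) + H\,(w^{k+1} - w^k)\big) \ge 0 \quad \forall\, w \in \Omega.
    \end{align*}

Taking $H = rI$ recovers the classical Euclidean PPA, while a structured $H$ yields subproblems with closed-form solutions.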
In this article, we introduce composite iterative schemes for finding a zero point of a finite family of maximal monotone operators in a reflexive Banach space. Then, we prove strong convergence theorems by using a shrinking projection method. Moreover, we also apply our results to a system of convex minimization problems in reflexive Banach spaces.
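As a rough sketch, a shrinking projection step in this setting typically has the following form, written here with the Bregman distance $D_f$ of a Legendre function $f$ (the paper's exact composite scheme may differ):

    \begin{align*}
      C_{n+1} &= \{\, z \in C_n : D_f(z, y_n) \le D_f(z, x_n) \,\}, \\
      x_{n+1} &= \operatorname{proj}^{f}_{C_{n+1}}(x_0),
    \end{align*}

where $y_n$ is produced by applying resolvents of the given operators to $x_n$, and $\operatorname{proj}^{f}$ denotes the Bregman projection.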
Maximum likelihood estimation in finite mixture distributions is typically approached as an incomplete data problem to allow application of the expectation-maximization (EM) algorithm. In its general formulation, the EM algorithm involves the notion of a complete data space, in which the observed measurements and incomplete data are embedded. An advantage is that many difficult estimation problems are facilitated when viewed in this way. One drawback is that the simultaneous update used by standard EM requires overly informative complete data spaces, which leads to slow convergence in some situations. In the incomplete data context, it has been shown that the use of less informative complete data spaces, or equivalently smaller missing data spaces, can lead to faster convergence without sacrificing simplicity. However, in the mixture case, little progress has been made in speeding up EM. In this article we propose a component-wise EM for mixtures. It uses, at each iteration, the smallest admissible missing data space by intrinsically decoupling the parameter updates. Monotonicity is maintained, although the estimated proportions may not sum to one during the course of the iteration. However, we prove that the mixing proportions will satisfy this constraint upon convergence. Our proof of convergence relies on the interpretation of our procedure as a proximal point algorithm. For performance comparison, we consider standard EM as well as two other algorithms based on missing data space reduction, namely the SAGE and AECME algorithms. We provide adaptations of these general procedures to the mixture case. We also consider the ECME algorithm, which is not a data augmentation scheme but still aims at accelerating EM. Our numerical experiments illustrate the advantages of the component-wise EM algorithm relative to these other methods.
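As an illustration, here is a minimal component-wise update for a univariate Gaussian mixture: each step refreshes only one component's proportion, mean, and standard deviation, so the proportions need not sum to one during the sweep. The cyclic update order and this parameterization are assumptions of the sketch, not necessarily the paper's exact procedure.

    import numpy as np
    from scipy.stats import norm

    def cem_step(x, pi, mu, sigma, k):
        # mixture densities weighted by the (possibly unnormalized) proportions
        dens = np.array([p * norm.pdf(x, m, s) for p, m, s in zip(pi, mu, sigma)])
        resp_k = dens[k] / dens.sum(axis=0)    # responsibilities of component k only
        pi[k] = resp_k.mean()                  # proportions may drift from summing to one
        mu[k] = np.sum(resp_k * x) / resp_k.sum()
        sigma[k] = np.sqrt(np.sum(resp_k * (x - mu[k]) ** 2) / resp_k.sum())
        return pi, mu, sigma

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])
    pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for it in range(100):                      # cyclic sweep over the two components
        pi, mu, sigma = cem_step(x, pi, mu, sigma, it % 2)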
The paper deals with the theoretical analysis of a regularized logarithmic barrier method for solving ill-posed convex programming problems. In this method a multi-step proximal regularization of the auxiliary problem...
In this paper, we present a modified decomposition algorithm and its bundle style variant for convex programming problems with separable structure. We prove that these methods are globally and linearly convergent and discuss the application of the bundle variant in parallel computations.
We present an algorithm to solve: find $(x, y) \in A \times A^{\perp}$ such that $y \in Tx$, where $A$ is a subspace and $T$ is a maximal monotone operator. The algorithm is based on the proximal decomposition on the graph of a monotone operator, and we show how to recover Spingarn's decomposition method. We give a proof of convergence that does not use the concept of partial inverse and show how to choose a scaling factor to accelerate the convergence in the strongly monotone case. Numerical results performed on quadratic problems confirm the robust behaviour of the algorithm.
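For orientation, Spingarn's iteration for this problem (with unit scaling; the scaling factor mentioned above rescales $T$) can be written as:

    \begin{align*}
      u^k &= (I + T)^{-1}(x^k + y^k), \qquad v^k = x^k + y^k - u^k \ \in\ T u^k, \\
      x^{k+1} &= P_A\, u^k, \qquad y^{k+1} = P_{A^{\perp}}\, v^k,
    \end{align*}

where $P_A$ and $P_{A^{\perp}}$ denote the orthogonal projections onto $A$ and $A^{\perp}$.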
ISBN (print): 9781509001668
This paper develops two implementations of Halpern-type proximal algorithms (HPA1 and HPA2) for solving nonsmooth optimization problems, and proves their convergence to solutions of these problems under new conditions. Applying this idea to the Basis Pursuit model in image/signal processing, a new Halpern-type proximal algorithm (HPA) for that model is proposed. We show that the Halpern-type proximal algorithm has a better descent property than the usual proximal algorithm (PA) for the Basis Pursuit model.
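The backbone of such methods is the Halpern-regularized proximal step $x_{k+1} = a_k x_0 + (1 - a_k)\,\mathrm{prox}(x_k)$ with anchor $x_0$ and $a_k \to 0$. The toy sketch below instantiates it with the $\ell_1$ proximal operator (soft thresholding) and $a_k = 1/(k+2)$; it is meant only to illustrate the iteration pattern, not to reproduce HPA1/HPA2 or the Basis Pursuit specialization.

    import numpy as np

    def soft_threshold(z, t):
        # proximal operator of t * ||.||_1
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def halpern_prox(prox, x0, n_iter=200):
        # Halpern-type proximal iteration: x_{k+1} = a_k*x0 + (1 - a_k)*prox(x_k)
        x = x0.copy()
        for k in range(n_iter):
            a = 1.0 / (k + 2)              # standard Halpern anchor weights
            x = a * x0 + (1.0 - a) * prox(x)
        return x

    x0 = np.array([3.0, -0.2, 1.5])
    x = halpern_prox(lambda z: soft_threshold(z, 0.1), x0)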
ISBN (print): 9781479972081
The linearized Bregman algorithm is effective for solving the l1-minimization problem, but its parameter selection relies on prior information. To ameliorate this weakness, we propose a new algorithm in this paper that combines the proximal point algorithm with the linearized Bregman iterative method. In the second part of the paper, the proposed algorithm is further accelerated through Nesterov's acceleration scheme and parameter-restart techniques. Compared with the original linearized Bregman algorithm, the accelerated algorithms converge faster while avoiding the selection of the model parameter. Simulations on sparse recovery problems show that the new algorithms indeed offer robust parameter selection and at the same time improve the convergence precision.
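For context, the baseline linearized Bregman iteration that such work builds on, for $\min\, \mu\|x\|_1 + \tfrac{1}{2}\|x\|^2$ subject to $Ax = b$, can be sketched as follows; $\mu$ is the model parameter whose hand-tuning motivates the paper, and the stepsize choice $\tau = 1/\|A\|_2^2$ is an assumption of this sketch.

    import numpy as np

    def linearized_bregman(A, b, mu, tau, n_iter=500):
        # baseline linearized Bregman iteration (not the accelerated variant)
        v = np.zeros(A.shape[1])
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            v += tau * A.T @ (b - A @ x)                      # gradient-type update
            x = np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # soft thresholding
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true
    x_hat = linearized_bregman(A, b, mu=2.0, tau=1.0 / np.linalg.norm(A, 2) ** 2)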
It is well known that the subgradient mapping associated with a lower semicontinuous function is maximal monotone if and only if the function is convex, but what characterization can be given for the case in which a subgradient mapping is only maximal monotone locally instead of globally? That question is answered here in terms of a condition more subtle than local convexity. Applications are made to the tilt stability of a local minimum and to the local execution of the proximal point algorithm in optimization.
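For the reader's convenience, tilt stability (in the sense of Poliquin and Rockafellar) admits the following standard formulation: $\bar{x}$ is a tilt-stable local minimizer of $f$ if, for some $\varepsilon > 0$, the mapping

    M_{\varepsilon} : v \ \mapsto\ \operatorname*{argmin}_{\|x - \bar{x}\| \le \varepsilon} \big\{ f(x) - \langle v, x \rangle \big\}

is single-valued and Lipschitz continuous on a neighborhood of $v = 0$ with $M_{\varepsilon}(0) = \bar{x}$.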