In a Hilbert space, we analyze the convergence properties of a general class of inertial forward-backward algorithms in the presence of perturbations, approximations, and errors. These splitting algorithms aim to solve structured convex minimization problems by rapid methods. The function to be minimized is the sum of a continuously differentiable convex function whose gradient is Lipschitz continuous and a proper lower semicontinuous convex function. The algorithms involve a general sequence of positive extrapolation coefficients, which reflects the inertial effect, and a sequence in the Hilbert space that accounts for the presence of perturbations. We obtain convergence rates for the values and convergence of the iterates under conditions involving the extrapolation and perturbation sequences jointly. This extends the recent work of Attouch and Cabot, which was devoted to the unperturbed case. Next, we consider the introduction into the algorithms of a Tikhonov regularization term with vanishing coefficient. In this case, when the regularization coefficient does not tend to zero too rapidly, we obtain strong ergodic convergence of the iterates to the minimum-norm solution. Taking a general sequence of extrapolation coefficients makes it possible to cover a wide range of accelerated methods; in this way, we show the robustness of these algorithms in a unified manner.
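The scheme described here combines three ingredients: inertial extrapolation $\alpha_k(x_k - x_{k-1})$, an additive perturbation $e_k$, and a vanishing Tikhonov coefficient $\varepsilon_k$. The following NumPy sketch illustrates one plausible arrangement of these ingredients on a lasso instance; the function and parameter names (`perturbed_tikhonov_ifb`, `eps`, `errs`) are ours, and the exact placement of the perturbation and regularization terms inside the step is an assumption, not the authors' pseudocode.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def perturbed_tikhonov_ifb(grad_Phi, prox_Psi, x0, s, alphas, eps, errs, n_iter):
    """Inertial forward-backward with extrapolation alpha_k, a Tikhonov term
    eps_k * y_k folded into the forward step, and an additive error e_k.
    A sketch under our own placement assumptions."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(n_iter):
        y = x + alphas[k] * (x - x_prev)            # inertial extrapolation
        g = grad_Phi(y) + eps[k] * y + errs[k]      # gradient + Tikhonov + perturbation
        x_prev, x = x, prox_Psi(y - s * g, s)       # backward (proximal) step
    return x

# Lasso instance: Phi(x) = 0.5 * ||A x - b||^2, Psi(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of grad Phi
n = 500
x_out = perturbed_tikhonov_ifb(
    grad_Phi=lambda x: A.T @ (A @ x - b),
    prox_Psi=lambda v, s: soft_threshold(v, s * lam),
    x0=np.zeros(100), s=1.0 / L,
    alphas=[k / (k + 3.0) for k in range(n)],       # a standard Nesterov-type choice
    eps=[1.0 / (k + 1.0) ** 0.5 for k in range(n)], # slowly vanishing, per the abstract
    errs=[np.zeros(100)] * n,                       # perturbations (zero here)
    n_iter=n,
)
```

The choice $\varepsilon_k \sim k^{-1/2}$ above is one example of a coefficient that "does not tend too rapidly to zero", the regime in which the abstract reports strong ergodic convergence to the minimum-norm solution.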
In a Hilbert space $H$, given a general sequence $(\alpha_k)$ of nonnegative numbers, we analyze the convergence properties of the inertial forward-backward algorithm (IFB): $y_k = x_k + \alpha_k (x_k - x_{k-1})$, $x_{k+1} = \operatorname{prox}_{s\Psi}\big(y_k - s \nabla \Phi(y_k)\big)$, where $\Psi : H \to \mathbb{R} \cup \{+\infty\}$ is a proper lower-semicontinuous convex function and $\Phi : H \to \mathbb{R}$ is a differentiable convex function whose gradient is Lipschitz continuous. Various options for the sequence $(\alpha_k)$ are considered in the literature. Among them, the Nesterov choice leads to the FISTA algorithm and accelerates the convergence of the values from $O(1/k)$ to $O(1/k^2)$. Several variants are used to guarantee the convergence of the iterates or to improve the rate of convergence of the values. For the design of fast optimization methods, the tuning of the sequence $(\alpha_k)$ is a subtle issue, which we treat here in full generality. We show that the convergence rate of the algorithm can be obtained simply by analyzing the sequence of positive real numbers $(\alpha_k)$. In addition to the case $\alpha_k = 1 - \alpha/k$ with $\alpha \ge 3$, our results apply equally well to $\alpha_k = 1 - \alpha/k^r$ with exponent $0 < r < 1$, and to Polyak's heavy ball method. Thus, we unify most of the existing results based on Nesterov's accelerated gradient method. In the process, we improve some of them and discover new ones.
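Since the abstract's central message is that the behavior of (IFB) is governed by the choice of $(\alpha_k)$, a sketch that makes the coefficient sequence interchangeable may help. The iteration below transcribes the displayed (IFB) step; the generator names and the clamping at zero (to keep $\alpha_k$ nonnegative for small $k$) are our own conventions.

```python
import numpy as np
from itertools import count, islice

def ifb(grad_Phi, prox_sPsi, x0, s, alpha_seq, n_iter):
    """(IFB): y_k = x_k + alpha_k (x_k - x_{k-1});
    x_{k+1} = prox_{s Psi}(y_k - s * grad Phi(y_k))."""
    x_prev, x = x0.copy(), x0.copy()
    for a in islice(alpha_seq, n_iter):
        y = x + a * (x - x_prev)
        x_prev, x = x, prox_sPsi(y - s * grad_Phi(y))
    return x

def alpha_nesterov():
    """FISTA / Nesterov coefficients: alpha_k = (t_k - 1) / t_{k+1}."""
    t = 1.0
    while True:
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        yield (t - 1.0) / t_next
        t = t_next

def alpha_power(alpha=3.0, r=1.0):
    """alpha_k = 1 - alpha / k^r; r = 1 recovers the alpha >= 3 regime,
    0 < r < 1 the sub-polynomial regime (clamped at 0 for small k)."""
    return (max(0.0, 1.0 - alpha / k**r) for k in count(1))

def alpha_heavy_ball(a=0.8):
    """Constant momentum, as in Polyak's heavy ball method."""
    return (a for _ in count())

# e.g. ifb(grad, prox, x0, s, alpha_power(3.0, 0.5), 500),
# where prox(v) computes prox_{s Psi}(v) with the step s baked in.
```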
One of the most popular approaches for the minimization of a convex functional given by the sum of a differentiable term and a nondifferentiable one is the forward-backward method with extrapolation. The main reason making this method appealing for a wide range of applications is that it achieves an $O(1/k^2)$ convergence rate in the objective function values, which is optimal for a first-order method. Recent contributions on this topic concern the convergence of the iterates to a minimizer and the possibility of adopting a variable metric in the proximal step. Moreover, it has also been proved that the objective function convergence rate is actually $o(1/k^2)$. However, these results are obtained under the assumption that the minimization subproblem involved in the backward step is computed exactly, which is clearly unrealistic in a variety of relevant applications. In this paper, we analyze the convergence properties when both a variable metric and inexact computation of the backward step are allowed. To do this, we adopt a suitable inexactness criterion and devise implementable conditions on both the accuracy of the inexact backward-step computation and the variable metric selection, so that the $o(1/k^2)$ rate and the convergence of the iterates are preserved. The effectiveness of the proposed approach is also validated by numerical experiments showing the effects of combining inexactness with variable metric techniques.
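To make the outer-loop structure concrete, here is a minimal sketch of variable-metric FISTA with an inexactly computed backward step. The tolerance schedule `eps_k = C / k**q` is a placeholder (the paper's implementable conditions specify the admissible decay and metric selection), and the "inexact" prox below is the exact diagonal-metric soft-thresholding deliberately corrupted by an error of norm at most `eps`, to *simulate* inexactness rather than reproduce the paper's inexactness criterion.

```python
import numpy as np

def vm_fista_inexact(grad, inexact_prox, metric, x0, n_iter, C=1.0, q=2.0):
    """Variable-metric FISTA with an inexact backward step.  The schedules
    for eps_k and D_k are placeholders; the paper gives the implementable
    conditions on both that preserve the o(1/k^2) rate."""
    x_prev, x, t = x0.copy(), x0.copy(), 1.0
    for k in range(1, n_iter + 1):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation
        D = metric(k)                                 # diagonal metric D_k (assume D >= L)
        eps_k = C / k**q                              # accuracy of the backward step
        x_prev = x
        x = inexact_prox(y - grad(y) / D, D, eps_k)   # approximate metric prox
        t = t_next
    return x

def make_inexact_l1_prox(lam, rng=np.random.default_rng(1)):
    """Exact metric prox of lam*||.||_1 for a diagonal metric D
    (threshold lam / D_i per coordinate), plus a deliberate error of
    norm <= eps to simulate inexact computation."""
    def inexact_prox(v, D, eps):
        exact = np.sign(v) * np.maximum(np.abs(v) - lam / D, 0.0)
        e = rng.standard_normal(v.shape)
        return exact + eps * e / (np.linalg.norm(e) + 1e-12)
    return inexact_prox
```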
In a Hilbert space $H$, we study the convergence properties of a class of relaxed inertial forward-backward algorithms. They aim to solve structured monotone inclusions of the form $0 \in Ax + Bx$, where $A : H \to 2^H$ is a maximally monotone operator and $B : H \to H$ is a cocoercive operator. We extend to this class of problems the acceleration techniques initially introduced by Nesterov, then developed by Beck and Teboulle in the case of structured convex minimization (FISTA). As an important element of our approach, we develop an inertial and parametric version of the Krasnoselskii-Mann theorem, where joint adjustment of the inertia and relaxation parameters plays a central role. This study comes as a natural extension of the techniques introduced by the authors for the study of relaxed inertial proximal algorithms. An illustration is given by the inertial Nash equilibration of a game combining non-cooperative and cooperative aspects.
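A standard relaxed-inertial template for the inclusion $0 \in Ax + Bx$ alternates an inertial extrapolation with a relaxed forward-backward operator built from the resolvent $J_{\lambda A} = (I + \lambda A)^{-1}$. The sketch below follows this template with fixed parameters for illustration; the admissible joint tuning of $(\alpha_k, \rho_k)$ is precisely what the paper's inertial Krasnoselskii-Mann result governs, and the example problem is our own.

```python
import numpy as np

def relaxed_inertial_fb(J_lamA, B, x0, lam, alphas, rhos, n_iter):
    """y_k = x_k + alpha_k (x_k - x_{k-1});
    x_{k+1} = (1 - rho_k) y_k + rho_k * J_{lam A}(y_k - lam * B(y_k))."""
    x_prev, x = x0.copy(), x0.copy()
    for k in range(n_iter):
        y = x + alphas[k] * (x - x_prev)      # inertial extrapolation
        z = J_lamA(y - lam * B(y))            # forward-backward operator
        x_prev, x = x, (1.0 - rhos[k]) * y + rhos[k] * z
    return x

# Example: A = normal cone of the box [0,1]^n, so J_{lam A} is the projection;
# B(x) = M x - q with M = S^T S symmetric PSD, hence cocoercive with
# modulus 1 / lambda_max(M).
rng = np.random.default_rng(0)
S = rng.standard_normal((30, 20))
M, q = S.T @ S, rng.standard_normal(20)
lam = 1.0 / np.linalg.norm(M, 2)              # within the cocoercivity range
x = relaxed_inertial_fb(
    J_lamA=lambda v: np.clip(v, 0.0, 1.0),
    B=lambda v: M @ v - q,
    x0=np.zeros(20), lam=lam,
    alphas=[0.3] * 400, rhos=[0.9] * 400, n_iter=400,
)
```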
The " fast iterative shrinkage-thresholding algorithm " (FISTA) is one of the most famous first order optimization schemes, and the stepsize, which plays an important role in theoretical analysis and numeric...
详细信息
The " fast iterative shrinkage-thresholding algorithm " (FISTA) is one of the most famous first order optimization schemes, and the stepsize, which plays an important role in theoretical analysis and numerical experiment, is always determined by a constant relating to the Lipschitz constant or by a backtracking strategy. In this paper, we design a new adaptive non-monotone stepsize strategy (NMS), which allows the stepsize to increase monotonically after finite iterations. It is remarkable that NMS can be successfully implemented without knowing the Lipschitz constant or without backtracking. And the additional cost of NMS is less than the cost of some existing backtracking strategies. For using NMS to the original FISTA (FISTA_NMS) and the modified FISTA (MFISTA_NMS), we show that the convergence results stay the same. Moreover, under the error bound condition, we show that FISTA_NMS achieves the rate of convergence to o (1/k(6)) and MFISTA_NMS enjoys the convergence rate related to the value of parameter of t(k), that is o( 1/k(2(a+1)));and the iterates generated by the above two algorithms are convergent. In addition, by taking advantage of the restart technique to accelerate the above two methods, we establish the linear convergences of the function values and iterates under the error bound condition. We conduct some numerical experiments to examine the effectiveness of the proposed algorithms.