Authors: Pakkaranang, Nuttapol; Kumam, Poom; Berinde, Vasile; Suleiman, Yusuf I.
Affiliations:
KMUTT, Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT Fixed Point Theory and Applications Research Group, Science Laboratory Building, 126 Pracha Uthit Rd, Bangkok 10140, Thailand
KMUTT, KMUTT Fixed Point Research Laboratory, Department of Mathematics, Faculty of Science, Room SCL 802 Fixed Point Laboratory, Science Laboratory Building, Bangkok 10140, Thailand
China Medical University, China Medical University Hospital, Department of Medical Research, Taichung 40402, Taiwan
Technical University of Cluj-Napoca, Department of Mathematics and Computer Science, North University Center at Baia Mare, Victoriei 76, Baia Mare 430072, Romania
Kano University of Science and Technology, Department of Mathematics, PMB 3042, Kano, Nigeria
In this paper, we construct a novel algorithm for solving non-smooth composite optimization problems. Using an inertial technique, we propose a modified proximal gradient algorithm with outer perturbations and, under standard mild conditions, obtain strong convergence results for finding a solution of the composite optimization problem. Based on bounded perturbation resilience, we combine the proposed algorithm with the superiorization method and apply it to the image recovery problem. Finally, we provide numerical experiments that show the efficiency of the proposed algorithm and compare it with previously known algorithms in signal recovery.
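As a concrete picture of the inertial step, here is a minimal sketch of an inertial proximal gradient iteration on an ℓ1-regularized least-squares problem. The inertia schedule, step size, and random data are illustrative assumptions; the paper's outer perturbations and superiorization steps are not modeled.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_proximal_gradient(A, b, lam, n_iter=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with an inertial step.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        theta = (k - 1.0) / (k + 2.0)      # illustrative inertia schedule
        y = x + theta * (x - x_prev)       # inertial extrapolation
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x_prev, x = x, soft_threshold(y - step * grad, step * lam)
    return x

# Tiny usage example on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
print(inertial_proximal_gradient(A, b, lam=0.1)[:8])
```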
In this paper, we consider the proximal gradient algorithm with extrapolation for solving a class of convex nonsmooth minimization problems. We show that for a large class of extrapolation parameters, including the extrapolation parameters chosen in FISTA (Beck and Teboulle in SIAM J Imaging Sci 2:183-202, 2009), the successive changes of iterates go to 0. Moreover, based on the Łojasiewicz inequality, we establish the global convergence of the iterates generated by the proximal gradient algorithm with extrapolation under an additional assumption on the extrapolation coefficients. The assumption is general enough to allow the threshold of the extrapolation coefficients to be 1. In particular, we prove that the length of the iterate sequence is finite. Finally, we perform numerical experiments on least squares problems with ℓ1 regularization to illustrate our theoretical results.
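A minimal sketch of the iteration under study, with the FISTA extrapolation coefficients named in the abstract (problem data and iteration count are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pg_with_fista_extrapolation(A, b, lam, n_iter=500):
    # beta_k = (t_k - 1)/t_{k+1} with t_{k+1} = (1 + sqrt(1 + 4 t_k^2))/2,
    # the FISTA choice; these coefficients increase toward the threshold 1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        beta = (t - 1.0) / t_next           # extrapolation coefficient
        y = x + beta * (x - x_prev)         # extrapolated point
        x_prev = x
        x = soft_threshold(y - step * (A.T @ (A @ y - b)), step * lam)
        t = t_next
    return x
```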
We develop and analyze an asynchronous algorithm for distributed convex optimization when the objective can be written as a sum of smooth functions, local to each worker, and a nonsmooth function. Unlike many existing methods, our distributed algorithm is adjustable to various levels of communication cost, delays, machines' computational power, and functions' smoothness. A unique feature is that the step sizes depend neither on communication delays nor on the number of machines, which is highly desirable for scalability. We prove that the algorithm converges linearly in the strongly convex case and provide convergence guarantees for the non-strongly convex case. The obtained rates are the same as those of the vanilla proximal gradient algorithm over an introduced epoch sequence that subsumes the delays of the system. We provide numerical results on large-scale machine learning problems to demonstrate the merits of the proposed method.
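The paper's algorithm runs with genuine asynchrony across machines; purely as intuition for delay tolerance, the toy sequential simulation below feeds the update stale per-worker gradients. The delay model and all names are assumptions, not the authors' protocol.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def delayed_proximal_gradient(As, bs, lam, n_iter=300, max_delay=3, seed=0):
    # Each "worker" i holds (As[i], bs[i]) and reports a gradient
    # evaluated at a randomly stale past iterate.
    rng = np.random.default_rng(seed)
    d = As[0].shape[1]
    step = 1.0 / sum(np.linalg.norm(A, 2) ** 2 for A in As)
    x = np.zeros(d)
    history = [x]                            # past iterates, for staleness
    for _ in range(n_iter):
        grad = np.zeros(d)
        for A, b in zip(As, bs):
            delay = rng.integers(0, min(max_delay, len(history)))
            stale = history[-1 - delay]      # stale copy read by this worker
            grad += A.T @ (A @ stale - b)
        x = soft_threshold(x - step * grad, step * lam)
        history.append(x)
    return x
```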
In this paper, we investigate attractive properties of the proximal gradient algorithm with inertia. Notably, we show that using alternated inertia yields monotonically decreasing functional values, which contrasts with the usual accelerated proximal gradient methods. We also provide convergence rates for the algorithm with alternated inertia, based on local geometric properties of the objective function. The results are put into perspective by discussions of several extensions (strongly convex case, non-convex case, and alternated extrapolation) and illustrations on common regularized optimization problems.
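A minimal sketch of alternated inertia on an ℓ1-regularized least-squares problem; the inertia coefficient 0.5 is an illustrative choice, not the paper's rule:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alternated_inertia_pg(A, b, lam, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    for k in range(n_iter):
        # Inertia is applied only on every other iteration; the plain
        # iterations are what restore monotone decrease of the objective.
        y = x + 0.5 * (x - x_prev) if k % 2 == 1 else x
        x_prev = x
        x = soft_threshold(y - step * (A.T @ (A @ y - b)), step * lam)
    return x
```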
We consider a control proximal gradient algorithm (CPGA) for solving the minimization of a nonsmooth convex function. In particular, the convex function is an ℓ1-regularized least squares function derived from the discretized ℓ1-norm models arising in image processing. This proximal gradient algorithm with a control step size is attractive due to its simplicity. However, the method is also known to converge quite slowly. In this paper, we present a fast control proximal gradient algorithm obtained by adding a Nesterov step. It preserves the computational simplicity of proximal gradient methods while achieving a convergence rate of O(1/k^2). It is proven to be better than the existing methods both theoretically and practically. Initial numerical results on image deblurring demonstrate the efficiency of the fast CPGA compared to existing fast iterative shrinkage-thresholding algorithms.
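For reference, the added Nesterov step can be written in the standard form below; this is a generic template, and the paper's specific control of the step size s is not reproduced:

```latex
x_k = \operatorname{prox}_{s\,g}\!\big(y_k - s\,\nabla f(y_k)\big), \qquad
t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^2}}{2}, \qquad
y_{k+1} = x_k + \frac{t_k - 1}{t_{k+1}}\,(x_k - x_{k-1}),
```

which yields the objective gap F(x_k) - F(x*) = O(1/k^2) for F = f + g.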
A diagonal finite element-projection-proximal gradient (DFE-P-PG) algorithm and its accelerated forms for the elliptic optimal control problem with L1-control cost are proposed in this paper. First, the elliptic optimal control problem is discretized by the diagonal finite element method (DFEM). The discrete problem is then optimized by the projection-proximal gradient (P-PG) algorithm. The global convergence of the DFE-P-PG algorithm is proven. In addition, two accelerated methods are used to enhance the convergence rate of the DFE-P-PG algorithm. Numerical examples are performed to illustrate the efficiency and effectiveness of the DFE-P-PG algorithm.
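A generic sketch of the projection-proximal gradient step on a box-constrained ℓ1-regularized least-squares model; the diagonal finite element discretization of the elliptic problem is omitted, so this only illustrates the P-PG inner solver under assumed data (A, b, box [lo, hi]):

```python
import numpy as np

def projection_proximal_gradient(A, b, lam, lo, hi, n_iter=500):
    # One simple P-PG update: gradient step on the smooth tracking term,
    # soft-thresholding for the l1 cost, then projection onto the box.
    # Coordinatewise, clipping the soft-thresholded point is the exact
    # prox of lam*|u| plus the interval indicator.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ u - b)                       # smooth tracking term
        v = u - step * grad
        u = np.clip(np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0),
                    lo, hi)                            # prox + projection
    return u
```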
In this paper, we propose an inexact version of the proximal gradient algorithm with extrapolation for solving a class of nonconvex nonsmooth optimization problems. Specifically, the subproblem in the proximal gradient algorithm with extrapolation is allowed to be solved inexactly under a certain relative error criterion, in the sense that the criterion can be updated adaptively in each iteration. Under the assumption that an auxiliary function satisfies the Kurdyka-Łojasiewicz (KL) inequality, we prove that the iterative sequence generated by the inexact proximal gradient algorithm with extrapolation converges to a stationary point of the considered problem. Furthermore, the convergence rate of the proposed algorithm can be established when the KL exponent is known. Finally, we illustrate the advantage of the method by applying it to a nonconvex optimization problem.
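To illustrate what "inexact" means here, the sketch below evaluates the proximal subproblem approximately with an inner dual loop; the stopping test is a simple stand-in for the paper's adaptive relative error criterion, and a convex total-variation model is used to keep the example short (the paper targets nonconvex problems):

```python
import numpy as np

def inexact_prox_tv(w, lam, inner_tol=1e-3, max_inner=50):
    # Approximate prox of lam*||D x||_1 (1-D total variation) by
    # projected gradient on the dual variable p, |p_i| <= lam.
    n = w.size
    D = np.diff(np.eye(n), axis=0)           # forward-difference operator
    p = np.zeros(n - 1)                      # dual variable
    tau = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(max_inner):
        p_new = np.clip(p + tau * (D @ (w - D.T @ p)), -lam, lam)
        done = np.linalg.norm(p_new - p) <= inner_tol * (1.0 + np.linalg.norm(p))
        p = p_new
        if done:                             # stand-in relative error test
            break
    return w - D.T @ p                       # approximate primal prox point

def inexact_pg_extrapolation(A, b, lam, n_iter=200):
    # Outer loop: proximal gradient with extrapolation, inexact prox.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)   # extrapolation
        x_prev = x
        x = inexact_prox_tv(y - step * (A.T @ (A @ y - b)), step * lam)
    return x
```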
In this paper, we consider an accelerated method for solving nonconvex and nonsmooth minimization problems. We propose a Bregman proximal gradient algorithm with extrapolation (BPGe). This algorithm extends and accelerates the Bregman proximal gradient algorithm (BPG), which circumvents the restrictive global Lipschitz gradient continuity assumption needed in proximal gradient algorithms (PG). The BPGe algorithm is more general than the recently introduced proximal gradient algorithm with extrapolation (PGe) and, owing to the extrapolation step, converges faster than the BPG algorithm. Analyzing the convergence, we prove that, with properly chosen parameters, any limit point of the sequence generated by BPGe is a stationary point of the problem. Moreover, assuming the Kurdyka-Łojasiewicz property, we prove that all sequences generated by BPGe converge to a stationary point. Finally, to illustrate the potential of the new method, we apply BPGe to two important practical problems that arise in many fundamental applications and that do not satisfy the global Lipschitz gradient continuity assumption: Poisson linear inverse problems and quadratic inverse problems. In these tests, the accelerated BPGe algorithm shows faster convergence than BPG.
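A sketch of a BPGe-style update for a quadratic inverse problem, using the commonly used quartic kernel h(x) = 0.25||x||^4 + 0.5||x||^2 and g = 0; the step size, extrapolation coefficient, and initialization are illustrative assumptions, not the paper's parameter rules:

```python
import numpy as np

def grad_h(x):
    # Gradient of the Bregman kernel h(x) = 0.25*||x||^4 + 0.5*||x||^2.
    return (x @ x + 1.0) * x

def inv_grad_h(v):
    # Solve (||x||^2 + 1) x = v: x = (t/r) v where t solves t^3 + t = r = ||v||.
    r = np.linalg.norm(v)
    if r == 0.0:
        return v
    t = r                                    # f(r) = r^3 > 0, safe Newton start
    for _ in range(50):                      # Newton on t^3 + t - r = 0
        t -= (t ** 3 + t - r) / (3.0 * t ** 2 + 1.0)
    return (t / r) * v

def bpge(A, b, step=0.9, beta=0.3, n_iter=300):
    # BPGe sketch for f(x) = 0.25 * sum_i ((a_i^T x)^2 - b_i)^2, g = 0:
    # the Bregman step solves grad_h(x_new) = grad_h(y) - step*grad_f(y)
    # in closed form via the scalar cubic above.
    x_prev = x = np.full(A.shape[1], 0.5)
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)          # extrapolation step
        r = A @ y
        grad_f = A.T @ ((r ** 2 - b) * r)    # gradient of f at y
        x_prev = x
        x = inv_grad_h(grad_h(y) - step * grad_f)
    return x
```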
In this paper, we propose an over-relaxed proximal scaled gradient algorithm for solving the non-smooth composite optimization problem in Hilbert space. We prove the strong convergence and the bounded perturbation resilience of this algorithm under some mild conditions. We also present the superiorized version of the algorithm. We apply the algorithms to a concrete problem to illustrate their performance and validity. The numerical examples show that the over-relaxed algorithm achieves fewer iteration steps and shorter running times than the under-relaxed algorithm, and that the superiorized algorithm requires the fewest iteration steps among the three algorithms.
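A minimal sketch of the relaxation mechanism, with the paper's scaling operator taken as the identity for simplicity; rho > 1 gives the over-relaxed regime and rho < 1 the under-relaxed one:

```python
import numpy as np

def relaxed_proximal_gradient(A, b, lam, rho=1.5, n_iter=500):
    # Relaxed proximal gradient: the proximal point z is blended with
    # the current iterate through the relaxation factor rho.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - step * (A.T @ (A @ x - b))
        z = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # prox step
        x = x + rho * (z - x)                # relaxation
    return x
```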
Many machine learning problems can be formulated as minimizing the sum of a function and a non-smooth regularization term. Proximal stochastic gradient methods are popular for solving such composite optimization problems. We propose a minibatch proximal stochastic recursive gradient algorithm, SRG-DBB, which incorporates the diagonal Barzilai–Borwein (DBB) stepsize strategy to capture the local geometry of the problem. The linear convergence and complexity of SRG-DBB are analyzed for strongly convex functions. We further establish the linear convergence of SRG-DBB under a non-strong convexity condition. Moreover, it is proved that SRG-DBB converges sublinearly in the convex case. Numerical experiments on standard data sets indicate that the performance of SRG-DBB is better than or comparable to the proximal stochastic recursive gradient algorithm with best-tuned scalar stepsizes or BB stepsizes. Furthermore, SRG-DBB is superior to some advanced mini-batch proximal stochastic gradient methods.
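A loose sketch of the SRG-DBB idea under stated assumptions: a SARAH-style recursive gradient estimator within each epoch, plus a coordinatewise Barzilai-Borwein stepsize computed from epoch-to-epoch differences. The paper's exact estimator, safeguards, and tuning are not reproduced.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def srg_dbb_sketch(A, b, lam, epochs=20, batch=8, seed=0):
    # Minimize (1/2n)*||Ax - b||^2 + lam*||x||_1 with minibatches.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    eta = np.full(d, 1.0 / np.linalg.norm(A, 2) ** 2)  # diagonal stepsizes
    snap_x = snap_g = None
    for _ in range(epochs):
        g_full = A.T @ (A @ x - b) / n       # full gradient at the snapshot
        if snap_x is not None:
            s, ygrad = x - snap_x, g_full - snap_g
            # Coordinatewise BB ratio |s_i| / |y_i|, crudely safeguarded.
            eta = np.clip(np.abs(s) / (np.abs(ygrad) + 1e-12), 1e-6, 1e2)
        snap_x, snap_g = x.copy(), g_full.copy()
        v, x_prev = g_full, x
        x = soft_threshold(x - eta * v, eta * lam)
        for _ in range(n // batch):
            idx = rng.integers(0, n, batch)
            Ab = A[idx]
            # Recursive estimator: v += grad_B(x) - grad_B(x_prev).
            v = v + Ab.T @ (Ab @ (x - x_prev)) / batch
            x_prev = x
            x = soft_threshold(x - eta * v, eta * lam)
    return x
```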