Convex-concave backtracking for inertial Bregman proximal gradient algorithms in non-convex optimization

Authors: Mukkamala, Mahesh Chandra; Ochs, Peter; Pock, Thomas; Sabach, Shoham

Affiliations: Faculty of Mathematics and Computer Science, Saarland University, Saarbrücken 66123, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Graz 8010, Austria; Faculty of Industrial Engineering, Technion, Haifa 3200003, Israel

Published in: arXiv

Year: 2019

Subject: Extrapolation

Abstract: Backtracking line-search is an old yet powerful strategy for finding better step sizes to be used in proximal gradient algorithms. The main principle is to locally find a simple convex upper bound of the objective function, which in turn controls the step size that is used. In the case of inertial proximal gradient algorithms, the situation becomes much more difficult and usually leads to very restrictive rules on the extrapolation parameter. In this paper, we show that the extrapolation parameter can also be controlled by locally finding a simple concave lower bound of the objective function. This gives rise to a double convex-concave backtracking procedure which allows for an adaptive choice of both the step size and the extrapolation parameter. We apply this procedure to the class of inertial Bregman proximal gradient methods, and prove that any sequence generated by these algorithms converges globally to a critical point of the function at hand. Numerical experiments on a number of challenging non-convex problems in image processing and machine learning were conducted and show the power of combining the inertial step with the double backtracking strategy in achieving improved performances.

Mathematics Subject Classification Codes: 90C25, 26B25, 49M27, 52A41, 65K05

Copyright © 2019, The Authors. All rights reserved.
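The abstract describes the double backtracking procedure only at a high level. As an illustration, the following is a minimal Python sketch of the convex-concave idea, specialized to the Euclidean kernel h = ½‖·‖² so that D_h(x, y) = ½‖x − y‖². The function names, the shrink/growth factors, and the toy robust-regression problem are assumptions made for this sketch; it does not reproduce the paper's exact CoCaIn BPG conditions or its convergence safeguards.

import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1 (the non-smooth part g = lam * ||.||_1).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def double_backtracking_step(f, grad_f, prox_g, x_k, x_km1,
                             L_up, L_lo, gamma0=0.9, max_tries=50):
    # One inertial proximal gradient step with convex-concave backtracking
    # (hedged Euclidean sketch, not the authors' exact conditions).
    # First backtrack the extrapolation parameter gamma against a local
    # *concave lower bound* of f at the inertial point y, then backtrack
    # the step size 1/L_up against a local *convex upper bound*
    # (the classical descent-lemma test).
    gamma = gamma0
    y = x_k
    for _ in range(max_tries):
        y = x_k + gamma * (x_k - x_km1)
        d = x_k - y
        # Concave lower-bound test:
        # f(x_k) >= f(y) + <grad f(y), x_k - y> - (L_lo / 2) ||x_k - y||^2
        if f(x_k) >= f(y) + grad_f(y) @ d - 0.5 * L_lo * (d @ d):
            break
        gamma *= 0.5  # inertia too aggressive: shrink and retry
    for _ in range(max_tries):
        tau = 1.0 / L_up
        x_new = prox_g(y - tau * grad_f(y), tau)
        d = x_new - y
        # Convex upper-bound test (descent lemma):
        # f(x_new) <= f(y) + <grad f(y), x_new - y> + (L_up / 2) ||x_new - y||^2
        if f(x_new) <= f(y) + grad_f(y) @ d + 0.5 * L_up * (d @ d):
            break
        L_up *= 2.0  # upper bound violated: halve the step and retry
    return x_new, L_up

# Toy non-convex problem (illustrative only): robust regression
# f(x) = sum_i log(1 + (Ax - b)_i^2), plus g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
lam = 0.1

f = lambda x: np.sum(np.log1p((A @ x - b) ** 2))
grad_f = lambda x: A.T @ (2.0 * (A @ x - b) / (1.0 + (A @ x - b) ** 2))
prox_g = lambda v, t: soft_threshold(v, t * lam)

x_prev = np.zeros(10)
x = np.zeros(10)
L_up = 1.0
for k in range(200):
    x_new, L_up = double_backtracking_step(f, grad_f, prox_g, x, x_prev,
                                           L_up, L_lo=1.0)
    x_prev, x = x, x_new
print("final objective:", f(x) + lam * np.abs(x).sum())

In this sketch, the concave lower-bound test limits how far the inertial point y may move from x_k (controlling the extrapolation parameter gamma), while the standard descent-lemma upper-bound test controls the step size 1/L_up. The paper couples the two tests through the Bregman distance of a general kernel h rather than the Euclidean distance used here.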
