CONVERGENCE RATE OF GRADIENT DESCENT METHOD FOR MULTI-OBJECTIVE OPTIMIZATION


Authors: Liaoyuan Zeng, Yuhong Dai, Yakui Huang

Affiliations: Institute of Computational Mathematics and Scientific/Engineering Computing, State Key Laboratory of Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; School of Science, Hebei University of Technology, Tianjin 300401, China

Published in: Journal of Computational Mathematics

Year/Volume/Issue: 2019, Vol. 37, No. 5

Pages: 689-703


Subject classification: 07 [Science]; 0701 [Science - Mathematics]; 070101 [Science - Pure Mathematics]

Funding: The authors are grateful for the valuable comments and suggestions of two anonymous referees. The authors would also like to thank Dr. Hui Zhang of the National University of Defense Technology for his many suggestions and comments on an early draft of this paper. This research is supported by the Chinese Natural Science Foundation (Nos. 11631013, 11971372) and the National 973 Program of China (No. 2015CB856002).

Keywords: Multi-objective optimization; Gradient descent; Convergence rate

Abstract: The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOP). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the convergence rate can be strengthened to linear when the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOP enjoys the same convergence properties as in scalar optimization.
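To illustrate the kind of method the abstract describes, here is a minimal sketch (not the authors' code) of gradient descent with a constant stepsize for two smooth objectives. It uses the standard multiobjective steepest descent direction: the negative of the minimal-norm element of the convex hull of the two gradients, which for two objectives has a closed form. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def min_norm_direction(g1, g2):
    """Minimal-norm element of conv{g1, g2}; its negative is the
    multiobjective steepest descent direction for two objectives."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return g1.copy()
    # Minimize ||lam*g1 + (1-lam)*g2||^2 over lam in [0, 1].
    lam = float(np.clip(-(g2 @ diff) / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

def multiobjective_gd(grad1, grad2, x0, stepsize=0.1, tol=1e-8, max_iter=10000):
    """Gradient descent with a constant stepsize for two objectives."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = min_norm_direction(grad1(x), grad2(x))
        if np.linalg.norm(d) < tol:  # approximate Pareto criticality
            break
        x = x - stepsize * d
    return x

# Example: f1(x) = ||x||^2 / 2 and f2(x) = ||x - e1||^2 / 2 (strongly convex).
# The Pareto set is the segment between the minimizers (0, 0) and (1, 0).
x_star = multiobjective_gd(lambda x: x,
                           lambda x: x - np.array([1.0, 0.0]),
                           x0=[0.5, 1.0])
```

In this strongly convex example the iterates approach the Pareto set linearly, consistent with the rate the paper proves; the stopping criterion checks the norm of the common descent direction, which vanishes exactly at Pareto critical points.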
