Author affiliations: Institute of Computational Mathematics and Scientific/Engineering Computing, State Key Laboratory of Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; School of Science, Hebei University of Technology, Tianjin 300401, China
Publication: Journal of Computational Mathematics (计算数学(英文))
Year/Volume/Issue: 2019, Vol. 37, No. 5
Pages: 689-703
Core indexing:
Subject classification: 07 [Science] 0701 [Science - Mathematics] 070101 [Science - Pure Mathematics]
Funding and acknowledgements: The authors are grateful for the valuable comments and suggestions of two anonymous referees. The authors would also like to thank Dr. Hui Zhang at the National University of Defense Technology for his many suggestions and comments on an early draft of this paper. This research is supported by the Chinese Natural Science Foundation (Nos. 11631013, 11971372) and the National 973 Program of China (No. 2015CB856002).
Keywords: Multi-objective optimization; Gradient descent; Convergence rate
Abstract: The convergence rate of the gradient descent method is considered for unconstrained multi-objective optimization problems (MOP). Under standard assumptions, we prove that the gradient descent method with constant stepsizes converges sublinearly when the objective functions are convex, and that the rate can be strengthened to linear if the objective functions are strongly convex. The results are also extended to the gradient descent method with the Armijo line search. Hence, the gradient descent method for MOP enjoys the same convergence properties as in scalar optimization.
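The method the abstract refers to can be illustrated with a minimal sketch. In multi-objective gradient descent (in the spirit of the Fliege-Svaiter steepest descent scheme), each step moves along the minimum-norm element of the convex hull of the objectives' gradients, which vanishes exactly at Pareto-critical points. The sketch below is our own illustration, not code from the paper: it handles only two objectives, for which the min-norm subproblem has a closed form, and all function names (`min_norm_combination`, `mo_gradient_descent`) and parameter choices are assumptions made for the example.

```python
import numpy as np

def min_norm_combination(g1, g2):
    # Two-objective case: minimize ||lam*g1 + (1-lam)*g2||^2 over lam in [0, 1].
    # The minimizer has a closed form obtained by projecting onto [0, 1].
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:          # gradients coincide; any convex combination works
        return g1
    lam = np.clip((g2 @ diff) / denom, 0.0, 1.0)
    return lam * g1 + (1.0 - lam) * g2

def mo_gradient_descent(grads, x0, step=0.1, iters=500, tol=1e-8):
    # Constant-stepsize descent along the min-norm common descent direction.
    # ||d|| serves as a Pareto-criticality measure: d = 0 iff x is Pareto-critical.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = min_norm_combination(grads[0](x), grads[1](x))
        if np.linalg.norm(d) < tol:
            break
        x = x - step * d
    return x

# Toy problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2 (both strongly convex);
# the Pareto set is the segment [a, b], so iterates should approach it.
a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
grads = [lambda x: 2.0 * (x - a), lambda x: 2.0 * (x - b)]
x_star = mo_gradient_descent(grads, x0=[3.0, -2.0], step=0.1)
```

With strongly convex quadratics and a stepsize below the inverse gradient-Lipschitz constant, the iterates converge geometrically to a Pareto-critical point, consistent with the linear rate claimed in the abstract.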