Global Convergence Guarantees of (A)GIST for a Family of Nonconvex Sparse Learning Problems

Authors: Zhang, Hengmin; Qian, Feng; Shang, Fanhua; Du, Wenli; Qian, Jianjun; Yang, Jian

Affiliations: East China University of Science and Technology, Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, School of Information Science and Engineering, Shanghai 200237, China; Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai 200092, China; Xidian University, Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xi'an 710071, China; Peng Cheng Laboratory, Shenzhen 518066, China; Nanjing University of Science and Technology, PCA Laboratory, Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Nanjing 210094, China; Nanjing University of Science and Technology, Jiangsu Key Laboratory of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing 210094, China

Published in: IEEE Transactions on Cybernetics (IEEE Trans. Cybern.)

Year/Volume/Issue: 2022, Vol. 52, No. 5

Pages: 3276-3288

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0701 [Science - Mathematics]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Keywords: Optimization

Abstract: In recent years, studies have shown that generalized iterative shrinkage thresholdings (GISTs) have become commonly used first-order optimization algorithms for sparse learning problems. Nonconvex relaxations of the $\ell_0$-norm usually achieve better performance than convex ones (e.g., the $\ell_1$-norm), since the former yield a nearly unbiased solver. To increase computational efficiency, this work further provides an accelerated GIST variant, AGIST, built on an extrapolation-based acceleration technique, which reduces the number of iterations needed to solve a family of nonconvex sparse learning problems. Besides, we present the algorithmic analysis, including both local and global convergence guarantees as well as other intermediate results, for GIST and AGIST, jointly denoted (A)GIST, by virtue of the Kurdyka-Łojasiewicz (KL) property and some milder assumptions. Numerical experiments on both synthetic data and real-world databases demonstrate that the convergence behavior of the objective function accords with the theoretical properties and that nonconvex sparse learning methods can achieve superior performance over convex ones. © 2013 IEEE.
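The record contains no pseudocode, so the following is a minimal NumPy sketch of the proximal-gradient structure the abstract describes, with the $\ell_0$ penalty (whose proximal map is exact hard thresholding) standing in for the paper's general nonconvex family. The least-squares loss, the FISTA-style extrapolation weights, and the name `agist` are illustrative assumptions, not the paper's exact (A)GIST scheme.

```python
import numpy as np

def agist(A, b, lam=0.1, step=None, accel=True, n_iter=500, tol=1e-8):
    """Sketch of (A)GIST for min_x 0.5*||Ax - b||^2 + lam*||x||_0.

    Hard thresholding is the exact proximal map of the l0 penalty and
    stands in here for the general nonconvex shrinkage family studied
    in the paper. accel=True adds the extrapolation step that turns
    GIST into AGIST (FISTA-style weights used as an assumption).
    """
    m, n = A.shape
    if step is None:
        # 1/L, with L the Lipschitz constant of the gradient of the loss
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x_prev = np.zeros(n)
    t = 1.0
    for k in range(n_iter):
        if accel:
            # extrapolated point y_k = x_k + omega_k * (x_k - x_{k-1})
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x + ((t - 1.0) / t_next) * (x - x_prev)
            t = t_next
        else:
            y = x
        grad = A.T @ (A @ y - b)          # gradient of the smooth loss at y
        z = y - step * grad               # forward (gradient) step
        # backward step: hard thresholding, the prox of step*lam*||.||_0
        x_prev, x = x, z * (np.abs(z) > np.sqrt(2.0 * lam * step))
        if np.linalg.norm(x - x_prev) <= tol * max(1.0, np.linalg.norm(x)):
            break
    return x
```

Setting accel=False recovers the plain GIST-style iteration; the only structural change AGIST introduces is evaluating the gradient at the extrapolated point y rather than at the previous iterate.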
