
Improving the convergence of the backpropagation algorithm using learning rate adaptation methods


Authors: Magoulas, GD; Vrahatis, MN; Androulakis, GS

Affiliations: Univ Athens, Dept Informat, GR-15771 Athens, Greece; Univ Patras, Artificial Intelligence Res Ctr, GR-26110 Patras, Greece; Univ Patras, Dept Math, GR-26110 Patras, Greece

Published in: NEURAL COMPUTATION

Year/Volume/Issue: 1999, Vol. 11, No. 7

Pages: 1769-1796


Subject classification: 1001 [Medicine - Basic Medical Science (medicine or science degree conferrable)]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (engineering or science degree conferrable)]

Keywords: backpropagation algorithms; additional error; learning rate; convergence

Abstract: This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms against several popular training methods.
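The abstract's key idea, adapting the learning rate from a local Lipschitz estimate computed only from successive iterates, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the quadratic error function, the step rule eta = 1/(2*Lambda), and all names are assumptions for the sketch, and the Goldstein/Armijo line-search safeguard is omitted.

```python
import numpy as np

A = np.array([[3.0, 0.5],
              [0.5, 1.0]])  # toy quadratic error E(w) = 0.5 * w^T A w

def grad(w):
    # gradient of the toy error function
    return A @ w

def train(w0, steps=50, eta0=0.1):
    """Gradient descent with a learning rate adapted from a local
    Lipschitz estimate of the gradient (no extra function or
    gradient evaluations beyond what plain backprop already needs)."""
    w_prev = np.array(w0, dtype=float)
    g_prev = grad(w_prev)
    w = w_prev - eta0 * g_prev  # one bootstrap step at a fixed rate
    for _ in range(steps):
        g = grad(w)
        # local Lipschitz estimate: Lambda ~ ||g_k - g_{k-1}|| / ||w_k - w_{k-1}||
        lam = np.linalg.norm(g - g_prev) / (np.linalg.norm(w - w_prev) + 1e-12)
        eta = 1.0 / (2.0 * lam)  # assumed adaptation rule for this sketch
        w_prev, g_prev = w, g
        w = w - eta * g
    return w

w = train([1.0, 1.0])
```

Because the estimate reuses the gradients that backpropagation computes anyway, the adaptation adds essentially no cost per iteration, which is the efficiency point the abstract emphasizes.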
