Incremental Online Learning of Randomized Neural Network with Forward Regularization

Authors: Wang, Junda; Hu, Minghui; Li, Ning; Al-Ali, Abdulaziz; Suganthan, Ponnuthurai Nagaratnam

Affiliations: The Key Laboratory of System Control and Information Processing, Ministry of Education of China, China; The Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China; The Shanghai Engineering Research Center of Intelligent Control and Management, Shanghai 200240, China; The School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore; The KINDI Computing Research Center, College of Engineering, Qatar University, Qatar

Published in: arXiv

Year: 2024

Abstract: Online learning of deep neural networks suffers from challenges such as hysteretic non-incremental updating, growing memory usage, retrospective retraining on past data, and catastrophic forgetting. To alleviate these drawbacks and achieve progressive immediate decision-making, we propose a novel Incremental Online Learning (IOL) process for Randomized Neural Networks (Randomized NN), a framework that enables continuous improvement of Randomized NN performance in restrictive online scenarios. Within this framework, we further introduce IOL with ridge regularization (-R) and IOL with forward regularization (-F). -R generates stepwise incremental updates without retrospective retraining and avoids catastrophic forgetting. We then replace -R with -F, since it enhances precognition learning via semi-supervision and achieves better online regret against offline global experts during IOL. We derive the IOL algorithms for Randomized NN with -R/-F on non-stationary batch streams, featuring recursive weight updates and variable learning rates. Additionally, we conduct a detailed analysis, theoretically derive relative cumulative regret bounds for the Randomized NN learners with -R/-F in IOL under adversarial assumptions using a novel methodology, and present several corollaries, which show the superiority of -F in IOL with respect to both online learning acceleration and regret bounds. Finally, our proposed methods are rigorously examined on regression and classification tasks across diverse datasets, which distinctly validates the efficacy of the IOL frameworks for Randomized NN and the advantages of forward regularization. Copyright © 2024, The Authors. All rights reserved.
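
The abstract describes recursive weight updates for a ridge-regularized Randomized NN on a data stream. Below is a minimal, hypothetical Python sketch of that general idea, not the paper's actual algorithm: a fixed random hidden layer whose output weights are updated by recursive least squares (Sherman-Morrison), standing in for -R, plus a forward-style prediction in the Vovk-Azoury-Warmuth sense as an assumed reading of -F. The tanh feature map, the regularization constant, all names below, and the VAW interpretation of -F are illustrative assumptions; the paper's variable learning rates and semi-supervised mechanism are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, lam = 4, 32, 1.0  # illustrative sizes and ridge constant

# Fixed random hidden layer: in Randomized NN, only the output weights learn.
W_h = rng.standard_normal((d_in, d_hidden))
b_h = rng.standard_normal(d_hidden)

def hidden(x):
    # Random feature map h(x); tanh is an illustrative choice.
    return np.tanh(x @ W_h + b_h)

# Recursive state: P approximates (H^T H + lam*I)^{-1}; beta are output weights.
P = np.eye(d_hidden) / lam
beta = np.zeros(d_hidden)

def iol_r_step(x, y):
    """One stepwise -R-style update: past samples are summarized by (P, beta),
    so there is no retrospective retraining and memory stays constant."""
    global P, beta
    h = hidden(x)
    Ph = P @ h
    k = Ph / (1.0 + h @ Ph)           # Sherman-Morrison gain (a variable rate)
    beta = beta + k * (y - h @ beta)  # correct the online prediction error
    P = P - np.outer(k, Ph)           # rank-1 inverse-covariance update

def iol_f_predict(x):
    """Forward-style prediction in the Vovk-Azoury-Warmuth sense (an assumed
    stand-in for -F): the current, still-unlabeled feature is folded into the
    regularizer before predicting, shrinking the ridge output by
    1 / (1 + h^T P h)."""
    h = hidden(x)
    return (h @ beta) / (1.0 + h @ P @ h)

# Simulate a non-stationary stream: regress y = x . w_t with drifting w_t.
w_true = rng.standard_normal(d_in)
for t in range(500):
    w_true += 0.01 * rng.standard_normal(d_in)  # slow concept drift
    x = rng.standard_normal(d_in)
    y_hat = iol_f_predict(x)                    # predict before the label
    iol_r_step(x, x @ w_true)                   # then update incrementally
```

For the batch streams mentioned in the abstract, the rank-1 update above would presumably be replaced by a block (Woodbury-style) update over each incoming mini-batch; the per-sample form is shown only for brevity.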
