Author affiliations: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an 710049, China; College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Publication: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Year/Volume/Issue: 2021, Vol. 32, No. 7
Pages: 3083-3097
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Funding: National Natural Science Foundation of China [91648208, 61976175]; National Natural Science Foundation-Shenzhen Joint Research Program [U1613219]; Key Project of Natural Science Basic Research Plan in Shaanxi Province of China [2019JZ-05]
Keywords: Learning systems; Robustness; Standards; Optimization; Training; Perturbation methods; Mean square error methods; Broad learning system (BLS); incremental learning algorithms; maximum correntropy criterion (MCC); regression and classification
Abstract: As an effective and efficient discriminative learning method, the broad learning system (BLS) has received increasing attention due to its outstanding performance on various regression and classification problems. However, the standard BLS is derived under the minimum mean square error (MMSE) criterion, which is not always a good choice because of its sensitivity to outliers. To enhance the robustness of BLS, we propose in this work to adopt the maximum correntropy criterion (MCC) to train the output weights, obtaining a correntropy-based BLS (C-BLS). Owing to the inherent superiorities of MCC, the proposed C-BLS is expected to achieve excellent robustness to outliers while maintaining the performance of the standard BLS in Gaussian or noise-free environments. In addition, three alternative incremental learning algorithms for C-BLS are developed, derived from a weighted regularized least-squares solution rather than the pseudoinverse formula. With these incremental learning algorithms, the system can be updated quickly when new samples arrive or the network needs to be expanded, without retraining from scratch. Experiments on various regression and classification data sets demonstrate the desirable performance of the new methods.
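The abstract states that the C-BLS output weights are trained under the MCC via a weighted regularized least-squares solution, but the record contains no equations. Below is a minimal, hypothetical sketch of the common half-quadratic fixed-point scheme for MCC regression, which alternates between computing correntropy-induced sample weights and solving a weighted ridge problem; the function name `mcc_output_weights`, its parameters, and the initialization are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def mcc_output_weights(H, Y, sigma=1.0, lam=1e-3, n_iter=20, tol=1e-6):
    """Hypothetical sketch: fit output weights W of a BLS-style model
    H @ W ~= Y under the maximum correntropy criterion (MCC), using a
    half-quadratic fixed-point iteration that reduces MCC training to
    a sequence of weighted ridge-regression solves.

    H     : (n_samples, n_features) expanded feature matrix
            (mapped features plus enhancement nodes, in BLS terms)
    Y     : (n_samples, n_outputs) target matrix
    sigma : Gaussian kernel bandwidth of the correntropy loss
    lam   : ridge regularization strength
    """
    n, d = H.shape
    # Initialize with the MMSE (ridge) solution, as in the standard BLS.
    W = np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ Y)
    for _ in range(n_iter):
        E = Y - H @ W  # residuals of the current solution
        # Correntropy-induced sample weights: large residuals (outliers)
        # are exponentially down-weighted.
        w = np.exp(-np.sum(E**2, axis=1) / (2.0 * sigma**2))
        Hw = H * w[:, None]  # row-weighted features, i.e., D @ H
        # Weighted regularized least squares: (H^T D H + lam I) W = H^T D Y
        W_new = np.linalg.solve(H.T @ Hw + lam * np.eye(d), Hw.T @ Y)
        if np.linalg.norm(W_new - W) < tol * np.linalg.norm(W):
            return W_new
        W = W_new
    return W

if __name__ == "__main__":
    # Tiny synthetic check: recover weights despite gross outliers.
    rng = np.random.default_rng(0)
    H = rng.standard_normal((200, 10))
    W_true = rng.standard_normal((10, 1))
    Y = H @ W_true + 0.01 * rng.standard_normal((200, 1))
    Y[:10] += 5.0  # inject outliers into the first 10 targets
    W = mcc_output_weights(H, Y, sigma=1.0)
    print("weight error:", np.linalg.norm(W - W_true))
```

In this scheme, a sample with residual e_i receives the weight exp(-||e_i||^2 / (2 sigma^2)), so outliers contribute almost nothing to the solve, while for small residuals the weights approach 1 and the iteration reduces to the standard MMSE/ridge solution used by the original BLS. This also suggests why the abstract's incremental updates can be built on the weighted regularized least-squares form rather than the pseudoinverse.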