Author affiliation: Nanyang Technological University, School of Electrical and Electronic Engineering, Singapore 639798, Singapore
Published in: IEEE Transactions on Neural Networks (IEEE Trans. Neural Netw.)
Year/Volume/Issue: 1998, Vol. 9, No. 1
Pages: 224-229
Keywords: activation functions; feedforward networks; hidden neurons; upper bounds
Abstract: It is well known that standard single-hidden-layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x_i, t_i) with zero error, and that the weights connecting the input neurons and the hidden neurons can be chosen almost arbitrarily. However, these results were obtained for the case in which the activation function of the hidden neurons is the signum function. This paper rigorously proves that standard SLFNs with at most N hidden neurons and with any bounded nonlinear activation function that has a limit at one infinity can learn N distinct samples (x_i, t_i) with zero error. The previous method of arbitrarily choosing weights is not feasible for any SLFN. The proof of our result is constructive and thus gives a method to directly find the weights of standard SLFNs with any such bounded nonlinear activation function, as opposed to the iterative training algorithms in the literature.
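The constructive idea in the abstract can be illustrated with a minimal sketch: with N hidden neurons and a bounded nonlinear activation (tanh is used here as one example that has limits at infinity), the N×N hidden-layer output matrix is generically invertible, so the output weights that interpolate all N samples exactly can be found by solving one linear system. This is an illustrative reconstruction, not the paper's exact weight-selection procedure; all names and the random-weight choice below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# N distinct samples (x_i, t_i): x_i in R^d, t_i scalar targets (toy data).
N, d = 10, 3
X = rng.standard_normal((N, d))
T = rng.standard_normal(N)

# Input weights and biases for N hidden neurons, chosen randomly here;
# tanh is a bounded nonlinear activation with limits at +/- infinity.
W = rng.standard_normal((d, N))
b = rng.standard_normal(N)

H = np.tanh(X @ W + b)        # N x N hidden-layer output matrix
beta = np.linalg.solve(H, T)  # output weights interpolating all N targets

pred = H @ beta
print(np.max(np.abs(pred - T)))  # zero training error up to round-off
```

With random continuous input weights and distinct samples, H is invertible with probability one, which is why a direct solve (rather than iterative training) suffices in this sketch.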