
Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions

Authors: Huang, GB; Babri, HA

Affiliation: Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore

Published in: IEEE TRANSACTIONS ON NEURAL NETWORKS (IEEE Trans. Neural Networks)

Year/Volume/Issue: 1998, Vol. 9, No. 1

Pages: 224-229


Keywords: activation functions; feedforward networks; hidden neurons; upper bounds

Abstract: It is well known that standard single-hidden-layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x_i, t_i) with zero error, and that the weights connecting the input neurons to the hidden neurons can be chosen almost arbitrarily. However, these results were obtained for the case where the activation function of the hidden neurons is the signum function. This paper rigorously proves that standard SLFNs with at most N hidden neurons and with any bounded nonlinear activation function that has a limit at one infinity can learn N distinct samples (x_i, t_i) with zero error. The previous method of choosing weights arbitrarily is not feasible for an arbitrary SLFN. The proof of our result is constructive and thus gives a method to directly find the weights of standard SLFNs with any such bounded nonlinear activation function, as opposed to the iterative training algorithms in the literature.
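The flavor of the abstract's claim can be illustrated with a minimal sketch (not the paper's actual construction): with N hidden tanh neurons, if the input weights and biases are chosen so that the N x N hidden-layer output matrix H is invertible (random choices suffice generically), then output weights that interpolate the N targets exactly are obtained by solving one linear system. The function names and the use of random input weights are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fit_slfn_exact(X, T, seed=0):
    """Fit an SLFN with exactly N hidden neurons to N distinct samples with
    zero training error, assuming the hidden-layer matrix H is invertible."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    W = rng.normal(size=(X.shape[1], N))   # input-to-hidden weights (random; an assumption)
    b = rng.normal(size=N)                 # hidden biases
    H = np.tanh(X @ W + b)                 # N x N hidden-layer output matrix
    beta = np.linalg.solve(H, T)           # output weights solving H @ beta = T exactly
    return W, b, beta

def slfn_predict(X, W, b, beta):
    """Forward pass of the single-hidden-layer network."""
    return np.tanh(X @ W + b) @ beta

# Interpolate 5 distinct random samples with 5 hidden neurons.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
T = rng.normal(size=5)
W, b, beta = fit_slfn_exact(X, T)
print(np.max(np.abs(slfn_predict(X, W, b, beta) - T)))  # near machine precision
```

Because beta is obtained by a direct linear solve rather than iterative training, the zero-error fit holds up to floating-point round-off whenever H is well conditioned.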
