Author Affiliation: Northeastern University, USA
Publication: NEURAL NETWORKS
Year/Volume/Issue: 1990, Vol. 3, No. 2
Pages: 191-201
Subject Classification: 1002 [Medicine - Clinical Medicine]; 1001 [Medicine - Basic Medicine (medical or science degree)]; 0812 [Engineering - Computer Science and Technology (engineering or science degree)]; 10 [Medicine]
Keywords: Connectionist; Neural network; Computational learning theory; PAC-learning; Valiant model; Pocket algorithm; Distributed method; BRD algorithm; Perceptron; Vapnik-Chervonenkis dimension; Decision lists; k-order distributed networks
Abstract: A connectionist learning algorithm, the bounded, randomized, distributed (BRD) algorithm, is presented and formally analyzed within the framework of computational learning theory. From a neural network viewpoint, this framework gives clear definitions to commonly used terms such as "generalization" and "scaling up", and addresses the following questions:
• What class of functions is being learned?
• How many training examples should be used?
• How many iterations are required?
• With what certainty can we be assured of learning a good model?
From a computational learning theory perspective, a new class of connectionist concepts is shown to be polynomially learnable using the BRD algorithm. Since a variant of the BRD algorithm is in current use for tasks such as pattern recognition, this makes it one of the few learning algorithms shown to be polynomial within the computational learning theory framework that is close to an "industrial strength" algorithm. The algorithm can fail for several reasons: (a) noisy inputs; (b) underestimation of the difficulty of the concept being learned (i.e., a larger concept class is required); or (c) bad luck. Whenever the algorithm fails, there are several "fallback" bounds available. Finally, the Appendix gives a learnable class of network functions that strictly enlarges a class of learnable functions, Rivest's k-decision lists.
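Note: the keyword list mentions the pocket algorithm, a well-known perceptron variant. The following Python sketch illustrates the classic pocket idea only, not the paper's BRD algorithm; the function and variable names here are illustrative assumptions. The pocket rule runs ordinary perceptron updates on randomly drawn training examples and keeps "in the pocket" the weight vector that has achieved the longest run of consecutive correct classifications.

```python
import numpy as np

def pocket_perceptron(X, y, n_iters=1000, seed=None):
    """Illustrative pocket-style perceptron (not the BRD algorithm).

    X: (n, d) array of inputs; y: length-n array of labels in {-1, +1}.
    Returns the pocket weight vector (last entry is the bias).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])   # append a constant bias feature
    w = np.zeros(d + 1)                    # current perceptron weights
    pocket_w = w.copy()                    # best-so-far weights ("the pocket")
    run, pocket_run = 0, 0                 # consecutive-correct counters

    for _ in range(n_iters):
        i = rng.integers(n)                # draw a random training example
        pred = 1 if Xb[i] @ w >= 0 else -1
        if pred == y[i]:
            run += 1
            if run > pocket_run:           # longer correct run: refresh pocket
                pocket_run = run
                pocket_w = w.copy()
        else:
            w = w + y[i] * Xb[i]           # standard perceptron correction
            run = 0                        # mistake resets the run counter
    return pocket_w
```

Unlike the plain perceptron, whose weights keep changing on non-separable data, the pocket copy tends to stabilize on a weight vector that classifies a large fraction of the training set correctly, which is why pocket-style training is often used for noisy pattern-recognition tasks.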