In this survey paper, the state of the art in optimal structure design of multilayer feedforward neural networks (MFNNs) for pattern recognition is reviewed. Special emphasis is laid on the scale-limited MFNN and on internal-representation-based and decision-boundary-based design methodologies. A comprehensive comparative study of the main characteristics of each method is presented. Also, future research directions are outlined.
This paper describes the cascade neural network design algorithm (CNNDA), a new algorithm for designing compact, two-hidden-layer artificial neural networks (ANNs). This algorithm determines an ANN's architecture together with its connection weights automatically. The design strategy used in the CNNDA was intended to optimize both the generalization ability and the training time of ANNs. In order to improve the generalization ability, the CNNDA uses a combination of constructive and pruning algorithms and bounded fan-ins of the hidden nodes. A new training approach, in which the input weights of a hidden node are temporarily frozen when its output does not change much over a few successive training cycles, was used in the CNNDA to reduce the computational cost and the training time. The CNNDA was tested on several benchmarks, including the cancer, diabetes and character-recognition problems. The experimental results show that the CNNDA can produce compact ANNs with good generalization ability and short training time in comparison with other algorithms. (C) 2001 Elsevier Science Ltd. All rights reserved.
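The weight-freezing heuristic described in this abstract can be sketched in a few lines. Everything below is an illustrative assumption, not the CNNDA's actual training rule: the geometrically decaying "gradient step", the ReLU-like node output, and the window/tolerance settings are all stand-ins chosen so the sketch runs end to end.

```python
import random

def train_with_freezing(epochs=50, n_hidden=4, window=5, tol=1e-3):
    """Toy sketch of CNNDA-style freezing: a hidden node whose output
    changes by less than `tol` over `window` successive epochs has its
    input weights frozen (excluded from further updates)."""
    random.seed(0)
    weights = [random.uniform(-1.0, 1.0) for _ in range(n_hidden)]
    frozen = [False] * n_hidden
    history = [[] for _ in range(n_hidden)]
    for epoch in range(epochs):
        for j in range(n_hidden):
            if frozen[j]:
                continue  # frozen nodes are skipped, saving training cost
            # stand-in for a real gradient step; updates shrink over time
            weights[j] += (0.5 ** (epoch + 1)) * (1 if j % 2 else -1)
            history[j].append(max(0.0, weights[j]))  # node output (ReLU-like)
            recent = history[j][-window:]
            if len(recent) == window and max(recent) - min(recent) < tol:
                frozen[j] = True
    return frozen
```

Because the toy updates decay, every node's output eventually stabilizes and all nodes end up frozen; in a real run only converged nodes would be excluded while the rest continue training.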
ISBN (print): 0780348605
In this paper, the fully-connected higher-order neuron and the sparselized higher-order neuron are introduced, the mapping capabilities of fully-connected higher-order neural networks are investigated, and it is proved that an arbitrary Boolean function defined on {0,1}^N can be realized by fully-connected higher-order neural networks. Based on this, in order to simplify the network architecture, a pruning algorithm for eliminating redundant connection weights is also proposed, which can be applied to the implementation of the sparselized higher-order neural classifier and other networks. The simulation results show the effectiveness of the algorithm.
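The realizability result can be made concrete: the higher-order weights of any Boolean function on {0,1}^N are obtained from its multilinear (Möbius) expansion, and near-zero weights can then be pruned away. The sketch below is not the paper's algorithm; the Möbius construction and the pruning threshold are assumptions chosen for illustration.

```python
from itertools import combinations

def higher_order_weights(f, n):
    """Weights of the multilinear (higher-order) expansion of a Boolean
    function f: {0,1}^n -> {0,1}, via the Moebius transform, so that
    f(x) = sum over subsets S of w[S] * prod_{i in S} x_i."""
    w = {}
    for r in range(n + 1):
        for S in combinations(range(n), r):
            x = [1 if i in S else 0 for i in range(n)]
            val = f(x)
            for r2 in range(r):            # subtract all strict subsets
                for T in combinations(S, r2):
                    val -= w[T]
            w[S] = val
    return w

def prune(w, tol=1e-9):
    """Drop (near-)zero connection weights -- a toy stand-in for the
    redundancy-elimination step; the threshold is an assumption."""
    return {S: v for S, v in w.items() if abs(v) > tol}

xor = lambda x: x[0] ^ x[1]
sparse = prune(higher_order_weights(xor, 2))
```

For XOR this leaves exactly the two first-order terms and the second-order term x0*x1, reflecting how pruning exposes a sparselized higher-order classifier.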
In this paper, a learning scheme using a fuzzy controller to generate walking gaits is developed. The learning scheme uses a fuzzy controller combined with a linearized inverse biped model. The controller provides the control signals at each control time instant. The algorithm used to train the controller is "backpropagation through time." The linearized inverse biped model provides the error signals for backpropagation through the controller at control time instants. Given prespecified constraints such as the step length, crossing clearance, and walking speed, the control scheme can generate the gait that satisfies these constraints. Simulation results are reported for a five-link biped robot.
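Backpropagation through time, the training algorithm named in this abstract, can be sketched on a toy problem. The one-state linear "plant", the scalar feedback gain standing in for the fuzzy controller, and all constants below are illustrative assumptions, not the paper's biped dynamics.

```python
def bptt_gain(target=1.0, steps=10, lr=0.1, iters=200):
    """Minimal backpropagation-through-time sketch: tune a scalar
    feedback gain k so a toy linear state x reaches `target` after
    `steps` control instants, with loss L = 0.5*(x_T - target)^2."""
    k = 0.1
    for _ in range(iters):
        # forward pass through time: x_{t+1} = x_t + k*(target - x_t)
        xs = [0.0]
        for _ in range(steps):
            xs.append(xs[-1] + k * (target - xs[-1]))
        # backward pass through time: accumulate dL/dk
        grad_x = xs[-1] - target          # dL/dx_T
        grad_k = 0.0
        for t in range(steps - 1, -1, -1):
            grad_k += grad_x * (target - xs[t])   # direct effect of k at step t
            grad_x *= (1.0 - k)                   # propagate dL/dx back one step
        k -= lr * grad_k
    return k, xs[-1]
```

In the paper's setting the scalar chain rule above becomes a backward pass through the fuzzy controller and the linearized inverse biped model at each control instant.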
Author:
Thompson, S., BT Labs
Intelligent Business Systems Research Group, Ipswich IP5 7RE, Suffolk, England
Ensemble classifiers and algorithms for learning ensembles have recently received a great deal of attention in the machine learning literature (R.E. Schapire, Machine Learning 5 (2) (1990) 197-227; N. Cesa-Bianchi, Y. Freund, D. Haussler, D.P. Helmbold, R.E. Schapire, M.K. Warmuth, Proceedings of the 25th Annual ACM Symposium on the Theory of Computing, 1993, pp. 382-391; L. Breiman, Bias, Technical Report 460, Statistics Department, University of California, Berkeley, CA, 1996; J.R. Quinlan, Proceedings of the 14th International Conference on Machine Learning, Italy, 1997; Y. Freund, R.E. Schapire, Proceedings of the 13th International Conference on Machine Learning (ICML96), Bari, Italy, 1996, pp. 148-157; A.J.C. Sharkey, N.E. Sharkey, Combining diverse neural nets, The Knowledge Engineering Review 12 (3) (1997) 231-247). In particular, boosting has received a great deal of attention as a mechanism for discovering an ensemble of classifiers with a better generalisation characteristic than any single classifier derived using a particular technique. In this article, we examine and compare a number of techniques for pruning a classifier ensemble that is overfit on its training set, and find that a real-valued genetic algorithm (GA) is at least as good as the best heuristic search algorithm for choosing an ensemble weighting. (C) 1999 Elsevier Science B.V. All rights reserved.
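A real-valued GA for choosing an ensemble weighting can be sketched as follows. The population size, averaging crossover, Gaussian mutation, and accuracy fitness below are illustrative assumptions, not the settings used in the article.

```python
import random

def ga_ensemble_weights(preds, labels, pop=30, gens=40, seed=1):
    """Real-valued GA sketch for weighting an ensemble: `preds` holds
    one prediction vector (values in [0,1]) per member; fitness is the
    validation accuracy of the weighted average vote."""
    rng = random.Random(seed)
    m, n = len(preds), len(labels)

    def accuracy(w):
        hits = 0
        for i in range(n):
            score = sum(w[j] * preds[j][i] for j in range(m)) / (sum(w) or 1)
            hits += (score >= 0.5) == bool(labels[i])
        return hits / n

    popn = [[rng.random() for _ in range(m)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=accuracy, reverse=True)
        elite = popn[: pop // 2]                 # keep the fitter half
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)          # averaging crossover...
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            children.append([max(0.0, c) for c in child])  # ...weights kept >= 0
        popn = elite + children
    best = max(popn, key=accuracy)
    return best, accuracy(best)
```

On a toy validation set where one member is reliable and the others are noise, the GA should concentrate weight on the reliable member.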
This paper reports on the structure of a large-signal neural-network (NN) high electron-mobility transistor (HEMT) model as determined by a pruning technique and a genetic algorithm. The bias-dependent intrinsic elements of an HEMT's equivalent circuit are described by a generalized multilayered NN whose inputs are the gate-to-source bias (V-gs) and the drain-to-source bias (V-ds). Using C-gs data as an example, we began by experimentally examining some of the features of the multilayered NN model to obtain rules of thumb on choosing training parameters and other information for succeeding studies. We then developed and studied a novel pruning technique to optimize the C-gs NN model. Excessively large NN configurations can be reduced to an appropriate size by means of a weight decay based on the analysis of a synaptic connection's activity. Finally, we employed a genetic algorithm for the same purpose. By representing the configuration of a standard multilayered NN as a chromosome, the optimum configuration of a C-gs model was obtained through a simulated evolution process. For this approach, the configuration of an NN that simultaneously represents seven intrinsic elements (C-gs, R-i, ..., C-ds) of an equivalent circuit was also shown for comparison to previous work. We successfully obtained simplified NN models using both approaches. The advantages and disadvantages of these two approaches are discussed in the conclusion. To our knowledge, this is the first report to clarify the general process of building an NN device model.
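The activity-based weight decay can be illustrated with a toy sketch. The specific activity measure (mean |w_j * x_j| over the training inputs), the decay factor, and the removal threshold are assumptions for illustration, not the paper's definitions.

```python
def prune_by_activity(weights, inputs, decay=0.5, rounds=3, tol=1e-2):
    """Toy sketch of activity-based weight decay: a connection's activity
    is the mean |w_j * x_j| over the inputs; low-activity weights are
    repeatedly decayed and finally removed (set to zero)."""
    w = list(weights)
    n = len(inputs)
    for _ in range(rounds):
        for j in range(len(w)):
            activity = sum(abs(w[j] * x[j]) for x in inputs) / n
            if activity < tol:
                w[j] *= decay              # decay inactive connections
    return [wj if abs(wj) > tol else 0.0 for wj in w]
```

A connection that contributes little to any training example shrinks geometrically and is dropped, while active connections pass through unchanged.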
ISBN (print): 078034863X
There have been many studies of mathematical models of neural networks. However, the problem of determining their optimal structures always arises because of the lack of prior information. Apoptosis is the mechanism responsible for the physiological deletion of cells and appears to be intrinsically programmed. We propose a procedure named M-apoptosis for the structure clarification of the neurofuzzy GMDH model, whose partial descriptions are represented by radial basis function networks. The proposed method prunes unnecessary links and units from the larger network to identify, and moreover clarify, the network structure by minimizing the Minkowski norm of the derivatives of the partial descriptions. The method is validated on numerical examples of function approximation and the classification of Fisher's Iris data.
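The Minkowski norm of a partial description's derivatives, the quantity M-apoptosis minimizes, can be estimated numerically. The sketch below is an assumption-laden illustration only: central finite differences, p = 2, and the idea of ranking inputs by this norm as pruning candidates are choices made here, not details from the paper.

```python
def minkowski_sensitivity(f, points, n_inputs, p=2, h=1e-4):
    """Estimate, for each input j, the Minkowski (L_p) norm of the
    derivative of f with respect to x_j over the sample `points`,
    using central finite differences. Inputs with a small norm are
    natural candidates for pruning."""
    norms = []
    for j in range(n_inputs):
        total = 0.0
        for x in points:
            xp = list(x); xp[j] += h
            xm = list(x); xm[j] -= h
            d = (f(xp) - f(xm)) / (2 * h)    # df/dx_j at this point
            total += abs(d) ** p
        norms.append(total ** (1.0 / p))
    return norms
```

For a partial description that ignores an input, that input's norm is (numerically) zero, flagging the corresponding links for deletion.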
The present paper demonstrates algorithms for applying gene counting estimation of haplotype frequencies in very large genetic systems. A factor union representation of phenotypes is used which conveniently yields the...