The Pandemonium system of reflective MINOS agents solves problems by automatic dynamic modularization of the input space. The agents contain feedforward neural networks which adapt using the backpropagation algorithm. We demonstrate the performance of Pandemonium on several categories of problems: learning continuous functions with discontinuities, separating two spirals, learning the parity function, and optical character recognition. It is shown how strongly the advantages gained from a modularization technique depend on the nature of the problem. The superiority of the Pandemonium method over a single net on the first two test categories is contrasted with its limited advantages on the second two. In the first case the system converges more quickly with modularization and arrives at simpler solutions; in the second, the problem is not significantly simplified through flat decomposition of the input space, although convergence is still faster.
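A minimal sketch of the flat input-space decomposition this abstract describes: split the input range into fixed regions and train one small feedforward net per region. The region boundaries, network sizes, and the discontinuous test function are illustrative assumptions, not the paper's reflective-agent mechanism.

```python
# Flat modularization sketch: one small net per input region, trained
# independently; queries are routed to the module owning their region.
import numpy as np
from sklearn.neural_network import MLPRegressor

def target(x):
    # A continuous function with a discontinuity, one of the paper's test cases.
    return np.where(x < 0.5, np.sin(2 * np.pi * x), np.sin(2 * np.pi * x) + 2.0)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 1))
y = target(X[:, 0])

# Two modules, one per side of the discontinuity (assumed boundary).
boundaries = [(0.0, 0.5), (0.5, 1.0)]
modules = []
for lo, hi in boundaries:
    mask = (X[:, 0] >= lo) & (X[:, 0] < hi)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X[mask], y[mask])
    modules.append(((lo, hi), net))

def predict(x):
    for (lo, hi), net in modules:
        if lo <= x < hi:
            return net.predict([[x]])[0]
    return modules[-1][1].predict([[x]])[0]

print(predict(0.25), predict(0.75))
```

Each module only has to fit a smooth piece, which is why convergence is faster and the individual solutions simpler than a single net straddling the discontinuity.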
Using the tools of nonlinear system theory, we examine several common nonlinear variants of the LMS algorithm and derive a persistence-of-excitation criterion for local exponential stability. The condition is tight when the inputs are periodic, and a generic counterexample demonstrates (local) instability for a large class of such nonlinear versions of LMS, specifically those which utilize a nonlinear data function. The presence of a nonlinear error function is found to be relatively benign in that it does not affect the stability of the error system; rather, it defines the cost function the algorithm tends to minimize. Specific examples include the dead-zone modification, the cubed data nonlinearity, the cubed error nonlinearity, the signed regressor algorithm, and a single-layer version of the backpropagation algorithm.
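For concreteness, here are the update rules named in the abstract in their standard adaptive-filtering forms, w ← w + μ·g(e)·h(x), where g acts on the error and h on the regressor (data). The step size and dead-zone width are illustrative values.

```python
# Nonlinear LMS variants: error nonlinearities g(e) reshape the cost being
# minimized; data nonlinearities h(x) are the class the paper identifies as
# potentially destabilizing.
import numpy as np

MU = 0.01      # step size (assumed)
DELTA = 0.05   # dead-zone half-width (assumed)

def lms_step(w, x, d, g=lambda e: e, h=lambda x: x):
    e = d - w @ x                    # prediction error
    return w + MU * g(e) * h(x), e

# Error nonlinearities:
dead_zone   = lambda e: 0.0 if abs(e) < DELTA else e
cubed_error = lambda e: e ** 3

# Data nonlinearities:
signed_regressor = lambda x: np.sign(x)
cubed_data       = lambda x: x ** 3

rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3])
w = np.zeros(2)
for _ in range(5000):
    x = rng.normal(size=2)
    d = w_true @ x + 0.01 * rng.normal()
    w, _ = lms_step(w, x, d, g=cubed_error, h=lambda x: x)
print(w)
```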
Fuzzy neural network systems (FNNSs) can incorporate the merits of fuzzy logic systems (FLSs) and neural networks (NNs). This paper designs a type of non-singleton FNNS for forecasting problems. A hybrid of backpropagation (BP) and recursive least squares (RLS) algorithms is used to optimize the parameters of the antecedents, input measurements, and consequents simultaneously. Two computer simulation examples, based on data from the European Network on Intelligent Technology (EUNITE) and on West Texas Intermediate (WTI) crude oil prices, are used for testing. Convergence analysis shows that the hybrid-optimized FNNSs have very high generalization ability.
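A sketch of the RLS half of such a hybrid scheme: with the fuzzy antecedents momentarily frozen, the consequent parameters enter the output linearly, so they can be updated in closed form by RLS while BP handles the nonlinear antecedent and input-measurement parameters. The names and forgetting factor below are assumptions, not the paper's notation.

```python
# Standard exponentially weighted RLS for linear-in-parameters consequents.
import numpy as np

class RLS:
    def __init__(self, dim, lam=0.99, delta=1e3):
        self.theta = np.zeros(dim)    # consequent parameters
        self.P = delta * np.eye(dim)  # inverse correlation matrix
        self.lam = lam                # forgetting factor

    def update(self, phi, d):
        # phi: normalized firing strengths (regressor); d: desired output.
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        e = d - self.theta @ phi                            # a priori error
        self.theta = self.theta + k * e
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return e
```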
Uncertainty is inevitable in problem solving and decision making. One way to reduce it is by seeking the advice of an expert. When we use computers to reduce uncertainty, the computer itself can become an expert in a specific field through a variety of methods. One such method is machine learning, which involves using a computer algorithm to capture hidden knowledge from data. We compared the prediction performances of three human track experts with those of two machine learning techniques: a decision tree building algorithm (ID3) and a neural network learning algorithm (backpropagation). For our research, we investigated a problem solving scenario called game playing, which is unstructured, complex, and seldom studied. We considered several real life game playing scenarios and decided on greyhound racing, a complex domain that involves about 50 performance variables for eight competing dogs in a race. For every race, each dog's past history is complete and freely available to bettors. This is a large amount of historical information, some accurate and relevant, some noisy and irrelevant, that must be filtered, selected, and analyzed to assist in making a prediction. This large search space poses a challenge for both human experts and machine learning algorithms. The questions then become: can machine learning techniques reduce the uncertainty in a complex game playing scenario? Can these methods outperform human experts in prediction? Our research sought to answer these questions.
The paper principally describes the design of an artificial neural network using flexible sigmoid unit functions (FSUFs), referred to as flexible sigmoid function networks (FSFNs), to achieve both high flexibility and high learning ability in neural network structures trained from a given set of teaching patterns. An FSFN can generate an appropriate shape of the sigmoid function for each of the individual hidden- and output-layer units, in accordance with the specified inputs, desired output(s), and applied system. The paper proposes a learning method in which not only the connection weights but also the sigmoid functions may be adjusted. The learning algorithm is derived using the well-known backpropagation algorithm. To demonstrate the validity of the proposed method, we apply the FSFN to the construction of a self-tuning computed torque controller for a two-link manipulator. It is then shown that the controller based on the FSFN gives better control performance than one based on a traditional neural network.
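The core idea is to give each unit a sigmoid whose shape is itself a trainable parameter and to extend backpropagation with a partial derivative with respect to that parameter. The slope form f(x; a) = 1/(1 + exp(-a·x)) below is an illustrative choice, not necessarily the paper's FSUF.

```python
# One unit, one training pair; both the weight and the sigmoid shape
# parameter are updated by the chain rule:
#   dE/dw = dE/dy * df/dnet * x,   dE/da = dE/dy * df/da,
# where df/da = net * f * (1 - f) for this slope parameterization.
import numpy as np

def sigmoid(x, a):
    return 1.0 / (1.0 + np.exp(-a * x))

w, a, lr = 0.1, 1.0, 0.5
x_in, target = 0.8, 0.9
for _ in range(200):
    net = w * x_in
    y = sigmoid(net, a)
    err = y - target               # dE/dy for E = (y - t)^2 / 2
    dyda = net * y * (1 - y)       # df/da
    dydnet = a * y * (1 - y)       # df/dnet
    w -= lr * err * dydnet * x_in  # usual weight update
    a -= lr * err * dyda           # shape-parameter update
print(w, a, sigmoid(w * x_in, a))
```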
A fast new algorithm is presented for training multilayer perceptrons as an alternative to the backpropagation algorithm. The number of iterations required by the new algorithm to converge is less than 20% of what is required by the backpropagation algorithm. It is also less affected by the choice of initial weights and setup parameters. The algorithm uses a modified form of the backpropagation algorithm to minimize the mean-squared error between the desired and actual outputs with respect to the inputs to the nonlinearities. This is in contrast to the standard algorithm, which minimizes the mean-squared error with respect to the weights. Error signals, generated by the modified backpropagation algorithm, are used to estimate the inputs to the nonlinearities, which, along with the input vectors to the respective nodes, are used to produce an updated set of weights through a system of linear equations at each node. These systems of linear equations are solved using a Kalman filter at each layer.
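A simplified sketch of the two-stage idea: (1) use the backpropagated error signal to form a target for each node's pre-activation (the input to its nonlinearity), then (2) treat the weight update at that node as a linear estimation problem, w·x ≈ target, solved recursively with a Kalman/RLS-style filter instead of gradient descent. The target-estimation rule shown is an illustrative inverse-through-the-nonlinearity step, not the paper's exact formula.

```python
# Per-node update: linear least-squares fit of weights to a desired
# pre-activation, maintained recursively.
import numpy as np

def node_update(w, P, x, s, delta, fprime_s, lam=0.99):
    # s: current pre-activation; delta: backpropagated error at this node.
    s_target = s + delta / max(fprime_s, 1e-6)  # desired pre-activation (assumed form)
    k = P @ x / (lam + x @ P @ x)               # Kalman gain
    w = w + k * (s_target - w @ x)              # linear update toward the target
    P = (P - np.outer(k, x @ P)) / lam
    return w, P
```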
The derivation of a supervised training algorithm for a neural network implies the selection of a norm criterion which gives a suitable global measure of the particular distribution of errors. The present paper addresses this problem and proposes a correspondence between the error distribution at the output of a layered feedforward neural network and L_p norms. The generalized delta rule is investigated in order to verify how its structure can be modified to perform a minimization in the generic L_p norm. Finally, the particular case of the Chebyshev norm is developed and tested.
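The modification to the delta rule follows directly from differentiating the L_p cost: for E = (1/p)·Σ|d − y|^p, the output-layer error term replaces the usual (d − y) factor with |d − y|^(p−1)·sign(d − y); p = 2 recovers the standard rule. A short sketch, with illustrative variable names:

```python
import numpy as np

def lp_output_delta(d, y, net, fprime, p=2.0):
    # Output-layer delta under an L_p cost; fprime is the activation derivative.
    e = d - y
    return np.abs(e) ** (p - 1) * np.sign(e) * fprime(net)

# p = 1 gives a robust, sign-like rule; large p approaches the Chebyshev
# norm, where effectively only the worst-case output error drives learning.
```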
In this paper, we present an integrated approach to feature and architecture selection for single-hidden-layer feedforward neural networks trained via backpropagation. In our approach, we adopt a statistical model-building perspective in which we analyze neural networks within a nonlinear regression framework. The algorithm presented in this paper employs a likelihood-ratio test statistic as a model selection criterion. This criterion is used in a sequential procedure aimed at selecting the best neural network given an initial architecture as determined by heuristic rules. Application results for an object recognition problem demonstrate the selection algorithm's effectiveness in identifying reduced neural networks with equivalent prediction accuracy.
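A sketch of the likelihood-ratio comparison between a full network and a candidate reduced network, viewing both as nested nonlinear regression models with Gaussian errors: LR = n·ln(SSE_reduced / SSE_full), referred to a chi-square with degrees of freedom equal to the number of parameters removed. This is the standard nested-model statistic; the paper's sequential search procedure around it is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def lr_test(sse_full, sse_reduced, n, params_removed, alpha=0.05):
    # Returns the LR statistic and True if the reduced model is acceptable
    # (i.e., the fit does not degrade significantly).
    stat = n * np.log(sse_reduced / sse_full)
    critical = chi2.ppf(1 - alpha, df=params_removed)
    return stat, stat < critical
```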
At Thinking Machines, my colleagues and I have developed a hybrid system combining a neural network, a statistical module, and a memory-based reasoner, each of which makes its own prediction. A combiner then blends these results to produce the final predictions. This hybrid system improves the ability to determine how amino acid sequences fold into 3D protein structures, predicting secondary structures with 66.4% accuracy. Both the neural network and the combiner are multilayer perceptrons trained with the standard backpropagation algorithm; this article focuses on the other two components, and on how we trained the hybrid system and used it for prediction. I also discuss how future work in AI and other sciences might meet the challenge of the protein folding problem.
The backpropagation algorithm converges very slowly for two-class problems in which most of the exemplars belong to one dominant class. Our analysis shows that this occurs because the computed net error gradient vector is dominated by the bigger class so much that the net error for the exemplars in the smaller class increases significantly in the initial iterations. The subsequent rate of convergence of the net error is very low. We present a modified technique for calculating a direction in weight space which decreases the error for each class. Using this algorithm, we have been able to accelerate the rate of learning for two-class classification problems by an order of magnitude.
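A sketch of the class-balanced descent idea: compute the error gradient separately over each class's exemplars, normalize so the dominant class cannot swamp the minority class, and step along the sum. Normalizing both per-class gradients to unit length is one simple realization; the paper's exact combination rule may differ.

```python
import numpy as np

def balanced_direction(grad_class_a, grad_class_b, eps=1e-12):
    # Unit-normalize each per-class gradient so neither dominates the step.
    ga = grad_class_a / (np.linalg.norm(grad_class_a) + eps)
    gb = grad_class_b / (np.linalg.norm(grad_class_b) + eps)
    return ga + gb  # decreases both errors when the gradients are not opposed

# Usage inside a training loop (grad_fn assumed to return dE/dw over a subset):
# d = balanced_direction(grad_fn(w, X_major, y_major), grad_fn(w, X_minor, y_minor))
# w -= lr * d
```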