This work addresses the problem of automatically determining the optimal size of a neural network required to solve a specific class of problems. We propose a general self-organized neural network based on adaptive signal processing techniques. Specifically, the solution combines multi-model partitioning theory with a bank of neural units that use the localized extended Kalman filter as their training algorithm. The problem is thus reduced to selecting the correct model from a set of 'candidate' models. The output of such a system is proved to satisfy the universal approximation theorem. Three different implementations are presented that outperform existing techniques. The new algorithm is data driven, and the resulting networks are recurrent and adaptive, in the sense that they successfully track changes in the model structure in real time. Furthermore, the network can be implemented in a parallel environment, and a VLSI implementation is feasible. (C) 2001 Elsevier Science B.V. All rights reserved.
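The selection mechanism described above — a bank of filters, each matched to one candidate model, with a Bayesian posterior over the bank deciding which model is correct — can be sketched as follows. This is a minimal illustration only, using linear AR candidates of different orders in place of the paper's localized EKF-trained neural units; the function name `mmp_select` and the noise parameters `q` and `r` are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def mmp_select(y, orders, q=1e-4, r=1.0):
    """Multi-model partitioning over candidate AR orders.

    Each candidate order m gets its own Kalman filter that treats the
    AR coefficients as a random-walk state; the posterior probability
    of each model is updated from the Gaussian likelihood of that
    filter's innovation, and the most probable model is selected.
    """
    n_models = len(orders)
    post = np.full(n_models, 1.0 / n_models)      # uniform prior over models
    theta = [np.zeros(m) for m in orders]         # per-model coefficient state
    P = [np.eye(m) for m in orders]               # per-model covariance
    for k in range(max(orders), len(y)):
        lik = np.empty(n_models)
        for i, m in enumerate(orders):
            h = y[k - m:k][::-1]                  # regressor: last m outputs
            P[i] += q * np.eye(m)                 # random-walk prediction step
            s = h @ P[i] @ h + r                  # innovation variance
            e = y[k] - h @ theta[i]               # innovation
            K = P[i] @ h / s                      # Kalman gain
            theta[i] = theta[i] + K * e
            P[i] = P[i] - np.outer(K, h @ P[i])
            lik[i] = np.exp(-0.5 * e**2 / s) / np.sqrt(2 * np.pi * s)
        post = post * lik
        post /= post.sum()                        # MMP posterior recursion
    return orders[int(np.argmax(post))], post

# toy data from a stable AR(2) process
rng = np.random.default_rng(0)
y = np.zeros(400)
for k in range(2, 400):
    y[k] = 0.6 * y[k - 1] - 0.3 * y[k - 2] + rng.normal(scale=0.5)

best, post = mmp_select(y, orders=[1, 2, 3])
```

Because the posteriors are recomputed at every sample, a change in the underlying model structure shifts probability mass to a different filter in the bank, which is what gives the scheme its real-time tracking behaviour.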
This paper addresses the Nonlinear AutoRegressive (NAR) identification problem in connection with the choice of the time-varying model structure and the computation of the system coefficients. We introduce an intelligent method based on reformulating the problem in standard state-space form and then implementing a bank of Extended Kalman filters, each fitting a different nonlinear model. The problem is thus reduced to selecting the true model, using the well-known Lainiotis multi-model partitioning (MMP) theory, for general (not necessarily Gaussian) data pdf's. Simulations illustrate that the proposed method selects the correct nonlinear model, successfully tracks changes in the model structure, and identifies the model parameters in a sufficiently small number of iterations, in real time.
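The state-space reformulation underlying each filter in the bank can be sketched for a single candidate model: the unknown NAR coefficients are cast as a random-walk state, so coefficient identification becomes standard filtering. The sketch below assumes an invented scalar NAR model, y_k = a·tanh(b·y_{k-1}) + e_k, chosen only because it is genuinely nonlinear in the parameter b and so needs an EKF linearization; the model, the function name `ekf_nar_fit`, and the tuning constants are assumptions, not the paper's examples.

```python
import numpy as np

def ekf_nar_fit(y, q=1e-5, r=0.05, theta0=(0.5, 0.5)):
    """EKF estimation of (a, b) in the NAR model y_k = a*tanh(b*y_{k-1}) + e_k.

    The coefficient vector is treated as a random-walk state, so the
    nonlinear identification problem becomes state-space filtering with
    a linearized (Jacobian) measurement update.
    """
    theta = np.array(theta0, dtype=float)
    P = np.eye(2)
    for k in range(1, len(y)):
        a, b = theta
        u = y[k - 1]
        pred = a * np.tanh(b * u)
        # Jacobian of the measurement with respect to (a, b)
        H = np.array([np.tanh(b * u), a * u / np.cosh(b * u) ** 2])
        P += q * np.eye(2)                # random-walk prediction step
        s = H @ P @ H + r                 # innovation variance
        K = P @ H / s                     # Kalman gain
        theta = theta + K * (y[k] - pred)
        P = P - np.outer(K, H @ P)
    return theta

# toy data from the assumed NAR model with a = 0.9, b = 0.7
rng = np.random.default_rng(1)
y = np.zeros(2000)
for k in range(1, 2000):
    y[k] = 0.9 * np.tanh(0.7 * y[k - 1]) + rng.normal(scale=0.2)

a_hat, b_hat = ekf_nar_fit(y)
```

In the paper's scheme, one such EKF runs for every candidate nonlinear structure, and the MMP posterior recursion decides among them.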
Neural networks are massively parallel processing systems that require expensive, and usually unavailable, hardware in order to be realized. Fortunately, the development of effective and accessible software makes their simulation easy. Various neural network implementation tools exist on the market, but they are tied to the specific learning algorithm used and can simulate only fixed-size networks. In this work, we present object-oriented techniques that have been used to define types of neuron and network objects, which can be used to realize, in a localized approach, fast and powerful learning algorithms that combine results from optimal filtering and multi-model partitioning theory. One can thus build and implement intelligent learning algorithms that handle both the training and the on-line adjustment of the network size. Furthermore, the design methodology used results in a system modeled as a collection of concurrent executable objects, which eases parallel implementation. The whole design yields a general-purpose toolbox characterized by maintainability, reusability, and increased modularity. The provided features are demonstrated through several practical applications.
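The object-oriented idea described above — each neuron object carries its own localized filter state, and a network is simply a collection of such objects, so growing the network on-line means appending one more object — can be sketched as follows. The class names `Neuron` and `Network` and all tuning constants are hypothetical; this is a minimal Python sketch of the design style, not the paper's toolbox.

```python
import numpy as np

class Neuron:
    """A neural unit that trains its own weights with a localized
    Kalman-style (EKF) update; all filter state is kept per neuron."""

    def __init__(self, n_in, q=1e-4, r=0.1):
        self.w = np.zeros(n_in)
        self.P = np.eye(n_in)
        self.q, self.r = q, r

    def output(self, x):
        return np.tanh(self.w @ x)

    def train(self, x, target):
        y = self.output(x)
        H = (1.0 - y ** 2) * x            # d tanh(w.x)/dw, the local Jacobian
        self.P += self.q * np.eye(len(x))
        s = H @ self.P @ H + self.r
        K = self.P @ H / s
        self.w = self.w + K * (target - y)
        self.P = self.P - np.outer(K, H @ self.P)
        return target - y                 # innovation (training error)

class Network:
    """A network is a plain collection of neuron objects; adjusting the
    network size on-line is just adding or dropping objects."""

    def __init__(self, n_in):
        self.n_in = n_in
        self.neurons = []

    def grow(self):
        self.neurons.append(Neuron(self.n_in))

    def output(self, x):
        return sum(n.output(x) for n in self.neurons)

# usage: grow one unit and train it on a realizable target
rng = np.random.default_rng(2)
net = Network(2)
net.grow()
errs = []
for _ in range(500):
    x = rng.normal(size=2)
    t = np.tanh(0.8 * x[0] - 0.5 * x[1])
    errs.append(abs(net.neurons[0].train(x, t)))
```

Because each object is self-contained, the neurons can in principle be updated concurrently, which is the property the paper exploits for parallel implementation.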