In this paper, an adaptive bound reduced-form genetic algorithm (ABRGA) is proposed to tune the control points of B-spline neural networks. It is developed not only to search for the optimal control points but also to adaptively tune their bounds, thereby enlarging the search space of the control points. To improve the search speed of the reduced-form genetic algorithm (RGA), the ABRGA is derived so that better bounds on the control points are determined in parallel with the search for the optimal control points. It is shown that better efficiency is obtained when the bounds of the control points are adjusted properly for RGA-based B-spline neural networks.
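As an illustration of the bound-adaptation idea only (this is a hypothetical sketch, not the authors' exact ABRGA; the operators, rates, and toy objective are all assumptions), a plain real-coded GA can re-centre each gene's search interval on the best individual and shrink it a little every generation:

```python
import random

def adaptive_bound_ga(fitness, dim, lo0, hi0, pop=30, gens=120, seed=1):
    # Sketch: real-coded GA whose per-gene bounds are re-centred on the
    # best individual and shrunk 5% each generation (NOT the paper's ABRGA).
    rng = random.Random(seed)
    lo, hi = [lo0] * dim, [hi0] * dim
    P = [[rng.uniform(lo0, hi0) for _ in range(dim)] for _ in range(pop)]
    best = min(P, key=fitness)
    for _ in range(gens):
        Q = [best[:]]                                   # elitism
        while len(Q) < pop:
            a = min(rng.sample(P, 2), key=fitness)      # tournament selection
            b = min(rng.sample(P, 2), key=fitness)
            w = rng.random()                            # arithmetic crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            for d in range(dim):                        # uniform mutation,
                if rng.random() < 0.2:                  # within current bounds
                    child[d] = rng.uniform(lo[d], hi[d])
            Q.append(child)
        P = Q
        best = min(P, key=fitness)
        for d in range(dim):                            # adapt the bounds:
            width = 0.95 * (hi[d] - lo[d])              # shrink by 5% and
            lo[d] = best[d] - width / 2                 # re-centre on the
            hi[d] = best[d] + width / 2                 # current best
    return best

# toy stand-in for a control-point fitting objective (illustrative only)
target = [0.3, -1.2, 2.0]
sol = adaptive_bound_ga(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)),
                        dim=3, lo0=-5.0, hi0=5.0)
```

Shrinking the bounds concentrates mutation near the incumbent best while elitism keeps it from being lost; shrinking too fast risks excluding the optimum, which is why the adaptation rate matters.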
A digital design for piecewise-linear (PWL) approximation of the sigmoid function is presented. Circuit operation is based on a recursive algorithm that uses the lattice operators max and min to approximate nonlinear functions. The resulting hardware is programmable, allowing control of the trade-off between delay time and approximation accuracy.
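The simplest instance of this construction (a generic example, not the paper's circuit) is the three-segment "hard sigmoid", built entirely from max and min over affine pieces; recursing with more affine pieces tightens the fit at the cost of extra comparator stages, which is the delay/accuracy trade-off mentioned above:

```python
import math

def pwl_sigmoid(x):
    # Three-segment PWL sigmoid using only the lattice operators max/min
    # over affine pieces: clamp the line 0.25*x + 0.5 to [0, 1].
    return max(0.0, min(1.0, 0.25 * x + 0.5))

# worst-case gap to the true sigmoid on [-5, 5]
gap = max(abs(pwl_sigmoid(i / 10 - 5) - 1 / (1 + math.exp(-(i / 10 - 5))))
          for i in range(101))
```

For this coarse three-segment version the worst-case error is about 0.12, near x = ±2 where the clamp engages.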
In this paper we prove convergence rates for the problem of approximating functions f by neural networks and similar constructions. We show that the rates improve as the smoothness of the activation functions increases, provided that f satisfies an integral representation. We give error bounds not only in Hilbert spaces but also in general Sobolev spaces W^{m,r}(Ω). Finally, we apply our results to a class of perceptrons and present a sufficient smoothness condition on f that guarantees the integral representation. (C) 2001 Academic Press.
This paper presents a general control method based on radial basis function networks (RBFNs) for chaotic dynamical systems. For many chaotic systems that can be decomposed into the sum of a linear and a nonlinear part, under some mild conditions the RBFN can approximate the nonlinear part of the system dynamics well. The resulting system is then dominated by the linear part, with only small residual nonlinearities due to the RBFN approximation errors. Thus, a simple linear state-feedback controller can be devised to drive the system response to a desired set-point. In addition to some theoretical analysis, computer simulations on two representative continuous-time chaotic systems (the Duffing and the Lorenz systems) are presented to demonstrate the effectiveness of the proposed method. (C) 2000 Published by Elsevier Science Inc. All rights reserved.
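The decomposition idea can be sketched on a Duffing-type system x1' = x2, x2' = x1 - x1^3 - 0.3*x2 + u: an RBFN learns the nonlinear term offline, the control cancels it, and linear state feedback does the rest. All gains, centres, widths, and the LMS training below are illustrative assumptions, not values from the paper:

```python
import math

centers = [c / 2 for c in range(-4, 5)]          # 9 Gaussian centres on [-2, 2]
width = 0.5
weights = [0.0] * len(centers)

def phi(x):
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def g_hat(x):                                    # RBFN estimate of g(x) = -x**3
    return sum(w * p for w, p in zip(weights, phi(x)))

# offline LMS training of the RBFN on samples of the nonlinear part
samples = [s / 10 for s in range(-20, 21)]
for _ in range(300):
    for x in samples:
        err = (-x ** 3) - g_hat(x)
        for i, p in enumerate(phi(x)):
            weights[i] += 0.05 * err * p

# cancel the nonlinearity, then drive the state to the set-point r
r, k1, k2 = 1.0, 5.0, 2.0
x1, x2, dt = 0.0, 0.0, 0.005
for _ in range(4000):                            # 20 s of Euler integration
    u = -g_hat(x1) - k1 * (x1 - r) - k2 * x2 - r
    dx1 = x2
    dx2 = x1 - x1 ** 3 - 0.3 * x2 + u
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
```

With exact cancellation the closed loop is x2' = -4(x1 - r) - 2.3*x2, a stable linear system; the RBFN residual only produces a small steady-state offset, as the abstract describes.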
This correspondence concerns the estimation algorithm for hinging hyperplane (HH) models, a piecewise-linear model for approximating functions of several variables, suggested in Breiman [1]. The estimation algorithm is analyzed, and it is shown to be a special case of a Newton algorithm applied to a sum-of-squared-errors criterion. This insight is then used to suggest possible improvements of the algorithm so that convergence to a local minimum can be guaranteed. In addition, the way the parameters in the HH model are updated is discussed. In Breiman [1], a stepwise updating procedure is proposed in which only a subset of the parameters is changed in each step. This connects closely to some recently suggested greedy algorithms, which are discussed and compared to a simultaneous update of all parameters.
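The basic hinge-finding iteration being analyzed alternates two steps: split the data by the sign of (β⁺ - β⁻)·x, then refit each half by least squares. A minimal 1-D sketch (with an intercept; a stand-in for Breiman's procedure, not his code) looks like this:

```python
def fit_hinge(xs, ys, iters=20):
    # Alternate (i) partitioning by the sign of (bp - bm).x with
    # (ii) least-squares refits of each half, until the split stabilizes.
    def lstsq(pts):
        # 2x2 normal equations for y ~ b0 + b1*x
        n = len(pts)
        sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
        sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
        det = n * sxx - sx * sx
        if abs(det) < 1e-12:
            return (sy / n if n else 0.0, 0.0)
        return ((sy * sxx - sx * sxy) / det, (n * sxy - sx * sy) / det)

    bp, bm = (0.0, 1.0), (0.0, -1.0)             # initial hinge guess
    for _ in range(iters):
        dp = [(x, y) for x, y in zip(xs, ys)
              if (bp[0] - bm[0]) + (bp[1] - bm[1]) * x > 0]
        dm = [(x, y) for x, y in zip(xs, ys)
              if (bp[0] - bm[0]) + (bp[1] - bm[1]) * x <= 0]
        if not dp or not dm:
            break                                 # degenerate split
        bp, bm = lstsq(dp), lstsq(dm)
    return bp, bm

xs = [i / 10 - 1 for i in range(21)]              # grid on [-1, 1]
ys = [max(0.0, x) for x in xs]                    # true hinge: max(0, x)
bp, bm = fit_hinge(xs, ys)
```

On this noiseless example the iteration recovers β⁺ = (0, 1) and β⁻ = (0, 0) exactly; the correspondence's point is that in general such refitting steps need safeguards (step-length control) before convergence to a local minimum is guaranteed.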
An identification algorithm for time-varying nonlinear systems using a sequential learning scheme with a minimal radial basis function neural network (RBFNN) is presented. The learning algorithm combines the growth criterion of Platt's resource-allocating network with a pruning strategy based on the relative contribution of each hidden unit of the RBFNN to the overall network output. The performance of the algorithm is evaluated on the identification of nonlinear systems with both fixed and time-varying dynamics, and also on a static function approximation problem. The nonlinear system with fixed dynamics was studied extensively earlier by Chen and Billings, while the study with time-varying dynamics reported here is new. For the fixed-dynamics case, the resulting RBFNN is shown to be more compact and to produce smaller output errors than the hybrid learning algorithm of Chen and Billings. In the time-varying case, the algorithm is shown to adjust (add/drop) the hidden neurons of the RBFNN to adaptively track the dynamics of the nonlinear system with a minimal RBF network.
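The grow-and-prune loop can be sketched as follows (a loose illustration of the two mechanisms; all thresholds, the fixed width, and the pruning counter are assumptions, not the paper's criteria): add a unit when the error is large AND the input is far from every existing centre; drop a unit whose relative output contribution stays tiny for many consecutive samples.

```python
import math, random

def rbf(x, c, s):
    return math.exp(-((x - c) / s) ** 2)

def predict(units, x):
    return sum(w * rbf(x, c, s) for c, s, w, _ in units)

def train_sequential(samples, epochs=15, e_min=0.05, d_min=0.3,
                     width=0.4, lr=0.1, tol=0.01, window=30, seed=0):
    # Sketch: Platt-style growth + contribution-based pruning (illustrative).
    rng = random.Random(seed)
    units = []                                 # [centre, width, weight, idle]
    data = list(samples)
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            err = y - predict(units, x)
            near = min((abs(x - c) for c, _, _, _ in units),
                       default=float("inf"))
            if abs(err) > e_min and near > d_min:
                units.append([x, width, err, 0])   # grow: new unit fits error
            else:
                for u in units:                    # LMS update of weights
                    u[2] += lr * err * rbf(x, u[0], u[1])
            total = sum(abs(w) * rbf(x, c, s) for c, s, w, _ in units) or 1.0
            for u in units:                        # prune persistently idle units
                share = abs(u[2]) * rbf(x, u[0], u[1]) / total
                u[3] = u[3] + 1 if share < tol else 0
            units = [u for u in units if u[3] < window]
    return units

grid = [i / 20 - 1 for i in range(41)]             # 41 points on [-1, 1]
units = train_sequential([(x, math.sin(2 * x)) for x in grid])
```

The growth test keeps centres at least d_min apart, so the network stays small; pruning lets it shed units if the target drifts, which is the "adaptive tracking" behaviour described above.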
Neural networks have attracted attention due to their capability to perform nonlinear function approximation. In this paper, in order to better understand this capability, a new theorem on an integral transform is derived by applying ridge functions to neural networks. From the theorem, it is possible to obtain approximation bounds that clarify the quantitative relationship between the function approximation accuracy and the number of nodes in the hidden layer. The theorem indicates that the approximation accuracy depends on the smoothness of the target function. Furthermore, the theorem also shows that this type of approximation method differs from the usual methods and is able to escape the so-called "curse of dimensionality," in which the approximation accuracy depends strongly on the input dimension of the function and deteriorates exponentially.
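The dimension-independence described here is characteristic of Barron-type bounds for one-hidden-layer (ridge-function) networks. For context, a representative bound of this kind (Barron, 1993; given as background, not quoted from this paper) reads:

```latex
% If f admits the Fourier integral representation
%   f(x) = \int_{\mathbb{R}^d} e^{i\,\omega\cdot x}\,\hat f(\omega)\,d\omega
% with finite first moment
%   C_f = \int_{\mathbb{R}^d} \|\omega\|\,|\hat f(\omega)|\,d\omega < \infty,
% then for every n there is a one-hidden-layer sigmoidal network f_n with
% n hidden nodes such that
\| f - f_n \|_{L^2(\mu)} \;\le\; \frac{2\,C_f}{\sqrt{n}}
% for any probability measure \mu on a bounded domain: the rate n^{-1/2}
% does not depend on the input dimension d, while the constant C_f
% measures the smoothness of f.
```

This matches the abstract's two claims: the accuracy depends on the smoothness of the target (through C_f), and the rate in n is free of the exponential dependence on d that plagues fixed-basis methods.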
A hinge function y = h(x) consists of two hyperplanes continuously joined together at a hinge. In regression (prediction), classification (pattern recognition), and noiseless function approximation, the use of sums of hinge functions gives a powerful and efficient alternative to neural networks, with compute times several orders of magnitude smaller than those required to fit neural networks with a comparable number of parameters. The core of the methodology is a simple and effective method for finding good hinges.
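Concretely (a hypothetical example, not from the paper), a hinge is just the max (or min) of two affine pieces, and a model is a sum of such hinges:

```python
def hinge(x, bplus, bminus):
    # Two hyperplanes joined continuously along the hinge
    # {x : (bplus - bminus).x = 0}; max gives the upward-opening variant,
    # min the downward one.
    return max(sum(b * v for b, v in zip(bplus, x)),
               sum(b * v for b, v in zip(bminus, x)))

# sum of hinges as a model on augmented inputs (1, x, y):
def model(x, y):
    return (hinge((1.0, x, y), (0.0, 1.0, 0.0), (0.0, -1.0, 0.0))   # |x|
            + hinge((1.0, x, y), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0))) # max(0, y)
```

Each hinge costs two inner products and one comparison to evaluate, which is why sums of hinges are so much cheaper to fit and run than a comparably sized network.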