We consider a modification of the backpropagation (BP) learning algorithm in which a linear function, directly proportional to the deviation between target values and actual values at the output, is propagated backwards instead of the original nonlinear function. The new algorithm is tested on the odd/even parity function for orders between 4 and 7 and on high-dimensional (180) data derived from NMR spectroscopy of animal tumours. Results suggest that, with the linear function, the network converges faster and is more likely to escape from local minima than with the original BP.
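The modification can be illustrated in a few lines: standard BP scales the output deviation by the derivative of the activation, which vanishes for saturated units, while the variant above propagates the raw (linear) deviation. A minimal numpy sketch of the idea, with a single sigmoid output (function names are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def output_delta(target, y, linear=True):
    """Error signal propagated backwards from an output unit.

    Standard BP multiplies the deviation by the sigmoid derivative
    y * (1 - y), which is tiny near saturated units (a common cause
    of slow convergence).  The linear variant propagates the
    deviation itself.
    """
    if linear:
        return target - y                   # linear in the deviation
    return (target - y) * y * (1.0 - y)     # original nonlinear BP

# A saturated output (y near 1) still receives a usable error signal
# under the linear rule, but almost none under standard BP.
y = sigmoid(6.0)                            # ~0.9975, derivative ~0.0025
print(output_delta(0.0, y, linear=True))    # ~-0.9975
print(output_delta(0.0, y, linear=False))   # ~-0.0025
```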
There have been many algorithms for speeding up the learning time of backpropagation. However, most of them do not take into consideration the amount of hardware required to implement the algorithm. Without suitable hardware implementations, the real promise of neural network applications will be difficult to achieve. Since multiplication dominates the computation and is expensive in hardware, this paper proposes a method to reduce the number of multiplications in the backward path of the backpropagation algorithm by setting some neuron errors to zero. Convergence is proved via the general Robbins-Monro process, a stochastic approximation process.
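The abstract does not give the zeroing rule in detail; a plausible minimal sketch is to threshold the backpropagated errors, so that every zeroed delta removes a full row of multiplications from the weight-gradient computation (names and the threshold value are assumptions):

```python
import numpy as np

def backward_with_skips(delta, activations, threshold=1e-3):
    """Backward-path weight gradients, skipping multiplications for
    neurons whose error is below a threshold (set to zero).

    delta       : (n_out,) backpropagated errors for this layer
    activations : (n_in,)  inputs to this layer
    Returns the gradient and the number of multiplications avoided.
    """
    pruned = np.where(np.abs(delta) < threshold, 0.0, delta)
    skipped = int(np.sum(pruned == 0.0)) * activations.size
    # In hardware, rows with a zero delta cost no multiplications.
    grad = np.outer(pruned, activations)
    return grad, skipped

rng = np.random.default_rng(0)
g, n = backward_with_skips(rng.normal(scale=1e-3, size=64),
                           rng.normal(size=128))
print(f"multiplications skipped: {n} of {64 * 128}")
```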
The backpropagation algorithm has been modified to work without any multiplications and to tolerate low-resolution computation, which makes it more attractive for hardware implementation. Numbers are represented in floating-point format with a 1-bit mantissa and a 2-bit exponent for the states, and a 1-bit mantissa and a 4-bit exponent for the gradients, while the weights are 16-bit fixed-point numbers. In this way, all the computations can be executed with shift and add operations. Large networks with over 100,000 weights were trained and demonstrated the same performance as networks computed with full precision. An estimate of a circuit implementation shows that a large network can be placed on a single chip, reaching more than 1 billion weight updates per second. A speedup is also obtained on any machine where a multiplication is slower than a shift operation.
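With a 1-bit mantissa, every state and gradient is ±2^e, so each multiplication reduces to a sign flip and a shift of the fixed-point weight. A small sketch of that idea; the bit widths come from the abstract, but the nearest-power-of-two rounding rule here is an assumption:

```python
import numpy as np

def quantize_pow2(x, exp_bits):
    """Round |x| to the nearest power of two (1-bit mantissa),
    clipping the exponent to the given number of bits."""
    e = np.round(np.log2(np.maximum(np.abs(x), 1e-30)))
    lo = -(2 ** (exp_bits - 1))
    hi = 2 ** (exp_bits - 1) - 1
    return np.sign(x), int(np.clip(e, lo, hi))

def shift_multiply(w_fixed, sign, e):
    """Multiply a fixed-point weight by ±2^e using shifts only."""
    shifted = w_fixed << e if e >= 0 else w_fixed >> -e
    return int(sign) * shifted

sign, e = quantize_pow2(np.array(0.23), exp_bits=4)  # 0.23 -> +2^-2
w = 1024                                             # weight in fixed point
print(shift_multiply(w, sign, e))                    # 1024 >> 2 = 256
```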
The information age changes how US Army decision makers predict future trends. The Army is responding with innovative technology to help its planners understand the complexities of its decisions, and their impacts, as the Army embraces a post-Cold War world view. Using neural network technologies is one way to attack this socio-technical challenge. Several neural networks were developed to predict the outcome of combat battles. The networks were trained with combat battles from a historical database.
Field programmable gate arrays (FPGAs) are an excellent technology for implementing neural networking hardware. This paper presents the run-time reconfiguration artificial neural network (RRANN). RRANN is a hardware implementation of the backpropagation algorithm that is extremely scalable and makes efficient use of FPGA resources. One key feature is RRANN's ability to exploit parallelism in all stages of the backpropagation algorithm, including the stage where errors are propagated backward through the network. This architecture has been designed and implemented on Xilinx XC3090 FPGAs, and its performance has been measured.
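Run-time reconfiguration means the same FPGA fabric is loaded in turn with the circuit for each training stage rather than holding all stages at once, trading reconfiguration time for a large reduction in resident circuit area. A software analogy of that scheduling idea (class and stage names are hypothetical, not from the paper):

```python
# One "device" slot is reloaded with each stage's logic in turn,
# mimicking how RRANN-style designs time-multiplex the FPGA.
STAGES = ["feedforward", "backpropagate", "update_weights"]

class ReconfigurableDevice:
    def __init__(self):
        self.loaded = None

    def reconfigure(self, stage):
        # On an FPGA this step would download a new bitstream.
        self.loaded = stage

    def run(self, batch):
        print(f"running {self.loaded} on batch {batch}")

device = ReconfigurableDevice()
for batch in range(2):
    for stage in STAGES:
        device.reconfigure(stage)
        device.run(batch)
```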
In this paper, we propose a new method (the OIVS method) for initializing weight values, based on equations representing the characteristics of the information transformation mechanism of a node. Numerical simulations show that the learning performance of the OIVS method is superior to that of the conventional method. It should be noted that, with appropriate values of the parameters in the OIVS method, nonconvergence can be avoided.
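The abstract does not reproduce the OIVS equations, but initialization schemes of this kind typically choose the weight range so that each node's net input stays in the active (non-saturated) region of the sigmoid. A generic fan-in-scaled sketch in that spirit, explicitly not the actual OIVS formulas:

```python
import numpy as np

def init_weights(n_in, n_out, active_range=4.0, rng=None):
    """Draw weights so that, for inputs of order 1, the net input to
    each node stays roughly inside the sigmoid's active region
    |net| <= active_range.  This is a generic stand-in for the OIVS
    equations, which derive the range from the node's information
    transformation characteristics.
    """
    rng = rng or np.random.default_rng()
    bound = active_range / np.sqrt(n_in)
    return rng.uniform(-bound, bound, size=(n_out, n_in))

W = init_weights(64, 10, rng=np.random.default_rng(0))
# Net inputs for order-1 inputs remain of order `active_range`.
print(np.abs(W @ np.ones(64)).max())
```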
We show instances where parts of algorithms similar to backpropagation and projection learning have been implemented via feedback in neural systems. The corresponding algorithms, with the same or a similar mathematical expression, do not minimize an error in the output space of the network, but rather in its input space, via a comparison between the function to be approximated and the current approximation executed by the network, which is fed back to the input space. We argue that numerous interlayer and intracortical feedback connections, e.g. in the primary visual system of mammals, could serve exactly this purpose. We introduce the paradigm with linear operators for illustration purposes, show the extension to nonlinear operators in function space, introduce projection learning, and discuss future work.
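For the linear-operator case the scheme can be made concrete: the current approximation is fed back into the input space, compared there with the function to be approximated, and the residual drives the coefficient update. A minimal sketch of that loop (basis, step size, and all names are assumptions for illustration):

```python
import numpy as np

# Approximate f on sampled inputs x by a linear combination of basis
# functions.  The error is formed in the input (function) space:
# residual = f - f_hat, where f_hat is the network's output fed back.
x = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * x)                    # function to approximate
Phi = np.stack([np.cos(np.pi * k * x) for k in range(8)], axis=1)

c = np.zeros(Phi.shape[1])                   # network coefficients
eta = 0.5 / np.linalg.norm(Phi, 2) ** 2      # stable step size
for _ in range(5000):
    f_hat = Phi @ c                          # feedback of current approximation
    residual = f - f_hat                     # comparison in input space
    c += eta * Phi.T @ residual              # projection-style update

print(np.max(np.abs(f - Phi @ c)))           # residual shrinks as the loop runs
```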
Training set parallelism and network-based parallelism are two popular paradigms for parallelising a feedforward (artificial) neural network. Training set parallelism is particularly suited to feedforward neural networks with backpropagation learning where the size of the training set is large in relation to the size of the network. This study analyses how to optimally distribute the training set over a heterogeneous processor network when the number of patterns in the training set is not an integer multiple of the number of processors. It is shown that optimal allocation of patterns in such cases is a mixed-integer programming problem. Using this analysis, it is found that equal distribution of training patterns among a homogeneous array of transputers is not necessarily the optimal allocation, even when the size of the training set is an integer multiple of the number of processors.
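Because all patterns take the same time on a given processor, the special case of identical patterns admits a simple exact solution by greedy assignment: give each successive pattern to the processor whose finish time increases the least. A small sketch (the processor timings are hypothetical):

```python
import heapq

def allocate_patterns(n_patterns, time_per_pattern):
    """Distribute identical training patterns over heterogeneous
    processors to minimise the epoch time max_i n_i * t_i.

    Greedy assignment of one pattern at a time to the processor with
    the smallest resulting finish time is exact here, because every
    pattern costs the same on a given processor.
    """
    counts = [0] * len(time_per_pattern)
    # Heap of (finish time if one more pattern is added, processor index).
    heap = [(t, i) for i, t in enumerate(time_per_pattern)]
    heapq.heapify(heap)
    for _ in range(n_patterns):
        _, i = heapq.heappop(heap)
        counts[i] += 1
        heapq.heappush(heap, ((counts[i] + 1) * time_per_pattern[i], i))
    return counts

# 100 patterns, 3 transputers of unequal speed: equal split is not optimal.
t = [1.0, 1.5, 2.0]
n = allocate_patterns(100, t)
print(n, max(c * ti for c, ti in zip(n, t)))
```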
Three parallel implementations of the backpropagation algorithm are studied. The performance of each implementation is estimated for three prototype neural architectures, chosen so that they are limit cases of a wide range of feedforward architectures. This study, based on performance analysis validated by experiments, follows two previously published papers.
A long and uncertain training process is one of the most important problems for a multilayer neural network using the backpropagation algorithm. In this paper, a modified backpropagation algorithm for a fast and reliable training process is presented. The modification is based on solving for the output-layer weight matrix using the theory of equations and least-squares techniques.
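The key step can be sketched directly: instead of iterating gradient descent on the output layer, map the targets through the inverse of the output activation and solve the resulting linear system in the least-squares sense. A minimal numpy sketch along those lines (the published method's details may differ):

```python
import numpy as np

def solve_output_weights(H, T, eps=1e-6):
    """Solve for the output-layer weights in one shot.

    H : (n_patterns, n_hidden) hidden-layer activations (bias included)
    T : (n_patterns, n_out)    targets in (0, 1) for sigmoid outputs

    Mapping the targets through the inverse sigmoid turns the output
    layer into a linear least-squares problem for the weight matrix.
    """
    T = np.clip(T, eps, 1.0 - eps)
    net = np.log(T / (1.0 - T))                # inverse sigmoid (logit)
    W, *_ = np.linalg.lstsq(H, net, rcond=None)
    return W                                    # (n_hidden, n_out)

rng = np.random.default_rng(1)
H = rng.uniform(size=(50, 8))
T = rng.uniform(0.1, 0.9, size=(50, 2))
W = solve_output_weights(H, T)
y = 1.0 / (1.0 + np.exp(-(H @ W)))              # outputs after the one-shot solve
print(np.abs(y - T).mean())                     # residual of the least-squares fit
```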