A new recursive supervised training algorithm is derived for the radial basis neural network architecture. The new algorithm combines the procedure of on-line candidate regressor selection with the conventional Givens QR based recursive parameter estimator to provide efficient adaptive supervised network training. A new concise on-line correlation based performance monitoring scheme is also introduced as an auxiliary device to detect structural changes in temporal data processing applications. Practical and simulated examples are included to demonstrate the effectiveness of the new procedures. Copyright (C) 1996 Elsevier Science Ltd.
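The abstract does not spell out the update equations, but the core of a Givens QR based recursive estimator is the rotation of each new regressor row into an accumulated triangular factor. The following is a minimal NumPy sketch of that standard update (the function name, variable names, and the assumption of a fixed regressor set are illustrative, not taken from the paper):

```python
import numpy as np

def givens_rls_update(R, qty, phi, y):
    """Recursive QR update with one new sample via Givens rotations.

    R   : (m, m) upper-triangular factor accumulated so far
    qty : (m,)   accumulated Q^T y vector (float array)
    phi : (m,)   new row of regressors (e.g. RBF activations)
    y   : scalar target for this sample
    """
    row = phi.astype(float).copy()
    rhs = float(y)
    m = R.shape[0]
    for j in range(m):
        a, b = R[j, j], row[j]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        # Rotate row j of R against the incoming row to zero its j-th entry
        Rj = R[j, j:].copy()
        R[j, j:] = c * Rj + s * row[j:]
        row[j:] = -s * Rj + c * row[j:]
        qty[j], rhs = c * qty[j] + s * rhs, -s * qty[j] + c * rhs
    return R, qty

# The output weights follow by back-substitution: w = np.linalg.solve(R, qty)
```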
ISBN:
(Print) 9783030879860; 9783030879853
The backpropagation (BP) algorithm is a widely used method for training neural networks. BP has a low computational load; unfortunately, it converges relatively slowly. In this paper a new approach to the backpropagation algorithm is presented. The proposed solution speeds up the BP method by using vector calculations. This modification of the BP algorithm was tested on a few standard examples, and the performance of both methods was compared.
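The paper's exact vector-calculation scheme is not reproduced in the abstract; the sketch below only illustrates the general idea of replacing per-sample loops with batched matrix operations in a one-hidden-layer network (all names and the tanh/linear architecture are assumptions for illustration):

```python
import numpy as np

def backprop_vectorized(X, T, W1, W2):
    """Vectorized gradient of a mean squared error loss for one batch.

    X : (n, d) inputs, T : (n, k) targets; W1 : (d, h), W2 : (h, k).
    Every sample is handled at once through matrix products, instead of
    looping over samples as in a textbook per-pattern BP implementation.
    """
    H = np.tanh(X @ W1)             # hidden activations for the whole batch
    Y = H @ W2                      # linear output layer
    E = Y - T                       # output errors
    dW2 = H.T @ E / len(X)          # gradient w.r.t. output weights
    dH = (E @ W2.T) * (1.0 - H**2)  # error backpropagated through tanh
    dW1 = X.T @ dH / len(X)         # gradient w.r.t. hidden weights
    return dW1, dW2
```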
ISBN:
(Print) 9783031234910; 9783031234927
A new parallel computational approach to the Levenberg-Marquardt learning algorithm is presented. The proposed solution is based on AVX instructions to effectively reduce the high computational load of this algorithm. Detailed parallel neural network computations are explicitly discussed. Additionally, the obtained acceleration is shown on a few test problems.
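The AVX-level details belong to the paper itself; as context, a plain Levenberg-Marquardt weight update is sketched below in NumPy. The Jacobian products J^T J and J^T e dominate its cost and are the kind of dense operations a SIMD (e.g. AVX) implementation would target; the function and variable names here are illustrative assumptions:

```python
import numpy as np

def lm_step(J, e, w, lam):
    """One Levenberg-Marquardt update of the network weights.

    J   : (n, p) Jacobian of the residuals w.r.t. the p weights
    e   : (n,)   residual vector
    w   : (p,)   current weight vector
    lam : scalar damping factor
    """
    A = J.T @ J + lam * np.eye(J.shape[1])  # damped Gauss-Newton matrix
    g = J.T @ e                             # gradient-like term
    return w - np.linalg.solve(A, g)        # solve the normal equations
```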