Authors:
Matson, C. L.; Liu, H. L.
USAF Res. Lab., Adv. Opt. & Imaging Div., Kirtland AFB, NM 87117, USA; Univ. Texas, Dept. Biomed. Engn., Arlington, TX 76019, USA
We extend the backpropagation algorithm of standard diffraction tomography to backpropagation in turbid media. We analyze the behavior of the backpropagation algorithm both for a single-view geometry, as is common in mammography, and for multiple views. The most general form of the algorithm permits arbitrary placement of sources and detectors in the background medium. In addition, we specialize the algorithm for the case of a planar array of detectors, which permits the backpropagation algorithm to be implemented with fast-Fourier-domain noniterative algebraic methods. In this case the algorithm can be used to reconstruct three-dimensional images in a minute or less, depending on the number of views. We demonstrate the theoretical results with computer simulations. (C) 1999 Optical Society of America [S0740-3232(99)02806-9].
A novel adaptive channel equalizer based on the backpropagation algorithm applied to an associative network is presented. Simulations are made for linear and nonlinear channels. The performance is shown to be much better than that obtained with the least-mean-square (LMS) algorithm for the nonlinear channel.
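The LMS baseline that the comparison refers to can be sketched as follows. This is an illustrative toy setup, not taken from the paper: the channel taps, step size, and signal lengths are assumptions chosen only to show the mechanism.

```python
import numpy as np

def lms_equalizer(received, desired, n_taps=8, mu=0.01):
    """Baseline LMS adaptive equalizer: FIR tap weights are adapted so that
    the filtered received signal tracks the known training symbols."""
    w = np.zeros(n_taps)
    out = np.zeros(len(received))
    for n in range(n_taps, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        out[n] = w @ x
        err = desired[n] - out[n]
        w += mu * err * x                         # stochastic-gradient update
    return w, out

# Toy linear channel (illustrative): BPSK symbols through a short FIR
# channel with enough intersymbol interference to cause decision errors.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=4000)
channel = np.array([1.0, 0.9, 0.4])
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.01 * rng.standard_normal(len(symbols))

w, out = lms_equalizer(received, symbols)
ber = np.mean(np.sign(out[500:]) != symbols[500:])  # after convergence
```

A backpropagation-based equalizer replaces the linear combiner `w @ x` with a small network trained on the same error signal, which is what gives the reported advantage on nonlinear channels.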
The statistical learning behavior of the single-layer backpropagation algorithm was recently analyzed for a system identification formulation with noise-free training data. Transient and steady-state results were obtained for the mean weight behavior, mean-square error (MSE), and probability of correct classification. This correspondence extends these results to the case of noisy training data. Three new analytical results are obtained: 1) the mean weights converge to finite values, 2) the MSE is bounded away from zero, and 3) the probability of correct classification does not converge to unity. However, over a wide range of signal-to-noise ratio (SNR), the noisy training data does not have a significant effect on the perceptron stationary points relative to the weight fluctuations. Hence, one concludes that noisy training data has a relatively small effect on the ability of the perceptron to learn the underlying weight vector F of the training signal model.
In this correspondence a recursive algorithm for updating the coefficients of a neural network structure for complex signals is presented. Various complex activation functions are considered and a practical definition is proposed. The method, combined with a mean-square-error criterion, yields the complex form of the conventional backpropagation algorithm.
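A minimal sketch of such a complex-valued update, assuming the common "split" activation (real and imaginary parts squashed separately) as the practical definition. The single-neuron model, step size, and data below are illustrative assumptions; the update rule is obtained by differentiating |e|^2 with respect to the real and imaginary parts of the weights.

```python
import numpy as np

def split_tanh(z):
    # "split" complex activation: real and imaginary parts squashed separately
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def train_complex_neuron(X, d, mu=0.05, epochs=200):
    """Gradient-descent (complex backprop) training of a single complex
    neuron y = f(w^T x) with the split-tanh activation."""
    rng = np.random.default_rng(1)
    n = X.shape[1]
    w = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    for _ in range(epochs):
        for x, target in zip(X, d):
            s = w @ x
            e = target - split_tanh(s)
            gr = 1.0 - np.tanh(s.real) ** 2    # derivative of tanh(Re s)
            gi = 1.0 - np.tanh(s.imag) ** 2    # derivative of tanh(Im s)
            # descent direction from d|e|^2/dRe(w) and d|e|^2/dIm(w)
            w += mu * (e.real * gr + 1j * e.imag * gi) * np.conj(x)
    return w

# Recover an illustrative "true" complex weight vector from its outputs.
rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3)) + 1j * rng.standard_normal((50, 3))
w_true = np.array([0.5 + 0.2j, -0.3j, 0.1 + 0j])
d = split_tanh(X @ w_true)
w_hat = train_complex_neuron(X, d)
mse = np.mean(np.abs(d - split_tanh(X @ w_hat)) ** 2)
```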
Authors:
Oh, S. H.
Research Department, Electronics and Telecommunications Research Institute, Taejon, South Korea
This letter proposes a modified error function to improve the error backpropagation (EBP) algorithm of multilayer perceptrons (MLP's), which suffers from slow learning speed. To accelerate the learning speed of the EBP algorithm, the proposed method reduces the probability that output nodes are near the wrong extreme value of the sigmoid activation function. This is achieved through a strong error signal for an incorrectly saturated output node and a weak error signal for a correctly saturated output node. The weak error signal for the correctly saturated output node also prevents overspecialization to the training patterns. The effectiveness of the proposed method is demonstrated in a handwritten digit recognition task.
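The mechanism can be illustrated as follows. This is a schematic of the idea, not the paper's exact modified error function: the error signal is amplified for an output node saturated toward the wrong extreme and attenuated for one saturated toward the correct extreme. The thresholds and scale factors are illustrative assumptions.

```python
import numpy as np

def modulated_error_signal(y, t, sat=0.9, strong=2.0, weak=0.1):
    """Illustrative error signal for sigmoid outputs y with targets t in
    {0, 1}: strong when a node saturates toward the WRONG extreme,
    weak when it saturates toward the CORRECT extreme."""
    e = t - y
    # a node is "saturated" when its sigmoid output is near 0 or 1
    saturated = (y > sat) | (y < 1.0 - sat)
    wrong_side = np.abs(e) > 0.5              # saturated at the wrong extreme
    scale = np.ones_like(y)
    scale[saturated & wrong_side] = strong    # strong signal: escape saturation
    scale[saturated & ~wrong_side] = weak     # weak signal: avoid overtraining
    return scale * e

y = np.array([0.95, 0.95, 0.5])   # two saturated outputs, one undecided
t = np.array([0.0, 1.0, 1.0])
sig = modulated_error_signal(y, t)
```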
It is shown that if there are P noncoincident input patterns to learn and a two-layered feedforward neural network having P - 1 sigmoidal hidden neurons and one dummy hidden neuron is used for the learning, then any suboptimal equilibrium point of the corresponding error surface is unstable in the sense of Lyapunov. This result leads to a sufficient condition for backpropagation learning to be free of local minima.
As a concise representation of stack filters, multistage weighted-order-statistic (MWOS) filters are introduced in this paper; they correspond to multistage threshold logic gates or multilayer perceptrons in the binary domain. Two adaptive algorithms are derived for finding optimal MWOS filters under the mean-absolute-error (MAE) criterion and the mean-square-error (MSE) criterion, respectively. Experimental results from image enhancement are provided to compare the performance of adaptive MWOS filters and adaptive stack filters.
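A single-stage weighted-order-statistic filter, the building block that MWOS filters cascade in stages, can be sketched as follows. The replication-based definition and the smallest-value convention used here are one common formulation, and the window sizes, weights, and signal are illustrative assumptions.

```python
import numpy as np

def wos_filter(signal, weights, threshold):
    """Single-stage weighted-order-statistic (WOS) filter: each sample in
    the window is replicated by its integer weight, and the output is the
    `threshold`-th smallest value of the replicated list. A plain running
    median is the special case of unit weights and the middle rank."""
    weights = np.asarray(weights, dtype=int)
    k = len(weights)
    padded = np.pad(signal, k // 2, mode="edge")
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        replicated = np.repeat(padded[i:i + k], weights)
        out[i] = np.sort(replicated)[threshold - 1]
    return out

# Unit weights with the middle rank reduce to a 3-point running median,
# which removes the isolated impulse in this toy signal.
cleaned = wos_filter([1, 9, 1, 1, 1], [1, 1, 1], threshold=2)
```

Raising the center weight (e.g. `[1, 3, 1]` with `threshold=3`) biases the filter toward the current sample, so isolated impulses are preserved instead of removed; adapting the weights trades off detail preservation against noise suppression, which is what the MAE/MSE algorithms above optimize.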
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with several popular training methods.
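The learning-rate rule can be sketched as follows: a local Lipschitz estimate is formed from the two most recent weight and gradient differences, and the step is set to its reciprocal halved, so no extra error-function or gradient evaluations are needed. The Goldstein/Armijo line-search safeguard from the article is omitted for brevity, and the quadratic test problem and constants are illustrative assumptions.

```python
import numpy as np

def train_adaptive_lr(grad, w0, steps=100, eta0=0.01):
    """Gradient descent with a per-step learning rate eta_k = 1 / (2 L_k),
    where L_k = ||g_k - g_{k-1}|| / ||w_k - w_{k-1}|| is a local Lipschitz
    estimate built from already-computed quantities."""
    w_prev = np.asarray(w0, dtype=float)
    g_prev = grad(w_prev)
    w = w_prev - eta0 * g_prev          # bootstrap step with a fixed rate
    for _ in range(steps):
        g = grad(w)
        dw, dg = w - w_prev, g - g_prev
        L = np.linalg.norm(dg) / max(np.linalg.norm(dw), 1e-12)
        eta = 1.0 / (2.0 * max(L, 1e-12))
        w_prev, g_prev = w, g
        w = w - eta * g
    return w

# Ill-conditioned quadratic error E(w) = 0.5 * w^T A w, gradient A w:
# the adaptive rate shrinks on steep directions and grows on flat ones.
A = np.diag([1.0, 10.0])
w_star = train_adaptive_lr(lambda w: A @ w, [1.0, 1.0])
```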
ISBN:
(Print) 9780780394902
Over the years, many improvements and refinements of the backpropagation learning algorithm have been reported. In this paper, a new adaptive penalty-based learning extension for the backpropagation learning algorithm and its variants is proposed. The new method initially puts pressure on artificial neural networks to get all outputs for all training patterns into the correct half of the output range, instead of mainly focusing on minimizing the difference between the target and actual output values. The technique is easy to implement and computationally inexpensive. In this study, the new approach has been applied to the backpropagation learning algorithm as well as the RPROP learning algorithm, and simulations have been performed. The superiority of the new proposed method is demonstrated: by applying the extension, the number of successful runs can be greatly increased and the average number of epochs to convergence considerably reduced on various problem instances. Furthermore, the evolution of the penalty values during training has been studied, and its observation shows the active role the penalties play in the learning process.
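One way to picture such a penalty term. This is an assumed form for illustration only; the paper's exact function and adaptation schedule may differ. The squared error is augmented whenever an output falls in the wrong half of the [0, 1] range, and the per-pattern penalty weights grow while the pattern stays on the wrong side.

```python
import numpy as np

def penalty_loss(y, t, penalties):
    """Squared error plus an extra term for every output that lies in the
    wrong half of the [0, 1] output range (illustrative, assumed form)."""
    wrong_half = np.sign(y - 0.5) != np.sign(t - 0.5)
    return np.sum((t - y) ** 2) + np.sum(penalties * wrong_half * (y - 0.5) ** 2)

def update_penalties(y, t, penalties, rate=1.1):
    """Grow the penalty while an output stays in the wrong half; decay it
    back toward 1 once the output reaches the correct half."""
    wrong_half = np.sign(y - 0.5) != np.sign(t - 0.5)
    return np.where(wrong_half, penalties * rate,
                    np.maximum(penalties / rate, 1.0))

t = np.array([1.0, 0.0])
p = np.ones(2)
loss_ok = penalty_loss(np.array([0.8, 0.2]), t, p)   # both outputs correct half
loss_bad = penalty_loss(np.array([0.2, 0.8]), t, p)  # both outputs wrong half
p_next = update_penalties(np.array([0.2, 0.8]), t, p)
```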
ISBN:
(Print) 9781479932672
This paper presents an application of a complex neural network for calculating the complex resonating frequency of a microstrip patch antenna on a superstrate. The results obtained from the neural network agree well with the theoretical results.