This work applies pattern recognition to temple statues in digital images (photographs) in order to help tourists or other users determine the direction from which a temple image was captured. The system uses the Canny edge-detection algorithm, principal component analysis (PCA) for feature extraction, and a backpropagation artificial neural network. The Canny algorithm is applied in the image-processing stage, where the RGB image is converted into a binary edge image, while the backpropagation network is used to recognize the patterns in the processed image. The directions recognized by the system are West, South, Southeast, East, Northeast, and North. Accuracy was evaluated on 36 images, split into 24 training images and 12 test images: the system achieved 100% accuracy on the training images and 83.33% on the test images.
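The abstract gives no implementation details; a minimal sketch of the described pipeline (Canny edges, PCA features, backpropagation classifier), assuming OpenCV and scikit-learn with illustrative image size and hyperparameters, might look like this:

```python
# Hedged sketch only: library choices (OpenCV, scikit-learn), the 64x64 resize,
# Canny thresholds, PCA dimension and MLP size are assumptions, not values from the paper.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def edge_features(path, size=(64, 64)):
    """RGB photo -> grayscale -> Canny edges -> flattened binary feature vector."""
    img = cv2.imread(path)                           # loaded as BGR
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)
    edges = cv2.Canny(gray, 100, 200)                # binary edge map (0 or 255)
    return (edges > 0).astype(np.float32).ravel()

def train_direction_classifier(train_paths, labels, n_components=20):
    """Fit PCA + backpropagation MLP on edge features; labels are direction names."""
    X = np.stack([edge_features(p) for p in train_paths])
    pca = PCA(n_components=n_components).fit(X)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)  # backpropagation network
    clf.fit(pca.transform(X), labels)
    return pca, clf
```

Test accuracy on a held-out set would then be `clf.score(pca.transform(X_test), y_test)`.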
ISBN (print): 0780366859
In this paper, a general backpropagation learning framework for the training of feedforward neural networks is proposed. Convergence to the global minimum under the framework is investigated using Lyapunov stability theory. It is shown that existing feedforward neural network training algorithms are special cases of the proposed framework.
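The abstract does not reproduce the framework itself; as a rough illustration of the shape of a Lyapunov argument for gradient-type backpropagation (a standard special case, not necessarily the paper's formulation), take the cost E(w) as the candidate Lyapunov function:

$$
V(w) = E(w), \qquad \dot{w} = -\eta\,\nabla E(w) \;\Rightarrow\; \dot{V} = \nabla E(w)^{\top}\dot{w} = -\eta\,\|\nabla E(w)\|^{2} \le 0,
$$

so V is non-increasing along training trajectories and the weights approach the set where the gradient vanishes; additional assumptions on E are needed to strengthen this to convergence to the global minimum, which is the question the paper addresses.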
Training set parallelism and network based parallelism are two popular paradigms for parallelising a feedforward (artificial) neural network. Training set parallelism is particularly suited to feedforward neural networks with backpropagation learning where the size of the training set is large in relation to the size of the network. This study analyses how we can optimally distribute the training set on a heterogeneous processor network when the number of patterns in the training set is not an integer multiple of the number of processors. It is shown that optimal allocation of patterns in such cases is a mixed integer programming problem. Using this analysis, it is found that equal distribution of training patterns among a homogeneous array of transputers is not necessarily the optimal way to allocate the patterns to processors even when the training set is an integer multiple of the number of processors.
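As an illustration of the allocation problem described above (under an assumed cost model in which processor i takes t_i per pattern per epoch and processors synchronise after each epoch), the mixed integer program is roughly

$$
\min_{n_1,\dots,n_P,\;T} \; T
\quad \text{subject to} \quad
n_i\,t_i \le T \;\; (i = 1,\dots,P), \qquad
\sum_{i=1}^{P} n_i = N, \qquad n_i \in \mathbb{Z}_{\ge 0},
$$

where N is the number of training patterns and P the number of processors; the paper's actual model may also include communication costs.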
In this paper, the authors apply a quadratic interior point method to backpropagation neural networks. The new quadratic backpropagation learning rule searches for a direction that minimizes the objective function in a neighborhood of the current weight vector. Numerical results on the parity problem show that the new learning rule is more than ten times faster than standard backpropagation, and five times faster than the linear interior point learning rule developed earlier by the same authors (1995).
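The abstract does not state the subproblem explicitly; a generic quadratic model of the kind it describes, minimized over a neighborhood of the current weight vector (an interior point method being one way to solve the constrained subproblem), is

$$
d_k = \arg\min_{\|d\| \le r_k} \; \nabla E(w_k)^{\top} d + \tfrac{1}{2}\, d^{\top} H_k\, d,
\qquad w_{k+1} = w_k + d_k,
$$

where H_k is a (possibly approximate) Hessian of the error E and r_k the neighborhood radius; the exact formulation used in the paper may differ.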
ISBN (print): 9781479903467
In this paper, a learning method for neural networks that adjusts lower and upper interval type-2 fuzzy weights is proposed. The mathematical representation of the adaptation of the interval type-2 fuzzy weights and the architecture of the proposed learning method are presented. This research is based on an analysis of recent methods for weight adaptation and on adapting those methods to type-2 fuzzy weights. The neural network architecture works with lower and upper type-2 fuzzy weights, and the corresponding lower and upper final results are reported. The proposed approach is applied to a case of Mackey-Glass time series prediction.
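As a rough illustration of how lower and upper weights can both be carried through a layer (a plain interval-propagation sketch, not the paper's type-2 fuzzy update rules), assuming NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interval_layer(x, W_lo, W_hi, b_lo, b_hi):
    """Propagate a crisp input x through a layer whose weights and biases are
    kept as lower/upper bounds; returns lower and upper activations.
    Illustrative only: the paper adapts the bounds during backpropagation learning."""
    prod_lo = np.minimum(W_lo * x, W_hi * x)   # elementwise interval product w*x
    prod_hi = np.maximum(W_lo * x, W_hi * x)
    z_lo = prod_lo.sum(axis=1) + b_lo          # lower pre-activation
    z_hi = prod_hi.sum(axis=1) + b_hi          # upper pre-activation
    return sigmoid(z_lo), sigmoid(z_hi)        # sigmoid is monotone, so bounds are preserved
```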
ISBN (print): 0780370449
This paper introduces a general class of dynamic network, the layered digital dynamic network. It then derives the backpropagation-through-time algorithm for computing the gradient of the network error with respect to the weights of the network.
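A minimal backpropagation-through-time example, assuming a single tanh recurrent layer and a squared-error cost (far simpler than the layered digital dynamic networks treated in the paper), is sketched below:

```python
import numpy as np

def bptt_gradients(xs, ds, Wx, Wh, Wy, b):
    """Backpropagation-through-time for a minimal tanh recurrent layer.
    xs, ds: lists of input/target column vectors over time; returns weight gradients.
    Illustrative sketch only, not the paper's general layered dynamic network."""
    T = len(xs)
    h = [np.zeros((Wh.shape[0], 1))]                 # h[0] is the initial state
    ys = []
    for t in range(T):                               # forward pass, storing states
        h.append(np.tanh(Wx @ xs[t] + Wh @ h[-1] + b))
        ys.append(Wy @ h[-1])
    dWx, dWh, dWy, db = (np.zeros_like(M) for M in (Wx, Wh, Wy, b))
    dh_next = np.zeros_like(h[0])
    for t in reversed(range(T)):                     # backward pass through time
        dy = ys[t] - ds[t]                           # d(0.5*||y-d||^2)/dy
        dWy += dy @ h[t + 1].T
        dh = Wy.T @ dy + dh_next                     # error from output and from future steps
        dz = dh * (1.0 - h[t + 1] ** 2)              # through tanh
        dWx += dz @ xs[t].T
        dWh += dz @ h[t].T
        db += dz
        dh_next = Wh.T @ dz                          # propagate error to the previous time step
    return dWx, dWh, dWy, db
```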
Three parallel implementations of the backpropagation algorithm are studied. The performance of each of these implementations is estimated for three prototypes of neural architectures chosen in such a way that they are limit cases of a wide range of feedforward architectures. This study, based on performance analysis validated by experiments, follows two other papers already published.
Most real-life classification problems have ill-defined, imprecise, or fuzzy class boundaries. Feedforward neural networks with the conventional backpropagation learning algorithm are not tailored to these kinds of classification problems. Hence, in this paper, feedforward neural networks that use fuzzy objective functions in the backpropagation learning algorithm are investigated. A learning algorithm is proposed that minimizes an error term which accounts for fuzziness in classification from the point of view of the possibilistic approach. Since the proposed algorithm has possibilistic classification ability, it can encompass different backpropagation learning algorithms based on crisp and constrained fuzzy classification. The efficacy of the proposed scheme is demonstrated on a vowel classification problem.
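The paper's exact error term is not reproduced in the abstract; one simple way to write a membership-weighted objective of this general kind, with possibilistic memberships $\mu_{kj} \in [0,1]$ of pattern $x_k$ in class $j$ (not required to sum to 1 over $j$) replacing crisp 0/1 targets, is

$$
E = \frac{1}{2} \sum_{k} \sum_{j} \bigl( \mu_{kj} - o_j(x_k) \bigr)^{2},
$$

which reduces to standard crisp backpropagation when each $\mu_{kj}$ is 0 or 1; the paper's actual term may differ.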
An improved gradient-based backpropagation training method is proposed for neural networks in this paper. Based on the Barzilai-Borwein steplength update and techniques from the Resilient Propagation (Rprop) method, the learning rate is adapted to improve both the speed and the success rate of training. Experimental results show that the proposed method has considerably improved convergence speed and, for the chosen test problems, outperforms other well-known training methods.
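As an illustration of the Barzilai-Borwein part of such a rule (the Rprop-style safeguards the paper combines it with are omitted), assuming flattened weight and gradient vectors:

```python
import numpy as np

def bb_steplength(w, w_prev, g, g_prev, alpha_min=1e-6, alpha_max=1e2):
    """Barzilai-Borwein steplength for a gradient-descent backpropagation update:
    alpha = (s.s)/(s.y) with s = w - w_prev, y = g - g_prev, clipped for safety."""
    s = w - w_prev                        # change in weights
    y = g - g_prev                        # change in gradient
    denom = float(s @ y)
    alpha = float(s @ s) / denom if denom > 0 else alpha_max
    return float(np.clip(alpha, alpha_min, alpha_max))

# One training step would then be: w_next = w - bb_steplength(w, w_prev, g, g_prev) * g
```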
Presents a technique for mapping the backpropagation learning algorithm onto a mesh signal processor. Optimal sub-partitioning of computation and communication, together with data replication techniques, are the key features of the authors' algorithm. Theoretical analysis and simulation results, using the MIT Lincoln Lab simulator, show that the authors' scheme performs better than the other schemes.