ISBN:
(Print) 9783037853191
There are many models derived from the bio-inspired artificial neural network (ANN). Among them, the multi-layer perceptron (MLP) is widely used as a universal function approximator. With the development of EDA tools and recent research, it is now possible to generate hardware implementations of MLPs on FPGAs rapidly and conveniently from pre-designed IP cores. At the same time, we focus on exploiting the inherent parallelism of neural networks. In this paper, we first propose the hardware architecture of the modular IP cores; a parallel MLP is then devised as an example, and conclusions are drawn.
ISBN:
(Print) 9781509042401
To retrain an existing multilayer perceptron (MLP) on-line using newly observed data, it is necessary to incorporate the new information while preserving the performance of the network. This is known as the "plasticity-stability" problem. For this purpose, we proposed an algorithm for on-line training with guide data (OLTA-GD). OLTA-GD is well suited to implementation on portable/wearable computing devices (P/WCDs) because of its low computational cost, and it reduces dependence on an internet connection. Results obtained so far show that, in most cases, OLTA-GD can improve an MLP steadily. One open question in using OLTA-GD is how to select the guide data more efficiently. In this paper, we investigate two methods for guide data selection. The first is to select the guide data randomly from a candidate data set G; the other is to cluster G first and select the guide data based on the cluster centers. Results show that the two methods do not differ significantly, in the sense that both preserve the performance of the MLP well. However, if we consider the risk of "instantaneous performance degradation", random selection is not recommended. In other words, cluster center-based selection provides more reliable results for the user during on-line training.
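The cluster center-based selection described above can be sketched as follows. The abstract does not specify the clustering algorithm, so this sketch assumes a plain k-means and returns, as guide data, the candidate sample nearest each final center; the function name and parameters are illustrative, not the authors' API.

```python
import numpy as np

def cluster_center_guides(G, k, iters=20, seed=0):
    """Pick k guide samples from the candidate set G (shape n x d):
    run a plain k-means, then return the actual sample closest to
    each final cluster center (sketch of the idea, not OLTA-GD itself)."""
    rng = np.random.default_rng(seed)
    centers = G[rng.choice(len(G), k, replace=False)]
    for _ in range(iters):
        # assign each candidate to its nearest center
        d = np.linalg.norm(G[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned candidates
        for j in range(k):
            pts = G[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    # guide data: the real samples nearest the final centers
    d = np.linalg.norm(G[:, None, :] - centers[None, :, :], axis=2)
    return G[d.argmin(axis=0)]
```

Selecting real samples near the centers, rather than the centers themselves, keeps the guide data inside the observed distribution, which matches the paper's motivation of avoiding instantaneous performance degradation.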
ISBN:
(Print) 9781467382861
The performance of a neural network as a classifier depends on several factors, such as the initialization of weights, the architecture, between-class imbalance in the dataset, and the activation function. Although a three-layered neural network can approximate any non-linear function, the number of neurons in the hidden layer plays a significant role in the performance of the classifier. In this study, the importance of the number of hidden-layer neurons is analyzed for the classification of ECG signals. Five different arrhythmias and the normal beat are classified for different numbers of hidden-layer neurons to examine the performance of the classifier. The best number was found to be 35. After training the neural network with the optimized number of hidden-layer neurons, we tested the performance on three different datasets. The average sensitivity, specificity, and accuracy achieved are 94.91%, 99.69%, and 99.46%, respectively.
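The procedure of retraining the classifier for each candidate hidden-layer size and keeping the best one can be sketched as a simple sweep. The `toy_accuracy` curve below is a hypothetical stand-in for the real train-and-evaluate step, shaped only to echo the reported optimum of 35; it is not derived from the paper's data.

```python
def sweep_hidden_sizes(sizes, evaluate):
    """Grid-search the hidden-layer size: evaluate the classifier for
    each candidate size and return the best one with all scores."""
    scores = {h: evaluate(h) for h in sizes}
    best = max(scores, key=scores.get)
    return best, scores

def toy_accuracy(h):
    # Hypothetical stand-in for "train an ECG classifier with h hidden
    # neurons and report validation accuracy"; peaks at h = 35 purely
    # for illustration.
    return 0.99 - 0.0001 * (h - 35) ** 2

best, scores = sweep_hidden_sizes(range(5, 70, 5), toy_accuracy)
```

In practice `evaluate` would wrap the full train/validate cycle, so the sweep cost grows linearly with the number of candidate sizes.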
ISBN:
(Print) 9781479914159
In this paper, two bio-inspired methods are applied to optimize the type-2 fuzzy inference systems used in a neural network with type-2 fuzzy weights. The genetic algorithm and particle swarm optimization are used to optimize the two type-2 fuzzy systems that operate within the backpropagation learning method with type-2 fuzzy weight adjustment. A mathematical analysis of the learning method's architecture and of the adaptation of the type-2 fuzzy weights is presented. Optimized type-2 fuzzy inference systems for managing the weights of the neural network are presented, together with the results for the two bio-inspired methods. The proposed approach is applied to a case of time series prediction, specifically the Mackey-Glass time series.
ISBN:
(Print) 9781538648810
A split-based background calibration technique for pipelined ADCs is proposed in this brief to handle both capacitor mismatch and residue-amplifier non-linearity in multiple pipeline stages. Concepts and approaches from machine learning, such as the multilayer perceptron and the backpropagation algorithm, are introduced to address the problems that appear in modeling and solving the nonlinear calibration filter. Computer simulations demonstrate an improvement of more than 50 dB in both SNDR and SFDR for a 15-bit, 7-stage pipelined ADC with non-ideal first three stages.
ISBN:
(Print) 9783030879860; 9783030879853
The backpropagation (BP) algorithm is a widely used method for training neural networks. BP has a low computational load; unfortunately, it converges relatively slowly. In this paper a new approach to the backpropagation algorithm is presented. The proposed solution speeds up the BP method by using vector calculations. This modification of the BP algorithm was tested on several standard examples, and the performance of the two methods was compared.
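The idea of replacing per-sample loops with whole-batch matrix operations can be illustrated with a minimal one-hidden-layer MLP. This is a generic vectorized backpropagation sketch (sigmoid activations, MSE loss, full-batch gradient descent), not the paper's exact formulation.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=1.0, epochs=3000, seed=0):
    """Train a one-hidden-layer MLP with plain backpropagation, written
    as whole-batch matrix operations instead of per-sample loops.
    Returns a predictor mapping inputs to 0/1 labels."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        # forward pass, all samples at once
        H = sig(X @ W1 + b1)                  # (n, hidden)
        out = sig(H @ W2 + b2)                # (n, 1)
        # backward pass: the BP delta rules in matrix form
        d_out = (out - y) * out * (1 - out)   # (n, 1)
        d_hid = (d_out @ W2.T) * H * (1 - H)  # (n, hidden)
        W2 -= lr * H.T @ d_out / n; b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_hid / n; b1 -= lr * d_hid.mean(0)
    return lambda Xq: (sig(sig(Xq @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

Because every epoch is a handful of matrix products, the same code benefits directly from BLAS-backed linear algebra, which is the kind of speed-up vector calculations provide over an element-by-element implementation.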
ISBN:
(Print) 9781424427932
In this work we tested and compared Artificial Metaplasticity (AMP) results for Multilayer Perceptrons (MLPs). AMP is a novel Artificial Neural Network (ANN) training algorithm inspired by the biological metaplasticity property of neurons and Shannon's information theory. During the training phase, the AMP algorithm assigns more relevance to less frequent patterns and less relevance to frequent ones, aiming to achieve much more efficient training while at least maintaining MLP performance. AMP is especially recommended when few patterns are available to train the network. We implement an Artificial Metaplasticity MLP (AMMLP) on standard, widely used machine learning databases. Experimental results show the superiority of AMMLPs when compared with recent results on the same databases.
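The weighting idea above (more relevance for less frequent patterns, less for frequent ones) can be sketched by scaling each training sample inversely to an estimated pattern frequency. The Gaussian kernel-density estimate used here is an assumption of this sketch; the paper derives its weighting from an assumed probability model and Shannon information, not from a KDE.

```python
import numpy as np

def amp_sample_weights(X, bandwidth=1.0):
    """Metaplasticity-style sample weights: rarer patterns get larger
    weight. Frequency is estimated with a simple Gaussian kernel
    density over the training set (an assumption of this sketch)."""
    # pairwise squared distances between all samples
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # kernel-density estimate of each sample's "frequency"
    density = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    w = 1.0 / density      # information-like weighting: w proportional to 1/p(x)
    return w / w.mean()    # normalize so the average weight is 1
```

During training, each sample's gradient contribution would be multiplied by its weight, so rare patterns drive larger updates, which is the claimed source of efficiency when few patterns are available.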
ISBN:
(Print) 9781467348362; 9781467348331
In this paper, a backpropagation learning algorithm and a genetic algorithm are applied to network intrusion detection and to classifying the detected attacks into their proper types. During training of the backpropagation algorithm, two possible sets of rule-set features are used separately to determine which features give better performance. The performance of the genetic algorithm is then compared to that of both backpropagation approaches. The process is evaluated on both the training dataset and the test dataset. It is found that the backpropagation algorithm performs better at detecting attack connections, but the genetic algorithm approach is more successful at classifying the detected attacks into their proper types.
ISBN:
(Print) 9781509064540
In this paper an Inclined Planes Optimization algorithm is used to optimize the performance of the multilayer perceptron. The performance of a neural network depends on parameters such as the number of neurons in the hidden layer and the connection weights. So far, most research has focused on training the neural network. Here, the new optimization algorithm is applied to finding an optimal architecture for data classification: training is done by the backpropagation (BP) algorithm, while the architecture of the neural network is treated as the independent variables of the optimization. Results on three classification problems show that the resulting neural networks have low complexity and high accuracy compared with those obtained by Particle Swarm Optimization and the Gravitational Search Algorithm.
ISBN:
(Print) 9789811024719; 9789811024702
An artificial neural network model is presented to enhance the performance index of mass transfer in tube flow by means of a coaxially inserted entry-region coil-disc assembly promoter. The popular backpropagation algorithm was used to train, test, and normalize the network data to predict mass-transfer performance. The experimental data were separated into two sets, one for training and one for validation: 248 sets were used for training and 106 sets for validation of the artificial neural networks, using MATLAB 7.7.0 and its toolboxes, to predict the performance index of mass transfer in the tube with fast convergence and good accuracy. The weights were initialized within the range [-1, 1]. In all runs, the learning rate was set to 0.10 and the momentum term to 0.30. The best model was selected based on MSE, STD, and R2. A network with a 5_8_1 configuration is recommended for the mass-transfer training. This work reveals that adding more layers and more hidden-layer nodes to the artificial neural network may not increase the performance of the mass-transfer function.