Recurrent backpropagation networks have been used to build a neural receiver for GSM signals. The simulations have been carried out considering an AWGN channel corrupted by ISI, fading and Doppler. The experimental results show that the neural receiver performs better than a classic coherent one and that its performance improves as the number of training samples is increased.
Multilayer perceptrons (MLPs) are feed-forward artificial neural networks with a strong theoretical foundation. The most popular algorithm for training MLPs is the backpropagation algorithm, which can be seen as a consistent nonparametric least-squares regression estimator. This algorithm is reformulated in this paper using linear algebra, providing a theoretical basis for further studies.
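As an illustration of that least-squares reading, the following is a minimal sketch (not taken from the paper) of backpropagation for a one-hidden-layer MLP written entirely in matrix form, with the sum of squared errors as the objective; all shapes, constants, and function names are assumptions.

```python
# Minimal matrix-form backpropagation for a one-hidden-layer MLP, minimizing
# the sum of squared errors over the training set by gradient descent.
import numpy as np

def train_mlp(X, Y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """X: (n, d) inputs, Y: (n, m) targets. Returns the learned weight matrices."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))   # input -> hidden
    W2 = rng.normal(scale=0.5, size=(hidden, Y.shape[1]))   # hidden -> output
    for _ in range(epochs):
        H = np.tanh(X @ W1)            # hidden activations
        Y_hat = H @ W2                 # linear output layer
        E = Y_hat - Y                  # residuals of the least-squares objective
        # Gradients of 0.5 * ||Y_hat - Y||^2 with respect to the weight matrices
        dW2 = H.T @ E
        dW1 = X.T @ ((E @ W2.T) * (1.0 - H**2))
        W2 -= lr * dW2 / len(X)
        W1 -= lr * dW1 / len(X)
    return W1, W2
```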
ISBN: (Print) 9781728192017; 9781728192024
This work presents a reduced-complexity architecture for the backpropagation-based compensation of frequency-interleaved analog-to-digital converters (FI-ADCs). Earlier literature has described a promising adaptive background compensation technique based on the backpropagation algorithm from machine learning, which is suitable for mitigating errors in the analog signal path of FI-ADCs. This technique is applicable to high-speed digital receivers such as those used in coherent optical communications. The key ingredients of the aforementioned technique are MIMO equalization and the backpropagation algorithm used to adapt the coefficients of the equalizer. The work presented here modifies the earlier architecture to reduce the number of required analog mixers by a factor of two, and to simplify, also by a factor of two, the compensation equalizer responsible for correcting the errors of the analog signal path. The result is a much simpler and significantly improved FI-ADC system. Simulations show that the analog impairments are accurately compensated and that their impact on the performance of the receiver is essentially eliminated.
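The sketch below is a heavily simplified stand-in for the adaptation step only, not the paper's FI-ADC architecture: a multi-input equalizer (a single-output simplification of the MIMO structure) whose coefficients are adapted by an LMS-style gradient update on the squared reconstruction error against a known reference. All names, shapes, and constants are illustrative assumptions.

```python
# LMS-style gradient adaptation of a linear multi-branch equalizer:
# the coefficient vector is nudged along the gradient of the squared
# reconstruction error at each sample.
import numpy as np

def adapt_equalizer(channel_out, reference, mu=1e-3, iters=5):
    """channel_out: (n_samples, n_branches) sub-ADC outputs,
       reference:   (n_samples,) desired reconstructed signal."""
    n_branches = channel_out.shape[1]
    w = np.zeros(n_branches)                 # equalizer coefficients
    for _ in range(iters):
        for x, d in zip(channel_out, reference):
            y = w @ x                        # equalizer output
            e = d - y                        # reconstruction error
            w += mu * e * x                  # gradient update on the squared error
    return w
```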
Having an accurate forecast of future electricity usage is vital for utility companies to be able to provide adequate power supply to meet demand. Two methods have been implemented to forecast electricity demand, namely regression analysis (RA) and artificial neural networks (ANNs). We compare these two methods in this paper using the mean absolute percentage error (MAPE) to measure forecasting performance. The results show that ANNs are more effective than RA for long-term forecasting. In addition, from our investigation into the effect of including economic and social factors, such as population and gross domestic product (GDP), in the forecast, we conclude that including these factors does not improve the accuracy of the chosen ANN model's electricity demand forecast.
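For reference, the MAPE metric used for the comparison can be computed as below; the helper name and interface are illustrative, not taken from the paper.

```python
# Mean absolute percentage error, expressed in percent.
import numpy as np

def mape(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Example: mape([100, 120, 130], [98, 125, 128]) -> about 2.6 %
```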
A long and uncertain training process is one of the most important problems for a multilayer neural network using the backpropagation algorithm. In this paper, a modified backpropagation algorithm for a fast and reliable training process is presented. The modification is based on solving the weight matrix of the output layer using the theory of equations and least-squares techniques.
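A rough sketch of the general idea follows, with assumed names and shapes: once the hidden-layer activations are available, the output-layer weight matrix is obtained directly by a linear least-squares solve rather than by iterative gradient steps.

```python
# Direct least-squares solve for the output-layer weights, given the
# hidden-layer activations and the target matrix.
import numpy as np

def solve_output_layer(H, Y, ridge=1e-6):
    """H: (n, h) hidden activations, Y: (n, m) targets.
    Returns W_out minimizing ||H W_out - Y||^2 (a tiny ridge term adds stability)."""
    A = H.T @ H + ridge * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ Y)
```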
The supervised backpropagation learning scheme is used to develop a training algorithm for multilayer higher-order neural networks (HONNs). By restructuring the basic HONN architecture, the traditional backpropagation algorithm can be extended to multilayer HONNs. The TC pattern recognition problem is used to compare the performance of various HONNs with different numbers of hidden layers, different numbers of processing elements, and different orders. Simulation results show that, in many cases, a HONN trained for the same number of iterations performed better than conventional first-order networks.
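The following is an illustrative sketch (names assumed, not the paper's code) of the higher-order ingredient: the input vector is augmented with pairwise product terms, so an ordinary trainable layer acting on the expanded vector realizes a second-order unit.

```python
# Second-order input expansion: concatenate the raw inputs with all
# pairwise products x_i * x_j (i <= j).
import numpy as np

def second_order_expansion(x):
    x = np.asarray(x, dtype=float)
    d = len(x)
    pairs = [x[i] * x[j] for i in range(d) for j in range(i, d)]
    return np.concatenate([x, np.array(pairs)])

# A 3-dimensional input becomes a 3 + 6 = 9-dimensional second-order feature vector.
```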
This paper analyses and compares several improvements to the backpropagation method for weight adjustment in a feedforward network. The networks' behavior is simulated on five specific applications. The paper also presents a method using a variable step, and its superiority is demonstrated by simulation.
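As a sketch of what a variable-step rule can look like (the paper's exact rule is not reproduced here), one common scheme grows the step after an epoch that lowers the error and shrinks it, rolling the weights back, after one that raises it; all constants below are illustrative.

```python
# A simple adaptive step-size rule applied once per training epoch.
def variable_step_update(step, prev_error, new_error, grow=1.05, shrink=0.5):
    if new_error < prev_error:
        return step * grow, True      # accept the epoch, take a slightly larger step
    return step * shrink, False       # reject the epoch, retry with a smaller step
```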
A backpropagation system for a hypercube that selects one of two implementations, depending on the size of the application, is described. One algorithm executes quickly at the cost of storage; the other optimizes storage at the cost of execution time. Both algorithms involve considerable message passing. The system is menu-driven and has a set of tools that allow the user to determine the correct initial weight matrix W more accurately when the standard guess does not work. This set of tools is especially appropriate for an interactive supercomputer such as an Intel Hypercube. Although the system has been designed to work for a wide range of applications, the authors are especially interested in using it with very large artificial neural systems for protein identification and classification.
Multi-layer backpropagation, like many learning algorithms that can create complex decision surfaces, is prone to overfitting. Softprop is a novel learning approach presented here that is reminiscent of the softmax explore-exploit Q-learning search heuristic. It fits the problem while delaying settling into error minima to achieve better generalization and more robust learning. This is accomplished by blending standard SSE optimization with lazy training, a new objective function well suited to learning classification tasks, to form a more stable learning model. Over several machine learning data sets, softprop reduces classification error by 17.1 percent and the variance in results by 38.6 percent over standard SSE minimization.
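A schematic sketch of the blending idea only (not the paper's implementation) is shown below: the training objective is a combination of the standard SSE term and a classification-oriented term standing in for lazy training, whose exact form is not reproduced here; the interpolation weight and the proxy term are assumptions.

```python
# Blended objective: interpolate between the SSE term and a crude
# classification-error proxy standing in for the lazy-training criterion.
import numpy as np

def blended_loss(outputs, targets, alpha=0.5):
    """outputs, targets: (n, classes) arrays; targets are one-hot."""
    sse = np.sum((outputs - targets) ** 2)
    # Count patterns whose winning output is not the target class.
    misclassified = np.sum(np.argmax(outputs, axis=1) != np.argmax(targets, axis=1))
    return alpha * sse + (1.0 - alpha) * misclassified
```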
This paper analyzes the perceptron mean weight learning behavior for a system identification model with Gaussian input training data and fixed nonzero biases for both the perceptron and the unknown system. The analysis is based upon the partial evaluation of certain expectations using Price's (1958) theorem followed by numerical integration in the mean weight recursions. The mean weight vector is shown to be in the same direction as that of the unknown system. A scalar recursion is derived for the length of the mean weights. The recursion is shown to yield weight vector predictions that are in close agreement with Monte Carlo simulations of the perceptron learning behavior. The stationary points are also accurately predicted by the theoretical model.
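A Monte Carlo sketch of the described setup is given below, with all constants, names, and the sigmoidal nonlinearity assumed for illustration: a single perceptron with a fixed bias is trained by gradient descent to identify an unknown unit of the same form driven by Gaussian inputs, and the mean weight trajectory is estimated by averaging over independent runs.

```python
# Monte Carlo estimate of the mean weight trajectory of a sigmoidal perceptron
# identifying an unknown unit of the same form under Gaussian training data.
import numpy as np

def perceptron_mean_weights(w_true, b_true, b_model, mu=0.01, steps=5000, runs=200, seed=0):
    rng = np.random.default_rng(seed)
    w_true = np.asarray(w_true, dtype=float)
    d = len(w_true)
    mean_w = np.zeros((steps, d))
    for _ in range(runs):
        w = np.zeros(d)
        for t in range(steps):
            x = rng.standard_normal(d)                      # Gaussian training input
            desired = np.tanh(w_true @ x + b_true)          # unknown system output
            y = np.tanh(w @ x + b_model)                    # perceptron output
            w += mu * (desired - y) * (1.0 - y**2) * x      # gradient (backprop) step
            mean_w[t] += w
    return mean_w / runs   # average weight trajectory across independent runs
```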