A feedforward neural network with random weights (RW-FFNN) uses a randomized feature map layer. This randomization enables the optimization problem to be replaced by a standard linear least-squares problem, which offers a major advantage in terms of training speed. An extreme learning machine (ELM) is a well-known RW-FFNN that can be implemented as a single-hidden-layer feedforward neural network. However, for a large dataset, owing to the shallow architecture, such an ELM typically requires a very large number of nodes in a single hidden layer to achieve a sufficient level of accuracy. In this paper, we propose a deep residual learning method with a dilated causal convolution ELM (DRLDCC-ELM). The baseline layer performs feature mapping to predict the target features based on the input features. The subsequent residual-compensation layers then iteratively remodel the uncaptured prediction errors of the previous layer. The proposed network architecture also adopts dilated causal convolution based on the ELM in each layer to effectively expand the receptive field of the multilayer network. The results of experiments involving acoustic scene classification of daily activities in a home environment confirmed that the proposed DRLDCC-ELM outperforms the previously proposed residual-compensation ELM and deep-residual-compensation ELM methods. We also confirmed that the generalization capability of the proposed DRLDCC-ELM tends to be superior to that of convolutional neural network-based models, especially for a large number of parameters.
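The core mechanism in the abstract above — a fixed random feature map whose output weights are trained by linear least squares, followed by residual-compensation layers that each fit the error left by the layers before them — can be sketched as follows. This is a minimal illustrative sketch on toy 1-D regression data, not the paper's DRLDCC-ELM: the dilated causal convolution inside each layer and the acoustic-scene features are omitted, and all names (`elm_fit`, `elm_predict`, the layer counts and sizes) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative only; the paper uses acoustic features).
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=300)

def elm_fit(X, t, n_hidden, rng):
    """One ELM layer: a fixed random feature map plus linear least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never updated)
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # randomized feature map
    beta, *_ = np.linalg.lstsq(H, t, rcond=None)  # training = least squares
    return W, b, beta

def elm_predict(X, params):
    W, b, beta = params
    return np.tanh(X @ W + b) @ beta

# Baseline layer predicts the target; each subsequent residual-compensation
# layer then fits the prediction error left by the previous layers.
layers, residual = [], y.copy()
for _ in range(3):
    params = elm_fit(X, residual, n_hidden=50, rng=rng)
    layers.append(params)
    residual = residual - elm_predict(X, params)

# The final prediction is the sum of all layer outputs.
y_hat = sum(elm_predict(X, p) for p in layers)
mse = float(np.mean((y_hat - y) ** 2))
```

Because each layer is trained in closed form, the whole stack needs no gradient descent — this is the training-speed advantage the abstract highlights.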
Randomization-based methods for training neural networks have gained increasing attention in recent years and have achieved remarkable performance on a wide variety of tasks. The interest in such methods relies on the fact that standard gradient-based learning algorithms may often converge to local minima and are usually time consuming. Despite the good performance achieved by randomization-based neural networks (RNNs), the random feature mapping procedure may generate redundant information, leading to suboptimal solutions. To overcome this problem, strategies such as feature selection, hidden-neuron pruning, and ensemble methods have been used. Feature selection methods discard redundant information from the original dataset. Pruning methods eliminate hidden nodes with redundant information. Ensemble methods combine multiple models to generate a single one. Selective ensemble methods select a subset of all available models to generate the final model. In this paper, we propose a selective ensemble of RNNs based on the Successive Projections Algorithm (SPA) for regression problems. The proposed method, named Selective Ensemble of RNNs using the Successive Projections Algorithm (SERS), employs the SPA for three distinct tasks: feature selection, pruning, and ensemble selection. SPA was originally developed as a feature selection technique and has recently been employed for RNN pruning. Herein, we show that it can also be employed for ensemble selection. The proposed framework was used to develop three selective ensemble models based on three RNNs: Extreme Learning Machines (ELM), the feedforward neural network with random weights (FNNRW), and the Random Vector Functional Link (RVFL). The performances of SERS-ELM, SERS-FNNRW, and SERS-RVFL were assessed in terms of model accuracy and model complexity on several real-world benchmark problems. Comparisons to related methods showed that the SERS variants achieved similar accuracies with significant model complexity reduction.
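The selective-ensemble idea above — train a pool of randomized networks, then keep only a subset whose combined prediction is best — can be illustrated with a short sketch. Note the selection step here is a simple greedy forward selection on validation error, used as a stand-in for the paper's SPA-based selection; the pool, data, and all function names (`train_elm`, `predict`) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data split into train/validation sets (illustrative only).
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=400)
Xtr, ytr, Xva, yva = X[:300], y[:300], X[300:], y[300:]

def train_elm(X, t, n_hidden, rng):
    """One base model: random feature map + least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), t, rcond=None)
    return W, b, beta

def predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Pool of base models; each differs only in its random feature map.
pool = [train_elm(Xtr, ytr, 40, rng) for _ in range(20)]
preds = np.stack([predict(Xva, m) for m in pool])  # shape (20, n_val)

def val_mse(indices):
    return float(np.mean((np.mean(preds[indices], axis=0) - yva) ** 2))

# Greedy forward selection: add the model that most reduces validation MSE,
# stopping when no addition improves it (simplified stand-in for SPA).
selected, best_err = [], np.inf
remaining = list(range(len(pool)))
while remaining:
    cand = min(remaining, key=lambda i: val_mse(selected + [i]))
    err = val_mse(selected + [cand])
    if err >= best_err:
        break
    selected.append(cand)
    remaining.remove(cand)
    best_err = err

# The final model averages only the selected subset.
```

The payoff matches the abstract's claim: the averaged subset retains the pool's accuracy while discarding most of its models, reducing overall model complexity.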