Owing to the efficient model calibration afforded by its unique incremental learning capability, the broad learning system (BLS) has made impressive progress in image analysis tasks such as image classification and object detection. Inspired by this incremental remodeling success, we propose a novel transformer-BLS network that achieves a trade-off between model training speed and accuracy. Specifically, we develop sub-BLS layers with a multi-head attention mechanism and combine these layers to construct the transformer-BLS network. In particular, the proposed transformer-BLS network provides four incremental learning algorithms that allow the model to realize increments of its feature nodes, enhancement nodes, input data and sub-BLS layers, respectively, without a full-weight update of the model. Furthermore, we validate the performance of the transformer-BLS network and its four incremental learning algorithms on a variety of image classification datasets. The results demonstrate that the proposed transformer-BLS maintains classification performance on both the MNIST and Fashion-MNIST datasets while saving two thirds of the training time. These findings imply that the proposed method can significantly reduce model training complexity through incremental remodeling while improving the incremental learning performance of the original BLS, especially on the classification of some datasets.
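The node-increment idea in this abstract can be illustrated with a small, self-contained sketch (not the authors' code): new enhancement-node columns are appended to the state matrix, and the pseudoinverse and output weights are updated with a Greville-style block formula instead of being recomputed from scratch. All names, dimensions, and random mappings below are illustrative assumptions.

```python
import numpy as np

def add_enhancement_nodes(A, A_pinv, W, H_new, Y):
    """Append new enhancement-node columns H_new to the state matrix A and
    update its pseudoinverse and the output weights incrementally
    (Greville-style block update), avoiding full recomputation."""
    D = A_pinv @ H_new                  # projection of new columns onto old ones
    C = H_new - A @ D                   # part of H_new not spanned by A
    if np.linalg.norm(C) > 1e-10:       # new columns add rank
        B = np.linalg.pinv(C)
    else:                               # new columns are redundant
        B = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T @ A_pinv)
    A_pinv_new = np.vstack([A_pinv - D @ B, B])
    W_new = np.vstack([W - D @ (B @ Y), B @ Y])
    return np.hstack([A, H_new]), A_pinv_new, W_new

# toy setup: random feature/enhancement nodes and one-hot targets
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Z = np.tanh(X @ rng.normal(size=(10, 20)))      # feature nodes
H = np.tanh(Z @ rng.normal(size=(20, 10)))      # enhancement nodes
A = np.hstack([Z, H])
Y = np.eye(5)[rng.integers(0, 5, size=100)]     # one-hot labels
A_pinv = np.linalg.pinv(A)
W = A_pinv @ Y                                  # initial output weights

H_new = np.tanh(Z @ rng.normal(size=(20, 5)))   # 5 additional enhancement nodes
A2, A2_pinv, W2 = add_enhancement_nodes(A, A_pinv, W, H_new, Y)
```

The update touches only the new columns, which is what makes the incremental variants cheaper than retraining; the same pattern extends to feature-node and input-data increments.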
As an effective and efficient discriminative learning method, the broad learning system (BLS) has received increasing attention due to its outstanding performance in various regression and classification problems. However, the standard BLS is derived under the minimum mean square error (MMSE) criterion, which is not always a good choice because of its sensitivity to outliers. To enhance the robustness of BLS, we propose in this work to adopt the maximum correntropy criterion (MCC) to train the output weights, obtaining a correntropy-based BLS (C-BLS). Owing to the inherent advantages of MCC, the proposed C-BLS is expected to achieve excellent robustness to outliers while maintaining the original performance of the standard BLS in Gaussian or noise-free environments. In addition, three alternative incremental learning algorithms for C-BLS are developed, derived from a weighted regularized least-squares solution rather than the pseudoinverse formula. With these incremental learning algorithms, the system can be updated quickly when new samples arrive or the network needs to be expanded, without retraining from scratch. Experiments on various regression and classification datasets demonstrate the desirable performance of the new methods.
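A rough illustration of MCC-based output-weight training (a sketch using the standard half-quadratic, iteratively reweighted least-squares treatment, not the paper's implementation): each sample is reweighted by a Gaussian kernel of its residual, so outliers with large residuals get weights near zero, and a weighted regularized least-squares problem is re-solved to a fixed point. The data, kernel width, and regularization constants below are illustrative.

```python
import numpy as np

def mcc_output_weights(A, Y, sigma=1.0, lam=1e-3, iters=50):
    """Fit output weights under the maximum correntropy criterion via a
    half-quadratic fixed-point loop: reweight each sample by a Gaussian
    kernel of its residual, then solve a weighted regularized LS problem."""
    m = A.shape[1]
    W = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ Y)   # MMSE start
    for _ in range(iters):
        e = np.linalg.norm(Y - A @ W, axis=1)                 # residual norms
        w = np.exp(-e**2 / (2.0 * sigma**2))                  # sample weights
        w /= w.max()                    # normalize so lam stays negligible
        AtW = A.T * w                   # A^T diag(w) via broadcasting
        W = np.linalg.solve(AtW @ A + lam * np.eye(m), AtW @ Y)
    return W

# regression with 10% gross outliers: MCC largely ignores them, MMSE cannot
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=(5, 1))
Y = X @ w_true + 0.01 * rng.normal(size=(200, 1))
Y[:20] += 50.0                                                # gross outliers
W_mmse = np.linalg.solve(X.T @ X + 1e-3 * np.eye(5), X.T @ Y)
W_mcc = mcc_output_weights(X, Y, sigma=1.0)
```

On such data the MMSE solution is pulled toward the outliers, while the reweighted solution stays close to the true coefficients, which is the behavior the abstract attributes to C-BLS.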
The recent emergence of massive amounts of data requires new algorithms capable of processing them in an acceptable time frame. Several proposals have been made, and all of them share the idea of breaking the entire set of examples into smaller subsets, processing each subset with a learning algorithm, and then combining the partial results. Most of these models use a parallel process, where a learning algorithm learns independently on each subset of the data. Our goal is to propose a new model for obtaining classifiers based on fuzzy rules that uses a sequential procedure able to process a large number of examples, and to show that, for some problems, a sequential procedure can be competitive in time and learning capacity with parallel-processing proposals based on the MapReduce paradigm. This sequential processing uses a batch-incremental learning technique that processes each subset of examples in turn. The incremental proposal makes use of a biologically inspired computation method: a cognitive computational model that uses genetic algorithms to learn fuzzy rules. The experiments carried out show that the incremental model is competitive with a parallel model proposed for addressing big-data classification using fuzzy rules.
ISBN: (Print) 9781424496365
This paper introduces several nonlinear multi-model ensemble techniques for multiple chaotic models in high-dimensional phase space by means of artificial neural networks. A chaotic model is built through time-delayed phase-space reconstruction of the time series of observables. Several predictive global and local models, including a Multi-layered Perceptron Neural Network (MLP-NN), are constructed, and a number of multi-model ensemble techniques are implemented to produce more accurate hybrid models. One of these techniques is a nonlinear multi-model ensemble using a dynamic neural network known as the Focused Time Delay Neural Network (FTDNN), with both batch and incremental learning algorithms. The proposed techniques were used and tested for predicting storm-surge dynamics in the North Sea. The results showed that the accuracy of multi-model ensemble predictions is generally improved compared with that of single models. An FTDNN with incremental learning is more desirable for real-time operation; however, in our experiments it was less accurate than its batch-trained counterpart.
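The time-delayed phase-space reconstruction mentioned in this abstract can be sketched as follows (illustrative code; the embedding dimension, delay, and toy series are arbitrary choices, not the paper's settings):

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Time-delayed phase-space reconstruction (Takens-style embedding):
    row t is [x[t], x[t+tau], ..., x[t+(dim-1)*tau]], turning a scalar
    series into points in a dim-dimensional reconstructed phase space."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau : k * tau + n] for k in range(dim)])

# one-step-ahead supervised pairs, as a predictor such as an FTDNN would use
x = np.sin(0.3 * np.arange(200))        # toy observable time series
E = delay_embed(x, dim=4, tau=3)        # inputs: embedded state vectors
targets = x[(4 - 1) * 3 + 1:]           # next value after each state
E = E[:-1]                              # align inputs with targets
```

Each embedded row together with its target forms one training pair, so both batch and incremental learning reduce to how these pairs are fed to the network.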
In many language processing tasks, most sentences convey rather simple meanings. Moreover, these tasks have a limited semantic domain that can be properly covered with a simple lexicon and a restricted syntax. Nevertheless, casual users are by no means expected to comply with any kind of formal syntactic restrictions, due to the inherently "spontaneous" nature of human language. In this work, the use of error-correcting-based learning techniques is proposed to cope with the complex syntactic variability that natural language generally exhibits. In our approach, a complex task is modeled in terms of a basic finite-state model, F, and a stochastic error model, E. F should account for the basic (syntactic) structures underlying the task, which convey the meaning. E should account for general vocabulary variations, word disappearance, superfluous words, and so on. Each "natural" user sentence is thus considered a corrupted version (according to E) of some "simple" sentence of L(F). Adequate bootstrapping procedures are presented that incrementally improve the "structure" of F while estimating the probabilities for the operations of E. These techniques have been applied to a practical task of moderately high syntactic variability, and results that show the potential of the proposed approach are presented.
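The error-correcting idea can be illustrated minimally as a weighted word-level edit distance (hypothetical fixed costs; in the paper's setting they would be -log probabilities of the estimated error model E, and decoding would search over sentences of L(F)):

```python
import numpy as np

def correction_cost(ref, hyp, c_sub=1.0, c_ins=0.8, c_del=0.8):
    """Cost of explaining the observed word sequence `hyp` as a corrupted
    version (per an error model E) of the model sentence `ref`, computed
    as a weighted edit distance.  The constants are illustrative
    placeholders, not estimated error-model probabilities."""
    n, m = len(ref), len(hyp)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * c_del     # delete all remaining ref words
    D[0, :] = np.arange(m + 1) * c_ins     # insert all hyp words
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = D[i - 1, j - 1] + (0.0 if ref[i - 1] == hyp[j - 1] else c_sub)
            D[i, j] = min(sub, D[i - 1, j] + c_del, D[i, j - 1] + c_ins)
    return D[n, m]

# decoding would pick the sentence of L(F) with the lowest correction cost
cost = correction_cost("turn the light on".split(),
                       "please turn light on".split())
```

Here the cheapest explanation of the user sentence is one superfluous word ("please") plus one disappeared word ("the"), exactly the kinds of operations the abstract assigns to E.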
Instance‐based representations have been applied to numerous classification tasks with some success. Most of these applications involved predicting a symbolic class based on observed attributes. This paper presents a...