Individual differences across subjects and the nonstationary characteristics of electroencephalography (EEG) limit the generalization of affective brain-computer interfaces in real-world applications. On the other hand, it is very time-consuming and expensive to acquire a large number of subject-specific labeled data for learning subject-specific models. In this paper, we propose to build personalized EEG-based affective models without labeled target data using transfer learning techniques. We mainly explore two types of subject-to-subject transfer approaches. One is to exploit the shared structure underlying the source domain (source subject) and the target domain (target subject). The other is to train multiple individual classifiers on source subjects and transfer knowledge about classifier parameters to target subjects; its aim is to learn a regression function that maps the relationship between feature distributions and classifier parameters. We compare the performance of five different approaches on an EEG dataset for constructing an affective model with three affective states: positive, neutral, and negative. The experimental results demonstrate that our proposed subject transfer framework achieves a mean accuracy of 76.31%, compared with 56.73% on average for a conventional generic classifier.
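To make the second transfer approach concrete, here is a minimal sketch of parameter transfer by regression: each source subject's labeled data yields a classifier, a regressor is fit from unlabeled feature-distribution statistics to those classifier parameters, and a target subject's classifier is then predicted from its statistics alone. The distribution summary (per-feature mean and standard deviation), the choice of logistic and ridge regression, and the synthetic data are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: regress classifier parameters on each subject's
# (unlabeled) feature-distribution statistics, then predict a target
# subject's classifier without any target labels.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
n_subjects, n_trials, n_feats, n_classes = 10, 200, 32, 3

def distribution_summary(X):
    """Unlabeled description of a subject's feature distribution."""
    return np.concatenate([X.mean(axis=0), X.std(axis=0)])

# --- Source subjects: labeled data -> per-subject classifiers -----------
summaries, params = [], []
for s in range(n_subjects):
    shift = rng.normal(0, 1, n_feats)              # subject-specific drift
    X = rng.normal(shift, 1.0, (n_trials, n_feats))  # synthetic placeholder
    y = rng.integers(0, n_classes, n_trials)         # placeholder labels
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    summaries.append(distribution_summary(X))
    params.append(np.concatenate([clf.coef_.ravel(), clf.intercept_]))

# --- Regression from distribution statistics to classifier parameters ---
mapper = Ridge(alpha=1.0).fit(np.array(summaries), np.array(params))

# --- Target subject: unlabeled data only ---------------------------------
X_target = rng.normal(rng.normal(0, 1, n_feats), 1.0, (n_trials, n_feats))
pred = mapper.predict(distribution_summary(X_target)[None, :])[0]

target_clf = LogisticRegression()
target_clf.classes_ = np.arange(n_classes)
target_clf.coef_ = pred[:n_classes * n_feats].reshape(n_classes, n_feats)
target_clf.intercept_ = pred[n_classes * n_feats:]
print(target_clf.predict(X_target[:5]))   # predictions with zero target labels
```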
Recent advances in cancer research largely rely on new developments in microscopic or molecular profiling techniques offering a high level of detail with respect to either spatial or molecular features, but usually not ...
This paper investigates a new voice conversion technique using phone-aware Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs). Most existing voice conversion methods, including Joint Density Gaussian Mixture Models (JDGMMs), Deep Neural Networks (DNNs), and Bidirectional Long Short-Term Memory Recurrent Neural Networks (BLSTM-RNNs), take only the acoustic information of speech as features to train models. We propose to incorporate linguistic information into the voice conversion system by using monophones generated by a speech recognizer as linguistic features. The monophones and spectral features are combined to train LSTM-RNN-based voice conversion models, reinforcing the context-dependency modelling. Results of the 1st Voice Conversion Challenge show that our system achieves significantly higher performance than the baseline (GMM method) and was among the most competitive in the similarity test. Meanwhile, the experimental results show that the phone-aware LSTM-RNN method obtains lower Mel-cepstral distortion and higher MOS scores than the baseline LSTM-RNNs.
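As a rough illustration of the phone-aware idea, the following sketch concatenates spectral frames with one-hot monophone labels and trains an LSTM to regress target spectral frames. The layer sizes, the one-hot phone encoding, and the toy random batch are assumptions for illustration; a real system would use time-aligned parallel utterances and actual recognizer output.

```python
# Hedged sketch of a phone-aware LSTM conversion model: source spectral
# frames are augmented with one-hot monophone labels and mapped to target
# spectral frames with a frame-level regression loss.
import torch
import torch.nn as nn

n_mcep, n_phones, hidden = 40, 40, 128   # assumed dimensionalities

class PhoneAwareLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_mcep + n_phones, hidden, num_layers=2,
                            batch_first=True)
        self.out = nn.Linear(hidden, n_mcep)   # predict target spectra

    def forward(self, spec, phone_ids):
        phones = nn.functional.one_hot(phone_ids, n_phones).float()
        h, _ = self.lstm(torch.cat([spec, phones], dim=-1))
        return self.out(h)

model = PhoneAwareLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy parallel batch: (batch, frames, dims); stands in for aligned
# source/target utterances plus recognizer monophone labels.
src = torch.randn(8, 100, n_mcep)
tgt = torch.randn(8, 100, n_mcep)
phone_ids = torch.randint(0, n_phones, (8, 100))

pred = model(src, phone_ids)
loss = nn.functional.mse_loss(pred, tgt)   # spectral regression loss
loss.backward()
opt.step()
print(loss.item())
```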
Artificial neural networks (ANNs) have been used in many applications, such as handwriting recognition and speech recognition. It is well known that the learning rate is a crucial value in the training procedure for artificial neural networks. The initial value of the learning rate can profoundly affect the final result, and this value is almost always set manually in practice. A parameter called the beta stabilizer was previously introduced to reduce sensitivity to the initial learning rate, but only for deep neural networks (DNNs) with the sigmoid activation function. In this paper we extend the beta stabilizer to long short-term memory (LSTM) networks and investigate the effects of beta stabilizer parameters on different models, including LSTMs and DNNs with the ReLU activation function. We conclude that beta stabilizer parameters can reduce the sensitivity of the learning rate with almost the same performance on DNNs with ReLU activation and on LSTMs. However, the effects of the beta stabilizer on DNNs with ReLU activation and on LSTMs are smaller than its effects on DNNs with the sigmoid activation function.
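As a hedged sketch of how a beta stabilizer can work, the layer below attaches a learnable scalar that multiplicatively rescales each layer's pre-activation, giving training a cheap way to adjust the effective step size rather than depending entirely on the hand-set initial learning rate. The exp(beta) parameterization and the tiny network are assumptions for illustration; the original formulation may differ in detail.

```python
# Hedged sketch of a beta-stabilizer layer: a learnable per-layer scalar
# rescales the pre-activation, reducing sensitivity to the initial LR.
import torch
import torch.nn as nn

class StabilizedLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.beta = nn.Parameter(torch.zeros(()))  # exp(0) = 1 at init

    def forward(self, x):
        # Multiplicative scalar on the pre-activation; its gradient lets
        # training rescale the layer instead of fighting a bad base LR.
        return torch.exp(self.beta) * self.linear(x)

net = nn.Sequential(StabilizedLinear(32, 64), nn.ReLU(),
                    StabilizedLinear(64, 3))
x, y = torch.randn(16, 32), torch.randint(0, 3, (16,))
loss = nn.functional.cross_entropy(net(x), y)
loss.backward()
print([p.grad.item() for n, p in net.named_parameters() if 'beta' in n])
```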
In the version of this article originally published, the bottom of Figure 4f,g was partially truncated in the PDF. The error has been corrected in the PDF version of this article.