To solve the problem of mismatched features across experimental databases, which is a key issue in cross-corpus speech emotion recognition, an auditory attention model based on Chirplet is proposed for feature extraction. First, to extract the spectral features, the auditory attention model is employed to detect variational emotion features. Then, a selective attention mechanism model is proposed to extract the salient gist features, which show their relation to the expected performance in cross-corpus recognition. Finally, Chirplet time-frequency atoms are introduced into the model. By forming a complete atom dictionary, the Chirplet improves spectral feature extraction, including the amount of information it captures. Samples drawn from multiple databases have multi-component characteristics, so the Chirplet expands the scale of the feature vector in the time-frequency domain. Experimental results show that, compared with the traditional feature model, the proposed feature extraction approach with a prototypical classifier yields a significant improvement in cross-corpus speech emotion recognition. In addition, the proposed method is more robust when the training set and the testing set come from inconsistent sources.
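The abstract does not give the exact atom parameterization, but the standard Gaussian Chirplet (a linear chirp under a Gaussian envelope) conveys the idea of expanding the time-frequency scale of the feature vector. The sketch below builds a tiny Chirplet dictionary and projects one speech frame onto it; the function names, parameter grids, and frame handling are illustrative assumptions, not the paper's implementation.

# Minimal sketch (not the authors' exact model): a small dictionary of
# Gaussian-windowed Chirplet atoms and projections of a speech frame onto it.
# chirplet_atom, chirplet_features, the parameter grids, and the frame length
# are all illustrative assumptions.
import numpy as np

def chirplet_atom(n, fs, t0, f0, chirp_rate, sigma):
    """Unit-energy Gaussian Chirplet atom: a linear chirp under a Gaussian envelope."""
    t = np.arange(n) / fs - t0
    envelope = np.exp(-0.5 * (t / sigma) ** 2)
    phase = 2 * np.pi * (f0 * t + 0.5 * chirp_rate * t ** 2)
    atom = envelope * np.exp(1j * phase)
    return atom / np.linalg.norm(atom)

def chirplet_features(frame, fs, f0_grid, rate_grid, sigma=0.01):
    """Magnitudes of projections onto each atom; a crude stand-in for salient gist features."""
    n = len(frame)
    t0 = 0.5 * n / fs                       # center every atom in the frame
    feats = [abs(np.vdot(chirplet_atom(n, fs, t0, f0, c, sigma), frame))
             for f0 in f0_grid for c in rate_grid]
    return np.asarray(feats)

# Usage on a synthetic chirp frame (25 ms at 16 kHz)
fs = 16000
t = np.arange(400) / fs
frame = np.cos(2 * np.pi * (500 * t + 0.5 * 2e4 * t ** 2))
feats = chirplet_features(frame, fs, f0_grid=[250, 500, 1000, 2000], rate_grid=[-2e4, 0, 2e4])
print(feats.round(2))

Varying both the center frequency and the chirp rate is what distinguishes this dictionary from a plain Gabor (fixed-frequency) one and gives the multi-component samples a better-matched set of atoms.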
To improve speech intelligibility in low signal-to-noise-ratio (SNR) environments, a subspace speech enhancement algorithm based on joint distortion control is proposed. The speech distortion and residual noise components of the error signal cannot be minimized simultaneously, and the speech amplification distortion introduced by the speech estimator severely degrades intelligibility once it exceeds 6.02 dB. The algorithm therefore minimizes speech distortion and residual noise separately, in each case under the constraint that the speech amplification distortion stays below 6.02 dB. Solving the two constrained optimization problems yields two different estimators, and their weighted sum gives a speech estimator based on joint distortion control. Experimental results show that, compared with traditional subspace enhancement methods, the proposed algorithm improves the intelligibility of the enhanced speech more effectively in low-SNR environments.
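The two estimators and their derivation are not reproduced here, but the overall structure can be sketched: work in the eigen-domain of the noisy frame's covariance, form two gain curves (one biased toward low speech distortion, one toward low residual noise), cap the gain at the 6.02 dB amplification bound, and combine the two with a weight. The sketch below follows that outline; the Wiener-style gains, the known noise variance, and the weight alpha are placeholders, not the estimators obtained from the paper's constrained optimization.

# Minimal sketch of the structure only (eigen-domain gains, a 6.02 dB
# amplification cap, and a weighted combination of two estimators); the gain
# functions below are generic Wiener-style placeholders, not the paper's
# derived estimators. noise_var, alpha, and the frame handling are assumptions.
import numpy as np

def subspace_enhance(noisy_frame, noise_var, alpha=0.5, max_gain_db=6.02):
    """Enhance one frame in the eigen-domain of its empirical covariance."""
    n = len(noisy_frame)
    # Empirical covariance of the noisy frame (a Toeplitz estimate would be more usual)
    R_y = np.outer(noisy_frame, noisy_frame) / n
    eigvals, V = np.linalg.eigh(R_y)
    # Clean-speech eigenvalue estimate, floored at zero (signal-subspace step)
    lam_x = np.maximum(eigvals - noise_var, 0.0)

    # Two placeholder gain curves: g1 favors low speech distortion,
    # g2 favors low residual noise (more aggressive suppression).
    g1 = lam_x / (lam_x + noise_var + 1e-12)
    g2 = lam_x / (lam_x + 4.0 * noise_var + 1e-12)

    # Keep speech amplification distortion below 6.02 dB (amplitude gain <= 2);
    # these Wiener-style gains never exceed 1, so the cap matters only for
    # estimators that can amplify, as in the paper.
    g_max = 10.0 ** (max_gain_db / 20.0)
    g = np.clip(alpha * g1 + (1.0 - alpha) * g2, 0.0, g_max)

    # Apply the combined gain in the eigen-domain and transform back
    H = V @ np.diag(g) @ V.T
    return H @ noisy_frame

# Usage on a toy noisy frame
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 0.05 * np.arange(256))
noisy = clean + 0.5 * rng.standard_normal(256)
enhanced = subspace_enhance(noisy, noise_var=0.25)
print(float(np.mean((noisy - clean) ** 2)), float(np.mean((enhanced - clean) ** 2)))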