Knowledge Transfer and Distillation from Autoregressive to Non-Autoregressive Speech Recognition

Authors: Gong, Xun; Zhou, Zhikai; Qian, Yanmin

Affiliation: MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China

Publication: arXiv

Year: 2022

Subject: Speech recognition

Abstract: Modern non-autoregressive (NAR) speech recognition systems aim to accelerate inference; however, they suffer from performance degradation compared with autoregressive (AR) models and also face large model sizes. We propose a novel knowledge transfer and distillation architecture that leverages knowledge from AR models to improve NAR performance while reducing the model size. Frame- and sequence-level objectives are designed for transfer learning. To further boost NAR performance, a beam search method on Mask-CTC is developed to enlarge the search space during inference. Experiments show that the proposed NAR beam search reduces CER by over 5% relative on the AISHELL-1 benchmark with a tolerable real-time-factor (RTF) increase. With knowledge transfer, the NAR student that has the same size as the AR teacher obtains relative CER reductions of 8%/16% on the AISHELL-1 dev/test sets, and over 25% relative WER reductions on the LibriSpeech test-clean/test-other sets. Moreover, the ∼9x smaller NAR models achieve ∼25% relative CER/WER reductions on both the AISHELL-1 and LibriSpeech benchmarks with the proposed knowledge transfer and distillation. Copyright © 2022, The Authors. All rights reserved.
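
The abstract does not spell out the training objectives, but the frame-level transfer it mentions is commonly realized as a teacher-student distillation loss on per-frame output distributions. The sketch below is a minimal illustration of that general idea in PyTorch, assuming a KL-divergence objective and the function, tensor names, and shapes shown; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def frame_level_kd_loss(student_logits: torch.Tensor,
                        teacher_logits: torch.Tensor,
                        temperature: float = 1.0) -> torch.Tensor:
    """Illustrative frame-level distillation loss (assumed, not from the paper).

    Both logit tensors have shape (batch, frames, vocab): the NAR student is
    trained to match the frozen AR teacher's per-frame output distribution.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student); "batchmean" averages the summed KL over the batch.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

# Example usage with random tensors standing in for model outputs (hypothetical shapes).
student = torch.randn(4, 100, 500)   # NAR student logits
teacher = torch.randn(4, 100, 500)   # AR teacher logits
loss = frame_level_kd_loss(student, teacher, temperature=2.0)
```

In practice such a frame-level term would be combined with the model's own training loss (e.g. CTC or cross-entropy) and, per the abstract, with a sequence-level transfer objective; the weighting between them is a tuning choice not specified here.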
