Numerous compression and acceleration strategies have achieved outstanding results on classification tasks in various fields. Nevertheless, the same strategies may yield unsatisfactory performance on regression tasks because regression and classification tasks differ in nature. In this paper, a novel sign-exponent-only floating-point network (SEOFP-NET) technique is proposed to compress the model size and accelerate the inference time for speech enhancement, a regression task of speech signal processing. The proposed method compresses deep neural network (DNN)-based speech enhancement models by quantizing the fraction bits of single-precision floating-point parameters during training. Before inference, all parameters in the trained SEOFP-NET model are adjusted so that each floating-point multiplier can be replaced with an integer adder, thereby accelerating inference. The experimental results indicate that SEOFP-NET models can be compressed by up to 81.249% in size without noticeably degrading speech enhancement performance, and inference can be accelerated to 1.212x the speed of the baseline models. The results also verify that SEOFP-NET can cooperate with other efficiency strategies to achieve a synergistic effect on model compression. In addition, the results of a just-noticeable-difference experiment show that listeners cannot easily differentiate between the enhanced speech signals processed by the baseline model and by SEOFP-NET. To the best of our knowledge, this study is among the first to compress the model size and reduce the inference time of speech enhancement models while maintaining satisfactory performance. These promising results confirm the potential applicability of SEOFP-NET to lightweight embedded devices.
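The core mechanism described above can be sketched at the bit level: if every weight keeps only its sign and exponent bits (fraction bits zeroed), each weight is a signed power of two, and an IEEE 754 single-precision multiply by such a weight reduces to one integer addition on the raw bit patterns plus a sign XOR. The sketch below is illustrative and not the authors' implementation; the function names `seofp_quantize` and `seofp_multiply` are assumptions, and rounding, denormals, and exponent overflow/underflow are ignored for simplicity.

```python
import struct

BIAS = 0x3F800000  # bit pattern of 1.0f (exponent bias 127, shifted into place)

def f2i(x):
    """Reinterpret a Python float as its float32 bit pattern."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def i2f(i):
    """Reinterpret a 32-bit pattern as a float32 value."""
    return struct.unpack('<f', struct.pack('<I', i & 0xFFFFFFFF))[0]

def seofp_quantize(x):
    """Keep only the sign bit and the 8 exponent bits of a float32,
    zeroing the 23 fraction bits; the result is a signed power of two."""
    return i2f(f2i(x) & 0xFF800000)

def seofp_multiply(w, x):
    """Multiply a SEOFP weight w (zero fraction bits) by an arbitrary
    float32 activation x using an integer addition on bit patterns
    instead of a floating-point multiplication."""
    iw, ix = f2i(w), f2i(x)
    sign = (iw ^ ix) & 0x80000000            # sign of the product
    mag = (iw & 0x7FFFFFFF) + (ix & 0x7FFFFFFF) - BIAS  # exponents add
    return i2f(sign | mag)

w = seofp_quantize(0.37)        # truncates to the power of two 0.25
y = seofp_multiply(w, 3.0)      # 0.25 * 3.0 computed without a float multiply
```

Because the weight's fraction bits are all zero, adding the two bit patterns never corrupts the activation's mantissa; the addition simply sums the exponents (corrected once for the bias), which is why the floating-point multiplier in the trained model can be swapped for an integer adder.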