Dynamic Convolutional Neural Networks as Efficient Pre-Trained Audio Models

Authors: Schmid, Florian; Koutini, Khaled; Widmer, Gerhard

Affiliations: Johannes Kepler University Linz, Institute of Computational Perception (CP-JKU), A-4040 Linz, Austria; Johannes Kepler University Linz, LIT Artificial Intelligence Lab, A-4040 Linz, Austria

Publication: IEEE/ACM Transactions on Audio, Speech, and Language Processing (IEEE/ACM Trans. Audio Speech Lang. Process.)

Year/Volume: 2024, Vol. 32

Pages: 2227-2241


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0702 [Science - Physics]

Funding: European Research Council

Keywords: dynamic convolutional neural networks; dynamic convolution; dynamic ReLU; coordinate attention; audio spectrogram transformer; audio classification; pre-trained audio models; knowledge distillation

Abstract: The introduction of large-scale audio datasets, such as AudioSet, paved the way for Transformers to conquer the audio domain and replace CNNs as the state-of-the-art neural network architecture for many tasks. Audio Spectrogram Transformers are excellent at exploiting large datasets, creating powerful pre-trained models that surpass CNNs when fine-tuned on downstream tasks. However, current popular Audio Spectrogram Transformers are demanding in terms of computational complexity compared to CNNs. Recently, we have shown that, by employing Transformer-to-CNN Knowledge Distillation, efficient CNNs can catch up with and even outperform Transformers on large datasets. In this work, we extend this line of research and increase the capacity of efficient CNNs by introducing dynamic CNN blocks constructed of dynamic convolutions, a dynamic ReLU activation function, and Coordinate Attention. We show that these dynamic CNNs outperform traditional efficient CNNs, such as MobileNets, in terms of the performance-complexity trade-off at the task of audio tagging on the large-scale AudioSet. Our experiments further indicate that the proposed dynamic CNNs achieve competitive performance with Transformer-based models for end-to-end fine-tuning on downstream tasks while being much more computationally efficient.
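For readers unfamiliar with the "dynamic convolution" building block named in the abstract, the following is a minimal, self-contained PyTorch sketch of the general technique (an input-conditioned softmax mixture over K candidate kernels). It illustrates the idea only; the class name, hyperparameters, and implementation details are assumptions and do not reproduce the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConv2d(nn.Module):
    """Hypothetical sketch of dynamic convolution: K parallel kernels are
    aggregated per input sample using attention weights computed from a
    global pooling of that sample."""

    def __init__(self, in_ch, out_ch, kernel_size, K=4, reduction=4):
        super().__init__()
        self.K, self.in_ch, self.out_ch, self.ks = K, in_ch, out_ch, kernel_size
        # K candidate kernels and biases
        self.weight = nn.Parameter(
            0.02 * torch.randn(K, out_ch, in_ch, kernel_size, kernel_size)
        )
        self.bias = nn.Parameter(torch.zeros(K, out_ch))
        # lightweight attention: global average pool -> small MLP -> softmax over K
        hidden = max(in_ch // reduction, 4)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, K),
        )

    def forward(self, x):
        B, C, H, W = x.shape
        pi = F.softmax(self.attn(x), dim=1)                   # (B, K) per-sample kernel weights
        w = torch.einsum("bk,koihw->boihw", pi, self.weight)  # per-sample aggregated kernel
        b = torch.einsum("bk,ko->bo", pi, self.bias)
        # grouped-convolution trick: fold the batch into groups so each
        # sample is convolved with its own aggregated kernel in one call
        x = x.reshape(1, B * C, H, W)
        w = w.reshape(B * self.out_ch, self.in_ch, self.ks, self.ks)
        out = F.conv2d(x, w, bias=b.reshape(-1), padding=self.ks // 2, groups=B)
        return out.reshape(B, self.out_ch, H, W)


if __name__ == "__main__":
    # e.g. a batch of 8 single-channel mel-spectrogram patches
    layer = DynamicConv2d(in_ch=1, out_ch=16, kernel_size=3, K=4)
    y = layer(torch.randn(8, 1, 128, 100))
    print(y.shape)  # torch.Size([8, 16, 128, 100])
```

Compared with a static convolution of the same kernel size, the extra cost is only the tiny attention MLP and the kernel aggregation, while the representational capacity grows with K; this is the general performance-complexity trade-off the abstract refers to.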
