
Kullback-Leibler Divergence-Based Regularized Normalization for Low-Resource Tasks

Authors: Kumar, Neeraj; Narang, Ankur; Lall, Brejesh

Author Affiliations: Indian Institute of Technology Delhi, Bharti School of Telecommunication Technology & Management, New Delhi 110016, India; Indian Institute of Technology Delhi, Department of Electrical Engineering, New Delhi 110016, India

Publication: IEEE Transactions on Artificial Intelligence (IEEE Trans. Artif. Intell.)

Year/Volume/Issue: 2024, Vol. 5, Issue 6

Pages: 2638-2650


Keywords: Recurrent neural networks

Abstract: Large pretrained models, such as BERT, GPT, and Wav2Vec, have demonstrated their ability to learn transferable representations for various downstream tasks. However, obtaining a substantial amount of supervised data remains a challenge due to resource and time limitations. As a solution, researchers have turned their attention to adapting these large pretrained models in low-resource settings via techniques such as fine-tuning, linear probing, or prompt tuning. Normalization techniques play a crucial role in speeding up training, in applications such as style transfer, object detection, and recurrent neural networks, and in improving the generalization of deep neural networks. Despite their success in various domains, their effectiveness in low-resource NLP and speech tasks has been limited. A notable reason for this limitation is the difficulty of capturing expressiveness using the affine parameters of normalization. To address this issue, we propose a novel approach called Kullback-Leibler (KL) regularized normalization, or KL-Norm. The main objective of KL-Norm is to ensure that normalized data are well behaved and to improve generalization by reducing overfitting through a regularization loss included in the training process. It achieves this by promoting good performance on out-of-domain distributions and by effectively filtering relevant features while eliminating superficial features or biases present in the dataset or pretrained model. Remarkably, KL-Norm accomplishes these objectives with a minimal increase in model parameters and memory overhead. Through extensive experimental analysis, we showcase the improved accuracy and performance of KL-Norm in comparison to other normalization techniques on low-resource downstream NLP tasks. These tasks encompass a wide range of applications, including sentiment classification, semantic relationship characterization, semantic textual similarity, textual entailment, and paraphrase detection. Additionally, KL-Norm exhibits superior results in downstream speech tasks.
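
Note: the abstract describes KL-Norm only at a high level. The sketch below is an illustrative guess at how a KL-regularized normalization layer might look in PyTorch, assuming the normalized activations are treated as the mean of a Gaussian posterior whose KL divergence to a standard-normal prior is added to the task loss as a regularizer. The class and parameter names (KLNorm, log_var, kl_weight) are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class KLNorm(nn.Module):
    """Illustrative KL-regularized normalization layer (not the authors' exact formulation).

    Assumption: normalized activations serve as the mean of a Gaussian posterior
    with a learned per-feature variance; the KL divergence to a standard-normal
    prior is returned as an auxiliary regularization loss.
    """

    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))     # affine scale
        self.beta = nn.Parameter(torch.zeros(dim))     # affine shift
        self.log_var = nn.Parameter(torch.zeros(dim))  # learned posterior log-variance
        self.eps = eps

    def forward(self, x: torch.Tensor):
        # Standard layer normalization over the last dimension.
        mu = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        x_hat = (x - mu) / torch.sqrt(var + self.eps)

        # Posterior q(z|x) = N(x_hat, exp(log_var)); prior p(z) = N(0, I).
        # Closed-form KL(q || p) per feature, averaged over batch and features.
        kl = 0.5 * (self.log_var.exp() + x_hat.pow(2) - 1.0 - self.log_var)
        kl_loss = kl.mean()

        # Reparameterized sample during training, deterministic mean at eval time.
        if self.training:
            z = x_hat + torch.randn_like(x_hat) * torch.exp(0.5 * self.log_var)
        else:
            z = x_hat

        return self.gamma * z + self.beta, kl_loss


# Usage sketch: add the KL term to the task loss with a small weight.
# layer = KLNorm(hidden_dim)
# out, kl_loss = layer(hidden_states)
# loss = task_loss + kl_weight * kl_loss
```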
