Instance Regularization for Discriminative Language Model Pre-training

Authors: Zhang, Zhuosheng; Zhao, Hai; Zhou, Ming

Affiliations: Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, China; Langboat Technology, China

Published in: arXiv

Year: 2022

Subject: Computational linguistics

Abstract: Discriminative pre-trained language models (PrLMs) can be generalized as denoising auto-encoders that work with two procedures, ennoising and denoising. First, an ennoising process corrupts texts with arbitrary noising functions to construct training instances. Then, a denoising language model is trained to restore the corrupted tokens. Existing studies have made progress by optimizing independent strategies of either ennoising or denoising. They treat training instances equally throughout the training process, paying little attention to the individual contribution of those instances. To model explicit signals of instance contribution, this work proposes to estimate the complexity of restoring the original sentences from corrupted ones in language model pre-training. The estimations involve the corruption degree in the ennoising data construction process and the prediction confidence in the denoising counterpart. Experimental results on natural language understanding and reading comprehension benchmarks show that our approach improves pre-training efficiency, effectiveness, and robustness. Copyright © 2022, The Authors. All rights reserved.
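
As a rough illustration of the abstract's idea (not the authors' released code), the sketch below weights masked-language-model training instances by a difficulty score combining the two signals named in the abstract: the corruption ratio fixed at ennoising time and the model's prediction confidence at denoising time. All names (instance_difficulty, weighted_mlm_loss, alpha), the linear mixture, and the batch-level normalization are assumptions chosen for brevity, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def instance_difficulty(corruption_ratio, logits, targets, corrupt_mask, alpha=0.5):
    """Estimate per-instance restoration difficulty (illustrative only).

    corruption_ratio: [B]     fraction of tokens corrupted per instance (ennoising signal)
    logits:           [B,T,V] denoising model outputs over the vocabulary
    targets:          [B,T]   original token ids to be restored
    corrupt_mask:     [B,T]   1.0 at corrupted positions, 0.0 elsewhere
    alpha:            hypothetical mixing weight between the two signals
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # probability the model assigns to each gold token
    gold_prob = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1).exp()
    # mean confidence over corrupted positions only (denoising signal)
    denom = corrupt_mask.sum(-1).clamp(min=1.0)
    confidence = (gold_prob * corrupt_mask).sum(-1) / denom
    # more corruption and lower confidence -> harder instance
    return alpha * corruption_ratio + (1.0 - alpha) * (1.0 - confidence)

def weighted_mlm_loss(logits, targets, corrupt_mask, corruption_ratio):
    """Reweight per-instance denoising losses by estimated difficulty."""
    B, T, V = logits.shape
    token_loss = F.cross_entropy(
        logits.reshape(B * T, V), targets.reshape(B * T), reduction="none"
    ).reshape(B, T)
    per_instance = (token_loss * corrupt_mask).sum(-1) / corrupt_mask.sum(-1).clamp(min=1.0)
    # detach so gradients do not flow through the weighting itself
    weights = instance_difficulty(corruption_ratio, logits.detach(), targets, corrupt_mask)
    weights = weights / weights.sum().clamp(min=1e-8)  # normalize within the batch
    return (weights * per_instance).sum()
```

In the paper itself these difficulty estimates drive instance regularization during pre-training; how the two signals are combined and applied there may differ from this stand-in.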
