Author Affiliations: The School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, China; The School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, Australia; The College of Computing & Data Science, Nanyang Technological University, #32 Block N4 #02a-014, 50 Nanyang Avenue, Singapore 639798, Singapore
Publication: arXiv
Year/Volume/Issue: 2022
Subject: Distillation
Abstract: Prompt Transfer (PoT) is a recently proposed approach to improve prompt-tuning by initializing the target prompt with an existing prompt trained on similar source tasks. However, such a vanilla PoT approach usually achieves sub-optimal performance, as (i) PoT is sensitive to the similarity of the source-target pair, and (ii) directly fine-tuning the prompt initialized with the source prompt on the target task might lead to forgetting of the useful general knowledge learned from the source task. To tackle these issues, we propose a new metric to accurately predict the prompt transferability (regarding (i)), and a novel PoT approach (namely PANDA) that leverages the knowledge distillation technique to effectively alleviate knowledge forgetting (regarding (ii)). Extensive and systematic experiments on 189 combinations of 21 source and 9 target datasets across 5 scales of PLMs demonstrate that: 1) our proposed metric works well to predict the prompt transferability; 2) our PANDA consistently outperforms the vanilla PoT approach by a 2.3% average score (up to 24.1%) across all tasks and model sizes; 3) with our PANDA approach, prompt-tuning can achieve competitive and even better performance than model-tuning at various PLM scales. We have publicly released our code at https://***/WHU-ZQH/PANDA. Copyright © 2022, The Authors. All rights reserved.
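Note: The following is an illustrative sketch, not the authors' released implementation. It shows the general idea described in the abstract, namely initializing the target prompt from a source prompt (vanilla PoT) and adding a knowledge-distillation term during target-task tuning to reduce forgetting. The `PromptModel` wrapper, the batch layout, and the loss weighting are assumptions for illustration only.

    # Illustrative PyTorch sketch of prompt transfer with knowledge distillation.
    # Assumes a hypothetical `PromptModel` that prepends trainable soft-prompt tokens
    # to a frozen PLM and returns classification logits; only the prompt is trainable.
    import copy
    import torch
    import torch.nn.functional as F

    def distill_step(student, teacher, batch, optimizer, alpha=0.5, temperature=2.0):
        """One training step: target-task loss plus a KD loss that keeps the target
        prompt close to the behaviour of the source-initialized teacher."""
        student_logits = student(batch["input_ids"], batch["attention_mask"])
        task_loss = F.cross_entropy(student_logits, batch["labels"])

        with torch.no_grad():  # teacher (frozen copy with the source prompt) gives soft targets
            teacher_logits = teacher(batch["input_ids"], batch["attention_mask"])

        kd_loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2

        loss = (1 - alpha) * task_loss + alpha * kd_loss
        optimizer.zero_grad()
        loss.backward()   # gradients flow only into the soft-prompt parameters
        optimizer.step()
        return loss.item()

    # Vanilla PoT: initialize the target prompt from the trained source prompt.
    # student = PromptModel(plm, init_prompt=source_prompt)   # trainable prompt
    # teacher = copy.deepcopy(student).eval()                 # frozen reference for distillation

Under these assumptions, setting alpha to 0 recovers vanilla PoT (plain fine-tuning of the source-initialized prompt), while a non-zero alpha regularizes training toward the source-prompt behaviour, which is the forgetting-mitigation effect the abstract attributes to PANDA.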