Enhancing Visual Continual Learning with Language-Guided Supervision

Authors: Ni, Bolin; Zhao, Hongbo; Zhang, Chenghao; Hu, Ke; Meng, Gaofeng; Zhang, Zhaoxiang; Xiang, Shiming

Author affiliations: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; Centre for Artificial Intelligence and Robotics, HK Institute of Science & Innovation, Chinese Academy of Sciences, China

Published in: arXiv

Year: 2024


Subject: Image enhancement

Abstract: Continual learning (CL) aims to empower models to learn new tasks without forgetting previously acquired knowledge. Most prior work concentrates on techniques such as architecture design, replay data, and regularization, while the category name of each class is largely neglected. Existing methods commonly use one-hot labels and randomly initialize the classifier head. We argue that the scarce semantic information conveyed by one-hot labels hampers effective knowledge transfer across tasks. In this paper, we revisit the role of the classifier head within the CL paradigm and replace the classifier with semantic knowledge from pretrained language models (PLMs). Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals during training. Such targets fully consider the semantic correlations among all classes across tasks. Empirical studies show that our approach mitigates forgetting by alleviating representation drift and facilitating knowledge transfer across tasks. The proposed method is simple to implement and can be seamlessly plugged into existing methods with negligible adjustments. Extensive experiments on eleven mainstream baselines demonstrate the effectiveness and generalizability of our approach across various protocols. For example, under the class-incremental learning setting on ImageNet-100, our method significantly improves Top-1 accuracy by 3.2% to 6.1% while reducing the forgetting rate by 2.6% to 13.1%. © 2024, CC BY.
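To make the idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of replacing a randomly initialized classifier head with frozen class-name embeddings from a pretrained language model. The choice of bert-base-uncased, mean pooling over tokens, and cosine-similarity logits trained with cross-entropy are illustrative assumptions, not necessarily the paper's exact configuration.

```python
# Sketch: language-guided supervision for continual learning.
# Assumptions (not confirmed by the paper): bert-base-uncased as the PLM,
# mean-pooled token embeddings, cosine-similarity logits with a temperature.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

@torch.no_grad()
def build_semantic_targets(class_names, plm_name="bert-base-uncased"):
    """Encode each class name with a PLM; the resulting matrix stays frozen."""
    tok = AutoTokenizer.from_pretrained(plm_name)
    plm = AutoModel.from_pretrained(plm_name).eval()
    batch = tok(class_names, padding=True, return_tensors="pt")
    hidden = plm(**batch).last_hidden_state              # (C, L, D)
    mask = batch["attention_mask"].unsqueeze(-1)          # (C, L, 1)
    emb = (hidden * mask).sum(1) / mask.sum(1)            # mean-pool over tokens
    return F.normalize(emb, dim=-1)                       # (C, D), unit norm

class LanguageGuidedHead(nn.Module):
    """Projects visual features into the PLM embedding space; the class
    targets are registered as a buffer, so SGD never updates them."""
    def __init__(self, feat_dim, targets, temperature=0.07):
        super().__init__()
        self.proj = nn.Linear(feat_dim, targets.shape[1])
        self.register_buffer("targets", targets)          # frozen supervision
        self.temperature = temperature

    def forward(self, feats):
        z = F.normalize(self.proj(feats), dim=-1)
        return z @ self.targets.t() / self.temperature    # cosine logits

# Usage sketch: when a new task arrives, append its class embeddings to
# `targets` instead of re-initializing classifier rows, e.g.
#   head = LanguageGuidedHead(feat_dim=512,
#                             targets=build_semantic_targets(["cat", "dog"]))
#   loss = F.cross_entropy(head(backbone(images)), labels)
```

Because the targets encode semantic similarity between class names, classes from different tasks that are semantically related land near each other in the target space, which is one plausible reading of how the method eases knowledge transfer and reduces representation drift.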
