Doubly Perturbed Task Free Continual Learning

Authors: Lee, Byung Hyun; Oh, Min-Hwan; Chun, Se Young

Affiliations: Department of Electrical and Computer Engineering, Seoul National University, Republic of Korea; Graduate School of Data Science, Seoul National University, Republic of Korea; INMC & IPAI, Seoul National University, Republic of Korea

Published in: arXiv

Year: 2023

Keywords: Decision making

Abstract: Task-Free online Continual Learning (TF-CL) is a challenging problem in which a model incrementally learns tasks without explicit task information. Although training on the entire data from the past, present, and future is considered the gold standard, naive TF-CL approaches that train only on current samples may conflict with learning on future samples, leading to catastrophic forgetting and poor plasticity. Thus, proactively accounting for unseen future samples in TF-CL becomes imperative. Motivated by this intuition, we propose a novel TF-CL framework that considers future samples and show that injecting adversarial perturbations on both the input data and the decision-making process is effective. We then propose a novel method named Doubly Perturbed Continual Learning (DPCL) to efficiently implement these input and decision-making perturbations. Specifically, for input perturbation, we propose an approximate perturbation method that injects noise into both the input data and the feature vector and then interpolates the two perturbed samples. For decision-making perturbation, we devise multiple stochastic classifiers. We also investigate a memory management scheme and learning rate scheduling that reflect our proposed double perturbations. We demonstrate that our proposed method outperforms state-of-the-art baseline methods by large margins on various TF-CL benchmarks. © 2023, CC BY-NC-SA.
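The abstract describes two concrete mechanisms: input perturbation (noise injected into both the raw input and its feature vector, with the two perturbed versions interpolated) and decision-making perturbation (multiple stochastic classifiers). The PyTorch sketch below illustrates one plausible reading of that description only; the class name DoublyPerturbedNet, the Gaussian noise model, the noise_std and interp values, and the use of an averaged ensemble of independently initialized linear heads to stand in for the paper's stochastic classifiers are all assumptions of this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DoublyPerturbedNet(nn.Module):
    """Hypothetical sketch of DPCL-style double perturbation (not the authors' code)."""

    def __init__(self, encoder, feat_dim, num_classes,
                 num_heads=3, noise_std=0.1, interp=0.5):
        super().__init__()
        self.encoder = encoder        # any feature extractor mapping x -> (B, feat_dim)
        self.noise_std = noise_std    # assumed Gaussian noise scale
        self.interp = interp          # assumed interpolation weight
        # Decision-making perturbation, approximated here as an
        # ensemble of independently initialized linear classifier heads.
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_heads)]
        )

    def forward(self, x):
        if self.training:
            # Input-side perturbation: noise on the raw input ...
            feat_a = self.encoder(x + self.noise_std * torch.randn_like(x))
            # ... and noise on the feature vector of the clean input.
            feat_b = self.encoder(x)
            feat_b = feat_b + self.noise_std * torch.randn_like(feat_b)
            # Interpolate the two perturbed representations.
            feat = self.interp * feat_a + (1.0 - self.interp) * feat_b
        else:
            feat = self.encoder(x)
        # Average the logits of the perturbed decision makers.
        return torch.stack([h(feat) for h in self.heads]).mean(dim=0)


if __name__ == "__main__":
    # Toy usage with a small MLP encoder on CIFAR-sized inputs.
    enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
    model = DoublyPerturbedNet(enc, feat_dim=128, num_classes=10)
    model.train()
    logits = model(torch.randn(4, 3, 32, 32))
    print(logits.shape)  # torch.Size([4, 10])
```

At evaluation time the sketch bypasses both perturbations and only ensembles the heads, reflecting the abstract's framing of perturbation as a training-time device; how DPCL actually behaves at inference, and how its memory management and learning rate scheduling interact with these perturbations, is specified in the paper rather than here.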
