Author affiliations: The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; The University of Chinese Academy of Sciences, Beijing 100049, China; The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China; The Key Laboratory of Manufacturing Industrial Integrated Automation, Shenyang University, China
Publication: arXiv
Year/Volume/Issue: 2024
Core indexing:
Subject: Embeddings
Abstract: Lifelong Person Re-Identification (LReID) aims to continuously learn from successive data streams, matching individuals across multiple cameras. The key challenge for LReID is how to effectively preserve old knowledge while incrementally learning new information, a difficulty caused by task-level domain gaps and limited old-task datasets. Existing methods based on CNN backbones are insufficient to explore the representation of each instance from different perspectives, limiting model performance on both the limited old-task datasets and the new-task datasets. Unlike these methods, we propose a Diverse Representations Embedding (DRE) framework that is the first to explore a pure transformer for LReID. The proposed DRE preserves old knowledge while adapting to new information at both the instance level and the task level. Concretely, an Adaptive Constraint Module (ACM) is proposed to apply integration and push-away operations to the multiple overlapping representations generated by the transformer-based backbone, yielding rich and discriminative representations for each instance and improving the adaptive ability of LReID. Based on the processed diverse representations, we propose Knowledge Update (KU) and Knowledge Preservation (KP) strategies at the task level by introducing an adjustment model and a learner model. The KU strategy enhances the adaptive learning ability of the learner model for new information under the prior of the adjustment model, and the KP strategy preserves old knowledge through representation-level alignment and logit-level supervision on the limited old-task datasets while maintaining the adaptive learning capacity of the LReID model. Extensive experiments were conducted on eleven Re-ID datasets, including five seen datasets for training under two training orders (order-1 and order-2) and six unseen datasets for inference. Compared to state-of-the-art methods, our method achieves significantly improved performance on holistic, large-scale, and occluded datasets. Our code will be available.
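The following is a minimal, hypothetical PyTorch-style sketch of the two knowledge-preservation signals the abstract describes (representation-level alignment and logit-level supervision from a frozen old model to the learner). The function name, loss choices (cosine alignment, temperature-scaled KL distillation), and hyperparameters are illustrative assumptions, not the authors' implementation.

    # Sketch only: combines representation-level alignment with logit-level
    # distillation, as one plausible reading of the KP strategy.
    import torch
    import torch.nn.functional as F

    def knowledge_preservation_loss(new_feat, old_feat, new_logits, old_logits,
                                    temperature=2.0, align_weight=1.0, logit_weight=1.0):
        """new_feat/old_feat: (B, D) embeddings from the learner and the frozen old model.
        new_logits/old_logits: (B, C) identity logits over the old task's classes."""
        # Representation-level alignment: pull learner embeddings toward the
        # old model's embeddings (cosine distance; L2 is another common choice).
        align = (1.0 - F.cosine_similarity(new_feat, old_feat.detach(), dim=1)).mean()

        # Logit-level supervision: KL divergence between temperature-softened
        # distributions, scaled by T^2 as in standard distillation.
        t = temperature
        kd = F.kl_div(
            F.log_softmax(new_logits / t, dim=1),
            F.softmax(old_logits.detach() / t, dim=1),
            reduction="batchmean",
        ) * (t * t)

        return align_weight * align + logit_weight * kd

    # Usage with random tensors standing in for real features/logits:
    if __name__ == "__main__":
        B, D, C = 8, 768, 100
        loss = knowledge_preservation_loss(
            torch.randn(B, D), torch.randn(B, D),
            torch.randn(B, C), torch.randn(B, C),
        )
        print(loss.item())

In practice such a term would be added to the new-task Re-ID losses, so that the learner adapts to the incoming data stream while its features and predictions stay close to those of the old model on the limited replayed old-task samples.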