
ANL: Anti-noise learning for cross-domain person re-identification

Authors: Zhang, Hongliang; Han, Shoudong; Pan, Xiaofeng; Zhao, Jun

Affiliations: National Key Laboratory of Science and Technology on Multispectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, 1037 Luoyu Road, Wuhan 430074, China; School of Computer Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore

Publication: arXiv

Year: 2020


Subject: Clustering algorithms

Abstract: Due to the lack of labels and the diversity across domains, person re-identification in the cross-domain setting remains challenging. A promising approach is to optimize the target model by assigning pseudo-labels to unlabeled samples through clustering. However, owing to the domain gap, the pre-trained source-domain model cannot extract appropriate target-domain features, which dramatically degrades both clustering performance and pseudo-label accuracy; the resulting label noise inevitably leads to sub-optimal solutions. To address these problems, we propose an Anti-Noise Learning (ANL) approach, which contains two modules. The Feature Distribution Alignment (FDA) module gathers identity-related samples and disperses identity-unrelated ones through camera-wise contrastive learning and adversarial adaptation, creating a clustering-friendly feature space that reduces clustering noise. In addition, the Reliable Sample Selection (RSS) module employs an Auxiliary Model to correct noisy labels and select reliable samples for the Main Model. To effectively exploit the outliers produced by the clustering algorithm and the RSS module, we train these samples at the instance level. Experiments demonstrate that the proposed ANL framework effectively reduces domain conflicts, alleviates the influence of noisy samples, and achieves superior performance compared with state-of-the-art methods. Copyright © 2020, The Authors. All rights reserved.
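The pseudo-labelling pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it replaces the DBSCAN-style clustering typical in re-ID pipelines with a naive greedy grouping on cosine distance, and the `eps` threshold and singleton-outlier rule are assumptions for the sketch. The key idea shown is that samples falling outside any reliable cluster are flagged so they can be trained at the instance level, as the abstract notes.

```python
import numpy as np

def cluster_pseudo_labels(features: np.ndarray, eps: float = 0.3) -> np.ndarray:
    """Assign pseudo-labels to unlabeled target-domain features.

    Greedy threshold clustering on cosine distance (a simplified stand-in
    for the density-based clustering usually used for re-ID pseudo-labels).
    Samples left in singleton clusters are marked -1 (outliers) so a
    training loop can handle them at the instance level.
    """
    # L2-normalize so the inner product equals cosine similarity.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    labels = np.full(len(feats), -1, dtype=int)
    next_label = 0
    for i in range(len(feats)):
        for j in range(i):
            if 1.0 - float(feats[i] @ feats[j]) < eps:
                labels[i] = labels[j]  # join the first close-enough cluster
                break
        if labels[i] == -1:
            labels[i] = next_label     # start a new cluster
            next_label += 1
    # Flag members of singleton clusters as outliers (label -1).
    counts = np.bincount(labels)
    labels[counts[labels] == 1] = -1
    return labels

# Toy example: two tight identity groups plus one isolated sample.
feats = np.array([[1.0, 0.0, 0.0],
                  [0.99, 0.05, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.05, 0.99, 0.0],
                  [0.0, 0.0, 1.0]])
labels = cluster_pseudo_labels(feats)
# The first two and next two samples share pseudo-labels; the last is an outlier.
```

In the full approach, the clustering step would be repeated each epoch as the target model's features improve, with the RSS module further filtering which clustered samples are trusted.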
