Wasserstein Distance-Based Auto-Encoder Tracking

Authors: Xu, Long; Wei, Ying; Dong, Chenhe; Xu, Chuaqiao; Diao, Zhaofu

Affiliation: Northeastern University, Shenyang, People's Republic of China

Published in: Neural Processing Letters

Year/Volume/Issue: 2021, Vol. 53, No. 3

Pages: 2305-2329

Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: National Natural Science Foundation of China; Key R&D Projects of Liaoning Province, China [2020JH2/10100029]; Open Project Program Foundation of the Key Laboratory of Opto-Electronics Information Processing, Chinese Academy of Sciences [OEIP-O-202002]

Keywords: Visual object tracking; Auto-encoder; Importance weighting; Wasserstein distance; Maximum mean discrepancy

Abstract: Most existing visual object trackers are based on deep convolutional feature maps, but fewer works have explored new features for tracking. This paper proposes a novel tracking framework built on a fully convolutional auto-encoder appearance model, which is trained using the Wasserstein distance and maximum mean discrepancy. Compared with previous works, the proposed framework improves three aspects: the appearance model, the update scheme, and state estimation. To address the shortcomings of the original update scheme, namely poor discriminative performance under limited supervisory information, sample pollution caused by long-term object occlusion, and sample importance imbalance, this paper proposes a novel latent-space importance weighting algorithm, a novel sample-space management algorithm, and a novel IoU-based label smoothing algorithm, respectively. In addition, an improved weighted loss function is adopted to address the sample imbalance issue. Finally, to improve state estimation accuracy, a combination of the Kullback-Leibler divergence and the generalized intersection over union is introduced. Extensive experiments on three widely used benchmarks demonstrate the state-of-the-art performance of the proposed method. Code and models are available at https://***/wahahamyt/***.
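
For readers unfamiliar with the maximum mean discrepancy (MMD) term mentioned in the abstract, the sketch below is a minimal NumPy estimate of the squared MMD between encoded latents and prior samples using a Gaussian kernel. The function names, kernel choice, bandwidth, and sample shapes are illustrative assumptions and are not taken from the paper's released code.

import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of x and y.
    sq_dists = (np.sum(x ** 2, axis=1)[:, None]
                + np.sum(y ** 2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd_squared(x, y, sigma=1.0):
    # Biased empirical estimate of the squared maximum mean discrepancy.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

# Illustrative usage: compare stand-in encoder outputs with draws from a unit Gaussian prior.
rng = np.random.default_rng(0)
latents = rng.normal(loc=0.5, scale=1.0, size=(64, 8))  # hypothetical encoded features
prior = rng.standard_normal(size=(64, 8))               # hypothetical prior samples
print(mmd_squared(latents, prior))

In Wasserstein auto-encoder style training, a term of this kind is typically added to a reconstruction loss to pull the aggregate latent distribution toward the prior.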
