Author affiliations: Beihang Univ, State Key Lab Virtual Real Technol & Syst, Sch Engn & Comp Sci, Beijing 100083, Peoples R China; Beihang Univ, Beijing Adv Innovat Ctr Big Data & Brain Comp, Beijing 100083, Peoples R China
Publication: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY (IEEE Trans Circuits Syst Video Technol)
Year/Volume/Issue: 2019, Vol. 29, No. 12
Pages: 3646-3659
Funding: National Key Research and Development Program of China [2016YFB1001002]
Keywords: Feature extraction; Legged locomotion; Semantics; Optical computing; Optical imaging; Visualization; Recurrent neural networks; Person re-identification; spatio-temporal aggregation; similarity measuring; multi-model ensemble
Abstract: Person re-identification (ReID) aims to associate the identities of pedestrians captured by cameras covering non-overlapping areas. Video-based ReID plays an important role in intelligent video surveillance systems and has attracted growing attention in recent years. In this paper, we propose an end-to-end video-based ReID framework based on the convolutional neural network (CNN) for efficient spatio-temporal modeling and enhanced similarity measuring. Specifically, we build our sequence descriptor by basic mathematical calculations on semantic mid-level image features, which avoids time-consuming computations and the loss of spatial correlations. We further hierarchically extract image features from multiple intermediate CNN stages to build multi-level sequence descriptors. For the descriptor at each stage, we design an effective auxiliary pairwise loss that is jointly optimized with a triplet loss. To integrate the hierarchical representation, we propose an intuitive yet effective summation-based similarity integration scheme to match identities more accurately. Furthermore, we extend our framework with a multi-model ensemble strategy, which effectively combines three popular CNN models to represent walking sequences more comprehensively and to improve performance. Extensive experiments on three video-based ReID datasets show that the proposed framework outperforms state-of-the-art methods.
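To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes temporal mean pooling as the "basic mathematical calculations", uses a toy two-stage backbone in place of the paper's CNN, and substitutes a simple MSE term for the auxiliary pairwise loss. All names here (MultiLevelSequenceDescriptor, integrated_similarity) are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiLevelSequenceDescriptor(nn.Module):
        """Aggregate per-frame features from two CNN stages into sequence descriptors."""
        def __init__(self):
            super().__init__()
            # Tiny stand-in backbone; the paper uses standard CNN models.
            self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU())
            self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())

        def forward(self, clip):            # clip: (B, T, 3, H, W)
            b, t = clip.shape[:2]
            x = clip.flatten(0, 1)          # fold time into the batch: (B*T, 3, H, W)
            f1 = self.stage1(x)
            f2 = self.stage2(f1)
            descs = []
            for f in (f1, f2):              # one descriptor per intermediate stage
                f = F.adaptive_avg_pool2d(f, 1).flatten(1)  # spatial pooling per frame
                f = f.view(b, t, -1).mean(dim=1)            # temporal mean over T frames
                descs.append(F.normalize(f, dim=1))
            return descs

    def integrated_similarity(query_descs, gallery_descs):
        """Summation-based integration: sum the per-stage cosine similarities."""
        return sum((q * g).sum(dim=1) for q, g in zip(query_descs, gallery_descs))

    model = MultiLevelSequenceDescriptor()
    anchor = model(torch.randn(4, 8, 3, 64, 32))    # 4 clips of 8 frames each
    positive = model(torch.randn(4, 8, 3, 64, 32))
    negative = model(torch.randn(4, 8, 3, 64, 32))
    # Per-stage joint objective: triplet loss plus an auxiliary pairwise term.
    loss = 0.0
    for a, p, n in zip(anchor, positive, negative):
        loss = loss + F.triplet_margin_loss(a, p, n, margin=0.3)
        loss = loss + F.mse_loss(a, p)      # pairwise stand-in for the auxiliary loss
    print(loss, integrated_similarity(anchor, positive).shape)

Because each stage's descriptor is just a pooled, L2-normalized vector, the integration step reduces to summing dot products, which matches the summation-based scheme the abstract describes; the multi-model ensemble would repeat this recipe over several backbones and sum their similarities as well.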