ISBN (Print): 9781479957521
This paper proposes a robust patch-based object tracking algorithm. Unlike many traditional algorithms, which divide the object into multiple patches and allocate weight values to each patch, this paper uses SIFT feature matching to select valid patches and filter out invalid ones. Invalid patches usually correspond to occluded or partially transformed parts of the object. Guided by the valid patches, the patch-based color histogram therefore provides a richer description of the object, and the similarity of the valid patches is used within a particle filter to locate the object. Moreover, since feature similarity alone is prone to cause object drift, the object template is updated by fusing feature similarity with the valid patches, which makes the tracker both scale adaptive and robust to partial occlusion. Experimental results show that the proposed algorithm is more accurate and robust than state-of-the-art tracking algorithms in challenging scenarios.
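The patch-selection step described in this abstract can be illustrated with a minimal sketch: the candidate region is divided into a grid of patches, SIFT keypoints in each patch are matched against the template with Lowe's ratio test, and patches with too few good matches are discarded as invalid. Function names, the grid size, and the thresholds (`select_valid_patches`, `ratio`, `min_matches`) are illustrative assumptions, not values from the paper.

```python
# Sketch of valid-patch selection via SIFT matching (illustrative names/thresholds).
import cv2

def select_valid_patches(template, candidate, grid=(4, 4), ratio=0.75, min_matches=3):
    """Return grid cells of `candidate` whose SIFT features match `template`.
    Both images are assumed to be grayscale uint8 arrays."""
    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2)

    _, des_t = sift.detectAndCompute(template, None)
    if des_t is None or len(des_t) < 2:
        return []

    h, w = candidate.shape[:2]
    ph, pw = h // grid[0], w // grid[1]
    valid = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            patch = candidate[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            _, des_p = sift.detectAndCompute(patch, None)
            if des_p is None:
                continue  # no keypoints: treat the patch as invalid (e.g. occluded)
            # Lowe's ratio test against the two nearest template descriptors
            good = [m for m, n in bf.knnMatch(des_p, des_t, k=2)
                    if m.distance < ratio * n.distance]
            if len(good) >= min_matches:
                valid.append((r, c))
    return valid
```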
To tackle the problem that traditional particle-filter- or correlation-filter-based trackers are prone to low tracking accuracy and poor robustness when the target faces challenges such as occlusion, rotation and scale variation in complex scenes, an accurate reliable-patch-based tracker is proposed by exploiting the complementary advantages of the particle filter and the correlation filter. Specifically, to cope with continuous full occlusion, the target is divided into numerous patches by combining random and hand-crafted partition methods, and an effective target position estimation strategy is presented. Subsequently, according to the motion relationship between each patch and the global target in the particle filter framework, two effective resampling rules are designed to remove unreliable particles and avoid tracking drift, and the target position is then estimated from the most reliable patches identified. Finally, an effective scale estimation approach is presented, in which the Manhattan distance between the reliable patches is used to estimate the target scale, i.e., the target width and height separately. Experimental results illustrate that the tracker is not only robust against occlusion, rotation and scale variation but also outperforms the state-of-the-art trackers used for comparison in overall performance.
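The scale-update idea lends itself to a short numerical sketch. The abstract does not give the exact formula, so the following is one plausible reading under stated assumptions: the centres of the reliable patches are known in both the reference frame and the current frame, pairwise Manhattan-style distances are compared along x and y separately, and the previous width and height are rescaled by the resulting median ratios. All names and the median aggregation are assumptions.

```python
# Illustrative scale estimation from reliable patch centres (assumed formula,
# not taken verbatim from the paper): compare pairwise |dx| and |dy| spreads.
import numpy as np
from itertools import combinations

def estimate_scale(ref_centres, cur_centres, prev_w, prev_h):
    """ref_centres/cur_centres: (N, 2) arrays of (x, y) for the same reliable patches."""
    ref = np.asarray(ref_centres, dtype=float)
    cur = np.asarray(cur_centres, dtype=float)
    pairs = list(combinations(range(len(ref)), 2))
    if not pairs:
        return prev_w, prev_h  # not enough reliable patches: keep the previous scale

    def axis_ratio(axis):
        ref_d = np.array([abs(ref[i, axis] - ref[j, axis]) for i, j in pairs])
        cur_d = np.array([abs(cur[i, axis] - cur[j, axis]) for i, j in pairs])
        keep = ref_d > 1e-6                    # ignore degenerate (overlapping) pairs
        return np.median(cur_d[keep] / ref_d[keep]) if keep.any() else 1.0

    return prev_w * axis_ratio(0), prev_h * axis_ratio(1)
```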
This paper presents a new approach for tracking multiple people in monocular calibrated cameras combining patch matching and pedestrian detection. Initially, background removal and pedestrian detection are used in conjunction with the vertical standing hypothesis to initialize the targets with multiples patches. In the tracking step, each patch related to a given target is matched individually across frames, and their translation vectors are combined robustly with pedestrian detection results in the world coordinate frame using weighted vector median filters. Additionally, the algorithm uses the camera parameters to both estimate the person scale in a straightforward manner and to limit the search region used to track each fragment. Our experimental results indicate that our tracker can deal with occlusions and video sequences with strong appearance variations, presenting results comparable to or better than existing state-of-the-art algorithms. (C) 2013 Elsevier B.V. All rights reserved.
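The fusion step relies on a weighted vector median filter, an operator compact enough to sketch directly: among the input vectors, pick the one that minimises the weighted sum of Euclidean distances to all inputs. The example inputs and weights below are placeholders; how the paper actually sets the weights is not detailed in the abstract.

```python
# Weighted vector median filter: return the input vector that minimises the
# weighted sum of Euclidean distances to all other input vectors.
import numpy as np

def weighted_vector_median(vectors, weights):
    """vectors: (N, D) array-like, weights: (N,) array-like."""
    v = np.asarray(vectors, dtype=float)
    w = np.asarray(weights, dtype=float)
    dists = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)  # (N, N) distances
    cost = dists @ w                                                # cost[k] = sum_i w[i]*||v[k]-v[i]||
    return v[np.argmin(cost)]

# Example: fuse per-patch translation vectors with a pedestrian-detection
# displacement, trusting the detector a bit more (weights are illustrative).
patch_shifts = [[1.0, 0.2], [0.9, 0.1], [5.0, 4.0]]   # last one is an outlier patch
detector_shift = [[1.1, 0.15]]
fused = weighted_vector_median(patch_shifts + detector_shift, [1, 1, 1, 2])
print(fused)  # stays close to the consistent shifts, ignoring the outlier
```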
A text localisation and tracking method is presented that finds text regions in videos and assigns unique IDs to their trajectories. To this end, a graph-based framework that can work with existing text detection methods is developed. Specifically, graphs are built in which vertices are image-level text detection results and edges represent the correspondence scores between vertices. Text-region trajectories are then extracted from these graphs using the graph-cut algorithm. This approach allows false positives and misses, as well as their patch-based tracking results, to be considered at the same time, so that text trajectories are extracted reliably. Finally, the results are refined by interpolating misses and filtering out false positives. The proposed method was submitted to the International Conference on Document Analysis and Recognition 2015 robust reading competition (video text localisation), where it showed the best performance in terms of CLEAR MOT metrics and was ranked third according to VACE metrics among the seven participating methods.
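The graph construction can be illustrated briefly: per-frame detections become vertices and cross-frame pairs with a sufficiently high correspondence score become weighted edges. The sketch below uses IoU overlap as a stand-in correspondence score and groups connected components into trajectories as a deliberate simplification of the paper's graph-cut step; the score function, threshold, and grouping are all assumptions, and miss interpolation is not reproduced.

```python
# Sketch: build a correspondence graph over per-frame text detections and group
# them into trajectories. IoU replaces the paper's correspondence score and
# connected components replace its graph-cut extraction.
import networkx as nx

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def build_trajectories(detections, min_score=0.3):
    """detections: dict frame_index -> list of boxes. Returns lists of (frame, box_idx)."""
    g = nx.Graph()
    frames = sorted(detections)
    for f in frames:
        for i, _ in enumerate(detections[f]):
            g.add_node((f, i))
    for f, f_next in zip(frames, frames[1:]):           # link consecutive frames only
        for i, a in enumerate(detections[f]):
            for j, b in enumerate(detections[f_next]):
                s = iou(a, b)
                if s >= min_score:
                    g.add_edge((f, i), (f_next, j), weight=s)
    return [sorted(c) for c in nx.connected_components(g)]
```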
ISBN (Print): 9781479911974; 9781479911950
In this paper, we propose a robust l(1) tracking method based on a two-phase sparse representation, which consists of a patch tracker and a global appearance tracker. While recently proposed l(1) trackers show impressive tracking accuracy, tracking dynamic appearance is not easy for them. To overcome dynamic appearance change and achieve robust visual tracking, we model the dynamic appearance of the object with a set of local rigid patches and enhance the distinctiveness of the global appearance tracker by positive/negative learning. The integration of the two approaches makes visual tracking robust to occlusion and illumination variation. We present experiments on five challenging video sequences and compare with state-of-the-art trackers, showing that the proposed method successfully handles occlusion, noise, scale, illumination, and appearance changes of the object.
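The core mechanism of a generic l(1) tracker is to code each candidate region as a sparse combination of object templates plus trivial (identity) templates and to score it by the reconstruction error of the object-template part. The sketch below shows that mechanism only, using scikit-learn's Lasso; the penalty weight, the template set, and the scoring constant are illustrative, and the paper's two-phase patch/global structure and positive/negative learning are not reproduced here.

```python
# Minimal l1 sparse-representation likelihood: code a candidate over object
# templates plus trivial templates, then score by reconstruction error on the
# object-template coefficients (alpha, sigma and templates are illustrative).
import numpy as np
from sklearn.linear_model import Lasso

def candidate_likelihood(candidate, templates, alpha=0.01, sigma=0.1):
    """candidate: (d,) feature vector; templates: (d, k) matrix of object templates."""
    d, k = templates.shape
    dictionary = np.hstack([templates, np.eye(d), -np.eye(d)])  # trivial templates
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    coder.fit(dictionary, candidate)          # l1-penalised least squares
    obj_coeff = coder.coef_[:k]               # coefficients on the object templates
    err = np.linalg.norm(candidate - templates @ obj_coeff) ** 2
    return np.exp(-err / sigma)               # higher when object templates explain the candidate
```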