Chapter 5: Online Learning and Robust Visual Tracking using Local Features and Global Appearances of Video Objects
Keywords: visual object tracking; anisotropic mean shift
Abstract: This chapter describes a novel hybrid visual object tracking scheme that jointly exploits local point features and the global appearance and shape of target objects. The hybrid tracker contains two baseline candidate trackers and is formulated under an optimal criterion. One baseline tracker, a spatiotemporal SIFT-RANSAC, extracts local feature points separately for the foreground and background regions. The other baseline tracker, an enhanced anisotropic mean shift, tracks a dynamic object whose global appearance is most similar to the online-learned distribution of the reference object. An essential building block in the hybrid tracker is the online learning of the dynamic object, where we employ a new approach for learning the appearance distribution and another new approach for updating the two feature point sets. To demonstrate the applicability of these online learning approaches to other trackers, we show an example in which online learning is added to an existing JMSPF (joint mean shift and particle filter) tracking scheme, resulting in improved tracking robustness. The proposed hybrid tracker has been tested on numerous videos covering a range of complex scenarios in which target objects may experience long-term partial occlusions/intersections from other objects, large deformations, abrupt motion changes, and dynamic cluttered backgrounds or occluding objects with color distributions similar to that of the target object. Tracking results are shown to be very robust in terms of tracking drift, accuracy, and tightness of the tracked bounding boxes. The performance of the hybrid tracker is evaluated qualitatively and quantitatively, with comparisons to four existing state-of-the-art tracking schemes. Limitations of the tracker are also discussed.
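The hybrid structure described above — two candidate trackers whose per-frame outputs are compared under a criterion, with the winner used to update the online-learned reference — can be sketched as follows. This is a minimal illustration only: the class names, the scalar appearance score, and the exponential-forgetting update rule are assumptions for exposition, not the chapter's actual formulation.

```python
# Illustrative sketch of a two-candidate hybrid tracker with online
# learning of the reference appearance. All names and the simple
# score-based selection criterion are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class Candidate:
    box: tuple    # (x, y, w, h) bounding box proposed for the current frame
    score: float  # similarity of the appearance inside box to the reference

def select_candidate(sift_ransac: Candidate, mean_shift: Candidate) -> Candidate:
    """Per-frame fusion: keep the candidate whose appearance best
    matches the online-learned reference distribution."""
    return max((sift_ransac, mean_shift), key=lambda c: c.score)

def update_reference(ref: float, winner: float, alpha: float = 0.1) -> float:
    """Online learning step (sketch): blend the winning candidate's
    appearance statistic into the reference with forgetting factor alpha."""
    return (1.0 - alpha) * ref + alpha * winner
```

In this toy form, the selection criterion reduces to picking the higher appearance score, and the reference is refreshed only from the selected candidate, which is the basic mechanism that lets online learning be bolted onto another tracker (such as the JMSPF scheme mentioned above).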