Please use this identifier to cite or link to this item: https://doi.org/10.1007/s11263-012-0582-z
Title: Robust visual tracking via structured multi-task sparse learning
Authors: Zhang, T.
Ghanem, B.
Liu, S. 
Ahuja, N.
Keywords: Graph
Multi-task learning
Particle filter
Sparse representation
Structure
Visual tracking
Issue Date: 2013
Source: Zhang, T., Ghanem, B., Liu, S., Ahuja, N. (2013). Robust visual tracking via structured multi-task sparse learning. International Journal of Computer Vision 101 (2): 367-383. ScholarBank@NUS Repository. https://doi.org/10.1007/s11263-012-0582-z
Abstract: In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing ℓp,q mixed norms (specifically p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational cost. Interestingly, we show that the popular L1 tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259-2272, 2011) is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g., spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers. © 2012 Springer Science+Business Media New York.
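The abstract's core idea, joint-sparse representation learning solved by APG with closed-form updates, can be illustrated for the ℓ2,1 case (p = 2, q = 1): each iteration alternates a gradient step on the reconstruction term with a row-wise shrinkage that zeroes entire dictionary atoms across all particles at once. The sketch below is illustrative only and is not the authors' implementation; the dictionary D, the particle observations X, and all parameter choices are hypothetical.

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of lam * ||W||_{2,1}: row-wise soft thresholding,
    so a dictionary atom is kept or discarded jointly for all particles."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale

def apg_l21(D, X, lam, n_iter=200):
    """Accelerated proximal gradient for
       min_W 0.5 * ||D W - X||_F^2 + lam * ||W||_{2,1},
    where columns of X are particle observations and columns of W are
    their jointly sparse codes over the template dictionary D."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    W = np.zeros((D.shape[1], X.shape[1]))
    V, t = W.copy(), 1.0
    for _ in range(n_iter):
        G = V - step * (D.T @ (D @ V - X))   # gradient step on the smooth term
        W_new = prox_l21(G, lam * step)      # closed-form proximal update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        V = W_new + ((t - 1.0) / t_new) * (W_new - W)  # Nesterov momentum
        W, t = W_new, t_new
    return W
```

Because the proximal step has a closed form, each APG iteration costs only matrix products plus row norms, which is why the abstract describes MTT and S-MTT as computationally attractive; the S-MTT extension adds a graph-structured term to the regularizer that this sketch omits.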
Source Title: International Journal of Computer Vision
URI: http://scholarbank.nus.edu.sg/handle/10635/39730
ISSN: 0920-5691
DOI: 10.1007/s11263-012-0582-z
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

Scopus™ citations: 158 (checked on Dec 13, 2017)
Web of Science™ citations: 121 (checked on Nov 2, 2017)
Page views: 100 (checked on Dec 9, 2017)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.