Please use this identifier to cite or link to this item: https://doi.org/10.1109/TCSVT.2011.2129410
DC Field / Value
dc.title: Adaptive object tracking by learning hybrid template online
dc.contributor.author: Liu, X.
dc.contributor.author: Lin, L.
dc.contributor.author: Yan, S.
dc.contributor.author: Jin, H.
dc.contributor.author: Jiang, W.
dc.date.accessioned: 2014-06-17T02:37:07Z
dc.date.available: 2014-06-17T02:37:07Z
dc.date.issued: 2011-11
dc.identifier.citation: Liu, X., Lin, L., Yan, S., Jin, H., Jiang, W. (2011-11). Adaptive object tracking by learning hybrid template online. IEEE Transactions on Circuits and Systems for Video Technology 21 (11): 1588-1599. ScholarBank@NUS Repository. https://doi.org/10.1109/TCSVT.2011.2129410
dc.identifier.issn: 1051-8215
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/54926
dc.description.abstract: This paper presents an adaptive tracking algorithm that learns hybrid object templates online from video. The templates consist of multiple types of features, each describing one specific appearance structure, such as flatness, texture, or edge/corner. Our proposed solution has three aspects. First, to make features of different types comparable with one another, a unified statistical measure is defined to select the most informative features for constructing the hybrid template. Second, we propose a simple yet powerful generative model for representing objects; its simplicity allows it to be learned efficiently from the currently observed frames. Third, we present an iterative procedure that learns the object template from the currently observed frames and locates every feature of the template within those frames. The former step is referred to as feature pursuit and the latter as feature alignment, both performed over a batch of observations. The results of feature alignment are fused to locate the object within each frame. The proposed tracker is inherently robust to various challenges, including background clutter, low resolution, scale changes, and severe occlusions. Extensive experiments on several publicly available databases show that our tracking algorithm clearly outperforms state-of-the-art methods. © 2011 IEEE. (A schematic sketch of the pursuit-and-alignment loop follows the metadata listing below.)
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/TCSVT.2011.2129410
dc.source: Scopus
dc.subject: Adaptive tracking
dc.subject: hybrid template
dc.subject: matching pursuit
dc.type: Article
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi: 10.1109/TCSVT.2011.2129410
dc.description.sourcetitle: IEEE Transactions on Circuits and Systems for Video Technology
dc.description.volume: 21
dc.description.issue: 11
dc.description.page: 1588-1599
dc.description.coden: ITCTE
dc.identifier.isiut: 000296471100004
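
The abstract above describes an alternating feature-pursuit / feature-alignment procedure over a batch of frames, with the aligned feature locations fused into the object position. The following is a minimal, illustrative Python sketch of such a loop, assuming grayscale NumPy frames. It is not the authors' implementation: the informativeness score, patch size, search radius, and all helper names (extract_patches, feature_pursuit, align_feature, track) are assumptions introduced here for illustration.

```python
# Minimal illustrative sketch of an online pursuit-and-alignment tracking loop.
# NOT the paper's implementation; all scores, sizes, and helpers are assumptions.
import numpy as np

PATCH = 8          # assumed square patch size for each template feature
NUM_FEATURES = 16  # assumed number of features in the hybrid template


def extract_patches(frame, bbox, step=4):
    """Cut candidate feature patches from inside the current object box."""
    x, y, w, h = bbox
    patches, offsets = [], []
    for dy in range(0, h - PATCH + 1, step):
        for dx in range(0, w - PATCH + 1, step):
            patches.append(frame[y + dy:y + dy + PATCH,
                                 x + dx:x + dx + PATCH].astype(float))
            offsets.append((dx, dy))
    return patches, offsets


def informativeness(patch, background_patches):
    """Stand-in for a unified statistical measure: a patch scores higher
    the worse it is explained by any background patch."""
    errs = [np.mean((patch - b) ** 2) for b in background_patches]
    return float(min(errs))


def feature_pursuit(frame, bbox, background_patches):
    """Greedily keep the most informative patches as the hybrid template."""
    patches, offsets = extract_patches(frame, bbox)
    scored = sorted(zip(patches, offsets),
                    key=lambda po: informativeness(po[0], background_patches),
                    reverse=True)
    return scored[:NUM_FEATURES]   # list of (patch, offset-inside-box)


def align_feature(frame, patch, predicted_xy, radius=12):
    """Feature alignment: search a window around the predicted location for
    the position that best matches this template feature."""
    px, py = predicted_xy
    H, W = frame.shape
    best_err, best_xy = np.inf, (px, py)
    for y in range(max(0, py - radius), min(H - PATCH, py + radius) + 1):
        for x in range(max(0, px - radius), min(W - PATCH, px + radius) + 1):
            err = np.mean((frame[y:y + PATCH, x:x + PATCH] - patch) ** 2)
            if err < best_err:
                best_err, best_xy = err, (x, y)
    return best_xy


def track(frames, init_bbox):
    """Alternate feature alignment and feature pursuit over incoming frames,
    fusing the aligned feature positions into the object box."""
    bbox = init_bbox
    background = [np.zeros((PATCH, PATCH))]   # crude stand-in background pool
    template = feature_pursuit(frames[0], bbox, background)
    for frame in frames[1:]:
        x, y, w, h = bbox
        # alignment: locate every template feature near its predicted spot
        corners = []
        for patch, (dx, dy) in template:
            lx, ly = align_feature(frame, patch, (x + dx, y + dy))
            corners.append((lx - dx, ly - dy))
        # fusion: median vote over the per-feature box-corner estimates
        x = int(np.median([c[0] for c in corners]))
        y = int(np.median([c[1] for c in corners]))
        bbox = (x, y, w, h)
        # pursuit: refresh the hybrid template from the newly located object
        template = feature_pursuit(frame, bbox, background)
        yield bbox
```

Assuming `gray_frames` is a list of 2-D NumPy arrays and the object is initialized at, say, `(x, y, w, h) = (40, 30, 64, 64)`, calling `list(track(gray_frames, (40, 30, 64, 64)))` yields one fused bounding box per subsequent frame. The median-vote fusion is one simple choice that tolerates a minority of misaligned features, loosely reflecting the robustness to clutter and partial occlusion described in the abstract.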
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

SCOPUS™ Citations: 32 (checked on Aug 12, 2019)
Web of Science™ Citations: 27 (checked on Aug 12, 2019)
Page view(s): 50 (checked on Aug 9, 2019)
