Please use this identifier to cite or link to this item: https://doi.org/10.1109/TIP.2011.2158231
Title: Assemble new object detector with few examples
Authors: Yang, K.
Wang, M. 
Hua, X.-S.
Yan, S. 
Zhang, H.-J.
Keywords: Adaptation
Assemble
Object detection
Issue Date: 2011
Source: Yang, K., Wang, M., Hua, X.-S., Yan, S., Zhang, H.-J. (2011). Assemble new object detector with few examples. IEEE Transactions on Image Processing, 20(12): 3341-3349. ScholarBank@NUS Repository. https://doi.org/10.1109/TIP.2011.2158231
Abstract: Learning a satisfactory object detector generally requires sufficient training data to cover most variations of the object. In this paper, we show that the performance of an object detector degrades severely when training examples are limited. We propose an approach to handle this issue by exploring a set of pretrained auxiliary detectors for other categories. By mining the global and local relationships between the target object category and the auxiliary objects, a robust detector can be learned with very few training examples. We adopt the deformable part model proposed by Felzenszwalb and simultaneously explore the root and part filters in the auxiliary object detectors under the guidance of the few training examples from the target object category. An iterative solution is introduced for this process. Extensive experiments on the PASCAL VOC 2007 challenge data set show the encouraging performance of the new detector assembled from those related auxiliary detectors. © 2011 IEEE.
Source Title: IEEE Transactions on Image Processing
URI: http://scholarbank.nus.edu.sg/handle/10635/43148
ISSN: 1057-7149
DOI: 10.1109/TIP.2011.2158231
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

SCOPUS™ Citations: 9 (checked on Dec 14, 2017)

Web of Science™ Citations: 7 (checked on Nov 18, 2017)

Page view(s): 77 (checked on Dec 10, 2017)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.