Please use this identifier to cite or link to this item: https://doi.org/10.1109/ICCVW.2009.5457643
Title: Active view selection for object and pose recognition
Authors: Jia Z.; Chang Y.-J.; Chen T.
Issue Date: 2009
Citation: Jia Z., Chang Y.-J., Chen T. (2009). Active view selection for object and pose recognition. 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops 2009: 641-648. ScholarBank@NUS Repository. https://doi.org/10.1109/ICCVW.2009.5457643
Abstract: In this paper we present an algorithm for multi-view object and pose recognition. In contrast to existing work that models the object from images alone, we exploit the information in the image sequence together with the relative 3D positions of the views, since in many settings the camera movements between views are known and can be controlled by the user. This allows us to compute the next optimal viewpoint from which to take a picture based on previous observations, and to perform object/pose recognition from the images obtained along the way. The proposed method uses HOG (Histogram of Oriented Gradients) features with an SVM (Support Vector Machine) as the basic single-view object/pose classifier. To learn the optimal action, the algorithm uses boosting to find the best viewing sequence across the multiple views, exploiting the relations between different viewpoints with the AdaBoost algorithm. Experiments show that the learned sequence improves recognition performance in the early steps compared to a randomly selected sequence, and that the proposed algorithm achieves better recognition accuracy than the baseline method.
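The sketch below illustrates the kind of pipeline the abstract describes: a HOG + SVM single-view classifier and an active choice of the next viewpoint. It is a minimal illustration assuming scikit-image and scikit-learn, not the authors' implementation; in particular, the paper's AdaBoost-based sequence learning is replaced here by a simple greedy confidence rule, and names such as `train_images` and `candidate_images` are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code) of a HOG + SVM per-view classifier
# with a greedy stand-in for the learned view-selection policy.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(image):
    # Histogram of Oriented Gradients descriptor for one grayscale image.
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_classifier(train_images, train_labels):
    # One multi-class SVM over object/pose labels from single views.
    # train_labels: integer codes for (object, pose) pairs.
    X = np.array([hog_features(im) for im in train_images])
    clf = SVC(kernel="linear", probability=True)
    clf.fit(X, train_labels)
    return clf

def next_best_view(clf, candidate_images):
    # Greedy active view selection: pick the candidate viewpoint whose
    # classifier output is most confident. The paper instead learns the
    # best sequence across views with AdaBoost.
    probs = clf.predict_proba(
        np.array([hog_features(im) for im in candidate_images]))
    confidence = probs.max(axis=1)      # top-class probability per view
    return int(np.argmax(confidence))   # index of the next view to visit
```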
Source Title: 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops 2009
URI: http://scholarbank.nus.edu.sg/handle/10635/146191
ISBN: 9781424444427
DOI: 10.1109/ICCVW.2009.5457643
Appears in Collections: Staff Publications
