Please use this identifier to cite or link to this item:
https://doi.org/10.1109/TIP.2011.2170081
Title: Camera constraint-free view-based 3-D object retrieval
Authors: Gao, Y.; Tang, J.; Hong, R.; Yan, S.; Dai, Q.; Zhang, N.; Chua, T.-S.
Keywords: 3-D object; camera constraint-free; retrieval; view-based
Issue Date: 2012
Citation: Gao, Y., Tang, J., Hong, R., Yan, S., Dai, Q., Zhang, N., & Chua, T.-S. (2012). Camera constraint-free view-based 3-D object retrieval. IEEE Transactions on Image Processing, 21(4), 2269-2281. ScholarBank@NUS Repository. https://doi.org/10.1109/TIP.2011.2170081
Abstract: Recently, extensive research efforts have been dedicated to view-based methods for 3-D object retrieval due to the highly discriminative property of multiviews for 3-D object representation. However, most state-of-the-art approaches depend heavily on their own camera array settings for capturing views of 3-D objects. To move toward a general framework for 3-D object retrieval free of camera array restrictions, a camera constraint-free view-based (CCFV) 3-D object retrieval algorithm is proposed in this paper. In this framework, each object is represented by a free set of views, meaning that these views can be captured from any direction without camera constraint. For each query object, we first cluster all query views to generate view clusters, which are then used to build the query models. For a more accurate 3-D object comparison, a positive matching model and a negative matching model are trained using positive and negative matched samples, respectively. The CCFV model is generated on the basis of the query Gaussian models by combining the positive matching model and the negative matching model. The CCFV removes the constraint of static camera array settings for view capturing and can be applied to any view-based 3-D object database. We conduct experiments on the National Taiwan University 3-D model database and the ETH 3-D object database. Experimental results show that the proposed scheme achieves better performance than state-of-the-art methods. © 2011 IEEE.
Source Title: IEEE Transactions on Image Processing
URI: http://scholarbank.nus.edu.sg/handle/10635/43076
ISSN: 1057-7149
DOI: 10.1109/TIP.2011.2170081
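The retrieval pipeline summarized in the abstract (cluster the query views, fit a Gaussian query model per cluster, then score candidates by combining positive and negative matching evidence) can be sketched in miniature. This is a toy illustration only, not the paper's method: the view features, the k-means clustering, and the single background Gaussian standing in for the trained negative matching model are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means: group the query views into k view clusters."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def fit_gaussian(X):
    """Diagonal-covariance Gaussian for one view cluster (a 'query model')."""
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-6            # variance floor for stability
    return mu, var

def log_density(x, mu, var):
    """Log density of one view feature under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def ccfv_like_score(query_views, candidate_views, k=2, w_pos=1.0, w_neg=1.0):
    """Toy CCFV-style score: each candidate view is scored by its best query
    cluster Gaussian (positive evidence), minus its density under a single
    background Gaussian fit to the candidate's own views (a crude stand-in
    for the paper's trained negative matching model)."""
    labels = kmeans(query_views, k)
    models = [fit_gaussian(query_views[labels == j])
              for j in range(k) if (labels == j).sum() > 1]
    pos = np.mean([max(log_density(v, mu, var) for mu, var in models)
                   for v in candidate_views])
    neg_mu, neg_var = fit_gaussian(candidate_views)
    neg = np.mean([log_density(v, neg_mu, neg_var) for v in candidate_views])
    return w_pos * pos - w_neg * neg
```

In this sketch a candidate whose views fall near the query's view clusters receives a higher score than one whose views lie far away, which mirrors the ranking role the combined positive/negative matching model plays in the paper.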
Appears in Collections: Staff Publications
Files in This Item: There are no files associated with this item.