Please use this identifier to cite or link to this item: https://doi.org/10.1007/s00371-008-0294-0
Title: Toward a higher-level visual representation for object-based image retrieval
Authors: Zheng, Y.-T.
Neo, S.-Y. 
Chua, T.-S. 
Tian, Q.
Keywords:
Object-based image retrieval
Visual representation
Issue Date: 2009
Citation: Zheng, Y.-T., Neo, S.-Y., Chua, T.-S., Tian, Q. (2009). Toward a higher-level visual representation for object-based image retrieval. Visual Computer 25(1): 13-23. ScholarBank@NUS Repository. https://doi.org/10.1007/s00371-008-0294-0
Abstract: We propose a higher-level visual representation, the visual synset, for object-based image retrieval beyond visual appearance. The proposed representation improves on the traditional part-based bag-of-words image representation in two ways. First, it strengthens the discriminative power of visual words by constructing an intermediate descriptor, the visual phrase, from frequently co-occurring sets of visual words. Second, to bridge differences in visual appearance and achieve better intra-class invariance, it clusters visual words and phrases into visual synsets based on their class probability distributions. The rationale is that the distribution of a visual word or phrase tends to peak around the object classes to which it belongs. Tests on the Caltech-256 data set show that visual synsets can partially bridge the visual differences between images of the same class and deliver satisfactory retrieval of relevant images with different visual appearances. © 2008 Springer-Verlag.
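As a rough illustration of the pipeline the abstract sketches (not the authors' implementation), the following Python sketch mines hypothetical "visual phrases" from frequently co-occurring visual-word pairs and then groups words and phrases into "visual synsets" by clustering their per-class probability distributions. All function names, the pair-only phrase mining, the support threshold, and the k-means clustering step are assumptions made for illustration.

```python
# Hypothetical sketch of the visual-synset pipeline described in the abstract;
# names, thresholds, and the k-means step are assumptions, not the authors' code.
from collections import Counter
from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans

def mine_visual_phrases(images, min_support=50):
    """Treat each image as a set of visual-word ids; keep word pairs that
    co-occur in at least `min_support` images as candidate visual phrases."""
    pair_counts = Counter()
    for words in images:
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return [pair for pair, n in pair_counts.items() if n >= min_support]

def class_distribution(occurring_images, labels, n_classes):
    """Estimate P(class | term) from the class labels of the images in which
    the term (a visual word or phrase) occurs."""
    counts = np.bincount([labels[i] for i in occurring_images],
                         minlength=n_classes).astype(float)
    return counts / max(counts.sum(), 1.0)

def build_synsets(term_distributions, n_synsets=500):
    """Group terms whose class distributions peak on the same object classes
    into one visual synset (k-means here is an assumed stand-in)."""
    dists = np.vstack(term_distributions)
    return KMeans(n_clusters=n_synsets, n_init=10).fit(dists).labels_
```

The sketch mirrors the abstract's rationale: because a word's or phrase's class distribution peaks around the classes it belongs to, terms with similar distributions can stand in for one another at retrieval time even when their raw appearances differ.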
Source Title: Visual Computer
URI: http://scholarbank.nus.edu.sg/handle/10635/38950
ISSN: 0178-2789
DOI: 10.1007/s00371-008-0294-0
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.
