Please use this identifier to cite or link to this item:
|Title:||Toward a higher-level visual representation for object-based image retrieval|
|Citation:||Zheng, Y.-T., Neo, S.-Y., Chua, T.-S., Tian, Q. (2009). Toward a higher-level visual representation for object-based image retrieval. Visual Computer 25 (1) : 13-23. ScholarBank@NUS Repository. https://doi.org/10.1007/s00371-008-0294-0|
|Abstract:||We propose a higher-level visual representation, the visual synset, for object-based image retrieval beyond visual appearance. The proposed representation improves the traditional part-based bag-of-words image representation in two aspects. First, it strengthens the discriminative power of visual words by constructing an intermediate descriptor, the visual phrase, from frequently co-occurring sets of visual words. Second, to bridge differences in visual appearance and achieve better intra-class invariance, it clusters visual words and phrases into visual synsets based on their class probability distributions. The rationale is that the distribution of a visual word or phrase tends to peak around the object classes to which it belongs. Testing on the Caltech-256 data set shows that visual synsets can partially bridge the visual differences among images of the same class and deliver satisfactory retrieval of relevant images with differing visual appearances. © 2008 Springer-Verlag.|
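The abstract's two-step pipeline — mining frequently co-occurring visual words into phrases, then grouping words and phrases by their class probability distributions — can be sketched with a toy example. This is a minimal illustration, not the paper's implementation: images are assumed to be (visual-word list, class label) pairs, the function names (`mine_phrases`, `class_distribution`, `build_synsets`) are hypothetical, and the synset step here simply groups units by the peak class of their distribution, whereas the paper clusters full distributions.

```python
from collections import Counter, defaultdict
from itertools import combinations

def mine_phrases(images, min_support=2):
    """Find visual-word pairs co-occurring in at least `min_support` images."""
    pair_counts = Counter()
    for words, _label in images:
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return [p for p, c in pair_counts.items() if c >= min_support]

def class_distribution(unit, images):
    """Class probability distribution of a visual word or phrase `unit`."""
    counts = Counter()
    for words, label in images:
        if set(unit) <= set(words):  # unit occurs in this image
            counts[label] += 1
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()} if total else {}

def build_synsets(units, images):
    """Toy synset construction: group units whose distributions peak
    at the same object class (the paper clusters the distributions)."""
    synsets = defaultdict(list)
    for unit in units:
        dist = class_distribution(unit, images)
        if dist:
            peak = max(dist, key=dist.get)
            synsets[peak].append(unit)
    return dict(synsets)

# Four toy images: visual-word ids plus a class label.
images = [([1, 2, 3], "car"), ([1, 2, 4], "car"),
          ([5, 6], "dog"), ([5, 6, 7], "dog")]
phrases = mine_phrases(images)        # → [(1, 2), (5, 6)]
synsets = build_synsets(phrases, images)
```

In this toy run, the word pair (1, 2) recurs in both "car" images and so becomes a phrase whose class distribution peaks at "car"; likewise (5, 6) for "dog". The real system operates on quantized local descriptors and many classes, but the grouping idea is the same.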
|Source Title:||Visual Computer|
|Appears in Collections:||Staff Publications|