Please use this identifier to cite or link to this item: https://doi.org/10.1109/CVPR.2008.4587611
Title: Visual synset: Towards a higher-level visual representation
Authors: Zheng, Y.-T.
Zhao, M.
Neo, S.-Y. 
Chua, T.-S. 
Tian, Q.
Issue Date: 2008
Citation: Zheng, Y.-T., Zhao, M., Neo, S.-Y., Chua, T.-S., & Tian, Q. (2008). Visual synset: Towards a higher-level visual representation. 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR. ScholarBank@NUS Repository. https://doi.org/10.1109/CVPR.2008.4587611
Abstract: We present a higher-level visual representation, the visual synset, for object categorization. The visual synset improves on the traditional bag-of-words representation with better discrimination and invariance power. First, the approach strengthens inter-class discrimination power by constructing an intermediate visual descriptor, the delta visual phrase, from frequently co-occurring visual word-sets with similar spatial context. Second, the approach achieves better intra-class invariance power by clustering delta visual phrases into visual synsets based on their probabilistic 'semantics', i.e., their class probability distributions. Hence, the resulting visual synset can partially bridge the visual differences between images of the same class. Tests on the Caltech-101 and Pascal VOC 05 datasets demonstrate that the proposed image representation achieves good accuracy. ©2008 IEEE.
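The second step of the abstract, grouping delta visual phrases by the similarity of their class probability distributions, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the toy data, the number of clusters, and the use of plain k-means over probability vectors are all assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's method): group hypothetical
# "delta visual phrase" class-probability distributions into "visual synsets"
# by clustering the distributions themselves.
import numpy as np

rng = np.random.default_rng(0)

def cluster_distributions(P, k, iters=50):
    """Cluster rows of P (each a class-probability distribution) into k groups."""
    centers = P[rng.choice(len(P), size=k, replace=False)]
    for _ in range(iters):
        # Assign each phrase to the nearest center (squared distance on probability vectors).
        dists = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean distribution of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = P[labels == j].mean(axis=0)
    return labels

# Toy data: 100 hypothetical delta visual phrases over 5 object classes.
phrase_class_probs = rng.dirichlet(alpha=np.ones(5), size=100)
synset_ids = cluster_distributions(phrase_class_probs, k=10)
print(synset_ids[:20])  # phrases sharing an id would form one "visual synset"
```

The intuition is that two visually different phrases that are both strongly associated with the same object class end up in the same cluster, which is how the synset can bridge intra-class visual variation.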
Source Title: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
URI: http://scholarbank.nus.edu.sg/handle/10635/41222
ISBN: 9781424422432
DOI: 10.1109/CVPR.2008.4587611
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.