Please use this identifier to cite or link to this item: https://doi.org/10.1109/VCIP.2013.6706420
Title: A novel image tag saliency ranking algorithm based on sparse representation
Authors: Wang, C.
Song, Z.
Feng, S.
Lang, C.
Yan, S. 
Keywords: Diverse Density
multi-instance learning
sparse representation
tag saliency ranking
visual attention model
Issue Date: 2013
Source: Wang, C., Song, Z., Feng, S., Lang, C., Yan, S. (2013). A novel image tag saliency ranking algorithm based on sparse representation. IEEE VCIP 2013 - 2013 IEEE International Conference on Visual Communications and Image Processing. ScholarBank@NUS Repository. https://doi.org/10.1109/VCIP.2013.6706420
Abstract: With the explosive growth of web image data, image tag ranking for accurate retrieval from massive image collections has become an active research topic. However, existing ranking approaches remain far from ideal and leave room for improvement. This paper proposes a new image tag saliency ranking algorithm based on sparse representation. We first propagate labels from the image level to the region level via multi-instance learning driven by sparse representation: the target instance from a positive bag is reconstructed as a sparse linear combination of all instances in the training set, and instances with nonzero reconstruction coefficients are considered similar to the target instance. A visual attention model is then applied for tag saliency analysis. Compared with existing approaches, the proposed method achieves better performance. © 2013 IEEE.
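The sparse-representation step described in the abstract can be illustrated with a minimal sketch: a target instance is reconstructed as a sparse linear combination of training instances (here via a plain ISTA solver for the L1-regularized least-squares problem), and instances receiving nonzero coefficients are flagged as similar. All data below is toy data chosen for illustration; the paper's actual method operates on region-level visual features extracted from image bags.

```python
def ista(D, y, lam=0.1, step=0.5, iters=200):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 with ISTA.

    D: list of training-instance feature vectors (dictionary atoms),
    y: target instance feature vector. Returns the sparse coefficients.
    """
    n = len(D)     # number of training instances
    m = len(y)     # feature dimension
    x = [0.0] * n
    for _ in range(iters):
        # residual r = D x - y
        r = [sum(D[j][i] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient of the smooth term: D^T r
        g = [sum(D[j][i] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding (L1 proximal map)
        t = lam * step
        for j in range(n):
            v = x[j] - step * g[j]
            x[j] = (v - t) if v > t else (v + t) if v < -t else 0.0
    return x

# Toy dictionary: three training instances in R^3. The target is close
# to a mix of the first two, so only their coefficients stay nonzero.
D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [0.9, 0.8, 0.0]

coeffs = ista(D, y)
similar = [j for j, c in enumerate(coeffs) if abs(c) > 1e-3]
```

With an orthonormal toy dictionary the ISTA fixed point is just the soft-thresholded projection of the target, so the third instance is correctly excluded from the similar set; in the paper's setting, similarity judgments from these coefficients drive label propagation from image-level tags to regions.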
Source Title: IEEE VCIP 2013 - 2013 IEEE International Conference on Visual Communications and Image Processing
URI: http://scholarbank.nus.edu.sg/handle/10635/83401
ISBN: 9781479902903
DOI: 10.1109/VCIP.2013.6706420
Appears in Collections: Staff Publications

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.