Title: Collaborative visual modeling for automatic image annotation via sparse model coding
Authors: Wang, M.
Li, F.
Wang, M. 
Keywords: Automatic image annotation
Sparse model reconstruction
Visual relatedness
Issue Date: 2012
Citation: Wang, M., Li, F., Wang, M. (2012). Collaborative visual modeling for automatic image annotation via sparse model coding. Neurocomputing 95 : 22-28. ScholarBank@NUS Repository.
Abstract: Building visual models is an important way to detect visual concepts in images. However, owing to visual diversity and uncertainty, estimation based on these models is often unsatisfactory, and most previous methods ignore the visual relatedness among visual models. In this paper, we propose a novel annotation method that exploits the visual relatedness among different concepts through collaborative visual modeling. We propose to approximate a given visual model as a convex combination of other reference models, using ℓ1-penalized regularization to exploit the sparsity underlying the high-dimensional model reconstruction space. The relatedness is captured in the sparse reconstruction coefficients and used to enhance the discriminativeness and robustness of the visual models. We further provide an efficient strategy to learn the coefficients by solving a sparse model reconstruction problem. To the best of our knowledge, this is the first effort to address this problem, so the proposed method has general significance. Experimental results on the benchmark Corel dataset and a Flickr dataset demonstrate the effectiveness of the proposed methods. © 2012 Elsevier B.V.
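The reconstruction step the abstract describes can be sketched as an ℓ1-penalized least-squares problem: approximate a target model vector by a sparse combination of reference model vectors. The sketch below is our own minimal illustration, not the paper's algorithm; the function names are hypothetical, a plain ISTA (iterative soft-thresholding) solver stands in for the paper's learning strategy, and the convex-combination constraint (nonnegative coefficients summing to one) is omitted for simplicity.

```python
import numpy as np

def soft_threshold(x, t):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_model_reconstruction(w, R, lam=0.01, n_iter=2000):
    """Approximate a target model vector w as R @ c, where the columns
    of R are reference model vectors, by running ISTA on
        min_c  0.5 * ||w - R c||^2 + lam * ||c||_1.
    Returns the sparse reconstruction coefficients c.
    """
    c = np.zeros(R.shape[1])
    # Step size = 1 / Lipschitz constant of the gradient (||R||_2^2).
    step = 1.0 / (np.linalg.norm(R, 2) ** 2)
    for _ in range(n_iter):
        grad = R.T @ (R @ c - w)          # gradient of the smooth term
        c = soft_threshold(c - step * grad, step * lam)
    return c

# Toy example: the target is a sparse mix of two reference models,
# and the recovered coefficients concentrate on those two columns.
rng = np.random.default_rng(0)
R = rng.standard_normal((50, 5))          # 5 reference models in R^50
w = 0.7 * R[:, 0] + 0.3 * R[:, 2]         # target built from models 0 and 2
c = sparse_model_reconstruction(w, R)
```

In this toy setting the nonzero entries of `c` identify which reference concepts a given model is related to, which is the role the sparse coefficients play in the paper's relatedness representation.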
Source Title: Neurocomputing
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2011.04.049
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.