Title: Towards multi-semantic image annotation with graph regularized exclusive group Lasso
Keywords: Exclusive group lasso; Multi-semantic image annotation
Citation: Chen, X., Yuan, X.-T., Yan, S., Tang, J., Rui, Y., Chua, T.-S. (2011). Towards multi-semantic image annotation with graph regularized exclusive group Lasso. MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops: 263-272. ScholarBank@NUS Repository. https://doi.org/10.1145/2072298.2072334
Abstract: To bridge the semantic gap between low-level features and human perception, most existing algorithms annotate images with concepts from only one semantic space, e.g., cognitive or affective. Naively combining the outputs from these spaces implicitly forces conditional independence and ignores the correlations among the spaces. In this paper, to exploit the comprehensive semantics of images, we propose a general framework for harmoniously integrating multiple semantics, and investigate the problem of learning to annotate images with training images labeled in two or more correlated semantic spaces, such as fascinating nighttime or exciting cat. This kind of semantic annotation is more oriented to real-world search scenarios. Our proposed approach outperforms the baseline algorithms through the following contributions. 1) Unlike previous methods that annotate images within only one semantic space, our multi-semantic annotation associates each image with labels from multiple semantic spaces. 2) We develop a multi-task linear discriminative model that learns a linear mapping from features to labels. The tasks are correlated by imposing exclusive group lasso regularization for competitive feature selection, and graph Laplacian regularization to address the issue of insufficient training samples. 3) A Nesterov-type smoothing approximation algorithm is presented for efficient optimization of our model. Extensive experiments on the NUS-WIDE-Emotive dataset (56k images) with 8 × 81 emotive-cognitive concepts, and on the Object & Scene datasets from NUS-WIDE, validate the effectiveness of the proposed approach. © 2011 ACM.
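The two regularizers named in the abstract can be sketched as follows. This is a minimal NumPy illustration of the penalty terms, not the authors' implementation; the function names and the per-group index representation are my own assumptions:

```python
import numpy as np

def exclusive_group_lasso_penalty(w, groups):
    """Exclusive group Lasso penalty: the sum over groups of the
    squared L1 norm of the within-group coefficients. Squaring the
    L1 norm makes features *within* a group compete, yielding
    sparse, competitive feature selection inside each group.

    w      : 1-D coefficient vector
    groups : list of index arrays, one per (possibly overlapping) group
    """
    return sum(np.sum(np.abs(w[g])) ** 2 for g in groups)

def graph_laplacian_penalty(W, L):
    """Graph Laplacian smoothness penalty tr(W^T L W): weight
    vectors (columns of W) for labels connected in the label graph
    are pulled toward each other, which compensates for scarce
    training samples.

    W : (d, k) weight matrix, one column per task/label
    L : (k, k) graph Laplacian over the labels
    """
    return np.trace(W.T @ L @ W)
```

Because the exclusive group lasso term is non-smooth, the paper optimizes a Nesterov-type smoothed surrogate of it; the expressions above are the exact (unsmoothed) penalties.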
Source Title: MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops
Appears in Collections: Staff Publications