Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/40296
Title: A learning-based approach for annotating large on-line image collection
Authors: Feng, H.; Chua, T.-S.
Issue Date: 2004
Citation: Feng, H., Chua, T.-S. (2004). A learning-based approach for annotating large on-line image collection. Proceedings - 10th International Multimedia Modelling Conference, MMM 2004: 249-256. ScholarBank@NUS Repository.
Abstract: Several recent works attempt to automatically annotate image collections by exploiting the links between visual information provided by segmented image features and semantic concepts provided by associated text. The main limitation of such approaches, however, is that semantically meaningful segmentation is in general unavailable. This paper proposes a novel statistical learning-based approach to overcome this problem. We employ two different segmentation methods to segment the image into two sets of regions and learn the association between each set of regions and the text concepts. Given a new image, the idea is to first employ a greedy strategy to annotate the image with concepts derived from the different sets of overlapping and possibly conflicting regions. We then incorporate a decision model to disambiguate the learned concepts using the visual features of the overlapping regions. Experiments on a mid-sized image collection demonstrate that our disambiguation approach improves the performance of the system by about 12-16% on average in terms of the F1 measure, as compared to a system that uses only one segmentation method.
Source Title: Proceedings - 10th International Multimedia Modelling Conference, MMM 2004
URI: http://scholarbank.nus.edu.sg/handle/10635/40296
ISBN: 0769520847
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
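The abstract above only outlines the approach, and no files are attached to this record, so the following Python sketch is purely illustrative and not the authors' implementation. It shows one plausible reading of the pipeline: pool per-region concept scores produced by two different segmentation methods, pick concepts greedily, then resolve conflicts between overlapping regions using visual-feature similarity as a stand-in decision model. All names here (Region, greedy_annotate, disambiguate, concept prototypes, cosine similarity, bounding-box overlap) are assumptions introduced for the sketch.

```python
# Hypothetical sketch (not the paper's code): merge concept predictions from two
# segmentation methods and disambiguate conflicts on overlapping regions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import math


@dataclass
class Region:
    """A segmented image region, from either segmentation method."""
    feature: List[float]                      # visual descriptor (assumed, e.g. colour/texture)
    bbox: Tuple[int, int, int, int]           # (x0, y0, x1, y1), used only to detect overlap
    concept_scores: Dict[str, float] = field(default_factory=dict)  # learned concept likelihoods


def overlap(a: Region, b: Region) -> bool:
    """True if the two regions' bounding boxes intersect."""
    ax0, ay0, ax1, ay1 = a.bbox
    bx0, by0, bx1, by1 = b.bbox
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1


def cosine(u: List[float], v: List[float]) -> float:
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(u, v))
    den = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return num / den if den else 0.0


def greedy_annotate(regions: List[Region], max_concepts: int = 5):
    """Greedily keep the highest-scoring (concept, region) pairs across both segmentations."""
    candidates = [(c, r, s) for r in regions for c, s in r.concept_scores.items()]
    candidates.sort(key=lambda t: t[2], reverse=True)
    chosen, used = [], set()
    for concept, region, score in candidates:
        if concept not in used:
            chosen.append((concept, region, score))
            used.add(concept)
        if len(chosen) == max_concepts:
            break
    return chosen


def disambiguate(annotations, concept_prototypes: Dict[str, List[float]]) -> List[str]:
    """Stand-in decision step: when two chosen concepts come from overlapping regions,
    keep the one whose region features better match an assumed concept prototype."""
    drop = set()
    for i in range(len(annotations)):
        ci, ri, _ = annotations[i]
        for j in range(i + 1, len(annotations)):
            cj, rj, _ = annotations[j]
            if overlap(ri, rj):
                si = cosine(ri.feature, concept_prototypes.get(ci, ri.feature))
                sj = cosine(rj.feature, concept_prototypes.get(cj, rj.feature))
                drop.add(cj if si >= sj else ci)
    return [c for c, _, _ in annotations if c not in drop]


# Example usage (regions_a / regions_b from the two segmentation methods are assumed):
# regions = regions_a + regions_b
# labels = disambiguate(greedy_annotate(regions), concept_prototypes)
```

The paper's actual decision model is not described in this record; the cosine-against-prototype rule above is simply a placeholder for whatever classifier the authors trained on the visual features of the overlapping regions.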