Please use this identifier to cite or link to this item:
Title: An Adaptive Image Content Representation and Segmentation Approach to Automatic Image Annotation
Authors: Shi, R.; Feng, H.; Chua, T.-S.; Lee, C.-H.
Issue Date: 2004
Citation: Shi, R., Feng, H., Chua, T.-S., Lee, C.-H. (2004). An Adaptive Image Content Representation and Segmentation Approach to Automatic Image Annotation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 3115: 545-554. ScholarBank@NUS Repository.
Abstract: Automatic image annotation has recently been studied intensively for content-based image retrieval. In this paper, we propose a novel approach to automatic image annotation based on two key components: (a) an adaptive visual feature representation of image contents based on matching pursuit algorithms; and (b) an adaptive two-level segmentation method. They are used to address the important issues of segmenting images into meaningful units and representing the contents of each unit with discriminative visual features. Using a set of about 800 training and testing images, we compare these techniques in image retrieval against other popular segmentation schemes and traditional non-adaptive feature representation methods. Our preliminary results indicate that the proposed approach outperforms other competing systems based on the popular Blobworld segmentation scheme and other prevailing feature representation methods, such as DCT and wavelets. In particular, our system achieves an F1 measure of over 50% for the image annotation task. © Springer-Verlag 2004.
Source Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
URI: http://scholarbank.nus.edu.sg/handle/10635/38933
ISSN: 0302-9743
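The abstract's component (a) builds the feature representation on matching pursuit. The paper's own formulation is not given in this record, but the generic greedy matching pursuit algorithm it refers to can be sketched as follows; the function name, dictionary, and iteration count here are illustrative assumptions, and a unit-norm dictionary is assumed.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit over a dictionary of unit-norm atoms (columns).

    At each step, project the residual onto every atom, subtract the
    best-matching atom's contribution, and record its coefficient.
    Illustrative sketch only, not the paper's implementation.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        # Inner product of the residual with every atom.
        projections = dictionary.T @ residual
        # Atom with the largest absolute correlation.
        k = np.argmax(np.abs(projections))
        coeffs[k] += projections[k]
        residual = residual - projections[k] * dictionary[:, k]
    return coeffs, residual
```

The sparse coefficient vector produced this way adapts to the image content: only the atoms that best explain a region's signal receive non-zero weights, unlike fixed non-adaptive transforms such as DCT or wavelets.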
Appears in Collections: Staff Publications