Please use this identifier to cite or link to this item: https://doi.org/10.1109/CVPRW.2009.5206518
DC Field: Value
dc.title: A revisit of generative model for automatic image annotation using Markov random fields
dc.contributor.author: Xiang, Y.
dc.contributor.author: Zhou, X.
dc.contributor.author: Chua, T.-S.
dc.contributor.author: Ngo, C.-W.
dc.date.accessioned: 2013-07-04T08:26:55Z
dc.date.available: 2013-07-04T08:26:55Z
dc.date.issued: 2009
dc.identifier.citation: Xiang, Y., Zhou, X., Chua, T.-S., Ngo, C.-W. (2009). A revisit of generative model for automatic image annotation using Markov random fields. 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009: 1153-1160. ScholarBank@NUS Repository. https://doi.org/10.1109/CVPRW.2009.5206518
dc.identifier.isbn: 9781424439935
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/41411
dc.description.abstract: Much research effort on Automatic Image Annotation (AIA) has focused on generative models, owing to their well-formed theory and competitive performance compared with many carefully designed and sophisticated methods. However, when semantic context is considered for annotation, such models suffer from weak learning ability, mainly because the traditional generative model lacks the parameter settings and an appropriate learning strategy for characterizing semantic context. In this paper, we present a new approach based on Multiple Markov Random Fields (MRFs) for semantic context modeling and learning. Differing from previous MRF-related AIA approaches, we systematically explore optimal parameter estimation and model inference to leverage the learning power of the traditional generative model. Specifically, we propose a new potential function for site modeling based on the generative model and build a local graph for each annotation keyword. Parameter estimation and model inference are performed in a locally optimal sense. We conduct experiments on commonly used benchmarks. On the Corel 5000 images [3], we achieve recall of 0.36 and precision of 0.31 over 263 keywords, a significant improvement over the best reported results of current state-of-the-art approaches. © 2009 IEEE.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/CVPRW.2009.5206518
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1109/CVPRW.2009.5206518
dc.description.sourcetitle: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009
dc.description.page: 1153-1160
dc.identifier.isiut: NOT_IN_WOS
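The abstract above describes combining a generative model (used as the unary, per-site potential) with per-keyword local MRF graphs that capture semantic context among keywords. The following is a minimal, hypothetical Python sketch of that general idea only, not the authors' algorithm: the Gaussian unary term, the co-occurrence-based context term, the iteration scheme, and all names (unary_log_potential, context_term, annotate) are illustrative assumptions rather than anything specified in the paper.

import numpy as np

def unary_log_potential(region_feats, keyword_means, keyword_var=1.0):
    # Log-likelihood of each keyword under an isotropic Gaussian centred on a
    # per-keyword prototype; an assumed stand-in for the generative site potential.
    diffs = region_feats[:, None, :] - keyword_means[None, :, :]   # (R, K, D)
    log_lik = -0.5 * np.sum(diffs ** 2, axis=-1) / keyword_var     # (R, K)
    return log_lik.max(axis=0)                                     # best-matching region per keyword -> (K,)

def context_term(scores, cooccurrence, weight=0.5):
    # Pairwise "semantic context" term: keywords that co-occur with currently
    # high-scoring keywords get a boost (a crude stand-in for the local graphs).
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return weight * cooccurrence.dot(probs)

def annotate(region_feats, keyword_means, cooccurrence, top_k=5, iterations=3):
    # Iteratively combine the generative (unary) term with the context (pairwise) term.
    unary = unary_log_potential(region_feats, keyword_means)
    scores = unary.copy()
    for _ in range(iterations):
        scores = unary + context_term(scores, cooccurrence)
    return np.argsort(scores)[::-1][:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_regions, num_keywords, dim = 6, 20, 8
    feats = rng.normal(size=(num_regions, dim))          # segmented-region features (synthetic)
    prototypes = rng.normal(size=(num_keywords, dim))    # per-keyword feature prototypes (synthetic)
    cooc = rng.random((num_keywords, num_keywords))
    cooc = (cooc + cooc.T) / 2.0                         # symmetric keyword co-occurrence matrix
    print("top keyword indices:", annotate(feats, prototypes, cooc))

In the paper itself, the potential functions, local-graph construction, and locally optimal parameter estimation are defined formally; this sketch only mirrors the high-level structure of scoring keywords by a generative unary term plus a context term.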
Appears in Collections: Staff Publications
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.