|Title:||Enhancing image annotation by integrating concept ontology and text-based Bayesian learning model|
|Keywords:||Automatic image annotation|
|Source:||Shi, R., Lee, C.-H., Chua, T.-S. (2007). Enhancing image annotation by integrating concept ontology and text-based Bayesian learning model. Proceedings of the ACM International Multimedia Conference and Exhibition: 341-344. ScholarBank@NUS Repository. https://doi.org/10.1145/1291233.1291307|
|Abstract:||Automatic image annotation (AIA) has been a hot research topic in recent years since it can be used to support concept-based image retrieval. However, most existing AIA models depend heavily on the availability of a large number of labeled training samples, which require significant human labeling effort. In this paper, we propose a novel learning framework that integrates a text-based Bayesian model (TBM) and a concept ontology to effectively expand the training set of each concept class without the need for additional human labeling effort or for collecting additional training images from other data sources. The basic idea lies in exploiting the text information in the training set to provide additional effective annotations for training images, so that the training data for each concept class can be augmented. In this study we employ Bayesian Hierarchical Multinomial Mixture Models (BHMMMs) as our baseline AIA model. By combining the additional annotations obtained from the TBM into each concept class in the training phase, the performance of BHMMMs can be significantly improved on the Corel image dataset with 263 testing concepts, as compared to state-of-the-art AIA models under the same experimental configurations. Copyright 2007 ACM.|
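The abstract only sketches the idea of using a text-based Bayesian model to propose extra annotations that augment each concept's training set; the paper's actual TBM and BHMMM formulations are not reproduced here. A minimal illustrative sketch of the general principle, using a multinomial naive Bayes text model over toy data (all data, names, and the `margin` threshold are assumptions, not the authors' method):

```python
import math
from collections import Counter, defaultdict

# Toy corpus: each training image has a text description and an
# (incomplete) set of concept labels. Purely illustrative data.
images = [
    ("sky water boat harbor", {"boat"}),
    ("sky clouds sun beach", {"beach"}),
    ("water boat fisherman net", {"boat"}),
    ("beach sand water sun", {"beach"}),
]

# Per-concept word counts for a multinomial naive Bayes text model.
word_counts = defaultdict(Counter)
concept_totals = Counter()
vocab = set()
for text, labels in images:
    words = text.split()
    vocab.update(words)
    for c in labels:
        word_counts[c].update(words)
        concept_totals[c] += 1

def log_score(text, concept):
    """log P(concept) + log P(text | concept) with Laplace smoothing."""
    total = sum(word_counts[concept].values())
    lp = math.log(concept_totals[concept] / len(images))
    for w in text.split():
        lp += math.log((word_counts[concept][w] + 1) / (total + len(vocab)))
    return lp

def expand_annotations(text, labels, margin=2.0):
    """Add any concept whose text score is within `margin` of the best one,
    augmenting the image's original label set (the margin is a stand-in for
    whatever acceptance criterion the real TBM would use)."""
    scores = {c: log_score(text, c) for c in concept_totals}
    best = max(scores.values())
    extra = {c for c, s in scores.items() if best - s <= margin}
    return labels | extra

for text, labels in images:
    print(text, "->", sorted(expand_annotations(text, labels)))
```

The expanded label sets would then serve as the augmented training data for each concept class, which is the role the TBM output plays relative to the BHMMM baseline in the paper.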
|Source Title:||Proceedings of the ACM International Multimedia Conference and Exhibition|
|Appears in Collections:||Staff Publications|