Title: Auto-annotation of multimedia contents: Theory and application
Keywords: Bootstrapping, Co-training, Image Annotation, Supervised Learning, Image Retrieval, Multimedia Content Annotation
Issue Date: 28-Jun-2005
Citation: FENG HUAMIN (2005-06-28). Auto-annotation of multimedia contents: Theory and application. ScholarBank@NUS Repository.
Abstract: In this thesis, we propose a learning-based framework for auto-annotation of multimedia contents. The framework is open and is designed to incorporate different base learners, including traditional single-view machine learning and bootstrapping approaches. To evaluate the framework, we take images as the case study. We first incorporate single-view learners and take them as the baseline. We then incorporate a bootstrapping cum active learning approach into the framework. The bootstrapping is based on co-training, which trains two classifiers, representing two orthogonal views of the problem, to predict the concepts to be assigned to each image region. Our experimental results show that the bootstrapping cum active learning approach can achieve a performance comparable to or marginally better than the traditional single-view supervised learning approaches, while offering the added advantage of requiring only a small number of training samples. Finally, we extend the framework to heterogeneous media on the Web and explore web image annotation by fusing both textual and visual features. The resulting system can be used effectively to annotate large collections of images on the Web.
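The co-training idea the abstract describes can be sketched roughly as follows: two classifiers, each trained on one view of the data, take turns pseudo-labeling the unlabeled samples they are most confident about, growing the shared training set. This is a minimal illustrative sketch of generic co-training, not the thesis's actual method or code; the nearest-centroid classifiers, the 1-D toy features, and all names here are assumptions made for the example.

```python
# Illustrative co-training sketch: two views, two classifiers, each round
# the most confidently predicted unlabeled samples are pseudo-labeled and
# added to the shared training set. Toy 1-D features stand in for the
# textual/visual feature views mentioned in the abstract.

def centroid_classifier(samples):
    """Train a nearest-centroid classifier on (feature, label) pairs."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}

    def predict(x):
        # Return (label, confidence); confidence is the margin between
        # the nearest and second-nearest class centroids.
        dists = sorted((abs(x - c), y) for y, c in centroids.items())
        margin = dists[1][0] - dists[0][0] if len(dists) > 1 else dists[0][0]
        return dists[0][1], margin

    return predict

def co_train(labeled, unlabeled, rounds=5, grow=2):
    """labeled: list of ((view1, view2), label); unlabeled: list of (view1, view2)."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        c1 = centroid_classifier([(v1, y) for (v1, v2), y in labeled])
        c2 = centroid_classifier([(v2, y) for (v1, v2), y in labeled])
        # Each view's classifier votes on every unlabeled sample; the
        # most confident predictions become pseudo-labels.
        scored = []
        for x in unlabeled:
            y1, m1 = c1(x[0])
            y2, m2 = c2(x[1])
            scored.append((max(m1, m2), x, y1 if m1 >= m2 else y2))
        scored.sort(reverse=True, key=lambda t: t[0])
        for _, x, y in scored[:grow]:
            labeled.append((x, y))
            unlabeled.remove(x)
    return (centroid_classifier([(v1, y) for (v1, v2), y in labeled]),
            centroid_classifier([(v2, y) for (v1, v2), y in labeled]))

# Toy data: both views are redundant indicators of the same concept.
labeled = [((0.1, 0.2), "grass"), ((0.9, 0.8), "sky")]
unlabeled = [(0.15, 0.1), (0.2, 0.25), (0.85, 0.9), (0.95, 0.7)]
c1, c2 = co_train(labeled, unlabeled)
print(c1(0.12)[0], c2(0.88)[0])  # prints: grass sky
```

Only two labeled samples seed the process; the unlabeled pool supplies the rest, which is the small-training-set advantage the abstract claims for the bootstrapping approach.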
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
- Contents of the thesis (huamin).pdf (51.88 kB, Adobe PDF)
- thesis_body (huamin).pdf (1.07 MB, Adobe PDF)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.