Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/15829
Title: A multi-resolution multi-source and multi-modal (M3) transductive framework for concept detection in news video
Authors: WANG GANG
Keywords: Domain Knowledge, Unlabeled Data, Text Semantics, Multi-resolution analysis, Transductive Learning, Bootstrapping.
Issue Date: 26-May-2009
Source: WANG GANG (2009-05-26). A multi-resolution multi-source and multi-modal (M3) transductive framework for concept detection in news video. ScholarBank@NUS Repository.
Abstract: We study the problem of detecting concepts in news video. Most existing algorithms for news video concept detection are based on single-resolution (shot), single-source (training data), multi-modal fusion methods under a supervised inductive inference framework. In this thesis, we present a novel multi-resolution, multi-source and multi-modal transductive learning framework. Because different modal features work well only at certain temporal resolutions, and different resolutions exhibit different types of semantics, we perform multi-resolution analysis at the shot, multimedia-discourse, and story levels to capture these semantics. Our multi-source inference model draws on knowledge not only from the training data but also from other online information resources. We perform transductive inference to better capture the distributions of both the test data and specific training cases when training the classifiers. We evaluate our framework on the TRECVID 2004 dataset; experimental results demonstrate that our approach is effective.
URI: http://scholarbank.nus.edu.sg/handle/10635/15829
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
File: final_thesis_wanggang_nus_phdx.pdf
Size: 2.11 MB
Format: Adobe PDF
Access Settings: OPEN

Page view(s): 273 (checked on Dec 11, 2017)
Download(s): 249 (checked on Dec 11, 2017)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.