Title: Combining Multimodal External Resources for Event-Based News Video Retrieval and Question Answering
Keywords: News Video Retrieval; Question Answering
Issue Date: 25-Jul-2008
Citation: NEO SHI YONG (2008-07-25). combining multimodal external resources for event-based news video retrieval and question answering. ScholarBank@NUS Repository.
Abstract: The ever-increasing amount of multimedia data available online creates an urgent need for methods to index this information and support effective retrieval by users. In recent years, we have observed a gradual shift from performing retrieval based solely on analyzing one media source at a time to fusing diverse knowledge sources from correlated media types, context, and language resources. In this work, we develop an event-based retrieval model that acts as a principled framework for combining these diverse knowledge sources for news video retrieval. We employ various online news websites and news blogs to supplement details that are not available in the news video, and extract innate relationships between different content entities during data clustering. The event-based retrieval uses query-class-dependent models, which automatically discover fusion parameters for combining multimodal features based on previous retrieval results and predict parameters for unseen queries.
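The query-class-dependent fusion described in the abstract can be sketched as a weighted linear combination of per-modality scores, where the weights depend on the query's class. This is a minimal illustrative sketch, not the thesis's actual model: the class names, weight values, and function names below are all assumptions for demonstration.

```python
# Illustrative sketch of query-class-dependent multimodal fusion.
# Class names and weights are hypothetical, assumed learned offline
# from previous retrieval results (as the abstract describes).
CLASS_WEIGHTS = {
    "person":  {"text": 0.5, "visual": 0.3, "audio": 0.2},
    "sports":  {"text": 0.2, "visual": 0.6, "audio": 0.2},
    "general": {"text": 0.4, "visual": 0.4, "audio": 0.2},
}

def fuse_scores(query_class, modality_scores):
    """Linearly combine per-modality relevance scores using the weights
    for the query's class; fall back to 'general' for unseen classes."""
    weights = CLASS_WEIGHTS.get(query_class, CLASS_WEIGHTS["general"])
    return sum(weights[m] * s for m, s in modality_scores.items())

def rank_shots(query_class, shots):
    """Rank candidate video shots (shot id -> per-modality scores)
    by their fused relevance score, best first."""
    return sorted(shots,
                  key=lambda sid: fuse_scores(query_class, shots[sid]),
                  reverse=True)
```

For a visually oriented query class such as "sports", a shot strong in the visual modality outranks a shot strong in text, and vice versa for a "person" query, showing how the class-dependent weights change the final ranking.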
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
File: thesis-sub.pdf (2.77 MB, Adobe PDF)






Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.