Title: Combining text and audio-visual features in video indexing
Citation: Chang, S.-F., Manmatha, R., Chua, T.-S. (2005). Combining text and audio-visual features in video indexing. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, V: V1005-V1008. ScholarBank@NUS Repository. https://doi.org/10.1109/ICASSP.2005.1416476
Abstract: We discuss the opportunities, state of the art, and open research issues in using multi-modal features in video indexing. Specifically, we focus on how imperfect text data obtained by automatic speech recognition (ASR) may be used to help solve challenging problems, such as story segmentation, concept detection, retrieval, and topic clustering. We review the frameworks and machine learning techniques that are used to fuse the text features with audio-visual features. Case studies showing promising performance will be described, primarily in the broadcast news video domain. © 2005 IEEE.
Source Title: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Appears in Collections: Staff Publications
Files in This Item: There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.