Title: Use of generalized pattern model for video annotation
Authors: Xiao, Y. 
Chua, T.-S. 
Chaisorn, L.
Lee, C.-H.
Issue Date: 2007
Citation: Xiao, Y., Chua, T.-S., Chaisorn, L., Lee, C.-H. (2007). Use of generalized pattern model for video annotation. Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007: 819-822. ScholarBank@NUS Repository.
Abstract: This paper proposes an integrated framework that combines intra-shot and temporal inter-shot sequence analysis based on visual features to find stable patterns for video annotation. At the shot level, we perform multi-stage kNN classification using global visual features to identify good candidate shots containing the concept. At the sequence level, we aim to find patterns of shot sequences around candidate shots with consistent statistical characteristics and dynamics. We discretize the shot contents into a fixed set of tokens, transforming the high-dimensional continuous video streams into tractable token sequences. We then extend the soft matching model to reveal video sequence patterns and flexibly match the patterns around candidate shots. Finally, we combine the local shot matching method and the generalized pattern model using both visual and text features. Experimental results on the TRECVID 2006 dataset demonstrate that the proposed approach is effective. © 2007 IEEE.
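The two shot-level steps the abstract describes, discretizing shot features into a fixed token vocabulary and selecting candidate shots by kNN, can be illustrated with a minimal sketch. This is a toy reconstruction, not the paper's implementation: the centroids, feature vectors, and single-stage majority-vote kNN below are illustrative assumptions, whereas the paper uses richer global visual features and a multi-stage classifier.

```python
# Hedged sketch: (1) discretize shot feature vectors into tokens via
# nearest-centroid quantization, turning a video stream into a token
# sequence; (2) flag candidate shots with a simple kNN majority vote.
# All data here is toy data; the paper's features and multi-stage kNN
# pipeline are more elaborate.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def discretize(shots, centroids):
    """Map each shot feature vector to the token (index) of its nearest centroid."""
    return [min(range(len(centroids)), key=lambda i: euclidean(s, centroids[i]))
            for s in shots]

def knn_candidate(query, labeled_shots, k=3):
    """Majority vote over the k nearest labeled shots (1 = contains concept)."""
    nearest = sorted(labeled_shots, key=lambda fl: euclidean(query, fl[0]))[:k]
    votes = sum(1 for _, label in nearest if label == 1)
    return 1 if votes > k // 2 else 0

# Toy token alphabet of size 2: centroids for "dark" and "bright" shots.
centroids = [(0.0, 0.0), (1.0, 1.0)]
shots = [(0.1, 0.2), (0.9, 1.1), (0.2, 0.0)]
tokens = discretize(shots, centroids)  # -> [0, 1, 0]

# Toy labeled shots for one concept; query is classified by 3-NN vote.
labeled = [((0.0, 0.1), 1), ((0.1, 0.0), 1), ((1.0, 0.9), 0)]
candidate = knn_candidate((0.05, 0.05), labeled, k=3)  # -> 1
```

The resulting token sequence is what the sequence-level stage would then mine for stable patterns around the candidate shots.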
Source Title: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007
ISBN: 1424410177
Appears in Collections: Staff Publications
