DC Field: Value
dc.title: Use of generalized pattern model for video annotation
dc.contributor.author: Xiao, Y.
dc.contributor.author: Chua, T.-S.
dc.contributor.author: Chaisorn, L.
dc.contributor.author: Lee, C.-H.
dc.identifier.citation: Xiao, Y., Chua, T.-S., Chaisorn, L., Lee, C.-H. (2007). Use of generalized pattern model for video annotation. Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007: 819-822. ScholarBank@NUS Repository.
dc.description.abstract: This paper proposes an integrated framework that combines intra-shot and temporal inter-shot sequence analysis based on visual features to find stable patterns for video annotation. At the shot level, we perform multi-stage kNN classification using global visual features to identify good candidate shots containing the concept. At the sequence level, we aim to find patterns of shot sequences around candidate shots with consistent statistical characteristics and dynamics. We discretize the shot contents into a fixed set of tokens, transforming the high-dimensional continuous video streams into tractable token sequences. We then extend the soft matching model to reveal video sequence patterns and flexibly match the patterns around candidate shots. We combine the local shot matching method and the generalized pattern model using both visual and text features. Experimental results on the TRECVID 2006 dataset demonstrate that the proposed approach is effective. © 2007 IEEE.
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.sourcetitle: Proceedings of the 2007 IEEE International Conference on Multimedia and Expo, ICME 2007
Appears in Collections: Staff Publications
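The abstract describes two stages that can be illustrated concretely: kNN classification of shots by global visual features to select candidates, and discretization of shot features into a fixed token vocabulary so that video streams become token sequences. The sketch below is a minimal illustration of those two ideas only; the function names, distance metric, and parameters are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch (not the paper's implementation) of:
# (1) kNN selection of candidate shots from global visual features, and
# (2) discretizing shot features into a fixed set of tokens via a codebook.
import numpy as np

def knn_candidates(shot_feats, labeled_feats, labels, k=5, threshold=0.5):
    """Score each shot by the fraction of its k nearest labeled
    neighbours (Euclidean distance) that are positive for the concept;
    keep shots whose score reaches the threshold."""
    candidates = []
    for i, f in enumerate(shot_feats):
        dists = np.linalg.norm(labeled_feats - f, axis=1)
        nearest = np.argsort(dists)[:k]
        score = np.mean([labels[j] for j in nearest])
        if score >= threshold:
            candidates.append(i)
    return candidates

def tokenize(shot_feats, codebook):
    """Map each shot's feature vector to the index of its nearest
    codebook centre, turning a video into a discrete token sequence."""
    tokens = []
    for f in shot_feats:
        dists = np.linalg.norm(codebook - f, axis=1)
        tokens.append(int(np.argmin(dists)))
    return tokens
```

In this sketch the codebook would come from clustering training-shot features (e.g. k-means); the resulting token sequences are what a soft sequence-matching model, like the one the paper extends, would operate on.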

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.