Title: A distribution based video representation for human action recognition
Authors: Song, Y.
Tang, S.
Zheng, Y.-T.
Chua, T.-S. 
Zhang, Y.
Lin, S.
Keywords: Human action recognition
Information-theoretic video matching
Probabilistic video representation
Issue Date: 2010
Citation: Song, Y., Tang, S., Zheng, Y.-T., Chua, T.-S., Zhang, Y., Lin, S. (2010). A distribution based video representation for human action recognition. 2010 IEEE International Conference on Multimedia and Expo, ICME 2010 : 772-777. ScholarBank@NUS Repository.
Abstract: Most current research on human action recognition in videos uses bag-of-words (BoW) representations based on vector quantization of local spatio-temporal features, owing to the simplicity and good performance of such representations. In contrast to BoW schemes, this paper explores a localized, continuous, and probabilistic video representation. Specifically, the proposed representation encodes the visual and motion information of an ensemble of local spatio-temporal (ST) features of a video into a distribution estimated by a generative probabilistic model such as the Gaussian Mixture Model. Furthermore, this probabilistic video representation naturally gives rise to an information-theoretic distance metric between videos. This makes the representation readily applicable as input to most discriminative classifiers, such as nearest neighbor schemes and kernel methods. Experiments on two datasets, KTH and UCF sports, show that the proposed approach delivers promising results. © 2010 IEEE.
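The abstract describes representing a video by a distribution fitted over its local spatio-temporal features and comparing videos with an information-theoretic distance. As a rough illustration of that idea (not the paper's actual method), the sketch below fits a single multivariate Gaussian per video instead of a full mixture, and uses a symmetrized Kullback-Leibler divergence as the distance; the paper's exact model and metric may differ.

```python
import numpy as np

def fit_gaussian(features):
    """Fit one Gaussian to a video's (n, d) matrix of local ST descriptors.

    A stand-in for the paper's generative model (e.g. a GMM); a small
    ridge keeps the covariance well-conditioned.
    """
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mu, cov

def kl_gauss(p, q):
    """Closed-form KL divergence KL(p || q) between two Gaussians."""
    mu0, s0 = p
    mu1, s1 = q
    d = mu0.shape[0]
    s1_inv = np.linalg.inv(s1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(s0)
    _, logdet1 = np.linalg.slogdet(s1)
    return 0.5 * (np.trace(s1_inv @ s0) + diff @ s1_inv @ diff
                  - d + logdet1 - logdet0)

def video_distance(p, q):
    """Symmetrized KL as an information-theoretic video distance."""
    return kl_gauss(p, q) + kl_gauss(q, p)
```

Such a distance plugs directly into a nearest-neighbor classifier (label a test video by the training video with the smallest `video_distance`), or into a kernel method via, e.g., `exp(-gamma * video_distance(p, q))`.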
Source Title: 2010 IEEE International Conference on Multimedia and Expo, ICME 2010
ISBN: 9781424474912
DOI: 10.1109/ICME.2010.5582550
Appears in Collections:Staff Publications

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.