Please use this identifier to cite or link to this item: https://doi.org/10.1007/s11042-011-0748-7
DC Field: Value
dc.title: Exploring probabilistic localized video representation for human action recognition
dc.contributor.author: Song, Y.
dc.contributor.author: Tang, S.
dc.contributor.author: Zheng, Y.-T.
dc.contributor.author: Chua, T.-S.
dc.contributor.author: Zhang, Y.
dc.contributor.author: Lin, S.
dc.date.accessioned: 2014-07-04T03:09:37Z
dc.date.available: 2014-07-04T03:09:37Z
dc.date.issued: 2012
dc.identifier.citation: Song, Y., Tang, S., Zheng, Y.-T., Chua, T.-S., Zhang, Y., Lin, S. (2012). Exploring probabilistic localized video representation for human action recognition. Multimedia Tools and Applications 58 (3): 663-685. ScholarBank@NUS Repository. https://doi.org/10.1007/s11042-011-0748-7
dc.identifier.issn: 1573-7721
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/77858
dc.description.abstract: In recent years, bag-of-words (BoW) video representations have achieved promising results in human action recognition in videos. By vector quantizing local spatio-temporal (ST) features, the BoW video representation brings simplicity and efficiency, but also limitations. First, the discretization of the feature space in BoW inevitably results in ambiguity and information loss in the video representation. Second, there is no universal codebook for the BoW representation; the codebook must be rebuilt whenever the video corpus changes. To tackle these issues, this paper explores a localized, continuous and probabilistic video representation. Specifically, the proposed representation encodes the visual and motion information of an ensemble of local ST features of a video into a distribution estimated by a generative probabilistic model. Furthermore, the probabilistic video representation naturally gives rise to an information-theoretic distance metric between videos. This makes the representation readily applicable to most discriminative classifiers, such as nearest-neighbor schemes and kernel-based classifiers. Experiments on two datasets, KTH and UCF Sports, show that the proposed approach delivers promising results. © 2011 Springer Science+Business Media, LLC.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/s11042-011-0748-7
dc.source: Scopus
dc.subject: Human action recognition
dc.subject: Information-theoretic video matching
dc.subject: Probabilistic video representation
dc.type: Article
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1007/s11042-011-0748-7
dc.description.sourcetitle: Multimedia Tools and Applications
dc.description.volume: 58
dc.description.issue: 3
dc.description.page: 663-685
dc.description.coden: MTAPF
dc.identifier.isiut: 000303507900010
Appears in Collections: Staff Publications
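
The abstract above names a generative probabilistic model and an information-theoretic distance metric but does not, on its own, fix the concrete choices. The following is a minimal sketch, not the authors' implementation: it assumes a Gaussian mixture model (GMM) fitted per video to local ST descriptors, a Monte-Carlo-approximated symmetrized KL divergence as the distance, and a nearest-neighbour classifier on top; the function names and synthetic descriptors are hypothetical stand-ins.

# Minimal sketch (assumed design, not the paper's code): each video's local
# spatio-temporal (ST) descriptors are modelled by a GMM; videos are compared
# with a symmetrized KL divergence estimated by Monte Carlo sampling (no
# closed form exists between GMMs); classification is nearest-neighbour.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_video_model(st_descriptors, n_components=8, seed=0):
    """Fit a GMM to one video's local ST descriptors (array of shape N x D)."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag",
                          random_state=seed)
    gmm.fit(st_descriptors)
    return gmm

def symmetric_kl(gmm_p, gmm_q, n_samples=2000):
    """Monte Carlo estimate of KL(p||q) + KL(q||p) between two fitted GMMs."""
    xp, _ = gmm_p.sample(n_samples)
    xq, _ = gmm_q.sample(n_samples)
    kl_pq = np.mean(gmm_p.score_samples(xp) - gmm_q.score_samples(xp))
    kl_qp = np.mean(gmm_q.score_samples(xq) - gmm_p.score_samples(xq))
    return kl_pq + kl_qp

def nearest_neighbour_label(query_gmm, train_gmms, train_labels):
    """Label a query video by its closest training video under the KL distance."""
    dists = [symmetric_kl(query_gmm, g) for g in train_gmms]
    return train_labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for real ST descriptors (e.g. gradient/flow histograms
    # around interest points): two "action classes" with different statistics.
    videos = [rng.normal(loc=c, scale=1.0, size=(300, 32)) for c in (0.0, 0.0, 3.0, 3.0)]
    labels = ["walking", "walking", "running", "running"]
    models = [fit_video_model(v) for v in videos]
    query = fit_video_model(rng.normal(loc=3.0, scale=1.0, size=(300, 32)))
    print(nearest_neighbour_label(query, models, labels))  # expected: "running"

The same KL-based distance can also be turned into a kernel, e.g. exp(-d), for a kernel-based classifier such as an SVM, the other classifier family mentioned in the abstract.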

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.