Please use this identifier to cite or link to this item: https://doi.org/10.1109/TCSVT.2012.2226526
DC Field                    Value
dc.title                    Detecting group activities with multi-camera context
dc.contributor.author       Zha, Z.-J.
dc.contributor.author       Zhang, H.
dc.contributor.author       Wang, M.
dc.contributor.author       Luan, H.
dc.contributor.author       Chua, T.-S.
dc.date.accessioned         2014-07-04T03:09:22Z
dc.date.available           2014-07-04T03:09:22Z
dc.date.issued              2013
dc.identifier.citation      Zha, Z.-J., Zhang, H., Wang, M., Luan, H., Chua, T.-S. (2013). Detecting group activities with multi-camera context. IEEE Transactions on Circuits and Systems for Video Technology, 23(5): 856-869. ScholarBank@NUS Repository. https://doi.org/10.1109/TCSVT.2012.2226526
dc.identifier.issn          1051-8215
dc.identifier.uri           http://scholarbank.nus.edu.sg/handle/10635/77839
dc.description.abstract     Detecting human group activities in multi-camera CCTV surveillance videos is a pressing demand in smart surveillance. Previous work on this topic relies mainly on camera topology inference, which is hard to apply to real-world unconstrained surveillance videos. In this paper, we propose a new approach for multi-camera group activity detection. Our approach simultaneously exploits intra-camera and inter-camera contexts without topology inference. Specifically, a discriminative graphical model with hidden variables is developed. The intra-camera and inter-camera contexts are characterized by the structure of the hidden variables; by automatically optimizing this structure, the model explores the contexts effectively. Furthermore, we propose a new spatiotemporal feature, named vigilant area (VA), to characterize the quantity and appearance of the motion in an area. This feature is effective for representing group activities and is easy to extract from dynamic and crowded scenes. We evaluate the proposed VA feature and discriminative graphical model extensively on two real-world multi-camera surveillance video data sets, including a public corpus consisting of 2.5 h of video and a 468-h video collection, which, to the best of our knowledge, is the largest video collection ever used in human activity detection. The experimental results demonstrate the effectiveness of our approach. © 1991-2012 IEEE. [An illustrative sketch of such an area-based motion feature follows the metadata table below.]
dc.description.uri          http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/TCSVT.2012.2226526
dc.source                   Scopus
dc.subject                  Activity detection
dc.subject                  context
dc.subject                  group activity
dc.subject                  human activity
dc.type                     Article
dc.contributor.department   COMPUTER SCIENCE
dc.description.doi          10.1109/TCSVT.2012.2226526
dc.description.sourcetitle  IEEE Transactions on Circuits and Systems for Video Technology
dc.description.volume       23
dc.description.issue        5
dc.description.page         856-869
dc.description.coden        ITCTE
dc.identifier.isiut         000318697600010
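
Note on the vigilant area (VA) feature mentioned in the abstract: the paper itself is not reproduced in this record, so the following is only a minimal, hypothetical sketch of an area-based motion descriptor in that spirit, assuming plain frame differencing over a fixed rectangle. The function name area_motion_feature, the difference threshold, and the histogram size are illustrative assumptions, not the authors' method.

# Hypothetical sketch of an area-based spatiotemporal motion feature,
# loosely inspired by the "vigilant area" (VA) idea in the abstract:
# it summarizes how much motion occurs inside a fixed image region and
# what the moving pixels look like. NOT the authors' implementation;
# frame differencing, the threshold, and the histogram size are assumptions.
import numpy as np

def area_motion_feature(frames, area, diff_thresh=15, n_bins=16):
    """frames: sequence of grayscale frames (H x W, uint8 or float).
    area: (top, left, height, width) rectangle to analyse.
    Returns a 1-D feature: [motion ratio, mean |diff|, appearance histogram]."""
    top, left, h, w = area
    crops = np.stack([np.asarray(f, dtype=np.float32)[top:top + h, left:left + w]
                      for f in frames])                 # T x h x w
    diffs = np.abs(np.diff(crops, axis=0))              # (T-1) x h x w frame differences
    moving = diffs > diff_thresh                        # boolean motion mask

    # Motion "quantity": fraction of moving pixels and mean difference magnitude.
    motion_ratio = moving.mean()
    mean_diff = diffs[moving].mean() if moving.any() else 0.0

    # Motion "appearance": intensity histogram of pixels flagged as moving.
    moving_pixels = crops[1:][moving]
    hist, _ = np.histogram(moving_pixels, bins=n_bins, range=(0, 255))
    hist = hist / max(hist.sum(), 1)                    # normalise to a distribution

    return np.concatenate(([motion_ratio, mean_diff], hist))

# Example with synthetic frames (a bright block moves across a dark scene).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 30, (120, 160), dtype=np.uint8) for _ in range(10)]
    for t, f in enumerate(frames):
        f[40:60, 20 + 5 * t:40 + 5 * t] = 200           # moving bright patch
    print(area_motion_feature(frames, area=(30, 10, 50, 120)))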
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
