Please use this identifier to cite or link to this item: https://doi.org/10.1109/TCSVT.2012.2226526
DC Field | Value
---|---
dc.title | Detecting group activities with multi-camera context
dc.contributor.author | Zha, Z.-J.
dc.contributor.author | Zhang, H.
dc.contributor.author | Wang, M.
dc.contributor.author | Luan, H.
dc.contributor.author | Chua, T.-S.
dc.date.accessioned | 2014-07-04T03:09:22Z
dc.date.available | 2014-07-04T03:09:22Z
dc.date.issued | 2013
dc.identifier.citation | Zha, Z.-J., Zhang, H., Wang, M., Luan, H., Chua, T.-S. (2013). Detecting group activities with multi-camera context. IEEE Transactions on Circuits and Systems for Video Technology 23 (5) : 856-869. ScholarBank@NUS Repository. https://doi.org/10.1109/TCSVT.2012.2226526
dc.identifier.issn | 1051-8215
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/77839
dc.description.abstract | Detecting human group activities in multi-camera CCTV surveillance videos is a pressing demand in smart surveillance. Previous work on this topic is mainly based on camera topology inference, which is hard to apply to real-world unconstrained surveillance videos. In this paper, we propose a new approach for multi-camera group activity detection. Our approach simultaneously exploits intra-camera and inter-camera contexts without topology inference. Specifically, a discriminative graphical model with hidden variables is developed. The intra-camera and inter-camera contexts are characterized by the structure of the hidden variables, and by automatically optimizing this structure the contexts are effectively explored. Furthermore, we propose a new spatiotemporal feature, named vigilant area (VA), to characterize the quantity and appearance of motion in an area. This feature is effective for group activity representation and is easy to extract from dynamic and crowded scenes. We evaluate the proposed VA feature and discriminative graphical model extensively on two real-world multi-camera surveillance video data sets: a public corpus consisting of 2.5 hours of video and a 468-hour video collection, which, to the best of our knowledge, is the largest video collection ever used in human activity detection. The experimental results demonstrate the effectiveness of our approach. © 1991-2012 IEEE.
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/TCSVT.2012.2226526
dc.source | Scopus
dc.subject | Activity detection
dc.subject | context
dc.subject | group activity
dc.subject | human activity
dc.type | Article
dc.contributor.department | COMPUTER SCIENCE
dc.description.doi | 10.1109/TCSVT.2012.2226526
dc.description.sourcetitle | IEEE Transactions on Circuits and Systems for Video Technology
dc.description.volume | 23
dc.description.issue | 5
dc.description.page | 856-869
dc.description.coden | ITCTE
dc.identifier.isiut | 000318697600010
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.