Please use this identifier to cite or link to this item: https://doi.org/10.5281/zenodo.884592
Title: Multi-Camera Action Dataset (MCAD)
Creators: Wenhui Li
Wong Yong Kang 
An-An Liu
Yang Li
Yu-Ting Su
Kankanhalli, Mohan S
NUS Contact: Wong Yong Kang
Subject: Cameras
Surveillance
Benchmark testing
Heuristic algorithm
Computer vision
Robustness
Gesture recognition
Image sensors
Learning (artificial intelligence)
Multicamera action dataset
Cross-camera action recognition
Machine learning
Internet
MCAD
Surveillance environment
DOI: 10.5281/zenodo.884592
Description: 

Action recognition has received increasing attention from the computer vision and machine learning communities over the last few decades. The recognition task has evolved from single-view recordings in controlled laboratory environments to unconstrained environments (e.g., surveillance footage or user-generated videos). Furthermore, recent work has focused on other aspects of the action recognition problem, such as cross-view classification, cross-domain learning, multi-modality learning, and action localization. Despite this wide range of studies, we observe limited work exploring the open-set and open-view classification problems, which are inherent properties of action recognition. In other words, a well-designed algorithm should robustly identify an unfamiliar action as “unknown” and achieve similar performance across sensors with similar fields of view. The Multi-Camera Action Dataset (MCAD) is designed to evaluate the open-view classification problem in a surveillance environment.

Unlike common action datasets, our multi-camera action dataset uses a total of five cameras, of two types (Static and PTZ), to record actions. Specifically, there are three Static cameras (Cam04, Cam05, and Cam06) with a fisheye effect and two Pan-Tilt-Zoom (PTZ) cameras (PTZ04 and PTZ06). The Static cameras have a resolution of 1280×960 pixels, while the PTZ cameras have a resolution of 704×576 pixels and a smaller field of view than the Static cameras. Moreover, the illumination is not controlled: recordings were made under two contrasting conditions (daytime and nighttime), which makes the dataset more challenging than many datasets recorded under strongly controlled illumination. The distribution of the cameras is shown in the picture on the right.

We identified 18 single-person daily actions, with and without objects, inherited from the KTH, IXMAS, and TRECVID datasets, among others. The list and definitions of the actions are shown in the table. The actions can be divided into four types: micro actions without object (action IDs 01, 02, 05) and with object (action IDs 10, 11, 12, 13), and intense actions without object (action IDs 03, 04, 06, 07, 08, 09) and with object (action IDs 14, 15, 16, 17, 18). We recruited a total of 20 human subjects. Each subject repeats each action 8 times (4 times during the day and 4 times in the evening) under one camera, and all five cameras record each action sample separately. During the recording stage, subjects were told only the action name and performed the action freely in their own style, provided the action stayed within the field of view of the current camera. This brings the dataset much closer to reality; as a result, there is high intra-class variation among different action samples, as shown in the picture of action samples.
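
Based on the numbers above (20 subjects, 18 actions, 8 repetitions per subject per action, 5 cameras), the following minimal Python sketch computes the nominal size of the recording grid; the actual number of released clips may differ if any recordings were discarded, so treat this only as a sanity check.

    # Nominal MCAD recording grid, per the description above.
    n_subjects = 20      # recruited human subjects
    n_actions = 18       # single-person daily actions (IDs 01-18)
    n_repetitions = 8    # 4 daytime + 4 nighttime repetitions per action
    n_cameras = 5        # Cam04, Cam05, Cam06, PTZ04, PTZ06

    nominal_clips = n_subjects * n_actions * n_repetitions * n_cameras
    print(nominal_clips)  # 14400 clips in a fully populated grid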

URL: http://mmas.comp.nus.edu.sg/MCAD/MCAD.html

Resources: IDXXXX.mp4.tar.gz contains the video data for each individual; boundingbox.tar.gz contains the person bounding boxes for all videos; protocol.json contains the evaluation protocol; img_list.txt contains the download URLs for the image version of the video data; idt_list.txt contains the download URLs for the improved Dense Trajectory features; stip_list.txt contains the download URLs for the STIP features. Manually annotated 2D joints for selected camera views and action classes are available via http://zju-capg.org/heightmap/.
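
The following is a minimal Python sketch of how one might consume these resources, assuming each *_list.txt file holds one download URL per line and that protocol.json is ordinary JSON; the schema of the protocol file is not documented on this page, so inspect the loaded object before relying on any particular keys.

    import json
    import tarfile
    import urllib.request

    # Read the STIP download URLs (one URL per line is an assumption).
    with open("stip_list.txt") as f:
        stip_urls = [line.strip() for line in f if line.strip()]

    # Fetch the first listed file; the local filename mirrors the URL.
    if stip_urls:
        urllib.request.urlretrieve(stip_urls[0], stip_urls[0].rsplit("/", 1)[-1])

    # Unpack one per-subject video archive into the current directory.
    with tarfile.open("ID0001.mp4.tar.gz", "r:gz") as tar:
        tar.extractall()

    # Load the evaluation protocol and inspect its top-level structure.
    with open("protocol.json") as f:
        protocol = json.load(f)
    print(type(protocol))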

This dataset is part of the following research paper. Please ensure the research paper is cited appropriately if you use the MCAD dataset in your work (papers, articles, reports, books, software, etc.). For more details, please refer to the Citation field.

  • Wenhui Li, Yongkang Wong, An-An Liu, Yang Li, Yu-Ting Su, Mohan Kankanhalli. Multi-Camera Action Dataset for Cross-Camera Action Recognition Benchmarking. IEEE Winter Conference on Applications of Computer Vision (WACV), 2017. http://doi.org/10.1109/WACV.2017.28
Related Publications: 10.1109/WACV.2017.28
Citation: When using this data, please cite the original publication and also the dataset.
  • Wenhui Li, Yongkang Wong, An-An Liu, Yang Li, Yu-Ting Su, Mohan Kankanhalli. Multi-Camera Action Dataset for Cross-Camera Action Recognition Benchmarking. IEEE Winter Conference on Applications of Computer Vision (WACV), 2017. http://doi.org/10.1109/WACV.2017.28
  • Wenhui Li, Yongkang Wong, An-An Liu, Yang Li, Yu-Ting Su, Mohan S. Kankanhalli (2017-11-09). Multi-Camera Action Dataset (MCAD). ScholarBank@NUS Repository. [Dataset]. https://doi.org/10.5281/zenodo.884592
License: Attribution-NonCommercial 4.0 International
http://creativecommons.org/licenses/by-nc/4.0/
Appears in Collections: Staff Dataset

Files in This Item:
  • stip_list.txt: contains the download URLs for the STIP features (1.05 kB)
  • protocol.json: contains the evaluation protocol (2.53 kB)
  • img_list.txt: contains the download URLs for the image version of the video data (1.09 kB)
  • idt_list.txt: contains the download URLs for the improved Dense Trajectory features (1.09 kB)
  • ID0032.mp4.tar.gz (258.37 MB)
  • ID0030.mp4.tar.gz (294.73 MB)
  • ID0027.mp4.tar.gz (283.49 MB)
  • ID0026.mp4.tar.gz (284.34 MB)
  • ID0023.mp4.tar.gz (332.54 MB)
  • ID0020.mp4.tar.gz (249.43 MB)
  • ID0019.mp4.tar.gz (251.57 MB)
  • ID0018.mp4.tar.gz (253.13 MB)
  • ID0017.mp4.tar.gz (290.63 MB)
  • ID0016.mp4.tar.gz (305.72 MB)
  • ID0015.mp4.tar.gz (261.25 MB)
  • ID0014.mp4.tar.gz (271.45 MB)
  • ID0013.mp4.tar.gz (318.48 MB)
  • ID0012.mp4.tar.gz (273.48 MB)
  • ID0008.mp4.tar.gz (302.23 MB)
  • ID0007.mp4.tar.gz (301.23 MB)
  • ID0005.mp4.tar.gz (251.95 MB)
  • ID0004.mp4.tar.gz (316.29 MB)
  • ID0003.mp4.tar.gz (309.47 MB)
  • ID0001.mp4.tar.gz (325.69 MB)
  • boundingbox.tar.gz: contains the person bounding boxes for all videos (9.59 MB)
