==================================
MADS: Martial Arts, Dancing and Sports Dataset

Weichen Zhang, Zhiguang Liu, Liuyang Zhou, Howard Leung, and Antoni B. Chan
Department of Computer Science, City University of Hong Kong
Copyright (c) 2017, City University of Hong Kong.
==================================

This file accompanies the release of the MADS dataset and code.

1) Description of data files

The data is in the following directories:
- depth_data      : depth videos
- multi_view_data : multi-view videos

More details on the included files:
- Each action category contains 6 sequences recorded from 3 views (0, 1, 2).
- *-Calib-CamN.mat contains the camera parameters of camera N when recording action *.
- *-GT.mat contains the global ground-truth 3D poses for the corresponding videos. Each pose contains 19 joints (shown in order): neck, pelvis, left hip, left knee, left ankle, left foot, right hip, right knee, right ankle, right foot, left shoulder, left elbow, left wrist, left hand, right shoulder, right elbow, right wrist, right hand, head.
- If a frame's ground-truth data contains NaN, treat that frame as invalid: do not use it for training and do not evaluate on it.
- For each action category, use the leave-one-sequence-out protocol to train the model, e.g. when testing on the Jazz-Jazz1 sequence, use all other Jazz sequences as the training data.
- More details about the data collection can be found in the paper.
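The invalid-frame rule and the leave-one-sequence-out protocol above can be sketched as follows. This is a minimal Python illustration, not part of the release: the pose array shape, the synthetic data, and the sequence names are assumptions; in practice the poses would be read from the *-GT.mat files (e.g. with scipy.io.loadmat).

```python
import numpy as np

# Hypothetical ground-truth array of shape (frames, 19 joints, 3 coordinates).
# In practice this would be loaded from a *-GT.mat file; the synthetic data
# below is only for illustration.
gt = np.zeros((100, 19, 3))
gt[[3, 7]] = np.nan  # pretend frames 3 and 7 are invalid

# A frame containing any NaN is invalid: exclude it from training and evaluation.
valid = ~np.isnan(gt).any(axis=(1, 2))
usable_frames = np.flatnonzero(valid)

# Leave-one-sequence-out protocol within one action category (names assumed):
sequences = ["Jazz1", "Jazz2", "Jazz3", "Jazz4", "Jazz5", "Jazz6"]
test_seq = "Jazz1"
train_seqs = [s for s in sequences if s != test_seq]

print(len(usable_frames), train_seqs)
```

The same masking would be applied to both the training and test sequences before fitting or scoring a model.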
2) Description of code files

The following directories are included in this release:
- depth      : code for the robust likelihood tracking on depth sequences
- multi-view : code for the ECPBL algorithm on multi-view sequences

3) Dependencies

This code relies on the following toolboxes:
* mexopencv - http://www3.cs.stonybrook.edu/~kyamagu/mexopencv/index.html
  (a toolbox for compiling OpenCV functions into Matlab mex files)
* OpenCV - http://opencv.org/
  (Open Source Computer Vision library)
* HumanEva baseline - http://humaneva.is.tue.mpg.de/baseline
  (the HumanEva baseline code; we use its body model and APF algorithm in our code)

mexopencv and the HumanEva baseline are packaged with our data; users should install OpenCV themselves.

4) Installation

mexopencv must be compiled before running the code.

5) Run the demo

Both the depth and multi-view code folders contain demo.m files that run the experiments described in the paper. Before running a demo, change the path in demo.m to where you put the data.

==== REFERENCES ====
If you use this dataset, please cite:

Weichen Zhang, Zhiguang Liu, Liuyang Zhou, Howard Leung, and Antoni B. Chan. "Martial Arts, Dancing and Sports Dataset: a Challenging Stereo and Multi-View Dataset for 3D Human Pose Estimation." Image and Vision Computing, 61:22-39, May 2017.

==== CONTACT INFO ====
Please send comments, bug reports, and feature requests to Weichen ZHANG.

==== CHANGE LOG ====
2017-04-24: v1.0 - initial release