
# MMPD [EMBC 2023 Oral]

## 📖 Abstract

MMPD, the Multi-domain Mobile Video Physiology Dataset, was collected by Tsinghua University.
It comprises 11 hours (1,152K frames) of mobile-phone recordings of 33 subjects, designed to capture videos with greater representation across skin tone, body motion, and lighting conditions. Each recording carries eight descriptive labels, and the dataset can be used in conjunction with the rPPG-Toolbox (PyTorch) and PhysBench.

Code is now available in the rPPG-Toolbox_MMPD folder, allowing users to choose any combination of the labels. More details will be uploaded soon. If you have downloaded or plan to download our dataset, we recommend starring this repo in case the dataset is updated. We recently added the size.csv file for checking data integrity.


## 🔍 Samples

Sample clips span the four lighting conditions (LED-low, LED-high, Incandescent, Nature) across four rows pairing skin tone with motion: Skin Tone 3 with Stationary, Skin Tone 4 with Rotation, Skin Tone 5 with Talking, and Skin Tone 6 with Walking.

## 🗝️ Access and Usage

This dataset is for academic use only; commercial use is prohibited.
To access the dataset, please download this letter of commitment.
Email [email protected] from your educational or institutional mailbox, cc [email protected], and attach the signed or sealed protocol. A signature from university staff is preferred.
Two versions of the dataset are available for convenience: the full dataset (370 GB, 320 x 240 resolution) and the mini dataset (48 GB, 80 x 60 resolution).
Downloads are offered via OneDrive and Baidu Netdisk for researchers in different regions; for researchers in China, a hard disk can also be arranged.

## ⚙️ Experiment Procedure [Updated]

### 📊 Distribution

| Attribute | Breakdown |
| --- | --- |
| Skin Tone | 3: 16, 4: 5, 5: 6, 6: 6 |
| Gender | Male: 16, Female: 17 |
| Glasses Wearing | True: 10, False: 23 |
| Hair Covering | True: 8, False: 23 |
| Makeup | True: 4, False: 29 |

## 🖥️ The Dataset Structure

```
MMPD_Dataset
├── subject1
│   ├── p1_0.mat        # px_y.mat: x is the subject index, y is the experiment index, which corresponds to the experiment procedure.
│   │   ├── video       # Rendered images of the subject at 320 x 240 resolution     [t, w, h, c]
│   │   ├── GT_ppg      # PPG waveform signal                                        [t]
│   │   ├── light       # 'LED-low', 'LED-high', 'Incandescent', 'Nature'
│   │   ├── motion      # 'Stationary', 'Rotation', 'Talking', 'Walking'
│   │   ├── exercise    # True, False
│   │   ├── skin_color  # 3, 4, 5, 6
│   │   ├── gender      # 'male', 'female'
│   │   ├── glasser     # True, False
│   │   ├── hair_cover  # True, False
│   │   ├── makeup      # True, False
│   ├── ... .mat
│   ├── p1_19.mat
├── ...
├── subject33
├── size.csv            # One line per .mat file with its size in bytes
```
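Since size.csv lists each .mat file with its size in bytes, integrity can be checked by comparing on-disk sizes against it. A minimal sketch; `check_integrity` is illustrative, and the assumed `<relative path>,<bytes>` column layout should be verified against the actual file.

```python
import csv
import os

def check_integrity(dataset_root, size_csv="size.csv"):
    """Compare on-disk .mat file sizes against the bytes listed in size.csv.

    Assumes each row of size.csv is "<relative path>,<bytes>"; adjust the
    parsing if the actual column layout differs.
    """
    mismatches = []
    with open(os.path.join(dataset_root, size_csv), newline="") as f:
        for row in csv.reader(f):
            if len(row) < 2:
                continue
            rel_path, expected = row[0], int(row[1])
            path = os.path.join(dataset_root, rel_path)
            # -1 marks a file that is missing entirely.
            actual = os.path.getsize(path) if os.path.exists(path) else -1
            if actual != expected:
                mismatches.append((rel_path, expected, actual))
    return mismatches
```

An empty return value means every listed file exists with the expected size; any tuple in the result names a corrupted or missing download.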

Reading the data example:

```python
import scipy.io as sio

f = sio.loadmat('p1_0.mat')
print(f.keys())
```
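Building on this, a small loader can pull out the video, PPG signal, and labels in one call. A sketch under the assumption that the fields follow the structure above; the unwrapping of loadmat's MATLAB containers (`np.squeeze`, `str(...)`) may need adjusting for your scipy version.

```python
import numpy as np
import scipy.io as sio

def load_clip(path):
    """Load one MMPD .mat clip and return (video, ppg, labels).

    Field names follow the dataset structure; the label unwrapping via
    np.squeeze/str is an assumption that may need adjusting.
    """
    f = sio.loadmat(path)
    video = np.asarray(f['video'])          # [t, w, h, c] frames
    ppg = np.asarray(f['GT_ppg']).ravel()   # [t] PPG waveform
    label_keys = ('light', 'motion', 'exercise', 'skin_color',
                  'gender', 'glasser', 'hair_cover', 'makeup')
    labels = {k: str(np.squeeze(f[k])) for k in label_keys if k in f}
    return video, ppg, labels
```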

## 📝 Results (tested on MMPD)

### Simplest scenario

In the simplest scenario, we only include the stationary, skin tone type 3, and artificial light conditions as benchmarks.

| Methods | MAE | RMSE | MAPE | Pearson |
| --- | --- | --- | --- | --- |
| ICA | 8.75 | 12.35 | 12.26 | 0.21 |
| POS | 7.69 | 11.95 | 11.45 | 0.19 |
| CHROME | 8.81 | 13.18 | 12.95 | -0.03 |
| GREEN | 10.57 | 15.03 | 14.59 | 0.23 |
| LGI | 7.46 | 11.92 | 10.12 | 0.12 |
| PBV | 8.15 | 11.52 | 11.04 | 0.35 |
| TS-CAN (trained on PURE) | 1.78 | 3.57 | 2.47 | 0.93 |
| TS-CAN (trained on UBFC) | 1.46 | 3.13 | 2.04 | 0.94 |
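The reported metrics can be reproduced from per-clip heart-rate estimates: MAE, RMSE, and MAPE in bpm and percent, plus the Pearson correlation between predicted and ground-truth heart rates. A minimal sketch, not the rPPG-toolbox's exact evaluation code.

```python
import numpy as np

def hr_metrics(pred_hr, gt_hr):
    """MAE, RMSE, MAPE (%), and Pearson correlation between predicted and
    ground-truth heart rates (one value per video clip, in bpm)."""
    pred = np.asarray(pred_hr, dtype=float)
    gt = np.asarray(gt_hr, dtype=float)
    err = pred - gt
    return {
        'MAE': np.mean(np.abs(err)),
        'RMSE': np.sqrt(np.mean(err ** 2)),
        'MAPE': np.mean(np.abs(err) / gt) * 100.0,
        'PEARSON': np.corrcoef(pred, gt)[0, 1],
    }
```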

### Unsupervised Signal Processing Methods (Subset)

We evaluated six traditional unsupervised methods on our dataset. For the skin tone comparison, we excluded the exercise, natural light, and walking conditions to remove confounding factors and concentrate on the task. Similarly, the motion comparison excluded the exercise and natural light conditions, and the light comparison excluded the exercise and walking conditions. These exclusions let us isolate the unique challenges posed by each task.
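The label-based subset selection described above can be sketched as a filter over each clip's metadata fields. A sketch assuming the field names from the dataset structure section; `select_clips` and its label unwrapping are illustrative, not the toolbox's code.

```python
import glob
import os

import numpy as np
import scipy.io as sio

def select_clips(dataset_root, light=None, motion=None, skin_color=None):
    """Return paths of .mat clips whose labels match the given criteria.

    `light` and `motion` are sets of allowed strings, `skin_color` a set of
    ints; None means "no constraint". Field names follow the dataset structure.
    """
    selected = []
    for path in sorted(glob.glob(os.path.join(dataset_root, 'subject*', '*.mat'))):
        # Load only the label fields, not the full video array.
        f = sio.loadmat(path, variable_names=['light', 'motion', 'skin_color'])
        if light is not None and str(np.squeeze(f['light'])) not in light:
            continue
        if motion is not None and str(np.squeeze(f['motion'])) not in motion:
            continue
        if skin_color is not None and int(np.squeeze(f['skin_color'])) not in skin_color:
            continue
        selected.append(path)
    return selected
```

For instance, the simplest-scenario benchmark above would correspond to `select_clips(root, light={'LED-low', 'LED-high', 'Incandescent'}, motion={'Stationary'}, skin_color={3})`.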

### Supervised Deep Learning Methods (Subset)

In this paper, we investigated how state-of-the-art supervised neural networks perform on MMPD and studied the influence of skin tone, motion, and light. We used the same exclusion criteria as for the unsupervised methods.

### Full Dataset Benchmark

On the full dataset, no existing method accurately predicts the PPG waveform and heart rate. We look forward to algorithms that can handle such everyday scenarios, and researchers are encouraged to report their results and communicate with us.

| Methods | MAE | RMSE | MAPE | Pearson |
| --- | --- | --- | --- | --- |
| ICA | 18.57 | 24.28 | 20.85 | 0.00 |
| POS | 12.34 | 17.70 | 14.43 | 0.17 |
| CHROME | 13.63 | 18.75 | 15.96 | 0.08 |
| GREEN | 21.73 | 27.72 | 24.44 | -0.02 |
| LGI | 17.02 | 23.28 | 18.92 | 0.04 |
| PBV | 17.88 | 23.53 | 20.11 | 0.09 |

| Methods (trained on PURE) | MAE | RMSE | MAPE | Pearson |
| --- | --- | --- | --- | --- |
| TS-CAN | 13.94 | 21.61 | 15.14 | 0.20 |
| DeepPhys | 16.92 | 24.61 | 18.54 | 0.05 |
| EfficientPhys | 14.03 | 21.62 | 15.32 | 0.17 |
| PhysNet | 13.22 | 19.61 | 14.73 | 0.23 |

| Methods (trained on UBFC) | MAE | RMSE | MAPE | Pearson |
| --- | --- | --- | --- | --- |
| TS-CAN | 14.01 | 21.04 | 15.48 | 0.24 |
| DeepPhys | 17.50 | 25.00 | 19.27 | 0.05 |
| EfficientPhys | 13.78 | 22.25 | 15.15 | 0.09 |
| PhysNet | 10.24 | 16.54 | 12.46 | 0.29 |

| Methods (trained on SCAMPS) | MAE | RMSE | MAPE | Pearson |
| --- | --- | --- | --- | --- |
| TS-CAN | 19.05 | 24.20 | 21.77 | 0.14 |
| DeepPhys | 15.22 | 23.17 | 16.56 | 0.09 |
| EfficientPhys | 20.37 | 25.04 | 23.48 | 0.11 |
| PhysNet | 21.03 | 25.35 | 24.68 | 0.14 |

## 📄 Citation

Title: MMPD: Multi-Domain Mobile Video Physiology Dataset
Jiankai Tang, Kequan Chen, Yuntao Wang, Yuanchun Shi, Shwetak Patel, Daniel McDuff, Xin Liu, "MMPD: Multi-Domain Mobile Video Physiology Dataset", IEEE EMBC, 2023

```
@misc{tang2023mmpd,
      title={MMPD: Multi-Domain Mobile Video Physiology Dataset},
      author={Jiankai Tang and Kequan Chen and Yuntao Wang and Yuanchun Shi and Shwetak Patel and Daniel McDuff and Xin Liu},
      year={2023},
      eprint={2302.03840},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```