
GLAMR

This repo contains the official PyTorch implementation of our paper:

GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras
Ye Yuan, Umar Iqbal, Pavlo Molchanov, Kris Kitani, Jan Kautz
CVPR 2022 (Oral)
website | paper | video

Overview

News

  • [08/10/22]: Demos for multi-person videos have been added (thanks to Haofan Wang)!
  • [08/01/22]: Demos for dynamic and static videos have been released!

Table of Contents

Installation

Environment

  • Tested OS: macOS, Linux
  • Python >= 3.7
  • PyTorch >= 1.8.0
  • HybrIK (used in demo)

Dependencies

  1. Clone this repo recursively:
    git clone --recursive https://github.com/NVlabs/GLAMR.git
    
    This will fetch the submodule HybrIK.
  2. Follow HybrIK's installation instructions and download its models.
  3. Install PyTorch 1.8.0 with the CUDA version matching your system (a consolidated setup sketch is shown after this list).
  4. Install system dependencies (Linux only):
    source install.sh
    
  5. Install python dependencies:
    pip install -r requirements.txt
    
  6. Download the SMPL models & joint regressors and place them in the data folder. You can obtain the models by following SPEC's instructions here.
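
For reference, the steps above might be combined as follows. This is a minimal sketch rather than a prescribed procedure: it assumes Linux, conda, Python 3.8, and a CUDA 11.1 toolkit, so adjust the Python version and PyTorch build to your own system, and still follow HybrIK's and SPEC's instructions for their models.

# minimal setup sketch (assumed: Linux, conda, CUDA 11.1)
git clone --recursive https://github.com/NVlabs/GLAMR.git
cd GLAMR
# if the repo was cloned without --recursive, fetch the HybrIK submodule:
git submodule update --init --recursive

conda create -n glamr python=3.8 -y
conda activate glamr

# PyTorch 1.8.0 built against CUDA 11.1; pick the build matching your CUDA toolkit
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html

source install.sh                 # system dependencies (Linux only)
pip install -r requirements.txt   # python dependencies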

Pretrained Models

Demo

We provide demos for single- and multi-person videos with both dynamic and static cameras.

Dynamic Videos

Run the following command to test GLAMR on a single-person video with a dynamic camera:

python global_recon/run_demo.py --cfg glamr_dynamic \
                                --video_path assets/dynamic/running.mp4 \
                                --out_dir out/glamr_dynamic/running \
                                --save_video

This will output results to out/glamr_dynamic/running. Result videos will be saved to out/glamr_dynamic/running/grecon_videos. Additional dynamic test videos can be found in assets/dynamic. More video comparisons with HybrIK are available here.

Static Videos

Run the following command to test GLAMR on a single-person video with a static camera:

python global_recon/run_demo.py --cfg glamr_static \
                                --video_path assets/static/basketball.mp4 \
                                --out_dir out/glamr_static/basketball \
                                --save_video

This will output results to out/glamr_static/basketball. Result videos will be saved to out/glamr_static/basketball/grecon_videos. Additional static test videos can be found in assets/static. More video comparisons with HybrIK are available here.

Multi-Person Videos

Use the --multi flag and the glamr_static_multi config in the above demos to test GLAMR on a multi-person video:

python global_recon/run_demo.py --cfg glamr_static_multi \
                                --video_path assets/static/basketball.mp4 \
                                --out_dir out/glamr_static_multi/basketball \
                                --save_video \
                                --multi

This will output results to out/glamr_static_multi/basketball. Result videos will be saved to out/glamr_static_multi/basketball/grecon_videos.
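
The same script can be pointed at your own footage with any of the configs above; the video path and output directory below are placeholders, so substitute your own file and location:

python global_recon/run_demo.py --cfg glamr_dynamic \
                                --video_path /path/to/your_video.mp4 \
                                --out_dir out/glamr_dynamic/your_video \
                                --save_video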

Datasets

We use three datasets: AMASS, 3DPW, and Dynamic Human3.6M. Please download them from their official websites and place them in the datasets folder with the following structure:

${GLAMR_ROOT}
|-- datasets
|   |-- 3DPW
|   |-- amass
|   |-- H36M
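
One way to create this layout, assuming the datasets have already been downloaded and extracted (the source paths below are placeholders):

mkdir -p datasets
ln -s /path/to/3DPW  datasets/3DPW
ln -s /path/to/amass datasets/amass
ln -s /path/to/H36M  datasets/H36M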

Evaluation

First, run GLAMR on the test set of the dataset you want to evaluate. For example, to run GLAMR on the 3DPW test set:

python global_recon/run_dataset.py --dataset 3dpw --cfg glamr_3dpw --out_dir out/3dpw

Next, evaluate the results generated by GLAMR:

python global_recon/eval_dataset.py --dataset 3dpw --results_dir out/3dpw

Similarly, to evaluate on Dynamic Human3.6M, replace 3dpw with h36m in the dataset and config arguments.
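
For example, the Dynamic Human3.6M evaluation would look as follows (the glamr_h36m config name is inferred from the naming pattern above; check the repository's config files for the exact name):

python global_recon/run_dataset.py --dataset h36m --cfg glamr_h36m --out_dir out/h36m
python global_recon/eval_dataset.py --dataset h36m --results_dir out/h36m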

AMASS

The following command processes the original AMASS dataset into a processed version used in the code:

python preprocess/preprocess_amass.py

3DPW

The following command processes the original 3DPW dataset into a processed version used in the code:

python preprocess/preprocess_3dpw.py

Dynamic Human3.6M

Please refer to this doc for generating the Dynamic Human3.6M dataset.

Motion Infiller

To train the motion infiller:

python motion_infiller/train.py --cfg motion_infiller_demo --ngpus 1

where we use the config motion_infiller_demo.


To visualize the trained motion infiller on test data:

python motion_infiller/vis_motion_infiller.py --cfg motion_infiller_demo --num_seq 5

where num_seq is the number of sequences to visualize. This command will save results videos to out/vis_motion_infiller.

Trajectory Predictor

To train the trajectory predictor:

python traj_pred/train.py --cfg traj_pred_demo --ngpus 1

where we use the config traj_pred_demo.


To visualize the trained trajectory predictor on test data:

python traj_pred/vis_traj_pred.py --cfg traj_pred_demo --num_seq 5

where num_seq is the number of sequences to visualize. This command will save results videos to out/vis_traj_pred.

Joint Motion Infiller and Trajectory Predictor

For ease of use, we also define a joint (wrapper) model of the motion infiller and trajectory predictor, i.e., a single model that merges the motion infilling and trajectory prediction stages. The joint model is composed of the pretrained motion infiller and trajectory predictor and is simply a convenient abstraction. It can be defined with config files such as joint_motion_traj_demo and is also used in the global optimization stage.


To visualize the joint model's results:

python motion_infiller/vis_motion_traj_joint_model.py --cfg joint_motion_traj_demo --num_seq 5

where num_seq is the number of sequences to visualize. This command will save results videos to out/vis_motion_traj_joint_model.

Citation

If you find our work useful in your research, please cite our paper GLAMR:

@inproceedings{yuan2022glamr,
    title={GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras},
    author={Yuan, Ye and Iqbal, Umar and Molchanov, Pavlo and Kitani, Kris and Kautz, Jan},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}
}

License

Please see the license for further details.
