Human joint configuration, also called pose, is restricted by the biomechanics of our body. Accurately exploiting these constraints is a cornerstone of many computer vision tasks, such as estimating 3D human body parameters from 2D keypoints and detecting anomalous poses.
Here we present the method used in SMPLify-X. Our variational human pose prior, named VPoser, has the following features:
- is end-to-end differentiable
- provides a way to penalize impossible poses while allowing plausible ones (see the sketch after this list)
- effectively models the interdependency of joint configurations
- introduces an efficient, low-dimensional representation for human pose
- can be used as a generative source for data-dependent tasks
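As a minimal sketch of the first two points, the snippet below optimizes a low-dimensional latent code instead of raw joint angles and adds a quadratic penalty on it, similar in spirit to how SMPLify-X uses VPoser. The `vposer.decode` call, the hypothetical `my_data_term` function, the 32-d latent size, and the loss weight are assumptions for illustration; see the loading tutorial below for the exact API of your installed version.

```python
import torch

# Sketch only: `vposer` is assumed to be a loaded, frozen VPoser model
# (see "Loading trained models" below); `my_data_term` is a hypothetical
# task-specific loss, e.g. a 2D keypoint reprojection error.
z = torch.zeros(1, 32, requires_grad=True)          # low-dimensional latent pose code
optimizer = torch.optim.Adam([z], lr=0.01)

for _ in range(100):
    optimizer.zero_grad()
    body_pose = vposer.decode(z, output_type='aa')  # decode latent code to axis-angle body pose (API assumed)
    loss = my_data_term(body_pose) + 0.01 * z.pow(2).sum()  # data term + quadratic latent penalty
    loss.backward()                                 # gradients flow through the decoder (end-to-end differentiable)
    optimizer.step()
```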
- Description
- Installation
- Loading trained models
- Train VPoser
- Tutorials
- Citation
- License
- Acknowledgments
- Contact
To install the model you can:
- Install from PyPI by running:
pip install human_body_prior
- Clone this repository and install it using the setup.py script:
git clone https://github.com/nghorbani/human_body_prior
cd human_body_prior
python setup.py install
To download the trained VPoser models, go to the SMPL-X project website and register to get access to the downloads section. Afterwards, you can follow the model loading tutorial to load and use the trained VPoser models.
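As a rough sketch of what loading can look like, assuming a checkpoint directory downloaded from the SMPL-X website: the loader name and the `decode` signature have changed between releases, so treat the calls below as assumptions and follow the model loading tutorial for the exact API of your installed version.

```python
import torch
from human_body_prior.tools.model_loader import load_vposer  # loader name may differ by version

expr_dir = 'path/to/downloaded/vposer_checkpoint'   # placeholder path to the unpacked checkpoint
vposer, _ = load_vposer(expr_dir, vp_model='snapshot')
vposer.eval()

with torch.no_grad():
    z = torch.randn(1, 32)                          # sample a 32-d latent code (assumed dimensionality)
    body_pose = vposer.decode(z, output_type='aa')  # decode to axis-angle SMPL body pose
print(body_pose.shape)
```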
We train VPoser as a variational autoencoder that learns a latent representation of human pose and regularizes the distribution of the latent code to be a normal distribution. We train our prior on data released by AMASS, namely the SMPL pose parameters of various publicly available human motion capture datasets. You can follow the data preparation tutorial to learn how to download and prepare AMASS for VPoser. Afterwards, you can train VPoser from scratch.
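The sketch below illustrates the training idea only, not the exact VPoser architecture: a VAE over SMPL body pose vectors whose latent code is pushed toward a standard normal distribution by a KL term. The layer sizes, the 63-d pose input (21 body joints × 3 axis-angle values), the 32-d latent, and the KL weight are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PosePriorVAE(nn.Module):
    def __init__(self, pose_dim=63, latent_dim=32, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, pose_dim))

    def forward(self, pose):
        h = self.encoder(pose)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(pose, recon, mu, logvar, kl_weight=5e-3):
    rec = F.mse_loss(recon, pose)                                   # pose reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL divergence to N(0, I)
    return rec + kl_weight * kl                                     # weight is illustrative only
```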
- VPoser PoZ Space for Body Models
- Sampling Novel Body Poses from VPoser
- Preparing VPoser Training Dataset
- Train VPoser from Scratch
Please cite the following paper if you use this code directly or indirectly in your research/projects.
@inproceedings{SMPL-X:2019,
title = {Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
author = {Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
year = {2019}
}
Also note that if you train your own VPoser for your research using the AMASS dataset, please follow its respective citation guidelines.
Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the SMPL-X/SMPLify-X model, data and software, (the "Model & Software"), including 3D meshes, blend weights, blend shapes, textures, software, scripts, and animations. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.
The code in this repository is developed by Nima Ghorbani.
If you have any questions you can contact us at [email protected].
For commercial licensing, contact [email protected]
We thank the authors of AMASS for the early release of their data to us for this project. We thank Partha Ghosh for the helpful discussions and insights that helped with this project.