This repo contains the model, demo, and training code for our paper: "2D Hand Pose Estimation from A Single RGB Image through Flow Model" (ICARM 2024).
Install the dependencies listed in environment.yml through conda:

- We recommend installing PyTorch with CUDA enabled first.
- Create a new conda environment:
  ```bash
  conda env create -f environment.yml
  ```
- Or update an existing conda environment:
  ```bash
  conda env update -f environment.yml
  ```
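To confirm the environment is ready, you can check that PyTorch actually sees your GPU. This is only an illustrative snippet (the file name `check_torch.py` is not part of the repo):

```python
# check_torch.py -- illustrative sanity check, not part of this repo
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```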
The environment creation above usually works, but we found that installing opendr can be tricky. We solved the build errors by installing the Mesa/GL development headers first:

```bash
sudo apt-get install libglu1-mesa-dev freeglut3-dev mesa-common-dev
sudo apt-get install libosmesa6-dev
# then reinstall opendr
pip install opendr
```
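After reinstalling, a minimal import check can confirm that opendr built correctly against those headers. This snippet is illustrative only and not part of the repo:

```python
# verify_opendr.py -- illustrative import check, not part of this repo
import opendr
from opendr.renderer import ColoredRenderer  # renderer class commonly used for mesh visualization

print("opendr loaded from:", opendr.__file__)
```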
- Create a `data` directory.
- Download the RHD dataset from its dataset page and extract it into `data/RHD`.
- Download the STB dataset from its dataset page and extract it into `data/STB`.
- Download the STB_supp dataset from Google Drive or Baidu Pan (`v858`) and merge it into `data/STB`. (In STB, we generated aligned and segmented hand depth maps from the original depth images.)
Now your `data` folder structure should look like this:

```
data/
    RHD/
        RHD_published_v2/
            evaluation/
            training/
            view_sample.py
            ...
    STB/
        images/
            B1Counting/
                SK_color_0.png
                SK_depth_0.png
                SK_depth_seg_0.png  <-- merged from STB_supp
                ...
            ...
        labels/
            B1Counting_BB.mat
            ...
```
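Before moving on, you can verify the layout with a small script. This is a hypothetical helper (not included in the repo); the paths come directly from the tree above:

```python
# check_data.py -- hypothetical helper to verify the dataset layout shown above
import os

expected = [
    "data/RHD/RHD_published_v2/evaluation",
    "data/RHD/RHD_published_v2/training",
    "data/STB/images/B1Counting/SK_depth_seg_0.png",  # merged from STB_supp
    "data/STB/labels/B1Counting_BB.mat",
]
for path in expected:
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"[{status}] {path}")
```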
- Go to the MANO website.
- Create an account by clicking Sign Up and provide your information.
- Download Models and Code (the downloaded file should have the format `mano_v*_*.zip`). Note that all code and data from this download falls under the MANO license.
- Unzip and copy the `models` folder into the `manopth/mano` folder.
Now your `manopth` folder structure should look like this:

```
manopth/
    mano/
        models/
            MANO_LEFT.pkl
            MANO_RIGHT.pkl
            ...
    manopth/
        __init__.py
        ...
```
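To confirm the MANO models are in place, you can run a tiny forward pass through `ManoLayer`. This is only a sketch that assumes the bundled `manopth` follows the public manopth API:

```python
# check_mano.py -- sketch of a MANO smoke test, assuming the standard manopth API
import torch
from manopth.manolayer import ManoLayer

# Loads manopth/mano/models/MANO_RIGHT.pkl
mano_layer = ManoLayer(mano_root="manopth/mano/models", side="right", use_pca=True, ncomps=6)
pose = torch.zeros(1, 6 + 3)   # 6 PCA pose components + 3 global rotation parameters
shape = torch.zeros(1, 10)     # 10 shape (beta) parameters
verts, joints = mano_layer(pose, shape)
print(verts.shape, joints.shape)  # expected: (1, 778, 3) and (1, 21, 3)
```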
- First, add this to your current shell session or `~/.bashrc` (a quick way to verify it took effect is sketched after this list):
  ```bash
  export PYTHONPATH=/path/to/bihand:$PYTHONPATH
  ```
- To test on the RHD dataset:
  ```bash
  python run.py \
      --batch_size 8 --fine_tune rhd --checkpoint checkpoints --data_root data
  ```
- To test on the STB dataset:
  ```bash
  python run.py \
      --batch_size 8 --fine_tune stb --checkpoint checkpoints --data_root data
  ```
- Add `--vis` to either command to visualize the predictions.
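As mentioned above, you can confirm the `PYTHONPATH` export took effect before running these commands. This check is illustrative and only assumes the repo root ends in `bihand`, as in the export:

```python
# check_path.py -- illustrative check that the repo root is on sys.path
import sys

print([p for p in sys.path if p.endswith("bihand")])
```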
We train the model for 100 epochs:

```bash
python training/train_seednet_fastflow.py --net_modules seed --datasets stb rhd --ups_loss
```