# Learning Human Optical Flow

This code is based on the paper *Learning Human Optical Flow*.

## Data

Download the data from the project webpage and extract it:

```
7z x HumanFlowDataset.7z.001
```

**NOTE:** The directions of the flow fields in the `.flo` files are reversed with respect to the original Middlebury convention. If you want to stick to the Middlebury convention, you need to change the sign of the flow fields either during training or at prediction time. We train the network on the original data and change the sign at prediction time. If this is unclear, please open a GitHub issue or write me an email.
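
As a concrete illustration, here is a minimal sketch of the sign flip at prediction time (the name `flowMiddlebury` and the dummy tensor are hypothetical; in practice `flow` would come from the network):

```lua
require 'torch'

-- Hypothetical stand-in for a network prediction: a 2xHxW tensor
-- holding the u and v displacement components.
local flow = torch.FloatTensor(2, 4, 4):fill(1)

-- Negating both components converts the prediction back to the
-- original Middlebury convention.
local flowMiddlebury = flow:clone():mul(-1)
```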

## Trained Models

The pretrained models are available in the `pretrained/` directory. There are two models:

1. `human_flow_model.t7` is the original trained model evaluated in the paper.
2. `human_flow_model_noise_adaptive.t7` is trained with additional noisy data.

## Setup

You need to have [Torch](http://torch.ch) installed.

Install the other required packages:

```
cd extras/spybhwd
luarocks make
cd ../stnbhwd
luarocks make
```

## Usage

Load the model:

```lua
stn = require 'stn'
bodynet = require 'bodynet'
easyComputeFlow = bodynet.easy_setup('pretrained/human_flow_model_[noise_adaptive].t7')
```

Load images and compute flow:

```lua
image = require 'image'
im1 = image.load(<IMAGE_PATH_1>, 3, 'float')
im2 = image.load(<IMAGE_PATH_2>, 3, 'float')
flow = easyComputeFlow(im1, im2)
```

To save or visualize optical flow, refer to `flowExtensions.lua`.
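
For example, a minimal sketch of writing a flow field to disk, assuming `flowExtensions.lua` exposes a `writeFLO(filename, flow)` function as in `anuragranj/spynet` (check the file for the actual function names):

```lua
-- Assumption: flowExtensions.lua is on the Lua path and provides writeFLO.
local flowX = require 'flowExtensions'
flowX.writeFLO('output.flo', flow)  -- `flow` is the tensor returned by easyComputeFlow
```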

## Training

```
th main.lua -netType fullBodyModel -nGPU 4 -nDonkeys 16 -LR 1e-6 -epochSize 1000 -data <PATH_TO_DATASET>
```

## References

1. Training code is based on [anuragranj/spynet](https://github.com/anuragranj/spynet).
2. Warping code is based on [qassemoquab/stnbhwd](https://github.com/qassemoquab/stnbhwd).
3. Additional training data can be found at [gulvarol/surreal](https://github.com/gulvarol/surreal).

## License

MIT License; free to use without any warranty. See the LICENSE file for details.

## Citing this code

Ranjan, Anurag, Javier Romero, and Michael J. Black. "Learning Human Optical Flow." British Machine Vision Conference (BMVC 2018).
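
If you prefer BibTeX, here is an entry assembled from the reference above (the citation key is arbitrary):

```bibtex
@inproceedings{ranjan2018humanflow,
  author    = {Ranjan, Anurag and Romero, Javier and Black, Michael J.},
  title     = {Learning Human Optical Flow},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2018}
}
```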