
UnRigidFlow

This is the official PyTorch implementation of UnRigidFlow (IJCAI 2019).

Here are two sample results (~10MB gif for each) of our unsupervised models.

[Sample result GIFs: KITTI 2015 (left) and Cityscapes (right)]

If you find this repo useful in your research, please consider citing:

@inproceedings{Liu:2019:unrigid,
  title     = {Unsupervised Learning of Scene Flow Estimation Fusing with Local Rigidity},
  author    = {Liang Liu and Guangyao Zhai and Wenlong Ye and Yong Liu},
  booktitle = {International Joint Conference on Artificial Intelligence, IJCAI},
  year      = {2019}
}

Requirements

This codebase was developed and tested with Python 3.5, PyTorch >= 0.4.1, OpenCV 3.4, CUDA 9.0 and Ubuntu 16.04.

Most of the python packages can be installed by

pip3 install -r requirements.txt

In addition, the optimized correlation layer with a CUDA kernel must be compiled manually:

cd <correlation_package>
python3 setup.py install

and add <correlation_package> to $PYTHONPATH.

Note that if you use PyTorch >= 1.0, you need to make a few changes; see NVIDIA/flownet2-pytorch#98.

Just replace #include <torch/torch.h> with #include <torch/extension.h>, add #include <ATen/cuda/CUDAContext.h>, and then replace all occurrences of at::globalContext().getCurrentCUDAStream() with at::cuda::getCurrentCUDAStream().
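
After compiling, you can quickly check that the extension is importable and produces correlation volumes of the expected shape. The snippet below is a minimal sketch assuming a flownet2-pytorch-style Correlation module; the exact import path and argument names in this repo may differ.

    # Smoke test for the compiled correlation extension (run on a CUDA machine).
    # Assumption: the package exposes a flownet2-pytorch-style Correlation module
    # whose forward takes two feature maps of shape (N, C, H, W).
    import torch
    from correlation_package.correlation import Correlation  # assumed import path

    corr = Correlation(pad_size=4, kernel_size=1, max_displacement=4,
                       stride1=1, stride2=1, corr_multiply=1).cuda()

    feat1 = torch.randn(1, 64, 48, 64, device='cuda')
    feat2 = torch.randn(1, 64, 48, 64, device='cuda')

    out = corr(feat1, feat2)
    # With max_displacement=4 the cost volume has (2*4+1)^2 = 81 channels.
    print(out.shape)  # expected: torch.Size([1, 81, 48, 64])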

Training and Evaluation

We mainly focus on the KITTI benchmark. You will need to download all of the KITTI raw data and calibration files to train the model. You will also need the training files of KITTI 2012 and KITTI 2015 with calibration files [1], [2] for validating the models.

The complete training contains 3 steps:

  1. Train the flow model separately:

    python3 train.py -c configs/KITTI_flow.json
    
  2. Train the depth model separately:

    python3 train.py -c configs/KITTI_depth_stereo.json
    
  3. Train the flow and depth models jointly:

    python3 train.py -c configs/KITTI_rigid_flow_stereo.json
    

For evaluation, just add the --e option and modify the corresponding model path in the above commands.

Pre-trained Models

You can download our pre-trained models; we provide the following models:

  • KITTI_flow: The separately trained optical flow network on KITTI raw data (from scratch)
  • KITTI_stereo_depth: The separately trained stereo depth network on KITTI raw data.
  • KITTI_flow_joint: The optical flow network jointly trained with stereo depth on KITTI raw data.
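
If you want to inspect a downloaded checkpoint outside the provided scripts, the sketch below shows one way to do it. The file name and checkpoint layout are assumptions; in normal use the evaluation entry point (train.py with --e) handles model loading for you.

    # Peek into a downloaded checkpoint (file name and layout are assumptions).
    import torch

    ckpt = torch.load('KITTI_flow.pth', map_location='cpu')
    # Some checkpoints wrap the weights in a dict under a 'state_dict' key.
    state_dict = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt

    # List a few parameter names/shapes before wiring the weights into a model.
    for name, tensor in list(state_dict.items())[:5]:
        print(name, tuple(tensor.shape))

    # model.load_state_dict(state_dict)  # once the repo's model is instantiated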

Acknowledgement

This repository adapts some snippets from several great works, including PWC-Net, monodepth, UnFlow, UnDepthFlow, and DF-Net. Although most of these are TensorFlow implementations, we are grateful to the authors for sharing these works, which saved us a lot of time.
