Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur

Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur (CVPR 2023)
Peng Dai*, Yinda Zhang*, Xin Yu, Xiaoyang Lyu, Xiaojuan Qi.
Paper | Project page

Introduction


Our method takes advantage of both neural 3D representations and image-based rendering to render high-fidelity and temporally consistent results. Specifically, the image-based features compensate for the defective neural 3D features, while the neural 3D features boost the temporal consistency of the image-based features. Moreover, we propose efficient designs to handle the motion blur that occurs during capture.

Environment

  • We use the same environment as PointNeRF; please follow their installation instructions step by step (a conda virtual environment is recommended). A setup sketch is given at the end of this section.

  • Install the additional Python dependencies:

pip install opencv_python imutils

The code has been tested on a single NVIDIA RTX 3090 GPU.
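
For reference, a minimal setup sketch. The environment name and Python version below are assumptions; use whatever PointNeRF's instructions specify:

conda create -n hybridrendering python=3.8
conda activate hybridrendering
# install PyTorch and the remaining packages by following the PointNeRF instructions
pip install opencv_python imutils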

Preparation

  • Please download the datasets used in this paper. The directory layout should look like this:

HybridNeuralRendering
├── data_src
│   ├── scannet
│   │   ├── frame_weights_step5
│   │   ├── scans
│   │   │   ├── scene0101_04
│   │   │   ├── scene0241_01
│   │   │   ├── livingroom
│   │   │   ├── vangoroom
│   ├── nerf
│   │   ├── nerf_synthetic
│   │   │   ├── chair
│   │   │   ├── lego
  • Download the pre-trained models. Since we currently focus on per-scene optimization, make sure the "checkpoints" folder contains the "init" and "MVSNet" folders with pre-trained models (a quick layout check is sketched below).
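
As a quick sanity check (a sketch, assuming the layout above), the following should list the scene folders and the two checkpoint folders:

ls data_src/scannet/scans
ls checkpoints/init checkpoints/MVSNet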

Quality-aware weights

The weights are already included in the "frame_weights_step5" folder. Alternatively, you can follow RAFT to set up its running environment and download their pre-trained models, then compute the quality-aware weights by running:

cd raft
python demo_content_aware_weights.py --model=models/raft-things.pth --path=<path to RGB images> --ref_path=<path to RGB images> --scene_name=<scene name>
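
For example, for 'scene0241_01' (the image directory below is a hypothetical path; point both arguments at the scene's RGB frames):

python demo_content_aware_weights.py --model=models/raft-things.pth --path=../data_src/scannet/scans/scene0241_01/exported/color --ref_path=../data_src/scannet/scans/scene0241_01/exported/color --scene_name=scene0241_01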

Train

We take training on ScanNet 'scene0241_01' as an example. The training scripts resume training if "xxx.pth" files are present in the corresponding scene folder, e.g., "checkpoints/scannet/xxx/xxx.pth"; otherwise, they train from scratch.

Hybrid rendering

To use hybrid rendering only, run:

bash ./dev_scripts/w_scannet_etf/scene241_hybrid.sh

Hybrid rendering + blur-handling module (pre-defined degradation kernels)

For the full version of our method, run:

bash ./dev_scripts/w_scannet_etf/scene241_full.sh

Hybrid rendering + blur-handling module (learned degradation kernels)

Instead of using pre-defined kernels, we also provide an efficient way to estimate degradation kernels from rendered and GT patches. Specifically, the rendered and GT patches are flattened, concatenated, and fed into an MLP that predicts the degradation kernel (see the sketch after the command below).

bash ./dev_scripts/w_scannet_etf/scene241_learnable.sh
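
A minimal PyTorch-style sketch of this idea follows. All names, the patch size, and the kernel size are illustrative assumptions, not the repository's actual implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelMLP(nn.Module):
    """Predicts a normalized degradation kernel from a rendered patch and its GT patch."""
    def __init__(self, patch_size=32, kernel_size=5, hidden=256):
        super().__init__()
        self.kernel_size = kernel_size
        in_dim = 2 * 3 * patch_size * patch_size  # two flattened RGB patches
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, kernel_size * kernel_size),
        )

    def forward(self, rendered, gt):
        # rendered, gt: (B, 3, P, P) patches
        x = torch.cat([rendered.flatten(1), gt.flatten(1)], dim=1)
        k = self.mlp(x).softmax(dim=-1)  # non-negative weights that sum to 1
        return k.view(-1, 1, self.kernel_size, self.kernel_size)

def degrade(rendered, kernel):
    # Convolve each patch with its own predicted kernel via a grouped conv.
    b, c, p, _ = rendered.shape
    weight = kernel.repeat_interleave(c, dim=0)  # (B*C, 1, K, K)
    x = rendered.reshape(1, b * c, p, p)
    out = F.conv2d(x, weight, padding=kernel.shape[-1] // 2, groups=b * c)
    return out.reshape(b, c, p, p)

# Usage: supervise the degraded rendering against the blurry GT patch, e.g.
# net = KernelMLP()
# loss = F.l1_loss(degrade(rendered, net(rendered, gt)), gt)

Supervising the degraded patch, rather than the sharp rendering, against the blurry GT keeps the blur out of the reconstruction loss, so the renderer itself is not forced to reproduce it.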

Evaluation

We take evaluation on ScanNet 'scene0241_01' as an example. Specify "name" in "scene241_test.sh" to select the experiment to evaluate, then run:

bash ./dev_scripts/w_scannet_etf/scene241_test.sh

You can directly evaluate using our pre-trained models.

Results

Our method generates high-fidelity results compared with PointNeRF's results and the reference images. Please visit our project page for more comparisons.

Visualization

We visualize the learned degradation kernel in (a). When this kernel is applied to the rendered sharp image patch (b), the resulting degraded patch (c) more closely resembles the defective reference patch (d), which allows high-frequency information to be preserved.

Contact

If you have questions, you can email me ([email protected]).

Citation

If you find this repo useful for your research, please consider citing our paper:

@inproceedings{dai2023hybrid,
  title={Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur},
  author={Dai, Peng and Zhang, Yinda and Yu, Xin and Lyu, Xiaoyang and Qi, Xiaojuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}

Acknowledgement

This repo is heavily based on PointNeRF and RAFT; we thank the authors for their brilliant work.
