As the code is based on the SSDNeRF codebase, the requirements are the same. Additionally, we provide a docker image for ease of use.
```shell
docker pull eliphatfs/zerorf-ssdnerf:0.0.2
```
The code has been tested in the environment described as follows:
- Linux (tested on Ubuntu 18.04/20.04 LTS)
- Python 3.7
- CUDA Toolkit 11
- PyTorch 1.12.1
- MMCV 1.6.0
- MMGeneration 0.7.2
- SpConv 2.3.6
Other dependencies can be installed via `pip install -r requirements.txt`.
An example of the commands for installing the Python packages is shown below (adjust the CUDA version to match your system):

```shell
# Export the PATH of the CUDA toolkit
export PATH=/usr/local/cuda-11.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.5/lib64:$LD_LIBRARY_PATH

# Create conda environment
conda create -y -n ssdnerf python=3.7
conda activate ssdnerf

# Install PyTorch
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

# Install MMCV and MMGeneration
pip install -U openmim
mim install mmcv-full==1.6
git clone https://github.com/open-mmlab/mmgeneration && cd mmgeneration && git checkout v0.7.2
pip install -v -e .
cd ..

# Install SpConv
pip install spconv-cu114

# Clone this repo and install other dependencies
git clone <this repo> && cd <repo folder> && git checkout ssdnerf-sd
pip install -r requirements.txt
```
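If installation succeeds, the pinned versions above should be importable. A minimal sketch for sanity-checking them against the pins (the `check_versions` helper and the `found` dictionary are illustrative, not part of this repo; in practice you would fill `found` from `importlib.metadata.version`):

```python
def check_versions(found, required):
    """Return a list of (name, pinned, installed) for packages whose
    installed version does not match the pin.

    `found` maps package name -> installed version string;
    `required` maps package name -> pinned version from the list above.
    """
    mismatches = []
    for name, pin in required.items():
        installed = found.get(name)
        if installed is None or not installed.startswith(pin):
            mismatches.append((name, pin, installed))
    return mismatches

# Pins taken from the environment list above.
REQUIRED = {"torch": "1.12.1", "mmcv-full": "1.6.0", "mmgen": "0.7.2"}

# Illustrative example: pretend these versions were detected.
found = {"torch": "1.12.1", "mmcv-full": "1.6.0", "mmgen": "0.7.1"}
print(check_versions(found, REQUIRED))  # only mmgen mismatches
```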
Optionally, you can install xFormers for efficient attention. This codebase should also work on Windows systems (tested in inference mode).
Lastly, if you install the dependencies manually, there are two CUDA packages from torch-ngp that need to be built locally:

```shell
cd lib/ops/raymarching/
pip install -e .
cd ../shencoder/
pip install -e .
cd ../../..
```
Execute `zerorf.py` to run ZeroRF.
## Zero123++ Image
ZeroRF can reconstruct generated multi-view images, enabling 3D content generation.
You need to prepare a segmented RGBA image in Zero123++ format (see https://github.com/SUDO-AI-3D/zero123plus). An example can be found at `examples/ice.png`.

```shell
python zerorf.py --load-image=examples/ice.png
```
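Zero123++ renders its six views tiled into a single image, commonly a 3-row by 2-column grid of 320x320 tiles (treat these dimensions as assumptions and check your generator's output). If you need to split such a grid into individual views, a sketch of computing the crop boxes:

```python
def grid_boxes(width=640, height=960, cols=2, rows=3):
    """Return (left, top, right, bottom) crop boxes for a row-major tile grid.

    Defaults assume the Zero123++ layout described above: six 320x320 views
    in 3 rows x 2 columns; adjust if your generator arranges tiles differently.
    """
    tw, th = width // cols, height // rows
    boxes = []
    for r in range(rows):
        for c in range(cols):
            boxes.append((c * tw, r * th, (c + 1) * tw, (r + 1) * th))
    return boxes

boxes = grid_boxes()
print(len(boxes))  # 6 tiles
print(boxes[0])    # (0, 0, 320, 320)
print(boxes[-1])   # (320, 640, 640, 960)
```

Each box can then be passed to, e.g., `PIL.Image.crop` to extract one view.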
The default setup requires 10GB VRAM to operate.
## NeRF-Synthetic
To run general reconstruction, you can prepare the dataset in NeRF-Synthetic format. The NeRF-Synthetic dataset itself can be obtained here.

```shell
python zerorf.py --rep=tensorf --data-dir=path/to/nerf_synthetic --obj=hotdog --n-views=6 --dataset=nerf_syn
```
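The NeRF-Synthetic format stores camera poses in per-split `transforms_*.json` files. A minimal sketch of the layout (the identity pose here is a placeholder; real files hold one camera-to-world matrix per rendered view):

```python
import json

# Skeleton of a transforms_train.json in NeRF-Synthetic format.
# camera_angle_x is the horizontal field of view in radians; each frame
# pairs an image path (extension omitted) with a 4x4 camera-to-world matrix.
transforms = {
    "camera_angle_x": 0.6911112070083618,
    "frames": [
        {
            "file_path": "./train/r_0",
            "transform_matrix": [
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        }
    ],
}

# Round-trip through JSON, as a dataset loader would read it back.
parsed = json.loads(json.dumps(transforms, indent=2))
print(len(parsed["frames"]))  # 1
```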
## Open-Illumination
The dataset can be obtained here. We use the cameras aligned with the axes (train_split, test_split); please put the two files under `path/to/open_illumination/lighting_patterns`.

```shell
python zerorf.py --rep=tensorf --data-dir=path/to/open_illumination/lighting_patterns --obj=obj_04_stone --n-views=6 --dataset=oi
```
The default setup requires about 16GB VRAM to operate, depending on the object. You may want to lower the `--n-rays-up` parameter so training fits in your VRAM (convergence may then take more steps and a longer time).
## Configuration
You can find more configuration options in `opt.py`.
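The flags used in the commands above follow the usual argparse pattern. A hedged sketch of how such options might be declared (only flags that appear in this README are listed; types and defaults here are illustrative, so consult `opt.py` for the authoritative definitions):

```python
import argparse

def build_parser():
    # Sketch of a ZeroRF-style CLI; defaults are illustrative, not from opt.py.
    p = argparse.ArgumentParser(description="ZeroRF options (sketch)")
    p.add_argument("--load-image", type=str, default=None)   # Zero123++ RGBA image
    p.add_argument("--rep", type=str, default="tensorf")     # scene representation
    p.add_argument("--data-dir", type=str, default=None)     # dataset root
    p.add_argument("--obj", type=str, default=None)          # object/scene name
    p.add_argument("--n-views", type=int, default=6)         # training views
    p.add_argument("--dataset", type=str, default="nerf_syn")
    p.add_argument("--n-rays-up", type=int, default=None)    # lower to save VRAM
    return p

# argparse exposes dashed flags as underscored attributes (n-views -> n_views).
args = build_parser().parse_args(
    ["--rep=tensorf", "--obj=hotdog", "--n-views=6", "--dataset=nerf_syn"]
)
print(args.obj, args.n_views)  # hotdog 6
```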