Wenyuan Zhang · Kanle Shi · Yu-Shen Liu · Zhizhong Han
We introduce volume rendering priors to infer UDFs from multi-view images. The prior is learned in a data-driven manner, providing a novel perspective on recovering geometry with prior knowledge through volume rendering.
Clone the repository and create an Anaconda environment called vrpudf using:
git clone [email protected]:wen-yuan-zhang/VolumeRenderingPriors.git
cd VolumeRenderingPriors
conda create -n vrpudf python=3.10
conda activate vrpudf
conda install pytorch=1.13.0 torchvision=0.14.0 cudatoolkit=11.7 -c pytorch
conda install cudatoolkit-dev=11.7 -c conda-forge
pip install -r requirements.txt
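As an optional sanity check, you can verify that PyTorch was installed with CUDA support before proceeding:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"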
We leverage MeshUDF to extract meshes from the learned UDF field, following the same procedure as NeuralUDF. To compile the custom version for your system, please run:
cd custom_mc
python setup.py build_ext --inplace
cd ..
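If the compilation succeeds, a compiled extension (e.g., a .so file on Linux; the exact filename depends on your Python version) should appear under custom_mc/. You can check with:
ls custom_mc/*.so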
For the DTU dataset, we use the same data as NeuS. You can download the DTU dataset from the repo of NeuS.
For the DeepFashion3D dataset, we use the same data as NeuralUDF. You can download the GT images and point clouds from the repo of NeuralUDF.
For the Replica dataset, we processed it into the same data format as NeRF. You can download the ground truths from here.
For the real-captured dataset, the two scenes used in NeUDF can be downloaded from the repo of NeUDF. The other four scenes captured by ourselves can be downloaded from here (coming soon).
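For reference, the evaluation commands later in this document assume the downloaded data is organized roughly as follows (directory names are taken from those example commands; the real_captured folder name is an assumption, so adjust paths to your setup):
data/
├── DTU/                   # DTU scenes in NeuS format
├── DTU_GTpoints/          # DTU ground-truth point clouds
├── deepfashion3d/         # DeepFashion3D scenes
├── deepfashion3d_gt_pc/   # DeepFashion3D ground-truth point clouds
├── Replica/               # Replica scenes in NeRF data format
│   └── gt_meshes/         # Replica ground-truth meshes
└── real_captured/         # real-captured scenes (folder name is an assumption)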
We select a car model from ShapeNet and a cloth model from DeepFashion3D to construct our training dataset. The pre-normalized models and the corresponding preprocessing scripts are provided under data/prior_datasets/. To use the scripts, first install Blender and add the Blender directory to the executable path.
export PATH=$YOUR_PATH_TO/blender-2.90.0-linux64:$PATH
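You can confirm that Blender is on your path with:
blender --version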
Then sample GT SDFs and produce GT depth maps and camera information using the following scripts:
cd data/prior_datasets/shapenet_02958343
python sample_sdf.py
blender --background --python render_blender.py -- --output_folder images model_watertight.obj
cd ../df3d_1
python sample_sdf.py
blender --background --python render_blender.py -- --output_folder images model_watertight.obj
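Conceptually, this step samples ground-truth signed distances around the watertight model and renders depth maps used to supervise the prior. The snippet below is only an illustrative sketch of such distance sampling using trimesh; it is not the repository's sample_sdf.py, and the sampling strategy, counts, and output format are assumptions.
# Illustrative only: sample signed distances to a watertight mesh with trimesh.
# The repository's sample_sdf.py may use a different strategy and file format.
import numpy as np
import trimesh

mesh = trimesh.load("model_watertight.obj")

# Query points: near-surface samples plus uniform samples in the bounding cube.
near_pts = mesh.sample(20000) + np.random.normal(scale=0.01, size=(20000, 3))
uniform_pts = np.random.uniform(-1.0, 1.0, size=(20000, 3))
queries = np.concatenate([near_pts, uniform_pts], axis=0)

# Signed distance (trimesh convention: positive inside the mesh); abs() gives the UDF.
sdf = trimesh.proximity.signed_distance(mesh, queries)

np.savez("sdf_samples.npz", points=queries, sdf=sdf)  # output filename is hypothetical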
Use the following script to train the volume rendering priors.
python exp_runner.py --conf confs/shapenet/shapenet.conf --mode train_mlp
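Conceptually, the prior trained here is a small network that maps UDF values sampled along a ray to volume rendering weights (alpha values), supervised with the ground-truth geometry prepared above. The sketch below only illustrates this idea; it is not the repository's network definition, and the window size, layer widths, and names are assumptions.
import torch
import torch.nn as nn

class RenderingPriorMLP(nn.Module):
    # Illustrative prior: maps a local window of UDF samples along a ray to an alpha value.
    def __init__(self, window_size=5, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # alpha in [0, 1]
        )

    def forward(self, udf_window):
        # udf_window: (n_rays, n_samples, window_size) UDF values around each sample point
        return self.net(udf_window).squeeze(-1)  # (n_rays, n_samples)

def alphas_to_weights(alpha):
    # Standard volume rendering compositing: w_i = alpha_i * prod_{j<i} (1 - alpha_j)
    ones = torch.ones_like(alpha[..., :1])
    transmittance = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-7], dim=-1), dim=-1)[..., :-1]
    return alpha * transmittance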
We provide example commands for training on the DTU, DeepFashion3D, Replica, and real-captured datasets as follows:
# DTU scan24
python exp_runner.py --conf confs/dtu/womask.conf --mode train_udf_color_wodepth --case dtu_scan24
# DeepFashion 30
python exp_runner.py --conf confs/deepfashion3d/deepfashion3d.conf --mode train_udf_color_wodepth --case 30
# Replica room0
python exp_runner.py --conf confs/replica/replica.conf --mode train_udf_color_wodepth --case room0
# real-captured fan
python exp_runner.py --conf confs/real_captured/real_captured.conf --mode train_udf_color_wodepth --case fan
The --mode argument selects the training manner. Different modes turn different supervisions on or off.
Optional arguments for --mode:
train_udf: train the UDF network with RGB and depth supervisions.
train_udf_color: train both the UDF network and the color network with RGB and depth supervisions.
train_udf_color_wodepth: train both the UDF network and the color network with RGB supervision only. Default setting. To align with other UDF baselines, please keep this setting.
To extract surfaces from a trained model, use the following scripts:
# DTU scan24
python exp_runner.py --conf confs/dtu/womask.conf --mode train_udf_color_wodepth_validate_mesh --case dtu_scan24 --is_continue
# Deepfashion 30
python exp_runner.py --conf confs/deepfashion3d/deepfashion3d.conf --mode train_udf_color_wodepth_validate_mesh --case 30 --is_continue
# Replica room0
python exp_runner.py --conf confs/replica/replica.conf --mode train_udf_color_wodepth_validate_mesh --case room0 --is_continue
# real-captured fan
python exp_runner.py --conf confs/real_captured/real_captured.conf --mode train_udf_color_wodepth_validate_mesh --case fan --is_continue
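The extracted meshes are written under the experiment's log directory; for the example commands above they end up in paths like the following (taken from the evaluation commands below):
# e.g. list the meshes extracted for DTU scan24
ls log/DTU/dtu_scan24/udf_meshes/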
To evaluate the extracted meshes and compute the numerical results, use the following scripts. The post-processed meshes and the evaluation results will be saved under $mesh_dir/udf_meshes_clean/.
cd evaluation/
# DTU
python eval_dtu.py --gt_dir $PATH_TO_GT --data_dir $PATH_TO_DATA --mesh_dir $PATH_TO_EXTRACTED_MESH --scan $CASE_NAME --mesh_name $MESH_NAME
# example
python eval_dtu.py --gt_dir ../data/DTU_GTpoints --data_dir ../data/DTU --mesh_dir ../log/DTU/dtu_scan24/udf_meshes --scan 24
# Deepfashion
python eval_deepfashion.py --gt_dir ../data/deepfashion3d_gt_pc --data_dir ../data/deepfashion3d --mesh_dir ../log/deepfashion3d/30/udf_meshes --scan 30
# Replica
python eval_replica.py --gt_dir ../data/Replica/gt_meshes/ --data_dir ../data/Replica --mesh_dir ../log/replica/room0/udf_meshes/ --scan room0
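For reference, the reported numbers are based on nearest-neighbor distances between points sampled from the extracted mesh and the ground truth. The snippet below is a generic Chamfer-distance sketch, not the repository's evaluation code; the file paths, mesh filename, and sample count are assumptions.
import numpy as np
import trimesh
from scipy.spatial import cKDTree

def chamfer_distance(pred_pts, gt_pts):
    # Symmetric Chamfer distance: mean of the two one-sided mean nearest-neighbor distances.
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)
    return 0.5 * (d_pred_to_gt.mean() + d_gt_to_pred.mean())

# Hypothetical example: compare an extracted DTU mesh against its GT point cloud.
pred_mesh = trimesh.load("../log/DTU/dtu_scan24/udf_meshes_clean/mesh_clean.ply")  # filename is an assumption
gt_pts = np.asarray(trimesh.load("../data/DTU_GTpoints/scan24.ply").vertices)      # filename is an assumption
pred_pts = pred_mesh.sample(100000)
print("Chamfer distance:", chamfer_distance(pred_pts, gt_pts))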
We provide the pretrained meshes for all four datasets. If you would like to use these meshes, please download them from here.
If you find our code or paper useful, please consider citing
@inproceedings{zhang2024learning,
title={Learning Unsigned Distance Functions from Multi-view Images with Volume Rendering Priors},
author={Zhang, Wenyuan and Shi, Kanle and Liu, Yu-Shen and Han, Zhizhong},
booktitle={European Conference on Computer Vision},
year={2024},
organization={Springer}
}
This project is built upon NeuS. The mesh extraction and evaluation scripts are partially borrowed from MeshUDF and NeuralUDF. Thanks to these great projects.