This repository is the official implementation of our paper:
Yan, P., Gregson, J., Tang, Q., Ward, R., Xu, Z., & Du, S. “NEO-3DF: Novel Editing-Oriented 3D Face Creation and Reconstruction”. Accepted, 2022 Asian Conference on Computer Vision (ACCV), Macau SAR, China.
Please cite the following paper if you find this work helpful to your research:
```bibtex
@inproceedings{yan2022neo,
  title={NEO-3DF: Novel Editing-Oriented 3D Face Creation and Reconstruction},
  author={Yan, Peizhi and Gregson, James and Tang, Qiang and Ward, Rabab and Xu, Zhan and Du, Shan},
  booktitle={Proceedings of the Asian Conference on Computer Vision},
  pages={486--502},
  year={2022}
}
```
Requirements:

- Python == 3.6.7
- Cython == 0.29.22
- dlib == 19.22.0
- face-alignment == 1.3.4
- facenet-pytorch == 2.5.2
- jupyter == 1.0.0
- matplotlib == 3.2.1
- networkx == 2.3
- ninja == 1.10.2
- numpy == 1.19.5
- nvdiffrast == 0.2.5
- open3d == 0.12.0
- opencv-python == 4.1.0.25
- pandas == 0.25.0
- Pillow == 8.3.2
- pytorch3d == 0.5.0
- scikit-image == 0.17.2
- scikit-learn == 0.24.2
- scipy == 1.4.1
- seaborn == 0.10.0
- sklearn == 0.0
- tensorboard == 2.6.0
- torch == 1.9.0+cu111
- torchvision == 0.10.0+cu111
- tqdm == 4.62.2
- trimesh == 3.9.19
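The version pins above are strict. A quick sanity check before running the pipeline (a minimal sketch; it only spot-checks a few of the pins):

```python
# Spot-check a few pinned packages before running the pipeline.
import torch
import numpy as np
import cv2

assert torch.__version__.startswith("1.9.0"), f"expected torch 1.9.0+cu111, got {torch.__version__}"
assert torch.cuda.is_available(), "a CUDA build of torch (cu111) is expected for training and rendering"
print("numpy", np.__version__, "| opencv", cv2.__version__, "| cuda available")
```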
Data preparation:

- STEP-1: Follow ./BFM/README.md to prepare the ./BFM folder.
- STEP-2: Create a folder ./datasets with two subfolders: ./datasets/FFHQ and ./datasets/CelebA.
- STEP-3: Download the FFHQ dataset (link) and the CelebAMask-HQ dataset (link), and extract them to ./datasets/FFHQ and ./datasets/CelebA, respectively.
- STEP-4: Run ./data_preparation/preprocess_FFHQ.py to resize the images and detect landmarks.
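The core of this step looks roughly like the following (a sketch, assuming the face-alignment package pinned above; the file path and 256-pixel target size are illustrative, not the script's actual values):

```python
# Sketch: resize an FFHQ image and detect 68 2D facial landmarks.
import cv2
import face_alignment

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cuda')

img = cv2.imread('datasets/FFHQ/00000.png')                      # illustrative path
img = cv2.resize(img, (256, 256), interpolation=cv2.INTER_AREA)  # illustrative size
lms = fa.get_landmarks(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))     # list of (68, 2) arrays, or None
```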
- STEP-5: Run ./data_preparation/preprocess_CelebA.py to resize the images and masks, and detect landmarks.
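One detail worth noting: parsing masks carry integer labels, so they must be resized with nearest-neighbor interpolation to avoid blending label values (a sketch; the path and size are illustrative):

```python
# Sketch: resize a parsing mask without blending its integer labels.
import cv2

mask = cv2.imread('datasets/CelebA/mask/0.png', cv2.IMREAD_GRAYSCALE)  # illustrative path
mask = cv2.resize(mask, (256, 256), interpolation=cv2.INTER_NEAREST)
```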
- STEP-6: Extract the Deep 3DMM code to ./data_preparation/. Note that you can prepare its BFM folder using the files prepared in STEP-1. Then run ./data_preparation/deep3dmm_CelebA.py to generate 3DMM coefficients for the CelebA images, and ./data_preparation/deep3dmm_FFHQ.py to process the FFHQ dataset in the same way.
- STEP-7: Run ./data_preparation/coeff_to_shape.py to convert the 3DMM coefficients to shape files in NumPy format.
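The conversion follows the standard BFM linear model: vertices = mean shape + identity basis × identity coefficients + expression basis × expression coefficients. A minimal sketch (array names are illustrative; the actual basis files come from the ./BFM folder prepared in STEP-1):

```python
# Sketch: convert Deep 3DMM coefficients to a dense vertex array.
import numpy as np

def coeff_to_shape(mean_shape, id_basis, exp_basis, id_coeff, exp_coeff):
    # mean_shape: (3N,); id_basis: (3N, 80); exp_basis: (3N, 64)
    # (80 identity / 64 expression dims follow Deep 3DMM's BFM split)
    verts = mean_shape + id_basis @ id_coeff + exp_basis @ exp_coeff
    return verts.reshape(-1, 3).astype(np.float32)
```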
- STEP-8: Run ./data_preparation/facenet_encoding.py to extract the FaceNet encodings of the images.
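With the pinned facenet-pytorch package, each encoding is a 512-D embedding from a pre-trained InceptionResnetV1 (a sketch; the random tensor stands in for a preprocessed, normalized face crop):

```python
# Sketch: extract a 512-D FaceNet embedding per image.
import torch
from facenet_pytorch import InceptionResnetV1

resnet = InceptionResnetV1(pretrained='vggface2').eval()
img = torch.rand(1, 3, 160, 160)    # stand-in for a normalized 160x160 face crop
with torch.no_grad():
    encoding = resnet(img)          # shape (1, 512)
```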
Training:

- Download the pre-trained models and extract them to ./saved_models/.
- Run ./train/train_vae_overall.ipynb to train the VAE for the overall shape. Save the trained model to ./saved_models/part_vaes and ./saved_models/part_decoders.
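For orientation, a shape VAE in this setting operates on flattened vertex coordinates. A minimal sketch (the layer sizes and latent dimension are illustrative, not the paper's architecture):

```python
# Minimal shape-VAE sketch: encode flattened (3N,) vertices to a latent code and decode back.
import torch
import torch.nn as nn

class ShapeVAE(nn.Module):
    def __init__(self, n_verts3, latent=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_verts3, 512), nn.ReLU())
        self.fc_mu = nn.Linear(512, latent)
        self.fc_logvar = nn.Linear(512, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, n_verts3))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1e-4):
    rec = ((recon - x) ** 2).mean()                                 # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term
    return rec + beta * kld
```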
- Run ./train/train_vaes_parts.ipynb to train the VAEs for all five parts. Save the trained models to ./saved_models/part_vaes and ./saved_models/part_decoders.
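The per-part training differs mainly in which vertices each VAE sees (a sketch reusing the ShapeVAE above; the part names and index files are hypothetical, not the repo's actual ones):

```python
# Sketch: one VAE per facial part, each trained on that part's vertex subset.
import numpy as np

parts = ['eyes', 'nose', 'mouth', 'cheeks', 'jaw']     # hypothetical part names
part_vaes = {}
for p in parts:
    idx = np.load(f'part_indices/{p}.npy')             # hypothetical per-part vertex indices
    part_vaes[p] = ShapeVAE(n_verts3=3 * len(idx))     # assumes ShapeVAE from the sketch above
```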
- Run ./train/train_other.ipynb to train the part encoders (a.k.a. disentanglement networks) and the offset regressor, and to fine-tune FaceNet.
- Run ./train/fine_tune_on_CelebA.ipynb to fine-tune the network with additional guidance from CelebA's parsing masks.
- Run all the code in ./mapping/measure/ to generate the measured features (e.g., nose height, nose bridge width).
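Each measured feature is a simple geometric quantity on the mesh, e.g., a distance between fixed vertices (a sketch; the vertex indices below are placeholders, not the repo's actual ones):

```python
# Sketch: nose height measured as the distance between two fixed mesh vertices.
import numpy as np

NOSE_TOP, NOSE_BASE = 8191, 8320     # placeholder vertex indices

def nose_height(verts):              # verts: (N, 3) vertex array
    return float(np.linalg.norm(verts[NOSE_TOP] - verts[NOSE_BASE]))
```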
- Run all the scripts whose names start with "mapping" in ./mapping/ to generate the linear mappings for local control/editing.
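The idea of such a mapping, schematically: fit a linear map from part-VAE latents to measured features by least squares, then edit by moving along its pseudo-inverse (a sketch; the cached .npy files are assumptions):

```python
# Sketch: linear mapping between latents and measured features for editing.
import numpy as np

Z = np.load('latents.npy')     # hypothetical cache: (M, latent) encoded training shapes
F = np.load('features.npy')    # hypothetical cache: (M, k) measured features
W, *_ = np.linalg.lstsq(Z, F, rcond=None)   # Z @ W ≈ F

df = np.zeros(F.shape[1]); df[0] = 1.0      # e.g., increase the first feature by one unit
dz = df @ np.linalg.pinv(W)                 # minimal-norm latent offset realizing df
```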
- Run ./local_editing_demo/run.py to try our local 3D face editing system.
Automatic shape adjusting:

- Download the pre-computed inverse of matrix A and save it to ./automatic_shape_adjusting/: download
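Schematically, the precomputed inverse turns each ARAP solve into a matrix product, which is what keeps the adjustment differentiable (a sketch; the file name and shapes are assumptions):

```python
# Sketch: with the ARAP system matrix inverted offline, a solve is just a product,
# so gradients can flow through it in torch.
import numpy as np
import torch

A_inv = torch.from_numpy(np.load('automatic_shape_adjusting/A_inv.npy')).float()  # (n, n), assumed name
b = torch.zeros(A_inv.shape[1], 3, requires_grad=True)  # RHS from handle constraints / rotations
verts = A_inv @ b                                       # differentiable w.r.t. b
```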
- ./automatic_shape_adjusting/arap_demo.py demonstrates our differentiable ARAP method using an example 3D shape.
- ./automatic_shape_adjusting/shape_adjusting.ipynb demonstrates the automatic shape adjusting.
This work is partially based on the following works:
- Basel Face Model (BFM): https://faces.dmi.unibas.ch/bfm/main.php?nav=1-0&id=basel_face_model
- Expression bases for BFM: https://github.com/Juyong/3DFace
- Deep 3DMM (PyTorch implementation): https://github.com/sicxu/Deep3DFaceRecon_pytorch
- FaceNet (PyTorch implementation): https://github.com/timesler/facenet-pytorch
- FaceParsing: https://github.com/hhj1897/face_parsing
- 3DMM fitting code: https://github.com/ascust/3DMM-Fitting-Pytorch
- CelebAMask-HQ dataset: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
- FFHQ dataset: https://github.com/NVlabs/ffhq-dataset
Contact: Peizhi Yan ([email protected])