This repository is the official implementation of the ACCV 2022 paper Exp-GAN: 3D-Aware Facial Image Generation with Expression Control.
Yeonkyeong Lee, Taeho Choi, Hyunsung Go, Hyunjoon Lee, Sunghyun Cho, and Junho Kim.
Requirements for using pytorch3d
- Python >= 3.7
- PyTorch >= 1.12.0
Install the project dependencies:
pip install -r requirements.txt
Then build pytorch3d v0.7.0 from source:
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
git checkout v0.7.0
pip install -e .
cd -
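After installation, a quick sanity check (a minimal sketch; the exact CUDA setup depends on your environment) is to confirm that PyTorch and pytorch3d import and that a GPU is visible:

```python
# Sanity check for the core dependencies.
import torch
import pytorch3d

print(torch.__version__)          # should be >= 1.12.0
print(pytorch3d.__version__)      # should be 0.7.0
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
```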
Download the aligned FFHQ dataset images from the official repository and place them under data/FFHQ/img.
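To confirm the images landed in the right place, a small check like the one below can help (a sketch; it assumes the images are stored as .png files somewhere under data/FFHQ/img):

```python
# Count the FFHQ images placed under data/FFHQ/img (assumes .png files).
from pathlib import Path

img_dir = Path("data/FFHQ/img")
n_images = sum(1 for _ in img_dir.rglob("*.png"))
print(f"Found {n_images} images under {img_dir}")
```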
Annotations of DECA parameters (head pose, shape, and expression) for the FFHQ dataset can be downloaded below; place the files under data/FFHQ/annots.
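The downloaded annotation files can be loaded with a few lines of Python to verify they are readable (a sketch; only the container types are inspected, since the per-image entry format comes from the DECA fitting pipeline and is not assumed here):

```python
# Load the DECA annotation files to check they are intact.
import json
import pickle

with open("data/FFHQ/annots/ffhq_deca_ear_ortho.pkl", "rb") as f:
    annots = pickle.load(f)

with open("data/FFHQ/annots/ffhq_deca_ear_ortho_flipped.json") as f:
    annots_flipped = json.load(f)

# The per-image entry format is defined by the DECA pipeline; only the
# top-level container types are printed here.
print(type(annots))
print(type(annots_flipped))
```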
DECA is used to generate facial textures; download the required assets by running:
cd data
sh download_deca.sh
cd -
Below we present the dataset folder tree:
data/
├── DECA/
│   ├── data/
│   │   └── indices_ear_noeye.pkl
│   └── demo/
│       └── meta_smooth.json
└── FFHQ/
    ├── annots/
    │   ├── ffhq_deca_ear_ortho_flipped.json
    │   └── ffhq_deca_ear_ortho.pkl
    └── img/
Please refer to experiments/config/config.yaml to see how the data is used.
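To inspect the data-related settings without opening the file by hand, the config can be parsed with PyYAML (a sketch; it assumes PyYAML is available and does not assume any particular key names inside the file):

```python
# Print the parsed training configuration to see how the dataset paths are wired up.
import yaml

with open("experiments/config/config.yaml") as f:
    cfg = yaml.safe_load(f)

print(yaml.dump(cfg, default_flow_style=False))
```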
Run the following script to train our model:
sh ./experiments/ffhq/train.sh
A pretrained model can be downloaded here. Place the model file at pretrained_model/model_checkpoint.ckpt.
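A quick way to verify the downloaded checkpoint (a sketch; nothing about the checkpoint layout is assumed beyond it being loadable with torch.load):

```python
# Load the pretrained checkpoint on CPU and list its top-level keys.
import torch

ckpt = torch.load("pretrained_model/model_checkpoint.ckpt", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
```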
Run the following script to generate images for the FID evaluation:
python eval.py --cfg <cfg> --ckpt <ckpt> --savedir <savedir>
Then run the following to measure the FID between generated and real images:
python fid.py --root_real <root_real> --root_fake <root_fake> --batch_size 50
where <root_real> contains the downsampled FFHQ images and <root_fake> contains the images generated by eval.py.
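If the real FFHQ images still need to be downsampled to the generator's output resolution before computing FID, something like the sketch below can be used. The 256x256 target resolution, the output folder name, and the Lanczos filter are assumptions for illustration, not values taken from the paper or the code:

```python
# Downsample aligned FFHQ images before running fid.py.
# Resolution, output folder, and resampling filter are assumptions.
from pathlib import Path
from PIL import Image

src_dir = Path("data/FFHQ/img")
dst_dir = Path("data/FFHQ/img_256")  # hypothetical folder to pass as <root_real>
dst_dir.mkdir(parents=True, exist_ok=True)

for src_path in src_dir.rglob("*.png"):
    img = Image.open(src_path).convert("RGB")
    img = img.resize((256, 256), Image.LANCZOS)
    img.save(dst_dir / src_path.name)
```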
Please check demo.ipynb to see how to generate example images with a pretrained model.
Demo videos:
- demo_yaw.mp4
- demo_expr.mp4
- demo_pose_expr.mp4
- demo_low_high_res.mp4
This project is maintained by
- Taeho Choi([email protected])
- Yeonkyeong Lee([email protected])
- Hyunjoon Lee([email protected])
Copyright (c) 2022 POSTECH, Kookmin University, Kakao Brain Corp. All Rights Reserved. Licensed under the Apache License, Version 2.0 (see LICENSE for details)