The easiest way to make yourself the hero of video memes.
You don't need a powerful video card with ray tracing support to run it.
MacBook Pro M1 — 29 frames — 48.5 seconds
| Without face restoration | With 🧖♂️ face restoration |
|---|---|
| video-1.mp4 | video_01.mp4 |
| video-2.mp4 | |
| video-3.mp4 | |
- Intentionally no audio support.
- Everything tested only on a MacBook Pro (Apple M1, 16 GB RAM).
- Splitting the video into a sequence of images
- Swapping all faces on each image with the insightface module (same as Roop)
- Restoring faces with GFPGAN if the `--restore` option is passed
- Making a video from the image sequence
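The split and assemble steps above can be sketched as the ffmpeg commands they imply. This is a minimal illustration, assuming ffmpeg is the underlying tool (an assumption; the project may use another library), and it only builds the command lines rather than running them. The face-swap step itself is omitted since it runs insightface per frame:

```python
def split_cmd(video: str, frames_dir: str) -> list[str]:
    # Step 1: split the video into a numbered image sequence.
    return ["ffmpeg", "-i", video, f"{frames_dir}/frame_%05d.png"]

def assemble_cmd(frames_dir: str, fps: float, output: str) -> list[str]:
    # Step 4: re-encode the (processed) frames back into an .mp4.
    return [
        "ffmpeg", "-framerate", str(fps),
        "-i", f"{frames_dir}/frame_%05d.png",
        "-c:v", "libx264", "-pix_fmt", "yuv420p", output,
    ]

print(split_cmd("input.mp4", "frames"))
```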
```sh
git clone git@github.com:pfrankov/video-face-swap.git
cd video-face-swap

python3.10 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
Also, you need to put the pretrained models into the `models` directory:
- inswapper_128 is essential for face swapping
- GFPGAN v1.4 for face restoration
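Assuming the commonly used filenames for these weights (an assumption — check the filenames on the respective release pages), the directory would look like:

```
models/
├── inswapper_128.onnx
└── GFPGANv1.4.pth
```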
```sh
# Make sure you're in the virtual environment
source venv/bin/activate

python swap.py ./input/leonardo-dicaprio-rick-dalton.mp4 ./my_face.jpg result.mp4
```
```
Usage:
  swap.py <video> <face> <output> [--restore]

Arguments:
  video    Path to the .mp4 video file to process
  face     Path to the image with your face
  output   Path to the output video with .mp4 extension

Options:
  --restore  Enable face restoration. Slows down processing
```
```sh
# Make batch_run.sh executable
chmod +x batch_run.sh

./batch_run.sh ./my_face.jpg
```
```
Usage:
  ./batch_run.sh <face> [input_directory]

Arguments:
  face             Path to the image with your face
  input_directory  Directory with .mp4 files. Default: `input`
```
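For reference, a script with this interface could be sketched as the loop below. This is a hypothetical reconstruction, not the actual contents of `batch_run.sh`; the `output/` directory and the `-swapped` suffix are assumptions for illustration:

```shell
#!/bin/bash
# Hypothetical sketch: run swap.py on every .mp4 in the input directory.
FACE="$1"
INPUT_DIR="${2:-input}"   # default matches the documented behavior

mkdir -p output
for video in "$INPUT_DIR"/*.mp4; do
  [ -e "$video" ] || continue          # skip if the glob matched nothing
  name="$(basename "$video" .mp4)"
  python swap.py "$video" "$FACE" "output/${name}-swapped.mp4"
done
```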
My channel about neural networks (in Russian): https://t.me/neuronochka