Bring portraits to life!
Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
Official implementation of the paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models
This codebase demonstrates how to synthesize realistic 3D character animations given an arbitrary speech signal and a static character mesh.
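A minimal sketch of the speech-plus-mesh interface such a codebase exposes, assuming placeholder file paths and a zero-offset stand-in for the model's predicted per-frame vertex displacements (the actual API differs per repository):

```python
# Sketch of a speech-driven 3D animation interface, under assumptions:
# the file paths and the offset prediction are placeholders, not any repo's real API.
import numpy as np
import soundfile as sf

speech, sample_rate = sf.read("speech.wav")        # arbitrary speech signal (example path)
template = np.load("character_vertices.npy")       # static mesh vertices, shape (V, 3)

fps = 30
num_frames = int(len(speech) / sample_rate * fps)

# Hypothetical model output: per-frame vertex offsets, shape (num_frames, V, 3).
offsets = np.zeros((num_frames, *template.shape), dtype=np.float32)

# Animated mesh sequence: the static template displaced frame by frame.
animation = template[None, :, :] + offsets
```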
Wunjo CE: Face Swap, Lip Sync, Control Remove Objects & Text & Background, Restyling, Audio Separator, Clone Voice, Video Generation. Open Source, Local & Free.
[ECCV 2024 Oral] EDTalk - Official PyTorch Implementation
This is the official repository for OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering [CVPR 2023].
Official PyTorch implementation of the 3DV 2021 paper: SAFA: Structure Aware Face Animation.
Blender add-on implementing the VOCA neural network.
Speech to Facial Animation using GANs
[NeurIPS 2023] Learning Motion Refinement for Unsupervised Face Animation
A software pipeline for creating realistic videos of people talking, using only images.
One-shot face animation using a webcam, capable of running in real time; see the sketch below.
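A minimal sketch of the real-time loop such a tool runs, assuming standard OpenCV capture and display calls; the animate() step is a hypothetical stand-in for the one-shot model and is replaced here by a pass-through so the loop runs on its own:

```python
# Real-time webcam-driven loop sketch. The source image path and the animate()
# call are illustrative assumptions; only the OpenCV capture/display calls are standard.
import cv2

source = cv2.imread("portrait.jpg")          # static source portrait (example path)
cap = cv2.VideoCapture(0)                    # default webcam

while True:
    ok, driving = cap.read()                 # grab one driving frame
    if not ok:
        break
    # result = animate(source, driving)      # model-specific call, not shown here
    result = driving                         # placeholder so the loop runs standalone
    cv2.imshow("animated", result)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```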
Language-Guided Face Animation by Recurrent StyleGAN-based Generator
Face Animation from Text 🧙‍♂️
Official Implementation of "Style Generator Inversion for Image Enhancement and Animation".
Thin Plate Spline Motion Model (TPSMM) converted to ONNX.
Thin Plate Spline Motion Model - ONNX. Extended version for Video-2-Video and Video-2-Image animation.
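A minimal sketch of driving an ONNX export like these with onnxruntime; the model filename and the 1x3x256x256 input shape are assumptions, and the real input names are read from the session rather than hard-coded:

```python
# Sketch of running an ONNX-exported motion model with onnxruntime.
# "tpsmm.onnx" and the 1x3x256x256 tensor shape are assumptions for illustration;
# inspect the exported model's signature before feeding real data.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("tpsmm.onnx", providers=["CPUExecutionProvider"])

# Read the actual input names from the session instead of guessing them.
input_names = [i.name for i in sess.get_inputs()]

# Placeholder tensors: replace with a preprocessed source image / driving frame.
feeds = {name: np.random.rand(1, 3, 256, 256).astype(np.float32) for name in input_names}

outputs = sess.run(None, feeds)
frame = outputs[0]  # animated frame; layout depends on how the model was exported
```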