faceswap with inpaint #89
base: main
Conversation
This is cool. I made a gist (using ChatGPT) to simplify it from the Gradio demo. Presumably you modify a few lines to add the faceswap into the image pipeline? Upgrade the Gradio samples?
@johndpope I don't want to play with the GUI (no experience with Gradio), but if you make a fork and add a checkbox, for example "use mask", which lets you draw a mask on a copy of the uploaded image, then I can probably make it work.
Hi @nosiu - I understand completely.
Hi @johndpope, I don't understand the problem, because the provided […] So if you want to add or change something, it would be better to work from faceswap.py as a base.
Hi @nosiu - thanks for clarifying. A quick change to this should fix it: `#adapter_id = 'loras/pytorch_lora_weights.safetensors'`
Thanks, but this is a path to a file on my HDD, not a Hugging Face repo address.
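The confusion above is between a local LoRA file and a Hugging Face repo id, both of which `load_lora_weights` accepts. A minimal sketch of how to tell them apart before loading (the function name and the example repo id below are assumptions for illustration, not part of the PR):

```python
import os

def resolve_lora_source(adapter_id: str) -> str:
    """Classify a LoRA identifier: an existing local file path is loaded
    directly, anything else is treated as a Hugging Face repo id."""
    return "local file" if os.path.isfile(adapter_id) else "hub repo id"
```

For example, `'loras/pytorch_lora_weights.safetensors'` only works if that file exists on your disk; otherwise diffusers will try (and fail) to resolve it as a hub repo.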
How should I understand this InstantID inpaint relative to Stable Diffusion's original inpaint?
`python faceswap.py`
Hi @t00350320, you are probably using a different version of diffusers.
Thanks for your proposal. Here is our test result with your default scripts:
@t00350320 There are two things wrong with this picture.
I implemented all those features in ComfyUI as a custom node (or at least I've tried).
It works much better with a good inpaint model. I swapped in my own, which takes the Hugging Face inpaint model and trains it heavily with random masks on people, and it works great. My first attempt with the base model came out with an extra forearm/hand (the subject had their hands on their head). :) Then I use fpie/taichi_solver to do the replacement into the original image, and the results are superb. I have not done a detailed comparison with insightface, but this is much higher resolution, of course, and the patched-in region looks at least as good.
faceswap.py: `TypeError: StableDiffusionXLControlNetInpaintPipeline.prepare_control_image() missing 2 required positional arguments: 'crops_coords' and 'resize_mode'`
@qwerdf4 This branch is using diffusers 0.25.0.
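Since `prepare_control_image()` gained required arguments between diffusers releases, a small guard at the top of the script can fail fast with a clear message instead of a confusing TypeError. A minimal sketch, assuming a plain-Python version comparison (pre-release suffixes like `.dev0` are simply ignored):

```python
def version_tuple(v: str) -> tuple:
    """Turn '0.25.0' (or '0.25.0.dev0') into a tuple of its leading
    numeric parts, suitable for ordered comparison."""
    parts = []
    for p in v.split("."):
        if not p.isdigit():
            break
        parts.append(int(p))
    return tuple(parts)

def meets_pinned_version(installed: str, pinned: str = "0.25.0") -> bool:
    """True when the installed diffusers version matches or exceeds the pin."""
    return version_tuple(installed) >= version_tuple(pinned)
```

In practice you would pass `diffusers.__version__` as `installed` and raise a `RuntimeError` naming the pinned version when the check fails.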
I tried to test it and chose two example images: musk_resize.jpeg and post2.jpeg (in this repo), with the parameters:

    controlnet_conditioning_scale=0.8,
    strength=mask_strength,
    ip_adapter_scale=0.3,  # keep it low

The generated results are as follows. Can I ask where I made a mistake? It looks like the face image wasn't replaced correctly.
Face swap with inpainting (sample code uses LCM LoRA)
Works with stabilityai/stable-diffusion-xl-base-1.0 and probably any base XL model; I tested it on a few others.
(Do not use dedicated inpaint models!)
Steps
Because InstantID doesn't handle small faces well, they are (optionally) scaled up, and after exiting the pipeline they are scaled back to their original size.
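The scale-up step above reduces to simple arithmetic on the face crop's dimensions. A sketch, assuming a 512-pixel short-side target (my assumption, not the PR's actual threshold):

```python
def upscaled_size(w: int, h: int, target: int = 512) -> tuple:
    """Size for a face crop so its shorter side reaches `target`.
    Small faces are enlarged; crops already large enough are untouched.
    The 512 target is an assumed value for illustration."""
    scale = max(1.0, target / min(w, h))
    return round(w * scale), round(h * scale)
```

After the pipeline runs, resizing the processed crop back to the original `(w, h)` and pasting it over the source region restores the image's footprint.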
Example 1:
Original Image:
Resized face from the pipeline (1 face embedding):
Result (original image + downscaled face):
Result (4 face embeddings):
Example 2:
Original image:
Face embed from:
Result: