IC-Light

IC-Light is a project to manipulate the illumination of images.

The name "IC-Light" stands for "Imposing Consistent Light" (we will briefly describe this at the end of this page).

Currently, we release two types of models: a text-conditioned relighting model and a background-conditioned model. Both types take foreground images as inputs.

Get Started

The script below runs the text-conditioned relighting model:

git clone https://github.com/lllyasviel/IC-Light.git
cd IC-Light
conda create -n iclight python=3.10
conda activate iclight
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
python gradio_demo.py

Or, to use the background-conditioned demo:

python gradio_demo_bg.py

Model downloading is automatic.

Note that the "gradio_demo.py" has an official huggingFace Space here.

Screenshot

Text-Conditioned Model

(Note that the "Lighting Preference" are just initial latents - eg., if the Lighting Preference is "Left" then initial latent is left white right black.)


Prompt: beautiful woman, detailed face, warm atmosphere, at home, bedroom

Lighting Preference: Left

image


Prompt: beautiful woman, detailed face, sunshine from window

Lighting Preference: Left

image


Prompt: beautiful woman, detailed face, neon, Wong Kar-wai, warm

Lighting Preference: Left

image


Prompt: beautiful woman, detailed face, sunshine, outdoor, warm atmosphere

Lighting Preference: Right

image


Prompt: beautiful woman, detailed face, sunshine, outdoor, warm atmosphere

Lighting Preference: Left

image


Prompt: beautiful woman, detailed face, sunshine from window

Lighting Preference: Right

image


Prompt: beautiful woman, detailed face, shadow from window

Lighting Preference: Left

image


Prompt: beautiful woman, detailed face, sunset over sea

Lighting Preference: Right

image


Prompt: handsome boy, detailed face, neon light, city

Lighting Preference: Left

image


Prompt: beautiful woman, detailed face, light and shadow

Lighting Preference: Left

image

Prompt: beautiful woman, detailed face, soft studio lighting

image


Prompt: Buddha, detailed face, sci-fi RGB glowing, cyberpunk

Lighting Preference: Left

image


Prompt: Buddha, detailed face, natural lighting

Lighting Preference: Left

image


Prompt: toy, detailed face, shadow from window

Lighting Preference: Bottom

image


Prompt: toy, detailed face, sunset over sea

Lighting Preference: Right

image


Prompt: dog, magic lit, sci-fi RGB glowing, studio lighting

Lighting Preference: Bottom

image


Prompt: mysterious human, warm atmosphere, at home, bedroom

Lighting Preference: Right

image


Background-Conditioned Model

The background-conditioned model does not require careful prompting. One can just use simple prompts like "handsome man, cinematic lighting".


image

image

image

image


A more structured visualization:

r1

Imposing Consistent Light

In HDR space, illumination has the property that all light transports are independent.

As a result, blending the appearances under different light sources is equivalent to the appearance under the mixed light sources:

cons

Using the above light stage as an example, the two images from the "appearance mixture" and "light source mixture" are consistent (mathematically equivalent in HDR space, ideally).
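A toy NumPy check of this property, with a simple Lambertian renderer standing in for the light stage (purely illustrative; nothing here comes from the training code):

import numpy as np

rng = np.random.default_rng(0)
normals = rng.normal(size=(64, 64, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
albedo = rng.uniform(0.2, 0.9, size=(64, 64, 3))

def render(lights):
    """Lambertian appearance under a set of directional lights, in linear (HDR) space."""
    img = np.zeros_like(albedo)
    for direction, intensity in lights:
        d = np.asarray(direction, dtype=np.float64)
        d /= np.linalg.norm(d)
        img += albedo * np.clip(normals @ d, 0.0, None)[..., None] * intensity
    return img

l1 = ([1.0, 0.0, 0.5], 2.0)    # light source 1
l2 = ([-1.0, 0.3, 0.5], 1.5)   # light source 2

appearance_mixture = render([l1]) + render([l2])    # blend the two appearances
light_mixture = render([l1, l2])                    # render with both lights on
assert np.allclose(appearance_mixture, light_mixture)   # equal in HDR space

# After clipping to [0, 1] (LDR), the equality breaks, which is why HDR space matters:
ldr = lambda x: np.clip(x, 0.0, 1.0)
print(np.abs(ldr(render([l1])) + ldr(render([l2])) - ldr(render([l1, l2]))).max())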

We imposed such consistency (using MLPs in latent space) when training the relighting models.

As a result, the model is able to produce highly consistent relighting - so consistent that different relightings can even be merged into normal maps, despite the fact that the models are latent diffusion models.

r2

From left to right: the inputs, the model's relighting outputs, the divided shadow images, and the merged normal maps. Note that the model is not trained with any normal map data; this normal estimation comes from the consistency of the relighting.
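For intuition, a rough sketch (not the repository's code; the exact normalization used by the demo may differ) of how four relightings lit from Left/Right/Top/Bottom could be merged into a normal estimate, in the spirit of photometric stereo:

import numpy as np

def normals_from_relightings(left, right, top, bottom, eps=1e-5):
    """Merge four directional relightings (H x W x 3 arrays in [0, 1]) into normals."""
    gray = lambda x: np.asarray(x, dtype=np.float64).mean(axis=-1)
    l, r, t, b = map(gray, (left, right, top, bottom))
    ambient = (l + r + t + b) / 4.0 + eps                 # divide out shared albedo/shading
    u = np.clip((r - l) / (2.0 * ambient), -1.0, 1.0)     # x: lit-from-right minus lit-from-left
    v = np.clip((t - b) / (2.0 * ambient), -1.0, 1.0)     # y: lit-from-top minus lit-from-bottom
    z = np.sqrt(np.clip(1.0 - u ** 2 - v ** 2, eps, 1.0))
    n = np.stack([u, v, z], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# (n + 1) / 2 then gives an RGB-style normal map for visualization.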

You can reproduce this experiment using this button (it is 4x slower because it relights the image 4 times):

image

image

Below are bigger images (feel free to try it yourself to get more results!)

image

image

For reference, GeoWizard (a really great work!):

image

And, SwitchLight (another great work!):

image

Model Notes

  • iclight_sd15_fc.safetensors - The default relighting model, conditioned on text and foreground. You can use the initial latent to influence the relighting.

  • iclight_sd15_fcon.safetensors - Same as "iclight_sd15_fc.safetensors" but trained with offset noise. Note that the default "iclight_sd15_fc.safetensors" slightly outperforms this model in a user study, which is why the default model is the one without offset noise.

  • iclight_sd15_fbc.safetensors - Relighting model conditioned on text, foreground, and background (see the sketch after this list for how the condition latents are concatenated).
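A rough sketch of how the two conditionings differ at the UNet input (the channel counts reflect my reading of the released weights, not an official spec):

import torch

noisy_latent = torch.randn(1, 4, 64, 64)   # standard SD 1.5 latent
fg_latent    = torch.randn(1, 4, 64, 64)   # VAE-encoded foreground
bg_latent    = torch.randn(1, 4, 64, 64)   # VAE-encoded background

fc_input  = torch.cat([noisy_latent, fg_latent], dim=1)              # 8 channels  (fc / fcon)
fbc_input = torch.cat([noisy_latent, fg_latent, bg_latent], dim=1)   # 12 channels (fbc)
print(fc_input.shape, fbc_input.shape)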

Also note that the original BRIA RMBG 1.4 is for non-commercial use. If you use IC-Light in commercial projects, replace it with another background remover such as BiRefNet.

Cite

@Misc{iclight,
  author = {Lvmin Zhang and Anyi Rao and Maneesh Agrawala},
  title  = {IC-Light GitHub Page},
  year   = {2024},
}

Related Work

Also read ...

Total Relighting: Learning to Relight Portraits for Background Replacement

Relightful Harmonization: Lighting-aware Portrait Background Replacement

SwitchLight: Co-design of Physics-driven Architecture and Pre-training Framework for Human Portrait Relighting
