This extension replaces the built-in LoRA forward procedure and provides support for LoCon and LyCORIS.
This extension is forked from the Composable LoRA extension.
Note: This version of Composable LoRA already includes all the features of the original Composable LoRA extension, so you only need to install one of the two.
This extension cannot be used at the same time as the original Composable LoRA extension. Before installing it, you must first delete the original extension's `stable-diffusion-webui-composable-lora` folder from the `webui\extensions\` directory.
Next, go to [Extensions] -> [Install from URL] in the webui and enter the following URL:
https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
Install and restart to complete the process.
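Alternatively, assuming you have git available, you can clone the repository into the extensions directory yourself (adjust the path to your webui installation):

```
cd path\to\stable-diffusion-webui\extensions
git clone https://github.com/a2569875/stable-diffusion-webui-composable-lora.git
```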
Here we demonstrate two LoRAs (one LoHA and one LoCon):
- `<lora:roukin8_loha:0.8>` corresponds to the trigger word `yamanomitsuha`
- `<lora:dia_viekone_locon:0.8>` corresponds to the trigger word `dia_viekone_\(ansatsu_kizoku\)`
We use the Latent Couple extension to generate the images.

It can be observed that:
- The combination of `<lora:roukin8_loha:0.8>` with `yamanomitsuha`, and of `<lora:dia_viekone_locon:0.8>` with `dia_viekone_\(ansatsu_kizoku\)`, successfully generates the corresponding characters.
- When the trigger words are swapped so that they no longer match their LoRAs, neither character is generated correctly. This demonstrates that `<lora:roukin8_loha:0.8>` is restricted to the left half of the image, while `<lora:dia_viekone_locon:0.8>` is restricted to the right half. Therefore, the algorithm is effective.
The highlighting of the prompt words on the image is done using the sd-webui-prompt-highlight plugin.
This test was conducted on May 14, 2023, using Stable Diffusion WebUI version v1.2 (89f9faa).
Another test was conducted on July 25, 2023, using Stable Diffusion WebUI version v1.5.0 (a3ddf46), with hiyori (princess_connect!) and dia viekone LoCon models that I trained myself.
By associating a LoRA's insertion position in the prompt with the `AND` syntax, the LoRA's scope of influence is limited to a specific sub-prompt.
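For example, a prompt laid out like the sketch below (an illustrative layout mirroring the demo above, not the exact prompt used in the test) applies each LoRA only inside its own `AND` block, so that with Latent Couple each LoRA also affects only its own image region:

```
yamanomitsuha, <lora:roukin8_loha:0.8>
AND dia_viekone_\(ansatsu_kizoku\), <lora:dia_viekone_locon:0.8>
```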
By placing a LoRA within the prompt in the form `[A:B:N]`, the scope of the LoRA's effect is limited to specific drawing steps.
Added a syntax `[A #xxx]` to control the weight of the LoRA at each drawing step. You can replace the `#` symbol with `\u0023` if `#` doesn't work.
Currently supported options are:
- `decrease` : Gradually decreases the weight within the LoRA's effective steps, down to 0.
- `increment` : Gradually increases the weight from 0 within the LoRA's effective steps.
- `cmd(...)` : A customizable weight-control command, written mainly in Python syntax.
  - Available parameters:
    - `weight` : The current weight of the LoRA.
    - `life` : A number between 0 and 1 indicating the current point in the LoRA's life cycle: 0 at the starting step and 1 at the final step of this LoRA's effect.
    - `step` : The current step number.
    - `steps` : The total number of steps.
    - `lora` : The current LoRA object.
    - `lora_module` : The current LoRA working-layer object.
    - `lora_type` : The type of LoRA being loaded, which may be `lora` or `lyco`.
    - `lora_name` : The name of the current LoRA.
    - `lora_count` : The total number of LoRAs.
    - `block_lora_count` : The number of LoRAs in the `AND ... AND` block currently being processed.
    - `is_negative` : Whether the current prompt is a negative prompt.
    - `layer_name` : The name of the current working layer. You can use this to check which layer is being processed and simulate the effect of LoRA Block Weight.
    - `current_prompt` : The prompt currently being processed in the `AND ... AND` block.
    - `sd_processing` : The parameters used to generate the SD image.
    - `enable_prepare_step` : (Output parameter) If set to True, this weight is also applied to the transformer text model encoder layer. If `step == -1`, the current layer is the transformer text model encoder layer.
  - Available functions:
    - `warmup(x)` : x is a number between 0 and 1 representing a warm-up constant. Calculated over the total number of steps, the function value gradually increases from 0 to 1 until the fraction x is reached.
    - `cooldown(x)` : x is a number between 0 and 1 representing a cool-down constant. Calculated over the total number of steps, the function value gradually decreases from 1 to 0 after the fraction x.
    - `sin`, `cos`, `tan`, `asin`, `acos`, `atan` : Trigonometric functions whose period is the total number of steps. The values of `sin` and `cos` fall between 0 and 1.
    - `sinr`, `cosr`, `tanr`, `asinr`, `acosr`, `atanr` : Trigonometric functions in radians, with a period of 2π.
    - `abs`, `ceil`, `floor`, `trunc`, `fmod`, `gcd`, `lcm`, `perm`, `comb`, `gamma`, `sqrt`, `cbrt`, `exp`, `pow`, `log`, `log2`, `log10` : Functions from Python's `math` library.

Examples:
- `[<lora:A:1>::10]` : Use the LoRA named A from the start and stop applying it after step 10.
- `[<lora:A:1>:<lora:B:1>:10]` : Use the LoRA named A first, then switch to the LoRA named B after step 10.
- `[<lora:A:1>:10]` : Start using the LoRA named A from step 10.
- `[<lora:A:1>:0.5]` : Start using the LoRA named A from 50% of the steps.
- `[[<lora:A:1>::25]:10]` : Use the LoRA named A from step 10 until step 25.
- `[<lora:A:1> #increment:10]` : Start using the LoRA named A from step 10, with its weight gradually increasing from 0.
- `[<lora:A:1> #decrease:10]` : Start using the LoRA named A from step 10, with its weight gradually decreasing to 0.
- `[<lora:A:1> #cmd\(warmup\(0.5\)\):10]` : Start using the LoRA named A from step 10, with its weight controlled by the `warmup(0.5)` command.
- `[<lora:A:1> #cmd\(sin\(life\)\):10]` : Start using the LoRA named A from step 10, with its weight controlled by the expression `sin(life)`.
- The multi-line form

  ```
  [<lora:A:1> #cmd\(
  def my_func\(\)\:
      return sin\(life\)
  my_func\(\)
  \):10]
  ```

  is the same as `[<lora:A:1> #cmd\(sin\(life\)\):10]`, but uses function syntax.
- Note:
  - Try `[<lora:A:1> \u0023cmd\(sin\(life\)\):10]` if `[<lora:A:1> #cmd\(sin\(life\)\):10]` doesn't work.
  - Try `[<lora:A:1> \u0023increment:10]` if `[<lora:A:1> #increment:10]` doesn't work.
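As a rough mental model only (not the extension's actual implementation), a `cmd(...)` expression can be thought of as a Python expression evaluated once per step with the parameters listed above in scope. The sketch below assumes the documented behavior of `warmup`, `cooldown`, and the rescaled `sin`, and simply prints an illustrative weight schedule:

```python
import math

def simulate(expr, steps, base_weight=1.0):
    """Evaluate a cmd(...)-style expression once per step (illustrative only)."""
    schedule = []
    for step in range(steps):
        life = step / max(steps - 1, 1)  # 0 at the first step, 1 at the last step
        namespace = {
            "weight": base_weight, "life": life, "step": step, "steps": steps,
            # warmup(x): rises from 0 to 1 until the fraction x of the steps is reached
            "warmup": lambda x, life=life: min(life / x, 1.0) if x > 0 else 1.0,
            # cooldown(x): stays at 1 until the fraction x, then falls toward 0
            "cooldown": lambda x, life=life: min((1.0 - life) / (1.0 - x), 1.0) if x < 1 else 1.0,
            # sin/cos rescaled into 0..1, with the whole step range as one period
            "sin": lambda t: (math.sin(2 * math.pi * t) + 1) / 2,
            "cos": lambda t: (math.cos(2 * math.pi * t) + 1) / 2,
        }
        schedule.append(eval(expr, {"__builtins__": {}}, namespace))
    return schedule

print(simulate("warmup(0.5)", steps=10))   # ramps from 0 to 1 over the first half of the steps
print(simulate("sin(life)", steps=10))     # one full, rescaled sine cycle across all steps
```

How the returned value is actually applied to the LoRA weight is up to the extension; this sketch only illustrates the shape of the resulting schedule.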
With the built-in LoRA support, the negative prompt is always affected by LoRA, which often has a negative impact on the output. This extension therefore offers options to eliminate those negative effects.
When checked, Composable LoRA is enabled.
Check this option to enable the feature of turning on or off LoRAs at specific steps.
Enable LoRA for the unconditional (negative prompt) text model encoder. With this disabled, you can expect better output.
Enable LoRA for the unconditional (negative prompt) diffusion model (denoiser). With this disabled, you can expect better output.
If "Composable LoRA with step" is enabled, you can select this option to generate a chart that shows the relationship between LoRA weight and the number of steps after the drawing is completed. This allows you to observe the variation of LoRA weight at each step.
- If the image you generate comes out like this, try the following steps to solve it:
  - Disable Composable LoRA first
  - Temporarily remove all LoRA from your prompt
  - Generate an image at random
  - If the generated image is normal, enable Composable LoRA again
  - Add the LoRA you just removed back to the prompt
  - It should now be able to generate pictures normally
`--always-batch-cond-uncond` must be enabled when using `--medvram` or `--lowvram`.
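For example, assuming you start the webui through webui-user.bat (adjust accordingly for webui-user.sh or another launcher), the flags can be combined like this:

```
set COMMANDLINE_ARGS=--medvram --always-batch-cond-uncond
```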
- Added support for LoCon and LyCORIS
- Fixed error: IndexError: list index out of range
- Submitted pull request for the 2023-04-08 version
- Fixed loading extension failure issue when using pytorch 2.0
- Fixed error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
- Implemented the function of enabling or disabling LoRA at specific steps
- Improved the algorithm for enabling or disabling LoRA in different AND blocks and steps, by referring to the code of LoCon and LyCORIS extensions
- Implemented the method to control different weights of LoRA at different steps (`[A #xxx]`)
- Plotted a chart of LoRA weight changes at different steps
- Fixed error: AttributeError: 'Options' object has no attribute 'lora_apply_to_outputs'
- Fixed error: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
- Fixed the problem that sometimes LoRA cannot be removed after being added
- Added support for the `<lyco:MODEL>` syntax.