Feat textual inversion plugin v2 #248

Merged

Conversation

@damian0815 (Contributor) commented on Jan 19, 2024

With this plugin you can target specific token embeddings in the text encoder, and/or add new ones. The embeddings are written into the model's text encoder and saved out with the checkpoint - not the most efficient way to share them, but useful if you want to laser-focus training on specific or custom tokens in your model.

Documentation (such as it is) is in plugins/textual_inversion.py. TL;DR: first, edit plugins/textual_inversion.json to set up your tokens and their initialization. Enable text encoder training and disable unet training. Then edit optimizer.json to freeze all TE layers and the TE final layer norm. The plugin checks for a sane config when it runs.
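To make the token setup concrete, here is a hypothetical sketch of plugins/textual_inversion.json. The key names (`tokens`, `token`, `initializer`) are illustrative assumptions only, not confirmed against the plugin; the file shipped alongside plugins/textual_inversion.py defines the real schema. Comments are JSONC-style for readability and would need stripping from actual JSON:

```jsonc
// plugins/textual_inversion.json -- HYPOTHETICAL schema; check
// plugins/textual_inversion.py for the fields the plugin actually reads.
{
  "tokens": [
    // re-train an existing token's embedding in place
    { "token": "hat", "initializer": "hat" },
    // add a new custom token, initialized from an existing word's embedding
    { "token": "<my-style>", "initializer": "painting" }
  ]
}
```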

You'll want to use a very high LR for the TE; my last test worked best at 2e-2.
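The freezing and the high TE learning rate both live in optimizer.json. Assuming a build with `text_encoder_overrides` and `text_encoder_freezing` sections (the section and key names here are assumptions; verify against the optimizer.json in your checkout), the intent is roughly:

```jsonc
// optimizer.json (relevant parts only) -- key names assumed, not verified;
// compare with the optimizer.json in your EveryDream2trainer checkout.
{
  "text_encoder_overrides": {
    "lr": 2e-2                          // very high LR for the TE, per the note above
  },
  "text_encoder_freezing": {
    "freeze_embeddings": false,         // the embedding table must stay trainable
    "unfreeze_last_n_layers": 0,        // freeze all TE transformer layers
    "freeze_final_layer_norm": true     // freeze the TE final layer norm
  }
}
```

As I read the comment above, freezing everything except the embedding table is what confines the high LR to the targeted token vectors.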

reijerh added a commit to reijerh/EveryDream2trainer that referenced this pull request Jan 20, 2024
@victorchall victorchall merged commit f4a8bce into victorchall:main Feb 17, 2024
1 of 2 checks passed