In theory, it should work if you specify a larger LLaMA base model via the `--base_model` flag, e.g. `--base_model=decapoda-research/llama-13b-hf`, and then select a LoRA model that was trained on top of that base model (such as `chansung/alpaca-lora-13b`). However, I haven't tested it yet. If you get the chance to try it first, it would be much appreciated if you could share how it goes! 🚀
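For reference, here's a minimal sketch of what loading a larger base model plus a matching LoRA adapter amounts to, using the Hugging Face `transformers` and `peft` libraries. This is illustrative only, not this repo's actual loading code; the model IDs are just the ones mentioned above, and the prompt format is the standard Alpaca template:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "decapoda-research/llama-13b-hf"  # what --base_model would point at
lora_model = "chansung/alpaca-lora-13b"        # a LoRA trained on that same base

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,  # half precision, to fit the larger model in memory
    device_map="auto",          # let accelerate spread layers across available GPUs
)
# Attach the LoRA weights; the adapter's dimensions must match the base model's,
# which is why a 13B adapter only works on a 13B base.
model = PeftModel.from_pretrained(model, lora_model)

# Quick smoke test with an Alpaca-style prompt.
prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The key constraint is in the comment above: the LoRA adapter is tied to the base model it was trained on, so swapping in a 13B/30B/65B base also means swapping in a LoRA trained for that size.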
BTW, I'm planning to add the ability to switch between base models without restarting the app, as well as support for non-LLaMA models, in the near future.
Update 2023/4/20: The ability to switch between base models has been added.
It would be great if there were a way to use this with the 13B, 30B, or 65B LLaMA model sizes.