Some weights of the model checkpoint at declare-lab/flacuna-13b-v1.0 were not used when initializing LlamaForCausalLM: #2
Comments
Hi, Flacuna is a LoRA-based model. You can refer to the flacuna.py file to see how we load the weights.
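In outline, the loading looks something like the sketch below. This is only an illustration assuming a standard PEFT-style LoRA adapter on top of a Vicuna base model (the base-model name here is an assumption, not taken from the repository); flacuna.py contains the actual loading code.

```python
# Minimal sketch, assuming the checkpoint is a standard PEFT LoRA adapter.
# The base-model name below is an assumption for illustration; see flacuna.py
# in the repository for the loading code Flacuna actually uses.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "lmsys/vicuna-13b-v1.3"      # assumed Vicuna base checkpoint
adapter = "declare-lab/flacuna-13b-v1.0"  # Flacuna LoRA weights

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter on top of the base model's weights.
model = PeftModel.from_pretrained(model, adapter)
```

Because a plain LlamaForCausalLM has no LoRA modules, loading the Flacuna checkpoint directly into it leaves the LoRA tensors unused, which is exactly what the warning reports.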
Thanks for your response.
Can you add the full warning message here? The warning you are encountering should also show the parameter names for which the pre-trained weights were not loaded. Also, assuming you have the […]
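For instance, a sketch like the following prints the exact parameter names the warning refers to; `output_loading_info=True` is a standard transformers flag rather than anything Flacuna-specific:

```python
# Sketch: transformers' `output_loading_info=True` returns a dict describing
# which checkpoint weights were skipped and which parameters were newly
# initialized, so the warning can be inspected programmatically.
from transformers import LlamaForCausalLM

model, loading_info = LlamaForCausalLM.from_pretrained(
    "declare-lab/flacuna-13b-v1.0",
    output_loading_info=True,
)
print(loading_info["unexpected_keys"])  # checkpoint weights LlamaForCausalLM did not use
print(loading_info["missing_keys"])     # model parameters absent from the checkpoint
```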
This warning should appear when you try to initialize LlamaForCausalLM. Can you also post the full code snippet?
It worked after closing the instance and opening it again. I did not understand what was wrong, but now everything is okay.
Original issue:
Hi,
I am trying to run the model on SageMaker, but I am getting the following warning:
"Some weights of the model checkpoint at declare-lab/flacuna-13b-v1.0 were not used when initializing LlamaForCausalLM"
I tried two ways (the Hugging Face checkpoint and the GitHub source code), but both gave the same warning.
How can I solve this? Can you help me, please?
Thanks
First attempt, using the Llama-specific classes:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "declare-lab/flacuna-13b-v1.0"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
# Load in half precision and spread the weights across available devices.
model = LlamaForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
```

Second attempt, using the Auto classes:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("declare-lab/flacuna-13b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "declare-lab/flacuna-13b-v1.0", torch_dtype=torch.float16, device_map="auto"
)
```