```
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/fast_data_2d_2/sahil_work/BakLLaVA/scripts/merge_lora_weights.py", line 22, in <module>
    merge_lora(args)
  File "/fast_data_2d_2/sahil_work/BakLLaVA/scripts/merge_lora_weights.py", line 8, in merge_lora
    tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, device_map='cpu')
  File "/fast_data_2d_2/sahil_work/BakLLaVA/llava/model/builder.py", line 52, in load_pretrained_model
    model = LlavaMistralForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, config=lora_cfg_pretrained, **kwargs)
  File "/home/users/sahil/miniconda3/envs/qure_llava/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3502, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/users/sahil/miniconda3/envs/qure_llava/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3926, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/home/users/sahil/miniconda3/envs/qure_llava/lib/python3.10/site-packages/transformers/modeling_utils.py", line 807, in _load_state_dict_into_meta_model
    hf_quantizer.create_quantized_param(model, param, param_name, param_device, state_dict, unexpected_keys)
  File "/home/users/sahil/miniconda3/envs/qure_llava/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 180, in create_quantized_param
    raise ValueError(
ValueError: Detected int4 weights but the version of bitsandbytes is not compatible with int4 serialization.
```
I'm getting this error while merging QLoRA weights.
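For what it's worth, the check that raises this error lives in transformers' `quantizer_bnb_4bit.py`, which gates int4 (de)serialization on the installed bitsandbytes version; 4-bit serialization support was added in bitsandbytes 0.41.3. A minimal pre-flight check, assuming that minimum version, might look like:

```python
# Sketch of a pre-flight check: transformers gates int4 (de)serialization
# on bitsandbytes >= 0.41.3, so verify the installed version before merging.
import importlib.metadata

from packaging import version

installed = version.parse(importlib.metadata.version("bitsandbytes"))
required = version.parse("0.41.3")  # first release with 4-bit serialization support

if installed < required:
    raise SystemExit(
        f"bitsandbytes {installed} predates int4 serialization; "
        f"upgrade with: pip install -U 'bitsandbytes>={required}'"
    )
print(f"bitsandbytes {installed} supports int4 serialization.")
```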
```
ValueError: Supplied state dict for model.layers.0.mlp.down_proj.weight does not contain `bitsandbytes__*` and possibly other `quantized_stats` components.
```
I'm also getting an error (above) while merging QLoRA weights.
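One workaround that sidesteps both errors is to avoid the bitsandbytes loading path entirely: load the un-quantized base model in fp16 on CPU, attach the adapter with peft, and merge. A rough sketch, where the paths are placeholders and `LlavaMistralForCausalLM` is assumed to be exported from this repo's `llava.model` package as in upstream LLaVA:

```python
# Hedged sketch: merge QLoRA adapters into the fp16 base model on CPU,
# avoiding bitsandbytes int4 (de)serialization altogether.
# All "path/to/..." strings are placeholders; adjust to your checkpoints.
import torch
from peft import PeftModel
from transformers import AutoTokenizer

from llava.model import LlavaMistralForCausalLM  # assumed export, as in upstream LLaVA

base = LlavaMistralForCausalLM.from_pretrained(
    "path/to/base-model",          # the un-quantized base (model_base)
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
model = PeftModel.from_pretrained(base, "path/to/qlora-checkpoint")  # adapter dir
model = model.merge_and_unload()   # folds the LoRA deltas into the base weights

tokenizer = AutoTokenizer.from_pretrained("path/to/base-model")
model.save_pretrained("path/to/merged-model")
tokenizer.save_pretrained("path/to/merged-model")
```

Since the merge happens in fp16 rather than on dequantized int4 weights, the result can differ slightly from a merge against the quantized base, but it avoids any dependence on the bitsandbytes serialization format.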