convert : fix autoawq gemma (ggerganov#6704)
* fix autoawq quantized gemma model convert error

  Using autoawq to quantize a Gemma model produces an lm_head.weight tensor in model-00001-of-00002.safetensors. As a result, convert-hf-to-gguf.py cannot map lm_head.weight. Skipping this tensor during loading prevents the error.

* change code to full string match and print necessary message

  Match the tensor name as a full string and print a short message to inform users that lm_head.weight has been skipped.

---------

Co-authored-by: Zheng.Deng <[email protected]>
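The fix described above can be sketched as follows. This is a minimal illustration, not the actual convert-hf-to-gguf.py code: the `filter_tensors` helper and the stand-in checkpoint dict are hypothetical, but the core idea matches the commit, comparing the tensor name with a full string match (`==`, not a substring check) and printing a short notice when lm_head.weight is skipped.

```python
def filter_tensors(tensors: dict) -> dict:
    """Drop lm_head.weight, which Gemma ties to the token embedding.

    Hypothetical helper for illustration; the real converter skips the
    tensor inline while iterating over the checkpoint.
    """
    kept = {}
    for name, tensor in tensors.items():
        # Full string match, as the commit specifies, so that names like
        # "model.lm_head.weight_scale" would not be skipped by accident.
        if name == "lm_head.weight":
            print("Skipping tensor 'lm_head.weight' (tied to token embedding)")
            continue
        kept[name] = tensor
    return kept


# Stand-in for tensors loaded from an AutoAWQ-quantized Gemma checkpoint.
checkpoint = {
    "model.embed_tokens.weight": "embed-data",
    "lm_head.weight": "lm-head-data",
    "model.layers.0.self_attn.q_proj.weight": "qproj-data",
}

filtered = filter_tensors(checkpoint)
print(sorted(filtered))
```

Dropping the tensor is safe here because Gemma reuses the token-embedding weights for the output projection, so lm_head.weight carries no information the converter needs.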