Error when running inference with the fine-tuned model #915
-
(ChatGLM3-6b-finetunning) root@dsw-314593-5c8dcd944c-5mfs4:/mnt/workspace/ChatGLM3/finetune_demo# python inference_hf.py output/checkpoint-3000/ --prompt "你好"
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:03<00:00, 1.87it/s]
Setting eos_token is not supported, use the default one.
  File "/mnt/workspace/ChatGLM3/finetune_demo/inference_hf.py", line 56, in <module>
  File "/mnt/workspace/ChatGLM3/finetune_demo/inference_hf.py", line 50, in main
  File "/mnt/workspace/ChatGLM3/finetune_demo/inference_hf.py", line 30, in load_model_and_tokenizer
  File "/opt/conda/envs/ChatGLM3-6b-finetunning/lib/python3.10/site-packages/peft/auto.py", line 126, in from_pretrained
  File "/opt/conda/envs/ChatGLM3-6b-finetunning/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1811, in resize_token_embeddings
  File "/opt/conda/envs/ChatGLM3-6b-finetunning/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1832, in _resize_token_embeddings
  File "/opt/conda/envs/ChatGLM3-6b-finetunning/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1610, in set_input_embeddings
  File "/opt/conda/envs/ChatGLM3-6b-finetunning/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1612, in set_input_embeddings
NotImplementedError
(ChatGLM3-6b-finetunning) root@dsw-314593-5c8dcd944c-5mfs4:/mnt/workspace/ChatGLM3/finetune_demo#
Replies: 1 comment
-
Update to the latest model code.
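The traceback suggests the locally cached remote-code files (e.g. modeling_chatglm.py) are an older version that does not implement set_input_embeddings, which AutoPeftModelForCausalLM.from_pretrained needs when it calls resize_token_embeddings. A minimal sketch of refreshing the base model code with huggingface_hub is below; the repo id THUDM/chatglm3-6b and the local directory are assumptions, so adjust them to the base model path recorded in your adapter's adapter_config.json or finetune config.

```python
# Sketch: re-download only the model code/config files so the current
# modeling_chatglm.py is used; existing weight shards are left untouched.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="THUDM/chatglm3-6b",              # assumed base model repo
    local_dir="/mnt/workspace/chatglm3-6b",   # hypothetical local model path
    allow_patterns=["*.py", "*.json"],        # refresh code and config only
)
```

After refreshing, re-run `python inference_hf.py output/checkpoint-3000/ --prompt "你好"`. If the base model lives in a git-cloned directory instead, a plain `git pull` in that directory achieves the same update.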