System Info / 系統信息
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('THUDM/chatglm3-6b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('THUDM/chatglm3-6b', trust_remote_code=True, device_map="auto").eval()
# Add the code above to avoid the "name 'model' is not defined" error.
Who can help? / 谁可以帮助到您?
No response
Information / 问题信息
The official example scripts / 官方的示例脚本
My own modified scripts / 我自己修改的脚本和任务
Reproduction / 复现过程
output = next(predict_stream_generator)
File "G:\ai\aaaHHT\officialOpenSource\ChatGLM3\api_server.py", line 436, in predict_stream
for new_response in generate_stream_chatglm3(model, tokenizer, gen_params):
NameError: name 'model' is not defined
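For reference, a minimal sketch of how this NameError arises (the function below is a stand-in, not the actual api_server.py code): the call site references the global names `model` and `tokenizer`, which are never assigned before the call.

```python
def generate_stream(model, tokenizer):
    """Stand-in for generate_stream_chatglm3: just echoes its arguments."""
    return f"{model}:{tokenizer}"

# 'model' and 'tokenizer' are never assigned at module level,
# so evaluating them at the call site raises NameError.
try:
    generate_stream(model, tokenizer)
    err = None
except NameError as exc:
    err = str(exc)

print(err)  # name 'model' is not defined
```

Loading the model at module level (as in the snippet above) defines the name before any request handler runs, which is why it fixes the error.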
Expected behavior / 期待表现
The author could update api_server.py so that the model is loaded before it is used.
Also add this validation code to api_server.py (registered as HTTP middleware so it runs on every request):

@app.middleware("http")
async def check_authentication(request: Request, call_next):
    token = request.headers.get("Authorization")
    if not token or not token.startswith("Bearer "):
        raise HTTPException(status_code=403, detail="Unauthorized")
    bearer_token = token[7:]  # Strip the "Bearer " prefix
    if bearer_token != "${yourKey}":  # Replace with your logic to verify the token
        raise HTTPException(status_code=403, detail="Invalid token")
    response = await call_next(request)
    return response
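The header check above can be factored into a plain helper, which makes the logic easy to unit-test without spinning up the server; `is_authorized` and the example key are illustrative names, not part of the repo:

```python
def is_authorized(auth_header, expected_token):
    """Return True only for a well-formed 'Bearer <token>' header
    carrying the expected token."""
    if not auth_header or not auth_header.startswith("Bearer "):
        return False
    # Everything after the "Bearer " prefix is the presented token.
    return auth_header[len("Bearer "):] == expected_token

print(is_authorized("Bearer my-secret-key", "my-secret-key"))  # True
print(is_authorized("Basic abc123", "my-secret-key"))          # False
```

The middleware would then just call this helper and raise on a False result.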