Launch script (vLLM v0.2.6):

```bash
python -m vllm.entrypoints.openai.api_server --model=/opt/apps/models/baichuan-13B --trust-remote-code --gpu-memory-utilization 0.9 --tensor-parallel-size 2
```
Client:

```python
from openai import OpenAI

# Client setup was not shown in the original report; the base_url and
# api_key here are assumptions for a local vLLM OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

for chunk in client.chat.completions.create(
    model="llama",
    messages=[
        {"role": "user", "content": "我是一个科学家"},  # "I am a scientist"
    ],
    stream=True,
    max_tokens=40,
):
    print(chunk.choices[0].delta.content, end="", flush=True)
```

The output is as follows:

```
None<|im_end| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |None
```

Is this because the data is being assembled in the wrong format?
This needs to be handled per the api_server.py implementation: add parameters such as top_p and repetition_penalty to the request, and also set the chat-template to baichuan.
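A minimal sketch of what that could look like, assuming the vLLM v0.2.6 OpenAI-compatible server and the openai v1 Python client. The chat template path (template_baichuan.jinja), the server address, and the specific top_p / repetition_penalty values are illustrative assumptions, not taken from the original report:

```python
# Server side (assumed): launch with a Baichuan chat template so prompts
# are formatted for the model. The template path below is hypothetical.
#
#   python -m vllm.entrypoints.openai.api_server \
#       --model=/opt/apps/models/baichuan-13B --trust-remote-code \
#       --chat-template ./template_baichuan.jinja \
#       --gpu-memory-utilization 0.9 --tensor-parallel-size 2

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # assumed address

for chunk in client.chat.completions.create(
    model="llama",
    messages=[{"role": "user", "content": "我是一个科学家"}],
    stream=True,
    max_tokens=40,
    top_p=0.85,  # standard OpenAI sampling parameter
    # repetition_penalty is a vLLM-specific extension, so it is sent
    # through extra_body rather than as a named argument.
    extra_body={"repetition_penalty": 1.1},
):
    # Role-only and finish chunks in a stream have delta.content set to
    # None; guard so a literal "None" is not printed.
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

The None guard is likely also where the stray None values in the output above come from: the first and last streamed chunks carry no content, so printing delta.content unconditionally emits "None".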