
gorilla-openfunctions-v2: model generation and evaluation commands fail when run locally on a GPU machine #793

Open
belief888 opened this issue Nov 26, 2024 · 2 comments
Labels
hosted-openfunctions-v2 Issues with OpenFunctions-v2

Comments

@belief888

I downloaded the gorilla-openfunctions-v2 model to my local machine and pointed export HF_HOME=XX at the model path.
In handler_map.py I removed "gorilla-openfunctions-v2": GorillaHandler from api_inference_handler_map and added
"gorilla-openfunctions-v2": GorillaHandler to local_inference_handler_map, as sketched below.
Then I ran: bfcl generate --model gorilla-openfunctions-v2 --test-category simple --backend vllm --num-gpus 1 --gpu-memory-utilization 0.9

It fails with the following error:
{"id": "simple_18", "result": "Error during inference: HTTPSConnectionPool(host='luigi.millennium.berkeley.edu', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1007)')))"}

Since I am no longer trying to reach gorilla-openfunctions-v2 through the hosted API, how do I fix this?

@belief888 belief888 added the hosted-openfunctions-v2 Issues with OpenFunctions-v2 label Nov 26, 2024
@HuanzhiMao
Collaborator

Simply moving the entry to local_inference_handler_map does nothing. If you want to run the OpenFunctions model locally, you need to rewrite the entire handler: the inference code in the current handler calls the hosted API, so you have to convert it into the same form as the oss_model handlers (and it has to inherit from base_oss_handler). A rough sketch follows.
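A rough sketch of what that rewrite might look like, assuming the base OSS handler exposes a _format_prompt hook the way the other local handlers do. The import path, class name, method signature, and prompt template below are assumptions for illustration, not the actual BFCL code; the real chat template for gorilla-openfunctions-v2 should be taken from its model card.

```python
# Hypothetical sketch: the module path, hook name, and prompt template are assumptions.
from bfcl.model_handler.local_inference.base_oss_handler import OSSHandler  # assumed path


class GorillaOpenFunctionsV2LocalHandler(OSSHandler):
    """Runs gorilla-openfunctions-v2 through the local vLLM backend instead of the hosted API."""

    def _format_prompt(self, messages, function):
        # Collapse the chat history and the available functions into the single
        # prompt string that the local vLLM server completes. The layout below is
        # only illustrative; match it to the template the model was trained on.
        prompt = "You are an AI programming assistant, utilizing the Gorilla LLM model.\n"
        prompt += f"### Instruction: <<function>>{function}\n"
        for message in messages:
            prompt += f"<<{message['role']}>> {message['content']}\n"
        prompt += "### Response:\n"
        return prompt
```

The local_inference_handler_map entry would then point at this class instead of the API-backed GorillaHandler.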

@belief888
Author

When will the hosted gorilla-openfunctions-v2 API be callable normally again?
