In principle, the `oai` model type should support multimodal input, but it doesn't. I then tried `qwen_oai`, expecting multimodal features to work there as well, and was stuck on this issue for a long time before realizing it isn't supported either.
Hello, if you want to use a VL model through the OpenAI-compatible interface, please set `'model_type': 'qwenvl_oai'` in the `llm` parameters; `qwenvl_oai` can call VL models. Could you clarify which specific capability you mean by GPT-4o's multimodal features?
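A minimal sketch of the `llm` configuration the reply describes. Only the `'model_type': 'qwenvl_oai'` setting comes from the reply itself; the model name, server URL, and API key below are illustrative placeholders, not values confirmed by this thread:

```python
# Hypothetical llm configuration for calling a vision-language (VL) model.
# The key point from the reply is 'model_type': 'qwenvl_oai'; the other
# values are assumed placeholders and must be replaced with your own.
llm_cfg = {
    'model_type': 'qwenvl_oai',                # VL-capable OpenAI-style backend
    'model': 'qwen-vl-max',                    # assumed model name
    'model_server': 'https://example.com/v1',  # assumed endpoint URL
    'api_key': 'YOUR_API_KEY',                 # placeholder credential
}

print(llm_cfg['model_type'])  # → qwenvl_oai
```

This dict would then be passed wherever the framework expects its `llm` parameter.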
@tuhahaha Models from abroad, such as GPT, Claude, and Gemini, all use multimodal models as their base models, e.g. the GPT-4o series, the Claude series, and the Gemini 1.5 series. But domestic models all seem to separate the LLM from the VLM. May I ask why that is?