
qwenvl_oai is not compatible with gpt-4o-style multimodal functionality #390

Open
jameslian87v5 opened this issue Nov 8, 2024 · 2 comments

Comments

@jameslian87v5

In principle, the `oai` model type should support multimodal input, but it does not. Switching to `qwen_oai` still does not enable multimodal functionality either; I was stuck on this for a long time before discovering that it is simply not supported.

@tuhahaha
Collaborator

Hello. If you want to use the OAI-compatible interface with a VL model, please set 'model_type': 'qwenvl_oai' in the llm parameters; qwenvl_oai can call VL models. Could you clarify what you mean specifically by "gpt-4o multimodal functionality"?
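The suggested setting can be sketched as an `llm` configuration dict. This is a minimal sketch, not a confirmed Qwen-Agent snippet: only the 'model_type': 'qwenvl_oai' key comes from the maintainer's comment above; the model name, server URL, and API key shown here are illustrative placeholders.

```python
# Hypothetical llm configuration for the OAI-compatible VL interface.
# Only 'model_type': 'qwenvl_oai' is taken from this thread; the other
# values are placeholders and must be replaced with real ones.
llm_cfg = {
    'model': 'qwen-vl-max',                    # placeholder VL model name
    'model_type': 'qwenvl_oai',                # setting recommended in this thread
    'model_server': 'https://example.com/v1',  # placeholder OAI-compatible endpoint
    'api_key': 'YOUR_API_KEY',                 # placeholder credential
}
```

This dict would then be passed wherever the agent framework accepts its `llm` parameter.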

@Halflifefa

@tuhahaha
Overseas models such as GPT, Claude, and Gemini all use a multimodal model as the base model, for example the gpt-4o series, the Claude series, and the Gemini 1.5 series.
Domestic models, however, all seem to split the LLM and the VLM into separate models. Why is that?
