Where can I get started to convert internvl model to mlx format? #865
```
mlx_lm.convert --hf-path Mini-InternVL-Chat-2B-V1-5 --mlx-path Mini-InternVL-Chat-2B-V1-5-MLX

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
```
That's a multi-modal model, so it won't run in MLX LM. You need to manually write a model file that is capable of running it. For some examples of VLMs, check out MLX VLM by @Blaizzy. To learn more about how to convert PyTorch models to MLX, you can also look at some of the example models in this repo. I don't know of any good guides to help with that yet. Maybe that's something we should work on 🤔
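As a rough sketch of what that porting involves (the `pytorch_model.bin` filename and the float32 cast are assumptions for illustration, not details from this thread), the weight copy itself can look like this:

```python
# Sketch: copy a PyTorch checkpoint's weights into MLX arrays.
# "pytorch_model.bin" is a placeholder filename.
import torch
import mlx.core as mx

state = torch.load("pytorch_model.bin", map_location="cpu")

# bfloat16 tensors can't be handed to NumPy directly, so cast first;
# a careful port would keep the original dtypes where MLX supports them.
weights = {k: mx.array(v.to(torch.float32).numpy()) for k, v in state.items()}

mx.save_safetensors("weights.safetensors", weights)
```

Copying the tensors is the easy step; the bulk of the work is reimplementing the architecture with `mlx.nn` and renaming the weight keys so they line up with your MLX module structure.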
Thanks for the shout-out @awni! Hey @hiima123 and @100ZZ, we have InternVL on our roadmap. Read more here: You can also send us a PR, and I will happily review it. Here is an example:
Working on it...
Hello~ I tried to convert the BELLE-2/Belle-distilwhisper-large-v2-zh model to MLX format with mlx_lm.convert, but I hit an error.
How do I solve this error? Do I need to add some argument to the CLI?
Update: this model has no safetensors file; does that mean it can't be converted to MLX format?
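For reference, repackaging a `.bin` checkpoint as safetensors is a small script (a sketch with placeholder filenames), though as the next reply points out, the file format is not the real blocker here:

```python
# Sketch: repackage a PyTorch .bin checkpoint as safetensors.
# Filenames are placeholders. This only changes the container format;
# it does not make an unsupported architecture loadable.
import torch
from safetensors.torch import save_file

state = torch.load("pytorch_model.bin", map_location="cpu")
# safetensors refuses non-contiguous and shared tensors.
save_file({k: v.contiguous() for k, v in state.items()}, "model.safetensors")
```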
The MLX example doesn't support loading or converting that format of model. Even fixing the device issue would require some changes to our convert script and/or a model class with the appropriate names to match the Transformers model.
OK~ I understand, the MLX example doesn't support converting this model.
It has nothing to do with safetensors. The model type is simply not yet supported.
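One way to see which architecture the converter would dispatch on is to read the `model_type` field from the checkpoint's `config.json`. A small sketch, with the local path as a placeholder:

```python
# Sketch: check which architecture a checkpoint declares.
# The directory path is a placeholder for a local download.
import json
from pathlib import Path

config = json.loads(Path("Belle-distilwhisper-large-v2-zh/config.json").read_text())
print(config.get("model_type"))  # likely "whisper", which mlx_lm does not handle
```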
https://github.com/OpenGVLab/InternVL