main: failed to load model from 'ggml-alpaca-7b-q4.bin' #228
Comments
I was having similar problems. There might be an issue with the model weights you installed; it got resolved for me when I downloaded them again. You can download up-to-date quantized weights here: https://huggingface.co/Pi3141/alpaca-native-7B-ggml/tree/main
I've got the same problem; it doesn't matter which version I download.
I think you will be better off working with the llama.cpp project. This repo is not getting updates anymore; it has been upstreamed to llama.cpp. You just need to get the weights of the Alpaca model, place them in the right directory, and follow the README from there. It worked for me on an M1 MacBook Pro.
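A minimal sketch of that workflow, assuming the llama.cpp layout and flags as they were around the time of this thread (the binary was built as ./main) and that the quantized weights file keeps the name ggml-alpaca-7b-q4.bin; adjust the download path to wherever you saved the file:

# clone and build llama.cpp (assumes git, make, and a C/C++ toolchain are installed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# place the downloaded quantized weights where you will point the binary at them
mkdir -p models
cp ~/Downloads/ggml-alpaca-7b-q4.bin models/

# run it: -m selects the model file, -p sets the prompt, -n limits generated tokens
./main -m models/ggml-alpaca-7b-q4.bin -p "Hello" -n 128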
Also, I found a helpful comment that worked for me. daffi7, try it if you haven't yet:
Hi,
so I downloaded the bin file on my MacBook Pro M1 and put it into the same folder as the Mac release, but got this error.
Would you know what to do with this? I am very new to handling models outside of their UIs.
Michael
% /Users/michalsmilauer/(Local)\ Documents\ 2/chat ; exit;
main: seed = 1682502661
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: failed to open 'ggml-alpaca-7b-q4.bin'
main: failed to load model from 'ggml-alpaca-7b-q4.bin'
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
[Process completed]
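For what it's worth, the log shows chat being launched by its full path while it looks for 'ggml-alpaca-7b-q4.bin' as a bare relative filename, so it will only find the model if the shell's current directory is the folder holding both the binary and the weights. A minimal sketch of two possible workarounds, assuming the model sits next to the binary in '(Local) Documents 2' and that chat accepts a -m model-path flag like llama.cpp's main does:

# change into the folder that contains both chat and the .bin file, then run it
cd "/Users/michalsmilauer/(Local) Documents 2"
./chat

# or point chat at the model explicitly (assumes the -m flag is supported)
./chat -m "/Users/michalsmilauer/(Local) Documents 2/ggml-alpaca-7b-q4.bin"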