Hey, thanks for the code. Ironically, even the 3B model is crashing on Colab. This is after enabling 8-bit loading with fp16 precision.
Did it work for anyone?
Had the same error. It seems the CPU RAM is not enough to hold the model while it's being loaded, before it gets sent to the GPU.
Maybe this is the reason - #6
Yep, that's why, and there are solutions on that thread!
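For anyone landing here later, a minimal sketch of that kind of fix, assuming the `transformers` + `accelerate` + `bitsandbytes` stack (the checkpoint name and helper are placeholders, not this repo's actual code): the idea is to stream quantized weights straight onto the GPU instead of materialising the full fp16 model in CPU RAM first.

```python
# Hedged sketch, not the repo's actual loading code: sidestep the CPU-RAM
# spike by dispatching weights to the GPU as they are loaded, rather than
# building the whole fp16 model on the CPU and then moving it.

def low_mem_load_kwargs():
    """from_pretrained kwargs that avoid the CPU-RAM spike on load."""
    return {
        "device_map": "auto",       # dispatch shards to the GPU as they load
        "load_in_8bit": True,       # 8-bit quantisation via bitsandbytes
        "low_cpu_mem_usage": True,  # instantiate on the meta device first
    }

def load_3b_in_8bit(checkpoint="some-org/some-3b-model"):  # placeholder name
    # Requires: pip install transformers accelerate bitsandbytes
    from transformers import AutoModelForCausalLM
    return AutoModelForCausalLM.from_pretrained(checkpoint, **low_mem_load_kwargs())
```

With `device_map="auto"`, peak host memory stays roughly at one shard's worth instead of the full model size, which is usually what keeps the free Colab tier from OOM-killing the process.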