ERROR: Your GPU does not support Int8 Matmul! (With example code) #100
Comments
I am facing the same problem on an NVIDIA V100 GPU.
Interestingly, I am getting the same error on the ROCm version of this library.
Ah, I see now that int8 requires a more recent card than the rest of the features do.
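For reference, a quick way to check where a card falls. As far as I can tell, the old cutoff was Turing (compute capability 7.5), which would explain the V100 (7.0) and Tesla M40 (5.2) reports in this thread; the 7.5 threshold is my reading of the requirement, not a constant from the library:

```python
# Sketch: report the GPU's compute capability and flag pre-Turing cards,
# which early bitsandbytes releases rejected for int8 matmul.
# The 7.5 cutoff is an assumption based on the discussion above.
import torch

assert torch.cuda.is_available(), "no CUDA device visible"
major, minor = torch.cuda.get_device_capability(0)
print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
if (major, minor) < (7, 5):
    # e.g. V100 = 7.0, Tesla M40 = 5.2
    print("Pre-Turing GPU: old bitsandbytes builds raised "
          "'Your GPU does not support Int8 Matmul!' here")
```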
@ebolam int8 is now supported on all GPUs with the latest release!! Looking forward to Kobold with int8 support.
My GPU is a V100, and I've been using bitsandbytes 0.37.1 with INT8. Although this configuration is supposed to work, I'm encountering an issue where the loss remains at 0.
Solved by updating to the newest version, 0.39.1.
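To confirm which build you actually have installed, a standard-library check that works regardless of bitsandbytes version:

```python
# Verify the installed bitsandbytes version; the loss-stuck-at-0
# problem above was reportedly fixed by upgrading to 0.39.1.
from importlib.metadata import version

print(version("bitsandbytes"))  # expect "0.39.1" or later
```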
Thanks @miguelamendez, I fixed it!
I'm working on implementing 8-bit inference in KoboldAI and ran into the above error on my Tesla M40.
Steps to reproduce
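(The original snippet didn't survive in this extract, so below is a minimal sketch of the kind of call that hits the error, assuming the standard transformers `load_in_8bit` path with accelerate installed; the model name is just a placeholder, not the one from the original report.)

```python
# Hypothetical reproduction: a generate() call routed through
# bitsandbytes int8 layers, which failed on a pre-Turing card
# like the M40 with old bitsandbytes builds.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"  # placeholder; any causal LM will do

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,  # routes linear layers through bitsandbytes int8 matmul
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(0)
tokens = model.generate(**inputs, max_new_tokens=20)  # error surfaced here on the M40
print(tokenizer.decode(tokens[0]))
```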
Expected Output
A list of tokens.
Actual Output
ERROR: Your GPU does not support Int8 Matmul!
Version information, etc.
Using CUDA 11.1.