
Colab OOM #16

Closed
amrrs opened this issue Apr 19, 2023 · 3 comments
amrrs commented Apr 19, 2023

Hey, thanks for the code. Ironically, even the 3B model is crashing on Colab. This is after enabling 8-bit with fp16 precision.

Did it work for anyone?


gustrd commented Apr 19, 2023

Had the same error; it seems the CPU RAM is not enough to load the model before sending it to the GPU.
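
A minimal sketch of one common workaround for this failure mode, assuming the repo loads its model with the Hugging Face transformers + accelerate + bitsandbytes stack (an assumption — the thread doesn't show the loading code, and the model name below is a placeholder):

```python
# Hedged sketch: reduce CPU RAM pressure when loading a large model
# in 8-bit on Colab. Requires transformers, accelerate, bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/some-3b-model"  # placeholder, not the repo's actual model

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",        # let accelerate place weights directly on the GPU
    load_in_8bit=True,        # bitsandbytes 8-bit quantization
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,   # avoid materializing a full-precision copy in CPU RAM
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

With `low_cpu_mem_usage=True` and `device_map="auto"`, weights are loaded shard by shard instead of building the whole state dict in system RAM first, which is typically what exhausts Colab's ~12 GB of CPU memory.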


amrrs commented Apr 19, 2023

Maybe this is the reason: #6

@mcmonkey4eva

Yep, that's why, and there are solutions in that thread!
