Resource table #663
Conversation
Added some additional info about the hardware and relevant libraries @carmocca
@carmocca It's a weird one; I haven't seen it with other models, and I can reproduce it:
Thanks. Since it's an error in bitsandbytes, I wouldn't worry too much about it.
Agreed. It was just odd that it occurred only with one of the models.
I made a resource table that I think will be helpful for people who have questions about the resource requirements. It could also be a useful reference for spot-checking select new PRs to see whether they improve or degrade performance and memory requirements.
One question, though, concerns the runtime of the multi-GPU cases. If I run with `max_iters = 1000` on 1 GPU, it will do 1000 iterations. If I run the same code on 2 GPUs, is it actually iterating through 2000 examples (even though it prints 1000 iterations)?

EDIT: I want to add 8-GPU tests at some point too, but that's currently not possible because 4 of the GPUs are occupied with other things.
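For what it's worth, under standard data-parallel semantics (an assumption here, since the answer depends on how the script shards its dataloader), each GPU draws its own micro-batch every iteration, so the printed iteration count stays the same while the number of examples consumed scales with the device count. A minimal sketch of that arithmetic (variable names are illustrative, not the repo's actual config names):

```python
def examples_consumed(max_iters: int, micro_batch_size: int, num_gpus: int) -> int:
    """Total examples seen across all ranks, assuming data-parallel training
    where every GPU processes its own micro-batch each iteration."""
    return max_iters * micro_batch_size * num_gpus

# 1000 iterations with a micro-batch size of 1:
print(examples_consumed(1000, 1, 1))  # 1 GPU  -> 1000 examples
print(examples_consumed(1000, 1, 2))  # 2 GPUs -> 2000 examples
```

If that assumption holds, the 2-GPU run really does iterate through 2000 examples; the logged "1000 iterations" counts optimizer steps per rank, not examples.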