training very slow on GPU #35
Hi, I am trying to reproduce your results in Alanine_dipeptide_multiple_files on a single NVIDIA GeForce GTX 1080 Ti GPU, and it took ~5 h to finish all 10 attempts. I was using tensorflow-gpu v1.9.0, cuda/9.0, and cudnn/7.0. For comparison, I also ran the Jupyter notebook on my laptop CPU, and it was faster than the GPU (~3 h, but still very slow!). In the Nature Comm. paper you mention that, depending on the system, each run takes between 20 s and 180 s. Since I didn't change the code, I am wondering why there is such a big discrepancy in speed compared to the paper. Do you have any insight into why my training is so slow? Thanks!
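A plausible first thing to rule out with numbers like these is TensorFlow silently falling back to the CPU, for example because the installed CUDA/cuDNN versions do not match the tensorflow-gpu build. The following is a minimal TF 1.x sanity check (not from the original thread) that reports whether a GPU is visible and logs where ops are actually placed:

```python
import tensorflow as tf

# An empty string here means TensorFlow cannot see a GPU and will
# silently run everything on the CPU.
print("GPU device:", tf.test.gpu_device_name())

# Log device placement for a small matmul; the log should show the
# MatMul op on /device:GPU:0 if the GPU is actually being used.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.random_normal((1000, 1000))
    b = tf.random_normal((1000, 1000))
    print(sess.run(tf.reduce_sum(tf.matmul(a, b))))
```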
Hi,
Thanks for the clarification! That makes sense. Looking forward to trying out the PyTorch version.
Hi Jiaye, colleague developing the new library here. Coincidentally, it is also called deeptime. If you are feeling adventurous and want to play around with it, you can find it here: https://github.com/deeptime-ml/deeptime (and documentation for VAMPnets in the new deeptime). Cheers,
Moritz
Hi Moritz! Thanks for pointing me to this new repo. I will take a look and play around with it. |
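For reference, a minimal sketch of training a VAMPNet with the new deeptime library linked above. This assumes the current deeptime API (VAMPNet in deeptime.decomposition.deep, TrajectoryDataset in deeptime.util.data; check the linked docs in case it has changed), and the random features, network sizes, lag time, and training settings are illustrative placeholders, not values from the paper:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader

from deeptime.util.data import TrajectoryDataset
from deeptime.decomposition.deep import VAMPNet

# Toy trajectory standing in for the alanine dipeptide features
# (10000 frames, 30-dimensional feature vectors).
traj = np.random.randn(10000, 30).astype(np.float32)

# Yields (x_t, x_{t+lag}) pairs for the chosen lag time.
dataset = TrajectoryDataset(lagtime=10, trajectory=traj)
loader = DataLoader(dataset, batch_size=512, shuffle=True)

# A small fully connected lobe mapping features to fuzzy state
# assignments via a softmax output, as is typical for VAMPnets.
lobe = torch.nn.Sequential(
    torch.nn.Linear(30, 64), torch.nn.ELU(),
    torch.nn.Linear(64, 64), torch.nn.ELU(),
    torch.nn.Linear(64, 6), torch.nn.Softmax(dim=1),
)

# Trains on the GPU when one is available, otherwise on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vampnet = VAMPNet(lobe=lobe, learning_rate=1e-3, device=device)
model = vampnet.fit(loader, n_epochs=30).fetch_model()

# Project the trajectory onto the learned slow collective variables.
projection = model.transform(traj)
```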