CUDA Out of Memory Error #129
Comments
I was encountering a similar problem and may have found a potential hot-fix (though if the issue stems from the sequence simply being too long to fit in memory, this will not help). I have been trying to model a protein-RNA complex and found that even the protein alone (516 residues) would not run on a GPU with the same specs as yours. After some print debugging, I discovered that the model gets through one round of prediction fine but crashes during the second round. Digging in further, I saw that two elements are popped off the GPU between cycles, and I suspect they are not caught by the garbage collector fast enough to be cleared before the next cycle starts. I added a line that forces the GPU to clear its cache and free that memory, and now I'm able to predict the protein structure. The change is in rf2aa/training/recycling.py.
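For reference, the added line is the standard PyTorch cache-release call. A minimal sketch of where it sits in the recycling loop (the loop below is paraphrased for illustration, not the repository's exact code; only the `torch.cuda.empty_cache()` call is the actual change):

```python
import torch

def recycle_step(model, inputs, n_cycles):
    # Paraphrased recycling loop -- illustrative only. The fix is the
    # torch.cuda.empty_cache() call, which releases cached GPU blocks
    # held by tensors that have been moved off the device, so the next
    # recycling pass starts with as much free memory as possible.
    output = None
    for _ in range(n_cycles):
        output = model(**inputs)
        torch.cuda.empty_cache()
    return output
```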
Unfortunately, I'm still facing the same issue as OP after attempting your solution.
Dear all, has this issue been solved? I am not an expert, so I would appreciate your help if there is a workaround. I tried adding the suggested line, but it did not help. Here is the output I get:
Running HHblits against UniRef30 with E-value cutoff 1e-6
Running HHblits against UniRef30 with E-value cutoff 1e-3
Running HHblits against BFD with E-value cutoff 1e-3
Running PSIPRED
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Thanks a lot,
Environment:
GPU: NVIDIA RTX 3060 12GB
PyTorch Version: 2.3.1+cu118
CUDA Version: 11.8
OS: Linux Ubuntu
Python Version: 3.10.13
Amino acids: 651
I am encountering a CUDA Out of Memory error when running the run_inference.py script from the RoseTTAFold-All-Atom repository. The error occurs during the model inference step. Below is the detailed error traceback:
Running PSIPRED
Running hhsearch
Error executing job with overrides: []
Traceback (most recent call last):
File "/home/dhseo/Data_HDD2/RoseTTAFold-All-Atom/rf2aa/run_inference.py", line 206, in main
runner.infer()
File "/home/dhseo/Data_HDD2/RoseTTAFold-All-Atom/rf2aa/run_inference.py", line 155, in infer
outputs = self.run_model_forward(input_feats)
File "/home/dhseo/Data_HDD2/RoseTTAFold-All-Atom/rf2aa/run_inference.py", line 121, in run_model_forward
outputs = recycle_step_legacy(self.model,
File "/home/dhseo/Data_HDD2/RoseTTAFold-All-Atom/rf2aa/training/recycling.py", line 30, in recycle_step_legacy
output_i = ddp_model(**input_i)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dhseo/Data_HDD2/RoseTTAFold-All-Atom/rf2aa/model/RoseTTAFoldModel.py", line 364, in forward
pair, state = self.templ_emb(t1d, t2d, alpha_t, xyz_t, mask_t, pair, state, use_checkpoint=use_checkpoint, p2p_crop=p2p_crop)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dhseo/Data_HDD2/RoseTTAFold-All-Atom/rf2aa/model/layers/Embeddings.py", line 335, in forward
templ = self.templ_stack(templ, rbf_feat, t1d, use_checkpoint=use_checkpoint, p2p_crop=p2p_crop) # (B, T, L,L, d_templ)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dhseo/Data_HDD2/RoseTTAFold-All-Atom/rf2aa/model/layers/Embeddings.py", line 185, in forward
templ = self.block[i_block](templ, rbf_feat, state)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dhseo/Data_HDD2/RoseTTAFold-All-Atom/rf2aa/model/Track_module.py", line 374, in forward
gate = einsum('bli,bmj->blmij', left, right).reshape(B,L,L,-1)
File "/home/dhseo/.local/lib/python3.10/site-packages/opt_einsum/contract.py", line 507, in contract
return _core_contract(operands, contraction_list, backend=backend, **einsum_kwargs)
File "/home/dhseo/.local/lib/python3.10/site-packages/opt_einsum/contract.py", line 591, in _core_contract
new_view = _einsum(einsum_str, *tmp_operands, backend=backend, **einsum_kwargs)
File "/home/dhseo/.local/lib/python3.10/site-packages/opt_einsum/sharing.py", line 151, in cached_einsum
return einsum(*args, **kwargs)
File "/home/dhseo/.local/lib/python3.10/site-packages/opt_einsum/contract.py", line 353, in _einsum
return fn(einsum_str, *operands, **kwargs)
File "/home/dhseo/.local/lib/python3.10/site-packages/opt_einsum/backends/torch.py", line 45, in einsum
return torch.einsum(equation, operands)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/functional.py", line 380, in einsum
return einsum(equation, *_operands)
File "/home/dhseo/.local/lib/python3.10/site-packages/torch/functional.py", line 385, in einsum
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.72 GiB. GPU has a total capacity of 11.76 GiB of which 871.88 MiB is free. Including non-PyTorch memory, this process has 10.90 GiB memory in use. Of the allocated memory 9.07 GiB is allocated by PyTorch, and 873.77 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Any insights or suggestions on how to address this CUDA out-of-memory error would be greatly appreciated. Is there any way to further reduce memory usage, or are there specific configurations that could help mitigate this issue?
Thank you in advance for your assistance!
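As a side note, the allocator setting suggested at the end of the traceback can be tried without code changes by setting the environment variable before the script initializes CUDA. A minimal sketch (the wrapper below is illustrative, not part of the repository):

```python
# Minimal sketch: opt in to PyTorch's expandable-segments allocator, as
# suggested by the OOM message, to reduce fragmentation of reserved memory.
# Setting the variable before importing torch ensures the allocator sees it.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # imported after the env var so the CUDA allocator picks it up
```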