
Torch not compiled with CUDA enabled #46

Closed
struggle007 opened this issue Jul 8, 2022 · 4 comments

@struggle007

Hello, when I run multiligand_inference.py, it prompts this error:

python multiligand_inference.py -o ./my_data_folder/result/ -r ./my_data_folder/multiligand-test/5v4q_protein.pdb -l ./my_data_folder/multiligand-test/ligand.sdf

Namespace(batch_size=8, checkpoint=None, config=None, device='cpu', lazy_dataload=None, lig_slice=None, ligands_sdf='./my_data_folder/multiligand-test/ligand.sdf', n_workers_data_load=0, num_confs=1, output_directory='./my_data_folder/result/', rec_pdb='./my_data_folder/multiligand-test/5v4q_protein.pdb', run_corrections=True, seed=1, skip_in_output=True, train_args=None, use_rdkit_coords=False)
[2022-07-08 10:34:33.719185] [ Using Seed : 1 ]
Found 0 previously calculated ligands
device = cpu
Entering batch ending in index 5/5
Traceback (most recent call last):
  File "multiligand_inference.py", line 278, in <module>
    main()
  File "multiligand_inference.py", line 275, in main
    write_while_inferring(lig_loader, model, args)
  File "multiligand_inference.py", line 217, in write_while_inferring
    lig_graphs = lig_graphs.to(args.device)
  File "/data/anaconda/envs/equibind/lib/python3.7/site-packages/dgl/heterograph.py", line 5448, in to
    ret._graph = self._graph.copy_to(utils.to_dgl_context(device))
  File "/data/anaconda/envs/equibind/lib/python3.7/site-packages/dgl/utils/internal.py", line 533, in to_dgl_context
    device_id = F.device_id(ctx)
  File "/data/anaconda/envs/equibind/lib/python3.7/site-packages/dgl/backend/pytorch/tensor.py", line 90, in device_id
    return 0 if ctx.type == 'cpu' else th.cuda.current_device()
  File "/data/anaconda/envs/equibind/lib/python3.7/site-packages/torch/cuda/__init__.py", line 479, in current_device
    _lazy_init()
  File "/data/anaconda/envs/equibind/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

How can I solve this error?
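A common workaround for this class of error is to resolve the device before moving any graphs or tensors to it. The sketch below is hypothetical, not the repo's actual code; in practice the `cuda_available` flag would come from `torch.cuda.is_available()`, but it is a plain parameter here so the fallback logic is self-contained:

```python
# Hedged sketch: degrade a CUDA device request to CPU when no CUDA
# support is present, instead of letting .to(device) raise.
def resolve_device(requested: str, cuda_available: bool) -> str:
    """Return 'cpu' when a CUDA device is requested but unavailable."""
    if requested.startswith("cuda") and not cuda_available:
        return "cpu"
    return requested

# On a CPU-only torch build, a 'cuda' request degrades gracefully:
print(resolve_device("cuda", cuda_available=False))   # -> cpu
print(resolve_device("cuda:0", cuda_available=True))  # -> cuda:0
```

With this in place, `lig_graphs.to(resolve_device(args.device, torch.cuda.is_available()))` would never hand DGL a CUDA context on a CPU-only build.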

@kjwallace

We are working on this issue, as we noticed the same thing. We will be pushing our proposed fix and GPU setup/config instructions to our fork at github.com/openlab-apps/lab-equibind. We have got it running using Nvidia containers and are working on reproducibility at the moment.

@HannesStark
Owner

@amfaber
Can you look into these issues that seem to appear with your multiligand inference file?

@kjwallace

@amfaber

Can you look into these issues that seem to appear with your multiligand inference file?

We (openlab-apps/equibind) have a solution using an Nvidia card, a slight refactoring of the torch config, and an Nvidia Docker container. It works well enough, but we are still attempting to generalize it when we have more time.

@amfaber
Contributor

amfaber commented Aug 10, 2022

I am not able to replicate the problem on my system, but looking into it, the argument handling of the "device" parameter was dubious at best, which I believe was causing the attempt to send the data to a GPU even when no GPU was present. I've created PR #57, which hopefully resolves this issue and also brings the main repo up to date with the changes I've made to how ligand loading is done internally, primarily adding support for multithreaded loading of SDF and SMILES files.
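The kind of device-argument hardening described above could be sketched like this. The names (`parse_device`, the `--device` flag) are assumptions for illustration, not the actual PR diff; the point is that the parsed device is validated against the installed torch build before anything is moved to it:

```python
# Hypothetical sketch of defensive device-argument handling: the
# requested device is only honored if torch was built with CUDA
# support and a GPU is actually visible.
import argparse

def parse_device(argv=None) -> str:
    parser = argparse.ArgumentParser()
    parser.add_argument("--device", default="cpu", choices=["cpu", "cuda"])
    args = parser.parse_args(argv)
    if args.device == "cuda":
        try:
            import torch
            if not torch.cuda.is_available():
                print("CUDA requested but unavailable; falling back to cpu")
                args.device = "cpu"
        except ImportError:
            args.device = "cpu"
    return args.device
```

Calling `parse_device(["--device", "cuda"])` on a CPU-only install then quietly yields `"cpu"` instead of crashing later inside DGL's `copy_to`.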
