
Is it possible to adapt this to exploit GPUs? #1

Open
dww100 opened this issue Jun 15, 2018 · 0 comments
dww100 commented Jun 15, 2018

The original script uses /lustre/atlas/proj-shared/csc249/sfw/NAMD_2.12_Linux-x86_64-multicore/namd2, a build without CUDA, which I can't seem to access at the moment. So I installed my own copy of the binary from the NAMD website and got it working.
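
For reference, installing from the website amounts to little more than unpacking the release tarball and pointing the NAMD2 variable (the one namd.sh already uses) at the binary inside it; tarball name as listed on the 2.12 download page:

tar xzf NAMD_2.12_Linux-x86_64-multicore.tar.gz
export NAMD2=$PWD/NAMD_2.12_Linux-x86_64-multicore/namd2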

This led me to think it would be much better if we could use the GPU version of NAMD. So I downloaded that build (NAMD_2.12_Linux-x86_64-multicore-CUDA) and tried it, but I get the following error:

Pe 4 physical rank 4 binding to CUDA device 0 on nid11408: 'Tesla K20X'  Mem: 5759MB  Rev: 3.5
FATAL ERROR: CUDA error cudaMalloc(pp, sizeofT*len) in file src/CudaUtils.C, function allocate_device_T
 on Pe 4 (nid11408 device 0): all CUDA-capable devices are busy or unavailable
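
From what I can tell, that message is the usual symptom of the GPU sitting in an exclusive compute mode while several PEs try to bind to it. A quick way to check, assuming nvidia-smi is available on the compute nodes:

aprun -n 1 nvidia-smi -q -d COMPUTE

If that reports Exclusive_Process, several PEs cannot open device 0 at once unless a CUDA proxy (MPS) is running.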

The command I am using in namd.sh is:

$NAMD2 +ppn 7 +setcpuaffinity \
       +pemap 0,2,4,6,8,10,12 +commap 14 +idlepoll +devices 0 \
       $conf > $logfile 2>&1
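
If I am reading the Charm++ options right, +ppn 7 starts seven worker threads, +pemap and +commap pin those PEs and the communication thread to specific cores, and +devices 0 binds all of them to the single K20X, so all seven PEs end up sharing device 0.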

I based this on a working test script for NAMD, which uses the following aprun command to run on one node:

aprun -n 1 -N 1 -d 8 $NAMD2 ++ppn 7 +setcpuaffinity +pemap 0,2,4,6,8,10,12 +commap 14 +idlepoll +devices 0 eq0.conf 2>&1

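One thing I have not ruled out is the Cray CUDA proxy: the K20X on these nodes may well be in exclusive-process mode, in which case several PEs sharing device 0 needs the proxy enabled. A sketch of what I would add near the top of namd.sh, assuming this system honours CRAY_CUDA_MPS (older Cray stacks used CRAY_CUDA_PROXY instead):

export CRAY_CUDA_MPS=1   # assumption: starts the CUDA proxy so multiple PEs can share one GPU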

Is there something obvious I am missing?
