AWS ParallelCluster allows running a Slurm cluster on AWS. Here are some notes on things that do not work well with this profile, both to help other users trying this and to possibly make the profile work out of the box on default cluster installations.
By default ParallelCluster does not come with accounting, so sacct does not work. While the job status script supports querying using scontrol, this also led to issues in my case (to get this far in the first place I removed mem/mem-per-CPU from RESOURCE_MAPPING in slurm-submit.py so jobs would run; see the sbatch --mem issue in the list below):
127.0.0.1 - - [19/Jul/2022 14:17:45] "POST /job/register/11557 HTTP/1.1" 200 -
Submitted job 3661 with external jobid '11557'.
[Tue Jul 19 14:17:45 2022]
rule foo:
input: results/xxx.vcf.gz
output: results/xxx.pdf
jobid: 3568
reason: Missing output files: results/xxx.pdf
wildcards: sample=xxx
resources: mem_mb=1000, disk_mb=100000, tmpdir=/scratch, runtime=1000, partition=compute-small
[...]
Submitted job 3747 with external jobid '11561'.
/bin/sh: 11557: command not found
WorkflowError:
Failed to obtain job status. See above for error message.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Cluster sidecar process has terminated (retcode=0).
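The "/bin/sh: 11557: command not found" line above appears to show the external job id itself being run as a shell command, i.e. the status query never reaches scontrol. To test the scontrol route in isolation, a stand-alone status script can be pointed at a job id directly. The sketch below is a hypothetical example, not the profile's actual slurm-status.py, and the JobState-to-outcome mapping is an assumption:

```python
#!/usr/bin/env python3
# Hypothetical stand-alone Snakemake --cluster-status script that relies on
# scontrol only (no Slurm accounting / sacct required). Usage: status.py <jobid>
import subprocess
import sys

jobid = sys.argv[1]

try:
    out = subprocess.run(
        ["scontrol", "show", "-o", "job", jobid],
        capture_output=True, text=True, check=True,
    ).stdout
    # scontrol -o prints one line of key=value pairs; pick out JobState.
    state = next(
        (f.split("=", 1)[1] for f in out.split() if f.startswith("JobState=")),
        "UNKNOWN",
    )
except subprocess.CalledProcessError:
    # scontrol forgets jobs a few minutes after they finish; without sacct we
    # cannot tell success from failure at that point, so report conservatively.
    state = "UNKNOWN"

if state == "COMPLETED":
    print("success")
elif state in {"PENDING", "CONFIGURING", "RUNNING", "COMPLETING", "SUSPENDED"}:
    print("running")
else:
    print("failed")
```

Run as, e.g., ./status.py 11557, it prints success, running, or failed, which is the contract Snakemake expects from a --cluster-status command.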
Some resources:

Tested with Snakemake version 7.8.5.

Issues:

- sbatch --mem: using this option sends nodes straight into the DRAINED state; see also https://blog.ronin.cloud/slurm-parallelcluster-troubleshooting/. Removing mem/mem-per-CPU from RESOURCE_MAPPING in slurm-submit.py (as described above) avoids passing --mem in the first place; a sketch of that edit follows this list.
- sacct does not work, since ParallelCluster does not come with accounting by default. While the job status script supports querying with scontrol, this also led to errors in my case; see the log above.
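For reference, the RESOURCE_MAPPING edit mentioned above might look roughly like this. The dictionary layout is assumed from the Snakemake-Profiles slurm cookiecutter's slurm-submit.py and may not match every version of the profile, so treat it as a sketch rather than a drop-in patch:

```python
# slurm-submit.py (excerpt): maps Snakemake resource names to sbatch options.
# Layout assumed from the Snakemake-Profiles slurm cookiecutter; with the
# memory entries commented out, sbatch is never called with --mem or
# --mem-per-cpu, which avoids the DRAINED-node problem described above.
RESOURCE_MAPPING = {
    "time": ("time", "runtime", "walltime"),
    # "mem": ("mem", "mem_mb", "ram", "memory"),
    # "mem-per-cpu": ("mem-per-cpu", "mem_per_cpu", "mem_per_thread"),
    "nodes": ("nodes", "nnodes"),
}
```

With the memory keys gone, Snakemake's mem_mb resource is simply not translated into an sbatch option, which is what let jobs run on a default ParallelCluster install in my case.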
config.yaml:

settings.json: