Hi,
I am running the ATAC-seq pipeline for the first time and encountered the following error:
sbatch: error: Please add --ntasks-per-node or --exclusive option for your multi-process jobs.
sbatch: error: Batch job submission failed: Unspecified error
My command is as follows:
caper hpc submit /lustre/home/acct-medzy/medzy-pu/Wanggenyu/teacher/ATAC/ENCODE_ATAC_seq_pipeline/atac-seq-pipeline/atac.wdl -i /lustre/home/acct-medzy/medzy-pu/Wanggenyu/teacher/ATAC/ENCODE_ATAC_seq_pipeline/atac-seq-pipeline/example_input_json/ENCSR356KRQ_subsampled.json --conda --leader-job-name ANY_GOOD_LEADER_JOB_NAME
My configuration file is as follows:
backend=slurm
# SLURM partition. DEFINE ONLY IF REQUIRED BY YOUR CLUSTER'S POLICY.
# You must define it for Stanford Sherlock.
slurm-partition=cpu
# SLURM account. DEFINE ONLY IF REQUIRED BY YOUR CLUSTER'S POLICY.
# You must define it for Stanford SCG.
slurm-account=medzy-pu
# Local directory for localized files and Cromwell's intermediate files.
# If not defined then Caper will make .caper_tmp/ on CWD or local-out-dir.
# /tmp is not recommended since Caper stores localized data files here.
local-loc-dir=/lustre/home/acct-medzy/medzy-pu/Wanggenyu/teacher/ATAC/ENCODE_ATAC_seq_pipeline/tmp
cromwell=/lustre/home/acct-medzy/medzy-pu/.caper/cromwell_jar/cromwell-82.jar
womtool=/lustre/home/acct-medzy/medzy-pu/.caper/womtool_jar/womtool-82.jar
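For what it's worth, here is a minimal sketch of how the flag the scheduler asks for could be tried directly with sbatch, outside of Caper; the partition and account values are simply reused from the config above, and the job itself is a throwaway hostname call:

# Hypothetical one-off test job, submitted outside of Caper, to check whether
# --ntasks-per-node=1 satisfies the cluster policy for a single-core task.
sbatch --partition=cpu --account=medzy-pu --ntasks-per-node=1 --wrap="hostname"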
Does anyone know how I can add the --ntasks-per-node or --exclusive option for my multi-process jobs, and where should I add these parameters?
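One possible place to put such a flag is the Caper configuration file itself. Below is a minimal sketch, assuming the installed Caper version exposes a key such as slurm-extra-param (or, in newer versions, slurm-resource-param) that is passed through to sbatch; the exact key name is an assumption and should be checked against the template generated by caper init slurm or the help output of caper hpc submit.

# Sketch of ~/.caper/default.conf; the slurm-extra-param key below is an
# assumption (verify the exact key name for your Caper version), everything
# else is copied from the config shown above.
backend=slurm
slurm-partition=cpu
slurm-account=medzy-pu
# Hypothetical: extra options appended to every sbatch call made by Caper.
slurm-extra-param=--ntasks-per-node=1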