All snakemake versions >= 7.3.0 have this bug. The input-file detection behavior changed in version 7.3.0, and if the input files are too small, a 1000M memory floor is effectively hard-coded.
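For tiny inputs that floor always wins. A quick illustration in plain Python, just restating the default formula `mem_mb=max(2*input.size_mb, 1000)` quoted below:

```python
# The default resource formula snakemake applies when a rule sets no
# mem_mb (formula quoted from the report below):
def default_mem_mb(input_size_mb: float) -> int:
    return max(int(2 * input_size_mb), 1000)

print(default_mem_mb(0.01))  # tiny test input -> 1000 (the 1G floor)
print(default_mem_mb(4000))  # a 4 GB input    -> 8000
```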
Could the slurm profile treat the mem and mem_mb arguments as duplicates and choose the larger of the two at the job submission step?
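A minimal sketch of what that reconciliation could look like on the profile side; the function name, the unit parsing, and the resource-dict layout are illustrative assumptions, not the profile's actual code:

```python
def resolve_mem_mb(resources: dict) -> int:
    """Treat 'mem' and 'mem_mb' as duplicates and keep the larger value."""
    def to_mb(value) -> int:
        # Accept plain integers (MB) as well as strings like "10G" or "500M".
        if isinstance(value, str):
            s = value.strip().upper()
            if s.endswith("G"):
                return int(float(s[:-1]) * 1024)
            if s.endswith("M"):
                return int(float(s[:-1]))
        return int(value)

    candidates = [to_mb(resources[key]) for key in ("mem", "mem_mb") if key in resources]
    return max(candidates) if candidates else 1000

# Cluster config asked for 10G, but snakemake injected its 1000M default:
print(resolve_mem_mb({"mem": "10G", "mem_mb": 1000}))  # -> 10240
```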
The files to reproduce this bug are as follows:
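(The original files are not reproduced here; the sketch below is a hypothetical stand-in of the same shape: a rule with a tiny input while the cluster config requests 10G. File names are made up.)

```python
# Snakefile (hypothetical):
rule copy:
    input:
        "small_input.txt"   # only a few bytes, so 2 * input.size_mb < 1000
    output:
        "result.txt"
    shell:
        "cp {input} {output}"

# cluster config (hypothetical), e.g. cluster_config.yaml:
# __default__:
#   mem: 10G
```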
Run snakemake with the latest slurm profile; the log is as follows:
Notice that the mem_mb argument is set to 1000M: the latest snakemake (7.18.2) defines the default resource `mem_mb=max(2*input.size_mb, 1000)`, so the slurm profile submits jobs with 1G of memory instead of the 10G specified in the customized config. I think this change took place around version 7.10 or even earlier. After switching to snakemake version 7.0, the output is as follows: no default mem_mb is set and the slurm job works as expected.
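Aside from downgrading, a workaround that should hold on the affected versions is to set `mem_mb` explicitly in the rule (or to redefine the default with `--default-resources` on the command line), since explicit rule resources take precedence over the injected default. A sketch, reusing the hypothetical rule from above:

```python
rule copy:
    input:
        "small_input.txt"
    output:
        "result.txt"
    resources:
        mem_mb=10240  # explicit 10G in MB; overrides max(2*input.size_mb, 1000)
    shell:
        "cp {input} {output}"
```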