Run bgcflow efficiently on a lot of small snippets #305
Hi,
Replies: 3 comments 4 replies
Hi Frida, thanks for the inquiry. This is actually something I wanted to try (running at large scale on SLURM or Azure Batch) but haven't gotten the chance to do yet.

Basically, BGCFlow is a Snakemake workflow with several snakefiles for different purposes. The CLI we built for bgcflow is just a wrapper around commonly used snakemake commands. The command `bgcflow run` is a wrapper to `snakemake --snakefile workflow/Snakefile --use-conda --keep-going --rerun-incomplete --rerun-triggers mtime -c 2 --dryrun --wms-monitor http://127.0.0.1:5000`.

Snakemake should be able to distribute large jobs, or even run batches of smaller jobs (e.g. 500 at a time). More about the command-line options here: https://snakemake.readthedocs.io/en/stable/executing/cli.html

What I think can help right now with BGCFlow:

What I can't help with right now but is planned for the future:
My suggestion:
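For the batching idea mentioned above, Snakemake's `--batch` option can split the inputs of an aggregation rule across several invocations. A minimal sketch under stated assumptions: the target rule name `all` and the batch count of 10 are hypothetical placeholders, not BGCFlow specifics, and the `--dryrun`/`--wms-monitor` flags from the wrapped command are dropped so the jobs actually execute:

```shell
#!/usr/bin/env sh
# Run the workflow in 10 batches: each invocation only considers
# roughly 1/10 of the inputs of the (hypothetical) aggregation rule "all".
# The aggregation rule itself runs once the final batch (10/10) completes.
for i in $(seq 1 10); do
    snakemake --snakefile workflow/Snakefile \
        --use-conda --keep-going --rerun-incomplete \
        --rerun-triggers mtime -c 2 \
        --batch all="${i}/10"
done
```

Because `--keep-going` and `--rerun-incomplete` are set, a failed batch can simply be re-invoked with the same `--batch` spec and Snakemake will pick up the missing outputs.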
Thank you so much! I know, most of the files are not expected to have anything in there :)
Hi,
Thank you so much for taking the time to create an extra workflow for me!!