
Inconsistent behavior of pipeline runs with same input parameters #144

Open
ravipatel4 opened this issue Feb 13, 2019 · 3 comments
ravipatel4 commented Feb 13, 2019

Hello,

I ran the newer version of the atac-seq-pipeline four times with the same JSON file and the same command line, since I wanted to test output consistency. I am mainly interested in IDR analysis on already-filtered BAM files. I find that the pipeline sometimes runs successfully and other times fails with different errors. Below are my commands; attached are the corresponding stdout messages and the common JSON file (renamed to my.json.txt so that it attaches here) used for all four runs. Please let me know if I am doing anything wrong.

#Command 1: ran successfully with non-empty output
nohup java -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run atac.wdl -i my.json >& my.log &
#Command 2: terminated with error type 1
nohup java -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run atac.wdl -i my.json &> my_reran.log
#Command 3: terminated with error type 2
nohup java -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run atac.wdl -i my.json &>my_reran2.log
#Command 4: ran successfully with non-empty output
nohup java -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run atac.wdl -i my.json &>my_reran3.log
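
A consistency test like the four invocations above can be scripted as a loop that records each run's exit status. This is only a sketch: `CROMWELL_CMD` is a placeholder to be replaced with the real `java -jar ... cromwell-34.jar run atac.wdl -i my.json` command line shown above.

```shell
#!/usr/bin/env bash
# Hedged sketch: repeat the same pipeline invocation N times and record
# each run's exit status, so intermittent failures show up in one summary.
# CROMWELL_CMD is a stand-in; substitute the real Cromwell command line.
CROMWELL_CMD="${CROMWELL_CMD:-echo simulated pipeline run}"
N=4
for i in $(seq 1 "$N"); do
  log="my_run_${i}.log"
  $CROMWELL_CMD > "$log" 2>&1
  echo "run ${i}: exit=$? log=${log}"
done
```

Checking the printed exit codes (and diffing the logs) makes it easy to see which of the otherwise-identical runs diverged.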

Please let me know if you need any further information. Thank you for your help in advance.

Best,
Ravi
my.log
my_reran.log
my_reran2.log
my_reran3.log
my.json.txt

leepc12 (Collaborator) commented Feb 13, 2019

Please look into the stderr and stdout files in each task directory: cromwell-executions/atac/RANDOM_HASH_STRING/call-TASK_NAME/shard-?/execution/stderr.
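
Walking that per-shard layout by hand is tedious; a `find` over the execution root collects every task's stderr in one pass. This sketch builds a mock directory tree (the `demo-executions` names are hypothetical) so it is self-contained; against a real run, point it at `cromwell-executions/atac` instead.

```shell
#!/usr/bin/env bash
# Sketch: locate and print every per-task stderr under a Cromwell
# execution root. A mock tree stands in for a real run here; replace
# "demo-executions" with cromwell-executions/atac for an actual workflow.
ROOT="demo-executions/atac/abc123/call-trim_adapter/shard-0/execution"
mkdir -p "$ROOT"
echo "example error message" > "$ROOT/stderr"

# Print each stderr file with a header naming the task/shard it came from.
find demo-executions -type f -name stderr | while read -r f; do
  echo "==== $f ===="
  cat "$f"
done
```

Because the shard directories embed the task name and shard index in their paths, the headers alone usually reveal which task failed on which rerun.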

It looks like you ran these pipelines in local mode (without Docker/Singularity).
Did you activate the Conda environment before running them? The pipeline can also fail if you don't have enough resources on your system.

Please post an issue on the new repo. There you will find instructions on how to make a tarball for debugging and upload it.

ravipatel4 (Author) commented Feb 13, 2019

Thanks, Jin, for the quick reply.

The error message in the stderr file is also present in the .log files (towards the end).

Yes, I am running the pipeline locally, with the Conda environment activated. I doubt resources are the limiting factor, since I am using a server with 128 GB of RAM (50 GB currently free).

I couldn't find a way to post an issue on the new repo page, hence I ended up posting it here. Sorry about that.

ravipatel4 (Author) commented:

I think I found out how to post an issue on the new repo. Doing that now.
