Merging requires too much memory. Is there a way to split the JSON output? #35
Hi all,

Thank you for ExpansionHunterDenovo.

I'm currently testing ExpansionHunterDenovo on a set of ~1500 WGS cases/controls. Everything is fine, but the merging step takes too much memory and my jobs are usually killed by the cluster manager.

Is there a way to split the JSON to reduce the required memory? Splitting by pattern? Splitting by chromosome?

Thank you for your help.

Comments

Thank you for using the program! I suspect that dinucleotide repeats are causing the issue, so splitting the analysis by repeat unit length might be the way to go. Could you please run this Linux binary with "--min-unit-len" set to 3? I think discarding all dinucleotide repeats from the downstream analysis may be reasonable anyway, because (a) if there are very many dinucleotide repeats, they will dominate the analysis and make it much harder to detect expansions with longer motifs, and (b) the vast majority of known pathogenic repeats have motifs of size 3 and longer. We will consider changing the default value of "--min-unit-len" to 3 in the next release.

@egor-dolzhenko Thank you very much! I won't be able to use your new version before next week.

Sounds good @lindenb! Please let me know if there are any issues with the new version.

@egor-dolzhenko

Glad to hear it @lindenb! Thank you for the update.
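The motif-length filtering suggested in this thread (the effect of "--min-unit-len" set to 3) can be sketched as a small preprocessing step. Note this is only an illustration: the dictionary layout below is an assumption for the example, not ExpansionHunterDenovo's actual str_profile.json schema, and the function name `filter_min_unit_len` is hypothetical.

```python
# Hypothetical sketch: drop dinucleotide repeats from a per-sample profile
# before merging, analogous to running with --min-unit-len 3.
# The profile layout (motif -> counts) is an assumed, simplified schema.
import json


def filter_min_unit_len(profile: dict, min_unit_len: int = 3) -> dict:
    """Keep only repeat entries whose motif is at least min_unit_len bases long."""
    return {
        motif: counts
        for motif, counts in profile.items()
        if len(motif) >= min_unit_len
    }


profile = {"AT": {"count": 9000}, "CAG": {"count": 12}, "GGCC": {"count": 3}}
filtered = filter_min_unit_len(profile)
print(json.dumps(filtered))  # the dinucleotide "AT" entry is discarded
```

Discarding dinucleotide entries this way shrinks each input JSON before the merge, which is where the peak memory cost arises; it does not change the counts of the motifs that are kept.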