In the paper of Dick (2009), it is mentioned that the input FASTA contains contigs > 2 kb, and that contigs longer than 5 kb are split into shorter fragments. Could you suggest some software to prepare this kind of input data?
Best!
Hello @Lily-WL ,
I am currently also using tetraESOM. I believe you don't need to prepare the input data yourself; the package does it on its own.
If you place your contigs into one folder (as described in the manual) and run the esomWrapper.pl script, it will create a file called Tetra_yourprefix_2500_5000_split.fasta.
Here, "yourprefix" is the prefix you provide when running the esomWrapper.pl script.
In my dataset, for example, my first contig is ~100 kb long and is called "contig_100_0". If I check the Tetra_..._split.fasta file, I see this contig split into several parts, like this:
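In case it helps to see the idea in code: the filter-and-split behavior described above (keep contigs above a minimum length, cut longer ones into fixed-size windows) can be sketched roughly as below. This is a minimal illustration, not the actual logic of esomWrapper.pl; the function names and the `min_len`/`window` parameters are my own assumptions, and the wrapper may name split parts and handle trailing fragments differently.

```python
def read_fasta(text):
    """Parse FASTA text into a list of (header, sequence) pairs."""
    records, header, seq = [], None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(seq)))
            header, seq = line[1:].strip(), []
        else:
            seq.append(line.strip())
    if header is not None:
        records.append((header, "".join(seq)))
    return records

def split_contigs(records, min_len=2000, window=5000):
    """Drop contigs shorter than min_len; cut longer ones into <= window chunks.

    Sketch of the split step described in the thread -- not esomWrapper.pl itself.
    """
    out = []
    for name, seq in records:
        if len(seq) < min_len:
            continue  # contig too short, discard
        if len(seq) <= window:
            out.append((name, seq))
        else:
            # split into consecutive windows; part naming is hypothetical
            for i, start in enumerate(range(0, len(seq), window)):
                chunk = seq[start:start + window]
                if len(chunk) >= min_len:  # skip tiny trailing fragments
                    out.append((f"{name}_part{i}", chunk))
    return out
```

For example, a 12 kb contig would come out as three parts (5 kb, 5 kb, 2 kb), while a 1 kb contig would be dropped entirely.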