-dipSPAdes is a genome assembler designed specifically for diploid highly polymorphic genomes based on SPAdes.
-It takes advantage of divergence between haplomes in repetitive genome regions to resolve them and construct longer contigs.
-dipSPAdes produces consensus contigs (representing a consensus of both haplomes for the orthologous regions) and performs haplotype assembly.
-Note that dipSPAdes can only benefit from high polymorphism rate (at least 0.4%).
-For data with a low polymorphism rate, no improvement in N50 over conventional assemblers is expected.
-
-
-
1.1 dipSPAdes pipeline
-dipSPAdes pipeline consists of three steps:
- 1. Assembly of haplocontigs (contigs representing both haplomes).
- 2. Consensus contigs construction.
- 3. Haplotype assembly.
-
-
-
-
2. Installing dipSPAdes
-dipSPAdes comes as a part of SPAdes assembler package.
-See SPAdes manual for installation instructions.
-Please verify your dipSPAdes installation before initiating dipSPAdes:
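Assuming the SPAdes installation directory is on your PATH, the installation can be checked with the toy data set via the --test flag documented in section 3.2:

```shell
# Run dipSPAdes on the bundled toy data set to verify the installation.
# Assumes dipspades.py is reachable on PATH.
dipspades.py --test
```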
-
-If the installation is successful, you will find the following information at the end of the log:
-
-
-
- * Assembled consensus contigs are in: test_dipspades/dipspades/consensus_contigs.fasta
- * Assembled paired consensus contigs are in: test_dipspades/dipspades/paired_consensus_contigs.fasta
- * Assembled unpaired consensus contigs are in: test_dipspades/dipspades/unpaired_consensus_contigs.fasta
- * Alignment of haplocontigs is in: test_dipspades/dipspades/haplocontigs_alignment
- * Haplotype assembly is in: test_dipspades/dipspades/haplotype_assembly.out
- * Possibly conservative regions are in: test_dipspades/dipspades/possibly_conservative_regions.fasta
-
-Thank you for using SPAdes!
-
-======= dipSPAdes finished.
-dipSPAdes log can be found here: test_dipspades/dipspades/dipspades.log
-
-
-
-
-
3. Running dipSPAdes
-
-
-
3.1 dipSPAdes input
-dipSPAdes can take as input one of the following three alternatives:
-
-
Reads. dipSPAdes takes them in the same format as described in SPAdes manual. In this case dipSPAdes runs SPAdes to obtain haplocontigs as the first step "Assembly of haplocontigs".
-
Haplocontigs. dipSPAdes can use user-provided haplocontigs (for example computed with another assembler). In this case dipSPAdes skips the first step and starts from the second step "Consensus contigs construction".
-
Reads and haplocontigs. dipSPAdes can also use both reads and haplocontigs. In this case dipSPAdes first computes haplocontigs from the reads and then uses a mixture of the computed and the user-provided haplocontigs as input for the subsequent steps.
-
-
-We provide example command lines for each of these scenarios in Examples section.
-
-
-
3.2 dipSPAdes command line options
-To run dipSPAdes from the command line, type
-
-
-dipspades.py [options] -o <output_dir>
-
-
-Note that we assume that SPAdes installation directory is added to the PATH variable (provide full path to dipSPAdes executable otherwise: <spades installation dir>/dipspades.py).
-
-
-
- --test
 - Runs dipSPAdes on the toy data set; see section 2.
-
-
- -h (or --help)
- Prints help.
-
-
-
- -v (or --version)
- Prints version.
-
-
-
-
3.2.2 Input data
-For input read specification, use the SPAdes options described in the SPAdes manual.
-
- --hap <file_name>
- Specifies file with haplocontigs in FASTA format. Note that dipSPAdes can use any number of haplocontig files.
-
-
-
-
3.2.3 Advanced options
-
- --expect-gaps
 - Indicates a significant number of expected gaps in genome coverage (e.g. for data sets with relatively low coverage).
-
-
- --expect-rearrangements
 - Indicates an extreme heterozygosity rate between haplomes (e.g. haplomes that differ by long insertions/deletions).
-
-
- --hap-assembly
- Enables haplotype assembly phase that results in files haplotype_assembly.out, conservative_regions.fasta, and possibly_conservative_regions.fasta (see Haplotype assembly output).
-
-
-
-
3.2.4 Examples
-To perform assembly (construct consensus contigs and perform haplotype assembly) of a diploid genome from paired-end reads (reads_left.fastq and reads_right.fastq), run:
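A command line for this scenario might look as follows; the -1/-2 read options follow the SPAdes input conventions, and the output directory name is illustrative:

```shell
# Assemble a diploid genome from one paired-end library; step 1 runs
# SPAdes internally to obtain haplocontigs. Paths are illustrative.
dipspades.py -1 reads_left.fastq -2 reads_right.fastq -o output_dir
```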
-
-
-To perform assembly of a diploid genome from both reads (reads_left.fastq and reads_right.fastq) and previously computed haplocontigs (haplocontigs.fasta), run:
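For this mixed scenario the reads and the haplocontig file are combined; a sketch with illustrative paths:

```shell
# Assemble from reads plus user-provided haplocontigs; dipSPAdes mixes its
# own computed haplocontigs with the given file. Paths are illustrative.
dipspades.py -1 reads_left.fastq -2 reads_right.fastq --hap haplocontigs.fasta -o output_dir
```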
-
-
-To relaunch steps 2 and 3 of dipSPAdes (see dipSPAdes pipeline section) with a different set of advanced options, you can reuse the haplocontigs constructed in the previous run (see dipSPAdes output section) and run:
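A sketch of such a relaunch, assuming the previous run's haplocontigs.fasta sits in its dipspades output subdirectory as described below; the advanced option shown is just an example of something you might change:

```shell
# Restart from step 2 (consensus construction) using haplocontigs from a
# previous run; paths and the chosen advanced option are illustrative.
dipspades.py --hap previous_output_dir/dipspades/haplocontigs.fasta --expect-gaps -o new_output_dir
```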
-

3.3 dipSPAdes output
-
haplocontigs.fasta - file in FASTA format with computed haplocontigs (if input reads were provided).
-
consensus_contigs.fasta - file in FASTA format with a set of constructed consensus contigs.
-
paired_consensus_contigs.fasta - file in FASTA format with a subset of consensus contigs that have a polymorphism detected on them.
-
unpaired_consensus_contigs.fasta - file in FASTA format with a subset of consensus contigs that have no polymorphism detected on them. These contigs are potentially redundant.
-
haplocontigs_alignment.out - file with recorded haplocontigs that correspond to homologous regions on haplomes.
-
haplotype_assembly.out - file with the result of haplotype assembly.
-
conservative_regions.fasta - file in FASTA format with conservative regions of the diploid genome.
-
possibly_conservative_regions.fasta - file in FASTA format with unresolved regions of haplocontigs that may be either conservative or repetitive.
-
-
-
-
3.3.1 Haplocontigs alignment output
-File haplocontigs_alignment.out consists of blocks of the following structure:
-
-Each block corresponds to the alignment of haplocontigs to a consensus contig CONSENSUS_CONTIG_NAME.
-The name of the consensus contig, CONSENSUS_CONTIG_NAME, coincides with its name in the file consensus_contigs.fasta.
-Then a list of pairs of haplocontig names is printed.
-The haplocontigs in each pair correspond, at least partially, either to the same positions on the same haplome or to homologous positions on different haplomes.
-The list is divided into two subblocks: Overlapping haplocontigs and Nested haplocontigs.
-Overlapping haplocontigs are pairs of haplocontigs in which the suffix of the first haplocontig corresponds to the prefix of the second.
-Nested haplocontigs are pairs of haplocontigs in which some subcontig of the second contig corresponds to the entire first contig.
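Based on the description above, one block of haplocontigs_alignment.out might look like the following sketch (the contig names and exact labels are invented for illustration):

```
CONSENSUS_CONTIG_NAME
Overlapping haplocontigs:
haplocontig_1 haplocontig_5
Nested haplocontigs:
haplocontig_2 haplocontig_7
```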
-
-
-
3.3.2 Haplotype assembly output
-File haplotype_assembly.out consists of lines of the following structure:
-
-
-HAPLOCONTIG_NAME_1 HAPLOCONTIG_NAME_2
-
-
-where HAPLOCONTIG_NAME_1 and HAPLOCONTIG_NAME_2 are names of homologous haplocontigs that correspond to different haplomes and at least partially correspond to homologous positions in different chromosomes.
-The names correspond to the names of haplocontigs specified as input using the --hap option or computed at the first step.
-
-
-
4. Citation
-
- In addition, we would like to list your publications that use our software on our website. Please email the reference, the name of your lab, department and institution to spades.support@cab.spbu.ru.
-
-
-
-
5. Feedback and bug reports
-Your comments, bug reports, and suggestions are very welcome.
-If you have trouble running dipSPAdes, please provide us with the files params.txt and dipspades.log from the directory <output_dir>.
-Address for communications: spades.support@cab.spbu.ru.
-
-
-
diff --git a/src/SPAdes-3.10.1-Linux/share/spades/manual.html b/src/SPAdes-3.10.1-Linux/share/spades/manual.html
deleted file mode 100644
index e94fbe6..0000000
--- a/src/SPAdes-3.10.1-Linux/share/spades/manual.html
+++ /dev/null
@@ -1,1223 +0,0 @@
-
-
- SPAdes 3.10.1 Manual
-
-
-
-
- SPAdes – St. Petersburg genome assembler – is an assembly toolkit containing various assembly pipelines. This manual will help you to install and run SPAdes.
-SPAdes version 3.10.1 was released under GPLv2 on March 1, 2017 and can be downloaded from http://cab.spbu.ru/software/spades/.
-
-
-
1.1 Supported data types
-
- The current version of SPAdes works with Illumina or IonTorrent reads and is capable of providing hybrid assemblies using PacBio, Oxford Nanopore and Sanger reads. You can also provide additional contigs that will be used as long reads.
-
 - Version 3.10.1 of SPAdes supports paired-end reads, mate-pairs and unpaired reads. SPAdes can take several paired-end and mate-pair libraries as input simultaneously. Note that SPAdes was initially designed for small genomes. It was tested on bacterial (both single-cell MDA and standard isolate), fungal and other small genomes. SPAdes is not intended for larger genomes (e.g. mammalian-size genomes); you can use it for such purposes at your own risk.
-
- SPAdes 3.10.1 includes the following additional pipelines:
-
-
dipSPAdes – a module for assembling highly polymorphic diploid genomes (see dipSPAdes manual).
-
metaSPAdes – a pipeline for metagenomic data sets (see metaSPAdes options).
-
plasmidSPAdes – a pipeline for extracting and assembling plasmids from WGS data sets (see plasmidSPAdes options).
-
rnaSPAdes – a de novo transcriptome assembler from RNA-Seq data (see rnaSPAdes manual).
-
truSPAdes – a module for TruSeq barcode assembly (see truSPAdes manual).
-
-
-
-
1.2 SPAdes pipeline
-
-SPAdes comes in several separate modules:
-
-
BayesHammer – read error correction tool for Illumina reads, which works well on both single-cell and standard data sets.
-
IonHammer – read error correction tool for IonTorrent data, which also works on both types of data.
-
SPAdes – iterative short-read genome assembly module; values of K are selected automatically based on the read length and data set type.
-
MismatchCorrector – a tool which improves mismatch and short indel rates in resulting contigs and scaffolds; this module uses the BWA tool [Li H. and Durbin R., 2009]; MismatchCorrector is turned off by default, but we recommend turning it on (see SPAdes options section).
-
-
 - We recommend running SPAdes with BayesHammer/IonHammer to obtain high-quality assemblies. However, if you use your own read correction tool, the error correction module can be turned off. It is also possible to use only the read error correction stage if you wish to use another assembler. See the SPAdes options section.
-
-
-
-
1.3 SPAdes' performance
-
- In this section we give approximate data about SPAdes' performance on two data sets:
-
- We ran SPAdes with default parameters using 16 threads on a server with Intel Xeon 2.27GHz processors and SSD hard drive. BayesHammer runs in approximately half an hour and takes up to 8Gb of RAM to perform read error correction on each data set. Assembly takes about 10 minutes for the E. coli isolate data set and 20 minutes for the E. coli single-cell data set. Both data sets require about 8Gb of RAM (see notes below). MismatchCorrector runs for about 15 minutes on both data sets, and requires less than 2Gb of RAM. All modules also require additional disk space for storing results (corrected reads, contigs, etc) and temporary files. See the table below for more precise values.
-
-
-
-
-
Stage                E. coli isolate                   E. coli single-cell
                     Time    RAM (Gb)   Disk (Gb)      Time    RAM (Gb)   Disk (Gb)
BayesHammer          29m     7.1        11             34m     7.6        8.8
SPAdes               11m     8.4        1.6            17m     8          3.0
MismatchCorrector    13m     1.8        27.1           16m     1.8        25.5
Whole pipeline       53m     8.4        29.6           1h 7m   8          28.3

(RAM = peak RAM usage in Gb; Disk = additional disk space in Gb)
-
-
-
-
- Notes:
-
-
Running SPAdes without preliminary read error correction (e.g. without BayesHammer or IonHammer) will likely require more time and memory.
-
Each module removes its temporary files as soon as it finishes.
-
SPAdes uses 512 Mb per thread for buffers, which results in higher memory consumption. If you set memory limit manually, SPAdes will use smaller buffers and thus less RAM.
-
Performance statistics are given for SPAdes version 3.10.1.
-
-
-
-
-
2. Installation
-
-
- SPAdes requires a 64-bit Linux system or Mac OS and Python (supported versions are 2.4, 2.5, 2.6, 2.7, 3.2, 3.3, 3.4 and 3.5) to be pre-installed on it. To obtain SPAdes you can either download binaries or download source code and compile it yourself.
-
-
-
2.1 Downloading SPAdes Linux binaries
-
-
- To download SPAdes Linux binaries and extract them, go to the directory in which you wish SPAdes to be installed and run:
-
-
-
- wget http://cab.spbu.ru/files/release3.10.1/SPAdes-3.10.1-Linux.tar.gz
- tar -xzf SPAdes-3.10.1-Linux.tar.gz
- cd SPAdes-3.10.1-Linux/bin/
-
-
-
-
- In this case you do not need to run any installation scripts – SPAdes is ready to use. The following files will be placed in the bin directory:
-
-
spades.py (main executable script)
-
dipspades.py (main executable script for dipSPAdes)
-
metaspades.py (main executable script for metaSPAdes)
-
plasmidspades.py (main executable script for plasmidSPAdes)
-
rnaspades.py (main executable script for rnaSPAdes)
-
truspades.py (main executable script for truSPAdes)
-
hammer (read error correcting module for Illumina reads)
-
ionhammer (read error correcting module for IonTorrent reads)
-
spades (assembly module)
-
bwa-spades (BWA alignment module which is required for mismatch correction)
-
corrector (mismatch correction module)
-
dipspades (assembly module for highly polymorphic diploid genomes)
-
scaffold_correction (executable used in truSPAdes pipeline)
-
-
-
- We also suggest adding SPAdes installation directory to the PATH variable.
-
-
-
2.2 Downloading SPAdes binaries for Mac
-
-
- To obtain SPAdes binaries for Mac, go to the directory in which you wish SPAdes to be installed and run:
-
-
-
- curl http://cab.spbu.ru/files/release3.10.1/SPAdes-3.10.1-Darwin.tar.gz -o SPAdes-3.10.1-Darwin.tar.gz
- tar -zxf SPAdes-3.10.1-Darwin.tar.gz
- cd SPAdes-3.10.1-Darwin/bin/
-
-
-
-
- Just as in Linux, SPAdes is ready to use and no further installation steps are required. You will get the same files in the bin directory:
-
-
spades.py (main executable script)
-
dipspades.py (main executable script for dipSPAdes)
-
metaspades.py (main executable script for metaSPAdes)
-
plasmidspades.py (main executable script for plasmidSPAdes)
-
rnaspades.py (main executable script for rnaSPAdes)
-
truspades.py (main executable script for truSPAdes)
-
hammer (read error correcting module for Illumina reads)
-
ionhammer (read error correcting module for IonTorrent reads)
-
spades (assembly module)
-
bwa-spades (BWA alignment module which is required for mismatch correction)
-
corrector (mismatch correction module)
-
dipspades (assembly module for highly polymorphic diploid genomes)
-
scaffold_correction (executable used in truSPAdes pipeline)
-
-
-
- We also suggest adding SPAdes installation directory to the PATH variable.
-
-
-
-
2.3 Downloading and compiling SPAdes source code
-
- If you wish to compile SPAdes by yourself you will need the following libraries to be pre-installed:
-
-
g++ (version 4.8.2 or higher)
-
cmake (version 2.8.12 or higher)
-
zlib
-
libbz2
-
-
-
- If you meet these requirements, you can download the SPAdes source code:
-
-
-
- wget http://cab.spbu.ru/files/release3.10.1/SPAdes-3.10.1.tar.gz
- tar -xzf SPAdes-3.10.1.tar.gz
- cd SPAdes-3.10.1
-
-
-
-
- and build it with the following script:
-
-
-
- ./spades_compile.sh
-
-
-
-
 - SPAdes will be built in the directory ./bin. If you wish to install SPAdes into another directory, you can specify the full path of the destination folder by running the following command in bash or sh:
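To my recollection the build script accepts the destination via a PREFIX variable; treat the variable name as an assumption and check spades_compile.sh if it does not take effect:

```shell
# Install SPAdes into a custom directory (PREFIX is assumed from the
# standard SPAdes build script; verify against your spades_compile.sh).
PREFIX=/usr/local ./spades_compile.sh
```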
-
-
- If you added SPAdes installation directory to the PATH variable, you can run:
-
-
-
- spades.py --test
-
-
-
 - For simplicity, we further assume that the SPAdes installation directory is added to the PATH variable.
-
-
-
- If the installation is successful, you will find the following information at the end of the log:
-
-
-
-===== Assembling finished. Used k-mer sizes: 21, 33, 55
-
- * Corrected reads are in spades_test/corrected/
- * Assembled contigs are in spades_test/contigs.fasta
- * Assembled scaffolds are in spades_test/scaffolds.fasta
- * Assembly graph is in spades_test/assembly_graph.fastg
- * Assembly graph in GFA format is in spades_test/assembly_graph.gfa
- * Paths in the assembly graph corresponding to the contigs are in spades_test/contigs.paths
- * Paths in the assembly graph corresponding to the scaffolds are in spades_test/scaffolds.paths
-
-======= SPAdes pipeline finished.
-
-========= TEST PASSED CORRECTLY.
-
-SPAdes log can be found here: spades_test/spades.log
-
-Thank you for using SPAdes!
-
-
-
-
-
3. Running SPAdes
-
-
-
3.1 SPAdes input
-
- SPAdes takes as input paired-end reads, mate-pairs and single (unpaired) reads in FASTA and FASTQ. For IonTorrent data SPAdes also supports unpaired reads in unmapped BAM format (like the one produced by Torrent Server). However, in order to run read error correction, reads should be in FASTQ or BAM format. Sanger, Oxford Nanopore and PacBio CLR reads can be provided in both formats since SPAdes does not run error correction for these types of data.
-
-
- To run SPAdes 3.10.1 you need at least one library of the following types:
-
-Illumina and IonTorrent libraries should not be assembled together. All other types of input data are compatible. SPAdes should not be used if only PacBio CLR, Oxford Nanopore, Sanger reads or additional contigs are available.
-
-
-SPAdes supports mate-pair only assembly. However, we recommend using only high-quality mate-pair libraries in this case (e.g. libraries that do not have a paired-end part). We tested the mate-pair-only pipeline using Illumina Nextera mate-pairs. See more here.
-
-
 - The current version of SPAdes also supports Lucigen NxSeq® Long Mate Pair libraries, which always have forward-reverse orientation. If you wish to use Lucigen NxSeq® Long Mate Pair reads, you will need the Python regex library to be pre-installed on your machine. You can install it with the Python pip installer:
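The install command would simply be:

```shell
# Install the Python regex library needed for Lucigen NxSeq® support.
pip install regex
```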
-
It is not recommended to run SPAdes on PacBio reads with low coverage (less than 5).
-
We suggest not running SPAdes on PacBio reads for large genomes.
-
SPAdes accepts gzip-compressed files.
-
-
-
Read-pair libraries
-
 - By using the command line interface, you can specify up to nine different paired-end libraries, up to nine mate-pair libraries and also up to nine high-quality mate-pair ones. If you wish to use more, you can use a YAML data set file. We further refer to paired-end and mate-pair libraries simply as read-pair libraries.
-
-
- By default, SPAdes assumes that paired-end and high-quality mate-pair reads have forward-reverse (fr) orientation and usual mate-pairs have reverse-forward (rf) orientation. However, different orientations can be set for any library by using SPAdes options.
-
-
-
- To distinguish reads in pairs we refer to them as left and right reads. For forward-reverse orientation, the forward reads correspond to the left reads and the reverse reads, to the right. Similarly, in reverse-forward orientation left and right reads correspond to reverse and forward reads, respectively, etc.
-
-
- Each read-pair library can be stored in several files or several pairs of files. Paired reads can be organized in two different ways:
-
-
-
In file pairs. In this case left and right reads are placed in different files and go in the same order in respective files.
-
In merged files. In this case, the reads are interlaced, so that each right read goes after the corresponding paired left read.
-
-
-
- For example, Illumina produces paired-end reads in two files: s_1_1_sequence.txt and s_1_2_sequence.txt. If you choose to store reads in file pairs make sure that for every read from s_1_1_sequence.txt the corresponding paired read from s_1_2_sequence.txt is placed in the respective paired file on the same line number. If you choose to use merged files, every read from s_1_1_sequence.txt should be followed by the corresponding paired read from s_1_2_sequence.txt.
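As an illustration of the merged layout, a pair of FASTQ files (4 lines per record) can be interleaved with standard tools; the file names are the illustrative ones from above:

```shell
# Join each 4-line FASTQ record onto one tab-separated line, alternate
# records from the left and right files, then restore the newlines.
paste - - - - < s_1_1_sequence.txt > left.tmp
paste - - - - < s_1_2_sequence.txt > right.tmp
paste -d '\n' left.tmp right.tmp | tr '\t' '\n' > merged.fastq
```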
-
-
-
Unpaired (single-read) libraries
-
 - By using the command line interface, you can specify up to nine different single-read libraries. To input more libraries, you can use a YAML data set file.
-
 - Single-read libraries are assumed to have high quality and reasonable coverage. For example, you can provide PacBio CCS reads as a single-read library. Additionally, if you have merged a paired-end library with overlapping read pairs (for example, using FLASh), you can provide the resulting reads as a single-read library.
-
 - Note that you should not specify PacBio CLR, Sanger reads or additional contigs as single-read libraries; each of them has a separate option.
-
-
-
-
PacBio and Oxford Nanopore reads
-
-
- SPAdes can take as an input an unlimited number of PacBio and Oxford Nanopore libraries.
-
-
- PacBio CLR and Oxford Nanopore reads are used for hybrid assemblies (e.g. with Illumina or IonTorrent). There is no need to pre-correct this kind of data. SPAdes will use PacBio CLR and Oxford Nanopore reads for gap closure and repeat resolution.
-
-
- For PacBio you just need to have filtered subreads in FASTQ/FASTA format. Provide these filtered subreads using --pacbio option. Oxford Nanopore reads are provided with --nanopore option.
-
-
- PacBio CCS/Reads of Insert reads or pre-corrected (using third-party software) PacBio CLR / Oxford Nanopore reads can be simply provided as single reads to SPAdes.
-
-
Additional contigs
-
 - In case you have contigs of the same genome generated by other assembler(s) and you wish to merge them into the SPAdes assembly, you can specify additional contigs using --trusted-contigs or --untrusted-contigs. The first option is used when high-quality contigs are available; these contigs will be used for graph construction, gap closure and repeat resolution. The second option is for less reliable contigs that may have more errors, or contigs of unknown quality; these contigs will be used only for gap closure and repeat resolution. The number of additional contigs is unlimited.
-
-
 - Note that SPAdes does not perform assembly using genomes of closely related species. Only contigs of the same genome should be specified.
-
-
-
-
-
-
3.2 SPAdes command line options
-
- To run SPAdes from the command line, type
-
-
-
- spades.py [options] -o <output_dir>
-
-
-Note that we assume that SPAdes installation directory is added to the PATH variable (provide full path to SPAdes executable otherwise: <spades installation dir>/spades.py).
-
-
-
- --sc
- This flag is required for MDA (single-cell) data.
-
-
-
-
- --meta (same as metaspades.py)
 - This flag is recommended when assembling metagenomic data sets (runs metaSPAdes; see paper for more details). Currently metaSPAdes supports only a single library, which has to be paired-end (we hope to remove this restriction soon). It does not support careful mode (mismatch correction is not available), and you cannot specify a coverage cutoff for metaSPAdes. Note that metaSPAdes may be very sensitive to technical sequences remaining in the data (most notably adapter read-throughs); please run quality control and pre-process your data accordingly.
-
-
-
-
- --plasmid (same as plasmidspades.py)
 - This flag is required when assembling only plasmids from WGS data sets (runs plasmidSPAdes, see paper for the algorithm details). Note that plasmidSPAdes is not compatible with metaSPAdes and single-cell mode. Additionally, we do not recommend running plasmidSPAdes on more than one library.
- See section 3.6 for plasmidSPAdes output details.
-
-
-
-
-
- --rna (same as rnaspades.py)
- This flag should be used when assembling RNA-Seq data sets (runs rnaSPAdes). To learn more, see rnaSPAdes manual.
-
-
-
- --iontorrent
- This flag is required when assembling IonTorrent data. Allows BAM files as input. Carefully read section 3.3 before using this option.
-
-
-
-
- --test
- Runs SPAdes on the toy data set; see section 2.3.
-
- --continue
- Continues SPAdes run from the specified output folder starting from the last available check-point. Check-points are made after:
-
-
error correction module is finished
-
iteration for each specified K value of assembly module is finished
-
mismatch correction is finished for contigs or scaffolds
-
-For example, if specified K values are 21, 33 and 55 and SPAdes was stopped or crashed during assembly stage with K = 55, you can run SPAdes with the --continue option specifying the same output directory. SPAdes will continue the run starting from the assembly stage with K = 55. Error correction module and iterations for K equal to 21 and 33 will not be run again.
-Note that all options except -o <output_dir> are ignored if --continue is set.
-
-
-
- --restart-from <check_point>
- Restart SPAdes run from the specified output folder starting from the specified check-point. Check-points are:
-
-
ec – start from error correction
-
as – restart assembly module from the first iteration
-
k<int> – restart from the iteration with specified k values, e.g. k55
-
mc – restart mismatch correction
-
-In comparison to the --continue option, you can change some of the options when using --restart-from. You can change any option except the basic options, the options for specifying input data (including --dataset), and the --only-error-correction and --only-assembler options. For example, if you ran the assembler with k values 21,33,55 without mismatch correction, you can add one more iteration with k=77 and run the mismatch correction step by running SPAdes with the following options:
- --restart-from k55 -k 21,33,55,77 --mismatch-correction -o <previous_output_dir>.
- Since all files will be overwritten, do not forget to copy your assembly from the previous run if you need it.
-
-
-
- --disable-gzip-output
 - Forces the read error correction module not to compress the corrected reads. If this option is not set, corrected reads will be in *.fastq.gz format.
-
-
-
-
-
Input data
-
- Specifying one library (previously used format)
-
- --12 <file_name>
- File with interlaced forward and reverse paired-end reads.
-
-
-
- -1 <file_name>
- File with forward reads.
-
-
-
- -2 <file_name>
- File with reverse reads.
-
-
-
- -s <file_name>
- File with unpaired reads.
-
-
- Specifying multiple libraries (new format)
-
-
-
Single-read libraries
-
-
- --s<#> <file_name>
 - File for single-read library number <#> (<#> = 1,2,..,9). For example, for the first single-read library the option is:
- --s1 <file_name>
- Do not use -s options for single-read libraries, since it specifies unpaired reads for the first paired-end library.
-
-
-
-
Paired-end libraries
-
-
- --pe<#>-12 <file_name>
 - File with interlaced reads for paired-end library number <#> (<#> = 1,2,..,9). For example, for the first paired-end library the option is:
- --pe1-12 <file_name>
-
-
-
- --pe<#>-1 <file_name>
- File with left reads for paired-end library number <#> (<#> = 1,2,..,9).
-
-
-
- --pe<#>-2 <file_name>
- File with right reads for paired-end library number <#> (<#> = 1,2,..,9).
-
-
-
- --pe<#>-s <file_name>
- File with unpaired reads from paired-end library number <#> (<#> = 1,2,..,9)
- For example, paired reads can become unpaired during the error correction procedure.
-
-
-
- --pe<#>-<or>
- Orientation of reads for paired-end library number <#> (<#> = 1,2,..,9; <or> = "fr","rf","ff").
- The default orientation for paired-end libraries is forward-reverse. For example, to specify reverse-forward orientation for the second paired-end library, you should use the flag:
- --pe2-rf
-
-
-
Mate-pair libraries
-
- --mp<#>-12 <file_name>
- File with interlaced reads for mate-pair library number <#> (<#> = 1,2,..,9).
-
-
-
- --mp<#>-1 <file_name>
- File with left reads for mate-pair library number <#> (<#> = 1,2,..,9).
-
-
-
- --mp<#>-2 <file_name>
- File with right reads for mate-pair library number <#> (<#> = 1,2,..,9).
-
-
- --mp<#>-<or>
- Orientation of reads for mate-pair library number <#> (<#> = 1,2,..,9; <or> = "fr","rf","ff").
- The default orientation for mate-pair libraries is reverse-forward. For example, to specify forward-forward orientation for the first mate-pair library, you should use the flag:
- --mp1-ff
-
-
-
-
High-quality mate-pair libraries (can be used for mate-pair only assembly)
-
-
- --hqmp<#>-12 <file_name>
- File with interlaced reads for high-quality mate-pair library number <#> (<#> = 1,2,..,9).
-
-
-
- --hqmp<#>-1 <file_name>
- File with left reads for high-quality mate-pair library number <#> (<#> = 1,2,..,9).
-
-
-
- --hqmp<#>-2 <file_name>
- File with right reads for high-quality mate-pair library number <#> (<#> = 1,2,..,9).
-
-
- --hqmp<#>-s <file_name>
- File with unpaired reads from high-quality mate-pair library number <#> (<#> = 1,2,..,9)
-
-
-
- --hqmp<#>-<or>
- Orientation of reads for high-quality mate-pair library number <#> (<#> = 1,2,..,9; <or> = "fr","rf","ff").
- The default orientation for high-quality mate-pair libraries is forward-reverse. For example, to specify reverse-forward orientation for the first high-quality mate-pair library, you should use the flag:
- --hqmp1-rf
-
-
-
-
-
-
Lucigen NxSeq® Long Mate Pair libraries (see section 3.1 for details)
-
-
- --nxmate<#>-1 <file_name>
- File with left reads for Lucigen NxSeq® Long Mate Pair library number <#> (<#> = 1,2,..,9).
-
-
-
- --nxmate<#>-2 <file_name>
- File with right reads for Lucigen NxSeq® Long Mate Pair library number <#> (<#> = 1,2,..,9).
-
-
-
-
-
-
- Specifying data for hybrid assembly
-
-
- --pacbio <file_name>
- File with PacBio CLR reads. For PacBio CCS reads use -s option. More information on PacBio reads is provided in section 3.1.
-
-
-
-
- --nanopore <file_name>
- File with Oxford Nanopore reads.
-
-
-
-
- --sanger <file_name>
- File with Sanger reads
-
-
-
- --trusted-contigs <file_name>
- Reliable contigs of the same genome, which are likely to have no misassemblies and small rate of other errors (e.g. mismatches and indels). This option is not intended for contigs of the related species.
-
-
-
- --untrusted-contigs <file_name>
- Contigs of the same genome, quality of which is average or unknown. Contigs of poor quality can be used but may introduce errors in the assembly. This option is also not intended for contigs of the related species.
-
-
-
- Specifying input data with YAML data set file (advanced)
-
-
- An alternative way to specify an input data set for SPAdes is to create a YAML data set file.
-By using a YAML file you can provide an unlimited number of paired-end, mate-pair and unpaired libraries.
-Basically, YAML data set file is a text file, in which input libraries are provided as a comma-separated list in square brackets.
-Each library is provided in braces as a comma-separated list of attributes.
-The following attributes are available:
-
-
orientation ("fr", "rf", "ff")
-
type ("paired-end", "mate-pairs", "hq-mate-pairs", "single", "pacbio", "nanopore", "sanger", "trusted-contigs", "untrusted-contigs")
-
interlaced reads (comma-separated list of files with interlaced reads)
-
left reads (comma-separated list of files with left reads)
-
right reads (comma-separated list of files with right reads)
-
single reads (comma-separated list of files with single reads)
-
-
-
- To properly specify a library you should provide its type and at least one file with reads.
-Orientation is an optional attribute. Its default value is "fr" (forward-reverse) for paired-end libraries and
-"rf" (reverse-forward) for mate-pair libraries.
-
-
- The value for each attribute is given after a colon.
-Comma-separated lists of files should be given in square brackets.
-For each file you should provide its full path in double quotes.
-Make sure that files with right reads are given in the same order as corresponding files with left reads.
-
-
 - For example, if you have one paired-end library split into two pairs of files:
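Following the attribute, bracket and quoting rules described above, such a data set file might look like this sketch (all paths are placeholders):

```yaml
[
  {
    orientation: "fr",
    type: "paired-end",
    left reads: [
      "/full/path/to/s_1_1_sequence.txt",
      "/full/path/to/s_2_1_sequence.txt"
    ],
    right reads: [
      "/full/path/to/s_1_2_sequence.txt",
      "/full/path/to/s_2_2_sequence.txt"
    ]
  }
]
```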
-
- Once you have created a YAML file save it with .yaml extension (e.g. as my_data_set.yaml) and run SPAdes using the --dataset option:
- --dataset <your YAML file>
-
- Notes:
-
-
The --dataset option cannot be used with any other options for specifying input data.
-
We recommend nesting all files with long reads of the same data type in a single library block.
- --cov-cutoff <float>
 - Read coverage cutoff value. Must be a positive float value, or 'auto', or 'off'. The default value is 'off'. When set to 'auto', SPAdes automatically computes the coverage threshold using a conservative strategy. Note that this option is not supported by metaSPAdes.
-
-
-
-
- --phred-offset <33 or 64>
- PHRED quality offset for the input reads, can be either 33 or 64. It will be auto-detected if it is not specified.
-
-The selection of k-mer length is non-trivial for IonTorrent. If the data set is more or less conventional (good coverage, not high GC, etc.), then use our recommendation for long reads (e.g. assemble using k-mer lengths 21,33,55,77,99,127). However, due to the increased error rate some changes of k-mer lengths (e.g. selection of shorter ones) may be required. For example, if you ran SPAdes with k-mer lengths 21,33,55,77 and then decided to assemble the same data set using more iterations and larger values of K, you can run SPAdes once again specifying the same output folder and the following options: --restart-from k77 -k 21,33,55,77,99,127 --mismatch-correction -o <previous_output_dir>. Do not forget to copy contigs and scaffolds from the previous run. We are planning to tackle the issue of selecting k-mer lengths for IonTorrent reads in future versions.
-
-
You may not need error correction at all for the Hi-Q enzyme. However, we suggest trying to assemble your data both with and without error correction and selecting the better variant.
-
-
For non-trivial data sets (e.g. with high GC, low or uneven coverage) we suggest enabling single-cell mode (setting the --sc option) and using k-mer lengths of 21,33,55.
-
-
-
-
<output_dir>/assembly_graph.fastg contains SPAdes assembly graph in FASTG format
-
<output_dir>/contigs.paths contains paths in the assembly graph corresponding to contigs.fasta (see details below)
-
<output_dir>/scaffolds.paths contains paths in the assembly graph corresponding to scaffolds.fasta (see details below)
-
-
-
- Contig/scaffold names in SPAdes output FASTA files have the following format: >NODE_3_length_237403_cov_243.207_ID_45 Here 3 is the number of the contig/scaffold, 237403 is the sequence length in nucleotides and 243.207 is the k-mer coverage for the last (largest) k value used. Note that the k-mer coverage is always lower than the read (per-base) coverage.
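For scripting over SPAdes output, such headers can be taken apart by splitting on underscores. The helper below is hypothetical, for illustration only, and is not part of the SPAdes package:

```python
def parse_spades_header(header):
    """Split a SPAdes contig/scaffold FASTA header into its fields.

    Hypothetical helper; assumes the
    >NODE_<n>_length_<len>_cov_<cov>_ID_<id> layout described above.
    """
    fields = header.lstrip(">").split("_")
    return {
        "node": int(fields[1]),        # contig/scaffold number
        "length": int(fields[3]),      # sequence length in nucleotides
        "kmer_cov": float(fields[5]),  # k-mer coverage for the largest k used
    }

# parse_spades_header(">NODE_3_length_237403_cov_243.207_ID_45")
# -> {'node': 3, 'length': 237403, 'kmer_cov': 243.207}
```

Given the read length L and the largest k used, the read (per-base) coverage can be approximated from the k-mer coverage as kmer_cov * L / (L - k + 1), which is why the k-mer coverage is always the lower of the two.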
-
-
- In general, SPAdes uses two techniques for joining contigs into scaffolds. The first relies on read pairs and tries to estimate the size of the gap separating contigs. The second relies on the assembly graph: e.g. if two contigs are separated by a complex tandem repeat that cannot be resolved exactly, the contigs are joined into a scaffold with a fixed gap size of 100 bp. Contigs produced by SPAdes do not contain N symbols.
-
-
- To view FASTG and GFA files we recommend using the Bandage visualization tool. Note that sequences stored in assembly_graph.fastg correspond to contigs before repeat resolution (edges of the assembly graph). Paths corresponding to contigs after repeat resolution (scaffolding) are stored in contigs.paths (scaffolds.paths) in the format accepted by Bandage (see the Bandage wiki for details). An example is given below.
-
-
Let the contig with the name NODE_5_length_100000_cov_215.651_ID_5 consist of the following edges of the assembly graph:
-
-Since the current version of Bandage does not accept paths with gaps, paths corresponding to contigs/scaffolds that jump over a gap in the assembly graph are split by a semicolon at the gap positions. For example, the following record
-
-states that NODE_3_length_237403_cov_243.207_ID_45 corresponds to the path with 10 edges, but jumps over a gap between edges EDGE_16_length_21503_cov_482.709 and EDGE_31_length_140767_cov_220.239.
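A record like this can be split back into its gap-separated segments with a small sketch. split_path_record is a hypothetical helper, and the short edge labels in the comment are illustrative:

```python
def split_path_record(path_line):
    """Split one path record from contigs.paths / scaffolds.paths.

    Hypothetical helper: segments separated by ';' are the parts of the
    path on either side of a gap; edges within a segment are
    comma-separated.
    """
    return [segment.split(",") for segment in path_line.strip().split(";")]

# split_path_record("16+,17-;31+") -> [['16+', '17-'], ['31+']]
```

A record without semicolons yields a single segment, i.e. a path with no gaps.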
-
-
-The full list of <output_dir> content is presented below:
-
-
 scaffolds.fasta – resulting scaffolds (recommended for use as the final assembly sequences)
- contigs.fasta – resulting contigs
- assembly_graph.fastg – assembly graph
- contigs.paths – contigs paths in the assembly graph
- scaffolds.paths – scaffolds paths in the assembly graph
- before_rr.fasta – contigs before repeat resolution
-
- corrected/ – files from read error correction
- configs/ – configuration files for read error correction
- corrected.yaml – internal configuration file
- Output files with corrected reads
-
- params.txt – information about SPAdes parameters in this run
- spades.log – SPAdes log
- dataset.info – internal configuration file
- input_dataset.yaml – internal YAML data set file
- K<##>/ – directory containing intermediate files from the run with K=<##>. These files should not be used as assembly results; use resulting contigs/scaffolds in files mentioned above.
-
-
-
- SPAdes will overwrite these files and directories if they exist in the specified <output_dir>.
-
-
-
-
- QUAST may be used to generate summary statistics (N50, maximum contig length, GC %, # genes found in a reference list or with built-in gene finding tools, etc.) for a single assembly. It may also be used to compare statistics for multiple assemblies of the same data set (e.g., SPAdes run with different parameters, or several different assemblers).
-
-
-
-
-
-
- In addition, we would like to list your publications that use our software on our website. Please email the reference, the name of your lab, department and institution to spades.support@cab.spbu.ru.
-
-
-
-
-
rnaSPAdes is a tool for de novo transcriptome assembly from RNA-Seq data and is suitable for all kinds of organisms. rnaSPAdes has been a part of the SPAdes package since version 3.9. Information about SPAdes download, requirements, installation and basic options can be found in the SPAdes manual. Below you may find information about the differences between SPAdes and rnaSPAdes.
-
-
-
2 rnaSPAdes specifics
-
-
-
2.1 Running rnaSPAdes
-
-To run rnaSPAdes use
-
-
-
- rnaspades.py [options] -o <output_dir>
-
-
-
-or
-
-
-
- spades.py --rna [options] -o <output_dir>
-
-
-
-Note that we assume that the SPAdes installation directory is added to the PATH variable (otherwise, provide the full path to the rnaSPAdes executable: <rnaspades installation dir>/rnaspades.py).
-
-
Here are several notes regarding options:
-
-
rnaSPAdes can take as input only one paired-end library and multiple single-end libraries.
-
rnaSPAdes does not support --careful and --cov-cutoff options.
-
rnaSPAdes is not compatible with other pipeline options such as --meta, --sc and --plasmid.
-
rnaSPAdes works using only a single k-mer size (55 by default). We strongly recommend not changing this parameter. If your RNA-Seq data set contains long Illumina reads (150 bp and longer), you may try using a longer k-mer size (approximately half of the read length). If you have any doubts about your run, do not hesitate to contact us using the e-mail given below.
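The rule of thumb in the last note can be sketched as follows. suggested_rna_k is a hypothetical helper, not an rnaSPAdes function; it assumes k must be odd and at most 127 (the SPAdes maximum), and falls back to the default of 55 for shorter reads:

```python
def suggested_rna_k(read_length):
    # Roughly half the read length, rounded down to an odd number,
    # capped at SPAdes' maximum k of 127 and floored at the default 55.
    k = min(read_length // 2, 127)
    if k % 2 == 0:
        k -= 1
    return max(k, 55)

# suggested_rna_k(150) -> 75, suggested_rna_k(100) -> 55
```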
-
-
-
-
2.2 rnaSPAdes output
-
-rnaSPAdes outputs only one FASTA file named transcripts.fasta. The corresponding file with paths in assembly_graph.fastg is transcripts.paths.
-
-
- Contig/scaffold names in rnaSPAdes output FASTA files have the following format: >NODE_97_length_6237_cov_11.9819_g8_i2 Similarly to SPAdes, 97 is the number of the transcript, 6237 is its sequence length in nucleotides and 11.9819 is the k-mer coverage. Note that the k-mer coverage is always lower than the read (per-base) coverage. g8_i2 corresponds to gene number 8 and isoform number 2 within this gene. Transcripts with the same gene number are presumably derived from the same or somewhat similar (e.g. paralogous) genes. Note that the prediction is based on the presence of shared sequences in the transcripts and is very approximate.
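The same underscore-splitting approach works for rnaSPAdes headers. Again, this is a hypothetical helper for illustration, assuming the layout described above:

```python
def parse_rnaspades_header(header):
    # ">NODE_97_length_6237_cov_11.9819_g8_i2" ->
    # transcript number, length, k-mer coverage, gene and isoform numbers
    f = header.lstrip(">").split("_")
    return {
        "transcript": int(f[1]),
        "length": int(f[3]),
        "kmer_cov": float(f[5]),
        "gene": int(f[6][1:]),     # strip the leading 'g'
        "isoform": int(f[7][1:]),  # strip the leading 'i'
    }
```

Grouping parsed headers by the "gene" field gives the presumed isoform clusters described above.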
-
-
-
- rnaQUAST may be used for transcriptome assembly quality assessment for model organisms when a reference genome and gene database are available. rnaQUAST also includes the BUSCO and GeneMarkS-T tools for de novo evaluation.
-
-
-
-
-
-
-
diff --git a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/corrector_logic.py b/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/corrector_logic.py
deleted file mode 100644
index 7459c5f..0000000
--- a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/corrector_logic.py
+++ /dev/null
@@ -1,76 +0,0 @@
-#!/usr/bin/python -O
-
-############################################################################
-# Copyright (c) 2015 Saint Petersburg State University
-# Copyright (c) 2011-2014 Saint Petersburg Academic University
-# All Rights Reserved
-# See file LICENSE for details.
-############################################################################
-
-
-import os
-import sys
-import shutil
-import support
-import process_cfg
-from site import addsitedir
-from distutils import dir_util
-
-
-
-def prepare_config_corr(filename, cfg, ext_python_modules_home):
- addsitedir(ext_python_modules_home)
- if sys.version.startswith('2.'):
- import pyyaml2 as pyyaml
- elif sys.version.startswith('3.'):
- import pyyaml3 as pyyaml
- data = pyyaml.load(open(filename, 'r'))
- data["dataset"] = cfg.dataset
- data["output_dir"] = cfg.output_dir
- data["work_dir"] = process_cfg.process_spaces(cfg.tmp_dir)
- #data["hard_memory_limit"] = cfg.max_memory
- data["max_nthreads"] = cfg.max_threads
- data["bwa"] = cfg.bwa
- file_c = open(filename, 'w')
- pyyaml.dump(data, file_c, default_flow_style = False, default_style='"', width=100500)
- file_c.close()
-
-
-
-def run_corrector(configs_dir, execution_home, cfg,
- ext_python_modules_home, log, to_correct, result):
- addsitedir(ext_python_modules_home)
- if sys.version.startswith('2.'):
- import pyyaml2 as pyyaml
- elif sys.version.startswith('3.'):
- import pyyaml3 as pyyaml
-
- dst_configs = os.path.join(cfg.output_dir, "configs")
- if os.path.exists(dst_configs):
- shutil.rmtree(dst_configs)
- dir_util.copy_tree(os.path.join(configs_dir, "corrector"), dst_configs, preserve_times=False)
- cfg_file_name = os.path.join(dst_configs, "corrector.info")
-
- cfg.tmp_dir = support.get_tmp_dir(prefix="corrector_")
-
- prepare_config_corr(cfg_file_name, cfg, ext_python_modules_home)
- binary_name = "corrector"
-
- command = [os.path.join(execution_home, binary_name),
- os.path.abspath(cfg_file_name), os.path.abspath(to_correct)]
-
- log.info("\n== Running contig polishing tool: " + ' '.join(command) + "\n")
-
-
- log.info("\n== Dataset description file was created: " + cfg_file_name + "\n")
-
- support.sys_call(command, log)
- if not os.path.isfile(result):
- support.error("Mismatch correction finished abnormally: " + result + " not found!")
- if os.path.isdir(cfg.tmp_dir):
- shutil.rmtree(cfg.tmp_dir)
-
-
-
-
-
diff --git a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/dipspades_logic.py b/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/dipspades_logic.py
deleted file mode 100644
index b85ea95..0000000
--- a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/dipspades_logic.py
+++ /dev/null
@@ -1,276 +0,0 @@
-#!/usr/bin/env python
-
-############################################################################
-# Copyright (c) 2015 Saint Petersburg State University
-# Copyright (c) 2011-2014 Saint Petersburg Academic University
-# All Rights Reserved
-# See file LICENSE for details.
-############################################################################
-
-
-import sys
-import getopt
-import os
-import logging
-import shutil
-import errno
-import options_storage
-import support
-import process_cfg
-from distutils import dir_util
-from os.path import abspath, expanduser
-
-
-class DS_Args_List:
- long_options = "expect-gaps expect-rearrangements hap= threads= memory= tmp-dir= dsdebug hap-assembly dsK= saves= start-from=".split()
- short_options = "o:t:m:"
-
-
-class DS_Args:
- max_threads = options_storage.THREADS
- max_memory = options_storage.MEMORY
- tmp_dir = None
- allow_gaps = False
- weak_align = False
- haplocontigs_fnames = []
- output_dir = ""
- haplocontigs = ""
- dev_mode = False
- haplotype_assembly = False
- k = 55
- saves = ""
- start_from = "dipspades"
-
-
-def print_ds_args(ds_args, log):
- log.info("dipSPAdes parameters:")
- log.info("\tK value for dipSPAdes: " + str(ds_args.k))
- log.info("\tExpect gaps: " + str(ds_args.allow_gaps))
- log.info("\tExpect rearrangements: " + str(ds_args.weak_align))
- log.info("\tFiles with haplocontigs : " + str(ds_args.haplocontigs_fnames))
- log.info("\tHaplotype assembly stage: " + str(ds_args.haplotype_assembly))
- log.info("\tOutput directory: " + str(ds_args.output_dir))
- log.info("")
- log.info("\tDir for temp files: " + str(ds_args.tmp_dir))
- log.info("\tThreads: " + str(ds_args.max_threads))
- log.info("\tMemory limit (in Gb): " + str(ds_args.max_memory))
-
-
-# src_config_dir - path of dipspades configs
-def copy_configs(src_config_dir, dst_config_dir):
- if os.path.exists(dst_config_dir):
- shutil.rmtree(dst_config_dir)
- dir_util.copy_tree(src_config_dir, dst_config_dir, preserve_times=False)
-
-
-def prepare_configs(src_config_dir, ds_args, log):
- config_dir = os.path.join(ds_args.output_dir, "dipspades_configs")
- copy_configs(src_config_dir, config_dir)
- #log.info("dipSPAdes configs were copied to " + config_dir)
- config_fname = os.path.join(config_dir, "config.info")
- return os.path.abspath(config_fname)
-
-
-def write_haplocontigs_in_file(filename, haplocontigs):
- hapfile = open(filename, 'w')
- for hapcontig in haplocontigs:
- hapfile.write(hapcontig + "\n")
- hapfile.close()
-
-def ParseStartPoint(start_point_arg, log):
- if start_point_arg == 'pbr':
- return 'dipspades:polymorphic_br'
- elif start_point_arg == 'kmg':
- return 'dipspades:kmer_gluer'
- elif start_point_arg == 'cc':
- return 'dipspades:consensus_construction'
- elif start_point_arg == 'ha':
- return 'dipspades:haplotype_assembly'
- log.info("ERROR: Start point " + start_point_arg + " was undefined")
- sys.exit(1)
-
-def parse_arguments(argv, log):
- try:
- options, not_options = getopt.gnu_getopt(argv, DS_Args_List.short_options, DS_Args_List.long_options)
- except getopt.GetoptError:
- _, exc, _ = sys.exc_info()
- sys.stderr.write(str(exc) + "\n")
- sys.stderr.flush()
- options_storage.usage("", dipspades=True)
- sys.exit(1)
-
- ds_args = DS_Args()
- for opt, arg in options:
- if opt == '-o':
- ds_args.output_dir = abspath(expanduser(arg))
- elif opt == '--expect-gaps':
- ds_args.allow_gaps = True
- elif opt == '--expect-rearrangements':
- ds_args.weak_align = True
- elif opt == '--hap':
- ds_args.haplocontigs_fnames.append(support.check_file_existence(arg, 'haplocontigs', log, dipspades=True))
- elif opt == '-t' or opt == "--threads":
- ds_args.max_threads = int(arg)
- elif opt == '-m' or opt == "--memory":
- ds_args.max_memory = int(arg)
- elif opt == '--tmp-dir':
- ds_args.tmp_dir = abspath(expanduser(arg))
- elif opt == '--dsdebug':
- ds_args.dev_mode = True
- elif opt == '--hap-assembly':
- ds_args.haplotype_assembly = True
- elif opt == '--dsK':
- ds_args.k = int(arg)
- elif opt == '--saves':
- ds_args.saves = os.path.abspath(arg)
- ds_args.dev_mode = True
- elif opt == '--start-from':
- ds_args.start_from = ParseStartPoint(arg, log)
- ds_args.dev_mode = True
- ds_args.haplocontigs = os.path.join(ds_args.output_dir, "haplocontigs")
-
- if not ds_args.output_dir:
- support.error("the output_dir is not set! It is a mandatory parameter (-o output_dir).", log, dipspades=True)
- if not ds_args.haplocontigs_fnames and ds_args.start_from == 'dipspades':
- support.error("cannot start dipSPAdes without at least one haplocontigs file!", log, dipspades=True)
- if not ds_args.tmp_dir:
- ds_args.tmp_dir = os.path.join(ds_args.output_dir, options_storage.TMP_DIR)
-
- if ds_args.start_from != 'dipspades' and ds_args.saves == '':
- support.error("saves were not defined! dipSPAdes can not start from " + ds_args.start_from)
-
- return ds_args
-
-
-def prepare_config(config_fname, ds_args, log):
- args_dict = dict()
- args_dict["tails_lie_on_bulges"] = process_cfg.bool_to_str(not ds_args.allow_gaps)
- args_dict["align_bulge_sides"] = process_cfg.bool_to_str(not ds_args.weak_align)
- args_dict["haplocontigs"] = process_cfg.process_spaces(ds_args.haplocontigs)
- args_dict["output_dir"] = process_cfg.process_spaces(ds_args.output_dir)
- args_dict["developer_mode"] = process_cfg.bool_to_str(ds_args.dev_mode)
- args_dict["tmp_dir"] = process_cfg.process_spaces(ds_args.tmp_dir)
- args_dict["max_threads"] = ds_args.max_threads
- args_dict["max_memory"] = ds_args.max_memory
- args_dict["output_base"] = ""
- args_dict["ha_enabled"] = process_cfg.bool_to_str(ds_args.haplotype_assembly)
- args_dict["K"] = str(ds_args.k)
- args_dict['saves'] = ds_args.saves
- args_dict['entry_point'] = ds_args.start_from
- process_cfg.substitute_params(config_fname, args_dict, log)
-
-
-def print_ds_output(output_dir, log):
- consensus_file = os.path.join(output_dir, "consensus_contigs.fasta")
- if os.path.exists(consensus_file):
- log.info(" * Assembled consensus contigs are in: " + support.process_spaces(consensus_file))
-
- paired_consensus_file = os.path.join(output_dir, "paired_consensus_contigs.fasta")
- if os.path.exists(paired_consensus_file):
- log.info(" * Assembled paired consensus contigs are in: " + support.process_spaces(paired_consensus_file))
-
- unpaired_consensus_file = os.path.join(output_dir, "unpaired_consensus_contigs.fasta")
- if os.path.exists(unpaired_consensus_file):
- log.info(" * Assembled unpaired consensus contigs are in: " + support.process_spaces(unpaired_consensus_file))
-
- hapalignment_file = os.path.join(output_dir, "haplocontigs_alignent")
- if os.path.exists(hapalignment_file):
- log.info(" * Alignment of haplocontigs is in: " + support.process_spaces(hapalignment_file))
-
- haplotype_assembly_file = os.path.join(output_dir, "haplotype_assembly.out")
- if os.path.exists(haplotype_assembly_file):
- log.info(" * Results of haplotype assembly are in: " + support.process_spaces(haplotype_assembly_file))
-
- consregions_file = os.path.join(output_dir, "conservative_regions.fasta")
- if os.path.exists(consregions_file):
- log.info(" * Conservative regions are in: " + support.process_spaces(consregions_file))
-
- possconsregions_file = os.path.join(output_dir, "possibly_conservative_regions.fasta")
- if os.path.exists(possconsregions_file):
- log.info(" * Possibly conservative regions are in: " + support.process_spaces(possconsregions_file))
-
-
-def main(ds_args_list, general_args_list, spades_home, bin_home):
- log = logging.getLogger('dipspades')
- log.setLevel(logging.DEBUG)
- console = logging.StreamHandler(sys.stdout)
- console.setFormatter(logging.Formatter('%(message)s'))
- console.setLevel(logging.DEBUG)
- log.addHandler(console)
-
- support.check_binaries(bin_home, log)
- ds_args = parse_arguments(ds_args_list, log)
-
- if not os.path.exists(ds_args.output_dir):
- os.makedirs(ds_args.output_dir)
- log_filename = os.path.join(ds_args.output_dir, "dipspades.log")
- if os.path.exists(log_filename):
- os.remove(log_filename)
- log_handler = logging.FileHandler(log_filename, mode='a')
- log.addHandler(log_handler)
-
- params_filename = os.path.join(ds_args.output_dir, "params.txt")
- params_handler = logging.FileHandler(params_filename, mode='a')
- log.addHandler(params_handler)
-
- log.info("\n")
- log.info("General command line: " + " ".join(general_args_list) + "\n")
- log.info("dipSPAdes command line: " + " ".join(ds_args_list) + "\n")
- print_ds_args(ds_args, log)
- log.removeHandler(params_handler)
-
- log.info("\n======= dipSPAdes started. Log can be found here: " + log_filename + "\n")
- write_haplocontigs_in_file(ds_args.haplocontigs, ds_args.haplocontigs_fnames)
-
- config_fname = prepare_configs(os.path.join(spades_home, "configs", "dipspades"), ds_args, log)
- ds_args.tmp_dir = support.get_tmp_dir(prefix="dipspades_", base_dir=ds_args.tmp_dir)
- prepare_config(config_fname, ds_args, log)
-
- try:
- log.info("===== Assembling started.\n")
- binary_path = os.path.join(bin_home, "dipspades")
- command = [binary_path, config_fname]
- support.sys_call(command, log)
- log.info("\n===== Assembling finished.\n")
- print_ds_output(ds_args.output_dir, log)
- if os.path.isdir(ds_args.tmp_dir):
- shutil.rmtree(ds_args.tmp_dir)
- log.info("\n======= dipSPAdes finished.\n")
- log.info("dipSPAdes log can be found here: " + log_filename + "\n")
- log.info("Thank you for using dipSPAdes!")
- log.removeHandler(log_handler)
- except Exception:
- exc_type, exc_value, _ = sys.exc_info()
- if exc_type == SystemExit:
- sys.exit(exc_value)
- else:
- if exc_type == OSError and exc_value.errno == errno.ENOEXEC: # Exec format error
- support.error("It looks like you are using SPAdes binaries for another platform.\n" +
- support.get_spades_binaries_info_message(), dipspades=True)
- else:
- log.exception(exc_value)
- support.error("exception caught: %s" % exc_type, log)
- except BaseException: # since python 2.5 system-exiting exceptions (e.g. KeyboardInterrupt) are derived from BaseException
- exc_type, exc_value, _ = sys.exc_info()
- if exc_type == SystemExit:
- sys.exit(exc_value)
- else:
- log.exception(exc_value)
- support.error("exception caught: %s" % exc_type, log, dipspades=True)
-
-
-if __name__ == '__main__':
- self_dir_path = os.path.abspath(os.path.dirname(os.path.realpath(__file__)))
- spades_init_candidate1 = os.path.join(self_dir_path, "../../spades_init.py")
- spades_init_candidate2 = os.path.join(self_dir_path, "../../../bin/spades_init.py")
- if os.path.isfile(spades_init_candidate1):
- sys.path.append(os.path.dirname(spades_init_candidate1))
- elif os.path.isfile(spades_init_candidate2):
- sys.path.append(os.path.dirname(spades_init_candidate2))
- else:
- sys.stderr.write("Cannot find spades_init.py! Aborting..\n")
- sys.stderr.flush()
- sys.exit(1)
- import spades_init
- spades_init.init()
- main(sys.argv, "", spades_init.spades_home, spades_init.bin_home)
diff --git a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/hammer_logic.py b/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/hammer_logic.py
deleted file mode 100644
index 1d971b8..0000000
--- a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/hammer_logic.py
+++ /dev/null
@@ -1,161 +0,0 @@
-#!/usr/bin/env python
-
-############################################################################
-# Copyright (c) 2015 Saint Petersburg State University
-# Copyright (c) 2011-2014 Saint Petersburg Academic University
-# All Rights Reserved
-# See file LICENSE for details.
-############################################################################
-
-
-import os
-import sys
-import glob
-import shutil
-import support
-import options_storage
-import process_cfg
-from site import addsitedir
-from distutils import dir_util
-from os.path import isfile
-
-
-def compress_dataset_files(dataset_data, ext_python_modules_home, max_threads, log):
- log.info("\n== Compressing corrected reads (with gzip)")
- to_compress = []
- for reads_library in dataset_data:
- for key, value in reads_library.items():
- if key.endswith('reads'):
- compressed_reads_filenames = []
- for reads_file in value:
- compressed_reads_filenames.append(reads_file + ".gz")
- if not isfile(reads_file):
- if isfile(compressed_reads_filenames[-1]):
- continue # already compressed (--continue/--restart-from case)
- support.error('something went wrong and file with corrected reads (' + reads_file + ') is missing!', log)
- to_compress.append(reads_file)
- reads_library[key] = compressed_reads_filenames
- if len(to_compress):
- pigz_path = support.which('pigz')
- if pigz_path:
- for reads_file in to_compress:
- support.sys_call([pigz_path, '-f', '-7', '-p', str(max_threads), reads_file], log)
- else:
- addsitedir(ext_python_modules_home)
- if sys.version.startswith('2.'):
- from joblib2 import Parallel, delayed
- elif sys.version.startswith('3.'):
- from joblib3 import Parallel, delayed
- n_jobs = min(len(to_compress), max_threads)
- outputs = Parallel(n_jobs=n_jobs)(delayed(support.sys_call)(['gzip', '-f', '-7', reads_file]) for reads_file in to_compress)
- for output in outputs:
- if output:
- log.info(output)
-
-
-def remove_not_corrected_reads(output_dir):
- for not_corrected in glob.glob(os.path.join(output_dir, "*.bad.fastq")):
- os.remove(not_corrected)
-
-
-def prepare_config_bh(filename, cfg, log):
- subst_dict = dict()
-
- subst_dict["dataset"] = process_cfg.process_spaces(cfg.dataset_yaml_filename)
- subst_dict["input_working_dir"] = process_cfg.process_spaces(cfg.tmp_dir)
- subst_dict["output_dir"] = process_cfg.process_spaces(cfg.output_dir)
- subst_dict["general_max_iterations"] = cfg.max_iterations
- subst_dict["general_max_nthreads"] = cfg.max_threads
- subst_dict["count_merge_nthreads"] = cfg.max_threads
- subst_dict["bayes_nthreads"] = cfg.max_threads
- subst_dict["expand_nthreads"] = cfg.max_threads
- subst_dict["correct_nthreads"] = cfg.max_threads
- subst_dict["general_hard_memory_limit"] = cfg.max_memory
- if "qvoffset" in cfg.__dict__:
- subst_dict["input_qvoffset"] = cfg.qvoffset
- if "count_filter_singletons" in cfg.__dict__:
- subst_dict["count_filter_singletons"] = cfg.count_filter_singletons
- if "read_buffer_size" in cfg.__dict__:
- subst_dict["count_split_buffer"] = cfg.read_buffer_size
- process_cfg.substitute_params(filename, subst_dict, log)
-
-
-def prepare_config_ih(filename, cfg, ext_python_modules_home):
- addsitedir(ext_python_modules_home)
- if sys.version.startswith('2.'):
- import pyyaml2 as pyyaml
- elif sys.version.startswith('3.'):
- import pyyaml3 as pyyaml
-
- data = pyyaml.load(open(filename, 'r'))
- data["dataset"] = cfg.dataset_yaml_filename
- data["working_dir"] = cfg.tmp_dir
- data["output_dir"] = cfg.output_dir
- data["hard_memory_limit"] = cfg.max_memory
- data["max_nthreads"] = cfg.max_threads
- pyyaml.dump(data, open(filename, 'w'), default_flow_style = False, default_style='"', width=100500)
-
-
-def run_hammer(corrected_dataset_yaml_filename, configs_dir, execution_home, cfg,
- dataset_data, ext_python_modules_home, only_compressing_is_needed, log):
- addsitedir(ext_python_modules_home)
- if sys.version.startswith('2.'):
- import pyyaml2 as pyyaml
- elif sys.version.startswith('3.'):
- import pyyaml3 as pyyaml
-
- # not all reads need processing
- if support.get_lib_ids_by_type(dataset_data, options_storage.LONG_READS_TYPES):
- not_used_dataset_data = support.get_libs_by_type(dataset_data, options_storage.LONG_READS_TYPES)
- to_correct_dataset_data = support.rm_libs_by_type(dataset_data, options_storage.LONG_READS_TYPES)
- to_correct_dataset_yaml_filename = os.path.join(cfg.output_dir, "to_correct.yaml")
- pyyaml.dump(to_correct_dataset_data, open(to_correct_dataset_yaml_filename, 'w'), default_flow_style = False, default_style='"', width=100500)
- cfg.dataset_yaml_filename = to_correct_dataset_yaml_filename
- else:
- not_used_dataset_data = None
-
- if not only_compressing_is_needed:
- dst_configs = os.path.join(cfg.output_dir, "configs")
- if os.path.exists(dst_configs):
- shutil.rmtree(dst_configs)
- if cfg.iontorrent:
- dir_util.copy_tree(os.path.join(configs_dir, "ionhammer"), dst_configs, preserve_times=False)
- cfg_file_name = os.path.join(dst_configs, "ionhammer.cfg")
- else:
- dir_util.copy_tree(os.path.join(configs_dir, "hammer"), dst_configs, preserve_times=False)
- cfg_file_name = os.path.join(dst_configs, "config.info")
-
- cfg.tmp_dir = support.get_tmp_dir(prefix="hammer_")
- if cfg.iontorrent:
- prepare_config_ih(cfg_file_name, cfg, ext_python_modules_home)
- binary_name = "ionhammer"
- else:
- prepare_config_bh(cfg_file_name, cfg, log)
- binary_name = "hammer"
-
- command = [os.path.join(execution_home, binary_name),
- os.path.abspath(cfg_file_name)]
-
- log.info("\n== Running read error correction tool: " + ' '.join(command) + "\n")
- support.sys_call(command, log)
- if not os.path.isfile(corrected_dataset_yaml_filename):
- support.error("read error correction finished abnormally: " + corrected_dataset_yaml_filename + " not found!")
- else:
- log.info("\n===== Skipping %s (already processed). \n" % "read error correction tool")
- support.continue_from_here(log)
-
- corrected_dataset_data = pyyaml.load(open(corrected_dataset_yaml_filename, 'r'))
- remove_not_corrected_reads(cfg.output_dir)
- is_changed = False
- if cfg.gzip_output:
- is_changed = True
- compress_dataset_files(corrected_dataset_data, ext_python_modules_home, cfg.max_threads, log)
- if not_used_dataset_data:
- is_changed = True
- corrected_dataset_data += not_used_dataset_data
- if is_changed:
- pyyaml.dump(corrected_dataset_data, open(corrected_dataset_yaml_filename, 'w'), default_flow_style = False, default_style='"', width=100500)
- log.info("\n== Dataset description file was created: " + corrected_dataset_yaml_filename + "\n")
-
- if os.path.isdir(cfg.tmp_dir):
- shutil.rmtree(cfg.tmp_dir)
diff --git a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/options_storage.py b/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/options_storage.py
deleted file mode 100644
index 1919e5a..0000000
--- a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/options_storage.py
+++ /dev/null
@@ -1,514 +0,0 @@
-#!/usr/bin/env python
-
-############################################################################
-# Copyright (c) 2015 Saint Petersburg State University
-# Copyright (c) 2011-2014 Saint Petersburg Academic University
-# All Rights Reserved
-# See file LICENSE for details.
-############################################################################
-
-import os
-import sys
-import support
-from os.path import basename
-
-SUPPORTED_PYTHON_VERSIONS = ['2.4', '2.5', '2.6', '2.7', '3.2', '3.3', '3.4', '3.5']
-# allowed reads extensions for BayesHammer and for thw whole SPAdes pipeline
-BH_ALLOWED_READS_EXTENSIONS = ['.fq', '.fastq', '.bam']
-CONTIGS_ALLOWED_READS_EXTENSIONS = ['.fa', '.fasta']
-ALLOWED_READS_EXTENSIONS = BH_ALLOWED_READS_EXTENSIONS + CONTIGS_ALLOWED_READS_EXTENSIONS
-# reads could be gzipped
-BH_ALLOWED_READS_EXTENSIONS += [x + '.gz' for x in BH_ALLOWED_READS_EXTENSIONS]
-CONTIGS_ALLOWED_READS_EXTENSIONS += [x + '.gz' for x in CONTIGS_ALLOWED_READS_EXTENSIONS]
-ALLOWED_READS_EXTENSIONS += [x + '.gz' for x in ALLOWED_READS_EXTENSIONS]
-
-# we support up to MAX_LIBS_NUMBER libs for each type of short-reads libs
-MAX_LIBS_NUMBER = 9
-OLD_STYLE_READS_OPTIONS = ["--12", "-1", "-2", "-s"]
-SHORT_READS_TYPES = {"pe": "paired-end", "s": "single", "mp": "mate-pairs", "hqmp": "hq-mate-pairs", "nxmate": "nxmate"}
-# other libs types:
-LONG_READS_TYPES = ["pacbio", "sanger", "nanopore", "tslr", "trusted-contigs", "untrusted-contigs"]
-
-# final contigs and scaffolds names
-contigs_name = "contigs.fasta"
-scaffolds_name = "scaffolds.fasta"
-assembly_graph_name = "assembly_graph.fastg"
-assembly_graph_name_gfa = "assembly_graph.gfa"
-contigs_paths = "contigs.paths"
-scaffolds_paths = "scaffolds.paths"
-transcripts_name = "transcripts.fasta"
-transcripts_paths = "transcripts.paths"
-
-#other constants
-MIN_K = 1
-MAX_K = 127
-THRESHOLD_FOR_BREAKING_SCAFFOLDS = 3
-THRESHOLD_FOR_BREAKING_ADDITIONAL_CONTIGS = 10
-
-#default values constants
-THREADS = 16
-MEMORY = 250
-K_MERS_RNA = [55]
-K_MERS_SHORT = [21,33,55]
-K_MERS_150 = [21,33,55,77]
-K_MERS_250 = [21,33,55,77,99,127]
-
-ITERATIONS = 1
-TMP_DIR = "tmp"
-
-### START OF OPTIONS
-# basic options
-output_dir = None
-single_cell = False
-iontorrent = False
-meta = False
-rna = False
-large_genome = False
-test_mode = False
-plasmid = False
-
-# pipeline options
-only_error_correction = False
-only_assembler = False
-disable_gzip_output = None
-disable_rr = None
-careful = None
-diploid_mode = False
-
-# advanced options
-continue_mode = False
-developer_mode = None
-dataset_yaml_filename = None
-threads = None
-memory = None
-tmp_dir = None
-k_mers = None
-qvoffset = None # auto-detect by default
-cov_cutoff = 'off' # default is 'off'
-
-# hidden options
-mismatch_corrector = None
-reference = None
-series_analysis = None
-configs_dir = None
-iterations = None
-bh_heap_check = None
-spades_heap_check = None
-read_buffer_size = None
-### END OF OPTIONS
-
-# for restarting SPAdes
-restart_from = None
-restart_careful = None
-restart_mismatch_corrector = None
-restart_disable_gzip_output = None
-restart_disable_rr = None
-restart_threads = None
-restart_memory = None
-restart_tmp_dir = None
-restart_k_mers = None
-original_k_mers = None
-restart_qvoffset = None
-restart_cov_cutoff = None
-restart_developer_mode = None
-restart_reference = None
-restart_configs_dir = None
-restart_read_buffer_size = None
-
-# for running to specific check-point
-stop_after = None
-run_completed = False
-
-# TruSeq options
-truseq_mode = False
-correct_scaffolds = False
-run_truseq_postprocessing = False
-
-dict_of_prefixes = dict()
-dict_of_rel2abs = dict()
-
-# list of spades.py options
-long_options = "12= threads= memory= tmp-dir= iterations= phred-offset= sc iontorrent meta large-genome rna plasmid "\
- "only-error-correction only-assembler "\
- "disable-gzip-output disable-gzip-output:false disable-rr disable-rr:false " \
- "help version test debug debug:false reference= series-analysis= config-file= dataset= "\
- "bh-heap-check= spades-heap-check= read-buffer-size= help-hidden "\
- "mismatch-correction mismatch-correction:false careful careful:false "\
- "continue restart-from= diploid truseq cov-cutoff= configs-dir= stop-after=".split()
-short_options = "o:1:2:s:k:t:m:i:hv"
-
-# adding multiple paired-end, mate-pair and other (long reads) libraries support
-reads_options = []
-for i in range(MAX_LIBS_NUMBER):
- for type in SHORT_READS_TYPES.keys():
- if type == 's': # single
- reads_options += ["s%d=" % (i+1)]
- elif type == 'nxmate': # special case: only left and right reads
- reads_options += ("%s%d-1= %s%d-2=" % tuple([type, i + 1] * 2)).split()
- else: # paired-end, mate-pairs, hq-mate-pairs
- reads_options += ("%s%d-1= %s%d-2= %s%d-12= %s%d-s= %s%d-rf %s%d-fr %s%d-ff" % tuple([type, i + 1] * 7)).split()
-reads_options += list(map(lambda x: x + '=', LONG_READS_TYPES))
-long_options += reads_options
-# for checking whether option corresponds to reads or not
-reads_options = list(map(lambda x: "--" + x.split('=')[0], reads_options))
-reads_options += OLD_STYLE_READS_OPTIONS
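The loop above expands each short-read library type into numbered getopt option names; a standalone sketch of the same expansion (constants copied from this file, with `type` renamed to `lib_type` to avoid shadowing the builtin):

```python
# Standalone sketch of the per-library option-name expansion above.
# getopt's trailing '=' marks options that take an argument.
MAX_LIBS_NUMBER = 9
SHORT_READS_TYPES = {"pe": "paired-end", "s": "single", "mp": "mate-pairs",
                     "hqmp": "hq-mate-pairs", "nxmate": "nxmate"}

reads_options = []
for i in range(MAX_LIBS_NUMBER):
    for lib_type in SHORT_READS_TYPES:
        if lib_type == 's':           # single-read libs: one file option
            reads_options.append("s%d=" % (i + 1))
        elif lib_type == 'nxmate':    # NxMate: only left and right reads
            reads_options += ("%s%d-1= %s%d-2="
                              % tuple([lib_type, i + 1] * 2)).split()
        else:                         # pe / mp / hqmp: files plus orientation flags
            reads_options += ("%s%d-1= %s%d-2= %s%d-12= %s%d-s= "
                              "%s%d-rf %s%d-fr %s%d-ff"
                              % tuple([lib_type, i + 1] * 7)).split()

# The first paired-end library, for example, contributes:
# pe1-1= pe1-2= pe1-12= pe1-s= pe1-rf pe1-fr pe1-ff
```

Each of the nine library slots yields 24 option names (7 each for pe/mp/hqmp, 2 for nxmate, 1 for single), so 216 options total.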
-
-
-def get_mode():
- mode = None
- if basename(sys.argv[0]) == "rnaspades.py":
- mode = 'rna'
- elif basename(sys.argv[0]) == "plasmidspades.py":
- mode = 'plasmid'
- elif basename(sys.argv[0]) == "metaspades.py":
- mode = 'meta'
- return mode
-
-
-def version(spades_version, mode=None):
- sys.stderr.write("SPAdes v" + str(spades_version))
- if mode is None:
- mode = get_mode()
- if mode is not None:
- sys.stderr.write(" [" + mode + "SPAdes mode]")
- sys.stderr.write("\n")
- sys.stderr.flush()
-
-
-def usage(spades_version, show_hidden=False, mode=None):
- sys.stderr.write("SPAdes genome assembler v" + str(spades_version))
- if mode is None:
- mode = get_mode()
- if mode is not None:
- sys.stderr.write(" [" + mode + "SPAdes mode]")
- sys.stderr.write("\n\n")
- sys.stderr.write("Usage: " + str(sys.argv[0]) + " [options] -o " + "\n")
- sys.stderr.write("" + "\n")
- sys.stderr.write("Basic options:" + "\n")
- sys.stderr.write("-o\t\tdirectory to store all the resulting files (required)" + "\n")
- if mode is None: # nothing special, just regular spades.py
- sys.stderr.write("--sc\t\t\tthis flag is required for MDA (single-cell) data" + "\n")
- sys.stderr.write("--meta\t\t\tthis flag is required for metagenomic sample data" + "\n")
- sys.stderr.write("--rna\t\t\tthis flag is required for RNA-Seq data \n")
- sys.stderr.write("--plasmid\t\truns plasmidSPAdes pipeline for plasmid detection \n")
-
- sys.stderr.write("--iontorrent\t\tthis flag is required for IonTorrent data" + "\n")
- sys.stderr.write("--test\t\t\truns SPAdes on toy dataset" + "\n")
- sys.stderr.write("-h/--help\t\tprints this usage message" + "\n")
- sys.stderr.write("-v/--version\t\tprints version" + "\n")
-
- sys.stderr.write("" + "\n")
- if mode != "dip":
- sys.stderr.write("Input data:" + "\n")
- else:
- sys.stderr.write("Input reads:" + "\n")
- sys.stderr.write("--12\t\tfile with interlaced forward and reverse"\
- " paired-end reads" + "\n")
- sys.stderr.write("-1\t\tfile with forward paired-end reads" + "\n")
- sys.stderr.write("-2\t\tfile with reverse paired-end reads" + "\n")
- sys.stderr.write("-s\t\tfile with unpaired reads" + "\n")
- sys.stderr.write("--pe<#>-12\t\tfile with interlaced"\
- " reads for paired-end library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--pe<#>-1\t\tfile with forward reads"\
- " for paired-end library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--pe<#>-2\t\tfile with reverse reads"\
- " for paired-end library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--pe<#>-s\t\tfile with unpaired reads"\
- " for paired-end library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--pe<#>-\torientation of reads"\
- " for paired-end library number <#> (<#> = 1,2,..,9; = fr, rf, ff)" + "\n")
- sys.stderr.write("--s<#>\t\t\tfile with unpaired reads"\
- " for single reads library number <#> (<#> = 1,2,..,9)" + "\n")
- if mode not in ["rna", "meta"]:
- sys.stderr.write("--mp<#>-12\t\tfile with interlaced"\
- " reads for mate-pair library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--mp<#>-1\t\tfile with forward reads"\
- " for mate-pair library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--mp<#>-2\t\tfile with reverse reads"\
- " for mate-pair library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--mp<#>-s\t\tfile with unpaired reads"\
- " for mate-pair library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--mp<#>-\torientation of reads"\
- " for mate-pair library number <#> (<#> = 1,2,..,9; = fr, rf, ff)" + "\n")
- sys.stderr.write("--hqmp<#>-12\t\tfile with interlaced"\
- " reads for high-quality mate-pair library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--hqmp<#>-1\t\tfile with forward reads"\
- " for high-quality mate-pair library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--hqmp<#>-2\t\tfile with reverse reads"\
- " for high-quality mate-pair library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--hqmp<#>-s\t\tfile with unpaired reads"\
- " for high-quality mate-pair library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--hqmp<#>-\torientation of reads"\
- " for high-quality mate-pair library number <#> (<#> = 1,2,..,9; = fr, rf, ff)" + "\n")
- sys.stderr.write("--nxmate<#>-1\t\tfile with forward reads"\
- " for Lucigen NxMate library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--nxmate<#>-2\t\tfile with reverse reads"\
- " for Lucigen NxMate library number <#> (<#> = 1,2,..,9)" + "\n")
- sys.stderr.write("--sanger\t\tfile with Sanger reads\n")
- sys.stderr.write("--pacbio\t\tfile with PacBio reads\n")
- sys.stderr.write("--nanopore\t\tfile with Nanopore reads\n")
- sys.stderr.write("--tslr\t\tfile with TSLR-contigs\n")
- sys.stderr.write("--trusted-contigs\t\tfile with trusted contigs\n")
- sys.stderr.write("--untrusted-contigs\t\tfile with untrusted contigs\n")
- if mode == "dip":
- sys.stderr.write("Input haplocontigs:" + "\n")
- sys.stderr.write("--hap\t\tfile with haplocontigs" + "\n")
-
- sys.stderr.write("" + "\n")
- sys.stderr.write("Pipeline options:" + "\n")
- if mode != "dip":
- sys.stderr.write("--only-error-correction\truns only read error correction"\
- " (without assembling)" + "\n")
- sys.stderr.write("--only-assembler\truns only assembling (without read error"\
- " correction)" + "\n")
- if mode != "dip":
- if mode not in ["rna", "meta"]:
- sys.stderr.write("--careful\t\ttries to reduce number of mismatches and short indels" + "\n")
- sys.stderr.write("--continue\t\tcontinue run from the last available check-point" + "\n")
- sys.stderr.write("--restart-from\t\trestart run with updated options and from the specified check-point ('ec', 'as', 'k', 'mc')" + "\n")
- sys.stderr.write("--disable-gzip-output\tforces error correction not to"\
- " compress the corrected reads" + "\n")
- sys.stderr.write("--disable-rr\t\tdisables repeat resolution stage"\
- " of assembling" + "\n")
-
- if mode == "dip":
- sys.stderr.write("" + "\n")
- sys.stderr.write("DipSPAdes options:" + "\n")
- sys.stderr.write("--expect-gaps\t\tindicates that significant number of gaps in coverage is expected" + "\n")
- sys.stderr.write("--expect-rearrangements\tindicates that significant number of rearrangements between haplomes of diploid genome is expected" + "\n")
- sys.stderr.write("--hap-assembly\t\tenables haplotype assembly phase" + "\n")
-
- sys.stderr.write("" + "\n")
- sys.stderr.write("Advanced options:" + "\n")
- sys.stderr.write("--dataset\t\tfile with dataset description in YAML format" + "\n")
- sys.stderr.write("-t/--threads\t\t\tnumber of threads" + "\n")
- sys.stderr.write("\t\t\t\t[default: %s]\n" % THREADS)
- sys.stderr.write("-m/--memory\t\t\tRAM limit for SPAdes in Gb"\
- " (terminates if exceeded)" + "\n")
- sys.stderr.write("\t\t\t\t[default: %s]\n" % MEMORY)
- sys.stderr.write("--tmp-dir\t\tdirectory for temporary files" + "\n")
- sys.stderr.write("\t\t\t\t[default: /tmp]" + "\n")
- if mode != 'rna':
- sys.stderr.write("-k\t\t\tcomma-separated list of k-mer sizes" \
- " (must be odd and" + "\n")
- sys.stderr.write("\t\t\t\tless than " + str(MAX_K + 1) + ") [default: 'auto']" + "\n")
- else:
- sys.stderr.write("-k\t\t\t\tk-mer size (must be odd and less than " + str(MAX_K + 1) + ") " \
- "[default: " + str(K_MERS_RNA[0]) + "]\n")
-
- if mode not in ["rna", "meta"]:
- sys.stderr.write("--cov-cutoff\t\t\tcoverage cutoff value (a positive float number, "
- "or 'auto', or 'off') [default: 'off']" + "\n")
- sys.stderr.write("--phred-offset\t<33 or 64>\tPHRED quality offset in the"\
- " input reads (33 or 64)" + "\n")
- sys.stderr.write("\t\t\t\t[default: auto-detect]" + "\n")
-
- if show_hidden:
- sys.stderr.write("" + "\n")
- sys.stderr.write("HIDDEN options:" + "\n")
- sys.stderr.write("--debug\t\t\t\truns SPAdes in debug mode (keeps intermediate output)" + "\n")
- sys.stderr.write("--stop-after\t\truns SPAdes until the specified check-point ('ec', 'as', 'k', 'mc') inclusive" + "\n")
- sys.stderr.write("--truseq\t\t\truns SPAdes in TruSeq mode\n")
- sys.stderr.write("--mismatch-correction\t\truns post processing correction"\
- " of mismatches and short indels" + "\n")
- sys.stderr.write("--reference\t\tfile with reference for deep analysis"\
- " (only in debug mode)" + "\n")
- sys.stderr.write("--series-analysis\t\tconfig for metagenomics-series-augmented reassembly" + "\n")
- sys.stderr.write("--configs-dir\t\tdirectory with configs" + "\n")
- sys.stderr.write("-i/--iterations\t\t\tnumber of iterations for read error"\
- " correction [default: %s]\n" % ITERATIONS)
- sys.stderr.write("--read-buffer-size\t\t\tsets size of read buffer for graph construction")
- sys.stderr.write("--bh-heap-check\t\t\tsets HEAPCHECK environment variable"\
- " for BayesHammer" + "\n")
- sys.stderr.write("--spades-heap-check\t\tsets HEAPCHECK environment variable"\
- " for SPAdes" + "\n")
- sys.stderr.write("--large-genome\tEnables optimizations for large genomes \n")
- sys.stderr.write("--help-hidden\tprints this usage message with all hidden options" + "\n")
-
- if show_hidden and mode == "dip":
- sys.stderr.write("" + "\n")
- sys.stderr.write("HIDDEN dipSPAdes options:" + "\n")
- sys.stderr.write("--dsK\t\t\t\tk value used in dipSPAdes [default: '55']" + '\n')
- sys.stderr.write("--dsdebug\t\t\tmakes saves and draws pictures" + '\n')
- sys.stderr.write("--saves\t\tdirectory with saves which will be used for graph loading" + '\n')
- sys.stderr.write("--start-from\t\tstart point of dipSPAdes:" + '\n')
- sys.stderr.write(" pbr: polymorphic bulge remover\n kmg: gluer of equal k-mers\n cc: consensus constructor\n ha: haplotype assembly" + '\n')
-
- sys.stderr.flush()
-
-
-def auto_K_allowed():
- return not k_mers and not single_cell and not iontorrent and not rna and not meta
-    # k-mers were left at the default, not single-cell, not IonTorrent, not RNA and (temporarily) not meta
-
-
-def set_default_values():
- global threads
- global memory
- global iterations
- global disable_gzip_output
- global disable_rr
- global careful
- global mismatch_corrector
- global developer_mode
- global qvoffset
- global cov_cutoff
- global tmp_dir
-
- if threads is None:
- threads = THREADS
- if memory is None:
- if support.get_available_memory():
- memory = int(min(MEMORY, support.get_available_memory()))
- else:
- memory = MEMORY
- if iterations is None:
- iterations = ITERATIONS
- if disable_gzip_output is None:
- disable_gzip_output = False
- if disable_rr is None:
- disable_rr = False
- if careful is None:
- careful = False
- if mismatch_corrector is None:
- mismatch_corrector = False
- if developer_mode is None:
- developer_mode = False
- if qvoffset == 'auto':
- qvoffset = None
- if cov_cutoff is None:
- cov_cutoff = 'off'
- if tmp_dir is None:
- tmp_dir = os.path.join(output_dir, TMP_DIR)
-
-
-def set_test_options():
- global output_dir
-    global single_cell
-    global meta
-    global test_mode
-
- output_dir = os.path.abspath('spades_test')
- single_cell = False
- meta = False
- test_mode = True
-
-
-def save_restart_options(log):
- if dataset_yaml_filename:
- support.error("you cannot specify --dataset with --restart-from option!", log)
- if single_cell:
- support.error("you cannot specify --sc with --restart-from option!", log)
- if meta:
- support.error("you cannot specify --meta with --restart-from option!", log)
- if iontorrent:
- support.error("you cannot specify --iontorrent with --restart-from option!", log)
- if only_assembler:
- support.error("you cannot specify --only-assembler with --restart-from option!", log)
- if only_error_correction:
- support.error("you cannot specify --only-error-correction with --restart-from option!", log)
-
- global restart_k_mers
- global restart_careful
- global restart_mismatch_corrector
- global restart_disable_gzip_output
- global restart_disable_rr
- global restart_threads
- global restart_memory
- global restart_tmp_dir
- global restart_qvoffset
- global restart_cov_cutoff
- global restart_developer_mode
- global restart_reference
- global restart_configs_dir
- global restart_read_buffer_size
-
- restart_k_mers = k_mers
- restart_careful = careful
- restart_mismatch_corrector = mismatch_corrector
- restart_disable_gzip_output = disable_gzip_output
- restart_disable_rr = disable_rr
- restart_threads = threads
- restart_memory = memory
- restart_tmp_dir = tmp_dir
- restart_qvoffset = qvoffset
- restart_cov_cutoff = cov_cutoff
- restart_developer_mode = developer_mode
- restart_reference = reference
- restart_configs_dir = configs_dir
- restart_read_buffer_size = read_buffer_size
-
-
-def load_restart_options():
- global k_mers
- global careful
- global mismatch_corrector
- global disable_gzip_output
- global disable_rr
- global threads
- global memory
- global tmp_dir
- global qvoffset
- global cov_cutoff
- global developer_mode
- global reference
- global configs_dir
- global read_buffer_size
- global original_k_mers
-
- if restart_k_mers:
- original_k_mers = k_mers
- if restart_k_mers == 'auto':
- k_mers = None # set by default
- else:
- k_mers = restart_k_mers
- if restart_careful is not None:
- careful = restart_careful
- if restart_mismatch_corrector is not None:
- mismatch_corrector = restart_mismatch_corrector
-    if restart_disable_gzip_output is not None:
- disable_gzip_output = restart_disable_gzip_output
- if restart_disable_rr is not None:
- disable_rr = restart_disable_rr
- if restart_threads is not None:
- threads = restart_threads
- if restart_memory is not None:
- memory = restart_memory
- if restart_tmp_dir is not None:
- tmp_dir = restart_tmp_dir
- if restart_qvoffset is not None:
- qvoffset = restart_qvoffset
- if restart_cov_cutoff is not None:
- cov_cutoff = restart_cov_cutoff
- if restart_developer_mode is not None:
- developer_mode = restart_developer_mode
- if restart_reference is not None:
- reference = restart_reference
- if restart_configs_dir is not None:
- configs_dir = restart_configs_dir
- if restart_read_buffer_size is not None:
- read_buffer_size = restart_read_buffer_size
-
-
-def enable_truseq_mode():
- global truseq_mode
- global correct_scaffolds
- global run_truseq_postprocessing
- global K_MERS_SHORT
- global K_MERS_150
- global K_MERS_250
- global only_assembler
- global single_cell
- K_MERS_SHORT = [21,33,45,55]
- K_MERS_150 = [21,33,45,55,77]
- K_MERS_250 = [21,33,45,55,77,99,127]
- truseq_mode = True
- correct_scaffolds = True
- run_truseq_postprocessing = True
- only_assembler = True
-
-
-def will_rerun(options):
- for opt, arg in options:
- if opt == '--continue' or opt.startswith('--restart-from'): # checks both --restart-from k33 and --restart-from=k33
- return True
- return False
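`will_rerun` operates on the (option, argument) pairs produced by `getopt`; a minimal standalone sketch showing why `startswith` covers both spellings the comment mentions (getopt returns the option name without its `=value` part):

```python
import getopt

# Standalone version of will_rerun above: detect whether the command
# line asks to continue or restart an existing run.
def will_rerun(options):
    for opt, arg in options:
        # matches '--continue', '--restart-from k33' and '--restart-from=k33'
        if opt == '--continue' or opt.startswith('--restart-from'):
            return True
    return False

# '--restart-from=k55' is parsed into ('--restart-from', 'k55')
opts, _ = getopt.getopt(["--restart-from=k55", "-o", "out"],
                        "o:", ["continue", "restart-from="])
```

Here `will_rerun(opts)` is true, while a plain `-o out` invocation is not a rerun.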
diff --git a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/spades_logic.py b/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/spades_logic.py
deleted file mode 100644
index 8b47c0d..0000000
--- a/src/SPAdes-3.10.1-Linux/share/spades/spades_pipeline/spades_logic.py
+++ /dev/null
@@ -1,393 +0,0 @@
-#!/usr/bin/env python
-
-############################################################################
-# Copyright (c) 2015 Saint Petersburg State University
-# Copyright (c) 2011-2014 Saint Petersburg Academic University
-# All Rights Reserved
-# See file LICENSE for details.
-############################################################################
-
-import os
-import sys
-import shutil
-import support
-import process_cfg
-from process_cfg import bool_to_str
-from site import addsitedir
-from distutils import dir_util
-import options_storage
-
-BASE_STAGE = "construction"
-READS_TYPES_USED_IN_CONSTRUCTION = ["paired-end", "single", "hq-mate-pairs"]
-READS_TYPES_USED_IN_RNA_SEQ = ["paired-end", "single", "trusted-contigs", "untrusted-contigs"]
-
-
-def prepare_config_spades(filename, cfg, log, additional_contigs_fname, K, stage, saves_dir, last_one, execution_home):
- subst_dict = dict()
-
- subst_dict["K"] = str(K)
- subst_dict["dataset"] = process_cfg.process_spaces(cfg.dataset)
- subst_dict["output_base"] = process_cfg.process_spaces(cfg.output_dir)
- subst_dict["tmp_dir"] = process_cfg.process_spaces(cfg.tmp_dir)
- if additional_contigs_fname:
- subst_dict["additional_contigs"] = process_cfg.process_spaces(additional_contigs_fname)
- subst_dict["use_additional_contigs"] = bool_to_str(True)
- else:
- subst_dict["use_additional_contigs"] = bool_to_str(False)
- subst_dict["main_iteration"] = bool_to_str(last_one)
- subst_dict["entry_point"] = stage
- subst_dict["load_from"] = saves_dir
- subst_dict["developer_mode"] = bool_to_str(cfg.developer_mode)
- subst_dict["gap_closer_enable"] = bool_to_str(last_one or K >= 55)
- subst_dict["rr_enable"] = bool_to_str(last_one and cfg.rr_enable)
-# subst_dict["topology_simplif_enabled"] = bool_to_str(last_one)
- subst_dict["max_threads"] = cfg.max_threads
- subst_dict["max_memory"] = cfg.max_memory
- if (not last_one):
- subst_dict["correct_mismatches"] = bool_to_str(False)
- if "resolving_mode" in cfg.__dict__:
- subst_dict["resolving_mode"] = cfg.resolving_mode
- if "pacbio_mode" in cfg.__dict__:
- subst_dict["pacbio_test_on"] = bool_to_str(cfg.pacbio_mode)
- subst_dict["pacbio_reads"] = process_cfg.process_spaces(cfg.pacbio_reads)
- if cfg.cov_cutoff == "off":
- subst_dict["use_coverage_threshold"] = bool_to_str(False)
- else:
- subst_dict["use_coverage_threshold"] = bool_to_str(True)
- if cfg.cov_cutoff == "auto":
- subst_dict["coverage_threshold"] = 0.0
- else:
- subst_dict["coverage_threshold"] = cfg.cov_cutoff
-
- #TODO: make something about spades.py and config param substitution
- if "bwa_paired" in cfg.__dict__:
- subst_dict["bwa_enable"] = bool_to_str(True)
- subst_dict["path_to_bwa"] = os.path.join(execution_home, "bwa-spades")
- if "series_analysis" in cfg.__dict__:
- subst_dict["series_analysis"] = cfg.series_analysis
- process_cfg.substitute_params(filename, subst_dict, log)
-
-
-def get_read_length(output_dir, K, ext_python_modules_home, log):
- est_params_filename = os.path.join(output_dir, "K%d" % K, "final.lib_data")
- max_read_length = 0
- if os.path.isfile(est_params_filename):
- addsitedir(ext_python_modules_home)
- if sys.version.startswith('2.'):
- import pyyaml2 as pyyaml
- elif sys.version.startswith('3.'):
- import pyyaml3 as pyyaml
- est_params_data = pyyaml.load(open(est_params_filename, 'r'))
- for reads_library in est_params_data:
- if reads_library['type'] in READS_TYPES_USED_IN_CONSTRUCTION:
- if int(reads_library["read length"]) > max_read_length:
- max_read_length = int(reads_library["read length"])
- if max_read_length == 0:
- support.error("Failed to estimate maximum read length! File with estimated params: " + est_params_filename, log)
- return max_read_length
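YAML loading aside, the selection logic in `get_read_length` reduces to a small pure function; a sketch with hypothetical sample data (the library dict keys `type` and `read length` are taken from the code above):

```python
# Sketch of the read-length selection in get_read_length above:
# only libraries used in graph construction count toward the maximum.
READS_TYPES_USED_IN_CONSTRUCTION = ["paired-end", "single", "hq-mate-pairs"]

def max_construction_read_length(libraries):
    # libraries: parsed final.lib_data YAML, a list of dicts
    max_rl = 0
    for lib in libraries:
        if lib["type"] in READS_TYPES_USED_IN_CONSTRUCTION:
            max_rl = max(max_rl, int(lib["read length"]))
    return max_rl
```

A long-read library such as PacBio is ignored here, so e.g. a 9000 bp PacBio read length does not inflate the estimate used for k-mer selection.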
-
-
-def update_k_mers_in_special_cases(cur_k_mers, RL, log, silent=False):
- if options_storage.auto_K_allowed():
- if RL >= 250:
- if not silent:
- log.info("Default k-mer sizes were set to %s because estimated "
- "read length (%d) is equal to or greater than 250" % (str(options_storage.K_MERS_250), RL))
- return options_storage.K_MERS_250
- if RL >= 150:
- if not silent:
- log.info("Default k-mer sizes were set to %s because estimated "
- "read length (%d) is equal to or greater than 150" % (str(options_storage.K_MERS_150), RL))
- return options_storage.K_MERS_150
- if RL <= max(cur_k_mers):
- new_k_mers = [k for k in cur_k_mers if k < RL]
- if not silent:
- log.info("K-mer sizes were set to %s because estimated "
- "read length (%d) is less than %d" % (str(new_k_mers), RL, max(cur_k_mers)))
- return new_k_mers
- return cur_k_mers
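The three rules above (promote defaults for reads ≥ 250 bp or ≥ 150 bp, and drop k values that do not fit into the reads) can be sketched as a pure function, with logging and the `options_storage` lookups factored out into an `auto_k` flag:

```python
# Sketch of update_k_mers_in_special_cases above (k-mer lists copied
# from this file; auto_k stands in for options_storage.auto_K_allowed()).
K_MERS_150 = [21, 33, 55, 77]
K_MERS_250 = [21, 33, 55, 77, 99, 127]

def pick_k_mers(cur_k_mers, read_length, auto_k=True):
    if auto_k:
        if read_length >= 250:
            return K_MERS_250
        if read_length >= 150:
            return K_MERS_150
    if read_length <= max(cur_k_mers):
        # keep only k values strictly smaller than the read length
        return [k for k in cur_k_mers if k < read_length]
    return cur_k_mers
```

For 50 bp reads this trims the default `[21, 33, 55]` down to `[21, 33]`, since k = 55 cannot be extracted from a 50 bp read.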
-
-
-def reveal_original_k_mers(RL):
- if options_storage.original_k_mers is None or options_storage.original_k_mers == 'auto':
- cur_k_mers = options_storage.k_mers
- options_storage.k_mers = options_storage.original_k_mers
- original_k_mers = update_k_mers_in_special_cases(options_storage.K_MERS_SHORT, RL, None, silent=True)
- options_storage.k_mers = cur_k_mers
- else:
- original_k_mers = options_storage.original_k_mers
- original_k_mers = [k for k in original_k_mers if k < RL]
- return original_k_mers
-
-def add_configs(command, configs_dir):
- #Order matters here!
- mode_config_mapping = [("single_cell", "mda_mode"),
- ("meta", "meta_mode"),
- ("truseq_mode", "moleculo_mode"),
- ("rna", "rna_mode"),
- ("large_genome", "large_genome_mode"),
- ("plasmid", "plasmid_mode"),
- ("careful", "careful_mode"),
- ("diploid_mode", "diploid_mode")]
- for (mode, config) in mode_config_mapping:
- if options_storage.__dict__[mode]:
- if mode == "rna" or mode == "meta":
- command.append(os.path.join(configs_dir, "mda_mode.info"))
- command.append(os.path.join(configs_dir, config + ".info"))
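`add_configs` appends mode-specific config files in a fixed order (hence the "Order matters here!" comment), with rna and meta additionally pulling in `mda_mode.info`; a standalone sketch where the enabled modes are passed in as a set rather than read from `options_storage`:

```python
import os

# Standalone sketch of add_configs above: append mode config files in a
# fixed order; rna/meta first pull in mda_mode.info.
def add_configs(command, configs_dir, enabled_modes):
    mode_config_mapping = [("single_cell", "mda_mode"),
                           ("meta", "meta_mode"),
                           ("truseq_mode", "moleculo_mode"),
                           ("rna", "rna_mode"),
                           ("large_genome", "large_genome_mode"),
                           ("plasmid", "plasmid_mode"),
                           ("careful", "careful_mode"),
                           ("diploid_mode", "diploid_mode")]
    for mode, config in mode_config_mapping:
        if mode in enabled_modes:
            if mode in ("rna", "meta"):
                command.append(os.path.join(configs_dir, "mda_mode.info"))
            command.append(os.path.join(configs_dir, config + ".info"))
```

With `{"meta", "careful"}` enabled, the command gains `mda_mode.info`, `meta_mode.info` and `careful_mode.info`, in that order.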
-
-
-def run_iteration(configs_dir, execution_home, cfg, log, K, prev_K, last_one):
- data_dir = os.path.join(cfg.output_dir, "K%d" % K)
- stage = BASE_STAGE
- saves_dir = os.path.join(data_dir, 'saves')
- dst_configs = os.path.join(data_dir, "configs")
-
- if options_storage.continue_mode:
- if os.path.isfile(os.path.join(data_dir, "final_contigs.fasta")) and not (options_storage.restart_from and
- (options_storage.restart_from == ("k%d" % K) or options_storage.restart_from.startswith("k%d:" % K))):
- log.info("\n== Skipping assembler: " + ("K%d" % K) + " (already processed)")
- return
- if options_storage.restart_from and options_storage.restart_from.find(":") != -1 \
- and options_storage.restart_from.startswith("k%d:" % K):
- stage = options_storage.restart_from[options_storage.restart_from.find(":") + 1:]
- support.continue_from_here(log)
-
- if stage != BASE_STAGE:
- if not os.path.isdir(saves_dir):
- support.error("Cannot restart from stage %s: saves were not found (%s)!" % (stage, saves_dir))
- else:
- if os.path.exists(data_dir):
- shutil.rmtree(data_dir)
- os.makedirs(data_dir)
-
- dir_util._path_created = {} # see http://stackoverflow.com/questions/9160227/dir-util-copy-tree-fails-after-shutil-rmtree
- dir_util.copy_tree(os.path.join(configs_dir, "debruijn"), dst_configs, preserve_times=False)
-
- log.info("\n== Running assembler: " + ("K%d" % K) + "\n")
- if prev_K:
- additional_contigs_fname = os.path.join(cfg.output_dir, "K%d" % prev_K, "simplified_contigs.fasta")
- if not os.path.isfile(additional_contigs_fname):
- support.warning("additional contigs for K=%d were not found (%s)!" % (K, additional_contigs_fname), log)
- additional_contigs_fname = None
- else:
- additional_contigs_fname = None
- if "read_buffer_size" in cfg.__dict__:
- #FIXME why here???
- process_cfg.substitute_params(os.path.join(dst_configs, "construction.info"), {"read_buffer_size": cfg.read_buffer_size}, log)
- if "scaffolding_mode" in cfg.__dict__:
- #FIXME why here???
- process_cfg.substitute_params(os.path.join(dst_configs, "pe_params.info"), {"scaffolding_mode": cfg.scaffolding_mode}, log)
-
- cfg_fn = os.path.join(dst_configs, "config.info")
- prepare_config_spades(cfg_fn, cfg, log, additional_contigs_fname, K, stage, saves_dir, last_one, execution_home)
-
- command = [os.path.join(execution_home, "spades"), cfg_fn]
-
- add_configs(command, dst_configs)
-
- #print("Calling: " + " ".join(command))
- support.sys_call(command, log)
-
-
-def prepare_config_scaffold_correction(filename, cfg, log, saves_dir, K):
- subst_dict = dict()
-
- subst_dict["K"] = str(K)
- subst_dict["dataset"] = process_cfg.process_spaces(cfg.dataset)
- subst_dict["output_base"] = process_cfg.process_spaces(os.path.join(cfg.output_dir, "SCC"))
- subst_dict["tmp_dir"] = process_cfg.process_spaces(cfg.tmp_dir)
- subst_dict["use_additional_contigs"] = bool_to_str(False)
- subst_dict["main_iteration"] = bool_to_str(False)
- subst_dict["entry_point"] = BASE_STAGE
- subst_dict["load_from"] = saves_dir
- subst_dict["developer_mode"] = bool_to_str(cfg.developer_mode)
- subst_dict["max_threads"] = cfg.max_threads
- subst_dict["max_memory"] = cfg.max_memory
-
- #todo
- process_cfg.substitute_params(filename, subst_dict, log)
-
-
-def run_scaffold_correction(configs_dir, execution_home, cfg, log, latest, K):
- data_dir = os.path.join(cfg.output_dir, "SCC", "K%d" % K)
- saves_dir = os.path.join(data_dir, 'saves')
- dst_configs = os.path.join(data_dir, "configs")
- cfg_file_name = os.path.join(dst_configs, "config.info")
-
- if os.path.exists(data_dir):
- shutil.rmtree(data_dir)
- os.makedirs(data_dir)
-
- dir_util.copy_tree(os.path.join(configs_dir, "debruijn"), dst_configs, preserve_times=False)
-
- log.info("\n== Running scaffold correction \n")
- scaffolds_file = os.path.join(latest, "scaffolds.fasta")
- if not os.path.isfile(scaffolds_file):
-        support.error("Scaffolds were not found in " + scaffolds_file, log)
- if "read_buffer_size" in cfg.__dict__:
- construction_cfg_file_name = os.path.join(dst_configs, "construction.info")
- process_cfg.substitute_params(construction_cfg_file_name, {"read_buffer_size": cfg.read_buffer_size}, log)
- process_cfg.substitute_params(os.path.join(dst_configs, "moleculo_mode.info"), {"scaffolds_file": scaffolds_file}, log)
- prepare_config_scaffold_correction(cfg_file_name, cfg, log, saves_dir, K)
- command = [os.path.join(execution_home, "scaffold_correction"), cfg_file_name]
- add_configs(command, dst_configs)
- log.info(str(command))
- support.sys_call(command, log)
-
-
-def run_spades(configs_dir, execution_home, cfg, dataset_data, ext_python_modules_home, log):
- if not isinstance(cfg.iterative_K, list):
- cfg.iterative_K = [cfg.iterative_K]
- cfg.iterative_K = sorted(cfg.iterative_K)
- used_K = []
-
- # checking and removing conflicting K-mer directories
- if options_storage.restart_from and (options_storage.restart_k_mers != options_storage.original_k_mers):
- processed_K = []
- for k in range(options_storage.MIN_K, options_storage.MAX_K, 2):
- cur_K_dir = os.path.join(cfg.output_dir, "K%d" % k)
- if os.path.isdir(cur_K_dir) and os.path.isfile(os.path.join(cur_K_dir, "final_contigs.fasta")):
- processed_K.append(k)
- if processed_K:
- RL = get_read_length(cfg.output_dir, processed_K[0], ext_python_modules_home, log)
- needed_K = update_k_mers_in_special_cases(cfg.iterative_K, RL, log, silent=True)
- needed_K = [k for k in needed_K if k < RL]
- original_K = reveal_original_k_mers(RL)
-
- k_to_delete = []
- for id, k in enumerate(needed_K):
- if len(processed_K) == id:
- if processed_K[-1] == original_K[-1]: # the last K in the original run was processed in "last_one" mode
- k_to_delete = [original_K[-1]]
- break
- if processed_K[id] != k:
- k_to_delete = processed_K[id:]
- break
- if not k_to_delete and (len(processed_K) > len(needed_K)):
- k_to_delete = processed_K[len(needed_K) - 1:]
- if k_to_delete:
- log.info("Restart mode: removing previously processed directories for K=%s "
- "to avoid conflicts with K specified with --restart-from" % (str(k_to_delete)))
- for k in k_to_delete:
- shutil.rmtree(os.path.join(cfg.output_dir, "K%d" % k))
-
- bin_reads_dir = os.path.join(cfg.output_dir, ".bin_reads")
- if os.path.isdir(bin_reads_dir) and not options_storage.continue_mode:
- shutil.rmtree(bin_reads_dir)
- cfg.tmp_dir = support.get_tmp_dir(prefix="spades_")
-
- finished_on_stop_after = False
- K = cfg.iterative_K[0]
- if len(cfg.iterative_K) == 1:
- run_iteration(configs_dir, execution_home, cfg, log, K, None, True)
- used_K.append(K)
- else:
- run_iteration(configs_dir, execution_home, cfg, log, K, None, False)
- used_K.append(K)
- if options_storage.stop_after == "k%d" % K:
- finished_on_stop_after = True
- else:
- prev_K = K
- RL = get_read_length(cfg.output_dir, K, ext_python_modules_home, log)
- cfg.iterative_K = update_k_mers_in_special_cases(cfg.iterative_K, RL, log)
- if len(cfg.iterative_K) < 2 or cfg.iterative_K[1] + 1 > RL:
- if cfg.rr_enable:
- if len(cfg.iterative_K) < 2:
- log.info("== Rerunning for the first value of K (%d) with Repeat Resolving" %
- cfg.iterative_K[0])
- else:
- support.warning("Second value of iterative K (%d) exceeded estimated read length (%d). "
- "Rerunning for the first value of K (%d) with Repeat Resolving" %
- (cfg.iterative_K[1], RL, cfg.iterative_K[0]), log)
- run_iteration(configs_dir, execution_home, cfg, log, cfg.iterative_K[0], None, True)
- used_K.append(cfg.iterative_K[0])
- K = cfg.iterative_K[0]
- else:
- rest_of_iterative_K = cfg.iterative_K
- rest_of_iterative_K.pop(0)
- count = 0
- for K in rest_of_iterative_K:
- count += 1
- last_one = count == len(cfg.iterative_K) or (rest_of_iterative_K[count] + 1 > RL)
- run_iteration(configs_dir, execution_home, cfg, log, K, prev_K, last_one)
- used_K.append(K)
- prev_K = K
- if last_one:
- break
- if options_storage.stop_after == "k%d" % K:
- finished_on_stop_after = True
- break
- if count < len(cfg.iterative_K) and not finished_on_stop_after:
- support.warning("Iterations stopped. Value of K (%d) exceeded estimated read length (%d)" %
- (cfg.iterative_K[count], RL), log)
-
- if options_storage.stop_after and options_storage.stop_after.startswith('k'):
- support.finish_here(log)
- latest = os.path.join(cfg.output_dir, "K%d" % K)
-
- if cfg.correct_scaffolds and not options_storage.run_completed:
- if options_storage.continue_mode and os.path.isfile(os.path.join(cfg.output_dir, "SCC", "corrected_scaffolds.fasta")) and not options_storage.restart_from == "scc":
- log.info("\n===== Skipping %s (already processed). \n" % "scaffold correction")
- else:
- if options_storage.continue_mode:
- support.continue_from_here(log)
- run_scaffold_correction(configs_dir, execution_home, cfg, log, latest, 21)
- latest = os.path.join(os.path.join(cfg.output_dir, "SCC"), "K21")
- if options_storage.stop_after == 'scc':
- support.finish_here(log)
-
- if cfg.correct_scaffolds:
- correct_scaffolds_fpath = os.path.join(latest, "corrected_scaffolds.fasta")
- if os.path.isfile(correct_scaffolds_fpath):
- shutil.copyfile(correct_scaffolds_fpath, cfg.result_scaffolds)
-    elif not finished_on_stop_after: # interrupted by --stop-after, so the final K was not processed!
- if os.path.isfile(os.path.join(latest, "before_rr.fasta")):
- result_before_rr_contigs = os.path.join(os.path.dirname(cfg.result_contigs), "before_rr.fasta")
- if not os.path.isfile(result_before_rr_contigs) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "before_rr.fasta"), result_before_rr_contigs)
- if options_storage.rna:
- if os.path.isfile(os.path.join(latest, "transcripts.fasta")):
- if not os.path.isfile(cfg.result_transcripts) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "transcripts.fasta"), cfg.result_transcripts)
- if os.path.isfile(os.path.join(latest, "transcripts.paths")):
- if not os.path.isfile(cfg.result_transcripts_paths) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "transcripts.paths"), cfg.result_transcripts_paths)
- else:
- if os.path.isfile(os.path.join(latest, "final_contigs.fasta")):
- if not os.path.isfile(cfg.result_contigs) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "final_contigs.fasta"), cfg.result_contigs)
- if os.path.isfile(os.path.join(latest, "first_pe_contigs.fasta")):
- result_first_pe_contigs = os.path.join(os.path.dirname(cfg.result_contigs), "first_pe_contigs.fasta")
- if not os.path.isfile(result_first_pe_contigs) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "first_pe_contigs.fasta"), result_first_pe_contigs)
- if cfg.rr_enable:
- if os.path.isfile(os.path.join(latest, "scaffolds.fasta")):
- if not os.path.isfile(cfg.result_scaffolds) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "scaffolds.fasta"), cfg.result_scaffolds)
- if os.path.isfile(os.path.join(latest, "scaffolds.paths")):
- if not os.path.isfile(cfg.result_scaffolds_paths) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "scaffolds.paths"), cfg.result_scaffolds_paths)
- if os.path.isfile(os.path.join(latest, "assembly_graph.gfa")):
- if not os.path.isfile(cfg.result_graph_gfa) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "assembly_graph.gfa"), cfg.result_graph_gfa)
- if os.path.isfile(os.path.join(latest, "assembly_graph.fastg")):
- if not os.path.isfile(cfg.result_graph) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "assembly_graph.fastg"), cfg.result_graph)
- if os.path.isfile(os.path.join(latest, "final_contigs.paths")):
- if not os.path.isfile(cfg.result_contigs_paths) or not options_storage.continue_mode:
- shutil.copyfile(os.path.join(latest, "final_contigs.paths"), cfg.result_contigs_paths)
-
-
- if cfg.developer_mode:
- # saves
- saves_link = os.path.join(os.path.dirname(cfg.result_contigs), "saves")
- if os.path.lexists(saves_link): # exists returns False for broken links! lexists return True
- os.remove(saves_link)
- os.symlink(os.path.join(latest, "saves"), saves_link)
-
- if os.path.isdir(bin_reads_dir):
- shutil.rmtree(bin_reads_dir)
- if os.path.isdir(cfg.tmp_dir):
- shutil.rmtree(cfg.tmp_dir)
-
- return used_K
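The loop removed above stops iterating once the next K value would exceed the estimated read length (the `last_one` check). A minimal standalone sketch of that K-selection rule, with hypothetical names (`select_k_values`, `read_length`) standing in for the config plumbing:

```python
def select_k_values(iterative_k, read_length):
    """Return the k-mer sizes that would actually be run: iteration
    stops after the final k, or once the next k would exceed the
    estimated read length (mirroring the 'last_one' condition)."""
    used = []
    for i, k in enumerate(iterative_k):
        used.append(k)
        # last iteration if this is the final k, or the next k is too large
        if i + 1 == len(iterative_k) or iterative_k[i + 1] + 1 > read_length:
            break
    return used
```

For example, with 60 bp reads the default ladder 21/33/55/77 is cut short after 55, since 77 cannot fit inside a 60 bp read.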
diff --git a/src/SPAdes-3.14.0-Linux/bin/cds-mapping-stats b/src/SPAdes-3.14.0-Linux/bin/cds-mapping-stats
new file mode 100755
index 0000000..920bec1
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/cds-mapping-stats differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/cds-subgraphs b/src/SPAdes-3.14.0-Linux/bin/cds-subgraphs
new file mode 100755
index 0000000..7d895f1
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/cds-subgraphs differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/mag-improve b/src/SPAdes-3.14.0-Linux/bin/mag-improve
new file mode 100755
index 0000000..9f5e510
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/mag-improve differ
diff --git a/src/SPAdes-3.10.1-Linux/bin/metaspades.py b/src/SPAdes-3.14.0-Linux/bin/metaspades.py
similarity index 100%
rename from src/SPAdes-3.10.1-Linux/bin/metaspades.py
rename to src/SPAdes-3.14.0-Linux/bin/metaspades.py
diff --git a/src/SPAdes-3.10.1-Linux/bin/plasmidspades.py b/src/SPAdes-3.14.0-Linux/bin/plasmidspades.py
similarity index 100%
rename from src/SPAdes-3.10.1-Linux/bin/plasmidspades.py
rename to src/SPAdes-3.14.0-Linux/bin/plasmidspades.py
diff --git a/src/SPAdes-3.10.1-Linux/bin/rnaspades.py b/src/SPAdes-3.14.0-Linux/bin/rnaspades.py
similarity index 100%
rename from src/SPAdes-3.10.1-Linux/bin/rnaspades.py
rename to src/SPAdes-3.14.0-Linux/bin/rnaspades.py
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-bwa b/src/SPAdes-3.14.0-Linux/bin/spades-bwa
new file mode 100755
index 0000000..77b6113
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-bwa differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-convert-bin-to-fasta b/src/SPAdes-3.14.0-Linux/bin/spades-convert-bin-to-fasta
new file mode 100755
index 0000000..1282753
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-convert-bin-to-fasta differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-core b/src/SPAdes-3.14.0-Linux/bin/spades-core
new file mode 100755
index 0000000..cd80a86
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-core differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-corrector-core b/src/SPAdes-3.14.0-Linux/bin/spades-corrector-core
new file mode 100755
index 0000000..ab75171
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-corrector-core differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-gbuilder b/src/SPAdes-3.14.0-Linux/bin/spades-gbuilder
new file mode 100755
index 0000000..1064402
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-gbuilder differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-gmapper b/src/SPAdes-3.14.0-Linux/bin/spades-gmapper
new file mode 100755
index 0000000..9b57e45
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-gmapper differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-gsimplifier b/src/SPAdes-3.14.0-Linux/bin/spades-gsimplifier
new file mode 100755
index 0000000..30dcaf3
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-gsimplifier differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-hammer b/src/SPAdes-3.14.0-Linux/bin/spades-hammer
new file mode 100755
index 0000000..0b2ce3a
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-hammer differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-ionhammer b/src/SPAdes-3.14.0-Linux/bin/spades-ionhammer
new file mode 100755
index 0000000..5d45572
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-ionhammer differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-kmer-estimating b/src/SPAdes-3.14.0-Linux/bin/spades-kmer-estimating
new file mode 100755
index 0000000..3faa71d
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-kmer-estimating differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-kmercount b/src/SPAdes-3.14.0-Linux/bin/spades-kmercount
new file mode 100755
index 0000000..0229d0f
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-kmercount differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-read-filter b/src/SPAdes-3.14.0-Linux/bin/spades-read-filter
new file mode 100755
index 0000000..d0b2341
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-read-filter differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades-truseq-scfcorrection b/src/SPAdes-3.14.0-Linux/bin/spades-truseq-scfcorrection
new file mode 100755
index 0000000..09ebbd3
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spades-truseq-scfcorrection differ
diff --git a/src/SPAdes-3.14.0-Linux/bin/spades.py b/src/SPAdes-3.14.0-Linux/bin/spades.py
new file mode 100755
index 0000000..ba555e9
--- /dev/null
+++ b/src/SPAdes-3.14.0-Linux/bin/spades.py
@@ -0,0 +1,639 @@
+#!/usr/bin/env python
+
+############################################################################
+# Copyright (c) 2015-2019 Saint Petersburg State University
+# Copyright (c) 2011-2014 Saint Petersburg Academic University
+# All Rights Reserved
+# See file LICENSE for details.
+############################################################################
+
+import logging
+import os
+import shutil
+import platform
+import sys
+from site import addsitedir
+
+import spades_init
+
+spades_init.init()
+spades_home = spades_init.spades_home
+bin_home = spades_init.bin_home
+python_modules_home = spades_init.python_modules_home
+ext_python_modules_home = spades_init.ext_python_modules_home
+spades_version = spades_init.spades_version
+
+import support
+
+support.check_python_version()
+
+addsitedir(ext_python_modules_home)
+if sys.version.startswith("2."):
+ import pyyaml2 as pyyaml
+elif sys.version.startswith("3."):
+ import pyyaml3 as pyyaml
+import options_storage
+options_storage.spades_version = spades_version
+
+import options_parser
+from stages.pipeline import Pipeline
+import executor_local
+import executor_save_yaml
+
+def print_used_values(cfg, log):
+ def print_value(cfg, section, param, pretty_param="", margin=" "):
+ if not pretty_param:
+ pretty_param = param.capitalize().replace('_', ' ')
+ line = margin + pretty_param
+ if param in cfg[section].__dict__:
+ line += ": " + str(cfg[section].__dict__[param])
+ else:
+ if "offset" in param:
+ line += " will be auto-detected"
+ log.info(line)
+
+ log.info("")
+
+ # system info
+ log.info("System information:")
+ try:
+ log.info(" SPAdes version: " + str(spades_version).strip())
+ log.info(" Python version: " + ".".join(map(str, sys.version_info[0:3])))
+ # for more details: '[' + str(sys.version_info) + ']'
+ log.info(" OS: " + platform.platform())
+ # for more details: '[' + str(platform.uname()) + ']'
+ except Exception:
+ log.info(" Problem occurred when getting system information")
+ log.info("")
+
+ # main
+ print_value(cfg, "common", "output_dir", "", "")
+ if ("error_correction" in cfg) and (not "assembly" in cfg):
+ log.info("Mode: ONLY read error correction (without assembling)")
+ elif (not "error_correction" in cfg) and ("assembly" in cfg):
+ log.info("Mode: ONLY assembling (without read error correction)")
+ else:
+ log.info("Mode: read error correction and assembling")
+ if ("common" in cfg) and ("developer_mode" in cfg["common"].__dict__):
+ if cfg["common"].developer_mode:
+ log.info("Debug mode is turned ON")
+ else:
+ log.info("Debug mode is turned OFF")
+ log.info("")
+
+ # dataset
+ if "dataset" in cfg:
+ log.info("Dataset parameters:")
+
+ if options_storage.args.iontorrent:
+ log.info(" IonTorrent data")
+ if options_storage.args.bio:
+ log.info(" BiosyntheticSPAdes mode")
+ if options_storage.args.meta:
+ log.info(" Metagenomic mode")
+ elif options_storage.args.large_genome:
+ log.info(" Large genome mode")
+ elif options_storage.args.truseq_mode:
+ log.info(" Illumina TruSeq mode")
+ elif options_storage.args.isolate:
+ log.info(" Isolate mode")
+ elif options_storage.args.rna:
+ log.info(" RNA-seq mode")
+ elif options_storage.args.single_cell:
+ log.info(" Single-cell mode")
+ else:
+ log.info(" Standard mode")
+        log.info("  For multi-cell/isolate data we recommend using the '--isolate' option;" \
+ " for single-cell MDA data use '--sc';" \
+ " for metagenomic data use '--meta';" \
+ " for RNA-Seq use '--rna'.")
+
+ log.info(" Reads:")
+ dataset_data = pyyaml.load(open(cfg["dataset"].yaml_filename))
+ dataset_data = support.relative2abs_paths(dataset_data, os.path.dirname(cfg["dataset"].yaml_filename))
+ support.pretty_print_reads(dataset_data, log)
+
+ # error correction
+ if "error_correction" in cfg:
+ log.info("Read error correction parameters:")
+ print_value(cfg, "error_correction", "max_iterations", "Iterations")
+ print_value(cfg, "error_correction", "qvoffset", "PHRED offset")
+
+ if cfg["error_correction"].gzip_output:
+ log.info(" Corrected reads will be compressed")
+ else:
+ log.info(" Corrected reads will NOT be compressed")
+
+ # assembly
+ if "assembly" in cfg:
+ log.info("Assembly parameters:")
+ if options_storage.auto_K_allowed():
+ log.info(" k: automatic selection based on read length")
+ else:
+ print_value(cfg, "assembly", "iterative_K", "k")
+ if options_storage.args.plasmid:
+ log.info(" Plasmid mode is turned ON")
+ if cfg["assembly"].disable_rr:
+ log.info(" Repeat resolution is DISABLED")
+ else:
+ log.info(" Repeat resolution is enabled")
+ if options_storage.args.careful:
+ log.info(" Mismatch careful mode is turned ON")
+ else:
+ log.info(" Mismatch careful mode is turned OFF")
+ if "mismatch_corrector" in cfg:
+ log.info(" MismatchCorrector will be used")
+ else:
+ log.info(" MismatchCorrector will be SKIPPED")
+ if cfg["assembly"].cov_cutoff == "off":
+ log.info(" Coverage cutoff is turned OFF")
+ elif cfg["assembly"].cov_cutoff == "auto":
+ log.info(" Coverage cutoff is turned ON and threshold will be auto-detected")
+ else:
+ log.info(" Coverage cutoff is turned ON and threshold is %f" % cfg["assembly"].cov_cutoff)
+
+ log.info("Other parameters:")
+ print_value(cfg, "common", "tmp_dir", "Dir for temp files")
+ print_value(cfg, "common", "max_threads", "Threads")
+ print_value(cfg, "common", "max_memory", "Memory limit (in Gb)", " ")
+ log.info("")
+
+
+def create_logger():
+ log = logging.getLogger("spades")
+ log.setLevel(logging.DEBUG)
+
+
+ console = logging.StreamHandler(sys.stdout)
+ console.setFormatter(logging.Formatter("%(message)s"))
+ console.setLevel(logging.DEBUG)
+ log.addHandler(console)
+ return log
+
+
+def check_cfg_for_partial_run(cfg, partial_run_type="restart-from"):  # restart-from or stop-after
+ if partial_run_type == "restart-from":
+ check_point = options_storage.args.restart_from
+ action = "restart from"
+ verb = "was"
+ elif partial_run_type == "stop-after":
+ check_point = options_storage.args.stop_after
+ action = "stop after"
+ verb = "is"
+ else:
+ return
+
+ if check_point == "ec" and ("error_correction" not in cfg):
+ support.error(
+ "failed to %s 'read error correction' ('%s') because this stage %s not specified!" % (action, check_point, verb))
+ if check_point == "mc" and ("mismatch_corrector" not in cfg):
+ support.error(
+ "failed to %s 'mismatch correction' ('%s') because this stage %s not specified!" % (action, check_point, verb))
+ if check_point == "as" or check_point.startswith('k'):
+ if "assembly" not in cfg:
+ support.error(
+ "failed to %s 'assembling' ('%s') because this stage %s not specified!" % (action, check_point, verb))
+
+def get_options_from_params(params_filename, running_script):
+ command_line = None
+ options = None
+ prev_running_script = None
+ if not os.path.isfile(params_filename):
+ return command_line, options, prev_running_script, \
+ "failed to parse command line of the previous run (%s not found)!" % params_filename
+
+ with open(params_filename) as params:
+ command_line = params.readline().strip()
+ spades_prev_version = None
+ for line in params:
+ if "SPAdes version:" in line:
+ spades_prev_version = line.split("SPAdes version:")[1]
+ break
+
+ if spades_prev_version is None:
+ return command_line, options, prev_running_script, \
+ "failed to parse SPAdes version of the previous run!"
+ if spades_prev_version.strip() != spades_version.strip():
+ return command_line, options, prev_running_script, \
+ "SPAdes version of the previous run (%s) is not equal to the current version of SPAdes (%s)!" \
+ % (spades_prev_version.strip(), spades_version.strip())
+ if "Command line: " not in command_line or '\t' not in command_line:
+ return command_line, options, prev_running_script, "failed to parse executable script of the previous run!"
+ options = command_line.split('\t')[1:]
+ prev_running_script = command_line.split('\t')[0][len("Command line: "):]
+ prev_running_script = os.path.basename(prev_running_script)
+ running_script = os.path.basename(running_script)
+ # we cannot restart/continue spades.py run with metaspades.py/rnaspades.py/etc and vice versa
+ if prev_running_script != running_script:
+        message = "executable script of the previous run (%s) is not equal " \
+                  "to the current executable script (%s)!" % (prev_running_script, running_script)
+ return command_line, options, prev_running_script, message
+ return command_line, options, prev_running_script, ""
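`get_options_from_params` above recovers the previous run's options from the tab-separated first line of `params.txt`. A minimal sketch of just that split, using the line format implied by the code above (the helper name is hypothetical):

```python
import os


def split_params_line(command_line):
    """Split a params.txt command line of the form
    'Command line: /path/spades.py<TAB>--opt<TAB>val...' into
    (script_basename, options); return (None, None) if malformed."""
    if "Command line: " not in command_line or "\t" not in command_line:
        return None, None
    fields = command_line.split("\t")
    # first field carries the 'Command line: ' prefix plus the script path
    script = os.path.basename(fields[0][len("Command line: "):])
    return script, fields[1:]
```

This is why a run started with `spades.py` cannot be continued with `metaspades.py`: the recovered basename is compared against the current script.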
+
+
+# parse options and save all parameters to cfg
+def parse_args(args, log):
+ options, cfg, dataset_data = options_parser.parse_args(log, bin_home, spades_home,
+ secondary_filling=False, restart_from=False)
+
+ command_line = ""
+
+ if options_storage.args.continue_mode:
+ restart_from = options_storage.args.restart_from
+ command_line, options, script_name, err_msg = get_options_from_params(
+ os.path.join(options_storage.args.output_dir, "params.txt"),
+ args[0])
+ if err_msg:
+ support.error(err_msg + " Please restart from the beginning or specify another output directory.")
+ options, cfg, dataset_data = options_parser.parse_args(log, bin_home, spades_home, secondary_filling=True,
+ restart_from=(options_storage.args.restart_from is not None),
+ options=options)
+
+ options_storage.args.continue_mode = True
+ options_storage.args.restart_from = restart_from
+
+ if options_storage.args.restart_from:
+ check_cfg_for_partial_run(cfg, partial_run_type="restart-from")
+
+ if options_storage.args.stop_after:
+ check_cfg_for_partial_run(cfg, partial_run_type="stop-after")
+
+ support.check_single_reads_in_options(log)
+ return cfg, dataset_data, command_line
+
+
+def add_file_to_log(cfg, log):
+ log_filename = os.path.join(cfg["common"].output_dir, "spades.log")
+ if options_storage.args.continue_mode:
+ log_handler = logging.FileHandler(log_filename, mode='a')
+ else:
+ log_handler = logging.FileHandler(log_filename, mode='w')
+ log.addHandler(log_handler)
+ return log_filename, log_handler
+
+
+def get_restart_from_command_line(args):
+ updated_params = ""
+ for i in range(1, len(args)):
+ if not args[i].startswith("-o") and not args[i].startswith("--restart-from") and \
+ args[i - 1] != "-o" and args[i - 1] != "--restart-from":
+ updated_params += "\t" + args[i]
+
+ updated_params = updated_params.strip()
+ restart_from_update_message = "Restart-from=" + options_storage.args.restart_from + "\n"
+ restart_from_update_message += "with updated parameters: " + updated_params
+ return updated_params, restart_from_update_message
+
+
+def get_command_line(args):
+ command = ""
+ for v in args:
+ # substituting relative paths with absolute ones (read paths, output dir path, etc)
+ v, prefix = support.get_option_prefix(v)
+ if v in options_storage.dict_of_rel2abs.keys():
+ v = options_storage.dict_of_rel2abs[v]
+ if prefix:
+ command += prefix + ":"
+ command += v + "\t"
+ return command
+
+
+def print_params(log, log_filename, command_line, args, cfg):
+ if options_storage.args.continue_mode:
+ log.info("\n======= SPAdes pipeline continued. Log can be found here: " + log_filename + "\n")
+ log.info("Restored from " + command_line)
+ log.info("")
+
+ params_filename = os.path.join(cfg["common"].output_dir, "params.txt")
+ params_handler = logging.FileHandler(params_filename, mode='w')
+ log.addHandler(params_handler)
+
+ if not options_storage.args.continue_mode:
+ log.info("Command line: " + get_command_line(args))
+ elif options_storage.args.restart_from:
+ update_params, restart_from_update_message = get_restart_from_command_line(args)
+ command_line += "\t" + update_params
+ log.info(command_line)
+ log.info(restart_from_update_message)
+ else:
+ log.info(command_line)
+
+
+ print_used_values(cfg, log)
+ log.removeHandler(params_handler)
+
+
+def clear_configs(cfg, log, command_before_restart_from, stage_id_before_restart_from):
+ def matches_with_restart_from_arg(stage, restart_from_arg):
+ return stage["short_name"].startswith(restart_from_arg.split(":")[0])
+
+ spades_commands_fpath = os.path.join(cfg["common"].output_dir, "run_spades.yaml")
+ with open(spades_commands_fpath) as stream:
+ old_pipeline = pyyaml.load(stream)
+
+ restart_from_stage_id = None
+ for num in range(len(old_pipeline)):
+ stage = old_pipeline[num]
+ if matches_with_restart_from_arg(stage, options_storage.args.restart_from):
+ restart_from_stage_id = num
+ break
+
+ if command_before_restart_from is not None and \
+ old_pipeline[stage_id_before_restart_from]["short_name"] != command_before_restart_from.short_name:
+        support.error("new and old pipelines differ before %s" % options_storage.args.restart_from, log)
+
+ if command_before_restart_from is None:
+ first_del = 0
+ else:
+ first_del = stage_id_before_restart_from + 1
+
+ if restart_from_stage_id is not None:
+ stage_filename = options_storage.get_stage_filename(restart_from_stage_id, old_pipeline[restart_from_stage_id]["short_name"])
+ if os.path.isfile(stage_filename):
+ os.remove(stage_filename)
+
+ for delete_id in range(first_del, len(old_pipeline)):
+ stage_filename = options_storage.get_stage_filename(delete_id, old_pipeline[delete_id]["short_name"])
+ if os.path.isfile(stage_filename):
+ os.remove(stage_filename)
+
+ cfg_dir = old_pipeline[delete_id]["config_dir"]
+ if cfg_dir != "" and os.path.isdir(os.path.join(cfg["common"].output_dir, cfg_dir)):
+ shutil.rmtree(os.path.join(cfg["common"].output_dir, cfg_dir))
+
+
+def get_first_incomplete_command(filename):
+ with open(filename) as stream:
+ old_pipeline = pyyaml.load(stream)
+
+ first_incomplete_stage_id = 0
+ while first_incomplete_stage_id < len(old_pipeline):
+ stage_filename = options_storage.get_stage_filename(first_incomplete_stage_id, old_pipeline[first_incomplete_stage_id]["short_name"])
+ if not os.path.isfile(stage_filename):
+ return old_pipeline[first_incomplete_stage_id]
+ first_incomplete_stage_id += 1
+
+ return None
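`get_first_incomplete_command` above walks the saved pipeline and returns the first stage whose checkpoint file is missing. The same resume pattern in isolation, where the stage list and the completed set are hypothetical stand-ins for the YAML contents and the on-disk stage files:

```python
def first_incomplete(stages, completed):
    """Return the first stage whose checkpoint is absent, or None
    when every stage has already finished (nothing to resume)."""
    for stage in stages:
        if stage not in completed:
            return stage
    return None
```

Returning `None` is the "pipeline already finished" signal, which the callers above translate into restarting from the last stage.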
+
+
+def get_command_and_stage_id_before_restart_from(draft_commands, cfg, log):
+ restart_from_stage_name = options_storage.args.restart_from.split(":")[0]
+
+ if options_storage.args.restart_from == options_storage.LAST_STAGE:
+        last_command = get_first_incomplete_command(os.path.join(cfg["common"].output_dir, "run_spades.yaml"))
+ if last_command is None:
+ restart_from_stage_name = draft_commands[-1].short_name
+ else:
+ restart_from_stage_name = last_command["short_name"]
+
+ restart_from_stage_id = None
+ for num in range(len(draft_commands)):
+ stage = draft_commands[num]
+ if stage.short_name.startswith(restart_from_stage_name):
+ restart_from_stage_id = num
+ break
+
+ if restart_from_stage_id is None:
+ support.error(
+ "failed to restart from %s because this stage was not specified!" % options_storage.args.restart_from,
+ log)
+
+ if ":" in options_storage.args.restart_from or options_storage.args.restart_from == options_storage.LAST_STAGE:
+ return draft_commands[restart_from_stage_id], restart_from_stage_id
+
+ if restart_from_stage_id > 0:
+ stage_filename = options_storage.get_stage_filename(restart_from_stage_id - 1, draft_commands[restart_from_stage_id - 1].short_name)
+ if not os.path.isfile(stage_filename):
+ support.error(
+ "cannot restart from stage %s: previous stage was not complete." % options_storage.args.restart_from,
+ log)
+ return draft_commands[restart_from_stage_id - 1], restart_from_stage_id - 1
+ return None, None
+
+
+def print_info_about_output_files(cfg, log, output_files):
+ def check_and_report_output_file(output_file_key, message_prefix_text):
+ if os.path.isfile(output_files[output_file_key]):
+ message = message_prefix_text + support.process_spaces(output_files[output_file_key])
+ log.info(message)
+
+ if "error_correction" in cfg and os.path.isdir(
+ os.path.dirname(output_files["corrected_dataset_yaml_filename"])):
+ log.info(" * Corrected reads are in " + support.process_spaces(
+ os.path.dirname(output_files["corrected_dataset_yaml_filename"]) + "/"))
+
+ if "assembly" in cfg:
+ check_and_report_output_file("result_contigs_filename", " * Assembled contigs are in ")
+
+ if options_storage.args.bio:
+ check_and_report_output_file("result_domain_graph_filename", " * Domain graph is in ")
+ check_and_report_output_file("result_gene_clusters_filename", " * Gene cluster sequences are in ")
+            check_and_report_output_file("result_bgc_stats_filename", " * BGC cluster statistics are in ")
+
+ if options_storage.args.rna:
+ check_and_report_output_file("result_transcripts_filename", " * Assembled transcripts are in ")
+ check_and_report_output_file("result_transcripts_paths_filename",
+ " * Paths in the assembly graph corresponding to the transcripts are in ")
+
+ for filtering_type in options_storage.filtering_types:
+ result_filtered_transcripts_filename = os.path.join(cfg["common"].output_dir,
+ filtering_type + "_filtered_" +
+ options_storage.transcripts_name)
+ if os.path.isfile(result_filtered_transcripts_filename):
+ message = " * " + filtering_type.capitalize() + " filtered transcripts are in " + \
+ support.process_spaces(result_filtered_transcripts_filename)
+ log.info(message)
+ else:
+ check_and_report_output_file("result_scaffolds_filename", " * Assembled scaffolds are in ")
+ check_and_report_output_file("result_contigs_paths_filename",
+ " * Paths in the assembly graph corresponding to the contigs are in ")
+ check_and_report_output_file("result_scaffolds_paths_filename",
+ " * Paths in the assembly graph corresponding to the scaffolds are in ")
+
+ check_and_report_output_file("result_assembly_graph_filename", " * Assembly graph is in ")
+ check_and_report_output_file("result_assembly_graph_filename_gfa", " * Assembly graph in GFA format is in ")
+
+
+def get_output_files(cfg):
+ output_files = dict()
+ output_files["corrected_dataset_yaml_filename"] = ""
+ output_files["result_contigs_filename"] = os.path.join(cfg["common"].output_dir, options_storage.contigs_name)
+ output_files["result_scaffolds_filename"] = os.path.join(cfg["common"].output_dir, options_storage.scaffolds_name)
+ output_files["result_assembly_graph_filename"] = os.path.join(cfg["common"].output_dir,
+ options_storage.assembly_graph_name)
+ output_files["result_assembly_graph_filename_gfa"] = os.path.join(cfg["common"].output_dir,
+ options_storage.assembly_graph_name_gfa)
+ output_files["result_contigs_paths_filename"] = os.path.join(cfg["common"].output_dir,
+ options_storage.contigs_paths)
+ output_files["result_scaffolds_paths_filename"] = os.path.join(cfg["common"].output_dir,
+ options_storage.scaffolds_paths)
+ output_files["result_transcripts_filename"] = os.path.join(cfg["common"].output_dir,
+ options_storage.transcripts_name)
+ output_files["result_transcripts_paths_filename"] = os.path.join(cfg["common"].output_dir,
+ options_storage.transcripts_paths)
+ output_files["result_bgc_stats_filename"] = os.path.join(cfg["common"].output_dir, options_storage.bgc_stats_name)
+ output_files["result_domain_graph_filename"] = os.path.join(cfg["common"].output_dir, options_storage.domain_graph_name)
+ output_files["result_gene_clusters_filename"] = os.path.join(cfg["common"].output_dir, options_storage.gene_clusters_name)
+ output_files["truseq_long_reads_file_base"] = os.path.join(cfg["common"].output_dir, "truseq_long_reads")
+ output_files["truseq_long_reads_file"] = output_files["truseq_long_reads_file_base"] + ".fasta"
+ output_files["misc_dir"] = os.path.join(cfg["common"].output_dir, "misc")
+ ### if mismatch correction is enabled then result contigs are copied to misc directory
+ output_files["assembled_contigs_filename"] = os.path.join(output_files["misc_dir"], "assembled_contigs.fasta")
+ output_files["assembled_scaffolds_filename"] = os.path.join(output_files["misc_dir"], "assembled_scaffolds.fasta")
+ return output_files
+
+
+def get_stage(iteration_name):
+ if not options_storage.args.continue_mode:
+ return options_storage.BASE_STAGE
+
+ if options_storage.args.restart_from is not None and \
+ options_storage.args.restart_from != options_storage.LAST_STAGE:
+ if ":" in options_storage.args.restart_from and \
+ iteration_name == options_storage.args.restart_from.split(":")[0]:
+ return options_storage.args.restart_from.split(":")[-1]
+ else:
+ return options_storage.BASE_STAGE
+
+ if get_stage.restart_stage is None:
+ last_command = get_first_incomplete_command(os.path.join(get_stage.cfg["common"].output_dir, "run_spades.yaml"))
+
+ if last_command is not None:
+ get_stage.restart_stage = last_command["short_name"]
+ else:
+ get_stage.restart_stage = "finish"
+
+ if iteration_name == get_stage.restart_stage:
+ return options_storage.LAST_STAGE
+ else:
+ return options_storage.BASE_STAGE
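`get_stage` caches its result on the function object itself (`get_stage.restart_stage`, seeded in `build_pipeline` below). A self-contained illustration of that function-attribute memoization pattern, with hypothetical names:

```python
def expensive_lookup():
    # Compute once, then reuse the value stored on the function object,
    # mirroring how get_stage caches restart_stage between calls.
    if expensive_lookup.cache is None:
        expensive_lookup.calls += 1
        expensive_lookup.cache = "finish"
    return expensive_lookup.cache


# attributes play the role of get_stage.cfg / get_stage.restart_stage
expensive_lookup.cache = None
expensive_lookup.calls = 0
```

Function attributes avoid a module-level global here, at the cost of requiring the caller to initialize them before the first call, exactly as `build_pipeline` does.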
+
+
+def build_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home):
+ from stages import error_correction_stage
+ from stages import spades_stage
+ from stages import postprocessing_stage
+ from stages import correction_stage
+ from stages import check_test_stage
+ from stages import breaking_scaffolds_stage
+ from stages import preprocess_reads_stage
+ from stages import terminating_stage
+
+ preprocess_reads_stage.add_to_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home)
+ error_correction_stage.add_to_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home)
+
+ get_stage.cfg, get_stage.restart_stage = cfg, None
+ spades_stage.add_to_pipeline(pipeline, get_stage, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home)
+ postprocessing_stage.add_to_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home)
+ correction_stage.add_to_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home)
+ check_test_stage.add_to_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home)
+ breaking_scaffolds_stage.add_to_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home)
+ terminating_stage.add_to_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log, bin_home,
+ ext_python_modules_home, python_modules_home)
+
+
+def check_dir_is_empty(dir_name):
+ if dir_name is not None and \
+ os.path.exists(dir_name) and \
+ os.listdir(dir_name):
+        support.warning("output dir is not empty! Please clean the output directory before running.")
+
+
+def init_parser(args):
+ if options_parser.is_first_run():
+ options_storage.first_command_line = args
+ check_dir_is_empty(options_parser.get_output_dir_from_args())
+ else:
+ command_line, options, script, err_msg = get_options_from_params(
+ os.path.join(options_parser.get_output_dir_from_args(), "params.txt"),
+ args[0])
+
+ if err_msg != "":
+ support.error(err_msg)
+
+ options_storage.first_command_line = [script] + options
+
+
+def main(args):
+ os.environ["LC_ALL"] = "C"
+
+ init_parser(args)
+
+ if len(args) == 1:
+ options_parser.usage(spades_version)
+ sys.exit(0)
+
+ pipeline = Pipeline()
+
+ log = create_logger()
+ cfg, dataset_data, command_line = parse_args(args, log)
+ log_filename, log_handler = add_file_to_log(cfg, log)
+ print_params(log, log_filename, command_line, args, cfg)
+
+ if not options_storage.args.continue_mode:
+ log.info("\n======= SPAdes pipeline started. Log can be found here: " + log_filename + "\n")
+
+ support.check_binaries(bin_home, log)
+ try:
+ output_files = get_output_files(cfg)
+ tmp_configs_dir = os.path.join(cfg["common"].output_dir, "configs")
+
+ build_pipeline(pipeline, cfg, output_files, tmp_configs_dir, dataset_data, log,
+ bin_home, ext_python_modules_home, python_modules_home)
+
+ if options_storage.args.restart_from:
+ draft_commands = pipeline.get_commands(cfg)
+ command_before_restart_from, stage_id_before_restart_from = \
+ get_command_and_stage_id_before_restart_from(draft_commands, cfg, log)
+ clear_configs(cfg, log, command_before_restart_from, stage_id_before_restart_from)
+
+ pipeline.generate_configs(cfg, spades_home, tmp_configs_dir)
+ commands = pipeline.get_commands(cfg)
+
+ executor = executor_save_yaml.Executor(log)
+ executor.execute(commands)
+
+ if not options_storage.args.only_generate_config:
+ executor = executor_local.Executor(log)
+ executor.execute(commands)
+ print_info_about_output_files(cfg, log, output_files)
+
+ if not support.log_warnings(log):
+ log.info("\n======= SPAdes pipeline finished.")
+
+ except Exception:
+ exc_type, exc_value, _ = sys.exc_info()
+ if exc_type == SystemExit:
+ sys.exit(exc_value)
+ else:
+ import errno
+ if exc_type == OSError and exc_value.errno == errno.ENOEXEC: # Exec format error
+ support.error("it looks like you are using SPAdes binaries for another platform.\n" +
+ support.get_spades_binaries_info_message())
+ else:
+ log.exception(exc_value)
+ support.error("exception caught: %s" % exc_type, log)
+ except BaseException: # since python 2.5 system-exiting exceptions (e.g. KeyboardInterrupt) are derived from BaseException
+ exc_type, exc_value, _ = sys.exc_info()
+ if exc_type == SystemExit:
+ sys.exit(exc_value)
+ else:
+ log.exception(exc_value)
+ support.error("exception caught: %s" % exc_type, log)
+ finally:
+ log.info("\nSPAdes log can be found here: %s" % log_filename)
+ log.info("")
+ log.info("Thank you for using SPAdes!")
+ log.removeHandler(log_handler)
+
+
+if __name__ == "__main__":
+ main(sys.argv)
\ No newline at end of file
diff --git a/src/SPAdes-3.10.1-Linux/bin/spades_init.py b/src/SPAdes-3.14.0-Linux/bin/spades_init.py
similarity index 52%
rename from src/SPAdes-3.10.1-Linux/bin/spades_init.py
rename to src/SPAdes-3.14.0-Linux/bin/spades_init.py
index 4baebdd..0bcd4e5 100644
--- a/src/SPAdes-3.10.1-Linux/bin/spades_init.py
+++ b/src/SPAdes-3.14.0-Linux/bin/spades_init.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python
############################################################################
-# Copyright (c) 2015 Saint Petersburg State University
+# Copyright (c) 2015-2019 Saint Petersburg State University
# Copyright (c) 2011-2014 Saint Petersburg Academic University
# All Rights Reserved
# See file LICENSE for details.
@@ -11,14 +11,14 @@
import sys
from os.path import abspath, dirname, realpath, join, isfile
-source_dirs = ["", "truspades", "common"]
+source_dirs = ["", "truspades", "common", "executors", "scripts"]
# developers configuration
spades_home = abspath(dirname(realpath(__file__)))
-bin_home = join(spades_home, 'bin')
-python_modules_home = join(spades_home, 'src')
-ext_python_modules_home = join(spades_home, 'ext', 'src', 'python_libs')
-spades_version = ''
+bin_home = join(spades_home, "bin")
+python_modules_home = join(spades_home, "src")
+ext_python_modules_home = join(spades_home, "ext", "src", "python_libs")
+spades_version = ""
def init():
@@ -29,19 +29,19 @@ def init():
global ext_python_modules_home
# users configuration (spades_init.py and spades binary are in the same directory)
- if isfile(os.path.join(spades_home, 'spades')):
+ if isfile(os.path.join(spades_home, "spades-core")):
install_prefix = dirname(spades_home)
- bin_home = join(install_prefix, 'bin')
- spades_home = join(install_prefix, 'share', 'spades')
+ bin_home = join(install_prefix, "bin")
+ spades_home = join(install_prefix, "share", "spades")
python_modules_home = spades_home
ext_python_modules_home = spades_home
for dir in source_dirs:
- sys.path.append(join(python_modules_home, 'spades_pipeline', dir))
+ sys.path.append(join(python_modules_home, "spades_pipeline", dir))
- spades_version = open(join(spades_home, 'VERSION'), 'r').readline().strip()
+ spades_version = open(join(spades_home, "VERSION"), 'r').readline().strip()
-if __name__ == '__main__':
- spades_py_path = join(dirname(realpath(__file__)), 'spades.py')
- sys.stderr.write('Please use ' + spades_py_path + ' for running SPAdes genome assembler\n')
\ No newline at end of file
+if __name__ == "__main__":
+ spades_py_path = join(dirname(realpath(__file__)), "spades.py")
+ sys.stderr.write("Please use " + spades_py_path + " for running SPAdes genome assembler\n")
diff --git a/src/SPAdes-3.14.0-Linux/bin/spaligner b/src/SPAdes-3.14.0-Linux/bin/spaligner
new file mode 100755
index 0000000..9132ffe
Binary files /dev/null and b/src/SPAdes-3.14.0-Linux/bin/spaligner differ
diff --git a/src/SPAdes-3.10.1-Linux/bin/truspades.py b/src/SPAdes-3.14.0-Linux/bin/truspades.py
similarity index 97%
rename from src/SPAdes-3.10.1-Linux/bin/truspades.py
rename to src/SPAdes-3.14.0-Linux/bin/truspades.py
index dbca5d3..ecf4e41 100755
--- a/src/SPAdes-3.10.1-Linux/bin/truspades.py
+++ b/src/SPAdes-3.14.0-Linux/bin/truspades.py
@@ -17,11 +17,12 @@
spades_home = os.path.abspath(os.path.dirname(os.path.realpath(__file__)))
spades_version = spades_init.spades_version
-import SeqIO # TODO: add to ext/scr/python_libs
-import parallel_launcher
+import support
+from common import SeqIO # TODO: add to ext/scr/python_libs
+from common import parallel_launcher
+# the next modules are from spades_pipeline/truspades/ (can't write "from truspades import ..." since we are in truspades.py)
import reference_construction
import launch_options
-import support
import barcode_extraction
def generate_dataset(input_dirs, log):
diff --git a/src/SPAdes-3.10.1-Linux/share/spades/GPLv2.txt b/src/SPAdes-3.14.0-Linux/share/spades/GPLv2.txt
similarity index 100%
rename from src/SPAdes-3.10.1-Linux/share/spades/GPLv2.txt
rename to src/SPAdes-3.14.0-Linux/share/spades/GPLv2.txt
diff --git a/src/SPAdes-3.10.1-Linux/share/spades/LICENSE b/src/SPAdes-3.14.0-Linux/share/spades/LICENSE
similarity index 78%
rename from src/SPAdes-3.10.1-Linux/share/spades/LICENSE
rename to src/SPAdes-3.14.0-Linux/share/spades/LICENSE
index 0438b8d..a26d33b 100644
--- a/src/SPAdes-3.10.1-Linux/share/spades/LICENSE
+++ b/src/SPAdes-3.14.0-Linux/share/spades/LICENSE
@@ -1,5 +1,5 @@
SPADES: SAINT-PETERSBURG GENOME ASSEMBLER
-Copyright (c) 2015-2017 Saint Petersburg State University
+Copyright (c) 2015-2019 Saint Petersburg State University
Copyright (c) 2011-2014 Saint Petersburg Academic University
SPAdes is free software; you can redistribute it and/or modify
@@ -17,29 +17,41 @@ with this program; if not, write to the Free Software Foundation, Inc.,
-------------------------------------------------------------------------------
+SPAdes
+Genome assembler for single-cell and isolates data sets
+Version: see VERSION
+
+Developed in Center for Algorithmic Biotechnology, Institute of Translational Biomedicine, St. Petersburg State University.
+Developed in Algorithmic Biology Lab of St. Petersburg Academic University of the Russian Academy of Sciences.
+
Current SPAdes contributors:
Dmitry Antipov,
- Anton Bankevich,
- Yuriy Gorshkov,
+ Elena Bushmanova,
Alexey Gurevich,
Anton Korobeynikov,
+ Olga Kunyavskaya,
Dmitriy Meleshko,
Sergey Nurk,
Andrey Prjibelski,
- Yana Safonova,
+ Alexander Shlemov,
+ Ivan Tolstoganov,
Alla Lapidus and
Pavel Pevzner
Also contributed:
Max Alekseyev,
+ Anton Bankevich,
Mikhail Dvorkin,
+ Vasisliy Ershov,
+ Yuriy Gorshkov,
Alexander Kulikov,
Valery Lesin,
Sergey Nikolenko,
Son Pham,
Alexey Pyshkin,
+ Yana Safonova,
Vladislav Saveliev,
Alexander Sirotkin,
Yakov Sirotkin,
@@ -48,9 +60,10 @@ Also contributed:
Irina Vasilinetc,
Nikolay Vyahhi
-Contacts:
- http://cab.spbu.ru/software/spades/
- spades.support@cab.spbu.ru
+Installation instructions and manual can be found on the website:
+http://cab.spbu.ru/software/spades/
+
+Address for communication: spades.support@cab.spbu.ru
References:
diff --git a/src/SPAdes-3.14.0-Linux/share/spades/README.md b/src/SPAdes-3.14.0-Linux/share/spades/README.md
new file mode 100644
index 0000000..ad0026c
--- /dev/null
+++ b/src/SPAdes-3.14.0-Linux/share/spades/README.md
@@ -0,0 +1,1209 @@
+__SPAdes 3.14.0 Manual__
+
+
+1. [About SPAdes](#sec1)
+&nbsp;&nbsp;&nbsp;&nbsp;1.1. [Supported data types](#sec1.1)
+&nbsp;&nbsp;&nbsp;&nbsp;1.2. [SPAdes pipeline](#sec1.2)
+&nbsp;&nbsp;&nbsp;&nbsp;1.3. [SPAdes performance](#sec1.3)
+2. [Installation](#sec2)
+&nbsp;&nbsp;&nbsp;&nbsp;2.1. [Downloading SPAdes Linux binaries](#sec2.1)
+&nbsp;&nbsp;&nbsp;&nbsp;2.2. [Downloading SPAdes binaries for Mac](#sec2.2)
+&nbsp;&nbsp;&nbsp;&nbsp;2.3. [Downloading and compiling SPAdes source code](#sec2.3)
+&nbsp;&nbsp;&nbsp;&nbsp;2.4. [Verifying your installation](#sec2.4)
+3. [Running SPAdes](#sec3)
+&nbsp;&nbsp;&nbsp;&nbsp;3.1. [SPAdes input](#sec3.1)
+&nbsp;&nbsp;&nbsp;&nbsp;3.2. [SPAdes command line options](#sec3.2)
+&nbsp;&nbsp;&nbsp;&nbsp;3.3. [Assembling IonTorrent reads](#sec3.3)
+&nbsp;&nbsp;&nbsp;&nbsp;3.4. [Assembling long Illumina paired reads (2x150 and 2x250)](#sec3.4)
+&nbsp;&nbsp;&nbsp;&nbsp;3.5. [SPAdes output](#sec3.5)
+&nbsp;&nbsp;&nbsp;&nbsp;3.6. [plasmidSPAdes output](#sec3.6)
+&nbsp;&nbsp;&nbsp;&nbsp;3.7. [biosyntheticSPAdes output](#sec3.7)
+&nbsp;&nbsp;&nbsp;&nbsp;3.8. [Assembly evaluation](#sec3.8)
+4. [Stand-alone binaries released within SPAdes package](#sec4)
+&nbsp;&nbsp;&nbsp;&nbsp;4.1. [k-mer counting](#sec4.1)
+&nbsp;&nbsp;&nbsp;&nbsp;4.2. [k-mer coverage read filter](#sec4.2)
+&nbsp;&nbsp;&nbsp;&nbsp;4.3. [k-mer cardinality estimating](#sec4.3)
+&nbsp;&nbsp;&nbsp;&nbsp;4.4. [Graph construction](#sec4.4)
+&nbsp;&nbsp;&nbsp;&nbsp;4.5. [Long read to graph alignment](#sec4.5)
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.5.1. [hybridSPAdes aligner](#sec4.5.1)
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.5.2. [SPAligner](#sec4.5.2)
+6. [Feedback and bug reports](#sec6)
+
+
+# About SPAdes
+
+SPAdes – St. Petersburg genome assembler – is an assembly toolkit containing various assembly pipelines. This manual will help you to install and run SPAdes. SPAdes version 3.14.0 was released under GPLv2 on December 27, 2019 and can be downloaded from <http://cab.spbu.ru/software/spades/>. <a name="sec1.1"></a>
+
+
+## Supported data types
+
+The current version of SPAdes works with Illumina or IonTorrent reads and is capable of providing hybrid assemblies using PacBio, Oxford Nanopore and Sanger reads. You can also provide additional contigs that will be used as long reads.
+
+Version 3.14.0 of SPAdes supports paired-end reads, mate-pairs and unpaired reads. SPAdes can take as input several paired-end and mate-pair libraries simultaneously. Note, that SPAdes was initially designed for small genomes. It was tested on bacterial (both single-cell MDA and standard isolates), fungal and other small genomes. SPAdes is not intended for larger genomes (e.g. mammalian size genomes); if you nevertheless use it for such data, you do so at your own risk.
+
+If you have high-coverage data for bacterial/viral isolate or multi-cell organism, we highly recommend using the [`--isolate`](#isolate) option.
+
+SPAdes 3.14.0 includes the following additional pipelines:
+- metaSPAdes – a pipeline for metagenomic data sets (see [metaSPAdes options](#meta)).
+- plasmidSPAdes – a pipeline for extracting and assembling plasmids from WGS data sets (see [plasmidSPAdes options](#plasmid)).
+- rnaSPAdes – a *de novo* transcriptome assembler from RNA-Seq data (see [rnaSPAdes manual](assembler/rnaspades_manual.html)).
+- truSPAdes – a module for TruSeq barcode assembly (see [truSPAdes manual](assembler/truspades_manual.html)).
+- biosyntheticSPAdes – a module for biosynthetic gene cluster assembly with paired-end reads (see [biosyntheticSPAdes options](#biosynthetic)).
+
+In addition, we provide several stand-alone binaries with relatively simple command-line interface: [k-mer counting](#sec4.1) (`spades-kmercount`), [assembly graph construction](#sec4.4) (`spades-gbuilder`) and [long read to graph aligner](#sec4.5) (`spades-gmapper`). To learn options of these tools you can either run them without any parameters or read [this section](#sec4).
+
+<a name="sec1.2"></a>
+
+
+## SPAdes pipeline
+
+SPAdes comes in several separate modules:
+
+- [BayesHammer](http://bioinf.spbau.ru/en/spades/bayeshammer) – read error correction tool for Illumina reads, which works well on both single-cell and standard data sets.
+- IonHammer – read error correction tool for IonTorrent data, which also works on both types of data.
+- SPAdes – iterative short-read genome assembly module; values of K are selected automatically based on the read length and data set type.
+- MismatchCorrector – a tool which improves mismatch and short indel rates in resulting contigs and scaffolds; this module uses the [BWA](http://bio-bwa.sourceforge.net) tool \[[Li H. and Durbin R., 2009](http://www.ncbi.nlm.nih.gov/pubmed/19451168)\]; MismatchCorrector is turned off by default, but we recommend turning it on (see [SPAdes options section](#correctoropt)).
+
+We recommend running SPAdes with BayesHammer/IonHammer to obtain high-quality assemblies. However, if you use your own read correction tool, it is possible to turn the error correction module off. It is also possible to use only the read error correction stage, if you wish to use another assembler. See the [SPAdes options section](#pipelineopt). <a name="sec1.3"></a>
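The module split above maps directly onto the pipeline flags documented in the options section; a hedged sketch of both partial runs (the read files and output directories are hypothetical):

``` bash

    # run only the read error correction module (BayesHammer)
    spades.py --only-error-correction -1 R1.fastq -2 R2.fastq -o ec_only_out

    # assemble reads corrected elsewhere, skipping the error correction module
    spades.py --only-assembler -1 corr_R1.fastq.gz -2 corr_R2.fastq.gz -o assembly_out
```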
+
+
+## SPAdes performance
+
+In this section we give approximate data about SPAdes performance on two data sets:
+
+- [Standard isolate *E. coli*](http://spades.bioinf.spbau.ru/spades_test_datasets/ecoli_mc/); 6.2Gb, 28M reads, 2x100bp, insert size ~ 215bp
+- [MDA single-cell *E. coli*](http://spades.bioinf.spbau.ru/spades_test_datasets/ecoli_sc/); 6.3 Gb, 29M reads, 2x100bp, insert size ~ 270bp
+
+We ran SPAdes with default parameters using 16 threads on a server with Intel Xeon 2.27GHz processors. BayesHammer runs in approximately half an hour and takes up to 8Gb of RAM to perform read error correction on each data set. Assembly takes about 10 minutes for the *E. coli* isolate data set and 20 minutes for the *E. coli* single-cell data set. Both data sets require about 8Gb of RAM (see notes below). MismatchCorrector runs for about 15 minutes on both data sets, and requires less than 2Gb of RAM. All modules also require additional disk space for storing results (corrected reads, contigs, etc) and temporary files. See the table below for more precise values.
+
+
+| Stage             | Time (isolate) | Peak RAM, Gb (isolate) | Additional disk, Gb (isolate) | Time (single-cell) | Peak RAM, Gb (single-cell) | Additional disk, Gb (single-cell) |
+| ----------------- | -------------- | ---------------------- | ----------------------------- | ------------------ | -------------------------- | --------------------------------- |
+| BayesHammer       | 24m            | 7.8                    | 8.5                           | 25m                | 7.7                        | 8.6                               |
+| SPAdes            | 8m             | 8.4                    | 1.4                           | 10m                | 8.3                        | 2.1                               |
+| MismatchCorrector | 10m            | 1.7                    | 21.4                          | 12m                | 1.8                        | 22.4                              |
+| Whole pipeline    | 42m            | 8.4                    | 23.9                          | 47m                | 8.3                        | 25.1                              |
+
+Notes:
+
+- Running SPAdes without preliminary read error correction (e.g. without BayesHammer or IonHammer) will likely require more time and memory.
+- Each module removes its temporary files as soon as it finishes.
+- SPAdes uses 512 Mb per thread for buffers, which results in higher memory consumption. If you set memory limit manually, SPAdes will use smaller buffers and thus less RAM.
+- Performance statistics are given for SPAdes version 3.14.0.
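The buffer note above translates into a quick back-of-the-envelope memory estimate; a sketch assuming the default 512 Mb per thread stated above:

``` bash

    # 16 threads x 512 Mb of per-thread buffers, expressed in Gb
    threads=16
    buffer_mb=512
    echo "$(( threads * buffer_mb / 1024 )) Gb of RAM used for buffers alone"
```

With 16 threads this comes to 8 Gb, which is why setting a memory limit (and thus smaller buffers) lowers peak RAM usage.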
+
+
+# Installation
+
+
+SPAdes requires a 64-bit Linux system or Mac OS and Python (supported versions are Python2: 2.4–2.7, and Python3: 3.2 and higher) to be pre-installed on it. To obtain SPAdes you can either download binaries or download source code and compile it yourself.
+
+In case of successful installation the following files will be placed in the `bin` directory:
+
+- `spades.py` (main executable script)
+- `metaspades.py` (main executable script for [metaSPAdes](#meta))
+- `plasmidspades.py` (main executable script for [plasmidSPAdes](#plasmid))
+- `rnaspades.py` (main executable script for [rnaSPAdes](assembler/rnaspades_manual.html))
+- `truspades.py` (main executable script for [truSPAdes](assembler/truspades_manual.html))
+- `spades-core` (assembly module)
+- `spades-gbuilder` (standalone graph builder application)
+- `spades-gmapper` (standalone long read to graph aligner)
+- `spades-kmercount` (standalone k-mer counting application)
+- `spades-hammer` (read error correcting module for Illumina reads)
+- `spades-ionhammer` (read error correcting module for IonTorrent reads)
+- `spades-bwa` ([BWA](http://bio-bwa.sourceforge.net) alignment module which is required for mismatch correction)
+- `spades-corrector-core` (mismatch correction module)
+- `spades-truseq-scfcorrection` (executable used in truSPAdes pipeline)
+
+
+## Downloading SPAdes Linux binaries
+
+To download [SPAdes Linux binaries](http://cab.spbu.ru/files/release3.14.0/SPAdes-3.14.0-Linux.tar.gz) and extract them, go to the directory in which you wish SPAdes to be installed and run:
+
+``` bash
+
+ wget http://cab.spbu.ru/files/release3.14.0/SPAdes-3.14.0-Linux.tar.gz
+ tar -xzf SPAdes-3.14.0-Linux.tar.gz
+ cd SPAdes-3.14.0-Linux/bin/
+```
+
+In this case you do not need to run any installation scripts – SPAdes is ready to use. We also suggest adding SPAdes installation directory to the `PATH` variable. <a name="sec2.2"></a>
+
+Note, that pre-built binaries do not work on new Linux kernels.
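The `PATH` suggestion above can be done as follows (the unpack location `/opt` is hypothetical; use wherever you extracted the tarball):

``` bash

    # make spades.py and friends visible without typing full paths
    export PATH=/opt/SPAdes-3.14.0-Linux/bin:$PATH
```

Add the line to your shell profile (e.g. `~/.bashrc`) to make it persistent.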
+
+
+## Downloading SPAdes binaries for Mac
+
+To obtain [SPAdes binaries for Mac](http://cab.spbu.ru/files/release3.14.0/SPAdes-3.14.0-Darwin.tar.gz), go to the directory in which you wish SPAdes to be installed and run:
+
+``` bash
+
+ curl http://cab.spbu.ru/files/release3.14.0/SPAdes-3.14.0-Darwin.tar.gz -o SPAdes-3.14.0-Darwin.tar.gz
+ tar -zxf SPAdes-3.14.0-Darwin.tar.gz
+ cd SPAdes-3.14.0-Darwin/bin/
+```
+
+Just as in Linux, SPAdes is ready to use and no further installation steps are required. We also suggest adding SPAdes installation directory to the `PATH` variable. <a name="sec2.3"></a>
+
+
+## Downloading and compiling SPAdes source code
+
+If you wish to compile SPAdes by yourself you will need the following libraries to be pre-installed:
+
+- g++ (version 5.3.1 or higher)
+- cmake (version 2.8.12 or higher)
+- zlib
+- libbz2
+
+If you meet these requirements, you can download the [SPAdes source code](http://cab.spbu.ru/files/release3.14.0/SPAdes-3.14.0.tar.gz):
+
+``` bash
+
+ wget http://cab.spbu.ru/files/release3.14.0/SPAdes-3.14.0.tar.gz
+ tar -xzf SPAdes-3.14.0.tar.gz
+ cd SPAdes-3.14.0
+```
+
+and build it with the following script:
+
+``` bash
+
+ ./spades_compile.sh
+```
+
+SPAdes will be built in the directory `./bin`. If you wish to install SPAdes into another directory, you can specify the full path of the destination folder by running the following command in `bash` or `sh`:
+
+``` bash
+
+    PREFIX=<destination_dir> ./spades_compile.sh
+```
+
+for example:
+
+``` bash
+
+ PREFIX=/usr/local ./spades_compile.sh
+```
+
+which will install SPAdes into `/usr/local/bin`.
+
+After installation you will get the same files (listed above) in the `./bin` directory (or `<destination_dir>/bin` if you specified PREFIX). We also suggest adding SPAdes installation directory to the `PATH` variable. <a name="sec2.4"></a>
+
+
+## Verifying your installation
+
+For testing purposes, SPAdes comes with a toy data set (reads that align to first 1000 bp of *E. coli*). To try SPAdes on this data set, run:
+
+``` bash
+
+    <spades installation dir>/spades.py --test
+```
+
+If you added SPAdes installation directory to the `PATH` variable, you can run:
+
+``` bash
+
+ spades.py --test
+```
+
+For simplicity, we further assume that SPAdes installation directory is added to the `PATH` variable.
+
+If the installation is successful, you will find the following information at the end of the log:
+
+``` plain
+
+===== Assembling finished. Used k-mer sizes: 21, 33, 55
+
+ * Corrected reads are in spades_test/corrected/
+ * Assembled contigs are in spades_test/contigs.fasta
+ * Assembled scaffolds are in spades_test/scaffolds.fasta
+ * Assembly graph is in spades_test/assembly_graph.fastg
+ * Assembly graph in GFA format is in spades_test/assembly_graph.gfa
+ * Paths in the assembly graph corresponding to the contigs are in spades_test/contigs.paths
+ * Paths in the assembly graph corresponding to the scaffolds are in spades_test/scaffolds.paths
+
+======= SPAdes pipeline finished.
+
+========= TEST PASSED CORRECTLY.
+
+SPAdes log can be found here: spades_test/spades.log
+
+Thank you for using SPAdes!
+```
+
+
+# Running SPAdes
+
+
+## SPAdes input
+
+SPAdes takes as input paired-end reads, mate-pairs and single (unpaired) reads in FASTA and FASTQ. For IonTorrent data SPAdes also supports unpaired reads in unmapped BAM format (like the one produced by Torrent Server). However, in order to run read error correction, reads should be in FASTQ or BAM format. Sanger, Oxford Nanopore and PacBio CLR reads can be provided in both formats since SPAdes does not run error correction for these types of data.
+
+To run SPAdes 3.14.0 you need at least one library of the following types:
+
+- Illumina paired-end/high-quality mate-pairs/unpaired reads
+- IonTorrent paired-end/high-quality mate-pairs/unpaired reads
+- PacBio CCS reads
+
+Illumina and IonTorrent libraries should not be assembled together. All other types of input data are compatible. SPAdes should not be used if only PacBio CLR, Oxford Nanopore, Sanger reads or additional contigs are available.
+
+SPAdes supports mate-pair only assembly. However, we recommend using only high-quality mate-pair libraries in this case (e.g. those that do not have a paired-end part). We tested the mate-pair only pipeline using Illumina Nextera mate-pairs. See more [here](#hqmp).
+
+The current version of SPAdes also supports Lucigen NxSeq® Long Mate Pair libraries, which always have forward-reverse orientation. If you wish to use Lucigen NxSeq® Long Mate Pair reads, you will need the Python [regex library](https://pypi.python.org/pypi/regex) to be pre-installed on your machine. You can install it with the Python [pip-installer](http://www.pip-installer.org/):
+
+``` bash
+
+ pip install regex
+```
+
+or with the [Easy Install](http://peak.telecommunity.com/DevCenter/EasyInstall) Python module:
+
+``` bash
+
+ easy_install regex
+```
+
+Notes:
+
+- It is strongly suggested to provide multiple paired-end and mate-pair libraries according to their insert size (from smallest to longest).
+- It is not recommended to run SPAdes on PacBio reads with low coverage (less than 5).
+- We suggest not to run SPAdes on PacBio reads for large genomes.
+- SPAdes accepts gzip-compressed files.
+
+### Read-pair libraries
+
+By using command line interface, you can specify up to nine different paired-end libraries, up to nine mate-pair libraries and also up to nine high-quality mate-pair ones. If you wish to use more, you can use [YAML data set file](#yaml). We further refer to paired-end and mate-pair libraries simply as read-pair libraries.
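For reference, the YAML data set file mentioned above lists one entry per library; a minimal hedged sketch (the file names are hypothetical, and the authoritative schema is given in the YAML section of the manual):

``` plain

    [
      {
        orientation: "fr",
        type: "paired-end",
        left reads:  ["lib1_R1.fastq", "lib2_R1.fastq"],
        right reads: ["lib1_R2.fastq", "lib2_R2.fastq"]
      },
      {
        type: "single",
        single reads: ["unpaired.fastq"]
      }
    ]
```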
+
+By default, SPAdes assumes that paired-end and high-quality mate-pair reads have forward-reverse (fr) orientation and usual mate-pairs have reverse-forward (rf) orientation. However, different orientations can be set for any library by using SPAdes options.
+
+To distinguish reads in pairs we refer to them as left and right reads. For forward-reverse orientation, the forward reads correspond to the left reads and the reverse reads, to the right. Similarly, in reverse-forward orientation left and right reads correspond to reverse and forward reads, respectively, etc.
+
+Each read-pair library can be stored in several files or several pairs of files. Paired reads can be organized in two different ways:
+
+- In file pairs. In this case left and right reads are placed in different files and go in the same order in respective files.
+- In interleaved files. In this case, the reads are interlaced, so that each right read goes after the corresponding paired left read.
+
+For example, Illumina produces paired-end reads in two files: `R1.fastq` and `R2.fastq`. If you choose to store reads in file pairs make sure that for every read from `R1.fastq` the corresponding paired read from `R2.fastq` is placed in the respective paired file on the same line number. If you choose to use interleaved files, every read from `R1.fastq` should be followed by the corresponding paired read from `R2.fastq`.
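The two layouts can be converted into one another with standard tools; a hedged sketch that builds toy paired files and interleaves them (the read names are hypothetical, and records are assumed to be plain 4-line FASTQ):

``` bash

    # toy paired files: two records each, in matching order
    printf '@r1/1\nACGT\n+\nIIII\n@r2/1\nGGGG\n+\nIIII\n' > R1.fastq
    printf '@r1/2\nTGCA\n+\nIIII\n@r2/2\nCCCC\n+\nIIII\n' > R2.fastq

    # after each 4-line record of R1, emit the matching record of R2
    awk '{ r1 = $0
           for (i = 0; i < 3; i++) { getline line; r1 = r1 "\n" line }
           print r1
           r2 = ""
           for (i = 0; i < 4; i++) { getline line < "R2.fastq"
                                     r2 = r2 (i ? "\n" : "") line }
           print r2 }' R1.fastq > interleaved.fastq
```

The resulting `interleaved.fastq` alternates left and right records, which is the layout SPAdes expects for interleaved input.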
+
+If adapter and/or quality trimming software has been used prior to assembly, files with the orphan reads can be provided as "single read files" for the corresponding read-pair library.
+
+
+If you have merged some of the reads from your paired-end (not mate-pair or high-quality mate-pair) library (using tools such as [BBMerge](https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/bbmerge-guide/) or [STORM](https://bitbucket.org/yaoornl/align_test/overview)), you should provide the file with the resulting reads as a "merged read file" for the corresponding library.
+Note that non-empty files with the remaining unmerged left/right reads (separate or interlaced) must be provided for the same library (for SPAdes to correctly detect the original read length).
+
+In the unlikely case that some of the reads from your mate-pair (or high-quality mate-pair) library are "merged", you should provide the resulting reads as a SEPARATE single-read library.
+
+### Unpaired (single-read) libraries
+
+By using command line interface, you can specify up to nine different single-read libraries. To input more libraries, you can use [YAML data set file](#yaml).
+
+Single-read libraries are assumed to have high quality and a reasonable coverage. For example, you can provide PacBio CCS reads as a single-read library.
+
+Note, that you should not specify PacBio CLR, Sanger reads or additional contigs as single-read libraries, each of them has a separate [option](#inputdata).
+
+
+### PacBio and Oxford Nanopore reads
+
+SPAdes can take as an input an unlimited number of PacBio and Oxford Nanopore libraries.
+
+PacBio CLR and Oxford Nanopore reads are used for hybrid assemblies (e.g. with Illumina or IonTorrent). There is no need to pre-correct this kind of data. SPAdes will use PacBio CLR and Oxford Nanopore reads for gap closure and repeat resolution.
+
+For PacBio you just need to have filtered subreads in FASTQ/FASTA format. Provide these filtered subreads using `--pacbio` option. Oxford Nanopore reads are provided with `--nanopore` option.
+
+PacBio CCS/Reads of Insert reads or pre-corrected (using third-party software) PacBio CLR / Oxford Nanopore reads can be simply provided as single reads to SPAdes.
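Putting the options above together, a hybrid run combining an Illumina paired-end library with uncorrected Oxford Nanopore reads might look like this (a sketch; the file names and output directory are hypothetical):

``` bash

    spades.py -1 illumina_R1.fastq -2 illumina_R2.fastq \
              --nanopore ont_reads.fastq -o hybrid_assembly
```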
+
+### Additional contigs
+
+In case you have contigs of the same genome generated by other assembler(s) and you wish to merge them into the SPAdes assembly, you can specify additional contigs using `--trusted-contigs` or `--untrusted-contigs`. The first option is used when high-quality contigs are available. These contigs will be used for graph construction, gap closure and repeat resolution. The second option is used for less reliable contigs that may have more errors or contigs of unknown quality. These contigs will be used only for gap closure and repeat resolution. The number of additional contigs is unlimited.
+
+Note, that SPAdes does not perform assembly using genomes of closely-related species. Only contigs of the same genome should be specified.
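For example, folding a high-quality draft of the same genome into a run could be sketched as (hypothetical file names):

``` bash

    spades.py -1 R1.fastq -2 R2.fastq \
              --trusted-contigs draft_contigs.fasta -o assembly_with_contigs
```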
+
+<a name="sec3.2"></a>
+
+## SPAdes command line options
+
+To run SPAdes from the command line, type
+
+``` bash
+
+    spades.py [options] -o <output_dir>
+```
+
+Note that we assume that SPAdes installation directory is added to the `PATH` variable (otherwise provide the full path to the SPAdes executable: `<spades installation dir>/spades.py`).
+
+
+### Basic options
+
+`-o <output_dir>`
+&nbsp;&nbsp;&nbsp;&nbsp;Specify the output directory. Required option.
+
+<a name="isolate"></a>
+
+
+`--isolate`
+&nbsp;&nbsp;&nbsp;&nbsp;This flag is highly recommended for high-coverage isolate and multi-cell data; improves the assembly quality and running time.
+&nbsp;&nbsp;&nbsp;&nbsp;Not compatible with `--only-error-correction` or `--careful` options.
+
+
+<a name="sc"></a>
+`--sc`
+&nbsp;&nbsp;&nbsp;&nbsp;This flag is required for MDA (single-cell) data.
+
+<a name="meta"></a>
+
+
+`--meta`&nbsp;(same as `metaspades.py`)
+&nbsp;&nbsp;&nbsp;&nbsp;This flag is recommended when assembling metagenomic data sets (runs metaSPAdes, see [paper](https://genome.cshlp.org/content/27/5/824.short) for more details). Currently metaSPAdes supports only a **_single_** short-read library which has to be **_paired-end_** (we hope to remove this restriction soon). In addition, you can provide long reads (e.g. using `--pacbio` or `--nanopore` options), but hybrid assembly for metagenomes remains an experimental pipeline and optimal performance is not guaranteed. It does not support [careful mode](#correctoropt) (mismatch correction is not available). In addition, you cannot specify coverage cutoff for metaSPAdes. Note that metaSPAdes might be very sensitive to presence of the technical sequences remaining in the data (most notably adapter readthroughs), please run quality control and pre-process your data accordingly.
+
+<a name="plasmid"></a>
+
+
+`--plasmid`&nbsp;(same as `plasmidspades.py`)
+&nbsp;&nbsp;&nbsp;&nbsp;This flag is required when assembling only plasmids from WGS data sets (runs plasmidSPAdes, see [paper](http://biorxiv.org/content/early/2016/04/20/048942) for the algorithm details). Note, that plasmidSPAdes is not compatible with [metaSPAdes](#meta) and [single-cell mode](#sc). Additionally, we do not recommend running plasmidSPAdes on more than one library. See [section 3.6](#sec3.6) for plasmidSPAdes output details.
+
+<a name="biosynthetic"></a>
+
+
+`--bio`
+&nbsp;&nbsp;&nbsp;&nbsp;This flag is required when assembling only non-ribosomal and polyketide gene clusters from WGS data sets (runs biosyntheticSPAdes, see [paper](https://genome.cshlp.org/content/early/2019/06/03/gr.243477.118?top=1) for the algorithm details). biosyntheticSPAdes is supposed to work on isolate or metagenomic WGS datasets. Note, that biosyntheticSPAdes is not compatible with any other modes. See [section 3.7](#sec3.7) for biosyntheticSPAdes output details.
+
+<a name="rna"></a>
+
+
+`--rna`&nbsp;(same as `rnaspades.py`)
+&nbsp;&nbsp;&nbsp;&nbsp;This flag should be used when assembling RNA-Seq data sets (runs rnaSPAdes). To learn more, see [rnaSPAdes manual](assembler/rnaspades_manual.html).
+&nbsp;&nbsp;&nbsp;&nbsp;Not compatible with `--only-error-correction` or `--careful` options.
+
+
+`--iontorrent`
+&nbsp;&nbsp;&nbsp;&nbsp;This flag is required when assembling IonTorrent data. Allows BAM files as input. Carefully read [section 3.3](#sec3.3) before using this option.
+
+`--test`
+&nbsp;&nbsp;&nbsp;&nbsp;Runs SPAdes on the toy data set; see [section 2.4](#sec2.4).
+
+`-h` (or `--help`)
+&nbsp;&nbsp;&nbsp;&nbsp;Prints help.
+
+`-v` (or `--version`)
+&nbsp;&nbsp;&nbsp;&nbsp;Prints SPAdes version.
+
+<a name="pipelineopt"></a>
+
+### Pipeline options
+
+`--only-error-correction`
+&nbsp;&nbsp;&nbsp;&nbsp;Performs read error correction only.
+
+`--only-assembler`
+&nbsp;&nbsp;&nbsp;&nbsp;Runs assembly module only.
+
+<a name="correctoropt"></a>
+
+`--careful`
+&nbsp;&nbsp;&nbsp;&nbsp;Tries to reduce the number of mismatches and short indels. Also runs MismatchCorrector – a post-processing tool, which uses the [BWA](http://bio-bwa.sourceforge.net) tool (comes with SPAdes). This option is recommended only for assembly of small genomes. We strongly recommend against using it for large and medium-size eukaryotic genomes. Note, that this option is not supported by metaSPAdes and rnaSPAdes.
+
+`--continue`
+&nbsp;&nbsp;&nbsp;&nbsp;Continues SPAdes run from the specified output folder starting from the last available check-point. Check-points are made after:
+
+- error correction module is finished
+- iteration for each specified K value of assembly module is finished
+- mismatch correction is finished for contigs or scaffolds
+
+For example, if specified K values are 21, 33 and 55 and SPAdes was stopped or crashed during the assembly stage with K = 55, you can run SPAdes with the `--continue` option specifying the same output directory. SPAdes will continue the run starting from the assembly stage with K = 55. The error correction module and iterations for K equal to 21 and 33 will not be run again. If `--continue` is set, the only allowed option is `-o <output_dir>`.
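A continued run is then a one-liner (a sketch; the output directory is hypothetical and must be the one used by the interrupted run):

``` bash

    spades.py --continue -o interrupted_assembly_out
```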
+
+`--restart-from <check_point>`
+&nbsp;&nbsp;&nbsp;&nbsp;Restart SPAdes run from the specified output folder starting from the specified check-point. Check-points are:
+
+- `ec` – start from error correction
+- `as` – restart assembly module from the first iteration
+- `k<int>` – restart from the iteration with the specified k value, e.g. `k55` (not available in RNA-Seq mode)
+- `mc` – restart mismatch correction
+- `last` – restart from the last available check-point (similar to `--continue`)
+
+In contrast to the `--continue` option, you can change some of the options when using `--restart-from`. You can change any option except: all basic options, all options for specifying input data (including `--dataset`), `--only-error-correction` option and `--only-assembler` option. For example, if you ran assembler with k values 21,33,55 without mismatch correction, you can add one more iteration with k=77 and run mismatch correction step by running SPAdes with following options:
+`--restart-from k55 -k 21,33,55,77 --mismatch-correction -o <output_dir>`.
+Since all files will be overwritten, do not forget to copy your assembly from the previous run if you need it.
+
+`--disable-gzip-output`
+&nbsp;&nbsp;&nbsp;&nbsp;Forces read error correction module not to compress the corrected reads. If this option is not set, corrected reads will be in `*.fastq.gz` format.
+
+
+
+### Input data
+
+#### Specifying single library (paired-end or single-read)
+
+`--12 <file_name>`
+    File with interlaced forward and reverse paired-end reads.
+
+`-1 <file_name>`
+    File with forward reads.
+
+`-2 <file_name>`
+    File with reverse reads.
+
+`--merged <file_name>`
+    File with merged paired reads.
+    If the properties of the library permit, overlapping paired-end reads can be merged using special software.
+    Non-empty files with (remaining) unmerged left/right reads (separate or interlaced) must be provided for the same library for SPAdes to correctly detect the original read length.
+
+`-s <file_name>`
+    File with unpaired reads.
+
+#### Specifying multiple libraries
+
+**_Single-read libraries_**
+
+`--s<#> <file_name>`
+    File for single-read library number `<#>` (`<#>` = 1,2,..,9). For example, for the first single-read library the option is: `--s1 <file_name>`
+    Do not use the `-s` option for single-read libraries, since it specifies unpaired reads for the first paired-end library.
+
+**_Paired-end libraries_**
+
+`--pe<#>-12 <file_name>`
+    File with interlaced reads for paired-end library number `<#>` (`<#>` = 1,2,..,9). For example, for the first paired-end library the option is: `--pe1-12 <file_name>`
+
+`--pe<#>-1 <file_name>`
+    File with left reads for paired-end library number `<#>` (`<#>` = 1,2,..,9).
+
+`--pe<#>-2 <file_name>`
+    File with right reads for paired-end library number `<#>` (`<#>` = 1,2,..,9).
+
+`--pe<#>-m <file_name>`
+    File with merged reads from paired-end library number `<#>` (`<#>` = 1,2,..,9).
+    If the properties of the library permit, paired reads can be merged using special software.
+    Non-empty files with (remaining) unmerged left/right reads (separate or interlaced) must be provided for the same library for SPAdes to correctly detect the original read length.
+
+`--pe<#>-s <file_name>`
+    File with unpaired reads from paired-end library number `<#>` (`<#>` = 1,2,..,9).
+    For example, paired reads can become unpaired during the error correction procedure.
+
+`--pe<#>-<or>`
+    Orientation of reads for paired-end library number `<#>` (`<#>` = 1,2,..,9; `<or>` = "fr","rf","ff").
+    The default orientation for paired-end libraries is forward-reverse (`--> <--`). For example, to specify reverse-forward orientation for the second paired-end library, you should use the flag: `--pe2-rf`
+    Should not be confused with FR and RF strand-specificity for RNA-Seq data (see rnaSPAdes manual).
+
+**_Mate-pair libraries_**
+
+`--mp<#>-12 <file_name>`
+    File with interlaced reads for mate-pair library number `<#>` (`<#>` = 1,2,..,9).
+
+`--mp<#>-1 <file_name>`
+    File with left reads for mate-pair library number `<#>` (`<#>` = 1,2,..,9).
+
+`--mp<#>-2 <file_name>`
+    File with right reads for mate-pair library number `<#>` (`<#>` = 1,2,..,9).
+
+`--mp<#>-<or>`
+    Orientation of reads for mate-pair library number `<#>` (`<#>` = 1,2,..,9; `<or>` = "fr","rf","ff").
+    The default orientation for mate-pair libraries is reverse-forward (`<-- -->`). For example, to specify forward-forward orientation for the first mate-pair library, you should use the flag: `--mp1-ff`
+
+
+**_High-quality mate-pair libraries_** (can be used for mate-pair only assembly)
+
+`--hqmp<#>-12 <file_name>`
+    File with interlaced reads for high-quality mate-pair library number `<#>` (`<#>` = 1,2,..,9).
+
+`--hqmp<#>-1 <file_name>`
+    File with left reads for high-quality mate-pair library number `<#>` (`<#>` = 1,2,..,9).
+
+`--hqmp<#>-2 <file_name>`
+    File with right reads for high-quality mate-pair library number `<#>` (`<#>` = 1,2,..,9).
+
+`--hqmp<#>-s <file_name>`
+    File with unpaired reads from high-quality mate-pair library number `<#>` (`<#>` = 1,2,..,9).
+
+`--hqmp<#>-<or>`
+    Orientation of reads for high-quality mate-pair library number `<#>` (`<#>` = 1,2,..,9; `<or>` = "fr","rf","ff").
+    The default orientation for high-quality mate-pair libraries is forward-reverse (`--> <--`). For example, to specify reverse-forward orientation for the first high-quality mate-pair library, you should use the flag: `--hqmp1-rf`
+
+
+**_Lucigen NxSeq® Long Mate Pair libraries_** (see [section 3.1](#sec3.1) for details)
+
+`--nxmate<#>-1 <file_name>`
+    File with left reads for Lucigen NxSeq® Long Mate Pair library number `<#>` (`<#>` = 1,2,..,9).
+
+`--nxmate<#>-2 <file_name>`
+    File with right reads for Lucigen NxSeq® Long Mate Pair library number `<#>` (`<#>` = 1,2,..,9).
+
+**_Specifying data for hybrid assembly_**
+
+`--pacbio <file_name>`
+    File with PacBio CLR reads. For PacBio CCS reads use the `-s` option. More information on PacBio reads is provided in [section 3.1](#pacbio).
+
+`--nanopore <file_name>`
+    File with Oxford Nanopore reads.
+
+`--sanger <file_name>`
+    File with Sanger reads.
+
+`--trusted-contigs <file_name>`
+    Reliable contigs of the same genome, which are likely to have no misassemblies and a low rate of other errors (e.g. mismatches and indels). This option is not intended for contigs of related species.
+
+`--untrusted-contigs <file_name>`
+    Contigs of the same genome whose quality is average or unknown. Contigs of poor quality can be used but may introduce errors in the assembly. This option is also not intended for contigs of related species.
+
+
+**_Specifying input data with YAML data set file (advanced)_**
+
+An alternative way to specify an input data set for SPAdes is to create a [YAML](http://www.yaml.org/) data set file. By using a YAML file you can provide an unlimited number of paired-end, mate-pair and unpaired libraries. Basically, a YAML data set file is a text file in which input libraries are provided as a comma-separated list in square brackets. Each library is provided in braces as a comma-separated list of attributes. The following attributes are available:
+
+- orientation ("fr", "rf", "ff")
+- type ("paired-end", "mate-pairs", "hq-mate-pairs", "single", "pacbio", "nanopore", "sanger", "trusted-contigs", "untrusted-contigs")
+- interlaced reads (comma-separated list of files with interlaced reads)
+- left reads (comma-separated list of files with left reads)
+- right reads (comma-separated list of files with right reads)
+- single reads (comma-separated list of files with single reads or unpaired reads from paired library)
+- merged reads (comma-separated list of files with [merged reads](#merged))
+
+To properly specify a library you should provide its type and at least one file with reads. Orientation is an optional attribute. Its default value is "fr" (forward-reverse) for paired-end libraries and "rf" (reverse-forward) for mate-pair libraries.
+
+The value for each attribute is given after a colon. Comma-separated lists of files should be given in square brackets. For each file you should provide its full path in double quotes. Make sure that files with right reads are given in the same order as corresponding files with left reads.
+
+For example, if you have one paired-end library split into two pairs of files:
+
+``` bash
+
+ lib_pe1_left_1.fastq
+ lib_pe1_right_1.fastq
+ lib_pe1_left_2.fastq
+ lib_pe1_right_2.fastq
+```
+
+one mate-pair library:
+
+``` bash
+
+ lib_mp1_left.fastq
+ lib_mp1_right.fastq
+```
+
+and PacBio CCS and CLR reads:
+
+``` bash
+
+ pacbio_ccs.fastq
+ pacbio_clr.fastq
+```
+
+YAML file should look like this:
+
+``` bash
+
+ [
+ {
+ orientation: "fr",
+ type: "paired-end",
+ right reads: [
+ "/FULL_PATH_TO_DATASET/lib_pe1_right_1.fastq",
+ "/FULL_PATH_TO_DATASET/lib_pe1_right_2.fastq"
+ ],
+ left reads: [
+ "/FULL_PATH_TO_DATASET/lib_pe1_left_1.fastq",
+ "/FULL_PATH_TO_DATASET/lib_pe1_left_2.fastq"
+ ]
+ },
+ {
+ orientation: "rf",
+ type: "mate-pairs",
+ right reads: [
+ "/FULL_PATH_TO_DATASET/lib_mp1_right.fastq"
+ ],
+ left reads: [
+ "/FULL_PATH_TO_DATASET/lib_mp1_left.fastq"
+ ]
+ },
+ {
+ type: "single",
+ single reads: [
+ "/FULL_PATH_TO_DATASET/pacbio_ccs.fastq"
+ ]
+ },
+ {
+ type: "pacbio",
+ single reads: [
+ "/FULL_PATH_TO_DATASET/pacbio_clr.fastq"
+ ]
+ }
+ ]
+```
+
+Once you have created a YAML file, save it with the `.yaml` extension (e.g. as `my_data_set.yaml`) and run SPAdes using the `--dataset` option:
+`--dataset <your YAML file>`
+Notes:
+
+- The `--dataset` option cannot be used with any other options for specifying input data.
+- We recommend nesting all files with long reads of the same data type in a single library block.
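As a sanity check, a data set file in the bracket/brace style shown above can be generated programmatically instead of written by hand. The following Python sketch is illustrative only (the `format_dataset` helper and the `/data/...` paths are hypothetical, not part of SPAdes); it renders a list of library descriptions using the attribute names documented above:

```python
# Illustrative generator for a SPAdes-style YAML data set file.
# Attribute names ("type", "orientation", "left reads", ...) follow the
# description above; this helper itself is hypothetical, not a SPAdes tool.

def format_dataset(libraries):
    """Render library dicts as a comma-separated list in square brackets."""
    blocks = []
    for lib in libraries:
        attrs = []
        for key, value in lib.items():
            if isinstance(value, list):
                files = ",\n        ".join('"%s"' % f for f in value)
                attrs.append("      %s: [\n        %s\n      ]" % (key, files))
            else:
                attrs.append('      %s: "%s"' % (key, value))
        blocks.append("    {\n%s\n    }" % ",\n".join(attrs))
    return "[\n%s\n]\n" % ",\n".join(blocks)

libs = [
    {"orientation": "fr", "type": "paired-end",
     "left reads": ["/data/lib_pe1_left_1.fastq"],
     "right reads": ["/data/lib_pe1_right_1.fastq"]},
    {"type": "pacbio", "single reads": ["/data/pacbio_clr.fastq"]},
]
print(format_dataset(libs))
```

Writing the result to e.g. `my_data_set.yaml` and passing it via `--dataset` would then follow the usage described above.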
+
+
+
+### Advanced options
+
+`-t <int>` (or `--threads <int>`)
+    Number of threads. The default value is 16.
+
+`-m <int>` (or `--memory <int>`)
+    Set memory limit in Gb. SPAdes terminates if it reaches this limit. The default value is 250 Gb. The actual amount of consumed RAM will be below this limit. Make sure this value is correct for the given machine. SPAdes uses the limit value to automatically determine the sizes of various buffers, etc.
+
+`--tmp-dir <dir_name>`
+    Set directory for temporary files from read error correction. The default value is `<output_dir>/corrected/tmp`.
+
+`-k <int,int,...>`
+    Comma-separated list of k-mer sizes to be used (all values must be odd, less than 128 and listed in ascending order). If `--sc` is set, the default values are 21,33,55. For multicell data sets K values are automatically selected using the maximum read length ([see note for assembling long Illumina paired reads for details](#sec3.4)). To properly select K values for IonTorrent data read [section 3.3](#sec3.3).
+
+`--cov-cutoff <float>`
+    Read coverage cutoff value. Must be a positive float value, or "auto", or "off". The default value is "off". When set to "auto" SPAdes automatically computes the coverage threshold using a conservative strategy. Note that this option is not supported by metaSPAdes.
+
+`--phred-offset <33 or 64>`
+    PHRED quality offset for the input reads; can be either 33 or 64. It will be auto-detected if it is not specified.
+
+
+
+### Examples
+
+To test the toy data set, you can also run the following command from the SPAdes `bin` directory:
+
+``` bash
+
+ spades.py --pe1-1 ../share/spades/test_dataset/ecoli_1K_1.fq.gz \
+ --pe1-2 ../share/spades/test_dataset/ecoli_1K_2.fq.gz -o spades_test
+```
+
+If you have your library separated into several pairs of files, for example:
+
+``` bash
+
+ lib1_forward_1.fastq
+ lib1_reverse_1.fastq
+ lib1_forward_2.fastq
+ lib1_reverse_2.fastq
+```
+
+make sure that corresponding files are given in the same order:
+
+``` bash
+
+ spades.py --pe1-1 lib1_forward_1.fastq --pe1-2 lib1_reverse_1.fastq \
+ --pe1-1 lib1_forward_2.fastq --pe1-2 lib1_reverse_2.fastq \
+ -o spades_output
+```
+
+Files with interlacing paired-end reads or files with unpaired reads can be specified in any order with one file per option, for example:
+
+``` bash
+
+ spades.py --pe1-12 lib1_1.fastq --pe1-12 lib1_2.fastq \
+ --pe1-s lib1_unpaired_1.fastq --pe1-s lib1_unpaired_2.fastq \
+ -o spades_output
+```
+
+If you have several paired-end and mate-pair reads, for example:
+
+paired-end library 1
+
+``` bash
+
+ lib_pe1_left.fastq
+ lib_pe1_right.fastq
+```
+
+mate-pair library 1
+
+``` bash
+
+ lib_mp1_left.fastq
+ lib_mp1_right.fastq
+```
+
+mate-pair library 2
+
+``` bash
+
+ lib_mp2_left.fastq
+ lib_mp2_right.fastq
+```
+
+make sure that files corresponding to each library are grouped together:
+
+``` bash
+
+ spades.py --pe1-1 lib_pe1_left.fastq --pe1-2 lib_pe1_right.fastq \
+ --mp1-1 lib_mp1_left.fastq --mp1-2 lib_mp1_right.fastq \
+ --mp2-1 lib_mp2_left.fastq --mp2-2 lib_mp2_right.fastq \
+ -o spades_output
+```
+
+If you have IonTorrent unpaired reads, PacBio CLR and additional reliable contigs:
+
+``` bash
+
+ it_reads.fastq
+ pacbio_clr.fastq
+ contigs.fasta
+```
+
+run SPAdes with the following command:
+
+``` bash
+
+ spades.py --iontorrent -s it_reads.fastq \
+ --pacbio pacbio_clr.fastq --trusted-contigs contigs.fasta \
+ -o spades_output
+```
+
+If a single-read library is split into several files:
+
+``` bash
+
+ unpaired1_1.fastq
+ unpaired1_2.fastq
+ unpaired1_3.fastq
+```
+
+specify them as one library:
+
+``` bash
+
+ spades.py --s1 unpaired1_1.fastq \
+ --s1 unpaired1_2.fastq --s1 unpaired1_3.fastq \
+ -o spades_output
+```
+
+All options for specifying input data can be mixed if needed, but make sure that files for each library are grouped and files with left and right paired reads are listed in the same order.
+
+
+## Assembling IonTorrent reads
+
+Only FASTQ or BAM files are supported as input.
+
+The selection of k-mer length is non-trivial for IonTorrent. If the dataset is more or less conventional (good coverage, not high GC, etc.), then use our [recommendation for long reads](#sec3.4) (e.g. assemble using k-mer lengths 21,33,55,77,99,127). However, due to the increased error rate some changes of k-mer lengths (e.g. selection of shorter ones) may be required. For example, if you ran SPAdes with k-mer lengths 21,33,55,77 and then decided to assemble the same data set using more iterations and larger values of K, you can run SPAdes once again specifying the same output folder and the following options: `--restart-from k77 -k 21,33,55,77,99,127 --mismatch-correction -o <output_dir>`. Do not forget to copy contigs and scaffolds from the previous run. We are planning to tackle the issue of selecting k-mer lengths for IonTorrent reads in future versions.
+
+Reads produced with the Hi-Q enzyme may need no error correction at all. However, we suggest assembling your data both with and without error correction and selecting the best variant.
+
+For non-trivial datasets (e.g. with high GC, low or uneven coverage) we suggest enabling single-cell mode (setting the `--sc` option) and using k-mer lengths of 21,33,55.
+
+
+## Assembling long Illumina paired reads (2x150 and 2x250)
+
+Recent advances in DNA sequencing technology have led to a rapid increase in read length. Nowadays, it is a common situation to have a data set consisting of 2x150 or 2x250 paired-end reads produced by Illumina MiSeq or HiSeq2500. However, the use of longer reads alone will not automatically improve assembly quality. An assembler that can properly take advantage of them is needed.
+
+SPAdes' use of iterative k-mer lengths allows it to benefit from the full potential of long paired-end reads. Currently one has to set the assembler options manually, but we plan to incorporate automatic calculation of the necessary options soon.
+
+Please note that in addition to the read length, the insert length also matters a lot. It is not recommended to sequence a 300bp fragment with a pair of 250bp reads. We suggest using 350-500 bp fragments with 2x150 reads and 550-700 bp fragments with 2x250 reads.
+
+### Multi-cell data set with read length 2x150
+
+Do not turn off SPAdes error correction (BayesHammer module), which is included in the SPAdes default pipeline.
+
+If you have enough coverage (50x+), then you may want to try to set k-mer lengths of 21, 33, 55, 77 (selected by default for reads with length 150bp).
+
+Make sure you run the assembler with the `--careful` option to minimize the number of mismatches in the final contigs.
+
+We recommend that you check the SPAdes log file at the end of each iteration to control the average coverage of the contigs.
+
+For reads corrected prior to running the assembler:
+
+``` bash
+
+ spades.py -k 21,33,55,77 --careful --only-assembler <your reads> -o spades_output
+```
+
+To correct and assemble the reads:
+
+``` bash
+
+ spades.py -k 21,33,55,77 --careful <your reads> -o spades_output
+```
+
+### Multi-cell data set with read length 2x250
+
+Do not turn off SPAdes error correction (BayesHammer module), which is included in the SPAdes default pipeline.
+
+By default we suggest increasing k-mer lengths in increments of 22 until the k-mer length reaches 127. The exact k-mer length depends on the coverage: a k-mer length of 127 corresponds to 50x k-mer coverage and higher. For a read length of 250bp SPAdes automatically chooses K values equal to 21, 33, 55, 77, 99, 127.
+
+Make sure you run the assembler with the `--careful` option to minimize the number of mismatches in the final contigs.
+
+We recommend that you check the SPAdes log file at the end of each iteration to control the average coverage of the contigs.
+
+For reads corrected prior to running the assembler:
+
+``` bash
+
+ spades.py -k 21,33,55,77,99,127 --careful --only-assembler <your reads> -o spades_output
+```
+
+To correct and assemble the reads:
+
+``` bash
+
+ spades.py -k 21,33,55,77,99,127 --careful <your reads> -o spades_output
+```
+
+### Single-cell data set with read lengths 2x150 or 2x250
+
+The default k-mer lengths are recommended. For single-cell data sets SPAdes selects k-mer sizes 21, 33 and 55.
+
+However, it might be tricky to fully utilize the advantages of the long reads you have. Consider contacting us for more information and to discuss assembly strategy.
+
+
+## SPAdes output
+
+SPAdes stores all output files in `<output_dir>`, which is set by the user.
+
+- `/corrected/` directory contains reads corrected by BayesHammer in `*.fastq.gz` files; if compression is disabled, reads are stored in uncompressed `*.fastq` files
+- `/scaffolds.fasta` contains resulting scaffolds (recommended for use as resulting sequences)
+- `/contigs.fasta` contains resulting contigs
+- `/assembly_graph.gfa` contains SPAdes assembly graph and scaffolds paths in [GFA 1.0 format](https://github.com/GFA-spec/GFA-spec/blob/master/GFA1.md)
+- `/assembly_graph.fastg` contains SPAdes assembly graph in [FASTG format](http://fastg.sourceforge.net/FASTG_Spec_v1.00.pdf)
+- `/contigs.paths` contains paths in the assembly graph corresponding to contigs.fasta (see details below)
+- `/scaffolds.paths` contains paths in the assembly graph corresponding to scaffolds.fasta (see details below)
+
+Contigs/scaffolds names in SPAdes output FASTA files have the following format:
+`>NODE_3_length_237403_cov_243.207`
+Here `3` is the number of the contig/scaffold, `237403` is the sequence length in nucleotides and `243.207` is the k-mer coverage for the last (largest) k value used. Note that the k-mer coverage is always lower than the read (per-base) coverage.
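These name fields can be pulled apart mechanically, for instance when post-processing assemblies. The following is a minimal Python sketch (the `parse_node_name` helper is hypothetical, not a SPAdes tool):

```python
import re

# Parse a SPAdes FASTA record name of the form
# NODE_<id>_length_<length>_cov_<k-mer coverage>.
NAME_RE = re.compile(r"NODE_(\d+)_length_(\d+)_cov_([\d.]+)")

def parse_node_name(name):
    """Return (node id, length in nucleotides, k-mer coverage)."""
    m = NAME_RE.search(name)
    if m is None:
        raise ValueError("not a SPAdes contig/scaffold name: %r" % name)
    return int(m.group(1)), int(m.group(2)), float(m.group(3))

node_id, length, cov = parse_node_name(">NODE_3_length_237403_cov_243.207")
```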
+
+In general, SPAdes uses two techniques for joining contigs into scaffolds. The first one relies on read pairs and tries to estimate the size of the gap separating contigs. The second one relies on the assembly graph: e.g. if two contigs are separated by a complex tandem repeat that cannot be resolved exactly, the contigs are joined into a scaffold with a fixed gap size of 100 bp. Contigs produced by SPAdes do not contain N symbols.
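Since contigs contain no N symbols while scaffold gaps are represented by runs of N, a scaffold sequence can be split back into its contig-like pieces at the gaps. A minimal illustrative sketch (not a SPAdes tool):

```python
import re

def split_scaffold(seq):
    """Split a scaffold sequence at runs of N (gap placeholders)."""
    return [piece for piece in re.split(r"N+", seq.upper()) if piece]

# Two joined contigs separated by a fixed 100 bp gap, as described above.
pieces = split_scaffold("ACGTACGT" + "N" * 100 + "GGCCTTAA")
```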
+
+To view FASTG and GFA files we recommend using the [Bandage visualization tool](http://rrwick.github.io/Bandage/). Note that sequences stored in `assembly_graph.fastg` correspond to contigs before repeat resolution (edges of the assembly graph). Paths corresponding to contigs after repeat resolution (scaffolding) are stored in `contigs.paths` (`scaffolds.paths`) in the format accepted by Bandage (see [Bandage wiki](https://github.com/rrwick/Bandage/wiki/Graph-paths) for details). An example is given below.
+
+Let the contig with the name `NODE_5_length_100000_cov_215.651` consist of the following edges of the assembly graph:
+
+``` plain
+ >EDGE_2_length_33280_cov_199.702
+ >EDGE_5_length_84_cov_321.414'
+ >EDGE_3_length_111_cov_175.304
+ >EDGE_5_length_84_cov_321.414'
+ >EDGE_4_length_66661_cov_223.548
+```
+
+Then, `contigs.paths` will contain the following record:
+
+``` plain
+ NODE_5_length_100000_cov_215.651
+ 2+,5-,3+,5-,4+
+```
+
+
+Since the current version of Bandage does not accept paths with gaps, paths corresponding to contigs/scaffolds that jump over a gap in the assembly graph are split by a semicolon at the gap positions. For example, the following record
+
+``` plain
+ NODE_3_length_237403_cov_243.207
+ 21-,17-,15+,17-,16+;
+ 31+,23-,22+,23-,4-
+```
+
+states that `NODE_3_length_237403_cov_243.207` corresponds to the path with 10 edges, but jumps over a gap between edges `EDGE_16_length_21503_cov_482.709` and `EDGE_31_length_140767_cov_220.239`.
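Such a record can be parsed into its gap-separated segments and signed edge ids with a few lines of Python. This is an illustrative sketch under the format description above (the `parse_path` helper is hypothetical, not a SPAdes tool):

```python
def parse_path(record):
    """Split a contigs.paths/scaffolds.paths record into segments.

    Semicolons mark gaps; each segment is a list of (edge_id, strand)
    pairs, where strand is '+' or '-'.
    """
    segments = []
    for segment in record.replace("\n", "").split(";"):
        if not segment:
            continue
        edges = []
        for token in segment.split(","):
            token = token.strip()
            edges.append((int(token[:-1]), token[-1]))
        segments.append(edges)
    return segments

# The gapped record from the example above: two segments, 10 edges total.
path = parse_path("21-,17-,15+,17-,16+;\n31+,23-,22+,23-,4-")
```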
+
+The full list of `<output_dir>` content is presented below:
+
+- scaffolds.fasta – resulting scaffolds (recommended for use as resulting sequences)
+- contigs.fasta – resulting contigs
+- assembly_graph.fastg – assembly graph
+- contigs.paths – contigs paths in the assembly graph
+- scaffolds.paths – scaffolds paths in the assembly graph
+- before_rr.fasta – contigs before repeat resolution
+
+- corrected/ – files from read error correction
+ - configs/ – configuration files for read error correction
+ - corrected.yaml – internal configuration file
+ - Output files with corrected reads
+
+- params.txt – information about SPAdes parameters in this run
+- spades.log – SPAdes log
+- dataset.info – internal configuration file
+- input_dataset.yaml – internal YAML data set file
+- K<##>/ – directory containing intermediate files from the run with K=<##>. These files should not be used as assembly results; use resulting contigs/scaffolds in files mentioned above.
+
+
+SPAdes will overwrite these files and directories if they exist in the specified `<output_dir>`.
+
+
+## plasmidSPAdes output
+
+plasmidSPAdes outputs only DNA sequences from putative plasmids. Output file names and formats remain the same as in SPAdes (see [previous](#sec3.5) section), with the following difference. For all contig names in `contigs.fasta`, `scaffolds.fasta` and `assembly_graph.fastg` we append the suffix `_component_X`, where `X` is the id of the putative plasmid to which the contig belongs. Note that plasmidSPAdes may not be able to separate similar plasmids, and thus their contigs may appear with the same id.
+
+
+## biosyntheticSPAdes output
+
+biosyntheticSPAdes outputs three files of interest:
+- gene_clusters.fasta – contains DNA sequences from putative biosynthetic gene clusters (BGC). Since each sample may contain multiple BGCs and biosyntheticSPAdes can output several putative DNA sequences for each cluster, for each contig name we append the suffix `_cluster_X_candidate_Y`, where X is the id of the BGC and Y is the id of the candidate from the BGC.
+- bgc_statistics.txt – contains statistics about BGC composition in the sample. First, it outputs the number of domain hits in the sample. Then, for each BGC candidate it outputs the domain order with positions on the corresponding DNA sequence from gene_clusters.fasta.
+- domain_graph.dot – contains the domain graph structure, which can be used to assess the complexity of the sample and the structure of the BGCs. For more information about domain graph construction, please refer to the paper.
+
+
+
+## Assembly evaluation
+
+[QUAST](http://cab.spbu.ru/software/quast/) may be used to generate summary statistics (N50, maximum contig length, GC %, \# genes found in a reference list or with built-in gene finding tools, etc.) for a single assembly. It may also be used to compare statistics for multiple assemblies of the same data set (e.g., SPAdes run with different parameters, or several different assemblers).
+
+
+
+# Stand-alone binaries released within SPAdes package
+
+
+## k-mer counting
+
+To provide input data to the SPAdes k-mer counting tool `spades-kmercount` you may just specify files in [SPAdes-supported formats](#sec3.1) without any flags (after all options) or provide a dataset description file in [YAML format](#yaml).
+
+Output: `<output_dir>/final_kmers` – an unordered set of k-mers in binary format. K-mers from both forward and reverse-complementary reads are taken into account.
+
+Output format: all k-mers are written sequentially without any separators. Each k-mer takes the same number of bits. One k-mer of length K takes 2*K bits. K-mers are aligned to 64 bits. For example, one k-mer of length 21 takes 8 bytes, of length 33 takes 16 bytes, and of length 55 takes 16 bytes. Each nucleotide is coded with 2 bits: 00 - A, 01 - C, 10 - G, 11 - T.
+
+Example:
+
+ For k-mer: AGCTCT
+ Memory: 6 nucleotides * 2 bits = 12 bits, aligned to 64 bits (8 bytes)
+ Byte by byte (nucleotides are packed starting from the low-order bits):
+ data[0] = AGCT -> 11 01 10 00 -> 0xd8
+ data[1] = CT00 -> 00 00 11 01 -> 0x0d
+ data[2] = 0000 -> 00 00 00 00 -> 0x00
+ data[3] = 0000 -> 00 00 00 00 -> 0x00
+ data[4] = 0000 -> 00 00 00 00 -> 0x00
+ data[5] = 0000 -> 00 00 00 00 -> 0x00
+ data[6] = 0000 -> 00 00 00 00 -> 0x00
+ data[7] = 0000 -> 00 00 00 00 -> 0x00
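The packing rule in this example can be reproduced with a short Python sketch (illustrative only; real `final_kmers` files should be handled with SPAdes' own tools):

```python
# Encode a k-mer with 2 bits per nucleotide (00=A, 01=C, 10=G, 11=T),
# packing nucleotides from the low-order bits and padding to 64-bit words.
CODES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_kmer(kmer):
    value = 0
    for i, nucleotide in enumerate(kmer):
        value |= CODES[nucleotide] << (2 * i)
    n_words = (2 * len(kmer) + 63) // 64          # align to 64 bits
    return value.to_bytes(8 * n_words, "little")  # little-endian byte order

packed = pack_kmer("AGCTCT")  # first two bytes: 0xd8, 0x0d as above
```

This reproduces the byte layout of the worked example: a k-mer of length 21 occupies 8 bytes, and lengths 33 and 55 occupy 16 bytes each.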
+
+Synopsis: `spades-kmercount [OPTION...] <input files>`
+
+The options are:
+
+`-d, --dataset file `
+ dataset description (in YAML format), input files ignored
+
+`-k, --kmer `
+ k-mer length (default: 21)
+
+`-t, --threads `
+ number of threads to use (default: number of CPUs)
+
+`-w, --workdir `
+ working directory to use (default: current directory)
+
+`-b, --bufsize `
+ sorting buffer size in bytes, per thread (default 536870912)
+
+`-h, --help `
+ print help message
+
+
+
+## k-mer coverage read filter
+
+`spades-read-filter` is a tool for filtering reads whose median k-mer coverage is less than a given threshold.
+
+To provide input data to the SPAdes k-mer read filter tool `spades-read-filter` you should provide a dataset description file in [YAML format](#yaml).
+
+Synopsis: `spades-read-filter [OPTION...] -d <dataset file>`
+
+The options are:
+
+`-d, --dataset file `
+ dataset description (in YAML format)
+
+`-k, --kmer `
+ k-mer length (default: 21)
+
+`-t, --threads `
+ number of threads to use (default: number of CPUs)
+
+`-o, --outdir `
+ output directory to use (default: current directory)
+
+`-c, --cov `
+ median k-mer count threshold (read pairs for which the median k-mer count of BOTH reads is less than or equal to this value will be ignored)
+
+`-h, --help `
+ print help message
+
+
+## k-mer cardinality estimating
+
+`spades-kmer-estimating` is a tool for estimating the approximate number of unique k-mers in the provided reads. K-mers from reverse-complementary reads aren't taken into account for k-mer cardinality estimating.
+
+To provide input data to the SPAdes k-mer cardinality estimating tool `spades-kmer-estimating` you should provide a dataset description file in [YAML format](#yaml).
+
+Synopsis: `spades-kmer-estimating [OPTION...] -d <dataset file>`
+
+The options are:
+
+`-d, --dataset file `
+ dataset description (in YAML format)
+
+`-k, --kmer `
+ k-mer length (default: 21)
+
+`-t, --threads `
+ number of threads to use (default: number of CPUs)
+
+`-h, --help `
+ print help message
+
+
+## Graph construction
+The graph construction tool `spades-gbuilder` has two mandatory options: a dataset description file in [YAML format](#yaml) and an output file name.
+
+Synopsis: `spades-gbuilder