Replies: 5 comments
-
Hi Tom, thanks a ton for the feedback!
Fair enough, I will need to adjust.
Indeed, pip3 can be a bit hard to work with sometimes... Thanks for sharing the list; I’ll make sure to update the documentation to reflect this.
Thx, I created an issue specifically for this.
This bigmem argument part was quite underdeveloped... I'd recommend using BR, …
-
Thanks for the feedback, Tom; it's always good to hear from users what didn't work smoothly. @bogdan: for the bigmem parameter, do we document and explain this aspect clearly (having several 'buckets')?
-
@MichaelHiller, there are two distinct facets here:
Once Yuri completes his single-exon CESAR approach, this memory issue should become obsolete (hopefully, we'll manage to contain the RAM usage within 32-64 GB for all genes).
-
Hi @tbrown91, regarding the cluster args in the …
-
This is also a quirk of our cluster, but I have to specify the …
-
Hi Bogdan, hi Michael,
I just wanted to document some of my experiences installing/running this and the make_lastz_chains directory for some "invaluable" customer feedback.
I downloaded the release version of TOGA (version 1.1.6) and this does not include the CESAR2.0 and postoga directories, I guess because these are forked from another location. The install files also required that I had run `git init .` after download, which perhaps would not be an issue if I had used `git clone` to download TOGA originally, but it was still an issue I ran into.

The pip3 install did not work for me at all; this is likely an issue with the way python/conda is set up on our cluster. In the end I installed every dependency using conda inside a `toga` environment. I have attached the output from `conda list` here in case it is useful for others: toga_conda.txt
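In case it helps anyone hitting the same pip3 problem, this is roughly the route I took (the python version, channels, and the nextflow example are just illustrative; the actual package list is in the attached toga_conda.txt):

```bash
# Create a dedicated conda environment for TOGA and install the
# dependencies there instead of with pip3.
conda create -n toga python=3.9        # python version is just an example
conda activate toga

# Install the required packages from conda channels, e.g.:
conda install -c conda-forge -c bioconda nextflow   # illustrative; repeat for the other deps

# Snapshot the environment so others can reproduce it.
conda list > toga_conda.txt
```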
Inside `make_lastz_chains` I had to hard-code the Slurm arguments inside the script for Nextflow. This is due to a number of particulars of our cluster here, such as the main queue not being called "main", etc. I wonder if there is a possibility to point to a config file like in TOGA.
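To make that idea concrete, something along these lines is what I would love to be able to drop in rather than editing the script itself (the queue name, memory/time values, and whether make_lastz_chains can actually be pointed at a custom config via `-c` are all assumptions on my part):

```bash
# Hypothetical: write a cluster-specific Nextflow config instead of
# hard-coding the Slurm arguments in the pipeline script.
cat > our_cluster.config <<'EOF'
process {
    executor       = 'slurm'
    queue          = 'batch'           // our queue is not called "main"
    clusterOptions = '--qos=normal'    // any extra sbatch flags our cluster needs
    memory         = '16 GB'
    time           = '24h'
}
EOF

# Then (assumption) point the pipeline's Nextflow call at it, e.g.:
# nextflow run <pipeline script> -c our_cluster.config ...
```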
I think this has been mentioned elsewhere, but my 'bigmem' jobs did not run, and as a result I had about 800 orthologs missing because they required too much memory. Unfortunately, if I change the max_mem for the cesar jobs, this changes the max memory for all jobs and I get a very long waiting time for each job to run. In my case I had to set this to 320 GB, which was complete overkill for most jobs. Am I supposed to run the bigmem jobs separately and combine them in some way? I see there is a Nextflow config for bigmem jobs, but I don't think this was ever called.
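For what it's worth, the behaviour I was hoping for is sketched below: jobs grouped into memory buckets so that only the rare huge genes get the large allocation. The flag name, bucket sizes, and input file names here are my guesses, not something I have verified against the code:

```bash
# Assumption on my part: this is how I read the "buckets" idea, with the flag
# name and values taken as a guess rather than verified. Only the handful of
# very large genes would then land in the 320 GB bucket.
./toga.py target.chain query_annotation.bed target.2bit query.2bit \
    --cesar_buckets 10,50,160,320
```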
Many thanks for getting the release version working smoothly. It's great to be able to play around with our genomes.
All the best,
Tom