Hi! I’m analyzing a lot of long-read data using tools like Porechop and NanoLyse in nf-core pipelines (taxprofiler, mag), but there is a huge slow-down when using the BusyBox gzip to compress ~100 GB files. For example, gzip compression alone can take an extra 6 hours on top of an 8-minute(!) NanoLyse job.
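To illustrate the scale of the difference, here is the kind of rough comparison I’ve been running outside the pipeline (file names and thread count are just examples, not anything the pipeline itself does). A multi-threaded compressor such as pigz finishes far faster on the same file than a single-threaded gzip:

```bash
# Rough timing comparison on the same ~100 GB FASTQ file
# (paths and thread count are placeholders for illustration only)

# Single-threaded gzip, as the BusyBox build in the container provides:
time gzip -c sample.fastq > sample.fastq.gz

# pigz spreads the compression across CPU cores (here 8 threads):
time pigz -p 8 -c sample.fastq > sample.pigz.fastq.gz
```

Since pigz writes standard gzip-compatible output, downstream steps can still read the files; the question is really whether the containers used by the modules could provide something like it.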
This is a question about nf-core workflows, but I understand that Galaxy maintains the Docker/Singularity images that are used by nf-core. Is that right? Sorry if it’s off-topic here; I’m just trying to find out where to ask!
That is an interesting view on it
There is a project called BioContainers, where we maintain containers for all the different workflow engines. It’s unfortunately true that at the moment only Galaxy people are maintaining this for everyone; we are also trying to mirror and store those containers, but it does not need to be just Galaxy.