I encountered this error while processing my data file:
“Job output file grew too large (greater than 200.0 GB), please try different input or parameters.”
An output file exceeding 200 GB is expected; it will be around 350 GB.
Could you please help me with how to proceed with this step?
My input is a 400 GB sequencing file, and I was performing trimming on the reads. The output file is typically over 300 GB.
The public Galaxy servers have significant computational resources, but there are still practical limits. Tools can also have inherent limits, but those usually produce a different message. Your error is most likely reporting that the data hit a processing limit on the cluster where the job is running.
Would you like to share a link to the history containing the job? We can help with two things: checking that nothing technical is going wrong, and possibly suggesting an alternative way to get the reads processed (potential “chunking”; see the sketch below).
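For context, “chunking” here just means splitting the input into pieces and trimming each piece as its own job, so no single output grows past the limit. Splitting can often be done on the server itself, but if you prefer to pre-split locally before upload, here is a minimal Python sketch. It assumes standard 4-line FASTQ records; the chunk size and output file names are made up for illustration, not Galaxy settings.

```python
import gzip
import itertools

READS_PER_CHUNK = 10_000_000  # FASTQ records are 4 lines each

def split_fastq(path: str, reads_per_chunk: int = READS_PER_CHUNK) -> int:
    """Split one large FASTQ into numbered chunk files; return the chunk count."""
    opener = gzip.open if path.endswith(".gz") else open
    chunks = 0
    with opener(path, "rt") as handle:
        while True:
            # Take the next block of records lazily, without loading the whole file.
            block = itertools.islice(handle, reads_per_chunk * 4)
            first = next(block, None)
            if first is None:
                break  # end of input
            chunks += 1
            with open(f"chunk_{chunks:03d}.fastq", "w") as out:
                out.write(first)
                for line in block:
                    out.write(line)
    return chunks

if __name__ == "__main__":
    print(split_fastq("reads.fastq.gz"), "chunks written")
```

Each chunk can then be trimmed as a separate job (or as a collection), keeping every output well under the 200 GB limit, and the trimmed chunks can be concatenated afterwards.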
To complicate things, the UseGalaxy.org server is undergoing maintenance this week, so some work is temporarily throttled. I can’t tell yet whether you are impacted, but it seems possible. This means that once the maintenance event resolves, the job might work as-is.
A final option is to try a different public Galaxy server. UseGalaxy.eu can scale for the largest outputs and longest runtimes.
We can start with the tool/job review if you want. Instructions for sharing are in the banner of this forum, and also here → How to get faster help with your question. You can post the link back as a reply, then unshare once we are done. Thanks!
Hi Jennifer,
Thank you very much for your input.
Of course, I can share the history. I tried again and got the same error.
Please let me know how to share the history.
Best,
Wonsik