Disk quota error occurring with paired-end Trinity assembly -- Sept 22 2021 [Resolved]


Our lab uses Trinity for most of our assembly work, but lately, I’ve been unable to get the tool to run correctly.
I am attempting to assemble a set of paired-end Illumina reads, about 360 MB per end. The job produced a near-instantaneous error report when it transitioned from ‘pending’ to ‘running’. The job API ID is bbd44e69cb8906b567f21c7489016a08, and the error code is “[Errno 122] Disk quota exceeded”. My understanding is that this error code indicates a memory issue, but we’ve assembled larger datasets than this in the past without issue. Any clarification you could give would be appreciated.


I had the same issue today. Must be a problem on the server. I sent an error report.


Hi @BirdNerd

This error, [Errno 122] Disk quota exceeded, was due to a cluster problem at UseGalaxy.org that should now be resolved. If you are still getting that error from jobs started this week, it would be unexpected. Please send in a bug report that includes a link to this topic thread in the comments.

That said, reviewing your account at this server, the most recent Trinity runs have a different error. The complaint in the tool stdout is that the fastq inputs were not formatted correctly. Common reasons include: data not fully uploaded, gz compression that didn’t load correctly, or data truncated before uploading to Galaxy. Check the data locally to make sure it is intact, then upload uncompressed fastq and run some QA/QC checks. As you probably know, Trinity expects both ends of a pair as input. I added a few tags to your post that link to prior Q&A to help with troubleshooting that kind of problem.
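To illustrate the kind of local sanity check mentioned above, here is a minimal sketch that verifies fastq files are intact before upload: it walks records four lines at a time and fails on the first malformed or truncated record. The function name and file names are hypothetical, not part of any Galaxy or Trinity API; it is one way to pre-screen data, not the official QA/QC workflow.

```python
import gzip

def check_fastq(path, opener=open):
    """Basic FASTQ sanity check: count records, raising ValueError
    on the first malformed one (hypothetical helper, not a Galaxy tool)."""
    n = 0
    with opener(path, "rt") as fh:
        while True:
            header = fh.readline()
            if not header:
                break  # clean EOF at a record boundary
            seq = fh.readline()
            plus = fh.readline()
            qual = fh.readline()
            if not header.startswith("@"):
                raise ValueError(f"record {n + 1}: header does not start with '@'")
            if not plus.startswith("+"):
                raise ValueError(f"record {n + 1}: missing '+' line (truncated file?)")
            if len(seq.rstrip("\n")) != len(qual.rstrip("\n")):
                raise ValueError(f"record {n + 1}: sequence/quality length mismatch")
            n += 1
    return n

# For paired-end input, both ends should parse cleanly and match in count
# (file names here are placeholders):
# r1 = check_fastq("reads_R1.fastq.gz", opener=gzip.open)
# r2 = check_fastq("reads_R2.fastq.gz", opener=gzip.open)
# assert r1 == r2, "unpaired reads -- Trinity needs both ends intact"
```

A truncated gzip upload typically fails partway through a record, so this catches the "data truncated before uploading" case as well as plain formatting problems.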


Hi @R_R_R – Thanks for sending in the bug report. That issue should be fixed but … some data migrations are still going on. We’ll review and post back an update.

Okay, thanks for helping with that. I’m less concerned about the second dataset (although I had noticed that it had a different error code), as I just wanted to see if I could replicate the error independent of the Pluvialis data. Thanks for the information, and I’ll try and run it again.


Thanks Jen.

I am trying again now with a small dataset (one I know should work) to see if it happens again. If it does, I’ll make sure to send another bug report.


For the run you sent in, the job hit a cluster problem that was present for a short time today and should now be resolved.


Also please try a rerun (with intact fastq).

About 20 GB per end (compressed) is the largest I’ve seen assemble at a public Galaxy. Larger work can be moved to a custom Galaxy.

Thanks to you both for reporting the problem!

Thanks Jen,

I actually have run Trinity on a set that was around 160 GB per end uncompressed (18 GB compressed) without a problem. I just double-checked it to be sure. I can let you know which data set that was if you want to see it. Most of my Trinity runs have been over 100 GB uncompressed.


@R_R_R Thanks for clarifying – I updated the original post. Maybe the “large data” info will help someone else 🙂