[Errno 28] No space left on device


I was using QIIME 2 on the Galaxy server and got an error message saying:
[Errno 28] No space left on device. I tried deleting histories, but that didn't work. How can I fix this? I've used only 3.5 GB of my storage.

Thank you in advance.

Hi @Brigitta,
this error does not refer to your user quota, but to the space left on the hard disk of the machine your job ran on.

Can you please file a bug report (through the little bug icon on your failed dataset)? That will send the Galaxy Europe team, via email, all the information we need to debug this.


Hi @wm75 ,

There was an error when I tried to report it. The message I got was:
"An error occurred sending the report by email: Error reporting has been disabled for this Galaxy instance"

Here are the details of the steps that led me to the "Errno 28" error.
I have been using the Galaxy server to analyze a dataset with QIIME 2. I got the error above (Errno 28: No space left on device) while trying to assign taxonomy to my dataset with the qiime2 feature-classifier classify-sklearn plugin using the silva-138-99-515-806-nb-classifier database, but it works fine when I use the Greengenes database (gg-13-8-99-515-806).
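For anyone hitting the same error, the command-line equivalent of the step above looks roughly like the sketch below. The input and output filenames (`rep-seqs.qza`, `taxonomy.qza`) are illustrative placeholders; only the classifier artifact name corresponds to the SILVA database mentioned.

```shell
# Sketch of the equivalent QIIME 2 CLI invocation (run inside a QIIME 2
# environment). rep-seqs.qza is the representative-sequences input and
# taxonomy.qza the output artifact; both names are placeholders.
qiime feature-classifier classify-sklearn \
  --i-classifier silva-138-99-515-806-nb-classifier.qza \
  --i-reads rep-seqs.qza \
  --o-classification taxonomy.qza
```

The SILVA 138 classifier artifact is considerably larger than the Greengenes 13-8 one, so this step likely needs far more temporary disk space and memory on the compute node, which would explain why the Greengenes run succeeds while the SILVA run fails.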

Thank you in advance,

So you are not working on usegalaxy.eu then?

No, I am working on cancer.usegalaxy.org

OK, then you need to contact the admins of that server. As this is an internal error in their compute environment, there is not much anyone else can do to help.

I’ve tried to inform people behind your server about your issue and, hopefully, they’ll chime in here later today (it’s around midnight in their timezone currently).


Thank you so much!

Hi @Brigitta

We were able to contact the administrators for this server, and the issue should be resolved by now. Would you please try a rerun?

cc @sargentl


Heya, thanks for the report, and sorry it was cumbersome to file it.

Re: your main issue: We've adjusted resource allocations to ease the pressure on the file system and memory when running that particular tool. I was able to replicate the error and confirm that the changes resolved it, so you should be good to go! It's possible, however, that other, larger or more complex datasets might exceed the new allocation, so I'll be watching future executions more closely to see if further adjustments are needed.

Re: your issue issuing issues: we are working on making it easier to report errors directly to us instead of driving you to go through intermediaries (thanks @wm75 and @jennaj !); those changes should be live soon.

Thanks again for the report, very valuable and actionable info!