QIIME 2 feature-table summarize/classify-sklearn failed: Docker image decompression runs out of space

Hi everyone,
I’m encountering a persistent error when running QIIME 2 tools on Galaxy Europe and would appreciate some help.
Problem Description
When I run `feature-table summarize` and `feature-classifier classify-sklearn`, the jobs fail during the Docker image pull phase with this error:
```
Unable to find image 'quay.io/qiime2/amplicon:2025.10' locally
2025.10: Pulling from qiime2/amplicon
37d8ef6328fe: Pulling fs layer
2f04a8928893: Pulling fs layer
6e7462332504: Pulling fs layer
39e102696b4b: Pulling fs layer
39e102696b4b: Waiting
37d8ef6328fe: Download complete
2f04a8928893: Verifying Checksum
2f04a8928893: Download complete
6e7462332504: Download complete
37d8ef6328fe: Pull complete
2f04a8928893: Pull complete
6e7462332504: Pull complete
39e102696b4b: Verifying Checksum
39e102696b4b: Download complete
docker: failed to register layer: open /opt/conda/envs/qiime2-amplicon-2025.10/lib/python3.10/site-packages/statsmodels/tsa/statespace/tests/test_exponential_smoothing.py: no space left on device.
See 'docker run --help'.
```
My Situation
- My personal storage is sufficient: I have a 250 GB quota and have only used 42.7 GB. I also have 2 TB of scratch storage that is completely unused.
- Cannot change tool version: There is no option to select a different QIIME 2 version in the tool parameters; I can only use the default 2025.10.
- Cannot access advanced options: I cannot find any setting for environment variables or for changing the Docker image path.
What I’ve Tried
- Cleaned up my history and unused datasets to confirm storage is not the issue.
- Re-ran the jobs, but they fail at the same point every time.
- Filtered my input table to reduce its size, but the problem persists.
Request for Help
I suspect this is caused by insufficient space on Docker's default storage partition (e.g., `/var/lib/docker`) on the compute node, since my personal storage and scratch space both have plenty of free room.
Could you please:
- Check whether the compute node's Docker cache and old images can be cleaned up to free space?
- Consider configuring Docker to use my 2 TB scratch storage for temporary files?
- Advise whether there are any temporary workarounds other users have found for this issue?
Thank you very much for your help!

You're correct: this isn't your scratch space or your Galaxy quota. The failure occurs while Docker unpacks the qiime2/amplicon:2025.10 image into the compute node's `/var/lib/docker`, which has run out of space. The statsmodels path in the error message shows the job dies during layer extraction, before QIIME 2 ever starts.

As a user, you cannot redirect Docker's storage to your scratch space or clean its image cache. Filtering your inputs won't help either, since the failure happens before your data is ever read. Rerunning can occasionally succeed if the job lands on a different node; otherwise, the Galaxy Europe administrators need to prune old Docker images or recycle the affected workers. There is nothing wrong with your QIIME 2 setup.
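For context, here is the kind of server-side change only the administrators can make. Docker's image storage can be pointed at a larger volume via the `data-root` key in `/etc/docker/daemon.json` (the `/scratch/docker` path below is an illustrative assumption, not the actual layout on the UseGalaxy.eu nodes):

```json
{
  "data-root": "/scratch/docker"
}
```

After changing this, the Docker daemon must be restarted for the new location to take effect. Alternatively, running `docker system prune -af` on the node reclaims space from unused images and layers. Neither action is available to regular Galaxy users, which is why this has to go to the server team.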

Welcome @hxj !

I agree with @susan about the root issue. I've also had problems with some of the other QIIME 2 tools at UseGalaxy.eu recently. Some were already reported, but let's ping an administrator from their team to make them aware that more is going on. Would you be able to help, @wm75?

In the meantime, you can try either UseGalaxy.org or UseGalaxy.org.au. You don't need to start over completely; instead, you can move your data between the servers by URL.

There is probably some problem with how jobs are being routed to appropriate cluster nodes, but that's just a guess! In any case, only an administrator can correct this.

Hope this helps! :slight_smile: