Quota reset issues

I have permanently deleted all but two histories and logged out several times. My 2 active histories are using 1.34 MB of disk space but my quota is at 2.7 TB and growing. Do I have some hidden jobs or histories that are running?

Thanks,
Mike


You can send an email to galaxy-bugs@lists.galaxyproject.org and they will be able to reset your quota.


I am assuming this is the same problem.


Thanks! I sent the reset request. I appreciate the quick response and suggestion.


Hi @mrrossi366

I reset your quota at Galaxy Main https://usegalaxy.org.

Went from 2.7 TB to 1.3 MB.

Am reviewing the account now to look for any processes with problems. You’ll probably have many jobs that ended up in a paused state (due to being over quota). Those won’t start up by themselves – it takes a direct action (History menu > Resume Paused Jobs). If you could leave everything as-is for the next 15 mins or so while I look at what went wrong, that would be great.

Also – did you run a workflow? If yes, a share link to the workflow and the history that contains the input datasets would help us to explain why it created so much data (if that is what triggered the disk usage explosion). Make sure I can tell which datasets go with each input of the workflow. The history can just contain the inputs – copy them to a new history and share that. You can share that by email again or in a direct message here.

If you did not run a workflow, what were your last few steps? Tools used, input/output types, anything else special about what you may have clicked on.

There could be other issues, like inputting compressed fastq (fastqsanger.gz) into a tool that needs uncompressed fastq (fastqsanger) – these are usually older tools that create hidden uncompressed versions of the data in the history (view hidden datasets in a history to see these). The hidden dataset is then reused by the immediate job and by future tools whenever uncompressed input is required. This too can create a lot of new data, very fast. The solution is generally to upgrade your tool to the latest version or use a different tool that does the same function but handles compressed fastq inputs directly. If this turns out to be the problem, we can help you choose alternative tools (if you don’t know which to use, or whether any are available – some won’t be).
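For future readers, the size impact of those hidden uncompressed copies is easy to estimate locally. This is a generic sketch with made-up example reads (not data from this thread): it builds a small synthetic FASTQ in memory and compares the gzipped size to the uncompressed size, which approximates the extra quota a hidden uncompressed copy would consume.

```python
import gzip
import random

random.seed(0)

# Hypothetical example data: 10,000 reads of random 50 bp sequence
# with a fixed quality string (fastqsanger format).
reads = []
for i in range(10_000):
    seq = "".join(random.choices("ACGT", k=50))
    reads.append(f"@read{i}\n{seq}\n+\n{'I' * 50}\n")
fastq = "".join(reads).encode()

compressed = gzip.compress(fastq)

# The hidden dataset is the uncompressed copy, so the extra disk usage
# is roughly the full uncompressed size, not the .gz size you uploaded.
print(f"fastqsanger.gz size: {len(compressed):,} bytes")
print(f"uncompressed size:   {len(fastq):,} bytes")
```

Real sequencing data compresses less well than this toy example, but the point stands: every hidden uncompressed copy counts against quota at the full uncompressed size.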

Let me review first and we can follow up. If using a workflow, you can send the links now.

I see some Join Two Datasets jobs. Those can often create really large results (all-versus-all matching!) but it doesn’t look like your data / data size / query would do this, so ignore. I’ve posted the info here for others that might run into this kind of problem in the future (it happens).
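The all-versus-all blow-up is easy to see with a toy join: when the join column is not unique, every matching row on the left pairs with every matching row on the right. A minimal Python sketch (hypothetical key names, not the actual datasets from this account):

```python
# Two tables that both repeat the same join key 1,000 times.
left = [("geneA", i) for i in range(1000)]
right = [("geneA", j) for j in range(1000)]

# A join on the first column: each left row matches every right row
# with the same key, so 1,000 x 1,000 = 1,000,000 output rows.
joined = [(k1, a, b)
          for (k1, a) in left
          for (k2, b) in right
          if k1 == k2]

print(len(left), len(right), len(joined))  # 1000 1000 1000000
```

Deduplicating the join column first (or joining on a column that is unique in at least one table) keeps the output linear instead of quadratic.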

I also see many now-purged unnamed histories. These might have been created by a workflow (if the inputs are multiple single datasets and the output is “send to a new history”). How a problem like that looks and how to avoid it is covered in this prior post – in short, use dataset collections, or accept that this is how the functionality works when not using collections combined with output to a new history: Workflow Splitting Samples Automatically

If these last two posts don’t cover what you think went wrong, please explain more about what you were doing & share links as appropriate.

Thanks! Jen

Thanks for your help. My quota is back to 0% and I have started requeuing jobs. I am parsing VCF files from ThermoFisher’s OCAv3 assay into several summary tables, and my quota issue arose from line duplication when merging tabular datasets side-by-side. I think I’ve fixed the workflow by using the VCF-BED intersect tool and some Cut and Select options that produce smaller tabular files that are easier to merge.
