I am at the tail end of completing an RNA-STAR run (a few hidden files are still processing) and my storage jumped suddenly from ~75% to 100% full. I have been permanently deleting files as I go, so I was surprised to see that files were, in fact, not being permanently deleted. This obviously explains the full storage quota, so now I’m wondering how I can force the deleted files to be permanently deleted to free up space?
I have tried permanently deleting individual files, purging all deleted files (tried this maybe 50 times), and logging out and back in. Is there a glitch, or am I doing something wrong? I noticed yesterday that it was taking a couple of tries to get data to actually purge.
If you use the new Galaxy history view, click the Storage Dashboard (barrel) icon under the history name, in the left corner, and proceed to Free up disk usage. If you use the legacy view, you may need to switch to the Beta view in the history menu first. Alternatively, you can access the Storage Dashboard through User (in the top Galaxy menu) > Preferences, at the bottom of the middle panel.
Hope that helps.
I replied to your email. Some of the datasets in your history were in a mixed state: they had been purged while other datasets in the same collection were still processing.
For others that may run into this problem (should be very very rare!):
What to try:
- Go to User → Histories and make a copy of the history. This won’t consume extra quota space, since the copy shares the same underlying datasets rather than duplicating them.
- Go to that copy and work from there.
- Go into the Hidden tab and click the trash can icons to delete unwanted datasets.
- Retry the Purge all Deleted files action. This function will take a few minutes to fully process. Let it finish.
- Go to User → Preferences → Storage Dashboard and use the Refresh function to recalculate quota usage. Galaxy will catch up in the background on its own, but this forces the recalculation to happen sooner. Make sure to let the process complete without interruption; it commonly takes several minutes, depending on how much data was purged and how busy the server is.
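If the UI purge keeps failing, the same cleanup can be scripted against the Galaxy API. Below is a minimal sketch using BioBlend (the Python client for the Galaxy API): it lists a history’s contents including deleted items, then purges any dataset that is deleted but not yet purged. The `deleted`/`purged` fields and the `show_history`/`delete_dataset` calls reflect the BioBlend histories client, but treat this as a starting point and check it against your server’s API version; the URL, API key, and history ID are placeholders you would fill in.

```python
def needs_purge(record):
    """True if a dataset record is marked deleted but its bytes are still on disk."""
    return bool(record.get("deleted")) and not record.get("purged")


def purge_deleted(gi, history_id):
    """Purge every deleted-but-not-purged dataset in a history.

    `gi` is assumed to be a bioblend.galaxy.GalaxyInstance, e.g.:
        from bioblend.galaxy import GalaxyInstance
        gi = GalaxyInstance("https://usegalaxy.org", key="YOUR_API_KEY")
    Returns the ids of the datasets that were purged.
    """
    # contents=True returns the datasets/collections; deleted=True includes
    # deleted items, which a plain listing would hide.
    contents = gi.histories.show_history(history_id, contents=True, deleted=True)
    purged = []
    for item in contents:
        # Skip collection records; purge acts on individual datasets.
        if item.get("history_content_type") == "dataset" and needs_purge(item):
            gi.histories.delete_dataset(history_id, item["id"], purge=True)
            purged.append(item["id"])
    return purged
```

After running this, the Storage Dashboard’s Refresh function (step above) still has to be triggered for the quota number to catch up.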
This seems to be related to deleting collections without fully deleting the datasets included in those collections. A few of those datasets were still processing when the output went over quota, and at least one of the jobs was paused.
Making a copy of a history resets all of the metadata for that history and its contents. Once you have a copy that functions correctly, you can delete + purge the original history, then refresh the quota again. Don’t attempt to run any new jobs until this is done.
Again, this should be rare for anyone, but the above is how to “start over”. @igor’s instructions should then work correctly.