One solution is to break the read data out into distinct collections during download and process it in batches.
- Trimmomatic creates near-duplicate, very large fastq outputs.
- The idea is to get through that step with a portion of the data, purge the original files, repeat with the remaining portions, then combine the results into one collection for downstream processing.
- From your description it seems like 2-4 batches would be enough.
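The same split-trim-purge-combine loop can be sketched in plain shell outside Galaxy. This is a minimal illustration, not the Galaxy workflow itself: the tiny synthetic FASTQ, the file names, and the `cp` stand-in for the actual Trimmomatic run are all assumptions for the sketch.

```shell
#!/bin/sh
set -e

# Tiny synthetic FASTQ (3 reads) so the sketch is self-contained.
printf '@r1\nACGT\n+\nIIII\n@r2\nTTAA\n+\nIIII\n@r3\nGGCC\n+\nIIII\n' > reads.fastq

# 1. Split into batches. FASTQ records are 4 lines each, so split on a
#    multiple of 4 (here 8 lines = 2 reads per batch).
split -l 8 reads.fastq batch_
rm reads.fastq            # the original can be purged once it is split

# 2. Trim each batch, then purge that batch's input to reclaim space
#    before starting the next one.
for b in batch_*; do
    # Placeholder "trim" step; in a real run this would be e.g.
    #   trimmomatic SE "$b" "trimmed_$b" SLIDINGWINDOW:4:20
    cp "$b" "trimmed_$b"
    rm "$b"               # purge the batch input immediately
done

# 3. Combine the trimmed batches into one input for downstream steps.
cat trimmed_batch_* > trimmed_all.fastq
rm trimmed_batch_*
```

The point of the ordering is that only one batch's input and output exist at any moment, so peak disk use is roughly one batch plus the accumulated trimmed output rather than two full copies of the dataset.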
Another option is to make a small, short-term extra quota request (an extra 100-250 GB for a few days to a week).
- This is only available to academic researchers at the public sites. The how-to for accounts at UseGalaxy.org is here: Account quotas - Galaxy Community Hub. Make the request via email as described in that FAQ; don't post your information here publicly.
- For non-academics, additional resources require a private Galaxy server scaled to fit the work; see Galaxy Platform Directory: Servers, Clouds, and Deployable Resources - Galaxy Community Hub
If that is not enough, would you please post back a share link to this history? That context will help with review and with suggesting specific solutions.