Utilizing Galaxy space effectively

Hi @LittleBlueHeron

One solution is to split the read data into distinct collections at download time and process it in batches.

  • Trimmomatic produces very large, near-duplicate FASTQ outputs.
  • The idea is to run that step on a portion of the data, purge (not just delete) the original files so they stop counting toward your quota, repeat with the next portion, then merge the results into one collection for downstream processing (see the sketch after this list).
  • From your description, 2-4 batches should be enough.
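If you prefer to script the batch-and-purge loop rather than click through the UI, something along these lines could work through the Galaxy API with BioBlend. This is only a minimal sketch: the server URL, API key, Trimmomatic tool ID, file names, and the `tool_inputs` parameter name are all placeholders you would need to adapt to your server and the exact tool wrapper version.

```python
# Minimal sketch of the batch-and-purge approach via the Galaxy API (pip install bioblend).
# The URL, API key, tool ID, file list, and tool_inputs parameter name below are
# placeholders -- adapt them to your server and the exact Trimmomatic wrapper version.
from bioblend.galaxy import GalaxyInstance
from bioblend.galaxy.dataset_collections import CollectionDescription, HistoryDatasetElement

gi = GalaxyInstance(url="https://usegalaxy.example", key="your-api-key")
history = gi.histories.create_history(name="batched-trimming")

TOOL_ID = "trimmomatic"  # placeholder; copy the full tool ID from your server
fastq_batches = ["reads_part1.fastq.gz", "reads_part2.fastq.gz"]  # one file per batch

trimmed = []
for i, path in enumerate(fastq_batches, start=1):
    # Upload one batch of reads into the history.
    upload = gi.tools.upload_file(path, history["id"])
    input_id = upload["outputs"][0]["id"]

    # Run the trimming step; the tool_inputs layout depends on the wrapper's schema.
    run = gi.tools.run_tool(
        history_id=history["id"],
        tool_id=TOOL_ID,
        tool_inputs={"fastq_in": {"src": "hda", "id": input_id}},  # placeholder param name
    )
    for out in run["outputs"]:
        gi.datasets.wait_for_dataset(out["id"])  # block until this batch finishes
        trimmed.append(HistoryDatasetElement(name=f"batch_{i}", id=out["id"]))

    # Purge (not just delete) the raw input so it no longer counts toward quota.
    gi.histories.delete_dataset(history["id"], input_id, purge=True)

# Merge all trimmed outputs into one collection for downstream steps.
gi.histories.create_dataset_collection(
    history["id"],
    CollectionDescription(name="trimmed-reads", elements=trimmed),
)
```

One caution: purging via the API behaves like purging in the UI, meaning the data cannot be recovered afterward, so confirm the trimmed outputs are green (finished OK) before removing the inputs.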

Another option is to request a small, short-term quota increase (an extra 100-250 GB for a few days to a week).

If neither of those is enough, would you please post back a share link to this history? That context will help with review and more specific suggestions. :slight_smile:

faqs/galaxy/#sharing-your-history
