SPAdes error -- running out of memory/not enough memory allocated for job

I am trying to run SPAdes for genome assembly but get the error below, which I think means it is running out of memory.
Job Message 1:

  • desc: Fatal error: Exit code 250 ()
  • error_level: 3
  • exit_code: 250
  • type: exit_code

It seems only 182 GB of RAM are allocated. According to the SPAdes manual, at least 250 GB of RAM are needed. So how does anyone run SPAdes on Galaxy? Or is there a way to allocate more RAM so I can run this job? It's not a terribly large dataset – around 50 million 150 bp paired-end reads.
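A quick back-of-envelope check of the data volume can help decide whether subsampling is an option. This sketch assumes "50 million 150 bp paired-end reads" means 50 million read pairs, and uses a placeholder genome size of 5 Mb (a typical bacterium) – substitute your organism's actual size:

```python
# Rough data-volume and coverage estimate (all numbers are assumptions
# based on the figures quoted in this thread).
read_pairs = 50_000_000       # assumed: 50M pairs, not 50M single reads
read_len = 150                # bp per read
genome_size = 5_000_000       # placeholder: ~5 Mb bacterial genome

total_bases = read_pairs * 2 * read_len
coverage = total_bases / genome_size
print(f"{total_bases / 1e9:.0f} Gbp total -> ~{coverage:.0f}x coverage")
# -> 15 Gbp total -> ~3000x coverage
```

If the estimated coverage is far above what an assembler needs (commonly on the order of 50–100x for Illumina data), subsampling is likely to reduce memory use substantially without hurting the assembly.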


Hi @Daniel_Hogan

You are working at one of the public Galaxy servers, correct? Try running this job at a different public server to see if it can handle the job instead. Each public server offers distinct computational resources. Details → Public Galaxy Servers

Other common options include:

  1. Use the Sub-sample sequences files tool, e.g. to reduce coverage. This is a pretty common solution. More coverage is not necessarily “better” during initial assembly steps, and for scientific reasons, too.
  2. Run SPAdes through Shovill (“Faster SPAdes assembly of Illumina reads”).
  • Why? The logs are very informative and can give clues about how to better tune parameters to fit your data when you rerun (with SPAdes directly or with this same tool-around-a-tool).
  3. Or, tune parameters on your own, perhaps by reviewing publications that also work with your target species. Defaults are unlikely to be the best fit for real data. To match things up: options are labeled on the tool form with their command-line flags (try a browser search), and the Galaxy command line is captured in the same place you found the other logs (the “i” icon).
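For option 1, the subsampling fraction can be derived from a target coverage. A minimal sketch, assuming the current coverage is already estimated (the 3000x and 100x figures here are illustrative, not from the thread) – the resulting fraction is what you would pass to a subsampling tool such as Galaxy's Sub-sample sequences files:

```python
# Pick a subsampling fraction to hit a target coverage.
# Both coverage values below are illustrative assumptions.
target_cov = 100      # coverage you want to keep for assembly
current_cov = 3000    # estimated coverage of the full dataset

fraction = min(1.0, target_cov / current_cov)
print(f"keep fraction: {fraction:.4f}")
# -> keep fraction: 0.0333
```

For paired-end data, apply the same fraction (and the same random seed, if the tool exposes one) to both files so mate pairs stay in sync.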

Hope this helps! :slight_smile: