align.seqs tool - Fatal error 153

Currently analysing soil microbiome data and following the “Analyses of metagenomics data - The global picture” tutorial on Galaxy.
When running the align.seqs tool with the silva.v4.fasta reference, the following error keeps appearing:

Fatal error: Exit code 153 ()

/data/jwd05e/main/066/600/66600318/ line 23: 18 Done    echo 'align.seqs( fasta=fasta.dat, reference=alignment.reference.dat, align=needleman, ksize=8, flip=true, threshold=0.5, processors='${GALAXY_SLOTS:-8}' )'
19 | sed 's/ //g'
20 File size limit exceeded (core dumped) | ./mothur
21 | tee mothur.out.log

My data is under the size limit; however, the silva reference file is 191.7 MB.

Any suggestions?
Thanks in advance!

Hi @ekil

Are you using the tutorial data and the tutorial steps? If yes, the workflow from the tutorial should work as well.

Consider importing the workflow and running it. Send the results to a new history. That creates a “reference history” that you can compare to when working through the steps manually to learn them.

I do this to help with Q&A, and you are welcome to review my copy. But I would strongly suggest you run the workflow yourself.

Notice that one of my jobs failed, and I just reran it. When you rerun a job from a workflow that failed with a transient cluster error, an extra option appears on the rerun form to replace the original output and resume the downstream tools. That makes it easy to rerun complex work like this with just a few clicks.

For your error, this indicates a problem with the output from the upstream tool that was sent to this tool: it produced a file that was too large for this tool to work with. That is usually some parameter mixup, especially if you are using the tutorial data, since it is designed to be smaller.

20 File size limit exceeded(core dumped) | ./mothur
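As background on that exit code: an exit status above 128 conventionally means the process was killed by a signal, where exit code = 128 + signal number. A quick check (assuming a typical Linux worker node; signal numbers can vary by platform):

```python
import signal

# Exit codes above 128 conventionally encode a fatal signal:
# exit_code = 128 + signal_number.
exit_code = 153
sig = signal.Signals(exit_code - 128)

# On Linux this resolves to SIGXFSZ, the "file size limit exceeded"
# signal, which matches the "(core dumped)" line in the log above.
print(sig.name)
```

So the 153 is consistent with mothur being killed for writing a file larger than the cluster's per-job file size limit.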

Let’s start there. :slight_smile:

Hi Jenna,

Sorry for the late reply. I have tried to run the actual tutorial workflow; however, when I get to the cluster.split command, this error appears:
Fatal error: Exit code 1 ()
Fatal error: Matched on [ERROR]

I have done everything exactly as the instructions say.

I appreciate your help :slight_smile:

Update: Looks like that worked. You can review that history and maybe spot the difference between your error run and my success run?

If you need more help than that, you can share more details – screenshots of the job information page, of the input datasets, etc.

Note: If this is your own data, not the tutorial data, you probably need to adjust some of the parameters in the workflow to better fit your particular reads. See the longer version of those tutorials for many details about this.

Hi @ekil

I just started up the tutorial with the data and workflow on the server instead. It is still running (I am also waiting in the queue!). Let’s see what happens!

The error Fatal error: Matched on [ERROR] usually means that there was a problem upstream, and likely an empty input file – or the wrong file was chosen (these are very easy to mix up). So check those inputs, then re-check the tool that generated those and you’ll probably find what went wrong.
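Since an empty or mismatched input is the usual culprit, a quick local sanity check on a downloaded copy of the dataset can save a queue wait. A minimal sketch (this helper and the filename are my own illustration, not part of the tutorial):

```python
from pathlib import Path

def looks_like_fasta(path):
    """Return True if the file is non-empty and begins with a FASTA header ('>')."""
    p = Path(path)
    if not p.exists() or p.stat().st_size == 0:
        return False  # missing or empty dataset
    with p.open() as fh:
        first_line = fh.readline()
    return first_line.startswith(">")

# Example with a hypothetical downloaded dataset:
# looks_like_fasta("fasta.dat")
```

If the check fails, re-run the upstream tool that produced the dataset before retrying this step.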

Tips on reviewing Datasets and Jobs

  • Click on the “i” icon for any dataset to see what created it in a summary.
  • Click on the “tree” icon for any dataset to filter the history down to the other datasets the file was involved with, e.g. used as an input or output.
  • All dataset navigation buttons are in this cheatsheet → FAQ: Different dataset icons and their usage