HISAT2 crash/failure

For anyone getting an error message like this one:

> This job was terminated because it used more memory than it was allocated.

The problem is unrelated to the amount of data storage available in an account (e.g. the disk/storage quota) and is instead related to the working memory allocated to the tool during job execution (RAM on a cluster node).

See → FAQ: Understanding 'exceeds memory allocation' error messages

When working at a public Galaxy server, the issue is usually a problem with the input files or the chosen parameters. Why? Public sites allocate substantial resources, so a job that fails for this reason there would probably fail anywhere: the tool is spinning out and won't produce meaningful results until the input problem is resolved and the job is rerun.

We can troubleshoot input problems at this forum; see these guides for how to share your work. Screenshots are fine, but all of the relevant parts need to be captured: the input data labels, the input file content (headers and at least one data line), the exact tool and parameters used (all shown on the job info page), any logs that are available, and the public server involved.
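If you have a local copy of your reads, a quick sanity check before rerunning can catch the most common input problem: a truncated or malformed FASTQ upload. A minimal sketch, assuming a local file (the file name `sample.fastq` here is hypothetical, and the tiny file is generated just for demonstration):

```shell
# Create a tiny two-record FASTQ purely for demonstration (hypothetical data).
printf '@read1\nACGT\n+\nFFFF\n@read2\nTTGC\n+\nFFFF\n' > sample.fastq

# Show the first record: the header plus at least one data line,
# which is exactly what a troubleshooting post should include.
head -n 4 sample.fastq

# A well-formed FASTQ has a line count divisible by 4; a remainder
# suggests a truncated upload, a common cause of tool failures.
awk 'END { if (NR % 4 == 0) print "line count OK"; else print "possibly truncated" }' sample.fastq
```

Within Galaxy itself, the same information is visible by clicking the dataset's eye (view) icon and the dataset details, so no command line is required to capture it for a post.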

Here is one example of a prior Q&A involving this message, where the cause turned out to be a parameter problem → Usage help -- single end versus paired end BAM options

Hope that helps! :slight_smile: