RNA seq HiSAT2 pipeline - slurmstepd: error: Detected 1 oom-kill

Hi there,

I’ve got 10 fastqsanger files (5 forward and 5 reverse reads). I’ve concatenated the forward and reverse reads into 2 files, then run HISAT2 on the paired-end library using Jetstream as the resource parameter. The job fails with the following message:

“slurmstepd: error: Detected 1 oom-kill event(s) in step 2535724.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.”

Could someone please suggest what I should do to resolve this?

I’ve tried running the same process on a single forward/reverse read pair and experienced the same error.
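For anyone reading later, the steps described above are roughly the following at the command line (filenames and the index name are hypothetical, and toy fastq records stand in for the real data):

```shell
# Two tiny stand-in forward-read files (the real data are 5 pairs of fastqsanger files):
printf '@read1\nACGT\n+\nIIII\n' > sample1_R1.fastq
printf '@read2\nTGCA\n+\nIIII\n' > sample2_R1.fastq

# Concatenate the forward reads into one file (repeat for the reverse reads):
cat sample1_R1.fastq sample2_R1.fastq > all_R1.fastq

# Command-line equivalent of the Galaxy HISAT2 paired-end step:
# hisat2 -x genome_index -1 all_R1.fastq -2 all_R2.fastq -S aligned.sam

wc -l < all_R1.fastq   # each fastq record is 4 lines
```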

(I’m relatively new to Galaxy and I am using the tool panel)

Thanks,
Tom


Hi @TGoddard

It seems very likely that the tool is running out of resources during job execution.

That could be due to a problem with the inputs (format/content), or the data really being too large.

How to check for input problems, and your options for addressing them, are covered in this FAQ: Galaxy Support - Galaxy Community Hub >> My job ended with an error. What can I do?

The failure on a single pair strongly suggests a content or format issue, or possibly a parameter issue, unless there is something special about your data or even that single pair was really large. Did you do fastq QA/QC steps already? If not, that would be a good place to start.
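One quick content check worth doing before re-running: paired-end mappers expect the forward and reverse files to contain the same number of reads. A minimal sketch with toy files (the real check would point at your concatenated fastq files):

```shell
# Toy forward/reverse files standing in for the real paired data:
printf '@r1\nACGT\n+\nIIII\n' > R1.fastq
printf '@r1\nTTTT\n+\nIIII\n' > R2.fastq

# Each fastq record is exactly 4 lines, so line count / 4 = read count:
fwd=$(( $(wc -l < R1.fastq) / 4 ))
rev=$(( $(wc -l < R2.fastq) / 4 ))
echo "forward: $fwd reverse: $rev"   # the two counts must be equal for paired-end mapping
```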

I added a tag to your post that will point to much prior Q&A about how to do QA/QC within Galaxy (more FAQs, tutorials, direct help within the post).

Thanks!

Hi @jennaj,

Thank you for your reply. The fastq files have already been through QA/QC, but, given what you’ve said, I believe the cause of the error could be the reference genome I am using. I uploaded my own reference and created a custom build. The file format is fasta, but I didn’t run NormalizeFasta.

I’m now going to repeat the workflow using the updated reference file, and I shall report back whether it passes (hopefully!) or fails.

Best,
Tom


@TGoddard Checking the custom genome/build is a great place to start. Most tools do not work well when the fasta identifier lines “>” contain any description content. Plus, if you are incorporating reference annotation while mapping, a mismatch with the reference genome can also lead to problems (the chromosome names/format must be exactly the same between those two inputs). If you are not incorporating reference annotation yet, you’ll still need to check it – most if not all downstream tools are sensitive to that kind of chromosome mismatch issue. The job may not even fail – it may just produce odd results.
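For reference, here is a sketch of both checks on a toy fasta: stripping description text from the “>” identifier lines (the kind of cleanup NormalizeFasta performs), and comparing the fasta’s chromosome names against the annotation. The annotation filename is hypothetical, so that step is left commented:

```shell
# Toy fasta whose identifier line carries description content after the name:
printf '>chr1 scaffold, length=8\nACGTACGT\n' > ref.fa

# Keep only the first whitespace-delimited token on ">" lines:
awk '/^>/ {print $1; next} {print}' ref.fa > ref_clean.fa

# List the chromosome names used by the fasta; these must exactly match
# the names in column 1 of the annotation (GTF/GFF):
grep '^>' ref_clean.fa | sed 's/^>//' | sort > fasta_chroms.txt
# cut -f1 annotation.gtf | grep -v '^#' | sort -u > gtf_chroms.txt
# comm -3 fasta_chroms.txt gtf_chroms.txt   # no output means the names match
```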

Here are FAQs that may help. I’m guessing you found these already, but the mismatch issue comes up so often that I’ll post them again for others reading.

Once you have confirmed/resolved all of the above, and the job still fails, especially with a resource-related error, it may be that your custom genome is simply too large to process at the public site. Fragmented assemblies (over 100 or so “chromosomes”) and large assemblies (most mammals, certain plants) are the usual underlying reasons for the “genome too large” types of failures when using a custom genome/build. The original troubleshooting FAQ I posted has more details about what you can try next.
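Two quick numbers will tell you whether fragmentation or size is in play, sketched here on a toy genome (run the same commands against your uploaded fasta):

```shell
# Toy two-sequence genome standing in for the uploaded reference:
printf '>scaf1\nACGT\n>scaf2\nTGCA\n' > genome.fa

# Sequence count; more than ~100 suggests a fragmented assembly:
grep -c '^>' genome.fa

# Total bases; mammal/plant-scale genomes run into the gigabases:
grep -v '^>' genome.fa | tr -d '\n' | wc -c
```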

If you want a 2nd opinion on the custom genome/annotation match or content, before trying other ways to use Galaxy, please send in a bug report from the error and leave all inputs/outputs intact. Paste a link to this topic in the bug report comments for context, then post back here so we know when to look for it. But I hope it works out before then :grinning: