RNA STARSolo out of memory: where to adjust memory/node?

Hi,

I got this error: “This job was terminated because it used more memory than it was allocated. Please click the bug icon to report this problem if you need help”. There does not seem to be anything wrong with the input data when I check it. The tool also runs fine on other files: all files were generated with the same software (Trimmomatic), and larger files than the ones failing ran without issues. What could the issue be? I read that it is possible to execute the job on a different “cluster node”.
I checked the tool options (STARsolo), but could not find where I can allocate memory or change nodes. Could you please point me in the right direction?

With kind regards,

Stephanie

Hi @Stephanie_Dv

If the tool is running out of memory during job execution, that can mean that the work is actually exceeding the job limits at the public server you are working on, or that there is some data format or content problem you need to address.

Reference: RNA STARSolo mapping, demultiplexing and gene quantification for single cell RNA-seq (link at ORG)

So, breaking that down:

  1. Data Format: it sounds like you already checked this part. Good! (For a second opinion outside Galaxy, see the sketch after this list.)

  2. Data Content: Did you run QA steps? You could re-review those quality reports to see what might be going on, or whether you need to adjust this tool’s parameters.

  3. Server Resources: Each server hosts different resources and has distinct computational limits (for practical reasons). You can try a different server to see what happens. UseGalaxy.eu and UseGalaxy.org.au are the most similar to UseGalaxy.org, and both host this tool. It is easy to move data between servers, and it is totally expected that people work at all of them (one account at each!).
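
If you want to double-check the reads yourself outside of Galaxy, here is a minimal sketch (plain Python, standard library only; the file path is whatever you pass on the command line, so nothing here is a Galaxy setting). It only verifies the basic four-line FASTQ structure and that each sequence matches its quality string in length; truncated records are a common reason a mapper fails on one file while its siblings run fine.

```python
# Minimal FASTQ structure check: a sketch, assuming a plain-text or
# gzipped FASTQ file whose path is passed on the command line.
import gzip
import sys

def check_fastq(path, max_records=None):
    """Return a description of the first malformed record, or None if OK."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as handle:
        record = 0
        while True:
            header = handle.readline()
            if not header:  # clean end of file
                return None
            seq = handle.readline()
            plus = handle.readline()
            qual = handle.readline()
            record += 1
            if not qual:
                return f"record {record}: truncated (incomplete 4-line block)"
            if not header.startswith("@"):
                return f"record {record}: header does not start with '@'"
            if not plus.startswith("+"):
                return f"record {record}: separator does not start with '+'"
            if len(seq.rstrip("\r\n")) != len(qual.rstrip("\r\n")):
                return f"record {record}: sequence/quality length mismatch"
            if max_records and record >= max_records:
                return None

if __name__ == "__main__":
    problem = check_fastq(sys.argv[1])
    print(problem or "No structural problems found")
```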

Hopefully this explains what is going on, and we can certainly follow up. No one can really tell if the server resources are the actual problem: these tools are so computationally demanding that even data that is small file-size wise can be “demanding” with certain parameters, so trying a server with more resources is the best test. For this tool, I would suggest comparing to UseGalaxy.eu. Then, if it still fails there, you’ll have to look really closely at the parameters, and maybe even the QA steps again; something is going on and the tool itself cannot understand/process that specific sample (scientifically).
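
For a sense of scale, and why even “smaller” inputs can be demanding: the STAR manual’s rule of thumb is roughly ten bytes of RAM per genome base just to hold the index, so a human-sized genome wants around 30 GB before read buffers and sorting are even counted. A back-of-the-envelope sketch (numbers approximate, not a Galaxy setting):

```python
# Back-of-the-envelope RAM estimate for STAR-based aligners.
# The ~10 bytes-per-base figure is the STAR manual's rule of thumb for
# the genome index; actual usage varies with parameters and sorting.
GENOME_BASES = 3.1e9   # approximate human genome size
BYTES_PER_BASE = 10    # STAR manual rule of thumb
ram_gb = GENOME_BASES * BYTES_PER_BASE / 1e9
print(f"~{ram_gb:.0f} GB just to hold the genome index")  # -> ~31 GB
```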

Let us know how this goes! :slight_smile:

Hi Jenna,

In the meantime, rerunning the tool has done the trick!
I will keep your valuable advice for when it happens again and rerunning does not work.

Have a fine Friday!

Great! I forgot to add the “try at least one rerun” tip to my reply. Very glad that worked; some tiny fraction of jobs fail at our monster-sized clusters, and a rerun solves those rare cases. :rocket: