Unique.seqs fatal error 137 -- Tool/memory failure on a local Galaxy

Dear Galaxy users,

I am trying to analyse some 16S rRNA data following the Galaxy tutorial. When I run the unique.seqs call I get the following message:

An error occurred while running the tool toolshed.g2.bx.psu.edu/repos/iuc/mothur_unique_seqs/mothur_unique_seqs/1.39.5.0.

Tool execution generated the following messages:

Fatal error: Exit code 137 ()
/home/nick/galaxy/database/jobs_directory/002/2067/tool_script.sh: line 25: 31810 Done echo 'unique.seqs( fasta=fasta.dat, format=name )'
31811 | sed 's/ //g'
31812 Killed | mothur
31813 | tee mothur.out.log

Do you know why this keeps happening? The previous call, screen.seqs(), completed successfully.

Kind regards

Nick

It looks like the tool is running out of memory during the job processing.

This could be because of mixed-up inputs or using inputs that are too large to execute on your Galaxy server.
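If you want to confirm the memory explanation on your own machine: exit code 137 is 128 + 9, i.e. the process received SIGKILL, and on Linux that is most often the kernel's out-of-memory (OOM) killer. A minimal check, assuming a local Linux host with sudo access:

```
# Look for OOM-killer activity around the time the job failed
sudo dmesg -T | grep -i -E 'out of memory|killed process'

# On systemd hosts the kernel log is also available this way
sudo journalctl -k | grep -i oom
```

If mothur turns up in those messages, the job was killed for exceeding the RAM that was available, not because of the data itself.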

Has the tool worked before, or is this the first time using it? You might want to double-check that the Mothur tool suite is installed correctly and all dependencies are intact.
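One way to separate a wrapper/dependency problem from a data/memory problem is to run the same call with mothur directly on the command line, outside of Galaxy. A rough sketch; the FASTA path is a placeholder that you would point at the same input the Galaxy tool received:

```
# Placeholder path -- use the same FASTA you gave the Galaxy tool
mothur "#unique.seqs(fasta=/path/to/your_input.fasta, format=name)"
```

If that command also gets killed, the Galaxy wrapper is fine and the machine simply does not have enough memory for this dataset.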

Server administrator help is in a few places:

Dear jennaj,

Many thanks for your reply. I have run this command before without any problems. Do you mean that my RAM is running out?

Shall I try to break the dataset down into smaller datasets and then merge the data? Is that possible?

Thanks a lot for your kind help

Best

N

There could be a format/content problem with the input, or the job really is running out of memory during execution.
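A few quick shell checks on the input FASTA usually settle the format/content question before you rerun anything (the filename below is a placeholder):

```
# Count the sequence headers and eyeball the first records
grep -c '^>' your_input.fasta
head -4 your_input.fasta

# 'file' flags CRLF line endings or non-text content that can trip up tools
file your_input.fasta
```

If those all look normal, memory is the more likely explanation.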

Breaking the dataset up into smaller chunks would not work well with this tool: unique.seqs has to see all of the sequences at once to collapse duplicates, so identical reads split across different chunks would never be merged.

What happens if you run the same job at a public Galaxy server? You can try at Galaxy Main https://usegalaxy.org or Galaxy EU https://usegalaxy.eu. If the tool errors, you can generate a share link to the history and post it back here, along with the dataset numbers involved, for public feedback. Or, ask for feedback privately by sending in a bug report from the red error dataset; please include a link to this post in the comments to associate the two support questions with context.

Many thanks for your reply again. I might not be able to run the same job online, as my dataset is large and I might exceed the available space. I am running the same calls again with a smaller dataset. Is there any way to increase the memory?

[UPDATE]
I finished running the same calls with a smaller dataset and the commands were successful. It might be, then, that I am running out of memory.

Kind regards

Nick

Ok, that does help to confirm that memory is the root problem.

Please see the Galaxy administrator help for how to increase memory allocation for tools. This tuning is usually most appropriate for those who are running jobs on a cluster, which might not be your case.

By default, when running Galaxy locally, job memory is limited by whatever hardware you have natively available on your computer. 16 GB is considered the minimum, but many tools and certainly large datasets can need more.
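To see what the local machine can actually offer, and whether any artificial per-process limit is in play, something like this is enough on a Linux host:

```
# Total and currently available RAM on the machine running Galaxy
free -h

# Per-process virtual memory limit inherited by the Galaxy process
ulimit -v    # "unlimited" means no artificial cap is set
```

If the host is already close to that 16 GB minimum, configuration changes alone will not create more memory.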

You might want to consider moving to a cloud version of Galaxy if you plan to process large data. See:

Dear jennaj,

I tried to use Galaxy.org and got the same problem again. Are there any other options? Is there a way to segment the dataset and then pool the samples after the alignment?
This is a University Laboratory project on 16S rRNA.

Best

N

2 posts were split to a new topic: Troubleshooting Unique.seqs: Status: resolved, please rerun prior failures