It looks like the tool is running out of memory during job processing.
This could be caused by mismatched inputs or by inputs that are too large to process on your Galaxy server.
Has the tool worked before, or is this the first time using it? You might want to double-check that the Mothur tool suite is installed correctly and all dependencies are intact.
There could be a format/content problem with the input, or the job really is running out of memory during execution.
Breaking the dataset up into smaller chunks would not work well with this tool.
What happens if you run the same job at a public Galaxy server? You can try Galaxy Main https://usegalaxy.org or Galaxy EU https://usegalaxy.eu. If the tool errors, you can generate a share link to the history and post it back here, along with the dataset numbers involved, for public feedback. Or, ask for feedback privately by sending in a bug report from the red error dataset; please include a link to this post in the comments so the two support questions can be associated for context.
Many thanks for your reply again. I might not be able to run the same job online, as my dataset is large and I might exceed the available space. I am running the same calls again with a smaller dataset. Is there any way to increase the memory?
[UPDATE]
I finished running the same calls with a smaller dataset and the commands were successful. It might be, then, that I am running out of space.
Ok, that does help to confirm that memory is the root problem.
Please see the Galaxy administrator help for how to increase memory allocation for tools. This tuning is usually most appropriate for those running jobs on a cluster, which might not be your case.
By default, when running Galaxy locally, job memory is limited only by the hardware available on your computer. 16 GB is considered the minimum, but many tools, and certainly large datasets, can need more.
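For reference, if Galaxy is dispatching jobs to a cluster, per-tool memory is usually raised in the job configuration. Below is a minimal sketch of a legacy-style job_conf.xml, assuming a Slurm scheduler and using a simplified placeholder tool id for the Mothur aligner; adjust the runner, memory values, and tool id to match your own setup.

```xml
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <!-- Load the Slurm runner (assumes jobs are dispatched to a Slurm cluster) -->
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
    </plugins>
    <destinations default="slurm_default">
        <!-- Default destination: 16 GB per job -->
        <destination id="slurm_default" runner="slurm">
            <param id="nativeSpecification">--mem=16G</param>
        </destination>
        <!-- High-memory destination for memory-hungry tools -->
        <destination id="slurm_highmem" runner="slurm">
            <param id="nativeSpecification">--mem=64G</param>
        </destination>
    </destinations>
    <tools>
        <!-- Route the Mothur aligner to the high-memory destination.
             The tool id below is a placeholder; use the full id shown
             in your Galaxy admin panel (e.g. the Tool Shed GUID). -->
        <tool id="mothur_align_seqs" destination="slurm_highmem"/>
    </tools>
</job_conf>
```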
You might want to consider moving to a cloud version of Galaxy if you plan to process large data. See:
I tried to use Galaxy.org and I got the same problem again. Are there any other options? Is there a way to segment the dataset and then pool the samples after the alignment?
This is a University Laboratory project on 16S rRNA.