If you didn’t filter the assembly result, you might try that first. Any assembly fragment that is just a single read from the original input can usually be filtered out by length. This will reduce the assembly size while preserving meaningful content, and it removes the excess data that may be causing the downstream job to run out of resources (a guess!).
If the length filter is not enough, here is how to share the job information: what you posted is just the command string, but we need all of the inputs, parameters, and stdout/stderr logs. In context is best. So, share the history, or paste back most of the job details view with the datasets expanded to show the “peek” view and the full logs captured.
Any persistent problems can be reported in a new question for community help. Be sure to provide enough context so others can review the situation exactly and quickly offer advice.
I trimmed the original raw data with Cutadapt (seq length: 150 bp), then aligned it with BBMap and removed the host genome. I then assembled the BBMap unmapped output (seq length: 150 bp) with MEGAHIT.
Which length-cropping (filtering) tool can I use after assembly?
Also, if there were a problem with read trimming, how did the assembly succeed? Wouldn’t we expect it to fail?
But the metaSPAdes tool did not work, maybe because of the cropping process; you are right.
My point was about common issues with sending a very large assembly with many short fragments to an annotation tool. Filtering the assembly down to quality fragments, minimally defined by some minimum length, tends to be the solution.
If the reads fail to assemble with some tools, that is another clue that the assembly that did work might contain those sorts of single-read fragments, so removing them may help.
Thank you Jennaj, QUAST worked fine when I tried again, even without length cropping. However, the information you provided is very valuable, thank you very much.