Job failure in one element of a list stops the whole pipeline

Dear everyone!

I am currently analysing the output of paired-end (PE) sequencing of 96 samples. I uploaded the reads and grouped them into a list (dataset collection) to use the tools more efficiently. I am running a workflow that uses Snippy, Snippy-core, Clustal, and so on. My problem is that 4 of the 96 Snippy jobs failed due to memory allocation problems, which interrupted the whole workflow. Even if I delete the failed runs, the output list is marked in red and I can't resume the workflow.
My question is: is there a way for me to rerun the failed Snippy jobs and include their output in the list of successful ones, so I can continue with my workflow?
I have thought of downloading the output of the successful runs, rerunning the failed ones, downloading those outputs as well, and then re-uploading everything to build a new list, but the Snippy outputs and the BAM files are 54 GB each, far too large for me to transfer.
I really appreciate your assistance with this :slight_smile:

Hi @Alan

There are two primary solutions for this situation:

  1. Instead of deleting the failed jobs, rerun them. The bottom of the tool form will have an extra option to resume dependencies in a way that replaces the original failed datasets (see the sketch after this list for an API-based equivalent).

  2. Remove failures during the original workflow run, using the tools that manipulate elements within a collection (for example, the "Filter failed datasets" tool).
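
If you prefer to script option 1, below is a minimal sketch using BioBlend, the Python client for the Galaxy API. The server URL, API key, and history ID are placeholders, and it assumes a BioBlend version that provides `jobs.rerun_job` with the `remap` option, which is the API counterpart of the "resume dependencies" setting on the tool form:

```python
from bioblend.galaxy import GalaxyInstance

# Placeholders -- substitute your own Galaxy server, API key, and history ID.
gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")
history_id = "YOUR_HISTORY_ID"

# Find the datasets in the history that ended up in the error state.
failed = gi.datasets.get_datasets(history_id=history_id, state="error")

# Each dataset records the job that created it. Rerunning that job with
# remap=True replaces the failed outputs in place (including collection
# elements), which lets the paused downstream workflow steps resume.
for ds in failed:
    creating_job = gi.datasets.show_dataset(ds["id"])["creating_job"]
    gi.jobs.rerun_job(creating_job, remap=True)
```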

More workflow and collection help is available here, including examples of programmatically controlled workflow execution.
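
For reference, here is a hedged sketch of what launching a workflow on a collection looks like through the same API; the workflow name `snippy-to-clustal` and the credentials are made up for illustration, and the script assumes the input collection is the only one in the history:

```python
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")
history_id = "YOUR_HISTORY_ID"

# Look up the workflow by name and the input collection in the history.
workflow = gi.workflows.get_workflows(name="snippy-to-clustal")[0]
collection = gi.histories.show_history(
    history_id, contents=True, types=["dataset_collection"]
)[0]

# Map the workflow's first input step to the collection
# ("hdca" = history dataset collection association) and launch it.
inputs = {"0": {"id": collection["id"], "src": "hdca"}}
invocation = gi.workflows.invoke_workflow(
    workflow["id"], inputs=inputs, history_id=history_id
)
print("Invocation id:", invocation["id"])
```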