Failed Trinity assembly

Hi everyone,
I’m having some trouble understanding what I’ve done wrong.
I want to assemble a transcriptome from several samples retrieved from NCBI. Since I had several forward and reverse read files, I used the Concatenate tool to obtain one dataset with all the forward reads and one with all the reverse reads. At that point I built a collection (with the build dataset pair function, named "All sequences-cleaned") and ran Trimmomatic and FastQC (both completed successfully).
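For anyone doing the same concatenation outside Galaxy, a minimal command-line sketch looks like the following. The tiny `printf`-generated FASTQ files are toy stand-ins for the real NCBI downloads, and all filenames are hypothetical:

```shell
# Toy FASTQ inputs (one read each) standing in for the real samples;
# in practice these would be your downloaded NCBI files.
printf '@s1.1\nACGT\n+\nIIII\n' > sample1_R1.fastq
printf '@s1.1\nTTTT\n+\nIIII\n' > sample1_R2.fastq
printf '@s2.1\nGGGG\n+\nIIII\n' > sample2_R1.fastq
printf '@s2.1\nCCCC\n+\nIIII\n' > sample2_R2.fastq

# Concatenate forward and reverse reads separately; the sample order
# MUST be identical in both commands so read pairs stay in sync.
cat sample1_R1.fastq sample2_R1.fastq > all_R1.fastq
cat sample1_R2.fastq sample2_R2.fastq > all_R2.fastq

# FASTQ stores one read per 4 lines, so both files must end up with
# the same line count.
wc -l all_R1.fastq all_R2.fastq
```

The ordering point is the one that actually matters for Trinity: if the samples are concatenated in different orders for R1 and R2, the pairing is silently broken.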

After that I ran Trinity with these parameters:
|Are you pooling sequence datasets?|No|
|Paired or Single-end data?|unmerged_paired_collection|
|FASTA/FASTQ dataset collection with R1/R2 pair|142: All sequences-cleaned|

This gave the following error:
Execution resulted in the following messages:
Fatal error: Exit code 2 ()

Detected Common Potential Problems

The tool was executed with one or more empty input datasets. This frequently results in tool errors due to problematic input choices.

Can someone help me to fix this problem?

Hi @Letizia_Iuffrida
Trinity can pool files within a single job, so there is no need for concatenation.
Maybe try an assembly on a single set of data (F and R reads). Does it work? If so, try Trinity with the pooling option set to Yes. I completed a test Trinity job on Galaxy Europe on several samples using the pooling option, and don’t see any issue with the tool.
As for the error message:
The tool was executed with one or more empty input datasets.
I don’t know whether it is generic (i.e., generated for any failed job) or specific, but please preview the input files.
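If you have local copies of the datasets, a quick sketch of that check on the command line (toy files stand in for the real ones here) would be:

```shell
# Quick diagnostic for the "empty input datasets" message: confirm each
# input is non-empty and that the forward/reverse read counts agree.
# Toy one-read files stand in for the real datasets.
printf '@r1\nACGT\n+\nIIII\n' > all_R1.fastq
printf '@r1\nTGCA\n+\nIIII\n' > all_R2.fastq

# Flag any zero-byte input (-s tests for "exists and non-empty").
for f in all_R1.fastq all_R2.fastq; do
  [ -s "$f" ] || echo "EMPTY: $f"
done

# FASTQ is 4 lines per read, so the two counts must match for a
# valid pair.
r1=$(( $(wc -l < all_R1.fastq) / 4 ))
r2=$(( $(wc -l < all_R2.fastq) / 4 ))
if [ "$r1" -eq "$r2" ]; then
  echo "pairing OK: $r1 read pair(s)"
else
  echo "MISMATCH: $r1 forward vs $r2 reverse reads"
fi
```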
Kind regards,

Thanks @igor for the reply. I used the Concatenate tool to simplify the preparation, since I have more than ten samples and it’s easier for me to have just one dataset with all the R reads and one with all the F reads. Anyway, I’ll try running Trinity on a single sample.
I checked the input files and they seem OK to me.
I was also wondering: could the problem be related to the fact that I’m trying to assemble about 45–50 GB of data, which may be too heavy for the server? I read on the forum that this might be the issue…
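One thing worth checking locally before launching the job is the combined input size. A minimal sketch (the two `head -c` files are toy stand-ins for real gzipped FASTQ parts):

```shell
# Create two 1 KiB dummy files standing in for real FASTQ inputs.
head -c 1024 /dev/zero > part1.fastq
head -c 1024 /dev/zero > part2.fastq

# -c adds a grand total line, -h prints human-readable sizes; the last
# line is the combined size of everything matched by the glob.
du -ch part*.fastq | tail -n 1
```

On a shared public server, inputs in the tens of gigabytes can exceed per-job memory or walltime limits even when each individual step succeeds, which is why the admins are the right people to confirm the ceiling.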

Hi @Letizia_Iuffrida
Do you mean the total size of the gzipped FASTQ files is 50 GB? It might be too big, but you would need feedback from the server admins. I assume in silico normalization of reads was enabled. Maybe consider running khmer: Normalize By Median on the merged files before Trinity.
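On the command line, khmer’s paired mode is commonly fed a single interleaved file, so interleaving R1/R2 is the usual first step. A minimal sketch with toy one-read inputs (the `normalize-by-median.py` call is shown as a comment, since khmer may not be installed where this runs):

```shell
# Toy R1/R2 inputs standing in for the real merged datasets.
printf '@r1/1\nACGT\n+\nIIII\n' > all_R1.fastq
printf '@r1/2\nTGCA\n+\nIIII\n' > all_R2.fastq

# 'paste - - - -' joins each 4-line FASTQ record onto one tab-separated
# line; pasting the two streams side by side and converting tabs back
# to newlines yields R1,R2,R1,R2... record order (safe because FASTQ
# never contains literal tabs).
paste <(paste - - - - < all_R1.fastq) \
      <(paste - - - - < all_R2.fastq) | tr '\t' '\n' > interleaved.fastq

wc -l interleaved.fastq   # 8 lines = one interleaved read pair

# The digital-normalization step itself would then be something like
# (shown for reference; requires khmer, and the k/coverage values are
# only illustrative defaults):
#   normalize-by-median.py -p -k 20 -C 20 -o normalized.fastq interleaved.fastq
```

Reducing redundant coverage this way can shrink the input substantially before assembly, which is the point of running it ahead of Trinity on a size-constrained server.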
Kind regards,