workflow trim sequences: premature-end of file

Hi, my name is Julia.
When I'm using my workflow for the analysis of Read 1 of my fastq files, it stops at the trim sequences step and says "premature end of file" and that my input file is empty, even though I just uploaded the fastq file and it is 28 MB, as shown in my history.
If I'm using different settings, the workflow works with the same input files.
My problem is that I want the workflow to work with the settings I need, but at the moment it always aborts the analysis.
Can anyone help me with that problem?

Thank you!


Hi @Juliaesser

This kind of problem can happen when a dataset has not fully loaded, was truncated upstream (before loading to Galaxy) or during other data manipulations in Galaxy, or has an incorrect datatype assigned. However, the fact that the same data is readable when you modify the workflow settings (or inputs?) suggests that is not your issue.

Mixing compressed/uncompressed inputs, submitted together in a batch (collection or multiple individual datasets), might be the problem. If you are submitting the data in a dataset collection, all collection datasets need to have the same datatype assignment.

Check to make sure all datasets grouped together for the same tool "input" are either all compressed or all uncompressed, and are assigned a datatype that matches the actual data format/content.

This is just a “best guess” about what is going wrong. So please check, adjust data if needed, then write back if you need more help. An issue with the workflow settings, or some upstream tool, would be the next troubleshooting step.
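As a supplement to the checks above: if you have a local copy of the file, you can rule out truncation before re-uploading. This is just a sketch with standard shell tools (gzip, wc); the demo filename is made up, so substitute your own file and skip the first two lines that create the demo data.

```shell
# Create a tiny one-read FASTQ and gzip it (stand-in for your real file;
# replace demo_R1.fastq.gz below with your actual filename).
printf '@read1\nACGT\n+\nIIII\n' > demo_R1.fastq
gzip -f demo_R1.fastq   # produces demo_R1.fastq.gz

# 1) If the file is gzipped, test the stream for truncation/corruption.
gzip -t demo_R1.fastq.gz && echo "gzip stream OK"

# 2) A valid FASTQ has a line count divisible by 4 (4 lines per read).
lines=$(gzip -dc demo_R1.fastq.gz | wc -l)
if [ $((lines % 4)) -eq 0 ]; then
  echo "line count OK: $lines lines"
else
  echo "possible truncation: $lines lines is not a multiple of 4"
fi
```

If `gzip -t` fails or the line count is not a multiple of 4, the file was damaged before it ever reached Galaxy, and re-exporting it from the original source is the fix.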

Thanks!

FAQs for datatypes/metadata:

Thank you so much for your help,

unfortunately it still doesn't work. I re-uploaded the unzipped fastq files (unzipped them in the USB folder) directly from the USB stick we originally took the data from.

I tried running the workflow with a single input dataset and double-checked the format.

What else can I do?

Thank you for your help!