I have been running a few different workflows successfully as of late (read alignment/quantification against the transcriptome with Salmon). Recently, however, my fasta.gz transcriptome input file seems to be deleted at random in the middle of the workflow, resulting in the Salmon error "Input dataset 'gencode.vM23.transcripts.fa uncompressed' was deleted before the job started." I have previously run this same Salmon workflow with the same transcriptome file without any issue.
Within the same workflow, I am also running a genome-based alignment (STAR), which uses an input GTF as a gene model. That part of the workflow executes successfully with no issue.
I also noticed that when I attempt to rerun the workflow using the same input history, I have to reselect the transcriptome file before rerunning, since it claims the file no longer exists. I just reselect the same file from the history; I do not re-upload the fasta.gz.
I'm wondering whether this is a known bug that can be resolved by re-uploading the file, or whether there is something in my workflow setup that is causing this issue.
Any help would be deeply appreciated!