There was a small cluster issue earlier at UseGalaxy.org, and a rerun might resolve the error you had. So, please try that as well. It seems the tool could not locate the input file.
If the rerun also fails, then start by checking your input fastq file. Does it contain data? Or did the Upload fail?
We can follow up more if you cannot resolve this. A shared history link would be helpful. Instructions for sharing are in the banner at this forum; you can post the link back here, then unshare once we are done.
Thanks for your help. I re-uploaded my fastq files and re-ran the pipeline, and an error was reported again. The details of my history have been shared at this URL: Galaxy | GalaxyTrakr 🧬🔬.
Thanks for sharing the history, very helpful! I see the problem. The labels for the assemblies in the Quast report were slightly different from the names of the assemblies in the collection folder. Tools make "exact matches" on common identifiers between files, so even a small difference is enough to break the match.
You can adjust the labels assigned to that data using tools from the Collection Operations tool group, along with some simple text manipulation tools.
I’ve done this for you as an example in a copy of your history. You can click to import it, or just review it and then try it in your own history. You can extract those steps into a mini-workflow for any data with a similar naming problem.
The first step, where I extracted the current element identifier labels, is how you can “check” future collections. These labels can be manipulated any way you want as long as each label is a single word (like allOne_Word2) with no special characters other than underscores.
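For reference, that naming rule can be sketched in plain Python (outside Galaxy). This is just an illustration of the transformation, not a Galaxy tool; the example label is made up:

```python
import re

def sanitize_label(label):
    """Collapse every run of characters that is not a letter, digit,
    or underscore into a single underscore, then trim any leading or
    trailing underscores, leaving one safe 'word'."""
    return re.sub(r"[^A-Za-z0-9_]+", "_", label).strip("_")

print(sanitize_label("sample 1 (assembly).fasta"))
# sample_1_assembly_fasta
```

Inside Galaxy, the same kind of change can be made with the simple text manipulation tools mentioned above.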
Hi Jennifer,
Much appreciated for your kind help. I reviewed your imported history and tried it in my own history. But for the step “196 data 193, data 135, and data 134 (relabelled)” in your imported history, I have no idea how to run it in my history. Could you please help me?
And once I finish this step, can I use the resulting dataset directly to run the CFSAN SNP pipeline?
Notice that I extracted the existing labels, made a format change with another tool, combined the existing labels with the changed labels, then used the result as a mapping file with the relabel tool. If you click on the rerun icon for each of those steps, you can see which tool I used and how it was used.
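To make the shape of that mapping file concrete, here is a minimal Python sketch of the same idea: pair each existing label with its reformatted version, one pair per line, tab-separated. The labels and the exact format change here are made-up examples, not your actual data:

```python
# Hypothetical existing element identifiers, as extracted from the collection.
old_labels = ["sample 1.fasta", "sample 2.fasta"]

def reformat(label):
    # Example format change: drop the extension and replace spaces
    # with underscores so the label is a single safe word.
    return label.rsplit(".", 1)[0].replace(" ", "_")

# Two-column mapping file: old label <TAB> new label.
with open("mapping.tsv", "w") as out:
    for old in old_labels:
        out.write(f"{old}\t{reformat(old)}\n")
```

In Galaxy the equivalent file is built with the text manipulation and collection tools, then fed to the relabel step.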
Datasets 191, 192, 193, 196 are where I did the steps above. I’ve clicked on dataset 191 and shown where the rerun icon is for that one. Notice how it brings up the original tool form, set up exactly how the job was originally run.
Yes, that is what I did. It seemed to work correctly.
Then, for your history here:
Notice how a different tool was used for the element extractions, and that the adjustment hasn’t been made yet: update those labels, merge the original labels with the new ones, then relabel. Since that new collection is very similar to the example I used before, you could extract just those steps into a mini-workflow and reuse it.
Uncheck all steps except those involved in the manipulation. The input of the workflow will be a collection that you choose at runtime, and the output will be that collection with the new labels.
Please give that a try, and if something doesn’t go right you can share a history again along with the workflow you created. Make sure the history contains 1) the original collection and the matched Quast report, and 2) the output from the workflow. You don’t need to run the CFSAN tool until the collection’s labels are correct (you can check by extracting the new labels again after running the workflow).