I am running the tool sRNApipe. The process interrupts and is returning the error message:
“The tool was started with one or more duplicate input datasets. This frequently results in tool errors due to problematic input choices.”
However, I checked that only one input dataset was given, so I don’t understand what is going wrong. It happens repeatedly, even with a dataset that has worked previously as a control.
Maybe share the history so people can check what is going on. If you have many datasets, copy the input data into a new history and submit the job from there.
To share a history, click History options (the three horizontal bars icon) in the top right corner > Share and manage access > Make history accessible (in the middle of the window), then copy and paste the link into your reply.
The tool form has a minimum of 5 required inputs and 3 optional inputs.
The first input is your query fastq sequences file.
The next four inputs are required reference data. Only the first of these, the reference genome, will be hosted natively at a public Galaxy server; the transcripts, TE, and miRNA data all need to be supplied by you from the history.
The last three inputs are optional reference data (also from the history): snRNAs, rRNAs, and tRNAs.
So, based on that, I think I see the problem you are having here.
If you have only set the query sequences input so far, then the other four required inputs are defaulting to a “built-in index” on the server. Since these are all reference genomes, at least two are probably currently set to the same reference genome. Or maybe your fastq sequence input is being selected from the history again for one of these. This is being detected and is resulting in your error message.
I think this is a correct message, but you can double-check! Go to the error dataset, then click on the pencil icon. From there, click the “Details” tab to review more information about your job, including a listing of the inputs and parameters used. Can you spot a duplicated input choice in this view? If yes, try to correct it when you rerun.
I tried to trigger a few errors in the testing history above as examples. You’ll be able to compare any of the example jobs’ Details to your own job’s Details to see where things may have gone wrong. If you get stuck, you are still welcome to post back your history share link and we can try to help with more troubleshooting (you can unshare after we are done). Screenshots of those same views can sometimes work too; just be sure to capture the full server URL and the tool name/version.