Same error time and again

I get the same error while running either HISAT2 or DESeq2: “The tool was started with one or more duplicate input datasets. This frequently results in tool errors due to problematic input choices.” It keeps coming up, then magically works with the same parameters and dataset after a few tries. I don’t know what the problem is, or why it fails one time and not another. (The jobs fail more often when they go straight from grey to red, not when they go from grey to yellow with a loading sign and then turn green.)

Hi @aish.sxn

This does sound odd. Please share your history back so we can help more. Noting which datasets are involved also helps. Try not to delete anything yet. See: Troubleshooting errors

Hi there. Here is my history. Unfortunately, I already deleted all of it!

https://usegalaxy.org/api/datasets/f9cad7b01a4721352091d62b1738f518/display?to_ext=tabular

Hmm, that’s a link to a dataset download. It is better to share the data in the ways described below, which keeps all of the metadata intact (and that usually matters).

Try to recreate the work, since the error was probably related to a small cluster issue that is now resolved.

If the error shows up again, or you later hit an error you want help troubleshooting, you’ll need to share back enough information that others can help. You can start your question that way; you don’t need to wait for us to ask. Some moderators can “guess” with very little info when it is a common problem, but that usually means the solution is already all over this forum, so you don’t need to wait for feedback in the first place. Use a search instead.

What we need (also covered in the Troubleshooting errors guide linked above):

  1. Leave the datasets that the question is about undeleted.
  2. Set the history to a shared state.
  3. Then either:
  • copy/paste back the details from the “i” info icon for an error dataset. This limits what the public can see; admins can see more, but they can do that at any time anyway :construction_worker_woman:
  • copy/paste back the share link to the history, and note which datasets are involved. You can always unshare when we are done, and more people can give you feedback this way.

The idea behind posting back to the topic is to make sure we are looking at the exact same problem, to create a record of error use cases with example solutions, and potentially to squash bugs. In short: “helping future readers while getting help now”.

Thank you so much for your response. I am still learning how to use Galaxy. I just copied the dataset into a new history and redid the steps (in my case concatenation > HISAT2 > featureCounts > DESeq2) all over again. I had 7 conditions and wanted to work in a different history for each. At first I was only having issues with DESeq2, but later I started seeing trouble in all the steps. I wondered if it was just my internet connection or something, because the tasks would go from grey to red without ever transitioning into yellow. I redid the analysis from another internet connection, made sure only one task was running at a time, and it seems to have worked. Apologies for deleting the data that didn’t work; I was getting lost in all the red steps. I’ll share the data better next time. By the way, I did check the (i) icon: sometimes it wasn’t reporting a bug at all, and when it was, it just stated that the input dataset was duplicated (when it wasn’t).
I have now retained some jobs that didn’t work!

Ok, thanks for explaining. Let’s back up a bit. It is great that you are learning how to do bioinformatics, and the trouble you are having is common when people first start out. But there are quick ways to get oriented that you should really consider!

All of your data and jobs live on the remote web server. Your internet connection is only involved in local Upload steps, and perhaps in how responsive the interface feels when clicking around.

Running one job at a time across separate histories sounds tedious and isn’t needed. Put your data into collections, use a workflow, and run everything in batch in the same history. The server knows how to execute that kind of work correctly, and most of the errors/mix-ups will “go away”. :slight_smile:
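Collections and workflows are all point-and-click in the interface, so no scripting is required, but for reference the same batch pattern can also be driven through the Galaxy API. Below is a minimal sketch using BioBlend, the Python client for that API; the API key, workflow ID, and dataset IDs are placeholders, and the step-input mapping is an assumption you would adapt to your own workflow.

```python
# Minimal sketch, assuming BioBlend is installed (pip install bioblend).
# YOUR_API_KEY, workflow_id, and uploaded_dataset_ids are placeholders.
from bioblend.galaxy import GalaxyInstance
from bioblend.galaxy.dataset_collections import (
    CollectionDescription,
    HistoryDatasetElement,
)

gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

workflow_id = "<workflow_id>"                           # placeholder: an imported workflow
uploaded_dataset_ids = ["<dataset_id_1>", "<dataset_id_2>"]  # placeholder: IDs from your history

# One history for the whole batch, instead of one history per condition.
history = gi.histories.create_history(name="RNA-seq batch")

# Group the per-condition datasets into a list collection, so downstream
# tools map over every element at once.
elements = [
    HistoryDatasetElement(name=f"condition_{i}", id=dataset_id)
    for i, dataset_id in enumerate(uploaded_dataset_ids, start=1)
]
collection = gi.histories.create_dataset_collection(
    history["id"],
    CollectionDescription(name="all_conditions", type="list", elements=elements),
)

# Invoke the workflow (e.g. HISAT2 -> featureCounts -> DESeq2) once;
# Galaxy schedules a job per collection element automatically.
gi.workflows.invoke_workflow(
    workflow_id,
    inputs={"0": {"src": "hdca", "id": collection["id"]}},
    history_id=history["id"],
)
```

The point is the shape of the work: one history, one collection, one workflow invocation, and the server fans the jobs out per sample on its own.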

I would strongly recommend that you run through a few tutorials! In less than a day you will learn how to run analyses in batch with better organization. There are existing workflows, including these exact tools, that you could import and use.

If you did this, the entire batch with 7 samples could be started with a few clicks, and would probably complete within a day, maybe two.

The training event materials from last May are still current, and their organization is less overwhelming than the training site as a whole. Most courses have (optional) closed-captioned videos that walk through the material, and you don’t need any prior experience to use them. Tens of thousands of brand-new people run through these every year and find them well worth it.

Start with Day 1 and things will become much less confusing: Smörgåsbord 2023

Note: the dedicated Slack for the event is not open outside of event dates, but you can ask new questions here at the forum or in the GTN chat. Share the link to the tutorial and to your shared history, and explain what the problem is so we can help quickly. Later on, you can ask for help in the same way when using your own data.