Since yesterday, 6 October 2025, I have been having trouble running tasks on https://nanopore.usegalaxy.eu. When I launched a task (Minimap2, Filter fasta), it remained greyed out for more than 24 hours. I had to relaunch these tasks today. Three of the five started running after a short time, but I had to restart the other two because they stayed greyed out and never started. When I relaunched exactly the same task (using the ‘Re-do’ button), it was finally executed. No error messages appeared; the tasks simply remained greyed out (and never turned blue).
The job queuing is expected. Please try to avoid deleting and restarting jobs, since that only puts your job back at the end of the queue. If you do this often enough, your job may never get the chance to move up the queue and actually run!
Example topic where this is explained, with many more under troubleshooting. In short, the public clusters are busy, processing several thousand jobs a day, and getting your jobs queued and leaving them queued is how to compete effectively for the shared public resources!
A workflow is the fastest way to queue many jobs at once, but you can still work tool by tool; just expect that to take a bit longer, since each step will be waiting for a person to start the next tool instead of streaming the work.
As of today, the Minimap2 statistics show about 10–12 jobs running concurrently and about 60 queued. The server is processing about 900 jobs in total at any given time.
Those 60 queued jobs will move into the ~10 running slots as cluster nodes are freed up by the current jobs. So make sure your jobs are in that queued set, or they won’t get a chance to run!
Thank you very much for your detailed response and the information. In the meantime, I think there was another problem. It was as if my jobs had never been queued, because other jobs (the same Minimap2) launched shortly afterwards did not wait long before running. I have several years of experience with Galaxy and have never encountered this kind of problem, except when our local Galaxy server restarted or crashed. It seemed more like a bug caused by the large number of jobs in the queue. Anyway, my jobs are done now.
The only other part of this that I can think of is that the public servers have “server-level” job queues, but also “account-level” and “tool-level” job queues. These concurrency limits are custom and responsive, based on administrative configuration, so the public servers might have different rule sets than a local cluster. But, that said, maybe there was some small issue and they reset something! They watch the discussions here, so asking is always OK, and if anything is big enough to need special instructions we can address it (possibly needing the specific job details via a shared history)!
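To make the idea of layered limits more concrete, here is a minimal, purely illustrative Python sketch. The limit names and numbers below are made up for the example and are not Galaxy’s actual configuration; the point is just that a job can sit “grey” in the queue without any error simply because one of the applicable limits has no headroom yet.

```python
from dataclasses import dataclass, field

# Conceptual sketch only: layered concurrency limits (server-wide,
# per-account, per-tool) gating when a queued job may start.
# All names and numbers here are hypothetical, not Galaxy's settings.

@dataclass
class Limits:
    server_concurrent: int = 900   # hypothetical server-wide cap
    user_concurrent: int = 30      # hypothetical per-account cap
    tool_concurrent: int = 12      # hypothetical per-tool cap

@dataclass
class Scheduler:
    limits: Limits
    running_total: int = 0
    running_by_user: dict = field(default_factory=dict)
    running_by_tool: dict = field(default_factory=dict)

    def can_dispatch(self, user: str, tool: str) -> bool:
        """A queued job only starts once every applicable limit has headroom."""
        return (
            self.running_total < self.limits.server_concurrent
            and self.running_by_user.get(user, 0) < self.limits.user_concurrent
            and self.running_by_tool.get(tool, 0) < self.limits.tool_concurrent
        )

    def dispatch(self, user: str, tool: str) -> bool:
        if not self.can_dispatch(user, tool):
            return False  # job stays queued (grey); no error is raised
        self.running_total += 1
        self.running_by_user[user] = self.running_by_user.get(user, 0) + 1
        self.running_by_tool[tool] = self.running_by_tool.get(tool, 0) + 1
        return True

if __name__ == "__main__":
    sched = Scheduler(Limits(tool_concurrent=2))
    # With a per-tool cap of 2, the third job of that tool simply waits.
    for i in range(3):
        started = sched.dispatch(user="alice", tool="minimap2")
        print(f"job {i + 1}: {'running' if started else 'queued'}")
```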
In any case, queued jobs would still keep their priority even if the servers needed to restart. So, while asking here, leaving the jobs as-is is what I would recommend while we discuss.
And, thanks for letting us know your jobs completed!!