Fastp not running: Confirmed job delays at UseGalaxy.org 09/23/2022. Solution: allow queued jobs to process

Fastp jobs that were queued over 16 hrs ago have still not started running.

Update 09/26/2022

The UseGalaxy.org server is still very busy.

Leave jobs queued for the fastest processing.


Update 09/23/2022

At usegalaxy.org, we were able to confirm the delays and have fixed a small technical issue.

Please leave queued jobs queued. Avoid deleting and rerunning as that will lose the job’s original place in the queue.

Thanks to everyone who reported the problem!


Hi @imperialadmiral121

The public servers might just be busy. You can check current status at https://status.galaxyproject.org/

Are you working at usegalaxy.org? There aren’t any known issues as of now, so this prior Q&A would apply. If you think more is going on, please post screenshots or share your history. If we find something on our own, we will post an update.

And someone else reported longer-than-usual delays at usegalaxy.eu a few hours ago – no update yet: FastQC not running

Thanks. A few jobs have executed after being queued (gray) for a long time, but there are still jobs that have been queued for close to 24 hrs. Since there are no reported server issues, I will assume that the public servers are just busy for now.


Jobs are still queued up after 2 days. I’m concerned that the issue has not yet been resolved.

I have been having the same issue. None of my programs (bowtie2, bwa-mem2, fastp, computeMatrix) are running across multiple histories, and everyone in my lab is having the same issue. I haven’t had a job execute in 3 days.

Some of my HISAT2 jobs that were queued for 72 hours finally executed but returned an error saying “Failed to communicate with remote job server.” Is this possibly related to the server delays? Is there any solution other than re-running them and waiting for them to reach the front of the queue?

I am also getting failed jobs, with the message, “Remote job server indicated a problem running or monitoring this job.”

Jobs that fail this way do need to be rerun. A small fraction of failures are transient cluster issues, even when the service is not super busy.

And while this doesn’t make jobs run faster, consider setting up an email notification on the tool form or when launching a workflow. That way you don’t need to manually check progress so often.
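
If you prefer checking from a script rather than the browser, job states can also be queried through the Galaxy API. Below is a minimal sketch using BioBlend (the Python client for the Galaxy API); the server URL and API key are placeholders for your own values, and it simply tallies your recent jobs by state and lists any that are still queued or have failed. It won’t show your position in the cluster queue, just the state of each job.

```python
# Minimal sketch: check Galaxy job states with BioBlend instead of refreshing
# the history page. Assumes `pip install bioblend` and a personal API key
# (User -> Preferences -> Manage API Key); the URL and key below are placeholders.
from collections import Counter

from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

# Fetch your recent jobs and tally them by state (e.g. new/queued/running/ok/error).
jobs = gi.jobs.get_jobs()
print(Counter(job["state"] for job in jobs))

# List jobs that are still waiting, oldest first.
for job in sorted(
    (j for j in jobs if j["state"] in ("new", "queued")),
    key=lambda j: j["create_time"],
):
    print(job["create_time"], job["tool_id"], job["id"])

# Jobs that ended in an error state need to be rerun
# (e.g. with the rerun button on the dataset in your history).
for job in (j for j in jobs if j["state"] == "error"):
    print("failed:", job["tool_id"], job["id"])
```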

Our admin is actively monitoring and managing the overall situation.

Last night FASTP worked great. Today I have waited over 12 hrs and my jobs are still in the queue. I see the systems are up, but I cannot see where I am in the queue. Is there a way to see how much longer it will take (or how many jobs are ahead of mine)? After such great service last night, I am concerned by all these older posts about waiting days for results. Thanks for the update!

Hi – let’s follow up in the newer topic you created. Thanks!