job in queue 72+ hours

Hi. New to Galaxy here. I submitted a few jobs about 72 hours ago, and they still haven’t started running. Is this typical or did I make an error somewhere?


I took a look at one of your queued jobs (Blastp) – there are input problems that will cause the job to fail once it does run. You should fix these, then rerun.

  1. Incorrect datatype – both datasets are in fasta format, but one is labeled as fastq. FAQ
  2. Both fasta datasets include description content on the title line. This won’t always cause errors at the mapping step, but it definitely can with many downstream tools. Standardizing the format at the start of an analysis is usually best. FAQ
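If you want to check these problems locally before re-uploading, here is a minimal sketch (the function names are my own, not part of Galaxy): it guesses fasta vs fastq from the first record character, and strips everything after the sequence ID on fasta title lines.

```python
def sniff_format(path):
    """Guess fasta vs fastq from the first character of the file."""
    with open(path) as fh:
        first = fh.readline()
    if first.startswith(">"):
        return "fasta"
    if first.startswith("@"):
        return "fastq"
    return "unknown"


def clean_fasta_titles(in_path, out_path):
    """Keep only the sequence ID on each '>' title line, dropping descriptions."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.startswith(">"):
                # split on the first whitespace; keep the ID part only
                line = line.split(None, 1)[0] + "\n"
            dst.write(line)
```

Note the fastq check is only a heuristic ("@" can also start a fasta description in malformed files), so treat it as a quick sanity check, not a validator.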

In one of your other histories, there is a queued Busco job that also has an input that is in fasta format but is labeled with a fastq datatype. Fix that as well, then rerun.

That same history has another fasta dataset that is uncompressed and appears to have the correct datatype assigned. However, it seems odd that some of your data is compressed and some is not for similar jobs. Confirm whether each dataset is actually compressed, then assign the correct datatype. Running QA tools can help catch format problems, for example FastQC. Galaxy will guess the datatype when data is uploaded, or you can use the redetect function after upload/manipulations, but this works best with uncompressed data for most formats (BAM is one exception).
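A quick way to confirm whether a file is really gzip-compressed, regardless of its extension, is to read the first two bytes and compare them against the gzip magic number. A minimal sketch (the function name is my own):

```python
def is_gzip(path):
    """Return True if the file starts with the gzip magic bytes 0x1f 0x8b."""
    with open(path, "rb") as fh:
        return fh.read(2) == b"\x1f\x8b"
```

If `is_gzip` returns False for a dataset named `*.fasta.gz` (or True for one named plain `*.fasta`), the extension and the actual content disagree, and the datatype in Galaxy should be corrected to match the content.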

How to check for input problems (before a job runs, or after it fails): FAQ

Hope that helps!

Hi @melT

The job queue for some tools has been very busy over the last few days at the public Galaxy server. This includes mapping and other computationally intensive tools.

The best strategy is to leave queued jobs queued. Avoid deleting and rerunning them, as all new jobs are placed at the end of the queue, which only extends the wait time.