Hi! Is IQ-Tree currently functional? I set up some jobs over 12 hours ago and they have yet to start running. I used the same input in FastTree and got results within minutes. I deleted yesterday’s jobs and resubmitted to IQ-Tree 2+ hours ago. Thanks in advance for an update on its status.
Hello @Laura_Harris
Yes, the tool is supported and working. It sounds like your job was queued. Try to leave queued jobs undisturbed: if you delete and restart, the queuing process starts over again, and if that is done quickly enough, the job may never get a chance to move up and process on a cluster node.
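In simplified terms, the queue behaves like a first-in, first-out line. Here is a toy Python sketch of that effect (the job names are made up, and the real scheduler weighs more factors than this):

```python
from collections import deque

# Jobs waiting on a cluster, oldest first. This is a toy FIFO view;
# real scheduling is more involved, but the effect on wait time is similar.
queue = deque(["job-A", "job-B", "your-job", "job-C"])

# Leaving the job alone: it moves toward the front as earlier jobs start.
queue.popleft()                # job-A starts running; "your-job" is now 2nd

# Deleting and resubmitting: the job loses its place and rejoins at the back.
queue.remove("your-job")
queue.append("your-job (resubmitted)")
print(list(queue))             # ['job-B', 'job-C', 'your-job (resubmitted)']
```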
Different tools have different processing requirements and so may route to different types of cluster nodes. Some of those nodes may be busier than others! This can change during the day and over time, and there isn’t a reliable way to estimate the wait at the individual job level.
The best advice we have is to get jobs queued (directly, or with a workflow), enable the optional notification, then come back once the work is completed.
Many more details are in topics like this one → queued-gray-datasets, with the general process explained there.
What to do
Try to leave newly queued jobs undisturbed so that they can process. You can queue up more work, too.
Let us know how this goes!
It has been 24 hours and they still haven’t started to run. I will just keep resubmitting the request while keeping the old ones undeleted, with the hope that someday it’ll work. Thanks…
??? I resubmitted and now it works… What you said makes no sense now, and I’m glad I didn’t listen and wait…
Your queued IQ-TREE jobs have been assigned to a cluster where you’re only allowed to have 4 jobs active (queued or running) at a time. Currently, you also have a large number of MAFFT jobs waiting to run on that cluster. Occasionally, when you’ve resubmitted your IQ-TREE jobs, they have been assigned to a different cluster where you’re not currently at that limit, so they ran immediately.
This is caused by a gap in our scheduling algorithm: when selecting destinations for your jobs, the scheduler is aware of the availability of compute resources at each destination, but not of your active job limits, so it sometimes selects a cluster that you cannot currently submit more jobs to over one that you can. It is a relatively rare case, and normally deleting and resubmitting your jobs is inadvisable because it just moves you to the back of the queue. In this specific instance it did not, since you happened to resubmit at a time when doing so assigned the jobs to a different cluster where they could run immediately.
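For the curious, the gap amounts to something like the following. This is a minimal Python sketch with made-up names (`Cluster`, `pick_destination`, and friends are illustrative only, not our actual scheduler code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cluster:
    name: str
    has_free_resources: bool  # resource availability the scheduler already checks
    user_active_jobs: int     # this user's queued + running jobs on the cluster
    user_job_limit: int       # per-user cap on active jobs there (e.g. 4)

def pick_destination(clusters: list[Cluster]) -> Optional[Cluster]:
    """Current behavior: consider compute availability only.

    This can select a cluster where the user is already at their
    active-job limit, so the new job just sits in the queue.
    """
    for c in clusters:
        if c.has_free_resources:
            return c
    return None

def pick_destination_limit_aware(clusters: list[Cluster]) -> Optional[Cluster]:
    """Possible fix: also skip clusters where the user is at their limit."""
    for c in clusters:
        if c.has_free_resources and c.user_active_jobs < c.user_job_limit:
            return c
    return None

# Example mirroring the situation above: cluster A has free resources but the
# user already has 4 active jobs there; cluster B also has capacity.
clusters = [
    Cluster("cluster-a", True, user_active_jobs=4, user_job_limit=4),
    Cluster("cluster-b", True, user_active_jobs=0, user_job_limit=4),
]
print(pick_destination(clusters).name)              # cluster-a (job waits)
print(pick_destination_limit_aware(clusters).name)  # cluster-b (job runs now)
```

The idea of the fix is simply to feed the per-user limit into the destination check, so the scheduler never parks a job on a cluster where it cannot start.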
I’ll be working on addressing this gap in the new year to avoid this kind of indeterminate and confusing behavior from the scheduler.
