Jobs automatically being paused

Hello!
I am running Roary for a pangenome analysis on Prokka output. The jobs are automatically being paused. I have tried several times to “resume jobs” via the history options, with no success.

Hi @feanor

Are the inputs in an error state? If so, then you’ll need to fix the upstream problem.

This may just need a rerun. A new option to “resume dependent jobs” will appear above the Submit button on the rerun tool form.

Please give that a try, thanks!

Hi @jennaj

The input (a collection of Prokka-annotated genomes) is not in an error state. I have tried to rerun the Roary tool, and again the jobs first show as queued and then are paused.

More screenshots attached. Thanks!



Sometimes an error in just one of the inputs can be hidden inside the collection, i.e. one of your 2024 gff3 inputs might be showing in red when you step inside the collection. Rerunning with “resume dependent jobs”, as @jennaj suggested, might fix your issue.
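If clicking through a collection that large is impractical, you can also check it programmatically. Here is a minimal sketch using the BioBlend API client; the Galaxy URL, API key and collection ID are placeholders you would need to replace, and the exact field names assume the current Galaxy collection API:

```python
# Minimal BioBlend sketch: list every element of a history dataset collection
# whose underlying dataset is not in the green/OK state (or has been deleted).
# URL, API key and collection ID below are placeholders.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.eu", key="YOUR_API_KEY")

# Fetch the collection description, including its elements.
collection = gi.dataset_collections.show_dataset_collection(
    "YOUR_COLLECTION_ID", instance_type="history"
)

# Report any element whose dataset is not 'ok' or is flagged as deleted.
for element in collection["elements"]:
    dataset = element["object"]
    if dataset.get("state") != "ok" or dataset.get("deleted"):
        print(
            element["element_index"],
            element["element_identifier"],
            dataset.get("state"),
            "deleted" if dataset.get("deleted") else "",
        )
```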
From the EU side we can also offer to take a look at this right in your history, but we’d need to know your EU user name to do so.

Hi @wm75

When I select “Run Job Again” for the paused Roary job, I see no option to “resume dependent jobs” as you and @jennaj suggested. Perhaps I am missing something. Here’s a screenshot of the “Run Job Again” page.

EDIT: Same for Galaxy Version 3.13.0+galaxy2; I have tried both the current and the previous version:

This is not about the paused job @feanor, but about the inputs to that job. Look inside the collection of gff3s!

Got it, thanks @wm75. I checked manually, and all gff3 files are in OK status (green). I also downloaded and inspected the collection; all files are present and contain data of the expected size.

I may have identified the issue. I used the Filter collection tool to filter out 3 gff files from my initial collection of 2027 gff files. I tried to run Roary on this filtered collection and it is queued but not paused. So this may be a bug of the Filter collection tool after all. Will update further.

EDIT: the job on the initial collection (n=2027) was also paused

@wm75 @jennaj

Is there anything else I could try? Should I share my user ID with you, as @wm75 suggested?

Thanks!

@feanor Yes, from my point of view further debugging would require me to take a look at your actual history, and for this I’d need you to share your history either via link or privately with me, as explained here. For sharing privately, you can use the email from my profile here on the help forum.
Alternatively, if your history also contains a failed job, you can simply submit a bug report for that dataset and mention in it what the bug is really about. The bug report will then allow me to discover your history, too.

Hi @wm75,

Where exactly can I get your email address? I can’t seem to find it here. Thanks again!

Ah sorry, that won’t work. Messaged you privately.

So, this turned out to be a non-trivial case, but:
The key to finding out what was wrong was to expand the view of one of your paused jobs. When you do that, it says: “Input dataset ‘IHIT32037’ was deleted before the job started. To resume this job fix the input dataset(s).”
That’s rather clear, but the challenge is to locate that dataset inside your large input collections. I did this via the advanced search interface (downwards-facing arrows at the top of the history panel), configured as shown in the screenshot (filter by name: IHIT32037; deleted: Yes).
This reveals two deleted datasets from two different collections (both actually sit at index 1494 inside their respective collections, but unfortunately the current UI has no way to show the deleted state from inside a collection, which is why @jennaj’s and my own earlier advice wasn’t really helpful). If you undelete these two datasets, you should be able to resume the paused jobs, or to run the tool again without hitting the paused state.
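For reference, the same search-and-undelete can also be scripted against the Galaxy API. This is a minimal sketch using BioBlend; the URL, API key and history ID are placeholders, and the parameter names assume a recent BioBlend release:

```python
# Minimal BioBlend sketch: find deleted datasets named IHIT32037 in the
# history, undelete them, then resume the paused jobs.
# URL, API key and history ID below are placeholders.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.eu", key="YOUR_API_KEY")
history_id = "YOUR_HISTORY_ID"

# Equivalent of the advanced history search (name: IHIT32037, deleted: Yes).
deleted_hits = gi.datasets.get_datasets(
    history_id=history_id, name="IHIT32037", deleted=True
)

# Undelete each match so the paused jobs can see their input again.
for hit in deleted_hits:
    gi.histories.update_dataset(history_id, hit["id"], deleted=False)

# Resume the paused jobs (the UI alternative is rerunning the tool with
# "resume dependent jobs").
for job in gi.jobs.get_jobs(history_id=history_id, state="paused"):
    gi.jobs.resume_job(job["id"])
```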

No idea how you arrived at this situation, but this should finally allow you to proceed.


Hi @wm75, this solved the issue and the Roary job ran normally, many thanks!
FYI the output files of the job were all empty. It seems that I need to check all my input files to avoid such issues.

Thanks again!