Cut Issue Related to Metagenomic

Welcome @Arif_Omar

We can probably help!

If your job is only queued, you can wait a bit longer, or you are welcome to generate a share link to your history and we can help confirm the state, including making sure that the inputs appear to be ready to process and appropriate for the tool/parameters used.

How to interpret the job state is below! Even “small” jobs can queue. Later on when using a workflow this won’t matter as much. :slight_smile:

Job status

The first item to check is whether the job is queued or if it is executing already. A queued job seems plausible for a 30-minute time frame, but an executing job lasting this long would be very odd!


Compare the color of your datasets to these job processing stages.

  • Grey: The job is queued. Allow this to complete!
  • Yellow: The job is executing. Allow this to complete!
  • Green: The job has completed successfully.
  • Red: The job has failed. Check your inputs and parameters with Help examples and GTN tutorials. Scroll to the bottom of the tool form to find these.
  • Light Blue: The job is paused. This indicates either an input has a problem or that you have exceeded the disk quota set by the administrator of the Galaxy instance you are working on.
  • Grey, Yellow, Grey again: The job is waiting to run due to admin re-run or an automatic fail-over to a longer-running cluster.

:warning: Don’t lose your queue placement! It is essential to allow queued jobs to remain queued, and to never interrupt an executing job. If you delete/re-run jobs, they are added back to the end of the queue again.
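The color-to-state mapping above can be summarized as a small lookup table. This is a hypothetical sketch for illustration only (the names `JOB_STATES` and `advice` are not part of Galaxy itself):

```python
# Hypothetical sketch: the dataset-color to job-state mapping from the
# list above, with the recommended action for each state.
JOB_STATES = {
    "grey": ("queued", "wait"),
    "yellow": ("executing", "wait"),
    "green": ("ok", "done"),
    "red": ("error", "check inputs and parameters"),
    "light blue": ("paused", "check inputs or disk quota"),
}

def advice(color: str) -> str:
    """Return 'state: action' for a given dataset color."""
    state, action = JOB_STATES[color.lower()]
    return f"{state}: {action}"
```

Note that grey and yellow both map to "wait", which is the whole point of the warning above: interrupting those jobs only sends them back to the end of the queue.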

Related FAQs


And here is another summary with some practical help:



Queued job state

From there, a queued job (gray in color) can be examined closer to learn more about why the job may be waiting to run!

Check the input to your Cut jobs.

  1. Is the upstream job still processing, or has it completed (green dataset)?
  2. Tools process once their inputs are available and there is a place on the cluster to run the job!
  3. Review these inputs to make sure the data is not empty, in an error state, or in a paused job state!

Your job queues as soon as the tool is submitted, even if the inputs aren't ready yet. This is one reason why using a workflow to queue everything at once is so helpful!
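That scheduling behavior can be sketched in a few lines. This is a simplified illustration, not Galaxy's actual scheduler; the `runnable` function and dataset names are hypothetical:

```python
# Hypothetical sketch: a job is queued the moment it is submitted, but it
# only becomes eligible to run once every input dataset is "ok" (green).

def runnable(job, dataset_states):
    """True when all of the job's input datasets have finished successfully."""
    return all(dataset_states.get(d) == "ok" for d in job["inputs"])

# The Cut job is queued immediately at submit time...
cut_job = {"tool": "Cut", "inputs": ["upstream_output"]}

# ...but is not dispatched while the upstream job is still running.
states = {"upstream_output": "running"}
print(runnable(cut_job, states))   # False: still waiting on the input

states["upstream_output"] = "ok"
print(runnable(cut_job, states))   # True: eligible to be sent to the cluster
```

So a grey Cut job downstream of a still-running tool is behaving exactly as designed: it is queued, waiting for its input to turn green.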


Executing job state

Executing jobs (yellow in color) have been sent out to a cluster node for processing. Galaxy captures some technical metrics about a job and the job environment while it is away, but the total job must complete before details are returned (outputs, detailed logs).

This is a bit different from running a tool on a single computer where everything is local and intermediate results can be inspected, or job logs reported in real-time.

Instead, Galaxy processes jobs across a distributed cluster network, similar to how nodes process jobs on a university cluster, or at a commercial cloud cluster.

Allowing a job to completely process is how to get results and logs!