Job running for more than an hour

Any admins present kindly check and allocate more memory if possible


11ac94870d0bb33a0c7895567614c98b

Welcome @Adhiti_R

Hopefully we can explain a bit more about how to use the public clusters!

Allowing the job to completely process through the different stages is important. The color of the dataset roughly translates into the processing stage. If a job is deleted and rerun, that new job goes back to the end of the queue and starts the process over!
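To make the "just wait" advice concrete, here is a minimal Python sketch of checking a job's state before deciding to rerun. The state names follow Galaxy's job lifecycle; in practice you would fetch the state through the Galaxy API (for example, BioBlend's `gi.jobs.show_job(job_id)["state"]`), which is omitted here, so treat this as an illustration rather than the exact API:

```python
# States a Galaxy job moves through (simplified).
QUEUED = {"new", "queued"}             # waiting for a cluster slot
RUNNING = {"running"}                  # a node has picked the job up
TERMINAL = {"ok", "error", "deleted"}  # finished, one way or another

def should_keep_waiting(state: str) -> bool:
    """Deleting and rerunning a job in a queued or running state only
    sends the new job back to the end of the queue, so waiting is
    usually the faster option."""
    return state in QUEUED | RUNNING

print(should_keep_waiting("queued"))  # True: let it sit in the queue
print(should_keep_waiting("ok"))      # False: it already finished
```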

Later on, you can try using a workflow to get more data processed more quickly. How? Workflows queue up all of your jobs at the same time, then your data “streams” through the tools in these flowing batches.
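A rough Python sketch of why that batching helps: every invocation is submitted up front, so the server can stream each dataset through the tools as cluster slots free up, instead of you launching tools one at a time. The `invoke` callable here is a hypothetical stand-in for a real submission call (such as BioBlend's `workflows.invoke_workflow`), not the actual Galaxy API:

```python
def queue_batch(invoke, dataset_ids):
    """Submit one workflow invocation per dataset, all at once.
    `invoke` is a stand-in for a real submission call and is assumed
    to return an invocation id."""
    return [invoke(ds) for ds in dataset_ids]

# Stub submission so the sketch runs without a Galaxy server.
submitted = []
def fake_invoke(dataset_id):
    submitted.append(dataset_id)
    return f"invocation-for-{dataset_id}"

ids = queue_batch(fake_invoke, ["d1", "d2", "d3"])
print(ids)  # all three invocations are queued before any one finishes
```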


As for this part of your question – good observation! More memory can “help” sometimes, but only for certain steps.

How much memory is allocated on the cluster node a job is sent to is mostly unrelated to these general job stages. But if you run into an error like exceeds-memory-error, and technical issues have been ruled out, we can review the current allocation and possibly make changes based on your use-case example!

We’ll need to know the URL of the public server you are working on (find this at the top of your browser window!). A shared history link is a bit better than a job ID, since it allows more people to offer advice. Maybe a parameter change or a slight re-ordering of the protocol will help the tool process your data, or maybe there is a different public Galaxy server that specializes in the kind of analysis you are doing.

Please let us know if this helps or not! Maybe I misunderstood your primary concern? Follow-up questions are welcome! :slight_smile: