ClustalW computing power?

I have begun a ClustalW run for 8 phage genomes, each approx. 136 kb long, with the standard settings. Is it normal that it has been running for almost 6 hours now? When should it normally be expected to finish?
What computational power is at a user’s disposal for such a calculation? The help topics list only the available storage space; the computational resources (RAM, CPU) are not mentioned. Is it advisable to use a cloud service instead (to be faster, for instance)?
I will be grateful for any useful information in this regard.


Welcome @svab.domonkos

The tool is allocated maximum resources. See this FAQ for cluster details: Using Galaxy Main

Your data is either queued (gray) or executing (yellow). For both, allow the process to complete. Avoid deleting/rerunning, as this only extends the wait time. FAQ: Datasets and how jobs execute

If it errors, then you’ll know for certain whether the data is too large to execute on the public server (it probably is, due to the sequence lengths in your case, not the number of fasta inputs). Memory or walltime (runtime) will be exceeded and reported as the error reason. FAQ: My job ended with an error. What can I do?
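For a rough sense of why the sequence lengths matter here (my own back-of-the-envelope estimate, not something the tool reports): progressive aligners like ClustalW fill a dynamic-programming matrix of roughly L × L cells for each pair of sequences, so the pairwise stage alone grows quadratically with sequence length. A minimal sketch, using the 8 sequences and ~136 kb length from your post:

```python
# Back-of-the-envelope estimate of the pairwise DP work in a ClustalW run.
# The 8 sequences and ~136 kb length come from the question above; the
# cell-count model is a rough approximation, not a measured figure.

def pairwise_dp_cells(n_seqs: int, seq_len: int) -> int:
    """Total DP matrix cells across all pairwise alignments."""
    n_pairs = n_seqs * (n_seqs - 1) // 2  # 8 sequences -> 28 pairs
    return n_pairs * seq_len * seq_len    # ~L*L cells per pair

cells = pairwise_dp_cells(8, 136_000)
print(f"{cells:.2e} DP matrix cells")  # on the order of 5e11
```

Hundreds of billions of matrix cells is why a multi-hour runtime, and an eventual memory or walltime error, are plausible outcomes at this input size even on a well-resourced cluster node.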

This all assumes that your inputs are formatted correctly. This prior Q&A has help concerning formats for this and a few related tools: Logo tool in Galaxy

Let’s start there and follow up as needed if your job fails. Or, you can move to a cloud Galaxy instance instead, if the error messages are clear enough that the problem is related to job resources.