There isn’t a way to predict with confidence how long any particular job (any tool) will run, since this depends on several factors:
For the queued phase (grey dataset):

- how busy the server is
- how busy the connected clusters that run that tool are
- how many other jobs you have queued

For the executing phase (peach/yellow dataset):

- cluster runtime resources
- characteristics of the inputs
- parameters chosen
- how the algorithm's run time scales with different inputs/parameters
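The two phases above correspond to the dataset colors you see in the history panel. As a minimal sketch, the mapping could be expressed like this — the state names follow Galaxy's job states (`new`, `queued`, `running`, and so on), but the grouping into a "waiting" set and the helper function itself are my own illustration, not Galaxy code:

```python
# Hypothetical helper: map a Galaxy job state string to the phase
# described above (grey = queued, peach/yellow = executing).
QUEUED_STATES = {"new", "waiting", "queued"}   # grey datasets
RUNNING_STATES = {"running"}                   # peach/yellow datasets

def phase_for_state(state: str) -> str:
    """Return which runtime phase a Galaxy job state falls into."""
    if state in QUEUED_STATES:
        return "queued (grey)"
    if state in RUNNING_STATES:
        return "executing (peach/yellow)"
    return "finished or failed"

print(phase_for_state("queued"))   # still waiting on server/cluster load
print(phase_for_state("running"))  # runtime now depends on inputs/params
```

Only the executing phase reflects your inputs and parameters; time spent grey is entirely about server and cluster load, which is why the same job can queue for very different lengths of time on different days.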
The public computational resources at any of the UseGalaxy servers are significant, so you are working on a good server. Using workflows, and making use of the notification system, is one way to track overall progress.
I entirely agree with your view. This job was executed previously. I ran the same job again, but some parameter was wrongly adjusted at execution time, so I purged those jobs. If any purged job is still active on my account, please close it; only keep jobs 19 and 20.
Yes, purging the jobs is how to clear them from the queue. Glad you got this started up again with better parameters.
It is usually best to allow a job to reach that failed state, since otherwise the logs are lost. But 14 days of runtime is a pretty good clue that something is wrong, and taking a look at the inputs is the next step. Then, if you notice something wrong, you can certainly cancel what is running and start up a new run.
This job has taken almost 7-9 days with default parameters for a small genome. If the genome size is larger, it will take even more time. Is it a good idea to purge this job?