Job execution timing and computational resources: any tool, any public Galaxy server

Hi admin,

How can I monitor this job to determine how many days it will take to complete?

Could I please request your assistance with this?

Here is the job information:

The job was run on usegalaxy.org.eu.

User ID: h_1994

Number: 19
Name: necat on data 16: corrected reads
Created: Tuesday Jul 2nd 20:32:44 2024 GMT+5:30
Filesize: -
Dbkey: ?
Format: fasta.gz
History Content API ID: 4838ba20a6d867657c772e59cbfc4aab
History API ID: 7b8db1ba941fd929
UUID: 6b89ce33-49ca-4878-b267-ba75d2734fe0
Full Path: /data/dnb10/galaxy_db/files/6/b/8/dataset_6b89ce33-49ca-4878-b267-ba75d2734fe0.dat
Originally Created From a File Named: cns_final.fasta.gz

Job Information

Galaxy Tool ID: toolshed.g2.bx.psu.edu/repos/iuc/necat/necat/0.0.1_update20200803+galaxy0
Job State: running
Command Line: cp '/data/dnb10/galaxy_db/files/1/2/b/dataset_12b55dcd-c137-4425-9136-a5d3c93ffc43.dat' reads_1.fastq && echo reads_1.fastq >> read_list.txt && necat correct '/data/jwd05e/main/071/359/71359862/configs/tmpych6_ene' && necat assemble '/data/jwd05e/main/071/359/71359862/configs/tmpych6_ene' && necat bridge '/data/jwd05e/main/071/359/71359862/configs/tmpych6_ene'
Tool Standard Output: empty
Tool Standard Error: empty
Tool Exit Code:
Job API ID: 11ac94870d0bb33aa0098ba40187ef00

Best regards,

Hari

Welcome, @h_1994!

There isn't a way to predict with confidence how long any particular job (for any tool) will run, since this depends on several factors:

For the queued phase (grey dataset):

  • how busy the server is
  • how busy the connected clusters that run that tool are
  • how many other jobs you have queued

For the executing phase (peach/yellow dataset):

  • cluster runtime resources
  • characteristics of the inputs
  • parameters chosen
  • how the algorithm run-time scales with different inputs/params

The public computational resources at any of the UseGalaxy servers are significant, so you are working on a good server. Using workflows and making use of the notification systems is one way to track overall progress.
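If you want to check on the job programmatically while you wait, the Galaxy API exposes the same state information shown on the dataset details page. Below is a minimal BioBlend sketch under the assumption that you have created a personal API key on the server (User → Preferences → Manage API Key); the server URL and key are placeholders to adjust, and the job ID is the "Job API ID" from your post.

```python
# Minimal sketch (it cannot predict how long the job will take): query a
# job's current state and elapsed time with BioBlend.
# The server URL and API key below are placeholders.
from datetime import datetime, timezone

from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.eu", key="YOUR_API_KEY")

# "Job API ID" from the dataset details page.
job = gi.jobs.show_job("11ac94870d0bb33aa0098ba40187ef00")

print("State:  ", job["state"])        # e.g. "queued" (grey) or "running" (peach/yellow)
print("Created:", job["create_time"])
print("Updated:", job["update_time"])

# Rough elapsed time since the job record was created (API times are UTC).
created = datetime.fromisoformat(job["create_time"]).replace(tzinfo=timezone.utc)
print("Elapsed:", datetime.now(timezone.utc) - created)
```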


Do you want to add anything else, @igor?

Hi jennaj,

I entirely agree with your view. This job was executed previously. I ran the same job, but some parameters affecting the execution time were adjusted incorrectly, so I purged those jobs. If any purged jobs are still active on my account, please close them and keep only jobs 19 and 20.


Hi @h_1994

Yes, purging the jobs is how to clear them from the queue. Glad you got this started up again with better parameters.

It is usually best to allow a job to reach a final state (even a failed one), since otherwise the logs are lost. But 14 days of runtime is a pretty good clue that something is wrong, and taking a look at the inputs is the next step. Then, if you notice something wrong, you can certainly cancel what is running and start up a new run. :scientist:
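If it helps for next time: before cancelling a long-running job, you can save whatever details the API will return (tool ID, state, parameters, and, where the server's permissions allow it, stdout/stderr) so that information is not lost when the job is purged. A rough BioBlend sketch, again with a placeholder URL, API key, and output filename:

```python
# Rough sketch: snapshot a job's details before cancelling it.
# "job_snapshot.json" is just an illustrative filename; URL and key are placeholders.
import json

from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.eu", key="YOUR_API_KEY")
job_id = "11ac94870d0bb33aa0098ba40187ef00"

# full_details requests extra fields such as the command line and, depending
# on the server's settings, stdout/stderr for your own jobs.
details = gi.jobs.show_job(job_id, full_details=True)
with open("job_snapshot.json", "w") as fh:
    json.dump(details, fh, indent=2)

# Cancel the job; purging the output datasets is a separate step done from
# the history panel in the web interface.
gi.jobs.cancel_job(job_id)
```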

Hi jennaj,

This job took almost 7-9 days with default parameters for a small genome; if the genome size is larger, it takes much more time. Is it a good idea for this job to be purged?
