I am running the MiModd variant calling tool and have noticed that a job can take more than 12 hours to finish. It also seems that only one core is assigned by default for this type of job. Is that normal? In contrast, bowtie2 runs in about 1 hour on the same type of data and uses 8 cores.
The best advice I have is to put your data into collections, put the tools into a workflow (even a simple one that includes just a few of the core tools), then launch the whole thing as a batch. This gets everything queued at once, which is the most important part of getting high-throughput work to process on the public clusters. These larger clusters are in high demand, so there will always be competition for resources, but even really massive workloads will stream through and complete with this strategy.
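If you prefer to script that, here is a minimal sketch using BioBlend (Galaxy's Python API client); the server URL, API key, and the history/collection/workflow IDs are placeholders, and the same thing can be done entirely through the web UI:

```python
# Sketch: launch a saved workflow once over a whole dataset collection,
# so Galaxy queues one job per collection element in a single batch.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.eu", key="YOUR_API_KEY")  # placeholders

history_id = "HISTORY_ID"        # placeholder: history holding your data
collection_id = "COLLECTION_ID"  # placeholder: the collection with all samples
workflow_id = "WORKFLOW_ID"      # placeholder: e.g. mapping + MiModd variant calling

# Map the workflow's first input step to the collection ("hdca" = history
# dataset collection association) and invoke it once for the whole batch.
invocation = gi.workflows.invoke_workflow(
    workflow_id,
    inputs={"0": {"src": "hdca", "id": collection_id}},
    history_id=history_id,
)
print(invocation["id"], invocation["state"])
```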
Hope this helps! Apologies for the delays! We are all academics, and summer schedules and recent server updates led to some delays here, but we can certainly follow up more now.
I just wanted to make sure that you saw the updates in the ticket! The tool is being updated to run on 6 cores. The details are still being worked out and I can’t give a firm estimate beyond checking the ticket over the next few weeks and watching your jobs, but this will flow out to the public servers and to anyone else who uses the shared job environment configurations. Thanks for the suggestion!
Happy to hear that the MiModd tool uses 6 cores now! Will the update affect all servers (for these jobs we are using galaxy.eu)? Thanks for your suggestions and thank you for your help. I can’t complain about Galaxy, I love it, and I hope it’ll continue forever.
The configuration change made it into the shared config for Galaxy US, Galaxy Europe, and Galaxy Australia (which other instances can also opt into).
So at least on these three servers you should see the tool running with 6 cores once the servers have picked up the updated config (which should be the case by tomorrow morning for the EU server).
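If you want to double-check afterwards, here is a small sketch (again BioBlend, with placeholder credentials and job ID) that reads the job metrics; on servers that expose job metrics, the allocated core count is reported as the "galaxy_slots" metric:

```python
# Sketch: verify how many cores a finished job actually received.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.eu", key="YOUR_API_KEY")  # placeholders

job_id = "JOB_ID"  # placeholder: copy it from the job info view or the API
for metric in gi.jobs.get_metrics(job_id):
    if metric.get("name") == "galaxy_slots":
        print("cores allocated:", metric["value"])
```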