Hi,
I run a local Galaxy installation and have set up some resubmission rules. With 24.2.3 I didn't have any issues with this, but after upgrading to 25.0.2 the log is flooded with an endless stream of messages like
galaxy.jobs.handler DEBUG 2025-09-23 12:44:10,097 [pN:main.1,p:4176744,tN:JobHandlerQueue.monitor_thread] (82999) Job was resubmitted and is being dispatched immediately
galaxy.jobs.handler DEBUG 2025-09-23 12:44:10,098 [pN:main.1,p:4176744,tN:JobHandlerQueue.monitor_thread] (82999) Dispatching to high_memory_and_cpu runner
I can only stop this by shutting down the server and manipulating the SQL database, setting all jobs with the state 'resubmitted' to 'error'.
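For completeness, the manual cleanup I mean is a single UPDATE along these lines (a sketch only, assuming Galaxy's standard `job` table with its `state` column; I stop Galaxy and back up the database before running it):

```sql
-- Flip all stuck resubmitted jobs to 'error' (run only while Galaxy is down).
-- Assumes the standard Galaxy schema: table 'job', column 'state'.
UPDATE job SET state = 'error' WHERE state = 'resubmitted';
```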
I can see that the jobs are successfully resubmitted to the Slurm queue with the correct settings, but after they finish computing, they never appear as completed in the user interface. They stay in their 'yellow' running state.
Either there is a bug in this Galaxy version, or, more likely, my job_conf.yml is erroneous. I'm adding it here; maybe someone spots an obvious mistake?
runners:
  local:
    load: galaxy.jobs.runners.local:LocalJobRunner
    workers: 4
  high_memory_and_cpu:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  high_cpu:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  high_memory_single_core:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  moderate_memory_single_core:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  multicore_cpu:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  singlecore_cpu:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  ultra_high_memory:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  multicore_data_fetch:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  general:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
  medium_memmory_and_high_cpu:
    load: galaxy.jobs.runners.slurm:SlurmJobRunner
    workers: 4
execution:
  default: general
  environments:
    general:
      runner: general
      native_specification: '--mem=10000 --cpus-per-task=3'
      env:
        - name: '_JAVA_OPTIONS'
          value: '-Xmx10G'
      resubmit:
        - condition: memory_limit_reached
          environment: medium_memmory_and_high_cpu
    medium_memmory_and_high_cpu:
      runner: medium_memmory_and_high_cpu
      native_specification: '--cpus-per-task=10 --mem=30000'
      resubmit:
        - condition: memory_limit_reached
          environment: high_memory_and_cpu
    high_memory_and_cpu:
      runner: high_memory_and_cpu
      native_specification: '--cpus-per-task=10 --mem=80000'
      resubmit:
        - condition: memory_limit_reached
          environment: ultra_high_memory
    ultra_high_memory:
      runner: ultra_high_memory
      native_specification: '--cpus-per-task=10 --mem=300000'
    high_cpu:
      runner: high_cpu
      native_specification: '--cpus-per-task=10 --mem=20000'
      env:
        - name: '_JAVA_OPTIONS'
          value: '-Xmx20G'
      resubmit:
        - condition: memory_limit_reached
          environment: medium_memmory_and_high_cpu
    high_memory_single_core:
      runner: high_memory_single_core
      native_specification: '--cpus-per-task=1 --mem=80000'
      resubmit:
        - condition: memory_limit_reached
          environment: ultra_high_memory
    moderate_memory_single_core:
      runner: moderate_memory_single_core
      native_specification: '--cpus-per-task=1 --mem=20000'
    multicore_data_fetch:
      runner: multicore_data_fetch
      native_specification: '--cpus-per-task=1 --mem=5000'
      resubmit:
        - condition: memory_limit_reached
          environment: singlecore_cpu
    multicore_cpu:
      runner: multicore_cpu
      native_specification: '--cpus-per-task=4 --mem=35000'
      resubmit:
        - condition: memory_limit_reached
          environment: high_memory_and_cpu
    singlecore_cpu:
      runner: singlecore_cpu
      native_specification: '--cpus-per-task=1 --mem=10000'
      resubmit:
        - condition: memory_limit_reached
          environment: high_memory_single_core
limits:
  - type: environment_user_concurrent_jobs
    tag: medium_memmory_and_high_cpu
    value: 10
  - type: environment_user_concurrent_jobs
    tag: high_memory_and_cpu
    value: 7
  - type: environment_user_concurrent_jobs
    tag: ultra_high_memory
    value: 1
  - type: environment_user_concurrent_jobs
    tag: general
    value: 20
  - type: environment_total_concurrent_jobs
    tag: ultra_high_memory
    value: 3
  - type: environment_total_concurrent_jobs
    tag: medium_memmory_and_high_cpu
    value: 30
  - type: environment_total_concurrent_jobs
    tag: general
    value: 105
  - type: environment_total_concurrent_jobs
    tag: high_memory_and_cpu
    value: 15
  - type: environment_total_concurrent_jobs
    tag: high_cpu
    value: 31
  - type: environment_total_concurrent_jobs
    tag: high_memory_single_core
    value: 15
  - type: environment_total_concurrent_jobs
    tag: moderate_memory_single_core
    value: 316
  - type: environment_total_concurrent_jobs
    tag: multicore_cpu
    value: 79
  - type: environment_total_concurrent_jobs
    tag: singlecore_cpu
    value: 316
  - type: environment_total_concurrent_jobs
    tag: multicore_data_fetch
    value: 20
tools:
  - id: sortmerna
    environment: high_cpu
  - id: bg_sortmerna
    environment: high_cpu
  - id: vardict_java
    environment: high_cpu
  - id: __DATA_FETCH__
    environment: multicore_data_fetch
  - id: upload
    environment: multicore_data_fetch
  - id: upload1
    environment: multicore_data_fetch
  - id: bedtools_coveragebed
    environment: ultra_high_memory
  - id: textutil
    environment: singlecore_cpu
  - id: bwa_mem
    environment: medium_memmory_and_high_cpu
  - id: bwa_mem2
    environment: medium_memmory_and_high_cpu
  - id: rna_star
    environment: medium_memmory_and_high_cpu
  - id: rna_star_index_builder_data_manager
    environment: high_memory_and_cpu
  - id: hisat2_index_builder_data_manager
    environment: high_memory_and_cpu
  - id: bwa_mem_index_builder_data_manager
    environment: high_memory_and_cpu
  - id: bowtie2
    environment: medium_memmory_and_high_cpu
  - id: fastqc
    environment: singlecore_cpu
The error was observed with RNA STAR when it was resubmitted from the medium_memmory_and_high_cpu environment into the high_memory_and_cpu environment.
Thanks a lot!