Killing job in Galaxy does not kill Slurm job on cluster

I have v20.01 installed on a cluster using Slurm. When the Slurm job is killed from the command line with scancel, the job is killed, Galaxy recognizes it as killed, and the job turns red in the history. But when the job is killed from within Galaxy, Galaxy shows the job as killed while the Slurm job keeps running until completion. I see the same behavior with v19.05. Here are some galaxy.log debug lines from right after killing the job within Galaxy:

DEBUG 2020-02-14 11:55:31,166 [p:33591,w:1,m:0] [JobHandlerStopQueue.monitor_thread] Stopping job 148 in slurm runner
INFO 2020-02-14 11:55:31,191 [p:33591,w:1,m:0] [JobHandlerStopQueue.monitor_thread] (148/3735654) Removed from DRM queue at user’s request
/usr/bin/env: python: No such file or directory
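This failure can be reproduced outside Galaxy: env searches PATH for an unversioned `python`, and sudo's sanitized environment (or a Python 3-only system) may not provide one. A minimal sketch, where `/nonexistent` is just a hypothetical stand-in for a PATH that contains no `python`:

```shell
# Simulate env failing to find `python` on PATH, as happens under sudo's
# sanitized environment when only python2/python3 binaries are installed.
# /nonexistent is a hypothetical stand-in for such a PATH.
PATH=/nonexistent /usr/bin/env python --version
echo "env exit status: $?"   # non-zero (127: command not found)
```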

This error is fixed by ensuring that the venv Python (version 3.6.6 in our case, .venv/bin/python) is used in the sudo command in galaxy.yml instead of the system Python 2.7.5:
drmaa_external_killjob_script: /usr/bin/sudo -E LD_LIBRARY_PATH=$LD_LIBRARY_PATH PATH=$PATH .venv/bin/python scripts/
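For context, here is a sketch of how the related galaxy.yml settings might look as a whole. The script filenames and the companion `drmaa_external_runjob_script` setting below are assumptions based on Galaxy's sample configuration for running jobs as the real user, not taken from the post above; adjust them to match your install:

```yaml
# galaxy.yml (sketch) — DRMAA "run jobs as real user" settings.
# Key point from the fix above: invoke the venv's python explicitly, since
# sudo's sanitized environment may not expose an unversioned `python` on PATH
# (hence "/usr/bin/env: python: No such file or directory").
galaxy:
  drmaa_external_killjob_script: /usr/bin/sudo -E LD_LIBRARY_PATH=$LD_LIBRARY_PATH PATH=$PATH .venv/bin/python scripts/drmaa_external_killer.py
  drmaa_external_runjob_script: /usr/bin/sudo -E LD_LIBRARY_PATH=$LD_LIBRARY_PATH PATH=$PATH .venv/bin/python scripts/drmaa_external_runner.py
```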
The remaining issue is most likely caused by the DUO authentication on our system. I’ll post a solution once we figure it out.