Killing job in Galaxy does not kill Slurm job on cluster

I have Galaxy v20.01 installed on a cluster that uses Slurm. When a Slurm job is killed from the command line with scancel, Galaxy recognizes the job as killed and the job turns red in the history. But when the job is killed from within Galaxy, Galaxy shows the job as killed while the Slurm job keeps running until completion. I see the same behavior with v19.05. Here are some galaxy.log debug lines from right after killing a job within Galaxy:

galaxy.jobs.handler DEBUG 2020-02-14 11:55:31,166 [p:33591,w:1,m:0] [JobHandlerStopQueue.monitor_thread] Stopping job 148 in slurm runner
galaxy.jobs.runners.drmaa INFO 2020-02-14 11:55:31,191 [p:33591,w:1,m:0] [JobHandlerStopQueue.monitor_thread] (148/3735654) Removed from DRM queue at user's request
/usr/bin/env: python: No such file or directory
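That last line looks like the real clue: whatever helper Galaxy runs at kill time appears to start with a `#!/usr/bin/env python` shebang, and no bare `python` interpreter is resolvable on PATH in that environment (common on systems that ship only `python3`). As a hedged diagnostic sketch, not a confirmed root cause, you can reproduce the failure mode and check the node's PATH like this:

```shell
# The log's last line matches how /usr/bin/env fails when the named
# interpreter is missing from PATH. Demonstrate with a deliberately
# nonexistent name (the name below is made up for illustration):
/usr/bin/env definitely-not-an-interpreter 2>/dev/null
echo "env exit status: $?"   # 127 = command not found

# Check whether a bare `python` exists for the Galaxy user on this node;
# many newer distributions ship only `python3`, which breaks any script
# invoked via `#!/usr/bin/env python`:
command -v python || echo "no 'python' on PATH"
```

If `python` is indeed missing, putting one on PATH for the Galaxy user (for example via a symlink to `python3` or the Galaxy virtualenv's interpreter) would be the thing to try before digging further into the DRMAA runner.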