How to configure Slurm without Singularity

Hi,
I am trying to configure Slurm following the tutorial from May 2022 (I am using an older version of the tutorial since I am also on an older Galaxy version, 22.01) (admin tutorial: Connecting Galaxy to a compute cluster). The tutorial for connecting to a compute cluster contains some settings for Singularity, but Singularity wasn't part of the requirements and isn't installed on my server. Can I just skip the lines containing the Singularity settings, or is it necessary to install Singularity?

Besides this, I am quite lost on how exactly to set the CPU and RAM usage for each tool, since this is my first time using Slurm (or any workload manager/job scheduler, for that matter). Can I just create new partitions and use them as the destination ids, with slurm as the runner? And what should the param id be? I had the following settings in mind for Slurm, but I am wondering whether this would actually work.

Any feedback/help is appreciated! Thanks in advance!

Parts of the files containing my settings:
requirements.yml

- src: galaxyproject.repos
  version: 0.0.2
- src: galaxyproject.slurm
  version: 0.1.3

galaxy.yml

 roles:
    # SLURM
    - galaxyproject.repos
    - galaxyproject.slurm

galaxyservers.yml

# Slurm
slurm_roles: ['controller', 'exec']
slurm_nodes:
- name: localhost
  CPUs: 14 # Host has 16 cores total
  RealMemory: 110000 # in MB; 'free --mega' reports 135049 MB total
  ThreadsPerCore: 1
# slurm_partitions:
#  - name: main
#    Nodes: localhost
#    Default: YES
slurm_config:
  SlurmdParameters: config_overrides   # Ignore errors if the host's real core count differs from the CPUs value above
  SelectType: select/cons_res
  SelectTypeParameters: CR_CPU_Memory  # Allocate individual cores/memory instead of entire node

job_conf.xml.j2

<job_conf>
    <plugins workers="4">
        <plugin id="local_plugin" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
            <param id="drmaa_library_path">/usr/lib/slurm-drmaa/lib/libdrmaa.so.1</param>
    </plugins>
    <destinations default="slurm">
        <destination id="local_destination" runner="local_plugin"/>
        <destination id="partition_1" runner="slurm">
            <param id="?">--nodes=1 --ntasks=1 --cpus-per-task=1 --mem=4000</param>
            <param id="tmp_dir">True</param>
        </destination>
        <destination id="partition_2" runner="slurm">
            <param id="?">--nodes=1 --ntasks=1 --cpus-per-task=2 --mem=6000</param>
            <param id="tmp_dir">True</param>
        </destination>
        <destination id="partition_3" runner="slurm">
            <param id="?">--nodes=1 --ntasks=1 --cpus-per-task=4 --mem=16000</param>
            <param id="tmp_dir">True</param>
        </destination>
    </destinations>
    <tools>
        <tool id="tool_example_1" destination="partition_1"/>
        <tool id="tool_example_2" destination="partition_2"/>
        <tool id="tool_example_3" destination="partition_3"/>
    </tools>
</job_conf>

Hi @cbass – it looks like you are getting help already.

For others reading, this chat is a great place to reach other Galaxy admins for quick discussion: https://matrix.to/#/#galaxyproject_admins:gitter.im

Related: Jobs in workflow sometimes run, sometimes don't, on local Galaxy 22.01 using Slurm cluster

Yes, you can just skip those lines. Singularity isn't necessary; it's an additional feature on top of the basic cluster setup.
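
For illustration only (the exact lines vary between tutorial versions, and the destination id below is just a placeholder), a Slurm destination with the container-related parts left out can be as bare as:

<!-- Sketch: a Slurm destination with the tutorial's Singularity-related
     params (e.g. singularity_enabled and its env settings) simply omitted -->
<destination id="slurm_plain" runner="slurm"/>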

You are very much on the right path. Setting up multiple destinations, each with its own submission parameters, is the right answer. You're looking for the next tutorial in the series: Mapping Jobs to Destinations.

your id="?" should be “nativeSpecification”:

<param id="nativeSpecification">--nodes=1 --ntasks=1 --cpus-per-task=2</param>

And then it should be fine. Next you’ll have to map specific tools to those specific destinations as you’ve started doing. Good luck! Apologies for the delayed response.
