How to set CPU and RAM usage per tool with Slurm

Hi,
I am wondering how exactly to set the CPU and RAM usage for each tool with Slurm, since this is my first time using Slurm (or any workload manager/job scheduler, for that matter). Can I just create new partitions and use those partitions as the destination id, with slurm as the runner? And what should the param id be? I had the following settings in mind for Slurm, but I am wondering whether this would actually work. Also, I am using Galaxy version 22.01, in case that is important.
Any feedback/help is appreciated! Thanks in advance!

Parts of the files containing my settings:
requirements.yml

- src: galaxyproject.repos
  version: 0.0.2
- src: galaxyproject.slurm
  version: 0.1.3

galaxy.yml

 roles:
    # SLURM
    - galaxyproject.repos
    - galaxyproject.slurm

galaxyservers.yml

# Slurm
slurm_roles: ['controller', 'exec']
slurm_nodes:
- name: localhost
  CPUs: 14 # Host has 16 cores in total
  RealMemory: 110000 # in MB; 'free --mega' reports 135049 MB total, so this leaves some headroom
  ThreadsPerCore: 1
slurm_partitions:
  - name: main
    Nodes: localhost
    Default: YES
slurm_config:
  SlurmdParameters: config_overrides   # Ignore errors if the host's real core count differs from what is configured here
  SelectType: select/cons_res
  SelectTypeParameters: CR_CPU_Memory  # Allocate individual cores/memory instead of entire node
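
In case separate Slurm partitions are genuinely wanted (rather than only per-job resource requests), extra entries could be listed under slurm_partitions. A hypothetical sketch; the "bigmem" name and the MaxMemPerNode value are illustrative, not from my actual config (MaxMemPerNode is a standard slurm.conf partition option):

```yaml
slurm_partitions:
  - name: main
    Nodes: localhost
    Default: YES
  # Hypothetical second partition; caps the memory a job may request here
  - name: bigmem
    Nodes: localhost
    MaxMemPerNode: 64000   # MB
```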

job_conf.xml.j2

<job_conf>
    <plugins workers="4">
        <plugin id="local_plugin" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
    </plugins>
    <destinations default="slurm">
        <destination id="local_destination" runner="local_plugin"/>
        <destination id="partition_1" runner="slurm">
            <param id="?">--nodes=1 --ntasks=1 --cpus-per-task=1 --mem=4000</param>
            <param id="tmp_dir">True</param>
        </destination>
        <destination id="partition_2" runner="slurm">
            <param id="?">--nodes=1 --ntasks=1 --cpus-per-task=2 --mem=6000</param>
            <param id="tmp_dir">True</param>
        </destination>
        <destination id="partition_3" runner="slurm">
            <param id="?">--nodes=1 --ntasks=1 --cpus-per-task=4 --mem=16000</param>
            <param id="tmp_dir">True</param>
        </destination>
    </destinations>
    <tools>
        <tool id="tool_example_1" destination="partition_1"/>
        <tool id="tool_example_2" destination="partition_2"/>
        <tool id="tool_example_3" destination="partition_3"/>
    </tools>
</job_conf>
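
For reference, my understanding from the GTN cluster tutorial is that the Slurm runner is DRMAA-based and takes its sbatch-style options through a param with id nativeSpecification; a sketch of how one destination might then look (assuming that runner, and that --partition may be added to target a specific Slurm partition):

```xml
<!-- Sketch only: assumes the DRMAA-based Slurm runner, which reads
     sbatch-style arguments from the "nativeSpecification" param -->
<destination id="partition_1" runner="slurm">
    <param id="nativeSpecification">--nodes=1 --ntasks=1 --cpus-per-task=1 --mem=4000 --partition=main</param>
    <param id="tmp_dir">True</param>
</destination>
```

If that is right, the destination id itself would just be an arbitrary label referenced by the tool mappings, and would not need to match a Slurm partition name.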

Edit: If anyone has any idea whether this would work, please let me know. Also, I do not have Singularity installed on the server, which is why some settings from the 'Connecting Galaxy to a compute cluster' tutorial are missing.

I am mainly wondering what to set for the destination id and param id (which I called partition_1, partition_2 and partition_3 in the job_conf template above), and whether I should add the following line or instead refer to 'galaxy_systemd_env' as specified in the galaxyservers.yml file.

            <param id="drmaa_library_path">/usr/lib/slurm-drmaa/lib/libdrmaa.so.1</param>
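
For comparison, the environment-variable alternative from the tutorial would look roughly like the following; this is a sketch assuming the galaxyproject.galaxy role's systemd variables, and if it works the job_conf param above should not also be needed:

```yaml
# galaxyservers.yml -- sketch; exports DRMAA_LIBRARY_PATH into the
# Galaxy process environment so the DRMAA runner can find the library
galaxy_systemd_env: [DRMAA_LIBRARY_PATH="/usr/lib/slurm-drmaa/lib/libdrmaa.so.1"]
```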

Thanks in advance!