Hello - I use Galaxy EU for most of my jobs and am trying to find out the limit on concurrent jobs and memory usage per job. I tend to run some memory-intensive jobs, so knowing this information would help me plan accordingly. I did find this page about job quotas on the Galaxy Main public site, but was wondering if EU has different quotas?
Most likely, your jobs fall into the “variable walltime, variable cores, variable memory” category on the ORG server. I don’t have admin access to Galaxy Europe, but they most likely use somewhat different settings than the ORG server, and Galaxy Australia uses different limits again. All usegalaxy.* servers support large-scale jobs. Note that job limits and settings are not set in stone; they are adjusted according to need.
In my opinion, the best strategy for a user is to submit as many jobs as possible, assuming the pipeline was tested beforehand: for memory-intensive jobs, resource availability might be a bottleneck, and with this approach the user can get the most from any Galaxy server. If a job fails with an Out of Memory (OOM) error, report it to the server admins; assuming it was a legitimate job, the server setup will most likely be modified to accommodate it. For some tools, memory requirements can also be controlled through the job settings, which is another option when you hit OOM errors. On Galaxy Australia, only a very small proportion of jobs get cancelled because of walltime, often due to how the jobs were set up; I’m not sure about other servers.