Concurrent user limits and benchmark


I am deploying a new instance of Galaxy in my AWS account on an m5.xlarge instance.
Is there any benchmark available for approximate sizing of the compute capacity I will need to provide my students with the environment for a class?

In addition, how many users can access my Galaxy instance on AWS concurrently? Is there any limit at the Galaxy (software) level?


Galaxy itself has few requirements; performance will depend almost entirely on which tools your users run and how many of them run concurrently.
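That said, if you want to throttle load rather than rely on raw capacity, Galaxy's job configuration supports concurrency limits. A hedged sketch of a `job_conf.xml` fragment (limit names taken from Galaxy's sample job configuration; verify them against the sample file shipped with your Galaxy release):

```xml
<job_conf>
    <limits>
        <!-- Cap concurrent jobs per logged-in user; the value here is illustrative. -->
        <limit type="registered_user_concurrent_jobs">4</limit>
        <!-- Cap jobs from users who are not logged in. -->
        <limit type="anonymous_user_concurrent_jobs">1</limit>
    </limits>
</job_conf>
```

Limits like these do not cap the number of user accounts or logins, only how many jobs run at once, which is usually the actual bottleneck in a class setting.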


Teaching resources for Galaxy:

Galaxy Training Network (GTN) tutorials/resources:

As Martin states, there are no hard limits with respect to Galaxy administration and the number of users, but we have tended to allocate 10 users per cloud server with one node available for each. This keeps things moving along during a live hands-on workshop where the same tools are actually being used concurrently, which is a bit different than having students work over a longer time period when some waiting for shared resources to become available is Ok (and wait time is communicated to them, e.g. for homework).
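To turn that heuristic into numbers for a given class size, a small sketch (the 10-users-per-server figure comes from the paragraph above; the rest is just arithmetic):

```python
import math

def plan_servers(num_students: int, users_per_server: int = 10) -> int:
    """Number of cloud servers needed at ~10 users per server,
    with one worker node available per user (heuristic from above)."""
    return math.ceil(num_students / users_per_server)

# e.g. a 34-student workshop needs 4 servers at 10 users each
print(plan_servers(34))
```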

If you’d like to get more feedback from trainers as you get set up, please visit the GTN gitter chat channel:


This is very useful, Jennaj, and thank you for the additional resources. Just one additional question.
To make the class infrastructure as cost effective as possible, would you say the ideal configuration would be an on-demand EC2 master node with spot instances added as worker nodes, keeping 10 users per worker node? Does that make sense?


Much depends on how users will do their work; in other words, when the master server(s) and worker nodes need to be accessible, and whether it matters if user accounts persist over time.

  • Work will be done during a single or a series of discrete time frames, with non-persistent accounts.
  • Work will be done asynchronously over longer time frames (throughout the class week/semester/quarter/etc.), with persistent accounts.

There are a few variables to play around with. In short:

  • Some “initial” image/storage bucket – preconfigured with tools/data the class will be using. There could be one or more of these. Keep these as small as possible.
  • Master nodes/buckets – launched from the initial image(s)/bucket(s), kept active or brought up/down or completely discarded, as appropriate.
  • Worker nodes – dedicated or on-demand or both, tuned as needed, per master node.

Starting up a master node, or multiple cloned master nodes, from a baseline pre-configured saved image has worked well in the past for me and others: allocate one worker node per user (or possibly 2+ users per worker node, depending on acceptable wait time; sometimes I pair users to work together), with ~10 nodes per master. The idea is to bring master nodes up when in use, take them down when not in use, and discard them once you are completely done with them.

The pre-configured image can be saved for reuse (tools plus limited “starting” data, such as indexes, small staged libraries of training data, and shared objects like histories/workflows), which incurs ongoing storage costs; alternatively, delete everything completely at the end (the image itself plus any attached data buckets). It is important to note that costs stop accruing only after all resources are shut down and removed from AWS.

Leaving one or more master nodes continuously active will cost more.

The number of dedicated or on-demand worker nodes per master node can be tuned at any time to help manage costs. Small warning – on-demand worker nodes can get expensive but might be appropriate for some cases or time frames. It just depends on how important it is to get work done quickly versus some acceptable wait time.
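As a rough way to compare strategies, a sketch with made-up hourly rates (the rates below are assumptions for illustration; substitute current on-demand and spot prices for your instance type and region, and remember that spot prices fluctuate):

```python
# Hypothetical hourly rates in USD -- look up real AWS pricing before planning.
ON_DEMAND_RATE = 0.192   # assumed on-demand rate for a worker instance
SPOT_RATE = 0.06         # assumed spot rate, typically well below on-demand

def session_cost(n_workers: int, hours: float, rate: float) -> float:
    """Total worker-node cost for one workshop session."""
    return n_workers * hours * rate

workers, hours = 10, 4  # e.g. a 4-hour hands-on session with 10 worker nodes
print(f"on-demand: ${session_cost(workers, hours, ON_DEMAND_RATE):.2f}")
print(f"spot:      ${session_cost(workers, hours, SPOT_RATE):.2f}")
```

The same function makes it easy to weigh a cheaper spot fleet with possible interruptions against paying on-demand rates for guaranteed capacity during a live session.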

Also, keep in mind that master nodes are independent servers, so accounts and user data (like histories) are not common/shared between them, apart from what was already in the original image each was created from.

Hope that helps!