[Galaxy Local] Installing

Our laboratory is receiving a server with 128 GB of memory for work with prokaryotes.

I am planning to install a Galaxy environment on it with a SLURM queue shared among several users, with the possibility of expanding the infrastructure by adding new nodes in the future.

Can the GVL project be installed locally, rather than on a cloud server? Could it be installed on an infrastructure managed with Rancher? Is there a tutorial for this procedure?


Hey @Fabiano_Menegidio!
It is definitely possible, though note that by default Galaxy within the GVL uses the Kubernetes job runner rather than Slurm (hybrid setups, running on Kubernetes but dispatching jobs to Slurm, are definitely possible too). Running the GVL locally will need some tinkering with the out-of-the-box configuration, which is intended for the cloud, but in theory the biggest blocker is just figuring out a proper StorageClass for the Kubernetes cluster. Traditionally for a local setup we’ve used a hostPath StorageClass, which just uses the local host storage (see the sketch below). I haven’t set it up locally in a few months, but I can try it again soon and maybe add a bit of documentation for future cases.
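Just for illustration, here is a rough sketch of what hostPath-style storage for a single-node cluster could look like, written with the Kubernetes Python client rather than YAML. The StorageClass name, PV name, capacity, and host directory are placeholders I made up for the example, not values the GVL expects; the class actually referenced by the CloudMan/Galaxy charts may differ.

```python
# Rough sketch: a manually provisioned hostPath volume for a single-node cluster.
# Names, sizes, and paths ("hostpath", "galaxy-data", /data/galaxy, 500Gi) are
# placeholders -- adjust them to whatever the GVL/CloudMan charts actually reference.
from kubernetes import client, config

config.load_kube_config()

# A StorageClass with no dynamic provisioner; PersistentVolumes are created by hand.
storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="hostpath"),
    provisioner="kubernetes.io/no-provisioner",
    volume_binding_mode="WaitForFirstConsumer",
)
client.StorageV1Api().create_storage_class(storage_class)

# A PersistentVolume backed by a directory on the host's local disk.
persistent_volume = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="galaxy-data"),
    spec=client.V1PersistentVolumeSpec(
        storage_class_name="hostpath",
        capacity={"storage": "500Gi"},
        access_modes=["ReadWriteOnce"],
        host_path=client.V1HostPathVolumeSource(path="/data/galaxy"),
    ),
)
client.CoreV1Api().create_persistent_volume(persistent_volume)
```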
If you have a lot of experience in this realm, you can likely figure it out on your own. The entry point for the boot mechanism is the cloudman-boot Docker image, which sets up Rancher and installs the base GVL infrastructure (CloudMan); a rough sketch of that step is below. If you are just starting with Kubernetes, it might be a bit harder to find the right configuration on your own, but we could schedule a time to do an interactive session over Zoom to help you get started.
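To give a rough idea of the general shape of the boot step, something along these lines is what launching cloudman-boot looks like, written here with the Docker SDK for Python. The image name/tag and the lack of arguments are assumptions on my part; the cloudman-boot repository has the actual invocation (it needs things like the target hosts and SSH access in order to set up Rancher and CloudMan).

```python
# Rough sketch only: launching the cloudman-boot image with the Docker SDK for Python.
# The image reference and the missing arguments are assumptions -- see the
# cloudman-boot repository for the real invocation (it needs host/SSH details to
# install Rancher and the base CloudMan/GVL infrastructure).
import docker

docker_client = docker.from_env()

container = docker_client.containers.run(
    "cloudve/cloudman-boot:latest",   # placeholder image reference
    detach=True,
    network_mode="host",              # the bootstrapper talks to the local host
)

# Stream the bootstrap logs so you can watch Rancher/CloudMan come up.
for line in container.logs(stream=True):
    print(line.decode().rstrip())
```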
Feel free to email me at almahmoud@jhu.edu to set up a time in the next few weeks, or send me a message on Gitter/Slack (@almahmoud) and I’ll be happy to help however I can and get other people involved to help as well.
