No module named 'galaxy_ext'


We are running a newly installed Galaxy on a CentOS 8 server.
We run on a cluster with a shared filesystem (/storage/). Galaxy itself is not installed on the shared filesystem but locally in /srv/galaxy. Jobs are submitted to the cluster via Slurm.

We wonder if this is the right place to get some help with this error, which occurs on every job on our local Galaxy server:

Traceback (most recent call last):
  File "metadata/", line 1, in <module>
    from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()
ModuleNotFoundError: No module named 'galaxy_ext'

I found Framework Dependencies — Galaxy Project 18.09 documentation, which I thought was relevant.

We created a venv on the shared filesystem following that page (Framework Dependencies — Galaxy Project 18.09 documentation), but we still see the error.


virtualenv /storage/galaxy/venv
. /storage/galaxy/venv/bin/activate
cd /srv/galaxy/server
PYTHONPATH= sh /srv/galaxy/server/scripts/ --no-create-venv
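A quick diagnostic (a sketch using the paths above): the venv's activate script sets PATH and VIRTUAL_ENV but not PYTHONPATH, so `galaxy_ext` must live inside the venv itself for the import to work on a compute node.

```shell
# Diagnostic sketch, run on a compute node (paths assumed from the setup above).
# 'activate' does not set PYTHONPATH, so this only succeeds if the venv
# itself contains (or points at) Galaxy's lib/ tree.
. /storage/galaxy/venv/bin/activate
if python -c 'import galaxy_ext' 2>/dev/null; then
    echo "galaxy_ext found"
else
    echo "galaxy_ext NOT found - the shared venv does not carry Galaxy's lib/"
fi
```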

In job_conf.xml, an example destination is

 <destination id="slurm_static" runner="slurm">
      <param id="nativeSpecification">--time=05:00 --mem-per-cpu=2</param>
      <env file="/storage/galaxy/venv/bin/activate" />
 </destination>
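If a copy of Galaxy's lib/ tree is reachable from the compute nodes, another option is to export PYTHONPATH directly in the destination: job_conf.xml accepts literal `<env id="...">value</env>` entries alongside `<env file="...">`. A sketch (the /storage/galaxy/server-lib path is an assumption for illustration, not from the original setup):

```xml
<destination id="slurm_static" runner="slurm">
    <param id="nativeSpecification">--time=05:00 --mem-per-cpu=2</param>
    <env file="/storage/galaxy/venv/bin/activate" />
    <!-- Assumed shared-filesystem copy of Galaxy's lib/ -->
    <env id="PYTHONPATH">/storage/galaxy/server-lib</env>
</destination>
```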

And in the uwsgi section of galaxy.yml we have

 virtualenv: /storage/galaxy/venv       

What might we be missing?
What should the ownership of the venv be? Currently we have it owned by the galaxy user.

An attempt to change the service file (/etc/systemd/system/galaxy.service), which specifies the venv, was not successful (same ModuleNotFoundError). Probably getting this right is what is missing. Our attempt was:

ExecStart=/srv/galaxy/venv/bin/uwsgi --yaml /srv/galaxy/config/galaxy.yml --stats
Environment=HOME=/srv/galaxy VIRTUAL_ENV=/storage/galaxy/venv PATH=/storage/galaxy/venv/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin DOCUTILSCONFIG= PYTHONPATH=/srv/galaxy/server/lib/galaxy/jobs/rules DRMAA_LIBRARY_PATH=/srv/drmaa/lib/

More info: on the Galaxy server itself, if I add /srv/galaxy/server/lib to PYTHONPATH, then I can do

[root@zpath local_tools]# export PYTHONPATH=/srv/galaxy/server/lib
[root@zpath local_tools]# python -i
Python 3.7.3 | packaged by conda-forge | (default, Jul  1 2019, 21:52:21) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import galaxy_ext

All fine: with the library added to PYTHONPATH, the galaxy_ext module is found.

How can we accomplish the same on the compute nodes, where the /srv/galaxy folder is not accessible? I attempted the virtualenv approach described above, but there must be some last step I am missing.
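Since only /storage/ is visible on the nodes, one candidate last step is to keep a copy of Galaxy's lib/ on the shared filesystem and register it in the shared venv via a .pth file, so every interpreter started from that venv finds galaxy_ext without any PYTHONPATH juggling. A sketch (the /storage/galaxy/server-lib path is an assumption):

```shell
# Sketch: copy Galaxy's lib/ tree to shared storage (target path assumed).
rsync -a /srv/galaxy/server/lib/ /storage/galaxy/server-lib/

# A .pth file in the venv's site-packages adds the listed directory to
# sys.path automatically for every interpreter started from this venv.
SITE=$(/storage/galaxy/venv/bin/python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")
echo /storage/galaxy/server-lib > "$SITE/galaxy-lib.pth"

# Verify from the venv (works anywhere /storage/ is mounted):
/storage/galaxy/venv/bin/python -c 'import galaxy_ext'
```

The copy would have to be refreshed whenever the Galaxy installation in /srv/galaxy is upgraded, e.g. with an rsync step in the upgrade procedure.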


Hi @maikenp

This might be a permissions problem, but I'm not certain. The relevant documentation is here: Connecting to a Cluster — Galaxy Project 21.01 documentation

I'm going to ask the admin group for advice on Gitter. They may reply here or there; feel free to join the chat: galaxyproject/admins - Gitter

Let's start there 🙂

Hi, just to clarify: we can submit jobs to the underlying cluster fine and the jobs run (Slurm is installed), but we get this ModuleNotFoundError for galaxy_ext, so clearly PYTHONPATH is not set correctly for the jobs running on the cluster.

Also, the documentation Connecting to a Cluster — Galaxy Project 21.01 documentation assumes that Galaxy is installed on the shared filesystem; note that we do not do that.
Our Galaxy installation lives on the Galaxy server's local filesystem. However, the manually created virtualenv (described above) is on the shared filesystem, as an attempt to solve the ModuleNotFoundError.

More info:

by setting

 include_metadata = False

in the prepare_job method in lib/galaxy/jobs/runners/, the error goes away.

Not such a nice solution though. What is recommended for our setup?
Is there a better way to set the include_metadata option?
Or is there a way that the metadata will work anyway even with galaxy installed on a non-shared filesystem?

Maybe the Galaxy Training from earlier this year would be helpful; the third day (Wednesday) covers connecting to a cluster. You are connected to the right people on Gitter for more help. This can be a complex configuration and is sometimes site-specific.