Seeking Assistance: Integrating Custom Tools in a Local Galaxy Server - Conda Environment Configuration Challenge

Hello everyone,

I have set up a local Galaxy server following the Galaxy Admin Training up to step 11 (connect to a compute cluster), and in general everything is working great. I now want to add some of my own tools that are not available in the Tool Shed.

Thanks to help from @marten I was able to add the bellerophon tool to my local Galaxy server without installing it from the Tool Shed, by copying the tool XML to the respective directory for testing/learning purposes (Local tool installation without using tool shed).
The conda dependencies got resolved automatically.

I then wrote an R script, the corresponding tool.xml, and created a conda environment with the following command:

conda create -n __conda_env_test_r@1.0 r-base=4.2 r-data.table r-ggplot2 r-readxl r-writexl

The conda env lives in /home/pthor/miniconda3/envs/.
I used these specifications in the tool.xml:

   <requirements>
       <requirement type="package" version="1.0">conda_env_test_r</requirement>
   </requirements>
   <command detect_errors="exit_code">
       <![CDATA[Rscript main.R $input_file $result_file ]]>
   </command>
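
(Side note for anyone reading along: as far as I understand, Galaxy's conda dependency resolver looks for an environment named __<requirement name>@<version>, which is why the env above is called __conda_env_test_r@1.0. A quick sanity check from the shell, assuming that naming:)

# the env name Galaxy should look for is __<name>@<version>
conda env list | grep conda_env_test_r
# should list something like: __conda_env_test_r@1.0   /home/pthor/miniconda3/envs/__conda_env_test_r@1.0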

Running

planemo test --conda_use_local

was successful and with

planemo serve

the tool also produces the expected output.

I moved the R script and the tool.xml to the tools/stats folder and modified the tool_conf.xml file.

The tool is displayed in Galaxy, but when I run it I get the following error message:

/data/jobs/000/77/tool_script.sh: Line 9: Rscript: command not found

Checking journalctl, it looks like the conda environment is not being found by Galaxy.

I also added conda_use_local: true to the /srv/galaxy/config/galaxy.yml file to see if this would make the conda env available, but it did not help.
Following Conda for Tool Dependencies — Galaxy Project 23.2.1.dev0 documentation, I tried to modify conda_prefix, pointing it to /home/pthor/miniconda3/envs/, but this did not work out for me (I couldn't reach the web page/UI anymore).
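
For reference, this is roughly what the experiment looked like in the galaxy: section of /srv/galaxy/config/galaxy.yml (a sketch, not the full file; the commented-out line is the conda_prefix change that broke the UI for me):

galaxy:
  # experiment: let locally built conda envs satisfy tool requirements
  conda_use_local: true
  # failed experiment: pointing conda_prefix at my personal miniconda envs directory
  #conda_prefix: /home/pthor/miniconda3/envs/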

I would appreciate any guidance on how to proceed.

Thanks in advance and best regards,
Patrick

PS:

Here is the journalctl output:

Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools INFO 2024-02-18 15:21:04,457 [pN:main.2,p:1985,tN:WSGI_0] Validated and populated state for tool request (6.882 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.actions INFO 2024-02-18 15:21:04,462 [pN:main.2,p:1985,tN:WSGI_0] Handled output named result_file for tool test_r_tool (0.773 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.actions INFO 2024-02-18 15:21:04,464 [pN:main.2,p:1985,tN:WSGI_0] Added output datasets to history (2.281 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.actions INFO 2024-02-18 15:21:04,465 [pN:main.2,p:1985,tN:WSGI_0] Setup for job Job[unflushed,tool_id=test_r_tool] complete, ready to be enqueued (1.009 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.execute DEBUG 2024-02-18 15:21:04,465 [pN:main.2,p:1985,tN:WSGI_0] Tool test_r_tool created job None (6.858 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.web_stack.handlers INFO 2024-02-18 15:21:04,504 [pN:main.2,p:1985,tN:WSGI_0] (Job[id=78,tool_id=test_r_tool]) Handler 'default' assigned using 'db-skip-locked' assignment method
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.execute DEBUG 2024-02-18 15:21:04,506 [pN:main.2,p:1985,tN:WSGI_0] Created 1 job(s) for tool test_r_tool request (48.975 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.handler DEBUG 2024-02-18 15:21:04,506 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] Grabbed Job(s): 78
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:04,517 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “POST /api/tools HTTP/1.0” 200
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.mapper DEBUG 2024-02-18 15:21:04,522 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Mapped job to destination id: slurm
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.handler DEBUG 2024-02-18 15:21:04,528 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Dispatching to slurm runner
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:04,536 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Persisting job destination (destination id: slurm)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:04,542 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Working directory for job is: /data/jobs/000/78
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners DEBUG 2024-02-18 15:21:04,548 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] Job [78] queued (19.939 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.handler INFO 2024-02-18 15:21:04,551 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Job dispatched
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:04,558 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “GET /history/current_history_json?since=2024-02-18T15:02:24.519071 HTTP/1.0” 200
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:04,586 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Job wrapper for Job [78] prepared (28.237 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.containers INFO 2024-02-18 15:21:04,586 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Checking with container resolver [CachedExplicitSingularityContainerResolver[cache_directory=/srv/galaxy/var/cache/singularity/explicit/]] found description [None]
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.container_resolvers.mulled DEBUG 2024-02-18 15:21:04,586 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Image name for tool test_r_tool: /home/pthor/miniconda3/envs/conda_env_test_r:1.0
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.containers INFO 2024-02-18 15:21:04,587 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Checking with container resolver [CachedMulledSingularityContainerResolver[cache_directory=/srv/galaxy/var/cache/singularity/mulled/]] found description [None]
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.container_resolvers.mulled DEBUG 2024-02-18 15:21:04,587 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Image name for tool test_r_tool: /home/pthor/miniconda3/envs/conda_env_test_r:1.0
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.mulled.util INFO 2024-02-18 15:21:04,589 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] skipping mulled_tags_for [/home/pthor/miniconda3/envs/conda_env_test_r] no repository
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.containers INFO 2024-02-18 15:21:04,589 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Checking with container resolver [MulledSingularityContainerResolver[namespace=biocontainers]] found description [None]
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.container_resolvers.mulled DEBUG 2024-02-18 15:21:04,589 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Image name for tool test_r_tool: /home/pthor/miniconda3/envs/conda_env_test_r:1.0
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.containers ERROR 2024-02-18 15:21:04,591 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Could not get container description for tool 'test_r_tool'
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: Traceback (most recent call last):
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/tool_util/deps/containers.py", line 363, in find_best_container_description
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: resolved_container_description = self.resolve(enabled_container_types, tool_info, **kwds)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/tool_util/deps/containers.py", line 394, in resolve
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: container_description = container_resolver.resolve(
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/tool_util/deps/container_resolvers/mulled.py", line 809, in resolve
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: mull_targets(targets, involucro_context=self.involucro_context, **self._mulled_kwds)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/tool_util/deps/mulled/mulled_build.py", line 285, in mull_targets
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: conda_context = CondaInDockerContext(ensure_channels=channels)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/tool_util/deps/mulled/mulled_build.py", line 383, in __init__
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: super().__init__(
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/tool_util/deps/conda_util.py", line 125, in __init__
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: info = self.conda_info()
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/tool_util/deps/conda_util.py", line 201, in conda_info
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: info_out = commands.execute(cmd)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/util/commands.py", line 104, in execute
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: return _wait(cmds, input=input, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/srv/galaxy/server/lib/galaxy/util/commands.py", line 121, in _wait
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: p = subprocess.Popen(cmds, **popen_kwds)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: self._execute_child(args, executable, preexec_fn, close_fds,
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File "/usr/lib/python3.10/subprocess.py", line 1863, in _execute_child
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: raise child_exception_type(errno_num, err_msg, err_filename)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: FileNotFoundError: [Errno 2] No such file or directory: 'docker'
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.command_factory INFO 2024-02-18 15:21:04,596 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Built script [/data/jobs/000/78/tool_script.sh] for tool command [Rscript main.R /data/datasets/7/2/4/dataset_7249853c-5176-4e35-9d77-83b26183f466.dat /data/jobs/000/78/outputs/dataset_febbdd27-6e13-4488-8353-af2042018fa2.dat]
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:04,617 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “GET /api/histories/cc97ae811bf6756a/contents?v=dev&limit=1000&q=update_time-ge&qv=2024-02-18T15:02:25.915Z&details=421123d490548da8,df8cf2aa639cfcdf,ca0105d99f25b125,794544e0333b3bf3,5241243e2939ef59,115297d8366d265e,dfc291ee71abb920,7e4f102dfb157f2e,35f2b95da9926874 HTTP/1.0” 200
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners DEBUG 2024-02-18 15:21:04,637 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] (78) command is: mkdir -p working outputs configs
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: if [ -d _working ]; then
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: rm -rf working/ outputs/ configs/; cp -R _working working; cp -R _outputs outputs; cp -R _configs configs
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: else
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: cp -R working _working; cp -R outputs _outputs; cp -R configs _configs
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: fi
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: cd working; /bin/bash /data/jobs/000/78/tool_script.sh > '…/outputs/tool_stdout' 2> '…/outputs/tool_stderr'; return_code=$?; echo $return_code > /data/jobs/000/78/galaxy_78.ec; cd '/data/jobs/000/78';
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: [ "$GALAXY_VIRTUAL_ENV" = "None" ] && GALAXY_VIRTUAL_ENV="$_GALAXY_VIRTUAL_ENV"; _galaxy_setup_environment True; python metadata/set.py; sh -c "exit $return_code"
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa DEBUG 2024-02-18 15:21:04,642 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] (78) submitting file /data/jobs/000/78/galaxy_78.sh
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa INFO 2024-02-18 15:21:04,644 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] (78) queued as 37
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:04,667 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “GET /api/users/da56ac9b6935872b HTTP/1.0” 200
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa DEBUG 2024-02-18 15:21:04,695 [pN:handler_1,p:944,tN:SlurmRunner.monitor_thread] (78/37) state change: job is queued and active
Feb 18 15:21:04 galaxy-ftw galaxyctl[1976]: uvicorn.access INFO 2024-02-18 15:21:04,757 [pN:main.1,p:1976,tN:MainThread] 144.41.156.245:0 - “GET /api/histories/cc97ae811bf6756a/contents?v=dev&order=hid&offset=0&limit=100&q=deleted&qv=false&q=visible&qv=true HTTP/1.0” 200
Feb 18 15:21:05 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa DEBUG 2024-02-18 15:21:05,702 [pN:handler_1,p:944,tN:SlurmRunner.monitor_thread] (78/37) state change: job is running
Feb 18 15:21:08 galaxy-ftw galaxyctl[1976]: uvicorn.access INFO 2024-02-18 15:21:08,292 [pN:main.1,p:1976,tN:MainThread] 144.41.156.245:0 - “GET /history/current_history_json?since=2024-02-18T15:21:04.512707 HTTP/1.0” 200
Feb 18 15:21:08 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:08,361 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “GET /api/histories/cc97ae811bf6756a/contents?v=dev&limit=1000&q=update_time-ge&qv=2024-02-18T15:21:04.539Z&details=421123d490548da8,df8cf2aa639cfcdf,ca0105d99f25b125,794544e0333b3bf3,5241243e2939ef59,115297d8366d265e,dfc291ee71abb920,7e4f102dfb157f2e,35f2b95da9926874 HTTP/1.0” 200
Feb 18 15:21:08 galaxy-ftw galaxyctl[1976]: uvicorn.access INFO 2024-02-18 15:21:08,430 [pN:main.1,p:1976,tN:MainThread] 144.41.156.245:0 - “GET /api/users/da56ac9b6935872b HTTP/1.0” 200
Feb 18 15:21:08 galaxy-ftw galaxyctl[1976]: uvicorn.access INFO 2024-02-18 15:21:08,542 [pN:main.1,p:1976,tN:MainThread] 144.41.156.245:0 - “GET /api/histories/cc97ae811bf6756a/contents?v=dev&order=hid&offset=0&limit=100&q=deleted&qv=false&q=visible&qv=true HTTP/1.0” 200
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa DEBUG 2024-02-18 15:21:10,765 [pN:handler_1,p:944,tN:SlurmRunner.monitor_thread] (78/37) state change: job finished normally
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.tool_util.output_checker INFO 2024-02-18 15:21:10,783 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] Job error detected, failing job. Reasons are [{'type': 'exit_code', 'desc': 'Fatal error: Exit code 127 ()', 'exit_code': 127, 'code_desc': '', 'error_level': 3}]
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:10,792 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] finish(): Moved /data/jobs/000/78/outputs/dataset_febbdd27-6e13-4488-8353-af2042018fa2.dat to /data/datasets/f/e/b/dataset_febbdd27-6e13-4488-8353-af2042018fa2.dat
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:10,824 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] (78) setting dataset 85 state to ERROR
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs INFO 2024-02-18 15:21:10,840 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] Collecting metrics for Job 78 in /data/jobs/000/78
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:10,858 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] job_wrapper.finish for job 78 executed (72.562 ms)

I am not sure, but I think it is because you installed Galaxy under a different user. If you also installed conda automatically, that conda installation belongs to that user. You can try to first switch to the user that you used to install Galaxy and create the conda env from there. Usually the username is "galaxy". You can switch users with su galaxy or something like sudo su - galaxy.

If you want to change conda_prefix, you need to point it to /home/pthor/miniconda3, but I would not recommend that for now.

First try to leave conda_prefix at its default and use the conda installation that is installed under the user "galaxy". If you later want to share the same conda installation between two different users, you need to create a dedicated group for this; otherwise you will run into all kinds of permission problems.
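
Roughly like this (just a sketch; the user name and whether conda is already on that user's PATH depend on your setup):

# switch to the user that Galaxy runs as (often "galaxy")
sudo su - galaxy
# then create the env with that user's conda installation
conda create -n __conda_env_test_r@1.0 r-base=4.2 r-data.table r-ggplot2 r-readxl r-writexl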


Dear @gbbio,

thanks for your reply!

After switching to the galaxy user, conda is not found by that user, even though conda gets installed by the Ansible playbook and seems to do what it should when installing from the Tool Shed / admin interface in Galaxy. There is also some conda stuff living in /srv/galaxy/var/dependencies/_conda/bin. Do you have a hint on how to proceed?

Kind regards,
Patrick

Yes, conda is not in your .bashrc yet. You first need to execute something like /srv/galaxy/var/dependencies/_conda/bin/conda init bash.

After this you can create your environment.
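
For example, something along these lines (a sketch, assuming the conda that the playbook installed for Galaxy lives under /srv/galaxy/var/dependencies/_conda):

# as the galaxy user: register Galaxy's own conda in your shell startup file
/srv/galaxy/var/dependencies/_conda/bin/conda init bash
# reload the shell configuration so the "conda" command is found
source ~/.bashrc
# now create the environment with that conda installation
conda create -n __conda_env_test_r@1.0 r-base=4.2 r-data.table r-ggplot2 r-readxl r-writexl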

I think if you were to run something like /srv/galaxy/var/dependencies/_conda/bin/conda create -n __conda_env_test_r@1.0 r-base=4.2 r-data.table r-ggplot2 r-readxl r-writexl before the init, conda would tell you to run the init command first.

After all this don’t forget to put the correct settings back in the config file (I believe you changed something).


Thanks @gbbio, I followed your suggestion, but unfortunately the problem remains. Galaxy cannot find the conda environments even though they are now created by the galaxy user and live in /srv/galaxy/var/dependencies/_conda/envs/. I ran the Ansible playbook beforehand to reset everything to the Admin Training state.
Following https://training.galaxyproject.org/training-material/topics/dev/tutorials/tool-from-scratch/tutorial.html, I was wondering whether conda packages and environments might be treated differently. Would the specification in the tool XML be different?

In Galaxy Tool XML File — Galaxy Project 24.0.dev0 documentation there is an explicit container type="docker", but for conda it is always type="package".

Any further suggestions are appreciated 🙂
Happy to provide any information that might help!

Is there some information in the log?

Did you already try to install a tool from the Tool Shed as well?

I am not sure, but I notice now that you have a version with two numbers. The manual says to use three numbers, like __samtools@0.1.19 (Conda for Tool Dependencies — Galaxy Project 23.2.1.dev0 documentation).
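
So, hypothetically, something like this (the version attribute in the tool XML would then also need to say 1.0.0):

# env version with three components, matching the manual's __samtools@0.1.19 example
conda create -n __conda_env_test_r@1.0.0 r-base=4.2 r-data.table r-ggplot2 r-readxl r-writexl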


Hi @gbbio,

first thanks again!

I installed tools from the Tool Shed using the Galaxy admin UI and they work fine. But no new environments were created in /srv/galaxy/var/dependencies/_conda/envs/, maybe because they do not use conda. I also tried with three numbers, and it did not change the error messages; they remained the same as in the log I posted at the beginning.

Locally the conda env is working fine and doing everything I am expecting of it. Just the connection with Galaxy does not work, unfortunately.

I see things in your log like No such file or directory: 'docker' and SlurmRunner.work_thread-1. Are you trying to run tools on a cluster?

Which tool from the Tool Shed did you install?

Can you go to Admin > Manage Dependencies and click the tool you just installed that is working? When you click the tool, it folds open and shows you the path of the env and the path to conda. Then you can check whether that matches the env you created manually.

I have, for example, FastQC from the Tool Shed and a custom tool with a local conda env like yours, and they both point to the same env directories.
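
You can also compare on the command line (just a quick check, assuming the default locations from your earlier post):

# list the envs that Galaxy's own conda knows about
/srv/galaxy/var/dependencies/_conda/bin/conda env list
# or simply look at the envs directory
ls /srv/galaxy/var/dependencies/_conda/envs/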


Yes @gbbio, I want to run it on a local cluster. For now the Slurm handler and runner are on the same system as Galaxy (this is also from the tutorial); in the future I would like to separate this setup.

I installed bellerophon, Trinity, and FastQC; it seems to run with a data set from the Admin Training:

Following your suggestion to look at Manage Dependencies, it looks like this:

Regarding No such file or directory: 'docker': I do not use Docker explicitly, but I was also already wondering where this comes from.

Thanks a lot, really appreciate your help and feedback!

What do you have in your config for the following settings? (A sketch of what I believe the defaults are follows the list.)

conda_prefix:
conda_exec:
conda_debug:
conda_use_local:
conda_auto_install:
conda_auto_init:
conda_copy_dependencies:
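
For comparison, this is roughly what I believe the defaults are (from memory, not checked against your Galaxy version, so treat it as a sketch):

galaxy:
  # conda_prefix defaults to <tool_dependency_dir>/_conda, e.g. /srv/galaxy/var/dependencies/_conda
  #conda_prefix: null
  #conda_exec: null
  #conda_debug: false
  #conda_use_local: false
  #conda_auto_install: false
  #conda_auto_init: true
  #conda_copy_dependencies: false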


Dear @gbbio,

I have now found the solution, thanks to your hint above about looking at 'Manage Dependencies'.

The problem was that the dependency resolvers did not get set up by any of the Ansible roles included in the Admin Training. Looking at Dependency Resolvers in Galaxy — Galaxy Project 24.0.dev0 documentation, I first tried to add the configuration to galaxy.yml. This did not work and, for some reason unknown to me, resulted in a 504 error when trying to access the local Galaxy web page.

What did work was setting up the dependency resolvers as described for an older Galaxy version here: Dependency Resolvers in Galaxy — Galaxy Project 18.05 documentation
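
In case it helps anyone else: what I ended up with is essentially the default resolver layout from the 18.05 documentation, placed in a dependency_resolvers_conf.xml that galaxy.yml points to via dependency_resolvers_config_file (your entries may differ, so take this as a sketch):

<dependency_resolvers>
    <tool_shed_packages />
    <galaxy_packages />
    <conda />
    <galaxy_packages versionless="true" />
    <conda versionless="true" />
</dependency_resolvers>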

Now the Tool Shed tools that I install via the UI also look right:

Thank you very much for your help!

Just duplicating my response from Matrix: dependency_resolvers_conf.xml should have been configured during the Apptainer tutorial: https://training.galaxyproject.org/training-material/topics/admin/tutorials/apptainer/tutorial.html#hands-on-configure-galaxy-to-use-apptainer

It is not automatically configured by the roles; you must set the configuration manually.

As for why it isn't working when included directly in galaxy.yml, that's odd, but I can't say offhand.
