Hello everyone,
I have set up a local Galaxy server following the Galaxy Admin Training up to step 11 (connecting to a compute cluster), and in general everything is working great. I now want to add some of my own tools that are not available in the Tool Shed.
Thanks to help from @marten, I was able to add the bellerophon tool to my local Galaxy server without installing it from the Tool Shed, by copying the tool XML into the respective directory for testing/learning purposes (Local tool installation without using tool shed).
The conda dependencies got resolved automatically.
I then wrote an R script, the corresponding tool.xml, and created a conda environment with the following command:
conda create -n __conda_env_test_r@1.0 r-base=4.2 r-data.table r-ggplot2 r-readxl r-writexl
The conda env lives in /home/pthor/miniconda3/envs/.
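As far as I understand the docs, Galaxy's conda dependency resolver derives the environment name it looks for as `__<package>@<version>` from the requirement tag, which is why I named the env the way I did. A quick sketch of that naming (package and version values taken from my tool.xml below):

```shell
# Galaxy's conda resolver (as far as I understand it) looks for an environment
# named "__<package>@<version>", derived from the <requirement> tag in tool.xml.
pkg="conda_env_test_r"
version="1.0"
env_name="__${pkg}@${version}"
echo "${env_name}"   # __conda_env_test_r@1.0
```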
I used these specifications in the tool.xml:
<requirements>
<requirement type="package" version="1.0">conda_env_test_r</requirement>
</requirements>
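If I read the dependency-resolution docs correctly, Galaxy then looks for a matching environment at a path like the following (the exact location depends on the configured conda_prefix, which by default lives inside Galaxy's own dependency directory, not in my user's miniconda):

```
<conda_prefix>/envs/__conda_env_test_r@1.0
```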
<command detect_errors="exit_code">
<![CDATA[Rscript main.R $input_file $result_file ]]>
</command>
Running
planemo test --conda_use_local
was successful and with
planemo serve
the tool also produces the expected output.
I moved the R script and the tool.xml to the tools/stats folder and modified the tool_conf.xml file.
The tool is displayed in Galaxy, but when I run it I get the following error message:
/data/jobs/000/77/tool_script.sh: line 9: Rscript: command not found
Checking journalctl, it looks like the conda environment is not being found by Galaxy.
I also added conda_use_local: true to /srv/galaxy/config/galaxy.yml to see whether this would make the conda env available, but it did not help.
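Concretely, this is the snippet I added (assuming it belongs under the top-level galaxy: key, as in the training setup):

```yaml
galaxy:
  conda_use_local: true
```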
Following the "Conda for Tool Dependencies" section of the Galaxy Project documentation (23.2.1.dev0), I tried to modify conda_prefix, pointing it to /home/pthor/miniconda3/envs/, but this did not work out for me (I couldn't reach the web page/UI anymore).
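For reference, this is roughly the change that made the UI unreachable for me:

```yaml
galaxy:
  # what I tried; possibly wrong, see note below
  conda_prefix: /home/pthor/miniconda3/envs/
```

If I read the docs correctly, conda_prefix is supposed to point at the conda installation root (which for me would be /home/pthor/miniconda3) rather than the envs/ directory, so maybe that was my mistake, but I have not been able to verify this yet.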
I would appreciate any guidance on how to proceed.
Thanks in advance and best regards,
Patrick
PS:
Here is the journalctl output:
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools INFO 2024-02-18 15:21:04,457 [pN:main.2,p:1985,tN:WSGI_0] Validated and populated state for tool request (6.882 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.actions INFO 2024-02-18 15:21:04,462 [pN:main.2,p:1985,tN:WSGI_0] Handled output named result_file for tool test_r_tool (0.773 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.actions INFO 2024-02-18 15:21:04,464 [pN:main.2,p:1985,tN:WSGI_0] Added output datasets to history (2.281 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.actions INFO 2024-02-18 15:21:04,465 [pN:main.2,p:1985,tN:WSGI_0] Setup for job Job[unflushed,tool_id=test_r_tool] complete, ready to be enqueued (1.009 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.execute DEBUG 2024-02-18 15:21:04,465 [pN:main.2,p:1985,tN:WSGI_0] Tool test_r_tool created job None (6.858 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.web_stack.handlers INFO 2024-02-18 15:21:04,504 [pN:main.2,p:1985,tN:WSGI_0] (Job[id=78,tool_id=test_r_tool]) Handler ‘default’ assigned using ‘db-skip-locked’ assignment method
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: galaxy.tools.execute DEBUG 2024-02-18 15:21:04,506 [pN:main.2,p:1985,tN:WSGI_0] Created 1 job(s) for tool test_r_tool request (48.975 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.handler DEBUG 2024-02-18 15:21:04,506 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] Grabbed Job(s): 78
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:04,517 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “POST /api/tools HTTP/1.0” 200
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.mapper DEBUG 2024-02-18 15:21:04,522 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Mapped job to destination id: slurm
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.handler DEBUG 2024-02-18 15:21:04,528 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Dispatching to slurm runner
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:04,536 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Persisting job destination (destination id: slurm)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:04,542 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Working directory for job is: /data/jobs/000/78
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners DEBUG 2024-02-18 15:21:04,548 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] Job [78] queued (19.939 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.handler INFO 2024-02-18 15:21:04,551 [pN:handler_1,p:944,tN:JobHandlerQueue.monitor_thread] (78) Job dispatched
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:04,558 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “GET /history/current_history_json?since=2024-02-18T15:02:24.519071 HTTP/1.0” 200
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:04,586 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Job wrapper for Job [78] prepared (28.237 ms)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.containers INFO 2024-02-18 15:21:04,586 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Checking with container resolver [CachedExplicitSingularityContainerResolver[cache_directory=/srv/galaxy/var/cache/singularity/explicit/]] found description [None]
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.container_resolvers.mulled DEBUG 2024-02-18 15:21:04,586 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Image name for tool test_r_tool: /home/pthor/miniconda3/envs/conda_env_test_r:1.0
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.containers INFO 2024-02-18 15:21:04,587 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Checking with container resolver [CachedMulledSingularityContainerResolver[cache_directory=/srv/galaxy/var/cache/singularity/mulled/]] found description [None]
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.container_resolvers.mulled DEBUG 2024-02-18 15:21:04,587 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Image name for tool test_r_tool: /home/pthor/miniconda3/envs/ conda_env_test_r:1.0
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.mulled.util INFO 2024-02-18 15:21:04,589 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] skipping mulled_tags_for [/home/pthor/miniconda3/envs/conda_env_test_r] no repository
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.containers INFO 2024-02-18 15:21:04,589 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Checking with container resolver [MulledSingularityContainerResolver[namespace=biocontainers]] found description [None]
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.container_resolvers.mulled DEBUG 2024-02-18 15:21:04,589 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Image name for tool test_r_tool: /home/pthor/miniconda3/envs/conda_env_test_r:1.0
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.tool_util.deps.containers ERROR 2024-02-18 15:21:04,591 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Could not get container description for tool 'test_r_tool'
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: Traceback (most recent call last):
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/tool_util/deps/containers.py”, line 363, in find_best_container_description
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: resolved_container_description = self.resolve(enabled_container_types, tool_info, **kwds)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/tool_util/deps/containers.py”, line 394, in resolve
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: container_description = container_resolver.resolve(
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/tool_util/deps/container_resolvers/mulled.py”, line 809, in resolve
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: mull_targets(targets, involucro_context=self.involucro_context, **self._mulled_kwds)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/tool_util/deps/mulled/mulled_build.py”, line 285, in mull_targets
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: conda_context = CondaInDockerContext(ensure_channels=channels)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/tool_util/deps/mulled/mulled_build.py”, line 383, in init
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: super().init(
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/tool_util/deps/conda_util.py”, line 125, in init
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: info = self.conda_info()
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/tool_util/deps/conda_util.py”, line 201, in conda_info
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: info_out = commands.execute(cmd)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/util/commands.py”, line 104, in execute
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: return _wait(cmds, input=input, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/srv/galaxy/server/lib/galaxy/util/commands.py”, line 121, in _wait
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: p = subprocess.Popen(cmds, **popen_kwds)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/usr/lib/python3.10/subprocess.py”, line 971, in init
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: self._execute_child(args, executable, preexec_fn, close_fds,
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: File “/usr/lib/python3.10/subprocess.py”, line 1863, in _execute_child
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: raise child_exception_type(errno_num, err_msg, err_filename)
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: FileNotFoundError: [Errno 2] No such file or directory: 'docker'
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.command_factory INFO 2024-02-18 15:21:04,596 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] Built script [/data/jobs/000/78/tool_script.sh] for tool command [Rscript main.R /data/datasets/7/2/4/dataset_7249853c-5176-4e35-9d77-83b26183f466.dat /data/jobs/000/78/outputs/dataset_febbdd27-6e13-4488-8353-af2042018fa2.dat]
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:04,617 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “GET /api/histories/cc97ae811bf6756a/contents?v=dev&limit=1000&q=update_time-ge&qv=2024-02-18T15:02:25.915Z&details=421123d490548da8,df8cf2aa639cfcdf,ca0105d99f25b125,794544e0333b3bf3,5241243e2939ef59,115297d8366d265e,dfc291ee71abb920,7e4f102dfb157f2e,35f2b95da9926874 HTTP/1.0” 200
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners DEBUG 2024-02-18 15:21:04,637 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] (78) command is: mkdir -p working outputs configs
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: if [ -d _working ]; then
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: rm -rf working/ outputs/ configs/; cp -R _working working; cp -R _outputs outputs; cp -R _configs configs
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: else
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: cp -R working _working; cp -R outputs _outputs; cp -R configs _configs
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: fi
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: cd working; /bin/bash /data/jobs/000/78/tool_script.sh > '../outputs/tool_stdout' 2> '../outputs/tool_stderr'; return_code=$?; echo $return_code > /data/jobs/000/78/galaxy_78.ec; cd '/data/jobs/000/78';
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: [ "$GALAXY_VIRTUAL_ENV" = "None" ] && GALAXY_VIRTUAL_ENV="$_GALAXY_VIRTUAL_ENV"; _galaxy_setup_environment True; python metadata/set.py; sh -c "exit $return_code"
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa DEBUG 2024-02-18 15:21:04,642 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] (78) submitting file /data/jobs/000/78/galaxy_78.sh
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa INFO 2024-02-18 15:21:04,644 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-1] (78) queued as 37
Feb 18 15:21:04 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:04,667 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “GET /api/users/da56ac9b6935872b HTTP/1.0” 200
Feb 18 15:21:04 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa DEBUG 2024-02-18 15:21:04,695 [pN:handler_1,p:944,tN:SlurmRunner.monitor_thread] (78/37) state change: job is queued and active
Feb 18 15:21:04 galaxy-ftw galaxyctl[1976]: uvicorn.access INFO 2024-02-18 15:21:04,757 [pN:main.1,p:1976,tN:MainThread] 144.41.156.245:0 - “GET /api/histories/cc97ae811bf6756a/contents?v=dev&order=hid&offset=0&limit=100&q=deleted&qv=false&q=visible&qv=true HTTP/1.0” 200
Feb 18 15:21:05 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa DEBUG 2024-02-18 15:21:05,702 [pN:handler_1,p:944,tN:SlurmRunner.monitor_thread] (78/37) state change: job is running
Feb 18 15:21:08 galaxy-ftw galaxyctl[1976]: uvicorn.access INFO 2024-02-18 15:21:08,292 [pN:main.1,p:1976,tN:MainThread] 144.41.156.245:0 - “GET /history/current_history_json?since=2024-02-18T15:21:04.512707 HTTP/1.0” 200
Feb 18 15:21:08 galaxy-ftw galaxyctl[1985]: uvicorn.access INFO 2024-02-18 15:21:08,361 [pN:main.2,p:1985,tN:MainThread] 144.41.156.245:0 - “GET /api/histories/cc97ae811bf6756a/contents?v=dev&limit=1000&q=update_time-ge&qv=2024-02-18T15:21:04.539Z&details=421123d490548da8,df8cf2aa639cfcdf,ca0105d99f25b125,794544e0333b3bf3,5241243e2939ef59,115297d8366d265e,dfc291ee71abb920,7e4f102dfb157f2e,35f2b95da9926874 HTTP/1.0” 200
Feb 18 15:21:08 galaxy-ftw galaxyctl[1976]: uvicorn.access INFO 2024-02-18 15:21:08,430 [pN:main.1,p:1976,tN:MainThread] 144.41.156.245:0 - “GET /api/users/da56ac9b6935872b HTTP/1.0” 200
Feb 18 15:21:08 galaxy-ftw galaxyctl[1976]: uvicorn.access INFO 2024-02-18 15:21:08,542 [pN:main.1,p:1976,tN:MainThread] 144.41.156.245:0 - “GET /api/histories/cc97ae811bf6756a/contents?v=dev&order=hid&offset=0&limit=100&q=deleted&qv=false&q=visible&qv=true HTTP/1.0” 200
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs.runners.drmaa DEBUG 2024-02-18 15:21:10,765 [pN:handler_1,p:944,tN:SlurmRunner.monitor_thread] (78/37) state change: job finished normally
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.tool_util.output_checker INFO 2024-02-18 15:21:10,783 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] Job error detected, failing job. Reasons are [{'type': 'exit_code', 'desc': 'Fatal error: Exit code 127 ()', 'exit_code': 127, 'code_desc': '', 'error_level': 3}]
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:10,792 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] finish(): Moved /data/jobs/000/78/outputs/dataset_febbdd27-6e13-4488-8353-af2042018fa2.dat to /data/datasets/f/e/b/dataset_febbdd27-6e13-4488-8353-af2042018fa2.dat
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:10,824 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] (78) setting dataset 85 state to ERROR
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs INFO 2024-02-18 15:21:10,840 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] Collecting metrics for Job 78 in /data/jobs/000/78
Feb 18 15:21:10 galaxy-ftw galaxyctl[944]: galaxy.jobs DEBUG 2024-02-18 15:21:10,858 [pN:handler_1,p:944,tN:SlurmRunner.work_thread-3] job_wrapper.finish for job 78 executed (72.562 ms)