Newly created GCE system with one NVIDIA L4 GPU and the Ubuntu 22.04 accelerator-optimized OS, NVIDIA driver 570.
Installed Galaxy 25.0 in a user account from git using ./run.sh.
There appears to be no conda, conda-forge, or bioconda environment on this system, if that’s relevant. This help article says:
“Be sure to install Galaxy into a Conda .venv. Why is in the admin docs under Framework Dependencies.”
but I could find no article/topic called “Framework Dependencies” in the admin docs.
When running basic_illumination, the jobs die with the error:
ln: failed to create symbolic link 'ImageJ' -> '…': No such file or directory
/srv/galaxy/galaxy/database/
There was no Java environment on this system, so I installed one:
apt install openjdk-11-jre-headless
I’m now trying to install ImageJ by unzipping the distribution, but it’s not clear to me where it needs to be unzipped. This article indicates it should be unzipped in the home directory, with an alias created for imagej. Is that appropriate for the Galaxy server? The above error references <server-root>/database, not <server-root>/ImageJ or <server-user-home>/ImageJ. Is any other magic necessary for a server tool to find it?
On the chance it might be enough to install the imagej package, I tried the following:
$ sudo apt-get update
$ sudo apt install imagej
After doing so, I see:
$ which imagej
/usr/bin/imagej
Restarted the server, but the job fails with the same error.
/usr/bin/imagej is the standard startup script that determines the Java environment and invokes ImageJ via …
For the short guide, yes the link should be adjusted (I’ll do that!).
The current resource links as a reference can be found at:
Notes:
Installing and running a modern version of a Local Galaxy from GitHub requires administration beyond simply running run.sh. And if the first execution is not run inside a standard .venv environment, you might get Galaxy to start up, but you will usually run into problems with tool installations or tool dependency management pretty quickly! The tools you want to use from that tutorial have many dependencies. The best way to correct this is to start over. You can save your database, but it doesn’t sound like you will need that.
Back to the start, if you still want to try the Local Galaxy version instead!
Confirm the version of Python on your computer, and update it if needed. Which version? You will be checking against the release notes for this instruction, and against the (very general) installation instructions. The current release is here → 25.0 Galaxy Release (June 2025) — Galaxy Project 25.0.2.dev0 documentation
The instructions for setting up the .venv look pretty good to me. This is a bit “outside of Galaxy” since operating systems can have slightly different methods, but maybe this helps to explain how it works → Create virtual environments for python with conda
You will follow that guide up until Step 4. After that, Step 5 is what the run.sh command will be doing when it starts Galaxy, and what tool installations will be doing.
The idea is to create and then activate an environment. Then you install Galaxy from GitHub into that directory. Finally, start up Galaxy for the first time, then install tools (with Ephemeris, or directly). You can install the tools listed in the tutorial, or you can use a workflow as the “baseline” and install based on its content.
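To make that concrete, here is a minimal sketch of the sequence, assuming a plain python3 venv rather than conda (either works) and using illustrative paths. Note that run.sh will create a .venv inside the checkout by itself if nothing is active; the point is only that the first start-up happens in a clean, known environment:

$ python3 -m venv ~/galaxy_venv              # create the environment
$ source ~/galaxy_venv/bin/activate          # activate it before anything else
$ git clone -b release_25.0 https://github.com/galaxyproject/galaxy.git ~/galaxy
$ cd ~/galaxy
$ ./run.sh                                   # first start-up: installs framework dependencies, builds the client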
In short, I don’t think these tools will work correctly without significant custom configuration given what you have now, and then you’ll be doing the same for other tool installations (or your own development). A clean environment, a clean install, and then managed dependencies is probably the best path, and the Docker version will be easier.
New remote Galaxy server running on Ubuntu 22.04 in GCE. Trying to get the MCMICRO Tissue workflow to run on exemplar-001. I made a list of the three ome.tiff files; the workflow failed at step 4, BaSiC Illumination, with the error “ImageJ: command not found”. This is a test server with the standard Google Ubuntu OS, without graphics accelerators; Galaxy was installed normally by downloading and running ./run.sh. There is no imagej package installed (“apt list --installed | grep -i imagej” comes up empty). Shouldn’t the Galaxy install or the tools install provide the needed components?
I have the following tools installed – older versions as referenced in the McMicro *.ga files:
I installed the imagej package on the Galaxy machine. There is a shell script, /usr/bin/imagej, that sets up the Java environment for ImageJ. Attempting to run the MCMICRO Tissue example still fails with “tool_script.sh: line 9: ImageJ: Command not found”.
I also tried running Galaxy-Workflow-MCMICRO_Tissue_v1.0.0.ga on usegalaxy.org, but the workflow has numerous errors due to missing and outdated tools. Is there even a version of the workflow that works with the current Galaxy (25.0)?
Sorry to hear about the problems. I agree, this is likely dependency related. We can bring in the developers to learn if they recognize what may be going wrong (I don’t, but I can investigate more, too). I’ve asked them for feedback, and they’ll reply back here.
At UseGalaxy.org, I did need to accept one tool update and double-check that the options set on the updated tool form were still correct versus the tutorial guide, so this was a single load-workflow/save cycle before execution.
Then at cancer.UseGalaxy.org, the workflow loaded without changes (likely because this is where the workflow was developed originally). I can’t share the invocation directly since the server is running an earlier version of Galaxy, but the workflow itself was unchanged from the tutorial version, so maybe this is enough.
If you want to share the workflow link you are using, I can also help to compare it against any updates that may have been applied. The tool update at ORG versus the Cancer server can be meaningful: a change may have been needed to make it compatible with the 25.0 release or other OS updates in general. We can confirm that and solve your issue.
I got a little further and then posted a new topic that’s more general, I think; sorry for the confusion.
The problem is not related to the workflow; it is related to the Java JRE and where the ImageJ code is unpacked. Simply attempting to run basic_illumination against one of the Tissue Microarray Image Analysis tutorial files will exhibit the error on a server that does not have a Java JRE and ImageJ installed.
Running the workflow on an existing server that already has basic_illumination installed won’t tell me much, as that system will already have a JRE and the ImageJ code installed in the appropriate place.
Maybe I am misunderstanding, but this is what the .venv is giving you: a clean environment where the system level dependencies are ignored unless explicitly referenced (and this is usually not wanted at all for job working directories). This means you can control which resources to access, and which version, when executing tools and visualizations.
Let’s merge this all together since the tests at the public servers might still be helpful.
Ok – I just checked those public server tests, and both worked! This is good, because the team who is working on these complicated tool wrappers just updated everything, and how to get this configured is fresh in everyone’s mind.
If you want to try with a fresh Galaxy checkout in the right environment, that configuration will be very close to what these two public servers host (we are first in line for release testing, and usually the most current with respect to what is found at GitHub or the ToolShed). This means reproducibility should be very high, both in config and usage.
There are other reasons to get started in the right environment. These are mostly about avoiding complicated dependency installations for other tools, and potential job execution issues, too.
Whenever you see a “command not found” error, that is 98% a dependency resolution error, and quite often the real culprit is some missing Python sub-module or package version, not the command that is reported. The tool that is reported is simply what that missing dependency was supposed to provide; it cannot be found because the dependency itself was never resolved.
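One way to check whether that is what happened on your server is to list the Conda environments Galaxy created for tool dependencies. This is a sketch assuming the default conda_prefix location; adjust the path if your galaxy.yml sets tool_dependency_dir or conda_prefix to something else:

$ cd ~/galaxy    # your Galaxy root
$ ./database/dependencies/_conda/bin/conda env list
# resolved tool dependencies show up as __<package>@<version> or mulled-v1-<hash>
# environments; if nothing related to the failing tool is listed, the dependency
# was never installed, and a "command not found" error at job time is the result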
Let’s merge this all together since the tests at the public servers might still be helpful.
I don’t know how to do that, but sounds good to me.
Maybe I am misunderstanding, but this is what the .venv is giving you: a clean environment where the system level dependencies are ignored unless explicitly referenced
Understood, thanks.
If you want to try with a fresh Galaxy checkout in the right environment, that configuration will be very close to what these two public servers host
Not sure what you mean by “in the right environment”. Since run.sh sets up a .venv, are you saying a fresh git checkout followed by ./run.sh should work?
Or should I be setting up a docker env first?
What do you mean by a fresh checkout? git clone -b release_25.0 or clone followed by pull or something more?
On the off chance no Docker setup is now supposed to be required, I did a fresh git clone and installed the tools; the system had the following installed prior to the git clone:
openjdk-11-jre/jammy-updates,jammy-security,now 11.0.27+6~us1-0ubuntu1~22.04 amd64 [installed,automatic]
imagej/jammy,now 1.53o-1 all [installed]
basic_illumination still fails because it can’t find ImageJ.
When and how is the tool dependency supposed to be resolved? It can’t be resolved when Galaxy is being installed from the initial ./run.sh, because the basic_illumination tool hasn’t been loaded into Galaxy yet. So it has to be resolved dynamically, either when the tool is loaded by Galaxy or when the tool is first run. In either of those cases, it is happening after the downloaded Galaxy image is installed and up and running. So the running Galaxy has to resolve it either to the local OS, outside of the installed image, or to something the installed image brings into its own environment. Neither of those appears to be happening.
Don’t know if this helps or not: the basic_illumination tool is already present in the usegalaxy.eu server. It appears to have already been used there, so its dependencies are resolved there. I’m guessing the server was built with the tool and its dependencies already included, as opposed to being dynamically loaded. It might be worth verifying that. Whether or not it has been successfully used I cannot tell; if I try to run it on the usegalaxy.eu server, I get the following error with each image:
Cannot display TIFF image
getUint16@https://usegalaxy.eu/static/plugins/visualizations/tiffviewer/static/index.js:3:21331
fromSource@https://usegalaxy.eu/static/plugins/visualizations/tiffviewer/static/index.js:5:215
Reason: offset is outside the bounds of the DataView
BaSiC_Illumination is not already present on the usegalaxy.org server. What happens if an admin dynamically loads it (it will load OK) and then tries to execute it? Does it fail because of the inability to find ImageJ, or does it get further along?
Local Galaxy: Yes, create a new empty directory with a default .venv, git clone the release, then start up Galaxy for the first time. At this step, you’ll want to make sure the logs do not report any problems (resolve these), and that you can reach your local server through a browser on your computer. See → Galaxy Configuration — Galaxy Project 25.0.2.dev0 documentation
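For the “reach your local server” part, here is a quick sanity check from the same machine, assuming the default port and unchanged http settings:

$ curl -s http://localhost:8080/api/version
# a healthy instance answers with JSON like {"version_major": "25.0", ...};
# any start-up problems will also show in the console output from ./run.sh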
The tool dependency resolution happens when a tool is installed. A repository from the ToolShed contains all of the details.
Using the resources from the tutorial through the Ephemeris process will access the ToolShed to pull in the tool repositories. Each is resolved during installation. The script batches this all together.
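As a sketch of what that looks like from the command line (the tool-list file name is a placeholder, and the API key must belong to an admin account on your instance):

$ python3 -m venv ~/ephemeris_venv && source ~/ephemeris_venv/bin/activate
$ pip install ephemeris
$ shed-tools install -t tool_list.yml -g http://localhost:8080 -a <admin_api_key>

Each repository pulled from the ToolShed declares its requirements, and Galaxy resolves them (through Conda by default) as part of the installation, which is why this happens at install time rather than at first run.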
Both options are a full Galaxy server! The configuration for a single user will be mostly the defaults for both.
If you plan to have multiple users or to connect to a computing cluster, the default options in Docker Galaxy will be easier to configure, and it has the README with the exact instructions. This version was designed to be used by people (scientists, teachers, but also developers) who want one or more quick Galaxy instances up and running.
If you plan to do that with the Local Galaxy, this will be a larger project with many configurations. Following the Admin Training → Learning Pathway here is strongly recommended.
The basic steps are:
Create the environment you plan to install Galaxy into
Clone Galaxy into that environment
Start Galaxy up and resolve any issues reported
Set basic configurations about how data is saved, which logs are written, where jobs will run, and related details (a minimal sketch follows below).
Then you can start to customize the tool and data content: installing tools and reference data. This is when all of the dependencies for each tool (including visualization components) are installed.
Once those are done, you are ready to work in Galaxy in a similar way to how you work at a public Galaxy server.
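For the “basic configurations” step, here is a minimal sketch; the keys shown are from the sample galaxy.yml, the email address is a placeholder, and what you actually need to set depends on your plans:

$ cd ~/galaxy
$ cp config/galaxy.yml.sample config/galaxy.yml
# then edit config/galaxy.yml; for a single-admin test server, something like:
#   galaxy:
#     admin_users: you@example.org     # unlocks the Admin panel for tool installs
#     conda_auto_install: true         # let Galaxy resolve tool dependencies with Conda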
Does this help?
Then back to the data test question:
All of the dependencies for a tool are installed by the administrator, before the user is using a tool.
The error is suggesting that there is some data content issue with the TIFF file. My first guess is a coordinate problem between what the display is expecting and the data provided; likely a file format issue, maybe related to version changes. Maybe there is a way to standardize or otherwise correct the data.
I can help to confirm this, and to report issues to the developers (if needed) but I would need to see the actual data in the history to do this. You are welcome to generate a history share link and post that back here for troubleshooting.
I would also suggest trying to load your test images at cancer.GalaxyProject.org, since that is the server where these spatial omics tools are developed. Problems (or success!) as a comparison would help with any troubleshooting, too.
I found the tool at UseGalaxy.org. Do I have the right tool here?
BaSiC Illumination ImageJ BaSiC shading correction for use with Ashlar (link at ORG)
This tool would be installed with all of its dependencies if you use the Ephemeris script. This is what the first lines in the script are configuring: they are telling Galaxy to pull in all the dependencies, in the exact versions the tool needs to function correctly.
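If you only want that single tool rather than the whole tutorial script, shed-tools can also install by repository name and owner. A sketch; the owner shown here is an assumption, so double-check it on the ToolShed repository page before running:

$ shed-tools install --name basic_illumination --owner perssond \
    -g http://localhost:8080 -a <admin_api_key>
# --owner is an assumption; confirm it on the ToolShed page for the repository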
First off, thanks for your patience.
I’m new to this, so I’m learning how Galaxy works while also trying to figure out an install for multiple researchers.
Upload => Choose from repository => search zenodo
click the Zenodo folder
and a zillion possibilities are there. How would I find them there?
I ended up downloading from the above URLs.
On cancer.usegalaxy.org, under libraries, they are in an obvious folder Exemplar_002
On usegalaxy.org:
If I click on Workflows => GTN_Exemplar_002_TMA_workflow_Feb2025
I get a “Workflow Preview” that says last updated Thursday Jul 10 17:10:45 2025 GMT-7 that appears to be when I imported it.
If I click the “Edit” button, I get a dialog that says:
Step 8: Convert dearray images to OME-TIFF
Using version '6.7.0+galaxy3' instead of version '6.7.0+galaxy0' specified in this workflow.
What is the significance of +galaxyN ?
I’ve seen things without any “+galaxy” or “+galaxyN”, as well as the same tool versions (as with basic_illumination here) with different Ns.
Clicking “Continue” puts me in the workflow editor.
How the heck does one get out of the workflow editor without (or with) changing anything?
The only thing that seems to work is the browser back button, which isn’t used anywhere else I’ve found so far.
If I click on step 8: Convert image format,
I see the Galaxy3 version; clicking on the “versions” button shows a Galaxy2 option.
In post #6 you said you loaded the workflow and ran it, and needed to accept one tool update. By “accepting one tool update” do you mean leaving that version at Galaxy3, or changing/accepting something else?
If I click on “step 4: Illumination correction with Basic”, it shows “ImageJ BaSiC shading correction for use with Ashlar (Galaxy Version 1.1.1+Galaxy2)”
Suppose I just want to run the basic_illumination step, not the whole workflow.
If I get out of the editor (browser back button), go to Tools, and search for basic_illumination, it’s not there.
But the workflow didn’t complain about a tool not being installed.
Similarly for ashlar.
If I search for these tools on cancer.usegalaxy.org they show up.
If I load the workflow on usegalaxy.eu
I get a message saying “Imported, but some steps on this workflow have validation errors.” Steps 5, 11, 12 are marked in red with errors:
5. Stitching and registration with Ashlar
rename|markers_file
11. Convert to Anndata
Tool is not installed
12. Scimap phenotyping
Tool is not installed
So on both cancer.usegalaxy.org and usegalaxy.eu, basic_illumination appears to be there and shows up if I search for it in the tools.
I’ve finally figured out it appears to be an issue with the search mechanism on usegalaxy.org:
It appears to be there if I run the whole workflow, as the basic_illumination workflow step completes without errors.
I tried typing “basic”, “BaSiC”, and “illumination” in the search bar and got “No results found”.
If I go to Advanced Tool Search, I see the following behavior:
Filter by name:
“BaSic_Illumination” → no results
“basic_illumination” → no results, even though the actual tool name is basic_illumination
“basic illumination” → finds it
“basic” → finds it
“illumination” → finds it
Rather strange behavior that it finds it in the advanced search but not the simple one.
I have some fires to put out, so I will get back to the Galaxy install issue when I have digested a bit more of the admin docs. Thanks again for your patience.
Thanks for all the updates! You are making fast progress!
For the Zenodo data – yes, with so many samples this can be complicated, and some servers might have the data pre-loaded into a Data Library (as you noticed), but replicating everything from a public location down to a local server is too much data.
The usual strategy is to put large files that several users might want to work with (like example data, but also certain reference data) into a Data Library. This means one master copy (immutable), and then everyone else gets a clone they can use (which is what you did when getting the data from the folder on the Cancer server). You can load these libraries yourself, promote one of your trusted scientists to an admin account (after having a backup/recovery plan!), and you can also script data migrations, see → Hands-on: Data Libraries / Data Libraries / Galaxy Server administration. Regular scientists can learn how to load complicated data, but they might need instructions to reference. If you expect a lot of Zenodo data loading, consider adding the User → Preferences integration to make it smoother.
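If you end up scripting Data Library content, Ephemeris also ships a setup-data-libraries command. The sketch below assumes the flags mirror the other Ephemeris commands and that the YAML file already describes your libraries; check setup-data-libraries --help for the exact options on your version:

$ setup-data-libraries --help                 # confirm the exact flags first
$ setup-data-libraries -i my_libraries.yaml -g http://localhost:8080 -a <admin_api_key>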
Then, for tool versions, the extra “+galaxyN” part is another piece of the notation: a flavor of minor revision for the tool wrapper. It is optional, and depends on whether the change since the last major revision was something small (a tiny bug fix, a dependency issue) versus something large. The largest number is the “most current”: tool1.2.3+galaxy5 is more current than tool1.2.3+galaxy4, which is more current than tool1.2.3. Galaxy developed organically, shorter tool_ids are nicer to work with, and this is what we have discussed a million times and still have. Not perfect, but if you have any questions, please ask.
With respect to workflow editing – once you are in the workflow editor, it wants to lock you in unless you explicitly take some action. You can back out of the window or close it to get out and abandon any changes. If you make a change and click on the run > icon, that run will use the changes you applied, but it won’t “save” them. If you do want to save changes, use the red Save+Exit icon at the bottom of the left side. Avoiding the loss of changes that you DO want is more important than having to exit in a not-so-nice way. Plus – every save has a revision number, and those can be navigated, so you can also explore changes made during that session. If nothing changed, then no new revision number is assigned, which means clicking on Save+Exit doesn’t lose anything. Worst case, delete that copy entirely and import it again.
If you are editing workflows all day, you will really appreciate the functionality the way it is. If you are inspecting workflows closely but not making changes, then yes, what you notice is what I notice too. The users of workflows are likely not going into the editor very often (they use the inspector pop-up), so making it very clear whether a change is happening or not is a good defensive posture. So – I understand everything you are saying! I would like another button, too: “Get me out, no changes!” – but maybe only right up until I made 37 tedious changes, got tired, and clicked on the wrong button by accident…
And finally, as you have noticed, each server can host a different set of tools and tool versions. Keeping servers up to date with the most current tool version is easier than filling in all the older versions, and mixed states exist across all servers, even the larger ones. Galaxy is in constant development mode and reproducibility trumps all. That means a workflow is tied to an exact tool version for each step. You will be warned if anything is different. You can sometimes accept the differences in a batch, but you will want to pay attention to the message: sometimes the different version means that a new option is set at the default (usually OK, on purpose – not losing functionality is by design), but that isn’t perfect.

When in doubt, you can install all the tools called by a workflow in the exact version that is needed. That is what this tutorial is doing: Hands-on: Galaxy Tool Management with Ephemeris / Galaxy Tool Management with Ephemeris / Galaxy Server administration (extracting-tools from a workflow for installation, sketched below).

Tool wrappers themselves are small text files, so hosting several copies is fine, and different Galaxy versions of the wrapper do not mean different versions of the underlying tool. The wrapper calls that underlying tool and will change more often: bug fixes, new parameters added or reorganized, etc. The most current version has the most current known issues resolved (if any!), including dependency issues (which may or may not affect whether the other versions work on certain OS systems or job runtime containers). Most administrators will never remove an older copy, and instead only add more. If a tool is a hassle to support, consider hiding it from the tool panel instead of removing it.
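The “extract the tools from a workflow” step from that tutorial looks roughly like this on the command line; the .ga file name is the one from this thread, and the output file name is arbitrary:

$ workflow-to-tools -w Galaxy-Workflow-MCMICRO_Tissue_v1.0.0.ga -o mcmicro_tools.yml
$ shed-tools install -t mcmicro_tools.yml -g http://localhost:8080 -a <admin_api_key>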
And, I noticed this too:
Unfortunate! The author of the tool wrapper probably didn’t know our “rules” for how the tool search works, and it has to be “just right” or corner cases like this can come up. The tool panel roughly searches the tool name, then the description, and finally the other text on the tool form, then the tool_id last. We discuss this every few years! If you would like to join in or help to make improvements, this was a “recent” topic we used, and is what new discussions will at least reference (opened 2016, closed 2023) → improve tool panel search · Issue #2272 · galaxyproject/galaxy · GitHub
For the workflow run itself here, on the EU server:
I think this is probably expected given how brand new this workflow is. It was developed on the Cancer server, and we used the example for our yearly conference in late June. The tools were “frozen” at a certain state, and then the other servers had changes applied. The first error is a renamed tool, the second is a different version, and the last is a brand new tool.
This workflow was polished in June and is available from the IWC. The GTN version is now “older”. I would try the version from there if you plan to use this for serious work outside of training (but do not expect the IWC version to work exactly the same with the GTN training data).
I’m writing a book here but you have hit so many of the little details with how Galaxy works, I thought it was worth bringing you into the current thinking on these topics.
In summary:
Install and configure Galaxy (sounds like this is done now!)
Install the exact tools for the workflows you want to use (consider using the IWC version? Or add in the tools for both the GTN and IWC versions? Using Ephemeris will help)
Decide how you want users to load data (directly, or through secure integrations, or via data libraries)
Test the workflows you plan to support.
A workflow invocation is tied to an exact history, workflow version, tool versions, and data versions, and it is considered the primary artifact when working in Galaxy since it captures all the reproducibility details. When all those inputs match up, two workflow invocations at different servers would be expected to have their outputs “match up” too, e.g. similar results in their histories.