Hello @jennaj, I am using the “deg-analysis.ga” workflow from the list of workflows you shared earlier in this thread, and my DESeq2 problem is solved, but now I seem to be having a problem with the “Join two Datasets side by side on a specified field” tool within this workflow. Each of the prior steps works, then it fails at this step with this error:
```
Traceback (most recent call last):
  File "/data/tools/galaxy/tools/filters/join.py", line 19, in <module>
    from galaxy.util import stringify_dictionary_keys
ModuleNotFoundError: No module named 'galaxy'
```
I ran into this earlier too and can’t seem to get past it. Do you have any recommendations for how to get around it? Any help or suggestions are highly appreciated. Thank you!
This is the important part of the error message. It can mean there is a configuration problem in the environment where the tool is executing.
```
ModuleNotFoundError: No module named some-python-module
```
So, in this context, the cluster node where the job is running cannot communicate with Galaxy for some reason: the Python environment the job runs under cannot import Galaxy’s own `galaxy` module.
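If you want to see that directly, here is a minimal probe (a sketch, not part of the tool) you could run on a compute node with the same Python the job uses; the interpreter path and output will of course differ on your system:

```python
# Minimal import probe: run with the same Python interpreter the job uses.
import sys

print("interpreter:", sys.executable)
try:
    import galaxy  # the module the failing tool script needs
    print("galaxy importable from:", galaxy.__file__)
except ModuleNotFoundError:
    # Same failure the tool hit: Galaxy's code is not on this Python's path.
    print("galaxy NOT importable; sys.path is:")
    for p in sys.path:
        print("  ", p)
```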
But … that can also come up when:

- There isn’t any output from the prior tool to use as an input. As the process dies from lack of input, it falls back to this general error about the environment not being right at a higher level (Galaxy itself), instead of reporting what is actually going wrong. This should be rare but is still possible.
- There is a Python environment configuration problem, likely related to dependencies within the container the job is running in on your cluster nodes. Running jobs in a container with dependency resolvers will fix this. Some tools still need Python version 2 instead of 3, mostly for legacy/reproducibility reasons, and using containers standardizes the compute environment so jobs can run anywhere. If you are attempting to use tools/dependencies already available on the compute node, consider using containers instead. A quick probe like the sketch after this list can show which environment a job actually gets.
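To compare the environment a job actually sees (inside versus outside a container, or across node classes), a small probe like this can be submitted the same way your Galaxy jobs are; this is just a sketch, and the Docker check is a common heuristic rather than a guarantee:

```python
# Environment probe: run as a cluster job to see what a tool script sees.
import os
import sys

print("interpreter:", sys.executable)
print("python version:", sys.version.split()[0])
print("PATH:", os.environ.get("PATH", ""))
print("PYTHONPATH:", os.environ.get("PYTHONPATH", "(unset)"))
# Common (not foolproof) heuristic for "am I inside a Docker container?"
print("inside docker:", os.path.exists("/.dockerenv"))
```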
Another person was asking about a similar situation earlier today; see the link to the training docs about “best practices” for attaching a cluster to Galaxy.
Try this first to narrow down what is actually going on:

1. Create an input that you expect to work with this tool.
2. Run the tool directly (outside of the workflow), but in the same compute environment where you run the workflow.
3. If that works, there is likely a data output problem within your workflow to solve. Or, the data itself doesn’t pass cleanly through a particular tool, or group of tools. Technically, that should fail the upstream tool, but not all of the underlying tools were written by the original tool authors in a way that catches the problem; meaning, they will happily output green, empty results and claim victory. Again, rare but possible. The stand-in join sketch below is one quick way to check the data side.
4. If you get the same error again, then this is probably an actual configuration problem.
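For the data-side check, here is a minimal stand-in for what “Join two Datasets side by side on a specified field” does. This is not the Galaxy tool itself, just a sketch with hypothetical file names and join columns, to confirm that both tabular inputs are non-empty and actually share key values:

```python
# Stand-in join check for two tab-separated files (hypothetical names/columns).
import csv

FILE1, FILE2 = "deseq2_results.tabular", "annotation.tabular"  # hypothetical
COL1, COL2 = 1, 1  # 1-based join columns, as in the Galaxy tool's form


def load(path, col):
    with open(path, newline="") as fh:
        rows = list(csv.reader(fh, delimiter="\t"))
    if not rows:
        raise SystemExit(f"{path} is empty: the upstream step produced no data")
    return {row[col - 1]: row for row in rows if len(row) >= col}


left, right = load(FILE1, COL1), load(FILE2, COL2)
shared = left.keys() & right.keys()
print(f"{len(left)} keys in file 1, {len(right)} in file 2, {len(shared)} shared")
for key in list(shared)[:5]:  # preview a few joined rows
    print(key, left[key] + right[key])
```

If this reports zero shared keys or an empty file, the problem is in the upstream data, not the compute environment.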
Be aware that you might have different classes of nodes in your cluster, and/or different clusters! The tool might be failing on some or all of them, and that is a different clue about where and what needs the configuration tuned up. This would also explain what you reported: it doesn’t always happen, but you’ve seen it enough to be concerned. Check where those failed jobs ran and you’ll likely find a pattern.
Please give that a review and we can follow up. Explain a bit more about your job/scheduling config and your use of dependency resolvers. The training will help you learn which config files/sections are involved, and what other admins helping out will need for troubleshooting. Be sure to redact any private info!