Intermediate files are not available on public servers. Jobs run on a cluster, and only the final results are sent back to the history as datasets. If a job runs out of memory during execution or times out (exceeds walltime), it sometimes does not actually fail (red); instead, the results come back empty (green).
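If you suspect one of these green-but-empty results, a quick sanity check after downloading the dataset is to test whether the file actually has content. A minimal shell sketch (the filename is just an example):

```shell
# Check whether a downloaded Galaxy result file is empty
# (filename is hypothetical – substitute your own download)
file="output.tabular"

if [ ! -s "$file" ]; then
  # -s is true only when the file exists and has a size greater than zero
  echo "Result is empty: the job may have run out of memory or hit walltime"
else
  # File has content; show how many lines it holds
  wc -l "$file"
fi
```

If the file is empty, rerunning on a server with more resources (as described below) is usually the next step.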
What happens when a cluster cannot process a job depends a bit on the server/cluster configuration and on the underlying third-party tool itself. Some tool execution problems are easier to trap than others. Ideally, every execution problem would produce a red error result with a meaningful message, but in practice that isn't always possible.
This FAQ explains the different ways jobs can fail, and what to do about each, in more detail: https://galaxyproject.org/support/tool-error/ Your inputs were fine last time I checked, so that is not a factor. You need to run the job with more resources allocated server-side than the public servers can provide.
The GVL Cloudman is a really great resource for large data or time-sensitive work. AWS is generous with grants (especially now). You'll be the server administrator, so you can install tools and reference data, or use it as-is. Much is preconfigured, and server/cluster administration is handled through a web interface. If you decide to try this, choose a high-memory server type to avoid problems: clusters that you add will have the same configuration as the primary server, so if you need more memory later, you would have to start over. Jobs will not fail for exceeding execution time on your own server/cluster, so that issue is entirely eliminated (same as when running your own local Galaxy).
You can also look into the academic cloud options (Jetstream, etc.). This takes more administrative work on your part, but it is definitely a choice many make.