Maker empty output

Hello Galaxy Team,

I am a newbie to the environment. I launched a Maker job and was very happy to see it successfully completed… until I discovered the output is empty!

Can somebody please help me figure out what the problem was? I am scared of running another job for 6 days only to find the same result :joy:

Thanks a lot!!!



Hi Clavar - did you get any help or find a solution? I am having a similar outcome on a second round of Maker annotations. The first run worked, but it was pretty basic: just Augustus and RepBase. The run that failed had EST and protein evidence from other species, plus the original Maker result added as input. I really want to know where things went wrong!! Especially since there was no bug report, just a sad empty output file after several days :'(


Hi @Jessica

The job may have failed for memory reasons (inputs too large to process – I suspect that was also @Clavar's issue), but the server has also had some issues, both over the last week and back in March. Please try a rerun to eliminate any server issues as a factor.

Please also leave your existing "empty results" undeleted (inputs and outputs) – we saw the message you sent to the galaxy-bugs mailing list, and we'd like to review what went wrong. If an input issue is present (format/content), we can try to help you fix the problems via email.

Notice: (applies to everyone)

The Galaxy Main server is up again now. Please rerun any jobs that produced odd results. This may include the need to re-upload data.

We have had a few server-side issues recently and will likely post a notice about them here.

Thanks for reporting the problem and our apologies for the inconvenience!

Thank you! I will rerun the analysis. Do you oversee both the .org and .eu sites?


Great, thank you. I am still reviewing the prior runs – curious about what is going wrong. My guess is that the jobs are simply exceeding resources, but I'd like to confirm that is actually what is going on.
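If you want to confirm resource exhaustion yourself before committing to another multi-day run, one low-tech approach on your own machine (not on the public server) is to run the tool on a small input and record the peak memory of the child process. A minimal Python sketch, assuming a Unix-like system (the `resource` module is not available on Windows, and `ru_maxrss` units differ by platform):

```python
import resource
import subprocess
import sys

def peak_child_mem(cmd):
    """Run cmd to completion and return the peak resident set size of
    the child process(es), as reported by getrusage().
    Note: ru_maxrss is in kilobytes on Linux but bytes on macOS."""
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

# Example: measure a small Python child that allocates ~50 MB.
# (Swap in your real tool command line to profile it.)
usage = peak_child_mem([sys.executable, "-c", "data = bytearray(50_000_000)"])
print(f"peak child RSS: {usage}")
```

Extrapolating from a subset run gives only a rough lower bound, but it can tell you quickly whether a full-size job has any chance of fitting in a public server's per-job allocation.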

My role is to support any Galaxy question/learning/troubleshooting. I definitely do not do it alone – it is a group/community effort. I'm part of the US Galaxy team and an admin at the .org server (and related resources, including this forum). I am not an admin at the .eu server, but I can help with much there, and at most other public Galaxy servers. Or, try to help… then triage as needed :woman_technologist:

Hi - so I ran Maker again and the same thing happened. It seems to have happened a little sooner - less than 24 hours this time.


Help please!! I ran Maker again; it ran for weeks (!) and the output is empty. Is there any way I can get any of the intermediate files the program wrote?? The error just says:

Job 9475949’s output dataset(s) could not be read

This is breaking my heart!!


Sorry for the delay. The job as configured before was too large to process (the inputs themselves were fine). I'm not sure how you configured the rerun, but if it was the same, that is likely the same issue. Maker is a very compute-intensive tool, and very large inputs are unlikely to be successfully processed at any public Galaxy server.

Options include setting up your own Galaxy server with more dedicated resources. The GVL version of CloudMan is a popular choice for scientists. Search this forum with keywords like "cloud", "gvl", or "cloudman" to learn more about that option. Galaxy itself is always free, but commercial storage/compute is not. AWS has a grant program to cover costs for academic research.
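Another workaround some Maker users try (this is a general tactic, not something the public servers do for you) is splitting the assembly into chunks of whole contigs, annotating each chunk as a separate, smaller job, and merging the resulting GFF3 afterwards. A minimal sketch, assuming a plain multi-record FASTA where `>` only appears at record starts:

```python
def split_fasta(path, n_chunks=4):
    """Split a multi-sequence FASTA into roughly equal chunks of whole
    records, so each chunk can be annotated as its own smaller job.
    Returns the list of chunk file paths written next to the input."""
    with open(path) as fh:
        text = fh.read()
    # Each record starts with '>'; keep records intact.
    records = [">" + r for r in text.split(">") if r.strip()]
    size = max(1, -(-len(records) // n_chunks))  # ceiling division
    paths = []
    for i in range(0, len(records), size):
        out = f"{path}.chunk{i // size}.fa"
        with open(out, "w") as fh:
            fh.write("".join(records[i:i + size]))
        paths.append(out)
    return paths
```

Smaller jobs are more likely to fit within a public server's memory and walltime limits, and a single chunk failing no longer costs you the whole run.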


Thanks for getting back to me. Are there any intermediate files? It ran for a very long time. I do have it running on a little Galaxy instance I made, and there are files being written as the program runs.
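On your own instance, one quick way to confirm a long run is still making progress is to list which files under the tool's working directory were touched recently. A small sketch (the `genome.maker.output` path below is hypothetical; on a local run Maker writes into a directory named after the assembly):

```python
import os
import time

def recent_files(root, minutes=30):
    """List files under root modified within the last `minutes` -
    a quick check that a long-running job is still writing output."""
    cutoff = time.time() - minutes * 60
    hits = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            if os.path.getmtime(p) >= cutoff:
                hits.append(p)
    return sorted(hits)

# Example usage (directory name is an assumption, not from this thread):
# for p in recent_files("genome.maker.output", minutes=60):
#     print(p)
```

If nothing has been modified for many hours, the job may be stuck or already dead even though the scheduler still shows it as running.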


Intermediate files will not be available at public servers. Jobs run on a cluster, and only the final results are sent back to the history as datasets. If there are memory issues during execution, or the job times out (exceeds walltime), sometimes those jobs don't actually fail (red); instead, the results come back empty (green).

What happens when a cluster cannot process a job depends a bit on the server/cluster configuration and on the underlying 3rd-party tool itself. Some tool execution problems are easier to trap than others. Ideally, every execution problem would produce a red error result with a meaningful message, but in practice that isn't always possible.

This FAQ explains the different ways jobs can fail, and what to do about each, in more detail. Your inputs were fine last time I checked, so that is not a factor. You need to run the job with more resources allocated server-side than the public servers can provide.
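Because "green but empty" is easy to miss in a long history, it can help to scan for it programmatically. If you script against the Galaxy API (for example via BioBlend's dataset calls), dataset details include `state` and `file_size` fields; assuming records of that shape, a minimal classifier looks like:

```python
def silent_failures(datasets):
    """Given dataset records shaped like the Galaxy API's dataset details
    (dicts with 'name', 'state', and 'file_size'), return those that
    finished 'ok' (green) but produced zero bytes - the empty-output
    pattern described above."""
    return [d for d in datasets
            if d.get("state") == "ok" and d.get("file_size", 0) == 0]

# Example with made-up records:
history = [
    {"name": "maker gff3", "state": "ok", "file_size": 0},
    {"name": "maker log", "state": "ok", "file_size": 2048},
    {"name": "failed step", "state": "error", "file_size": 0},
]
print(silent_failures(history))
```

Anything this flags is worth a rerun or a bug report, even though the history shows it as successful.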

The GVL CloudMan is a really great resource for large data or time-sensitive work. AWS is generous with grants (especially now). You'll be the server administrator, so you can install tools and reference data, or use it as-is. Much is preconfigured, and server/cluster administration is handled through a web interface. If you decide to try this, choose a high-memory server type to avoid problems – worker nodes you add will have the same configuration as the primary server, so if you need more memory later, you would have to start over. Jobs will not fail for exceeding execution time on your own server/cluster, so that issue is entirely eliminated (the same as when running your own local Galaxy).

You can also look into the academic cloud options (Jetstream, etc.). This will take more administrative work on your part, but it is definitely a choice many make.

OK, thanks for looking into it for me!


