error with Picard's MarkDuplicates - This job was terminated because it used more memory than it was allocated.

Welcome, @Rose

Whenever this error is reported

This job was terminated because it used more memory than it was allocated.

it means that one of two things is going on:

  1. an input or parameter problem that should be adjusted before rerunning the job
  2. or, the job is genuinely too large to process at the Galaxy server where it was run

The public computational resources are significant, so the first case is much more common, but those resources do have practical limits.

If you think your job is genuinely running out of processing memory, the solution can be to try a different server. Please see the directory here for choices → Public Galaxy Servers.

Note: The processing memory used for job execution is unrelated to the storage space (quota) for your account.


For the Picard tool, we recently increased the memory at the UseGalaxy.org server to the maximum we can support at this time. Really large jobs might still fail.

You can try to reduce the size of the BAM dataset by filtering out reads that wouldn’t be used in the core analysis anyway: removing unmapped reads, filtering by a minimum mapQ value, or filtering by the paired alignment state (proper pairs). Search the tool panel with “filter bam” for tool choices if you want to try this, and the extra step can be added to a workflow. A sketch of the equivalent filtering outside Galaxy is shown below.
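
If you prefer to do the same filtering outside Galaxy, here is a minimal sketch using pysam. It assumes the input BAM is coordinate-sorted and pysam is installed; the file names and the mapQ cutoff are placeholders, not required values.

```python
# A minimal sketch, assuming a coordinate-sorted input BAM and pysam installed.
# File names and the mapQ cutoff (20) are placeholders -- adjust for your data.
import pysam

MIN_MAPQ = 20

with pysam.AlignmentFile("input.bam", "rb") as bam_in, \
     pysam.AlignmentFile("filtered.bam", "wb", template=bam_in) as bam_out:
    for read in bam_in:
        # Keep only mapped, properly paired reads with mapQ >= MIN_MAPQ
        if read.is_unmapped or not read.is_proper_pair:
            continue
        if read.mapping_quality < MIN_MAPQ:
            continue
        bam_out.write(read)

# Index the smaller BAM so downstream tools accept it
pysam.index("filtered.bam")
```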

If that is not enough, then consider trying a different server. The other UseGalaxy servers are good choices; for your case, UseGalaxy.org.au or UseGalaxy.eu.

Hope this helps! :slight_smile:
