Hello all,
At the risk of asking something that has been addressed before: I have downloaded results from Galaxy (not created within a workflow). How can I (batch-)link these basically anonymous (“job on dataset xxx”) results back to the original file names I uploaded?
It would help a lot if the file/job list under History could easily be exported to a TSV or CSV file…!
Thank you for your consideration!
Kind regards, Corné
Hi @cornek
I don’t think this has been discussed here before, or at least not recently!
For an API approach, you would query the original Galaxy history directly instead of parsing a downloaded history archive. Then your mapping can be used to navigate the downloaded content.
Start with the history contents endpoint, then query each dataset for details such as name, tags, state, and related metadata. If you use #nametags for your samples, this gets easier: name tags propagate through tool runs (even outside of a workflow), so you can summarize the results by those tags.
The relevant pattern is:
GET /api/histories/{history_id}/contents
GET /api/histories/{history_id}/contents/datasets/{dataset_id}
This could be wrapped in a script that runs against your history and writes a supplementary mapping file into wherever you extracted the downloaded history archive (e.g. with tar -xvf).
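For illustration, here is a minimal Python sketch of that pattern using the requests library. The server URL, API key, history ID, output file name, and the exact response fields used below (hid, name, state, tags) are all assumptions to confirm against your own Galaxy instance:

```python
import csv
import requests

# Hypothetical values -- substitute your own server, API key, and history ID.
GALAXY_URL = "https://usegalaxy.org"
API_KEY = "your-api-key"
HISTORY_ID = "your-history-id"

headers = {"x-api-key": API_KEY}

# 1. List every dataset in the history.
contents = requests.get(
    f"{GALAXY_URL}/api/histories/{HISTORY_ID}/contents",
    headers=headers,
).json()

# 2. Fetch details per dataset and write a name/ID map as TSV.
with open("history_map.tsv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")
    writer.writerow(["hid", "dataset_id", "name", "state", "tags"])
    for item in contents:
        detail = requests.get(
            f"{GALAXY_URL}/api/histories/{HISTORY_ID}/contents/datasets/{item['id']}",
            headers=headers,
        ).json()
        writer.writerow([
            detail.get("hid"),
            detail.get("id"),
            detail.get("name"),
            detail.get("state"),
            ",".join(detail.get("tags", [])),
        ])
```

The resulting history_map.tsv can sit next to the extracted archive as the custom map described above.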
The alternative I can think of is to parse the [ARCHIVE]/datasets_attrs.txt JSON directly with a custom script.
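A minimal sketch of that alternative, assuming the archive has already been unpacked and that datasets_attrs.txt holds a JSON list of per-dataset attribute records (the field names below are assumptions; inspect one record to see what your archive actually contains):

```python
import json

# Path inside the unpacked history archive -- adjust to where you extracted it.
ATTRS_PATH = "ARCHIVE/datasets_attrs.txt"

with open(ATTRS_PATH) as fh:
    datasets = json.load(fh)  # assumed: a JSON list of per-dataset dicts

for ds in datasets:
    # "hid" and "name" are assumed field names; try print(ds.keys())
    # on one record to confirm before relying on them.
    print(ds.get("hid"), ds.get("name"), sep="\t")
```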
Let us know what you think or if this helps! 
Update: I’m still thinking about your idea about a UI navigation or downloadable tabular version of the JSON. Or, maybe a standalone utility to parse it that can be shared. More soon about this. 
Hi JennaJ,
Thank you for your reply! One of the things I like about Galaxy is that I (as a non-bioinformatician) can perform complicated bioinformatics analyses.
To me, your proposed solution still sounds rather complicated.
I’m thinking there should be an easier way to do this?