How to upload a locally built reference genome to a Galaxy cloud server

Try clicking on the “eye” icon for the BAM dataset. Does the dataset contain alignment lines, or just headers? Review the logs and the BED dataset as well; they may give some clues about what went wrong.
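If you download the BAM and would rather check it outside of Galaxy, here is a minimal sketch using pysam (the file name `mapped.bam` is just a placeholder for your downloaded dataset):

```python
import pysam

# Open the BAM; check_sq=False tolerates a header-only file
# with no @SQ lines (a common symptom of a failed mapping job).
bam = pysam.AlignmentFile("mapped.bam", "rb", check_sq=False)

# Count a few records; a header-only BAM yields zero.
count = 0
for read in bam.fetch(until_eof=True):  # until_eof avoids needing a BAM index
    count += 1
    if count >= 10:
        break
bam.close()

print("contains alignments" if count else "header only -- no alignments")
```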

I suspect the job failed for memory reasons (STAR is very memory-intensive); otherwise, redetecting the metadata for the BAM would have succeeded. Indexed genomes use fewer resources, but even when indexed, the entire genome is held in memory. This prior Q&A from just today explains the resources the tool needs: RNA-STAR, hg38 GTF reference annotation, Cloudman/AWS options plus local Galaxy "Cloud Bursting" for memory intensive mapping - #7 by jennaj.
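As a rough sanity check before resubmitting, you can estimate STAR’s RAM needs from the genome size; the STAR documentation suggests on the order of 10 bytes of RAM per genome base (about 31 GB for human). A quick sketch, where `barley_genome.fa` is a hypothetical path to your FASTA:

```python
import os

GENOME_FASTA = "barley_genome.fa"  # hypothetical path to your genome

# FASTA file size roughly approximates the base count
# (headers and newlines add a small overcount).
genome_bytes = os.path.getsize(GENOME_FASTA)

# STAR's rule of thumb: ~10 bytes of RAM per genome base.
est_ram_gb = genome_bytes * 10 / 1e9
print(f"Estimated RAM for STAR indexing/mapping: ~{est_ram_gb:.0f} GB")
```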

This post also covers a new function for “Cloud Bursting”, meaning attaching cloud resources to a local Galaxy for larger jobs, on demand. It is just another option to consider, but it requires more administrative work to set up, and you would still need space on your local instance to store data. Using a Cloud Galaxy will be simpler to configure and will offer more space and memory, so I hope one of those works out!

The “name” issue with IGV was likely due to the existing metadata problems with the failed BAM. Once you have a successful result, any datasets you want to view in IGV need to have a “database” assignment (see the Custom Genome FAQ, specifically the “Custom Build” option). This allows you to assign a custom “database” to your data. To have it display in IGV along with the underlying genome sequence, you’ll also need to install your custom genome into a local IGV and give it the same name.
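For the local IGV side, IGV’s “Genomes > Load Genome from File” option can load a plain FASTA, but it expects the matching `.fai` index alongside it. A minimal sketch with pysam, again using the hypothetical `barley_genome.fa`:

```python
import pysam

# Writes barley_genome.fa.fai alongside the FASTA; IGV reads
# both files when you load the genome from a local file.
pysam.faidx("barley_genome.fa")
```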

I don’t think barley is available as a pre-indexed genome in IGV, but you could check the list, and if it is there, confirm that it is the same build/version as the one you are using in Galaxy for mapping. If it is the same, create your Custom Build in Galaxy with the same “database” name, aka “dbkey”, that IGV uses, so the data “match up”.
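Beyond the dbkey, the chromosome names inside the BAM must match the contig names in the genome IGV loads, or the reads will not display. A quick check with pysam (file names are placeholders):

```python
import pysam

BAM = "mapped.bam"          # placeholder: your successful BAM
FASTA = "barley_genome.fa"  # placeholder: the genome loaded into IGV

# Compare the BAM's @SQ reference names against the FASTA contig names.
bam_refs = set(pysam.AlignmentFile(BAM, "rb").references)
fasta_refs = set(pysam.FastaFile(FASTA).references)

missing = bam_refs - fasta_refs
if missing:
    print("contigs in BAM but not in genome:", sorted(missing)[:5], "...")
else:
    print("all BAM contigs are present in the genome -- names match up")
```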