Uploaded files depleted memory on Galaxy CloudMan?

Hello Galaxy community,

I was trying to upload my sequencing data (~2.3 GB/file × 32 files ≈ 73 GB) to my Galaxy on CloudMan 2.0 on an EC2 instance. The upload stopped and failed when roughly 75% of the files were done, showing "Warning: Please make sure the file is available. (500)". I checked the “cluster status” page, and it showed that the “memory cached” value kept increasing until it depleted the instance’s memory, while the disk space usage did not grow at all. My guess is that the files were being held in memory instead of being written to disk storage, which exhausted the memory. How can I solve this problem?
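For reference, a quick way to confirm whether the data is only landing in the page cache rather than on disk is to watch cached memory against disk usage while the upload runs. This is just a rough sketch, assuming Python and psutil are available on the instance; the /mnt/galaxy mount point is a placeholder for wherever the Galaxy data volume is actually mounted:

```python
# Monitoring sketch: compare cached memory vs. data-disk usage during an upload.
# Assumes Linux with psutil installed (pip install psutil).
import time

import psutil

DATA_MOUNT = "/mnt/galaxy"  # placeholder for the Galaxy data volume mount point

while True:
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage(DATA_MOUNT)
    print(
        f"cached: {mem.cached / 1e9:5.1f} GB | "
        f"available: {mem.available / 1e9:5.1f} GB | "
        f"disk used: {disk.used / 1e9:5.1f} GB"
    )
    # If "cached" keeps growing while "disk used" stays flat, the uploads
    # are sitting in memory rather than being flushed to storage.
    time.sleep(5)
```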

I am new to Galaxy and would really appreciate it if anyone can help!


I’ll try to replicate this, but in the meantime, something to try would be to upload fewer files at once (e.g., 5) instead of all 32; see the sketch below.
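If you are scripting the uploads anyway, a batched approach via the BioBlend API client would look roughly like this. It is only a sketch: the URL, API key, and file paths are placeholders, and the batch size of 5 matches the suggestion above.

```python
# Batched upload sketch using BioBlend (pip install bioblend).
# The URL, API key, and glob pattern below are placeholders.
import glob

from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://your-galaxy.example.org", key="YOUR_API_KEY")
history = gi.histories.create_history(name="sequencing upload")

files = sorted(glob.glob("/data/reads/*.fastq.gz"))
batch_size = 5

for i in range(0, len(files), batch_size):
    batch = files[i:i + batch_size]
    uploads = [gi.tools.upload_file(path, history["id"]) for path in batch]
    # Wait for every dataset in this batch to finish processing before
    # starting the next batch, so files don't accumulate in memory.
    for upload in uploads:
        for dataset in upload["outputs"]:
            gi.datasets.wait_for_dataset(dataset["id"])
```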

Your comment also made us realize we do not expose an option to set the disk size for launched instances. By default this is 90 GB, which would likely be a problem if you have 73 GB of input data. We’ll fix that as well and post here.

BTW, did you launch the GVL or the CloudMan 2.0 appliance from CloudLaunch? I’d suggest using the GVL one in the future (the CloudMan one is largely considered a development version).


Thank you, Enis! I will try to upload fewer files each time. It happened on CloudMan 2.0, where I set the root volume to 300 GB, but the problem still occurred. Maybe I confused the root volume with the storage volume?


The root volume is not actually used for storing Galaxy data, so that unfortunately would not help. We’ll roll out an updated version in the next few days.
