Yes, well said @Evgeni_Bolotin and I’m glad you found a solution this time!
The problem of transferring really large amounts of data over sometimes slower/personal internet connections is pretty common, and like you stated, not new. Even the large cloud providers like AWS avoid hosting “personal on demand data” like this (without charging a premium everyone wants to avoid!) and use specialized distributed (and sometimes complicated) protocols for hosting the large, static public datasets.
What we have decided to do is focus on avoiding the need to transfer data around at all! Inputs and outputs in Galaxy could always technically have existed anywhere, and as of the last year or so that is actually the case. The “server side” data storage we offer is mostly for convenience. Large-throughput projects will instead benefit from a data management plan at the start!
I’m not sure if you have seen the “BYOS” (Bring Your Own Storage) options on the public servers yet, but this is the approach we are focusing on the most.
These two options under User → Preferences fit together (see the note below):

- **Manage Your Repositories.** Add, remove, or update your personally configured locations to read files from and write files to.
- **Manage Your Preferred Galaxy Storage.** Select a preferred Galaxy storage location for the outputs of new jobs.
*Note: Most of these options are available on the UseGalaxy servers, and some, like UseGalaxy.eu, offer even more choices, sometimes regional, under User → Preferences → Manage Information. There you’ll find not just more personalized data storage choices but also more personalized computational infrastructure choices (“BYOC”: Bring Your Own Compute). All of this will continue to expand in scope across all UseGalaxy servers!
The idea is to set up your storage profiles, then decide how to route your own data to those locations. This can be done at the account level, history level, workflow level (an entire workflow, some tools, or certain classes of data), and collection/dataset level.
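As a concrete sketch, history-level selection can also be scripted against the Galaxy API: recent Galaxy releases accept a `preferred_object_store_id` field when updating a history. Treat the exact field name as an assumption to verify against your server's API docs; the server URL, API key, history ID, and storage ID below are all placeholders.

```python
# Hedged sketch: pin new outputs of one history to a named Galaxy
# storage location (e.g. a BYOS profile you configured) via the API.
# The `preferred_object_store_id` field is assumed from recent Galaxy
# releases; verify against your server's /api documentation.
import json
from urllib import request

GALAXY_URL = "https://usegalaxy.example"  # placeholder server URL
API_KEY = "YOUR_API_KEY"                  # placeholder API key


def build_update(history_id: str, object_store_id: str) -> request.Request:
    """Build (but do not send) the PUT request that sets the
    preferred storage for one history's future job outputs."""
    payload = {"preferred_object_store_id": object_store_id}
    return request.Request(
        f"{GALAXY_URL}/api/histories/{history_id}",
        data=json.dumps(payload).encode(),
        method="PUT",
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    )


req = build_update("abc123", "my_s3_bucket")  # placeholder IDs
# The request is only constructed here, not sent; to apply it:
# request.urlopen(req)
```

The same pattern applies at other levels, since the account preference and per-dataset settings are exposed through the UI described above.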
Then, Galaxy becomes the tool kit, the methods log, and the data catalogue. Location-agnostic storage (and, optionally, compute) means it doesn’t matter where the data originally comes from or where the resulting outputs go. Priority projects can be launched from the same server where everyone else you may be working with is working, even if they use different resources. There may be no upload or download step at all.
Where data lives and where it is processed are the least interesting parts of an analysis, yet they are the big technical bottlenecks. We hope to streamline that so the actually interesting parts, how data is processed and what resulted, work really well for everyone. Upload will become more of an indexing step, and download a “what happened” reporting/sharing step.
We’ll still offer local storage, but over time this will become a supplementary resource rather than the primary one. Even 1 TB of space is often not enough, and no one wants to move a 1 TB file around!
Hope this helps and please let us know your thoughts! 