Hi @Jon_Colman
I would be very interested in reviewing that use case!
The dataset name shouldn’t matter, only the assigned datatype (format). And for implicitly converted reads (that “decompressed” step), the alternative version(s) are sort of nested into the original and should be available to any downstream tool that needs them.
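If you want to double-check on your end, here is a minimal sketch of listing which implicitly converted versions already exist for a dataset via the Galaxy API’s `/api/datasets/{id}/converted` endpoint. The server URL, API key, and dataset ID below are placeholders, so substitute your own values:

```python
# Minimal sketch: list implicitly converted versions of a dataset via the Galaxy API.
# GALAXY_URL, API_KEY, and DATASET_ID are placeholders -- replace with your own values.
import requests

GALAXY_URL = "https://usegalaxy.eu"    # example server
API_KEY = "YOUR_API_KEY"               # User -> Preferences -> Manage API Key
DATASET_ID = "YOUR_DATASET_ID"         # encoded dataset id from the history

# GET /api/datasets/{id}/converted returns a map of {converted extension: dataset id}
resp = requests.get(
    f"{GALAXY_URL}/api/datasets/{DATASET_ID}/converted",
    params={"key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
for ext, converted_id in resp.json().items():
    print(f"converted to {ext}: dataset {converted_id}")
```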
Would you like to share your example back? Maybe there is a corner-case bug where the downstream tool isn’t finding all of the different format versions of the data for some reason. We could fix it!
Update: I reread your post and think I can test this on my own, so I’ve started in on it here. But please still share your example, since I’m not sure which downstream tool you are using, and that might matter.
For the job not completing at EU, do you mean that it is still executing (yellow datasets)? If so, it is simply still processing. This is a large target database, so that might be expected, but I can check all of that too if you want to share the history containing the job.