Been using DESeq2 for a couple of months, and I have some exposure to machine learning and Bayesian inference. I recently dug into Michael Love's DESeq2 paper and it confirmed what I suspected: the "normalized counts" output that Galaxy/DESeq2 provides for your samples/replicates isn't quite the same as the values used for significance testing with the Wald test — the file I'm looking at has been transformed (a regularized log, i.e. rlog).
Questions: how kosher is it to take that normalized counts file (the regularized-log-transformed counts) and use it in, say, a heatmap? Or in a line graph to show gene expression changes over time? Or should I instead take the quantified reads from Salmon and use those? My worry with the raw Salmon counts is that I'd be re-introducing all the noise/variability that the Bayesian machinery removed via priors, MAP estimates, and shrinkage…
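For context on what I mean, here is roughly how I understand the two kinds of values in R (a sketch, assuming a fitted `DESeqDataSet` called `dds`; the heatmap call at the end is just an illustration, not from any particular workflow):

```r
library(DESeq2)

# Size-factor-normalized counts: raw counts divided by per-sample
# size factors. Still on the original count scale, and low-count
# genes still show high variance.
norm_counts <- counts(dds, normalized = TRUE)

# Regularized-log transform: log2-scale values with per-gene variance
# shrunken toward a fitted trend; intended for visualization and
# distance-based methods (heatmaps, PCA, clustering), not for testing.
rld <- rlog(dds, blind = FALSE)
rlog_mat <- assay(rld)

# e.g. a heatmap of the 30 most variable genes on the rlog scale
gene_vars <- apply(rlog_mat, 1, var)
top <- head(order(gene_vars, decreasing = TRUE), 30)
heatmap(rlog_mat[top, ], scale = "none")
```

So my question boils down to: is `rlog_mat` the right input for plots like the above, or should plots be built from something closer to the raw quantification?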