
Is there any theoretical work on how to measure posterior collapse? One can measure the decoder output, but it is not clear whether the degradation (if any) is due to posterior collapse or due to the model failing to match the data distribution. I am therefore interested in measuring "how informative the latent variable z is". Thank you.

UPDATE

By "posterior collapse" I meant an event when signal from input x to posterior parameters is either too weak or too noisy, and as a result, decoder starts ignoring z samples drawn from the posterior $q_\theta(z_d | x)$. If z is too noisy then decoder ignores it during x' generation. If z too weak, then we observe that $\mu$ and $\sigma$ become constant regardless of the input x.

  • You need to add context to your question, as you seem to be assuming that we know what you mean by "posterior collapse", "decoder", "latent variable", etc. here. This may be clear to some users, but it won't be clear to the vast majority. For example, in Bayesian statistics posteriors don't collapse, and it would even be hard to imagine what that could mean.
    – Tim
    Commented Mar 12, 2023 at 20:35
  • @Tim I updated the post; please let me know if I missed something or if the question is still unclear. Commented Mar 12, 2023 at 20:43
  • It'd be worth adding what you mean by "decoder" here, etc. I guess it's some kind of GAN?
    – Tim
    Commented Mar 12, 2023 at 22:58
  • @PavelPodlipensky you might be interested in this 2022 NeurIPS paper by Amazon.
    – Durden
    Commented Mar 18, 2023 at 14:20
  • @Tim the decoder is the reconstruction part of the U-Net that generates the sample x from the provided latent code z. Commented Mar 28, 2023 at 3:31
