
I have a question regarding the time evolution of a quantity related to a Bayesian posterior.

Suppose we have a binary parameter space $\{ s_1, s_2 \}$ with prior $(p, 1-p)$. The data-generating process is a Brownian motion whose drift is determined by the parameter:

$$ dX_t = \theta_i(t) dt + dB_t $$

for deterministic functions $\theta_i(t)$, $i = 1, 2$; that is, the drift is $\theta_1(t)$ if the true parameter is $s_1$ and $\theta_2(t)$ if it is $s_2$.

By Girsanov's theorem, the likelihood ratio of $s_1$ against $s_2$, conditional on the observed path up to time $t$ and viewed as a stochastic process in $t$, is $e^{\gamma_t}$,

where

$$ \gamma_t = \int_0^t \big( \theta_1(s) - \theta_2(s) \big)\, dB_s - \int_0^t \frac{ \theta_1^2(s) - \theta_2^2(s) }{2}\, ds. $$

So the posterior probability (of $s_1$) is then given by

$$ \frac{p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }. $$
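For a numerical sanity check, here is a minimal Monte Carlo sketch of this setup. It is illustrative only: it assumes constant drifts $\theta_1, \theta_2$, uses made-up parameter values, draws the true state from the prior, and evaluates $\gamma_t$ along the observed increments of $X$ (in place of $dB_s$), which is how the likelihood ratio is computed from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: constant drifts and a made-up prior.
theta1, theta2 = 1.0, -0.5
p = 0.3                      # prior probability of s1
T, n_steps, n_paths = 1.0, 1000, 20000
dt = T / n_steps

# Draw the true state from the prior, one state per simulated path.
is_s1 = rng.random(n_paths) < p
drift = np.where(is_s1, theta1, theta2)

# Simulate dX_t = theta_i dt + dB_t and accumulate the exponent
# gamma_t = int (theta1 - theta2) dX_s - (1/2) int (theta1^2 - theta2^2) ds,
# evaluated along the observed path of X.
gamma = np.zeros(n_paths)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    dX = drift * dt + dB
    gamma += (theta1 - theta2) * dX - 0.5 * (theta1**2 - theta2**2) * dt

lik = np.exp(gamma)
posterior = p * lik / (p * lik + (1 - p))

print("prior p            :", p)
print("E[posterior]  (MC) :", posterior.mean())   # stays near p: the posterior is a martingale
print("E[p e^g / (...)^2] :", (p * lik / (p * lik + (1 - p)) ** 2).mean())
```

The empirical mean of the posterior stays near $p$, consistent with the martingale argument below.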

Question: Can one describe explicitly how $E\!\left[\frac{p e^{\gamma_t}}{ (p e^{\gamma_t} + (1 - p))^2 }\right]$ evolves in $t$?

A direct calculation by Itô's lemma shows that

$$ d\,\frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) } = \frac{1 - p}{(p e^{\gamma_t} + (1 - p) )^2}\, d( p e^{\gamma_t} ) - \frac{1 - p}{(p e^{\gamma_t} + (1 - p) )^{3}}\, \big( d( p e^{\gamma_t} ) \big)^2 . $$

So

\begin{align*} \frac{ d\, E\!\left[ \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right] }{dt} &= (1 - p)\, E\!\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\right] (\theta_1^2(t) - \theta_2^2(t)) \\ &\quad - (1 - p)\, E\!\left[\frac{( p e^{\gamma_t})^2 }{(p e^{\gamma_t} + (1 - p) )^3}\right] (\theta_1^2(t) - \theta_2^2(t)) , \end{align*}

i.e.

\begin{align*} \frac{ d\, E\!\left[ \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right]}{dt} &= (1 - p)\, E\!\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\left(1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right)\right] \cdot (\theta_1^2(t) - \theta_2^2(t)) \\ &= (1 - p) \Big[\; E\!\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\right] E\!\left[1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right] \\ &\qquad + \operatorname{Cov}\!\left(\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2},\; 1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right) \Big] \cdot (\theta_1^2(t) - \theta_2^2(t)) \\ &= 0. \end{align*}

The last equality follows from the fact that successive Bayesian posteriors form a martingale, so the expected posterior equals the prior: let $p(s)$ be the prior, $q(s)$ the posterior, and $p(x|s)$ the conditional densities; then

\begin{align*} E[q(s)] &= E[ \frac{ p(x|s)p(s) }{ \int p(x|s')p(s') ds' }] \\ &= \int \frac{ p(x|s)p(s) }{ \int p(x|s')p(s') ds' } \int p(x|s')p(s') ds' dx\\ &= \int p(x|s) p(s) dx \\ &= p(s). \end{align*}
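Equivalently, in the present setting the posterior is the conditional probability of $s_1$ given the observation filtration $\mathcal{F}_t^X$ generated by $X$, so the tower property gives, for all $t$,

$$ E\!\left[ \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) } \right] = E\Big[\, P\big(s_1 \,\big|\, \mathcal{F}_t^X\big) \Big] = p, $$

and hence the time derivative above vanishes.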

So the whole thing boils down to

\begin{align*} 0 &= E\!\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\left(1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right)\right] \\ &= E\!\left[\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2}\right] E\!\left[1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right] \\ &\quad + \operatorname{Cov}\!\left(\frac{p e^{\gamma_t} }{(p e^{\gamma_t} + (1 - p) )^2},\; 1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }\right). \end{align*}

The question is then whether one can solve for $E\!\left[ \frac{ p e^{\gamma_t} }{ ( p e^{\gamma_t} + (1 - p) )^2}\right]$ from the above, for instance by computing the time evolution of the covariance term. Or is there another way to get at this?
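As a numerical reference (under the same illustrative assumptions as the sketch above), one can track a Monte Carlo estimate of $E\!\left[\frac{p e^{\gamma_t}}{(p e^{\gamma_t}+(1-p))^2}\right]$ on a time grid and compare it against any candidate closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
theta1, theta2, p = 1.0, -0.5, 0.3   # same made-up parameters as above
T, n_steps, n_paths = 1.0, 1000, 20000
dt = T / n_steps

is_s1 = rng.random(n_paths) < p
drift = np.where(is_s1, theta1, theta2)
gamma = np.zeros(n_paths)

# Record E[ p e^{gamma_t} / (p e^{gamma_t} + (1-p))^2 ] at a few times t.
checkpoints = {round(frac * n_steps): frac * T for frac in (0.25, 0.5, 0.75, 1.0)}
for step in range(1, n_steps + 1):
    dX = drift * dt + rng.normal(0.0, np.sqrt(dt), n_paths)
    gamma += (theta1 - theta2) * dX - 0.5 * (theta1**2 - theta2**2) * dt
    if step in checkpoints:
        lik = np.exp(gamma)
        target = p * lik / (p * lik + (1 - p)) ** 2
        print(f"t = {checkpoints[step]:.2f}:  E[target] ~ {target.mean():.4f}")
```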

Follow-up question: Does $\frac{ p e^{\gamma_t} }{ ( p e^{\gamma_t} + (1 - p) )^2}$ have a Bayesian interpretation? Why should it be orthogonal to $1- \frac{ p e^{\gamma_t} }{ p e^{\gamma_t} + (1 - p) }$, the posterior probability of the other state $s_2$, at all times $t$?

  • How are the parameters $s_1$ and $s_2$ involved in the data generating process? Commented Jun 21, 2023 at 11:44
