
Let $d\in\mathbb N$ and $(X_t)_{t\ge0}$ be an $\mathbb R^d$-valued process. Assume $$\operatorname P\left[X_t\in\;\cdot\;\mid X_0\right]=\mathcal N(X_0,\Sigma_t)\tag1$$ for some covariance matrix $\Sigma_t$. Moreover, assume $X_t$ has density $p_t$ with respect to the $d$-dimensional Lebesgue measure. Let $\varphi_{\Sigma_t}$ denote the density of the $d$-dimensional normal distribution with mean $0$ and covariance matrix $\Sigma_t$. By assumption, $$p_t=p_0\ast\varphi_{\Sigma_t}\tag2$$ (i.e. $p_t$ is the convolution of $p_0$ and $\varphi_{\Sigma_t}$). Now, what I need to do is numerically compute/estimate $$\operatorname E\left[\left\|\nabla\ln p_t(X_t)\right\|^2\right].\tag3$$

Please assume that we do not know the distribution of $X_0$; we only have i.i.d. samples drawn from it. Given such a sample, I am able to produce a sample of $X_t$ distributed according to $(1)$.
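For concreteness, producing an $X_t$ sample from an $X_0$ sample under $(1)$ just means adding correlated Gaussian noise. A minimal sketch (the function name is mine, and it assumes an estimate of $\Sigma_t$ is already in hand, cf. the remark below):

```python
import numpy as np

def sample_xt(x0, Sigma_t, rng):
    """Draw one X_t ~ N(x0, Sigma_t), as in (1), via a Cholesky factor."""
    L = np.linalg.cholesky(Sigma_t)  # Sigma_t = L @ L.T
    return x0 + L @ rng.standard_normal(x0.shape[0])
```

(For many draws one would factor $\Sigma_t$ once instead of per call; this is only meant to fix the mechanism.)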

I'm not sure how exactly to proceed. It may be worth noting that $$\nabla\ln p_t=\frac{p_0\ast\nabla\varphi_{\Sigma_t}}{p_0\ast\varphi_{\Sigma_t}}\tag4,$$ but since this doesn't seem to simplify further, I'm stuck there.
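One natural reading of $(4)$ is a plug-in Monte Carlo estimator: replace both convolutions by empirical averages over the $X_0$ samples. Since $\nabla\varphi_{\Sigma_t}(u)=-\Sigma_t^{-1}u\,\varphi_{\Sigma_t}(u)$, this gives $\nabla\ln p_t(x)\approx\Sigma_t^{-1}\sum_i w_i\,(X_0^{(i)}-x)$ with softmax weights $w_i\propto\varphi_{\Sigma_t}(x-X_0^{(i)})$; then $(3)$ is an outer average of the squared norm over independent $X_t$ samples. A hedged numpy sketch (function names are mine; $\Sigma_t^{-1}$ is assumed to have been estimated already):

```python
import numpy as np

def score_estimate(x, x0_samples, Sigma_inv):
    """Monte Carlo estimate of grad ln p_t(x) via (4):
    Sigma_t^{-1} * sum_i w_i (X_0^(i) - x), w_i propto phi_{Sigma_t}(x - X_0^(i))."""
    diffs = x0_samples - x                                   # (n, d)
    # log phi_{Sigma_t}(x - X_0^(i)) up to a constant (cancels in the softmax)
    logw = -0.5 * np.einsum('ij,jk,ik->i', diffs, Sigma_inv, diffs)
    logw -= logw.max()                                       # numerical stability
    w = np.exp(logw)
    w /= w.sum()
    return Sigma_inv @ (w @ diffs)

def expected_score_norm_sq(xt_samples, x0_samples, Sigma_inv):
    """Outer Monte Carlo average of ||grad ln p_t(X_t)||^2, i.e. (3)."""
    return np.mean([np.sum(score_estimate(x, x0_samples, Sigma_inv) ** 2)
                    for x in xt_samples])
```

Sanity check: if $X_0\sim\mathcal N(0,I)$ and $\Sigma_t=sI$, then $p_t=\mathcal N(0,(1+s)I)$, so $\nabla\ln p_t(x)=-x/(1+s)$ and $(3)$ equals $d/(1+s)$, which the estimator should reproduce for large sample sizes.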

Remark: For the numerical computation, note that I do not actually know $\Sigma_t$; I only know that it exists. So I am estimating $\Sigma_t$ numerically as well.
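One way to estimate $\Sigma_t$ from paired samples $(X_0^{(i)}, X_t^{(i)})$: by $(1)$, $X_t - X_0\mid X_0\sim\mathcal N(0,\Sigma_t)$ with $\Sigma_t$ not depending on $X_0$, so $X_t - X_0\sim\mathcal N(0,\Sigma_t)$ unconditionally and the sample covariance of the differences is a consistent estimator. A sketch (the function name is mine):

```python
import numpy as np

def estimate_sigma_t(x0_samples, xt_samples):
    """Estimate Sigma_t from paired samples: X_t - X_0 ~ N(0, Sigma_t)
    unconditionally under (1), so use the sample covariance of the differences."""
    diffs = xt_samples - x0_samples  # (n, d)
    return np.cov(diffs, rowvar=False)
```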

  • So to clarify, what you want is to numerically approximate the Fisher information of a multivariate stochastic process? ($\ln p_t$ being a log-likelihood, $\nabla \ln p_t$ being a score.) Are you looking for a specific algorithm? Or when you say "numerically" do you mean pen-and-paper calculate something? Possibly this paper is relevant to your interests (if not this specific problem): Fisher Information and Maximum-Likelihood Estimation of Covariance Parameters in Gaussian Stochastic Processes, Markus Abt and William J. Welch, The Canadian Journal of Statistics Vol. 26, No. 1, pp. 127-137 — Commented Apr 13 at 13:59
  • Reading more carefully, you're not assuming that $X_t$ is Gaussian, just that it is conditionally on $X_0$. I would guess that you can estimate the Fisher information of $X_0$ and the Fisher information of $X_t \mid X_0$ separately and then apply the "chain rule" for Fisher information: awni.github.io/intro-fisher-information $I_{x,y}(\theta) = I_{x|y}(\theta) + I_y(\theta)$. The conditional Fisher information should be relatively straightforward once you've estimated $\Sigma_t$, because the conditional distribution is Gaussian, but I'm not sure about estimating it for $X_0$. — Commented Apr 13 at 14:08
  • Possibly also helpful: equation (11) of iopscience.iop.org/article/10.1088/1367-2630/acd321 "Fisher information of correlated stochastic processes", Radaelli et al. This stuff isn't fresh for me, but I think the Fisher information of a multivariate Gaussian is just the inverse of the covariance matrix? So if you can estimate either $\Sigma_t$ or its inverse (probably directly estimating the latter would be better for numerical/statistical stability reasons), then you should be fine. perswww.kuleuven.be/~u0015224/publications/… — Commented Apr 13 at 14:13
  • The hard part actually might be non-parametrically estimating the Fisher information of $X_0$ -- I think I genuinely don't know anything about that and wasn't able to find anything quickly. — Commented Apr 13 at 14:14
