
Consider two observables, $x$ and $y$. Suppose that $y$ depends on the independent variable $x$ through the model $m(x; \boldsymbol{\theta})$, where $\boldsymbol{\theta}$ is a vector of model parameters. I want to estimate the Cramér–Rao bound of one of these parameters, for forecasting purposes. To this end, I have to calculate the Fisher information matrix and invert it. I should be able to compute it before doing any experiments.

Assume that:

  1. Measurements of $\{x;y\}$ are uncorrelated with each other.
  2. Measurements of the independent variable $x$ have no uncertainty.
  3. Measurements of $y$ are Gaussian distributed, with fixed variance $\sigma^2$ (it's the uncertainty of each measurement of $y$).
  4. The likelihood is Gaussian, i.e.: $L(\boldsymbol{\theta};\{x,y\}) = \dfrac{1}{\sqrt{|2 \pi C|}} e^{-\dfrac{1}{2} \left[y - m(x; \boldsymbol{\theta})\right]^{T} C^{-1} \left[y - m(x; \boldsymbol{\theta})\right]}$, where $C$ is the covariance matrix of the data.
  5. $C$ does not depend on $\boldsymbol{\theta}$.

With these assumptions, the covariance matrix of the data is diagonal ($C = \sigma^2 \mathbb{1}$, so $C^{-1} = \mathbb{1}/\sigma^2$) and thus the elements of the Fisher information matrix are:

\begin{equation} F_{\alpha \beta} = \dfrac{1}{\sigma^2}\sum_{i}^{N} \dfrac{\partial m(x_i; \boldsymbol{\theta})}{\partial\theta_\alpha} \dfrac{\partial m(x_i; \boldsymbol{\theta})}{\partial\theta_\beta} \;, \end{equation}

where $N$ is the number of measurements.
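Equivalently, the sum above is $F = J^{T} J / \sigma^2$, where $J_{i\alpha} = \partial m(x_i; \boldsymbol{\theta})/\partial\theta_\alpha$ is the Jacobian of the model. A minimal numerical sketch (toy linear model and values invented for illustration, not taken from the question):

```python
import numpy as np

def fisher_matrix(dm_dtheta, x, sigma):
    """F_ab = (1/sigma^2) * sum_i dm/dtheta_a(x_i) * dm/dtheta_b(x_i).

    dm_dtheta: callable returning the array of partial derivatives of the
    model with respect to the parameters, evaluated at a given x.
    """
    J = np.array([dm_dtheta(xi) for xi in x])  # Jacobian, shape (N, n_params)
    return J.T @ J / sigma**2

# Toy linear model m(x; a, b) = a*x + b, so dm/da = x and dm/db = 1.
x = np.linspace(0.0, 1.0, 5)
F = fisher_matrix(lambda xi: np.array([xi, 1.0]), x, sigma=0.1)
print(F)  # symmetric 2x2 matrix
```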

Questions:

  1. If the above is correct, knowing $F$ implies knowing the values $x_i$ before the experiment; how can this be possible?
  2. Can $F$ be computed and inverted analytically, e.g. in the case of $N=1000$ measurements?
  3. Even in the case of $N = 1$, $F$ always has a null determinant. In fact, for two parameters $F$ is always of the type \begin{pmatrix} a^2 & ab\\ ba & b^2 \end{pmatrix}. How should I deal with this?
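(Regarding question 3: for $N=1$, $F$ is $1/\sigma^2$ times the outer product of the gradient vector with itself, which has rank one, so its determinant vanishes whenever there are two or more parameters. A quick numerical check, with hypothetical gradient values:)

```python
import numpy as np

# For N = 1, F = g g^T / sigma^2, where g_a = dm/dtheta_a at the single x_1.
g = np.array([0.7, -1.3])        # hypothetical gradient components (a, b)
sigma = 0.5
F = np.outer(g, g) / sigma**2    # matrix of the form [[a^2, ab], [ba, b^2]]

print(np.linalg.matrix_rank(F))  # 1: an outer product has rank one
print(np.linalg.det(F))          # ~0 up to floating-point rounding
```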

Practical example:

My observables $\{x;y\}$ are respectively time and position $\{t;s\}$. The model is $m(t; \delta, \phi, \omega, G) = A e^{-\delta t} \cos(\phi - \sqrt{\omega^2 - \delta^2}t) + \dfrac{G}{\omega}$.

  • In the case of $N=1$, $F$ has a null determinant.
  • In the case of $N>1$, each element of $F$ is a sum over the values of $t$. The matrix cannot be computed without knowing the array of measured times $t$. Therefore, if you know the values of $t$, the computation of the matrix can only be done numerically; do you agree?
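For this model the matrix can indeed be assembled numerically once a time grid is assumed. A sketch using central finite differences for the parameter derivatives (the time grid, parameter values, and treating $A$ as known are all my own assumptions for illustration):

```python
import numpy as np

# Model from the question: m(t) = A exp(-d t) cos(phi - sqrt(w^2 - d^2) t) + G/w
# with theta = (d, phi, w, G); A is treated as known here.
def m(t, theta, A=1.0):
    d, phi, w, G = theta
    return A * np.exp(-d * t) * np.cos(phi - np.sqrt(w**2 - d**2) * t) + G / w

def fisher_numeric(t, theta, sigma, eps=1e-6):
    """Fisher matrix F = J^T J / sigma^2 via central finite differences."""
    p = len(theta)
    J = np.empty((len(t), p))
    for a in range(p):
        th_p = np.array(theta, dtype=float)
        th_m = np.array(theta, dtype=float)
        th_p[a] += eps
        th_m[a] -= eps
        J[:, a] = (m(t, th_p) - m(t, th_m)) / (2 * eps)
    return J.T @ J / sigma**2

t = np.linspace(0.0, 10.0, 1000)  # assumed sampling times
F = fisher_numeric(t, theta=(0.1, 0.0, 2.0, 0.5), sigma=0.05)
print(F.shape)  # (4, 4): the size is set by the number of parameters, not N
```

Note that $F$ is $4 \times 4$ regardless of $N$; the measurements only enter through the sums.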

At this point I can see only one possible way out: since I need to calculate the Cramér–Rao bound of $G$, I could first marginalize over all the other parameters; deleting all the columns and rows of $F$ related to the other parameters should do the trick (or should this procedure be applied to the inverse of $F$? I am not sure about this). As a result, the reduced Fisher matrix is $1 \times 1$, which is trivially invertible. Does this make sense?
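For reference, the usual convention in the Fisher-forecasting literature distinguishes the two operations: the marginalized bound on $G$ is $[F^{-1}]_{GG}$ (invert the full matrix, then take the diagonal element), while deleting the other rows/columns of $F$ before inverting gives the conditional bound, with the other parameters held fixed. A toy illustration with an invented $2 \times 2$ Fisher matrix:

```python
import numpy as np

# Hypothetical Fisher matrix for parameters (theta_other, G):
F = np.array([[4.0, 1.0],
              [1.0, 2.0]])

# Marginalized Cramer-Rao bound on G: invert the FULL matrix, then take
# the (G, G) element -- uncertainty in the other parameter propagates in.
var_marginal = np.linalg.inv(F)[1, 1]

# Conditional bound: delete the other rows/columns of F *before* inverting,
# i.e. pretend the other parameters are known exactly.
var_conditional = 1.0 / F[1, 1]

print(var_marginal, var_conditional)  # the marginal bound is never smaller
```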

Any suggestion or example would be highly appreciated, if I'm missing something. Thank you in advance for your help.

  • Would this be a better question for Cross Validated? (Mar 2, 2023 at 16:45)
  • @MichaelSeifert I've already asked the question there. Can I leave it here, or should I remove it? To me it seems relevant to both topics. Thanks. – Wil (Mar 2, 2023 at 16:49)
  • It seems to me that if both $\mathbf{y}$ and $\mathbf{\theta}$ are assumed normal, then perforce the function $m(x, \mathbf{\theta})$ must be a linear function of $\mathbf{\theta}$, something like $m(x, \mathbf{\theta}) = \sum_j p_j(x) \theta_j$ for some functions $p_j(x)$ of $x$ only. – hyportnex (Mar 2, 2023 at 17:29)
  • Also, $x$ as a known input parameter just means that in an experiment, say, to measure the "non-ideality" $n$ of a diode characteristic (en.wikipedia.org/wiki/Diode_modelling), you set the diode voltage $x=V_{D}$, measure the current $y=I$, and try to estimate $\theta = n$. The measurement of $I$, especially at low values, can be noisy, but the diode voltage is set by the power supply and is known accurately. – hyportnex (Mar 2, 2023 at 17:40)
  • Crossposted from stats.stackexchange.com/q/607058/307210 – Qmechanic (Mar 2, 2023 at 18:07)

1 Answer

  1. If so far it is correct, knowing $F$ implies knowing the measurements of $x_i$ a priori of the experiment; how can this be possible?

If I understood the description correctly, the $x_i$ are not measurements, but rather parameter values. E.g., we measure the current–voltage characteristic of a resistor using a voltage source: we set the voltage and measure the current; voltage is $x_i$, current is $y_i$. We know $x_i$ because we set it. We could, however, be in a different situation, where the nominal voltage of the source does not correspond to the actual voltage once the circuit is assembled; in that case we would measure both voltage and current. We would analyze the results in pretty much the same way (e.g., using linear regression), but then neither $x_i$ nor $y_i$ is known a priori.

  2. Can $F$ be computed and inverted analytically, e.g. in the case of $N=1000$ measurements?
  3. Even in the case of $N = 1$, $F$ always has a null determinant. How should I deal with this?

A matrix with a zero determinant cannot be inverted at all, analytically or otherwise: some of its rows/columns are linearly dependent. I don't, however, see where this claim comes from (perhaps add a bit more math; $N=1$ is in any case very problematic when talking about variance; in general, don't do any statistics with $N<3$). See also Determinant of Fisher information.

If the determinant is not zero, the matrix can in principle be inverted analytically (e.g. via Cramer's rule), but the closed-form expressions quickly become cumbersome as the number of parameters grows. Note also that the size of $F$ is set by the number of parameters, not by $N$, which only enters through the sums. In practice I would go with numerical inversion, using Python, R, or whatever is handier.
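For instance, with `numpy` (matrix values invented for illustration):

```python
import numpy as np

# Hypothetical well-conditioned 2x2 Fisher matrix:
F = np.array([[187.5, 250.0],
              [250.0, 500.0]])

cov = np.linalg.inv(F)          # parameter covariance matrix
sigmas = np.sqrt(np.diag(cov))  # 1-sigma Cramer-Rao bounds on the diagonal

# If F is singular or ill-conditioned, the Moore-Penrose pseudo-inverse
# still returns a well-defined (minimum-norm) answer:
cov_safe = np.linalg.pinv(F)
print(sigmas)
```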

  • Thank you for the answer. Well, actually, values of the voltage $x$ are measured because of the finite sensitivity of the instrument. But if we want to neglect the uncertainty on $x$ (as here), I agree with you, in the sense that $x$ is an independent variable rather than a parameter. In my case the independent variable is time, so it's not obvious, but yes, you are right: if you know everything about the setup of the experiment, you can set the values of $x$ before the experiment. – Wil (Mar 3, 2023 at 12:03)
  • As concerns the Fisher matrix, I've updated the question so that it's clearer why $F$ is singular when $N=1$. I'm not sure this problem goes away when $N>1$. In fact, this reference says that the covariance matrix need not be diagonal (Section 10.1). – Wil (Mar 3, 2023 at 12:14)
