
Suppose that you have $n$ uncorrelated measurements $A_{1}= a_{1} \pm \sigma_{1}$, $A_{2}= a_{2} \pm \sigma_{2}$, ..., $A_{n} = a_{n} \pm \sigma_{n}$, where $a_{m}$ is the mean value of $A_{m}$ and $\sigma_{m}$ is its corresponding uncertainty. Then the inverse of $A_{m}$ (let's call it $B_{m}$) is:

\begin{equation} B_{m}= \frac{1}{a_{m}} \pm \frac{\sigma_{m}}{a_{m}^{2}}. \quad (1) \end{equation}

My question

Suppose that each of the measurements is Gaussian distributed, that is, $A_{m}\sim \mathcal{N} (a_{m}, \sigma_{m})$, and suppose that the measurements are correlated through a covariance matrix $\Sigma$.

If I want to estimate the inverse of the measurement $A_{m}$, can I still use equation (1)? Or do I have to take into account the covariance matrix?


3 Answers


Let's say you start with $n$ measurements $A_m$ ($m=1,2,\cdots,n$), which are noisy measurements of a set of physical quantities with true values $a_m$. The uncertainties are correlated with an $n\times n$ covariance matrix $\Sigma$. Just to clarify a point in your question, we should express the observed values as $A_m \sim \mathcal{N}(a_m,\Sigma)$ (that is, we draw the vector $A_m$ from a multi-dimensional Gaussian distribution, not from $n$ independent univariate Gaussian distributions).

Now consider a vector $B_m=F_m(A_m)$, where $F_m$ is a non-linear function that takes an $n$-dimensional vector as input and returns an $n$-dimensional vector. Then, using the standard propagation-of-errors formulas (namely, assuming the errors are small and Gaussian), we can compute the covariance matrix for $B_m$ (call it $\tilde{\Sigma}$) in terms of the covariance matrix for $A_m$ (which we called $\Sigma$) as \begin{equation} \tilde{\Sigma} = J \Sigma J^T \end{equation} where $J$ is the Jacobian matrix, $J_{mk} = \partial F_m / \partial A_k$ evaluated at the central values. Essentially, this formula follows from linearizing the transformation $F_m$ \begin{equation} B_m = F_m(a_m) + \sum_{k=1}^n J_{mk} (A_k - a_k) + \cdots \end{equation} and then using $J$ to perform a linear transformation (change of basis) from $\Sigma$ to $\tilde{\Sigma}$.
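To make the formula concrete, here is a minimal NumPy sketch of $\tilde{\Sigma} = J \Sigma J^T$ for a generic non-linear map; the function, its Jacobian, and all the numbers are made up purely for illustration.

```python
import numpy as np

# Central values a_m and an illustrative covariance matrix Sigma for the A_m.
a = np.array([3.0, 4.0])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

def F(x):
    """An arbitrary non-linear map R^2 -> R^2 (for illustration only)."""
    return np.array([x[0] * x[1], x[0] / x[1]])

def jacobian(x):
    """Its Jacobian, J_mk = dF_m / dx_k, evaluated analytically."""
    return np.array([[x[1], x[0]],
                     [1.0 / x[1], -x[0] / x[1] ** 2]])

# Linearized propagation of the covariance matrix.
J = jacobian(a)
Sigma_tilde = J @ Sigma @ J.T
```

The diagonal of `Sigma_tilde` then gives the (squared) uncertainties on the components of $F$, and the off-diagonal entries give their covariances.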

Given the covariance matrix $\tilde{\Sigma}$, you can express uncertainties on $F_m(A_m)$ in whatever way you prefer. You can report the diagonal elements of the covariance matrix, or you could show a heat map or table of the covariance matrix, or whatever makes the most sense for your application.

For your problem, $F_m=1/A_m$, so $J$ is a diagonal matrix with $-1/a_m^2$ on the diagonal. The diagonal elements of $\tilde{\Sigma}$ are therefore equal to the diagonal elements of $\Sigma$ multiplied by $1/a_m^4$. The square root of the $m$-th diagonal element is $\sqrt{\Sigma_{mm}}/a_m^2$, which agrees with the uncorrelated case, as you would expect. Of course, this expression doesn't account for the off-diagonal elements of the covariance matrix -- but the general issue of how to present a covariance matrix comes up whenever there are correlated measurements and is not specific to your question. Sometimes people go for ease of presentation and just report the diagonal values; others are more rigorous and report every value or plot a heat map.
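For the inverse map specifically, the diagonal Jacobian makes this a one-liner, and a Monte Carlo draw can sanity-check the linearization. Again, the numbers below are invented for illustration.

```python
import numpy as np

# Illustrative central values and covariance matrix for the A_m.
a = np.array([2.0, 5.0, 10.0])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

# For B_m = 1/A_m the Jacobian is diagonal with entries -1/a_m^2.
J = np.diag(-1.0 / a ** 2)
Sigma_tilde = J @ Sigma @ J.T

# The propagated uncertainties match the uncorrelated-case formula (1):
sigma_B = np.sqrt(np.diag(Sigma_tilde))
assert np.allclose(sigma_B, np.sqrt(np.diag(Sigma)) / a ** 2)

# Monte Carlo sanity check: sample correlated A's, invert, compare covariances.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(a, Sigma, size=200_000)
mc_cov = np.cov(1.0 / samples, rowvar=False)
# mc_cov agrees with Sigma_tilde up to linearization and sampling error.
```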

  • Where can I read more about the equation $\widetilde{\Sigma} = J \Sigma J^{T}$? I would like to learn how to take the off-diagonal elements into account. – Cruz, Jul 18, 2021 at 3:26
  • Wikipedia is a good start, but if you read that and there is a specific thing it doesn't address, let me know. – Andrew, Jul 18, 2021 at 3:27

Since each $B_m$ is a function of $A_m$ only, and not of any $A_i$ for $i \ne m$, you can simply use formula (1) for each individual uncertainty without worrying about any covariance (though note that the $B_m$ themselves remain correlated with one another).

It is only a function of multiple variables whose uncertainty requires accounting for the covariance of those variables.


Note that:

$$ \frac{\Delta B_m}{B_m}=\frac{\sigma_m/a_m^2}{1/a_m}=\frac{\sigma_m}{a_m}=\frac{\Delta A_m}{A_m}$$

so the relative error on each $B_m$ is the same as the relative error on the corresponding $A_m$. Under the same linearization, the correlations between different $m$ also carry over unchanged, since the scale factors $1/a_m^2$ cancel when the covariances are normalized.
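Both facts are quick to check numerically; the values below are made up purely for illustration.

```python
import numpy as np

# Illustrative central values and covariance matrix for the A_m.
a = np.array([2.0, 5.0, 10.0])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
sigma = np.sqrt(np.diag(Sigma))

# B = 1/A with the uncertainty from equation (1).
b = 1.0 / a
sigma_b = sigma / a ** 2

# Relative errors are preserved: sigma_b / b == sigma / a.
assert np.allclose(sigma_b / b, sigma / a)

# Correlations are preserved too: Sigma_tilde_ij = Sigma_ij / (a_i^2 a_j^2),
# so the scale factors cancel in the correlation coefficients.
J = np.diag(-1.0 / a ** 2)
Sigma_tilde = J @ Sigma @ J.T

def corr(S):
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)

assert np.allclose(corr(Sigma_tilde), corr(Sigma))
```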

