
I came across two formulas for the Wald test statistic in a maximum likelihood framework:

One has $(R\hat{\theta}-r)'(RI_n^{-1}R')^{-1}(R\hat{\theta}-r)$, where $I_n^{-1}$ is the inverse of the information matrix.

The other one has $(R\hat{\theta}-r)'(R\frac{\hat{V}}{n}R')^{-1}(R\hat{\theta}-r)$, where $\hat{V}$ is a consistent estimator of the variance-covariance matrix.

I am confused, since the inverse of the information matrix should equal the variance-covariance matrix. Where does the division by $n$ come from?


1 Answer


Without a source giving full definitions of all quantities in your formulae, that is hard to answer. But let me answer in the simplest context in which a Wald statistic arises, namely a t-statistic: a Wald statistic for testing a single restriction is just a squared t-statistic.

So, take the simple case of testing $\mu=0$ for a random sample $x_i\sim(\mu,\sigma^2)$. The t-statistic can be written as
$$ t=\frac{\bar x}{s.e.(\bar x)}, $$
where the standard error of the sample mean is defined as
$$ s.e.(\bar x)=\sqrt{\frac{s^2}{n}} $$
with $s^2=\frac{1}{n-1}\sum_i(x_i-\bar x)^2$. Hence, you could also write
$$ t=\frac{\sqrt{n}\,\bar x}{s}. $$
That is, in one notation the denominator (which corresponds to the inverted matrix in the Wald statistic) is the standard error, while in the other it is the estimator of (the square root of) the error variance, with the normalization $\sqrt{n}$ then appearing in the numerator.
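For concreteness, here is a minimal numerical sketch in Python showing that the two ways of writing the statistic coincide, and that the squared t-statistic equals the Wald form with $\hat V/n$. The sample, the seed, and the variable names are purely illustrative, not part of the original formulas:

```python
import numpy as np

# Illustrative sample (any i.i.d. draw works); testing H0: mu = 0
rng = np.random.default_rng(0)
n = 50
x = rng.normal(loc=0.3, scale=1.0, size=n)

xbar = x.mean()
s2 = x.var(ddof=1)                     # s^2 with the 1/(n-1) normalization

t1 = xbar / np.sqrt(s2 / n)            # t = xbar / s.e.(xbar)
t2 = np.sqrt(n) * xbar / np.sqrt(s2)   # t = sqrt(n) * xbar / s

# Wald form with R = 1, r = 0: (theta_hat - r)' (V_hat/n)^{-1} (theta_hat - r),
# where V_hat = s^2 estimates the asymptotic variance of sqrt(n)*(xbar - mu)
wald = xbar * (s2 / n) ** -1 * xbar

print(t1, t2)        # the two notations give the same t-statistic
print(t1**2, wald)   # the Wald statistic is the squared t-statistic
```

The only difference between the two notations is whether the factor $n$ is absorbed into the denominator (via the standard error $\sqrt{s^2/n}$) or written explicitly as $\sqrt n$ in the numerator.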

