
In Aronow & Miller, "Foundations of Agnostic Statistics", the authors write on p105:

[A]lthough unbiased estimators are not necessarily consistent, any unbiased estimator $\widehat{\theta}$ with $\lim_{n\to \infty} V[\widehat{\theta}]=0$ is consistent.

[footnote:] The converse, however, does not necessarily hold: an unbiased and consistent estimator may nonetheless have positive (or even infinite) sampling variance (and thus MSE) even as $n\to\infty$.

I can't understand how the claim in the footnote could be true. Can anyone provide an example or an explanation?

By their definition, consistency of an estimator implies that $\forall \epsilon > 0$, as $n\to \infty$, $\Pr[ | \widehat{\theta} - \theta| > \epsilon] \to 0$. It seems to me that it must follow that $V[\widehat{\theta}] \to 0$. But clearly this is not the case.


1 Answer


An example: consider any unbiased and consistent estimator $\hat{\theta}$ with variance $\to 0$ as $n \to \infty$. Now add to it a zero-mean $t(2)$ variate (label it $t$) with scale parameter $ \tau = 1/n$. The $t(2)$ distribution has finite mean but infinite variance regardless of the value of $\tau$; however, as $\tau \to 0$, $\mathrm{Pr}(|t| > \epsilon) \to 0$ for all $\epsilon > 0$.

The estimator $\theta^* = \hat{\theta} + t$ (with $t$ drawn independently of $\hat{\theta}$) is clearly unbiased, as $\mathrm{E}\,t = 0$ and $\mathrm{E}\,\hat{\theta} = \theta$. It is consistent because both $\hat{\theta} - \theta$ and $t$ converge in probability to zero, so their sum does as well. However, its variance is infinite for every $n$: adding an independent infinite-variance term to a finite-variance one yields infinite variance.
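To see the scale-shrinking claim numerically, here is a quick Monte Carlo sketch (my own illustration, not from the book; the sampler uses the standard representation $t_2 = Z/\sqrt{\chi^2_2/2}$, where $\chi^2_2/2$ is exponential with mean 1):

```python
import math
import random

random.seed(0)

def t2_draw():
    # Student-t with 2 df via Z / sqrt(chi2_2 / 2);
    # chi2_2 / 2 is Exponential with mean 1
    return random.gauss(0.0, 1.0) / math.sqrt(random.expovariate(1.0))

def tail_prob(n, eps=0.1, reps=50_000):
    # Monte Carlo estimate of Pr(|t| > eps) when t has scale tau = 1/n
    return sum(abs(t2_draw()) / n > eps for _ in range(reps)) / reps

for n in (1, 10, 100):
    print(n, tail_prob(n))
```

The tail probability drops toward zero as $n$ grows, even though every one of these scaled $t(2)$ variates has infinite variance.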

  • Thanks for the explanation. The one point that I'm stuck on is: how can $\Pr(|t|>\epsilon) \to 0$ but the variance of $t$ remain infinite? (I understand that this is a textbook characterization of the $t$ distribution, but I'm just struggling to make sense of how both of those properties can hold.)
    – user24465
    Commented Jun 18 at 2:32
  • 1
    $\begingroup$ The reason is that $Pr(|t| > \epsilon = \int 1(|t| > \epsilon) f(t)dt$, where $1(\cdot)$ is the indicator function with value equal to $1$ if "true" and $0$ otherwise, but the variance equals $\int t^2 f(t)dt$ (assuming the mean equals zero.) The first one is bounded above by $1$, but, thanks to that $t^2$, the second one isn't necessarily bounded above by anything, as $t^2 \to \infty$ as $t \to \infty$ but $1(|t| > \epsilon) \to 1$ as $t \to \infty$. $\endgroup$
    – jbowman
    Commented Jun 18 at 2:43
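To put numbers on jbowman's point (my own sketch, not from the thread): for a $t(2)$ variate the tail probability $\Pr(|t|>T)$ vanishes as the threshold $T$ grows, while the truncated second moment $\int_{-T}^{T} t^2 f(t)\,dt$ keeps growing without bound (roughly like $2\ln T$), which is exactly how "small tails" and "infinite variance" coexist:

```python
import math

def f(t):
    # density of the Student-t distribution with 2 degrees of freedom
    return (1.0 + t * t / 2.0) ** -1.5 / (2.0 * math.sqrt(2.0))

def truncated_second_moment(T, step=0.01):
    # midpoint-rule approximation of the integral of t^2 * f(t) over [-T, T]
    s = 0.0
    for i in range(int(T / step)):
        t = (i + 0.5) * step
        s += t * t * f(t)
    return 2.0 * s * step  # double the half-line integral by symmetry

def tail_prob(T):
    # closed-form Pr(|t| > T) for the t(2) distribution
    return 1.0 - T / math.sqrt(2.0 + T * T)

for T in (10.0, 100.0, 1000.0):
    print(T, tail_prob(T), truncated_second_moment(T))
```

Each tenfold increase in $T$ shrinks the tail probability yet adds roughly a constant (about $2\ln 10 \approx 4.6$) to the truncated second moment, so the variance integral never settles down to a finite value.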
