
I know that the Rao-Blackwell theorem states that conditioning an unbiased estimator on a sufficient statistic yields the best unbiased estimator. Is the only difference between Lehmann-Scheffé and Rao-Blackwell that in Lehmann-Scheffé, you need an unbiased estimator that is based on a complete sufficient statistic? I am also having a hard time conceptually understanding the definition of a complete statistic.


2 Answers


Rao–Blackwell says the conditional expected value of an unbiased estimator given a sufficient statistic is another unbiased estimator that's at least as good. (I seem to recall that you can drop the assumption of unbiasedness and all you lose is the conclusion of unbiasedness; you still improve the estimator. So you can apply it to MLEs and other possibly biased estimators.) In examples that are commonly exhibited, the Rao–Blackwell estimator is immensely better than the estimator that you start with. That's because you usually start with something really crude, because it's easy to find, and you know that the Rao–Blackwell estimator will be pretty good no matter how crude the thing you start with is.
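To make "immensely better" concrete, here is a minimal simulation sketch of the classical Poisson example (an added illustration, not part of the answer above; the sample size n, rate lam, and replication count n_sims are arbitrary choices): $X_1,\dots,X_n$ are i.i.d. Poisson, the target is $P(X=0)=e^{-\lambda}$, the crude unbiased estimator is $\mathbf{1}\{X_1=0\}$, and conditioning on the sufficient statistic $T=\sum X_i$ gives $((n-1)/n)^T$.

```python
# Minimal sketch of Rao-Blackwellization for a Poisson sample.
# Crude unbiased estimator of P(X = 0): the indicator 1{X_1 = 0}.
# Conditioning it on the sufficient statistic T = sum(X_i) gives ((n-1)/n)**T,
# since X_1 | T = t is Binomial(t, 1/n).  n, lam, n_sims are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, lam, n_sims = 20, 1.0, 100_000
theta = np.exp(-lam)                           # true value of P(X = 0)

samples = rng.poisson(lam, size=(n_sims, n))
crude = (samples[:, 0] == 0).astype(float)     # 1{X_1 = 0}
T = samples.sum(axis=1)                        # sufficient statistic
rao_blackwell = ((n - 1) / n) ** T             # E[1{X_1 = 0} | T]

for name, est in [("crude", crude), ("Rao-Blackwell", rao_blackwell)]:
    print(f"{name:>14}: mean = {est.mean():.4f} (target {theta:.4f}), "
          f"variance = {est.var():.5f}")
```

Both estimators average to the target, but the Rao-Blackwellized one has a far smaller variance, even though the starting estimator throws away all but one observation.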

The Lehmann–Scheffé theorem has an additional hypothesis that the sufficient statistic is complete, i.e. it admits no nontrivial unbiased estimators of zero: the only function of the statistic whose expected value is zero for every value of the parameter is the function that is zero almost surely. It also has an additional conclusion: the estimator you get is the unique best unbiased estimator.

So if an unbiased estimator is a function of a complete sufficient statistic, then it's the best possible unbiased estimator. Lehmann–Scheffé gives you that conclusion, but Rao–Blackwell does not. So the statement in the question about what Rao–Blackwell says is incorrect.

It should also be remembered that in some cases it's far better to use a biased estimator than an unbiased estimator.
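One standard illustration of that point (a small sketch added here, separate from the example in the paper linked in the comments; n, sigma2, and n_sims are arbitrary): for i.i.d. normal data, the unbiased variance estimator divides the centered sum of squares by $n-1$, while dividing by $n$ (the MLE) or by $n+1$ is biased but has smaller mean squared error, with $n+1$ being MSE-optimal among such divisors.

```python
# Sketch: biased variance estimators can beat the unbiased one in MSE.
# For i.i.d. normal data, compare dividing the centered sum of squares
# by n-1 (unbiased), n (MLE), and n+1 (smallest MSE in this family).
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, n_sims = 10, 4.0, 200_000

x = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(n_sims, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for divisor in (n - 1, n, n + 1):
    est = ss / divisor
    bias = est.mean() - sigma2
    mse = ((est - sigma2) ** 2).mean()
    print(f"divide by {divisor:>2}: bias = {bias:+.3f}, MSE = {mse:.3f}")
```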

  • Concerning my last comment above, that it's sometimes far better to use biased than unbiased estimators: This seems not to be too widely known among non-statisticians. I wrote a paper about it, devoted largely to my own concrete example of that phenomenon: math.umn.edu/~hardy/An_Illuminating_Counterexample.pdf Commented Oct 3, 2011 at 0:49
  • Thanks for the comment and answer. It cleared up some conceptual issues I had. – lord12, Commented Oct 3, 2011 at 1:02
  • @Michael: (+1) Thanks for that link to your note. It's an interesting example. I thought it curious that you opted to contrast the unbiased estimator of the variance with the MLE, when in the particular case you chose, the optimal choice would be to use a denominator of $n+1$. Also, what motivated the "light source" problem originally? – cardinal, Commented Oct 5, 2011 at 18:41
  • Actually I thought only that MLEs are sometimes biased, so I mentioned them as an example. In cases often cited as examples, the MLE is usually a sufficient statistic, so in that sense it's not a good example (but in some cases the MLE is not sufficient and one could apply Rao-Blackwell). I don't remember exactly what led me to the light-source problem, but it was while taking a graduate course that covered some related matters that I came up with that. Commented Oct 5, 2011 at 19:36
  • The link above no longer works. Here's one that does: arxiv.org/abs/math/0206006 Commented Mar 3, 2018 at 16:32

As Michael Hardy said, Rao-Blackwell only guarantees to improve (or at least not hurt) your original unbiased estimator. That is, you start with some unbiased estimator $T(\underline x)$, then you improve it by taking the expected value conditioned on a sufficient statistic, $T'(\underline x) := \mathbb{E}[T(\underline x) \mid S(\underline x)]$. "Improve" meaning that its variance will be less than or equal to the variance of the original estimator.
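As a small added sketch of that conditioning step (the Bernoulli setup, the crude estimator $T(\underline x) = x_1$, and the values of n and p below are illustrative assumptions, not part of the answer): with $S(\underline x) = \sum x_i$, exact enumeration shows $\mathbb{E}[x_1 \mid S = s] = s/n$, so the Rao-Blackwellized estimator is the sample mean, and it does not depend on $p$, as it must not, since $S$ is sufficient.

```python
# Compute the Rao-Blackwellized estimator E[X_1 | S] exactly by enumerating
# all outcomes of a Bernoulli(p) sample of size n.  The result is s/n for
# every value of S = s, regardless of p.  n and p are illustrative choices.
from itertools import product

n, p = 4, 0.3
outcomes = list(product((0, 1), repeat=n))
prob = {x: p ** sum(x) * (1 - p) ** (n - sum(x)) for x in outcomes}

for s in range(n + 1):
    group = [x for x in outcomes if sum(x) == s]             # outcomes with S = s
    total = sum(prob[x] for x in group)
    cond_mean = sum(x[0] * prob[x] for x in group) / total   # E[X_1 | S = s]
    print(f"S = {s}: E[X_1 | S] = {cond_mean:.4f}  (s/n = {s / n:.4f})")
```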

But something interesting happens if you add completeness of the statistic $S(\underline x)$: you get uniqueness. That is, there will be only one unbiased estimator that is a function of $S$. So if you start with any unbiased estimator and apply the Rao-Blackwell method with a complete sufficient statistic, the result is the unique unbiased estimator that is a function of $S$, and no unbiased estimator has smaller variance. So it is the UMVUE (uniformly minimum variance unbiased estimator).

Completeness is a restriction on a statistic: $S(\underline x)$ is complete if whenever $\mathbb{E}_\theta[g(S(\underline x))] = 0$ for every value of the parameter, then $g(S(\underline x)) = 0$ with probability 1. Take a Bernoulli sample and regard the whole sample $\underline x$ as the statistic, with $g(\underline x) = x_1 - x_2$. Then $\mathbb{E}(x_1 - x_2) = 0$ for every $p$, but $g(\underline x) \neq 0$ whenever the first and second observations differ; it is zero only when they agree. So the whole sample is not a complete statistic. $h(\underline x) = \sum x_i$, however, is a complete statistic: any function of it whose expected value is zero for every $p$ must itself be zero with probability 1, because that expected value is a polynomial in $p$, and it can vanish for every $p$ only if all of its coefficients, and hence the function's values, are zero.
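To back up that last claim with a small symbolic check (an added sketch; $n = 3$ and the symbol names are arbitrary): write $\mathbb{E}_p[h(T)]$ for $T = \sum x_i \sim \mathrm{Binomial}(n, p)$ as a polynomial in $p$ and ask which functions $h$ make it vanish for every $p$; only $h \equiv 0$ does, which is exactly completeness.

```python
# Symbolic check of completeness of T = sum(X_i) for Bernoulli(p) data, n = 3.
# E_p[h(T)] = sum_t h(t) * C(n, t) * p**t * (1-p)**(n-t) is a polynomial in p;
# forcing it to vanish identically forces every h(t) to be zero.
import sympy as sp

n = 3
p = sp.symbols("p")
h = sp.symbols(f"h0:{n + 1}")            # unknown values h(0), ..., h(n)

expectation = sum(h[t] * sp.binomial(n, t) * p**t * (1 - p)**(n - t)
                  for t in range(n + 1))

coeffs = sp.Poly(sp.expand(expectation), p).all_coeffs()
print(sp.solve(coeffs, h, dict=True))    # only the all-zero solution survives
```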

