$\begingroup$

I have been learning about the different notions of convergence for sequences of random variables. I know that almost sure convergence is the strongest, and that the next best thing is convergence in probability. Can anyone provide an example of an application where convergence in probability is still valuable, even though it is not as strong as almost sure convergence?

My definitions are:

  1. $X_n$ converges to $X$ almost surely if $\mathbb{P}(\lim_{n\rightarrow \infty}X_n=X)=1$.
  2. $X_n$ converges to $X$ in probability if for any $\epsilon>0$, $\lim_{n\rightarrow \infty}\mathbb{P}(|X_n-X|\geq \epsilon)=0$.
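To make the gap between the two definitions concrete, here is a small simulation of the classic counterexample (my own illustration, not from the question): independent $X_n \sim \text{Bernoulli}(1/n)$. Then $\mathbb{P}(|X_n - 0| \geq \epsilon) = 1/n \to 0$, so $X_n \to 0$ in probability; but $\sum_n 1/n = \infty$, so by the second Borel–Cantelli lemma $X_n = 1$ infinitely often almost surely, and there is no almost sure convergence.

```python
import random

# Independent X_n ~ Bernoulli(1/n): converges to 0 in probability,
# but (by the second Borel-Cantelli lemma) NOT almost surely, since
# X_n = 1 happens infinitely often with probability 1.

random.seed(0)

def estimate_tail_prob(n, trials=100_000):
    """Monte Carlo estimate of P(X_n >= eps) for any eps in (0, 1].

    Since X_n is {0, 1}-valued, this is just P(X_n = 1) = 1/n.
    """
    return sum(random.random() < 1.0 / n for _ in range(trials)) / trials

for n in (10, 100, 1000):
    print(n, estimate_tail_prob(n))  # tail probability shrinks like 1/n
```

The shrinking tail probabilities are exactly what the definition of convergence in probability asks for, while the a.s. failure is invisible to any fixed-$n$ computation.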
$\endgroup$
  • $\begingroup$ Stochastic integrals are defined via limits in probability, because it is in general impossible to define them using a.s. convergence. $\endgroup$ Commented May 26 at 12:17

1 Answer

$\begingroup$

Convergence in probability can be highly valuable precisely because it is weaker than almost sure convergence. In many practical scenarios, we can only establish convergence in probability, whereas almost sure convergence may not hold.

Strong and Weak LLN

For example, we have two notions for the LLN (law of large numbers): the strong law of large numbers and the weak law of large numbers. The former refers to almost sure convergence, while the latter refers to convergence in probability.

The simplest form of the LLN is for i.i.d. random variables $X_1,X_2,\cdots$ with a finite mean $\mathbb{E}[X_1]<\infty$. It can be proved that $$ \frac{1}{n}\sum_{k=1}^nX_k\xrightarrow{\text{a.s.}}\mathbb{E}[X_1]. $$ This convergence happens almost surely (also in $L^1$).
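A quick numerical sketch of this (my own example, using Exponential(1) variables, whose mean is $1$): the sample means settle near $\mathbb{E}[X_1]$ as $n$ grows.

```python
import random

# Sample means of i.i.d. Exponential(1) variables (E[X_1] = 1)
# approach the true mean as n grows, as the LLN predicts.
random.seed(42)

def sample_mean(n):
    """Average of n i.i.d. Exponential(1) draws."""
    return sum(random.expovariate(1.0) for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(n, sample_mean(n))  # drifts toward 1.0
```

Of course, a simulation like this cannot distinguish a.s. convergence from convergence in probability; it only illustrates the shared conclusion that the sample mean stabilizes at $\mathbb{E}[X_1]$.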

Another fundamental LLN holds for a stationary Markov chain $\{X_n\}$. If $\{X_n\}$ is ergodic, the convergence again happens in both the a.s. and $L^1$ senses. However, things are different for an ergodic Markov chain that is not necessarily stationary: there we can only prove convergence in probability.
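As a hedged illustration of the Markov-chain ergodic average (a toy example of mine, not from the cited reference): for a two-state chain with $\mathbb{P}(0\to 1)=p$ and $\mathbb{P}(1\to 0)=q$, the stationary distribution puts mass $p/(p+q)$ on state $1$, and the ergodic theorem says the long-run fraction of time spent in state $1$ converges to that value even when the chain does not start in stationarity.

```python
import random

# Two-state Markov chain started (non-stationarily) in state 0.
# The time average of the indicator of state 1 converges to the
# stationary probability pi_1 = p / (p + q).
random.seed(1)

def time_in_state_one(steps, p=0.3, q=0.1, start=0):
    """Fraction of the first `steps` transitions spent in state 1."""
    state, visits = start, 0
    for _ in range(steps):
        if state == 0:
            state = 1 if random.random() < p else 0
        else:
            state = 0 if random.random() < q else 1
        visits += state
    return visits / steps

print(time_in_state_one(200_000))  # near 0.3 / (0.3 + 0.1) = 0.75
```

Here the chain mixes quickly, so the a.s. version of the ergodic theorem applies; the subtle non-stationary cases in Kulik (2018) are exactly those where only convergence in probability survives.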

For reference, see Kulik (2018) pp. 175-176.

Strong and Weak Consistency

A statistical estimator that converges to the true value is called strongly (respectively weakly) consistent if the convergence happens a.s. (respectively in probability).

There is a long list of strongly or weakly consistent estimators in Chapter 6 of the book by Bhattacharya (2016).

$\endgroup$
  • $\begingroup$ Thank you so much for your comprehensive answer along with great references. This helps a lot. $\endgroup$
    – curiosity
    Commented May 26 at 15:01
  • $\begingroup$ May I also ask: in cases where only convergence in probability is attainable, is it possible to prove that a.s. convergence is not attainable? If this is too involved for a comment, I can also write it up as a question and post it. I appreciate your insight. $\endgroup$
    – curiosity
    Commented May 27 at 1:13
  • $\begingroup$ There are some artificial examples where a.s. convergence can be shown to fail, but there are not many instances arising in more practical contexts where such a result or a counterexample can be established, including those I mentioned. $\endgroup$ Commented May 27 at 7:40
  • $\begingroup$ But, I am somewhat curious what responses you would receive. $\endgroup$ Commented May 27 at 7:42
