
Let $X_n$ and $Y_n$ be sequences of nonnegative random variables. Define $A_n:=\left\{\omega:X_{n}(\omega)>0\right\}$. Suppose that $\lim_{n\rightarrow\infty}\mathbf{P}(A_n)=0$ and $Y_n=\mathcal{O}_{p}(\beta_n)$, where $\left\{\beta_n\right\}_{n\in\mathbf{N}}$ is a sequence of strictly positive reals. Can we conclude that $X_nY_n=o_{p}(\beta_n)$?

From the definitions of $\mathcal{O}_{p}$ and $o_{p}$, I try to show that for every $\epsilon>0$, $\mathbf{P}\left(\frac{X_n Y_n}{\beta_n}>\epsilon\right)\rightarrow 0$ as $n\rightarrow\infty$.
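To fix notation, the definitions I am using are the standard ones (stated here in case conventions differ; absolute values are redundant since all variables are nonnegative):
$$Y_n=\mathcal{O}_{p}(\beta_n)\iff\forall\epsilon>0\ \exists M_{\epsilon}>0,\ N_{\epsilon}\in\mathbf{N}:\ \mathbf{P}\left(\frac{Y_n}{\beta_n}>M_{\epsilon}\right)<\epsilon\ \text{ for all }n\geq N_{\epsilon},$$
$$X_n=o_{p}(\beta_n)\iff\forall\epsilon>0:\ \lim_{n\rightarrow\infty}\mathbf{P}\left(\frac{X_n}{\beta_n}>\epsilon\right)=0.$$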

Since $\left\{\omega:\frac{X_n Y_n}{\beta_n}>\epsilon\right\}=\left\{\omega:\frac{X_n Y_n}{\beta_n}\cdot\mathbb{I}_{\{X_n>0\}}>\epsilon\right\}\biguplus\left\{\omega:\frac{X_n Y_n}{\beta_n}\cdot\mathbb{I}_{\{X_n=0\}}>\epsilon\right\}$ (where $\biguplus$ denotes disjoint union), and the second event is empty because the product vanishes on $\{X_n=0\}$, we get $$\mathbf{P}\left\{\omega:\frac{X_n Y_n}{\beta_n}>\epsilon\right\}=\mathbf{P}\left\{\omega:\frac{X_n Y_n}{\beta_n}\cdot\mathbb{I}_{\{X_n>0\}}>\epsilon\right\}.$$ Moreover, $$\left\{\omega:\frac{X_n Y_n}{\beta_n}\cdot\mathbb{I}_{\{X_n>0\}}>\epsilon\right\}\subseteq A_n,$$ so $$\lim_{n\rightarrow\infty}\mathbf{P}\left(\frac{X_n Y_n}{\beta_n}>\epsilon\right)=0.$$ I'm unsure about my proof, which does not seem to use $Y_n=\mathcal{O}_{p}(\beta_n)$. Please correct me if needed.
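As a quick numeric sanity check of the bound $\mathbf{P}(X_nY_n/\beta_n>\epsilon)\leq\mathbf{P}(A_n)$, here is a minimal simulation sketch; the particular choices of $X_n$, $Y_n$, and $\beta_n$ are illustrative assumptions, not part of the question:

```python
import numpy as np

# Sketch: X_n > 0 only with probability 1/n (so P(A_n) -> 0), with value 7 when positive;
# Y_n / beta_n = |Z| for Z standard normal, so Y_n = O_p(beta_n).
rng = np.random.default_rng(0)
eps, trials = 0.1, 200_000

for n in (10, 100, 1000, 10_000):
    x = 7.0 * (rng.random(trials) < 1.0 / n)      # nonnegative, P(X_n > 0) = 1/n
    y_over_beta = np.abs(rng.standard_normal(trials))
    prob = np.mean(x * y_over_beta > eps)
    print(f"n={n:>6}: P(X_n Y_n / beta_n > {eps}) ~ {prob:.4f} <= P(A_n) = {1.0/n:.4f}")
```

The estimated probability stays below $\mathbf{P}(A_n)=1/n$ and decays with $n$, matching the inclusion above.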

  • I'm unfamiliar with the notation, but based on the usual big $O$ notation and your definition of $o_p(\beta_n)$, I'd guess that the definition of $\mathcal{O}_p(\beta_n)$ is that for some constant $C>0$, $$\lim_n \mathbb{P}(Y_n>C\beta_n)=0.$$ Is this correct? You're implicitly using $X_nY_n/\beta_n>\epsilon\Leftrightarrow X_n>\epsilon$, which implicitly assumes $Y_n\leqslant \beta_n$ (almost surely, with $C=1$).
    – user469053
    Commented Apr 12 at 11:19
  • @user469053 I'm sorry to say that your definition of $\mathcal{O}_p$ is not standard in probability theory.
    – Kevin
    Commented Apr 12 at 11:58
  • @user469053 No. The $C$ need not be uniform as you have written it. The definition says that for each $\epsilon>0$, there exists a finite $M>0$, depending on $\epsilon$, such that $P(|X_{n}/a_{n}|>M)<\epsilon$ for all $n$. Commented Apr 12 at 18:22
  • For $X_{n}Y_{n}/\beta_{n}$ to converge in probability to $0$ under the condition that $P(A_{n})\to 0$, you don't need $Y_{n}$ to be $O_{p}(\beta_{n})$. If you have any sequence $Z_{n}$ of random variables which do not take the values $\pm\infty$ with non-zero probability, then $X_{n}Z_{n}$ automatically converges in probability to $0$, simply because $P(|X_{n}Z_{n}|>\epsilon)\leq P(A_{n})\to 0$. But we do have something stronger: if $X_{n}$ merely converges in probability to $0$ (which is weaker than $P(A_{n})\to 0$), then too, if $Y_{n}=O_{p}(\beta_{n})$, then $X_{n}Y_{n}=o_{p}(\beta_{n})$. @Kevin Commented Apr 13 at 7:49

1 Answer


The conclusion is true even if $X_{n}$ merely converges in probability to $0$, which is a weaker assumption than yours. You can prove it in two ways.

Method 1

Fix $\epsilon>0$ and $\delta>0$.

As $Y_{n}=O_{p}(\beta_{n})$, find an $M_{\epsilon}>0$ such that $P(|Y_{n}/\beta_{n}|>M_{\epsilon})<\epsilon$ for all $n$.

Now, by definition of convergence in probability, there exists $N(\delta,\epsilon)$ such that for $n\geq N$, you have $P(|X_{n}|>\frac{\delta}{M_{\epsilon}})<\epsilon$.

Now $$P(|X_{n}Y_{n}/\beta_{n}|>\delta)\leq P\left(|X_{n}|>\frac{\delta}{M_{\epsilon}}\right)+ P(|Y_{n}/\beta_{n}|>M_{\epsilon})+P\left(|X_{n}|>\frac{\delta}{M_{\epsilon}}\right)\leq 3\epsilon$$ for all $n\geq N(\epsilon,\delta)$.

The above is due to the partition \begin{align}\{|X_{n}Y_{n}/\beta_{n}|>\delta\}&=\{|X_{n}Y_{n}/\beta_{n}|>\delta , |X_{n}|>\delta/M_{\epsilon},|Y_{n}/\beta_{n}|\leq M_{\epsilon}\}\\&\bigcup\{|X_{n}Y_{n}/\beta_{n}|>\delta,|X_{n}|\leq\delta/M_{\epsilon},|Y_{n}/\beta_{n}|>M_{\epsilon}\}\\&\bigcup\{|X_{n}Y_{n}/\beta_{n}|>\delta, |X_{n}|>\delta/M_{\epsilon},|Y_{n}/\beta_{n}|>M_{\epsilon}\},\end{align} where the remaining case $\{|X_{n}|\leq\delta/M_{\epsilon},|Y_{n}/\beta_{n}|\leq M_{\epsilon}\}$ cannot occur on this event, since it would force $|X_{n}Y_{n}/\beta_{n}|\leq\delta$.

This holds for all $\epsilon>0$. This means $P(|X_{n}Y_{n}/\beta_{n}|>\delta)\xrightarrow{n\to\infty} 0$.

And the above holds for all $\delta>0$. Thus $X_{n}Y_{n}/\beta_{n}\xrightarrow{P}0$.
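As a quick illustration of this stronger statement (and of why convergence in probability of $X_{n}$ suffices even when $P(A_{n})=1$ identically), here is a minimal simulation sketch; the concrete choices $X_n=1/n$, $\beta_n=n$, $Y_n/\beta_n=|Z|$ are illustrative assumptions, not part of the problem:

```python
import numpy as np

# Sketch: X_n = 1/n deterministically, so P(A_n) = P(X_n > 0) = 1 for every n,
# yet X_n -> 0 in probability. Y_n / beta_n = |Z| for Z standard normal is O_p(1).
rng = np.random.default_rng(0)
delta, trials = 0.1, 200_000

for n in (10, 100, 1000, 10_000):
    x = 1.0 / n
    y_over_beta = np.abs(rng.standard_normal(trials))
    prob = np.mean(x * y_over_beta > delta)
    print(f"n={n:>6}: P(X_n Y_n / beta_n > {delta}) ~ {prob:.4f}")
```

The estimated probabilities decay to $0$, while the question's bound via $P(A_{n})$ is useless here since $P(A_{n})=1$.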

Method 2

The second method is a subsequential argument and is more for measure-theory enthusiasts.

First note that for a sequence of random variables $X_{n}$, you have that $X_{n}\xrightarrow{P}X$ if and only if for every subsequence $X_{n_{k}}$, there exists a further subsequence $X_{n_{k_{l}}}$ such that $ X_{n_{k_{l}}}\xrightarrow{P} X$. This is often a very useful characterization of convergence in probability.

The proof is very simple: take the real sequence $a_{n}=P(|X_{n}-X|>\epsilon)$. For each subsequence $a_{n_{k}}$ there exists a further subsequence $a_{n_{k_{l}}}$ which converges to $0$, and this forces $a_{n}\to 0$. (This is a property of sequences in arbitrary metric spaces; see the spelled-out step below.)
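Spelled out, the real-sequence fact follows by contraposition:
$$a_n\nrightarrow 0\ \Longrightarrow\ \exists\,\epsilon_{0}>0\ \text{and indices}\ n_{1}<n_{2}<\cdots\ \text{with}\ a_{n_{k}}\geq\epsilon_{0}\ \text{for all }k,$$
and no further subsequence of such an $(a_{n_{k}})$ can converge to $0$, a contradiction.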

Now, let $X_{n_{k}}Y_{n_{k}}/\beta_{n_{k}}$ be an arbitrary subsequence of $X_{n}Y_{n}/\beta_{n}$. WLOG, for simplicity of notation, assume that the subsequence is the sequence itself, i.e. we work with $X_{n}Y_{n}/\beta_{n}$ directly.

By definition of $O_{p}$, for each $k\in\mathbb{N}$ there exist an $M_{k}>0$ and an index $n_{k}$ (which we may take strictly increasing in $k$) such that $P(|\frac{Y_{n}}{\beta_{n}}|>M_{k})\leq \frac{1}{2^k}$ for all $n\geq n_{k}$.

So you have that $\sum_{k=1}^{\infty}P(|\frac{Y_{n_{k}}}{\beta_{n_{k}}}|>M_{k})<\infty$.
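For reference, the form of the Borel–Cantelli Lemma being invoked: if $E_{k}$ are events with $\sum_{k=1}^{\infty}P(E_{k})<\infty$, then $P(\limsup_{k}E_{k})=0$; that is, almost every $\omega$ belongs to only finitely many of the $E_{k}$. Here $E_{k}=\{|Y_{n_{k}}/\beta_{n_{k}}|>M_{k}\}$.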

Thus, the Borel–Cantelli Lemma implies that, almost surely, $Y_{n_{k}}(\omega)/\beta_{n_{k}}\leq M_{k}$ for all large enough $k$. (Say this occurs for all $\omega\in A$ with $P(A)=1$.)

Case 1: Let $\sup_{k}M_{k}=\infty$. WLOG assume $M_{k}\uparrow \infty$.

Then, since $X_{n_{k}}\xrightarrow{P} 0$, we have for each $l$ that $P(|X_{n_{k}}|> \frac{1}{M_{k_{l}}^{2}})\to 0$ as $k\to\infty$.

So now, choose $n_{k_{l}}$ such that $$P(|X_{n_{k_{l}}}|>\frac{1}{M_{k_{l}}^{2}})\leq \frac{1}{2^{k_{l}}}$$

So again, by the Borel–Cantelli Lemma, you have that $\displaystyle X_{n_{k_{l}}}(\omega)\leq \frac{1}{M_{k_{l}}^{2}}$ almost surely for all large enough $l$. (Say this occurs for all $\omega\in B$ with $P(B)=1$.)

Thus, on a set of probability $1$ (namely $A\cap B$),

$$|X_{n_{k_{l}}}(\omega)Y_{n_{k_{l}}}(\omega)/\beta_{n_{k_{l}}}|\leq \dfrac{M_{k_{l}}}{M_{k_{l}}^{2}}=\dfrac{1}{M_{k_{l}}} $$ for all large enough $l$, and the right-hand side goes to $0$ since $M_{k}\uparrow\infty$.

Thus $X_{n_{k_{l}}}(\omega)Y_{n_{k_{l}}}(\omega)/\beta_{n_{k_{l}}}\to 0$ for every $\omega\in A\cap B$.

Thus $X_{n_{k_{l}}}Y_{n_{k_{l}}}/\beta_{n_{k_{l}}}\xrightarrow{a.s.}0$ which implies $X_{n_{k_{l}}}Y_{n_{k_{l}}}/\beta_{n_{k_{l}}}\xrightarrow{P}0$.

Case 2:

If $\sup_{k}M_{k}<\infty$, then you have that $Y_{n_{k}}/\beta_{n_{k}}\leq M_{k}$ almost surely for all large enough $k$.

Thus $\sup_{k}Y_{n_{k}}(\omega)/\beta_{n_{k}}\leq C(\omega)$ almost surely, for some finite $C(\omega)$.

Now as $X_{n_{k}}$ converges in probability to $0$, find a subsequence $X_{n_{k_{l}}}$ that converges almost surely to $0$.

Thus you have $\bigg|X_{n_{k_{l}}}Y_{n_{k_{l}}}/\beta_{n_{k_{l}}}\bigg|\leq C(\omega)|X_{n_{k_{l}}}|$ for all large enough $l$, and the RHS goes to $0$ almost surely.

Thus $X_{n_{k_{l}}}Y_{n_{k_{l}}}/\beta_{n_{k_{l}}}\xrightarrow{a.s.}0$, which also means $X_{n_{k_{l}}}Y_{n_{k_{l}}}/\beta_{n_{k_{l}}}\xrightarrow{P}0$.

Thus combining all cases, what we have shown is that given any subsequence of the original sequence, there exists a further subsequence which converges in probability to $0$. This means that $\dfrac{X_{n}Y_{n}}{\beta_{n}}\xrightarrow{P}0$.

NOTE: Essentially, Case $2$ is just a subcase of Case $1$, because if $M_{k}$ is bounded, we can always replace this sequence by a sequence $L_{k}$ with $L_{k}>M_{k}$ and $L_{k}\uparrow\infty$ and work with that instead. This works because $P(|Y_{n}/\beta_{n}|>L_{k})\leq P(|Y_{n}/\beta_{n}|>M_{k})$.

  • Note that your solution itself suffices for your problem. I have shown a stronger version in which $X_{n}$ converges to $0$ in probability, which is weaker than what you are assuming about $X_{n}$, i.e. $P(A_{n})\to 0$. @Kevin Commented Apr 13 at 6:58
  • @Kevin For example, if you take $X_n$ to be the deterministic sequence $1/n$, then your method will fail, as $P(A_n)=1$ identically. Commented Apr 13 at 7:01
  • Thank you for the reminder and example. I appreciate the general result $(X_n = o_p(1), Y_n = O_p(\beta_n) \Rightarrow X_nY_n = o_p(\beta_n))$ you've derived from a subsequential approach.
    – Kevin
    Commented Apr 13 at 9:00
