
Let $Y_n, X_n$ be sequences of random variables. Does $\sup_z \Big|P(Y_n<z\ |\ X_n) - \Phi(z)\Big|\xrightarrow[]{p}0$ imply that $\{Y_n\ |\ X_n=x_n\}\xrightarrow[]{d}N(0,1)$, where $x_n$ is any sequence of real numbers within the support of $X_n$? Note that $P(Y_n<z\ |\ X_n)$ is a random variable.

By asymptotic equivalence, the above implies that for every value of $z$, $P(Y_n<z\ |\ X_n)\xrightarrow[]{d}\Phi(z)$, which in turn implies $P(Y_n<z\ |\ X_n)\xrightarrow[]{p}\Phi(z)$ for all $z$, since convergence in distribution to a constant implies convergence in probability. This is as far as I could get.

This question is closely related to this other question.

  • Please check the section 'Measure-theoretic formulation' in your link. The conditional probability distribution of $Y_n$ given $X_n$ (or, more precisely, given the sigma-algebra $\sigma(X_n)$ generated by $X_n$) is a random distribution (equivalently, a measurable function from $\Omega\times\mathcal{B}(\mathbb{R})$ to $\mathbb{R}$). Commented Jan 2, 2022 at 13:20
  • I think $Y_n \mid X_n$ may not be well defined at all. For example, $(Y_n \mid X_n) \le z$ looks like an event, but I doubt there is a probability space that has it as an event, and I do not see whether $P\big( (Y_n \mid X_n) \le z\big)$ is supposed to be a numerical value, a random variable, or something else.
    – Henry
    Commented Jan 2, 2022 at 13:21
  • Referring to your last formulation, I think you should specify what $x_n$ (with a lowercase $x$) is. Maybe any sequence of real numbers going to infinity?
    – Thomas
    Commented Jan 2, 2022 at 22:53
  • Since the notion of $F_{Y_n\mid X_n=x}$ only makes sense $\mu_n$-almost surely (where $\mu_n(\cdot)=\mathbf{P}(X_n\in\cdot)$ is the law of $X_n$), and since $\mu_n$ changes as $n$ progresses, comparing this function for different $n$ only makes sense once we introduce a ($\sigma$-finite) measure $\mu$ with respect to which all the $\mu_n$'s are absolutely continuous. (A quintessential example of this is when all the $X_n$'s are discrete.) Even then, the question in effect asks whether we can improve convergence in probability to a.e. convergence, so I am skeptical about its validity... Commented Jan 2, 2022 at 23:08
  • At least we can prove that $(Y_n\mid X_n)$ converges weakly to $\mathcal{N}(0,1)$ in probability, in the sense that $$\forall f\in C_b(\mathbb{R}) \ : \qquad \mathbf{E}[f(Y_n)\mid X_n] \to \mathbf{E}[f(Z)] \quad \text{in probability}$$ where $Z\sim\mathcal{N}(0,1)$. (Note that, if there is no conditioning, then this is precisely the definition of $Y_n \to Z$ in distribution by the Portmanteau theorem.) Commented Jan 2, 2022 at 23:12

1 Answer


Let $\mathcal{P}(\mathbb{R})$ denote the space of Borel probability measures on $\mathbb{R}$, equipped with the topology of convergence in distribution. Also, for each $n$, let $\mu_n(\cdot) = \mathbf{P}(Y_n \in \cdot \mid X_n)$ denote the regular conditional distribution of $Y_n$ given $X_n$. Note that each $\mu_n$ is a $\mathcal{P}(\mathbb{R})$-valued random variable.

Then we claim that $\mu_n$ converges weakly to $\mathcal{N}(0, 1)$ in probability, in the sense that for each neighborhood $U$ of $\mathcal{N}(0, 1)$,

$$ \lim_{n\to\infty} \mathbf{P}(\mu_n \notin U) = 0. $$

By the Portmanteau theorem, this is equivalent to showing that for each bounded, continuous $f : \mathbb{R} \to \mathbb{R}$,

$$ \mu_n [f] = \mathbf{E}[f(Y_n) \mid X_n] \to \mathbf{E}[f(Z)] \quad \text{in probability}, $$

where $Z \sim \mathcal{N}(0, 1)$.
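Before the proof, here is a quick numerical illustration of the hypothesis on a toy model of my own choosing (not part of the question): take $Y_n = Z + X_n/n$ with $Z, X_n$ independent standard normals, so that $\mu_n = N(X_n/n, 1)$ and $M_n = \sup_z|\Phi(z - X_n/n) - \Phi(z)| \le |X_n|/(n\sqrt{2\pi}) \to 0$ in probability.

```python
# Toy model (my own illustration, not from the question): Y_n = Z + X_n/n
# with Z, X_n independent N(0,1), so mu_n = N(X_n/n, 1) and
# M_n = sup_z |Phi(z - X_n/n) - Phi(z)|, at most |X_n|/(n*sqrt(2*pi)).
import math
import random

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def kolmogorov_dist(m, grid_pts=1201):
    """sup_z |Phi(z - m) - Phi(z)|, approximated on a grid over [-6, 6]."""
    grid = [-6.0 + 12.0 * k / (grid_pts - 1) for k in range(grid_pts)]
    return max(abs(Phi(z - m) - Phi(z)) for z in grid)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(300)]  # samples of X_n
# average M_n over the samples, for increasing n: it shrinks like 1/n
avg_M = {n: sum(kolmogorov_dist(x / n) for x in xs) / len(xs)
         for n in (1, 10, 100)}
```

The grid approximation is harmless here because the supremum of $|\Phi(z-m)-\Phi(z)|$ is attained at $z = m/2$, well inside $[-6, 6]$ for the sampled values.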


To this end, write

$$ D_n(g) = \left| \mu_n[g] - \mathbf{E}[g(Z)] \right| \qquad\text{and}\qquad M_n = \sup_{z\in\mathbb{R}} \left| D_n(\mathbf{1}_{(-\infty, z)}) \right|. $$

Then each $M_n$ is a random variable (by monotonicity of CDFs, the supremum may be taken over rational $z$ only), and the assumption says precisely that $M_n \stackrel{p}\to 0$. Also, for each $-\infty \leq a < b \leq +\infty$, we have

\begin{align*} \mu_n([a, b)) = \mu_n[\mathbf{1}_{[a, b)}] &\leq D_n(\mathbf{1}_{[a, b)}) + \mathbf{P}(Z \in [a, b)) \\ &\leq D_n(\mathbf{1}_{(-\infty, b)}) + D_n(\mathbf{1}_{(-\infty, a)}) + \mathbf{P}(Z \in [a, b)) \\ &\leq 2M_n + \mathbf{P}(Z \in [a, b)) \end{align*}

and hence

\begin{align*} D_n(f \mathbf{1}_{[a, b)}) &\leq \biggl(\sup_{[a, b)} |f| \biggr) \bigl[ \mu_n([a, b)) + \mathbf{P}(Z \in [a, b)) \bigr] \\ &\leq \biggl(\sup_{[a, b)} |f| \biggr) \bigl[ 2M_n + 2\mathbf{P}(Z \in [a, b)) \bigr]. \end{align*}
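These two bounds can be sanity-checked numerically on a hypothetical instance of my own choosing (not part of the answer): take $\mu_n = N(m, 1)$ and $f = \cos$, so that $\sup|f| \le 1$ and $M_n$ is computable on a grid.

```python
# Numerical sanity check of the two displayed bounds on a toy instance
# (my own choice, not part of the answer): mu_n = N(m, 1), f = cos.
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def bounds_hold(m, a, b, steps=20000):
    grid = [-8.0 + 16.0 * k / steps for k in range(steps + 1)]
    M = max(abs(Phi(z - m) - Phi(z)) for z in grid)  # Kolmogorov distance M_n
    mu_ab = Phi(b - m) - Phi(a - m)                  # mu_n([a, b))
    Z_ab = Phi(b) - Phi(a)                           # P(Z in [a, b))
    # D_n(f 1_{[a,b)}) by midpoint-rule integration
    h = (b - a) / steps
    D = abs(sum(math.cos(a + (k + 0.5) * h)
                * (phi(a + (k + 0.5) * h - m) - phi(a + (k + 0.5) * h))
                for k in range(steps)) * h)
    first = mu_ab <= 2.0 * M + Z_ab + 1e-9             # first displayed bound
    second = D <= 1.0 * (2.0 * M + 2.0 * Z_ab) + 1e-9  # second, with sup|f| <= 1
    return first and second

ok = all(bounds_hold(m, a, b)
         for m in (0.5, -1.0, 2.0)
         for (a, b) in ((-1.0, 1.0), (0.0, 2.0), (-3.0, -0.5)))
```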

Now, let $\varepsilon > 0$ be arbitrary, and choose $\eta, R, \delta > 0$ so that

  • $(2+4\sup|f|)\eta < \varepsilon$,
  • $\mathbf{P}(Z \geq R) < \eta$, and
  • $|f(x) - f(y)| < \eta$ whenever $x, y \in [-R, R]$ and $|x - y| < \delta$.

Also, let $-R = x_0 < x_1 < \ldots < x_N = R$ be such that $x_{k+1} - x_k < \delta$ for all $k$. We first bound $D_n(f)$ as

\begin{align*} D_n(f) &\leq D_n(f\mathbf{1}_{(-\infty, -R)}) + D_n(f\mathbf{1}_{[R, \infty)}) + \sum_{k=0}^{N-1} D_n(f\mathbf{1}_{[x_k, x_{k+1})}). \end{align*}

Then by noting that

\begin{align*} &D_n(f\mathbf{1}_{(-\infty, -R)}) + D_n(f\mathbf{1}_{[R, \infty)}) \\ &\leq (\sup|f|)(4M_n + 2\mathbf{P}(Z < -R) + 2\mathbf{P}(Z \geq R)) \\ &\leq (\sup|f|)(4M_n + 4\eta) \end{align*}

and

\begin{align*} &\sum_{k=0}^{N-1} D_n(f\mathbf{1}_{[x_k, x_{k+1})}) \\ &\leq \sum_{k=0}^{N-1} D_n( (f - f(x_k)) \mathbf{1}_{[x_k, x_{k+1})} ) + \sum_{k=0}^{N-1}|f(x_k)| D_n(\mathbf{1}_{[x_k, x_{k+1})}) \\ &\leq \eta \sum_{k=0}^{N-1} \bigl[ \mu_n( [x_k, x_{k+1}) ) + \mathbf{P}(Z \in [x_k, x_{k+1})) \bigr] + (\sup |f|) \sum_{k=0}^{N-1} D_n(\mathbf{1}_{[x_k, x_{k+1})}) \\ &\leq 2\eta+ 2 (\sup|f|) N M_n, \end{align*}

we obtain

$$ D_n(f) \leq (2+4\sup|f|)\eta + (\sup|f|)(2N + 4) M_n. $$

Using this and $M_n \stackrel{p}\to 0$ together, we conclude

$$ \lim_{n\to\infty} \mathbf{P}( D_n(f) > \varepsilon) = 0. $$

Since $\varepsilon > 0$ is arbitrary, this implies that $D_n(f) \stackrel{p}\to 0$ and therefore the desired conclusion follows.
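As an end-to-end check on the toy instance $\mu_n = N(m_n, 1)$ with $m_n = 1/n \to 0$ (again my own example, not from the answer): for $W \sim N(m, 1)$ one has the closed form $\mathbf{E}[\cos W] = \cos(m)\,e^{-1/2}$, so $D_n(\cos)$ can be compared with $M_n$ directly.

```python
# End-to-end check on the toy instance mu_n = N(m_n, 1), m_n = 1/n -> 0
# (my own example): E[cos W] = cos(m) * exp(-1/2) for W ~ N(m, 1), and the
# Kolmogorov distance between N(m, 1) and N(0, 1) equals 2*Phi(|m|/2) - 1.
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def D_cos(m):
    """D_n(cos) = |E[cos N(m,1)] - E[cos N(0,1)]|, in closed form."""
    return math.exp(-0.5) * abs(math.cos(m) - 1.0)

def M(m):
    """Kolmogorov distance between N(m,1) and N(0,1)."""
    return 2.0 * Phi(abs(m) / 2.0) - 1.0

ms = [1.0 / n for n in (1, 10, 100, 1000)]
Ds = [D_cos(m) for m in ms]
Ms = [M(m) for m in ms]
# D_n(cos) vanishes (quadratically in m_n) as soon as M_n vanishes (linearly).
```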


Remark. If you look at the proof closely, all we need is the assumption that $$ \mathbf{P}(Y_n < z \mid X_n) \stackrel{p}\to \mathbf{P}(Z < z), \qquad\text{or equivalently,}\qquad D_n(\mathbf{1}_{(-\infty, z)}) \stackrel{p}\to 0 $$ for each $z \in \mathbb{R}$: the argument only uses $D_n(\mathbf{1}_{(-\infty, z)})$ at the finitely many points $\pm R, x_0, \ldots, x_N$, and a maximum of finitely many terms that converge to $0$ in probability still converges to $0$ in probability. I am not sure whether the (seemingly) stronger assumption $M_n \stackrel{p}\to 0$ leads to a stronger conclusion.

  • I would like to understand your answer. In the initial formula, why do you want to show that the sequence $\mu_n$ belongs to the neighborhood with probability zero and not one? Intuitively, as far as I understand, you want to show a kind of convergence to the normal distribution, of which $U$ is a neighborhood?
    – Thomas
    Commented Jan 3, 2022 at 23:31
  • @Thomas Ah, indeed that was a typo. It should be analogous to the definition of convergence in probability for $\mathbb{R}$-valued random variables: $$X_n\stackrel{p}\to c \qquad\iff\qquad \lim_{n\to\infty}\mathbf{P}(X_n\notin U)=0\quad\text{for any neighborhood $U$ of $c$.}$$ Let me fix it :) Commented Jan 3, 2022 at 23:34
