
I'm trying to solve the following exercise:

Suppose $X_1$ and $X_2$ are i.i.d. random variables with common $\mathcal{N}(0,1)$ normal distribution. Define $Y_n = X_1(\frac{1}{n} + |X_2|)^{-1}$. Use Fubini's theorem to verify that $\mathbb{E}(Y_n) = 0$. Note that as $n\to \infty$, $Y_n \to Y = X_1 |X_2|^{-1}$, and that the expectation of $Y$ does not exist, so this is one case where random variables converge but means do not.

I really don't know how to start solving this. Can you give me any hints (not the full solution, just hints)? Thank you very much in advance.

P.S.: This exercise appears in A Probability Path, by Resnick, chapter 5 (which is about integration and expectation).

P.P.S.: I had an idea, but it does not use Fubini's theorem: $X_1$ is independent of $(\frac{1}{n} + |X_2|)^{-1}$ (because the latter is a measurable function of $X_2$), and therefore $$ \mathbb{E}(Y_n) = \mathbb{E}(X_1)\, \mathbb{E} \left[ \left(\frac{1}{n} + |X_2| \right)^{-1} \right] = 0. $$
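
For completeness, the Fubini details can be sketched as follows. Since $\left(\frac{1}{n} + |X_2|\right)^{-1} \leq n$, integrability comes first (by Tonelli): $$\mathbb{E}|Y_n| = \int_{\mathbb{R}^2} \frac{|x_1|}{\frac{1}{n} + |x_2|}\, \phi(x_1)\phi(x_2)\, dx_1\, dx_2 \leq n\, \mathbb{E}|X_1| < \infty,$$ and then Fubini's theorem lets the double integral factor: $$\mathbb{E}(Y_n) = \int_{\mathbb{R}} \frac{\phi(x_2)}{\frac{1}{n} + |x_2|} \left[ \int_{\mathbb{R}} x_1\, \phi(x_1)\, dx_1 \right] dx_2 = \int_{\mathbb{R}} \frac{\phi(x_2)}{\frac{1}{n} + |x_2|} \cdot 0\, dx_2 = 0.$$ (Here $\phi$ is the standard normal density, so $\phi(x_1)\phi(x_2)$ is the joint density of $(X_1, X_2)$ by independence.)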

  • In general, the first equality in your P.P.S. is shown using Fubini's theorem. Just insert the details there and you are done.
    – David
    Commented Apr 5, 2017 at 15:58
  • Yes, you are right, thank you. I've just re-read that in the chapter and indeed it does use Fubini. Commented Apr 5, 2017 at 20:05
  • Not a problem. You don't happen to be taking a course from Denis Sauré down in Chile, do you? He taught me probability/stochastic calculus back when he was at UPitt. (I ask because very few professors use Resnick over Billingsley.)
    – David
    Commented Apr 5, 2017 at 20:11
  • Holy cow! No, but I know him! I've just worked with him on my MSc thesis and my math engineering degree. He was my adviser/guiding professor, and in one course he taught, with me as T.A., he used this book :) Commented Apr 5, 2017 at 20:16
  • Small world! Anyway, best of luck with your work/studies.
    – David
    Commented Apr 5, 2017 at 20:21

1 Answer

You can start like this:

Since $Y_n = X_1 \left[\frac{1}{n} + |X_2|\right]^{-1}$ and the factor $\left[\frac{1}{n} + |X_2|\right]^{-1}$ is strictly positive, $$\Pr[Y_n \leq y] = \Pr\left[X_1 \left[\tfrac{1}{n} + |X_2|\right]^{-1} \leq y\right] \\ = \Pr\left[X_1 \leq y \left[\tfrac{1}{n} + |X_2|\right]\right] \\ = \int_{x_2=-\infty}^{\infty} \int_{x_1=-\infty}^{y[\frac{1}{n} + |x_2|]} \phi(x_1)\, \phi(x_2)\, dx_1\, dx_2 \\ = \int_{x_2=-\infty}^{\infty} \phi(x_2) \left[ \int_{x_1=-\infty}^{y[\frac{1}{n} + |x_2|]} \phi(x_1)\, dx_1 \right] dx_2 \\ = \int_{x_2=-\infty}^{\infty} \phi(x_2)\, \Phi\!\left(y\left[\tfrac{1}{n} + |x_2|\right]\right) dx_2.$$

Then find the pdf of $Y_n$ by differentiating $\Pr[Y_n \leq y]$ with respect to $y$. After that, use the formula for the expectation, $$E(Y_n) = \int_{-\infty}^{\infty} y\, f_{Y_n}(y)\, dy.$$ The range of $Y_n$ is all of $\mathbb{R}$: by the second line of the display above, $X_1$ takes values in all of $\mathbb{R}$ and the factor $\left[\frac{1}{n} + |X_2|\right]^{-1}$ is strictly positive.
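
Concretely, differentiating under the integral sign (dominated convergence justifies this here, since $\left[\frac{1}{n} + |x_2|\right] \phi\!\left(y\left[\frac{1}{n} + |x_2|\right]\right) \phi(x_2) \leq \left[\frac{1}{n} + |x_2|\right] \phi(0)\, \phi(x_2)$, which is integrable in $x_2$) would give $$f_{Y_n}(y) = \int_{-\infty}^{\infty} \left[\tfrac{1}{n} + |x_2|\right] \phi\!\left(y\left[\tfrac{1}{n} + |x_2|\right]\right) \phi(x_2)\, dx_2.$$ Note this is an even function of $y$, so once $\int |y|\, f_{Y_n}(y)\, dy < \infty$ is checked, symmetry alone forces $E(Y_n) = 0$.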

Note: $\phi$ and $\Phi$ denote the standard normal pdf and cdf, respectively.
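
As a numerical sanity check (a hypothetical simulation, not part of the exercise), the sample mean of $Y_n$ stays near $0$ for each fixed $n$, while the limit $Y = X_1 |X_2|^{-1}$ is standard Cauchy (it has the same distribution as $X_1/X_2$, by the symmetry of $X_1$), so its running sample mean never settles down:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
m = 1_000_000  # Monte Carlo sample size

x1 = rng.standard_normal(m)  # X_1 ~ N(0, 1)
x2 = rng.standard_normal(m)  # X_2 ~ N(0, 1), independent of X_1

# E(Y_n) = 0 for every finite n, so each sample mean should sit near 0.
for n in (1, 10, 100):
    y_n = x1 / (1.0 / n + np.abs(x2))
    print(f"n = {n:>3}: sample mean of Y_n = {y_n.mean():+.4f}")

# The limit Y = X_1 / |X_2| has no expectation: its running sample
# means wander instead of converging as the sample grows.
y = x1 / np.abs(x2)
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"mean of first {k:>7} samples of Y = {y[:k].mean():+8.2f}")
```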

  • Thank you for your answer. I have some doubts, though: (1) The upper limit of the inner integral should be $y[\frac{1}{n} + |x_2|]$, right? Not $|X_2|$. (2) You used $\phi$ to denote both p.d.f.s indistinctly, right? So in $\phi(x_2)$, that p.d.f. is the one associated with $|X_2|$, the folded normal distribution. (3) Which "second line" are you referring to in order to find the range of $Y_n$? Commented Apr 5, 2017 at 20:07
