Let $X_1,X_2$ be i.i.d. with pdf $$f_X(x)=\begin{cases} e^{-x} & \text{for } 0< x<\infty \\0 & \text{elsewhere } \end{cases}$$ Show that the random variables $Y_1=X_1+X_2$ and $Y_2=\frac{X_1}{X_1+X_2}$ are independent.

I know that for $Y_1$ and $Y_2$ to be independent, $P(Y_1\cap Y_2)=P(Y_1)P(Y_2)$.
  • What you wrote down is independence of events. For independent random variables, see here: probabilitycourse.com/chapter3/3_1_4_independent_random_var.php
    – Gnuk
    Commented Oct 3, 2016 at 8:32
  • More information is needed. I suspect that $X_1$ and $X_2$ are iid, but that is not mentioned in your question.
    – drhab
    Commented Oct 3, 2016 at 8:40
  • @HarrySmit. Thanks for the clarification.
    – angelo086
    Commented Oct 3, 2016 at 8:43
  • @drhab. The question doesn't mention it. But all the problems in class had variables that were iid, so that is a fair assumption.
    – angelo086
    Commented Oct 3, 2016 at 8:44
  • A direct route is to compute the PDF of $(Y_1,Y_2)$ using the classical change of variables formula and to check that this PDF factorizes as a product. Did you try that?
    – Did
    Commented Oct 3, 2016 at 8:46

2 Answers

It seems that you already have found that $X_1=Y_1Y_2$ and $X_2=Y_1(1-Y_2)$. But you are not done. What is the domain of $Y_1$ and $Y_2$? Since $X_1,X_2>0$ you have that $Y_1>0$ and $0<Y_2<1$. Hence \begin{align*}f_{Y_1,Y_2}(y_1,y_2)&=f_{X_1,X_2}(y_1y_2,y_1(1-y_2))\left|\det\begin{pmatrix}y_2&y_1\\1-y_2&-y_1\end{pmatrix}\right|\mathbf{1}_{\{0<y_1,\,0<y_2<1\}}\\[0.3cm]&=e^{-y_1y_2}e^{-y_1+y_1y_2}\,|-y_1|\,\mathbf{1}_{\{0<y_1,\,0<y_2<1\}}\\[0.3cm]&=y_1e^{-y_1}\mathbf{1}_{\{0<y_1\}}\mathbf{1}_{\{0<y_2<1\}}\\[0.3cm]&=\underbrace{y_1e^{-y_1}\mathbf{1}_{\{0<y_1\}}}_{f_{Y_1}(y_1)}\underbrace{\mathbf{1}_{\{0<y_2<1\}}}_{f_{Y_2}(y_2)}\end{align*} So, never forget the domain! The result is that $Y_2 \sim U(0,1)$ and $Y_1$ has density $f_{Y_1}(y_1)=y_1e^{-y_1}$ for $y_1>0$.
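As a sanity check, the change of variables in this answer can be reproduced symbolically, for example with Python's sympy (a sketch, not part of the original answer; the variable names are my own):

```python
import sympy as sp

y1, y2 = sp.symbols("y1 y2", positive=True)

# Inverse transformation: x1 = y1*y2, x2 = y1*(1 - y2)
x1, x2 = y1 * y2, y1 * (1 - y2)

# Jacobian of (x1, x2) with respect to (y1, y2)
J = sp.Matrix([[sp.diff(x1, y1), sp.diff(x1, y2)],
               [sp.diff(x2, y1), sp.diff(x2, y2)]])
jac = sp.Abs(J.det())  # det = -y1, so |det| = y1 on the domain y1 > 0

# Joint density of (Y1, Y2) on 0 < y1, 0 < y2 < 1
joint = sp.simplify(sp.exp(-x1) * sp.exp(-x2) * jac)  # y1*exp(-y1)

# Marginals: integrate out the other variable over its domain
f1 = sp.integrate(joint, (y2, 0, 1))       # density of Y1
f2 = sp.integrate(joint, (y1, 0, sp.oo))   # density of Y2 (constant 1)

# The joint density factorizes into the product of the marginals,
# which is exactly the independence claim
assert sp.simplify(joint - f1 * f2) == 0
```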
  • Nice clear formal proof of independence, and appropriate emphasis on domains.
    – BruceET
    Commented Oct 3, 2016 at 9:00
  • +1. In the last line, how do you tell that the marginal densities of $Y_1$ and $Y_2$ are those two terms? I suppose one can integrate w.r.t. $y_1$ and $y_2$ to get those two terms, but is that how it is done, or am I missing something?
    – darkgbm
    Commented Jul 9, 2023 at 1:32
Here is a simulation of 100,000 $(Y_1, Y_2)$-pairs from R statistical software. The $X_i$ are iid $Exp(rate=1)$, $Y_1 \sim Gamma(shape=2, rate=1)$, and $Y_2 \sim Unif(0, 1)$. Also, $Y_1$ and $Y_2$ are uncorrelated. (If these distributions are not covered in your text, you can see the Wikipedia articles on 'exponential distribution' and 'gamma distribution'.)

x1 = rexp(10^5);  x2 = rexp(10^5)   # X1, X2 iid Exp(rate = 1)
y1 = x1 + x2;  y2 = x1/y1           # Y1 = X1 + X2,  Y2 = X1/(X1 + X2)
cor(y1, y2)
## 0.002440974 # consistent with 0 population correlation

In the figure below, the first panel shows no pattern of association between $Y_1$ and $Y_2$. Of course, this is no formal proof of independence, but if you do the bivariate transformation to get the joint density function of $Y_1$ and $Y_2,$ you should be able to see that it factors into the PDFs of $Y_1$ and $Y_2$. These PDFs are plotted along with the histograms of the simulated distributions of $Y_1$ and $Y_2$ in the second and third panels, respectively.

[Figure: scatterplot of $(Y_1, Y_2)$ pairs, and histograms of the simulated $Y_1$ and $Y_2$ overlaid with their PDFs]
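For readers working in Python rather than R, an analogous check can be sketched with numpy/scipy (my own sketch, not part of the original answer): it confirms the near-zero correlation, tests the claimed marginals, and adds a rough chi-square check of independence on a binned version of the pairs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)
n = 10**5

# X1, X2 iid Exp(rate = 1)
x1 = rng.exponential(size=n)
x2 = rng.exponential(size=n)
y1 = x1 + x2   # Y1 ~ Gamma(shape = 2, rate = 1)
y2 = x1 / y1   # Y2 ~ Unif(0, 1)

# Sample correlation should be near the population value 0
r = np.corrcoef(y1, y2)[0, 1]

# Kolmogorov-Smirnov tests against the claimed marginal distributions
p_gamma = stats.kstest(y1, stats.gamma(a=2).cdf).pvalue
p_unif = stats.kstest(y2, stats.uniform.cdf).pvalue

# Rough independence check: chi-square test on a 10 x 10 binning,
# using quantile-based bin edges for Y1 so cells have similar counts
edges1 = np.quantile(y1, np.linspace(0, 1, 11))
table, _, _ = np.histogram2d(y1, y2, bins=[edges1, np.linspace(0, 1, 11)])
chi2, p_ind, dof, _ = stats.chi2_contingency(table)

print(abs(r) < 0.02, p_gamma > 0.001, p_unif > 0.001, p_ind > 0.001)
```

As in the R simulation, none of these checks is a proof of independence; they only show the sample is consistent with the factorization derived analytically.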
