22

Could anyone please indicate a general strategy (if there is any) to get the PDF (or CDF) of the product of two random variables, each with a known distribution and support?

My particular need is the following: Let $w := u \cdot v$. The PDF of $u$ is $$\frac{1}{\pi u}\frac{1}{\sqrt{u^2-0.25}},$$ for $u>0.5$, and the PDF of $v$ is proportional to $$\exp\left(-\frac{v}{v_0}\right),$$ for $v_1<v<v_2$. What's the PDF of $w$?

  • For the very general case, the approach is to use Laplace and Mellin transform techniques.
    – user217113
    Commented Feb 17, 2015 at 13:49

4 Answers

20

In case $U$ is a positive random variable with PDF $f_U$, and $V$ has a simple PDF $f_V$, so that the corresponding CDF, $F_V$, is simple too, it may be useful to use the following, assuming that $U$ and $V$ are independent. $$ {\rm P}(UV \le x) = \int {{\rm P}(UV \le x|U = u)f_U (u)\,du} = \int {{\rm P}\bigg(V \le \frac{x}{u}\bigg)f_U (u)\,du} = \int {F_V \bigg(\frac{x}{u}\bigg)f_U (u)\,du}. $$ You may then obtain the PDF of $UV$ upon differentiation.
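For a quick numerical sanity check of this formula on the distributions in the question, here is a minimal Python sketch. The specifics are assumptions for illustration: $v_0, v_1, v_2$ are arbitrary values, the exponential density of $V$ is taken to be normalized on $(v_1, v_2)$, and $U$ is sampled as $1/(2\cos\Theta)$ with $\Theta$ uniform on $(0, \pi/2)$, which one can check has exactly the stated density.

```python
# Sketch: compare the conditioning formula with Monte Carlo for the
# question's distributions. v0, v1, v2 are arbitrary illustrative values;
# the exponential density of V is normalized on (v1, v2).
import numpy as np
from scipy import integrate

v0, v1, v2 = 1.0, 0.5, 3.0

# Normalizing constant of the truncated exponential V on (v1, v2).
Z = v0 * (np.exp(-v1 / v0) - np.exp(-v2 / v0))

def F_V(v):
    """CDF of V; constant 0 below v1 and 1 above v2."""
    v = np.clip(v, v1, v2)
    return v0 * (np.exp(-v1 / v0) - np.exp(-v / v0)) / Z

def f_U(u):
    """PDF of U from the question, supported on u > 1/2."""
    return 1.0 / (np.pi * u * np.sqrt(u * u - 0.25))

def cdf_W(x):
    """P(UV <= x) = integral of F_V(x/u) f_U(u) du over u > 1/2.
    quad may warn about the integrable endpoint singularity at u = 1/2."""
    val, _ = integrate.quad(lambda u: F_V(x / u) * f_U(u), 0.5, np.inf)
    return val

# Monte Carlo: U = 1/(2 cos(Theta)) with Theta ~ Uniform(0, pi/2)
# reproduces f_U; V is drawn by inverting its CDF.
rng = np.random.default_rng(0)
n = 10**6
u = 0.5 / np.cos(rng.uniform(0.0, np.pi / 2, n))
p = rng.uniform(size=n)
v = -v0 * np.log(np.exp(-v1 / v0) - p * (np.exp(-v1 / v0) - np.exp(-v2 / v0)))
w = u * v

for x in (0.5, 1.0, 2.0):
    print(x, cdf_W(x), np.mean(w <= x))
```

The quadrature values and the Monte Carlo estimates of ${\rm P}(UV \le x)$ should agree to a few decimal places.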

EDIT: Here's a particularly simple example. Let $U$ and $V$ be independent uniform$(0,1)$ rv's. Here $f_U (u) = 1$, $0 < u <1$, $F_V (v) = v$, $0 < v < 1$, and $F_V (v) = 1$, $v \geq 1$. Thus, by the above formula, for any $0 < x \leq 1$, $$ {\rm P}(UV \le x) = \int_0^1 {F_V \bigg(\frac{x}{u}\bigg)du} = \int_0^x {F_V \bigg(\frac{x}{u}\bigg)du} + \int_x^1 {F_V \bigg(\frac{x}{u}\bigg)du} $$ $$ = \int_0^x {1\,du} + \int_x^1 {\frac{x}{u}\,du} = x - x\log x. $$ Hence the PDF of $UV$ is given, for $0 < x < 1$, by $$ f_{UV} (x) = \frac{d}{{dx}}(x - x\log x) = - \log x. $$
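A quick Monte Carlo sketch to confirm this closed form numerically:

```python
# Sketch: check that P(UV <= x) = x - x*log(x) for independent
# U, V ~ Uniform(0,1).
import numpy as np

rng = np.random.default_rng(1)
w = rng.uniform(size=10**6) * rng.uniform(size=10**6)

for x in (0.1, 0.3, 0.7):
    print(x, np.mean(w <= x), x - x * np.log(x))
```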

  • I know what you mean informally, but formally $P(U = u) = 0$ since $U$ is continuous, so $P(UV\leq x \mid U = u)$ does not make sense. What would be the formal approach? (My question arises from doing it the same way you did, but I need the formalities in order.)
    – Therkel
    Commented Dec 4, 2017 at 14:00
  • @Therkel The formal approach is to learn measure theory and call it a regular conditional probability. But really, it's the exact same thing.
    – nth
    Commented May 1, 2019 at 13:25
  • @nth I think that should be included in the answer, since I was very confused upon reading this the first time without paying attention to the comments. Commented Jun 25, 2019 at 19:23
  • @nth I agree with the comment by Therkel; this answer is not accurate: $f_U(u) \neq p(U=u)$. Commented Jun 20, 2021 at 3:23
5

Let $X$ and $Y$ be independent non-negative random variables, with density functions $f_X(x)$ and $f_Y(y)$. Let $Z=XY$.

Then $$P(Z\le z)=\iint_D f_X(x)f_Y(y)\,dx\,dy.$$ Here $D$ is the region in the first quadrant which is "below" the hyperbola $xy=z$.

Evaluate the integral, and differentiate the result with respect to $z$ to get the density function of $Z$. We can usually arrange to do the differentiation under the integral sign, but that still leaves one integral that may, like most integrals, not be expressible in terms of standard functions.
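To see the recipe in action on a case where everything is elementary, here is a symbolic sketch (my illustration, for $X, Y$ independent uniform$(0,1)$, where the region $D$ splits at $x = z$):

```python
# Sketch: the double-integral recipe carried out symbolically with sympy
# for X, Y ~ Uniform(0,1); differentiating P(Z <= z) recovers -log(z).
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# P(Z <= z) for 0 < z < 1: integrate over the region below xy = z.
# For x < z the whole strip 0 < y < 1 lies below the hyperbola.
cdf = sp.integrate(1, (y, 0, 1), (x, 0, z)) \
    + sp.integrate(1, (y, 0, z / x), (x, z, 1))
print(sp.simplify(cdf))              # should print z - z*log(z)
print(sp.simplify(sp.diff(cdf, z)))  # should print -log(z)
```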

An alternative approach is to find the density functions of the random variables $\ln X$ and $\ln Y$, by using standard methods. Then use the usual "convolution" formula to find the density function of $U$, where $U=\ln X +\ln Y$. Finally, find the density function of $\exp(U)$.
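Here is a minimal numerical sketch of that route (my illustration) for the simplest case, $X, Y$ independent uniform$(0,1)$, where the product density is known to be $-\log x$: discretize the density of $\ln X$ on a grid, convolve numerically, and map back.

```python
# Sketch: numerical log-convolution for X, Y ~ Uniform(0,1). The density
# of ln X is e^s on s < 0 (truncated at -20, where it is negligible);
# after convolving and mapping back, the result should match -log(x).
import numpy as np

ds = 1e-2
s = np.arange(-20.0, 0.0, ds)
f_log = np.exp(s)                       # density of ln X on s < 0

f_sum = np.convolve(f_log, f_log) * ds  # density of ln X + ln Y
s_sum = 2 * s[0] + ds * np.arange(f_sum.size)

# If W = exp(S), then f_W(x) = f_S(ln x) / x.
for x in (0.1, 0.5, 0.9):
    print(x, np.interp(np.log(x), s_sum, f_sum) / x, -np.log(x))
```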

Added: To my surprise, an integral not far from what is necessary for the first approach can be expressed in terms of modified Bessel functions and modified Struve functions (whatever those are). So says Wolfram Alpha. Thus there may be a sort of closed form for your density function. There would be, anyway, if what is called $v$ in the problem were a plain exponential. I would suggest also trying the second approach.

  • The idea of taking the log is a very fine thing. If I understood it correctly, for two variables distributed according to $x\sim f$ and $y\sim g$, your idea should translate to $p(z)=\exp\left(\int \ln(f(z))\ln(g(z-y))\,dy\right)$, where $z=xy$. Unfortunately, I don't see how the general product distribution formula is derived from this. Moreover, the integral in the argument of the exponential has infinite bounds if, say, a Gaussian distribution is used. en.wikipedia.org/wiki/Product_distribution Commented Feb 2, 2018 at 16:29
3

Here's another way using convolution and the functional equation of the natural logarithm, provided $X,Y \ge 1$ almost surely.

Theorem (Convolution). Let $X,Y$ be two independent $\mathbb{R}$-valued random variables with PDFs $f_X$ and $f_Y$. Then their sum $Z := X + Y$ has PDF $f_Z = f_X \ast f_Y$.

Proof. For $z \in \mathbb{R}$ we have \begin{align*} \mathbb{P}(X + Y \le z) & = \iint\limits_{\{(x,y): x + y \le z\}} f_{X}(x) f_{Y}(y) \ \text{d}y \ \text{d}x = \int_{\mathbb{R}} \int_{-\infty}^{z - x} f_{X}(x) f_{Y}(y) \ \text{d}y \ \text{d}x \\ & = \int_{\mathbb{R}} \int_{-\infty}^{z} f_{X}(x) f_{Y}(y - x) \ \text{d}y \ \text{d}x \\ & = \int_{-\infty}^{z} \int_{\mathbb{R}} f_{X}(x) f_{Y}(y - x) \ \text{d}x \ \text{d}y = \int_{-\infty}^{z} f_Z(y) \ \text{d}y. \end{align*}

"The Logarithm Method" Since $\ln(XY) = \ln(X) + \ln(Y)$, we know that $$f_{\ln(Z)} = f_{\ln(X)} \ast f_{\ln(Y)}$$ Therefore, for $k \ge 1$ we have \begin{align*} \mathbb{P}(XY \le k) & = \mathbb{P}(\ln(XY) \le \ln(k)) = \int_{-\infty}^{\ln(k)} f_{\ln(Z)}(y) \ \text{d}y \\ & = \boxed{\int_{-\infty}^{\ln(k)} \int_{\mathbb{R}} f_{\ln(Z)}(x) f_{\ln(Y)}(y-x) \ \text{d}x \ \text{d}y.} \end{align*}

2

See the direct formula for the probability density function (pdf) here:
https://en.wikipedia.org/wiki/Distribution_of_the_product_of_two_random_variables

Here's the standard proof that uses only the change-of-variables formula from multivariate calculus; it simply flattens the arguments of the other answers above into something elementary.

Let $X$ and $Y$ be independent random variables with $\mathbb{P}(Y=0) = 0$.

Write $T = X \cdot Y$ and $U = Y$. Observe $g(T,U) = (X,Y)$ where $g(t,u) := (t/u, u)$.

Then, in terms of the jacobian matrix $\partial g = \begin{bmatrix} 1/u & -t/u^2\\ 0 & 1\end{bmatrix}$, note the joint pdf $$ f_{T,U}(t,u) = f_{X,Y}(g(t,u)) \cdot | \mathrm{det}\,\partial g | = f_X(t/u) \cdot f_Y(u) \,/\, |u|.$$

Therefore, we obtain the desired pdf as the marginal of the joint pdf, integrating out $u$: $$ f_{X\cdot Y}(t) = \int_{\mathbb{R}} f_{T,U}(t,u) \, \text{d}u = \int_{-\infty}^\infty f_X\left(\frac{t}{u}\right) \cdot f_Y(u) \, \frac{\text{d}u}{|u|}. $$

This method is generic and applies to finding the pdf of $\varphi(X,Y)$ for any $C^1$-function $\varphi:\mathbb{R}^2 \longrightarrow \mathbb{R}$.
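As a numerical sanity check of the marginal formula (a sketch of mine; the comparison uses the known fact that the product of two independent standard normals has pdf $K_0(|t|)/\pi$):

```python
# Sketch: evaluate the marginal integral for X, Y independent standard
# normals and compare with the known closed form K_0(|t|) / pi.
import numpy as np
from scipy import integrate, special, stats

def f_prod(t):
    """f_{XY}(t) via the marginal integral; split at 0, where 1/|u| blows up."""
    integrand = lambda u: stats.norm.pdf(t / u) * stats.norm.pdf(u) / abs(u)
    neg, _ = integrate.quad(integrand, -np.inf, 0.0)
    pos, _ = integrate.quad(integrand, 0.0, np.inf)
    return neg + pos

for t in (0.5, 1.0, 2.0):
    print(t, f_prod(t), special.k0(abs(t)) / np.pi)
```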

  • I think you want the absolute value of $\frac1u$ in the integrand of $f_{X\cdot Y}(t)$.
    – psie
    Commented Jun 15 at 21:09
