
I have seen a lot of posts that describe the case for just 2 random variables.

Independent random variables and function of them

Are functions of independent variables also independent?

If $X$ and $Y$ are independent then $f(X)$ and $g(Y)$ are also independent.

If $X$ and $Y$ are independent. How about $X^2$ and $Y$? And how about $f(X)$ and $g(Y)$?

Are squares of independent random variables independent?

Prove that if $X$ and $Y$ are independent, then $h(X)$ and $g(Y)$ are independent in BASIC probability -- can we use double integration? (oh, that one is actually where I asked the 2-variable elementary case, but there's no answer)

I have yet to see a post that treats the case of at least 3 random variables.


Please answer for these 2 situations:

1 - for advanced probability theory:

Let $X_i: \Omega \to \mathbb R$, $i \in I$, be independent random variables on $(\Omega, \mathscr F, \mathbb P)$, where $I$ is an arbitrary index set, I think (or maybe it has to be countable). Of course, assume $\operatorname{card}(I) \ge 3$. Give conditions on the $f_i$ under which the $f_i(X_i)$ are independent, and then show that they are. I read in the above posts that the condition is 'measurable', which I guess means Borel-measurable, i.e. $\mathscr B(\mathbb R)$-measurable as maps on $(\mathbb R, \mathscr B(\mathbb R))$, but I could've sworn I read somewhere that the condition is supposed to be 'bounded and Borel-measurable', as in bounded and $\mathscr B(\mathbb R)$-measurable.

2 - for elementary probability theory:

Let $X_i: \Omega \to \mathbb R$ be independent random variables that have pdf's. Use the elementary-probability definition of independence, namely 'independent if the joint pdf factors into the product of the marginal pdf's', or something like that. I guess the index set $I$ need not be finite, in which case I think the definition is that the joint pdf of every finite subset of the variables factors. Give conditions on the $f_i$ under which the $f_i(X_i)$ are independent. Of course, here we can't exactly say that $f_i$ is 'measurable'.

  • For 1, it is enough that each $f_i$ is a measurable function from $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ to some measurable space $(Y_i,\mathcal{G}_i)$ (which is allowed to depend on $i$). The proof is trivial in light of the definition of independence of $(X_i)_{i\in I}$ and measurability.
    – Sangchul Lee
    Commented Dec 15, 2020 at 13:41
  • @SangchulLee Thanks. 1 down, 1 to go. How about posting it as a partial answer?
    – BCLC
    Commented Dec 16, 2020 at 7:37

2 Answers


For $i\in I$ let $\sigma\left(X_{i}\right)\subseteq\mathscr{F}$ denote the $\sigma$-algebra generated by the random variable $X_{i}:\Omega\to\mathbb{R}$.

Then actually we have $\sigma\left(X_{i}\right)=X_{i}^{-1}\left(\mathscr{B}\left(\mathbb{R}\right)\right)=\left\{ X_{i}^{-1}\left(B\right)\mid B\in\mathscr{B}\left(\mathbb{R}\right)\right\} $.

The collection $(X_i)_{i\in I}$ of random variables is independent iff:

For every finite $J\subseteq I$ and every collection $\left\{ A_{i}\mid i\in J\right\} $ satisfying $\forall i\in J\left[A_{i}\in\sigma\left(X_{i}\right)\right]$ we have:

$$P\left(\bigcap_{i\in J}A_{i}\right)=\prod_{i\in J}P\left(A_{i}\right)\tag {1}$$

For instance, with $J=\{1,2,3\}$ condition $(1)$ demands $P(A_1\cap A_2\cap A_3)=P(A_1)P(A_2)P(A_3)$, and the two-element subsets $J$ give the pairwise conditions as well; note that pairwise independence alone is strictly weaker than $(1)$.

Now if $f_{i}:\mathbb{R}\to Y_{i}$ for $i\in I$, where $\left(Y_{i},\mathcal{A}_{i}\right)$ denotes a measurable space and where every $f_{i}$ is Borel-measurable in the sense that $f_{i}^{-1}\left(\mathcal{A}_{i}\right)\subseteq\mathscr{B}\left(\mathbb{R}\right)$, then to check independence we must look at the $\sigma$-algebras $\sigma\left(f_{i}\left(X_{i}\right)\right)$.

But evidently: $$\sigma\left(f_{i}\left(X_{i}\right)\right)=\left(f_{i}\circ X_{i}\right)^{-1}\left(\mathcal{A}_{i}\right)=X_{i}^{-1}\left(f_{i}^{-1}\left(\mathcal{A}_{i}\right)\right)\subseteq X_{i}^{-1}\left(\mathscr{B}\left(\mathbb{R}\right)\right)=\sigma\left(X_{i}\right)$$ So if $\left(1\right)$ is satisfied for the $\sigma\left(X_{i}\right)$ then automatically it is satisfied for the smaller $\sigma\left(f_{i}\left(X_{i}\right)\right)$.
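(Not part of the proof, just a quick numerical sanity check of this conclusion. The distributions and functions below are arbitrary choices of mine, not taken from the question; a minimal sketch, assuming NumPy is available.)

```python
import numpy as np

# Monte Carlo sanity check: if X1, X2, X3 are independent, then the
# transformed variables f1(X1), f2(X2), f3(X3) should be independent
# as well, so joint probabilities should factor (up to sampling error).
rng = np.random.default_rng(0)
n = 10**6
x1 = rng.normal(size=n)
x2 = rng.exponential(size=n)
x3 = rng.uniform(size=n)

# Three arbitrary Borel-measurable functions and one event for each.
e1 = x1**2 < 1              # {f1(X1) in B1} with f1(x) = x^2,       B1 = [0, 1)
e2 = np.sin(x2) > 0.5       # {f2(X2) in B2} with f2(x) = sin(x),    B2 = (1/2, 1]
e3 = np.floor(3 * x3) == 1  # {f3(X3) in B3} with f3(x) = floor(3x), B3 = {1}

joint = np.mean(e1 & e2 & e3)
product = e1.mean() * e2.mean() * e3.mean()
print(joint, product)  # the two estimates should agree up to Monte Carlo error
```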

2)

The concept of independence of random variables has consequences for pdf's and the calculation of moments, but its definition stands completely apart from them. From, e.g., a factorization of pdf's one can deduce independence, but facts like that must not be promoted to the status of "definition of independence". At most we can say that such a factorization is a sufficient (not necessary) condition for independence. (For instance, if $f_i$ is a constant function then $f_i(X_i)$ has no pdf at all, yet it is independent of everything.) If we ask "what is needed for the $f_i(X_i)$ to be independent?", then we must focus on the definition of independence (not on sufficient conditions). Doing so, we find that measurability of the $f_i$ is enough whenever the $X_i$ are already independent.
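To see how the sufficient-condition route plays out at the level of pdf's (a sketch under extra assumptions of my own, far stronger than measurability): suppose each $f_i:\mathbb{R}\to\mathbb{R}$ is $C^1$ and strictly increasing with inverse $g_i=f_i^{-1}$, and write $Y_i=f_i(X_i)$. If the joint pdf of $(X_1,\dots,X_n)$ factors, then the usual transformation rule gives

$$p_{Y_1,\dots,Y_n}(y_1,\dots,y_n)=p_{X_1,\dots,X_n}\bigl(g_1(y_1),\dots,g_n(y_n)\bigr)\prod_{i=1}^{n}\bigl|g_i'(y_i)\bigr|=\prod_{i=1}^{n}p_{X_i}\bigl(g_i(y_i)\bigr)\bigl|g_i'(y_i)\bigr|=\prod_{i=1}^{n}p_{Y_i}(y_i),$$

so the joint pdf of the $Y_i$ factors as well.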

BCLC edit (drhab may edit this part further): there is no 'measurable' in elementary probability, so we just say 'suitable' or 'well-behaved', in the sense that we hope whatever functions students of elementary probability encounter are suitable. Some textbooks will probably use more restrictive but simpler conditions than 'measurable' (continuity, say) as part of that book's working definition of independence.

Edit: functions that are not measurable (or not suitable, if you like) are very rare in the usual contexts. The axiom of choice is needed to prove that such functions exist at all. In that sense you could say that constructible functions (ones requiring no arbitrary choice function) are suitable.

  • 'measurability of the $f_i$' --> what does measurability mean for elementary probability? :|
    – BCLC
    Commented Dec 19, 2020 at 1:38
  • I really think it's like 'any $f_i$ s.t. $E[f_i(X_i)]$ exists'.
    – BCLC
    Commented Dec 19, 2020 at 1:38
  • Yes, it is. It works for any 'suitable function' (in terms of elementary probability). In my answer you find no further restrictions on the $f_i$. Only measurability is required.
    – drhab
    Commented Dec 24, 2020 at 13:00
  • OK, thanks. Merry Christmas, and happy new year!
    – BCLC
    Commented Dec 28, 2020 at 5:38
  • @BCLC You are very welcome. Thank you for your wishes, and I wish you a merry Christmas and a happy new year as well.
    – drhab
    Commented Dec 28, 2020 at 7:18

measure-theoretic:

The measure-theoretic answer is extremely general. It requires nothing special about the real line or Borel sets, just pure measurability. Suppose $(X_i)_{i \in I}$ is a family (countability is not needed) of random elements, where $X_i: (\Omega, \mathscr{F}) \to (A_i, \mathscr{A}_i)$, i.e. each $X_i$ takes values in some space $A_i$ and is measurable, but all the $X_i$ live on the same input space $\Omega$. No assumptions are made about the spaces $\Omega, A_i$ or the $\sigma$-algebras $\mathscr{F}, \mathscr{A}_i$.

Let a corresponding family of functions $(f_i)_{i \in I}$ be given such that for each $i$, $f_i: (A_i, \mathscr{A}_i) \to (B_i, \mathscr{B}_i)$ is measurable. That is, each $f_i$ accepts inputs from $A_i$ (the codomain of $X_i$) and takes values in some space $B_i$. (This ensures that for each $i$, $f_i(X_i): (\Omega, \mathscr{F}) \to (B_i, \mathscr{B}_i)$ makes sense and is measurable.) Again, no assumptions are made about the spaces $B_i$ or the $\sigma$-algebras $\mathscr{B}_i$.

Now suppose $(X_i)_{i \in I}$ is an independent family under some probability measure $P$ on $(\Omega, \mathscr{F})$, i.e. for any finite subset $J \subseteq I$ of indices and any measurable subsets $U_i \in \mathscr{A}_i$, $i \in J$, one has $$P(X_i \in U_i \text{ for all } i \in J) = \prod_{i \in J} P(X_i \in U_i).$$

Then we claim that $(f_i(X_i))_{i \in I}$ is also an independent family under $P$. Indeed, let $J \subseteq I$ be a finite subset of indices and let measurable subsets $V_i \in \mathscr{B}_i$, $i \in J$, be given. For each $i \in J$, by the measurability of $f_i$, one has $f_i^{-1}(V_i) \in \mathscr{A}_i$ and thus $$ P(f_i(X_i) \in V_i \text{ for all } i \in J) = P(X_i \in f^{-1}_i(V_i) \text{ for all } i \in J) $$ $$ = \prod_{i \in J} P(X_i \in f^{-1}_i(V_i)) $$ $$ = \prod_{i \in J} P(f_i(X_i) \in V_i). $$ Thus, $(f_i(X_i))_{i \in I}$ is an independent family.


elementary probability:

As for the elementary probability solution, it really depends on what your definition of independence is. In all cases, the definition only involves finite subsets of the random variables. I would say that without the definition of a $\sigma$-algebra, the proof is out of reach unless you make extra (unnecessary) assumptions. If your definition is that densities split as a product, then you must assume some conditions to ensure that $f_i(X_i)$ has a density and that you can apply the usual density transformation rules. If your functions take values in countable spaces, the above proof can be repeated essentially verbatim, replacing the arbitrary $U_i, V_i$ with singletons, i.e. looking at $P(f_i(X_i) = y_i \text{ for all } i \in J)$.
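(A sketch of that countable-space remark in code, with a toy example of my own devising; the laws and functions are arbitrary, and the arithmetic is exact, so no appeal to measurability or to Monte Carlo error is needed.)

```python
from itertools import product as cartesian
from fractions import Fraction

# Three independent discrete random variables X1, X2, X3 (laws chosen
# arbitrarily) and three arbitrary functions f1, f2, f3 of them.
px = {0: Fraction(1, 2), 1: Fraction(1, 2)}
py = {0: Fraction(1, 3), 1: Fraction(1, 3), 2: Fraction(1, 3)}
pz = {0: Fraction(1, 4), 1: Fraction(3, 4)}

f1 = lambda x: x % 2
f2 = lambda y: y * y
f3 = lambda z: 1 - z

# Joint law of (f1(X1), f2(X2), f3(X3)) under the product measure.
joint = {}
for x, y, z in cartesian(px, py, pz):
    key = (f1(x), f2(y), f3(z))
    joint[key] = joint.get(key, Fraction(0)) + px[x] * py[y] * pz[z]

# Marginal law of the k-th transformed variable.
def marginal(k):
    m = {}
    for key, p in joint.items():
        m[key[k]] = m.get(key[k], Fraction(0)) + p
    return m

m1, m2, m3 = marginal(0), marginal(1), marginal(2)

# Independence: every joint probability equals the product of the
# marginals, checked over all value combinations (singleton events).
assert all(joint.get((a, b, c), Fraction(0)) == m1[a] * m2[b] * m3[c]
           for a, b, c in cartesian(m1, m2, m3))
print("product formula verified exactly")
```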

Alternatively, since you are avoiding a measure-theoretic answer to a question whose very definition is measure-theoretic, perhaps correctness of the argument is not a requirement? Just tell your students the independence condition must hold for "all sets (with a verbal asterisk)" and then give the above proof without mentioning measurability. Or, if your students are more comfortable with topology, you could use only continuous functions and look at preimages of open sets.

  • Thanks, nullUser! So basically just suitable/well-behaved sets and suitable/well-behaved functions, for random variables that have pdf's?
    – BCLC
    Commented Jan 7, 2021 at 23:56
  • nullUser, please help here: there's a problem similar to not being able to use the term 'measurable', namely not being able to use the term 'integrable'.
    – BCLC
    Commented Apr 28, 2021 at 0:08
