
This problem is motivated by my self-study of Cinlar's "Probability and Stochastics"; it is Exercise 1.26 in Chapter 4 (on conditioning).

The exercise goes as follows: Let H be an event and let $\mathcal{F} = \sigma H = \{\emptyset, H, H^c, \Omega\}.$ Show that $\mathbb{E}_\mathcal{F}(X) = \mathbb{E}_HX$ for all $\omega \in H.$

I'm not quite clear on what I'm supposed to show: when $\omega \in H$, isn't the $\sigma$-algebra "reduced" to the event $H$? Or am I misunderstanding something here?


4 Answers


For absolute clarity, I will add to the above answers, even though this is basically a repeat of what is above. Fix a probability space $(\Omega, \mathcal{H}, P)$. For any event $H\in\mathcal{H}$, we can define the $\sigma$-field generated by $H$ to be the smallest $\sigma$-field containing $H$, i.e., $$\sigma(H) := \{\emptyset, H, H^c, \Omega\}.$$

If, in addition, $H$ is such that $P(H)>0$, we can formally define $$E(X\,\vert\, H) := \frac{1}{P(H)}\int_H X\,dP = \frac{E(X1_H)}{P(H)}.$$

Now, the question also involves another mathematical object, $E(X\,\vert\, \mathcal{F})$. By definition, for any sub-$\sigma$-field $\mathcal{F}\subseteq \mathcal{H}$, $E(X\,\vert\, \mathcal{F})$ is defined to be any $\mathcal{F}$-measurable random variable such that $$E(X1_A) \equiv \int_A X\,dP = \int_A E(X\,\vert\, \mathcal{F})\,dP\equiv E(E(X\,\vert\, \mathcal{F})1_A) \quad\quad(A \in \mathcal{F}).$$ One can prove that $E(X\,\vert\, \mathcal{F})$ exists and is almost surely unique (e.g., see page 222 of Probability: Theory and Examples by Rick Durrett).
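To make the definition of $E(X\,\vert\, H)$ concrete, here is a minimal finite-space sketch in Python. The setup (a fair die with $X(\omega)=\omega$ and $H$ the event "the roll is even") is purely illustrative and not part of the original exercise:

```python
from fractions import Fraction

# A fair six-sided die: Omega = {1,...,6}, P uniform, X(w) = w.
# (All choices here are illustrative.)
omega = range(1, 7)
p = {w: Fraction(1, 6) for w in omega}
X = {w: w for w in omega}

H = {2, 4, 6}               # the event "the roll is even"
P_H = sum(p[w] for w in H)  # P(H) = 1/2

# E(X | H) = E(X 1_H) / P(H)
E_X_given_H = sum(X[w] * p[w] for w in H) / P_H
print(E_X_given_H)  # 4, the average of {2, 4, 6}
```

Using exact `Fraction` arithmetic keeps the check free of floating-point noise.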

With the definitions in place, the question is asking to prove (for $H\in\mathcal{H}$ such that $P(H)>0$) that $$E(X\,\vert\, \mathcal{F})(\omega) = E(X\,\vert\, H)\quad \text{for almost all } \omega\in H,$$ where $\mathcal{F} := \sigma(H)$. By following the definition of $E(X\,\vert\,\mathcal{F})$, you can pretty easily show that (I can put details in if necessary) $$E(X\,\vert\,\mathcal{F}) = E(X\,\vert\, H)1_H + E(X\,\vert\, H^c)1_{H^c}\quad \text{almost surely, if } 0<P(H)<1,$$ and that $$E(X\,\vert\,\mathcal{F}) = E(X\,\vert\, H)1_H\quad \text{almost surely, if } P(H)=1.$$ In either case, we find that $$E(X\,\vert\,\mathcal{F})(\omega) = E(X\,\vert\, H)\quad \text{for almost all }\omega\in H.$$

(A technical point is that it is not necessarily true that $E(X\,\vert\, \mathcal{F})(\omega) = E(X\,\vert\, H)$ for all $\omega\in H$; it is only true for almost all $\omega\in H$, i.e., for all $\omega\in H\setminus N$, where $N\in\mathcal{H}$ is such that $P(N)=0$. The representative we chose for $E(X\,\vert\, \mathcal{F})$ actually equals $E(X\,\vert\, H)$ for all $\omega\in H$, but we could just as easily have picked a different representative that disagrees with the above on a measure-zero set.)

More generally, if $\mathcal{P} = \{H_1,H_2,\dots\}$ is a countable partition of $\Omega$ (i.e., $H_i\cap H_j=\emptyset$ for all $i\ne j$ and $\bigcup_{i=1}^{\infty} H_i = \Omega$) such that $P(H_i)>0$ for all $i\ge 1$, one can show in a similar manner that $$E(X\,\vert\,\mathcal{F})(\omega) = E(X\,\vert\, H_i)\quad\text{for almost all }\omega\in H_i,$$ where $\mathcal{F}:= \sigma(H_1,H_2,\dots)$ is the smallest $\sigma$-field containing all the elements of $\mathcal{P}$.

The result basically says that if the collection of all of the information (i.e., $\mathcal{F}$) comes from distinct pieces of information (i.e., the $H_i$), then the "best guess" for $X$ on $H_i$ (i.e., $E(X\,\vert\,\mathcal{F})(\omega)$ for $\omega\in H_i$) is the average of $X$ over $H_i$ (i.e., $E(X\,\vert\, H_i)$). While I believe the word "intuition" is wildly overused in mathematics, this result seems to be relatively "intuitive".
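As a numerical sanity check of the displayed formula for $0<P(H)<1$, the following Python sketch builds the candidate $E(X\,\vert\, H)1_H + E(X\,\vert\, H^c)1_{H^c}$ on a small uniform space (with $X(\omega)=\omega^2$; all choices are illustrative) and verifies the defining property $E(X1_A) = E\big(E(X\,\vert\,\mathcal{F})1_A\big)$ on all four sets of $\sigma(H)$:

```python
from fractions import Fraction

# Finite uniform space Omega = {1,...,6} with X(w) = w^2 (illustrative choices).
omega = range(1, 7)
p = {w: Fraction(1, 6) for w in omega}
X = {w: w * w for w in omega}

H = {1, 2}
Hc = set(omega) - H

def cond_exp_given_event(A):
    """E(X | A) = E(X 1_A) / P(A), defined when P(A) > 0."""
    return sum(X[w] * p[w] for w in A) / sum(p[w] for w in A)

cH, cHc = cond_exp_given_event(H), cond_exp_given_event(Hc)

# Candidate for E(X | sigma(H)): equal to E(X|H) on H and to E(X|H^c) on H^c.
Y = {w: (cH if w in H else cHc) for w in omega}

# Defining property: E(X 1_A) = E(Y 1_A) for every A in sigma(H).
for A in [set(), H, Hc, set(omega)]:
    assert sum(X[w] * p[w] for w in A) == sum(Y[w] * p[w] for w in A)
```

Since the check is exact (rational arithmetic), the four assertions confirm the candidate satisfies the definition on this space.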

  • Does this mean that a conditional expectation given an event $A$ is not the same as the conditional expectation given (the $\sigma$-field generated by) the indicator function of $A$? I mean, is ${E} [X \mid A] = {E} [X \mid {1}_A]$ almost surely?
    – J. Goles
    Commented Sep 17, 2018 at 17:20
  • No, this is saying that they are the same thing, because the $\sigma$-algebra generated by an event $A$ can be shown to be equal to the $\sigma$-algebra generated by the indicator function of $A$.
    – Satana
    Commented Sep 18, 2018 at 18:44

The answers from BCLC and Satana state (without proof) the property:

$$ \mathbb{E}[X|\mathcal{F}] = \mathbb{E}[X|{H}]1_H + \mathbb{E}[X|H^c]1_{H^c} \quad\quad \mathrm{a.s.} \tag 1\label 1 $$

where $0 < P(H) < 1$.

To show where $\eqref 1$ comes from, first recall that the conditional expectation must satisfy:

$$ \mathbb{E}[\mathbb{E}[X|\mathcal{F}]1_A] = \mathbb{E}[X1_A] \quad\quad(A \in \mathcal{F}). \tag 2\label 2 $$

To see that the right-hand side of $\eqref 1$ satisfies $\eqref 2$, let $\mathcal{F} = \sigma(H) = \{\varnothing, H, H^c, \Omega\}$ and observe that for any $A \in \mathcal{F}$ we have

$$ \begin{align} &\mathbb{E}[(\mathbb{E}[X|H]1_H + \mathbb{E}[X|H^c]1_{H^c})1_A]\\ &= \mathbb{E}[\mathbb{E}[X|H]1_H1_A] + \mathbb{E}[\mathbb{E}[X|H^c]1_{H^c}1_A] && \mbox{(linearity)}\\ &= \mathbb{E}[X|H]\mathbb{E}[1_H1_A] + \mathbb{E}[X|H^c]\mathbb{E}[1_{H^c}1_A] && \mbox{(extract constants)} \\ &= \mathbb{E}[X|H]\mathbb{P}[H \cap A] + \mathbb{E}[X|H^c]\mathbb{P}[H^c \cap A] && \mbox{(expectation of indicators)} \\ &= \frac{\mathbb{E}[X1_H]}{\mathbb{P}(H)}\mathbb{P}[H \cap A] + \frac{\mathbb{E}[X1_{H^c}]}{\mathbb{P}(H^c)}\mathbb{P}[H^c \cap A] && \mbox{(def of $\mathbb{E}[X|H]$)} \end{align} $$

Now consider the four possibilities of $A \in \{\varnothing, H, H^c, \Omega \}$ and conclude that $\eqref 1$ satisfies $\eqref 2$.

More generally, one can extend this approach to show that if $(B_i)_{i=1}^{n}$ are a collection of (non-zero probability) events that partition $\Omega$ and $\mathcal{F} = \sigma(B_1, \dots, B_n)$ then

$$ \mathbb{E}[X|\mathcal{F}] = \sum_{i=1}^{n}\mathbb{E}[X|B_i]1_{B_i} \quad\quad \mathrm{a.s.} $$
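The partition version can be checked numerically the same way. This Python sketch uses a uniform twelve-point space and an arbitrary three-block partition (both illustrative choices) and verifies the defining property on every union of blocks, i.e., on every element of $\sigma(B_1, \dots, B_n)$:

```python
from fractions import Fraction
from itertools import chain, combinations

# Uniform twelve-point space with X(w) = w, and a three-block partition
# (all illustrative choices).
omega = range(1, 13)
p = {w: Fraction(1, 12) for w in omega}
X = {w: w for w in omega}
blocks = [{1, 2, 3}, {4, 5, 6, 7}, {8, 9, 10, 11, 12}]

def cond_exp_given_event(A):
    """E(X | A) = E(X 1_A) / P(A), defined when P(A) > 0."""
    return sum(X[w] * p[w] for w in A) / sum(p[w] for w in A)

# Candidate for E(X | sigma(B_1,...,B_n)): sum_i E(X|B_i) 1_{B_i}.
Y = {w: cond_exp_given_event(B) for B in blocks for w in B}

# sigma(B_1,...,B_n) consists of all unions of blocks; check the defining
# property E(X 1_A) = E(Y 1_A) on each of the 2^n such unions A.
for sub in chain.from_iterable(combinations(blocks, r)
                               for r in range(len(blocks) + 1)):
    A = set().union(*sub)
    assert sum(X[w] * p[w] for w in A) == sum(Y[w] * p[w] for w in A)
```

On each block the candidate is just the block average of $X$, which is exactly what the assertions confirm.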


Here is the problem stated in full context:

Let $(\Omega, \mathcal H,\mathbb P)$ be a probability space. Let $H\in\mathcal H$ and let $\mathcal F:=\sigma(H) = \{\varnothing, H, H^c, \Omega\}$. Show that $$\mathbb E[X\mid \mathcal F](\omega) = \mathbb E[X\mid H] $$ for all $\omega\in H$.

The conditional expectation of $X$ given the event $H$ is defined in the text by $$\mathbb E[X\mid H] = \frac1{\mathbb P(H)}\int_H X\ \mathsf d\mathbb P = \frac{\mathbb E[X\mathsf 1_H]}{\mathbb P(H)}. $$ By the general definition of conditional expectation, $$\mathbb E[\mathbb E[X\mid\mathcal F]\mathsf 1_H]=\mathbb E[X\mathsf 1_H]=\mathbb E[X\mid H]\mathbb P(H). $$ Now, $\mathbb E[X\mid\mathcal F]$ is $\mathcal F$-measurable, and since $\mathcal F = \{\varnothing, H, H^c, \Omega\}$, any $\mathcal F$-measurable random variable is constant on $H$ (and on $H^c$). Hence if $\omega\in H$ then $\mathbb E[X\mid\mathcal F]\mathsf 1_H = \mathbb E[X\mid\mathcal F](\omega)\mathsf 1_H$, and taking expectations gives $$\mathbb E[X\mid\mathcal F](\omega)\mathbb P(H) = \mathbb E[\mathbb E[X\mid\mathcal F]\mathsf 1_H] = \mathbb E[X\mid H]\mathbb P(H), $$ from which (using $\mathbb P(H)>0$) we conclude that $$\mathbb E[X\mid\mathcal F](\omega) = \mathbb E[X\mid H].$$

  • How did you get the penultimate and antepenultimate equalities?
    – BCLC
    Commented Dec 8, 2015 at 16:39
  • This answer uses only the values of $E(X\mid H)$ and of $E(E(X\mid \mathcal F)\mathbf 1_H)$ to conclude. These are not sufficient, hence this answer cannot be correct. As a matter of fact, starting at "so if $\omega\in H$", one cannot follow the reasoning. (Add here some sorry considerations about how votes and acceptations work on the site.)
    – Did
    Commented Dec 14, 2018 at 15:45
  • @Did I would delete this answer since a better one has been given, but it has been accepted, so I cannot.
    – Math1000
    Commented Dec 15, 2018 at 4:55
  • Yes. The OP's page indicates "Last seen 11 hours ago", that is, two hours after my comment, hence they have read it. Now, it is really up to them to correct the situation.
    – Did
    Commented Dec 15, 2018 at 5:45

That is really strange. How about showing the context? Anyway:

If $\mathbb{E}_\mathcal{F}(X) = \mathbb{E}(X | \mathcal{F})$

and if $\mathbb{E}_HX = \mathbb{E} [X|H]$,

then $\mathbb{E}(X | \mathcal{F}) = \mathbb{E}(X | H)1_H + \mathbb{E}(X | H^c)1_{H^c}$ almost surely.

If $\omega \in H$, then

$$\mathbb{E}(X | \mathcal{F})(\omega) = \mathbb{E}(X | H)1_H(\omega) + \mathbb{E}(X | H^c)1_{H^c}(\omega)$$

$$= \mathbb{E}(X | H)(1) + \mathbb{E}(X | H^c)(0)$$

$$= \mathbb{E}(X | H)$$

But you probably already knew that.

  • It is assumed in the problem that $\omega\in H$.
    – Math1000
    Commented Nov 29, 2015 at 10:18
  • @Math1000 And sooooo.......?
    – BCLC
    Commented Nov 29, 2015 at 10:25
