
I have found a theorem (see below) in two papers, and I am trying to figure out how it could be proved. The result seems intuitive, but I am not able to prove it rigorously.

Assumptions:

Consider a continuous stochastic process $(X_t)$ together with a Brownian motion $(B_t)$. The two processes are assumed to be independent. Their natural filtrations are denoted by $(\mathcal F_t^X)$ and $(\mathcal F_t^B)$.

Theorem:

Under the above assumption, the following relation holds: $$\mathbb E\left(\exp\left(\int_0^tX_sdB_s\right) \mid \mathcal F_t^X\right)=\exp\left(\frac12\int_0^tX_s^2ds\right)$$
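As a quick numerical sanity check (not a proof), one can freeze a single path of $X$ and average over many Brownian paths; conditionally on the path, $\int_0^t X_s\,dB_s$ is a centered Gaussian with variance $\int_0^t X_s^2\,ds$. The particular path chosen for $X$ below is an arbitrary stand-in:

```python
import numpy as np

# Monte Carlo sanity check of the identity (not a proof):
# conditionally on the path of X, the Ito integral int_0^t X_s dB_s is
# centered Gaussian with variance int_0^t X_s^2 ds, so its exponential
# has mean exp(0.5 * int_0^t X_s^2 ds).
rng = np.random.default_rng(0)

t, n_steps, n_paths = 1.0, 300, 20_000
dt = t / n_steps
s = np.linspace(0.0, t, n_steps, endpoint=False)

# one fixed (hypothetical) realization of the path of X; conditioning
# on F_t^X amounts to freezing such a path
X = np.sin(2 * np.pi * s) + 0.5

# many independent Brownian increments, one row per simulated path of B
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
ito = dB @ X  # Riemann-Ito sums: sum_k X_{s_k} (B_{s_{k+1}} - B_{s_k})

lhs = np.exp(ito).mean()                 # estimates E[exp(int X dB) | X]
rhs = np.exp(0.5 * np.sum(X ** 2) * dt)  # exp(0.5 * int X^2 ds)
print(lhs, rhs)
```

The two printed numbers agree up to Monte Carlo and discretization error.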

Proof 1

I have found a proof in Piterbarg, page 1 (see here for the main article). The proof is based on computing the expectation path by path: they treat the path $(X_t)$ as known and compute the resulting integral.

However, I believe that this argument only works when the number of possible paths of $(X_t)$ is finite, using this result.

Proof 2

In Sin this result is proved (in particular) when $(X_t)$ is given by the dynamics $dX_t=X_td\tilde B_t$, where $(\tilde B_t)$ is a Brownian motion independent of $(B_t)$. They define the processes $$S_t=\exp\left(\int_0^tX_sdB_s-\frac12\int_0^tX_s^2ds\right)\quad\text{and}\quad S_t^{(n)}=S_{t\wedge\tau_n},$$ where the stopping time is given by $$\tau_n=\inf\left\{t>0:\int_0^t X_s^2ds\geq n\right\}.$$ In the paper, they use the following step, based on the Lebesgue dominated convergence theorem: $$\mathbb E(\lim_{n\to\infty}S_T^{(n)})=\lim_{n\to\infty}\mathbb E(S_T^{(n)})$$

However, I don't see directly how the dominated convergence theorem can be applied in this case.

Question:

Can someone explain to me why this result holds, or point me to a good reference book about it?

Attempt at a proof:

Following this post, one can prove the result for simple processes, that is, when $$X_s =\sum_{j=1}^n 1_{[t_{j-1},t_j)} \xi_j.$$ Now, we can take a sequence of simple processes $(X_s^n)$ converging to $(X_s)$ (with respect to the sup norm): $$(X_s^n)\to(X_s)\quad(n\to\infty)$$ Using the Itô isometry, we obtain the following convergence in probability: $$\int_0^tX_s^ndB_s\to\int_0^tX_sdB_s\quad (n\to\infty)$$ For each subsequence, we can find a subsubsequence (wlog $n_k$) such that the following convergence holds almost surely: $$\int_0^tX_s^{n_k}dB_s\to\int_0^tX_sdB_s\quad (k\to\infty)$$ If the dominated convergence theorem were applicable, then we could conclude that $$\mathbb E\left(\exp\left(\int_0^tX_sdB_s\right) \mid \mathcal F_t^X\right)=\exp\left(\frac12\int_0^tX_s^2ds\right).$$
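For the record, here is how the simple-process case works out (a sketch, assuming the $\xi_j$ are $\mathcal F_t^X$-measurable and integrable enough for all expectations below to be finite): conditionally on $\mathcal F_t^X$ the coefficients $\xi_j$ are frozen, while the Brownian increments remain independent Gaussians, since $B$ is independent of $\mathcal F_t^X$. Hence

$$\mathbb E\left(\exp\Big(\sum_{j=1}^n\xi_j(B_{t_j}-B_{t_{j-1}})\Big)\,\Big|\,\mathcal F_t^X\right)=\prod_{j=1}^n\exp\left(\frac12\xi_j^2(t_j-t_{j-1})\right)=\exp\left(\frac12\int_0^t X_s^2\,ds\right),$$

using the Gaussian moment generating function $\mathbb E\,e^{\lambda Z}=e^{\lambda^2\sigma^2/2}$ for $Z\sim N(0,\sigma^2)$.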

However, I can't find an integrable dominating function.

Remark: What I basically did above is use the following property, which is true when $X$ is a discrete-valued random variable (see here).

The conditional expectation of $Y$ given $X$ is given by $\mathbb E(Y \mid \sigma(X))=f(X)$ where $f(x)=E(Y \mid X=x)$.

This computation seems to work above when conditioning on the process $(X_t)$, even though it is not mathematically rigorous. In my view, this way of reasoning would fail in the following case, if we said that $$\mathbb E\left(\exp\left(\int_0^tX_sdB_s\right) \mid \mathcal F_t^B\right)=f((B_s)_{0<s<t})$$ where $$f((b_s)_{0<s<t})=\mathbb E\left(\exp\left(\int_0^tX_sdB_s\right) \mid(B_s)_{0<s<t}=(b_s)_{0<s<t}\right)=\mathbb E\left(\exp\left(\int_0^tX_sdb_s\right)\right)$$

Namely, we would (almost surely) be integrating pathwise against a realization of the Brownian path $(b_s)_{0<s<t}$, which is not of locally bounded variation. Thus the function $f$ is not well defined.

  • It seems that you missed at least a square in your equation, or where do you apply the Itô isometry? – user190080 (Jun 25, 2015 at 20:25)
  • I have used the Itô isometry to compute the variance of the term in the exponential, which follows a normal distribution. There are surely other ways to compute it. (Jun 25, 2015 at 20:27)
  • It seems that what you need is a definition of conditional expectation in the non-trivial case, that is, when the sigma-algebra one is conditioning on is not generated by a discrete random variable. Have you got this at your disposal? – Did (Jun 25, 2015 at 21:28)
  • ((OP stays silent. Well...)) – Did (Jun 26, 2015 at 8:34)
  • See this question: math.stackexchange.com/q/1287843 – saz (Jun 26, 2015 at 11:05)

1 Answer


First of all, note that we have to ensure that

$$\exp \left( \int_0^t X_s \, dB_s \right) \in L^1. \tag{1}$$

If your claim is true, then

$$\mathbb{E} \exp \left( \int_0^t X_s \, dB_s \right) = \mathbb{E}\exp \left( \frac{1}{2} \int_0^t X_s^2 \, ds \right),$$

i.e. $(1)$ holds if

$$\mathbb{E}\exp \left( \frac{1}{2} \int_0^t X_s^2 \, ds \right)< \infty. \tag{2}$$

Throughout the remainder of my answer, I'll assume that $(2)$ holds. The last part of the following proof closely follows the proof of Novikov's condition in René Schilling & Lothar Partzsch, Brownian Motion: An Introduction to Stochastic Processes.


We denote by $(M_t)_{t \geq 0}$ the stochastic exponential of $(X_t)_t$:

$$M_t := \mathcal{E}(X)_t := \exp \left( \int_0^t X_s \, dB_s- \frac{1}{2} \int_0^t X_s^2 \, ds \right).$$

Then the claim is equivalent to $\mathbb{E}(M_t \mid \mathcal{F}_t^X)=1$. Moreover, we set $$Y_t := \int_0^t X_s \, dB_s \qquad \text{and} \qquad \langle Y \rangle_t := \int_0^t X_s^2 \, ds.$$
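Spelling out this equivalence: since $\langle Y \rangle_t$ is $\mathcal{F}_t^X$-measurable, the factor $\exp(-\frac12 \langle Y \rangle_t)$ can be pulled out of the conditional expectation,

$$\mathbb{E}(M_t \mid \mathcal{F}_t^X) = e^{-\frac12 \langle Y \rangle_t}\, \mathbb{E}\big(e^{Y_t} \mid \mathcal{F}_t^X\big),$$

so $\mathbb{E}(M_t \mid \mathcal{F}_t^X)=1$ if and only if $\mathbb{E}(e^{Y_t} \mid \mathcal{F}_t^X) = e^{\frac12 \langle Y \rangle_t}$.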

Using e.g. the approximation procedure described in the OP and (conditional) Fatou's lemma, it is not difficult to see that $$\mathbb{E}(M_t \mid \mathcal{F}_t^X) \leq 1.$$ Consequently, it remains to show that $$\mathbb{E}(M_t \mid \mathcal{F}_t^X) \geq 1. \tag{3}$$
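One way to read this step (a sketch, assuming the approximation yields $\mathbb{E}(M_t^{(n)} \mid \mathcal{F}_t^X) = 1$ for the simple-process exponentials $M_t^{(n)}$, as in the OP): along a subsequence $(n_k)$ with $M_t^{(n_k)} \to M_t$ almost surely,

$$\mathbb{E}(M_t \mid \mathcal{F}_t^X) = \mathbb{E}\Big(\liminf_{k \to \infty} M_t^{(n_k)} \,\Big|\, \mathcal{F}_t^X\Big) \leq \liminf_{k \to \infty} \mathbb{E}\big(M_t^{(n_k)} \mid \mathcal{F}_t^X\big) = 1.$$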

For $n \in \mathbb{N}$ we define

$$\tau_n := \inf\{t \geq 0; \max\{|B_t|,|X_t|\} \geq n\}.$$

Since, by Itô's formula applied to $M_t = \exp(Y_t - \frac12 \langle Y \rangle_t)$, which gives $dM_s = M_s X_s \, dB_s$,

$$M_{t \wedge \tau_n} -1 = \int_0^{t \wedge \tau_n} M_s X_s \, dB_s$$

we find for any $F \in \mathcal{F}_t^X$

$$M_{t \wedge \tau_n}1_F-1_F = \int_0^{t \wedge \tau_n} (1_F M_s X_s) \, dB_s.$$

(Note: Because of the independence of $X$ and $B$, we can put the indicator function $1_F$ under the integral sign.) As the right-hand side is a martingale, we get

$$\mathbb{E}(M_{t \wedge \tau_n} 1_F) = \mathbb{P}(F).$$

Since this holds for any $F \in \mathcal{F}_t^X$, this shows

$$\mathbb{E}(M_{t \wedge \tau_n} \mid \mathcal{F}_t^X) = 1. \tag{4}$$

In particular,

$$\mathbb{E}(M_{t \wedge \tau_n}) = 1. \tag{5}$$

For fixed $c \in (0,1)$, we pick $p=p(c)>1$ such that $p < \frac{1}{c(2-c)}$. By Hölder's inequality (for the exponents $1/pc$ and $1/(1-pc)$), we obtain

$$\begin{align*} \mathbb{E}[\mathcal{E}(c X_{t \wedge \tau_n})^p] &\stackrel{\text{def}}{=} \mathbb{E} \exp \left( pc Y_{t \wedge \tau_n} - \frac{1}{2} pc^2 \langle Y \rangle_{t \wedge \tau_n} \right) \\ &= \mathbb{E} \left[\exp \left(pc \left(Y_{t \wedge \tau_n} - \frac{1}{2} \langle Y \rangle_{t \wedge \tau_n}\right) \right) \exp \left( \frac{1}{2} pc(1-c) \langle Y \rangle_{t \wedge \tau_n} \right) \right] \\ &\leq \underbrace{ \left[\mathbb{E}\exp \left( Y_{t \wedge \tau_n} - \frac{1}{2} \langle Y \rangle_{t \wedge \tau_n} \right) \right]^{pc}}_{[\mathbb{E}(M_{t \wedge \tau_n})]^{pc} \stackrel{(5)}{=} 1} \left[ \mathbb{E} \exp \left( \frac{1}{2} \frac{pc(1-c)}{1-pc} \langle Y \rangle_t \right) \right]^{1-pc} \\ &\leq \left[ \mathbb{E} \exp \left( \frac{1}{2} \frac{pc(1-c)}{1-pc} \langle Y \rangle_T \right) \right]^{1-pc} \end{align*}$$

for any $t \leq T$. Note that the choice $p < \frac{1}{c(2-c)}$ is exactly what ensures $\frac{pc(1-c)}{1-pc} < 1$, so the last expectation is finite by $(2)$. This shows that the $p$-th moments of the family $(\mathcal{E}(c X_{t \wedge \tau_n}))_{n \in \mathbb{N}}$ are uniformly bounded; hence the family is uniformly integrable. It follows from Vitali's convergence theorem that

$$\mathcal{E}(c X_{t \wedge \tau_n}) \to \mathcal{E}(cX_t) \qquad \text{in $L^1$} \tag{6}$$

as $n \to \infty$. By $(4)$ (applied for $\tilde{X} := cX$), this implies

$$\begin{align*} 1 &\stackrel{(4)}{=} \lim_{n \to \infty} \mathbb{E}\left[ \exp \left( cY_{t \wedge \tau_n} - \frac{1}{2} c^2 \langle Y \rangle_{t \wedge \tau_n} \right) \mid \mathcal{F}_t^X \right] \\ &\stackrel{(6)}{=} \mathbb{E}\left[ \exp \left( cY_{t} - \frac{1}{2} c^2 \langle Y \rangle_{t} \right) \mid \mathcal{F}_t^X \right] \end{align*}$$

Finally, we apply the (conditional) Hölder inequality one last time (with exponents $1/c$ and $1/(1-c)$) to obtain

$$\begin{align*} 1 &= \mathbb{E} \left[ \exp \left( c Y_t - \frac{1}{2} c \langle Y \rangle_t \right) \exp \left( \frac{1}{2} c(1-c) \langle Y \rangle_t \right) \mid \mathcal{F}_t^X \right] \\ &\leq \left[ \mathbb{E} \left( \exp \left( Y_t- \frac{1}{2} \langle Y \rangle_t \right) \mid \mathcal{F}_t^X \right) \right]^c \left[ \mathbb{E} \left( \exp \left( \frac{1}{2} \langle Y \rangle_t \right) \mid \mathcal{F}_t^X \right) \right]^{1-c} \end{align*}$$

Since, by $(2)$, the last factor is almost surely finite for $t \leq T$, we can let $c \uparrow 1$:

$$1 \leq \mathbb{E} \left[ \exp \left( Y_t - \frac{1}{2} \langle Y \rangle_t \right) \mid \mathcal{F}_t^X \right] \stackrel{\text{def}}{=} \mathbb{E}(M_t \mid \mathcal{F}_t^X).$$

  • Thank you for the answer; however, the aim of the papers claiming the above theorem is (in particular) to prove that equation (2) holds. (Jul 1, 2015 at 22:57)
  • @MotsduJour ... would have been nice if you had made this more clear in your question. – saz (Jul 2, 2015 at 5:19)
