When we integrate a function, we must make some choice about how we approximate it before we take the limit.


In principle, we can choose $\tau_i$ to be any value between $t_{i-1}$ and $t_i$. But for an ordinary Riemann integral our choice doesn't matter: for any value of the relative position $\tau \equiv \frac{\tau_i - t_{i-1}}{t_i-t_{i-1}} \in [0,1]$, we find the same value in the limit of vanishing box sizes.
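This $\tau$-independence is easy to check numerically for a smooth integrand. A minimal sketch (the integrand $f(t)=t^2$ and the grid size are arbitrary choices):

```python
import numpy as np

# For a smooth f, the Riemann sums with intermediate point tau = 0 (left),
# 0.5 (midpoint), and 1 (right) all converge to the same value,
# here int_0^1 t^2 dt = 1/3.
f = lambda t: t**2
n = 100_000
dt = 1.0 / n
left_endpoints = np.linspace(0.0, 1.0, n + 1)[:-1]   # the t_{i-1}

sums = {tau: float(np.sum(f(left_endpoints + tau * dt)) * dt)
        for tau in (0.0, 0.5, 1.0)}
print(sums)   # all three values are close to 1/3
```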


For stochastic integrals, however, this is no longer the case. For example, the Itô integral corresponds to the choice $\tau = 0$ (the left endpoint), while the Stratonovich integral corresponds to $\tau = 1/2$ (the midpoint).
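The dependence shows up already for $\int_0^1 W_t\,dW_t$. A sketch comparing the left-point (Itô) and midpoint (Stratonovich) rules on the same simulated path (the grid size is an arbitrary choice; the path is sampled on a grid of half the mesh so the midpoints are exact):

```python
import numpy as np

# Approximate int_0^1 W dW with tau = 0 (Ito) and tau = 0.5 (Stratonovich)
# on the SAME Brownian path; the two Riemann sums differ by ~1/2 in the limit.
rng = np.random.default_rng(0)
n = 200_000                     # number of subintervals of [0, 1]
dt = 1.0 / n
# Path on a grid of mesh dt/2, so the interval midpoints are grid points.
w = np.concatenate([[0.0],
                    np.cumsum(rng.normal(0.0, np.sqrt(dt / 2), 2 * n))])
left = w[0:-1:2]                # W_{t_i}
mid = w[1::2]                   # W_{(t_i + t_{i+1}) / 2}
incr = w[2::2] - w[0:-1:2]      # W_{t_{i+1}} - W_{t_i}

ito = float(np.sum(left * incr))    # -> (W_1^2 - 1) / 2
strat = float(np.sum(mid * incr))   # -> W_1^2 / 2

w1 = w[-1]
print(ito, (w1**2 - 1) / 2)     # close to each other
print(strat - ito)              # close to 1/2: half the quadratic variation
```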

I'm wondering: what feature of stochastic integrals leads to their dependence on the choice of $\tau$? (Since I'm a physicist by trade, a somewhat intuitive argument would be great.)

  • This is related to an "ordering ambiguity" in physics-speak. Commented Jan 5, 2020 at 2:04
  • @AlexArvanitakis can you elaborate, or do you have any reference?
    – jak
    Commented Jan 8, 2020 at 15:21
  • Fair enough; I suppose that comment was cryptic. In quantum-mechanical path integrals, the choice of time-slicing prescription (the QM analogue of Itô vs. Stratonovich) is related to a choice of operator ordering. The symmetric or Weyl ordering corresponds to the QM midpoint prescription, which is essentially the Stratonovich integral (in stochastic contexts). I think you can find a discussion in Hagen Kleinert's path-integral bible. I also see a discussion in M. Chaichian, A. Demichev, "Path Integrals in Physics: Volume I. Stochastic Processes and Quantum Mechanics", section 2.2.5. Commented Jan 8, 2020 at 23:35

2 Answers


First, note that the right comparison is not with the Riemann integral but rather with the Riemann-Stieltjes integral.

To be concrete, consider $\int_0^1 X_s\, dW_s$ where $W$ is Brownian motion and $X_s$ is an adapted, non-differentiable process (for example, you can take $X_s=W_s$). Now replace $X_s$ on each interval $[t_i,t_{i+1})$ by $X_{t_i}+\Delta_i$, where $\Delta_i$ is a random variable (in your case, $\Delta_i=X_{\tau_i}-X_{t_i}$, but you could make all sorts of other choices, for example the average of $X_s-X_{t_i}$ over the interval $(t_i,t_{i+1})$). Then your "Riemann sum" reads $\sum X_{t_i} (W_{t_{i+1}}-W_{t_i})+ \sum \Delta_i (W_{t_{i+1}}-W_{t_i})$. As you state correctly, Itô's theory tells us that the first sum converges to $\int_0^1 X_s\, dW_s$ (this requires a proof, and a hint that things are subtle is that it requires $X$ to be adapted).

Now, what about the second term? Let's compute the expectation of one of the summands, conditioned on the process up to time $t_i$: it is $\beta_i:=E(\Delta_i(W_{t_{i+1}}-W_{t_i})\mid {\cal F}_{t_i})$. Because $\Delta_i$ may be correlated with $W_{t_{i+1}}-W_{t_i}$, you may get a contribution of order $t_{i+1}-t_i$ (it is easy to see that the variance will be negligible). For example, if you take $\Delta_i=X_{t_{i+1}}-X_{t_i}$ and $X=W$, you get $\beta_i=t_{i+1}-t_i$, while if you take the Stratonovich choice $\Delta_i=X_{(t_i+t_{i+1})/2}-X_{t_i}$ you get $\beta_i=(t_{i+1}-t_i)/2$. In either case, note that the variance is of order $(t_{i+1}-t_i)^2$. Now sum over $i$ to see that different approximations give different answers.
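The two values of $\beta_i$ can be checked by direct Monte Carlo on a single interval of length $h$, with $X=W$ (a sketch; $h$ and the sample count are arbitrary choices):

```python
import numpy as np

# Estimate beta_i = E[Delta_i (W_{t_{i+1}} - W_{t_i})] for X = W on one
# interval of length h, for the endpoint and midpoint choices of Delta_i.
rng = np.random.default_rng(1)
h = 0.01
m = 1_000_000
# Brownian increments over the two halves of the interval.
d1 = rng.normal(0.0, np.sqrt(h / 2), m)   # W_{t_i + h/2} - W_{t_i}
d2 = rng.normal(0.0, np.sqrt(h / 2), m)   # W_{t_{i+1}} - W_{t_i + h/2}
dW = d1 + d2                              # W_{t_{i+1}} - W_{t_i}

beta_endpoint = float(np.mean(dW * dW))   # Delta_i = W_{t_{i+1}} - W_{t_i}
beta_midpoint = float(np.mean(d1 * dW))   # Delta_i = W_{(t_i+t_{i+1})/2} - W_{t_i}

print(beta_endpoint / h)   # close to 1:   beta_i ~ h
print(beta_midpoint / h)   # close to 1/2: beta_i ~ h/2
```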

At a high level, the term $\beta_i$ comes from a "second order" term: if the functions were differentiable, you would get something of order $(t_{i+1}-t_i)^2$ instead of $\beta_i$; the lack of differentiability (reflected by the fact that the increment of BM over an interval of length $t_{i+1}-t_i$ is of order $\sqrt{t_{i+1}-t_i}$) forces you to also consider quadratic terms in the expansion.
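The $\sqrt{t_{i+1}-t_i}$ scaling is exactly what keeps the quadratic terms alive: squared Brownian increments over a mesh of size $1/n$ sum to order one, while for a differentiable path the same sum vanishes. A quick sketch (mesh sizes are arbitrary choices):

```python
import numpy as np

# Squared increments of BM on [0, 1] with mesh 1/n sum to about 1 (the
# quadratic variation), independently of n; for a smooth path, e.g.
# f(t) = t^2, the analogous sum is O(1/n) and vanishes in the limit.
rng = np.random.default_rng(2)
for n in (100, 10_000, 1_000_000):
    dw = rng.normal(0.0, np.sqrt(1.0 / n), n)   # increments ~ N(0, 1/n)
    qv = float(np.sum(dw**2))
    print(n, qv)                                # stays near 1 as n grows

smooth = np.diff(np.linspace(0.0, 1.0, 1_000_001) ** 2)   # f(t) = t^2
print(float(np.sum(smooth**2)))                 # tiny: no quadratic variation
```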

  • Could we state a more quantitative version? Something like $I(\tau)=\int X_s\, dY_s+\tau\, [X,Y]$, with $\int$ the Itô integral and $[\cdot,\cdot]$ the quadratic variation?
    – RaphaelB4
    Commented Jan 9, 2020 at 18:17

To complement the excellent answer by Ofer Zeitouni, let me offer a functional analysis perspective. We want to define an integral of the following form: $\int F(W_t)dW_t=\int F(W_t)W'_tdt$, say, for a nice $F$. We can ask, generally, when is the integral $\int G(t)H(t)dt$ naturally defined? An obvious answer is: whenever $G$ belongs to some function space and $H$ belongs to the dual of that space. Then, in particular, any "reasonable" approximation scheme $\int G_n(t)H_n(t)dt$, where $G_n,H_n$ approximate $G,H$ in their corresponding spaces, will produce the same result.

Which function spaces are we talking about? Well, note that $W'_t$ only makes sense as a distribution, and $F(W_t)$ has the same regularity as $W_t$; that is, it is not smooth. Therefore, "soft" tools like Schwartz spaces will not do. A natural scale is then that of Sobolev spaces: a function $f$ is in $H^s$ if $(1+|\xi|^2)^\frac{s}{2}\hat{f}(\xi)$ is square integrable; to a very rough first approximation, this means that $f$ is (almost) $s$-Hölder continuous. The dual of $H^s$ is $H^{-s}$, and differentiation takes away $1$ from $s$. This implies that the integral $\int F(W_t)W'_t\, dt$ would be naturally defined if we had $W_t\in H^s$ for some $s\geq\frac12$. But by Wiener's construction, we control the Fourier coefficients very well: we know that $\hat{W}(n)=\text{sgn}(n)n^{-1}\zeta_{|n|}$, where the $\zeta_n$ are i.i.d. Gaussians, and so we are essentially asking for which $s$ the series $\sum_n \zeta^2_n (1+n^2)^s n^{-2}$ converges. The answer is provided by Kolmogorov's three-series theorem: it converges (almost surely) if and only if the series of expectations and of variances converge, which happens if and only if $s<\frac12$. So, the condition we are after fails just barely.

This explains why we cannot define the integral $\int F(W_t)\,dW_t$ "pathwise" just by applying Riemann, Lebesgue, or whatever integration to each realization. But the fact that the required condition is only barely missed also indicates why something like Itô integration, which additionally exploits the randomness, has a chance to work.

