$\begingroup$

Suppose that $\{X_t : Ω → S := \mathbb{R}^d, t\in T\}$ is a stochastic process with independent increments and let $\mathcal{B}_t :=\mathcal{B}_t^X$ (natural filtration) for all $t\in T$. Show, for all $0 ≤ s < t$, that $(X_t − X_s )$ is independent of $\mathcal{B}_s^X$ and then use this to show $\{X_t\}_{t\in T}$ is a Markov process with transition kernels defined by $0 ≤ s ≤ t$, $$q_{s,t}(x, A) := E [1_A (x + X_t − X_s )]\text{ for all }A\in \mathcal{S}\text{ and }x\in\mathbb{R}^d.$$

The first part, showing that $X_t-X_s$ is independent of $\mathcal{B}^X_s$, I more or less understand from a monotone class lemma.

For the part where I need to compute the transition kernel, I am not sure what I have to show. It seems to me I have to show $P(X_t\in A|X_s)=q_{s,t}(X_s,A)$; is that correct? To do this I observe \begin{align} P(X_t\in A|X_s)=E(1_{X_t\in A}|X_s)=E(1_A(X_t)|X_s)=E(1_A(X_t-X_s+X_s)|X_s) \end{align} But then I am not sure how to finish. The exercise hints that I should use that $X_t-X_s$ is independent of $\mathcal{B}^X_s$.

$\endgroup$
  • $\begingroup$ The right-hand side of $P(X_t\in A|X_s)=q_{s,t}(x,A)$ takes a value $x$, which should occur on the left-hand side, too. Note that $E[Y|\mathcal{G}]=E[Y]$ if $Y$ is independent of $\mathcal{G}$. $\endgroup$
    – user408858
    Commented Oct 27, 2021 at 2:26
  • $\begingroup$ Yes, that was a typo. And yes, $X_t-X_s$ is independent of $\mathcal{F}_s$, hence independent of $X_s$, and as a consequence $1_A(X_t-X_s)$ is independent of $X_s$. But we don't have that $X_t$ is independent of $\mathcal{F}_s$ or $X_s$, do we? So why is $1_A(X_t)$ independent of $X_s$? $\endgroup$
    – edamondo
    Commented Oct 27, 2021 at 8:34
  • $\begingroup$ I have to say, it's been a while since I studied Markov kernels. However, it seems that when you plug in $\omega$, you have $q_{s,t}(X_s(\omega),A)=E[1_A(X_s(\omega)+X_t-X_s)]$, so $1_A(X_s(\omega)+X_t-X_s)$ would still be treated as $f(X_t-X_s)$. $\endgroup$
    – user408858
    Commented Oct 29, 2021 at 19:19
  • $\begingroup$ @USER408858, maybe this was the right idea. I tried to make it more formal in my own answer to my question. $\endgroup$
    – edamondo
    Commented Oct 30, 2021 at 17:51

2 Answers

$\begingroup$

The lemma you cite states the following: if $X$ is $\mathcal{G}$-measurable and $Y$ is independent of $\mathcal{G}$, it holds that

$$E[f(X,Y)|\mathcal{G}](\omega)=E[f(X(\omega),Y)]=g(X)(\omega),\qquad\text{where }g(x):=E[f(x,Y)].$$

Note: Because of the independence of $Y$ from $\mathcal{G}$, the random variable $E[f(X,Y)|\mathcal{G}]$ loses the randomness of $Y$, so to speak.

In the situation of the question, we use $f(x,y)=1_A(x+y)$ and find

$$E[1_A(X_s+X_t-X_s)|\mathcal{F}_s](\omega)=E[1_A(X_s(\omega)+X_t-X_s)]=g(X_s)(\omega),$$

i.e.

$$E[1_A(X_s+X_t-X_s)|\mathcal{F}_s]=g(X_s)$$

Note: The notation "$\Big|_{x=X_s}$" turns the expression on the left of it into a function of $X_s$: $$E[1_A(x+X_t-X_s)]\Big|_{x=X_s(\omega)}=E[1_A(X_s(\omega)+X_t-X_s)]=g(X_s)(\omega)$$ Using the tower property and the fact that $g(X_s)$ is $\sigma(X_s)$-measurable, we find $$ \begin{align} P(X_t\in A|\mathcal{F}_s)&=E[1_A(X_s+X_t-X_s)|\mathcal{F}_s]\\ &=g(X_s)\\ &=E[g(X_s)|X_s]\\ &=E[E[1_A(X_s+X_t-X_s)|\mathcal{F}_s]|X_s]\\ &=E[1_A(X_s+X_t-X_s)|X_s]\\ &=P(X_t\in A|X_s) \end{align} $$ From this, we have also found the transition kernel:

$$q_{s,t}(X_s(\omega),A)=P(X_t\in A|X_s)(\omega)=P(X_t\in A|\mathcal{F}_s)(\omega)=E[1_A(X_s(\omega)+X_t-X_s)]$$

Note: The transition kernel is exactly the function that we used in the lemma, since $$q_{s,t}(X_s(\omega),A)=g(X_s(\omega))$$
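As a quick numerical sanity check (a simulation sketch, not part of the proof), one can take $X$ to be a standard Brownian motion, which has independent Gaussian increments, and compare the empirical conditional probability $P(X_t\in A\mid X_s\approx x_0)$ against the kernel $q_{s,t}(x_0,A)=E[1_A(x_0+X_t-X_s)]$. All concrete values ($s$, $t$, $A$, $x_0$, the conditioning bin) are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
s, t = 1.0, 2.0
A = (0.5, 1.5)  # the event is {X_t in (0.5, 1.5)}

# Brownian motion at times s and t, built from independent increments.
X_s = rng.normal(0.0, np.sqrt(s), n)
X_t = X_s + rng.normal(0.0, np.sqrt(t - s), n)

# Empirical P(X_t in A | X_s ~ x0): condition on X_s in a narrow bin.
x0 = 0.3
mask = np.abs(X_s - x0) < 0.02
lhs = np.mean((X_t[mask] > A[0]) & (X_t[mask] < A[1]))

# Kernel q_{s,t}(x0, A) = E[1_A(x0 + X_t - X_s)], estimated from fresh increments.
incr = rng.normal(0.0, np.sqrt(t - s), n)
rhs = np.mean((x0 + incr > A[0]) & (x0 + incr < A[1]))

print(abs(lhs - rhs))  # small, up to Monte Carlo and binning error
```

The two estimates agree up to Monte Carlo error, reflecting exactly the identity $P(X_t\in A|\mathcal{F}_s)=q_{s,t}(X_s,A)$ derived above.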

$\endgroup$
  • $\begingroup$ By the way, what do you think about the "reverse question": given a process $X_t$, say we construct a process with kernels $q_{st}(x,A)=E[1_A(X_t-X_s+x)]$; will this process automatically have independent increments? $\endgroup$
    – edamondo
    Commented Oct 31, 2021 at 12:37
  • $\begingroup$ I don't think this must hold true. There is a theorem stating that any set of Markov kernels defines a Markov process and vice versa, such that the marginal distributions of $X$ are well-defined via the Markov kernels (Klenke, Theorem 17.8). However, the Ornstein-Uhlenbeck process seems to be a Markov process without independent increments, according to the following source: math.stackexchange.com/a/1758122/408858 $\endgroup$
    – user408858
    Commented Oct 31, 2021 at 12:54
  • $\begingroup$ Let's share the reward. I keep my answer and you take the bounty. $\endgroup$
    – edamondo
    Commented Oct 31, 2021 at 15:12
$\begingroup$

$\newcommand{\G}{\mathcal{G}} \newcommand{\F}{\mathcal{F}_s}$

I am going to try to answer my own question using these three sources: source1, source2, source3.

The answer seems to be based on the following result, which I didn't know:

Lemma: Let $X$ be measurable with respect to $\G$ and $Y$ be independent of $\G$. Then $E[f(X,Y)|\G]=E[f(x,Y)]|_{x=X}$.

Proof: Define $g(x)=E[f(x,Y)]$. By the averaging property of conditional expectation, we need to show that $E[f(X,Y)Z]=E[g(X)Z]$ for every bounded $\G$-measurable $Z$. Also note that since $X$ and $Z$ are $\G$-measurable, $(X,Z)$ is $\G$-measurable, and since $Y$ is independent of $\G$, $(X,Z)$ is independent of $Y$. Then, using $\mu_{X,Y,Z}=\mu_{X,Z}\otimes\mu_Y$ and Fubini, $$E[f(X,Y)Z]=\int f(x,y)z \,d\mu_{X,Y,Z}=\int d\mu_{X,Z}\,z\left[\int d\mu_{Y}\,f(x,y)\right]=\int d\mu_{X,Z}\,z\,g(x)=E[g(X)Z].$$ Since $g(X)$ is $\G$-measurable, we have proven $E[f(X,Y)|\G]=g(X)=E[f(x,Y)]|_{x=X}$.
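The lemma can also be illustrated numerically (a simulation sketch, not part of the argument; the distributions of $X$ and $Y$, the set $A$, and all constants are arbitrary choices). Taking $X$ discrete makes conditioning on $\sigma(X)$ exact, so we can compare $E[f(X,Y)\mid X=x]$ with $g(x)=E[f(x,Y)]$ directly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# X generates G = sigma(X); a discrete X lets us condition on each value exactly.
X = rng.choice([0.0, 1.0, 2.0], size=n)
# Y is drawn independently of X, so Y is independent of G.
Y = rng.normal(0.0, 1.0, size=n)

def f(x, y):
    # f(x, y) = 1_A(x + y) with A = (1, inf), mirroring the indicator in the question
    return ((x + y) > 1.0).astype(float)

diffs = []
for x in [0.0, 1.0, 2.0]:
    lhs = f(x, Y[X == x]).mean()  # empirical E[f(X,Y) | X = x]
    rhs = f(x, Y).mean()          # g(x) = E[f(x,Y)], randomness of Y averaged out
    diffs.append(abs(lhs - rhs))
print(max(diffs))  # small, up to Monte Carlo error
```

On each event $\{X=x\}$ the two sides agree up to Monte Carlo error, which is exactly the statement $E[f(X,Y)|\G]=g(X)$.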

To show that we have a Markov process we need to show $P(X_t\in A|\F)=P(X_t\in A| X_s)$. However $$P(X_t\in A|\F)=E[1_A(X_t-X_s+X_s)|\F]$$ and since $X_t-X_s$ is independent of $\F$ and $X_s$ is $\F$-measurable, we can apply the lemma to find $$P(X_t\in A|\F)=E[1_A(X_t-X_s+x)]|_{x=X_s}.$$ Last, since $\sigma(X_s)\subset\F$, we can apply the tower property and the same reasoning as above to see $$P(X_t\in A|X_s)=E[1_A(X_t-X_s+X_s)|X_s]=E\big[E[1_A(X_t-X_s+X_s)|\F]\,\big|\,X_s\big]=E\big[E[1_A(X_t-X_s+x)]|_{x=X_s}\,\big|\,X_s\big]=E[1_A(X_t-X_s+x)]|_{x=X_s},$$ where the last equality comes from the fact that $h(X_s)=E[1_A(X_t-X_s+x)]|_{x=X_s}$ is clearly $\sigma(X_s)$-measurable. Then we have shown that $P(X_t\in A|\F)=P(X_t\in A| X_s)$ and moreover $P(X_t\in A|\F)=E[1_A(X_t-X_s+x)]|_{x=X_s}$. However, since $P(X_t\in A|X_s)=q_{st}(X_s,A)$, this means $q_{st}(x,A)=E[1_A(X_t-X_s+x)]$.

$\endgroup$
  • $\begingroup$ Great! I'm glad my idea helped. Your answer has some slight mistakes, I guess. I think you should have written $P(X_t\in A|X_s)=E[1_A(X_t-X_s+X_s)|X_s]=E[E[1_A(X_t-X_s+X_s)|\mathcal{F}_s]|X_s]=E[E[1_A(X_t-X_s+x)]|_{x=X_s}|X_s]=E[1_A(X_t-X_s+x)]|_{x=X_s}$ $\endgroup$
    – user408858
    Commented Oct 30, 2021 at 20:28
  • $\begingroup$ Since I get a bit confused of the "$\Big|_{x=X_s}$" notation myself, I wrote an additional answer. I hope, you can find some further inspiration from it. $\endgroup$
    – user408858
    Commented Oct 30, 2021 at 20:32
  • $\begingroup$ @USER408858, yes, thank you. It is always useful to have different perspectives. $\endgroup$
    – edamondo
    Commented Oct 31, 2021 at 10:14
