
I'm not asking for the proof, I'm just asking for a simple explanation or intuition for this condition. What does it represent ??

Theorem $1.1.$ Let $f$ and $∂f/∂y$ be continuous functions on the rectangle $R = [−a, a]×[−b, b]$. Then there is an $h ≤ a$ such that there is a unique solution to the differential equation $dy/dt = f(t, y)$ with initial condition $y(0) = 0$ for all $t ∈ (−h, h)$.

  • The condition is more commonly stated as $f$ being Lipschitz in $y$ on the rectangle.
    – Ningxin
    Commented Jun 10, 2016 at 8:31
  • Could you please explain? Commented Jun 10, 2016 at 20:17
  • Since $\frac{\partial f}{\partial y}$ is continuous, it is also bounded, say by $M$, so by the Mean Value Theorem we have $|f(t, y_1) - f(t, y_2)| \leq M |y_1 - y_2|$, which is the Lipschitz condition.
    – Weaam
    Commented Jun 10, 2016 at 20:41
  • The intuition is the following: we want to prove a theorem that is of utter importance in many applications. For a start we shall be content with a proof that covers most situations occurring in practice. It is then no big deal to assume that $f$ and ${\partial f\over\partial y}$ are continuous in a neighborhood of the initial point; problematic situations can be dealt with later. Commented Jun 11, 2016 at 18:31

1 Answer


Geometrically, we are given a direction field $f(t,y)$, and we seek an integral curve $y(t)$ (over a possibly smaller region) that is tangent to the slope lines defined by $f(t,y)$.

What happens if $f$ is not continuous?

  • Consider the ODE $\frac{dy}{dt} = f(t,y) = \frac{1}{t+1}$ with initial condition $y(0) = 0$. The right-hand side is not continuous at $t=-1$ (see the vertical line in the figure: no integral curve tangent to the slope lines can exist there).

  • However, over the interval $t \in [-0.5, 0.5]$ it is continuous, and therefore bounded: $|f(t,y)| \leq \frac{1}{-0.5+1}=2$. Geometrically, $f(t,y) = \frac{dy}{dt}$ is the slope of any solution passing through $(0,0)$, so the solution is contained in the gray area (the cone $|y| \leq 2|t|$); a numerical check follows the figure.

[Figure: direction field of $dy/dt = 1/(t+1)$; the solution through $(0,0)$ stays inside the gray cone $|y| \leq 2|t|$ on $[-0.5, 0.5]$.]
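To make the bound concrete, here is a minimal numerical sketch (assuming NumPy is available; the tolerance and grid are arbitrary) checking the exact solution $y(t) = \ln(1+t)$ of this ODE against the cone $|y| \leq 2|t|$:

```python
import numpy as np

# Exact solution of dy/dt = 1/(t+1) with y(0) = 0 is y(t) = ln(1 + t).
t = np.linspace(-0.5, 0.5, 1001)
y = np.log(1.0 + t)

# On [-0.5, 0.5] the slope is bounded: |f(t, y)| = 1/(t+1) <= 1/0.5 = 2,
# so the solution must stay inside the cone |y(t)| <= 2|t| (the gray area).
M = 2.0
assert np.all(np.abs(y) <= M * np.abs(t) + 1e-12)
print("solution stays inside the cone |y| <= 2|t|")
```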

What happens if $\frac{\partial f}{\partial y}$ is infinite at some point?

  • Consider the ODE $$f(t,y) = \sqrt{|y|}, \qquad\qquad y(0) = 0$$

  • Although $f$ is continuous, the partial derivative $\frac{\partial f}{\partial y}$ is not: it blows up (is infinite) at $y = 0$. At least two curves pass through $(0,0)$, namely $$y(t) \equiv 0 \qquad \text{and} \qquad y(t) = \begin{cases}\phantom{-}\frac{t^2}{4} \quad \mbox{if } t\geq 0\\ -\frac{t^2}{4} \quad \mbox{otherwise.}\end{cases}$$

    Intuitively, the erratic change of the slope lines of $f(t,y)$ around $y = 0$ (due to $\frac{\partial f}{\partial y}$ being infinite there) allows distinct integral curves, which followed different slope lines elsewhere, to merge at $y = 0$. In fact, for any $C \geq 0$, $y(t) = \begin{cases}0 \qquad &\mbox{for}\ t< C\\\frac{(t-C)^2}{4} \qquad &\mbox{for} \ t\geq C \end{cases}$ is also a solution, as are other combinations (a numerical check follows the figure).

[Figure: slope field of $dy/dt = \sqrt{|y|}$ with several distinct solutions passing through $(0,0)$.]
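As a sanity check, the following sketch (assuming NumPy; the grid and the chosen values of $C$ are arbitrary) verifies by direct differentiation that every member of the family above satisfies both the ODE and the initial condition $y(0)=0$:

```python
import numpy as np

def f(y):
    # Right-hand side of the ODE: dy/dt = sqrt(|y|).
    return np.sqrt(np.abs(y))

def y_C(t, C):
    # Candidate solution: stays at 0 until t = C, then leaves along (t - C)^2 / 4.
    return np.where(t < C, 0.0, (t - C) ** 2 / 4.0)

def dy_C(t, C):
    # Its derivative, computed by hand: 0 for t < C, (t - C)/2 for t >= C.
    return np.where(t < C, 0.0, (t - C) / 2.0)

t = np.linspace(0.0, 3.0, 601)
for C in [0.0, 0.5, 1.0, 2.0]:
    # Every member of the family satisfies the same ODE and the same
    # initial condition y(0) = 0 -- uniqueness fails at y = 0.
    assert np.allclose(dy_C(t, C), f(y_C(t, C)))
    assert y_C(np.array([0.0]), C)[0] == 0.0
print("infinitely many solutions through (0, 0)")
```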

Why is assuming $f$ and $\frac{\partial f}{\partial y}$ continuous (more than) enough?

Fast forward to the main idea

  • If two curves $y_n, y_m$ are close, we assume the slope lines $f(t, y_n), f(t, y_m)$ are proportionally close. This is the key assumption (Lipschitz continuity); it is implied when $f$ and $\frac{\partial f}{\partial y}$ are continuous, but it does not rule out nice nondifferentiable functions such as $f(t, y) = |y|$.

  • If $Ay_n, Ay_m$ are the curves that follow the slope lines $f(t,y_n), f(t, y_m)$, then one can show they end up strictly closer to each other than $y_n, y_m$ were. That is, the pair $y_n, y_m$ gets contracted into $Ay_n, Ay_m$.

  • Any such contraction mapping $A$ has a unique fixed point $y$ with $Ay = y$, i.e. $y(t) = y_0 + \int_{t_0}^t f(\tau,y(\tau))\,d\tau$, which is exactly $\frac{dy}{dt} = f(t,y)$ with $y(t_0) = y_0$.

How to follow the slope line, successively?

  • Picard's idea is to find curves $y_i$ tangent to the slope lines defined by $f(t,y_{i-1})$ of the previous iterate, i.e. $\frac{d y_i}{dt} = f(t, y_{i-1})$, so that we may define: $$y_i(t) = Ay_{i-1}(t) := y(t_0) + \int_{t_0}^t f(\tau,y_{i-1}(\tau))\, d\tau$$

  • Consider the ODE $\frac{dy}{dt}=f(t,y) = y$, with $y_0 = 1$. The successive approximations are easily calculated from the Picard mapping: $y_1 = Ay_0 = 1 + t$, $y_2 = 1 + (t + t^2/2)$, ..., i.e. $y_n = \sum_{i=0}^n t^i/i!$ for all $n$, which are the partial sums of the series expansion of $e^t$ (a symbolic sketch follows the figure).

[Figure: the Picard iterates $1$, $1+t$, $1+t+t^2/2,\dots$ approaching $e^t$.]
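The iterates above can be reproduced mechanically; a minimal symbolic sketch, assuming SymPy is installed:

```python
import sympy as sp

t, tau = sp.symbols("t tau")

def picard_step(y_prev):
    # (A y)(t) = y0 + integral_{t0}^{t} f(tau, y(tau)) dtau, with f(t, y) = y, t0 = 0, y0 = 1.
    return 1 + sp.integrate(y_prev.subs(t, tau), (tau, 0, t))

y = sp.Integer(1)              # y_0(t) = 1
for n in range(1, 5):
    y = sp.expand(picard_step(y))
    print(f"y_{n}(t) =", y)    # 1 + t, 1 + t + t^2/2, ... : partial sums of e^t
```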

Key technical tools: distance and fixed points

  • We can measure the distance between two functions (continuous on a bounded interval) by the time at which they differ the most, i.e. via the sup norm, which results in a very well behaved (complete) space of functions: $$d(y_a, y_b) = \| y_a - y_b\|_\infty := \max_{t \in [a,b]} | y_a(t) - y_b(t)|$$

    (In the figure, we'd be comparing the functions at $t = 4$)

  • If a map $A: M \to M$ always contracts the distance between any two points (functions in this nice, complete space), i.e. $d(Ay_a, Ay_b) \leq \lambda d(y_a, y_b)$ for some $0\leq\lambda < 1$, then a unique fixed point $y^* = Ay^*$ exists (the Banach Fixed Point theorem).
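As a side illustration (not part of the argument above), here is a tiny sketch of the Banach fixed point theorem acting on plain numbers rather than functions: the map $x \mapsto \cos x$ sends $[0,1]$ into itself and is a contraction there, since $|\sin x| \leq \sin 1 < 1$.

```python
import math

# Toy contraction: A(x) = cos(x) on [0, 1], with Lipschitz constant sin(1) < 1.
def A(x):
    return math.cos(x)

x, z = 0.0, 1.0          # two different starting points
for _ in range(100):
    x, z = A(x), A(z)    # iterate the contraction on both

# Both iterations converge to the same unique fixed point x* = cos(x*) ~ 0.739085.
print(x, z)
print(abs(x - z) < 1e-12)
```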

In a bit more detail

  • Since $f$ and $\frac{\partial f}{\partial y}$ are continuous on the rectangle, $\left|\frac{\partial f}{\partial y}\right|\leq K$ is bounded there, and the mean value theorem gives $|f(t,y_a) - f(t, y_b)| \leq K |y_a - y_b|$ for any two functions $y_a, y_b$ (Lipschitz continuity).

  • Hence, $$d(Ay_a, Ay_b) \leq \int_{t_0}^t | f(\tau, y_a(\tau)) - f(\tau,y_b(\tau)) |\, d\tau\leq K \int_{t_0}^t |y_a(\tau) - y_b(\tau)|\,d\tau \leq K \alpha\, d(y_a, y_b)$$ which is a contraction (on a possibly smaller interval) when $K\alpha < 1$; a numerical check follows this list.

  • Therefore, the Picard map $A$ defined above is indeed a contraction, giving the unique fixed point $y = Ay$, i.e. $y(t) = y_0 + \int_{t_0}^t f(\tau, y(\tau))\,d\tau$.
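To see the contraction estimate concretely, here is a minimal discretized sketch (assuming NumPy); the particular choice $f(t,y)=y$ (so $K=1$), the interval $[0, 0.5]$, and the two sample curves are illustrative assumptions only:

```python
import numpy as np

# Discretized check of d(A y_a, A y_b) <= K * alpha * d(y_a, y_b)
# for the Picard map with f(t, y) = y (so K = 1) on [0, alpha], alpha = 0.5.
alpha, K = 0.5, 1.0
t = np.linspace(0.0, alpha, 2001)

def picard(y_vals):
    # (A y)(t) = y0 + integral_0^t f(tau, y(tau)) dtau, approximated by a
    # cumulative trapezoidal rule on the grid; y0 = 1.
    dt = t[1] - t[0]
    integral = np.concatenate(([0.0], np.cumsum((y_vals[1:] + y_vals[:-1]) / 2.0 * dt)))
    return 1.0 + integral

def d(u, v):
    # Sup-norm distance between two grid functions.
    return np.max(np.abs(u - v))

y_a = np.ones_like(t)          # y_a(t) = 1
y_b = 1.0 + np.sin(3.0 * t)    # another continuous curve with y_b(0) = 1

lhs = d(picard(y_a), picard(y_b))
rhs = K * alpha * d(y_a, y_b)
print(lhs, "<=", rhs, lhs <= rhs)   # the Picard map shrinks the sup-norm distance
```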

*Lots of details are omitted: justifying that $Ay_a$ stays within the rectangle, the reduced size of the interval, the weaker Lipschitz assumption, among others. For an intuitive but rigorous introduction, see Ordinary Differential Equations by V. I. Arnol'd.

  • This is an incredibly motivating and incisive answer. +1 Commented Jun 11, 2016 at 1:25
  • 1
    $\begingroup$ I find it unlikely that the banach fixed point theorem will be understood here, but from a geometric standpoint, if you look at a small enough $\epsilon$ range of values, you certainly see that the range of values for the function to take get "squeezed," and that gives a sense for why it's unique as well, I think. $\endgroup$ Commented Jun 11, 2016 at 1:29
  • Thank you. Please post an answer, I'd love to learn another take on uniqueness. Regards!
    – Weaam
    Commented Jun 11, 2016 at 1:57
  • 1
    $\begingroup$ An absolutely amazing answer. +1 from me as well. $\endgroup$ Commented Jun 11, 2016 at 2:09
  • 1
    $\begingroup$ Thank you very much, I got it (erratic transition of slope) has clarified the intuition nicely. Thanks again :) $\endgroup$ Commented Jun 11, 2016 at 18:55
