
I am trying to build up some intuition for what the various notions of analysis mean. My intuition of continuity is that it means the graph of the function is 'connected', but not necessarily in any nice way. This led me to ask the question: 'Given a function $f: \mathbb{R} \to \mathbb{R}$ and a point $x_0 \in \mathbb{R}$, what assumptions must we impose on $f$ so that we can say $f$ is 'approximately a line' in some neighbourhood of $x_0$?'

To apply Taylor's theorem, we need $f$ to be twice continuously differentiable at $x_0$. However, I am only interested in some tiny neighbourhood of $x_0$, so can we weaken these assumptions?

We certainly can't weaken them all the way down to continuity at $x_0$; for example, the function $g: \mathbb{R} \to \mathbb{R}$ defined by $$g(x) = \begin{cases} x \sin(\frac{1}{x}), & x \neq 0 \\ 0, & x=0 \end{cases}$$ is continuous at $x_0 = 0$, but certainly cannot be approximated by a line there.

Differentiability at $x_0$ also won't do; consider $h: \mathbb{R} \to \mathbb{R}$ given by $$h(x) = \begin{cases} x^2 \sin(\frac{1}{x}), & x \neq 0 \\ 0, & x=0. \end{cases}$$ This is differentiable and hence continuous at $x_0 = 0$. Indeed it is differentiable everywhere, but it is not continuously differentiable at $0$; its derivative is $h': \mathbb{R} \to \mathbb{R}$ given by $$h'(x) = \begin{cases} 2x\sin(\frac{1}{x}) - \cos(\frac{1}{x}), & x \neq 0 \\ 0, & x=0 \end{cases}$$ whose limit at $0$ doesn't exist (along $x_n = \frac{1}{n\pi}$ the sine term vanishes and $h'(x_n) = -\cos(n\pi) = (-1)^{n+1}$ oscillates between $1$ and $-1$).

Going one step further, we consider $k: \mathbb{R} \to \mathbb{R}$ given by $$k(x) = \begin{cases} x^3 \sin(\frac{1}{x}), & x \neq 0 \\ 0, & x=0. \end{cases}$$ Again this is differentiable everywhere, but now it is continuously differentiable at $0$ (and so everywhere); its derivative is $k': \mathbb{R} \to \mathbb{R}$ given by $$k'(x) = \begin{cases} 3x^2\sin(\frac{1}{x}) - x\cos(\frac{1}{x}), & x \neq 0 \\ 0, & x=0 \end{cases}$$ which is continuous everywhere. However, this function obviously still can't be approximated by a straight line near $0$ (since it takes on positive, negative, and zero values arbitrarily near the origin, and a straight line through the origin can only take on positive, negative, $\textit{or}$ zero values near the origin).
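As a numerical confirmation of the parenthetical above (a minimal sketch, assuming NumPy is available; the sample points $x_n = 1/(n\pi)$ are my illustrative choice):

```python
import numpy as np

# Sample h'(x) = 2x sin(1/x) - cos(1/x) along x_n = 1/(n*pi),
# where sin(1/x_n) = 0 and cos(1/x_n) = (-1)^n.
n = np.arange(1, 9)
x = 1.0 / (n * np.pi)
h_prime = 2 * x * np.sin(1 / x) - np.cos(1 / x)
print(h_prime)  # alternates +1, -1, +1, ... so h' has no limit at 0
```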

At first I thought the problem was that the derivatives of $g, h$, and $k$ were unbounded around $x_0 = 0$, i.e. that their 'slopes' grow without bound; however, this is only the case for $g$: the derivative of $h$ near $0$ stays roughly between $-1$ and $1$, and the derivative of $k$ near $0$ is approximately $0$. Then I thought the problem was that the derivatives of $g, h,$ and $k$ have arbitrarily many sign changes near $x_0 = 0$, but this also can't be right.

So now I'm just confused. Do we in fact need all the assumptions required by Taylor's theorem, i.e. twice continuously differentiable at $x_0$, to guarantee that a function is 'like a line' at a point? Or do we need even more hypotheses than Taylor's theorem, in that we have to assume information about $f$ not just at $x_0$ but in some neighbourhood of it?

Thanks in advance. I also wouldn't mind comments explaining your intuitions about related notions in analysis.

tl;dr: what restrictions must we place on a function $f: \mathbb{R} \to \mathbb{R}$ so that when we 'zoom in close enough', it looks like a straight line?


2 Answers


There are several related concepts at work here. The standard term in calculus/analysis is the linearization $L(x)=f^{\prime}(x_{0})(x-x_{0}) + f(x_{0})$ of a function $f(x)$ at a point $x_{0}$, also sometimes referred to as the linear approximation to $f(x)$ at $x_{0}$. This can obviously be formed whenever $f(x)$ is differentiable at $x_{0}$, but to actually consider it an "approximation" to $f(x)$ in any meaningful sense, we want the behavior of $f(x)$ and $L(x)$ to match, at least under some assumptions. This is where the second derivative comes in: if $f^{\prime\prime}= 0$ on an interval, then $f^{\prime}$ is constant there, and so $f(x) = L(x)$. If $f^{\prime\prime}$ is not identically zero but is bounded, then we can estimate how far apart $f(x)$ and $L(x)$ can get. If $f^{\prime\prime}$ is unbounded or undefined at some points in that interval, then $f(x)$ can diverge wildly from $L(x)$ and our approximation won't be any good. We can still form the "linear approximation"; it just might not actually approximate our function very well.
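To make the boundedness remark quantitative, here is the standard estimate (Taylor's theorem with the Lagrange remainder, a textbook fact added here for reference): if $f$ is twice differentiable on an interval $I$ around $x_0$ and $|f^{\prime\prime}| \le M$ on $I$, then for every $x \in I$ there is a $\xi$ between $x_0$ and $x$ with $$|f(x) - L(x)| = \frac{|f^{\prime\prime}(\xi)|}{2}(x-x_0)^2 \le \frac{M}{2}(x-x_0)^2.$$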

If you only need to consider a tiny neighborhood around $x_{0}$, the proper way to weaken your conditions is to require $f^{\prime\prime}$ to be defined only on that tiny neighborhood. Unfortunately, you will probably run into lots of problems if $f^{\prime\prime}(x_{0})$ is itself undefined.

I personally wouldn't ever say $f$ "is approximately a line", just that we can approximate $f(x)$ by a linear function in that neighborhood. The closest exact definition for saying two functions are "approximately the same" has to do with the concept of the functions being equal "almost everywhere", so saying $f$ "is approximately a line" might be confused to mean that you have a linear function $L(x)$ where $f(x) = L(x)$ for almost all values of $x$, with a few exceptions. Saying that one function approximates another function basically says that they are close, essentially that $|f(x)-L(x)| < A$ for some $A$. Then we can say it is a "good" approximation if $A$ is "small" in some sense.
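There is also a standard way to make "looks like a line when you zoom in at $x_0$" precise, added here for context (this is just the usual definition of differentiability, not something extra): $f$ is differentiable at $x_0$ if and only if there is a linear function $L(x) = f(x_0) + m(x-x_0)$ with $$\lim_{x \to x_0} \frac{|f(x) - L(x)|}{|x - x_0|} = 0,$$ in which case necessarily $m = f^{\prime}(x_0)$. In this first-order sense even $h$ and $k$ from the question are well approximated by a line at $0$; the second derivative only controls how quickly the error shrinks.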

About $k(x)$: "this function [...] can't be approximated by a straight line near 0 (since it takes on positive, negative, and zero values arbitrarily near the origin, and a straight line through the origin can only take on positive, negative, or zero values)". We judge an approximation by the amount of error, i.e. the distance between $f(x)$ and $L(x)$ (where $L(x)$ is the linear approximation). If this difference is small, we don't care whether one is positive and the other is negative.
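Concretely (a short check using only the definition of $k$ from the question): the linearization of $k$ at $0$ is the zero line $L(x) = 0$, since $k(0) = 0$ and $k^{\prime}(0) = 0$, and the error satisfies $$|k(x) - L(x)| = |x^3 \sin(1/x)| \le |x|^3,$$ which shrinks far faster than $|x|$ as $x \to 0$. So despite the sign changes, the zero line approximates $k$ extremely well near the origin.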

  • Thanks, I hadn't considered the more statistical approach to approximating a function by a line (involving errors). Although I'm not sure how '$f$ is approximately a line on a neighbourhood' and '$f$ can be approximated by a line on a neighbourhood' are any different semantically, i.e. 'is approximately' = 'can be approximated by' (at least to me). Would you mind elaborating on the distinction for me? Commented Apr 15, 2018 at 17:37

Let us first define what it means for a curve $c\colon I\to \mathbb R^2$, defined on an interval $I$ of $\mathbb R$, to possess a tangent at $\tau\in I$: there exist a one-dimensional subspace $W$ of $\mathbb R^2$ and an $\epsilon>0$ such that for all $s,t\in I\cap(\tau-\epsilon,\tau+\epsilon)$ with $s\leq\tau\leq t$ and $s\neq t$ we have $$c(s)\neq c(t) \quad\text{and}\quad \lim_{s,t\to\tau}\mathbb R\cdot\big(c(t)-c(s)\big)=W.$$ (Here $\mathbb R\cdot v$ denotes the line through the origin spanned by $v$, so the limit is a limit of secant directions.)
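As a sanity check on this definition, here is a simple worked example (an addition for concreteness, not part of the original answer): for the parabola $c(t) = (t, t^2)$ at $\tau = 0$, the secant directions are $$\mathbb R\cdot\big(c(t)-c(s)\big) = \mathbb R\cdot\big(t-s,\ t^2-s^2\big) = \mathbb R\cdot\big(1,\ t+s\big) \longrightarrow \mathbb R\cdot(1,0) \quad\text{as } s,t\to 0,$$ so the tangent at $\tau = 0$ is the horizontal axis, as expected.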

For those $\tau$ where $c$ doesn't have a tangent but the left- and right-sided tangents both exist, we will say that $c$ has a tip at $\tau$ if the two one-sided tangents are equal, and an edge at $\tau$ if they are not equal.

And now strange things may happen. Consider a slight modification of one of your examples, namely $c(t)=(t,\ 5+t^2\sin(\pi/(2t)))$ for $t\neq0$ and $c(0)=(0,5)$. It is an immersion at $\tau=0$ and has a tangent at $\tau=0$, but $\tau=0$ is a limit point of parameters $t$ for which $c$ has a tip. (Consider the sequences $s_n=1/(2n+2)$ and $t_n=1/(2n+1)$.) This doesn't fit with the intuitive notion of "smoothness".
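To unpack the hint about the sequences (one reading of the example, assuming the modulation is $\sin(\pi/(2t))$): at $t_n = 1/(2n+1)$ we have $\sin(\pi(2n+1)/2) = (-1)^n$, while at $s_n = 1/(2n+2)$ we have $\sin(\pi(n+1)) = 0$, so $$c(t_n) = \big(t_n,\ 5 + (-1)^n t_n^2\big), \qquad c(s_n) = \big(s_n,\ 5\big);$$ the curve crosses the line $y=5$ at the points $s_n$ while overshooting alternately above and below it at the points $t_n$, and these crossings accumulate at $\tau = 0$.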

Therefore let us define

$c$ is smooth at $\tau$ if there is a neighborhood $U$ of $\tau$ such that $c$ possesses a tangent $c(t)+W_t$ for all $t\in U$ and $\lim_{t\to\tau}W_t=W_{\tau}$,

i.e., in a neighborhood of $\tau$ the tangents exist and they converge to the tangent at $c(\tau)$.

PS. The following beast reveals even more weirdness. Define $\phi(t)=e^{-1/t}$ for positive $t$ and $\phi(t)=0$ otherwise. Define the injective path $$c(t)=(\phi(t)+2\phi(-t))\cdot(\cos(1/|t|),\sin(1/|t|))$$ for $t\neq0$ and $c(0)=(0,0)$. Now $c$ has neither a tangent nor a tip nor an edge at $t=0$. Please feel free to plot this curve.
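Since the answer invites plotting, here is a minimal sketch that draws the curve near $t=0$ (an addition; it assumes NumPy and Matplotlib are available):

```python
import numpy as np
import matplotlib.pyplot as plt

def phi(t):
    # phi(t) = exp(-1/t) for t > 0, and 0 otherwise.
    # The inner where() avoids evaluating exp(-1/t) at t <= 0.
    return np.where(t > 0, np.exp(-1.0 / np.where(t > 0, t, 1.0)), 0.0)

# Sample t on both sides of 0, excluding 0 itself (where c(0) = (0, 0)).
t = np.concatenate([np.linspace(-0.5, -1e-4, 20000),
                    np.linspace(1e-4, 0.5, 20000)])
r = phi(t) + 2 * phi(-t)              # radius: a different spiral on each side
x = r * np.cos(1 / np.abs(t))
y = r * np.sin(1 / np.abs(t))

plt.plot(x, y, linewidth=0.5)
plt.scatter([0], [0], s=10)           # the point c(0)
plt.gca().set_aspect("equal")
plt.show()
```

The plot shows the curve winding around the origin infinitely often as $t \to 0$ from either side, which is why no limit of secant directions can settle down.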

  • Thanks for your reply. The maths here is a tiny bit over my head, but I will do my best to understand it. I presume it's differential geometry? What does the $\cdot$ which appears next to the limit mean? Also, can you verify that $W$ is a one-dimensional subspace of $\mathbb{R}$ and not of $\mathbb{R}^2$; the only 1D subspaces of $\mathbb{R}$ I can think of (in the linear algebra sense) are $\mathbb{R}$ itself as well as incomplete sets such as $\mathbb{Q}, n\mathbb{Z}$, etc., but the $\lim$ seems to suggest you want a complete set. Commented Apr 15, 2018 at 21:45
  • Sorry, $W$ is supposed to be a line, that is, a one-dimensional subspace of $\mathbb R^2$; I've corrected it. The dot means the multiplication sign, hence $\mathbb R\cdot(c(t)-c(s))$ describes the line through $c(t)-c(s)$. Commented Apr 16, 2018 at 10:14
