
The Taylor expansion itself can be derived from mean value theorems, which are themselves valid over the entire domain of the function. Then why doesn't the Taylor series converge over the entire domain? I understand the part about the convergence of infinite series and the various tests, but I seem to be missing something very fundamental here.

  • To complicate things even further, note that even if the Taylor series converges for all real $x$, it may happen that it never (except at $x=0$) converges to the original function! Commented Jun 2, 2015 at 9:28
  • Oh, so the reason we use the expansion of $\sin(x)$ to find limits and so on is because we are sure that the Taylor series converges to the actual $\sin(x)$? Commented Jun 2, 2015 at 9:31
  • @user2277550: Yes. :) For non-zero functions that "vanish to infinite order", such as $f(x) = e^{-1/x^{2}}$, $f(0) = 0$, the Taylor polynomials are (ironically?) too good an approximation, in that no matter how many terms are taken, the Taylor polynomial is identically $0$ and the remainder is $f(x)$. Commented Jun 2, 2015 at 11:18

4 Answers


It is rather unfortunate that in calc II we teach Taylor series at the same time as we teach Taylor polynomials, all the while not doing a very good job of stressing the distinction between an infinite series and a finite sum. In the process we seem to teach students that Taylor series are a much more powerful tool than they are, and that Taylor polynomials are a much less powerful tool than they are.

The main idea is really the finite Taylor polynomial. The Taylor series is just a limit of these polynomials, as the degree tends to infinity. The Taylor polynomial is an approximation to the function based on its value and a certain number of derivatives at a certain point. The remainder formulae tell us about the error in this approximation. In particular they tell us that higher degree polynomials provide better local approximations to a function.

But the issue is that "local" has a different meaning for different values of the degree $n$. To explain what I mean by that, you can look at the Lagrange remainder, which tells you that the error in an approximation of degree $n$ is

$$\frac{f^{(n+1)}(\xi_n) (x-x_0)^{n+1}}{(n+1)!}$$

where $\xi_n$ is between $x_0$ and $x$. Dividing this by the corresponding error with degree $n-1$, namely $f^{(n)}(\xi_{n-1})(x-x_0)^{n}/n!$, the ratio of the error with degree $n$ to the error with degree $n-1$ is*

$$\frac{f^{(n+1)}(\xi_n) (x-x_0)}{f^{(n)}(\xi_{n-1}) (n+1)}$$

where similarly $\xi_{n-1}$ is between $x_0$ and $x$. So the degree-$n$ error is smaller wherever this quantity is less than $1$ in absolute value. From this form we can see that if $|x-x_0|<(n+1) \left | \frac{f^{(n)}(\xi_{n-1})}{f^{(n+1)}(\xi_n)} \right |$, then the error with degree $n$ is less than that with degree $n-1$. That looks good because of the growing factor of $n+1$, but what if $\frac{f^{(n)}(\xi_{n-1})}{f^{(n+1)}(\xi_n)}$ goes to zero, perhaps very fast? Then the interval on which the error is reduced by adding another term will shrink, potentially contracting down to just the point of expansion as $n$ tends to infinity.

In other words, if the derivatives near $x_0$ (not necessarily just at $x_0$, because we have to evaluate the derivatives at these $\xi$'s) grow way too fast with $n$, then Taylor expansion has no hope of being successful, even when the derivatives needed exist and are continuous.

*Here I am technically assuming that $f^{(n)}(\xi_{n-1}) \neq 0$. This assumption can fail even when $f$ is not a polynomial; consider $f=\sin$, $x_0=\pi/2$, $n=1$. But this is a "degenerate" situation in some sense.
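
A minimal numerical sketch of this shrinking-interval effect (an illustration added here, not part of the original answer; the example function $\ln(1+x)$ is my choice): inside the radius of convergence the error drops as the degree grows, while just outside it every added term makes the error worse.

```python
import math

def taylor_log1p(x, n):
    """Degree-n Taylor polynomial of ln(1+x) about 0:
    sum_{k=1}^{n} (-1)^(k+1) x^k / k."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

for x in (0.5, 1.5):
    print(f"x = {x}  (the radius of convergence is 1)")
    for n in (2, 5, 10, 20, 40):
        err = abs(math.log(1 + x) - taylor_log1p(x, n))
        print(f"  degree {n:2d}: |error| = {err:.3e}")
```

At $x=0.5$ the error shrinks rapidly with $n$; at $x=1.5$ it grows without bound, exactly the behavior the error ratio above predicts.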

  • A perfectly reasonable reason to add the discussion of Taylor polynomials where it belongs. In particular, we ought to teach Taylor polynomials in first-semester calculus to support the discussion of tangent lines and graphing with calculus. Commented Dec 11, 2015 at 15:14

One of the intuitive reasons is that, when working with functions of a real argument, we do not care about their singularities in the complex plane. However, these singularities do restrict the domain of convergence.

The simplest example is the function $$f(x)=\frac{1}{1+x^2},$$ which can be expanded into a Taylor series around $x=0$. The radius of convergence of this series is equal to $1$ because of the poles of $f$ at $x=\pm i$ in the complex plane.
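
A quick numerical check (a sketch added here, not from the original answer) makes the hidden radius visible on the real line: the partial sums $\sum_{k=0}^{n}(-1)^k x^{2k}$ settle down for $|x|<1$ and blow up for $|x|>1$, even though $f$ itself is perfectly smooth at both points.

```python
def f(x):
    return 1 / (1 + x * x)

def partial_sum(x, n):
    """Taylor series of 1/(1+x^2) about 0, summed through the x^(2n) term."""
    return sum((-1) ** k * x ** (2 * k) for k in range(n + 1))

for x in (0.9, 1.1):
    print(f"x = {x}: f(x) = {f(x):.6f}")
    for n in (5, 10, 20, 40):
        print(f"  partial sum through x^{2 * n}: {partial_sum(x, n):+.6f}")
```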

  • The book Visual Complex Analysis by Needham has a nice discussion of this on pages 64-70. – eipi10 Commented Jun 3, 2015 at 0:50
  • With the little I know of complex analysis, that was always a confusing explanation for me. OK, there are poles in the complex plane, but why is there then no convergence for real arguments? It sounds like: everything looks fine in the reals, but we don't have convergence, so let's introduce some unlucky extension of the reals which would explain this fact (of course, the complex plane is a very lucky extension, and the motivation for introducing it was different). – SBF Commented Jun 3, 2015 at 6:24
  • @Ilya Maybe it's a matter of taste rather than confusion. I would rephrase your comment as "Why do the unlucky extra structures we don't even care about influence the 'more local' properties?" The short answer to this is: it's a fact of life. The decay rate of the Fourier coefficients of a periodic function feels its smoothness properties, complex poles of the scattering matrix determine the bound state energies, non-rotating elliptic planet trajectories are consequences of the hidden $SO(4)$ symmetry of the Coulomb potential, and so on. Commented Jun 3, 2015 at 8:40
  • @Ilya Philosophically, adding extra structure to understand something has turned out to be a very fruitful approach. A recent instance of it is the proof of the Poincaré conjecture (introducing a metric to prove a topological statement). My favorite elementary example is Kelly's proof of the Sylvester-Gallai theorem (using distance to establish a fact about incidence properties): en.wikipedia.org/wiki/… Commented Jun 3, 2015 at 8:40
  • For sure I agree with both of your statements. It's just that it always felt like: somebody could come up with another extension of the reals where the Taylor series for $\frac{1}{1+x^2}$ does not converge on the complement of the real line at all; what would we do in that case? Of course, such an extension either does not exist, or the radius criterion won't work there, but still. – SBF Commented Jun 3, 2015 at 8:46

The Taylor expansion is not derived from the mean value theorem. The Taylor expansion is a definition, valid for any function which is infinitely differentiable at a point. The various forms of the remainder are derived in different ways. By definition, the remainder function is $R(x)=f(x) - T(x)$, where $f$ is the given function and $T$ is its Taylor expansion (about some point). There is no a priori guarantee that the Taylor expansion gives any value remotely related to the value of the function, other than at the point of expansion. The various forms of the remainder may be used to obtain bounds on the error, which in turn can be used to show convergence on some region, but there is no a priori reason to expect well-behaved bounds. You simply have a formula for the remainder, and the remainder may still be large. Of course, the existence of well-known examples where the Taylor expansion really does not approximate the function at all shows that it is hopeless to expect miracles.
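
For concreteness, here is a small sketch (an illustration added here, not part of the answer) of the standard example raised in the comments below: $f(0)=0$ and $f(x)=e^{-1/x^2}$ otherwise. Every derivative of $f$ at $0$ vanishes, so every Taylor polynomial $T$ is identically zero and the remainder $R(x)=f(x)-T(x)$ is all of $f(x)$. The mechanism is that $f(x)/x^n \to 0$ as $x \to 0$ for every $n$, which forces all the Taylor coefficients at $0$ to vanish.

```python
import math

def f(x):
    """Flat function: f(0) = 0, f(x) = exp(-1/x^2) otherwise; smooth everywhere."""
    return 0.0 if x == 0 else math.exp(-1.0 / (x * x))

# f vanishes at 0 faster than any power of x, so every Taylor coefficient
# at 0 is zero and the remainder f(x) - T_n(x) is just f(x) itself.
x = 0.05
for n in (1, 5, 20):
    print(f"f(x)/x^{n} at x = {x}: {f(x) / x ** n:.3e}")
print(f"f({x}) = {f(x):.3e}  (nonzero, yet every Taylor polynomial is 0)")
```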

  • The Lagrange remainder form of Taylor's theorem is more or less the mean value theorem applied $n$ times, where $n$ is the number of derivatives used in the approximation and we assume $n+1$ derivatives are continuous. – Ian Commented Jun 2, 2015 at 11:21
  • Can you give some of these well-known examples? – Mitch Commented Jun 2, 2015 at 12:04
  • @Mitch The commonly used examples are based on the observation that the function which is $0$ at $0$ and $e^{-1/x^2}$ elsewhere is smooth. Roughly speaking, this function is not zero, but it grows more slowly than any polynomial in a neighborhood of zero. With a small modification, you get an extremely useful object called a bump function, which is a nonzero smooth function with compact support. These are used quite frequently in analysis to approximate a function by a smoother function. – Ian Commented Jun 2, 2015 at 12:05

Since there are few examples here, I provide one:

$$f(x)=\frac{x}{\cosh(x)}-x e^{-x}-\frac{1}{10}\,\frac{x^6}{\sinh^4(x)},\qquad g(x)=e^{-x}\,\frac{\sqrt{1+4x^2}+\sqrt{1+x^2}-2}{2},$$

$$r(x)=10\left(f(x)-g(x)+\frac{1}{12}\,x^2 e^{-2x^2}\right),\qquad t(x)=-0.73\left(\sqrt{x^2+2^{-4.85}}-2^{-4.85/2}\right),$$

$$p(x)=-1.5\left(\sqrt{x^2+2^{-6}}-\frac{1}{8}\right),\qquad k(x)=r(x)+t(x)-p(x).$$

I think we should not separate physics and mathematics here, much as they were not separated in the 18th century: the potential-wall picture explains why the Taylor expansion around $x=0$ fails, or converges extremely slowly, on the entire positive axis for $k(x)$, $x>0$.

