
I am aware that $e$, the base of natural logarithms, can be defined as:

$$e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$

Recently, I found out that

$$\lim_{n\to\infty}\left(1-\frac{1}{n}\right)^n = e^{-1}$$

How does that work? Surely the minus sign makes no difference, as when $n$ is large, $\frac{1}{n}$ is very small?

I'm not asking for just any rigorous method of proving this. I've been told one: as $n$ goes to infinity, $\left(1+\frac{1}{n}\right)^n\left(1-\frac{1}{n}\right)^n = 1$, so the latter limit must be the reciprocal of $e$. However, I still don't understand why changing such a tiny component of the limit changes the output so drastically. Does anyone have a remotely intuitive explanation of this concept?
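For concreteness, here is a quick numerical check of the two limits (a minimal Python sketch added for illustration, not part of the original question):

```python
# Minimal numerical check of the two limits in the question.
import math

for n in (10, 1_000, 100_000):
    plus = (1 + 1/n) ** n
    minus = (1 - 1/n) ** n
    print(f"n = {n:>6}:  (1+1/n)^n = {plus:.6f},  (1-1/n)^n = {minus:.6f}")

print(f"e = {math.e:.6f},  1/e = {1/math.e:.6f}")
```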

  • Compare with the limits $\lim_{n\to\infty}(\frac1n\times n)=1$ and $\lim_{n\to\infty}((-\frac1n)\times n)=-1$, even though $\lim_{n\to\infty}\frac1n=0=\lim_{n\to\infty}-\frac1n$. Commented Apr 5, 2016 at 13:22
  • I think your comparison is fundamentally different. Here, changing the plus sign into a minus subtracts a small quantity, whereas in your example changing the sign changes the sign of the whole expression.
    – Bluefire
    Commented Apr 5, 2016 at 13:25
  • No, it is not fundamentally different, but my example uses multiplication where your example uses exponentiation. These operations correspond via the logarithm: taking the $n$-th power of a number multiplies its logarithm by $n$. Now the logarithm of $1+\frac1n$ is close to $\frac1n$ while the logarithm of $1-\frac1n$ is close to $-\frac1n$. So under the logarithm, the difference between adding or subtracting $\frac1n$ results in a sign change (and the magnitude of the change then gets multiplied by $n$). Your example is really very close to mine. Commented Apr 5, 2016 at 14:54
  • It's because 0.999 to a large power gets very small, while 1.001 to a large power gets very large. Commented Apr 6, 2016 at 15:29
  • Maybe you should look for a rigorous way of proving it, considering that your intuition is wrong.
    – anomaly
    Commented Apr 6, 2016 at 20:58

17 Answers

Answer (score 48)

The point is that $1-\frac{1}{n}$ is less than $1$, so raising it to a large power will make it even less-er than $1$. On the other hand, $1+\frac{1}{n}$ is bigger than $1$, so raising it to a large power will make it even bigger than $1$.


There's been some brouhaha in the comments about this answer. I should probably add that $(1-\epsilon(n))^n$ could go to any value less than or equal to $1$, and in particular it could go to $1$, as $n$ increases. It so happens that in this example, it goes to something less than $1$. The reason it goes to something less than $1$ is because we end up raising something sufficiently less than $1$ to a sufficiently high power.
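A tiny numerical illustration of that addendum (a sketch; the shrinking rates are chosen only for contrast): how fast $\epsilon(n)$ shrinks decides where $(1-\epsilon(n))^n$ ends up.

```python
# How fast epsilon(n) shrinks decides where (1 - epsilon(n))^n ends up.
n = 10_000
print((1 - 1/n**0.5) ** n)   # epsilon shrinks slowly:   essentially 0
print((1 - 1/n) ** n)        # epsilon = 1/n:            about 1/e = 0.3679...
print((1 - 1/n**2) ** n)     # epsilon shrinks quickly:  about 1
```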

  • @xyz But the question wasn't why $e \neq 1$, it was how one limit can go to $e$ while the other goes to $e^{-1}$.
    – fgp
    Commented Apr 6, 2016 at 14:21
  • This argument is wrong. It will not work with $(1+\frac1{n^2})^n$.
    – user65203
    Commented Apr 6, 2016 at 17:34
  • @YvesDaoust It's not supposed to be general. The question asked for intuition; I provided it. No-one expects intuition to hold generally. Commented Apr 6, 2016 at 17:49
  • @GlenO: if I recall right, $1=1^{-1}$. But okay, the argument is not wrong, it is just dangerous.
    – user65203
    Commented Apr 6, 2016 at 18:48
  • @Yves Daoust is making an excellent point here. You can't just say that because it's less than one/greater than one, that's why the limit remains like that. In Yves' example, that limit is 1: wolframalpha.com/input/….
    – Paul Raff
    Commented Apr 7, 2016 at 4:06

Answer (score 40)

Perhaps think about the binomial expansions of $\left(1 + \frac{1}{n}\right)^n$ and $\left(1 - \frac{1}{n}\right)^n$. The sums of the first two terms are $1 + n \cdot \frac{1}{n} = 2$ and $1 - n \cdot \frac{1}{n} = 0$ respectively. After that, the terms in $\left(1 + \frac{1}{n}\right)^n$ are all positive, whereas the terms in $\left(1 - \frac{1}{n}\right)^n$ alternate in sign. So $\left(1 + \frac{1}{n}\right)^n$ stays at least $2$, while the alternating tail keeps $\left(1 - \frac{1}{n}\right)^n$ below $1$, and the two limits end up far apart.
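A small numerical sketch of this (the choice $n=20$ is arbitrary): the $k$-th binomial term of $\left(1\pm\frac1n\right)^n$ is $\binom{n}{k}\left(\pm\frac1n\right)^k$, and the signs behave as described.

```python
# Binomial terms of (1 + 1/n)^n and (1 - 1/n)^n for a moderate n.
from math import comb

n = 20
plus_terms  = [comb(n, k) * ( 1/n) ** k for k in range(n + 1)]
minus_terms = [comb(n, k) * (-1/n) ** k for k in range(n + 1)]

print(plus_terms[:4])                     # 1, 1, then all further terms positive
print(minus_terms[:4])                    # 1, -1, then alternating signs
print(sum(plus_terms), sum(minus_terms))  # roughly e and 1/e already at n = 20
```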

  • While intuitive, how does this actually justify $(1-\frac1n)^n=e^{-1}$? It only seems to approximate. Commented Apr 6, 2016 at 21:02
  • I was trying to respond to the final paragraph of the question. The OP says "I still don't understand why changing such a tiny component of the limit changes the output so drastically," so I wanted to focus on that.
    – Aidan Sims
    Commented Apr 6, 2016 at 22:22
  • Ok, I guess, but Patrick Steven's answer just seemed much more intuitive and easier to understand in my opinion. But of course, that's only my opinion. (And try "@simpleart" next time.) Commented Apr 6, 2016 at 22:31
  • @simpleart Yes, his is a good answer. Perhaps putting the two of them together is better again, since applying the binomial-expansion idea also indicates why $(1-\frac{1}{n^2})^n$ behaves differently (in response to Yves Daoust's comment below). In particular, with a little work it shows why, as the OP says, $(1 - \frac{1}{n})^n(1 + \frac{1}{n})^n = (1 - \frac{1}{n^2})^n \to 1$, answering your original comment.
    – Aidan Sims
    Commented Apr 6, 2016 at 22:43

Answer (score 21)

The true issue is not why changing the sign has such an impact; it is why adding a quantity as small as $\dfrac1n$ changes the result so drastically.

$$1^n\to1\text{ vs. }\left(1+\frac1n\right)^n\to e$$

(and very similarly $\left(1-\frac1n\right)^n\to e^{-1}$.)

The reason is that the tiny quantity gets multiplied over and over so that it becomes a finite quantity,

$$\left(1+\frac1n\right)\left(1+\frac1n\right)\left(1+\frac1n\right)\cdots=1+\frac1n+\frac1n+\frac1n+\cdots>2$$ as there are $n$ terms $\dfrac1n$ (and yet others). The "tininess" of the terms is well compensated by the amount of terms.

Also notice that the "asymmetry" shown by $e-1\ne 1-e^{-1}$ is just due to the non-linearity of the exponential.
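A small numerical sketch of that compensation effect (illustration only): the $n$ first-order terms alone already contribute $n\cdot\frac1n=1$, which is why the product does not collapse back to $1$.

```python
# The n first-order terms contribute n * (1/n) = 1; the cross terms add more.
for n in (10, 1_000, 100_000):
    first_order = 1 + n * (1/n)    # "1 + 1/n + 1/n + ... + 1/n"
    full = (1 + 1/n) ** n          # cross terms push this up toward e
    print(n, first_order, round(full, 6))
```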

Answer (score 17)

Actually you have the stronger true statement that $$ \lim_{x\to0}(1+x)^{1/x}=e, $$ of which the initial limit you stated is a special case, approaching $0$ through the sequence of values $x=\frac1n$ for $n\in\Bbb N_{>0}$. But if you approach $0$ through the sequence of values $x=-\frac1n$ for $n\in\Bbb N_{>1}$, the same limit gives you $$ \lim_{n\to\infty}\left(1-\frac1n\right)^{-n}=e. $$ Now it is a simple matter to see that the sequence of inverses $\left(1-\frac1n\right)^n$ tends to the inverse value $e^{-1}$.

It should be noted that while the first limit above is more general than the limits for $n\to\infty$, it is also less elementary to define, since it involves powers of positive real numbers with arbitrary real exponents. Introducing such powers requires studying exponential functions in the first place, which is why the limit statement with integer exponents is often preferred. But the more general limit statement is true, and can serve to give intuition for the relation between the two limits in your question.
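A numerical sketch of that two-sided limit, sampling $x=\pm\frac1n$ (added for illustration):

```python
# (1 + x)^(1/x) approaches e whether x -> 0 through positive or negative values.
import math

for n in (10, 1_000, 100_000):
    x_pos, x_neg = 1/n, -1/n
    print((1 + x_pos) ** (1 / x_pos), (1 + x_neg) ** (1 / x_neg))

print(math.e)
```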

  • That's actually quite a nice way of showing it. Thanks!
    – Bluefire
    Commented Apr 8, 2016 at 10:05

Answer (score 7)

Here's a useful generalization of the limit definition of $e$ from the OP:

Given

$$e = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$$

Raise both sides to the power of $x$:

$$e^x = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}$$

This is trivially true when $x = 0$, as both sides evaluate to 1

Assume $x \ne 0$ and let $m = nx$, i.e., $n = \frac{m}{x}$

As $n\to\infty, \, m\to\infty$

$$e^x = \lim_{m\to\infty}\left(1+\frac{x}{m}\right)^{m}$$

[Note the similarity between this and the first limit in Marc van Leeuwen's answer].

In particular, for $x = -1$

$$e^{-1} = \lim_{m\to\infty}\left(1+\frac{-1}{m}\right)^{m}$$

or

$$e^{-1} = \lim_{m\to\infty}\left(1-\frac{1}{m}\right)^{m}$$


As mathmandan notes in the comments, my derivation is flawed when $x < 0$, since then $n\to\infty \implies m\to -\infty$ :oops:

I'll try to justify my result for negative $x$ without relying on the fact that $e^x$ is an entire function and that there is only a single infinity in the (extended) complex plane.

For any finite $u, v \ge 0$, we have

$$e^u = \lim_{n\to\infty}\left(1+\frac{u}{n}\right)^{n}$$

and

$$e^v = \lim_{n\to\infty}\left(1+\frac{v}{n}\right)^{n}$$

Therefore,

$$e^{u-v} = \lim_{n\to\infty}\left(\frac{1+\frac{u}{n}}{1+\frac{v}{n}}\right)^{n}$$

Let $m = n + v$. For any (finite) $v$ as $n\to\infty, \, m\to\infty$.

$$\begin{align}\\ \frac{1+\frac{u}{n}}{1+\frac{v}{n}} & = \frac{n + u}{n + v}\\ & = \frac{m + u - v}{m}\\ & = 1 + \frac{u - v}{m}\\ \end{align}$$

Thus $$\begin{align}\\ e^{u-v} & = \lim_{n\to\infty}\left(1+\frac{u - v}{m}\right)^{n}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{m-v}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{-v}\\ & = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m\\ \end{align}$$

since

$$\lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^{-v} = 1$$

In other words,

$$e^{u-v} = \lim_{m\to\infty}\left(1+\frac{u - v}{m}\right)^m$$

is valid for any finite $u, v \ge 0$. And since we can write any finite $x$ as $u-v$ with $u, v \ge 0$, we have shown that

$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^{n}$$

is valid for any finite $x$, so

$$e^{-x} = \lim_{n\to\infty}\left(1+\frac{-x}{n}\right)^{n}$$
And hence $$e^{-x} = \lim_{n\to\infty}\left(1-\frac{x}{n}\right)^{n}$$
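A numerical look at the key quotient step above, with hypothetical values $u=1.5$, $v=4$ chosen only for illustration:

```python
# ((1 + u/n) / (1 + v/n))^n should settle to e^(u - v).
import math

u, v = 1.5, 4.0
for n in (100, 10_000, 1_000_000):
    print(n, ((1 + u/n) / (1 + v/n)) ** n)

print(math.exp(u - v))
```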

  • Wait...if $x = -1$, then as $n \to \infty$, we'll have $m \to -\infty$, surely?
    – mathmandan
    Commented Apr 6, 2016 at 14:07
  • @mathmandan: :oops: Good point! Let me think about that for a minute or two... :) The short answer is that my expression for $e^x$ is actually valid for the whole complex plane, and if we map the complex plane to the Riemann sphere there's only a single point at infinity, and the distinction between $+\infty$ and $-\infty$ from the real number line evaporates. But that's not elementary, and I haven't actually proved that.
    – PM 2Ring
    Commented Apr 6, 2016 at 14:11
  • @mathmandan We can see that when we actually try to do the limit, we run into $\lim_{n\to\infty}\ln(1+\frac1n)$; when $\infty$ is replaced with $-\infty$ there, the limit stays the same. In fact, just as PM 2Ring noted, it works for $\lim_{|n|\to\infty}$, where all infinities converge in the complex plane. Commented Apr 6, 2016 at 22:35
  • @mathmandan: I've added some new material to my answer which I believe addresses your concerns.
    – PM 2Ring
    Commented Apr 7, 2016 at 9:05
  • It is not necessary to keep $n$ an integer, but then you need to define the general power $a^{x}$, and then the definition for $e$ will not be simple. Keeping $n$ an integer makes the definition very simple, and yet it is powerful enough to show that $(1+(x/n))^{n}$ tends to $e^{x}$ when $x$ is rational. For irrational $x$ it is possible to show that the limit exists, and it can be taken as the definition of $e^{x}$.
    – Paramanand Singh
    Commented Apr 9, 2016 at 16:53

Answer (score 5)

Intuitively,

$$1-\frac1n\approx\frac1{1+\dfrac1n}.$$

For example,

$$0.99999=\frac1{1.000010000100001\cdots}\approx \frac1{1.00001}$$ so that

$$0.99999^{100000}=0.36787760177\dots=\frac1{2.7182954100\cdots}\\ \approx \frac1{1.00001^{100000}}=\frac1{2.7182682371\cdots}$$


More rigorously,

$$\left(1+\frac1n\right)^n\left(1-\frac1n\right)^n=\left(1-\frac1{n^2}\right)^n=\sqrt[n]{\left(1-\frac1{n^2}\right)^{n^2}}.$$

As the expression under the radical goes to a finite positive value, the $n^{th}$ root goes to one.

You can also use the binomial formula,

$$\left(1-\frac1{n^2}\right)^n=1-\frac n{n^2}+\frac{(n)_2}{2n^4}-\frac{(n)_3}{3!n^6}\cdots\to1$$ ($(n)_k$ is the falling factorial).
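A numerical check of the $n$th-root step (a sketch added for illustration):

```python
# (1 - 1/n^2)^(n^2) settles near 1/e, while its n-th root drifts up to 1.
for n in (10, 100, 1_000):
    inner = (1 - 1/n**2) ** (n**2)
    print(n, inner, inner ** (1/n))
```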

  • I've already established that :) I was asking for a more intuitive proof...
    – Bluefire
    Commented Apr 5, 2016 at 11:06
  • @Bluefire: that's right. My first formula gives the best intuition. But for a full understanding, you need to show that the truncation (terms neglected in the approximation of the inverse) makes no difference. I have added a numerical example to please you.
    – user65203
    Commented Apr 5, 2016 at 11:23

Answer (score 2)

Logarithms were invented (discovered?) by John Napier before there was calculus and before a generalized theory of exponents. It was found that you can compute approximate logs to a base very close to $1$, for example by repeated squaring and other short-cuts. For example, if $b=1.000001$ then $b^x$ is about $2$ when $x=693147$, so $\log_{1.000001} 2$ is about $693147$. The motivation for logs was calculation: replacing $\times$ with $+$ by using tables of logs and anti-logs.

Logs to base $1+1/n$ could be "normalized" by dividing them by $n$. (So the normalized $\log 2$ is always about $0.693147$.) The number $e=2.71828\ldots$ kept showing up as the approximate "normalized" anti-log of $1$ in base $1+1/n$ for any large $n$, which is because $2.71828\ldots=\lim_{n\to \infty}(1+1/n)^n.$
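A quick check of those numbers (a sketch, taking $b=1.000001$, i.e. $n=10^6$):

```python
# log base 1.000001 of 2 is about 693147; dividing by n = 10^6 gives roughly ln(2).
import math

b, n = 1.000001, 10**6
x = math.log(2, b)
print(x, x / n, math.log(2))
```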

It was found that if $f(x)=\int_1^x (1/t)\;dt$ for $x>0$, then $f(ab)=f(a)+f(b)$; that is, $f$ is a logarithm. Its base $b$, which satisfies $1=\log_b b=\int_1^b (1/t)\;dt$, is that same number, so we could take the definition of $e$ to be the solution $x$ of $f(x)=1$.

We can take any other equation or formula that has $e$ for its unique solution as the definition of $e$.

(But defining it as the unique $x>1$ such that $\int_{-\infty}^{\infty} x^{-t^2}\;dt=\sqrt \pi$ is not advisable even though it's a true equation.)

  • Thanks for adding the historical perspective. Of course, once we have calculus and the product rule, it's easy to see the logarithmic nature of $\int_1^x (1/t)\;dt$. The product rule says $d(uv) = udv + vdu$. Dividing through by $uv$ and taking integrals yields $\int\frac{d(uv)}{uv} = \int\frac{dv}{v} + \int\frac{du}{u}$.
    – PM 2Ring
    Commented Apr 6, 2016 at 11:27
  • @PM 2Ring. Nice way to show it. Commented Apr 6, 2016 at 12:00

Answer (score 2)

If you take $(1-1/n)^n$, the result is obviously less than $1$ for every $n$. So it is absolutely obvious that there cannot be a limit greater than $1$.

If you take $(1+1/n)^n$, your argument "$1/n$ gets smaller and smaller" still applies. So if that sequence has a limit of $e \approx 2.718$, which is greater than $1$, then it is a priori unreasonable to argue "$-1/n$ gets smaller and smaller" as evidence that this sequence cannot have a limit significantly less than $1$.

Answer (score 2)

Consider that $e \times \frac{1}{e} = 1$. In the product below, the leftover $\frac{1}{n^2}$ is too small to matter:

$$ \lim_{n \to \infty} \left( 1 - \frac{1}{n} \right)^n \cdot \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n = \lim_{n \to \infty} \left( 1 - \frac{1}{n^2} \right)^n = 1$$
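A numerical companion to this, and to the Bernoulli bound in the comment below (a sketch added for illustration):

```python
# Bernoulli squeeze: 1 - 1/n < (1 - 1/n^2)^n < 1, so the middle term tends to 1.
for n in (10, 1_000, 100_000):
    print(1 - 1/n, (1 - 1/n**2) ** n, 1.0)
```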

  • You should give an argument for the last step. With Bernoulli's inequality, we have $$1 - n\cdot \frac{1}{n^2} < \biggl( 1 - \frac{1}{n^2}\biggr)^n < 1$$ for $n > 1$. Commented Apr 6, 2016 at 19:02

Answer (score 2)

If you know that $$\lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^{n} = e\tag{1}$$ (and some books / authors prefer to define symbol $e$ via above equation) then it is a matter of simple algebra of limits to show that $$\lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^{n} = \frac{1}{e}\tag{2}$$ Clearly we have \begin{align} L &= \lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^{n}\notag\\ &= \lim_{n \to \infty}\left(\frac{n - 1}{n}\right)^{n}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(\dfrac{n}{n - 1}\right)^{n}}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(1 + \dfrac{1}{n - 1}\right)^{n}}\notag\\ &= \lim_{n \to \infty}\dfrac{1}{\left(1 + \dfrac{1}{n - 1}\right)^{n - 1}\cdot\dfrac{n}{n - 1}}\notag\\ &= \frac{1}{e\cdot 1}\notag\\ &= \frac{1}{e} \end{align} Using similar algebraic simplification it is possible to prove that $$\lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^{n} = e^{x}\tag{3}$$ where $x$ is a rational number. For irrational/complex values of $x$ the relation $(3)$ holds, but it is not possible to establish it just using algebra of limits and equation $(1)$.

Regarding the intuition about "changing a tiny component in the limit expression changes the output", I think it is better to visualize this simple example. We have $$\lim_{n \to \infty}n^{2}\cdot\frac{1}{n^{2}} = 1$$ and if we replace the second factor $1/n^{2}$ with $(1/n^{2} + 1/n)$ then we have $$\lim_{n \to \infty}n^{2}\left(\frac{1}{n^{2}} + \frac{1}{n}\right) = \lim_{n \to \infty} (1 + n) = \infty$$ The reason is very simple. The change of $1/n$ which you see here is small, but due to the multiplication by the other factor $n^{2}$ its impact is magnified significantly, resulting in an infinite limit. You always calculate the limit of the full expression (and only when you are lucky can you evaluate the limit of a complicated expression in terms of limits of its sub-expressions via the algebra of limits), and any change in a sub-expression may or may not impact the whole expression in a significant way, depending upon the other parts of the expression.
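A quick numerical look at the algebraic rewriting at the start of this answer (illustration only): the factor $\left(1+\frac1{n-1}\right)^{n-1}$ supplies the $e$, and the leftover factor $\frac{n}{n-1}$ washes out.

```python
# Both columns are the same quantity (1 - 1/n)^n written two ways; both tend to 1/e.
import math

for n in (10, 1_000, 100_000):
    lhs = (1 - 1/n) ** n
    rhs = 1 / ((1 + 1/(n - 1)) ** (n - 1) * (n / (n - 1)))
    print(n, lhs, rhs)

print(1 / math.e)
```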

  • If you define real exponentiation with positive base by squeezing using rational exponents, then to get (3) for real $x$ you just have to squeeze using rational $x$. Though personally I prefer going straight to complex exponentiation via the Taylor series from the beginning. =)
    – user21820
    Commented Apr 27, 2016 at 6:42

Answer (score 1)

Let me try. Consider $$A=\left(1+\frac{a}{n}\right)^n$$ Take logarithms: $$\log(A)=n\log\left(1+\frac a n\right)$$ Now, when $x$ is small, by Taylor, $$\log(1+x)=x-\frac{x^2}{2}+O\left(x^3\right)$$ Replace $x$ by $\frac{a}{n}$. This makes $$\log(A)=n \left(\frac{a}{n}-\frac{a^2}{2 n^2}+O\left(\frac{1}{n^3}\right)\right)=a-\frac{a^2}{2 n}+O\left(\frac{1}{n^2}\right)$$ Now, $$A=e^{\log(A)}=e^a-\frac{a^2 e^a}{2 n}+O\left(\frac{1}{n^2}\right)$$ Now, play with $a$.
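A numerical check of the resulting asymptotic $A \approx e^a - \dfrac{a^2 e^a}{2n}$ (a sketch; the sample values of $a$ and $n$ are arbitrary):

```python
# Compare (1 + a/n)^n with the two-term asymptotic e^a - a^2 e^a / (2n).
import math

for a in (1.0, -1.0):
    for n in (100, 10_000):
        exact  = (1 + a/n) ** n
        approx = math.exp(a) - a**2 * math.exp(a) / (2 * n)
        print(f"a = {a:+.0f}, n = {n:>6}: exact = {exact:.8f}, approx = {approx:.8f}")
```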

Hoping that this makes things clearer to you.

  • In line (5), how did you establish the second equality?
    – Bluefire
    Commented Apr 5, 2016 at 11:05
  • @Bluefire Sorry, but what is line 5? Commented Apr 5, 2016 at 17:28
  • I mean the 5th equation in your answer (A =...)
    – Bluefire
    Commented Apr 5, 2016 at 17:35
  • I do not know if you did it on purpose, but this is a very, very good point. Please be sure that I am serious. Continue in that way. If I may: remember that there are no bad questions ... but answers can be stupid. Cheers. Commented Apr 5, 2016 at 17:48

Answer (score 1)

Let me offer you a synopsis of a proof (you can find the full proof in Apostol):

First of all, you need to show that for any $a \in \mathbb{R}$, the sequence $\left(1+\frac{a}{n}\right)^n$ converges to a number, say $G(a)$.

Next you show that the function $G(a)$ is of the form $p^a$ where $p$ is some fixed number.

To find $p$ all you have to do is find $G(1)$ which is the limit of the sequence $(1+\frac{1}{n})^n$.
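A rough numerical illustration of those two steps (not Apostol's proof; $n$ and the sample values of $a$ are arbitrary):

```python
# At a large fixed n, (1 + a/n)^n is already close to some G(a),
# and G(a) is close to G(1)^a.
n = 1_000_000

def G(a):
    # Finite-n stand-in for the limit of (1 + a/n)^n.
    return (1 + a/n) ** n

for a in (0.5, -1.0, 2.0):
    print(f"a = {a:4}:  G(a) = {G(a):.5f},  G(1)^a = {G(1) ** a:.5f}")
```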

Answer (score 0)

Going off of cactus314's answer,

$$\lim_{n\to\infty}\left(1+\frac1n\right)^n\left(1-\frac1n\right)^n=\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n=1$$

So we really only have to prove the right-hand equality, i.e. that $\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n=1$:

$$\lim_{n\to\infty}\left(1-\frac1{n^2}\right)^n=\lim_{n\to\infty}\left(\left(1-\frac1{n^2}\right)^{n^2}\right)^{1/n}$$

$$=\lim_{n\to\infty}\left(e^{-1}\right)^{1/n}$$

$$=e^0=1$$

Answer (score 0)

$$(1-\frac{1}{n})^n=(\frac{n-1}{n})^n=(\frac{n}{n-1})^{-n}=(\frac{n-1+1}{n-1})^{-n}$$ $$=(1+\frac{1}{n-1})^{-n}=\frac{1}{(1+\frac{1}{n-1})^{n}}=\frac{1}{(1+\frac{1}{n-1})^{n-1}\cdot(1+\frac{1}{n-1})}$$

Now take the limit

$$\lim_{n\to \infty}(1-\frac{1}{n})^n=\lim_{n\to \infty}\frac{1}{(1+\frac{1}{n-1})^{n-1}\cdot(1+\frac{1}{n-1})}=\lim_{n\to \infty}\frac{1}{(1+\frac{1}{n-1})^{n-1}}\cdot\lim_{n\to \infty}\frac{1}{1+\frac{1}{n-1}}=\frac{1}{e}$$

  • You lost the $n-1$ exponent in the last line.
    – PM 2Ring
    Commented Apr 8, 2016 at 14:53

Answer (score 0)

Actually, a variant of this question was answered long ago in the famous text on Algebra by Chrystal, using an intuitive argument as follows: expand $\left(1-\frac{1}{n}\right)^n$ as a binomial series for a positive integer $n$ and then let $n$ tend to infinity, to obtain the power series for $e^{-1}$. Chrystal himself expanded $\left(1+\frac{1}{n}\right)^n$ for a positive integer $n$ to obtain $e$. So this answers your question nicely.

Answer (score 0)

  1. For any $n$, $$f_n:[0,1]\to [0,1], \quad f_n(x) = x^n$$ always takes the values $0$ at $x=0$ and $1$ at $x=1$. This is illustrated for $n=1,2,\dots,30$ in the picture below.
  2. By continuity of each $f_n$, we can, by the Intermediate Value Theorem, always find an $x_n\in [0,1]$ so that $f_n(x_n)$ is any $y_n\in[0,1]$ we pick. In particular, we can create a sequence $x_n$ so that $f_n(x_n)$ converges to any value in $[0,1]$.

These observations make it less surprising that this particular choice is the one that gets you the limit $1/e$. In the picture you can see the point $\left(1-\frac{1}{n},\,\left(1-\frac{1}{n}\right)^n\right)$, which lies on the graph of $y=x^n$, getting closer to the line $y=1/e$.

[Figure: the curves $y=x^n$ for $n=1,\dots,30$ on $[0,1]$, with the points $\left(1-\frac1n,\left(1-\frac1n\right)^n\right)$ approaching the horizontal line $y=1/e$.]
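A small plotting sketch (assuming numpy and matplotlib are available) that reproduces a figure like the one described:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 400)
for n in range(1, 31):
    plt.plot(x, x**n, color="lightgray", linewidth=0.8)    # the curves y = x^n
    plt.plot(1 - 1/n, (1 - 1/n)**n, "b.", markersize=5)    # point on the n-th curve

plt.axhline(1/np.e, color="red", linestyle="--", label="y = 1/e")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```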

Answer (score -1)

The first definition of $e$ is $$ \lim_{n \to \infty} \left(1 + \frac{1}{n} \right)^{n} = e^1 $$ which is basically just answering the question of what happens when you take discretely compounded growth at 100% to the limit of continuous growth. Note that $e>2$, i.e. this limit of discretely compounded growth asymptotically approaches a value greater than the $2$ we would have arrived at by applying the initial rate of growth in a single step. This means that even though we're chipping away at the rate we're growing by with every step of compounding, the aggregate effect is more growth. Also, I just want to note that $e$ is the universal constant of continuous growth at a given rate, meaning that simply computing $e^{rt}$ gives the effect of continuously growing at a rate $r$ for $t$ units of time.

If we instead look at what happens when we take the limit of discretely compounded decay rather than growth, $$ \lim_{n \to \infty} \left( 1- \frac{1}{n} \right)^n = e^{-1} $$ we see that the opposite happens. Going from discretely compounded decay to continuous decay lessens the amount by which we decay at each step of compounding, and the result asymptotically approaches a value $1/e> 0$, which is more than the $0$ we would have been left with after a single step of decay at a rate of 100%.
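A small numerical framing of this compounding picture (a sketch; the step counts are arbitrary):

```python
# Compounding 100% growth (or decay) in n ever-smaller steps.
for n in (1, 12, 365, 100_000):
    grown   = (1 + 1/n) ** n   # climbs toward e  (about 2.718)
    decayed = (1 - 1/n) ** n   # climbs toward 1/e (about 0.368), from 0 at n = 1
    print(n, grown, decayed)
```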

Don't know if this helps at all.
