
Could you provide a proof of Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?

  • Actually, it is common to define $e^{it}$ using your equation. If something is to be proved, we must start by asking what we know about the objects involved: what is your definition of $e^{it}$? Do you use a series or some other limit process? – Commented Aug 28, 2010 at 20:54
  • A basic idea is this: note that $1.00001^x\approx1+.00001x$ when $x$ is small. (For example, $1.00001^5\approx1.00005$; check this!) Now, assume this is still true when $x$ is complex! This helps give us a way to define complex exponentiation. Of course, this is far from rigorous, but it's something. And it works: $1.00001^i=.99999999995\ldots+.00000999995\ldots i$, whereas our approximation says that it should be around $1+.00001i$. – Commented Mar 4, 2015 at 17:38
  • Two words: TAYLOR SERIES!!!! – user285523, Nov 23, 2015 at 4:54
  • Let $i$ be orthogonal to $1$; then $(1+i\sin\frac{\phi}{\infty})^\infty = (1+\frac{i\phi}{\infty})^\infty = e^{i\phi}$ – Commented Dec 27, 2016 at 15:19
  • @Jichao: See my derivation at math.stackexchange.com/questions/976967/verify-eulers-formula/… – 926reals, Feb 17, 2017 at 23:42

17 Answers


Proof: Consider the function $f(t) = e^{-it}(\cos t + i \sin t)$ for $t \in \mathbb{R}$. By the product rule \begin{eqnarray} f^{\prime}(t) = e^{-i t}(i \cos t - \sin t) - i e^{-i t}(\cos t + i \sin t) = 0 \end{eqnarray} identically for all $t \in \mathbb{R}$. Hence, $f$ is constant everywhere. Since $f(0) = 1$, it follows that $f(t) = 1$ identically. Therefore, $e^{it} = \cos t + i \sin t$ for all $t \in \mathbb{R}$, as claimed.
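This lends itself to a quick numerical sanity check (a minimal Python sketch, assuming NumPy is available; it illustrates rather than proves, since it leans on the library's own complex exponential):

```python
import numpy as np

# Sanity check of the argument: f(t) = e^{-it} (cos t + i sin t)
# should be identically 1.
for t in np.linspace(-10.0, 10.0, 9):
    f = np.exp(-1j * t) * (np.cos(t) + 1j * np.sin(t))
    assert abs(f - 1.0) < 1e-12
print("f(t) = 1 at all sampled points")
```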

  • @user02138 Please help me: let's clarify what you used/assumed in this proof. (i) What is the definition of $e^{it}$ for $t\in\mathbf{R}$? (ii) Depending on your answer to (i), what is the definition of the derivative of a function $g:\mathbf{R}\to\mathbf{C}$? For (ii) you cannot simply declare that the function can be written $g(t)=u(t) + i v(t)$ with $g'(t)=u'(t)+iv'(t)$. – vesszabo, Mar 22, 2018 at 13:43
  • @vesszabo: if your last question is "can you not say that a function written $g(t)=u(t) + iv(t)$ has the derivative $g'(t)=u'(t)+iv'(t)$?", then the answer is that you can also get the derivative of $g(t)$ using the method you've posted. – Skm, Aug 4, 2019 at 17:57
  • How do we get equality to zero in the second line? – Eli Rose, Oct 13, 2019 at 4:12
  • The function multiplied by its inverse has constant (zero) derivative. What an elegant and simple proof. – emandret, Nov 22, 2019 at 13:46
  • @vesszabo I agree that the complex exponential function must be defined before its derivative can be taken. One way to do that is to define $\exp :\, \mathbb{C}\to\mathbb{C},\,z\mapsto \sum_{n\ge 0}\frac{z^n}{n!}$. This implies that $\exp a\exp b=\exp (a+b)$ for all complex $a$ and $b$ (by the Cauchy product), and $\exp '=\exp$. After $\exp 1$ is identified with $e$, we have $\frac{d}{dt}e^{it}=ie^{it}$ and $e^{it}e^{-it}=1$, and user02138's proof follows. – Poder Rac, Aug 9, 2020 at 2:12

Assuming you mean $e^{ix}=\cos x+i\sin x$, one way is to use the Maclaurin series for sine and cosine, which are known to converge for all real $x$ in a first-year calculus context, together with the Maclaurin series for $e^z$, trusting that it converges for pure-imaginary $z$ (justifying that convergence requires complex analysis).

The MacLaurin series: \begin{align} \sin x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}x^{2n+1}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots \\\\ \cos x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots \\\\ e^z&=\sum_{n=0}^{\infty}\frac{z^n}{n!}=1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots \end{align}

Substitute $z=ix$ in the last series: \begin{align} e^{ix}&=\sum_{n=0}^{\infty}\frac{(ix)^n}{n!}=1+ix+\frac{(ix)^2}{2!}+\frac{(ix)^3}{3!}+\cdots \\\\ &=1+ix-\frac{x^2}{2!}-i\frac{x^3}{3!}+\frac{x^4}{4!}+i\frac{x^5}{5!}-\cdots \\\\ &=1-\frac{x^2}{2!}+\frac{x^4}{4!}+\cdots +i\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots\right) \\\\ &=\cos x+i\sin x \end{align}
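The rearrangement can be watched numerically by comparing a truncated series against $\cos x+i\sin x$ (a Python sketch; the truncation order 30 is an arbitrary choice of mine):

```python
import math

def exp_series(z, terms=30):
    # Partial sum of the Maclaurin series for e^z.
    return sum(z**n / math.factorial(n) for n in range(terms))

x = 1.7
lhs = exp_series(1j * x)              # the series evaluated at z = ix
rhs = math.cos(x) + 1j * math.sin(x)
print(abs(lhs - rhs))                 # ~1e-16
```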

  • This is by far the easiest proof to follow. – Noldorin, Aug 29, 2010 at 12:16
  • @Noldorin: yes, but it gives no intuition. The real mystery here is why the RHS should satisfy the identity $a(x+y) = a(x)a(y)$, and this proof gives no insight into that. Of course this is fundamentally a geometric statement about rotation, and a good proof of Euler's formula should have a clear connection to these geometric ideas. – Commented Aug 29, 2010 at 13:22
  • +1 This is the way Euler originally proved it (though @Noldorin I think my proof is easier to follow :) – Commented Aug 29, 2010 at 17:19
  • @Qiaochu: Yes, true. Unfortunately, I (among others) am not familiar enough with group theory (Lie groups specifically) to appreciate it fully. From a fundamental perspective, your proof would seem the most insightful; from a historical/simple perspective, possibly this one is. – Noldorin, Aug 29, 2010 at 20:33
  • @Isaac This is also the way I proved Euler's formula when I was a sophomore majoring in Electrical Engineering. My teacher of engineering mathematics didn't bother to prove the formula, but my conscience made me try, and my classmates laughed at my proof. It's great to see someone share the same thought after so many years. – Commented Oct 28, 2016 at 23:53

Let $\mathbf{A}$ be an $n \times n$ matrix. Recall that the system of differential equations

$$\mathbf{x}' = \mathbf{Ax}$$

has the unique solution $\mathbf{x} = e^{\mathbf{A}t} \mathbf{x}(0)$, where $\mathbf{x}$ is a vector-valued differentiable function and $e^{\mathbf{A}t}$ denotes the matrix exponential. In particular, let $\mathbf{J} = \left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right]$. Then the system of differential equations

$$x' = y, y' = -x$$

with initial conditions $x(0) = 1, y(0) = 0$ has the unique solution $\left[ \begin{array}{c} x \\ y \end{array} \right] = e^{\mathbf{J}t} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right]$. On the other hand, the above equations tell us that $x'' = -x$ and $y'' = -y$, and we know that the solutions to this differential equation are of the form $a \cos t + b \sin t$ for constants $a, b$. By matching initial conditions we in fact find that $x = \cos t, y = \sin t$. Now verify that multiplying a vector by $\mathbf{J}$ has the same effect as multiplying the corresponding complex number by $i$, and you obtain Euler's formula.

This proof has the following attractive physical interpretation: a particle whose $x$- and $y$-coordinates satisfy $x' = y, y' = -x$ has the property that its velocity is always perpendicular to its displacement and proportional to it in magnitude. But from physics lessons you know that this uniquely describes particles which move in a circle.

Another way to interpret this proof is as a description of the exponential map from the Lie algebra $\mathbb{R}$ to the Lie group $\text{SO}(2)$. Euler's formula generalizes to quaternions, and this in turn can be thought of as describing the exponential map from the Lie algebra $\mathbb{R}^3$ (with the cross product) to $\text{SU}(2)$ (which can then be sent to $\text{SO}(3)$). This is one reason it is convenient to use quaternions to describe 3-d rotations in computer graphics; the exponential map makes it easy to interpolate between two rotations.

Edit: whuber's answer reminded me of an excellent animated graphic (not reproduced here). It shows what is happening geometrically in whuber's answer, and is essentially what happens if you apply Euler's method to the system of ODEs I described above: each frame plots the powers of $1 + \frac{i}{N}$ up to the $N$th power, for increasing $N$.
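A plot-free rendering of that graphic (a Python sketch assuming NumPy; the value of $N$ and the comparison point are my choices):

```python
import numpy as np

# The graphic, minus the plot: successive powers of 1 + i/N
# march counterclockwise along (approximately) the unit circle.
N = 1000
powers = np.cumprod(np.full(N, 1 + 1j / N))   # z, z^2, ..., z^N
print(powers[-1])   # close to e^i
print(np.exp(1j))   # = cos(1) + i sin(1)
```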

  • Is there the possibility of circular logic here? You are using too many cannons... one of which might well require the result you are trying to prove. – Aryabhata, Aug 28, 2010 at 18:43
  • @Moron: everything I said requires only real-variable techniques except the last step, which is trivial. In particular one can show that the solutions to $x'' = -x$ are linear combinations of sine and cosine directly; this doesn't require the theory of the characteristic polynomial, if that's what you're thinking of. I would appreciate it if you could be more specific. – Commented Aug 28, 2010 at 19:16
  • @Qiaochu: I was just wondering if there could be a case of circular logic; I had nothing specific in mind. Euler's formula is quite a fundamental result, and we never know where it could have been used. I don't expect one to know the proof of every dependent theorem of a given result. But anyway, you seem to have justification, so I won't bother you :-) – Aryabhata, Aug 29, 2010 at 7:22
  • @Moron: the principal part of Qiaochu's answer, namely that Euler's formula can be understood as the exponential map from $\mathbb{R}$ to $\mathrm{SO}(2)$, is a modern way of expressing the elementary construction described in my answer. An advantage of the modern approach is its far reach, as hinted at by the application to $\mathrm{SU}(2)$; a possible disadvantage is that to the uninitiated it appears to obscure the basic geometric idea. But there's no "circular logic." – whuber, Aug 29, 2010 at 15:07
  • Since no one else has said it in nearly 6 years, I hope I may seize the opportunity to say, pace @Aryabhata's worry, that it would be a shame if a discussion of this identity didn't involve circular logic. :-) – LSpice, Jun 14, 2016 at 22:03

One could provide answers based on a wide range of definitions of $\exp$, $\cos$, and $\sin$ (e.g., via differential equations, power series, Lie theory, inverting integrals, infinite sums, infinite products, complex line integrals, continued fractions, circular functions, and even Euclidean geometry) as well as offering Euler's formula up as a tautology based on a definition. But let's consider where in one's education this question arises: it's usually well before most of these other concepts are encountered. The complex numbers have just been introduced; the Argand diagram is available; $\exp(x)$ is most likely defined as the limiting value of $(1 + x/n)^n$, and $\cos$ and $\sin$ are defined as circular functions (that is, as arclength parameters for coordinates of the unit circle). So the question deserves a response in this context using mathematics accessible to someone at this level.

Accordingly, I propose that we interpret $\exp(i x)$ as the limit of $(1 + i x/n)^n$, because at least one understands how to compute the latter (as iterated multiplication of 1 by $1 + i x/n$), one at least vaguely intuits what the limit of a sequence of points in the complex plane might mean, and one has learned that $\cos(\theta) + i \sin(\theta)$ is quite literally the complex number plotted at $\left( \cos(\theta), \sin(\theta) \right)$ in the plane.

It seems natural to evaluate the limit by looking at its modulus and argument separately. The modulus is relatively easy: because $|1 + i x/n| = \sqrt { 1 + (x/n)^2 }$, the modulus of its $n$th power equals $\left( 1 + (x/n)^2 \right) ^{n/2}$. We can exploit the one limit assumed to be known:

$\left( 1 + (x/n)^2 \right) ^{n/2} = \left( \left( 1 + x^2 / n^2 \right) ^ {n^2} \right) ^ {1/{2n}} \simeq \left( \exp ( x^2 ) \right) ^ {1/{2n}} \to 1$.

(There's some hand-waving in the last two steps. This is characteristic of limiting arguments at this level. Those of you who can see where the rigor is lacking also know exactly how to supply the missing steps.) Whence, whatever $\exp( i x )$ might be, we deduce it should lie on the unit circle.

Now we turn our attention to the argument of the limit. The sequence of moduli of the successive powers,

$\left( 1 + (x/n)^2 \right) ^{1/2}, \left( 1 + (x/n)^2 \right) ^{2/2}, \ldots, \left( 1 + (x/n)^2 \right) ^{k/2}, \ldots,$

obviously is non-decreasing, because it's a geometric sequence with multiplier of 1 or greater. That is, each successive multiplication by $1 + i x / n$ is expanding an original right triangle with vertices at (0,0), (1,0), and (1, $x/n$), but in the limit the amount of expansion is reduced to unity, as we have seen. In the Argand diagram we're just laying out similar copies of this original right triangle, placing one leg of each new copy along the hypotenuse of the previous one. In the limit, the length of the small leg (originally $x/n$) therefore remains constant. These observations imply that near the limit, we can take all these $n$ little triangles to be essentially congruent, whence the length of the path traced out by the succession of images of the small leg must be extremely close to $n (x/n) = x$. This pins down where on the circle $\exp( i x )$ must lie: it is the point reached by traveling (signed) distance $x$ counterclockwise along the circle beginning at (1,0). To anyone exposed to the definition of $\sin$ and $\cos$ in terms of circular functions, it is immediate that the coordinates of this limiting location are $\left( \cos(x), \sin(x) \right)$ and we are done.
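Both halves of this argument, modulus tending to $1$ and argument tending to $x$, can be spot-checked numerically (a Python sketch; the sample values are my choices):

```python
import cmath

x, n = 2.0, 10**6
p = (1 + 1j * x / n) ** n     # the n-th power discussed above
print(abs(p))                 # modulus: tends to 1
print(cmath.phase(p))         # argument: tends to x = 2.0 (valid for |x| < pi)
```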

  • This to my mind is the most intuitive and explanatory argument. (It was a long time after I knew Euler's identity that I discovered this argument, which was the moment I felt I had truly understood Euler's identity on a visceral level.) The same argument is hinted at in Conway and Guy's The Book of Numbers, with a supporting (static) graphic. – user43208, Sep 1, 2013 at 23:38
  • Thank you! I very much like this argument, particularly for teaching at the earlier levels, as you say. I especially appreciate the way the proof is based on the fundamental notion that complex multiplication means rotation (while real multiplication means scaling). With the exponential defined as a limit of successive multiplications, the geometric interpretation as successive rotations becomes very clear. It also connects nicely to Archimedes' method of exhaustion, which students have probably seen before in dealing with circles. – Roland, Dec 4, 2016 at 11:29
  • @user43208 Also, having read a translation of Euler's original work, I believe this is how he did it (in the sense of using $(1+x/n)^n$). – Commented May 15, 2020 at 0:39
  • Hmm... I always think of the geometrization of complex numbers as a bit past Euler's era, beginning with Gauss and Argand and others in the early 1800s, in which complex numbers were conceived in terms of dilations and rotations, as opposed to their formal algebra. And it's really this geometric aspect, along the lines of logarithmic spirals as Lie group homomorphisms, using an infinitesimal generator $i \cdot dx$ in the Lie algebra, where the truth of Euler's identity really hit me at the gut level, as opposed to a formal level. It's this geometric aspect that speaks to me in this answer. – user43208, May 15, 2020 at 0:52

Hint: Both $\rm e^{ix}$ and $\cos(x) + i\,\sin(x)$ are solutions of $\rm y' = i\,y,\ y(0) = 1$, so they are equal by the uniqueness theorem. Alternatively, bisect the power series for $\rm e^{ix}$ into its even and odd parts, i.e.

$$\begin{align} \rm f(x) \ \ &=\ \ \rm\frac{f(x)+f(-x)}{2} \;+\; \frac{f(x)-f(-x)}{2} \\[.5em] \Rightarrow\ \ \ \rm e^{ix} \ \ &=\ \ \rm\cos(x) \ +\ i \:\sin(x) \end{align}$$

Remarks 1. Uniqueness theorems provide powerful tools for proving equalities for functions that satisfy certain nice differential or difference (recurrence) equations. This includes a large majority of functions encountered in theory and practice. Such ideas lie behind algorithms implemented in computer algebra systems; e.g., search the computational literature using the terms "D-finite" and/or "holonomic" functions. For a less trivial but still easy example of this technique, see my recent post, which proves the identity

$$\rm \frac{sinh^{-1}(x)}{\sqrt{x^2+1}} \ = \ \ \sum_{k=0}^\infty\ (-1)^k \frac{(2k)!!}{(2k+1)!!} \: x^{2k+1}$$

For a simple discrete example, see here, where we remark that $\rm 13 \mid 3^{n+1} + 4^{2n-1} =: f_n$ follows from $\rm f_2 \equiv f_1 \equiv 0 \pmod{13}$ and the (obvious) fact that $\rm f_n$ satisfies a monic 2nd-order linear recurrence. Namely, let $\rm S\, f_n := f_{n+1}$ be the shift operator. Then $\rm S - 3$ kills $\rm 3^{n+1}$ and $\rm S - 16$ kills $\rm 4^{2n-1}$, therefore $\rm (S-3)(S-16) = S^2 - 19\,S + 48$ kills their sum $\rm f_n$, i.e. $\rm f_{n+2} = 19\, f_{n+1} - 48\, f_n$. So, mod $13$: $\rm f_2 \equiv f_1 \equiv 0 \Rightarrow f_3 = 19\, f_2 - 48\, f_1 \equiv 0 \Rightarrow f_4 \equiv 0 \Rightarrow f_5 \equiv 0 \Rightarrow \cdots \Rightarrow f_n \equiv 0$. So $0$ is the unique solution of the above recurrence satisfying the initial conditions $\rm f_2 = f_1 = 0$. This is simply an obvious special case of the uniqueness theorem for difference equations (recurrences). Once again, by invoking a uniqueness theorem, we have greatly simplified the deduction of an equality. See this answer for a simple operator-theoretic generalization of the above method. A quick mechanical check of this example is sketched below.
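Here is that check (a minimal Python sketch; the range of $n$ is an arbitrary choice of mine):

```python
# Mechanical check of the discrete example: 13 | 3^(n+1) + 4^(2n-1),
# and f satisfies the recurrence f(n+2) = 19 f(n+1) - 48 f(n).
def f(n):
    return 3**(n + 1) + 4**(2 * n - 1)

for n in range(1, 20):
    assert f(n) % 13 == 0
    assert f(n + 2) == 19 * f(n + 1) - 48 * f(n)
print("both claims hold for n = 1..19")
```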

Notice that, above, we don't need to know the precise recurrence relation. Rather, we need only know a bound on its degree, so that we know how many initial terms are needed to determine the solution uniquely. In practice, as above, one can often easily derive simple upper bounds on the degree of the recurrence or differential equation - which makes the method even more practical.

2. Generalizing the above bisection into even and odd parts, one can employ $n$th roots of unity to take arbitrary $n$-part multisections of power series and generating functions. They often prove handy, e.g.

Exercise: Give elegant proofs of the following.

$\quad\quad\displaystyle \sin(x)/e^{x} \quad$ has every $4k$th term zero

$\quad\quad\displaystyle \cos(x)/e^{x} \quad$ has every $(4k+2)$th term zero

See the posts in this thread for various solutions and more on multisections.


Well, this question actually boils down to "How is the complex exponential defined?"

Here is my view of this problem:

Let

$$f(x+iy)= e^{x}(\cos(y)+i\sin(y) ) \,.$$

Then $f$ has continuous partial derivatives $f_x$ and $f_y$ and satisfies the Cauchy-Riemann equations; thus it is analytic.

Moreover, for any $z_1,z_2 \in {\mathbb C}$ we have

$$f(z_1+z_2)=f(z_1) f(z_2) \,.$$

Last but not least $f(x)=e^x$ for all $x \in {\mathbb R}$.

In particular, we have shown that $e^x$ can be extended to an analytic function on the complex plane, and the theory tells us that such an extension is unique.

Now, since $f(z)$ is the unique analytic extension of $e^x$ to the complex plane, and it also satisfies the exponential relation $f(z_1+z_2)=f(z_1) f(z_2)$, we call this function $e^z$.
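The claimed properties of $f$ are easy to spot-check numerically (a Python sketch; the sample points are my choices):

```python
import cmath

def f(z):
    # f(x + iy) = e^x (cos y + i sin y), the proposed extension
    return cmath.exp(z.real) * (cmath.cos(z.imag) + 1j * cmath.sin(z.imag))

z1, z2 = 0.3 + 1.1j, -0.7 + 2.5j
print(abs(f(z1 + z2) - f(z1) * f(z2)))    # ~0: the exponential law
print(abs(f(0.5 + 0j) - cmath.exp(0.5)))  # ~0: agrees with e^x on the reals
```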

  • I've got one question: why do we want the generalization of $e^x$ to complex numbers to be an analytic function? Yes, I admit it's important that the generalization is unique, but it is unique only if we require the generalization to be analytic. – user216094, Dec 28, 2015 at 19:01
  • @user216094 In analysis the identifying property of $e^x$ is the fact that $f'=f$ (together with $f(0)=1$). Also note that without considering the derivative you cannot really differentiate between $e^x$ and $a^x$... Therefore, when extending to the complex plane, it is reasonable to seek, if possible, a function which satisfies $f'=f$... If you expect this, you must ask for the function to be differentiable. In reality, we don't ask for the exponential to be analytic; we only seek differentiability everywhere (the functional equation yields that as long as it is differentiable at... – N. S., Dec 29, 2015 at 0:33
  • ...a point, it is differentiable everywhere), but for complex functions the two are equivalent... – N. S., Dec 29, 2015 at 0:34
  • @user216094 Euler, Bernoulli, Cotes, and the other pioneers did not think quite in terms of preserving differentiability, but rather had the view that $\sqrt{-1}$ should be treated just like any other constant when integrating and the like, which turned out to be an equivalent form of the assumption of differentiability and hence analyticity, although they did not think of it that way. – Commented May 17, 2020 at 19:00
  • @user216094 Also, people seem to not like this question for whatever reason; see math.stackexchange.com/questions/3658418/… . After trying (unsuccessfully) to figure it out for myself for some time, I contend that it is not at all clear or obvious that we should define exp this way. See level1807's answer; it makes more sense in the historical context where complex exponents arose, before they took on a life of their own. – Commented May 17, 2020 at 19:04

My favorite proof involves nothing but basic integration (of course it still secretly relies on complex analysis, but looks extremely explicit and straightforward). In fact, this is based on the historical observation made by Bernoulli, which became the first hint of this famous identity.

Write

$$\frac{2i}{x^2+1}=\frac{1}{x-i}-\frac{1}{x+i}.$$

Now integrate both sides just as you would integrate real-valued functions of this form:

$$2i\arctan{x}+c=\log\frac{x-i}{x+i}.$$

Replace $x=\tan(y/2)$ and exponentiate (and I'll replace $e^c$ with just $c$ for simplicity):

$$ce^{iy}=\frac{\sin(y/2)-i\cos(y/2)}{\sin(y/2)+i\cos(y/2)}.$$

After multiplying both the numerator and the denominator by $\sin(y/2)-i\cos(y/2)$, the right-hand side simplifies to

$$\frac{\sin(y/2)-i\cos(y/2)}{\sin(y/2)+i\cos(y/2)}=\frac{(\sin(y/2)-i\cos(y/2))^2}{\sin(y/2)^2+\cos(y/2)^2}=(\sin(y/2)-i\cos(y/2))^2=-\cos y-i\sin y.$$

Substituting $y=0$, we see that $c=-1$, therefore

$$e^{iy}=\cos y+i\sin y.$$
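The integration step, including the branch subtlety hidden in the constant, can be checked numerically with the principal logarithm (a Python sketch; sample points are mine). The difference comes out to $-i\pi$ for every $x$, so $e^c=-1$, consistent with $c=-1$ above:

```python
import cmath, math

# Check 2i*arctan(x) + c = log((x - i)/(x + i)); the difference should be
# the same constant (-i*pi with the principal branch) for every x.
for x in (0.25, 1.0, 3.0):
    print(cmath.log((x - 1j) / (x + 1j)) - 2j * math.atan(x))
```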


Taking the same path as whuber's answer, one could also note that if we define $e^x$ as follows,

$$e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n$$

Then it follows that

$$e^{ix}=\lim_{n\to\infty}\left(1+\frac{ix}n\right)^n$$

Putting this into polar form, we find that

$$1+\frac{ix}n=\sqrt{1+\frac{x^2}{n^2}}\left(\cos\left(\arctan\left(\frac xn\right)\right)+i\sin\left(\arctan\left(\frac xn\right)\right)\right)$$

By De Moivre's formula,

$$\left(1+\frac{ix}n\right)^n=\left(1+\frac{x^2}{n^2}\right)^{n/2}\left(\cos\left(n\arctan\left(\frac xn\right)\right)+i\sin\left(n\arctan\left(\frac xn\right)\right)\right)$$

One can then see that

$$\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{n/2}=1$$

$$\lim_{n\to\infty}n\arctan\left(\frac xn\right)=x$$

Thus,

$$e^{ix}=\lim_{n\to\infty}\left(1+\frac{ix}n\right)^n=\cos(x)+i\sin(x)$$
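The two limits can be observed numerically (a Python sketch; $x=2$ and the values of $n$ are arbitrary choices of mine):

```python
import math

# The modulus factor tends to 1 and n*arctan(x/n) tends to x.
x = 2.0
for n in (10**2, 10**4, 10**6):
    print(n, (1 + x**2 / n**2) ** (n / 2), n * math.atan(x / n))
```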

  • (+1) I just came across this question, but I see that this answer is similar to this answer. – robjohn, Aug 20, 2020 at 16:06
  • This is honestly the best answer. It follows the route closest to the obvious observation that $1+ix/n \approx \cos(x/n)+i\sin(x/n)$, to which applying De Moivre's formula yields the result, and it shows how to make the approximation rigorous. I had been looking for ways using little-o notation, but this is straightforward and I can't believe I didn't think of it. Thank you so much! – Commented Mar 3, 2021 at 14:06

I'll copy my answer to this question:


Let $f(x) = \cos(x) + i\cdot \sin(x)$

Then $$\begin{align*}\frac{df}{dx} &= -\sin(x) + i \cdot \cos(x)\\ &= i \cdot f(x) \end{align*}$$

So that $$ \int \frac{1}{f(x)}\, df = \int i \cdot dx \\ \ln(f(x)) = ix + C \\ f(x) = e^{ix + C} = \cos(x) + i \cdot \sin(x) $$

Since $f(0) = 1$, $C = 0$, so: $$ e^{ix} = \cos(x) + i \cdot \sin(x) $$
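The ODE step can be reproduced with a computer algebra system (a sketch assuming SymPy is installed; solving the initial value problem directly sidesteps the logarithm-branch issue raised in the comment below):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')
# Solve f' = i f with f(0) = 1, the differential equation derived above.
sol = sp.dsolve(sp.Eq(f(x).diff(x), sp.I * f(x)), f(x), ics={f(0): 1})
print(sol)   # Eq(f(x), exp(I*x))
```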

  • There appear to be some gaps here; the most significant one concerns the use of a complex logarithm. You seem to have overlooked the fact that there are many more solutions to $\exp(C) = 1$ than merely $C = 0$. Their existence is an implication of Euler's formula itself. To avoid circular logic, the crucial thing is to make clear what definitions you are using for exp, sin, and cos (and also, in this case, for the complex logarithm). – whuber, Aug 30, 2010 at 14:21

How about the Laplace transform of

$$\exp(i t)=\cos(t)+i\sin(t)$$

Let's evaluate the Laplace transform of the two sides of the formula:

$$\frac{1}{s-i}=\frac{s}{s^2+1}+\frac{i}{s^2+1}=\frac{s+i}{s^2+1}$$

Now, let's multiply both sides by $s-i$ :

$$1=\frac{(s-i)(s+i)}{s^2+1}=\frac{s^2-i^2}{s^2+1}=\frac{s^2+1}{s^2+1}=1$$

Since both sides have the same Laplace transform, uniqueness of the inverse transform (Lerch's theorem) gives the identity itself. Voilà
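Both transforms can be computed symbolically (a sketch assuming SymPy can transform these expressions directly; the equality of the two results is what the argument needs):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
lhs = sp.laplace_transform(sp.exp(sp.I * t), t, s, noconds=True)
rhs = sp.laplace_transform(sp.cos(t) + sp.I * sp.sin(t), t, s, noconds=True)
print(sp.simplify(lhs - rhs))   # 0
```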

  • Wait, isn't that circular reasoning, since the Laplace transforms of the trigonometric functions use Euler's formula in the first place? – Commented Feb 28, 2020 at 19:25
  • @AnindyaMahajan $\int e^{-sx}\sin x \,dx=-\frac{s\sin x+\cos x}{1+s^2}e^{-sx}+\text{const}$. This indefinite integral can be computed by integrating by parts, with no mention of Euler's formula or even complex analysis. The same goes for $\cos x$. – Commented Feb 29, 2020 at 14:30

I've noticed that several of the above answers are heuristic in that they use/operate on the complex exponential in an attempt to define it. This would be like looking up a word in a dictionary and finding the word itself in the definition. Until the complex exponential is defined, the expression $e^{iy}$ has no meaning. One cannot "prove" Euler's identity, because the identity itself is the DEFINITION of the complex exponential.

So really, proving Euler's identity amounts to showing that it is the only reasonable way to extend the exponential function to the complex numbers while still maintaining its properties. What follows is basically a more detailed version of what has been summarized above by asmaire. We want to define $$f\left(z\right)=e^z$$ in such a way that it satisfies the following properties: $$ f'\left(z\right) = f\left(z\right) , \quad f\left(x+ 0i\right) = e^x$$ In other words, we want it to be its own derivative, and we would like it to reduce to the regular exponential function when the exponent is purely real. Let's explore what properties such a function would have to have.

To make things easier write $f\left(z\right) = f\left(x+iy\right) = u + iv$ with $ u = u\left(x,y\right) , v = v\left(x,y\right)$.

By the Cauchy-Riemann equations we know that $$f'\left(x+iy\right) = u_x + i v_x = v_y + i \left(-u_y\right) = u + iv.$$ where the rightmost equality comes from the fact that$ f' = f$. Equating real/imaginary parts we see that $$u\left(x,y\right) = u_x\left(x,y\right)$$ $$ v\left(x,y\right) = v_x\left(x,y\right)$$ for all x,y. The general solutions to these equations are $$u\left(x,y\right) = a\left(y\right)e^x$$ $$ v\left(x,y\right)= b\left(y\right)e^x $$ Looking at the other constraint we have $$ e^x + 0i = e^ x = f\left(x+0i\right) = u\left(x,0\right) + i v\left(x,0\right) = a\left(0\right)e^x + i b\left(0\right)e^x$$ This gives our "initial conditions" $$ a\left(0\right) = 1 ,\ b\left(0\right) = 0.$$

Going back to the C-R equations we have $$ u_x = v_y \implies a\left(y\right)e^x = b'\left(y\right)e^x$$ $$v_x = - u_y \implies a'\left(y\right)e^x = -b\left(y\right)e^x$$ giving the system $$a = b'$$ $$- b = a'$$ which we can write as $$ \vec{x}' = \begin{bmatrix} a'\left(y\right) \\ b' \left(y\right) \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} a\left(y\right) \\ b\left(y\right) \end{bmatrix} = A \vec{x} $$ It turns out that the solution to a linear system like this is given by the matrix exponential $$ \vec{x}\left(y\right) = e^{Ay} \vec{x}_0 $$

where $ \vec{x}_0 = \begin{bmatrix} a\left(0\right) \\ b\left(0\right) \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, $ e^{Ay} = \sum_{k=0}^{\infty} \frac{A^ky^k}{k!} $, and $A^k$ denotes the $k$-th matrix power.

Note that $ A^2 = -I$ so that $ A^3 = - A, A^4 = I, A^5 = A $ etc.

This gives $$ e^{Ay} = \sum_{k=0}^{\infty} \frac{A^ky^k}{k!} = \sum_{k \text{ even}} \frac{A^ky^k}{k!} + \sum_{k \text{ odd}} \frac{A^ky^k}{k!} = I\sum_{k=0}^{\infty} \frac{\left(-1\right)^k y^{2k}}{\left(2k\right)!} + A\sum_{k=0}^{\infty} \frac{\left(-1\right)^k y^{2k+1}}{\left(2k+1\right)!} = \begin{bmatrix} \cos \left(y\right) & - \sin \left(y\right) \\ \sin \left(y\right) & \cos \left(y\right) \end{bmatrix} $$

As mentioned above, multiplying by the initial conditions vector gives us our solution : $$ \begin{bmatrix} a\left(y\right) \\ b\left(y\right) \end{bmatrix} = \begin{bmatrix} \cos \left(y\right) & - \sin \left(y\right) \\ \sin \left(y\right) & \cos \left(y\right) \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \cos \left(y\right) \\ \sin \left(y\right) \end{bmatrix}$$ We finally arrive at $$f\left(x+iy\right) = u\left(x,y\right) + i v\left(x,y\right) = e^x\cos \left(y\right) + i e^x\sin \left(y\right).$$ In other words, if we want the complex exponential to naturally generalize the real exponential then

we must DEFINE it as $ e^{x+iy} = e^x\left(\cos \left(y\right) + i \sin \left(y\right)\right)$.
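A numerical spot check of the matrix-exponential step (a sketch assuming NumPy and SciPy are available; $y=0.8$ is an arbitrary choice of mine):

```python
import numpy as np
from scipy.linalg import expm

# e^{Ay} should be the rotation matrix by angle y.
y = 0.8
A = np.array([[0.0, -1.0], [1.0, 0.0]])
R = np.array([[np.cos(y), -np.sin(y)], [np.sin(y), np.cos(y)]])
print(np.allclose(expm(A * y), R))   # True
```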

  • Sorry for the formatting; I don't use LaTeX often. – David Reed, Nov 2, 2017 at 1:59
  • It would be really helpful if you tried to make more of an effort with the formatting. As it is, it is a difficult-to-read and difficult-to-parse wall of text. A couple of paragraph breaks and some aligned blocks would go a long way... – Xander Henderson, Nov 2, 2017 at 2:20
  • No, you don't have to define $\exp z$ like that. See Jonas Meyer's comment under user17762's answer to this question, or the Wikipedia article on Euler's formula. – Poder Rac, Aug 6, 2020 at 23:02
  • @PoderRac I think you misunderstand what I mean by "have to". All other definitions you cite are equivalent to this one. The above is nice in that it shows that any alternative definition that is NOT equivalent will not naturally generalize the real exponential. – David Reed, Aug 6, 2020 at 23:27
  • @David Well, you can define $\exp z$ for complex $z$ as the unique analytic continuation of $\exp x$ for real $x$. Then you can prove that the real and imaginary parts of $\exp ix$ separate as $\cos x$ and $\sin x$, respectively. From the question, it seems that the OP wants to prove the formula from something, not use it as a definition; otherwise the question would be meaningless. – Poder Rac, Aug 6, 2020 at 23:49

An easy way to answer the question: when we try to extend the definition of the exponential to the complex plane in a "nice" way (read "nice" as holomorphic), we end up with this formula. And as you are probably aware, there is at most one such extension. So if we want to define our exponential on the complex plane so that it is holomorphic and matches the conventional exponential function on the real line, we end up with $e^{it} = \cos(t) + i \sin(t)$.

All the other answers I think are circular in some sense.

For instance, how do we know that we can expand $e^{i x}$ as a power series? All we know is that we can expand $e^{x}$ as a power series when $x$ is real. We do not know a priori that the expansion carries over when the exponent is complex.

  • Any answer will have to give/assume some definition of what $e^{it}$ means, but that is not the same as assuming the result. For example, taking the unique holomorphic extension of $\exp$ (as you suggest) leads to the power series expansion. But then the formula in question holds based on the series for $\exp$ and the series for the real functions $\sin$ and $\cos$, as shown in Isaac's answer; the formula is not actually part of the definition. Answers that don't give full justification may be incomplete as far as rigorous proof goes, but that is different from being circular. – Commented Nov 25, 2010 at 14:30

"Combining" the answers from Qiaochu Yuan and Isaac one can also directly evaluate $\begin{bmatrix}x \\ y \end{bmatrix} = e^{\mathbf{J}t} \begin{bmatrix}1 \\ 0 \end{bmatrix}$ with $\mathbf{J}=\begin{bmatrix}0&1 \\ -1&0 \end{bmatrix}$ (note that $\mathbf{J}^2 = - \begin{bmatrix}1&0 \\ 0&1 \end{bmatrix}$) as follows:

$$ e^{\mathbf{J}t} \begin{bmatrix}1 \\ 0 \end{bmatrix} = \left( \mathbf{J}^0 + \frac{t}{1!} \mathbf{J}^1 + \frac{t^2}{2!} \mathbf{J}^2 + \frac{t^3}{3!} \mathbf{J}^3 + \frac{t^4}{4!} \mathbf{J}^4 + \cdots \right) \begin{bmatrix}1 \\ 0 \end{bmatrix} \\ = \left( \begin{bmatrix}1&0 \\ 0&1 \end{bmatrix} + t \begin{bmatrix}0&1 \\ -1&0 \end{bmatrix} - \frac{t^2}{2!}\begin{bmatrix}1&0 \\ 0&1 \end{bmatrix}-\frac{t^3}{3!}\begin{bmatrix}0&1 \\ -1&0 \end{bmatrix}+ \frac{t^4}{4!}\begin{bmatrix}1&0 \\ 0&1 \end{bmatrix}+\cdots\right)\begin{bmatrix}1 \\ 0 \end{bmatrix} \\ = \begin{bmatrix} 1 + 0 - \frac{t^2}{2!} - 0 + \frac{t^4}{4!} + \cdots \\ 0 - t - 0 + \frac{t^3}{3!} + 0 + \cdots \end{bmatrix} \\ = \begin{bmatrix} \cos(t) \\ - \sin(t) \end{bmatrix} $$ The result is a parametrization of a circle in the plane (adding $t$ as a third coordinate turns it into a helix). That means Euler's formula simply shows how one can parametrize this curve using the exponential function.

The result can also be written as $$ e^{\mathbf{J}t} \begin{bmatrix}1 \\ 0 \end{bmatrix} = \left( \cos(t) + \mathbf{J} \sin(t) \right)\begin{bmatrix} 1 \\ 0 \end{bmatrix} $$ making the connection to the traditional form of Euler's formula manifest.

It should be noted that with $\mathbf{K}=\begin{bmatrix}0&1 \\ 1&0 \end{bmatrix}$ the following relation can be proved in the same way: $$ e^{\mathbf{K}t} \begin{bmatrix}1 \\ 0 \end{bmatrix} = \left( \cosh(t) + \mathbf{K} \sinh(t) \right)\begin{bmatrix} 1 \\ 0 \end{bmatrix} $$
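Both the circular and the hyperbolic identities can be spot-checked numerically (a sketch assuming NumPy and SciPy; $t=0.6$ is an arbitrary choice of mine):

```python
import numpy as np
from scipy.linalg import expm

t = 0.6
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
K = np.array([[0.0, 1.0], [1.0, 0.0]])
e1 = np.array([1.0, 0.0])
print(expm(J * t) @ e1, np.cos(t), -np.sin(t))   # circular case
print(expm(K * t) @ e1, np.cosh(t), np.sinh(t))  # hyperbolic case
```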


Let $f(x) = e^{ix} - \cos(x) - i \sin(x)$. Then $f'' + f = 0$, $f(0)=0$, and $f'(0)=0$; hence $f(x) = 0$ identically, by uniqueness of solutions to this initial value problem.
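All three hypotheses can be verified symbolically (a sketch assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(sp.I * x) - sp.cos(x) - sp.I * sp.sin(x)
print(sp.simplify(f.diff(x, 2) + f))        # 0, so f'' + f = 0
print(f.subs(x, 0), f.diff(x).subs(x, 0))   # 0 0: both initial conditions
```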


Let $y=\cos \phi+i\sin \phi$ $...(1)$

Differentiating both sides of equation (1) with respect to $\phi$, we get,

$\frac{dy}{d\phi}=-\sin \phi+i\cos \phi$

$\implies \frac{dy}{d\phi}=i(\cos \phi-\frac{1}{i}\sin \phi)$

$\implies \frac{dy}{d\phi}=i(\cos\phi+i\sin \phi)$

$\implies \frac{dy}{d\phi}=iy$

$\implies\frac{1}{y}dy=id\phi$ $...(2)$

Integrating both sides of equation (2), we get,

$\int\frac{1}{y}dy=\int id\phi$

$\implies \ln(y)=i\phi+c$ $...(3)$

Substituting $\phi=0$ in equation (1), we get,

$y=\cos 0+i\sin 0$

$\implies y=1$

Substituting $\phi=0$ and $y=1$ in equation (3) we get,

$\ln(1)=c$

$\implies c=0$

Substituting $c=0$ in equation (3) we get,

$\ln(y)=i\phi$

$e^{i\phi}=y$

$\therefore e^{i\phi}=\cos \phi+i\sin \phi$


Consider the question: what is the value of $\cos(ix)$, written in the form $A+iB$? That is, find $A(x)$ and $B(x)$ such that $$\cos(ix)=A+iB$$ Differentiating the equality twice, $$-i\sin(ix)=A'+iB'$$ $$\cos(ix) = A'' +iB''$$ Thus $$A''=A$$ $$B''=B$$ The general solution for $A$ or $B$ is $C_1e^x+C_2e^{-x}$, for arbitrary $C_1$ and $C_2$. Both the question and the solution of the differential equation $A''=A$ were well known at the time the formula was developed.

$$\cos(ix)=C_1e^x+C_2e^{-x} +i[C_3e^x+C_4e^{-x}]$$

$$\cos(i(-ix))=\cos(x)=C_1e^{-ix}+C_2e^{ix} +i[C_3e^{-ix}+C_4e^{ix}]$$ Differentiating, $$-\sin(x)=C_1(-i)e^{-ix}+C_2(i)e^{ix} +i[C_3(-i)e^{-ix}+C_4(i)e^{ix}]$$ Applying the boundary conditions $\cos(0)=1$ and $\sin(0)= 0$, one finds that $C_1=C_2=1/2$ and $C_3=C_4=0$ fit the conditions.

Hence: $$\cos(x)=\frac{e^{-ix}+e^{ix}}{2}$$ $$-\sin(x)=\frac{-ie^{-ix}+ie^{ix}}{2}$$ $$-i\sin(x)=\frac{e^{-ix}-e^{ix}}{2}$$ Thus $$\cos(x)+i\sin(x)=e^{ix}$$

More on the BCs.

$$\cos(0)=[C_1+C_2] +i[C_3+C_4]=1$$ $$[C_1+C_2]=1, \qquad [C_3+C_4]=0$$ $$-\sin(0)=i[C_2-C_1] +[C_3-C_4]=0$$ $$[C_2-C_1]=0, \qquad [C_3-C_4]=0$$

Thus $C_1=C_2=1/2$ and $C_3=C_4=0$ fit the conditions. (I cannot demonstrate uniqueness of the $C_n$; I accept it because the same procedure applied to $\sin(ix)$ and $e^{ix}$ yields the same result: Euler's formula.)

An interesting example:

$$\arctan(ix)= A+Bi$$ Taking the derivative, $$ \frac{i}{1-x^2}= A'+B'i$$ Thus $$ \frac{i}{1-x^2}= B'i$$ $$ \frac{1}{1-x^2}= B'$$ Solving for $B$, one finds $$ B=\frac{1}{2}\ln\left[\frac{1+x}{1-x}\right]$$ so that $$ \arctan(ix)=\frac{i}{2}\ln\left[\frac{1+x}{1-x}\right]= iB$$ Substituting $x=-iz$, this becomes $$ \arctan(z)=\frac{i}{2}\ln\left[\frac{1-iz}{1+iz}\right]$$ which is equivalently expressed as $$ \arctan(z)=\frac{1}{2i}\ln\left[\frac{1+iz}{1-iz}\right]$$
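The final formula agrees with the principal branch of the inverse tangent in Python's cmath (a sketch; the sample points and the helper's name are my choices):

```python
import cmath

def arctan_via_log(z):
    # arctan(z) = (1/(2i)) * log((1 + iz)/(1 - iz)), principal branch
    return cmath.log((1 + 1j * z) / (1 - 1j * z)) / 2j

for z in (0.3, -1.2, 0.5 + 0.4j):
    print(arctan_via_log(z), cmath.atan(z))   # the two columns agree
```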

  • Are you sure you're answering the right question? – Commented Feb 8, 2017 at 4:10

$\newcommand{\E}{\mathrm e}\newcommand{\I}{\mathrm i}$Notice that $\E^{\I t}$ (a real number raised to an imaginary or complex power) is usually defined via $a^b := \exp(b \log a)$.

Define $\exp$ by the power series: $$ \exp z := \sum_{j=0}^\infty \frac{z^j}{j!} $$ which converges on all of $\mathbb{C}$ by the Cauchy-Hadamard formula.

Then $\cos,\sin$ can be defined by $$ \cos z := \frac{\exp(\I z) + \exp(-\I z)}2, \qquad \sin z := \frac{\exp(\I z) - \exp(-\I z)}{2\I} $$ Since $\exp$ is a power series with real coefficients, $\exp \overline {z} = \overline{\exp z}$. It follows that, if $t \in \mathbb{R}$, we have $\cos t = \Re \exp \I t$ and $\sin t = \Im \exp \I t$.

It remains to define $\log$ and show that $\log \E = 1$. Using Cauchy products, it can be shown that $$ \exp(w +z) = \exp w \exp z $$ Since $\exp$ is positive on $[0,+\infty[$ (the series has positive terms there) and $\exp x \exp(-x) = \exp 0 = 1$, it follows that $\exp$ is positive on all of $\mathbb R$. Differentiating gives $\exp\restriction_{\mathbb R}' = \exp\restriction_{\mathbb R} > 0$. By the Mean Value Theorem, $\exp$ is strictly increasing on $\mathbb R$, and hence its inverse $\log := \exp\restriction_{\mathbb R}^{-1}$ can be defined.

It is known that $\exp 1 = \E$ (the proof depends on the definition of $\E$). Hence $\log \E = 1$, and $\E^{\I t} = \exp(\I t) = \cos t + \I \sin t$.
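These relations can be spot-checked numerically, using Python's built-in complex exponential as a stand-in for the series definition (a sketch; the sample points are mine):

```python
import cmath, math

# exp has real Maclaurin coefficients, so exp(conj z) = conj(exp z);
# for real t this splits exp(it) into real part cos t, imaginary part sin t.
z = 0.7 + 1.3j
print(abs(cmath.exp(z.conjugate()) - cmath.exp(z).conjugate()))  # ~0
t = 2.1
w = cmath.exp(1j * t)
print(abs(w.real - math.cos(t)), abs(w.imag - math.sin(t)))      # ~0 ~0
```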

