157
$\begingroup$

So, this question asks about how useful computational tricks are to mathematics research, and several people's response was "well, computational tricks are often super cool theorems in disguise." So what "computational tricks" or "easy theorems" or "fun patterns" turn out to be important theorems?

The ideal answer to this question would be a topic that can be understood at two different levels that have a great gulf in terms of sophistication between them, although the simplistic example doesn't have to be "trivial."

For example, the unique prime factorization theorem is often proven from the division algorithm through Bezout's lemma and the fact that $p\mid ab\implies p\mid a$ or $p\mid b$. A virtually identical proof allows you to establish that every Euclidean Domain is a unique factorization domain, and the problem as a whole - once properly abstracted - gives rise to the notion of ideals and a significant amount of ring theory.

For another example, it's well known that finite dimensional vector spaces are uniquely determined by their base field and their dimension. However, a far more general theorem in Model Theory basically lets you say "given a set of objects that have a dimension-like parameter that are situated in the right manner, every object with finite "dimension" is uniquely determined by its minimal example and the "dimension." I don't actually quite remember the precise statement of this theorem, so if someone wants to explain in detail how vector spaces are a particular example of $k$-categorical theories for every finite $k$ that would be great.

From the comments: In a certain sense I'm interested in the inverse of the question in this Math Overflow post. Instead of being interested in deep mathematics that produces horribly complicated proofs of simple ideas, I want simple ideas that contain within them, or generalize to, mathematics of startling depth.

$\endgroup$
17

28 Answers

152
$\begingroup$

In school they teach us that

$$\int\frac 1x\;\mathrm dx=\log\left|x\right|+C$$

But as Tom Leinster points out, this is an incomplete solution. The function $x\mapsto 1/x$ has more antiderivatives than just the ones of the above form. This is because the constant $C$ could be different on the positive and negative portions of the axis. So really we should write:

$$\int\frac 1x\;\mathrm dx=\log\left|x\right|+C\cdot1_{x>0}+D\cdot1_{x<0}$$

where $1_{x>0}$ and $1_{x<0}$ are the indicator functions for the positive and negative reals.

This means that the space of antiderivatives of the function $x\mapsto 1/x$ is two-dimensional. Really what we have done is to calculate the zeroth de Rham cohomology of the manifold $\mathbb R-\{0\}$ (the domain on which $x\mapsto 1/x$ is defined). The fact that $\mathrm{H}^0_{\mathrm{dR}}\!\!\left(\mathbb R-\{0\}\right)=\mathbb R^2$ results from the fact that $\mathbb R-\{0\}$ has two components.
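For the skeptical reader, the two-constant claim is easy to check numerically. A minimal sketch (the helper names `F` and `numeric_derivative` are my own): differentiating $\log\left|x\right|+C\cdot1_{x>0}+D\cdot1_{x<0}$ gives $1/x$ on both components, for any choice of $C$ and $D$.

```python
import math

def F(x, C=1.0, D=-2.0):
    # An antiderivative of 1/x on R \ {0}: log|x| plus a constant that
    # may differ on the two components (C for x > 0, D for x < 0).
    return math.log(abs(x)) + (C if x > 0 else D)

def numeric_derivative(f, x, h=1e-6):
    # Central-difference estimate of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# F'(x) = 1/x on both components, regardless of C and D.
for x in (0.5, 2.0, 7.0, -0.5, -2.0, -7.0):
    assert abs(numeric_derivative(F, x) - 1 / x) < 1e-5
```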

$\endgroup$
23
  • 12
    $\begingroup$ You just have to know what the indefinite integral really stands for. After all it doesn't make sense to compute $\int_{-1}^1 (1/x) dx$... $\endgroup$
    – Thompson
    Commented Apr 5, 2017 at 1:59
  • 62
    $\begingroup$ @Thompson Sure. But note that the answer $\log\left|x\right|+C$ that I was taught in school is never correct. Either you are doing a definite integral, in which case you have to stick to the positives or the negatives and the answer is $\log x-\log a$ or $\log(-x)-\log(-a)$, or you want an antiderivative, in which case $\log\left|x\right|$ is fine, or you want all the antiderivatives, in which case the answer is $\log\left|x\right|+C\cdot1_{x>0}+D\cdot1_{x<0}$. No reasonable question has the answer $\log\left|x\right|+C$. $\endgroup$ Commented Apr 5, 2017 at 7:07
  • 12
    $\begingroup$ Yep. And the official solution to the 2015 Math Methods Examination 1 gets this wrong; see Question 2. I think it's pretty sad that an official solution to an exam that thousands of students take is just mistaken. This is what we get for requiring our math teachers to study teaching, when really, we should require them to study math. $\endgroup$ Commented Apr 5, 2017 at 9:20
  • 7
    $\begingroup$ Every calculus book I've read makes it a specific point to say that by convention they will only ever consider one-sided antiderivatives of $1/x$ (they will never have a situation where sometimes you want one, sometimes the other), which makes the specification of a second constant an error. The function 1/x in this context is implicitly assumed to have domain in $x<0$ (exclusively) or $x>0$; the domain is never split into two pieces. $\endgroup$ Commented Apr 5, 2017 at 19:27
  • 61
    $\begingroup$ Every calculus book I've read explicitly mentions the de Rham cohomology! (I've never read any calculus books.) $\endgroup$ Commented Apr 5, 2017 at 21:19
56
$\begingroup$

I'm not sure if this answer really fits the question. But the nice question prompted me to write down some thoughts I've been mulling for a while.

I think the simple distributive law is essentially deep mathematics that comes up early in school.

I hang out in K-3 classrooms these days. I'm struck by how often understanding a kid's problem turns out to hinge on showing how the distributive law applies. For example to explain $20+30=50$ (sometimes necessary) - you start with "2 apples + 3 apples = 5 apples" and then $$ 20 + 30 = 2 \text{ tens} + 3 \text{ tens} = (2+3)\text{ tens} = 5 \text{ tens} = 50. $$ So the distributive law is behind positional notation, and the idea that you "can't add apples to oranges" (unless you generalize to "fruits"). You even get to discuss a little etymology: "fifty" was literally once "five tens".
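The point that positional notation rides on distributivity can be made concrete: base-ten numerals behave like polynomials evaluated at $t=10$, and multiplying digit lists is just distributing term by term. A hedged sketch (the helper names are my own choice, not standard API):

```python
def digits(n, base=10):
    # Little-endian digit list: 523 -> [3, 2, 5], i.e. 3 + 2*t + 5*t^2 at t = 10.
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(d)
    return out or [0]

def poly_mul(p, q):
    # Multiply two "digit polynomials" by pure distributivity:
    # every term of p times every term of q, collecting like powers of t.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def evaluate(p, t=10):
    # Evaluating at t = 10 recovers the integer; "carrying" is nothing more
    # than rewriting coefficients bigger than 9.
    return sum(c * t**i for i, c in enumerate(p))

# (t + 1)^2 = t^2 + 2t + 1 at t = 10 is exactly 11^2 = 121.
assert poly_mul(digits(11), digits(11)) == [1, 2, 1]
assert evaluate(poly_mul(digits(523), digits(47))) == 523 * 47
```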

Euclid relies on the distributive law when he computes products as areas, as in Book II Proposition 5, illustrated with

[Figure: Euclid's diagram for Book II, Proposition 5]

The distributive law is behind lots of grade school algebra exercises in multiplying and factoring. If it were made more explicit, I think kids would understand FOIL rather than just memorizing the rule.

Later on you wish they'd stop thinking everything distributes, leading to algebra errors with square roots (and squares), logarithms (and powers).

All of this before you study linear transformations, abstract algebra, rings, and ring-like structures where you explore the consequences when distributivity fails.

$\endgroup$
16
  • 33
    $\begingroup$ This is related to the fact that place-value arithmetic is actually a specific instance of polynomial arithmetic. If I know that $11^2=121$ then I know that $(x+1)^2=x^2+2x+1$. Of course, in high school this is never explained to anyone because that would be unreasonable. I remember having an argument with a student when they insisted that they understood long division but not polynomial long division and I refused to teach them a technique for "polynomial long division" and instead started talking about the nature of the symbols we use to represent numbers. $\endgroup$ Commented Apr 4, 2017 at 19:06
  • 16
    $\begingroup$ +1 Personally, I strongly dislike "FOIL", as once learned, the majority of students stop making any effort to understand how to multiply sums by distributing, and thus are at a loss about what to do with more complex problems. $\endgroup$ Commented Apr 4, 2017 at 23:48
  • 26
    $\begingroup$ I just sit down the 8th grader and ask him what's $117 \times 277 - 116 \times 277$. A surprising number will compute it the long way. I don't even point it out to them; they often still don't see it. Then I give them progressively bigger numbers, like $13754 \times 347 - 13654 \times 347$ (and they get annoyed at being asked to do this without a calculator) until they suddenly get it. Then we go from there to trickier problems, like $97 \times 103$ without a calculator, then $498 \times 502$, and so on. $\endgroup$
    – Wildcard
    Commented Apr 5, 2017 at 0:39
  • 19
    $\begingroup$ For those who, like me, wonder what FOIL is: en.wikipedia.org/wiki/FOIL_method $\endgroup$
    – Oliphaunt
    Commented Apr 5, 2017 at 8:08
  • 69
    $\begingroup$ @Wildcard My saddest experience akin to yours occurred in a sophomore number theory class. I pointed out that it was easy to factor $2491 = 2500-9$ since it was a difference of squares. One student said "I didn't know $a^2-b^2 = (a-b)(a+b)$ works for numbers too." $\endgroup$ Commented Apr 5, 2017 at 13:17
49
$\begingroup$

Let's get the obvious example out of the way - almost all representation theorems are shadows of the Yoneda lemma. In particular all of the following facts, some of which are elementary, follow from the (enriched) Yoneda lemma.

  • That every group is isomorphic to a subgroup of a permutation group. (Cayley's theorem)
  • That every partially ordered set embeds into some power set ordered by inclusion.
  • That every graph is the intersection graph of some sets.
  • That every ring has a faithful module.
  • That for every proposition or truth value $p$ we have $p\Rightarrow \top$.
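The first bullet, Cayley's theorem, is easy to see computationally: each group element acts by left translation, and that action is an injective homomorphism into a symmetric group. A small sketch for $\mathbb{Z}/5$ under addition (the function names are mine):

```python
from itertools import product

# Cayley's theorem for Z/5: each element g acts on the group by
# left translation h -> g + h, giving a permutation of the 5 elements.
n = 5

def perm(g):
    return tuple((g + h) % n for h in range(n))

perms = {g: perm(g) for g in range(n)}

# The map g -> perm(g) is injective (an embedding into S_5)...
assert len(set(perms.values())) == n

# ...and a homomorphism: the permutation of g1 + g2 is the composite.
def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

for g1, g2 in product(range(n), repeat=2):
    assert perms[(g1 + g2) % n] == compose(perms[g1], perms[g2])
```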
$\endgroup$
42
$\begingroup$

School arithmetic (carrying, in particular) is a special case of cohomology. Reference: A Cohomological Viewpoint on Elementary School Arithmetic by Daniel C. Isaksen.
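The point of Isaksen's paper can be sampled in a few lines: the "carry" produced when adding two base-ten digits satisfies the 2-cocycle identity for the extension of $\mathbb{Z}/10$ that reconstructs ordinary integer addition. A quick brute-force check (sketch; the helper name is mine):

```python
from itertools import product

def carry(a, b):
    # The carry produced when adding two base-10 digits: 0 or 1.
    return (a + b) // 10

# The 2-cocycle condition, verified over all triples of digits:
#   carry(b, c) + carry(a, (b+c) % 10) == carry(a, b) + carry((a+b) % 10, c)
for a, b, c in product(range(10), repeat=3):
    assert carry(b, c) + carry(a, (b + c) % 10) == carry(a, b) + carry((a + b) % 10, c)
```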

$\endgroup$
2
  • 7
    $\begingroup$ I learned about cocycles in a graduate algebra class and found them to be abstract and difficult. The importance of them, while perhaps mathematically evident, never really got to my brain. This simple example of $z$ would have easily motivated both cocycle and normalization conditions for me much more effectively. Absolutely brilliant. $\endgroup$ Commented Apr 8, 2017 at 22:30
  • 1
    $\begingroup$ Oh, I remember studying cohomology in second grade, it was very insightful! It surely made multiplication, addition and fractions much simpler. $\endgroup$
    – Roy Sht
    Commented Aug 1, 2022 at 21:46
42
$\begingroup$

$$\sum_\triangle\theta=\pi$$

The maths behind Euclid's parallel postulate is so profound that it took two thousand years for us to realize that it is not, in fact, self-evident. The consequences of this postulate are fundamental to our laws of geometry; and the fact that it is not self-evident hinted, two millennia before the invention of Newtonian mechanics, that other geometries, such as those underlying Special and General Relativity, might be required to understand the Universe.

$\endgroup$
1
  • 4
    $\begingroup$ @RobertFrost That's still not correct - we can easily have $A\not=D$ but $\pi\not=ABC+DCB$. (Also minor point, we should distinguish between an angle and its measure, but this is a conflation which is common and well-understood - my point is that I genuinely don't know exactly what you're trying to say, I'm not just giving you a hard time here.) I think you're just trying for some reason to avoid saying "the internal angles of a triangle sum to $\pi$" - I don't really understand why you're doing this. $\endgroup$ Commented Apr 7, 2017 at 16:30
39
$\begingroup$

Everyone knows: There are even numbers and odd numbers. And there are rules when doing arithmetic with them: Even plus even is even, as is odd plus odd. Even plus odd gives odd. Also, odd times odd is odd, even times odd is even, as is even times even.

Of course when saying this in school, this is considered as an abbreviation of "an even number plus an even number is an even number" etc. But those formulations make sense on their own, and are just a special case of a more general structure, the rings of integers modulo $n$, which is even a field if $n$ is prime. Even and odd are just the integers modulo $2$ (and as $2$ is prime, even and odd actually form a field). The set of even numbers and the set of odd numbers are the congruence classes modulo $2$.

But there's more to it: The concept generalises from numbers to more general rings. For example it generalizes to polynomials. And then one way to define the complex numbers is to take the real polynomials modulo $x^2+1$.

But the concept of congruence can be defined much more generally. In all above examples, congruence classes are equivalence classes under the specific equivalence relation $a\equiv b \pmod n$ iff $n$ divides $a-b$. But there is no need to have the equivalence relation defined this way; one can use any equivalence relation that's compatible with the structure one considers.

This concept of congruence can for example be used to define the tensor product from the free product of vector spaces, and the exterior and symmetric algebras from the tensor product. It also, in the form of quotient groups, is an important concept in group theory.

But you can also go in a different direction: Given a prime $p$, an integer $k$ is completely determined by the sequence of its congruence classes modulo $p$, modulo $p^2$, modulo $p^3$ etc., but not all consistent sequences correspond to an integer. It is a natural question whether one can make sense of the other sequences, and indeed one can; the result is the $p$-adic integers, which then can be extended to the field of $p$-adic numbers.
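The last paragraph is easy to experiment with. Here is a consistent sequence of residues modulo $3, 3^2, 3^3, \dots$ that corresponds to no integer, namely the $3$-adic expansion of $1/2$; a sketch using Python's modular inverse (`pow(2, -1, m)`, available since Python 3.8):

```python
p = 3

# Residues of "one half" modulo p, p^2, p^3, ...:
# pow(2, -1, m) is the inverse of 2 modulo m.
seq = [pow(2, -1, p**k) for k in range(1, 8)]

# Consistency: each term reduces to the previous one modulo the smaller power.
for k in range(1, len(seq)):
    assert seq[k] % p**k == seq[k - 1]

# Each term really is "one half": doubling it gives 1 modulo p^k.
for k, r in enumerate(seq, start=1):
    assert (2 * r) % p**k == 1
```

No single integer sits behind this sequence, yet it is perfectly consistent; that is exactly the kind of object the $3$-adic integers make sense of.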

$\endgroup$
4
  • 3
    $\begingroup$ Riffing off of the even/odd numbers, then congruence modulo a polynomial, I've always meant to sit down and figure out if there's some nice algebraic stuff happening with even and odd functions. I guess since the even/oddness really gets at the degrees of monomial terms, they behave nicely when we multiply, not when we add (and so have more in common with the $(\{\pm 1\}, \cdot)$ version of the group with two elements, rather than the $(\{0, 1\}, +_{\text{mod }2})$ version). $\endgroup$
    – pjs36
    Commented Apr 5, 2017 at 14:55
  • 1
    $\begingroup$ Even and odd functions are subspaces of the vector space of all functions on $\Bbb R$. In fact, each is the other's orthogonal complement. (I think this means that quotienting out by the even functions gives the odd functions and vice versa.) $\endgroup$ Commented Apr 5, 2017 at 23:17
  • 1
    $\begingroup$ And the $p$-adics are equivalent modulo $p^{\infty}$ $\endgroup$ Commented Apr 8, 2017 at 20:00
  • 1
    $\begingroup$ @ old me: No, it means that if you integrate their product you get 0 $\endgroup$ Commented Dec 24, 2018 at 7:39
30
$\begingroup$

A planimeter is a rather simple mechanical computer; you could call its job a "computational trick". The theorem is as simple as:

The area of the shape is proportional to the number of turns through which the measuring wheel rotates.

Still the explanation of why it works starts with

The operation of a linear planimeter can be justified by applying Green's theorem onto the components of the vector field $N$ […]

and then it gets deeper.

$\endgroup$
25
$\begingroup$

If you allow conjectures, then I'm gonna throw the Collatz Conjecture into the mix:

[xkcd comic: Collatz Conjecture]

A problem simple enough to describe to just about anyone, but, as Paul Erdős said, "mathematics is simply not ready for such problems."
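For anyone who wants to play with it, the iteration fits in a few lines (a sketch; note the loop terminates only if the conjecture holds for the given starting value):

```python
def collatz_steps(n):
    # Number of steps for n to reach 1 under the 3n+1 map
    # (assuming, as the conjecture asserts, that it does).
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

assert collatz_steps(6) == 8      # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
assert collatz_steps(27) == 111   # 27 takes a famously long detour
# The conjecture, checked (not proved!) for small starting values:
assert all(collatz_steps(n) >= 0 for n in range(1, 10_000))
```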

$\endgroup$
15
  • 4
    $\begingroup$ +1. I've watched fifth graders stumble on the Collatz conjecture, do lots of arithmetic and just "know" that it's true, even if they're not sure they've solved it. This is an instance of a generic problem that might be the best general answer to the OP's general question: playing with easy/elementary numerical examples can suggest fiendishly difficult problems. Goldbach? Fermat? Edit that into your answer? $\endgroup$ Commented Apr 5, 2017 at 14:33
  • 10
    $\begingroup$ @EthanBolker: Those who think it is true based on their measly evidence should go and learn some logic, besides checking out the big list of conjectures that have extremely large counter-examples. $\endgroup$
    – user21820
    Commented Apr 5, 2017 at 14:56
  • 9
    $\begingroup$ @user21820 I think the fifth graders' problems are as much psychological as logical. It's hard to grasp the fact that any finite initial segment of the integers is essentially 0% of them all. You can use this discussion to distinguish between proof and pattern. (The first place for that is often proving odd + odd = even abstractly, not just by example.) Thanks for the link. $\endgroup$ Commented Apr 5, 2017 at 18:07
  • 1
    $\begingroup$ @g------ It's not a matter of bothering. I don't think anyone knows how to prove it. The problem is so famous that it's safe to assume a proof is published if one is found. $\endgroup$ Commented Apr 6, 2017 at 4:34
  • 6
    $\begingroup$ @g------ prove it then, if its so easy $\endgroup$ Commented Apr 6, 2017 at 14:45
22
$\begingroup$

The chain rule in calculus is pretty intuitive to students learning it for the first time. "If you get 3 y per x, and 4 z per y, how many z per x?" $$\frac{dz}{dy}\frac{dy}{dx} = (4)(3) = 12 = \frac{dz}{dx}$$ But the chain rule and its extensions and related theorems are pretty fundamental to all of calculus.
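The "3 y per x, 4 z per y" intuition survives a numeric test, and the same central-difference check works for nonlinear compositions (a sketch; the helper names are mine):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # Central-difference estimate of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# "3 y per x, 4 z per y": dz/dx = dz/dy * dy/dx = 4 * 3 = 12.
y = lambda x: 3 * x
z = lambda v: 4 * v
composite = lambda x: z(y(x))
assert abs(numeric_derivative(composite, 2.0) - 12.0) < 1e-6

# The same check for a nonlinear composition, f(x) = sin(x^2):
f = lambda x: math.sin(x * x)
x0 = 0.7
by_chain_rule = math.cos(x0 * x0) * (2 * x0)   # outer derivative times inner
assert abs(numeric_derivative(f, x0) - by_chain_rule) < 1e-6
```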

I also think that a lot of probability theory people can intuitively reason out when given very concrete problems, but the underlying math necessary to make rigorous what is going on is amazingly deep. Results about "probability" predate measure theory, so it's clear that the difficult rigor lagged behind the simple intuition. "What are the odds?" a little kid intuitively asks about an unlikely situation... "What are odds?" asks a mathematician who dedicates his life to laying groundwork for measure theory.

$\endgroup$
22
$\begingroup$

The elementary properties of the exponential function:

$e^{a+b} = e^{a}e^{b}$ (I)

$\left(e^{a}\right)^{b} = e^{ab}$ (II)

$e^{2\pi i} = 1$ (III)

$\frac{d}{dz}e^{cz} = ce^{cz}$ (IV)

Where to begin? Let me count the ways.

The first three equations basically give you all of group theory, field theory, Lie algebras, harmonic analysis, and number theory. Equation I is the prototypical example of the exponential map in Lie theory; it is also probably the first instance of a truly significant homomorphism any of us come across. The success and failure of Equation I in various spaces underlies foundational issues in functional calculus (non-commutativity of operators). It also is the crux of semigroup theory, and hence, quite a bit of the study of dynamical systems. And, of course, where would quantum mechanics be without the exponential function—Hilbert spaces of wave-functions, Lie algebras, Heisenberg groups, and gauge theories. (Also, this gives us transistors, and hence, the handy dandy laptop computer on which I am typing this prose ode to the exponential function.) Equation II (with the help of Equation III) gives you all the cyclic groups, and hence, abstract algebra. Roots of unity are foundational objects in number theory and field theory. Gauss, Kronecker, Dirichlet, and so many others have shown just how important roots of unity are, both in their own right, and as intersectional objects that provide the links between many different areas of algebra and number theory. Cyclic groups lead to characters, which lead to L-functions, group algebras, class number formulae, explicit formulas for the prime-counting function, and god knows what else—and maybe even K-theory.

Aside from further illustrating some of the already mentioned concepts, multiplying both sides of Equation III by $e^{z}$ and then using Equation I to obtain the $2\pi i$ periodicity of $e^{z}$ gives us even more; it is the prototypical example of a periodic function, and hence, of an automorphic form. This, when generalized, leads to elliptic functions and modular forms—Ramanujan's playground. Thanks to Andrew Wiles, we know that we can then proceed not only to prove Fermat's Last Theorem, but also move on to algebraic geometry (elliptic curves). Then, as always, the Bernoulli numbers suddenly appear for some magical reason, which brings us back to number theory: the Riemann zeta function. And boy, do we get a lot of mileage from $\zeta\left(s\right)$. If you look out the window to your left, you can see the Weil Conjectures, and frolicking herds of special functions—the Gamma functions, Zeta functions, and lots of polylogarithms (and, again—somehow—K-theory).

When you look out the window to your right, you'll see various important Frenchmen—Fourier, Poisson, Legendre, Laplace, Poincaré, Schwartz (to name a few). The periodicity of the exponential function (and hence, the trigonometric functions) leads to the formulation of Fourier series, empowering us in the study of partial differential equations and, eventually, functional analysis, the study of dual spaces, and the theory of distributions. Off in the distance is Mount Navier-Stokes, still waiting for someone to be the first to ascend to its peak. This Fourier foray brings us naturally to Equation IV, which underpins most of (all?) integral and differential calculus. Linear algebra emerges just as naturally from the study of differential equations, where we can see the exponential function as the eigenfunction of the derivative—the prototypical differential operator. The study of differential operators in more general contexts gives us yet more functional analysis—and also the algebraic notion of derivations. And, if you're willing to make the leap, the study of integration leads to differential geometry, which leads to Einstein, cohomology, and even category theory.

I can go on.
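For what it's worth, Equations I, III and IV can be checked numerically with complex arithmetic, and Equation II for real exponents (for complex exponents it needs care with branch cuts); a sketch:

```python
import cmath
import math

a, b, c, z = 0.3 + 1.2j, -0.8 + 0.5j, 2.0 - 1.0j, 0.7 + 0.4j

# Equation I: e^(a+b) = e^a * e^b, even for complex a and b.
assert abs(cmath.exp(a + b) - cmath.exp(a) * cmath.exp(b)) < 1e-12

# Equation II: (e^a)^b = e^(ab), checked here for real exponents only,
# since for complex b the left side depends on a choice of branch.
assert abs(math.exp(2.0) ** 3.0 - math.exp(6.0)) < 1e-9

# Equation III: e^(2*pi*i) = 1.
assert abs(cmath.exp(2j * cmath.pi) - 1) < 1e-12

# Equation IV: d/dz e^(cz) = c e^(cz), via a central difference.
h = 1e-6
numeric = (cmath.exp(c * (z + h)) - cmath.exp(c * (z - h))) / (2 * h)
assert abs(numeric - c * cmath.exp(c * z)) < 1e-6
```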

$\endgroup$
1
  • 2
    $\begingroup$ I'm very impressed by this answer, and the light it shines on just the fundamental role $e$ plays in math. Thank you! $\endgroup$
    – D.R.
    Commented Nov 3, 2019 at 23:01
21
$\begingroup$

The fundamental theorem of calculus is familiar to many: $\int_a^bf'(x)\,dx=f(b)-f(a)$ for suitable functions $f\colon[a,b]\to\mathbb R$. Here are some ideas stemming from it:

  • The usual fundamental theorem of calculus is very one-dimensional. How might one generalize that to several variables? There are different kinds of derivatives (gradients, curls, divergences and whatnot), but how do they all fit in? One natural generalization is Stokes' theorem for differential forms, which indeed contains the familiar theorem (and several higher dimensional results) as a special case.

  • The fundamental theorem of calculus implies that if the derivative of a nice function $\mathbb R\to\mathbb R$ vanishes, the function has to be constant. If the derivative is small (in absolute value), the function is almost constant. In some sense, it means that you can control the amount of change in the function by its derivative. This might not sound surprising, given the definition of a derivative, but certain generalizations of this idea are immensely useful in analysis. Perhaps the best known result of this kind is the Poincaré inequality, and it is indispensable in the study of partial differential equations.

  • Consider a function $f\colon M\to\mathbb R$ on a Riemannian manifold. Its differential $\alpha=df$ is a one-form, which satisfies $\int_\gamma\alpha=\gamma(b)-\gamma(a)$ for any geodesic $\gamma\colon[a,b]\to M$. Proving this is nothing but the good old one-dimensional theorem applied along the geodesic. If $M$ is a Riemannian manifold with boundary (simple example: closed ball in Euclidean space) and $f\colon M\to\mathbb R$ vanishes at the boundary, then $df$ integrates to zero over every maximal geodesic. You can ask the reverse question$^1$: If a one-form $\alpha$ on $M$ integrates to zero over all maximal geodesics, is there necessarily a function $f\colon M\to\mathbb R$ vanishing at the boundary so that $\alpha=df$? This turns out to be true in some cases, for example when the manifold is "simple". (This is a not-so-simple technical condition that I will not discuss here. The Euclidean ball is simple.) You can also ask similar questions for symmetric covariant tensor fields of higher order. Questions of this kind have, perhaps surprisingly, applications in real-world indirect measurement problems. Problems of this kind are known as tensor tomography, and I refer you to this review for details.


$^1$ Asking reverse questions of certain kinds is its own field of mathematics, known as inverse problems. Tensor tomography is only one of many kinds of inverse problems one could study, but surprisingly many are related to some version of it.
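Before any of these generalizations, the one-dimensional theorem itself is pleasant to verify numerically: a midpoint-rule sum of $f'$ over $[a,b]$ converges to $f(b)-f(a)$. A sketch (the function names are mine):

```python
import math

def integrate(fprime, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of fprime over [a, b].
    h = (b - a) / n
    return h * sum(fprime(a + (k + 0.5) * h) for k in range(n))

# Pick an f with a known derivative and check int_a^b f'(x) dx = f(b) - f(a).
f = lambda x: math.sin(x) + x**3 / 3
fprime = lambda x: math.cos(x) + x**2

a, b = 0.0, 2.0
assert abs(integrate(fprime, a, b) - (f(b) - f(a))) < 1e-6
```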

$\endgroup$
20
$\begingroup$

An easy theorem is quadratic reciprocity from elementary number theory. However, it reflects deep mathematics, namely that reciprocity is a very deep principle within number theory and mathematics. There is a nice article by Richard Taylor on Reciprocity Laws and Density Theorems, where he explains what the related ideas of reciprocity laws (such as quadratic reciprocity and the Shimura-Taniyama conjecture) and of density theorems (such as Dirichlet’s theorem and the Sato-Tate conjecture) are.
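The "easy theorem" end of this is entirely checkable by machine: compute Legendre symbols via Euler's criterion and test the reciprocity law over small odd primes (a sketch; `legendre` is my own helper, not a library function):

```python
def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1 if a is a quadratic
    # residue mod the odd prime p, and p - 1 (i.e. -1) otherwise.
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29]

# Quadratic reciprocity: for distinct odd primes p and q,
#   (p|q)(q|p) = (-1)^(((p-1)/2) * ((q-1)/2))
for p in odd_primes:
    for q in odd_primes:
        if p != q:
            sign = (-1) ** ((p - 1) // 2 * ((q - 1) // 2))
            assert legendre(p, q) * legendre(q, p) == sign
```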

$\endgroup$
2
  • $\begingroup$ Indeed, I learned this theorem in the context of elementary number theory in high school, and about halfway through my first algebraic number theory course this connection struck me suddenly. $\endgroup$ Commented Apr 4, 2017 at 18:38
  • $\begingroup$ Relatedly, ramification is an abstraction of some pretty straight forward elementary number theory. Given how algebraic number theory evolved, historically speaking, it seems like a place rife with examples. $\endgroup$ Commented Apr 4, 2017 at 18:40
19
$\begingroup$

Everybody knows that when you find the antiderivative of a function, you add "$+\,C$" at the end. For example, $\int x^n \,dx = \frac{1}{n+1}x^{n+1} + C$ (for $n \neq -1$). But what's really going on here? Well, the set $F$ of functions from $\mathbb{R}$ to $\mathbb{R}$ forms an $\mathbb{R}$ vector space. It has the set $D := \{ f\colon \mathbb{R} \to \mathbb{R} \mid \text{$f$ is differentiable}\}$ as a proper subspace. Now consider $$ d\colon\, D \to F \\ \quad f \mapsto f' $$ This is a vector space homomorphism! This means that we can apply the isomorphism theorem. We find: $$ C := \ker d = \{ f \in D \mid d(f) = 0\} = \{ f \in D \mid \text{$f$ is constant} \} \\ \operatorname{im} d = \{ f \in F \mid \text{$f$ has an antiderivative} \} $$ Using the isomorphism theorem, we get that $$ d_\ast\colon\, D/C \to \operatorname{im} d $$ is an isomorphism. That means that for $f \in \operatorname{im} d$ the preimage $(d_\ast)^{-1}(f)$ is well defined and equals $g + C$, where $g$ is any antiderivative of $f$. How cool is that!

$\endgroup$
16
$\begingroup$

The Brouwer fixed point theorem is highly nontrivial, but the 1D case is an easy consequence of Bolzano's theorem (the intermediate value theorem).

$\endgroup$
16
$\begingroup$

Take $\sin$ and $\cos$. At first you define them geometrically. You draw triangles and you can find formulas for $\sin(\frac \alpha 2)$, $ \cos(\beta + \gamma)$, $\frac {{\rm d} \sin (\alpha)} {{\rm d} \alpha}$, etc.

And then you learn and understand the concept of ${\rm e}^{i x}$, you can express $\sin(x)$ and $\cos(x)$ with it. Suddenly all those triangle-based formulas hook up to algebra and you can derive them relatively easily without drawing triangles.

$\endgroup$
1
  • 10
    $\begingroup$ Go deeper: this relates to the equivalence between $SO(2)$, the group of rotations of the plane, with $U(1)$, the group of complex numbers of unit length, by the mapping that takes a rotation by $\theta$ to the complex number $e^{i\theta}$ (and that both of these are essentially $S^1$ in disguise) and 'generalizes' one level out to the quaternion representation of $SO(3)$, the group of rotations of space, being double-covered by the group $SU(2)$ (or $S^0(\mathbb{H})$) of unit quaternions. $\endgroup$ Commented Apr 7, 2017 at 19:25
16
$\begingroup$

In every course on linear algebra you will learn that a real-symmetric square matrix $A$ is orthogonally diagonalizable, i.e. there exists an orthogonal matrix $P$ and a diagonal matrix $D$ such that $$A=PDP^t.$$ Perhaps the course also deals with the complex counterpart: any Hermitian matrix $A$ is unitarily diagonalizable, i.e. $$A=UDU^*$$ where $U$ is unitary. If you are lucky the course will call these theorems the spectral theorems.

Of course these are special cases of the much more general spectral theorem for bounded normal operators on Hilbert spaces. That is, given a Hilbert space $\mathcal{H}$ and a bounded normal operator $T\in B(\mathcal{H})$, then there exists a unique spectral measure $E$ on the Borel $\sigma$-algebra of $\sigma(T)$ such that $$T=\int_{\sigma(T)}\lambda \,dE(\lambda).$$ The applications of these theorems to representation theory are fundamental to the subject.

The proofs of the finite-dimensional variants are fairly easy, whereas one requires big theorems and concepts (such as spectral measures) to prove the general version. In this sense there is a long way to go from the easy variants to the full theorem; it also took a brilliant mathematician to do this. One can even weaken the boundedness of the operator.
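The finite-dimensional statement is concrete enough to verify by hand: for a real symmetric $2\times 2$ matrix, a single Jacobi rotation already produces $A=PDP^t$. A self-contained sketch with no linear algebra library (the function names are mine):

```python
import math

def diagonalize_sym_2x2(a, b, c):
    # Orthogonally diagonalize [[a, b], [b, c]]: choose the rotation angle
    # theta with tan(2*theta) = 2b / (a - c), so P^T A P is diagonal.
    theta = 0.5 * math.atan2(2 * b, a - c)
    ct, st = math.cos(theta), math.sin(theta)
    # Diagonal entries of P^T A P (the eigenvalues).
    l1 = a * ct * ct + 2 * b * st * ct + c * st * st
    l2 = a * st * st - 2 * b * st * ct + c * ct * ct
    return (l1, l2), ((ct, -st), (st, ct))

def matmul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# Diagonalize A = [[2, 1], [1, 2]] and reconstruct A = P D P^t.
(l1, l2), P = diagonalize_sym_2x2(2.0, 1.0, 2.0)
D = ((l1, 0.0), (0.0, l2))
Pt = tuple(zip(*P))
A = matmul(matmul(P, D), Pt)
assert abs(A[0][0] - 2.0) < 1e-12 and abs(A[0][1] - 1.0) < 1e-12 and abs(A[1][1] - 2.0) < 1e-12
# The eigenvalues of [[2, 1], [1, 2]] are 3 and 1.
assert abs(max(l1, l2) - 3.0) < 1e-12 and abs(min(l1, l2) - 1.0) < 1e-12
```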

$\endgroup$
2
  • 1
    $\begingroup$ Halmos has a nice paper on this (the large distance one must travel from the humdrum theorem to the Hilbert space generalization), called "What does the spectral theorem say?"; perhaps you know it, but leaving it here for others: math.wsu.edu/faculty/watkins/Math502/pdfiles/spectral.pdf $\endgroup$ Commented May 8, 2017 at 8:39
  • 1
    $\begingroup$ @symplectomorphic: I didn't know the paper, it's well explained though. It's a nice introduction to this fundamental theorem whose importance cannot be overstated. $\endgroup$ Commented May 9, 2017 at 9:31
14
$\begingroup$

Schur's lemma (in its various incarnations) is my go-to example for this sort of question. It is quite simple to prove — Serre does it in a matter of two short paragraphs in ''Linear Representations of Finite Groups'' — yet is the backbone for many foundational results in basic representation theory, including the usual orthogonality relations for characters.

It is also a very useful result in the setting of basic noncommutative algebra, where it is similarly simple to prove (Lam does it in two lines in ''A First Course in Noncommutative Rings''!), and has a host of interesting and important consequences. For instance, in ''A First Course in Noncommutative Rings'', Lam uses it in his proof of the Artin-Wedderburn classification of left semisimple rings, a major result in basic noncommutative ring theory.

I should add that Wikipedia notes that Schur's lemma has generalizations to Lie Groups and Lie Algebras, though I am less familiar with these results.

$\endgroup$
12
$\begingroup$

The case $n = 4$ of Fermat's Last Theorem can be proved by elementary means. But the proof of the general case

[...] stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century.
$\endgroup$
12
$\begingroup$

If anyone has seen an introduction to knot theory, they have probably seen the proof that the trefoil is not the unknot by tricolorability.

[Image: trefoil knot, with its 3 arcs each colored a different color]

[Image By Jim.belk - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=7903214 -- (Thanks @JimBelk) ]

Well, there is a more general invariant called $n$-colorability, and all of these are actually a special case of something called a quandle.

A very important theorem about knot quandles:

The fundamental quandle of a knot is a complete invariant, i.e., it completely classifies all knots.

And this is extremely important. There are not many complete invariants, so when there is one, we would love to really understand it better.
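The tricolorability argument itself is a finite check: color the arcs with elements of $\mathbb{Z}/3$ so that at each crossing twice the overstrand's color equals the sum of the two understrands' colors, then count solutions. A sketch for the standard trefoil diagram (the encoding of crossings as triples is my own choice):

```python
from itertools import product

def colorings(num_arcs, crossings):
    # Count colorings of the arcs by Z/3 such that at each crossing
    # (over, under1, under2):  2*c(over) == c(under1) + c(under2)  (mod 3).
    count = 0
    for col in product(range(3), repeat=num_arcs):
        if all((2 * col[o] - col[u] - col[v]) % 3 == 0 for o, u, v in crossings):
            count += 1
    return count

# Standard trefoil diagram: 3 arcs and 3 crossings, each arc passing over once.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

assert colorings(1, []) == 3          # unknot diagram: only the 3 one-color colorings
assert colorings(3, trefoil) == 9     # trefoil: 3 trivial + 6 honest tricolorings
```

Having more valid colorings than the three monochrome ones is exactly what distinguishes the trefoil from the unknot.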

$\endgroup$
10
$\begingroup$

Multiplication of integers. This takes distributivity as discussed in Ethan Bolker's example in a slightly different direction. I'm pretty sure this idea is in Mathematics Made Difficult, which likely includes many more instances as well as many instances of purely obfuscatory proofs.

Even at completely elementary levels it's not unusual to demonstrate something like $3\times 4 = 12$ as $$3\times 4 = (1+1+1)\times 4 = 1\times 4+ 1\times 4 + 1\times 4 = 4+4+4 = 12$$

One could describe this as, "every integer is a sum or difference of $1$s and multiplication simply replaces each of those $1$s with a different integer". Or, as a modern mathematician would state it, the integers are the free group on one generator and multiplication is the group homomorphism $F(1)\to F(1)$ induced by elements of $F(1)$ (that is to say functions $1 \to |F(1)|$). It's nice how this automatically gives distributivity, associativity, unit, and zero laws of multiplication. This example is actually a good example demonstrating the ideas behind the notion of a free group.
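The "replace each $1$ by a different integer" description is directly executable, and distributivity really does come for free (a sketch; the function name is mine):

```python
def times(n, m):
    # Multiplication as the homomorphism Z -> Z sending the generator 1 to m:
    # replace each of the |n| copies of 1 in n = +/-(1 + 1 + ... + 1) with m.
    result = 0
    for _ in range(abs(n)):
        result += m
    return result if n >= 0 else -result

assert times(3, 4) == 12      # (1+1+1) * 4 = 4 + 4 + 4
assert times(-2, 5) == -10    # homomorphisms respect inverses
assert times(0, 7) == 0       # ...and the identity

# Distributivity falls out automatically: (a+b) * m = a*m + b*m.
for a in range(-5, 6):
    for b in range(-5, 6):
        assert times(a + b, 6) == times(a, 6) + times(b, 6)
```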

$\endgroup$
10
$\begingroup$

Yet another "simple idea that generalizes to mathematics of startling depth" is Euler's Polyhedral Formula $$ V - E + F = 2, $$ where $V$ is the number of vertices of a convex 3-dimensional polyhedron, $F$ is the number of its faces, and $E$ is the number of its edges.

The polyhedral formula can be explained to 5th graders, yet it gives rise to the Euler characteristic (an early example of a topological invariant), which in turn admits beautiful generalizations to higher dimensions - and also serves as a bridge from topology to geometry via the Gauss-Bonnet theorem.
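The formula can be checked directly against the five Platonic solids (the vertex/edge/face counts below are the standard ones):

```python
# (V, E, F) for the five Platonic solids
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in solids.items():
    assert V - E + F == 2, name
```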

$\endgroup$
3
  • $\begingroup$ Also the Lefschetz fixed point theorem, the Poincare-Hopf theorem, the Euler class, .... , probably many more things than these too. $\endgroup$ Commented Jun 28, 2017 at 15:09
  • $\begingroup$ This is my weird comment, but I don't like the order in which you've written the operations $+,-$, since the appropriate generalization is an alternating sum, and I think it's more natural to write the sum/difference in increasing dimension $\endgroup$ Commented Apr 30, 2018 at 19:15
  • 1
    $\begingroup$ Edited as suggested. Thanks! $\endgroup$
    – Alex
    Commented May 5, 2018 at 17:49
9
$\begingroup$

The equality of mixed partials (Clairaut-Schwarz theorem): If $E\subset \mathbb{R}^n$ is an open set, and $f\in\mathcal{C}^2(E)$, then $D_{ij} f=D_{ji}f$.

The proof, given twice continuous differentiability, is elementary, but gives rise to the property that $d(d\omega)=0$ for any differential form $\omega$, a fundamental property of the exterior derivative that has an enormous number of implications in differential and algebraic topology.
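A quick numerical sanity check of the theorem (the test function, evaluation point, and step sizes below are arbitrary choices of mine): approximate both orders of differentiation with nested central differences, using different inner and outer step sizes so the two formulas are genuinely distinct, and compare.

```python
import math

def f(x, y):
    return math.exp(x * y) * math.sin(x + y * y)

def d_dx(g, x, y, h):
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def d_dy(g, x, y, h):
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

# Different step sizes inner vs. outer, so the two nested formulas differ.
f_xy = d_dy(lambda a, b: d_dx(f, a, b, 1e-4), 0.3, 0.7, 2e-4)
f_yx = d_dx(lambda a, b: d_dy(f, a, b, 1e-4), 0.3, 0.7, 2e-4)
assert abs(f_xy - f_yx) < 1e-4
```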

$\endgroup$
9
$\begingroup$

Equality of mixed partials $$\frac{\partial^2f}{\partial x\,\partial y} = \frac{\partial^2f}{\partial y\, \partial x}$$ is the simplest instance of several far-reaching ideas in geometry and topology. Here are several examples to justify this claim.

(1) Equality of mixed partials is the reason the exterior derivative squares to zero ($d(d\omega) = 0$), meaning that the de Rham complex is actually a complex, so de Rham cohomology makes sense. And since de Rham cohomology is "dual" to (say) singular homology, equality of mixed partials is (formally) equivalent to the statement that "the boundary of a boundary of a geometric object is empty."

(2) On curved spaces (Riemannian manifolds), equality of mixed partials fails in a variety of contexts. Both the "torsion of a connection" and "curvature of a connection" measure this failure (in different senses).

(3) As a generalization of (2): One can ask when a given geometric structure (a $G$-structure) on a manifold is locally equivalent to the relevant flat model. For instance, a local frame field $(e_1, \ldots, e_n)$ on a manifold arises from a (local) coordinate system if and only if the Lie brackets vanish: $[e_i, e_j] = 0$ (i.e., mixed partials commute). The Newlander-Nirenberg Theorem in complex geometry and Darboux's Theorem in symplectic geometry also fit this paradigm.

(4) As a generalization of (3): Equality of mixed partials is a necessary "integrability condition" to solve various overdetermined systems of PDE. In many instances, this necessary condition for solvability is sufficient. One of the most beautiful instances of this is the Frobenius Theorem.

(4a) The Frobenius Theorem is responsible for the fact that Lie algebras can be "integrated" to Lie groups. The "equality of mixed partials" in this case is exactly (literally) the Jacobi identity (for Lie algebras).

(4b) Another use of the Frobenius Theorem is to prove Bonnet's Theorem (the "Fundamental Theorem of Hypersurfaces") that the Gauss-Codazzi equations (equality of mixed partials) are the necessary and sufficient (!) conditions for two quadratic forms (one positive-definite) to be the first and second fundamental forms of an immersion of a hypersurface into Euclidean space.

$\endgroup$
7
$\begingroup$

Thinking about the words that the OP wrote, "simple ideas that contain within them, or generalize to, mathematics of startling depth", what comes to mind is the special case of Euler's formula known as Euler's identity. It is indeed (as Wikipedia puts it) "often cited as an example of deep mathematical beauty".

$$e^{i \pi}+1=0$$

A short and simple formulation, but the result rests on the development of several fields: the study of the periodicity of the trigonometric functions, complex logarithms, and series expansions of the exponential and trigonometric functions by Bernoulli, Euler, and others.
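The series-expansion route can be checked numerically: the partial sums of $\sum_{n\ge 0} (i\pi)^n/n!$ converge to $-1$. A small sketch:

```python
import math

# Partial sums of the exponential series at z = i*pi approach -1.
z = complex(0, math.pi)
partial_sum, term = 0j, complex(1, 0)
for n in range(1, 30):
    partial_sum += term
    term *= z / n          # term is now z**n / n!
assert abs(partial_sum + 1) < 1e-10
```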

$\endgroup$
5
$\begingroup$

Today's youth are growing up with computers; they are used to texting and sending digital images to each other. They are familiar with zooming in and out of images. So, at least at the intuitive level, they know what coarse graining is. Now, whenever mathematics is applied to the real world, one can always ask how the applied formalism follows from the fundamental laws of physics.

In general, when the question is about some effective model used to describe macroscopic phenomena, deriving the model from first principles, say from the properties of molecules, can be extremely complicated. But in principle it's clear that the derivation will involve integrating out the microscopic degrees of freedom that the system described by the model ultimately consists of. This observation is in some cases good enough to derive certain scaling relations of the model. The argument is that you could have integrated out a bit more and then rescaled the system; the effect of this is that the model's parameters change a bit.

If you're on board a plane flying over the ocean, then looking down at the ocean you'll see water; the extremely coarse-grained version of water still looks like water. But if you make a video of it and pretend it's a video of a fluid taken from a short distance, the fluid would appear to have different properties from real water; e.g., its viscosity would have to be a lot larger.

The equations that determine the change in parameters equivalent to a rescaling are called renormalization group equations. This way of thinking about rescaling and coarse graining was applied to the theory of phase transitions by Kenneth Wilson, who won the Nobel Prize for this work.

$\endgroup$
1
  • $\begingroup$ This is very interesting, but what is the "simple theorem" that is an instance of deep mathematics here? $\endgroup$ Commented Aug 30, 2021 at 0:17
5
$\begingroup$

I think another nice example of an easily understandable or "obvious" statement is the Jordan Curve Theorem:

Every continuous non-self-intersecting loop (a so-called Jordan curve) in the real plane splits it into exactly two connected components, one of which is unbounded.

I think the statement is very easy to believe, but all the elementary proofs I know get very technical (you try to approximate the curve by a polygon and reduce to that case). However, there is a really beautiful argument if you dig deeper and start using algebraic topology, more precisely homology theory. And as soon as you are there, you have entered one of the areas with (probably) the most profound impact on modern theoretical mathematics.
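For the polygonal case at the heart of those elementary proofs, the inside/outside decision can even be made computationally by ray casting: a point is inside iff a horizontal ray from it crosses the curve an odd number of times. A minimal sketch (the function name and test polygon are my own illustration):

```python
def inside(point, polygon):
    """Ray casting: count crossings of a rightward ray with polygon edges."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # Edge straddles the ray's height; find the crossing's x.
            t = (y - y1) / (y2 - y1)
            if x1 + t * (x2 - x1) > x:
                crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
assert inside((1, 1), square)
assert not inside((3, 1), square)
```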

$\endgroup$
2
$\begingroup$

I think that in this list a place should be reserved for the Chinese Remainder Theorem and for imaginary numbers, since:
- both appeared as a computational "trick", "puzzle", or "curiosity";
- it took centuries to put them on solid "ground";
- thereafter they opened the way to new theoretical fields, developments, and countless applications;
- nowadays they are widely regarded as "common" and relatively "simple" basic tools.
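As a concrete illustration of the CRT as a computational "trick", here is a standard extended-Euclid solver (the helper names are my own) applied to Sunzi's original puzzle:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(r1, m1, r2, m2):
    """Solve x = r1 (mod m1), x = r2 (mod m2) for coprime moduli."""
    g, p, q = ext_gcd(m1, m2)
    assert g == 1
    return (r1 * q * m2 + r2 * p * m1) % (m1 * m2)

# Sunzi's classic puzzle: x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)
x = crt(crt(2, 3, 3, 5), 15, 2, 7)
assert x == 23
```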

$\endgroup$
1
  • 2
  • $\begingroup$ Centuries? It is more like two millennia for the CRT. And the Wikipedia page does not really give an intuitive understanding of the theorem, apart from the rigorous definition. For convenience: this theorem is about calculating the intersection point of two (or more) periodic events started at known times, or showing that no intersection point exists at all... A very useful tool for many sciences, like astronomy, indeed. $\endgroup$ Commented Jul 23, 2019 at 22:47
-3
$\begingroup$

You learned the Taylor expansion of a smooth function. It tells you that there is a power series "identical" to your starting function.

It also tells you that pretty much all the math you have learned up until that moment is fake, and can be thrown away. Be a little patient with me here while I explain.

So all the properties of such a function are contained in its behavior at 0; all you need to represent the function is the list of all its derivatives at 0.

This is very different from any real-world function like the temperature outside. You cannot study the weather at midnight and then know the temperature for all time to come and all time that was. Nothing in the real world behaves like this.

All the analysis of slopes, maxima, intersections, limits, etc. can be done by looking at the function only at $x=0$. There is no purpose in drawing the function in a diagram; that is just meaningless doodling.

For real-world functions like the temperature outside, a graph and intersections make sense (is the air warmer than the lake?). The weather is a real mathematical problem that a lot of math is used to solve.

Back to our smooth functions: it is all a charade. As you draw functions and wonder when they will intersect or when they will reach a maximum, you are doing something useless. The answer is already in the function's behavior at 0, or at any other single point for that matter. So instead of drawing a function across some interval, just describe it carefully around 0. Done. No more is needed.

Example: when does $x^2$ intersect $e^x$? You might as well ask when

$$\{0, 0, 2, 0, \dots\} = \{1, 1, 1, 1, \dots\}.$$
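To illustrate the claim in the one case where it actually holds, namely analytic functions (as the comments point out, it fails for general smooth functions), here is a sketch reconstructing $e^1$ from nothing but the derivative sequence $(1, 1, 1, \dots)$ of $e^x$ at $0$:

```python
from math import factorial

# For an analytic function, derivatives at 0 determine values elsewhere:
# sum of d_n / n! evaluates the Taylor series of e^x at x = 1.
derivs_at_0 = [1] * 25
approx_e = sum(d / factorial(n) for n, d in enumerate(derivs_at_0))
assert abs(approx_e - 2.718281828459045) < 1e-12
```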

$\endgroup$
6
  • 2
    $\begingroup$ Of course, "most" smooth functions are analytic nowhere.... $\endgroup$ Commented Apr 11, 2017 at 0:24
  • 1
    $\begingroup$ The use of the word "real" here is confusing, perhaps "real-world" would be better $\endgroup$ Commented Apr 11, 2017 at 2:42
  • 2
    $\begingroup$ Correct me if I'm wrong but can't a function have different Taylor expansions at different neighbourhoods of $x$? $\endgroup$
    – Jam
    Commented Apr 24, 2017 at 18:37
  • 2
    $\begingroup$ This looks to me like a gross misunderstanding on what Taylor series are about - which is local approximation and not global approximation. Also most students are confronted with examples where there is a finite radius of convergence early on like $\sqrt{x}$, $\log(x)$ or even $\exp(-\frac{1}{x^2})$. $\endgroup$
    – Hyperplane
    Commented Apr 18, 2018 at 16:30
$\begingroup$ [like a gross misunderstanding] It is no misunderstanding; I believe you are not getting how fundamental this is. This can be done on any domain of interest. All algebraic functions are analytic (excluding poles), and they are holomorphic functions. The existence of a complex derivative in a neighborhood is a very strong condition, for it implies that any holomorphic function is actually infinitely differentiable and equal to its own Taylor series. $\endgroup$
    – Viking
    Commented Apr 19, 2018 at 21:58
