48
$\begingroup$

I just learned (thanks to Harry Gindi's answer on MO and to Qiaochu Yuan's blog post on AoPS) that the Chinese remainder theorem and Lagrange interpolation are really just two instances of the same thing. Similarly, the method of partial fractions can be applied to rational numbers rather than just polynomials. I find that seeing a method applied in different contexts, or learning a connection that wasn't apparent, gives me a deeper understanding of both.
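
For concreteness, the common statement behind both, as I understand it, is the ring isomorphism $R/(I_1 \cdots I_k) \cong R/I_1 \times \cdots \times R/I_k$ for pairwise coprime ideals. Over $\mathbb{Z}$ this is the classical Chinese remainder theorem; over $k[x]$, taking $I_j = (x - a_j)$ and noting that reduction mod $(x - a_j)$ is evaluation at $a_j$, it says that a polynomial of degree less than $n$ is determined by its values at $n$ distinct points, which is Lagrange interpolation: $$k[x]\Big/\prod_{j=1}^n (x - a_j) \;\xrightarrow{\ \sim\ }\; \prod_{j=1}^n k[x]/(x - a_j) \;\cong\; k^n, \qquad p \mapsto \big(p(a_1), \dots, p(a_n)\big).$$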

So I ask, can you help me find more examples of this? Especially ones which you personally found inspiring.

$\endgroup$
8
  • 10
    $\begingroup$ Community wiki? $\endgroup$ Commented Aug 1, 2010 at 20:15
  • 1
    $\begingroup$ If you want to see a neat approach to the Chinese remainder theorem, check out page 14 of math.stanford.edu/~vakil/0708-216/216class0708.pdf $\endgroup$
    – BBischof
    Commented Aug 1, 2010 at 20:30
  • 1
    $\begingroup$ I removed the abstract algebra tag because I think it implies you are restricting answers, which I don't think you want to do. $\endgroup$
    – BBischof
    Commented Aug 1, 2010 at 20:36
  • 1
    $\begingroup$ @Akhil (not that I don't like gaining rep but) maybe you should edit it to make CW?.. $\endgroup$
    – Grigory M
    Commented Aug 1, 2010 at 21:02
  • 1
    $\begingroup$ @Grigory M: We can't edit to make it CW, the original poster has to (or maybe a ♦mod). $\endgroup$
    – Isaac
    Commented Aug 1, 2010 at 22:17

15 Answers

25
$\begingroup$

Galois Connections

Let's be honest, the correspondence between Galois groups and field extensions is pretty hott. The first time I saw this I was duly impressed. However, about two years ago, I learned about universal covering spaces. Wow! I swear my understanding of covering spaces doubled when the prof told me that this was a "Galois correspondence for fundamental groups and covering spaces".
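
Roughly, and only as a sketch (suppressing the usual niceness hypotheses on the space $X$ and the choice of basepoint): $$\{\text{connected covers } Y \to X\}\big/\cong \;\;\longleftrightarrow\;\; \{\text{subgroups } H \le \pi_1(X)\},$$ with the universal cover corresponding to the trivial subgroup, just as intermediate fields of a Galois extension $L/K$ correspond to subgroups of $\operatorname{Gal}(L/K)$.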

Again here is a link!

$\endgroup$
3
  • 2
    $\begingroup$ Well, it's more like a (very impressive, indeed) analogy than a generalization, isn't it? $\endgroup$
    – Grigory M
    Commented Aug 1, 2010 at 20:40
  • $\begingroup$ @Grigory Hmm, I don't know. For the purposes of answering this question, I will take the position that it is clearly a generalization. :D $\endgroup$
    – BBischof
    Commented Aug 1, 2010 at 21:03
  • 3
    $\begingroup$ Also, here is a MO question from way back that you might be interested in if you like this answer. I also answered that question with a pretty neat version of this correspondence. :D mathoverflow.net/questions/546/… $\endgroup$
    – BBischof
    Commented Aug 1, 2010 at 21:04
24
$\begingroup$

I agree! I spend much of my mathematical free time exploring such connections.

Here is a basic one that I constantly ponder. The rules of matrix multiplication encode two things:

  • How to compose a linear transformation $A$ with another linear transformation $B$, with respect to a fixed basis.
  • How to follow an edge of type $A$ on a graph, and then follow an edge of type $B$ (where $A$ and $B$ are just a disjoint partition of the set of edges).

This means that one can study walks on graphs by studying how a matrix called the adjacency matrix behaves. This leads into all sorts of beautiful mathematics; for example, this is the basic tool behind Google's PageRank algorithm, and it also in some sense motivated Heisenberg's matrix mechanics formulation of quantum mechanics. I often try to recast results in linear algebra in terms of some combinatorial statement about walks on graphs.
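
As a concrete illustration of the adjacency-matrix point (a small sketch of my own, assuming numpy is available):

```python
import numpy as np

# Adjacency matrix of the 4-cycle 0 - 1 - 2 - 3 - 0.
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

# Entry (i, j) of A^k counts the walks of length k from vertex i to
# vertex j: matrix multiplication sums over all intermediate vertices,
# which is exactly the same rule that composes linear maps.
A3 = np.linalg.matrix_power(A, 3)
print(A3[0, 1])  # number of 3-step walks from vertex 0 to vertex 1
```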

$\endgroup$
7
  • 1
    $\begingroup$ The relevant posts (in a broad sense) are under the tag "walks on graphs": qchu.wordpress.com/tag/walks-on-graphs . I haven't discussed some of the easier ones explicitly: for example, there is a very tidy proof that tr(A_1 ... A_n) = tr(A_n A_1 ... A_{n-1}). $\endgroup$ Commented Aug 1, 2010 at 22:33
  • 3
    $\begingroup$ How is this an example of a generalization? :) $\endgroup$ Commented Aug 1, 2010 at 22:35
  • 5
    $\begingroup$ This falls under "learning a connection that wasn't apparent," I think. $\endgroup$ Commented Aug 2, 2010 at 2:24
  • 1
    $\begingroup$ @AmV: given a graph $G$, color some of the edges red and others blue. Let $r_{ij}, b_{ij}$ denote the number of red (resp. blue) edges between vertices $i$ and $j$. We get two matrices $R, B$, and their product $C = RB$ has coefficients $c_{ij} = \sum r_{ik} b_{kj}$. This is precisely the number of ways to get from $i$ to $j$ by first crossing a red edge and then a blue edge. Matrix theory is discussed from this perspective in, for example, Brualdi and Cvetkovic: books.google.com/… $\endgroup$ Commented Jul 28, 2011 at 15:01
  • 1
    $\begingroup$ (The above picture generalizes to non-square matrices but one needs to use in full generality three sets of vertices instead of one. This is all very intuitive though: the vertices just represent chosen bases of three vector spaces.) $\endgroup$ Commented Jul 28, 2011 at 15:02
19
$\begingroup$

Classification of finitely-generated abelian groups and Jordan normal form are two instances of the structure theorem for finitely generated modules over a principal ideal domain.
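
A small computational sketch of the parallel (my addition, assuming sympy is available; the matrices are just illustrative examples):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Over the PID Z: the Smith normal form diag(d1, d2, ...) of a relation
# matrix exhibits the abelian group it presents as Z/d1 x Z/d2 x ...
R = Matrix([[2, 4, 4], [-6, 6, 12], [10, -4, -16]])
print(smith_normal_form(R, domain=ZZ))

# Over the PID k[x]: the same structure theorem, applied to k^n viewed as
# a k[x]-module with x acting as M, yields the Jordan form of M
# (when the eigenvalues of M lie in k).
M = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
P, J = M.jordan_form()
print(J)
```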

$\endgroup$
5
  • 6
    $\begingroup$ Another one is the solution of homogeneous linear differential equations with constant coefficients! $\endgroup$ Commented Aug 2, 2010 at 6:24
  • $\begingroup$ @Qiaochu, could you expand on that a bit / provide a reference? I know very little about the theory of differential equations unfortunately. $\endgroup$ Commented Jul 26, 2011 at 2:51
  • 4
    $\begingroup$ @Zev: the space of solutions to a homogeneous linear differential equation with constant coefficients has a $k[D]$-module structure, where $D$ is differentiation, and moreover it's finitely-generated. So the structure theorem tells you what kind of decomposition to expect, and then you explicitly construct the corresponding generalized eigenvectors $x^n e^{rx}$. $\endgroup$ Commented Jul 26, 2011 at 2:55
  • $\begingroup$ @Qiaochu: Do you have a reference where I could see the details of this example? Thank you. $\endgroup$ Commented Aug 20, 2011 at 1:08
  • $\begingroup$ @Bruno: unfortunately I don't know a place where these ideas are written down in this language. Once you learn about Jordan normal form (using generalized eigenvectors), then uniqueness and existence of ODEs, it should be a nice exercise. $\endgroup$ Commented Aug 20, 2011 at 2:04
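
A minimal worked instance of the idea in the comments above (a sketch): for $y'' - 2y' + y = 0$, the solution space is a $k[D]$-module annihilated by $(D-1)^2$, and the structure theorem identifies it with the cyclic module $$k[D]\big/\big((D-1)^2\big),$$ generated by $x e^{x}$, with $k$-basis $\{x e^{x},\, e^{x}\}$: exactly the generalized eigenvector and eigenvector one writes down for a Jordan block with eigenvalue $1$.
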
19
$\begingroup$

I loved learning about how differential forms and the exterior derivative generalize 3-d vector calculus (div, grad, curl). Differential forms are so elegant in comparison, work in arbitrary dimensions, and give rise to beautiful mathematics (e.g. de Rham cohomology, Hodge theory). And of course, the generalized Stokes' theorem is one of the prettiest equations: $\int_{\partial R} \phi = \int_R d\phi$.
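
To spell out the dictionary in $\mathbb{R}^3$ (a sketch of the standard identifications): $$df \;\leftrightarrow\; \nabla f, \qquad d\big(F_1\,dx + F_2\,dy + F_3\,dz\big) \;\leftrightarrow\; \nabla \times \mathbf{F}, \qquad d\big(F_1\,dy\wedge dz + F_2\,dz\wedge dx + F_3\,dx\wedge dy\big) \;=\; (\nabla \cdot \mathbf{F})\, dx\wedge dy\wedge dz,$$ and the single identity $d \circ d = 0$ packages both $\nabla \times (\nabla f) = 0$ and $\nabla \cdot (\nabla \times \mathbf{F}) = 0$.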

$\endgroup$
9
$\begingroup$

The law of cosines and the equation for the variance of a sum of (possibly correlated) random variables are both consequences of basic inner product space properties. Details here.
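
A brief sketch of the common computation: in any real inner product space, $$\|u \pm v\|^2 \;=\; \|u\|^2 + \|v\|^2 \pm 2\langle u, v\rangle.$$ With $u, v$ two sides of a triangle and $\langle u, v\rangle = \|u\|\,\|v\|\cos\theta$, the minus sign gives the law of cosines; with $u = X$, $v = Y$ centered random variables and $\langle X, Y\rangle = \operatorname{Cov}(X, Y)$, the plus sign reads $\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X, Y)$.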

$\endgroup$
1
9
$\begingroup$

Some examples of mathematical formulas applied ‘elsewhere’ than where they would normally be expected, which I personally found very surprising and inspiring, and sometimes even outright ‘shocking’, are the following (a quick numerical sanity check of a few of them appears after the list):


1. Extending factorials to non-natural arguments: The factorial of any positive real number is the generalized Gaussian integral $\mathcal{G}$ evaluated at its reciprocal (multiplicative inverse): $$n!\ =\ \mathcal{G}\left(\frac1n\right) \qquad,\qquad \mathcal{G}(n)\ =\ \int_0^\infty e^{-x^n}\, dx \qquad,\qquad n > 0$$ which for $n = \tfrac12$ gives us the famous identity $\Gamma^2\left(\tfrac12\right) = \pi$.


2. Extending combinations or binomial coefficients to non-natural arguments: The well-known formula $$C_n^k\ =\ \displaystyle\prod_{j=0}^{k-1} \frac{n-j}{k-j}$$ works just as well for any other value of $n$, be it negative $(\mathbb{Z}\setminus\mathbb{N})$, fractional $(\mathbb{Q}\setminus\mathbb{Z})$, irrational $(\mathbb{R}\setminus\mathbb{Q})$, or even complex $(\mathbb{C})$!


3. Extending Newton's famous binomial theorem to non-natural powers, with the help of the previous observation: For instance, $$\frac1{(a + b)^n}\ =\ \frac1{b^n} \cdot \sum_{k=0}^\infty\ C_{-n}^k \cdot \left(\frac{a}{b}\right)^k\ \qquad;\qquad\ \sqrt[n]{a + b}\ =\ \sqrt[n]b \cdot \sum_{k=0}^\infty\ C_\frac1n^k \cdot \left(\frac{a}{b}\right)^k$$ etc., where $|a| < |b|$ (needed for convergence). Basically, since $k$ takes only natural values, it never ‘reaches’ its non-natural upper index, so the sum never terminates but runs to $\infty$.


4. Linking binomial coefficients and Gaussian integrals (both notions being usually associated with the fields of combinatorics, probability theory, and statistics) to geometric shapes known as super-ellipses $(X^n + Y^n = R^n$ , related to the equation of the circle, as well as to Fermat's last theorem $a^n + b^n = c^n$ : see Fermat curves), by means of the famous Wallis integrals $($proof by induction, using the binomial theorem) : $$\int_0^1 \left(1 - \sqrt[n]x\right)^m dx\ =\ \int_0^1 \left(1 - \sqrt[m]x\right)^n dx\ =\ \frac1{C_{m + n}^n}\ =\ \frac1{C_{m + n}^m}\ =\ \frac{m! \cdot n!}{(m + n)!}$$ which are then easily extended to non-natural arguments using the generalized expression for the factorial function described earlier at point 1 above : $$\int_0^1 \sqrt[m]{1 - x^n} dx\ =\ \int_0^1 \sqrt[n]{1 - x^m} dx\ =\ \frac{\frac1m ! \frac1n !}{\left(\frac1m + \frac1n\right)!}\ =\ \frac{\mathcal{G}(m) \cdot \mathcal{G}(n)}{\mathcal{G}\left(\frac{m\ \cdot\ n}{m\ +\ n}\right)}$$ and $$\int_0^1 \frac{dx}{\sqrt[m]{1 - x^n}}\ =\ \frac{\left(-\frac1m\right) ! \frac1n !}{\left(\frac1n - \frac1m\right)!}$$ where the value of $\left(-\frac1m\right) !$ is deduced from the reflection formula $(-x)! = (x!\cdot{\text{sinc}}[\pi x])^{-1}$ .

The connection between Wallis integrals and superelliptic areas is plain and straightforward; the one between Gaussians and superellipses, however, quickly becomes clear if one integrates the exponential function along both axes, thus transforming a product of two exponentials into an exponential of a polynomial sum, which ultimately results in: $$\int_0^\infty\int_0^\infty e^{-(x^n\ +\ y^n)}\ dx\ dy .$$


5. Speaking of ‘surprising generalizations’: Continued fractions are actually nested radicals in disguise! If

$$\underset{_{\text{k}\,=\,0}}{\overset{n}{\Large\Xi}}\ \Bigg(a_{_\text{k}}\ ,\ b_{_\text{k}},\ \frac1{N_{_\text{k}}}\Bigg)\ =\ \sqrt[^{N_{_\text{0}}}]{a_{_\text{0}}\ +\ b_{_\text{0}}\sqrt[^{N_{_\text{1}}}]{a_{_\text{1}}\ +\ b_{_\text{1}}\sqrt[^{N_{_\text{2}}}]{\ldots\ \sqrt[^{N_{_{n}}}]{a_{_{n}}}}}}$$ then $$\underset{_{\text{k}\,=\,0}}{\overset{n}{\Large\Xi}}\ \Big(a_{_\text{k}}\ ,\ b_{_\text{k}},\ -1\Big)\ =\ \cfrac1{a_{_0}\ +\ \cfrac{b_{_0}}{a_{_1}\ +\ \cfrac{b_{_1}}{\ddots\ {a_{_n}}}}}$$ Also, $$\underset{_{\text{k}\,=\,0}}{\overset{n}{\Large\Xi}}\ \Big(a_{_\text{k}}\ ,\ 1,\ 1\Big)\ =\ \sum_{k\,=\,0}^n a_{_\text{k}} \qquad\qquad;\qquad\qquad \underset{_{\text{k}\,=\,0}}{\overset{n}{\Large\Xi}}\ \Big(0,\ a_{_\text{k}}\ ,\ 1\Big)\ =\ \prod_{k\,=\,0}^n a_{_\text{k}}$$ and $$\underset{_{\text{k}\,=\,0}}{\overset{n}{\Large\Xi}}\ \Big(a_{_\text{k}}\ ,\ x,\ 1\Big)\ =\ \sum_{k\,=\,0}^n a_{_\text{k}}\ x^k\ =\ P_n(x) .$$ :-)
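
As promised above, a quick numerical sanity check of points 2, 3 and 4 (my own sketch, assuming scipy is available; the helper names are mine):

```python
import numpy as np
from scipy.integrate import quad

def C(n, k):
    """Generalized binomial coefficient prod_{j=0}^{k-1} (n-j)/(k-j),
    valid for arbitrary real n and natural k (point 2 above)."""
    out = 1.0
    for j in range(k):
        out *= (n - j) / (k - j)
    return out

# Point 3: binomial series for a fractional power, requires |a| < |b|.
a, b, p = 0.3, 1.0, 3
series = b**(1/p) * sum(C(1/p, k) * (a/b)**k for k in range(60))
print(series, (a + b)**(1/p))   # the two numbers should agree closely

# Point 4: a Wallis-type integral against its Gaussian-integral expression.
G = lambda s: quad(lambda x: np.exp(-x**s), 0, np.inf)[0]   # G(s) = (1/s)!
m, n = 2.0, 3.0
lhs = quad(lambda x: (1 - x**n)**(1/m), 0, 1)[0]
rhs = G(m) * G(n) / G(m*n/(m+n))
print(lhs, rhs)                 # the two numbers should agree closely
```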

$\endgroup$
8
$\begingroup$

Model categories as a framework for both complexes of $R$-modules and topological spaces (making precise, for example, the analogy between taking a projective resolution and replacing a space with a weakly homotopy equivalent CW-complex).

$\endgroup$
8
$\begingroup$

Pretty basic, I know, but the Fundamental Theorem of Calculus (first form) is a generalization of the Telescoping Property of Finite Sums.
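
Side by side (a small sketch of the analogy): $$\sum_{k=0}^{n-1}\big(a_{k+1} - a_k\big) \;=\; a_n - a_0 \qquad\longleftrightarrow\qquad \int_a^b f'(x)\,dx \;=\; f(b) - f(a),$$ with the forward difference $a_{k+1} - a_k$ playing the role of the derivative, and the sum collapsing exactly as the integral does.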

$\endgroup$
1
  • $\begingroup$ Very nice observation. $\endgroup$ Commented Jan 3, 2014 at 5:42
7
$\begingroup$

Localization

When I learned that you could localize categories (and not just abelian ones!) I was floored. The general idea that we take a class of morphisms in a category and send them functorially to another category where they become isomorphisms is awesome. It is also very important in my work, which involves generalizing some ideas of algebraic geometry to a more categorical setting.

Here is a link!

$\endgroup$
6
$\begingroup$

Falling more under the heading of "surprising connection" is the relationship between the determinant and volume, and thus, by Cramer's rule, between the solution of a system of linear equations and volume. Indeed, the determinant manages to creep its way into a surprising amount of mathematics.
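
A sketch of the statement being alluded to: $|\det A|$ is the volume of the parallelepiped spanned by the columns of $A$, and Cramer's rule $$x_i \;=\; \frac{\det A_i}{\det A},$$ where $A_i$ is $A$ with its $i$-th column replaced by the right-hand side $b$, therefore expresses each coordinate of the solution of $Ax = b$ as a ratio of two such volumes.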

$\endgroup$
4
$\begingroup$

Galois connections are more far-reaching than one first realizes. One small application is in model theory, where the relation $R$ between sentences of a theory and models, given by $t \mathrel{R} M$ if sentence $t$ is true in model $M$, leads to deductively closed theories on one side and classes of models closed under certain operations, many of them algebraic in nature, on the other. Galois connections also arise in algebraic geometry and computer science, among other fields. There is a book on Galois connections edited by Marcel Erne; for the strongly inquisitive I recommend checking it out.
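
In symbols, and only as a rough sketch: for a set of sentences $T$ and a class of structures $K$, $$T \subseteq \operatorname{Th}(K) \iff K \subseteq \operatorname{Mod}(T),$$ so $\operatorname{Th}$ and $\operatorname{Mod}$ form an antitone Galois connection; its two closure operators produce the deductively closed theories on one side and the classes of models axiomatizable by a theory on the other.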

$\endgroup$
0
3
$\begingroup$

I recall seeing a bitter rant by Paul Halmos that as a graduate student he wasn't told about a certain generalization, or connection, between two big topics in mathematics. As I recall, I saw this rant in a book on Finite-Dimensional Vector Spaces that he authored. I'm sure that the rant is well-known enough that someone can supply the details.

$\endgroup$
1
  • 5
    $\begingroup$ From the first line of the Preface: "That Hilbert space theory and elementary matrix theory are intimately associated came as a surprise to me and to many colleagues of my generation only after studying the two subjects separately. This is deplorable..." $\endgroup$ Commented Jul 28, 2011 at 13:18
3
$\begingroup$

Most surprising connection for me: random walks with... well, more or less anything :) Heat equation, harmonic functions, quantum mechanics (through Wick rotation), statistical physics of spin models (where the correspondence is edge interaction strength $\leftrightarrow$ transition probabilities), Gaussian fields (covariance matrix $\leftrightarrow$ transition probabilities); the list goes on$\ldots$

$\endgroup$
1
$\begingroup$

The definition of spectrum for operators on an infinite-dimensional Hilbert space. When I first learned that there are nice operators (bounded, linear, self-adjoint) on infinite-dimensional Hilbert space which have no eigenvalues at all, the possibility of generalizing spectral theory to an infinite-dimensional setting seemed pretty hopeless to me. After all, in finite dimension the eigenvalues of an operator play such a large and significant role in the analysis of the operator. I found it surprising and astounding that there is a good substitute for eigenvalues in the infinite-dimensional case, and that one can actually develop a very powerful spectral theory using it. (I guess that in retrospect this isn't so surprising, seeing that in many infinite-dimensional situations one must apply some "completion" of the analogous construction in finite dimension)
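
A standard example of the phenomenon, sketched here for concreteness: on $L^2[0,1]$ the multiplication operator $$(Mf)(x) = x\, f(x)$$ is bounded and self-adjoint, has no eigenvalues at all (an eigenfunction would have to be concentrated on a single point, which has measure zero), yet its spectrum is the whole interval $[0,1]$; and the spectral theorem says that, up to unitary equivalence, every self-adjoint operator looks like multiplication by a real-valued function.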

$\endgroup$
2
  • $\begingroup$ I find this quite puzzling. I never encountered the word spectrum until I started studying functional analysis, at which point we were told straight off the ‘correct’ definition for bounded linear operators. What do you count as ‘spectral theory’ for operators on finite-dimensional spaces? $\endgroup$
    – Zhen Lin
    Commented Jul 28, 2011 at 16:05
  • 1
    $\begingroup$ @Zhen: Well, I can't speak for Mark, but there's the spectral theorem in almost innumerable incarnations in linear algebra (orthogonal, unitary, self-adjoint, normal, symmetric matrices). All the ideas involved there make a nice spectral theory. In fact, my first year linear algebra course had about a trimester under the heading "spectral theory" and covered many things up to the holomorphic functional calculus and spectral projections. $\endgroup$
    – t.b.
    Commented Jul 28, 2011 at 21:41
0
$\begingroup$

Here is a transformation formula relating the regularized incomplete gamma function $Q(a,z)$ (unregularized: $\Gamma(a,z)$) and the regularized incomplete beta function $\text I_z(a,b)$ (unregularized: $\text B_z(a,b)$). Please see the definitions of these functions. We use this hypergeometric limit:

$$\lim_{b\to \infty}b^a\,\text B_{\frac zb}(a,b)=\gamma(a,z)=\Gamma(a)-\Gamma(a,z)$$

which works for many values, though conditions such as $z>0$ and other restrictions may limit it. Now we have truly a generalized incomplete gamma function, but expressed as a beta function with $p$ as if it were evaluated at $p=-\infty$.

Now we see a twofold generalization of the Lambert W function $\text W_k(z)$, though only on a small domain of $\text W_0(z)=\text W(z)$ and $\text W_{-1}(z)$. The inverse regularized beta function $\text I^{-1}_z(a,b)$ generalizes $\text W(z)$, via limits, and the inverse regularized gamma function $Q^{-1}(a,z)$ generalizes $\text W_{-1}(z)$. These inverse gamma/beta functions are quantile functions implemented in Mathematica:

$$\boxed{\lim_{b\to\infty}b\text I^{-1}_x(a,b)=Q^{-1}(a,1-x)}$$

which also works for most values; using this identity one may extend the domain of one function while obtaining a computable representation in terms of the other, without needing a series. Please correct me and give me feedback!
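
A quick numerical check of the boxed limit, as a sketch (my addition; this uses scipy's implementations of the inverse regularized beta and gamma functions rather than Mathematica's):

```python
from scipy.special import betaincinv, gammainccinv

# Check   lim_{b -> oo}  b * I^{-1}_x(a, b)  =  Q^{-1}(a, 1 - x).
a, x = 2.5, 0.3
for b in (1e1, 1e2, 1e3, 1e4):
    print(b, b * betaincinv(a, b, x))      # should approach the value below
print("Q^{-1}(a, 1 - x) =", gammainccinv(a, 1 - x))
```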

Does this $4$-argument inverse function really generalize the exotic Inverse Beta Regularized function?

$\endgroup$
0
