174
$\begingroup$

It often happens in mathematics that the answer to a problem is "known" long before anybody knows how to prove it. (Some examples of contemporary interest are among the Millennium Prize problems: E.g. Yang-Mills existence is widely believed to be true based on ideas from physics, and the Riemann hypothesis is widely believed to be true because it would be an awful shame if it wasn't. Another good example is Schramm–Loewner evolution, where again the answer was anticipated by ideas from physics.)

More rare are the instances where an abstract mathematical "idea" floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea. An example of this is umbral calculus, where a mysterious technique for proving properties of certain sequences existed for over a century before anybody understood why the technique worked, in a rigorous way.

I find these instances of mathematical ideas without rigorous interpretation fascinating, because they seem to often lead to the development of radically new branches of mathematics$^1$. What are further examples of this type?

I am mainly interested in historical examples, but contemporary ones (i.e. ideas which have yet to be rigorously formulated) are also welcome.


  1. Footnote: I have some specific examples in mind that I will share as an answer, if nobody else does.
$\endgroup$
21
  • 5
    $\begingroup$ Ideas from what we now call generalised functions or distributions were around for a while before being formalised $\endgroup$ Commented Feb 3, 2018 at 8:52
  • 13
    $\begingroup$ The notion of a function. See history of the function concept $\endgroup$ Commented Feb 3, 2018 at 17:21
  • 5
    $\begingroup$ Shouldn't this be community wiki? $\endgroup$ Commented Feb 3, 2018 at 19:33
  • 9
    $\begingroup$ A corollary of this question is prompted by the observation that the 19th and 20th centuries seemed to do a lot of "cleaning up" of older mathematics by putting it on a rigorous foundation. Is it possible that in years to come, people will look back at 21st century math and scoff at our impreciseness? Or are we "done" tidying the house? $\endgroup$ Commented Feb 4, 2018 at 5:48
  • 8
    $\begingroup$ Sets were simply known as random aggregations of possibly no entities long before Zermelo and Fraenkel defined their modern meaning. Still, the old definition is good enough for math basics. $\endgroup$ Commented Feb 4, 2018 at 12:44

19 Answers

106
$\begingroup$

The notion of probability has been in use since the Middle Ages, or maybe before. But it took until the middle of the 20th century to formalize probability theory and give it a rigorous basis. According to Wikipedia:

There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation, sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
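For a finite sample space, Kolmogorov's formulation can be made completely concrete. Below is only a toy illustration (the names `omega` and `P` are mine, not from the quoted text): events are subsets of the sample space, and probability is the normalized counting measure on them.

```python
from fractions import Fraction

# A toy instance of Kolmogorov's formulation for a fair die: events are
# subsets of the sample space, probability is a normalized counting measure.
omega = frozenset(range(1, 7))

def P(event):
    """Probability of an event, as a measure on subsets of omega."""
    return Fraction(len(frozenset(event) & omega), len(omega))

# The Kolmogorov axioms, checked on this finite space:
evens, odds = {2, 4, 6}, {1, 3, 5}
assert P(omega) == 1                          # normalization
assert P(evens) >= 0                          # non-negativity
assert P(evens | odds) == P(evens) + P(odds)  # additivity on disjoint events
```

Every law of elementary probability can be verified mechanically in such a finite model, which is exactly the appeal of the measure-theoretic formulation.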

There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as usually understood.

$\endgroup$
3
  • 1
    $\begingroup$ In fact, I would argue that that effort is not over, even as of today; see en.wikipedia.org/wiki/German_tank_problem and the XKCD take on a similar problem, xkcd.com/1132 $\endgroup$
    – DomQ
    Commented Feb 12, 2018 at 8:42
  • $\begingroup$ @DomQ I think the examples you mentioned are not so relevant as this one, especially the part mentioning: Nicholas Shackel affirms that after more than a century the paradox remains unresolved $\endgroup$
    – polfosol
    Commented Feb 12, 2018 at 9:31
  • $\begingroup$ It is definitely not over, there remains today considerable controversy about the philosophy (and hence the formalism) of probability--start with Wikipedia "Interpretations of probability," and then Google to see the many papers and book chapters about the frequentist vs. Bayesian and belief based (e.g., Dempster-Shafer) models. $\endgroup$ Commented Jul 9, 2018 at 19:50
73
$\begingroup$

Euclidean geometry. You think calculus was missing rigorous understanding? Non-Euclidean geometry? How about plain old Euclidean geometry itself? You see, even though Euclid's Elements invented rigorous mathematics, even though it pioneered the axiomatic method, even though for thousands of years it was the gold standard of logical reasoning - it wasn't actually rigorous.

The Elements is structured to seem as though it openly states its first principles (the infamous parallel postulate being one of them), and as though it proves all its propositions from those first principles. For the most part, it accomplishes the goal. In notable places, though, the proofs make use of unstated assumptions. Some proofs are blatant non-proofs: to prove side-angle-side (SAS) congruence of triangles, Euclid tells us to just "apply" one triangle to the other, moving them so that their vertices end up coinciding. There's no axiom about moving a figure onto another! Other proofs have more insidious omissions. In the diagram, does there exist any point where the circles intersect? It's "visually obvious", and Euclid assumes they intersect while proving Proposition 1, but the assumption does not follow from the axioms.

[Figure: the allegedly intersecting circles of Euclid's Proposition 1]

In general, the Elements pays little attention to issues of whether things really intersect in the places you'd expect them to, or whether a point is really between two other points, or whether a point really lies on one side of a line or the other, etc. We all "know" these concepts, but to avoid the trap of, say, a fake proof that all triangles are isosceles, a rigorous approach to geometry must address these concepts too.

It was not until the work of Pasch, Hilbert, and others in the late 1800s and early 1900s that truly rigorous systems of synthetic geometry were developed, with the axiomatic definition of "betweenness" being a key new fundamental idea. Only then, millennia since the journey began, were the elements of Euclidean geometry truly accounted for.

$\endgroup$
7
  • 21
    $\begingroup$ Maybe instead of Euclidean geometry, it is the very idea of rigor that this is about $\endgroup$ Commented Feb 3, 2018 at 11:15
  • $\begingroup$ Did the Cartesian plane formalise it? $\endgroup$ Commented Feb 4, 2018 at 22:20
  • 6
    $\begingroup$ @PyRulez: $\mathbb{R}^2$, once constructed, formalizes one model of a Euclidean plane. It doesn't formalize what a Euclidean plane is. What if you declare, "I say a Euclidean plane is whatever is isomorphic to $\mathbb{R}^2$"? Then you still have to say what structure the "isomorphisms" have to preserve. The idea that needs formalization is the structure - the lines/circles/angles/etc. system that Euclid describes (but with gaps). $\mathbb{R}^2$ is far from being the end of the story. $\endgroup$ Commented Feb 5, 2018 at 2:48
  • $\begingroup$ Calling the parallel postulate infamous sounds like it was a mistake to include it. $\endgroup$
    – JiK
    Commented Feb 6, 2018 at 9:25
  • 3
    $\begingroup$ @JiK, for a long time, it seemed like it was a mistake. Euclid's other axioms have a "fundamental" feel to them, but the parallel postulate seems like it should be provable from the others, and many efforts were made to develop such a proof. $\endgroup$
    – Mark
    Commented Feb 8, 2018 at 1:03
72
$\begingroup$

Natural transformations are a "natural" example of this. Mathematicians knew for a long time that certain maps--e.g. the canonical isomorphism between a finite-dimensional vector space and its double dual, or the identifications among the varied definitions of homology groups--were more special than others. The desire to have a rigorous definition of "natural" in this context led Eilenberg and Mac Lane to develop category theory. As Mac Lane allegedly put it:

"I didn't invent categories to study functors; I invented them to study natural transformations."

$\endgroup$
2
  • 6
    $\begingroup$ The way I heard it was that "I didn't invent functors to study categories; I invented them to study natural transformations", which makes more sense, since Functors are less important than the other two concepts. $\endgroup$ Commented Feb 4, 2018 at 22:17
  • 1
    $\begingroup$ Fun fact I learned in grad algebra: Functors form a category, the morphisms of which are natural transformations. $\endgroup$ Commented Feb 6, 2018 at 16:31
63
$\begingroup$

Following from the continuity example, in which the $\epsilon$-$\delta$ formulation eventually became ubiquitous, I submit the notion of the infinitesimal. It took until Robinson in the 1950s and early 60s before we had "the right construction" of infinitesimals via ultrapowers, in a way that made infinitesimal manipulation fully rigorous as a way of dealing with the reals. They were a very useful tool for centuries before then, with (e.g.) Cauchy using them regularly, attempting to formalise them but not succeeding, and with Leibniz's calculus being defined entirely in terms of infinitesimals.

Of course, there are other systems which contain infinitesimals - for example, the field of formal Laurent series, in which the variable may be viewed as an infinitesimal - but e.g. the infinitesimal $x$ doesn't have a square root in this system, so it's not ideal as a place in which to do analysis.
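To make the parenthetical remark concrete, here is a one-line sketch (standard, not specific to this answer) of why $x$ has no square root among formal Laurent series:

```latex
% Suppose $f = \sum_{n \ge N} a_n x^n$ with $a_N \neq 0$ satisfied $f^2 = x$.
% Comparing lowest-order terms,
\[
  f^2 \;=\; a_N^2 x^{2N} + (\text{higher-order terms}) \;=\; x
  \quad\Longrightarrow\quad 2N = 1,
\]
% which has no integer solution $N$, so no such $f$ exists.
```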

$\endgroup$
7
  • $\begingroup$ What was Cauchy's formulation exactly? $\endgroup$ Commented Feb 3, 2018 at 19:37
  • $\begingroup$ @MikhailKatz The $\epsilon$-$\delta$ formulation is what I understand by those words. $\endgroup$ Commented Feb 3, 2018 at 20:09
  • 2
    $\begingroup$ Patrick, the fact is that Cauchy never gave an epsilon-delta definition of continuity. Cauchy defined continuity in terms of infinitesimals. Some historians have been engaged in a quest for departed quantifiers in Cauchy which yielded meager results, because the thrust of Cauchy's approach to the calculus was via infinitesimals. $\endgroup$ Commented Feb 3, 2018 at 20:12
  • $\begingroup$ In that case I'll remove Cauchy's name from this :) $\endgroup$ Commented Feb 3, 2018 at 20:14
  • 1
    $\begingroup$ No, Cauchy's work is an outstanding confirmation of this principle. Of course, Leibniz did it two centuries earlier already. $\endgroup$ Commented Feb 3, 2018 at 20:19
57
$\begingroup$

Sets. As late as the early twentieth century, Bertrand Russell showed that one leading theory of them was self-contradictory, because it led to Russell's Paradox: Does the set of all sets that do not contain themselves, contain itself? The accepted solution was ZF set theory.

Another example that jumps to mind is counting up: Peano arithmetic was axiomatized in the nineteenth century (and has been considerably revised since). Or algorithms.

Which raises the point, I guess, that we're still looking for the best foundation for mathematics itself.

$\endgroup$
2
  • 2
    $\begingroup$ I'd in fact argue that we still do not have a satisfactory axiomatization of "collection". ZFC sets don't do well, since there are 'classes'. MK set theory doesn't work, for the same reason. I prefer a type theory with a universal type, and it is possible, but something still has to be given up to avoid Russell's paradox. And you are right about 'algorithms'; I also mentioned that in my comment on the question, before I came to your answer. =) $\endgroup$
    – user21820
    Commented Feb 6, 2018 at 17:17
  • $\begingroup$ I think the point of Russell's paradox is that there is no satisfactory axiomatization of "collection" because the idea is self-contradictory, unless you severely limit what a collection is allowed to be. $\endgroup$ Commented Nov 27, 2018 at 21:24
47
$\begingroup$

Continuity is an example of a concept that was unclear for some time and also defined differently from what we now consider its "correct" definition. See this source (Israel Kleiner, Excursions in the History of Mathematics, pp. 142-3) for example.

In the eighteenth century, Euler did define a notion of "continuity" to distinguish between functions as analytic expressions and the new types of functions which emerged from the vibrating-string debate. Thus a continuous function was one given by a single analytic expression, while functions given by several analytic expressions or freely drawn curves were considered discontinuous. For example, to Euler the function

$$ f(x) = \left\{\begin{array}{ll} x^2 & x > 0 \\ x & x \leq 0 \end{array}\right. $$

was discontinuous, while the function comprising the two branches of a hyperbola was considered continuous (!) since it was given by the single analytic expression $f(x) = 1/x$.

[...]

In his important Cours d'Analyse of 1821 Cauchy initiated a reappraisal and reorganization of the foundations of eighteenth-century calculus. In this work he defined continuity essentially as we understand it, although he used the then-prevailing language of infinitesimals rather than the now-accepted $\varepsilon - \delta$ formulation given by Weierstrass in the 1850s. [...]

(Note that it's useful to make explicit on what domain you're saying something is continuous; here, the author of the book is actually slipping up, since $1/x$ is in fact continuous on its natural domain $\mathbb R \setminus \{0\}$. Thanks @jkabrg)

$\endgroup$
8
  • 6
    $\begingroup$ Related to this, the notion of a function itself. Basically since the 17th century, there was an intuitive understanding of what a function is, but it took at least until the 19th century before the appearance of any definition that was resilient enough not to immediately crumble under close scrutiny. $\endgroup$
    – mlk
    Commented Feb 3, 2018 at 9:58
  • 6
    $\begingroup$ $1/x$ is continuous $\endgroup$
    – wlad
    Commented Feb 3, 2018 at 17:47
  • $\begingroup$ @jkabrg The function defined by $f(x) = 1/x$ is only continuous on the subset $(-\infty, 0) \cup (0, \infty)$, not on the whole real line. By "the two branches of a hyperbola", the author refers precisely to the two branches of the "standard" hyperbola $1/x$, which are the two continuous components, one on $(-\infty,0)$ and one on $(0,\infty)$. $\endgroup$
    – tomsmeding
    Commented Feb 3, 2018 at 17:51
  • 5
    $\begingroup$ @tomsmeding The domain of the function $1/x$ is not the whole real line $\endgroup$
    – wlad
    Commented Feb 3, 2018 at 17:53
  • $\begingroup$ @jkabrg Fair point. You may write the author of that book if you'd like to have the book corrected. :) $\endgroup$
    – tomsmeding
    Commented Feb 3, 2018 at 17:54
35
$\begingroup$

At the risk of having it called an obvious example, I submit Euclid's parallel postulate. It was formulated in his Elements ca. 300 B.C., then rationalized (including challenges and proof attempts) for many centuries before Saccheri laid down the $\,3\,$ alternatives of one/none/multiple parallels to a line through a given point in the $\,18^{th}\,$ century; then it took another $100$ years for the non-Euclidean geometries to be formalized by Lobachevsky and Bolyai.

$\endgroup$
4
  • 5
    $\begingroup$ Wish the downvoter had left a comment why. $\endgroup$
    – dxiv
    Commented Feb 6, 2018 at 16:34
  • $\begingroup$ Well, it's not exactly a mathematical idea that took time to define rigorously. It's a mathematical proposition that took time to prove (rather, took time to disprove). The question explicitly distinguishes between ideas and propositions and is asking only about the former. $\endgroup$ Commented Feb 9, 2018 at 20:50
  • $\begingroup$ @6005 We'll have to agree to disagree. The OP specifically asked about "instances where an abstract mathematical "idea" floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea". The notion that geometry must be based on a set of axioms was a novel and "abstract" idea back then. The particular significance of the parallel postulate was noted early, but it "floated around" for more than two millennia before a "rigorous definition or interpretation" was developed. So, in my reading at least, the answer above is fully on-topic. $\endgroup$
    – dxiv
    Commented Feb 9, 2018 at 21:25
  • 1
    $\begingroup$ Your reading is absolutely as valid as my own. To my reading, it didn't feel like an example. I didn't downvote, but I hope perhaps it may provide some explanation why someone might. $\endgroup$ Commented Feb 10, 2018 at 0:16
31
$\begingroup$

The delta "function" showed up in a paper by Fourier - "Théorie analytique de la chaleur" in 1822.

It wasn't until ~1945 that Schwartz formally defined the delta functional as a distribution.

$\endgroup$
2
  • 8
    $\begingroup$ Can you link to that paper by Schwartz? $\endgroup$ Commented Feb 5, 2018 at 9:28
  • 1
    $\begingroup$ @nbubis it seems it was a book, not a paper — "Théorie des distributions." Hermann, 2 vols., 1950/1951, according to Wikipedia $\endgroup$
    – Ruslan
    Commented Feb 12, 2018 at 11:33
29
$\begingroup$

Rigorous constructions of the real numbers themselves - as cuts of rationals, as equivalence classes of rational Cauchy sequences, or as elements of the unique model of the theory of complete ordered fields - would only appear in the 19th century, despite the centrality of calculus in mathematics and the other sciences.
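For concreteness, the cut construction can be stated in a line or two (this is the standard textbook definition, not something special to this answer):

```latex
% A real number as a (Dedekind) cut: a set $A \subsetneq \mathbb{Q}$ that is
% nonempty, downward closed ($a \in A$ and $b < a \Rightarrow b \in A$), and
% has no greatest element. For instance,
\[
  \sqrt{2} \;:=\; \{\, q \in \mathbb{Q} \,:\, q \le 0 \ \text{or}\ q^2 < 2 \,\},
\]
% with addition, multiplication and order then defined setwise on cuts.
```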

$\endgroup$
8
  • $\begingroup$ Euclid (V, def. 5) already had the construction of real numbers as rational cuts, except he was calling them ratios of magnitudes. $\endgroup$ Commented Feb 4, 2018 at 11:22
  • 2
    $\begingroup$ @AntonTykhyy: I don't see it. If the magnitudes are rational, then their ratios are rational as well; so I don't see how that definition is a construction of real numbers at all. Furthermore, the only similarity I see between that definition and Dedekind cuts is that both involve comparison of rational numbers. What am I missing? $\endgroup$
    – ruakh
    Commented Feb 6, 2018 at 5:14
  • $\begingroup$ Euclid didn't apply the notion of rationality/irrationality to geometric magnitudes (lengths, areas and volumes) but to their ratios. It was known since Pythagoras that some ratios are irrational, and Euclid loc. cit. provides the definition of what it means for two ratios to be equal, less than or greater than one another, regardless of whether these ratios are rational or not. A Dedekind cut's upper and lower halves are the sets of rational numbers the numerators and denominators of which are those integers for which Euclid's equimultiples "alike exceed" or "alike fall short of" one another. $\endgroup$ Commented Feb 6, 2018 at 10:29
  • $\begingroup$ I'd accept this as a definition of real numbers, equivalent to Dedekind's cuts, if Euclid had remarked that, given (say) a fixed line segment $x$, any line segment $y$ is determined (up to congruence, so really we're talking about lengths as a kind of geometric magnitude) by the pairs of natural numbers $(m,n)$ such that $my$ is shorter than $nx$, or something like that. $\endgroup$ Commented Feb 9, 2018 at 21:11
  • $\begingroup$ (Preferably he would also have characterized which collections of pairs of natural numbers could arise from such a ratio of magnitudes, namely that there exists at least one pair $(m,n)$ in the collection, and for each such pair, every pair $(m',n')$ such that $mn' < m'n$ is in, and also so is some pair $(m',n')$ such that $m'n < mn'$. Only then could one define a positive real number as an appropriate collection of pairs of natural numbers, equivalent to Dedekind's definition of a real number as an appropriate collection of rational numbers. But perhaps this is asking too much.) $\endgroup$ Commented Feb 9, 2018 at 21:14
27
$\begingroup$

Differentiable manifolds are an example. A rigorous definition only appeared about 100 years ago, in the works of Hermann Weyl and Hassler Whitney, although they were studied long before that time. Gauss's Theorema Egregium can already be seen as a theorem about this kind of concept, although it was stated long before the concept was formally defined.

$\endgroup$
2
  • 2
    $\begingroup$ Do you think it might be worth mentioning the strong Whitney embedding theorem explicitly, as this was an important milestone uniting two different notions of "manifold" floating around at the time? $\endgroup$ Commented Feb 9, 2018 at 5:14
  • 1
    $\begingroup$ @WetSavannaAnimalakaRodVance That's an interesting complement, yes. $\endgroup$ Commented Feb 9, 2018 at 6:31
24
$\begingroup$

Complex numbers are an example of an
"..abstract mathematical "idea" [that] floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea".

And it was such an embarrassing idea in the times of Cardano and Bombelli (16th century) that it took a lot of imagination (sic) and mental strain to be settled.

$\endgroup$
20
$\begingroup$

"Computation" (or effective calculability) is still an abstract mathematical idea that floats around awaiting a rigorous definition.

There are various candidates for defining what the term could mean -- e.g. the set of strings that can be generated by a certain type of grammar, or the set of strings that can be accepted by a certain type of machine, or the set of functions that can be defined given a certain set of function-construction rules. And there are rigorous proofs of the equivalences between many of those definitions.
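As a toy illustration of the "function-construction rules" candidate (the helper names below are mine; the scheme itself is the standard primitive-recursion one): addition built from nothing but zero, successor, and the recursion rule agrees with the ordinary machine-style computation, just as the equivalence results predict.

```python
# Primitive recursion in miniature: one of the rigorous candidates for
# "effectively calculable" mentioned above.

def succ(n):
    return n + 1

def prim_rec(base, step):
    """The recursion scheme: h(x, 0) = base(x); h(x, n+1) = step(x, n, h(x, n))."""
    def h(x, n):
        acc = base(x)
        for k in range(n):
            acc = step(x, k, acc)
        return acc
    return h

# Addition defined purely from zero/successor/recursion:
add = prim_rec(lambda x: x, lambda x, k, acc: succ(acc))

# ...agrees with the "machine-style" computation on a range of inputs:
assert all(add(a, b) == a + b for a in range(10) for b in range(10))
```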

But we are still left with an intuition about whether those definitions are adequate. That intuition is called Church's Thesis or the Church-Turing Thesis, but it remains (merely) a thesis. We might still come up with a broader definition of what constitutes a "computation" that cannot be subsumed under the existing candidates.

$\endgroup$
2
  • $\begingroup$ I like this example a great deal, but do you really think that this is purely mathematics? Is there not a certain experimental aspect to this: that we won't come up with a definition of computability that isn't equivalent to the Church-Turing-Gödel triad of notions: the "candidates" you refer to are already rigorous. My impression is that although many quantum computing physicists conjecture that quantum computing will not come up with anything that isn't in principle computable in the classical sense, the idea that it might is not ruled out of the question. $\endgroup$ Commented Feb 9, 2018 at 5:25
  • 1
    $\begingroup$ I would say that the results of Church and Turing and all the equivalences then have solved the problem of defining computation, or effective calculability. It certainly can be said that most mathematicians take Turing-computability to be the definition of what an algorithm or effective computation is. However, this is a philosophical point and I like your answer. $\endgroup$ Commented Feb 9, 2018 at 20:53
17
$\begingroup$

Weil's conception of a cohomology theory for varieties adequate to solve the Weil Conjectures (the "Riemann Hypothesis" for varieties over finite fields) is an example. The idea of a Weil cohomology was formulated in 1949 by Weil, and then Grothendieck came along in the 60's with étale and $\ell$-adic cohomology, which fit Weil's criteria and allowed Deligne to prove the conjectures in 1974.

25 years may not be the longest time for something mentioned here, but exploring this idea and trying to rigorously realize it definitely helped create a decent portion of 20th century math.

$\endgroup$
12
$\begingroup$

Structure-preserving function.
It seems that this concept doesn't have a general definition yet. Category theory defines the rules for calculating with morphisms, but it doesn't provide a general, formal rule for what a structure-preserving function is when the objects of the category are sets with additional structure.

For the kinds of structures appearing in universal algebra it's clear enough, but, for example, from an algebraic perspective, what makes continuity the natural concept in topology?


This may provide a clue?

There is a subcategory of the category whose objects are relations and whose morphisms are relations between relations, consisting of all relations as objects and, as morphisms, those relations between relations that can be expressed by a pair of relations.

Given two relations $R\subseteq A\times B$ and $R'\subseteq A'\times B'$. Some relations $r\subseteq R\times R'$ can be characterized by two relations $\alpha\subseteq A\times A'$ and $\beta\subseteq B\times B'$ so that

$((a,b),(a',b'))\in r \iff \Big((a,a')\in\alpha\wedge (b,b')\in\beta\wedge (a,b)\in R\implies (a',b')\in R'\Big)$

and if $R''\subseteq A''\times B''$, $r'\subseteq R'\times R''$, where $r'$ is characterized by $\alpha'\subseteq A'\times A''$ and $\beta'\subseteq B'\times B''$, then the composition $r'\circ r$ is characterized by the relations $\alpha'\circ\alpha\subseteq A\times A''$ and $\beta'\circ\beta\subseteq B\times B''$. (Here $\circ$ denotes the composition of relations.)

Suppose $A=B\times B$ and that $R\subseteq A\times B$ is the composition in a magma. Then the functions among the morphisms between two such objects define magma morphisms $B\to B'$.

Suppose $B=\mathcal P(A)$ and that $R\subseteq A\times B$ is the relation $(a,S)\in R\iff a\in\overline{S}$ for some topology on $A$. Then the functions among the morphisms between two such objects define continuous functions $A\to A'$.

$\endgroup$
1
  • 2
    $\begingroup$ We have a general definition of a structure-preserving bijection, that is an isomorphism, and thus of the groupoid of sets equipped with a given structure, and this goes back at least to Bourbaki I (first published slightly before Eilenberg & MacLane). But that's not good enough for what you're talking about, I agree. $\endgroup$ Commented Feb 9, 2018 at 21:29
8
$\begingroup$

You might want to check out Imre Lakatos' "Proofs and Refutations", which depicts in a fictional dialog the evolution of the idea of a "polyhedron" over the centuries. His goal is to illuminate the dialectical process of definition and redefinition in mathematics, and perhaps in cognition generally.

$\endgroup$
8
$\begingroup$

Computation seems to fall into this category - for a long time, there had been an informal notion of something like "information processing". There had, of course, been the idea of a function for a long time. There were also prototypical algorithms, even as far back as Euclid. But the general idea of a well-defined process that implements a function based on small steps did not appear until Turing defined it in his 1936 paper.

$\endgroup$
5
$\begingroup$

Fractals ("I know a fractal when I see it"). What is the mathematical definition of this concept?

To this day the notion of a fractal still lacks a proper mathematical definition.

Basically, a fractal is a figure or shape that has a self-similarity property; the geometry of a fractal differs from one shape to another.

To this end I would like to quote a speaker at a conference in Leipzig last year (2017). He was asked by an attendee: "Sir, what is a fractal?"

His answer: "I know a fractal when I see it."

Patently, once someone has shown you a fractal for the first time, the next time you will certainly recognise a fractal just by looking at its shape, no matter how different it may be from the previous one: this is a fact.

There are different kinds of well-known fractals: the Sierpinski gasket (the triangle below), the Mandelbrot set (see the second figure), Julia sets, ...
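Even without an agreed-upon definition of "fractal", individual fractals are perfectly computable objects. A minimal escape-time sketch for Mandelbrot-set membership (the bailout radius of $2$ and the iteration cap are the standard choices; the function name is mine):

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: c is (up to the iteration cap) in the Mandelbrot
    set if the iteration z -> z**2 + c stays within |z| <= 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

assert in_mandelbrot(0j)          # 0 is a fixed point: clearly inside
assert in_mandelbrot(-1 + 0j)     # period-2 cycle 0 -> -1 -> 0 -> ...
assert not in_mandelbrot(1 + 0j)  # 0 -> 1 -> 2 -> 5 -> ... escapes
```

Scanning a grid of complex values $c$ with this test reproduces the familiar picture, self-similar "bulbs" and all.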

https://georgemdallas.wordpress.com/2014/05/02/what-are-fractals-and-why-should-i-care/

[Figures: the Sierpinski gasket and the Mandelbrot set]

$\endgroup$
4
$\begingroup$

The calculus of variations was actively applied from the 17th century onward, but was only put on a firm theoretical foundation with the introduction of Banach spaces around 1920.

$\endgroup$
2
$\begingroup$

The Egyptians and the Babylonians knew a lot of mathematical facts (e.g. the Pythagorean theorem, the quadratic formula, the volumes of prisms and pyramids, etc.) which were later proved by the Greeks.

Also, Pascal and Fermat used mathematical induction and the well ordering of the naturals (in the form of infinite descent) 250 years before Peano formalized the axioms defining the natural numbers.

$\endgroup$
