
Just the question in the title: I'm trying to understand how something like analysis could be developed without formal constructions of the real numbers.

I'm also very interested, if the answer is "yes", to know what people thought a "real number" was at that time.

  • The Pythagoreans believed that all numbers occurring in nature are rational. Ironically, the Pythagorean theorem shows that the length of the diagonal of a unit square is $\sqrt{2}$, which is irrational. This was such a shock to the Pythagoreans that they did not want to publish the result.
    – Peter
    Commented Jan 24, 2016 at 20:38
  • Certainly people like Newton and Euler had a clear understanding of what real numbers were, if only through their decimal representations, although I imagine they would probably have called them something else ("quantities", perhaps?). And they unquestionably "knew" that a bounded monotonic sequence of real numbers converges, even though they could not prove, or even state, this formally: so they "knew" that the real numbers were complete.
    – Gro-Tsen
    Commented Jan 24, 2016 at 20:48
  • I don't think analysis needs the real numbers: it only needs the concept of the completion of a metric space via limits of Cauchy sequences, and restricting to the definable Cauchy sequences yields only countable sets, which are enough for analysis.
    – reuns
    Commented Jan 24, 2016 at 21:10
  • You can use the reals without knowing a construction, just as you can learn addition, multiplication, prime factorization, etc. without knowing the Peano Axioms (or even the word axiom). Commented Jan 24, 2016 at 21:20
  • @user1952009: doesn't the definition of a metric space depend on the real numbers?
    – Rob Arthan
    Commented Jan 24, 2016 at 21:40

5 Answers


See, e.g., Euler's Elements of Algebra:

ARTICLE I

Whatever is capable of increase or diminution, is called magnitude, or quantity.

[...] §4. The determination, or the measure of magnitude of all kinds, is reduced to this: fix at pleasure upon any one known magnitude of the same species with that which is to be determined, and consider it as the measure or unit; then, determine the proportion of the proposed magnitude to this known measure. This proportion is always expressed by numbers; so that a number is nothing but the proportion of one magnitude to another arbitrarily assumed as the unit.

§5. From this it appears, that all magnitudes may be expressed by numbers; and that the foundation of all the Mathematical Sciences must be laid in a complete treatise on the science of Numbers, and in an accurate examination of the different possible methods of calculation. This fundamental part of mathematics is called Analysis, or Algebra.

And page 39:

§128. There is therefore a sort of numbers, which cannot be assigned by fractions, but which are nevertheless determinate quantities; as, for instance, the square root of $12$: and we call this new species of numbers, irrational numbers. They occur whenever we endeavour to find the square root of a number which is not a square; thus, $2$ not being a perfect square, the square root of $2$, or the number which, multiplied by itself, would produce $2$, is an irrational quantity. These numbers are also called surd quantities, or incommensurable.

  • Curious: from the two quotations it would seem that Euler thought the square root of $12$ is capable of increase or diminution; I don't know how to envision that. Commented Jan 25, 2016 at 10:30
  • @MarcvanLeeuwen: I think what he meant is simply that the operation of adding something to $\sqrt{12}$ and/or subtracting from it makes sense, if you find an appropriate something. Commented Jan 25, 2016 at 12:00
  • @MarcvanLeeuwen: Probably the easiest method is by proving that there are smaller and larger numbers. Trivially, $3$ is smaller and $4$ is larger. With slightly more effort, it's possible to show that every number is either smaller than $\sqrt{12}$, bigger than $\sqrt{12}$, or exactly equal to $\sqrt{12}$.
    – MSalters
    Commented Jan 25, 2016 at 13:13
  • @MSalters: My comment was not about proving anything, but was purely linguistically motivated. From Henning's comment and yours, I deduce that in place of increase/diminution you prefer to read, respectively, addition/subtraction and comparison. Which of these comes closer to what Euler actually meant is not clear to me, though both alternatives might be somewhat more precise than increase/diminution. Commented Jan 25, 2016 at 15:12
  • It seems to me that Article I is describing how to convert a problem consisting of not-numbers to a problem that consists of numbers. The fact that you can choose a unit of a particular measure implies that you aren't already dealing with numbers, where the unit is $1$. So "whatever is capable of increase or diminution" might be the depth of water in a reservoir, which we then convert to a number in proportion to some unit of depth, and from there on we need only work with numbers. Section 128 implies that it is possible for the depth of the water to be $\sqrt{12}$.
    – David K
    Commented Jan 25, 2016 at 15:15

Yes, mathematicians used the concept of a real number long before rigorous definitions arose, just as they used complex numbers before the Argand plane was described, and the Dirac delta function before it was made rigorous. Intuition almost always precedes rigour.

In modern times, the style has been to model every mathematical discipline inside the theory of sets. Before the late 19th century, sets were virtually nonexistent. Newton could not possibly have come up with Dedekind cuts, Cauchy sequences, etc., for one requires some intuition about set theory to interpret these constructions. If one were rigorous, one would instead proceed from an axiom system in the style of Euclid (though not a system as rigorous as those in the mathematical logic of today).

The standard constructions of the real numbers are all meant to produce a "completion" of the rational line $\mathbf{Q}$. Most of the time, we prove things about $\mathbf{R}$ from abstract field axioms with a completeness axiom included (the least upper bound principle, etc.). Newton could in principle have proceeded along this route, but if we look at his work we do not find this style of axiomatics. Most of Newton's proofs are not analytical -- they do not involve numbers. If you read the Principia, you will find that most proofs proceed by geometrical diagrams, like those of the Greeks.

Newton's main method is to treat diagrams as infinitesimal, using intuition to obtain results about the limits of diagrams (for instance, if we take a line between two points on a circle, then when the points are taken to be infinitesimally close, the line becomes perpendicular to the radius, i.e. tangent to the circle). In geometry, one uses much more intuition than rigour. In particular, I imagine that at some point in the Principia Newton applies (via intuition) a disguised geometric form of the intermediate value theorem -- a theorem which, if taken axiomatically, implies the completeness of the real numbers, and thus implies that Newton really was using the reals. This theorem is "obvious" to the uninitiated, but in introductory analysis courses one finds the idea is much more subtle. Remember that for these kinds of principles to be scrutinized, one needs paradoxes which challenge thought, and space-filling curves and nowhere-differentiable functions later provided these in ample quantity.
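To see concretely why the intermediate value theorem carries the completeness of the reals, consider $f(x) = x^2 - 2$ on the interval $[1, 2]$, working over $\mathbf{Q}$ alone. Then
$$f(1) = -1 < 0 < 2 = f(2),$$
yet $f$ has no root in $\mathbf{Q}$, since $\sqrt{2}$ is irrational. A number line on which the intermediate value theorem holds therefore cannot be the rationals; the theorem forces the "gaps" of $\mathbf{Q}$ to be filled.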

Of course, there was still controversy back then. The philosopher George Berkeley in particular criticized the method:

It must, indeed, be acknowledged, that [Newton] used Fluxions, like the Scaffold of a building, as things to be laid aside or got rid of, as soon as finite Lines were found proportional to them. But then these finite Exponents are found by the help of Fluxions. Whatever therefore is got by such Exponents and Proportions is to be ascribed to Fluxions: which must therefore be previously understood. And what are these Fluxions? The Velocities of evanescent Increments? And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the Ghosts of departed Quantities?

Nonetheless, the results Newton obtained were correct, so this backlash gained little traction.

  • Having read Berkeley's criticism in its entirety, it's not clear to me that he objected particularly to mathematicians' use of fluxions or infinitesimals; what I saw him object to was "believing" in these things and yet not believing in God. He was, after all, a bishop.
    – David K
    Commented Jan 25, 2016 at 15:38

The history of mathematics is a huge topic with a large literature. In ancient Greek mathematics, what we would call numbers were called magnitudes and represented continuous quantities like lengths, areas and volumes. The theory of proportions in Book 5 of Euclid's Elements is particularly relevant, e.g., see "How to best understand Euclid's definition of equal ratios? How does it relate to Dedekind cuts?". A favourite of mine is Proposition 2 of Book 10, which gives one of the methods the Greeks used to show that certain numbers are irrational: roughly speaking, you apply Euclid's algorithm to try to find a greatest common divisor of the given magnitudes $x$ and $y$, and if it does not terminate then $x/y$ is irrational. (This leads to very nice geometric proofs of the irrationality of quadratic surds like $\sqrt{2}$.)
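Here is a minimal computational sketch of that procedure (an illustration of the idea only, not anything in Euclid: the `Surd` class and the helper names are my own, and exact arithmetic is done on numbers of the form $a + b\sqrt{2}$):

```python
from fractions import Fraction
import math

class Surd:
    """Exact number of the form a + b*sqrt(2) with rational a, b."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __sub__(self, other):
        return Surd(self.a - other.a, self.b - other.b)

    def scale(self, n):
        return Surd(self.a * n, self.b * n)

    def value(self):
        return float(self.a) + float(self.b) * math.sqrt(2)

def anthyphairesis(x, y, steps=8):
    """Euclid's algorithm on two magnitudes: repeatedly take the larger
    modulo the smaller. For commensurable (rational-ratio) magnitudes the
    remainder eventually vanishes; for sqrt(2) : 1 the quotients repeat
    1, 2, 2, 2, ... and the process never terminates."""
    for _ in range(steps):
        q = math.floor(x.value() / y.value())  # how many copies of y fit in x
        r = x - y.scale(q)                     # exact remainder x - q*y
        print(f"quotient {q}, remainder = {r.value():.6f}")
        if r.a == 0 and r.b == 0:
            print("terminated: the ratio is rational")
            return
        x, y = y, r
    print(f"still going after {steps} steps: the ratio looks irrational")

anthyphairesis(Surd(0, 1), Surd(1))  # x = sqrt(2), y = 1
```

Run on $\sqrt{2} : 1$, the quotients come out $1, 2, 2, 2, \ldots$ forever; that non-termination is exactly what Proposition 2 turns into a proof of irrationality.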

It is perhaps also worth pointing out that Euclid was writing the equivalent of an undergraduate textbook several centuries after Pythagoras. Results like some of those obtained by Archimedes using Eudoxus's method of exhaustion (a method of calculating areas and volumes) are not in the Elements.

Other answers to your question relate to what happened over the subsequent two millennia. Virtually all western mathematicians up to the mid-twentieth century will have had Euclid's Elements as their first mathematical textbook.

  • There is also a special SE site for the topic: History of Science and Mathematics
    – Danu
    Commented Jan 24, 2016 at 22:42
  • Re: "many advanced techniques like Eudoxus's method of exhaustion (a method of calculating areas and volumes) were known that are not in the Elements": The article that you link to claims otherwise, listing several propositions that the Elements proves using the method of exhaustion.
    – ruakh
    Commented Jan 25, 2016 at 1:17
  • Thanks, I was forgetting the material on volumes of cones etc. I have fixed it. My point was to dispel the common misconception that Euclid constitutes a comprehensive encyclopaedia of ancient Greek mathematics.
    – Rob Arthan
    Commented Jan 25, 2016 at 11:30

Yes, they were using them, even though mathematicians back then never really knew "what they were". They were generally understood to be those numbers that could be values of physical quantities (such as time, length, energy, speed, etc.).

Note, though, that in really distant times (contemporary with Plato, say), the Greeks were still perplexed by the existence of irrational numbers: they knew that the ratio of the diagonal of a square to its edge is not rational (what we today call $\sqrt 2$). It took some time for humanity to "swallow" irrational numbers...

As a side note, to understand how confused even the greatest mathematical minds of the time were with respect to foundational issues, note that Gauss once called the imaginary number $\rm i$ "vera umbrae umbra" ("the true shadow of a shadow", i.e. something without a proper existence), and Euler was puzzled by the fact that if he replaced $x$ by $2$ in the (formal) equality $\dfrac 1 {1-x} = \sum \limits _{n = 0} ^\infty x^n$ he obtained a negative number equal to a positive infinite one...
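Spelled out, the substitution $x = 2$ gives
$$\frac{1}{1-2} = -1 \qquad \text{versus} \qquad \sum_{n=0}^{\infty} 2^n = 1 + 2 + 4 + 8 + \cdots,$$
and the partial sums $1 + 2 + \cdots + 2^N = 2^{N+1} - 1$ grow without bound, so the formal identity equates $-1$ with a divergent positive series.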

  • In the end, Euler erroneously concluded that any divergent series could (and should) be assigned a unique finite value. Today we know it is not like that, but it was he who, thanks to his "outlaw attitude", took the very first steps towards a theory of divergent series. Commented Jan 25, 2016 at 10:32

Simon Stevin used unending decimals to represent all numbers (whether rational or not) as early as the end of the 16th century. In the 17th century, Descartes seems to have been the first to use the term real to describe ordinary numbers.

Since unending decimals provide a satisfactory account of the real numbers, the latter should be attributed to Stevin rather than either Cantor or Dedekind. In particular, many authors since Stevin have used these "ordinary" numbers to prove theorems without awaiting the more abstract developments of the last third of the 19th century.

In particular, Cauchy provided a satisfactory proof of the intermediate value theorem well before Cantor. The existence of the root follows simply by constructing an unending decimal, one digit at a time: repeatedly subdivide the interval into ten parts and keep a part on which the function changes sign.
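Here is a minimal sketch of that digit-by-digit construction (illustrative only, not Cauchy's own argument; the name `decimal_root` is mine, and the code assumes $f$ is continuous with $f(a) \le 0 \le f(b)$):

```python
from fractions import Fraction

def decimal_root(f, a, b, digits=8):
    """Construct a root of f on [a, b] (assuming f(a) <= 0 <= f(b)) one
    decimal digit at a time: split the interval into ten equal parts,
    keep a subinterval on which f changes sign, and repeat. The successive
    left endpoints are truncations of an unending decimal converging to
    a root. Exact rational arithmetic, so f must accept Fraction inputs."""
    a, b = Fraction(a), Fraction(b)
    for _ in range(digits):
        step = (b - a) / 10
        for k in range(10):
            left, right = a + k * step, a + (k + 1) * step
            if f(left) <= 0 <= f(right):  # sign change: a root lies here
                a, b = left, right
                break
    return a

# Example: the root of x^2 - 2 on [1, 2], i.e. the decimal expansion of sqrt(2)
print(float(decimal_root(lambda x: x * x - 2, 1, 2)))  # 1.41421356...
```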

