
(Disclaimer: apologies for any incorrect usage of mathematical terminology throughout this question.)

In modern mathematical notation, a variable with a subscript can represent a couple of different concepts relating to the notion of index.

For example, we can define an integer sequence such as the triangle numbers as:

$$T_n = \frac{n(n+1)}{2}$$

Or, we can write an infinite series as:

$$\sum_{i=1}^\infty a_i = a_1+a_2+a_3+\ldots$$

And we can label the elements of a matrix or vector like so:

$$ \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix} , \mathbf{v} = \begin{bmatrix} v_1\\ v_2\\ v_3 \end{bmatrix} $$
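(As an aside, all three usages collapse into ordinary array indexing in code. Here is a minimal Python sketch of the correspondence, my own illustration rather than anything from the sources below; note the shift from the 1-based subscripts of mathematical notation to 0-based indexing.)

```python
def triangle(n: int) -> int:
    """The n-th triangular number, T_n = n(n+1)/2."""
    return n * (n + 1) // 2

# The sequence T_1, T_2, ..., T_6:
T = [triangle(n) for n in range(1, 7)]      # [1, 3, 6, 10, 15, 21]

# A partial sum a_1 + ... + a_20 of the sequence a_i = 1/2^i,
# approximating the infinite series sum_{i>=1} a_i = 1:
partial_sum = sum(1 / 2**i for i in range(1, 21))

# Double subscripts a_{ij} become a pair of indices:
A = [[11, 12, 13],
     [21, 22, 23],
     [31, 32, 33]]
a_23 = A[1][2]                              # the element a_{23} = 23

print(T, partial_sum, a_23)
```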

But when did this notation first appear in mathematics? I've been trying to track down its origins through various sources, to no avail. The closest I've come to an answer is the book A History of Mathematical Notations by Florian Cajori. One reading of the book hints that subscripts emerged around the time determinants were beginning to be studied, before the modern invention of matrices, possibly with Leibniz. However, this is only my interpretation; the book does not state it directly, and the notation could easily have originated earlier or later than this. Is there a field of mathematics in which it originated, from which its use spread?


6 Answers


Looking in Mathsym, I find:

$\bullet\;$ Use of ${}^1a,\;{}^2a,\;{}^3a$ in Laplace, 1772 (Histoire de l'Académie royale des sciences, p. 294). It didn't catch on.

$\bullet\;$ Then Cauchy, 1815, with subscripts (Œuvres complètes d'Augustin Cauchy, Série 2, tome 1, p. 130).


There is an interesting passage in

Dodgson, Charles Lutwidge. An Elementary Treatise on Determinants: With Their Application to Simultaneous Linear Equations and Algebraical Geometry. Vol. 13. Macmillan and Company, 1867:

New words and symbols are always a most unwelcome addition to a Science, especially to one already burdened with an enormous vocabulary, yet I think the Definitions given of them will be found to justify their introduction, as the only way of avoiding tedious periphrasis. The symbols employed to represent the single elements of a Determinant, require perhaps a word of apology, and it may be well to enumerate those already in use, and to point out what seem to be their chief defects. We may commence with

[image: $a_1,\ b_1,\ c_1;\ \ a_2,\ b_2,\ c_2;\ \ldots$],

where the change of letter indicates a change of column, and the change of subscript a change of row. Now the properties of Determinants, relating to columns, being always convertible into properties relating to rows, and vice versa, it was a sufficient objection to this system of notation, that it represented things distinctly analogous by methods so different, and it was properly superseded by the notation introduced by Leibnitz,

[image: $a_{1,1},\ a_{1,2};\ \ a_{2,1},\ a_{2,2};\ \ldots$],

in which the numbers, both of column and row, are alike denoted by subscripts. But it seems a fatal objection to this system that most of the space is occupied by a number of a's, which are wholly superfluous, while the only important part of the notation is reduced to minute subscripts, alike difficult to the writer and the reader. It was almost an obvious improvement on this system to raise the subscripts into the line, and omit the a's altogether, as suggested by Baltzer, thus -

[image: $(1,1),\ (1,2);\ \ (2,1),\ (2,2);\ \ldots$],

and this system, though tedious for writing, might serve very well, were it not for its liability to be confused with the notation, common in Plane Algebraical Geometry, by which $(1,1)$ denotes the Point $x=1,\ y=1$. The symbol

[image: Dodgson's proposed symbol for a single element],

which I have ventured to suggest as an emendation on this last, will be found, I have great hopes, sufficiently simple, distinct, and easy to be written. I have turned the symbol towards the left, in order to avoid all chance of confusion with $\int$, the symbol for integration.


Edit

Credit for the pointer to this book is due to the Hidden Math in Alice in Wonderland, since that is where I learned about it.


According to this article, Frege used subscript notation in 1879: Frege works

You may also be interested in this article, which says that Georg Cantor used subscripts in his notation for cardinal numbers; he was born in 1845.

Whilst this is just my opinion, I suspect the practice to be much older, given the use of symbols to vary letters in many languages and variant characters in earlier cultures; however, I cannot find a source for this.

  • Cantor provides an example published in 1874: the first displayed equation in his Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen [PDF], Journal für die Reine und Angewandte Mathematik, 1874 (77): 258–262, is $$(1.)\qquad a_0\omega^n+a_1\omega^{n-1}+\cdots+a_n=0\,.$$
    – Brian M. Scott (May 10, 2021)

Hermann Grassmann used subscripts in his Lineale Ausdehnungslehre from 1844. For example, on page 71 he sets up a system of $n$ linear equations in $n$ unknowns of the form $$a_1 x_1 + a_2 x_2 +\dots + a_n x_n = a_0 \\ b_1 x_1 + b_2 x_2 +\dots+ b_n x_n = b_0 \\ \vdots \\ s_1 x_1 + s_2 x_2 + \dots + s_n x_n = s_0$$ Note that he does not use double subscripts, as we would today.
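(For comparison, in today's double-subscript convention, where the first subscript tracks the equation and the second the unknown, the same system would be written as $$a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n = b_1 \\ a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n = b_2 \\ \vdots \\ a_{n1} x_1 + a_{n2} x_2 + \dots + a_{nn} x_n = b_n$$ so that Grassmann's progression of letters $a, b, \dots, s$ becomes a progression of first subscripts.)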


Cajori's History of Mathematical Notations, which you mention, also describes the use of subscripts for the terms of Taylor series by William Emerson in his Method of Increments (1763); the notation is introduced in the section "Notation" on page 2.

Indeed, you can see this in Google Books: The Method of Increments.


The answers above have centered on the sequential notation used in matrices, and in the expansion of determinants in particular. There is a thread that seems to trace the collective history of the use of subscripts, but key dates remain elusive, and for good reason: access to the original materials is rare. Nevertheless, we may glean from the writings of others some information about the circumstances and earliest dates at which subscripts were used. When, exactly, was the subscript first used as an index?

Among the answers above, g.kov quotes Dodgson's commentary crediting Leibnitz with introducing the notation we currently use. However, Dodgson gives no date establishing when Leibnitz introduced it. A valuable clue is nevertheless given by Paul H. Hanus in An Elementary Treatise on the Theory of Determinants (Ginn and Company, Boston, 1886). In Chapter 1 (Preliminary Notions and Definitions), Article 1 (Discovery of Determinants), Hanus writes:

In a letter dated April 28, 1693, Leibnitz communicates his discovery to L'Hospital; and later, in another letter, expresses the conviction that the functions will develop remarkable and very important properties, - a conviction which time has abundantly verified. Leibnitz, however, never pursued the subject himself, and his discovery lay dormant till the middle of the eighteenth century.

Hanus continues that in 1750 Gabriel Cramer rediscovered determinants while working on the analysis of curves: Cramer had to solve sets of linear equations and naturally encountered the same functions that had attracted the attention of Leibnitz. Hanus further states, in Chapter 1, Articles 2 through 7 (Determinants produced by eliminating the unknowns from a system of simultaneous equations), that the notation is attributable to Laplace:

...the terms in which the subscripts occur in their natural order are positive, while in the negative terms there is an inversion of the natural order in the subscripts... It has been agreed to denote them, following Laplace, by writing the letters involved in regular succession, affecting each with the subscripts in order, and enclosing the whole expression within parentheses, thus,

$$(a_1 b_2) \equiv a_1 b_2 - a_2 b_1;\qquad (a_2 c_3) \equiv a_2 c_3 - a_3 c_2,\ \text{etc.}$$

The first of these is, of course, a second-order determinant, which in modern notation would appear as

$$ \begin{vmatrix} a_{1} & b_{1}\\ a_{2} & b_{2}\\ \end{vmatrix} = a_1 b_2 - a_2 b_1 .$$

This general result is known as Laplace's expansion, as noted in C. R. Wylie, Jr., Advanced Engineering Mathematics, Second Edition, McGraw-Hill Book Company, Inc., New York, 1960.
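To make the sign rule concrete ("subscripts in natural order are positive, an inversion of the order gives a negative term"), here is a minimal Python sketch, my own illustration rather than anything from Hanus or Wylie, that expands a determinant as a signed sum over permutations of the subscripts:

```python
from itertools import permutations

def inversions(perm):
    """Count the pairs that are out of natural order in a permutation."""
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

def det(M):
    """Expand det(M) as a signed sum over permutations of the row subscripts:
    the term with subscripts in natural order is positive, and each inversion
    flips the sign (the Leibniz formula)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = (-1) ** inversions(perm)
        term = 1
        for col, row in enumerate(perm):
            term *= M[row][col]
        total += sign * term
    return total

# Hanus's second-order example, with columns a, b and row subscripts 1, 2:
a1, a2, b1, b2 = 1, 3, 2, 4
print(det([[a1, b1], [a2, b2]]))   # a1*b2 - a2*b1 = 1*4 - 3*2 = -2
```

For $n = 2$ this reproduces $(a_1 b_2) \equiv a_1 b_2 - a_2 b_1$ exactly: the identity permutation contributes the positive term, and the single inverted permutation contributes the negative one.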
