$\begingroup$

How many "super imaginary" numbers are there, i.e. numbers like $i$? I always wanted to come up with a number like $i$, but it seemed impossible, until I thought about the relation between $i$ and rotation. What about hyperbolic rotation? A complex number $$ z = a + bi $$ can describe a matrix $$ \begin{bmatrix} a & -b \\ b & a\end{bmatrix} $$ You can "discover" $i$ by computing $$ \begin{bmatrix} c & -d \\ d & c\end{bmatrix} \cdot \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} ac - bd \\ ad + bc \end{pmatrix} $$ $$ (a + bi) \cdot (c + di) = ac + adi + bci + bdi^2 $$ From here you can infer that $ i^2 = -1 $.

So what if we do the same thing, but with a different matrix? $$ z = a + bh$$ can describe a matrix $$ \begin{bmatrix} a & b \\ b & a\end{bmatrix} $$ and we can discover $h$ the same way $$ \begin{bmatrix} c & d \\ d & c\end{bmatrix} \cdot \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} ac + bd \\ ad + bc \end{pmatrix} $$ $$ (a + bh) \cdot (c + dh) = ac + adh + bch + bdh^2 $$ From here we infer that $ h^2 = 1 $.
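Both matrix representations can be sanity-checked numerically. A minimal sketch in Python (the helper names `mat_mul`, `as_complex`, and `as_split` are mine, purely for illustration):

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as [[a, b], [c, d]] (rows)."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def as_complex(a, b):   # a + b*i  ->  [[a, -b], [b, a]]
    return [[a, -b], [b, a]]

def as_split(a, b):     # a + b*h  ->  [[a, b], [b, a]]
    return [[a, b], [b, a]]

# The matrix playing the role of i squares to minus the identity ...
I = as_complex(0, 1)
print(mat_mul(I, I))        # [[-1, 0], [0, -1]]

# ... while the matrix playing the role of h squares to the identity.
H = as_split(0, 1)
print(mat_mul(H, H))        # [[1, 0], [0, 1]]

# (a + bh)(c + dh) = (ac + bd) + (ad + bc)h, read off the first column:
P = mat_mul(as_split(2, 3), as_split(5, 7))
print(P[0][0], P[1][0])     # 31 29  ->  2*5 + 3*7 and 2*7 + 3*5
```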

Also $$ e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots $$ $$ \begin{align} e^{xh} & = 1 + \frac{xh}{1!} + \frac{(xh)^2}{2!} + \frac{(xh)^3}{3!} + \frac{(xh)^4}{4!} + \frac{(xh)^5}{5!} + \cdots \\ & = 1 + \frac{xh}{1!} + \frac{x^2}{2!} + \frac{x^3h}{3!} + \frac{x^4}{4!} + \frac{x^5h}{5!} + \cdots \\ & = \cosh{x} + h \cdot \sinh{x} \end{align} $$
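The identity $e^{xh} = \cosh{x} + h \cdot \sinh{x}$ can likewise be checked by summing the series with $h^2 = 1$. A sketch, treating $a + bh$ as the pair $(a, b)$ (the function names are mine):

```python
import math

def split_mul(z, w):
    a, b = z
    c, d = w
    return (a*c + b*d, a*d + b*c)   # (a + bh)(c + dh) with h^2 = 1

def split_exp(x, terms=30):
    """Truncated series: sum of (x*h)^n / n! for n < terms."""
    total = (0.0, 0.0)
    power = (1.0, 0.0)              # (x*h)^0 = 1
    fact = 1.0
    for n in range(terms):
        if n > 0:
            power = split_mul(power, (0.0, x))   # multiply by x*h
            fact *= n
        total = (total[0] + power[0]/fact, total[1] + power[1]/fact)
    return total

a, b = split_exp(0.7)
print(a, math.cosh(0.7))    # both ~1.2552
print(b, math.sinh(0.7))    # both ~0.7586
```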

How many more numbers like this are there? And does that mean that for each set of trigonometric functions there exists a number which can turn multiplication into a rotation using those trigonometric functions?

(Sorry if I got some things wrong)

$\endgroup$
  •
    $\begingroup$ One need not even restrict oneself to $2\times 2$ matrices. There are, for example, the quaternions: like the complex numbers, but with three distinct imaginary units $i,j,k$ such that $i^2=j^2=k^2=ijk=-1$ (note: they aren't commutative). As for "how many" different ways we can extend the real numbers to include numbers with properties similar to $i$: of course there are infinitely many. Many of them won't be useful or ever studied, but some extensions, like the quaternions, have received real attention. $\endgroup$
    – JMoravitz
    Commented Dec 31, 2017 at 20:30
  •
    $\begingroup$ Apparently edited onto the current Wikipedia page. $\endgroup$
    – Paul LeVan
    Commented Dec 31, 2017 at 21:10
  •
    $\begingroup$ Not essential, but do you write matrix multiplication in the reverse order, like $\begin{pmatrix} a \\ b \end{pmatrix} \cdot \begin{bmatrix} c & d \\ d & c\end{bmatrix}$? $\endgroup$ Commented Jan 1, 2018 at 23:20
  • $\begingroup$ Jeppe Stig Nielsen: I see computer graphics versus mathematics going opposite ways on handedness, row versus column vectors, left versus right, etc. When reading computer-graphics papers, you have to be careful about interpreting what you read. $\endgroup$
    – Rob
    Commented Jan 3, 2018 at 1:39
  •
    $\begingroup$ And because the quaternions are not multiplicatively commutative, you can't have a function like the logarithm that maps multiplication into addition (which is commutative). $\endgroup$ Commented Jan 3, 2018 at 4:20

4 Answers

$\begingroup$

Your $h$-based number system is called the split-complex numbers, and what you called $h$ is in my experience usually called $j$ (although, as @user14972 notes, $h$ is sometimes used). A related system introduces an $\epsilon$ satisfying $\epsilon^2=0$, and this gives the dual numbers. Thinking in terms of linear transformations, one can show that these two systems and the complex numbers are the only ways to extend $\mathbb{R}$ to a $2$-dimensional commutative, associative number system satisfying certain properties. However:

  • The Cayley-Dickson construction allows you to go from real numbers to complex numbers and thereafter double the dimension as often as you like by adding new square roots of $-1$, taking you to quaternions, octonions, sedenions etc.;
  • Variants exist in which some new numbers square to $0$ or $1$ instead, e.g. you can have split quaternions and other confusingly named number systems;
  • If you really like, you can take any degree-$d$ polynomial $p\in\mathbb{R}[X]$ with $d\ge 2$ and create a number system out of the degree-$<d$ polynomial functions of a non-real root of $p$ you've dreamed up, e.g. $\mathbb{C}$ arises from $p=X^2+1$, whereas this video explores $p=X^3-1$.
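The construction in the last bullet amounts to ordinary polynomial multiplication followed by reduction modulo $p$. A minimal Python illustration (the function name is mine; coefficients are stored lowest degree first):

```python
def poly_mul_mod(u, v, p):
    """Multiply polynomials u, v (coefficient lists, low degree first)
    and reduce modulo the monic polynomial p of degree d."""
    d = len(p) - 1
    prod = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            prod[i + j] += ui * vj
    # Reduce: replace X^k (k >= d) using X^d = -(p[0] + ... + p[d-1] X^{d-1}).
    for k in range(len(prod) - 1, d - 1, -1):
        c = prod[k]
        prod[k] = 0.0
        for m in range(d):
            prod[k - d + m] -= c * p[m]
    return prod[:d]

# p = X^2 + 1 recovers i^2 = -1: here "i" is the class of X, i.e. [0, 1].
print(poly_mul_mod([0, 1], [0, 1], [1, 0, 1]))      # [-1.0, 0.0]

# p = X^3 - 1 gives a 3-dimensional system where the class of X cubes to 1.
x = [0, 1, 0]
x2 = poly_mul_mod(x, x, [-1, 0, 0, 1])
print(poly_mul_mod(x2, x, [-1, 0, 0, 1]))           # [1.0, 0.0, 0.0]
```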
$\endgroup$
  •
    $\begingroup$ That is so interesting! I never thought this existed. Thank you for your response. $\endgroup$
    – EEVV
    Commented Dec 31, 2017 at 21:22
  • $\begingroup$ It's called the "hyperbolic numbers" too; and $h$ does get used for the "hyperbolic unit". $\endgroup$
    – user14972
    Commented Jan 1, 2018 at 0:07
  •
    $\begingroup$ Hm, I've heard of multicomplex numbers, but not split-complex numbers... $\endgroup$
    – user541686
    Commented Jan 1, 2018 at 7:12
  •
    $\begingroup$ Hurkyl, the funny thing is that I called it the hyperbolic number too! That's why it is $ h $ -- for hyperbolic. $\endgroup$
    – EEVV
    Commented Jan 1, 2018 at 11:09
  • $\begingroup$ Wikipedia Split-complex number § Synonyms lists so many other names (many of which are not that old). $\endgroup$ Commented Jan 1, 2018 at 23:22
$\begingroup$

There are stockpiles of algebraic constructions out there: direct sums, direct products, quotients, sub-structures, free algebras, polynomial rings, localisations, algebraic closures, completions, to name a few. This gives us many different ways to create new algebraic structures, i.e. to produce new and beautiful objects that you can calculate with. Some of those can be called "numbers", if you wish, but that is just a question of how to label them: the real push is to investigate the constructed structures, see what they are useful for, and apply them to solving various problems.

Your construction is a commutative subring of $M_2(\mathbb R)$. Another construction gives you the same structure: the quotient $\mathbb R[x]/(x^2-1)$, i.e. residues of polynomials in one variable under division by the polynomial $x^2-1$. What you've got there is a ring with zero divisors: for example, $0=h^2-1=(h-1)(h+1)$ but $h-1\ne 0$ and $h+1\ne 0$. This makes it harder to solve equations with these numbers. Thus this structure, while still interesting, is harder to work with (and produces fewer results) than e.g. the complex numbers.
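These zero divisors can be checked concretely, representing $a + bh$ as the pair $(a, b)$ (a sketch; the helper name is mine):

```python
def split_mul(z, w):
    """(a + bh)(c + dh) = (ac + bd) + (ad + bc)h, using h^2 = 1."""
    a, b = z
    c, d = w
    return (a*c + b*d, a*d + b*c)

h_minus_1 = (-1, 1)   # h - 1
h_plus_1 = (1, 1)     # h + 1
print(split_mul(h_minus_1, h_plus_1))   # (0, 0): nonzero factors, zero product
```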

My big point was: with the machinery mathematics has these days, it is not that hard to invent new numbers, but it is as hard as ever to invent new useful numbers. A whole other challenge is to invent new constructions, which would produce new structures in ways never seen before.

$\endgroup$
$\begingroup$

Geometric Algebra (GA) allows for analogues of complex numbers in any number of dimensions. It subsumes scalars, vectors, bivectors (which play the role of normals), quaternions, tensors, etc. The whole point is that you perform algebra in a coordinate-free way, and yet constructs such as imaginary numbers and quaternions show up as special cases. Being coordinate-free makes it easy to work with high-dimensional cases. GA is a mathematical language designed to align with geometric intuition. The key to all of it is joining the dot product and the wedge product (a generalization of the cross product) in a way that works in all dimensions:

$$ u v = (u \cdot v) + (u \wedge v) $$

The geometric product has a commutative part and an anti-commutative part, so it does not commute in general, but scalars commute with everything. If $e_1$ is perpendicular to $e_2$, and $e_3$ is perpendicular to them both, then they form a basis for doing 3D geometry. The basis vectors are akin to the $x$, $y$, and $z$ axes. Distinct basis vectors anti-commute:

$$ e_1 e_2 = -e_2 e_1 $$

And each basis vector squares to $1$: $$ e_1 e_1 = 1 $$

This makes determinants, for instance, just fall out of the definitions. Multiply two 2D vectors with scalar coefficients:

$$ (a_1 e_1 + a_2 e_2) (b_1 e_1 + b_2 e_2) $$

Distribute across them as usual, but don't commute anything yet: $$ a_1 e_1 b_1 e_1 + a_1 e_1 b_2 e_2 + a_2 e_2 b_1 e_1 + a_2 e_2 b_2 e_2 $$

Collect scalars together $$ a_1 b_1 e_1 e_1 + a_1 b_2 e_1 e_2 + a_2 b_1 e_2 e_1 + a_2 b_2 e_2 e_2 $$

Anti-commute and cancel vectors to simplify $$ a_1 b_1 + a_1 b_2 e_1 e_2 - a_2 b_1 e_1 e_2 + a_2 b_2 $$ $$ (a_1 b_1 + a_2 b_2) + (a_1 b_2 - a_2 b_1) e_1 e_2 $$ Note that we multiplied a pair of 1D objects (vectors) and got back a sum of a 0D object (scalar) and a 2D object (bivector). A bivector represents a plane of rotation; it is the dual of a cross-product vector. But with GA you get the functionality of a cross product in all dimensions, not just 3D.

In 2D, $e_1 e_2$ functions as $I$, one of the imaginary planes.
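The 2D case is small enough to write out in full. Here is a sketch of the 2D geometric product on elements $(s, x, y, b)$ standing for $s + x e_1 + y e_2 + b\,e_1 e_2$ (my own encoding, purely for illustration):

```python
def gp(u, v):
    """Geometric product in 2D, using e1 e1 = e2 e2 = 1 and e1 e2 = -e2 e1."""
    s1, x1, y1, b1 = u
    s2, x2, y2, b2 = v
    return (s1*s2 + x1*x2 + y1*y2 - b1*b2,      # scalar part
            s1*x2 + x1*s2 - y1*b2 + b1*y2,      # e1 part
            s1*y2 + y1*s2 + x1*b2 - b1*x2,      # e2 part
            s1*b2 + b1*s2 + x1*y2 - y1*x2)      # e1 e2 (bivector) part

e1 = (0, 1, 0, 0)
e2 = (0, 0, 1, 0)
e12 = gp(e1, e2)
print(e12)               # (0, 0, 0, 1)
print(gp(e12, e12))      # (-1, 0, 0, 0): the bivector squares to -1, like i

# The vector-times-vector case reproduces the dot and wedge parts above:
u = (0, 2, 3, 0)         # 2 e1 + 3 e2
v = (0, 5, 7, 0)         # 5 e1 + 7 e2
print(gp(u, v))          # (31, 0, 0, -1): dot 2*5 + 3*7, wedge 2*7 - 3*5
```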

$\endgroup$
  • $\begingroup$ This is very nice. I never really understood the non-commutative property. Thank you. $\endgroup$
    – EEVV
    Commented Jan 1, 2018 at 11:04
    $\begingroup$ If it's not apparent, the dot product is related to $\cos$: $$ (a_1 b_1 + a_2 b_2) = \| \mathbf{a} \| \| \mathbf{b} \| \cos(\theta) $$ $$ (a_1 b_2 - a_2 b_1) e_1 e_2 = \| \mathbf{a} \| \| \mathbf{b} \| \sin(\theta) I $$ And if you divide $\mathbf{a} \mathbf{b}$ by those lengths, you just get a familiar formula relating angles to vectors. $\endgroup$
    – Rob
    Commented Jan 2, 2018 at 2:39
    $\begingroup$ The justification for calling the product of two perpendicular unit vectors an $I$ (of which there are many in multidimensional space) is that it squares to $-1$: $$ (e_1 e_2) (e_1 e_2) = e_1 e_2 e_1 e_2 = -e_1 e_1 e_2 e_2 = -1 $$ $\endgroup$
    – Rob
    Commented Jan 2, 2018 at 2:53
$\begingroup$

We can view the complex numbers $\Bbb C$ as the quotient $$\Bbb R[x] / \langle x^2 + 1 \rangle :$$ informally, it is the set (in fact, ring) of real polynomials, where we declare two polynomials to be the same iff their difference is a multiple of $x^2 + 1$. If we denote by $i$ the image of $x$ under the quotient map $\Bbb R[x] \to \Bbb C$, then we have $i^2 + 1 = 0$, so this definition of $i$ coincides with the usual one, and we can write any complex number as $c + d i$ for some real numbers $c, d$.

This suggests in turn thinking of $\Bbb C$ as a real vector space with basis $(1, i)$, in which case we can write $c + d i$ as $$ \pmatrix{c\\d} . $$

Multiplication by $i$ is given by $$ i \cdot \pmatrix{c\\d} = i (c + i d) = -d + c i = \pmatrix{-d \\ c} = \pmatrix{\cdot&-1\\1&\cdot} \pmatrix{c\\d} . $$ Likewise multiplication by $1$ amounts to multiplication by the identity matrix, so by linearity the matrix representation of multiplication by $a + i b$ is $$ \pmatrix{a&-b\\b&a} . $$ A straightforward verification shows that multiplication of complex numbers corresponds to multiplication by these matrices. In particular, these matrices form a commutative subring of the ring $M_2(\Bbb R)$ of $2 \times 2$ real matrices.

We can play the same game but replacing $x^2 + 1$ with any quadratic polynomial (which we may as well take to be monic), $f(x) := x^2 + p x + q$. If we denote the image of $x$ under the quotient map $\Bbb R[x] \to \Bbb R [x] / \langle f(x) \rangle$ by $\xi$, then $\xi^2 = -q - p \xi$, and the matrix representation (w.r.t. the basis $(1, \xi)$) of multiplication by $\xi$ is $$ \pmatrix{\cdot&-q\\1&-p} $$ (it is not a coincidence that this is the companion matrix of $f(x)$), identifying $$a + b \xi \leftrightarrow \pmatrix{a & -q b \\ b & a - pb} ,$$ and again these matrices comprise a commutative subring of $M_2(\Bbb R)$.
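This identification is easy to verify mechanically: reduce products using $\xi^2 = -q - p\xi$ and read off the columns of the multiplication map. A sketch (the function names are mine):

```python
def mul(z, w, p, q):
    """(a + b*xi)(c + d*xi), reduced with xi^2 = -q - p*xi."""
    a, b = z
    c, d = w
    return (a*c - q*b*d, a*d + b*c - p*b*d)

def matrix_of(a, b, p, q):
    """Matrix of multiplication by a + b*xi: columns are images of 1 and xi."""
    col1 = mul((a, b), (1, 0), p, q)
    col2 = mul((a, b), (0, 1), p, q)
    return [[col1[0], col2[0]], [col1[1], col2[1]]]

# For general p, q this is [[a, -q*b], [b, a - p*b]]:
print(matrix_of(2, 3, 5, 7))     # [[2, -21], [3, -13]]

# And multiplication by xi itself gives the companion matrix [[0, -q], [1, -p]]:
print(matrix_of(0, 1, 5, 7))     # [[0, -7], [1, -5]]
```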

Example The special case $f(x) = x^2 - 1$ ($p = 0, q = -1$) recovers the example in the question statement, which corresponds to the ring of matrices of the form $$\pmatrix{a&b\\b&a},$$ which we may identify with the quotient $\Bbb R[x] / \langle x^2 - 1 \rangle$. We sometimes call this ring the split-complex numbers. Unlike $x^2 + 1$, $x^2 - 1 = (x - 1) (x + 1)$ has real roots, so this ring has zero divisors: Using $h$ for $\xi$ as in the question statement, we have $$0 = h^2 - 1 = (h + 1) (h - 1) .$$ Indeed, $$\Bbb R[x] / \langle x^2 - 1 \rangle \cong (\Bbb R[x] / \langle x - 1 \rangle) \oplus (\Bbb R[x] / \langle x + 1 \rangle) \cong \Bbb R \oplus \Bbb R .$$ In fact, this is an isomorphism of $\Bbb R$-algebras.
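The isomorphism can be checked on a sample: the map sends $a + bh$ to $(a + b, a - b)$, i.e. evaluation at the roots $h = 1$ and $h = -1$, and it turns the split-complex product into the componentwise product (a sketch; the names are mine):

```python
def split_mul(z, w):
    """(a + bh)(c + dh) = (ac + bd) + (ad + bc)h, using h^2 = 1."""
    a, b = z
    c, d = w
    return (a*c + b*d, a*d + b*c)

def phi(z):
    """The isomorphism: a + bh  ->  (a + b, a - b)."""
    a, b = z
    return (a + b, a - b)

z, w = (2, 3), (5, 7)
lhs = phi(split_mul(z, w))
rhs = (phi(z)[0]*phi(w)[0], phi(z)[1]*phi(w)[1])   # componentwise product
print(lhs, rhs)    # (60, 2) (60, 2)
```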

Example Taking instead $f(x) = x^2$ ($p = q = 0$), we sometimes use the symbol $\epsilon$ for $\xi$, which gives $\epsilon^2 = 0$ and $$a + b \epsilon \leftrightarrow \pmatrix{a&\cdot\\b&a} .$$ This ring, which we have identified with $\Bbb R[x] / \langle x^2 \rangle$, is sometimes called the dual numbers.
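One reason the dual numbers get attention: since $\epsilon^2 = 0$, evaluating a polynomial at $x + \epsilon$ carries the derivative along, which is the idea behind forward-mode automatic differentiation. A sketch (the names and the sample polynomial are mine):

```python
def dual_mul(z, w):
    """(a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, using eps^2 = 0."""
    a, b = z
    c, d = w
    return (a*c, a*d + b*c)

def dual_add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def f(x):
    # f(x) = x^3 + 2x, built from dual_mul / dual_add
    x3 = dual_mul(dual_mul(x, x), x)
    return dual_add(x3, dual_mul((2, 0), x))

val, deriv = f((3, 1))          # evaluate at x = 3 + eps
print(val, deriv)               # 33 29  ->  f(3) = 33, f'(3) = 3*9 + 2 = 29
```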

These three cases exhaust all the examples, in the sense that for any $p, q$, the resulting ring is isomorphic to one of the above three examples, according to the sign of the discriminant $p^2 - 4 q$ of $f$. So, for example, for any $p, q$ with $p^2 < 4 q$, we can identify the quotient ring with the complex numbers.

As indicated in the other answers, the notion of complex numbers can be generalized in many other ways, too. (For example, the split-quaternions mentioned in J.G.'s answer can be identified, as an $\Bbb R$-algebra, with $M_2(\Bbb R)$ itself.)

$\endgroup$
