208
$\begingroup$

Suppose I have a square matrix $\mathsf{A}$ with $\det \mathsf{A}\neq 0$.

How could we define the following operation? $$\mathsf{A}!$$

Maybe we could work out a simple example, provided this makes any sense, with

$$\mathsf{A} = \left(\begin{matrix} 1 & 3 \\ 2 & 1 \end{matrix} \right) $$

$\endgroup$
3
  • 29
    $\begingroup$ Made me wonder: do you ask just out of curiosity (i.e. recreational mathematics), or do you have some application/usage in mind? $\endgroup$
    – Oliphaunt
    Commented Jan 31, 2016 at 21:19
  • 6
    $\begingroup$ I asked this as a separate question: math.stackexchange.com/questions/1637318/… $\endgroup$
    – Oliphaunt
    Commented Feb 2, 2016 at 12:21
  • $\begingroup$ @Alan Turing You might wish to see my contribution to your interesting question which I saw only yesterday, as I am not a regular visitor here. $\endgroup$ Commented Oct 23, 2016 at 8:15

7 Answers

298
$\begingroup$

For any holomorphic function $G$, we can define a corresponding matrix function $\tilde{G}$ via (a formal version of) the Cauchy Integral Formula: We set $$\tilde{G}(B) := \frac{1}{2 \pi i} \oint_C G(z) (z I - B)^{-1} \, dz ,$$ where $C$ is an (arbitrary) anticlockwise curve that encloses (once each) the eigenvalues of the (square) matrix $B$. Note that the condition on $C$ means that restrictions on the domain of $G$ determine restrictions on the domain of $\tilde{G}$.

So, we could make sense of the factorial of a matrix if we had a holomorphic function that restricted to the factorial function $n \mapsto n!$ on nonnegative integers. Fortunately, there is such a function: The function $$F: z \mapsto \Gamma(z + 1),$$ where $\Gamma$ denotes the Gamma function, satisfies $F(n) = n!$ for nonnegative integers $n$. (There is a sense in which $F$ is the best possible function extending the factorial function, but notice the target of that link really just discusses the real Gamma function, which our $\Gamma$ extends.) Thus, we may define the factorial of a (square) matrix $B$ by substituting the second display equation above into the first: $$\color{#df0000}{\boxed{B! := \tilde{F}(B) = \frac{1}{2 \pi i} \oint_C \Gamma(z + 1) (z I - B)^{-1} \, dz}} .$$

The (scalar) Cauchy Integral Formula shows that this formulation has the obviously desirable property that for scalar matrices it recovers the usual factorial, or more precisely, that $\pmatrix{n}! = \pmatrix{n!}$ (for nonnegative integers $n$).
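This definition can even be checked numerically. The following is a rough sketch (not part of the original argument; the names are mine, and it assumes NumPy/SciPy): it takes the contour $C$ to be a small circle around each eigenvalue of the matrix from the question, with a radius chosen so that the poles of $\Gamma(z + 1)$ at $z = -1, -2, \dots$ stay outside, and it assumes the eigenvalues are distinct.

import numpy as np
from scipy.special import gamma

def matrix_factorial_contour(B, radius=0.3, n=4000):
    # B! = (1/(2*pi*i)) * contour integral of Gamma(z + 1) (zI - B)^(-1) dz,
    # with the contour taken as a small circle around each distinct eigenvalue;
    # the trapezoid rule on a circle converges very quickly for smooth integrands.
    I = np.eye(len(B))
    total = np.zeros_like(B, dtype=complex)
    for lam in np.linalg.eigvals(B):
        theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        z = lam + radius * np.exp(1j * theta)
        dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n)
        total += sum(gamma(zk + 1) * np.linalg.inv(zk * I - B) * dzk
                     for zk, dzk in zip(z, dz))
    return total / (2j * np.pi)

print(matrix_factorial_contour(np.array([[1.0, 3.0], [2.0, 1.0]])).real)

This reproduces the explicit value computed further below.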

Alternatively, one could define a matrix function $\tilde G$ (and in particular define $B!$) by evaluating formally the power series $\sum_{k = 0}^{\infty} a_k (z - z_0)^k$ for $G$ about some point $z_0$, that is, declaring $\tilde G(B) := \sum_{k = 0}^{\infty} a_k (B - z_0 I)^k$, but in general this definition is more restrictive than the Cauchy Integral Formula definition, simply because the power series need not converge everywhere (where it does converge, it converges to the value given by the integral formula). Indeed, we cannot use a power series for $F$ to evaluate $A!$ directly for our particular $A$: The function $F$ has a pole on the line segment in $\Bbb C$ with endpoints the eigenvalues of $A$, so there is no open disk in the domain of $F$ containing all of the eigenvalues of $A$, and hence there is no basepoint $z_0$ for which the series for $\tilde F$ converges at $A$.

We can define $\tilde G$ in yet another way, which coincides appropriately with the above definitions but which is more amenable to explicit computation: If $B$ is diagonalizable, so that we can decompose $$B = P \pmatrix{\lambda_1 & & \\ & \ddots & \\ & & \lambda_n} P^{-1} ,$$ for eigenvalues $\lambda_a$ of $B$ and some matrix $P$, we define $$\tilde{G}(B) := P \pmatrix{G(\lambda_1) & & \\ & \ddots & \\ & & G(\lambda_n)} P^{-1} .$$ Indeed, by substituting and rearranging, we can see that this coincides, at least formally, with the power series characterization. There is a similar but more complicated formula for nondiagonalizable $B$ that I won't write out here but which is given in the Wikipedia article Matrix function.

Example The given matrix $A$ has distinct eigenvalues $\lambda_{\pm} = 1 \pm \sqrt{6}$, and so can be diagonalized as $$P \pmatrix{1 - \sqrt{6} & 0 \\ 0 & 1 + \sqrt{6}} P^{-1} ;$$ indeed, we can take $$P = \pmatrix{\tfrac{1}{2} & \tfrac{1}{2} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}}.$$

Now, $F(\lambda_{\pm}) = \Gamma(\lambda_{\pm} + 1) = \Gamma(2 \pm \sqrt{6}),$ and putting this all together gives that \begin{align*}\pmatrix{1 & 3 \\ 2 & 1} ! = \tilde{F}(A) &= P \pmatrix{F(\lambda_-) & 0 \\ 0 & F(\lambda_+)} P^{-1} \\ &= \pmatrix{\tfrac{1}{2} & \tfrac{1}{2} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}} \pmatrix{\Gamma (2 - \sqrt{6}) & 0 \\ 0 & \Gamma (2 + \sqrt{6})} \pmatrix{1 & -\frac{\sqrt{3}}{\sqrt{2}} \\ 1 & \frac{\sqrt{3}}{\sqrt{2}}} .\end{align*} Multiplying this out gives $$\color{#df0000}{\boxed{\pmatrix{1 & 3 \\ 2 & 1} ! = \pmatrix{\frac{1}{2} \alpha_+ & \frac{\sqrt{3}}{2 \sqrt{2}} \alpha_- \\ \frac{1}{\sqrt{6}} \alpha_- & \frac{1}{2} \alpha_+}}} ,$$ where $$\color{#df0000}{\alpha_{\pm} = \Gamma(2 + \sqrt{6}) \pm \Gamma(2 - \sqrt{6})}. $$

It's perhaps not very illuminating, but $A!$ has numerical value $$ \pmatrix{1 & 3 \\ 2 & 1}! \approx \pmatrix{3.62744 & 8.84231 \\ 5.89488 & 3.62744} . $$
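The same number can be reproduced in a few lines by following the diagonalization recipe above directly. A minimal sketch (the function name is mine; it assumes NumPy/SciPy and a diagonalizable input):

import numpy as np
from scipy.special import gamma

def matrix_factorial(B):
    # B! via diagonalization: B = P diag(lambda_i) P^(-1), then apply
    # F(z) = Gamma(z + 1) to each eigenvalue.  Assumes B is diagonalizable.
    lam, P = np.linalg.eig(B)
    return P @ np.diag(gamma(lam + 1)) @ np.linalg.inv(P)

A = np.array([[1.0, 3.0], [2.0, 1.0]])
print(matrix_factorial(A))   # approximately [[3.62744, 8.84231], [5.89488, 3.62744]]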

To carry out these computations, one can use Maple's built-in MatrixFunction routine (it requires the LinearAlgebra package) to write a function that computes the factorial of any matrix:

MatrixFactorial := X -> LinearAlgebra:-MatrixFunction(X, GAMMA(z + 1), z);

To evaluate, for example, $A!$, we then need only run the following:

A := Matrix([[1, 3], [2, 1]]);
MatrixFactorial(A);

(NB executing this code returns an expression for $A!$ different from the one above: Their values can be seen to coincide using the reflection formula $-z \Gamma(z) \Gamma(-z) = \frac{\pi}{\sin \pi z} .$ We can further simplify the expression using the identity $\Gamma(z + 1) = z \Gamma(z)$, which extends the factorial identity $(n + 1)! = (n + 1) \cdot n!$, to write $\Gamma(2 \pm \sqrt{6}) = (6 \pm \sqrt{6}) \Gamma(\pm \sqrt{6})$ and so write the entries as expressions algebraic in $\pi$, $\sin(\pi \sqrt{6})$, and $\Gamma(\sqrt{6})$ alone. One can compel Maple to carry out these substitutions by executing simplify(map(expand, %)); immediately after executing the previous code.) To compute the numerical value, we need only execute evalf(%); immediately after the previous code.
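For readers without Maple, SciPy offers a comparable general-purpose routine, scipy.linalg.funm, which evaluates a scalar function at a matrix via the Schur form (so it does not require diagonalizability). A minimal sketch, again assuming NumPy/SciPy:

import numpy as np
from scipy.linalg import funm
from scipy.special import gamma

A = np.array([[1.0, 3.0], [2.0, 1.0]])
print(funm(A, lambda z: gamma(z + 1)).real)   # same numerical value as above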

By the way, we need not have that $\det B \neq 0$ in order to define $B!$. In fact, proceeding as above we find that the factorial of the (already diagonal) zero matrix is the identity matrix: $$0! = \pmatrix{\Gamma(1) \\ & \ddots \\ & & \Gamma(1)} = I .$$ Likewise using the formula for nondiagonalizable matrices referenced above together with a special identity gives that the factorial of the $2 \times 2$ Jordan block of eigenvalue $0$ is, somewhat amusingly, $$\pmatrix{0 & 1\\0 & 0} ! = \pmatrix{1 & -\gamma \\ 0 & 1} ,$$ where $\gamma$ is the Euler-Mascheroni constant.
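The computation behind that last example is short enough to sketch here: for the Jordan block $J = \lambda I + N$ with $N = \pmatrix{0 & 1 \\ 0 & 0}$ (so $N^2 = 0$), the matrix-function formula gives $$\tilde{F}(J) = F(\lambda) \, I + F'(\lambda) \, N ,$$ and with $F(z) = \Gamma(z + 1)$ and $\lambda = 0$ this is $I + \Gamma'(1) \, N$; the "special identity" is just $\Gamma'(1) = -\gamma$.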

$\endgroup$
13
  • 34
    $\begingroup$ A truly excellent answer. $\endgroup$ Commented Jan 31, 2016 at 15:57
  • 3
    $\begingroup$ Thank you, @Mehrdad, I'm glad you found it interesting! The C.I.F. definition is useful because it takes advantage of the behavior of holomorphic functions but avoids issues of convergence entailed in power series expansions. By construction, $\overline{\exp}$ so defined is just the usual matrix exponential. $\endgroup$ Commented Feb 1, 2016 at 11:02
  • 10
    $\begingroup$ @YoTengoUnLCD One can use, e.g., {\Large !} to increase the size of the factorial symbol, but the way MathJax aligns elements vertically makes this look strange for font sizes as large as you might like them. A kludge for forcing the vertical alignment is embedding the factorial symbol in a (bracketless) matrix, with something like \pmatrix{a&b\\c&d}\!\!\matrix{\Huge !}, which produces $$\pmatrix{a&b\\c&d}\!\!\matrix{\Huge !}$$ The commands \! are used to improve the kerning. $\endgroup$ Commented Feb 1, 2016 at 11:06
  • 4
    $\begingroup$ @KimPeek *thou took'st, if I'm not mistaken :-D $\endgroup$
    – The Vee
    Commented Feb 1, 2016 at 15:44
  • 3
$\begingroup$ @TobiasKienzler The function $\Gamma$ is holomorphic on its domain, and indeed, this much is necessary to guarantee path-independence of the integral in the definition of $\tilde{F}$. This is also why we need to enclose all of the eigenvalues: The integrand $F(z) (z I - B)^{-1}$ has poles at the eigenvalues of $B$, and in general these poles contribute to the resulting integral, so an integral over some loop not enclosing all the eigenvalues of $B$ will simply give a value other than $\tilde{F}(B)$. $\endgroup$ Commented Feb 2, 2016 at 11:59
42
$\begingroup$

The Gamma function is analytic (away from its poles): use its power series.

EDIT: already done: Some properties of Gamma and Beta matrix functions (maybe paywalled).

$\endgroup$
6
  • 2
    $\begingroup$ (+1) for mentioning this technique, but I don't believe it's possible to use power series to compute $A!$ for the particular example matrix $A$ given: The line segment connecting the eigenvalues of $A$ contains a pole of $z \mapsto \Gamma(z + 1)$, so no power series for that function (i.e., for any basepoint) converges at $A$. $\endgroup$ Commented Jan 31, 2016 at 14:23
  • $\begingroup$ This would not have occurred to me! But the issue of convergence is a complicated one, I think. The gamma function has an infinite number of poles, after all, so it doesn't have a power series valid everywhere. $\endgroup$
    – TonyK
    Commented Jan 31, 2016 at 14:23
$\begingroup$ @Travis, obviously the convergence will depend on the matrix. $\endgroup$ Commented Jan 31, 2016 at 14:29
  • 2
    $\begingroup$ @VašekPotoček I don't think that's true; since the function $x \mapsto \Gamma(x + 1)$ is well-behaved at the eigenvalues of the given matrix $A$, I believe we can use the Jordan Canonical Form to make sense of $A!$. See my answer for more---comments and corrections are most welcome! $\endgroup$ Commented Jan 31, 2016 at 14:51
  • 1
    $\begingroup$ @Martín-BlasPérezPinilla Yes, my above comment was restricted to the example in the question. But the same reasoning shows that the relevant power series will not converge (again, for any base point) for a large open set of matrices: I think this is the case, for example, if a matrix has an eigenvalue $\lambda$ with $\Re \lambda < -1$ and $|\Im \lambda| > \frac{1}{2}$. $\endgroup$ Commented Jan 31, 2016 at 15:56
25
$\begingroup$

I would start from a direct, logical definition of the matrix factorial, without assuming that we want to carry over every property we know from the factorial on the reals.

We define the standard factorial as $1 \cdot (1+1) \cdot (1+1+1) \cdots (1+1+\cdots+1+1)$

So first let us define $[n]!$ using the same logic, replacing $1$ with the identity matrix. The obvious way to define it is

$$[n]!=\prod\limits_{k=1}^{n}\begin{bmatrix} k & 0\\ 0 & k \end{bmatrix}=\begin{bmatrix} n! & 0\\ 0 & n! \end{bmatrix}$$

All properties of the standard factorial are there. Now, recall that the Gamma function extends the factorial through the functional equation $\Gamma (x+1)=x\Gamma (x)$, with $n!=\Gamma (n+1)$. That is all that is required, so we want to find a matrix Gamma satisfying $\Gamma ([x]+I)=[x]\Gamma ([x])$.

If we define

$$\Gamma (\begin{bmatrix} x & 0\\ 0 & x \end{bmatrix})=\begin{bmatrix} \Gamma (x) & 0\\ 0 & \Gamma (x) \end{bmatrix}$$

we are totally fine because

$$\begin{bmatrix} x & 0\\ 0 & x \end{bmatrix}\begin{bmatrix} \Gamma (x) & 0\\ 0 & \Gamma (x) \end{bmatrix}=\begin{bmatrix} x\Gamma (x) & 0\\ 0 & x\Gamma (x) \end{bmatrix}=\begin{bmatrix} \Gamma (x+1) & 0\\ 0 & \Gamma (x+1) \end{bmatrix}$$

There is nothing amiss if we start from $\begin{bmatrix} x & 0\\ 0 & y \end{bmatrix}$ because

$$\begin{bmatrix} x & 0\\ 0 & y \end{bmatrix}\begin{bmatrix} \Gamma (x) & 0\\ 0 & \Gamma (y) \end{bmatrix}=\begin{bmatrix} x\Gamma (x) & 0\\ 0 & y\Gamma (y) \end{bmatrix}=\begin{bmatrix} \Gamma (x+1) & 0\\ 0 & \Gamma (y+1) \end{bmatrix}$$

The remaining part is the case of nonzero off-diagonal entries. What to do with $A=\begin{bmatrix} x_{0} & x_{1}\\ x_{2} & x_{3} \end{bmatrix}$?

So we start from the property we would like to have: $\Gamma([A]+I)=[A]\,\Gamma([A])$.

If we are able to diagonalize $A=P^{-1}\overline{A}P$ and define, in the same manner, $\Gamma([A]) = P^{-1}\Gamma(\overline{A})P$, then we have

$$[A]\,\Gamma([A]) = P^{-1} \overline{A} P \, P^{-1} \Gamma(\overline{A}) P = P^{-1} \overline{A}\, \Gamma(\overline{A}) P = P^{-1} \Gamma(\overline{A}+I) P=\Gamma(A+I),$$

since $A+I=P^{-1}(\overline{A}+I)P$, so all should be fine.

Since $\overline{A}$ is diagonal, with the eigenvalues $\lambda_{1},\lambda_{2}$ on the main diagonal, and we know how to deal with that type of matrix, we have a full definition of $\Gamma(A)$ for general matrices as well.

$$\Gamma(A)=P^{-1}\begin{bmatrix} \Gamma (\lambda_{1}) & 0\\ 0 & \Gamma (\lambda_{2}) \end{bmatrix}P$$

and now $A!=\Gamma(A+I)$, so altogether

$$A!=P^{-1}\begin{bmatrix} \Gamma (\lambda_{1}+1) & 0\\ 0 & \Gamma (\lambda_{2}+1) \end{bmatrix}P$$

Instead of giving the solution just for the example, I will give a general form for a $2\times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$. Take $D=\sqrt{(a-d)^2+4bc}$, the square root of the discriminant, and assume $D \neq 0$ and $c \neq 0$. Then

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} ! = \begin{bmatrix} \frac{a-d-D}{2c} & \frac{a-d+D}{2c} \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \Gamma (\frac{a+d-D}{2}+1 ) & 0 \\ 0 & \Gamma ( \frac{a+d+D}{2} +1)\end{bmatrix} \begin{bmatrix} -\frac{c}{D} & \frac{a-d+D}{2D} \\ \frac{c}{D} & -\frac{a-d-D}{2D} \end{bmatrix}$$

From here you can nicely conclude that the matrix factorial can be expressed using the classical integer factorial when $a+d \pm D$ are even nonnegative integers.

For other values we use the extension of $\Gamma(x)$ itself.
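If one wants to evaluate this closed form numerically, here is a small sketch (the function name is mine; it assumes NumPy/SciPy and, for simplicity, real eigenvalues, i.e. real $D$, besides $D \neq 0$ and $c \neq 0$ as above):

import numpy as np
from scipy.special import gamma

def factorial_2x2(a, b, c, d):
    # General 2x2 closed form above; assumes D != 0, c != 0 and real D.
    D = np.sqrt((a - d) ** 2 + 4.0 * b * c)
    P = np.array([[(a - d - D) / (2 * c), (a - d + D) / (2 * c)],
                  [1.0, 1.0]])
    G = np.diag(gamma(np.array([(a + d - D) / 2 + 1, (a + d + D) / 2 + 1])))
    Pinv = np.array([[-c / D, (a - d + D) / (2 * D)],
                     [ c / D, -(a - d - D) / (2 * D)]])
    return P @ G @ Pinv

print(factorial_2x2(1.0, 3.0, 2.0, 1.0))   # the matrix from the question

For the matrix from the question this reproduces the value found in the other answers.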

$\endgroup$
3
  • $\begingroup$ Very nice! Could you fix the two little "bugs" in the LaTeX code? ^^ Then I'll read it with pleasure! $\endgroup$
    – Enrico M.
    Commented Feb 3, 2016 at 19:54
  • $\begingroup$ @KimPeek: still editing, I have to look for myself how it looks. I think it looks fine now $\endgroup$
    – user195934
    Commented Feb 3, 2016 at 19:55
  • $\begingroup$ @AlexPeter: If you can add the factorial of the matrix which is given in the question that would be awesome. Any way very nice answer. ++1 $\endgroup$
    – Bumblebee
    Commented Feb 9, 2016 at 7:09
9
$\begingroup$

I use the well-known (and simple) definitions

$$n!=\Gamma (n+1)$$

and

$$\Gamma (A+1)=\int_0^{\infty } \exp (-t) \exp (A \log (t)) \, dt$$

Now, if $A$ is a (square) matrix, all we need is to define the exponential function of a matrix.

This can always be done (in principle) via the power series, which only requires calculating powers of the matrix and adding matrices.
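As a minimal illustration of that remark (a sketch only, not the route taken below; names mine, NumPy assumed):

import numpy as np

def expm_series(M, terms=60):
    # exp(M) as the truncated series sum_k M^k / k!, built with a running
    # term so no power is recomputed; adequate for matrices of modest norm.
    result = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result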

We skip here a possible diagonalization procedure (which was shown by others before) and use the function MatrixExp[] of Mathematica. For the matrix given in the OP,

$$A=\left( \begin{array}{cc} 1 & 3 \\ 2 & 1 \\ \end{array} \right);$$

we have

Ax = MatrixExp[A Log[t]]

which gives

$$\left( \begin{array}{cc} \frac{t^{1-\sqrt{6}}}{2}+\frac{t^{1+\sqrt{6}}}{2} & \frac{1}{2} \sqrt{\frac{3}{2}} t^{1+\sqrt{6}}-\frac{1}{2} \sqrt{\frac{3}{2}} t^{1-\sqrt{6}} \\ \frac{t^{1+\sqrt{6}}}{\sqrt{6}}-\frac{t^{1-\sqrt{6}}}{\sqrt{6}} & \frac{t^{1-\sqrt{6}}}{2}+\frac{t^{1+\sqrt{6}}}{2} \\ \end{array} \right)$$

We observe that one of the exponents of $t$ is less than $-1$ (namely $1-\sqrt{6}\approx-1.44949$). This leads to a divergent integral, which will then be understood in the sense of analytic continuation.

This is accomplished simply by replacing each $t^q$ by $\Gamma (q+1)$, since $\int_0^{\infty } \exp (-t)\, t^{q} \, dt=\Gamma (q+1)$ whenever the integral converges.

fA = Ax /. t^q_ -> Gamma[q + 1]

giving

$$\left( \begin{array}{cc} \frac{\Gamma \left(2-\sqrt{6}\right)}{2}+\frac{\Gamma \left(2+\sqrt{6}\right)}{2} & \frac{1}{2} \sqrt{\frac{3}{2}} \Gamma \left(2+\sqrt{6}\right)-\frac{1}{2} \sqrt{\frac{3}{2}} \Gamma \left(2-\sqrt{6}\right) \\ \frac{\Gamma \left(2+\sqrt{6}\right)}{\sqrt{6}}-\frac{\Gamma \left(2-\sqrt{6}\right)}{\sqrt{6}} & \frac{\Gamma \left(2-\sqrt{6}\right)}{2}+\frac{\Gamma \left(2+\sqrt{6}\right)}{2} \\ \end{array} \right)$$

Numerically this is

$$\left( \begin{array}{cc} 3.62744 & 8.84231 \\ 5.89488 & 3.62744 \\ \end{array} \right)$$

This is in agreement with the result of hans and travis.

Discussion

(1) Let me point out that the definitions presented here do not need any specific property of the matrix. For example, it does not matter whether the matrix is singular or whether it has a full set of eigenvectors; the matrix might well be defective.

(2) I have used Mathematica here just to facilitate things. In the end we all use some tools at some stage to get results. The main ideas are independent of the tool.

(3) The procedure described here obviously generalizes to other more or less complicated analytic functions.

As a more exotic example, let us take the harmonic number $H(A)$ of a matrix $A$. This function can be defined using the integral representation (see e.g. Relation between binomial coefficients and harmonic numbers)

$$H(A)=\int_0^1 \frac{1-(1-x)^A}{x} \, dx$$

This definition also needs only the exponential function of the matrix.
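As a quick sanity check of the scalar case (a standard computation, not needed for the matrix result): for a positive integer $n$, substituting $u = 1-x$ gives $$\int_0^1 \frac{1-(1-x)^n}{x}\,dx=\int_0^1\frac{1-u^n}{1-u}\,du=\int_0^1\left(1+u+\cdots+u^{n-1}\right)du=\sum_{k=1}^{n}\frac{1}{k}=H_n ,$$ so the integral representation above does reduce to the ordinary harmonic numbers.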

The result for our matrix A is (after some analytic continuation)

$$\left( \begin{array}{cc} \frac{1}{2} \left(H_{-\sqrt{6}}+H_{\sqrt{6}}+\frac{1}{1-\sqrt{6}}+\frac{1}{1+\sqrt{6}}\right) & \frac{1}{2} \sqrt{\frac{3}{2}} \left(-H_{-\sqrt{6}}+H_{\sqrt{6}}-\frac{1}{1-\sqrt{6}}+\frac{1}{1+\sqrt{6}}\right) \\ \frac{-H_{-\sqrt{6}}+H_{\sqrt{6}}-\frac{1}{1-\sqrt{6}}+\frac{1}{1+\sqrt{6}}}{\sqrt{6}} & \frac{1}{2} \left(H_{-\sqrt{6}}+H_{\sqrt{6}}+\frac{1}{1-\sqrt{6}}+\frac{1}{1+\sqrt{6}}\right) \\ \end{array} \right)$$

Numerically,

$$\left( \begin{array}{cc} 1.51079 & 0.542134 \\ 0.361423 & 1.51079 \\ \end{array} \right)$$

$\endgroup$
0
4
$\begingroup$

Travis' answer is very nice.

It would be good to mention that (almost) any matrix function can be made into a power-series expansion, which eventually involves the values of the function on the eigenvalues of the matrix multiplied by the eigenvectors.

In other words the matrix function is completely characterised by the values it takes on the eigenvalues of the matrix (even if a power-series expansion may be needed).

The above hold for matrices which are diagonalisable (i.e. the number of linearly independent eigenvectors is equal to the matrix dimension). There are ways to expand an arbitrary matrix into what is referred to as generalised eigenvectors, but this will not be pursued further here.

Furthermore, since any square, finite-dimensional matrix satisfies its own characteristic polynomial (the Cayley-Hamilton theorem), the powers $A^k$ for $k \ge n$ ($n$ being the dimension) can be expressed in terms of the powers of $A$ below $n$. So eventually the matrix-function power-series expansion collapses to a polynomial expansion (for square matrices). Finally, this polynomial expansion, for a given function, can be found more easily by methods such as variation of parameters or polynomial modeling.
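To make that last point concrete, here is a small sketch (under the diagonalisable, distinct-eigenvalue assumption above, with NumPy/SciPy assumed): for a $2\times 2$ matrix, $f(A) = c_0 I + c_1 A$, where $c_0, c_1$ are fixed by requiring $f(\lambda_i) = c_0 + c_1 \lambda_i$ at the two eigenvalues.

import numpy as np
from scipy.special import gamma

A = np.array([[1.0, 3.0], [2.0, 1.0]])
lam = np.linalg.eigvals(A)                    # distinct eigenvalues 1 +/- sqrt(6)
c1, c0 = np.polyfit(lam, gamma(lam + 1), 1)   # line through (lambda_i, Gamma(lambda_i + 1))
print(c0 * np.eye(2) + c1 * A)                # matches the other answers numerically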

$\endgroup$
10
  • 1
    $\begingroup$ "any matrix function can be made into a power-series expansion, which eventualy involves the values of the function on the eigen-values of the matrix multiplied by the eigen-vectors." "every matrix function is completely characterised by the values it takes on the eigen-values of the matrix" Both statements are wrong for nondiagonalizable matrices. $\endgroup$
    – Did
    Commented Feb 4, 2016 at 6:42
  • 1
    $\begingroup$ Your remark about full-rank or non full-rank matrices is completely offtopic. Please refer to some simple examples of nondiagonalizable matrices to check if the statements in your answer are valid for such matrices (they are not). $\endgroup$
    – Did
    Commented Feb 4, 2016 at 12:30
  • 2
    $\begingroup$ Full-rank or not full-rank $\ne$ Diagonalisable or not. Please refer to some textbook on matrices. (How come that "eigenvectors" in your first comment mutate to "generalised eigenvectors"? Is this some kind of rhetorical trick? Are "full-rank" and "diagonalisable" supposed to become "generalized full-rank" and "generalized diagonalizable"?) $\endgroup$
    – Did
    Commented Feb 4, 2016 at 18:39
  • 1
    $\begingroup$ @Did, i'm not sure who is trying to use rhetorical tricks here. Anyway "The diagonalization theorem states that an n×n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors, i.e., if the matrix rank of the matrix formed by the eigenvectors is n", i hope you do like Wolfram, as for the rest they are not touched upon. If you think the phrasing can be made better, no problem, else it is beating around the bush and poor use of my time $\endgroup$
    – Nikos M.
    Commented Feb 4, 2016 at 21:18
  • 3
    $\begingroup$ @NikosM. "Full-rank" does not mean that the eigenvectors have full span. Please revise what the rank of a matrix is. $\endgroup$
    – Did
    Commented Feb 5, 2016 at 6:46
3
$\begingroup$

The factorial of an element $n$ of a group can be interpreted in an abstract algebraic sense, which can then be used with matrices. I will try to motivate the intuition here rather than only giving a definition. Consider the expression $n!$: this is $1 \times 2 \times 3 \times \cdots \times n$.

We now consider an arbitrary group $G$ (with a not necessarily commutative operation $+$). We can then consider a particular element of $G$ (where $G$ has generators $g_1, g_2, \ldots, g_r$) which we express minimally as the word

$$W = g_{\mu_1} + g_{\mu_2}+\cdots+ g_{\mu_s} $$

To be concrete, if we let $G$ be the integers, then there are two generators $1, -1$, and a positive integer $N$ can be expressed in the way above as

$$N = N(1) = \underbrace{1+1+1...}_{N \ \text{times}} $$

Here $g_{\mu_k}$ is just positive $1$ the whole time.

If there were a multiplication operation $\times$ defined on this group as well (we haven't defined it yet!), then we could define $W!$ as

$$ g_{\mu_1} \times \left( g_{\mu_1} + g_{\mu_2} \right) \times \ (g_{\mu_1} + g_{\mu_2} + g_{\mu_3} ) \times \cdots \times \ (g_{\mu_1} + g_{\mu_2} + \cdots + g_{\mu_s}) $$

You can verify that if the group $G$ is the integers this results in

$$ 1 \times 2 \times 3 \times \cdots \times n $$

Which is what we would expect.

So the problem of defining a factorial on a group reduces very naturally to the question: "How do we add a multiplication to a group?"

If $G$ is abelian, then you can make $G$ into a ring (with possibly non-commutative multiplication). If $G$ is not abelian, then either a left or right near-ring is the way to go.

Once you add your multiplication then you get a natural factorial.

$\endgroup$
1
  • 1
    $\begingroup$ That is rather "innovative", thank you for this new point of view! $\endgroup$
    – Enrico M.
    Commented Jan 4, 2022 at 21:58
0
$\begingroup$

For a matrix $A$, the value $f(A)$ depends only on the restriction of $f$ to the spectrum of $A$ (with multiplicities).

If the spectrum is simple, let $P$ be the Lagrange interpolation polynomial of the function $f$ on the spectrum of $A$. Then $f(A)= P(A)$. (If the spectrum of $A$ is not simple, consider the Hermite interpolation polynomial instead.)

Examples:

  1. Assume that the $2\times 2$ matrix $A$ has a simple spectrum $\{\alpha, \beta\}$. The Lagrange interpolation polynomial for the function $f$ at the nodes $\alpha$, $\beta$ equals

$$P(x) = \frac{f(\alpha)- f(\beta)}{\alpha - \beta}\cdot x + \frac{\alpha f(\beta) - \beta f(\alpha)}{\alpha- \beta}$$ and

$$f(A) = P(A) = \frac{f(\alpha)- f(\beta)}{\alpha - \beta}\cdot A + \frac{\alpha f(\beta) - \beta f(\alpha)}{\alpha- \beta}\cdot I_2$$ (A worked instance of this formula for the matrix from the question is given at the end of this answer.)

  2. Let $q(x)$ be a polynomial such that $q(A) = 0$ (for instance, $q(x)$ is the characteristic polynomial, or the minimal polynomial). Then $f(A) = P(A)$, where $P$ is the Lagrange (Hermite) interpolation polynomial of $f$ at the roots of $q$.

For instance,

Assume $A$ is a projector ($A^2 = A$); then

$$f(A) = (f(1)-f(0))\cdot A + f(0) \cdot I$$

Assume $A$ is nilpotent, $A^m =0$. Then

$$f(A) = \sum_{k=0}^{m-1} \frac{f^{(k)}(0)}{k!} A^k$$

  3. $f(A)= P(A)$ for any polynomial $P$ that agrees with $f$ at the roots of some $q(x)$ annihilating $A$. It follows that

$$(f\cdot g) (A) = f(A) \cdot g(A)$$

$$f(g(A)) = (f\circ g)(A)$$
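For the matrix in the question, with $f(z)=\Gamma(z+1)$ and simple spectrum $\{\alpha,\beta\}=\{1-\sqrt{6},\,1+\sqrt{6}\}$, the interpolation formula from the first example gives $$\pmatrix{1 & 3\\ 2 & 1}! \;=\; \frac{\Gamma(2+\sqrt{6})-\Gamma(2-\sqrt{6})}{2\sqrt{6}}\pmatrix{1 & 3\\ 2 & 1} \;+\; \frac{(1-\sqrt{6})\,\Gamma(2+\sqrt{6})-(1+\sqrt{6})\,\Gamma(2-\sqrt{6})}{-2\sqrt{6}}\, I_2 ,$$ which agrees with the explicit matrix computed in the top answer.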

$\endgroup$
