
Recently on this site, the question was raised of how one might define the factorial operation $\mathsf{A}!$ on a square matrix $\mathsf{A}$. The answer, perhaps unsurprisingly, involves the Gamma function.
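(For concreteness, here is a minimal sketch in Mathematica of one way to evaluate such a factorial, reading $\mathsf{A}! = \Gamma(\mathsf{A}+\mathsf{I})$; the name matrixFactorial is only illustrative, not a built-in.)

    (* minimal sketch: interpret A! as Gamma(A + I), evaluated with MatrixFunction *)
    matrixFactorial[m_?SquareMatrixQ] := MatrixFunction[Gamma[# + 1] &, m]

    matrixFactorial[{{2., 1.}, {0., 3.}}]
    (* {{2., 4.}, {0., 6.}} -- the eigenvalues 2 and 3 are sent to 2! = 2 and 3! = 6 *)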

What use might it be to take the factorial of a matrix? Do any applications come to mind, or does this – for now* – seem to be restricted to the domain of recreational mathematics?

(*Until e.g. theoretical physics turns out to have a use for this, as happened with Calabi–Yau manifolds and superstring theory...)

  • Perhaps it can be used to define the factorial of a quaternion? (That is, one would represent the quaternion by its matrix representation and find the factorial of that. It seems the final answer involves $\left(a\pm i\sqrt{b^2+c^2+d^2}\right)!$, which we know how to define; see the sketch after these comments.) Commented Feb 5, 2016 at 1:50
  • This is a bit speculative, but it is probably useful in probability theory. The factorial/gamma function is a pretty common component in the probability densities of random variables (think Poisson, gamma, and beta distributions), and generalising univariate pdfs to multivariate pdfs often involves replacing the role of a single variable with a matrix. The matrix exponential shows up a lot in probability theory, so there might be some occurrences of the matrix factorial as well. Commented Apr 1, 2021 at 1:34
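Following up on the quaternion comment — a minimal sketch in Mathematica (the helper quatMatrix is a hypothetical name, not a built-in): represent $q=a+bi+cj+dk$ by its standard $2\times 2$ complex matrix and take the matrix factorial of that; the eigenvalues are $a\pm i\sqrt{b^2+c^2+d^2}$, so the result reduces to factorials of those two complex numbers.

    (* represent q = a + b i + c j + d k by its standard 2x2 complex matrix *)
    quatMatrix[a_, b_, c_, d_] := {{a + b I, c + d I}, {-c + d I, a - b I}}

    (* characteristic polynomial x^2 - 2 a x + (a^2 + b^2 + c^2 + d^2),
       hence eigenvalues a +/- I Sqrt[b^2 + c^2 + d^2] *)
    Eigenvalues[quatMatrix[a, b, c, d]]

    (* "q!" as the matrix factorial of the representation *)
    MatrixFunction[Gamma[# + 1] &, quatMatrix[a, b, c, d]]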

3 Answers


I found the following references that use the matrix factorial in a concrete applied context:

Coherent Transform, Quantization and Poisson Geometry: In this book, Mikhail Vladimirovich Karasev uses the matrix factorial, for example, for hypersurfaces and twisted hypergeometric functions.


Artificial Intelligence Algorithms and Applications: In these conference proceedings, the Bayesian probability matrix factorial is used in the context of classifying imputation methods for missing traffic data.

Fluid model: In their paper in the Journal of Applied Mathematics and Computing, Mao, Wang and Tan deal with a fluid model driven by an $M/M/1$ queue with multiple exponential vacations and an $N$-policy.

Construction of coherent states for multi-level quantum systems: In their paper "Vector coherent states with matrix moment problems", Thirulogasanthar and Hohouéto use the matrix factorial in the context of quantum physics.

Algorithm Optimization: Although this is a more theoretical field of application, I would like to mention this use case as well. Vladica Andrejić, Alin Bostan and Milos Tatarevic present in their paper improved algorithms for computing the left factorial residues $!p=0!+1!+\ldots+(p-1)!\bmod{p}$. They confirm that there are no socialist primes $p$ with $5<p<2^{40}$. You may take a look at an arXiv version of this paper.
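For the left factorial residues mentioned in the last item, here is a minimal sketch in Mathematica (not the authors' improved algorithm, just the naive accumulation; leftFactorialMod is a hypothetical name):

    (* left factorial residue  !p = 0! + 1! + ... + (p-1)!  mod p,
       accumulating the factorials modulo p to keep intermediates small *)
    leftFactorialMod[p_Integer] := Module[{f = 1, s = 1},  (* s starts at 0! = 1 *)
      Do[f = Mod[f k, p]; s = Mod[s + f, p], {k, 1, p - 1}]; s]

    (* a "socialist prime" would satisfy !p == 0 (mod p) *)
    leftFactorialMod /@ {5, 7, 11, 13}
    (* {4, 6, 1, 10} -- none is 0, so none of these is a socialist prime *)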


The factorial has a straightforward interpretation in terms of automorphisms/permutations: it is the size of the set of automorphisms.
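To make that starting point concrete: for a finite set $S$ with $|S|=n$, every automorphism (in $\mathbf{Set}$) is a permutation, so

$$|\operatorname{Aut}(S)| = n!\,.$$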

One possible generalization of matrices is the double category of spans.

So an automorphism $ R ! = R \leftrightarrow R $ over a span $A \leftarrow R \rightarrow B$ ought to be a reasonable generalization.

I usually find it easier to think in terms of profunctors or relations than spans.

The residual/internal hom of profunctors $(R/R)(a, b) = \forall x,\ R(x, a) \leftrightarrow R(x, b)$ is a Kan extension. The Kan extension of a functor along itself is the codensity monad. For profunctors and spans, the automorphism ought to be a groupoid (a monad in the category of endospans, equipped with inverses).

The factorial is the size of the automorphism group of a set. The automorphism group ought to generalize to an "automorphism groupoid" of a span. I suspect the permutations of a matrix ought to form an automorphism groupoid enriched in $\mathbf{Vect}$, but this confuses me.


Computer algebra systems implement functions of square matrices as the natural extension of a function $f$ with a zero at $x=0$, given by its power series

$$x\to f(x) = \sum_{n=1}^{\infty} f_n x^n,$$

to diagonal matrices as arguments, where $f$ simply acts on the diagonal elements:

$$f(A) = f\left(\begin{array}{cccc}a_1&0&0&\dots\\0&a_2&0\\0&0&a_3\\\vdots\end{array}\right)= \left( \begin{array}{cccc}f(a_1)&0&0&\dots\\0&f(a_2)&0\\0&0&f(a_3)\\ \vdots\end{array}\right)$$

If $f(0)\ne 0$, the resulting matrix functions become more involved, e.g. here for $n! =\Gamma(n+1)$:

    (* matrix factorial Gamma(A + 1) of A = IdentityMatrix[2] + \[Alpha] PauliMatrix[1] + \[Beta] PauliMatrix[2] + \[Gamma] PauliMatrix[3];
       the replacement rule abbreviates \[Alpha]^2 + \[Beta]^2 + \[Gamma]^2 as \[Phi]^2 *)
    MatrixFunction[Gamma[# + 1] &,
      IdentityMatrix[2] +
        \[Alpha] PauliMatrix[1] +
        \[Beta] PauliMatrix[2] +
        \[Gamma] PauliMatrix[3]] //.
      {\[Alpha]^2 + \[Beta]^2 + \[Gamma]^2 :> \[Phi]^2} //
     FullSimplify // PowerExpand

$$\left( \begin{array}{cc} \frac{\gamma \Gamma (\phi +2)}{2 \phi }-\frac{\gamma \Gamma (2-\phi )}{2 \phi }+\frac{\Gamma (\phi +2)}{2}+\frac{\Gamma (2-\phi )}{2} & \frac{\gamma ^2 \Gamma (2-\phi )}{2 \phi (\alpha +i \beta )}-\frac{\gamma ^2 \Gamma (\phi +2)}{2 \phi (\alpha +i \beta )}+\frac{\phi \Gamma (\phi +2)}{2 (\alpha +i \beta )}-\frac{\phi \Gamma (2-\phi )}{2 (\alpha +i \beta )} \\ \frac{\alpha \Gamma (\phi +2)}{2 \phi }-\frac{\alpha \Gamma (2-\phi )}{2 \phi }+\frac{i \beta \Gamma (\phi +2)}{2 \phi }-\frac{i \beta \Gamma (2-\phi )}{2 \phi } & -\frac{\gamma \Gamma (\phi +2)}{2 \phi }+\frac{\gamma \Gamma (2-\phi )}{2 \phi }+\frac{\Gamma (\phi +2)}{2}+\frac{\Gamma (2-\phi )}{2} \\ \end{array} \right)$$
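As a quick sanity check of the same construction (my addition, not part of the original computation): on a diagonal matrix, MatrixFunction simply takes the factorial of each diagonal entry.

    (* sanity check: on a diagonal matrix the construction acts entrywise on the diagonal *)
    MatrixFunction[Gamma[# + 1] &, {{1, 0}, {0, 3}}]
    (* {{1, 0}, {0, 6}}, i.e. Gamma[2] = 1! and Gamma[4] = 3! *)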
