18

Let me be more precise:

Is there an $f: \mathbb R^{n \times n}\to \mathbb R$ such that for any square matrices $A$ and $B$ of the same order $n \times n$ we have:

  1. $f(A+B) = f(A) + f(B)$
  2. $f(kA) = kf(A), k \in \mathbb R$
  3. $f(AB) = f(A)f(B)$

Can there be such an $f$? It is clear that $f(0) = 0$, $f(-A) = -f(A)$, and $f(I) = 1$ (or $0$). Do we get a contradiction from this? Note that every nilpotent matrix $N$ satisfies $f(N) = 0$, since $f(N)^k = f(N^k) = f(0) = 0$ for some $k$.

EDIT: A few interesting corollaries that one of the answers allowed me to arrive at:

There is also no operator satisfying only properties $1$ and $3$ on all square matrices, except the trivial one $f \equiv 0$.

If an operator satisfies properties $1$ and $2$, then it must be some fixed linear combination of the entries of the matrix, such as the trace, or simply the entry at a fixed pair of coordinates.
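For instance, here is a minimal numerical check (a sketch assuming Python with NumPy) that the trace is such a linear combination, satisfying properties 1 and 2 while generically failing property 3:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
k = 2.5

f = np.trace  # one fixed linear combination of the matrix entries

print(np.isclose(f(A + B), f(A) + f(B)))  # property 1: True
print(np.isclose(f(k * A), k * f(A)))     # property 2: True
print(np.isclose(f(A @ B), f(A) * f(B)))  # property 3: False for generic A, B
```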

  • 1
    If the domain is the set of $n \times n$ matrices, it should be $\mathbb{R}^{n \times n}$.
    – smitke6
    Commented May 17 at 21:03
  • 5
    I think it is constant zero. Part (3) says $f(N) = 0$ for any nilpotent $N$. Easy enough to find nilpotent $M$ such that $M+N$ is not nilpotent...
    – Will Jagy
    Commented May 17 at 21:04
  • @WillJagy Well, if it is constant zero then it doesn't exist, since $f(I)=1$; but your argument is interesting. Does $f$ need to be nonzero on non-nilpotent matrices? Commented May 17 at 21:06
  • 6
    Part (3) says either $f(I) = 1$ or $f(I) = 0$. I suspect there is a way to write the identity matrix as the sum of several nilpotent matrices; let me try $2 \times 2$.
    – Will Jagy
    Commented May 17 at 21:09
  • 4
    Easy route: a matrix $N$ with just one element nonzero, and this element off the diagonal, is nilpotent in that $N^2 = 0$. The companion matrix $C$ of $x^n - 1$ has only zeroes on the diagonal, therefore it is the sum of (exactly $n$) such nilpotent matrices. By your rules $f(C) = 0$. However, $C^n = I$, so that $f(I) = 0$ also. Finally, by the third rule, all $f(A) = 0$. Here is such a companion matrix: $$ \left( \begin{array}{rrrrr} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ \end{array} \right) $$
    – Will Jagy
    Commented May 18 at 1:16

5 Answers

27

Edit: for completeness, I added the $n=1$ and odd $n$ cases, using the helpful comments of Will Jagy.

First, observe that $f$ is a linear map between the vector spaces $\mathbb{R}^{n \times n}$ and $\mathbb{R}$. Moreover, using properties $1$ and $3$, such a function $f$ must satisfy $$ f \left( AB-BA \right) = f(A)f(B) - f(B)f(A) = 0 $$ for all matrices $A$ and $B$, i.e. $f$ vanishes on every matrix that can be written as a commutator. A necessary and sufficient condition for a real square matrix to be a commutator is that it be traceless. Thus, the kernel of $f$ must contain at least $ \mathfrak{sl} \left(n; \mathbb{R} \right) $, the subspace of traceless matrices, which has dimension $n^2-1$.

However, $f$ could still be nonzero on the one-dimensional subspace of scalar matrices; and as you've correctly mentioned, it must map the identity matrix to $1$ or $0$. Thus, if $f$ is not the constant zero function, we must have $$ f ( a I ) = a $$ for all $ a \in \mathbb{R} $, while $f$ vanishes on every traceless matrix.

This works for $n=1$: the identity map from $ \mathbb{R} $ to itself obviously satisfies all three properties (it is a field isomorphism). However, for $n>1$ a map of the form above cannot satisfy the third property. Take: $$ A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} $$ Then $ f (A) = 0 $ but $ f (A^2) = f(I)=1 $, a contradiction.

More generally, for any $n$ we can find a traceless matrix whose $n$th power is the identity, e.g. the companion matrix of $x^n-1$, also known as the cyclic shift matrix (thanks to Will Jagy for this suggestion): $$ \left[ A \right]_{ij} = \begin{cases} 1 & j = i+1 \bmod n \\ 0 & \text{otherwise} \end{cases}$$ where the row and column indices run from $0$ to $n-1$. Since $x^n-1$ is the minimal polynomial of $A$, we have $A^n = I$; but since $A$ has no nonzero diagonal elements, it is traceless. Hence $f (A)=0$ while $f(A)^n = f(A^n)=f(I)=1$, a contradiction. Thus, for $n>1$ we must have $f \equiv 0$.
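To see this construction concretely, here is a short numerical check (a sketch assuming Python with NumPy) that builds the cyclic shift matrix for $n=5$ and confirms it is traceless while satisfying $A^n = I$:

```python
import numpy as np

n = 5
# Companion matrix of x^n - 1 (cyclic shift): A[i, j] = 1 iff j = i+1 mod n
A = np.zeros((n, n))
A[np.arange(n), (np.arange(n) + 1) % n] = 1.0

print(np.trace(A))                                           # 0.0: traceless
print(np.allclose(np.linalg.matrix_power(A, n), np.eye(n)))  # True: A^n = I
```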

  • $\begingroup$ I had made a mistake, which is now corrected. $f$ must be identically zero. $\endgroup$
    – smitke6
    Commented May 17 at 21:15
  • That's good enough, at least for even $n$. Commented May 17 at 21:19
  • 4
    For odd $n$ we may take the companion matrix of $x^n - 1$, which has all diagonal elements zero.
    – Will Jagy
    Commented May 17 at 21:22
  • 7
    $$ \left( \begin{array}{rrrrr} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ \end{array} \right) $$
    – Will Jagy
    Commented May 17 at 21:25
10

Any such map is either the constant zero map or an algebra homomorphism from the real algebra $M_n(\mathbb R)$ to the real algebra $\mathbb R$. Over a field, matrix algebras are simple, so the kernel (an ideal) of a nonzero homomorphism must be trivial; the map would then be injective, which is impossible by dimension count when $n \geq 2$ (for $n = 1$, the identity map $\mathbb R \to \mathbb R$ works). Hence, for $n \geq 2$, only the constant zero map exists.

10

If you don't want to invoke ring theory:

Let $n\geq 2$.

For any such $f$, if $A^2=A$ then $f(A)^2 = f(A^2) = f(A)$, so $f(A)=0$ or $f(A)=1$. If $f(I_n)=0$, then $f(A) = f(AI_n) = f(A)f(I_n) = 0$ for all $A$, so we may assume that $f(I_n)=1$.

Let $E_{ij}$ be the matrix that has a $1$ in the $(i,j)$ coordinate and $0$ elsewhere. Since $E_{ii}^2 = E_{ii}$, it follows that $f(E_{ii})=0$ or $f(E_{ii})=1$.

Since $I_n = E_{11}+\cdots+E_{nn}$, we have that $f(E_{11})+\cdots + f(E_{nn})= f(I_n) = 1$. That means that exactly one of $E_{ii}$ maps to $1$, and the rest map to $0$. Suppose that $f(E_{ii}) = 1$.

Then for any matrix $A$, we have $f(E_{ii}A) = f(E_{ii})f(A) = f(A)$, and likewise $f(AE_{ii}) = f(A)$.

If $A$ and $B$ have the same $i$th column, then $AE_{ii}=BE_{ii}$. By the above, we have $f(A)=f(B)$.

Likewise, if $A$ and $B$ have the same $i$th row, then $E_{ii}A=E_{ii}B$, so $f(A)=f(B)$. Thus, $f(A)$ is completely determined by the value of $a_{ii}$, the $(i,i)$ entry of $A$.

Let $A$ be the matrix of all $1$s. Then $A^2$ is the matrix of all $n$s. But $A$ has the same $(i,i)$ entry as $I_n$, so $f(A)=1$, which means that $f(A^2) = f(A)^2 = 1$; meanwhile, $A^2$ has the same $(i,i)$ entry as $nI_n$, so $f(A^2) = f(nI_n) = nf(I_n)=n$. This is a contradiction.
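A quick numerical check of the two entry comparisons driving this contradiction (a sketch assuming Python with NumPy; $n = 3$ and $i = 0$ are arbitrary choices):

```python
import numpy as np

n, i = 3, 0
A = np.ones((n, n))  # the all-ones matrix

print(np.allclose(A @ A, n * A))             # True: A^2 is the all-n matrix
print(A[i, i] == np.eye(n)[i, i])            # True: A agrees with I_n at (i, i)
print((A @ A)[i, i] == n * np.eye(n)[i, i])  # True: A^2 agrees with n*I_n at (i, i)
# So f(A) = f(I_n) = 1 forces f(A^2) = 1, yet f(A^2) = f(n*I_n) = n.
```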

So if $n\geq 2$, then the only map is the zero map. If $n=1$, then a ring homomorphism from $\mathbb{R}$ to itself must be either the zero map, or the identity.


If you know ring theory, though, there is a shorter argument (modulo knowing the ideals of a matrix ring over a ring with unity): because $f$ is a ring homomorphism, the kernel is an ideal. It is well known that the ring of matrices over a division ring is simple: the only ideals are the trivial ideal and the whole ring. If the kernel is the whole ring, then $f$ is the zero map. If the kernel is trivial, then thinking of $f$ as a linear map of real vector spaces, we have $\dim(\mathbb{R}^{n\times n})=n^2$ and $\dim(\mathbb{R})=1$, so $n^2=1$, thus $n=1$. Then you just need to verify that the only nonzero ring homomorphism from $\mathbb{R}$ to $\mathbb{R}$ is the identity map (it maps positives to positives, so it respects order, and restricts to the identity on $\mathbb{Q}$). Thus, either $f(A)=0$ for all $A$, or $n=1$ and $f=\mathrm{id}$.

  • 1
    I think this is the only answer OP will fully understand. A slightly different finish would be to notice that Property (3) implies $f$ is a (conjugacy) class function (in particular under permutation matrices), so $f\big(E_{1,1}\big)=\cdots = f\big(E_{n,n}\big)$, and we can conclude $0=f(I_n)$ or $n=f(I_n)$. Commented May 17 at 23:30
  • Indeed, this proof works even if we remove $f(kA) = kf(A)$. Very nice. Commented May 18 at 0:13
  • @hellofriends Well, I use that property at the end, but since I am only using it for $k$ a positive integer, it can be deduced from additivity. Commented May 18 at 0:26
  • @ArturoMagidin I made a slight variation of your proof: assuming WLOG that $f(E_{11})=1$, I found that $f(A)= a_{11}$ for any matrix $A$, but this contradicts $f(AB) = f(A)f(B)$ for most matrices. Commented May 18 at 0:36
3

Suppose $n\ge2$. For any rank-one matrix $xy^T$, pick two vectors $u$ and $v$ such that $u^Tx=0$ and $u^Tv=1$ (possible since $n\ge2$). Then $f(xu^T)^2=f(xu^Txu^T)=f(0)=0\cdot f(0)=0$, so $f(xu^T)=0$. In turn, $f(xy^T)=f(xu^Tvy^T)=f(xu^T)f(vy^T)=0$. Since every matrix can be written as a finite sum of rank-one matrices, we conclude that $f=0$.
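Here is a concrete instance of the construction (a sketch assuming Python with NumPy; the particular vectors $x, y, u, v$ are arbitrary choices satisfying the stated conditions):

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

u = np.array([2.0, -1.0])  # chosen so that u^T x = 0
v = u / (u @ u)            # then u^T v = 1

N = np.outer(x, u)  # x u^T, nilpotent: (x u^T)^2 = x (u^T x) u^T = 0
print(np.allclose(N @ N, 0))                            # True
print(np.allclose(N @ np.outer(v, y), np.outer(x, y)))  # True: x u^T v y^T = x y^T
```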

1

Here's a machinery-light proof. Property 3 tells us $f(AB)=f(A)f(B)=f(B)f(A)=f(BA)$, which among other things implies that $f$ is a (conjugacy) class function, i.e.
$$f\big(P^{-1}AP\big)=f\big((P^{-1}A)P\big)= f\big(P(P^{-1}A)\big)=f\big(A\big).$$

Decompose $V=\mathbb{R}^{n\times n}$ as $V=E\oplus W \oplus S$, where $E=\big\{\lambda I\big\}$ is the line of scalar matrices, $W$ is the subspace of traceless symmetric matrices, and $S$ is the subspace of skew-symmetric matrices.

$W$ is spanned by (i) $\mathbf e_1\mathbf e_1^T -\mathbf e_j \mathbf e_j^T$ for $j\in \big\{2,3,\dots, n\big\}$ and (ii) $\big(\mathbf e_i\mathbf e_j^T\big)+\big(\mathbf e_i\mathbf e_j^T\big)^T$ for $j\neq i$.

In case (i), each basis vector is conjugate (in fact permutation similar; use a type 2 elementary matrix) to its negative, so $f\big(\mathbf e_1\mathbf e_1^T -\mathbf e_j \mathbf e_j^T\big)=f\big(-(\mathbf e_1\mathbf e_1^T -\mathbf e_j \mathbf e_j^T)\big)=-f\big(\mathbf e_1\mathbf e_1^T -\mathbf e_j \mathbf e_j^T\big)$, hence $2 \cdot f\big(\mathbf e_1\mathbf e_1^T -\mathbf e_j \mathbf e_j^T\big)=0$ and $f\big(\mathbf e_1\mathbf e_1^T -\mathbf e_j \mathbf e_j^T\big)=0$.

In case (ii), each vector is permutation similar (via a type 2 elementary matrix) to
$\mathbf e_1\mathbf e_2^T + (\mathbf e_1\mathbf e_2^T)^T =\left[\begin{matrix}0 & 1&\mathbf 0\\1 & 0&\mathbf 0\\\mathbf 0 & \mathbf 0&\mathbf 0 \end{matrix}\right]$, which may be diagonalized by $S=\left[\begin{matrix}1 & -1&\mathbf 0\\1 & 1&\mathbf 0\\\mathbf 0 & \mathbf 0&I_{n-2} \end{matrix}\right]$ to recover a vector from case (i); hence these are killed by $f$ as well. Conclude $f\big(W\big)=\big\{0\big\}$.
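A quick check of this diagonalization step (a sketch assuming Python with NumPy, showing only the leading $2\times 2$ block):

```python
import numpy as np

# Type (ii) generator e1 e2^T + (e1 e2^T)^T, restricted to its 2x2 block
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
S = np.array([[1.0, -1.0],
              [1.0, 1.0]])

# Conjugation by S recovers the type (i) generator diag(1, -1) = e1 e1^T - e2 e2^T
print(np.linalg.inv(S) @ M @ S)  # [[ 1.  0.], [ 0. -1.]]
```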

The skew-symmetric subspace $S$ is spanned by vectors of the form $\big(\mathbf e_i\mathbf e_j^T\big)-\big(\mathbf e_i\mathbf e_j^T\big)^T$ for $i\neq j$, each of which is similar to its transpose, i.e. to its negative (again via a type 2 elementary matrix), so as in case (i) we get $f(v) =0$ for all $v\in S$.

Finally, examine $E$. If $f(I)= r$, then $f(A)-\frac{r}{n}\cdot\text{trace}\big(A\big)$ is the zero map, i.e. $f(A)=\frac{r}{n}\cdot\text{trace}\big(A\big)$. But if $r\neq 0$ and $n\geq 2$ (note that $r = f(I) = f(I^2) = r^2$ forces $r=1$), select any skew-symmetric $A\neq \mathbf 0$ and confirm $$\frac{r}{n}\cdot\text{trace}\big(A^2\big) - \Big(\frac{r}{n}\cdot\text{trace}\big(A\big)\Big)^2 =\frac{r}{n}\cdot\text{trace}\big(-A^TA\big)\lt 0 = f\big(A^2\big)-f\big(A\big)f\big(A\big),$$ which is a contradiction.
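The final inequality is easy to confirm numerically (a sketch assuming Python with NumPy, using the smallest skew-symmetric example):

```python
import numpy as np

# For nonzero skew-symmetric A: trace(A^2) = trace(-A^T A) = -||A||_F^2 < 0
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.trace(A @ A))  # -2.0, strictly negative
print(-np.sum(A * A))   # same value: minus the squared Frobenius norm
```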

Remark: if we ignore the final contradiction step, what we've proven is that any $f$ obeying properties (1) and (2) that is also a class function must be $\propto \text{trace}$ over any field $\mathbb F$ with $\text{char }\mathbb F \neq 2$, except that we'd want to instead evaluate $f$ on $E=\big\{\lambda \mathbf e_1\mathbf e_1^T\big\}$ to handle issues that can arise in positive characteristic.

