
Today I was thinking about composition of functions. It has nice properties: it's always associative, there is an identity, and if we restrict to bijective functions then we have inverses.

But then I thought about commutativity. My first intuition was that bijective self-maps of a space should commute, but then I saw some counterexamples. The symmetric group $S_n$ is abelian only for $n \le 2$, so clearly functions need to satisfy more than bijectivity in order to commute.

The only examples I could think of were boring things like multiplication by a constant, or maximal tori of groups like $O(n)$ (maybe less boring).

My question: In a Euclidean space, what are some nice characterizations of sets of functions that commute under composition? What about in a more general space?

Bonus: Is this notion of commutativity important anywhere in analysis?

  • Rotations in the plane commute; in space, however... – Commented Nov 23, 2010 at 2:19
  • Yes, but that's just multiplication by a constant in $\mathbb{C}$, so it's not very interesting, I think. Like I noted, in higher dimensions we have the maximal tori of the orthogonal group. – Commented Nov 23, 2010 at 2:31
  • I don't understand the question. What sort of conditions are you looking for? What sort of functions? Do you want to assume continuity, differentiability, linearity, etc.? – Commented Nov 23, 2010 at 2:49
  • @jonas Yes, this question was mainly motivated from an analysis perspective originally, so those properties would be nice to consider. It would narrow the search a little bit. But I'm really open to anything, since it's more of a general inquiry. – Commented Nov 23, 2010 at 2:57
  • @GottfriedLeibniz: But what sort of general necessary conditions are you looking for to make functions commute? Different sets of commuting functions will commute for different reasons. Are you looking for a bunch of examples? – Commented Nov 23, 2010 at 3:03

3 Answers


A classic result of Ritt shows that polynomials that commute under composition must be, up to a linear homeomorphism, either both powers of $x$, both iterates of the same polynomial, or both Chebyshev polynomials. In fact Ritt proved a more general result for rational functions – follow the link. His work was motivated by Julia's and Fatou's work on Julia sets of rational functions; see here for a modern presentation.
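The Chebyshev case is easy to spot-check numerically: since $T_m(T_n(x)) = T_{mn}(x)$, any two Chebyshev polynomials of the first kind commute under composition. A minimal NumPy sketch (the degrees 3 and 5 are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev polynomials of the first kind satisfy T_m(T_n(x)) = T_{mn}(x),
# so any two of them commute under composition.  Check T_3 and T_5.
T3 = C.Chebyshev.basis(3)
T5 = C.Chebyshev.basis(5)
T15 = C.Chebyshev.basis(15)

x = np.linspace(-1.0, 1.0, 101)
assert np.allclose(T3(T5(x)), T5(T3(x)))   # T_3 after T_5 = T_5 after T_3
assert np.allclose(T3(T5(x)), T15(x))      # both compositions equal T_15
```

The same check with two powers of $x$ (or two iterates of a fixed polynomial) passes for the same reason.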

  • Yes, the identity $T_m(T_n(x))=T_n(T_m(x))=T_{mn}(x)$, even after looking at it in terms of cosines, is pretty deep, I think. – Commented Nov 23, 2010 at 3:39
  • 2
    $\begingroup$ Ritt's theorem is for polynomials with complex coefficients (which implies the same result for polynomials over a field of characteristic 0). The problem is unsolved in characteristic p > 0. $\endgroup$
    – T..
    Commented Nov 23, 2010 at 9:52
  • What is meant by "up to linear homeomorphism"? Not even scalar multiples of powers of $x$ will commute in general unless the scalars and the powers are the same. – Commented Aug 3, 2013 at 12:30
  • @PrimeRibeyeDeal It means that since $x^2$ is a solution, the function $(x-c)^2+c$ is as well. – Phira Commented May 18, 2020 at 19:14
  • @J.M. Looking at it in terms of cosines, the identity is $\cos(m\arccos(\cos(n\arccos(x))))=\cos(n\arccos(\cos(m\arccos(x))))=\cos(mn \arccos(x))$, or $\cos m\arccos\cos ny = \cos n\arccos\cos my = \cos mn y$. Doesn't look deep to me, and the cosine doesn't even play any special role. What is special to me is the fact that those are polynomials, but maybe that's what you meant too. – Bananach Commented Jun 10, 2020 at 11:31

According to Wikipedia, a set of diagonalizable matrices commutes pairwise if and only if the matrices are simultaneously diagonalizable. There is a far-reaching generalization, namely the Gelfand representation theorem.

The Gelfand representation theorem for commutative $C^*$ algebras represents every commutative $C^*$ algebra as an algebra of functions with pointwise multiplication; the domain of the latter algebra is the spectrum of the former algebra.
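The easy direction of the matrix statement can be illustrated numerically: a minimal NumPy sketch, building two symmetric matrices from a shared orthonormal eigenbasis and checking that an eigenbasis of one diagonalizes the other (the matrix size and eigenvalues are arbitrary choices):

```python
import numpy as np

# Two symmetric matrices built from a common orthonormal eigenbasis Q
# commute, and an eigenbasis of one diagonalizes the other.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthogonal matrix
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T
B = Q @ np.diag([5.0, 6.0, 7.0, 8.0]) @ Q.T

assert np.allclose(A @ B, B @ A)     # A and B commute

_, V = np.linalg.eigh(A)             # orthonormal eigenbasis of A
D = V.T @ B @ V                      # the same basis diagonalizes B
assert np.allclose(D, np.diag(np.diag(D)))
```

Here $A$ has distinct eigenvalues, so its eigenbasis is essentially unique; the general case (repeated eigenvalues) needs the block-by-block argument sketched in the comments below.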

  • The Gelfand-Neumark theorem you cite generalizes the case where the matrices are normal, and therefore simultaneously unitarily diagonalizable, if you want to think of this as realizing the matrices as complex-valued functions on a finite discrete space. – Commented Nov 23, 2010 at 3:58
  • One direction of your first statement is clear. In the other direction, after changing bases so that one of the matrices, $T$, is diagonal, the others will have to be block diagonal (up to conjugating by a permutation matrix) with blocks corresponding to the distinct eigenvalues of $T$. If they're not yet all diagonal, take a matrix in the set that is not diagonal, and change bases to diagonalize its blocks (which will leave $T$ diagonal because its blocks are scalar). Because the size of the blocks will decrease, this process terminates with the matrices all diagonalized. – Commented Nov 23, 2010 at 4:12

This question may also be related to how certain functions interact with binary operations on their arguments. In this context, the property of commuting with binary operators, such as addition and multiplication, can be used to define classes of functions:

  1. additive commutation: if $g(x, y) = x + y$, then $f\big(g(x, y)\big) = g\big(f(x),\ f(y)\big)$ if and only if $f(x + y) = f(x) + f(y)$; thus (assuming, say, continuity, since this is Cauchy's functional equation) $f$ is a homogeneous linear function of the form $f(x; a) \equiv ax$

  2. multiplicative commutation: if $g(x, y) = xy$, then $f\big( g(x, y) \big) = g\big(f(x),\ f(y)\big)$ if and only if $f(xy) = f(x)f(y)$; thus $f$ is "scale invariant", i.e. a power law of the form $f(x; a) \equiv x^a$

  3. log-additive commutation: if $g(x, y) = x + y$, then $\log f\big( g(x, y) \big) = g\big( \log f(x),\ \log f(y) \big)$ if and only if $f(x + y) = f(x)f(y)$; thus $f$ is an exponential function of the form $f(x; a) \equiv \exp(ax)$

The last item (3) involves a third function (the logarithm), which, when denoted as $h$, gives

$h\big(f[g(x, y)]\big) = g\big(h[f(x)],\ h[f(y)]\big)$

or

$h \circ f \circ g(x, y) = g\big(h \circ f(x),\ h \circ f(y)\big).$

Since $h \circ f$ occurs on both sides, we can denote this as $\tilde f$ to get

$\tilde f \big( g(x, y) \big) = g \big( \tilde f(x), \tilde f(y) \big)$

which has the same form as item (1) above. From this perspective, items (1) and (3) above can be seen as being isomorphic under the $\exp$ and $\log$ pair of invertible mappings.
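The three functional equations above are easy to spot-check numerically; a small sketch (the sample points $x$, $y$ and the parameter $a$ are arbitrary choices):

```python
import numpy as np

x, y, a = 2.0, 3.0, 1.7   # arbitrary sample points and parameter

# 1. additive commutation: f(x) = a*x satisfies f(x + y) = f(x) + f(y)
f1 = lambda t: a * t
assert np.isclose(f1(x + y), f1(x) + f1(y))

# 2. multiplicative commutation: f(x) = x**a satisfies f(x*y) = f(x)*f(y)
f2 = lambda t: t ** a
assert np.isclose(f2(x * y), f2(x) * f2(y))

# 3. log-additive commutation: f(x) = exp(a*x) satisfies f(x + y) = f(x)*f(y)
f3 = lambda t: np.exp(a * t)
assert np.isclose(f3(x + y), f3(x) * f3(y))
```

Applying `np.log` to the third identity turns it into the first with $\tilde f = \log \circ f_3$, which is exactly the $\exp$/$\log$ correspondence described above.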

