
As someone who had a relatively light graduate education in algebra, I have always found the import of Yoneda’s lemma in category theory somewhat elusive; the statement and proof are simple enough, but they definitely have the “abstract nonsense” flavor that one often ascribes to this part of mathematics, and I struggled to connect them to the more grounded forms of intuition, such as those based on concrete examples, that I was more comfortable with. There is a popular MathOverflow post devoted to this question, with many answers that were helpful to me, but I still felt vaguely dissatisfied. However, recently, when pondering the very concrete concept of a polynomial, I accidentally stumbled upon a special case of Yoneda’s lemma in action, which clarified this lemma conceptually for me. In the end it was a very simple observation (and would be extremely pedestrian to anyone who works in an algebraic field of mathematics), but as I found it helpful to a non-algebraist such as myself, I thought I would share it here in case others similarly find it helpful.

In algebra we see a distinction between a polynomial form (also known as a formal polynomial), and a polynomial function, although this distinction is often elided in more concrete applications. A polynomial form in, say, one variable with integer coefficients, is a formal expression {P} of the form

\displaystyle  P = a_d {\mathrm n}^d + \dots + a_1 {\mathrm n} + a_0 \ \ \ \ \ (1)

where {a_0,\dots,a_d} are coefficients in the integers, and {{\mathrm n}} is an indeterminate: a symbol that is often intended to be interpreted as an integer, real number, complex number, or element of some more general ring {R}, but is for now a purely formal object. The collection of such polynomial forms is denoted {{\bf Z}[{\mathrm n}]}, and is a commutative ring.

A polynomial form {P} can be interpreted in any ring {R} (even non-commutative ones) to create a polynomial function {P_R : R \rightarrow R}, defined by the formula

\displaystyle  P_R(n) := a_d n^d + \dots + a_1 n + a_0 \ \ \ \ \ (2)

for any {n \in R}. This definition (2) looks so similar to the definition (1) that we usually abuse notation and conflate {P} with {P_R}. This conflation is supported by the identity theorem for polynomials, which asserts that if two polynomial forms {P, Q} agree at an infinite number of (say) complex numbers, thus {P_{\bf C}(z) = Q_{\bf C}(z)} for infinitely many {z}, then they agree as polynomial forms, {P=Q} (i.e., their coefficients match). But this conflation is sometimes dangerous, particularly when working in finite characteristic. For instance:

  • (i) The linear forms {{\mathrm n}} and {-{\mathrm n}} are distinct as polynomial forms, but agree when interpreted in the ring {{\bf Z}/2{\bf Z}}, since {n = -n} for all {n \in {\bf Z}/2{\bf Z}}.
  • (ii) Similarly, if {p} is a prime, then the degree one form {{\mathrm n}} and the degree {p} form {{\mathrm n}^p} are distinct as polynomial forms (and in particular have distinct degrees), but agree when interpreted in the ring {{\bf Z}/p{\bf Z}}, thanks to Fermat’s little theorem.
  • (iii) The polynomial form {{\mathrm n}^2+1} has no roots when interpreted in the reals {{\bf R}}, but has roots when interpreted in the complex numbers {{\bf C}}. Similarly, the linear form {2{\mathrm n}-1} has no roots when interpreted in the integers {{\bf Z}}, but has roots when interpreted in the rationals {{\bf Q}}.
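
These finite-characteristic phenomena are easy to check by direct computation. The following minimal Python sketch verifies examples (i) and (ii); representing a polynomial form by its tuple of coefficients {(a_0,\dots,a_d)} is just an ad hoc device chosen here for illustration.

```python
# Polynomial forms, represented here ad hoc as coefficient tuples (a_0, ..., a_d),
# versus the polynomial functions they induce on the finite ring Z/pZ.

def evaluate(coeffs, n, p):
    """Interpret the form with the given coefficients at the element n of Z/pZ."""
    return sum(a * pow(n, i, p) for i, a in enumerate(coeffs)) % p

# Example (i): the forms n and -n are distinct, but agree as functions on Z/2Z.
form_n, form_minus_n = (0, 1), (0, -1)
assert form_n != form_minus_n
assert all(evaluate(form_n, n, 2) == evaluate(form_minus_n, n, 2) for n in range(2))

# Example (ii): for a prime p, the forms n and n^p have different degrees,
# yet agree as functions on Z/pZ by Fermat's little theorem.
p = 7
form_n_to_p = (0,) * p + (1,)
assert form_n != form_n_to_p
assert all(evaluate(form_n, n, p) == evaluate(form_n_to_p, n, p) for n in range(p))
print("forms differ, functions agree")
```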

The above examples show that if one only interprets polynomial forms in a specific ring {R}, then some information about the polynomial could be lost (and some features of the polynomial, such as roots, may be “invisible” to that interpretation). But this turns out not to be the case if one considers interpretations in all rings simultaneously, as we shall now discuss.

If {R, S} are two different rings, then the polynomial functions {P_R: R \rightarrow R} and {P_S: S \rightarrow S} arising from interpreting a polynomial form {P} in these two rings are, strictly speaking, different functions. However, they are often closely related to each other. For instance, if {R} is a subring of {S}, then {P_R} agrees with the restriction of {P_S} to {R}. More generally, if there is a ring homomorphism {\phi: R \rightarrow S} from {R} to {S}, then {P_R} and {P_S} are intertwined by the relation

\displaystyle  \phi \circ P_R = P_S \circ \phi, \ \ \ \ \ (3)

which basically asserts that ring homomorphisms respect polynomial operations. Note that the previous observation corresponded to the case when {\phi} was an inclusion homomorphism. Another example comes from the complex conjugation automorphism {z \mapsto \overline{z}} on the complex numbers, in which case (3) asserts the identity

\displaystyle  \overline{P_{\bf C}(z)} = P_{\bf C}(\overline{z})

for any polynomial function {P_{\bf C}} on the complex numbers, and any complex number {z}.
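
For another concrete instance of (3), one can take {\phi} to be the reduction homomorphism from {{\bf Z}} to {{\bf Z}/m{\bf Z}}. Here is a minimal Python check; the particular polynomial form and modulus below are arbitrary choices made for illustration.

```python
# Check the intertwining relation phi(P_Z(n)) = P_{Z/mZ}(phi(n)) when phi is
# reduction mod m, for an arbitrarily chosen polynomial form and modulus.

coeffs = (4, 0, -3, 1)        # the form n^3 - 3n^2 + 4, stored as (a_0, a_1, a_2, a_3)
m = 12                        # phi : Z -> Z/12Z, phi(n) = n mod 12

def P_Z(n):                   # interpretation of the form in the ring Z
    return sum(a * n**i for i, a in enumerate(coeffs))

def P_Zm(n):                  # interpretation of the form in the ring Z/mZ
    return sum(a * pow(n, i, m) for i, a in enumerate(coeffs)) % m

for n in range(-50, 50):
    assert P_Z(n) % m == P_Zm(n % m)       # phi o P_Z  ==  P_{Z/mZ} o phi
print("intertwining relation holds for reduction mod", m)
```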

What was surprising to me (as someone who had not internalized the Yoneda lemma) was that the converse statement was true: if one had a function {F_R: R \rightarrow R} associated to every ring {R} that obeyed the intertwining relation

\displaystyle  \phi \circ F_R = F_S \circ \phi \ \ \ \ \ (4)

for every ring homomorphism {\phi: R \rightarrow S}, then there was a unique polynomial form {P \in {\bf Z}[\mathrm{n}]} such that {F_R = P_R} for all rings {R}. This seemed surprising to me because the functions {F} were a priori arbitrary functions, and as an analyst I would not expect them to have polynomial structure. But the requirement that (4) hold for all rings {R,S} and all homomorphisms {\phi} is in fact rather powerful. As an analyst, I am tempted to proceed by first working with the ring {{\bf C}} of complex numbers and taking advantage of the aforementioned identity theorem, but this turns out to be tricky because {{\bf C}} does not “talk” to all the other rings {R} enough, in the sense that there are not always as many ring homomorphisms from {{\bf C}} to {R} as one would like. But there is in fact a more elementary argument that takes advantage of a ring that is particularly relevant (and “talkative”) to the theory of polynomials, namely the ring {{\bf Z}[\mathrm{n}]} of polynomial forms themselves. Given any other ring {R}, and any element {n} of that ring, there is a unique ring homomorphism {\phi_{R,n}: {\bf Z}[\mathrm{n}] \rightarrow R} from {{\bf Z}[\mathrm{n}]} to {R} that maps {\mathrm{n}} to {n}, namely the evaluation map

\displaystyle  \phi_{R,n} \colon a_d {\mathrm n}^d + \dots + a_1 {\mathrm n} + a_0 \mapsto a_d n^d + \dots + a_1 n + a_0

that sends a polynomial form to its evaluation at {n}. Applying (4) to this ring homomorphism, and specializing to the element {\mathrm{n}} of {{\bf Z}[\mathrm{n}]}, we conclude that

\displaystyle  \phi_{R,n}( F_{{\bf Z}[\mathrm{n}]}(\mathrm{n}) ) = F_R( n )

for any ring {R} and any {n \in R}. If we then define {P \in {\bf Z}[\mathrm{n}]} to be the formal polynomial

\displaystyle  P := F_{{\bf Z}[\mathrm{n}]}(\mathrm{n}),

then this identity can be rewritten as

\displaystyle  F_R = P_R

and so we have indeed shown that the family {F_R} arises from a polynomial form {P}. Conversely, from the identity

\displaystyle  P = P_{{\bf Z}[\mathrm{n}]}(\mathrm{n})

valid for any polynomial form {P}, we see that two polynomial forms {P,Q} can only generate the same polynomial functions {P_R, Q_R} for all rings {R} if they are identical as polynomial forms. So the polynomial form {P} associated to the family {F_R} is unique.
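
This recovery procedure can even be mimicked in a computer algebra system: a function written purely in terms of ring operations (addition, multiplication, and integer constants) can be applied to the formal indeterminate {\mathrm{n}} itself, and the output is precisely the polynomial form {P}. A short sympy sketch follows; the particular operation {F} below is of course just an arbitrary example.

```python
import sympy

def F(x):
    # an operation built only out of ring operations (addition, multiplication,
    # integer constants), so that it makes sense in any commutative ring
    return x * x * x - 2 * x + 5

# Evaluating F in the ring Z[n] at the element n recovers the polynomial form P.
n = sympy.Symbol('n')
P = sympy.expand(F(n))
print(P)                                     # n**3 - 2*n + 5

# Sanity check that F_R = P_R on a few concrete ring elements.
for r in [0, 3, -7, sympy.Rational(1, 2)]:
    assert F(r) == P.subs(n, r)
```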

We have thus created an identification of form and function: polynomial forms {P} are in one-to-one correspondence with families of functions {F_R} obeying the intertwining relation (4). But this identification can be interpreted as a special case of the Yoneda lemma, as follows. There are two categories in play here: the category {\mathbf{Ring}} of rings (where the morphisms are ring homomorphisms), and the category {\mathbf{Set}} of sets (where the morphisms are arbitrary functions). There is an obvious forgetful functor {\mathrm{Forget}: \mathbf{Ring} \rightarrow \mathbf{Set}} between these two categories that takes a ring and removes all of the algebraic structure, leaving behind just the underlying set. A collection {F_R: R \rightarrow R} of functions (i.e., {\mathbf{Set}}-morphisms) for each {R} in {\mathbf{Ring}} that obeys the intertwining relation (4) is precisely the same thing as a natural transformation from the forgetful functor {\mathrm{Forget}} to itself. So we have identified formal polynomials in {{\bf Z}[\mathrm{n}]}, viewed as a set, with the natural endomorphisms of the forgetful functor:

\displaystyle  \mathrm{Forget}({\bf Z}[\mathrm{n}]) \equiv \mathrm{Hom}( \mathrm{Forget}, \mathrm{Forget} ). \ \ \ \ \ (5)

Informally: polynomial forms are precisely those operations on rings that are respected by ring homomorphisms.

What does this have to do with Yoneda’s lemma? Well, remember that every element {n} of a ring {R} came with an evaluation homomorphism {\phi_{R,n}: {\bf Z}[\mathrm{n}] \rightarrow R}. Conversely, every homomorphism from {{\bf Z}[\mathrm{n}]} to {R} will be of the form {\phi_{R,n}} for a unique {n} – indeed, {n} will just be the image of {\mathrm{n}} under this homomorphism. So the evaluation homomorphism provides a one-to-one correspondence between elements of {R}, and ring homomorphisms in {\mathrm{Hom}({\bf Z}[\mathrm{n}], R)}. This correspondence is at the level of sets, so this gives the identification

\displaystyle  \mathrm{Forget} \equiv \mathrm{Hom}({\bf Z}[\mathrm{n}], -).

Thus our identification can be written as

\displaystyle  \mathrm{Forget}({\bf Z}[\mathrm{n}]) \equiv \mathrm{Hom}( \mathrm{Hom}({\bf Z}[\mathrm{n}], -), \mathrm{Forget} )

which is now clearly a special case of the Yoneda lemma

\displaystyle  F(A) \equiv \mathrm{Hom}( \mathrm{Hom}(A, -), F )

that applies to any functor {F: {\mathcal C} \rightarrow \mathbf{Set}} from a (locally small) category {{\mathcal C}} and any object {A} in {{\mathcal C}}. And indeed, if one inspects the standard proof of this lemma, it is essentially the same as the argument we used above to establish the identification (5). More generally, it seems to me that the Yoneda lemma is often used to identify “formal” objects with their “functional” interpretations, as long as one simultaneously considers interpretations across an entire category (such as the category of rings), as opposed to just a single interpretation in a single object of the category, in which there may be some loss of information due to the peculiarities of that specific object. Grothendieck’s “functor of points” interpretation of a scheme, discussed in this previous blog post, is one typical example of this.

Kevin Ford, Dimitris Koukoulopoulos and I have just uploaded to the arXiv our paper “A lower bound on the mean value of the Erdős-Hooley delta function”. This paper complements the recent paper of Dimitris and myself obtaining the upper bound

\displaystyle  \frac{1}{x} \sum_{n \leq x} \Delta(n) \ll (\log\log x)^{11/4}

on the mean value of the Erdős-Hooley delta function

\displaystyle  \Delta(n) := \sup_u \# \{ d|n: e^u < d \leq e^{u+1} \}.

In this paper we obtain a lower bound

\displaystyle  \frac{1}{x} \sum_{n \leq x} \Delta(n) \gg (\log\log x)^{1+\eta-o(1)}

where {\eta = 0.3533227\dots} is an exponent that arose in previous work of Ford, Green, and Koukoulopoulos, who showed that

\displaystyle  \Delta(n) \gg (\log\log n)^{\eta-o(1)} \ \ \ \ \ (1)

for all {n} outside of a set of density zero. The previous best known lower bound for the mean value was

\displaystyle  \frac{1}{x} \sum_{n \leq x} \Delta(n) \gg \log\log x,

due to Hall and Tenenbaum.
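
For concreteness, here is a naive Python routine (a sketch via divisor enumeration, fine only for small {n}) that computes {\Delta(n)} directly from the definition, using the observation that the supremum over {u} is attained, in the limit, by windows whose left endpoint approaches a divisor of {n} from below.

```python
import math

def divisors(n):
    """All divisors of n, by naive trial division (fine for small n)."""
    small = [d for d in range(1, math.isqrt(n) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

def delta(n):
    """Erdos-Hooley delta function: the maximal number of divisors of n in a
    window (e^u, e^{u+1}].  The maximum is attained (in the limit) by counting
    divisors in [d, d*e) as d runs over the divisors of n."""
    divs = divisors(n)
    return max(sum(1 for x in divs if d <= x < d * math.e) for d in divs)

# A few sample values; numbers with many divisors have large delta.
for n in [1, 12, 720, 720720]:
    print(n, delta(n))
```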

The point is that the main contributions to the mean value of {\Delta(n)} are driven not by “typical” numbers {n} of some size {x}, but rather by numbers that have a splitting

\displaystyle  n = n' n''

where {n''} is the product of primes between some intermediate threshold {1 \leq y \leq x} and {x} and behaves “typically” (so in particular, it has about {\log\log x - \log\log y + O(\sqrt{\log\log x})} prime factors, as per the Hardy-Ramanujan law and the Erdős-Kac law), but {n'} is the product of primes up to {y} and has double the number of typical prime factors – {2 \log\log y + O(\sqrt{\log\log x})}, rather than {\log\log y + O(\sqrt{\log\log x})} – thus {n'} is the type of number that would make a significant contribution to the mean value of the divisor function {\tau(n')}. Here {y} is such that {\log\log y} is an integer in the range

\displaystyle  \varepsilon\log\log x \leq \log \log y \leq (1-\varepsilon) \log\log x

for some small constant {\varepsilon>0}; there are basically {\log\log x} different such values of {y}, and these give essentially disjoint contributions. From the easy inequalities

\displaystyle  \Delta(n) \gg \Delta(n') \Delta(n'') \geq \frac{\tau(n')}{\log n'} \Delta(n'') \ \ \ \ \ (2)

(the latter coming from the pigeonhole principle) and the fact that {\frac{\tau(n')}{\log n'}} has mean about one, one would expect to get the above result provided that one could get a lower bound of the form

\displaystyle  \Delta(n'') \gg (\log \log n'')^{\eta-o(1)} \ \ \ \ \ (3)

for most typical {n''} with prime factors between {y} and {x}. Unfortunately, due to the lack of small prime factors in {n''}, the arguments of Ford, Green, and Koukoulopoulos that give (1) for typical {n} do not quite work for the rougher numbers {n''}. However, it turns out that one can get around this problem by replacing (2) by the more efficient inequality

\displaystyle  \Delta(n) \gg \frac{\tau(n')}{\log n'} \Delta^{(\log n')}(n'')

where

\displaystyle  \Delta^{(v)}(n) := \sup_u \# \{ d|n: e^u < d \leq e^{u+v} \}

is an enlarged version of {\Delta(n)} when {v \geq 1}. This inequality is easily proven by applying the pigeonhole principle to the factors of {n} of the form {d' d''}, where {d'} is one of the {\tau(n')} factors of {n'}, and {d''} is one of the {\Delta^{(\log n')}(n'')} factors of {n''} in the optimal interval {[e^u, e^{u+\log n'}]}. The extra room provided by the enlargement of the range {[e^u, e^{u+1}]} to {[e^u, e^{u+\log n'}]} turns out to be sufficient to adapt the Ford-Green-Koukoulopoulos argument to the rough setting. In fact we are able to use the main technical estimate from that paper as a “black box”, namely that if one considers a random subset {A} of {[D^c, D]} for some small {c>0} and sufficiently large {D} with each {n \in [D^c, D]} lying in {A} with an independent probability {1/n}, then with high probability there should be {\gg c^{-1/\eta+o(1)}} subset sums of {A} that attain the same value. (Initially, what “high probability” means is just “close to {1}”, but one can reduce the failure probability significantly as {c \rightarrow 0} by a “tensor power trick” taking advantage of Bennett’s inequality.)
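
To make this pigeonhole step concrete, here is a small Python sketch (repeating the naive divisor routine from the sketch above so that this block is self-contained) that computes the windowed count {\Delta^{(v)}(n)} and checks one crude explicit form of the displayed inequality, namely {\Delta(n' n'') \geq \tau(n') \Delta^{(\log n')}(n'') / \lceil 2 \log n' \rceil} for coprime {n', n''}; this particular constant is just my own bookkeeping of the pigeonhole argument, not a bound taken from the paper, and the numbers {n', n''} are arbitrary choices for illustration.

```python
import math

def divisors(n):
    small = [d for d in range(1, math.isqrt(n) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

def delta_v(n, v):
    """Windowed divisor count Delta^(v)(n): the maximal number of divisors of n
    in a window (e^u, e^{u+v}]; as with Delta = Delta^(1), the maximum is
    attained by counting divisors in [d, d*e^v) as d runs over the divisors."""
    divs = divisors(n)
    width = math.exp(v)
    return max(sum(1 for x in divs if d <= x < d * width) for d in divs)

# Arbitrarily chosen coprime pieces: n' built from small primes (so tau(n') is
# large), n'' built from larger primes.
n1 = 2**3 * 3**2 * 5             # n'  = 360
n2 = 7 * 11 * 13 * 17 * 19       # n'' = 323323

tau_n1 = len(divisors(n1))
lhs = delta_v(n1 * n2, 1)        # Delta(n' n'')
rhs = tau_n1 * delta_v(n2, math.log(n1)) / math.ceil(2 * math.log(n1))
print("Delta(n'n'') =", lhs, ">=", rhs)
assert lhs >= rhs
```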

I have just uploaded to the arXiv my paper “The convergence of an alternating series of Erdős, assuming the Hardy–Littlewood prime tuples conjecture”. This paper addresses an old problem of Erdős concerning whether the alternating series {\sum_{n=1}^\infty \frac{(-1)^n n}{p_n}} converges, where {p_n} denotes the {n^{th}} prime. The main result of this paper is that the answer to this question is affirmative assuming a sufficiently strong version of the Hardy–Littlewood prime tuples conjecture.

The alternating series test does not apply here because the ratios {\frac{n}{p_n}} are not monotonically decreasing. The deviations from monotonicity arise from fluctuations in the prime gaps {p_{n+1}-p_n}, so the enemy comes from biases in the prime gaps for odd and even {n}. By changing variables from {n} to {p_n} (or more precisely, to integers in the range between {p_n} and {p_{n+1}}), this is basically equivalent to biases in the parity {(-1)^{\pi(n)}} of the prime counting function. Indeed, it is an unpublished observation of Said that the convergence of {\sum_{n=1}^\infty \frac{(-1)^n n}{p_n}} is equivalent to the convergence of {\sum_{n=10}^\infty \frac{(-1)^{\pi(n)}}{n \log n}}. So this question is really about trying to get a sufficiently strong amount of equidistribution for the parity of {\pi(n)}.
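
Neither series can meaningfully be tested for convergence numerically (any convergence here is at best doubly logarithmic in speed), but for the curious, here is a short Python computation of partial sums of both series up to an arbitrary cutoff, with {\pi(n)} tracked by walking along the list of primes; the cutoff {10^6} is just a choice made for this sketch.

```python
import math
from sympy import primerange

N = 10**6
primes = list(primerange(2, N))          # p_1, p_2, ...: all primes below N

# Partial sum of sum_n (-1)^n n / p_n over the primes below N.
s1 = sum((-1) ** k * k / p for k, p in enumerate(primes, start=1))

# Partial sum of sum_{n >= 10} (-1)^{pi(n)} / (n log n), tracking pi(n) as n increases.
s2, pi, idx = 0.0, 4, 4                  # pi(10) = 4, and the next prime is primes[4] = 11
for n in range(10, N):
    if idx < len(primes) and primes[idx] == n:
        pi, idx = pi + 1, idx + 1
    s2 += (-1) ** pi / (n * math.log(n))

print("partial sum of (-1)^n n/p_n          :", s1)
print("partial sum of (-1)^pi(n)/(n log n)  :", s2)
```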

The prime tuples conjecture does not directly say much about the value of {\pi(n)}; however, it can be used to control differences {\pi(n+\lambda \log x) - \pi(n)} for {n \sim x} and {\lambda>0} not too large. Indeed, it is a famous calculation of Gallagher that for fixed {\lambda}, and {n} chosen randomly from {1} to {x}, the quantity {\pi(n+\lambda \log x) - \pi(n)} is distributed according to the Poisson distribution of mean {\lambda} asymptotically if the prime tuples conjecture holds. In particular, the parity {(-1)^{\pi(n+\lambda \log x)-\pi(n)}} of this quantity should have mean asymptotic to {e^{-2\lambda}}. An application of the van der Corput {A}-process then gives some decay on the mean of {(-1)^{\pi(n)}} as well. Unfortunately, this decay is a bit too weak for this problem; even if one uses the most quantitative version of Gallagher’s calculation, worked out in a recent paper of (Vivian) Kuperberg, the best bound on the mean {|\frac{1}{x} \sum_{n \leq x} (-1)^{\pi(n)}|} is something like {1/(\log\log x)^{1/4+o(1)}}, which is not quite strong enough to overcome the doubly logarithmic divergence of {\sum_{n=2}^\infty \frac{1}{n \log n}}.
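
The identity behind this parity heuristic is the elementary computation that if {X} is Poisson with mean {\lambda}, then {{\bf E} (-1)^X = \sum_k e^{-\lambda} \frac{\lambda^k}{k!} (-1)^k = e^{-2\lambda}}. A quick Monte Carlo sanity check of this identity in Python (numpy; the sample size and values of {\lambda} are arbitrary):

```python
# Monte Carlo check that E[(-1)^X] is approximately exp(-2*lam) when X ~ Poisson(lam).
import numpy as np

rng = np.random.default_rng(0)
for lam in [0.5, 1.0, 2.0]:
    samples = rng.poisson(lam, size=10**6)
    print(lam, np.mean((-1.0) ** samples), np.exp(-2 * lam))
```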

To get around this obstacle, we take advantage of the random sifted model {{\mathcal S}_z} of the primes that was introduced in a paper of Banks, Ford, and myself. To model the primes in an interval such as {[n, n+\lambda \log x]} with {n} drawn randomly from, say, {[x,2x]}, we remove one random residue class {a_p \hbox{ mod } p} from this interval for all primes {p} up to Pólya’s “magic cutoff” {z \approx x^{1/e^\gamma}}. The prime tuples conjecture can then be interpreted as the assertion that the random set {{\mathcal S}_z} produced by this sieving process is statistically a good model for the primes in {[n, n+\lambda \log x]}. After some standard manipulations (using a version of the Bonferroni inequalities, as well as some upper bounds of Kuperberg), the problem then boils down to getting sufficiently strong estimates for the expected parity {{\bf E} (-1)^{|{\mathcal S}_z|}} of the random sifted set {{\mathcal S}_z}.
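
To get a feel for this model, here is a toy Python simulation of the sifted set {{\mathcal S}_z}: sift an interval of length {\lambda \log x} by deleting one uniformly random residue class mod {p} for every prime {p} up to {z \approx x^{1/e^\gamma}}. The parameters {x}, {\lambda} and the number of trials are arbitrary choices, and the comparison of the empirical parity with {e^{-2\lambda}} is only the rough Poisson heuristic from above, not the precise prediction used in the paper.

```python
import math
import random
from sympy import primerange

x = 10**6
lam = 1.0
H = round(lam * math.log(x))                    # interval length ~ lam * log x
z = x ** math.exp(-0.5772156649015329)          # Polya's magic cutoff x^(1/e^gamma)
primes = list(primerange(2, int(z) + 1))

def sifted_size():
    """Size of one sample of S_z: sift {0, ..., H-1} by a random class mod each p."""
    survivors = list(range(H))
    for p in primes:
        a = random.randrange(p)                 # the deleted residue class a_p mod p
        survivors = [m for m in survivors if m % p != a]
    return len(survivors)

trials = 10000
sizes = [sifted_size() for _ in range(trials)]
print("average size of S_z :", sum(sizes) / trials, "(heuristically close to lam)")
print("average parity      :", sum((-1) ** s for s in sizes) / trials)
print("Poisson heuristic   :", math.exp(-2 * lam))
```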

For this problem, the main advantage of working with the random sifted model, rather than with the primes or the singular series arising from the prime tuples conjecture, is that the sifted model can be studied iteratively from the partially sifted sets {{\mathcal S}_w} arising from sifting primes {p} up to some intermediate threshold {w<z}, and that the expected parity of the {{\mathcal S}_w} experiences some decay in {w}. Indeed, once {w} exceeds the length {\lambda \log x} of the interval {[n,n+\lambda \log x]}, sifting {{\mathcal S}_w} by an additional prime {p} will cause {{\mathcal S}_w} to lose one element with probability {|{\mathcal S}_w|/p}, and remain unchanged with probability {1 - |{\mathcal S}_w|/p}. If {|{\mathcal S}_w|} concentrates around some value {\overline{S}_w}, this suggests that the expected parity {{\bf E} (-1)^{|{\mathcal S}_w|}} will decay by a factor of about {|1 - 2 \overline{S}_w/p|} as one increases {w} to {p}, and iterating this should give good bounds on the final expected parity {{\bf E} (-1)^{|{\mathcal S}_z|}}. It turns out that existing second moment calculations of Montgomery and Soundararajan suffice to obtain enough concentration to make this strategy work.
