
Starting this week, I will be teaching an introductory graduate course (Math 275A) on probability theory here at UCLA. While I find myself using probabilistic methods routinely nowadays in my research (for instance, the probabilistic concept of Shannon entropy played a crucial role in my recent paper on the Chowla and Elliott conjectures, and random multiplicative functions similarly played a central role in the paper on the Erdos discrepancy problem), this will actually be the first time I will be teaching a course on probability itself (although I did give a course on random matrix theory some years ago that presumed familiarity with graduate-level probability theory). As such, I will be relying primarily on an existing textbook, in this case Durrett’s Probability: Theory and Examples. I still need to prepare lecture notes, though, and so I thought I would continue my practice of putting my notes online, although in this particular case they will be less detailed or complete than with other courses, as they will mostly focus on those topics that are not already comprehensively covered in the text of Durrett. Below the fold are my first such set of notes, concerning the classical measure-theoretic foundations of probability. (I also wrote on these foundations in this previous blog post, but in that post I assumed that the reader was familiar with measure theory and basic probability, whereas in this course not every student will have a strong background in these areas.)

Note: as this set of notes is primarily concerned with foundational issues, it will contain a large number of pedantic (and nearly trivial) formalities and philosophical points. We dwell on these technicalities in this set of notes primarily so that they are out of the way in later notes, when we work with the actual mathematics of probability, rather than on the supporting foundations of that mathematics. In particular, the excessively formal and philosophical language in this set of notes will not be replicated in later notes.


Let {X} and {Y} be two random variables taking values in the same (discrete) range {R}, and let {E} be some subset of {R}, which we think of as the set of “bad” outcomes for either {X} or {Y}. If {X} and {Y} have the same probability distribution, then clearly

\displaystyle  {\bf P}( X \in E ) = {\bf P}( Y \in E ).

In particular, if it is rare for {Y} to lie in {E}, then it is also rare for {X} to lie in {E}.

If {X} and {Y} do not have exactly the same probability distribution, but their probability distributions are close to each other in some sense, then we can expect to have an approximate version of the above statement. For instance, from the definition of the total variation distance {\delta(X,Y)} between two random variables (or more precisely, the total variation distance between the probability distributions of two random variables), we see that

\displaystyle  {\bf P}(Y \in E) - \delta(X,Y) \leq {\bf P}(X \in E) \leq {\bf P}(Y \in E) + \delta(X,Y) \ \ \ \ \ (1)

for any {E \subset R}. In particular, if it is rare for {Y} to lie in {E}, and {X,Y} are close in total variation, then it is also rare for {X} to lie in {E}.

A basic inequality in information theory is Pinsker’s inequality

\displaystyle  \delta(X,Y) \leq \sqrt{\frac{1}{2} D_{KL}(X||Y)}

where the Kullback-Leibler divergence {D_{KL}(X||Y)} is defined by the formula

\displaystyle  D_{KL}(X||Y) = \sum_{x \in R} {\bf P}( X=x ) \log \frac{{\bf P}(X=x)}{{\bf P}(Y=x)}.

(See this previous blog post for a proof of this inequality.) A standard application of Jensen’s inequality reveals that {D_{KL}(X||Y)} is non-negative (Gibbs’ inequality), and vanishes if and only if {X}, {Y} have the same distribution; thus one can think of {D_{KL}(X||Y)} as a measure of how close the distributions of {X} and {Y} are to each other, although one should caution that this is not a symmetric notion of distance, as {D_{KL}(X||Y) \neq D_{KL}(Y||X)} in general. Inserting Pinsker’s inequality into (1), we see for instance that

\displaystyle  {\bf P}(X \in E) \leq {\bf P}(Y \in E) + \sqrt{\frac{1}{2} D_{KL}(X||Y)}.

Thus, if {X} is close to {Y} in the Kullback-Leibler sense, and it is rare for {Y} to lie in {E}, then it is rare for {X} to lie in {E} as well.
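
All of the quantities above are elementary to compute for discrete distributions, so Pinsker's inequality is easy to sanity-check numerically. Here is a minimal sketch in Python (the helper names and the choice of test distributions are mine; a numerical check is of course no substitute for the proof linked above):

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance delta(X,Y) = sup_E |P(X in E) - P(Y in E)|
    = (1/2) sum_x |p(x) - q(x)| for discrete distributions p, q."""
    return 0.5 * float(np.abs(p - q).sum())

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(X||Y), in nats."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
for _ in range(10_000):
    p = rng.dirichlet(np.ones(5))  # a random distribution on a 5-element range
    q = rng.dirichlet(np.ones(5))  # Dirichlet samples are positive a.s., so the log is safe
    assert tv_distance(p, q) <= np.sqrt(0.5 * kl_divergence(p, q)) + 1e-12
print("Pinsker's inequality held in all trials")
```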

We can specialise this inequality to the case when {Y} is a uniform random variable {U} on a finite range {R} of some cardinality {N}, in which case the Kullback-Leibler divergence {D_{KL}(X||U)} simplifies to

\displaystyle  D_{KL}(X||U) = \log N - {\bf H}(X)

where

\displaystyle  {\bf H}(X) := \sum_{x \in R} {\bf P}(X=x) \log \frac{1}{{\bf P}(X=x)}

is the Shannon entropy of {X}. Again, a routine application of Jensen’s inequality shows that {{\bf H}(X) \leq \log N}, with equality if and only if {X} is uniformly distributed on {R}. The above inequality then becomes

\displaystyle  {\bf P}(X \in E) \leq {\bf P}(U \in E) + \sqrt{\frac{1}{2}(\log N - {\bf H}(X))}. \ \ \ \ \ (2)

Thus, if {E} is a small fraction of {R} (so that it is rare for {U} to lie in {E}), and the entropy of {X} is very close to the maximum possible value of {\log N}, then it is rare for {X} to lie in {E} also.

The inequality (2) is only useful when the entropy {{\bf H}(X)} is close to {\log N} in the sense that {{\bf H}(X) = \log N - O(1)}; otherwise, the bound is worse than the trivial bound of {{\bf P}(X \in E) \leq 1}. In my recent paper on the Chowla and Elliott conjectures, I ended up using a variant of (2) which was still non-trivial when the entropy {{\bf H}(X)} was allowed to be smaller than {\log N - O(1)}. More precisely, I used the following simple inequality, which is implicit in the arguments of that paper but which I would like to make more explicit in this post:

Lemma 1 (Pinsker-type inequality) Let {X} be a random variable taking values in a finite range {R} of cardinality {N}, let {U} be a uniformly distributed random variable in {R}, and let {E} be a subset of {R}. Then

\displaystyle  {\bf P}(X \in E) \leq \frac{(\log N - {\bf H}(X)) + \log 2}{\log 1/{\bf P}(U \in E)}.

Proof: Consider the conditional entropy {{\bf H}(X | 1_{X \in E} )}. On the one hand, we have

\displaystyle  {\bf H}(X | 1_{X \in E} ) = {\bf H}(X, 1_{X \in E}) - {\bf H}(1_{X \in E} )

\displaystyle  = {\bf H}(X) - {\bf H}(1_{X \in E})

\displaystyle  \geq {\bf H}(X) - \log 2

by Jensen’s inequality. On the other hand, one has

\displaystyle  {\bf H}(X | 1_{X \in E} ) = {\bf P}(X \in E) {\bf H}(X | X \in E )

\displaystyle  + (1-{\bf P}(X \in E)) {\bf H}(X | X \not \in E)

\displaystyle  \leq {\bf P}(X \in E) \log |E| + (1-{\bf P}(X \in E)) \log N

\displaystyle  = \log N - {\bf P}(X \in E) \log \frac{N}{|E|}

\displaystyle  = \log N - {\bf P}(X \in E) \log \frac{1}{{\bf P}(U \in E)},

where we have again used Jensen’s inequality. Putting the two inequalities together, we obtain the claim. \Box
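
Lemma 1 is similarly easy to test numerically. The following sketch (with my own helper names and illustrative parameters) draws random distributions for {X} and random proper subsets {E} and checks the stated bound; note that the bound is vacuous when {E = R}, since then {\log 1/{\bf P}(U \in E) = 0}:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = sum_x P(X=x) log(1/P(X=x)), in nats."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
N = 20
for _ in range(10_000):
    p = rng.dirichlet(np.ones(N))             # distribution of X on a range of size N
    k = int(rng.integers(1, N))               # |E| = k, a proper subset so that
    E = rng.choice(N, size=k, replace=False)  # log 1/P(U in E) = log(N/k) is positive
    lhs = float(p[E].sum())                   # P(X in E)
    rhs = (np.log(N) - entropy(p) + np.log(2)) / np.log(N / k)
    assert lhs <= rhs + 1e-12
print("Lemma 1 held in all trials")
```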

Remark 2 As noted in comments, this inequality can be viewed as a special case of the more general inequality

\displaystyle  {\bf P}(X \in E) \leq \frac{D(X||Y) + \log 2}{\log 1/{\bf P}(Y \in E)}

for arbitrary random variables {X,Y} taking values in the same discrete range {R}, which follows from the data processing inequality

\displaystyle  D( f(X)||f(Y)) \leq D(X|| Y)

for arbitrary functions {f}, applied to the indicator function {f = 1_E}. Indeed one has

\displaystyle  D( 1_E(X) || 1_E(Y) ) = {\bf P}(X \in E) \log \frac{{\bf P}(X \in E)}{{\bf P}(Y \in E)}

\displaystyle + {\bf P}(X \not \in E) \log \frac{{\bf P}(X \not \in E)}{{\bf P}(Y \not \in E)}

\displaystyle  \geq {\bf P}(X \in E) \log \frac{1}{{\bf P}(Y \in E)} - h( {\bf P}(X \in E) )

\displaystyle  \geq {\bf P}(X \in E) \log \frac{1}{{\bf P}(Y \in E)} - \log 2

where {h(u) := u \log \frac{1}{u} + (1-u) \log \frac{1}{1-u}} is the entropy function.
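
The same kind of numerical check applies to this more general inequality; here is a short sketch (parameters mine), with {Y} now having an arbitrary distribution rather than the uniform one:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12
for _ in range(10_000):
    p = rng.dirichlet(np.ones(N))          # distribution of X
    q = rng.dirichlet(np.ones(N))          # distribution of Y
    kl = float(np.sum(p * np.log(p / q)))  # D(X||Y)
    k = int(rng.integers(1, N))            # E a proper subset, so P(Y in E) < 1
    E = rng.choice(N, size=k, replace=False)
    # P(X in E) <= (D(X||Y) + log 2) / log(1/P(Y in E))
    assert p[E].sum() <= (kl + np.log(2)) / np.log(1.0 / q[E].sum()) + 1e-12
print("Remark 2 inequality held in all trials")
```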

Thus, for instance, if one has

\displaystyle  {\bf H}(X) \geq \log N - o(K)

and

\displaystyle  {\bf P}(U \in E) \leq \exp( - K )

for some {K} much larger than {1} (so that {1/K = o(1)}), then

\displaystyle  {\bf P}(X \in E) = o(1).

More informally: if the entropy of {X} is somewhat close to the maximum possible value of {\log N}, and it is exponentially rare for a uniform variable to lie in {E}, then it is still somewhat rare for {X} to lie in {E}. The estimate given is close to sharp in this regime, as can be seen by calculating the entropy of a random variable {X} which is uniformly distributed inside a small set {E} with some probability {p} and uniformly distributed outside of {E} with probability {1-p}, for some parameter {0 \leq p \leq 1}.
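
To see this near-sharpness concretely, one can compute the entropy of such a mixture directly and compare the true value {{\bf P}(X \in E) = p} against the bound of Lemma 1; the bound exceeds the truth only by roughly {(\log 2 - h(p))/K}. A sketch (parameter choices mine):

```python
import numpy as np

N = 10**6
K = 8.0
E_size = int(round(N * np.exp(-K)))  # |E| chosen so that P(U in E) is about exp(-K)

for p in [0.1, 0.3, 0.5, 0.9]:
    # X: uniform on E with probability p, uniform on the complement with probability 1-p
    probs = np.concatenate([np.full(E_size, p / E_size),
                            np.full(N - E_size, (1 - p) / (N - E_size))])
    H = float(-(probs * np.log(probs)).sum())
    bound = (np.log(N) - H + np.log(2)) / np.log(N / E_size)
    print(f"p = {p}: true P(X in E) = {p:.2f}, Lemma 1 bound = {bound:.3f}")
```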

It turns out that the above lemma combines well with concentration of measure estimates; in my paper, I used one of the simplest such estimates, namely Hoeffding’s inequality, but there are of course many other estimates of this type (see e.g. this previous blog post for some others). Roughly speaking, concentration of measure inequalities allow one to make approximations such as

\displaystyle  F(U) \approx {\bf E} F(U)

with exponentially high probability, where {U} is a uniform distribution and {F} is some reasonable function of {U}. Combining this with the above lemma, we can then obtain approximations of the form

\displaystyle  F(X) \approx {\bf E} F(U) \ \ \ \ \ (3)

with somewhat high probability, if the entropy of {X} is somewhat close to maximum. This observation, combined with an “entropy decrement argument” that allowed one to arrive at a situation in which the relevant random variable {X} did have a near-maximum entropy, is the key new idea in my recent paper; for instance, one can use the approximation (3) to obtain an approximation of the form

\displaystyle  \sum_{j=1}^H \sum_{p \in {\mathcal P}} \lambda(n+j) \lambda(n+j+p) 1_{p|n+j}

\displaystyle  \approx \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+j+p)}{p}

for “most” choices of {n} and a suitable choice of {H} (with the latter being provided by the entropy decrement argument). The left-hand side is tied to Chowla-type sums such as {\sum_{n \leq x} \frac{\lambda(n)\lambda(n+1)}{n}} through the multiplicativity of {\lambda}, while the right-hand side, being a linear correlation involving two parameters {j,p} rather than just one, has “finite complexity” and can be treated by existing techniques such as the Hardy-Littlewood circle method. One could hope that one could similarly use approximations such as (3) in other problems in analytic number theory or combinatorics.
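
While the entropy decrement argument itself is beyond a quick demonstration, the flavour of this approximation can at least be glimpsed empirically for the Liouville function: for most {n}, the indicator-weighted sum and its {1/p}-averaged counterpart are both much smaller than the trivial bound. Here is a sketch in Python; the window {H}, the range of primes {{\mathcal P}}, and the sampling are illustrative choices of mine, not the parameters produced by the entropy decrement argument:

```python
import random

def liouville_list(limit):
    """lam[n] = lambda(n) = (-1)^Omega(n), computed with a linear sieve."""
    lam = [0] * (limit + 1)
    lam[1] = 1
    primes = []
    for n in range(2, limit + 1):
        if lam[n] == 0:                  # n is prime
            lam[n] = -1
            primes.append(n)
        for p in primes:
            if p * n > limit:
                break
            lam[p * n] = -lam[n]         # complete multiplicativity: lambda(pn) = -lambda(n)
            if n % p == 0:
                break
    return lam, primes

H = 1000
base = 10**5
lam, primes = liouville_list(base + 2 * H)
P = [p for p in primes if 50 <= p <= 100]    # "medium-sized" primes, somewhat below H

random.seed(0)
trivial = H * sum(1.0 / p for p in P)        # trivial bound on either expression
for _ in range(5):
    n = random.randrange(base // 2, base)
    lhs = sum(lam[n + j] * lam[n + j + p]
              for j in range(1, H + 1) for p in P if (n + j) % p == 0)
    rhs = sum(lam[n + j] * lam[n + j + p] / p
              for j in range(1, H + 1) for p in P)
    print(f"n = {n}: indicator sum = {lhs:4d}, averaged sum = {rhs:7.2f}, trivial = {trivial:.0f}")
```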

I’ve just uploaded two related papers to the arXiv:

This pair of papers is an outgrowth of these two recent blog posts and the ensuing discussion. In the first paper, we establish the following logarithmically averaged version of the Chowla conjecture (in the case {k=2} of two-point correlations (or “pair correlations”)):

Theorem 1 (Logarithmically averaged Chowla conjecture) Let {a_1,a_2} be natural numbers, and let {b_1,b_2} be integers such that {a_1 b_2 - a_2 b_1 \neq 0}. Let {1 \leq \omega(x) \leq x} be a quantity depending on {x} that goes to infinity as {x \rightarrow \infty}. Let {\lambda} denote the Liouville function. Then one has

\displaystyle  \sum_{x/\omega(x) < n \leq x} \frac{\lambda(a_1 n + b_1) \lambda(a_2 n+b_2)}{n} = o( \log \omega(x) ) \ \ \ \ \ (1)

as {x \rightarrow \infty}.

Thus for instance one has

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x). \ \ \ \ \ (2)

For comparison, the non-averaged Chowla conjecture would imply that

\displaystyle  \sum_{n \leq x} \lambda(n) \lambda(n+1) = o(x) \ \ \ \ \ (3)

which is a strictly stronger estimate than (2), and remains open.
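
Neither (2) nor (3) can of course be verified by computation, but it is easy to evaluate both averages for moderate {x} and observe numerics consistent with the conjectured cancellation. A sketch (sieve implementation and cutoffs mine):

```python
import numpy as np

def liouville_np(limit):
    """lam[n] = (-1)^Omega(n); Omega(n) counts the prime powers p^k dividing n."""
    omega = np.zeros(limit + 1, dtype=np.int64)
    composite = np.zeros(limit + 1, dtype=bool)
    for p in range(2, limit + 1):
        if not composite[p]:
            composite[p * p::p] = True
            pk = p
            while pk <= limit:
                omega[pk::pk] += 1   # each multiple of p^k gains one factor of p
                pk *= p
    lam = np.where(omega % 2 == 0, 1, -1).astype(np.int8)
    lam[0] = 0
    return lam

x = 10**6
lam = liouville_np(x + 1).astype(np.float64)
prod = lam[1:x] * lam[2:x + 1]                 # lambda(n) lambda(n+1) for n = 1..x-1
plain = np.cumsum(prod)                        # sum_{n <= t} lambda(n) lambda(n+1)
logavg = np.cumsum(prod / np.arange(1, x))     # sum_{n <= t} lambda(n) lambda(n+1) / n
for t in [10**3, 10**4, 10**5, x - 1]:
    print(f"t = {t:>7}: plain/t = {plain[t-1]/t:+.4f}, log-avg/log t = {logavg[t-1]/np.log(t):+.4f}")
```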

The arguments also extend to completely multiplicative functions other than the Liouville function. In particular, one obtains a slightly averaged version of the non-asymptotic Elliott conjecture that was shown in the previous blog post to imply a positive solution to the Erdos discrepancy problem. The averaged version of the conjecture established in this paper is slightly weaker than the one assumed in the previous blog post, but it turns out that the arguments there can be modified without much difficulty to accept this averaged Elliott conjecture as input. In particular, we obtain an unconditional solution to the Erdos discrepancy problem as a consequence; this is detailed in the second paper listed above. In fact we can also handle the vector-valued version of the Erdos discrepancy problem, in which the sequence {f(1), f(2), \dots} takes values in the unit sphere of an arbitrary Hilbert space, rather than in {\{-1,+1\}}.

Estimates such as (2) or (3) are known to be subject to the “parity problem” (discussed numerous times previously on this blog), which roughly speaking means that they cannot be proven solely using “linear” estimates on functions such as the von Mangoldt function. However, it is known that the parity problem can be circumvented using “bilinear” estimates, and this is basically what is done here.

We now describe in informal terms the proof of Theorem 1, focusing on the model case (2) for simplicity. Suppose for contradiction that the left-hand side of (2) was large and (say) positive. Using the multiplicativity {\lambda(pn) = -\lambda(n)}, we conclude that

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+p) 1_{p|n}}{n}

is also large and positive for all primes {p} that are not too large; note here how the logarithmic averaging allows us to leave the constraint {n \leq x} unchanged. Summing in {p}, we conclude that

\displaystyle  \sum_{n \leq x} \frac{ \sum_{p \in {\mathcal P}} \lambda(n) \lambda(n+p) 1_{p|n}}{n}

is large and positive for any given set {{\mathcal P}} of medium-sized primes. By a standard averaging argument, this implies that

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \lambda(n+j) \lambda(n+p+j) 1_{p|n+j} \ \ \ \ \ (4)

is large for many choices of {n}, where {H} is a medium-sized parameter at our disposal to choose, and we take {{\mathcal P}} to be some set of primes that are somewhat smaller than {H}. (A similar approach was taken in this recent paper of Matomaki, Radziwill, and myself to study sign patterns of the Möbius function.) To obtain the required contradiction, one thus wants to demonstrate significant cancellation in the expression (4). As in that paper, we view {n} as a random variable, in which case (4) is essentially a bilinear sum of the random sequence {(\lambda(n+1),\dots,\lambda(n+H))} along a random graph {G_{n,H}} on {\{1,\dots,H\}}, in which two vertices {j, j+p} are connected if they differ by a prime {p} in {{\mathcal P}} that divides {n+j}. A key difficulty in controlling this sum is that for randomly chosen {n}, the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}} need not be independent. To get around this obstacle we introduce a new argument which we call the “entropy decrement argument” (in analogy with the “density increment argument” and “energy increment argument” that appear in the literature surrounding Szemerédi’s theorem on arithmetic progressions, and also reminiscent of the “entropy compression argument” of Moser and Tardos, discussed in this previous post). This argument, which is a simple consequence of the Shannon entropy inequalities, can be viewed as a quantitative version of the standard subadditivity argument that establishes the existence of Kolmogorov-Sinai entropy in topological dynamical systems; it allows one to select a scale parameter {H} (in some suitable range {[H_-,H_+]}) for which the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}} exhibit some weak independence properties (or more precisely, the mutual information between the two random variables is small).

Informally, the entropy decrement argument goes like this: if the sequence {(\lambda(n+1),\dots,\lambda(n+H))} has significant mutual information with {G_{n,H}}, then the entropy of the sequence {(\lambda(n+1),\dots,\lambda(n+H'))} for {H' > H} will grow a little slower than linearly, due to the fact that the graph {G_{n,H}} has zero entropy (knowledge of {G_{n,H}} more or less completely determines the shifts {G_{n+kH,H}} of the graph); this can be formalised using the classical Shannon inequalities for entropy (and specifically, the non-negativity of conditional mutual information). But the entropy cannot drop below zero, so by increasing {H} as necessary, at some point one must reach a metastable region (cf. the finite convergence principle discussed in this previous blog post), within which very little mutual information can be shared between the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}}. Curiously, for the application it is not enough to have a purely qualitative version of this argument; one needs a quantitative bound (which gains a factor of a bit more than {\log H} on the trivial bound for mutual information), and this is surprisingly delicate (it ultimately comes down to the fact that the series {\sum_{j \geq 2} \frac{1}{j \log j \log\log j}} diverges, which is only barely true).
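
As an aside, the borderline divergence mentioned at the end of the last paragraph can be glimpsed numerically: the partial sums of {\sum_j \frac{1}{j \log j \log\log j}} grow like {\log\log\log J}, i.e. excruciatingly slowly. A tiny sketch (starting from {j=3} so that {\log\log j} is positive):

```python
import math

# Partial sums of sum_{j >= 3} 1/(j log j loglog j): the series diverges, but only
# just, since the partial sums grow like log log log J.
s, j, report = 0.0, 3, 10
while j <= 10**6:
    s += 1.0 / (j * math.log(j) * math.log(math.log(j)))
    if j == report:
        print(f"partial sum up to j = {j:>7}: {s:.3f}")
        report *= 10
    j += 1
```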

Once one locates a scale {H} with the low mutual information property, one can use standard concentration of measure results such as the Hoeffding inequality to approximate (4) by the significantly simpler expression

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+p+j)}{p}. \ \ \ \ \ (5)

The important thing here is that Hoeffding’s inequality gives exponentially strong bounds on the failure probability, which is needed to counteract the logarithms that are inevitably present whenever trying to use entropy inequalities. The expression (5) can then be controlled in turn by an application of the Hardy-Littlewood circle method and a non-trivial estimate

\displaystyle  \sup_\alpha \frac{1}{X} \int_X^{2X} |\frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(1) \ \ \ \ \ (6)

for averaged short sums of a modulated Liouville function established in another recent paper by Matomäki, Radziwill and myself.

When one uses this method to study more general sums such as

\displaystyle  \sum_{n \leq x} \frac{g_1(n) g_2(n+1)}{n},

one ends up having to consider expressions such as

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} c_p \frac{g_1(n+j) g_2(n+p+j)}{p}.

where {c_p} is the coefficient {c_p := \overline{g_1}(p) \overline{g_2}(p)}. When attacking this sum with the circle method, one soon finds oneself in the situation of wanting to locate the large Fourier coefficients of the exponential sum

\displaystyle  S(\alpha) := \sum_{p \in {\mathcal P}} \frac{c_p}{p} e^{2\pi i \alpha p}.

In many cases (such as in the application to the Erdös discrepancy problem), the coefficient {c_p} is identically {1}, and one can understand this sum satisfactorily using the classical results of Vinogradov: basically, {S(\alpha)} is large when {\alpha} lies in a “major arc” and is small when it lies in a “minor arc”. For more general functions {g_1,g_2}, the coefficients {c_p} are more or less arbitrary; the large values of {S(\alpha)} are no longer confined to the major arc case. Fortunately, even in this general situation one can use a restriction theorem for the primes established some time ago by Ben Green and myself to show that there are still only a bounded number of possible locations {\alpha} (up to the uncertainty mandated by the Heisenberg uncertainty principle) where {S(\alpha)} is large, and we can still conclude by using (6). (Actually, as recently pointed out to me by Ben, one does not need the full strength of our result; one only needs the {L^4} restriction theorem for the primes, which can be proven fairly directly using Plancherel’s theorem and some sieve theory.)
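
The major arc/minor arc dichotomy for {S(\alpha)} in the Vinogradov case {c_p = 1} is easy to observe numerically: {|S(\alpha)|} is comparable to the trivial bound {\sum_{p \in {\mathcal P}} \frac{1}{p}} at rationals with small denominator, and much smaller at generic {\alpha}. A sketch (the choice of {{\mathcal P}} and of sample frequencies is mine):

```python
import cmath
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(range(i * i, n + 1, i))
    return [p for p, b in enumerate(sieve) if b]

P = [p for p in primes_up_to(10**5) if p > 10**4]   # a set of medium-sized primes

def S(alpha):
    """S(alpha) = sum_{p in P} (c_p / p) e^{2 pi i alpha p}, here with c_p = 1."""
    return sum(cmath.exp(2j * math.pi * alpha * p) / p for p in P)

trivial = sum(1.0 / p for p in P)                   # trivial bound on |S(alpha)|
for alpha in [0.0, 1/2, 1/3, 2/5, math.sqrt(2) - 1, 0.2345678]:
    print(f"alpha = {alpha:.7f}: |S(alpha)| / trivial = {abs(S(alpha)) / trivial:.3f}")
```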

It is tempting to also use the method to attack higher order cases of the (logarithmically) averaged Chowla conjecture, for instance one could try to prove the estimate

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1) \lambda(n+2)}{n} = o(\log x).

The above arguments reduce matters to obtaining some non-trivial cancellation for sums of the form

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+p+j) \lambda(n+2p+j)}{p}.

A little bit of “higher order Fourier analysis” (as was done for very similar sums in the ergodic theory context by Frantzikinakis-Host-Kra and Wooley-Ziegler) lets one control this sort of sum if one can establish a bound of the form

\displaystyle  \frac{1}{X} \int_X^{2X} \sup_\alpha |\frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(1) \ \ \ \ \ (7)

where {X} goes to infinity and {H} is a very slowly growing function of {X}. This looks very similar to (6), but the fact that the supremum is now inside the integral makes the problem much more difficult. However it looks worth attacking (7) further, as this estimate looks like it should have many nice applications (beyond just the {k=3} case of the logarithmically averaged Chowla or Elliott conjectures, which is already interesting).

For values of {k} higher than {3}, the same line of analysis requires one to replace the linear phase {e(\alpha n)} by more complicated phases, such as quadratic phases {e(\alpha n^2 + \beta n)} or even {k-2}-step nilsequences. Given that (7) is already beyond the reach of current literature, these even more complicated expressions are also unavailable at present, but one can imagine that they will eventually become tractable, in which case we would obtain an averaged form of the Chowla conjecture for all {k}, which would have a number of consequences (such as a logarithmically averaged version of Sarnak’s conjecture, as per this blog post).

It would of course be very nice to remove the logarithmic averaging, and be able to establish bounds such as (3). I did attempt to do so, but I do not see a way to use the entropy decrement argument in a manner that does not require some sort of averaging of logarithmic type, as it requires one to pick a scale {H} that one cannot specify in advance, which is not a problem for logarithmic averages (which are quite stable with respect to dilations) but is problematic for ordinary averages. But perhaps the problem can be circumvented by some clever modification of the argument. One possible approach would be to start exploiting multiplicativity at products of primes, and not just individual primes, to try to keep the scale fixed, but this makes the concentration of measure part of the argument much more complicated as one loses some independence properties (coming from the Chinese remainder theorem) which allowed one to conclude just from the Hoeffding inequality.

The Chowla conjecture asserts that all non-trivial correlations of the Liouville function are asymptotically negligible; for instance, it asserts that

\displaystyle  \sum_{n \leq X} \lambda(n) \lambda(n+h) = o(X)

as {X \rightarrow \infty} for any fixed natural number {h}. This conjecture remains open, though there are a number of partial results (e.g. these two previous results of Matomaki, Radziwill, and myself).

A natural generalisation of Chowla’s conjecture was proposed by Elliott. For simplicity we will only consider Elliott’s conjecture for the pair correlations

\displaystyle  \sum_{n \leq X} g(n) \overline{g}(n+h).

For such correlations, the conjecture was that one had

\displaystyle  \sum_{n \leq X} g(n) \overline{g}(n+h) = o(X) \ \ \ \ \ (1)

as {X \rightarrow \infty} for any natural number {h}, as long as {g} was a completely multiplicative function with magnitude bounded by {1}, and such that

\displaystyle  \sum_p \hbox{Re} \frac{1 - g(p) \overline{\chi(p)} p^{-it}}{p} = +\infty \ \ \ \ \ (2)

for any Dirichlet character {\chi} and any real number {t}. In the language of “pretentious number theory”, as developed by Granville and Soundararajan, the hypothesis (2) asserts that the completely multiplicative function {g} does not “pretend” to be like the completely multiplicative function {n \mapsto \chi(n) n^{it}} for any character {\chi} and real number {t}. A condition of this form is necessary; for instance, if {g(n)} is precisely equal to {\chi(n) n^{it}} and {\chi} has period {q}, then {g(n) \overline{g}(n+q)} is equal to {1_{(n,q)=1} + o(1)} as {n \rightarrow \infty} and (1) clearly fails. The prime number theorem in arithmetic progressions implies that the Liouville function obeys (2), and so the Elliott conjecture contains the Chowla conjecture as a special case.

As it turns out, Elliott’s conjecture is false as stated, with the counterexample {g} having the property that {g} “pretends” locally to be the function {n \mapsto n^{it_j}} for {n} in various intervals {[1, X_j]}, where {X_j} and {t_j} go to infinity in a certain prescribed sense. See this paper of Matomaki, Radziwill, and myself for details. However, we view this as a technicality, and continue to believe that certain “repaired” versions of Elliott’s conjecture still hold. For instance, our counterexample does not apply when {g} is restricted to be real-valued rather than complex, and we believe that Elliott’s conjecture is valid in this setting. Returning to the complex-valued case, we still expect the asymptotic (1) provided that the condition (2) is replaced by the stronger condition

\displaystyle  \inf_{|t| \leq X} |\sum_{p \leq X} \hbox{Re} \frac{1 - g(p) \overline{\chi(p)} p^{-it}}{p}| \rightarrow +\infty

as {X \rightarrow +\infty} for all fixed Dirichlet characters {\chi}. In our paper we supported this claim by establishing a certain “averaged” version of this conjecture; see that paper for further details. (See also this recent paper of Frantzikinakis and Host which establishes a different averaged version of this conjecture.)

One can make a stronger “non-asymptotic” version of this corrected Elliott conjecture, in which the {X} parameter does not go to infinity, or equivalently in which the function {g} is permitted to depend on {X}:

Conjecture 1 (Non-asymptotic Elliott conjecture) Let {\varepsilon > 0}, let {A \geq 1} be sufficiently large depending on {\varepsilon}, and let {X} be sufficiently large depending on {A,\varepsilon}. Suppose that {g} is a completely multiplicative function with magnitude bounded by {1}, such that

\displaystyle  \inf_{|t| \leq AX} |\sum_{p \leq X} \hbox{Re} \frac{1 - g(p) \overline{\chi(p)} p^{-it}}{p}| \geq A

for all Dirichlet characters {\chi} of period at most {A}. Then one has

\displaystyle  |\sum_{n \leq X} g(n) \overline{g(n+h)}| \leq \varepsilon X

for all natural numbers {1 \leq h \leq 1/\varepsilon}.

The {\varepsilon}-dependent factor {A} in the constraint {|t| \leq AX} is necessary, as can be seen by considering the completely multiplicative function {g(n) := n^{2iX}} (for instance). Again, the results in my previous paper with Matomaki and Radziwill can be viewed as establishing an averaged version of this conjecture.

Meanwhile, we have the following conjecture that is the focus of the Polymath5 project:

Conjecture 2 (Erdös discrepancy conjecture) For any function {f: {\bf N} \rightarrow \{-1,+1\}}, the discrepancy

\displaystyle  \sup_{n,d \in {\bf N}} |\sum_{j=1}^n f(jd)|

is infinite.
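
For experimentation it is convenient to have a finite truncation of the discrepancy on hand; the following sketch (truncation and example mine) computes {\max |\sum_{j=1}^n f(jd)|} over all {n,d} with {nd \leq N}. Even a sign sequence that looks balanced, such as {f(n) = (-1)^n}, has discrepancy growing linearly along the homogeneous progression {d=2}:

```python
def discrepancy(f, N):
    """max over n, d with n*d <= N of |sum_{j=1}^n f(j*d)|, truncated to the window [1, N]."""
    best = 0
    for d in range(1, N + 1):
        s = 0
        for j in range(1, N // d + 1):
            s += f(j * d)
            best = max(best, abs(s))
    return best

# f(n) = (-1)^n alternates, yet every term f(2j) along d = 2 equals +1,
# so the truncated discrepancy grows linearly in the window size N.
f = lambda n: (-1) ** n
for N in [10, 100, 1000]:
    print(f"N = {N:>4}: discrepancy = {discrepancy(f, N)}")
```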

It is instructive to compute some near-counterexamples to Conjecture 2 that illustrate the difficulty of the Erdös discrepancy problem. The first near-counterexample is that of a non-principal Dirichlet character {f(n) = \chi(n)} that takes values in {\{-1,0,+1\}} rather than {\{-1,+1\}}. For this function, one has from the complete multiplicativity of {\chi} that

\displaystyle  |\sum_{j=1}^n f(jd)| = |\sum_{j=1}^n \chi(j) \chi(d)|

\displaystyle  \leq |\sum_{j=1}^n \chi(j)|.

If {q} denotes the period of {\chi}, then {\chi} has mean zero on every interval of length {q}, and thus

\displaystyle  |\sum_{j=1}^n f(jd)| \leq |\sum_{j=1}^n \chi(j)| \leq q.

Thus {\chi} has bounded discrepancy.

Of course, this is not a true counterexample to Conjecture 2 because {\chi} can take the value {0}. Let us now consider the following variant example, which is the simplest member of a family of examples studied by Borwein, Choi, and Coons. Let {\chi = \chi_3} be the non-principal Dirichlet character of period {3} (thus {\chi(n)} equals {+1} when {n=1 \hbox{ mod } 3}, {-1} when {n = 2 \hbox{ mod } 3}, and {0} when {n = 0 \hbox{ mod } 3}), and define the completely multiplicative function {f = \tilde \chi: {\bf N} \rightarrow \{-1,+1\}} by setting {\tilde \chi(p) := \chi(p)} when {p \neq 3} and {\tilde \chi(3) = +1}. This is about the simplest modification one can make to the previous near-counterexample to eliminate the zeroes. Now consider the sum

\displaystyle  \sum_{j=1}^n \tilde \chi(j)

with {n := 1 + 3 + 3^2 + \dots + 3^k} for some large {k}. Writing {j = 3^a m} with {m} coprime to {3} and {a} at most {k}, we can write this sum as

\displaystyle  \sum_{a=0}^k \sum_{1 \leq m \leq n/3^a} \tilde \chi(3^a m).

Now observe that {\tilde \chi(3^a m) = \tilde \chi(3)^a \tilde \chi(m) = \chi(m)}. The function {\chi} has mean zero on every interval of length three, and {\lfloor n/3^a\rfloor} is equal to {1} mod {3}, and thus

\displaystyle  \sum_{1 \leq m \leq n/3^a} \tilde \chi(3^a m) = 1

for every {a=0,\dots,k}, and thus

\displaystyle  \sum_{j=1}^n \tilde \chi(j) = k+1 \gg \log n.

Thus {\tilde \chi} also has unbounded discrepancy, but only barely so (it grows logarithmically in {n}). These examples suggest that the main “enemy” to proving Conjecture 2 comes from completely multiplicative functions {f} that somehow “pretend” to be like a Dirichlet character but do not vanish at the zeroes of that character. (Indeed, the special case of Conjecture 2 when {f} is completely multiplicative is already open, and appears to be an important subcase.)
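
One can verify the computation above numerically: evaluating the partial sums of {\tilde \chi} at {n = 1 + 3 + \dots + 3^k} recovers the value {k+1} exactly. A sketch (implementation mine, using the fact that {\tilde \chi(n) = \chi(m)} where {m} is {n} with all factors of {3} stripped out):

```python
def chi3(n):
    """The non-principal character mod 3: +1, -1, 0 according as n = 1, 2, 0 mod 3."""
    return (0, 1, -1)[n % 3]

def chi_tilde(n):
    """Completely multiplicative, with chi_tilde(3) = +1 and chi_tilde(p) = chi3(p) otherwise;
    equivalently, chi3 applied to n with all factors of 3 removed."""
    while n % 3 == 0:
        n //= 3
    return chi3(n)

for k in range(1, 8):
    n = (3 ** (k + 1) - 1) // 2                 # n = 1 + 3 + ... + 3^k
    s = sum(chi_tilde(j) for j in range(1, n + 1))
    print(f"k = {k}: n = {n:>5}, partial sum = {s} (predicted: k + 1 = {k + 1})")
```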

All of these conjectures remain open. However, I would like to record in this blog post the following striking connection, illustrating the power of the Elliott conjecture (particularly in its nonasymptotic formulation):

Theorem 3 (Elliott conjecture implies unbounded discrepancy) Conjecture 1 implies Conjecture 2.

The argument relies heavily on two observations that were previously made in connection with the Polymath5 project. The first is a Fourier-analytic reduction that replaces the Erdos Discrepancy Problem with an averaged version for completely multiplicative functions {g}. An application of Cauchy-Schwarz then shows that any counterexample to that version will violate the conclusion of Conjecture 1, so if one assumes that conjecture then {g} must pretend to be like a function of the form {n \mapsto \chi(n) n^{it}}. One then uses (a generalisation of) a second argument from Polymath5 to rule out this case, basically by reducing matters to a more complicated version of the Borwein-Choi-Coons analysis. Details are provided below the fold.

There is some hope that the Chowla and Elliott conjectures can be attacked, as the parity barrier which is so impervious to attack for the twin prime conjecture seems to be more permeable in this setting. (For instance, in my previous post I raised a possible approach, based on establishing expander properties of a certain random graph, which seems to get around the parity problem, in principle at least.)

(Update, Sep 25: fixed some treatment of error terms, following a suggestion of Andrew Granville.)


Kaisa Matomäki, Maksym Radziwiłł, and I have just uploaded to the arXiv our paper “Sign patterns of the Liouville and Möbius functions“. This paper is somewhat similar to our previous paper in that it uses the recent breakthrough of Matomäki and Radziwiłł on mean values of multiplicative functions to obtain partial results towards the Chowla conjecture. This conjecture can be phrased, roughly speaking, as follows: if {k} is a fixed natural number and {n} is selected at random from a large interval {[1,x]}, then the sign pattern {(\lambda(n), \lambda(n+1),\dots,\lambda(n+k-1)) \in \{-1,+1\}^k} becomes asymptotically equidistributed in {\{-1,+1\}^k} in the limit {x \rightarrow \infty}. This remains open for {k \geq 2}. In fact even the significantly weaker statement that each of the sign patterns in {\{-1,+1\}^k} is attained infinitely often is open for {k \geq 4}. However, in 1986, Hildebrand showed that for {k \leq 3} all sign patterns are indeed attained infinitely often. Our first result is a strengthening of Hildebrand’s, moving a little bit closer to Chowla’s conjecture:

Theorem 1 Let {k \leq 3}. Then each of the sign patterns in {\{-1,+1\}^k} is attained by the Liouville function for a set of natural numbers {n} of positive lower density.

Thus for instance one has {\lambda(n)=\lambda(n+1)=\lambda(n+2)} for a set of {n} of positive lower density. The {k \leq 2} case of this theorem already appears in the original paper of Matomäki and Radziwiłł (and the significantly simpler case of the sign patterns {++} and {--} was treated previously by Harman, Pintz, and Wolke).

The basic strategy in all of these arguments is to assume for sake of contradiction that a certain sign pattern occurs extremely rarely, and then exploit the complete multiplicativity of {\lambda} (which implies in particular that {\lambda(2n) = -\lambda(n)}, {\lambda(3n) = -\lambda(n)}, and {\lambda(5n) = -\lambda(n)} for all {n}) together with some combinatorial arguments (vaguely analogous to solving a Sudoku puzzle!) to establish more complex sign patterns for the Liouville function, that are either inconsistent with each other, or with results such as the Matomäki-Radziwiłł result. To illustrate this, let us give some {k=2} examples, arguing a little informally to emphasise the combinatorial aspects of the argument. First suppose that the sign pattern {(\lambda(n),\lambda(n+1)) = (+1,+1)} almost never occurs. The prime number theorem tells us that {\lambda(n)} and {\lambda(n+1)} are each equal to {+1} about half of the time, which by inclusion-exclusion implies that the sign pattern {(\lambda(n),\lambda(n+1))=(-1,-1)} almost never occurs. In other words, we have {\lambda(n+1) = -\lambda(n)} for almost all {n}. But from the multiplicativity property {\lambda(2n)=-\lambda(n)} this implies that one should have

\displaystyle \lambda(2n+2) = -\lambda(2n)

\displaystyle \lambda(2n+1) = -\lambda(2n)

and

\displaystyle \lambda(2n+2) = -\lambda(2n+1)

for almost all {n}. But the above three statements are contradictory, and the claim follows.

Similarly, if we assume that the sign pattern {(\lambda(n),\lambda(n+1)) = (+1,-1)} almost never occurs, then a similar argument to the above shows that for any fixed {h}, one has {\lambda(n)=\lambda(n+1)=\dots=\lambda(n+h)} for almost all {n}. But this means that the mean {\frac{1}{h} \sum_{j=1}^h \lambda(n+j)} is abnormally large for most {n}, which (for {h} large enough) contradicts the results of Matomäki and Radziwiłł. Here we see that the “enemy” to defeat is the scenario in which {\lambda} only changes sign very rarely, in which case one rarely sees the pattern {(+1,-1)}.
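
For comparison, Chowla's conjecture predicts that each of the eight sign patterns of length three should occur with asymptotic density {1/8}, whereas Theorem 1 only guarantees positive lower density. Numerically the densities do appear to approach {1/8}; here is a sketch (sieve and cutoff mine):

```python
from collections import Counter

def liouville_list(limit):
    """lam[n] = lambda(n) = (-1)^Omega(n), computed with a linear sieve."""
    lam = [0] * (limit + 1)
    lam[1] = 1
    primes = []
    for n in range(2, limit + 1):
        if lam[n] == 0:                  # n is prime
            lam[n] = -1
            primes.append(n)
        for p in primes:
            if p * n > limit:
                break
            lam[p * n] = -lam[n]         # complete multiplicativity: lambda(pn) = -lambda(n)
            if n % p == 0:
                break
    return lam

x = 10**5
lam = liouville_list(x + 2)
counts = Counter((lam[n], lam[n + 1], lam[n + 2]) for n in range(1, x))
for pattern in sorted(counts):
    print(pattern, f"frequency = {counts[pattern] / (x - 1):.4f}")
```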

It turns out that similar (but more combinatorially intricate) arguments work for sign patterns of length three (but are unlikely to work for most sign patterns of length four or greater). We give here one fragment of such an argument (due to Hildebrand) which hopefully conveys the Sudoku-type flavour of the combinatorics. Suppose for instance that the sign pattern {(\lambda(n),\lambda(n+1),\lambda(n+2)) = (+1,+1,+1)} almost never occurs. Now suppose {n} is a typical number with {\lambda(15n-1)=\lambda(15n+1)=+1}. Since we almost never have the sign pattern {(+1,+1,+1)}, we must (almost always) then have {\lambda(15n) = -1}. By multiplicativity this implies that

\displaystyle (\lambda(60n-4), \lambda(60n), \lambda(60n+4)) = (+1,-1,+1).

We claim that this (almost always) forces {\lambda(60n+5)=-1}. For if {\lambda(60n+5)=+1}, then by the lack of the sign pattern {(+1,+1,+1)}, this (almost always) forces {\lambda(60n+3)=\lambda(60n+6)=-1}, which by multiplicativity forces {\lambda(20n+1)=\lambda(20n+2)=+1}, which by lack of {(+1,+1,+1)} (almost always) forces {\lambda(20n)=-1}, which by multiplicativity contradicts {\lambda(60n)=-1}. Thus we have {\lambda(60n+5)=-1}; a similar argument gives {\lambda(60n-5)=-1} almost always, which by multiplicativity gives {\lambda(12n-1)=\lambda(12n)=\lambda(12n+1)=+1}, a contradiction. Thus we almost never have {\lambda(15n-1)=\lambda(15n+1)=+1}, which by the inclusion-exclusion argument mentioned previously shows that {\lambda(15n+1) = - \lambda(15n-1)} for almost all {n}.

One can continue these Sudoku-type arguments and conclude eventually that {\lambda(3n-1)=-\lambda(3n+1)=\lambda(3n+2)} for almost all {n}. To put it another way, if {\chi_3} denotes the non-principal Dirichlet character of modulus {3}, then {\lambda \chi_3} is almost always constant away from the multiples of {3}. (Conversely, if {\lambda \chi_3} changed sign very rarely outside of the multiples of three, then the sign pattern {(+1,+1,+1)} would never occur.) Fortunately, the main result of Matomäki and Radziwiłł shows that this scenario cannot occur, which establishes that the sign pattern {(+1,+1,+1)} must occur rather frequently. The other sign patterns are handled by variants of these arguments.

Excluding a sign pattern of length three leads to useful implications like “if {\lambda(n-1)=\lambda(n)=+1}, then {\lambda(n+1)=-1}” which turn out to be just barely strong enough to quite rigidly constrain the Liouville function using Sudoku-like arguments. In contrast, excluding a sign pattern of length four only gives rise to implications like “if {\lambda(n-2)=\lambda(n-1)=\lambda(n)=+1}, then {\lambda(n+1)=-1}“, and these seem to be much weaker for this purpose (the hypothesis in these implications just isn’t satisfied nearly often enough). So a different idea seems to be needed if one wishes to extend the above theorem to larger values of {k}.

Our second theorem gives an analogous result for the Möbius function {\mu} (which takes values in {\{-1,0,+1\}} rather than {\{-1,1\}}), but the analysis turns out to be remarkably difficult and we are only able to get up to {k=2}:

Theorem 2 Let {k \leq 2}. Then each of the sign patterns in {\{-1,0,+1\}^k} is attained by the Möbius function for a set {n} of positive lower density.

It turns out that the prime number theorem and elementary sieve theory can be used to handle the {k=1} case and all the {k=2} cases that involve at least one {0}, leaving only the four sign patterns {(\pm 1, \pm 1)} to handle. It is here that the zeroes of the Möbius function cause a significant new obstacle. Suppose for instance that the sign pattern {(+1, -1)} almost never occurs for the Möbius function. The same arguments that were used in the Liouville case then show that {\mu(n)} will be almost always equal to {\mu(n+1)}, provided that {n,n+1} are both square-free. One can try to chain this together as before to create a long string {\mu(n)=\dots=\mu(n+h) \in \{-1,+1\}} where the Möbius function is constant, but this cannot work for any {h} larger than three, because the Möbius function vanishes at every multiple of four.

The constraints we assume on the Möbius function can be depicted using a graph on the squarefree natural numbers, in which any two adjacent squarefree natural numbers are connected by an edge. The main difficulty is then that this graph is highly disconnected due to the multiples of four not being squarefree.

To get around this, we need to enlarge the graph. Note from multiplicativity that if {\mu(n)} is almost always equal to {\mu(n+1)} when {n,n+1} are squarefree, then {\mu(n)} is almost always equal to {\mu(n+p)} when {n,n+p} are squarefree and {n} is divisible by {p}. We can then form a graph on the squarefree natural numbers by connecting {n} to {n+p} whenever {n,n+p} are squarefree and {n} is divisible by {p}. If this graph is “locally connected” in some sense, then {\mu} will be constant on almost all of the squarefree numbers in a large interval, which turns out to be incompatible with the results of Matomäki and Radziwiłł. Because of this, matters are reduced to establishing the connectedness of a certain graph. More precisely, it turns out to be sufficient to establish the following claim:

Theorem 3 For each prime {p}, let {a_p \hbox{ mod } p^2} be a residue class chosen uniformly at random. Let {G} be the random graph whose vertices {V} consist of those integers {n} not equal to {a_p \hbox{ mod } p^2} for any {p}, and whose edges consist of pairs {n,n+p} in {V} with {n = a_p \hbox{ mod } p}. Then with probability {1}, the graph {G} is connected.

We were able to show the connectedness of this graph, though it turned out to be remarkably tricky to do so. Roughly speaking (and suppressing a number of technicalities), the main steps in the argument were as follows.

  • (Early stage) Pick a large number {X} (in our paper we take {X} to be odd, but I’ll ignore this technicality here). Using a moment method to explore neighbourhoods of a single point in {V}, one can show that a vertex {v} in {V} is almost always connected to at least {\log^{10} X} numbers in {[v,v+X^{1/100}]}, using relatively short paths of short diameter. (This is the most computationally intensive portion of the argument.)
  • (Middle stage) Let {X'} be a typical number in {[X/40,X/20]}, and let {R} be a scale somewhere between {X^{1/40}} and {X'}. By using paths {n, n+p_1, n+p_1-p_2, n+p_1-p_2+p_3} involving three primes, and using a variant of Vinogradov’s theorem and some routine second moment computations, one can show that with quite high probability, any “good” vertex in {[v+X'-R, v+X'-0.99R]} is connected to a “good” vertex in {[v+X'-0.01R, v+X'-0.0099 R]} by paths of length three, where the definition of “good” is somewhat technical but encompasses almost all of the vertices in {V}.
  • (Late stage) Combining the two previous results together, we can show that most vertices {v} will be connected to a vertex in {[v+X'-X^{1/40}, v+X']} for any {X'} in {[X/40,X/20]}. In particular, {v} will be connected to a set of {\gg X^{9/10}} vertices in {[v,v+X/20]}. By tracking everything carefully, one can control the length and diameter of the paths used to connect {v} to this set, and one can also control the parity of the elements in this set.
  • (Final stage) Finally, suppose that we have two vertices {v, w} at a distance {X} apart. By the previous item, one can connect {v} to a large set {A} of vertices in {[v,v+X/20]}, and one can similarly connect {w} to a large set {B} of vertices in {[w,w+X/20]}. Now, by using a Vinogradov-type theorem and second moment calculations again (and ensuring that the elements of {A} and {B} have opposite parity), one can connect many of the vertices in {A} to many of the vertices in {B} by paths of length three, which then connects {v} to {w}, and gives the claim.
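
One can at least simulate a finite toy model of the graph {G}: restrict to a finite window of integers and to finitely many primes, and count connected components. In such a truncation one expects a single dominant component together with some stragglers caused by the boundary and the missing large primes, and that is what the sketch below (all parameters mine) typically produces; needless to say, this illustrates rather than verifies the theorem, which concerns the infinite graph over all primes.

```python
import random
from collections import Counter

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(range(i * i, n + 1, i))
    return [p for p, b in enumerate(sieve) if b]

random.seed(0)
N = 20000                                        # finite window {1, ..., N} only
P = primes_up_to(60)                             # toy model: small primes only
a = {p: random.randrange(p * p) for p in P}      # random residue class a_p mod p^2

V = [n for n in range(1, N + 1) if all(n % (p * p) != a[p] for p in P)]
index = {n: i for i, n in enumerate(V)}

parent = list(range(len(V)))                     # union-find over the vertices
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]            # path halving
        i = parent[i]
    return i

for n in V:
    for p in P:
        if n % p == a[p] % p and (n + p) in index:   # edge n -- n+p when n = a_p mod p
            parent[find(index[n])] = find(index[n + p])

roots = Counter(find(i) for i in range(len(V)))
print(f"{len(V)} vertices, {len(roots)} components, largest = {max(roots.values())}")
```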

It seems of interest to understand random graphs like {G} further. In particular, the graph {G'} on the integers formed by connecting {n} to {n+p} for all {n} in a randomly selected residue class mod {p} for each prime {p} is particularly interesting (it is to the Liouville function as {G} is to the Möbius function); if one could show some “local expander” properties of this graph {G'}, then one would have a chance of modifying the above methods to attack the first unsolved case of the Chowla conjecture, namely that {\lambda(n)\lambda(n+1)} has asymptotic density zero (perhaps working with logarithmic density instead of natural density to avoid some technicalities).
