
I have uploaded to the arXiv my paper “Exploring the toolkit of Jean Bourgain”. This is one of a collection of papers to be published in the Bulletin of the American Mathematical Society describing aspects of the work of Jean Bourgain; other contributors to this collection include Keith Ball, Ciprian Demeter, and Carlos Kenig. Because the other contributors will be covering specific areas of Jean’s work in some detail, I decided to take a non-overlapping tack, and focus instead on some basic tools of Jean that he frequently used across many of the fields he contributed to. Jean had a surprising number of these “basic tools” that he wielded with great dexterity, and in this paper I focus on just a few of them:

  • Reducing qualitative analysis results (e.g., convergence theorems or dimension bounds) to quantitative analysis estimates (e.g., variational inequalities or maximal function estimates).
  • Using dyadic pigeonholing to locate good scales to work in or to apply truncations.
  • Using random translations to amplify small sets (low density) into large sets (positive density); a toy numerical illustration of this trick is sketched just after this list.
  • Combining large deviation inequalities with metric entropy bounds to control suprema of various random processes.
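
As promised above, here is a toy illustration of the random translation trick (the discrete model and parameters are chosen purely for illustration and are not from the paper): if {A} is a subset of {{\bf Z}/N{\bf Z}} of density {\delta}, then a given point of the group is missed by a single random translate of {A} with probability {1-\delta}, and hence by {k} independent random translates with probability {(1-\delta)^k}; taking {k \approx \delta^{-1} \log(1/\varepsilon)} thus upgrades a set of small density {\delta} to a union of translates of density about {1-\varepsilon}. A minimal numerical sketch:

```python
import math
import random

# Toy model of "random translations amplify small sets": a set A of density delta
# in Z/NZ, unioned over ~ delta^{-1} * log(1/eps) independent random translates,
# should cover all but roughly an eps-fraction of the group.
random.seed(0)
N, delta, eps = 100_000, 0.01, 0.05
A = random.sample(range(N), int(delta * N))        # a sparse set of density delta
k = math.ceil(math.log(1 / eps) / delta)           # number of random translates

covered = set()
for _ in range(k):
    t = random.randrange(N)
    covered.update((a + t) % N for a in A)

print(f"translates used: {k}")
print(f"covered fraction: {len(covered) / N:.4f} "
      f"(prediction 1-(1-delta)^k = {1 - (1 - delta) ** k:.4f})")
```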

Each of these techniques is individually not too difficult to explain, and each was certainly employed on occasion by various mathematicians prior to Bourgain’s work; but Jean had internalized them to the point where he would instinctively use them as soon as they became relevant to the problem at hand. I illustrate this at the end of the paper with an exposition of one particular result of Jean, on the Erdős similarity problem, in which his main result (that any sum {S = S_1+S_2+S_3} of three infinite sets of reals has the property that there exists a positive measure set {E} that does not contain any homothetic copy {x+tS} of {S}) is basically proven by a sequential application of these tools (except for dyadic pigeonholing, which turns out not to be needed here).

I had initially intended to also cover some other basic tools in Jean’s toolkit, such as the uncertainty principle and the use of probabilistic decoupling, but was having trouble keeping the paper coherent with such a broad focus (certainly I could not identify a single paper of Jean’s that employed all of these tools at once). I hope though that the examples given in the paper give some reasonable impression of Jean’s research style.

Starting on Oct 2, I will be teaching Math 246A, the first course in the three-quarter graduate complex analysis sequence at the math department here at UCLA.  This first course covers much of the same ground as an honours undergraduate complex analysis course, in particular focusing on the basic properties of holomorphic functions such as the Cauchy and residue theorems, the classification of singularities, and the maximum principle, but there will be more of an emphasis on rigour, generalisation and abstraction, and connections with other parts of mathematics.  The main text I will be using for this course is Stein-Shakarchi (with Ahlfors as a secondary text), but I will also be using the blog lecture notes I wrote the last time I taught this course in 2016. At this time I do not expect to significantly deviate from my past lecture notes, though I do not know at present how different the pace will be this quarter when the course is taught remotely. As with my 247B course last spring, the lectures will be open to the public, though other coursework components will be restricted to enrolled students.

Vaughan Jones, who made fundamental contributions in operator algebras and knot theory (in particular developing a surprising connection between the two), died this week, aged 67.

Vaughan and I grew up in extremely culturally similar countries, worked in adjacent areas of mathematics, shared (as of this week) a coauthor in Dima Shlyakhtenko, started out our careers with the same postdoc position (as UCLA Hedrick Assistant Professors, sixteen years apart) and even ended up in sister campuses of the University of California, but surprisingly we only interacted occasionally, via chance meetings at conferences or emails on some committee business. I found him extremely easy to get along with when we did meet, though, perhaps because of our similar cultural upbringing.

I have not had much occasion to directly use much of Vaughan’s mathematical contributions, but I did very much enjoy reading his influential 1999 preprint on planar algebras (which, for some odd reason has never been formally published). Traditional algebra notation is one-dimensional in nature, with algebraic expressions being described by strings of mathematical symbols; a linear operator T, for instance, might appear in the middle of such a string, taking in an input x on the right and returning an output Tx on its left that might then be fed into some other operation. There are a few mathematical notations which are two-dimensional, such as the commutative diagrams in homological algebra, the tree expansions of solutions to nonlinear PDE (particularly stochastic nonlinear PDE), or the Feynman diagrams and Penrose graphical notations from physics, but these are the exception rather than the rule, and the notation is often still concentrated on a one-dimensional complex of vertices and edges (or arrows) in the plane. Planar algebras, by contrast, fully exploit the topological nature of the plane; a planar “operator” (or “operad”) inhabits some punctured region of the plane, such as an annulus, with “inputs” entering from the inner boundaries of the region and “outputs” emerging from the outer boundary. These algebras arose for Vaughan in both operator theory and knot theory, and have since been used in some other areas of mathematics such as representation theory and homology. I myself have not found a direct use for this type of algebra in my own work, but nevertheless I found the mere possibility of higher dimensional notation being the natural choice for a given mathematical problem to be conceptually liberating.

Abdul Basit, Artem Chernikov, Sergei Starchenko, Chieu-Minh Tran and I have uploaded to the arXiv our paper Zarankiewicz’s problem for semilinear hypergraphs. This paper is in the spirit of a number of results in extremal graph theory in which the bounds for various graph-theoretic problems or results can be greatly improved if one makes some additional hypotheses regarding the structure of the graph, for instance by requiring that the graph be “definable” with respect to some theory with good model-theoretic properties.

A basic motivating example is the question of counting the number of incidences between points and lines (or between points and other geometric objects). Suppose one has {n} points and {n} lines in a space. How many incidences can there be between these points and lines? The utterly trivial bound is {n^2}, but by using the basic fact that two points determine a line (or two lines intersect in at most one point), a simple application of Cauchy-Schwarz improves this bound to {n^{3/2}}. In graph theoretic terms, the point is that the bipartite incidence graph between points and lines does not contain a copy of {K_{2,2}} (there do not exist two points and two lines that are all incident to each other). Without any further hypotheses, this bound is basically sharp: consider for instance the collection of {p^2} points and {p^2+p} lines in the finite plane {{\bf F}_p^2}, which has {p^3+p^2} incidences (one can make the situation more symmetric by working with a projective plane rather than an affine plane). If however one considers lines in the real plane {{\bf R}^2}, the famous Szemerédi-Trotter theorem improves the incidence bound further from {n^{3/2}} to {O(n^{4/3})}. Thus the incidence graph between real points and lines contains more structure than merely the absence of {K_{2,2}}.
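
For the reader's convenience, here is the standard double counting argument behind the {n^{3/2}} bound mentioned above (a classical computation, not specific to the paper). Let {I} denote the number of incidences and let {d_p} denote the number of lines through a point {p}, so that {I = \sum_p d_p}. Since any two distinct lines meet in at most one point,

\displaystyle  \sum_p \binom{d_p}{2} \leq \binom{n}{2},

so that {\sum_p d_p^2 \leq I + n^2}, while from the Cauchy-Schwarz inequality

\displaystyle  \sum_p d_p^2 \geq \frac{1}{n} \Big( \sum_p d_p \Big)^2 = \frac{I^2}{n}.

Combining the two bounds gives {I^2/n \leq I + n^2}, and hence {I \leq n^{3/2} + n} after solving this quadratic inequality.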

More generally, the problem of bounding the number of edges of bipartite graphs (or multipartite hypergraphs) not containing a copy of some complete bipartite subgraph {K_{k,k}} (or {K_{k,\dots,k}} in the hypergraph case) is known as Zarankiewicz’s problem. We have results for all {k} and all orders of hypergraph, but for the sake of this post I will focus on the bipartite {k=2} case.

In our paper we improve the {n^{3/2}} bound to a near-linear bound in the case that the incidence graph is “semilinear”. A model case occurs when one considers incidences between points and axis-parallel rectangles in the plane. Now the {K_{2,2}} condition is not automatic (it is of course possible for two distinct points to both lie in two distinct rectangles), so we impose this condition by fiat:

Theorem 1 Suppose one has {n} points and {n} axis-parallel rectangles in the plane, whose incidence graph contains no {K_{2,2}}’s, for some large {n}.
  • (i) The total number of incidences is {O(n \log^4 n)}.
  • (ii) If all the rectangles are dyadic, the bound can be improved to {O( n \frac{\log n}{\log\log n} )}.
  • (iii) The bound in (ii) is best possible (up to the choice of implied constant).

We don’t know whether the bound in (i) is similarly tight for non-dyadic boxes; the usual tricks for reducing the non-dyadic case to the dyadic case strangely fail to apply here. One can generalise to higher dimensions, replacing rectangles by polytopes with faces in some fixed finite set of orientations, at the cost of adding several more logarithmic factors; also, one can replace the reals by other ordered division rings, and replace polytopes by other sets of bounded “semilinear descriptive complexity”, e.g., unions of boundedly many polytopes, or sets cut out by boundedly many functions that enjoy coordinatewise monotonicity properties. For certain specific graphs we can remove the logarithmic factors entirely. We refer to the preprint for precise details.

The proof techniques are combinatorial. The proof of (i) relies primarily on the order structure of {{\bf R}} to implement a “divide and conquer” strategy in which one can efficiently control incidences between {n} points and rectangles by incidences between approximately {n/2} points and rectangles. For (ii) there is additional order-theoretic structure one can work with: first there is an easy pruning device to reduce to the case when no rectangle is completely contained inside another, and then one can impose the “tile partial order” in which one dyadic rectangle {I \times J} is less than another {I' \times J'} if {I \subset I'} and {J' \subset J}. The point is that this order is “locally linear” in the sense that for any two dyadic rectangles {R_-, R_+}, the set {[R_-,R_+] := \{ R: R_- \leq R \leq R_+\}} is linearly ordered (among rectangles of the pruned family, none of which contains another), and this can be exploited by elementary double counting arguments to obtain a bound which eventually becomes {O( n \frac{\log n}{\log\log n})} after optimising certain parameters in the argument. The proof also suggests how to construct the counterexample in (iii), which is achieved by an elementary iterative construction.
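
To make the “locally linear” structure concrete, here is a small brute force sanity check (purely illustrative; the encoding of dyadic intervals and all names are ad hoc and not from the paper). It enumerates dyadic rectangles in the unit square up to a fixed scale and verifies that, once pairs of rectangles containing one another are discarded as in the pruning step, any two elements of an order interval {[R_-,R_+]} are comparable in the tile partial order.

```python
import itertools
import random

# A dyadic interval [k*2^(-j), (k+1)*2^(-j)) is encoded as the pair (j, k).

def contains(outer, inner):
    """Does the dyadic interval `outer` contain the dyadic interval `inner`?"""
    jo, ko = outer
    ji, ki = inner
    return ji >= jo and (ki >> (ji - jo)) == ko

def tile_leq(R1, R2):
    """Tile partial order: (I1 x J1) <= (I2 x J2) iff I1 inside I2 and J2 inside J1."""
    (I1, J1), (I2, J2) = R1, R2
    return contains(I2, I1) and contains(J1, J2)

def rect_contains(R1, R2):
    """Ordinary containment of rectangles: is R2 inside R1?"""
    (I1, J1), (I2, J2) = R1, R2
    return contains(I1, I2) and contains(J1, J2)

# All dyadic rectangles in [0,1)^2 down to scale 2^(-3).
intervals = [(j, k) for j in range(4) for k in range(2 ** j)]
rects = [(I, J) for I in intervals for J in intervals]

random.seed(0)
for _ in range(2000):
    R_minus, R_plus = random.sample(rects, 2)
    if not tile_leq(R_minus, R_plus):
        continue
    interval = [R for R in rects if tile_leq(R_minus, R) and tile_leq(R, R_plus)]
    for R1, R2 in itertools.combinations(interval, 2):
        if rect_contains(R1, R2) or rect_contains(R2, R1):
            continue  # such a pair is excluded by the pruning step
        assert tile_leq(R1, R2) or tile_leq(R2, R1), "local linearity failed"
print("local linearity verified on randomly chosen order intervals")
```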

Dimitri Shlyakhtenko and I have uploaded to the arXiv our paper Fractional free convolution powers. For me, this project (which we started during the 2018 IPAM program on quantitative linear algebra) was motivated by a desire to understand the behavior of the minor process applied to a large random Hermitian {N \times N} matrix {A_N}, in which one takes the successive upper left {n \times n} minors {A_n} of {A_N} and computes their eigenvalues {\lambda_1(A_n) \leq \dots \leq \lambda_n(A_n)} in non-decreasing order. These eigenvalues are related to each other by the Cauchy interlacing inequalities

\displaystyle  \lambda_i(A_{n+1}) \leq \lambda_i(A_n) \leq \lambda_{i+1}(A_{n+1})

for {1 \leq i \leq n < N}, and are often arranged in a triangular array known as a Gelfand-Tsetlin pattern, as discussed in these previous blog posts.
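
These interlacing inequalities are easy to verify numerically; here is a quick sanity check on the minors of a small random Hermitian matrix (an illustrative sketch, with an arbitrary choice of size and ensemble):

```python
import numpy as np

# Verify the Cauchy interlacing inequalities
#     lambda_i(A_{n+1}) <= lambda_i(A_n) <= lambda_{i+1}(A_{n+1})
# for all minors of a random Hermitian matrix.
rng = np.random.default_rng(0)
N = 8
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (H + H.conj().T) / 2                        # a random Hermitian N x N matrix

spec = {n: np.linalg.eigvalsh(A[:n, :n]) for n in range(1, N + 1)}  # ascending order
tol = 1e-10
for n in range(1, N):
    for i in range(n):                          # zero-based i plays the role of lambda_{i+1}
        assert spec[n + 1][i] <= spec[n][i] + tol
        assert spec[n][i] <= spec[n + 1][i + 1] + tol
print("Cauchy interlacing verified for all minors")
```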

When {N} is large and the matrix {A_N} is a random matrix with empirical spectral distribution converging to some compactly supported probability measure {\mu} on the real line, then under suitable hypotheses (e.g., unitary conjugation invariance of the random matrix ensemble {A_N}), a “concentration of measure” effect occurs, with the spectral distribution of the minors {A_n} for {n = \lfloor N/k\rfloor} for any fixed {k \geq 1} converging to a specific measure {k^{-1}_* \mu^{\boxplus k}} that depends only on {\mu} and {k}. The reason for this notation is that there is a surprising description of this measure {k^{-1}_* \mu^{\boxplus k}} when {k} is a natural number, namely it is the free convolution {\mu^{\boxplus k}} of {k} copies of {\mu}, pushed forward by the dilation map {x \mapsto k^{-1} x}. For instance, if {\mu} is the Wigner semicircular measure {d\mu_{sc} = \frac{1}{2\pi} (4-x^2)^{1/2}_+\ dx}, then {k^{-1}_* \mu_{sc}^{\boxplus k} = k^{-1/2}_* \mu_{sc}}. At the random matrix level, this reflects the fact that the minor of a GUE matrix is again a GUE matrix (up to a renormalizing constant).
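
As a quick numerical illustration of this (not taken from the paper; the choices of {N} and {k} below are arbitrary), one can generate a GUE matrix normalised to have limiting spectral distribution {\mu_{sc}}, take its upper left {n \times n} minor with {n = N/2}, and observe that the minor’s spectrum concentrates on the predicted interval {[-2/\sqrt{k}, 2/\sqrt{k}]} with second moment about {1/k}:

```python
import numpy as np

# The minor A_n of a GUE matrix A_N (with spectral distribution ~ mu_sc on [-2,2])
# should have spectral distribution ~ k^{-1}_* mu_sc^{boxplus k} = k^{-1/2}_* mu_sc,
# i.e. a semicircle supported on [-2/sqrt(k), 2/sqrt(k)], here with k = 2.
rng = np.random.default_rng(0)
N, k = 1000, 2
n = N // k

H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A_N = (H + H.conj().T) / (2 * np.sqrt(N))       # GUE, normalised to spectral radius ~ 2

eigs_full = np.linalg.eigvalsh(A_N)
eigs_minor = np.linalg.eigvalsh(A_N[:n, :n])    # the minor A_n, same normalisation

print("spectral edge of A_N  :", round(eigs_full.max(), 3), "(predicted ~ 2)")
print("spectral edge of A_n  :", round(eigs_minor.max(), 3),
      f"(predicted ~ {2 / np.sqrt(k):.3f})")
print("second moment of A_n  :", round(float(np.mean(eigs_minor ** 2)), 3),
      f"(predicted {1 / k:.3f})")
```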

As first observed by Bercovici and Voiculescu and developed further by Nica and Speicher, among other authors, the notion of a free convolution power {\mu^{\boxplus k}} of {\mu} can be extended to non-integer {k \geq 1}, thus giving the notion of a “fractional free convolution power”. This notion can be defined in several different ways. One of them proceeds via the Cauchy transform

\displaystyle  G_\mu(z) := \int_{\bf R} \frac{d\mu(x)}{z-x}

of the measure {\mu}, and {\mu^{\boxplus k}} can be defined by solving the Burgers-type equation

\displaystyle  (k \partial_k + z \partial_z) G_{\mu^{\boxplus k}}(z) = \frac{\partial_z G_{\mu^{\boxplus k}}(z)}{G_{\mu^{\boxplus k}}(z)} \ \ \ \ \ (1)

with initial condition {G_{\mu^{\boxplus 1}} = G_\mu} (see this previous blog post for a derivation). This equation can be solved explicitly using the {R}-transform {R_\mu} of {\mu}, defined by solving the equation

\displaystyle  \frac{1}{G_\mu(z)} + R_\mu(G_\mu(z)) = z

for sufficiently large {z}, in which case one can show that

\displaystyle  R_{\mu^{\boxplus k}}(z) = k R_\mu(z).

(In the case of the semicircular measure {\mu_{sc}}, the {R}-transform is simply the identity: {R_{\mu_{sc}}(z)=z}.)
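
Indeed, the Cauchy transform of the semicircular measure is {G_{\mu_{sc}}(z) = \frac{z - \sqrt{z^2-4}}{2}} (taking the branch of the square root that behaves like {z} at infinity), which satisfies the quadratic relation

\displaystyle  \frac{1}{G_{\mu_{sc}}(z)} + G_{\mu_{sc}}(z) = z;

comparing this with the defining equation of the {R}-transform forces {R_{\mu_{sc}}(z) = z}. Consequently {R_{\mu_{sc}^{\boxplus k}}(z) = kz}, which is the {R}-transform of the semicircular measure of variance {k}, so that {\mu_{sc}^{\boxplus k} = k^{1/2}_* \mu_{sc}} and one recovers the identity {k^{-1}_* \mu_{sc}^{\boxplus k} = k^{-1/2}_* \mu_{sc}} mentioned earlier. (This is a standard verification, recorded here only for the reader's convenience.)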

Nica and Speicher also gave a free probability interpretation of the fractional free convolution power: if {A} is a noncommutative random variable in a noncommutative probability space {({\mathcal A},\tau)} with distribution {\mu}, and {p} is a real projection operator free of {A} with trace {1/k}, then the “minor” {[pAp]} of {A} (viewed as an element of a new noncommutative probability space {({\mathcal A}_p, \tau_p)} whose elements are minors {[pXp]}, {X \in {\mathcal A}} with trace {\tau_p([pXp]) := k \tau(pXp)}) has the law of {k^{-1}_* \mu^{\boxplus k}} (we give a self-contained proof of this in an appendix to our paper). This suggests that the minor process (or fractional free convolution) can be studied within the framework of free probability theory.
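
As a concrete (and purely illustrative) random matrix check of this interpretation, not taken from the paper: take {\mu = \frac{1}{2}(\delta_{-1}+\delta_{1})}, realised as a randomly rotated diagonal matrix, and compress by a coordinate projection of trace {1/2}. A short {R}-transform computation shows that {2^{-1}_* \mu^{\boxplus 2}} is the arcsine distribution on {(-1,1)}, whose even moments are {\binom{2m}{m}/4^m}, and the empirical moments of the compressed matrix match this prediction:

```python
import numpy as np

# Free compression illustration: mu = (delta_{-1} + delta_{+1})/2, compressed by a
# coordinate projection p of trace 1/k with k = 2.  The predicted law of [pAp] is
# k^{-1}_* mu^{boxplus 2}, the arcsine distribution on (-1,1), with even moments
# binom(2m, m) / 4^m, i.e. 1/2 and 3/8 for m = 1, 2.
rng = np.random.default_rng(1)
N, k = 2000, 2
n = N // k

D = np.diag(np.repeat([1.0, -1.0], N // 2))        # spectrum mu
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Q, R = np.linalg.qr(Z)
U = Q * (np.diag(R) / np.abs(np.diag(R)))          # Haar-random unitary (QR with phase fix)
A = U @ D @ U.conj().T                             # unitarily invariant matrix with spectrum mu

eigs = np.linalg.eigvalsh(A[:n, :n])               # spectrum of the compression pAp
print("empirical 2nd, 4th moments:",
      round(float(np.mean(eigs ** 2)), 3), round(float(np.mean(eigs ** 4)), 3))
print("arcsine predictions       :", 0.5, 0.375)
```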

One of the known facts about integer free convolution powers {\mu^{\boxplus k}} is monotonicity of the free entropy

\displaystyle  \chi(\mu) = \int_{\bf R} \int_{\bf R} \log|s-t|\ d\mu(s) d\mu(t) + \frac{3}{4} + \frac{1}{2} \log 2\pi

and free Fisher information

\displaystyle  \Phi(\mu) = \frac{2\pi^2}{3} \int_{\bf R} \left(\frac{d\mu}{dx}\right)^3\ dx

which were introduced by Voiculescu as free probability analogues of the classical probability concepts of differential entropy and classical Fisher information. (Here we correct a small typo in the normalization constant of the free Fisher information as presented in Voiculescu’s paper.) Namely, it was shown by Shlyakhtenko that the quantity {\chi(k^{-1/2}_* \mu^{\boxplus k})} is monotone non-decreasing for integer {k}, and the Fisher information {\Phi(k^{-1/2}_* \mu^{\boxplus k})} is monotone non-increasing for integer {k}. This is the free probability analogue of the corresponding monotonicities for differential entropy and classical Fisher information that were established by Artstein, Ball, Barthe, and Naor, answering a question of Shannon.

Our first main result is to extend the monotonicity results of Shlyakhtenko to fractional {k \geq 1}. We give two proofs of this fact, one using free probability machinery, and a more self-contained (but less motivated) proof using integration by parts and contour integration. The free probability proof relies on the concept of the free score {J(X)} of a noncommutative random variable, which is the analogue of the classical score. The free score, also introduced by Voiculescu, can be defined by duality as measuring the perturbation with respect to semicircular noise, or more precisely

\displaystyle  \frac{d}{d\varepsilon} \tau( Z P( X + \varepsilon Z) )|_{\varepsilon=0} = \tau( J(X) P(X) )

whenever {P} is a polynomial and {Z} is a semicircular element free of {X}. If {X} has an absolutely continuous law {\mu = f\ dx} for a sufficiently regular {f}, one can calculate {J(X)} explicitly as {J(X) = 2\pi Hf(X)}, where {Hf} is the Hilbert transform of {f}, and the Fisher information is given by the formula

\displaystyle  \Phi(X) = \tau( J(X)^2 ).

One can also define a notion of relative free score {J(X:B)} relative to some subalgebra {B} of noncommutative random variables.

The free score interacts very well with the free minor process {X \mapsto [pXp]}, in particular by standard calculations one can establish the identity

\displaystyle  J( [pXp] : [pBp] ) = k {\bf E}( [p J(X:B) p] | [pXp], [pBp] )

whenever {X} is a noncommutative random variable, {B} is an algebra of noncommutative random variables, and {p} is a real projection of trace {1/k} that is free of both {X} and {B}. The monotonicity of free Fisher information then follows from an application of Pythagoras’s theorem (which implies in particular that conditional expectation operators are contractions on {L^2}). The monotonicity of free entropy then follows from an integral representation of free entropy as an integral of free Fisher information along the free Ornstein-Uhlenbeck process (or equivalently, free Fisher information is essentially the rate of change of free entropy with respect to perturbation by semicircular noise). The argument also shows when equality holds in the monotonicity inequalities; this occurs precisely when {\mu} is a semicircular measure up to affine rescaling.

After an extensive amount of calculation of all the quantities that were implicit in the above free probability argument (in particular computing the various terms involved in the application of Pythagoras’ theorem), we were able to extract a self-contained proof of monotonicity that relied on differentiating the quantities in {k} and using the differential equation (1). It turns out that if {d\mu = f\ dx} for sufficiently regular {f}, then there is an identity

\displaystyle  \partial_k \Phi( k^{-1/2}_* \mu^{\boxplus k} ) = -\frac{1}{2\pi^2} \lim_{\varepsilon \rightarrow 0} \sum_{\alpha,\beta = \pm} \int_{\bf R} \int_{\bf R} f(x) f(y) K(x+i\alpha \varepsilon, y+i\beta \varepsilon)\ dx dy \ \ \ \ \ (2)

where {K} is the kernel

\displaystyle  K(z,w) := \frac{1}{G(z) G(w)} (\frac{G(z)-G(w)}{z-w} + G(z) G(w))^2

and {G(z) := G_\mu(z)}. It is not difficult to show that {K(z,\overline{w})} is a positive semi-definite kernel, which gives the required monotonicity. It would be interesting to obtain some more insightful interpretation of the kernel {K} and the identity (2).
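
As a quick numerical sanity check of this positive semi-definiteness claim (an illustrative sketch only, using the semicircular Cauchy transform {G(z) = \frac{z-\sqrt{z^2-4}}{2}} as a test case), one can evaluate the matrix {(K(z_i, \overline{z_j}))_{i,j}} at a few points {z_i} in the upper half-plane and inspect its spectrum:

```python
import numpy as np

# Test case: G is the Cauchy transform of the semicircular measure, computed with
# principal square roots so that G(z) ~ 1/z at infinity in both half-planes.
def G(z):
    return (z - np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

def K(z, w):
    Gz, Gw = G(z), G(w)
    return ((Gz - Gw) / (z - w) + Gz * Gw) ** 2 / (Gz * Gw)

# Form the matrix M[i, j] = K(z_i, conj(z_j)) at points in the upper half-plane;
# positive semi-definiteness of the kernel predicts a Hermitian PSD matrix.
z = np.linspace(-3.0, 3.0, 9) + 0.5j
M = np.array([[K(zi, np.conj(zj)) for zj in z] for zi in z])

assert np.allclose(M, M.conj().T)                            # Hermitian up to roundoff
print("eigenvalues:", np.round(np.linalg.eigvalsh(M), 6))    # all >= 0 up to roundoff
```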

These monotonicity properties hint at the minor process {A \mapsto [pAp]} being associated to some sort of “gradient flow” in the {k} parameter. We were not able to formalize this intuition; indeed, it is not clear what a gradient flow on a varying noncommutative probability space {({\mathcal A}_p, \tau_p)} even means. However, after substantial further calculation we were able to formally describe the minor process as the Euler-Lagrange equation for an intriguing Lagrangian functional that we conjecture to have a random matrix interpretation. We first work in “Lagrangian coordinates”, defining the quantity {\lambda(s,y)} on the “Gelfand-Tsetlin pyramid”

\displaystyle  \Delta = \{ (s,y): 0 < s < 1; 0 < y < s \}

by the formula

\displaystyle  \mu^{\boxplus 1/s}((-\infty,\lambda(s,y)/s])=y/s,

which is well defined if the density of {\mu} is sufficiently well behaved. The random matrix interpretation of {\lambda(s,y)} is that it is the asymptotic location of the {\lfloor yN\rfloor^{th}} eigenvalue of the {\lfloor sN \rfloor \times \lfloor sN \rfloor} upper left minor of a random {N \times N} matrix {A_N} with asymptotic empirical spectral distribution {\mu} and with unitarily invariant distribution, thus {\lambda} is in some sense a continuum limit of Gelfand-Tsetlin patterns. Thus for instance the Cauchy interlacing laws in this asymptotic limit regime become

\displaystyle  0 \leq \partial_s \lambda \leq \partial_y \lambda.

A lengthy calculation (involving extensive use of the chain rule and product rule) shows that the equation (1) is equivalent to the Euler-Lagrange equation

\displaystyle  \partial_s L_{\lambda_s}(\partial_s \lambda, \partial_y \lambda) + \partial_y L_{\lambda_y}(\partial_s \lambda, \partial_y \lambda) = 0

where {L} is the Lagrangian density

\displaystyle  L(\lambda_s, \lambda_y) := \log \lambda_y + \log \sin( \pi \frac{\lambda_s}{\lambda_y} ).
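
For concreteness, a routine differentiation of this formula gives the quantities appearing in the Euler-Lagrange equation as

\displaystyle  L_{\lambda_s}(\lambda_s, \lambda_y) = \frac{\pi}{\lambda_y} \cot\left( \pi \frac{\lambda_s}{\lambda_y} \right); \qquad L_{\lambda_y}(\lambda_s, \lambda_y) = \frac{1}{\lambda_y} - \frac{\pi \lambda_s}{\lambda_y^2} \cot\left( \pi \frac{\lambda_s}{\lambda_y} \right).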

Thus the minor process is formally a critical point of the integral {\int_\Delta L(\partial_s \lambda, \partial_y \lambda)\ ds dy}. The quantity {\partial_y \lambda} measures the mean eigenvalue spacing at some location of the Gelfand-Tsetlin pyramid, and the ratio {\frac{\partial_s \lambda}{\partial_y \lambda}} measures mean eigenvalue drift in the minor process. This suggests that this Lagrangian density is some sort of measure of entropy of the asymptotic microscale point process emerging from the minor process at this spacing and drift. There is work of Metcalfe demonstrating that this point process is given by the Boutillier bead model, so we conjecture that this Lagrangian density {L} somehow measures the entropy density of this process.
