You are currently browsing the tag archive for the ‘additive combinatorics’ tag.
[This post is dedicated to Luca Trevisan, who recently passed away due to cancer. Though far from his most significant contribution to the field, I would like to mention that, as with most of my other blog posts on this site, this page was written with the assistance of Luca’s LaTeX to WordPress converter. Mathematically, his work and insight on pseudorandomness in particular have greatly informed how I myself think about the concept. – T.]
Recently, Timothy Gowers, Ben Green, Freddie Manners, and I were able to establish the following theorem:
Theorem 1 (Marton's conjecture) Let $A \subset {\bf F}_2^n$ be non-empty with $|A+A| \leq K|A|$. Then there exists a subgroup $H$ of ${\bf F}_2^n$ with $|H| \leq |A|$ such that $A$ is covered by at most $2K^C$ translates of $H$, for some absolute constant $C$.
We established this result with $C = 12$, although it has since been improved to $C = 11$ by Jyun-Jie Liao.
Our proof was written in order to optimize the constant $C$ as much as possible; similarly for the more detailed blueprint of the proof that was prepared in order to formalize the result in Lean. I have been asked a few times whether it is possible to present a streamlined and more conceptual version of the proof in which one does not try to establish an explicit constant $C$, but just tries to show that the result holds for some constant $C$. This is what I will attempt to do in this post, though some of the more routine steps will be outsourced to the aforementioned blueprint.
The key concept here is that of the entropic Ruzsa distance $d[X; Y]$ between two random variables $X, Y$ taking values in an abelian group $G$, defined as

$d[X; Y] := {\bf H}[X' - Y'] - \frac{1}{2} {\bf H}[X] - \frac{1}{2} {\bf H}[Y],$

where $X', Y'$ are independent copies of $X, Y$ respectively, and ${\bf H}[X]$ denotes the Shannon entropy of $X$. This distance is symmetric and non-negative, and obeys the triangle inequality

$d[X; Z] \leq d[X; Y] + d[Y; Z]$

for any random variables $X, Y, Z$; see the blueprint for a proof. The above theorem then follows from an entropic analogue:
Theorem 2 (Entropic Marton's conjecture) Let $X$ be a ${\bf F}_2^n$-valued random variable with $d[X; X] \leq \log K$. Then there exists a uniform random variable $U_H$ on a subgroup $H$ of ${\bf F}_2^n$ such that $d[X; U_H] \leq C \log K$ for some absolute constant $C$.
We were able to establish Theorem 2 with $C = 11$, which implies Theorem 1 with $C = 12$ by fairly standard additive combinatorics manipulations; see the blueprint for details.
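The entropic Ruzsa distance and its basic properties can be sanity-checked numerically. Here is a small self-contained sketch (my own illustration, not from the post; the distributions and the integer encoding of ${\bf F}_2^2$ are arbitrary choices), computing $d[X;Y]$ for distributions on ${\bf F}_2^2$ and verifying non-negativity, the triangle inequality, and the fact that a uniform variable on a subgroup has distance zero to itself:

```python
import math
from collections import defaultdict

def H(p):
    # Shannon entropy (bits) of a distribution {outcome: probability}
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def xor_conv(p, q):
    # law of X' + Y' for independent copies (addition in F_2^n is XOR,
    # and in characteristic 2 subtraction equals addition)
    out = defaultdict(float)
    for x, px in p.items():
        for y, qy in q.items():
            out[x ^ y] += px * qy
    return dict(out)

def d(p, q):
    # entropic Ruzsa distance d[X;Y] = H[X' - Y'] - H[X]/2 - H[Y]/2
    return H(xor_conv(p, q)) - H(p) / 2 - H(q) / 2

# three distributions on F_2^2, encoded as the integers {0,1,2,3}
X = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
Y = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}   # uniform on the subgroup F_2^2
Z = {0: 0.5, 3: 0.5}

assert d(Y, Y) < 1e-12                            # uniform on a subgroup: distance 0
assert d(X, Z) <= d(X, Y) + d(Y, Z) + 1e-12       # triangle inequality
assert min(d(X, Y), d(X, Z), d(Y, Z)) >= -1e-12   # non-negativity
```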
The key proposition needed to establish Theorem 2 is the following distance decrement property:
Proposition 3 (Distance decrement) If $X, Y$ are ${\bf F}_2^n$-valued random variables, then one can find ${\bf F}_2^n$-valued random variables $X', Y'$ such that

$d[X'; Y'] \leq (1 - \eta) d[X; Y]$

and

$d[X; X'], d[Y; Y'] \leq C d[X; Y]$

for some absolute constants $C, \eta > 0$.
Indeed, suppose this proposition held. Starting with $X_0, Y_0$ both equal to $X$ and iterating, one can then find sequences of random variables $X_n, Y_n$ with

$d[X_n; Y_n] \leq (1 - \eta)^n d[X; X]$

and

$d[X_{n+1}; X_n], d[Y_{n+1}; Y_n] \leq C (1 - \eta)^n d[X; X].$

In particular, from the triangle inequality and geometric series

$d[X; X_n], d[X; Y_n] \leq \frac{C}{\eta} d[X; X].$

By weak compactness, some subsequence of the $X_n, Y_n$ converge to some limiting random variables $X_\infty, Y_\infty$, and by some simple continuity properties of entropic Ruzsa distance, we conclude that

$d[X_\infty; Y_\infty] = 0$

and

$d[X; X_\infty], d[X; Y_\infty] \leq \frac{C}{\eta} d[X; X].$
Theorem 2 then follows from the “100% inverse theorem” for entropic Ruzsa distance; see the blueprint for details.
To prove Proposition 3, we can reformulate it as follows:
Proposition 4 (Lack of distance decrement implies vanishing) If $X, Y$ are ${\bf F}_2^n$-valued random variables, with the property that

$d[X'; Y'] > d[X; Y] - \eta ( d[X; X'] + d[Y; Y'] ) \ \ \ \ \ (1)$

for all ${\bf F}_2^n$-valued random variables $X', Y'$ and some sufficiently small absolute constant $\eta > 0$, then one can derive a contradiction.
Indeed, assuming the above proposition, the hypothesis (1) must fail for any given $X, Y$, so that

$d[X'; Y'] \leq d[X; Y] - \eta ( d[X; X'] + d[Y; Y'] )$

for some $X', Y'$, which will imply Proposition 3 with $C = 1/\eta$.
The entire game is now to use Shannon entropy inequalities and "entropic Ruzsa calculus" to deduce a contradiction from (1) for $\eta$ small enough. This we will do below the fold, but before doing so, let us first make some adjustments to (1) that will make it more useful for our purposes. Firstly, because conditional entropic Ruzsa distance (see blueprint for definitions) is an average of unconditional entropic Ruzsa distance, we can automatically upgrade (1) to the conditional version

$d[X'|Z; Y'|W] > d[X; Y] - \eta ( d[X; X'|Z] + d[Y; Y'|W] )$

for any random variables $Z, W$ that are possibly coupled with $X', Y'$ respectively. In particular, if we define a "relevant" random variable $X'$ (conditioned with respect to some auxiliary data $Z$) to be a random variable for which

$d[X; X'|Z] = O( d[X; Y] )$

or equivalently (by the triangle inequality)

$d[Y; X'|Z] = O( d[X; Y] ),$

then we have the useful lower bound

$d[X'|Z; Y'|W] \geq (1 - O(\eta)) d[X; Y] \ \ \ \ \ (2)$

whenever $X'$ and $Y'$ are relevant conditioning on $Z, W$ respectively. This is quite a useful bound, since the laws of "entropic Ruzsa calculus" will tell us, roughly speaking, that virtually any random variable that we can create from taking various sums of copies of $X$ and $Y$ and conditioning against other sums, will be relevant. (Informally: the space of relevant random variables is $(1 - O(\eta)) d[X; Y]$-separated with respect to the entropic Ruzsa distance.)
— 1. Main argument —
Now we derive more and more consequences of (2) – at some point crucially using the hypothesis that we are in characteristic two – before we reach a contradiction.
Right now, our hypothesis (2) only supplies lower bounds on entropic distances. The crucial ingredient that allows us to proceed is what we call the fibring identity, which lets us convert these lower bounds into useful upper bounds as well, which in fact match up very nicely when $\eta$ is small. Informally, the fibring identity captures the intuitive fact that the doubling constant of a set should be at least as large as the doubling constant of the image of that set under a homomorphism, times the doubling constant of a typical fiber of that homomorphism; and furthermore, one should only be close to equality if the fibers "line up" in some sense.
Here is the fibring identity:
Proposition 5 (Fibring identity) Let $\pi: G \rightarrow H$ be a homomorphism. Then for any independent $G$-valued random variables $X, Y$, one has

$d[X; Y] = d[\pi(X); \pi(Y)] + d[X|\pi(X); Y|\pi(Y)] + {\bf I}[X - Y : (\pi(X), \pi(Y)) \,|\, \pi(X) - \pi(Y)].$
The proof is of course in the blueprint, but given that it is a central pillar of the argument, I reproduce it here.
Proof: Expanding out the definition of Ruzsa distance, and using the conditional entropy chain rule

${\bf H}[X] = {\bf H}[\pi(X)] + {\bf H}[X|\pi(X)]$

and

${\bf H}[Y] = {\bf H}[\pi(Y)] + {\bf H}[Y|\pi(Y)],$

it suffices to establish the identity

${\bf H}[X - Y] = {\bf H}[\pi(X) - \pi(Y)] + {\bf H}[X - Y|\pi(X), \pi(Y)] + {\bf I}[X - Y : (\pi(X), \pi(Y)) \,|\, \pi(X) - \pi(Y)].$

But from the chain rule again we have

${\bf H}[X - Y] = {\bf H}[\pi(X) - \pi(Y)] + {\bf H}[X - Y|\pi(X) - \pi(Y)]$

and from the definition of conditional mutual information (using the fact that $\pi(X) - \pi(Y)$ is determined both by $X - Y$ and by $(\pi(X), \pi(Y))$) one has

${\bf H}[X - Y|\pi(X) - \pi(Y)] = {\bf H}[X - Y|\pi(X), \pi(Y)] + {\bf I}[X - Y : (\pi(X), \pi(Y)) \,|\, \pi(X) - \pi(Y)],$

giving the claim. $\Box$
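Since the fibring identity is just a chain of entropy manipulations, it can be verified numerically on a toy example. The following sketch is my own illustration (an arbitrary pair of distributions on ${\bf F}_2^2$, with the projection to the low bit as the homomorphism $\pi$); it checks the identity to floating-point accuracy:

```python
import math
from collections import defaultdict

def H(dist):
    # Shannon entropy (in bits) of a distribution {outcome: probability}
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def dist_of(pxy, f):
    # law of f(x, y) under the joint distribution pxy of (x, y)
    d = defaultdict(float)
    for (x, y), p in pxy.items():
        d[f(x, y)] += p
    return dict(d)

# G = F_2^2 encoded as integers 0..3 with XOR as addition;
# pi(x) = low bit of x is a homomorphism G -> F_2
pi = lambda x: x & 1
px = {0: 0.5, 1: 0.2, 2: 0.2, 3: 0.1}
py = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
pxy = {(x, y): px[x] * py[y] for x in px for y in py}  # X, Y independent

def Hc(fa, fb):
    # conditional entropy H[fa | fb] = H[fa, fb] - H[fb]
    return H(dist_of(pxy, lambda x, y: (fa(x, y), fb(x, y)))) - H(dist_of(pxy, fb))

# entropic Ruzsa distance d[X;Y] = H[X - Y] - H[X]/2 - H[Y]/2 (char 2: minus = XOR)
dXY = H(dist_of(pxy, lambda x, y: x ^ y)) - H(px) / 2 - H(py) / 2

# d[pi(X); pi(Y)]
d_proj = (H(dist_of(pxy, lambda x, y: pi(x) ^ pi(y)))
          - H(dist_of(pxy, lambda x, y: pi(x))) / 2
          - H(dist_of(pxy, lambda x, y: pi(y))) / 2)

# conditional distance d[X | pi(X); Y | pi(Y)]
d_fib = (Hc(lambda x, y: x ^ y, lambda x, y: (pi(x), pi(y)))
         - Hc(lambda x, y: x, lambda x, y: pi(x)) / 2
         - Hc(lambda x, y: y, lambda x, y: pi(y)) / 2)

# conditional mutual information I[X - Y : (pi(X), pi(Y)) | pi(X) - pi(Y)]
A = lambda x, y: x ^ y
B = lambda x, y: (pi(x), pi(y))
C = lambda x, y: pi(x) ^ pi(y)
mi = Hc(A, C) + Hc(B, C) - Hc(lambda x, y: (A(x, y), B(x, y)), C)

assert abs(dXY - (d_proj + d_fib + mi)) < 1e-9  # the fibring identity
assert mi >= -1e-9                              # mutual information is non-negative
```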
We will only care about the characteristic $2$ setting here, so we will now assume that all groups involved are $2$-torsion, so that we can replace all subtractions with additions. If we specialize the fibring identity to the case where $G = {\bf F}_2^n \times {\bf F}_2^n$, $H = {\bf F}_2^n$, $\pi$ is the addition map $\pi(x, y) := x + y$, and $X = (X_1, X_2)$, $Y = (Y_1, Y_2)$ are pairs of independent random variables in ${\bf F}_2^n$, we obtain the following corollary:

Corollary 6 Let $X_1, X_2, Y_1, Y_2$ be independent ${\bf F}_2^n$-valued random variables. Then we have the identity

$d[X_1; Y_1] + d[X_2; Y_2] = d[X_1 + X_2; Y_1 + Y_2] + d[X_1|X_1 + X_2; Y_1|Y_1 + Y_2] + {\bf I}[X_1 + Y_1 : X_1 + X_2 \,|\, X_1 + X_2 + Y_1 + Y_2].$
This is a useful and flexible identity, especially when combined with (2). For instance, we can discard the conditional mutual information term as being non-negative, to obtain the inequality

$d[X_1 + X_2; Y_1 + Y_2] + d[X_1|X_1 + X_2; Y_1|Y_1 + Y_2] \leq d[X_1; Y_1] + d[X_2; Y_2].$

If we let $X_1, X_2, Y_1, Y_2$ be independent copies of $X, Y, Y, X$ respectively (note the swap in the last two variables!) we obtain

$d[X_1 + X_2; Y_1 + Y_2] + d[X_1|X_1 + X_2; Y_1|Y_1 + Y_2] \leq 2 d[X; Y].$

From entropic Ruzsa calculus, one can check that $X_1 + X_2$, $Y_1 + Y_2$, $X_1|X_1 + X_2$, and $Y_1|Y_1 + Y_2$ are all relevant random variables, so from (2) we now obtain both upper and lower bounds for $d[X_1 + X_2; Y_1 + Y_2]$:

$d[X_1 + X_2; Y_1 + Y_2] = (1 + O(\eta)) d[X; Y].$
A pleasant upshot of this is that we now get to work in the symmetric case $X = Y$ without loss of generality. Indeed, if we set $X^* := X_1 + X_2$, we now have from (2) that

$d[X'|Z; Y'|W] \geq (1 - O(\eta)) d[X^*; X^*] \ \ \ \ \ (3)$

whenever $X'|Z, Y'|W$ are relevant, which by entropic Ruzsa calculus is equivalent to asking that

$d[X^*; X'|Z], d[X^*; Y'|W] = O( d[X^*; X^*] ).$
Now we use the fibring identity again, relabeling $X^*$ as $X$ and requiring $X_1, X_2, Y_1, Y_2$ to be independent copies of $X$. We conclude that

$d[X_1 + X_2; Y_1 + Y_2] + d[X_1|X_1 + X_2; Y_1|Y_1 + Y_2] + {\bf I}[X_1 + Y_1 : X_1 + X_2 \,|\, X_1 + X_2 + Y_1 + Y_2] = 2 d[X; X].$

As before, the random variables $X_1 + X_2$, $Y_1 + Y_2$, $X_1|X_1 + X_2$, $Y_1|Y_1 + Y_2$ are all relevant, so from (3) we have

$d[X_1 + X_2; Y_1 + Y_2], d[X_1|X_1 + X_2; Y_1|Y_1 + Y_2] \geq (1 - O(\eta)) d[X; X].$
We could now also match these lower bounds with upper bounds, but the more important takeaway from this analysis is a really good bound on the conditional mutual information:

${\bf I}[X_1 + Y_1 : X_1 + X_2 \,|\, X_1 + X_2 + Y_1 + Y_2] = O(\eta)\, d[X; X].$

By the data processing inequality, we can discard some of the randomness here. Let us introduce the random variables

$S := X_1 + X_2 + Y_1 + Y_2, \quad U := X_1 + X_2, \quad V := X_1 + Y_1;$

then we have

${\bf I}[U : V \,|\, S] = O(\eta)\, d[X; X].$

Intuitively, this means that $U$ and $V$ are very nearly independent given $S$. For sake of argument, let us assume that they are actually independent; one can achieve something resembling this by invoking the entropic Balog-Szemerédi-Gowers theorem, established in the blueprint, after conceding some small losses in the entropy, but we skip over the details for this blog post. The key point now is that because we are in characteristic $2$, $U + V$ has the same form as $U$ or $V$:

$U + V = X_2 + Y_1.$

In particular, by permutation symmetry, we have

${\bf H}[U + V \,|\, S] = {\bf H}[U \,|\, S] = {\bf H}[V \,|\, S],$

and so by the definition of conditional Ruzsa distance we have a massive distance decrement

$d[U|S; V|S] = 0,$

contradicting (1) as desired. (In reality, we end up decreasing the distance not all the way to zero, but instead to a small multiple of $d[X; X]$, due to losses in the Balog-Szemerédi-Gowers theorem, but this is still enough to reach a contradiction.)
Remark 7 A similar argument works in the $m$-torsion case for general $m$. Instead of decrementing the entropic Ruzsa distance, one instead decrements a "multidistance"

$D[X_1; \ldots; X_m] := {\bf H}[X_1 + \cdots + X_m] - \frac{1}{m} \sum_{i=1}^m {\bf H}[X_i]$

for independent $X_1, \ldots, X_m$. By an iterated version of the fibring identity, one can first reduce again to the symmetric case where the random variables are all copies of the same variable $X$. If one then takes $X_{i,j}$, $1 \leq i, j \leq m$, to be an array of $m^2$ copies of $X$, one can get to the point where the row sums $\sum_j X_{i,j}$ and the column sums $\sum_i X_{i,j}$ have small conditional mutual information with respect to the double sum $S := \sum_{i,j} X_{i,j}$. If we then set $W := \sum_{i,j} i X_{i,j}$ and $Z := \sum_{i,j} j X_{i,j}$, the data processing inequality again shows that $W$ and $Z$ are nearly independent given $S$. The $m$-torsion now crucially intervenes as before to ensure that $W + Z = \sum_{i,j} (i + j) X_{i,j}$ has the same form as $W$ or $Z$, leading to a contradiction as before. See this previous blog post for more discussion.
Tim Gowers, Ben Green, Freddie Manners, and I have just uploaded to the arXiv our paper “On a conjecture of Marton“. This paper establishes a version of the notorious Polynomial Freiman–Ruzsa conjecture (first proposed by Katalin Marton):
Theorem 1 (Polynomial Freiman–Ruzsa conjecture) Let $A \subset {\bf F}_2^n$ be such that $|A+A| \leq K|A|$. Then $A$ can be covered by at most $2K^{12}$ translates of a subspace $H$ of ${\bf F}_2^n$ of cardinality at most $|A|$.
The previous best known result towards this conjecture was by Konyagin (as communicated in this paper of Sanders), who obtained a similar result but with $2K^{12}$ replaced by $\exp(O_\epsilon(\log^{3+\epsilon} K))$ for any $\epsilon > 0$ (assuming that say $K \geq 3/2$ to avoid some degeneracies as $K$ approaches $1$, which is not the difficult case of the conjecture). The conjecture (with $12$ replaced by an unspecified constant $C$) has a number of equivalent forms; see this survey of Green, and these papers of Lovett and of Green and myself for some examples; in particular, as discussed in the latter two references, the constants in the $U^3({\bf F}_2^n)$ inverse theorem are now polynomial in nature (although we did not try to optimize the constant).
The exponent $12$ here was the product of a large number of optimizations to the argument (our original exponent here was closer to $1000$), but can be improved even further with additional effort (our current argument, for instance, allows one to obtain a slightly smaller non-integer exponent, but we decided to state our result using integer exponents instead).
In this paper we will focus exclusively on the characteristic $2$ case (so we will be cavalier in identifying addition and subtraction), but in a followup paper we will establish similar results in other finite characteristics.
Much of the prior progress on this sort of result has proceeded via Fourier analysis. Perhaps surprisingly, our approach uses no Fourier analysis whatsoever, being conducted instead entirely in "physical space". Broadly speaking, it follows a natural strategy, which is to induct on the doubling constant $K := |A+A|/|A|$. Indeed, suppose for instance that one could show that every set $A$ of doubling constant $K$ was "commensurate" in some sense to a set $A'$ of doubling constant at most $K^{1/2}$. One measure of commensurability, for instance, might be the Ruzsa distance

$d(A, A') := \log \frac{|A + A'|}{|A|^{1/2} |A'|^{1/2}},$

which one might hope to control by $O(\log K)$. Then one could iterate this procedure until the doubling constant dropped below say $\frac{3}{2}$, at which point the conjecture is known to hold (there is an elementary argument that if $A$ has doubling constant less than $\frac{3}{2}$, then $A+A$ is in fact a subspace of ${\bf F}_2^n$). One can then use several applications of the Ruzsa triangle inequality

$d(A, C) \leq d(A, B) + d(B, C)$

to conclude (the fact that we reduce $K$ to $K^{1/2}$ means that the various Ruzsa distances that need to be summed are controlled by a convergent geometric series).

There are a number of possible ways to try to "improve" a set $A$ of not too large doubling by replacing it with a commensurate set of better doubling. We note two particular potential improvements:
- (i) Replacing $A$ with $A+A$. For instance, if $A$ was a random subset (of density $1/K$) of a large subspace $V$ of ${\bf F}_2^n$, then replacing $A$ with $A+A$ usually drops the doubling constant from $K$ down to nearly $1$ (under reasonable choices of parameters).
- (ii) Replacing $A$ with $A \cap (A+h)$ for a "typical" $h \in A+A$. For instance, if $A$ was the union of $K$ random cosets of a subspace $V$ of large codimension, then replacing $A$ with $A \cap (A+h)$ again usually drops the doubling constant from $K$ down to nearly $1$.
Unfortunately, there are sets $A$ where neither of the above two operations (i), (ii) significantly improves the doubling constant. For instance, if $A$ is a random density $1/\sqrt{K}$ subset of $\sqrt{K}$ random translates of a medium-sized subspace $V$, one can check that the doubling constant stays close to $K$ if one applies either operation (i) or operation (ii). But in this case these operations don't actually worsen the doubling constant much either, and by applying some combination of (i) and (ii) (either intersecting $A+A$ with a translate, or taking a sumset of $A \cap (A+h)$ with itself) one can start lowering the doubling constant again.
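As a deterministic toy illustration of operation (i) (my own example, not from the paper): take $A$ to be a subspace with one point removed, so that $A$ has doubling constant slightly above $1$, while $A+A$ is exactly a subspace and has doubling constant exactly $1$:

```python
def sumset(A, B):
    # A + B inside F_2^n, where group addition is bitwise XOR
    return {a ^ b for a in A for b in B}

def doubling(A):
    # the doubling constant |A+A| / |A|
    return len(sumset(A, A)) / len(A)

H = set(range(16))   # the subspace F_2^4, encoded as the integers 0..15
A = H - {5}          # a dense subset of H: one point removed

assert sumset(A, A) == H                   # operation (i) recovers the subspace
assert abs(doubling(A) - 16 / 15) < 1e-12  # |A+A| = 16, |A| = 15
assert doubling(sumset(A, A)) == 1.0       # the doubling constant drops to 1
```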
This begins to suggest a potential strategy: show that at least one of the operations (i) or (ii) will improve the doubling constant, or at least not worsen it too much; and in the latter case, perform some more complicated operation to locate the desired doubling constant improvement.
A sign that this strategy might have a chance of working is provided by the following heuristic argument. If $A$ has doubling constant $K$, then the Cartesian product $A \times A$ has doubling constant $K^2$. On the other hand, by using the projection map $s: {\bf F}_2^n \times {\bf F}_2^n \rightarrow {\bf F}_2^n$ defined by $s(x, y) := x + y$, we see that $A \times A$ projects to $A + A$, with fibres being essentially a copy of $A \cap (A + h)$ for various $h \in A+A$. So, morally, $A \times A$ also behaves like a "skew product" of $A + A$ and the fibres $A \cap (A+h)$, which suggests (non-rigorously) that the doubling constant $K^2$ of $A \times A$ is also something like the doubling constant of $A + A$, times the doubling constant of a typical fibre $A \cap (A+h)$. This would imply that at least one of $A + A$ and $A \cap (A+h)$ would have doubling constant at most $K$, and thus that at least one of operations (i), (ii) would not worsen the doubling constant.
Unfortunately, this argument does not seem to be easily made rigorous using the traditional doubling constant; even the significantly weaker statement that $A+A$ has doubling constant at most $K^2$ is false (see comments for more discussion). However, it turns out (as discussed in this recent paper of myself with Green and Manners) that things are much better. Here, the analogue of a subset $A$ in ${\bf F}_2^n$ is a random variable $X$ taking values in ${\bf F}_2^n$, and the analogue of the (logarithmic) doubling constant is the entropic doubling constant

$d[X; X] := {\bf H}[X_1 + X_2] - {\bf H}[X],$

where $X_1, X_2$ are independent copies of $X$. If $X$ is a random variable in some additive group $G$ and $\pi: G \rightarrow H$ is a homomorphism, one then has what we call the fibring inequality

$d[X; X] \geq d[\pi(X); \pi(X)] + d[X|\pi(X); X|\pi(X)],$

where the conditional doubling constant $d[X|\pi(X); X|\pi(X)]$ is defined as

$d[X|\pi(X); X|\pi(X)] = {\bf H}[X_1 + X_2 | \pi(X_1), \pi(X_2)] - {\bf H}[X|\pi(X)].$

Indeed, from the chain rule for Shannon entropy one has

${\bf H}[X] = {\bf H}[\pi(X)] + {\bf H}[X|\pi(X)]$

and

${\bf H}[X_1 + X_2] = {\bf H}[\pi(X_1) + \pi(X_2)] + {\bf H}[X_1 + X_2 | \pi(X_1) + \pi(X_2)],$

while from the non-negativity of conditional mutual information one has

${\bf H}[X_1 + X_2 | \pi(X_1) + \pi(X_2)] \geq {\bf H}[X_1 + X_2 | \pi(X_1), \pi(X_2)],$

and it is an easy matter to combine all these identities and inequalities to obtain the claim.

Applying this inequality with $X$ replaced by two independent copies $(X_1, X_2)$ of itself, and using the addition map $(x, y) \mapsto x + y$ for $\pi$, we obtain in particular that

$2 d[X; X] \geq d[X_1 + X_2; X_1 + X_2] + d[X_1|X_1 + X_2; X_2|X_1 + X_2],$
or (since $X_2$ is determined by $X_1$ once one fixes $X_1 + X_2$)

$2 d[X; X] \geq d[X_1 + X_2; X_1 + X_2] + d[X_1|X_1 + X_2; X_1|X_1 + X_2].$

So if $d[X; X] \leq \log K$, then at least one of $d[X_1 + X_2; X_1 + X_2]$ or $d[X_1|X_1 + X_2; X_1|X_1 + X_2]$ will be less than or equal to $\log K$. This is the entropy analogue of at least one of (i) or (ii) improving, or at least not degrading the doubling constant, although there are some minor technicalities involving how one deals with the conditioning to $X_1 + X_2$ in the second term that we will gloss over here (one can pigeonhole the instances of $X_1$ to different events $X_1 + X_2 = s$, and "depolarise" the induction hypothesis to deal with distances $d[X; Y]$ between pairs of random variables $X, Y$ that do not necessarily have the same distribution). Furthermore, we can even calculate the defect in the above inequality: a careful inspection of the above argument eventually reveals that

$2 d[X; X] = d[X_1 + X_2; X_1 + X_2] + d[X_1|X_1 + X_2; X_1|X_1 + X_2] + {\bf I}[X_1 + X_2 : X_1 + X_3 \,|\, X_1 + X_2 + X_3 + X_4],$

where we now take four independent copies $X_1, X_2, X_3, X_4$. This leads (modulo some technicalities) to the following interesting conclusion: if neither (i) nor (ii) leads to an improvement in the entropic doubling constant, then $X_1 + X_2$ and $X_1 + X_3$ are conditionally independent relative to $X_1 + X_2 + X_3 + X_4$. This situation (or an approximation to this situation) is what we refer to in the paper as the "endgame".

A version of this endgame conclusion is in fact valid in any characteristic. But in characteristic $2$, we can take advantage of the identity

$(X_1 + X_2) + (X_1 + X_3) = X_2 + X_3.$
Conditioning on $X_1 + X_2 + X_3 + X_4$, and using symmetry we now conclude that if we are in the endgame exactly (so that the mutual information is zero), then the independent sum of two copies of $X_1 + X_2$ (conditioned on $X_1 + X_2 + X_3 + X_4$) has exactly the same distribution as $X_1 + X_2$ (so conditioned); in particular, the entropic doubling constant here is zero, which is certainly a reduction in the doubling constant.

To deal with the situation where the conditional mutual information is small but not completely zero, we have to use an entropic version of the Balog–Szemerédi–Gowers lemma, but fortunately this was already worked out in an old paper of mine (although in order to optimise the final constant, we ended up using a slight variant of that lemma).
I am planning to formalize this paper in the Lean4 language. Further discussion of this project will take place on this Zulip stream, and the project itself will be held at this Github repository.
Asgar Jamneshan and myself have just uploaded to the arXiv our preprint “The inverse theorem for the Gowers uniformity norm on arbitrary finite abelian groups: Fourier-analytic and ergodic approaches“. This paper, which is a companion to another recent paper of ourselves and Or Shalom, studies the inverse theory for the third Gowers uniformity norm

$\|f\|_{U^3(G)}^8 := {\bf E}_{x, h_1, h_2, h_3 \in G} \Delta_{h_1} \Delta_{h_2} \Delta_{h_3} f(x)$

on an arbitrary finite abelian group $G$, where $\Delta_h f(x) := f(x+h) \overline{f(x)}$ is the multiplicative derivative. Our main result is as follows:
Theorem 1 (Inverse theorem for $U^3(G)$) Let $G$ be a finite abelian group, and let $f: G \rightarrow {\bf C}$ be a $1$-bounded function with $\|f\|_{U^3(G)} \geq \eta$ for some $\eta > 0$. Then:
- (i) (Correlation with locally quadratic phase) There exists a regular Bohr set $B(S, \rho)$ with $|S| = O(\eta^{-O(1)})$ and $\rho = \eta^{O(1)}$, a locally quadratic function $\phi: B(S, \rho) \rightarrow {\bf R}/{\bf Z}$, and a function $\xi: G \rightarrow {\bf R}/{\bf Z}$ such that $f$ correlates (with correlation $\gg \eta^{O(1)}$) with the associated locally quadratic phase.
- (ii) (Correlation with nilsequence) There exists an explicit degree two filtered nilmanifold $H/\Lambda$ of dimension $O(\eta^{-O(1)})$, a polynomial map $g: G \rightarrow H$, and a Lipschitz function $F: H/\Lambda \rightarrow {\bf C}$ of constant $O(\eta^{-O(1)})$ such that $f$ correlates (with correlation $\gg \eta^{O(1)}$) with the nilsequence $x \mapsto F(g(x) \Lambda)$.
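As a sanity check on the role of quadratic phases (a toy computation of my own, using the standard definition of the $U^3$ norm on a cyclic group, rather than anything specific to this paper): a globally quadratic phase has $U^3$ norm exactly $1$, since its third multiplicative derivatives are identically $1$, while a "spread out" function such as a point mass has small $U^3$ norm:

```python
import cmath

def u3_norm(f, N):
    # ||f||_{U^3}^8 = E_{x,h1,h2,h3} of the product over w in {0,1}^3 of
    # C^{|w|} f(x + w.h), where C denotes complex conjugation
    total = 0
    for x in range(N):
        for h1 in range(N):
            for h2 in range(N):
                for h3 in range(N):
                    val = 1
                    for w1 in (0, 1):
                        for w2 in (0, 1):
                            for w3 in (0, 1):
                                t = f((x + w1 * h1 + w2 * h2 + w3 * h3) % N)
                                val *= t.conjugate() if (w1 + w2 + w3) % 2 else t
                    total += val
    return abs(total / N ** 4) ** (1 / 8)

N = 5
quad = lambda x: cmath.exp(2j * cmath.pi * x * x / N)  # quadratic phase e(x^2/N)
assert abs(u3_norm(quad, N) - 1) < 1e-8   # third differences of x^2 vanish

delta = lambda x: 1.0 + 0j if x == 0 else 0j  # a point mass
assert u3_norm(delta, N) < 0.5                # equals N^{-1/2} here
```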
Such a theorem was proven by Ben Green and myself in the case when the order of $G$ was odd, and by Samorodnitsky in the $2$-torsion case $G = {\bf F}_2^n$. In all cases one uses the “higher order Fourier analysis” techniques introduced by Gowers. After some now-standard manipulations (using for instance what is now known as the Balog-Szemerédi-Gowers lemma), one arrives (for arbitrary $G$) at a correlation estimate involving a phase $e(B(h, k))$, where $b$ denotes various $1$-bounded functions whose exact values are not too important, and $B: B(S, \rho) \times B(S, \rho) \rightarrow {\bf R}/{\bf Z}$ is a symmetric locally bilinear form. The idea is then to “integrate” this form by expressing it in the form

$B(h, k) = \phi(h + k) - \phi(h) - \phi(k) \ \ \ \ \ (1)$

for some locally quadratic $\phi$; this then allows us to write the above correlation as a correlation with a quadratic phase $e(\phi)$ (after adjusting the functions $b$ suitably), and one can now conclude part (i) of the above theorem using some linear Fourier analysis. Part (ii) follows by encoding locally quadratic phase functions as nilsequences; for this we adapt an algebraic construction of Manners.

So the key step is to obtain a representation of the form (1), possibly after shrinking the Bohr set a little if needed. This has been done in the literature in two ways:
- When the order of $G$ is odd, one has the ability to divide by $2$, and on the Bohr set one can establish (1) with $\phi(x) := \frac{1}{2} B(x, x)$. (This is similar to how in single variable calculus the function $\frac{1}{2} x^2$ is a function whose second derivative is equal to $1$.)
- When $G = {\bf F}_2^n$, then after a change of basis one can take the Bohr set to be ${\bf F}_2^m$ for some $m$, and the bilinear form can be written in coordinates as

$B(h, k) = \frac{1}{2} \sum_{i, j} b_{ij} h_i k_j$

for some $b_{ij} \in \{0, 1\}$ with $b_{ij} = b_{ji}$. The diagonal terms $b_{ii}$ cause a problem, but by subtracting off the rank one form $\frac{1}{2} (\sum_i b_{ii} h_i)(\sum_i b_{ii} k_i)$ one can write $B$ on the orthogonal complement of $(b_{11}, \ldots, b_{mm})$ using coefficients which now vanish on the diagonal. One can now obtain (1) on this complement by taking

$\phi(h) := \frac{1}{2} \sum_{i < j} b_{ij} h_i h_j.$
In our paper we can now treat the case of arbitrary finite abelian groups $G$, by means of the following two new ingredients:
- (i) Using some geometry of numbers, we can lift the group $G$ to a larger (possibly infinite, but still finitely generated) abelian group $\tilde G$ with a projection map $\pi: \tilde G \rightarrow G$, and find a globally bilinear map $\tilde B$ on the latter group, such that one has a representation

$B(\pi(x), \pi(y)) = \tilde B(x, y) \ \ \ \ \ (2)$

of the locally bilinear form $B$ by the globally bilinear form $\tilde B$ when $x, y$ are close enough to the origin.
- (ii) Using an explicit construction, one can show that every globally bilinear map $\tilde B: \tilde G \times \tilde G \rightarrow {\bf R}/{\bf Z}$ has a representation of the form (1) for some globally quadratic function $\tilde \phi: \tilde G \rightarrow {\bf R}/{\bf Z}$.
To illustrate (i), consider the Bohr set $B = \{x \in {\bf Z}/N{\bf Z}: \|x/N\| \leq 1/10\}$ in ${\bf Z}/N{\bf Z}$ (where $\|\theta\|$ denotes the distance of $\theta$ to the nearest integer), and consider a locally bilinear form of the form $B(x, y) := \alpha x y \hbox{ mod } 1$ for some real number $\alpha$ and all integers $x, y \in [-N/10, N/10]$ (which we identify with elements of ${\bf Z}/N{\bf Z}$). For generic $\alpha$, this form cannot be extended to a globally bilinear form on ${\bf Z}/N{\bf Z}$; however if one lifts ${\bf Z}/N{\bf Z}$ to the finitely generated abelian group ${\bf Z}$

(with projection map $x \mapsto x \hbox{ mod } N$) and introduces the globally bilinear form $\tilde B(x, y) := \alpha x y \hbox{ mod } 1$, then one has (2) when $x, y$ lie in the interval $[-N/10, N/10]$. A similar construction works for higher rank Bohr sets.

To illustrate (ii), the key case turns out to be when $\tilde G$ is a cyclic group ${\bf Z}/N{\bf Z}$, in which case $\tilde B$ will take the form

$\tilde B(x, y) = \frac{a x y}{N} \hbox{ mod } 1$
for some integer $a$. One can then check by direct construction that (1) will be obeyed with

$\tilde \phi(x) := \frac{a x (x + N)}{2N} \hbox{ mod } 1,$

regardless of whether $N$ is even or odd. A variant of this construction also works for $\tilde G = {\bf Z}$, and the general case follows from a short calculation verifying that the claim (ii) for any two groups implies the corresponding claim (ii) for the product of these groups.

This concludes the Fourier-analytic proof of Theorem 1. In this paper we also give an ergodic theory proof of (a qualitative version of) Theorem 1(ii), using a correspondence principle argument adapted from this previous paper of Ziegler and myself. Basically, the idea is to randomly generate a dynamical system on the group $G$, by selecting an infinite number of random shifts $g_1, g_2, \ldots \in G$, which induces an action of the infinitely generated free abelian group ${\bf Z}^\omega$ on $G$ by the formula

$T^{(n_1, n_2, \ldots)} x := x + n_1 g_1 + n_2 g_2 + \cdots.$
Much as the law of large numbers ensures the almost sure convergence of Monte Carlo integration, one can show that this action is almost surely ergodic (after passing to a suitable Furstenberg-type limit where the size of $G$ goes to infinity), and that the dynamical Host-Kra-Gowers seminorms of that system coincide with the combinatorial Gowers norms of the original functions. One is then well placed to apply an inverse theorem for the third Host-Kra-Gowers seminorm for ${\bf Z}^\omega$-actions, which was accomplished in the companion paper to this one. After doing so, one almost gets the desired conclusion of Theorem 1(ii), except that after undoing the application of the Furstenberg correspondence principle, the map $g$ is merely an almost polynomial rather than a polynomial, which roughly speaking means that instead of certain derivatives of $g$ vanishing, they instead are merely very small outside of a small exceptional set. To conclude we need to invoke a “stability of polynomials” result, which at this level of generality was first established by Candela and Szegedy (though we also provide an independent proof here in an appendix), which roughly speaking asserts that every approximate polynomial is close in measure to an actual polynomial. (This general strategy is also employed in the Candela-Szegedy paper, though in the absence of the ergodic inverse theorem input that we rely upon here, the conclusion is weaker in that the filtered nilmanifold is replaced with a general space known as a “CFR nilspace”.)

This transference principle approach seems to work well for the higher step cases (for instance, the stability of polynomials result is known in arbitrary degree); the main difficulty is to establish a suitable higher step inverse theorem in the ergodic theory setting, which we hope to do in future research.
Laura Cladek and I have just uploaded to the arXiv our paper “Additive energy of regular measures in one and higher dimensions, and the fractal uncertainty principle“. This paper concerns a continuous version of the notion of additive energy. Given a finite measure $\mu$ on ${\bf R}^d$ and a scale $r > 0$, define the energy at scale $r$ to be the quantity

${\bf E}(\mu, r) := \mu^4\big( \{ (x_1, x_2, x_3, x_4) \in ({\bf R}^d)^4: |x_1 + x_2 - x_3 - x_4| \leq r \} \big), \ \ \ \ \ (1)$

where $\mu^4$ is the product measure on $({\bf R}^d)^4$ formed from four copies of the measure $\mu$ on ${\bf R}^d$. We will be interested in Cantor-type measures $\mu$, supported on a compact set $X \subset {\bf R}^d$ and obeying the Ahlfors-David regularity condition

$\mu(B(x, r)) \leq C r^\delta$

for all balls $B(x, r)$ and some constants $C, \delta > 0$, as well as the matching lower bound

$\mu(B(x, r)) \geq C^{-1} r^\delta$

when $x \in X$, whenever $0 < r \leq 1$. One should think of $X$ as a $\delta$-dimensional fractal set, and $\mu$ as some vaguely self-similar measure on this set.

Note that once one fixes $x_1, x_2, x_3$, the variable $x_4$ in (1) is constrained to a ball of radius $r$, hence we obtain the trivial upper bound

${\bf E}(\mu, r) \leq C \mu({\bf R}^d)^3 r^\delta. \ \ \ \ \ (2)$
If the set $X$ contains a lot of “additive structure”, one can expect this bound to be basically sharp; for instance, if $\delta$ is an integer, $X$ is a $\delta$-dimensional unit disk, and $\mu$ is Lebesgue measure on this disk, one can verify that ${\bf E}(\mu, r) \sim r^\delta$ (where we allow implied constants to depend on $d, \delta$). However we show that if the dimension $\delta$ is non-integer, then one obtains a gain:
Theorem 1 If $0 < \delta < d$ is not an integer, and $X, \mu$ are as above, then

${\bf E}(\mu, r) \lesssim r^{\delta + c}$

for some $c > 0$ depending only on $d, \delta, C$.
Informally, this asserts that Ahlfors-David regular fractal sets of non-integer dimension cannot behave as if they are approximately closed under addition. In fact the gain we obtain is quasipolynomial in the regularity constant $C$:

$c \gg \exp\big( - O( \log^{O(1)} C ) \big).$
(We also obtain a localised version in which the regularity condition is only required to hold at scales between $r$ and $1$.) Such a result was previously obtained (with more explicit values of the implied constants) in the one-dimensional case $d = 1$ by Dyatlov and Zahl; but in higher dimensions there does not appear to have been any results for this general class of sets $X$ and measures $\mu$. In the paper of Dyatlov and Zahl it is noted that some dependence of $c$ on $C$ is necessary; in particular, $c$ cannot be much better than $1/\log C$. This reflects the fact that there are fractal sets that do behave reasonably well with respect to addition (basically because they are built out of long arithmetic progressions at many scales); however, such sets are not very Ahlfors-David regular. Among other things, this result readily implies a dimension expansion result for any non-degenerate smooth map $\phi: {\bf R}^d \times {\bf R}^d \rightarrow {\bf R}^d$, including the sum map $\phi(x, y) := x + y$ and (in one dimension) the product map $\phi(x, y) := x y$, where the non-degeneracy condition required is that the gradients $\nabla_x \phi, \nabla_y \phi$ are invertible for every $x, y$. We refer to the paper for the formal statement.

Our higher-dimensional argument shares many features in common with that of Dyatlov and Zahl, notably a reliance on the modern tools of additive combinatorics (and specifically the Bogolyubov-Ruzsa lemma of Sanders). However, in one dimension we were also able to find a completely elementary argument, avoiding any particularly advanced additive combinatorics and instead primarily exploiting the order-theoretic properties of the real line, that gave a superior value of $c$.
One of the main reasons for obtaining such improved energy bounds is that they imply a fractal uncertainty principle in some regimes. We focus attention on the model case of obtaining such an uncertainty principle for the semiclassical Fourier transform

${\mathcal F}_h f(\xi) := (2 \pi h)^{-d/2} \int_{{\bf R}^d} e^{-i x \cdot \xi / h} f(x)\ dx,$

where $h$ is a small parameter. If $X, \mu$ are as above, and $X_h$ denotes the $h$-neighbourhood of $X$, then from the Hausdorff-Young inequality one obtains the trivial bound

$\| 1_{X_h} {\mathcal F}_h 1_{X_h} \|_{L^2({\bf R}^d) \rightarrow L^2({\bf R}^d)} \leq 1.$

(There are also variants involving pairs of sets $X, Y$, but for simplicity we focus on the uncertainty principle for a single set $X$.) The fractal uncertainty principle, when it applies, asserts that one can improve this to

$\| 1_{X_h} {\mathcal F}_h 1_{X_h} \|_{L^2({\bf R}^d) \rightarrow L^2({\bf R}^d)} \lesssim h^\beta$

for some $\beta > 0$; informally, this asserts that a function and its Fourier transform cannot simultaneously be concentrated in the set $X_h$ when $\delta \leq d/2$, and that a function cannot be concentrated on $X_h$ and have its Fourier transform be of maximum size on $X_h$ when $\delta \geq d/2$. A modification of the disk example mentioned previously shows that such a fractal uncertainty principle cannot hold if $\delta$ is an integer. However, in one dimension, the fractal uncertainty principle is known to hold for all $0 < \delta < 1$. The above-mentioned results of Dyatlov and Zahl were able to establish this for $\delta$ close to $1/2$, and the remaining cases $\delta < 1/2$ and $\delta > 1/2$ were later established by Bourgain-Dyatlov and Dyatlov-Jin respectively. Such uncertainty principles have applications to hyperbolic dynamics, in particular in establishing spectral gaps for certain Selberg zeta functions.

It remains a largely open problem to establish a fractal uncertainty principle in higher dimensions. Our results allow one to establish such a principle when the dimension $\delta$ is close to $d/2$, and $d$ is assumed to be odd (to make $d/2$ a non-integer). There is also work of Han and Schlag that obtains such a principle when one of the copies of $X$ is assumed to have a product structure. We hope to obtain further higher-dimensional fractal uncertainty principles in subsequent work.
We now sketch how our main theorem is proved. In both one dimension and higher dimensions, the main point is to get a preliminary improvement

${\bf E}(\mu, r_0) \leq \epsilon r_0^\delta \ \ \ \ \ (3)$

over the trivial bound (2) for any small $\epsilon > 0$, provided $r_0$ is sufficiently small depending on $\epsilon$; one can then iterate this bound by a fairly standard “induction on scales” argument (which roughly speaking can be used to show that energies behave somewhat multiplicatively in the scale parameter $r$) to propagate the bound to a power gain at smaller scales. We found that a particularly clean way to run the induction on scales was via use of the Gowers uniformity norm $U^2$, and particularly via a clean Fubini-type inequality (ultimately proven using the Gowers-Cauchy-Schwarz inequality) that allows one to “decouple” coarse and fine scale aspects of the Gowers norms (and hence of additive energies).

It remains to obtain the preliminary improvement. In one dimension this is done by identifying some “left edges” of the set $X$ that supports $\mu$: intervals $I$ of length $r$ that intersect $X$, but such that a large interval of length $Kr$ just to the left of this interval is disjoint from $X$. Here $K$ is a large constant and $r$ is a scale parameter. It is not difficult to show (using in particular the Archimedean nature of the real line) that if one has the Ahlfors-David regularity condition for some $0 < \delta < 1$ then left edges exist in abundance at every scale; for instance most points of $X$ would be expected to lie in quite a few of these left edges (much as most elements of, say, the ternary Cantor set would be expected to contain a lot of $0$s in their base $3$ expansion). In particular, most pairs $(x_1, x_2) \in X \times X$ would be expected to lie in a pair of left edges of equal length. The key point is then that if $(x_1, x_2)$ lies in such a pair of length $r$, then there are relatively few pairs $(x_3, x_4)$ at distance $O(Kr)$ from $(x_1, x_2)$ for which one has the relation $x_1 + x_2 = x_3 + x_4 + O(r)$, because $x_3, x_4$ will both tend to be to the right of $x_1, x_2$ respectively. This causes a decrement in the energy at scale $r$, and by carefully combining all these energy decrements one can eventually cobble together the energy bound (3).
We were not able to make this argument work in higher dimension (though perhaps the cases $d = 2$ and $d = 3$ might not be completely out of reach from these methods). Instead we return to additive combinatorics methods. If the claim (3) failed, then by applying the Balog-Szemerédi-Gowers theorem we can show that the set $X$ has high correlation with an approximate group $H$, and hence (by the aforementioned Bogolyubov-Ruzsa type theorem of Sanders, which is the main source of the quasipolynomial bounds in our final exponent) will exhibit an approximate “symmetry” along some non-trivial arithmetic progression of some spacing $t$ and some diameter $T$. The $t$-neighbourhood of $X$ will then resemble the union of parallel “cylinders” of dimensions $t \times T$. If we focus on a typical $T$-ball of $X$, the set now resembles a Cartesian product of an interval of length $T$ with a subset of a $d - 1$-dimensional hyperplane, which behaves approximately like an Ahlfors-David regular set of dimension $\delta - 1$ (this already lets us conclude a contradiction if $\delta < 1$). Note that if the original dimension $\delta$ was non-integer then this new dimension $\delta - 1$ will also be non-integer. It is then possible to contradict the failure of (3) by appealing to a suitable induction hypothesis at one lower dimension.
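To build intuition for the continuous energy (1), here is the discrete analogue in a toy computation of my own (separate from the paper): the additive energy $E(A) := \#\{(a, b, c, d) \in A^4 : a + b = c + d\}$ is near its maximum order $|A|^3$ for an arithmetic progression, and near its minimum order $|A|^2$ for a Sidon-type set such as the powers of $2$:

```python
from collections import Counter

def additive_energy(A):
    # E(A) = #{(a,b,c,d) in A^4 : a+b = c+d} = sum over s of r(s)^2,
    # where r(s) counts ordered pairs (a,b) with a + b = s
    r = Counter(a + b for a in A for b in A)
    return sum(v * v for v in r.values())

ap = list(range(16))               # arithmetic progression: lots of structure
sidon = [2 ** k for k in range(16)]  # powers of 2: essentially no structure

assert additive_energy(ap) == 2736    # = (2n^3 + n)/3 for n = 16, near n^3
assert additive_energy(sidon) == 496  # = 2n^2 - n, the minimum possible order
```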
Let , be additive groups (i.e., groups with an abelian addition group law). A map is a homomorphism if one has
for all . A map is an affine homomorphism if one has
for all additive quadruples in , by which we mean that and . The two notions are closely related; it is easy to verify that is an affine homomorphism if and only if is the sum of a homomorphism and a constant.
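Since the displayed formulas are stripped in this archived copy, here is a quick brute-force sanity check of this equivalence, under the standard reading of the definitions (an affine homomorphism satisfies φ(a)+φ(b)=φ(c)+φ(d) for every additive quadruple a+b=c+d); the choice of G = H = Z/4Z is my own illustrative one, not from the post:

```python
from itertools import product

N = 4  # an illustrative choice of G = H = Z/NZ; not from the post

def is_affine_hom(phi):
    # affine homomorphism: phi(a) + phi(b) == phi(c) + phi(d) (mod N)
    # for every additive quadruple a + b == c + d (mod N)
    return all((phi[a] + phi[b] - phi[(a + b - d) % N] - phi[d]) % N == 0
               for a in range(N) for b in range(N) for d in range(N))

def is_hom_plus_constant(phi):
    # psi(x) = phi(x) - phi(0) should then be a genuine homomorphism
    psi = [(phi[x] - phi[0]) % N for x in range(N)]
    return all(psi[(a + b) % N] == (psi[a] + psi[b]) % N
               for a in range(N) for b in range(N))

# exhaustively confirm the equivalence over all N^N maps Z/NZ -> Z/NZ
assert all(is_affine_hom(phi) == is_hom_plus_constant(phi)
           for phi in product(range(N), repeat=N))
```

The forward direction of the equivalence is exactly the special case of the quadruple (a, b, a+b, 0), which is how the exhaustive check above succeeds.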
Now suppose that also has a translation-invariant metric . A map is said to be a quasimorphism if one has
for all , where denotes a quantity at a bounded distance from the origin. Similarly, is an affine quasimorphism if
for all additive quadruples in . Again, one can check that is an affine quasimorphism if and only if it is the sum of a quasimorphism and a constant (with the implied constant of the quasimorphism controlled by the implied constant of the affine quasimorphism). (Since every constant is itself a quasimorphism, it is in fact the case that affine quasimorphisms are quasimorphisms, but now the implied constant in the latter is not controlled by the implied constant of the former.)
“Trivial” examples of quasimorphisms include the sum of a homomorphism and a bounded function. Are there others? In some cases, the answer is no. For instance, suppose we have a quasimorphism . Iterating (2), we see that for any integer and natural number , which we can rewrite as for non-zero . Also, is Lipschitz. Sending , we can verify that is a Cauchy sequence as and thus tends to some limit ; we have for , hence for positive , and then one can use (2) one last time to obtain for all . Thus is the sum of the homomorphism and a bounded sequence.
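As a concrete numerical instance of this argument (again with the post's formulas stripped, so the details here are my own illustration): the sequence f(n) = ⌊αn⌋ is a quasimorphism on the integers, and the limiting procedure just described recovers the homomorphism part n ↦ αn, from which f differs boundedly.

```python
import math

alpha = math.sqrt(2)                  # an arbitrary illustrative slope
f = lambda n: math.floor(alpha * n)   # floor breaks additivity only boundedly

# quasimorphism property: the defect f(a+b) - f(a) - f(b) stays bounded
defect = max(abs(f(a + b) - f(a) - f(b))
             for a in range(-40, 41) for b in range(-40, 41))
assert defect <= 1

# the limit c = lim f(2^k)/2^k exists; here it recovers the slope alpha
assert abs(f(2**20) / 2**20 - alpha) < 1e-5

# f differs from the homomorphism n -> c*n by a bounded function
assert max(abs(f(n) - alpha * n) for n in range(-100, 101)) < 1
```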
In general, one can phrase this problem in the language of group cohomology (discussed in this previous post). Call a map a -cocycle. A -cocycle is a map obeying the identity
for all . Given a -cocycle , one can form its derivative by the formula
Such functions are called -coboundaries. It is easy to see that the abelian group of -coboundaries is a subgroup of the abelian group of -cocycles. The quotient of these two groups is the first group cohomology of with coefficients in , and is denoted .
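The degrees and symbols are stripped from this archived copy, so the following sketch assumes the usual conventions: a 0-cocycle is any map f from G to H, its derivative is df(a,b) = f(a+b) − f(a) − f(b), and the 1-cocycle identity is ρ(b,c) − ρ(a+b,c) + ρ(a,b+c) − ρ(a,b) = 0. Under those conventions, one can check numerically that every 1-coboundary satisfies the cocycle identity, for any starting function whatsoever:

```python
import random

def derivative(f):
    # derivative of a 0-cocycle f: G -> H, df(a, b) = f(a+b) - f(a) - f(b)
    return lambda a, b: f(a + b) - f(a) - f(b)

def cocycle_defect(rho, a, b, c):
    # the 1-cocycle identity: rho(b,c) - rho(a+b,c) + rho(a,b+c) - rho(a,b) = 0
    return rho(b, c) - rho(a + b, c) + rho(a, b + c) - rho(a, b)

# every 1-coboundary is a 1-cocycle, for *any* choice of f (here G = H = Z)
f = lambda n: (n ** 3 - 7 * n) % 101      # an arbitrary test function
rho = derivative(f)
random.seed(0)
for _ in range(1000):
    a, b, c = [random.randrange(-10**6, 10**6) for _ in range(3)]
    assert cocycle_defect(rho, a, b, c) == 0
```

Expanding the defect by hand, every term f(a+b+c), f(a+b), etc. appears once with each sign, so the identity holds for all f, which is what the random testing confirms.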
If a -cocycle is bounded then its derivative is a bounded -coboundary. The quotient of the group of bounded -cocycles by the derivatives of bounded -cocycles is called the bounded first group cohomology of with coefficients in , and is denoted . There is an obvious homomorphism from to , formed by taking a coset of the space of derivatives of bounded -cocycles, and enlarging it to a coset of the space of -coboundaries. By chasing all the definitions, we see that all quasimorphisms from to are the sum of a homomorphism and a bounded function if and only if this homomorphism is injective; in fact the quotient of the space of quasimorphisms by the sum of homomorphisms and bounded functions is isomorphic to the kernel of .
In additive combinatorics, one is often working with functions which only have additive structure a fraction of the time, thus for instance (1) or (3) might only hold “ of the time”. This makes it somewhat difficult to directly interpret the situation in terms of group cohomology. However, thanks to tools such as the Balog-Szemerédi-Gowers lemma, one can upgrade this sort of -structure to -structure – at the cost of restricting the domain to a smaller set. Here I record one such instance of this phenomenon, thus giving a tentative link between additive combinatorics and group cohomology. (I thank Yuval Wigderson for suggesting the problem of locating such a link.)
Theorem 1 Let , be additive groups with , let be a subset of , let , and let be a function such that
for additive quadruples in . Then there exists a subset of containing with , a subset of with , and a function such that
for all (thus, the derivative takes values in on ), and such that for each , one has
Presumably the constants and can be improved further, but we have not attempted to optimise these constants. We chose as the domain on which one has a bounded derivative, as one can use the Bogolyubov lemma (see e.g. Proposition 4.39 of my book with Van Vu) to find a large Bohr set inside . In applications, the set need not have bounded size, or even bounded doubling; for instance, in the inverse theory over a small finite field , one would be interested in the situation where is the group of matrices with coefficients in (for some large ), and is the subset consisting of those matrices of rank bounded by some bound .
Proof: By hypothesis, there are triples such that and
Thus, there is a set with such that for all , one has (6) for pairs with ; in particular, there exists such that (6) holds for values of . Setting , we conclude that for each , one has
Consider the bipartite graph whose vertex sets are two copies of , and and connected by a (directed) edge if and (7) holds. Then this graph has edges. Applying (a slight modification of) the Balog-Szemerédi-Gowers theorem (for instance by modifying the proof of Corollary 5.19 of my book with Van Vu), we can then find a subset of with with the property that for any , there exist triples such that the edges all lie in this bipartite graph. This implies that, for all , there exist septuples obeying the constraints
and for . These constraints imply in particular that
Also observe that
Thus, if and are such that , we see that
for octuples in the hyperplane
By the pigeonhole principle, this implies that for any fixed , there can be at most sets of the form with , that are pairwise disjoint. Using a greedy algorithm, we conclude that there is a set of cardinality , such that each set with , intersects for some , or in other words that
This implies that there exists a subset of with , and an element for each , such that
for all . Note that we may assume without loss of generality that and .
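The greedy step used in this part of the argument, namely extracting a maximal pairwise-disjoint subfamily so that every remaining set meets one of the chosen ones, can be sketched generically as follows; this is a toy illustration of mine, not the actual sets appearing in the proof:

```python
def greedy_disjoint(family):
    # greedily select a maximal pairwise-disjoint subfamily; by maximality,
    # every member of the family intersects some selected set
    chosen, used = [], set()
    for s in family:
        if not (s & used):
            chosen.append(s)
            used |= s
    return chosen

# a toy family of overlapping triples
family = [frozenset({i, i + 1, i + 2}) for i in range(0, 20, 2)]
picked = greedy_disjoint(family)
# every set in the family meets one of the chosen sets...
assert all(any(s & c for c in picked) for s in family)
# ...and the chosen sets are pairwise disjoint
assert all(not (c1 & c2) for c1 in picked for c2 in picked if c1 is not c2)
```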
By construction of , and permuting labels, we can find 16-tuples such that
and
for . We sum this to obtain
and hence by (8)
where . Since
we see that there are only possible values of . By the pigeonhole principle, we conclude that at most of the sets can be disjoint. Arguing as before, we conclude that there exists a set of cardinality such that
whenever (10) holds.
For any , write arbitrarily as for some (with if , and if ) and then set
Then from (11) we have (4). For we have , and (5) then follows from (9).
A sequence of complex numbers is said to be quasiperiodic if it is of the form
for some real numbers and continuous function . For instance, linear phases such as (where ) are examples of quasiperiodic sequences; the top order coefficient (modulo ) can be viewed as a “frequency” of the integers, and an element of the Pontryagin dual of the integers. Any periodic sequence is also quasiperiodic (taking and to be the reciprocal of the period). A sequence is said to be almost periodic if it is the uniform limit of quasiperiodic sequences. For instance any Fourier series of the form
with real numbers and an absolutely summable sequence of complex coefficients, will be almost periodic.
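To illustrate with my own toy numbers (not the post's): the linear phase n ↦ e(αn) with α = √2 is quasiperiodic, and the denominators of the continued-fraction convergents of α serve as "almost periods" along which the sequence nearly repeats.

```python
import cmath, math

alpha = math.sqrt(2)
e = lambda theta: cmath.exp(2j * math.pi * theta)
f = lambda n: e(alpha * n)          # the quasiperiodic sequence n -> e(alpha*n)

# 577/408 is a continued-fraction convergent of sqrt(2), so q = 408 is an
# almost-period: |f(n + q) - f(n)| = |e(alpha*q) - 1| is uniformly small in n
q = 408
err = max(abs(f(n + q) - f(n)) for n in range(2000))
assert err < 0.01
```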
These sequences arise in various “complexity one” problems in arithmetic combinatorics and ergodic theory. For instance, if is a measure-preserving system – a probability space equipped with a measure-preserving shift, and are bounded measurable functions, then the correlation sequence
can be shown to be an almost periodic sequence, plus an error term which is “null” in the sense that it has vanishing uniform density:
This can be established in a number of ways, for instance by writing as the Fourier coefficients of the spectral measure of the shift with respect to the functions , and then decomposing that measure into pure point and continuous components.
In the last two decades or so, it has become clear that there are natural higher order versions of these concepts, in which linear polynomials such as are replaced with higher degree counterparts. The most obvious candidates for these counterparts would be the polynomials , but this turns out to not be a complete set of higher degree objects needed for the theory. Instead, the higher order versions of quasiperiodic and almost periodic sequences are now known as basic nilsequences and nilsequences respectively, while the higher order version of a linear phase is a nilcharacter; each nilcharacter then has a symbol that is a higher order generalisation of the concept of a frequency (and the collection of all symbols forms a group that can be viewed as a higher order version of the Pontryagin dual of ). The theory of these objects is spread out in the literature across a number of papers; in particular, the theory of nilcharacters is mostly developed in Appendix E of this 116-page paper of Ben Green, Tamar Ziegler, and myself, and is furthermore written using nonstandard analysis and treating the more general setting of higher dimensional sequences. I therefore decided to rewrite some of that material in this blog post, in the simpler context of the qualitative asymptotic theory of one-dimensional nilsequences and nilcharacters rather than the quantitative single-scale theory that is needed for combinatorial applications (and which necessitated the use of nonstandard analysis in the previous paper).
For technical reasons (having to do with the non-trivial topological structure on nilmanifolds), it will be convenient to work with vector-valued sequences, that take values in a finite-dimensional complex vector space rather than in . By doing so, the space of sequences is now, technically, no longer a ring, as the operations of addition and multiplication on vector-valued sequences become ill-defined. However, we can still take complex conjugates of a sequence, and add sequences taking values in the same vector space , and for sequences taking values in different vector spaces , , we may utilise the tensor product , which we will normalise by defining
This product is associative and bilinear, and also commutative up to permutation of the indices. It also interacts well with the Hermitian norm
since we have .
The traditional definition of a basic nilsequence (as defined for instance by Bergelson, Host, and Kra) is as follows:
Definition 1 (Basic nilsequence, first definition) A nilmanifold of step at most is a quotient , where is a connected, simply connected nilpotent Lie group of step at most (thus, all -fold commutators vanish) and is a discrete cocompact lattice in . A basic nilsequence of degree at most is a sequence of the form , where , , and is a continuous function.
For instance, it is not difficult using this definition to show that a sequence is a basic nilsequence of degree at most if and only if it is quasiperiodic. The requirement that be simply connected can be easily removed if desired by passing to a universal cover, but it is technically convenient to assume it (among other things, it allows for a well-defined logarithm map that obeys the Baker-Campbell-Hausdorff formula). When one wishes to perform a more quantitative analysis of nilsequences (particularly when working on a “single scale”, such as on a single long interval ), it is common to impose additional regularity conditions on the function , such as Lipschitz continuity or smoothness, but ordinary continuity will suffice for the qualitative discussion in this blog post.
Nowadays, and particularly when one needs to understand the “single-scale” equidistribution properties of nilsequences, it is more convenient (as is for instance done in this ICM paper of Green) to use an alternate definition of a nilsequence as follows.
Definition 2 Let . A filtered group of degree at most is a group together with a sequence of subgroups with and for . A polynomial sequence into a filtered group is a function such that for all and , where is the difference operator. A filtered nilmanifold of degree at most is a quotient , where is a filtered group of degree at most such that and all of the subgroups are connected, simply connected nilpotent Lie groups, and is a discrete cocompact subgroup of such that is a discrete cocompact subgroup of . A basic nilsequence of degree at most is a sequence of the form , where is a polynomial sequence, is a filtered nilmanifold of degree at most , and is a continuous function which is -automorphic, in the sense that for all and .
One can easily identify a -automorphic function on with a function on , but there are some (very minor) advantages to working on the group instead of the quotient , as it becomes slightly easier to modify the automorphy group when needed. (But because the action of on is free, one can pass from -automorphic functions on to functions on with very little difficulty.) The main reason to work with polynomial sequences rather than geometric progressions is that they form a group, a fact essentially established by Lazard and Leibman; see Corollary B.4 of this paper of Green, Ziegler, and myself for a proof in the filtered group setting.
It is easy to see that any sequence that is a basic nilsequence of degree at most in the sense of the first definition, is also a basic nilsequence of degree at most in the second definition, since a nilmanifold of degree at most can be filtered using the lower central series, and any linear sequence will be a polynomial sequence with respect to that filtration. The converse implication is a little trickier, but still not too hard to show: see Appendix C of this paper of Ben Green, Tamar Ziegler, and myself. There are two key examples of basic nilsequences to keep in mind. The first are the polynomially quasiperiodic sequences
where are polynomials of degree at most , and is a -automorphic (i.e., -periodic) continuous function. The map defined by is a polynomial map of degree at most , if one filters by defining to equal when , and for . The torus then becomes a filtered nilmanifold of degree at most , and is thus a basic nilsequence of degree at most as per the second definition. It is also possible to explicitly describe as a basic nilsequence of degree at most as per the first definition, for instance (in the case) by taking to be the space of upper triangular unipotent real matrices, and the subgroup with integer coefficients; we leave the details to the interested reader.
The other key example is a sequence of the form
where are real numbers, denotes the fractional part of , and and is a -automorphic continuous function that vanishes in a neighbourhood of . To describe this as a nilsequence, we use the nilpotent connected, simply connected degree , Heisenberg group
with the lower central series filtration , , and for , to be the discrete cocompact subgroup
to be the polynomial sequence
and to be the -automorphic function
one easily verifies that this function is indeed -automorphic, and it is continuous thanks to the vanishing properties of . Also we have , so is a basic nilsequence of degree at most . One can concoct similar examples with replaced by other “bracket polynomials” of ; for instance
will be a basic nilsequence if now vanishes in a neighbourhood of rather than . See this paper of Bergelson and Leibman for more discussion of bracket polynomials (also known as generalised polynomials) and their relationship to nilsequences.
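One can verify the polynomial-sequence axioms in the Heisenberg group directly by computer. The following sketch uses my own coordinates, identifying the Heisenberg group with integer triples (x, y, z) under (x1,y1,z1)·(x2,y2,z2) = (x1+x2, y1+y2, z1+z2+x1·y2), and checks that g(n) = AⁿB^(n(n−1)/2), with B central, has second differences in G₂ and trivial third differences, as required of a degree-2 polynomial sequence with respect to the lower central series filtration:

```python
def mul(g, h):
    # Heisenberg multiplication in coordinates (x, y, z)
    return (g[0] + h[0], g[1] + h[1], g[2] + h[2] + g[0] * h[1])

def inv(g):
    x, y, z = g
    return (-x, -y, x * y - z)

def power(g, n):
    out = (0, 0, 0)
    base = g if n >= 0 else inv(g)
    for _ in range(abs(n)):
        out = mul(out, base)
    return out

A, B = (1, 1, 0), (0, 0, 1)                  # B generates the centre G_2
g = lambda n: mul(power(A, n), power(B, n * (n - 1) // 2))

D = lambda s: (lambda n: mul(s(n + 1), inv(s(n))))   # difference operator
for n in range(-5, 6):
    assert D(D(g))(n)[:2] == (0, 0)          # second difference lies in G_2
    assert D(D(D(g)))(n) == (0, 0, 0)        # third difference is the identity
```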
A nilsequence of degree at most is defined to be a sequence that is the uniform limit of basic nilsequences of degree at most . Thus for instance a sequence is a nilsequence of degree at most if and only if it is almost periodic, while a sequence is a nilsequence of degree at most if and only if it is constant. Such objects arise in higher order recurrence: for instance, if are integers, is a measure-preserving system, and , then it was shown by Leibman that the sequence
is equal to a nilsequence of degree at most , plus a null sequence. (The special case when the measure-preserving system was ergodic and for was previously established by Bergelson, Host, and Kra.) Nilsequences also arise in the inverse theory of the Gowers uniformity norms, as discussed for instance in this previous post.
It is easy to see that a sequence is a basic nilsequence of degree at most if and only if each of its components are. The scalar basic nilsequences of degree are easily seen to form a -algebra (that is to say, they are a complex vector space closed under pointwise multiplication and complex conjugation), which implies similarly that vector-valued basic nilsequences of degree at most form a complex vector space closed under complex conjugation for each , and that the tensor product of any two basic nilsequences of degree at most is another basic nilsequence of degree at most . Similarly with “basic nilsequence” replaced by “nilsequence” throughout.
Now we turn to the notion of a nilcharacter, as defined in this paper of Ben Green, Tamar Ziegler, and myself:
Definition 3 (Nilcharacters) Let . A sub-nilcharacter of degree is a basic nilsequence of degree at most , such that obeys the additional modulation property
for all and , where is a continuous homomorphism . (Note from (1) and -automorphy that unless vanishes identically, must map to , thus without loss of generality one can view as an element of the Pontryagin dual of the torus .) If in addition one has for all , we call a nilcharacter of degree .
In the degree one case , the only sub-nilcharacters are of the form for some vector and , and this is a nilcharacter if is a unit vector. Similarly, in higher degree, any sequence of the form , where is a vector and is a polynomial of degree at most , is a sub-nilcharacter of degree , and a nilcharacter if is a unit vector. A nilsequence of degree at most is automatically a sub-nilcharacter of degree , and a nilcharacter if it is of magnitude . A further example of a nilcharacter is provided by the two-dimensional sequence defined by
where are continuous, -automorphic functions that vanish on a neighbourhood of and respectively, and which form a partition of unity in the sense that
for all . Note that one needs both and to be not identically zero in order for all these conditions to be satisfied; it turns out (for topological reasons) that there is no scalar nilcharacter that is “equivalent” to this nilcharacter in a sense to be defined shortly. In some literature, one works exclusively with sub-nilcharacters rather than nilcharacters; however, the former space contains zero-divisors, which is a little annoying technically. Nevertheless, both nilcharacters and sub-nilcharacters generate the same set of “symbols”, as we shall see later.
We claim that every degree sub-nilcharacter can be expressed in the form , where is a degree nilcharacter, and is a linear transformation. Indeed, by scaling we may assume where uniformly. Using partitions of unity, one can find further functions also obeying (1) for the same character such that is non-zero; by dividing out the by the square root of this quantity, and then multiplying by , we may assume that
and then
becomes a degree nilcharacter that contains amongst its components, giving the claim.
As we shall show below, nilsequences can be approximated uniformly by linear combinations of nilcharacters, in much the same way that quasiperiodic or almost periodic sequences can be approximated uniformly by linear combinations of linear phases. In particular, nilcharacters can be used as “obstructions to uniformity” in the sense of the inverse theory of the Gowers uniformity norms.
The space of degree nilcharacters forms a semigroup under tensor product, with the constant sequence as the identity. One can upgrade this semigroup to an abelian group by quotienting nilcharacters out by equivalence:
Definition 4 Let . We say that two degree nilcharacters , are equivalent if is equal (as a sequence) to a basic nilsequence of degree at most . (We will later show that this is indeed an equivalence relation.) The equivalence class of such a nilcharacter will be called the symbol of that nilcharacter (in analogy to the symbol of a differential or pseudodifferential operator), and the collection of such symbols will be denoted .
As we shall see below the fold, has the structure of an abelian group, and enjoys some nice “symbol calculus” properties; also, one can view symbols as precisely describing the obstruction to equidistribution for nilsequences. For , the group is isomorphic to the Pontryagin dual of the integers, and for should be viewed as higher order generalisations of this Pontryagin dual. In principle, this group can be explicitly described for all , but the theory rapidly gets complicated as increases (much as the classification of nilpotent Lie groups or Lie algebras of step rapidly gets complicated even for medium-sized such as or ). We will give an explicit description of the case here. There is however one nice (and non-trivial) feature of for – it is not just an abelian group, but is in fact a vector space over the rationals !
Van Vu and I just posted to the arXiv our paper “sum-free sets in groups” (submitted to Discrete Analysis), as well as a companion survey article (submitted to J. Comb.). Given a subset of an additive group , define the quantity to be the cardinality of the largest subset of which is sum-free in in the sense that all the sums with distinct elements of lie outside of . For instance, if is itself a group, then , since no two elements of can sum to something outside of . More generally, if is the union of groups, then is at most , thanks to the pigeonhole principle.
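For small examples, the quantity in question (call it phi(A) here; the symbol is stripped in this archived copy) can be computed by brute force. The following sketch of mine confirms the two facts just mentioned inside Z/6Z: a full subgroup has phi equal to 1, and a union of two subgroups has phi at most 2.

```python
from itertools import combinations

def phi(A, add):
    # size of the largest B ⊆ A whose pairwise sums add(a, b), over
    # distinct a, b in B, all land outside A
    A = list(A)
    Aset = set(A)
    for size in range(len(A), 0, -1):
        for B in combinations(A, size):
            if all(add(a, b) not in Aset for a, b in combinations(B, 2)):
                return size
    return 0

add6 = lambda a, b: (a + b) % 6
assert phi(range(6), add6) == 1        # A a full group: phi(A) = 1
assert phi({0, 2, 3, 4}, add6) == 2    # A = {0,3} ∪ {0,2,4}, a union of
                                       # two subgroups of Z/6Z
```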
If is the integers, then there are no non-trivial subgroups, and one can thus expect to start growing with . For instance, one has the following easy result:
Proof: We use an argument of Ruzsa, which is based in turn on an older argument of Choi. Let be the largest element of , and then recursively, once has been selected, let be the largest element of not equal to any of the , such that for all , terminating this construction when no such can be located. This gives a sequence of elements in which are sum-free in , and with the property that for any , either is equal to one of the , or else for some with . Iterating this, we see that any is of the form for some and . The number of such expressions is at most , thus which implies . Since , the claim follows.
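Here is a sketch of mine of the Ruzsa/Choi greedy selection for a finite set of positive integers; the quantitative logarithmic lower bound from the proof is not verified here, only that the output is sum-free in A.

```python
from itertools import combinations

def greedy_sum_free(A):
    # scan A from largest to smallest, keeping x whenever x + b avoids A
    # for every b already kept (the greedy construction from the proof)
    Aset = set(A)
    B = []
    for x in sorted(Aset, reverse=True):
        if all(x + b not in Aset for b in B):
            B.append(x)
    return B

A = range(1, 200)
B = greedy_sum_free(A)
# B is sum-free in A: no sum of two distinct elements of B lands back in A
assert all(a + b not in set(A) for a, b in combinations(B, 2))
# for an interval the greedy set is simply the top half, in descending order
assert B == list(range(199, 99, -1))
```

For an interval the greedy set is very large; the interesting regime, where the logarithmic bound is what the construction guarantees, is for sparse sets such as geometric-type progressions.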
In particular, we have for subsets of the integers. It has been possible to improve upon this easy bound, but only with remarkable effort. The best lower bound currently is
a result of Shao (building upon earlier work of Sudakov, Szemerédi, and Vu and of Dousse). In the opposite direction, a construction of Ruzsa gives examples of large sets with .
Using the standard tool of Freiman homomorphisms, the above results for the integers extend to other torsion-free abelian groups . In our paper we study the opposite case where is finite (but still abelian). In this paper of Erdős (in which the quantity was first introduced), the following question was posed: if is sufficiently large depending on , does this imply the existence of two elements with ? As it turns out, we were able to find some simple counterexamples to this statement. For instance, if is any finite additive group, then the set has but with no summing to zero; this type of example in fact works with replaced by any larger Mersenne prime, and we also have a counterexample in for arbitrarily large. However, in the positive direction, we can show that the answer to Erdős’s question is positive if is assumed to have no small prime factors. That is to say,
Theorem 2 For every there exists such that if is a finite abelian group whose order is not divisible by any prime less than or equal to , and is a subset of with order at least and , then there exist with .
There are two main tools used to prove this result. One is an “arithmetic removal lemma” proven by Král, Serra, and Vena. Note that the condition means that for any distinct , at least one of the , , must also lie in . Roughly speaking, the arithmetic removal lemma allows one to “almost” remove the requirement that be distinct, which basically now means that for almost all . This near-dilation symmetry, when combined with the hypothesis that has no small prime factors, gives a lot of “dispersion” in the Fourier coefficients of which can now be exploited to prove the theorem.
The second tool is the following structure theorem, which is the main result of our paper, and goes a fair ways towards classifying sets for which is small:
Theorem 3 Let be a finite subset of an arbitrary additive group , with . Then one can find finite subgroups with such that and . Furthermore, if , then the exceptional set is empty.
Roughly speaking, this theorem shows that the example of the union of subgroups mentioned earlier is more or less the “only” example of sets with , modulo the addition of some small exceptional sets and some refinement of the subgroups to dense subsets.
This theorem has the flavour of other inverse theorems in additive combinatorics, such as Freiman’s theorem, and indeed one can use Freiman’s theorem (and related tools, such as the Balog-Szemerédi theorem) to easily get a weaker version of this theorem. Indeed, if there are no sum-free subsets of of order , then a fraction of all pairs in must have their sum also in (otherwise one could take random elements of and they would be sum-free in with positive probability). From this and the Balog-Szemerédi theorem and Freiman’s theorem (in arbitrary abelian groups, as established by Green and Ruzsa), we see that must be “commensurate” with a “coset progression” of bounded rank. One can then eliminate the torsion-free component of this coset progression by a number of methods (e.g. by using variants of the argument in Proposition 1), with the upshot being that one can locate a finite group that has large intersection with .
At this point it is tempting to simply remove from and iterate. But one runs into a technical difficulty that removing a set such as from can alter the quantity in unpredictable ways, so one has to still keep around when analysing the residual set . A second difficulty is that the latter set could be considerably smaller than or , but still large in absolute terms, so in particular any error term whose size is only bounded by for a small could be massive compared with the residual set , and so such error terms would be unacceptable. One can get around these difficulties if one first performs some preliminary “normalisation” of the group , so that the residual set does not intersect any coset of too strongly. The arguments become even more complicated when one starts removing more than one group from and analyses the residual set ; indeed the “epsilon management” involved became so fearsomely intricate that we were forced to use a nonstandard analysis formulation of the problem in order to keep the complexity of the argument at a reasonable level (cf. my previous blog post on this topic). One drawback of doing so is that we have no effective bounds for the implied constants in our main theorem; it would be of interest to obtain a more direct proof of our main theorem that would lead to effective bounds.
I’ve just uploaded to the arXiv my paper “Inverse theorems for sets and measures of polynomial growth“. This paper was motivated by two related questions. The first question was to obtain a qualitatively precise description of the sets of polynomial growth that arise in Gromov’s theorem, in much the same way that Freiman’s theorem (and its generalisations) provide a qualitatively precise description of sets of small doubling. The other question was to obtain a non-abelian analogue of inverse Littlewood-Offord theory.
Let me discuss the former question first. Gromov’s theorem tells us that if a finite subset of a group exhibits polynomial growth in the sense that grows polynomially in , then the group generated by is virtually nilpotent (the converse direction is also true, and is relatively easy to establish). This theorem has been strengthened a number of times over the years. For instance, a few years ago, I proved with Shalom that the condition that grew polynomially in could be replaced by for a single , as long as was sufficiently large depending on (in fact we gave a fairly explicit quantitative bound on how large needed to be). A little more recently, with Breuillard and Green, the condition was weakened to , that is to say it sufficed to have polynomial relative growth at a finite scale. In fact, the latter paper gave more information on in this case, roughly speaking it showed (at least in the case when was a symmetric neighbourhood of the identity) that was “commensurate” with a very structured object known as a coset nilprogression. This can then be used to establish further control on . For instance, it was recently shown by Breuillard and Tointon (again in the symmetric case) that if for a single that was sufficiently large depending on , then all the for have a doubling constant bounded by a bound depending only on , thus for all .
In this paper we are able to refine this analysis a bit further; under the same hypotheses, we can show an estimate of the form
for all and some piecewise linear, continuous, non-decreasing function with , where the error is bounded by a constant depending only on , and where has at most pieces, each of which has a slope that is a natural number of size . To put it another way, the function for behaves (up to multiplicative constants) like a piecewise polynomial function, where the degree of the function and number of pieces is bounded by a constant depending on .
One could ask whether the function has any convexity or concavity properties. It turns out that it can exhibit either convex or concave behaviour (or a combination of both). For instance, if is contained in a large finite group, then will eventually plateau to a constant, exhibiting concave behaviour. On the other hand, in nilpotent groups one can see convex behaviour; for instance, in the Heisenberg group , if one sets to be a set of matrices of the form for some large (abusing the notation somewhat), then grows cubically for but then grows quartically for .
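The convex (here eventually quartic) growth in the Heisenberg group can be observed numerically. This sketch of mine uses the discrete Heisenberg group with the two standard generators and their inverses, rather than the specific generating set described in the post:

```python
def mul(g, h):
    # discrete Heisenberg group on integer triples (x, y, z):
    # (x1,y1,z1)*(x2,y2,z2) = (x1+x2, y1+y2, z1+z2+x1*y2)
    return (g[0] + h[0], g[1] + h[1], g[2] + h[2] + g[0] * h[1])

gens = [(0, 0, 0), (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
ball = {(0, 0, 0)}
sizes = []
for _ in range(8):
    ball = {mul(g, s) for g in gens for s in ball}
    sizes.append(len(ball))

# |A^n| grows like n^4 (4 being the homogeneous dimension of the Heisenberg
# group), so doubling the radius should multiply the size by roughly 16
assert sizes[0] == 5
assert all(a < b for a, b in zip(sizes, sizes[1:]))
assert sizes[7] > 4 * sizes[3]     # comfortably faster than quadratic growth
```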
To prove this proposition, it turns out (after using a somewhat difficult inverse theorem proven previously by Breuillard, Green, and myself) that one has to analyse the volume growth of nilprogressions . In the “infinitely proper” case where there are no unexpected relations between the generators of the nilprogression, one can lift everything to a simply connected Lie group (where one can take logarithms and exploit the Baker-Campbell-Hausdorff formula heavily), eventually describing with fair accuracy by a certain convex polytope with vertices depending polynomially on , which implies that depends polynomially on up to constants. If one is not in the “infinitely proper” case, then at some point the nilprogression develops a “collision”, but then one can use this collision to show (after some work) that the dimension of the “Lie model” of has dropped by at least one from the dimension of (the notion of a Lie model being developed in the previously mentioned paper of Breuillard, Green, and myself), so that this sort of collision can only occur a bounded number of times, with essentially polynomial volume growth behaviour between these collisions.
The arguments also give a precise description of the location of a set for which grows polynomially in . In the symmetric case, what ends up happening is that becomes commensurate to a “coset nilprogression” of bounded rank and nilpotency class, whilst is “virtually” contained in a scaled down version of that nilprogression. What “virtually” means is a little complicated; roughly speaking, it means that there is a set of bounded cardinality such that for all . Conversely, if is virtually contained in , then is commensurate to (and more generally, is commensurate to for any natural number ), giving quite a (qualitatively) precise description of in terms of coset nilprogressions.
The main tool used to prove these results is the structure theorem for approximate groups established by Breuillard, Green, and myself, which roughly speaking asserts that approximate groups are always commensurate with coset nilprogressions. A key additional trick is a pigeonholing argument of Sanders, which in this context is the assertion that if is comparable to , then there is an between and such that is very close in size to (up to a relative error of ). It is this fact, together with the comparability of to a coset nilprogression , that allows us (after some combinatorial argument) to virtually place inside .
Similar arguments apply when discussing iterated convolutions of (symmetric) probability measures on a (discrete) group , rather than combinatorial powers of a finite set. Here, the analogue of volume is given by the negative power of the norm of (thought of as a non-negative function on of total mass 1). One can also work with other norms here than , but this norm has some minor technical conveniences (and other measures of the “spread” of end up being more or less equivalent for our purposes). There is an analogous structure theorem that asserts that if spreads at most polynomially in , then is “commensurate” with the uniform probability distribution on a coset progression , and itself is largely concentrated near . The factor of here is the familiar scaling factor in random walks that arises for instance in the central limit theorem. The proof of (the precise version of) this statement proceeds similarly to the combinatorial case, using pigeonholing to locate a scale where has almost the same norm as .
A special case of this theory occurs when is the uniform probability measure on elements of and their inverses. The probability measure is then the distribution of a random product , where each is equal to one of or its inverse , selected at random with drawn uniformly from with replacement. This is very close to the Littlewood-Offord situation of random products where each is equal to or selected independently at random (thus is now fixed to equal rather than being randomly drawn from ). In the case when is abelian, it turns out that a little bit of Fourier analysis shows that these two random walks have “comparable” distributions in a certain sense. As a consequence, the results in this paper can be used to recover an essentially optimal abelian inverse Littlewood-Offord theorem of Nguyen and Vu. In the nonabelian case, the only Littlewood-Offord theorem I am aware of is a recent result of Tiep and Vu for matrix groups, but in this case I do not know how to relate the above two random walks to each other, and so we can only obtain an analogue of the Tiep-Vu results for the symmetrised random walk instead of the ordered random walk .
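In the simplest abelian instance of the Littlewood-Offord situation, Erdős' classical theorem states that for nonzero real steps, the largest point probability of the random signed sum is at most the central binomial coefficient divided by the number of sign patterns, with equality when all the steps coincide. The following brute-force sketch (a toy check, not from the paper) computes the exact distribution for a small number of steps and compares against that bound:

```python
from itertools import product
from math import comb

def max_atom(v):
    """Largest point probability of sum of eps_i * v_i with eps_i = +-1 uniform."""
    counts = {}
    for eps in product([-1, 1], repeat=len(v)):
        s = sum(e * x for e, x in zip(eps, v))
        counts[s] = counts.get(s, 0) + 1
    return max(counts.values()) / 2 ** len(v)

n = 10
bound = comb(n, n // 2) / 2 ** n           # Erdos' Littlewood-Offord bound
assert max_atom([1] * n) == bound          # equality when all steps coincide
assert max_atom(list(range(1, n + 1))) < bound  # distinct steps spread the walk out
```

An inverse Littlewood-Offord theorem, as in the Nguyen-Vu result mentioned above, goes in the other direction: it describes the arithmetic structure that the steps must have when the maximal atom is unusually large.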
Suppose that are two subgroups of some ambient group , with the index of in being finite. Then is the union of left cosets of , thus for some set of cardinality . The elements of are not entirely arbitrary with regards to . For instance, if is a normal subgroup of , then for each , the conjugation map preserves . In particular, if we write for the conjugate of by , then
Even if is not normal in , it turns out that the conjugation map approximately preserves , if is bounded. To quantify this, let us call two subgroups -commensurate for some if one has
Proposition 1 Let be groups, with finite index . Then for every , the groups and are -commensurate, in fact
Proof: One can partition into left translates of , as well as left translates of . Combining the partitions, we see that can be partitioned into at most non-empty sets of the form . Each of these sets is easily seen to be a left translate of the subgroup , thus . Since
and , we obtain the claim.
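As a quick sanity check of Proposition 1 (not part of the argument above), one can verify the index bound by brute force in a small group: take the ambient group to be the symmetric group on four letters, the first subgroup to be the alternating group, and the second to be a dihedral subgroup of order 8 (so of index 3). The helper functions here are ad hoc and purely illustrative.

```python
from itertools import permutations

def compose(p, q):                  # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(gens):
    """Subgroup generated by gens (finite, so positive words suffice)."""
    elems, frontier = set(gens), list(gens)
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = compose(x, g)
            if y not in elems:
                elems.add(y)
                frontier.append(y)
    return elems

def sign(p):
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

G = list(permutations(range(4)))            # S_4, order 24
H = {p for p in G if sign(p) == 1}          # A_4, order 12
K = closure({(1, 2, 3, 0), (2, 1, 0, 3)})   # dihedral subgroup, order 8
index_K = len(G) // len(K)                  # index 3

worst = 0
for g in G:
    g_inv = inverse(g)
    Kg = {compose(compose(g, k), g_inv) for k in K}   # conjugated subgroup
    worst = max(worst, len(H) // len(H & Kg))
assert worst <= index_K                     # the bound of Proposition 1
```

In this example the intersection of the alternating group with every conjugate of the dihedral subgroup is the Klein four-group, so the index bound is in fact attained with equality.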
One can replace the inclusion by commensurability, at the cost of some worsening of the constants:
Corollary 2 Let be -commensurate subgroups of . Then for every , the groups and are -commensurate.
Proof: Applying the previous proposition with replaced by , we see that for every , and are -commensurate. Since and have index at most in and respectively, the claim follows.
It turns out that a similar phenomenon holds for the more general concept of an approximate group, and gives a “classification” of all the approximate groups containing a given approximate group as a “bounded index approximate subgroup”. Recall that a -approximate group in a group for some is a symmetric subset of containing the identity, such that the product set can be covered by at most left translates of (and thus also right translates, by the symmetry of ). For simplicity we will restrict attention to finite approximate groups so that we can use their cardinality as a measure of size. We call two finite approximate groups -commensurate if one has
note that this is consistent with the previous notion of commensurability for genuine groups.
Theorem 3 Let be a group, and let be real numbers. Let be a finite -approximate group, and let be a symmetric subset of that contains .
- (i) If is a -approximate group with , then one has for some set of cardinality at most . Furthermore, for each , the approximate groups and are -commensurate.
- (ii) Conversely, if for some set of cardinality at most , and and are -commensurate for all , then , and is a -approximate group.
Informally, the assertion that is an approximate group containing as a “bounded index approximate subgroup” is equivalent to being covered by a bounded number of shifts of , where approximately normalises in the sense that and have large intersection. Thus, to classify all such , the problem essentially reduces to that of classifying those that approximately normalise .
To prove the theorem, we recall some standard lemmas from arithmetic combinatorics, which are the foundation stones of the “Ruzsa calculus” that we will use to establish our results:
Lemma 4 (Ruzsa covering lemma) If and are finite non-empty subsets of , then one has for some set with cardinality .
Proof: We take to be a subset of with the property that the translates are disjoint in , and such that is maximal with respect to set inclusion. The required properties of are then easily verified.
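The maximal-disjoint-translates proof above is effectively a greedy algorithm, and can be made concrete in the additive setting. The following sketch (with arbitrary illustrative sets, assuming nothing beyond the lemma itself) picks translates greedily and then checks both conclusions of the covering lemma: the chosen set of centers is small, and its translates of the difference set cover the original set.

```python
def sumset(A, B):
    return {a + b for a in A for b in B}

def ruzsa_cover(A, B):
    """Greedily pick X inside A with the translates x+B pairwise disjoint.
    Maximality then forces A to lie inside X + B - B, and the disjointness
    of the translates inside A+B forces |X| <= |A+B| / |B|."""
    X, used = [], set()
    for a in sorted(A):
        tr = {a + b for b in B}
        if used.isdisjoint(tr):
            X.append(a)
            used |= tr
    return X

A = set(range(0, 40, 3)) | {100, 101}     # illustrative sets in Z
B = set(range(0, 5))
X = ruzsa_cover(A, B)
negB = {-b for b in B}
cover = sumset(sumset(set(X), B), negB)   # X + B - B
assert A <= cover                         # covering conclusion
assert len(X) * len(B) <= len(sumset(A, B))   # cardinality conclusion
```

Both assertions hold for any choice of finite nonempty sets, since they are exactly the two properties verified in the proof.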
Lemma 5 (Ruzsa triangle inequality) If are finite non-empty subsets of , then
Proof: If is an element of with and , then from the identity we see that can be written as the product of an element of and an element of in at least distinct ways. The claim follows.
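The counting identity driving this proof — each product of an element of the second set with one of the third factors through every element of the first set — can be spot-checked numerically. Here is a sketch in the integers (additive notation, so the identity reads b + c = (b + a) + (-a + c)), on randomly generated sets:

```python
import random

def sumset(A, B):
    return {a + b for a in A for b in B}

random.seed(1)
for _ in range(50):
    A = set(random.sample(range(-30, 30), 8))
    B = set(random.sample(range(-30, 30), 8))
    C = set(random.sample(range(-30, 30), 8))
    negA = {-a for a in A}
    lhs = len(A) * len(sumset(B, C))
    rhs = len(sumset(B, A)) * len(sumset(negA, C))
    assert lhs <= rhs          # Ruzsa triangle inequality, additive form
```

The loop never fails, of course: the inequality is a theorem, and the point of the sketch is only to make the bookkeeping in the proof concrete.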
Now we can prove (i). By the Ruzsa covering lemma, can be covered by at most
left-translates of , and hence by at most left-translates of , thus for some . Since only intersects if , we may assume that
and hence for any
By the Ruzsa covering lemma again, this implies that can be covered by at most left-translates of , and hence by at most left-translates of . By the pigeonhole principle, there thus exists a group element with
Since
and
the claim follows.
Now we prove (ii). Clearly
Now we control the size of . We have
From the Ruzsa triangle inequality and symmetry we have
and thus
By the Ruzsa covering lemma, this implies that is covered by at most left-translates of , hence by at most left-translates of . Since , the claim follows.
We now establish some auxiliary propositions about commensurability of approximate groups. The first claim is that commensurability is approximately transitive:
Proposition 6 Let be a -approximate group, be a -approximate group, and be a -approximate group. If and are -commensurate, and and are -commensurate, then and are -commensurate.
Proof: From two applications of the Ruzsa triangle inequality we have
By the Ruzsa covering lemma, we may thus cover by at most left-translates of , and hence by left-translates of . By the pigeonhole principle, there thus exists a group element such that
and so by arguing as in the proof of part (i) of the theorem we have
and similarly
and the claim follows.
The next proposition asserts that the union and (modified) intersection of two commensurate approximate groups is again an approximate group:
Proposition 7 Let be a -approximate group, be a -approximate group, and suppose that and are -commensurate. Then is a -approximate subgroup, and is a -approximate subgroup.
Using this proposition, one may obtain a variant of the previous theorem where the containment is replaced by commensurability; we leave the details to the interested reader.
Proof: We begin with . Clearly is symmetric and contains the identity. We have . The set is already covered by left translates of , and hence of ; similarly is covered by left translates of . As for , we observe from the Ruzsa triangle inequality that
and hence by the Ruzsa covering lemma, is covered by at most left translates of , and hence by left translates of , and hence of . Similarly is covered by at most left translates of . The claim follows.
Now we consider . Again, this is clearly symmetric and contains the identity. Repeating the previous arguments, we see that is covered by at most left-translates of , and hence there exists a group element with
Now observe that
and so by the Ruzsa covering lemma, can be covered by at most left-translates of . But this latter set is (as observed previously) contained in , and the claim follows.
In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence of finite graphs, one can extract a subsequence which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function . What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon . For instance, the edge density
converges to the integral
the triangle density
converges to the integral
the four-cycle density
converges to the integral
and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
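The convergence of subgraph densities can be watched numerically by sampling a random graph from a given graphon. As an illustrative sketch (the graphon W(x,y) = xy is an arbitrary choice of mine, not from the text), the edge density of the sample should approach the double integral of W, which is 1/4, and the triangle homomorphism density should approach the triple integral, which is 1/27:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Sample an n-vertex W-random graph from the graphon W(x,y) = xy:
# vertex labels x_i are uniform in [0,1], and edge ij appears
# independently with probability W(x_i, x_j).
x = rng.random(n)
P = np.outer(x, x)
U = rng.random((n, n))
A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
A = A + A.T                                   # symmetric, no loops

edge_density = A.sum() / n**2                 # -> int int xy dx dy = 1/4
M = A @ A
tri_density = (M * A).sum() / n**3            # trace(A^3)/n^3 -> (1/3)^3 = 1/27

assert abs(edge_density - 0.25) < 0.03
assert abs(tri_density - 1 / 27) < 0.01
```

The tolerances are loose Monte Carlo margins; the fluctuations shrink as the number of vertices grows, which is the quantitative content of "convergence to the graphon".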
One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ) to obtain a nonstandard graph , where is the ultraproduct of the , and similarly for the . The set can then be viewed as a symmetric subset of which is measurable with respect to the Loeb -algebra of the product (see this previous blog post for the construction of Loeb measure). A crucial point is that this -algebra is larger than the product of the Loeb -algebra of the individual vertex set . This leads to a decomposition
where the “graphon” is the orthogonal projection of onto , and the “regular error” is orthogonal to all product sets for . The graphon then captures the statistics of the nonstandard graph , in exact analogy with the more traditional graph limits: for instance, the edge density
(or equivalently, the limit of the along the ultrafilter ) is equal to the integral
where denotes Loeb measure on a nonstandard finite set ; the triangle density
(or equivalently, the limit along of the triangle densities of ) is equal to the integral
and so forth. Note that with this construction, the graphon is living on the Cartesian square of an abstract probability space , which is likely to be inseparable; but it is possible to cut down the Loeb -algebra on to a minimal countable -algebra for which remains measurable (up to null sets), and then one can identify with , bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)
Additive combinatorics, which studies things like the additive structure of finite subsets of an abelian group , has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.
It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group in a nonstandard group , defined as the ultraproduct of finite -approximate groups for some standard . (A -approximate group is a symmetric set containing the origin such that can be covered by or fewer translates of .) We then let be the external subgroup of generated by ; equivalently, is the union of over all standard . This space has a Loeb measure , defined by setting
whenever is an internal subset of for any standard , and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.
The Loeb measure is a translation invariant measure on , normalised so that has Loeb measure one. As such, one should think of as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that is not actually a locally compact group with Haar measure, for two reasons:
- There is not an obvious topology on that makes it simultaneously locally compact, Hausdorff, and -compact. (One can get one or two out of three without difficulty, though.)
- The addition operation is not measurable from the product Loeb algebra to . Instead, it is measurable from the coarser Loeb algebra to (compare with the analogous situation for nonstandard graphs).
Nevertheless, the analogy is a useful guide for the arguments that follow.
Let denote the space of bounded Loeb measurable functions (modulo almost everywhere equivalence) that are supported on for some standard ; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation , defined by setting
whenever , are bounded nonstandard functions (extended by zero to all of ), and then extending to arbitrary elements of by density. Equivalently, is the pushforward of the -measurable function under the map .
The basic structural theorem is then as follows.
Theorem 1 (Kronecker factor) Let be an ultra approximate group. Then there exists a (standard) locally compact abelian group of the form
for some standard and some compact abelian group , equipped with a Haar measure and a measurable homomorphism (using the Loeb -algebra on and the Baire -algebra on ), with the following properties:
- (i) has dense image, and is the pushforward of Loeb measure by .
- (ii) There exists sets with open and compact, such that
- (iii) Whenever with compact and open, there exists a nonstandard finite set such that
- (iv) If , then we have the convolution formula
where are the pushforwards of to , the convolution on the right-hand side is convolution using , and is the pullback map from to . In particular, if , then for all .
One can view the locally compact abelian group as a “model” or “Kronecker factor” for the ultra approximate group (in close analogy with the Kronecker factor from ergodic theory). In the case that is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components of the Kronecker group are trivial, and this theorem was implicitly established by Szegedy. The compact group is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions , one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor . Once one is in the separable case, the Baire sigma algebra is identical with the more familiar Borel sigma algebra.
Given any sequence of uniformly bounded functions for some fixed , we can view the function defined by
as an “additive limit” of the , in much the same way that graphons are limits of the indicator functions . The additive limits capture some of the statistics of the , for instance the normalised means
converge (along the ultrafilter ) to the mean
and for three sequences of functions, the normalised correlation
converges along to the correlation
the normalised Gowers norm
converges along to the Gowers norm
and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised norm
does not necessarily converge to the norm
but can converge instead to a larger quantity, due to the presence of the orthogonal projection in the definition (4) of .
An important special case of an additive limit occurs when the functions involved are indicator functions of some subsets of . The additive limit does not necessarily remain an indicator function, but instead takes values in (much as a graphon takes values in even though the original indicators take values in ). The convolution is then the ultralimit of the normalised convolutions ; in particular, the measure of the support of provides a lower bound on the limiting normalised cardinality of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset could contain a large number of elements which have very few () representations as the sum of two elements of , and in the limit these portions of the sumset fall outside of the support of . (One can think of the support of as describing the “essential” sumset of , discarding those elements that have only very few representations.) Similarly for higher convolutions of . Thus one can use additive limits to partially control the growth of iterated sumsets of subsets of approximate groups , in the regime where stays bounded and goes to infinity.
Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.
Example 2 (Bohr sets) We take to be the intervals , where is a sequence going to infinity; these are -approximate groups for all . Let be an irrational real number, let be an interval in , and for each natural number let be the Bohr set
In this case, the (reduced) Kronecker factor can be taken to be the infinite cylinder with the usual Lebesgue measure . The additive limits of and end up being and , where is the finite cylinder
and is the rectangle
Geometrically, one should think of and as being wrapped around the cylinder via the homomorphism , and then one sees that is converging in some normalised weak sense to , and similarly for and . In particular, the additive limit predicts the growth rate of the iterated sumsets to be quadratic in until becomes comparable to , at which point the growth transitions to linear growth, in the regime where is bounded and is large.
If were rational instead of irrational, then one would need to replace by the finite subgroup here.
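The quadratic-then-linear growth predicted by this additive limit can be observed directly on a finite Bohr set. The following sketch uses toy parameters of my own choosing (an irrational frequency, a small width, a modest interval length) and simply watches the iterated sumsets grow; in the quadratic regime each ratio of consecutive sizes should be noticeably larger than the eventual linear growth rate.

```python
import math

N = 2000
alpha = math.sqrt(2)                 # an irrational frequency (illustrative)
width = 0.05
# Bohr set: integers n in [-N, N] with alpha*n within `width` of an integer
B = {n for n in range(-N, N + 1)
     if min(alpha * n % 1, 1 - (alpha * n % 1)) < width}

def sumset(A, C):
    return {a + c for a in A for c in C}

sizes = [len(B)]
S = B
for _ in range(4):
    S = sumset(S, B)
    sizes.append(len(S))

# Sizes grow strictly (0 is in B, and the maximum element increases)...
assert all(s2 > s1 for s1, s2 in zip(sizes, sizes[1:]))
# ...and the first doubling is faster than linear, reflecting the
# two-dimensional (cylinder) limit object in the regime above.
assert sizes[1] / sizes[0] > 2
```

With these parameters one sees the early sumsets growing roughly like the square of the number of iterations, consistent with the picture of the set wrapping around a two-dimensional cylinder.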
Example 3 (Structured subsets of progressions) We take to be the rank two progression
where is a sequence going to infinity; these are -approximate groups for all . Let be the subset
Then the (reduced) Kronecker factor can be taken to be with Lebesgue measure , and the additive limits of the and are then and , where is the square
and is the circle
Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism for to embed the original sets into the plane . In particular, one now expects the growth rate of the iterated sumsets and to be quadratic in , in the regime where is bounded and is large.
Example 4 (Dissociated sets) Let be a fixed natural number, and take
where are randomly chosen elements of a large cyclic group , where is a sequence of primes going to infinity. These are -approximate groups. The (reduced) Kronecker factor can (almost surely) then be taken to be with counting measure, and the additive limit of is , where and is the standard basis of . In particular, the growth rates of should grow approximately like for bounded and large.
Example 5 (Random subsets of groups) Let be a sequence of finite additive groups whose order is going to infinity. Let be a random subset of of some fixed density . Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group , and the additive limit of the is the constant function . The convolutions then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ; this reflects the fact that of the elements of can be represented as the sum of two elements of in ways. In particular, occupies a proportion of .
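The flat limiting convolution in Example 5 is easy to simulate. As an illustrative sketch (a cyclic group and density of my own choosing), one can compute the normalised convolution of the indicator of a random set with itself via the fast Fourier transform, and check that it is nearly constant at the square of the density — i.e. almost every group element has close to the expected number of representations as a sum of two elements of the set:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 10007                      # a prime, playing the role of |G|
delta = 0.5
a = (rng.random(p) < delta).astype(float)   # indicator of a random subset A

# Normalised cyclic convolution (1/|G|) * (1_A * 1_A) via the FFT.
conv = np.fft.ifft(np.fft.fft(a) ** 2).real / p

# The mean is exactly the squared empirical density (up to float error)...
assert abs(conv.mean() - (a.sum() / p) ** 2) < 1e-6
# ...and almost every value is close to delta^2 = 1/4, matching the
# constant additive limit described above.
assert np.percentile(np.abs(conv - delta**2), 99) < 0.05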
Example 6 (Trigonometric series) Take for a sequence of primes going to infinity, and for each let be an infinite sequence of frequencies chosen uniformly and independently from . Let denote the random trigonometric series
Then (almost surely) we can take the reduced Kronecker factor to be the infinite torus (with the Haar probability measure ), and the additive limit of the then becomes the function defined by the formula
In fact, the pullback is the ultralimit of the . As such, for any standard exponent , the normalised norm
can be seen to converge to the limit
The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.
It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.
Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.