Tim Gowers, Ben Green, Freddie Manners, and I have just uploaded to the arXiv our paper “Marton’s conjecture in abelian groups with bounded torsion“. This paper fully resolves a conjecture of Katalin Marton, namely the bounded torsion case of the Polynomial Freiman–Ruzsa conjecture:

Theorem 1 (Marton’s conjecture) Let {G = (G,+)} be an abelian {m}-torsion group (thus, {mx=0} for all {x \in G}), and let {A \subset G} be such that {|A+A| \leq K|A|}. Then {A} can be covered by at most {(2K)^{O(m^3)}} translates of a subgroup {H} of {G} of cardinality at most {|A|}. Moreover, {H} is contained in {\ell A - \ell A} for some {\ell \ll (2 + m \log K)^{O(m^3 \log m)}}.

We had previously established the {m=2} case of this result, with the number of translates bounded by {(2K)^{12}} (which was subsequently improved to {(2K)^{11}} by Jyun-Jie Liao), but without the additional containment {H \subset \ell A - \ell A}. It remains a challenge to replace {\ell} by a bounded constant (such as {2}); this is essentially the “polynomial Bogolyubov conjecture”, which is still open. The {m=2} result has been formalized in the proof assistant language Lean, as discussed in this previous blog post. As a consequence of this result, many of the applications of the previous theorem may now be extended from characteristic {2} to higher characteristic.
Our proof techniques are a modification of those in our previous paper, and in particular continue to be based on the theory of Shannon entropy. For inductive purposes, it turns out to be convenient to work with the following version of the conjecture (which, up to {m}-dependent constants, is actually equivalent to the above theorem):

Theorem 2 (Marton’s conjecture, entropy form) Let {G} be an abelian {m}-torsion group, and let {X_1,\dots,X_m} be independent finitely supported random variables on {G}, such that

\displaystyle {\bf H}[X_1+\dots+X_m] - \frac{1}{m} \sum_{i=1}^m {\bf H}[X_i] \leq \log K,

where {{\bf H}} denotes Shannon entropy. Then there is a uniform random variable {U_H} on a subgroup {H} of {G} such that

\displaystyle \frac{1}{m} \sum_{i=1}^m d[X_i; U_H] \ll m^3 \log K,

where {d} denotes the entropic Ruzsa distance (see previous blog post for a definition); furthermore, if all the {X_i} take values in some symmetric set {S}, then {H} lies in {\ell S} for some {\ell \ll (2 + \log K)^{O(m^3 \log m)}}.

As a first approximation, one should think of all the {X_i} as identically distributed, and having the uniform distribution on {A}, as this is the case that is actually relevant for implying Theorem 1; however, the recursive nature of the proof of Theorem 2 requires one to manipulate the {X_i} separately. It also is technically convenient to work with {m} independent variables, rather than just a pair of variables as we did in the {m=2} case; this is perhaps the biggest additional technical complication needed to handle higher characteristics.
The strategy, as with the previous paper, is to attempt an entropy decrement argument: to try to locate modifications {X'_1,\dots,X'_m} of {X_1,\dots,X_m} that are reasonably close (in Ruzsa distance) to the original random variables, while decrementing the “multidistance”

\displaystyle {\bf H}[X_1+\dots+X_m] - \frac{1}{m} \sum_{i=1}^m {\bf H}[X_i]

which turns out to be a convenient metric for progress (for instance, this quantity is non-negative, and vanishes if and only if the {X_i} are all translates of a uniform random variable {U_H} on a subgroup {H}). In the previous paper we modified the corresponding functional to minimize by some additional terms in order to improve the exponent {12}, but as we are not attempting to completely optimize the constants, we did not do so in the current paper (and as such, our arguments here give a slightly different way of establishing the {m=2} case, albeit with somewhat worse exponents).
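
To make the multidistance concrete, here is a small Python sketch (purely illustrative, not from the paper; the helper names and the choice of group are mine) computing it for uniform variables on two three-element subsets of {({\bf Z}/3{\bf Z})^2}: it vanishes for a subgroup and is strictly positive otherwise.

```python
from itertools import product
from math import log2

def entropy(dist):
    # Shannon entropy (in bits) of a distribution given as {outcome: probability}
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def multidistance(sets):
    # H[X_1 + ... + X_m] - (1/m) * sum_i H[X_i], with X_i uniform on sets[i]
    # and addition taken componentwise mod 3 in (Z/3Z)^2
    m = len(sets)
    weight = 1.0
    for A in sets:
        weight /= len(A)
    sum_dist = {}
    for combo in product(*sets):
        s = tuple(sum(c[j] for c in combo) % 3 for j in range(2))
        sum_dist[s] = sum_dist.get(s, 0.0) + weight
    return entropy(sum_dist) - sum(log2(len(A)) for A in sets) / m

subgroup = [(0, 0), (1, 0), (2, 0)]   # a subgroup of (Z/3Z)^2
skew = [(0, 0), (1, 0), (0, 1)]       # same size, but not a subgroup

print(multidistance([subgroup] * 3))  # 0.0 (up to rounding): uniform variables on a subgroup
print(multidistance([skew] * 3))      # strictly positive
```
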
As before, we search for such improved random variables {X'_1,\dots,X'_m} by introducing more independent random variables – we end up taking an array of {m^2} random variables {Y_{i,j}} for {i,j=1,\dots,m}, with each {Y_{i,j}} a copy of {X_i}, and forming various sums of these variables and conditioning them against other sums. Thanks to the magic of Shannon entropy inequalities, it turns out that at least one of these modifications is guaranteed to decrease the multidistance, except in an “endgame” situation in which certain random variables are nearly (conditionally) independent of each other, in the sense that certain conditional mutual informations are small. In particular, in the endgame scenario, the row sums {\sum_j Y_{i,j}} of our array will end up being close to independent of the column sums {\sum_i Y_{i,j}}, subject to conditioning on the total sum {\sum_{i,j} Y_{i,j}}. Not coincidentally, this type of conditional independence phenomenon also shows up when considering row and column sums of iid Gaussian random variables, as a specific feature of the Gaussian distribution. It is related to the more familiar observation that if {X,Y} are two independent copies of a Gaussian random variable, then {X+Y} and {X-Y} are also independent of each other.
Up until now, the argument does not use the {m}-torsion hypothesis, nor the fact that we work with an {m \times m} array of random variables as opposed to some other shape of array. But now the torsion enters in a key role, via the obvious identity

\displaystyle \sum_{i,j} i Y_{i,j} + \sum_{i,j} j Y_{i,j} + \sum_{i,j} (-i-j) Y_{i,j} = 0.

In the endgame, any pair of these three random variables is close to being independent (after conditioning on the total sum {\sum_{i,j} Y_{i,j}}). Applying some “entropic Ruzsa calculus” (and in particular an entropic version of the Balog–Szemerédi–Gowers inequality), one can then arrive at a new random variable {U} of small entropic doubling that is reasonably close to all of the {X_i} in Ruzsa distance, which provides the final way to reduce the multidistance.
Besides the polynomial Bogolyubov conjecture mentioned above (which we do not know how to address by entropy methods), the other natural question is to try to develop a characteristic zero version of this theory in order to establish the polynomial Freiman–Ruzsa conjecture over torsion-free groups, which in our language asserts (roughly speaking) that random variables of small entropic doubling are close (in Ruzsa distance) to a discrete Gaussian random variable, with good bounds. The above machinery is consistent with this conjecture, in that it produces lots of independent variables related to the original variable, various linear combinations of which obey the same sort of entropy estimates that gaussian random variables would exhibit, but what we are missing is a way to get back from these entropy estimates to an assertion that the random variables really are close to Gaussian in some sense. In continuous settings, Gaussians are known to extremize the entropy for a given variance, and of course we have the central limit theorem that shows that averages of random variables typically converge to a Gaussian, but it is not clear how to adapt these phenomena to the discrete Gaussian setting (without the circular reasoning of assuming the polynomial Freiman–Ruzsa conjecture to begin with).

Tim Gowers, Ben Green, Freddie Manners, and I have just uploaded to the arXiv our paper “On a conjecture of Marton“. This paper establishes a version of the notorious Polynomial Freiman–Ruzsa conjecture (first proposed by Katalin Marton):

Theorem 1 (Polynomial Freiman–Ruzsa conjecture) Let {A \subset {\bf F}_2^n} be such that {|A+A| \leq K|A|}. Then {A} can be covered by at most {2K^{12}} translates of a subspace {H} of {{\bf F}_2^n} of cardinality at most {|A|}.

The previous best known result towards this conjecture was by Konyagin (as communicated in this paper of Sanders), who obtained a similar result but with {2K^{12}} replaced by {\exp(O_\varepsilon(\log^{3+\varepsilon} K))} for any {\varepsilon>0} (assuming that say {K \geq 3/2} to avoid some degeneracies as {K} approaches {1}, which is not the difficult case of the conjecture). The conjecture (with {12} replaced by an unspecified constant {C}) has a number of equivalent forms; see this survey of Green, and these papers of Lovett and of Green and myself for some examples; in particular, as discussed in the latter two references, the constants in the inverse {U^3({\bf F}_2^n)} theorem are now polynomial in nature (although we did not try to optimize the constant).

The exponent {12} here was the product of a large number of optimizations to the argument (our original exponent here was closer to {1000}), but can be improved even further with additional effort (our current argument, for instance, allows one to replace it with {7+\sqrt{17} = 11.123\dots}, but we decided to state our result using integer exponents instead).

In this paper we will focus exclusively on the characteristic {2} case (so we will be cavalier in identifying addition and subtraction), but in a followup paper we will establish similar results in other finite characteristics.

Much of the prior progress on this sort of result has proceeded via Fourier analysis. Perhaps surprisingly, our approach uses no Fourier analysis whatsoever, being conducted instead entirely in “physical space”. Broadly speaking, it follows a natural strategy, which is to induct on the doubling constant {|A+A|/|A|}. Indeed, suppose for instance that one could show that every set {A} of doubling constant {K} was “commensurate” in some sense to a set {A'} of doubling constant at most {K^{0.99}}. One measure of commensurability, for instance, might be the Ruzsa distance {\log \frac{|A+A'|}{|A|^{1/2} |A'|^{1/2}}}, which one might hope to control by {O(\log K)}. Then one could iterate this procedure until the doubling constant dropped below, say, {3/2}, at which point the conjecture is known to hold (there is an elementary argument that if {A} has doubling constant less than {3/2}, then {A+A} is in fact a subspace of {{\bf F}_2^n}). One can then use several applications of the Ruzsa triangle inequality

\displaystyle  \log \frac{|A+C|}{|A|^{1/2} |C|^{1/2}} \leq \log \frac{|A+B|}{|A|^{1/2} |B|^{1/2}} + \log \frac{|B+C|}{|B|^{1/2} |C|^{1/2}}

to conclude (the fact that we reduce {K} to {K^{0.99}} means that the various Ruzsa distances that need to be summed are controlled by a convergent geometric series).
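
As a quick illustrative aside (not part of the argument), both the Ruzsa distance and its triangle inequality are easy to test numerically in {{\bf F}_2^n}, where addition of elements (represented as bit strings) is just XOR; the helper names below are ad hoc.

```python
import random
from math import log, sqrt

def sumset(A, B):
    # in F_2^n, with elements encoded as integers, addition is bitwise XOR
    return {a ^ b for a in A for b in B}

def ruzsa_dist(A, B):
    # log( |A+B| / (|A|^{1/2} |B|^{1/2}) ); in F_2^n sums and differences coincide
    return log(len(sumset(A, B)) / sqrt(len(A) * len(B)))

n = 8
universe = range(2 ** n)
for _ in range(100):
    A, B, C = (set(random.sample(universe, random.randint(5, 60))) for _ in range(3))
    # Ruzsa triangle inequality: d(A,C) <= d(A,B) + d(B,C)
    assert ruzsa_dist(A, C) <= ruzsa_dist(A, B) + ruzsa_dist(B, C) + 1e-9
```
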

There are a number of possible ways to try to “improve” a set {A} of not too large doubling by replacing it with a commensurate set of better doubling. We note two particular potential improvements:

  • (i) Replacing {A} with {A+A}. For instance, if {A} was a random subset (of density {1/K}) of a large subspace {H} of {{\bf F}_2^n}, then replacing {A} with {A+A} usually drops the doubling constant from {K} down to nearly {1} (under reasonable choices of parameters).
  • (ii) Replacing {A} with {A \cap (A+h)} for a “typical” {h \in A+A}. For instance, if {A} was the union of {K} random cosets of a subspace {H} of large codimension, then replacing {A} with {A \cap (A+h)} again usually drops the doubling constant from {K} down to nearly {1}.

Unfortunately, there are sets {A} where neither of the above two operations (i), (ii) significantly improves the doubling constant. For instance, if {A} is a random density {1/\sqrt{K}} subset of {\sqrt{K}} random translates of a medium-sized subspace {H}, one can check that the doubling constant stays close to {K} if one applies either operation (i) or operation (ii). But in this case these operations don’t actually worsen the doubling constant much either, and by applying some combination of (i) and (ii) (either intersecting {A+A} with a translate, or taking a sumset of {A \cap (A+h)} with itself) one can start lowering the doubling constant again.
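
For concreteness, the two model examples in (i) and (ii) above are easy to simulate numerically; the rough sketch below (ad hoc code and parameters, not taken from the paper) shows operation (i) flattening the doubling constant of a random dense subset of a subspace, and operation (ii) doing the same for a union of random cosets.

```python
import random

def sumset(A, B):
    return {a ^ b for a in A for b in B}   # addition in F_2^n is XOR

def doubling(A):
    return len(sumset(A, A)) / len(A)

random.seed(0)
n, K = 10, 8

# Example (i): A = random subset of density 1/K of the subspace F_2^n itself
A1 = set(random.sample(range(2 ** n), 2 ** n // K))
print(doubling(A1), doubling(sumset(A1, A1)))   # roughly K, then close to 1

# Example (ii): A = union of K random cosets of a small subspace H
H = range(2 ** 4)                               # subspace spanned by the first 4 coordinates
reps = random.sample(range(2 ** n), K)
A2 = {r ^ x for r in reps for x in H}
h = random.choice(sorted(sumset(A2, A2)))       # a "typical" h in A+A
fibre = {x for x in A2 if (x ^ h) in A2}        # the set A ∩ (A+h) from operation (ii)
print(doubling(A2), doubling(fibre))            # doubling comparable to K, then close to 1
```
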

This begins to suggest a potential strategy: show that at least one of the operations (i) or (ii) will improve the doubling constant, or at least not worsen it too much; and in the latter case, perform some more complicated operation to locate the desired doubling constant improvement.

A sign that this strategy might have a chance of working is provided by the following heuristic argument. If {A} has doubling constant {K}, then the Cartesian product {A \times A} has doubling constant {K^2}. On the other hand, by using the projection map {\pi: {\bf F}_2^n \times {\bf F}_2^n \rightarrow {\bf F}_2^n} defined by {\pi(x,y) := x+y}, we see that {A \times A} projects to {\pi(A \times A) = A+A}, with fibres {\pi^{-1}(\{h\})} being essentially a copy of {A \cap (A+h)}. So, morally, {A \times A} also behaves like a “skew product” of {A+A} and the fibres {A \cap (A+h)}, which suggests (non-rigorously) that the doubling constant {K^2} of {A \times A} is also something like the doubling constant of {A + A}, times the doubling constant of a typical fibre {A \cap (A+h)}. This would imply that at least one of {A +A} and {A \cap (A+h)} would have doubling constant at most {K}, and thus that at least one of operations (i), (ii) would not worsen the doubling constant.

Unfortunately, this argument does not seem to be easily made rigorous using the traditional doubling constant; even the significantly weaker statement that {A+A} has doubling constant at most {K^2} is false (see comments for more discussion). However, it turns out (as discussed in this recent paper of myself with Green and Manners) that things are much better in the entropic setting. Here, the analogue of a subset {A} in {{\bf F}_2^n} is a random variable {X} taking values in {{\bf F}_2^n}, and the analogue of the (logarithmic) doubling constant {\log \frac{|A+A|}{|A|}} is the entropic doubling constant {d(X;X) := {\bf H}(X_1+X_2)-{\bf H}(X)}, where {X_1,X_2} are independent copies of {X}. If {X} is a random variable in some additive group {G} and {\pi: G \rightarrow H} is a homomorphism, one then has what we call the fibring inequality

\displaystyle  d(X;X) \geq d(\pi(X);\pi(X)) + d(X|\pi(X); X|\pi(X)),

where the conditional doubling constant {d(X|\pi(X); X|\pi(X))} is defined as

\displaystyle  d(X|\pi(X); X|\pi(X)) = {\bf H}(X_1 + X_2 | \pi(X_1), \pi(X_2)) - {\bf H}( X | \pi(X) ).

Indeed, from the chain rule for Shannon entropy one has

\displaystyle  {\bf H}(X) = {\bf H}(\pi(X)) + {\bf H}(X|\pi(X))

and

\displaystyle  {\bf H}(X_1+X_2) = {\bf H}(\pi(X_1) + \pi(X_2)) + {\bf H}(X_1 + X_2|\pi(X_1) + \pi(X_2))

while from the non-negativity of conditional mutual information one has

\displaystyle  {\bf H}(X_1 + X_2|\pi(X_1) + \pi(X_2)) \geq {\bf H}(X_1 + X_2|\pi(X_1), \pi(X_2))

and it is an easy matter to combine all these identities and inequalities to obtain the claim.

Applying this inequality with {X} replaced by two independent copies {(X_1,X_2)} of itself, and using the addition map {(x,y) \mapsto x+y} for {\pi}, we obtain in particular that

\displaystyle  2 d(X;X) \geq d(X_1+X_2; X_1+X_2) + d(X_1,X_2|X_1+X_2; X_1,X_2|X_1+X_2)

or (since {X_2} is determined by {X_1} once one fixes {X_1+X_2})

\displaystyle  2 d(X;X) \geq d(X_1+X_2; X_1+X_2) + d(X_1|X_1+X_2; X_1|X_1+X_2).

So if {d(X;X) = \log K}, then at least one of {d(X_1+X_2; X_1+X_2)} or {d(X_1|X_1+X_2; X_1|X_1+X_2)} will be less than or equal to {\log K}. This is the entropy analogue of at least one of (i) or (ii) improving, or at least not degrading the doubling constant, although there are some minor technicalities involving how one deals with the conditioning on {X_1+X_2} in the second term {d(X_1|X_1+X_2; X_1|X_1+X_2)} that we will gloss over here (one can pigeonhole the instances of {X_1} to different events {X_1+X_2=x}, {X_1+X_2=x'}, and “depolarise” the induction hypothesis to deal with distances {d(X;Y)} between pairs of random variables {X,Y} that do not necessarily have the same distribution). Furthermore, we can even calculate the defect in the above inequality: a careful inspection of the above argument eventually reveals that

\displaystyle  2 d(X;X) = d(X_1+X_2; X_1+X_2) + d(X_1|X_1+X_2; X_1|X_1+X_2)

\displaystyle  + {\bf I}( X_1 + X_2 : X_1 + X_3 | X_1 + X_2 + X_3 + X_4)

where we now take four independent copies {X_1,X_2,X_3,X_4}. This leads (modulo some technicalities) to the following interesting conclusion: if neither (i) nor (ii) leads to an improvement in the entropic doubling constant, then {X_1+X_2} and {X_2+X_3} are conditionally independent relative to {X_1+X_2+X_3+X_4}. This situation (or an approximation to this situation) is what we refer to in the paper as the “endgame”.
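
Because this identity is an exact consequence of the Shannon entropy inequalities (no characteristic assumption is used), it can be verified numerically by brute force. The sketch below (illustrative only, not from the paper) does this for a random distribution on {{\bf Z}/5{\bf Z}}, interpreting the conditional term via two further independent copies, as in the fibring inequality; all helper names are ad hoc.

```python
from itertools import product
from math import log
import random

N = 5                                    # the ambient group Z/NZ (any abelian group would do)
p = [random.random() for _ in range(N)]
total = sum(p)
p = [q / total for q in p]               # a random distribution for X

def H(dist):
    # Shannon entropy (in nats) of a distribution given as {outcome: probability}
    return -sum(q * log(q) for q in dist.values() if q > 0)

def law(keys):
    # joint law of the listed statistics of four independent copies X1,...,X4 of X
    dist = {}
    for x in product(range(N), repeat=4):
        w = p[x[0]] * p[x[1]] * p[x[2]] * p[x[3]]
        stats = {"X1": x[0], "W": (x[0] + x[1]) % N, "W2": (x[2] + x[3]) % N,
                 "U": (x[0] + x[2]) % N, "S": sum(x) % N}   # W=X1+X2, W2=X3+X4, U=X1+X3, S=total
        key = tuple(stats[k] for k in keys)
        dist[key] = dist.get(key, 0.0) + w
    return dist

d_XX = H(law(["W"])) - H({i: q for i, q in enumerate(p)})             # d(X;X)
d_WW = H(law(["S"])) - H(law(["W"]))                                  # d(X1+X2; X1+X2)
cond = (H(law(["U", "W", "W2"])) - H(law(["W", "W2"]))
        - (H(law(["X1", "W"])) - H(law(["W"]))))                      # d(X1|X1+X2; X1|X1+X2)
cmi = (H(law(["W", "S"])) + H(law(["U", "S"]))
       - H(law(["W", "U", "S"])) - H(law(["S"])))                     # I(X1+X2 : X1+X3 | S)

print(2 * d_XX, d_WW + cond + cmi)       # the two numbers agree up to rounding
```
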

A version of this endgame conclusion is in fact valid in any characteristic. But in characteristic {2}, we can take advantage of the identity

\displaystyle  (X_1+X_2) + (X_2+X_3) = X_1 + X_3.

Conditioning on {X_1+X_2+X_3+X_4}, and using symmetry, we now conclude that if we are in the endgame exactly (so that the mutual information is zero), then the independent sum of two copies of {(X_1+X_2|X_1+X_2+X_3+X_4)} has exactly the same distribution as a single copy; in particular, the entropic doubling constant here is zero, which is certainly a reduction in the doubling constant.

To deal with the situation where the conditional mutual information is small but not completely zero, we have to use an entropic version of the Balog-Szemerédi-Gowers lemma, but fortunately this was already worked out in an old paper of mine (although in order to optimise the final constant, we ended up using a slight variant of that lemma).

I am planning to formalize this paper in the Lean4 language. Further discussion of this project will take place on this Zulip stream, and the project itself will be held at this Github repository.

Asgar Jamneshan and I have just uploaded to the arXiv our preprint “The inverse theorem for the {U^3} Gowers uniformity norm on arbitrary finite abelian groups: Fourier-analytic and ergodic approaches“. This paper, which is a companion to another recent paper of ours with Or Shalom, studies the inverse theory for the third Gowers uniformity norm

\displaystyle  \| f \|_{U^3(G)}^8 = {\bf E}_{h_1,h_2,h_3,x \in G} \Delta_{h_1} \Delta_{h_2} \Delta_{h_3} f(x)

on an arbitrary finite abelian group {G}, where {\Delta_h f(x) := f(x+h) \overline{f(x)}} is the multiplicative derivative. Our main result is as follows:

Theorem 1 (Inverse theorem for {U^3(G)}) Let {G} be a finite abelian group, and let {f: G \rightarrow {\bf C}} be a {1}-bounded function with {\|f\|_{U^3(G)} \geq \eta} for some {0 < \eta \leq 1/2}. Then:
  • (i) (Correlation with locally quadratic phase) There exists a regular Bohr set {B(S,\rho) \subset G} with {|S| \ll \eta^{-O(1)}} and {\exp(-\eta^{-O(1)}) \ll \rho \leq 1/2}, a locally quadratic function {\phi: B(S,\rho) \rightarrow {\bf R}/{\bf Z}}, and a function {\xi: G \rightarrow \hat G} such that

    \displaystyle  {\bf E}_{x \in G} |{\bf E}_{h \in B(S,\rho)} f(x+h) e(-\phi(h)-\xi(x) \cdot h)| \gg \eta^{O(1)}.

  • (ii) (Correlation with nilsequence) There exists an explicit degree two filtered nilmanifold {H/\Lambda} of dimension {O(\eta^{-O(1)})}, a polynomial map {g: G \rightarrow H/\Lambda}, and a Lipschitz function {F: H/\Lambda \rightarrow {\bf C}} of constant {O(\exp(\eta^{-O(1)}))} such that

    \displaystyle  |{\bf E}_{x \in G} f(x) \overline{F}(g(x))| \gg \exp(-\eta^{-O(1)}).

Such a theorem was proven by Ben Green and myself in the case when {|G|} was odd, and by Samorodnitsky in the {2}-torsion case {G = {\bf F}_2^n}. In all cases one uses the “higher order Fourier analysis” techniques introduced by Gowers. After some now-standard manipulations (using for instance what is now known as the Balog-Szemerédi-Gowers lemma), one arrives (for arbitrary {G}) at an estimate that is roughly of the form

\displaystyle  |{\bf E}_{x \in G} {\bf E}_{h,k \in B(S,\rho)} f(x+h+k) b(x,k) b(x,h) e(-B(h,k))| \gg \eta^{O(1)}

where {b} denotes various {1}-bounded functions whose exact values are not too important, and {B: B(S,\rho) \times B(S,\rho) \rightarrow {\bf R}/{\bf Z}} is a symmetric locally bilinear form. The idea is then to “integrate” this form by expressing it in the form

\displaystyle  B(h,k) = \phi(h+k) - \phi(h) - \phi(k) \ \ \ \ \ (1)

for some locally quadratic {\phi: B(S,\rho) \rightarrow {\bf R}/{\bf Z}}; this then allows us to write the above correlation as

\displaystyle  |{\bf E}_{x \in G} {\bf E}_{h,k \in B(S,\rho)} f(x+h+k) e(-\phi(h+k)) b(x,k) b(x,h)| \gg \eta^{O(1)}

(after adjusting the {b} functions suitably), and one can now conclude part (i) of the above theorem using some linear Fourier analysis. Part (ii) follows by encoding locally quadratic phase functions as nilsequences; for this we adapt an algebraic construction of Manners.
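
As an aside, for small cyclic groups the {U^3(G)} norm defined at the start of this post can be evaluated directly from the definition by brute force. The sketch below (an ad hoc illustration, not from the paper) checks that a global quadratic phase on {{\bf Z}/N{\bf Z}} has full norm, while a random sign pattern has noticeably smaller norm.

```python
import numpy as np
from itertools import product

def e(t):
    return np.exp(2j * np.pi * t)

CUBE = list(product((0, 1), repeat=3))

def u3_norm(f):
    # ||f||_{U^3(Z/NZ)}^8 = E_{h1,h2,h3,x} Delta_{h1} Delta_{h2} Delta_{h3} f(x), with
    # Delta_h f(x) = f(x+h) * conj(f(x)).  Expanding the three multiplicative derivatives
    # gives a product over the cube {0,1}^3, conjugated when eps1+eps2+eps3 is even.
    N = len(f)
    total = 0j
    for h1, h2, h3, x in product(range(N), repeat=4):
        val = 1 + 0j
        for e1, e2, e3 in CUBE:
            term = f[(x + e1 * h1 + e2 * h2 + e3 * h3) % N]
            val *= np.conj(term) if (e1 + e2 + e3) % 2 == 0 else term
        total += val
    return max(total.real / N ** 4, 0.0) ** (1 / 8)

N = 12
quadratic = np.array([e(5 * x * x / N) for x in range(N)])   # a global quadratic phase
noise = np.random.choice([-1.0, 1.0], size=N)                 # a random sign pattern
print(u3_norm(quadratic), u3_norm(noise))   # ~1.0 versus something noticeably smaller
```
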

So the key step is to obtain a representation of the form (1), possibly after shrinking the Bohr set {B(S,\rho)} a little if needed. This has been done in the literature in two ways:

  • When {|G|} is odd, one has the ability to divide by {2}, and on the set {2 \cdot B(S,\frac{\rho}{10}) = \{ 2x: x \in B(S,\frac{\rho}{10})\}} one can establish (1) with {\phi(h) := B(\frac{1}{2} h, h)}. (This is similar to how in single variable calculus the function {x \mapsto \frac{1}{2} x^2} is a function whose second derivative is equal to {1}.)
  • When {G = {\bf F}_2^n}, then after a change of basis one can take the Bohr set {B(S,\rho)} to be {{\bf F}_2^m} for some {m}, and the bilinear form can be written in coordinates as

    \displaystyle  B(h,k) = \sum_{1 \leq i,j \leq m} a_{ij} h_i k_j / 2 \hbox{ mod } 1

    for some {a_{ij} \in {\bf F}_2} with {a_{ij}=a_{ji}}. The diagonal terms {a_{ii}} cause a problem, but by subtracting off the rank one form {(\sum_{i=1}^m a_{ii} h_i) (\sum_{i=1}^m a_{ii} k_i) / 2} one can write

    \displaystyle  B(h,k) = \sum_{1 \leq i,j \leq m} b_{ij} h_i k_j / 2 \hbox{ mod } 1

    on the orthogonal complement of {(a_{11},\dots,a_{mm})} for some coefficients {b_{ij}=b_{ji}} which now vanish on the diagonal: {b_{ii}=0}. One can now obtain (1) on this complement (a quick numerical check of this step appears just after this list) by taking

    \displaystyle  \phi(h) := \sum_{1 \leq i < j \leq m} b_{ij} h_i h_j / 2 \hbox{ mod } 1.
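
Here is a quick numerical check of that last step (an ad hoc sketch, not from the paper): working with numerators throughout, so that equality mod {1} becomes a parity condition on integers.

```python
import random

m = 6
# a random symmetric matrix over F_2 with vanishing diagonal
b = [[0] * m for _ in range(m)]
for i in range(m):
    for j in range(i + 1, m):
        b[i][j] = b[j][i] = random.randint(0, 1)

def B_num(h, k):
    # twice B(h,k), i.e. the numerator of (sum_{i,j} b_ij h_i k_j)/2
    return sum(b[i][j] * h[i] * k[j] for i in range(m) for j in range(m))

def phi_num(h):
    # twice phi(h), i.e. the numerator of (sum_{i<j} b_ij h_i h_j)/2
    return sum(b[i][j] * h[i] * h[j] for i in range(m) for j in range(i + 1, m))

for _ in range(1000):
    h = [random.randint(0, 1) for _ in range(m)]
    k = [random.randint(0, 1) for _ in range(m)]
    hk = [(a + c) % 2 for a, c in zip(h, k)]    # h + k in F_2^m
    # phi(h+k) - phi(h) - phi(k) = B(h,k) mod 1  <=>  the numerators agree mod 2
    assert (phi_num(hk) - phi_num(h) - phi_num(k) - B_num(h, k)) % 2 == 0
```
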

In our paper we can now treat the case of arbitrary finite abelian groups {G}, by means of the following two new ingredients:

  • (i) Using some geometry of numbers, we can lift the group {G} to a larger (possibly infinite, but still finitely generated) abelian group {G_S} with a projection map {\pi: G_S \rightarrow G}, and find a globally bilinear map {\tilde B: G_S \times G_S \rightarrow {\bf R}/{\bf Z}} on the latter group, such that one has a representation

    \displaystyle  B(\pi(x), \pi(y)) = \tilde B(x,y) \ \ \ \ \ (2)

    of the locally bilinear form {B} by the globally bilinear form {\tilde B} when {x,y} are close enough to the origin.
  • (ii) Using an explicit construction, one can show that every globally bilinear map {\tilde B: G_S \times G_S \rightarrow {\bf R}/{\bf Z}} has a representation of the form (1) for some globally quadratic function {\tilde \phi: G_S \rightarrow {\bf R}/{\bf Z}}.

To illustrate (i), consider the Bohr set {B(S,1/10) = \{ x \in {\bf Z}/N{\bf Z}: \|x/N\|_{{\bf R}/{\bf Z}} < 1/10\}} in {G = {\bf Z}/N{\bf Z}} (where {\|\|_{{\bf R}/{\bf Z}}} denotes the distance to the nearest integer), and consider a locally bilinear form {B: B(S,1/10) \times B(S,1/10) \rightarrow {\bf R}/{\bf Z}} of the form {B(x,y) = \alpha x y \hbox{ mod } 1} for some real number {\alpha} and all integers {x,y \in (-N/10,N/10)} (which we identify with elements of {G}). For generic {\alpha}, this form cannot be extended to a globally bilinear form on {G}; however, if one lifts {G} to the finitely generated abelian group

\displaystyle  G_S := \{ (x,\theta) \in {\bf Z}/N{\bf Z} \times {\bf R}: \theta = x/N \hbox{ mod } 1 \}

(with projection map {\pi: (x,\theta) \mapsto x}) and introduces the globally bilinear form {\tilde B: G_S \times G_S \rightarrow {\bf R}/{\bf Z}} by the formula

\displaystyle  \tilde B((x,\theta),(y,\sigma)) = N^2 \alpha \theta \sigma \hbox{ mod } 1

then one has (2) when {\theta,\sigma} lie in the interval {(-1/10,1/10)}. A similar construction works for higher rank Bohr sets.

To illustrate (ii), the key case turns out to be when {G_S} is a cyclic group {{\bf Z}/N{\bf Z}}, in which case {\tilde B} will take the form

\displaystyle  \tilde B(x,y) = \frac{axy}{N} \hbox{ mod } 1

for some integer {a}. One can then check by direct construction that (1) will be obeyed with

\displaystyle  \tilde \phi(x) = \frac{a \binom{x}{2}}{N} - \frac{a x \binom{N}{2}}{N^2} \hbox{ mod } 1

regardless of whether {N} is even or odd. A variant of this construction also works for {{\bf Z}}, and the general case follows from a short calculation verifying that the claim (ii) for any two groups {G_S, G'_S} implies the corresponding claim (ii) for the product {G_S \times G'_S}.
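
Here is a short exact-arithmetic check of this construction (an ad hoc sketch, not from the paper), verifying both that {\tilde \phi} descends to {{\bf Z}/N{\bf Z}} and that it obeys (1), for an even value of {N}.

```python
from fractions import Fraction
from math import comb
import random

N, a = 10, 3   # an even modulus, to stress the "regardless of parity" claim

def phi(x):
    # candidate quadratic: a*binom(x,2)/N - a*x*binom(N,2)/N^2, as an exact rational
    return Fraction(a * comb(x, 2), N) - Fraction(a * x * comb(N, 2), N * N)

def integral(q):
    return q.denominator == 1   # q = 0 mod 1

for _ in range(500):
    x, y = random.randrange(200), random.randrange(200)
    # the representation (1): phi(x+y) - phi(x) - phi(y) = a*x*y/N  mod 1
    assert integral(phi(x + y) - phi(x) - phi(y) - Fraction(a * x * y, N))
    # well-definedness on Z/NZ: phi(x+N) = phi(x)  mod 1
    assert integral(phi(x + N) - phi(x))
```
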

This concludes the Fourier-analytic proof of Theorem 1. In this paper we also give an ergodic theory proof of (a qualitative version of) Theorem 1(ii), using a correspondence principle argument adapted from this previous paper of Ziegler and myself. Basically, the idea is to randomly generate a dynamical system on the group {G}, by selecting an infinite number of random shifts {g_1, g_2, \dots \in G}, which induces an action of the infinitely generated free abelian group {{\bf Z}^\omega = \bigcup_{n=1}^\infty {\bf Z}^n} on {G} by the formula

\displaystyle  T^h x := x + \sum_{i=1}^\infty h_i g_i.

Much as the law of large numbers ensures the almost sure convergence of Monte Carlo integration, one can show that this action is almost surely ergodic (after passing to a suitable Furstenberg-type limit {X} where the size of {G} goes to infinity), and that the dynamical Host-Kra-Gowers seminorms of that system coincide with the combinatorial Gowers norms of the original functions. One is then well placed to apply an inverse theorem for the third Host-Kra-Gowers seminorm {U^3(X)} for {{\bf Z}^\omega}-actions, which was accomplished in the companion paper to this one. After doing so, one almost gets the desired conclusion of Theorem 1(ii), except that after undoing the application of the Furstenberg correspondence principle, the map {g: G \rightarrow H/\Lambda} is merely an almost polynomial rather than a polynomial, which roughly speaking means that instead of certain derivatives of {g} vanishing, they instead are merely very small outside of a small exceptional set. To conclude we need to invoke a “stability of polynomials” result, which at this level of generality was first established by Candela and Szegedy (though we also provide an independent proof here in an appendix), which roughly speaking asserts that every approximate polynomial is close in measure to an actual polynomial. (This general strategy is also employed in the Candela-Szegedy paper, though in the absence of the ergodic inverse theorem input that we rely upon here, the conclusion is weaker in that the filtered nilmanifold {H/\Lambda} is replaced with a general space known as a “CFR nilspace”.)

This transference principle approach seems to work well for the higher step cases (for instance, the stability of polynomials result is known in arbitrary degree); the main difficulty is to establish a suitable higher step inverse theorem in the ergodic theory setting, which we hope to do in future research.

Laura Cladek and I have just uploaded to the arXiv our paper “Additive energy of regular measures in one and higher dimensions, and the fractal uncertainty principle“. This paper concerns a continuous version of the notion of additive energy. Given a finite measure {\mu} on {{\bf R}^d} and a scale {r>0}, define the energy {\mathrm{E}(\mu,r)} at scale {r} to be the quantity

\displaystyle  \mathrm{E}(\mu,r) := \mu^4\left( \{ (x_1,x_2,x_3,x_4) \in ({\bf R}^d)^4: |x_1+x_2-x_3-x_4| \leq r \}\right) \ \ \ \ \ (1)

where {\mu^4} is the product measure on {({\bf R}^d)^4} formed from four copies of the measure {\mu} on {{\bf R}^d}. We will be interested in Cantor-type measures {\mu}, supported on a compact set {X \subset B(0,1)} and obeying the Ahlfors-David regularity condition

\displaystyle  \mu(B(x,r)) \leq C r^\delta

for all balls {B(x,r)} and some constants {C, \delta > 0}, as well as the matching lower bound

\displaystyle  \mu(B(x,r)) \geq C^{-1} r^\delta

whenever {x \in X} and {0 < r < 1}. One should think of {X} as a {\delta}-dimensional fractal set, and {\mu} as some vaguely self-similar measure on this set.
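
For intuition only (this is a crude Monte Carlo illustration, not an argument from the paper), one can estimate the energy (1) for the natural measure on the middle-thirds Cantor set, which is Ahlfors-David regular of dimension {\delta = \log 2/\log 3}, and compare it against the trivial bound {r^\delta}; the parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = np.log(2) / np.log(3)     # dimension of the middle-thirds Cantor set

def cantor_sample(n, digits=30):
    # points of the Cantor set: independent base-3 digits drawn from {0, 2}
    d = 2 * rng.integers(0, 2, size=(n, digits))
    return d @ (3.0 ** -np.arange(1, digits + 1))

n = 200_000
x1, x2, x3, x4 = (cantor_sample(n) for _ in range(4))
for k in range(2, 6):
    r = 3.0 ** -k
    energy = np.mean(np.abs(x1 + x2 - x3 - x4) <= r)   # Monte Carlo estimate of E(mu, r)
    print(f"r = 3^-{k}:  E(mu,r) ~ {energy:.1e}   versus r^delta ~ {r ** delta:.1e}")
```
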

Note that once one fixes {x_1,x_2,x_3}, the variable {x_4} in (1) is constrained to a ball of radius {r}, hence we obtain the trivial upper bound

\displaystyle  \mathrm{E}(\mu,r) \leq C^4 r^\delta. \ \ \ \ \ (2)

If the set {X} contains a lot of “additive structure”, one can expect this bound to be basically sharp; for instance, if {\delta} is an integer, {X} is a {\delta}-dimensional unit disk, and {\mu} is Lebesgue measure on this disk, one can verify that {\mathrm{E}(\mu,r) \sim r^\delta} (where we allow implied constants to depend on {d,\delta}). However, we show that if the dimension is non-integer, then one obtains a gain:

Theorem 1 If {0 < \delta < d} is not an integer, and {X, \mu} are as above, then

\displaystyle  \mathrm{E}(\mu,r) \lesssim_{C,\delta,d} r^{\delta+\beta}

for some {\beta>0} depending only on {C,\delta,d}.

Informally, this asserts that Ahlfors-David regular fractal sets of non-integer dimension cannot behave as if they are approximately closed under addition. In fact the gain {\beta} we obtain is quasipolynomial in the regularity constant {C}:

\displaystyle  \beta = \exp\left( - O_{\delta,d}( 1 + \log^{O_{\delta,d}(1)}(C) ) \right).

(We also obtain a localised version in which the regularity condition is only required to hold at scales between {r} and {1}.) Such a result was previously obtained (with more explicit values of the {O_{\delta,d}()} implied constants) in the one-dimensional case {d=1} by Dyatlov and Zahl; but in higher dimensions there do not appear to have been any results for this general class of sets {X} and measures {\mu}. In the paper of Dyatlov and Zahl it is noted that some dependence on {C} is necessary; in particular, {\beta} cannot be much better than {1/\log C}. This reflects the fact that there are fractal sets that do behave reasonably well with respect to addition (basically because they are built out of long arithmetic progressions at many scales); however, such sets are not very Ahlfors-David regular. Among other things, this result readily implies a dimension expansion result

\displaystyle  \mathrm{dim}( f( X, X) ) \geq \delta + \beta

for any non-degenerate smooth map {f: {\bf R}^d \times {\bf R}^d \rightarrow {\bf R}^d}, including the sum map {f(x,y) := x+y} and (in one dimension) the product map {f(x,y) := x \cdot y}, where the non-degeneracy condition required is that the gradients {D_x f(x,y), D_y f(x,y): {\bf R}^d \rightarrow {\bf R}^d} are invertible for every {x,y}. We refer to the paper for the formal statement.

Our higher-dimensional argument shares many features with that of Dyatlov and Zahl, notably a reliance on the modern tools of additive combinatorics (and specifically the Bogolyubov-Ruzsa lemma of Sanders). However, in one dimension we were also able to find a completely elementary argument, avoiding any particularly advanced additive combinatorics and instead primarily exploiting the order-theoretic properties of the real line, that gave a superior value of {\beta}, namely

\displaystyle  \beta := c \min(\delta,1-\delta) C^{-25}.

One of the main reasons for obtaining such improved energy bounds is that they imply a fractal uncertainty principle in some regimes. We focus attention on the model case of obtaining such an uncertainty principle for the semiclassical Fourier transform

\displaystyle  {\mathcal F}_h f(\xi) := (2\pi h)^{-d/2} \int_{{\bf R}^d} e^{-i x \cdot \xi/h} f(x)\ dx

where {h>0} is a small parameter. If {X, \mu, \delta} are as above, and {X_h} denotes the {h}-neighbourhood of {X}, then from the Hausdorff-Young inequality one obtains the trivial bound

\displaystyle  \| 1_{X_h} {\mathcal F}_h 1_{X_h} \|_{L^2({\bf R}^d) \rightarrow L^2({\bf R}^d)} \lesssim_{C,d} h^{\max\left(\frac{d}{2}-\delta,0\right)}.

(There are also variants involving pairs of sets {X_h, Y_h}, but for simplicity we focus on the uncertainty principle for a single set {X_h}.) The fractal uncertainty principle, when it applies, asserts that one can improve this to

\displaystyle  \| 1_{X_h} {\mathcal F}_h 1_{X_h} \|_{L^2({\bf R}^d) \rightarrow L^2({\bf R}^d)} \lesssim_{C,d} h^{\max\left(\frac{d}{2}-\delta,0\right) + \beta}

for some {\beta>0}; informally, this asserts that a function and its Fourier transform cannot simultaneously be concentrated in the set {X_h} when {\delta \leq \frac{d}{2}}, and that a function cannot be concentrated on {X_h} and have its Fourier transform be of maximum size on {X_h} when {\delta \geq \frac{d}{2}}. A modification of the disk example mentioned previously shows that such a fractal uncertainty principle cannot hold if {\delta} is an integer. However, in one dimension, the fractal uncertainty principle is known to hold for all {0 < \delta < 1}. The above-mentioned results of Dyatlov and Zahl were able to establish this for {\delta} close to {1/2}, and the remaining cases {1/2 < \delta < 1} and {0 < \delta < 1/2} were later established by Bourgain-Dyatlov and Dyatlov-Jin respectively. Such uncertainty principles have applications to hyperbolic dynamics, in particular in establishing spectral gaps for certain Selberg zeta functions.

It remains a largely open problem to establish a fractal uncertainty principle in higher dimensions. Our results allow one to establish such a principle when the dimension {\delta} is close to {d/2}, and {d} is assumed to be odd (to make {d/2} a non-integer). There is also work of Han and Schlag that obtains such a principle when one of the copies of {X_h} is assumed to have a product structure. We hope to obtain further higher-dimensional fractal uncertainty principles in subsequent work.

We now sketch how our main theorem is proved. In both one dimension and higher dimensions, the main point is to get a preliminary improvement

\displaystyle  \mathrm{E}(\mu,r_0) \leq \varepsilon r_0^\delta \ \ \ \ \ (3)

over the trivial bound (2) for any small {\varepsilon>0}, provided {r_0} is sufficiently small depending on {\varepsilon, \delta, d}; one can then iterate this bound by a fairly standard “induction on scales” argument (which roughly speaking can be used to show that energies {\mathrm{E}(\mu,r)} behave somewhat multiplicatively in the scale parameter {r}) to propagate the bound to a power gain at smaller scales. We found that a particularly clean way to run the induction on scales was via use of the Gowers uniformity norm {U^2}, and particularly via a clean Fubini-type inequality

\displaystyle  \| f \|_{U^2(V \times V')} \leq \|f\|_{U^2(V; U^2(V'))}

(ultimately proven using the Gowers-Cauchy-Schwarz inequality) that allows one to “decouple” coarse and fine scale aspects of the Gowers norms (and hence of additive energies).

It remains to obtain the preliminary improvement. In one dimension this is done by identifying some “left edges” of the set {X} that supports {\mu}: intervals {[x, x+K^{-n}]} that intersect {X}, but such that a large interval {[x-K^{-n+1},x]} just to the left of this interval is disjoint from {X}. Here {K} is a large constant and {n} is a scale parameter. It is not difficult to show (using in particular the Archimedean nature of the real line) that if one has the Ahlfors-David regularity condition for some {0 < \delta < 1} then left edges exist in abundance at every scale; for instance most points of {X} would be expected to lie in quite a few of these left edges (much as most elements of, say, the ternary Cantor set {\{ \sum_{n=1}^\infty \varepsilon_n 3^{-n}: \varepsilon_n \in \{0,2\} \}} would be expected to contain a lot of {0}s in their base {3} expansion). In particular, most pairs {(x_1,x_2) \in X \times X} would be expected to lie in a pair {[x,x+K^{-n}] \times [y,y+K^{-n}]} of left edges of equal length. The key point is then that if {(x_1,x_2) \in X \times X} lies in such a pair with {K^{-n} \geq r}, then there are relatively few pairs {(x_3,x_4) \in X \times X} at distance {O(K^{-n+1})} from {(x_1,x_2)} for which one has the relation {x_1+x_2 = x_3+x_4 + O(r)}, because {x_3,x_4} will both tend to be to the right of {x_1,x_2} respectively. This causes a decrement in the energy at scale {K^{-n+1}}, and by carefully combining all these energy decrements one can eventually cobble together the energy bound (3).

We were not able to make this argument work in higher dimensions (though perhaps the cases {0 < \delta < 1} and {d-1 < \delta < d} might not be completely out of reach from these methods). Instead we return to additive combinatorics methods. If the claim (3) failed, then by applying the Balog-Szemerédi-Gowers theorem we can show that the set {X} has high correlation with an approximate group {H}, and hence (by the aforementioned Bogolyubov-Ruzsa type theorem of Sanders, which is the main source of the quasipolynomial bounds in our final exponent) {X} will exhibit an approximate “symmetry” along some non-trivial arithmetic progression of some spacing length {r} and some diameter {R \gg r}. The {r}-neighbourhood {X_r} of {X} will then resemble the union of parallel “cylinders” of dimensions {r \times R}. If we focus on a typical {R}-ball of {X_r}, the set now resembles a Cartesian product of an interval of length {R} with a subset of a {d-1}-dimensional hyperplane, which behaves approximately like an Ahlfors-David regular set of dimension {\delta-1} (this already lets us conclude a contradiction if {\delta<1}). Note that if the original dimension {\delta} was non-integer then this new dimension {\delta-1} will also be non-integer. It is then possible to contradict the failure of (3) by appealing to a suitable induction hypothesis at one lower dimension.

Let {G = (G,+)}, {H = (H,+)} be additive groups (i.e., groups with an abelian addition group law). A map {f: G \rightarrow H} is a homomorphism if one has

\displaystyle  f(x+y) - f(x) - f(y) = 0

for all {x,y \in G}. A map {f: G \rightarrow H} is an affine homomorphism if one has

\displaystyle  f(x_1) - f(x_2) + f(x_3) - f(x_4) = 0 \ \ \ \ \ (1)

for all additive quadruples {(x_1,x_2,x_3,x_4)} in {G}, by which we mean that {x_1,x_2,x_3,x_4 \in G} and {x_1-x_2+x_3-x_4=0}. The two notions are closely related; it is easy to verify that {f} is an affine homomorphism if and only if {f} is the sum of a homomorphism and a constant.

Now suppose that {H} also has a translation-invariant metric {d}. A map {f: G \rightarrow H} is said to be a quasimorphism if one has

\displaystyle  f(x+y) - f(x) - f(y) = O(1) \ \ \ \ \ (2)

for all {x,y \in G}, where {O(1)} denotes a quantity at a bounded distance from the origin. Similarly, {f: G \rightarrow H} is an affine quasimorphism if

\displaystyle  f(x_1) - f(x_2) + f(x_3) - f(x_4) = O(1) \ \ \ \ \ (3)

for all additive quadruples {(x_1,x_2,x_3,x_4)} in {G}. Again, one can check that {f} is an affine quasimorphism if and only if it is the sum of a quasimorphism and a constant (with the implied constant of the quasimorphism controlled by the implied constant of the affine quasimorphism). (Since every constant is itself a quasimorphism, it is in fact the case that affine quasimorphisms are quasimorphisms, but now the implied constant in the latter is not controlled by the implied constant of the former.)

“Trivial” examples of quasimorphisms include the sum of a homomorphism and a bounded function. Are there others? In some cases, the answer is no. For instance, suppose we have a quasimorphism {f: {\bf Z} \rightarrow {\bf R}}. Iterating (2), we see that {f(kx) = kf(x) + O(k)} for any integer {x} and natural number {k}, which we can rewrite as {f(kx)/kx = f(x)/x + O(1/|x|)} for non-zero {x}. Also, {f} is Lipschitz. Sending {k \rightarrow \infty}, we can verify that {f(x)/x} is a Cauchy sequence as {x \rightarrow \infty} and thus tends to some limit {\alpha}; we have {\alpha = f(x)/x + O(1/x)} for {x \geq 1}, hence {f(x) = \alpha x + O(1)} for positive {x}, and then one can use (2) one last time to obtain {f(x) = \alpha x + O(1)} for all {x}. Thus {f} is the sum of the homomorphism {x \mapsto \alpha x} and a bounded sequence.

In general, one can phrase this problem in the language of group cohomology (discussed in this previous post). Call a map {f: G \rightarrow H} a {0}-cocycle. A {1}-cocycle is a map {\rho: G \times G \rightarrow H} obeying the identity

\displaystyle  \rho(x,y+z) + \rho(y,z) = \rho(x,y) + \rho(x+y,z)

for all {x,y,z \in G}. Given a {0}-cocycle {f: G \rightarrow H}, one can form its derivative {\partial f: G \times G \rightarrow H} by the formula

\displaystyle  \partial f(x,y) := f(x+y)-f(x)-f(y).

Such functions are called {1}-coboundaries. It is easy to see that the abelian group of {1}-coboundaries is a subgroup of the abelian group of {1}-cocycles. The quotient of these two groups is the first group cohomology of {G} with coefficients in {H}, and is denoted {H^1(G; H)}.
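
As a tiny sanity check (ad hoc code, not from the original post), one can verify numerically that derivatives of {0}-cocycles are indeed {1}-cocycles, here with {G = {\bf Z}/N{\bf Z}} and {H = {\bf Z}/M{\bf Z}}.

```python
import random

N, M = 12, 7                                   # G = Z/NZ, H = Z/MZ
f = [random.randrange(M) for _ in range(N)]    # an arbitrary 0-cocycle f: G -> H

def df(x, y):
    # the derivative (a 1-coboundary): df(x,y) = f(x+y) - f(x) - f(y)
    return (f[(x + y) % N] - f[x] - f[y]) % M

# check the 1-cocycle identity rho(x,y+z) + rho(y,z) = rho(x,y) + rho(x+y,z)
for x in range(N):
    for y in range(N):
        for z in range(N):
            lhs = (df(x, (y + z) % N) + df(y, z)) % M
            rhs = (df(x, y) + df((x + y) % N, z)) % M
            assert lhs == rhs
```
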

If a {0}-cocycle is bounded then its derivative is a bounded {1}-coboundary. The quotient of the group of bounded {1}-cocycles by the derivatives of bounded {0}-cocycles is called the bounded first group cohomology of {G} with coefficients in {H}, and is denoted {H^1_b(G; H)}. There is an obvious homomorphism {\phi} from {H^1_b(G; H)} to {H^1(G; H)}, formed by taking a coset of the space of derivatives of bounded {0}-cocycles, and enlarging it to a coset of the space of {1}-coboundaries. By chasing all the definitions, we see that all quasimorphisms from {G} to {H} are the sum of a homomorphism and a bounded function if and only if this homomorphism {\phi} is injective; in fact the quotient of the space of quasimorphisms by the sum of homomorphisms and bounded functions is isomorphic to the kernel of {\phi}.

In additive combinatorics, one is often working with functions which only have additive structure a fraction of the time, thus for instance (1) or (3) might only hold “{1\%} of the time”. This makes it somewhat difficult to directly interpret the situation in terms of group cohomology. However, thanks to tools such as the Balog-Szemerédi-Gowers lemma, one can upgrade this sort of {1\%}-structure to {100\%}-structure – at the cost of restricting the domain to a smaller set. Here I record one such instance of this phenomenon, thus giving a tentative link between additive combinatorics and group cohomology. (I thank Yuval Wigderson for suggesting the problem of locating such a link.)

Theorem 1 Let {G = (G,+)}, {H = (H,+)} be additive groups with {|G|=N}, let {S} be a subset of {H}, let {E \subset G}, and let {f: E \rightarrow H} be a function such that

\displaystyle  f(x_1) - f(x_2) + f(x_3) - f(x_4) \in S

for {\geq K^{-1} N^3} additive quadruples {(x_1,x_2,x_3,x_4)} in {E}. Then there exists a subset {A} of {G} containing {0} with {|A| \gg K^{-O(1)} N}, a subset {X} of {H} with {|X| \ll K^{O(1)}}, and a function {g: 4A-4A \rightarrow H} such that

\displaystyle  g(x+y) - g(x)-g(y) \in X + 496S - 496S \ \ \ \ \ (4)

for all {x, y \in 2A-2A} (thus, the derivative {\partial g} takes values in {X + 496 S - 496 S} on {2A - 2A}), and such that for each {h \in A}, one has

\displaystyle  f(x+h) - f(x) - g(h) \in 8S - 8S \ \ \ \ \ (5)

for {\gg K^{-O(1)} N} values of {x \in E}.

Presumably the constants {8} and {496} can be improved further, but we have not attempted to optimise these constants. We chose {2A-2A} as the domain on which one has a bounded derivative, as one can use the Bogolyubov lemma (see e.g., Proposition 4.39 of my book with Van Vu) to find a large Bohr set inside {2A-2A}. In applications, the set {S} need not have bounded size, or even bounded doubling; for instance, in the inverse {U^4} theory over a small finite field {F}, one would be interested in the situation where {H} is the group of {n \times n} matrices with coefficients in {F} (for some large {n}), with {S} being the subset consisting of those matrices of rank bounded by some bound {C = O(1)}.

Proof: By hypothesis, there are {\geq K^{-1} N^3} triples {(h,x,y) \in G^3} such that {x,x+h,y,y+h \in E} and

\displaystyle  f(x+h) - f(x) \in f(y+h)-f(y) + S. \ \ \ \ \ (6)

Thus, there is a set {B \subset G} with {|B| \gg K^{-1} N} such that for all {h \in B}, one has (6) for {\gg K^{-1} N^2} pairs {(x,y) \in G^2} with {x,x+h,y,y+h \in E}; in particular, there exists {y = y(h) \in E \cap (E-h)} such that (6) holds for {\gg K^{-1} N} values of {x \in E \cap (E-h)}. Setting {g_0(h) := f(y(h)+h) - f(y(h))}, we conclude that for each {h \in B}, one has

\displaystyle  f(x+h) - f(x) \in g_0(h) + S \ \ \ \ \ (7)

for {\gg K^{-1} N} values of {x \in E \cap (E-h)}.

Consider the bipartite graph whose vertex sets are two copies of {E}, and {x} and {x+h} connected by a (directed) edge if {h \in B} and (7) holds. Then this graph has {\gg K^{-2} N^2} edges. Applying (a slight modification of) the Balog-Szemerédi-Gowers theorem (for instance by modifying the proof of Corollary 5.19 of my book with Van Vu), we can then find a subset {C} of {E} with {|C| \gg K^{-O(1)} N} with the property that for any {x_1,x_3 \in C}, there exist {\gg K^{-O(1)} N^3} triples {(x_2,y_1,y_2) \in E^3} such that the edges {(x_1,y_1), (x_2,y_1), (x_2,y_2), (x_3,y_2)} all lie in this bipartite graph. This implies that, for all {x_1,x_3 \in C}, there exist {\gg K^{-O(1)} N^7} septuples {(x_2,y_1,y_2,z_{11},z_{21},z_{22},z_{32}) \in G^7} obeying the constraints

\displaystyle  f(y_j) - f(x_i), f(y_j+z_{ij}) - f(x_i+z_{ij}) \in g_0(y_j-x_i) + S

and {y_j, x_i, y_j+z_{ij}, x_i+z_{ij} \in E} for {ij = 11, 21, 22, 32}. These constraints imply in particular that

\displaystyle  f(x_3) - f(x_1) \in f(x_3+z_{32}) - f(y_2+z_{32}) + f(y_2+z_{22}) - f(x_2+z_{22}) + f(x_2+z_{21}) - f(y_1+z_{21}) + f(y_1+z_{11}) - f(x_1+z_{11}) + 4S - 4S.

Also observe that

\displaystyle  x_3 - x_1 = (x_3+z_{32}) - (y_2+z_{32}) + (y_2+z_{22}) - (x_2+z_{22}) + (x_2+z_{21}) - (y_1+z_{21}) + (y_1+z_{11}) - (x_1+z_{11}).

Thus, if {h \in G} and {x_3,x_1 \in C} are such that {x_3-x_1 = h}, we see that

\displaystyle  f(w_1) - f(w_2) + f(w_3) - f(w_4) + f(w_5) - f(w_6) + f(w_7) - f(w_8) \in f(x_3) - f(x_1) + 4S - 4S

for {\gg K^{-O(1)} N^7} octuples {(w_1,w_2,w_3,w_4,w_5,w_6,w_7,w_8) \in E^8} in the hyperplane

\displaystyle  h = w_1 - w_2 + w_3 - w_4 + w_5 - w_6 + w_7 - w_8.

By the pigeonhole principle, this implies that for any fixed {h \in G}, there can be at most {O(K^{O(1)})} sets of the form {f(x_3)-f(x_1) + 4S-4S} with {x_3-x_1=h}, {x_1,x_3 \in C} that are pairwise disjoint. Using a greedy algorithm, we conclude that there is a set {W_h} of cardinality {O(K^{O(1)})}, such that each set {f(x_3) - f(x_1) + 4S-4S} with {x_3-x_1=h}, {x_1,x_3 \in C} intersects {w+4S-4S} for some {w \in W_h}, or in other words that

\displaystyle  f(x_3) - f(x_1) \in W_{x_3-x_1} + 8S-8S \ \ \ \ \ (8)

whenever {x_1,x_3 \in C}. In particular,

\displaystyle  \sum_{h \in G} \sum_{w \in W_h} | \{ (x_1,x_3) \in C^2: x_3-x_1 = h; f(x_3) - f(x_1) \in w + 8S-8S \}| \geq |C|^2 \gg K^{-O(1)} N^2.

This implies that there exists a subset {A} of {G} with {|A| \gg K^{-O(1)} N}, and an element {g_1(h) \in W_h} for each {h \in A}, such that

\displaystyle  | \{ (x_1,x_3) \in C^2: x_3-x_1 = h; f(x_3) - f(x_1) \in g_1(h) + 8S-8S \}| \gg K^{-O(1)} N \ \ \ \ \ (9)

for all {h \in A}. Note we may assume without loss of generality that {0 \in A} and {g_1(0)=0}.

Suppose that {h_1,\dots,h_{16} \in A} are such that

\displaystyle  \sum_{i=1}^{16} (-1)^{i-1} h_i = 0. \ \ \ \ \ (10)

By construction of {A}, and permuting labels, we can find {\gg K^{-O(1)} N^{16}} 16-tuples {(x_1,\dots,x_{16},y_1,\dots,y_{16}) \in C^{32}} such that

\displaystyle  y_i - x_i = (-1)^{i-1} h_i

and

\displaystyle  f(y_i) - f(x_i) \in (-1)^{i-1} g_1(h_i) + 8S - 8S

for {i=1,\dots,16}. We sum this to obtain

\displaystyle  f(y_1) + \sum_{i=1}^{15} (f(y_{i+1})-f(x_i)) - f(x_{16}) \in \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 128 S - 128 S

and hence by (8)

\displaystyle  f(y_1) - f(x_{16}) + \sum_{i=1}^{15} W_{k_i} \in \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 248 S - 248 S

where {k_i := y_{i+1}-x_i}. Since

\displaystyle  y_1 - x_{16} + \sum_{i=1}^{15} k_i = 0

we see that there are only {N^{16}} possible values of {(y_1,x_{16},k_1,\dots,k_{15})}. By the pigeonhole principle, we conclude that at most {O(K^{O(1)})} of the sets {\sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) + 248 S - 248 S} can be pairwise disjoint. Arguing as before, we conclude that there exists a set {X} of cardinality {O(K^{O(1)})} such that

\displaystyle  \sum_{i=1}^{16} (-1)^{i-1} g_1(h_i) \in X + 496 S - 496 S \ \ \ \ \ (11)

whenever (10) holds.

For any {h \in 4A-4A}, write {h} arbitrarily as {h = \sum_{i=1}^8 (-1)^{i-1} h_i} for some {h_1,\dots,h_8 \in A} (with {h_5=\dots=h_8=0} if {h \in 2A-2A}, and {h_2 = \dots = h_8 = 0} if {h \in A}) and then set

\displaystyle  g(h) := \sum_{i=1}^8 (-1)^{i-1} g_1(h_i).

Then from (11) we have (4). For {h \in A} we have {g(h) = g_1(h)}, and (5) then follows from (9). \Box

A sequence {a: {\bf Z} \rightarrow {\bf C}} of complex numbers is said to be quasiperiodic if it is of the form

\displaystyle a(n) = F( \alpha_1 n \hbox{ mod } 1, \dots, \alpha_k n \hbox{ mod } 1)

for some real numbers {\alpha_1,\dots,\alpha_k} and continuous function {F: ({\bf R}/{\bf Z})^k \rightarrow {\bf C}}. For instance, linear phases such as {n \mapsto e(\alpha n + \beta)} (where {e(\theta) := e^{2\pi i \theta}}) are examples of quasiperiodic sequences; the top order coefficient {\alpha} (modulo {1}) can be viewed as a “frequency” of the integers, and an element of the Pontryagin dual {\hat {\bf Z} \equiv {\bf R}/{\bf Z}} of the integers. Any periodic sequence is also quasiperiodic (taking {k=1} and {\alpha_1} to be the reciprocal of the period). A sequence is said to be almost periodic if it is the uniform limit of quasiperiodic sequences. For instance any Fourier series of the form

\displaystyle a(n) = \sum_{j=1}^\infty c_j e(\alpha_j n)

with {\alpha_1,\alpha_2,\dots} real numbers and {c_1,c_2,\dots} an absolutely summable sequence of complex coefficients, will be almost periodic.

These sequences arise in various “complexity one” problems in arithmetic combinatorics and ergodic theory. For instance, if {(X, \mu, T)} is a measure-preserving system – a probability space {(X,\mu)} equipped with a measure-preserving shift, and {f_1,f_2 \in L^\infty(X)} are bounded measurable functions, then the correlation sequence

\displaystyle a(n) := \int_X f_1(x) f_2(T^n x)\ d\mu(x)

can be shown to be an almost periodic sequence, plus an error term {b_n} which is “null” in the sense that it has vanishing uniform density:

\displaystyle \sup_N \frac{1}{M} \sum_{n=N+1}^{N+M} |b_n| \rightarrow 0 \hbox{ as } M \rightarrow \infty.

This can be established in a number of ways, for instance by writing {a(n)} as the Fourier coefficients of the spectral measure of the shift {T} with respect to the functions {f_1,f_2}, and then decomposing that measure into pure point and continuous components.
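
As a toy numerical illustration (not from the original discussion): for an irrational circle rotation, which has pure point spectrum, the correlation sequence can be computed by quadrature and is exactly quasiperiodic, with vanishing null part.

```python
import numpy as np

alpha = np.sqrt(2) % 1                        # irrational rotation T x = x + alpha mod 1
x = np.linspace(0, 1, 20000, endpoint=False)  # quadrature points on R/Z

def e(t):
    return np.exp(2j * np.pi * t)

f1 = e(x)                                     # f1(x) = e(x)
for n in range(5):
    a_n = np.mean(f1 * e(-(x + n * alpha)))   # a(n) = int f1(x) f2(T^n x) dx with f2(y) = e(-y)
    print(n, a_n, e(-n * alpha))              # a(n) agrees with the quasiperiodic sequence e(-n alpha)
```
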

In the last two decades or so, it has become clear that there are natural higher order versions of these concepts, in which linear polynomials such as {\alpha n + \beta} are replaced with higher degree counterparts. The most obvious candidates for these counterparts would be the polynomials {\alpha_d n^d + \dots + \alpha_0}, but this turns out to not be a complete set of higher degree objects needed for the theory. Instead, the higher order versions of quasiperiodic and almost periodic sequences are now known as basic nilsequences and nilsequences respectively, while the higher order version of a linear phase is a nilcharacter; each nilcharacter then has a symbol that is a higher order generalisation of the concept of a frequency (and the collection of all symbols forms a group that can be viewed as a higher order version of the Pontryagin dual of {{\bf Z}}). The theory of these objects is spread out in the literature across a number of papers; in particular, the theory of nilcharacters is mostly developed in Appendix E of this 116-page paper of Ben Green, Tamar Ziegler, and myself, and is furthermore written using nonstandard analysis and treating the more general setting of higher dimensional sequences. I therefore decided to rewrite some of that material in this blog post, in the simpler context of the qualitative asymptotic theory of one-dimensional nilsequences and nilcharacters rather than the quantitative single-scale theory that is needed for combinatorial applications (and which necessitated the use of nonstandard analysis in the previous paper).

For technical reasons (having to do with the non-trivial topological structure on nilmanifolds), it will be convenient to work with vector-valued sequences, that take values in a finite-dimensional complex vector space {{\bf C}^m} rather than in {{\bf C}}. By doing so, the space of sequences is now, technically, no longer a ring, as the operations of addition and multiplication on vector-valued sequences become ill-defined. However, we can still take complex conjugates of a sequence, and add sequences taking values in the same vector space {{\bf C}^m}, and for sequences taking values in different vector spaces {{\bf C}^m}, {{\bf C}^{m'}}, we may utilise the tensor product {\otimes: {\bf C}^m \times {\bf C}^{m'} \rightarrow {\bf C}^{mm'}}, which we will normalise by defining

\displaystyle (z_1,\dots,z_m) \otimes (w_1,\dots,w_{m'}) = (z_1 w_1, \dots, z_1 w_{m'}, \dots, z_m w_1, \dots, z_m w_{m'} ).

This product is associative and bilinear, and also commutative up to permutation of the indices. It also interacts well with the Hermitian norm

\displaystyle \| (z_1,\dots,z_m) \| := \sqrt{|z_1|^2 + \dots + |z_m|^2}

since we have {\|z \otimes w\| = \|z\| \|w\|}.
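
In coordinates, this normalised tensor product is just the Kronecker product of the two coordinate vectors, and the norm identity is easy to confirm numerically (a trivial sketch, not from the original post).

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)   # a vector in C^3
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # a vector in C^4

zw = np.kron(z, w)   # (z_1 w_1, ..., z_1 w_4, z_2 w_1, ..., z_3 w_4), matching the ordering above
assert np.isclose(np.linalg.norm(zw), np.linalg.norm(z) * np.linalg.norm(w))
```
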

The traditional definition of a basic nilsequence (as defined for instance by Bergelson, Host, and Kra) is as follows:

Definition 1 (Basic nilsequence, first definition) A nilmanifold of step at most {d} is a quotient {G/\Gamma}, where {G} is a connected, simply connected nilpotent Lie group of step at most {d} (thus, all {d+1}-fold commutators vanish) and {\Gamma} is a discrete cocompact lattice in {G}. A basic nilsequence of degree at most {d} is a sequence of the form {n \mapsto F(g^n g_0 \Gamma)}, where {g_0 \Gamma \in G/\Gamma}, {g \in G}, and {F: G/\Gamma \rightarrow {\bf C}^m} is a continuous function.

For instance, it is not difficult using this definition to show that a sequence is a basic nilsequence of degree at most {1} if and only if it is quasiperiodic. The requirement that {G} be simply connected can be easily removed if desired by passing to a universal cover, but it is technically convenient to assume it (among other things, it allows for a well-defined logarithm map that obeys the Baker-Campbell-Hausdorff formula). When one wishes to perform a more quantitative analysis of nilsequences (particularly when working on a “single scale”, such as on a single long interval {\{ N+1, \dots, N+M\}}), it is common to impose additional regularity conditions on the function {F}, such as Lipschitz continuity or smoothness, but ordinary continuity will suffice for the qualitative discussion in this blog post.

Nowadays, and particularly when one needs to understand the “single-scale” equidistribution properties of nilsequences, it is more convenient (as is for instance done in this ICM paper of Green) to use an alternate definition of a nilsequence as follows.

Definition 2 Let {d \geq 0}. A filtered group of degree at most {d} is a group {G} together with a sequence {G_\bullet = (G_0,G_1,G_2,\dots)} of subgroups {G \geq G_0 \geq G_1 \geq \dots} with {G_{d+1}=\{\hbox{id}\}} and {[G_i,G_j] \subset G_{i+j}} for {i,j \geq 0}. A polynomial sequence {g: {\bf Z} \rightarrow G} into a filtered group is a function such that {\partial_{h_i} \dots \partial_{h_1} g(n) \in G_i} for all {i \geq 0} and {n,h_1,\dots,h_i \in{\bf Z}}, where {\partial_h g(n) := g(n+h) g(n)^{-1}} is the difference operator. A filtered nilmanifold of degree at most {d} is a quotient {G/\Gamma}, where {G} is a filtered group of degree at most {d} such that {G} and all of the subgroups {G_i} are connected, simply connected nilpotent Lie groups, and {\Gamma} is a discrete cocompact subgroup of {G} such that {\Gamma_i := \Gamma \cap G_i} is a discrete cocompact subgroup of {G_i}. A basic nilsequence of degree at most {d} is a sequence of the form {n \mapsto F(g(n))}, where {g: {\bf Z} \rightarrow G} is a polynomial sequence, {G/\Gamma} is a filtered nilmanifold of degree at most {d}, and {F: G \rightarrow {\bf C}^m} is a continuous function which is {\Gamma}-automorphic, in the sense that {F(g \gamma) = F(g)} for all {g \in G} and {\gamma \in \Gamma}.

One can easily identify a {\Gamma}-automorphic function on {G} with a function on {G/\Gamma}, but there are some (very minor) advantages to working on the group {G} instead of the quotient {G/\Gamma}, as it becomes slightly easier to modify the automorphy group {\Gamma} when needed. (But because the action of {\Gamma} on {G} is free, one can pass from {\Gamma}-automorphic functions on {G} to functions on {G/\Gamma} with very little difficulty.) The main reason to work with polynomial sequences {n \mapsto g(n)} rather than geometric progressions {n \mapsto g^n g_0 \Gamma} is that they form a group, a fact essentially established by Lazard and Leibman; see Corollary B.4 of this paper of Green, Ziegler, and myself for a proof in the filtered group setting.

It is easy to see that any sequence that is a basic nilsequence of degree at most {d} in the sense of the first definition is also a basic nilsequence of degree at most {d} in the second definition, since a nilmanifold of step at most {d} can be filtered using the lower central series, and any linear sequence {n \mapsto g^n g_0} will be a polynomial sequence with respect to that filtration. The converse implication is a little trickier, but still not too hard to show: see Appendix C of this paper of Ben Green, Tamar Ziegler, and myself. There are two key examples of basic nilsequences to keep in mind. The first are the polynomially quasiperiodic sequences

\displaystyle a(n) = F( P_1(n), \dots, P_k(n) ),

where {P_1,\dots,P_k: {\bf Z} \rightarrow {\bf R}} are polynomials of degree at most {d}, and {F: {\bf R}^k \rightarrow {\bf C}^m} is a {{\bf Z}^k}-automorphic (i.e., {{\bf Z}^k}-periodic) continuous function. The map {P: {\bf Z} \rightarrow {\bf R}^k} defined by {P(n) := (P_1(n),\dots,P_k(n))} is a polynomial map of degree at most {d}, if one filters {{\bf R}^k} by defining {({\bf R}^k)_i} to equal {{\bf R}^k} when {i \leq d}, and {\{0\}} for {i > d}. The torus {{\bf R}^k/{\bf Z}^k} then becomes a filtered nilmanifold of degree at most {d}, and {a(n)} is thus a basic nilsequence of degree at most {d} as per the second definition. It is also possible to explicitly describe {a(n)} as a basic nilsequence of degree at most {d} as per the first definition, for instance (in the {k=1} case) by taking {G} to be the space of upper triangular unipotent {(d+1) \times (d+1)} real matrices, and {\Gamma} the subgroup with integer coefficients; we leave the details to the interested reader.

The other key example is a sequence of the form

\displaystyle a(n) = F( \alpha n, \{ \alpha n \} \beta n )

where {\alpha,\beta} are real numbers, {\{ \alpha n \} = \alpha n - \lfloor \alpha n \rfloor} denotes the fractional part of {\alpha n}, and {F: {\bf R}^2 \rightarrow {\bf C}^m} is a {{\bf Z}^2}-automorphic continuous function that vanishes in a neighbourhood of {{\bf Z} \times {\bf R}}. To describe this as a nilsequence, we use the connected, simply connected, degree {2} nilpotent Heisenberg group

\displaystyle G := \begin{pmatrix} 1 & {\bf R} & {\bf R} \\ 0 & 1 & {\bf R} \\ 0 & 0 & 1 \end{pmatrix}

with the lower central series filtration {G_0=G_1=G}, {G_2= [G,G] = \begin{pmatrix} 1 & 0 & {\bf R} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}, and {G_i = \{ \mathrm{id} \}} for {i > 2}, {\Gamma} to be the discrete cocompact subgroup

\displaystyle \Gamma := \begin{pmatrix} 1 & {\bf Z} & {\bf Z} \\ 0 & 1 & {\bf Z} \\ 0 & 0 & 1 \end{pmatrix},

{g: {\bf Z} \rightarrow G} to be the polynomial sequence

\displaystyle g(n) := \begin{pmatrix} 1 & \beta n & \alpha \beta n^2 \\ 0 & 1 & \alpha n \\ 0 & 0 & 1 \end{pmatrix}

and {\tilde F: G \rightarrow {\bf C}^m} to be the {\Gamma}-automorphic function

\displaystyle \tilde F( \begin{pmatrix} 1 & x & y \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} ) = F( \{ z \}, y - \lfloor z \rfloor x );

one easily verifies that this function is indeed {\Gamma}-automorphic, and it is continuous thanks to the vanishing properties of {F}. Also we have {a(n) = \tilde F(g(n))}, so {a} is a basic nilsequence of degree at most {2}. One can concoct similar examples with {\{ \alpha n \} \beta n} replaced by other “bracket polynomials” of {n}; for instance

\displaystyle a(n) = F( \alpha n, \{ \alpha n - \frac{1}{2} \} \beta n )

will be a basic nilsequence if {F} now vanishes in a neighbourhood of {(\frac{1}{2}+{\bf Z}) \times {\bf R}} rather than {{\bf Z} \times {\bf R}}. See this paper of Bergelson and Leibman for more discussion of bracket polynomials (also known as generalised polynomials) and their relationship to nilsequences.
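For readers who like to experiment, here is a small Python sketch that checks the identity {a(n) = \tilde F(g(n))} numerically for the Heisenberg example above; the particular choice of {F} below (a cutoff at distance {0.1} from the integers, times a vertical character) is an arbitrary one satisfying the stated automorphy and vanishing requirements, and all helper names are ad hoc.

import math, cmath

alpha, beta = math.sqrt(2), math.sqrt(3)     # arbitrary irrational parameters

def frac(x):                                 # fractional part {x}
    return x - math.floor(x)

def dist_to_Z(x):                            # distance to the nearest integer
    return abs(x - round(x))

def F(s, t):
    # a Z^2-automorphic continuous function vanishing on a 0.1-neighbourhood of Z x R
    return max(0.0, dist_to_Z(s) - 0.1) * cmath.exp(2j * math.pi * t)

def a(n):                                    # the bracket-polynomial sequence a(n) = F(alpha n, {alpha n} beta n)
    return F(alpha * n, frac(alpha * n) * beta * n)

def F_tilde(x, y, z):                        # \tilde F evaluated at the Heisenberg matrix with entries x, y, z
    return F(frac(z), y - math.floor(z) * x)

def g(n):                                    # the entries (x, y, z) of the polynomial sequence g(n)
    return (beta * n, alpha * beta * n * n, alpha * n)

for n in range(1, 6):
    print(a(n), F_tilde(*g(n)))              # the two columns agree up to rounding error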

A nilsequence of degree at most {d} is defined to be a sequence that is the uniform limit of basic nilsequences of degree at most {d}. Thus for instance a sequence is a nilsequence of degree at most {1} if and only if it is almost periodic, while a sequence is a nilsequence of degree at most {0} if and only if it is constant. Such objects arise in higher order recurrence: for instance, if {h_0,\dots,h_d} are integers, {(X,\mu,T)} is a measure-preserving system, and {f_0,\dots,f_d \in L^\infty(X)}, then it was shown by Leibman that the sequence

\displaystyle n \mapsto \int_X f_0(T^{h_0 n} x) \dots f_d(T^{h_d n} x)\ d\mu(x)

is equal to a nilsequence of degree at most {d}, plus a null sequence. (The special case when the measure-preserving system was ergodic and {h_i = i} for {i=0,\dots,d} was previously established by Bergelson, Host, and Kra.) Nilsequences also arise in the inverse theory of the Gowers uniformity norms, as discussed for instance in this previous post.

It is easy to see that a sequence {a: {\bf Z} \rightarrow {\bf C}^m} is a basic nilsequence of degree at most {d} if and only if each of its {m} components is. The scalar basic nilsequences {a: {\bf Z} \rightarrow {\bf C}} of degree at most {d} are easily seen to form a {*}-algebra (that is to say, they are a complex vector space closed under pointwise multiplication and complex conjugation), which implies similarly that vector-valued basic nilsequences {a: {\bf Z} \rightarrow {\bf C}^m} of degree at most {d} form a complex vector space closed under complex conjugation for each {m}, and that the tensor product of any two basic nilsequences of degree at most {d} is another basic nilsequence of degree at most {d}. Similarly with “basic nilsequence” replaced by “nilsequence” throughout.

Now we turn to the notion of a nilcharacter, as defined in this paper of Ben Green, Tamar Ziegler, and myself:

Definition 3 (Nilcharacters) Let {d \geq 1}. A sub-nilcharacter of degree {d} is a basic nilsequence {\chi: n \mapsto F(g(n))} of degree at most {d}, such that {F} obeys the additional modulation property

\displaystyle F( g_d g ) = e( \xi \cdot g_d ) F(g) \ \ \ \ \ (1)

for all {g \in G} and {g_d \in G_d}, where {\xi: G_d \rightarrow {\bf R}} is a continuous homomorphism {g_d \mapsto \xi \cdot g_d}. (Note from (1) and {\Gamma}-automorphy that unless {F} vanishes identically, {\xi} must map {\Gamma_d} to {{\bf Z}}, thus without loss of generality one can view {\xi} as an element of the Pontryagin dual of the torus {G_d / \Gamma_d}.) If in addition one has {\|F(g)\|=1} for all {g \in G}, we call {\chi} a nilcharacter of degree {d}.

In the degree one case {d=1}, the only sub-nilcharacters are of the form {\chi(n) = c e(\alpha n)} for some vector {c \in {\bf C}^m} and {\alpha \in {\bf R}}, and this is a nilcharacter if {c} is a unit vector. Similarly, in higher degree, any sequence of the form {\chi(n) = c e(P(n))}, where {c \in {\bf C}^m} is a vector and {P: {\bf Z} \rightarrow {\bf R}} is a polynomial of degree at most {d}, is a sub-nilcharacter of degree {d}, and a nilcharacter if {c} is a unit vector. A nilsequence of degree at most {d-1} is automatically a sub-nilcharacter of degree {d}, and a nilcharacter if it is of magnitude {1}. A further example of a nilcharacter is provided by the two-dimensional sequence {\chi: {\bf Z} \rightarrow {\bf C}^2} defined by

\displaystyle \chi(n) := ( F_0( \alpha n ) e( \{ \alpha n \} \beta n ), F_{1/2}( \alpha n ) e( \{ \alpha n - \frac{1}{2} \} \beta n ) ) \ \ \ \ \ (2)

where {F_0, F_{1/2}: {\bf R} \rightarrow {\bf C}} are continuous, {{\bf Z}}-automorphic functions that vanish on a neighbourhood of {{\bf Z}} and {\frac{1}{2}+{\bf Z}} respectively, and which form a partition of unity in the sense that

\displaystyle |F_0(x)|^2 + |F_{1/2}(x)|^2 = 1

for all {x \in {\bf R}}. Note that one needs both {F_0} and {F_{1/2}} to be not identically zero in order for all these conditions to be satisfied; it turns out (for topological reasons) that there is no scalar nilcharacter that is “equivalent” to this nilcharacter in a sense to be defined shortly. In some literature, one works exclusively with sub-nilcharacters rather than nilcharacters; however, the former space contains zero-divisors, which is a little annoying technically. Nevertheless, both nilcharacters and sub-nilcharacters generate the same set of “symbols” as we shall see later.
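To make the example (2) completely concrete, here is one way (a minimal Python sketch; the cutoff width {0.1} is an arbitrary choice) to manufacture an admissible pair {F_0, F_{1/2}}: start with continuous {{\bf Z}}-periodic weights vanishing on small neighbourhoods of {{\bf Z}} and {\frac{1}{2}+{\bf Z}} respectively, and normalise them into a partition of unity.

import math

def dist_to_Z(x):                            # distance to the nearest integer
    return abs(x - round(x))

def w0(x):                                   # continuous, Z-periodic, vanishes on a 0.1-neighbourhood of Z
    return max(0.0, dist_to_Z(x) - 0.1)

def w_half(x):                               # vanishes on a 0.1-neighbourhood of 1/2 + Z
    return max(0.0, dist_to_Z(x - 0.5) - 0.1)

def F0(x):
    return w0(x) / math.sqrt(w0(x)**2 + w_half(x)**2)

def F_half(x):
    return w_half(x) / math.sqrt(w0(x)**2 + w_half(x)**2)

# the denominators never vanish (the two dead zones are disjoint), and
# |F0|^2 + |F_half|^2 = 1 everywhere:
for k in range(100):
    x = k / 100
    assert abs(F0(x)**2 + F_half(x)**2 - 1) < 1e-12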

We claim that every degree {d} sub-nilcharacter {f: {\bf Z} \rightarrow {\bf C}^m} can be expressed in the form {f = c \chi}, where {\chi: {\bf Z} \rightarrow {\bf C}^{m'}} is a degree {d} nilcharacter, and {c: {\bf C}^{m'} \rightarrow {\bf C}^m} is a linear transformation. Indeed, by scaling we may assume {f(n) = F(g(n))} where {|F| < 1} uniformly. Using partitions of unity, one can find further functions {F_1,\dots,F_m} also obeying (1) for the same character {\xi} such that {|F_1|^2 + \dots + |F_m|^2} is everywhere non-zero; by dividing out the {F_1,\dots,F_m} by the square root of this quantity, and then multiplying by {\sqrt{1-|F|^2}}, we may assume that

\displaystyle |F|^2 + |F_1|^2 + \dots + |F_m|^2 = 1,

and then

\displaystyle \chi(n) := (F(g(n)), F_1(g(n)), \dots, F_m(g(n)))

becomes a degree {d} nilcharacter that contains {f(n)} amongst its components, giving the claim.

As we shall show below, nilsequences can be approximated uniformly by linear combinations of nilcharacters, in much the same way that quasiperiodic or almost periodic sequences can be approximated uniformly by linear combinations of linear phases. In particular, nilcharacters can be used as “obstructions to uniformity” in the sense of the inverse theory of the Gowers uniformity norms.

The space of degree {d} nilcharacters forms a semigroup under tensor product, with the constant sequence {1} as the identity. One can upgrade this semigroup to an abelian group by quotienting nilcharacters out by equivalence:

Definition 4 Let {d \geq 1}. We say that two degree {d} nilcharacters {\chi: {\bf Z} \rightarrow {\bf C}^m}, {\chi': {\bf Z} \rightarrow {\bf C}^{m'}} are equivalent if {\chi \otimes \overline{\chi'}: {\bf Z} \rightarrow {\bf C}^{mm'}} is equal (as a sequence) to a basic nilsequence of degree at most {d-1}. (We will later show that this is indeed an equivalence relation.) The equivalence class {[\chi]_{\mathrm{Symb}^d({\bf Z})}} of such a nilcharacter will be called the symbol of that nilcharacter (in analogy to the symbol of a differential or pseudodifferential operator), and the collection of such symbols will be denoted {\mathrm{Symb}^d({\bf Z})}.

As we shall see below the fold, {\mathrm{Symb}^d({\bf Z})} has the structure of an abelian group, and enjoys some nice “symbol calculus” properties; also, one can view symbols as precisely describing the obstruction to equidistribution for nilsequences. For {d=1}, the group is isomorphic to the Pontryagin dual {\hat {\bf Z} = {\bf R}/{\bf Z}} of the integers, and {\mathrm{Symb}^d({\bf Z})} for {d > 1} should be viewed as higher order generalisations of this Pontryagin dual. In principle, this group can be explicitly described for all {d}, but the theory rapidly gets complicated as {d} increases (much as the classification of nilpotent Lie groups or Lie algebras of step {d} rapidly gets complicated even for medium-sized {d} such as {d=3} or {d=4}). We will give an explicit description of the {d=2} case here. There is however one nice (and non-trivial) feature of {\mathrm{Symb}^d({\bf Z})} for {d \geq 2} – it is not just an abelian group, but is in fact a vector space over the rationals {{\bf Q}}!


Van Vu and I just posted to the arXiv our paper “sum-free sets in groups” (submitted to Discrete Analysis), as well as a companion survey article (submitted to J. Comb.). Given a subset {A} of an additive group {G = (G,+)}, define the quantity {\phi(A)} to be the cardinality of the largest subset {B} of {A} which is sum-free in {A} in the sense that all the sums {b_1+b_2} with {b_1,b_2} distinct elements of {B} lie outside of {A}. For instance, if {A} is itself a group, then {\phi(A)=1}, since no two elements of {A} can sum to something outside of {A}. More generally, if {A} is the union of {k} groups, then {\phi(A)} is at most {k}, thanks to the pigeonhole principle.

If {G} is the integers, then there are no non-trivial finite subgroups, and one can thus expect {\phi(A)} to grow with the size of {A}. For instance, one has the following easy result:

Proposition 1 Let {A} be a set of {2^k} natural numbers. Then {\phi(A) > k}.

Proof: We use an argument of Ruzsa, which is based in turn on an older argument of Choi. Let {x_1} be the largest element of {A}, and then recursively, once {x_1,\dots,x_i} has been selected, let {x_{i+1}} be the largest element of {A} not equal to any of the {x_1,\dots,x_i}, such that {x_{i+1}+x_j \not \in A} for all {j=1,\dots,i}, terminating this construction when no such {x_{i+1}} can be located. This gives a sequence {x_1 > x_2 > \dots > x_m} of elements in {A} which are sum-free in {A}, and with the property that for any {y \in A}, either {y} is equal to one of the {x_i}, or else {y + x_i \in A} for some {i} with {x_i > y}. Iterating this, we see that any {y \in A} is of the form {x_{i_1} - x_{i_2} - \dots - x_{i_j}} for some {j \geq 1} and {1 \leq i_1 < i_2 < \dots < i_j \leq m}. The number of such expressions {x_{i_1} - x_{i_2} - \dots - x_{i_j}} is at most {2^{m}-1}, thus {2^k \leq 2^m-1} which implies {m \geq k+1}. Since {\phi(A) \geq m}, the claim follows. \Box
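The greedy selection in this proof is easy to run in practice; the following Python sketch (illustrative only, applied to the arbitrary test set {\{1,\dots,32\}}) carries out the construction and confirms that the resulting set is sum-free in {A}, giving a lower bound on {\phi(A)}.

def greedy_sum_free_in(A):
    # the greedy selection from the proof: run through A in decreasing order,
    # keeping x only if x + y lies outside A for every previously kept y
    A_set = set(A)
    chosen = []
    for x in sorted(A_set, reverse=True):
        if all(x + y not in A_set for y in chosen):
            chosen.append(x)
    return chosen

A = set(range(1, 33))                      # 2^5 natural numbers, so Proposition 1 gives phi(A) > 5
B = greedy_sum_free_in(A)
assert all(b1 + b2 not in A for i, b1 in enumerate(B) for b2 in B[i + 1:])
print(len(B))                              # prints 17, so phi(A) >= 17 for this particular A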

In particular, we have {\phi(A) \gg \log |A|} for subsets {A} of the integers. It has been possible to improve upon this easy bound, but only with remarkable effort. The best lower bound currently is

\displaystyle \phi(A) \geq \log |A| (\log\log|A|)^{1/2 - o(1)},

a result of Shao (building upon earlier work of Sudakov, Szemeredi, and Vu and of Dousse). In the opposite direction, a construction of Ruzsa gives examples of large sets {A} with {\phi(A) \leq \exp( O( \sqrt{\log |A|} ) )}.

Using the standard tool of Freiman homomorphisms, the above results for the integers extend to other torsion-free abelian groups {G}. In our paper we study the opposite case where {G} is finite (but still abelian). In this paper of Erdös (in which the quantity {\phi(A)} was first introduced), the following question was posed: if {A} is sufficiently large depending on {\phi(A)}, does this imply the existence of two elements {x,y \in A} with {x+y=0}? As it turns out, we were able to find some simple counterexamples to this statement. For instance, if {H} is any finite additive group, then the set {A := \{ 1 \hbox{ mod } 7, 2 \hbox{ mod } 7, 4 \hbox{ mod } 7\} \times H \subset {\bf Z}/7{\bf Z} \times H} has {\phi(A)=3} but with no {x,y \in A} summing to zero; this type of example in fact works with {7} replaced by any larger Mersenne prime, and we also have a counterexample in {{\bf Z}/2^n{\bf Z}} for {n} arbitrarily large. However, in the positive direction, we can show that the answer to Erdös’s question is positive if {|G|} is assumed to have no small prime factors. That is to say,

Theorem 2 For every {k \geq 1} there exists {C \geq 1} such that if {G} is a finite abelian group whose order is not divisible by any prime less than or equal to {C}, and {A} is a subset of {G} with order at least {C} and {\phi(A) \leq k}, then there exist {x,y \in A} with {x+y=0}.
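Before turning to the proof, the first of the counterexamples mentioned above (with {H} trivial) can be verified by brute force; the following Python sketch computes {\phi(A)} for {A = \{1,2,4\} \subset {\bf Z}/7{\bf Z}} by exhaustive search over subsets and checks that no two elements of {A} sum to zero.

from itertools import combinations

p = 7
A = {1, 2, 4}                              # the set from the counterexample (with H trivial)

def sum_free_in_A(B):
    return all((b1 + b2) % p not in A for b1, b2 in combinations(B, 2))

phi = max(len(B) for r in range(1, len(A) + 1)
          for B in combinations(sorted(A), r) if sum_free_in_A(B))
print(phi)                                              # prints 3
print(any((x + y) % p == 0 for x in A for y in A))      # prints False: no two elements sum to zero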

There are two main tools used to prove this result. One is an “arithmetic removal lemma” proven by Král, Serra, and Vena. Note that the condition {\phi(A) \leq k} means that for any distinct {x_1,\dots,x_{k+1} \in A}, at least one of the {x_i+x_j}, {1 \leq i < j \leq k+1}, must also lie in {A}. Roughly speaking, the arithmetic removal lemma allows one to “almost” remove the requirement that {x_1,\dots,x_{k+1}} be distinct, which basically now means that {x \in A \implies 2x \in A} for almost all {x \in A}. This near-dilation symmetry, when combined with the hypothesis that {|G|} has no small prime factors, gives a lot of “dispersion” in the Fourier coefficients of {1_A} which can now be exploited to prove the theorem.

The second tool is the following structure theorem, which is the main result of our paper, and goes a fair ways towards classifying sets {A} for which {\phi(A)} is small:

Theorem 3 Let {A} be a finite subset of an arbitrary additive group {G}, with {\phi(A) \leq k}. Then one can find finite subgroups {H_1,\dots,H_m} with {m \leq k} such that {|A \cap H_i| \gg_k |H_i|} and {|A \backslash (H_1 \cup \dots \cup H_m)| \ll_k 1}. Furthermore, if {m=k}, then the exceptional set {A \backslash (H_1 \cup \dots \cup H_m)} is empty.

Roughly speaking, this theorem shows that the example of the union of {k} subgroups mentioned earlier is more or less the “only” example of sets {A} with {\phi(A) \leq k}, modulo the addition of some small exceptional sets and some refinement of the subgroups to dense subsets.

This theorem has the flavour of other inverse theorems in additive combinatorics, such as Freiman’s theorem, and indeed one can use Freiman’s theorem (and related tools, such as the Balog-Szemeredi theorem) to easily get a weaker version of this theorem. Indeed, if there are no sum-free subsets of {A} of order {k+1}, then a fraction {\gg_k 1} of all pairs {a,b} in {A} must have their sum also in {A} (otherwise one could take {k+1} random elements of {A} and they would be sum-free in {A} with positive probability). From this and the Balog-Szemeredi theorem and Freiman’s theorem (in arbitrary abelian groups, as established by Green and Ruzsa), we see that {A} must be “commensurate” with a “coset progression” {H+P} of bounded rank. One can then eliminate the torsion-free component {P} of this coset progression by a number of methods (e.g. by using variants of the argument in Proposition 1), with the upshot being that one can locate a finite group {H_1} that has large intersection with {A}.

At this point it is tempting to simply remove {H_1} from {A} and iterate. But one runs into a technical difficulty that removing a set such as {H_1} from {A} can alter the quantity {\phi(A)} in unpredictable ways, so one has to still keep {H_1} around when analysing the residual set {A \backslash H_1}. A second difficulty is that the latter set {A \backslash H_1} could be considerably smaller than {A} or {H_1}, but still large in absolute terms, so in particular any error term whose size is only bounded by {\varepsilon |A|} for a small {\varepsilon} could be massive compared with the residual set {A\backslash H_1}, and so such error terms would be unacceptable. One can get around these difficulties if one first performs some preliminary “normalisation” of the group {H_1}, so that the residual set {A \backslash H_1} does not intersect any coset of {H_1} too strongly. The arguments become even more complicated when one starts removing more than one group {H_1,\dots,H_i} from {A} and analyses the residual set {A \backslash (H_1 \cup \dots \cup H_i)}; indeed the “epsilon management” involved became so fearsomely intricate that we were forced to use a nonstandard analysis formulation of the problem in order to keep the complexity of the argument at a reasonable level (cf. my previous blog post on this topic). One drawback of doing so is that we have no effective bounds for the implied constants in our main theorem; it would be of interest to obtain a more direct proof of our main theorem that would lead to effective bounds.

I’ve just uploaded to the arXiv my paper “Inverse theorems for sets and measures of polynomial growth“. This paper was motivated by two related questions. The first question was to obtain a qualitatively precise description of the sets of polynomial growth that arise in Gromov’s theorem, in much the same way that Freiman’s theorem (and its generalisations) provide a qualitatively precise description of sets of small doubling. The other question was to obtain a non-abelian analogue of inverse Littlewood-Offord theory.

Let me discuss the former question first. Gromov’s theorem tells us that if a finite subset {A} of a group {G} exhibits polynomial growth in the sense that {|A^n|} grows polynomially in {n}, then the group generated by {A} is virtually nilpotent (the converse direction is also true, and is relatively easy to establish). This theorem has been strengthened a number of times over the years. For instance, a few years ago, I proved with Shalom that the condition that {|A^n|} grew polynomially in {n} could be replaced by {|A^n| \leq C n^d} for a single {n}, as long as {n} was sufficiently large depending on {C,d} (in fact we gave a fairly explicit quantitative bound on how large {n} needed to be). A little more recently, with Breuillard and Green, the condition {|A^n| \leq C n^d} was weakened to {|A^n| \leq n^d |A|}, that is to say it sufficed to have polynomial relative growth at a finite scale. In fact, the latter paper gave more information on {A} in this case, roughly speaking it showed (at least in the case when {A} was a symmetric neighbourhood of the identity) that {A^n} was “commensurate” with a very structured object known as a coset nilprogression. This can then be used to establish further control on {A}. For instance, it was recently shown by Breuillard and Tointon (again in the symmetric case) that if {|A^n| \leq n^d |A|} for a single {n} that was sufficiently large depending on {d}, then all the {A^{n'}} for {n' \geq n} have a doubling constant bounded by a bound {C_d} depending only on {d}, thus {|A^{2n'}| \leq C_d |A^{n'}|} for all {n' \geq n}.

In this paper we are able to refine this analysis a bit further; under the same hypotheses, we can show an estimate of the form

\displaystyle  \log |A^{n'}| = \log |A^n| + f( \log n' - \log n ) + O_d(1)

for all {n' \geq n} and some piecewise linear, continuous, non-decreasing function {f: [0,+\infty) \rightarrow [0,+\infty)} with {f(0)=0}, where the error {O_d(1)} is bounded by a constant depending only on {d}, and where {f} has at most {O_d(1)} pieces, each of which has a slope that is a natural number of size {O_d(1)}. To put it another way, the function {n' \mapsto |A^{n'}|} for {n' \geq n} behaves (up to multiplicative constants) like a piecewise polynomial function, where the degree of the function and number of pieces is bounded by a constant depending on {d}.

One could ask whether the function {f} has any convexity or concavity properties. It turns out that it can exhibit either convex or concave behaviour (or a combination of both). For instance, if {A} is contained in a large finite group, then {n \mapsto |A^n|} will eventually plateau to a constant, exhibiting concave behaviour. On the other hand, in nilpotent groups one can see convex behaviour; for instance, in the Heisenberg group {\begin{pmatrix} 1 & {\mathbf Z} & {\mathbf Z} \\ 0 & 1 & {\mathbf Z} \\ 0 & 0 & 1 \end{pmatrix}}, if one sets {A} to be a set of matrices of the form {\begin{pmatrix} 1 & O(N) & O(N^3) \\ 0 & 1 & O(N) \\ 0 & 0 & 1 \end{pmatrix}} for some large {N} (abusing the {O()} notation somewhat), then {n \mapsto |A^n|} grows cubically for {n \leq N} but then grows quartically for {n > N}.

To prove this proposition, it turns out (after using a somewhat difficult inverse theorem proven previously by Breuillard, Green, and myself) that one has to analyse the volume growth {n \mapsto |P^n|} of nilprogressions {P}. In the “infinitely proper” case where there are no unexpected relations between the generators of the nilprogression, one can lift everything to a simply connected Lie group (where one can take logarithms and exploit the Baker-Campbell-Hausdorff formula heavily), eventually describing {P^n} with fair accuracy by a certain convex polytope with vertices depending polynomially on {n}, which implies that {|P^n|} depends polynomially on {n} up to constants. If one is not in the “infinitely proper” case, then at some point {n_0} the nilprogression {P^{n_0}} develops a “collision”, but then one can use this collision to show (after some work) that the dimension of the “Lie model” of {P^{n_0}} has dropped by at least one from the dimension of {P} (the notion of a Lie model being developed in the previously mentioned paper of Breuillard, Green, and myself), so that this sort of collision can only occur a bounded number of times, with essentially polynomial volume growth behaviour between these collisions.

The arguments also give a precise description of the location of a set {A} for which {A^n} grows polynomially in {n}. In the symmetric case, what ends up happening is that {A^n} becomes commensurate to a “coset nilprogression” {HP} of bounded rank and nilpotency class, whilst {A} is “virtually” contained in a scaled down version {HP^{1/n}} of that nilprogression. What “virtually” means is a little complicated; roughly speaking, it means that there is a set {X} of bounded cardinality such that {aXHP^{1/n} \approx XHP^{1/n}} for all {a \in A}. Conversely, if {A} is virtually contained in {HP^{1/n}}, then {A^n} is commensurate to {HP} (and more generally, {A^{mn}} is commensurate to {HP^m} for any natural number {m}), giving quite a (qualitatively) precise description of {A} in terms of coset nilprogressions.

The main tool used to prove these results is the structure theorem for approximate groups established by Breuillard, Green, and myself, which roughly speaking asserts that approximate groups are always commensurate with coset nilprogressions. A key additional trick is a pigeonholing argument of Sanders, which in this context is the assertion that if {A^n} is comparable to {A^{2n}}, then there is an {n'} between {n} and {2n} such that {A \cdot A^{n'}} is very close in size to {A^{n'}} (up to a relative error of {1/n}). It is this fact, together with the comparability of {A^{n'}} to a coset nilprogression {HP}, that allows us (after some combinatorial argument) to virtually place {A} inside {HP^{1/n}}.

Similar arguments apply when discussing iterated convolutions {\mu^{*n}} of (symmetric) probability measures on a (discrete) group {G}, rather than combinatorial powers {A^n} of a finite set. Here, the analogue of the volume {|A^n|} is given by the negative power {\| \mu^{*n} \|_{\ell^2}^{-2}} of the {\ell^2} norm of {\mu^{*n}} (thought of as a non-negative function on {G} of total mass 1). One can also work with other norms here than {\ell^2}, but this norm has some minor technical conveniences (and other measures of the “spread” of {\mu^{*n}} end up being more or less equivalent for our purposes). There is an analogous structure theorem that asserts that if {\mu^{*n}} spreads at most polynomially in {n}, then {\mu^{*n}} is “commensurate” with the uniform probability distribution on a coset nilprogression {HP}, and {\mu} itself is largely concentrated near {HP^{1/\sqrt{n}}}. The factor of {\sqrt{n}} here is the familiar scaling factor in random walks that arises for instance in the central limit theorem. The proof of (the precise version of) this statement proceeds similarly to the combinatorial case, using pigeonholing to locate a scale {n'} where {\mu * \mu^{*n'}} has almost the same {\ell^2} norm as {\mu^{*n'}}.

A special case of this theory occurs when {\mu} is the uniform probability measure on {n} elements {v_1,\dots,v_n} of {G} and their inverses. The probability measure {\mu^{*n}} is then the distribution of a random product {w_1 \dots w_n}, where each {w_i} is equal to one of {v_{j_i}} or its inverse {v_{j_i}^{-1}}, selected at random with {j_i} drawn uniformly from {\{1,\dots,n\}} with replacement. This is very close to the Littlewood-Offord situation of random products {u_1 \dots u_n} where each {u_i} is equal to {v_i} or {v_i^{-1}} selected independently at random (thus {j_i} is now fixed to equal {i} rather than being randomly drawn from {\{1,\dots,n\}}). In the case when {G} is abelian, it turns out that a little bit of Fourier analysis shows that these two random walks have “comparable” distributions in a certain {\ell^2} sense. As a consequence, the results in this paper can be used to recover an essentially optimal abelian inverse Littlewood-Offord theorem of Nguyen and Vu. In the nonabelian case, the only Littlewood-Offord theorem I am aware of is a recent result of Tiep and Vu for matrix groups, but in this case I do not know how to relate the above two random walks to each other, and so we can only obtain an analogue of the Tiep-Vu results for the symmetrised random walk {w_1 \dots w_n} instead of the ordered random walk {u_1 \dots u_n}.

Suppose that {A \subset B} are two subgroups of some ambient group {G}, with the index {K := [B:A]} of {A} in {B} being finite. Then {B} is the union of {K} left cosets of {A}, thus {B = SA} for some set {S \subset B} of cardinality {K}. The elements {s} of {S} are not entirely arbitrary with regards to {A}. For instance, if {A} is a normal subgroup of {B}, then for each {s \in S}, the conjugation map {g \mapsto s^{-1} g s} preserves {A}. In particular, if we write {A^s := s^{-1} A s} for the conjugate of {A} by {s}, then

\displaystyle  A = A^s.

Even if {A} is not normal in {B}, it turns out that the conjugation map {g \mapsto s^{-1} g s} approximately preserves {A}, if {K} is bounded. To quantify this, let us call two subgroups {A,B} {K}-commensurate for some {K \geq 1} if one has

\displaystyle  [A : A \cap B], [B : A \cap B] \leq K.

Proposition 1 Let {A \subset B} be groups, with finite index {K = [B:A]}. Then for every {s \in B}, the groups {A} and {A^s} are {K}-commensurate, in fact

\displaystyle  [A : A \cap A^s ] = [A^s : A \cap A^s ] \leq K.

Proof: One can partition {B} into {K} left translates {xA} of {A}, as well as {K} left translates {yA^s} of {A^s}. Combining the partitions, we see that {B} can be partitioned into at most {K^2} non-empty sets of the form {xA \cap yA^s}. Each of these sets is easily seen to be a left translate of the subgroup {A \cap A^s}, thus {[B: A \cap A^s] \leq K^2}. Since

\displaystyle [B: A \cap A^s] = [B:A] [A: A \cap A^s] = [B:A^s] [A^s: A \cap A^s]

and {[B:A] = [B:A^s]=K}, we obtain the claim. \Box
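As a toy numerical illustration of this proposition (not needed for anything that follows), one can take {B} to be the symmetric group {S_3} and {A} a two-element subgroup, so that {K=3}, and check the bound {[A : A \cap A^s] \leq K} for every {s \in B} by brute force; permutations are represented below as tuples, and all names are ad hoc.

from itertools import permutations

def compose(p, q):                         # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

B = set(permutations(range(3)))            # B = S_3, so |B| = 6
A = {(0, 1, 2), (1, 0, 2)}                 # A = {identity, the transposition (0 1)}, so K = [B:A] = 3
K = len(B) // len(A)

for s in B:
    A_s = {compose(inverse(s), compose(a, s)) for a in A}   # the conjugate A^s = s^{-1} A s
    assert len(A) // len(A & A_s) <= K     # [A : A cap A^s] <= K, as claimed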

One can replace the inclusion {A \subset B} by commensurability, at the cost of some worsening of the constants:

Corollary 2 Let {A, B} be {K}-commensurate subgroups of {G}. Then for every {s \in B}, the groups {A} and {A^s} are {K^2}-commensurate.

Proof: Applying the previous proposition with {A} replaced by {A \cap B}, we see that for every {s \in B}, {A \cap B} and {(A \cap B)^s} are {K}-commensurate. Since {A \cap B} and {(A \cap B)^s} have index at most {K} in {A} and {A^s} respectively, the claim follows. \Box

It turns out that a similar phenomenon holds for the more general concept of an approximate group, and gives a “classification” of all the approximate groups {B} containing a given approximate group {A} as a “bounded index approximate subgroup”. Recall that a {K}-approximate group {A} in a group {G} for some {K \geq 1} is a symmetric subset of {G} containing the identity, such that the product set {A^2 := \{ a_1 a_2: a_1,a_2 \in A\}} can be covered by at most {K} left translates of {A} (and thus also {K} right translates, by the symmetry of {A}). For simplicity we will restrict attention to finite approximate groups {A} so that we can use their cardinality {|A|} as a measure of size. We call two finite approximate groups {A,B} {K}-commensurate if one has

\displaystyle  |A^2 \cap B^2| \geq \frac{1}{K} |A|, \frac{1}{K} |B|;

note that this is consistent with the previous notion of commensurability for genuine groups.

Theorem 3 Let {G} be a group, and let {K_1,K_2,K_3 \geq 1} be real numbers. Let {A} be a finite {K_1}-approximate group, and let {B} be a symmetric subset of {G} that contains {A}.

  • (i) If {B} is a {K_2}-approximate group with {|B| \leq K_3 |A|}, then one has {B \subset SA} for some set {S} of cardinality at most {K_1 K_2 K_3}. Furthermore, for each {s \in S}, the approximate groups {A} and {A^s} are {K_1 K_2^5 K_3}-commensurate.
  • (ii) Conversely, if {B \subset SA} for some set {S} of cardinality at most {K_3}, and {A} and {A^s} are {K_2}-commensurate for all {s \in S}, then {|B| \leq K_3 |A|}, and {B} is a {K_1^6 K_2 K_3^2}-approximate group.

Informally, the assertion that {B} is an approximate group containing {A} as a “bounded index approximate subgroup” is equivalent to {B} being covered by a bounded number of shifts {sA} of {A}, where {s} approximately normalises {A^2} in the sense that {A^2} and {(A^2)^s} have large intersection. Thus, to classify all such {B}, the problem essentially reduces to that of classifying those {s} that approximately normalise {A^2}.

To prove the theorem, we recall some standard lemmas from arithmetic combinatorics, which are the foundation stones of the “Ruzsa calculus” that we will use to establish our results:

Lemma 4 (Ruzsa covering lemma) If {A} and {B} are finite non-empty subsets of {G}, then one has {B \subset SAA^{-1}} for some set {S \subset B} with cardinality {|S| \leq \frac{|BA|}{|A|}}.

Proof: We take {S} to be a subset of {B} with the property that the translates {sA, s \in S} are disjoint in {BA}, and such that {S} is maximal with respect to set inclusion. The required properties of {S} are then easily verified. \Box
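The greedy selection in this proof translates directly into code; the following Python sketch (illustrative only, written additively in the integers with randomly chosen sets) picks a maximal family of disjoint translates and then confirms the covering {B \subset S + A - A} together with the cardinality bound.

import random

def ruzsa_cover(A, B):
    # greedy S in B with the translates s + A pairwise disjoint and maximal;
    # then B is contained in S + (A - A) and |S| <= |B + A| / |A|  (additive notation)
    S, used = [], set()
    for b in sorted(B):
        translate = {b + a for a in A}
        if translate.isdisjoint(used):
            S.append(b)
            used |= translate
    return S

random.seed(0)
A = set(random.sample(range(100), 20))
B = set(random.sample(range(100), 40))
S = ruzsa_cover(A, B)

AA = {a1 - a2 for a1 in A for a2 in A}                         # A - A plays the role of A A^{-1}
assert all(any(b - s in AA for s in S) for b in B)             # B is contained in S + (A - A)
assert len(S) * len(A) <= len({b + a for b in B for a in A})   # |S| * |A| <= |B + A|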

Lemma 5 (Ruzsa triangle inequality) If {A,B,C} are finite non-empty subsets of {G}, then

\displaystyle  |A C^{-1}| \leq |A B^{-1}| |B C^{-1}| / |B|.

Proof: If {ac^{-1}} is an element of {AC^{-1}} with {a \in A} and {c \in C}, then from the identity {ac^{-1} = (ab^{-1}) (bc^{-1})} we see that {ac^{-1}} can be written as the product of an element of {AB^{-1}} and an element of {BC^{-1}} in at least {|B|} distinct ways. The claim follows. \Box
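This inequality is likewise easy to test numerically; here is a small sketch (again written additively in the integers, on randomly chosen sets) that checks it on an example.

import random

random.seed(1)
A = set(random.sample(range(50), 12))
B = set(random.sample(range(50), 15))
C = set(random.sample(range(50), 10))

def diff(X, Y):                      # X - Y plays the role of X Y^{-1} in additive notation
    return {x - y for x in X for y in Y}

lhs = len(diff(A, C))
rhs = len(diff(A, B)) * len(diff(B, C)) / len(B)
print(lhs, rhs)
assert lhs <= rhs                    # the Ruzsa triangle inequality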

Now we can prove (i). By the Ruzsa covering lemma, {B} can be covered by at most

\displaystyle \frac{|BA|}{|A|} \leq \frac{|B^2|}{|A|} \leq \frac{K_2 |B|}{|A|} \leq K_2 K_3

left-translates of {A^2}, and hence by at most {K_1 K_2 K_3} left-translates of {A}, thus {B \subset SA} for some {|S| \leq K_1 K_2 K_3}. Since {sA} only intersects {B} if {s \in BA}, we may assume that

\displaystyle  S \subset BA \subset B^2

and hence for any {s \in S}

\displaystyle  |A^s A| \leq |B^2 A B^2 A| \leq |B^6|

\displaystyle  \leq K_2^5 |B| \leq K_2^5 K_3 |A|.

By the Ruzsa covering lemma again, this implies that {A^s} can be covered by at most {K_2^5 K_3} left-translates of {A^2}, and hence by at most {K_1 K_2^5 K_3} left-translates of {A}. By the pigeonhole principle, there thus exists a group element {g} with

\displaystyle  |A^s \cap gA| \geq \frac{1}{K_1 K_2^5 K_3} |A|.

Since

\displaystyle  |A^s \cap gA| \leq | (A^s \cap gA)^{-1} (A^s \cap gA)|

and

\displaystyle  (A^s \cap gA)^{-1} (A^s \cap gA) \subset A^2 \cap (A^s)^2

the claim follows.

Now we prove (ii). Clearly

\displaystyle  |B| \leq |S| |A| \leq K_3 |A|.

Now we control the size of {B^2 A}. We have

\displaystyle  |B^2 A| \leq |SA SA^2| \leq K_3^2 \sup_{s \in S} |A s A^2| = K_3^2 \sup_{s \in S} |A^s A^2|.

From the Ruzsa triangle inequality and symmetry we have

\displaystyle  |A^s A^2| \leq \frac{ |A^s (A^2 \cap (A^2)^s)| |(A^2 \cap (A^2)^s) A^2|}{|A^2 \cap (A^2)^s|}

\displaystyle  \leq \frac{ |(A^3)^s| |A^4| }{|A|/K_2}

\displaystyle  \leq K_2 \frac{ |A^3| |A^4| }{|A|}

\displaystyle  \leq K_1^5 K_2 |A|

and thus

\displaystyle  |B^2 A| \leq K_1^5 K_2 K_3^2 |A|.

By the Ruzsa covering lemma, this implies that {B^2} is covered by at most {K_1^5 K_2 K_3^2} left-translates of {A^2}, hence by at most {K_1^6 K_2 K_3^2} left-translates of {A}. Since {A \subset B}, the claim follows.

We now establish some auxiliary propositions about commensurability of approximate groups. The first claim is that commensurability is approximately transitive:

Proposition 6 Let {A} be a {K_1}-approximate group, {B} be a {K_2}-approximate group, and {C} be a {K_3}-approximate group. If {A} and {B} are {K_4}-commensurate, and {B} and {C} are {K_5}-commensurate, then {A} and {C} are {K_1^2 K_2^3 K_3^2 K_4 K_5 \max(K_1,K_3)}-commensurate.

Proof: From two applications of the Ruzsa triangle inequality we have

\displaystyle  |AC| \leq \frac{|A (A^2 \cap B^2)| |(A^2 \cap B^2) (B^2 \cap C^2)| |(B^2 \cap C^2) C|}{|A^2 \cap B^2| |B^2 \cap C^2|}

\displaystyle  \leq \frac{|A^3| |B^4| |C^3|}{ (|A|/K_4) (|B|/K_5)}

\displaystyle  \leq K_4 K_5 \frac{K_1^2 |A| K_2^3 |B| K_3^2 |C|}{ |A| |B| }

\displaystyle  = K_1^2 K_2^3 K_3^2 K_4 K_5 |C|.

By the Ruzsa covering lemma, we may thus cover {A} by at most {K_1^2 K_2^3 K_3^2 K_4 K_5} left-translates of {C^2}, and hence by {K_1^2 K_2^3 K_3^3 K_4 K_5} left-translates of {C}. By the pigeonhole principle, there thus exists a group element {g} such that

\displaystyle  |A \cap gC| \geq \frac{1}{K_1^2 K_2^3 K_3^3 K_4 K_5} |A|,

and so by arguing as in the proof of part (i) of the theorem we have

\displaystyle  |A^2 \cap C^2| \geq \frac{1}{K_1^2 K_2^3 K_3^3 K_4 K_5} |A|

and similarly

\displaystyle  |A^2 \cap C^2| \geq \frac{1}{K_1^3 K_2^3 K_3^2 K_4 K_5} |C|

and the claim follows. \Box

The next proposition asserts that the union and (modified) intersection of two commensurate approximate groups is again an approximate group:

Proposition 7 Let {A} be a {K_1}-approximate group, {B} be a {K_2}-approximate group, and suppose that {A} and {B} are {K_3}-commensurate. Then {A \cup B} is a {K_1 + K_2 + K_1^2 K_2^4 K_3 + K_1^4 K_2^2 K_3}-approximate subgroup, and {A^2 \cap B^2} is a {K_1^6 K_2^3 K_3}-approximate subgroup.

Using this proposition, one may obtain a variant of the previous theorem where the containment {A \subset B} is replaced by commensurability; we leave the details to the interested reader.

Proof: We begin with {A \cup B}. Clearly {A \cup B} is symmetric and contains the identity. We have {(A \cup B)^2 = A^2 \cup AB \cup BA \cup B^2}. The set {A^2} is already covered by {K_1} left translates of {A}, and hence of {A \cup B}; similarly {B^2} is covered by {K_2} left translates of {A \cup B}. As for {AB}, we observe from the Ruzsa triangle inequality that

\displaystyle  |AB^2| \leq \frac{|A (A^2 \cap B^2)| |(A^2 \cap B^2) B^2|}{|A^2 \cap B^2|}

\displaystyle  \leq \frac{|A^3| |B^4|}{|A|/K_3}

\displaystyle  \leq K_1^2 K_2^3 K_3 |B|

and hence by the Ruzsa covering lemma, {AB} is covered by at most {K_1^2 K_2^3 K_3} left translates of {B^2}, and hence by {K_1^2 K_2^4 K_3} left translates of {B}, and hence of {A \cup B}. Similarly {BA} is covered by at most {K_1^4 K_2^2 K_3} left translates of {A}, and hence of {A \cup B}. The claim follows.

Now we consider {A^2 \cap B^2}. Again, this is clearly symmetric and contains the identity. Repeating the previous arguments, we see that {A} is covered by at most {K_1^2 K_2^3 K_3} left-translates of {B}, and hence there exists a group element {g} with

\displaystyle  |A \cap gB| \geq \frac{1}{K_1^2 K_2^3 K_3} |A|.

Now observe that

\displaystyle  |(A^2 \cap B^2)^2 (A \cap gB)^{-1}| \leq |A^5| \leq K_1^4 |A|

and so by the Ruzsa covering lemma (applied with {(A \cap gB)^{-1}} in place of {A}), {(A^2 \cap B^2)^2} can be covered by at most {K_1^6 K_2^3 K_3} left-translates of {(A \cap gB)^{-1} (A \cap gB)}. But this latter set is (as observed previously) contained in {A^2 \cap B^2}, and the claim follows. \Box

In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence {G_n = (V_n, E_n)} of finite graphs, one can extract a subsequence {G_{n_j} = (V_{n_j}, E_{n_j})} which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function {p\colon [0,1] \times [0,1] \rightarrow [0,1]}. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon {p}. For instance, the edge density

\displaystyle  \frac{1}{|V_{n_j}|^2} |E_{n_j}|

converges to the integral

\displaystyle  \int_0^1 \int_0^1 p(x,y)\ dx dy,

the triangle density

\displaystyle  \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert

converges to the integral

\displaystyle  \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,

the four-cycle density

\displaystyle  \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert

converges to the integral

\displaystyle  \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,

and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
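As a simple numerical illustration of this mode of convergence (a toy simulation, with all parameters chosen arbitrarily), one can generate random graphs in which each edge appears independently with probability {p_0}; the corresponding graph limit is the constant graphon {p \equiv p_0}, and the edge and triangle densities indeed approach {p_0} and {p_0^3} as the number of vertices grows.

import numpy as np

def random_graph(n, p0, seed=0):
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < p0, 1)      # strict upper triangle
    return (upper | upper.T).astype(float)           # symmetric 0/1 adjacency matrix, no loops

p0 = 0.3
for n in (100, 300, 1000):
    adj = random_graph(n, p0)
    edge_density = adj.sum() / n**2                  # counts ordered pairs, matching the normalisation above
    triangle_density = np.trace(adj @ adj @ adj) / n**3
    print(n, edge_density, triangle_density)         # should approach p0 = 0.3 and p0^3 = 0.027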

One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence {G_n = (V_n,E_n)} of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter {\alpha \in\beta {\bf N} \backslash {\bf N}}) to obtain a nonstandard graph {G_\alpha = (V_\alpha,E_\alpha)}, where {V_\alpha = \prod_{n\rightarrow \alpha} V_n} is the ultraproduct of the {V_n}, and similarly for the {E_\alpha}. The set {E_\alpha} can then be viewed as a symmetric subset of {V_\alpha \times V_\alpha} which is measurable with respect to the Loeb {\sigma}-algebra {{\mathcal L}_{V_\alpha \times V_\alpha}} of the product {V_\alpha \times V_\alpha} (see this previous blog post for the construction of Loeb measure). A crucial point is that this {\sigma}-algebra is larger than the product {{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}} of the Loeb {\sigma}-algebra of the individual vertex set {V_\alpha}. This leads to a decomposition

\displaystyle  1_{E_\alpha} = p + e

where the “graphon” {p} is the orthogonal projection of {1_{E_\alpha}} onto {L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}, and the “regular error” {e} is orthogonal to all product sets {A \times B} for {A, B \in {\mathcal L}_{V_\alpha}}. The graphon {p\colon V_\alpha \times V_\alpha \rightarrow [0,1]} then captures the statistics of the nonstandard graph {G_\alpha}, in exact analogy with the more traditional graph limits: for instance, the edge density

\displaystyle  \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|

(or equivalently, the limit of the {\frac{1}{|V_n|^2} |E_n|} along the ultrafilter {\alpha}) is equal to the integral

\displaystyle  \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)

where {d\mu_V} denotes Loeb measure on a nonstandard finite set {V}; the triangle density

\displaystyle  \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert

(or equivalently, the limit along {\alpha} of the triangle densities of {E_n}) is equal to the integral

\displaystyle  \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),

and so forth. Note that with this construction, the graphon {p} is living on the Cartesian square of an abstract probability space {V_\alpha}, which is likely to be inseparable; but it is possible to cut down the Loeb {\sigma}-algebra on {V_\alpha} to a minimal countably generated {\sigma}-algebra for which {p} remains measurable (up to null sets), and then one can identify {V_\alpha} with {[0,1]}, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)

Additive combinatorics, which studies things like the additive structure of finite subsets {A} of an abelian group {G = (G,+)}, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.

It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group {A_\alpha} in a nonstandard group {G_\alpha = \prod_{n \rightarrow \alpha} G_n}, defined as the ultraproduct of finite {K}-approximate groups {A_n \subset G_n} for some standard {K}. (A {K}-approximate group {A_n} is a symmetric set containing the origin such that {A_n+A_n} can be covered by {K} or fewer translates of {A_n}.) We then let {O(A_\alpha)} be the external subgroup of {G_\alpha} generated by {A_\alpha}; equivalently, {O(A_\alpha)} is the union of {A_\alpha^m} over all standard {m}. This space has a Loeb measure {\mu_{O(A_\alpha)}}, defined by setting

\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}

whenever {E_\alpha} is an internal subset of {A_\alpha^m} for any standard {m}, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.

The Loeb measure {\mu_{O(A_\alpha)}} is a translation invariant measure on {O(A_{\alpha})}, normalised so that {A_\alpha} has Loeb measure one. As such, one should think of {O(A_\alpha)} as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that {O(A_\alpha)} is not actually a locally compact group with Haar measure, for two reasons:

  • There is not an obvious topology on {O(A_\alpha)} that makes it simultaneously locally compact, Hausdorff, and {\sigma}-compact. (One can get one or two out of three without difficulty, though.)
  • The addition operation {+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)} is not measurable from the product Loeb algebra {{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}} to {{\mathcal L}_{O(A_\alpha)}}. Instead, it is measurable from the finer Loeb algebra {{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}} to {{\mathcal L}_{O(A_\alpha)}} (compare with the analogous situation for nonstandard graphs).

Nevertheless, the analogy is a useful guide for the arguments that follow.

Let {L(O(A_\alpha))} denote the space of bounded Loeb measurable functions {f\colon O(A_\alpha) \rightarrow {\bf C}} (modulo almost everywhere equivalence) that are supported on {A_\alpha^m} for some standard {m}; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation {\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}, defined by setting

\displaystyle  \hbox{st} f \star \hbox{st} g(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)

whenever {f\colon A_\alpha^m \rightarrow {}^* {\bf C}}, {g\colon A_\alpha^l \rightarrow {}^* {\bf C}} are bounded nonstandard functions (extended by zero to all of {O(A_\alpha)}), and then extending to arbitrary elements of {L(O(A_\alpha))} by density. Equivalently, {f \star g} is the pushforward of the {{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}-measurable function {(x,y) \mapsto f(x) g(y)} under the map {(x,y) \mapsto x+y}.

The basic structural theorem is then as follows.

Theorem 1 (Kronecker factor) Let {A_\alpha} be an ultra approximate group. Then there exists a (standard) locally compact abelian group {G} of the form

\displaystyle  G = {\bf R}^d \times {\bf Z}^m \times T

for some standard {d,m} and some compact abelian group {T}, equipped with a Haar measure {\mu_G} and a measurable homomorphism {\pi\colon O(A_\alpha) \rightarrow G} (using the Loeb {\sigma}-algebra on {O(A_\alpha)} and the Baire {\sigma}-algebra on {G}), with the following properties:

  • (i) {\pi} has dense image, and {\mu_G} is the pushforward of Loeb measure {\mu_{O(A_\alpha)}} by {\pi}.
  • (ii) There exist sets {\{0\} \subset U_0 \subset K_0 \subset G} with {U_0} open and {K_0} compact, such that

    \displaystyle  \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)

  • (iii) Whenever {K \subset U \subset G} with {K} compact and {U} open, there exists a nonstandard finite set {B} such that

    \displaystyle  \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)

  • (iv) If {f, g \in L}, then we have the convolution formula

    \displaystyle  f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)

    where {\pi_* f,\pi_* g} are the pushforwards of {f,g} to {L^2(G, \mu_G)}, the convolution {\star} on the right-hand side is convolution using {\mu_G}, and {\pi^*} is the pullback map from {L^2(G,\mu_G)} to {L^2(O(A_\alpha), \mu_{O(A_\alpha)})}. In particular, if {\pi_* f = 0}, then {f*g=0} for all {g \in L}.

One can view the locally compact abelian group {G} as a “model” or “Kronecker factor” for the ultra approximate group {A_\alpha} (in close analogy with the Kronecker factor from ergodic theory). In the case that {A_\alpha} is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components {{\bf R}^d \times {\bf Z}^m} of the Kronecker group {G} are trivial, and this theorem was implicitly established by Szegedy. The compact group {T} is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions {f}, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor {G}. Once one is in the separable case, the Baire sigma algebra is identical with the more familiar Borel sigma algebra.

Given any sequence of uniformly bounded functions {f_n\colon A_n^m \rightarrow {\bf C}} for some fixed {m}, we can view the function {f \in L} defined by

\displaystyle  f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)

as an “additive limit” of the {f_n}, in much the same way that graphons {p\colon V_\alpha \times V_\alpha \rightarrow [0,1]} are limits of the indicator functions {1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}. The additive limits capture some of the statistics of the {f_n}, for instance the normalised means

\displaystyle  \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)

converge (along the ultrafilter {\alpha}) to the mean

\displaystyle  \int_G f(x)\ d\mu_G(x),

and for three sequences {f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}} of functions, the normalised correlation

\displaystyle  \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)

converges along {\alpha} to the correlation

\displaystyle  \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),

the normalised {U^2} Gowers norm

\displaystyle  ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}

converges along {\alpha} to the {U^2} Gowers norm

\displaystyle  ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(x+y-z)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}

and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised {\ell^2} norm

\displaystyle  (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}

does not necessarily converge to the {L^2} norm

\displaystyle  (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},

but can converge instead to a larger quantity, due to the presence of the orthogonal projection {\pi_*} in the definition (4) of {f}.

An important special case of an additive limit occurs when the functions {f_n\colon A_n^m \rightarrow {\bf C}} involved are indicator functions {f_n = 1_{E_n}} of some subsets {E_n} of {A_n^m}. The additive limit {f \in L} does not necessarily remain an indicator function, but instead takes values in {[0,1]} (much as a graphon {p} takes values in {[0,1]} even though the original indicators {1_{E_n}} take values in {\{0,1\}}). The convolution {f \star f\colon G \rightarrow [0,1]} is then the ultralimit of the normalised convolutions {\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}; in particular, the measure of the support of {f \star f} provides a lower bound on the limiting normalised cardinality {\frac{1}{|A_n|} |E_n + E_n|} of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset {2E_n = E_n + E_n} could contain a large number of elements which have very few ({o(|A_n|)}) representations as the sum of two elements of {E_n}, and in the limit these portions of the sumset fall outside of the support of {f \star f}. (One can think of the support of {f \star f} as describing the “essential” sumset of {2E_n = E_n + E_n}, discarding those elements that have only very few representations.) Similarly for higher convolutions of {f}. Thus one can use additive limits to partially control the growth {k E_n} of iterated sumsets of subsets {E_n} of approximate groups {A_n}, in the regime where {k} stays bounded and {n} goes to infinity.

Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.

Example 2 (Bohr sets) We take {A_n} to be the intervals {A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}, where {N_n} is a sequence going to infinity; these are {2}-approximate groups for all {n}. Let {\theta} be an irrational real number, let {I} be an interval in {{\bf R}/{\bf Z}}, and for each natural number {n} let {B_n} be the Bohr set

\displaystyle  B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.

In this case, the (reduced) Kronecker factor {G} can be taken to be the infinite cylinder {{\bf R} \times {\bf R}/{\bf Z}} with the usual Lebesgue measure {\mu_G}. The additive limits of {1_{A_n}} and {1_{B_n}} end up being {1_A} and {1_B}, where {A} is the finite cylinder

\displaystyle  A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}

and {B} is the rectangle

\displaystyle  B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.

Geometrically, one should think of {A_n} and {B_n} as being wrapped around the cylinder {{\bf R} \times {\bf R}/{\bf Z}} via the homomorphism {x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}, and then one sees that {B_n} is converging in some normalised weak sense to {B}, and similarly for {A_n} and {A}. In particular, the additive limit predicts the growth rate of the iterated sumsets {kB_n} to be quadratic in {k} until {k|I|} becomes comparable to {1}, at which point the growth transitions to linear growth, in the regime where {k} is bounded and {n} is large.

If {\theta = \frac{p}{q}} were rational instead of irrational, then one would need to replace {{\bf R}/{\bf Z}} by the finite subgroup {\frac{1}{q}{\bf Z}/{\bf Z}} here.
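The quadratic-then-linear growth predicted here is easy to observe numerically; the following sketch (a toy computation for a single moderately large {N}, with the parameters {N}, {\theta} and the interval {I} chosen arbitrarily) computes the iterated sumsets {kB_n} directly and prints their sizes.

import math

N, theta = 150, math.sqrt(2)              # arbitrary parameters
I = (0.0, 0.1)                            # an interval in R/Z of length 0.1

B = {x for x in range(-N, N + 1) if I[0] <= (theta * x) % 1 < I[1]}

kB = {0}
for k in range(1, 21):
    kB = {x + b for x in kB for b in B}   # the k-fold sumset kB
    print(k, len(kB))                     # roughly quadratic in k while k|I| < 1, then roughly linear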

Example 3 (Structured subsets of progressions) We take {A_n} to be the rank two progression

\displaystyle  A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},

where {N_n} is a sequence going to infinity; these are {4}-approximate groups for all {n}. Let {B_n} be the subset

\displaystyle  B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.

Then the (reduced) Kronecker factor can be taken to be {G = {\bf R}^2} with Lebesgue measure {\mu_G}, and the additive limits of the {1_{A_n}} and {1_{B_n}} are then {1_A} and {1_B}, where {A} is the square

\displaystyle  A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}

and {B} is the disk

\displaystyle  B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.

Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism {a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})} for {a,b = O( N_n )} to embed the original sets {A_n, B_n} into the plane {{\bf R}^2}. In particular, one now expects the growth rate of the iterated sumsets {k A_n} and {k B_n} to be quadratic in {k}, in the regime where {k} is bounded and {n} is large.
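
One can test this quadratic growth numerically as well (a rough sketch only, with {N} and the range of {k} kept small so that the sumsets can be enumerated exactly; both values are arbitrary choices):

```python
# Illustrative sketch (N and the range of k are my own choices, kept small so the
# sumsets can be enumerated exactly): both kA_n and kB_n grow roughly like k^2.
N = 12
A = {a + b * N * N for a in range(-N, N + 1) for b in range(-N, N + 1)}
B = {a + b * N * N for a in range(-N, N + 1) for b in range(-N, N + 1)
     if a * a + b * b <= N * N}

def iterated_sumset_sizes(S, kmax):
    sizes, T = [], set(S)
    for k in range(1, kmax + 1):
        if k > 1:
            T = {t + s for t in T for s in S}   # kS = (k-1)S + S
        sizes.append(len(T))
    return sizes

for name, S in (("A_n", A), ("B_n", B)):
    sizes = iterated_sumset_sizes(S, 4)
    print(name, [round(s / sizes[0], 2) for s in sizes])   # roughly 1, 4, 9, 16
```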

Example 4 (Dissociated sets) Let {d} be a fixed natural number, and take

\displaystyle  A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}

where {v_1,\dots,v_d} are randomly chosen elements of a large cyclic group {{\bf Z}/p_n{\bf Z}}, with {p_n} a sequence of primes going to infinity. These are {O(d)}-approximate groups. The (reduced) Kronecker factor {G} can (almost surely) then be taken to be {{\bf Z}^d} with counting measure, and the additive limit of {1_{A_n}} is {1_A}, where {A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}} and {e_1,\dots,e_d} is the standard basis of {{\bf Z}^d}. In particular, the iterated sumsets {k A_n} should grow approximately like {k^d} for {k} bounded and {n} large.
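
Here is a small simulation of this prediction (illustrative only; the values of {d}, the prime {p}, and the range of {k} below are arbitrary choices): the iterated sumsets of the random set in {{\bf Z}/p{\bf Z}} are compared with those of {\{0,e_1,\dots,e_d,-e_1,\dots,-e_d\}} in {{\bf Z}^d}, and with overwhelming probability the two counts agree.

```python
# Illustrative sketch (d, p and the range of k are assumptions of mine): the
# iterated sumsets of a random dissociated set in Z/pZ are compared with those of
# {0, +-e_1, ..., +-e_d} in Z^d; with overwhelming probability the counts agree,
# and both grow roughly like k^d for bounded k.
import random

d, p = 3, 10**9 + 7                  # a large prime modulus (illustrative)
v = [random.randrange(1, p) for _ in range(d)]
A = {0} | {x % p for x in v} | {(-x) % p for x in v}

E = {(0,) * d} \
    | {tuple(+1 if j == i else 0 for j in range(d)) for i in range(d)} \
    | {tuple(-1 if j == i else 0 for j in range(d)) for i in range(d)}

S, T = set(A), set(E)
for k in range(1, 6):
    if k > 1:
        S = {(s + a) % p for s in S for a in A}                           # kA in Z/pZ
        T = {tuple(t[j] + e[j] for j in range(d)) for t in T for e in E}  # k-fold sum in Z^d
    print(k, len(S), len(T))
```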

Example 5 (Random subsets of groups) Let {A_n = G_n} be a sequence of finite additive groups whose order is going to infinity. Let {B_n} be a random subset of {G_n} of some fixed density {0 \leq \lambda \leq 1}. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group {\{0\}}, and the additive limit of the {1_{B_n}} is the constant function {\lambda}. The convolutions {\frac{1}{|G_n|} 1_{B_n} * 1_{B_n}} then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of {\lambda^2}; this reflects the fact that {(1-o(1))|G_n|} of the elements of {G_n} can be represented as the sum of two elements of {B_n} in {(\lambda^2 + o(1)) |G_n|} ways. In particular, {B_n+B_n} occupies a proportion {1-o(1)} of {G_n}.
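
This is also easy to test numerically (a sketch with arbitrary choices of the group, its order, and the density {\lambda}):

```python
# Illustrative sketch (the group Z/nZ, its size and the density lam are my own
# choices): a random subset B of density lam should have B+B covering almost all
# of the group, with a typical element having about lam^2 * n representations.
import random
from collections import Counter

n, lam = 2000, 0.3
B = [x for x in range(n) if random.random() < lam]

reps = Counter((a + b) % n for a in B for b in B)

print("|B+B| / |G|      =", len(reps) / n)                 # should be close to 1
print("mean rep. count  =", round(sum(reps.values()) / n, 1),
      "  vs  lam^2 * n =", lam * lam * n)
```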

Example 6 (Trigonometric series) Take {A_n = G_n = {\bf Z}/p_n {\bf Z}} for a sequence {p_n} of primes going to infinity, and for each {n} let {\xi_{n,1},\xi_{n,2},\dots} be an infinite sequence of frequencies chosen uniformly and independently from {{\bf Z}/p_n{\bf Z}}. Let {f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}} denote the random trigonometric series

\displaystyle  f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.

Then (almost surely) we can take the reduced Kronecker factor {G} to be the infinite torus {({\bf R}/{\bf Z})^{\bf N}} (with the Haar probability measure {\mu_G}), and the additive limit of the {f_n} then becomes the function {f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf C}} defined by the formula

\displaystyle  f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.

In fact, the pullback {\pi^* f} is the ultralimit of the {f_n}. As such, for any standard exponent {1 \leq q < \infty}, the normalised {l^q} norm

\displaystyle  (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}

can be seen to converge to the limit

\displaystyle  (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.
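
One can check this convergence numerically as well (a rough sketch; the prime, the truncation point {J} of the series, the exponent {q}, and the Monte Carlo sample size are all illustrative choices, and truncating the series at {J} terms affects the norms only by {O(2^{-J})}):

```python
# Illustrative sketch (the prime p, the truncation J, the exponent q and the
# Monte Carlo sample size are my own choices; truncating the series at J terms
# changes the norms only by O(2^-J)): the normalised l^q norm of f_n on Z/pZ
# should be close to the L^q norm of the limit f on the infinite torus.
import cmath
import random

p, J, q = 10007, 20, 4
xi = [random.randrange(p) for _ in range(J)]       # the frequencies xi_{n,j}

def f_n(x):
    return sum(2.0 ** -(j + 1) * cmath.exp(2 * cmath.pi * 1j * xi[j] * x / p)
               for j in range(J))

def f_lim(t):                                      # f on the torus, truncated at J terms
    return sum(2.0 ** -(j + 1) * cmath.exp(2 * cmath.pi * 1j * t[j])
               for j in range(J))

lhs = (sum(abs(f_n(x)) ** q for x in range(p)) / p) ** (1 / q)

samples = 20000
rhs = (sum(abs(f_lim([random.random() for _ in range(J)])) ** q
           for _ in range(samples)) / samples) ** (1 / q)

print("normalised l^q norm of f_n   :", round(lhs, 4))
print("Monte Carlo estimate of limit:", round(rhs, 4))
```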

The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.

It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (though see this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.

Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.

