
Note: this post is of a particularly technical nature, in particular presuming familiarity with nilsequences, nilsystems, characteristic factors, etc., and is primarily intended for experts.

As mentioned in the previous post, Ben Green, Tamar Ziegler, and myself proved the following inverse theorem for the Gowers norms:

Theorem 1 (Inverse theorem for Gowers norms) Let ${N \geq 1}$ and ${s \geq 1}$ be integers, and let ${\delta > 0}$. Suppose that ${f: {\bf Z} \rightarrow [-1,1]}$ is a function supported on ${[N] := \{1,\dots,N\}}$ such that

$\displaystyle \frac{1}{N^{s+2}} \sum_{n,h_1,\dots,h_{s+1}} \prod_{\omega \in \{0,1\}^{s+1}} f(n+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}) \geq \delta.$

Then there exists a filtered nilmanifold ${G/\Gamma}$ of degree ${\leq s}$ and complexity ${O_{s,\delta}(1)}$, a polynomial sequence ${g: {\bf Z} \rightarrow G}$, and a Lipschitz function ${F: G/\Gamma \rightarrow {\bf R}}$ of Lipschitz constant ${O_{s,\delta}(1)}$ such that

$\displaystyle \frac{1}{N} \sum_n f(n) F(g(n) \Gamma) \gg_{s,\delta} 1.$
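For readers who wish to experiment with these averages, here is a small brute-force computation of the left-hand side (a toy sketch of my own, not from the paper; only feasible for very small ${N}$ and ${s}$). Note that the indicator of ${[N]}$ and the linear phase ${(-1)^n 1_{[N]}(n)}$ give exactly the same average, reflecting the fact that degree ${1}$ phases already have large ${U^2}$-type averages:

```python
from itertools import product

def gowers_average(f, N, s=1):
    """Brute-force the normalised average
      (1/N^{s+2}) * sum_{n, h_1..h_{s+1}} prod_{omega in {0,1}^{s+1}}
          f(n + omega_1 h_1 + ... + omega_{s+1} h_{s+1})
    for f: Z -> [-1,1] supported on [N] = {1,...,N}.  A term vanishes
    unless n lies in [N] and each h_i lies in (-N, N), so the sums may
    be restricted to those ranges."""
    H = range(-(N - 1), N)
    total = 0.0
    for n in range(1, N + 1):
        for hs in product(H, repeat=s + 1):
            term = 1.0
            for omega in product((0, 1), repeat=s + 1):
                term *= f(n + sum(w * h for w, h in zip(omega, hs)))
                if term == 0.0:
                    break
            total += term
    return total / N ** (s + 2)

N = 12
indicator = lambda n: 1.0 if 1 <= n <= N else 0.0
phase = lambda n: float((-1) ** n) if 1 <= n <= N else 0.0
```

Here `gowers_average(phase, N, 1)` agrees with `gowers_average(indicator, N, 1)`, since the four signs in each product cancel; it is precisely to detect such structured functions (and their higher-degree nilsequence analogues) that the conclusion of Theorem 1 is formulated.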

This result was conjectured earlier by Ben Green and myself; this conjecture was strongly motivated by an analogous inverse theorem in ergodic theory by Host and Kra, which we formulate here in a form designed to resemble Theorem 1 as closely as possible:

Theorem 2 (Inverse theorem for Gowers-Host-Kra seminorms) Let ${s \geq 1}$ be an integer, and let ${(X, T)}$ be an ergodic, countably generated measure-preserving system. Suppose that one has

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N^{s+1}} \sum_{h_1,\dots,h_{s+1} \in [N]} \int_X \prod_{\omega \in \{0,1\}^{s+1}} f(T^{\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}}x)\ d\mu(x)$

$\displaystyle > 0$

for all non-zero ${f \in L^\infty(X)}$ (all ${L^p}$ spaces are real-valued in this post). Then ${(X,T)}$ is an inverse limit (in the category of measure-preserving systems, up to almost everywhere equivalence) of ergodic degree ${\leq s}$ nilsystems, that is to say systems of the form ${(G/\Gamma, x \mapsto gx)}$ for some degree ${\leq s}$ filtered nilmanifold ${G/\Gamma}$ and a group element ${g \in G}$ that acts ergodically on ${G/\Gamma}$.

It is a natural question to ask if there is any logical relationship between the two theorems. In the finite field category, one can deduce the combinatorial inverse theorem from the ergodic inverse theorem by a variant of the Furstenberg correspondence principle, as worked out by Tamar Ziegler and myself; however, in the current context of ${{\bf Z}}$-actions, the connection is less clear.

One can split Theorem 2 into two components:

Theorem 3 (Weak inverse theorem for Gowers-Host-Kra seminorms) Let ${s \geq 1}$ be an integer, and let ${(X, T)}$ be an ergodic, countably generated measure-preserving system. Suppose that one has

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N^{s+1}} \sum_{h_1,\dots,h_{s+1} \in [N]} \int_X \prod_{\omega \in \{0,1\}^{s+1}} T^{\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}} f\ d\mu$

$\displaystyle > 0$

for all non-zero ${f \in L^\infty(X)}$, where ${T^h f := f \circ T^h}$. Then ${(X,T)}$ is a factor of an inverse limit of ergodic degree ${\leq s}$ nilsystems.

Theorem 4 (Pro-nilsystems closed under factors) Let ${s \geq 1}$ be an integer. Then any factor of an inverse limit of ergodic degree ${\leq s}$ nilsystems, is again an inverse limit of ergodic degree ${\leq s}$ nilsystems.

Indeed, it is clear that Theorem 2 implies both Theorem 3 and Theorem 4, and conversely that the latter two theorems jointly imply the former. Theorem 4 is, in principle, purely a fact about nilsystems, and should have an independent proof, but this is not known; the only known proofs go through the full machinery needed to prove Theorem 2 (or the closely related theorem of Ziegler). (However, the fact that a factor of a nilsystem is again a nilsystem was established previously by Parry.)

The purpose of this post is to record a partial implication in reverse direction to the correspondence principle:

Proposition 5 Theorem 1 implies Theorem 3.

As mentioned at the start of the post, a fair amount of familiarity with the area is presumed here, and some routine steps will be presented with only a fairly brief explanation.

A few years ago, Ben Green, Tamar Ziegler, and myself proved the following (rather technical-looking) inverse theorem for the Gowers norms:

Theorem 1 (Discrete inverse theorem for Gowers norms) Let ${N \geq 1}$ and ${s \geq 1}$ be integers, and let ${\delta > 0}$. Suppose that ${f: {\bf Z} \rightarrow [-1,1]}$ is a function supported on ${[N] := \{1,\dots,N\}}$ such that

$\displaystyle \frac{1}{N^{s+2}} \sum_{n,h_1,\dots,h_{s+1}} \prod_{\omega \in \{0,1\}^{s+1}} f(n+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}) \geq \delta.$

Then there exists a filtered nilmanifold ${G/\Gamma}$ of degree ${\leq s}$ and complexity ${O_{s,\delta}(1)}$, a polynomial sequence ${g: {\bf Z} \rightarrow G}$, and a Lipschitz function ${F: G/\Gamma \rightarrow {\bf R}}$ of Lipschitz constant ${O_{s,\delta}(1)}$ such that

$\displaystyle \frac{1}{N} \sum_n f(n) F(g(n) \Gamma) \gg_{s,\delta} 1.$

For the definitions of “filtered nilmanifold”, “degree”, “complexity”, and “polynomial sequence”, see the paper of Ben, Tammy, and myself. (I should caution the reader that this blog post will presume a fair amount of familiarity with this subfield of additive combinatorics.) This result has a number of applications, for instance to establishing asymptotics for linear equations in the primes, but this will not be the focus of discussion here.

The purpose of this post is to record the observation that this “discrete” inverse theorem, together with an equidistribution theorem for nilsequences that Ben and I worked out in a separate paper, implies a continuous version:

Theorem 2 (Continuous inverse theorem for Gowers norms) Let ${s \geq 1}$ be an integer, and let ${\delta>0}$. Suppose that ${f: {\bf R} \rightarrow [-1,1]}$ is a measurable function supported on ${[0,1]}$ such that

$\displaystyle \int_{{\bf R}^{s+1}} \prod_{\omega \in \{0,1\}^{s+1}} f(t+\omega_1 h_1 + \dots + \omega_{s+1} h_{s+1})\ dt dh_1 \dots dh_{s+1} \geq \delta. \ \ \ \ \ (1)$

Then there exists a filtered nilmanifold ${G/\Gamma}$ of degree ${\leq s}$ and complexity ${O_{s,\delta}(1)}$, a (smooth) polynomial sequence ${g: {\bf R} \rightarrow G}$, and a Lipschitz function ${F: G/\Gamma \rightarrow {\bf R}}$ of Lipschitz constant ${O_{s,\delta}(1)}$ such that

$\displaystyle \int_{\bf R} f(t) F(g(t) \Gamma)\ dt \gg_{s,\delta} 1.$

The interval ${[0,1]}$ can be easily replaced with any other fixed interval by a change of variables. A key point here is that the bounds are completely uniform in the choice of ${f}$. Note though that the coefficients of ${g}$ can be arbitrarily large (and this is necessary, as can be seen just by considering functions of the form ${f(t) = \cos( \xi t)}$ for some arbitrarily large frequency ${\xi}$).
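As a sanity check on (1), one can discretise. For the simplest example ${f = 1_{[0,1]}}$ and ${s=1}$, writing ${g(h) := \int f(t) f(t+h)\ dt = \max(0, 1-|h|)}$, the ${t}$ and ${h_2}$ integrations decouple and the left-hand side of (1) becomes ${\int g(h_1)^2\ dh_1 = 2/3}$. The following Riemann-sum sketch (naming my own, not from the post) reproduces this numerically:

```python
def u2_average(f, N):
    """Riemann-sum approximation (mesh 1/N) to
      int f(t) f(t+h1) f(t+h2) f(t+h1+h2) dt dh1 dh2
    for f supported on [0,1]; only t in [0,1) and h1, h2 in (-1,1)
    can contribute, so the sums are restricted accordingly."""
    total = 0.0
    for i in range(N):
        t = i / N
        if f(t) == 0.0:
            continue
        for j in range(-N, N + 1):
            h1 = j / N
            for k in range(-N, N + 1):
                h2 = k / N
                total += f(t) * f(t + h1) * f(t + h2) * f(t + h1 + h2)
    return total / N ** 3

f = lambda t: 1.0 if 0 <= t < 1 else 0.0
```

With mesh ${1/N}$ this is precisely a discrete average over ${\frac{1}{N} \cdot [N]}$, which is the limiting procedure exploited in this post.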

It is likely that one could prove Theorem 2 by carefully going through the proof of Theorem 1 and replacing all instances of ${{\bf Z}}$ with ${{\bf R}}$ (and making appropriate modifications to the argument to accommodate this). However, the proof of Theorem 1 is quite lengthy. Here, we shall proceed by the usual limiting process of viewing the continuous interval ${[0,1]}$ as a limit of the discrete interval ${\frac{1}{N} \cdot [N]}$ as ${N \rightarrow \infty}$. However there will be some problems taking the limit due to a failure of compactness, specifically with regard to the coefficients of the polynomial sequence ${g: {\bf Z} \rightarrow G}$ produced by Theorem 1, after normalising these coefficients by ${N}$. Fortunately, a factorisation theorem from a paper of Ben Green and myself resolves this problem by splitting ${g}$ into a “smooth” part which does enjoy good compactness properties, as well as “totally equidistributed” and “periodic” parts which can be eliminated using the measurability (and thus approximate smoothness) of ${f}$.

Szemerédi’s theorem asserts that any subset of the integers of positive upper density contains arbitrarily large arithmetic progressions. Here is an equivalent quantitative form of this theorem:

Theorem 1 (Szemerédi’s theorem) Let ${N}$ be a positive integer, and let ${f: {\bf Z}/N{\bf Z} \rightarrow [0,1]}$ be a function with ${{\bf E}_{x \in {\bf Z}/N{\bf Z}} f(x) \geq \delta}$ for some ${\delta>0}$, where we use the averaging notation ${{\bf E}_{x \in A} f(x) := \frac{1}{|A|} \sum_{x \in A} f(x)}$, ${{\bf E}_{x,r \in A} f(x) := \frac{1}{|A|^2} \sum_{x, r \in A} f(x)}$, etc. Then for ${k \geq 3}$ we have

$\displaystyle {\bf E}_{x,r \in {\bf Z}/N{\bf Z}} f(x) f(x+r) \dots f(x+(k-1)r) \geq c(k,\delta)$

for some ${c(k,\delta)>0}$ depending only on ${k,\delta}$.

The equivalence is basically thanks to an averaging argument of Varnavides; see for instance Chapter 11 of my book with Van Vu or this previous blog post for a discussion. We have removed the cases ${k=1,2}$ as they are trivial and somewhat degenerate.
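In concrete terms, the average in Theorem 1 can be computed directly for small ${N}$; here is a quick sketch (naming my own). For the constant function ${f = \delta}$ the average is exactly ${\delta^k}$, and for the indicator of the even residues in ${{\bf Z}/10{\bf Z}}$ (density ${1/2}$) the ${k=3}$ average is ${1/4}$, since a three-term progression lies in the evens precisely when ${x}$ and ${r}$ are both even:

```python
def ap_average(f, k=3):
    """E_{x,r in Z/NZ} f(x) f(x+r) ... f(x+(k-1)r), by direct summation.
    Degenerate progressions with r = 0 are included in the average."""
    N = len(f)
    total = 0.0
    for x in range(N):
        for r in range(N):
            term = 1.0
            for j in range(k):
                term *= f[(x + j * r) % N]
            total += term
    return total / N ** 2

constant = [0.5] * 10
evens = [1.0 if x % 2 == 0 else 0.0 for x in range(10)]
```

The content of Theorem 1 is of course a uniform lower bound ${c(k,\delta)}$ over all ${f}$ of mean at least ${\delta}$, which no finite computation of this sort can certify.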

There are now many proofs of this theorem. Some time ago, I took an ergodic-theoretic proof of Furstenberg and converted it to a purely finitary proof of the theorem. The argument used some simplifying innovations that had been developed since the original work of Furstenberg (in particular, deployment of the Gowers uniformity norms, as well as a “dual” norm that I called the uniformly almost periodic norm, and an emphasis on van der Waerden’s theorem for handling the “compact extension” component of the argument). But the proof was still quite messy. However, as discussed in this previous blog post, messy finitary proofs can often be cleaned up using nonstandard analysis. Thus, there should be a nonstandard version of the Furstenberg ergodic theory argument that is relatively clean. I decided (after some encouragement from Ben Green and Isaac Goldbring) to write down most of the details of this argument in this blog post, though for sake of brevity I will skim rather quickly over arguments that were already discussed at length in other blog posts. In particular, I will presume familiarity with nonstandard analysis (in particular, the notion of a standard part of a bounded real number, and the Loeb measure construction), see for instance this previous blog post for a discussion.

In analytic number theory, there is a well known analogy between the prime factorisation of a large integer, and the cycle decomposition of a large permutation; this analogy is central to the topic of “anatomy of the integers”, as discussed for instance in this survey article of Granville. Consider for instance the following two parallel lists of facts (stated somewhat informally). Firstly, some facts about the prime factorisation of large integers:

• Every positive integer ${m}$ has a prime factorisation

$\displaystyle m = p_1 p_2 \dots p_r$

into (not necessarily distinct) primes ${p_1,\dots,p_r}$, which is unique up to rearrangement. Taking logarithms, we obtain a partition

$\displaystyle \log m = \log p_1 + \log p_2 + \dots + \log p_r$

of ${\log m}$.

• (Prime number theorem) A randomly selected integer ${m}$ of size ${m \sim N}$ will be prime with probability ${\approx \frac{1}{\log N}}$ when ${N}$ is large.
• If ${m \sim N}$ is a randomly selected large integer of size ${N}$, and ${p = p_i}$ is a randomly selected prime factor of ${m = p_1 \dots p_r}$ (with each index ${i}$ being chosen with probability ${\frac{\log p_i}{\log m}}$), then ${\log p_i}$ is approximately uniformly distributed between ${0}$ and ${\log N}$. (See Proposition 9 of this previous blog post.)
• The set of real numbers ${\{ \frac{\log p_i}{\log m}: i=1,\dots,r \}}$ arising from the prime factorisation ${m = p_1 \dots p_r}$ of a large random number ${m \sim N}$ converges (away from the origin, and in a suitable weak sense) to the Poisson-Dirichlet process in the limit ${N \rightarrow \infty}$. (See the previously mentioned blog post for a definition of the Poisson-Dirichlet process, and a proof of this claim.)
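The second and third bullet points above are easy to probe numerically. The following sketch (all naming my own; the numerics are only heuristic at such small scales) factors every ${m}$ in ${[N/2, N]}$ by trial division and computes the average mass that the size-biased prime factor places below ${N^t}$; if ${\log p_i}$ were exactly uniform in ${[0, \log N]}$, this quantity would equal ${t}$:

```python
import math

def prime_factors(m):
    """Prime factors of m with multiplicity, by trial division."""
    out, d = [], 2
    while d * d <= m:
        while m % d == 0:
            out.append(d)
            m //= d
        d += 1
    if m > 1:
        out.append(m)
    return out

def size_biased_cdf(N, t):
    """Average over m in [N/2, N] of the fraction of the weight
    sum_i log p_i = log m carried by prime factors p_i <= N^t.
    For a size-biased log-prime uniform on [0, log N], this is ~ t."""
    cutoff = N ** t
    acc = 0.0
    ms = range(N // 2, N + 1)
    for m in ms:
        acc += sum(math.log(p) for p in prime_factors(m) if p <= cutoff) / math.log(m)
    return acc / len(ms)
```

Already at ${N = 20000}$ (so ${\log N \approx 10}$) the computed values track ${t}$ to within about ${0.05}$, despite the ${O(1/\log N)}$ error terms.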

Now for the facts about the cycle decomposition of large permutations:

• Every permutation ${\sigma \in S_n}$ has a cycle decomposition

$\displaystyle \sigma = C_1 \dots C_r$

into disjoint cycles ${C_1,\dots,C_r}$, which is unique up to rearrangement, and where we count each fixed point of ${\sigma}$ as a cycle of length ${1}$. If ${|C_i|}$ is the length of the cycle ${C_i}$, we obtain a partition

$\displaystyle n = |C_1| + \dots + |C_r|$

of ${n}$.

• (Prime number theorem for permutations) A randomly selected permutation of ${S_n}$ will be an ${n}$-cycle with probability exactly ${1/n}$. (This was noted in this previous blog post.)
• If ${\sigma}$ is a random permutation in ${S_n}$, and ${C_i}$ is a randomly selected cycle of ${\sigma}$ (with each ${i}$ being selected with probability ${|C_i|/n}$), then ${|C_i|}$ is exactly uniformly distributed on ${\{1,\dots,n\}}$. (See Proposition 8 of this blog post.)
• The set of real numbers ${\{ \frac{|C_i|}{n} \}}$ arising from the cycle decomposition ${\sigma = C_1 \dots C_r}$ of a random permutation ${\sigma \in S_n}$ converges (in a suitable sense) to the Poisson-Dirichlet process in the limit ${n \rightarrow \infty}$. (Again, see this previous blog post for details.)
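The middle two bullet points can be verified exactly by enumerating ${S_n}$ for small ${n}$ (a quick sketch, naming my own):

```python
from itertools import permutations
from math import factorial

def cycle_type(perm):
    """List of cycle lengths of perm, where perm[i] is the image of i."""
    n = len(perm)
    seen = [False] * n
    lengths = []
    for i in range(n):
        if not seen[i]:
            L, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                L += 1
            lengths.append(L)
    return lengths

def size_biased_cycle_distribution(n):
    """Exact distribution of |C_i| when sigma is uniform in S_n and a cycle
    C_i of sigma is picked with probability |C_i|/n; the claim is that this
    is exactly uniform on {1,...,n}."""
    dist = [0.0] * (n + 1)
    for perm in permutations(range(n)):
        for L in cycle_type(perm):
            dist[L] += L / n
    total = factorial(n)
    return [d / total for d in dist[1:]]
```

For ${n = 5}$, the size-biased distribution comes out exactly uniform, and exactly ${4! = 24}$ of the ${120}$ permutations are ${5}$-cycles, confirming the ${1/n}$ probability.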

See this previous blog post (or the aforementioned article of Granville, or the Notices article of Arratia, Barbour, and Tavaré) for further exploration of the analogy between prime factorisation of integers and cycle decomposition of permutations.

There is however something unsatisfying about the analogy, in that it is not clear why there should be such a kinship between integer prime factorisation and permutation cycle decomposition. It turns out that the situation is clarified if one uses another fundamental analogy in number theory, namely the analogy between integers and polynomials ${P \in {\mathbf F}_q[T]}$ over a finite field ${{\mathbf F}_q}$, discussed for instance in this previous post; this is the simplest case of the more general function field analogy between number fields and function fields. Just as we restrict attention to positive integers when talking about prime factorisation, it will be reasonable to restrict attention to monic polynomials ${P}$. We then have another analogous list of facts, proven very similarly to the corresponding list of facts for the integers:

• Every monic polynomial ${f \in {\mathbf F}_q[T]}$ has a factorisation

$\displaystyle f = P_1 \dots P_r$

into irreducible monic polynomials ${P_1,\dots,P_r \in {\mathbf F}_q[T]}$, which is unique up to rearrangement. Taking degrees, we obtain a partition

$\displaystyle \hbox{deg} f = \hbox{deg} P_1 + \dots + \hbox{deg} P_r$

of ${\hbox{deg} f}$.

• (Prime number theorem for polynomials) A randomly selected monic polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ will be irreducible with probability ${\approx \frac{1}{n}}$ when ${q}$ is fixed and ${n}$ is large.
• If ${f \in {\mathbf F}_q[T]}$ is a random monic polynomial of degree ${n}$, and ${P_i}$ is a random irreducible factor of ${f = P_1 \dots P_r}$ (with each ${i}$ selected with probability ${\hbox{deg} P_i / n}$), then ${\hbox{deg} P_i}$ is approximately uniformly distributed in ${\{1,\dots,n\}}$ when ${q}$ is fixed and ${n}$ is large.
• The set of real numbers ${\{ \hbox{deg} P_i / n \}}$ arising from the factorisation ${f = P_1 \dots P_r}$ of a randomly selected polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ converges (in a suitable sense) to the Poisson-Dirichlet process when ${q}$ is fixed and ${n}$ is large.

The above list of facts addressed the large ${n}$ limit of the polynomial ring ${{\mathbf F}_q[T]}$, where the order ${q}$ of the field is held fixed, but the degrees of the polynomials go to infinity. This is the limit that is most closely analogous to the integers ${{\bf Z}}$. However, there is another interesting asymptotic limit of polynomial rings to consider, namely the large ${q}$ limit where it is now the degree ${n}$ that is held fixed, but the order ${q}$ of the field goes to infinity. Actually to simplify the exposition we will use the slightly more restrictive limit where the characteristic ${p}$ of the field goes to infinity (again keeping the degree ${n}$ fixed), although all of the results proven below for the large ${p}$ limit turn out to be true as well in the large ${q}$ limit.

The large ${q}$ (or large ${p}$) limit is technically a different limit than the large ${n}$ limit, but in practice the asymptotic statistics of the two limits often agree quite closely. For instance, here is the prime number theorem in the large ${q}$ limit:

Theorem 1 (Prime number theorem) The probability that a random monic polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ is irreducible is ${\frac{1}{n}+o(1)}$ in the limit where ${n}$ is fixed and the characteristic ${p}$ goes to infinity.

Proof: There are ${q^n}$ monic polynomials ${f \in {\mathbf F}_q[T]}$ of degree ${n}$. If ${f}$ is irreducible, then the ${n}$ zeroes of ${f}$ are distinct and lie in the finite field ${{\mathbf F}_{q^n}}$, but do not lie in any proper subfield of that field. Conversely, every element ${\alpha}$ of ${{\mathbf F}_{q^n}}$ that does not lie in a proper subfield is the root of a unique monic polynomial in ${{\mathbf F}_q[T]}$ of degree ${n}$ (the minimal polynomial of ${\alpha}$). Since the union of all the proper subfields of ${{\mathbf F}_{q^n}}$ has size ${o(q^n)}$, the total number of irreducible polynomials of degree ${n}$ is thus ${\frac{q^n - o(q^n)}{n}}$, and the claim follows. $\Box$

Remark 2 The above argument and inclusion-exclusion in fact gives the well known exact formula ${\frac{1}{n} \sum_{d|n} \mu(\frac{n}{d}) q^d}$ for the number of irreducible monic polynomials of degree ${n}$.
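Both Theorem 1 and the exact formula of Remark 2 are easy to confirm by brute force for very small ${q}$ and ${n}$; here is a sketch (naming my own), representing polynomials over ${{\mathbf F}_p}$ as coefficient lists and testing irreducibility by trial division:

```python
from itertools import product

def polymod(f, g, p):
    """Remainder of f modulo monic g; coefficients in F_p,
    lists written highest degree first."""
    f = [c % p for c in f]
    while len(f) >= len(g):
        if f[0] == 0:
            f.pop(0)
            continue
        c = f[0]
        for i in range(len(g)):
            f[i] = (f[i] - c * g[i]) % p
        f.pop(0)
    return f

def monic_polys(d, p):
    """All monic polynomials of degree d over F_p."""
    for tail in product(range(p), repeat=d):
        yield [1] + list(tail)

def is_irreducible(f, p):
    """Trial division by all monic polynomials of degree up to deg(f)/2."""
    n = len(f) - 1
    return all(any(polymod(f, g, p))
               for d in range(1, n // 2 + 1) for g in monic_polys(d, p))

def count_irreducible(n, p):
    return sum(is_irreducible(f, p) for f in monic_polys(n, p))

def mobius_count(n, p):
    """(1/n) * sum_{d | n} mu(n/d) p^d, the exact formula from Remark 2."""
    def mu(m):
        res, d = 1, 2
        while d * d <= m:
            if m % d == 0:
                m //= d
                if m % d == 0:
                    return 0
                res = -res
            d += 1
        return -res if m > 1 else res
    return sum(mu(n // d) * p ** d for d in range(1, n + 1) if n % d == 0) // n
```

For instance, over ${{\mathbf F}_5}$ there are exactly ${(25-5)/2 = 10}$ irreducible monic quadratics, and over ${{\mathbf F}_3}$ exactly ${(81-9)/4 = 18}$ irreducible monic quartics, matching the brute-force counts.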

Now we can give a precise connection between the cycle distribution of a random permutation, and (the large ${p}$ limit of) the irreducible factorisation of a polynomial, giving a (somewhat indirect, but still connected) link between permutation cycle decomposition and integer factorisation:

Theorem 3 The partition ${\{ \hbox{deg}(P_1), \dots, \hbox{deg}(P_r) \}}$ of a random monic polynomial ${f= P_1 \dots P_r\in {\mathbf F}_q[T]}$ of degree ${n}$ converges in distribution to the partition ${\{ |C_1|, \dots, |C_r|\}}$ of a random permutation ${\sigma = C_1 \dots C_r \in S_n}$, in the limit where ${n}$ is fixed and the characteristic ${p}$ goes to infinity.

We can quickly prove this theorem as follows. We first need a basic fact:

Lemma 4 (Most polynomials square-free in large ${q}$ limit) A random monic polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ will be square-free with probability ${1-o(1)}$ when ${n}$ is fixed and ${q}$ (or ${p}$) goes to infinity. In a similar spirit, two randomly selected monic polynomials ${f,g}$ of degree ${n,m}$ will be coprime with probability ${1-o(1)}$ if ${n,m}$ are fixed and ${q}$ or ${p}$ goes to infinity.

Proof: For any monic polynomial ${g}$ of degree ${m}$, the probability that ${f}$ is divisible by ${g^2}$ is at most ${1/q^{2m}}$. Summing over all monic polynomials ${g}$ of degree ${1 \leq m \leq n/2}$, and using the union bound, we see that the probability that ${f}$ is not squarefree is at most ${\sum_{1 \leq m \leq n/2} \frac{q^m}{q^{2m}} = o(1)}$, giving the first claim. For the second, observe from the first claim (and the fact that ${fg}$ has only a bounded number of factors) that ${fg}$ is squarefree with probability ${1-o(1)}$; since a common factor of ${f}$ and ${g}$ would make ${fg}$ non-squarefree, the claim follows. $\Box$
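For small ${p}$ and ${n}$ one can check the first claim of Lemma 4 exactly by brute force. (In fact, for ${n \geq 2}$ a classical exact count gives precisely ${q^n - q^{n-1}}$ squarefree monic polynomials of degree ${n}$, i.e. a proportion ${1 - 1/q}$, consistent with the ${1 - O(1/q)}$ bound implicit in the lemma.) A sketch with my own naming:

```python
from itertools import product

def polymod(f, g, p):
    """Remainder of f modulo monic g over F_p (lists highest degree first)."""
    f = [c % p for c in f]
    while len(f) >= len(g):
        if f[0] == 0:
            f.pop(0)
            continue
        c = f[0]
        for i in range(len(g)):
            f[i] = (f[i] - c * g[i]) % p
        f.pop(0)
    return f

def polymul(f, g, p):
    """Product of two polynomials over F_p."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def monic_polys(d, p):
    for tail in product(range(p), repeat=d):
        yield [1] + list(tail)

def squarefree_fraction(n, p):
    """Fraction of monic degree-n polynomials over F_p not divisible by the
    square of any monic polynomial of degree >= 1."""
    count = total = 0
    for f in monic_polys(n, p):
        total += 1
        count += all(any(polymod(f, polymul(g, g, p), p))
                     for d in range(1, n // 2 + 1) for g in monic_polys(d, p))
    return count / total
```

For monic cubics the non-squarefree polynomials are exactly those of the form ${(x-a)^2(x-b)}$, of which there are ${p^2}$, so the fraction is exactly ${1 - 1/p}$.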

Now we can prove the theorem. Elementary combinatorics tells us that the probability of a random permutation ${\sigma \in S_n}$ consisting of ${c_k}$ cycles of length ${k}$ for ${k=1,\dots,r}$, where ${c_k}$ are nonnegative integers with ${\sum_{k=1}^r k c_k = n}$, is precisely

$\displaystyle \frac{1}{\prod_{k=1}^r c_k! k^{c_k}},$

since the number of permutations with this cycle structure is ${n! / \prod_{k=1}^r c_k! k^{c_k}}$: there are ${n!}$ ways to fill in the labels of a cycle-notation template whose cycles are listed in nondecreasing order of length, and each such permutation arises from exactly ${\prod_{k=1}^r c_k! k^{c_k}}$ of these labellings (one can cyclically rotate each cycle, and permute the ${c_k}$ cycles of any given length ${k}$ among themselves). On the other hand, by Theorem 1 (and using Lemma 4 to isolate the small number of cases involving repeated factors) the number of monic polynomials of degree ${n}$ that are the product of ${c_k}$ irreducible polynomials of degree ${k}$ is

$\displaystyle \frac{1}{\prod_{k=1}^r c_k!} \prod_{k=1}^r ( (\frac{1}{k}+o(1)) q^k )^{c_k} + o( q^n )$

which simplifies to

$\displaystyle \frac{1+o(1)}{\prod_{k=1}^r c_k! k^{c_k}} q^n,$

and the claim follows.
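The permutation side of this computation can be checked exactly for small ${n}$ by enumeration (a sketch, naming my own):

```python
from itertools import permutations
from math import factorial
from collections import Counter

def cycle_lengths(perm):
    """Multiset (sorted tuple) of cycle lengths of perm, perm[i] = image of i."""
    n = len(perm)
    seen = [False] * n
    out = []
    for i in range(n):
        if not seen[i]:
            L, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                L += 1
            out.append(L)
    return tuple(sorted(out))

def predicted_probability(lengths):
    """1 / prod_k (c_k! k^{c_k}), where c_k = multiplicity of k in lengths."""
    denom = 1
    for k, ck in Counter(lengths).items():
        denom *= factorial(ck) * k ** ck
    return 1 / denom

def empirical_probabilities(n):
    """Exact cycle-type frequencies by enumerating all of S_n."""
    counts = Counter(cycle_lengths(p) for p in permutations(range(n)))
    return {t: c / factorial(n) for t, c in counts.items()}
```

For ${n = 6}$, every one of the eleven cycle types occurs with frequency exactly ${1/\prod_k c_k! k^{c_k}}$, as claimed.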

This was a fairly short calculation, but it still doesn’t quite explain why there is such a link between the cycle decomposition ${\sigma = C_1 \dots C_r}$ of permutations and the factorisation ${f = P_1 \dots P_r}$ of a polynomial. One immediate thought might be to try to link the multiplication structure of permutations in ${S_n}$ with the multiplication structure of polynomials; however, these structures are too dissimilar to set up a convincing analogy. For instance, the multiplication law on polynomials is abelian and non-invertible, whilst the multiplication law on ${S_n}$ is (extremely) non-abelian but invertible. Also, the multiplication of a degree ${n}$ and a degree ${m}$ polynomial is a degree ${n+m}$ polynomial, whereas the group multiplication law on permutations does not take a permutation in ${S_n}$ and a permutation in ${S_m}$ and return a permutation in ${S_{n+m}}$.

I recently found (after some discussions with Ben Green) what I feel to be a satisfying conceptual (as opposed to computational) explanation of this link, which I will place below the fold.

I’ve just uploaded to the arXiv my paper “Inverse theorems for sets and measures of polynomial growth“. This paper was motivated by two related questions. The first question was to obtain a qualitatively precise description of the sets of polynomial growth that arise in Gromov’s theorem, in much the same way that Freiman’s theorem (and its generalisations) provide a qualitatively precise description of sets of small doubling. The other question was to obtain a non-abelian analogue of inverse Littlewood-Offord theory.

Let me discuss the former question first. Gromov’s theorem tells us that if a finite subset ${A}$ of a group ${G}$ exhibits polynomial growth in the sense that ${|A^n|}$ grows polynomially in ${n}$, then the group generated by ${A}$ is virtually nilpotent (the converse direction is also true, and relatively easy to establish). This theorem has been strengthened a number of times over the years. For instance, a few years ago, I proved with Shalom that the condition that ${|A^n|}$ grew polynomially in ${n}$ could be replaced by ${|A^n| \leq C n^d}$ for a single ${n}$, as long as ${n}$ was sufficiently large depending on ${C,d}$ (in fact we gave a fairly explicit quantitative bound on how large ${n}$ needed to be). A little more recently, with Breuillard and Green, the condition ${|A^n| \leq C n^d}$ was weakened to ${|A^n| \leq n^d |A|}$, that is to say it sufficed to have polynomial relative growth at a finite scale. In fact, the latter paper gave more information on ${A}$ in this case: roughly speaking, it showed (at least in the case when ${A}$ was a symmetric neighbourhood of the identity) that ${A^n}$ was “commensurate” with a very structured object known as a coset nilprogression. This can then be used to establish further control on ${A}$. For instance, it was recently shown by Breuillard and Tointon (again in the symmetric case) that if ${|A^n| \leq n^d |A|}$ for a single ${n}$ that was sufficiently large depending on ${d}$, then all the ${A^{n'}}$ for ${n' \geq n}$ have a doubling constant bounded by a bound ${C_d}$ depending only on ${d}$, thus ${|A^{2n'}| \leq C_d |A^{n'}|}$ for all ${n' \geq n}$.

In this paper we are able to refine this analysis a bit further; under the same hypotheses, we can show an estimate of the form

$\displaystyle \log |A^{n'}| = \log |A^n| + f( \log n' - \log n ) + O_d(1)$

for all ${n' \geq n}$ and some piecewise linear, continuous, non-decreasing function ${f: [0,+\infty) \rightarrow [0,+\infty)}$ with ${f(0)=0}$, where the error ${O_d(1)}$ is bounded by a constant depending only on ${d}$, and where ${f}$ has at most ${O_d(1)}$ pieces, each of which has a slope that is a natural number of size ${O_d(1)}$. To put it another way, the function ${n' \mapsto |A^{n'}|}$ for ${n' \geq n}$ behaves (up to multiplicative constants) like a piecewise polynomial function, where the degree of the function and number of pieces is bounded by a constant depending on ${d}$.

One could ask whether the function ${f}$ has any convexity or concavity properties. It turns out that it can exhibit either convex or concave behaviour (or a combination of both). For instance, if ${A}$ is contained in a large finite group, then ${n \mapsto |A^n|}$ will eventually plateau to a constant, exhibiting concave behaviour. On the other hand, in nilpotent groups one can see convex behaviour; for instance, in the Heisenberg group ${\begin{pmatrix} 1 & {\mathbf Z} & {\mathbf Z} \\ 0 & 1 & {\mathbf Z} \\ 0 & 0 & 1 \end{pmatrix}}$, if one sets ${A}$ to be a set of matrices of the form ${\begin{pmatrix} 1 & O(N) & O(N^3) \\ 0 & 1 & O(N) \\ 0 & 0 & 1 \end{pmatrix}}$ for some large ${N}$ (abusing the ${O()}$ notation somewhat), then ${n \mapsto |A^n|}$ grows cubically for ${n \leq N}$ but then grows quartically for ${n > N}$.
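One can watch polynomial volume growth numerically in a toy version of this Heisenberg example: take ${A}$ to be the ${27}$ integer matrices ${\begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix}}$ with ${a,b,c \in \{-1,0,1\}}$ (so this corresponds to ${N}$ of bounded size, rather than the large-${N}$ regime discussed above) and compute the product sets directly. A sketch, naming my own:

```python
from itertools import product

def heis_mul(x, y):
    """Product of [[1,a,b],[0,1,c],[0,0,1]] matrices, stored as triples (a,b,c)."""
    a1, b1, c1 = x
    a2, b2, c2 = y
    return (a1 + a2, b1 + b2 + a1 * c2, c1 + c2)

A = set(product((-1, 0, 1), repeat=3))   # 27 generators, including the identity

def growth(A, steps):
    """Sizes |A^1|, |A^2|, ..., |A^steps| of the iterated product sets."""
    sizes, S = [], set(A)
    for _ in range(steps):
        sizes.append(len(S))
        S = {heis_mul(x, y) for x in S for y in A}
    return sizes
```

A product of ${n}$ generators has ${|a|, |c| \leq n}$ and ${|b| \leq n + \binom{n}{2}}$, which is the source of the quartic growth of the Heisenberg group with bounded generators.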

To prove this proposition, it turns out (after using a somewhat difficult inverse theorem proven previously by Breuillard, Green, and myself) that one has to analyse the volume growth ${n \mapsto |P^n|}$ of nilprogressions ${P}$. In the “infinitely proper” case where there are no unexpected relations between the generators of the nilprogression, one can lift everything to a simply connected Lie group (where one can take logarithms and exploit the Baker-Campbell-Hausdorff formula heavily), eventually describing ${P^n}$ with fair accuracy by a certain convex polytope with vertices depending polynomially on ${n}$, which implies that ${|P^n|}$ depends polynomially on ${n}$ up to constants. If one is not in the “infinitely proper” case, then at some point ${n_0}$ the nilprogression ${P^{n_0}}$ develops a “collision”, but then one can use this collision to show (after some work) that the dimension of the “Lie model” of ${P^{n_0}}$ has dropped by at least one from the dimension of ${P}$ (the notion of a Lie model being developed in the previously mentioned paper of Breuillard, Green, and myself), so that this sort of collision can only occur a bounded number of times, with essentially polynomial volume growth behaviour between these collisions.

The arguments also give a precise description of the location of a set ${A}$ for which ${A^n}$ grows polynomially in ${n}$. In the symmetric case, what ends up happening is that ${A^n}$ becomes commensurate to a “coset nilprogression” ${HP}$ of bounded rank and nilpotency class, whilst ${A}$ is “virtually” contained in a scaled down version ${HP^{1/n}}$ of that nilprogression. What “virtually” means is a little complicated; roughly speaking, it means that there is a set ${X}$ of bounded cardinality such that ${aXHP^{1/n} \approx XHP^{1/n}}$ for all ${a \in A}$. Conversely, if ${A}$ is virtually contained in ${HP^{1/n}}$, then ${A^n}$ is commensurate to ${HP}$ (and more generally, ${A^{mn}}$ is commensurate to ${HP^m}$ for any natural number ${m}$), giving quite a (qualitatively) precise description of ${A}$ in terms of coset nilprogressions.

The main tool used to prove these results is the structure theorem for approximate groups established by Breuillard, Green, and myself, which roughly speaking asserts that approximate groups are always commensurate with coset nilprogressions. A key additional trick is a pigeonholing argument of Sanders, which in this context is the assertion that if ${A^n}$ is comparable to ${A^{2n}}$, then there is an ${n'}$ between ${n}$ and ${2n}$ such that ${A \cdot A^{n'}}$ is very close in size to ${A^{n'}}$ (up to a relative error of ${1/n}$). It is this fact, together with the comparability of ${A^{n'}}$ to a coset nilprogression ${HP}$, that allows us (after some combinatorial argument) to virtually place ${A}$ inside ${HP^{1/n}}$.

Similar arguments apply when discussing iterated convolutions ${\mu^{*n}}$ of (symmetric) probability measures on a (discrete) group ${G}$, rather than combinatorial powers ${A^n}$ of a finite set. Here, the analogue of the volume ${|A^n|}$ is given by the negative power ${\| \mu^{*n} \|_{\ell^2}^{-2}}$ of the ${\ell^2}$ norm of ${\mu^{*n}}$ (thought of as a non-negative function on ${G}$ of total mass 1). One can also work with other norms here than ${\ell^2}$, but this norm has some minor technical conveniences (and other measures of the “spread” of ${\mu^{*n}}$ end up being more or less equivalent for our purposes). There is an analogous structure theorem that asserts that if ${\mu^{*n}}$ spreads at most polynomially in ${n}$, then ${\mu^{*n}}$ is “commensurate” with the uniform probability distribution on a coset nilprogression ${HP}$, and ${\mu}$ itself is largely concentrated near ${HP^{1/\sqrt{n}}}$. The factor of ${\sqrt{n}}$ here is the familiar scaling factor in random walks that arises for instance in the central limit theorem. The proof of (the precise version of) this statement proceeds similarly to the combinatorial case, using pigeonholing to locate a scale ${n'}$ where ${\mu * \mu^{*n'}}$ has almost the same ${\ell^2}$ norm as ${\mu^{*n'}}$.

A special case of this theory occurs when ${\mu}$ is the uniform probability measure on ${n}$ elements ${v_1,\dots,v_n}$ of ${G}$ and their inverses. The probability measure ${\mu^{*n}}$ is then the distribution of a random product ${w_1 \dots w_n}$, where each ${w_i}$ is equal to one of ${v_{j_i}}$ or its inverse ${v_{j_i}^{-1}}$, selected at random with ${j_i}$ drawn uniformly from ${\{1,\dots,n\}}$ with replacement. This is very close to the Littlewood-Offord situation of random products ${u_1 \dots u_n}$ where each ${u_i}$ is equal to ${v_i}$ or ${v_i^{-1}}$ selected independently at random (thus ${j_i}$ is now fixed to equal ${i}$ rather than being randomly drawn from ${\{1,\dots,n\}}$). In the case when ${G}$ is abelian, it turns out that a little bit of Fourier analysis shows that these two random walks have “comparable” distributions in a certain ${\ell^2}$ sense. As a consequence, the results in this paper can be used to recover an essentially optimal abelian inverse Littlewood-Offord theorem of Nguyen and Vu. In the nonabelian case, the only Littlewood-Offord theorem I am aware of is a recent result of Tiep and Vu for matrix groups, but in this case I do not know how to relate the above two random walks to each other, and so we can only obtain an analogue of the Tiep-Vu results for the symmetrised random walk ${w_1 \dots w_n}$ instead of the ordered random walk ${u_1 \dots u_n}$.
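As a toy illustration of the ${\ell^2}$ “spread” proxy in the abelian case, the following sketch (naming my own) computes ${\| \mu^{*n} \|_{\ell^2}^{-2}}$ for the lazy random walk on ${{\bf Z}}$ with ${\mu}$ uniform on ${\{-1,0,1\}}$; the spread grows like ${\sqrt{n}}$, so quadrupling ${n}$ should roughly double it:

```python
def convolve(mu, nu):
    """Convolution of finitely supported probability measures on Z (as dicts)."""
    out = {}
    for x, px in mu.items():
        for y, py in nu.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

def spread(mu):
    """The volume proxy ||mu||_{l^2}^{-2}."""
    return 1.0 / sum(p * p for p in mu.values())

mu = {-1: 1 / 3, 0: 1 / 3, 1: 1 / 3}
powers = {1: dict(mu)}
for n in range(2, 65):
    powers[n] = convolve(powers[n - 1], mu)
```

Here ${{\bf Z}}$ has no finite subgroup to concentrate on, so the spread grows without bound, in agreement with the central limit theorem scaling mentioned above.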

Just a short post here to note that the cover story of this month’s Notices of the AMS, by John Friedlander, is about the recent work on bounded gaps between primes by Zhang, Maynard, our own Polymath project, and others.

I may as well take this opportunity to upload some slides of my own talks on this subject: here are my slides on small and large gaps between the primes that I gave at the “Latinos in the Mathematical Sciences” conference back in April, and here are my slides on the Polymath project for the Schock Prize symposium last October.  (I also gave an abridged version of the latter talk at an AAAS Symposium in February, as well as the Breakthrough Symposium from last November.)

Suppose that ${A \subset B}$ are two subgroups of some ambient group ${G}$, with the index ${K := [B:A]}$ of ${A}$ in ${B}$ being finite. Then ${B}$ is the union of ${K}$ left cosets of ${A}$, thus ${B = SA}$ for some set ${S \subset B}$ of cardinality ${K}$. The elements ${s}$ of ${S}$ are not entirely arbitrary with regards to ${A}$. For instance, if ${A}$ is a normal subgroup of ${B}$, then for each ${s \in S}$, the conjugation map ${g \mapsto s^{-1} g s}$ preserves ${A}$. In particular, if we write ${A^s := s^{-1} A s}$ for the conjugate of ${A}$ by ${s}$, then

$\displaystyle A = A^s.$

Even if ${A}$ is not normal in ${B}$, it turns out that the conjugation map ${g \mapsto s^{-1} g s}$ approximately preserves ${A}$, if ${K}$ is bounded. To quantify this, let us call two subgroups ${A,B}$ ${K}$-commensurate for some ${K \geq 1}$ if one has

$\displaystyle [A : A \cap B], [B : A \cap B] \leq K.$

Proposition 1 Let ${A \subset B}$ be groups, with finite index ${K = [B:A]}$. Then for every ${s \in B}$, the groups ${A}$ and ${A^s}$ are ${K}$-commensurate, in fact

$\displaystyle [A : A \cap A^s ] = [A^s : A \cap A^s ] \leq K.$

Proof: One can partition ${B}$ into ${K}$ left translates ${xA}$ of ${A}$, as well as ${K}$ left translates ${yA^s}$ of ${A^s}$. Combining the partitions, we see that ${B}$ can be partitioned into at most ${K^2}$ non-empty sets of the form ${xA \cap yA^s}$. Each of these sets is easily seen to be a left translate of the subgroup ${A \cap A^s}$, thus ${[B: A \cap A^s] \leq K^2}$. Since

$\displaystyle [B: A \cap A^s] = [B:A] [A: A \cap A^s] = [B:A^s] [A^s: A \cap A^s]$

and ${[B:A] = [B:A^s]=K}$, we obtain the claim. $\Box$
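As a concrete sanity check (not part of the post, and using a hypothetical toy example), one can verify Proposition 1 for the non-normal subgroup ${A = \{e, (0\,1)\}}$ of ${B = S_3}$, where ${K = [B:A] = 3}$:

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples; mul(p, q) is the map i -> p(q(i)).
def mul(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

B = set(permutations(range(3)))   # B = S_3
A = {(0, 1, 2), (1, 0, 2)}        # A = {e, (0 1)}, a non-normal subgroup
K = len(B) // len(A)              # index [B:A] = 3

for s in B:
    As = {mul(mul(inv(s), a), s) for a in A}   # conjugate subgroup A^s
    I = A & As
    # Proposition 1: [A : A cap A^s] = [A^s : A cap A^s] <= K
    assert len(A) // len(I) == len(As) // len(I) <= K
print("Proposition 1 verified for A = <(0 1)> inside S_3 with K =", K)
```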

One can replace the inclusion ${A \subset B}$ by commensurability, at the cost of some worsening of the constants:

Corollary 2 Let ${A, B}$ be ${K}$-commensurate subgroups of ${G}$. Then for every ${s \in B}$, the groups ${A}$ and ${A^s}$ are ${K^2}$-commensurate.

Proof: Applying the previous proposition with ${A}$ replaced by ${A \cap B}$, we see that for every ${s \in B}$, ${A \cap B}$ and ${(A \cap B)^s}$ are ${K}$-commensurate. Since ${A \cap B}$ and ${(A \cap B)^s}$ have index at most ${K}$ in ${A}$ and ${A^s}$ respectively, the claim follows. $\Box$

It turns out that a similar phenomenon holds for the more general concept of an approximate group, and gives a “classification” of all the approximate groups ${B}$ containing a given approximate group ${A}$ as a “bounded index approximate subgroup”. Recall that a ${K}$-approximate group ${A}$ in a group ${G}$ for some ${K \geq 1}$ is a symmetric subset of ${G}$ containing the identity, such that the product set ${A^2 := \{ a_1 a_2: a_1,a_2 \in A\}}$ can be covered by at most ${K}$ left translates of ${A}$ (and thus also ${K}$ right translates, by the symmetry of ${A}$). For simplicity we will restrict attention to finite approximate groups ${A}$ so that we can use their cardinality ${|A|}$ as a measure of size. We call two finite approximate groups ${A,B}$ ${K}$-commensurate if one has

$\displaystyle |A^2 \cap B^2| \geq \frac{1}{K} |A|, \frac{1}{K} |B|;$

note that this is consistent with the previous notion of commensurability for genuine groups.

Theorem 3 Let ${G}$ be a group, and let ${K_1,K_2,K_3 \geq 1}$ be real numbers. Let ${A}$ be a finite ${K_1}$-approximate group, and let ${B}$ be a symmetric subset of ${G}$ that contains ${A}$.

• (i) If ${B}$ is a ${K_2}$-approximate group with ${|B| \leq K_3 |A|}$, then one has ${B \subset SA}$ for some set ${S}$ of cardinality at most ${K_1 K_2 K_3}$. Furthermore, for each ${s \in S}$, the approximate groups ${A}$ and ${A^s}$ are ${K_1 K_2^5 K_3}$-commensurate.
• (ii) Conversely, if ${B \subset SA}$ for some set ${S}$ of cardinality at most ${K_3}$, and ${A}$ and ${A^s}$ are ${K_2}$-commensurate for all ${s \in S}$, then ${|B| \leq K_3 |A|}$, and ${B}$ is a ${K_1^6 K_2 K_3^2}$-approximate group.

Informally, the assertion that ${B}$ is an approximate group containing ${A}$ as a “bounded index approximate subgroup” is equivalent to ${B}$ being covered by a bounded number of shifts ${sA}$ of ${A}$, where ${s}$ approximately normalises ${A^2}$ in the sense that ${A^2}$ and ${(A^2)^s}$ have large intersection. Thus, to classify all such ${B}$, the problem essentially reduces to that of classifying those ${s}$ that approximately normalise ${A^2}$.

To prove the theorem, we recall some standard lemmas from arithmetic combinatorics, which are the foundation stones of the “Ruzsa calculus” that we will use to establish our results:

Lemma 4 (Ruzsa covering lemma) If ${A}$ and ${B}$ are finite non-empty subsets of ${G}$, then one has ${B \subset SAA^{-1}}$ for some set ${S \subset B}$ with cardinality ${|S| \leq \frac{|BA|}{|A|}}$.

Proof: We take ${S}$ to be a subset of ${B}$ with the property that the translates ${sA, s \in S}$ are disjoint in ${BA}$, and such that ${S}$ is maximal with respect to set inclusion. The required properties of ${S}$ are then easily verified. $\Box$
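The greedy construction in this proof is easy to run by machine; here is an illustrative check (not from the post) in the additive group ${{\bf Z}/101{\bf Z}}$, where the product set ${BA}$ becomes the sumset ${B+A}$ and ${AA^{-1}}$ becomes ${A - A}$:

```python
import random

# Ruzsa covering lemma in Z/101: B is contained in S + A - A, with
# |S| <= |B+A|/|A|, where S is chosen greedily so that the translates
# s + A, s in S, are pairwise disjoint (a maximal such S).
n = 101
random.seed(1)
A = set(random.sample(range(n), 8))
B = set(random.sample(range(n), 20))

sumset = {(b + a) % n for b in B for a in A}   # B + A

S, covered = [], set()
for b in B:
    trans = {(b + a) % n for a in A}
    if trans.isdisjoint(covered):
        S.append(b)
        covered |= trans

AmA = {(a1 - a2) % n for a1 in A for a2 in A}  # A - A
SAmA = {(s + d) % n for s in S for d in AmA}   # S + A - A
assert B <= SAmA                                # B is covered
assert len(S) * len(A) <= len(sumset)           # |S| <= |B+A|/|A|
print(f"|S| = {len(S)} <= |B+A|/|A| = {len(sumset)/len(A):.2f}")
```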

Lemma 5 (Ruzsa triangle inequality) If ${A,B,C}$ are finite non-empty subsets of ${G}$, then

$\displaystyle |A C^{-1}| \leq |A B^{-1}| |B C^{-1}| / |B|.$

Proof: If ${ac^{-1}}$ is an element of ${AC^{-1}}$ with ${a \in A}$ and ${c \in C}$, then from the identity ${ac^{-1} = (ab^{-1}) (bc^{-1})}$ we see that ${ac^{-1}}$ can be written as the product of an element of ${AB^{-1}}$ and an element of ${BC^{-1}}$ in at least ${|B|}$ distinct ways. The claim follows. $\Box$
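One can also test the triangle inequality numerically; the following sketch (again illustrative, not from the post) checks it on random triples of subsets of the additive group ${{\bf Z}/97{\bf Z}}$, where ${AC^{-1}}$ becomes the difference set ${A - C}$:

```python
import random

# Ruzsa triangle inequality |A - C| <= |A - B| |B - C| / |B| in Z/97.
n = 97
random.seed(0)

def diff(X, Y):
    return {(x - y) % n for x in X for y in Y}

for _ in range(100):
    A = set(random.sample(range(n), random.randint(3, 15)))
    B = set(random.sample(range(n), random.randint(3, 15)))
    C = set(random.sample(range(n), random.randint(3, 15)))
    assert len(diff(A, C)) <= len(diff(A, B)) * len(diff(B, C)) / len(B)
print("Ruzsa triangle inequality verified on 100 random triples in Z/97")
```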

Now we can prove (i). By the Ruzsa covering lemma, ${B}$ can be covered by at most

$\displaystyle \frac{|BA|}{|A|} \leq \frac{|B^2|}{|A|} \leq \frac{K_2 |B|}{|A|} \leq K_2 K_3$

left-translates of ${A^2}$, and hence by at most ${K_1 K_2 K_3}$ left-translates of ${A}$, thus ${B \subset SA}$ for some ${|S| \leq K_1 K_2 K_3}$. Since ${sA}$ only intersects ${B}$ if ${s \in BA}$, we may assume that

$\displaystyle S \subset BA \subset B^2$

and hence for any ${s \in S}$

$\displaystyle |A^s A| \leq |B^2 A B^2 A| \leq |B^6|$

$\displaystyle \leq K_2^5 |B| \leq K_2^5 K_3 |A|.$

By the Ruzsa covering lemma again, this implies that ${A^s}$ can be covered by at most ${K_2^5 K_3}$ left-translates of ${A^2}$, and hence by at most ${K_1 K_2^5 K_3}$ left-translates of ${A}$. By the pigeonhole principle, there thus exists a group element ${g}$ with

$\displaystyle |A^s \cap gA| \geq \frac{1}{K_1 K_2^5 K_3} |A|.$

Since

$\displaystyle |A^s \cap gA| \leq | (A^s \cap gA)^{-1} (A^s \cap gA)|$

and

$\displaystyle (A^s \cap gA)^{-1} (A^s \cap gA) \subset A^2 \cap (A^s)^2$

the claim follows.

Now we prove (ii). Clearly

$\displaystyle |B| \leq |S| |A| \leq K_3 |A|.$

Now we control the size of ${B^2 A}$. We have

$\displaystyle |B^2 A| \leq |SA SA^2| \leq K_3^2 \sup_{s \in S} |A s A^2| = K_3^2 \sup_{s \in S} |A^s A^2|.$

From the Ruzsa triangle inequality and symmetry we have

$\displaystyle |A^s A^2| \leq \frac{ |A^s (A^2 \cap (A^2)^s)| |(A^2 \cap (A^2)^s) A^2|}{|A^2 \cap (A^2)^s|}$

$\displaystyle \leq \frac{ |(A^3)^s| |A^4| }{|A|/K_2}$

$\displaystyle \leq K_2 \frac{ |A^3| |A^4| }{|A|}$

$\displaystyle \leq K_1^5 K_2 |A|$

and thus

$\displaystyle |B^2 A| \leq K_1^5 K_2 K_3^2 |A|.$

By the Ruzsa covering lemma, this implies that ${B^2}$ is covered by at most ${K_1^5 K_2 K_3^2}$ left-translates of ${A^2}$, hence by at most ${K_1^6 K_2 K_3^2}$ left-translates of ${A}$. Since ${A \subset B}$, the claim follows.

We now establish some auxiliary propositions about commensurability of approximate groups. The first claim is that commensurability is approximately transitive:

Proposition 6 Let ${A}$ be a ${K_1}$-approximate group, ${B}$ be a ${K_2}$-approximate group, and ${C}$ be a ${K_3}$-approximate group. If ${A}$ and ${B}$ are ${K_4}$-commensurate, and ${B}$ and ${C}$ are ${K_5}$-commensurate, then ${A}$ and ${C}$ are ${K_1^2 K_2^3 K_3^2 K_4 K_5 \max(K_1,K_3)}$-commensurate.

Proof: From two applications of the Ruzsa triangle inequality we have

$\displaystyle |AC| \leq \frac{|A (A^2 \cap B^2)| |(A^2 \cap B^2) (B^2 \cap C^2)| |(B^2 \cap C^2) C|}{|A^2 \cap B^2| |B^2 \cap C^2|}$

$\displaystyle \leq \frac{|A^3| |B^4| |C^3|}{ (|A|/K_4) (|B|/K_5)}$

$\displaystyle \leq K_4 K_5 \frac{K_1^2 |A| K_2^3 |B| K_3^2 |C|}{ |A| |B| }$

$\displaystyle = K_1^2 K_2^3 K_3^2 K_4 K_5 |C|.$

By the Ruzsa covering lemma, we may thus cover ${A}$ by at most ${K_1^2 K_2^3 K_3^2 K_4 K_5}$ left-translates of ${C^2}$, and hence by ${K_1^2 K_2^3 K_3^3 K_4 K_5}$ left-translates of ${C}$. By the pigeonhole principle, there thus exists a group element ${g}$ such that

$\displaystyle |A \cap gC| \geq \frac{1}{K_1^2 K_2^3 K_3^3 K_4 K_5} |A|,$

and so by arguing as in the proof of part (i) of the theorem we have

$\displaystyle |A^2 \cap C^2| \geq \frac{1}{K_1^2 K_2^3 K_3^3 K_4 K_5} |A|$

and similarly

$\displaystyle |A^2 \cap C^2| \geq \frac{1}{K_1^3 K_2^3 K_3^2 K_4 K_5} |C|$

and the claim follows. $\Box$

The next proposition asserts that the union and (modified) intersection of two commensurate approximate groups is again an approximate group:

Proposition 7 Let ${A}$ be a ${K_1}$-approximate group, ${B}$ be a ${K_2}$-approximate group, and suppose that ${A}$ and ${B}$ are ${K_3}$-commensurate. Then ${A \cup B}$ is a ${K_1 + K_2 + K_1^2 K_2^4 K_3 + K_1^4 K_2^2 K_3}$-approximate subgroup, and ${A^2 \cap B^2}$ is a ${K_1^6 K_2^3 K_3}$-approximate subgroup.

Using this proposition, one may obtain a variant of the previous theorem where the containment ${A \subset B}$ is replaced by commensurability; we leave the details to the interested reader.

Proof: We begin with ${A \cup B}$. Clearly ${A \cup B}$ is symmetric and contains the identity. We have ${(A \cup B)^2 = A^2 \cup AB \cup BA \cup B^2}$. The set ${A^2}$ is already covered by ${K_1}$ left translates of ${A}$, and hence of ${A \cup B}$; similarly ${B^2}$ is covered by ${K_2}$ left translates of ${A \cup B}$. As for ${AB}$, we observe from the Ruzsa triangle inequality that

$\displaystyle |AB^2| \leq \frac{|A (A^2 \cap B^2)| |(A^2 \cap B^2) B^2|}{|A^2 \cap B^2|}$

$\displaystyle \leq \frac{|A^3| |B^4|}{|A|/K_3}$

$\displaystyle \leq K_1^2 K_2^3 K_3 |B|$

and hence by the Ruzsa covering lemma, ${AB}$ is covered by at most ${K_1^2 K_2^3 K_3}$ left translates of ${B^2}$, and hence by ${K_1^2 K_2^4 K_3}$ left translates of ${B}$, and hence of ${A \cup B}$. Similarly ${BA}$ is covered by at most ${K_1^4 K_2^2 K_3}$ left translates of ${A}$. The claim follows.

Now we consider ${A^2 \cap B^2}$. Again, this is clearly symmetric and contains the identity. Repeating the previous arguments, we see that ${A}$ is covered by at most ${K_1^2 K_2^3 K_3}$ left-translates of ${B}$, and hence there exists a group element ${g}$ with

$\displaystyle |A \cap gB| \geq \frac{1}{K_1^2 K_2^3 K_3} |A|.$

Now observe that

$\displaystyle |(A^2 \cap B^2)^2 (A \cap gB)| \leq |A^5| \leq K_1^4 |A|$

and so by the Ruzsa covering lemma, ${(A^2 \cap B^2)^2}$ can be covered by at most ${K_1^6 K_2^3 K_3}$ left-translates of ${(A \cap gB) (A \cap gB)^{-1}}$. But this latter set is (as observed previously) contained in ${A^2 \cap B^2}$, and the claim follows. $\Box$

Here’s a cute identity I discovered by accident recently. Observe that

$\displaystyle \frac{d}{dx} (1+x^2)^{0/2} = 0$

$\displaystyle \frac{d^2}{dx^2} (1+x^2)^{1/2} = \frac{1}{(1+x^2)^{3/2}}$

$\displaystyle \frac{d^3}{dx^3} (1+x^2)^{2/2} = 0$

$\displaystyle \frac{d^4}{dx^4} (1+x^2)^{3/2} = \frac{9}{(1+x^2)^{5/2}}$

$\displaystyle \frac{d^5}{dx^5} (1+x^2)^{4/2} = 0$

$\displaystyle \frac{d^6}{dx^6} (1+x^2)^{5/2} = \frac{225}{(1+x^2)^{7/2}}$

and so one can conjecture that one has

$\displaystyle \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = 0$

when $k$ is even, and

$\displaystyle \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = \frac{(1 \times 3 \times \dots \times k)^2}{(1+x^2)^{(k+2)/2}}$

when $k$ is odd. This is obvious in the even case since $(1+x^2)^{k/2}$ is a polynomial of degree $k$, but I struggled for a while with the odd case before finding a slick three-line proof. (I was first trying to prove the weaker statement that $\frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2}$ was non-negative, but for some strange reason I was only able to establish this by working out the derivative exactly, rather than by using more analytic methods, such as convexity arguments.) I thought other readers might like the challenge (and also I’d like to see some other proofs), so rather than post my own proof immediately, I’ll see if anyone would like to supply their own proofs or thoughts in the comments. Also I am curious to know if this identity is connected to any other existing piece of mathematics.
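(Readers who want to convince themselves of the conjectured formula before hunting for a proof can at least confirm it symbolically for small ${k}$; the following sketch uses sympy and is of course not the slick proof alluded to above.)

```python
import sympy as sp

# Symbolic check of the conjectured identity for k = 1,...,7.
x = sp.symbols('x')
for k in range(1, 8):
    d = sp.diff((1 + x**2)**sp.Rational(k, 2), x, k + 1)
    if k % 2 == 0:
        # (1+x^2)^{k/2} is a polynomial of degree k, so this vanishes
        assert sp.simplify(d) == 0
    else:
        dfact = 1
        for j in range(1, k + 1, 2):   # dfact = 1 * 3 * ... * k
            dfact *= j
        target = dfact**2 / (1 + x**2)**sp.Rational(k + 2, 2)
        assert sp.simplify(d - target) == 0
print("identity verified symbolically for k = 1,...,7")
```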

I’ve just uploaded to the arXiv my paper “Cancellation for the multilinear Hilbert transform“, submitted to Collectanea Mathematica. This paper uses methods from additive combinatorics (and more specifically, the arithmetic regularity and counting lemmas from this paper of Ben Green and myself) to obtain a slight amount of progress towards the open problem of obtaining ${L^p}$ bounds for the trilinear and higher Hilbert transforms (as discussed in this previous blog post). For instance, the trilinear Hilbert transform

$\displaystyle H_3( f_1, f_2, f_3 )(x) := p.v. \int_{\bf R} f_1(x+t) f_2(x+2t) f_3(x+3t)\ \frac{dt}{t}$

is not known to be bounded from ${L^{p_1}({\bf R}) \times L^{p_2}({\bf R}) \times L^{p_3}({\bf R})}$ to ${L^p({\bf R})}$ for any choice of exponents, although it is conjectured to be bounded when ${1/p =1/p_1 +1/p_2+1/p_3}$ and ${1 < p_1,p_2,p_3,p < \infty}$. (For ${p}$ well below ${1}$, one can use additive combinatorics constructions to demonstrate unboundedness; see this paper of Demeter.) One can approach this problem by considering the truncated trilinear Hilbert transforms

$\displaystyle H_{3,r,R}( f_1, f_2, f_3 )(x) := \int_{r \leq |t| \leq R} f_1(x+t) f_2(x+2t) f_3(x+3t)\ \frac{dt}{t}$

for ${0 < r < R}$. It is not difficult to show that the boundedness of ${H_3}$ is equivalent to the boundedness of ${H_{3,r,R}}$ with bounds that are uniform in ${R}$ and ${r}$. On the other hand, from Minkowski’s inequality and Hölder’s inequality one can easily obtain the non-uniform bound of ${2 \log \frac{R}{r}}$ for ${H_{3,r,R}}$. The main result of this paper is a slight improvement of this trivial bound to ${o( \log \frac{R}{r})}$ as ${R/r \rightarrow \infty}$. Roughly speaking, the way this gain is established is as follows. First there are some standard time-frequency type reductions to reduce to the task of obtaining some non-trivial cancellation on a single “tree”. Using a “generalised von Neumann theorem”, we show that such cancellation will happen if (a discretised version of) one or more of the functions ${f_1,f_2,f_3}$ (or a dual function ${f_0}$ that it is convenient to test against) is small in the Gowers ${U^3}$ norm. However, the arithmetic regularity lemma alluded to earlier allows one to represent an arbitrary function ${f_i}$, up to a small error, as the sum of such a “Gowers uniform” function, plus a structured function (or more precisely, an irrational virtual nilsequence). This effectively reduces the problem to that of establishing some cancellation in a single tree in the case when all functions ${f_0,f_1,f_2,f_3}$ involved are irrational virtual nilsequences. At this point, the contribution of each component of the tree can be estimated using the “counting lemma” from my paper with Ben. The main term in the asymptotics is a certain integral over a nilmanifold, but because the kernel ${\frac{dt}{t}}$ in the trilinear Hilbert transform is odd, it turns out that this integral vanishes, giving the required cancellation.
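For the reader's convenience, here is a sketch of the standard computation giving the trivial bound (using Minkowski's integral inequality, valid since ${p \geq 1}$, Hölder's inequality with ${1/p = 1/p_1 + 1/p_2 + 1/p_3}$, and the translation invariance of the ${L^{p_i}}$ norms):

```latex
\| H_{3,r,R}(f_1,f_2,f_3) \|_{L^p({\bf R})}
\leq \int_{r \leq |t| \leq R} \| f_1(\cdot+t)\, f_2(\cdot+2t)\, f_3(\cdot+3t) \|_{L^p({\bf R})}\ \frac{dt}{|t|}
\leq \|f_1\|_{L^{p_1}} \|f_2\|_{L^{p_2}} \|f_3\|_{L^{p_3}} \int_{r \leq |t| \leq R} \frac{dt}{|t|}
= 2 \log \frac{R}{r}\ \|f_1\|_{L^{p_1}} \|f_2\|_{L^{p_2}} \|f_3\|_{L^{p_3}}.
```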

The same argument works for higher order Hilbert transforms (and one can also replace the coefficients in these transforms with other rational constants). However, because the quantitative bounds in the arithmetic regularity and counting lemmas are so poor, it does not seem likely that one can use these methods to remove the logarithmic growth in ${R/r}$ entirely, and some additional ideas will be needed to resolve the full conjecture.

I’ve just uploaded to the arXiv my paper “Failure of the ${L^1}$ pointwise and maximal ergodic theorems for the free group“, submitted to Forum of Mathematics, Sigma. This paper concerns a variant of the pointwise ergodic theorem of Birkhoff, which asserts that if one has a measure-preserving shift map ${T: X \rightarrow X}$ on a probability space ${X = (X,\mu)}$, then for any ${f \in L^1(X)}$, the averages ${\frac{1}{N} \sum_{n=1}^N f \circ T^{-n}}$ converge pointwise almost everywhere. (In the important case when the shift map ${T}$ is ergodic, the pointwise limit is simply the mean ${\int_X f\ d\mu}$ of the original function ${f}$.)

The pointwise ergodic theorem can be extended to measure-preserving actions of other amenable groups, if one uses a suitably “tempered” Folner sequence of averages; see this paper of Lindenstrauss for more details. (I also wrote up some notes on that paper here, back in 2006 before I had started this blog.) But the arguments used to handle the amenable case break down completely for non-amenable groups, and in particular for the free non-abelian group ${F_2}$ on two generators.

Nevo and Stein studied this problem and obtained a number of pointwise ergodic theorems for ${F_2}$-actions ${(T_g)_{g \in F_2}}$ on probability spaces ${(X,\mu)}$. For instance, for the spherical averaging operators

$\displaystyle {\mathcal A}_n f := \frac{1}{4 \times 3^{n-1}} \sum_{g \in F_2: |g| = n} f \circ T_g^{-1}$

(where ${|g|}$ denotes the length of the reduced word that forms ${g}$), they showed that ${{\mathcal A}_{2n} f}$ converged pointwise almost everywhere provided that ${f}$ was in ${L^p(X)}$ for some ${p>1}$. (The need to restrict to spheres of even radius can be seen by considering the action of ${F_2}$ on the two-element set ${\{0,1\}}$ in which both generators of ${F_2}$ act by interchanging the elements, in which case ${{\mathcal A}_n}$ is determined by the parity of ${n}$.) This result was reproven with a different and simpler proof by Bufetov, who also managed to relax the condition ${f \in L^p(X)}$ to the weaker condition ${f \in L \log L(X)}$.
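The two-element example in the parenthetical can be checked by brute force; the following sketch (an illustrative aside, not from the post) enumerates the sphere ${\{g \in F_2: |g|=n\}}$ and confirms both the sphere size ${4 \times 3^{n-1}}$ and the fact that ${{\mathcal A}_n f}$ on ${\{0,1\}}$ is determined by the parity of ${n}$:

```python
# Generators a, b of F_2 (with inverses A, B) both act on {0,1} by swapping.
GENS = ['a', 'b', 'A', 'B']
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduced_words(n):
    """All reduced words of length n >= 1 in F_2."""
    words = [[g] for g in GENS]
    for _ in range(n - 1):
        words = [w + [g] for w in words for g in GENS if g != inverse[w[-1]]]
    return words

def act(word, x):
    for _ in word:
        x = 1 - x            # every generator (and inverse) swaps 0 <-> 1
    return x

f = {0: 1.0, 1: 0.0}        # test function on the two-point space
for n in range(1, 7):
    words = reduced_words(n)
    assert len(words) == 4 * 3 ** (n - 1)   # size of the sphere |g| = n
    for x in (0, 1):
        avg = sum(f[act(w, x)] for w in words) / len(words)
        assert avg == f[(x + n) % 2]        # A_n f depends only on n mod 2
print("sphere sizes and parity behaviour verified for n = 1,...,6")
```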

The question remained open as to whether the pointwise ergodic theorem for ${F_2}$-actions held if one only assumed that ${f}$ was in ${L^1(X)}$. Nevo and Stein were able to establish this for the Cesàro averages ${\frac{1}{N} \sum_{n=1}^N {\mathcal A}_n}$, but not for ${{\mathcal A}_n}$ itself. About six years ago, Assaf Naor and I tried our hand at this problem, and we were able to show an associated maximal inequality on ${\ell^1(F_2)}$, but due to the non-amenability of ${F_2}$, this inequality did not transfer to ${L^1(X)}$ and did not have any direct impact on this question, despite a fair amount of effort on our part to attack it.

Inspired by some recent conversations with Lewis Bowen, I returned to this problem. This time around, I tried to construct a counterexample to the ${L^1}$ pointwise ergodic theorem – something Assaf and I had not seriously attempted to do (perhaps due to being a bit too enamoured of our ${\ell^1(F_2)}$ maximal inequality). I knew of an existing counterexample of Ornstein regarding a failure of an ${L^1}$ ergodic theorem for iterates ${P^n}$ of a self-adjoint Markov operator – in fact, I had written some notes on this example back in 2007. Upon revisiting my notes, I soon discovered that the Ornstein construction was adaptable to the ${F_2}$ setting, thus settling the problem in the negative:

Theorem 1 (Failure of ${L^1}$ pointwise ergodic theorem) There exists a measure-preserving ${F_2}$-action on a probability space ${X}$ and a non-negative function ${f \in L^1(X)}$ such that ${\sup_n {\mathcal A}_{2n} f(x) = +\infty}$ for almost every ${x}$.

To describe the proof of this theorem, let me first briefly sketch the main ideas of Ornstein’s construction, which gave an example of a self-adjoint Markov operator ${P}$ on a probability space ${X}$ and a non-negative ${f \in L^1(X)}$ such that ${\sup_n P^n f(x) = +\infty}$ for almost every ${x}$. By some standard manipulations, it suffices to show that for any given ${\alpha > 0}$ and ${\varepsilon>0}$, there exists a self-adjoint Markov operator ${P}$ on a probability space ${X}$ and a non-negative ${f \in L^1(X)}$ with ${\|f\|_{L^1(X)} \leq \alpha}$, such that ${\sup_n P^n f \geq 1-\varepsilon}$ on a set of measure at least ${1-\varepsilon}$. Actually, it will be convenient to replace the Markov chain ${(P^n f)_{n \geq 0}}$ with an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ – that is to say, a sequence of non-negative functions ${f_n}$ for both positive and negative ${n}$, such that ${f_{n+1} = P f_n}$ for all ${n \in {\bf Z}}$. The purpose of requiring the Markov chain to be ancient (that is, to extend infinitely far back in time) is to allow for the Markov chain to be shifted arbitrarily in time, which is key to Ornstein’s construction. (Technically, Ornstein’s original argument only uses functions that go back to a large negative time, rather than being infinitely ancient, but I will gloss over this point for sake of discussion, as it turns out that the ${F_2}$ version of the argument can be run using infinitely ancient chains.)

For any ${\alpha>0}$, let ${P(\alpha)}$ denote the claim that for any ${\varepsilon>0}$, there exists an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ with ${\|f_n\|_{L^1(X)} = \alpha}$ such that ${\sup_{n \in {\bf Z}} f_n \geq 1-\varepsilon}$ on a set of measure at least ${1-\varepsilon}$. Clearly ${P(1)}$ holds since we can just take ${f_n=1}$ for all ${n}$. Our objective is to show that ${P(\alpha)}$ holds for arbitrarily small ${\alpha}$. The heart of Ornstein’s argument is then the implication

$\displaystyle P(\alpha) \implies P( \alpha (1 - \frac{\alpha}{4}) ) \ \ \ \ \ (1)$

for any ${0 < \alpha \leq 1}$, which upon iteration quickly gives the desired claim.
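(To see quantitatively how quickly the iteration drives the mass to zero, here is a small numerical aside, not from the paper: the recursion ${\alpha \mapsto \alpha(1-\alpha/4)}$ behaves like ${\alpha_k \approx 4/k}$ for large ${k}$.)

```python
# Iterate alpha -> alpha * (1 - alpha/4) starting from the base case alpha = 1
# of P(1); the masses decrease monotonically to 0, so P(alpha) holds for
# arbitrarily small alpha.
alpha = 1.0
masses = [alpha]
for _ in range(10 ** 5):
    alpha *= 1 - alpha / 4
    masses.append(alpha)
assert all(m2 < m1 for m1, m2 in zip(masses, masses[1:]))
assert masses[-1] < 1e-4            # consistent with alpha_k ~ 4/k
print(f"after {len(masses) - 1} iterations, alpha = {masses[-1]:.2e}")
```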

Let’s see informally how (1) works. By hypothesis, and ignoring epsilons, we can find an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ on some probability space ${X}$ of total mass ${\|f_n\|_{L^1(X)} = \alpha}$, such that ${\sup_n f_n}$ attains the value of ${1}$ or greater almost everywhere. Assuming that the Markov process is irreducible, the ${f_n}$ will eventually converge as ${n \rightarrow \infty}$ to the constant value of ${\|f_n\|_{L^1(X)}}$, in particular its final state will essentially stay above ${\alpha}$ (up to small errors).

Now suppose we duplicate the Markov process by replacing ${X}$ with a double copy ${X \times \{1,2\}}$ (giving ${\{1,2\}}$ the uniform probability measure), and using the disjoint sum of the Markov operators on ${X \times \{1\}}$ and ${X \times \{2\}}$ as the propagator, so that there is no interaction between the two components of this new system. Then the functions ${f'_n(x,i) := f_n(x) 1_{i=1}}$ form an ancient Markov chain of mass at most ${\alpha/2}$ that lives solely in the first half ${X \times \{1\}}$ of this copy, and ${\sup_n f'_n}$ attains the value of ${1}$ or greater on almost all of the first half ${X \times \{1\}}$, but is zero on the second half. The final state of ${f'_n}$ will be to stay above ${\alpha}$ in the first half ${X \times \{1\}}$, but be zero on the second half.

Now we modify the above example by allowing an infinitesimal amount of interaction between the two halves ${X \times \{1\}}$, ${X \times \{2\}}$ of the system (I mentally think of ${X \times \{1\}}$ and ${X \times \{2\}}$ as two identical boxes that a particle can bounce around in, and now we wish to connect the boxes by a tiny tube). The precise way in which this interaction is inserted is not terribly important so long as the new Markov process is irreducible. Once one does so, then the ancient Markov chain ${(f'_n)_{n \in {\bf Z}}}$ in the previous example gets replaced by a slightly different ancient Markov chain ${(f''_n)_{n \in {\bf Z}}}$ which is more or less identical with ${f'_n}$ for negative times ${n}$, or for bounded positive times ${n}$, but for very large values of ${n}$ the final state is now constant across the entire state space ${X \times \{1,2\}}$, and will stay above ${\alpha/2}$ on this space.

Finally, we consider an ancient Markov chain ${F_n}$ which is basically of the form

$\displaystyle F_n(x,i) \approx f''_n(x,i) + (1 - \frac{\alpha}{2}) f_{n-M}(x) 1_{i=2}$

for some large parameter ${M}$ and for all ${n \leq M}$ (the approximation becomes increasingly inaccurate for ${n}$ much larger than ${M}$, but never mind this for now). This is basically two copies of the original Markov process in separate, barely interacting state spaces ${X \times \{1\}, X \times \{2\}}$, but with the second copy delayed by a large time delay ${M}$, and also attenuated in amplitude by a factor of ${1-\frac{\alpha}{2}}$. The total mass of this process is now ${\frac{\alpha}{2} + \frac{\alpha}{2} (1 -\frac{\alpha}{2}) = \alpha (1 - \alpha/4)}$. Because of the ${f''_n}$ component of ${F_n}$, we see that ${\sup_n F_n}$ basically attains the value of ${1}$ or greater on the first half ${X \times \{1\}}$. On the second half ${X \times \{2\}}$, we work with times ${n}$ close to ${M}$. If ${M}$ is large enough, ${f''_n}$ would have averaged out to about ${\alpha/2}$ at such times, but the ${(1 - \frac{\alpha}{2}) f_{n-M}(x)}$ component can get as large as ${1-\alpha/2}$ here. Summing (and continuing to ignore various epsilon losses), we see that ${\sup_n F_n}$ can get as large as ${1}$ on almost all of the second half ${X \times \{2\}}$. This concludes the rough sketch of how one establishes the implication (1).

It was observed by Bufetov that the spherical averages ${{\mathcal A}_n}$ for a free group action can be lifted up to become powers ${P^n}$ of a Markov operator, basically by randomly assigning a “velocity vector” ${s \in \{a,b,a^{-1},b^{-1}\}}$ to one’s base point ${x}$ and then applying the Markov process that moves ${x}$ along that velocity vector (and then randomly changing the velocity vector at each time step, subject to the “reduced word” condition that the velocity never flips from ${s}$ to ${s^{-1}}$). Thus the spherical average problem has a Markov operator interpretation, which opens the door to adapting the Ornstein construction to the setting of ${F_2}$ systems. This turns out to be doable after a certain amount of technical artifice; the main thing is to work with ${F_2}$-measure preserving systems that admit ancient Markov chains that are initially supported in a very small region in the “interior” of the state space, so that one can couple such systems to each other “at the boundary” in the fashion needed to establish the analogue of (1) without disrupting the ancient dynamics of such chains. The initial such system (used to establish the base case ${P(1)}$) comes from basically considering the action of ${F_2}$ on a (suitably renormalised) “infinitely large ball” in the Cayley graph, after suitably gluing together the boundary of this ball to complete the action. The ancient Markov chain associated to this system starts at the centre of this infinitely large ball at infinite negative time ${n=-\infty}$, and only reaches the boundary of this ball at the time ${n=0}$.