
In the previous set of notes, we constructed the measure-theoretic notion of the Lebesgue integral, and used this to set up the probabilistic notion of expectation on a rigorous footing. In this set of notes, we will similarly construct the measure-theoretic concept of a product measure (restricting to the case of probability measures to avoid unnecessary technicalities), and use this to set up the probabilistic notion of independence on a rigorous footing. (To quote Durrett: “measure theory ends and probability theory begins with the definition of independence.”) We will be able to take virtually any collection of random variables (or probability distributions) and couple them together to be independent via the product measure construction, though for infinite products there is the slight technicality (a requirement of the Kolmogorov extension theorem) that the random variables need to range in standard Borel spaces. This is not the only way to couple together such random variables, but it is the simplest and the easiest to compute with in practice, as we shall see in the next few sets of notes.

In Notes 0, we introduced the notion of a measure space ${\Omega = (\Omega, {\mathcal F}, \mu)}$, which includes as a special case the notion of a probability space. By selecting one such probability space ${(\Omega,{\mathcal F},\mu)}$ as a sample space, one obtains a model for random events and random variables, with random events ${E}$ being modeled by measurable sets ${E_\Omega}$ in ${{\mathcal F}}$, and random variables ${X}$ taking values in a measurable space ${R}$ being modeled by measurable functions ${X_\Omega: \Omega \rightarrow R}$. We then defined some basic operations on these random events and variables:

• Given events ${E,F}$, we defined the conjunction ${E \wedge F}$, the disjunction ${E \vee F}$, and the complement ${\overline{E}}$. For countable families ${E_1,E_2,\dots}$ of events, we similarly defined ${\bigwedge_{n=1}^\infty E_n}$ and ${\bigvee_{n=1}^\infty E_n}$. We also defined the empty event ${\emptyset}$ and the sure event ${\overline{\emptyset}}$, and what it meant for two events to be equal.
• Given random variables ${X_1,\dots,X_n}$ in ranges ${R_1,\dots,R_n}$ respectively, and a measurable function ${F: R_1 \times \dots \times R_n \rightarrow S}$, we defined the random variable ${F(X_1,\dots,X_n)}$ in range ${S}$. (As the special case ${n=0}$ of this, every deterministic element ${s}$ of ${S}$ was also a random variable taking values in ${S}$.) Given a relation ${P: R_1 \times \dots \times R_n \rightarrow \{\hbox{true}, \hbox{false}\}}$, we similarly defined the event ${P(X_1,\dots,X_n)}$. Conversely, given an event ${E}$, we defined the indicator random variable ${1_E}$. Finally, we defined what it meant for two random variables to be equal.
• Given an event ${E}$, we defined its probability ${{\bf P}(E)}$.

These operations obey various axioms; for instance, the boolean operations on events obey the axioms of a Boolean algebra, and the probability function ${E \mapsto {\bf P}(E)}$ obeys the Kolmogorov axioms. However, we will not focus on the axiomatic approach to probability theory here, instead basing the foundations of probability theory on the sample space models as discussed in Notes 0. (But see this previous post for a treatment of one such axiomatic approach.)

It turns out that almost all of the other operations on random events and variables we need can be constructed in terms of the above basic operations. In particular, this allows one to safely extend the sample space in probability theory whenever needed, provided one uses an extension that respects the above basic operations; this is an important operation when one needs to add new sources of randomness to an existing system of events and random variables, or to couple together two separate such systems into a joint system that extends both of the original systems. We gave a simple example of such an extension in the previous notes, but now we give a more formal definition:

Definition 1 Suppose that we are using a probability space ${\Omega = (\Omega, {\mathcal F}, \mu)}$ as the model for a collection of events and random variables. An extension of this probability space is a probability space ${\Omega' = (\Omega', {\mathcal F}', \mu')}$, together with a measurable map ${\pi: \Omega' \rightarrow \Omega}$ (sometimes called the factor map) which is probability-preserving in the sense that

$\displaystyle \mu'( \pi^{-1}(E) ) = \mu(E) \ \ \ \ \ (1)$

for all ${E \in {\mathcal F}}$. (Caution: this does not imply that ${\mu(\pi(F)) = \mu'(F)}$ for all ${F \in {\mathcal F}'}$ – why not?)

An event ${E}$ which is modeled by a measurable subset ${E_\Omega}$ in the sample space ${\Omega}$, will be modeled by the measurable set ${E_{\Omega'} := \pi^{-1}(E_\Omega)}$ in the extended sample space ${\Omega'}$. Similarly, a random variable ${X}$ taking values in some range ${R}$ that is modeled by a measurable function ${X_\Omega: \Omega \rightarrow R}$ in ${\Omega}$, will be modeled instead by the measurable function ${X_{\Omega'} := X_\Omega \circ \pi}$ in ${\Omega'}$. We also allow the extension ${\Omega'}$ to model additional events and random variables that were not modeled by the original sample space ${\Omega}$ (indeed, this is one of the main reasons why we perform extensions in probability in the first place).

Thus, for instance, the sample space ${\Omega'}$ in Example 3 of the previous post is an extension of the sample space ${\Omega}$ in that example, with the factor map ${\pi: \Omega' \rightarrow \Omega}$ given by the first coordinate projection ${\pi(i,j) := i}$. One can verify that all of the basic operations on events and random variables listed above are unaffected by the above extension (with one caveat, see remark below). For instance, the conjunction ${E \wedge F}$ of two events can be defined via the original model ${\Omega}$ by the formula

$\displaystyle (E \wedge F)_\Omega := E_\Omega \cap F_\Omega$

or via the extension ${\Omega'}$ via the formula

$\displaystyle (E \wedge F)_{\Omega'} := E_{\Omega'} \cap F_{\Omega'}.$

The two definitions are consistent with each other, thanks to the obvious set-theoretic identity

$\displaystyle \pi^{-1}( E_\Omega \cap F_\Omega ) = \pi^{-1}(E_\Omega) \cap \pi^{-1}(F_\Omega).$

Similarly, the assumption (1) is precisely what is needed to ensure that the probability ${\mathop{\bf P}(E)}$ of an event remains unchanged when one replaces a sample space model with an extension. We leave the verification of preservation of the other basic operations described above under extension as exercises to the reader.
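Though nothing in these notes depends on it, the bookkeeping of extensions can be sanity-checked on a small discrete example. The following Python sketch (purely illustrative; the names `pi` and `pullback` are ad hoc, and the two-dice extension is in the spirit of Example 3 of Notes 0) verifies the probability-preservation property (1) and the compatibility of conjunction with pullback:

```python
from fractions import Fraction

# Illustrative model: a die roll on Omega = {1,...,6}, extended to
# Omega' = {1,...,6} x {1,...,6} by adjoining a second die, with factor
# map pi the first-coordinate projection.
Omega = set(range(1, 7))
OmegaP = {(i, j) for i in range(1, 7) for j in range(1, 7)}
mu = {w: Fraction(1, 6) for w in Omega}        # uniform measure on Omega
muP = {w: Fraction(1, 36) for w in OmegaP}     # uniform measure on Omega'

def pi(w):
    return w[0]

def pullback(E):
    """The model E_{Omega'} = pi^{-1}(E_Omega) of an event E in the extension."""
    return {w for w in OmegaP if pi(w) in E}

E = {2, 4, 6}   # the event that the die roll is even
F = {1, 2, 3}   # the event that the die roll is at most three

# Probability preservation (1): mu'(pi^{-1}(E_Omega)) = mu(E_Omega).
assert sum(muP[w] for w in pullback(E)) == sum(mu[w] for w in E)
# Conjunction is respected: pi^{-1}(E ∩ F) = pi^{-1}(E) ∩ pi^{-1}(F).
assert pullback(E & F) == pullback(E) & pullback(F)
```

The exact rational arithmetic via `Fraction` makes the probability-preservation check an identity rather than a floating-point approximation.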

Remark 2 There is one minor exception to this general rule if we do not impose the additional requirement that the factor map ${\pi}$ is surjective. Namely, for non-surjective ${\pi}$, it can become possible that two events ${E, F}$ are unequal in the original sample space model, but become equal in the extension (and similarly for random variables), although the converse never happens (events that are equal in the original sample space always remain equal in the extension). For instance, let ${\Omega}$ be the discrete probability space ${\{a,b\}}$ with ${p_a=1}$ and ${p_b=0}$, and let ${\Omega'}$ be the discrete probability space ${\{ a'\}}$ with ${p'_{a'}=1}$, and non-surjective factor map ${\pi: \Omega' \rightarrow \Omega}$ defined by ${\pi(a') := a}$. Then the event modeled by ${\{b\}}$ in ${\Omega}$ is distinct from the empty event when viewed in ${\Omega}$, but becomes equal to that event when viewed in ${\Omega'}$. Thus we see that extending the sample space by a non-surjective factor map can identify previously distinct events together (though of course, being probability preserving, this can only happen if those two events were already almost surely equal anyway). This turns out to be fairly harmless though; while it is nice to know if two given events are equal, or if they differ by a non-null event, it is almost never useful to know that two events are unequal if they are already almost surely equal. Alternatively, one can add the additional requirement of surjectivity in the definition of an extension, which is also a fairly harmless constraint to impose (this is what I chose to do in this previous set of notes).

Roughly speaking, one can define probability theory as the study of those properties of random events and random variables that are model-independent in the sense that they are preserved by extensions. For instance, the cardinality ${|E_\Omega|}$ of the model ${E_\Omega}$ of an event ${E}$ is not a concept within the scope of probability theory, as it is not preserved by extensions: continuing Example 3 from Notes 0, the event ${E}$ that a die roll ${X}$ is even is modeled by a set ${E_\Omega = \{2,4,6\}}$ of cardinality ${3}$ in the original sample space model ${\Omega}$, but by a set ${E_{\Omega'} = \{2,4,6\} \times \{1,2,3,4,5,6\}}$ of cardinality ${18}$ in the extension. Thus it does not make sense in the context of probability theory to refer to the “cardinality of an event ${E}$“.

On the other hand, the supremum ${\sup_n X_n}$ of a collection of random variables ${X_n}$ in the extended real line ${[-\infty,+\infty]}$ is a valid probabilistic concept. This can be seen by manually verifying that this operation is preserved under extension of the sample space, but one can also see this by defining the supremum in terms of existing basic operations. Indeed, note from Exercise 24 of Notes 0 that a random variable ${X}$ in the extended real line is completely specified by the threshold events ${(X \leq t)}$ for ${t \in {\bf R}}$; in particular, two such random variables ${X,Y}$ are equal if and only if the events ${(X \leq t)}$ and ${(Y \leq t)}$ are surely equal for all ${t}$. From the identity

$\displaystyle (\sup_n X_n \leq t) = \bigwedge_{n=1}^\infty (X_n \leq t)$

we thus see that one can completely specify ${\sup_n X_n}$ in terms of ${X_n}$ using only the basic operations provided in the above list (and in particular using the countable conjunction ${\bigwedge_{n=1}^\infty}$). Of course, the same considerations hold if one replaces the supremum by the infimum, limit superior, limit inferior, or (if it exists) the limit.
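In a finite toy model, the identity above can be checked mechanically. The following Python sketch (illustrative only, with an arbitrarily chosen finite sample space) verifies that the model of ${(\sup_n X_n \leq t)}$ agrees with the intersection of the models of the events ${(X_n \leq t)}$:

```python
import random

# Illustrative finite model: a sample space of ten points and five
# real-valued random variables X_0,...,X_4.
random.seed(0)
Omega = range(10)
X = [{w: random.uniform(-1, 1) for w in Omega} for _ in range(5)]

def event_le(Y, t):
    """The model of the threshold event (Y <= t) as a subset of Omega."""
    return {w for w in Omega if Y[w] <= t}

sup_X = {w: max(Xn[w] for Xn in X) for w in Omega}

# (sup_n X_n <= t) = /\_n (X_n <= t), for each threshold t tested.
for t in [-0.5, 0.0, 0.3, 0.9]:
    assert event_le(sup_X, t) == set.intersection(*[event_le(Xn, t) for Xn in X])
```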

In this set of notes, we will define some further important operations on scalar random variables, in particular the expectation of these variables. In the sample space models, expectation corresponds to the notion of integration on a measure space. As we will need to use both expectation and integration in this course, we will thus begin by quickly reviewing the basics of integration on a measure space, although we will then translate the key results of this theory into probabilistic language.

As the finer details of the Lebesgue integral construction are not the core focus of this probability course, some of the details of this construction will be left to exercises. See also Chapter 1 of Durrett, or these previous blog notes, for a more detailed treatment.

Starting this week, I will be teaching an introductory graduate course (Math 275A) on probability theory here at UCLA. While I find myself using probabilistic methods routinely nowadays in my research (for instance, the probabilistic concept of Shannon entropy played a crucial role in my recent paper on the Chowla and Elliott conjectures, and random multiplicative functions similarly played a central role in the paper on the Erdos discrepancy problem), this will actually be the first time I will be teaching a course on probability itself (although I did give a course on random matrix theory some years ago that presumed familiarity with graduate-level probability theory). As such, I will be relying primarily on an existing textbook, in this case Durrett’s Probability: Theory and Examples. I still need to prepare lecture notes, though, and so I thought I would continue my practice of putting my notes online, although in this particular case they will be less detailed or complete than with other courses, as they will mostly be focusing on those topics that are not already comprehensively covered in the text of Durrett. Below the fold are my first such set of notes, concerning the classical measure-theoretic foundations of probability. (I wrote on these foundations also in this previous blog post, but in that post I already assumed that the reader was familiar with measure theory and basic probability, whereas in this course not every student will have a strong background in these areas.)

Note: as this set of notes is primarily concerned with foundational issues, it will contain a large number of pedantic (and nearly trivial) formalities and philosophical points. We dwell on these technicalities in this set of notes primarily so that they are out of the way in later notes, when we work with the actual mathematics of probability, rather than on the supporting foundations of that mathematics. In particular, the excessively formal and philosophical language in this set of notes will not be replicated in later notes.

The equidistribution theorem asserts that if ${\alpha \in {\bf R}/{\bf Z}}$ is an irrational phase, then the sequence ${(n\alpha)_{n=1}^\infty}$ is equidistributed on the unit circle, or equivalently that

$\displaystyle \frac{1}{N} \sum_{n=1}^N F(n\alpha) \rightarrow \int_{{\bf R}/{\bf Z}} F(x)\ dx$

for any continuous (or equivalently, for any smooth) function ${F: {\bf R}/{\bf Z} \rightarrow {\bf C}}$. By approximating ${F}$ uniformly by a Fourier series, this claim is equivalent to showing that

$\displaystyle \frac{1}{N} \sum_{n=1}^N e(hn\alpha) \rightarrow 0$

for any non-zero integer ${h}$ (where ${e(x) := e^{2\pi i x}}$), which is easily verified from the irrationality of ${\alpha}$ and the geometric series formula. Conversely, if ${\alpha}$ is rational, then clearly ${\frac{1}{N} \sum_{n=1}^N e(hn\alpha)}$ fails to go to zero when ${h}$ is a multiple of the denominator of ${\alpha}$.
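Both behaviours are easy to observe numerically; here is a small Python sketch (illustrative only, with the ad hoc helper `avg_exp_sum`) contrasting an irrational phase with a rational one:

```python
import cmath, math

def e(x):
    """e(x) = exp(2*pi*i*x)."""
    return cmath.exp(2j * math.pi * x)

def avg_exp_sum(alpha, h, N):
    """(1/N) * sum_{n=1}^N e(h*n*alpha)."""
    return sum(e(h * n * alpha) for n in range(1, N + 1)) / N

# Irrational phase: the averages tend to zero (indeed they are O(1/N) for fixed h,
# by the geometric series formula).
assert abs(avg_exp_sum(math.sqrt(2), 1, 10**5)) < 0.01

# Rational phase alpha = 3/7: when h is a multiple of the denominator,
# e(h*n*alpha) = 1 for every n, so the average stays at 1.
assert abs(avg_exp_sum(3 / 7, 7, 10**4) - 1) < 1e-6
```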

One can then ask for more quantitative information about the decay of the exponential sum ${\frac{1}{N} \sum_{n=1}^N e(n \alpha)}$, or more generally on exponential sums of the form ${\frac{1}{|Q|} \sum_{n \in Q} e(P(n))}$ for an arithmetic progression ${Q}$ (in this post all progressions are understood to be finite) and a polynomial ${P: Q \rightarrow {\bf R}/{\bf Z}}$. It will be convenient to phrase such information in the form of an inverse theorem, describing those phases for which the exponential sum is large. Indeed, we have

Lemma 1 (Geometric series formula, inverse form) Let ${Q \subset {\bf Z}}$ be an arithmetic progression of length at most ${N}$ for some ${N \geq 1}$, and let ${P(n) = n \alpha + \beta}$ be a linear polynomial for some ${\alpha,\beta \in {\bf R}/{\bf Z}}$. If

$\displaystyle \frac{1}{N} |\sum_{n \in Q} e(P(n))| \geq \delta$

for some ${\delta > 0}$, then there exists a subprogression ${Q'}$ of ${Q}$ of size ${|Q'| \gg \delta^2 N}$ such that ${P(n)}$ varies by at most ${\delta}$ on ${Q'}$ (that is to say, ${P(n)}$ lies in a subinterval of ${{\bf R}/{\bf Z}}$ of length at most ${\delta}$).

Proof: By a linear change of variable we may assume that ${Q}$ is of the form ${\{0,\dots,N'-1\}}$ for some ${N' \geq 1}$. We may of course assume that ${\alpha}$ is non-zero in ${{\bf R}/{\bf Z}}$, so that ${\|\alpha\|_{{\bf R}/{\bf Z}} > 0}$ (${\|x\|_{{\bf R}/{\bf Z}}}$ denotes the distance from ${x}$ to the nearest integer). From the geometric series formula we see that

$\displaystyle |\sum_{n \in Q} e(P(n))| \leq \frac{2}{|e(\alpha) - 1|} \ll \frac{1}{\|\alpha\|_{{\bf R}/{\bf Z}}},$

and so ${\|\alpha\|_{{\bf R}/{\bf Z}} \ll \frac{1}{\delta N}}$. Setting ${Q' := \{ n \in Q: n \leq c \delta^2 N \}}$ for some sufficiently small absolute constant ${c}$, we obtain the claim. $\Box$
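As a numerical sanity check of the two bounds used in this proof, one can verify the geometric series inequality, together with the explicit form ${|e(\alpha)-1| = 2|\sin(\pi \alpha)| \geq 4 \|\alpha\|_{{\bf R}/{\bf Z}}}$ of the second bound, on a few sample phases (a Python sketch, not part of the argument):

```python
import cmath, math

def e(x):
    return cmath.exp(2j * math.pi * x)

def dist(x):
    """|| x ||_{R/Z}, the distance from x to the nearest integer."""
    return abs(x - round(x))

# Check |sum_{n=0}^{N'-1} e(n*alpha + beta)| <= 2/|e(alpha)-1| <= 1/(2*||alpha||)
# on a few sample phases; the last step uses |e(alpha)-1| = 2|sin(pi*alpha)|
# together with |sin(pi*x)| >= 2*||x||.
for alpha in [0.30103, math.sqrt(2) % 1, 0.499, 0.01]:
    for Np in [10, 100, 1000]:
        S = abs(sum(e(n * alpha + 0.2) for n in range(Np)))
        assert S <= 2 / abs(e(alpha) - 1) + 1e-9
        assert S <= 1 / (2 * dist(alpha)) + 1e-9
```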

Thus, in order for a linear phase ${P(n)}$ to fail to be equidistributed on some long progression ${Q}$, ${P}$ must in fact be almost constant on a large piece of ${Q}$.

As is well known, this phenomenon generalises to higher order polynomials. To describe this generalisation we will need two additional elementary lemmas. The first relates the exponential sums of ${P}$ to the exponential sums of its “first derivatives” ${n \mapsto P(n+h)-P(n)}$.

Lemma 2 (Van der Corput lemma, inverse form) Let ${Q \subset {\bf Z}}$ be an arithmetic progression of length at most ${N}$, and let ${P: Q \rightarrow {\bf R}/{\bf Z}}$ be an arbitrary function such that

$\displaystyle \frac{1}{N} |\sum_{n \in Q} e(P(n))| \geq \delta \ \ \ \ \ (1)$

for some ${\delta > 0}$. Then, for ${\gg \delta^2 N}$ integers ${h \in Q-Q}$, there exists a subprogression ${Q_h}$ of ${Q}$, of the same spacing as ${Q}$, such that

$\displaystyle \frac{1}{N} |\sum_{n \in Q_h} e(P(n+h)-P(n))| \gg \delta^2. \ \ \ \ \ (2)$

Proof: Squaring (1), we see that

$\displaystyle \sum_{n,n' \in Q} e(P(n') - P(n)) \geq \delta^2 N^2.$

We write ${n' = n+h}$ and conclude that

$\displaystyle \sum_{h \in Q-Q} \sum_{n \in Q_h} e( P(n+h)-P(n) ) \geq \delta^2 N^2$

where ${Q_h := Q \cap (Q-h)}$ is a subprogression of ${Q}$ of the same spacing. Since ${\sum_{n \in Q_h} e( P(n+h)-P(n) ) = O(N)}$, we conclude that

$\displaystyle |\sum_{n \in Q_h} e( P(n+h)-P(n) )| \gg \delta^2 N$

for ${\gg \delta^2 N}$ values of ${h}$ (this can be seen, much like the pigeonhole principle, by arguing via contradiction for a suitable choice of implied constants). The claim follows. $\Box$
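The substitution ${n' = n+h}$ used in this proof is an exact identity prior to any estimation; the following Python sketch (with an arbitrary function ${P}$, purely as a sanity check) verifies it:

```python
import cmath, math, random

def e(x):
    return cmath.exp(2j * math.pi * x)

random.seed(1)
Q = range(20)                          # a progression of spacing 1 and length 20
P = {n: random.random() for n in Q}    # an arbitrary function Q -> R/Z

lhs = abs(sum(e(P[n]) for n in Q)) ** 2

# |sum_n e(P(n))|^2 = sum_{n,n' in Q} e(P(n')-P(n)); substituting n' = n + h
# regroups the pairs as a sum over h in Q-Q of sums over Q_h = Q ∩ (Q-h).
rhs = 0
for h in range(-19, 20):               # h ranges over Q - Q
    Qh = [n for n in Q if n + h in Q]  # Q ∩ (Q - h)
    rhs += sum(e(P[n + h] - P[n]) for n in Qh)

assert abs(lhs - rhs.real) < 1e-9 and abs(rhs.imag) < 1e-9
```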

The second lemma (which we recycle from this previous blog post) is a variant of the equidistribution theorem.

Lemma 3 (Vinogradov lemma) Let ${I \subset [-N,N] \cap {\bf Z}}$ be an interval for some ${N \geq 1}$, and let ${\theta \in{\bf R}/{\bf Z}}$ be such that ${\|n\theta\|_{{\bf R}/{\bf Z}} \leq \varepsilon}$ for at least ${\delta N}$ values of ${n \in I}$, for some ${0 < \varepsilon, \delta < 1}$. Then either

$\displaystyle N < \frac{2}{\delta}$

or

$\displaystyle \varepsilon > 10^{-2} \delta$

or else there is a natural number ${q \leq 2/\delta}$ such that

$\displaystyle \| q \theta \|_{{\bf R}/{\bf Z}} \ll \frac{\varepsilon}{\delta N}.$

Proof: We may assume that ${N \geq \frac{2}{\delta}}$ and ${\varepsilon \leq 10^{-2} \delta}$, since we are done otherwise. Then there are at least two ${n \in I}$ with ${\|n \theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}$, and by the pigeonhole principle we can find ${n_1 < n_2}$ in ${I}$ with ${\|n_1 \theta \|_{{\bf R}/{\bf Z}}, \|n_2 \theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}$ and ${n_2-n_1 \leq \frac{2}{\delta}}$. By the triangle inequality, we conclude that there exists at least one natural number ${q \leq \frac{2}{\delta}}$ for which

$\displaystyle \| q \theta \|_{{\bf R}/{\bf Z}} \leq 2\varepsilon.$

We take ${q}$ to be minimal amongst all such natural numbers; then there exists ${a}$ coprime to ${q}$ and ${|\kappa| \leq 2\varepsilon}$ such that

$\displaystyle \theta = \frac{a}{q} + \frac{\kappa}{q}. \ \ \ \ \ (3)$

If ${\kappa=0}$ then we are done, so suppose that ${\kappa \neq 0}$. Suppose that ${n < m}$ are elements of ${I}$ such that ${\|n\theta \|_{{\bf R}/{\bf Z}}, \|m\theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}$ and ${m-n \leq \frac{1}{10 |\kappa|}}$. Writing ${m-n = qk + r}$ for some ${0 \leq r < q}$, we have

$\displaystyle \| (m-n) \theta \|_{{\bf R}/{\bf Z}} = \| \frac{ra}{q} + (m-n) \frac{\kappa}{q} \|_{{\bf R}/{\bf Z}} \leq 2\varepsilon.$

By hypothesis, ${(m-n) \frac{|\kappa|}{q} \leq \frac{1}{10 q}}$; note that as ${q \leq 2/\delta}$ and ${\varepsilon \leq 10^{-2} \delta}$ we also have ${\varepsilon \leq \frac{1}{10q}}$. This implies that ${\| \frac{ra}{q} \|_{{\bf R}/{\bf Z}} < \frac{1}{q}}$ and thus ${r=0}$. We then have

$\displaystyle |k \kappa| \leq 2 \varepsilon.$

We conclude that for fixed ${n \in I}$ with ${\|n\theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}$, there are at most ${\frac{2\varepsilon}{|\kappa|}}$ elements ${m}$ of ${[n, n + \frac{1}{10 |\kappa|}]}$ such that ${\|m\theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}$. Iterating this with a greedy algorithm, we see that the number of ${n \in I}$ with ${\|n\theta \|_{{\bf R}/{\bf Z}} \leq \varepsilon}$ is at most ${(10 |\kappa| N + 1) \frac{2\varepsilon}{|\kappa|}}$; since ${\varepsilon \leq 10^{-2} \delta}$, this implies that

$\displaystyle \delta N \ll \varepsilon / |\kappa|$

and the claim follows. $\Box$
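To illustrate the third alternative of the lemma, one can run a concrete instance in which ${\theta}$ is close to the rational ${1/7}$ (a Python sketch; the parameters are chosen arbitrarily for illustration):

```python
def dist(x):
    """|| x ||_{R/Z}, the distance from x to the nearest integer."""
    return abs(x - round(x))

# Arbitrary illustrative parameters: theta = 1/7 + kappa for a tiny kappa, so
# ||n*theta|| is small precisely when n is near a multiple of 7.
N, eps, kappa = 1000, 0.01, 1e-5
theta = 1 / 7 + kappa

good = [n for n in range(-N, N + 1) if dist(n * theta) <= eps]
delta = len(good) / (2 * N)

# Third alternative of the lemma: some q <= 2/delta has ||q*theta|| << eps/(delta*N).
q = next(q for q in range(1, int(2 / delta) + 1) if dist(q * theta) <= 2 * eps)
assert q == 7
assert dist(q * theta) <= 10 * eps / (delta * N)   # generous implied constant
```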

Now we can quickly obtain a higher degree version of Lemma 1:

Proposition 4 (Weyl exponential sum estimate, inverse form) Let ${Q \subset {\bf Z}}$ be an arithmetic progression of length at most ${N}$ for some ${N \geq 1}$, and let ${P: {\bf Z} \rightarrow {\bf R}/{\bf Z}}$ be a polynomial of degree at most ${d}$ for some ${d \geq 0}$. If

$\displaystyle \frac{1}{N} |\sum_{n \in Q} e(P(n))| \geq \delta$

for some ${\delta > 0}$, then there exists a subprogression ${Q'}$ of ${Q}$ with ${|Q'| \gg_d \delta^{O_d(1)} N}$ such that ${P}$ varies by at most ${\delta}$ on ${Q'}$.

Proof: We induct on ${d}$. The cases ${d=0,1}$ are immediate from Lemma 1. Now suppose that ${d \geq 2}$, and that the claim has already been proven for ${d-1}$. To simplify the notation we allow implied constants to depend on ${d}$. Let the hypotheses be as in the proposition. Clearly ${\delta}$ cannot exceed ${1}$. By shrinking ${\delta}$ as necessary we may assume that ${\delta \leq c}$ for some sufficiently small constant ${c}$ depending on ${d}$.

By rescaling we may assume ${Q \subset [0,N] \cap {\bf Z}}$. By Lemma 2, there are ${\gg \delta^2 N}$ choices of ${h \in [-N,N] \cap {\bf Z}}$ such that

$\displaystyle \frac{1}{N} |\sum_{n \in I_h} e(P(n+h) - P(n))| \gg \delta^2$

for some interval ${I_h \subset [0,N] \cap {\bf Z}}$. We write ${P(n) = \sum_{i \leq d} \alpha_i n^i}$, then ${P(n+h)-P(n)}$ is a polynomial of degree at most ${d-1}$ with leading coefficient ${h \alpha_d n^{d-1}}$. We conclude from induction hypothesis that for each such ${h}$, there exists a natural number ${q_h \ll \delta^{-O(1)}}$ such that ${\|q_h h \alpha_d \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N^{d-1}}$, by double-counting, this implies that there are ${\gg \delta^{O(1)} N}$ integers ${n}$ in the interval ${[-\delta^{-O(1)} N, \delta^{-O(1)} N] \cap {\bf Z}}$ such that ${\|n \alpha_d \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N^{d-1}}$. Applying Lemma 3, we conclude that either ${N \ll \delta^{-O(1)}}$, or that

$\displaystyle \| q \alpha_d \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N^d. \ \ \ \ \ (4)$

In the former case the claim is trivial (just take ${Q'}$ to be a point), so we may assume that we are in the latter case.

We partition ${Q}$ into arithmetic progressions ${Q'}$ of spacing ${q}$ and length comparable to ${\delta^{C} N}$ for some large ${C}$ depending on ${d}$ to be chosen later. By hypothesis, we have

$\displaystyle \frac{1}{|Q|} |\sum_{n \in Q} e(P(n))| \geq \delta$

so by the pigeonhole principle, we have

$\displaystyle \frac{1}{|Q'|} |\sum_{n \in Q'} e(P(n))| \geq \delta$

for at least one such progression ${Q'}$. On this progression, we may use the binomial theorem and (4) to write ${\alpha_d n^d}$ as a polynomial in ${n}$ of degree at most ${d-1}$, plus an error of size ${O(\delta^{C - O(1)})}$. We thus can write ${P(n) = P'(n) + O(\delta^{C-O(1)})}$ for ${n \in Q'}$ for some polynomial ${P'}$ of degree at most ${d-1}$. By the triangle inequality, we thus have (for ${C}$ large enough) that

$\displaystyle \frac{1}{|Q'|} |\sum_{n \in Q'} e(P'(n))| \gg \delta$

and hence by the induction hypothesis we may find a subprogression ${Q''}$ of ${Q'}$ of size ${|Q''| \gg \delta^{O(1)} N}$ such that ${P'}$ varies by at most ${\delta/2}$ on ${Q''}$, and thus (for ${C}$ large enough again) that ${P}$ varies by at most ${\delta}$ on ${Q''}$, and the claim follows. $\Box$

This gives the following corollary (also given as Exercise 16 in this previous blog post):

Corollary 5 (Weyl exponential sum estimate, inverse form II) Let ${I \subset [-N,N] \cap {\bf Z}}$ be a discrete interval for some ${N \geq 1}$, and let ${P(n) = \sum_{i \leq d} \alpha_i n^i}$ be a polynomial of degree at most ${d}$ for some ${d \geq 0}$ and coefficients ${\alpha_0,\dots,\alpha_d \in {\bf R}/{\bf Z}}$. If

$\displaystyle \frac{1}{N} |\sum_{n \in I} e(P(n))| \geq \delta$

for some ${\delta > 0}$, then there is a natural number ${q \ll_d \delta^{-O_d(1)}}$ such that ${\| q\alpha_i \|_{{\bf R}/{\bf Z}} \ll_d \delta^{-O_d(1)} N^{-i}}$ for all ${i=0,\dots,d}$.

One can obtain much better exponents here using Vinogradov’s mean value theorem; see Theorem 1.6 of this paper of Wooley. (Thanks to Mariusz Mirek for this reference.) However, this weaker result already suffices for many applications, and does not need any result as deep as the mean value theorem.

Proof: To simplify notation we allow implied constants to depend on ${d}$. As before, we may assume that ${\delta \leq c}$ for some small constant ${c>0}$ depending only on ${d}$. We may also assume that ${N \geq \delta^{-C}}$ for some large ${C}$, as the claim is trivial otherwise (set ${q=1}$).

Applying Proposition 4, we can find a natural number ${q \ll \delta^{-O(1)}}$ and an arithmetic subprogression ${Q}$ of ${I}$ of spacing ${q}$ such that ${|Q| \gg \delta^{O(1)} N}$ and such that ${P}$ varies by at most ${\delta}$ on ${Q}$. Writing ${Q = \{ qn+r: n \in I'\}}$ for some interval ${I' \subset [0,N] \cap {\bf Z}}$ of length ${\gg \delta^{O(1)} N}$ and some ${0 \leq r < q}$, we conclude that the polynomial ${n \mapsto P(qn+r)}$ varies by at most ${\delta}$ on ${I'}$. Taking ${d^{th}}$ order differences, we conclude that the ${d^{th}}$ coefficient of this polynomial is ${O(\delta^{-O(1)} / N^d)}$; by the binomial theorem, this implies that ${n \mapsto P(qn+r)}$ differs by at most ${O(\delta)}$ on ${I'}$ from a polynomial of degree at most ${d-1}$. Iterating this, we conclude that the ${i^{th}}$ coefficient of ${n \mapsto P(qn+r)}$ is ${O(\delta^{-O(1)} N^{-i})}$ for ${i=0,\dots,d}$, and the claim then follows by inverting the change of variables ${n \mapsto qn+r}$ (and replacing ${q}$ with a larger quantity such as ${q^d}$ as necessary). $\Box$
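A concrete instance of Corollary 5 in the quadratic case ${d=2}$ can be checked numerically: if ${\alpha_2}$ is close to ${1/3}$, the normalised Weyl sum is large, and the denominator ${q=3}$ then brings ${q\alpha_2}$ within ${O(N^{-2})}$ of an integer (a Python sketch with arbitrarily chosen parameters):

```python
import cmath, math

def e(x):
    return cmath.exp(2j * math.pi * x)

def dist(x):
    """|| x ||_{R/Z}, the distance from x to the nearest integer."""
    return abs(x - round(x))

# Arbitrary illustrative parameters: a quadratic phase whose leading coefficient
# is a tiny perturbation of 1/3.
N = 300
alpha2 = 1 / 3 + 1 / (100 * N**2)

# The normalised Weyl sum is large (of size comparable to 1)...
S = abs(sum(e(alpha2 * n * n) for n in range(1, N + 1))) / N
assert S >= 0.3

# ...and correspondingly q = 3 brings q*alpha2 to within O(1/N^2) of an
# integer, consistent with Corollary 5 for the i = 2 coefficient.
assert dist(3 * alpha2) <= 1 / N**2
```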

For future reference we also record a higher degree version of the Vinogradov lemma.

Lemma 6 (Polynomial Vinogradov lemma) Let ${I \subset [-N,N] \cap {\bf Z}}$ be a discrete interval for some ${N \geq 1}$, and let ${P: {\bf Z} \rightarrow {\bf R}/{\bf Z}}$ be a polynomial ${P(n) = \sum_{i \leq d} \alpha_i n^i}$ of degree at most ${d}$ for some ${d \geq 1}$ such that ${\|P(n)\|_{{\bf R}/{\bf Z}} \leq \varepsilon}$ for at least ${\delta N}$ values of ${n \in I}$, for some ${0 < \varepsilon, \delta < 1}$. Then either

$\displaystyle N \ll_d \delta^{-O_d(1)} \ \ \ \ \ (5)$

or

$\displaystyle \varepsilon \gg_d \delta^{O_d(1)} \ \ \ \ \ (6)$

or else there is a natural number ${q \ll_d \delta^{-O_d(1)}}$ such that

$\displaystyle \| q \alpha_i \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^i}$

for all ${i=0,\dots,d}$.

Proof: We induct on ${d}$. For ${d=1}$ this follows from Lemma 3 (noting that if ${\|P(n)\|_{{\bf R}/{\bf Z}}, \|P(n_0)\|_{{\bf R}/{\bf Z}} \leq \varepsilon}$ then ${\|P(n)-P(n_0)\|_{{\bf R}/{\bf Z}} \leq 2\varepsilon}$), so suppose that ${d \geq 2}$ and that the claim is already proven for ${d-1}$. We now allow all implied constants to depend on ${d}$.

For each ${h \in [-2N,2N] \cap {\bf Z}}$, let ${N_h}$ denote the number of ${n \in [-N,N] \cap {\bf Z}}$ such that ${\| P(n+h)\|_{{\bf R}/{\bf Z}}, \|P(n)\|_{{\bf R}/{\bf Z}} \leq \varepsilon}$. By hypothesis, ${\sum_{h \in [-2N,2N] \cap {\bf Z}} N_h \gg \delta^2 N^2}$, and clearly ${N_h = O(N)}$, so we must have ${N_h \gg \delta^2 N}$ for ${\gg \delta^2 N}$ choices of ${h}$. For each such ${h}$, we then have ${\|P(n+h)-P(n)\|_{{\bf R}/{\bf Z}} \leq 2\varepsilon}$ for ${\gg \delta^2 N}$ choices of ${n \in [-N,N] \cap {\bf Z}}$, so by induction hypothesis, either (5) or (6) holds, or else for ${\gg \delta^{O(1)} N}$ choices of ${h \in [-2N,2N] \cap {\bf Z}}$, there is a natural number ${q_h \ll \delta^{-O(1)}}$ such that

$\displaystyle \| q_h \alpha_{i,h} \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^i}$

for ${i=1,\dots,d-1}$, where ${\alpha_{i,h}}$ are the coefficients of the degree ${d-1}$ polynomial ${n \mapsto P(n+h)-P(n)}$. We may of course assume it is the latter which holds. By the pigeonhole principle we may take ${q_h= q}$ to be independent of ${h}$.

Since ${\alpha_{d-1,h} = dh \alpha_d}$, we have

$\displaystyle \| qd h \alpha_d \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^{d-1}}$

for ${\gg \delta^{O(1)} N}$ choices of ${h}$, so by Lemma 3, either (5) or (6) holds, or else (after increasing ${q}$ as necessary) we have

$\displaystyle \| q \alpha_d \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^d}.$

We can again assume it is the latter that holds. This implies that ${q \alpha_{d-2,h} = q(d-1) h \alpha_{d-1} + O( \delta^{-O(1)} \varepsilon / N^{d-2} )}$ modulo ${1}$, so that

$\displaystyle \| q(d-1) h \alpha_{d-1} \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)} \varepsilon}{N^{d-2}}$

for ${\gg \delta^{O(1)} N}$ choices of ${h}$. Arguing as before and iterating, we obtain the claim. $\Box$

The above results also extend to higher dimensions. Here is the higher dimensional version of Proposition 4:

Proposition 7 (Multidimensional Weyl exponential sum estimate, inverse form) Let ${k \geq 1}$ and ${N_1,\dots,N_k \geq 1}$, and let ${Q_i \subset {\bf Z}}$ be arithmetic progressions of length at most ${N_i}$ for each ${i=1,\dots,k}$. Let ${P: {\bf Z}^k \rightarrow {\bf R}/{\bf Z}}$ be a polynomial of degrees at most ${d_1,\dots,d_k}$ in each of the ${k}$ variables ${n_1,\dots,n_k}$ separately. If

$\displaystyle \frac{1}{N_1 \dots N_k} |\sum_{n \in Q_1 \times \dots \times Q_k} e(P(n))| \geq \delta$

for some ${\delta > 0}$, then there exists a subprogression ${Q'_i}$ of ${Q_i}$ with ${|Q'_i| \gg_{k,d_1,\dots,d_k} \delta^{O_{k,d_1,\dots,d_k}(1)} N_i}$ for each ${i=1,\dots,k}$ such that ${P}$ varies by at most ${\delta}$ on ${Q'_1 \times \dots \times Q'_k}$.

A much more general statement, in which the polynomial phase ${n \mapsto e(P(n))}$ is replaced by a nilsequence, and in which one does not necessarily assume the exponential sum is small, is given in Theorem 8.6 of this paper of Ben Green and myself, but it involves far more notation to even state properly.

Proof: We induct on ${k}$. The case ${k=1}$ was established in Proposition 4, so we assume that ${k \geq 2}$ and that the claim has already been proven for ${k-1}$. To simplify notation we allow all implied constants to depend on ${k,d_1,\dots,d_k}$. We may assume that ${\delta \leq c}$ for some small ${c>0}$ depending only on ${k,d_1,\dots,d_k}$.

By a linear change of variables, we may assume that ${Q_i \subset [0,N_i] \cap {\bf Z}}$ for all ${i=1,\dots,k}$.

We write ${n' := (n_1,\dots,n_{k-1})}$. First suppose that ${N_k = O(\delta^{-O(1)})}$. Then by the pigeonhole principle we can find ${n_k \in Q_k}$ such that

$\displaystyle \frac{1}{N_1 \dots N_{k-1}} |\sum_{n' \in Q_1 \times \dots \times Q_{k-1}} e(P(n',n_k))| \geq \delta$

and the claim then follows from the induction hypothesis. Thus we may assume that ${N_k \geq \delta^{-C}}$ for some large ${C}$ depending only on ${k,d_1,\dots,d_k}$. Similarly we may assume that ${N_i \geq \delta^{-C}}$ for all ${i=1,\dots,k}$.

By the triangle inequality, we have

$\displaystyle \frac{1}{N_1 \dots N_k} \sum_{n_k \in Q_k} |\sum_{n' \in Q_1 \times \dots \times Q_{k-1}} e(P(n',n_k))| \geq \delta.$

The inner sum is ${O(N_k)}$, and the outer sum has ${O(N_1 \dots N_{k-1})}$ terms. Thus, for ${\gg \delta N_1 \dots N_{k-1}}$ choices of ${n' \in Q_1 \times \dots \times Q_{k-1}}$, one has

$\displaystyle \frac{1}{N_k} |\sum_{n_k \in Q_k} e(P(n',n_k))| \gg \delta. \ \ \ \ \ (7)$

We write

$\displaystyle P(n',n_k) = \sum_{i_k \leq d_k} P_{i_k}(n') n_k^{i_k}$

for some polynomials ${P_{i_k}: {\bf Z}^{k-1} \rightarrow {\bf R}/{\bf Z}}$ of degrees at most ${d_1,\dots,d_{k-1}}$ in the variables ${n_1,\dots,n_{k-1}}$. For each ${n'}$ obeying (7), we apply Corollary 5 to conclude that there exists a natural number ${q_{n'} \ll \delta^{-O(1)}}$ such that

$\displaystyle \| q_{n'} P_{i_k}(n') \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N_k^{i_k}$

for ${i_k=1,\dots,d_k}$ (the claim also holds for ${i_k=0}$ but we discard it as being trivial). By the pigeonhole principle, there thus exists a natural number ${q \ll \delta^{-O(1)}}$ such that

$\displaystyle \| q P_{i_k}(n') \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N_k^{i_k}$

for all ${i_k=1,\dots,d_k}$ and for ${\gg \delta^{O(1)} N_1 \dots N_{k-1}}$ choices of ${n' \in Q_1 \times \dots \times Q_{k-1}}$. If we write

$\displaystyle P_{i_k}(n') = \sum_{i_{k-1} \leq d_{k-1}} P_{i_{k-1},i_k}(n_1,\dots,n_{k-2}) n_{k-1}^{i_{k-1}},$

where each ${P_{i_{k-1},i_k}: {\bf Z}^{k-2} \rightarrow {\bf R}/{\bf Z}}$ is a polynomial of degrees at most ${d_1,\dots,d_{k-2}}$ in the variables ${n_1,\dots,n_{k-2}}$, then for ${\gg \delta^{O(1)} N_1 \dots N_{k-2}}$ choices of ${(n_1,\dots,n_{k-2}) \in Q_1 \times \dots \times Q_{k-2}}$, there are ${\gg \delta^{O(1)} N_{k-1}}$ choices of ${n_{k-1} \in Q_{k-1}}$ for which

$\displaystyle \| \sum_{i_{k-1} \leq d_{k-1}} q P_{i_{k-1},i_k}(n_1,\dots,n_{k-2}) n_{k-1}^{i_{k-1}} \|_{{\bf R}/{\bf Z}} \ll \delta^{-O(1)} / N_k^{i_k}.$

Applying Lemma 6 in the ${n_{k-1}}$ variable, together with the largeness hypotheses on the ${N_i}$ (and also the assumption that ${i_k \geq 1}$), we conclude (after enlarging ${q}$ as necessary, and pigeonholing to keep ${q}$ independent of ${n_1,\dots,n_{k-2}}$) that

$\displaystyle \| q P_{i_{k-1},i_k}(n_1,\dots,n_{k-2}) \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)}}{N_{k-1}^{i_{k-1}} N_k^{i_k}}$

for all ${i_{k-1}=0,\dots,d_{k-1}}$ (note that we now include the ${i_{k-1}=0}$ case, which is no longer trivial) and for ${\gg \delta^{O(1)} N_1 \dots N_{k-2}}$ choices of ${(n_1,\dots,n_{k-2}) \in Q_1 \times \dots \times Q_{k-2}}$. Iterating this, we eventually conclude (after enlarging ${q}$ as necessary) that

$\displaystyle \| q \alpha_{i_1,\dots,i_k} \|_{{\bf R}/{\bf Z}} \ll \frac{\delta^{-O(1)}}{N_1^{i_1} \dots N_k^{i_k}} \ \ \ \ \ (8)$

whenever ${i_j \in \{0,\dots,d_j\}}$ for ${j=1,\dots,k}$, with ${i_k}$ nonzero. Permuting the indices, and observing that the claim is trivial for ${(i_1,\dots,i_k) = (0,\dots,0)}$, we in fact obtain (8) for all ${(i_1,\dots,i_k) \in \{0,\dots,d_1\} \times \dots \times \{0,\dots,d_k\}}$, at which point the claim easily follows by taking ${Q'_j := \{ qn_j: n_j \leq \delta^C N_j\}}$ for each ${j=1,\dots,k}$. $\Box$

An inspection of the proof of the above result (or alternatively, combining the above result with many applications of Lemma 6) reveals the following general form of Proposition 4. This was posed as Exercise 17 in this previous blog post, but that exercise contained a slight misprint (it did not properly treat the possibility that some of the ${N_j}$ could be small) and was a bit trickier to prove than anticipated; in fact, the reason for this post is that I was asked to supply a more detailed solution for the exercise:

Proposition 8 (Multidimensional Weyl exponential sum estimate, inverse form, II) Let ${k \geq 1}$ be a natural number, and for each ${j=1,\dots,k}$, let ${I_j \subset [0,N_j]_{\bf Z}}$ be a discrete interval for some ${N_j \geq 1}$. Let

$\displaystyle P(n_1,\dots,n_k) = \sum_{i_1 \leq d_1, \dots, i_k \leq d_k} \alpha_{i_1,\dots,i_k} n_1^{i_1} \dots n_k^{i_k}$

be a polynomial in ${k}$ variables of multidegrees ${d_1,\dots,d_k \geq 0}$ for some ${\alpha_{i_1,\dots,i_k} \in {\bf R}/{\bf Z}}$. If

$\displaystyle \frac{1}{N_1 \dots N_k} |\sum_{n \in I_1 \times \dots \times I_k} e(P(n))| \geq \delta \ \ \ \ \ (9)$

for some ${\delta > 0}$, then either

$\displaystyle N_j \ll_{k,d_1,\dots,d_k} \delta^{-O_{k,d_1,\dots,d_k}(1)} \ \ \ \ \ (10)$

for some ${1 \leq j \leq k}$, or else there is a natural number ${q \ll_{k,d_1,\dots,d_k} \delta^{-O_{k,d_1,\dots,d_k}(1)}}$ such that

$\displaystyle \| q\alpha_{i_1,\dots,i_k} \|_{{\bf R}/{\bf Z}} \ll_{k,d_1,\dots,d_k} \delta^{-O_{k,d_1,\dots,d_k}(1)} N_1^{-i_1} \dots N_k^{-i_k} \ \ \ \ \ (11)$

whenever ${i_j \leq d_j}$ for ${j=1,\dots,k}$.

Again, the factor of ${N_1^{-i_1} \dots N_k^{-i_k}}$ is natural in this bound. In the ${k=1}$ case, the option (10) may be deleted since (11) trivially holds in this case, but this simplification is no longer available for ${k>1}$ since one needs (10) to hold for all ${j}$ (not just one ${j}$) to make (11) completely trivial. Indeed, the above proposition fails for ${k \geq 2}$ if one removes (10) completely, as can be seen for instance by inspecting the exponential sum ${\sum_{n_1 \in \{0,1\}} \sum_{n_2 \in [1,N] \cap {\bf Z}} e( \alpha n_1 n_2)}$, which has size comparable to ${N}$ regardless of how irrational ${\alpha}$ is.
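The failure of the proposition without (10) is easy to check numerically. The quick computation below (an illustration only, with a golden-ratio-type ${\alpha}$ standing in for a "very irrational" frequency) confirms that the normalised sum stays bounded below by roughly ${1/2}$, since the ${n_1=0}$ term alone contributes ${N}$:

```python
import cmath
import math

# Sanity check (illustration only): the normalised exponential sum
#   (1/(2N)) |sum_{n1 in {0,1}} sum_{n2=1..N} e(alpha * n1 * n2)|
# stays comparable to 1 no matter how irrational alpha is, because
# the n1 = 0 term alone contributes N.
def e(x):
    return cmath.exp(2j * math.pi * x)

alpha = (math.sqrt(5) - 1) / 2   # golden-ratio-type irrational
N = 10_000
S = sum(e(alpha * n1 * n2) for n1 in (0, 1) for n2 in range(1, N + 1))
assert abs(S) / (2 * N) > 0.45
```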

Here’s a cute identity I discovered by accident recently. Observe that

$\displaystyle \frac{d}{dx} (1+x^2)^{0/2} = 0$

$\displaystyle \frac{d^2}{dx^2} (1+x^2)^{1/2} = \frac{1}{(1+x^2)^{3/2}}$

$\displaystyle \frac{d^3}{dx^3} (1+x^2)^{2/2} = 0$

$\displaystyle \frac{d^4}{dx^4} (1+x^2)^{3/2} = \frac{9}{(1+x^2)^{5/2}}$

$\displaystyle \frac{d^5}{dx^5} (1+x^2)^{4/2} = 0$

$\displaystyle \frac{d^6}{dx^6} (1+x^2)^{5/2} = \frac{225}{(1+x^2)^{7/2}}$

and so one can conjecture that one has

$\displaystyle \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = 0$

when $k$ is even, and

$\displaystyle \frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2} = \frac{(1 \times 3 \times \dots \times k)^2}{(1+x^2)^{(k+2)/2}}$

when $k$ is odd. This is obvious in the even case since $(1+x^2)^{k/2}$ is a polynomial of degree $k$, but I struggled for a while with the odd case before finding a slick three-line proof. (I was first trying to prove the weaker statement that $\frac{d^{k+1}}{dx^{k+1}} (1+x^2)^{k/2}$ was non-negative, but for some strange reason I was only able to establish this by working out the derivative exactly, rather than by using more analytic methods, such as convexity arguments.) I thought other readers might like the challenge (and also I’d like to see some other proofs), so rather than post my own proof immediately, I’ll see if anyone would like to supply their own proofs or thoughts in the comments. Also I am curious to know if this identity is connected to any other existing piece of mathematics.
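For readers who would like to check the pattern before attempting a proof, here is a quick symbolic verification for small ${k}$ (just a sanity check with sympy, not a proof; the test point is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
for k in range(8):
    lhs = sp.diff((1 + x**2) ** sp.Rational(k, 2), x, k + 1)
    if k % 2 == 0:
        # (1+x^2)^{k/2} is a polynomial of degree k for even k,
        # so the (k+1)-st derivative vanishes identically.
        expected = sp.Integer(0)
    else:
        # conjectured value (1 x 3 x ... x k)^2 / (1+x^2)^{(k+2)/2}
        c = 1
        for j in range(1, k + 1, 2):
            c *= j
        expected = sp.Integer(c) ** 2 / (1 + x**2) ** sp.Rational(k + 2, 2)
    # compare numerically at a test point, to sidestep simplification quirks
    assert abs(float((lhs - expected).subs(x, sp.Rational(7, 10)))) < 1e-8
```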

I’ve just uploaded to the arXiv my paper “Cancellation for the multilinear Hilbert transform“, submitted to Collectanea Mathematica. This paper uses methods from additive combinatorics (and more specifically, the arithmetic regularity and counting lemmas from this paper of Ben Green and myself) to obtain a slight amount of progress towards the open problem of obtaining ${L^p}$ bounds for the trilinear and higher Hilbert transforms (as discussed in this previous blog post). For instance, the trilinear Hilbert transform

$\displaystyle H_3( f_1, f_2, f_3 )(x) := p.v. \int_{\bf R} f_1(x+t) f_2(x+2t) f_3(x+3t)\ \frac{dt}{t}$

is not known to be bounded from any ${L^{p_1}({\bf R}) \times L^{p_2}({\bf R}) \times L^{p_3}({\bf R})}$ to ${L^p({\bf R})}$, although it is conjectured to do so when ${1/p =1/p_1 +1/p_2+1/p_3}$ and ${1 < p_1,p_2,p_3,p < \infty}$. (For ${p}$ well below ${1}$, one can use additive combinatorics constructions to demonstrate unboundedness; see this paper of Demeter.) One can approach this problem by considering the truncated trilinear Hilbert transforms

$\displaystyle H_{3,r,R}( f_1, f_2, f_3 )(x) := \int_{r \leq |t| \leq R} f_1(x+t) f_2(x+2t) f_3(x+3t)\ \frac{dt}{t}$

for ${0 < r < R}$. It is not difficult to show that the boundedness of ${H_3}$ is equivalent to the boundedness of ${H_{3,r,R}}$ with bounds that are uniform in ${R}$ and ${r}$. On the other hand, from Minkowski’s inequality and Hölder’s inequality one can easily obtain the non-uniform bound of ${2 \log \frac{R}{r}}$ for ${H_{3,r,R}}$. The main result of this paper is a slight improvement of this trivial bound to ${o( \log \frac{R}{r})}$ as ${R/r \rightarrow \infty}$. Roughly speaking, the way this gain is established is as follows. First there are some standard time-frequency type reductions to reduce to the task of obtaining some non-trivial cancellation on a single “tree”. Using a “generalised von Neumann theorem”, we show that such cancellation will happen if (a discretised version of) one or more of the functions ${f_1,f_2,f_3}$ (or a dual function ${f_0}$ that it is convenient to test against) is small in the Gowers ${U^3}$ norm. However, the arithmetic regularity lemma alluded to earlier allows one to represent an arbitrary function ${f_i}$, up to a small error, as the sum of such a “Gowers uniform” function, plus a structured function (or more precisely, an irrational virtual nilsequence). This effectively reduces the problem to that of establishing some cancellation in a single tree in the case when all functions ${f_0,f_1,f_2,f_3}$ involved are irrational virtual nilsequences. At this point, the contribution of each component of the tree can be estimated using the “counting lemma” from my paper with Ben. The main term in the asymptotics is a certain integral over a nilmanifold, but because the kernel ${\frac{dt}{t}}$ in the trilinear Hilbert transform is odd, it turns out that this integral vanishes, giving the required cancellation.

The same argument works for higher order Hilbert transforms (and one can also replace the coefficients in these transforms with other rational constants). However, because the quantitative bounds in the arithmetic regularity and counting lemmas are so poor, it does not seem likely that one can use these methods to remove the logarithmic growth in ${R/r}$ entirely, and some additional ideas will be needed to resolve the full conjecture.

I’ve just uploaded to the arXiv my paper “Failure of the ${L^1}$ pointwise and maximal ergodic theorems for the free group“, submitted to Forum of Mathematics, Sigma. This paper concerns a variant of the pointwise ergodic theorem of Birkhoff, which asserts that if one has a measure-preserving shift map ${T: X \rightarrow X}$ on a probability space ${X = (X,\mu)}$, then for any ${f \in L^1(X)}$, the averages ${\frac{1}{N} \sum_{n=1}^N f \circ T^{-n}}$ converge pointwise almost everywhere. (In the important case when the shift map ${T}$ is ergodic, the pointwise limit is simply the mean ${\int_X f\ d\mu}$ of the original function ${f}$.)
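As a concrete numerical illustration of Birkhoff's theorem (my own example, not taken from the paper), consider the ergodic irrational rotation ${x \mapsto x + \alpha \hbox{ mod } 1}$; the Birkhoff averages of a test function converge to its mean:

```python
import math

# Birkhoff averages for the ergodic irrational rotation T: x -> x + alpha mod 1.
# For ergodic T the averages (1/N) sum_{n<N} f(T^n x) should converge to
# the mean of f, which for f(t) = cos^2(2 pi t) is 1/2.
alpha = math.sqrt(2) - 1
f = lambda t: math.cos(2 * math.pi * t) ** 2

x, total, N = 0.123, 0.0, 200_000
for _ in range(N):
    total += f(x)
    x = (x + alpha) % 1.0

assert abs(total / N - 0.5) < 1e-3
```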

The pointwise ergodic theorem can be extended to measure-preserving actions of other amenable groups, if one uses a suitably “tempered” Følner sequence of averages; see this paper of Lindenstrauss for more details. (I also wrote up some notes on that paper here, back in 2006 before I had started this blog.) But the arguments used to handle the amenable case break down completely for non-amenable groups, and in particular for the free non-abelian group ${F_2}$ on two generators.

Nevo and Stein studied this problem and obtained a number of pointwise ergodic theorems for ${F_2}$-actions ${(T_g)_{g \in F_2}}$ on probability spaces ${(X,\mu)}$. For instance, for the spherical averaging operators

$\displaystyle {\mathcal A}_n f := \frac{1}{4 \times 3^{n-1}} \sum_{g \in F_2: |g| = n} f \circ T_g^{-1}$

(where ${|g|}$ denotes the length of the reduced word that forms ${g}$), they showed that ${{\mathcal A}_{2n} f}$ converged pointwise almost everywhere provided that ${f}$ was in ${L^p(X)}$ for some ${p>1}$. (The need to restrict to spheres of even radius can be seen by considering the action of ${F_2}$ on the two-element set ${\{0,1\}}$ in which both generators of ${F_2}$ act by interchanging the elements, in which case ${{\mathcal A}_n}$ is determined by the parity of ${n}$.) This result was reproven with a different and simpler proof by Bufetov, who also managed to relax the condition ${f \in L^p(X)}$ to the weaker condition ${f \in L \log L(X)}$.
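The parity obstruction in this two-element example is easy to verify by brute force; here is a small sketch (my own illustration) enumerating reduced words in ${F_2}$ and noting that, since both generators act by the swap, every word of length ${n}$ acts by the swap iterated ${n}$ times:

```python
GENS = ['a', 'b', 'A', 'B']  # A = a^{-1}, B = b^{-1}
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduced_words(n):
    """All reduced words of length n in F_2 (no letter followed by its inverse)."""
    words = [[g] for g in GENS]
    for _ in range(n - 1):
        words = [w + [g] for w in words for g in GENS if g != INV[w[-1]]]
    return words

# Both generators act on {0,1} by swapping the two points, so every
# reduced word of length n acts as the swap iterated n times.
def act(word, x):
    for _ in word:
        x = 1 - x
    return x

f = {0: 1.0, 1: 0.0}  # indicator function of {0}
for n in range(1, 5):
    words = reduced_words(n)
    assert len(words) == 4 * 3 ** (n - 1)   # sphere of radius n in F_2
    avg = sum(f[act(w, 0)] for w in words) / len(words)
    # A_n f(0) is 1 for even n and 0 for odd n: no limit along all n.
    assert avg == (1.0 if n % 2 == 0 else 0.0)
```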

The question remained open as to whether the pointwise ergodic theorem for ${F_2}$-actions held if one only assumed that ${f}$ was in ${L^1(X)}$. Nevo and Stein were able to establish this for the Cesàro averages ${\frac{1}{N} \sum_{n=1}^N {\mathcal A}_n}$, but not for ${{\mathcal A}_n}$ itself. About six years ago, Assaf Naor and I tried our hand at this problem, and were able to show an associated maximal inequality on ${\ell^1(F_2)}$, but due to the non-amenability of ${F_2}$, this inequality did not transfer to ${L^1(X)}$ and did not have any direct impact on this question, despite a fair amount of effort on our part to attack it.

Inspired by some recent conversations with Lewis Bowen, I returned to this problem. This time around, I tried to construct a counterexample to the ${L^1}$ pointwise ergodic theorem – something Assaf and I had not seriously attempted to do (perhaps due to being a bit too enamoured of our ${\ell^1(F_2)}$ maximal inequality). I knew of an existing counterexample of Ornstein regarding a failure of an ${L^1}$ ergodic theorem for iterates ${P^n}$ of a self-adjoint Markov operator – in fact, I had written some notes on this example back in 2007. Upon revisiting my notes, I soon discovered that the Ornstein construction was adaptable to the ${F_2}$ setting, thus settling the problem in the negative:

Theorem 1 (Failure of ${L^1}$ pointwise ergodic theorem) There exists a measure-preserving ${F_2}$-action on a probability space ${X}$ and a non-negative function ${f \in L^1(X)}$ such that ${\sup_n {\mathcal A}_{2n} f(x) = +\infty}$ for almost every ${x}$.

To describe the proof of this theorem, let me first briefly sketch the main ideas of Ornstein’s construction, which gave an example of a self-adjoint Markov operator ${P}$ on a probability space ${X}$ and a non-negative ${f \in L^1(X)}$ such that ${\sup_n P^n f(x) = +\infty}$ for almost every ${x}$. By some standard manipulations, it suffices to show that for any given ${\alpha > 0}$ and ${\varepsilon>0}$, there exists a self-adjoint Markov operator ${P}$ on a probability space ${X}$ and a non-negative ${f \in L^1(X)}$ with ${\|f\|_{L^1(X)} \leq \alpha}$, such that ${\sup_n P^n f \geq 1-\varepsilon}$ on a set of measure at least ${1-\varepsilon}$. Actually, it will be convenient to replace the Markov chain ${(P^n f)_{n \geq 0}}$ with an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ – that is to say, a sequence of non-negative functions ${f_n}$ for both positive and negative ${n}$, such that ${f_{n+1} = P f_n}$ for all ${n \in {\bf Z}}$. The purpose of requiring the Markov chain to be ancient (that is, to extend infinitely far back in time) is to allow for the Markov chain to be shifted arbitrarily in time, which is key to Ornstein’s construction. (Technically, Ornstein’s original argument only uses functions that go back to a large negative time, rather than being infinitely ancient, but I will gloss over this point for sake of discussion, as it turns out that the ${F_2}$ version of the argument can be run using infinitely ancient chains.)

For any ${\alpha>0}$, let ${P(\alpha)}$ denote the claim that for any ${\varepsilon>0}$, there exists an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ with ${\|f_n\|_{L^1(X)} = \alpha}$ such that ${\sup_{n \in {\bf Z}} f_n \geq 1-\varepsilon}$ on a set of measure at least ${1-\varepsilon}$. Clearly ${P(1)}$ holds since we can just take ${f_n=1}$ for all ${n}$. Our objective is to show that ${P(\alpha)}$ holds for arbitrarily small ${\alpha}$. The heart of Ornstein’s argument is then the implication

$\displaystyle P(\alpha) \implies P( \alpha (1 - \frac{\alpha}{4}) ) \ \ \ \ \ (1)$

for any ${0 < \alpha \leq 1}$, which upon iteration quickly gives the desired claim.
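To see that iterating (1) indeed drives the mass to zero, note that the recursion ${\alpha \mapsto \alpha(1-\frac{\alpha}{4})}$ decays roughly like ${4/k}$ after ${k}$ steps; a quick numerical check (illustration only):

```python
# Iterate alpha -> alpha * (1 - alpha/4) starting from the base case
# alpha = 1 (corresponding to P(1)); the mass decays roughly like 4/k
# after k steps, so P(alpha) holds for arbitrarily small alpha.
alpha = 1.0
for k in range(1, 401):
    alpha *= 1 - alpha / 4
assert 0.008 < alpha < 0.012   # roughly 4/400 after 400 steps
```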

Let’s see informally how (1) works. By hypothesis, and ignoring epsilons, we can find an ancient Markov chain ${(f_n)_{n \in {\bf Z}}}$ on some probability space ${X}$ of total mass ${\|f_n\|_{L^1(X)} = \alpha}$, such that ${\sup_n f_n}$ attains the value of ${1}$ or greater almost everywhere. Assuming that the Markov process is irreducible, the ${f_n}$ will eventually converge as ${n \rightarrow \infty}$ to the constant value of ${\|f_n\|_{L^1(X)}}$, in particular its final state will essentially stay above ${\alpha}$ (up to small errors).

Now suppose we duplicate the Markov process by replacing ${X}$ with a double copy ${X \times \{1,2\}}$ (giving ${\{1,2\}}$ the uniform probability measure), and using the disjoint sum of the Markov operators on ${X \times \{1\}}$ and ${X \times \{2\}}$ as the propagator, so that there is no interaction between the two components of this new system. Then the functions ${f'_n(x,i) := f_n(x) 1_{i=1}}$ form an ancient Markov chain of mass at most ${\alpha/2}$ that lives solely in the first half ${X \times \{1\}}$ of this copy, and ${\sup_n f'_n}$ attains the value of ${1}$ or greater on almost all of the first half ${X \times \{1\}}$, but is zero on the second half. The final state of ${f'_n}$ will be to stay above ${\alpha}$ in the first half ${X \times \{1\}}$, but be zero on the second half.

Now we modify the above example by allowing an infinitesimal amount of interaction between the two halves ${X \times \{1\}}$, ${X \times \{2\}}$ of the system (I mentally think of ${X \times \{1\}}$ and ${X \times \{2\}}$ as two identical boxes that a particle can bounce around in, and now we wish to connect the boxes by a tiny tube). The precise way in which this interaction is inserted is not terribly important so long as the new Markov process is irreducible. Once one does so, then the ancient Markov chain ${(f'_n)_{n \in {\bf Z}}}$ in the previous example gets replaced by a slightly different ancient Markov chain ${(f''_n)_{n \in {\bf Z}}}$ which is more or less identical with ${f'_n}$ for negative times ${n}$, or for bounded positive times ${n}$, but for very large values of ${n}$ the final state is now constant across the entire state space ${X \times \{1,2\}}$, and will stay above ${\alpha/2}$ on this space.
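The "two boxes joined by a tiny tube" picture can be simulated directly. The toy self-adjoint Markov operator below is my own simplification (averaging within each box, with an ${\varepsilon}$-fraction of mass exchanged between the boxes at each step; the parameters are ad hoc, not Ornstein's actual construction): for times small compared with ${1/\varepsilon}$ the chain looks like ${f'_n}$, while for very large times it equilibrates to the constant ${\alpha/2}$ on both boxes.

```python
import numpy as np

# Toy version of the "two boxes joined by a tiny tube" picture.  Each box
# is averaged internally, and an eps-fraction of mass is exchanged between
# matching states of the two boxes at each step.
m, eps, alpha = 50, 1e-3, 0.3
J = np.full((m, m), 1.0 / m)                      # averaging within a box
Z = np.zeros((m, m))
P = (1 - eps) * np.block([[J, Z], [Z, J]]) \
    + eps * np.block([[Z, np.eye(m)], [np.eye(m), Z]])
assert np.allclose(P, P.T) and np.allclose(P.sum(axis=1), 1)  # self-adjoint Markov

f0 = np.concatenate([alpha * np.ones(m), np.zeros(m)])  # mass alpha/2, all in box 1

f_short = np.linalg.matrix_power(P, 10) @ f0       # time << 1/eps
f_long = np.linalg.matrix_power(P, 20_000) @ f0    # time >> 1/eps
# barely any leakage at short times...
assert f_short[:m].mean() > 0.9 * alpha and f_short[m:].mean() < 0.1 * alpha
# ...but full equilibration to the constant alpha/2 at large times
assert np.allclose(f_long, alpha / 2, atol=0.01)
```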

Finally, we consider an ancient Markov chain ${F_n}$ which is basically of the form

$\displaystyle F_n(x,i) \approx f''_n(x,i) + (1 - \frac{\alpha}{2}) f_{n-M}(x) 1_{i=2}$

for some large parameter ${M}$ and for all ${n \leq M}$ (the approximation becomes increasingly inaccurate for ${n}$ much larger than ${M}$, but never mind this for now). This is basically two copies of the original Markov process in separate, barely interacting state spaces ${X \times \{1\}, X \times \{2\}}$, but with the second copy delayed by a large time delay ${M}$, and also attenuated in amplitude by a factor of ${1-\frac{\alpha}{2}}$. The total mass of this process is now ${\frac{\alpha}{2} + \frac{\alpha}{2} (1 -\frac{\alpha}{2}) = \alpha (1 - \alpha/4)}$. Because of the ${f''_n}$ component of ${F_n}$, we see that ${\sup_n F_n}$ basically attains the value of ${1}$ or greater on the first half ${X \times \{1\}}$. On the second half ${X \times \{2\}}$, we work with times ${n}$ close to ${M}$. If ${M}$ is large enough, ${f''_n}$ would have averaged out to about ${\alpha/2}$ at such times, but the ${(1 - \frac{\alpha}{2}) f_{n-M}(x)}$ component can get as large as ${1-\alpha/2}$ here. Summing (and continuing to ignore various epsilon losses), we see that ${\sup_n F_n}$ can get as large as ${1}$ on almost all of the second half ${X \times \{2\}}$. This concludes the rough sketch of how one establishes the implication (1).

It was observed by Bufetov that the spherical averages ${{\mathcal A}_n}$ for a free group action can be lifted up to become powers ${P^n}$ of a Markov operator, basically by randomly assigning a “velocity vector” ${s \in \{a,b,a^{-1},b^{-1}\}}$ to one’s base point ${x}$ and then applying the Markov process that moves ${x}$ along that velocity vector (and then randomly changing the velocity vector at each time step, subject to the “reduced word” condition that the velocity never flips from ${s}$ to ${s^{-1}}$). Thus the spherical average problem has a Markov operator interpretation, which opens the door to adapting the Ornstein construction to the setting of ${F_2}$ systems. This turns out to be doable after a certain amount of technical artifice; the main thing is to work with ${F_2}$-measure preserving systems that admit ancient Markov chains that are initially supported in a very small region in the “interior” of the state space, so that one can couple such systems to each other “at the boundary” in the fashion needed to establish the analogue of (1) without disrupting the ancient dynamics of such chains. The initial such system (used to establish the base case ${P(1)}$) comes from basically considering the action of ${F_2}$ on a (suitably renormalised) “infinitely large ball” in the Cayley graph, after suitably gluing together the boundary of this ball to complete the action. The ancient Markov chain associated to this system starts at the centre of this infinitely large ball at infinite negative time ${n=-\infty}$, and only reaches the boundary of this ball at the time ${n=0}$.

The lonely runner conjecture is the following open problem:

Conjecture 1 Suppose one has ${n \geq 1}$ runners on the unit circle ${{\bf R}/{\bf Z}}$, all starting at the origin and moving at different speeds. Then for each runner, there is at least one time ${t}$ for which that runner is “lonely” in the sense that it is separated by a distance at least ${1/n}$ from all other runners.

One can normalise the speed of the lonely runner to be zero, at which point the conjecture can be reformulated (after replacing ${n}$ by ${n+1}$) as follows:

Conjecture 2 Let ${v_1,\dots,v_n}$ be non-zero real numbers for some ${n \geq 1}$. Then there exists a real number ${t}$ such that the numbers ${tv_1,\dots,tv_n}$ are all a distance at least ${\frac{1}{n+1}}$ from the integers, thus ${\|tv_1\|_{{\bf R}/{\bf Z}},\dots,\|tv_n\|_{{\bf R}/{\bf Z}} \geq \frac{1}{n+1}}$ where ${\|x\|_{{\bf R}/{\bf Z}}}$ denotes the distance of ${x}$ to the nearest integer.

This conjecture has been proven for ${n \leq 7}$, but remains open for larger ${n}$. The bound ${\frac{1}{n+1}}$ is optimal, as can be seen by looking at the case ${v_i=i}$ and applying the Dirichlet approximation theorem. Note that for each non-zero ${v}$, the set ${\{ t \in {\bf R}: \|vt\|_{{\bf R}/{\bf Z}} \leq r \}}$ has (Banach) density ${2r}$ for any ${0 < r < 1/2}$, and from this and the union bound we can easily find ${t \in {\bf R}}$ for which

$\displaystyle \|tv_1\|_{{\bf R}/{\bf Z}},\dots,\|tv_n\|_{{\bf R}/{\bf Z}} \geq \frac{1}{2n}-\varepsilon$

for any ${\varepsilon>0}$, but it has proven to be quite challenging to remove the factor of ${2}$ to increase ${\frac{1}{2n}}$ to ${\frac{1}{n+1}}$. (As far as I know, even improving ${\frac{1}{2n}}$ to ${\frac{1+c}{2n}}$ for some absolute constant ${c>0}$ and sufficiently large ${n}$ remains open.)
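One can also check numerically that the bound ${\frac{1}{n+1}}$ is attained (and, by the Dirichlet approximation argument, not exceeded) for the extremal speeds ${v_i = i}$; a brute-force grid search (illustration only, small ${n}$):

```python
def max_min_gap(speeds, grid=20_000):
    """Approximate max over t in [0,1) of min_i ||t v_i||_{R/Z} by grid search."""
    best = 0.0
    for k in range(grid):
        t = k / grid
        gap = min(abs(t * v - round(t * v)) for v in speeds)  # dist to nearest int
        best = max(best, gap)
    return best

# For the extremal speeds v_i = i the optimum is exactly 1/(n+1),
# attained at t = 1/(n+1) (and Dirichlet approximation forbids more).
for n in range(2, 7):
    assert abs(max_min_gap(range(1, n + 1)) - 1 / (n + 1)) < 1e-3
```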

The speeds ${v_1,\dots,v_n}$ in the above conjecture are arbitrary non-zero reals, but it has been known for some time that one can reduce without loss of generality to the case when the ${v_1,\dots,v_n}$ are rationals, or equivalently (by scaling) to the case where they are integers; see e.g. Section 4 of this paper of Bohman, Holzman, and Kleitman.

In this post I would like to remark on a slight refinement of this reduction, in which the speeds ${v_1,\dots,v_n}$ are integers of bounded size, where the bound depends on ${n}$. More precisely:

Proposition 3 In order to prove the lonely runner conjecture, it suffices to do so under the additional assumption that the ${v_1,\dots,v_n}$ are integers of size at most ${n^{Cn^2}}$, where ${C}$ is an (explicitly computable) absolute constant. (More precisely: if this restricted version of the lonely runner conjecture is true for all ${n \leq n_0}$, then the original version of the conjecture is also true for all ${n \leq n_0}$.)

In principle, this proposition allows one to verify the lonely runner conjecture for a given ${n}$ in finite time; however the number of cases to check with this proposition grows faster than exponentially in ${n}$, and so this is unfortunately not a feasible approach to verifying the lonely runner conjecture for more values of ${n}$ than currently known.

One of the key tools needed to prove this proposition is the following additive combinatorics result. Recall that a generalised arithmetic progression (or ${GAP}$) in the reals ${{\bf R}}$ is a set of the form

$\displaystyle P = \{ n_1 v_1 + \dots + n_d v_d: n_1,\dots,n_d \in {\bf Z}; |n_1| \leq N_1, \dots, |n_d| \leq N_d \}$

for some ${v_1,\dots,v_d \in {\bf R}}$ and ${N_1,\dots,N_d > 0}$; the quantity ${d}$ is called the rank of the progression. If ${t>0}$, the progression ${P}$ is said to be ${t}$-proper if the sums ${n_1 v_1 + \dots + n_d v_d}$ with ${|n_i| \leq t N_i}$ for ${i=1,\dots,d}$ are all distinct. We have

Lemma 4 (Progressions lie inside proper progressions) Let ${P}$ be a GAP of rank ${d}$ in the reals, and let ${t \geq 1}$. Then ${P}$ is contained in a ${t}$-proper GAP ${Q}$ of rank at most ${d}$, with

$\displaystyle |Q| \leq (2t)^d d^{6d^2} \prod_{i=1}^d (2N_i+1).$

Proof: See Theorem 2.1 of this paper of Bilu. (Very similar results can also be found in Theorem 3.40 of my book with Van Vu, or Theorem 1.10 of this paper of mine with Van Vu.) $\Box$
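To make the ${t}$-properness condition concrete, here is a small brute-force checker (my own illustration; the integer generators below are chosen ad hoc, and ${t}$ is taken to be a positive integer for simplicity):

```python
from itertools import product

def is_t_proper(gens, bounds, t=1):
    """Check whether the GAP {n_1 v_1 + ... + n_d v_d : |n_i| <= N_i} is
    t-proper, i.e. all sums with |n_i| <= t*N_i are pairwise distinct."""
    ranges = [range(-t * N, t * N + 1) for N in bounds]
    sums = [sum(n * v for n, v in zip(ns, gens)) for ns in product(*ranges)]
    return len(sums) == len(set(sums))

assert is_t_proper([1, 10], [3, 3], t=1)      # |n_1 - n_1'| <= 6 < 10: distinct
assert not is_t_proper([1, 10], [3, 3], t=2)  # (n_1,n_2) = (6,0) and (-4,1) collide
assert not is_t_proper([1, 5], [3, 3], t=1)   # 3*1 + 0*5 == -2*1 + 1*5
```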

Now let ${n \geq 1}$, and assume inductively that the lonely runner conjecture has been proven for all smaller values of ${n}$, as well as for the current value of ${n}$ in the case that ${v_1,\dots,v_n}$ are integers of size at most ${n^{Cn^2}}$ for some sufficiently large ${C}$. We will show that the lonely runner conjecture holds in general for this choice of ${n}$.

Let ${v_1,\dots,v_n}$ be non-zero real numbers. Let ${C_0}$ be a large absolute constant to be chosen later. From the above lemma applied to the GAP ${\{ n_1 v_1 + \dots + n_n v_n: n_1,\dots,n_n \in \{-1,0,1\}\}}$, one can find an ${n^{C_0n}}$-proper GAP ${Q}$ of rank at most ${n}$ containing ${\{v_1,\dots,v_n\}}$ such that

$\displaystyle |Q| \leq (6n^{C_0 n})^n n^{6n^2};$

in particular ${|Q| \leq n^{Cn^2}}$ if ${C}$ is large enough depending on ${C_0}$.

We write

$\displaystyle Q = \{ n_1 w_1 + \dots + n_d w_d: n_1,\dots,n_d \in {\bf Z}; |n_1| \leq N_1,\dots,|n_d| \leq N_d \}$

for some ${d \leq n}$, ${w_1,\dots,w_d}$, and ${N_1,\dots,N_d \geq 0}$. We thus have ${v_i = \phi(a_i)}$ for ${i=1,\dots,n}$, where ${\phi: {\bf R}^d \rightarrow {\bf R}}$ is the linear map ${\phi(n_1,\dots,n_d) := n_1 w_1 + \dots + n_d w_d}$ and ${a_1,\dots,a_n \in {\bf Z}^d}$ are non-zero and lie in the box ${\{ (n_1,\dots,n_d) \in {\bf R}^d: |n_1| \leq N_1,\dots,|n_d| \leq N_d \}}$.

We now need an elementary lemma that allows us to create a “collision” between two of the ${a_1,\dots,a_n}$ via a linear projection, without making any of the ${a_i}$ collide with the origin:

Lemma 5 Let ${a_1,\dots,a_n \in {\bf R}^d}$ be non-zero vectors that are not all collinear with the origin. Then, after replacing one or more of the ${a_i}$ with their negatives ${-a_i}$ if necessary, there exists a pair ${a_i,a_j}$ such that ${a_i-a_j \neq 0}$, and such that none of the ${a_1,\dots,a_n}$ is a scalar multiple of ${a_i-a_j}$.

Proof: We may assume that ${d \geq 2}$, since the ${d \leq 1}$ case is vacuous. Applying a generic linear projection to ${{\bf R}^2}$ (which does not affect collinearity, or the property that a given ${a_k}$ is a scalar multiple of ${a_i-a_j}$), we may then reduce to the case ${d=2}$.

By a rotation and relabeling, we may assume that ${a_1}$ lies on the negative ${x}$-axis; by flipping signs as necessary we may then assume that all of the ${a_2,\dots,a_n}$ lie in the closed right half-plane. As the ${a_i}$ are not all collinear with the origin, one of the ${a_i}$ lies off of the ${x}$-axis; by relabeling, we may assume that ${a_2}$ lies off of the ${x}$-axis and makes a minimal angle with the ${x}$-axis. Then the angle of ${a_2-a_1}$ with the ${x}$-axis is non-zero but smaller than any non-zero angle that any of the ${a_i}$ make with this axis, and so none of the ${a_i}$ are a scalar multiple of ${a_2-a_1}$, and the claim follows. $\Box$

We now return to the proof of the proposition. If the ${a_1,\dots,a_n}$ are all collinear with the origin, then ${\phi(a_1),\dots,\phi(a_n)}$ lie in a one-dimensional arithmetic progression ${\{ mv: |m| \leq |Q| \}}$, and then by rescaling we may take the ${v_1,\dots,v_n}$ to be integers of magnitude at most ${|Q| \leq n^{Cn^2}}$, at which point we are done by hypothesis. Thus, we may assume that the ${a_1,\dots,a_n}$ are not all collinear with the origin, and so by the above lemma and relabeling we may assume that ${a_n-a_1}$ is non-zero, and that none of the ${a_i}$ are scalar multiples of ${a_n-a_1}$.

We write

$\displaystyle a_n-a_1 = (c_1,\dots,c_d) \ \ \ \ \ (1)$

with ${|c_i| \leq 2 N_i}$ for ${i=1,\dots,d}$; by relabeling we may assume without loss of generality that ${c_d}$ is non-zero, and furthermore that

$\displaystyle \frac{|c_i|}{N_i} \leq \frac{|c_d|}{N_d}$

for ${i=1,\dots,d}$. We can also factor

$\displaystyle (c_1,\dots,c_d) = q (c'_1,\dots,c'_d) \ \ \ \ \ (2)$

where ${q}$ is a natural number and ${c'_1,\dots,c'_d}$ have no common factor.

We now define a variant ${\tilde \phi: {\bf R}^d \rightarrow {\bf R}}$ of ${\phi: {\bf R}^d \rightarrow {\bf R}}$ by the map

$\displaystyle \tilde \phi(n_1,\dots,n_d) := n_1 \tilde w_1 + \dots + n_{d-1} \tilde w_{d-1} - \frac{n_d}{c_d} (c_1 \tilde w_1 + \dots + c_{d-1} \tilde w_{d-1}),$

where the ${\tilde w_1,\dots,\tilde w_{d-1}}$ are real numbers that are linearly independent over ${{\bf Q}}$, whose precise value will not be of importance in our argument. This is a linear map with the property that ${\tilde \phi(a_n-a_1)=0}$, so that ${\tilde \phi(a_1),\dots,\tilde \phi(a_n)}$ consists of at most ${n-1}$ distinct real numbers, which are non-zero since none of the ${a_i}$ are scalar multiples of ${a_n-a_1}$, and the ${\tilde w_i}$ are linearly independent over ${{\bf Q}}$. As we are assuming inductively that the lonely runner conjecture holds for ${n-1}$, we conclude (after deleting duplicates) that there exists at least one real number ${\tilde t}$ such that

$\displaystyle \| \tilde t \tilde \phi(a_1) \|_{{\bf R}/{\bf Z}}, \dots, \| \tilde t \tilde \phi(a_n) \|_{{\bf R}/{\bf Z}} \geq \frac{1}{n}.$
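Before continuing, one can verify symbolically that ${\tilde \phi}$ indeed annihilates ${a_n - a_1}$, which is the key property of the construction (a sympy sketch, with a generic ${d=4}$ and the ${\tilde w_i}$ left as free symbols):

```python
import sympy as sp

# Symbolic check that \tilde\phi annihilates a_n - a_1 = (c_1,...,c_d),
# for a generic d = 4 (c_4 plays the role of the non-zero c_d).
d = 4
c = sp.symbols('c1:5')   # c_1,...,c_4
w = sp.symbols('w1:4')   # \tilde w_1,...,\tilde w_{d-1}

def phi_tilde(n):
    return (sum(n[i] * w[i] for i in range(d - 1))
            - n[d - 1] / c[d - 1] * sum(c[i] * w[i] for i in range(d - 1)))

assert sp.simplify(phi_tilde(c)) == 0
```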

We would like to “approximate” ${\phi}$ by ${\tilde \phi}$ to then conclude that there is at least one real number ${t}$ such that

$\displaystyle \| t \phi(a_1) \|_{{\bf R}/{\bf Z}}, \dots, \| t \phi(a_n) \|_{{\bf R}/{\bf Z}} \geq \frac{1}{n+1}.$

It turns out that we can do this by a Fourier-analytic argument taking advantage of the ${n^{C_0 n}}$-proper nature of ${Q}$. Firstly, we see from the Dirichlet approximation theorem that one has

$\displaystyle \| \tilde t \tilde \phi(a_1) \|_{{\bf R}/{\bf Z}}, \dots, \| \tilde t \tilde \phi(a_n) \|_{{\bf R}/{\bf Z}} \leq \frac{1}{10 n^2}$

for a set ${\tilde t}$ of reals of (Banach) density ${\gg n^{-O(n)}}$. Thus, by the triangle inequality, we have

$\displaystyle \| \tilde t \tilde \phi(a_1) \|_{{\bf R}/{\bf Z}}, \dots, \| \tilde t \tilde \phi(a_n) \|_{{\bf R}/{\bf Z}} \geq \frac{1}{n} - \frac{1}{10n^2}$

for a set ${\tilde t}$ of reals of density ${\gg n^{-O(n)}}$.

Applying a smooth Fourier multiplier of Littlewood-Paley type, one can find a trigonometric polynomial

$\displaystyle \eta(x) = \sum_{m: |m| \leq n^{C_0 n/10}} b_m e^{2\pi i mx}$

which takes values in ${[0,1]}$, is ${\gg 1}$ for ${\|x\|_{{\bf R}/{\bf Z}} \geq \frac{1}{n} - \frac{1}{10n^2}}$, and is no larger than ${O( n^{-100 C_0n} )}$ for ${\|x\|_{{\bf R}/{\bf Z}} \leq \frac{1}{n+1}}$. We then have

$\displaystyle \mathop{\bf E}_t \prod_{j=1}^n \eta( t \tilde \phi(a_j) ) \gg n^{-O(n)}$

where ${\mathop{\bf E}_t f(t)}$ denotes the mean value of a quasiperiodic function ${f}$ on the reals ${{\bf R}}$. We expand the left-hand side out as

$\displaystyle \sum_{m_1,\dots,m_n: m_1 \tilde \phi(a_1) + \dots + m_n \tilde \phi(a_n) = 0} b_{m_1} \dots b_{m_n}.$

From the genericity of ${\tilde w_1,\dots,\tilde w_{d-1}}$, we see that the constraint

$\displaystyle m_1 \tilde \phi(a_1) + \dots + m_n \tilde \phi(a_n) = 0$

occurs if and only if ${m_1 a_1 + \dots + m_n a_n}$ is a scalar multiple of ${a_n-a_1}$, or equivalently (by (1), (2)) an integer multiple of ${(c'_1,\dots,c'_d)}$. Thus

$\displaystyle \sum_{m_1,\dots,m_n: m_1 a_1 + \dots + m_n a_n \in {\bf Z} (c'_1,\dots,c'_d)} b_{m_1} \dots b_{m_n} \gg n^{-O(n)}. \ \ \ \ \ (3)$

Next, we consider the average

$\displaystyle \mathop{\bf E}_t \varphi( t \xi ) \prod_{j=1}^n \eta( t v_j ) \ \ \ \ \ (4)$

where

$\displaystyle \xi := c'_1 w_1 + \dots + c'_d w_d \ \ \ \ \ (5)$

and ${\varphi}$ is the Dirichlet kernel

$\displaystyle \varphi(x) := \sum_{m: |m| \leq n^{C_0 n/2}} e^{2\pi i mx}.$

By Fourier expansion and writing ${v_j = \phi(a_j)}$, we may write (4) as

$\displaystyle \sum_{m,m_1,\dots,m_n: |m| \leq n^{C_0n/2}; m_1 \phi(a_1) + \dots + m_n \phi(a_n) = m \xi} b_{m_1} \dots b_{m_n}.$

The support of the ${b_{m_i}}$ implies that ${|m_i| \leq n^{C_0n/10}}$. Because of the ${n^{C_0 n}}$-properness of ${Q}$, we see (for ${n}$ large enough) that the equation

$\displaystyle m_1 \phi(a_1) + \dots + m_n \phi(a_n) = m \xi \ \ \ \ \ (6)$

implies that

$\displaystyle m_1 a_1 + \dots + m_n a_n \in {\bf Z} (c'_1,\dots,c'_d) \ \ \ \ \ (7)$

and conversely that (7) implies that (6) holds for some ${m}$ with ${|m| \leq n^{C_0 n/2}}$. From (3) we thus have

$\displaystyle \mathop{\bf E}_t \varphi( t \xi ) \prod_{j=1}^n \eta( t v_j ) \gg n^{-O(n)}.$

In particular, there exists a ${t}$ such that

$\displaystyle \varphi( t \xi ) \prod_{j=1}^n \eta( t v_j ) \gg n^{-O(n)}.$

Since ${\varphi}$ is bounded in magnitude by ${n^{C_0n/2}}$, and ${\eta}$ is bounded by ${1}$, we thus have

$\displaystyle \eta(t v_j) \gg n^{-C_0 n/2 - O(n)}$

for each ${1 \leq j \leq n}$, which by the size properties of ${\eta}$ implies that ${\|tv_j\|_{{\bf R}/{\bf Z}} \geq \frac{1}{n+1}}$ for all ${1 \leq j \leq n}$, giving the lonely runner conjecture for ${n}$.
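As an aside, the conjecture is easy to test numerically for any specific small set of speeds. The following Python sketch (purely illustrative, and not part of the argument above) brute-forces the speeds ${1,2,3}$, for which the bound ${\frac{1}{n+1} = \frac{1}{4}}$ is known to be attained; the denominator of the search grid is an arbitrary choice:

```python
from fractions import Fraction

def dist_to_int(x: Fraction) -> Fraction:
    """The distance ||x||_{R/Z} from x to the nearest integer."""
    frac = x - int(x)  # fractional part, for x >= 0
    return min(frac, 1 - frac)

def max_loneliness(speeds, denom=840):
    """Maximise min_j ||t v_j|| over t in (1/denom) Z intersected with (0,1)."""
    return max(
        min(dist_to_int(Fraction(k, denom) * v) for v in speeds)
        for k in range(1, denom)
    )

# For the n = 3 speeds 1, 2, 3 the optimum is exactly 1/(n+1) = 1/4,
# attained at t = 1/4.
print(max_loneliness([1, 2, 3]))  # -> 1/4
```

Using exact rational arithmetic avoids any floating-point ambiguity in the comparison with ${\frac{1}{n+1}}$.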

The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if ${U: H \rightarrow H}$ is a unitary operator on a Hilbert space ${H}$, and ${v \in H}$ is a vector in that Hilbert space, then one has

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N U^n v = \pi_{H^U} v$

in the strong topology, where ${H^U := \{ w \in H: Uw = w \}}$ is the ${U}$-invariant subspace of ${H}$, and ${\pi_{H^U}}$ is the orthogonal projection to ${H^U}$. (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if ${G}$ is a countable amenable group acting on a Hilbert space ${H}$ by unitary transformations ${T^g: H \rightarrow H}$ for ${g \in G}$, and ${v \in H}$ is a vector in that Hilbert space, then one has

$\displaystyle \lim_{N \rightarrow \infty} \mathop{\bf E}_{g \in \Phi_N} T^g v = \pi_{H^G} v \ \ \ \ \ (1)$

for any Følner sequence ${\Phi_N}$ of ${G}$, where ${H^G := \{ w \in H: T^g w = w \hbox{ for all }g \in G \}}$ is the ${G}$-invariant subspace, and ${\mathop{\bf E}_{a \in A} f(a) := \frac{1}{|A|} \sum_{a \in A} f(a)}$ is the average of ${f}$ on ${A}$. Thus one can interpret ${\pi_{H^G} v}$ as a certain average of elements of the orbit ${Gv := \{ T^g v: g \in G \}}$ of ${v}$.

In a previous blog post, I noted a variant of this ergodic theorem (due to Alaoglu and Birkhoff) that holds even when the group ${G}$ is not amenable (or not discrete), using a more abstract notion of averaging:

Theorem 1 (Abstract ergodic theorem) Let ${G}$ be an arbitrary group acting unitarily on a Hilbert space ${H}$, and let ${v}$ be a vector in ${H}$. Then ${\pi_{H^G} v}$ is the element in the closed convex hull of ${Gv := \{ T^g v: g \in G \}}$ of minimal norm, and is also the unique element of ${H^G}$ in this closed convex hull.

I recently stumbled upon a different way to think about this theorem, in the additive case ${G = (G,+)}$ when ${G}$ is abelian, which has a closer resemblance to the classical mean ergodic theorem. Given an arbitrary additive group ${G = (G,+)}$ (not necessarily discrete, or countable), let ${{\mathcal F}}$ denote the collection of finite non-empty multisets in ${G}$ – that is to say, unordered collections ${\{a_1,\dots,a_n\}}$ of elements ${a_1,\dots,a_n}$ of ${G}$, not necessarily distinct, for some positive integer ${n}$. Given two multisets ${A = \{a_1,\dots,a_n\}}$, ${B = \{b_1,\dots,b_m\}}$ in ${{\mathcal F}}$, we can form the sum set ${A + B := \{ a_i + b_j: 1 \leq i \leq n, 1 \leq j \leq m \}}$. Note that the sum set ${A+B}$ can contain multiplicity even when ${A, B}$ do not; for instance, ${\{ 1,2\} + \{1,2\} = \{2,3,3,4\}}$. Given a multiset ${A = \{a_1,\dots,a_n\}}$ in ${{\mathcal F}}$, and a function ${f: G \rightarrow H}$ from ${G}$ to a vector space ${H}$, we define the average ${\mathop{\bf E}_{a \in A} f(a)}$ as

$\displaystyle \mathop{\bf E}_{a \in A} f(a) = \frac{1}{n} \sum_{j=1}^n f(a_j).$

Note that the multiplicity function of the set ${A}$ affects the average; for instance, we have ${\mathop{\bf E}_{a \in \{1,2\}} a = \frac{3}{2}}$, but ${\mathop{\bf E}_{a \in \{1,2,2\}} a = \frac{5}{3}}$.
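These multiset operations are concrete enough to compute with directly; the following Python sketch, representing multisets by `collections.Counter`, reproduces the examples above:

```python
from collections import Counter
from fractions import Fraction

def sumset(A: Counter, B: Counter) -> Counter:
    """The multiset sum set A + B = {a + b}, counted with multiplicity."""
    C = Counter()
    for a, ma in A.items():
        for b, mb in B.items():
            C[a + b] += ma * mb
    return C

def average(A: Counter) -> Fraction:
    """E_{a in A} a, weighted by the multiplicity function of A."""
    n = sum(A.values())
    return Fraction(sum(a * m for a, m in A.items()), n)

A = Counter([1, 2])
print(sorted(sumset(A, A).elements()))   # -> [2, 3, 3, 4]
print(average(Counter([1, 2])))          # -> 3/2
print(average(Counter([1, 2, 2])))       # -> 5/3
```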

We can give ${{\mathcal F}}$ the structure of a directed set as follows: given two multisets ${A,B \in {\mathcal F}}$, we write ${A \geq B}$ if we have ${A = B+C}$ for some ${C \in {\mathcal F}}$. Thus for instance we have ${\{ 1, 2, 2, 3\} \geq \{1,2\}}$. It is easy to verify that this relation is transitive and reflexive, and is directed because any two elements ${A,B}$ of ${{\mathcal F}}$ have a common upper bound, namely ${A+B}$. (This is where we need ${G}$ to be abelian.) The notion of convergence along a net now allows us to define the notion of convergence along ${{\mathcal F}}$; given a family ${x_A}$ of points in a topological space ${X}$ indexed by elements ${A}$ of ${{\mathcal F}}$, and a point ${x}$ in ${X}$, we say that ${x_A}$ converges to ${x}$ along ${{\mathcal F}}$ if, for every open neighbourhood ${U}$ of ${x}$ in ${X}$, one has ${x_A \in U}$ for sufficiently large ${A}$, that is to say there exists ${B \in {\mathcal F}}$ such that ${x_A \in U}$ for all ${A \geq B}$. If the topological space ${X}$ is Hausdorff, then the limit ${x}$ is unique (if it exists), and we then write

$\displaystyle x = \lim_{A \rightarrow G} x_A.$

When ${x_A}$ takes values in the reals, one can also define the limit superior or limit inferior along such nets in the obvious fashion.

We can then give an alternate formulation of the abstract ergodic theorem in the abelian case:

Theorem 2 (Abelian abstract ergodic theorem) Let ${G = (G,+)}$ be an arbitrary additive group acting unitarily on a Hilbert space ${H}$, and let ${v}$ be a vector in ${H}$. Then we have

$\displaystyle \pi_{H^G} v = \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v$

in the strong topology of ${H}$.

Proof: Suppose that ${A \geq B}$, so that ${A=B+C}$ for some ${C \in {\mathcal F}}$, then

$\displaystyle \mathop{\bf E}_{a \in A} T^a v = \mathop{\bf E}_{c \in C} T^c ( \mathop{\bf E}_{b \in B} T^b v )$

so by unitarity and the triangle inequality we have

$\displaystyle \| \mathop{\bf E}_{a \in A} T^a v \|_H \leq \| \mathop{\bf E}_{b \in B} T^b v \|_H,$

thus ${\| \mathop{\bf E}_{a \in A} T^a v \|_H^2}$ is monotone non-increasing in ${A}$. Since this quantity is bounded between ${0}$ and ${\|v\|_H^2}$, we conclude that the limit ${\lim_{A \rightarrow G} \| \mathop{\bf E}_{a \in A} T^a v \|_H^2}$ exists. Thus, for any ${\varepsilon > 0}$, we have for sufficiently large ${A}$ that

$\displaystyle \| \mathop{\bf E}_{b \in B} T^b v \|_H^2 \geq \| \mathop{\bf E}_{a \in A} T^a v \|_H^2 - \varepsilon$

for all ${B \geq A}$. In particular, for any ${g \in G}$, we have

$\displaystyle \| \mathop{\bf E}_{b \in A + \{0,g\}} T^b v \|_H^2 \geq \| \mathop{\bf E}_{a \in A} T^a v \|_H^2 - \varepsilon.$

We can write

$\displaystyle \mathop{\bf E}_{b \in A + \{0,g\}} T^b v = \frac{1}{2} \mathop{\bf E}_{a \in A} T^a v + \frac{1}{2} T^g \mathop{\bf E}_{a \in A} T^a v$

and so from the parallelogram law and unitarity we have

$\displaystyle \| \mathop{\bf E}_{a \in A} T^a v - T^g \mathop{\bf E}_{a \in A} T^a v \|_H^2 \leq 4 \varepsilon$

for all ${g \in G}$, and hence by the triangle inequality (averaging ${g}$ over a finite multiset ${C}$)

$\displaystyle \| \mathop{\bf E}_{a \in A} T^a v - \mathop{\bf E}_{b \in A+C} T^b v \|_H^2 \leq 4 \varepsilon$

for any ${C \in {\mathcal F}}$. This shows that ${\mathop{\bf E}_{a \in A} T^a v}$ is a Cauchy net in ${H}$ (in the strong topology), and hence (by the completeness of ${H}$) tends to a limit. Shifting ${A}$ by a group element ${g}$, we have

$\displaystyle \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v = \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A + \{g\}} T^a v = T^g \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v$

and hence ${\lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v}$ is invariant under shifts, and thus lies in ${H^G}$. On the other hand, for any ${w \in H^G}$ and ${A \in {\mathcal F}}$, we have

$\displaystyle \langle \mathop{\bf E}_{a \in A} T^a v, w \rangle_H = \mathop{\bf E}_{a \in A} \langle v, T^{-a} w \rangle_H = \langle v, w \rangle_H$

and thus on taking strong limits

$\displaystyle \langle \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v, w \rangle_H = \langle v, w \rangle_H$

and so ${v - \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v}$ is orthogonal to ${H^G}$. Combining these two facts we see that ${\lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a v}$ is equal to ${\pi_{H^G} v}$ as claimed. $\Box$
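Theorem 2 can be illustrated in a finite-dimensional example: take ${G = {\bf Z}}$ acting on ${H = {\bf R}^3}$ by a rotation in the first two coordinates (fixing the third), so that ${H^G}$ is the third coordinate axis. If ${A}$ is the ${k}$-fold sum set ${\{0,1\} + \dots + \{0,1\}}$, then ${\mathop{\bf E}_{a \in A} T^a v = ((I+T)/2)^k v}$, which converges to ${\pi_{H^G} v}$ as ${k \rightarrow \infty}$. A minimal Python sketch of this computation (the initial vector is an arbitrary choice):

```python
import math

# T rotates the (x, y)-plane by 1 radian and fixes the z-axis, so the
# invariant subspace H^G is the z-axis and pi_{H^G} v = (0, 0, v_z).
c, s = math.cos(1.0), math.sin(1.0)

def T(v):
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

def half_average(v):
    """One step A -> A + {0, 1}: v -> (v + T v) / 2."""
    w = T(v)
    return tuple((a + b) / 2 for a, b in zip(v, w))

v = (1.0, -2.0, 0.5)
for _ in range(200):  # average over the 200-fold sum set of {0, 1}
    v = half_average(v)

# v is now within 1e-10 of the projection (0, 0, 0.5)
print(v)
```

The rotating component is contracted by a factor ${|\cos(1/2)| < 1}$ at each step, while the fixed component is untouched, matching the conclusion of the theorem.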

To relate this result to the classical ergodic theorem, we observe

Lemma 3 Let ${G}$ be a countable additive group, with a Følner sequence ${\Phi_n}$, and let ${f_g}$ be a bounded family in a normed vector space indexed by ${G}$. If ${\lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} f_a}$ exists, then ${\lim_{n \rightarrow \infty} \mathop{\bf E}_{a \in \Phi_n} f_a}$ exists, and the two limits are equal.

Proof: From the Følner property, we see that for any ${A}$ and any ${\varepsilon>0}$, the averages ${\mathop{\bf E}_{a \in \Phi_n} f_a}$ and ${\mathop{\bf E}_{a \in A+\Phi_n} f_a}$ differ by at most ${\varepsilon}$ in norm if ${n}$ is sufficiently large depending on ${A}$, ${\varepsilon}$ (and the ${f_a}$). On the other hand, by the existence of the limit ${\lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} f_a}$, the averages ${\mathop{\bf E}_{a \in A} f_a}$ and ${\mathop{\bf E}_{a \in A + \Phi_n} f_a}$ differ by at most ${\varepsilon}$ in norm if ${A}$ is sufficiently large depending on ${\varepsilon}$ (regardless of how large ${n}$ is). The claim follows. $\Box$

It turns out that this approach can also be used as an alternate way to construct the Gowers-Host-Kra seminorms in ergodic theory, which has the feature that it does not explicitly require any amenability on the group ${G}$ (or separability on the underlying measure space), though, as pointed out to me in comments, even uncountable abelian groups are amenable in the sense of possessing an invariant mean, even if they do not have a Følner sequence.

Given an arbitrary additive group ${G}$, define a ${G}$-system ${({\mathrm X}, T)}$ to be a probability space ${{\mathrm X} = (X, {\mathcal X}, \mu)}$ (not necessarily separable or standard Borel), together with a collection ${T^g: X \rightarrow X}$ of invertible, measure-preserving maps, such that ${T^0}$ is the identity and ${T^g T^h = T^{g+h}}$ (modulo null sets) for all ${g,h \in G}$. This then gives isomorphisms ${T^g: L^p({\mathrm X}) \rightarrow L^p({\mathrm X})}$ for ${1 \leq p \leq \infty}$ by setting ${T^g f(x) := f(T^{-g} x)}$. From the above abstract ergodic theorem, we see that

$\displaystyle {\mathbf E}( f | {\mathcal X}^G ) = \lim_{A \rightarrow G} \mathop{\bf E}_{a \in A} T^a f$

in the strong topology of ${L^2({\mathrm X})}$ for any ${f \in L^2({\mathrm X})}$, where ${{\mathcal X}^G}$ is the collection of measurable sets ${E}$ that are essentially ${G}$-invariant in the sense that ${T^g E = E}$ modulo null sets for all ${g \in G}$, and ${{\mathbf E}(f|{\mathcal X}^G)}$ is the conditional expectation of ${f}$ with respect to ${{\mathcal X}^G}$.

In a similar spirit, we have

Theorem 4 (Convergence of Gowers-Host-Kra seminorms) Let ${({\mathrm X},T)}$ be a ${G}$-system for some additive group ${G}$. Let ${d}$ be a natural number, and for every ${\omega \in\{0,1\}^d}$, let ${f_\omega \in L^{2^d}({\mathrm X})}$, which for simplicity we take to be real-valued. Then the expression

$\displaystyle \langle (f_\omega)_{\omega \in \{0,1\}^d} \rangle_{U^d({\mathrm X})} := \lim_{A_1,\dots,A_d \rightarrow G}$

$\displaystyle \mathop{\bf E}_{h_1 \in A_1-A_1,\dots,h_d \in A_d-A_d} \int_X \prod_{\omega \in \{0,1\}^d} T^{\omega_1 h_1 + \dots + \omega_d h_d} f_\omega\ d\mu$

converges, where we write ${\omega = (\omega_1,\dots,\omega_d)}$, and we are using the product direct set on ${{\mathcal F}^d}$ to define the convergence ${A_1,\dots,A_d \rightarrow G}$. In particular, for ${f \in L^{2^d}({\mathrm X})}$, the limit

$\displaystyle \| f \|_{U^d({\mathrm X})}^{2^d} = \lim_{A_1,\dots,A_d \rightarrow G}$

$\displaystyle \mathop{\bf E}_{h_1 \in A_1-A_1,\dots,h_d \in A_d-A_d} \int_X \prod_{\omega \in \{0,1\}^d} T^{\omega_1 h_1 + \dots + \omega_d h_d} f\ d\mu$

converges.

We prove this theorem below the fold. It implies a number of other known descriptions of the Gowers-Host-Kra seminorms ${\|f\|_{U^d({\mathrm X})}}$, for instance that

$\displaystyle \| f \|_{U^d({\mathrm X})}^{2^d} = \lim_{A \rightarrow G} \mathop{\bf E}_{h \in A-A} \| f T^h f \|_{U^{d-1}({\mathrm X})}^{2^{d-1}}$

for ${d > 1}$, while from the ergodic theorem we have

$\displaystyle \| f \|_{U^1({\mathrm X})} = \| {\mathbf E}( f | {\mathcal X}^G ) \|_{L^2({\mathrm X})}.$
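As a sanity check on these formulas, one can take the finite system ${X = {\bf Z}/N{\bf Z}}$ with uniform measure and the (ergodic) shift action of ${G = {\bf Z}/N{\bf Z}}$ on itself, for which all the limits above collapse to averages over the whole group, and ${\|g\|_{U^1} = |{\mathbf E} g|}$. The recursive description of the ${U^2}$ norm can then be verified numerically; a Python sketch (the test function is an arbitrary choice):

```python
import random

N = 64
random.seed(0)
f = [random.uniform(-1, 1) for _ in range(N)]  # arbitrary real-valued f on Z/NZ

# Direct U^2 average: E_{x,h1,h2} f(x) f(x+h1) f(x+h2) f(x+h1+h2).
u2_direct = sum(
    f[x] * f[(x + h1) % N] * f[(x + h2) % N] * f[(x + h1 + h2) % N]
    for x in range(N) for h1 in range(N) for h2 in range(N)
) / N**3

# Recursion ||f||_{U^2}^4 = E_h ||f T^h f||_{U^1}^2, with ||g||_{U^1} = |E g|
# in this ergodic system.
u2_recursive = sum(
    (sum(f[x] * f[(x + h) % N] for x in range(N)) / N) ** 2
    for h in range(N)
) / N

print(abs(u2_direct - u2_recursive) < 1e-9)  # -> True
```

The two expressions agree exactly after expanding the square in the recursive formula and substituting ${h_1 = y - x}$, ${h_2 = h}$.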

This definition also manifestly demonstrates the cube symmetries of the Host-Kra measures ${\mu^{[d]}}$ on ${X^{\{0,1\}^d}}$, defined via duality by requiring that

$\displaystyle \langle (f_\omega)_{\omega \in \{0,1\}^d} \rangle_{U^d({\mathrm X})} = \int_{X^{\{0,1\}^d}} \bigotimes_{\omega \in \{0,1\}^d} f_\omega\ d\mu^{[d]}.$

In a subsequent blog post I hope to present a more detailed study of the ${U^2}$ norm and its relationship with eigenfunctions and the Kronecker factor, without assuming any amenability on ${G}$ or any separability or topological structure on ${{\mathrm X}}$.

We return to the study of the Riemann zeta function ${\zeta(s)}$, focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for ${\zeta}$, which in turn lead to improved estimates for the error term in the prime number theorem.

In equation (21) of Notes 2 we obtained the somewhat crude estimates

$\displaystyle \zeta(s) = \sum_{n \leq x} \frac{1}{n^s} - \frac{x^{1-s}}{1-s} + O( \frac{|s|}{\sigma} \frac{1}{x^\sigma} ) \ \ \ \ \ (1)$

for any ${x > 0}$ and ${s = \sigma+it}$ with ${\sigma>0}$ and ${s \neq 1}$. Setting ${x=1}$, we obtained the crude estimate

$\displaystyle \zeta(s) = \frac{1}{s-1} + O( \frac{|s|}{\sigma} )$

in this region. In particular, if ${0 < \varepsilon \leq \sigma \ll 1}$ and ${|t| \gg 1}$ then we had ${\zeta(s) = O_\varepsilon( |t| )}$. Using the functional equation and the Hadamard three lines lemma, we can improve this to ${\zeta(s) \ll_\varepsilon |t|^{\frac{1-\sigma}{2}+\varepsilon}}$; see Supplement 3.
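The expansion (1) is straightforward to evaluate numerically; as a consistency check, the main terms computed at two different cutoffs ${x}$ should agree to within the stated error ${O( \frac{|s|}{\sigma} \frac{1}{x^\sigma} )}$. A Python sketch (the point ${s}$ and the cutoffs are arbitrary choices):

```python
def zeta_main_terms(s: complex, x: int) -> complex:
    """The main terms of (1): sum_{n <= x} n^{-s} - x^{1-s} / (1-s)."""
    return sum(n ** (-s) for n in range(1, x + 1)) - x ** (1 - s) / (1 - s)

s = 0.9 + 30j  # a point in the critical strip, away from s = 1
z1 = zeta_main_terms(s, 10**4)
z2 = zeta_main_terms(s, 10**5)

# The error term is O(|s| / (sigma x^sigma)), roughly 1e-2 at x = 10^4, so
# the two evaluations agree to at least that accuracy.
print(abs(z1 - z2) < 1e-2)  # -> True
```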

Now we seek better upper bounds on ${\zeta}$. We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1 Let ${s = \sigma+it}$ with ${0 < \varepsilon \leq \sigma \ll 1}$ and ${|t| \gg 1}$. Then

$\displaystyle \zeta(s) \ll_\varepsilon \log(2+|t|) \sup_{1 \leq M \leq N \ll |t|} N^{1-\sigma} |\frac{1}{N} \sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n)|$

where ${e(x) := e^{2\pi i x}}$.

Proof: We fix a smooth function ${\eta: {\bf R} \rightarrow {\bf C}}$ with ${\eta(t)=1}$ for ${t \leq -1}$ and ${\eta(t)=0}$ for ${t \geq 1}$, and allow implied constants to depend on ${\eta}$. Let ${s=\sigma+it}$ with ${\varepsilon \leq \sigma \ll 1}$. From Exercise 33 of Supplement 3, we have

$\displaystyle \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log C|t| ) + O_\varepsilon( 1 )$

for some sufficiently large absolute constant ${C}$. By dyadic decomposition, we thus have

$\displaystyle \zeta(s) \ll_{\varepsilon} 1 + \log(2+|t|) \sup_{1 \leq N \ll |t|} |\sum_{N \leq n < 2N} \frac{1}{n^s} \eta( \log n - \log C|t| )|.$

We can absorb the first term in the second using the ${N=1}$ case of the supremum. Writing ${\frac{1}{n^s} \eta( \log n - \log C|t| ) = N^{-\sigma} e( - \frac{t}{2\pi} \log n ) F_N(n)}$, where

$\displaystyle F_N(n) := (N/n)^\sigma \eta(\log n - \log C|t| ),$

it thus suffices to show that

$\displaystyle \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n) F_N(n) \ll \sup_{1 \leq M \leq N} |\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)|$

for each ${N}$. But from the fundamental theorem of calculus, the left-hand side can be written as

$\displaystyle F_N(2N) \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n)$

$\displaystyle - \int_0^{N} (\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)) F'_N(N+M)\ dM$

and the claim then follows from the triangle inequality and a routine calculation. $\Box$

We are thus interested in getting good bounds on the sum ${\sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n )}$. More generally, we consider normalised exponential sums of the form

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ \ \ \ \ (2)$

where ${I \subset {\bf R}}$ is an interval of length at most ${N}$ for some ${N \geq 1}$, and ${f: {\bf R} \rightarrow {\bf R}}$ is a smooth function. We will assume smoothness estimates of the form

$\displaystyle |f^{(j)}(x)| = \exp( O(j^2) ) \frac{T}{N^j} \ \ \ \ \ (3)$

for some ${T>0}$, all ${x \in I}$, and all ${j \geq 1}$, where ${f^{(j)}}$ is the ${j}$-fold derivative of ${f}$; in the case ${f(x) := -\frac{t}{2\pi} \log x}$, ${I \subset [N,2N]}$ of interest for the Riemann zeta function, we easily verify that these estimates hold with ${T := |t|}$. (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative ${f^{(0)}=f}$ of ${f}$ directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to ${f(n)}$.
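For this choice of phase, both the hypothesis (3) and the size of the sum (2) can be inspected numerically. In the Python sketch below (parameters chosen arbitrarily, with ${T = N^2}$), we check the derivative bound for the first few ${j}$ using the closed form ${f^{(j)}(x) = -\frac{t}{2\pi} (-1)^{j-1} (j-1)! x^{-j}}$, and observe the magnitude of the normalised sum:

```python
import cmath, math

t = 1e6   # so T = |t|
N = 1000  # with T = N^2
# f(x) = -(t / 2 pi) log x has f^{(j)}(x) = -(t / 2 pi) (-1)^{j-1} (j-1)! x^{-j},
# so sup_{x in [N, 2N]} |f^{(j)}(x)| = (t / 2 pi) (j-1)! / N^j.
for j in range(1, 8):
    sup_fj = (t / (2 * math.pi)) * math.factorial(j - 1) / N ** j
    assert sup_fj <= math.exp(j * j) * t / N ** j  # the exp(O(j^2)) T / N^j bound (3)

# The normalised exponential sum (2) over [N, 2N):
S = sum(cmath.exp(-1j * t * math.log(n)) for n in range(N, 2 * N)) / N
print(abs(S))  # compare with the trivial bound 1
```

Here ${e(f(n)) = e^{-it \log n}}$, and the printed magnitude is far below the trivial bound, consistent with the pseudorandomness heuristic of size roughly ${N^{-1/2}}$.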

The trivial bound for (2) is

$\displaystyle \frac{1}{N} \sum_{n \in I} e(f(n)) \ll 1 \ \ \ \ \ (4)$

and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of ${O_\varepsilon(N^{-1/2+\varepsilon})}$ for (2) for any ${\varepsilon>0}$ if ${T = O(N^{O(1)})}$; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 readily yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form ${O(N^{-c_K})}$ when ${T \ll N^K}$ and ${c_K > 0}$ depends on ${K}$. We present two such results here, which perform well for small and large values of ${K}$ respectively:

Theorem 2 Let ${2 \leq N \ll T}$, let ${I}$ be an interval of length at most ${N}$, and let ${f: I \rightarrow {\bf R}}$ be a smooth function obeying (3) for all ${j \geq 1}$ and ${x \in I}$.

• (i) (van der Corput estimate) For any natural number ${k \geq 2}$, one has

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll (\frac{T}{N^k})^{\frac{1}{2^k-2}} \log^{1/2} (2+T). \ \ \ \ \ (5)$

• (ii) (Vinogradov estimate) If ${k}$ is a natural number and ${T \leq N^{k}}$, then

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll N^{-c/k^2} \ \ \ \ \ (6)$

for some absolute constant ${c>0}$.

The factor of ${\log^{1/2} (2+T)}$ can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of ${\log T}$. The estimate (6) is superior to (5) when ${T \sim N^K}$ for ${K}$ large, since (after optimising in ${k}$) (5) gives a gain of the form ${N^{-c/2^{cK}}}$ over the trivial bound, while (6) gives ${N^{-c/K^2}}$. We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on ${\zeta}$, and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating ${f}$ like a polynomial of degree roughly ${k}$; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about ${k}$ times, which is why the gain is only exponentially small in ${k}$. The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of ${k^2}$ – by using Hölder’s inequality in combination with the fact that the discrete curve ${\{ (n,n^2,\dots,n^k): n \in \{1,\dots,M\}\}}$ becomes roughly equidistributed in the box ${\{ (a_1,\dots,a_k): a_j = O( M^j ) \}}$ after taking the sumset of about ${k^2}$ copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “${p}$-adic” version of this equidistribution to reduce the claim at a given scale ${M}$ to a smaller scale ${M/p}$ with ${p \sim M^{1/k}}$, and then proceeding by induction.
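The first squaring step can be seen concretely in the exact expansion ${|\sum_{n \in I} e(f(n))|^2 = \sum_{|h| < N} \sum_{n, n+h \in I} e(f(n+h)-f(n))}$, in which the original phase is replaced by the differenced phases ${f(n+h)-f(n)}$ (the van der Corput argument then uses Cauchy-Schwarz to restrict ${h}$ to a shorter range). A Python check of this identity for the zeta phase, with arbitrary small parameters:

```python
import cmath, math

t, N = 1e5, 200
I = range(N, 2 * N)  # the interval [N, 2N)

def e(x: float) -> complex:
    return cmath.exp(2j * math.pi * x)

def f(n: int) -> float:
    return -t / (2 * math.pi) * math.log(n)

lhs = abs(sum(e(f(n)) for n in I)) ** 2

# Expanding the square produces the differenced phases f(n + h) - f(n):
rhs = sum(e(f(n + h) - f(n))
          for h in range(-(N - 1), N)
          for n in I if n + h in I)

print(abs(lhs - rhs.real) < 1e-6 and abs(rhs.imag) < 1e-6)  # -> True
```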

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

• (i) Show that ${\zeta(\frac{1}{2}+it) \ll (1+|t|)^{1/6} \log^{O(1)}(1+|t|)}$ for all ${t \in {\bf R}}$. (Hint: use the ${k=3}$ case of the van der Corput estimate.)
• (ii) For any ${0 < \sigma < 1}$, show that ${\zeta(\sigma+it) \ll (1+|t|)^{\max( \frac{1-\sigma}{3}, \frac{1}{2} - \frac{2\sigma}{3}) + o(1)}}$ as ${|t| \rightarrow \infty}$.

Exercise 4 Let ${t}$ be such that ${|t| \geq 100}$, and let ${\sigma \geq 1/2}$.

• (i) (Littlewood bound) Use the van der Corput estimate to show that ${\zeta(\sigma+it) \ll \log^{O(1)} |t|}$ whenever ${\sigma \geq 1 - O( \frac{(\log\log |t|)^2}{\log |t|} )}$.
• (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that ${\zeta(\sigma+it) \ll \log^{O(1)} |t|}$ whenever ${\sigma \geq 1 - O( \frac{(\log\log |t|)^{2/3}}{\log^{2/3} |t|} )}$.

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region ${\{ \sigma+it: \sigma > 1 - c \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}; |t| \geq 100 \}}$, which in turn leads to the prime number theorem with error term

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O\left( x \exp\left( - c \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)$

for ${x > 100}$. If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

$\displaystyle \{ \sigma+it: \sigma > 1 - c \frac{\log\log|t|}{\log |t|}; |t| \geq 100 \}$

(which is only slightly wider than the classical zero-free region) and an error term

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x \exp( - c \sqrt{\log x \log\log x} ) )$

in the prime number theorem.

Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let ${\chi}$ be a non-principal character of modulus ${q}$.

• (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that ${L(\sigma+it,\chi) \ll \log^{O(1)}(q|t|)}$ whenever ${|t| \geq 100}$ and

$\displaystyle \sigma \geq 1 - O( \min( \frac{\log\log(q|t|)}{\log q}, \frac{(\log\log(q|t|))^{2/3}}{\log^{2/3} |t|} ) ).$

(Hint: use the Vinogradov estimate and a change of variables to control ${\sum_{n \in I: n = a\ (q)} \exp( -it \log n)}$ for various intervals ${I}$ of length at most ${N}$ and residue classes ${a\ (q)}$, in the regime ${N \geq q^2}$ (say). For ${N < q^2}$, do not try to capture any cancellation and just use the triangle inequality instead.)

• (ii) Obtain a zero-free region

$\displaystyle \{ \sigma+it: \sigma > 1 - c \min( \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}, \frac{1}{\log q} );$

$\displaystyle |t| \geq 100 \}$

for ${L(s,\chi)}$, for some (effective) absolute constant ${c>0}$.

• (iii) Obtain the prime number theorem in arithmetic progressions with error term

$\displaystyle \sum_{n \leq x: n = a\ (q)} \Lambda(n) = \frac{x}{\phi(q)} + O\left( x \exp\left( - c_A \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)$

whenever ${x > 100}$, ${q \leq \log^A x}$, ${a\ (q)}$ is primitive, and ${c_A>0}$ depends (ineffectively) on ${A}$.