
In the previous set of notes, we constructed the measure-theoretic notion of the Lebesgue integral, and used this to set up the probabilistic notion of expectation on a rigorous footing. In this set of notes, we will similarly construct the measure-theoretic concept of a product measure (restricting to the case of probability measures to avoid unnecessary technicalities), and use this to set up the probabilistic notion of independence on a rigorous footing. (To quote Durrett: “measure theory ends and probability theory begins with the definition of independence.”) We will be able to take virtually any collection of random variables (or probability distributions) and couple them together to be independent via the product measure construction, though for infinite products there is the slight technicality (a requirement of the Kolmogorov extension theorem) that the random variables need to range in standard Borel spaces. This is not the only way to couple together such random variables, but it is the simplest and the easiest to compute with in practice, as we shall see in the next few sets of notes.

In Notes 0, we introduced the notion of a measure space $\Omega = (\Omega, {\mathcal F}, \mu)$, which includes as a special case the notion of a probability space. By selecting one such probability space $(\Omega, {\mathcal F}, {\bf P})$ as a sample space, one obtains a model for random events and random variables, with random events $E$ being modeled by measurable sets $E_\Omega$ in ${\mathcal F}$, and random variables $X$ taking values in a measurable space $R$ being modeled by measurable functions $X_\Omega: \Omega \to R$. We then defined some basic operations on these random events and variables:

- Given events $E, F$, we defined the conjunction $E \wedge F$, the disjunction $E \vee F$, and the complement $\overline{E}$. For countable families $E_1, E_2, \dots$ of events, we similarly defined $\bigwedge_{n=1}^\infty E_n$ and $\bigvee_{n=1}^\infty E_n$. We also defined the empty event $\emptyset$ and the sure event $\overline{\emptyset}$, and what it meant for two events to be equal.
- Given random variables $X_1, \dots, X_k$ in ranges $R_1, \dots, R_k$ respectively, and a measurable function $F: R_1 \times \dots \times R_k \to S$, we defined the random variable $F(X_1, \dots, X_k)$ in range $S$. (As the special case $k = 0$ of this, every deterministic element of $S$ was also a random variable taking values in $S$.) Given a relation $P$ on $R_1 \times \dots \times R_k$, we similarly defined the event $P(X_1, \dots, X_k)$. Conversely, given an event $E$, we defined the indicator random variable $1_E$. Finally, we defined what it meant for two random variables to be equal.
- Given an event $E$, we defined its probability ${\bf P}(E)$.

These operations obey various axioms; for instance, the boolean operations on events obey the axioms of a Boolean algebra, and the probability function ${\bf P}$ obeys the Kolmogorov axioms. However, we will not focus on the axiomatic approach to probability theory here, instead basing the foundations of probability theory on the sample space models as discussed in Notes 0. (But see this previous post for a treatment of one such axiomatic approach.)

It turns out that almost all of the other operations on random events and variables we need can be constructed in terms of the above basic operations. In particular, this allows one to safely *extend* the sample space in probability theory whenever needed, provided one uses an extension that respects the above basic operations; this is an important operation when one needs to add new sources of randomness to an existing system of events and random variables, or to couple together two separate such systems into a joint system that extends both of the original systems. We gave a simple example of such an extension in the previous notes, but now we give a more formal definition:

Definition 1. Suppose that we are using a probability space $\Omega = (\Omega, {\mathcal F}, {\bf P})$ as the model for a collection of events and random variables. An *extension* of this probability space is a probability space $\Omega' = (\Omega', {\mathcal F}', {\bf P}')$, together with a measurable map $\pi: \Omega' \to \Omega$ (sometimes called the *factor map*) which is probability-preserving in the sense that

$$ {\bf P}'( \pi^{-1}(E) ) = {\bf P}(E) \qquad (1) $$

for all $E \in {\mathcal F}$. (Caution: this does *not* imply that ${\bf P}'(F) = {\bf P}(\pi(F))$ for all $F \in {\mathcal F}'$ – why not?) An event $E$ which is modeled by a measurable subset $E_\Omega$ of the sample space $\Omega$ will be modeled by the measurable set $E_{\Omega'} := \pi^{-1}(E_\Omega)$ in the extended sample space $\Omega'$. Similarly, a random variable $X$ taking values in some range $R$ that is modeled by a measurable function $X_\Omega: \Omega \to R$ in $\Omega$ will be modeled instead by the measurable function $X_{\Omega'} := X_\Omega \circ \pi$ in $\Omega'$. We also allow the extension $\Omega'$ to model additional events and random variables that were not modeled by the original sample space $\Omega$ (indeed, this is one of the main reasons why we perform extensions in probability in the first place).
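
To make the bookkeeping concrete, here is a minimal computational sketch of an extension (my own illustration, not from the notes, with hypothetical identifiers throughout): a die-roll sample space extended by an independent coin flip, with the factor map given by the first coordinate projection.

```python
from fractions import Fraction

omega = list(range(1, 7))                          # original sample space: a fair die
omega_ext = [(d, c) for d in omega for c in "HT"]  # extension: die x coin

def factor_map(point):
    # the factor map pi: forget the coin flip
    return point[0]

def prob(space, event):
    # uniform probability measure on a finite sample space
    return Fraction(len([x for x in space if x in event]), len(space))

E = {2, 4, 6}                                         # event: "the die roll is even"
E_ext = {x for x in omega_ext if factor_map(x) in E}  # its model pi^{-1}(E) upstairs

# probability preservation, as in (1) of Definition 1
assert prob(omega, E) == prob(omega_ext, E_ext) == Fraction(1, 2)

# the extension models new events, e.g. "the coin came up heads",
# which have no counterpart in the original sample space
heads = {x for x in omega_ext if x[1] == "H"}
print(prob(omega_ext, heads))   # 1/2
```

Note that the coin-flip event in this sketch is exactly the kind of additional structure that the original sample space could not model, which is the point of performing the extension.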

Thus, for instance, the sample space $\Omega'$ in Example 3 of the previous post is an extension of the sample space $\Omega$ in that example, with the factor map $\pi: \Omega' \to \Omega$ given by the first coordinate projection. One can verify that all of the basic operations on events and random variables listed above are unaffected by the above extension (with one caveat, see remark below). For instance, the conjunction $E \wedge F$ of two events can be defined via the original model $\Omega$ by the formula

$$ (E \wedge F)_\Omega := E_\Omega \cap F_\Omega $$

or via the extension $\Omega'$ via the formula

$$ (E \wedge F)_{\Omega'} := E_{\Omega'} \cap F_{\Omega'}. $$

The two definitions are consistent with each other, thanks to the obvious set-theoretic identity

$$ \pi^{-1}( E_\Omega \cap F_\Omega ) = \pi^{-1}( E_\Omega ) \cap \pi^{-1}( F_\Omega ). $$
Similarly, the assumption (1) is precisely what is needed to ensure that the probability of an event remains unchanged when one replaces a sample space model with an extension. We leave the verification of preservation of the other basic operations described above under extension as exercises to the reader.

Remark 2. There is one minor exception to this general rule if we do not impose the additional requirement that the factor map $\pi$ is surjective. Namely, for non-surjective $\pi$, it can become possible that two events are unequal in the original sample space model, but become equal in the extension (and similarly for random variables), although the converse never happens (events that are equal in the original sample space always remain equal in the extension). For instance, let $\Omega$ be the discrete probability space $\{a, b\}$ with ${\bf P}(\{a\}) = 1$ and ${\bf P}(\{b\}) = 0$, and let $\Omega'$ be the discrete probability space $\{a'\}$ with ${\bf P}'(\{a'\}) = 1$, and non-surjective factor map $\pi: \Omega' \to \Omega$ defined by $\pi(a') := a$. Then the event modeled by $\{b\}$ in $\Omega$ is distinct from the empty event $\emptyset$ when viewed in $\Omega$, but becomes equal to that event when viewed in $\Omega'$. Thus we see that extending the sample space by a non-surjective factor map can identify previously distinct events together (though of course, being probability preserving, this can only happen if those two events were already almost surely equal anyway). This turns out to be fairly harmless though: while it is nice to know if two given events are equal, or if they differ by a non-null event, it is almost never useful to know that two events are unequal if they are already almost surely equal. Alternatively, one can add the additional requirement of surjectivity in the definition of an extension, which is also a fairly harmless constraint to impose (this is what I chose to do in this previous set of notes).

Roughly speaking, one can define probability theory as the study of those properties of random events and random variables that are model-independent in the sense that they are preserved by extensions. For instance, the cardinality of the model of an event is *not* a concept within the scope of probability theory, as it is not preserved by extensions: continuing Example 3 from Notes 0, the event that a die roll is even is modeled by a set of cardinality in the original sample space model , but by a set of cardinality in the extension. Thus it does not make sense in the context of probability theory to refer to the “cardinality of an event “.

On the other hand, the supremum $\sup_n X_n$ of a countable sequence $X_1, X_2, \dots$ of random variables in the extended real line $[-\infty, +\infty]$ is a valid probabilistic concept. This can be seen by manually verifying that this operation is preserved under extension of the sample space, but one can also see this by defining the supremum in terms of existing basic operations. Indeed, note from Exercise 24 of Notes 0 that a random variable $X$ in the extended real line is completely specified by the threshold events $\{ X \leq t \}$ for $t \in {\bf R}$; in particular, two such random variables $X, Y$ are equal if and only if the events $\{ X \leq t \}$ and $\{ Y \leq t \}$ are surely equal for all $t$. From the identity

$$ \{ \sup_n X_n \leq t \} = \bigwedge_{n=1}^\infty \{ X_n \leq t \} $$

we thus see that one can completely specify $\sup_n X_n$ in terms of $X_1, X_2, \dots$ using only the basic operations provided in the above list (and in particular using the countable conjunction $\bigwedge_{n=1}^\infty$). Of course, the same considerations hold if one replaces the supremum by the infimum, limit superior, limit inferior, or (if it exists) the limit.

In this set of notes, we will define some further important operations on scalar random variables, in particular the *expectation* of these variables. In the sample space models, expectation corresponds to the notion of integration on a measure space. As we will need to use both expectation and integration in this course, we will thus begin by quickly reviewing the basics of integration on a measure space, although we will then translate the key results of this theory into probabilistic language.

As the finer details of the Lebesgue integral construction are not the core focus of this probability course, some of the details of this construction will be left to exercises. See also Chapter 1 of Durrett, or these previous blog notes, for a more detailed treatment.

Starting this week, I will be teaching an introductory graduate course (Math 275A) on probability theory here at UCLA. While I find myself *using* probabilistic methods routinely nowadays in my research (for instance, the probabilistic concept of Shannon entropy played a crucial role in my recent paper on the Chowla and Elliott conjectures, and random multiplicative functions similarly played a central role in the paper on the Erdos discrepancy problem), this will actually be the first time I will be *teaching* a course on probability itself (although I did give a course on random matrix theory some years ago that presumed familiarity with graduate-level probability theory). As such, I will be relying primarily on an existing textbook, in this case Durrett’s Probability: Theory and Examples. I still need to prepare lecture notes, though, and so I thought I would continue my practice of putting my notes online, although in this particular case they will be less detailed or complete than with other courses, as they will mostly be focusing on those topics that are not already comprehensively covered in the text of Durrett. Below the fold are my first such set of notes, concerning the classical measure-theoretic foundations of probability. (I wrote on these foundations also in this previous blog post, but in that post I already assumed that the reader was familiar with measure theory and basic probability, whereas in this course not every student will have a strong background in these areas.)

Note: as this set of notes is primarily concerned with foundational issues, it will contain a large number of pedantic (and nearly trivial) formalities and philosophical points. We dwell on these technicalities in this set of notes primarily so that they are out of the way in later notes, when we work with the actual mathematics of probability, rather than on the supporting foundations of that mathematics. In particular, the excessively formal and philosophical language in this set of notes will not be replicated in later notes.

The equidistribution theorem asserts that if $\alpha$ is an irrational phase, then the sequence $(n\alpha)_{n=1}^\infty$ is equidistributed on the unit circle ${\bf R}/{\bf Z}$, or equivalently that

$$ \frac{1}{N} \sum_{n=1}^N F(n\alpha) \to \int_{{\bf R}/{\bf Z}} F $$

for any continuous (or equivalently, for any smooth) function $F: {\bf R}/{\bf Z} \to {\bf C}$. By approximating $F$ uniformly by a Fourier series, this claim is equivalent to that of showing that

$$ \frac{1}{N} \sum_{n=1}^N e(k n \alpha) \to 0 $$

for any non-zero integer $k$ (where $e(\theta) := e^{2\pi i \theta}$), which is easily verified from the irrationality of $\alpha$ and the geometric series formula. Conversely, if $\alpha$ is rational, then $\frac{1}{N} \sum_{n=1}^N e(k n \alpha)$ clearly fails to go to zero when $k$ is a multiple of the denominator of $\alpha$.
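
As a quick numerical illustration (mine, not from the post), one can watch the normalised Weyl sum decay for an irrational phase and stall for a rational one:

```python
import cmath, math

def weyl_avg(alpha, k, N):
    # (1/N) |sum_{n=1}^N e(k n alpha)|, with e(theta) = exp(2 pi i theta)
    return abs(sum(cmath.exp(2j * math.pi * k * n * alpha)
                   for n in range(1, N + 1))) / N

for N in (10**3, 10**4, 10**5):
    print(N, weyl_avg(math.sqrt(2), 3, N))  # irrational phase: tends to zero
print(weyl_avg(1/3, 3, 10**5))              # rational phase, k a multiple of the
                                            # denominator: the average stays at 1
```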

One can then ask for more quantitative information about the decay of the exponential sum $\frac{1}{N} \sum_{n=1}^N e(k n \alpha)$, or more generally on exponential sums of the form $\frac{1}{|P|} \sum_{n \in P} e(\phi(n))$ for an arithmetic progression $P$ (in this post all progressions are understood to be finite) and a polynomial $\phi: {\bf Z} \to {\bf R}/{\bf Z}$. It will be convenient to phrase such information in the form of an *inverse theorem*, describing those phases for which the exponential sum is large. Indeed, we have

Lemma 1 (Geometric series formula, inverse form). Let $P$ be an arithmetic progression of length at most $N$ for some $N \geq 1$, and let $\phi(n) = \alpha n + \beta$ be a linear polynomial for some $\alpha, \beta \in {\bf R}/{\bf Z}$. If

$$ \left| \frac{1}{N} \sum_{n \in P} e(\phi(n)) \right| \geq \delta $$

for some $\delta > 0$, then there exists a subprogression $P'$ of $P$ of size $|P'| \gg \delta^2 N$ such that $\phi$ varies by at most $\delta$ on $P'$ (that is to say, $\phi(n)$ lies in a subinterval of ${\bf R}/{\bf Z}$ of length at most $\delta$ as $n$ ranges over $P'$).

*Proof:* By a linear change of variable we may assume that $\phi(n) = \alpha n$ for some $\alpha \in {\bf R}/{\bf Z}$. We may of course assume that $\alpha$ is non-zero in ${\bf R}/{\bf Z}$, so that $\|\alpha\| > 0$ (here $\|\alpha\|$ denotes the distance from $\alpha$ to the nearest integer). From the geometric series formula we see that

$$ \left| \sum_{n \in P} e(\alpha n) \right| \leq \frac{2}{|e(\alpha) - 1|} \ll \frac{1}{\|\alpha\|}, $$

and so $\|\alpha\| \ll \frac{1}{\delta N}$. Setting $P'$ to be a subprogression of $P$ of length $c \delta / \|\alpha\|$ for some sufficiently small absolute constant $c > 0$ (or all of $P$, if this length exceeds that of $P$), we obtain the claim.
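
Here is a small numerical sanity check of this inverse implication (my own sketch, using the standard form $\|\alpha\| \leq \frac{1}{2\delta N}$ of the bound extracted in the proof; the parameter choices are arbitrary):

```python
import cmath

def dist_to_int(x):
    return abs(x - round(x))

def exp_sum(alpha, N):
    # |sum_{0 <= n < N} e(alpha n)|
    return abs(sum(cmath.exp(2j * cmath.pi * alpha * n) for n in range(N)))

N = 1000
for alpha in (1e-4, 3e-4, 0.27):
    delta = exp_sum(alpha, N) / N
    if delta > 0.01:  # only large sums force alpha to be nearly an integer
        bound = 1 / (2 * delta * N)
        print(f"alpha={alpha}: delta={delta:.3f}, "
              f"||alpha||={dist_to_int(alpha):.1e} <= {bound:.1e}")
        assert dist_to_int(alpha) <= bound
```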

Thus, in order for a linear phase $\phi$ to fail to be equidistributed on some long progression $P$, $\phi$ must in fact be almost constant on a large piece of $P$.

As is well known, this phenomenon generalises to higher order polynomials. To achieve this, we need two additional elementary lemmas. The first relates the exponential sums of a phase $\phi$ to the exponential sums of its “first derivatives” $n \mapsto \phi(n+h) - \phi(n)$.

Lemma 2 (Van der Corput lemma, inverse form). Let $P$ be an arithmetic progression of length at most $N$, and let $\phi: {\bf Z} \to {\bf R}/{\bf Z}$ be an arbitrary function such that

$$ \left| \frac{1}{N} \sum_{n \in P} e(\phi(n)) \right| \geq \delta \qquad (1) $$

for some $\delta > 0$. Then, for $\gg \delta^2 N$ integers $h \in [-N, N]$, there exists a subprogression $P_h$ of $P$, of the same spacing as $P$, such that

$$ \left| \frac{1}{N} \sum_{n \in P_h} e(\phi(n+h) - \phi(n)) \right| \gg \delta^2. $$

*Proof:* Squaring (1), we see that

We write $m = n + h$ and conclude that

where $P_h$ is a subprogression of $P$ of the same spacing. Since , we conclude that

for $\gg \delta^2 N$ values of $h$ (this can be seen, much like the pigeonhole principle, by arguing via contradiction for a suitable choice of implied constants). The claim follows.
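
The squaring step rests on an exact identity, which can be checked numerically; the sketch below (mine, with an arbitrary quadratic phase) verifies that $|\sum_n e(f(n))|^2$ equals the sum over shifts $h$ of the exponential sums of the differenced phases $f(n+h) - f(n)$:

```python
import cmath, math

def e(x):
    return cmath.exp(2j * math.pi * x)

N = 500
f = lambda n: 0.1234 * n * n          # an arbitrary quadratic phase

lhs = abs(sum(e(f(n)) for n in range(N))) ** 2
# expand |sum|^2 as a double sum and substitute m = n + h
rhs = sum(sum(e(f(n + h) - f(n)) for n in range(N) if 0 <= n + h < N)
          for h in range(-N + 1, N)).real
assert math.isclose(lhs, rhs, rel_tol=1e-9)
print(lhs, rhs)
```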

The second lemma (which we recycle from this previous blog post) is a variant of the equidistribution theorem.

Lemma 3 (Vinogradov lemma). Let $I \subset [-N, N]$ be an interval for some $N \geq 1$, and let $\alpha \in {\bf R}/{\bf Z}$ be such that $\|\alpha n\| \leq \epsilon$ for at least $\delta N$ values of $n \in I$, for some $0 < \epsilon, \delta < 1$. Then either

$$ N \ll \frac{1}{\delta} \quad \hbox{or} \quad \epsilon \gg \delta, $$

or else there is a natural number $q \ll \frac{1}{\delta}$ such that

$$ \| q \alpha \| \ll \frac{\epsilon}{\delta N}. $$

*Proof:* We may assume that $N \gg \frac{1}{\delta}$ and $\epsilon \ll \delta$, since we are done otherwise. Then there are at least two $n \in I$ with $\|\alpha n\| \leq \epsilon$, and by the pigeonhole principle we can find $n_1 < n_2$ in $I$ with $n_2 - n_1 \ll \frac{1}{\delta}$ and $\|\alpha n_1\|, \|\alpha n_2\| \leq \epsilon$. By the triangle inequality, we conclude that there exists at least one natural number $q \ll \frac{1}{\delta}$ for which

$$ \| \alpha q \| \leq 2\epsilon. $$

We take $q$ to be minimal amongst all such natural numbers; then we see that there exists an $a$ coprime to $q$ such that

If then we are done, so suppose that . Suppose that are elements of such that and . Writing for some , we have

By hypothesis, ; note that as and we also have . This implies that and thus . We then have

We conclude that for fixed with , there are at most elements of such that . Iterating this with a greedy algorithm, we see that the number of with is at most ; since , this implies that

and the claim follows.

Now we can quickly obtain a higher degree version of Lemma 1:

Proposition 4 (Weyl exponential sum estimate, inverse form). Let $P$ be an arithmetic progression of length at most $N$ for some $N \geq 1$, and let $\phi: {\bf Z} \to {\bf R}/{\bf Z}$ be a polynomial of some degree at most $d$. If

$$ \left| \frac{1}{N} \sum_{n \in P} e(\phi(n)) \right| \geq \delta $$

for some $\delta > 0$, then there exists a subprogression $P'$ of $P$ with $|P'| \gg_d \delta^{O_d(1)} N$ such that $\phi$ varies by at most $\delta$ on $P'$.

*Proof:* We induct on $d$. The cases $d = 0, 1$ are immediate from Lemma 1. Now suppose that $d \geq 2$, and that the claim has already been proven for $d - 1$. To simplify the notation we allow implied constants to depend on $d$. Let the hypotheses be as in the proposition. Clearly $\delta$ cannot exceed $1$. By shrinking $\delta$ as necessary we may assume that $\delta \leq c$ for some sufficiently small constant $c$ depending on $d$.

By rescaling we may assume . By Lemma 3, we see that for choices of such that

for some interval . We write , then is a polynomial of degree at most with leading coefficient . We conclude from induction hypothesis that for each such , there exists a natural number such that , by double-counting, this implies that there are integers in the interval such that . Applying Lemma 3, we conclude that either , or that

In the former case the claim is trivial (just take $P'$ to be a point), so we may assume that we are in the latter case.

We partition into arithmetic progressions of spacing and length comparable to for some large depending on to be chosen later. By hypothesis, we have

so by the pigeonhole principle, we have

for at least one such progression . On this progression, we may use the binomial theorem and (4) to write as a polynomial in of degree at most , plus an error of size . We thus can write for for some polynomial of degree at most . By the triangle inequality, we thus have (for large enough) that

and hence by the induction hypothesis we may find a subprogression of of size such that varies by at most on , and thus (for large enough again) that varies by at most on , and the claim follows.

This gives the following corollary (also given as Exercise 16 in this previous blog post):

Corollary 5 (Weyl exponential sum estimate, inverse form II). Let $I$ be a discrete interval of length at most $N$ for some $N \geq 1$, and let $\phi(n) = \alpha_d n^d + \dots + \alpha_0$ be a polynomial of some degree at most $d$ for some $\alpha_0, \dots, \alpha_d \in {\bf R}/{\bf Z}$. If

$$ \left| \frac{1}{N} \sum_{n \in I} e(\phi(n)) \right| \geq \delta $$

for some $\delta > 0$, then there is a natural number $q \ll_d \delta^{-O_d(1)}$ such that $\| q \alpha_i \| \ll_d \frac{\delta^{-O_d(1)}}{N^i}$ for all $i = 1, \dots, d$.

One can obtain much better exponents here using Vinogradov’s mean value theorem; see Theorem 1.6 of this paper of Wooley. (Thanks to Mariusz Mirek for this reference.) However, this weaker result already suffices for many applications, and does not need any result as deep as the mean value theorem.

*Proof:* To simplify notation we allow implied constants to depend on . As before, we may assume that for some small constant depending only on . We may also assume that for some large , as the claim is trivial otherwise (set ).

Applying Proposition 4, we can find a natural number and an arithmetic subprogression of such that and such that varies by at most on . Writing for some interval of length and some , we conclude that the polynomial varies by at most on . Taking order differences, we conclude that the coefficient of this polynomial is ; by the binomial theorem, this implies that differs by at most on from a polynomial of degree at most . Iterating this, we conclude that the coefficient of is for , and the claim then follows by inverting the change of variables (and replacing with a larger quantity such as as necessary).

For future reference we also record a higher degree version of the Vinogradov lemma.

Lemma 6 (Polynomial Vinogradov lemma). Let be a discrete interval for some , and let be a polynomial of degree at most for some such that for at least values of , for some . Then either

or else there is a natural number such that

for all .

*Proof:* We induct on $d$. For $d = 1$ this follows from Lemma 3 (noting that if then ), so suppose that $d \geq 2$ and that the claim is already proven for $d - 1$. We now allow all implied constants to depend on $d$.

For each , let denote the number of such that . By hypothesis, , and clearly , so we must have for choices of . For each such , we then have for choices of , so by the induction hypothesis, either (5) or (6) holds, or else for choices of , there is a natural number such that

for , where are the coefficients of the degree polynomial . We may of course assume it is the latter which holds. By the pigeonhole principle we may take to be independent of .

Since , we have

for choices of , so by Lemma 3, either (5) or (6) holds, or else (after increasing as necessary) we have

We can again assume it is the latter that holds. This implies that modulo , so that

for choices of . Arguing as before and iterating, we obtain the claim.

The above results also extend to higher dimensions. Here is the higher dimensional version of Proposition 4:

Proposition 7 (Multidimensional Weyl exponential sum estimate, inverse form). Let and , and let be arithmetic progressions of length at most for each . Let be a polynomial of degrees at most in each of the variables separately. If

for some , then there exists a subprogression of with for each such that varies by at most on .

A much more general statement, in which the polynomial phase is replaced by a nilsequence, and in which one does not necessarily assume the exponential sum is small, is given in Theorem 8.6 of this paper of Ben Green and myself, but it involves far more notation to even state properly.

*Proof:* We induct on $k$. The case $k = 1$ was established in Proposition 4, so we assume that $k \geq 2$ and that the claim has already been proven for $k - 1$. To simplify notation we allow all implied constants to depend on . We may assume that for some small depending only on .

By a linear change of variables, we may assume that for all .

We write . First suppose that . Then by the pigeonhole principle we can find such that

and the claim then follows from the induction hypothesis. Thus we may assume that for some large depending only on . Similarly we may assume that for all .

By the triangle inequality, we have

The inner sum is , and the outer sum has terms. Thus, for choices of , one has

for some polynomials of degrees at most in the variables . For each obeying (7), we apply Corollary 5 to conclude that there exists a natural number such that

for (the claim also holds for but we discard it as being trivial). By the pigeonhole principle, there thus exists a natural number such that

for all and for choices of . If we write

where is a polynomial of degrees at most , then for choices of we have

Applying Lemma 6 in the and the largeness hypotheses on the (and also the assumption that ) we conclude (after enlarging as necessary, and pigeonholing to keep independent of ) that

for all (note that we now include that case, which is no longer trivial) and for choices of . Iterating this, we eventually conclude (after enlarging as necessary) that

whenever for , with nonzero. Permuting the indices, and observing that the claim is trivial for , we in fact obtain (8) for all , at which point the claim easily follows by taking for each .

An inspection of the proof of the above result (or alternatively, by combining the above result again with many applications of Lemma 6) reveals the following general form of Proposition 4, which was posed as Exercise 17 in this previous blog post, but had a slight misprint in it (it did not properly treat the possibility that some of the could be small) and was a bit trickier to prove than anticipated (in fact, the reason for this post was that I was asked to supply a more detailed solution for this exercise):

Proposition 8 (Multidimensional Weyl exponential sum estimate, inverse form, II). Let be a natural number, and for each , let be a discrete interval for some . Let

be a polynomial in variables of multidegrees for some . If

for some , or else there is a natural number such that

Again, the factor of is natural in this bound. In the case, the option (10) may be deleted since (11) trivially holds in this case, but this simplification is no longer available for since one needs (10) to hold for *all* (not just one ) to make (11) completely trivial. Indeed, the above proposition fails for if one removes (10) completely, as can be seen for instance by inspecting the exponential sum , which has size comparable to regardless of how irrational is.

Here’s a cute identity I discovered by accident recently. Observe that

and so one can conjecture that one has

when is even, and

when is odd. This is obvious in the even case since is a polynomial of degree , but I struggled for a while with the odd case before finding a slick three-line proof. (I was first trying to prove the weaker statement that was non-negative, but for some strange reason I was only able to establish this by working out the derivative exactly, rather than by using more analytic methods, such as convexity arguments.) I thought other readers might like the challenge (and also I’d like to see some other proofs), so rather than post my own proof immediately, I’ll see if anyone would like to supply their own proofs or thoughts in the comments. Also I am curious to know if this identity is connected to any other existing piece of mathematics.

I’ve just uploaded to the arXiv my paper “Cancellation for the multilinear Hilbert transform“, submitted to Collectanea Mathematica. This paper uses methods from additive combinatorics (and more specifically, the arithmetic regularity and counting lemmas from this paper of Ben Green and myself) to obtain a slight amount of progress towards the open problem of obtaining bounds for the trilinear and higher Hilbert transforms (as discussed in this previous blog post). For instance, the trilinear Hilbert transform

is not known to be bounded for any to , although it is conjectured to be bounded when and . (For well below , one can use additive combinatorics constructions to demonstrate unboundedness; see this paper of Demeter.) One can approach this problem by considering the truncated trilinear Hilbert transforms

for . It is not difficult to show that the boundedness of is equivalent to the boundedness of with bounds that are uniform in and . On the other hand, from Minkowski’s inequality and Hölder’s inequality one can easily obtain the *non-uniform* bound of for . The main result of this paper is a slight improvement of this trivial bound to as . Roughly speaking, the way this gain is established is as follows. First there are some standard time-frequency type reductions to reduce to the task of obtaining some non-trivial cancellation on a single “tree”. Using a “generalised von Neumann theorem”, we show that such cancellation will happen if (a discretised version of) one or more of the functions (or a dual function that it is convenient to test against) is small in the Gowers norm. However, the arithmetic regularity lemma alluded to earlier allows one to represent an arbitrary function , up to a small error, as the sum of such a “Gowers uniform” function, plus a structured function (or more precisely, an *irrational virtual nilsequence*). This effectively reduces the problem to that of establishing some cancellation in a single tree in the case when all functions involved are irrational virtual nilsequences. At this point, the contribution of each component of the tree can be estimated using the “counting lemma” from my paper with Ben. The main term in the asymptotics is a certain integral over a nilmanifold, but because the kernel in the trilinear Hilbert transform is odd, it turns out that this integral vanishes, giving the required cancellation.

The same argument works for higher order Hilbert transforms (and one can also replace the coefficients in these transforms with other rational constants). However, because the quantitative bounds in the arithmetic regularity and counting lemmas are so poor, it does not seem likely that one can use these methods to remove the logarithmic growth in entirely, and some additional ideas will be needed to resolve the full conjecture.

I’ve just uploaded to the arXiv my paper “Failure of the pointwise and maximal ergodic theorems for the free group“, submitted to Forum of Mathematics, Sigma. This paper concerns a variant of the pointwise ergodic theorem of Birkhoff, which asserts that if one has a measure-preserving shift map on a probability space , then for any , the averages converge pointwise almost everywhere. (In the important case when the shift map is ergodic, the pointwise limit is simply the mean of the original function .)

The pointwise ergodic theorem can be extended to measure-preserving actions of other amenable groups, if one uses a suitably “tempered” Følner sequence of averages; see this paper of Lindenstrauss for more details. (I also wrote up some notes on that paper here, back in 2006 before I had started this blog.) But the arguments used to handle the amenable case break down completely for non-amenable groups, and in particular for the free non-abelian group on two generators.

Nevo and Stein studied this problem and obtained a number of pointwise ergodic theorems for -actions on probability spaces . For instance, for the spherical averaging operators

(where denotes the length of the reduced word that forms ), they showed that converged pointwise almost everywhere provided that was in for some . (The need to restrict to spheres of even radius can be seen by considering the action of on the two-element set in which both generators of act by interchanging the elements, in which case is determined by the parity of .) This result was reproven with a different and simpler proof by Bufetov, who also managed to relax the condition to the weaker condition .

The question remained open as to whether the pointwise ergodic theorem for -actions held if one only assumed that was in . Nevo and Stein were able to establish this for the Cesàro averages , but not for itself. About six years ago, Assaf Naor and I tried our hand at this problem, and we were able to show an associated maximal inequality on , but due to the non-amenability of , this inequality did not transfer to and did not have any direct impact on this question, despite a fair amount of effort on our part to attack it.

Inspired by some recent conversations with Lewis Bowen, I returned to this problem. This time around, I tried to construct a counterexample to the pointwise ergodic theorem – something Assaf and I had not seriously attempted to do (perhaps due to being a bit too enamoured of our maximal inequality). I knew of an existing counterexample of Ornstein regarding a failure of an ergodic theorem for iterates of a self-adjoint Markov operator – in fact, I had written some notes on this example back in 2007. Upon revisiting my notes, I soon discovered that the Ornstein construction was adaptable to the setting, thus settling the problem in the negative:

Theorem 1 (Failure of pointwise ergodic theorem). There exists a measure-preserving -action on a probability space and a non-negative function such that for almost every .

To describe the proof of this theorem, let me first briefly sketch the main ideas of Ornstein’s construction, which gave an example of a self-adjoint Markov operator on a probability space and a non-negative such that for almost every . By some standard manipulations, it suffices to show that for any given and , there exists a self-adjoint Markov operator on a probability space and a non-negative with , such that on a set of measure at least . Actually, it will be convenient to replace the Markov chain with an *ancient Markov chain* – that is to say, a sequence of non-negative functions for both positive and negative , such that for all . The purpose of requiring the Markov chain to be ancient (that is, to extend infinitely far back in time) is to allow for the Markov chain to be shifted arbitrarily in time, which is key to Ornstein’s construction. (Technically, Ornstein’s original argument only uses functions that go back to a large negative time, rather than being infinitely ancient, but I will gloss over this point for the sake of discussion, as it turns out that the version of the argument can be run using infinitely ancient chains.)

For any , let denote the claim that for any , there exists an ancient Markov chain with such that on a set of measure at least . Clearly holds since we can just take for all . Our objective is to show that holds for arbitrarily small . The heart of Ornstein’s argument is then the implication

for any , which upon iteration quickly gives the desired claim.

Let’s see informally how (1) works. By hypothesis, and ignoring epsilons, we can find an ancient Markov chain on some probability space of total mass , such that attains the value of or greater almost everywhere. Assuming that the Markov process is irreducible, the will eventually converge as to the constant value of ; in particular, its final state will essentially stay above (up to small errors).

Now suppose we duplicate the Markov process by replacing with a double copy (giving the uniform probability measure), and using the disjoint sum of the Markov operators on and as the propagator, so that there is no interaction between the two components of this new system. Then the functions form an ancient Markov chain of mass at most that lives solely in the first half of this copy, and attains the value of or greater on almost all of the first half , but is zero on the second half. The final state of will be to stay above in the first half , but be zero on the second half.

Now we modify the above example by allowing an infinitesimal amount of interaction between the two halves , of the system (I mentally think of and as two identical boxes that a particle can bounce around in, and now we wish to connect the boxes by a tiny tube). The precise way in which this interaction is inserted is not terribly important so long as the new Markov process is irreducible. Once one does so, then the ancient Markov chain in the previous example gets replaced by a slightly different ancient Markov chain which is more or less identical with for negative times , or for bounded positive times , but for very large values of the final state is now constant across the entire state space , and will stay above on this space.

Finally, we consider an ancient Markov chain which is basically of the form

for some large parameter and for all (the approximation becomes increasingly inaccurate for much larger than , but never mind this for now). This is basically two copies of the original Markov process in separate, barely interacting state spaces , but with the second copy delayed by a large time delay , and also attenuated in amplitude by a factor of . The total mass of this process is now . Because of the component of , we see that basically attains the value of or greater on the first half . On the second half , we work with times close to . If is large enough, would have averaged out to about at such times, but the component can get as large as here. Summing (and continuing to ignore various epsilon losses), we see that can get as large as on almost all of the second half of . This concludes the rough sketch of how one establishes the implication (1).
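
The “two boxes joined by a tiny tube” picture can be simulated directly; the toy sketch below (mine, with made-up parameters, and with just one state per box rather than a genuine continuum) uses a self-adjoint Markov operator with coupling `eps`, and shows that the iterates retain the in-box values for times well below `1/eps` but equilibrate to the global mean well beyond it:

```python
import numpy as np

eps = 1e-3
T = np.array([[1 - eps, eps],
              [eps, 1 - eps]])      # symmetric stochastic matrix: a self-adjoint
                                    # Markov operator on two weakly coupled "boxes"
f = np.array([2.0, 0.0])            # initial function: all mass in the first box

for n in (0, 10, 100, 1000, 10000, 100000):
    fn = np.linalg.matrix_power(T, n) @ f
    print(n, fn)                    # first box stays near 2 until n ~ 1/eps,
                                    # then both entries settle at the mean 1
```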

It was observed by Bufetov that the spherical averages for a free group action can be lifted up to become powers of a Markov operator, basically by randomly assigning a “velocity vector” to one’s base point and then applying the Markov process that moves along that velocity vector (and then randomly changing the velocity vector at each time step, subject to the “reduced word” condition that the velocity never flips from to ). Thus the spherical average problem has a Markov operator interpretation, which opens the door to adapting the Ornstein construction to the setting of systems. This turns out to be doable after a certain amount of technical artifice; the main thing is to work with -measure preserving systems that admit ancient Markov chains that are initially supported in a very small region in the “interior” of the state space, so that one can couple such systems to each other “at the boundary” in the fashion needed to establish the analogue of (1) without disrupting the ancient dynamics of such chains. The initial such system (used to establish the base case ) comes from basically considering the action of on a (suitably renormalised) “infinitely large ball” in the Cayley graph, after suitably gluing together the boundary of this ball to complete the action. The ancient Markov chain associated to this system starts at the centre of this infinitely large ball at infinite negative time , and only reaches the boundary of this ball at the time .

The lonely runner conjecture is the following open problem:

Conjecture 1. Suppose one has $n$ runners on the unit circle ${\bf R}/{\bf Z}$, all starting at the origin and moving at different speeds. Then for each runner, there is at least one time $t$ for which that runner is “lonely” in the sense that it is separated by a distance at least $\frac{1}{n}$ from all other runners.

One can normalise the speed of the lonely runner to be zero, at which point the conjecture can be reformulated (after replacing $n$ by $n+1$) as follows:

Conjecture 2. Let $v_1, \dots, v_n$ be non-zero real numbers for some $n \geq 1$. Then there exists a real number $t$ such that the numbers $t v_1, \dots, t v_n$ are all a distance at least $\frac{1}{n+1}$ from the integers, thus $\| t v_i \| \geq \frac{1}{n+1}$ for all $i = 1, \dots, n$, where $\|x\|$ denotes the distance of $x$ to the nearest integer.

This conjecture has been proven for $n \leq 6$, but remains open for larger $n$. The bound $\frac{1}{n+1}$ is optimal, as can be seen by looking at the case $v_i := i$ and applying the Dirichlet approximation theorem. Note that for each non-zero $v$, the set $\{ t: \| t v \| \geq \epsilon \}$ has (Banach) density $1 - 2\epsilon$ for any $0 < \epsilon < \frac{1}{2}$, and from this and the union bound we can easily find $t$ for which

$$ \| t v_1 \|, \dots, \| t v_n \| \geq \frac{1}{2n} - \epsilon $$

for any $\epsilon > 0$, but it has proven to be quite challenging to remove the factor of $2$ to increase $\frac{1}{2n}$ to $\frac{1}{n+1}$. (As far as I know, even improving $\frac{1}{2n}$ to $\frac{c}{n}$ for some absolute constant $c > \frac{1}{2}$ and sufficiently large $n$ remains open.)

The speeds in the above conjecture are arbitrary non-zero reals, but it has been known for some time that one can reduce without loss of generality to the case when the are rationals, or equivalently (by scaling) to the case where they are integers; see e.g. Section 4 of this paper of Bohman, Holzman, and Kleitman.

In this post I would like to remark on a slight refinement of this reduction, in which the speeds are integers of *bounded size*, where the bound depends on . More precisely:

Proposition 3. In order to prove the lonely runner conjecture, it suffices to do so under the additional assumption that the $v_i$ are integers of size at most $n^{Cn^2}$, where $C$ is an (explicitly computable) absolute constant. (More precisely: if this restricted version of the lonely runner conjecture is true for all $n \leq n_0$, then the original version of the conjecture is also true for all $n \leq n_0$.)

In principle, this proposition allows one to verify the lonely runner conjecture for a given in finite time; however the number of cases to check with this proposition grows faster than exponentially in , and so this is unfortunately not a feasible approach to verifying the lonely runner conjecture for more values of than currently known.
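
For very small $n$ one can carry out such verifications directly. The following brute-force sketch (mine; the speed tuples and search bound are arbitrary choices) searches rational times for integer speed tuples and checks the separation demanded by Conjecture 2 exactly:

```python
from fractions import Fraction

def dist_to_int(x):
    frac = x % 1
    return min(frac, 1 - frac)

def lonely_time(speeds, qmax=30):
    # search times t = p/q for one with ||v t|| >= 1/(n+1) for every speed v
    n = len(speeds)
    target = Fraction(1, n + 1)
    for q in range(1, qmax + 1):
        for p in range(q):
            t = Fraction(p, q)
            if all(dist_to_int(v * t) >= target for v in speeds):
                return t
    return None

for speeds in [(1,), (1, 2), (1, 2, 3), (1, 3, 4), (2, 3, 5, 7)]:
    print(speeds, lonely_time(speeds))   # e.g. (1, 2) is lonely at t = 1/3
```

Of course, this is only a finite search over a small grid of rational times; the point of Proposition 3 is that a (much larger) search of this general flavour suffices in principle for any fixed $n$.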

One of the key tools needed to prove this proposition is the following additive combinatorics result. Recall that a *generalised arithmetic progression* (or GAP) in the reals is a set of the form

$$ P = \{ n_1 a_1 + \dots + n_d a_d : n_i \in {\bf Z}, |n_i| \leq N_i \hbox{ for all } i = 1, \dots, d \} $$

for some $a_1, \dots, a_d \in {\bf R}$ and $N_1, \dots, N_d > 0$; the quantity $d$ is called the *rank* of the progression. If $t > 0$, the progression $P$ is said to be *$t$-proper* if the sums $n_1 a_1 + \dots + n_d a_d$ with $|n_i| \leq t N_i$ for $i = 1, \dots, d$ are all distinct. We have

Lemma 4 (Progressions lie inside proper progressions). Let be a GAP of rank in the reals, and let . Then is contained in a -proper GAP of rank at most , with

*Proof:* See Theorem 2.1 of this paper of Bilu. (Very similar results can also be found in Theorem 3.40 of my book with Van Vu, or Theorem 1.10 of this paper of mine with Van Vu.)

Now let , and assume inductively that the lonely runner conjecture has been proven for all smaller values of , as well as for the current value of in the case that are integers of size at most for some sufficiently large . We will show that the lonely runner conjecture holds in general for this choice of .

Let $v_1, \dots, v_n$ be non-zero real numbers. Let be a large absolute constant to be chosen later. From the above lemma applied to the GAP , one can find a -proper GAP of rank at most containing such that

in particular if is large enough depending on .

We write

for some , , and . We thus have for , where is the linear map and are non-zero and lie in the box .

We now need an elementary lemma that allows us to create a “collision” between two of the via a linear projection, without making any of the collide with the origin:

Lemma 5. Let be non-zero vectors that are not all collinear with the origin. Then, after replacing one or more of the with their negatives if necessary, there exists a pair such that , and such that none of the is a scalar multiple of .

*Proof:* We may assume that , since the case is vacuous. Applying a generic linear projection to (which does not affect collinearity, or the property that a given is a scalar multiple of ), we may then reduce to the case .

By a rotation and relabeling, we may assume that lies on the negative -axis; by flipping signs as necessary we may then assume that all of the lie in the closed right half-plane. As the are not all collinear with the origin, one of the lies off of the -axis; by relabeling, we may assume that lies off of the axis and makes a minimal angle with the -axis. Then the angle of with the -axis is non-zero but smaller than any non-zero angle that any of the make with this axis, and so none of the are a scalar multiple of , and the claim follows.

We now return to the proof of the proposition. If the are all collinear with the origin, then lie in a one-dimensional arithmetic progression , and then by rescaling we may take the to be integers of magnitude at most , at which point we are done by hypothesis. Thus, we may assume that the are not all collinear with the origin, and so by the above lemma and relabeling we may assume that is non-zero, and that none of the are scalar multiples of .

with for ; by relabeling we may assume without loss of generality that is non-zero, and furthermore that

where is a natural number and have no common factor.

We now define a variant of by the map

where the are real numbers that are linearly independent over , whose precise value will not be of importance in our argument. This is a linear map with the property that , so that consists of at most distinct real numbers, which are non-zero since none of the are scalar multiples of , and the are linearly independent over . As we are assuming inductively that the lonely runner conjecture holds for , we conclude (after deleting duplicates) that there exists at least one real number such that

We would like to “approximate” by to then conclude that there is at least one real number such that

It turns out that we can do this by a Fourier-analytic argument taking advantage of the -proper nature of . Firstly, we see from the Dirichlet approximation theorem that one has

for a set of reals of (Banach) density . Thus, by the triangle inequality, we have

for a set of reals of density .

Applying a smooth Fourier multiplier of Littlewood-Paley type, one can find a trigonometric polynomial

which takes values in , is for , and is no larger than for . We then have

where denotes the mean value of a quasiperiodic function on the reals . We expand the left-hand side out as

From the genericity of , we see that the constraint

occurs if and only if is a scalar multiple of , or equivalently (by (1), (2)) an integer multiple of . Thus

By Fourier expansion and writing , we may write (4) as

The support of the implies that . Because of the -properness of , we see (for large enough) that the equation

and conversely that (7) implies that (6) holds for some with . From (3) we thus have

In particular, there exists a such that

Since is bounded in magnitude by , and is bounded by , we thus have

for each , which by the size properties of implies that for all , giving the lonely runner conjecture for .

The von Neumann ergodic theorem (the Hilbert space version of the mean ergodic theorem) asserts that if $U: H \to H$ is a unitary operator on a Hilbert space $H$, and $v \in H$ is a vector in that Hilbert space, then one has

$$ \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N U^n v = \pi_{H^U} v $$

in the strong topology, where $H^U := \{ w \in H: Uw = w \}$ is the $U$-invariant subspace of $H$, and $\pi_{H^U}$ is the orthogonal projection to $H^U$. (See e.g. these previous lecture notes for a proof.) The same proof extends to more general amenable groups: if $G$ is a countable amenable group acting on a Hilbert space $H$ by unitary transformations $T^g: H \to H$ for $g \in G$, and $v \in H$ is a vector in that Hilbert space, then one has

$$ \lim_{N \to \infty} {\mathbb E}_{g \in \Phi_N} T^g v = \pi_{H^G} v $$

for any Følner sequence $\Phi_N$ of $G$, where $H^G := \{ w \in H: T^g w = w \hbox{ for all } g \in G \}$ is the $G$-invariant subspace, and ${\mathbb E}_{g \in \Phi_N} T^g v := \frac{1}{|\Phi_N|} \sum_{g \in \Phi_N} T^g v$ is the average of $T^g v$ on $\Phi_N$. Thus one can interpret $\pi_{H^G} v$ as a certain average of elements of the orbit $\{ T^g v: g \in G \}$ of $v$.
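
As a toy illustration (mine, not from the post), one can watch the mean ergodic theorem in action for the cyclic shift acting on a finite-dimensional space, whose invariant subspace is the span of the constant vector:

```python
import numpy as np

d, N = 7, 10**4
rng = np.random.default_rng(0)
v = rng.standard_normal(d)

avg = np.zeros(d)
w = v.copy()
for _ in range(N):                 # accumulate (1/N) sum_{n < N} U^n v
    avg += w
    w = np.roll(w, 1)              # U: the cyclic shift, a unitary operator
avg /= N

proj = np.full(d, v.mean())        # orthogonal projection onto the constants
print(np.linalg.norm(avg - proj))  # small, and -> 0 as N -> infinity
```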

In a previous blog post, I noted a variant of this ergodic theorem (due to Alaoglu and Birkhoff) that holds even when the group is not amenable (or not discrete), using a more abstract notion of averaging:

Theorem 1 (Abstract ergodic theorem). Let $G$ be an arbitrary group acting unitarily on a Hilbert space $H$, and let $v$ be a vector in $H$. Then $\pi_{H^G} v$ is the element in the closed convex hull of $\{ T^g v: g \in G \}$ of minimal norm, and is also the unique element of $H^G$ in this closed convex hull.

I recently stumbled upon a different way to think about this theorem, in the additive case when $G$ is abelian, which has a closer resemblance to the classical mean ergodic theorem. Given an arbitrary additive group $G = (G,+)$ (not necessarily discrete, or countable), let ${\mathcal M}(G)$ denote the collection of finite non-empty multisets in $G$ – that is to say, unordered collections $\{a_1, \dots, a_n\}$ of elements $a_1, \dots, a_n$ of $G$, not necessarily distinct, for some positive integer $n$. Given two multisets $A = \{a_1, \dots, a_n\}$, $B = \{b_1, \dots, b_m\}$ in ${\mathcal M}(G)$, we can form the sum set $A + B := \{ a_i + b_j: 1 \leq i \leq n, 1 \leq j \leq m \}$. Note that the sum set can contain multiplicity even when $A, B$ do not; for instance, $\{1,2\} + \{1,2\} = \{2,3,3,4\}$. Given a multiset $A = \{a_1, \dots, a_n\}$ in ${\mathcal M}(G)$, and a function $f: G \to V$ from $G$ to a vector space $V$, we define the average ${\mathbb E}_{a \in A} f(a)$ as

$$ {\mathbb E}_{a \in A} f(a) := \frac{1}{n} \sum_{i=1}^n f(a_i). $$

Note that the multiplicity function of the set $A$ affects the average; for instance, we have ${\mathbb E}_{a \in \{1,2\}} f(a) = \frac{f(1)+f(2)}{2}$, but ${\mathbb E}_{a \in \{1,1,2\}} f(a) = \frac{2f(1)+f(2)}{3}$.
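
These conventions are easy to mechanise; the following sketch (mine) represents multisets as `collections.Counter` objects and exhibits both the multiplicity created by sum sets and its effect on averages:

```python
from collections import Counter
from fractions import Fraction

def sumset(A, B):
    # A + B as multisets: multiplicities multiply and accumulate
    C = Counter()
    for a, ma in A.items():
        for b, mb in B.items():
            C[a + b] += ma * mb
    return C

def average(f, A):
    # the average of f over the multiset A, weighted by multiplicity
    total = sum(A.values())
    return sum(Fraction(m, total) * f(a) for a, m in A.items())

A = Counter({0: 1, 1: 1})                 # the multiset {0, 1}
S = sumset(A, A)                          # {0, 1, 1, 2}: note the multiplicity
print(S)                                  # Counter({1: 2, 0: 1, 2: 1})
print(average(lambda x: x * x, A))        # (0 + 1)/2         = 1/2
print(average(lambda x: x * x, S))        # (0 + 1 + 1 + 4)/4 = 3/2
```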

We can define a directed set on ${\mathcal M}(G)$ as follows: given two multisets $A, B$ in ${\mathcal M}(G)$, we write $A \leq B$ if we have $B = A + C$ for some multiset $C$. Thus for instance we have $A \leq A + B$ for any $A, B \in {\mathcal M}(G)$. It is easy to verify that this operation is transitive and reflexive, and is directed because any two elements $A, B$ of ${\mathcal M}(G)$ have a common upper bound, namely $A + B$. (This is where we need $G$ to be abelian.) The notion of convergence along a net now allows us to define the notion of convergence along ${\mathcal M}(G)$; given a family $(x_A)_{A \in {\mathcal M}(G)}$ of points in a topological space $X$ indexed by elements $A$ of ${\mathcal M}(G)$, and a point $x$ in $X$, we say that $x_A$ *converges* to $x$ along ${\mathcal M}(G)$ if, for every open neighbourhood $U$ of $x$ in $X$, one has $x_A \in U$ for sufficiently large $A$, that is to say there exists $A_0 \in {\mathcal M}(G)$ such that $x_A \in U$ for all $A \geq A_0$. If the topological space is Hausdorff, then the limit $x$ is unique (if it exists), and we then write

When takes values in the reals, one can also define the limit superior or limit inferior along such nets in the obvious fashion.

We can then give an alternate formulation of the abstract ergodic theorem in the abelian case:

Theorem 2 (Abelian abstract ergodic theorem). Let $G = (G,+)$ be an arbitrary additive group acting unitarily on a Hilbert space $H$, and let $v$ be a vector in $H$. Then the averages ${\mathbb E}_{a \in A} T^a v$ converge along ${\mathcal M}(G)$ to $\pi_{H^G} v$ in the strong topology of $H$.

*Proof:* Suppose that $A \leq B$, so that $B = A + C$ for some multiset $C$. Then

so by unitarity and the triangle inequality we have

thus is monotone non-increasing in . Since this quantity is bounded between and , we conclude that the limit exists. Thus, for any , we have for sufficiently large that

for all . In particular, for any , we have

We can write

and so from the parallelogram law and unitarity we have

for all , and hence by the triangle inequality (averaging over a finite multiset )

for any . This shows that is a Cauchy sequence in (in the strong topology), and hence (by the completeness of ) tends to a limit. Shifting by a group element , we have

and hence is invariant under shifts, and thus lies in . On the other hand, for any and , we have

and thus on taking strong limits

and so is orthogonal to . Combining these two facts we see that is equal to as claimed.

To relate this result to the classical ergodic theorem, we observe

Lemma 3. Let $G$ be a countable additive group, with a Følner sequence $\Phi_N$, and let be a bounded sequence in a normed vector space indexed by $G$. If exists, then exists, and the two limits are equal.

*Proof:* From the Følner property, we see that for any and any , the averages and differ by at most in norm if is sufficiently large depending on , (and the ). On the other hand, by the existence of the limit , the averages and differ by at most in norm if is sufficiently large depending on (regardless of how large is). The claim follows.

It turns out that this approach can also be used as an alternate way to construct the Gowers–Host-Kra seminorms in ergodic theory, which has the feature that it does not explicitly require any amenability on the group (or separability on the underlying measure space), though, as pointed out to me in comments, even uncountable abelian groups are amenable in the sense of possessing an invariant mean, even if they do not have a Følner sequence.

Given an arbitrary additive group $G$, define a *$G$-system* to be a probability space (not necessarily separable or standard Borel), together with a collection of invertible, measure-preserving maps, such that is the identity and (modulo null sets) for all . This then gives isomorphisms for by setting . From the above abstract ergodic theorem, we see that

in the strong topology of for any , where is the collection of measurable sets that are essentially -invariant in the sense that modulo null sets for all , and is the conditional expectation of with respect to .

In a similar spirit, we have

Theorem 4 (Convergence of Gowers-Host-Kra seminorms). Let be a $G$-system for some additive group $G$. Let be a natural number, and for every , let , which for simplicity we take to be real-valued. Then the expression

converges, where we write , and we are using the product direct set on to define the convergence . In particular, for , the limit

converges.

We prove this theorem below the fold. It implies a number of other known descriptions of the Gowers-Host-Kra seminorms , for instance that

for , while from the ergodic theorem we have

This definition also manifestly demonstrates the cube symmetries of the Host-Kra measures on , defined via duality by requiring that

In a subsequent blog post I hope to present a more detailed study of the norm and its relationship with eigenfunctions and the Kronecker factor, without assuming any amenability on or any separability or topological structure on .

We return to the study of the Riemann zeta function , focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for , which in turn lead to improved estimates for the error term in the prime number theorem.

In equation (21) of Notes 2 we obtained the somewhat crude estimates

for any and with and . Setting , we obtained the crude estimate

in this region. In particular, if and then we had . Using the functional equation and the Hadamard three lines lemma, we can improve this to ; see Supplement 3.

Now we seek better upper bounds on . We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1. Let with and . Then

where .

*Proof:* We fix a smooth function with for and for , and allow implied constants to depend on . Let with . From Exercise 33 of Supplement 3, we have

for some sufficiently large absolute constant . By dyadic decomposition, we thus have

We can absorb the first term in the second using the case of the supremum. Writing , where

it thus suffices to show that

for each . But from the fundamental theorem of calculus, the left-hand side can be written as

and the claim then follows from the triangle inequality and a routine calculation.

We are thus interested in getting good bounds on the sum . More generally, we consider normalised exponential sums of the form

where is an interval of length at most for some , and is a smooth function. We will assume smoothness estimates of the form

for some , all , and all , where is the -fold derivative of ; in the case , of interest for the Riemann zeta function, we easily verify that these estimates hold with . (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative of directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to .
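
To make (2) concrete, here is an illustrative computation (mine; the logarithmic phase and all parameter choices are assumptions for the sake of example, not taken from the notes) of the normalised sum with a zeta-type phase on an interval $[N, 2N)$:

```python
import cmath, math

def normalised_sum(t, N):
    # (1/N) |sum_{N <= n < 2N} e(F(n))| with F(x) = -(t / 2 pi) log x,
    # the phase arising from n^{-it} in the Riemann zeta function
    F = lambda x: -(t / (2 * math.pi)) * math.log(x)
    S = sum(cmath.exp(2j * math.pi * F(n)) for n in range(N, 2 * N))
    return abs(S) / N

t = 10**6
for N in (100, 1000, 10000):
    print(N, normalised_sum(t, N))   # noticeably below the trivial bound 1
```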

The trivial bound for (2) is

and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of for (2) for any if ; this assertion (a special case of the *exponent pair hypothesis*) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form when and depends on . We present two such results here, which perform well for small and large values of respectively:

Theorem 2. Let , let be an interval of length at most , and let be a smooth function obeying (3) for all and .

The factor of can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of . The estimate (6) is superior to (5) when for large, since (after optimising in ) (5) gives a gain of the form over the trivial bound, while (6) gives . We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on , and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating like a polynomial of degree roughly ; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a *bilinear* function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about times, which is why the gain is only exponentially small in . The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of – by using Hölder’s inequality in combination with the fact that the discrete curve becomes roughly equidistributed in the box after taking the sumset of about copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “-adic” version of this equidistribution to reduce the claim at a given scale to a smaller scale with , and then proceeding by induction.
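
The degree-lowering effect of a single differencing step can be seen symbolically; the sketch below (mine, with an arbitrary cubic phase) repeatedly replaces $f(x)$ by $f(x+h) - f(x)$ and prints the degrees, illustrating why roughly $d$ squarings linearise a degree-$d$ phase:

```python
from math import comb

def shift(coeffs, h):
    # coefficients of p(x + h), where coeffs[i] is the x^i coefficient of p
    out = [0.0] * len(coeffs)
    for j, c in enumerate(coeffs):
        for i in range(j + 1):
            out[i] += c * comb(j, i) * h ** (j - i)
    return out

def difference(coeffs, h):
    # coefficients of the differenced phase p(x + h) - p(x): degree drops by 1
    d = [a - b for a, b in zip(shift(coeffs, h), coeffs)]
    while len(d) > 1 and abs(d[-1]) < 1e-12:
        d.pop()
    return d

p = [0.3, 0.1, 0.05, 0.007]          # an arbitrary cubic phase
for step in range(4):
    print(step, "degree", len(p) - 1, p)
    p = difference(p, 1.0)           # degrees 3, 2, 1, 0 in turn
```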

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

- (i) Show that for all . (*Hint:* use the case of the Van der Corput estimate.)
- (ii) For any , show that as .

Exercise 4. Let be such that , and let .

- (i) (Littlewood bound) Use the van der Corput estimate to show that whenever .
- (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that whenever .

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region , which in turn leads to the prime number theorem with error term

for . If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

(which is only slightly wider than the classical zero-free region) and an error term

in the prime number theorem.

Exercise 5 (Vinogradov-Korobov in arithmetic progressions). Let be a non-principal character of modulus .

- (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that whenever and . (*Hint:* use the Vinogradov estimate and a change of variables to control for various intervals of length at most and residue classes , in the regime (say). For , do not try to capture any cancellation and just use the triangle inequality instead.)
- (ii) Obtain a zero-free region

for , for some (effective) absolute constant .

- (iii) Obtain the prime number theorem in arithmetic progressions with error term

whenever , , is primitive, and depends (ineffectively) on .
