
One of the basic objects of study in combinatorics is the finite string $(a_n)_{n=0}^{N-1}$ or infinite string $(a_n)_{n=0}^{\infty}$ of symbols $a_n$ from some given alphabet $A$, which could be either finite or infinite (but which we shall usually take to be compact). For instance, a set $E$ of natural numbers can be identified with the infinite string of $0$s and $1$s formed by the indicator of $E$, e.g. the even numbers can be identified with the string $101010\ldots$ from the alphabet $\{0,1\}$, the multiples of three can be identified with the string $100100100\ldots$, and so forth. One can also consider doubly infinite strings $(a_n)_{n \in \mathbb{Z}}$, which among other things can be used to describe arbitrary subsets of the integers.

On the other hand, the basic object of study in dynamics (and in related fields, such as ergodic theory) is that of a dynamical system $(X, T)$, that is to say a space $X$ together with a shift map $T: X \to X$ (which is often assumed to be invertible, although one can certainly study non-invertible dynamical systems as well). One often adds additional structure to this dynamical system, such as topological structure (giving rise to topological dynamics), measure-theoretic structure (giving rise to ergodic theory), complex structure (giving rise to complex dynamics), and so forth. A dynamical system gives rise to an action of the natural numbers $\mathbb{N}$ on the space $X$ by using the iterates $T^n: X \to X$ of $T$ for $n \in \mathbb{N}$; if $T$ is invertible, we can extend this action to an action of the integers $\mathbb{Z}$ on the same space. One can certainly also consider dynamical systems whose underlying group (or semigroup) is something other than $\mathbb{N}$ or $\mathbb{Z}$ (e.g. one can consider continuous dynamical systems in which the evolution group is $\mathbb{R}$), but we will restrict attention to the classical situation of $\mathbb{N}$ or $\mathbb{Z}$ actions here.

There is a fundamental *correspondence principle* connecting the study of strings (or subsets of the natural numbers or integers) with the study of dynamical systems. In one direction, given a dynamical system $(X, T)$, an *observable* $c: X \to A$ taking values in some alphabet $A$, and some initial datum $x_0 \in X$, we can first form the forward orbit $(T^n x_0)_{n \in \mathbb{N}}$ of $x_0$, and then observe this orbit using $c$ to obtain an infinite string $(c(T^n x_0))_{n \in \mathbb{N}}$. If the shift $T$ in this system is invertible, one can extend this infinite string into a doubly infinite string $(c(T^n x_0))_{n \in \mathbb{Z}}$. Thus we see that every quadruplet $(X, T, c, x_0)$ consisting of a dynamical system $(X, T)$, an observable $c$, and an initial datum $x_0$ creates an infinite string.

Example 1. If $X$ is the three-element set $X = \mathbb{Z}/3\mathbb{Z}$ with the shift map $Tx := x+1$, $c: X \to \{0,1\}$ is the observable that takes the value $1$ at the residue class $0 \bmod 3$ and zero at the other two classes, and one starts with the initial datum $x_0 = 0 \bmod 3$, then the observed string $(c(T^n x_0))_{n \in \mathbb{N}}$ becomes the indicator $100100100\ldots$ of the multiples of three.
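
Example 1 is small enough to run directly. The following sketch (the helper function and its name are our own, purely for illustration) iterates the shift on $\mathbb{Z}/3\mathbb{Z}$ and observes the forward orbit:

```python
def observed_string(T, c, x0, length):
    """Observe the forward orbit x0, T(x0), T^2(x0), ... through the observable c."""
    out, x = [], x0
    for _ in range(length):
        out.append(c(x))
        x = T(x)
    return out

T = lambda x: (x + 1) % 3          # shift map on Z/3Z
c = lambda x: 1 if x == 0 else 0   # indicator of the residue class 0
print(observed_string(T, c, 0, 9))  # [1, 0, 0, 1, 0, 0, 1, 0, 0]
```

The output is the indicator of the multiples of three, as in the example.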

In the converse direction, every infinite string $a = (a_n)_{n \in \mathbb{N}}$ in some alphabet $A$ arises (in a decidedly *non*-unique fashion) from a quadruplet $(X, T, c, x_0)$ in the above fashion. This can be easily seen by the following “universal” construction: take $X$ to be the set $A^{\mathbb{N}}$ of infinite strings $(b_n)_{n \in \mathbb{N}}$ in the alphabet $A$, let $T: X \to X$ be the shift map

$$T (b_n)_{n \in \mathbb{N}} := (b_{n+1})_{n \in \mathbb{N}},$$

let $c: X \to A$ be the observable

$$c( (b_n)_{n \in \mathbb{N}} ) := b_0,$$

and let $x_0 \in X$ be the initial point

$$x_0 := (a_n)_{n \in \mathbb{N}}.$$

Then one easily sees that the observed string $(c(T^n x_0))_{n \in \mathbb{N}}$ is nothing more than the original string $a$. Note also that this construction can easily be adapted to doubly infinite strings by using $A^{\mathbb{Z}}$ instead of $A^{\mathbb{N}}$, at which point the shift map $T$ now becomes invertible. An important variant of this construction also attaches an invariant probability measure to $X$ that is associated to the limiting density of various sets associated to the string $a$, and leads to the *Furstenberg correspondence principle*, discussed for instance in these previous blog posts. Such principles allow one to rigorously pass back and forth between the combinatorics of strings and the dynamics of systems; for instance, Furstenberg famously used his correspondence principle to demonstrate the equivalence of Szemerédi’s theorem on arithmetic progressions with what is now known as the Furstenberg multiple recurrence theorem in ergodic theory.
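
As a quick illustration (a minimal sketch, with an infinite string modelled lazily as a function $n \mapsto a_n$; the helper names are ours), the universal construction really does return the original string:

```python
def shift(b):                    # the shift map T: (b_n) -> (b_{n+1})
    return lambda n: b(n + 1)

def observable(b):               # the observable c: (b_n) -> b_0
    return b(0)

a = lambda n: 1 if n % 3 == 0 else 0   # the string 100100100...

x, out = a, []                   # start at the initial point x_0 = a
for _ in range(6):
    out.append(observable(x))    # observe, then shift
    x = shift(x)
print(out)  # [1, 0, 0, 1, 0, 0] -- the observed string recovers a
```
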

In the case when the alphabet is the binary alphabet $\{0,1\}$, and (for technical reasons related to the infamous non-injectivity of the binary representation system, e.g. $0.0111\ldots = 0.1000\ldots$) the string $a$ does not end with an infinite string of $1$s, then one can reformulate the above universal construction by taking $X$ to be the interval $[0,1)$, $T$ to be the doubling map $Tx := 2x \bmod 1$, $c$ to be the observable that takes the value $1$ on $[1/2, 1)$ and $0$ on $[0, 1/2)$ (that is, $c(x)$ is the first binary digit of $x$), and $x_0$ to be the real number $x_0 := \sum_{n=0}^\infty a_n 2^{-n-1}$ (that is, $x_0 = 0.a_0 a_1 a_2 \ldots$ in binary).
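
In code, this unit-interval model looks as follows (a sketch; we use exact rational arithmetic via `fractions.Fraction`, since floating point would lose the binary digits after about fifty doublings; the starting point $4/7 = 0.\overline{100}$ in binary is the indicator of the multiples of three):

```python
from fractions import Fraction

def doubling_orbit_digits(x0, length):
    """Read binary digits off the orbit of x0 under the doubling map."""
    out, x = [], x0
    for _ in range(length):
        out.append(1 if x >= Fraction(1, 2) else 0)  # first binary digit of x
        x = (2 * x) % 1                              # doubling map T x = 2x mod 1
    return out

print(doubling_orbit_digits(Fraction(4, 7), 9))  # [1, 0, 0, 1, 0, 0, 1, 0, 0]
```
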

The above universal construction is very easy to describe, and is well suited for “generic” strings that have no further obvious structure to them, but it often leads to dynamical systems that are much larger and more complicated than is actually needed to produce the desired string $a$, and also often obscures some of the key dynamical features associated to that sequence. For instance, to generate the indicator $100100100\ldots$ of the multiples of three that was mentioned previously, the above universal construction requires an uncountable space $X$ and a dynamics which does not obviously reflect the key features of the sequence such as its periodicity. (Using the unit interval model, the dynamics arise from the orbit of $4/7 = (0.100100100\ldots)_2$ under the doubling map, which is a rather artificial way to describe the indicator function of the multiples of three.)

A related aesthetic objection to the universal construction is that, of the four components $X, T, c, x_0$ of the quadruplet used to generate the sequence, three of the components $X, T, c$ are completely universal (in that they do not depend at all on the sequence), leaving only the initial datum $x_0$ to carry all the distinctive features of the original sequence. While there is nothing wrong with this mathematically, from a conceptual point of view it would make sense to have all four components of the quadruplet adapted to the sequence, in order to take advantage of the accumulated intuition about various special dynamical systems (and special observables), not just special initial data.

One step in this direction can be made by restricting $X$ to the orbit $\{ T^n x_0: n \in \mathbb{N} \}$ of the initial datum $x_0$ (actually for technical reasons it is better to restrict to the topological closure of this orbit, in order to keep $X$ compact). For instance, starting with the sequence $100100100\ldots$, the orbit now consists of just three points $100100\ldots$, $001001\ldots$, $010010\ldots$, bringing the system more in line with the example in Example 1. Technically, this is the “optimal” representation of the sequence by a quadruplet $(X, T, c, x_0)$, because any other such representation $(X', T', c', x'_0)$ factors onto this one (in the sense that there is a unique map $\pi: X' \to X$ with $\pi \circ T' = T \circ \pi$, $c' = c \circ \pi$, and $\pi(x'_0) = x_0$). However, from a conceptual point of view this representation is still somewhat unsatisfactory, given that the elements of the system are interpreted as infinite strings rather than elements of a more geometrically or algebraically rich object (e.g. points in a circle, torus, or other homogeneous space).

For general sequences, locating relevant geometric or algebraic structure in a dynamical system generating that sequence is an important but very difficult task (see e.g. this paper of Host and Kra, which is more or less devoted to precisely this task in the context of working out which component of a dynamical system controls the multiple recurrence behaviour of that system). However, for specific examples of sequences, one can use an informal procedure of educated guesswork in order to produce a more natural-looking quadruplet $(X, T, c, x_0)$ that generates a given sequence. This is not a particularly difficult or deep operation, but I found it very helpful in internalising the intuition behind the correspondence principle. Being non-rigorous, this procedure does not seem to be emphasised in most presentations of the correspondence principle, so I thought I would describe it here.

In Notes 5, we saw that the Gowers uniformity norms $U^{d+1}(\mathbb{F}^n)$ on vector spaces $\mathbb{F}^n$ in high characteristic were controlled by classical polynomial phases, i.e. phases of the form $e(\phi)$ for a classical polynomial $\phi$ of degree at most $d$.

Now we study the analogous situation on cyclic groups $\mathbb{Z}/N\mathbb{Z}$. Here, there is an unexpected surprise: the polynomial phases (classical or otherwise) are no longer sufficient to control the Gowers norms $U^{s+1}(\mathbb{Z}/N\mathbb{Z})$ once $s$ exceeds $1$. To resolve this problem, one must enlarge the class of polynomial phases to a larger class of functions. It turns out that there are at least three closely related options for this class: the *local polynomials*, the *bracket polynomials*, and the *nilsequences*. Each of the three classes has its own strengths and weaknesses, but in my opinion the nilsequences seem to be the most natural class, due to the rich algebraic and dynamical structure coming from the nilpotent Lie group undergirding such sequences. For reasons of space we shall focus primarily on the nilsequence viewpoint here.

Traditionally, nilsequences have been defined in terms of linear orbits $n \mapsto g^n x$ on nilmanifolds $G/\Gamma$; however, in recent years it has been realised that it is convenient for technical reasons (particularly for the quantitative “single-scale” theory) to generalise this setup to that of *polynomial* orbits $n \mapsto g(n) \Gamma$, and this is the perspective we will take here.

A polynomial phase on a finite abelian group $G$ is formed by starting with a polynomial $\phi: G \to \mathbb{R}/\mathbb{Z}$ to the unit circle, and then composing it with the exponential function $e: \mathbb{R}/\mathbb{Z} \to \mathbb{C}$ defined by $e(x) := e^{2\pi i x}$. To create a nilsequence, we generalise this construction by starting with a polynomial orbit $g: \mathbb{Z} \to G$ into a *nilmanifold* $G/\Gamma$, and then composing this with a Lipschitz function $F: G/\Gamma \to \mathbb{C}$. (The Lipschitz regularity class is convenient for minor technical reasons, but one could also use other regularity classes here if desired.) These classes of sequences certainly include the polynomial phases, but are somewhat more general; for instance, they *almost* include *bracket polynomial* phases such as $n \mapsto e( \lfloor \alpha n \rfloor \beta n )$. (The “almost” here is because the relevant functions involved are only piecewise Lipschitz rather than Lipschitz, but this is primarily a technical issue and one should view bracket polynomial phases as “morally” being nilsequences.)
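
As a concrete numerical sketch, here is a bracket quadratic phase $n \mapsto e(\lfloor \alpha n \rfloor \beta n)$ evaluated directly; the particular choices $\alpha = \sqrt{2}$, $\beta = \sqrt{3}$ are our own illustrative picks, not from the text:

```python
import math, cmath

def e(t):
    """The exponential e(t) := exp(2*pi*i*t)."""
    return cmath.exp(2j * math.pi * t)

alpha, beta = math.sqrt(2), math.sqrt(3)

# First few values of the bracket quadratic phase e([alpha n] beta n):
seq = [e(math.floor(alpha * n) * beta * n) for n in range(10)]
```

Each value lies on the unit circle, as a phase should; the floor function is what makes this only piecewise Lipschitz in the underlying torus coordinates.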

In these notes we set out the basic theory of these nilsequences, including their equidistribution theory (which generalises the equidistribution theory of polynomial flows on tori from Notes 1) and show that they are indeed obstructions to the Gowers norm being small. This leads to the *inverse conjecture for the Gowers norms*, which asserts that the Gowers norms on cyclic groups are indeed controlled by these sequences.

Ben Green and I have just uploaded to the arXiv our paper “An arithmetic regularity lemma, an associated counting lemma, and applications“, submitted (a little behind schedule) to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we describe the general-degree version of the *arithmetic regularity lemma*, which can be viewed as the counterpart of the Szemerédi regularity lemma, in which the object being regularised is a function $f: [N] \to [0,1]$ on a discrete interval $[N] = \{1,\ldots,N\}$ rather than a graph, and the type of patterns one wishes to count are additive patterns (such as arithmetic progressions $n, n+d, \ldots, n+(k-1)d$) rather than subgraphs. Very roughly speaking, this regularity lemma asserts that all such functions can be decomposed as a degree $\leq s$ nilsequence (or more precisely, a variant of a nilsequence that we call a *virtual irrational nilsequence*), plus a small error, plus a third error which is extremely tiny in the Gowers uniformity norm $U^{s+1}[N]$. In principle, at least, the latter two errors can be readily discarded in applications, so that the regularity lemma reduces many questions in additive combinatorics to questions concerning (virtual irrational) nilsequences. To work with these nilsequences, we also establish an *arithmetic counting lemma* that gives an integral formula for counting additive patterns weighted by such nilsequences.

The regularity lemma is a manifestation of the “dichotomy between structure and randomness”, as discussed for instance in my ICM article or FOCS article. In the degree $1$ case, this result is essentially due to Green. It is powered by the *inverse conjecture for the Gowers norms*, which we and Tamar Ziegler have recently established (paper to be forthcoming shortly; a special case of our argument is discussed here). The counting lemma is established through the quantitative equidistribution theory of nilmanifolds, which Ben and I set out in this paper.

The regularity and counting lemmas are designed to be used together, and in the paper we give three applications of this combination. Firstly, we give a new proof of Szemerédi’s theorem, which proceeds via an energy increment argument rather than a density increment one. Secondly, we establish a conjecture of Bergelson, Host, and Kra, namely that if $A \subset [N]$ has density $\alpha$, and $\varepsilon > 0$, then there exist $\gg_{\alpha,\varepsilon} N$ shifts $r$ for which $A$ contains at least $(\alpha^k - \varepsilon) N$ arithmetic progressions of length $k \in \{1,2,3,4\}$ of spacing $r$. (The case $k=3$ of this conjecture was established earlier by Green; the case $k \geq 5$ is false, as was shown by Ruzsa in an appendix to the Bergelson-Host-Kra paper.) Thirdly, we establish a variant of a recent result of Gowers-Wolf, showing that the true complexity of a system of linear forms indeed matches the conjectured value predicted in their first paper.

In all three applications, the scheme of proof can be described as follows:

- Apply the arithmetic regularity lemma, and decompose a relevant function $f$ into three pieces, $f = f_{nil} + f_{sml} + f_{unf}$.
- The uniform part $f_{unf}$ is so tiny in the Gowers uniformity norm that its contribution can be easily dealt with by an appropriate “generalised von Neumann theorem”.
- The contribution of the (virtual, irrational) nilsequence $f_{nil}$ can be controlled using the arithmetic counting lemma.
- Finally, one needs to check that the contribution of the small error $f_{sml}$ does not overwhelm the main term $f_{nil}$. This is the trickiest bit; one often needs to use the counting lemma again to show that one can find a set of arithmetic patterns for $f_{nil}$ that is sufficiently “equidistributed” that it is not impacted by the small error.

To illustrate the last point, let us give the following example. Suppose we have a set $A \subset [N]$ of some positive density (say $|A| \geq N/10$) and we have managed to prove that $A$ contains a reasonable number of arithmetic progressions of length three (say), e.g. it contains at least $\eta N^2$ such progressions for some small $\eta > 0$. Now we perturb $A$ by deleting a small number of elements from $A$ (small compared to $|A|$, but possibly large compared to $\eta N$) to create a new set $A'$. Can we still conclude that the new set $A'$ contains any arithmetic progressions of length three?

Unfortunately, the answer could be no; conceivably, all of the arithmetic progressions in $A$ could be wiped out by the elements removed from $A$, since each such element of $A$ could be associated with up to $\sim N/2$ (or even $\sim 3N/2$, counting all three possible positions in the progression) arithmetic progressions in $A$.

But suppose we knew that the arithmetic progressions in $A$ were *equidistributed*, in the sense that each element in $A$ belonged to roughly the same number of such arithmetic progressions, namely the average value (three times the total count of progressions, divided by $|A|$). Then each element deleted from $A$ only removes at most this average number of progressions, and so one can safely remove a small fraction of the elements from $A$ and still retain some arithmetic progressions. The same argument works if the arithmetic progressions are only *approximately* equidistributed, in the sense that the number of progressions that a given element belongs to concentrates sharply around its mean (for instance, by having a small variance), provided that the equidistribution is sufficiently strong. Fortunately, the arithmetic regularity and counting lemmas are designed to give precisely such a strong equidistribution result.
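
The bookkeeping here is elementary enough to check by direct count. The following sketch (our own helper names; the multiples of three are just a convenient example of a set whose progressions are spread out fairly evenly) counts three-term progressions and verifies that the maximum membership count bounds the damage a deletion can do:

```python
def count_3aps(A):
    """Count 3-term progressions (a, a+d, a+2d) with d > 0 inside the set A."""
    S = set(A)
    return sum(1 for a in S for b in S if b > a and 2 * b - a in S)

N = 300
A = set(range(0, N, 3))                  # multiples of 3 below N
total = count_3aps(A)

# How many progressions does each element of A belong to?
member = dict.fromkeys(A, 0)
for a in A:
    for b in A:
        if b > a and 2 * b - a in A:
            for x in (a, b, 2 * b - a):  # first, middle, last position
                member[x] += 1

max_membership = max(member.values())
# Deleting k elements destroys at most k * max_membership progressions,
# so a small deletion cannot wipe all of them out:
k = 5
print(total, max_membership, total - k * max_membership > 0)  # 2450 98 True
```
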

A succinct (but slightly inaccurate) summary of the regularity+counting lemma strategy would be that, in order to solve a problem in additive combinatorics, it “suffices to check it for nilsequences”. But this should come with a caveat, due to the issue of the small error above; in addition to checking the problem for nilsequences, the answer in the nilsequence case must be sufficiently “dispersed” in a suitable sense, so that it can survive the addition of a small (but not completely negligible) perturbation.

One last “production note”. Like our previous paper with Emmanuel Breuillard, we used Subversion to write this paper, which turned out to be a significant efficiency boost as we could work on different parts of the paper simultaneously (this was particularly important this time round as the paper was somewhat lengthy and complicated, and there was a submission deadline). When doing so, we found it convenient to split the paper into a dozen or so pieces (one for each section of the paper, basically) in order to avoid conflicts, and to help coordinate the writing process. I’m also looking into git (a more advanced version control system), and am planning to use it for another of my joint projects; I hope to be able to comment on the relative strengths of these systems (and of plain old email) in the future.

Ben Green and I have just uploaded to the arXiv our paper, “The Möbius function is asymptotically orthogonal to nilsequences“, which is a sequel to our earlier paper “The quantitative behaviour of polynomial orbits on nilmanifolds“, which I talked about in this post. In this paper, we apply our previous results on quantitative equidistribution of polynomial orbits in nilmanifolds to settle the Möbius and nilsequences conjecture from our earlier paper, as part of our program to detect and count solutions to linear equations in primes. (The other major plank of that program, namely the inverse conjecture for the Gowers norm, remains partially unresolved at present.) Roughly speaking, this conjecture asserts the asymptotic orthogonality

$$\frac{1}{N} \sum_{n=1}^{N} \mu(n) f(n) \ll_{G/\Gamma, F, A} \log^{-A} N \qquad (1)$$

for all $A > 0$,

between the Möbius function $\mu(n)$ and any Lipschitz nilsequence $f(n)$, by which we mean a sequence of the form $f(n) = F(g^n x)$ for some orbit $n \mapsto g^n x$ in a nilmanifold $G/\Gamma$, and some Lipschitz function $F: G/\Gamma \to \mathbb{C}$ on that nilmanifold. (The implied constant can depend on the nilmanifold and on the Lipschitz constant of $F$, but it is important that it be independent of the generator $g$ of the orbit or the base point $x$.) The case when $f$ is constant is essentially the prime number theorem; the case when $f$ is periodic is essentially the prime number theorem in arithmetic progressions. The case when $f$ is almost periodic (e.g. $f(n) = e(\alpha n)$ for some irrational $\alpha$) was established by Davenport, using the method of Vinogradov. The case when $f$ was a 2-step nilsequence (such as the quadratic phase $e(\alpha n^2)$; bracket quadratic phases such as $e(\lfloor \alpha n \rfloor \beta n)$ can also be covered by an approximation argument, though the logarithmic decay in (1) is weakened as a consequence) was done by Ben and myself a few years ago, by a rather *ad hoc* adaptation of Vinogradov’s method. By using the equidistribution theory of nilmanifolds, we were able to apply Vinogradov’s method more systematically, and in fact the proof is relatively short (20 pages), although it relies on the 64-page predecessor paper on equidistribution. I’ll talk a little bit more about the proof after the fold.

There is an amusing way to interpret the conjecture (using the close relationship between nilsequences and bracket polynomials) as an assertion of the pseudorandomness of the Liouville function from a computational complexity perspective. Suppose you possess a calculator with the wonderful property of being infinite precision: it can accept arbitrarily large real numbers as input, manipulate them precisely, and also store them in memory. However, this calculator has two limitations. Firstly, the only operations available are addition, subtraction, multiplication, integer part $x \mapsto \lfloor x \rfloor$, fractional part $x \mapsto \{x\}$, memory store (into one of O(1) registers), and memory recall (from one of these O(1) registers). In particular, there is no ability to perform division. Secondly, the calculator only has a finite display screen, and when it shows a real number, it only shows O(1) digits before and after the decimal point. (Thus, for instance, if two digits are shown before the decimal point and three after, the real number 1234.56789 might be displayed only as 34.567.)

Now suppose you play the following game with an opponent.

- The opponent specifies a large integer d.
- You get to enter in O(1) real constants of your choice into your calculator. These can be absolute constants such as $\pi$ and $e$, or they can depend on d (e.g. you can enter in $10^{-d}$).
- The opponent randomly selects a d-digit integer n, and enters n into one of the registers of your calculator.
- You are allowed to perform O(1) operations on your calculator and record what is displayed on the calculator’s viewscreen.
- After this, you have to guess whether the opponent’s number n had an odd or even number of prime factors (i.e. you guess the value of the Liouville function $\lambda(n)$).
- If you guess correctly, you win $1; otherwise, you lose $1.

For instance, using your calculator you can work out the first few digits of fractional parts such as $\{\alpha n\}$, provided of course that you entered the constant $\alpha$ in advance. You can also work out the leading digits of n by storing $10^{-d}$ in advance, and computing the first few digits of $\{10^{-d} n\}$.
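
Here is a minimal sketch of these calculator computations (the particular values of $d$ and $n$, and the pre-stored constants $10^{-d}$ and $\sqrt{2}$, are our own illustrative choices). Note that only addition, multiplication, floor and fractional part are used, never division:

```python
import math

frac = lambda x: x - math.floor(x)   # the calculator's fractional-part key

d = 12
n = 734589120345                     # the opponent's d-digit number

stored = 10.0 ** (-d)                # constant entered before seeing n
x = frac(n * stored)                 # 0.734589... : the leading digits of n
print(math.floor(x * 1000))          # 734 -- first three digits, no division used

alpha = math.sqrt(2)                 # another pre-stored constant
y = frac(alpha * n)                  # {sqrt(2) n}, computable with the same keys
```
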

Our theorem is *equivalent* to the assertion that as d goes to infinity (keeping the O(1) constants fixed), your probability of winning this game converges to 1/2; in other words, your calculator becomes asymptotically useless to you for the purposes of guessing whether n has an odd or even number of prime factors, and you may as well just guess randomly.

[I should mention a recent result in a similar spirit by Mauduit and Rivat; in this language, their result asserts that knowing the last few digits of the digit-sum of n does not increase your odds of guessing correctly.]
