Ben Green and I have just uploaded to the arXiv our paper “An arithmetic regularity lemma, an associated counting lemma, and applications“, submitted (a little behind schedule) to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we describe the general-degree version of the *arithmetic regularity lemma*, which can be viewed as the counterpart of the Szemerédi regularity lemma, in which the object being regularised is a function $f: [N] \to [0,1]$ on a discrete interval $[N] = \{1,\ldots,N\}$ rather than a graph, and the type of patterns one wishes to count are additive patterns (such as arithmetic progressions $n, n+d, \ldots, n+(k-1)d$) rather than subgraphs. Very roughly speaking, this regularity lemma asserts that all such functions can be decomposed as a degree $\leq s$ nilsequence (or more precisely, a variant of a nilsequence that we call a *virtual irrational nilsequence*), plus a small error, plus a third error which is extremely tiny in the Gowers uniformity norm $U^{s+1}[N]$. In principle, at least, the latter two errors can be readily discarded in applications, so that the regularity lemma reduces many questions in additive combinatorics to questions concerning (virtual irrational) nilsequences. To work with these nilsequences, we also establish an *arithmetic counting lemma* that gives an integral formula for counting additive patterns weighted by such nilsequences.

The regularity lemma is a manifestation of the “dichotomy between structure and randomness”, as discussed for instance in my ICM article or FOCS article. In the degree $s=1$ case, this result is essentially due to Green. It is powered by the *inverse conjecture for the Gowers norms*, which Ben, Tamar Ziegler, and I have recently established (paper to be forthcoming shortly; the $U^4$ case of our argument is discussed here). The counting lemma is established through the quantitative equidistribution theory of nilmanifolds, which Ben and I set out in this paper.

The regularity and counting lemmas are designed to be used together, and in the paper we give three applications of this combination. Firstly, we give a new proof of Szemerédi’s theorem, which proceeds via an energy increment argument rather than a density increment one. Secondly, we establish a conjecture of Bergelson, Host, and Kra, namely that if $A \subset [N]$ has density $\alpha$, and $\varepsilon > 0$, then there exist $\gg_{\alpha,\varepsilon} N$ shifts $d$ for which $A$ contains at least $(\alpha^4 - \varepsilon)N$ arithmetic progressions of length $4$ of spacing $d$. (The length three case of this conjecture was established earlier by Green; the analogous claim for progressions of length five or longer is false, as was shown by Ruzsa in an appendix to the Bergelson-Host-Kra paper.) Thirdly, we establish a variant of a recent result of Gowers-Wolf, showing that the true complexity of a system of linear forms over the integers indeed matches the conjectured value predicted in their first paper.

In all three applications, the scheme of proof can be described as follows:

- Apply the arithmetic regularity lemma, and decompose a relevant function $f$ into three pieces, $f_{\mathrm{nil}}, f_{\mathrm{sml}}, f_{\mathrm{unf}}$.
- The uniform part $f_{\mathrm{unf}}$ is so tiny in the Gowers uniformity norm that its contribution can be easily dealt with by an appropriate “generalised von Neumann theorem”.
- The contribution of the (virtual, irrational) nilsequence $f_{\mathrm{nil}}$ can be controlled using the arithmetic counting lemma.
- Finally, one needs to check that the contribution of the small error $f_{\mathrm{sml}}$ does not overwhelm the main term coming from $f_{\mathrm{nil}}$. This is the trickiest bit; one often needs to use the counting lemma again to show that one can find a set of arithmetic patterns for $f_{\mathrm{nil}}$ that is sufficiently “equidistributed” that it is not impacted by the small error.
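In symbols, and specialising to counting $k$-term progressions for illustration (a schematic sketch; the precise norms and error thresholds are the ones supplied by the regularity lemma), the scheme splits the counting average according to the decomposition:

```latex
\mathbb{E}_{n,d \in [N]} \prod_{i=0}^{k-1} f(n+id), \qquad
f = f_{\mathrm{nil}} + f_{\mathrm{sml}} + f_{\mathrm{unf}}.
```

Expanding the product, any term containing a copy of $f_{\mathrm{unf}}$ is negligible, as the generalised von Neumann theorem bounds it by $\| f_{\mathrm{unf}} \|_{U^{k-1}[N]}$ (up to constants); the pure term $\mathbb{E}_{n,d} \prod_i f_{\mathrm{nil}}(n+id)$ is evaluated by the counting lemma; and the terms involving $f_{\mathrm{sml}}$ are the ones requiring the equidistribution argument discussed below.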

To illustrate the last point, let us give the following example. Suppose we have a set $A \subset [N]$ of some positive density (say $|A| \geq \delta N$) and we have managed to prove that $A$ contains a reasonable number of arithmetic progressions of length $5$ (say), e.g. at least $c N^2$ such progressions for some $c > 0$. Now we perturb $A$ by deleting a small number, say $\varepsilon N$, of elements from $A$ to create a new set $A'$. Can we still conclude that the new set $A'$ contains any arithmetic progressions of length $5$?

Unfortunately, the answer could be no; conceivably, all of the $cN^2$ arithmetic progressions in $A$ could be wiped out by the $\varepsilon N$ elements removed from $A$, since each such element of $A$ could be associated with as many as $\sim N$ arithmetic progressions in $A$, and $\varepsilon N \cdot N$ can exceed $cN^2$.

But suppose we knew that the arithmetic progressions in $A$ were *equidistributed*, in the sense that each element of $A$ belonged to essentially the same number of such arithmetic progressions, namely about $5 c N^2 / |A| = O(cN/\delta)$. Then each element deleted from $A$ only removes $O(cN/\delta)$ progressions, and so one can safely remove $\varepsilon N$ elements from $A$ and still retain a positive fraction of the $cN^2$ arithmetic progressions, provided that $\varepsilon$ is sufficiently small compared with $\delta$. The same argument works if the arithmetic progressions are only *approximately* equidistributed, in the sense that the number of progressions that a given element belongs to concentrates sharply around its mean (for instance, by having a small variance), provided that the equidistribution is sufficiently strong. Fortunately, the arithmetic regularity and counting lemmas are designed to give precisely such a strong equidistribution result.
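To make the bookkeeping concrete, here is a toy numerical illustration of this robustness (my own sketch, not from the paper): in the extreme case $A = \{0,\ldots,N-1\}$, the length-five progressions are automatically equidistributed enough that even an adversarial deletion of a small fraction of elements cannot destroy all of them.

```python
# Toy illustration (not from the paper): equidistribution of patterns
# protects a progression count against deleting a few elements.
# Here A = {0,...,N-1} itself, so every 5-term progression in [N] lies in A.
N = 200

# Enumerate all arithmetic progressions n, n+d, ..., n+4d inside {0,...,N-1}.
aps = [(n, d) for d in range(1, (N - 1) // 4 + 1) for n in range(N - 4 * d)]
total_aps = len(aps)

# Incidence count: how many progressions pass through each element.
incidence = [0] * N
for n, d in aps:
    for i in range(5):
        incidence[n + i * d] += 1

# Adversarially delete the N/100 elements lying on the most progressions.
deleted = set(sorted(range(N), key=lambda v: -incidence[v])[: N // 100])
surviving = sum(1 for n, d in aps
                if all((n + i * d) not in deleted for i in range(5)))

print(total_aps, max(incidence), surviving)
```

Here no element lies on more than about $\frac{5}{4}N$ of the roughly $N^2/8$ progressions, so even a worst-case deletion of $N/100$ elements removes only $O(N^2/80)$ of them and many survive.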

A succinct (but slightly inaccurate) summary of the regularity+counting lemma strategy would be that, in order to solve a problem in additive combinatorics, it “suffices to check it for nilsequences”. But this comes with a caveat, due to the issue of the small error above: in addition to verifying the problem for nilsequences, the answer in the nilsequence case must be sufficiently “dispersed” in a suitable sense, so that it can survive the addition of a small (but not completely negligible) perturbation.

One last “production note”. As with our previous paper with Emmanuel Breuillard, we used Subversion to write this paper, which turned out to be a significant efficiency boost, as we could work on different parts of the paper simultaneously (this was particularly important this time round, as the paper was somewhat lengthy and complicated, and there was a submission deadline). When doing so, we found it convenient to split the paper into a dozen or so pieces (one for each section of the paper, basically) in order to avoid conflicts and to help coordinate the writing process. I’m also looking into git (a more advanced version control system), and am planning to use it for another of my joint projects; I hope to be able to comment on the relative strengths of these systems (and of plain old email) in the future.

## 5 comments


13 February, 2010 at 10:43 am

**gowers**: Two questions. The first (based on not having thought about the problem at all, so there is probably a fairly simple answer) is whether it is easy to explain why the arithmetic regularity approach to the conjecture of Bergelson, Host and Kra works up to k=4 and fails at k=5. Of course, you can say that there is a counterexample for k=5 but then it makes it mysterious that the proof works for k=4. There must be something special about the k=4 case that you have to add in as an extra ingredient and I’m curious to know what that is.

My second question is whether there is any mileage in a trick that Julia Wolf and I used in our paper, which we needed in order to deal with a problem rather similar to the one you mention about the contribution from the small part of f in your decomposition. We had a similar difficulty, but our method of dealing with it was to decompose the function f in several ways. To take your example of APs of length 5, the idea would be to start by decomposing f into structured + small + uniform and argue that we could replace the first occurrence of f in the AP-counting expression by just the structured part, at the cost of only a small error. Then we would decompose the second occurrence of f with different parameters, in our case because the first structured part of f did not have to be bounded with the same bound as the original f, but was still bounded independently of n. And so on. In that way we ended up just having to consider the structured parts, but these were different for different occurrences of f. (We didn’t apply this idea to APs of length 5, but I’m using this example so as to relate it to your discussion above.)

A lot depends on precise error dependences, so it may well be that this doesn’t work in your context, but I thought I’d ask just in case.

13 February, 2010 at 12:03 pm

**Terence Tao**: Dear Tim,

With regards to the first question, my take on it is that there is a residual “positive definite” structure to AP4s that doesn’t show up in AP5s or higher. Consider for instance the problem of counting the expression $\mathbb{E}_{n,d \in [N]} f(n) f(n+d) f(n+2d) f(n+3d)$ when $f$ takes the quadratically quasiperiodic form

$\displaystyle f(n) = F(\alpha n^2)$

for some irrational $\alpha$ and some smooth (or at least Lipschitz) $F: \mathbb{T} \to \mathbb{R}$. With irrationality, the only constraint between the phases $\alpha(n+id)^2$ for $i=0,1,2,3$ is provided by the identity

$\displaystyle \alpha n^2 - 3\alpha(n+d)^2 + 3\alpha(n+2d)^2 - \alpha(n+3d)^2 = 0$.
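Concretely, this identity is the vanishing of the third finite difference of a quadratic; expanding in powers of $n$ and $d$ gives a routine check:

```latex
\alpha n^2 - 3\alpha(n+d)^2 + 3\alpha(n+2d)^2 - \alpha(n+3d)^2
  = (1-3+3-1)\,\alpha n^2 + (0-6+12-6)\,\alpha n d + (0-3+12-9)\,\alpha d^2 = 0.
```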

As a consequence, given sufficient equidistribution, the above average should collapse to an integral

$\displaystyle \int F(x_0) F(x_1) F(x_2) F(x_3)$

on the three-dimensional subtorus $\{ x_0 - 3x_1 + 3x_2 - x_3 = 0 \}$ of $\mathbb{T}^4$. Now, by a minor miracle, this expression turns out to be positive definite; this can be seen from the Fourier representation

$\displaystyle \sum_m |\hat F(m)|^2 |\hat F(3m)|^2$

of this expression. In particular, if $F$ has mean $\delta$, then the average always has to be at least $\delta^4$, thanks to the contribution of the $0$ Fourier mode.
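As a quick numerical sanity check (my own sketch, not from the paper or this comment), one can discretise the torus by the cyclic group $\mathbb{Z}_Q$ and verify both the collapse of the quadruple average to the Fourier sum and the resulting lower bound by $\delta^4$:

```python
# Sanity check (my own sketch): on the cyclic group Z_Q as a discretised
# torus, the AP4-type form over the subtorus x3 = x0 - 3 x1 + 3 x2 equals
#   sum_m |F^(m)|^2 |F^(3m)|^2  >=  |F^(0)|^4 = (mean F)^4,
# since every term in the sum is nonnegative.
import numpy as np

Q = 64
delta = 0.3
x = np.arange(Q)
# A real test function with mean delta and a couple of Fourier modes.
F = delta + 0.2 * np.cos(2 * np.pi * 5 * x / Q) - 0.1 * np.sin(2 * np.pi * 11 * x / Q)

# Direct average of F(x0) F(x1) F(x2) F(x0 - 3 x1 + 3 x2) over Z_Q^3.
idx = (x[:, None, None] - 3 * x[None, :, None] + 3 * x[None, None, :]) % Q
direct = (F[:, None, None] * F[None, :, None] * F[None, None, :] * F[idx]).mean()

# Fourier side, with F^(m) = (1/Q) sum_x F(x) e(-mx/Q) and 3m taken mod Q.
Fhat = np.fft.fft(F) / Q
fourier = np.sum(np.abs(Fhat) ** 2 * np.abs(Fhat[(3 * x) % Q]) ** 2)

print(direct, fourier, delta ** 4)
```

The modes $5$ and $11$ were chosen so that $m$ and $3m$ never both lie in the spectrum except at $m=0$, so the Fourier sum here is exactly $\delta^4$; any real $F$ still satisfies the lower bound.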

If, in contrast, one looks at AP5s and a function of the same form $f(n) = F(\alpha n^2)$, the analogous expression

$\displaystyle \int F(x_0) F(x_1) F(x_2) F(x_3) F(x_4)$,

now taken over the three-dimensional subtorus of $\mathbb{T}^5$ cut out by the two constraints $x_0 - 3x_1 + 3x_2 - x_3 = 0$ and $x_1 - 3x_2 + 3x_3 - x_4 = 0$, has no obvious positivity properties; in particular, if $F$ has mean $\delta$, this average can be significantly less than $\delta^5$.

[As for AP3s, the mean number of AP3s in a set of density $\delta$ can certainly dip below $\delta^3 N^2$ (e.g. Behrend’s example), but if one fixes the difference $d$ so that the relevant phases $\alpha d$ are very close to $0$, then the AP3 count along that difference becomes approximately $\int_{\mathbb{T}} F^3$, which is indeed at least $\delta^3$. Note though that fixing the difference $d$ does not constrain the quadratic or higher dynamics.]

With regards to the second question, it seems our approach is a little different from yours in this respect. We have only one structural decomposition, but then we work a little harder on the structured part to make it extremely “irrational”. This is roughly analogous to regularising a family of polynomials to make them “high rank” (or more precisely, to represent the original family as combinations of a new family of high rank polynomials). With this irrationality we were able to handle all the small errors.

Perhaps your trick with Julia is more closely related to a trick that comes up in hypergraph regularity. When regularising a 3-uniform hypergraph, one might first do so by finding a colouring of the 2-uniform structure relative to which the 3-uniform component is reasonably regular, but then one has to perform a secondary regularisation in order to regularise the resulting 2-uniform structures in terms of a (much finer) 1-uniform structure (i.e. a vertex partition). This hierarchy of regularisations is crucial in order to properly do counting (and in particular to deal with all the small errors), as of course you well know…

20 May, 2010 at 9:45 pm

**254B, Notes 5: The inverse conjecture for the Gowers norm I. The finite field case « What’s new**: […] increment method (an energy increment argument is also possible, but is more complicated; see this paper). Let be a set of density at least containing no lines. This implies that the -linear […]

29 May, 2010 at 2:16 pm

**254B, Notes 6: The inverse conjecture for the Gowers norm II. The integer case « What’s new**: […] of consequences, analogous to the consequences for the finite field analogues of these facts; see this paper of Green and myself for further […]

12 October, 2014 at 11:57 am

**Additive limits | What's new**: […] of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given […]