
Let be a finite set of order ; in applications will typically be something like a finite abelian group, such as the cyclic group . Let us define a *-bounded function* to be a function such that for all . There are many seminorms of interest that one places on functions that are bounded by on -bounded functions, such as the Gowers uniformity seminorms for (which are genuine norms for ). All seminorms in this post will be implicitly assumed to obey this property.
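To ground the definitions, here is a small Python sketch (illustrative only; the helper names are mine) computing the Gowers U^2 norm of a function on a cyclic group Z/NZ directly from the parallelepiped-averaging definition, and checking it against the standard Fourier identity that the fourth power of the U^2 norm equals the sum of the fourth powers of the Fourier coefficients:

```python
import cmath

def e(x):
    """e(x) = exp(2*pi*i*x), the standard additive character."""
    return cmath.exp(2j * cmath.pi * x)

def u2_norm(f):
    """U^2(Z/N) norm from the definition:
    ||f||_{U^2}^4 = E_{n,h,k} f(n) conj(f(n+h)) conj(f(n+k)) f(n+h+k)."""
    N = len(f)
    total = 0
    for n in range(N):
        for h in range(N):
            for k in range(N):
                total += (f[n] * f[(n + h) % N].conjugate()
                          * f[(n + k) % N].conjugate() * f[(n + h + k) % N])
    return abs(total / N ** 3) ** 0.25

def fourier(f):
    """Normalised Fourier coefficients: hat f(xi) = E_n f(n) e(-xi*n/N)."""
    N = len(f)
    return [sum(f[n] * e(-xi * n / N) for n in range(N)) / N for xi in range(N)]

N = 16
char = [e(3 * n / N) for n in range(N)]        # a Fourier character: U^2 norm is 1
print(round(u2_norm(char), 6))

quad = [e(n * n / N) for n in range(N)]        # a quadratic phase
lhs = u2_norm(quad) ** 4
rhs = sum(abs(c) ** 4 for c in fourier(quad))  # Plancherel-type identity
print(abs(lhs - rhs) < 1e-9)
```

The cubic-time loop is of course only for transparency; in practice one would compute the right-hand side via an FFT.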

In additive combinatorics, a significant role is played by *inverse theorems*, which abstractly take the following form for certain choices of seminorm , some parameters , and some class of -bounded functions:

Theorem 1 (Inverse theorem template) If is a -bounded function with , then there exists such that , where denotes the usual inner product.

Informally, one should think of as being somewhat small but fixed independently of , as being somewhat smaller but depending only on (and on the seminorm), and as representing the “structured functions” for these choices of parameters. There is some flexibility in exactly how to choose the class of structured functions, but intuitively an inverse theorem should become more powerful when this class is small. Accordingly, let us define the *-entropy* of the seminorm to be the least cardinality of for which such an inverse theorem holds. Seminorms with low entropy are ones for which inverse theorems can be expected to be a useful tool. This concept arose in some discussions I had with Ben Green many years ago, but never appeared in print, so I decided to record some observations we had on this concept here on this blog.

Lebesgue norms for have exponentially large entropy (and so inverse theorems are not expected to be useful in this case):

Proposition 2 ( norm has exponentially large inverse entropy) Let and . Then the -entropy of is at most . Conversely, for any , the -entropy of is at least for some absolute constant .

*Proof:* If is -bounded with , then we have

Now suppose that there is an -inverse theorem for some of cardinality . If we let be a random sign function (so the are independent random variables taking values in with equal probability), then there is a random such that

and hence by the pigeonhole principle there is a deterministic such that . On the other hand, from the Hoeffding inequality one has for some absolute constant , hence as claimed.

Most seminorms of interest in additive combinatorics, such as the Gowers uniformity norms, are bounded by some finite norm thanks to Hölder’s inequality, so from the above proposition and the obvious monotonicity properties of entropy, we conclude that all Gowers norms on finite abelian groups have at most exponential inverse theorem entropy. But we can do significantly better than this:

- For the seminorm , one can simply take to consist of the constant function , and the -entropy is clearly equal to for any .
- For the norm, the standard Fourier-analytic inverse theorem asserts that if then for some Fourier character . Thus the -entropy is at most .
- For the norm on cyclic groups for , the inverse theorem proved by Green, Ziegler, and myself gives an -inverse theorem for some and consisting of nilsequences for some filtered nilmanifold of degree in a finite collection of cardinality , some polynomial sequence (it was subsequently observed by Candela-Sisask (see also Manners) that one can choose to be -periodic), and some Lipschitz function of Lipschitz norm . By the Arzela-Ascoli theorem, the number of possible (up to uniform errors of size at most , say) is . By standard arguments one can also ensure that the coefficients of the polynomial are , and then by periodicity there are only such polynomials. As a consequence, the -entropy is of polynomial size (a fact that seems to have first been implicitly observed in Lemma 6.2 of this paper of Frantzikinakis; thanks to Ben Green for this reference). One can obtain a more precise dependence on using the quantitative version of this inverse theorem due to Manners; back of the envelope calculations using Section 5 of that paper suggest to me that one can take to be polynomial in and the entropy to be of the order , or alternatively one can reduce the entropy to at the cost of degrading to .
- If one replaces the cyclic group by a vector space over some fixed finite field of prime order (so that ), then the inverse theorem of Ziegler and myself (available in both high and low characteristic) allows one to obtain an -inverse theorem for some and the collection of non-classical degree polynomial phases from to , which one can normalize to equal at the origin, and then by the classification of such polynomials one can calculate that the entropy is of quasipolynomial size in . By using the recent work of Gowers and Milicevic, one can make the dependence on here more precise, but we will not perform these calculations here.
- For the norm on an arbitrary finite abelian group, the recent inverse theorem of Jamneshan and myself gives (after some calculations) a bound of the polynomial form on the -entropy for some , which one can improve slightly to if one degrades to , where is the maximal order of an element of , and is the rank (the number of elements needed to generate ). This bound is polynomial in in the cyclic group case and quasipolynomial in general.
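As a sanity check on the item above, the following Python sketch (illustrative; not taken from the cited papers) verifies the inequality underlying the Fourier-analytic inverse theorem, namely that for a 1-bounded function on Z/NZ the largest Fourier coefficient dominates the square of the U^2 norm, so a large U^2 norm forces correlation with some character:

```python
import cmath

def e(x):
    return cmath.exp(2j * cmath.pi * x)

N = 64
# A 1-bounded function hiding two linear phases (|0.7| + |0.3| <= 1).
f = [0.7 * e(5 * n / N) + 0.3 * e(11 * n / N) for n in range(N)]

fhat = [sum(f[n] * e(-xi * n / N) for n in range(N)) / N for xi in range(N)]
u2_fourth = sum(abs(c) ** 4 for c in fhat)   # = ||f||_{U^2}^4
best_corr = max(abs(c) for c in fhat)        # best correlation with a character
# Inverse theorem inequality: best correlation >= ||f||_{U^2}^2
# (always true for 1-bounded f, since sum |fhat|^2 <= 1).
print(best_corr + 1e-9 >= u2_fourth ** 0.5)
```

Here the best correlating character is of course the one at frequency 5, with correlation 0.7.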

For general finite abelian groups , we do not yet have an inverse theorem of comparable power to the ones mentioned above that give polynomial or quasipolynomial upper bounds on the entropy. However, there is a cheap argument that at least gives some subexponential bounds:

Proposition 3 (Cheap subexponential bound) Let and , and suppose that is a finite abelian group of order for some sufficiently large . Then the -entropy of is at most .

*Proof:* (Sketch) We use a standard random sampling argument, of the type used for instance by Croot-Sisask or Briet-Gopi (thanks to Ben Green for this latter reference). We can assume that for some sufficiently large , since otherwise the claim follows from Proposition 2.

Let be a random subset of with the events being iid with probability to be chosen later, conditioned on the event . Let be a -bounded function. By a standard second moment calculation, we see that with probability at least , we have

Thus, by the triangle inequality, if we choose for some sufficiently large , then for any -bounded with , one has with probability at least that . We can write the left-hand side as , where is the randomly sampled dual function . Unfortunately, is not -bounded in general, but we have , and the right-hand side can be shown to be on the average, so we can condition on the event that the right-hand side is without significant loss in failure probability.

If we then let be rounded to the nearest Gaussian integer multiple of in the unit disk, one has from the triangle inequality that

where is the discretised randomly sampled dual function . For any given , there are at most places where can be non-zero, and in those places there are possible values for . Thus, if we let be the collection of all possible associated to a given , the cardinality of this set is , and for any with , we have with probability at least .

Now we remove the failure probability by independent resampling. By rounding to the nearest Gaussian integer multiple of in the unit disk for a sufficiently small , one can find a family of cardinality consisting of -bounded functions of norm at least such that for every -bounded with there exists such that

Now, let be independent samples of for some to be chosen later. By the preceding discussion, we see that with probability at least , we have for any given , so by the union bound, if we choose for a large enough , we can find such that for *all* , and hence by the triangle inequality . Taking to be the union of the (applying some truncation and rescaling to these -bounded functions to make them -bounded, and then -bounded), we obtain the claim.
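The sampling step driving this proposition can be illustrated in miniature: sampling a bounded function at M random points estimates its mean with error O(1/sqrt(M)), by the second moment method. A toy sketch (fixed seed; all names are mine):

```python
import math
import random

random.seed(1)
N, M = 100_000, 400
# A 1-bounded test function with mean exactly 1/2.
f = [math.cos(2 * math.pi * n / N) ** 2 for n in range(N)]
true_mean = sum(f) / N
# Monte-Carlo estimate from M random samples.
sample_mean = sum(f[random.randrange(N)] for _ in range(M)) / M
error = abs(sample_mean - true_mean)
# Second-moment heuristic: error is typically O(1/sqrt(M)).
print(error < 3 / math.sqrt(M))
```

This is exactly why the sample sizes in the proof only need to grow like a power of 1/ε (times logarithmic factors), rather than with the size of the group.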

One way to obtain *lower* bounds on the inverse theorem entropy is to produce a collection of almost orthogonal functions with large norm. More precisely:

Proposition 4 Let be a seminorm, let , and suppose that one has a collection of -bounded functions such that for all , one has for all but at most choices of for all distinct . Then the -entropy of is at least .

*Proof:* Suppose we have an -inverse theorem with some family . Then for each there is such that . By the pigeonhole principle, there is thus such that for all in a subset of of cardinality at least :

Thus for instance:

- For the norm, one can take to be the family of linear exponential phases with and , and obtain a linear lower bound of for the -entropy, thus matching the upper bound of up to constants when is fixed.
- For the norm, a similar calculation using polynomial phases of degree , combined with the Weyl sum estimates, gives a lower bound of for the -entropy for any fixed ; by considering nilsequences as well, together with nilsequence equidistribution theory, one can replace the exponent here by some quantity that goes to infinity as , though I have not attempted to calculate the exact rate.
- For the norm, another similar calculation using polynomial phases of degree should give a lower bound of for the -entropy, though I have not fully performed the calculation.
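The almost orthogonal families in these examples are easy to exhibit in the U^2 case: the N linear phase characters on Z/NZ are exactly orthogonal, which combined with Proposition 4 recovers the linear lower bound mentioned in the first item. A quick numerical check (illustrative only):

```python
import cmath

def e(x):
    return cmath.exp(2j * cmath.pi * x)

N = 32
# The N linear phase characters n -> e(xi*n/N).
chars = [[e(xi * n / N) for n in range(N)] for xi in range(N)]

def inner(f, g):
    """Normalised inner product <f, g> = E_n f(n) conj(g(n))."""
    return sum(a * b.conjugate() for a, b in zip(f, g)) / len(f)

off_diag = max(abs(inner(chars[i], chars[j]))
               for i in range(N) for j in range(N) if i != j)
print(off_diag < 1e-9)  # distinct characters are orthogonal
```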

We close with one final example. Suppose is a product of two sets of cardinality , and we consider the Gowers box norm

One possible choice of class here are the indicators of “rectangles” with , (cf. this previous blog post on cut norms). By standard calculations, one can use this class to show that the -entropy of is , and a variant of the proof of the second part of Proposition 2 shows that this is the correct order of growth in . In contrast, a modification of Proposition 3 only gives an upper bound of the form (the bottleneck is ensuring that the randomly sampled dual functions stay bounded in ), which shows that while this cheap bound is not optimal, it can still broadly give the correct “type” of bound (specifically, intermediate growth between polynomial and exponential).

In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms for . For finitely supported functions , one can define the (non-normalised) Gowers norm by the formula

where denotes complex conjugation; then for any discrete interval and any function we can define the (normalised) Gowers norm , where is the extension of by zero to all of . Thus for instance (which technically makes a seminorm rather than a norm), and one can calculate , where , and we use the averaging notation .

The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials and functions , we define the multilinear form

(assuming that the denominator is finite and non-zero). Thus for instance , where we view as formal (indeterminate) variables, and are understood to be extended by zero to all of . These forms are used to count patterns in various sets; for instance, the quantity is closely related to the number of length three arithmetic progressions contained in . Let us informally say that a form is *controlled* by the norm if the form is small whenever are -bounded functions with at least one of the small in norm. This definition was made more precise by Gowers and Wolf, who then defined the *true complexity* of a form to be the least such that is controlled by the norm. For instance,

- and have true complexity ;
- has true complexity ;
- has true complexity ;
- The form (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).
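The notion of control by the U^{s+1} norm in these examples is usually expressed through a generalised von Neumann inequality; schematically (a standard formulation, reconstructed here since the original displays did not survive formatting):

```latex
|\Lambda(f_1,\dots,f_t)| \ll \min_{1 \le i \le t} \|f_i\|_{U^{s+1}[N]}^{c}
\qquad \text{for all $1$-bounded } f_1,\dots,f_t
```

for some exponent c > 0 depending on the system of forms; a form then has true complexity s when such an inequality holds at degree s+1 but fails at degree s.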

Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials ; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.

The (semi-)norm is so weak that it barely controls any averages at all. For instance the average

is not controlled by the semi-norm: it is perfectly possible for a -bounded function to even have vanishing norm but have large value of (consider for instance the parity function ).

Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the and norms, which I will call the (or “profinite ”) norm:

where ranges over all arithmetic progressions in . This can easily be seen to be a norm on functions that controls the norm. It is also basically controlled by the norm for -bounded functions ; indeed, if is an arithmetic progression in of some spacing , then we can write as the intersection of an interval with a residue class modulo , and from Fourier expansion we have . If we let be a standard bump function supported on with total mass and is a parameter then (extending by zero outside of ), as can be seen by using the triangle inequality and the estimate . After some Fourier expansion of we now have . Writing as a linear combination of and using the Gowers–Cauchy–Schwarz inequality, we conclude , hence on optimising in we have . Forms which are controlled by the norm (but not ) would then have their true complexity adjusted to with this insertion.

The norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form had true complexity in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function ; for the first two functions one needs to localize the norm to intervals of length . But I will ignore this technical point to keep the exposition simple.] The weaker claim that has true complexity is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).
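To illustrate the proposed norm numerically, the toy Python sketch below (names and normalisation are my own; I take the norm of f on [N] to be the supremum of |E_{n in P} f(n)| over arithmetic progressions P) checks that the parity function has vanishing mean, yet full profinite norm:

```python
def u1plus(f):
    """Sup over arithmetic progressions P in [0, N) of |E_{n in P} f(n)|.
    Prefixes of a progression are again progressions, so we can scan them."""
    N = len(f)
    best = 0.0
    for start in range(N):
        for step in range(1, N + 1):
            total, count = 0.0, 0
            for n in range(start, N, step):
                count += 1
                total += f[n]
                best = max(best, abs(total) / count)
    return best

N = 32
parity = [(-1) ** n for n in range(N)]
print(abs(sum(parity)) / N)   # the plain mean (U^1-type average) vanishes
print(u1plus(parity))         # but the even progression gives average 1
```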

The well known inverse theorem for the norm tells us that if a -bounded function has norm at least for some , then there is a Fourier phase such that

this follows easily from (1) and Plancherel’s theorem. Conversely, from the Gowers–Cauchy–Schwarz inequality one has .

For one has a trivial inverse theorem; by definition, the norm of is at least if and only if

Thus the frequency appearing in the inverse theorem can be taken to be zero when working instead with the norm.

For one has the intermediate situation in which the frequency is not taken to be zero, but is instead major arc. Indeed, suppose that is -bounded with , thus

for some progression . This forces the spacing of this progression to be . We write the above inequality as for some residue class and some interval . By Fourier expansion and the triangle inequality we then have for some integer . Convolving by for a small multiple of and a Schwartz function of unit mass with Fourier transform supported on , we have . The Fourier transform of is bounded by and supported on , thus by Fourier expansion and the triangle inequality we have for some , so in particular . Thus we have for some of the major arc form with . Conversely, for of this form, some routine summation by parts gives the bound , so if (2) holds for a -bounded then one must have .

Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes of functions (where each class of functions induces a dual norm ):

Here I have included the three classes of functions that one can choose from for the inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.

The Gowers norms have counterparts for measure-preserving systems , known as *Host-Kra seminorms*. The norm can be defined for as ; it is orthogonal to the *invariant factor* (generated by the (almost everywhere) invariant measurable subsets of ) in the sense that a function has vanishing seminorm if and only if it is orthogonal to all -measurable (bounded) functions. Similarly, the norm is orthogonal to the *Kronecker factor*, generated by the eigenfunctions of (that is to say, those obeying an identity for some -invariant ); for ergodic systems, it is the largest factor isomorphic to rotation on a compact abelian group. In analogy to the Gowers norm, one can then define the Host-Kra seminorm by ; it is orthogonal to the *profinite factor*, generated by the periodic sets of (or equivalently, by those eigenfunctions whose eigenvalue is a root of unity); for ergodic systems, it is the largest factor isomorphic to rotation on a profinite abelian group.

Joni Teräväinen and I have just uploaded to the arXiv our preprint “Quantitative bounds for Gowers uniformity of the Möbius and von Mangoldt functions“. This paper makes quantitative the Gowers uniformity estimates on the Möbius function and the von Mangoldt function .

To discuss the results we first discuss the situation of the Möbius function, which is technically simpler in some (though not all) ways. We assume familiarity with Gowers norms and standard notations around these norms, such as the averaging notation and the exponential notation . The prime number theorem in qualitative form asserts that

as . With the Vinogradov-Korobov error term, the prime number theorem is strengthened to ; we refer to such decay bounds (with factors of this type) as *pseudopolynomial decay*. Equivalently, we obtain pseudopolynomial decay of the Gowers seminorm of : . As is well known, the Riemann hypothesis would be equivalent to an upgrade of this estimate to polynomial decay of the form for any .
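The qualitative prime number theorem mentioned here, i.e. that the Möbius function has mean o(1), can be probed numerically with a short sieve (a toy illustration, unrelated to the quantitative bounds of the paper):

```python
def mobius_sieve(N):
    """Compute mu(1..N): flip the sign once for each prime factor,
    and zero out multiples of squares of primes."""
    mu = [1] * (N + 1)
    marked = [False] * (N + 1)
    for p in range(2, N + 1):
        if not marked[p]:                       # p is prime
            for m in range(p, N + 1, p):
                marked[m] = True
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N = 100_000
mu = mobius_sieve(N)
mean = sum(mu[1:]) / N
print(abs(mean) < 0.01)   # the Mobius mean is already quite small at this scale
```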

Once one restricts to arithmetic progressions, the situation gets worse: the Siegel-Walfisz theorem gives the bound

for any residue class and any , but with the catch that the implied constant is ineffective in . This ineffectivity cannot be removed without further progress on the notorious Siegel zero problem.

In 1937, Davenport was able to show the discorrelation estimate

for any uniformly in , which leads (by standard Fourier arguments) to the Fourier uniformity estimate . Again, the implied constant is ineffective. If one insists on effective constants, the best bound currently available is for some small effective constant .

For the situation with the norm the previously known results were much weaker. Ben Green and I showed that

uniformly for any , any degree two (filtered) nilmanifold , any polynomial sequence , and any Lipschitz function ; again, the implied constants are ineffective. On the other hand, in a separate paper of Ben Green and myself, we established the following inverse theorem: if for instance we knew that for some , then there exists a degree two nilmanifold of dimension , complexity , a polynomial sequence , and a Lipschitz function of Lipschitz constant such that . Putting the two assertions together and comparing all the dependencies on parameters, one can establish the qualitative decay bound . However, the decay rate produced by this argument is *completely* ineffective: obtaining a bound on when this quantity dips below a given threshold depends on the implied constant in (3) for some whose dimension depends on , and the dependence on obtained in this fashion is ineffective in the face of a Siegel zero.

For higher norms , the situation is even worse, because the quantitative inverse theory for these norms is poorer, and indeed it was only with the recent work of Manners that any such bound is available at all (at least for ). Basically, Manners establishes that if

then there exists a degree nilmanifold of dimension , complexity , a polynomial sequence , and a Lipschitz function of Lipschitz constant such that . (We allow all implied constants to depend on .) Meanwhile, the bound (3) was extended to arbitrary nilmanifolds by Ben and myself. Again, the two results when concatenated give the qualitative decay , but the decay rate is completely ineffective.

Our first result gives an effective decay bound:

Theorem 1 For any , we have for some . The implied constants are effective.

This is off by a logarithm from the best effective bound (2) in the case. In the case there is some hope to remove this logarithm based on the improved quantitative inverse theory currently available in this case, but there is a technical obstruction to doing so which we will discuss later in this post. For the above bound is the best one could hope to achieve purely using the quantitative inverse theory of Manners.

We have analogues of all the above results for the von Mangoldt function . Here a complication arises, namely that does not have mean close to zero, and one has to subtract off some suitable approximant to before one would expect good Gowers norm bounds. For the prime number theorem one can just use the approximant , giving

but even for the prime number theorem in arithmetic progressions one needs a more accurate approximant. In our paper it is convenient to use the “Cramér approximant” , where and is the quasipolynomial quantity . Then one can show from the Siegel-Walfisz theorem and standard bilinear sum methods that and for all and (with an ineffective dependence on ), again regaining effectivity if is replaced by a sufficiently small constant . All the previously stated discorrelation and Gowers uniformity results for then have analogues for , and our main result is similarly analogous:

Theorem 2 For any , we have for some . The implied constants are effective.

By standard methods, this result also gives quantitative asymptotics for counting solutions to various systems of linear equations in primes, with error terms that gain a factor of with respect to the main term.
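Returning to the Cramér approximant introduced above: assuming it takes the usual form (W/phi(W))·1_{(n,W)=1} with W a primorial (the precise cutoff and normalisation in the paper may differ), one can check numerically that it has mean exactly 1 over full periods, matching the average of the von Mangoldt function predicted by the prime number theorem:

```python
from math import gcd

def cramer_approximant(n, W, phiW):
    """Cramer-type approximant to von Mangoldt (assumed form:
    (W/phi(W)) * 1_{(n, W) = 1}, with W a primorial)."""
    return W / phiW if gcd(n, W) == 1 else 0.0

W = 2 * 3 * 5 * 7             # primorial for w = 7
phiW = 1 * 2 * 4 * 6          # phi(210) = 48
N = 2100                      # exactly 10 full periods mod 210
mean = sum(cramer_approximant(n, W, phiW) for n in range(1, N + 1)) / N
print(mean)                   # each period of 210 contributes 48 * (210/48) = 210
```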

We now discuss the methods of proof, focusing first on the case of the Möbius function. Suppose first that there is no “Siegel zero”, by which we mean a quadratic character of some conductor with a zero with for some small absolute constant . In this case the Siegel-Walfisz bound (1) improves to a quasipolynomial bound

To establish Theorem 1 in this case, it suffices by Manners’ inverse theorem to establish the polylogarithmic bound for all degree nilmanifolds of dimension and complexity , all polynomial sequences , and all Lipschitz functions of norm . If the nilmanifold had bounded dimension, then one could repeat the arguments of Ben and myself more or less verbatim to establish this claim from (5), which relied on the quantitative equidistribution theory on nilmanifolds developed in a separate paper of Ben and myself. Unfortunately, in the latter paper the dependence of the quantitative bounds on the dimension was not explicitly given. In an appendix to the current paper, we go through that paper to account for this dependence, showing that all exponents depend at most doubly exponentially in the dimension , which is barely sufficient to handle the dimension of that arises here.
Now suppose we have a Siegel zero . In this case the bound (5) will *not* hold in general, and hence also (6) will not hold either. Here, the usual way out (while still maintaining effective estimates) is to approximate not by , but rather by a more complicated approximant that takes the Siegel zero into account, and in particular is such that one has the (effective) pseudopolynomial bound

For the analogous problem with the von Mangoldt function (assuming a Siegel zero for the sake of discussion), the approximant is simpler; we ended up using

which allows one to state the standard prime number theorem in arithmetic progressions with classical error term and Siegel zero term compactly as . Routine modifications of previous arguments also give and . The one tricky new step is getting from the discorrelation estimate (8) to the Gowers uniformity estimate . One cannot directly apply Manners’ inverse theorem here because and are unbounded. There is a standard tool for getting around this issue, now known as the *dense model theorem*, which is the standard engine powering the *transference principle* from theorems about bounded functions to theorems about certain types of unbounded functions. However, the quantitative versions of the dense model theorem in the literature are expensive and would basically weaken the doubly logarithmic gain here to a triply logarithmic one. Instead, we bypass the dense model theorem and directly transfer the inverse theorem for bounded functions to an inverse theorem for unbounded functions by using the *densification* approach to transference introduced by Conlon, Fox, and Zhao. This technique turns out to be quantitatively quite efficient (the dependencies of the main parameters in the transference are polynomial in nature), and also has the technical advantage of avoiding the somewhat tricky “correlation condition” present in early transference results, which is also not beneficial for quantitative bounds.

In principle, the above results can be improved for due to the stronger quantitative inverse theorems in the setting. However, there is a bottleneck that prevents us from achieving this, namely that the equidistribution theory of two-step nilmanifolds has exponents which are exponential in the dimension rather than polynomial in the dimension, and as a consequence we were unable to improve upon the doubly logarithmic results. Specifically, if one is given a sequence of bracket quadratics such as that fails to be -equidistributed, one would need to establish a nontrivial linear relationship modulo 1 between the (up to errors of ), where the coefficients are of size ; current methods only give coefficient bounds of the form . An old result of Schmidt provides a proof of concept that these sorts of polynomial dependencies on exponents are possible in principle, but actually implementing Schmidt’s methods here seems to be a quite non-trivial task. There is also another possible route to removing a logarithm, which is to strengthen the inverse theorem to make the dimension of the nilmanifold logarithmic in the uniformity parameter rather than polynomial. Again, the Freiman-Bilu theorem (see for instance this paper of Ben and myself) provides a proof of concept that such an improvement in dimension is possible, but some work would be needed to implement it.

Ben Green and I have updated our paper “An arithmetic regularity lemma, an associated counting lemma, and applications” to account for a somewhat serious issue with the paper that was pointed out to us recently by Daniel Altman. This paper contains two core theorems:

- An “arithmetic regularity lemma” that, roughly speaking, decomposes an arbitrary bounded sequence on an interval as an “irrational nilsequence” of controlled complexity, plus some “negligible” errors (where one uses the Gowers uniformity norm as the main norm to control the negligibility of the error); and
- An “arithmetic counting lemma” that gives an asymptotic formula for counting various averages for various affine-linear forms when the functions are given by irrational nilsequences.

The combination of the two theorems is then used to address various questions in additive combinatorics.

There are no direct issues with the arithmetic regularity lemma. However, it turns out that the arithmetic counting lemma is only true if one imposes an additional property (which we call the “flag property”) on the affine-linear forms . Without this property, there does not appear to be a clean asymptotic formula for these averages if the only hypothesis one places on the underlying nilsequences is irrationality. Thus when trying to understand the asymptotics of averages involving linear forms that do not obey the flag property, the paradigm of understanding these averages via a combination of the regularity lemma and a counting lemma seems to require some significant revision (in particular, one would probably have to replace the existing regularity lemma with some variant, despite the fact that the lemma is still technically true in this setting). Fortunately, for most applications studied to date (including the important subclass of translation-invariant affine forms), the flag property holds; however our claim in the paper to have resolved a conjecture of Gowers and Wolf on the true complexity of systems of affine forms must now be narrowed, as our methods only verify this conjecture under the assumption of the flag property.

In a bit more detail: the asymptotic formula for our counting lemma involved some finite-dimensional vector spaces for various natural numbers , defined as the linear span of the vectors as ranges over the parameter space . Roughly speaking, these spaces encode some constraints one would expect to see amongst the forms . For instance, in the case of length four arithmetic progressions when , , and

for , then is spanned by the vectors and and can thus be described as the two-dimensional linear space , while is spanned by the vectors , , and can be described as the hyperplane . As a special case of the counting lemma, we can check that if takes the form for some irrational , some arbitrary , and some smooth , then the limiting value of the average as is equal to , which reflects the constraints and . These constraints follow from the descriptions (1), (2), using the containment to dispense with the lower order term (which then plays no further role in the analysis).

The arguments in our paper turn out to be perfectly correct under the assumption of the “flag property” that for all . The problem is that the flag property turns out to not always hold. A counterexample, provided by Daniel Altman, involves the four linear forms

Here it turns out that and , and is no longer contained in . The analogue of the asymptotic formula given previously for is then valid when vanishes, but not when is non-zero, because the identity holds in the former case but not the latter. Thus the output of any purported arithmetic regularity lemma in this case is now sensitive to the lower order terms of the nilsequence and cannot be described in a uniform fashion for all “irrational” sequences. There should still be some sort of formula for the asymptotics from the general equidistribution theory of nilsequences, but it could be considerably more complicated than what is presented in this paper.

Fortunately, the flag property does hold in several key cases, most notably the translation invariant case when contains , as well as “complexity one” cases. Nevertheless non-flag property systems of affine forms do exist, thus limiting the range of applicability of the techniques in this paper. In particular, the conjecture of Gowers and Wolf (Theorem 1.13 in the paper) is now open again in the non-flag property case.

Kaisa Matomäki, Maksym Radziwill, Joni Teräväinen, Tamar Ziegler and I have uploaded to the arXiv our paper Higher uniformity of bounded multiplicative functions in short intervals on average. This paper (which originated from a working group at an AIM workshop on Sarnak’s conjecture) focuses on the *local Fourier uniformity conjecture* for bounded multiplicative functions such as the Liouville function . One form of this conjecture is the assertion that

The conjecture gets more difficult as increases, and also becomes more difficult the more slowly grows with . The conjecture is equivalent to the assertion

which was proven (for arbitrarily slowly growing ) in a landmark paper of Matomäki and Radziwill, discussed for instance in this blog post.

For , the conjecture is equivalent to the assertion

This remains open for sufficiently slowly growing (and it would be a major breakthrough in particular if one could obtain this bound for as small as for any fixed , particularly if applicable to more general bounded multiplicative functions than , as this would have new implications for a generalization of the Chowla conjecture known as the Elliott conjecture). Recently, Kaisa, Maks and I were able to establish this conjecture in the range (in fact we have since worked out in the current paper that we can get as small as ). In our current paper we establish the Fourier uniformity conjecture for higher for the same range of . This in particular implies local orthogonality to polynomial phases, where denotes the polynomials of degree at most , but the full conjecture is a bit stronger than this, establishing the more general statement for any degree filtered nilmanifold and Lipschitz function , where now ranges over polynomial maps from to . The method of proof follows the same general strategy as in the previous paper with Kaisa and Maks. (The equivalence of (4) and (1) follows from the inverse conjecture for the Gowers norms, proven in this paper.)

We quickly sketch first the proof of (3), using very informal language to avoid many technicalities regarding the precise quantitative form of various estimates. If the estimate (3) fails, then we have the correlation estimate for many and some polynomial depending on . The difficulty here is to understand how can depend on . We write the above correlation estimate more suggestively as . Because of the multiplicativity at small primes , one expects to have a relation of the form for many for which for some small primes . (This can be formalised using an inequality of Elliott related to the Turan-Kubilius theorem.) This gives a relationship between and for “edges” in a rather sparse “graph” connecting the elements of say .
Using some graph theory one can locate some non-trivial "cycles" in this graph that eventually lead (in conjunction with a certain technical but important "Chinese remainder theorem" step to modify the to eliminate a rather serious "aliasing" issue that was already discussed in this previous post) to functional equations of the form for some large and close (but not identical) integers , where should be viewed as a first approximation (ignoring a certain "profinite" or "major arc" term for simplicity) to "differing by a slowly varying polynomial", and the polynomials should now be viewed as taking values on the reals rather than the integers. This functional equation can be solved to obtain a relation of the form for some real number of polynomial size, and with further analysis of the relation (5) one can make basically independent of . This simplifies (3) to something like and this is now of a form that can be treated by the theorem of Matomäki and Radziwill (because is a bounded multiplicative function). (Actually, because of the profinite term mentioned previously, one also has to insert a Dirichlet character of bounded conductor into this latter conclusion, but we will ignore this technicality.)

Now we apply the same strategy to (4). For abelian the claim follows easily from (3), so we focus on the non-abelian case. One now has a polynomial sequence attached to many , and after a somewhat complicated adaptation of the above arguments one again ends up with an approximate functional equation

where the relation is rather technical and will not be detailed here. A new difficulty arises in that there are some unwanted solutions to this equation, such as for some , which do not necessarily lead to multiplicative characters like as in the polynomial case, but instead to some unfriendly looking "generalized multiplicative characters" (think of as a rough caricature). To avoid this problem, we rework the graph theory portion of the argument to produce not just one functional equation of the form (6) for each , but *many*, leading to dilation invariances for a "dense" set of . From a certain amount of Lie algebra theory (ultimately arising from an understanding of the behaviour of the exponential map on nilpotent matrices, and exploiting the hypothesis that is non-abelian) one can conclude that (after some initial preparations to avoid degenerate cases) must behave like for some

*central* element of . This eventually brings one back to the multiplicative characters that arose in the polynomial case, and the arguments now proceed as before.

We give two applications of this higher order Fourier uniformity. One regards the growth of the number

of length sign patterns in the Liouville function. The Chowla conjecture implies that , but even the weaker conjecture of Sarnak that for some remains open. Until recently, the best asymptotic lower bound on was , due to McNamara; with our result, we can now show for any (in fact we can get for any ). The idea is to repeat the now-standard argument to exploit multiplicativity at small primes to deduce Chowla-type conjectures from Fourier uniformity conjectures, noting that the Chowla conjecture would give all the sign patterns one could hope for. The usual argument here uses the "entropy decrement argument" to eliminate a certain error term (involving the large but mean zero factor ). However, the observation is that if there are extremely few sign patterns of length , then the entropy decrement argument is unnecessary (there isn't much entropy to begin with), and a more low-tech moment method argument (similar to the derivation of Chowla's conjecture from Sarnak's conjecture, as discussed for instance in this post) gives enough of Chowla's conjecture to produce plenty of length sign patterns. If there are not extremely few sign patterns of length then we are done anyway. One quirk of this argument is that the sign patterns it produces may only appear exactly once; in contrast with preceding arguments, we were not able to produce a large number of sign patterns that each occur infinitely often.

The second application is to obtain cancellation for various polynomial averages involving the Liouville function or von Mangoldt function , such as

or where are polynomials of degree at most , no two of which differ by a constant (the latter condition is essential to avoid having to establish the Chowla or Hardy-Littlewood conjectures, which of course remain open). Results of this type were previously obtained by Tamar Ziegler and myself in the "true complexity zero" case when the polynomials had distinct degrees, in which case one could use the theory of Matomäki and Radziwill; now that higher uniformity is available at the scale , we can remove this restriction.

In the modern theory of additive combinatorics, a large role is played by the *Gowers uniformity norms* , where , is a finite abelian group, and is a function (one can also consider these norms in finite approximate groups such as instead of finite groups, but we will focus on the group case here for simplicity). These norms can be defined by the formula

where we use the averaging notation

for any non-empty finite set (with denoting the cardinality of ), and is the multiplicative discrete derivative operator

One reason why these norms play an important role is that they control various multilinear averages. We give two sample examples here:

We establish these claims a little later in this post.
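As a quick sanity check on the definition in the case , one has the classical identity expressing the fourth power of the norm as the sum of fourth powers of the (expectation-normalised) Fourier coefficients. The following small numerical experiment, an illustration of mine rather than part of the argument, confirms this on a random -bounded function on a cyclic group:

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
# a random 1-bounded (in fact unimodular) function on Z/N
f = np.exp(2j * np.pi * rng.random(N))

# brute-force U^2 average: E_{x,h1,h2} f(x) conj(f(x+h1)) conj(f(x+h2)) f(x+h1+h2)
x = np.arange(N)
total = 0j
for h1 in range(N):
    for h2 in range(N):
        total += np.sum(f * np.conj(f[(x + h1) % N]) * np.conj(f[(x + h2) % N]) * f[(x + h1 + h2) % N])
u2_fourth = (total / N**3).real  # this is the fourth power of the U^2 norm

# Fourier side: with hat f(xi) = E_x f(x) e(-2 pi i x xi / N), the same quantity is sum_xi |hat f(xi)|^4
fhat = np.fft.fft(f) / N
assert abs(u2_fourth - np.sum(np.abs(fhat) ** 4)) < 1e-8
```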

In some more recent literature (e.g., this paper of Conlon, Fox, and Zhao), the role of Gowers norms has been replaced by (generalisations of) the *cut norm*, a concept originating from graph theory. In this blog post, it will be convenient to define these cut norms in the language of probability theory (using boldface to denote random variables).

Definition 2 (Cut norm) Let be independent random variables with ; to avoid minor technicalities we assume that these random variables are discrete and take values in a finite set. Given a random variable of these independent random variables, we define the *cut norm*

where the supremum ranges over all choices of random variables that are -bounded (thus surely), and such that does not depend on .

If , we abbreviate as .

Strictly speaking, the cut norm is only a cut semi-norm when , but we will abuse notation by referring to it as a norm nevertheless.

Example 3 If is a bipartite graph, and , are independent random variables chosen uniformly from respectively, then

where the supremum ranges over all -bounded functions , . The right hand side is essentially the cut norm of the graph , as defined for instance by Frieze and Kannan.
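For a small matrix of values one can compute this Frieze–Kannan-type cut norm by brute force: the supremum over -bounded real multipliers is attained at sign vectors, so for tiny vertex sets an exhaustive (exponential-time) search suffices. A sketch of mine, with hypothetical dimensions, purely for illustration:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 5  # hypothetical small bipartite vertex sets
F = rng.uniform(-1.0, 1.0, size=(n, m))  # values f(x, y), 1-bounded

# the supremum over 1-bounded real multipliers a(x), b(y) is attained at sign vectors,
# so for tiny n, m the cut norm can be computed exactly by enumeration (exponential time!)
best = 0.0
for a in itertools.product([-1.0, 1.0], repeat=n):
    for b in itertools.product([-1.0, 1.0], repeat=m):
        val = abs(np.einsum('i,j,ij->', a, b, F)) / (n * m)
        best = max(best, val)
# best is now the (expectation-normalised) cut norm of F
```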

The cut norm is basically an expectation when :

Example 4 If , we see from the definition that

If , one easily checks that

where is the conditional expectation of to the -algebra generated by all the variables other than , i.e., the -algebra generated by . In particular, if are independent random variables drawn uniformly from respectively, then

Here are some basic properties of the cut norm:

Lemma 5 (Basic properties of cut norm) Let be independent discrete random variables, and a function of these variables.

- (i) (Permutation invariance) The cut norm is invariant with respect to permutations of the , or permutations of the .
- (ii) (Conditioning) One has
where on the right-hand side we view, for each realisation of , as a function of the random variables alone, thus the right-hand side may be expanded as

- (iii) (Monotonicity) If , we have
- (iv) (Multiplicative invariances) If is a -bounded function that does not depend on one of the , then
In particular, if we additionally assume , then

- (v) (Cauchy-Schwarz) If , one has
where is a copy of that is independent of and is the random variable

- (vi) (Averaging) If and , where is another random variable independent of , and is a random variable depending on both and , then

*Proof:* The claims (i), (ii) are clear from expanding out all the definitions. The claim (iii) also easily follows from the definitions (the left-hand side involves a supremum over a more general class of multipliers , while the right-hand side omits the multiplier), as does (iv) (the multiplier can be absorbed into one of the multipliers in the definition of the cut norm). The claim (vi) follows by expanding out the definitions, and observing that all of the terms in the supremum appearing in the left-hand side also appear as terms in the supremum on the right-hand side. It remains to prove (v). By definition, the left-hand side is the supremum over all quantities of the form

where the are -bounded functions of that do not depend on . We average out in the direction (that is, we condition out the variables ), and pull out the factor (which does not depend on ), to write this as

which by Cauchy-Schwarz is bounded by

which can be expanded using the copy as

Expanding

and noting that each is -bounded and independent of for , we obtain the claim.

Now we can relate the cut norm to Gowers uniformity norms:

Lemma 6 Let be a finite abelian group, let be independent random variables uniformly drawn from for some , and let . Then

If is additionally assumed to be -bounded, we have the converse inequalities

*Proof:* Applying Lemma 5(v) times, we can bound

where are independent copies of that are also independent of . The expression inside the norm can also be written as

so by Example 4 one can write (6) as

which after some change of variables simplifies to

which by Cauchy-Schwarz is bounded by

which one can rearrange as

giving (2). A similar argument bounds

by

which gives (3).

For (4), we can reverse the above steps and expand as

which we can write as

for some -bounded function . This can in turn be expanded as

for some -bounded functions that do not depend on . By Example 4, this can be written as

which by several applications of Lemma 5(iii) and then Lemma 5(iv) can be bounded by

giving (4). A similar argument gives (5).
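The first inequality of Lemma 6 can be tested numerically in the case, where the Gowers norm is computable by Fourier analysis. In the sketch below (mine, for illustration; the search over sign vectors only lower-bounds the supremum, which suffices for checking the inequality), the bilinear cut-type average of is dominated by the norm, as guaranteed by the Gowers–Cauchy–Schwarz inequality:

```python
import itertools

import numpy as np

N = 6  # a tiny cyclic group Z/N, small enough for exhaustive search
rng = np.random.default_rng(4)
f = np.exp(2j * np.pi * rng.random(N))  # a random 1-bounded function on Z/N

# U^2 norm via the Fourier identity: ||f||_{U^2}^4 = sum_xi |hat f(xi)|^4
fhat = np.fft.fft(f) / N
u2 = float(np.sum(np.abs(fhat) ** 4) ** 0.25)

# bilinear cut-type average: sup over 1-bounded a, b of |E_{x,y} a(x) b(y) f(x+y)|,
# lower-bounded here by searching over real sign vectors only
F = np.array([[f[(x + y) % N] for y in range(N)] for x in range(N)])
best = 0.0
for a in itertools.product([-1.0, 1.0], repeat=N):
    for b in itertools.product([-1.0, 1.0], repeat=N):
        best = max(best, float(abs(np.einsum('i,j,ij->', a, b, F))) / N**2)

# Gowers-Cauchy-Schwarz: the U^2 norm dominates all such bilinear averages
assert best <= u2 + 1e-9
```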

Now we can prove Proposition 1. We begin with part (i). By permutation we may assume , then by translation we may assume . Replacing by and by , we can write the left-hand side of (1) as

where

is a -bounded function that does not depend on . Taking to be independent random variables drawn uniformly from , the left-hand side of (1) can then be written as

which by Example 4 is bounded in magnitude by

After many applications of Lemma 5(iii), (iv), this is bounded by

By Lemma 5(ii) we may drop the variable, and then the claim follows from Lemma 6.

For part (ii), we replace by and by to write the left-hand side as

the point here is that the first factor does not involve , the second factor does not involve , and the third factor has no quadratic terms in . Letting be independent variables drawn uniformly from , we can use Example 4 to bound this in magnitude by

which by Lemma 5(i),(iii),(iv) is bounded by

and then by Lemma 5(v) we may bound this by

which by Example 4 is

Now the expression inside the expectation is the product of four factors, each of which is or applied to an affine form where depends on and is one of , , , . With probability , the four different values of are distinct, and then by part (i) we have

When they are not distinct, we can instead bound this quantity by . Taking expectations in , we obtain the claim.

The analogue of the inverse theorem for cut norms is the following claim (which I learned from Ben Green):

Lemma 7 (-type inverse theorem) Let be independent random variables drawn from a finite abelian group , and let be -bounded. Then we have

where is the group of homomorphisms from to , and .

*Proof:* Suppose first that for some , then by definition

for some -bounded . By Fourier expansion, the left-hand side is also

where . From Plancherel’s theorem we have

hence by Hölder’s inequality one has for some , and hence

Conversely, suppose (7) holds. Then there is such that

which on substitution and Example 4 implies

The term splits into the product of a factor not depending on , and a factor not depending on . Applying Lemma 5(iii), (iv) we conclude that

The claim follows.
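The two Fourier-analytic pillars of this proof, Plancherel's theorem and the extraction of a single large Fourier coefficient, can be checked numerically with the expectation-normalised Fourier transform (a sketch of mine, not from the post):

```python
import numpy as np

N = 32
rng = np.random.default_rng(2)
f = rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N)  # a bounded function on Z/N

fhat = np.fft.fft(f) / N  # expectation-normalised: hat f(xi) = E_x f(x) e(-2 pi i x xi / N)

# Plancherel: sum_xi |hat f(xi)|^2 = E_x |f(x)|^2
assert np.isclose(np.sum(np.abs(fhat) ** 2), np.mean(np.abs(f) ** 2))

# the best correlation |E_x f(x) conj(chi(x))| over characters chi is the largest Fourier coefficient
x = np.arange(N)
corr = max(abs(np.vdot(np.exp(2j * np.pi * xi * x / N), f)) / N for xi in range(N))
assert np.isclose(corr, np.max(np.abs(fhat)))
```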

The higher order inverse theorems are much less trivial (and the optimal quantitative bounds are not currently known). However, there is a useful *degree lowering* argument, due to Peluse and Prendiville, that can allow one to lower the order of a uniformity norm in some cases. We give a simple version of this argument here:

Lemma 8 (Degree lowering argument, special case) Let be a finite abelian group, let be a non-empty finite set, and let be a function of the form for some -bounded functions indexed by . Suppose that

for some and . Then one of the following claims holds (with implied constants allowed to depend on ):

- (i) (Degree lowering) One has .
- (ii) (Non-zero frequency) There exist and non-zero such that

There are more sophisticated versions of this argument in which the frequency is “minor arc” rather than “zero frequency”, and then the Gowers norms are localised to suitable large arithmetic progressions; this is implicit in the above-mentioned paper of Peluse and Prendiville.

*Proof:* One can write

and hence we conclude that

for a set of tuples of density . Applying Lemma 6 and Lemma 7, we see that for each such tuple, there exists such that

where is drawn uniformly from .

Let us adopt the convention that vanishes for not in , then from Lemma 5(ii) we have

where are independent random variables drawn uniformly from and also independent of . By repeated application of Lemma 5(iii) we then have

Expanding out and using Lemma 5(iv) repeatedly we conclude that

From definition of we then have

By Lemma 5(vi), we see that the left-hand side is less than

where is drawn uniformly from , independently of . By repeated application of Lemma 5(i), (v), we conclude that

where are independent copies of that are also independent of , . By Lemma 5(ii) and Example 4 we conclude that

with probability .

The left-hand side can be rewritten as

where is the additive version of , thus

Translating , we can simplify this a little to

If the frequency is ever non-vanishing in the event (9) then conclusion (ii) applies. We conclude that

with probability . In particular, by the pigeonhole principle, there exist such that

with probability . Expanding this out, we obtain a representation of the form

holding with probability , where the are functions that do not depend on the coordinate. From (8) we conclude that

for of the tuples . Thus by Lemma 5(ii)

By repeated application of Lemma 5(iii) we then have

and then by repeated application of Lemma 5(iv)

and then the conclusion (i) follows from Lemma 6.

As an application of degree lowering, we give an inverse theorem for the average in Proposition 1(ii), first established by Bourgain-Chang and later reproved by Peluse (by different methods from those given here):

Proposition 9 Let be a cyclic group of prime order. Suppose that one has -bounded functions such that

for some . Then either , or one has

We remark that a modification of the arguments below also gives .

*Proof:* The left-hand side of (10) can be written as

where is the *dual function*

By Cauchy-Schwarz one thus has

and hence by Proposition 1, we either have (in which case we are done) or

Writing with , we conclude that either , or that

for some and non-zero . The left-hand side can be rewritten as

where and . We can rewrite this in turn as

which is bounded by

where are independent random variables drawn uniformly from . Applying Lemma 5(v), we conclude that

However, a routine Gauss sum calculation reveals that the left-hand side is for some absolute constant because is non-zero, so that . The only remaining case to consider is when

Repeating the above arguments we then conclude that

and then

The left-hand side can be computed to equal , and the claim follows.
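The "routine Gauss sum calculation" used in this proof ultimately rests on the classical fact that a quadratic Gauss sum to an odd prime modulus has magnitude exactly the square root of the modulus when the quadratic coefficient is non-zero. A quick numerical check (the prime is chosen arbitrarily for illustration):

```python
import cmath
import math

p = 101  # an odd prime, chosen arbitrarily for illustration
for a in (1, 2, 3):  # non-zero quadratic coefficients mod p
    s = sum(cmath.exp(2j * math.pi * a * x * x / p) for x in range(p))
    # classical fact: |sum_x e(a x^2 / p)| = sqrt(p)
    assert abs(abs(s) - math.sqrt(p)) < 1e-6
```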

This argument was given for the cyclic group setting, but the argument can also be applied to the integers (see Peluse-Prendiville) and can also be used to establish an analogue over the reals (that was first obtained by Bourgain).

Tamar Ziegler and I have just uploaded to the arXiv two related papers: "Concatenation theorems for anti-Gowers-uniform functions and Host-Kra characteristic factors" and "Polynomial patterns in primes", with the former developing a "quantitative Bessel inequality" for local Gowers norms that is crucial in the latter.

We use the term “concatenation theorem” to denote results in which structural control of a function in two or more “directions” can be “concatenated” into structural control in a *joint* direction. A trivial example of such a concatenation theorem is the following: if a function is constant in the first variable (thus is constant for each ), and also constant in the second variable (thus is constant for each ), then it is constant in the joint variable . A slightly less trivial example: if a function is affine-linear in the first variable (thus, for each , there exist such that for all ) and affine-linear in the second variable (thus, for each , there exist such that for all ) then is a quadratic polynomial in ; in fact it must take the form

for some real numbers . (This can be seen for instance by using the affine linearity in to show that the coefficients are also affine linear.)
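One way to see that such a function is genuinely "quadratic" in the joint variable is that all of its third iterated differences vanish, which is exactly the notion of degree recalled in the next paragraph. A small numerical sketch (coefficients chosen arbitrarily for illustration):

```python
import random

random.seed(0)
a, b, c, d = 2.0, -1.0, 0.5, 3.0          # arbitrary illustrative coefficients

def f(x, y):
    return a * x * y + b * x + c * y + d  # affine-linear in each variable separately

def diff(g, h):
    """Difference operator along the direction h = (hx, hy) in Z^2."""
    hx, hy = h
    return lambda x, y: g(x + hx, y + hy) - g(x, y)

# every third iterated difference of f vanishes identically: f has degree less than 3 along Z^2
for _ in range(20):
    g = f
    for _ in range(3):
        g = diff(g, (random.randint(-5, 5), random.randint(-5, 5)))
    x, y = random.randint(-5, 5), random.randint(-5, 5)
    assert abs(g(x, y)) < 1e-9
```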

The same phenomenon extends to higher degree polynomials. Given a function from one additive group to another, we say that is of *degree less than * along a subgroup of if all the -fold iterated differences of along directions in vanish, that is to say

for all and , where is the difference operator

(We adopt the convention that the only function of degree less than is the zero function.)

We then have the following simple proposition:

Proposition 1 (Concatenation of polynomiality) Let be of degree less than along one subgroup of , and of degree less than along another subgroup of , for some . Then is of degree less than along the subgroup of .

Note the previous example was basically the case when , , , , and .

*Proof:* The claim is trivial for or (in which case is constant along or respectively), so suppose inductively that and that the claim has already been proven for smaller values of .

We take a derivative in a direction along to obtain

where is the shift of by . Then we take a further shift by a direction to obtain

leading to the *cocycle equation*

Since has degree less than along and degree less than along , has degree less than along and less than along , so is of degree less than along by the induction hypothesis. Similarly is also of degree less than along . Combining this with the cocycle equation we see that is of degree less than along for any , and hence is of degree less than along , as required.

While this proposition is simple, it already illustrates some basic principles regarding how one would go about proving a concatenation theorem:

- (i) One should perform induction on the degrees involved, and take advantage of the recursive nature of degree (in this case, the fact that a function is of less than degree along some subgroup of directions iff all of its first derivatives along are of degree less than ).
- (ii) Structure is preserved by operations such as addition, shifting, and taking derivatives. In particular, if a function is of degree less than along some subgroup , then any derivative of is also of degree less than along , *even if does not belong to* .

Here is another simple example of a concatenation theorem. Suppose an at most countable additive group acts by measure-preserving shifts on some probability space ; we call the pair (or more precisely ) a *-system*. We say that a function is a *generalised eigenfunction of degree less than * along some subgroup of and some if one has

almost everywhere for all , and some functions of degree less than along , with the convention that a function has degree less than if and only if it is equal to . Thus for instance, a function is a generalised eigenfunction of degree less than along if it is constant on almost every -ergodic component of , and is a generalised eigenfunction of degree less than along if it is an eigenfunction of the shift action on almost every -ergodic component of . A basic example of a higher order eigenfunction is the function on the *skew shift* with action given by the generator for some irrational . One can check that for every integer , where is a generalised eigenfunction of degree less than along , so is of degree less than along .
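The skew shift computation above can be reproduced by direct iteration: the second coordinate of the orbit is given by an explicit quadratic polynomial in the number of steps, which is the source of the degree-two behaviour. A short numerical sketch (with an arbitrary irrational rotation number and starting point of my choosing):

```python
import cmath
import math

alpha = math.sqrt(2)        # an arbitrary irrational rotation number (illustrative choice)
x0, y0 = 0.3, 0.7           # arbitrary starting point on the 2-torus

def T(p):
    """Skew shift: T(x, y) = (x + alpha, y + x) on the 2-torus."""
    x, y = p
    return ((x + alpha) % 1.0, (y + x) % 1.0)

p = (x0, y0)
for n in range(1, 50):
    p = T(p)
    # the orbit obeys the quadratic formula y_n = y0 + n x0 + n(n-1)/2 alpha (mod 1)
    expected = (y0 + n * x0 + n * (n - 1) / 2 * alpha) % 1.0
    assert abs(cmath.exp(2j * math.pi * p[1]) - cmath.exp(2j * math.pi * expected)) < 1e-6
```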

We then have

Proposition 2 (Concatenation of higher order eigenfunctions) Let be a -system, and let be a generalised eigenfunction of degree less than along one subgroup of , and a generalised eigenfunction of degree less than along another subgroup of , for some . Then is a generalised eigenfunction of degree less than along the subgroup of .

The argument is almost identical to that of the previous proposition and is left as an exercise to the reader. The key point is the point (ii) identified earlier: the space of generalised eigenfunctions of degree less than along is preserved by multiplication and shifts, as well as the operation of “taking derivatives” even along directions that do not lie in . (To prove this latter claim, one should restrict to the region where is non-zero, and then divide by to locate .)

A typical example of this proposition in action is as follows: consider the -system given by the -torus with generating shifts

for some irrational , which can be checked to give a action

The function can then be checked to be a generalised eigenfunction of degree less than along , and also less than along , and less than along . One can view this example as the dynamical systems translation of the example (1) (see this previous post for some more discussion of this sort of correspondence).

The main results of our concatenation paper are analogues of these propositions concerning a more complicated notion of “polynomial-like” structure that are of importance in additive combinatorics and in ergodic theory. On the ergodic theory side, the notion of structure is captured by the *Host-Kra characteristic factors* of a -system along a subgroup . These factors can be defined in a number of ways. One is by duality, using the *Gowers-Host-Kra uniformity seminorms* (defined for instance here) . Namely, is the factor of defined up to equivalence by the requirement that

An equivalent definition is in terms of the *dual functions* of along , which can be defined recursively by setting and

where denotes the ergodic average along a Følner sequence in (in fact one can also define these concepts in non-amenable abelian settings as per this previous post). The factor can then be alternately defined as the factor generated by the dual functions for .

In the case when and is -ergodic, a deep theorem of Host and Kra shows that the factor is equivalent to the inverse limit of nilsystems of step less than . A similar statement holds with replaced by any finitely generated group by Griesmer, while the case of an infinite vector space over a finite field was treated in this paper of Bergelson, Ziegler, and myself. The situation is more subtle when is not -ergodic, or when is -ergodic but is a proper subgroup of acting non-ergodically, when one has to start considering measurable families of directional nilsystems; see for instance this paper of Austin for some of the subtleties involved (for instance, higher order group cohomology begins to become relevant!).

One of our main theorems is then

Proposition 3 (Concatenation of characteristic factors) Let be a -system, and let be measurable with respect to the factor and with respect to the factor for some and some subgroups of . Then is also measurable with respect to the factor .

We give two proofs of this proposition in the paper: an ergodic-theoretic proof using the Host-Kra theory of "cocycles of type (along a subgroup )", which can be used to inductively describe the factors , and a combinatorial proof based on a combinatorial analogue of this proposition which is harder to state (but which roughly speaking asserts that a function which is nearly orthogonal to all bounded functions of small norm, and also to all bounded functions of small norm, is also nearly orthogonal to all bounded functions of small norm). The combinatorial proof parallels the proof of Proposition 2. A key point is that dual functions obey a property analogous to being a generalised eigenfunction, namely that

where and is a “structured function of order ” along . (In the language of this previous paper of mine, this is an assertion that dual functions are uniformly almost periodic of order .) Again, the point (ii) above is crucial, and in particular it is key that any structure that has is inherited by the associated functions and . This sort of inheritance is quite easy to accomplish in the ergodic setting, as there is a ready-made language of factors to encapsulate the concept of structure, and the shift-invariance and -algebra properties of factors make it easy to show that just about any “natural” operation one performs on a function measurable with respect to a given factor, returns a function that is still measurable in that factor. In the finitary combinatorial setting, though, encoding the fact (ii) becomes a remarkably complicated notational nightmare, requiring a huge amount of “epsilon management” and “second-order epsilon management” (in which one manages not only scalar epsilons, but also function-valued epsilons that depend on other parameters). In order to avoid all this we were forced to utilise a nonstandard analysis framework for the combinatorial theorems, which made the arguments greatly resemble the ergodic arguments in many respects (though the two settings are still not equivalent, see this previous blog post for some comparisons between the two settings). Unfortunately the arguments are still rather complicated.

For combinatorial applications, dual formulations of the concatenation theorem are more useful. A direct dualisation of the theorem yields the following decomposition theorem: a bounded function which is small in norm can be split into a component that is small in norm, and a component that is small in norm. (One may wish to understand this type of result by first proving the following baby version: any function that has mean zero on every coset of can be decomposed as the sum of a function that has mean zero on every coset, and a function that has mean zero on every coset. This is dual to the assertion that a function that is constant on every coset and constant on every coset, is constant on every coset.) Combining this with some standard "almost orthogonality" arguments (i.e. Cauchy-Schwarz) gives the following Bessel-type inequality: if one has a lot of subgroups and a bounded function is small in norm for most , then it is also small in norm for most . (Here is a baby version one may wish to warm up on: if a function has small mean on for some large prime , then it has small mean on most of the cosets of most of the one-dimensional subgroups of .)
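The "baby version" of the decomposition mentioned above has a short constructive proof: subtracting the average over H-cosets produces the first piece, and that average itself is the second (its K-coset averages are H+K-coset averages, which vanish by hypothesis). Here is a numerical sketch in the toy group Z/12 (my choice of example; H and K are the subgroups generated by 4 and 6):

```python
import numpy as np

n = 12                                   # toy group Z/12
H, K = [0, 4, 8], [0, 6]                 # subgroups generated by 4 and 6
HK = [0, 2, 4, 6, 8, 10]                 # H + K, the subgroup generated by 2

def avg_over(f, S):
    """P_S f(x): average of f over the coset x + S."""
    return np.array([np.mean([f[(x + s) % n] for s in S]) for x in range(n)])

rng = np.random.default_rng(3)
f = rng.normal(size=n)
f = f - avg_over(f, HK)                  # force mean zero on every coset of H + K

h = avg_over(f, H)                       # average over H-cosets
g = f - h

assert np.allclose(avg_over(g, H), 0)    # g has mean zero on every H-coset
assert np.allclose(avg_over(h, K), 0)    # h has mean zero on every K-coset (P_K P_H = P_{H+K})
assert np.allclose(g + h, f)             # and the two pieces recover f
```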

There is also a generalisation of the above Bessel inequality (as well as several of the other results mentioned above) in which the subgroups are replaced by more general *coset progressions* (of bounded rank), so that one has a Bessel inequality controlling "local" Gowers uniformity norms such as by "global" Gowers uniformity norms such as . This turns out to be particularly useful when attempting to compute polynomial averages such as

for various functions . After repeated use of the van der Corput lemma, one can control such averages by expressions such as

(actually one ends up with more complicated expressions than this, but let’s use this example for sake of discussion). This can be viewed as an average of various Gowers uniformity norms of along arithmetic progressions of the form for various . Using the above Bessel inequality, this can be controlled in turn by an average of various Gowers uniformity norms along rank two generalised arithmetic progressions of the form for various . But for generic , this rank two progression is close in a certain technical sense to the “global” interval (this is ultimately due to the basic fact that two randomly chosen large integers are likely to be coprime, or at least have a small gcd). As a consequence, one can use the concatenation theorems from our first paper to control expressions such as (2) in terms of *global* Gowers uniformity norms. This is important in number theoretic applications, when one is interested in computing sums such as

or

where and are the Möbius and von Mangoldt functions respectively. This is because we are able to control global Gowers uniformity norms of such functions (thanks to results such as the proof of the inverse conjecture for the Gowers norms, the orthogonality of the Möbius function with nilsequences, and asymptotics for linear equations in primes), but much less control is currently available for local Gowers uniformity norms, even with the assistance of the generalised Riemann hypothesis (see this previous blog post for some further discussion).

By combining these tools and strategies with the “transference principle” approach from our previous paper (as improved using the recent “densification” technique of Conlon, Fox, and Zhao, discussed in this previous post), we are able in particular to establish the following result:

Theorem 4 (Polynomial patterns in the primes) Let be polynomials of degree at most , whose degree coefficients are all distinct, for some . Suppose that is admissible in the sense that for every prime , there are such that are all coprime to . Then there exist infinitely many pairs of natural numbers such that are prime.

Furthermore, we obtain an asymptotic for the number of such pairs in the range , (actually for minor technical reasons we reduce the range of to be very slightly less than ). In fact one could in principle obtain asymptotics for smaller values of , and relax the requirement that the degree coefficients be distinct with the requirement that no two of the differ by a constant, provided one had good enough local uniformity results for the Möbius or von Mangoldt functions. For instance, we can obtain an asymptotic for triplets of the form unconditionally for , and conditionally on GRH for all , using known results on primes in short intervals on average.

The case of this theorem was obtained in a previous paper of myself and Ben Green (using the aforementioned conjectures on the Gowers uniformity norm and the orthogonality of the Möbius function with nilsequences, both of which are now proven). For higher , an older result of Tamar and myself was able to tackle the case when (though our results there only give lower bounds on the number of pairs , and no asymptotics). Both of these results generalise my older theorem with Ben Green on the primes containing arbitrarily long arithmetic progressions. The theorem also extends to multidimensional polynomials, in which case there are some additional previous results; see the paper for more details. We also get a technical refinement of our previous result on narrow polynomial progressions in (dense subsets of) the primes by making the progressions just a little bit narrower in the case where the density of the set one is using is small.

This week I have been at a Banff workshop “Combinatorics meets Ergodic theory“, focused on the combinatorics surrounding Szemerédi’s theorem and the Gowers uniformity norms on one hand, and the ergodic theory surrounding Furstenberg’s multiple recurrence theorem and the Host-Kra structure theory on the other. This was quite a fruitful workshop, and directly inspired the various posts this week on this blog. Incidentally, BIRS being as efficient as it is, videos for this week’s talks are already online.

As mentioned in the previous two posts, Ben Green, Tamar Ziegler, and myself proved the following inverse theorem for the Gowers norms:

Theorem 1 (Inverse theorem for Gowers norms) Let $s \geq 1$ and $N \geq 1$ be integers, and let $\delta > 0$. Suppose that $f: \mathbb{Z} \to [-1,1]$ is a function supported on $[N]$ such that $\|f\|_{U^{s+1}[N]} \geq \delta$. Then there exists a filtered nilmanifold $G/\Gamma$ of degree $\leq s$ and complexity $O_{s,\delta}(1)$, a polynomial sequence $g: \mathbb{Z} \to G$, and a Lipschitz function $F: G/\Gamma \to \mathbb{R}$ of Lipschitz constant $O_{s,\delta}(1)$ such that $|\mathbb{E}_{n \in [N]} f(n) F(g(n)\Gamma)| \gg_{s,\delta} 1$.
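As a quick numerical sanity check on the simplest member of this family of norms, one can verify the standard Fourier identity $\|f\|_{U^2(\mathbb{Z}/N\mathbb{Z})}^4 = \sum_\xi |\hat f(\xi)|^4$ for the $U^2$ norm on a cyclic group (a toy illustration in Python; the function $f$ below is an arbitrary bounded test function, not anything arising in the theorem):

```python
import cmath

N = 8
# an arbitrary 1-bounded test function on Z/NZ (a damped quadratic phase)
f = [0.9 * cmath.exp(2j * cmath.pi * (k * k) / N) for k in range(N)]

# direct parallelepiped average:
# ||f||_{U^2}^4 = E_{n,h1,h2} f(n) conj(f(n+h1)) conj(f(n+h2)) f(n+h1+h2)
total = 0
for n in range(N):
    for h1 in range(N):
        for h2 in range(N):
            total += (f[n] * f[(n + h1) % N].conjugate()
                      * f[(n + h2) % N].conjugate() * f[(n + h1 + h2) % N])
u2_fourth_direct = (total / N**3).real

# Fourier identity: ||f||_{U^2}^4 = sum_xi |hat f(xi)|^4,
# with hat f(xi) = E_n f(n) e(-n xi / N)
def fhat(xi):
    return sum(f[n] * cmath.exp(-2j * cmath.pi * n * xi / N)
               for n in range(N)) / N

u2_fourth_fourier = sum(abs(fhat(xi)) ** 4 for xi in range(N))

assert abs(u2_fourth_direct - u2_fourth_fourier) < 1e-9
```

The higher norms $U^{s+1}$ admit no such exact Fourier formula, which is precisely why inverse theorems of the above type are needed.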

There is a higher dimensional generalisation, which first appeared explicitly (in a more general form) in this preprint of Szegedy (which used a slightly different argument than the one of Ben, Tammy, and myself; see also this previous preprint of Szegedy with related results):

Theorem 2 (Inverse theorem for multidimensional Gowers norms) Let $s, d \geq 1$ and $N \geq 1$ be integers, and let $\delta > 0$. Suppose that $f: \mathbb{Z}^d \to [-1,1]$ is a function supported on $[N]^d$ such that $\|f\|_{U^{s+1}([N]^d)} \geq \delta$. Then there exists a filtered nilmanifold $G/\Gamma$ of degree $\leq s$ and complexity $O_{s,\delta,d}(1)$, a polynomial sequence $g: \mathbb{Z}^d \to G$, and a Lipschitz function $F: G/\Gamma \to \mathbb{R}$ of Lipschitz constant $O_{s,\delta,d}(1)$ such that $|\mathbb{E}_{n \in [N]^d} f(n) F(g(n)\Gamma)| \gg_{s,\delta,d} 1$.

The case of this theorem was recently used by Wenbo Sun. One can replace the polynomial sequence with a linear sequence if desired by using a lifting trick (essentially due to Furstenberg, but which appears explicitly in Appendix C of my paper with Ben and Tammy).

In this post I would like to record a very neat and simple observation of Ben Green and Nikos Frantzikinakis, that uses the tool of Freiman isomorphisms to derive Theorem 2 as a corollary of the one-dimensional theorem. Namely, consider the linear map $\phi: \mathbb{Z}^d \to \mathbb{Z}$ defined by

$$\phi(n_1,\dots,n_d) := \sum_{i=1}^d n_i (10N)^{i-1},$$

that is to say $\phi(n)$ is the digit string base $10N$ that has digits $n_d\, n_{d-1} \dots n_1$. This map is a linear map from $[N]^d$ to a subset of $[(10N)^d]$ of density $1/10^d$. Furthermore it has the following “Freiman isomorphism” property: if $n, h_1, \dots, h_{s+1}$ lie in $\mathbb{Z}$ with $n + \omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}$ in the image set $\phi([N]^d)$ of $\phi$ for all $\omega_1,\dots,\omega_{s+1} \in \{0,1\}$, then there exist (unique) lifts $\tilde n \in \mathbb{Z}^d$ and $\tilde h_1, \dots, \tilde h_{s+1} \in \mathbb{Z}^d$ such that

$$\tilde n + \omega_1 \tilde h_1 + \dots + \omega_{s+1} \tilde h_{s+1} \in [N]^d$$

and

$$\phi\big( \tilde n + \omega_1 \tilde h_1 + \dots + \omega_{s+1} \tilde h_{s+1} \big) = n + \omega_1 h_1 + \dots + \omega_{s+1} h_{s+1}$$

for all $\omega_1,\dots,\omega_{s+1} \in \{0,1\}$. Indeed, the injectivity of $\phi$ on $[N]^d$ uniquely determines the sum $\tilde n + \omega_1 \tilde h_1 + \dots + \omega_{s+1} \tilde h_{s+1}$ for each $\omega$, and one can use base $10N$ arithmetic to verify that the alternating sum of these sums on any $2$-facet of the cube vanishes, which gives the claim. (In the language of additive combinatorics, the point is that $\phi$ is a Freiman isomorphism of sufficiently high order on $[N]^d$.)
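One can see this no-carry phenomenon concretely in a toy computation (a sketch in Python; for simplicity I take digits in $\{0,\dots,N-1\}$ and test only the order-$2$ version of the Freiman isomorphism property, by brute force over all additive quadruples):

```python
from itertools import product

N, d = 4, 2
base = 10 * N  # base 10N, so sums of two digits (each < N) never carry

def phi(n):
    # the digit map: n = (n_1, ..., n_d) with entries in {0, ..., N-1}
    return sum(n[i] * base ** i for i in range(d))

cube = list(product(range(N), repeat=d))

# order-2 Freiman isomorphism check:
# phi(a) + phi(b) = phi(c) + phi(e) holds exactly when a + b = c + e
for a, b, c, e in product(cube, repeat=4):
    additive = (phi(a) + phi(b) == phi(c) + phi(e))
    pointwise = all(a[i] + b[i] == c[i] + e[i] for i in range(d))
    assert additive == pointwise
```

The same no-carry argument, applied to the longer signed sums arising from $(s+1)$-dimensional parallelepipeds, gives the lifting property stated above.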

Now let $\tilde f: \mathbb{Z} \to [-1,1]$ be the function defined by setting $\tilde f(\phi(n)) := f(n)$ whenever $n \in [N]^d$, with $\tilde f$ vanishing outside of $\phi([N]^d)$. If $f$ obeys (1), then from the above Freiman isomorphism property we have

$$\|\tilde f\|_{U^{s+1}[(10N)^d]} \gg_{s,\delta,d} 1.$$

Applying the one-dimensional inverse theorem (Theorem 1), with $\delta$ reduced by a factor depending only on $s$ and $d$, and $N$ replaced by $(10N)^d$, this implies the existence of a filtered nilmanifold $G/\Gamma$ of degree $\leq s$ and complexity $O_{s,\delta,d}(1)$, a polynomial sequence $g: \mathbb{Z} \to G$, and a Lipschitz function $F: G/\Gamma \to \mathbb{R}$ of Lipschitz constant $O_{s,\delta,d}(1)$ such that

$$\big|\mathbb{E}_{m \in [(10N)^d]} \tilde f(m) F(g(m)\Gamma)\big| \gg_{s,\delta,d} 1,$$

which by the Freiman isomorphism property again implies that

$$\big|\mathbb{E}_{n \in [N]^d} f(n) F(g(\phi(n))\Gamma)\big| \gg_{s,\delta,d} 1.$$

But the map $n \mapsto g(\phi(n))$ is clearly a polynomial map from $\mathbb{Z}^d$ to $G$ (the composition of two polynomial maps is polynomial, see e.g. Appendix B of my paper with Ben and Tammy), and the claim follows.
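To illustrate the composition claim in the simplest abelian case (a sketch only, with $G = \mathbb{R}$, $\Gamma = \mathbb{Z}$, and the degree $2$ filtration):

```latex
% Take the degree 2 polynomial sequence
g(m) = \alpha m^2 + \beta m \pmod 1 .
% Composing with the linear map \phi: \mathbb{Z}^d \to \mathbb{Z} gives
g(\phi(n)) = \alpha \Big( \sum_{i=1}^d n_i (10N)^{i-1} \Big)^2
  + \beta \sum_{i=1}^d n_i (10N)^{i-1} \pmod 1,
% which, upon expanding the square, is again a polynomial of degree
% at most 2 in the variables n_1, \dots, n_d.
```

The general (nilpotent) case proceeds similarly, using the filtration to keep track of degrees.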

Remark 3 This trick appears to be largely restricted to the case of boundedly generated groups such as $\mathbb{Z}^d$; I do not see any easy way to deduce an inverse theorem for, say, from the -inverse theorem by this method.

Remark 4 By combining this argument with the one in the previous post, one can obtain a weak ergodic inverse theorem for $\mathbb{Z}^d$-actions. Interestingly, the Freiman isomorphism argument appears to be difficult to implement directly in the ergodic category; in particular, there does not appear to be an obvious direct way to derive the Host-Kra inverse theorem for $\mathbb{Z}^d$-actions (a result first obtained in the PhD thesis of Griesmer) from the counterpart for $\mathbb{Z}$-actions.

Note: this post is of a particularly technical nature, in particular presuming familiarity with nilsequences, nilsystems, characteristic factors, etc., and is primarily intended for experts.

As mentioned in the previous post, Ben Green, Tamar Ziegler, and myself proved the following inverse theorem for the Gowers norms:

Theorem 1 (Inverse theorem for Gowers norms) Let $s \geq 1$ and $N \geq 1$ be integers, and let $\delta > 0$. Suppose that $f: \mathbb{Z} \to [-1,1]$ is a function supported on $[N]$ such that $\|f\|_{U^{s+1}[N]} \geq \delta$. Then there exists a filtered nilmanifold $G/\Gamma$ of degree $\leq s$ and complexity $O_{s,\delta}(1)$, a polynomial sequence $g: \mathbb{Z} \to G$, and a Lipschitz function $F: G/\Gamma \to \mathbb{R}$ of Lipschitz constant $O_{s,\delta}(1)$ such that $|\mathbb{E}_{n \in [N]} f(n) F(g(n)\Gamma)| \gg_{s,\delta} 1$.

This result was conjectured earlier by Ben Green and myself; this conjecture was strongly motivated by an analogous inverse theorem in ergodic theory by Host and Kra, which we formulate here in a form designed to resemble Theorem 1 as closely as possible:

Theorem 2 (Inverse theorem for Gowers-Host-Kra seminorms) Let $s \geq 1$ be an integer, and let $X$ be an ergodic, countably generated measure-preserving system. Suppose that one has $\|f\|_{U^{s+1}(X)} > 0$ for all non-zero $f \in L^\infty(X)$ (all $L^p$ spaces are real-valued in this post). Then $X$ is an inverse limit (in the category of measure-preserving systems, up to almost everywhere equivalence) of ergodic degree $\leq s$ nilsystems, that is to say systems of the form $(G/\Gamma, x \mapsto gx)$ for some degree $\leq s$ filtered nilmanifold $G/\Gamma$ and a group element $g \in G$ that acts ergodically on $G/\Gamma$.

It is a natural question to ask if there is any logical relationship between the two theorems. In the finite field category, one can deduce the combinatorial inverse theorem from the ergodic inverse theorem by a variant of the Furstenberg correspondence principle, as worked out by Tamar Ziegler and myself; however, in the current context of $\mathbb{Z}$-actions, the connection is less clear.

One can split Theorem 2 into two components:

Theorem 3 (Weak inverse theorem for Gowers-Host-Kra seminorms) Let $s \geq 1$ be an integer, and let $X$ be an ergodic, countably generated measure-preserving system. Suppose that one has $\|f\|_{U^{s+1}(X)} > 0$ for all non-zero $f \in L^\infty(X)$. Then $X$ is a factor of an inverse limit of ergodic degree $\leq s$ nilsystems.

Theorem 4 (Pro-nilsystems closed under factors) Let $s \geq 1$ be an integer. Then any factor of an inverse limit of ergodic degree $\leq s$ nilsystems is again an inverse limit of ergodic degree $\leq s$ nilsystems.

Indeed, it is clear that Theorem 2 implies both Theorem 3 and Theorem 4, and conversely that the latter two theorems jointly imply the former. Theorem 4 is, in principle, purely a fact about nilsystems, and should have an independent proof, but this is not known; the only known proofs go through the full machinery needed to prove Theorem 2 (or the closely related theorem of Ziegler). (However, the fact that a factor of a nilsystem is again a nilsystem was established previously by Parry.)

The purpose of this post is to record a partial implication in the reverse direction to the correspondence principle:

As mentioned at the start of the post, a fair amount of familiarity with the area is presumed here, and some routine steps will be presented with only a fairly brief explanation.

A few years ago, Ben Green, Tamar Ziegler, and myself proved the following (rather technical-looking) inverse theorem for the Gowers norms:

Theorem 1 (Discrete inverse theorem for Gowers norms) Let $s \geq 1$ and $N \geq 1$ be integers, and let $\delta > 0$. Suppose that $f: \mathbb{Z} \to [-1,1]$ is a function supported on $[N]$ such that $\|f\|_{U^{s+1}[N]} \geq \delta$. Then there exists a filtered nilmanifold $G/\Gamma$ of degree $\leq s$ and complexity $O_{s,\delta}(1)$, a polynomial sequence $g: \mathbb{Z} \to G$, and a Lipschitz function $F: G/\Gamma \to \mathbb{R}$ of Lipschitz constant $O_{s,\delta}(1)$ such that $|\mathbb{E}_{n \in [N]} f(n) F(g(n)\Gamma)| \gg_{s,\delta} 1$.

For the definitions of “filtered nilmanifold”, “degree”, “complexity”, and “polynomial sequence”, see the paper of Ben, Tammy, and myself. (I should caution the reader that this blog post will presume a fair amount of familiarity with this subfield of additive combinatorics.) This result has a number of applications, for instance to establishing asymptotics for linear equations in the primes, but this will not be the focus of discussion here.

The purpose of this post is to record the observation that this “discrete” inverse theorem, together with an equidistribution theorem for nilsequences that Ben and I worked out in a separate paper, implies a continuous version:

Theorem 2 (Continuous inverse theorem for Gowers norms) Let $s \geq 1$ be an integer, and let $\delta > 0$. Suppose that $f: \mathbb{R} \to [-1,1]$ is a measurable function supported on $[0,1]$ such that $\|f\|_{U^{s+1}[0,1]} \geq \delta$. Then there exists a filtered nilmanifold $G/\Gamma$ of degree $\leq s$ and complexity $O_{s,\delta}(1)$, a (smooth) polynomial sequence $g: \mathbb{R} \to G$, and a Lipschitz function $F: G/\Gamma \to \mathbb{R}$ of Lipschitz constant $O_{s,\delta}(1)$ such that $|\int_0^1 f(t) F(g(t)\Gamma)\, dt| \gg_{s,\delta} 1$.

The interval $[0,1]$ can be easily replaced with any other fixed interval by a change of variables. A key point here is that the bounds are completely uniform in the choice of interval. Note though that the coefficients of $g$ can be arbitrarily large (and this is necessary, as can be seen just by considering functions of the form $f(t) := e(\xi t)$ for some arbitrarily large frequency $\xi$).
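To spell out the linear phase example (a routine computation, writing $e(x) := e^{2\pi i x}$ and ignoring boundary effects):

```latex
% For f(t) = e(\xi t) on [0,1], one multiplicative derivative kills the phase:
\Delta_h f(t) := f(t+h)\,\overline{f(t)} = e(\xi h),
% which no longer depends on t; a second derivative is identically 1:
\Delta_{h_2}\Delta_{h_1} f(t) = e(\xi h_1)\,\overline{e(\xi h_1)} = 1 .
% Hence the higher Gowers averages are bounded below uniformly in \xi,
\|f\|_{U^{s+1}[0,1]} \gg_s 1,
% while any nilsequence F(g(t)\Gamma) correlating with f must carry the
% frequency \xi, forcing the coefficients of g to grow with \xi.
```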

It is likely that one could prove Theorem 2 by carefully going through the proof of Theorem 1 and replacing all instances of $\mathbb{Z}$ with $\mathbb{R}$ (and making appropriate modifications to the argument to accommodate this). However, the proof of Theorem 1 is quite lengthy. Here, we shall instead proceed by the usual limiting process of viewing the continuous interval $[0,1]$ as a limit of the discrete intervals $\frac{1}{N}[N]$ as $N \to \infty$. However, there will be some problems taking the limit due to a failure of compactness, and specifically with regards to the coefficients of the polynomial sequence $g$ produced by Theorem 1, after normalising these coefficients by appropriate powers of $N$. Fortunately, a factorisation theorem from a paper of Ben Green and myself resolves this problem by splitting $g$ into a “smooth” part which does enjoy good compactness properties, as well as “totally equidistributed” and “periodic” parts which can be eliminated using the measurability (and thus approximate smoothness) of $f$.
