Tamar Ziegler and I have just uploaded to the arXiv two related papers: “Concatenation theorems for anti-Gowers-uniform functions and Host-Kra characteristic factors” and “Polynomial patterns in the primes”, with the former developing a “quantitative Bessel inequality” for local Gowers norms that is crucial in the latter.

We use the term “concatenation theorem” to denote results in which structural control of a function in two or more “directions” can be “concatenated” into structural control in a *joint* direction. A trivial example of such a concatenation theorem is the following: if a function $f: {\bf Z} \times {\bf Z} \rightarrow {\bf R}$ is constant in the first variable (thus $x \mapsto f(x,y)$ is constant for each $y$), and also constant in the second variable (thus $y \mapsto f(x,y)$ is constant for each $x$), then it is constant in the joint variable $(x,y)$. A slightly less trivial example: if a function $f: {\bf Z} \times {\bf Z} \rightarrow {\bf R}$ is affine-linear in the first variable (thus, for each $y$, there exist $\alpha(y), \beta(y)$ such that $f(x,y) = \alpha(y) x + \beta(y)$ for all $x$) and affine-linear in the second variable (thus, for each $x$, there exist $\gamma(x), \delta(x)$ such that $f(x,y) = \gamma(x) y + \delta(x)$ for all $y$) then $f$ is a quadratic polynomial in $(x,y)$; in fact it must take the form

$\displaystyle f(x,y) = \alpha x y + \beta x + \gamma y + \delta \ \ \ \ \ (1)$

for some real numbers $\alpha, \beta, \gamma, \delta$. (This can be seen for instance by using the affine linearity in $y$ to show that the coefficients $\alpha(y), \beta(y)$ are also affine linear.)
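As a quick sanity check of the displayed form, here is a small script (an illustration of mine, not part of the papers) that recovers the coefficients $\alpha, \beta, \gamma, \delta$ of a doubly affine function from four of its values:

```python
def recover_coefficients(f):
    """Recover (a, b, c, d) assuming f(x, y) = a*x*y + b*x + c*y + d."""
    d = f(0, 0)
    b = f(1, 0) - d                        # first difference in x at y = 0
    c = f(0, 1) - d                        # first difference in y at x = 0
    a = f(1, 1) - f(1, 0) - f(0, 1) + d    # mixed second difference
    return a, b, c, d

# affine in x for each fixed y, with coefficients alpha(y) = 2y - 3 and
# beta(y) = 5y + 7 that are themselves affine in y, as the remark requires
def f(x, y):
    return (2 * y - 3) * x + (5 * y + 7)

a, b, c, d = recover_coefficients(f)
assert all(f(x, y) == a * x * y + b * x + c * y + d
           for x in range(-5, 6) for y in range(-5, 6))
print("f(x, y) =", a, "* x*y +", b, "* x +", c, "* y +", d)
```

The four evaluation points $(0,0), (1,0), (0,1), (1,1)$ suffice precisely because the quadratic form has only four coefficients.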

The same phenomenon extends to higher degree polynomials. Given a function $f: G \rightarrow K$ from one additive group $G$ to another $K$, we say that $f$ is of *degree less than $d$* along a subgroup $H$ of $G$ if all the $d$-fold iterated differences of $f$ along directions in $H$ vanish, that is to say

$\displaystyle \partial_{h_1} \dots \partial_{h_d} f(x) = 0$

for all $x \in G$ and $h_1,\dots,h_d \in H$, where $\partial_h$ is the difference operator

$\displaystyle \partial_h f(x) := f(x+h) - f(x).$

(We adopt the convention that the only function of degree less than $0$ is the zero function.)

We then have the following simple proposition:

Proposition 1 (Concatenation of polynomiality) Let $f: G \rightarrow K$ be of degree less than $d_1$ along one subgroup $H_1$ of $G$, and of degree less than $d_2$ along another subgroup $H_2$ of $G$, for some $d_1, d_2 \geq 1$. Then $f$ is of degree less than $d_1+d_2-1$ along the subgroup $H_1+H_2$ of $G$.

Note the previous example was basically the case when $G = {\bf Z} \times {\bf Z}$, $K = {\bf R}$, $H_1 = {\bf Z} \times \{0\}$, $H_2 = \{0\} \times {\bf Z}$, and $d_1 = d_2 = 2$.

*Proof:* The claim is trivial for $d_1 = 1$ or $d_2 = 1$ (in which case $f$ is constant along $H_1$ or $H_2$ respectively), so suppose inductively that $d_1, d_2 \geq 2$ and the claim has already been proven for smaller values of $d_1+d_2$.

We take a derivative in a direction $h_1$ along $H_1$ to obtain

$\displaystyle T_{h_1} f = f + \partial_{h_1} f,$

where $T_{h_1} f(x) := f(x+h_1)$ is the shift of $f$ by $h_1$. Then we take a further shift by a direction $h_2$ along $H_2$ to obtain

$\displaystyle T_{h_1+h_2} f = T_{h_2} f + T_{h_2} \partial_{h_1} f = f + \partial_{h_2} f + T_{h_2} \partial_{h_1} f,$

leading to the *cocycle equation*

$\displaystyle \partial_{h_1+h_2} f = \partial_{h_2} f + T_{h_2} \partial_{h_1} f.$

Since $f$ has degree less than $d_1$ along $H_1$ and degree less than $d_2$ along $H_2$, $\partial_{h_1} f$ has degree less than $d_1-1$ along $H_1$ and less than $d_2$ along $H_2$, so $\partial_{h_1} f$ (and hence its shift $T_{h_2} \partial_{h_1} f$) is of degree less than $d_1+d_2-2$ along $H_1+H_2$ by the induction hypothesis. Similarly $\partial_{h_2} f$ is also of degree less than $d_1+d_2-2$ along $H_1+H_2$. Combining this with the cocycle equation we see that $\partial_{h_1+h_2} f$ is of degree less than $d_1+d_2-2$ along $H_1+H_2$ for any $h_1 \in H_1$ and $h_2 \in H_2$, and hence $f$ is of degree less than $d_1+d_2-1$ along $H_1+H_2$, as required. $\Box$
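Proposition 1 can be tested by brute force in the model case $f(x,y) = xy$ from the earlier example (a toy verification of mine: $f$ has degree less than $2$ along each coordinate subgroup, and the proposition predicts degree less than $3$ jointly):

```python
from itertools import product

def diff(f, h):
    """Difference operator on Z^2: (diff_h f)(x) = f(x + h) - f(x)."""
    return lambda x: f((x[0] + h[0], x[1] + h[1])) - f(x)

def iterated_diff(f, hs):
    for h in hs:
        f = diff(f, h)
    return f

f = lambda v: v[0] * v[1]                 # the bilinear example above

dirs = (-1, 0, 1)
H1 = [(h, 0) for h in dirs]               # directions in Z x {0}
H2 = [(0, h) for h in dirs]               # directions in {0} x Z
G = [(h1, h2) for h1 in dirs for h2 in dirs]
points = list(product(range(-3, 4), repeat=2))

# degree < 2 along H1 and along H2: all 2-fold differences vanish
for H in (H1, H2):
    assert all(iterated_diff(f, [h, k])(x) == 0
               for h in H for k in H for x in points)

# f is NOT of degree < 2 along the joint group Z^2 ...
assert iterated_diff(f, [(1, 0), (0, 1)])((0, 0)) != 0

# ... but it IS of degree < 3 = 2 + 2 - 1 along Z^2, as Proposition 1 predicts
assert all(iterated_diff(f, [g1, g2, g3])(x) == 0
           for g1 in G for g2 in G for g3 in G for x in points)
print("Proposition 1 verified for f(x, y) = x*y")
```

Of course this only spot-checks finitely many directions and base points, but for a polynomial function that is already conclusive in spirit.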

While this proposition is simple, it already illustrates some basic principles regarding how one would go about proving a concatenation theorem:

- (i) One should perform induction on the degrees involved, and take advantage of the recursive nature of degree (in this case, the fact that a function $f$ is of degree less than $d$ along some subgroup $H$ of directions iff all of its first derivatives $\partial_h f$, $h \in H$, along $H$ are of degree less than $d-1$).
- (ii) Structure is preserved by operations such as addition, shifting, and taking derivatives. In particular, if a function $f$ is of degree less than $d$ along some subgroup $H$, then any derivative $\partial_g f$ of $f$ is also of degree less than $d$ along $H$, *even if $g$ does not belong to $H$*.

Here is another simple example of a concatenation theorem. Suppose an at most countable additive group $G$ acts by measure-preserving shifts $T_g: X \rightarrow X$ on some probability space $(X,\mu)$; we call the pair $(X, (T_g)_{g \in G})$ (or more precisely $(X, {\mathcal X}, \mu, (T_g)_{g \in G})$) a *$G$-system*. We say that a function $f \in L^\infty(X)$ is a *generalised eigenfunction of degree less than $d$* along some subgroup $H$ of $G$ and some $d \geq 1$ if one has

$\displaystyle T_h f = \lambda_h f$

almost everywhere for all $h \in H$, and some functions $\lambda_h \in L^\infty(X)$ of degree less than $d-1$ along $H$, with the convention that a function has degree less than $0$ if and only if it is equal to $1$. Thus for instance, a function is a generalised eigenfunction of degree less than $1$ along $H$ if it is constant on almost every $H$-ergodic component of $X$, and is a generalised eigenfunction of degree less than $2$ along $H$ if it is an eigenfunction of the shift action on almost every $H$-ergodic component of $X$. A basic example of a higher order eigenfunction is the function $f(x,y) := e(y)$ (where $e(\theta) := e^{2\pi i \theta}$) on the *skew shift* $({\bf R}/{\bf Z})^2$ with the ${\bf Z}$-action given by the generator $T(x,y) := (x+\alpha, y+x)$ for some irrational $\alpha$. One can check that $T^n f = \lambda_n f$ for every integer $n$, where $\lambda_n(x,y) := e(nx + \binom{n}{2} \alpha)$ is a generalised eigenfunction of degree less than $2$ along ${\bf Z}$, so $f$ is of degree less than $3$ along ${\bf Z}$.
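The skew shift computation can be verified numerically (my own check, under the conventions $T(x,y) = (x+\alpha, y+x)$ and $T^n f := f \circ T^n$, which are assumptions on my part): iterating the shift gives $T^n(x,y) = (x+n\alpha, y+nx+\binom{n}{2}\alpha)$, so the multiplier is $e(nx + \binom{n}{2}\alpha)$.

```python
import cmath, math

alpha = math.sqrt(2)                     # an irrational rotation number
e = lambda t: cmath.exp(2j * math.pi * t)

def T(x, y):
    """One step of the skew shift on the 2-torus."""
    return (x + alpha) % 1.0, (y + x) % 1.0

x0, y0 = 0.3, 0.7
x, y = x0, y0
for n in range(1, 20):
    x, y = T(x, y)
    # predicted multiplier lambda_n(x0, y0) = e(n*x0 + binom(n, 2)*alpha)
    lam = e(n * x0 + n * (n - 1) // 2 * alpha)
    assert abs(e(y) - lam * e(y0)) < 1e-9
print("T^n f = lambda_n f verified for n = 1, ..., 19")
```

The check exploits that $e$ is $1$-periodic, so reducing coordinates mod $1$ at each step is harmless.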

We then have

Proposition 2 (Concatenation of higher order eigenfunctions) Let $(X, (T_g)_{g \in G})$ be a $G$-system, and let $f$ be a generalised eigenfunction of degree less than $d_1$ along one subgroup $H_1$ of $G$, and a generalised eigenfunction of degree less than $d_2$ along another subgroup $H_2$ of $G$, for some $d_1, d_2 \geq 1$. Then $f$ is a generalised eigenfunction of degree less than $d_1+d_2-1$ along the subgroup $H_1+H_2$ of $G$.

The argument is almost identical to that of the previous proposition and is left as an exercise to the reader. The key point is the point (ii) identified earlier: the space of generalised eigenfunctions of degree less than $d$ along $H$ is preserved by multiplication and shifts, as well as the operation of “taking derivatives” even along directions $g$ that do not lie in $H$. (To prove this latter claim, one should restrict to the region where $f$ is non-zero, and then divide by $f$ to locate the derivative.)

A typical example of this proposition in action is as follows: consider the ${\bf Z}^2$-system given by the $3$-torus $({\bf R}/{\bf Z})^3$ with generating shifts

$\displaystyle T_{(1,0)}(x,y,z) := (x+\alpha, y, z+y), \quad T_{(0,1)}(x,y,z) := (x, y+\alpha, z+x)$

for some irrational $\alpha$, which can be checked to give a ${\bf Z}^2$ action

$\displaystyle T_{(n,m)}(x,y,z) = (x+n\alpha, y+m\alpha, z+ny+mx+nm\alpha).$

The function $f(x,y,z) := e(z)$ can then be checked to be a generalised eigenfunction of degree less than $2$ along ${\bf Z} \times \{0\}$, and also of degree less than $2$ along $\{0\} \times {\bf Z}$, and of degree less than $3$ along ${\bf Z}^2$. One can view this example as the dynamical systems translation of the example (1) (see this previous post for some more discussion of this sort of correspondence).
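For concreteness, here is a numerical check (my own illustration, taking as an assumption the generating shifts $T_{(1,0)}(x,y,z) = (x+\alpha, y, z+y)$ and $T_{(0,1)}(x,y,z) = (x, y+\alpha, z+x)$ on the $3$-torus) that the generators commute and that $e(z)$ is a generalised eigenfunction for the joint action:

```python
import cmath
from math import pi, sqrt

alpha = sqrt(3)                      # any irrational will do
e = lambda t: cmath.exp(2j * pi * t)

# the two generating shifts, taken as given
def T10(x, y, z): return ((x + alpha) % 1, y, (z + y) % 1)
def T01(x, y, z): return (x, (y + alpha) % 1, (z + x) % 1)

w = (0.2, 0.5, 0.9)

# the generators commute, so they generate a Z^2 action
a, b = T10(*T01(*w)), T01(*T10(*w))
assert all(abs(e(p) - e(q)) < 1e-9 for p, q in zip(a, b))

# f(x, y, z) = e(z) is a generalised eigenfunction: the multiplier
# f(T^{(n,m)} w) / f(w) equals e(n*y + m*x + n*m*alpha), which does not
# depend on the z coordinate
def f(x, y, z): return e(z)

def apply_action(w, n, m):
    for _ in range(n): w = T10(*w)
    for _ in range(m): w = T01(*w)
    return w

x0, y0, z0 = w
for n in range(5):
    for m in range(5):
        lam = f(*apply_action(w, n, m)) / f(*w)
        assert abs(lam - e(n * y0 + m * x0 + n * m * alpha)) < 1e-9
print("e(z) is a generalised eigenfunction for the Z^2 action")
```

The multiplier $e(ny+mx+nm\alpha)$ is itself an eigenfunction of the joint action, which is what makes $e(z)$ of degree less than $3$ along ${\bf Z}^2$.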

The main results of our concatenation paper are analogues of these propositions concerning a more complicated notion of “polynomial-like” structure that are of importance in additive combinatorics and in ergodic theory. On the ergodic theory side, the notion of structure is captured by the *Host-Kra characteristic factors* $Z^{<d}_H(X)$ of a $G$-system $X$ along a subgroup $H$ of $G$. These factors can be defined in a number of ways. One is by duality, using the *Gowers-Host-Kra uniformity seminorms* $\| \cdot \|_{U^d_H(X)}$ (defined for instance here). Namely, $Z^{<d}_H(X)$ is the factor of $X$ defined up to equivalence by the requirement that

$\displaystyle \|f\|_{U^d_H(X)} = 0 \Leftrightarrow {\bf E}(f | Z^{<d}_H(X)) = 0$

for all $f \in L^\infty(X)$. An equivalent definition is in terms of the *dual functions* ${\mathcal D}^d_H f$ of $f$ along $H$, which can be defined recursively by setting ${\mathcal D}^0_H f := 1$ and

$\displaystyle {\mathcal D}^d_H f := {\bf E}_{h \in H} (T_h f) {\mathcal D}^{d-1}_H( \overline{f} T_h f ),$

where ${\bf E}_{h \in H}$ denotes the ergodic average along a Følner sequence in $H$ (in fact one can also define these concepts in non-amenable abelian settings as per this previous post). The factor $Z^{<d}_H(X)$ can then be alternately defined as the factor generated by the dual functions ${\mathcal D}^d_H f$ for $f \in L^\infty(X)$.
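As a sanity check on the duality between dual functions and uniformity norms, here is a brute-force computation (my own finitary illustration on the cyclic group ${\bf Z}/N{\bf Z}$, which is only a combinatorial analogue of the ergodic definitions above) of the order-$2$ dual function and the identity $\langle f, {\mathcal D}^2 f \rangle = \|f\|_{U^2}^4$:

```python
import cmath, math, random

N = 16
random.seed(0)
f = [cmath.exp(2j * math.pi * random.random()) for _ in range(N)]

def dual2(f):
    """Order-2 dual function on Z/NZ."""
    N = len(f)
    return [sum(f[(x + h1) % N] * f[(x + h2) % N]
                * f[(x + h1 + h2) % N].conjugate()
                for h1 in range(N) for h2 in range(N)) / N**2
            for x in range(N)]

def u2_norm_4th(f):
    """||f||_{U^2}^4 on Z/NZ (a 2-dimensional Gowers average)."""
    N = len(f)
    return sum(f[x] * f[(x + h1) % N].conjugate()
               * f[(x + h2) % N].conjugate() * f[(x + h1 + h2) % N]
               for x in range(N) for h1 in range(N) for h2 in range(N)) / N**3

D = dual2(f)
pairing = sum(f[x].conjugate() * D[x] for x in range(N)) / N
assert abs(pairing - u2_norm_4th(f)) < 1e-9
print("duality <f, D^2 f> = ||f||_{U^2}^4 verified on Z/16Z")
```

This pairing identity is exactly what makes dual functions the natural “structured” objects to test a function's $U^2$ uniformity against.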

In the case when $G = H = {\bf Z}$ and $X$ is $G$-ergodic, a deep theorem of Host and Kra shows that the factor $Z^{<d}_H(X)$ is equivalent to the inverse limit of nilsystems of step less than $d$. A similar statement holds with ${\bf Z}$ replaced by any finitely generated group by Griesmer, while the case of an infinite vector space over a finite field was treated in this paper of Bergelson, Ziegler, and myself. The situation is more subtle when $X$ is not $G$-ergodic, or when $X$ is $G$-ergodic but $H$ is a proper subgroup of $G$ acting non-ergodically, in which case one has to start considering measurable families of directional nilsystems; see for instance this paper of Austin for some of the subtleties involved (for instance, higher order group cohomology begins to become relevant!).

One of our main theorems is then

Proposition 3 (Concatenation of characteristic factors) Let $(X, (T_g)_{g \in G})$ be a $G$-system, and let $f$ be measurable with respect to the factor $Z^{<d_1}_{H_1}(X)$ and with respect to the factor $Z^{<d_2}_{H_2}(X)$ for some $d_1, d_2 \geq 1$ and some subgroups $H_1, H_2$ of $G$. Then $f$ is also measurable with respect to the factor $Z^{<d_1+d_2-1}_{H_1+H_2}(X)$.

We give two proofs of this proposition in the paper; an ergodic-theoretic proof using the Host-Kra theory of “cocycles of type $<d$ (along a subgroup $H$)”, which can be used to inductively describe the factors $Z^{<d}_H(X)$, and a combinatorial proof based on a combinatorial analogue of this proposition which is harder to state (but which roughly speaking asserts that a function which is nearly orthogonal to all bounded functions of small $U^{d_1}_{H_1}$ norm, and also to all bounded functions of small $U^{d_2}_{H_2}$ norm, is also nearly orthogonal to all bounded functions of small $U^{d_1+d_2-1}_{H_1+H_2}$ norm). The combinatorial proof parallels the proof of Proposition 2. A key point is that dual functions $F := {\mathcal D}^d_H f$ obey a property analogous to being a generalised eigenfunction, namely that

$\displaystyle T_h F = \lambda_h F$

where $h \in H$ and $\lambda_h$ is a “structured function of order $d-1$” along $H$. (In the language of this previous paper of mine, this is an assertion that dual functions are uniformly almost periodic of order $d-1$.) Again, the point (ii) above is crucial, and in particular it is key that any structure that $f$ has is inherited by the associated functions $\lambda_h$ and $F$. This sort of inheritance is quite easy to accomplish in the ergodic setting, as there is a ready-made language of factors to encapsulate the concept of structure, and the shift-invariance and $\sigma$-algebra properties of factors make it easy to show that just about any “natural” operation one performs on a function measurable with respect to a given factor returns a function that is still measurable in that factor. In the finitary combinatorial setting, though, encoding the fact (ii) becomes a remarkably complicated notational nightmare, requiring a huge amount of “epsilon management” and “second-order epsilon management” (in which one manages not only scalar epsilons, but also function-valued epsilons that depend on other parameters). In order to avoid all this we were forced to utilise a nonstandard analysis framework for the combinatorial theorems, which made the arguments greatly resemble the ergodic arguments in many respects (though the two settings are still not equivalent; see this previous blog post for some comparisons between the two settings). Unfortunately the arguments are still rather complicated.

For combinatorial applications, dual formulations of the concatenation theorem are more useful. A direct dualisation of the theorem yields the following decomposition theorem: a bounded function which is small in $U^{d_1+d_2-1}_{H_1+H_2}$ norm can be split into a component that is small in $U^{d_1}_{H_1}$ norm, and a component that is small in $U^{d_2}_{H_2}$ norm. (One may wish to understand this type of result by first proving the following baby version: any function that has mean zero on every coset of $H_1+H_2$, can be decomposed as the sum of a function that has mean zero on every $H_1$ coset, and a function that has mean zero on every $H_2$ coset. This is dual to the assertion that a function that is constant on every $H_1$ coset and constant on every $H_2$ coset, is constant on every $H_1+H_2$ coset.) Combining this with some standard “almost orthogonality” arguments (i.e. Cauchy-Schwarz) gives the following Bessel-type inequality: if one has a lot of subgroups $H_1,\dots,H_k$ and a bounded function is small in $U^{2d-1}_{H_i+H_j}$ norm for most pairs $i,j$, then it is also small in $U^d_{H_i}$ norm for most $i$. (Here is a baby version one may wish to warm up on: if a function $f: ({\bf Z}/p{\bf Z})^2 \rightarrow [0,1]$ has small mean on $({\bf Z}/p{\bf Z})^2$ for some large prime $p$, then it has small mean on most of the cosets of most of the one-dimensional subgroups of $({\bf Z}/p{\bf Z})^2$.)
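The “baby version” of the decomposition can be checked concretely (this is my own toy demonstration, on the finite group $G = {\bf Z}_6 \times {\bf Z}_6$ with $H_1$ the first coordinate subgroup and $H_2$ the second): subtracting off the $H_1$-coset averages produces the two desired pieces.

```python
import random

random.seed(1)
n = 6
f = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
mean = sum(map(sum, f)) / n**2
f = [[v - mean for v in row] for row in f]    # normalise f to global mean zero

# h is the average of f over each H1-coset (each row); g is the remainder
row_mean = [sum(f[y]) / n for y in range(n)]
g = [[f[y][x] - row_mean[y] for x in range(n)] for y in range(n)]
h = [[row_mean[y] for _ in range(n)] for y in range(n)]

tol = 1e-12
# g has mean zero on every H1-coset (row) by construction ...
assert all(abs(sum(g[y])) < tol for y in range(n))
# ... h has mean zero on every H2-coset (column), since each column of h
# averages to the global mean of f, which is zero ...
assert all(abs(sum(h[y][x] for y in range(n))) < tol for x in range(n))
# ... and f = g + h exactly
assert all(abs(f[y][x] - g[y][x] - h[y][x]) < tol
           for y in range(n) for x in range(n))
print("baby decomposition verified")
```

The point of the construction is that averaging over $H_1$-cosets commutes with averaging over $H_2$-cosets, which is the finite shadow of the concatenation phenomenon.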

There is also a generalisation of the above Bessel inequality (as well as several of the other results mentioned above) in which the subgroups are replaced by more general *coset progressions* (of bounded rank), so that one has a Bessel inequality controlling “local” Gowers uniformity norms, such as Gowers norms along short progressions $r^2 \cdot [M]$, by “global” Gowers uniformity norms such as those along the full interval $[N]$. This turns out to be particularly useful when attempting to compute polynomial averages such as

$\displaystyle \sum_{n \leq N} \sum_{r \leq \sqrt{N}} f_1(n+P_1(r)) \dots f_k(n+P_k(r))$

for various functions $f_1,\dots,f_k$ and polynomials $P_1,\dots,P_k$. After repeated use of the van der Corput lemma, one can control such averages by expressions such as

$\displaystyle {\bf E}_{r \leq \sqrt{N}} \| f \|_{U^d_{r^2 \cdot [M]}([N])} \ \ \ \ \ (2)$

(actually one ends up with more complicated expressions than this, but let’s use this example for sake of discussion). This can be viewed as an average of various Gowers uniformity norms of $f$ along arithmetic progressions of the form $r^2 \cdot [M] = \{ r^2, 2r^2, \dots, Mr^2 \}$ for various $r$. Using the above Bessel inequality, this can be controlled in turn by an average of various Gowers uniformity norms along rank two generalised arithmetic progressions of the form $r_1^2 \cdot [M] + r_2^2 \cdot [M]$ for various $r_1, r_2$. But for generic $r_1, r_2$, this rank two progression is close in a certain technical sense to the “global” interval $[N]$ (this is ultimately due to the basic fact that two randomly chosen large integers are likely to be coprime, or at least have a small gcd). As a consequence, one can use the concatenation theorems from our first paper to control expressions such as (2) in terms of *global* Gowers uniformity norms. This is important in number theoretic applications, when one is interested in computing sums such as

$\displaystyle \sum_{n \leq N} \sum_{r \leq \sqrt{N}} \mu(n+P_1(r)) \dots \mu(n+P_k(r))$

or

$\displaystyle \sum_{n \leq N} \sum_{r \leq \sqrt{N}} \Lambda(n+P_1(r)) \dots \Lambda(n+P_k(r))$

where $\mu$ and $\Lambda$ are the Möbius and von Mangoldt functions respectively. This is because we are able to control global Gowers uniformity norms of such functions (thanks to results such as the proof of the inverse conjecture for the Gowers norms, the orthogonality of the Möbius function with nilsequences, and asymptotics for linear equations in primes), but much less control is currently available for local Gowers uniformity norms, even with the assistance of the generalised Riemann hypothesis (see this previous blog post for some further discussion).
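The coprimality heuristic invoked above is classical: the density of coprime pairs of integers is $1/\zeta(2) = 6/\pi^2 \approx 0.608$. A quick numerical sanity check (my own, not from the paper):

```python
import math

N = 1000
coprime_pairs = sum(1 for a in range(1, N + 1) for b in range(1, N + 1)
                    if math.gcd(a, b) == 1)
density = coprime_pairs / N**2
print(f"density of coprime pairs up to {N}: {density:.4f}")
assert abs(density - 6 / math.pi**2) < 0.01
```

The error in this finite count decays roughly like $\log N / N$, so even $N = 1000$ lands within a percent of the limiting density.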

By combining these tools and strategies with the “transference principle” approach from our previous paper (as improved using the recent “densification” technique of Conlon, Fox, and Zhao, discussed in this previous post), we are able in particular to establish the following result:

Theorem 4 (Polynomial patterns in the primes) Let $P_1,\dots,P_k: {\bf Z} \rightarrow {\bf Z}$ be polynomials of degree at most $d$, whose degree $d$ coefficients are all distinct, for some $d, k \geq 1$. Suppose that $P_1,\dots,P_k$ is admissible in the sense that for every prime $p$, there are $n, r$ such that $n+P_1(r),\dots,n+P_k(r)$ are all coprime to $p$. Then there exist infinitely many pairs $(n,r)$ of natural numbers such that $n+P_1(r),\dots,n+P_k(r)$ are prime.
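The admissibility condition is finitely checkable at each prime, since only the residues of $n$ and $r$ modulo $p$ matter. Here is a sketch of such a local test (my own illustration; the polynomial families below are hypothetical examples chosen for the demonstration, not taken from the paper):

```python
def admissible_at(polys, p):
    """Local admissibility at the prime p: search n, r over residues mod p."""
    return any(all((n + P(r)) % p != 0 for P in polys)
               for n in range(p) for r in range(p))

def small_primes(limit):
    return [p for p in range(2, limit) if all(p % q for q in range(2, p))]

# the family n, n + r^2, n + 2r^2 (degree 2, with leading coefficients
# 0, 1, 2 all distinct) passes the local test at every small prime ...
quadratic = [lambda r: 0, lambda r: r * r, lambda r: 2 * r * r]
assert all(admissible_at(quadratic, p) for p in small_primes(50))

# ... whereas n, n + 1, n + 2 fails at p = 3, since those three values
# always cover every residue class mod 3
consecutive = [lambda r: 0, lambda r: 1, lambda r: 2]
assert not admissible_at(consecutive, 3)
print("admissibility checks passed")
```

Checking all residues mod $p$ suffices because $n + P_i(r) \bmod p$ depends only on $n \bmod p$ and $r \bmod p$.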

Furthermore, we obtain an asymptotic for the number of such pairs $(n,r)$ in the range $n \leq N$, $r \leq N^{1/d}$ (actually for minor technical reasons we reduce the range of $r$ to be very slightly less than $N^{1/d}$). In fact one could in principle obtain asymptotics for smaller values of $r$, and replace the requirement that the degree $d$ coefficients be distinct with the requirement that no two of the $P_i$ differ by a constant, provided one had good enough local uniformity results for the Möbius or von Mangoldt functions. For instance, we can obtain an asymptotic for triplets such as $n, n+r, n+r^2$ unconditionally for part of the range of $r$, and conditionally on GRH for the full range, using known results on primes in short intervals on average.

The $d=1$ case of this theorem was obtained in a previous paper of myself and Ben Green (using the aforementioned conjectures on the Gowers uniformity norm and the orthogonality of the Möbius function with nilsequences, both of which are now proven). For higher $d$, an older result of Tamar and myself was able to tackle the case when $P_1(0) = \dots = P_k(0) = 0$ (though our results there only give lower bounds on the number of pairs $(n,r)$, and no asymptotics). Both of these results generalise my older theorem with Ben Green on the primes containing arbitrarily long arithmetic progressions. The theorem also extends to multidimensional polynomials, in which case there are some additional previous results; see the paper for more details. We also get a technical refinement of our previous result on narrow polynomial progressions in (dense subsets of) the primes by making the progressions just a little bit narrower in the case when the density of the set one is using is small.

## 22 comments


27 March, 2016 at 10:27 pm

Anonymous: Should replace the Uncategorized tag with paper.

[Corrected, thanks – T.]

27 March, 2016 at 11:25 pm

Anonymous: The prime tuples conjecture seems to be the (still unproved) $d=0$ case of Theorem 4.

28 March, 2016 at 6:52 am

Terence Tao: Yes, that’s correct! Unfortunately, the d=0 case has “infinite complexity” in the sense that no amount of Cauchy-Schwarz is capable of controlling averages such as $\sum_{n \leq N} \Lambda(n) \Lambda(n+2)$ into Gowers uniformity norm type objects in such a way that some cancellation is preserved, so the prime tuples conjecture remains unfortunately out of reach by these methods. However, one could probably use these results to show that the prime tuples conjecture holds “on average” for tuples parameterised by various polynomial families, e.g. that $\sum_{n \leq N} \Lambda(n) \Lambda(n+r^2)$ has the right asymptotic for most $r$. (In the linear case, there are results of this form by Balog, and by Ben Green and myself, using the circle method and our theorem on linear equations in primes respectively.)

28 March, 2016 at 2:16 am

Anonymous: Can Theorem 4 be made effective?

28 March, 2016 at 6:54 am

Terence Tao: Unfortunately, at one point we use Siegel’s theorem (in order to establish asymptotic orthogonality of the Möbius function with “major arc” nilsequences), which is still ineffective. Also we rely on the inverse conjecture for the Gowers norms, which is also currently ineffective due to the use of nonstandard analysis techniques. I believe it is possible to remove the latter source of ineffectivity with some significant effort, but there is still no plausible path currently to remove the former (unless one assumes some powerful conjecture, such as GRH).

30 March, 2016 at 2:46 am

Anonymous: Do the above nonstandard techniques require only the ZFC axioms?

30 March, 2016 at 11:45 am

Terence Tao: Yes (in fact one only really needs the ultrafilter lemma, and possibly also countable choice, in place of the full axiom of choice). But it is also possible to appeal to general “proof mining” results to show that any nonstandard argument establishing a standard conclusion can (in principle, at least) be written in a fully standard fashion that avoids the use of choice, though the resulting proof might be significantly larger and messier than the nonstandard one.

29 March, 2016 at 12:04 am

Zoi: I feel depressed. I am 17 years old, new to number theory, and I just bought my first books (Dirichlet’s lectures on number theory and Dedekind’s theory of algebraic integers), and I enjoy taking the time quietly learning from them. But I realize now that there’s so much existing knowledge in modern math that if I want to do some research of my own someday, I must learn the subject in the densest and fastest way I can, without really taking the time to enjoy the process, or devoting days to thinking about problems already solved by people like Gauss (instead of simply reading the solution); otherwise I’ll simply be too old for research when I finally reach the surface of modern knowledge.

(And even if I won’t, it is known that mathematicians are best when they are young, and I would still “lose” those early effective years of research.)

However, I simply don’t enjoy math that way. I feel stress and lack of time constantly. Is there any solution?

29 March, 2016 at 8:37 am

Terence Tao: Learning mathematics in a university environment is substantially more efficient than learning from self-study alone; you’d be amazed what one can pick up in four years of undergraduate mathematics and four to five years of graduate mathematics (particularly once one has completed the basic courses and started to do some supervised research). But modern mathematics is indeed a vast subject, and one actually learns more maths after one’s PhD than before it. I finished my own PhD with a specialist knowledge in harmonic analysis, and only really picked up PDE, number theory, ergodic theory, combinatorics, algebraic geometry, model theory, etc. after I started research (particularly after collaborating with people in these areas); I had seen each of these topics in various classes, but never internalised them until after my PhD. Once one acquires a really thorough knowledge of one part of mathematics, it becomes a lot easier (particularly in the internet age) to branch out to neighbouring areas, and then to whatever other parts of mathematics one needs; one can pick up far more material this way than one might expect just from linearly extrapolating from one’s learning rate as a student. (But I don’t think anybody these days can really claim mastery of *all* areas of modern mathematics; perhaps Hilbert and von Neumann were the last to do so, back when mathematics wasn’t quite so large as it is today. On the other hand, one doesn’t have to know all this stuff directly; one can (and should!) work in collaboration with coauthors with complementary areas of expertise.) This little illustrated guide is fairly apt, I think: http://matt.might.net/articles/phd-school-in-pictures/

The relationship between age and productivity in mathematics is complicated; generally speaking, younger mathematicians have more energy, but older mathematicians have more experience, and it is not always clear which is more advantageous (although a collaboration between a senior mathematician and a junior one can be a particularly effective combination). I remember several times during my own PhD when I would spend an entire week trying to resolve some issue in my research project, trying various fruitless attacks for hours, until I would meet with my advisor, Eli Stein, who would listen for a few minutes and then go over to his filing cabinet to fish out an offprint of a paper where the authors encountered and resolved a similar problem, and hand it to me; after studying the offprint, I was usually able to figure out what to do (and move on to the next issue).

30 March, 2016 at 12:15 pm

Zoi: Terence and anonymous, thank you very much for your comments :)

30 March, 2016 at 4:48 am

Anonymous: You are too young to be depressed! Look at me: I was born to a rural family in a poor third-world country and I heard of calculus for the first time when I was 25 years old! I am now 42 years old, working as a shoe seller, and I read math books in my free time just for the pleasure it gives me (currently I am –slowly :-)– reading the classic “Lehrbuch der Topologie”).

There is a big difference between enjoying learning and teaching mathematics, and enjoying doing research in mathematics. Just for the sake of analogy: I know people who love to hear about the experimental discoveries of physicists, but would feel really no pleasure working in experimental physics.

Maybe you need to discover what it is that you really want to do. You should not be depressed by the activity you choose to do in your life.

By the way: very good exposition of the mathematical works of others is (or should be) really appreciated too. I feel that there are really few historians of mathematics nowadays! It is an activity of fundamental importance, but not highly valued in an era where most people (outside mathematics) only want money or fame. This should not discourage you. It’s a profession that can be a source of great happiness (I imagine). You are the only one who can possibly know what will make you happy.

Make your own decisions, choose patiently and wisely, and then enjoy happiness, young man.

30 March, 2016 at 10:20 am

arch1: Zoi, Terry, and especially Anonymous, your comments made my day.

30 March, 2016 at 8:56 am

Not mentioned on The Aperiodical, March 2016 | The Aperiodical: […] has, as usual, written a post on his blog announcing the result, and the paper itself is on the arXiv, titled “Concatenation theorems for anti-Gowers-uniform […]

3 April, 2016 at 10:12 am

Pierre-Yves Bienvenu: Hi Terry, looking at Theorem 1.3 of the paper, I wonder whether it could be possible to let the non-constant coefficients of the P_i be unbounded, something like a power (less than A) of log, I’d guess. In view of Siegel-Walfisz, this works for k=2. For larger k, when I try to adapt the argument of “Linear equations in primes”, I run into trouble because the generalised von Neumann theorem seems to seriously require bounded coefficients (while other steps seem to be fine).

3 April, 2016 at 4:29 pm

Terence Tao: That’s a good question! If one uses existing techniques to try to control these patterns, one ends up wanting to control a Gowers norm of von Mangoldt or Möbius that is not the usual norm associated with the usual shift $n \mapsto n+1$, but rather a Gowers box norm associated to a collection of shifts $n \mapsto n+a_1, \dots, n \mapsto n+a_k$ where each $a_i$ is polylogarithmic in size. Unfortunately, in this higher dimensional setting we don’t have a good inverse theorem that reduces everything to nilsequence type objects, so the linear equations in primes strategy doesn’t work, except in some special cases such as when all the $a_i$ are equal (which roughly speaking corresponds to the non-constant coefficients of the linear forms being large but equal to each other). One may have to somehow rework the linear equations in primes argument to rely less on nilsequences (e.g. using “soft” dual functions rather than “hard” nilsequences as the obstructions to uniformity), though I don’t know whether this can actually be done with current technology.

ADDED LATER: Actually, now that I think about it a bit more, I think the “clearing denominators” step in the older polynomial patterns paper of Tamar and myself should resolve this problem (in the linear pattern case, at least), controlling the aforementioned Gowers box norm by a Gowers uniformity norm based on a single shift such as $n \mapsto n + a_1 \dots a_k$. (In that previous paper we needed to clear denominators to deal with some pesky dependence in the shifts; here the enemy comes from shifts of polylogarithmic size rather than bounded size, but it seems the same method should work nevertheless.) The inverse theorem should control that norm in terms of nilsequences on arithmetic progressions at this polylogarithmic spacing, and the existing methods should be able to correlate Möbius with the nilsequences on these progressions by using the full power of Siegel-Walfisz. Might be a good project to hammer all this out…

p.s. Incidentally, in an appendix to a previous paper with Ford, Green, and Konyagin we did at least observe that the *constant* terms in the linear equations in primes stuff can be made polylogarithmic in size. We didn’t do any clearing of denominators there though.

11 April, 2016 at 6:58 am

MatjazG: The “{\o}” in “F{\o}lner sequence” should be rendered as “ø”.

[Corrected, thanks – T.]

7 August, 2017 at 1:10 am

Anonymous: A “.” at the beginning of the last paragraph should be removed.

[Actually, that entire paragraph was not meant to be there, and is now deleted. -T.]

11 January, 2020 at 8:42 pm

Some recent papers | What's new: […] increment argument efficiently. To resolve the second difficulty one needs to find a quantitative concatenation theorem for Gowers uniformity norms. Many of these ideas were developed in previous papers of Peluse and […]

30 July, 2020 at 7:13 pm

Higher uniformity of bounded multiplicative functions in short intervals on average | What's new: […] conjectures, which of course remain open). Results of this type were previously obtained by Tamar Ziegler and myself in the “true complexity zero” case when the polynomials had distinct degrees, in which […]

1 January, 2021 at 11:06 am

Marcel Goh: Hi Prof. Tao, I was wondering, in the statement of Theorem 4, whether we can say anything about the structure of the “infinitely many $n$” in the conclusion (perhaps related to the $n$ in the admissibility condition)? For example, I have run into a problem where it would be very useful to know if there are infinitely many $n$ having only a fixed set of prime factors $\{p_1,\ldots,p_n\}$. This seems like it’s probably too much to ask of the theorem, but I am still wondering if the $n$ are just sporadic or if there is some pattern to them. Thanks!

1 January, 2021 at 11:32 am

Terence Tao: I would imagine that if one utilised tools such as the arithmetic regularity lemma, one could show that the set of $n$ in Theorem 4 is basically measurable with respect to local nilsequences (in the spirit of the Bergelson-Host-Kra theory of sets of multiple recurrence). But there are error terms in this characterisation that may make this unsuitable for your application; if the set of prime factors is indeed fixed (presumably you want to use something like $\{p_1,\ldots,p_k\}$ rather than $\{p_1,\ldots,p_n\}$ here), you have an extremely sparse set of $n$ which is probably well beyond the range of what the regularity lemma can control, even when augmented with a transference principle, as these methods allow for an exceptional set of $n$ that is small but otherwise unstructured, and which could conceivably contain all the $n$ that you are interested in. For instance, I doubt that current technology can establish the infinitude of arithmetic progressions of length three, in which two of the elements are prime and the third is a Mersenne number.

1 January, 2021 at 11:57 am

Marcel Goh: Hi Prof. Tao, thanks for your reply! I will take some time to ponder this; indeed it sounded far-fetched after I thought about it for a bit. However, if we increase the set of $n$ to something with positive density (for example, suppose we want $n$ to be congruent to 4 modulo 100), would there be more hope of finding pairs $(n,r)$?

I am actually working in the simple case where the degree equals 1, and it would actually be enough for me to find a single pair $(n,r)$, but I suspect that finding a single pair is equivalent to finding infinitely many. I’m still working on wrapping my head around the implications and limits of these polynomial patterns, but your comments are extremely helpful! I hope you have a Happy New Year!