Tamar Ziegler and I have just uploaded to the arXiv our paper “Narrow progressions in the primes”, submitted to the special issue “Analytic Number Theory” in honor of the 60th birthday of Helmut Maier. The results here are vaguely reminiscent of the recent progress on bounded gaps in the primes, but use different methods.

About a decade ago, Ben Green and I showed that the primes contained arbitrarily long arithmetic progressions: given any $k$, one could find a progression $n, n+r, \dots, n+(k-1)r$ with $r>0$ consisting entirely of primes. In fact we showed the same statement was true if the primes were replaced by any subset of the primes of positive relative density.
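To see the statement in action on a tiny scale, here is a brute-force search (an illustration of my own, not code from the paper) for the first length-$k$ arithmetic progression of primes:

```python
def is_prime(m):
    """Trial-division primality test; adequate for this small illustrative search."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prime_ap(k, limit=10**4):
    """Return the first (n, r) with r > 0 such that n, n+r, ..., n+(k-1)r are all prime."""
    for n in range(2, limit):
        for r in range(1, limit):
            if n + (k - 1) * r > limit:
                break
            if all(is_prime(n + i * r) for i in range(k)):
                return n, r
    return None

print(prime_ap(5))  # (5, 6): the progression 5, 11, 17, 23, 29
```

Of course, exhaustive search of this kind says nothing about arbitrarily long progressions; the content of the theorem is that a valid pair $(n,r)$ exists for every $k$.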

A little while later, Tamar Ziegler and I obtained the following generalisation: given any $k$ and any polynomials $P_1,\dots,P_k: {\bf Z} \to {\bf Z}$ with $P_1(0)=\dots=P_k(0)=0$, one could find a “polynomial progression” $n+P_1(r),\dots,n+P_k(r)$ with $r>0$ consisting entirely of primes. Furthermore, we could make this progression somewhat “narrow” by taking $r \leq n^{o(1)}$ (where $o(1)$ denotes a quantity that goes to zero as $n$ goes to infinity). Again, the same statement also applies if the primes were replaced by a subset of positive relative density. My previous result with Ben corresponds to the linear case $P_i(r) = (i-1)r$.
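As a toy illustration of such a polynomial pattern (again a sketch of my own, not from the paper), one can brute-force the pattern $n, n+r, n+r^2$, corresponding to $P_1(r)=0$, $P_2(r)=r$, $P_3(r)=r^2$:

```python
def is_prime(m):
    """Trial-division primality test for small numbers."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def poly_progression(polys, min_r=1, limit=10**4):
    """First (n, r) with r >= min_r such that n + P(r) is prime for every P in polys."""
    for n in range(2, limit):
        for r in range(min_r, limit):
            vals = [n + P(r) for P in polys]
            if max(vals) > limit:
                break
            if all(is_prime(v) for v in vals):
                return n, r
    return None

# Pattern n, n + r, n + r^2; we insist r >= 2 to avoid the degenerate
# coincidence n + r = n + r^2 at r = 1.
print(poly_progression([lambda r: 0, lambda r: r, lambda r: r * r], min_r=2))  # (2, 3): primes 2, 5, 11
```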

In this paper we were able to make the progressions a bit narrower still: given any $k$ and any polynomials $P_1,\dots,P_k: {\bf Z} \to {\bf Z}$ with $P_1(0)=\dots=P_k(0)=0$, one could find a “polynomial progression” $n+P_1(r),\dots,n+P_k(r)$ consisting entirely of primes, and such that $0 < r \leq \log^L n$, where $L$ depends only on $k$ and $P_1,\dots,P_k$ (in fact it depends only on $k$ and the degrees of $P_1,\dots,P_k$). The result is still true if the primes are replaced by a subset of positive density $\delta$, but unfortunately in our arguments we must then let $L$ depend on $\delta$. However, in the linear case $P_i(r) = (i-1)r$, we were able to make $L$ independent of $\delta$ (although it is still somewhat large as a function of $k$).

The polylogarithmic factor is somewhat necessary: using an upper bound sieve, one can easily construct a subset of the primes of positive relative density whose arithmetic progressions $n, n+r, \dots, n+(k-1)r$ of length $k$ all obey the lower bound $r \gg \log^L n$ for some $L$ depending on $k$. On the other hand, the prime tuples conjecture predicts that if one works with the actual primes rather than dense subsets of the primes, then one should have infinitely many length $k$ arithmetic progressions of bounded width for any fixed $k$. The case $k=2$ of this is precisely the celebrated theorem of Yitang Zhang that was the focus of the recently concluded Polymath8 project here. The higher $k$ case is conjecturally true, but appears to be out of reach of known methods. (Using the multidimensional Selberg sieve of Maynard, one can get $k$ primes inside an interval of length $O(\exp(O(k)))$, but this is such a sparse set of primes that one would not expect to find even a progression of length three within such an interval.)

The argument in the previous paper was unable to obtain a polylogarithmic bound on the width of the progressions, due to the reliance on a certain technical “correlation condition” on a certain Selberg sieve weight $\nu$. This correlation condition required one to control arbitrarily long correlations of $\nu$, which was not compatible with a bounded value of $L$ (particularly if one wanted to keep $L$ independent of $\delta$).

However, thanks to recent advances in this area by Conlon, Fox, and Zhao (who introduced a very nice “densification” technique), it is now possible (in principle, at least) to delete this correlation condition from the arguments. Conlon-Fox-Zhao did this for my original theorem with Ben; and in the current paper we apply the densification method to our previous argument to similarly remove the correlation condition. This method does not fully eliminate the need to control arbitrarily long correlations, but allows most of the factors in such a long correlation to be *bounded*, rather than merely controlled by an unbounded weight such as $\nu$. This turns out to be significantly easier to control, although in the non-linear case we still unfortunately had to make $L$ depend on $\delta$, due to a certain “clearing denominators” step arising from the complicated nature of the Gowers-type uniformity norms that we were using to control polynomial averages. We believe though that this is an artefact of our method, and one should be able to prove our theorem with an $L$ that is uniform in $\delta$.

Here is a simple instance of the densification trick in action. Suppose that one wishes to establish an estimate of the form

$$ {\bf E}_n {\bf E}_r f(n) g(n+r) h(n+r^2) = o(1) \ \ \ \ \ (1) $$

for some real-valued functions $f, g, h$ which are bounded in magnitude by a weight function $\nu$, but which are not expected to be bounded; this average will naturally arise when trying to locate the pattern $n, n+r, n+r^2$ in a set such as the primes. Here I will be vague as to exactly what range the parameters $n, r$ are being averaged over. Suppose that the factor $g$ (say) has enough uniformity that one can already show a smallness bound

$$ {\bf E}_n {\bf E}_r F(n) g(n+r) H(n+r^2) = o(1) \ \ \ \ \ (2) $$

whenever $F, H$ are bounded functions. (One should think of $F$ and $H$ as being like the indicator functions of “dense” sets, in contrast to which $f, g, h$ are like the normalised indicator functions of “sparse” sets). The bound (2) cannot be directly applied to control (1) because of the unbounded (or “sparse”) nature of $f$ and $h$. However one can “densify” $f$ and $h$ as follows. Since $f$ is bounded in magnitude by $\nu$, we can bound the left-hand side of (1) as

$$ O\left( {\bf E}_n \nu(n) \left| {\bf E}_r g(n+r) h(n+r^2) \right| \right). $$

The weight function $\nu$ will be normalised so that ${\bf E}_n \nu(n) = O(1)$, so by the Cauchy-Schwarz inequality it suffices to show that

$$ {\bf E}_n \nu(n) \left| {\bf E}_r g(n+r) h(n+r^2) \right|^2 = o(1). $$
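To spell out this Cauchy-Schwarz step (writing $A(n)$ for the inner average ${\bf E}_r g(n+r) h(n+r^2)$, an abbreviation of mine rather than notation from the paper):

$$ {\bf E}_n \nu(n) |A(n)| = {\bf E}_n \nu(n)^{1/2} \cdot \nu(n)^{1/2} |A(n)| \leq \left( {\bf E}_n \nu(n) \right)^{1/2} \left( {\bf E}_n \nu(n) |A(n)|^2 \right)^{1/2}, $$

and the first factor on the right is $O(1)$ by the normalisation of $\nu$.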

The left-hand side expands as

$$ {\bf E}_n {\bf E}_{r,r'} \nu(n)\, g(n+r) h(n+r^2) g(n+r') h(n+r'^2). $$

Now, it turns out that after an enormous (but finite) number of applications of the Cauchy-Schwarz inequality to steadily eliminate the $g$ and $h$ factors, as well as a certain “polynomial forms condition” hypothesis on $\nu$, one can show that

$$ {\bf E}_n {\bf E}_{r,r'} (\nu(n) - 1)\, g(n+r) h(n+r^2) g(n+r') h(n+r'^2) = o(1). $$

(Because of the polynomial shifts, this requires a method known as “PET induction”, but let me skip over this point here.) In view of this estimate, we now just need to show that

$$ {\bf E}_n {\bf E}_{r,r'} g(n+r) h(n+r^2) g(n+r') h(n+r'^2) = o(1). $$

Now we can reverse the previous steps. First, we collapse back to

$$ {\bf E}_n \left| {\bf E}_r g(n+r) h(n+r^2) \right|^2 = o(1). $$

One can bound $\left| {\bf E}_r g(n+r) h(n+r^2) \right|$ by ${\bf E}_r \nu(n+r) \nu(n+r^2)$, which can be shown to be “bounded on average” in a suitable sense (e.g. a bounded $L^3$ norm) via the aforementioned polynomial forms condition. Because of this and the Hölder inequality, the above estimate is equivalent to

$$ {\bf E}_n \left| {\bf E}_r g(n+r) h(n+r^2) \right| = o(1). $$
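One way to make this Hölder step explicit (my own bookkeeping; I assume for illustration that the dominating average has a bounded third moment): writing $A(n) := {\bf E}_r g(n+r) h(n+r^2)$ and $B(n) := {\bf E}_r \nu(n+r) \nu(n+r^2)$, so that $|A| \leq B$ pointwise, one has

$$ {\bf E}_n |A(n)|^2 \leq {\bf E}_n B(n)^{3/2} |A(n)|^{1/2} \leq \left( {\bf E}_n B(n)^3 \right)^{1/2} \left( {\bf E}_n |A(n)| \right)^{1/2}, $$

so smallness of ${\bf E}_n |A(n)|$ upgrades to smallness of ${\bf E}_n |A(n)|^2$; the converse direction is immediate from the Cauchy-Schwarz inequality.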

By setting $F(n)$ to be the signum of ${\bf E}_r g(n+r) h(n+r^2)$, this is equivalent to

$$ {\bf E}_n {\bf E}_r F(n) g(n+r) h(n+r^2) = o(1). $$

This is halfway between (1) and (2); the sparsely supported function $f$ has been replaced by its “densification” $F$, but we have not yet densified $h$ to a bounded function $H$. However, one can shift $n$ by $r^2$ and repeat the above arguments to achieve a similar densification of $h$, at which point one has reduced (1) to (2).
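Schematically, then, the chain of reductions (in my shorthand, with $A(n) := {\bf E}_r g(n+r) h(n+r^2)$) was

$$ (1) \;\Longleftarrow\; {\bf E}_n \nu(n) |A(n)|^2 = o(1) \;\Longleftarrow\; {\bf E}_n |A(n)|^2 = o(1) \;\Longleftrightarrow\; {\bf E}_n |A(n)| = o(1) \;\Longleftrightarrow\; {\bf E}_n {\bf E}_r F(n) g(n+r) h(n+r^2) = o(1), $$

after which one more pass densifies $h$ and lands on (2).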

## 11 comments


5 September, 2014 at 7:25 am

wolf

Can somebody please clarify this statement for me?

“given any k, one could find a progression n, n+r, … , n+(k-1)r with r>0 consisting entirely of primes”.

If r=1 then n and n+1 cannot both be primes for the trivial reason that one must be even if the other is odd.

What am I missing?

5 September, 2014 at 7:49 am

arch1

I think the claim is not that such a progression exists for *all* r>0, but that it exists for *at least one* r>0 (else the statement would have begun “given any k and any r>0…”).

5 September, 2014 at 7:50 am

Anonymous

The assertion is that one could find an “r” for which that statement is true, not that the statement is true for every r.

21 September, 2014 at 5:56 am

pybienvenu

In what sense is it the same densification trick as in Fox-Conlon-Zhao? In their paper, they replaced g(x,y) by g'(x,y)=Eg(x,z)g(y,z), and here you also replace a function by an expectation, but is there more to the analogy? Moreover, in your article, what you call the densification of f is not related to f as far as I understand; it actually depends on f and g. Thank you for any clarification!

24 September, 2014 at 6:51 am

arch1

Beginner Q: In the discussion following (1), does “[g is] not expected to be bounded” mean that g's expected value is not bounded, or simply that g is not bounded? (I initially guessed the former, but this seems to conflict w/ the later supposition that (2) is true whenever F, H are bounded functions, because the choice F=H=1 would then imply EnEr(g(n+r)) = o(1))

24 September, 2014 at 11:33 am

Terence Tao

We do not expect $g$ to be bounded in general.

27 November, 2014 at 4:39 am

pybienvenu

Hello, why do you actually take the signum and not the expectation itself, just like FCZ do? You would introduce , and then and notice that the expression is now and you could find that both are negligible.

[Yes, this should work also; there are multiple ways to conclude this portion of the argument. -T.]

27 November, 2014 at 5:22 am

pybienvenu

Sorry, I mean that the last expectation is now of the form while the first is dominated by where is the averaged , just like in “The Green-Tao theorem: an exposition”.

8 December, 2014 at 2:30 pm

pybienvenu

Hello, I just wanted to mention two little problems in the paper I can currently see on the arXiv. First, on the second page, the singular series is not correct (k instead of -k). Secondly, there's a problem on the 6th page in how you define your W-tricked set (because you placed your original set in [N',2N']). Best regards.

[Thanks, this will be corrected in the next revision of the ms. -T.]

10 December, 2014 at 7:52 am

pybienvenu

Moreover, in the inequality (17), the Gowers-Cauchy-Schwarz inequality, there's an extra 2^d exponent, I'm afraid.

[Thanks, this will be corrected in the next revision of the ms. -T.]

27 March, 2016 at 5:18 pm

Concatenation theorems for anti-Gowers-uniform functions and Host-Kra characteristic factors; polynomial patterns in primes | What's new

[…] using the recent “densification” technique of Conlon, Fox, and Zhao, discussed in this previous post), we are able in particular to establish the following […]