Kevin Ford, James Maynard, and I have uploaded to the arXiv our preprint “Chains of large gaps between primes“. This paper was announced in our previous paper with Konyagin and Green, which was concerned with the largest gap

$$G_1(X) := \max_{p_{n+1} \leq X} (p_{n+1} - p_n)$$

between consecutive primes up to $X$, in which we improved the Rankin bound of

$$G_1(X) \gg \frac{\log X \log_2 X \log_4 X}{(\log_3 X)^2}$$

to

$$G_1(X) \gg \frac{\log X \log_2 X \log_4 X}{\log_3 X}$$

for large $X$ (where we use the abbreviations $\log_2 X := \log\log X$, $\log_3 X := \log\log\log X$, and $\log_4 X := \log\log\log\log X$). Here, we obtain an analogous result for the quantity

$$G_k(X) := \max_{p_{n+k} \leq X} \min\left(p_{n+1} - p_n, p_{n+2} - p_{n+1}, \dots, p_{n+k} - p_{n+k-1}\right)$$

which measures how far apart the gaps between chains of $k$ consecutive primes can be. Our main result is

$$G_k(X) \gg \frac{1}{k^2} \frac{\log X \log_2 X \log_4 X}{\log_3 X}$$

whenever $X$ is sufficiently large depending on $k$, with the implied constant here absolute (and effective). The factor of $\frac{1}{k^2}$ is inherent to the method, and related to the basic probabilistic fact that if one selects $k$ numbers at random from the unit interval $[0,1]$, then one expects the minimum gap between adjacent numbers to be about $\frac{1}{k^2}$ (i.e. smaller than the mean spacing of $\frac{1}{k}$ by an additional factor of $\frac{1}{k}$).
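This $1/k^2$ heuristic is easy to check numerically. Here is a quick Monte Carlo sketch (purely illustrative, not from the paper; the helper name `mean_min_gap` is mine):

```python
import random

def mean_min_gap(k, trials=2000, seed=0):
    """Empirical mean of the minimum gap between adjacent points,
    when k points are dropped uniformly at random in [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = sorted(rng.random() for _ in range(k))
        total += min(b - a for a, b in zip(pts, pts[1:]))
    return total / trials

# The mean spacing between adjacent points is about 1/k, but the minimum
# gap is about 1/k^2: quadrupling k shrinks it by roughly a factor of 16.
```

A standard order-statistics computation (if I have done it correctly) gives the expected minimum of the $k-1$ adjacent gaps as exactly $\frac{1}{(k-1)(k+1)} \approx \frac{1}{k^2}$, so for instance `mean_min_gap(10)` should come out near $1/99$.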

Our arguments combine those from the previous paper with the matrix method of Maier, who (in our notation) showed that

$$G_k(X) \gg_k \frac{\log X \log_2 X \log_4 X}{(\log_3 X)^2}$$

for an infinite sequence of $X$ going to infinity. (Maier needed to restrict $X$ to an infinite sequence to avoid Siegel zeroes, but we are able to resolve this issue by the now standard technique of simply eliminating a prime factor of an exceptional conductor from the sieve-theoretic portion of the argument. As a byproduct, this also makes all of the estimates in our paper effective.)

As its name suggests, the Maier matrix method is usually presented by imagining a matrix of numbers, and using information about the distribution of primes in the columns of this matrix to deduce information about the primes in at least one of the rows of the matrix. We found it convenient to interpret this method in an equivalent probabilistic form as follows. Suppose one wants to find an interval $[x, x+y]$ which contains a block of at least $k$ primes, each separated from each other by at least $g$ (ultimately, $y$ will be something like $\frac{\log x \log_2 x \log_4 x}{\log_3 x}$ and $g$ something like $y/k^2$). One can do this by the probabilistic method: pick $x$ to be a random large natural number (with the precise distribution to be chosen later), and try to lower bound the probability that the interval $[x, x+y]$ contains at least $k$ primes, no two of which are within $g$ of each other.

By carefully choosing the residue class of $x$ with respect to small primes, one can eliminate several of the $x+j$ from consideration of being prime immediately. For instance, if $x$ is chosen to be large and even, then the $x+j$ with $j$ even have no chance of being prime and can thus be eliminated; similarly if $x$ is large and odd, then $x+j$ cannot be prime for any odd $j$. Using the methods of our previous paper, we can find a residue class $b \pmod{P}$ (where $P$ is a product of a large number of primes) such that, if one chooses $x$ to be a large random element of $b \pmod{P}$ (that is, $x = b + nP$ for some large random integer $n$), then the set $\mathcal{A}$ of shifts $j \in [1,y]$ for which $x+j$ still has a chance of being prime has size comparable to something like $k \frac{\log x}{\log_2 x}$; furthermore this set $\mathcal{A}$ is fairly well distributed in $[1,y]$ in the sense that it does not concentrate too strongly in any short subinterval of $[1,y]$. The main new difficulty, not present in the previous paper, is to get *lower* bounds on the size of $\mathcal{A}$ in addition to upper bounds, but this turns out to be achievable by a suitable modification of the arguments.
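At toy scale, this elimination of shifts by residue classes is easy to make concrete. The sketch below is my own illustration (not the construction in the paper, which chooses the residue class $b$ far more carefully and with a much larger modulus); it computes the surviving set $\mathcal{A}$ for a small modulus $P$:

```python
from math import gcd

def surviving_shifts(b, small_primes, y):
    """Shifts j in [1, y] for which x + j can still be prime, given that
    x ≡ b (mod P) with P the product of small_primes: whenever some p | b + j,
    the number x + j is forced to be divisible by p, hence composite."""
    P = 1
    for p in small_primes:
        P *= p
    return [j for j in range(1, y + 1) if gcd(b + j, P) == 1]

# Toy example: P = 2*3*5*7 = 210 and b = 0.  The surviving shifts are the
# j in [1, 50] coprime to 210, and every prime in (x, x + 50] for a large
# x ≡ 0 (mod 210) must occur at one of these shifts.
A = surviving_shifts(b=0, small_primes=[2, 3, 5, 7], y=50)
```

For instance, the primes in $(210, 260]$ all sit at shifts in this surviving set; the art in the actual argument is to pick $b$ so that the survivors are as few and as well spread out as possible.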

Using a version of the prime number theorem in arithmetic progressions due to Gallagher, one can show that for each remaining shift $j \in \mathcal{A}$, $x+j$ is going to be prime with probability comparable to $\frac{\log_2 x}{\log x}$, so one expects about $k$ primes in the set $\{x+j : j \in \mathcal{A}\}$. An upper bound sieve (e.g. the Selberg sieve) also shows that for any distinct $j, j' \in \mathcal{A}$, the probability that $x+j$ and $x+j'$ are both prime is $O\left( \left(\frac{\log_2 x}{\log x}\right)^2 \right)$. Using this and some routine second moment calculations, one can then show that with large probability, the set $\{x+j : j \in \mathcal{A}\}$ will indeed contain about $k$ primes, no two of which are closer than $g$ to each other; with no other numbers in this interval being prime, this gives a lower bound on $G_k(X)$.
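The first moment computation can also be illustrated numerically at toy scale. The sketch below is again my own illustration, with parameters far smaller than anything in the actual argument: it compares the empirical count of primes in $\{x+j : j \in \mathcal{A}\}$, over random $x \equiv b \pmod{P}$, with a Gallagher-type prediction $|\mathcal{A}| \cdot \frac{P}{\phi(P)} \cdot \frac{1}{\log x}$.

```python
import random
from math import gcd, log

def is_prime(m):
    """Simple trial-division primality test (adequate at this toy scale)."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Toy sieve: P = product of the primes up to 13, residue class b = 0.
small_primes = [2, 3, 5, 7, 11, 13]
P, phi = 1, 1
for p in small_primes:
    P, phi = P * p, phi * (p - 1)

y, b = 200, 0
A = [j for j in range(1, y + 1) if gcd(b + j, P) == 1]

# Empirical mean number of primes in {x + j : j in A}, over random x = b + n*P.
rng = random.Random(1)
trials = 150
total = 0
for _ in range(trials):
    x = b + rng.randrange(100, 300) * P
    total += sum(1 for j in A if is_prime(x + j))
empirical = total / trials

# Prediction: each surviving shift is prime with probability comparable to
# (P/phi(P)) / log x; we plug in a typical value of x for the comparison.
predicted = len(A) * (P / phi) / log(200 * P)
```

At this scale the two counts agree closely because every prime in the interval automatically lands on a surviving shift; the point of the sketch is only the bookkeeping behind the expected count $|\mathcal{A}| \cdot \frac{P}{\phi(P)} \cdot \frac{1}{\log x}$.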

## 23 comments


16 November, 2015 at 8:31 pm

David Roberts: Typo, I think: “inherent for your method”

[Corrected, thanks – T.]

17 November, 2015 at 1:24 am

Anonymous: This typo is in the second line below the main result.

[Corrected, thanks – T.]

16 November, 2015 at 10:35 pm

Nick Cook: Typos in the last paragraph: some …s are slashes.

[Corrected, thanks – T.]

17 November, 2015 at 1:13 am

Anonymous: In the definition of $G_k(X)$, it seems clearer to have the minimum inside parentheses.

[Corrected, thanks – T.]

17 November, 2015 at 1:20 am

Anonymous: Can the “sufficiently large” dependence of $X$ on $k$ be made effective?

17 November, 2015 at 8:36 am

Terence Tao: Yes; we do not rely on ineffective results such as Siegel’s theorem, so all of the assertions in our paper can be made effective (though the lower bound required on $X$ is likely to be quite poor, as we did not try to optimise in this direction).

17 November, 2015 at 5:37 am

MatjazG: A typo in the 3rd paragraph from the bottom up: … should probably be … (or even …?).

[Corrected, thanks – T.]

17 November, 2015 at 6:00 am

Anonymous: In the line before the last, it seems that “$k$ primes” should be “$k$ consecutive primes”.

[Corrected, thanks – T.]

18 November, 2015 at 4:23 am

Anonymous: Is it possible (using the above methods) to show (unconditionally) that …?

18 November, 2015 at 8:36 am

Terence Tao: I doubt it, because we only control a tiny contribution to $G_1(X)$; roughly speaking, we only understand those large prime gaps with the property that all the composite numbers within the gap are divisible by small primes. There could be much larger prime gaps (e.g. of the order of the Cramer prediction $\log^2 x$) that are not analysed at all by known methods, and in that case we have essentially no relationship between $G_1$ and $G_k$ other than the trivial bound $G_k(X) \leq G_1(X)$.

18 November, 2015 at 10:25 am

Anonymous: I see. But is it still possible that the transition from a lower bound on $G_1$ to a lower bound on $G_k$ (via the $\frac{1}{k^2}$ factor) may be done by current methods even for “slightly better” lower bounds on $G_1$ than currently known?

(i.e. Is the transition from a lower bound for $G_1$ to a lower bound for $G_k$ via the $\frac{1}{k^2}$ factor independent(!) of the exact lower bound on $G_1$ as long as it is “sufficiently small”?)

19 November, 2015 at 7:19 am

Anonymous: More precisely, what is the limitation on the growth rate of a function $f$ such that $G_1(X) \gg f(X)$ still implies (using current methods) that $G_k(X) \gg f(X)/k^2$?

19 November, 2015 at 8:21 am

Terence Tao: As I said earlier, there is no direct relationship beyond the trivial inequality; the lower bound on $G_1$ is not directly used in the arguments establishing a lower bound on $G_k$. Any new lower bound on $G_1$ beyond what we currently know could be due to the presence of sporadic large prime gaps that are not adjacent to each other, and so do not serve to increase $G_k$ beyond the bounds currently available while making $G_1$ bigger. We have no tool currently to understand these sporadic large gaps, so this new information about $G_1$ has no direct bearing on $G_k$. (But it is likely that any new method that is capable of improving the bounds on $G_1$ will also, after some nontrivial modification, improve the bounds on $G_k$; but one actually has to use the method of proof and not simply take the bound on $G_1$ as a “black box”.)

18 November, 2015 at 7:42 am

Lionel: Dear Terry, I have just noticed the French flag on the banner. As a Parisian, I am touched by this discreet display of sympathy. I have been reading your blog with great interest for quite a while. Not being a mathematician but with a background in the subject, I would like to praise the way you provide the flavour of the results and the issues at stake. I cannot follow all the developments but I dream about them.

20 November, 2015 at 4:00 am

Anonymous: Professor Tao.

I guess you already know that: all prime numbers are in the interval …

and $p = (11, 17, 23, 29)$

As is obvious, twin primes are included. But if we want to know only the twin primes, then these are defined …

When mathematicians look for something that is already defined, it makes no sense.

21 November, 2015 at 6:04 am

Anonymous: Dear Terry and everyone, I’m a fan of Dr. Tao’s blog. Maybe I’m the oddest and craziest man in the world, because I have a crazy remark about Terry related to the Riemann problem. Terry never presents a result on Riemann unless someone announces a proof of Riemann. But Terry will do a thing (very surprising, very interesting, very quick). He will do a hat-trick (Rie–Gold–Twin) at the same time. I predict many things very exactly, from football, economy, rain, sun to math. Trust me. Please do not vote against. All you never do level to understand anymore breakthrough (Riemann, Goldbach, Twin p

22 November, 2015 at 5:57 am

Anonymous: I think there is a typo in Maier’s previous result; at least I have no idea what the notation $\gg_k$ is supposed to mean.

[$X \gg_k Y$ denotes an estimate of the form $X \geq c_k Y$ for some $c_k > 0$ that depends only on $k$. -T.]

4 December, 2015 at 7:49 am

Anonymous: Is it possible (using current methods) to extend this lower bound for $G_k(X)$ to the case where $k$ is a sufficiently slowly increasing and unbounded function of $X$?

4 December, 2015 at 2:01 pm

Terence Tao: If one is willing to lose a factor of $\frac{1}{k^2}$ on the RHS, then our techniques would extend to $k$ some sufficiently slowly growing function of $X$ (basically because we already demonstrated this for any fixed $k$ with $X$ sufficiently large depending on $k$). But we do not know how to get rid of this $\frac{1}{k^2}$ factor (as pointed out in the post, it is related to the fact that given $k$ random points in the unit interval, one expects one pair of these to be within about $\frac{1}{k^2}$ of each other).

5 December, 2015 at 1:26 am

Anonymous: Therefore, let $X_k$ (for each fixed $k$) be a “sufficiently large” $X$ such that for any $X \geq X_k$ the above lower bound (for $G_k(X)$) holds. Clearly, we may assume $X_k$ to be a strictly increasing function of $k$, so by taking $k(X)$ to be the inverse function of $X_k$, the above lower bound for $G_k(X)$ holds with $k = k(X)$.

This leads to the question: what is the fastest growth of the function $X_k$ (i.e. the slowest growth of its inverse function $k(X)$) available by the current methods?

5 December, 2015 at 3:59 am

Anonymous: Of course, by taking $k$ growing sufficiently fast, the resulting lower bound for $G_k(X)$ can (due to the $\frac{1}{k^2}$ factor) be made trivial, so the above question for the fastest growth of $k(X)$ seems interesting only under the additional constraint that the resulting lower bound for $G_k(X)$ grows no slower than a given increasing unbounded function of $X$ (e.g. the Rankin bound).

5 December, 2015 at 11:00 am

Terence Tao: I think the methods in the paper might be pushed to take $k$ as large as $(\log_3 X)^c$ for a small absolute constant $c > 0$, though I haven’t fully checked the details. (In fact one may be able to take $c$ arbitrarily close to 1 if one is careful enough. Not coincidentally, $\log_3 X$ is the amount by which our arguments improve over the Erdos-Rankin bound.)

14 January, 2016 at 1:58 am

epsilon: Hello Prof. Tao,

With the prime number theorem intrinsically tied to the Riemann Hypothesis, what impact on RH would there be if there were only finitely many primes? Would that be possible?

Assuming RH to be false, what would this imply for prime numbers and their existence?