Kevin Ford, Ben Green, Sergei Konyagin, James Maynard, and I have just uploaded to the arXiv our paper "Long gaps between primes". This is a follow-up work to our two previous papers (discussed in this previous post), in which we had simultaneously shown that the maximal gap

$$G(X) := \sup_{p_n, p_{n+1} \leq X} (p_{n+1} - p_n)$$

between primes up to $X$ exhibited a lower bound of the shape

$$G(X) \geq f(X)\, \frac{\log X \,\log\log X\, \log\log\log\log X}{(\log\log\log X)^2} \qquad (1)$$

for some function $f(X)$ that went to infinity as $X \to \infty$; this improved upon previous work of Rankin and other authors, who established the same bound but with $f(X)$ replaced by a constant. (Again, see the previous post for a more detailed discussion.)
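To get a sense of scale for bounds of this shape, here is a throwaway numerical illustration (not from the paper) of just how slowly the iterated logarithms in these gap bounds grow; `math.log` is the natural logarithm:

```python
import math

def iterated_log(x, k):
    """Apply the natural logarithm k times to x."""
    for _ in range(k):
        x = math.log(x)
    return x

X = 10.0 ** 100  # a 101-digit number
for k in range(1, 5):
    print(k, iterated_log(X, k))
# Even for X = 10^100, the higher iterated logarithms are tiny:
# log X is about 230.3, log log X about 5.4,
# log log log X about 1.7, and log log log log X about 0.5.
```

The factors beyond $\log X$ thus move extremely slowly, which is why an improvement at the level of a single iterated logarithm is considered significant.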

In our previous papers, we did not specify a particular growth rate for $f(X)$. In my paper with Kevin, Ben, and Sergei, there was a good reason for this: our argument relied (amongst other things) on the inverse conjecture on the Gowers norms, as well as the Siegel-Walfisz theorem, and the known proofs of both results have ineffective constants, rendering our growth function $f(X)$ similarly ineffective. Maynard's approach ostensibly also relies on the Siegel-Walfisz theorem, but (as shown in another recent paper of his) can be made quite effective, even when tracking $k$-tuples of fairly large size (about $\log^c x$ for some small $c$). If one carefully makes all the bounds in Maynard's argument quantitative, one eventually ends up with a growth rate $f(X)$ of shape

$$f(X) = c\, \frac{\log\log\log X}{\log\log\log\log X} \qquad (2)$$

leading to a lower bound

$$G(X) \geq c\, \frac{\log X \,\log\log X}{\log\log\log X}$$

on the gaps between primes for large $X$; this is an unpublished calculation of James'.

In this paper we make a further refinement of this calculation to obtain a growth rate

$$f(X) = c \log\log\log X \qquad (3)$$

leading to a bound of the form

$$G(X) \geq c\, \frac{\log X \,\log\log X\, \log\log\log\log X}{\log\log\log X} \qquad (4)$$

for large $X$ and some small constant $c$. Furthermore, this appears to be the limit of current technology (in particular, falling short of Cramér's conjecture that $G(X)$ is comparable to $\log^2 X$); in the spirit of Erdős' original prize on this problem, I would like to offer 10,000 USD for anyone who can show (in a refereed publication, of course) that the constant $c$ here can be replaced by an arbitrarily large constant $C$.

The reason for the growth rate (3) is as follows. After following the sieving process discussed in the previous post, the problem comes down to something like the following: can one sieve out all (or almost all) of the primes in $(x, y]$ by removing one residue class modulo $p$ for all primes $p$ in (say) $(x/4, x/2]$? Very roughly speaking, if one can solve this problem with $y = g(x)\, x$, then one can obtain a growth rate for $f(X)$ of the shape $f(X) = g(\log X)$. (This is an oversimplification, as one actually has to sieve out a random subset of the primes, rather than all the primes in $(x, y]$, but never mind this detail for now.)
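As a toy illustration of this kind of covering problem (at a scale far too small to exhibit the real asymptotics, and with a naive greedy rule in place of the actual construction in the paper), one can ask how much of an interval survives if one greedily removes one residue class per sieving prime:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def greedy_sieve(x, y):
    """For each prime p <= x, remove the residue class mod p that hits the
    most surviving elements of [1, y]; return the survivors."""
    survivors = set(range(1, y + 1))
    for p in primes_up_to(x):
        counts = {}
        for n in survivors:
            counts[n % p] = counts.get(n % p, 0) + 1
        if counts:
            a = max(counts, key=counts.get)
            survivors = {n for n in survivors if n % p != a}
    return survivors

# How much of [1, 100] is left after removing one class per prime up to 30?
print(len(greedy_sieve(30, 100)))
```

Even this crude rule leaves relatively few survivors; in the actual argument, the leftover survivors are what the random sieving stage has to handle.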

Using the quantitative "dense clusters of primes" machinery of Maynard, one can find lots of $k$-tuples in $[x]$ which contain $\gg \log k$ primes, for $k$ as large as a small power of $\log x$ (so that $\log k$ is comparable to $\log\log x$). By considering $k$-tuples in arithmetic progression, this means that one can find lots of residue classes modulo a given prime $p$ in $(x/4, x/2]$ that capture about $\log\log x$ primes. In principle, this means that the union of all these residue classes can cover about $\frac{x}{\log x} \log\log x$ primes, allowing one to take $y$ as large as $x \log\log x$, which corresponds to (3). However, there is a catch: the residue classes for different primes $p$ may collide with each other, reducing the efficiency of the covering. In our previous papers on the subject, we selected the residue classes randomly, which meant that we had to insert an additional logarithmic safety margin in the expected number of times each prime would be sifted out by one of the residue classes, in order to guarantee that we would (with high probability) sift out most of the primes. This additional safety margin is ultimately responsible for the loss in (2).
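One can at least see the "capturing primes in a residue class" phenomenon at a toy scale: by pigeonhole, some residue class to a small modulus must contain at least its fair share of the primes in an interval, and brute force finds it. (The modulus $7$ and interval $[100, 500]$ below are arbitrary illustrative choices, nowhere near the actual regime of primes $p \in (x/4, x/2]$ and Maynard-style clusters.)

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def best_residue_class(p, lo, hi):
    """Return (a, count, total): the residue class a mod p containing the
    most primes in [lo, hi], its prime count, and the total prime count."""
    primes = [q for q in primes_up_to(hi) if q >= lo]
    counts = [0] * p
    for q in primes:
        counts[q % p] += 1
    a = max(range(p), key=counts.__getitem__)
    return a, counts[a], len(primes)

a, captured, total = best_residue_class(7, 100, 500)
print(a, captured, total)
# the best class mod 7 captures at least its fair share (total/6, since
# no prime in the interval lies in the class 0 mod 7)
```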

The main innovation of this paper, beyond detailing James' unpublished calculations, is to use ideas from the literature on efficient hypergraph covering to avoid the need for a logarithmic safety margin. The hypergraph covering problem, roughly speaking, is to try to cover a set of $n$ vertices using as few "edges" from a given hypergraph $H$ as possible. If each edge has $k$ vertices, then one certainly needs at least $n/k$ edges to cover all the vertices, and the question is to see if one can come close to attaining this bound given some reasonable uniform distribution hypotheses on the hypergraph $H$. As before, random methods tend to require something like $\frac{n}{k} \log n$ edges before one expects to cover, say, all of the vertices.
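The logarithmic loss of the purely random method is easy to see experimentally: covering $n$ vertices with uniformly random $k$-element edges behaves like a coupon-collector problem, needing on the order of $(n/k)\log n$ edges rather than $n/k$. A quick simulation (parameters arbitrary):

```python
import math
import random

def random_cover(n, k, rng):
    """Pick uniformly random k-subsets of {0,...,n-1} until every vertex
    is covered; return how many edges were needed."""
    uncovered = set(range(n))
    edges = 0
    while uncovered:
        uncovered -= set(rng.sample(range(n), k))
        edges += 1
    return edges

rng = random.Random(0)
n, k = 1000, 10
avg = sum(random_cover(n, k, rng) for _ in range(20)) / 20
print(avg, n / k, (n / k) * math.log(n))
# the observed count sits near (n/k) log n, an order of magnitude
# above the trivial lower bound n/k
```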

However, it turns out (under reasonable hypotheses on $H$) that one can eliminate this logarithmic loss by using what is now known as the "semi-random method" or the "Rödl nibble". The idea is to randomly select a small number of edges (a first "nibble") – small enough that the edges are unlikely to overlap much with each other, thus obtaining maximal efficiency. Then one pauses to remove all the edges from $H$ that intersect edges from this first nibble, so that all remaining edges will not overlap with the existing edges. One then randomly selects another small number of edges (a second "nibble"), and repeats this process until enough nibbles are taken to cover most of the vertices. Remarkably, it turns out that under some reasonable assumptions on the hypergraph $H$, one can maintain control on the uniform distribution of the edges throughout the nibbling process, and obtain an efficient hypergraph covering. This strategy was carried out in detail in an influential paper of Pippenger and Spencer.
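Here is a deliberately crude sketch of the nibble mechanism on a random uniform hypergraph (toy parameters; the real method takes carefully sized nibbles and tracks distributional hypotheses that this sketch ignores): sample a small batch of surviving edges, add the useful ones to the cover, discard every remaining edge that touches what is now covered, and repeat.

```python
import random

def nibble_cover(n, edges, bite, rng):
    """Semi-random ('nibble') covering sketch: repeatedly sample a small
    fraction of the surviving edges, add them to the cover, then keep only
    edges lying entirely inside the still-uncovered vertex set."""
    uncovered = set(range(n))
    pool = [set(e) for e in edges]
    chosen = []
    while uncovered and pool:
        for e in rng.sample(pool, max(1, int(bite * len(pool)))):
            if e & uncovered:  # skip edges made useless within this nibble
                chosen.append(e)
                uncovered -= e
        pool = [e for e in pool if e <= uncovered]
    return chosen, uncovered

rng = random.Random(0)
n, k = 1000, 10
edges = [rng.sample(range(n), k) for _ in range(5000)]
chosen, uncovered = nibble_cover(n, edges, bite=0.005, rng=rng)
covered = n - len(uncovered)
print(len(chosen), covered, covered / len(chosen))
# each chosen edge contributes close to k = 10 new vertices, i.e. the cover
# is nearly as efficient as a perfectly disjoint one, though at this toy
# scale the pool runs dry before the last vertices are covered
```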

In our setup, the vertices are the primes in $(x, y]$, and the edges are the intersections of these primes with various residue classes. (Technically, we have to work with a family of hypergraphs indexed by a prime $p$, rather than a single hypergraph, but let me ignore this minor technical detail.) The semi-random method would *in principle* eliminate the logarithmic loss and recover the bound (3). However, there is a catch: the analysis of Pippenger and Spencer relies heavily on the assumption that the hypergraph is uniform, that is to say, all edges have the same size. In our context, this requirement would mean that each residue class captures exactly the same number of primes, which is not the case; we only control the number of primes in an average sense, and we were unable to obtain any concentration of measure to come close to verifying this hypothesis. And indeed, the semi-random method, when applied naively, does not work well with edges of variable size – the problem is that edges of large size are much more likely to be eliminated after each nibble than edges of small size, since they have many more vertices that could overlap with the previous nibbles. Since the large edges are clearly more useful for the covering problem than the small ones, this bias towards eliminating large edges significantly reduces the efficiency of the semi-random method (and also greatly complicates the *analysis* of that method).

Our solution to this is to iteratively *reweight* the probability distribution on edges after each nibble to compensate for this bias effect, giving larger edges a greater weight than smaller edges. It turns out that there is a natural way to do this reweighting that allows one to repeat the Pippenger-Spencer analysis in the presence of edges of variable size, and this ultimately allows us to recover the full growth rate (3).
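As a heuristic illustration of the bias-correction idea (this is not the actual weighting scheme used in the paper): if a nibble covers roughly a $\delta$-fraction of the vertices, then an edge with $s$ vertices survives the nibble with probability about $(1-\delta)^s$, so one can bias the selection of the next nibble towards larger edges by the reciprocal factor, to keep the size distribution of the surviving edges roughly stable.

```python
import random

def weighted_nibble(pool, delta, rng):
    """Sample a nibble of expected size delta * len(pool) from a list of
    edges (sets), biased toward larger edges: an edge of size s gets
    weight (1 - delta)**(-s) to offset its higher attrition rate.
    Sampling is with replacement, which is fine for a rough sketch."""
    weights = [(1 - delta) ** (-len(e)) for e in pool]
    m = max(1, int(delta * len(pool)))
    return rng.choices(pool, weights=weights, k=m)

pool = [{1, 2, 3}, {4, 5, 6, 7, 8, 9}, {10, 11, 12}]
print(weighted_nibble(pool, 0.5, random.Random(0)))
# with delta = 0.5, the size-6 edge is 8 times as likely per draw
# as either size-3 edge
```

The reweighting actually used in the paper is more delicate, but this survival-probability correction is the basic idea in miniature.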

To go beyond (3), one either has to find a lot of residue classes that can capture significantly more than $\log\log x$ primes of size $x$ (which is the limit of the multidimensional Selberg sieve of Maynard and myself), or else one has to find a very different method for producing large gaps between primes than the Erdős–Rankin method, which is the method used in all previous work on the subject.

It turns out that the arguments in this paper can be combined with the Maier matrix method to also produce chains of consecutive large prime gaps, each of size of the order of (4); three of us (Kevin, James, and myself) will detail this in a future paper. (A similar combination was also recently observed in connection with our earlier result (1) by Pintz, but there are some additional technical wrinkles required to recover the full gain of (3) for the chains of large gaps problem.)

## 27 comments


16 December, 2014 at 8:43 pm

samuelfhopkins: Do you think the non-uniform hypergraph packing developed in your paper could be applied to this question?: http://mathoverflow.net/questions/121961/is-there-a-n-2-version-of-the-erd%C5%91s-hanani-conjecture

16 December, 2014 at 9:11 pm

Terence Tao: Hmm. One could try simply applying our covering theorem off-the-shelf (Corollary 1 from our paper), but it looks like the hypergraph here is already uniform, so our theorem won't add much power beyond what existing tools give; we do allow the edges to grow somewhat in size, but perhaps not as fast as in that application. But one would have to compute all the parameters carefully to see for certain whether our version of the theorem gets any further than previous ones with regards to what values of k and t it can handle. (We also didn't try very hard to optimise the bound on codegree needed, since in our application the codegree ends up being very small.)

17 December, 2014 at 1:57 am

Anonymous: Is the constant c in (4) effective?

17 December, 2014 at 8:22 am

Terence Tao: Yes, we manage to avoid the use of the (ineffective) Siegel-Walfisz theorem by deleting an exceptional prime from the multidimensional Selberg sieve, leading to an effective (but quite small) value of c.

6 March, 2019 at 9:06 pm

goingtoinfinity: Is it feasible to calculate the actual numerical value of c?

17 December, 2014 at 5:31 am

Anonymous: Let $m(k)$ be the largest $m$ for which DHL[$k$, $m+1$] holds. What should be the bound in (4) in terms of the function $m(k)$?

17 December, 2014 at 8:24 am

Terence Tao: It depends on how large one can make $k$ in terms of the size $x$ of the primes being considered (and in the Erdős–Rankin method, $x$ is of the order of $\log X$). Roughly speaking, the quantity $f(X)$ in (1) is then the largest one can make $m(k)$. With current technology, $k$ can be as large as $\log^c x$ and $m(k)$ as large as $c \log k$, which is why we can make $f(X)$ as large as $c \log\log\log X$. (In our previous papers we took $k$ to be a large constant independent of $x$, but this will not create an $f(X)$ that grows with $X$.)

If one assumes a sufficiently strong version of the prime tuples conjecture, then one could conceivably take $m(k)$ as large as $\log x$, in which case $f(X)$ becomes $\log\log X$. This is likely to be the limit of the Erdős–Rankin method (from probabilistic heuristics, one should not expect to ever get $m(k)$ larger than $\log x$ if one is to capture primes of size $x$).

17 December, 2014 at 1:34 pm

Eytan Paldi: Hence this should imply (4) without the denominator.

17 December, 2014 at 1:47 pm

Fan: Some tangentially relevant questions: Can you claim Erdős' prize now? And do you put a time limit on the prize you offer?

18 December, 2014 at 8:32 am

Kyle Pratt: Very nice work.

The lower bound on G(X) after (2) is not what you meant to say. As written, this is the result of Westzynthius, not the improved result of Maynard.

[I think it’s correct as stated; the Westzynthius bound has an additional logarithm in both the numerator and denominator. -T.]19 December, 2014 at 11:29 am

Kyle Pratt: Ah yes, thanks.

20 December, 2014 at 6:14 am

Eytan Paldi: A natural generalization (analogous to the definition of $G(X)$) of the maximal prime gap function to clusters of $m$ consecutive prime gaps seems to be

$$G_m(X) := \sup_{p_{n+m} \leq X} (p_{n+m} - p_n).$$

Clearly, $G_m(X)$ is subadditive in $m$; in particular

$$G_m(X) \leq m\, G_1(X) = m\, G(X)$$

for $m \geq 1$.

It seems that this upper bound is quite good.

20 December, 2014 at 8:20 am

Eytan Paldi: In fact, the bound is not necessarily good for large $m$: it is possible for the two sides to differ by a factor that is unbounded as $X \to \infty$.

22 December, 2014 at 2:42 pm

Gerhard Paseman: There are some typos and areas for small improvements in clarity and exposition (e.g. "that are all either at least log 20 x ."). Would you like me to send the corrections and suggestions to you in bulk via email, or would you prefer them piecemeal here as comments?

[Either is fine – T.]

24 December, 2014 at 11:13 am

Gerhard Paseman: Ok. In your section 1, isn't Y(x) just j(P(x))-1 [or h(x) – 1, to alter notation from Hagedorn's 2009, or even L(P(x)), using Jacobsthal's L(x) from his 1960 paper]? I may have got Y(x) wrong, but if not, I see some of the paragraphs in that section that could be made more clear. Also, in defining an arrow right after (3.5), I think you want a script S for the index set, not the plain S in "s \in S".

[Thanks, this will be corrected in the next revision of the ms – T.]

2 January, 2015 at 12:26 pm

hiklicepleh: Using the result of Schoenfeld, assuming the Riemann hypothesis, we can choose the best result for p_n. It turns into a recursion.

|pi(n) – Li(n)| < 1/(8*pi)(n^1/2)logn – http://en.wikipedia.org/wiki/Riemann_hypothesis#Distribution_of_prime_numbers

Take n=p_n. |n – Li(p_n)| < 1/(8*pi)(p_n^1/2)log(p_n)

Now we can choose the best result for p_n. (maybe)

For example take p_n=cnlogn(loglognloglogloglogn/logloglogn). Could this somehow help?

16 November, 2015 at 6:54 pm

Chains of large gaps between primes | What's new: […] our preprint "Chains of large gaps between primes". This paper was announced in our previous paper with Konyagin and Green, which was concerned with the largest […]

2 February, 2016 at 8:21 pm

Mathematics: It's About the Future | Gödel's Lost Letter and P=NP: […] to the Future II" also came closer. For best result of 2014 we nominate discoveries by Tao and others about gaps between […]

26 May, 2016 at 2:20 pm

Jaime Maldonado: Who wants the gap size? … The voids or gaps between two prime numbers cannot be determined; in fact, they are as large as the largest prime number known to date, and still, this big hole is repeated indefinitely, as is every gap you can find up to this prime number. If tomorrow there is a much larger prime number, say of a million million digits, then there is a gap of that size, and many of twice that size, and so on. I can prove it. A gap of double size depends on two numbers in the middle of two consecutive gaps of the same size, which can be twin primes, or one prime and the other not, or both composite; in this case the gap doubles.

24 June, 2017 at 3:12 pm

Stefan Kohl: I'd just like to point out that 'refereed publication' alone is a pretty weak requirement when offering a prize. That something got refereed is as such not necessarily a sign of quality, not even if it is published in a journal indexed by zbMATH – cf. e.g. https://zbmath.org/?q=an:1360.81259, https://zbmath.org/?q=an:06699962.

21 February, 2018 at 9:05 pm

Long gaps in sieved set | What's new: […] around variants of the question of finding large gaps between primes. As discussed for instance in this previous post, it is now known that within the set of primes […], one can find infinitely many adjacent elements whose […]

25 December, 2018 at 4:00 am

LAGRIDA: I wrote an article about the construction of prime numbers, where I give the reason for the irregularity of the primes. The complete article: http://lagrida.com/prime_numbers_construction.html

For […], with $q$ the next prime after $p$, consider the set […]

It's clear that […]

In my article, I prove:

For […]: […]

For […]: […]

For […]: […] if […]

And if […], and $b$ denotes the next prime after the prime $b'$, then:

For […]: […]

Following the same proofs, we can calculate the number of elements in […] with gaps […].

25 December, 2018 at 4:04 am

LAGRIDA: There is a LaTeX error; I will rewrite the formula for […].

25 December, 2018 at 10:51 am

LAGRIDA: For […] we have […] (error of writing in the previous comment).

The LaTeX code didn't render the complete sentence for […]!

25 December, 2018 at 12:01 pm

jaimemaldonadolondono: Thanks for this publication, it is of maximum interest to me. As of today, 12/25/18, mathematicians are very far from knowing or discovering the truth about the distribution of the prime numbers and the gaps between two consecutive ones. I hope to make it known in the course of the year 2019.

27 December, 2018 at 10:41 am

LAGRIDA: What do you think, Mr. Terence, about those values of gaps in […]? I think that if we apply analytic theory to my work, we will get some very good results.

The complete article : http://lagrida.com/prime_numbers_construction.html

26 August, 2019 at 8:48 am

Large prime gaps and probabilistic models | What's new: […] choices of […], which is only enough to establish the far weaker bound […], which was already known (see this previous paper for a discussion of the best known unconditional lower bounds on […]). An inspection of the argument […]