Kevin Ford, Ben Green, Sergei Konyagin, and myself have just posted to the arXiv our preprint “Large gaps between consecutive prime numbers”. This paper concerns the “opposite” problem to that considered by the recently concluded Polymath8 project, which was concerned with very small values of the prime gap $p_{n+1}-p_n$. Here, we wish to consider the largest prime gap $G(X) = \max_n (p_{n+1}-p_n)$ that one can find in the interval $[X] = \{1,\dots,X\}$ as $X$ goes to infinity.
Finding lower bounds on $G(X)$ is more or less equivalent to locating long strings of consecutive composite numbers that are not too large compared to the length of the string. A classic (and quite well known) construction here starts with the observation that for any natural number $n$, the consecutive numbers $n!+2, n!+3, \dots, n!+n$ are all composite, because each $n!+i$, $i = 2,\dots,n$, is divisible by some prime $p \leq n$, while being strictly larger than that prime $p$. From this and Stirling’s formula, it is not difficult to obtain the bound

$\displaystyle G(X) \gg \frac{\log X}{\log\log X}. \ \ \ \ \ (1)$
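As a quick sanity check, the factorial construction is easy to verify numerically; the following toy snippet (my own illustration, not from the paper) confirms that the string is composite:

```python
import math

def is_prime(m: int) -> bool:
    """Trial division; fine for the small numbers used here."""
    if m < 2:
        return False
    return all(m % d for d in range(2, math.isqrt(m) + 1))

def factorial_gap(n: int) -> list[int]:
    """The classic string n!+2, n!+3, ..., n!+n of n-1 consecutive composites."""
    f = math.factorial(n)
    return [f + i for i in range(2, n + 1)]

# each n!+i is divisible by a prime p <= n while exceeding it, hence composite
assert all(not is_prime(m) for m in factorial_gap(10))
```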
A more efficient bound comes from the prime number theorem: there are only $(1+o(1)) \frac{X}{\log X}$ primes up to $X$, so just from the pigeonhole principle one can locate a string of consecutive composite numbers up to $X$ of length at least $(1-o(1)) \log X$, thus

$\displaystyle G(X) \gg \log X. \ \ \ \ \ (2)$
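To get a feel for how this bound compares with reality, one can compute the largest prime gap in a given range directly; here is a small illustrative computation (my own, with arbitrary parameter choices):

```python
import math

def primes_up_to(X: int) -> list[int]:
    """All primes up to X via the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (X + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(X) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [n for n in range(X + 1) if sieve[n]]

def max_prime_gap(X: int) -> int:
    """G(X): the largest gap between consecutive primes up to X."""
    ps = primes_up_to(X)
    return max(q - p for p, q in zip(ps, ps[1:]))

# the largest gap below 10^5 is already much bigger than log(10^5) ~ 11.5
assert max_prime_gap(10**5) > math.log(10**5)
```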
What about upper bounds? The Cramér random model predicts that the primes up to $X$ are distributed like a random subset of $\{1,\dots,X\}$ of density $\frac{1}{\log X}$. Using this model, Cramér arrived at the conjecture

$\displaystyle G(X) = O( \log^2 X ).$
In fact, if one makes the extremely optimistic assumption that the random model perfectly describes the behaviour of the primes, one would arrive at the even more precise prediction

$\displaystyle \limsup_{X \to \infty} \frac{G(X)}{\log^2 X} = 1.$
However, it is no longer widely believed that this optimistic version of the conjecture is true, due to some additional irregularities in the primes coming from the basic fact that large primes cannot be divisible by very small primes. Using the Maier matrix method to capture some of this irregularity, Granville was led to the conjecture that

$\displaystyle \limsup_{X \to \infty} \frac{G(X)}{\log^2 X} \geq 2 e^{-\gamma} = 1.1229\dots$
(note that $2e^{-\gamma}$ is slightly larger than $1$). For comparison, the known upper bounds on $G(X)$ are quite weak; unconditionally one has $G(X) \ll X^{0.525}$ by the work of Baker, Harman, and Pintz, and even on the Riemann hypothesis one only gets down to $G(X) \ll X^{1/2} \log X$, as shown by Cramér (a slight improvement is also possible if one additionally assumes the pair correlation conjecture; see this article of Heath-Brown and the references therein).
This conjecture remains out of reach of current methods. In 1931, Westzynthius managed to improve the bound (2) slightly to

$\displaystyle G(X) \gg \frac{\log\log\log X}{\log\log\log\log X} \log X,$
which Erdös in 1935 improved to

$\displaystyle G(X) \gg \frac{\log\log X}{(\log\log\log X)^2} \log X,$

and which Rankin in 1938 improved slightly further to

$\displaystyle G(X) \geq (c+o(1)) \frac{\log\log X \log\log\log\log X}{(\log\log\log X)^2} \log X \ \ \ \ \ (3)$

with $c = \frac{1}{3}$. Remarkably, this rather strange bound then proved extremely difficult to advance further on; until recently, the only improvements were to the constant $c$, which was raised to $c = \frac{1}{2} e^\gamma$ in 1963 by Schönhage, to $c = e^\gamma$ in 1963 by Rankin, to $c = 1.31256 e^\gamma$ by Maier and Pomerance, and finally to $c = 2 e^\gamma$ in 1997 by Pintz.
Erdös listed the problem of making $c$ arbitrarily large one of his favourite open problems, even offering (“somewhat rashly”, in his words) a cash prize for the solution. Our main result answers this question in the affirmative:
Theorem 1 The bound (3) holds for arbitrarily large $c$.
In principle, we thus have a bound of the form

$\displaystyle G(X) \geq f(X) \frac{\log\log X \log\log\log\log X}{(\log\log\log X)^2} \log X$

for some $f(X)$ that grows to infinity. Unfortunately, due to various sources of ineffectivity in our methods, we cannot provide any explicit rate of growth on $f(X)$ at all.
We decided to announce this result the old-fashioned way, as part of a research lecture; more precisely, Ben Green announced the result in his ICM lecture this Tuesday. (The ICM staff have very efficiently put up video of his talk (and most of the other plenary and prize talks) online; Ben’s talk is here, with the announcement beginning at about 0:48. Note a slight typo in his slides, in that the exponent of $\log\log\log X$ in the denominator is $3$ instead of $2$.) Ben’s lecture slides may be found here.
By coincidence, an independent proof of this theorem has also been obtained very recently by James Maynard.
I discuss our proof method below the fold.
— 1. Sketch of proof —
Our method is a modification of Rankin’s method, combined with some work of myself and Ben Green (and of Tamar Ziegler) on counting various linear patterns in the primes. To explain this, let us first revisit Rankin’s argument, presented in a fashion that allows for comparison with our own methods. We begin with the easy bound (1) that came from using the consecutive string $n!+2,\dots,n!+n$ of composite numbers. This bound was inferior to the prime number theorem bound (2); however, this can be easily remedied by replacing $n!$ with the somewhat smaller primorial $P(x) := \prod_{p \leq x} p$, defined as the product of the primes up to and including $x$. It is still easy to see that $P(x)+2, \dots, P(x)+x$ are all composite, with each $P(x)+i$ divisible by some prime $p \leq x$ while being larger than that prime. On the other hand, the prime number theorem tells us that $P(x) = e^{(1+o(1)) x}$. From this, one can recover an alternate proof of (2) (perhaps not so surprising, since the prime number theorem is a key ingredient in both proofs).
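The primorial variant is equally easy to test numerically; in the toy check below (my own illustration), the fact that $\gcd(P(x)+i, P(x)) > 1$ for $2 \leq i \leq x$ witnesses compositeness:

```python
import math

def primorial(x: int) -> int:
    """P(x): the product of all primes up to and including x."""
    return math.prod(p for p in range(2, x + 1)
                     if all(p % d for d in range(2, math.isqrt(p) + 1)))

# P(x)+i for 2 <= i <= x is composite: i's prime factors are <= x, so they
# divide P(x) and hence P(x)+i, which is larger than any of them
x = 30
P = primorial(x)
assert all(math.gcd(P + i, P) > 1 for i in range(2, x + 1))
```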
This gives hope that further modification of this construction can be used to go beyond (2). If one looks carefully at the above proof, we see that the key fact used here is that the discrete interval $\{2,\dots,x\}$ of integers is completely sieved out by the residue classes $0 \pmod p$ for primes $p \leq x$, in the sense that each element in this interval is contained in at least one of these residue classes. More generally (and shifting the interval by $1$ for a more convenient normalisation), suppose we can find an interval $[y] = \{1,\dots,y\}$ which is completely sieved out by one residue class $a_p \pmod p$ for each prime $p \leq x$, for some $x$ and $y$. Then the string $m+1, \dots, m+y$ of consecutive numbers will be composite, whenever $m$ is an integer larger than or equal to $x$ with $m \equiv -a_p \pmod p$ for each prime $p \leq x$, since each of the $m+i$ will be divisible by some prime $p \leq x$ while being larger than that prime. From the Chinese remainder theorem, one can find such an $m$ that is of size at most $x + P(x)$. From this and the prime number theorem, one can obtain lower bounds on $G(X)$ if one can get lower bounds on $y$ in terms of $x$. In particular, if for any large $x$ one can completely sieve out $[y]$ with a residue class $a_p \pmod p$ for each prime $p \leq x$, and

$\displaystyle y \geq c \frac{\log x \log\log\log x}{(\log\log x)^2} x, \ \ \ \ \ (4)$
then one can establish the bound (3). (The largest $y$ one can take for a given $x$ is known as the Jacobsthal function of the primorial $P(x)$.) So the task is basically to find a smarter set of congruence classes $a_p$ than just the zero congruence classes $a_p = 0$, which can sieve out a larger interval than the interval $\{2,\dots,x\}$ handled by the zero classes. (Unfortunately, this approach by itself is unlikely to reach the Cramér conjecture; it was shown by Iwaniec using the large sieve that $y$ is necessarily of size $O(x^2)$ (which somewhat coincidentally matches the Cramér bound), but Maier and Pomerance conjecture that in fact one must have $y \ll x \log^2 x$, which would mean that the limit of this method would be to establish a bound of the form $G(X) \gg \log X (\log\log X)^2$.)
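The Jacobsthal function of a primorial can be explored by brute force for tiny $x$; the sketch below (my own code and naming, not the paper’s) finds the longest run of consecutive integers sharing a factor with $P(x)$:

```python
import math

def longest_noncoprime_run(n: int) -> int:
    """Length of the longest run of consecutive integers in [1, n] that all
    share a common factor with n (one less than the Jacobsthal function g(n))."""
    best = cur = 0
    for m in range(1, n + 1):
        cur = cur + 1 if math.gcd(m, n) > 1 else 0
        best = max(best, cur)
    return best

# for the primorial 2*3*5*7 = 210: the run 2, 3, ..., 10 has length 9,
# since each of these integers has a prime factor among 2, 3, 5, 7
assert longest_noncoprime_run(210) == 9
```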
So, how can one do better than just using the “Eratosthenes” sieve $a_p = 0$? We will divide the sieving into different stages, depending on the size of the prime $p$. It turns out that a reasonably optimal division of the primes up to $x$ will be into the following four classes:
- Stage 1 primes: primes that are either tiny (less than $\log x$) or medium size (between $z$ and $x/4$), where $z$ is a parameter to be chosen later.
- Stage 2 primes: primes that are small (between $\log x$ and $z$).
- Stage 3 primes: Primes that are very large (between $x/2$ and $x$).
- Stage 4 primes: Primes that are fairly large (between $x/4$ and $x/2$).
We will take an interval $[y]$, where $y$ is given by (4), and sieve out first by Stage 1 primes, then Stage 2 primes, then Stage 3 primes, then Stage 4 primes, until none of the elements of $[y]$ are left.
Let’s first discuss the final sieving step, which is rather trivial. Suppose that our sieving by the first three sieving stages is so efficient that the number of surviving elements of $[y]$ is less than or equal to the number of Stage 4 primes (by the prime number theorem, this will for instance be the case for sufficiently large $x$ if there are fewer than $\frac{x}{5 \log x}$ survivors). Then one can finish off the remaining survivors simply by using each of the Stage 4 primes $p$ to remove one of the surviving integers in $[y]$ by an appropriate choice of residue class $a_p \pmod p$. So we can recast our problem as an approximate sieving problem rather than a perfect sieving problem; we now only need to eliminate most of the elements of $[y]$ rather than all of them, at the cost of only using primes from the Stages 1-3, rather than 1-4. Note though that for $y$ given by (4), the Stage 1-3 sieving has to be reasonably efficient, in that the proportion of survivors cannot be too much larger than $\frac{1}{\log^2 x}$ (ignoring factors of $\log\log x$ etc.).
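The Stage 4 cleanup is just a greedy assignment: each unused large prime $q$ gets the residue class of one remaining survivor. A minimal toy sketch (my own illustrative code, with made-up numbers):

```python
def greedy_cleanup(survivors: list[int], primes: list[int]):
    """Stage-4-style cleanup sketch: give each large prime q the residue class
    of one remaining survivor, removing it (and anything else in that class)."""
    classes = {}
    remaining = list(survivors)
    for q in primes:
        if not remaining:
            break
        n = remaining.pop()
        classes[q] = n % q
        remaining = [m for m in remaining if m % q != classes[q]]
    return classes, remaining

# toy instance (made-up numbers): five survivors and five available primes,
# so the greedy pass is guaranteed to empty the list
classes, left = greedy_cleanup([101, 137, 211, 303, 404], [53, 59, 61, 67, 71])
assert left == []
```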
Next, we discuss the Stage 1 sieving process. Here, we will simply copy the classic construction and use the Eratosthenes sieve $a_p = 0$ for these primes. The elements of $[y]$ that survive this process are those elements that are not divisible by any Stage 1 primes, that is to say they are only divisible by small (Stage 2) primes, or else contain at least one prime factor larger than $x/4$, and no prime factors less than $\log x$. In the latter case, the survivor has no choice but to be a prime in $(x/4, y]$ (since for $y$ given by (4) we have $y = o(x \log x)$ for large enough $x$). In the former case, the survivor is a $z$-smooth number – a number with no prime factors greater than $z$. How many such survivors are there? Here we can use a somewhat crude upper bound of Rankin:
Lemma 2 Let $y \geq z \geq 2$ be large quantities, and write $u := \frac{\log y}{\log z}$. Suppose that

$\displaystyle \log\log z \leq u \leq z^{1/2}.$

Then the number of $z$-smooth numbers in $[y]$ is at most $y e^{-u \log u + O(u)}$.
Proof: We use a Dirichlet series method commonly known as “Rankin’s trick”. Let $\frac{1}{2} \leq \sigma < 1$ be a quantity to be optimised later, and abbreviate “$z$-smooth” as “smooth”. Observe that if $n$ is a smooth number less than $y$, then $(y/n)^\sigma \geq 1$, so the number of smooth numbers in $[y]$ is at most

$\displaystyle \sum_{n \hbox{ smooth}} \left(\frac{y}{n}\right)^\sigma, \ \ \ \ \ (5)$

where we have simply discarded the constraint $n \leq y$. The point of doing this is that the above expression factors into a tractable Euler product

$\displaystyle y^\sigma \prod_{p \leq z} \left(1 - \frac{1}{p^\sigma}\right)^{-1},$

so we may bound the previous expression by

$\displaystyle y^\sigma \exp\left( O\left( \sum_{p \leq z} \frac{1}{p^\sigma} \right) \right).$

We now select

$\displaystyle \sigma := 1 - \frac{\log u}{\log z}, \ \ \ \ \ (6)$

so that $y^\sigma = y e^{-u \log u}$ and $p^{-\sigma} = \frac{1}{p} u^{\log p/\log z}$, which lets us rewrite the previous bound using (6) as

$\displaystyle y e^{-u \log u} \exp\left( O\left( \sum_{p \leq z} \frac{u^{\log p/\log z}}{p} \right) \right).$
Next, we use the convexity inequality

$\displaystyle a^t \leq 1 + t(a-1)$

for $a \geq 1$ and $0 \leq t \leq 1$, applied with $a = u$ and $t = \frac{\log p}{\log z}$, to conclude that

$\displaystyle \sum_{p \leq z} \frac{u^{\log p/\log z}}{p} \leq \sum_{p \leq z} \frac{1}{p} + \frac{u-1}{\log z} \sum_{p \leq z} \frac{\log p}{p}.$
Finally, from the prime number theorem (or just Mertens’ theorems) we have

$\displaystyle \sum_{p \leq z} \frac{1}{p} = \log\log z + O(1) \hbox{ and } \sum_{p \leq z} \frac{\log p}{p} = \log z + O(1),$

so the previous expression is at most $y e^{-u\log u} e^{O(u + \log\log z)}$, which is $y e^{-u \log u + O(u)}$ thanks to the hypothesis $u \geq \log\log z$. The bound follows. $\Box$
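Rankin’s trick is easy to test numerically: for any fixed $\sigma \in (0,1)$, the quantity $y^\sigma \prod_{p \leq z} (1-p^{-\sigma})^{-1}$ is a rigorous upper bound for the number of $z$-smooth numbers in $[y]$. A small illustrative comparison (my own code, with arbitrary small parameters):

```python
import math

def smooth_count(y: int, z: int) -> int:
    """Exact count of z-smooth numbers in [1, y] (1 counts as smooth)."""
    def largest_prime_factor(n: int) -> int:
        f, d = 1, 2
        while d * d <= n:
            while n % d == 0:
                f, n = d, n // d
            d += 1
        return max(f, n) if n > 1 else f
    return sum(1 for n in range(1, y + 1) if largest_prime_factor(n) <= z)

def rankin_bound(y: int, z: int, sigma: float) -> float:
    """Rankin's trick upper bound: y^sigma * prod_{p <= z} (1 - p^-sigma)^-1."""
    primes = [p for p in range(2, z + 1)
              if all(p % d for d in range(2, math.isqrt(p) + 1))]
    return y**sigma * math.prod(1 / (1 - p ** (-sigma)) for p in primes)

# the choice sigma = 1 - log(u)/log(z) from the proof, with u = log(y)/log(z)
y, z = 10**5, 20
u = math.log(y) / math.log(z)
sigma = 1 - math.log(u) / math.log(z)
assert smooth_count(y, z) <= rankin_bound(y, z, sigma)
```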
Remark 1 One can basically eliminate the $e^{O(u)}$ factor here (at the cost of slightly worsening the main term) by a more refined version of the Rankin trick, based on replacing the crude bound (5) by a more sophisticated inequality that takes into account the exact multiplicity with which each prime divides $n$, where $p^j \| n$ denotes the assertion that $p$ divides $n$ exactly $j$ times. (Thanks to Kevin Ford for pointing out this observation to me.) In fact, the number of $z$-smooth numbers in $[y]$ is known to be asymptotically $\rho(u) y$ in a wide range of $u$, where $\rho$ is the Dickman function; this is a result of de Bruijn.
In view of the error term permitted by the Stage 4 process, we would like to take $z$ as large as possible while still leaving only $o(\frac{x}{\log x})$ smooth numbers in $[y]$. A somewhat efficient choice of $z$ here is

$\displaystyle z := \exp\left( \frac{\log x \log\log\log x}{4 \log\log x} \right),$

so that $u = (4+o(1)) \frac{\log\log x}{\log\log\log x}$ and $e^{-u \log u} = \log^{-4+o(1)} x$, and then one can check that the above lemma does indeed show that there are $o(\frac{x}{\log x})$ smooth numbers in $[y]$. (If we use the sharper bound in the remark, we can reduce the $4$ here to a $3$, although this makes little difference to the final bound.) If we let $Q$ denote all the primes in $(x/4, y]$, the remaining task is then to sieve out all but at most $\frac{x}{5 \log x}$ of the primes in $Q$ by using one congruence class $a_p$ from each of the Stage 2 and Stage 3 primes.
Note that $Q$ is still quite large compared to the error that the Stage 4 primes can handle – it is of size about $\frac{c x \log\log\log x}{(\log\log x)^2}$, whereas we need to get down to a bit less than $\frac{x}{\log x}$. Still, this is some progress (the remaining sparsification needed is of the order of $\frac{\log x \log\log\log x}{(\log\log x)^2}$ rather than $\frac{\log^2 x \log\log\log x}{(\log\log x)^2}$).
For the Stage 2 sieve, we will just use a random construction, choosing $a_p$ uniformly at random for each Stage 2 prime $p$. This sieve is expected to sparsify the set of survivors by a factor

$\displaystyle \prod_{\log x < p \leq z} \left(1 - \frac{1}{p}\right),$

which by Mertens’ theorem is of size

$\displaystyle (1+o(1)) \frac{\log\log x}{\log z}.$
In particular, if $y$ is given by (4) and $z$ is chosen as above, then all the strange logarithmic factors cancel out and

$\displaystyle \frac{c x \log\log\log x}{(\log\log x)^2} \times (1+o(1)) \frac{\log\log x}{\log z} = (4c+o(1)) \frac{x}{\log x}.$
In particular, we expect $Q$ to be cut down to a random set of size about $\frac{4cx}{\log x}$. This would already finish the job for very small $c$, and indeed Rankin’s original argument proceeds more or less along these lines. But now we want to take $c$ to be large.
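Mertens’ theorem is also easy to check numerically; the following toy computation (my own, with arbitrary cutoffs) compares the sieving factor with the predicted ratio of logarithms:

```python
import math

def sieve_density(a: int, b: int) -> float:
    """prod over primes a < p <= b of (1 - 1/p): the proportion of integers
    left after removing one random residue class mod p for each such prime."""
    primes = [p for p in range(2, b + 1)
              if all(p % d for d in range(2, math.isqrt(p) + 1))]
    return math.prod(1 - 1 / p for p in primes if p > a)

# Mertens' theorem predicts the product is (1 + o(1)) * log(a) / log(b)
a, b = 10, 10_000
assert abs(sieve_density(a, b) / (math.log(a) / math.log(b)) - 1) < 0.1
```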
Fortunately, we still have the Stage 3 primes to play with. But the number of Stage 3 primes is about $\frac{x}{2 \log x}$, which is a bit smaller than the number of surviving primes in $Q$, which is about $\frac{4cx}{\log x}$ when $c$ is large. So to make this work, most of the Stage 3 congruence classes need to sieve out many primes from $Q$, rather than just one or two. (Rankin’s original argument is based on sieving out one prime per congruence class; the subsequent work of Maier-Pomerance and Pintz is basically based on sieving out two primes per congruence class.)
Here, one has to take some care because the set of surviving primes is already quite sparse inside $[y]$ (its density is about $\frac{1}{\log^2 x}$, ignoring doubly logarithmic factors). So a randomly chosen congruence class $a_q \pmod q$ for a Stage 3 prime $q$ would in fact most likely catch none of the surviving primes at all. So we need to restrict attention to congruence classes which already catch a large number of primes in $Q$, so that even after the Stage 2 sieving one can hope to be left with many congruence classes that also catch a large number of surviving primes.
Here’s where my work with Ben came in. Suppose one has an arithmetic progression $a, a+r, \dots, a+(k-1)r$ of length $k$ consisting entirely of primes in $Q$, and with the spacing $r$ a multiple of a Stage 3 prime $q$; then the congruence class $a \pmod q$ is guaranteed to pick up at least $k$ primes in $Q$. My first theorem with Ben shows that no matter how large $k$ is, the set $Q$ does indeed contain some arithmetic progressions of length $k$. This result is not quite suitable for our applications here, because (a) we need the spacing $r$ to also be divisible by a Stage 3 prime (in our paper, we fix a particular choice of spacing for concreteness, although other choices are certainly possible), and (b) for technical reasons, it is insufficient to simply have a large number of arithmetic progressions of primes strewn around $Q$; they have to be “evenly distributed” in some sense in order to be able to still cover most of $Q$ after throwing out any progression that is partly or completely sieved out by the Stage 2 primes. Fortunately, though, these distributional results for linear equations in primes were established in a subsequent paper of Ben and myself, contingent on two conjectures (the Möbius-nilsequences conjecture and the inverse conjecture for the Gowers norms) which we also proved (the latter with Tamar Ziegler) in some further papers. (Actually, strictly speaking our work does not quite cover the case needed here, because the progressions are a little “narrow”; we need progressions of primes in $[y]$ whose spacing is comparable to $x$ rather than $y$, whereas our paper only considered the situation in which the spacing was comparable to the elements of the progression. It turns out that the arguments can be modified (somewhat tediously) to extend to this case.)
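The mechanism in the first sentence above is elementary and can be seen in a toy example (my own illustration): an arithmetic progression of primes whose spacing is divisible by $q$ lies in a single residue class modulo $q$, so one congruence class removes the whole progression at once.

```python
# primes 5, 11, 17, 23, 29: an arithmetic progression with spacing 6,
# which is divisible by q = 3; all of them therefore lie in one class mod 3
ap, q = [5, 11, 17, 23, 29], 3
classes = {p % q for p in ap}
assert classes == {2}  # sieving out the class 2 mod 3 removes all five primes
```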