Kevin Ford, James Maynard, and I have uploaded to the arXiv our preprint “Chains of large gaps between primes“. This paper was announced in our previous paper with Konyagin and Green, which was concerned with the largest gap
$$G_1(X) := \max_{p_{n+1} \leq X} (p_{n+1} - p_n)$$

between consecutive primes up to $X$, in which we improved the Rankin bound of

$$G_1(X) \gg \frac{\log X \log_2 X \log_4 X}{(\log_3 X)^2}$$

to

$$G_1(X) \gg \frac{\log X \log_2 X \log_4 X}{\log_3 X}$$

for large $X$ (where we use the abbreviations $\log_2 X := \log\log X$, $\log_3 X := \log\log\log X$, and $\log_4 X := \log\log\log\log X$). Here, we obtain an analogous result for the quantity

$$G_k(X) := \max_{p_{n+k} \leq X} \min( p_{n+1} - p_n, p_{n+2} - p_{n+1}, \ldots, p_{n+k} - p_{n+k-1} ),$$

which measures how far apart the gaps between chains of $k$ consecutive primes can be. Our main result is

$$G_k(X) \gg \frac{1}{k^2} \frac{\log X \log_2 X \log_4 X}{\log_3 X}$$

whenever $X$ is sufficiently large depending on $k$, with the implied constant here absolute (and effective). The factor of $\frac{1}{k^2}$ is inherent to the method, and related to the basic probabilistic fact that if one selects $k$ numbers at random from the unit interval $[0,1]$, then one expects the minimum gap between adjacent numbers to be about $\frac{1}{k^2}$ (i.e. smaller than the mean spacing of $\frac{1}{k}$ by an additional factor of $\frac{1}{k}$).
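(As a quick numerical sanity check on this heuristic, and purely as an illustration rather than anything used in the paper, one can simulate the minimum adjacent gap of $k$ uniform random points; the script below is a minimal sketch.)

```python
import random

def min_adjacent_gap(k):
    """Minimum gap between adjacent points among k uniform random points in [0,1]."""
    pts = sorted(random.random() for _ in range(k))
    return min(b - a for a, b in zip(pts, pts[1:]))

for k in (10, 100, 1000):
    trials = 2000
    avg = sum(min_adjacent_gap(k) for _ in range(trials)) / trials
    # The mean spacing is about 1/k; the minimum gap is smaller by another factor of 1/k.
    print(f"k = {k}: average minimum gap {avg:.2e}, compared with 1/k^2 = {1/k**2:.2e}")
```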
Our arguments combine those from the previous paper with the matrix method of Maier, who (in our notation) showed that

$$G_k(X) \gg_k \frac{\log X \log_2 X \log_4 X}{(\log_3 X)^2}$$

for an infinite sequence of $X$ going to infinity. (Maier needed to restrict to an infinite sequence to avoid Siegel zeroes, but we are able to resolve this issue by the now standard technique of simply eliminating a prime factor of an exceptional conductor from the sieve-theoretic portion of the argument. As a byproduct, this also makes all of the estimates in our paper effective.)
As its name suggests, the Maier matrix method is usually presented by imagining a matrix of numbers, and using information about the distribution of primes in the columns of this matrix to deduce information about the primes in at least one of the rows of the matrix. We found it convenient to interpret this method in an equivalent probabilistic form as follows. Suppose one wants to find an interval $[n, n+y]$ which contains a block of at least $k$ primes, each separated from each other by at least $g$ (ultimately, $y$ will be something like $\frac{\log X \log_2 X \log_4 X}{\log_3 X}$ and $g$ something like $y/k^2$). One can do this by the probabilistic method: pick $n$ to be a large random natural number (with the precise distribution to be chosen later), and try to lower bound the probability that the interval $[n, n+y]$ contains at least $k$ primes, no two of which are within $g$ of each other.
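(A configuration of $k$ primes, pairwise separated by at least $g$ and with no other primes in between, is exactly what a large value of $G_k(X)$ records. For concreteness, and purely as an illustration that plays no role in the argument, here is a short brute-force computation of $G_k(X)$ for small $X$ directly from the definition above; the function names are just for this sketch.)

```python
def primes_up_to(X):
    """Sieve of Eratosthenes: all primes p <= X."""
    sieve = bytearray([1]) * (X + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(X ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, X + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

def G(k, X):
    """G_k(X): max over chains p_n < ... < p_{n+k} <= X of k+1 consecutive primes
    of the smallest of the k gaps p_{n+1}-p_n, ..., p_{n+k}-p_{n+k-1}."""
    ps = primes_up_to(X)
    best = 0
    for i in range(len(ps) - k):
        chain = ps[i:i + k + 1]
        best = max(best, min(b - a for a, b in zip(chain, chain[1:])))
    return best

for k in (1, 2, 3):
    print(f"G_{k}(X) for X = 10^3, 10^4, 10^5:", [G(k, 10 ** e) for e in (3, 4, 5)])
```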
By carefully choosing the residue class of $n$ with respect to small primes, one can eliminate several of the shifts $n+h$, $0 \leq h \leq y$, from consideration of being prime immediately. For instance, if $n$ is chosen to be large and even, then the $n+h$ with $h$ even have no chance of being prime and can thus be eliminated; similarly if $n$ is large and odd, then $n+h$ cannot be prime for any odd $h$. Using the methods of our previous paper, we can find a residue class $b \pmod{P}$ (where $P$ is a product of a large number of primes) such that, if one chooses $n$ to be a large random element of $b \pmod{P}$ (that is, $n = b + mP$ for some large random integer $m$), then the set $\mathcal{A} \subset [0,y]$ of shifts $h$ for which $n+h$ still has a chance of being prime has size comparable to something like $k \log X / \log_2 X$; furthermore this set $\mathcal{A}$ is fairly well distributed in $[0,y]$ in the sense that it does not concentrate too strongly in any short subinterval of $[0,y]$. The main new difficulty, not present in the previous paper, is to get lower bounds on the size of $\mathcal{A}$ in addition to upper bounds, but this turns out to be achievable by a suitable modification of the arguments.
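(As a toy illustration of this thinning-out effect, and not the actual construction of the paper, the following sketch counts the shifts $h \in [0,y]$ that survive a naive choice of residue class modulo the product $P$ of the first few primes; the names and parameters are just placeholders. A generic residue class leaves about $y \prod_{p \mid P}(1 - 1/p)$ survivors, and the point of the Erdos-Rankin-type constructions is that a much more carefully chosen class does substantially better.)

```python
from math import gcd, prod

def surviving_shifts(n_mod_P, small_primes, y):
    """Shifts h in [0, y] for which n + h is not already forced to be composite
    by a prime dividing P, when n is a large integer with n = n_mod_P (mod P)."""
    P = prod(small_primes)
    return [h for h in range(y + 1) if gcd(n_mod_P + h, P) == 1]

small_primes = [2, 3, 5, 7, 11, 13]
y = 10_000
A = surviving_shifts(0, small_primes, y)   # naive choice: n divisible by every small prime
heuristic = y * prod(1 - 1 / p for p in small_primes)
print(len(A), round(heuristic))            # survivor count vs. y * prod(1 - 1/p)
```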
Using a version of the prime number theorem in arithmetic progressions due to Gallagher, one can show that for each remaining shift $h \in \mathcal{A}$, $n+h$ is going to be prime with probability comparable to $\frac{\log_2 X}{\log X}$, so one expects about $k$ primes in the set $\{ n+h: h \in \mathcal{A} \}$. An upper bound sieve (e.g. the Selberg sieve) also shows that for any distinct $h_1, h_2 \in \mathcal{A}$, the probability that $n+h_1$ and $n+h_2$ are both prime is $O\left( \left(\frac{\log_2 X}{\log X}\right)^2 \right)$. Using this and some routine second moment calculations, one can then show that with large probability, the set $\{ n+h: h \in \mathcal{A} \}$ will indeed contain about $k$ primes, no two of which are closer than $g$ to each other; with no other numbers in this interval being prime, this gives a lower bound on $G_k(X)$.
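In schematic form (suppressing constants and several technical reductions, so this should be read as a sketch rather than the precise argument of the paper), the second moment computation runs as follows. Writing $N := \sum_{h \in \mathcal{A}} 1_{n+h \hbox{ prime}}$ for the number of primes produced, the Gallagher-type estimate gives

$$\mathbb{E} N = \sum_{h \in \mathcal{A}} \mathbb{P}( n+h \hbox{ prime} ) \asymp |\mathcal{A}| \frac{\log_2 X}{\log X} \asymp k,$$

while the sieve upper bound on pair correlations gives

$$\mathbb{E} N^2 = \mathbb{E} N + \sum_{h_1 \neq h_2 \in \mathcal{A}} \mathbb{P}( n+h_1, n+h_2 \hbox{ both prime} ) \ll k + |\mathcal{A}|^2 \Big(\frac{\log_2 X}{\log X}\Big)^2 \ll k^2,$$

so that (by the Paley-Zygmund or Chebyshev inequality) one has $N \gg k$ with probability bounded below by an absolute constant, which already suffices to produce the desired configuration for some $n$. Similarly, using the well-distribution of $\mathcal{A}$, the expected number of pairs $h_1 \neq h_2 \in \mathcal{A}$ with $|h_1 - h_2| \leq g$ and $n+h_1, n+h_2$ both prime is

$$\ll |\mathcal{A}| \cdot \frac{g |\mathcal{A}|}{y} \cdot \Big(\frac{\log_2 X}{\log X}\Big)^2 \asymp \frac{k^2 g}{y},$$

which can be made small by taking $g$ to be a small multiple of $y/k^2$; this is the step at which the $\frac{1}{k^2}$ loss enters.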
28 comments
16 November, 2015 at 8:31 pm
David Roberts
Typo, I think: “inherent for your method”
[Corrected, thanks – T.]
17 November, 2015 at 1:24 am
Anonymous
This typo is in the second line below the main result.
[Corrected, thanks – T.]
16 November, 2015 at 10:35 pm
Nick Cook
Typos in the last paragraph: some of the symbols appear as slashes.
[Corrected, thanks – T.]
17 November, 2015 at 1:13 am
Anonymous
In the definition of $G_k(X)$, it seems clearer to have the minimum inside parentheses.
[Corrected, thanks – T.]
17 November, 2015 at 1:20 am
Anonymous
Can the "sufficiently large" dependence of $X$ on $k$ be made effective?
17 November, 2015 at 8:36 am
Terence Tao
Yes; we do not rely on ineffective results such as Siegel’s theorem, so all of the assertions in our paper can be made effective (though the lower bound required on $X$ is likely to be quite poor, as we did not try to optimise in this direction).
17 November, 2015 at 5:37 am
MatjazG
A typo in the 3rd paragraph from the bottom up:
should probably be
(or even
?).
[Corrected, thanks – T.]
17 November, 2015 at 6:00 am
Anonymous
In the line before the last, it seems that "$k$ primes" should be "$k$ consecutive primes".
[Corrected, thanks – T.]
18 November, 2015 at 4:23 am
Anonymous
Is it possible (using the above methods) to show (unconditionally) that $G_k(X) = o(G_1(X))$?
18 November, 2015 at 8:36 am
Terence Tao
I doubt it, because we only control a tiny contribution to $G_1(X)$; roughly speaking we only understand those large prime gaps with the property that all the composite numbers within the gap are divisible by small primes. There could be much larger prime gaps (e.g. of the order of the Cramer prediction $\log^2 X$) that are not analysed at all by known methods, and in that case we have essentially no relationship between $G_k(X)$ and $G_1(X)$ other than the trivial bound $G_k(X) \leq G_1(X)$.
18 November, 2015 at 10:25 am
Anonymous
I see. But is it still possible that the transition from a lower bound on $G_1$ to a lower bound on $G_k$ (via the $\frac{1}{k^2}$ factor) may be done by current methods even for "slightly better" lower bounds on $G_1$ than currently known? (i.e. Is the transition from a lower bound for $G_1$ to a lower bound for $G_k$ via the $\frac{1}{k^2}$ factor independent(!) of the exact lower bound on $G_1$, as long as it is "sufficiently small"?)
19 November, 2015 at 7:19 am
Anonymous
More precisely, what is the limitation on the growth rate of a function $F$ such that $G_1(X) \gg F(X)$ still implies (using current methods) that $G_k(X) \gg F(X)/k^2$?
19 November, 2015 at 8:21 am
Terence Tao
As I said earlier, there is no direct relationship beyond the trivial inequality; the lower bound on G_1 is not directly used in the arguments establishing a lower bound on G_k. Any new lower bound on G_1 beyond what we currently know could be due to the presence of sporadic large prime gaps that are not adjacent to each other, and so do not serve to increase G_k beyond the bounds currently available while making G_1 bigger. We have no tool currently to understand these sporadic large gaps, so this new information about G_1 has no direct bearing on G_k. (But it is likely that any new method that is capable of improving the bounds on G_1 will also, after some nontrivial modification, improve the bounds on G_k; one actually has to use the method of proof, though, and not simply take the bound on G_1 as a "black box".)
18 November, 2015 at 7:42 am
Lionel
Dear Terry, I have just noticed the French flag on the banner. As a Parisian, I am touched by this discreet display of sympathy. I have been reading your blog with great interest for quite a while. Not being a mathematician, but having a background in the subject, I would like to praise the way you provide the flavour of the results and the issues at stake. I cannot follow all the developments, but I dream about them.
20 November, 2015 at 4:00 am
Anonymous
Professor Tao.
I guess you already know that all prime numbers are in the interval.
As is obvious, twin primes are included. But if we want to know only the twin primes, then these are defined.
When mathematicians look for something that is already defined, it has no sense.
21 November, 2015 at 6:04 am
Anonymous
Dear Terry and everyone, I'm Dr. Tao's blog's fan. Maybe I'm the oddest and craziest man in the world, because I have a crazy remark about Terry related to the Riemann problem. Terry never presents the result of Riemann unless someone announces a proof of Riemann. But Terry will do a thing (very surprising, very interesting, very quick). He will do a hat-trick (Rie-Gold-Twin) at the same time. I predict many things very exactly, from football, economy, rain, sun to math. Trust me. Please do not vote against. All you never do level to understand anymore breakthrough (Riemann, Goldbach, Twin p
22 November, 2015 at 5:57 am
Anonymous
I think there is a typo in Maier's previous result; at least I have no idea what the notation $\gg_k$ is supposed to mean.
[$X \gg_k Y$ denotes an estimate of the form $X \geq c_k Y$ for some constant $c_k > 0$ that depends only on $k$. -T.]
4 December, 2015 at 7:49 am
Anonymous
Is it possible (using current methods) to extend this lower bound for $G_k(X)$ to the case where $k$ is a sufficiently slowly increasing and unbounded function of $X$?
4 December, 2015 at 2:01 pm
Terence Tao
If one is willing to lose a factor of $\frac{1}{k^2}$ on the RHS, then our techniques would extend to some sufficiently slowly growing function of $X$ (basically because we already demonstrated this for any fixed $k$ with $X$ sufficiently large depending on $k$). But we do not know how to get rid of this $\frac{1}{k^2}$ factor (as pointed out in the post, it is related to the fact that given $k$ random points in the unit interval, one expects one pair of these to be within about $\frac{1}{k^2}$ of each other).
5 December, 2015 at 1:26 am
Anonymous
Therefore, let (for each fixed $k$) $X_0(k)$ be a "sufficiently large" $X$ such that for any $X \geq X_0(k)$ the above lower bound (for $G_k(X)$) holds. Clearly, we may assume $X_0(k)$ to be a strictly increasing function of $k$, so by taking $k(X)$ to be the inverse function of $X_0(k)$, the above lower bound for $G_k(X)$ holds with $k = k(X)$.
This leads to the question: what is the fastest growth of the function $X_0(k)$ (i.e. the slowest growth of its inverse function $k(X)$) available by the current methods?
5 December, 2015 at 3:59 am
Anonymous
Of course, by taking $X_0(k)$ to grow sufficiently fast, the resulting lower bound for $G_{k(X)}(X)$ can (due to the $\frac{1}{k^2}$ factor) be made trivial, so the above question for the fastest growth of $X_0(k)$ seems interesting only under the additional constraint that the resulting lower bound for $G_{k(X)}(X)$ grows no slower than a given increasing unbounded function of $X$ (e.g. the Rankin bound).
5 December, 2015 at 11:00 am
Terence Tao
I think the methods in the paper might be pushed to take $k$ as large as $(\log_3 X)^c$ for a small absolute constant $c$, though I haven't fully checked the details. (In fact one may be able to take $c$ arbitrarily close to 1 if one is careful enough. Not coincidentally, $\log_3 X$ is the amount by which our arguments improve over the Erdos-Rankin bound.)
14 January, 2016 at 1:58 am
epsilon
Hello Prof. Tao,
With the prime number theorem intrinsically tied to the Riemann Hypothesis, what impact on RH would there be if there were only finitely many primes? Would that be possible?
Assuming RH to be false, what would this imply for prime numbers and their existence?
8 August, 2017 at 7:19 am
Anonymous
Hello math enthusiasts,
Can anyone try to provide a proof for or against the following statement:
"For any subset {1,..,x} of the integers, if x is even, then the number of primes in {1,..,x} is x/2, and if x is odd, then the number of primes in {1,..,x} is x/2 + 1/2."
Please help an amateur from Africa.
8 August, 2017 at 8:37 am
Anonymous
Sorry, the claim was false; I found a disproof for x = 9.
2 September, 2017 at 7:04 am
gninrepoli
Good day. The weakened form of Grimm's hypothesis states
. The trivial upper bound for
gives a relatively good bound. One can rewrite
as
, where
is the characteristic function.
Can this expression be promising?
On the other hand I used
relying on this work http://www.iecl.univ-lorraine.fr/~Gerald.Tenenbaum/PUBLIC/PPP/ShortSums.pdf with no luck.
I will note that by this I am trying to give an estimate
at least for
, which would be an improvement of the result obtained by Erdős: https://www.renyi.hu/~p_erdos/1971-24.pdf
Unfortunately I did not succeed. Maybe someone will find it useful.
2 September, 2017 at 10:42 pm
gninrepoli
It seems that
for
.
Using the Heath-Brown method one can perhaps reach such a bound, but I do not know how to approach it.
4 September, 2017 at 11:03 am
gninrepoli
Instead of
it should be
, where
is the
-th prime number and
is the prime-counting function. Sorry for the confusion.