As in previous posts, we use the following asymptotic notation: $x$ is a parameter going off to infinity, and all quantities may depend on $x$ unless explicitly declared to be "fixed". The asymptotic notation $O()$, $o()$, $\lessapprox$ is then defined relative to this parameter. A quantity $q$ is said to be of polynomial size if one has $q = O(x^{O(1)})$, and bounded if $q = O(1)$. We also write $X \lessapprox Y$ for $X \ll x^{o(1)} Y$, and $X \sim Y$ for $X \lessapprox Y \lessapprox X$.
The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument (though not fully self-contained, as we will need some lemmas from previous posts).
In order to state the main result, we need to recall some definitions.
Definition 1 (Singleton congruence class system) Let $I \subset \mathbb{R}$, and let $\mathcal{S}_I$ denote the square-free numbers whose prime factors lie in $I$. A singleton congruence class system on $I$ is a collection $\mathcal{C} = \{a_q\}_{q \in \mathcal{S}_I}$ of primitive residue classes $a_q \in (\mathbb{Z}/q\mathbb{Z})^\times$ for each $q \in \mathcal{S}_I$, obeying the Chinese remainder theorem property

$$ a_{qr}\ (qr) = (a_q\ (q)) \cap (a_r\ (r)) $$

whenever $q, r \in \mathcal{S}_I$ are coprime. We say that such a system $\mathcal{C}$ has controlled multiplicity if, for any fixed $A \geq 1$ and any congruence class $a\ (r)$ with $r \in \mathcal{S}_I$, the number of $q \in \mathcal{S}_I$ with $q \leq x^A$ and $a_q = a\ (r)$ exceeds its expected value by a factor of at most $\tau(r)^{O(1)} \log^{O(1)} x$. Here $\tau$ is the divisor function.
Next we need a relaxation of the concept of $y$-smoothness.
Definition 2 (Dense divisibility) Let $y \geq 1$. A positive integer $n$ is said to be $y$-densely divisible if, for every $1 \leq R \leq n$, there exists a factor of $n$ in the interval $[y^{-1} R, R]$. We let $\mathcal{D}_y$ denote the set of $y$-densely divisible positive integers.
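As a concrete illustration (a small sketch of my own, not part of the original argument), dense divisibility can be tested by brute force: the interval condition is equivalent to requiring that consecutive divisors of $n$ differ by a multiplicative factor of at most $y$.

```python
# Brute-force test of Definition 2 (illustrative sketch only).  The interval
# condition is equivalent to: consecutive divisors d1 < d2 of n always
# satisfy d2 <= y * d1.
def is_densely_divisible(n, y):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return all(d2 <= y * d1 for d1, d2 in zip(divs, divs[1:]))

print(is_densely_divisible(210, 2))  # True: 210 = 2*3*5*7 has tightly spaced divisors
print(is_densely_divisible(202, 2))  # False: the divisors 2 and 101 are far apart
```

Note that every $y$-smooth number is automatically $y$-densely divisible (greedily multiplying prime factors never overshoots by more than a factor of $y$), while the converse fails in general, which is why this is a genuine relaxation of smoothness.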
Now we present a strengthened version $MPZ'[\varpi,\delta]$ of the Motohashi-Pintz-Zhang conjecture $MPZ[\varpi]$, which depends on parameters $0 < \varpi < \frac{1}{4}$ and $0 < \delta < \frac{1}{4} + \varpi$.

Conjecture 3 ($MPZ'[\varpi,\delta]$) Let $I \subset \mathbb{R}$, and let $\{a_q\}_{q \in \mathcal{S}_I}$ be a congruence class system with controlled multiplicity. Then

$$ \sum_{q \in \mathcal{S}_I \cap \mathcal{D}_{x^\delta}:\ q \leq x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]};\ a_q)| \ll x \log^{-A} x $$

for any fixed $A > 0$, where $\Lambda$ is the von Mangoldt function.
The difference between this conjecture and the weaker conjecture $MPZ[\varpi]$ is that the modulus $q$ is constrained to be $x^\delta$-densely divisible rather than $x^\delta$-smooth (note that $I$ is no longer constrained to lie in $[1, x^\delta]$). This relaxation of the smoothness condition improves the Goldston-Pintz-Yildirim type sieving needed to deduce $DHL[k_0,2]$ from $MPZ'[\varpi,\delta]$; see this previous post.
The main result we will establish is the following:

Theorem 4 $MPZ'[\varpi,\delta]$ holds whenever the fixed quantities $\varpi, \delta > 0$ obey a certain explicit linear inequality (derived in this post).

This improves upon the previous constraints of this type (see this blog comment, and Theorem 13 of this previous post), which were also only established for $MPZ[\varpi]$ instead of the stronger $MPZ'[\varpi,\delta]$. Inserting Theorem 4 into the Pintz sieve from this previous post gives $DHL[k_0,2]$ for an improved value of $k_0$ (see this blog comment), which when inserted in turn into newly set up tables of narrow prime tuples gives infinitely many prime gaps of separation at most the diameter of the narrowest known admissible $k_0$-tuple.
As in previous posts, we use the following asymptotic notation: $x$ is a parameter going off to infinity, and all quantities may depend on $x$ unless explicitly declared to be "fixed". The asymptotic notation $O()$, $o()$ is then defined relative to this parameter. A quantity $q$ is said to be of polynomial size if one has $q = O(x^{O(1)})$, and said to be bounded if $q = O(1)$. Another convenient notation: we write $X \lessapprox Y$ for $X \ll x^{o(1)} Y$. Thus for instance the divisor bound asserts that if $d$ has polynomial size, then the number of divisors of $d$ is $\lessapprox 1$.
This post is intended to highlight a phenomenon unearthed in the ongoing polymath8 project (and is in fact a key component of Zhang's proof that there are bounded gaps between primes infinitely often), namely that one can get quite good bounds on relatively short exponential sums when the modulus $q$ is smooth, through the basic technique of Weyl differencing (ultimately based on the Cauchy-Schwarz inequality, and also related to the van der Corput lemma in equidistribution theory). Improvements in the case of smooth moduli have appeared before in the literature (e.g. in this paper of Heath-Brown, this paper of Graham and Ringrose, this later paper of Heath-Brown, this paper of Chang, or this paper of Goldmakher); the arguments here are particularly close to that of the first paper of Heath-Brown. It now also appears that further optimisation of this Weyl differencing trick could lead to noticeable improvements in the numerology for the polymath8 project, so I am devoting this post to explaining this trick further.
To illustrate the method, let us begin with the classical problem in analytic number theory of estimating an incomplete character sum

$$ \sum_{M+1 \leq n \leq M+N} \chi(n) $$

where $\chi$ is a primitive Dirichlet character of some conductor $q$, $M$ is an integer, and $N$ is some quantity between $1$ and $q$. Clearly we have the trivial bound

$$ \left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \leq N; \ \ \ \ \ (1) $$

we also have the classical Pólya-Vinogradov inequality

$$ \left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \ll q^{1/2} \log q. \ \ \ \ \ (2) $$

This latter inequality gives improvements over the trivial bound when $N$ is much larger than $q^{1/2} \log q$, but not for $N$ much smaller than $q^{1/2}$. The Pólya-Vinogradov inequality can be deduced via a little Fourier analysis from the completed exponential sum bound

$$ \left|\sum_{n \in \mathbb{Z}/q\mathbb{Z}} \chi(n) e_q(an)\right| \leq q^{1/2} $$

for any $a \in \mathbb{Z}/q\mathbb{Z}$, where $e_q(n) := e^{2\pi i n / q}$. (In fact, from the classical theory of Gauss sums, this exponential sum is equal to $\epsilon \bar{\chi}(a) q^{1/2}$ for some complex number $\epsilon$ of norm $1$.)
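For a quick numerical sense of these bounds, here is a small sketch of my own (not from the post), using the Legendre symbol as the primitive character modulo a prime; the incomplete sum is typically far below both the trivial and Pólya-Vinogradov bounds:

```python
# Compare an incomplete character sum against the trivial bound N and the
# Polya-Vinogradov bound q^{1/2} log q, for the Legendre symbol mod a prime q.
import math

q = 10007  # a prime; chi(n) is the Legendre symbol (n|q)
def chi(n):
    t = pow(n % q, (q - 1) // 2, q)  # Euler's criterion
    return 0 if t == 0 else (1 if t == 1 else -1)

M, N = 1234, 2000
S = sum(chi(n) for n in range(M + 1, M + N + 1))
print(abs(S), N, round(math.sqrt(q) * math.log(q)))  # |S| vs the two bounds
```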
In the case when $q$ is a prime, improving upon the above two inequalities is an important but difficult problem, with only partially satisfactory results so far. To give just one indication of the difficulty, the seemingly modest improvement

$$ \left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \ll q^{1/2} \log \log q $$

to the Pólya-Vinogradov inequality when $q$ was a prime required a 14-page paper in Inventiones by Montgomery and Vaughan to prove, and even then it was only conditional on the generalised Riemann hypothesis! See also this more recent paper of Granville and Soundararajan for an unconditional variant of this result in the case that $\chi$ has odd order.
Another important improvement is the Burgess bound, which in our notation asserts that

$$ \left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \lessapprox N^{1-1/k} q^{\frac{k+1}{4k^2}} \ \ \ \ \ (3) $$

for any fixed integer $k \geq 2$, assuming that $q$ is square-free (for simplicity) and of polynomial size; see this previous post for a discussion of the Burgess argument. This is non-trivial for $N$ as small as $q^{1/4+o(1)}$.
In the case when $q$ is prime, there has been very little improvement to the Burgess bound (or its Fourier-dual counterpart, which can give bounds for $N$ as large as $q$) in the last fifty years; an improvement to the exponents in (3) in this case (particularly anything that gave a power saving for $N$ below $q^{1/4}$) would in fact be rather significant news in analytic number theory.
However, in the opposite case when $q$ is smooth – that is to say, all of its factors are much smaller than $q$ – then one can do better than the Burgess bound in some regimes. This fact has been observed in several places in the literature (in particular, in the papers of Heath-Brown, Graham-Ringrose, Chang, and Goldmakher mentioned previously), but also turns out to (implicitly) be a key insight in Zhang's paper on bounded prime gaps. In the case of character sums, one such improved estimate (closely related to Theorem 2 of the Heath-Brown paper) is as follows:
Proposition 1 Let $q = q_1 q_2$ be square-free with a factorisation into factors $q_1, q_2$ and of polynomial size, and let $M, N$ be integers with $1 \leq N \leq q$. Then for any primitive character $\chi$ with conductor $q$, one has

$$ \left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \lessapprox N^{1/2} q_1^{1/2} + N^{1/2} q_2^{1/4}. $$
This proposition is particularly powerful when $q$ is smooth, as this gives many factorisations $q = q_1 q_2$ with the ability to specify $q_1, q_2$ with a fair amount of accuracy. For instance, if $q$ is $y$-smooth (i.e. all prime factors are at most $y$), then by the greedy algorithm one can find a divisor $q_1$ of $q$ with $y^{-1} q^{1/3} \leq q_1 \leq q^{1/3}$; if we set $q_2 := q/q_1$, then $q^{2/3} \leq q_2 \leq y q^{2/3}$, and the above proposition then gives

$$ \left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \lessapprox y^{1/4} N^{1/2} q^{1/6} $$

which can improve upon the Burgess bound when $y$ is small. For instance, if $N = q^{1/2}$, then this bound becomes $y^{1/4} q^{5/12}$; in contrast the Burgess bound only gives $q^{7/16}$ for this value of $N$ (using the optimal choice $k = 2$ for $k$), which is inferior for $y \leq q^{1/12}$.
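Here is a toy implementation (my own sketch) of the greedy step just described: multiply prime factors together while staying below $q^{1/3}$; smoothness guarantees the result cannot undershoot the target by more than a factor of $y$.

```python
# Greedy selection of a divisor q1 of a y-smooth squarefree q with
# q^{1/3}/y <= q1 <= q^{1/3}.  If a prime p is skipped, then q1 * p > target,
# so q1 > target / p >= target / y, which gives the claimed lower bound.
def greedy_factor(primes, target):
    q1 = 1
    for p in sorted(primes):
        if q1 * p <= target:
            q1 *= p
    return q1

primes = [2, 3, 5, 7, 11, 13]        # q = 30030, y = 13
q = 1
for p in primes:
    q *= p
target = q ** (1 / 3.0)              # about 31.1
q1 = greedy_factor(primes, target)   # 30 = 2*3*5 lies in [31.1/13, 31.1]
print(q1, q // q1)                   # q2 = 1001 lies in [q^{2/3}, y q^{2/3}]
```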
The hypothesis that $q$ be squarefree may be relaxed, but for applications to the Polymath8 project, it is only the squarefree moduli that are relevant.
Proof: If $N \leq \max(q_1, q_2^{1/2})$ then the claim follows from the trivial bound (1), while for $N \geq q_1 q_2^{1/2}$ the claim follows from (2). Hence we may assume that

$$ \max(q_1, q_2^{1/2}) \leq N \leq q_1 q_2^{1/2}. $$

We use the method of Weyl differencing, the key point being to difference in multiples of $q_1$.

Let $K := \lfloor N/q_1 \rfloor$, thus $K \geq 1$. For any $1 \leq k \leq K$, we have

$$ \sum_{M+1 \leq n \leq M+N} \chi(n) = \sum_n 1_{[M+1,M+N]}(n + k q_1)\, \chi(n + k q_1), $$

and hence on averaging in $k$

$$ \sum_{M+1 \leq n \leq M+N} \chi(n) = \frac{1}{K} \sum_n \sum_{k=1}^K 1_{[M+1,M+N]}(n + k q_1)\, \chi(n + k q_1). \ \ \ \ \ (4) $$
By the Chinese remainder theorem, we may factor

$$ \chi = \chi_1 \chi_2 $$

where $\chi_1, \chi_2$ are primitive characters of conductor $q_1, q_2$ respectively. As $\chi_1$ is periodic of period $q_1$, we thus have

$$ \chi(n + k q_1) = \chi_1(n) \chi_2(n + k q_1), $$

and so we can take $\chi_1(n)$ out of the inner summation of the right-hand side of (4) to obtain

$$ \sum_{M+1 \leq n \leq M+N} \chi(n) = \frac{1}{K} \sum_n \chi_1(n) \sum_{k=1}^K 1_{[M+1,M+N]}(n + k q_1)\, \chi_2(n + k q_1) $$

and hence by the triangle inequality

$$ \left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \leq \frac{1}{K} \sum_n \left|\sum_{k=1}^K 1_{[M+1,M+N]}(n + k q_1)\, \chi_2(n + k q_1)\right|. $$

Note how the characters on the right-hand side only have period $q_2$ rather than $q$. This reduction in the period is ultimately the source of the saving over the Pólya-Vinogradov inequality.
Note that the inner sum vanishes unless $n$ lies in an interval of length $O(N)$ by choice of $K$. Thus by Cauchy-Schwarz one has

$$ \left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \ll \frac{N^{1/2}}{K} \left(\sum_n \left|\sum_{k=1}^K 1_{[M+1,M+N]}(n + k q_1)\, \chi_2(n + k q_1)\right|^2\right)^{1/2}. $$

We expand the right-hand side as

$$ \frac{N^{1/2}}{K} \left(\sum_{k=1}^K \sum_{k'=1}^K \sum_n 1_{[M+1,M+N]}(n + k q_1)\, 1_{[M+1,M+N]}(n + k' q_1)\, \chi_2(n + k q_1) \overline{\chi_2(n + k' q_1)}\right)^{1/2}. $$

We first consider the diagonal contribution $k = k'$. In this case we use the trivial bound $O(N)$ for the inner summation, and we soon see that the total contribution here is $O(N/K^{1/2}) = O(N^{1/2} q_1^{1/2})$.
Now we consider the off-diagonal case; by symmetry we can take $k < k'$. Then the indicator functions restrict $n$ to an interval of length $O(N)$. On the other hand, as a consequence of the Weil conjectures for curves one can show that

$$ \left|\sum_{n \in \mathbb{Z}/q_2\mathbb{Z}} \chi_2(n + k q_1) \overline{\chi_2(n + k' q_1)} e_{q_2}(an)\right| \lessapprox (k - k', q_2)^{1/2} q_2^{1/2} $$

for any $a \in \mathbb{Z}/q_2\mathbb{Z}$; indeed one can use the Chinese remainder theorem and the square-free nature of $q_2$ to reduce to the case when $q_2$ is prime, in which case one can apply (for instance) the original paper of Weil to establish this bound, noting also that $q_1$ and $q_2$ are coprime since $q$ is squarefree. Applying the method of completion of sums (or the Parseval formula), this shows that

$$ \sum_n 1_{[M+1,M+N]}(n + k q_1)\, 1_{[M+1,M+N]}(n + k' q_1)\, \chi_2(n + k q_1) \overline{\chi_2(n + k' q_1)} \lessapprox \left(1 + \frac{N}{q_2}\right) (k - k', q_2)^{1/2} q_2^{1/2}. $$

Summing in $k, k'$ (using Lemma 5 from this previous post to handle the greatest common divisor factor) we see that the total contribution to the off-diagonal case is

$$ \lessapprox \frac{N^{1/2}}{K} \left(K^2 \left(1 + \frac{N}{q_2}\right) q_2^{1/2}\right)^{1/2} $$

which simplifies to $\lessapprox N^{1/2} q_1^{1/2} + N^{1/2} q_2^{1/4}$ (using the hypothesis $N \leq q_1 q_2^{1/2}$). The claim follows. $\Box$
A modification of the above argument (using more complicated versions of the Weil conjectures) allows one to replace the summand $\chi(n)$ by more complicated summands such as $\chi(f(n)) e_q(g(n))$ for some polynomials or rational functions $f, g$ of bounded degree and obeying a suitable non-degeneracy condition (after restricting of course to those $n$ for which the arguments of $f$ and $g$ are well-defined). We will not detail this here, but instead turn to the question of estimating slightly longer exponential sums, such as

$$ \sum_{M+1 \leq n \leq M+N} e_q(f(n)) $$

where $N$ should be thought of as a little bit larger than $q^{1/2}$.
This is the final continuation of the online reading seminar of Zhang’s paper for the polymath8 project. (There are two other continuations; this previous post, which deals with the combinatorial aspects of the second part of Zhang’s paper, and this previous post, that covers the Type I and Type II sums.) The main purpose of this post is to present (and hopefully, to improve upon) the treatment of the final and most innovative of the key estimates in Zhang’s paper, namely the Type III estimate.
The main estimate was already stated as Theorem 17 in the previous post, but we quickly recall the relevant definitions here. As in other posts, we always take $x$ to be a parameter going off to infinity, with the usual asymptotic notation $O()$, $o()$, $\lessapprox$ associated to this parameter.
Definition 1 (Coefficient sequences) A coefficient sequence is a finitely supported sequence $\alpha: \mathbb{N} \rightarrow \mathbb{R}$ that obeys the bounds

$$ |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (1) $$

for all $n$, where $\tau$ is the divisor function.

- (i) If $\alpha$ is a coefficient sequence and $a\ (q)$ is a primitive residue class, the (signed) discrepancy $\Delta(\alpha; a\ (q))$ of $\alpha$ in the sequence is defined to be the quantity

$$ \Delta(\alpha;\ a\ (q)) := \sum_{n \equiv a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{(n,q)=1} \alpha(n). \ \ \ \ \ (2) $$

- (ii) A coefficient sequence $\alpha$ is said to be at scale $N$ for some $N \geq 1$ if it is supported on an interval of the form $[(1-o(1)) N, (1+o(1)) N]$.
- (iii) A coefficient sequence $\alpha$ at scale $N$ is said to be smooth if it takes the form $\alpha(n) = \psi(n/N)$ for some smooth function $\psi: \mathbb{R} \rightarrow \mathbb{C}$ supported on an interval of size $O(1)$ obeying the derivative bounds

$$ \psi^{(j)}(t) = O(\log^{O(1)} x) $$

for all fixed $j \geq 0$ (note that the implied constant in the $O()$ notation may depend on $j$).
For any $I \subset \mathbb{R}$, let $\mathcal{S}_I$ denote the square-free numbers whose prime factors lie in $I$. The main result of this post is then the following result of Zhang:
Theorem 2 (Type III estimate) Let $\varpi, \delta, \sigma > 0$ be fixed quantities, and let $M, N_1, N_2, N_3 \gg 1$ be quantities such that

$$ M N_1 N_2 N_3 \sim x $$

and

$$ N_1 N_2,\ N_1 N_3,\ N_2 N_3 \gtrapprox x^{1/2+\sigma} $$

and

$$ x^{2\sigma} \lessapprox N_1, N_2, N_3 \lessapprox x^{1/2-\sigma} $$

for some fixed $\sigma > 0$. Let $\alpha, \psi_1, \psi_2, \psi_3$ be coefficient sequences at scale $M, N_1, N_2, N_3$ respectively with $\psi_1, \psi_2, \psi_3$ smooth. Then for any $I \subset [1, x^\delta]$ we have

$$ \sum_{q \in \mathcal{S}_I:\ q < x^{1/2+2\varpi}} \sup_{a\ (q) \text{ primitive}} |\Delta(\alpha \star \psi_1 \star \psi_2 \star \psi_3;\ a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (3) $$

for any fixed $A > 0$. In fact we have the stronger "pointwise" estimate

$$ |\Delta(\alpha \star \psi_1 \star \psi_2 \star \psi_3;\ a\ (q))| \ll x^{-\epsilon} \frac{x}{q} \ \ \ \ \ (4) $$

for all $q \in \mathcal{S}_I$ with $q < x^{1/2+2\varpi}$ and all primitive residue classes $a\ (q)$, and some fixed $\epsilon > 0$.
(This is very slightly stronger than previously claimed, in that one of the previous side conditions on the scales has been dropped.)
It turns out that Zhang does not exploit any averaging of the $\alpha$ factor, and matters reduce to the following:
Theorem 3 (Type III estimate without $\alpha$) Let $\varpi, \delta, \sigma > 0$ be fixed, and let $N_1, N_2, N_3 \gg 1$ be quantities obeying the same size conditions as in Theorem 2. Let $\psi_1, \psi_2, \psi_3$ be smooth coefficient sequences at scales $N_1, N_2, N_3$ respectively. Then we have

$$ |\Delta(\psi_1 \star \psi_2 \star \psi_3;\ a\ (q))| \ll x^{-\epsilon} \frac{N_1 N_2 N_3}{q} $$

for all $q \in \mathcal{S}_I$ with $q < x^{1/2+2\varpi}$, all primitive residue classes $a\ (q)$, and some fixed $\epsilon > 0$.
Let us quickly see how Theorem 3 implies Theorem 2. To show (4), it suffices to establish the bound

$$ \sum_{n \equiv b\ (q)} \psi_1 \star \psi_2 \star \psi_3(n) = X + O\left( x^{-\epsilon} \frac{N_1 N_2 N_3}{q} \right) $$

for all primitive residue classes $b\ (q)$, where $X$ denotes a quantity that is independent of $b$ (but can depend on other quantities such as $q$, $N_1$, $N_2$, $N_3$). The left-hand side can be rewritten as

$$ \frac{1}{\phi(q)} \sum_{(n,q)=1} \psi_1 \star \psi_2 \star \psi_3(n) + \Delta(\psi_1 \star \psi_2 \star \psi_3;\ b\ (q)). $$

From Theorem 3 we have

$$ \Delta(\psi_1 \star \psi_2 \star \psi_3;\ b\ (q)) \ll x^{-\epsilon} \frac{N_1 N_2 N_3}{q}, $$

where the first quantity does not depend on $b$ or on the choice of representative. Inserting this asymptotic and using crude bounds on $\alpha$ (see Lemma 8 of this previous post) we conclude (4) as required (after modifying $\epsilon$ slightly).
It remains to establish Theorem 3. This is done by a set of tools similar to that used to control the Type I and Type II sums:
- (i) completion of sums;
- (ii) the Weil conjectures and bounds on Ramanujan sums;
- (iii) factorisation of smooth moduli;
- (iv) the Cauchy-Schwarz and triangle inequalities (Weyl differencing).
The specifics are slightly different though. For the Type I and Type II sums, it was the classical Weil bound on Kloosterman sums that was the key source of power saving; Ramanujan sums only played a minor role, controlling a secondary error term. For the Type III sums, one needs a significantly deeper consequence of the Weil conjectures, namely the estimate of Bombieri and Birch on a three-dimensional variant of a Kloosterman sum. Furthermore, the Ramanujan sums – which are a rare example of sums that actually exhibit better than square root cancellation, thus going beyond even what the Weil conjectures can offer – make a crucial appearance, when combined with the factorisation of the smooth modulus (this new argument is arguably the most original and interesting contribution of Zhang).
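The better-than-square-root cancellation of Ramanujan sums is easy to see numerically. Here is a small sketch of my own (not from the post), using Hölder's classical identity as a cross-check:

```python
# Ramanujan sums c_q(n) = sum of e(an/q) over primitive residues a (q).
# Hoelder's identity: c_q(n) = mu(q/g) * phi(q) / phi(q/g) with g = gcd(n,q).
# For (n,q) = 1 the phi(q) unit vectors collapse all the way to mu(q),
# far below the "square-root barrier" sqrt(phi(q)).
from math import gcd
from cmath import exp, pi

def ramanujan_sum(q, n):
    return sum(exp(2j * pi * a * n / q) for a in range(1, q + 1) if gcd(a, q) == 1)

q = 3 * 5 * 7 * 11  # 1155, squarefree; phi(q) = 480
print(round(ramanujan_sum(q, 1).real))   # mu(1155) = 1, from 480 summands
print(round(ramanujan_sum(q, 35).real))  # g = 35: mu(33)*phi(1155)/phi(33) = 24
```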
This is one of the continuations of the online reading seminar of Zhang’s paper for the polymath8 project. (There are two other continuations; this previous post, which deals with the combinatorial aspects of the second part of Zhang’s paper, and a post to come that covers the Type III sums.) The main purpose of this post is to present (and hopefully, to improve upon) the treatment of two of the three key estimates in Zhang’s paper, namely the Type I and Type II estimates.
The main estimate was already stated as Theorem 16 in the previous post, but we quickly recall the relevant definitions here. As in other posts, we always take $x$ to be a parameter going off to infinity, with the usual asymptotic notation $O()$, $o()$, $\lessapprox$ associated to this parameter.
Definition 1 (Coefficient sequences) A coefficient sequence is a finitely supported sequence $\alpha: \mathbb{N} \rightarrow \mathbb{R}$ that obeys the bounds

$$ |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (1) $$

for all $n$, where $\tau$ is the divisor function.

- (i) If $\alpha$ is a coefficient sequence and $a\ (q)$ is a primitive residue class, the (signed) discrepancy $\Delta(\alpha; a\ (q))$ of $\alpha$ in the sequence is defined to be the quantity

$$ \Delta(\alpha;\ a\ (q)) := \sum_{n \equiv a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{(n,q)=1} \alpha(n). $$

- (ii) A coefficient sequence $\alpha$ is said to be at scale $N$ for some $N \geq 1$ if it is supported on an interval of the form $[(1-o(1)) N, (1+o(1)) N]$.
- (iii) A coefficient sequence $\alpha$ at scale $N$ is said to obey the Siegel-Walfisz theorem if one has

$$ |\Delta(\alpha 1_{(\cdot, q)=1};\ a\ (r))| \ll \tau(qr)^{O(1)}\, N \log^{-A} x $$

for any $q, r \geq 1$, any fixed $A$, and any primitive residue class $a\ (r)$.
- (iv) A coefficient sequence $\alpha$ at scale $N$ is said to be smooth if it takes the form $\alpha(n) = \psi(n/N)$ for some smooth function $\psi: \mathbb{R} \rightarrow \mathbb{C}$ supported on an interval of size $O(1)$ obeying the derivative bounds

$$ \psi^{(j)}(t) = O(\log^{O(1)} x) $$

for all fixed $j \geq 0$ (note that the implied constant in the $O()$ notation may depend on $j$).
In Lemma 8 of this previous post we established a collection of "crude estimates" which assert, roughly speaking, that for the purposes of averaged estimates one may ignore the factor $\tau^{O(1)}(n) \log^{O(1)}(x)$ in (1) and pretend that $\alpha$ was in fact bounded in magnitude by $1$. We shall rely frequently on these "crude estimates" without further citation to that precise lemma.
For any $I \subset \mathbb{R}$, let $\mathcal{S}_I$ denote the square-free numbers whose prime factors lie in $I$.
Definition 2 (Singleton congruence class system) Let $I \subset \mathbb{R}$. A singleton congruence class system on $I$ is a collection $\mathcal{C} = \{a_q\}_{q \in \mathcal{S}_I}$ of primitive residue classes $a_q \in (\mathbb{Z}/q\mathbb{Z})^\times$ for each $q \in \mathcal{S}_I$, obeying the Chinese remainder theorem property

$$ a_{qr}\ (qr) = (a_q\ (q)) \cap (a_r\ (r)) $$

whenever $q, r \in \mathcal{S}_I$ are coprime. We say that such a system $\mathcal{C}$ has controlled multiplicity if, for any fixed $A \geq 1$ and any congruence class $a\ (r)$ with $r \in \mathcal{S}_I$, the number of $q \in \mathcal{S}_I$ with $q \leq x^A$ and $a_q = a\ (r)$ exceeds its expected value by a factor of at most $\tau(r)^{O(1)} \log^{O(1)} x$.
The main result of this post is then the following:

Theorem 3 (Type I/II estimate) Let $\varpi, \delta, \sigma > 0$ be fixed quantities obeying two explicit constraints (7), (8) (linear inequalities in $\varpi, \delta, \sigma$ given below the fold), and let $\alpha, \beta$ be coefficient sequences at scales $M, N$ respectively with $MN \sim x$ and $x^{1/2-\sigma} \lessapprox N \lessapprox x^{1/2}$, with $\beta$ obeying a Siegel-Walfisz theorem. Then for any $I \subset [1, x^\delta]$ and any singleton congruence class system $\{a_q\}_{q \in \mathcal{S}_I}$ with controlled multiplicity we have

$$ \sum_{q \in \mathcal{S}_I:\ q < x^{1/2+2\varpi}} |\Delta(\alpha \star \beta;\ a_q)| \ll x \log^{-A} x $$

for any fixed $A > 0$.
The proof of this theorem relies on five basic tools:
- (i) the Bombieri-Vinogradov theorem;
- (ii) completion of sums;
- (iii) the Weil conjectures;
- (iv) factorisation of smooth moduli; and
- (v) the Cauchy-Schwarz and triangle inequalities (Weyl differencing and the dispersion method).
For the purposes of numerics, it is the interplay between (ii), (iii), and (v) that drives the final conditions (7), (8). The Weil conjectures are the primary source of power savings ($x^{-\epsilon}$ for some fixed $\epsilon > 0$) in the argument, but they need to overcome power losses coming from completion of sums, and also each use of Cauchy-Schwarz tends to halve any power savings present in one's estimates. Naively, one could thus expect to get better estimates by relying more on the Weil conjectures, and less on completion of sums and on Cauchy-Schwarz.
The purpose of this post is to isolate a combinatorial optimisation problem regarding subset sums; any improvement upon the current known bounds for this problem would lead to numerical improvements for the quantities pursued in the Polymath8 project. (UPDATE: Unfortunately no purely combinatorial improvement is possible, see comments.) We will also record the number-theoretic details of how this combinatorial problem is used in Zhang’s argument establishing bounded prime gaps.
First, some (rough) motivational background, omitting all the number-theoretic details and focusing on the combinatorics. (But readers who just want to see the combinatorial problem can skip the motivation and jump ahead to Lemma 5.) As part of the Polymath8 project we are trying to establish a certain estimate called $MPZ[\varpi,\delta]$ for as wide a range of $(\varpi, \delta)$ as possible. Currently the best result we have is:

Theorem 1 (Zhang's theorem, numerically optimised) $MPZ[\varpi,\delta]$ holds whenever $\varpi, \delta > 0$ obey a certain explicit linear inequality, which we will refer to as (2).
Enlarging this region would lead to a better value of certain parameters $k_0$, $H$, which in turn control the best bound on asymptotic gaps between consecutive primes. See this previous post for more discussion of this. At present, the best value of $k_0$ is obtained by taking $(\varpi, \delta)$ sufficiently close to a particular boundary point of this region, so improving Theorem 1 in the neighbourhood of this value is particularly desirable.
I’ll state exactly what is below the fold. For now, suffice to say that it involves a certain number-theoretic function, the von Mangoldt function
. To prove the theorem, the first step is to use a certain identity (the Heath-Brown identity) to decompose
into a lot of pieces, which take the form
for some bounded (in Zhang’s paper
never exceeds
) and various weights
supported at various scales
that multiply up to approximately
:
We can write , thus ignoring negligible errors,
are non-negative real numbers that add up to
:
A key technical feature of the Heath-Brown identity is that the weight associated to sufficiently large values of
(e.g.
) are “smooth” in a certain sense; this will be detailed below the fold.
The operation $\star$ is Dirichlet convolution, which is commutative and associative. We can thus regroup the convolution (1) in a number of ways. For instance, given any partition $\{1, \ldots, n\} = S \cup T$ into disjoint sets $S, T$, we can rewrite (1) as

$$ \alpha_S \star \alpha_T $$

where $\alpha_S$ is the convolution of those $\alpha_i$ with $i \in S$, and similarly for $\alpha_T$.
Zhang’s argument splits into two major pieces, in which certain classes of (1) are established. Cheating a little bit, the following three results are established:
Theorem 2 (Type 0 estimate, informal version) The term (1) gives an acceptable contribution to
whenever
for some
.
Theorem 3 (Type I/II estimate, informal version) The term (1) gives an acceptable contribution to
whenever one can find a partition
such that
where
is a quantity such that
Theorem 4 (Type III estimate, informal version) The term (1) gives an acceptable contribution to
whenever one can find
with distinct
with
and
The above assertions are oversimplifications; there are some additional minor smallness hypotheses on that are needed but at the current (small) values of
under consideration they are not relevant and so will be omitted.
The deduction of Theorem 1 from Theorems 2, 3, 4 is then accomplished from the following, purely combinatorial, lemma:
Lemma 5 (Subset sum lemma) Let $\varpi, \delta > 0$ obey the constraint (2) of Theorem 1. Let $t_1, \ldots, t_n$ be non-negative reals such that

$$ t_1 + \ldots + t_n = 1. $$

Then at least one of the following statements hold:

- (Type 0) There is $i$ such that $t_i \geq \frac{1}{2} + \sigma$.
- (Type I/II) There is a partition $\{1,\ldots,n\} = S \cup T$ such that

$$ \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \frac{1}{2} $$

where $\sigma$ is a quantity obeying the constraint (3) of Theorem 3.
- (Type III) One can find distinct indices $i, j, k$ with $t_i \leq t_j \leq t_k$ obeying the thresholds of Theorem 4.
The purely combinatorial question is whether the hypothesis (2) can be relaxed here to a weaker condition. This would allow us to improve the ranges for Theorem 1 (and hence for the values of $k_0$ and $H$ alluded to earlier) without needing further improvement on Theorems 2, 3, 4 (although such improvement is also going to be a focus of Polymath8 investigations in the future).
Let us review how this lemma is currently proven. The key sublemma is the following:
Lemma 6 Let $\frac{1}{10} < \sigma < \frac{1}{2}$, and let $t_1, \ldots, t_n$ be non-negative numbers summing to $1$. Then one of the following three statements hold:

- (Type 0) There is a $t_i$ with $t_i \geq \frac{1}{2} + \sigma$.
- (Type I/II) There is a partition $\{1,\ldots,n\} = S \cup T$ such that

$$ \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \frac{1}{2}. $$

- (Type III) There exist distinct $i, j, k$ with $2\sigma \leq t_i \leq t_j \leq t_k$ and

$$ t_i + t_j,\ t_i + t_k,\ t_j + t_k \geq \frac{1}{2} + \sigma. $$
Proof: Suppose Type I/II never occurs, then every partial sum $\sum_{i \in S} t_i$ is either "small" in the sense that it is less than or equal to $\frac{1}{2} - \sigma$, or "large" in the sense that it is greater than or equal to $\frac{1}{2} + \sigma$, since otherwise we would be in the Type I/II case either with $S$ as is and $T$ the complement of $S$, or vice versa.

Call a summand $t_i$ "powerless" if it cannot be used to turn a small partial sum into a large partial sum, thus there are no $S \subset \{1,\ldots,n\} \backslash \{i\}$ such that $\sum_{j \in S} t_j$ is small and $t_i + \sum_{j \in S} t_j$ is large. We then split $\{1,\ldots,n\} = A \cup B$, where $A$ are the powerless elements and $B$ are the powerful elements.

By induction we see that if $S \subset A$ and $\sum_{i \in T} t_i$ is small, then $\sum_{i \in S \cup T} t_i$ is also small. Thus every sum of a small partial sum with powerless summands remains at most $\frac{1}{2} - \sigma$. Since a powerful element must be able to convert a small sum to a large sum (in fact it must be able to convert a small sum of powerful summands to a large sum, by stripping out the powerless summands), we conclude that every powerful element has size greater than $2\sigma$. We may assume we are not in Type 0, then every powerful summand is at least $2\sigma$ and at most $\frac{1}{2} - \sigma$. In particular, there have to be at least three powerful summands, otherwise $t_1 + \ldots + t_n$ cannot be as large as $1$ (a small sum together with at most one further powerful summand is at most $2(\frac{1}{2} - \sigma) < 1$). As $\sigma > \frac{1}{10}$, we have $4\sigma > \frac{1}{2} - \sigma$, and we conclude that the sum of any two powerful summands is large (which, incidentally, shows that there are exactly three powerful summands). Taking $t_i \leq t_j \leq t_k$ to be the three powerful summands in increasing order we land in Type III. $\Box$
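To make the case analysis concrete, here is a small brute-force classifier of my own (purely illustrative; the proof above is the actual argument), which reports which conclusion of Lemma 6 holds for a given tuple:

```python
# Brute-force classification of (t_1,...,t_n) per Lemma 6 (illustrative sketch).
from itertools import combinations

def classify(ts, sigma):
    n = len(ts)
    if any(t >= 0.5 + sigma for t in ts):
        return "Type 0"
    # look for a partial sum in the Type I/II window (1/2 - sigma, 1/2]
    for r in range(n + 1):
        for S in combinations(range(n), r):
            s = sum(ts[i] for i in S)
            if 0.5 - sigma < s <= 0.5:
                return ("Type I/II", S)
    # otherwise Lemma 6 promises three "powerful" summands; since a <= b <= c,
    # checking the smallest pairwise sum a + b suffices for all three pairs
    for i, j, k in combinations(range(n), 3):
        a, b, c = sorted((ts[i], ts[j], ts[k]))
        if a >= 2 * sigma and a + b >= 0.5 + sigma:
            return ("Type III", (i, j, k))
    return "no conclusion (hypotheses violated?)"

print(classify([0.34, 0.34, 0.32], 0.11))      # lands in Type III
print(classify([0.45, 0.3, 0.15, 0.1], 0.11))  # a partial sum hits the window
```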
Now we see how Lemma 6 implies Lemma 5. Let $\varpi, \delta$ be as in Lemma 5. We take $\sigma$ almost as large as we can for the Type I/II case, thus we set $\sigma$ equal to the upper limit permitted by (3), less $\epsilon$, for some sufficiently small $\epsilon > 0$. We observe from (2) that we certainly have $\sigma > \frac{1}{10}$ and $\sigma < \frac{1}{2}$ with plenty of room to spare. We then apply Lemma 6. The Type 0 case of that lemma then implies the Type 0 case of Lemma 5, while the Type I/II case of Lemma 6 also implies the Type I/II case of Lemma 5. Finally, suppose that we are in the Type III case of Lemma 6. Since the three powerful summands $t_i \leq t_j \leq t_k$ are each at least $2\sigma$ and have pairwise sums at least $\frac{1}{2} + \sigma$, we thus have lower bounds on $t_i, t_j, t_k$ and their pairwise sums in terms of $\sigma$, and so we will be done if these bounds meet the thresholds of Theorem 4. Inserting (3) and taking $\epsilon$ small enough, it suffices to verify an explicit inequality in $\varpi$ and $\delta$; but after some computation this is equivalent to (2). $\Box$
It seems that there is some slack in this computation; some of the conclusions of the Type III case of Lemma 5, in particular, ended up being "wasted", and it is possible that one did not fully exploit all the partial sums that could be used to create a Type I/II situation. So there may be a way to make improvements through purely combinatorial arguments. (UPDATE: As it turns out, this is sadly not the case: consideration of a suitable extremal choice of the $t_i$ shows that one cannot obtain any further improvement without actually improving the Type I/II and Type III analysis.)
A technical remark: for the application to Theorem 1, it is possible to enforce a bound on the number of summands $n$ in Lemma 5. More precisely, we may assume that $n$ is an even number of size at most $2K$ for any natural number $K$ we please, at the cost of adding the additional constraint $t_i \geq \frac{1}{K}$ to the Type III conclusion. Since $t_i$ is already at least $2\sigma$, which is at least $\frac{1}{5}$, one can safely take $K = 5$, so $n$ can be taken to be an even number of size at most $10$, which in principle makes the problem of optimising Lemma 5 a fixed linear programming problem. (Zhang takes $K = 10$, but this appears to be overkill. On the other hand, $K$ does not appear to be a parameter that overly influences the final numerical bounds.)
Below the fold I give the number-theoretic details of the combinatorial aspects of Zhang’s argument that correspond to the combinatorial problem described above.
This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project to improve the various parameters in Zhang’s proof that bounded gaps between primes occur infinitely often. Given that the comments on that page are getting quite lengthy, this is also a good opportunity to “roll over” that thread.
We will continue the notation from the previous post, including the concept of an admissible tuple, the use of an asymptotic parameter $x$ going to infinity, and a quantity $w$ depending on $x$ that goes to infinity sufficiently slowly with $x$, and $W := \prod_{p < w} p$ (the $W$-trick).
The objective of this portion of the Polymath8 project is to make as efficient as possible the connection between two types of results, which we call $DHL[k_0,2]$ and $MPZ[\varpi]$. Let us first state $DHL[k_0,2]$, which has an integer parameter $k_0 \geq 2$:

Conjecture 1 ($DHL[k_0,2]$) Let $\mathcal{H}$ be a fixed admissible $k_0$-tuple. Then there are infinitely many translates $n + \mathcal{H}$ of $\mathcal{H}$ which contain at least two primes.

Zhang was the first to prove a result of this type with $k_0 = 3{,}500{,}000$. Since then the value of $k_0$ has been lowered substantially; at this time of writing, the current record continues to improve.
There are two basic ways known currently to attain this conjecture. The first is to use the Elliott-Halberstam conjecture $EH[\theta]$ for some $\frac{1}{2} < \theta < 1$:

Conjecture 2 ($EH[\theta]$) One has

$$ \sum_{q \leq x^\theta} \sup_{a\ (q) \text{ primitive}} |\Delta(\Lambda 1_{[x,2x]};\ a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (1) $$

for all fixed $A > 0$. Here we use the abbreviation $\Delta(f; a\ (q))$ for

$$ \sum_{n \equiv a\ (q)} f(n) - \frac{1}{\phi(q)} \sum_{(n,q)=1} f(n). $$

Here of course $\Lambda$ is the von Mangoldt function and $\phi$ the Euler totient function. It is conjectured that $EH[\theta]$ holds for all $0 < \theta < 1$, but this is currently only known for $0 < \theta < \frac{1}{2}$, an important result known as the Bombieri-Vinogradov theorem.
In a breakthrough paper, Goldston, Yildirim, and Pintz established an implication of the form

$$ EH[\theta] \implies DHL[k_0,2] $$

for any $\frac{1}{2} < \theta < 1$, where $k_0$ depends on $\theta$. This deduction was very recently optimised by Farkas, Pintz, and Revesz and also independently in the comments to the previous blog post, leading to the following implication:

Theorem 3 (EH implies DHL) Let $\frac{1}{2} < \theta < 1$ be a real number, and let $k_0 \geq 2$ be an integer obeying the inequality

$$ 2\theta > \frac{j_{k_0-2}^2}{k_0(k_0-1)} \ \ \ \ \ (2) $$

where $j_{k_0-2}$ is the first positive zero of the Bessel function $J_{k_0-2}$. Then $EH[\theta]$ implies $DHL[k_0,2]$.

Note that the right-hand side of (2) is larger than $1$, but tends asymptotically to $1$ as $k_0 \rightarrow \infty$. We give an alternate proof of Theorem 3 below the fold.
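Assuming scipy is available, the criterion (2) is easy to explore numerically; the following sketch (mine, not from the post) finds the smallest $k_0$ permitted by (2) for a given $\theta$:

```python
# Find the smallest k0 >= 2 with 2*theta > j_{k0-2}^2 / (k0*(k0-1)),
# where j_m is the first positive zero of the Bessel function J_m.
from scipy.special import jn_zeros  # assumes scipy is available

def smallest_k0(theta, kmax=10000):
    for k0 in range(2, kmax):
        j = jn_zeros(k0 - 2, 1)[0]  # first zero of J_{k0-2}
        if 2 * theta > j * j / (k0 * (k0 - 1)):
            return k0
    return None

print(smallest_k0(0.99))  # theta near 1 gives a small k0
print(smallest_k0(0.55))  # theta barely above 1/2 forces k0 to be large
```

This makes the remark after Theorem 3 visible: as $\theta$ decreases towards $\frac{1}{2}$, the right-hand side of (2) must be squeezed towards $1$, which forces $k_0$ to grow rapidly.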
Implications of the form Theorem 3 were modified by Motohashi and Pintz, which in our notation replaces $EH[\theta]$ by an easier conjecture $MPZ[\varpi]$ for some $0 < \varpi < \frac{1}{4}$ and $\theta = \frac{1}{2} + 2\varpi$, at the cost of degrading the sufficient condition (2) slightly. In our notation, this conjecture takes the following form for each choice of parameter $\varpi$:

Conjecture 4 ($MPZ[\varpi]$) Let $\mathcal{H}$ be a fixed $k_0$-tuple (not necessarily admissible) for some fixed $k_0 \geq 2$, and let $b\ (W)$ be a primitive residue class. Then

$$ \sum_{q \in \mathcal{S}_I:\ q < x^{1/2+2\varpi}} \sum_{a \in C(q)} |\Delta(\Lambda 1_{[x,2x] \cap b\ (W)};\ a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (3) $$

for any fixed $A > 0$, where $I := (w, x^\varpi)$, $\mathcal{S}_I$ are the square-free integers whose prime factors lie in $I$, and $\Delta(\Lambda 1_{[x,2x] \cap b\ (W)};\ a\ (q))$ is the quantity

$$ \sum_{x \leq n \leq 2x:\ n \equiv b\ (W),\ n \equiv a\ (q)} \Lambda(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x:\ n \equiv b\ (W),\ (n,q)=1} \Lambda(n) $$

and $C(q)$ is the set of congruence classes

$$ C(q) := \{ a \in (\mathbb{Z}/q\mathbb{Z})^\times :\ P(a) = 0\ (q) \} $$

and $P$ is the polynomial

$$ P(n) := \prod_{h \in \mathcal{H}} (n + h). $$
This is a weakened version of the Elliott-Halberstam conjecture:

Proposition 5 (EH implies MPZ) Let $0 < \varpi < \frac{1}{4}$ and $\epsilon > 0$ be fixed. Then $EH[\frac{1}{2} + 2\varpi + \epsilon]$ implies $MPZ[\varpi]$ for any such $\epsilon$. (In abbreviated form: $EH[\frac{1}{2} + 2\varpi + \epsilon]$ implies $MPZ[\varpi]$.)

In particular, since $EH[\theta]$ is conjecturally true for all $0 < \theta < 1$, we conjecture $MPZ[\varpi]$ to be true for all $0 < \varpi < \frac{1}{4}$ and all choices of fixed tuple $\mathcal{H}$.
Proof: Define

$$ E(q) := \sup_{a\ (q) \text{ primitive}} |\Delta(\Lambda 1_{[x,2x]};\ a\ (q))|, $$

then the hypothesis $EH[\frac{1}{2} + 2\varpi + \epsilon]$ (applied to the moduli $qW$ and to $W$, and then subtracting) tells us that

$$ \sum_{q \in \mathcal{S}_I:\ q < x^{1/2+2\varpi}} E(qW) \ll x \log^{-A} x $$

for any fixed $A$. From the Chinese remainder theorem and the Siegel-Walfisz theorem we have

$$ |\Delta(\Lambda 1_{[x,2x] \cap b\ (W)};\ a\ (q))| \ll E(qW) + \frac{x}{\phi(W)} \log^{-A} x $$

for any $a$ coprime to $q$ (and in particular for $a \in C(q)$). Since $|C(q)| \leq k_0^{\Omega(q)}$, where $\Omega(q)$ is the number of prime divisors of $q$, we can thus bound the left-hand side of (3) by

$$ \sum_{q \in \mathcal{S}_I:\ q < x^{1/2+2\varpi}} k_0^{\Omega(q)} \left( E(qW) + \frac{x}{\phi(W)} \log^{-A} x \right). $$

The contribution of the second term is acceptable by standard estimates (see Proposition 8 below). Using the very crude bound

$$ E(qW) \ll \frac{x}{q} \log x $$

and standard estimates we also have

$$ \sum_{q \in \mathcal{S}_I:\ q < x^{1/2+2\varpi}} k_0^{2\Omega(q)} E(qW) \ll x \log^{O(1)} x, $$

and the claim now follows from the Cauchy-Schwarz inequality. $\Box$
In practice, the conjecture $MPZ[\varpi]$ is easier to prove than $EH[\frac{1}{2} + 2\varpi]$ due to the restriction of the residue classes $a$ to $C(q)$, and also the restriction of the modulus $q$ to $x^\varpi$-smooth numbers. Zhang proved $MPZ[\varpi]$ for any $0 < \varpi \leq \frac{1}{1168}$. More recently, our Polymath8 group has analysed Zhang's argument (using in part a corrected version of the analysis of a recent preprint of Pintz) to obtain $MPZ[\varpi]$ whenever $\varpi > 0$ is such that an improved explicit inequality (4) holds.

The work of Motohashi and Pintz, and later Zhang, implicitly describe arguments that allow one to deduce $DHL[k_0,2]$ from $MPZ[\varpi]$ provided that $k_0$ is sufficiently large depending on $\varpi$. The best implication of this sort that we have been able to verify thus far is the following result, established in the previous post:
Theorem 6 (MPZ implies DHL) Let $0 < \varpi < \frac{1}{4}$ be fixed, and let $k_0 \geq 2$ be an integer obeying the constraint

$$ 1 + 4\varpi > \frac{j_{k_0-2}^2}{k_0(k_0-1)} (1 + \kappa) \ \ \ \ \ (5) $$

where $\kappa$ is an explicit (but complicated) quantity depending on $\varpi$ and $k_0$, whose precise form we give below the fold. Then $MPZ[\varpi]$ implies $DHL[k_0,2]$.

This complicated quantity $\kappa$ is small but not negligible in practice. It is unlikely to be optimal; the work of Motohashi-Pintz and Pintz suggests that it can essentially be replaced by a much smaller quantity, but currently we are unable to verify this claim. One of the aims of this post is to encourage further discussion as to how to improve the $\kappa$ term in results such as Theorem 6.
We remark that as (5) is an open condition, it is unaffected by infinitesimal modifications to $\varpi$, and so we do not ascribe much importance to such modifications (e.g. replacing $\varpi$ by $\varpi - \epsilon$ for some arbitrarily small $\epsilon > 0$).
The known deductions of $DHL[k_0,2]$ from claims such as $EH[\theta]$ or $MPZ[\varpi]$ rely on the following elementary observation of Goldston, Pintz, and Yildirim (essentially a weighted pigeonhole principle), which we have placed in "$W$-tricked form":

Lemma 7 (Criterion for DHL) Let $k_0 \geq 2$. Suppose that for each fixed admissible $k_0$-tuple $\mathcal{H}$ and each congruence class $b\ (W)$ such that $b + h$ is coprime to $W$ for all $h \in \mathcal{H}$, one can find a non-negative weight function $\nu: \mathbb{N} \rightarrow \mathbb{R}^+$, fixed quantities $\alpha, \beta > 0$, a quantity $A > 0$, and a fixed positive power $R$ of $x$ such that one has the upper bound

$$ \sum_{x \leq n \leq 2x:\ n \equiv b\ (W)} \nu(n) \leq (\alpha + o(1)) A \frac{x}{W}, \ \ \ \ \ (6) $$

the lower bound

$$ \sum_{x \leq n \leq 2x:\ n \equiv b\ (W)} \nu(n) \theta(n + h) \geq (\beta - o(1)) A \frac{x}{W} \log R \ \ \ \ \ (7) $$

for all $h \in \mathcal{H}$, and the key inequality

$$ \frac{\log R}{\log x} > \frac{\alpha}{k_0 \beta} \ \ \ \ \ (8) $$

holds. Then $DHL[k_0,2]$ holds. Here $\theta(n)$ is defined to equal $\log n$ when $n$ is prime and $0$ otherwise.
Proof: Consider the quantity

$$ \sum_{x \leq n \leq 2x:\ n \equiv b\ (W)} \nu(n) \left( \sum_{h \in \mathcal{H}} \theta(n+h) - \log 3x \right). \ \ \ \ \ (9) $$

By (6), (7), this quantity is at least

$$ k_0 (\beta - o(1)) A \frac{x}{W} \log R - (\alpha + o(1)) A \frac{x}{W} \log 3x. $$

By (8), this expression is positive for all sufficiently large $x$. On the other hand, (9) can only be positive if at least one summand is positive, which only can happen when $n + \mathcal{H}$ contains at least two primes for some $x \leq n \leq 2x$ with $n \equiv b\ (W)$ (as each $\theta(n+h)$ is at most $\log 3x$). Letting $x \rightarrow \infty$ we obtain $DHL[k_0,2]$ as claimed. $\Box$
In practice, the quantity $R$ (referred to as the sieve level) is a power of $x$ such as $x^{\theta/2}$ or $x^{1/4 + \varpi}$, and reflects the strength of the distribution hypothesis $EH[\theta]$ or $MPZ[\varpi]$ that is available; the quantity $R$ will also be a key parameter in the definition of the sieve weight $\nu$. The factor $A$ reflects the order of magnitude of the expected density of $\nu$ in the residue class $b\ (W)$; it could be absorbed into the sieve weight $\nu$ by dividing that weight by $A$, but it is convenient to not enforce such a normalisation so as not to clutter up the formulae. In practice, $A$ will be some combination of $\frac{W}{\phi(W)}$ and $\log R$.
Once one has decided to rely on Lemma 7, the next main task is to select a good weight $\nu$ for which the ratio $\alpha/\beta$ is as small as possible (and for which the sieve level $R$ is as large as possible). To ensure non-negativity, we use the Selberg sieve

$$ \nu(n) := \lambda(n)^2, $$

where $\lambda(n)$ takes the form

$$ \lambda(n) := \sum_{d \in \mathcal{S}_I:\ d | P(n)} \lambda_d $$

for some weights $\lambda_d \in \mathbb{R}$ vanishing for $d > R$ that are to be chosen, where $I$ is an interval and $P$ is the polynomial $P(n) := \prod_{h \in \mathcal{H}} (n+h)$. If the distribution hypothesis is $EH[\theta]$, one takes $R := x^{\theta/2}$ and $I := (w, R)$; if the distribution hypothesis is instead $MPZ[\varpi]$, one takes $R := x^{1/4+\varpi}$ and $I := (w, x^\varpi)$.
One has a useful amount of flexibility in selecting the weights $\lambda_d$ for the Selberg sieve. In the original work of Goldston, Pintz, and Yildirim, as well as in the subsequent paper of Zhang, the choice

$$ \lambda_d := \mu(d) \left( \log \frac{R}{d} \right)^{k_0 + \ell} $$

is used for some additional parameter $\ell > 0$ to be optimised over. More generally, one can take

$$ \lambda_d := \mu(d) f\left( \frac{\log d}{\log R} \right) $$

for some suitable (in particular, sufficiently smooth) cutoff function $f$. We will refer to this choice of sieve weights as the "analytic Selberg sieve"; this is the choice used in the analysis in the previous post.
However, there is a slight variant choice of sieve weights that one can use, which I will call the "elementary Selberg sieve", and it takes the form

$$ \lambda_d := \mu(d) \frac{d}{\phi_{k_0}(d)} \sum_{q \in \mathcal{S}_I:\ q < R/d,\ (q,d)=1} \frac{1}{\phi_{k_0}(q)} f'\left( \frac{\log dq}{\log R} \right) $$

for a sufficiently smooth function $f$, where

$$ \phi_{k_0}(q) := \prod_{p | q} (p - k_0) $$

for $q \in \mathcal{S}_I$ is a $k_0$-variant of the Euler totient function, and $\frac{d}{\phi_{k_0}(d)}$ is a $k_0$-variant of the function $\frac{d}{\phi(d)}$. (The derivative on the $f$ cutoff is convenient for computations, as will be made clearer later in this post.) This choice of weights $\lambda_d$ may seem somewhat arbitrary, but it arises naturally when considering how to optimise the quadratic form

$$ \sum_{d, d' \in \mathcal{S}_I} \lambda_d \lambda_{d'} \frac{k_0^{\Omega([d,d'])}}{[d,d']} $$

(which arises naturally in the estimation of $\alpha$ in (6)) subject to a fixed value of a certain linear functional of the $\lambda_d$ (which morally is associated to the estimation of $\beta$ in (7)); this is discussed in any sieve theory text as part of the general theory of the Selberg sieve, e.g. Friedlander-Iwaniec.
The use of the elementary Selberg sieve for the bounded prime gaps problem was studied by Motohashi and Pintz. Their arguments give an alternate derivation of $DHL[k_0,2]$ from $MPZ[\varpi]$ for $k_0$ sufficiently large depending on $\varpi$, although unfortunately we were not able to confirm some of their calculations regarding the precise dependence of $k_0$ on $\varpi$, and in particular we have not yet been able to improve upon the specific criterion in Theorem 6 using the elementary sieve. However it is quite plausible that such improvements could become available with additional arguments.
Below the fold we describe how the elementary Selberg sieve can be used to reprove Theorem 3, and discuss how it could potentially be used to improve upon Theorem 6. (But the elementary Selberg sieve and the analytic Selberg sieve are in any event closely related; see the appendix of this paper of mine with Ben Green for some further discussion.) For the purposes of polymath8, either developing the elementary Selberg sieve or continuing the analysis of the analytic Selberg sieve from the previous post would be a relevant topic of conversation in the comments to this post.
In a recent paper, Yitang Zhang has proven the following theorem:

Theorem 1 (Bounded gaps between primes) There exists a natural number $H$ such that there are infinitely many pairs of distinct primes $p, q$ with $|p - q| \leq H$.

Zhang obtained the explicit value of $70{,}000{,}000$ for $H$. A polymath project has been proposed to lower this value and also to improve the understanding of Zhang's results; as of this time of writing, the current "world record" is $H = 4{,}802{,}222$ (and the link given should stay updated with the most recent progress).
Zhang’s argument naturally divides into three steps, which we describe in reverse order. The last step, which is the most elementary, is to deduce the above theorem from the following weak version of the Dickson-Hardy-Littlewood (DHL) conjecture for some :
Theorem 2 (
) Let
be an admissible
-tuple, that is to say a tuple of
distinct integers which avoids at least one residue class mod
for every prime
. Then there are infinitely many translates of
that contain at least two primes.
Zhang obtained $DHL[k_0,2]$ for $k_0 = 3{,}500{,}000$. The current best value of $k_0$ is $341{,}640$, as discussed in this previous blog post. To get from $DHL[k_0,2]$ to Theorem 1, one has to exhibit an admissible $k_0$-tuple of diameter at most $H$. For instance, with $k_0 = 341{,}640$, the narrowest admissible $k_0$-tuple that we can construct has diameter $4{,}802{,}222$, which explains the current world record. There is an active discussion on trying to improve the constructions of admissible tuples at this blog post; it is conceivable that some combination of computer search and clever combinatorial constructions could obtain slightly better values of $H$ for a given value of $k_0$. The relationship between $H$ and $k_0$ is approximately of the form $H \approx k_0 \log k_0$ (and a classical estimate of Montgomery and Vaughan tells us that we cannot make $H$ much narrower than $\frac{1}{2} k_0 \log k_0$, see this previous post for some related discussion).
The second step in Zhang’s argument, which is somewhat less elementary (relying primarily on the sieve theory of Goldston, Yildirim, Pintz, and Motohashi), is to deduce from a certain conjecture
for some
. Here is one formulation of the conjecture, more or less as (implicitly) stated in Zhang’s paper:
Conjecture 3 (
) Let
be an admissible tuple, let
be an element of
, let
be a large parameter, and define
for any natural number
, and
for any function
. Let
equal
when
is a prime
, and
otherwise. Then one has
for any fixed
.
Note that this is slightly different from the formulation of $MPZ[\varpi]$ in the previous post; I have reverted to Zhang's formulation here as the primary purpose of this post is to read through Zhang's paper. However, I have distinguished two separate parameters here, $\varpi$ and $\delta$, instead of one, as it appears that there is some room to optimise by making these two parameters different.
In the previous post, I described how one can deduce $DHL[k_0,2]$ from $MPZ[\varpi]$. Ignoring an exponentially small error term, it turns out that one can deduce $DHL[k_0,2]$ from $MPZ[\varpi]$ whenever one can find a smooth function $f: [0,1] \rightarrow \mathbb{R}$ vanishing to order at least $k_0$ at $0$ such that a certain ratio of integrals of $f$ is dominated by $1 + 4\varpi$. By selecting $f(t) := t^{k_0+\ell}$ for a real parameter $\ell > 0$ to optimise over, and ignoring the technical error term alluded to previously (which is the only quantity here that depends on $\delta$), this gives $DHL[k_0,2]$ from $MPZ[\varpi,\delta]$ whenever

$$ 1 + 4\varpi > \left(1 + \frac{1}{2\ell+1}\right) \left(1 + \frac{2\ell+1}{k_0}\right). \ \ \ \ \ (1) $$

It may be possible to do better than this by choosing smarter choices for $f$, or performing some sort of numerical calculus of variations or spectral theory; people interested in this topic are invited to discuss it in the previous post.
The final, and deepest, part of Zhang's work is the following theorem (Theorem 2 from Zhang's paper, whose proof occupies Sections 6-13 of that paper, and is about 32 pages long):

Theorem 4 (Zhang) $MPZ[\varpi,\delta]$ holds for $\varpi = \delta = \frac{1}{1168}$.

The significance of the fraction $\frac{1}{1168}$ is that Zhang's argument proceeds for a general choice of $\varpi, \delta$, but ultimately the argument only closes if one has a certain explicit inequality in these parameters (see page 53 of Zhang) which, upon setting the two parameters equal, is equivalent to $\varpi = \delta \leq \frac{1}{1168}$. Plugging in this choice of $\varpi$ into (1) then gives $DHL[k_0,2]$ with $k_0 = 3{,}500{,}000$ as stated previously.
Improving the value of $\varpi$ in Theorem 4 would lead to improvements in $k_0$ and then $H$ as discussed above. The purpose of this reading seminar is then twofold:

- Going through Zhang's argument in order to improve the value of $\varpi$ (perhaps by decreasing $\delta$); and
- Gaining a more holistic understanding of Zhang's argument (and perhaps to find some more "global" improvements to that argument), as well as related arguments such as the prior work of Bombieri, Fouvry, Friedlander, and Iwaniec that Zhang's work is based on.
In addition to reading through Zhang’s paper, the following material is likely to be relevant:
- A recent blog post of Emmanuel Kowalski on the technical details of Zhang’s argument.
- Scanned notes from a talk related to the above blog post.
- A recent expository note by Fouvry, Kowalski, and Michel on a Friedlander-Iwaniec character sum relevant to this argument.
- This 1981 paper of Fouvry and Iwaniec which is the first result in the literature which is roughly of the type $MPZ[\varpi,\delta]$. (This paper seems to give a related result for smooth moduli and a fixed residue class, if I read it correctly; I don't yet understand what prevents this result or modifications thereof from being used in place of Theorem 4.)
I envisage a loose, unstructured format for the reading seminar. In the comments below, I am going to post my own impressions, questions, and remarks as I start going through the material, and I encourage other participants to do the same. The most obvious thing to do is to go through Zhang's Sections 6-13 in linear order, but it may make sense for some participants to follow a different path. One obvious near-term goal is to carefully go through Zhang's arguments for general $\varpi, \delta$ instead of $\varpi = \delta = \frac{1}{1168}$, and record exactly how various exponents depend on $\varpi, \delta$, and what inequalities these parameters need to obey for the arguments to go through. It may be that this task can be done at a fairly superficial level without the need to carefully go through the analytic number theory estimates in that paper, though of course we should also be doing that as well. This may lead to some "cheap" optimisations of $\varpi$ which can then propagate to improved bounds on $k_0$ and $H$ thanks to the other parts of the Polymath project.
Everyone is welcome to participate in this project (as per the usual polymath rules); however I would request that "meta" comments about the project that are not directly related to the task of reading Zhang's paper and related works be placed instead on the polymath proposal page. (Similarly, comments regarding the optimisation of $k_0$ given $\varpi$ and $\delta$ should be placed at this post, while comments on the optimisation of $H$ given $k_0$ should be given at this post.) On the other hand, asking questions about Zhang's paper, even (or especially!) "dumb" ones, would be very appropriate for this post and such questions are encouraged.
Suppose one is given a $k$-tuple $\mathcal{H} = (h_1, \ldots, h_k)$ of $k$ distinct integers for some $k \geq 1$, arranged in increasing order. When is it possible to find infinitely many translates $n + \mathcal{H}$ of $\mathcal{H}$ which consist entirely of primes? The case $k = 1$ is just Euclid's theorem on the infinitude of primes, but the case $k \geq 2$ is already open in general, with the $\mathcal{H} = (0, 2)$ case being the notorious twin prime conjecture.

On the other hand, there are some tuples $\mathcal{H}$ for which one can easily answer the above question in the negative. For instance, the only translate of $(0,1)$ that consists entirely of primes is $(2,3)$, basically because each translate of $(0,1)$ must contain an even number, and the only even prime is $2$. More generally, if there is a prime $p$ such that $\mathcal{H}$ meets each of the $p$ residue classes $0\ (p), 1\ (p), \ldots, p-1\ (p)$, then every translate of $\mathcal{H}$ contains at least one multiple of $p$; since $p$ is the only multiple of $p$ that is prime, this shows that there are only finitely many translates of $\mathcal{H}$ that consist entirely of primes.
To avoid this obstruction, let us call a $k$-tuple $\mathcal{H}$ admissible if it avoids at least one residue class mod $p$ for each prime $p$. It is easy to check for admissibility in practice, since a $k$-tuple is automatically admissible with respect to every prime $p$ larger than $k$, so one only needs to check a finite number of primes in order to decide on the admissibility of a given tuple. For instance, $(0, 2)$ or $(0, 2, 6)$ are admissible, but $(0, 2, 4)$ is not (because it covers all the residue classes modulo $3$). We then have the famous Hardy-Littlewood prime tuples conjecture:
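This finite check is easy to automate; here is a minimal sketch of my own (assuming sympy is available):

```python
# A k-tuple H is admissible iff for every prime p <= k it misses at least
# one residue class mod p; primes p > k are automatic, since k residues
# cannot cover all p classes.
from sympy import primerange

def is_admissible(H):
    k = len(H)
    for p in primerange(2, k + 1):
        if len({h % p for h in H}) == p:  # H covers every class mod p
            return False
    return True

print(is_admissible([0, 2]))      # True
print(is_admissible([0, 2, 6]))   # True
print(is_admissible([0, 2, 4]))   # False: hits 0, 1, 2 mod 3
```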
Conjecture 1 (Prime tuples conjecture, qualitative form) If $\mathcal{H}$ is an admissible $k$-tuple, then there exist infinitely many translates of $\mathcal{H}$ that consist entirely of primes.

This conjecture is extremely difficult (containing the twin prime conjecture, for instance, as a special case), and in fact there is no explicitly known example of an admissible $k$-tuple with $k \geq 2$ for which we can verify this conjecture (although, thanks to the recent work of Zhang, we know that $(0, d)$ satisfies the conclusion of the prime tuples conjecture for some $0 < d \leq 70{,}000{,}000$, even if we can't yet say what the precise value of $d$ is).
Actually, Hardy and Littlewood conjectured a more precise version of Conjecture 1. Given an admissible $k$-tuple $\mathcal{H} = (h_1, \ldots, h_k)$, and for each prime $p$, let $\omega_p = \omega_p(\mathcal{H})$ denote the number of residue classes modulo $p$ that $\mathcal{H}$ meets; thus we have $\omega_p \leq p - 1$ for all $p$ by admissibility, and also $\omega_p = k$ for all $p > h_k - h_1$. We then define the singular series $\mathfrak{G} = \mathfrak{G}(\mathcal{H})$ associated to $\mathcal{H}$ by the formula

$$ \mathfrak{G} := \prod_{p \in \mathcal{P}} \frac{1 - \frac{\omega_p}{p}}{(1 - \frac{1}{p})^k} $$

where $\mathcal{P}$ is the set of primes; by the previous discussion we see that the infinite product in $\mathfrak{G}$ converges to a finite non-zero number.
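Since $\omega_p = k$ for all large $p$, the factors above are $1 + O(1/p^2)$ and the product converges quickly, so $\mathfrak{G}$ is easy to compute numerically. Here is a small sketch of my own (assuming sympy is available):

```python
# Numerically evaluate the singular series by truncating the product over
# primes; convergence is fast since omega_p = k for all large p.
from sympy import primerange

def singular_series(H, cutoff=10**5):
    k, G = len(H), 1.0
    for p in primerange(2, cutoff):
        omega_p = len({h % p for h in H})
        G *= (1 - omega_p / p) / (1 - 1 / p) ** k
    return G

print(singular_series([0, 2]))  # ~ 1.3203, i.e. twice the twin prime constant
```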
We will also need some asymptotic notation (in the spirit of "cheap nonstandard analysis"). We will need a parameter $x$ that one should think of as going to infinity. Some mathematical objects (such as $\mathcal{H}$ and $k$) will be independent of $x$ and referred to as fixed; but unless otherwise specified we allow all mathematical objects under consideration to depend on $x$. If $X$ and $Y$ are two such quantities, we say that $X = O(Y)$ if one has $|X| \leq CY$ for some fixed $C$, and $X = o(Y)$ if one has $|X| \leq c(x) Y$ for some function $c(x)$ of $x$ (and of any fixed parameters present) that goes to zero as $x \rightarrow \infty$ (for each choice of fixed parameters).
Conjecture 2 (Prime tuples conjecture, quantitative form) Let $k \geq 1$ be a fixed natural number, and let $\mathcal{H}$ be a fixed admissible $k$-tuple. Then the number of natural numbers $n \leq x$ such that $n + \mathcal{H}$ consists entirely of primes is $(\mathfrak{G} + o(1)) \frac{x}{\log^k x}$.
Thus, for instance, if Conjecture 2 holds, then the number of twin primes less than $x$ should equal $(2\Pi_2 + o(1)) \frac{x}{\log^2 x}$, where $\Pi_2$ is the twin prime constant

$$ \Pi_2 := \prod_{p > 2} \left( 1 - \frac{1}{(p-1)^2} \right) = 0.6601618\ldots. $$

As this conjecture is stronger than Conjecture 1, it is of course open. However there are a number of partial results on this conjecture. For instance, this conjecture is known to be true if one introduces some additional averaging in $\mathcal{H}$; see for instance this previous post. From the methods of sieve theory, one can obtain an upper bound of $(C_k \mathfrak{G} + o(1)) \frac{x}{\log^k x}$ for the number of $n \leq x$ with $n + \mathcal{H}$ all prime, where $C_k$ depends only on $k$. Sieve theory can also give analogues of Conjecture 2 if the primes are replaced by a suitable notion of almost prime (or more precisely, by a weight function concentrated on almost primes).
Another type of partial result towards Conjectures 1, 2 come from the results of Goldston-Pintz-Yildirim, Motohashi-Pintz, and of Zhang. Following the notation of this recent paper of Pintz, for each $k_0 \geq 2$, let $DHL[k_0,2]$ denote the following assertion (DHL stands for "Dickson-Hardy-Littlewood"):

Conjecture 3 ($DHL[k_0,2]$) Let $\mathcal{H}$ be a fixed admissible $k_0$-tuple. Then there are infinitely many translates $n + \mathcal{H}$ of $\mathcal{H}$ which contain at least two primes.
This conjecture gets harder as $k_0$ gets smaller. Note for instance that $DHL[2,2]$ would imply all the $k = 2$ cases of Conjecture 1, including the twin prime conjecture. More generally, if one knew $DHL[k_0,2]$ for some $k_0$, then one would immediately conclude that there are an infinite number of pairs of consecutive primes of separation at most $H(k_0)$, where $H(k_0)$ is the minimal diameter $h_{k_0} - h_1$ amongst all admissible $k_0$-tuples $\mathcal{H}$. Values of $H(k_0)$ for small $k_0$ can be found at this link (in slightly different notation on that page). For large $k_0$, the best upper bounds on $H(k_0)$ have been found by using admissible $k_0$-tuples $\mathcal{H}$ of the form

$$ \mathcal{H} = (p_{m+1}, \ldots, p_{m+k_0}) $$

where $p_n$ denotes the $n^{th}$ prime and $m$ is a parameter to be optimised over (in practice it is an order of magnitude or two smaller than $k_0$); see this blog post for details. The upshot is that one can bound $H(k_0)$ for large $k_0$ by a quantity slightly smaller than $k_0 \log k_0$ (and the large sieve inequality shows that this is sharp up to a factor of two, see e.g. this previous post for more discussion).
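As a toy illustration of this construction (my own sketch, with a deliberately tiny $k_0$ and an arbitrary choice of $m$), note that once $p_{m+1} > k_0$, every element of the tuple is a prime exceeding $k_0$, so the class $0\ (p)$ is avoided for every prime $p \leq k_0$, and admissibility is automatic:

```python
# Consecutive primes past k0 form an admissible k0-tuple: the class 0 (p)
# is avoided for every p <= k0, and primes p > k0 need no check.
from sympy import prime  # prime(n) is the nth prime
import math

k0, m = 100, 25                 # m chosen so that prime(m+1) = 101 > k0;
                                # in practice m is to be optimised over
H = [prime(m + i) for i in range(1, k0 + 1)]
print(H[-1] - H[0])             # diameter of the tuple
print(round(k0 * math.log(k0))) # the k0 log k0 heuristic for comparison
```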
In a key breakthrough, Goldston, Pintz, and Yildirim were able to establish the following conditional result a few years ago:

Theorem 4 (Goldston-Pintz-Yildirim) Suppose that the Elliott-Halberstam conjecture $EH[\theta]$ is true for some $\frac{1}{2} < \theta < 1$. Then $DHL[k_0,2]$ is true for some finite $k_0$ depending on $\theta$. In particular, this establishes an infinite number of pairs of consecutive primes of separation $O(1)$.

The dependence of constants between $k_0$ and $\theta$ given by the Goldston-Pintz-Yildirim argument is basically of the form $k_0 \sim (\theta - \frac{1}{2})^{-2}$. (UPDATE: as recently observed by Farkas, Pintz, and Revesz, this relationship can be improved to $k_0 \sim (\theta - \frac{1}{2})^{-3/2}$.)
Unfortunately, the Elliott-Halberstam conjecture (which we will state properly below) is only known for $\theta < \frac{1}{2}$, an important result known as the Bombieri-Vinogradov theorem. If one uses the Bombieri-Vinogradov theorem instead of the Elliott-Halberstam conjecture, Goldston, Pintz, and Yildirim were still able to show the highly non-trivial result that there were infinitely many pairs $p_n, p_{n+1}$ of consecutive primes with $p_{n+1} - p_n = o(\log p_n)$ (actually they showed more than this; see e.g. this survey of Soundararajan for details).
Actually, the full strength of the Elliott-Halberstam conjecture is not needed for these results. There is a technical specialisation of the Elliott-Halberstam conjecture which does not presently have a commonly accepted name; I will call it the Motohashi-Pintz-Zhang conjecture $MPZ[\varpi]$ in this post, where $0 < \varpi < \frac{1}{4}$ is a parameter. We will define this conjecture more precisely later, but let us remark for now that $MPZ[\varpi]$ is a consequence of $EH[\frac{1}{2} + 2\varpi]$.
We then have the following two theorems. Firstly, we have the following strengthening of Theorem 4:

Theorem 5 (Motohashi-Pintz-Zhang) Suppose that $MPZ[\varpi]$ is true for some $0 < \varpi < \frac{1}{4}$. Then $DHL[k_0,2]$ is true for some finite $k_0$ depending on $\varpi$.

A version of this result (with a slightly different formulation of $MPZ[\varpi]$) appears in this paper of Motohashi and Pintz, and in the paper of Zhang, Theorem 5 is proven for the concrete values $\varpi = \frac{1}{1168}$ and $k_0 = 3{,}500{,}000$. We will supply a self-contained proof of Theorem 5 below the fold, with constants improving upon those in Zhang's paper (in particular, for $\varpi = \frac{1}{1168}$, we can take $k_0$ as low as $341{,}640$, with further improvements on the way). As with Theorem 4, we have an inverse quadratic relationship $k_0 \sim \varpi^{-2}$.
In his paper, Zhang obtained for the first time an unconditional result of this type:

Theorem 6 (Zhang) $MPZ[\varpi]$ is true for $\varpi = \frac{1}{1168}$.

This is a deep result, building upon the work of Fouvry-Iwaniec, Friedlander-Iwaniec and Bombieri–Friedlander–Iwaniec which established results of a similar nature to $MPZ[\varpi]$ but simpler in some key respects. We will not discuss this result further here, except to say that they rely on the (higher-dimensional case of the) Weil conjectures, which were famously proven by Deligne using methods from l-adic cohomology. Also, it was believed among at least some experts that the methods of Bombieri, Fouvry, Friedlander, and Iwaniec were not quite strong enough to obtain results of the form $MPZ[\varpi]$, making Theorem 6 a particularly impressive achievement.
Combining Theorem 6 with Theorem 5 we obtain $DHL[k_0,2]$ for some finite $k_0$; Zhang obtains this for $k_0 = 3{,}500{,}000$ but as detailed below, this can be lowered to $k_0 = 341{,}640$. This in turn gives infinitely many pairs of consecutive primes of separation at most $H(k_0)$. Zhang gives a simple argument that bounds $H(3{,}500{,}000)$ by $70{,}000{,}000$, giving his famous result that there are infinitely many pairs of primes of separation at most $70{,}000{,}000$; by being a bit more careful (as discussed in this post) one can lower the upper bound on $H(3{,}500{,}000)$ to $57{,}554{,}086$, and if one instead uses the newer value $k_0 = 341{,}640$ for $k_0$ one can instead use the bound $H(341{,}640) \leq 4{,}802{,}222$. (Many thanks to Scott Morrison for these numerics.) UPDATE: These values are now obsolete; see this web page for the latest bounds.
In this post we would like to give a self-contained proof of both Theorem 4 and Theorem 5, which are both sieve-theoretic results that are mainly elementary in nature. (But, as stated earlier, we will not discuss the deepest new result in Zhang's paper, namely Theorem 6.) Our presentation will deviate a little bit from the traditional sieve-theoretic approach in a few places. Firstly, there is a portion of the argument that is traditionally handled using contour integration and properties of the Riemann zeta function; we will present a "cheaper" approach (which Ben Green and I used in our papers, e.g. in this one) using Fourier analysis, with the only property used about the zeta function being the elementary fact that $\zeta(s)$ blows up like $\frac{1}{s-1}$ as one approaches $s = 1$ from the right. To deal with the contribution of small primes (which is the source of the singular series $\mathfrak{G}$), it will be convenient to use the "$W$-trick" (introduced in this paper of mine with Ben), passing to a single residue class mod $W$ (where $W := \prod_{p < w} p$ is the product of all the small primes) to end up in a situation in which all small primes have been "turned off", which leads to better pseudorandomness properties (for instance, once one eliminates all multiples of small primes, almost all pairs of remaining numbers will be coprime).