Suppose one is given a $k$-tuple $\mathcal{H} = (h_1,\ldots,h_k)$ of distinct integers for some $k \geq 1$, arranged in increasing order. When is it possible to find infinitely many translates $n + \mathcal{H} = (n+h_1,\ldots,n+h_k)$ of $\mathcal{H}$ which consist entirely of primes? The case $k=1$ is just Euclid’s theorem on the infinitude of primes, but the case $k \geq 2$ is already open in general, with the case $\mathcal{H} = (0,2)$ being the notorious twin prime conjecture.

On the other hand, there are some tuples for which one can easily answer the above question in the negative. For instance, the only translate of $(0,1)$ that consists entirely of primes is $(2,3)$, basically because each translate of $(0,1)$ must contain an even number, and the only even prime is $2$. More generally, if there is a prime $p$ such that $\mathcal{H}$ meets each of the $p$ residue classes $0 \pmod p, 1 \pmod p, \ldots, p-1 \pmod p$, then every translate of $\mathcal{H}$ contains at least one multiple of $p$; since $p$ is the only multiple of $p$ that is prime, this shows that there are only finitely many translates of $\mathcal{H}$ that consist entirely of primes.

To avoid this obstruction, let us call a $k$-tuple $\mathcal{H}$ *admissible* if it avoids at least one residue class mod $p$ for each prime $p$. It is easy to check for admissibility in practice, since a $k$-tuple is automatically admissible with respect to every prime $p$ larger than $k$, so one only needs to check a finite number of primes in order to decide on the admissibility of a given tuple. For instance, $(0,2)$ or $(0,2,6)$ are admissible, but $(0,2,4)$ is not (because it covers all the residue classes modulo $3$). We then have the famous Hardy-Littlewood prime tuples conjecture:
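Admissibility is easy to verify mechanically, since only the primes $p \leq k$ need to be examined. A minimal sketch in Python (the helper names here are my own, not from any particular package):

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_admissible(h_tuple):
    """A k-tuple is admissible iff for every prime p it misses some residue
    class mod p; only p <= k needs checking, since k residues can never
    cover all p classes when p > k."""
    k = len(h_tuple)
    for p in primes_upto(k):
        if len({h % p for h in h_tuple}) == p:  # covers every class mod p
            return False
    return True
```

For instance, `is_admissible((0, 2, 6))` returns `True` while `is_admissible((0, 2, 4))` returns `False`, matching the examples above.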

Conjecture 1 (Prime tuples conjecture, qualitative form). If $\mathcal{H}$ is an admissible $k$-tuple, then there exist infinitely many translates of $\mathcal{H}$ that consist entirely of primes.

This conjecture is extremely difficult (containing the twin prime conjecture, for instance, as a special case), and in fact there is *no* explicitly known example of an admissible $k$-tuple with $k \geq 2$ for which we can verify this conjecture (although, thanks to the recent work of Zhang, we know that $(0,d)$ satisfies the conclusion of the prime tuples conjecture for some $0 < d \leq 70{,}000{,}000$, even if we can’t yet say what the precise value of $d$ is).

Actually, Hardy and Littlewood conjectured a more precise version of Conjecture 1. Given an admissible $k$-tuple $\mathcal{H} = (h_1,\ldots,h_k)$, and for each prime $p$, let $\nu_p$ denote the number of residue classes modulo $p$ that $\mathcal{H}$ meets; thus we have $\nu_p \leq p-1$ for all $p$ by admissibility, and also $\nu_p = k$ for all $p > h_k - h_1$. We then define the *singular series* $\mathfrak{G} = \mathfrak{G}(\mathcal{H})$ associated to $\mathcal{H}$ by the formula

$$\mathfrak{G} := \prod_{p \in \mathcal{P}} \frac{1 - \nu_p/p}{(1 - 1/p)^k}$$

where $\mathcal{P} = \{2,3,5,7,\ldots\}$ is the set of primes; by the previous discussion we see that the infinite product defining $\mathfrak{G}$ converges to a finite non-zero number.
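Numerically, the singular series converges quickly, since each factor is $1 + O(1/p^2)$ once $p$ exceeds the diameter of the tuple. A rough Python approximation (the cutoff and the function name are my own choices):

```python
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def singular_series(h_tuple, cutoff=100_000):
    """Truncated product over primes p <= cutoff of (1 - nu_p/p)/(1 - 1/p)^k,
    where nu_p is the number of residue classes mod p met by the tuple."""
    k = len(h_tuple)
    product = 1.0
    for p in primes_upto(cutoff):
        nu_p = len({h % p for h in h_tuple})
        product *= (1 - nu_p / p) / (1 - 1 / p) ** k
    return product
```

For the twin tuple $(0,2)$ this gives roughly $1.3203$ (i.e. twice the twin prime constant below), while an inadmissible tuple such as $(0,2,4)$ gives $0$, as some factor vanishes.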

We will also need some asymptotic notation (in the spirit of “cheap nonstandard analysis“). We will need a parameter $x$ that one should think of as going to infinity. Some mathematical objects (such as $k$ and $\mathcal{H}$) will be independent of $x$ and referred to as *fixed*; but unless otherwise specified we allow all mathematical objects under consideration to depend on $x$. If $X$ and $Y$ are two such quantities, we say that $X = O(Y)$ if one has $|X| \leq CY$ for some fixed $C$, and $X = o(Y)$ if one has $|X| \leq c(x) Y$ for some function $c(x)$ of $x$ (and of any fixed parameters present) that goes to zero as $x \rightarrow \infty$ (for each choice of fixed parameters).

Conjecture 2 (Prime tuples conjecture, quantitative form). Let $k \geq 1$ be a fixed natural number, and let $\mathcal{H}$ be a fixed admissible $k$-tuple. Then the number of natural numbers $n \leq x$ such that $n + \mathcal{H}$ consists entirely of primes is $(\mathfrak{G} + o(1)) \frac{x}{\log^k x}$.

Thus, for instance, if Conjecture 2 holds, then the number of twin primes less than $x$ should equal $(2 \Pi_2 + o(1)) \frac{x}{\log^2 x}$, where $\Pi_2$ is the twin prime constant

$$\Pi_2 := \prod_{p > 2} \left( 1 - \frac{1}{(p-1)^2} \right) \approx 0.66016.$$
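One can check the constant, and the order of magnitude of the prediction, directly. The sketch below (cutoffs are my own choices) computes a truncated product for $\Pi_2$ and counts twin prime pairs up to $10^5$:

```python
import math

def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

# Truncated twin prime constant: product over odd primes of 1 - 1/(p-1)^2.
Pi2 = 1.0
for p in primes_upto(100_000):
    if p > 2:
        Pi2 *= 1 - 1 / (p - 1) ** 2

# Count twin prime pairs (p, p + 2) with p <= 10^5.
x = 100_000
ps = set(primes_upto(x + 2))
twin_count = sum(1 for p in ps if p <= x and p + 2 in ps)

# First-order Hardy-Littlewood guess (the integral of dt/log^2 t is sharper).
prediction = 2 * Pi2 * x / math.log(x) ** 2
```

This yields $\Pi_2 \approx 0.6602$ and $1224$ twin pairs below $10^5$; the crude $x/\log^2 x$ form of the prediction gives about $996$, undercounting because the more accurate prediction replaces $x/\log^2 x$ by $\int_2^x dt/\log^2 t$.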

As this conjecture is stronger than Conjecture 1, it is of course open. However, there are a number of partial results on this conjecture. For instance, this conjecture is known to be true if one introduces some additional averaging over the tuple $\mathcal{H}$; see for instance this previous post. From the methods of sieve theory, one can obtain an *upper bound* of $(C_k + o(1)) \mathfrak{G} \frac{x}{\log^k x}$ for the number of $n \leq x$ with $n + \mathcal{H}$ all prime, where $C_k$ depends only on $k$. Sieve theory can also give analogues of Conjecture 2 if the primes are replaced by a suitable notion of almost prime (or more precisely, by a weight function concentrated on almost primes).

Another type of partial result towards Conjectures 1, 2 comes from the results of Goldston-Pintz-Yildirim, Motohashi-Pintz, and of Zhang. Following the notation of this recent paper of Pintz, for each $k \geq 2$, let $DHL[k,2]$ denote the following assertion (DHL stands for “Dickson-Hardy-Littlewood”):

Conjecture 3 ($DHL[k,2]$). Let $\mathcal{H}$ be a fixed admissible $k$-tuple. Then there are infinitely many translates of $\mathcal{H}$ which contain at least two primes.

This conjecture gets harder as $k$ gets smaller. Note for instance that $DHL[2,2]$ would imply all the $k=2$ cases of Conjecture 1, including the twin prime conjecture. More generally, if one knew $DHL[k,2]$ for some $k$, then one would immediately conclude that there are an infinite number of pairs of consecutive primes of separation at most $H(k)$, where $H(k)$ is the minimal diameter $h_k - h_1$ amongst all admissible $k$-tuples $\mathcal{H}$. Values of $H(k)$ for small $k$ can be found at this link (in slightly different notation on that page). For large $k$, the best upper bounds on $H(k)$ have been found by using admissible $k$-tuples of the form

$$(p_{m+1}, p_{m+2}, \ldots, p_{m+k})$$

where $p_n$ denotes the $n^{\mathrm{th}}$ prime and $m$ is a parameter to be optimised over (in practice it is an order of magnitude or two smaller than $k$); see this blog post for details. The upshot is that one can bound $H(k)$ for large $k$ by a quantity slightly smaller than $k \log k$ (and the large sieve inequality shows that this is sharp up to a factor of two, see e.g. this previous post for more discussion).
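This construction is easy to replicate: any $k$ consecutive primes that all exceed $k$ automatically avoid the residue class $0 \pmod p$ for every prime $p \leq k$, and hence form an admissible tuple. A sketch (with the optimisation over $m$ omitted; I simply take the first $k$ primes past $k$):

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_admissible(h_tuple):
    """Admissible iff some residue class mod p is avoided for each p <= k."""
    k = len(h_tuple)
    for p in primes_upto(k):
        if len({h % p for h in h_tuple}) == p:
            return False
    return True

def prime_tuple(k):
    """The first k primes exceeding k: a prime > p is never 0 mod p, so the
    class 0 mod p is avoided for every prime p <= k."""
    ps = [p for p in primes_upto(50 * k + 100) if p > k]
    return tuple(ps[:k])
```

For $k = 10$ this produces $(11, 13, \ldots, 43)$, an admissible tuple of diameter $32$.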

In a key breakthrough, Goldston, Pintz, and Yildirim were able to establish the following conditional result a few years ago:

Theorem 4 (Goldston-Pintz-Yildirim). Suppose that the Elliott-Halberstam conjecture $EH[\theta]$ is true for some fixed $\theta > 1/2$. Then $DHL[k,2]$ is true for some finite $k$. In particular, this establishes an infinite number of pairs of consecutive primes of separation at most $H(k)$.

The dependence of constants between $k$ and $\theta$ given by the Goldston-Pintz-Yildirim argument is basically of the form $k \sim (\theta - \frac{1}{2})^{-2}$. (UPDATE: as recently observed by Farkas, Pintz, and Revesz, this relationship can be improved to $k \sim (\theta - \frac{1}{2})^{-3/2}$.)

Unfortunately, the Elliott-Halberstam conjecture (which we will state properly below) is only known for $\theta < 1/2$, an important result known as the Bombieri-Vinogradov theorem. If one uses the Bombieri-Vinogradov theorem instead of the Elliott-Halberstam conjecture, Goldston, Pintz, and Yildirim were still able to show the highly non-trivial result that there were infinitely many pairs of consecutive primes $p_n, p_{n+1}$ with $p_{n+1} - p_n = o(\log p_n)$ (actually they showed more than this; see e.g. this survey of Soundararajan for details).

Actually, the full strength of the Elliott-Halberstam conjecture is not needed for these results. There is a technical specialisation of the Elliott-Halberstam conjecture which does not presently have a commonly accepted name; I will call it the *Motohashi-Pintz-Zhang conjecture* $MPZ[\varpi]$ in this post, where $0 < \varpi < 1/4$ is a parameter. We will define this conjecture more precisely later, but let us remark for now that $MPZ[\varpi]$ is a consequence of $EH[\frac{1}{2} + 2\varpi]$.

We then have the following two theorems. Firstly, we have the following strengthening of Theorem 4:

Theorem 5 (Motohashi-Pintz-Zhang). Suppose that $MPZ[\varpi]$ is true for some fixed $0 < \varpi < 1/4$. Then $DHL[k,2]$ is true for some finite $k$.

A version of this result (with a slightly different formulation of $MPZ[\varpi]$) appears in this paper of Motohashi and Pintz, and in the paper of Zhang, Theorem 5 is proven for the concrete values $\varpi = 1/1168$ and $k = 3{,}500{,}000$. We will supply a self-contained proof of Theorem 5 below the fold, with constants improving upon those in Zhang’s paper (in particular, for $\varpi = 1/1168$, we can take $k$ as low as $341{,}640$, with further improvements on the way). As with Theorem 4, we have an inverse quadratic relationship $k \sim \varpi^{-2}$.

In his paper, Zhang obtained for the first time an unconditional advance on the Motohashi-Pintz-Zhang conjecture:

Theorem 6 (Zhang). $MPZ[\varpi]$ is true for all $0 < \varpi \leq 1/1168$.

This is a deep result, building upon the work of Fouvry-Iwaniec, Friedlander-Iwaniec and Bombieri-Friedlander-Iwaniec which established results of a similar nature, but simpler in some key respects. We will not discuss this result further here, except to say that it relies on the (higher-dimensional case of the) Weil conjectures, which were famously proven by Deligne using methods from $\ell$-adic cohomology. Also, it was believed among at least some experts that the methods of Bombieri, Fouvry, Friedlander, and Iwaniec were not quite strong enough to obtain results of the form $MPZ[\varpi]$, making Theorem 6 a particularly impressive achievement.

Combining Theorem 6 with Theorem 5 we obtain $DHL[k,2]$ for some finite $k$; Zhang obtains this for $k = 3{,}500{,}000$, but as detailed below, this can be lowered to $k = 341{,}640$. This in turn gives infinitely many pairs of consecutive primes of separation at most $H(k)$. Zhang gives a simple argument that bounds $H(3{,}500{,}000)$ by $70{,}000{,}000$, giving his famous result that there are infinitely many pairs of primes of separation at most $70{,}000{,}000$; by being a bit more careful (as discussed in this post) one can lower the upper bound on $H(3{,}500{,}000)$ to $57{,}554{,}086$, and if one instead uses the newer value $k = 341{,}640$ one can instead use the bound $4{,}982{,}086$. (Many thanks to Scott Morrison for these numerics.) UPDATE: These values are now obsolete; see this web page for the latest bounds.

In this post we would like to give a self-contained proof of both Theorem 4 and Theorem 5, which are both sieve-theoretic results that are mainly elementary in nature. (But, as stated earlier, we will not discuss the deepest new result in Zhang’s paper, namely Theorem 6.) Our presentation will deviate a little bit from the traditional sieve-theoretic approach in a few places. Firstly, there is a portion of the argument that is traditionally handled using contour integration and properties of the Riemann zeta function; we will present a “cheaper” approach (which Ben Green and I used in our papers, e.g. in this one) using Fourier analysis, with the only property used about the zeta function $\zeta(s)$ being the elementary fact that it blows up like $\frac{1}{s-1}$ as one approaches $s = 1$ from the right. To deal with the contribution of small primes (which is the source of the singular series $\mathfrak{G}$), it will be convenient to use the “$W$-trick” (introduced in this paper of mine with Ben), passing to a single residue class $b \pmod W$ (where $W$ is the product of all the small primes) to end up in a situation in which all small primes have been “turned off”, which leads to better pseudorandomness properties (for instance, once one eliminates all multiples of small primes, almost all pairs of remaining numbers will be coprime).

** — 1. The $W$-trick — **

In this section we introduce the “$W$-trick”, which is a simple but useful device that automatically takes care of local factors arising from small primes, such as the singular series $\mathfrak{G}$. The price one pays for this trick is that the explicit decay rates in various terms can be rather poor, but for the applications here, we will not need to know any information on these decay rates and so the $W$-trick may be freely applied.

Let $w$ be a natural number, which should be thought of as either fixed and large, or as a very slowly growing function of $x$. Actually, the two viewpoints are basically equivalent for the purposes of asymptotic analysis (at least at the qualitative level of decay rates), thanks to the following basic principle:

Lemma 7 (Overspill principle). Let $F = F(x,w)$ be a quantity depending on $x$ and $w$. Then the following are equivalent:

- (i) For every fixed $\varepsilon > 0$ there exists a fixed $w_0$ such that
$$|F(x,w)| \leq \varepsilon \hbox{ for all sufficiently large } x$$
for all fixed $w \geq w_0$.

- (ii) We have
$$F(x, w(x)) = o(1)$$
whenever $w = w(x)$ is a function of $x$ going to infinity that is sufficiently slowly growing. (In other words, there exists a function $w^*(x)$ going to infinity with the property that $F(x, w(x)) = o(1)$ whenever $w(x)$ is a natural number-valued function of $x$ such that $w(x) \rightarrow \infty$ as $x \rightarrow \infty$ and $w(x) \leq w^*(x)$ for all sufficiently large $x$.)

This principle is closely related to the overspill principle from nonstandard analysis, though we will not explicitly adopt a nonstandard perspective here. It is also similar in spirit to the diagonalisation trick used to prove the Arzela-Ascoli theorem.

*Proof:* We first show that (i) implies (ii). By (i), we see that for every natural number , we can find a real number with the property that

whenever , , and are such that . By increasing the as necessary we may assume that they are increasing and go to infinity as . If we then define to equal the largest natural number for which , or equal to if no such number exists, then one easily verifies that whenever goes to infinity and is bounded by for sufficiently large .

Now we show that (ii) implies (i). Suppose for contradiction that (i) failed; then we can find a fixed $\varepsilon > 0$ with the property that for any natural number $w_0$, there exists a fixed $w \geq w_0$ such that $|F(x,w)| > \varepsilon$ for arbitrarily large $x$. We can select a sequence $w_m$ of such values of $w$ increasing to infinity, and then we can find a sequence $x_m$ increasing to infinity such that $|F(x_m, w_m)| > \varepsilon$ for all $m$; by increasing the $x_m$ as necessary, we can also ensure that $w_m \leq w^*(x)$ for all $x \geq x_m$ and all $m$, where $w^*$ is the function appearing in (ii). If we then define $w(x)$ to be $w_m$ when $x_m \leq x < x_{m+1}$, and $w(x) := 1$ for $x < x_1$, we see that $|F(x_m, w(x_m))| > \varepsilon$ for all $m$, contradicting (ii).

Henceforth we will usually think of $w$ as a sufficiently slowly growing function of $x$, although we will on occasion take advantage of Lemma 7 to switch to thinking of $w$ as a large fixed quantity instead. In either case, we should think of $w$ as exceeding the size of fixed quantities such as $k$ or $h_k$, at least in the limit where $x$ is large; in particular, for a fixed $k$-tuple $\mathcal{H} = (h_1,\ldots,h_k)$, we will have
$$h_k - h_1 < w$$

if $x$ is large enough. A particular consequence of the growing nature of $w$ is that
$$\sum_{p > w} \frac{1}{p^2} = o(1),$$

as this follows from the absolutely convergent nature of the sum $\sum_p \frac{1}{p^2}$. As a consequence of this, once we “turn off” all the primes less than $w$, any errors in our sieve-theoretic analysis which are quadratic or higher in the quantities $\frac{1}{p}$, $p > w$, can be essentially ignored, which will be very convenient for us. In a similar vein, for any fixed $k$-tuple $\mathcal{H}$, one has
$$\prod_{p > w} \frac{1 - \nu_p/p}{(1 - 1/p)^k} = 1 + o(1),$$

which allows one to truncate the singular series:
$$\mathfrak{G} = (1 + o(1)) \prod_{p \leq w} \frac{1 - \nu_p/p}{(1 - 1/p)^k}.$$

In order to “turn off” all the small primes, we introduce the quantity $W$, defined as the product of all the primes up to $w$ (i.e. the primorial of $w$):
$$W := \prod_{p \leq w} p.$$

As $w$ is going to infinity, $W$ is going to infinity also (but as slowly as we please). The idea of the $W$-trick is to search for prime patterns in a single residue class $b \pmod W$, which as mentioned earlier will “turn off” all the primes less than $w$ in the sieve-theoretic analysis.
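Concretely, for $w = 11$ the primorial is $W = 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 = 2310$, and every element of a residue class $b \pmod W$ with $\gcd(b, W) = 1$ automatically has no prime factor up to $w$. A small illustration (the function name is mine):

```python
import math

def primes_upto(n):
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def primorial(w):
    """W = product of all primes up to w."""
    W = 1
    for p in primes_upto(w):
        W *= p
    return W

W = primorial(11)  # 2 * 3 * 5 * 7 * 11 = 2310
# Every n = 1 mod W is coprime to W, i.e. free of prime factors up to 11.
small_prime_free = all(math.gcd(1 + W * j, W) == 1 for j in range(1, 50))
```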

Using (4) and the Chinese remainder theorem, we may thus approximate the singular series as
$$\mathfrak{G} = (1 + o(1)) \left( \frac{W}{\phi(W)} \right)^k \frac{\# S(W)}{W}$$

where $\phi(W)$ is the Euler totient function of $W$, and $S(W)$ is the set of residue classes $b \pmod W$ such that all of the shifts $b + h_1, \ldots, b + h_k$ are coprime to $W$. Note that if $n + \mathcal{H}$ consists purely of primes and $n$ is sufficiently large, then $n$ must lie in one of the residue classes in $S(W)$. Thus we can count tuples $n + \mathcal{H}$ with all entries prime by working in each residue class in $S(W)$ separately. We conclude that Conjecture 2 is equivalent to the following “$W$-tricked version” in which the singular series is no longer present (or, more precisely, has been replaced by some natural normalisation factors depending on $W$, such as $\frac{W^{k-1}}{\phi(W)^k}$):

Conjecture 8 (Prime tuples conjecture, W-tricked quantitative form). Let $k \geq 1$ be a fixed natural number, and let $\mathcal{H}$ be a fixed admissible $k$-tuple. Assume $w$ is a sufficiently slowly growing function of $x$. Then for any residue class $b \pmod W$ in $S(W)$, the number of natural numbers $n \leq x$ with $n = b \pmod W$ such that $n + \mathcal{H}$ consists entirely of primes is $(1 + o(1)) \frac{W^{k-1}}{\phi(W)^k} \frac{x}{\log^k x}$.

We will work with similarly $W$-tricked asymptotics in the analysis below.
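The set of residue classes $b \pmod W$ with every $b + h_i$ coprime to $W$ (the set $S(W)$ above) is easy to enumerate for small $w$, and by the Chinese remainder theorem its size factors as $\prod_{p \leq w} (p - \nu_p)$. A sketch for the twin tuple:

```python
import math

def primes_upto(n):
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def primorial(w):
    W = 1
    for p in primes_upto(w):
        W *= p
    return W

def admissible_classes(h_tuple, w):
    """The set S(W): residue classes b mod W with every b + h_i coprime to W."""
    W = primorial(w)
    return [b for b in range(W)
            if all(math.gcd(b + h, W) == 1 for h in h_tuple)]
```

For $\mathcal{H} = (0,2)$ and $w = 11$ this gives $1 \cdot 1 \cdot 3 \cdot 5 \cdot 9 = 135$ classes modulo $2310$, matching the Euler product over $p \leq 11$ of $p - \nu_p$.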

** — 2. Sums of multiplicative functions — **

As a result of the sieve-theoretic computations to follow, we will frequently need to estimate sums of the form

and

where $f$ is a multiplicative function, the *sieve level* $R$ is a fixed power of $x$, $\mu$ is the Möbius function, $F$ is a fixed smooth compactly supported function, $I$ is a (possibly half-infinite) interval in $(1, +\infty)$, and $S_I$ is the set of square-free numbers that are products of distinct primes in $I$. (Actually, in applications $F$ won’t *quite* be smooth, but will instead have some high order of differentiability (e.g. $m$ times continuously differentiable for some large $m$), but we can extend the analysis of smooth $F$ to sufficiently differentiable $F$ by various standard limiting or approximation arguments which we will not dwell on here.) We will also need to control the more complicated variant

where $F_1, F_2$ are also smooth compactly supported functions. In practice, the interval $I$ will be something like $(w, +\infty)$ or $(w, x^{\delta}]$. In particular, thanks to the $W$-trick we will be able to turn off all the primes up to $w$, so that $I$ only contains primes larger than $w$, allowing us to take advantage of bounds such as (2).

Once $d$ is restricted to $S_I$, the quantity $f(d)$ is determined entirely by the values of the multiplicative function $f$ at primes $p$ in $I$:
$$f(d) = \prod_{p \mid d} f(p).$$

In applications, $f$ will have the size bound
$$f(p) = \frac{\kappa}{p} + O\left( \frac{1}{p^2} \right)$$

for all $p \in I$ and some fixed positive $\kappa$ (note that we allow the implied constants in the $O()$ notation to depend on quantities such as $\kappa$); we refer to $\kappa$ as the *dimension* of the multiplicative function $f$. Henceforth we assume that $f$ has a fixed dimension $\kappa$. We remark that we could unify the treatment of the two types of sums above in what follows by allowing multiplicative functions of negative dimension, but we will avoid doing so here. In our applications $\kappa$ will be an integer; one could also generalise much of the discussion below to the fractional dimension case, but we will not need to do so here.
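The factorisation of such sums into Euler products over the primes of $I$ can be seen concretely: for multiplicative $f$ and squarefree $d$ supported on a finite set of primes, $\sum_d \mu(d) f(d) = \prod_p (1 - f(p))$. A toy verification (the interval of primes here is an arbitrary choice of mine):

```python
from itertools import combinations
from math import prod, isclose

def mobius_sum(primes, f_at_prime):
    """Sum of mu(d) * f(d) over squarefree d whose prime factors lie in
    `primes`, where f is multiplicative with the given values at primes."""
    total = 0.0
    for r in range(len(primes) + 1):
        for subset in combinations(primes, r):
            total += (-1) ** r * prod(f_at_prime[p] for p in subset)
    return total

primes = [7, 11, 13]                    # a toy "interval" I of primes
f = {p: 1 / p for p in primes}          # model case of dimension 1
lhs = mobius_sum(primes, f)
rhs = prod(1 - 1 / p for p in primes)   # the Euler product factorisation
```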

Traditionally the above expressions are handled by complex analysis, starting with Perron’s formula. We will instead take a slightly different Fourier-analytic approach. We perform a Fourier expansion of the smooth compactly supported function $F$ to obtain a representation
$$F(t) = \int_{\mathbb{R}} e^{-(1 + i\xi) t} \hat{F}(\xi)\, d\xi$$

for some Schwartz function $\hat{F}$; in particular, $\hat{F}$ is rapidly decreasing. (Strictly speaking, $\hat{F}$ is the Fourier transform of $F$ shifted in the complex domain, rather than the true Fourier transform of $F$, but we will ignore this distinction for the purposes of this discussion.) In particular we have
$$F\left( \frac{\log d}{\log R} \right) = \int_{\mathbb{R}} \hat{F}(\xi)\, d^{-\frac{1 + i\xi}{\log R}}\, d\xi$$

for any natural number $d$. By Fubini’s theorem, we can thus write the first expression as

which factorises as

Similarly one has

and

In order to use asymptotics of the Riemann zeta function near the pole $s = 1$, it is convenient to temporarily truncate the above integrals to a bounded range of the frequency variable $\xi$:

Lemma 9. For any fixed $A > 0$, we have

and

Also we have the crude bound

*Proof:* We begin with the bounds on . From (6) we have

for (which forces , so there is no issue with the singularity of the logarithm) and thus

Since

we see on taking logarithms that

and thus

The bounds then follow from the rapid decrease of $\hat{F}$. The bounds for the remaining two expressions are proven similarly.

From (6) and the restriction of the primes in $I$ to quantities larger than $w$, we see that

and

and

where is the restricted Euler product

which is well-defined at least for $\mathrm{Re}(s) > 1$ (and this is the only region of $s$ for which we will need it).

We now specialise to the model case $f(p) := \frac{1}{p}$, in which case

where $\zeta$ is the Riemann zeta function. Using the basic (and easily proven) asymptotic
$$\zeta(s) = \frac{1}{s - 1} + O(1)$$
for $s$ near $1$, one obtains the desired asymptotics for the truncated integrals, if $w$ is sufficiently slowly growing (this can be seen by first working with a fixed large $w$ and then using Lemma 7). Note that because of the above truncation, we do not need any deeper bounds on $\zeta$ than what one can obtain from the simple pole at $s = 1$; in particular no zero-free regions near the line $\mathrm{Re}(s) = 1$ are needed here. (This is ultimately because of the smooth nature of $F$, which is sufficient for the applications in this post; if one wanted rougher cutoff functions here then the situation is closer to that of the prime number theorem, and non-trivial zero-free regions would be required.)

We conclude in the case that

and

and

using the rapid decrease of $\hat{F}$, we thus have

and

and

We can rewrite these expressions in terms of $F$ instead of $\hat{F}$. Using the Gamma function identity
$$\int_0^\infty e^{-st} t^{m-1}\, dt = \frac{\Gamma(m)}{s^m} \qquad (\mathrm{Re}(s) > 0)$$

and (7) we see that

whilst from differentiating (7) times at the origin (after first dividing by ) we see that

Combining these two methods, we also see that

We have thus obtained the following asymptotics:

Proposition 10 (Asymptotics without prime truncation). Suppose that $I = (w, +\infty)$, and that $f$ has dimension $\kappa$ for some fixed natural number $\kappa$. Then we have

and

These asymptotics will suffice for the treatment of the Goldston-Pintz-Yildirim theorem (Theorem 4). For the Motohashi-Pintz-Zhang theorem (Theorem 5) we will also need to deal with truncated intervals $I$, such as $(w, x^{\delta}]$; we will discuss how to deal with these truncations later.

** — 3. The Goldston-Pintz-Yildirim theorem — **

We are now ready to state and prove the Goldston-Pintz-Yildirim theorem. We first need to state the Elliott-Halberstam conjecture properly.

Let $\Lambda$ be the von Mangoldt function, thus $\Lambda(n)$ equals $\log p$ when $n$ is equal to a prime $p$ or a power of that prime, and equals zero otherwise. The prime number theorem in arithmetic progressions tells us that
$$\sum_{n \leq x:\ n = a\ (q)} \Lambda(n) = (1 + o(1)) \frac{x}{\phi(q)}$$

for any fixed arithmetic progression $a \pmod q$ with $a$ coprime to $q$. In particular,

where $a$ ranges over the $\phi(q)$ residue classes mod $q$ that are coprime to $q$. By invoking the Siegel-Walfisz theorem one can obtain the improvement

for any fixed $A > 0$ (though, annoyingly, the implied constant here is only ineffectively bounded with current methods; see this previous post for further discussion).

The above error term is only useful when $q$ is fixed (or is of logarithmic size in $x$). For larger values of $q$, it is very difficult to get good error terms for each $q$ separately, unless one assumes powerful hypotheses such as the generalised Riemann hypothesis. However, it is possible to obtain good control on the error term if one *averages* in $q$. More precisely, for any $0 < \theta < 1$, let $EH[\theta]$ denote the following assertion:

Conjecture 11 ($EH[\theta]$). One has
$$\sum_{q \leq x^{\theta}} \sup_{a \in (\mathbb{Z}/q\mathbb{Z})^{\times}} \left| \sum_{n \leq x:\ n = a\ (q)} \Lambda(n) - \frac{x}{\phi(q)} \right| \ll x \log^{-A} x$$
for all fixed $A > 0$.

This should be compared with the asymptotic $\sum_{q \leq x^{\theta}} \frac{x}{\phi(q)} = (C\theta + o(1))\, x \log x$ for some absolute constant $C > 0$, as can be deduced for instance from Proposition 10. The Elliott-Halberstam conjecture is the assertion that $EH[\theta]$ holds for all $0 < \theta < 1$. This remains open, but the important Bombieri-Vinogradov theorem establishes $EH[\theta]$ for all $0 < \theta < 1/2$. Remarkably, the threshold $\theta = 1/2$ is also the limit of what one can establish if one directly invokes the generalised Riemann hypothesis, so the Bombieri-Vinogradov theorem is often referred to as an assertion that the generalised Riemann hypothesis (or at least the Siegel-Walfisz theorem) holds “on the average”, which is often good enough for sieve-theoretic purposes.

We may replace the von Mangoldt function $\Lambda$ with the slight variant $\theta(n)$, defined to equal $\log n$ when $n$ is a prime and zero otherwise. Using this replacement, as well as the prime number theorem (with error term), it is not difficult to show that $EH[\theta]$ is equivalent to the estimate
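The $\theta$-variant is convenient because $\sum_{p \leq x,\ p = a\ (q)} \log p$ is already close to $x/\phi(q)$ for quite modest $x$. The sketch below (my own parameter choices) checks this for $q = 4$:

```python
import math

def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def theta_progression(x, q, a):
    """theta(x; q, a) = sum of log p over primes p <= x with p = a mod q."""
    return sum(math.log(p) for p in primes_upto(x) if p % q == a)

def phi(q):
    """Euler totient, by direct count."""
    return sum(1 for a in range(1, q + 1) if math.gcd(a, q) == 1)

x, q = 100_000, 4
t1 = theta_progression(x, q, 1)
t3 = theta_progression(x, q, 3)
expected = x / phi(q)
```

Both classes come within a fraction of a percent of $x/\phi(q) = 50{,}000$ here.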

Now we establish Theorem 4. Suppose that $EH[\theta]$ holds for some fixed $\theta > 1/2$, let $k$ be sufficiently large depending on $\theta$ but otherwise fixed, and let $\mathcal{H}$ be a fixed admissible $k$-tuple. We would like to show that there are infinitely many $n$ such that $n + \mathcal{H}$ contains at least two primes. We will begin with the $W$-trick, restricting $n$ to a residue class $b \pmod W$ with $b \in S(W)$ (note that $S(W)$ is non-empty because $\mathcal{H}$ is admissible).

The general strategy will be as follows. We will introduce a *weight function* that obeys the upper bound

for all $1 \leq i \leq k$ and some fixed $\alpha, \beta > 0$, where $R$ is a fixed power of $x$ (we will eventually take $R := x^{\theta/2}$). (The various factors on the right-hand side are natural normalisations coming from sieve theory and the reader should not pay too much attention to them.) Informally, (9) says that $\nu$ has some normalised density at most $\alpha$, and then (10) roughly speaking asserts that relative to the weight $\nu$, each $n + h_i$ has a probability of at least $\beta$ of being prime. If we sum (10) over $i = 1, \ldots, k$ and then subtract off the appropriate number of copies of (9), we conclude that

In particular, if we have the crucial inequality

then the left-hand side is positive for at least one value of $n$ between $x$ and $2x$. This can only occur if $n + \mathcal{H}$ contains two or more primes. Thus we must have $n + \mathcal{H}$ containing at least two primes for some $n$ between $x$ and $2x$; sending $x$ off to infinity then gives $DHL[k,2]$ as desired.

It thus suffices to find a weight function $\nu$ obeying the required properties (9), (10) with parameters $\alpha, \beta$ obeying the key inequality (11). It is thus of interest to make $R$ as large a power of $x$ as possible, and to minimise the ratio between $\alpha$ and $\beta$. It is in the former task that the Elliott-Halberstam hypothesis will be crucial.

The key is to find a good choice of $\nu$, and the selection of this weight is arguably the main contribution of Goldston, Pintz, and Yildirim, who use a carefully modified version of the Selberg sieve. Following (a slight modification of) the Goldston-Pintz-Yildirim argument, we will take a weight of the form $\nu(n) := \lambda(n)^2$, where
$$\lambda(n) := \sum_{d \in S_I:\ d \mid P(n),\ d \leq R} \mu(d)\, F\!\left( \frac{\log d}{\log R} \right) \qquad (12)$$

where $F$ is a fixed smooth non-negative function supported on $[0,1]$ to be chosen later, $I := (w, +\infty)$, and $P$ is the polynomial
$$P(n) := (n + h_1) \cdots (n + h_k).$$

The intuition here is that $\lambda(n)$ is a truncated approximation to a function of the form

for some natural number $l$, which one can check is only non-vanishing when $P(n)$ has at most $k + l$ distinct prime factors in $I$. So $\nu$ is concentrated on those numbers $n$ for which $P(n)$ already has few prime factors in $I$, which will assist in making the ratio $\alpha/\beta$ as small as possible.
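The weight can be computed directly for small $n$. The sketch below uses the simple toy choice $F(t) = (1-t)^2$ on $[0,1]$ (not the optimised choice from the text) and, for simplicity, ignores the $W$-trick restriction to primes $p > w$:

```python
import math

def prime_factors(m):
    """Distinct prime factors of m by trial division."""
    ps, d = [], 2
    while d * d <= m:
        if m % d == 0:
            ps.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        ps.append(m)
    return ps

def gpy_lambda(h_tuple, n, R, F):
    """lambda(n) = sum over squarefree d | P(n) with d <= R of
    mu(d) * F(log d / log R), where P(n) = product of the n + h_i."""
    P = math.prod(n + h for h in h_tuple)
    divisors = [(1, 1)]  # (d, mu(d)) over squarefree divisors of P
    for p in prime_factors(P):
        divisors += [(d * p, -mu) for d, mu in divisors]
    return sum(mu * F(math.log(d) / math.log(R))
               for d, mu in divisors if d <= R)

F = lambda t: (1 - t) ** 2 if 0 <= t <= 1 else 0.0
lam = gpy_lambda((0, 2), 1, 10.0, F)   # P(1) = 1 * 3 has one prime factor
nu = lam ** 2                          # the sieve weight is the square
```

Note that $n = 5$ (where $5$ and $7$ are both prime, so $P(n) = 35$ has few divisors) receives a larger weight than typical $n$, illustrating the concentration on almost-prime values of $P(n)$.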

Clearly $\nu$ is non-negative. Now we consider the task of estimating the left-hand side of (9). Expanding out $\lambda(n)^2$ using (12) and interchanging summations, we can expand this expression as

The constraint $[d,d'] \mid P(n)$ is equivalent to requiring that for each prime $p$ dividing $[d,d']$, $n$ lies in one of the residue classes $-h_i \pmod p$ for $1 \leq i \leq k$. By choice of $I$, $p > w$, so all the $h_i$ are distinct modulo $p$, and so we are constraining $n$ to lie in one of $k$ residue classes modulo $p$ for each such $p$; together with the constraint $n = b \pmod W$ and the Chinese remainder theorem, we are thus constraining $n$ to $k^{\Omega([d,d'])}$ residue classes modulo $W[d,d']$, where $\Omega([d,d'])$ is the number of prime factors of $[d,d']$. We thus have

Note from the support of $F$ that $d, d'$ may be constrained to be at most $R$, so that $[d,d']$ is at most $R^2$. We can thus express the left-hand side of (9) as the main term

plus an error

By Proposition 10, the error term is $O(R^2 \log^{O(1)} R)$. So if we set

then the error term will certainly give a negligible contribution to (9) with plenty of room to spare. (But when we come to the more difficult sum (10), we will have much less room – only a superlogarithmic amount of room, in fact.) To show (9), it thus suffices to show that

But by Proposition 10 (applied to a suitable $k$-dimensional multiplicative function) and the support of $F$, this bound holds with $\alpha$ equal to the quantity

Now we turn to (10). Fix $1 \leq i \leq k$. Repeating the arguments for (9), we may expand the left-hand side of (10) as

Now we consider the inner sum

As discussed earlier, the conditions $[d,d'] \mid P(n)$ and $n = b \pmod W$ split $n$ into $k^{\Omega([d,d'])}$ residue classes modulo $W[d,d']$. However, if $n + h_i = 0 \pmod p$ for one of the primes $p$ dividing $[d,d']$, then $\theta(n + h_i)$ must vanish (since $p$ is much less than $n + h_i$). So there are actually only $(k-1)^{\Omega([d,d'])}$ residue classes for which $n + h_i$ is coprime to $W[d,d']$. We thus have

Remark 1. There is an inefficiency here; the supremum in (13) is over *all* primitive residue classes, but actually one only needs to take the supremum over a much smaller collection of residue classes, namely those arising from shifts of the tuple $\mathcal{H}$. This inefficiency is not exploitable if we insist on using the Elliott-Halberstam conjecture as the starting hypothesis, but will be used in the arguments of the next section in which a more lightweight hypothesis is utilised.

The left-hand side of (10) is thus equal to the main term

plus an error term

We first deal with the error term. Since is in and is bounded by on the support of this function, and each has representations of the form , we can bound this expression by

Note that we are assuming $F$ to be a fixed smooth compactly supported function and so it has magnitude $O(1)$. On the other hand, from Proposition 10 and the trivial bound $\theta(n) = O(\log x)$ we have

while from (8) (and here we crucially use the choice of $R$) one easily verifies that

for any fixed $A > 0$. By the Cauchy-Schwarz inequality we see that the error term to (10) is negligible (assuming $w$ sufficiently slowly growing, of course). Meanwhile, the main term can be rewritten as

where $g$ is the $(k-1)$-dimensional multiplicative function

Applying Proposition 10, we obtain (10) with

To obtain the crucial inequality (11), we thus need to locate a fixed smooth non-negative function $F$ supported on $[0,1]$ obeying the inequality

In principle one can use the calculus of variations to optimise the choice of $F$ here (it will be the ground state of a certain one-dimensional Schrödinger operator), but one can already get a fairly good result with a specific choice of $F$ that is amenable to computation, namely a polynomial of the form $F(t) := (1-t)^{k+l}$ on $[0,1]$ for some integer $l \geq 1$, with $F$ vanishing for $t \geq 1$ and smoothly truncated somehow at negative values of $t$. Strictly speaking, this is not admissible here because it is not infinitely smooth at $t = 1$, being only finitely many times continuously differentiable there, but one can regularise this function to be smooth without significantly affecting either side of (14), so we will go ahead and test (14) with this function and leave the regularisation details to the reader. The inequality then becomes (after cancelling some factors)

Using the Beta function identity
$$\int_0^1 t^{a-1} (1-t)^{b-1}\, dt = \frac{\Gamma(a)\, \Gamma(b)}{\Gamma(a+b)} = \frac{(a-1)!\,(b-1)!}{(a+b-1)!} \quad \hbox{for integers } a, b \geq 1,$$

and the preceding equation now becomes

which simplifies to

Actually, the same inequality is also applicable when $l$ is real instead of integer, using Gamma functions in place of factorials; we leave the details to the interested reader. We can then optimise in $l$ by setting $l$ comparable to $\sqrt{k}$, arriving at the inequality

But as long as $\theta > 1/2$ is fixed, this inequality is satisfiable for any $k$ larger than a quantity depending only on $\theta$. This concludes the proof of Theorem 4.
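The Beta function identity invoked above is easy to sanity-check numerically against a direct quadrature of the integral:

```python
import math

def beta_numeric(a, b, steps=200_000):
    """Midpoint-rule approximation to the Beta integral
    B(a, b) = integral over [0,1] of t^(a-1) (1-t)^(b-1) dt."""
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
               for i in range(steps)) * h

def beta_gamma(a, b):
    """The closed form Gamma(a) Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)
```

For instance $B(3,4) = \frac{2! \cdot 3!}{6!} = \frac{1}{60}$, and the identity holds equally well for non-integer arguments such as $a = 2.5$, $b = 3.5$.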

Remark 2. One can obtain slightly better dependencies of $k$ in terms of $\theta$ by using more general functions for $F$ than the monomial-type choices above; for instance one can take linear combinations of such functions. See the paper of Goldston, Pintz, and Yildirim for details. Unfortunately, as noted in this survey of Soundararajan, one has the general inequality

which defeats any attempt to directly use this method using only the Bombieri-Vinogradov result that $EH[\theta]$ holds for all $\theta < 1/2$. We show (17) in the case when $k$ is large. Writing things out, (17) simplifies to

The right-hand side simplifies after some integration by parts to

Subtracting off from both sides, one is left with

From the fundamental theorem of calculus and Cauchy-Schwarz, one has the bound

Using this bound for the region close to the endpoint, and a cruder domination away from it, we obtain the claim (at least if $k$ is large enough). There is some slack in this argument; it would be of interest to calculate exactly what the best constants are in (17), so that one can obtain the optimal relationship between $k$ and $\theta$.

To get around this obstruction (17) in the unconditional setting, when one only has $EH[\theta]$ for $\theta < 1/2$, Goldston, Pintz, and Yildirim also considered sums of the form (10) in which the shift now lay outside (but close to) the tuple $\mathcal{H}$. While the bounds here were significantly inferior to those in (10), they were still sufficient to prove a variant of the inequality (11) and hence to get reasonably small gaps between primes.

** — 4. The Motohashi-Pintz-Zhang theorem — **

We now modify the above argument to give Theorem 5. Our treatment here is different from that of Zhang in that it employs the method of Buchstab iteration; a related argument also appears in the paper of Motohashi and Pintz. This arrangement of the argument leads to a more efficient dependence of $k$ on $\varpi$ than in the paper of Zhang. (The argument of Motohashi and Pintz is a bit more complicated and uses a slightly different formulation of the base conjecture $MPZ[\varpi]$, but the final bounds are similar to those given here, albeit with non-explicit constants in the $O()$ notation.)

The main idea here is to truncate the interval of relevant primes from $(w, +\infty)$ to $(w, x^{\delta}]$ for some small fixed $\delta > 0$. Somewhat remarkably, it turns out that this apparently severe truncation does not significantly affect the sums (9), (10) as long as $k$ is large (which is going to be the case in practice). The intuition is that $\nu$ was already concentrated on those $n$ for which $P(n)$ had few prime factors, and it is too “expensive” for one of these factors to be as large as $x^{\delta}$ or more, as it forces many of the other factors to be smaller than they “want” to be. The advantage of truncating the set of primes this way is that the version of the Elliott-Halberstam conjecture needed also acquires the same truncation, which gives that version a certain “well-factored” form (in the spirit of the work of Bombieri, Fouvry, Friedlander, and Iwaniec) which is essential in being able to establish that conjecture unconditionally for some suitably small $\varpi > 0$.

To make this more precise, we first formalise the conjecture $MPZ[\varpi]$ mentioned earlier.

Conjecture 12 ($MPZ[\varpi]$). Let $\mathcal{H}$ be a fixed $k$-tuple (not necessarily admissible) for some fixed $k \geq 2$, and let $b \pmod W$ be a primitive residue class. Then for any fixed $A > 0$, where $I := (w, x^{\delta}]$ and the discrepancy appearing on the left-hand side is the quantity

and

This is the $W$-tricked formulation of the conjecture as (implicitly) stated in Zhang’s paper, which did not have the restriction to the residue class $b \pmod W$ present (and with the truncated interval of primes enlarged somewhat, and with $\mathcal{H}$ required to be admissible). However the two formulations are morally equivalent (and Zhang’s arguments establish Theorem 6 with $MPZ[\varpi]$ as stated). From the prime number theorem in arithmetic progressions (with error term) together with Proposition 10 we observe that we may replace (18) here by the slight variant

without affecting the truth of $MPZ[\varpi]$.

It is also not difficult to deduce $MPZ[\varpi]$ from $EH[\frac{1}{2} + 2\varpi]$ after using a Cauchy-Schwarz argument to dispose of the residue classes in the above sum (cf. the treatment of the error term in (10) in the previous section); we leave the details to the interested reader. Note however that whilst $EH[\frac{1}{2} + 2\varpi]$ demands control over *all* primitive residue classes in a given modulus $q$, the conjecture $MPZ[\varpi]$ only requires control of a much smaller number of residue classes (roughly polylogarithmic in number, on average). Thus $MPZ[\varpi]$ is simpler than $EH[\frac{1}{2} + 2\varpi]$, though it is still far from trivial.

We now begin the proof of Theorem 5. Let be such that holds, and let be a sufficiently large quantity depending on but which is otherwise fixed. As before, it suffices to locate a non-negative sieve weight that obeys the estimates (9), (10) for parameters that obey the key inequality (11), and with smooth and supported on . The choice of weight is almost the same as before; it is also given as a square with given by (12), but now the interval is truncated to instead of . Also, in this argument we take

We now establish (9). By repeating the previous arguments, the left-hand side of (9) is equal to a main term

plus an error term which continues to be acceptable (indeed, the error term is slightly smaller than in the previous case due to the truncated nature of ). At this point in the previous section we applied Proposition 10, but that proposition was only available for the untruncated interval instead of the truncated interval . One could try to adapt the proof of that proposition to the truncated case, but then one is faced with the problem of controlling the truncated zeta function . While one can eventually get some reasonable asymptotics for this function, it seems to be more efficient to eschew Fourier analysis and work entirely in “physical space” by the following partial Möbius inversion argument. Write , thus . Observe that for any , the quantity equals when lies in and vanishes otherwise. Hence, for any function of and supported on squarefree numbers we have the partial Möbius inversion formula
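The displayed formula has not survived the transcription, but the mechanism behind such a partial Möbius inversion is the standard identity: the sum of μ(d) over squarefree divisors d of n whose prime factors all lie in a given set of primes equals 1 exactly when n has no prime factor in that set, and vanishes otherwise. A brute-force sketch of this identity (the helper names are mine, not from the post):

```python
from itertools import combinations

def prime_factors(n):
    """Set of distinct prime factors of n, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def restricted_mobius_sum(n, primes):
    """Sum of mu(d) over squarefree d | n with all prime factors of d in `primes`."""
    avail = sorted(prime_factors(n) & set(primes))
    # mu(d) = (-1)^(number of prime factors) for squarefree d
    return sum((-1) ** len(c)
               for r in range(len(avail) + 1)
               for c in combinations(avail, r))

# restricted_mobius_sum(77, [2, 3, 5]) == 1   (77 = 7 * 11 avoids {2,3,5})
# restricted_mobius_sum(30, [2, 3, 5]) == 0   (30 = 2 * 3 * 5 does not)
```

The sum over subsets of the available primes telescopes to zero unless no prime of the set divides n, which is exactly the indicator function used in the argument above.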

and so the main term (20) can be expressed as

We first dispose of the contribution to (21) when share a common prime factor for some . For any fixed , we can bound this contribution by

Factorising the inner two sums as an Euler product, this becomes

[UPDATE: The above argument is not quite correct; a corrected (and improved) version is given at this newer post.] The product is by e.g. Mertens’ theorem, while . So the contribution of this case is negligible.
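The appeal to Mertens’ theorem here is to the classical product asymptotic: the product of (1 − 1/p) over primes p up to x behaves like e^{−γ}/log x. A quick numerical sanity check of this (a sketch, not part of the argument; the function names are mine):

```python
import math

def primes_up_to(x):
    """Sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def mertens_product(x):
    """Product of (1 - 1/p) over primes p <= x."""
    prod = 1.0
    for p in primes_up_to(x):
        prod *= 1.0 - 1.0 / p
    return prod

EULER_GAMMA = 0.5772156649015329
x = 10 ** 5
# Mertens' third theorem predicts this ratio tends to 1 as x grows
ratio = mertens_product(x) / (math.exp(-EULER_GAMMA) / math.log(x))
```

At x = 10^5 the ratio is already within a fraction of a percent of 1, consistent with the slowly decaying error term in Mertens’ theorem.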

If do not share a common factor for any , then we can factor as . Rearranging this portion of (21) and then reinserting the case when have a common factor for some , we may write (21) up to negligible errors as

Note that we can restrict to be at most as otherwise the factors vanish. The inner sum

is now of the form that can be treated by Proposition 10, and takes the form

Here we make the technical remark that the translates of by shifts between and are uniformly controlled in smooth norms, which means that the error here is uniform in the choices of .

Let us first deal with the contribution of the error term. This is bounded by

The inner sum factorises as

which by Mertens’ theorem is (albeit with a rather large implied constant!), so this error is negligible for the purposes of (9). Indeed, (9) is now reduced to the inequality

Note that the factor increases very rapidly with when is large, which basically means that any non-trivial shift of the factors to the left by or will cause the integral in (22) to decrease dramatically. Since all the in are either equal to or bounded below by , this will cause the term to dominate in the regime when is large (or more precisely ), which is the case in applications.

At this point, in order to perform the computations cleanly, we will mimic the arguments from the previous section and take the explicit choice

for some integer and (and some smooth continuation to for negative ), and so

for positive . (Again, this function is not quite smooth at , but this issue can be dealt with by an infinitesimal regularisation argument which we omit here.) The left-hand side of (22) now becomes

The integral here is a little bit more complicated than a beta integral. To estimate it, we use the beta function identity to observe that

and

and hence by Cauchy-Schwarz

This Cauchy-Schwarz step is a bit wasteful when are far apart, but this seems to lead only to a minor loss of efficiency in the estimates. We have thus bounded the left-hand side of (22) by
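As an aside, the beta-function identity invoked a few lines above is the classical B(a, b) = ∫₀¹ t^{a−1}(1−t)^{b−1} dt = Γ(a)Γ(b)/Γ(a+b); it can be sanity-checked numerically (a sketch with my own helper names):

```python
import math

def beta_numeric(a, b, n=200_000):
    """Midpoint-rule approximation to the Beta integral B(a, b)."""
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
                   for i in range(n))

def beta_exact(a, b):
    """B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# e.g. B(3, 4) = 2! * 3! / 6! = 1/60, matched by the numerical integral
```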

It is now convenient to collapse the double summation to a single summation. We may bound

(since is less than the greater of and ) and observe that each has representations of the form , so we may now bound the left-hand side of (22) by

Note that an element of is either equal to , or lies in the interval for some natural number . In the latter case, we have

In particular, this expression vanishes if . We can thus bound the left-hand side of (22) by

then we have bounded the left-hand side of (22) by

when , while in general we have the *Buchstab identity*

as can be seen by isolating the smallest prime in all the terms in (23) with . (This inequality is very close to being an identity, the only loss coming from the possibility of the prime factor being repeated in a term associated to .) We can iterate this identity to obtain the following conclusion:

whenever and for some fixed , with the error term being uniform in the choice of .

*Proof:* Write . We prove the bound by strong induction on . The case follows from (24). Now suppose that and that the claim has already been proven for smaller . Let and . Note that whenever . We thus have from (25) and the induction hypothesis that

applying Mertens’ theorem (or the prime number theorem) we have

and the claim follows from the telescoping identity

Applying this inequality, we have established (22) with

We remark that as a first approximation we have

and

so in the regime , is roughly , which will be negligible for the parameter ranges of of interest. Thus the in this argument is quite close to the from (15) in practice.
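Stepping back, the Buchstab identity used in this argument is, in its classical sieve form, Φ(x, z) = Φ(x, w) − Σ_{w ≤ p < z} Φ(x/p, p), where Φ(x, z) counts integers n ≤ x with no prime factor below z: removing the n whose least prime factor p lies in [w, z) passes from the w-sifted count to the z-sifted count. A brute-force sketch confirming the classical form on small data (the weighted variant in the text works the same way; helper names are mine):

```python
def least_prime_factor(n):
    """Smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def phi(x, z):
    """Phi(x, z): count of 1 <= n <= x with no prime factor < z (n = 1 counts)."""
    return sum(1 for n in range(1, int(x) + 1)
               if n == 1 or least_prime_factor(n) >= z)

def buchstab_rhs(x, w, z):
    """Right-hand side of Buchstab: Phi(x, w) - sum over primes w <= p < z of Phi(x/p, p)."""
    ps = [p for p in range(w, z) if p >= 2 and least_prime_factor(p) == p]
    return phi(x, w) - sum(phi(x / p, p) for p in ps)

# e.g. phi(1000, 10) agrees exactly with buchstab_rhs(1000, 3, 10)
```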

Now we turn to (10). Fix . As in the previous section, we can bound the left-hand side of (10) as the sum of the main term

plus an error term

where is the quantity

is the polynomial , and was defined in (19). Using the hypothesis and Cauchy-Schwarz as in the previous section we see that the error term is negligible for the purposes of establishing (10). As for the main term, the same argument used to reduce (9) to (22) shows that (10) reduces to

Here, we can do something a bit crude; with our choice of , the integrand is non-negative, so we can simply discard all but the term and reduce to

(The intuition here is that by refusing to sieve using primes larger than , we have enlarged the sieve , which makes the upper bound (9) more difficult, but actually makes the lower bound (10) easier.) So we can take the same choice (16) of as in the previous section:

Inserting this and (26) into (11) and simplifying, we see that we can obtain once we can verify the inequality

As before, can be taken to be non-integer if desired. Setting to be slightly larger than we obtain Theorem 5.

** — 5. Using optimal values of (NEW, June 5, 2013) — **

We can do better than given above by using an optimal value of . The following result was obtained recently by Farkas, Pintz, and Révész, and independently worked out by commenters on this blog:

Theorem 14 (Optimal GPY weight) Let be an integer. Then the ratio, where is a smooth function with that is not identically vanishing, has a minimal value of

where is the first zero of the Bessel function . Furthermore, this minimum is attained if (and only if) is a scalar multiple of the function

*Proof:* The function , by definition, obeys the Bessel differential equation

and also vanishes to order at the origin. From this and routine computations it is easy to see that is smooth, strictly positive on , and obeys the differential equation

If we write , which is well-defined away from since is non-vanishing on , then obeys the Riccati-type equation

Now consider the quadratic form

for smooth functions with . A calculation using (29) and integration by parts shows that

and so , giving the first claim; the second claim follows by noting that vanishes if and only if is a scalar multiple of . (Note that the integration by parts is a little subtle, because vanishes to first order at and so blows up to first order; but still vanishes to first order at , allowing one to justify the integration by parts by a standard limiting argument.)
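For the reader’s convenience (the displayed equations in this proof have not survived the transcription), here is the classical Bessel equation of order ν, and the standard way a logarithmic-derivative substitution turns a linear second-order equation into a Riccati-type one; this is a reference sketch, not a reconstruction of the exact displays:

```latex
% Bessel's equation of order \nu, satisfied by y = J_\nu(x):
x^2 y'' + x y' + (x^2 - \nu^2)\, y = 0 .
% For any linear second-order equation
%   y'' + P(x)\, y' + Q(x)\, y = 0,
% the substitution u = y'/y yields the first-order Riccati-type equation
u' + u^2 + P(x)\, u + Q(x) = 0 .
```

(Indeed, if y = exp(∫u) then y''/y = u' + u², from which the second equation follows upon dividing the first-order form of the ODE by y.)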

If we now test (14) with a function which is smooth, vanishes to order at , and has a derivative equal to , we see that we can deduce from whenever

Using the known asymptotic

for and large (see e.g. Abramowitz and Stegun), this is asymptotically of the form

or

thus giving a relationship of the form that is superior to the previous relationship .
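The Abramowitz and Stegun asymptotic referred to here is j_{ν,1} ≈ ν + 1.8557571 ν^{1/3} + 1.0331503 ν^{−1/3} + … for the first positive zero of J_ν. A self-contained sketch (Taylor series plus bisection, no special-function library assumed; all names are mine):

```python
import math

def bessel_j(nu, x, terms=80):
    """J_nu(x) for integer nu >= 0, via the Taylor series at 0."""
    total = 0.0
    for m in range(terms):
        denom = math.factorial(m) * math.factorial(m + nu)
        total += (-1) ** m * (x / 2.0) ** (2 * m + nu) / denom
    return total

def first_bessel_zero(nu):
    """First positive zero j_{nu,1} of J_nu, by sign scan then bisection."""
    x, step = 0.5, 0.25
    while bessel_j(nu, x + step) > 0:   # J_nu > 0 on (0, j_{nu,1})
        x += step
    lo, hi = x, x + step
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if bessel_j(nu, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def zero_asymptotic(nu):
    """Leading terms of the Abramowitz-Stegun expansion of j_{nu,1} (nu >= 1)."""
    return nu + 1.8557571 * nu ** (1.0 / 3) + 1.0331503 * nu ** (-1.0 / 3)
```

For example, the asymptotic already matches the true first zero of J_10 (about 14.4755) to within a few parts in ten thousand.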

A similar argument can be given for Theorem 5, using of the form above rather than a monomial (and extended by zero to ). For future optimisation we consider a generalisation of in which the interval is of the form rather than , so that is now rather than . As before, the key point is the estimation of . The arguments leading to (22) go through for any test function , so we have to show

As has derivative equal to , this is

We need some sign information on :

Lemma 15On , is positive, is negative and is positive.

*Proof:* From (28) we have

By construction we already know that is positive on . The above equation then shows that is negative at , and that cannot have any local minimum in , so is negative throughout. To obtain the final claim we use an argument provided by Gergely Harcos in the comments: from the recursive relations for Bessel functions we can check that is a positive multiple of , and the claim then follows from the interlacing properties of the zeroes of Bessel functions.
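The interlacing property cited here (between consecutive positive zeros of J_ν lies a zero of J_{ν+1}, and vice versa) can be sanity-checked numerically; this sketch locates the first zeros by scanning a series evaluation of J_ν for sign changes (names are mine):

```python
import math

def bessel_j(nu, x, terms=40):
    """J_nu(x) for integer nu >= 0, via the Taylor series at 0 (adequate for x <= 9)."""
    return sum((-1) ** m * (x / 2.0) ** (2 * m + nu)
               / (math.factorial(m) * math.factorial(m + nu))
               for m in range(terms))

def zeros_of_j(nu, how_many, step=0.01, x_max=9.0):
    """First few positive zeros of J_nu, located by scanning for sign changes."""
    zeros, x = [], step
    prev = bessel_j(nu, x)
    while x < x_max and len(zeros) < how_many:
        cur = bessel_j(nu, x + step)
        if prev * cur < 0:
            zeros.append(x + 0.5 * step)
        prev, x = cur, x + step
    return zeros

# Interlacing: j_{nu,1} < j_{nu+1,1} < j_{nu,2}, e.g.
# 2.405 < 3.832 < 5.520 for nu = 0.
```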

Write , so is positive and by Theorem 14 we have

If , then as is negative and increasing we have

for , and thus by change of variable

for , and thus

for all . Similarly

for all . By Cauchy-Schwarz we can thus bound the integral in (30) by

and so (30) reduces to

Repeating the arguments of the previous section, we can reduce this to

and by further continuing the arguments of the previous section we end up being able to take

Also, the previous arguments allow us to take

The key inequality (11) now becomes

thus implies whenever (32) is obeyed with the value (31) of .

## 137 comments


3 June, 2013 at 9:25 am

Shun-Xiang Ouyang: Dear Terry, in the tags, “Yitong Zhang” should be “Yitang Zhang”.

[Corrected, thanks - T.]

3 June, 2013 at 9:26 am

Anonymous: Terry, amazing post. Near the start (it would take me a few hours to scroll back to exactly the right point) you say that , where of course you mean . In the formulation of MPZ, you have , the residue classes for which . It looks to me to be possible, from what I have read of Zhang’s paper so far, that all you need about these is that under the isomorphism of and , rather than that they come from the polynomial . I’m not too sure about that at this stage, though.

[Thanks for the correction! My feeling is that there are some Kloosterman sum estimates (using the Weil conjectures) which rely on some of the structure coming from the polynomial P, otherwise Zhang would be proving something close to the full Elliott-Halberstam conjecture which I don't think he is. But I haven't checked this in detail yet - T.]

3 June, 2013 at 3:14 pm

Terence Tao: I withdraw my earlier comment: on closer inspection (and consultation with Ben Green) it appears that indeed the fine structure of S(q) is not needed beyond the multiplicative property , the point being that is restricted to be smooth (only having prime factors less than ) and the effect of each small prime "disperses" itself across many moduli q, and so there is an opportunity to use the dispersion method (or Cauchy-Schwarz).

I am thinking of setting up an online reading seminar on the second half of Zhang's paper on this blog (perhaps as part of a broader Polymath project to improve the bounds). It looks like there are a lot of technical details here to discuss…

3 June, 2013 at 9:28 am

Kevin O'Bryant: Terry, you misspelled Sound’s name.

[Gah, that's embarrassing - I've known Sound for 24 years! Fixed now - T.]

3 June, 2013 at 9:32 am

Vít Tuček: “In this post we would like to give a self-contained proof of both Theorem 4 and Theorem 6, … ”

I believe you meant to say Theorem 4 and Theorem 5.

[Corrected, thanks - T.]

3 June, 2013 at 12:14 pm

Why is mathematics possible? | Combinatorics and more: […] Update: A description of Zhang’s work and a link to the paper can be found on Emmanuel Kowalski’s blog. Further update: A description of Zhang’s work and related questions and results can be found now in Terry Tao’s blog. […]

3 June, 2013 at 1:30 pm

Anthony: Dear Terry,

Thanks a lot for the post.

Right after Conjecture 1, you say that Zhang’s result implies that there exists some such that is admissible. You probably meant to say that there is such an admissible set with infinitely many translates that consist entirely of primes.

And a couple of lines later, it should be for all .

[Corrected, thanks - T.]

3 June, 2013 at 3:41 pm

castover: Terry,

Right before you make the definition of the singular series, you say “We then define the singular series associated to by the formula…” Do you mean to say “…associated to …?”

[Corrected, thanks - T.]

3 June, 2013 at 8:19 pm

Warren D. Smith: See this for a (probably easy) suggestion:

http://tech.groups.yahoo.com/group/primenumbers/message/25125

3 June, 2013 at 8:31 pm

Polymath proposal: bounded gaps between primes | The polymath blog: […] the explicit upper bound of 70,000,000 given for this gap. Since then there has been a flurry of activity in reducing this bound, with the current record being 4,802,222 (but likely to improve at least by […]

3 June, 2013 at 8:46 pm

Terence Tao: As the above link indicates, I’ve posted a proposal for a Polymath project to systematically optimise the bound on prime gaps and to improve the understanding of the mathematics behind these results. Please feel free to contribute your opinion on that post!

4 June, 2013 at 8:13 am

Terence Tao: This comment is implicit in the above post, but I thought I would highlight it again. Morally speaking, the above analysis shows that the conjecture will hold for any natural number for which one can find a smooth function vanishing to order at least at for which the inequality


holds

~~(with )~~ (with ), or equivalently if there exists a smooth function with such that the inequality holds. (This is cheating a little bit because there is also an error term in the analysis, but this term is exponentially small and is basically negligible for the purposes of optimising .)

Finding the optimal value of for which this inequality (*) holds is a calculus of variations / numerical spectral theory / ODE problem which appears to be quite computationally tractable. Right now we can obtain the value just by substituting in monomials , but these are not the optimal solutions and one should be able to do better, thus improving and hence the bound on the gap between primes.

[Edited to remove some sign conditions on g and f as they are unlikely to be essential. -T.]

4 June, 2013 at 10:11 am

Eytan Paldi: Perhaps f(x) may also have negative values? (Because the functional is independent of its sign.)

4 June, 2013 at 11:02 am

Terence TaoAh, you’re right, the condition that f be positive should not be relevant, since if f changed sign then one can replace f with its absolute value (and smooth out f a little bit where it crosses 0 to keep it smooth). So one can think of this as a pure quadratic program.

4 June, 2013 at 9:51 pm

Eytan Paldi: The condition (*) above is equivalent to saying that (for given ) the infimum of the ratio of the right integral over the left integral in (*) is less than the factor of the left integral. Denote the above infimum by (which of course depends on ). So we only need to find this dependence, which can be done as follows:

1. The Euler-Lagrange equation is the following Sturm-Liouville equation

on (0, 1)

where and

2. Note that integration by parts (using vanishing of at 0 and 1) shows that the Lagrange multiplier is the desired infimum!

3. In order to solve the equation I used the paper “A catalogue of Sturm-Liouville differential equations” by W. N. Everitt. On page 16 (Bessel equation: form 2) the solution is given (up to a multiplicative constant) by

where are Bessel functions of the first kind (those of the second kind also solve the equation but are avoided, to keep integrability near 0)

4. The boundary condition y(1) = 0 implies that

This is the desired dependence of on !

5. Since we need the least possible , we denote by j_n the first positive zero of , and get

Remark: In the Handbook of Mathematical Functions (Abramowitz and Stegun) there is an asymptotic expansion of j_n (for large n)

which may be useful to approximate .

6. The optimal is therefore the minimal for which (as given above) is less than the factor of the left integral.

4 June, 2013 at 10:42 pm

Terence Tao: Great, thanks for the calculations! I think there is a very minor typo: the formula for y should actually be

.

Otherwise it seems to check out. So, inserting this optimal value into (*), and ignoring the kappa issue, we can morally get from whenever

. (**)

Asymptotically this gives , which is significantly better than the asymptotics we currently have. I don't have easy access to software that can compute zeroes of Bessel functions here, but it should not be difficult to optimise (**) for and get the value of which should be between one and two orders of magnitude better than it is currently. (Then we have to put back in the term, but this should be fairly straightforward.)

4 June, 2013 at 11:03 pm

Eytan Paldi: I’m glad to see such improvement! Please note also that the last correction gives the solution a definite positive limit at zero (moreover, it is positive and even analytic on the interval !)

4 June, 2013 at 11:05 pm

v08ltu: I get that the above (**) is true for , using BesselJZeros in Maple(tm). The zero is at . For , I get that is optimal.

5 June, 2013 at 7:33 am

Terence Tao: Very good news! It is not yet a confirmed demonstration of for this choice of because the presence of the cutoff in to primes up to introduces a small error term to the basic inequality which should be so small as to be negligible, but this needs to be checked. I will start doing so. One fact which will come in handy is that the extremiser should be decreasing (or at least non-increasing) in x on [0,1]. I think this should be clear from variational reasons: if y is increasing and then decreasing in some region then one could reflect the portion of the graph of y above some threshold across the horizontal line associated to that threshold and obtain a superior ratio for the variational problem. It should presumably be deducible from the properties of Bessel functions or the differential equation obeyed by y as well.

While I can confirm the Bessel function computations, it would be nice to also have a way to explain the relationship through some simplified asymptotic approximation to the Bessel function. Does anyone know a good asymptotic approximation to a high order Bessel function in the interval from 0 to the first zero, which has size roughly ? The asymptotics I know about work for very small arguments or very large arguments but not for arguments of the order of the first zero.

ADDED LATER: I see now that the Pintz-Révész-Farkas paper has such an approximation, by a polynomial which (in our notation) is of the form for some cutoff function that localises to be comparable to some parameter which ends up being of order . The verification of the asymptotics is a bit messy though; actually it ends up being longer than the verification for the Bessel function!

4 June, 2013 at 10:44 pm

Eytan Paldi: A small correction: in step 3 above, the solution should be

Fortunately, it has no influence on the next steps.

4 June, 2013 at 11:13 pm

v08ltu: You have a typo, it is , not .

http://people.math.sfu.ca/~cbm/aands/page_371.htm

4 June, 2013 at 11:43 pm

Eytan Paldi: Thanks for the correction!

4 June, 2013 at 11:55 pm

Gergely Harcos: Your finding is in complete agreement with a recent paper of Farkas-Pintz-Révész. See Theorem 16 in http://www.renyi.hu/~revesz/ThreeCorr0grey.pdf

5 June, 2013 at 2:03 am

Eytan Paldi: Thanks for the information! (It gives more confidence in the result.)

BTW, this optimization problem is somewhat unusual in the sense that it has only ONE endpoint constraint; the second constraint (necessary to make a discrete spectrum for $\lambda$) is the integrability condition near 0 (which forces the coefficient of the independent singular solution Y_{k_0 - 2} to vanish).

4 June, 2013 at 11:48 pm

Johan Commelin: Dear Terence, that should be 1/1168, right? Instead of 1/1158.

[Corrected, thanks - T.]

4 June, 2013 at 9:58 am

Gergely Harcos: Dear Terry, please fix the link here: “which Ben Green and I used in our papers, e.g. in this one”.

[Corrected, thanks -T]

4 June, 2013 at 10:36 am

Warren D. Smith: Hi Terry, the question you just asked, of whether such a function f(x) exists on [0,1], is (if we make f(x) a fixed-finite-degree polynomial with undetermined coefficients) a QUADRATIC PROGRAMMING problem, which available software will solve.

4 June, 2013 at 10:42 am

Warren D. Smith: Well actually your demand f(x)>0 for every x with 0<x<=1 is an infinite number of constraints (one for each x), and further, if I did not restrict the polynomial degree, it would be an infinite-dimensional quadratic program. However, adaptive finite sets of constraints should approximate it fine (kind of like the Remez algorithm in approximation theory…) so this should be no problem.

4 June, 2013 at 10:59 am

Gergely Harcos: Small corrections to the proof of Lemma 7, namely that (ii) implies (i). We need to choose so large

that for , otherwise (ii) cannot be invoked.

In addition, should be not for but for .

[Corrected, thanks - T.]

4 June, 2013 at 11:00 am

Avi Levy: When you defined the singular series, you wrote instead of

[Corrected, thanks - T.]

4 June, 2013 at 11:02 am

Avi Levy: Sorry, how do you include latex in these comments (for the future)?

4 June, 2013 at 11:07 am

Gergely Harcos: Read “For commenters” on the bottom left corner of this page. I missed that, too!

4 June, 2013 at 11:58 am

Terence Tao: A technical comment: The parameter appears in two separate places in the conjecture (Conjecture 12 above); firstly (and most importantly) in the threshold for , and secondly as the upper bound for the interval of primes involved. For the purpose of optimising bounds, it may make sense to split this parameter into two, thus let be as in Conjecture 12 but with the interval replaced by . The analysis in the above post continues to then give assuming the same condition

as before, but the quantity should now be replaced by

This quantity is still exponentially small for as small as . So rather than prove as Zhang does, it may be numerically more advantageous to prove with very close to zero in order to raise the value of . Currently, Zhang proves whenever ; it seems like a good idea to trace through his argument for instead and see how many of the 's in the above expression are actually 's.

4 June, 2013 at 12:11 pm

Gergely Harcos: In Section 3, in the first two displays there should be no on the right hand side, because is included on the left hand side.

[Corrected, thanks - T.]

4 June, 2013 at 12:23 pm

Gergely Harcos: In the display before (11), should be on the right hand side.

[Corrected, thanks - T.]

4 June, 2013 at 12:44 pm

Online reading seminar for Zhang’s “bounded gaps between primes” | What's new: […] obtained for . The current best value of is , as discussed in this previous blog post. To get from to Theorem 1, one has to exhibit an admissible -tuple of diameter at most . For […]

4 June, 2013 at 1:03 pm

Gergely Harcos: Also, should be in the first two displays of Section 2.

[Corrected, thanks - T.]

4 June, 2013 at 11:35 pm

Gergely Harcos: In Remark 1, should be . In Remark 2, should be . In (19), there should be no in the denominator. Finally, to also add some content, the finding of Eytan Paldi above is in complete agreement with a recent paper of Farkas-Pintz-Révész. The best constant in (17) is given by Theorem 16 in http://www.renyi.hu/~revesz/ThreeCorr0grey.pdf

[Corrected, thanks! - T.]

5 June, 2013 at 9:11 am

Gergely Harcos: Dear Terry, somehow my last corrections did not materialize (Remark 1, Remark 2, and (19)) although you said you made the corrections.

[Sorry, they should appear now. -T]

5 June, 2013 at 9:53 am

Terence Tao: OK, I have now updated the main text to add a criterion for deducing from which should get very close to v08ltu’s value of . Unfortunately it is conditional on the following conjecture:

I can confirm this conjecture numerically for small but do not yet have a rigorous proof of it for large (e.g. ). It is not absolutely essential to have this conjecture in order to get a criterion, but it would be a much messier criterion if this were not the case.

Anyway, assuming this conjecture, we get whenever

where is the quantity

.

In the past, has been exponentially small; now that is a fair bit smaller than it was before, it may not be quite as small as it was before, but it should still have a fairly minuscule impact on the final value of . (There is a bit of slack in the estimation of in any case; it was not important to optimise it previously since it was already so tiny, but maybe we will have to do so soon.)

5 June, 2013 at 10:04 am

v08ltu: For and , I get that . Yet I agree that soon improvements will hit a barrier here.

5 June, 2013 at 10:09 am

v08ltu: Incidentally, if you take , then the above increases to , and by the time , it is about . So I think one cannot take too small.

5 June, 2013 at 10:27 am

v08ltu: On the other hand, my current best practice in estimation is that we will need essentially , which is not attenuated much on . Say and for effect (the ratio of 2 here is highly accidental). Then you win at at 5165. Taking and I think you win at .

5 June, 2013 at 10:58 am

Terence Tao: I checked the constraint for and v08ltu’s value of , and confirmed that obeys the criterion (barely!). So as long as we have the conjectured convexity of for this value of , we have confirmed .

5 June, 2013 at 12:04 pm

Gergely Harcos: Sounds great. Out of curiosity: what is the smallest we obtain with this under Elliott-Halberstam? The original construction of GPY gave .

5 June, 2013 at 3:43 pm

Terence Tao: Unfortunately still seems out of reach: we need to be less than 2 for the method to work on Elliott-Halberstam, and for this is 2.03532… – a touch too big.

(For , this quantity is 1.9194… instead.) Some really clever new idea is probably needed to get further here even on EH; I’m sure GPY and the other people in the area have tried all the obvious things here by this point.

5 June, 2013 at 10:11 am

Daniel Reem: A very minor remark: there seems to be a small typo in line 4 (34.429 instead of 34,429).

[Corrected, thanks - T.]

5 June, 2013 at 1:22 pm

Gergely Harcos: The Conjecture is now proved; see my response below.

5 June, 2013 at 10:31 am

Daniel Reem: In one of the comments above (from 4 June, 2013 at 8:13 am) it was written that an error term should appear but it should be exponentially small. Something similar was written in a comment (June 5, 2013)

in the relevant post of Scott Morrison (from May 30, 2013) mentioned above. Is it possible to give more details on this issue? In particular, to explain why and where this error term appears? And is there any argument or proof showing that this error term must be exponentially small or at least sufficiently small so it will not affect the final value of ?

Thanks in advance.

5 June, 2013 at 10:36 am

Terence Tao: The error term (which, incidentally, is denoted (kappa) rather than ) is not present in the original work of Goldston-Pintz-Yildirim (Section 3 in the blog post) but arises in the modification of the Goldston-Pintz-Yildirim argument due to Motohashi-Pintz and Zhang (Section 4, see also the updated version at Section 5). Motohashi, Pintz, and Zhang modify the Goldston-Pintz-Yildirim sieve by throwing out all numbers that are divisible by large primes (primes larger than a certain threshold ). It is necessary to discard these primes in order to have a chance of proving the input conjecture (otherwise one has to prove the much more difficult conjecture , which remains open). But by throwing out these numbers one enlarges the sieve by a small factor (a precise formula for this quantity is now given in my previous comment at 9:53 am). In practice this quantity is very small (it is about ) although, ironically, because we have been making so many improvements in the factor, the error is not as small as it used to be, and we may soon have to take a look at optimising further in order to get the best results.

5 June, 2013 at 11:07 am

Daniel Reem: Thanks for the information. I have missed some details in previous readings. By the way: does the derivation of from equation (30) depend on the partly conjectured Lemma 15? And is the value which is mentioned above equation (30) optimal, or can better values perhaps be obtained? And is there any estimation on the term in the inequality above?

5 June, 2013 at 11:40 am

Terence Tao: Yes, at present the value of in (30) presumes Lemma 15; without it one would get a messier expression for . There is certainly scope for improvement in the bound for (in particular, the factors of 2 and 3 appearing there might be removable with a finer analysis), though we are not yet at the regime where this leads to decisive improvements in the final bounds.

The o(1) term goes to zero as x goes to infinity; since the conjecture is only an asymptotic assertion (as is the assertion that there are infinitely many prime gaps of size at most H), the o(1) term can be utterly neglected for the purposes of optimising k_0 or H. (It would however be important for the question of obtaining quantitative estimates on how often these prime gaps occur.) The way the argument in this blog post is set up, particularly with its reliance on the W-trick, the decay rates on the o(1) factor will be terrible. (There is some discussion on quantitative bounds at the end of this recent paper of Pintz: http://arxiv.org/abs/1305.6289 .)

5 June, 2013 at 12:01 pm

v08ltu: **(It would however be important for the question of obtaining quantitative estimates on how often these prime gaps occur.)**

As it currently stands pedantically, Zhang uses ineffective Bombieri-Vinogradov. As noted in Section 12 of GPY Primes in Tuples II, one can circumvent Siegel zeros, due to the result of Heath-Brown that they (in a suitable sense) imply infinitely many prime twins.

5 June, 2013 at 1:03 pm

Daniel Reem: Thanks again. Here is a minor question before I will probably take a break (I apologize in advance if I miss something trivial here!):

Are there any assumptions or known estimates on which ensure, in particular, that the number of terms in (30) is at least 1?

5 June, 2013 at 12:10 pm

JamesM: A couple of quick comments:

Instead of taking as above (which one typically estimates with complex integrals), you can diagonalise the sum using the ‘elementary method’ which goes back to Selberg. This produces the same asymptotic expressions, but at least in the case it allows for a slightly simpler argument to bound the error term , which might be useful if is no longer trivially small.

(I got something like for )

I think, however, that is roughly the right order of magnitude of , and beyond this one would want to use approximate asymptotic formulae. Instead of obtaining with the error , one would obtain , where is defined by the delay-differential equation

This might be useful in dealing with the terms if becomes small.

5 June, 2013 at 12:42 pm

Gergely Harcos: I believe the conjecture follows from the fact that its second derivative equals .

5 June, 2013 at 1:06 pm

Gergely Harcos: Some details on this claim: by the usual Taylor series expansion of the -Bessel function, , hence differentiating twice yields the same expression times , with replaced by .

5 June, 2013 at 1:20 pm

Terence Tao: Great! That was easier than I expected. So it looks like is now confirmed! I will pass this on to the “H” team. :)

5 June, 2013 at 1:20 pm

Gergely Harcos: To finish the proof of the Conjecture (in order to take ), it suffices to show that the first zero of precedes the first zero of . This follows from the more general fact that the zeros of the -Bessel functions are interlaced; see Section 15.22 in Watson: A Treatise on the Theory of Bessel Functions (2nd ed., Cambridge University Press, 1944).
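[The interlacing of Bessel zeros cited above is also easy to sanity-check numerically. Below is a stdlib-only sketch (the orders 3 and 4 are purely illustrative stand-ins, not the actual order in play here): it evaluates J_n by its Taylor series, brackets sign changes by scanning, and verifies that the first few positive zeros of J_3 and J_4 interlace.]

```python
import math

def bessel_j(n, x, terms=60):
    """J_n(x) via its Taylor series sum_{m>=0} (-1)^m (x/2)^(2m+n) / (m! (m+n)!)."""
    total = 0.0
    for m in range(terms):
        total += (-1) ** m * (x / 2.0) ** (2 * m + n) / (
            math.factorial(m) * math.factorial(m + n)
        )
    return total

def first_zeros(n, count, step=0.01, x_max=20.0):
    """Locate the first `count` positive zeros of J_n by scanning for sign
    changes and refining each bracket with bisection."""
    zeros = []
    x = step
    prev = bessel_j(n, x)
    while x < x_max and len(zeros) < count:
        x += step
        cur = bessel_j(n, x)
        if prev * cur < 0:  # sign change: a zero lies in (x - step, x)
            lo, hi = x - step, x
            for _ in range(60):
                mid = (lo + hi) / 2
                if bessel_j(n, lo) * bessel_j(n, mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            zeros.append((lo + hi) / 2)
        prev = cur
    return zeros

# Zeros of consecutive-order Bessel functions interlace:
# j_{n,1} < j_{n+1,1} < j_{n,2} < j_{n+1,2} < ...
z3 = first_zeros(3, 4)
z4 = first_zeros(4, 4)
assert all(z3[i] < z4[i] < z3[i + 1] for i in range(3))
```

[Scanning in steps of 0.01 is safe here because consecutive zeros of a Bessel function are separated by roughly pi, so no bracket can contain two of them.]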

5 June, 2013 at 1:32 pm

Eytan Paldi: In the proof of Theorem 14, the definition of g_0 after (27) applies only on the interval [0, 1) – so the integration by parts is somewhat more delicate (exploiting the fact that g_0, as the logarithmic derivative of the entire function f_0, has a SIMPLE pole at 1, so the result follows from the fact that a*g_0*f^2 tends to zero at the endpoints).

5 June, 2013 at 1:45 pm

Terence Tao: Ah, that’s a good point; I’ve updated the blog post to reflect this subtlety (and also to incorporate Gergely’s nice proof of the convexity).

6 June, 2013 at 1:32 am

Eytan Paldi: A minor correction: After (27), f_0 is non-vanishing over [0, 1) (not [0, 1]) – so g_0 is well defined only over [0, 1).

(BTW, the identity about Q(f) is of “Wirtinger type” – see e.g. the remarks about Wirtinger’s inequality in Section 7.7 of “Inequalities” by Hardy, Littlewood and Pólya.)

[Corrected, thanks - T.]

5 June, 2013 at 2:10 pm

Gergely Harcos: Actually, the interlacing-of-zeroes argument yields a generalization of Lemma 15 (with a simpler proof): on , the -th derivative of has sign .

7 June, 2013 at 1:39 am

Eytan Paldi: In fact, using your idea of differentiating the Taylor series of f_0 (x), it is easy to verify that its n-th derivative is

7 June, 2013 at 8:15 am

Gergely Harcos: You probably meant in place of .

7 June, 2013 at 8:28 am

Eytan Paldi: Somehow it was printed with that typo; as you can see, I just sent a correction.

(Thanks for seeing that!)

7 June, 2013 at 8:24 am

Eytan Paldi: A minor typo: Instead of k_0*n (both in the exponent of x and the order of J) it should be k_0 + n.

5 June, 2013 at 2:19 pm

Terence Tao: A small remark regarding the issue: the Motohashi-Pintz-Zhang restriction of the GPY sieve to smooth numbers enlarges the sieve, and thus increases the quantity (roughly speaking, this measures the density of the sieve) by the factor . But it also increases the quantity (roughly speaking, the normalised density of primes shifted by h in the sieve) by more or less the same factor . This increase is simply discarded in the current analysis, but if we really had to, we could compute very precisely the kappa-factor on both the alpha and beta sides of the key inequality (11), and so it is conceivable that in the final analysis kappa more or less cancels itself out and is thus much less of a problem than feared.

5 June, 2013 at 7:50 pm

More narrow admissible sets | Secret Blogging Seminar: […] A reading seminar on Zhang’s Theorem 2. 2) A discussion on sieve theory, bridging the gap between Zhang’s Theorem 2 and the parameter k_0. 3) Efforts to find narrow […]

6 June, 2013 at 6:48 pm

Gergely Harcos: I just noticed this new preprint by János Pintz, http://arxiv.org/abs/1306.1497, where he proves that is admissible in place of Zhang’s . This allows him to take . What value for do we get by using the optimized GPY weights of Theorem 14?

6 June, 2013 at 8:32 pm

Terence Tao: In our notation, Pintz proves with and . So we have to minimise such that

where is the quantity

.

I’ll fire up Maple and try to figure this out (starting by solving the easier problem and then increasing if necessary), but perhaps some independent confirmation would be good to have as well. Certainly we can already do better than Pintz’s k_0=60,000, being currently at 34,429 (Pintz uses the older style monomials instead of the Bessel function).

6 June, 2013 at 8:56 pm

Terence Tao: Well, unfortunately is now big enough to cause a problem again, in large part because Pintz took to be so small; without we could take as low as 9128, but is huge then. In fact, even with our current value of , is roughly of order 1, so in fact we get no gain at all! But this is an artificial problem for two reasons: firstly, we can do better with kappa; and secondly, it should be possible to analyse Pintz’s preprint (only 9 pages) to extract exactly what is required on and we can do a multivariable optimisation in to proceed. I’ll go through Pintz’s paper now and see what we get…

6 June, 2013 at 9:08 pm

Terence Tao: OK, so Pintz’s conditions on and (the latter he calls ) are

and

.

If we set , we can get ; this makes a little worse than Pintz's value of , but is six times better, which will help substantially with our current setup. Back to Maple…

Incidentally, the last section of Pintz suggests that one should be able to take , which is a little bit better than our current value (which is more like ), but I don't know how much this will help us.

6 June, 2013 at 9:16 pm

Terence Tao: OK, I computed that works with this choice, but it would be good to get some confirmation. is relatively small here (about ) but this is still enough to have some impact; without one could lower to . So it looks like it is time to work on bounding again!

6 June, 2013 at 11:35 pm

Terence Tao: I worked through the details of the Motohashi-Pintz sieve as used in Pintz’s most recent paper (it’s similar to what JamesM described in a previous comment). It seems to work fine if we replace the monomial weights by Bessel weights. It appears that the key inequality now becomes

with

.

With this, becomes smaller again and one should be able to reach with [EDIT: not quite; Maple reports one gets down to 11018 instead]; and by optimising subject to the constraints and one should be able to get a little better still.

I'll try to write up the details of the Motohashi-Pintz sieve in a separate blog post.

6 June, 2013 at 11:53 pm

v08ltu: As far as I can tell, there is no real reason to take less than . In the 1/200 part, Pintz is just writing down something that works. In fact, in 2.8 and 2.13 he copies what I think is an error in Zhang, namely when I get . This leads into the right half of the max in 2.18, but I don’t think it matters as one wins easily enough in this comparison of the 2.18 components to the of 2.19. You should really compare the 2.18 exponents to those in 2.19, rather than use the convenient 1/200 and statements.

7 June, 2013 at 12:07 am

v08ltu: For that matter, although I would trust Pintz over myself at this point, I am not sure why he merely imports Zhang’s (7.2) in this 2.19 statement (maybe “3” can be 2+epsilon by choosing more subtly?), other than that this is the Type II border, and earlier he indicated that only Type I was critical. Maybe even the as the Type I/II frontier as used in 2.19 could be changed if needed. But since he expects to win here by a margin, there is no need to optimize, is my conclusion.

7 June, 2013 at 12:40 am

v08ltu: If I understand the game correctly, I can take and win. Note that taking is not going to help anyway.

7 June, 2013 at 12:50 am

v08ltu: Slight improvement, . From what I can tell, this is at least a local optimum, and cannot be had.

7 June, 2013 at 6:55 am

Terence Tao: Great! Are you using the Motohashi-Pintz sieve formulation (with instead of the clunkier value I had in my blog post)? Also, if you could make explicit precisely the linear constraints on that you have extracted from Pintz’s paper, that would be useful for future optimisation (I take it that you have deleted the constraint , but kept the main constraint ?). Since you mentioned a possible correction to Zhang/Pintz, it would be good to update what the constraints are here.

Basically, it looks like Pintz has done our near-term work for us in that he has optimised Zhang's argument in the parameters that we were beginning to optimise in. It seems the next major improvement will have to come from somehow reducing the number of Cauchy-Schwarz steps in the analysis (which is something Ben and I are looking forward to, having some experience in that area…).

7 June, 2013 at 11:44 am

v08ltu: Yes, I just plugged into exactly the formula you listed at 11:35pm, eventually figured out that one simply sets and then just minimized with respect to . I think Pintz is probably correct. He gets the 1/16 that I now determine shows up (which Zhang also has, though he writes the argument after Cauchy and has 31/32) from the “critical” case that Ben Green and Kowalski have both mentioned, and I don’t doubt that he counted the multiples of correctly given the efforts he made. He seems to have solved for the Type I/III barrier (which is the crucial one, as he says) based on the last line of 2.6 (spilling to the top of page 4) and 2.40, just setting the inequalities to be equalities and solving . I strongly suspect that any glitches elsewhere are irrelevant (for instance, in 9.9 of Zhang I get not ).

6 June, 2013 at 9:13 pm

Michael: Maple tells me that , so the inequality is only true for

7 June, 2013 at 7:33 am

Gergely Harcos: You probably miscalculated. For and we get .

7 June, 2013 at 7:35 am

Terence Tao: I think Michael was using the older formula for , which was somewhat less efficient than the new formula , being about one to two orders of magnitude bigger, it seems, in current ranges of .

7 June, 2013 at 2:09 pm

v08ltu: Pintz from 2.35 to 2.39 (the key part) is a bit hard to parse. What he wants to say in 2.39 is that 2.35 is satisfied if , but it comes out garbled IMO.

Also, it is wrong. He should have in 2.39, not (compare: one subtracts the exponents in 2.37 and 2.38, getting for , so should be for , not 3/2 as asserted).

So I only get as the critical inequality now (I think his choice of is already optimal, but am not sure). Back to the optimization technique…

7 June, 2013 at 2:18 pm

v08ltu: Actually, I got the subtraction mixed up. It should even be , not (Pintz gets the sign wrong too), so now I have . So Pintz is really not gaining that much over Zhang in the end (if I am correct). In short, he should be subtracting exponents (2.38) minus (2.37), but he adds the -parts (and it seems that 2.38 is correct as written, though its input 2.20 has an obvious typo, not putting the RHS in an exponent).

7 June, 2013 at 2:24 pm

v08ltu: PS, this latter statement matches what I got from a rudimentary (ignore ) plugging in of the “critical” case into Ben Green’s analysis on the Reading seminar page, which I also determined was equivalent to what Pintz had written before his error (in fact, I ran across the Pintz problem while trying to reconcile the two).

There one gets (after multiplying BG’s by 2) that , and with from Type I/III Pintz optimization (w/o ‘s), you do indeed (w/o ‘s) get , or as my re-do of Pintz derived.

7 June, 2013 at 8:13 am

Eytan Paldi: Since the exponents and obey linear inequality constraints, it seems that the minimization of (with respect to these two exponents or “control parameters”) is at an edge of the “admissible polygon” for them – which may explain their rational values?

(This happens if the minimal k_0 as a function of (,) is CONCAVE; otherwise the optimal exponents can be in the interior of the polygon – and may even be irrational!)

In any case, I suggest first trying the exponent pairs on the edges of the admissible polygon, and later seeing whether may be improved inside the polygon.

7 June, 2013 at 8:46 am

Eytan Paldi: Perhaps it is better to first try, as exponent pairs, the vertices of the admissible polygon for them.

7 June, 2013 at 10:08 am

Gergely Harcos: Just checking if I understand the optimal right. It should vanish to order at , and its th derivative should be . The second condition implies that , where is a polynomial of degree at most . We can choose so that vanishes to order at , and then implies that in fact vanishes to order at . Am I right? Also, do we really need for to work?

7 June, 2013 at 1:18 pm

Terence Tao: Yes, this should be the right choice for , with being the Taylor polynomial at needed to enforce the vanishing. It makes for a rather strange formula for , but I think this is an artefact of the choice of sieve; the elementary Selberg sieve works directly with rather than (I’m in the process of writing up the details).

7 June, 2013 at 1:27 pm

Gergely Harcos: Thank you! By “writing up”, do you mean the new blog entry that will also incorporate/describe Pintz’s improvement? At any rate, I look forward to it!

7 June, 2013 at 3:03 pm

v08ltu: If my statements about the error in Pintz are correct, then under the constraint , the best I find is and . Of course here.

7 June, 2013 at 3:09 pm

bengreen: I think there is little choice here but to work through the Type I and II analyses in the same way as for the Type III – hopefully the dependence of these parameters will then become crystal clear. It will happen in time….

7 June, 2013 at 4:44 pm

Eytan Paldi: I noticed that for your optimal and the inequality constraint becomes equality. Is there any reason for that? (Perhaps k_0 as a function of these two parameters is concave?)

7 June, 2013 at 6:14 pm

Gergely Harcos: I think someone familiar with the details could/should simply ask Pintz.

8 June, 2013 at 7:46 am

Terence Tao: By the way, do you know if there are still any further constraints on beyond the condition and ? I know that the condition has been deleted, but I wanted to confirm that there are no other conditions which are not already implied by the ones above.

8 June, 2013 at 12:53 pm

v08ltu: There is some upper bound on , for otherwise Type I/III would not be the important barrier (rather, Type II would invade), but I think it is so large (like 1/200) that it makes no difference. The quirky is convenient but not needed, and is satisfied anyway for optimal parameters.

8 June, 2013 at 1:24 pm

bengreen: Terry, it seems as though is by far the strongest condition these parameters need to satisfy. It comes from the boundary between the Type I and III conditions. I’ve put details of all these computations over at the other blog.

7 June, 2013 at 8:57 pm

Terence Tao: I’m encountering some difficulty verifying a claim in the most recent paper of Pintz http://arxiv.org/pdf/1306.1497v1.pdf , as well as a related claim in an earlier paper of Motohashi and Pintz http://arxiv.org/pdf/math/0602599v1.pdf , regarding the value of kappa. (This is unrelated to the issues that v08ltu has raised recently in the Pintz paper.) In particular, in (3.1) of the most recent Pintz paper it is claimed that kappa is as small as

which can be bounded above by ; there is also a closely related claim in (5.14)-(5.15) of Motohashi-Pintz, in which it is claimed that the quantity given by (5.14) can be estimated by the quantity in (5.15) (and the latter Pintz paper refers to this step of the Motohashi-Pintz paper to justify the claim (3.1) of the Pintz paper).

However, I can't figure out how Motohashi and Pintz get from (5.14) to (5.15), because (5.14) contains a difference of two summations, and the main terms from both summations (after applying arguments analogous to (4.12)-(4.15) as suggested in the text) largely cancel each other out, leading to an expression that is smaller than claimed in (5.15), in which (after ignoring terms of size of the main terms) the factor

is instead replaced by

Unfortunately this expression can be significantly less than (*) in the regime , which seems to be the region that Pintz has (with , , ).

In Motohashi-Pintz they suggest trying to adapt (4.15) to deal with the second of the two summations to try to capture an exponential decay, but (4.15) involves a factor like rather than and so is not directly applicable for the purposes of estimating (5.14).

I'm planning to contact Pintz about this issue; I may be missing something obvious, but at present I can't reconstruct enough of Pintz's argument to make any of our claims that require the value for to work, and these claims will have to be regarded as provisional (particularly given the issues that v08ltu is raising). So any confirmed value of will, for now, have to come using the older value of given in http://terrytao.wordpress.com/2013/06/03/the-prime-tuples-conjecture-sieve-theory-and-the-work-of-goldston-pintz-yildirim-motohashi-pintz-and-zhang/#comment-232840 rather than the Pintz value of .

Hopefully all this confusion will be cleared up soon, but until then a lot of the most recent world records will unfortunately have to have a question mark attached to them on the wiki… (in particular, even v08ltu’s most recent value 25,111 for k_0 may need to be recalculated.)

7 June, 2013 at 9:43 pm

v08ltu: I think he is already violating 4.2 if I understand correctly, where is demanded, but when this says . This feeds into 4.15.

7 June, 2013 at 9:52 pm

v08ltu: From what I can tell, in the passage to the last two lines of 4.15, the meat is . Taking logs and shows this to work asymptotically, but it is less apparent in the desired range of .

8 June, 2013 at 6:20 am

Terence Tao: I think the issue is that estimates of the form (4.15) are not actually applicable in the context required, namely for estimating the second sum in (5.14). The difference looks tiny but is significant: in (5.14) there is a factor of , whereas in (4.15) there is the smaller factor of (with in the latter sum playing basically the same role that plays in the former sum; in particular, both expressions are to sum up to ). In both cases the dominant contribution comes when the summation variable ( or ) is close to its upper limit of , so one sees that the sums of the type in (4.15) are actually a lot smaller than those in (5.14). One can get the type decay for the former, but I don’t think one can for the latter.

I am going to write up an erratum page recording all issues reported so far in relevant papers on the wiki, and also I am going to have a blog post on the sieve theory surrounding this particular issue (I had planned to have it ready yesterday, but then I ran into the above mathematical issue and this complicated things).

7 June, 2013 at 11:03 pm

v08ltu: If I did everything correctly, with the verified expression (30) and , I get and is best. Here and .

8 June, 2013 at 6:26 am

Terence Tao: I have verified that the above choices of do indeed give the claimed value of and that (30) is satisfied. Given that Ben and v08ltu have independently found as the optimised criterion coming out of Zhang’s paper, I think we can now report the value as confirmed.

8 June, 2013 at 5:50 am

bengreen: Just to say that v08ltu and I seem to have independently arrived at the constraint as being the optimal thing coming from Zhang’s argument in its present form. This (for me at least) is based on the assumption that the worst part of his argument is where the so-called Type I and Type III estimates meet. Details on the other blog.

8 June, 2013 at 1:34 pm

v08ltu: I think it is really that you agree with my correction of Pintz. I trusted his value of in the Type I/III border, and it seems you rederived it.

8 June, 2013 at 7:18 am

Weekend Reading | A Flustered Monkey Writes: […] http://terrytao.wordpress.com/2013/06/03/the-prime-tuples-conjecture-sieve-theory-and-the-work-of-go… […]

8 June, 2013 at 12:56 pm

The elementary Selberg sieve and bounded prime gaps | What's new: […] post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project to improve the various parameters in […]

8 June, 2013 at 1:02 pm

Terence Tao: It’s time to close this thread and roll over to the new thread at

http://terrytao.wordpress.com/2013/06/08/the-elementary-selberg-sieve-and-bounded-prime-gaps/

to continue the discussion of how to optimise from a given range of , and how to deal with the error term. We now have two slightly different approaches, based on what I call the “analytic Selberg sieve” and the “elementary Selberg sieve”, both truncated to smooth moduli; there is certainly scope for optimisation.

10 June, 2013 at 10:40 am

A combinatorial subset sum problem associated with bounded prime gaps | What's new: […] , which in turn control the best bound on asymptotic gaps between consecutive primes. See this previous post for more discussion of this. At present, the best value of is applied by taking sufficiently […]

11 June, 2013 at 9:54 pm

Further analysis of the truncated GPY sieve | What's new: […] This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project. As the previous post was getting somewhat full, we are rolling the thread over to the current post. We also take the opportunity to correct some errors in the treatment of the truncated GPY sieve from this previous post. […]

12 June, 2013 at 1:03 pm

Estimation of the Type I and Type II sums | What's new: […] we prove part (i) of Theorem 4. By an overspill argument (cf. Lemma 7 of this previous post) it suffices to show […]

13 June, 2013 at 3:55 am

anant1964: A question from a complete outsider: if the twin-prime conjecture in its original form is eventually proved, would it also imply that there are infinitely many pairs of primes that differ not just by 2, but also infinitely many pairs that differ by 4, by 6, by 8, …? In other words, some kind of analog of your celebrated theorem with Green? Would the present approach allow one to infer such a result, constructive or not?

13 June, 2013 at 9:41 am

Anonymous: Zhang’s proof works by first proving that there are at least 2 primes in infinitely many translates of a certain “admissible tuple” with k_0 elements (see the introduction of the article). The difference result is a consequence of picking “narrow” tuples with that number of elements. One could conceive that by pushing k_0 down to 2 we would get all admissible 2-tuples (0, 2m) for any m, and the twin prime conjecture (m=1) together with every even difference would immediately follow. But unfortunately the present methods are based on a weaker version of the Elliott-Halberstam conjecture, which even in its strongest version could only get k_0=6 with the narrowest tuple (0,4,6,10,12,16) and maximum difference 16.
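[Admissibility, as defined in the main post, is mechanically checkable: only primes p ≤ k can possibly be covered by a k-tuple, so one tests finitely many primes. A minimal sketch (the 3-tuples below are illustrative; the 6-tuple is the narrow one named in the comment above):]

```python
def is_admissible(tup):
    """A k-tuple is admissible iff for every prime p <= k it avoids at least
    one residue class mod p. Primes p > k are automatically avoided, since
    the k elements cannot cover all p residue classes."""
    k = len(tup)
    primes = [p for p in range(2, k + 1)
              if all(p % d for d in range(2, p))]  # trial-division primality
    return all(len({h % p for h in tup}) < p for p in primes)

assert is_admissible((0, 2, 6))               # avoids 1 mod 2 and 1 mod 3
assert not is_admissible((0, 2, 4))           # covers all residues mod 3
assert is_admissible((0, 4, 6, 10, 12, 16))   # the narrow 6-tuple above
```

[Trial division suffices here because the primes involved never exceed the tuple length k, which is tiny compared to the k_0 values in the discussion.]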

13 June, 2013 at 9:43 am

Terence Tao: Not directly, but I would be extremely surprised if any future proof of the twin prime conjecture could not be easily modified to also prove an infinitude of prime pairs p, p+k for any even k. (Numbers k for which this statement holds are known as Polignac numbers; this recent paper by Pintz http://arxiv.org/abs/1305.6289 gives a quite comprehensive discussion of what exactly we can know or hope to know about such numbers with current methods.)

18 June, 2013 at 6:19 pm

A truncated elementary Selberg sieve of Pintz | What's new: […] this new truncation we need to review some notation. As in all previous posts (in particular, the first post in this series), we have an asymptotic parameter going off to infinity, and all quantities here are implicitly […]

19 June, 2013 at 8:47 pm

Pace Nielsen: I’m finally taking the time to read through these arguments thoroughly, and am enjoying it immensely. Here are some minor corrections I found.

In the second offset equation after is defined, there is a right parenthesis missing after .

In Proposition 10, I believe you want the dimension to be fixed. (At least, that was the assumption stated for (6).)

In the second line of the offset equation following (10), I believe the two ‘s should be ‘s.

[Corrected, thanks - T.]

20 June, 2013 at 11:12 am

Pace Nielsen: A few more minor corrections, and some questions.

1. In the two offset equations above where is defined, there are four instances where should be .

2. When describing the constraint , instead of lying in one of the residue classes , I believe you want it in .

3. You write “Repeating the arguments for (9), we may expand the left-hand side of (9) as…”. The second (9) should be a (10).

4. After Remark 1, when dealing with the error term for (10), it appears that you are assuming is bounded by a fixed constant. (Or perhaps I’m not seeing the reason that the two terms can be incorporated into the big-O.)

5. In the last paragraph of Remark 2, “unconditional” is misspelled.

—

Question 1: I’ve now read through Section 3. Would you advise me to skip Section 4, and move on to the “The elementary Selberg sieve and bounded prime gaps” post?

Question 2: Chen Jingrun’s theorem on twin primes also had a counterpart for Goldbach’s conjecture. Is there something similar here? Can these sieve methods be modified to show that there is a fixed number for which two of are simultaneously prime?

Question 3: Is there a way to modify the sieve to take into account non-admissible tuples? In other words, suppose we want to consider something stronger than the prime tuples conjecture, such as the claim that the set is “as prime as possible infinitely often.” (One meaning could be that (simultaneously) and are prime and is 6 times a prime, infinitely often.) In this way, one might be able to generalize the result a bit.

20 June, 2013 at 11:55 am

Terence Tao: Thanks for the corrections! I had neglected to specify that the function was fixed, which was why it could be bounded by O(1).

Section 3 is more or less obsolete, but I think some later posts refer to some of the calculations in this section, so one may still have to jump back to this section for later reading.

Yes, the methods here should extend to other patterns of linear forms than just shifts of a single tuple. Unfortunately these more general patterns don’t seem to easily imply a result as clean as “bounded prime gaps” though; for instance, it doesn’t look like one can show that every positive integer is O(1) away from being the sum of two primes by this method.

For non-admissible tuples, I think basically things just fragment into non-interacting admissible tuples and one doesn’t really gain anything. For instance, with , one should think of this set as really being when n is odd and when n is even for the purposes of trying to capture primes. Intuitively there should be no gain on the odd side that comes from fancy sieving on the even side or vice versa. (The W-trick automatically reduces to a single residue class to get rid of all of these local phenomena in any event.)

20 June, 2013 at 1:18 pm

Pace Nielsen: On the non-admissible tuple I realized that another way of dealing with it (and keeping both the 2-adic and 3-adic information) would instead be to work with the tuple (where we think of ). So nothing new here.

On the Sophie Germain-type sequence, if I’m understanding your claim correctly (and after modifying the sequence slightly), that may give another proof of case 1 of FLT for infinitely many exponents.

30 June, 2013 at 12:39 pm

Bounded gaps between primes (Polymath8) – a progress report | What's new: […] out to be given in terms of a certain Bessel function, and improves the relationship to ; see this post for details. Also, a modification of the truncated Selberg sieve due to Pintz allows one to take […]

16 July, 2013 at 7:19 am

Stijn Hanson: I think I’m missing something in the proof of Lemma 9, where it is stated:

and while I certainly get that, it seems to be a very weak bound, as I’m getting:

which seems about right as tends to infinity as it approaches 1 from the right. It just seems like such a wasteful bound that I fear I’m misunderstanding something.

16 July, 2013 at 7:33 am

Stijn Hanson: EDIT: Nope, I got confused between << and O. I really can't see how that equality is arrived at.

16 July, 2013 at 8:17 am

Terence Tao: Ah, that was a typo; the estimate should indeed be (which also fixes the next step of the argument). It’s corrected now.

[Incidentally, it can be helpful for things like this to develop some mathematical reading strategies that are robust with respect to typos and do not cause mental "compilation errors", as I discuss here.]

2 August, 2013 at 2:25 am

Stijn Stefan Campbell Hanson: I think my problem is that I can’t actually see how the next line follows anyway. It says to take logs, which would give something along the lines of:

and I can quite easily see how the R.H.S. becomes but that would then require:

or at least some asymptotic relationship.

Trying to take your advice, it’s clear how is necessary, but I just have no earthly idea how it’s derived. My apologies if these questions are stupid.

2 August, 2013 at 7:54 am

Terence Tao: By Taylor expansion, when is small.

2 August, 2013 at 8:13 am

Stijn Stefan Campbell Hanson: Of course, I was trying to use the full Taylor expansion. Sorry, I’m still not used to what I can throw away and what I can’t in these sorts of calculations.

16 August, 2013 at 2:43 pm

Stijn Stefan Campbell Hanson: I’ve been working through at a rather pedestrian pace and have (finally) gotten to the end of Proposition 10, but I seem to have some extra factors of kicking around. Even though you haven’t explicitly said so, it seems like you’ve defined:

which would seem to imply

and throws those coefficients in later, particularly in the expressions for and both of which I derive with an extra factor of

These don’t seem to be particularly important but I’m worried I might have done something wrong, especially as I followed the other convention for the Fourier transform and was kinda expecting all those factors to cancel out at some point.

16 August, 2013 at 11:08 pm

Terence Tao: The precise definition of is not important for this argument, but one can multiply the expression for you have by to resolve this issue.

16 August, 2013 at 6:00 am

Euclid Strikes Back | Gödel's Lost Letter and P=NP: […] are other uses of this simple trick. I note that it is quite close to the “W-trick” described by Terence Tao in relation to Yitang Zhang’s advance on the twin-prime conjecture, and […]

12 September, 2013 at 4:21 pm

Gergely Harcos: In the second display below (12), should be .

[Corrected, thanks - T.]

4 October, 2013 at 11:40 am

Stijn Hanson: My apologies for all the help I’m requiring, but a lot of these proof techniques are new to me in style. I’m looking at your proof of the overspill principle (Lemma 7). In the part you create your function to be some specific function tending to infinity with x. Thus, given any , the function will surpass it at some point. Thus:

where . Upon letting x tend to infinity tends to infinity and we can let m tend to infinity and we have and this seems clear to me.

The issue for me is the ‘sufficiently slowly’ part of the statement. Suppose that is a function tending to infinity with x. Then, for all natural numbers m there is some real such that . If we define then, surely, we must have whenever and

The one thing this argument avoids is your condition of but I didn’t understand where that came from in the first place.

I’m sure I’ve missed something obvious (as I have before) but I can’t see it.

P.S. There is a slight typo in the proof, in that it says “veifies” instead of “verifies”.

4 October, 2013 at 12:14 pm

Terence Tao: One cannot conclude for unless one already knows that (note that is not fixed, whereas (i) is only available for fixed w); this is why one needs to grow more slowly than .

Incidentally, in the polymath8 writeup (see https://www.dropbox.com/s/16bei7l944twojr/newgap.pdf ) the overspill principle is no longer needed, because it turns out one can take rather than have be a sufficiently slowly growing function of x.

4 October, 2013 at 12:43 pm

Stijn Hanson: I assumed it had something to do with that w restriction, but I don’t see where comes from. The way I understand it, we’re setting and then is some real number such that the term dips below whenever , but, obviously, I must be missing something.

19 November, 2013 at 10:50 pm

Polymath8b: Bounded intervals with many primes, after Maynard | What's new: […] this claim was already essentially established back in this Polymath8a post (or see Proposition 4.1 and Lemma 4.7 of the Polymath8a paper for essentially these bounds). In […]

31 July, 2014 at 8:33 am

Together and Alone, Closing the Prime Gap | For a better Vietnam: […] work. At the time, though, this advance was one of the most fruitful ones. When the team filled in this piece of the puzzle on June 5, the bound on prime gaps dropped from about 4.6 million to […]