This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project. As the previous post was getting somewhat full, we are rolling the thread over to the current post.

In this post we will record a new truncation of the elementary Selberg sieve discussed in this previous post (and also analysed in the context of bounded prime gaps by Graham-Goldston-Pintz-Yildirim and Motohashi-Pintz) that was recently worked out by Janos Pintz, who has kindly given permission to share this new idea with the Polymath8 project. This new sieve decouples the parameter that was present in our previous analysis of Zhang’s argument into two parameters: a quantity that used to measure smoothness in the modulus, but now measures a weaker notion of “dense divisibility”, which is what is really needed in the Elliott-Halberstam type estimates, and a second quantity which still measures smoothness but is allowed to be substantially larger than . Through this decoupling, it appears that the type losses in the sieve-theoretic part of the argument can be almost completely eliminated (they basically decay exponentially in and have only mild dependence on , whereas the Elliott-Halberstam analysis is sensitive only to , allowing one to set far smaller than previously by keeping large). This should lead to noticeable gains in the quantity in our analysis.

To describe this new truncation we need to review some notation. As in all previous posts (in particular, the first post in this series), we have an asymptotic parameter going off to infinity, and all quantities here are implicitly understood to be allowed to depend on (or to range in a set that depends on ) unless they are explicitly declared to be *fixed*. We use the usual asymptotic notation relative to this parameter . To be able to ignore local factors (such as the singular series ), we also use the “-trick” (as discussed in the first post in this series): we introduce a parameter that grows very slowly with , and set .

For any fixed natural number , define an *admissible -tuple* to be a fixed tuple of distinct integers which avoids at least one residue class modulo for each prime . Our objective is to obtain the following conjecture for as small a value of the parameter as possible:

Conjecture 1() Let be a fixed admissible -tuple. Then there exist infinitely many translates of that contain at least two primes.
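As an aside, the admissibility condition above is easy to test mechanically. Here is a minimal Python sketch (the function names are illustrative, not from any Polymath8 code); only primes up to the tuple length need to be checked, since a tuple of k distinct integers occupies at most k residue classes modulo any prime.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(h_tuple):
    """True if the tuple avoids at least one residue class mod p for
    every prime p.  Primes p > len(h_tuple) automatically have a free
    class, so only p <= len(h_tuple) are tested."""
    k = len(h_tuple)
    for p in primes_up_to(k):
        if len({h % p for h in h_tuple}) == p:
            return False
    return True

print(is_admissible((0, 2, 6, 8, 12)))  # classical admissible 5-tuple: True
print(is_admissible((0, 2, 4)))         # covers all classes mod 3: False
```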

The twin prime conjecture asserts that holds for as small as , but currently we are only able to establish this result for (see this comment). However, with the new truncated sieve of Pintz described in this post, we expect to be able to lower this threshold somewhat.

In previous posts, we deduced from a technical variant of the Elliott-Halberstam conjecture for certain choices of parameters , . We will use the following formulation of :

Conjecture 2() Let be a fixed -tuple (not necessarily admissible) for some fixed , and let be a primitive residue class. Then for any fixed , where , are the square-free integers whose prime factors lie in , and is the quantity

and is the set of congruence classes

and is the polynomial

The conjecture is currently known to hold whenever (see this comment and this confirmation). Actually, we can prove a stronger result than in this regime in a couple of ways. Firstly, the congruence classes can be replaced by a more general system of congruence classes obeying a certain controlled multiplicity axiom; see this post. Secondly, and more importantly for this post, the requirement that the modulus lies in can be relaxed; see below.

To connect the two conjectures, the previously best known implication was the following (see Theorem 2 from this post):

Theorem 3 Let , and be such that we have the inequality where is the first positive zero of the Bessel function , and are the quantities

and

Then implies .

Actually there have been some slight improvements to the quantities ; see the comments to this previous post. However, the main error remains roughly of the order , which limits one from taking too small.

To improve beyond this, the first basic observation is that the smoothness condition , which implies that all prime divisors of are less than , can be relaxed in the proof of . Indeed, if one inspects the proof of this proposition (described in these three previous posts), one sees that the key property of needed is not so much the smoothness, but a weaker condition which we will call (for lack of a better term) *dense divisibility*:

Definition 4 Let . A positive integer is said to be -densely divisible if for every , one can find a factor of in the interval . We let denote the set of positive integers that are -densely divisible.

Certainly every integer which is -smooth (i.e. has all prime factors at most ) is also -densely divisible, as can be seen from the greedy algorithm; but the property of being -densely divisible is strictly weaker than -smoothness, a fact which we shall exploit shortly.
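Both notions are easy to experiment with on concrete integers. The following Python sketch (illustrative only) uses an equivalent form of Definition 4: an integer is y-densely divisible exactly when each pair of consecutive divisors d < d' satisfies d' <= y*d. It also checks the claim that y-smoothness implies y-dense divisibility, and exhibits the strictness of the implication: 42 = 2*3*7 is 4-densely divisible but not 4-smooth.

```python
def divisors(n):
    """All divisors of n, in increasing order (trial division sketch)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_densely_divisible(n, y):
    """Equivalent to Definition 4: consecutive divisors never jump by
    more than a factor of y."""
    divs = divisors(n)
    return all(b <= y * a for a, b in zip(divs, divs[1:]))

def is_smooth(n, y):
    """True if all prime factors of n are at most y."""
    for p in range(2, n + 1):
        while n % p == 0:
            if p > y:
                return False
            n //= p
    return True

# y-smooth implies y-densely divisible (greedy algorithm), but not conversely:
assert all(is_densely_divisible(n, 4) for n in range(1, 500) if is_smooth(n, 4))
print(is_densely_divisible(42, 4), is_smooth(42, 4))  # True False
```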

We now define to be the same statement as , but with the condition replaced by the weaker condition . The arguments in previous posts then also establish whenever .

The main result of this post is then the following implication, essentially due to Pintz:

Theorem 5 Let , , , and be such that where

and

and

Then implies .

This theorem has rather messy constants, but we can isolate some special cases which are a bit easier to compute with. Setting , we see that vanishes (and the argument below will show that we only need rather than ), and we obtain the following slight improvement of Theorem 3:

Theorem 6 Let , and be such that we have the inequality Then implies .

This is a little better than Theorem 3, because the error has size about , which compares favorably with the error in Theorem 3 which is about . This should already give a “cheap” improvement to our current threshold , though it will fall short of what one would get if one fully optimised over all parameters in the above theorem.

Returning to the full strength of Theorem 5, let us obtain a crude upper bound for that is a little simpler to understand. Extending the summation to infinity and using the Taylor series for the exponential, we have

We can crudely bound

and then optimise in to obtain

Because of the factor in the integrand for and , we expect the ratio to be of the order of , although one will need some theoretical or numerical estimates on Bessel functions to make this heuristic more precise. Setting to be something like , we get a good bound here as long as , which at current values of is a mild condition.

Pintz’s argument uses the elementary Selberg sieve, discussed in this previous post, but with a more efficient estimation of the quantity , in particular avoiding the truncation to moduli between and which was the main source of inefficiency in that previous post. The basic idea is to “linearise” the effect of the truncation of the sieve, so that this contribution can be dealt with by the union bound (basically, bounding the contribution of each large prime one at a time). This mostly avoids the more complicated combinatorial analysis that arose in the analytic Selberg sieve, as seen in this previous post.

** — 1. Review of previous material — **

In this section we collect some results from previous posts which we will need.

We first record an asymptotic for multiplicative functions. For any natural number , define a *-dimensional multiplicative function* to be a multiplicative function which obeys the asymptotic

for all . The following result is Lemma 8 from this previous post:

Lemma 7 Let . Let be a fixed positive integer, and let be a multiplicative function of dimension . Then for any fixed compactly supported, Riemann-integrable function , and any for some fixed , one has

Next, we record a criterion for , which is Lemma 7 from this previous post:

Lemma 8 (Criterion for DHL) Let . Suppose that for each fixed admissible -tuple and each congruence class such that is coprime to for all , one can find a non-negative weight function , fixed quantities , a quantity , and a fixed positive power of such that one has the upper bound for all , and the key inequality

holds. Then holds. Here is defined to equal when is prime and otherwise.

** — 2. Pintz’s argument — **

We can now prove Theorem 5. Fix to obey the hypotheses of this theorem. Let be a congruence class with coprime to for all (this class exists by the admissibility of ). We apply Lemma 8 with

the elementary Selberg sieve defined by

are the multiplicative functions

and

the sieve level is given by the formula

is a fixed smooth function supported on , and is a certain subset of to be chosen shortly. (The constraints and are redundant given that , but we retain them for emphasis.) We will assume that is non-negative and non-increasing on . In this previous post, we considered this sieve with equal to either all of , or the subset consisting of smooth numbers. For now, we will discuss the estimates as far as we can without having to explicitly specify .

We first consider the asymptotic (5). By arguing exactly as in Section 2 (or Section 3) of this previous post, we can write the left-hand side of (5), up to errors of , as

The summand here is non-negative, so we may crudely replace by all of and apply Lemma 7 to obtain (5) with

Now we turn to the more difficult asymptotic (6). The left-hand side expands as

As observed in Section 2 of this previous post, we have

where

and

Now let be a small fixed constant to be chosen later, and suppose the following claim holds:

Claim 1 (Dense divisibility of moduli) Whenever and are non-zero, then either or else .

Then from the Bombieri-Vinogradov theorem (for the moduli) or the hypothesis (for the larger moduli, noting that ) and standard arguments (cf. Proposition 5 of this post) we have

for any fixed and any multiplicative function of a fixed dimension , where ranges only over those integers of the form with non-zero. From this we easily see (arguing as in Section 2 of this previous post) that the contribution of the error term is , and we are left with establishing the lower bound

up to errors of (henceforth referred to as *negligible errors*).

As in Section 2 of the previous post, we can write the left-hand side as

where is the -dimensional multiplicative function

So we would like to select small enough that Claim 1 holds, but large enough that we can lower bound (9) by up to negligible errors.

Pintz’s idea is to choose to be the set of all elements of with the property that

Let us first verify Claim 1 with this definition. Suppose that is non-zero; then from (8) and the support of there are multiples , of , respectively, with (in particular ) and . Since , we have

and thus

Similarly for and . Multiplying, we obtain

On the other hand,

and so on dividing we have

We conclude that either is less than , or else

The latter conclusion implies that is -densely divisible. Indeed, for any we multiply together all the prime divisors of between and one at a time, stopping just before one reaches or exceeds , or when one runs out of such prime divisors. In the former case one ends up at least as large as , and in the latter case one has reached . Next, one multiplies in the prime divisors of less than until one reaches or exceeds ; this is possible thanks to (11) in the former case and is automatic in the latter case, and gives a divisor between and as required.

Now we need to obtain a lower bound for (9). If we write

and

(the minus sign being to compensate for the non-positive nature of ) then we have

and thus

Note that this inequality replaces the quadratic expression with a linear expression in the truncation error , which will be more tractable for computing the effect of that error. We may thus lower bound (9) by the difference of

In Section 2 of this post it is shown that

which implies that (12) is equal to

up to negligible errors. Similar considerations show that (13) is equal to

up to negligible errors. To upper bound (14), we need to upper bound

For this we need to catalog the ways in which can fail to be in . In order for this to occur, at least one of the following three statements must hold:

- (i) could be divisible by a prime with .
- (ii) could be divisible by a prime with .
- (iii) lies in , but .

We consider the contributions of (i), (ii), (iii) to (14). We begin with the contribution of (i). This is bounded above by

Applying Lemma 7, one can simplify this modulo negligible errors as

which by another application of Lemma 7 is equal to

where we adopt the notation

Applying Mertens’ theorem and summation by parts, this expression is equal up to negligible errors to

where
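The form of Mertens' theorem driving these simplifications is the classical third theorem: the product of (1 - 1/p) over primes p up to x behaves like e^{-gamma}/log x. This is a standard fact (not specific to this post), and a quick numerical sanity check is easy to run:

```python
import math

# Numerical illustration of Mertens' third theorem:
#   prod_{p <= x} (1 - 1/p)  ~  exp(-gamma) / log(x).
EULER_GAMMA = 0.5772156649015329

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, b in enumerate(sieve) if b]

x = 10 ** 5
product = 1.0
for p in primes_up_to(x):
    product *= 1.0 - 1.0 / p

print(product, math.exp(-EULER_GAMMA) / math.log(x))  # close, ~1% apart
```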

Now we turn to the contribution of (ii) to (14). This is bounded above by

By Lemma 7 we have

and so we may bound this contribution up to negligible errors by

which by Lemma 7 again is equal (up to negligible errors) to

By definition, . By Mertens’ theorem, we can thus write the above expression up to negligible errors as

Finally, we turn to the contribution of case (iii) to (14). By Lemma 7 we have

so we may bound this contribution up to negligible errors by

where is as in case (iii).

We introduce the quantities

and

so that case (iii) consists of those in such that

From the support of we may also take . This implies that we may factor

for some primes

and some coprime to , with

The contribution of this case may thus be bounded by

Evaluating the inner sum using Lemma 7, we obtain (up to negligible errors)

where is a truncation of :

Again we have . By Mertens’ theorem we may write this (up to negligible errors) as

Putting all this together, we have obtained the lower bound (6) with

where

and

We now place upper bounds on . In this previous post, the bounds

and

for are proven. Thus we have

These are already quite small for , say, which would correspond to .

For we will use the crude estimate

this may surely be improved, but we will not do so here to simplify the exposition. Then we may bound

The point here is that the first term is exponentially decaying in , which can compensate for the second term if , which is currently the case in the regime of interest.

One can do a bit better than this. For any parameter , one has

since the left-hand side vanishes for . This gives the bound

If we insert these bounds into (7), send to zero, and optimise in using Theorem 14 from this previous post, we obtain Theorem 5.

## 82 comments

18 June, 2013 at 6:43 pm

Andrew Sutherland: Has anyone worked out the “cheap” gain k0 mentioned after Theorem 6? (or better, a fully optimized k0, but even a cheap gain will give those working on optimizing H(k0) something to play with — we’re not used to working with the same k0 for more than 2 days at a time :)).

18 June, 2013 at 7:39 pm

v08ltu: If I typed it all correctly, allows .

Bz(k)=k+1.85575708149*k^(1/3)+1.033150/k^(1/3);

DELTA(om)=(1/4-87*om)/17;

KAPPA1(om,k0)=local(d);d=DELTA(om);intnum(t=4*d/(1+4*om),1,(1-t)^((k0-1)/2)/t);

KAPPA2(om,k0)=local(d);d=DELTA(om);(k0-1)*intnum(t=4*d/(1+4*om),1,(1-t)^((k0-1))/t);

INEQ(om,k0)=(1+4*om)*(1-2*KAPPA1(om,k0)-2*KAPPA2(om,k0))-Bz(k0-2)^2/k0/(k0-1);

INEQ(2/733,5937)
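For readers without GP/PARI, here is a rough Python transcription of the computation above (same formulas as the GP code, including its Bz approximation to the first Bessel zero; the fixed-step Simpson rule replacing GP's adaptive `intnum` is an assumption of this sketch, and accuracy is limited accordingly):

```python
def Bz(k):
    # Approximation to the first positive zero of J_k (as in the GP code).
    return k + 1.85575708149 * k ** (1 / 3) + 1.033150 / k ** (1 / 3)

def DELTA(om):
    return (1 / 4 - 87 * om) / 17

def simpson(f, a, b, n=50000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def KAPPA1(om, k0):
    d = DELTA(om)
    return simpson(lambda t: (1 - t) ** ((k0 - 1) / 2) / t,
                   4 * d / (1 + 4 * om), 1.0)

def KAPPA2(om, k0):
    d = DELTA(om)
    return (k0 - 1) * simpson(lambda t: (1 - t) ** (k0 - 1) / t,
                              4 * d / (1 + 4 * om), 1.0)

def INEQ(om, k0):
    return ((1 + 4 * om) * (1 - 2 * KAPPA1(om, k0) - 2 * KAPPA2(om, k0))
            - Bz(k0 - 2) ** 2 / k0 / (k0 - 1))

print(INEQ(2 / 733, 5937))
```

At these parameters the criterion is razor-thin (the two sides agree to several decimal places), which is consistent with the parameters being chosen near-critically.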

18 June, 2013 at 7:42 pm

Andrew Sutherland: Thanks!

18 June, 2013 at 9:01 pm

v08ltu: If I use the above bound and assume the -quotient really is , I think that I get that and with to satisfy the previous constraint, one can get . I should write a better program to optimize this though.

18 June, 2013 at 9:27 pm

Terence Tao: From my other comment, I now think the G-quotient is going to be closer to , where . I can try to get more precise asymptotics for the term, but perhaps it will be simpler just to compute the integral numerically (it will serve as a good sanity check to compare the numerics against the asymptotics though, and the asymptotics can give a rough idea of what region of parameter space to search in).

18 June, 2013 at 9:44 pm

v08ltu: In the range of interest, I seem to have and the other are also and respectively. So none of them matter. This assumes I am not making any errors or typos with formulas.

18 June, 2013 at 9:55 pm

v08ltu: I am really not sure where I came up with the above numbers; there is no obstacle to making closer to 1/348. With the sech/tanh estimate (ignoring o(1)) I can take and get with , and is not of import again (make the delta-difference smaller if it is). I am not sure I trust all the numerics though (instability), but this is approaching the limit of previous analysis.

18 June, 2013 at 10:04 pm

v08ltu: Looks like with and works. If you try to take smaller, then becomes non-negligible, and you have to balance the delta-difference. Too hard to look at all parameters right now.

Oops, looks like and and works (with all in play).

18 June, 2013 at 10:06 pm

v08ltu: You can also make in this last one, and then is irrelevant ( in sech/tanh), while is then slightly bigger (5x), but not enough to vitiate key inequality.

18 June, 2013 at 10:10 pm

v08ltu: Managed to squeeze one more, (larger than previous!) with a -difference of (say) allows .

18 June, 2013 at 11:38 pm

v08ltu: Slightly nicer form is with and . I never checked that making smaller than forced by would not gain, but I don’t see why it would (probably follows by monotonicity).

19 June, 2013 at 8:27 am

Terence Tao: Cross-checking these computations with Maple. With and we have . Setting (say) we have

.

We need to not exceed

.

I get

which looks good so far. Now for . Using the crude bound

the exponential factor here is , which looks scary. On the other hand, the G-ratio here is predicted to be roughly

with (I made a slight mistake earlier in not having the square root and factor of 2 here but it fortunately turns out not to matter), and this is completely negligible. As an independent confirmation of the smallness of the G-ratio, the Bessel function at takes the value , and at takes the value , so it looks abundantly clear that the G-ratio is indeed ridiculously small in this regime.

19 June, 2013 at 4:30 pm

v08ltu: You have an error in your expression in the Maple check: it is in the denominator of the sqrt inside the exp, not . I think the sech-sqrt fix breaks , I will check.

19 June, 2013 at 6:51 pm

Terence Tao: Oops, sorry about that. Well, one can hope to squeeze something out of the full version of Theorem 5 (or at least the first display after Theorem 5) in which there is also the A parameter to optimise over. One can probably pick up some other ways to make the estimate for more efficient, but at the cost of simplicity. Pintz’s own notes have a slightly different way of estimating , I can try to have a look at that instead.

There were two corrections I had to make; one (which worked against our favour) was the replacing of with its square root, but the other (which works for us) is the doubling of the exponential from to . I had thought that the two effects roughly cancelled each other out, but I guess not quite…

19 June, 2013 at 4:40 pm

v08ltu: Currently, I can only get with and . You can note the incredible impact that suddenly has if you change “6” to “5” in the numerator of the subtrahend of . So saving a few more in could occur from its more benevolent estimation.

19 June, 2013 at 8:14 pm

Terence Tao: I just realised that the estimate

that I used below Theorem 6 can be improved to

(by replacing with infinity and replacing with ). This extra exponential factor makes it harder to optimise in A, but it should give a significant savings, hopefully enough to push back to 5453 and maybe even a bit further.

18 June, 2013 at 8:25 pm

Terence Tao: I’m recording here some asymptotics of Bessel functions to help understand the ratio

(*)

in the regime where is fixed and is large. Here we have made the change of variable .

Write . By 9.3.23 of Abramowitz and Stegun (see e.g. this link) we have the asymptotic

for bounded , where is the Airy function. This shows that the zero has the asymptotic

where is the first negative zero of the Airy function. It also suggests that

.

Meanwhile, 9.3.7 of Abramowitz and Stegun (Debye asymptotic) tells us that

for any (this is also consistent with the exponential decay of the Airy function to the right), so if we set (EDIT: this should be ), the ratio (*) should take the form

(EDIT: this should be ) which is exponentially decaying in as claimed in the post. (Unfortunately, we cannot directly use these asymptotics to get explicit values of because we need control on the error terms, which standard sources (e.g. Abramowitz-Stegun) do not supply.)
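The first-zero asymptotic appearing here can be sanity-checked numerically at moderate order, using a power-series evaluation of the Bessel function (a sketch; the expansion coefficients 1.8557571 and 1.033150 are the standard ones from Abramowitz-Stegun 9.5.14, as also used in v08ltu's Bz function above):

```python
import math

def besselj(n, x, terms=80):
    """J_n(x) by its power series; adequate for moderate n and x."""
    return sum((-1) ** m * (x / 2) ** (2 * m + n)
               / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def first_zero(n):
    """First positive zero of J_n, located by scanning for a sign
    change starting at x = n (J_n is positive on (0, j_{n,1})),
    then bisecting."""
    x, step = float(n) if n else 1.0, 0.1
    while besselj(n, x) > 0:
        x += step
    lo, hi = x - step, x
    for _ in range(60):
        mid = (lo + hi) / 2
        if besselj(n, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

nu = 10
approx = nu + 1.8557571 * nu ** (1 / 3) + 1.033150 * nu ** (-1 / 3)
print(first_zero(nu), approx)  # agree to a few decimal places
```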

19 June, 2013 at 5:22 pm

Eytan Paldi: Please note that the upper bound for the integration in the numerator is defined in Theorem 5 to be (not its square), and in the presentation of Pintz’s argument it is defined (for a general pair

) to be , so it is not clear which is the correct one.

[Corrected, thanks -T.]

19 June, 2013 at 6:25 pm

Eytan Paldi: In my last message I did not notice that the integrals are after change of variable, but I think that in the numerator should be replaced by its square root. And more importantly, the integral (at least for even ) can be precisely evaluated as a finite linear combination of squares of Bessel functions by finding the indefinite integral using the recursive formula (11.3.33) and the formula (11.3.34) in Abramowitz and Stegun.

(If there is a corresponding formula for , then the recursive formula can be used to compute the integral for odd as well.)

19 June, 2013 at 5:49 am

coco: typo: divsibility -> divisibility, in second paragraph.

[Corrected, thanks – T.]

19 June, 2013 at 7:07 am

Gergely Harcos: The nonnegativity property I tried to prove in the previous thread (https://terrytao.wordpress.com/2013/06/11/further-analysis-of-the-truncated-gpy-sieve/) is equivalent to

for any .

Unfortunately, SAGE found the following counterexample for and :

.

More precisely, with these parameters we have

.

This result is verified by the SAGE commands

A={20/100, 21/100, 24/100, 25/100, 31/100}

print sum([((-1)^U.cardinality())*sum([min(max(0,1-sum(S.union(U))),max(0,1-sum(A.difference(S))))^2 for S in Subsets(A.difference(U))]) for U in Subsets(A)])

19 June, 2013 at 7:41 am

Terence Tao: A minor technical point: there is one tiny way in which -densely divisible numbers are worse than -smooth numbers: the behaviour is not quite hereditary with respect to factors. If and is -smooth, then is automatically -smooth as well. On the other hand, if is merely -densely divisible, then is only -densely divisible, there is a loss of (basically because could suck up all the useful small primes that are giving the dense divisibility property). This makes the Type III analysis a little more complicated as there is a part in which one forces the frequency parameter to be coprime to the modulus by Mobius inversion, dividing both and by the common factor . However even with the worsening of by , one can check that the net power of remains favorable and the coprime case still dominates, so this is not actually an issue. (But something to look out for when we do the final writing up of the arguments.)
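The degradation described in this comment — dense divisibility surviving division by a factor d only at the weaker level y*d — is easy to test exhaustively on small integers. A sketch, using the consecutive-divisor-gap characterization of Definition 4 (an equivalent reformulation, assumed here):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_dd(n, y):
    """n is y-densely divisible iff consecutive divisors never jump
    by more than a factor of y."""
    ds = divisors(n)
    return all(b <= y * a for a, b in zip(ds, ds[1:]))

y = 4
for n in range(1, 800):
    if not is_dd(n, y):
        continue
    for d in divisors(n):
        # n/d need not stay y-densely divisible, but (y*d)-dense
        # divisibility always survives.
        assert is_dd(n // d, y * d)

# The loss is real: 42 is 4-densely divisible, but 42/6 = 7 is not.
print(is_dd(42, 4), is_dd(7, 4), is_dd(7, 4 * 6))  # True False True
```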

19 June, 2013 at 9:10 am

Terence Tao: For reference, I have uploaded (with permission) Pintz’s original notes on this sieve here:

Section 1

Section 2

Section 3

19 June, 2013 at 8:27 pm

v08ltu: OK, I think with and and this now recovers .

19 June, 2013 at 8:29 pm

v08ltu: And with seems to give with the same .

20 June, 2013 at 9:35 am

Terence Tao: I verified that this works with the Maple code below (using crude upper and lower bounds on the Bessel integral after verifying that the Bessel function was decreasing from to , and was on the other side of the maximum at ). By using the bound in the display after Theorem 6 rather than performing any further estimations, there is actually a little bit of margin: is allowed to be as large as but is only of size , with dominating, , and . It seems likely that we could squeeze by one or two more by changing the free parameters a bit, but it is very tight and I didn’t try too hard to do so.

varpi := 1/348 - 4/2000000;

k0 := 5452;

deltap := 1/820;

A := 9500;

delta := (1/4 - 87 * varpi) / 17;

theta := deltap / (1/4 + varpi);

thetat := (deltap - delta + varpi) / (1/4 + varpi);

deltat := delta / (1/4 + varpi);

j := BesselJZeros(k0-2,1);

eps := 1 - j^2 / (k0 * (k0-1) * (1+4*varpi));

kappa1 := int( (1-t)^((k0-1)/2)/t, t = theta..1, numeric);

kappa2 := (k0-1) * int( (1-t)^(k0-1)/t, t=theta..1, numeric);

e := exp( A + (k0-1) * int( exp(-A*t)/t, t=deltat..theta, numeric ) );

s1 := BesselJ(k0-2,0.999*j);

s2 := BesselJ(k0-2,0.998*j);

s3 := BesselJ(k0-2,0.997*j);

# check that s3 > s2 > s1 to ensure Bessel function decreases from 0.998 j to 0.999 j

# crude lower bound on the denominator integral

gd := s1^2 * 0.001 * j * 0.998 * j;

# crude upper bound on the numerator integral

tn := sqrt(thetat)*j;

s0 := BesselJ(k0-2,tn);

gn := s0^2 * tn * tn / 2;

# check that s0 < s3 to ensure that Bessel function decreases up to tn (but this is true with massive amounts of room to spare)

kappa3 := (gn/gd) * e;

eps2 := 2*(kappa1+kappa2+kappa3);

# we win if eps2 < eps

20 June, 2013 at 9:37 am

abqmathteacher: A quick nontechnical question. Any idea of the likelihood of pushing this line of reasoning down to H < 1476, the largest known (I think) "maximal prime gap" (see http://oeis.org/A005250, http://oeis.org/A005250/a005250.txt)? That would be an exciting landmark.

20 June, 2013 at 10:21 am

Terence Tao: Actually it is known that prime gaps can be arbitrarily large. For instance, if n is a natural number, then n!+2, …, n!+n are all composite, creating a prime gap of length at least n-1. I think 1476 may be the smallest H for which the location of the first prime gap of length H is known, though.
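The classical construction mentioned in this comment — that n!+2, …, n!+n are all composite, since k divides n!+k for 2 ≤ k ≤ n — is trivial to verify directly:

```python
import math

def is_prime(m):
    """Trial-division primality test (fine at this size)."""
    if m < 2:
        return False
    f = 2
    while f * f <= m:
        if m % f == 0:
            return False
        f += 1
    return True

n = 9
gap_block = [math.factorial(n) + k for k in range(2, n + 1)]
print(all(not is_prime(m) for m in gap_block))  # True: k divides n! + k
```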

My feeling though is that we are going to run out of improvements before we reach that level; looks like a more realistic target at this stage.

20 June, 2013 at 10:47 am

abqmathteacher: Thanks! Indeed the term “maximal prime gap” used in the OEIS entry is confusing, sorry to just quote it. It means “maximal gap for a given size integer”. So 1476 is (according to that table) the *largest* H for which the location of the first prime gap of length H is known.

Anyway, even H = 10^5 is quite exciting.

20 June, 2013 at 1:12 pm

Dave: If I’m reading the polymath wiki correctly, isn’t the best known H value already below $10^5$?

20 June, 2013 at 1:16 pm

Terence Tao: Er, sorry, I meant would be a reasonable end target for this project. :)

20 June, 2013 at 1:29 pm

Gergely Harcos: 1. If denotes the first zero of , then it is known that

,

see Section 15.3 in Watson: A treatise on the theory of Bessel functions (2nd ed., Cambridge University Press, 1944). Hence for and we can bound

.

On the other hand, by an orthogonality relation of Bessel functions (see Wikipedia)

It follows that

.

SAGE tells me that for and the right hand side is less than .

2. Some misprints: "analyhsis", "systetm".

[Corrected, thanks. These calculations clean up the rather ad hoc Maple code I had previously! -T]

20 June, 2013 at 1:48 pm

Eytan Paldi: Please note that if is even, then by formulas (11.3.33) and (11.3.34) in Abramowitz & Stegun both integrals can be expressed (and thereby precisely computed) as simple finite linear combinations of squares of Bessel functions! (For odd k, one needs a formula analogous to (11.3.34) in which replaces .)

20 June, 2013 at 2:54 pm

Gergely Harcos: Thank you! Actually this works for with any ; see my latest comment below.

20 June, 2013 at 2:44 pm

Gergely Harcos: Actually we can calculate the ratio

explicitly in terms of Bessel values. According to Lemma 5.4 in Folland: Fourier analysis and its applications,

for any and ,

where we can express as . Using this, SAGE now tells me (for and ) that the numerator is , and the denominator is , so that

.
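The closed form from Folland's lemma used here, in the standard formulation ∫_0^b J_n(t)^2 t dt = (b^2/2)(J_n(b)^2 − J_{n−1}(b) J_{n+1}(b)), is easy to confirm numerically. A self-contained sketch (series evaluation of J_n plus Simpson's rule, rather than SAGE, but the same check):

```python
import math

def besselj(n, x, terms=60):
    """J_n(x) by its power series (fine for small n and moderate x)."""
    return sum((-1) ** m * (x / 2) ** (2 * m + n)
               / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def simpson(f, a, b, steps=2000):
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

n, b = 3, 5.0
lhs = simpson(lambda t: t * besselj(n, t) ** 2, 0.0, b)
rhs = (b * b / 2) * (besselj(n, b) ** 2
                     - besselj(n - 1, b) * besselj(n + 1, b))
print(lhs, rhs)  # agree to high precision
```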

20 June, 2013 at 2:48 pm

Gergely Harcos: Small correction: “ as ” should be “ as “.

20 June, 2013 at 3:00 pm

Eytan Paldi: Nice! (Such a simple expression is quite surprising!)

20 June, 2013 at 6:24 pm

Eytan Paldi: It seems that there is a slightly simpler expression (for integer order n):

This is formula (8) in Auluck’s arXiv paper:

http://arxiv.org/abs/1006.4417

20 June, 2013 at 6:55 pm

Gergely Harcos: Very good. Actually this formula holds for all (not only for integers): it can be derived from Folland’s lemma, using and .

20 June, 2013 at 7:48 pm

Eytan Paldi: I think that it even holds (by analytic continuation) for all complex (since the integrand is entire as a function of for each fixed nonzero .)

21 June, 2013 at 8:39 am

Gergely Harcos: Eytan: in extending the formula to all complex , one needs to be careful, e.g. one needs to check if the integral converges (hopefully locally uniformly, the difficulty coming from ). Already the extension of is tricky to all complex . Certainly everything is fine for .

21 June, 2013 at 10:12 am

Eytan Paldi: I agree that the integral can’t be entire in , since for any fixed , (9.1.10) in A&S implies that the integral diverges as tends (from the right) to – which must therefore be a singular point for any analytic continuation.

21 June, 2013 at 10:28 am

Gergely Harcos: Eytan: Right. Probably there is no problem for , as the integral converges absolutely and locally uniformly there. I stuck to just to be safe, because the definition beyond that range is less straightforward (according to Watson’s book). BTW I am also quite pleased and surprised that so many transparent features of Bessel functions play a role in this project. I come from automorphic forms, where this phenomenon is widespread, but (I think) it is new for sieve theory, where this project belongs.

21 June, 2013 at 11:31 am

Eytan Paldi: There is another interesting “sub-multiplicative” property of (shared also by many entire functions of order ) which is very useful for improving the current bounds on the 's. (I'm preparing the details.)

21 June, 2013 at 9:30 am

Exciting real-time developments about prime numbers | Mountain of Math: […] I asked about this at Terry Tao’s blog, and he indicated that H = 10,000 would be a reasonable target for the polymath8 project. And in […]

22 June, 2013 at 7:28 am

Eytan Paldi: In the sum defining , the sum variable J is upper bounded by , but the original range of J (17 lines above) is

. So it is not clear which upper bound for J is the correct one.

[The correct range is . I was not able to locate any instance where was used instead, although sometimes the tilde is almost obscured by the dividing bar between the numerator and denominator. -T]

22 June, 2013 at 11:16 am

Gergely Harcos: 1. I have difficulty in deriving the fourth display below (8), i.e. that the left hand side of (5) is up to negligible error

.

This formula should come from

,

where . Writing and , the inner sum becomes

.

Without the condition in the innermost sum, we would get in the end

,

which is admissible. However, I don’t see how to handle the condition (it was automatic for the earlier choices of ).

2. I don’t see in (9) the condition . Certainly is the product of and , but why does it lie in ?

22 June, 2013 at 1:39 pm

Terence TaoOops, I think I mis-stated the constraint in (8) which is causing both of these problems: it should be that lies in rather than . I’ll edit the post accordingly.

22 June, 2013 at 3:54 pm

Gergely Harcos: Thank you, it is all clear now. I just posted a small improvement; I hope it is ok.

22 June, 2013 at 3:50 pm

Gergely Harcos: 1. I think the display below (11) can be improved to

.

By assumption, such that for some .

It follows that

,

whence

.

Multiplying by and using we obtain

.

Therefore either or

.

Hopefully this improvement lowers somewhat.

2. "Section 2 of the previous section" should be "Section 2 of this previous blog".

22 June, 2013 at 4:38 pm

Terence Tao: Thanks for the correction and improved argument! I edited the main post accordingly. The net effect is to improve the quantity in Theorem 5 from

to

which should hopefully squeeze a bit more out of starting from the current constraint since it delays the time when the error starts exploding and so may be taken a bit larger than previously. Presumably some perturbation of v08ltu's choice of parameters

should work here to nudge below 1467…

22 June, 2013 at 6:59 pm

v08ltu
For instance, .

22 June, 2013 at 7:49 pm

Terence Tao
~~Confirmed! H is now 12006.~~ [Retracted, see below. -T]

varpi := 1/148 - 12/4000000;

k0 := 1466;

deltap := 1/180;

A := 1200;

delta := (1 - 148 * varpi) / 33;

theta := deltap / (1/4 + varpi);

# Gergely's improved value for thetat

thetat := ((deltap - delta)/2 + varpi) / (1/4 + varpi);

deltat := delta / (1/4 + varpi);

j := BesselJZeros(k0-2,1);

eps := 1 - j^2 / (k0 * (k0-1) * (1+4*varpi));

kappa1 := int( (1-t)^((k0-1)/2)/t, t = theta..1, numeric);

kappa2 := (k0-1) * int( (1-t)^(k0-1)/t, t=theta..1, numeric);

e := exp( A + (k0-1) * int( exp(-A*t)/t, t=deltat..theta, numeric ) );

# using Gergely's exact expression for denominator

gd := (j^2/2) * BesselJ(k0-3,j)^2;

# using Eytan's exact expression for numerator

tn := sqrt(thetat)*j;

gn := (tn^2/2) * (BesselJ(k0-2,tn)^2 - BesselJ(k0-3,tn)*BesselJ(k0-1,tn));

kappa3 := (gn/gd) * e;

eps2 := 2*(kappa1+kappa2+kappa3);

# we win if eps2 < eps

22 June, 2013 at 8:14 pm

Hannes
With that script I get eps2-eps=1.268156732*10^251 (not good).

22 June, 2013 at 8:33 pm

Hannes
(I used Maple 14)

22 June, 2013 at 8:37 pm

Terence Tao
Hmm, you’re right! My version of Maple seems to be a bit glitchy, but it is now reporting the same answer as you – once again is blowing up. So I have to retract the confirmation for now…

23 June, 2013 at 12:53 am

v08ltu
I found my error: I divided the numerator by 2 as well, not only the difference.

23 June, 2013 at 1:02 am

v08ltu
The main issue is that you need to take or less to have any hope from the Bessel zeroes with , and this cannot work, as the part dominates the numerator (at least currently).

22 June, 2013 at 4:49 pm

Gergely Harcos
Let and , then the paragraph below (11) seems to work only when (e.g. for the first step gives nothing). For we can do the following. We have , hence there is a divisor of in the interval . Multiplying this divisor by yields a divisor of in the interval .

[Corrected, thanks – T.]

22 June, 2013 at 7:44 pm

Gergely Harcos
1. In the post and in the comments we used the bound (valid for any )

.

Here is an alternate way to bound the left hand side, which probably performs better in practice without any auxiliary parameter:

.

2. Some typos:

2a. ", respectively" should be ", respectively".

2b. "Proposition 7" should be "Lemma 7".

2c. Exponent should be (at least four occurrences).

2d. Link is missing under "In this previous post".

2e. "exceeds . or" should be "exceeds or"

22 June, 2013 at 8:19 pm

Terence Tao
Thanks for the corrections!

The alternate estimate you propose, namely

seems to be a bit worse actually than the current estimate

for (say) v08ltu’s most recent values: the latter is about times the G-ratio but the former seems to be something like , peaking at about 4200. Somehow the problem is that is so small that the double factorial in the denominator has a hard time counteracting the factors of in the numerator. (The current estimate also blows up as , but the blowup seems to be milder somehow.)

22 June, 2013 at 10:53 pm

Gergely Harcos
Thanks for the information! I haven’t experimented with the numerics lately. Another idea is to optimize for each individually, i.e. choose such that .

22 June, 2013 at 10:56 pm

Gergely Harcos
Here I meant to use your upper bound for the -integral, then optimize in .

23 June, 2013 at 6:22 am

Eytan Paldi
From my next message (on a submultiplicative property of ), it follows that one can take only in the integral (i.e. without the constant factor ).

23 June, 2013 at 1:50 pm

Gergely Harcos
Actually keeping the -integral (i.e. not bounding it) gives much better results. I also experimented with replacing

by in the -term. We can gain a factor of this way, but for we would need a gain of .

23 June, 2013 at 6:52 pm

Eytan Paldi
[Edited, Jun 25, to incorporate corrections by the author – T.]

A sub-multiplicative property of f and new bounds for the ‘s:

1. Let , be non-vanishing on (with possible vanishing at ). Consider the following inequality for :

(1)

where, for each fixed ,

is (obviously) the best (uniform) multiplier .

Clearly, a sufficient condition for to hold for each fixed is that the function be non-increasing (as a function of ) on . That is, its logarithm (as a function of ) should be non-increasing on ; equivalently, the logarithmic derivative of should be non-positive on .

It follows that the above sufficient condition is implied by

(2) is concave on .

To prove the implication, observe that for a fixed , the derivative of is

where is the logarithmic derivative of .

Now (2) implies that (as a non-increasing function on ) satisfies whenever , which proves the sufficiency of (2) for (1) with .

This motivates the following

[For some reason, LaTeX is not working on the line below; I was unable to fix it – T.]

Definition: A function g \in C^1[0,1] will be called nice (on [0,1]) if it is non-vanishing on [0, 1), satisfies (3) , and is concave on .

Consequently, any nice function g on [0, 1] satisfies

(4) , whenever

(4') on .

Here (4') follows from the concavity of on .

It is easy to verify the following properties:

(i) A finite product of nice functions is nice.

(ii) If is a sequence of nice and analytic functions on , converging locally uniformly on to a function g, then g is (obviously) analytic and nice on .

Examples: If and , then the functions

are nice, where

Theorem: Let be an entire function of order , having its zeros in and satisfying ; then is a nice function on and, consequently, it satisfies properties (4)-(4').

Proof: Denote by the genus and order of g, respectively. Since , we have two possibilities:

(i) : the canonical product representation of g is given by , where denote the zeros of . Since (by the first example) each canonical factor (i.e. ) is nice, and the canonical product converges locally uniformly to on , it follows (from the above properties) that g is nice on .

(ii) : the canonical product representation of g is given similarly by , with the same proof.

2. Applications:

Let for ; then the product representation (9.5.10) in A&S shows that f is an entire function of genus , having its zeros in . Hence, by the above theorem, is nice on , and properties (4) and (4') imply

(5) whenever the 's are non-negative and .

Remark: the last limitation on the sum is unnecessary if is modified to on .

(5') for each

(With the above modification of for , (5') holds obviously for each .)

Remark: it is easy to verify that

which for large is asymptotic to .

Using (5) and (5'), it is easy to verify that for each

where

Since is asymptotic (for large ) to , it seems that the last exponential bound is inferior to the current bound

which determines the upper bound on . A similar situation exists for the new bound relative to the current bound , which determines the current upper bound on .

The situation seems better for the new bound on :

Using properties (5)-(5') of , it follows that

It is easy to verify that the last exponential bound implies the current bound, with the parameter in the integrand, but without the constant multiplier .

23 June, 2013 at 8:15 pm

Gergely Harcos
These are very good ideas. Some thoughts:

1. Your (4) and (4′) can be proved directly for with fixed, divided by initially to make its value at . All we need is that is decreasing on , i.e. that on . We have that and , so we need

for .

This in turn follows from Auluck’s formula that you quoted earlier:

.

2. So we have now a direct proof that

,

where . This means that Terry’s can be improved to

for any .

23 June, 2013 at 8:36 pm

Gergely Harcos
Well, perhaps I should have restricted to , , and the interval instead of .

23 June, 2013 at 9:08 pm

Eytan Paldi
This is a nice observation! I’m quite surprised to see that the log-concavity holds also for . At least in a sufficiently small neighborhood of a zero, the dominating logarithmic singularity is concave; the problem is that the logarithmic derivative is positive sufficiently close to the right of each zero. This is exactly the reason why the interval of interest should not contain zeros of .

Another well-known example is known to be entire, log-concave and monotonically decreasing to the right of its maximizer (which is in ) – in this case the zeros are to the left of the “interesting interval”!

23 June, 2013 at 8:35 pm

Gergely Harcos
The new that arises from Eytan’s ideas seems to allow . Using , , , I am getting , , , . These values might not be optimal, and they should be double-checked.

23 June, 2013 at 10:02 pm

Terence Tao
Maple seems to confirm it. Nice work by both of you!

.

varpi := 1/148 - 1/318000;

k0 := 1466;

deltap := 1/200;

A := 1360;

delta := (1 - 148 * varpi) / 33;

theta := deltap / (1/4 + varpi);

# Gergely's improved value for thetat

thetat := ((deltap - delta)/2 + varpi) / (1/4 + varpi);

deltat := delta / (1/4 + varpi);

j := BesselJZeros(k0-2,1);

eps := 1 - j^2 / (k0 * (k0-1) * (1+4*varpi));

kappa1 := int( (1-t)^((k0-1)/2)/t, t = theta..1, numeric);

kappa2 := (k0-1) * int( (1-t)^(k0-1)/t, t=theta..1, numeric);

# using Gergely and Eytan's improved kappa_3

alpha := j^2 / (4 * (k0-1));

e := exp( A + (k0-1) * int( exp(-(A+2*alpha)*t)/t, t=deltat..theta, numeric ) );

# using Gergely's exact expression for denominator

gd := (j^2/2) * BesselJ(k0-3,j)^2;

# using Eytan's exact expression for numerator

tn := sqrt(thetat)*j;

gn := (tn^2/2) * (BesselJ(k0-2,tn)^2 - BesselJ(k0-3,tn)*BesselJ(k0-1,tn));

kappa3 := (gn/gd) * e;

eps2 := 2*(kappa1+kappa2+kappa3);

# we win if eps2 < eps

23 June, 2013 at 10:33 pm

v08ltu
In your Maple worksheet you integrate from deltat to theta, but the formula listed by Harcos is .

[Well spotted! But I think Gergely’s formula has a typo: the integral here (which is counting primes up to ) should go up to rather than . In any case the exponential decay of the integrand renders the upper limit of the integral more or less irrelevant. -T.]

24 June, 2013 at 4:53 am

Gergely Harcos
Indeed I had a typo, thanks for pointing it out! It was inherited by copy+paste from Terry’s earlier comment (https://terrytao.wordpress.com/2013/06/18/a-truncated-elementary-selberg-sieve-of-pintz/#comment-235766), but my SAGE worksheet contained the right endpoint . Also, as Terry explained, the figures would still be fine with integration up to .

24 June, 2013 at 4:58 am

Gergely Harcos
I mean that https://terrytao.wordpress.com/2013/06/18/a-truncated-elementary-selberg-sieve-of-pintz/#comment-235986 was a response to https://terrytao.wordpress.com/2013/06/18/a-truncated-elementary-selberg-sieve-of-pintz/#comment-235937.

24 June, 2013 at 7:18 am

Terence Tao
Oops! Sorry about that; too late now, but I edited the comment to fix it.

25 June, 2013 at 4:39 pm

Eytan Paldi
By (9.5.4) in A&S, it follows that , so this ratio may be used for to check the relative numerical error.

23 June, 2013 at 9:14 pm

The distribution of primes in densely divisible moduli | What's new
[…] The difference between this conjecture and the stronger conjecture is that the modulus is constrained to be -densely divisible rather than -smooth (note that is no longer constrained to lie in ). This relaxation of the smoothness condition improves the Goldston-Pintz-Yildirim type sieving needed to deduce from ; see this previous post. […]

25 June, 2013 at 11:08 pm

Terence Tao
I wrote a wiki page at

http://michaelnielsen.org/polymath1/index.php?title=Dickson-Hardy-Littlewood_theorems

recording most of the techniques we have for converting results of the form or to . Nothing new on that page, but it at least collects all the different implications that we have found in a single location (though they are all basically obsolete now that we have Pintz’s sieve).

30 June, 2013 at 12:39 pm

Bounded gaps between primes (Polymath8) – a progress report | What's new
[…] “dense divisibility”, replacing the conjecture by a variant that we have called ); see this post for details. Finally, by optimising the remaining steps in Zhang’s argument (see below), we […]

5 July, 2013 at 4:41 pm

Eytan Paldi
The current definition of ensures (under the hypotheses of Theorem 5) that , but without the factor in its definition, we need to add this constraint to the hypotheses of Theorem 5.

5 July, 2013 at 5:43 pm

Gergely Harcos
I don’t think we need this constraint. If we replace by , then the function does not change, hence the condition of Theorem 5 remains the same, while its proof works verbatim, i.e. the original stated conclusion holds.

27 July, 2013 at 6:55 pm

An improved Type I estimate | What's new
[…] We use the Pintz sieve from this post, repeating the proof of Theorem 5 from that post (and using the explicit formulae for and from […]