This is the ninth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We have now tentatively improved the upper bound of the de Bruijn-Newman constant to $\Lambda \le 0.22$. Among the technical improvements in our approach, we are now able to use Taylor expansions to efficiently compute the approximation to $H_t$ for many values of $x$ in a given region, thus speeding up the computations in the barrier considerably. Also, by using the heuristic that $H_t$ behaves somewhat like a partial Euler product, we were able to find a good location to place the barrier at which $|H_t|$ is larger than average, and hence easier to keep away from zero.
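To illustrate the Taylor-expansion speedup, here is a minimal sketch (not the project's actual code: a partial zeta sum $\sum_{n\le N} n^{-s}$ stands in for the much more elaborate approximation used in the barrier computations). The idea is to expand the length-$N$ sum once around a central point $s_0$, after which each nearby evaluation point costs only a short polynomial evaluation.

```python
import cmath

def taylor_coeffs_partial_zeta(s0, N, K):
    """Taylor coefficients c_k of F(s) = sum_{n<=N} n^{-s} around s0.

    Since (d/ds)^k n^{-s} = (-log n)^k n^{-s}, we have
    c_k = (1/k!) * sum_{n<=N} (-log n)^k * n^{-s0}.
    """
    coeffs = [0j] * K
    for n in range(1, N + 1):
        neg_log = -cmath.log(n)
        term = cmath.exp(s0 * neg_log)  # n^{-s0}
        for k in range(K):
            coeffs[k] += term
            term *= neg_log / (k + 1)  # build (-log n)^k / k! incrementally
    return coeffs

def eval_taylor(coeffs, ds):
    """Evaluate the expansion at s0 + ds by Horner's rule."""
    acc = 0j
    for c in reversed(coeffs):
        acc = acc * ds + c
    return acc
```

A single pass over the $N$ terms produces the coefficients; each additional evaluation point then costs $K$ multiply-adds instead of another length-$N$ sum.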

The main remaining bottleneck is that of computing the Euler mollifier bounds that keep $H_t$ bounded away from zero for larger values of $x$ beyond the barrier. In going below this bound we are beginning to need quite complicated mollifiers with somewhat poor tail behavior; we may be reaching the point where none of our bounds will succeed in keeping $H_t$ bounded away from zero, so we may be close to the natural limits of our methods.

Participants are also welcome to add any further summaries of the situation in the comments below.

## 27 comments


4 May, 2018 at 3:38 pm

Anonymous: Can the more precise approximation be helpful in reducing the computational complexity?

Is it possible (since is much smaller than ) that a better strategy is to try reducing (e.g. to 0.15) and allow increasing (e.g. to 0.3) to keep reasonable computational complexity?

5 May, 2018 at 2:26 am

KM: These are some of the ongoing numerics threads:

1. Testing the mollifier approximations at N=69098, y=0.2, t=0.2. The triangle bounds are quite negative, so lemma bounds and their approximations are needed. A prime-only mollifier with enough Euler factors (e.g. Euler 41 or Euler 61) seems to come close to the performance of a full mollifier, but due to the larger number of primes needed does not give a speed advantage. The lemma bound is currently being approximated by using the lemma on the first N terms and the triangle-bound tail sum for the remaining D*(N-1) terms. This mixed approximation is not as effective as a potential fully lemma-based approximation, but the latter doesn’t seem easy to derive. The approximation is almost as fast to compute as a non-mollified bound, but did not turn positive.

Some sample bound estimates (T: Triangle bound, L: Lemma bound):

N=69098,y=0.2,t=0.2

T E5 full_moll : -0.1264

T E5 full_approx : -0.2155 (current approx is minimum here)

T E5 prime_moll : -0.2350

T E5 prime_apprx: -0.2509

T E13 prime_moll: -0.1798

T E13 prime_apprx: -0.2208 (current approx is minimum here)

T E41 prime_moll: -0.1489

L E5 full_moll : 0.0699

L E5 full_approx : -0.0460 (current approx is minimum here)

L E5 prime_moll : -0.0217

L E5 prime_apprx: -0.0651

L E7 prime_moll : 0.0006

L E7 prime_apprx: -0.0550 (current approx is minimum here)

L E61 prime_moll: 0.0436
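For readers following along, a toy sketch of how a prime-only (“Euler”) mollifier can be assembled as a short Dirichlet series, one Euler factor per prime. The coefficients `bp` below are illustrative placeholders, not the actual t- and y-dependent weights used in the project.

```python
def dirichlet_convolve(a, b, M):
    """Coefficients of the product of two Dirichlet series, up to index M.

    a and b are dicts mapping n (>= 1) to the coefficient of n^{-s}.
    """
    c = {}
    for m, am in a.items():
        for n, bn in b.items():
            if m * n <= M:
                c[m * n] = c.get(m * n, 0) + am * bn
    return c

def euler_mollifier(primes, bp, M):
    """Prime-only mollifier: the product over p of (1 - bp[p] * p^{-s})."""
    moll = {1: 1}
    for p in primes:
        moll = dirichlet_convolve(moll, {1: 1, p: -bp[p]}, M)
    return moll
```

Adding more Euler factors (Euler 5, Euler 7, …, Euler 41) lengthens the product and hence the number of coefficients that the bound computation must handle, which is why the prime-only mollifiers above trade accuracy against speed.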

2. Checking for potential gains at lower t or y values at N=69098. There are some incremental gains possible by changing t or y slightly, although targeting a dbn bound of close to 0.21 at this height seems difficult.

N=69098;y=0.20;t=0.19;

L E5 full_moll : -0.1567

N=69098;y=0.15;t=0.2;

L E5 full_moll : -0.0726

N=69098;y=0.35;t=0.15;

L E5 full_moll : -0.7369

3. Rudolph and I are planning to prove some conditional statements: if RH is true for x <= X, then dbn <= DBN (starting with X near 10^13 and DBN = 0.18).

4. I was also checking whether (A+B)/B0 can be computed approximately at an arbitrary height. There were two challenges: (1) computing gamma = A0/B0, which was addressable by computing exp(log(A0) - log(B0)); (2) the error associated with truncating the B and A sums to N0 terms (with N0 around 10^5). This seems to work decently as x, y or t increase and (1+y)/2 + (t/4)log(N) > 1, although the behavior is less certain when y and t are near 0.
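The log-space trick in point (1) can be sketched as follows (the numbers here are hypothetical; in practice log(A0) and log(B0) would themselves be assembled from logarithms of gamma-type factors rather than from A0 and B0 directly):

```python
import cmath

def stable_ratio(log_a0, log_b0):
    """Compute gamma = A0/B0 from the complex logarithms of A0 and B0.

    At large height, A0 and B0 individually overflow a double, but their
    log-difference stays moderate, so exp(log A0 - log B0) is safe.
    """
    return cmath.exp(log_a0 - log_b0)

# A0 ~ exp(1000 + 2j) and B0 ~ exp(998 + 1j) each overflow double
# precision, yet the ratio exp(2 + 1j) is perfectly representable.
gamma = stable_ratio(1000 + 2j, 998 + 1j)
```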

7 May, 2018 at 2:42 am

KM: I edited the second-to-last entry in the results wiki page slightly to 6×10^10+83952−1/2 <= x <= 6×10^10+83952+1/2 (it was offset by 0.5 earlier).

Also, we were able to compute lower bounds using the recursive approach up to N=30 million in about 15 hours, which suggests N could be taken even higher.

Also, I just wanted to check whether the conservative approach below is necessary and correct while calculating the lemma bounds incrementally.

Keeping n and the mollifier the same,

decreases with increasing N, hence is conservative

increases with increasing N, hence may not be conservative

increases with increasing N, hence may not be conservative

Since ,

replacing by 1 results in

, making the lower bounds conservative (and can work if the bound still remains positive enough)

While verifying the bounds for ,

we could optimally also replace

We then want

Using

we then require

Hence for example if , we could also use the optimal replacement.

7 May, 2018 at 7:47 am

Terence Tao: This broadly looks correct to me, but it may be good to have a longer description of what you are doing… for instance, are you able to edit the relevant section of the writeup on GitHub? A sketch of the calculations there would be helpful.

7 May, 2018 at 10:36 am

KM: It seems the section to be edited is the placeholder kept for the proof of claims (ii) and (iii) in zerofree.tex. I will start working on it.

10 May, 2018 at 12:12 pm

KM: I edited section 8 of the writeup (pages 35 and 36 of the pdf) to include the recursive lower bounds. Some things I realized while rederiving the formulas are that:

1) if (1-a1)/(1+a1) is replaced with 1, the lemmabound turns into a triangle inequality bound, so this factor has to be kept less than 1,

2) Also, since there are extra d^y factors in the mollified alpha_n, |bn-an|/|bn+an| is more complex than earlier, so computational verification may be needed to ensure it remains below (1-|gamma_Nmax|)/(1+|gamma_Nmax|)

3) While moving from N to N+1, the terms n = dN+[1,d] also undergo a change, not only DN+[1,D], so that should be accounted for.

Some content is still left to be filled, and the formatting could be improved, but I just wanted to confirm on these changes before going forward.

13 May, 2018 at 7:44 am

Terence Tao: Thanks for this! I will go through this writeup soon (I am distracted by some other work at present), but it looks good so far.

13 May, 2018 at 10:11 pm

KM: I was checking if there is one formula (even if somewhat messy) for the main numerator in the lemma bound which can be turned into a repeatedly callable function, to handle all scenarios automatically and streamline the calculations in this approach. Will something like this work,

With,

14 May, 2018 at 11:51 am

Terence Tao: I’m starting to read through your notes. I am wondering how you get the lower bound on in terms of , because the quantities (and also the denominator) vary with N. In principle these quantities should be generally decreasing in N, but we can’t say for certain because of the oscillations in the weights .

(Also, strictly speaking all the lower bounds for or need to be multiplied by a factor of , which is not relevant insofar as keeping away from zero, but which may have some impact on how one can absorb error terms.)

14 May, 2018 at 9:30 pm

KM: On the |moll| part, I was wrongly assuming that it will be less than 1, but since it has terms instead of , it is greater than 1, so one should divide by it instead. I will make that change.

The way I was thinking about the main terms was that if we calculated the exact lemmabound (eqn 78) for both N and N+1,

except potentially when n is in [dN+1,dN+d] for all valid d,

since the dependence on N of the terms comes only through the factors.

Also, since

To account for n in [dN+1,dN+d], we could either subtract from the bound the difference or subtract a larger quantity to keep the calculations simpler.

Hence, if we choose the right numerators for each n (one of three or four choices), decreases for a given n and N varying.

15 May, 2018 at 9:05 pm

Terence Tao: Ah, I see what you are doing now. This seems to work, in theory at least – how does it perform at the current range of parameters (N=69098, Euler 7 mollifier)?

16 May, 2018 at 1:59 am

KM: With t=0.2, y=0.2, N=69098, Euler 7, we get 0.0199 (now dividing by |moll|), which is a bit low if we want to remain above a threshold (e.g. 0.02).

This is not a bottleneck, though, since Rudolph recently wrote an optimized script that computes the exact lemmabound for each N much faster than before, which resolves the bottleneck up to about N=200k or so. Using it to cover N up to 1.5 million, or when dealing with much higher N, would still take significant time.

If we use that script to handle N=69098 to 80000 and then use this approach, we get

80000,0.2,0.2,euler5 -> 0.0345

80000,0.2,0.2,euler7 -> 0.0615

which provides a more comfortable buffer.

17 May, 2018 at 3:41 pm

Terence Tao: Hmm, sounds like it’s going to be tough to push much further then. Given how complicated the argument has become, maybe it is a good time to “declare victory” on the Polymath project and focus on finalising all the various steps required to fully verify that $\Lambda \le 0.22$? It sounds like if we step down from Euler 7 to Euler 5, then Euler 3, then Euler 2, we would have a good chance of covering the required range of N in a computationally feasible amount of time.

18 May, 2018 at 1:54 am

KM: The file with exact lemmabounds (Euler 5 mollifier) for N in [69098, 80000] is kept here, and the one with incremental lemmabounds (Euler 5 mollifier) between [80000, 1.5 mil] is here (bounds shown for N=100K). The first one ran in about 6 hours, while the latter took about an hour. Also sharing a plot of what the latter file looks like visually.

I have also edited section 8 of the writeup accordingly.

Also, Rudolph and I found that it is possible to push X to something like 10^13 (or maybe higher) using the multiple evaluation algorithm for the barrier strip and the recursive approach for the lower bounds (both would take about a day each at this height). But with RH not verified at this height, the results will be conditional and could be treated as follow-on work.

18 May, 2018 at 3:09 am

KM: Also sharing the results of a parallel exercise:

Exact lemmabounds for N=80k to 150k, y=0.2, t=0.2 with Euler 3 mollifier

Exact lemmabounds for N=150k to 250k, y=0.2, t=0.2 with Euler 2 mollifier

These took around a day each. The second one is currently on hold.

8 May, 2018 at 8:08 am

Anonymous: In the writeup, third line from the end of page 41, the variation of the argument of on the imaginary axis is actually zero (not merely ), since is positive on the imaginary axis (as is clearly seen from its defining integral).

Also in this line, it seems that (in the first portion of the rectangle upper edge) it should be (instead of ).

8 May, 2018 at 11:11 pm

Anonymous: In the results wiki page (zero-free regions), the last two lines should be dated “May 1, 2018” instead of 2017 :) As of today, can we claim “Completes proof of \Lambda = 0.22” for the last entry, or are we still stuck with “This should establish \Lambda = 0.22”?

[Corrected, thanks – T.]

15 May, 2018 at 4:19 am

Sergei Ofitserov: Dear Terence Tao! I take KM signal and count(consider) necessity to express my opinion. Analyse estimates TE41 and LE61,confidently can to say, what this two-sided stair-well(flight), lower part at which wreck condition! For passage at side”+” it is necessary rig up lengthwise girder(s) of Ricci flow. Thanks! Sergei.

20 May, 2018 at 5:34 am

Anonymous: Going back and forth from this “denglish” to Russian and back to English with Google Translate, this is Sergei’s comment in a “normal” language (I don’t understand it any better anyway):

Dear Terence Tao! I accept the KM signal and consider (I believe) the need to express my opinion. Analyzing the estimates of TE41 and LE61, we can confidently say that this is a two-way ladder-flight (flight), the lower part, in which there is an emergency condition! To pass from the “+” side, it is necessary to install a longitudinal beam (s) of the Ricci flow. Thanks! Sergei.

20 May, 2018 at 6:56 am

Anonymous: In the write-up paper, there is a typo in Theorem 3.3, point (iii):

$X\leq x \geq X$ should be $X \leq x \leq X$.

[Fixed, thanks – T.]

21 May, 2018 at 7:01 am

Anonymous: Dear Terry,

If you now return to the twin prime conjecture with your strongest methods to attack the bound H=246, I think everyone would surely be amused. For many nights I have thought that the twin prime conjecture really ties up the best minds in the world and costs everyone a lot of time. This annoys me greatly. Terry, hurry up! Don’t let it make you very old. If I were an expert mathematician, I would break it.

21 May, 2018 at 10:45 am

KM: In the eighth thread there was a discussion (https://terrytao.wordpress.com/2018/04/17/polymath15-eighth-thread-going-below-0-28/#comment-496959) on whether expanding in the multi-evaluation algorithm for the barrier was beneficial.

At the time, I had not expanded this term, and calculated the stored sums for each t step separately, since the computation time at x near 6*10^10 was reasonable. However, the stored sum calculation becomes a relative bottleneck at x near 10^13 or higher.

By incurring a larger overhead at the start, where stored sums are calculated for each term in the Taylor expansion of (up to a large number of terms, like 40 or 50, to maintain the same accuracy as earlier), we do see a significant benefit. Testing near x=6*10^10, y=0.2, t=0.2 resulted in an 8x reduction in time (from 2 hours to about 15 minutes).
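The stored-sums idea can be sketched generically (a toy model: terms of the form `a[n] * exp(t * g[n])` stand in for the actual t-dependence of the summands). One pays once to compute a stored sum per Taylor coefficient in t; every subsequent t step is then a short polynomial evaluation rather than a fresh pass over all terms.

```python
import math

def stored_sums(a, g, K):
    """Precompute S_k = sum_n a[n] * g[n]^k / k!, so that
    S(t) = sum_n a[n] * exp(t * g[n]) ~ sum_k S_k * t^k."""
    S = [0.0] * K
    for an, gn in zip(a, g):
        term = an
        for k in range(K):
            S[k] += term
            term *= gn / (k + 1)  # a_n * g_n^k / k!, built incrementally
    return S

def eval_at(S, t):
    """Evaluate the Taylor polynomial at t by Horner's rule."""
    acc = 0.0
    for s in reversed(S):
        acc = acc * t + s
    return acc

# One-time overhead: K stored sums over the (here, tiny) list of terms.
S = stored_sums([1.0, 2.0], [0.3, -0.5], 30)
# Each t step is now O(K) instead of a pass over all terms.
approx = eval_at(S, 0.2)
exact = 1.0 * math.exp(0.3 * 0.2) + 2.0 * math.exp(-0.5 * 0.2)
```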

21 May, 2018 at 12:29 pm

Anonymous: Is it possible to reduce the number of terms (while maintaining the required accuracy) by replacing the Taylor series with a suitably truncated series of its Chebyshev polynomial expansion (for a better uniform approximation over appropriate intervals with fewer terms)?
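A quick numerical illustration of this suggestion (standard approximation theory, not tied to the project's specific expansion): on [-1, 1], a degree-6 Chebyshev interpolant of exp(x) has a noticeably smaller uniform error than the degree-6 Taylor polynomial about 0.

```python
import math
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

xs = np.linspace(-1.0, 1.0, 1001)
f = np.exp(xs)

# Degree-6 Taylor polynomial of exp about 0.
taylor = sum(xs ** k / math.factorial(k) for k in range(7))

# Degree-6 Chebyshev interpolant (near-minimax on [-1, 1]).
cheb = Chebyshev.interpolate(np.exp, 6)

err_taylor = float(np.max(np.abs(f - taylor)))
err_cheb = float(np.max(np.abs(f - cheb(xs))))
```

For the project's application, the t- or x-interval of interest would first need mapping to [-1, 1], and the truncation error would still have to be bounded rigorously rather than measured numerically as here.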

21 May, 2018 at 12:43 pm

Terence Tao: Looks like the barrier part of the argument is now quite tractable!

I will be rather busy with other things in the next week or two, but I plan to return to the writeup and try to clean up Section 8 in particular, to write down all the different parts of the argument before we forget everything. For instance, in the barrier computations we still have to show that the 50-term Taylor expansion actually is as accurate as it looks; I’ll have to think about how best to show that rigorously (certainly I don’t want to apply the Leibniz rule 50 times and estimate all the resulting terms!). I am thinking perhaps of using the generalised Cauchy integral formula instead to get a reasonably good estimate (we do have tens of orders of magnitude to spare, so hopefully there should be a relatively painless way to proceed).

24 May, 2018 at 4:21 pm

Yuly Shipilevsky: This is it.

http://vixra.org/abs/1805.0399

26 May, 2018 at 9:35 pm

Yuly Shipilevsky: Guys, second “portion” of N is “dressed into clothes” inside “m”

26 May, 2018 at 11:33 am

El proyecto Polymath15 logra reducir la cota superior de la constante de Bruijn-Newman | Ciencia | La Ciencia de la Mula Francis

[…] The numerical bound Λ ≤ 0.48 was confirmed analytically in the sixth thread [Tao, 18 Mar 2018]. The progress may seem modest, but Polymath projects need a gentle start so that all the participants can get to know the tools to be used and contribute relevant ideas. Everything indicated that the numerical method could be pushed further [Tao, 28 Mar 2018], reaching a bound of Λ ≤ 0.28 [Tao, 17 Apr 2018]. The barrier method developed for the analytic attack promised further small improvements, which reached Λ ≤ 0.22 [Tao, 04 May 2018]. […]