This is the tenth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

Most of the progress since the last thread has been on the numerical side, in which the various techniques to numerically establish zero-free regions to the equation H_t(x+iy)=0 have been streamlined, made faster, and extended to larger heights than were previously possible.  The best bound for \Lambda now depends on the height to which one is willing to assume the Riemann hypothesis.  Using the conservative verification up to height (slightly larger than) 3 \times 10^{10}, which has been confirmed by independent work of Platt et al. and Gourdon-Demichel, the best bound remains at \Lambda \leq 0.22.  Using the verification up to height 2.5 \times 10^{12} claimed by Gourdon-Demichel, this improves slightly to \Lambda \leq 0.19, and if one assumes the Riemann hypothesis up to height 5 \times 10^{19} the bound improves to \Lambda \leq 0.11, contingent on a numerical computation that is still underway.   (See the table below the fold for more data of this form.)  This is broadly consistent with the expectation that the bound on \Lambda should be inversely proportional to the logarithm of the height at which the Riemann hypothesis is verified.

As progress seems to have stabilised, it may be time to transition to the writing phase of the Polymath15 project.  (There are still some interesting research questions to pursue, such as numerically investigating the zeroes of H_t for negative values of t, but the writeup does not necessarily have to contain every single direction pursued in the project; if enough additional interesting findings are unearthed, one could always consider writing a second paper, for instance.)

Below the fold is the detailed progress report on the numerics by Rudolph Dwars and Kalpesh Muchhal.

— Quick recap —

The effectively bounded and normalised Riemann-Siegel type asymptotic approximation for H_t(x+yi):

\displaystyle \frac{H_t(x+yi)}{B_0(x+yi)} = ABB_{eff}(x + iy)+ \mathcal{O}_{\le}(e_A+e_B + e_{C,0})

enables us to explore its complex zeros and to establish zero-free regions. By choosing a promising combination of y_0 and t_0, and then numerically and analytically showing that the right-hand side doesn’t vanish in the rectangular “canopy” x = 0 \dots \infty, t = t_0, y = y_0 \dots 1 (or at a point on the blue hyperbola), a new upper bound for \Lambda is established: per the writeup, such a zero-free region yields \Lambda \le t_0 + \frac{1}{2} y_0^2. This is summarized in the following visual:

[Figure: numerics1]

— The Barrier approach —

To verify that ABB_{eff} \ne 0 in such a rectangular strip, we have adopted the so-called Barrier approach, which consists of three stages (illustrated in the picture below):

[Figure: numerics2]

  1. Use the numerical verification work on the RH already done by others. Independent teams have verified the RH up to X=3 \times 10^{10}, and a single study took it up to X=2.5 \times 10^{12}. This work allows us to rule out, up to a certain X, that a complex zero has flown through the critical strip into any defined canopy. To also cover the x-domains that lie beyond these known verifications, we have to assume the RH up to X; this then yields a \Lambda that is conditional on this assumption.
  2. Complex zeros could also have flown horizontally into the ‘forbidden tunnel’ at high velocity. To numerically verify that this has not occurred, a Barrier is introduced at X and checked for any zeros having flown around, through or over it.
  3. Verifying the range X \leq x < \infty (or N_a \leq N < \infty) is done by testing that the lower bound of ABB_{eff} always stays above the upper bound of the error terms. This has to be done numerically up to a certain point N=N_b, after which an analytical proof takes over.

So, new numerical computations are required to verify that both the Barrier at X and the non-analytical part of the range N_a \leq N \leq  N_b are zero-free for a certain choice of t_0, y_0.

— Verifying the Barrier is zero-free —

So, how to numerically verify that the Barrier is zero-free?

  1. The Barrier is required to have two nearby screens at X and X+1 to ensure that no complex zeros could fly around it. Hence, it has the 3D structure: X \dots X+1, y_0 \dots 1, 0 \dots t_0.
  2. For the numerical verification that the Barrier is zero-free, it is treated as a ‘pile’ of x, y rectangles. For each rectangle, the winding number is computed using the argument principle and Rouché’s theorem (see the sketch after this list).
  3. For each rectangle, the number of mesh points required is decided using a bound on the \left|\frac{\partial f}{\partial z}\right|-derivative, and the t-step is decided using a bound on the \left|\frac{\partial f}{\partial t}\right|-derivative (here f stands for ABB_{eff}).
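
As a concrete illustration of items 2 and 3, here is a minimal Python sketch of the winding-number computation on a fixed rectangle mesh. It assumes a generic analytic f; the project’s actual code evaluates ABB_{eff} in pari/gp and ARB, with the mesh density chosen from the derivative bound.

```python
# A minimal sketch: the winding number is obtained by summing the phase
# changes of f over a fixed mesh along the rectangle boundary. The mesh must
# be fine enough (as decided from a |df/dz| bound) that each phase jump
# stays below pi.
import cmath

def winding_number(f, x0, x1, y0, y1, pts_per_side=100):
    # Counter-clockwise rectangle boundary, pts_per_side points per edge.
    corners = [complex(x0, y0), complex(x1, y0), complex(x1, y1), complex(x0, y1)]
    mesh = []
    for k in range(4):
        a, b = corners[k], corners[(k + 1) % 4]
        mesh += [a + (b - a) * j / pts_per_side for j in range(pts_per_side)]
    mesh.append(mesh[0])  # close the contour
    # Argument principle: total phase change / 2*pi = number of zeros inside.
    total = sum(cmath.phase(f(z1) / f(z0)) for z0, z1 in zip(mesh, mesh[1:]))
    return round(total / (2 * cmath.pi))

# Example: f(z) = z - (0.5 + 0.5j) has exactly one zero inside the unit square.
print(winding_number(lambda z: z - (0.5 + 0.5j), 0, 1, 0, 1))  # -> 1
```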

[Figure: numerics3]

Optimizations used for the barrier computations

  • To efficiently calculate all required mesh points of ABB_{eff} on the rectangle sides, we used a pre-calculated stored-sum matrix, Taylor expanded in the x, y and t directions. The resulting polynomial is used to calculate the required mesh points (see the sketch after this list). The formula for the stored-sum matrix:

\displaystyle \mathrm{mat}(I \times J) = {\frac {1}{i!\,j!\,{4}^{j}}\sum _{h=1}^{N}{h}^{ix/2-1/2} \left( \ln \left( 2\,{\frac {h}{N}} \right) \right) ^{i+2\,j}}

with i=0 \dots I and j=0 \dots J, where I and J are the number of Taylor expansion terms required to achieve the required level of accuracy (in our computations we used 20 digits and an algorithm to automatically determine I and J).

  • We found that a more careful placement of the Barrier, at some X+\epsilon, makes a significant difference in the computation time required. A good location is one where ABB_{eff} has a large relative magnitude. Since ABB_{eff} retains some Euler product structure, such locations can be quickly guessed by evaluating a certain Euler product up to a small number of primes, for multiple candidates in an X-range (see the sketch after this list).
  • Since \left|\frac{\partial f}{\partial z}\right| and \left|\frac{\partial f}{\partial t}\right| have smooth, i.e. non-oscillatory, behavior, it is feasible to bound them using conservative numeric integrals of the Lemma 9.3 summands, \textrm{summand}_{n=1} + \int_{1}^{N} \textrm{summand}\,dn, instead of the actual summation; this is significantly faster (the time complexity of the estimate becomes independent of N).
  • Using a fixed mesh for a rectangle contour (which can change from rectangle to rectangle) allows for vectorized computations and is significantly faster than using an adaptive mesh. To determine the number of mesh points, it is assumed that \min ABB_{eff}^{mesh} will stay above 1 (which is expected given the way the X location has been chosen, and is verified afterwards, once ABB_{eff} has been computed at all the mesh points). The number is chosen as \bigg \lceil \left|\frac{\partial f}{\partial z}\right| \bigg\rceil.
  • Since \min(ABB_{eff}^{mesh}) for the above fixed mesh generally comes out well above 1, the ABB_{eff} lower bound along the entire contour (not just at the mesh points) is higher than it would be with an adaptive mesh. This property is used to obtain a larger t-step while moving in the t-direction:

\displaystyle t_{next} = t_{curr} + \frac{\min ABB_{eff}^{mesh} - 0.5}{\big \lceil \left|\frac{\partial f}{\partial t}\right| \big\rceil}
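
To make two of these optimizations concrete, here is a hedged Python sketch (the project’s actual implementations are in pari/gp and ARB, and work at 20-digit precision rather than machine floats). The matrix mat, the offsets dz and dt, the exponent used in euler_score, and the base point and scan step in the example are illustrative stand-ins for the project-specific quantities above.

```python
def taylor_eval(mat, dz, dt):
    """Evaluate the stored-sums Taylor polynomial sum_{i,j} mat[i][j] * dz^i * dt^j.

    mat is the (I+1) x (J+1) matrix computed once per Barrier location X;
    dz stands in for the (complex) offset in the x,y-plane and dt for the
    offset in t. After the one-off matrix computation, every mesh point
    costs only this cheap double loop."""
    total = 0j
    dt_pow = 1.0
    for j in range(len(mat[0])):
        dz_pow = 1.0
        for i in range(len(mat)):
            total += mat[i][j] * dz_pow * dt_pow
            dz_pow *= dz
        dt_pow *= dt
    return total

def euler_score(X, y=0.2, primes=(2, 3, 5, 7, 11, 13)):
    """Cheap proxy for the relative magnitude of ABB_eff near x = X: a
    truncated Euler product over a few primes, using the exponent of the
    n^{-(1+y-ix)/2} terms of the B-sum (an illustrative choice). A larger
    score suggests a better, i.e. faster-to-verify, Barrier location."""
    s = complex((1 + y) / 2, -X / 2)
    score = 1.0
    for p in primes:
        score *= 1 / abs(1 - p ** (-s))
    return score

# Scan candidate offsets near a base X and keep the most promising one:
candidates = [6e10 + k for k in range(0, 5000, 50)]
print(max(candidates, key=euler_score))
```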

— Verifying the range N_a \leq N \leq N_b —

This leaves us with ensuring that the range N_a \leq N \leq N_b (where N_a is the value of N corresponding to the barrier X; in the writeup N(x) = \lfloor \sqrt{x/4\pi} \rfloor) is zero-free, by checking that for each N the ABB_{eff} lower bound always exceeds the upper bound of the error terms.

  1. From the theory, two lower bounds are available: the Lemma bound (eq. 80 in the writeup) and an approximate Triangle bound (eq. 79 in the writeup). Both bounds can be ‘mollified’ by choosing an increasing number of primes (up to a point) until the bound is sufficiently positive.
  2. The Lemma bound is used to find the number of ‘mollifiers’ required to make the bound positive at N_a. We found that using the primes 2, 3, 5, 7 was the largest mollifier still allowing acceptable computational performance.
  3. The approximate Triangle bound evaluates faster and is used to establish the mollified (either 0 primes or only the prime 2) N_b end point, before the analytical lower bound takes over.
  4. The Lemma bound is then also used to verify that for each N in [N_a, N_b], the lower bound stays sufficiently above the error terms (see the sketch below). The Lemma bound only needs to be verified on the line segment y=y_0, t=t_0, N_a \leq N(x) \leq N_b, since it increases monotonically as y goes to 1.
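
Schematically, step 4 amounts to the following scan, where lemma_lower_bound(N) and error_upper_bound(N) are hypothetical helpers standing in for eq. 80 and the error terms:

```python
# Schematic only: scan the range and report the first N (if any) where the
# mollified Lemma lower bound fails to clear the error-term upper bound.
def range_is_zero_free(lemma_lower_bound, error_upper_bound, Na, Nb):
    for N in range(Na, Nb + 1):
        if lemma_lower_bound(N) <= error_upper_bound(N):
            return False, N  # first failing N; try a stronger mollifier here
    return True, None
```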

[Figure: numerics4]

Optimizations used for the Lemma bound calculations

  • To speed up computations, a fast “sawtooth” mechanism has been developed. It calculates only the minimally required incremental Lemma bound terms, and triggers a full recalculation only when the incremental bound drops below a defined threshold (set sufficiently above the error bounds); a schematic sketch follows the formulas below.

\displaystyle |f_{N_{min}}| \geq \frac{1}{\sum_{d=1}^D \frac{|\lambda_d|}{d^{\sigma_{N_{min}}}}} \left(1 - |\gamma_{N_{min}}| - \sum\limits_{n=2}^{DN_{min}} \frac{g(n,N_{min},N_{max})}{n^{\sigma_{N_{min}}}} \right) - |\gamma_{N_{min}}| \sum_{n=1}^{N_{min}} \frac{b_n^t (n^{|\kappa|} - 1)}{n^{\sigma_{N_{min}}-y}}

\displaystyle|f_{N+1}| := |f_{N}| - \frac{1}{\sum_{d=1}^D \frac{|\lambda_d|}{d^{\sigma_{N+1}}}} \sum\limits_{1 \leq d \leq D:d|D}(\sum\limits_{n=dN+1}^{dN+d} \frac{g(n,N_{min},N_{max})}{n^{\sigma_{N+1}}}) - |\gamma_{N+1}| \frac{b_{N+1}^t ({(N+1)}^{|\kappa|} - 1)}{(N+1)^{\sigma_{N+1}-y}}

where

\displaystyle g(n,N,N_{max}) = \max\left(\frac{1-|\gamma_{N_{max}}|}{1+|\gamma_{N_{max}}|} |\beta_{n,N} + \alpha_{n,N}|,\ |\beta_{n,N} - \alpha_{n,N}|,\ \left|\beta_{n,N} - \left|\frac{\gamma_{N_{max}}}{\gamma_{N}}\right| \alpha_{n,N}\right|\right)

(as presented in section 9 of the writeup, p. 42)
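
A hedged sketch of the sawtooth mechanism itself, with hypothetical helpers full_lemma_bound(N) (the expensive evaluation above) and decrement(N) (the cheap incremental terms of the |f_{N+1}| recurrence):

```python
# The running value needs only the cheap decrement most of the time; a full
# (expensive) re-evaluation is triggered only when it dips below `threshold`,
# which resets the bound upward -- hence the sawtooth shape over N.
def sawtooth_scan(full_lemma_bound, decrement, Na, Nb, threshold):
    bound = full_lemma_bound(Na)
    for N in range(Na, Nb):
        bound -= decrement(N)  # incremental step from N to N+1
        if bound < threshold:
            bound = full_lemma_bound(N + 1)  # full recalculation ("tooth")
            if bound < threshold:
                return False, N + 1  # the bound is genuinely too low here
    return True, None
```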

 — Software used —

To accommodate the above, the following software has been developed in both pari/gp (https://pari.math.u-bordeaux.fr) and ARB (http://arblib.org):

For verifying the Barrier:

  • Barrier_Location_Optimizer to find the optimal X, X+1 location to place the Barrier.
  • Stored_Sums_Generator to generate, in matrix form, the coefficients of the Taylor polynomial. This is a one-off activity for a given X, after which the coefficients can be used for winding number computations in different y and t ranges.
  • Winding_Number_Calculator to verify that no complex zeros passed the Barrier.

For verifying the N_a \leq N \leq N_b range:

  • Nb_Location_Finder to find the number of mollifiers needed to make the bound positive, and thereby locate the N_b end point.
  • Lemmabound_calculator to evaluate the Lemma bound. First, different mollifiers are tried to see which one gives a sufficiently positive bound at N_a; the calculator is then used with that mollifier to evaluate the bound for each N in [N_a,N_b]. The [N_a,N_b] range can also be broken up into sub-ranges, which can then be tackled with different mollifiers.
  • LemmaBound_Sawtooth_calculator to verify that each incrementally calculated Lemma bound stays above the error bounds. Generally this script and the Lemmabound_calculator script are substitutes for each other, although the latter may also be used for some initial portion of the N range.

Furthermore we have developed software to compute:

  • ABB_{eff} as \left|\frac{A+B}{B_0}\right| and/or \left|\frac{A+B-C}{B_0}\right|.
  • the exact H_t(x+iy) value (using the bounded version of the 3rd-integral approach; a simple quadrature sketch follows below).
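
For reference, here is a small mpmath sketch of the direct quadrature route to H_t (not the bounded 3rd-integral method used by the project’s scripts), based on the defining integral H_t(z) = \int_0^\infty e^{tu^2}\,\Phi(u)\,\cos(zu)\,du; the truncation points below are illustrative:

```python
from mpmath import mp, exp, cos, pi, quad, mpc

mp.dps = 30  # working precision (digits)

def Phi(u, terms=30):
    # Super-exponentially decaying kernel; a handful of terms suffices.
    return sum((2 * pi**2 * n**4 * exp(9 * u) - 3 * pi * n**2 * exp(5 * u))
               * exp(-pi * n**2 * exp(4 * u)) for n in range(1, terms + 1))

def H(t, z, cutoff=2):
    # Phi decays so fast that truncating the integral at u = 2 is ample here.
    return quad(lambda u: exp(t * u**2) * Phi(u) * cos(z * u), [0, cutoff])

print(H(0.2, mpc(10, 0.4)))  # H_t at t = 0.2, x = 10, y = 0.4
```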

The software supports parallel processing through multi-threading and grid computing.

 — Results achieved —

For various combinations of y_0, t_0, these are the numerical outcomes:

[Figure: numerics5]

The numbers show that we have now numerically verified \Lambda \le 0.22 (at two different Barrier locations, in fact). Also, conditional on the RH being verified up to various x > 6 \times 10^{10}, we have now reached \Lambda \le 0.12. We are cautiously optimistic that the tools available right now bring even a conditional \Lambda \le 0.10 within reach of computation.

 — Timings for verifying DBN \le 0.22 —

Procedure and timing:

  • Stored sums generation at X = 6 \times 10^{10} + 83951.5: 42 sec
  • Winding number check in the Barrier for t=[0, 0.2], y=[0.2, 1]: 42 sec
  • Lemma bounds using the incremental method for N=[69098, 250000] and a 4-prime mollifier {2,3,5,7}: 118 sec
  • Overall: ~200 sec

Remarks:

  • Timings are to be multiplied by a factor of ~3.2 for each additional order of magnitude of x.
  • Parallel processing significantly improves speed (e.g. the stored sums generation was done in < 7 sec).
  • The analytical bound with the prime-2 mollifier at N=250,000, y=0.2, t=0.2 is ~0.098.

 — Links to computational results and software used —

Numerical results achieved:

Software scripts used: