This is the first official “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant ${\Lambda}$. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The proposal naturally splits into at least three separate (but loosely related) topics:

• Numerical computation of the entire functions ${H_t(z)}$, with the ultimate aim of establishing zero-free regions of the form ${\{ x+iy: 0 \leq x \leq T, y \geq \varepsilon \}}$ for various ${T, \varepsilon > 0}$.
• Improved understanding of the dynamics of the zeroes ${z_j(t)}$ of ${H_t}$.
• Establishing the zero-free nature of ${H_t(x+iy)}$ when ${y \geq \varepsilon > 0}$ and ${x}$ is sufficiently large depending on ${t}$ and ${\varepsilon}$.

Below the fold, I will present each of these topics in turn, to initiate further discussion in each of them. (I thought about splitting this post into three to have three separate discussions, but given the current volume of comments, I think we should be able to manage for now having all the comments in a single post. If this changes then of course we can split up some of the discussion later.)

To begin with, let me present some formulae for computing ${H_t}$ (inspired by similar computations in the Ki-Kim-Lee paper) which may be useful. The initial definition of ${H_t}$ is

$\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du$

where

$\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(- \pi n^2 e^{4u} )$

is a variant of the Jacobi theta function. We observe that ${\Phi}$ in fact extends analytically to the strip

$\displaystyle \{ u \in {\bf C}: -\frac{\pi}{8} < \mathrm{Im} u < \frac{\pi}{8} \}, \ \ \ \ \ (1)$

as ${e^{4u}}$ has positive real part on this strip. One can use the Poisson summation formula to verify that ${\Phi}$ is even, ${\Phi(-u) = \Phi(u)}$ (see this previous post for details). This lets us obtain a number of other formulae for ${H_t}$. Most obviously, one can unfold the integral to obtain

$\displaystyle H_t(z) = \frac{1}{2} \int_{-\infty}^\infty e^{tu^2} \Phi(u) e^{izu}\ du.$
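As an aside on numerics: the sum defining ${\Phi}$ converges extremely rapidly, so for moderate ${|u|}$ a direct truncation already gives machine-precision values. Here is a minimal Python sketch (the truncation level is an ad hoc choice of mine); since the evenness ${\Phi(-u) = \Phi(u)}$ is a nontrivial consequence of Poisson summation rather than being visible term-by-term, it doubles as a sanity check on the implementation:

```python
import math

def Phi(u, N=60):
    # truncation of Phi(u) = sum_{n>=1} (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u});
    # the terms decay like exp(-pi n^2 e^{4u}), so a modest N suffices for moderate |u|
    total = 0.0
    for n in range(1, N + 1):
        total += ((2 * math.pi**2 * n**4 * math.exp(9 * u)
                   - 3 * math.pi * n**2 * math.exp(5 * u))
                  * math.exp(-math.pi * n**2 * math.exp(4 * u)))
    return total

print(Phi(0.0))              # about 0.4467
print(Phi(0.2) - Phi(-0.2))  # evenness check: should vanish to machine precision
```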

In my previous paper with Brad, we used this representation, combined with Fubini’s theorem to swap the sum and integral, to obtain a useful series representation for ${H_t}$ in the ${t<0}$ case. Unfortunately this is not possible in the ${t>0}$ case because expressions such as ${e^{tu^2} e^{9u} \exp( -\pi n^2 e^{4u} ) e^{izu}}$ diverge as ${u}$ approaches ${-\infty}$. Nevertheless we can still perform the following contour integration manipulation. Let ${0 \leq \theta < \frac{\pi}{8}}$ be fixed. The function ${\Phi}$ decays super-exponentially fast (much faster than ${e^{tu^2}}$, in particular) as ${\mathrm{Re} u \rightarrow +\infty}$ with ${0 \leq \mathrm{Im} u \leq \theta}$; as ${\Phi}$ is even, we also have this decay as ${\mathrm{Re} u \rightarrow -\infty}$ with ${0 \leq \mathrm{Im} u \leq \theta}$ (this is despite each of the summands in ${\Phi}$ having much slower decay in this direction – there is considerable cancellation!). Hence by the Cauchy integral formula we have

$\displaystyle H_t(z) = \frac{1}{2} \int_{i\theta-\infty}^{i\theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du.$

Splitting the horizontal line from ${i\theta-\infty}$ to ${i\theta+\infty}$ at ${i\theta}$ and using the even nature of ${\Phi(u)}$, we thus have

$\displaystyle H_t(z) = \frac{1}{2} ( \int_{i\theta}^{i\theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du + \int_{-i\theta}^{-i\theta+\infty} e^{tu^2} \Phi(u) e^{-izu}\ du ).$

Using the functional equation ${\Phi(\overline{u}) = \overline{\Phi(u)}}$, we thus have the representation

$\displaystyle H_t(z) = \frac{1}{2} ( K_{t,\theta}(z) + \overline{K_{t,\theta}(\overline{z})} ) \ \ \ \ \ (2)$

where

$\displaystyle K_{t,\theta}(z) := \int_{i\theta}^{i \theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du$

$\displaystyle = \sum_{n=1}^\infty 2 \pi^2 n^4 I_{t, \theta}( z - 9i, \pi n^2 ) - 3 \pi n^2 I_{t,\theta}( z - 5i, \pi n^2 )$

where ${I_{t,\theta}(b,\beta)}$ is the oscillatory integral

$\displaystyle I_{t,\theta}(b,\beta) := \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \beta e^{4u} + i b u )\ du. \ \ \ \ \ (3)$

The formula (2) is valid for any ${0 \leq \theta < \frac{\pi}{8}}$. Naively one would think that it would be simplest to take ${\theta=0}$; however, when ${z=x+iy}$ and ${x}$ is large (with ${y}$ bounded), it seems asymptotically better to take ${\theta}$ closer to ${\pi/8}$, in particular something like ${\theta = \frac{\pi}{8} - \frac{1}{4x}}$ seems to be a reasonably good choice. This is because the integrand in (3) becomes significantly less oscillatory and also much lower in amplitude; the ${\exp(ibu)}$ term in (3) now generates a factor roughly comparable to ${\exp( - \pi x/8 )}$ (which, as we will see below, is the main term in the decay asymptotics for ${H_t(x+iy)}$), while the ${\exp( - \beta e^{4u} )}$ term still exhibits a reasonable amount of decay as ${u \rightarrow \infty}$. We will use the representation (2) in the asymptotic analysis of ${H_t}$ below, but it may also be a useful representation to use for numerical purposes.
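As a quick sanity check on (2), one can compare it numerically against the original integral representation, say at ${t=0}$ and ${z=30}$, with ${\theta = \frac{\pi}{8} - \frac{1}{4x}}$ as suggested above. The following Python sketch does this (all truncation levels and step sizes are ad hoc choices of mine, chosen only to make the two answers agree to several digits):

```python
import math, cmath

def Phi(u):
    # truncated sum for Phi; valid in the strip |Im u| < pi/8, where Re e^{4u} > 0
    total = 0j
    for n in range(1, 81):
        term = ((2*math.pi**2*n**4*cmath.exp(9*u) - 3*math.pi*n**2*cmath.exp(5*u))
                * cmath.exp(-math.pi*n**2*cmath.exp(4*u)))
        total += term
        if n > 5 and abs(term) < 1e-25:   # terms are eventually tiny; stop early
            break
    return total

def simpson(g, a, b, n):
    # composite Simpson rule (n even)
    h = (b - a) / n
    return (g(a) + g(b) + sum((4 if k % 2 else 2)*g(a + k*h) for k in range(1, n))) * h/3

def H_direct(t, z):
    # the defining integral; the integrand is negligible past u = 1
    return simpson(lambda u: cmath.exp(t*u*u)*Phi(u)*cmath.cos(z*u), 0.0, 1.0, 2000)

def H_contour(t, z, theta):
    # formula (2): H_t(z) = (K_{t,theta}(z) + conj(K_{t,theta}(conj z)))/2
    def K(w):
        g = lambda s: cmath.exp(t*(1j*theta + s)**2)*Phi(1j*theta + s)*cmath.exp(1j*w*(1j*theta + s))
        return simpson(g, 0.0, 1.5, 7500)
    return (K(z) + K(z.conjugate()).conjugate()) / 2

z = 30 + 0j
h1 = H_direct(0.0, z)
h2 = H_contour(0.0, z, math.pi/8 - 1/(4*z.real))
print(abs(h1 - h2) / abs(h1))   # the two representations agree to several digits
```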

— 1. Numerical investigation of ${H_t(z)}$ —

Python and Matlab code to compute ${H_t(x+iy)}$ for medium-sized values of ${x}$ is now available in the Polymath15 github repository. It is expected that all the zeroes of these functions for ${t \geq 0}$ are real (this is equivalent to the Riemann hypothesis). For ${t=0}$, ${H_0(x)}$ is zero precisely when ${\frac{1}{2}+\frac{ix}{2}}$ is a non-trivial zero of the Riemann zeta function, so the first few zeroes of ${H_0}$ occur at approximately

$\displaystyle 28.25, 42.05, 50.05, 60.85, 65.85, 75.15, 81.85, 86.65, 96.05, 99.55.$
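For instance, the first of these zeroes can be located by a crude quadrature of the defining integral for ${H_0}$ together with bisection, and the same quadrature confirms the first three listed zeroes by counting sign changes (all step sizes below are ad hoc choices of mine; as remarked later in this section, roundoff prevents this naive method from going much past ${x \approx 60}$ in double precision):

```python
import math

def Phi(u, N=10):
    return sum((2*math.pi**2*n**4*math.exp(9*u) - 3*math.pi*n**2*math.exp(5*u))
               * math.exp(-math.pi*n**2*math.exp(4*u)) for n in range(1, N + 1))

# precompute Phi on a Simpson grid for H_0(x) = int_0^1 Phi(u) cos(xu) du
STEPS = 10000
DU = 1.0 / STEPS
PHIS = [Phi(k * DU) for k in range(STEPS + 1)]
WTS = [1] + [4 if k % 2 else 2 for k in range(1, STEPS)] + [1]

def H0(x):
    return sum(w * p * math.cos(x * k * DU)
               for k, (w, p) in enumerate(zip(WTS, PHIS))) * DU / 3

# bisect the sign change of H_0 between 28 and 29
a, b, fa = 28.0, 29.0, H0(28.0)
for _ in range(30):
    m = 0.5 * (a + b)
    fm = H0(m)
    if fa * fm > 0:
        a, fa = m, fm
    else:
        b = m
print(0.5 * (a + b))   # about 28.2695, i.e. twice the first zero 14.1347... of zeta

# sign changes on [0, 60] (three zeroes: 28.25, 42.05, 50.05); past x ~ 60 the
# cancellation in the integral exceeds double precision and the values are noise
vals = [H0(x) for x in range(0, 62, 2)]
print(sum(1 for v, w in zip(vals, vals[1:]) if v * w < 0))   # 3
```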

As ${t}$ increases, we have observed that the zeroes drift slightly to the left. This is consistent with theory; for instance, the number ${N_t(T)}$ of zeroes of ${H_t}$ with real part between ${0}$ and ${T}$ is known to be asymptotically

$\displaystyle N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log \frac{T}{4\pi} + O_t(1) \ \ \ \ \ (4)$

so as ${t}$ increases we should expect a few more zeroes in this region. (I had incorrectly omitted the ${16}$ denominator in a previous version of (4).) It seems like a reasonable near-term aim to improve the numerics to the point where we can confirm this asymptotic.

A related theoretical result is that the gaps between zeroes ${z_j(t)}$ should behave locally like an arithmetic progression for large ${j}$, in the sense that

$\displaystyle z_{j+1}(t)-z_j(t) = (1+o_t(1)) \frac{4\pi}{\log z_j(t)} = (1+o_t(1)) \frac{4\pi}{\log j}.$

This would also be nice to confirm numerically.

Theory also gives that the functions ${H_t(x+iy)}$ decay roughly like ${\exp(-\pi x/8)}$ as ${x \rightarrow \infty}$; see later sections for more precise asymptotics. To see this decay numerically for large ${x}$, it may be necessary to switch over to a representation such as (2) with ${\theta}$ close to ${\pi/8}$, otherwise the effect of numerical roundoff error may become unmanageably large.
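As a rough numerical illustration of this decay (using the direct integral at ${t=0}$, ${y=2}$, which is still just barely manageable in double precision at ${x=60}$): the logarithm of ${|H_0(30+2i)|/|H_0(60+2i)|}$ should be about ${\pi \cdot 30/8 \approx 11.8}$, up to polynomial factors and interference between the terms of the asymptotic expansion. A sketch, with ad hoc quadrature parameters:

```python
import math, cmath

def Phi(u, N=10):
    return sum((2*math.pi**2*n**4*math.exp(9*u) - 3*math.pi*n**2*math.exp(5*u))
               * math.exp(-math.pi*n**2*math.exp(4*u)) for n in range(1, N + 1))

def H0(z, steps=10000):
    # Simpson quadrature of int_0^1 Phi(u) cos(zu) du, now with complex z
    h = 1.0 / steps
    s = Phi(0.0) + Phi(1.0) * cmath.cos(z)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * Phi(k * h) * cmath.cos(z * k * h)
    return s * h / 3

r = math.log(abs(H0(30 + 2j)) / abs(H0(60 + 2j)))
print(r)   # compare with pi*30/8 = 11.78, less roughly 1.6 from polynomial factors
```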

— 2. Dynamics of zeroes —

Let ${z_j(t)}$ denote the zeroes of ${H_t(z)}$. For the sake of discussion, let us suppose that the zeroes are always simple (this is predicted by theory assuming RH; see Corollary 2 of Csordas-Smith-Varga. It may in fact be possible to prove this claim unconditionally, but we may not need it for the present project). If we implicitly differentiate the equation

$\displaystyle H_t(z_j(t)) = 0$

in time, we (formally, at least) obtain the equation

$\displaystyle (\partial_t H_t)(z_j(t)) + \partial_t z_j(t) (\partial_z H_t)(z_j(t)) = 0.$

The functions ${H_t}$ obey the backwards heat equation

$\displaystyle \partial_t H_t = - \partial_{zz} H_t$

and thus we have

$\displaystyle \partial_t z_j(t) = \frac{(\partial_{zz} H_t)(z_j(t))}{ (\partial_{z} H_t)(z_j(t))}.$

If we Taylor expand ${H_t}$ around the zero ${z_j(t)}$ as

$\displaystyle H_t(z) = a_j (z - z_j(t)) + b_j (z-z_j(t))^2 + \dots$

for some coefficients ${a_j,b_j}$ with ${a_j}$ non-zero (because we are assuming the zeroes to be simple) then we have after a brief calculation

$\displaystyle \frac{(\partial_{zz} H_t)(z_j(t))}{ (\partial_{z} H_t)(z_j(t))} = \frac{2 b_j}{a_j}$

and also

$\displaystyle \frac{\partial_z H_t(z)}{H_t(z)} = \frac{1}{z-z_j(t)} + \frac{b_j}{a_j} + \dots.$

On the other hand, from Weierstrass factorisation we expect (formally at least) that

$\displaystyle \frac{\partial_z H_t(z)}{H_t(z)} = \sum_k \frac{1}{z-z_k(t)}$

and thus we should have

$\displaystyle \frac{b_j}{a_j} = \sum_{k \neq j} \frac{1}{z_j(t) - z_k(t)}.$

Putting all this together, we should obtain the dynamics

$\displaystyle \partial_t z_j(t) = - 2 \sum_{k \neq j} \frac{1}{z_k(t) - z_j(t)}.$

This is not rigorous for a number of reasons, most notably that the sum here is not absolutely convergent, but these equations should hold in a principal value sense at least. In the regime ${t > \Lambda}$ this was established in Lemma 2.4 of Csordas-Smith-Varga; it may be possible to establish this in the entire region ${t>0}$.
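The identity ${b_j/a_j = \sum_{k \neq j} \frac{1}{z_j - z_k}}$ can at least be confirmed exactly in a finite toy model, replacing ${H_t}$ by a polynomial with simple roots (so that the Weierstrass factorisation is finite and all the formal steps above are legitimate):

```python
# Toy model: H(z) = prod_k (z - z_k) with simple roots.  Then
#   H'(z_j)  = prod_{k != j} (z_j - z_k)
#   H''(z_j) = 2 sum_{l != j} prod_{k != j, l} (z_j - z_k)
# and the claim is H''(z_j)/H'(z_j) = 2 sum_{k != j} 1/(z_j - z_k).
roots = [1 + 0j, 2 - 1j, 4 + 0.5j, -3 + 0j]

for j, zj in enumerate(roots):
    diffs = [zj - r for k, r in enumerate(roots) if k != j]
    Hp = 1
    for d in diffs:
        Hp *= d
    Hpp = 0
    for l in range(len(diffs)):
        p = 2
        for k, d in enumerate(diffs):
            if k != l:
                p *= d
        Hpp += p
    lhs = Hpp / Hp
    rhs = 2 * sum(1 / d for d in diffs)
    assert abs(lhs - rhs) < 1e-12
print("identity verified for all roots")
```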

If we write ${z_j = x_j + i y_j}$, we obtain the dynamics

$\displaystyle \partial_t x_j = - 2 \sum_{k \neq j} \frac{x_k - x_j}{(x_k-x_j)^2 + (y_k - y_j)^2} \ \ \ \ \ (5)$

and

$\displaystyle \partial_t y_j = 2 \sum_{k \neq j} \frac{y_k - y_j}{(x_k-x_j)^2 + (y_k - y_j)^2}. \ \ \ \ \ (6)$

Informally, the real parts ${x_j}$ repel each other, while the imaginary parts attract each other. In particular, once a zero is real, it should stay real.

If a zero ${x_j + i y_j}$ is not real, then it has a complex conjugate ${x_{j'} + iy_{j'} = x_j - iy_j}$. Isolating the attraction that the imaginary part ${y_j}$ feels to its complex conjugate ${y_{j'} = -y_j}$, we obtain

$\displaystyle \partial_t y_j = - \frac{1}{y_j} + 2 \sum_{k \neq j, j'} \frac{y_k - y_j}{(x_k-x_j)^2 + (y_k - y_j)^2}. \ \ \ \ \ (7)$

Suppose that ${x_j + iy_j}$ is a zero with maximal imaginary part ${y_j > 0}$ (it is a result of Ki, Kim, and Lee that there are only finitely many non-real zeroes for ${H_t}$ with ${t>0}$). Then all the summands in (7) are non-positive, hence we have the differential inequality

$\displaystyle \partial_t y_j \leq - \frac{1}{y_j}. \ \ \ \ \ (8)$

Hence if ${\sigma_{max}(t)}$ denotes the maximal imaginary part of a zero of ${H_t}$, we (formally, at least) have

$\displaystyle \partial_t \sigma_{max}(t) \leq - \frac{1}{\sigma_{max}(t)}$

or equivalently

$\displaystyle \partial_t \frac{1}{2} \sigma_{max}^2(t) \leq - 1$

whenever ${\sigma_{max}(t)}$ is positive. Thus the zeroes will be forced into the real axis in finite time, and in fact we can establish the bound

$\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}^2(t)$

from this reasoning (this is a result of de Bruijn).
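These dynamics are easy to integrate numerically for small artificial configurations of zeroes (this is a toy integration of the ODE above with ad hoc parameters of mine, not a simulation of the actual zeroes of ${H_t}$). A single conjugate pair obeys ${\partial_t y = -1/y}$ exactly, so ${y(t) = (y(0)^2 - 2t)^{1/2}}$, and adding real zeroes only accelerates the collapse, as in (7) and (8):

```python
import math

def velocity(zs):
    # right-hand side of the dynamics: dz_j/dt = -2 sum_{k != j} 1/(z_k - z_j)
    return [-2*sum(1/(zk - zj) for k, zk in enumerate(zs) if k != j)
            for j, zj in enumerate(zs)]

def rk4(zs, T, steps=400):
    # classical fourth-order Runge-Kutta with a fixed step
    h = T / steps
    for _ in range(steps):
        k1 = velocity(zs)
        k2 = velocity([z + 0.5*h*v for z, v in zip(zs, k1)])
        k3 = velocity([z + 0.5*h*v for z, v in zip(zs, k2)])
        k4 = velocity([z + h*v for z, v in zip(zs, k3)])
        zs = [z + h*(a + 2*b + 2*c + d)/6
              for z, a, b, c, d in zip(zs, k1, k2, k3, k4)]
    return zs

# a single conjugate pair: dy/dt = -1/y exactly, so y(t) = sqrt(y(0)^2 - 2t)
pair = rk4([1j, -1j], 0.4)
print(pair[0].imag, math.sqrt(1 - 2*0.4))   # both about 0.4472

# adding real zeroes only speeds up the collapse, as in (7)-(8)
crowd = rk4([1j, -1j, 2+0j, -2+0j, 4+0j, -4+0j], 0.2)
print(max(z.imag for z in crowd))           # below sqrt(1 - 2*0.2) = 0.7746
```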

One could presumably do better by a more careful analysis of the sum in (7). The only way it seems that the inequality could be close to sharp is if the offending non-real zero ${x_j + iy_j}$ is somehow isolated far away from all other zeroes except for its complex conjugate ${x_j - iy_j}$. For large ${x_j}$, this is presumably in contradiction with the asymptotics (4); for small ${x_j}$, perhaps one could use numerics to exclude this possibility?

At time ${t=0}$, we know that the first ${10^{13}}$ zeroes are real (a result of Gourdon). Thus any non-real zero will initially have a very large value of ${x_j}$. It would be nice to somehow be able to say that these zeroes continue to have very large real part for positive values of ${t}$ as well, but unfortunately the velocity in (5) could be large and negative if the zero is just to the left of another zero that is repelling it towards the origin. Is there some way to stop this? One may have to work with “clusters” of zeroes and study their centre of mass (or some similar statistic) as this may behave in a better way than an individual position function ${x_j}$. Perhaps some of the identities in this previous post could be adapted for this purpose?

— 3. Asymptotics of ${H_t}$ —

Standard calculations give the gaussian integral identity

$\displaystyle \int \exp( \alpha_0 + \frac{\alpha_2}{2!} (u - u_0)^2 ) a_0\ du = \sqrt{\frac{2\pi}{-\alpha_2}} a_0 \exp(\alpha_0)$

whenever ${a_0, \alpha_0, \alpha_2, u_0}$ are complex numbers with ${\mathrm{Re} \alpha_2 < 0}$, where we integrate along a horizontal contour such as ${{\bf R}}$ and we use the standard branch of the complex logarithm. More generally, Laplace’s method suggests that one has the approximation

$\displaystyle \int \exp( f(u) ) a(u)\ du \approx \sqrt{\frac{2\pi}{(-f''(u_0))}} a(u_0) \exp( f(u_0) )$

whenever ${f}$ is an oscillating phase that has a single stationary point ${u_0}$ (with ${\mathrm{Re} f''(u_0) < 0}$) and ${a}$ is a slowly varying amplitude, and the integral is along a contour that does not start or end too close to ${u_0}$. (Here one uses the standard branch of the square root function.) There are several ways to make this approximation rigorous, such as the method of steepest descent, the saddle point method, or the method of stationary phase, but for this discussion let us work completely informally. One can apply this method to analyse the integrals (3). For the highest accuracy, one should use the phase ${f(u) = tu^2 - \beta e^{4u} + ibu}$; this is the approach taken for instance in my paper with Brad (in the ${t<0}$ case), but has the drawback that the stationary point ${u_0}$ has to be defined using the Lambert ${W}$-function. A more “quick and dirty” approach, which seems to give worse error bounds but still gives broadly correct qualitative conclusions, is to take only the phase ${f(u) = - \beta e^{4u} + ibu}$ and treat the factor ${a(u) = \exp( t u^2 )}$ as an amplitude. In this case, the stationary point occurs at

$\displaystyle u_0 = \frac{1}{4} \log \frac{ib}{4\beta}$

(where we use the branch of the logarithm that makes ${u_0}$ lie in the strip (1)), with

$\displaystyle f(u_0) = - \frac{ib}{4} + \frac{ib}{4} \log \frac{ib}{4\beta}$

and

$\displaystyle f''(u_0) = -4ib.$

This suggests that we have the approximation

$\displaystyle I_{t,\theta}(b,\beta) \approx \sqrt{\frac{\pi}{2ib}} \exp( \frac{t}{16} \log^2 \frac{ib}{4\beta} - \frac{ib}{4} + \frac{ib}{4} \log \frac{ib}{4\beta} ).$

(I think that in order for this approximation to be rigorous, one needs to take ${\theta}$ to be close to the imaginary part of ${u_0}$, in particular close to ${\pi/8}$.) Now, from (2) one has
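One can test this approximation numerically for moderately large ${b}$: the following sketch compares a direct quadrature of (3) with the stationary phase prediction at the (ad hoc) parameters ${t=0.1}$, ${b = 30-9i}$, ${\beta = \pi}$, ${\theta = \frac{\pi}{8} - \frac{1}{120}}$, where the two should agree up to the endpoint and higher-order corrections discussed above:

```python
import math, cmath

t, b, beta = 0.1, 30 - 9j, math.pi        # sample parameters (ad hoc choices)
theta = math.pi/8 - 1/(4*b.real)

def f(u):
    # the full exponent in (3): t u^2 - beta e^{4u} + i b u
    return t*u*u - beta*cmath.exp(4*u) + 1j*b*u

# direct Simpson quadrature of (3) along u = i*theta + s, 0 <= s <= 1.5
n, S = 7500, 1.5
h = S / n
tot = cmath.exp(f(1j*theta)) + cmath.exp(f(1j*theta + S))
for k in range(1, n):
    tot += (4 if k % 2 else 2) * cmath.exp(f(1j*theta + k*h))
I_num = tot * h / 3

# the stationary phase prediction, with u0 = L/4 in the strip (1)
L = cmath.log(1j*b/(4*beta))
I_app = cmath.sqrt(math.pi/(2j*b)) * cmath.exp(t*(L/4)**2 - 1j*b/4 + (1j*b/4)*L)

print(abs(I_num - I_app) / abs(I_num))    # small relative error
```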

$\displaystyle H_t(x+iy) = \frac{1}{2} \sum_{n=1}^\infty \ \ \ \ \ (9)$

$\displaystyle 2 \pi^2 n^4 I_{t, \theta}( x - (9-y)i, \pi n^2 ) - 3 \pi n^2 I_{t,\theta}( x - (5-y)i, \pi n^2 )$

$\displaystyle + 2 \pi^2 n^4 \overline{I_{t, \theta}( x - (9+y)i, \pi n^2 )} - 3 \pi n^2 \overline{I_{t,\theta}( x - (5+y)i, \pi n^2 )}.$

Here we consider the regime where ${x}$ is large and ${y}$ is positive but fairly small. Let’s just look at the term

$\displaystyle I_{t, \theta}( x - (9+y)i, \pi n^2 ).$

Taking ${b = x - (9+y)i}$ and ${\beta = \pi n^2}$, we approximately have

$\displaystyle \log \frac{ib}{4\beta} \approx \log \frac{x}{4\pi n^2} + i ( \frac{\pi}{2} - \frac{9+y}{x} )$

and so (after some calculation, and dropping terms of size ${o(1)}$) we have the somewhat crude approximation

$\displaystyle I_{t,\theta}( x - (9+y)i, \pi n^2 ) \approx \sqrt{\frac{\pi}{2ix}} \exp( A + iB )$

where

$\displaystyle A := \frac{t}{16} \log^2 \frac{x}{4\pi n^2} - \frac{t \pi^2}{64} + \frac{9+y}{4} \log \frac{x}{4\pi n^2} - \frac{\pi x}{8}$

and

$\displaystyle B := \frac{\pi t}{16} \log \frac{x}{4\pi n^2} - \frac{x}{4} + \frac{x}{4} \log \frac{x}{4\pi n^2} + \frac{\pi (9+y)}{8}.$

In particular, we expect the magnitude ${|I_{t,\theta}( x - (9+y)i, \pi n^2 )|}$ to behave like

$\displaystyle |I_{t,\theta}( x - (9+y)i, \pi n^2 )| \approx \sqrt{\frac{\pi}{2x}} (\frac{x}{4\pi n^2})^{(9+y)/4} e^{-\pi x/8}$

$\displaystyle \times \exp( \frac{t}{16} \log^2 \frac{x}{4\pi n^2} - \frac{t \pi^2}{64} ).$

Similarly for the other three ${I_{t,\theta}}$ expressions that appear in (9). If ${t, y>0}$ and ${x}$ is large, this suggests that the ${n=1}$ terms in (9) dominate, and furthermore of the four terms, it is the third term which dominates the other three. Thus we expect to have

$\displaystyle H_t(x+iy) \approx \pi^2 \overline{I_{t, \theta}( x - (9+y)i, \pi )}$

$\displaystyle \approx \pi^2 \sqrt{\frac{\pi}{-2ix}} (\frac{x}{4\pi})^{(9+y)/4} e^{-\pi x/8} \exp( \frac{t}{16} \log^2 \frac{x}{4\pi} - \frac{t \pi^2}{64} ) \exp(- iB )$

where

$\displaystyle B := \frac{\pi t}{16} \log \frac{x}{4\pi} - \frac{x}{4} + \frac{x}{4} \log \frac{x}{4\pi} + \frac{\pi (9+y)}{8}.$

Dividing the phase ${B}$ by ${\pi}$ and using the argument principle, we now see where the asymptotic (4) is supposed to come from. Taking magnitudes we also expect

$\displaystyle |H_t(x+iy)| \approx \pi^2 \sqrt{\frac{\pi}{2x}} (\frac{x}{4\pi})^{(9+y)/4} e^{-\pi x/8} \exp( \frac{t}{16} \log^2 \frac{x}{4\pi} - \frac{t \pi^2}{64} ),$

in particular ${H_t(x+iy)}$ should be non-zero for fixed ${t,y>0}$ and large ${x}$.
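As a crude consistency check of this phase-based zero count at ${t=0}$ (ignoring the ${O_t(1)}$ terms), one can compare ${B/\pi}$ at ${x=100}$ with the ten zeroes of ${H_0}$ listed in Section 1:

```python
import math

def B(x, t=0.0, y=0.0):
    # the phase B above, for the dominant n = 1 term (beta = pi)
    L = math.log(x / (4*math.pi))
    return math.pi*t/16 * L - x/4 + x/4 * L + math.pi*(9 + y)/8

print(B(100.0) / math.pi)   # about 9.7, vs the 10 zeroes of H_0 with 0 < x <= 100
```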

Hopefully we will be able to make these asymptotics more precise, and also to confirm them numerically (in particular, there may be some sign errors or other inaccuracies in my calculations above which numerics might be able to detect).