Building on the interest expressed in the comments to this previous post, I am now formally proposing to initiate a “Polymath project” on the topic of obtaining new upper bounds on the de Bruijn-Newman constant {\Lambda}. The purpose of this post is to describe the proposal and discuss the scope and parameters of the project.

De Bruijn introduced a family {H_t: {\bf C} \rightarrow {\bf C}} of entire functions, one for each real number {t}, defined by the formula

\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du

where {\Phi} is the super-exponentially decaying function

\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).

As discussed in this previous post, the Riemann hypothesis is equivalent to the assertion that all the zeroes of {H_0} are real.
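To get a feel for these definitions, here is a quick numerical sketch (standard-library Python only). The truncation parameters `n_max` and `u_max` are ad hoc choices of mine, justified by the super-exponential decay of {\Phi}, and are not part of the definitions themselves:

```python
import cmath
import math

def Phi(u, n_max=20):
    """Truncated series for Phi(u); the terms decay like exp(-pi n^2 e^{4u}),
    so a modest n_max already gives double precision near u = 0."""
    total = 0.0
    for n in range(1, n_max + 1):
        total += ((2 * math.pi**2 * n**4 * math.exp(9 * u)
                   - 3 * math.pi * n**2 * math.exp(5 * u))
                  * math.exp(-math.pi * n**2 * math.exp(4 * u)))
    return total

def H(t, z, u_max=2.0, steps=10000):
    """Approximate H_t(z) by Simpson's rule on [0, u_max]; since Phi decays
    super-exponentially, truncating the integral at u_max = 2 is harmless
    for moderate t and z."""
    h = u_max / steps
    def f(u):
        return math.exp(t * u * u) * Phi(u) * cmath.cos(z * u)
    total = f(0.0) + f(u_max)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3
```

As a sanity check, with this normalisation {H_0} is real on the real axis, and one finds numerically that it changes sign near {z \approx 28.27 \approx 2 \times 14.1347}, consistent with the first zero of the zeta function on the critical line.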

De Bruijn and Newman showed that there exists a real constant {\Lambda} – the de Bruijn-Newman constant – such that {H_t} has all zeroes real whenever {t \geq \Lambda}, and at least one non-real zero when {t < \Lambda}. In particular, the Riemann hypothesis is equivalent to the upper bound {\Lambda \leq 0}. In the opposite direction, several lower bounds on {\Lambda} have been obtained over the years, most recently in my paper with Brad Rodgers, where we showed that {\Lambda \geq 0}, confirming a conjecture of Newman.

As for upper bounds, de Bruijn showed back in 1950 that {\Lambda \leq 1/2}. The only progress since then has been the work of Ki, Kim and Lee in 2009, who improved this slightly to {\Lambda < 1/2}. The primary proposed aim of this Polymath project is to obtain further explicit improvements to the upper bound of {\Lambda}. Of course, if we could lower the upper bound all the way to zero, this would solve the Riemann hypothesis, but I do not view this as a realistic outcome of this project; rather, the upper bounds that one could plausibly obtain by known methods and numerics would be comparable in achievement to the various numerical verifications of the Riemann hypothesis that exist in the literature (e.g., that the first {N} non-trivial zeroes of the zeta function lie on the critical line, for various large explicit values of {N}).

In addition to the primary goal, one could envisage some related secondary goals of the project, such as a better understanding (both analytic and numerical) of the functions {H_t} (or of similar functions), and of the dynamics of the zeroes of these functions. Perhaps further potential goals could emerge in the discussion to this post.

I think there is a plausible plan of attack on this project that proceeds as follows. Firstly, there are results going back to the original work of de Bruijn that demonstrate that the zeroes of {H_t} become attracted to the real line as {t} increases; in particular, if one defines {\sigma_{max}(t)} to be the supremum of the imaginary parts of all the zeroes of {H_t}, then it is known that this quantity obeys the differential inequality

\displaystyle \frac{d}{dt} \sigma_{max}(t) \leq - \frac{1}{\sigma_{max}(t)} \ \ \ \ \ (1)


whenever {\sigma_{max}(t)} is positive; furthermore, once {\sigma_{max}(t) = 0} for some {t}, then {\sigma_{max}(t') = 0} for all {t' > t}. I hope to explain this in a future post (it is basically due to the attraction that a zero off the real axis has to its complex conjugate). As a corollary of this inequality, we have the upper bound

\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2 \ \ \ \ \ (2)


for any real number {t}. For instance, because all the non-trivial zeroes of the Riemann zeta function lie in the critical strip {\{ s: 0 \leq \mathrm{Re} s \leq 1 \}}, one has {\sigma_{max}(0) \leq 1}, which when inserted into (2) gives {\Lambda \leq 1/2}. The inequality (1) also gives {\sigma_{max}(t) \leq \sqrt{1-2t}} for all {0 \leq t \leq 1/2}. If we could find some explicit {t} between {0} and {1/2} where we can improve this upper bound on {\sigma_{max}(t)} by an explicit constant, this would lead to a new upper bound on {\Lambda}.
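For completeness, here is the short calculus deduction of (2) from (1), assuming (as can be justified) enough regularity of {\sigma_{max}(t)} where it is positive:

```latex
% Write s(t) := \sigma_{max}(t).  While s(t) > 0, the inequality (1) gives
\frac{d}{dt} \frac{s(t)^2}{2} \;=\; s(t)\, \frac{d}{dt} s(t) \;\leq\; -1,
% and integrating from t to t' \geq t yields
\frac{s(t')^2}{2} \;\leq\; \frac{s(t)^2}{2} - (t' - t).
% Hence s(t') must vanish by time t' = t + \frac{1}{2} s(t)^2, and it stays
% zero thereafter, so all zeroes of H_{t'} are real for such t'; that is,
% \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2.
```

Taking {t = 0} and {s(0) \leq 1} in the integrated inequality recovers both {\Lambda \leq 1/2} and the bound {\sigma_{max}(t) \leq \sqrt{1-2t}} mentioned above.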

Secondly, the work of Ki, Kim and Lee (based on an analysis of the various terms appearing in the expression for {H_t}) shows that for any positive {t}, all but finitely many of the zeroes of {H_t} are real (in contrast with the {t=0} situation, where it is still an open question whether the proportion of non-trivial zeroes of the zeta function on the critical line is asymptotically equal to {1}). As a key step in this analysis, Ki, Kim, and Lee show that for any {t>0} and {\varepsilon>0}, there exists a {T>0} such that all the zeroes of {H_t} with real part at least {T} have imaginary part at most {\varepsilon}. Ki, Kim and Lee do not explicitly compute how {T} depends on {t} and {\varepsilon}, but it looks like this bound could be made effective.

If so, this suggests a possible strategy to get a new upper bound on {\Lambda}:

  • Select a good choice of parameters {t, \varepsilon > 0}.
  • By refining the Ki-Kim-Lee analysis, find an explicit {T} such that all zeroes of {H_t} with real part at least {T} have imaginary part at most {\varepsilon}.
  • By a numerical computation (e.g. using the argument principle), also verify that all zeroes of {H_t} with real part between {0} and {T} have imaginary part at most {\varepsilon}.
  • Combining these facts, we obtain that {\sigma_{max}(t) \leq \varepsilon}; hopefully, one can insert this into (2) and get a new upper bound for {\Lambda}.
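As a toy illustration of the argument-principle step above, here is a sketch (in Python) of a boundary winding-number count of the zeroes of an analytic function in a rectangle; the function, rectangle, and sampling density below are illustrative choices of mine, and applying this to {H_t} itself would require more care (in particular, one must rule out zeroes on or very near the boundary, and sample densely enough):

```python
import cmath
import math

def count_zeros(f, x0, x1, y0, y1, n=4000):
    """Count zeroes of an analytic f inside the rectangle [x0,x1] x [y0,y1]
    via the argument principle: the total winding of arg f(z), as z traverses
    the boundary counterclockwise, equals 2*pi times the number of zeroes
    (assuming no zeroes on the boundary, and n large enough that the phase
    of f changes by less than pi between consecutive samples)."""
    corners = [complex(x0, y0), complex(x1, y0), complex(x1, y1), complex(x0, y1)]
    pts = []
    for a, b in zip(corners, corners[1:] + corners[:1]):
        pts.extend(a + (b - a) * k / n for k in range(n))
    pts.append(pts[0])  # close the contour
    winding = 0.0
    prev = cmath.phase(f(pts[0]))
    for z in pts[1:]:
        cur = cmath.phase(f(z))
        d = cur - prev
        if d > math.pi:       # unwrap jumps across the branch cut of phase
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        winding += d
        prev = cur
    return round(winding / (2 * math.pi))
```

For example, for the polynomial {z^3 - 1} this count returns {3} on the square {[-2,2] \times [-2,2]} (all three cube roots of unity) and {1} on {[0.5,2] \times [-0.5,0.5]} (only the root {z=1}).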

Of course, there may also be alternate strategies to upper bound {\Lambda}, and I would imagine this would also be a legitimate topic of discussion for this project.

One appealing thing about the above strategy for the purposes of a polymath project is that it naturally splits the project into several interacting but reasonably independent parts: an analytic part in which one tries to refine the Ki-Kim-Lee analysis (based on explicit upper and lower bounds for various terms in a certain series expansion for {H_t} – I may detail this in a subsequent post); a numerical part in which one controls the zeroes of {H_t} in a certain finite range; and perhaps also a dynamical part where one sees if there is any way to improve the inequality (2). For instance, the numerical “team” might, over time, be able to produce zero-free regions for {H_t} with an increasingly large value of {T}, while in parallel the analytic “team” might produce increasingly smaller values of {T} beyond which they can control zeroes, and eventually the two bounds would meet up and we would obtain a new bound on {\Lambda}. This factoring of the problem into smaller parts was also a feature of the successful Polymath8 project on bounded gaps between primes.

The project also resembles Polymath8 in another aspect: that there is an obvious way to numerically measure progress, by seeing how the upper bound for {\Lambda} decreases over time (and presumably there will also be another metric of progress regarding how well we can control {T} in terms of {t} and {\varepsilon}). However, in Polymath8 the final measure of progress (the upper bound {H} on gaps between primes) was a natural number, and thus could not decrease indefinitely. Here, the bound will be a real number, and there is a possibility that one may end up with an infinite descent in which progress slows down over time, with refinements to increasingly less significant digits of the bound as the project progresses. Because of this, I think it makes sense to follow recent Polymath projects and set an expiration date for the project, for instance one year after the launch date, at which point we will agree to end the project and (if the project was successful enough) write up the results, unless there is consensus at that time to extend the project. (In retrospect, we should probably have imposed similar sunset dates on older Polymath projects, some of which have now been inactive for years, but that is perhaps a discussion for another time.)

Some Polymath projects have been known for a breakneck pace, making it hard for some participants to keep up. It’s hard to control these things, but I am envisaging a relatively leisurely project here, perhaps taking the full year mentioned above. It may well be that as the project matures we will largely be waiting for the results of lengthy numerical calculations to come in, for instance. Of course, as with previous projects, we would maintain some wiki pages (and possibly some other resources, such as a code repository) to keep track of progress and also to summarise what we have learned so far. For instance, as was done with some previous Polymath projects, we could begin with some “online reading seminars” where we go through some relevant piece of literature (most obviously the Ki-Kim-Lee paper, but there may be other resources that become relevant, e.g. one could imagine the literature on numerical verification of RH to be of value).

One could also imagine some incidental outcomes of this project, such as more efficient ways to numerically establish zero-free regions for various analytic functions of interest; more generally, the project may well end up focusing on some other aspect of mathematics than the specific questions posed here.

Anyway, I would be interested to hear in the comments below from others who might be interested in participating in, or at least observing, this project, particularly if they have suggestions regarding the scope and direction of the project, and on organisational structure (e.g. whether one should start with reading seminars, or with some initial numerical exploration of the functions {H_t}, etc.). One could also begin some preliminary discussion of the actual mathematics of the project itself, though (in line with the leisurely pace I was hoping for) I expect that the main burst of mathematical activity would happen later, once the project is formally launched (with wiki page resources, blog posts dedicated to specific aspects of the project, etc.).