
This is the ninth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We have now tentatively improved the upper bound of the de Bruijn-Newman constant to {\Lambda \leq 0.22}. Among the technical improvements in our approach, we are now able to use Taylor expansions to efficiently compute the approximation {A+B} to {H_t(x+iy)} for many values of {x,y} in a given region, thus speeding up the computations in the barrier considerably. Also, by using the heuristic that {H_t(x+iy)} behaves somewhat like the partial Euler product {\prod_p (1 - \frac{1}{p^{\frac{1+y-ix}{2}}})^{-1}}, we were able to find a good location at which to place the barrier, where {H_t(x+iy)} is larger than average and hence easier to keep away from zero.
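
To give a rough idea of how the barrier-location heuristic can be used in practice, here is a minimal sketch (not the project's actual code) that scans a window of candidate values of {x} and ranks them by the size of the partial Euler product; the prime cutoff, the value of {y}, and the scan range in the example are purely illustrative assumptions.

```python
# Hedged sketch: rank candidate barrier locations x by the size of the
# partial Euler product prod_p (1 - p^{-(1+y-ix)/2})^{-1}, which the text
# uses as a heuristic proxy for |H_t(x+iy)| being larger than average.
import cmath, math

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # illustrative prime cutoff

def euler_heuristic(x, y=0.2, primes=PRIMES):
    """|prod_p (1 - p^{-s})^{-1}| with s = (1 + y - i x)/2 (y is an illustrative choice)."""
    s = complex(1 + y, -x) / 2
    prod = complex(1, 0)
    for p in primes:
        prod *= 1 / (1 - cmath.exp(-s * math.log(p)))
    return abs(prod)

def best_barrier_location(x_lo, x_hi, step):
    """Return the candidate x in [x_lo, x_hi) maximising the heuristic."""
    return max(range(x_lo, x_hi, step), key=euler_heuristic)

# Example usage over a hypothetical window:
# x_star = best_barrier_location(2 * 10**5, 2 * 10**5 + 10**4, 5)
```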

The main remaining bottleneck is that of computing the Euler mollifier bounds that keep {A+B} bounded away from zero for larger values of {x} beyond the barrier. In going below {0.22} we are beginning to need quite complicated mollifiers with somewhat poor tail behavior; we may be reaching the point where none of our bounds will succeed in keeping {A+B} bounded away from zero, so we may be close to the natural limits of our methods.

Participants are also welcome to add any further summaries of the situation in the comments below.

Just a quick announcement that Dustin Mixon and Aubrey de Grey have just launched the Polymath16 project over at Dustin’s blog.  The main goal of this project is to simplify the recent proof by Aubrey de Grey that the chromatic number of the unit distance graph of the plane is at least 5, thus making progress on the Hadwiger-Nelson problem.  The current proof is computer assisted (in particular it requires one to control the possible 4-colorings of a certain graph with over a thousand vertices), but one of the aims of the project is to reduce the amount of computer assistance needed to verify the proof; already a number of such reductions have been found.  See also this blog post where the polymath project was proposed, as well as the wiki page for the project.  Non-technical discussion of the project will continue at the proposal blog post.

This is the seventh “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The most recent news is that we appear to have completed the verification that {H_t(x+iy)} is free of zeroes when {t=0.4} and {y \geq 0.4}, which implies that {\Lambda \leq 0.48}. For very large {x} (for instance when the quantity {N := \lfloor \sqrt{\frac{x}{4\pi} + \frac{t}{16}} \rfloor} is at least {300}) this can be done analytically; for medium values of {x} (say when {N} is between {11} and {300}) this can be done by numerically evaluating a fast approximation {A^{eff} + B^{eff}} to {H_t} and using the argument principle in a rectangle; and most recently it appears that we can also handle small values of {x}, in part due to some new, and significantly faster, numerical ways to evaluate {H_t} in this range.
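
For readers following along, the division into regimes above is governed entirely by the parameter {N}; the following minimal sketch (an illustration only, not project code) computes {N} for a given {x} and reports which of the three verification methods described above would apply to it.

```python
# Compute N = floor(sqrt(x/(4*pi) + t/16)) and report which verification
# regime from the text a given x falls into.  The thresholds 300 and 11
# are the ones quoted above; everything else is illustrative.
import math

def regime(x, t=0.4):
    N = math.floor(math.sqrt(x / (4 * math.pi) + t / 16))
    if N >= 300:
        return N, "analytic estimates"
    if N >= 11:
        return N, "numerical evaluation of A^eff + B^eff plus the argument principle"
    return N, "direct numerical evaluation of H_t"

# e.g. regime(6 * 10**6) falls in the analytic regime, regime(2000) in the medium one.
```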

One obvious thing to do now is to experiment with lowering the parameters {t} and {y} and see what happens. However, there are two other potential ways to bound {\Lambda} which may also be numerically feasible. One approach is based on trying to exclude zeroes of {H_t(x+iy)} in a region of the form {0 \leq t \leq t_0}, {X \leq x \leq X+1} and {y \geq y_0} for some moderately large {X} (this acts as a “barrier” to prevent zeroes from flowing into the region {\{ 0 \leq x \leq X, y \geq y_0 \}} at time {t_0}, assuming that they were not already there at time {0}). This requires significantly less numerical verification in the {x} aspect, but more numerical verification in the {t} aspect, so it is not yet clear whether this is a net win.

Another, rather different approach, is to study the evolution of statistics such as {S(t) = \sum_{H_t(x+iy)=0: x,y>0} y e^{-x/X}} over time. One has fairly good control on such quantities at time zero, and their time derivative looks somewhat manageable, so one may be able to still have good control on this quantity at later times {t_0>0}. However for this approach to work, one needs an effective version of the Riemann-von Mangoldt formula for {H_t}, which at present is only available asymptotically (or at time {t=0}). This approach may be able to avoid almost all numerical computation, except for numerical verification of the Riemann hypothesis, for which we can appeal to existing literature.

Participants are also welcome to add any further summaries of the situation in the comments below.

This is the sixth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The last two threads have been focused primarily on the test problem of showing that {H_t(x+iy) \neq 0} whenever {t = y = 0.4}. We have been able to prove this for most regimes of {x}, or equivalently for most regimes of the natural number parameter {N := \lfloor \sqrt{\frac{x}{4\pi} + \frac{t}{16}} \rfloor}. In many of these regimes, a certain explicit approximation {A^{eff}+B^{eff}} to {H_t} was used, together with a non-zero normalising factor {B^{eff}_0}; see the wiki for definitions. The explicit upper bound

\displaystyle  |H_t - A^{eff} - B^{eff}| \leq E_1 + E_2 + E_3

has been proven for certain explicit expressions {E_1, E_2, E_3} (see here) depending on {x}. In particular, if {x} satisfies the inequality

\displaystyle  |\frac{A^{eff}+B^{eff}}{B^{eff}_0}| > \frac{E_1}{|B^{eff}_0|} + \frac{E_2}{|B^{eff}_0|} + \frac{E_3}{|B^{eff}_0|}

then {H_t(x+iy)} is non-vanishing thanks to the triangle inequality. (In principle we have an even more accurate approximation {A^{eff}+B^{eff}-C^{eff}} available, but it is looking like we will not need it for this test problem at least.)

We have explicit upper bounds on {\frac{E_1}{|B^{eff}_0|}}, {\frac{E_2}{|B^{eff}_0|}}, {\frac{E_3}{|B^{eff}_0|}}; see this wiki page for details. They are tabulated in the range {3 \leq N \leq 2000} here. For {N \geq 2000}, the upper bound {\frac{E_3^*}{|B^{eff}_0|}} for {\frac{E_3}{|B^{eff}_0|}} is monotone decreasing, and is in particular bounded by {1.53 \times 10^{-5}}, while {\frac{E_2}{|B^{eff}_0|}} and {\frac{E_1}{|B^{eff}_0|}} are known to be bounded by {2.9 \times 10^{-7}} and {2.8 \times 10^{-8}} respectively (see here).

Meanwhile, the quantity {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|} can be lower bounded by

\displaystyle  |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}|

for certain explicit coefficients {a_n,b_n} and an explicit complex number {s = \sigma + i\tau}. Using the triangle inequality to lower bound this by

\displaystyle  |b_1| - \sum_{n=2}^N \frac{|b_n|}{n^\sigma} - \sum_{n=1}^N \frac{|a_n|}{n^\sigma}

we can obtain a lower bound of {0.18} for {N \geq 2000}, which settles the test problem in this regime. One can get more efficient lower bounds by multiplying both Dirichlet series by a suitable Euler product mollifier; we have found {\prod_{p \leq P} (1 - \frac{b_p}{p^s})} for {P=2,3,5,7} to be good choices to get a variety of further lower bounds depending only on {N}, see this table and this wiki page. Comparing this against our tabulated upper bounds for the error terms we can handle the range {300 \leq N \leq 2000}.
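
For concreteness, here is a minimal sketch of this triangle-inequality computation; the coefficient arrays {a_n, b_n} and the real part {\sigma} of {s} are treated as inputs (to be taken from the wiki definitions rather than recomputed here), so nothing in this snippet reproduces the project's exact constants.

```python
# Crude triangle-inequality lower bound from the display above:
#   |b_1| - sum_{n>=2} |b_n|/n^sigma - sum_{n>=1} |a_n|/n^sigma.
# If this exceeds the tabulated upper bound for (E_1+E_2+E_3)/|B^eff_0|,
# then H_t is non-vanishing there by the criterion stated earlier in this post.
def crude_lower_bound(a, b, sigma):
    """a, b: lists of the coefficients a_1..a_N and b_1..b_N; sigma: real part of s."""
    N = len(b)
    bound = abs(b[0])
    bound -= sum(abs(b[n - 1]) / n ** sigma for n in range(2, N + 1))
    bound -= sum(abs(a[n - 1]) / n ** sigma for n in range(1, N + 1))
    return bound
```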

In the range {11 \leq N \leq 300}, we have been able to obtain a suitable lower bound {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}| \geq c} (where {c} exceeds the upper bound for {\frac{E_1}{|B^{eff}_0|} + \frac{E_2}{|B^{eff}_0|} + \frac{E_3}{|B^{eff}_0|}}) by numerically evaluating {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|} at a mesh of points for each choice of {N}, with the mesh spacing being adaptive and determined by {c} and an upper bound for the derivative of {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|}; the data is available here.
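
The mesh-filling step above is simple enough to describe in a few lines of code; the sketch below (an illustration, not the project's verification script) assumes the caller supplies a routine f evaluating {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|} and a valid upper bound D for its derivative on the interval in question.

```python
# Adaptive-mesh verification that f(x) > c on [x_lo, x_hi], given an upper
# bound D for |f'| on that interval.  At each mesh point the derivative
# bound guarantees f stays above c out to distance (f(x) - c)/D, so the
# next mesh point is placed there.
def verify_lower_bound(f, x_lo, x_hi, c, D):
    x = x_lo
    while x <= x_hi:
        val = f(x)
        if val <= c:
            return False  # the bound fails (or cannot be certified) at this mesh point
        step = (val - c) / D
        if x + step == x:  # mesh cannot advance in floating point; give up
            return False
        # the derivative bound keeps f above c on [x, x + step], and f is
        # re-evaluated afresh at the next mesh point x + step
        x += step
    return True
```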

This leaves the final range {N \leq 10} (roughly corresponding to {x \leq 1600}). Here we can numerically evaluate {H_t(x+iy)} to high accuracy at a fine mesh (see the data here), but to fill in the mesh we need good upper bounds on {H'_t(x+iy)}. It seems that we can get reasonable estimates using some contour shifting from the original definition of {H_t} (see here). We are close to finishing off this remaining region and thus solving the toy problem.

Beyond this, we need to figure out how to show that {H_t(x+iy) \neq 0} for {y > 0.4} as well. General theory lets one do this for {y \geq \sqrt{1-2t} = 0.447\dots}, leaving the region {0.4 < y < 0.448}. The analytic theory that handles {N \geq 2000} and {300 \leq N \leq 2000} should also handle this region; for {N \leq 300} presumably the argument principle will become relevant.

The full argument also needs to be streamlined and organised; right now it sprawls over many wiki pages and github code files. (A very preliminary writeup attempt has begun here). We should also see if there is much hope of extending the methods to push much beyond the bound of {\Lambda \leq 0.48} that we would get from the above calculations. This would also be a good time to start discussing whether to move to the writing phase of the project, or whether there are still fruitful research directions for the project to explore.

Participants are also welcome to add any further summaries of the situation in the comments below.

This is the fifth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We have almost finished off the test problem of showing that {H_t(x+iy) \neq 0} whenever {t = y = 0.4}. We have two useful approximations for {H_t}, which we have denoted {A^{eff}+B^{eff}} and {A^{eff}+B^{eff}-C^{eff}}, and a normalising quantity {B^{eff}_0} that is asymptotically equal to the above expressions; see the wiki page for definitions. In practice, the {A^{eff}+B^{eff}} approximation seems to be accurate within about one or two significant figures, whilst the {A^{eff}+B^{eff}-C^{eff}} approximation is accurate to about three or four. We have an effective upper bound

\displaystyle  |H_t - A^{eff} - B^{eff}| \leq E_1 + E_2 + E_3^*

where the expressions {E_1,E_2,E_3^*} are quite small in practice ({E_3^*} is typically about two orders of magnitude smaller than the main term {B^{eff}_0} once {x} is moderately large, and the error terms {E_1,E_2} are even smaller). See this page for details. In principle we could also obtain an effective upper bound for {|H_t - (A^{eff} + B^{eff} - C^{eff})|} (the {E_3^*} term would be replaced by something smaller).

The ratio {\frac{A^{eff}+B^{eff}}{B^{eff}_0}} takes the form of a difference {\sum_{n=1}^N \frac{b_n}{n^s} - e^{i\theta} \sum_{n=1}^N \frac{a_n}{n^s}} of two Dirichlet series, where {e^{i\theta}} is a phase whose value is explicit but perhaps not terribly important, and the coefficients {b_n, a_n} are explicit and relatively simple ({b_n} is {\exp( \frac{t}{4} \log^2 n)}, and {a_n} is approximately {(n/N)^y b_n}). To bound this away from zero, we have found it advantageous to mollify this difference by multiplying by an Euler product {\prod_{p \leq P} (1 - \frac{b_p}{p^s})} to cancel much of the initial oscillation; also one can take advantage of the fact that the {b_n} are real and the {a_n} are (approximately) real. See this page for details. The upshot is that we seem to be getting good lower bounds for the size of this difference of Dirichlet series starting from about {x \geq 5 \times 10^5} or so. The error terms {E_1,E_2,E_3^*} are already quite small by this stage, so we should soon be able to rigorously keep {H_t} from vanishing at this point. We also have a scheme for lower bounding the difference of Dirichlet series below this range, though it is not clear at present how far we can continue this before the error terms {E_1,E_2,E_3^*} become unmanageable. For very small {x} we may have to explore some faster ways to compute the expression {H_t}, which is still difficult to compute directly with high accuracy. One will also need to bound the somewhat unwieldy expressions {E_1,E_2} by something more manageable. For instance, right now these quantities depend on the continuous variable {x}; it would be preferable to have a quantity that depends only on the parameter {N = \lfloor \sqrt{ \frac{x}{4\pi} + \frac{t}{16} }\rfloor}, as this could be computed numerically for all {x} in the remaining range of interest quite quickly.
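
To make the mollification step concrete, note that multiplying a finite Dirichlet series {\sum_n c_n n^{-s}} by a factor {1 - \frac{b_p}{p^s}} simply replaces the coefficient at {np} by {c_{np} - b_p c_n}; the sketch below (illustrative bookkeeping only, with the primes and the coefficients {b_p} supplied by the caller) iterates this over the mollifier primes.

```python
# Dirichlet-series mollification: multiply sum_n c_n / n^s by the Euler
# product prod_{p <= P} (1 - b_of(p) / p^s), returning the coefficients of
# the (longer) mollified series.
def mollify(c, primes, b_of):
    """c: dict {n: c_n}; primes: mollifier primes; b_of(p): the coefficient b_p."""
    coeffs = dict(c)
    for p in primes:
        new = dict(coeffs)
        for n, cn in coeffs.items():
            new[n * p] = new.get(n * p, 0) - b_of(p) * cn
        coeffs = new
    return coeffs
```

One applies this to the coefficients of both Dirichlet series in the difference above; a lower bound on the mollified difference, combined with an upper bound on the mollifier itself, then yields a lower bound on the original difference.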

As before, any other mathematical discussion related to the project is also welcome here, for instance any summaries of previous discussion that was not covered in this post.

This is the fourth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing https://terrytao.wordpress.com/2018/01/24/polymath-proposal-upper-bounding-the-de-bruijn-newman-constant/. Progress will be summarised at this Polymath wiki page.

We are getting closer to finishing off the following test problem: can one show that {H_t(x+iy) \neq 0} whenever {t = y = 0.4}, {x \geq 0}? This would morally show that {\Lambda \leq 0.48}. A wiki page for this problem has now been created here. We have obtained a number of approximations {A+B, A'+B', A^{eff}+B^{eff}, A^{toy}+B^{toy}} to {H_t} (see wiki page), though numeric evidence indicates that the approximations are all very close to each other. (Many of these approximations come with a correction term {C}, but thus far it seems that we may be able to avoid having to use this refinement to the approximations.) The effective approximation {A^{eff} + B^{eff}} also comes with an effective error bound

\displaystyle |H_t - A^{eff} - B^{eff}| \leq E_1 + E_2 + E_3

for some explicit (but somewhat messy) error terms {E_1,E_2,E_3}: see this wiki page for details. The original approximations {A+B, A'+B'} can be considered deprecated at this point in favour of the (slightly more complicated) approximation {A^{eff}+B^{eff}}; the approximation {A^{toy}+B^{toy}} is a simplified version of {A^{eff}+B^{eff}} which is not quite as accurate but might be useful for testing purposes.

It is convenient to normalise everything by an explicit non-zero factor {B^{eff}_0}. Asymptotically, {(A^{eff} + B^{eff}) / B^{eff}_0} converges to 1; numerically, it appears that its magnitude (and also its real part) stays roughly between 0.4 and 3 in the range {10^5 \leq x \leq 10^6}, and we seem to be able to keep it (or at least the toy counterpart {(A^{toy} + B^{toy}) / B^{toy}_0}) away from zero starting from about {x \geq 4 \times 10^6} (here it seems that there is a useful trick of multiplying by Euler-type factors like {1 - \frac{1}{2^{1-s}}} to cancel off some of the oscillation). Also, the bounds on the error {(H_t - A^{eff} - B^{eff}) / B^{eff}_0} seem to be of size about 0.1 or better in these ranges. So we seem to be on track to be able to rigorously eliminate zeroes starting from about {x \geq 10^5} or so. We have not discussed too much what to do with the small values of {x}; at some point our effective error bounds will become unusable, and we may have to find some faster ways to compute {H_t}.

In addition to this main direction of inquiry, there have been further discussions on the dynamics of zeroes, and some numerical investigations of the behaviour of Lehmer pairs under heat flow. Contributors are welcome to summarise any findings from these discussions from previous threads (or on any other related topic, e.g. improvements in the code) in the comments below.

This is the third “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We are making progress on the following test problem: can one show that {H_t(x+iy) \neq 0} whenever {t = 0.4}, {x \geq 0}, and {y \geq 0.4}? This would imply that

\displaystyle \Lambda \leq 0.4 + \frac{1}{2} (0.4)^2 = 0.48

which would be the first quantitative improvement over the de Bruijn bound of {\Lambda \leq 1/2} (or the Ki-Kim-Lee refinement of {\Lambda < 1/2}). Of course we can try to lower the two parameters of {0.4} later on in the project, but this seems as good a place to start as any. One could also potentially try to use finer analysis of dynamics of zeroes to improve the bound {\Lambda \leq 0.48} further, but this seems to be a less urgent task.

Probably the hardest case is {y=0.4}, as there is a good chance that one can then recover the {y>0.4} case by a suitable use of the argument principle. Here we appear to have a workable Riemann-Siegel type formula that gives a tractable approximation for {H_t}. To describe this formula, first note that in the {t=0} case we have

\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1+iz}{2} )

and the Riemann-Siegel formula gives

\displaystyle \xi(s) = \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s}

\displaystyle + \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \sum_{m=1}^M \frac{1}{m^{1-s}}

\displaystyle + \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw

for any natural numbers {N,M}, where {C_M} is a contour from {+\infty} to {+\infty} that winds once anticlockwise around the zeroes {2\pi i m, |m| \leq M} of {e^w-1} but does not wind around any other zeroes. A good choice of {N,M} to use here is

\displaystyle N=M=\lfloor \sqrt{\mathrm{Im}(s)/2\pi}\rfloor = \lfloor \sqrt{\mathrm{Re}(z)/4\pi} \rfloor. \ \ \ \ \ (1)

 

In this case, a classical steepest descent computation (see wiki) yields the approximation

\displaystyle \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw \approx - (2\pi i M)^{s-1} \Psi( \frac{s}{2\pi i M} - N )

where

\displaystyle \Psi(\alpha) := 2\pi \frac{\cos \pi(\frac{1}{2}\alpha^2 - \alpha - \frac{1}{8})}{\cos(\pi \alpha)} \exp( \frac{i\pi}{2} \alpha^2 - \frac{5\pi i}{8} ).

Thus we have

\displaystyle H_0(z) \approx A^{(0)} + B^{(0)} - C^{(0)}

where

\displaystyle A^{(0)} := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s}

\displaystyle B^{(0)} := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \sum_{m=1}^M \frac{1}{m^{1-s}}

\displaystyle C^{(0)} := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} (2\pi i M)^{s-1} \Psi( \frac{s}{2\pi i M} - N )

with {s := \frac{1+iz}{2}} and {N,M} given by (1).
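
For readers who want to experiment, here is a rough numerical transcription (using mpmath) of the approximation {A^{(0)}+B^{(0)}} above; the working precision is an arbitrary choice, the {C^{(0)}} correction is omitted (its heat-flow analogue {C} is negligible once {x} is moderately large, as noted below), and nothing here is the project's actual code.

```python
# Evaluate the Riemann-Siegel type approximation A^(0) + B^(0) to H_0(z),
# with s = (1+iz)/2 and N = M = floor(sqrt(Re(z)/(4*pi))) as in (1).
from mpmath import mp, mpc, gamma, power, pi, sqrt, floor

mp.dps = 30  # illustrative working precision

def A0_plus_B0(z):
    z = mpc(z)
    s = (1 + 1j * z) / 2
    N = M = int(floor(sqrt(z.real / (4 * pi))))  # the choice (1)
    pref = (s * (s - 1) / 2) / 8
    A0 = pref * power(pi, -s / 2) * gamma(s / 2) * sum(
        power(n, -s) for n in range(1, N + 1))
    B0 = pref * power(pi, -(1 - s) / 2) * gamma((1 - s) / 2) * sum(
        power(m, -(1 - s)) for m in range(1, M + 1))
    return A0 + B0
```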

Heuristically, we have derived (see wiki) the more general approximation

\displaystyle H_t(z) \approx A + B - C

for {t>0} (and in particular for {t=0.4}), where

\displaystyle A := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} )}{n^s}

\displaystyle B := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \sum_{m=1}^M \frac{\exp(\frac{t}{16} \log^2 \frac{5-s}{2\pi m^2} )}{m^{1-s}}

\displaystyle C := \exp(-\frac{t \pi^2}{64}) C^{(0)}.

In practice it seems that the {C} term is negligible once the real part {x} of {z} is moderately large, so one also has the approximation

\displaystyle H_t(z) \approx A + B.

For large {x}, and for fixed {t,y>0}, e.g. {t=y=0.4}, the sums {A,B} converge fairly quickly (in fact the situation seems to be significantly better here than the much more intensively studied {t=0} case), and we expect the first term

\displaystyle B_0 := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \exp( \frac{t}{16} \log^2 \frac{5-s}{2\pi} )

of the {B} series to dominate. Indeed, analytically we know that {\frac{A+B-C}{B_0} \rightarrow 1} (or {\frac{A+B}{B_0} \rightarrow 1}) as {x \rightarrow \infty} (holding {y} fixed), and it should also be provable that {\frac{H_t}{B_0} \rightarrow 1} as well. Numerically with {t=y=0.4}, it seems in fact that {\frac{A+B-C}{B_0}} (or {\frac{A+B}{B_0}}) stay within a distance of about {1/2} of {1} once {x} is moderately large (e.g. {x \geq 2 \times 10^5}). This raises the hope that one can solve the toy problem of showing {H_t(x+iy) \neq 0} for {t=y=0.4} by numerically controlling {H_t(x+iy) / B_0} for small {x} (e.g. {x \leq 2 \times 10^5}), numerically controlling {(A+B)/B_0} and analytically bounding the error {(H_t - A - B)/B_0} for medium {x} (e.g. {2 \times 10^5 \leq x \leq 10^7}), and analytically bounding both {(A+B)/B_0} and {(H_t-A-B)/B_0} for large {x} (e.g. {x \geq 10^7}). (These numbers {2 \times 10^5} and {10^7} are arbitrarily chosen here and may end up being optimised to something else as the computations become clearer.)

Thus, we now have four largely independent tasks (for suitable ranges of “small”, “medium”, and “large” {x}):

  1. Numerically computing {H_t(x+iy) / B_0} for small {x} (with enough accuracy to verify that there are no zeroes)
  2. Numerically computing {(A+B)/B_0} for medium {x} (with enough accuracy to keep it away from zero)
  3. Analytically bounding {(A+B)/B_0} for large {x} (with enough accuracy to keep it away from zero); and
  4. Analytically bounding {(H_t - A - B)/B_0} for medium and large {x} (with a bound that is better than the bound away from zero in the previous two tasks).

Note that tasks 2 and 3 do not directly require any further understanding of the function {H_t}.
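
As a concrete illustration of the quantity appearing in tasks 2 and 3, here is a rough mpmath transcription (not the project's code) of the approximations {A}, {B} and the normalising term {B_0} displayed above, assembled into {(A+B)/B_0}; the principal branch is used for the complex logarithms and the working precision is an arbitrary choice, so this is only meant for reproducing the kind of rough numerical behaviour described in this post.

```python
# Evaluate (A+B)/B_0 for H_t, with s = (1+iz)/2 and N = M given by (1).
from mpmath import mp, mpc, mpf, gamma, exp, log, power, pi, sqrt, floor

mp.dps = 30

def AB_over_B0(z, t=0.4):
    z, t = mpc(z), mpf(t)
    s = (1 + 1j * z) / 2
    N = M = int(floor(sqrt(z.real / (4 * pi))))
    pref = (s * (s - 1) / 2) / 8
    A = pref * power(pi, -s / 2) * gamma(s / 2) * sum(
        exp(t / 16 * log((s + 4) / (2 * pi * n * n)) ** 2) * power(n, -s)
        for n in range(1, N + 1))
    B = pref * power(pi, -(1 - s) / 2) * gamma((1 - s) / 2) * sum(
        exp(t / 16 * log((5 - s) / (2 * pi * m * m)) ** 2) * power(m, -(1 - s))
        for m in range(1, M + 1))
    B0 = pref * power(pi, -(1 - s) / 2) * gamma((1 - s) / 2) * exp(
        t / 16 * log((5 - s) / (2 * pi)) ** 2)
    return (A + B) / B0

# e.g. abs(AB_over_B0(2 * 10**5 + 0.4j)) should be within O(1) of 1,
# consistent with the rough numerical behaviour described in this post.
```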

Below we will give a progress report on the numeric and analytic sides of these tasks.

— 1. Numerics report (contributed by Sujit Nair) —

There is some progress on the code side, but not at the pace I was hoping for. Here are a few things which happened (rather, mistakes which were taken care of).

  1. We got rid of code which wasn’t being used. For example, @dhjpolymath computed {H_t} based on an old version but only realized it after the fact.
  2. We implemented tests to catch human/numerical bugs before a computation starts. Again, we lost some numerical cycles but moving forward these can be avoided.
  3. David got set up on GitHub and he is able to compare his output (in C) with the Python code. That is helping a lot.

Two areas which were worked on were

  1. Computing {H_t} and zeroes for {t} around {0.4}
  2. Computing quantities like {(A+B-C)/B_0}, {(A+B)/B_0}, {C/B_0}, etc. with the goal of understanding the zero free regions.

Some observations for {t=0.4}, {y=0.4}, {x \in ( 10^4, 10^7)} include:

  • {(A+B) / B_0} does seem to avoid the negative real axis
  • {|(A+B) / B_0| > 0.4} (based on the oscillations and trends in the plots)
  • {|C/B_0|} seems to be settling in the {10^{-4}} range.

See the figure below. The top plot is on the complex plane and the bottom plot is the absolute value. The code to play with this is here.

— 2. Analysis report —

The Riemann-Siegel formula and some manipulations (see wiki) give {H_0 = A^{(0)} + B^{(0)} - \tilde C^{(0)}}, where

\displaystyle A^{(0)} = \frac{2}{8} \sum_{n=1}^N \int_C \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log(\pi n^2) )\ du

\displaystyle - \frac{3}{8} \sum_{n=1}^N \int_C \exp( \frac{s+2}{2} u - e^u - \frac{s}{2} \log(\pi n^2) )\ du

\displaystyle B^{(0)} = \frac{2}{8} \sum_{m=1}^M \int_{\overline{C}} \exp( \frac{5-s}{2} u - e^u - \frac{1-s}{2} \log(\pi m^2) )\ du

\displaystyle - \frac{3}{8} \sum_{m=1}^M \int_C \exp( \frac{3-s}{2} u - e^u - \frac{1-s}{2} \log(\pi m^2) )\ du

\displaystyle \tilde C^{(0)} := -\frac{2}{8} \sum_{n=0}^\infty \frac{e^{-i\pi s/2} e^{i\pi s n}}{2^s \pi^{1/2}} \int_{\overline{C}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u)\ du dw

\displaystyle +\frac{3}{8} \sum_{n=0}^\infty \frac{e^{-i\pi s/2} e^{i\pi s n}}{2^s \pi^{1/2}} \int_{\overline{C}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u)\ du dw

where {C} is a contour that goes from {+i\infty} to {+\infty} staying a bounded distance away from the upper imaginary and right real axes, and {\overline{C}} is the complex conjugate of {C}. (In each of these sums, it is the first term that should dominate, with the second one being only about {O(1/x)} times as large.) One can then evolve by the heat flow to obtain {H_t = \tilde A + \tilde B - \tilde C}, where

\displaystyle \tilde A := \frac{2}{8} \sum_{n=1}^N \int_C \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log(\pi n^2) + \frac{t}{16} (u - \log(\pi n^2))^2)\ du

\displaystyle - \frac{3}{8} \sum_{n=1}^N \int_C \exp( \frac{s+2}{2} u - e^u - \frac{s}{2} \log(\pi n^2) + \frac{t}{16} (u - \log(\pi n^2))^2)\ du

\displaystyle \tilde B := \frac{2}{8} \sum_{m=1}^M \int_{\overline{C}} \exp( \frac{5-s}{2} u - e^u - \frac{1-s}{2} \log(\pi m^2) + \frac{t}{16} (u - \log(\pi m^2))^2)\ du

\displaystyle - \frac{3}{8} \sum_{m=1}^M \int_C \exp( \frac{3-s}{2} u - e^u - \frac{1-s}{2} \log(\pi m^2) + \frac{t}{16} (u - \log(\pi m^2))^2)\ du

\displaystyle \tilde C := -\frac{2}{8} \sum_{n=0}^\infty \frac{e^{-i\pi s/2} e^{i\pi s n}}{2^s \pi^{1/2}} \int_{\overline{C}} \int_{C_M}

\displaystyle \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u + \frac{t}{4} (i \pi(n-1/2) + \log \frac{w}{2\sqrt{\pi}} - \frac{u}{2})^2) \ du dw

\displaystyle +\frac{3}{8} \sum_{n=0}^\infty \frac{e^{-i\pi s/2} e^{i\pi s n}}{2^s \pi^{1/2}} \int_{\overline{C}} \int_{C_M}

\displaystyle \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u + \frac{t}{4} (i \pi(n-1/2) + \log \frac{w}{2\sqrt{\pi}} - \frac{u}{2})^2)\ du dw.

Steepest descent heuristics then predict that {\tilde A \approx A}, {\tilde B \approx B}, and {\tilde C \approx C}. For the purposes of this project, we will need effective error estimates here, with explicit error terms.

A start has been made towards this goal at this wiki page. Firstly, there is an “effective Laplace method” lemma that gives effective bounds on integrals of the form {\int_I e^{\phi(x)} \psi(x)\ dx} if the real part of {\phi(x)} is either monotone with large derivative, or has a critical point and is decreasing on both sides of that critical point. In principle, all one has to do is manipulate expressions such as {\tilde A - A}, {\tilde B - B}, {\tilde C - C} by change of variables, contour shifting and integration by parts until they are of a form to which the above lemma can be profitably applied. As one may imagine, though, the computations are messy, particularly for the {\tilde C} term. As a warm-up, I have begun by trying to estimate integrals of the form

\displaystyle \int_C \exp( s (1+u-e^u) + \frac{t}{16} (u+b)^2 )\ du

for smallish complex numbers {b}, as these sorts of integrals appear in the formulae for {\tilde A, \tilde B, \tilde C}. At the time of writing, there are effective bounds for the {b=0} case, and I am currently working on extending them to the {b \neq 0} case, which should give enough control to approximate {\tilde A - A} and {\tilde B-B}. The most complicated task will be that of upper bounding {\tilde C}, but this too looks eventually doable.

This is the first official “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The proposal naturally splits into at least three separate (but loosely related) topics:

  • Numerical computation of the entire functions {H_t(z)}, with the ultimate aim of establishing zero-free regions of the form {\{ x+iy: 0 \leq x \leq T, y \geq \varepsilon \}} for various {T, \varepsilon > 0}.
  • Improved understanding of the dynamics of the zeroes {z_j(t)} of {H_t}.
  • Establishing the zero-free nature of {H_t(x+iy)} when {y \geq \varepsilon > 0} and {x} is sufficiently large depending on {t} and {\varepsilon}.

Below the fold, I will present each of these topics in turn, to initiate further discussion in each of them. (I thought about splitting this post into three to have three separate discussions, but given the current volume of comments, I think we should be able to manage for now having all the comments in a single post. If this changes then of course we can split up some of the discussion later.)

To begin with, let me present some formulae for computing {H_t} (inspired by similar computations in the Ki-Kim-Lee paper) which may be useful. The initial definition of {H_t} is

\displaystyle  H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du

where

\displaystyle  \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(- \pi n^2 e^{4u} )

is a variant of the Jacobi theta function. We observe that {\Phi} in fact extends analytically to the strip

\displaystyle  \{ u \in {\bf C}: -\frac{\pi}{8} < \mathrm{Im} u < \frac{\pi}{8} \}, \ \ \ \ \ (1)

as {e^{4u}} has positive real part on this strip. One can use the Poisson summation formula to verify that {\Phi} is even, {\Phi(-u) = \Phi(u)} (see this previous post for details). This lets us obtain a number of other formulae for {H_t}. Most obviously, one can unfold the integral to obtain

\displaystyle  H_t(z) = \frac{1}{2} \int_{-\infty}^\infty e^{tu^2} \Phi(u) e^{izu}\ du.

In my previous paper with Brad, we used this representation, combined with Fubini’s theorem to swap the sum and integral, to obtain a useful series representation for {H_t} in the {t<0} case. Unfortunately this is not possible in the {t>0} case because expressions such as {e^{tu^2} e^{9u} \exp( -\pi n^2 e^{4u} ) e^{izu}} diverge as {u} approaches {-\infty}. Nevertheless we can still perform the following contour integration manipulation. Let {0 \leq \theta < \frac{\pi}{8}} be fixed. The function {\Phi} decays super-exponentially fast (much faster than {e^{tu^2}}, in particular) as {\mathrm{Re} u \rightarrow +\infty} with {0 \leq \mathrm{Im} u \leq \theta}; as {\Phi} is even, we also have this decay as {\mathrm{Re} u \rightarrow -\infty} with {0 \leq \mathrm{Im} u \leq \theta} (this is despite each of the summands in {\Phi} having much slower decay in this direction – there is considerable cancellation!). Hence by Cauchy's theorem we have

\displaystyle  H_t(z) = \frac{1}{2} \int_{i\theta-\infty}^{i\theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du.

Splitting the horizontal line from {i\theta-\infty} to {i\theta+\infty} at {i\theta} and using the even nature of {\Phi(u)}, we thus have

\displaystyle  H_t(z) = \frac{1}{2} ( \int_{i\theta}^{i\theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du + \int_{-i\theta}^{-i\theta+\infty} e^{tu^2} \Phi(u) e^{-izu}\ du ).

Using the functional equation {\Phi(\overline{u}) = \overline{\Phi(u)}}, we thus have the representation

\displaystyle  H_t(z) = \frac{1}{2} ( K_{t,\theta}(z) + \overline{K_{t,\theta}(\overline{z})} ) \ \ \ \ \ (2)

where

\displaystyle  K_{t,\theta}(z) := \int_{i\theta}^{i \theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du

\displaystyle  = \sum_{n=1}^\infty 2 \pi^2 n^4 I_{t, \theta}( z - 9i, \pi n^2 ) - 3 \pi n^2 I_{t,\theta}( z - 5i, \pi n^2 )

where {I_{t,\theta}(b,\beta)} is the oscillatory integral

\displaystyle  I_{t,\theta}(b,\beta) := \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \beta e^{4u} + i b u )\ du. \ \ \ \ \ (3)

The formula (2) is valid for any {0 \leq \theta < \frac{\pi}{8}}. Naively one would think that it would be simplest to take {\theta=0}; however, when {z=x+iy} and {x} is large (with {y} bounded), it seems asymptotically better to take {\theta} closer to {\pi/8}, in particular something like {\theta = \frac{\pi}{8} - \frac{1}{4x}} seems to be a reasonably good choice. This is because the integrand in (3) becomes significantly less oscillatory and also much lower in amplitude; the {\exp(ibu)} term in (3) now generates a factor roughly comparable to {\exp( - \pi x/8 )} (which, as we will see below, is the main term in the decay asymptotics for {H_t(x+iy)}), while the {\exp( - \beta e^{4u} )} term still exhibits a reasonable amount of decay as {u \rightarrow \infty}. We will use the representation (2) in the asymptotic analysis of {H_t} below, but it may also be a useful representation to use for numerical purposes.
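
Since the representation (2) may be useful numerically, here is a minimal mpmath sketch of it (an illustration only): the sum defining {\Phi} is truncated at a modest number of terms and each integral (3) is computed by quadrature along the ray from {i\theta} to {i\theta+\infty}. The choices of {\theta}, the truncation point and the working precision are illustrative, and no attempt is made here to handle the large-{x} regime discussed above.

```python
# Evaluate H_t(z) via the representation (2), with K_{t,theta} expanded as
# a sum of oscillatory integrals I_{t,theta}(b, beta) as in (3).
from mpmath import mp, mpc, exp, pi, quad, inf, conj

mp.dps = 30

def I(t, theta, b, beta):
    """I_{t,theta}(b, beta), integrated over u = i*theta + v, v in [0, inf)."""
    def integrand(v):
        u = 1j * theta + v
        return exp(t * u**2 - beta * exp(4 * u) + 1j * b * u)
    return quad(integrand, [0, inf])

def K(t, theta, z, n_max=8):
    total = mpc(0)
    for n in range(1, n_max + 1):
        total += 2 * pi**2 * n**4 * I(t, theta, z - 9j, pi * n**2)
        total -= 3 * pi * n**2 * I(t, theta, z - 5j, pi * n**2)
    return total

def H(t, z, theta=0.3, n_max=8):
    z = mpc(z)
    return (K(t, theta, z, n_max) + conj(K(t, theta, conj(z), n_max))) / 2
```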


Building on the interest expressed in the comments to this previous post, I am now formally proposing to initiate a “Polymath project” on the topic of obtaining new upper bounds on the de Bruijn-Newman constant {\Lambda}. The purpose of this post is to describe the proposal and discuss the scope and parameters of the project.

De Bruijn introduced a family {H_t: {\bf C} \rightarrow {\bf C}} of entire functions for each real number {t}, defined by the formula

\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du

where {\Phi} is the super-exponentially decaying function

\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).

As discussed in this previous post, the Riemann hypothesis is equivalent to the assertion that all the zeroes of {H_0} are real.
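
For orientation, the defining integral can be evaluated directly for modest values of {z} by truncating the sum defining {\Phi} and integrating numerically; the following mpmath sketch (an illustration, with an arbitrary truncation point and precision) does exactly that, and one sanity check is that its value at {t=0}, {z=0} should agree with {\xi(1/2)/8} by the identity {H_0(z) = \frac{1}{8} \xi(\frac{1+iz}{2})} quoted earlier on this page.

```python
# Direct numerical evaluation of H_t(z) from its defining integral,
# truncating the sum for Phi(u) at n_max terms.
from mpmath import mp, exp, cos, pi, quad, inf

mp.dps = 30

def Phi(u, n_max=10):
    return sum((2 * pi**2 * n**4 * exp(9 * u) - 3 * pi * n**2 * exp(5 * u))
               * exp(-pi * n**2 * exp(4 * u)) for n in range(1, n_max + 1))

def H(t, z, n_max=10):
    return quad(lambda u: exp(t * u**2) * Phi(u, n_max) * cos(z * u), [0, inf])
```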

De Bruijn and Newman showed that there existed a real constant {\Lambda} – the de Bruijn-Newman constant – such that {H_t} has all zeroes real whenever {t \geq \Lambda}, and at least one non-real zero when {t < \Lambda}. In particular, the Riemann hypothesis is equivalent to the upper bound {\Lambda \leq 0}. In the opposite direction, several lower bounds on {\Lambda} have been obtained over the years, most recently in my paper with Brad Rodgers where we showed that {\Lambda \geq 0}, a conjecture of Newman.

As for upper bounds, de Bruijn showed back in 1950 that {\Lambda \leq 1/2}. The only progress since then has been the work of Ki, Kim and Lee in 2009, who improved this slightly to {\Lambda < 1/2}. The primary proposed aim of this Polymath project is to obtain further explicit improvements to the upper bound of {\Lambda}. Of course, if we could lower the upper bound all the way to zero, this would solve the Riemann hypothesis, but I do not view this as a realistic outcome of this project; rather, the upper bounds that one could plausibly obtain by known methods and numerics would be comparable in achievement to the various numerical verifications of the Riemann hypothesis that exist in the literature (e.g., that the first {N} non-trivial zeroes of the zeta function lie on the critical line, for various large explicit values of {N}).

In addition to the primary goal, one could envisage some related secondary goals of the project, such as a better understanding (both analytic and numerical) of the functions {H_t} (or of similar functions), and of the dynamics of the zeroes of these functions. Perhaps further potential goals could emerge in the discussion to this post.

I think there is a plausible plan of attack on this project that proceeds as follows. Firstly, there are results going back to the original work of de Bruijn that demonstrate that the zeroes of {H_t} become attracted to the real line as {t} increases; in particular, if one defines {\sigma_{max}(t)} to be the supremum of the imaginary parts of all the zeroes of {H_t}, then it is known that this quantity obeys the differential inequality

\displaystyle \frac{d}{dt} \sigma_{max}(t) \leq - \frac{1}{\sigma_{max}(t)} \ \ \ \ \ (1)

 

whenever {\sigma_{max}(t)} is positive; furthermore, once {\sigma_{max}(t) = 0} for some {t}, then {\sigma_{max}(t') = 0} for all {t' > t}. I hope to explain this in a future post (it is basically due to the attraction that a zero off the real axis has to its complex conjugate). As a corollary of this inequality, we have the upper bound

\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2 \ \ \ \ \ (2)

 

for any real number {t}. For instance, because all the non-trivial zeroes of the Riemann zeta function lie in the critical strip {\{ s: 0 \leq \mathrm{Re} s \leq 1 \}}, one has {\sigma_{max}(0) \leq 1}, which when inserted into (2) gives {\Lambda \leq 1/2}. The inequality (1) also gives {\sigma_{max}(t) \leq \sqrt{1-2t}} for all {0 \leq t \leq 1/2}. If we could find some explicit {t} between {0} and {1/2} where we can improve this upper bound on {\sigma_{max}(t)} by an explicit constant, this would lead to a new upper bound on {\Lambda}.
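
For the record, the derivation of the bound {\sigma_{max}(t) \leq \sqrt{1-2t}} from (1) is a formal one-line computation: as long as {\sigma_{max}(t)} is positive, (1) gives

\displaystyle \frac{d}{dt} \sigma_{max}(t)^2 = 2 \sigma_{max}(t) \frac{d}{dt} \sigma_{max}(t) \leq -2,

so that {\sigma_{max}(t)^2 \leq \sigma_{max}(0)^2 - 2t \leq 1-2t} (and once {\sigma_{max}} hits zero it stays zero, so the bound persists). Note also that inserting {\sigma_{max}(t) \leq \sqrt{1-2t}} directly into (2) only recovers {\Lambda \leq t + \frac{1}{2}(1-2t) = \frac{1}{2}} for every choice of {t}, which is why an explicit improvement on this bound at some specific {t} is exactly what is needed to beat the de Bruijn bound.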

Secondly, the work of Ki, Kim and Lee (based on an analysis of the various terms appearing in the expression for {H_t}) shows that for any positive {t}, all but finitely many of the zeroes of {H_t} are real (in contrast with the {t=0} situation, where it is still an open question as to whether the proportion of non-trivial zeroes of the zeta function on the critical line is asymptotically equal to {1}). As a key step in this analysis, Ki, Kim and Lee show that for any {t>0} and {\varepsilon>0}, there exists a {T>0} such that all the zeroes of {H_t} with real part at least {T} have imaginary part at most {\varepsilon}. Ki, Kim and Lee do not explicitly compute how {T} depends on {t} and {\varepsilon}, but it looks like this bound could be made effective.

If so, this suggests a possible strategy to get a new upper bound on {\Lambda}:

  • Select a good choice of parameters {t, \varepsilon > 0}.
  • By refining the Ki-Kim-Lee analysis, find an explicit {T} such that all zeroes of {H_t} with real part at least {T} have imaginary part at most {\varepsilon}.
  • By a numerical computation (e.g. using the argument principle), also verify that zeroes of {H_t} with real part between {0} and {T} have imaginary part at most {\varepsilon}.
  • Combining these facts, we obtain that {\sigma_{max}(t) \leq \varepsilon}; hopefully, one can insert this into (2) and get a new upper bound for {\Lambda}.

Of course, there may also be alternate strategies to upper bound {\Lambda}, and I would imagine this would also be a legitimate topic of discussion for this project.

One appealing thing about the above strategy for the purposes of a polymath project is that it naturally splits the project into several interacting but reasonably independent parts: an analytic part in which one tries to refine the Ki-Kim-Lee analysis (based on explicitly upper and lower bounding various terms in a certain series expansion for {H_t} – I may detail this later in a subsequent post); a numerical part in which one controls the zeroes of {H_t} in a certain finite range; and perhaps also a dynamical part where one sees if there is any way to improve the inequality (2). For instance, the numerical “team” might, over time, be able to produce zero-free regions for {H_t} with an increasingly large value of {T}, while in parallel the analytic “team” might produce increasingly smaller values of {T} beyond which they can control zeroes, and eventually the two bounds would meet up and we obtain a new bound on {\Lambda}. This factoring of the problem into smaller parts was also a feature of the successful Polymath8 project on bounded gaps between primes.

The project also resembles Polymath8 in another aspect: that there is an obvious way to numerically measure progress, by seeing how the upper bound for {\Lambda} decreases over time (and presumably there will also be another metric of progress regarding how well we can control {T} in terms of {t} and {\varepsilon}). However, in Polymath8 the final measure of progress (the upper bound {H} on gaps between primes) was a natural number, and thus could not decrease indefinitely. Here, the bound will be a real number, and there is a possibility that one may end up having an infinite descent in which progress slows down over time, with refinements to increasingly less significant digits of the bound as the project progresses. Because of this, I think it makes sense to follow recent Polymath projects and place an expiration date for the project, for instance one year after the launch date, in which we will agree to end the project and (if the project was successful enough) write up the results, unless there is consensus at that time to extend the project. (In retrospect, we should probably have imposed similar sunset dates on older Polymath projects, some of which have now been inactive for years, but that is perhaps a discussion for another time.)

Some Polymath projects have been known for a breakneck pace, making it hard for some participants to keep up. It’s hard to control these things, but I am envisaging a relatively leisurely project here, perhaps taking the full year mentioned above. It may well be that as the project matures we will largely be waiting for the results of lengthy numerical calculations to come in, for instance. Of course, as with previous projects, we would maintain some wiki pages (and possibly some other resources, such as a code repository) to keep track of progress and also to summarise what we have learned so far. For instance, as was done with some previous Polymath projects, we could begin with some “online reading seminars” where we go through some relevant piece of literature (most obviously the Ki-Kim-Lee paper, but there may be other resources that become relevant, e.g. one could imagine the literature on numerical verification of RH to be of value).

One could also imagine some incidental outcomes of this project, such as a more efficient way to numerically establish zero free regions for various analytic functions of interest; in particular, the project may well end up focusing on some other aspect of mathematics than the specific questions posed here.

Anyway, I would be interested to hear in the comments below from others who might be interested in participating, or at least observing, this project, particularly if they have suggestions regarding the scope and direction of the project, and on organisational structure (e.g. if one should start with reading seminars, or some initial numerical exploration of the functions {H_t}, etc.). One could also begin some preliminary discussion of the actual mathematics of the project itself, though (in line with the leisurely pace I was hoping for), I expect that the main burst of mathematical activity would happen later, once the project is formally launched (with wiki page resources, blog posts dedicated to specific aspects of the project, etc.).

 

The Polymath14 online collaboration has uploaded to the arXiv its paper “Homogeneous length functions on groups”, submitted to Algebra & Number Theory. The paper completely classifies homogeneous length functions {\| \|: G \rightarrow {\bf R}^+} on an arbitrary group {G = (G,\cdot,e,()^{-1})}, that is to say non-negative functions that obey the symmetry condition {\|x^{-1}\| = \|x\|}, the non-degeneracy condition {\|x\|=0 \iff x=e}, the triangle inequality {\|xy\| \leq \|x\| + \|y\|}, and the homogeneity condition {\|x^2\| = 2\|x\|}. It turns out that these norms can only arise from pulling back the norm of a Banach space by an isometric embedding of the group. Among other things, this shows that {G} can support a homogeneous length function if and only if it is abelian and torsion-free, thus giving a metric description of this property.

The proof is based on repeated use of the homogeneous length function axioms, combined with elementary identities of commutators, to obtain increasingly good bounds on quantities such as {\|[x,y]\|}, until one can show that such norms have to vanish. See the previous post for a full proof. The result is robust in that it allows for some loss in the triangle inequality and homogeneity condition, allowing for some new results on “quasinorms” on groups that relate to quasihomomorphisms.

As there are now a large number of comments on the previous post on this project, this post will also serve as the new thread for any final discussion of this project as it winds down.
