Building on the interest expressed in the comments to this previous post, I am now formally proposing to initiate a “Polymath project” on the topic of obtaining new upper bounds on the de Bruijn-Newman constant $\Lambda$. The purpose of this post is to describe the proposal and discuss the scope and parameters of the project.
De Bruijn introduced the family of entire functions $H_t: \mathbb{C} \to \mathbb{C}$ for each real number $t$, defined by the formula
$$H_t(x) := \int_0^\infty e^{tu^2} \Phi(u) \cos(xu)\, du,$$
where $\Phi$ is the super-exponentially decaying function
$$\Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).$$
As discussed in this previous post, the Riemann hypothesis is equivalent to the assertion that all the zeroes of $H_0$ are real.
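As a concrete starting point, here is a minimal Python sketch of these two functions (my own illustration, not from the post itself; the truncation parameters N and U are ad hoc choices, harmless here because $\Phi$ decays super-exponentially, and this sketch handles real $x$ only):

import numpy as np
from scipy.integrate import quad

def Phi(u, N=100):
    """Truncated series for the super-exponentially decaying function Phi(u)."""
    n = np.arange(1, N + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H_t(t, x, U=6.0):
    """H_t(x) = int_0^infty exp(t*u^2) * Phi(u) * cos(x*u) du, truncated at U."""
    return quad(lambda u: np.exp(t * u**2) * Phi(u) * np.cos(x * u), 0, U)[0]

# sanity check: H_0 should change sign near twice the first zeta zero ordinate
print(H_t(0, 28.0), H_t(0, 28.5))   # expect opposite signs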
De Bruijn and Newman showed that there exists a real constant $\Lambda$ – the de Bruijn-Newman constant – such that $H_t$ has all zeroes real whenever $t \ge \Lambda$, and at least one non-real zero when $t < \Lambda$. In particular, the Riemann hypothesis is equivalent to the upper bound $\Lambda \le 0$. In the opposite direction, several lower bounds on $\Lambda$ have been obtained over the years, most recently in my paper with Brad Rodgers where we showed that $\Lambda \ge 0$, a conjecture of Newman.
As for upper bounds, de Bruijn showed back in 1950 that $\Lambda \le 1/2$. The only progress since then has been the work of Ki, Kim and Lee in 2009, who improved this slightly to $\Lambda < 1/2$. The primary proposed aim of this Polymath project is to obtain further explicit improvements to the upper bound of $\Lambda$. Of course, if we could lower the upper bound all the way to zero, this would solve the Riemann hypothesis, but I do not view this as a realistic outcome of this project; rather, the upper bounds that one could plausibly obtain by known methods and numerics would be comparable in achievement to the various numerical verifications of the Riemann hypothesis that exist in the literature (e.g., that the first $N$ non-trivial zeroes of the zeta function lie on the critical line, for various large explicit values of $N$).
In addition to the primary goal, one could envisage some related secondary goals of the project, such as a better understanding (both analytic and numerical) of the functions $H_t$ (or of similar functions), and of the dynamics of the zeroes of these functions. Perhaps further potential goals could emerge in the discussion to this post.
I think there is a plausible plan of attack on this project that proceeds as follows. Firstly, there are results going back to the original work of de Bruijn that demonstrate that the zeroes of $H_t$ become attracted to the real line as $t$ increases; in particular, if one defines $\sigma(t)$ to be the supremum of the imaginary parts of all the zeroes of $H_t$, then it is known that this quantity obeys the differential inequality
$$\frac{d}{dt} \sigma(t) \le -\frac{1}{\sigma(t)} \qquad (1)$$
whenever $\sigma(t)$ is positive; furthermore, once $\sigma(t) = 0$ for some $t$, then $\sigma(t') = 0$ for all $t' > t$. I hope to explain this in a future post (it is basically due to the attraction that a zero off the real axis has to its complex conjugate). As a corollary of this inequality, we have the upper bound
$$\Lambda \le t + \frac{1}{2} \sigma(t)^2 \qquad (2)$$
for any real number $t$. For instance, because all the non-trivial zeroes of the Riemann zeta function lie in the critical strip $\{ 0 \le \mathrm{Re}(s) \le 1 \}$, one has $\sigma(0) \le 1$, which when inserted into (2) recovers the de Bruijn bound $\Lambda \le 1/2$. The inequality (1) also gives $\sigma(t) \le \sqrt{1-2t}$ for all $0 \le t \le 1/2$. If we could find some explicit $t$ between $0$ and $1/2$ where we can improve this upper bound on $\sigma(t)$ by an explicit constant, this would lead to a new upper bound on $\Lambda$.
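(To spell out how (2) follows from (1), a reconstruction of the standard argument rather than text from the original post: multiplying (1) by $2\sigma(t)$ gives $\frac{d}{dt} \sigma(t)^2 \le -2$ while $\sigma$ is positive, hence
$$\sigma(t')^2 \le \sigma(t)^2 - 2(t'-t)$$
for $t' \ge t$; thus $\sigma$ must vanish by time $t + \frac{1}{2}\sigma(t)^2$ and stays zero afterwards, so that all zeroes of $H_{t'}$ are real from that time on, which is exactly (2).)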
Secondly, the work of Ki, Kim and Lee (based on an analysis of the various terms appearing in the expression for $H_t$) shows that for any positive $t$, all but finitely many of the zeroes of $H_t$ are real (in contrast with the $t = 0$ situation, where it is still an open question as to whether the proportion of non-trivial zeroes of the zeta function on the critical line is asymptotically equal to $1$). As a key step in this analysis, Ki, Kim, and Lee show that for any $t > 0$ and $\varepsilon > 0$, there exists a $T > 0$ such that all the zeroes of $H_t$ with real part at least $T$ have imaginary part at most $\varepsilon$. Ki, Kim and Lee do not explicitly compute how $T$ depends on $t$ and $\varepsilon$, but it looks like this bound could be made effective.
If so, this suggests a possible strategy to get a new upper bound on $\Lambda$:
- Select a good choice of parameters $t, \varepsilon > 0$.
- By refining the Ki-Kim-Lee analysis, find an explicit $T$ such that all zeroes of $H_t$ with real part at least $T$ have imaginary part at most $\varepsilon$.
- By a numerical computation (e.g. using the argument principle; see the sketch after this list), also verify that zeroes of $H_t$ with real part between $0$ and $T$ have imaginary part at most $\varepsilon$.
- Combining these facts, we obtain that $\sigma(t) \le \varepsilon$; hopefully, one can insert this into (2) and get a new upper bound for $\Lambda$.
Of course, there may also be alternate strategies to upper bound $\Lambda$, and I would imagine this would also be a legitimate topic of discussion for this project.
One appealing thing about the above strategy for the purposes of a polymath project is that it naturally splits the project into several interacting but reasonably independent parts: an analytic part in which one tries to refine the Ki-Kim-Lee analysis (based on explicitly upper and lower bounding various terms in a certain series expansion for $H_t$ – I may detail this later in a subsequent post); a numerical part in which one controls the zeroes of $H_t$ in a certain finite range; and perhaps also a dynamical part where one sees if there is any way to improve the inequality (2). For instance, the numerical “team” might, over time, be able to produce zero-free regions for $H_t$ with an increasingly large value of $T$, while in parallel the analytic “team” might produce increasingly smaller values of $T$ beyond which they can control zeroes, and eventually the two bounds would meet up and we obtain a new bound on $\Lambda$. This factoring of the problem into smaller parts was also a feature of the successful Polymath8 project on bounded gaps between primes.
The project also resembles Polymath8 in another aspect: that there is an obvious way to numerically measure progress, by seeing how the upper bound for $\Lambda$ decreases over time (and presumably there will also be another metric of progress regarding how well we can control $T$ in terms of $\varepsilon$ and $t$). However, in Polymath8 the final measure of progress (the upper bound $H$ on gaps between primes) was a natural number, and thus could not decrease indefinitely. Here, the bound will be a real number, and there is a possibility that one may end up having an infinite descent in which progress slows down over time, with refinements to increasingly less significant digits of the bound as the project progresses. Because of this, I think it makes sense to follow recent Polymath projects and place an expiration date for the project, for instance one year after the launch date, at which we will agree to end the project and (if the project was successful enough) write up the results, unless there is consensus at that time to extend the project. (In retrospect, we should probably have imposed similar sunset dates on older Polymath projects, some of which have now been inactive for years, but that is perhaps a discussion for another time.)
Some Polymath projects have been known for a breakneck pace, making it hard for some participants to keep up. It’s hard to control these things, but I am envisaging a relatively leisurely project here, perhaps taking the full year mentioned above. It may well be that as the project matures we will largely be waiting for the results of lengthy numerical calculations to come in, for instance. Of course, as with previous projects, we would maintain some wiki pages (and possibly some other resources, such as a code repository) to keep track of progress and also to summarise what we have learned so far. For instance, as was done with some previous Polymath projects, we could begin with some “online reading seminars” where we go through some relevant piece of literature (most obviously the Ki-Kim-Lee paper, but there may be other resources that become relevant, e.g. one could imagine the literature on numerical verification of RH to be of value).
One could also imagine some incidental outcomes of this project, such as a more efficient way to numerically establish zero free regions for various analytic functions of interest; in particular, the project may well end up focusing on some other aspect of mathematics than the specific questions posed here.
Anyway, I would be interested to hear in the comments below from others who might be interested in participating, or at least observing, this project, particularly if they have suggestions regarding the scope and direction of the project, and on organisational structure (e.g. if one should start with reading seminars, or some initial numerical exploration of the functions $H_t$, etc.). One could also begin some preliminary discussion of the actual mathematics of the project itself, though (in line with the leisurely pace I was hoping for), I expect that the main burst of mathematical activity would happen later, once the project is formally launched (with wiki page resources, blog posts dedicated to specific aspects of the project, etc.).
60 comments
24 January, 2018 at 5:05 pm
Jair
There seems to be a missing factor in the given formula for $H_t$.
[Corrected, thanks – T.]
24 January, 2018 at 5:44 pm
Anonymous
Yes, t is missing
24 January, 2018 at 8:41 pm
Anonymous
the differential inequality (1) can be written in a somewhat simpler form
$$\frac{d}{dt} \sigma(t)^2 \le -2$$
whenever $\sigma(t) > 0$. Which motivates improving the upper bound $\sigma(0) \le 1$ (thereby improving the current upper bound on $\Lambda$), as well as finding a corresponding lower(!) bound on the rate of decrease of $\sigma(t)^2$. Suppose e.g. that this rate is lower-bounded by some $c > 2$; then it implies that $\sigma(t)^2 \le \sigma(0)^2 - ct$, implying that $\Lambda \le \sigma(0)^2/c$ (which for $\sigma(0) = 1$ would give $\Lambda \le 1/c$) – showing the importance of finding also good lower bounds for this rate of decrease.
25 January, 2018 at 2:27 am
Anonymous
Well, from the perspective of the numerical team, I would just highlight some good practices which will enable many more people to participate. Firstly, with the previous Polymath8 project, it was somewhat difficult to access the latest code and revisions (used, for example, by Pace Nielsen). It would be a good idea to use something like Github, and everyone can keep updating code improvements there. Secondly, it would be good to have pseudocode for an algorithm to quickly calculate zeroes of H_t, something like the Riemann-Siegel formula for zeta zeroes. This way the numerical team can quickly set things up and get going. Thirdly, from a medium term perspective, as we lower t and the computations get heavy, it would be great if some big organizations can donate some cloud computing power. If not that, then maybe a distributed computing structure like Folding@Home. Of course, if we manage to get 3), the project can keep going on the numerical side even after a year. – KM
25 January, 2018 at 5:46 am
Anonymous
Although a precise horizontal(!) localization of the zeroes of $H_t$ is not really needed (according to (2), only a zero-free rectangular region is needed for an upper bound on $\Lambda$), for a smaller upper bound on $\Lambda$ it seems that smaller $t$ and $\varepsilon$ are also required (still without need for precise horizontal localization of $H_t$ zeroes), for which the dynamics of each $H_t$ zero becomes more dependent also on the precise horizontal locations of the “nearby attracting zeroes” (whose horizontal distribution becomes “less regular” for smaller $t$ values).
26 January, 2018 at 12:55 am
Anonymous
Well for those comfortable with Github, I have created a GitHub project https://github.com/km-git-acc/dbn_upper_bound
Please edit it as necessary. I have removed the restriction which allows only collaborators to edit things. And if you face any issues, do let me know.
26 January, 2018 at 8:27 am
Terence Tao
Thanks for this! It has been a while since I have used git but I will try to see if I can access it. In the meantime, I have started a wiki page at http://michaelnielsen.org/polymath1/index.php?title=De_Bruijn-Newman_constant and have added a link to this repository.
It may take a while for the analysis side of the project to get to the point where it really needs the numerical side, but I think it is worth trying to get some numerical activity from the beginning (and it is here where I will need the most help, as I do not have extensive experience with numerics, particularly involving contour integration and locating of zeroes). Perhaps one starting point will be to see if we can get enough preliminary numerics on $H_t$ to numerically verify the interesting claims in Theorem 1.4 of Ki, Kim, and Lee, which in the current notation asserts that for any fixed $t > 0$, the zeroes of $H_t$ eventually become all real and roughly equally spaced, and also that the number $N_t(T)$ of zeroes with real part between 0 and $T$ should obey the asymptotic
$$N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log \frac{T}{4\pi} + O(1).$$
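To make the comparison concrete, here is a tiny helper for the main term of this predicted asymptotic (a hedged sketch based on my reading of the formula above, with the $O(1)$ term simply dropped):

import math

def N_t_prediction(t, T):
    # main term of the conjectured zero-counting asymptotic (O(1) error dropped)
    L = math.log(T / (4 * math.pi))
    return (T / (4 * math.pi)) * L - T / (4 * math.pi) + (t / 16) * L

# e.g. N_t_prediction(0, 100) is about 8.5, consistent (up to O(1)) with the
# ~10 zeroes of H_0 below 100 reported later in this thread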
26 January, 2018 at 11:36 am
Anonymous
Thanks. I have updated the repo with the Polymath wiki link. I have also started some Python scripting and will later keep working on speed optimizations, areas where I think I will be more useful. H_t for real values of z is much easier to start with, and to compare against the expected N_t(T), so I will begin with that. It would be great to get more help on contour integration.
On checking some values of the phi decay function, it does decay quite rapidly.
For example, I got these values for
u = 0.001, 0.1, 1:
phi_decay (approx) = 0.44668, 0.30427, 5.1*10^-70
and the values seem to become insensitive to increasing n in the infinite sum. H_t may be well approximated by a low upper limit in the integral.
27 January, 2018 at 12:49 pm
TS
Let $t \ge 0$ and call $P(t)$ the property “all but finitely many zeros of $H_t$ are real and simple”. I find it very striking that $P(t)$ is known to be true for all $t > 0$ but not known whether true or not for $t = 0$. Of course, in dynamical systems theory when one varies parameters some bifurcations can occur which change the local and global properties of the dynamics (incidentally, Scholarpedia is better than Wikipedia on these topics). But since, if I understand well, $t = 0$ is meant to be the low energy regular limit, decreasing $t$ to zero shouldn’t worsen things and thus should conserve such a property. So there might be a topological way of seeing that $P(0)$ is true. (There are a few sentences at the beginning of the book Topological Invariants of Plane Curves and Caustics by V. I. Arnold that might be helpful in that regard, but it is over my head.)
26 January, 2018 at 10:48 am
Terence Tao
I think I managed to download git on my machine, clone the repository, and send a test push request, so I think it is all working :)
Perhaps a very preliminary warmup exercise would be to numerically verify the fact that the function $\Phi$ is indeed even (as proven in a previous blog post), and that the zeroes of $H_0$ are twice the ordinates of the zeroes of the Riemann zeta function (in particular, the first zero should be approximately at $2 \times 14.1347\ldots \approx 28.27$).
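A quick sketch of this warmup check (my own, not part of the comment; it reuses the truncated definitions from earlier in the thread):

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def Phi(u, N=100):
    n = np.arange(1, N + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

# evenness of Phi: Phi(-u) should agree closely with Phi(u)
for u in (0.05, 0.2, 0.4):
    print(u, Phi(u), Phi(-u))

# first real zero of H_0: expect about 2 * 14.1347... = 28.2695...
H0 = lambda x: quad(lambda u: Phi(u) * np.cos(x * u), 0, 6.0)[0]
print(brentq(H0, 28.0, 28.5))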
I will try to write an initial Polymath post over the weekend, discussing both the numerical goals and the analytic goals (presumably starting with digesting relevant parts of the KKL paper) to launch the project. I was initially thinking of having two separate posts, one for the analytical side and one for the numerical side, but it looks like the rate of comments will be on the light side for a polymath project, and it should hopefully be possible to follow the progress even if both discussions are merged into a single thread.
26 January, 2018 at 10:58 am
Terence Tao
Actually, I was only able to commit to a local repository but was unable to push to the master (got the error message “remote: Permission to km-git-acc/dbn_upper_bound.git denied … fatal: unable to access ‘https://github.com/km-git-acc/dbn_upper_bound.git/’: The requested URL returned error: 403”). Maybe there is a permissions issue? (Sorry, I am very rusty with git)
EDIT: ssh access also fails with a different error (git@github.com: Permission denied (publickey)), but this may be a problem from my end. (Now I remember why I stopped using git…)
SECOND EDIT: Using github for Windows GUI, I can push to the master branch of my own test repository under my github account (user name teorth), but not to the dbn_upper_bound one (“Authentication failed. You may not have permission to access the repository or the repository may have been archived. Open options and verify that you’re signed in with an account that has permission to access this repository.”). So I suspect that the dbn_upper_bound repo can only grant read access and not write access to random github users at present.
26 January, 2018 at 11:48 am
Anonymous
I will look into the permissions issue. It has been discussed here https://stackoverflow.com/questions/17857283/permission-denied-error-on-github-push and could be due to several reasons.
Also if it is a new file, it may be easier to go to the repo link in the browser and copy paste the content into a new file.
26 January, 2018 at 12:54 pm
Terence Tao
OK, I have followed the advice in that link of forking the repository, committing to the fork, and sending a pull request back to the master, hopefully that works.
26 January, 2018 at 12:50 pm
Anonymous
This is what I got for H_0 as I took z from 20 to 28.26 to 29, and it seems to be working. There is a sign change after 28.26 indicating a zero nearby. The computation can be made sharper still which I will attempt later.
z, H_0, error_bound_in_H_0_estimation
(20, 0.004745981288866963, 3.369905401820819e-12)
(24, 0.0011029563388465873, 9.344057770371492e-11)
(26, 0.0003631588937652417, 1.5325919331364568e-10)
(28, 2.5161805529409477e-05, 2.2423109376223938e-10)
(28.06, 1.922635521388559e-05, 2.2649502499222374e-10)
(28.12, 1.3484644556851843e-05, 2.287850213097964e-10)
(28.18, 7.93277306917266e-06, 2.310924854254104e-10)
(28.24, 2.5668886862708826e-06, 2.3343800503727863e-10)
—- (28.26, 8.189321953356443e-07, 2.342308777492086e-10)
(28.28, -9.089217903637239e-07, 2.35030161131744e-10)
(28.4, -1.0861641032745077e-05, 2.3999972582205986e-10)
(29, -5.085338498640488e-05, 2.79882348200311e-10)
26 January, 2018 at 12:55 pm
Anonymous
It’s somewhat late here so signing off. Please give your Github handle and I will add you as a collaborator by tomorrow.
26 January, 2018 at 12:58 pm
Anonymous
Saw your handle teorth in your post just now :), and added you as a collaborator.
26 January, 2018 at 1:01 pm
Anonymous
Have merged your pull request as well.
26 January, 2018 at 1:07 pm
Terence Tao
Great! Looks like it is all working now… still have to test if I can do an upstream pull for any new files you have, but I think I see how to do it.
26 January, 2018 at 1:23 pm
Bill Bauer
FWIW, github users often fork repositories to their own accounts, push changes to their own forks, and issue pull requests from their forks to the original.
Also, FWIW, I’ll try to maintain a Julia port of the code which appears in KM’s repository. My repo is here https://github.com/WilCrofter/DBNUpperBound.jl. It’s essentially empty at the moment.
26 January, 2018 at 2:19 pm
Steven Heilman
In case anyone prefers Matlab, here is a function that plots Ht(z). The for loops make the plotting inefficient, but it’s fine for now I suppose.
—————————
function dbn_numerics
%%% this function plots the real and imaginary parts of the function H_t as
%%% two surfaces. Ht is a function of z=(zreal,zimag).
%%% Htrealval is the real part of Ht
%%% Htimagval is the imaginary part of Ht
t=0; % parameter for the function Ht
numpts=100; % number of points on a one-dimensional axis
%r=10;
%zreal=linspace(-r,r,numpts);
%zimag=linspace(-r,r,numpts);
zreal=linspace(26,30,numpts); % real and imaginary parts of z
zimag=linspace(-2,2,numpts);
Htrealval=zeros(numpts,numpts);
Htimagval=zeros(numpts,numpts);
for i=1:numpts
for j=1:numpts
Htrealval(i,j)=RealPartHt(t,zreal(i),zimag(j));
Htimagval(i,j)=ImagPartHt(t,zreal(i),zimag(j));
end
end
surf(ones(numpts,1)*zreal,zimag'*ones(1,numpts),Htrealval);
hold on
surf(ones(numpts,1)*zreal,zimag'*ones(1,numpts),Htimagval);
xlabel('Real z axis');
ylabel('Imaginary z axis')
end
%we always denote z=x+iy as z=(x,y)
% using cos(x+iy) = cos(x)cosh(y) - i sin(x)sinh(y)
function h=RealPartHt(t,x,y)
M=20; % integrate from 0 to M in the definition of Ht
tol=10^-8; %relative tolerance threshold for integral
integrateme=@(u)Phi(u).*exp(t.*(u.^2)).*cos(x.*u).*cosh(y.*u); % function to integrate
h=integral(integrateme,0,M,'RelTol',tol);
end
%we always denote z=x+iy as z=(x,y)
% using cos(x+iy) = cos(x)cosh(y) - i sin(x)sinh(y)
function h=ImagPartHt(t,x,y)
M=20; % integrate from 0 to M in the definition of Ht
tol=10^-8; %absolute tolerance threshold for integral
integrateme=@(u)Phi(u).*exp(t.*(u.^2)).*sin(x.*u).*sinh(y.*u); % function to integrate
h=-integral(integrateme,0,M,'AbsTol',tol);
end
function phi=Phi(u)
N=100; % truncate the infinite sum defining Phi at N terms
n=(1:N)';
phivec= (n.^(2)).*(exp(5.*u)).*(2.*(pi^2).*(n.^(2)).*(exp(4.*u)) - 3*pi).*exp(-pi.*(n.^(2)).*exp(4.*u));
phi=sum(phivec);
end
26 January, 2018 at 2:55 pm
Brad
I’ve just signed up for github and started watching — I don’t have any experience using it, but I don’t know that my main contributions will be on the numerical side either. (That said, I will be very interested to learn more about this area; I’ve used e.g. tables of zeta zeros before but have no experience with the actual large-scale computation of them.)
Terry mentioned in a comment that one interesting numerical exercise would be to numerically verify some of the results of Ki, Kim, and Lee regarding the zeros of H_t(z) for t > 0. It might also be interesting at the same time to numerically locate some of the zeros of H_t(z) for some fixed negative t’s. In particular it seems reasonable enough to suspect that for negative t, some zeros with large real part will have imaginary parts of fairly large order, although we don’t prove this in our paper.
28 January, 2018 at 11:10 am
dhjpolymath
https://help.github.com/articles/permission-levels-for-a-user-account-repository/ provides information on permissions for repositories.
25 January, 2018 at 5:16 am
The hard road to the proof of the Riemann hypothesis | Ciencia | La Ciencia de la Mula Francis
[…] A pity, but the new result of Tao and Rodgers closes doors rather than opening them. The paper is Brad Rodgers, Terence Tao, “The De Bruijn-Newman constant is non-negative,” arXiv:1801.05914 [math.NT]; more expository information (though only for mathematicians) in Terence Tao, “The De Bruijn-Newman constant is non-negative,” What’s new, 19 Jan 2018. We know that 0 ≤ Λ < 1/2; reducing this upper bound is the goal of the new polymath project Polymath15, as Terence Tao explains in “Polymath proposal: upper bounding the de Bruijn-Newman constant,” What’s new, 24 Jan 2018. […]
25 January, 2018 at 8:37 am
Terence Tao
Two further mathematical comments. Firstly, it may be possible to leverage the existing numerical work on the Riemann zeta function. The work of Gourdon et al. verifies that the first 10^13 zeroes of zeta are on the critical line, which should show that H_0 has all zeroes real up to real part 5 x 10^12 or so (from a quick and dirty application of the Riemann-von Mangoldt formula). Unfortunately, we don’t understand the dynamics of the ODE
$$\partial_t z_j(t) = 2 \sum_{k \neq j} \frac{1}{z_j(t) - z_k(t)}$$
well enough yet to directly use this to say much about the zeroes $z_j(t)$ of $H_t$ for $t > 0$ (the ODE does not prevent the zeroes from reaching unbounded velocities, and so in principle a lot of non-real zeroes could enter the region $\{ 0 \le \mathrm{Re}(z) \le 5 \times 10^{12} \}$ in finite time). But maybe we will be able to limit this influx-of-zeroes scenario somehow.
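As an illustration of these dynamics, here is a toy sketch (my own, not from the comment): it truncates to four zeroes and uses a crude forward Euler integrator, ignoring the principal-value issues of the genuine infinite sum.

import numpy as np

def velocity(z):
    # dz_j/dt = 2 * sum_{k != j} 1 / (z_j - z_k)
    return np.array([2 * sum(1.0 / (z[j] - z[k]) for k in range(len(z)) if k != j)
                     for j in range(len(z))])

def evolve(zeros, t_end, dt=1e-4):
    z = np.array(zeros, dtype=complex)
    for _ in range(int(round(t_end / dt))):
        z = z + dt * velocity(z)   # crude forward Euler step
    return z

# a complex-conjugate pair flanked by two real zeroes: the pair should drift
# towards the real axis (the attraction behind inequality (1)), while the
# real zeroes repel each other horizontally
print(evolve([3 + 0.5j, 3 - 0.5j, -1.0, 7.0], t_end=0.1))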
The second comment is to describe at a very high level how the Ki-Kim-Lee analysis works.
By Fubini’s theorem, one has an expansion
$$H_t(x) = \sum_{n=1}^\infty H_{t,n}(x),$$
where $H_{t,n}$ is the oscillatory integral
$$H_{t,n}(x) := \int_0^\infty e^{tu^2} (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}) \cos(xu)\, du.$$
As it turns out, if one uses methods such as the saddle point method, one can show that for $t$ positive and as the real part of $x$ gets large, the first term $H_{t,1}$ of this series dominates all the others, and also stays away from zero (after renormalising away some decay factors) if one is away from the real axis (actually it would be nice to have some numerical verification of this). This is basically how Ki-Kim-Lee get strong control on $H_t$ for $t$ positive and the real part of $x$ large. Interestingly, the control is a lot better asymptotically for $t > 0$ than for the much better studied $t = 0$ case. Presumably if we wanted to make the Ki-Kim-Lee analysis effective, we would begin by getting explicit upper bounds on the tail terms $H_{t,n}$, $n \ge 2$ (and lower bounds on $H_{t,1}$).
ADDED LATER: it looks like I misread the Ki-Kim-Lee analysis slightly (I was a bit confused because they have a different normalisation convention to the one here). They do expand out $H_t$ as a series in which a single oscillatory integral dominates, but the series expansion they use is not quite the one I described above, and the main term looks somewhat different. I will have to go through the paper more carefully to figure out exactly what is going on, probably a topic for a future blog post.
25 January, 2018 at 3:57 pm
Anonymous
In order to better understand the ODE of the “zero attraction dynamics”, it would be interesting to find the invariants (e.g. “conservation laws”) for this ODE dynamics (similar to the Newtonian gravitational dynamics).
There are well-developed methods (e.g. Cartan’s equivalence method, or differential algebraic methods) to find the invariants of a given nonlinear system of ODE or PDE.
25 January, 2018 at 4:44 pm
Terence Tao
The ODE is a gradient flow (at least in the case of real zeroes), so one wouldn’t expect too many invariants. In the case of real zeroes, there are at least some monotonicity formulae: I discuss these in the previous post https://terrytao.wordpress.com/2017/10/17/heat-flow-and-zeroes-of-polynomials/ . However, for complex zeroes a lot of the monotonicity properties break down. Currently the only monotonicity that seems to be available is the monotonicity of $\sigma(t)$, but there may well be other quantities whose time derivative is somehow under control.
26 January, 2018 at 12:08 pm
Anonymous
It seems that it is quite simple to identify the main terms controlling the horizontal and vertical components of the “relative velocity” dynamics of two (sufficiently close) simple zeros of $H_t$, as follows:
Let $z_j, z_k$ be two distinct simple zeros of $H_t$. By the above ODE, a simple calculation gives
$$\frac{d}{dt}(z_j - z_k) = \frac{4}{z_j - z_k} + E_{j,k},$$
where the remainder term $E_{j,k}$ (collecting the interactions with all the other zeros) is locally analytic in $z_j, z_k$, possibly bounded, and may be estimated (by partial summation) from a sufficiently good estimate on the distribution of the $H_t$ zeros.
Denote by $x_j, y_j$ the real and imaginary parts of each zero $z_j$ of $H_t$. We have (using the above expressions) the following expression for the “horizontal relative dynamics” between two zeros:
$$\frac{d}{dt}(x_j - x_k)^2 = \frac{8 (x_j - x_k)^2}{|z_j - z_k|^2} + \cdots$$
Observe that the above “main term” is non-negative (explaining the “horizontal repulsion” between nearby zeros), bounded(!) by 8, and dependent only(!) on the zeros $z_j, z_k$, while the remaining terms depend also on the other zeros but are still negligible (with respect to the “main term”) for sufficiently small $|z_j - z_k|$.
Similarly, we have for the “vertical relative dynamics” between two zeros:
$$\frac{d}{dt}(y_j - y_k)^2 = -\frac{8 (y_j - y_k)^2}{|z_j - z_k|^2} + \cdots$$
Observe that the above (vertical!) main term is non-positive (explaining the “vertical attraction” between nearby zeros), bounded(!) in magnitude by 8, and dependent only(!) on the zeros $z_j, z_k$. As before, the remaining terms depend also on the other zeros but are negligible for nearby zeros.
Remark: It seems that this gives good control on the $H_t$ zero dynamics, both horizontally and vertically. The boundedness of the horizontal dynamics can help to control possible horizontal zero influx into a zero-free region of $H_t$ (as you mentioned above), while the boundedness of the vertical dynamics may help to establish a good correspondence between the zeros of $H_t$ and those of $H_0$.
27 January, 2018 at 9:19 am
Terence Tao
In principle this is what happens to a zero that is very close to another zero, but the situation could be more complicated than this if there is a cluster of three or more zeroes that are very close to each other. In that case one cannot just pick out one neighbour $z_k$ for each zero $z_j$ to designate as the “dominant” interaction and try to treat all other interactions as a lower order background term, because this term can be comparable to or stronger than the main term. The worst case scenario is if some tight cluster of zeroes, starting beyond the range of numerics (e.g. beyond the first 10^13 zeroes of zeta), somehow begins to acquire a massive horizontal velocity and zooms towards the origin so that by, say, t = 0.1 it ends up at a quite low height. There should be some way to prevent this (some sort of local conservation of center of mass, perhaps?) but it probably involves analysing multiple interactions, not just that of a pair. For similar reasons it could be difficult to numerically model the ODE to high accuracy once a zero can get close to many other zeroes simultaneously.
27 January, 2018 at 11:27 am
Anonymous
It seems that this requires an extended analysis of the dynamics of a cluster of zeros (both for its relative zero velocities as well as its “center of mass”), with more distant zeros (with some information on their distribution) in the background.
25 January, 2018 at 8:42 am
Anonymous
Another analytic-probabilistic approach to study the functions $H_t$ is to use the backward heat equation satisfied by them and to search for a possible stochastic diffusion process which generates this equation (and the functions $H_t$ as some related “probability density functions”). Such a probabilistic interpretation of $H_t$ can help to use more probabilistic tools to study its properties (e.g. by using asymptotic random matrix theory, the Feynman-Kac formula, etc.)
25 January, 2018 at 10:56 am
Anonymous
It is even possible that $\Lambda = 0$ (i.e. RH is true) and that for each $t > 0$ there is a simple Hermitian operator (possibly related to a probabilistic interpretation of $H_t$) having the $H_t$ zeros as its spectrum (i.e. realizing the Hilbert-Polya approach for $H_t$) but no such operator for $t = 0$ (which may explain the failed attempts so far to find it). Since the zeros of $H_t$ converge to those of $H_0$, it is possible that there is a proof of RH via the functions $H_t$ (whose zeros are more regularly distributed) but not via direct attack on $H_0$ itself!
Such a “homotopic approach” to studying $H_0$ via $H_t$ recalls similar methods used in the past to solve big problems (e.g. de Branges’ solution – via Loewner’s PDE with flow parameter – of the Bieberbach conjecture, and Perelman’s solution – via Hamilton’s Ricci flow PDE – of the Poincare conjecture).
26 January, 2018 at 2:14 pm
Anonymous
Is there a p-adic reformulation of this using local-global correspondence?
25 January, 2018 at 2:54 pm
?
Is this going to be hard analytic number theory?
25 January, 2018 at 11:17 pm
A new polymath proposal (related to the Riemann Hypothesis) over Tao’s blog | The polymath blog
[…] A new polymath proposal over Terry Tao’s blog who wrote: “Building on the interest expressed in the comments to this previous post, I am now formally proposing to initiate a “Polymath project” on the topic of obtaining new upper bounds on the de Bruijn-Newman constant . The purpose of this post is to describe the proposal and discuss the scope and parameters of the project.” […]
26 January, 2018 at 10:14 pm
Anonymous
There are some updates
There is a function implementing H_t for real z values (Ht_real_compute.py)
These are the first few approx zero locations obtained for different values of t
t, first few zeroes
0 — 28.25, 42.05, 50.05, 60.85, 65.85, 75.15, 81.85, 86.65, 96.05, 99.55
0.1 — 28.25, 41.95, 49.95, 60.75, 65.85, 75.05, 81.75, 86.65, 95.85, 99.55
0.2 — 28.15, 41.85, 49.95, 60.65, 65.75, 75.05, 81.65, 86.55, 95.75, 99.45
0.3 — 28.05, 41.85, 49.85, 60.55, 65.75, 74.95, 81.55, 86.55, 95.65, 99.45
0.4 — 28.05, 41.75, 49.75, 60.45, 65.65, 74.85, 81.45, 86.45, 95.55, 99.35
0.5 — 27.95, 41.65, 49.75, 60.35, 65.65, 74.85, 81.45, 86.45, 95.45, 99.35
I varied t and z in steps of 0.1, which is too coarse-grained, but it was an initial run. There seem to be slight deformations, as expected, in the locations of zeroes as t varies.
Also I am now putting all the python codes in the python folder, and have created empty folders for julia and matlab, if you want to use other languages.
Will also look for real zeroes for negative t values, and then start thinking about searching for potential imaginary zeroes.
-KM
27 January, 2018 at 3:37 am
Anonymous
It would also be interesting to check numerically if the dependence of each zero on the parameter $t$ is (approximately) according to its ODE dynamics (which depends mainly on the attraction of its neighboring zeros).
27 January, 2018 at 7:29 am
Anonymous
A simple and efficient method to compute the zeros $z_j(t)$ of $H_t$ (for each given $t > 0$) is to start with the known zeros $z_j(0)$ of $H_0$ and, for each such j-th zero, to “follow its trajectory” through a discrete sequence $0 = t_0 < t_1 < \dots < t_n = t$ of values of the parameter $t$, with increments sufficiently small to ensure local convergence to the zero $z_j(t_{i+1})$ (starting from $z_j(t_i)$) of a fast local zero-finding (e.g. Newton-Raphson) algorithm.
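A rough Python sketch of this continuation idea (my own, with ad hoc truncations and step sizes; scipy’s newton without a supplied derivative falls back to the secant method):

import numpy as np
from scipy.integrate import quad
from scipy.optimize import newton

def Phi(u, N=100):
    n = np.arange(1, N + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, x, U=6.0):
    return quad(lambda u: np.exp(t * u**2) * Phi(u) * np.cos(x * u), 0, U)[0]

def follow_zero(x0, t_end, dt=0.05):
    # track one real zero trajectory x_j(t), polishing at each t step
    x, traj = x0, []
    for t in np.arange(0.0, t_end + dt / 2, dt):
        x = newton(lambda s: H(t, s), x)   # secant iteration from previous zero
        traj.append((round(t, 3), x))
    return traj

# first zero: starts at ~2*14.1347 and should drift as in the tables above
print(follow_zero(28.2695, 0.5))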
27 January, 2018 at 8:59 am
Anonymous
I was thinking about using the regions around the H_0 zeroes, but this could be much faster. An interesting question is whether I would miss any zeroes of H_t using this method, or whether every H_t zero (at least the real ones) lies on a trajectory starting at a zeta zero. From the N_t(T) estimate, where a t·log T factor gets added, and since as t increases the height of the nth zero seems to decrease, it does seem plausible.
27 January, 2018 at 10:46 am
Anonymous
Since $H_t(z)$ is analytic in both $t$ and $z$, each simple (not necessarily real) zero of $H_{t_0}$ has a neighborhood for which there is only one zero $z_j(t)$ of $H_t$, which remains simple in this neighborhood for sufficiently small $|t - t_0|$. Moreover, this (local) “zero trajectory” $z_j(t)$ is analytic (except at the crossing points of trajectories of other zeros of $H_t$).
So, if it is known that for some given (large) $T$ all zeros of $H_0$ with real part in $[0, T]$ are real and simple, so are the zeros of $H_t$ for sufficiently small $t > 0$. An interesting problem is to estimate this “sufficiently small” $t$ for such given (large) $T$ (e.g. for the first 10^{13} zeros of $\zeta$).
27 January, 2018 at 9:08 am
Terence Tao
Thanks for this! The t=0 data seems to line up with (twice) the ordinates 14.13, 21.02, 25.01, 30.42, 32.93, 37.58 of the first few zeroes of zeta (see e.g. http://mathworld.wolfram.com/RiemannZetaFunctionZeros.html ), and there seems to be a slight drift towards the origin as t increases, which makes sense because the zeroes (which repel each other) get denser as one moves away from the origin. So everything seems to check out so far. The dynamics look a bit slower than I expected; maybe more interesting things happen further out along the real axis, particularly when we start seeing zeroes that are initially close to each other (in particular, I am curious to see how the zeroes are supposed to align themselves towards an arithmetic progression locally, as is predicted by theory).
27 January, 2018 at 11:13 am
KM
Using a root finding algorithm, we can now get much better estimates of the zero locations for H_t (those which have a nearby H_0 zero counterpart).
For example, for t=0.1 we get the first 10 zeroes:
t, n, nth zero
0.1, 0, 28.2115092359
0.1, 1, 41.9686550793
0.1, 2, 49.9630971635
0.1, 3, 60.7545079573
0.1, 4, 65.8233207833
0.1, 5, 75.0885948458
0.1, 6, 81.751928485
0.1, 7, 86.6088478461
0.1, 8, 96.0103491423
0.1, 9, 99.547664956
Also, for the first 5 zeroes, I used a t step size of 0.001 and got a pretty plot (the data used for the plot is also kept there)
I think we are now close to analyzing zero spacings for H_t and comparing zero counts against N_t(T).
27 January, 2018 at 11:46 am
Anonymous
Interestingly, in the plot, the slopes $dx_j/dt$ of the zero trajectories (in particular for the larger zeroes) are almost constant – indicating that the “acceleration” of the first zeros’ dynamics (due to relatively distant “neighboring zeros”) is quite small.
27 January, 2018 at 4:10 pm
Polymath15, first thread: computing $H_t$, asymptotics, and dynamics of zeroes | What's new
[…] Bruijn-Newman constant . Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
27 January, 2018 at 4:14 pm
Terence Tao
I have now created a “research thread” for this project (which is now christened Polymath15) over at https://terrytao.wordpress.com/2018/01/27/polymath15-first-thread-computing-h_t-asymptotics-and-dynamics-of-zeroes/ to try to organise the mathematical discussion a bit better and to also record some predicted asymptotics for $H_t$ that one can get from Laplace’s method and contour shifting. This current post can be used for all the non-research discussion on the project, such as organisational issues or commentary on the project from external observers.
2 February, 2018 at 8:55 pm
Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation | What's new
[…] this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
12 February, 2018 at 3:08 pm
Polymath15, third thread: computing and approximating H_t | What's new
[…] this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
14 February, 2018 at 8:03 am
K
I have seen a paper for the hot-spot conjecture on euclidean triangles (https://arxiv.org/abs/1802.01800), which was the topic of polymath 7. Does this proof use insights from the polymath7 project (similar to your discrepancy proof and polymath5)? I wonder how many other polymath projects, which have not (yet) written up the results, have led to interesting results that could actually be compiled into an article? I assume for future researchers, it is much easier to follow short final write-ups than the whole thought-process that is presented in the discussions of the projects? Is it worthwhile and feasible to try compiling some of the results? Or at least to write closing statements?
14 February, 2018 at 9:18 am
Terence Tao
We discussed this a little for Polymath7, but as it turned out the partial results we had weren’t really compelling enough to write up. It seems that the new paper did not directly rely on the Polymath7 results, though there are of course some similarities in some of the tools used, which is not so surprising given that they were attacking exactly the same problem.
For more recent polymaths we have instituted an end date to the project, at which point it may well be reasonable to write up some partial results. Tim Gowers did write up something like this for Polymath5 (even though this project never formally came with an end date).
2 March, 2018 at 10:12 am
Polymath15, fifth thread: finishing off the test problem? | What's new
[…] , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
2 March, 2018 at 4:21 pm
Anonymous
So (non-research post) how is this project looking so far? Will finishing the “test problem” turn straightforwardly into a rigorous proof that Λ<0.48, and does that by itself seem like a candidate for an initial D.H.J. Polymath publication? What happens afterwards, i.e. is there more in sight than getting it to 0.47 or 0.46 through optimizing the technical machinery? Just wondering, thanks.
2 March, 2018 at 4:38 pm
Anonymous
this train isn’t slowing down until anonymous proves the Riemann hypothesis
3 March, 2018 at 10:01 am
Terence Tao
Solving the test problem will come close to proving $\Lambda \le 0.48$; the test problem shows that $H_t(x+iy)$ is non-zero when $t = y = 0.4$, and to prove $\Lambda \le 0.48$ we need to strengthen this to showing that $H_t(x+iy)$ is non-zero whenever $t = 0.4$ and $y \ge 0.4$ (another way to do it would be to establish this zero-free region for $t = 0.4$, $y \ge 0.4$, and $x \le T$ for some moderately large $T$, with the asymptotic analysis handling the remaining range of $x$). We haven’t discussed this much yet, but I am hoping that tools like the argument principle and Rouche’s theorem will let us do this, possibly by refining the numerical calculations of $H_t(x+iy)$ for small values of $x$ to be able to not just keep $H_t$ away from the origin, but also to compute how many times it winds around the origin.
That would be a major milestone for the project… where we go next I think depends on how easy it was to get the bound $\Lambda \le 0.48$. If it was only barely within the capability of our method, and further significant progress would require making the argument much more complicated, there might be a case for declaring victory and perhaps handing the baton on to some future research group who might be better placed to come up with a different approach. But if there is significant “room” in the calculation then it would of course be natural to try to push further. One reason for doing this is that it may force us to find better methods, or we may happen upon some numerical phenomenon that is independently interesting.
18 March, 2018 at 8:37 pm
Polymath15, sixth thread: the test problem and beyond | What's new
[…] , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
28 March, 2018 at 1:14 pm
Polymath15, seventh thread: going below 0.48 | What's new
[…] , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
17 April, 2018 at 3:47 pm
Polymath15, eighth thread: going below 0.28 | What's new
[…] , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
4 May, 2018 at 2:23 pm
Polymath15, ninth thread: going below 0.22? | What's new
[…] , continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
6 September, 2018 at 7:19 pm
Polymath15, tenth thread: numerics update | What's new
[…] continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
28 December, 2018 at 11:12 am
Polymath 15, eleventh thread: Writing up the results, and exploring negative t | What's new
[…] continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki […]
28 February, 2019 at 5:30 am
Dan Romik Studies the Riemann’s Zeta Function, and Other Zeta News. | Combinatorics and more
[…] task of polymath15 (proposed here, launched here, last (11th) post here) was to use current analytic number theory technology that […]
11 September, 2022 at 8:43 am
nt.number theory - Fractional partial derivatives and integrals of De Bruijn's $H_t(z)$-function. Does a simpler form exist for the $z$ derivative/integral? Answer - Lord Web
[…] 2018/2019, the polymath 15 project managed to successfully reduce the upper bound of the De Bruijn-Newman constant and the work […]