This is the tenth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant Λ, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
Most of the progress since the last thread has been on the numerical side, in which the various techniques to numerically establish zero-free regions for H_t have been streamlined, made faster, and extended to larger heights than were previously possible. The best bound for Λ now depends on the height to which one is willing to assume the Riemann hypothesis. Using the conservative verification up to height (slightly larger than) 3 × 10^10, which has been confirmed by independent work of Platt et al. and Gourdon-Demichel, the best bound remains at Λ ≤ 0.22. Using the verification up to height 2.4 × 10^12 claimed by Gourdon-Demichel, this improves slightly, and if one assumes the Riemann hypothesis up to a still larger height the bound improves further, contingent on a numerical computation that is still underway. (See the table below the fold for more data of this form.) This is broadly consistent with the expectation that the bound on Λ should be inversely proportional to the logarithm of the height at which the Riemann hypothesis is verified.
As progress seems to have stabilised, it may be time to transition to the writing phase of the Polymath15 project. (There are still some interesting research questions to pursue, such as numerically investigating the zeroes of H_t for negative values of t, but the writeup does not necessarily have to contain every single direction pursued in the project. If enough additional interesting findings are unearthed then one could always consider writing a second paper, for instance.)
Below the fold is the detailed progress report on the numerics by Rudolph Dwars and Kalpesh Muchhal.
— Quick recap —
The effectively bounded and normalised Riemann-Siegel-type asymptotic approximation for H_t enables us to explore its complex zeros and to establish zero-free regions. By choosing a promising combination of X, t_0 and y_0, and then numerically and analytically showing that the right-hand side doesn’t vanish in the rectangular-shaped “canopy” (or at a point on the blue hyperbola), a new DBN upper bound will be established. Summarized in this visual:
— The Barrier approach —
To verify that H_t doesn’t vanish in such a rectangular strip, we have adopted the so-called Barrier approach, which comprises three stages (illustrated in a picture below):
- Use the numerical verification work of the RH already done by others. Independent teams have now verified the RH up to height 3 × 10^10, and a single study took it up to 2.4 × 10^12. This work allows us to rule out, up to a certain x, that a complex zero has flown through the critical strip into any defined canopy. To also cover the x-domains that lie beyond these known verifications, we have to assume the RH up to the corresponding height. This will then yield a Λ bound that is conditional on this assumption.
- Complex zeros could also have horizontally flown into the ‘forbidden tunnel’ at high velocity. To numerically verify this hasn’t occurred, a Barrier needs to be introduced at a location X and checked for any zeros having flown around, through or over it.
- Verifying the range [N_a, N_b] (or the corresponding x range) is done through testing that the lower bound of the approximation always stays higher than the upper bound of the error terms. This has to be done numerically up to a certain point N_b, after which analytical proof takes over.
So, new numerical computations are required to verify that both the Barrier at X and the non-analytical part of the N range are zero-free for a certain choice of t_0 and y_0.
— Verifying the Barrier is zero-free —
So, how to numerically verify that the Barrier is zero-free?
- The Barrier is required to have two nearby screens at X and X+1 to ensure that no complex zeros could fly around it. Hence, it has the 3D structure [X, X+1] × [y_0, 1] × [0, t_0].
- For the numerical verification that the Barrier is zero-free, it is treated as a ‘pile’ of rectangles. For each rectangle the winding number is computed using the argument principle and Rouché’s theorem.
- For each rectangle, the number of mesh points required is decided using the x-derivative, and the t-step is decided using the t-derivative.
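The winding-number computation described above can be sketched briefly. The following Python toy is illustrative only — the project’s actual code is in pari/gp and ARB, and the test functions below are hypothetical polynomial stand-ins for H_t. It walks a fixed mesh around a rectangle and accumulates argument changes; by the argument principle the total divided by 2π counts the zeros inside, assuming the mesh is fine enough:

```python
import cmath

def rect_contour(x0, x1, y0, y1, n):
    """n mesh points per side, traversed counterclockwise."""
    pts = []
    for k in range(n):
        pts.append(complex(x0 + (x1 - x0) * k / n, y0))   # bottom: left -> right
    for k in range(n):
        pts.append(complex(x1, y0 + (y1 - y0) * k / n))   # right: bottom -> top
    for k in range(n):
        pts.append(complex(x1 - (x1 - x0) * k / n, y1))   # top: right -> left
    for k in range(n):
        pts.append(complex(x0, y1 - (y1 - y0) * k / n))   # left: top -> bottom
    return pts

def winding_number(f, x0, x1, y0, y1, n=1000):
    """Number of zeros of f inside the rectangle, via accumulated argument
    changes; assumes the mesh is fine enough that the argument changes by
    less than pi on each step."""
    vals = [f(z) for z in rect_contour(x0, x1, y0, y1, n)]
    total = 0.0
    for i in range(len(vals)):
        total += cmath.phase(vals[(i + 1) % len(vals)] / vals[i])
    return round(total / (2 * cmath.pi))
```

A rectangle is certified zero-free when the computed winding number is 0 (with the mesh-fineness assumption justified by the derivative bounds mentioned above).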
Optimizations used for the barrier computations
- To efficiently calculate all required mesh points of H_t on the rectangle sides, we used a pre-calculated stored sum matrix that is Taylor expanded in two directions. The resulting polynomial is used to calculate the required mesh points. The dimensions of the stored sum matrix are the numbers of Taylor expansion terms required to achieve the required level of accuracy (in our computations we used 20 digits and an algorithm to automatically determine these dimensions).
- We found that a more careful placement of the Barrier at a well-chosen X makes a significant difference in the computation time required. A good location is one where the approximation has a large relative magnitude. Since it retains some Euler product structure, such locations can be quickly guessed by evaluating a certain Euler product up to a small number of primes, for multiple X candidates in an X range.
- Since the error terms have smooth, i.e. non-oscillatory, behavior, using conservative numeric integrals with the Lemma 9.3 summands, instead of the actual summation, is feasible and significantly faster (the time complexity of the estimation becomes independent of N).
- Using a fixed mesh for a rectangle contour (which can change from rectangle to rectangle) allows for vectorized computations and is significantly faster than using an adaptive mesh. To determine the number of mesh points, it is assumed that the normalised magnitude will stay above 1 (which is expected given the way the X location has been chosen, and is later verified after the approximation has been computed at all the mesh points). The number of mesh points is then chosen accordingly.
- Since the magnitude on the above fixed mesh generally comes out well above 1, the lower bound along the entire contour (not just at the mesh points) is higher than would be the case with an adaptive mesh. This property is used to obtain a larger t-step while moving in the t-direction.
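The stored-sums optimization can be illustrated with a toy two-variable Taylor expansion. Below, exp(a·t + b·y) stands in for the far more involved actual summands, and all names are hypothetical; the point is the pattern of precomputing a coefficient matrix once and then evaluating a cheap two-variable polynomial (here by Horner’s rule) at every mesh point:

```python
import math

def stored_matrix(a, b, terms_t, terms_y):
    """Taylor coefficient matrix of g(t, y) = exp(a*t + b*y) about (0, 0):
    c[i][j] = a^i * b^j / (i! * j!). Computed once per expansion centre."""
    return [[a ** i * b ** j / (math.factorial(i) * math.factorial(j))
             for j in range(terms_y)] for i in range(terms_t)]

def eval_taylor(c, dt, dy):
    """Evaluate the two-variable Taylor polynomial at offset (dt, dy),
    using Horner's rule in each direction."""
    total = 0.0
    for row in reversed(c):
        inner = 0.0
        for coef in reversed(row):
            inner = inner * dy + coef
        total = total * dt + inner
    return total
```

The matrix dimensions play the role of the automatically determined term counts mentioned above: enough terms are taken that the truncation error stays below the target accuracy.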
— Verifying the [N_a, N_b] range —
This leaves us with ensuring that the range [N_a, N_b] (where N_a is the value of N corresponding to the barrier location X) is zero-free, through checking that for each N, the lower bound always exceeds the upper bound of the error terms.
- From theory, two lower bounds are available: the Lemma bound (eq. 80 in the writeup) and an approximate Triangle bound (eq. 79 in the writeup). Both bounds can be ‘mollified’ by choosing an increasing number of primes (up to a certain extent) until the bound is sufficiently positive.
- The Lemma bound is used to find the number of ‘mollifiers’ required to make the bound positive at N_a. We found that using 4 primes ({2, 3, 5, 7}) was the maximum number of primes still allowing acceptable computational performance.
- The approximate Triangle bound evaluates faster and is used to establish the mollified (either 0 primes or only prime 2) end point N_b, before the analytical lower bound takes over.
- The Lemma bound is then also used to verify that for each N in [N_a, N_b], the lower bound stays sufficiently above the error terms. The Lemma bound only needs to be verified along the line segment at y = y_0, since the bound monotonically increases as y goes to 1.
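The effect of mollification can be seen on a toy example. Here a plain Dirichlet partial sum f(s) = Σ_{n≤N} n^{-s} stands in for the actual sums of the writeup (which carry t- and y-dependent coefficients), and the mollifier is the single Euler factor 1 − 2^{-s}: multiplying by it cancels every even term up to N, which raises the triangle-type lower bound. All function names are illustrative:

```python
def triangle_lower_bound(N, sigma):
    """Unmollified: |sum_{n<=N} n^{-s}| >= 1 - sum_{n=2}^{N} n^{-sigma}
    for Re(s) = sigma, by the triangle inequality."""
    return 1.0 - sum(n ** -sigma for n in range(2, N + 1))

def mollified_lower_bound(N, sigma):
    """Mollified by M(s) = 1 - 2^{-s}: the Dirichlet coefficients of
    M(s)*f(s) are +1 for odd m <= N, -1 for even m in (N, 2N], and 0
    otherwise (m > 1), so |f| >= (1 - tail)/|M| with |M| <= 1 + 2^{-sigma}."""
    tail = sum(m ** -sigma for m in range(3, N + 1, 2))   # surviving odd terms
    tail += sum(m ** -sigma for m in range(N + 1, 2 * N + 1) if m % 2 == 0)
    return (1.0 - tail) / (1.0 + 2.0 ** -sigma)
```

With more primes in the mollifier the cancellation (and the bookkeeping) grows, which is the performance trade-off noted above.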
Optimizations used for Lemmabound calculations
- To speed up computations, a fast “sawtooth” mechanism has been developed. This only calculates the minimally required incremental Lemma bound terms, and only induces a full calculation when the incremental bound goes below a defined threshold (that is sufficiently above the error bounds). The terms involved are as presented within section 9 of the writeup (pg. 42).
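Stripped of the specifics, the sawtooth mechanism is an incremental-verification pattern: carry a cheap, pessimistic bound forward step by step, and fall back to the full (expensive) computation only when the carried value dips below the safety threshold. A minimal sketch, with an invented linear test bound and invented names:

```python
def sawtooth_scan(full_bound, decrement, n_start, n_end, threshold):
    """Verify full_bound(n) >= threshold for all n in [n_start, n_end].
    decrement(n) must overestimate how much the bound can drop per step,
    so the carried value is always a valid lower bound."""
    carried = full_bound(n_start)
    full_evals = 1
    if carried < threshold:
        return False, full_evals
    for n in range(n_start + 1, n_end + 1):
        carried -= decrement(n)           # cheap incremental update
        if carried < threshold:
            carried = full_bound(n)       # expensive full recalculation
            full_evals += 1
            if carried < threshold:
                return False, full_evals  # the bound genuinely fails at n
    return True, full_evals
```

The number of expensive evaluations drops from n_end − n_start to roughly the number of times the sawtooth “resets”.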
— Software used —
To accommodate the above, the following software has been developed in both pari/gp (https://pari.math.u-bordeaux.fr) and ARB (http://arblib.org):
For verifying the Barrier:
- Barrier_Location_Optimizer to find the optimal location to place the Barrier.
- Stored_Sums_Generator to generate, in matrix form, the coefficients of the Taylor polynomial. This is a one-off activity for a given X, after which the coefficients can be used for winding number computations in different t and y ranges.
- Winding_Number_Calculator to verify that no complex zeros passed the Barrier.
For verifying the [N_a, N_b] range:
- N_b_Location_Finder to find N_b and the number of mollifiers required to make the bound positive.
- Lemmabound_calculator: first, different mollifiers are tried to see which one gives a sufficiently positive bound at N_a. The calculator can then be used with that mollifier to evaluate the bound for each N in [N_a, N_b]. The range can also be broken up into sub-ranges, which can then be tackled with different mollifiers.
- LemmaBound_Sawtooth_calculator to verify that each incrementally calculated Lemma bound stays above the error bounds. Generally this script and the Lemmabound_calculator script are substitutes for each other, although the latter may also be used for some initial portion of the N range.
Furthermore we have developed software to compute:
- approximate values of H_t in its various forms.
- the exact value of H_t (using the bounded version of the 3rd integral approach).
The software supports parallel processing through multi-threading and grid computing.
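The parallelization pattern is naturally chunk-based: the mesh (or the N range) is split into pieces that are verified independently, and the results are combined. A schematic Python sketch (the 2 + sin(x) integrand and all names are invented stand-ins; the real scripts parallelize the pari/gp and ARB computations):

```python
from concurrent.futures import ThreadPoolExecutor
import math

def chunk_minimum(chunk):
    """Smallest magnitude over one chunk of mesh points; 2 + sin(x)
    is a hypothetical stand-in for the quantity being bounded."""
    return min(2.0 + math.sin(x) for x in chunk)

def parallel_lower_bound(xs, workers=4, n_chunks=8):
    """Split the mesh across worker threads and combine per-chunk minima."""
    size = max(1, len(xs) // n_chunks)
    parts = [xs[i:i + size] for i in range(0, len(xs), size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return min(ex.map(chunk_minimum, parts))
```

The same split works for grid computing: each chunk becomes an independent job, and only the per-chunk minima need to be collected.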
— Results achieved —
For various combinations of X, t_0 and y_0, these are the numerical outcomes:
The numbers suggest that we have now numerically verified that Λ ≤ 0.22 (even at two different Barrier locations). Also, conditionally on the RH being verified up to various heights, we have now reached correspondingly stronger bounds. We are cautiously optimistic that the tools available right now even bring a conditional Λ ≤ 0.1 within reach of computation.
— Timings for verifying the DBN bound —
Procedure | Timings
Stored sums generation at X = 6*10^10 + 83951.5 | 42 sec
Winding number check in the barrier for t=[0,0.2], y=[0.2,1] | 42 sec
Lemma bounds using incremental method for N=[69098, 250000] and a 4-prime mollifier {2,3,5,7} | 118 sec
Overall | ~200 sec
Remarks:
- Timings to be multiplied by a factor of ~3.2 for each incremental order of magnitude of x.
- Parallel processing significantly improves speed (e.g. Stored sums was done in < 7 sec).
- The mollifier 2 analytical bound takes over from N = 250000 onwards.
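The ~3.2× per order of magnitude remark gives a quick way to project run times to larger heights. A one-line sketch (the factor is the empirical one quoted above, not a proven complexity, and the function name is invented):

```python
def projected_time(base_seconds, base_exp, target_exp, factor=3.2):
    """Extrapolate run time from x ~ 10^base_exp to x ~ 10^target_exp,
    assuming cost grows by ~factor per order of magnitude of x."""
    return base_seconds * factor ** (target_exp - base_exp)
```

For instance, the ~200 sec total at x ~ 10^10 would project to roughly 200 · 3.2^2 ≈ 2000 sec at x ~ 10^12 (before parallelization).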
— Links to computational results and software used: —
Numerical results achieved:
- Stored sums https://github.com/km-git-acc/dbn_upper_bound/tree/master/output/storedsums
- Winding numbers https://github.com/km-git-acc/dbn_upper_bound/tree/master/output/windingnumbers
- Lemmabound N_a…N_b https://github.com/km-git-acc/dbn_upper_bound/tree/master/output/eulerbounds
Software scripts used:
6 September, 2018 at 11:06 pm
scienaimer
Thanks😄, I admire you so much
13 October, 2018 at 11:22 am
Anonymous
Please don’t forget to take that ceremonial bow when you think his or speak his name… :-)
P.S. And trust me, he is a bit overrated too.
18 October, 2018 at 7:31 am
Anonymous
On the other hand, Prof. T. Tao has established a first-rate and evolving website on mathematics (https://terrytao.wordpress.com), and it may be the best on the worldwide internet. Thank you very much! :-)
7 September, 2018 at 7:12 am
Anonymous
Is it possible to reduce the computational complexity by making the width of the barrier variable (not necessarily 1) and choosing a “good” value for it ?
7 September, 2018 at 7:30 am
Terence Tao
The barrier can be reduced in area by a factor of about two (at the cost of replacing its rectangular shape with a region bounded by two straight lines and a parabola), but it can’t be made arbitrarily small (at least with the current arguments). The reason why the barrier needs to be somewhat thick is because one needs to ensure that the complex zeroes (paired with their complex conjugates) on the right of the barrier cannot exert an upward force on any complex zero above the real axis to the left of the barrier; this is to prevent the bad scenario of a complex zero swooping in under the barrier at high velocity, and then being pulled up by zeroes still to the right of the barrier to end up having an imaginary part above y_0 at the test time t_0. If the barrier is too thin, then a zero just to the left of it could be pulled up strongly by a zero just to the right and a little bit higher, to an extent that cannot be counteracted by the complex conjugate of the zero to the right. (Though, as I write this, I realise that the complex conjugate of the zero on the left would also be helpful in pulling the zero down. This may possibly help reduce the size of the barrier a little bit further, although it still can’t be made arbitrarily thin. I’ll try to make a back-of-the-envelope calculation on this shortly. EDIT: ah, I remember the problem now… if one has _many_ zeroes on the right of the barrier trying to pull the zero on the left up, they can overwhelm the effect of the complex conjugate of that zero, so one can’t exploit that conjugate unless one also has some quantitative control on how many zeroes there are just to the right of the barrier.)
My understanding is that verification that the barrier is zero-free has not been the major bottleneck in computations, with the numerical verification of a large rectangular zero-free region to the right of the barrier at time t_0 being the more difficult challenge, but perhaps the situation is different at very large heights (I see for instance that there is one choice of parameters in which the rectangular region is in fact completely absent).
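The force heuristic in this comment can be made concrete with the standard dynamics of zeros under the heat flow, in which each zero z_k moves with velocity Σ_{j≠k} 2/(z_k − z_j). Sign and normalization conventions here are schematic, and the configurations below are invented purely to illustrate the discussion, not taken from the writeup:

```python
def zero_velocity(k, zeros):
    """Velocity of the k-th zero under the (schematic) heat-flow dynamics
    dz_k/dt = sum_{j != k} 2 / (z_k - z_j)."""
    zk = zeros[k]
    return sum(2.0 / (zk - zj) for j, zj in enumerate(zeros) if j != k)

# A conjugate pair in isolation attracts itself toward the real axis:
v_pair = zero_velocity(0, [0.1j, -0.1j])            # Im(v) < 0

# A zero to the right and a little higher pulls the left zero up:
v_pulled = zero_velocity(0, [0.1j, 1 + 0.5j])       # Im(v) > 0

# If the right-hand zero is close and high enough, it overwhelms even the
# conjugate pairs -- the "too thin barrier" scenario described above:
v_thin = zero_velocity(0, [0.5j, -0.5j, 0.1 + 0.6j, 0.1 - 0.6j])  # Im(v) > 0
```

This is only a cartoon of the comment’s argument; the quantitative version in the project controls these forces via the barrier width and zero-counting bounds.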
7 September, 2018 at 12:19 pm
KM
If we use the Euler 2 analytic bound, and try to maintain the trend of the dbn constant decreasing by 0.01 or more for every order of magnitude change in X, without having to validate any rightward rectangular region, we find such good t0,y0 combinations only till X=10^19.
For example, for X=10^20, we don’t find a combination which could give dbn <= 0.11 by itself (although 0.1125 is manageable), and validating an extra region becomes necessary. We could instead assume a new such trend starts at X=10^20 and ends at X=10^22 with dbn going from 0.12 to 0.10.
If we validate a rightward rectangular region at these heights, it does take significant time. Here, the stored sums generation also takes substantial time (for eg. the 10^19 stored sums took about 30 processor-days (although quite parallelized)). We can choose a larger X to eliminate or reduce the rectangular region but at the cost of more time for the stored sums.
7 September, 2018 at 11:45 am
curious
With unbounded computation capability and infinite time is there a lower bound on Λ?
7 September, 2018 at 4:24 pm
Terence Tao
Well, it’s now a theorem that Λ ≥ 0, and the Riemann hypothesis is now known to be equivalent to Λ ≤ 0, so it is unlikely that this lower bound will ever be improved :)
Conversely, in the unlikely event that we were able to numerically locate a violation of the Riemann hypothesis, that is to say a zero of H_0 off the real line, one could hopefully use some of the numerical methods to calculate H_t developed here to also locate failure of the Riemann hypothesis at or near this location for some positive values of t as well. This would then give a positive lower bound on Λ. But I doubt this scenario will ever come to pass. (I doubt that RH will ever be disproven, but if it is, my guess is that the disproof will come more from analytic considerations than from numerically locating a counterexample, which might be of an incredibly large size (e.g., comparable to the Skewes number).)
16 October, 2018 at 8:56 am
Anonymous
Your arguments/doubts against the truth of the Riemann Hypothesis are fictional, and therefore, they should be ignored…
7 September, 2018 at 1:15 pm
Anonymous
In step 4 (of “verifying the range”), the meaning of “the line ” is not clear.
[Text clarified – T.]
7 September, 2018 at 5:31 pm
Anonymous
It seems that should be .
[Corrected, thanks – T.]
7 September, 2018 at 1:21 pm
Anonymous
Is it possible to prove that Λ > 0 using the zero-free regions method? For example, by proving that H_t has a complex zero for every positive t?
7 September, 2018 at 4:26 pm
Anonymous
What you are asking for is a proof that the Riemann hypothesis fails.
RH implies lambda <= 0.
7 September, 2018 at 4:34 pm
Anonymous
Sorry… that’s not what I wanted to say. I wasn’t paying attention. Thanks!
7 September, 2018 at 4:49 pm
Terence Tao
As I think I mentioned in the previous thread, establishing an upper bound Λ ≤ ε is roughly comparable in difficulty to verifying the Riemann hypothesis up to height exp(c/ε) for some absolute constant c. The computational difficulty of this task is in turn of the order of exp(C/ε) for some other absolute constant C. So the method can in principle obtain any upper bound of the form Λ ≤ ε (assuming of course that RH is true), but the time complexity grows exponentially with the reciprocal of the desired upper bound ε. My feeling is that absent any major breakthrough on RH, these exponential type bounds are here to stay, although one may be able to improve the values of c and C somewhat.
16 October, 2018 at 9:03 am
Anonymous
“… major breakthrough on RH…” Hah! RH is true!… And your acceptance of this fact will be your personal breakthrough!
7 September, 2018 at 6:55 pm
curious
Does the exponential behavior come from Proposition 10.1? Also, you say it is unlikely to be beaten; perhaps it would be nice to state the best behavior if it were beaten, and what unlikely event would trigger that?
7 September, 2018 at 7:42 pm
Terence Tao
See my previous comment on this at https://terrytao.wordpress.com/2018/05/04/polymath15-ninth-thread-going-below-0-22/#comment-501625
21 September, 2018 at 5:55 am
Anonymous
Stay tuned:
https://www.newscientist.com/article/2180406-famed-mathematician-claims-proof-of-160-year-old-riemann-hypothesis/
21 September, 2018 at 7:02 am
Anonymous
Such a claim from him seems really promising…
22 September, 2018 at 9:26 pm
Anonymous
Dear Prof. Tao,
Are you aware of any details on the supposed proof of the RH by Atiyah?
Best,
23 September, 2018 at 12:22 pm
sylvainjulien
There’s a question on Mathoverflow with a link towards a preprint by Sir Michael : https://mathoverflow.net/questions/311254/is-there-an-error-in-the-pre-print-published-by-atiyah-with-his-proof-of-the-rie
24 September, 2018 at 6:32 am
sha_bi_a_ti_ya
hi, the link does not work any more.
24 September, 2018 at 6:02 am
Anonymous
The papers are out and basically it seems that there is a consensus among the experts that the proof is incorrect (e.g. https://motls.blogspot.com/2018/09/nice-try-but-i-am-now-99-confident-that.html).
I would really like to read a comment on this from prof. Tao.
24 September, 2018 at 6:58 am
think
Motls is no expert.
24 September, 2018 at 8:53 am
Anonymous
Dear prof. Tao,
I suggest deleting this discussion of the claimed proof as irrelevant to this post.
24 September, 2018 at 9:22 am
Anonymous
How is a possible proof of RH not relevant to a bound on the roots of the zeta function? Once there is a proof of RH being true, \Lambda = 0 and there is no need for the numerics.
9 October, 2018 at 12:26 am
Anonymous
Interestingly, for some comments above, the “automatic dislike” is missing.
24 September, 2018 at 9:52 am
Terence Tao
Given the situation, I believe it would not be particularly constructive or appropriate for me to comment directly on this recent announcement of Atiyah. However, regarding the more general question of evaluating proposed attacks on RH (or GRH), I can refer readers to the recent talk of Peter Sarnak (slides available here) entitled “Commentary and comparisons of some approaches to GRH”. A sample quote: “99% of the ‘proofs’ of RH that are submitted to the Annals … can be rejected on the basis that they only use the functional equation [amongst the known properties of zeta, excluding basic properties such as analytic continuation and growth bounds]”. The reason for this is (as has been known since the work of Davenport and Heilbronn) that there are many examples of zeta-like functions (e.g., linear combinations of L-functions) which enjoy a functional equation and similar analyticity and growth properties to zeta, but which have zeroes off of the critical line. Thus, any proof of RH must somehow use a property of zeta which has no usable analogue for the Davenport-Heilbronn examples (or the many subsequent counterexamples produced after their work, see e.g., this recent paper of Vaughan for a more up-to-date discussion). (For instance, the proof might use in an essential way the Euler product formula for zeta, as the Davenport-Heilbronn examples do not enjoy such a product formula.)
Another “barrier” to proofs of RH that was mentioned in Sarnak’s talk is also worth stating explicitly here. Many analysis-based attacks on RH would, if they were successful, not only establish that all the non-trivial zeroes lie on the critical line, but would also force them to be simple. This is because analytic methods tend to be robust with respect to perturbations, and one can infinitesimally perturb a meromorphic function with a repeated zero on the critical line to have zeroes off the critical line (cf. the “dynamics of zeroes” discussions in this Polymath15 project). However, while the zeroes of zeta are believed to be simple, there are many examples of L-functions with a repeated zero at 1/2 (particularly if one believes in BSD and related conjectures). So any proof of RH must either fail to generalise to L-functions, or be non-robust with respect to infinitesimal perturbations of the zeta function. (For instance, in Deligne’s proof of the RH for function fields, the main way this barrier is evaded is through the use of the tensor power trick, which is not robust with respect to infinitesimal perturbations. This proof also evades the first barrier by relying in essential fashion on the Grothendieck-Lefschetz trace formula, which is not available in usable form for Davenport-Heilbronn type examples, even over function fields.)
(I also spoke at this meeting, on the topic of the de Bruijn-Newman constant and (in part) on Polymath15; my talk can be found here, and the remaining lectures may be found here, with slides here.)
24 September, 2018 at 10:40 am
Anonymous
On the other hand, Hamburger’s theorem (1921) shows that the zeta function is determined (up to a multiplicative constant) by its functional equation, having a Dirichlet series representation, finitely many singularities and growth conditions. So in addition to the functional equation and Dirichlet series representation not much more information is needed for a proof of RH.
24 September, 2018 at 12:55 pm
Terence Tao
This is true insofar as one restricts attention just to RH rather than GRH. But at the GRH level one needs the functional equation not just for the Dirichlet series, but for all twists of the Dirichlet series, in order to recover an L-function (or at least to evade the Davenport-Heilbronn type counterexamples). So any purported proof of RH that relies only on the functional equation and the Dirichlet series representation must either fundamentally break down when one attempts to extend it to GRH (i.e., it must use some special property of zeta not shared by other L-functions, such as the specific numeric parameters that appear in the gamma factors of the zeta functional equation as opposed to all other functional equations, which by the way is the case for Hamburger’s theorem), or must naturally incorporate the functional equation for the twists as well. (One could possibly imagine the latter occurring if the argument had a highly “adelic” flavour, for instance.)
25 September, 2018 at 11:13 am
Aula
Since Deligne managed to prove one special case of GRH, it’s a pretty obvious idea to try to generalize his proof to other cases. However it seems that no one has been able to do so, which suggests that function fields really are a very special case of GRH. Why is that?
25 September, 2018 at 12:03 pm
Terence Tao
Well, the function field case is more like half of all cases (if one considers the function field case and number field case to be of equal “size”); Deligne proved GRH not only for the analogue of the zeta function and Dirichlet L-functions (which are actually quite easy), or for curves (which was first done by Weil), but in fact proved GRH for all varieties and more generally for “pure” l-adic sheaves. (See my previous blog post on this at https://terrytao.wordpress.com/2013/07/19/the-riemann-hypothesis-in-various-settings/ .)
Probably the most obvious thing that the function field case has but the number field case seems to lack is the Frobenius map, which is linked to L-functions in the function field case via the Grothendieck-Lefschetz trace formula. But people have certainly searched for ways to extend these methods to number field settings, most prominently Connes and his co-authors (see e.g., his recent talk (slides available here), at the same conference as the previously linked talks). Connes identifies the apparent lack of a (Hirzebruch-)Riemann-Roch theorem for number fields as the key obstacle to making this strategy work.
22 October, 2018 at 3:50 pm
Anonymous
Since RH is generally regarded as the most important open conjecture in mathematics (with many claimed proofs), it seems that any accepted proof of it should also be formalized.
26 October, 2018 at 1:41 pm
Anonymous
Some recently claimed proofs of RH (with some remarks)
http://www.maths.ex.ac.uk/~mwatkins/zeta/RHproofs.htm
25 September, 2018 at 12:53 am
think
Is it possible that RH is true but not GRH, and is there a possibility that in that case your reasoning might fall short?
25 September, 2018 at 6:47 am
Terence Tao
This is theoretically possible, but (a) it goes against the experience of the last century or so, in which virtually every property that has been established for the zeta function eventually ends up having an analogue for other L-functions (possibly with some additional subtleties), and (b) as I said above, it would mean that any proof of RH would at some point have to rely in some crucial fashion on a property of the zeta function that has no usable analogue for L-functions (i.e., it must use an exception to (a)), since otherwise it would extend to a proof of GRH, which we are assuming for this discussion to be false. Again, this is theoretically possible – certainly I could imagine for instance that there would be an argument that can handle degree 1 L-functions such as zeta or Dirichlet L-functions, but not higher degree L-functions – but many attacks on RH would not be able to make such fine distinctions between zeta and other L-functions, e.g., arguments that are primarily based on complex analysis methods. (The only exception I could see is if the argument really used the specific value of the numerical exponents and other parameters of the zeta functional equation in some key fashion, so that the argument would break down completely if these parameters were replaced by those for some other L-function. Hamburger’s theorem, mentioned above, is one case in which this occurs.)
EDIT: my previous discussion on the distinction between local errors in a proof and global errors in a proof is relevant here. A local error is something like “The derivation of equation (23) on page 17 is not valid” or “The proposed definition of a pseudosimplicial hypercomplex in Definition 3.1 is ambiguous”. A global error is something like “this proof, if it worked, would also show that all meromorphic functions obeying the functional equation must verify the Riemann hypothesis, which is known not to be the case”. A good proof should not only be written in a way to avoid local errors, but also should be consciously striving to avoid global ones as well, otherwise it immediately becomes suspicious (even if the specific local error that the global error indicates must exist has not yet been located).
25 September, 2018 at 9:12 am
think
Not sure if there is a relation, but from https://terrytao.wordpress.com/2018/01/19/the-de-bruijn-newman-constant-is-non-negativ/ even RH seems to be barely so. Does any non-negativity statement like this hold for GRH?
25 September, 2018 at 10:16 am
Anonymous
It seems that the “barely so” result for the de Bruijn deformation H_t is “too rigid” in the deformation parameter t, so perhaps the “epsilon of room” principle indicates that it may be useful to make the deformation more flexible by using several deformation parameters (t_1, t_2, say) instead of the single parameter t.
This may be used to get some desirable properties of the more general deformation. Such a desirable property is to rigorously bound the zero velocities of H_t by replacing the classical heat equation (which has unbounded propagation velocities) with a similar version of the “relativistic heat equation” (with a small parameter on a second order time derivative, corresponding to a “wave equation”, which should give the desired upper bound on the propagation velocity). See e.g.
https://en.wikipedia.org/wiki/Relativistic_heat_conduction
25 September, 2018 at 11:56 am
Terence Tao
I am actually planning to ask a graduate student to look into this question.
27 September, 2018 at 4:20 am
Anonymous
Is it possible to refine de Bruijn deformation of to a more general (two parametric) deformation such that
(i) converges locally uniformly to as tend to .
(ii) For each there is such that whenever and .
(iii) The threshold is effectively computable.
Such a generalized deformation may be derived from a relativistic version of the heat equation. It seems that if there is a sequence of deformation parameters converging to with a corresponding sequence of thresholds uniformly bounded by some absolute constant M, then property (i) would imply that whenever and .
2 October, 2018 at 3:57 pm
Anonymous
A similar “deformation approach” to RH is to represent the kernel (which gives H_0 as its Fourier transform) as a limit of a suitable sequence of kernels whose Fourier transforms have only real zeros and are converging locally uniformly to H_0 – implying that H_0 also has only real zeros. This “kernel deformation approach” appears in Shi’s recent arxiv paper
https://arxiv.org/pdf/1706.08868.pdf
3 October, 2018 at 4:42 am
Anonymous
A more detailed description of the “kernel deformation approach” to RH is as follows
Since H_0 is the Fourier transform of an entire function with a double-exponentially decaying “generating kernel”, the basic idea is to construct a sequence of “approximating kernels” whose Fourier transforms should be
(i) well-defined and analytic in a horizontal strip.
(ii) converging locally uniformly to H_0 in this strip.
(iii) having only real zeros in this strip.
Clearly, the above properties imply that H_0 has only real zeros!
Remarks:
1. A sufficient condition for property (i) is
for each
2. A sufficient condition for property (ii) is
which follows from the simple estimate (in the strip )
3. It seems that satisfying property (iii) is the most difficult part in the construction (or "design") of such kernels .
3 October, 2018 at 7:57 am
sha_bi_a_ti_ya
bro, are you mr. shi?
2 October, 2018 at 4:01 pm
Terence Tao
I am not sure I fully understand your set of hypotheses, but it appears that the deformation H_t already obeys the properties you state (but with the threshold not bounded uniformly in the deformation parameter).
2 October, 2018 at 4:21 pm
Anonymous
Yes, it is true for the one-parameter deformation H_t, but perhaps the extra flexibility of the two-parametric deformation may be sufficient to have a suitable sequence of such deformations converging locally uniformly to H_0 while keeping their corresponding thresholds uniformly bounded?
Moreover, if the deformation is generated from a relativistic version of the heat equation (with bounded propagation velocity), is it possible that their zeros velocities are also uniformly bounded?
28 September, 2018 at 12:33 pm
vznvzn
Led here by an offhand remark/tip from a commenter on RJL’s recent blog post on the Atiyah attack; now delighted/excited/exhilarated to see this hard-core empirical/computational/numerical work on RH, and I hope the word gets out. (There are apparently multiple trolls on my blog who reject my own similar inquiries into Collatz on spurious grounds.) Does anyone know of a good survey reference on numerical work on RH, or in number theory more widely? I have found a few of my own over the years, but have to go hunt down some of these new refs. https://vzn1.wordpress.com/2015/12/29/select-refs-and-big-tribute-to-empirical-cs-math/
Re the Atiyah attack: I spent a lot of time researching it from many angles; this is a POV written from an outsider looking in on all the insiders. https://vzn1.wordpress.com/2018/09/26/atiyah-riemann-attack-post-mortem-autopsy-primes-torch-carried-on/
29 September, 2018 at 2:56 pm
Terence Tao vs Riemann Hypothesis, polymath 15 computational/ algorithmic number crunching attack, counterpoint to recent Atiyah episode - Nevin Manimala's Blog
[…] by /u/vznvzn [link] […]
29 September, 2018 at 5:39 pm
Atiyah Riemann attack/ post mortem/ autopsy, Primes torch carried on | Turing Machine
[…] thx everyone! from a comment on RJLs blog about "number crunching Riemann", Terence Tao/ Polymath15 is working on a very cool numerical attack. more vindication for numerical approaches. reddit likes […]
30 September, 2018 at 4:35 am
Alberto Ibañez
Hello, RH friends. Thanks, Professor Tao, for your comment. I think the whole world is waiting for your post about this new proof. I like the idea of a simple proof and the “new ideas”. Does the Todd function exist? Can RH be proved by contradiction without giving any information about zeros or primes? Will the RH proof tell us what prime numbers are, or where they are, and/or how they work? Prime numbers, from the Latin “primus”, “firsts”. Does there exist a place where all of the prime numbers are the firsts?
1 October, 2018 at 4:29 am
Anonymous
More information on the Todd function (appearing in Atiyah’s claimed proof of RH) and on the closely related Hirzebruch functional equation can be found in Bunkova’s recent arXiv paper
https://arxiv.org/pdf/1803.01398.pdf
7 October, 2018 at 5:29 pm
Anonymous
In the writeup, it seems that in the RHS of (33) the argument of the logarithmic derivative should be (instead of ).
In addition, the proof of claim (i) (in the following rows) can be simplified by observing that the Hadamard product for is locally uniformly convergent – implying that its logarithmic derivative is also locally uniformly convergent – so the logarithmic derivative can be computed termwise, and the claim follows (without relying on growth estimates).
[Thanks, the LaTeX source has now been updated (and the PDF will be updated in due course). -T]
8 October, 2018 at 11:41 am
Anonymous
Since $H_0$ can be represented as the Fourier transform of the (fast-decaying) strictly positive function $\Phi$, it is therefore a positive definite function on $\mathbb{R}$ (such positive definite functions are characterised by Bochner’s theorem as Fourier transforms of nonnegative measures).
Is it possible that $H_0$ represents the correlation function of a stationary continuous-time random process on $\mathbb{R}$, with $\Phi$ as its corresponding power spectral density? (This might explain the empirical connection of the distribution of its zeros to the distribution of eigenvalues of large random Hermitian matrices.)
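Bochner’s criterion mentioned here can be illustrated numerically: positive definiteness of a function $f$ means that every matrix $[f(x_i - x_j)]$ is positive semidefinite. A small self-contained sketch, where a Gaussian (whose Fourier transform is a positive Gaussian) stands in for $H_0$ as an example of such a function; the sample points and their count are arbitrary choices:

```python
import numpy as np

def gram(f, xs):
    """Matrix [f(x_i - x_j)] for a candidate positive definite function f."""
    xs = np.asarray(xs, dtype=float)
    return f(xs[:, None] - xs[None, :])

rng = np.random.default_rng(0)
xs = rng.uniform(-5.0, 5.0, size=40)

# Gaussian: its Fourier transform is a positive Gaussian, so by Bochner's
# theorem it is positive definite and all eigenvalues below are >= 0
# (up to floating-point rounding).
eigs = np.linalg.eigvalsh(gram(lambda x: np.exp(-x**2 / 2), xs))
```

The same check applied to a function whose Fourier transform takes negative values would generally produce negative eigenvalues for some choice of sample points.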
10 October, 2018 at 9:55 am
Watching Over the Zeroes | Gödel's Lost Letter and P=NP
[…] results toward the RH. A PolyMath project on bounding from above has achieved , and a bound of is known to follow if and when the RH is verified for up to . There was previously a lower bound , a […]
13 October, 2018 at 10:16 am
Anonymous
Since $e^{tu^2}\Phi(u)$ is the integral kernel representing $H_t$, it seems desirable to understand, in terms of the kernel, why it makes some zeros of $H_t$ non-real for $t < \Lambda$ (i.e. which property of the kernel is changing at the threshold $t = \Lambda$?).
It is known from a classical paper (1918) of Polya that if the kernel is an even function, INCREASING for $u > 0$ and compactly supported, its Fourier transform (which is a finite cosine transform) is an entire function with only real zeros. If the kernel function is DECREASING (as in the case of $\Phi$), its Fourier transform may have non-real zeros (there are some results connecting its convexity properties to the zero distribution of its Fourier transform). Since the derivative of $\Phi$ (as of any even function) is zero at the origin, is it possible that its second derivative at the origin is also zero – which would make the kernel $e^{tu^2}\Phi(u)$ CONVEX at the origin for $t > 0$, but CONCAVE for $t < 0$?
(thereby explaining its convexity “transition” at the origin at $t = 0$).
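The question about the second derivative at the origin can be probed directly with finite differences on the standard kernel $\Phi$; the truncation level `N` and step size `h` below are ad hoc choices. A quick run of this sketch suggests $\Phi''(0)$ is around $-33$, i.e. clearly nonzero and negative, which (if correct) would mean the kernel $e^{tu^2}\Phi(u)$ is already concave at the origin for all moderate $t$, so the convexity there does not change sign at $t = 0$:

```python
import numpy as np

def Phi(u, N=10):
    """Truncation of Phi(u) = sum_n (2*pi^2*n^4*e^(9u) - 3*pi*n^2*e^(5u)) * exp(-pi*n^2*e^(4u)).

    The series converges for all real u, so it can be sampled on both sides of 0.
    """
    total = 0.0
    for n in range(1, N + 1):
        total += ((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))
    return total

h = 1e-3
d1 = (Phi(h) - Phi(-h)) / (2 * h)              # ~0, as expected for an even function
d2 = (Phi(h) - 2 * Phi(0.0) + Phi(-h)) / h**2  # second derivative at the origin
```

Note that finite truncations of the series are not exactly even, so the near-vanishing of `d1` is itself a useful consistency check on the truncation level.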
13 October, 2018 at 10:22 am
Anonymous
Correction: in the fourth line it should be “for $t < 0$” (i.e. the “<” is missing).