Let $M_{n \times m}({\bf Z})$ denote the space of $n \times m$ matrices with integer entries, and let $GL_n({\bf Z})$ be the group of invertible $n \times n$ matrices with integer entries (and similarly for $GL_m({\bf Z})$). The Smith normal form takes an arbitrary matrix $A \in M_{n \times m}({\bf Z})$ and factorises it as $A = UDV$, where $U \in GL_n({\bf Z})$, $V \in GL_m({\bf Z})$, and $D \in M_{n \times m}({\bf Z})$ is a rectangular diagonal matrix, by which we mean that the principal $\min(n,m) \times \min(n,m)$ minor is diagonal, with all other entries zero. Furthermore the diagonal entries of $D$ are $\alpha_1, \dots, \alpha_k, 0, \dots, 0$ for some $0 \leq k \leq \min(n,m)$ (which is also the rank of $A$), with the numbers $\alpha_1, \dots, \alpha_k$ (known as the invariant factors) positive integers obeying the divisibility chain $\alpha_1 | \alpha_2 | \dots | \alpha_k$. The invariant factors are uniquely determined; but there can be some freedom to modify the invertible matrices $U, V$. The Smith normal form can be computed easily; for instance, in SAGE, it can be computed by calling the smith_form() function from the matrix class. The Smith normal form is also available for other principal ideal domains than the integers, but we will only be focused on the integer case here. For the purposes of this post, we will view the Smith normal form as a primitive operation on matrices that can be invoked as a “black box”.
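For readers who prefer to experiment outside of SAGE, here is a minimal sketch assuming SymPy's smith_normal_form and invariant_factors helpers (available in recent versions of SymPy); the example matrix is arbitrary:

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form, invariant_factors

    # An arbitrary 3x3 integer matrix to experiment with.
    A = Matrix([[2, 4, 4],
                [-6, 6, 12],
                [10, 4, 16]])

    print(smith_normal_form(A, domain=ZZ))  # the rectangular diagonal matrix D
    print(invariant_factors(A, domain=ZZ))  # its diagonal entries alpha_1 | alpha_2 | ...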
In this post I would like to record how to use the Smith normal form to computationally manipulate two closely related classes of objects:
- Subgroups $\Gamma$ of a standard lattice ${\bf Z}^n$ (or lattice subgroups for short);
- Closed subgroups $T$ of a standard torus $({\bf R}/{\bf Z})^n$ (or closed torus subgroups for short).
The above two classes of objects are isomorphic to each other by Pontryagin duality: if $\Gamma \leq {\bf Z}^n$ is a lattice subgroup, then the orthogonal complement
$$\Gamma^\perp := \{ x \in ({\bf R}/{\bf Z})^n : \xi \cdot x = 0 \hbox{ for all } \xi \in \Gamma \}$$
is a closed torus subgroup, and conversely.
Example 1 The orthogonal complement of the lattice subgroup is the closed torus subgroup
Let us focus first on lattice subgroups $\Gamma \leq {\bf Z}^n$. As all such subgroups are finitely generated abelian groups, one way to describe a lattice subgroup is to specify a set $v_1, \dots, v_m$ of generators of $\Gamma$. Equivalently, we have $\Gamma = A {\bf Z}^m$, where $A \in M_{n \times m}({\bf Z})$ is the matrix whose columns are $v_1, \dots, v_m$.
Example 2 Let be the lattice subgroup generated by
,
,
, thus
with
. A Smith normal form for
is given by
so
is a rank two lattice with a basis of
and
(and the invariant factors are
and
). The trimmed representation is
There are other Smith normal forms for
, giving slightly different representations here, but the rank and invariant factors will always be the same.
By the above discussion we can represent a lattice subgroup $\Gamma \leq {\bf Z}^n$ by a matrix $A \in M_{n \times m}({\bf Z})$ for some $m$, via the identification $\Gamma = A {\bf Z}^m$; this representation is not unique, but we will address this issue shortly. For now, we focus on the question of how to use such data representations of subgroups to perform basic operations on lattice subgroups. There are some operations that are very easy to perform using this data representation (the sum and direct sum operations are also illustrated with a short code sketch after the list):
- (Applying a linear transformation) if $T \in M_{n' \times n}({\bf Z})$, so that $T$ is also a linear transformation from ${\bf Z}^n$ to ${\bf Z}^{n'}$, then $T$ maps lattice subgroups to lattice subgroups, and clearly maps the lattice subgroup $A {\bf Z}^m$ to $(TA) {\bf Z}^m$ for any $A \in M_{n \times m}({\bf Z})$.
- (Sum) Given two lattice subgroups $A_1 {\bf Z}^{m_1}$, $A_2 {\bf Z}^{m_2}$ for some $A_1 \in M_{n \times m_1}({\bf Z})$, $A_2 \in M_{n \times m_2}({\bf Z})$, the sum $A_1 {\bf Z}^{m_1} + A_2 {\bf Z}^{m_2}$ is equal to the lattice subgroup $A {\bf Z}^{m_1+m_2}$, where $A \in M_{n \times (m_1+m_2)}({\bf Z})$ is the matrix formed by concatenating the columns of $A_1$ with the columns of $A_2$.
- (Direct sum) Given two lattice subgroups $A_1 {\bf Z}^{m_1} \leq {\bf Z}^{n_1}$, $A_2 {\bf Z}^{m_2} \leq {\bf Z}^{n_2}$, the direct sum $A_1 {\bf Z}^{m_1} \oplus A_2 {\bf Z}^{m_2} \leq {\bf Z}^{n_1+n_2}$ is equal to the lattice subgroup $A {\bf Z}^{m_1+m_2}$, where $A = A_1 \oplus A_2 \in M_{(n_1+n_2) \times (m_1+m_2)}({\bf Z})$ is the block matrix formed by taking the direct sum of $A_1$ and $A_2$.
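As a concrete illustration of the sum and direct sum operations just described, here is a small SymPy sketch (the generator matrices A1, A2 are arbitrary examples, not taken from this post):

    from sympy import Matrix, ZZ, diag
    from sympy.matrices.normalforms import smith_normal_form

    # Two arbitrary lattice subgroups of Z^3, represented by generator matrices.
    A1 = Matrix([[1, 0],
                 [0, 2],
                 [0, 0]])
    A2 = Matrix([[0],
                 [3],
                 [1]])

    A_sum = Matrix.hstack(A1, A2)  # generators of the sum A1*Z^2 + A2*Z^1 inside Z^3
    A_direct = diag(A1, A2)        # generators of the direct sum inside Z^{3+3}

    print(smith_normal_form(A_sum, domain=ZZ))
    print(smith_normal_form(A_direct, domain=ZZ))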
One can also use Smith normal form to detect when one lattice subgroup $A_1 {\bf Z}^{m_1}$ is a subgroup of another lattice subgroup $A_2 {\bf Z}^{m_2}$. Using the Smith normal form factorization $A_2 = U_2 D_2 V_2$, with invariant factors $\alpha_1 | \dots | \alpha_k$, the relation $A_1 {\bf Z}^{m_1} \leq A_2 {\bf Z}^{m_2}$ is equivalent after some manipulation to the assertion that, for each $1 \leq i \leq k$, every entry of the $i^{th}$ row of $U_2^{-1} A_1$ is divisible by $\alpha_i$, while the remaining rows of $U_2^{-1} A_1$ vanish entirely.
Example 3 To test whether the lattice subgroup generated by
and
is contained in the lattice subgroup
from Example 2, we write
as
with
, and observe that
The first row is of course divisible by
, and the last row vanishes as required, but the second row is not divisible by
, so
is not contained in
(but
is); also a similar computation verifies that
is conversely contained in
.
One can now test whether $A_1 {\bf Z}^{m_1} = A_2 {\bf Z}^{m_2}$ by testing whether the inclusions $A_1 {\bf Z}^{m_1} \leq A_2 {\bf Z}^{m_2}$ and $A_2 {\bf Z}^{m_2} \leq A_1 {\bf Z}^{m_1}$ simultaneously hold (there may be more efficient ways to do this, but this is already computationally manageable in many applications). This in principle addresses the issue of non-uniqueness of representation of a subgroup $\Gamma$ in the form $A {\bf Z}^m$.
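Here is a hedged SymPy sketch of one way to carry out such containment and equality tests, treating the Smith normal form purely as a black box. It does not implement the row-divisibility criterion described above; instead it uses the equivalent observation that $A_1 {\bf Z}^{m_1} \leq A_2 {\bf Z}^{m_2}$ precisely when the lattices generated by the columns of $A_2$ and of the concatenation of $A_2$ with $A_1$ have the same rank and the same invariant factors (a sublattice of another lattice with the same rank and invariant factors must coincide with it):

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    def nonzero_smith_diagonal(A):
        """The nonzero diagonal entries (rank and invariant factors) of the SNF of A."""
        D = smith_normal_form(A, domain=ZZ)
        return [D[i, i] for i in range(min(D.shape)) if D[i, i] != 0]

    def is_sublattice(A1, A2):
        """True iff the lattice generated by the columns of A1 is contained in
        the lattice generated by the columns of A2."""
        return nonzero_smith_diagonal(Matrix.hstack(A2, A1)) == nonzero_smith_diagonal(A2)

    def lattices_equal(A1, A2):
        return is_sublattice(A1, A2) and is_sublattice(A2, A1)

    # Example: the lattice 2*Z^2 is contained in Z^2, but not conversely.
    I2 = Matrix([[1, 0], [0, 1]])
    print(is_sublattice(2 * I2, I2))  # True
    print(is_sublattice(I2, 2 * I2))  # False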
Next, we consider the question of representing the intersection of two subgroups $A_1 {\bf Z}^{m_1} \cap A_2 {\bf Z}^{m_2}$ in the form $A {\bf Z}^m$ for some $A$ and $m$. We can write
Example 4 With the lattice from Example 2, we shall compute the intersection of
with the subgroup
, which one can also write as
with
. We obtain a Smith normal form
so
. We have
and so we can write
where
One can trim this representation if desired, for instance by deleting the first column of
(and replacing
with
). Thus the intersection of
with
is the rank one subgroup generated by
.
A similar calculation allows one to represent the pullback of a subgroup
via a linear transformation
, since
Among other things, this allows one to describe lattices given by systems of linear equations and congruences in the $A {\bf Z}^m$ format. Indeed, the set of lattice vectors
that solve the system of congruences
Example 5 With the lattice subgroup from Example 2, we have
, and so
consists of those triples
which obey the (redundant) congruence
the congruence
and the identity
Conversely, one can use the above procedure to convert the above system of congruences and identities back into a form
(though depending on which Smith normal form one chooses, the end result may be a different representation of the same lattice group
).
Now we apply Pontryagin duality. We claim the identity
Example 6 The orthogonal complement of the lattice subgroup from Example 2 is the closed torus subgroup
using the trimmed representation of
, one can simplify this a little to
and one can also write this as the image of the group
under the torus isomorphism
In other words, one can write
so that
is isomorphic to
.
We can now dualize all of the previous computable operations on subgroups of ${\bf Z}^n$ to produce computable operations on closed subgroups of $({\bf R}/{\bf Z})^n$. For instance:
- To form the intersection or sum of two closed torus subgroups
, use the identities
and
and then calculate the sum or intersection of the lattice subgroups by the previous methods (the relevant duality identities are recalled after Example 7 below). Similarly, the operation of direct sum of two closed torus subgroups dualises to the operation of direct sum of two lattice subgroups.
- To determine whether one closed torus subgroup
is contained in (or equal to) another closed torus subgroup
, simply use the preceding methods to check whether the lattice subgroup
is contained in (or equal to) the lattice subgroup
.
- To compute the pull back
of a closed torus subgroup
via a linear transformation
, use the identity
Similarly, to compute the imageof a closed torus subgroup
, use the identity
Example 7 Suppose one wants to compute the sum of the closed torus subgroup from Example 6 with the closed torus subgroup
. This latter group is the orthogonal complement of the lattice subgroup
considered in Example 4. Thus we have
where
is the matrix from Example 6; discarding the zero column, we thus have
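For reference, the duality identities invoked in the first two items of the list above are presumably the standard Pontryagin duality relations (stated in the notation of this post, where $\Gamma^\perp$ denotes the orthogonal complement):
$$(\Gamma_1 + \Gamma_2)^\perp = \Gamma_1^\perp \cap \Gamma_2^\perp, \qquad (\Gamma_1 \cap \Gamma_2)^\perp = \Gamma_1^\perp + \Gamma_2^\perp,$$
and
$$\Gamma_1 \leq \Gamma_2 \iff \Gamma_2^\perp \leq \Gamma_1^\perp.$$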
Peter Denton, Stephen Parke, Xining Zhang, and I have just uploaded to the arXiv a completely rewritten version of our previous paper, now titled “Eigenvectors from Eigenvalues: a survey of a basic identity in linear algebra“. This paper is now a survey of the various literature surrounding the following basic identity in linear algebra, which we propose to call the eigenvector-eigenvalue identity:
Theorem 1 (Eigenvector-eigenvalue identity) Let $A$ be an $n \times n$ Hermitian matrix, with eigenvalues $\lambda_1(A), \dots, \lambda_n(A)$. Let $v_i$ be a unit eigenvector corresponding to the eigenvalue $\lambda_i(A)$, and let $v_{i,j}$ be the $j^{th}$ component of $v_i$. Then
$$|v_{i,j}|^2 \prod_{k=1; k \neq i}^n (\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M_j)),$$
where $M_j$ is the $(n-1) \times (n-1)$ Hermitian matrix formed by deleting the $j^{th}$ row and column from $A$.
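As a quick sanity check (not part of the paper), one can verify the identity numerically for a random Hermitian matrix, for instance with NumPy:

    import numpy as np

    # Build a random n x n Hermitian matrix.
    n = 5
    rng = np.random.default_rng(0)
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (B + B.conj().T) / 2

    eigvals, eigvecs = np.linalg.eigh(A)   # eigvecs[:, i] is a unit eigenvector for eigvals[i]

    i, j = 2, 0                            # arbitrary choice of eigenvalue/component indices
    lhs = abs(eigvecs[j, i]) ** 2 * np.prod(
        [eigvals[i] - eigvals[k] for k in range(n) if k != i])

    M_j = np.delete(np.delete(A, j, axis=0), j, axis=1)   # remove the j-th row and column
    mu = np.linalg.eigvalsh(M_j)
    rhs = np.prod([eigvals[i] - mu[k] for k in range(n - 1)])

    print(lhs, rhs)   # the two sides agree up to rounding error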
When we posted the first version of this paper, we were unaware of previous appearances of this identity in the literature; a related identity had been used by Erdos-Schlein-Yau and by myself and Van Vu for applications to random matrix theory, but to our knowledge this specific identity appeared to be new. Even two months after our preprint first appeared on the arXiv in August, we had only learned of one other place in the literature where the identity showed up (by Forrester and Zhang, who also cite an earlier paper of Baryshnikov).
The situation changed rather dramatically with the publication of a popular science article in Quanta on this identity in November, which gave this result significantly more exposure. Within a few weeks we became informed (through private communication, online discussion, and exploration of the citation tree around the references we were alerted to) of over three dozen places where the identity, or some other closely related identity, had previously appeared in the literature, in such areas as numerical linear algebra, various aspects of graph theory (graph reconstruction, chemical graph theory, and walks on graphs), inverse eigenvalue problems, random matrix theory, and neutrino physics. As a consequence, we have decided to completely rewrite our article in order to collate this crowdsourced information, and survey the history of this identity, all the known proofs (we collect seven distinct ways to prove the identity (or generalisations thereof)), and all the applications of it that we are currently aware of. The citation graph of the literature that this ad hoc crowdsourcing effort produced is only very weakly connected, which we found surprising:
The earliest explicit appearance of the eigenvector-eigenvalue identity we are now aware of is in a 1966 paper of Thompson, although this paper is only cited (directly or indirectly) by a fraction of the known literature, and also there is a precursor identity of Löwner from 1934 that can be shown to imply the identity as a limiting case. At the end of the paper we speculate on some possible reasons why this identity only achieved a modest amount of recognition and dissemination prior to the November 2019 Quanta article.
The Polymath15 paper “Effective approximation of heat flow evolution of the Riemann $\xi$ function, and a new upper bound for the de Bruijn-Newman constant“, submitted to Research in the Mathematical Sciences, has just been uploaded to the arXiv. This paper records the mix of theoretical and computational work needed to improve the upper bound on the de Bruijn-Newman constant $\Lambda$. This constant can be defined as follows. The function
$$H_0(z) := \frac{1}{8} \xi\left( \frac{1}{2} + \frac{iz}{2} \right),$$
where $\xi$ is the Riemann $\xi$ function
$$\xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s),$$
has a Fourier representation
$$H_0(z) = \int_0^\infty \Phi(u) \cos(zu)\ du,$$
where $\Phi$ is the super-exponentially decaying function
$$\Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).$$
The Riemann hypothesis is equivalent to the claim that all the zeroes of $H_0$ are real. De Bruijn introduced (in different notation) the deformations
$$H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du$$
of $H_0$; one can view this as the solution to the backwards heat equation $\partial_t H_t = -\partial_{zz} H_t$ starting at $H_0$. From the work of de Bruijn and of Newman, it is known that there exists a real number $\Lambda$ – the de Bruijn-Newman constant – such that $H_t$ has all zeroes real for $t \geq \Lambda$ and has at least one non-real zero for $t < \Lambda$. In particular, the Riemann hypothesis is equivalent to the assertion $\Lambda \leq 0$. Prior to this paper, the best known bounds for this constant were
$$0 \leq \Lambda < \frac{1}{2},$$
with the lower bound due to Rodgers and myself, and the upper bound due to Ki, Kim, and Lee. One of the main results of the paper is to improve the upper bound to
$$\Lambda \leq 0.22. \ \ \ \ \ (1)$$
At a purely numerical level this gets “closer” to proving the Riemann hypothesis, but the methods of proof take as input a finite numerical verification of the Riemann hypothesis up to some given height $T$ (in our paper we take $T$ to be approximately $3 \times 10^{10}$) and convert this (and some other numerical verification) to an upper bound on $\Lambda$ that is of order $O(1/\log T)$. As discussed in the final section of the paper, further improvement of the numerical verification of RH would thus lead to modest improvements in the upper bound on $\Lambda$
, although it does not seem likely that our methods could for instance improve the bound to below
without an infeasible amount of computation.
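For concreteness, here is a rough mpmath sketch (not the optimised code used in the project) of how $\Phi$ and $H_t$ can be evaluated directly from the definitions above, with the infinite sum defining $\Phi$ truncated to a hypothetical 50 terms; this direct approach is far too slow for the large values of $x$ needed in the paper, which is one reason the Riemann-Siegel-type approximations discussed below are used instead:

    import mpmath as mp

    mp.mp.dps = 30

    def Phi(u, terms=50):
        """Truncation of the super-exponentially decaying sum defining Phi(u)."""
        u = mp.mpf(u)
        return sum((2 * mp.pi**2 * n**4 * mp.exp(9 * u)
                    - 3 * mp.pi * n**2 * mp.exp(5 * u))
                   * mp.exp(-mp.pi * n**2 * mp.exp(4 * u))
                   for n in range(1, terms + 1))

    def H(t, z):
        """H_t(z), evaluated directly from the Fourier integral over [0, infinity)."""
        return mp.quad(lambda u: mp.exp(t * u**2) * Phi(u) * mp.cos(z * u), [0, mp.inf])

    # At t = 0 the real zeroes of H_0 sit at twice the ordinates of the zeta zeroes;
    # e.g. there should be a sign change between z = 28 and z = 28.5, since the
    # first zeta zero has ordinate 14.1347...
    print(H(0, 28), H(0, 28.5))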
We now discuss the methods of proof. An existing result of de Bruijn shows that if all the zeroes of $H_{t_0}$ lie in the strip $\{ x+iy: |y| \leq y_0 \}$, then $\Lambda \leq t_0 + \frac{1}{2} y_0^2$; we will verify this hypothesis with
$t_0 = y_0 = 0.2$, thus giving (1). Using the symmetries and the known zero-free regions, it suffices to show that
$$H_{0.2}(x+iy) \neq 0 \ \ \ \ \ (2)$$
whenever $x \geq 0$ and $0.2 \leq y \leq 1$.
For large $x$ (specifically, for $x$ beyond a certain explicit threshold), we use effective numerical approximation to $H_t$ to establish (2), as discussed in a bit more detail below. For smaller values of $x$, the existing numerical verification of the Riemann hypothesis (we use the results of Platt) shows that $H_0(x+iy)$ is non-vanishing in this range of $x$ and $y$. The problem though is that this result only controls $H_t$ at time $t=0$ rather than the desired time $t = 0.2$. To bridge the gap we need to erect a “barrier” that, roughly speaking, verifies that
$$H_t(x+iy) \neq 0 \ \ \ \ \ (3)$$
for $x$ in a certain narrow range, for the relevant range of $y$, and for $0 \leq t \leq 0.2$; with a little bit of work this barrier shows that zeroes cannot sneak in from the right of the barrier to the left in order to produce counterexamples to (2) for small $x$.
To enforce this barrier, and to verify (2) for large , we need to approximate
for positive
. Our starting point is the Riemann-Siegel formula, which roughly speaking is of the shape
where ,
is an explicit “gamma factor” that decays exponentially in
, and
is a ratio of gamma functions that is roughly of size
. Deforming this by the heat flow gives rise to an approximation roughly of the form
where and
are variants of
and
,
, and
is an exponent which is roughly
. In particular, for positive values of
,
increases (logarithmically) as
increases, and the two sums in the Riemann-Siegel formula become increasingly convergent (even in the face of the slowly increasing coefficients
). For very large values of
(in the range
for a large absolute constant
), the
terms of both sums dominate, and
begins to behave in a sinusoidal fashion, with the zeroes “freezing” into an approximate arithmetic progression on the real line much like the zeroes of the sine or cosine functions (we give some asymptotic theorems that formalise this “freezing” effect). This lets one verify (2) for extremely large values of
(e.g.,
). For slightly less large values of
, we first multiply the Riemann-Siegel formula by an “Euler product mollifier” to reduce some of the oscillation in the sum and make the series converge better; we also use a technical variant of the triangle inequality to improve the bounds slightly. These are sufficient to establish (2) for moderately large
(say
) with only a modest amount of computational effort (a few seconds after all the optimisations; on my own laptop with very crude code I was able to verify all the computations in a matter of minutes).
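To illustrate the mollification idea in the simplest possible setting (this is only a toy model, not the mollifier actually used in the paper), multiplying a Dirichlet series with slowly varying coefficients by a short Euler product such as $(1 - 2^{-s})(1 - 3^{-s})$ removes the coefficients at indices divisible by the mollified primes, which reduces the oscillation of the partial sums:

    # Dirichlet series are represented as dicts {n: coefficient}.
    N = 30
    zeta_partial = {n: 1.0 for n in range(1, N + 1)}   # coefficients of sum_{n<=N} n^{-s}

    def dirichlet_multiply(a, b, cutoff):
        """Dirichlet convolution of two coefficient dicts, truncated at `cutoff`."""
        c = {}
        for m, am in a.items():
            for k, bk in b.items():
                if m * k <= cutoff:
                    c[m * k] = c.get(m * k, 0.0) + am * bk
        return c

    # Mollify by the Euler product (1 - 2^{-s})(1 - 3^{-s}).
    mollifier = dirichlet_multiply({1: 1.0, 2: -1.0}, {1: 1.0, 3: -1.0}, N)
    mollified = dirichlet_multiply(zeta_partial, mollifier, N)

    surviving = sorted(n for n, c in mollified.items() if abs(c) > 1e-12)
    print(surviving)   # only the n coprime to 2 and 3 survive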
The most difficult computational task is the verification of the barrier (3), particularly when $t$ is close to zero where the series in (4) converge quite slowly. We first use an Euler product heuristic approximation to $H_t$ to decide where to place the barrier in order to make our numerical approximation to $H_t$ as large in magnitude as possible (so that we can afford to work with a sparser set of mesh points for the numerical verification). In order to efficiently evaluate the sums in (4) for many different values of
, we perform a Taylor expansion of the coefficients to factor the sums as combinations of other sums that do not actually depend on
and
and so can be re-used for multiple choices of
after a one-time computation. At the scales we work in, this computation is still quite feasible (a handful of minutes after software and hardware optimisations); if one assumes larger numerical verifications of RH and lowers $t_0$ and $y_0$ to optimise the value of $\Lambda$ accordingly, one could get down to an upper bound of
assuming an enormous numerical verification of RH (up to height about
) and a very large distributed computing project to perform the other numerical verifications.
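The re-use trick can be illustrated in a toy setting (again, this is only a schematic model with made-up coefficients, not the expansion used in the project): to evaluate a sum $\sum_n a_n n^{-h}$ for many small values of a parameter $h$, one can Taylor expand $n^{-h}$ in $h$ and precompute the $h$-independent sums once:

    import math

    N, K = 1000, 12          # number of terms in the sum, Taylor truncation order
    a = [1.0 / n for n in range(1, N + 1)]   # made-up coefficients a_n

    # One-time precomputation of sums that do not depend on the parameter h.
    T = [sum(a[n - 1] * (-math.log(n)) ** k for n in range(1, N + 1)) for k in range(K)]

    def S_taylor(h):
        """sum_n a_n n^{-h}, evaluated via the precomputed sums T_k."""
        return sum(h ** k / math.factorial(k) * T[k] for k in range(K))

    def S_direct(h):
        return sum(a[n - 1] * n ** (-h) for n in range(1, N + 1))

    for h in (0.0, 0.05, 0.1):
        print(S_direct(h), S_taylor(h))   # the two evaluations agree closely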
This post can serve as the (presumably final) thread for the Polymath15 project (continuing this post), to handle any remaining discussion topics for that project.
This is the eleventh research thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
There are currently two strands of activity. One is writing up the paper describing the combination of theoretical and numerical results needed to obtain the new bound $\Lambda \leq 0.22$. The latest version of the writeup may be found here, in this directory. The theoretical side of things has mostly been written up; the main remaining tasks to do right now are
- giving a more detailed description and illustration of the two major numerical verifications, namely the barrier verification that establishes a zero-free region for
for
, and the Dirichlet series bound that establishes a zero-free region for
; and
- giving more detail on the conditional results assuming more numerical verification of RH.
Meanwhile, several of us have been exploring the behaviour of the zeroes of $H_t$ for negative $t$; this does not directly lead to any new progress on bounding $\Lambda$ (though there is a good chance that it may simplify the proof of $\Lambda \leq 0.22$), but there have been some interesting numerical phenomena uncovered, as summarised in this set of slides. One phenomenon is that for large negative $t$, many of the complex zeroes begin to organise themselves near the curves
(An example of the agreement between the zeroes and these curves may be found here.) We now have a (heuristic) theoretical explanation for this; we should have an approximation
in this region (where the quantities appearing in this approximation are defined in equations (11), (15), (17) of the writeup), and the above curves arise from (an approximation of) those locations where two adjacent terms in this series have equal magnitude (with the other terms being of lower order).
However, we only have a partial explanation at present of the interesting behaviour of the real zeroes at negative $t$; for instance, the surviving zeroes at extremely negative values of $t$ appear to lie on the curve where the quantity
is close to a half-integer, where
The remaining zeroes exhibit a pattern in coordinates that is approximately 1-periodic in
, where
A plot of the zeroes in these coordinates (somewhat truncated due to the numerical range) may be found here.
We do not yet have a total explanation of the phenomena seen in this picture. It appears that we have an approximation
where is the non-zero multiplier
and
The derivation of this formula may be found in this wiki page. However our initial attempts to simplify the above approximation further have proven to be somewhat inaccurate numerically (in particular giving an incorrect prediction for the location of zeroes, as seen in this picture). We are in the process of using numerics to try to resolve the discrepancies (see this page for some code and discussion).
This is the tenth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
Most of the progress since the last thread has been on the numerical side, in which the various techniques to numerically establish zero-free regions for the equation $H_t(x+iy) = 0$ have been streamlined, made faster, and extended to larger heights than were previously possible. The best bound for $\Lambda$
now depends on the height to which one is willing to assume the Riemann hypothesis. Using the conservative verification up to height (slightly larger than)
, which has been confirmed by independent work of Platt et al. and Gourdon-Demichel, the best bound remains at $\Lambda \leq 0.22$
. Using the verification up to height
claimed by Gourdon-Demichel, this improves slightly to
, and if one assumes the Riemann hypothesis up to height
the bound improves to
, contingent on a numerical computation that is still underway. (See the table below the fold for more data of this form.) This is broadly consistent with the expectation that the bound on $\Lambda$ should be inversely proportional to the logarithm of the height at which the Riemann hypothesis is verified.
As progress seems to have stabilised, it may be time to transition to the writing phase of the Polymath15 project. (There are still some interesting research questions to pursue, such as numerically investigating the zeroes of $H_t$ for negative values of $t$, but the writeup does not necessarily have to contain every single direction pursued in the project. If enough additional interesting findings are unearthed then one could always consider writing a second paper, for instance.)
Below the fold is the detailed progress report on the numerics by Rudolph Dwars and Kalpesh Muchhal.
This is the seventh “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
The most recent news is that we appear to have completed the verification that is free of zeroes when
and
, which implies that $\Lambda \leq 0.48$
. For very large
(for instance when the quantity
is at least
) this can be done analytically; for medium values of
(say when
is between
and
) this can be done by numerically evaluating a fast approximation
to
and using the argument principle in a rectangle; and most recently it appears that we can also handle small values of
, in part due to some new, and significantly faster, numerical ways to evaluate
in this range.
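The argument principle computation mentioned here can be illustrated schematically as follows (a toy example with a simple test function, not the actual verification used in the project): the number of zeroes inside a rectangle is the winding number $\frac{1}{2\pi i} \oint f'/f$, which can be evaluated numerically with mpmath:

    import mpmath as mp

    mp.mp.dps = 30

    def zeros_in_rectangle(f, x0, x1, y0, y1):
        """Number of zeroes of f strictly inside the rectangle [x0,x1] x [y0,y1],
        computed as the winding number (1/(2*pi*i)) * contour integral of f'/f."""
        corners = [mp.mpc(x0, y0), mp.mpc(x1, y0), mp.mpc(x1, y1),
                   mp.mpc(x0, y1), mp.mpc(x0, y0)]
        total = mp.mpf(0)
        for a, b in zip(corners, corners[1:]):
            total += mp.quad(lambda s: mp.diff(f, a + s * (b - a))
                             / f(a + s * (b - a)) * (b - a), [0, 1])
        return int(mp.nint(mp.re(total / (2j * mp.pi))))

    # Toy example: z^2 + 1 has zeroes at +i and -i; only +i lies in this rectangle.
    print(zeros_in_rectangle(lambda z: z**2 + 1, -0.5, 0.5, 0.5, 1.5))   # prints 1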
One obvious thing to do now is to experiment with lowering the parameters and
and see what happens. However there are two other potential ways to bound
which may also be numerically feasible. One approach is based on trying to exclude zeroes of
in a region of the form
,
and
for some moderately large
(this acts as a “barrier” to prevent zeroes from flowing into the region
at time
, assuming that they were not already there at time
). This requires significantly less numerical verification in the
aspect, but more numerical verification in the
aspect, so it is not yet clear whether this is a net win.
Another, rather different approach, is to study the evolution of statistics such as over time. One has fairly good control on such quantities at time zero, and their time derivative looks somewhat manageable, so one may be able to still have good control on this quantity at later times
. However for this approach to work, one needs an effective version of the Riemann-von Mangoldt formula for
, which at present is only available asymptotically (or at time
). This approach may be able to avoid almost all numerical computation, except for numerical verification of the Riemann hypothesis, for which we can appeal to existing literature.
Participants are also welcome to add any further summaries of the situation in the comments below.
This is the sixth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
The last two threads have been focused primarily on the test problem of showing that whenever
. We have been able to prove this for most regimes of
, or equivalently for most regimes of the natural number parameter
. In many of these regimes, a certain explicit approximation
to
was used, together with a non-zero normalising factor
; see the wiki for definitions. The explicit upper bound
has been proven for certain explicit expressions (see here) depending on
. In particular, if
satisfies the inequality
then is non-vanishing thanks to the triangle inequality. (In principle we have an even more accurate approximation
available, but it is looking like we will not need it for this test problem at least.)
We have explicit upper bounds on ,
,
; see this wiki page for details. They are tabulated in the range
here. For
, the upper bound
for
is monotone decreasing, and is in particular bounded by
, while
and
are known to be bounded by
and
respectively (see here).
Meanwhile, the quantity can be lower bounded by
for certain explicit coefficients and an explicit complex number
. Using the triangle inequality to lower bound this by
we can obtain a lower bound of for
, which settles the test problem in this regime. One can get more efficient lower bounds by multiplying both Dirichlet series by a suitable Euler product mollifier; we have found
for
to be good choices to get a variety of further lower bounds depending only on
, see this table and this wiki page. Comparing this against our tabulated upper bounds for the error terms we can handle the range
.
In the range , we have been able to obtain a suitable lower bound
(where
exceeds the upper bound for
) by numerically evaluating
at a mesh of points for each choice of
, with the mesh spacing being adaptive and determined by
and an upper bound for the derivative of
; the data is available here.
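The mesh-based verification can be sketched in a toy setting as follows (illustrative only; the function, derivative bound and thresholds are made up): if one has the value of the function at a mesh point and an upper bound on its derivative, then the function cannot vanish within an explicit radius of that point, and the next mesh point can be placed adaptively at that distance:

    import math

    def verify_positive(f, a, b, deriv_bound, min_step=1e-9):
        """Certify that f > 0 on [a, b], given that |f'| <= deriv_bound there:
        around a mesh point x with f(x) > 0, f cannot vanish within radius
        f(x)/deriv_bound, so the next mesh point is placed at that distance."""
        x = a
        while x < b:
            fx = f(x)
            if fx <= 0:
                return False          # the certificate fails at this mesh point
            step = fx / deriv_bound   # zero-free radius guaranteed by the derivative bound
            if step < min_step:
                return False          # values too small to certify at reasonable cost
            x += step
        return f(b) > 0

    # Toy example: 2 + sin(10 x) has derivative bounded by 10 and stays positive.
    print(verify_positive(lambda x: 2 + math.sin(10 * x), 0.0, 3.0, 10.0))   # True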
This leaves the final range (roughly corresponding to
). Here we can numerically evaluate
to high accuracy at a fine mesh (see the data here), but to fill in the mesh we need good upper bounds on
. It seems that we can get reasonable estimates using some contour shifting from the original definition of
(see here). We are close to finishing off this remaining region and thus solving the toy problem.
Beyond this, we need to figure out how to show that for
as well. General theory lets one do this for
, leaving the region
. The analytic theory that handles
and
should also handle this region; for
presumably the argument principle will become relevant.
The full argument also needs to be streamlined and organised; right now it sprawls over many wiki pages and github code files. (A very preliminary writeup attempt has begun here.) We should also see if there is much hope of extending the methods to push much beyond the bound of $\Lambda \leq 0.48$ that we would get from the above calculations. This would also be a good time to start discussing whether to move to the writing phase of the project, or whether there are still fruitful research directions for the project to explore.
Participants are also welcome to add any further summaries of the situation in the comments below.
This is the fifth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
We have almost finished off the test problem of showing that whenever
. We have two useful approximations for
, which we have denoted
and
, and a normalising quantity
that is asymptotically equal to the above expressions; see the wiki page for definitions. In practice, the
approximation seems to be accurate within about one or two significant figures, whilst the
approximation is accurate to about three or four. We have an effective upper bound
where the expressions are quite small in practice (
is typically about two orders of magnitude smaller than the main term
once
is moderately large, and the error terms
are even smaller). See this page for details. In principle we could also obtain an effective upper bound for
(the
term would be replaced by something smaller).
The ratio takes the form of a difference
of two Dirichlet series, where
is a phase whose value is explicit but perhaps not terribly important, and the coefficients
are explicit and relatively simple (
is
, and
is approximately
). To bound this away from zero, we have found it advantageous to mollify this difference by multiplying by an Euler product
to cancel much of the initial oscillation; also one can take advantage of the fact that the
are real and the
are (approximately) real. See this page for details. The upshot is that we seem to be getting good lower bounds for the size of this difference of Dirichlet series starting from about
or so. The error terms
are already quite small by this stage, so we should soon be able to rigorously keep
from vanishing at this point. We also have a scheme for lower bounding the difference of Dirichlet series below this range, though it is not clear at present how far we can continue this before the error terms
become unmanageable. For very small
we may have to explore some faster ways to compute the expression
, which is still difficult to compute directly with high accuracy. One will also need to bound the somewhat unwieldy expressions
by something more manageable. For instance, right now these quantities depend on the continuous variable
; it would be preferable to have a quantity that depends only on the parameter
, as this could be computed numerically for all
in the remaining range of interest quite quickly.
As before, any other mathematical discussion related to the project is also welcome here, for instance any summaries of previous discussion that was not covered in this post.
This is the fourth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing https://terrytao.wordpress.com/2018/01/24/polymath-proposal-upper-bounding-the-de-bruijn-newman-constant/. Progress will be summarised at this Polymath wiki page.
We are getting closer to finishing off the following test problem: can one show that whenever
,
? This would morally show that
. A wiki page for this problem has now been created here. We have obtained a number of approximations
to
(see wiki page), though numeric evidence indicates that the approximations are all very close to each other. (Many of these approximations come with a correction term
, but thus far it seems that we may be able to avoid having to use this refinement to the approximations.) The effective approximation
also comes with an effective error bound
for some explicit (but somewhat messy) error terms : see this wiki page for details. The original approximations
can be considered deprecated at this point in favour of the (slightly more complicated) approximation
; the approximation
is a simplified version of
which is not quite as accurate but might be useful for testing purposes.
It is convenient to normalise everything by an explicit non-zero factor . Asymptotically,
converges to 1; numerically, it appears that its magnitude (and also its real part) stays roughly between 0.4 and 3 in the range
, and we seem to be able to keep it (or at least the toy counterpart
) away from zero starting from about
(here it seems that there is a useful trick of multiplying by Euler-type factors like
to cancel off some of the oscillation). Also, the bounds on the error
seem to be of size about 0.1 or better in these ranges also. So we seem to be on track to be able to rigorously eliminate zeroes starting from about
or so. We have not discussed too much what to do with the small values of
; at some point our effective error bounds will become unusable, and we may have to find some faster ways to compute
.
In addition to this main direction of inquiry, there have been additional discussions on the dynamics of zeroes, and some numerical investigations of the behaviour of Lehmer pairs under heat flow. Contributors are welcome to summarise any findings from these discussions from previous threads (or on any other related topic, e.g. improvements in the code) in the comments below.
This is the third “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
We are making progress on the following test problem: can one show that whenever
,
, and
? This would imply that
which would be the first quantitative improvement over the de Bruijn bound of $\Lambda \leq 1/2$ (or the Ki-Kim-Lee refinement of $\Lambda < 1/2$
). Of course we can try to lower the two parameters of
later on in the project, but this seems as good a place to start as any. One could also potentially try to use finer analysis of dynamics of zeroes to improve the bound
further, but this seems to be a less urgent task.
Probably the hardest case is , as there is a good chance that one can then recover the
case by a suitable use of the argument principle. Here we appear to have a workable Riemann-Siegel type formula that gives a tractable approximation for
. To describe this formula, first note that in the
case we have
and the Riemann-Siegel formula gives
for any natural numbers , where
is a contour from
to
that winds once anticlockwise around the zeroes
of
but does not wind around any other zeroes. A good choice of
to use here is
In this case, a classical steepest descent computation (see wiki) yields the approximation
where
Thus we have
where
with and
given by (1).
Heuristically, we have derived (see wiki) the more general approximation
for (and in particular for
), where
In practice it seems that the term is negligible once the real part
of
is moderately large, so one also has the approximation
For large , and for fixed
, e.g.
, the sums
converge fairly quickly (in fact the situation seems to be significantly better here than the much more intensively studied
case), and we expect the first term
of the series to dominate. Indeed, analytically we know that
(or
) as
(holding
fixed), and it should also be provable that
as well. Numerically with
, it seems in fact that
(or
) stay within a distance of about
of
once
is moderately large (e.g.
). This raises the hope that one can solve the toy problem of showing
for
by numerically controlling
for small
(e.g.
), numerically controlling
and analytically bounding the error
for medium
(e.g.
), and analytically bounding both
and
for large
(e.g.
). (These numbers
and
are arbitrarily chosen here and may end up being optimised to something else as the computations become clearer.)
Thus, we now have four largely independent tasks (for suitable ranges of “small”, “medium”, and “large” ):
- Numerically computing
for small
(with enough accuracy to verify that there are no zeroes)
- Numerically computing
for medium
(with enough accuracy to keep it away from zero)
- Analytically bounding
for large
(with enough accuracy to keep it away from zero); and
- Analytically bounding
for medium and large
(with a bound that is better than the bound away from zero in the previous two tasks).
Note that tasks 2 and 3 do not directly require any further understanding of the function .
Below we will give a progress report on the numeric and analytic sides of these tasks.
— 1. Numerics report (contributed by Sujit Nair) —
There is some progress on the code side but not at the pace I was hoping. Here are a few things which happened (rather, mistakes which were taken care of).
- We got rid of code which wasn’t being used. For example, @dhjpolymath computed
based on an old version but only realized it after the fact.
- We implemented tests to catch human/numerical bugs before a computation starts. Again, we lost some numerical cycles but moving forward these can be avoided.
- David got set up on GitHub and he is able to compare his output (in C) with the Python code. That is helping a lot.
Two areas which were worked on were
- Computing
and zeroes for
around
- Computing quantities like
,
,
, etc. with the goal of understanding the zero free regions.
Some observations for ,
,
include:
does seem to avoid the negative real axis
(based on the oscillations and trends in the plots)
seems to be settling around
range.
See the figure below. The top plot is on the complex plane and the bottom plot is the absolute value. The code to play with this is here.
— 2. Analysis report —
The Riemann-Siegel formula and some manipulations (see wiki) give , where
where is a contour that goes from
to
staying a bounded distance away from the upper imaginary and right real axes, and
is the complex conjugate of
. (In each of these sums, it is the first term that should dominate, with the second one being about
as large.) One can then evolve by the heat flow to obtain
, where
Steepest descent heuristics then predict that ,
, and
. For the purposes of this project, we will need effective error estimates here, with explicit error terms.
A start has been made towards this goal at this wiki page. Firstly there is an “effective Laplace method” lemma that gives effective bounds on integrals of the form if the real part of
is either monotone with large derivative, or has a critical point and is decreasing on both sides of that critical point. In principle, all one has to do is manipulate expressions such as
,
,
by change of variables, contour shifting and integration by parts until it is of the form to which the above lemma can be profitably applied. As one may imagine though the computations are messy, particularly for the
term. As a warm up, I have begun by trying to estimate integrals of the form
for smallish complex numbers , as these sorts of integrals appear in the form of
. As of this time of writing, there are effective bounds for the
case, and I am currently working on extending them to the
case, which should give enough control to approximate
and
. The most complicated task will be that of upper bounding
, but it also looks eventually doable.
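As a sanity check on the Laplace/steepest descent heuristic (a toy Gaussian example with made-up parameters X and a, not one of the integrals actually arising here), one can compare a direct numerical evaluation against the closed form that the method predicts:

    import mpmath as mp

    mp.mp.dps = 30
    X = mp.mpf(40)   # a large decay parameter
    a = mp.mpf(3)    # a smallish linear coefficient

    # Direct numerical evaluation of I = int_R exp(-X u^2 / 2 + i a u) du.
    direct = mp.quad(lambda u: mp.exp(-X * u**2 / 2 + 1j * a * u), [-mp.inf, mp.inf])

    # The closed form that completing the square (and the Laplace heuristic) predicts:
    predicted = mp.sqrt(2 * mp.pi / X) * mp.exp(-a**2 / (2 * X))

    print(direct)      # real part matches `predicted`; imaginary part is negligible
    print(predicted)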