
The Polymath15 paper “Effective approximation of heat flow evolution of the Riemann {\xi} function, and a new upper bound for the de Bruijn-Newman constant“, submitted to Research in the Mathematical Sciences, has just been uploaded to the arXiv. This paper records the mix of theoretical and computational work needed to improve the upper bound on the de Bruijn-Newman constant {\Lambda}. This constant can be defined as follows. The function

\displaystyle H_0(z) := \frac{1}{8} \xi\left(\frac{1}{2} + \frac{iz}{2}\right),

where {\xi} is the Riemann {\xi} function

\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s)

has a Fourier representation

\displaystyle H_0(z) = \int_0^\infty \Phi(u) \cos(zu)\ du

where {\Phi} is the super-exponentially decaying function

\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u} ) \exp(-\pi n^2 e^{4u} ).

The Riemann hypothesis is equivalent to the claim that all the zeroes of {H_0} are real. De Bruijn introduced (in different notation) the deformations

\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du

of {H_0}; one can view this as the solution to the backwards heat equation {\partial_t H_t = -\partial_{zz} H_t} starting at {H_0}. From the work of de Bruijn and of Newman, it is known that there exists a real number {\Lambda} – the de Bruijn-Newman constant – such that {H_t} has all zeroes real for {t \geq \Lambda} and has at least one non-real zero for {t < \Lambda}. In particular, the Riemann hypothesis is equivalent to the assertion {\Lambda \leq 0}. Prior to this paper, the best known bounds for this constant were

\displaystyle 0 \leq \Lambda < 1/2

with the lower bound due to Rodgers and myself, and the upper bound due to Ki, Kim, and Lee. One of the main results of the paper is to improve the upper bound to

\displaystyle \Lambda \leq 0.22. \ \ \ \ \ (1)

At a purely numerical level this gets “closer” to proving the Riemann hypothesis, but the methods of proof take as input a finite numerical verification of the Riemann hypothesis up to some given height {T} (in our paper we take {T \sim 3 \times 10^{10}}) and convert this (and some other numerical verification) to an upper bound on {\Lambda} that is of order {O(1/\log T)}. As discussed in the final section of the paper, further improvement of the numerical verification of RH would thus lead to modest improvements in the upper bound on {\Lambda}, although it does not seem likely that our methods could for instance improve the bound to below {0.1} without an infeasible amount of computation.
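For readers who would like to experiment with these objects directly, here is a minimal numerical sketch (in Python, using mpmath) that evaluates {H_t(z)} straight from the integral and series definitions above; the truncation parameters and the test points are arbitrary illustrative choices, not values used in the paper.

    import mpmath as mp

    # Evaluate Phi(u) by truncating its super-exponentially convergent series.
    def Phi(u, terms=10):
        return sum((2 * mp.pi**2 * n**4 * mp.exp(9*u) - 3 * mp.pi * n**2 * mp.exp(5*u))
                   * mp.exp(-mp.pi * n**2 * mp.exp(4*u)) for n in range(1, terms + 1))

    # Evaluate H_t(z) by numerically integrating e^{t u^2} Phi(u) cos(zu); the integrand
    # is negligible beyond u = 1 thanks to the super-exponential decay of Phi.
    def H(z, t=0):
        return mp.quad(lambda u: mp.exp(t * u**2) * Phi(u) * mp.cos(z * u), [0, 1])

    # Sanity check: H_0(z) = (1/8) xi(1/2 + iz/2) vanishes when 1/2 + iz/2 is a zero of zeta,
    # i.e. near z = 2 * 14.1347... (twice the ordinate of the first nontrivial zero).
    print(H(mp.mpf("28.26945"), t=0))   # should be very close to zero
    print(H(mp.mpf("30"), t=0))         # noticeably larger in magnitude

Direct integration of this type is only practical for small {z}; the bounds in the paper rely instead on the effective approximations described below.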

We now discuss the methods of proof. An existing result of de Bruijn shows that if all the zeroes of {H_{t_0}(z)} lie in the strip {\{ x+iy: |y| \leq y_0\}}, then {\Lambda \leq t_0 + \frac{1}{2} y_0^2}; we will verify this hypothesis with {t_0=y_0=0.2}, which gives (1) since {0.2 + \frac{1}{2} (0.2)^2 = 0.22}. Using the symmetries and the known zero-free regions, it suffices to show that

\displaystyle H_{0.2}(x+iy) \neq 0 \ \ \ \ \ (2)

whenever {x \geq 0} and {0.2 \leq y \leq 1}.

For large {x} (specifically, {x \geq 6 \times 10^{10}}), we use effective numerical approximation to {H_t(x+iy)} to establish (2), as discussed in a bit more detail below. For smaller values of {x}, the existing numerical verification of the Riemann hypothesis (we use the results of Platt) shows that

\displaystyle H_0(x+iy) \neq 0

for {0 \leq x \leq 6 \times 10^{10}} and {0.2 \leq y \leq 1}. The problem though is that this result only controls {H_t} at time {t=0} rather than the desired time {t = 0.2}. To bridge the gap we need to erect a “barrier” that, roughly speaking, verifies that

\displaystyle H_t(x+iy) \neq 0 \ \ \ \ \ (3)

for {0 \leq t \leq 0.2}, {x = 6 \times 10^{10} + O(1)}, and {0.2 \leq y \leq 1}; with a little bit of work this barrier shows that zeroes cannot sneak in from the right of the barrier to the left in order to produce counterexamples to (2) for small {x}.

To enforce this barrier, and to verify (2) for large {x}, we need to approximate {H_t(x+iy)} for positive {t}. Our starting point is the Riemann-Siegel formula, which roughly speaking is of the shape

\displaystyle H_0(x+iy) \approx B_0(x+iy) ( \sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2}}} + \gamma_0(x+iy) \sum_{n=1}^N \frac{n^y}{n^{\frac{1+y+ix}{2}}} )

where {N := \sqrt{x/4\pi}}, {B_0(x+iy)} is an explicit “gamma factor” that decays exponentially in {x}, and {\gamma_0(x+iy)} is a ratio of gamma functions that is roughly of size {(x/4\pi)^{-y/2}}. Deforming this by the heat flow gives rise to an approximation roughly of the form

\displaystyle H_t(x+iy) \approx B_t(x+iy) ( \sum_{n=1}^N \frac{b_n^t}{n^{s_*}} + \gamma_t(x+iy) \sum_{n=1}^N \frac{n^y}{n^{\overline{s_*}}} ) \ \ \ \ \ (4)

where {B_t(x+iy)} and {\gamma_t(x+iy)} are variants of {B_0(x+iy)} and {\gamma_0(x+iy)}, {b_n^t := \exp( \frac{t}{4} \log^2 n )}, and {s_*} is an exponent which is roughly {\frac{1+y-ix}{2} + \frac{t}{4} \log \frac{x}{4\pi}}. In particular, for positive values of {t}, {s_*} increases (logarithmically) as {x} increases, and the two sums in the Riemann-Siegel formula become increasingly convergent (even in the face of the slowly increasing coefficients {b_n^t}). For very large values of {x} (in the range {x \geq \exp(C/t)} for a large absolute constant {C}), the {n=1} terms of both sums dominate, and {H_t(x+iy)} begins to behave in a sinusoidal fashion, with the zeroes “freezing” into an approximate arithmetic progression on the real line much like the zeroes of the sine or cosine functions (we give some asymptotic theorems that formalise this “freezing” effect). This lets one verify (2) for extremely large values of {x} (e.g., {x \geq 10^{12}}). For slightly less large values of {x}, we first multiply the Riemann-Siegel formula by an “Euler product mollifier” to reduce some of the oscillation in the sum and make the series converge better; we also use a technical variant of the triangle inequality to improve the bounds slightly. These are sufficient to establish (2) for moderately large {x} (say {x \geq 6 \times 10^{10}}) with only a modest amount of computational effort (a few seconds after all the optimisations; on my own laptop with very crude code I was able to verify all the computations in a matter of minutes).
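To get a rough quantitative feel for this improved convergence, one can tabulate approximate sizes of the terms in the first sum of (4), using only the rough formulas {b_n^t = \exp( \frac{t}{4} \log^2 n )} and {\mathrm{Re}(s_*) \approx \frac{1+y}{2} + \frac{t}{4} \log \frac{x}{4\pi}} quoted above; this is a heuristic sketch, not the paper's exact normalisation.

    import math

    # Approximate size of the n-th term b_n^t / n^{s_*} in the first sum of (4),
    # using the rough formulas for b_n^t and Re(s_*) described above (heuristic only).
    def term_size(n, x, y=0.2, t=0.2):
        re_s = (1 + y) / 2 + (t / 4) * math.log(x / (4 * math.pi))
        return math.exp((t / 4) * math.log(n) ** 2 - re_s * math.log(n))

    for x in (6e10, 1e13, 1e16):
        print(x, [f"{term_size(n, x):.3e}" for n in (1, 2, 3, 4, 5)])
    # Since Re(s_*) grows like (t/4) log x, the n >= 2 terms decay faster as x increases,
    # and eventually the n = 1 term dominates, consistent with the "freezing" behaviour.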

The most difficult computational task is the verification of the barrier (3), particularly when {t} is close to zero where the series in (4) converge quite slowly. We first use an Euler product heuristic approximation to {H_t(x+iy)} to decide where to place the barrier in order to make our numerical approximation to {H_t(x+iy)} as large in magnitude as possible (so that we can afford to work with a sparser set of mesh points for the numerical verification). In order to efficiently evaluate the sums in (4) for many different values of {x+iy}, we perform a Taylor expansion of the coefficients to factor the sums as combinations of other sums that do not actually depend on {x} and {y} and so can be re-used for multiple choices of {x+iy} after a one-time computation. At the scales we work in, this computation is still quite feasible (a handful of minutes after software and hardware optimisations); if one assumes larger numerical verifications of RH and lowers {t_0} and {y_0} to optimise the value of {\Lambda} accordingly, one could get down to an upper bound of {\Lambda \leq 0.1} assuming an enormous numerical verification of RH (up to height about {4 \times 10^{21}}) and a very large distributed computing project to perform the other numerical verifications.
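The factoring trick behind the barrier computation can be illustrated in a few lines: writing {n^{-s} = n^{-s_0} e^{-(s-s_0)\log n}} and Taylor expanding in {s - s_0}, a Dirichlet sum can be evaluated at many nearby values of {s} from a single set of precomputed inner sums that do not depend on {s}. The coefficients, expansion centre {s_0}, and truncations below are placeholders for illustration, not the values used in the project.

    import math
    import numpy as np

    N = 1000                        # length of the Dirichlet sum (illustrative)
    K = 20                          # number of Taylor terms retained
    a = np.ones(N)                  # placeholder coefficients (e.g. the b_n^t of (4))
    n = np.arange(1, N + 1, dtype=float)
    logn = np.log(n)

    s0 = 0.75 + 100.0j              # centre of the expansion (illustrative)
    # One-time computation: S_k = sum_n a_n (-log n)^k n^{-s0}, independent of s.
    S = np.array([np.sum(a * (-logn) ** k * n ** (-s0)) for k in range(K)])

    def dirichlet_sum_taylor(s):
        """Approximate sum_n a_n n^{-s} from the precomputed S_k."""
        ds = s - s0
        return sum(S[k] * ds ** k / math.factorial(k) for k in range(K))

    def dirichlet_sum_direct(s):
        return np.sum(a * n ** (-s))

    s = 0.8 + 100.3j                # a nearby point, as arises when varying x + iy
    print(abs(dirichlet_sum_taylor(s) - dirichlet_sum_direct(s)))   # small, since |s - s0| log N is modest

In the actual verification the analogues of these inner sums are computed once to high precision and then reused across the whole grid of values of {x+iy} in the barrier.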

This post can serve as the (presumably final) thread for the Polymath15 project (continuing this post), to handle any remaining discussion topics for that project.

[This post is collectively authored by the ICM structure committee, whom I am currently chairing – T.]

The International Congress of Mathematicians (ICM) is widely considered to be the premier conference for mathematicians.  It is held every four years; for instance, the 2018 ICM was held in Rio de Janeiro, Brazil, and the 2022 ICM is to be held in Saint Petersburg, Russia.  The most high-profile event at the ICM is the awarding of the 10 or so prizes of the International Mathematical Union (IMU) such as the Fields Medal, and the lectures by the prize laureates; but there are also approximately twenty plenary lectures from leading experts across all mathematical disciplines, several public lectures of a less technical nature, about 180 more specialised invited lectures divided into about twenty section panels, each corresponding to a mathematical field (or range of fields), as well as various outreach and social activities, exhibits and satellite programs, and meetings of the IMU General Assembly; see for instance the program for the 2018 ICM for a sample schedule.  In addition to these official events, the ICM also provides more informal networking opportunities, in particular allowing mathematicians at all stages of career, and from all backgrounds and nationalities, to interact with each other.

For each Congress, a Program Committee (together with subcommittees for each section) is entrusted with the task of selecting who will give the lectures of the ICM (excluding the lectures by prize laureates, which are selected by separate prize committees); they have also decided how to appropriately subdivide the entire field of mathematics into sections.   Given the prestigious nature of invitations from the ICM to present a lecture, this has been an important and challenging task, but one which past Program Committees have managed to fulfill in a largely satisfactory fashion.

Nevertheless, in the last few years there has been substantial discussion regarding ways in which the process for structuring the ICM and inviting lecturers could be further improved, for instance to reflect the fact that the distribution of mathematics across various fields has evolved over time.   At the 2018 ICM General Assembly meeting in Rio de Janeiro, a resolution was adopted to create a new Structure Committee to take on some of the responsibilities previously delegated to the Program Committee, focusing specifically on the structure of the scientific program.  On the other hand, the Structure Committee is not involved with the format for prize lectures, the selection of prize laureates, or the selection of plenary and sectional lecturers; these tasks are instead the responsibilities of other committees (the local Organizing Committee, the prize committees, and the Program Committee respectively).

The first Structure Committee was constituted on 1 Jan 2019, with the following members:

As one of our first actions, we on the committee are using this blog post to solicit input from the mathematical community regarding the topics within our remit.  Among the specific questions (in no particular order) for which we seek comments are the following:

  1. Are there suggestions to change the format of the ICM that would increase its value to the mathematical community?
  2. Are there suggestions to change the format of the ICM that would encourage greater participation and interest in attending, particularly with regards to junior researchers and mathematicians from developing countries?
  3. What is the correct balance between research and exposition in the lectures?  For instance, how strongly should one emphasize the importance of good exposition when selecting plenary and sectional speakers?  Should there be “Bourbaki style” expository talks presenting work not necessarily authored by the speaker?
  4. Is the balance between plenary talks, sectional talks, and public talks at an optimal level?  There is only a finite amount of space in the calendar, so any increase in the number or length of one of these types of talks will come at the expense of another.
  5. The ICM is generally perceived to be more important to pure mathematics than to applied mathematics.  In what ways can the ICM be made more relevant and attractive to applied mathematicians, or should one not try to do so?
  6. Are there structural barriers that cause certain areas or styles of mathematics (such as applied or interdisciplinary mathematics) or certain groups of mathematicians to be under-represented at the ICM?  What, if anything, can be done to mitigate these barriers?

Of course, we do not expect these complex and difficult questions to be resolved within this blog post, and debating these and other issues would likely be a major component of our internal committee discussions.  Nevertheless, we would value constructive comments towards the above questions (or on other topics within the scope of our committee) to help inform these subsequent discussions.  We therefore welcome and invite such commentary, either as responses to this blog post, or sent privately to one of the members of our committee.  We would also be interested in having readers share their personal experiences at past congresses, and how they compare with other major conferences of this type.   (But in order to keep the discussion focused and constructive, we request that comments here refrain from discussing topics that are out of the scope of this committee, such as suggesting specific potential speakers for the next congress, which is a task instead for the 2022 ICM Program Committee.)

While talking mathematics with a postdoc here at UCLA (March Boedihardjo) we came across the following matrix problem which we managed to solve, but the proof was cute and the process of discovering it was fun, so I thought I would present the problem here as a puzzle without revealing the solution for now.

The problem involves word maps on a matrix group, which for the sake of discussion we will take to be the special orthogonal group SO(3) of real 3 \times 3 matrices (one of the smallest matrix groups that contains a copy of the free group, which incidentally is the key observation powering the Banach-Tarski paradox).  Given any abstract word w in two generators x,y and their inverses (i.e., an element of the free group {\bf F}_2), one can define the word map w: SO(3) \times SO(3) \to SO(3) simply by substituting a pair of matrices in SO(3) into these generators.  For instance, if one has the word w = x y x^{-2} y^2 x, then the corresponding word map w: SO(3) \times SO(3) \to SO(3) is given by

\displaystyle w(A,B) := ABA^{-2} B^2 A

for A,B \in SO(3).  Because SO(3) contains a copy of the free group, we see the word map is non-trivial (not equal to the identity) if and only if the word itself is nontrivial.

Anyway, here is the problem:

Problem. Does there exist a sequence w_1, w_2, \dots of non-trivial word maps w_n: SO(3) \times SO(3) \to SO(3) that converge uniformly to the identity map?

To put it another way, given any \varepsilon > 0, does there exist a non-trivial word w such that \|w(A,B) - 1 \| \leq \varepsilon for all A,B \in SO(3), where \| \| denotes (say) the operator norm, and 1 denotes the identity matrix in SO(3)?

As I said, I don’t want to spoil the fun of working out this problem, so I will leave it as a challenge. Readers are welcome to share their thoughts, partial solutions, or full solutions in the comments below.
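For those who would like to experiment numerically, here is a small sketch (not a solution!) that evaluates the example word map above at random rotations and records how far it gets from the identity; the choice of word and the sample size are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_rotation():
        """A random element of SO(3), from the QR decomposition of a Gaussian matrix."""
        Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
        Q = Q * np.sign(np.diag(R))       # normalise column signs
        if np.linalg.det(Q) < 0:
            Q[:, 0] *= -1                 # force determinant +1
        return Q

    def word_map(A, B):
        """The example word w = x y x^{-2} y^2 x."""
        return A @ B @ np.linalg.matrix_power(A, -2) @ (B @ B) @ A

    # Empirical proxy for sup over the sample of ||w(A,B) - 1|| in operator norm, for this word.
    worst = max(np.linalg.norm(word_map(random_rotation(), random_rotation()) - np.eye(3), 2)
                for _ in range(1000))
    print(worst)

Of course, a finite sample can only suggest, not prove, anything about the supremum over all pairs (A,B); the problem asks whether some non-trivial word can make that supremum smaller than any given \varepsilon.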

This is the eleventh research thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

There are currently two strands of activity.  One is writing up the paper describing the combination of theoretical and numerical results needed to obtain the new bound \Lambda \leq 0.22.  The latest version of the writeup may be found here, in this directory.  The theoretical side of things has mostly been written up; the main remaining tasks to do right now are

  1. giving a more detailed description and illustration of the two major numerical verifications, namely the barrier verification that establishes the absence of zeroes of H_t(x+iy) for 0 \leq t \leq 0.2, 0.2 \leq y \leq 1, |x - 6 \times 10^{10} - 83952| \leq 0.5, and the Dirichlet series bound that establishes the absence of zeroes for t = 0.2, 0.2 \leq y \leq 1, x \geq 6 \times 10^{10} + 83952; and
  2. giving more detail on the conditional results assuming more numerical verification of RH.

Meanwhile, several of us have been exploring the behaviour of the zeroes of H_t for negative t; this does not directly lead to any new progress on bounding \Lambda (though there is a good chance that it may simplify the proof of \Lambda \geq 0), but there have been some interesting numerical phenomena uncovered, as summarised in this set of slides.  One phenomenon is that for large negative t, many of the complex zeroes begin to organise themselves near the curves

\displaystyle y = -\frac{t}{2} \log \frac{x}{4\pi n(n+1)} - 1.

(An example of the agreement between the zeroes and these curves may be found here.)  We now have a (heuristic) theoretical explanation for this; we should have an approximation

\displaystyle H_t(x+iy) \approx B_t(x+iy) \sum_{n=1}^\infty \frac{b_n^t}{n^{s_*}}

in this region (where B_t, b_n^t, n^{s_*} are defined in equations (11), (15), (17) of the writeup); the above curves arise from (an approximation of) those locations where two adjacent terms \frac{b_n^t}{n^{s_*}}, \frac{b_{n+1}^t}{(n+1)^{s_*}} in this series have equal magnitude (with the other terms being of lower order).
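For the record, the equal-magnitude computation behind these curves is short (and, like the approximation itself, heuristic). Using the rough approximations b_n^t \approx \exp( \frac{t}{4} \log^2 n ) and s_* \approx \frac{1+y-ix}{2} + \frac{t}{4} \log \frac{x}{4\pi}, one has

\displaystyle \left|\frac{b_n^t}{n^{s_*}}\right| = \exp\left( \frac{t}{4} \log^2 n - \left(\frac{1+y}{2} + \frac{t}{4} \log \frac{x}{4\pi}\right) \log n \right);

setting the exponents for n and n+1 equal and dividing by \log(n+1) - \log n gives

\displaystyle \frac{1+y}{2} = \frac{t}{4} \log(n(n+1)) - \frac{t}{4} \log \frac{x}{4\pi} = -\frac{t}{4} \log \frac{x}{4\pi n(n+1)},

which rearranges to the equation of the curves displayed above.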

However, we only have a partial explanation at present of the interesting behaviour of the real zeroes at negative t; for instance, the surviving zeroes at extremely negative values of t appear to lie close to the locations at which the quantity N is near a half-integer, where

\displaystyle \tilde x := x + \frac{\pi t}{4}

\displaystyle N := \sqrt{\frac{\tilde x}{4\pi}}

The remaining zeroes exhibit a pattern in (N,u) coordinates that is approximately 1-periodic in N, where

\displaystyle u := \frac{4\pi |t|}{\tilde x}.

A plot of the zeroes in these coordinates (somewhat truncated due to the numerical range) may be found here.

We do not yet have a total explanation of the phenomena seen in this picture.  It appears that we have an approximation

\displaystyle H_t(x) \approx A_t(x) \sum_{n=1}^\infty \exp( -\frac{|t| \log^2(n/N)}{4(1-\frac{iu}{8\pi})} - \frac{1+i\tilde x}{2} \log(n/N) )

where A_t(x) is the non-zero multiplier

\displaystyle A_t(x) := e^{\pi^2 t/64} M_0(\frac{1+i\tilde x}{2}) N^{-\frac{1+i\tilde x}{2}} \sqrt{\frac{\pi}{1-\frac{iu}{8\pi}}}

and

\displaystyle M_0(s) := \frac{1}{8}\frac{s(s-1)}{2}\pi^{-s/2} \sqrt{2\pi} \exp( (\frac{s}{2}-\frac{1}{2}) \log \frac{s}{2} - \frac{s}{2} )

The derivation of this formula may be found in this wiki page.  However our initial attempts to simplify the above approximation further have proven to be somewhat inaccurate numerically (in particular giving an incorrect prediction for the location of zeroes, as seen in this picture).  We are in the process of using numerics to try to resolve the discrepancies (see this page for some code and discussion).
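For those who want to experiment numerically with this approximation, here is a direct transcription of the three displayed formulas above into mpmath (taking them as given; the test values of x and t are arbitrary, and as just noted the approximation itself is heuristic).

    import mpmath as mp

    # Direct transcription of the heuristic approximation above: \tilde x, N, u, M_0, A_t,
    # and the truncated sum.  For experimentation only.
    def H_t_heuristic(x, t, terms=200):
        xt = x + mp.pi * t / 4                          # \tilde x
        N  = mp.sqrt(xt / (4 * mp.pi))
        u  = 4 * mp.pi * abs(t) / xt
        s  = (1 + 1j * xt) / 2                          # the exponent (1 + i \tilde x)/2
        M0 = (mp.mpf(1) / 8) * (s * (s - 1) / 2) * mp.pi ** (-s / 2) * mp.sqrt(2 * mp.pi) \
             * mp.exp((s / 2 - mp.mpf(1) / 2) * mp.log(s / 2) - s / 2)
        A  = mp.exp(mp.pi ** 2 * t / 64) * M0 * N ** (-s) * mp.sqrt(mp.pi / (1 - 1j * u / (8 * mp.pi)))
        S  = sum(mp.exp(-abs(t) * mp.log(n / N) ** 2 / (4 * (1 - 1j * u / (8 * mp.pi)))
                        - s * mp.log(n / N)) for n in range(1, terms + 1))
        return A * S

    print(H_t_heuristic(1000, -10))                     # illustrative values of x and t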

 

This is the tenth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

Most of the progress since the last thread has been on the numerical side, in which the various techniques to numerically establish zero-free regions to the equation H_t(x+iy)=0 have been streamlined, made faster, and extended to larger heights than were previously possible.  The best bound for \Lambda now depends on the height to which one is willing to assume the Riemann hypothesis.  Using the conservative verification up to height (slightly larger than) 3 \times 10^{10}, which has been confirmed by independent work of Platt et al. and Gourdon-Demichel, the best bound remains at \Lambda \leq 0.22.  Using the verification up to height 2.5 \times 10^{12} claimed by Gourdon-Demichel, this improves slightly to \Lambda \leq 0.19, and if one assumes the Riemann hypothesis up to height 5 \times 10^{19} the bound improves to \Lambda \leq 0.11, contingent on a numerical computation that is still underway.   (See the table below the fold for more data of this form.)  This is broadly consistent with the expectation that the bound on \Lambda should be inversely proportional to the logarithm of the height at which the Riemann hypothesis is verified.

As progress seems to have stabilised, it may be time to transition to the writing phase of the Polymath15 project.  (There are still some interesting research questions to pursue, such as numerically investigating the zeroes of H_t for negative values of t, but the writeup does not necessarily have to contain every single direction pursued in the project. If enough additional interesting findings are unearthed then one could always consider writing a second paper, for instance.)

Below the fold is the detailed progress report on the numerics by Rudolph Dwars and Kalpesh Muchhal.


This is the ninth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We have now tentatively improved the upper bound of the de Bruijn-Newman constant to {\Lambda \leq 0.22}. Among the technical improvements in our approach, we now are able to use Taylor expansions to efficiently compute the approximation {A+B} to {H_t(x+iy)} for many values of {x,y} in a given region, thus speeding up the computations in the barrier considerably. Also, by using the heuristic that {H_t(x+iy)} behaves somewhat like the partial Euler product {\prod_p (1 - \frac{1}{p^{\frac{1+y-ix}{2}}})^{-1}}, we were able to find a good location to place the barrier in which {H_t(x+iy)} is larger than average, hence easier to keep away from zero.
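As a toy illustration of the barrier-placement heuristic, one can scan integer offsets near a candidate location and select the {x} at which a partial Euler product (serving as a crude proxy for the size of {H_t(x+iy)}) is largest; the primes, the window, and the value of {y} below are arbitrary choices, and the actual search in the project was more careful than this.

    # Toy barrier-placement scan: pick the offset near X at which the partial Euler product
    # prod_{p <= P} (1 - 1/p^{(1+y-ix)/2})^{-1}, a crude proxy for |H_t|, is largest.
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

    def euler_proxy(x, y=0.2):
        s = (1 + y - 1j * x) / 2
        val = 1.0 + 0.0j
        for p in primes:
            val /= (1 - p ** (-s))
        return abs(val)

    X = 6e10                                   # candidate neighbourhood for the barrier
    best = max(range(10000), key=lambda k: euler_proxy(X + k))
    print(best, euler_proxy(X + best))         # offset with the largest proxy value in this window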

The main remaining bottleneck is that of computing the Euler mollifier bounds that keep {A+B} bounded away from zero for larger values of {x} beyond the barrier. In going below {0.22} we are beginning to need quite complicated mollifiers with somewhat poor tail behavior; we may be reaching the point where none of our bounds will succeed in keeping {A+B} bounded away from zero, so we may be close to the natural limits of our methods.

Participants are also welcome to add any further summaries of the situation in the comments below.

Just a quick announcement that Dustin Mixon and Aubrey de Grey have just launched the Polymath16 project over at Dustin’s blog.  The main goal of this project is to simplify the recent proof by Aubrey de Grey that the chromatic number of the unit distance graph of the plane is at least 5, thus making progress on the Hadwiger-Nelson problem.  The current proof is computer assisted (in particular it requires one to control the possible 4-colorings of a certain graph with over a thousand vertices), but one of the aims of the project is to reduce the amount of computer assistance needed to verify the proof; already a number of such reductions have been found.  See also this blog post where the polymath project was proposed, as well as the wiki page for the project.  Non-technical discussion of the project will continue at the proposal blog post.

I am recording here some notes on a nice problem that Sorin Popa shared with me recently. To motivate the question, we begin with the basic observation that the differentiation operator {Df(x) := \frac{d}{dx} f(x)} and the position operator {Xf(x) := xf(x)} in one dimension formally obey the commutator equation

\displaystyle  [D,X] = 1 \ \ \ \ \ (1)

where {1} is the identity operator and {[D,X] := DX-XD} is the commutator. Among other things, this equation is fundamental in quantum mechanics, leading for instance to the Heisenberg uncertainty principle.

The operators {D,X} are unbounded on spaces such as {L^2({\bf R})}. One can ask whether the commutator equation (1) can be solved using bounded operators {D,X \in B(H)} on a Hilbert space {H} rather than unbounded ones. In the finite dimensional case when {D, X} are just {n \times n} matrices for some {n \geq 1}, the answer is clearly negative, since the left-hand side of (1) has trace zero and the right-hand side does not. What about in infinite dimensions, when the trace is not available? As it turns out, the answer is still negative, as was first worked out by Wintner and Wielandt. A short proof can be given as follows. Suppose for contradiction that we can find bounded operators {D, X} obeying (1). From (1) and an easy induction argument, we obtain the identity

\displaystyle  [D,X^n] = n X^{n-1} \ \ \ \ \ (2)

for all natural numbers {n}. From the triangle inequality, this implies that

\displaystyle  n \| X^{n-1} \|_{op} \leq 2 \|D\|_{op} \| X^n \|_{op}.

Iterating this, we conclude that

\displaystyle  \| X \|_{op} \leq \frac{(2 \|D\|_{op})^{n-1}}{n!} \|X^n \|_{op}

for any {n}. Bounding {\|X^n\|_{op} \leq \|X\|_{op}^n} and then sending {n \rightarrow \infty}, we conclude that {\|X\|_{op}=0}, which clearly contradicts (1). (Note that the argument generalises without difficulty to the case when {D,X} lie in a Banach algebra, rather than being bounded operators on a Hilbert space.)

It was observed by Popa that there is a quantitative version of this result:

Theorem 1 Let {D, X \in B(H)} be such that

\displaystyle  \| [D,X] - 1 \|_{op} \leq \varepsilon

for some {\varepsilon > 0}. Then we have

\displaystyle  \| X \|_{op} \|D \|_{op} \geq \frac{1}{2} \log \frac{1}{\varepsilon}. \ \ \ \ \ (3)

Proof: By multiplying {D} by a suitable constant and dividing {X} by the same constant, we may normalise {\|D\|_{op}=1/2}. Write {DX - XD = 1 + E} with {\|E\|_{op} \leq \varepsilon}. Then the same induction that established (2) now shows that

\displaystyle  [D,X^n]= n X^{n-1} + X^{n-1} E + X^{n-2} E X + \dots + E X^{n-1}

and hence by the triangle inequality

\displaystyle  n \| X^{n-1} \|_{op} \leq \| X^n \|_{op} + n \varepsilon \|X\|_{op}^{n-1}.

We divide by {n!} and sum to conclude that

\displaystyle  \sum_{n=0}^\infty \frac{\|X^n\|_{op}}{n!} \leq \sum_{n=1}^\infty \frac{\|X^n\|_{op}}{n!} + \varepsilon \exp( \|X\|_{op} )

Cancelling the common sum {\sum_{n=1}^\infty \frac{\|X^n\|_{op}}{n!}} from both sides (which is finite, since {\|X^n\|_{op} \leq \|X\|_{op}^n}), we conclude that {1 \leq \varepsilon \exp( \|X\|_{op} )}, and the claim follows upon taking logarithms and recalling the normalisation {\|D\|_{op}=1/2}.
\Box

Again, the argument generalises easily to any Banach algebra. Popa then posed the question of whether the quantity {\frac{1}{2} \log \frac{1}{\varepsilon}} can be replaced by any substantially larger function of {\varepsilon}, such as a polynomial in {\frac{1}{\varepsilon}}. As far as I know, the above simple bound has not been substantially improved.

In the opposite direction, one can ask for constructions of operators {X,D} that are not too large in operator norm, such that {[D,X]} is close to the identity. Again, one cannot do this in finite dimensions: {[D,X]} has trace zero, so at least one of its eigenvalues must lie outside the disk {\{ z: |z-1| < 1\}} (every element of which has positive real part, whereas the eigenvalues sum to zero), and therefore {\|[D,X]-1\|_{op} \geq 1} for any finite-dimensional {n \times n} matrices {X,D}.

However, it was shown in 1965 by Brown and Pearcy that in infinite dimensions, one can construct operators {D,X} with {[D,X]} arbitrarily close to {1} in operator norm (in fact one can prescribe any operator for {[D,X]} as long as it is not equal to a non-zero multiple of the identity plus a compact operator). In the above paper of Popa, a quantitative version of the argument (based in part on some earlier work of Apostol and Zsido) was given as follows. The first step is to observe the following Hilbert space version of Hilbert’s hotel: in an infinite dimensional Hilbert space {H}, one can locate isometries {u, v \in B(H)} obeying the equation

\displaystyle  uu^* + vv^* = 1, \ \ \ \ \ (4)

where {u^*} denotes the adjoint of {u}. For instance, if {H} has a countable orthonormal basis {e_1, e_2, \dots}, one could set

\displaystyle  u := \sum_{n=1}^\infty e_{2n-1} e_n^*

and

\displaystyle  v := \sum_{n=1}^\infty e_{2n} e_n^*,

where {e_n^*} denotes the linear functional {x \mapsto \langle x, e_n \rangle} on {H}. Observe that (4) is again impossible to satisfy in finite dimension {n}: isometries on a finite dimensional space are unitary, so the left-hand side would have trace {2n} while the right-hand side has trace {n}.

As {u,v} are isometries, we have

\displaystyle  v^* v = u^* u = 1; \ \ \ \ \ (5)

Multiplying (4) on the left by {v^*} and right by {u}, or on the left by {u^*} and right by {v}, then gives

\displaystyle  v^* u = u^* v = 0. \ \ \ \ \ (6)

From (4), (5) we see in particular that, while we cannot express {1} as a commutator of bounded operators, we can at least express it as the sum of two commutators:

\displaystyle  [u^*, u] + [v^*, v] =1.

We can rewrite this somewhat strangely as

\displaystyle  [\frac{1}{2} u^*, 4u+2v] + [\frac{1}{2} u^* - v^*, -2v] = 2

and hence there exists a bounded operator {a} such that

\displaystyle  [\frac{1}{2} u^*, 4u+2v] = 1+a; \quad [\frac{1}{2} u^* - v^*, -2v] = 1-a.
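For the record, the somewhat strange rewriting above can be checked directly from (4), (5), (6): expanding the commutators,

\displaystyle  [\tfrac{1}{2} u^*, 4u+2v] = 2(u^* u - u u^*) + (u^* v - v u^*) = 2 - 2uu^* - vu^*

and

\displaystyle  [\tfrac{1}{2} u^* - v^*, -2v] = -(u^* v - v u^*) + 2(v^* v - v v^*) = vu^* + 2 - 2vv^*,

so that the sum of the two commutators is {4 - 2(uu^* + vv^*) = 2} as claimed.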

Moving now to the Banach algebra of {2 \times 2} matrices with entries in {B(H)} (which can be equivalently viewed as {B(H \oplus H)}), a short computation then gives the identity

\displaystyle  \left[ \begin{pmatrix} \frac{1}{2} u^* & 0 \\ a & \frac{1}{2} u^* - v^* \end{pmatrix}, \begin{pmatrix} 4u+2v & 1 \\ 0 & -2v \end{pmatrix} \right] = \begin{pmatrix} 1 & v^* \\ b & 1 \end{pmatrix}

for some bounded operator {b} whose exact form will not be relevant for the argument. Now, by Neumann series (and the fact that {u,v} have unit operator norm), we can find another bounded operator {c} such that

\displaystyle  c + \frac{1}{2} v c u^* = b,

and then another brief computation shows that

\displaystyle  \left[ \begin{pmatrix} \frac{1}{2} u^* & 0 \\ a & \frac{1}{2} u^* - v^* \end{pmatrix}, \begin{pmatrix} 4u+2v & 1 \\ vc & -2v \end{pmatrix} \right] = \begin{pmatrix} 1 & v^* \\ 0 & 1 \end{pmatrix}.

Thus we can express the operator {\begin{pmatrix} 1 & v^* \\ 0 & 1 \end{pmatrix}} as the commutator of two operators of norm {O(1)}. Conjugating by {\begin{pmatrix} \varepsilon^{1/2} & 0 \\ 0 & \varepsilon^{-1/2} \end{pmatrix}} for any {0 < \varepsilon \leq 1}, we may then express {\begin{pmatrix} 1 & \varepsilon v^* \\ 0 & 1 \end{pmatrix}} as the commutator of two operators of norm {O(\varepsilon^{-1})}. This shows that the right-hand side of (3) cannot be replaced with anything that blows up faster than {\varepsilon^{-2}} as {\varepsilon \rightarrow 0}. Can one improve this bound further?

This is the seventh “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The most recent news is that we appear to have completed the verification that {H_t(x+iy)} is free of zeroes when {t=0.4} and {y \geq 0.4}, which implies that {\Lambda \leq 0.48}. For very large {x} (for instance when the quantity {N := \lfloor \sqrt{\frac{x}{4\pi} + \frac{t}{16}} \rfloor} is at least {300}) this can be done analytically; for medium values of {x} (say when {N} is between {11} and {300}) this can be done by numerically evaluating a fast approximation {A^{eff} + B^{eff}} to {H_t} and using the argument principle in a rectangle; and most recently it appears that we can also handle small values of {x}, in part due to some new, and significantly faster, numerical ways to evaluate {H_t} in this range.
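For readers unfamiliar with how the argument principle is used numerically, here is a generic sketch of the rectangle check: one counts the zeroes of an analytic function inside a rectangle by accumulating the change of argument of the function along the boundary. The function used below is a toy placeholder; in the project one would instead feed in (a suitably normalised version of) the approximation {A^{eff}+B^{eff}}.

    import numpy as np

    def count_zeros_in_rectangle(f, x0, x1, y0, y1, pts_per_side=2000):
        # Count zeros of an analytic f inside [x0,x1] x [y0,y1] via the argument principle.
        # The mesh must be fine enough that the argument changes by less than pi between
        # consecutive sample points.
        corners = [x0 + 1j * y0, x1 + 1j * y0, x1 + 1j * y1, x0 + 1j * y1, x0 + 1j * y0]
        total = 0.0
        for a, b in zip(corners[:-1], corners[1:]):
            z = a + (b - a) * np.linspace(0.0, 1.0, pts_per_side)
            w = np.array([f(zz) for zz in z])
            total += np.sum(np.angle(w[1:] / w[:-1]))   # accumulated change of argument on this side
        return round(total / (2 * np.pi))

    # Toy check with a function whose two zeros both lie inside the square [-1,1] x [-1,1].
    print(count_zeros_in_rectangle(lambda z: (z - 0.3 - 0.4j) * (z + 0.2 - 0.1j), -1, 1, -1, 1))   # -> 2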

One obvious thing to do now is to experiment with lowering the parameters {t} and {y} and see what happens. However there are two other potential ways to bound {\Lambda} which may also be numerically feasible. One approach is based on trying to exclude zeroes of {H_t(x+iy)} in a region of the form {0 \leq t \leq t_0}, {X \leq x \leq X+1} and {y \geq y_0} for some moderately large {X} (this acts as a “barrier” to prevent zeroes from flowing into the region {\{ 0 \leq x \leq X, y \geq y_0 \}} at time {t_0}, assuming that they were not already there at time {0}). This requires significantly less numerical verification in the {x} aspect, but more numerical verification in the {t} aspect, so it is not yet clear whether this is a net win.

Another, rather different approach, is to study the evolution of statistics such as {S(t) = \sum_{H_t(x+iy)=0: x,y>0} y e^{-x/X}} over time. One has fairly good control on such quantities at time zero, and their time derivative looks somewhat manageable, so one may be able to still have good control on this quantity at later times {t_0>0}. However for this approach to work, one needs an effective version of the Riemann-von Mangoldt formula for {H_t}, which at present is only available asymptotically (or at time {t=0}). This approach may be able to avoid almost all numerical computation, except for numerical verification of the Riemann hypothesis, for which we can appeal to existing literature.

Participants are also welcome to add any further summaries of the situation in the comments below.

This is the sixth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The last two threads have been focused primarily on the test problem of showing that {H_t(x+iy) \neq 0} whenever {t = y = 0.4}. We have been able to prove this for most regimes of {x}, or equivalently for most regimes of the natural number parameter {N := \lfloor \sqrt{\frac{x}{4\pi} + \frac{t}{16}} \rfloor}. In many of these regimes, a certain explicit approximation {A^{eff}+B^{eff}} to {H_t} was used, together with a non-zero normalising factor {B^{eff}_0}; see the wiki for definitions. The explicit upper bound

\displaystyle  |H_t - A^{eff} - B^{eff}| \leq E_1 + E_2 + E_3

has been proven for certain explicit expressions {E_1, E_2, E_3} (see here) depending on {x}. In particular, if {x} satisfies the inequality

\displaystyle  |\frac{A^{eff}+B^{eff}}{B^{eff}_0}| > \frac{E_1}{|B^{eff}_0|} + \frac{E_2}{|B^{eff}_0|} + \frac{E_3}{|B^{eff}_0|}

then {H_t(x+iy)} is non-vanishing thanks to the triangle inequality. (In principle we have an even more accurate approximation {A^{eff}+B^{eff}-C^{eff}} available, but it is looking like we will not need it for this test problem at least.)

We have explicit upper bounds on {\frac{E_1}{|B^{eff}_0|}}, {\frac{E_2}{|B^{eff}_0|}}, {\frac{E_3}{|B^{eff}_0|}}; see this wiki page for details. They are tabulated in the range {3 \leq N \leq 2000} here. For {N \geq 2000}, the upper bound {\frac{E_3^*}{|B^{eff}_0|}} for {\frac{E_3}{|B^{eff}_0|}} is monotone decreasing, and is in particular bounded by {1.53 \times 10^{-5}}, while {\frac{E_2}{|B^{eff}_0|}} and {\frac{E_1}{|B^{eff}_0|}} are known to be bounded by {2.9 \times 10^{-7}} and {2.8 \times 10^{-8}} respectively (see here).

Meanwhile, the quantity {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|} can be lower bounded by

\displaystyle  |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}|

for certain explicit coefficients {a_n,b_n} and an explicit complex number {s = \sigma + i\tau}. Using the triangle inequality to lower bound this by

\displaystyle  |b_1| - \sum_{n=2}^N \frac{|b_n|}{n^\sigma} - \sum_{n=1}^N \frac{|a_n|}{n^\sigma}

we can obtain a lower bound of {0.18} for {N \geq 2000}, which settles the test problem in this regime. One can get more efficient lower bounds by multiplying both Dirichlet series by a suitable Euler product mollifier; we have found {\prod_{p \leq P} (1 - \frac{b_p}{p^s})} for {P=2,3,5,7} to be good choices to get a variety of further lower bounds depending only on {N}, see this table and this wiki page. Comparing this against our tabulated upper bounds for the error terms we can handle the range {300 \leq N \leq 2000}.
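To see what the Euler product mollifier does at the level of coefficients, note that multiplying the Dirichlet series {\sum_{n=1}^N \frac{b_n}{n^s}} by {\prod_{p \leq P} (1 - \frac{b_p}{p^s})} is just a Dirichlet convolution, which zeroes out the coefficients at the primes {p \leq P} and reduces nearby ones. The sketch below uses the heuristic coefficients {b_n = \exp(\frac{t}{4} \log^2 n)} purely for illustration; the actual coefficients and normalisations are in the wiki pages linked above.

    import math

    t, M = 0.4, 50
    b = {n: math.exp((t / 4) * math.log(n) ** 2) for n in range(1, M + 1)}   # illustrative b_n

    # Coefficients of the mollifier prod_{p <= 7} (1 - b_p p^{-s}), indexed by squarefree m.
    moll = {1: 1.0}
    for p in (2, 3, 5, 7):
        moll = {m * q: c * d for m, c in moll.items() for q, d in {1: 1.0, p: -b[p]}.items()}

    # Dirichlet convolution: coefficient of n^{-s} in the mollified series.
    prod = {}
    for m, c in moll.items():
        for n in range(1, M // m + 1):
            prod[m * n] = prod.get(m * n, 0.0) + c * b[n]

    print([round(prod.get(n, 0.0), 4) for n in range(1, 13)])
    # coefficient 1 at n = 1, exactly 0 at n = 2, 3, 5, 7, and reduced coefficients elsewhere,
    # which is the sense in which the mollifier reduces the oscillation of the series.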

In the range {11 \leq N \leq 300}, we have been able to obtain a suitable lower bound {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}| \geq c} (where {c} exceeds the upper bound for {\frac{E_1}{|B^{eff}_0|} + \frac{E_2}{|B^{eff}_0|} + \frac{E_3}{|B^{eff}_0|}}) by numerically evaluating {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|} at a mesh of points for each choice of {N}, with the mesh spacing being adaptive and determined by {c} and an upper bound for the derivative of {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|}; the data is available here.
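The adaptive mesh idea can be sketched in a few lines: if {|f|} exceeds the target threshold {c} at a mesh point, and one has an upper bound {B} for the derivative on the interval, then {|f|} stays at least {c} for a distance {(|f(x)|-c)/B}, which is therefore a safe step to the next mesh point. The function, threshold, and derivative bound below are toy placeholders, and in practice one would keep a small safety margin rather than stepping the full distance.

    import math

    def verify_nonvanishing(f, a, b, c, B):
        """Certify that |f| >= c on [a, b], given |f'| <= B there, using an adaptive mesh."""
        x = a
        while x <= b:
            v = abs(f(x))
            if v <= c:
                return False                  # cannot certify at this mesh point
            x += (v - c) / B                  # |f| >= c is guaranteed up to the next point
        return True

    # Toy example: certify |2 + sin(x)| >= 0.5 on [0, 10], using the derivative bound |cos| <= 1.
    print(verify_nonvanishing(lambda x: 2 + math.sin(x), 0.0, 10.0, 0.5, 1.0))   # True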

This leaves the final range {N \leq 10} (roughly corresponding to {x \leq 1600}). Here we can numerically evaluate {H_t(x+iy)} to high accuracy at a fine mesh (see the data here), but to fill in the mesh we need good upper bounds on {H'_t(x+iy)}. It seems that we can get reasonable estimates using some contour shifting from the original definition of {H_t} (see here). We are close to finishing off this remaining region and thus solving the toy problem.

Beyond this, we need to figure out how to show that {H_t(x+iy) \neq 0} for {y > 0.4} as well. General theory lets one do this for {y \geq \sqrt{1-2t} = 0.447\dots}, leaving the region {0.4 < y < 0.448}. The analytic theory that handles {N \geq 2000} and {300 \leq N \leq 2000} should also handle this region; for {N \leq 300} presumably the argument principle will become relevant.

The full argument also needs to be streamlined and organised; right now it sprawls over many wiki pages and github code files. (A very preliminary writeup attempt has begun here). We should also see if there is much hope of extending the methods to push much beyond the bound of {\Lambda \leq 0.48} that we would get from the above calculations. This would also be a good time to start discussing whether to move to the writing phase of the project, or whether there are still fruitful research directions for the project to explore.

Participants are also welcome to add any further summaries of the situation in the comments below.
