The Polymath15 paper “Effective approximation of heat flow evolution of the Riemann {\xi} function, and a new upper bound for the de Bruijn-Newman constant”, submitted to Research in the Mathematical Sciences, has just been uploaded to the arXiv. This paper records the mix of theoretical and computational work needed to improve the upper bound on the de Bruijn-Newman constant {\Lambda}. This constant can be defined as follows. The function

\displaystyle H_0(z) := \frac{1}{8} \xi\left(\frac{1}{2} + \frac{iz}{2}\right),

where {\xi} is the Riemann {\xi} function

\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s)

has a Fourier representation

\displaystyle H_0(z) = \int_0^\infty \Phi(u) \cos(zu)\ du

where {\Phi} is the super-exponentially decaying function

\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u} ) \exp(-\pi n^2 e^{4u} ).
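Since {\Phi} decays so rapidly, the Fourier representation is easy to explore numerically. The following Python sketch (a crude trapezoid-rule check, with the value of {\zeta(1/2)} hardcoded as a known constant, since the standard library has no zeta function) confirms that the integral at {z=0} reproduces {\frac{1}{8} \xi(\frac{1}{2})}:

```python
import math

def Phi(u, nmax=20):
    """The super-exponentially decaying function Phi(u)."""
    return sum((2 * math.pi**2 * n**4 * math.exp(9 * u)
                - 3 * math.pi * n**2 * math.exp(5 * u))
               * math.exp(-math.pi * n**2 * math.exp(4 * u))
               for n in range(1, nmax + 1))

# H_0(0) = int_0^infty Phi(u) du; Phi(1) is already of size ~ exp(-170),
# so truncating the integral to [0, 1] loses essentially nothing.
h = 1e-4
grid = [Phi(k * h) for k in range(10001)]
H0_at_zero = h * (sum(grid) - 0.5 * (grid[0] + grid[-1]))  # trapezoid rule

# Cross-check against (1/8) xi(1/2), using the known numerical value of
# zeta(1/2) (an assumed constant, not computed here).
zeta_half = -1.4603545088
xi_half = (0.5 * (0.5 - 1) / 2) * math.pi**(-0.25) * math.gamma(0.25) * zeta_half
assert abs(H0_at_zero - xi_half / 8) < 1e-5
```

Both quantities come out to roughly {0.0621}, in agreement with the Fourier representation.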

The Riemann hypothesis is equivalent to the claim that all the zeroes of {H_0} are real. De Bruijn introduced (in different notation) the deformations

\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du

of {H_0}; one can view this as the solution to the backwards heat equation {\partial_t H_t = -\partial_{zz} H_t} starting at {H_0}. From the work of de Bruijn and of Newman, it is known that there exists a real number {\Lambda} – the de Bruijn-Newman constant – such that {H_t} has all zeroes real for {t \geq \Lambda} and has at least one non-real zero for {t < \Lambda}. In particular, the Riemann hypothesis is equivalent to the assertion {\Lambda \leq 0}. Prior to this paper, the best known bounds for this constant were

\displaystyle 0 \leq \Lambda < 1/2

with the lower bound due to Rodgers and myself, and the upper bound due to Ki, Kim, and Lee. One of the main results of the paper is to improve the upper bound to

\displaystyle \Lambda \leq 0.22. \ \ \ \ \ (1)

At a purely numerical level this gets “closer” to proving the Riemann hypothesis, but the methods of proof take as input a finite numerical verification of the Riemann hypothesis up to some given height {T} (in our paper we take {T \sim 3 \times 10^{10}}) and convert this (and some other numerical verification) to an upper bound on {\Lambda} that is of order {O(1/\log T)}. As discussed in the final section of the paper, further improvement of the numerical verification of RH would thus lead to modest improvements in the upper bound on {\Lambda}, although it does not seem likely that our methods could for instance improve the bound to below {0.1} without an infeasible amount of computation.

We now discuss the methods of proof. An existing result of de Bruijn shows that if all the zeroes of {H_{t_0}(z)} lie in the strip {\{ x+iy: |y| \leq y_0\}}, then {\Lambda \leq t_0 + \frac{1}{2} y_0^2}; we will verify this hypothesis with {t_0=y_0=0.2}, thus giving (1). Using the symmetries and the known zero-free regions, it suffices to show that

\displaystyle H_{0.2}(x+iy) \neq 0 \ \ \ \ \ (2)

whenever {x \geq 0} and {0.2 \leq y \leq 1}.

For large {x} (specifically, {x \geq 6 \times 10^{10}}), we use effective numerical approximation to {H_t(x+iy)} to establish (2), as discussed in a bit more detail below. For smaller values of {x}, the existing numerical verification of the Riemann hypothesis (we use the results of Platt) shows that

\displaystyle H_0(x+iy) \neq 0

for {0 \leq x \leq 6 \times 10^{10}} and {0.2 \leq y \leq 1}. The problem though is that this result only controls {H_t} at time {t=0} rather than the desired time {t = 0.2}. To bridge the gap we need to erect a “barrier” that, roughly speaking, verifies that

\displaystyle H_t(x+iy) \neq 0 \ \ \ \ \ (3)

for {0 \leq t \leq 0.2}, {x = 6 \times 10^{10} + O(1)}, and {0.2 \leq y \leq 1}; with a little bit of work this barrier shows that zeroes cannot sneak in from the right of the barrier to the left in order to produce counterexamples to (2) for small {x}.

To enforce this barrier, and to verify (2) for large {x}, we need to approximate {H_t(x+iy)} for positive {t}. Our starting point is the Riemann-Siegel formula, which roughly speaking is of the shape

\displaystyle H_0(x+iy) \approx B_0(x+iy) ( \sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2}}} + \gamma_0(x+iy) \sum_{n=1}^N \frac{n^y}{n^{\frac{1+y+ix}{2}}} )

where {N := \sqrt{x/4\pi}}, {B_0(x+iy)} is an explicit “gamma factor” that decays exponentially in {x}, and {\gamma_0(x+iy)} is a ratio of gamma functions that is roughly of size {(x/4\pi)^{-y/2}}. Deforming this by the heat flow gives rise to an approximation roughly of the form

\displaystyle H_t(x+iy) \approx B_t(x+iy) ( \sum_{n=1}^N \frac{b_n^t}{n^{s_*}} + \gamma_t(x+iy) \sum_{n=1}^N \frac{n^y}{n^{\overline{s_*}}} ) \ \ \ \ \ (4)

where {B_t(x+iy)} and {\gamma_t(x+iy)} are variants of {B_0(x+iy)} and {\gamma_0(x+iy)}, {b_n^t := \exp( \frac{t}{4} \log^2 n )}, and {s_*} is an exponent which is roughly {\frac{1+y-ix}{2} + \frac{t}{4} \log \frac{x}{4\pi}}. In particular, for positive values of {t}, {s_*} increases (logarithmically) as {x} increases, and the two sums in the Riemann-Siegel formula become increasingly convergent (even in the face of the slowly increasing coefficients {b_n^t}). For very large values of {x} (in the range {x \geq \exp(C/t)} for a large absolute constant {C}), the {n=1} terms of both sums dominate, and {H_t(x+iy)} begins to behave in a sinusoidal fashion, with the zeroes “freezing” into an approximate arithmetic progression on the real line much like the zeroes of the sine or cosine functions (we give some asymptotic theorems that formalise this “freezing” effect). This lets one verify (2) for extremely large values of {x} (e.g., {x \geq 10^{12}}). For slightly less large values of {x}, we first multiply the Riemann-Siegel formula by an “Euler product mollifier” to reduce some of the oscillation in the sum and make the series converge better; we also use a technical variant of the triangle inequality to improve the bounds slightly. These are sufficient to establish (2) for moderately large {x} (say {x \geq 6 \times 10^{10}}) with only a modest amount of computational effort (a few seconds after all the optimisations; on my own laptop with very crude code I was able to verify all the computations in a matter of minutes).
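The improved convergence at positive time can be illustrated with a small Python sketch. It uses only the rough shapes of {s_*} and {b_n^t} quoted above (so the exponent below is an approximation rather than the exact quantity used in the paper), and checks that at the barrier location the summands decay monotonically all the way to the cutoff {N}:

```python
import math

x, y, t = 6e10, 0.2, 0.2                # barrier location and time from the text
N = int(math.sqrt(x / (4 * math.pi)))   # Riemann-Siegel cutoff, about 69000

# Rough real part of the exponent s_* described above.
sigma = (1 + y) / 2 + (t / 4) * math.log(x / (4 * math.pi))

def term(n):
    """Magnitude of b_n^t / n^{s_*}, with b_n^t = exp((t/4) log^2 n)."""
    return math.exp((t / 4) * math.log(n) ** 2 - sigma * math.log(n))

# The exponent (t/4) L^2 - sigma L (with L = log n) decreases until
# L = 2 sigma / t, i.e. until n is around 2.8e7, far beyond the cutoff N:
# every term up to N is smaller than the one before it, despite the
# slowly growing coefficients b_n^t.
assert math.exp(2 * sigma / t) > N
assert all(term(n + 1) < term(n) for n in range(1, N))
```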

The most difficult computational task is the verification of the barrier (3), particularly when {t} is close to zero where the series in (4) converge quite slowly. We first use an Euler product heuristic approximation to {H_t(x+iy)} to decide where to place the barrier in order to make our numerical approximation to {H_t(x+iy)} as large in magnitude as possible (so that we can afford to work with a sparser set of mesh points for the numerical verification). In order to efficiently evaluate the sums in (4) for many different values of {x+iy}, we perform a Taylor expansion of the coefficients to factor the sums as combinations of other sums that do not actually depend on {x} and {y} and so can be re-used for multiple choices of {x+iy} after a one-time computation. At the scales we work in, this computation is still quite feasible (a handful of minutes after software and hardware optimisations); if one assumes larger numerical verifications of RH and lowers {t_0} and {y_0} to optimise the value of {\Lambda} accordingly, one could get down to an upper bound of {\Lambda \leq 0.1} assuming an enormous numerical verification of RH (up to height about {4 \times 10^{21}}) and a very large distributed computing project to perform the other numerical verifications.
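The Taylor-expansion factoring trick can be sketched in a few lines of Python. The parameters below (the base exponent {s_0}, the offsets {\delta}, the truncation {N}, and the Taylor order {K}) are illustrative choices rather than the ones used in the paper; the point is only that the inner sums {c_k} are computed once and then reused for every offset:

```python
import cmath
import math

t, N, K = 0.2, 1000, 25   # illustrative truncation and Taylor order
s0 = 1.7                  # base exponent; we evaluate at s = s0 + delta

b = [math.exp((t / 4) * math.log(n) ** 2) for n in range(1, N + 1)]

# One-time computation: c_k = sum_n b_n^t (log n)^k n^{-s0} does not
# depend on delta, so it can be reused for many choices of x + iy.
c = [sum(b[n - 1] * math.log(n) ** k * n ** (-s0) for n in range(1, N + 1))
     for k in range(K + 1)]

def via_taylor(delta):
    """sum_n b_n^t n^{-(s0 + delta)}, reconstructed from the precomputed c_k."""
    return sum((-delta) ** k / math.factorial(k) * c[k] for k in range(K + 1))

def direct(s):
    return sum(b[n - 1] * cmath.exp(-s * cmath.log(n)) for n in range(1, N + 1))

for delta in (0.1, 0.05 + 0.2j, -0.1 + 0.1j):
    assert abs(via_taylor(delta) - direct(s0 + delta)) < 1e-9
```

Here the direct evaluation costs {N} operations per point, while the factored form costs only {K+1} operations per point after the one-time computation of the {c_k}.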

This post can serve as the (presumably final) thread for the Polymath15 project (continuing this post), to handle any remaining discussion topics for that project.

Just a brief announcement that the AMS is now accepting (until June 30) nominations for the 2020 Joseph L. Doob Prize, which recognizes a single, relatively recent, outstanding research book that makes a seminal contribution to the research literature, reflects the highest standards of research exposition, and promises to have a deep and long-term impact in its area. The book must have been published within the six calendar years preceding the year in which it is nominated. Books may be nominated by members of the Society, by members of the selection committee, by members of AMS editorial committees, or by publishers.  (I am currently on the committee for this prize.)  A list of previous winners may be found here.  The nomination procedure may be found at the bottom of this page.

Joni Teräväinen and I have just uploaded to the arXiv our paper “Value patterns of multiplicative functions and related sequences“, submitted to Forum of Mathematics, Sigma. This paper explores how to use recent technology on correlations of multiplicative (or nearly multiplicative) functions, such as the “entropy decrement method”, in conjunction with techniques from additive combinatorics, to establish new results on the sign patterns of functions such as the Liouville function {\lambda}. For instance, with regards to length 5 sign patterns

\displaystyle  (\lambda(n+1),\dots,\lambda(n+5)) \in \{-1,+1\}^5

of the Liouville function, we can now show that at least {24} of the {32} possible sign patterns in {\{-1,+1\}^5} occur with positive upper density. (Conjecturally, all of them do so, and this is known for all shorter sign patterns, but unfortunately {24} seems to be the limitation of our methods.)
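None of the number-theoretic machinery is needed to see these patterns empirically. The following Python sieve (a toy experiment up to {10^5}, not part of the paper's methods) already exhibits at least {24} of the {32} patterns; in fact one finds all {32} in this range:

```python
# Toy empirical check: sieve Omega(n) (number of prime factors with
# multiplicity) and record which length-5 sign patterns of the Liouville
# function lambda(n) = (-1)^Omega(n) actually occur.
N = 100000
spf = list(range(N + 1))                  # smallest prime factor
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:                       # p is prime
        for q in range(p * p, N + 1, p):
            if spf[q] == q:
                spf[q] = p

def Omega(n):
    count = 0
    while n > 1:
        n //= spf[n]
        count += 1
    return count

lam = [0, 1] + [(-1) ** Omega(n) for n in range(2, N + 1)]
patterns = {tuple(lam[n + 1:n + 6]) for n in range(N - 5)}
print(len(patterns))
assert len(patterns) >= 24
```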

The Liouville function can be written as {\lambda(n) = e^{2\pi i \Omega(n)/2}}, where {\Omega(n)} is the number of prime factors of {n} (counting multiplicity). One can also consider the variant {\lambda_3(n) = e^{2\pi i \Omega(n)/3}}, which is a completely multiplicative function taking values in the cube roots of unity {\{1, \omega, \omega^2\}}. Here we are able to show that all {27} sign patterns in {\{1,\omega,\omega^2\}^3} occur with positive lower density as sign patterns {(\lambda_3(n+1), \lambda_3(n+2), \lambda_3(n+3))} of this function. The analogous result for {\lambda} was already known (see this paper of Matomäki, Radziwiłł, and myself), and in that case it is even known that all sign patterns occur with equal logarithmic density {1/8} (from this paper of myself and Teräväinen), but these techniques barely fail to handle the {\lambda_3} case by themselves (largely because the “parity” arguments used in the case of the Liouville function no longer control three-point correlations in the {\lambda_3} case) and an additional additive combinatorial tool is needed. After applying existing technology (such as entropy decrement methods), the problem roughly speaking reduces to locating patterns {a \in A_1, a+r \in A_2, a+2r \in A_3} for a certain partition {G = A_1 \cup A_2 \cup A_3} of a compact abelian group {G} (think for instance of the unit circle {G={\bf R}/{\bf Z}}, although the general case is a bit more complicated, in particular if {G} is disconnected then there is a certain “coprimality” constraint on {r}, also we can allow the {A_1,A_2,A_3} to be replaced by any {A_{c_1}, A_{c_2}, A_{c_3}} with {c_1+c_2+c_3} divisible by {3}), with each of the {A_i} having measure {1/3}. An inequality of Kneser just barely fails to guarantee the existence of such patterns, but by using an inverse theorem for Kneser’s inequality in this previous paper of mine we are able to identify precisely the obstruction for this method to work, and rule it out by an ad hoc method.
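The {\lambda_3} statement can likewise be checked empirically, identifying {\lambda_3(n)} with the residue {\Omega(n) \bmod 3}: all {27} triples already occur well before {10^5}. Again this is a toy illustration, not the argument of the paper:

```python
# Identify lambda_3(n) = e^{2 pi i Omega(n)/3} with the residue Omega(n) mod 3
# and record which of the 27 possible triples of consecutive values occur.
N = 100000
spf = list(range(N + 1))                  # smallest prime factor sieve
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:                       # p is prime
        for q in range(p * p, N + 1, p):
            if spf[q] == q:
                spf[q] = p

def Omega(n):
    count = 0
    while n > 1:
        n //= spf[n]
        count += 1
    return count

res = [0, 0] + [Omega(n) % 3 for n in range(2, N + 1)]
triples = {(res[n + 1], res[n + 2], res[n + 3]) for n in range(1, N - 3)}
assert len(triples) == 27
```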

The same techniques turn out to also make progress on some conjectures of Erdős-Pomerance and Hildebrand regarding patterns of the largest prime factor {P^+(n)} of a natural number {n}. For instance, we improve results of Erdős-Pomerance and of Balog demonstrating that the inequalities

\displaystyle  P^+(n+1) < P^+(n+2) < P^+(n+3)

and
\displaystyle  P^+(n+1) > P^+(n+2) > P^+(n+3)

each hold for infinitely many {n}, by demonstrating the stronger claims that the inequalities

\displaystyle  P^+(n+1) < P^+(n+2) < P^+(n+3) > P^+(n+4)

and
\displaystyle  P^+(n+1) > P^+(n+2) > P^+(n+3) < P^+(n+4)

each hold for a set of {n} of positive lower density. As a variant, we also show that we can find a positive density set of {n} for which

\displaystyle  P^+(n+1), P^+(n+2), P^+(n+3) > n^\gamma

for any fixed {\gamma < e^{-1/3} = 0.7165\dots} (this improves on a previous result of Hildebrand with {e^{-1/3}} replaced by {e^{-1/2} = 0.6065\dots}). A number of other results of this type are also obtained in this paper.
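These inequalities are cheap to test numerically. The following sketch (again a toy computation up to {10^5}) sieves the largest prime factor {P^+(n)} and confirms that both four-term patterns occur for a large positive proportion of {n}:

```python
# Sieve the largest prime factor P^+(n): iterate primes in increasing order,
# so the last prime to mark n is its largest prime factor.
N = 100000
lpf = [1] * (N + 1)
for p in range(2, N + 1):
    if lpf[p] == 1:                       # p is prime
        for q in range(p, N + 1, p):
            lpf[q] = p

# Adjacent values can never tie, since consecutive integers are coprime.
up = sum(1 for n in range(1, N - 3)
         if lpf[n + 1] < lpf[n + 2] < lpf[n + 3] > lpf[n + 4])
down = sum(1 for n in range(1, N - 3)
           if lpf[n + 1] > lpf[n + 2] > lpf[n + 3] < lpf[n + 4])

print(up / N, down / N)   # each pattern occurs with substantial density
assert up > 1000 and down > 1000
```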

In order to obtain these sorts of results, one needs to extend the entropy decrement technology from the setting of multiplicative functions to that of what we call “weakly stable sets” – sets {A} which have some multiplicative structure, in the sense that (roughly speaking) there is a set {B} such that for all small primes {p}, the statements {n \in A} and {pn \in B} are roughly equivalent to each other. For instance, if {A} is a level set {A = \{ n: \Omega(n) = 0 \hbox{ mod } 3 \}}, one would take {B = \{ n: \Omega(n) = 1 \hbox{ mod } 3 \}}; if instead {A} is a set of the form {\{ n: P^+(n) \geq n^\gamma\}}, then one can take {B=A}. In such a situation, very roughly speaking, the entropy decrement argument allows one to replace a one-parameter correlation such as

\displaystyle  {\bf E}_n 1_A(n+1) 1_A(n+2) 1_A(n+3)

with a two-parameter correlation such as

\displaystyle  {\bf E}_n {\bf E}_p 1_B(n+p) 1_B(n+2p) 1_B(n+3p)

(where we will be deliberately vague as to how we are averaging over {n} and {p}), and then the use of the “linear equations in primes” technology of Ben Green, Tamar Ziegler, and myself then allows one to replace this average in turn by something like

\displaystyle  {\bf E}_n {\bf E}_r 1_B(n+r) 1_B(n+2r) 1_B(n+3r)

where {r} is constrained to be not divisible by small primes but is otherwise quite arbitrary. This latter average can then be attacked by tools from additive combinatorics, such as translation to a continuous group model (using for instance the Furstenberg correspondence principle) followed by tools such as Kneser’s inequality (or inverse theorems to that inequality).

(This post is mostly intended for my own reference, as I found myself repeatedly looking up several conversions between polynomial bases on various occasions.)

Let {\mathrm{Poly}_{\leq n}} denote the vector space of polynomials {P:{\bf R} \rightarrow {\bf R}} of one variable {x} with real coefficients of degree at most {n}. This is a vector space of dimension {n+1}, and the sequence of these spaces forms a filtration:

\displaystyle  \mathrm{Poly}_{\leq 0} \subset \mathrm{Poly}_{\leq 1} \subset \mathrm{Poly}_{\leq 2} \subset \dots

A standard basis for these vector spaces is given by the monomials {x^0, x^1, x^2, \dots}: every polynomial {P(x)} in {\mathrm{Poly}_{\leq n}} can be expressed uniquely as a linear combination of the first {n+1} monomials {x^0, x^1, \dots, x^n}. More generally, if one has any sequence {Q_0(x), Q_1(x), Q_2(x), \dots} of polynomials, with each {Q_n} of degree exactly {n}, then an easy induction shows that {Q_0(x),\dots,Q_n(x)} forms a basis for {\mathrm{Poly}_{\leq n}}.

In particular, if we have two such sequences {Q_0(x), Q_1(x), Q_2(x),\dots} and {R_0(x), R_1(x), R_2(x), \dots} of polynomials, with each {Q_n} of degree {n} and each {R_k} of degree {k}, then {Q_n} must be expressible uniquely as a linear combination of the polynomials {R_0,R_1,\dots,R_n}; thus we have an identity of the form

\displaystyle  Q_n(x) = \sum_{k=0}^n c_{QR}(n,k) R_k(x)

for some change of basis coefficients {c_{QR}(n,k) \in {\bf R}}. These coefficients describe how to convert a polynomial expressed in the {Q_n} basis into a polynomial expressed in the {R_k} basis.

Many standard combinatorial quantities {c(n,k)} involving two natural numbers {0 \leq k \leq n} can be interpreted as such change of basis coefficients. The most familiar examples are the binomial coefficients {\binom{n}{k}}, which measure the conversion from the shifted monomial basis {(x+1)^n} to the monomial basis {x^k}, thanks to (a special case of) the binomial formula:

\displaystyle  (x+1)^n = \sum_{k=0}^n \binom{n}{k} x^k,

thus for instance

\displaystyle  (x+1)^3 = \binom{3}{0} x^0 + \binom{3}{1} x^1 + \binom{3}{2} x^2 + \binom{3}{3} x^3

\displaystyle  = 1 + 3x + 3x^2 + x^3.

More generally, for any shift {h}, the conversion from {(x+h)^n} to {x^k} is measured by the coefficients {h^{n-k} \binom{n}{k}}, thanks to the general case of the binomial formula.
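This general conversion is easy to verify mechanically; the little Python check below expands {(x+h)^n} by repeated multiplication and compares coefficients with {h^{n-k} \binom{n}{k}}:

```python
from math import comb

def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def shifted_power(h, n):
    """Coefficients of (x+h)^n in the monomial basis."""
    coeffs = [1]
    for _ in range(n):
        coeffs = poly_mul(coeffs, [h, 1])     # multiply by (x + h)
    return coeffs

# Coefficient of x^k in (x+h)^n equals h^{n-k} C(n,k).
for h in (1, 2, -3):
    for n in range(6):
        assert shifted_power(h, n) == [h ** (n - k) * comb(n, k)
                                       for k in range(n + 1)]
```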

But there are other bases of interest too. For instance if one uses the falling factorial basis

\displaystyle  (x)_n := x (x-1) \dots (x-n+1)

then the conversion from falling factorials to monomials is given by the Stirling numbers of the first kind {s(n,k)}:

\displaystyle  (x)_n = \sum_{k=0}^n s(n,k) x^k,

thus for instance

\displaystyle  (x)_3 = s(3,0) x^0 + s(3,1) x^1 + s(3,2) x^2 + s(3,3) x^3

\displaystyle  = 0 + 2 x - 3x^2 + x^3

and the conversion back is given by the Stirling numbers of the second kind {S(n,k)}:

\displaystyle  x^n = \sum_{k=0}^n S(n,k) (x)_k

thus for instance

\displaystyle  x^3 = S(3,0) (x)_0 + S(3,1) (x)_1 + S(3,2) (x)_2 + S(3,3) (x)_3

\displaystyle  = 0 + x + 3 x(x-1) + x(x-1)(x-2).
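Both tables can be generated from their recurrences (stated later in this post), and one can confirm that the two conversions really are inverse to one another:

```python
# Generate signed Stirling numbers s(n,k) and Stirling numbers of the second
# kind S(n,k) from their recurrences, then check that the two changes of
# basis (monomials <-> falling factorials) invert each other.
NMAX = 8
s = [[0] * (NMAX + 1) for _ in range(NMAX + 1)]
S = [[0] * (NMAX + 1) for _ in range(NMAX + 1)]
s[0][0] = S[0][0] = 1
for n in range(NMAX):
    for k in range(NMAX + 1):
        below = s[n][k - 1] if k > 0 else 0
        s[n + 1][k] = -n * s[n][k] + below          # s(n+1,k) = -n s(n,k) + s(n,k-1)
        belowS = S[n][k - 1] if k > 0 else 0
        S[n + 1][k] = k * S[n][k] + belowS          # S(n+1,k) = k S(n,k) + S(n,k-1)

# (x)_3 = 0 + 2x - 3x^2 + x^3 and x^3 = 0 + (x)_1 + 3(x)_2 + (x)_3,
# matching the worked examples in the text.
assert s[3][:4] == [0, 2, -3, 1]
assert S[3][:4] == [0, 1, 3, 1]

# Converting x^n to falling factorials and back gives x^n again:
# sum_k S(n,k) s(k,m) = 1 if n == m else 0.
for n in range(NMAX + 1):
    for m in range(NMAX + 1):
        assert sum(S[n][k] * s[k][m] for k in range(NMAX + 1)) == (n == m)
```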

If one uses the binomial functions {\binom{x}{n} = \frac{1}{n!} (x)_n} as a basis instead of the falling factorials, one of course can rewrite these conversions as

\displaystyle  \binom{x}{n} = \sum_{k=0}^n \frac{1}{n!} s(n,k) x^k

and
\displaystyle  x^n = \sum_{k=0}^n k! S(n,k) \binom{x}{k}

thus for instance

\displaystyle  \binom{x}{3} = 0 + \frac{1}{3} x - \frac{1}{2} x^2 + \frac{1}{6} x^3

and
\displaystyle  x^3 = 0 + \binom{x}{1} + 6 \binom{x}{2} + 6 \binom{x}{3}.

As a slight variant, if one instead uses rising factorials

\displaystyle  (x)^n := x (x+1) \dots (x+n-1)

then the conversion to monomials yields the unsigned Stirling numbers {|s(n,k)|} of the first kind:

\displaystyle  (x)^n = \sum_{k=0}^n |s(n,k)| x^k

thus for instance

\displaystyle  (x)^3 = 0 + 2x + 3x^2 + x^3.

One final basis comes from the polylogarithm functions

\displaystyle  \mathrm{Li}_{-n}(x) := \sum_{j=1}^\infty j^n x^j.

For instance one has

\displaystyle  \mathrm{Li}_1(x) = -\log(1-x)

\displaystyle  \mathrm{Li}_0(x) = \frac{x}{1-x}

\displaystyle  \mathrm{Li}_{-1}(x) = \frac{x}{(1-x)^2}

\displaystyle  \mathrm{Li}_{-2}(x) = \frac{x}{(1-x)^3} (1+x)

\displaystyle  \mathrm{Li}_{-3}(x) = \frac{x}{(1-x)^4} (1+4x+x^2)

\displaystyle  \mathrm{Li}_{-4}(x) = \frac{x}{(1-x)^5} (1+11x+11x^2+x^3)

and more generally one has

\displaystyle  \mathrm{Li}_{-n-1}(x) = \frac{x}{(1-x)^{n+2}} E_n(x)

for all natural numbers {n} and some polynomial {E_n} of degree {n} (the Eulerian polynomials), which when converted to the monomial basis yields the (shifted) Eulerian numbers

\displaystyle  E_n(x) = \sum_{k=0}^n A(n+1,k) x^k.

For instance

\displaystyle  E_3(x) = A(4,0) x^0 + A(4,1) x^1 + A(4,2) x^2 + A(4,3) x^3

\displaystyle  = 1 + 11x + 11x^2 + x^3.
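These conversions can be checked mechanically: the sketch below builds the Eulerian triangle from the recurrence quoted at the end of this post, and tests the polylogarithm identity numerically at the sample point {x = 0.3} (an arbitrary illustrative choice):

```python
# Rows of the Eulerian triangle from A(n+1,k) = (k+1) A(n,k) + (n-k+1) A(n,k-1).
def eulerian_rows(nmax):
    rows = {1: [1]}
    for n in range(1, nmax):
        prev = rows[n]
        rows[n + 1] = [(k + 1) * (prev[k] if k < n else 0)
                       + (n + 1 - k) * (prev[k - 1] if k > 0 else 0)
                       for k in range(n + 1)]
    return rows

rows = eulerian_rows(5)
assert rows[4] == [1, 11, 11, 1]          # E_3(x) = 1 + 11x + 11x^2 + x^3

# Spot-check Li_{-4}(x) = x E_3(x) / (1-x)^5 at the sample point x = 0.3
# (the j^4 0.3^j tail beyond j = 200 is negligible).
x = 0.3
lhs = sum(j ** 4 * x ** j for j in range(1, 200))
E3 = sum(c * x ** k for k, c in enumerate(rows[4]))
assert abs(lhs - x * E3 / (1 - x) ** 5) < 1e-9
```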

These particular coefficients also have useful combinatorial interpretations. For instance:

  • The binomial coefficient {\binom{n}{k}} is of course the number of {k}-element subsets of {\{1,\dots,n\}}.
  • The unsigned Stirling numbers {|s(n,k)|} of the first kind are the number of permutations of {\{1,\dots,n\}} with exactly {k} cycles. The signed Stirling numbers {s(n,k)} are then given by the formula {s(n,k) = (-1)^{n-k} |s(n,k)|}.
  • The Stirling numbers {S(n,k)} of the second kind are the number of ways to partition {\{1,\dots,n\}} into {k} non-empty subsets.
  • The Eulerian numbers {A(n,k)} are the number of permutations of {\{1,\dots,n\}} with exactly {k} ascents.
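All of these interpretations can be confirmed by brute force for small {n}; the following sketch checks the {n=4} row of each triangle against the counting definitions above:

```python
from itertools import permutations

n = 4
perms = list(permutations(range(1, n + 1)))

def cycle_count(p):
    """Number of cycles of the permutation p (given in one-line notation)."""
    seen, cycles = set(), 0
    for start in range(n):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = p[j] - 1
    return cycles

# |s(4,k)|: permutations of {1,...,4} with exactly k cycles.
by_cycles = [sum(1 for p in perms if cycle_count(p) == k) for k in range(n + 1)]
assert by_cycles == [0, 6, 11, 6, 1]

# A(4,k): permutations of {1,...,4} with exactly k ascents.
by_ascents = [sum(1 for p in perms
                  if sum(p[i] < p[i + 1] for i in range(n - 1)) == k)
              for k in range(n)]
assert by_ascents == [1, 11, 11, 1]

# S(4,k): partitions of {1,...,4} into k non-empty blocks, enumerated as
# restricted growth strings (each element joins an existing block or opens
# a new one); yields the number of blocks used.
def partitions_into_blocks(length, used=0):
    if length == 0:
        yield used
        return
    for b in range(used + 1):
        yield from partitions_into_blocks(length - 1, max(used, b + 1))

by_blocks = [0] * (n + 1)
for k in partitions_into_blocks(n):
    by_blocks[k] += 1
assert by_blocks == [0, 1, 7, 6, 1]
```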

These coefficients behave similarly to each other in several ways. For instance, the binomial coefficients {\binom{n}{k}} obey the well known Pascal identity

\displaystyle  \binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}

(with the convention that {\binom{n}{k}} vanishes outside of the range {0 \leq k \leq n}). In a similar spirit, the unsigned Stirling numbers {|s(n,k)|} of the first kind obey the identity

\displaystyle  |s(n+1,k)| = n |s(n,k)| + |s(n,k-1)|

and the signed counterparts {s(n,k)} obey the identity

\displaystyle  s(n+1,k) = -n s(n,k) + s(n,k-1).

The Stirling numbers of the second kind {S(n,k)} obey the identity

\displaystyle  S(n+1,k) = k S(n,k) + S(n,k-1)

and the Eulerian numbers {A(n,k)} obey the identity

\displaystyle  A(n+1,k) = (k+1) A(n,k) + (n-k+1) A(n,k-1).