
This is the tenth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

Most of the progress since the last thread has been on the numerical side, in which the various techniques to numerically establish zero-free regions for the equation H_t(x+iy)=0 have been streamlined, made faster, and extended to larger heights than were previously possible.  The best bound for \Lambda now depends on the height to which one is willing to assume the Riemann hypothesis.  Using the conservative verification up to height (slightly larger than) 3 \times 10^{10}, which has been confirmed by independent work of Platt et al. and Gourdon-Demichel, the best bound remains at \Lambda \leq 0.22.  Using the verification up to height 2.5 \times 10^{12} claimed by Gourdon-Demichel, this improves slightly to \Lambda \leq 0.19, and if one assumes the Riemann hypothesis up to height 5 \times 10^{19} the bound improves to \Lambda \leq 0.11, contingent on a numerical computation that is still underway.   (See the table below the fold for more data of this form.)  This is broadly consistent with the expectation that the bound on \Lambda should be inversely proportional to the logarithm of the height at which the Riemann hypothesis is verified.

As progress seems to have stabilised, it may be time to transition to the writing phase of the Polymath15 project.  (There are still some interesting research questions to pursue, such as numerically investigating the zeroes of H_t for negative values of t, but the writeup does not necessarily have to contain every single direction pursued in the project. If enough additional interesting findings are unearthed then one could always consider writing a second paper, for instance.)

Below the fold is the detailed progress report on the numerics by Rudolph Dwars and Kalpesh Muchhal.


This is the seventh “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The most recent news is that we appear to have completed the verification that {H_t(x+iy)} is free of zeroes when {t=0.4} and {y \geq 0.4}, which implies that {\Lambda \leq 0.48}. For very large {x} (for instance when the quantity {N := \lfloor \sqrt{\frac{x}{4\pi} + \frac{t}{16}} \rfloor} is at least {300}) this can be done analytically; for medium values of {x} (say when {N} is between {11} and {300}) this can be done by numerically evaluating a fast approximation {A^{eff} + B^{eff}} to {H_t} and using the argument principle in a rectangle; and most recently it appears that we can also handle small values of {x}, in part due to some new, and significantly faster, numerical ways to evaluate {H_t} in this range.

One obvious thing to do now is to experiment with lowering the parameters {t} and {y} and see what happens. However, there are two other potential ways to bound {\Lambda} which may also be numerically feasible. One approach is based on trying to exclude solutions to {H_t(x+iy)=0} in a region of the form {0 \leq t \leq t_0}, {X \leq x \leq X+1} and {y \geq y_0} for some moderately large {X} (this acts as a “barrier” to prevent zeroes from flowing into the region {\{ 0 \leq x \leq X, y \geq y_0 \}} at time {t_0}, assuming that they were not already there at time {0}). This requires significantly less numerical verification in the {x} aspect, but more numerical verification in the {t} aspect, so it is not yet clear whether this is a net win.

Another, rather different approach, is to study the evolution of statistics such as {S(t) = \sum_{H_t(x+iy)=0: x,y>0} y e^{-x/X}} over time. One has fairly good control on such quantities at time zero, and their time derivative looks somewhat manageable, so one may be able to still have good control on this quantity at later times {t_0>0}. However for this approach to work, one needs an effective version of the Riemann-von Mangoldt formula for {H_t}, which at present is only available asymptotically (or at time {t=0}). This approach may be able to avoid almost all numerical computation, except for numerical verification of the Riemann hypothesis, for which we can appeal to existing literature.

Participants are also welcome to add any further summaries of the situation in the comments below.

This is the sixth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The last two threads have been focused primarily on the test problem of showing that {H_t(x+iy) \neq 0} whenever {t = y = 0.4}. We have been able to prove this for most regimes of {x}, or equivalently for most regimes of the natural number parameter {N := \lfloor \sqrt{\frac{x}{4\pi} + \frac{t}{16}} \rfloor}. In many of these regimes, a certain explicit approximation {A^{eff}+B^{eff}} to {H_t} was used, together with a non-zero normalising factor {B^{eff}_0}; see the wiki for definitions. The explicit upper bound

\displaystyle  |H_t - A^{eff} - B^{eff}| \leq E_1 + E_2 + E_3

has been proven for certain explicit expressions {E_1, E_2, E_3} (see here) depending on {x}. In particular, if {x} satisfies the inequality

\displaystyle  |\frac{A^{eff}+B^{eff}}{B^{eff}_0}| > \frac{E_1}{|B^{eff}_0|} + \frac{E_2}{|B^{eff}_0|} + \frac{E_3}{|B^{eff}_0|}

then {H_t(x+iy)} is non-vanishing thanks to the triangle inequality. (In principle we have an even more accurate approximation {A^{eff}+B^{eff}-C^{eff}} available, but it is looking like we will not need it for this test problem at least.)

We have explicit upper bounds on {\frac{E_1}{|B^{eff}_0|}}, {\frac{E_2}{|B^{eff}_0|}}, {\frac{E_3}{|B^{eff}_0|}}; see this wiki page for details. They are tabulated in the range {3 \leq N \leq 2000} here. For {N \geq 2000}, the upper bound {\frac{E_3^*}{|B^{eff}_0|}} for {\frac{E_3}{|B^{eff}_0|}} is monotone decreasing, and is in particular bounded by {1.53 \times 10^{-5}}, while {\frac{E_2}{|B^{eff}_0|}} and {\frac{E_1}{|B^{eff}_0|}} are known to be bounded by {2.9 \times 10^{-7}} and {2.8 \times 10^{-8}} respectively (see here).

Meanwhile, the quantity {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|} can be lower bounded by

\displaystyle  |\sum_{n=1}^N \frac{b_n}{n^s}| - |\sum_{n=1}^N \frac{a_n}{n^s}|

for certain explicit coefficients {a_n,b_n} and an explicit complex number {s = \sigma + i\tau}. Using the triangle inequality to lower bound this by

\displaystyle  |b_1| - \sum_{n=2}^N \frac{|b_n|}{n^\sigma} - \sum_{n=1}^N \frac{|a_n|}{n^\sigma}

we can obtain a lower bound of {0.18} for {N \geq 2000}, which settles the test problem in this regime. One can get more efficient lower bounds by multiplying both Dirichlet series by a suitable Euler product mollifier; we have found {\prod_{p \leq P} (1 - \frac{b_p}{p^s})} for {P=2,3,5,7} to be good choices to get a variety of further lower bounds depending only on {N}, see this table and this wiki page. Comparing this against our tabulated upper bounds for the error terms we can handle the range {300 \leq N \leq 2000}.

In the range {11 \leq N \leq 300}, we have been able to obtain a suitable lower bound {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}| \geq c} (where {c} exceeds the upper bound for {\frac{E_1}{|B^{eff}_0|} + \frac{E_2}{|B^{eff}_0|} + \frac{E_3}{|B^{eff}_0|}}) by numerically evaluating {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|} at a mesh of points for each choice of {N}, with the mesh spacing being adaptive and determined by {c} and an upper bound for the derivative of {|\frac{A^{eff}+B^{eff}}{B^{eff}_0}|}; the data is available here.
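
For readers who want to see the mesh logic in isolation, here is a minimal sketch (the function and derivative bound below are stand-ins, not the actual {\frac{A^{eff}+B^{eff}}{B^{eff}_0}} and its derivative bound from the wiki): if {|F(x_0)| > c} and {|F'| \leq D} to the right of {x_0}, then {|F| > c} is guaranteed on an interval of length {(|F(x_0)|-c)/D}, which is exactly how far the next mesh point may be placed.

```python
# Illustrative adaptive-mesh verification: certify |F| > c on [x_start, x_end]
# given a pointwise evaluator F and a uniform derivative bound D.  F and D here
# are stand-ins for the project's actual quantities.
import math

def verify_lower_bound(F, c, D, x_start, x_end):
    x, n_evals = x_start, 0
    while x < x_end:
        val = abs(F(x))
        n_evals += 1
        if val <= c:
            return False, x, n_evals       # cannot certify the bound at this point
        x += (val - c) / D                 # largest step on which |F| > c is guaranteed
    return True, x_end, n_evals

# Stand-in example: F(x) = 2 + cos(x) with c = 0.5 and |F'| <= 1.
ok, _, n = verify_lower_bound(lambda x: 2 + math.cos(x), 0.5, 1.0, 0.0, 100.0)
print(ok, n)   # certified, using adaptively spaced (rather than uniformly fine) mesh points
```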

This leaves the final range {N \leq 10} (roughly corresponding to {x \leq 1600}). Here we can numerically evaluate {H_t(x+iy)} to high accuracy at a fine mesh (see the data here), but to fill in the mesh we need good upper bounds on {H'_t(x+iy)}. It seems that we can get reasonable estimates using some contour shifting from the original definition of {H_t} (see here). We are close to finishing off this remaining region and thus solving the toy problem.
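
As an aside, translating between the parameter {N} and the variable {x} is just a matter of inverting {N = \lfloor \sqrt{\frac{x}{4\pi} + \frac{t}{16}} \rfloor}; a quick helper for this (illustrative only) is below.

```python
# Convert between x and the parameter N = floor(sqrt(x/(4 pi) + t/16)) used to
# delimit the regimes above (t = 0.4 throughout the test problem).
from math import pi, floor, sqrt

def N_of(x, t=0.4):
    return floor(sqrt(x/(4*pi) + t/16))

def x_range_of(N, t=0.4):
    # half-open interval of x values with N_of(x) == N
    return 4*pi*(N**2 - t/16), 4*pi*((N + 1)**2 - t/16)

print(x_range_of(10)[1])                      # N <= 10 means x below ~1520 ("roughly 1600" above)
print(x_range_of(11)[0], x_range_of(300)[1])  # the x-range covered by the mesh regime 11 <= N <= 300
```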

Beyond this, we need to figure out how to show that {H_t(x+iy) \neq 0} for {y > 0.4} as well. General theory lets one do this for {y \geq \sqrt{1-2t} = 0.447\dots}, leaving the region {0.4 < y < 0.448}. The analytic theory that handles {N \geq 2000} and {300 \leq N \leq 2000} should also handle this region; for {N \leq 300} presumably the argument principle will become relevant.

The full argument also needs to be streamlined and organised; right now it sprawls over many wiki pages and github code files. (A very preliminary writeup attempt has begun here). We should also see if there is much hope of extending the methods to push much beyond the bound of {\Lambda \leq 0.48} that we would get from the above calculations. This would also be a good time to start discussing whether to move to the writing phase of the project, or whether there are still fruitful research directions for the project to explore.

Participants are also welcome to add any further summaries of the situation in the comments below.

This is the fifth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We have almost finished off the test problem of showing that {H_t(x+iy) \neq 0} whenever {t = y = 0.4}. We have two useful approximations for {H_t}, which we have denoted {A^{eff}+B^{eff}} and {A^{eff}+B^{eff}-C^{eff}}, and a normalising quantity {B^{eff}_0} that is asymptotically equal to the above expressions; see the wiki page for definitions. In practice, the {A^{eff}+B^{eff}} approximation seems to be accurate within about one or two significant figures, whilst the {A^{eff}+B^{eff}-C^{eff}} approximation is accurate to about three or four. We have an effective upper bound

\displaystyle  |H_t - A^{eff} - B^{eff}| \leq E_1 + E_2 + E_3^*

where the expressions {E_1,E_2,E_3^*} are quite small in practice ({E_3^*} is typically about two orders of magnitude smaller than the main term {B^{eff}_0} once {x} is moderately large, and the error terms {E_1,E_2} are even smaller). See this page for details. In principle we could also obtain an effective upper bound for {|H_t - (A^{eff} + B^{eff} - C^{eff})|} (the {E_3^*} term would be replaced by something smaller).

The ratio {\frac{A^{eff}+B^{eff}}{B^{eff}_0}} takes the form of a difference {\sum_{n=1}^N \frac{b_n}{n^s} - e^{i\theta} \sum_{n=1}^N \frac{a_n}{n^s}} of two Dirichlet series, where {e^{i\theta}} is a phase whose value is explicit but perhaps not terribly important, and the coefficients {b_n, a_n} are explicit and relatively simple ({b_n} is {\exp( \frac{t}{4} \log^2 n)}, and {a_n} is approximately {(n/N)^y b_n}). To bound this away from zero, we have found it advantageous to mollify this difference by multiplying by an Euler product {\prod_{p \leq P} (1 - \frac{b_p}{p^s})} to cancel much of the initial oscillation; also one can take advantage of the fact that the {b_n} are real and the {a_n} are (approximately) real. See this page for details. The upshot is that we seem to be getting good lower bounds for the size of this difference of Dirichlet series starting from about {x \geq 5 \times 10^5} or so. The error terms {E_1,E_2,E_3^*} are already quite small by this stage, so we should soon be able to rigorously keep {H_t} from vanishing at this point. We also have a scheme for lower bounding the difference of Dirichlet series below this range, though it is not clear at present how far we can continue this before the error terms {E_1,E_2,E_3^*} become unmanageable. For very small {x} we may have to explore some faster ways to compute the expression {H_t}, which is still difficult to compute directly with high accuracy. One will also need to bound the somewhat unwieldy expressions {E_1,E_2} by something more manageable. For instance, right now these quantities depend on the continuous variable {x}; it would be preferable to have a quantity that depends only on the parameter {N = \lfloor \sqrt{ \frac{x}{4\pi} + \frac{t}{16} }\rfloor}, as this could be computed numerically for all {x} in the remaining range of interest quite quickly.
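
To illustrate just the bookkeeping behind the mollification (a sketch, not project code: only the {b_n} series is treated here, and the exact exponent {s} used in the project is on the wiki), multiplying a truncated Dirichlet series by an Euler factor {1 - \frac{b_p}{p^s}} produces another finite Dirichlet series whose early coefficients partially cancel; in particular the coefficient at {n=p} vanishes exactly, since {b_1 = 1}.

```python
# Coefficient-level sketch of mollifying sum_{n<=N} b_n n^{-s} by (1 - b_p p^{-s}),
# with b_n = exp((t/4) log^2 n).  The final check just confirms the bookkeeping.
from math import exp, log
import cmath

t, N, p = 0.4, 50, 2
b = {n: exp(t/4 * log(n)**2) for n in range(1, N + 1)}

c = {}                                   # coefficients of the mollified series
for n, bn in b.items():
    c[n] = c.get(n, 0.0) + bn            # the original term b_n n^{-s}
    c[p*n] = c.get(p*n, 0.0) - b[p]*bn   # the cross term -b_p b_n (pn)^{-s}

def dirichlet(coeffs, s):
    return sum(v * cmath.exp(-s*log(n)) for n, v in coeffs.items())

s = complex(1.2, 34.5)                   # an arbitrary test point
lhs = dirichlet(b, s) * (1 - b[p]*cmath.exp(-s*log(p)))
print(abs(lhs - dirichlet(c, s)))        # ~ 0 up to rounding
print(c[1], c[p], c[4])                  # c_p = 0 exactly; c_4 = b_4 - b_2^2 is partially cancelled
```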

As before, any other mathematical discussion related to the project is also welcome here, for instance any summaries of previous discussion that was not covered in this post.

This is the fourth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing https://terrytao.wordpress.com/2018/01/24/polymath-proposal-upper-bounding-the-de-bruijn-newman-constant/. Progress will be summarised at this Polymath wiki page.

We are getting closer to finishing off the following test problem: can one show that {H_t(x+iy) \neq 0} whenever {t = y = 0.4}, {x \geq 0}? This would morally show that {\Lambda \leq 0.48}. A wiki page for this problem has now been created here. We have obtained a number of approximations {A+B, A'+B', A^{eff}+B^{eff}, A^{toy}+B^{toy}} to {H_t} (see wiki page), though numeric evidence indicates that the approximations are all very close to each other. (Many of these approximations come with a correction term {C}, but thus far it seems that we may be able to avoid having to use this refinement to the approximations.) The effective approximation {A^{eff} + B^{eff}} also comes with an effective error bound

\displaystyle |H_t - A^{eff} - B^{eff}| \leq E_1 + E_2 + E_3

for some explicit (but somewhat messy) error terms {E_1,E_2,E_3}: see this wiki page for details. The original approximations {A+B, A'+B'} can be considered deprecated at this point in favour of the (slightly more complicated) approximation {A^{eff}+B^{eff}}; the approximation {A^{toy}+B^{toy}} is a simplified version of {A^{eff}+B^{eff}} which is not quite as accurate but might be useful for testing purposes.

It is convenient to normalise everything by an explicit non-zero factor {B^{eff}_0}. Asymptotically, {(A^{eff} + B^{eff}) / B^{eff}_0} converges to 1; numerically, it appears that its magnitude (and also its real part) stays roughly between 0.4 and 3 in the range {10^5 \leq x \leq 10^6}, and we seem to be able to keep it (or at least the toy counterpart {(A^{toy} + B^{toy}) / B^{toy}_0}) away from zero starting from about {x \geq 4 \times 10^6} (here it seems that there is a useful trick of multiplying by Euler-type factors like {1 - \frac{1}{2^{1-s}}} to cancel off some of the oscillation). The bounds on the error {(H_t - A^{eff} - B^{eff}) / B^{eff}_0} also seem to be of size about 0.1 or better in these ranges. So we seem to be on track to be able to rigorously eliminate zeroes starting from about {x \geq 10^5} or so. We have not discussed too much what to do with the small values of {x}; at some point our effective error bounds will become unusable, and we may have to find some faster ways to compute {H_t}.

In addition to this main direction of inquiry, there have been additional discussions on the dynamics of zeroes, and some numerical investigations of the behaviour of Lehmer pairs under heat flow. Contributors are welcome to summarise any findings from these discussions from previous threads (or on any other related topic, e.g. improvements in the code) in the comments below.

This is the third “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We are making progress on the following test problem: can one show that {H_t(x+iy) \neq 0} whenever {t = 0.4}, {x \geq 0}, and {y \geq 0.4}? This would imply that

\displaystyle \Lambda \leq 0.4 + \frac{1}{2} (0.4)^2 = 0.48

which would be the first quantitative improvement over the de Bruijn bound of {\Lambda \leq 1/2} (or the Ki-Kim-Lee refinement of {\Lambda < 1/2}). Of course we can try to lower the two parameters of {0.4} later on in the project, but this seems as good a place to start as any. One could also potentially try to use finer analysis of dynamics of zeroes to improve the bound {\Lambda \leq 0.48} further, but this seems to be a less urgent task.

Probably the hardest case is {y=0.4}, as there is a good chance that one can then recover the {y>0.4} case by a suitable use of the argument principle. Here we appear to have a workable Riemann-Siegel type formula that gives a tractable approximation for {H_t}. To describe this formula, first note that in the {t=0} case we have

\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1+iz}{2} )

and the Riemann-Siegel formula gives

\displaystyle \xi(s) = \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s}

\displaystyle + \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \sum_{m=1}^M \frac{1}{m^{1-s}}

\displaystyle + \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw

for any natural numbers {N,M}, where {C_M} is a contour from {+\infty} to {+\infty} that winds once anticlockwise around the zeroes {2\pi i m, |m| \leq M} of {e^w-1} but does not wind around any other zeroes. A good choice of {N,M} to use here is

\displaystyle N=M=\lfloor \sqrt{\mathrm{Im}(s)/2\pi}\rfloor = \lfloor \sqrt{\mathrm{Re}(z)/4\pi} \rfloor. \ \ \ \ \ (1)

 

In this case, a classical steepest descent computation (see wiki) yields the approximation

\displaystyle \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1}\ dw \approx - (2\pi i M)^{s-1} \Psi( \frac{s}{2\pi i M} - N )

where

\displaystyle \Psi(\alpha) := 2\pi \frac{\cos \pi(\frac{1}{2}\alpha^2 - \alpha - \pi/8)}{\cos(\pi \alpha)} \exp( \frac{i\pi}{2} \alpha^2 - \frac{5\pi i}{8} ).

Thus we have

\displaystyle H_0(z) \approx A^{(0)} + B^{(0)} - C^{(0)}

where

\displaystyle A^{(0)} := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{1}{n^s}

\displaystyle B^{(0)} := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \sum_{m=1}^M \frac{1}{m^{1-s}}

\displaystyle C^{(0)} := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \frac{e^{-i\pi s} \Gamma(1-s)}{2\pi i} (2\pi i M)^{s-1} \Psi( \frac{s}{2\pi i M} - N )

with {s := \frac{1+iz}{2}} and {N,M} given by (1).
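
As a quick numerical illustration of these formulae (a sketch only, using mpmath; the correction {C^{(0)}} is simply dropped, so agreement is only approximate), one can compare {A^{(0)}+B^{(0)}} against the exact value {H_0(z) = \frac{1}{8} \xi(\frac{1+iz}{2})}, computing {\xi(s) = \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)} directly.

```python
# Compare the Riemann-Siegel approximation A^(0) + B^(0) with the exact H_0(z)
# at a sample point; the dropped C^(0) term limits the agreement.
from mpmath import mp, mpc, gamma, zeta, pi, sqrt, floor, power, fsum

mp.dps = 30

def H0_exact(z):
    s = (1 + mpc(0, 1)*z) / 2
    return s*(s - 1)/2 * pi**(-s/2) * gamma(s/2) * zeta(s) / 8          # xi(s)/8

def H0_riemann_siegel(z):
    s = (1 + mpc(0, 1)*z) / 2
    N = M = int(floor(sqrt(z.real/(4*pi))))                             # the choice (1)
    pref = s*(s - 1)/2 / 8
    A0 = pref * pi**(-s/2) * gamma(s/2) * fsum(power(n, -s) for n in range(1, N + 1))
    B0 = pref * pi**(-(1 - s)/2) * gamma((1 - s)/2) * fsum(power(m, s - 1) for m in range(1, M + 1))
    return A0 + B0

z = mpc(1000, 0.4)
print(H0_exact(z))
print(H0_riemann_siegel(z))
```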

Heuristically, we have derived (see wiki) the more general approximation

\displaystyle H_t(z) \approx A + B - C

for {t>0} (and in particular for {t=0.4}), where

\displaystyle A := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \sum_{n=1}^N \frac{\exp(\frac{t}{16} \log^2 \frac{s+4}{2\pi n^2} )}{n^s}

\displaystyle B := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \sum_{m=1}^M \frac{\exp(\frac{t}{16} \log^2 \frac{5-s}{2\pi m^2} )}{m^{1-s}}

\displaystyle C := \exp(-\frac{t \pi^2}{64}) C^{(0)}.

In practice it seems that the {C} term is negligible once the real part {x} of {z} is moderately large, so one also has the approximation

\displaystyle H_t(z) \approx A + B.

For large {x}, and for fixed {t,y>0}, e.g. {t=y=0.4}, the sums {A,B} converge fairly quickly (in fact the situation seems to be significantly better here than in the much more intensively studied {t=0} case), and we expect the first term

\displaystyle B_0 := \frac{1}{8} \frac{s(s-1)}{2} \pi^{-(1-s)/2} \Gamma((1-s)/2) \exp( \frac{t}{16} \log^2 \frac{5-s}{2\pi} )

of the {B} series to dominate. Indeed, analytically we know that {\frac{A+B-C}{B_0} \rightarrow 1} (or {\frac{A+B}{B_0} \rightarrow 1}) as {x \rightarrow \infty} (holding {y} fixed), and it should also be provable that {\frac{H_t}{B_0} \rightarrow 1} as well. Numerically with {t=y=0.4}, it seems in fact that {\frac{A+B-C}{B_0}} (or {\frac{A+B}{B_0}}) stay within a distance of about {1/2} of {1} once {x} is moderately large (e.g. {x \geq 2 \times 10^5}). This raises the hope that one can solve the toy problem of showing {H_t(x+iy) \neq 0} for {t=y=0.4} by numerically controlling {H_t(x+iy) / B_0} for small {x} (e.g. {x \leq 2 \times 10^5}), numerically controlling {(A+B)/B_0} and analytically bounding the error {(H_t - A - B)/B_0} for medium {x} (e.g. {2 \times 10^5 \leq x \leq 10^7}), and analytically bounding both {(A+B)/B_0} and {(H_t-A-B)/B_0} for large {x} (e.g. {x \geq 10^7}). (These numbers {2 \times 10^5} and {10^7} are arbitrarily chosen here and may end up being optimised to something else as the computations become clearer.)

Thus, we now have four largely independent tasks (for suitable ranges of “small”, “medium”, and “large” {x}):

  1. Numerically computing {H_t(x+iy) / B_0} for small {x} (with enough accuracy to verify that there are no zeroes)
  2. Numerically computing {(A+B)/B_0} for medium {x} (with enough accuracy to keep it away from zero)
  3. Analytically bounding {(A+B)/B_0} for large {x} (with enough accuracy to keep it away from zero); and
  4. Analytically bounding {(H_t - A - B)/B_0} for medium and large {x} (with a bound that is better than the bound away from zero in the previous two tasks).

Note that tasks 2 and 3 do not directly require any further understanding of the function {H_t}.
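
As an illustration of task 2, here is a short mpmath computation (illustrative only, not the project's production code) of the normalised quantity {(A+B)/B_0} from the formulae above at a sample point in the “medium {x}” range, with {t=y=0.4} and {N=M} given by (1); the numerics report below observes magnitudes above {0.4} throughout this range.

```python
# Evaluate (A+B)/B_0 at t = y = 0.4, x = 10^5, directly from the formulae above.
from mpmath import mp, mpc, mpf, gamma, exp, log, pi, sqrt, floor, power, fsum

mp.dps = 30
t, y, x = mpf("0.4"), mpf("0.4"), mpf(10)**5

z = mpc(x, y)
s = (1 + mpc(0, 1)*z) / 2
N = M = int(floor(sqrt(x/(4*pi))))                 # the choice (1)

pref = (s*(s - 1)/2) / 8
A = pref * pi**(-s/2) * gamma(s/2) * fsum(
        exp(t/16 * log((s + 4)/(2*pi*n**2))**2) * power(n, -s) for n in range(1, N + 1))
B = pref * pi**(-(1 - s)/2) * gamma((1 - s)/2) * fsum(
        exp(t/16 * log((5 - s)/(2*pi*m**2))**2) * power(m, s - 1) for m in range(1, M + 1))
B_0 = pref * pi**(-(1 - s)/2) * gamma((1 - s)/2) * exp(t/16 * log((5 - s)/(2*pi))**2)

print(abs((A + B)/B_0))
```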

Below we will give a progress report on the numeric and analytic sides of these tasks.

— 1. Numerics report (contributed by Sujit Nair) —

There is some progress on the code side but not at the pace I was hoping for. Here are a few things which happened (rather, mistakes which were taken care of).

  1. We got rid of code which wasn’t being used. For example, @dhjpolymath computed {H_t} based on an old version but only realized it after the fact.
  2. We implemented tests to catch human/numerical bugs before a computation starts. Again, we lost some numerical cycles but moving forward these can be avoided.
  3. David got set up on GitHub and he is able to compare his output (in C) with the Python code. That is helping a lot.

Two areas which were worked on were

  1. Computing {H_t} and zeroes for {t} around {0.4}
  2. Computing quantities like {(A+B-C)/B_0}, {(A+B)/B_0}, {C/B_0}, etc. with the goal of understanding the zero free regions.

Some observations for {t=0.4}, {y=0.4}, {x \in ( 10^4, 10^7)} include:

  • {(A+B) / B_0} does seem to avoid the negative real axis
  • {|(A+B) / B_0| > 0.4} (based on the oscillations and trends in the plots)
  • {|C/B_0|} seems to be settling in the {10^{-4}} range.

See the figure below. The top plot is on the complex plane and the bottom plot is the absolute value. The code to play with this is here.

— 2. Analysis report —

The Riemann-Siegel formula and some manipulations (see wiki) give {H_0 = A^{(0)} + B^{(0)} - \tilde C^{(0)}}, where

\displaystyle A^{(0)} = \frac{2}{8} \sum_{n=1}^N \int_C \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log(\pi n^2) )\ du

\displaystyle - \frac{3}{8} \sum_{n=1}^N \int_C \exp( \frac{s+2}{2} u - e^u - \frac{s}{2} \log(\pi n^2) )\ du

\displaystyle B^{(0)} = \frac{2}{8} \sum_{m=1}^M \int_{\overline{C}} \exp( \frac{5-s}{2} u - e^u - \frac{1-s}{2} \log(\pi m^2) )\ du

\displaystyle - \frac{3}{8} \sum_{m=1}^M \int_C \exp( \frac{3-s}{2} u - e^u - \frac{1-s}{2} \log(\pi m^2) )\ du

\displaystyle \tilde C^{(0)} := -\frac{2}{8} \sum_{n=0}^\infty \frac{e^{-i\pi s/2} e^{i\pi s n}}{2^s \pi^{1/2}} \int_{\overline{C}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u)\ du dw

\displaystyle +\frac{3}{8} \sum_{n=0}^\infty \frac{e^{-i\pi s/2} e^{i\pi s n}}{2^s \pi^{1/2}} \int_{\overline{C}} \int_{C_M} \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u)\ du dw

where {C} is a contour that goes from {+i\infty} to {+\infty} staying a bounded distance away from the upper imaginary and right real axes, and {\overline{C}} is the complex conjugate of {C}. (In each of these sums, it is the first term that should dominate, with the second one being about {O(1/x)} as large.) One can then evolve by the heat flow to obtain {H_t = \tilde A + \tilde B - \tilde C}, where

\displaystyle \tilde A := \frac{2}{8} \sum_{n=1}^N \int_C \exp( \frac{s+4}{2} u - e^u - \frac{s}{2} \log(\pi n^2) + \frac{t}{16} (u - \log(\pi n^2))^2)\ du

\displaystyle - \frac{3}{8} \sum_{n=1}^N \int_C \exp( \frac{s+2}{2} u - e^u - \frac{s}{2} \log(\pi n^2) + \frac{t}{16} (u - \log(\pi n^2))^2)\ du

\displaystyle \tilde B := \frac{2}{8} \sum_{m=1}^M \int_{\overline{C}} \exp( \frac{5-s}{2} u - e^u - \frac{1-s}{2} \log(\pi m^2) + \frac{t}{16} (u - \log(\pi m^2))^2)\ du

\displaystyle - \frac{3}{8} \sum_{m=1}^M \int_C \exp( \frac{3-s}{2} u - e^u - \frac{1-s}{2} \log(\pi m^2) + \frac{t}{16} (u - \log(\pi m^2))^2)\ du

\displaystyle \tilde C := -\frac{2}{8} \sum_{n=0}^\infty \frac{e^{-i\pi s/2} e^{i\pi s n}}{2^s \pi^{1/2}} \int_{\overline{C}} \int_{C_M}

\displaystyle \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{5-s}{2} u - e^u + \frac{t}{4} (i \pi(n-1/2) + \log \frac{w}{2\sqrt{\pi}} - \frac{u}{2})^2) \ du dw

\displaystyle +\frac{3}{8} \sum_{n=0}^\infty \frac{e^{-i\pi s/2} e^{i\pi s n}}{2^s \pi^{1/2}} \int_{\overline{C}} \int_{C_M}

\displaystyle \frac{w^{s-1} e^{-Nw}}{e^w-1} \exp( \frac{3-s}{2} u - e^u + \frac{t}{4} (i \pi(n-1/2) + \log \frac{w}{2\sqrt{\pi}} - \frac{u}{2})^2)\ du dw.

Steepest descent heuristics then predict that {\tilde A \approx A}, {\tilde B \approx B}, and {\tilde C \approx C}. For the purposes of this project, we will need effective error estimates here, with explicit error terms.

A start has been made towards this goal at this wiki page. Firstly there is an “effective Laplace method” lemma that gives effective bounds on integrals of the form {\int_I e^{\phi(x)} \psi(x)\ dx} if the real part of {\phi(x)} is either monotone with large derivative, or has a critical point and is decreasing on both sides of that critical point. In principle, all one has to do is manipulate expressions such as {\tilde A - A}, {\tilde B - B}, {\tilde C - C} by change of variables, contour shifting and integration by parts until they are of a form to which the above lemma can be profitably applied. As one may imagine, though, the computations are messy, particularly for the {\tilde C} term. As a warm-up, I have begun by trying to estimate integrals of the form

\displaystyle \int_C \exp( s (1+u-e^u) + \frac{t}{16} (u+b)^2 )\ du

for smallish complex numbers {b}, as these sorts of integrals appear in the formulae for {\tilde A, \tilde B, \tilde C}. At the time of writing, there are effective bounds for the {b=0} case, and I am currently working on extending them to the {b \neq 0} case, which should give enough control to approximate {\tilde A - A} and {\tilde B-B}. The most complicated task will be that of upper bounding {\tilde C}, but it also looks eventually doable.

This is the first official “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

The proposal naturally splits into at least three separate (but loosely related) topics:

  • Numerical computation of the entire functions {H_t(z)}, with the ultimate aim of establishing zero-free regions of the form {\{ x+iy: 0 \leq x \leq T, y \geq \varepsilon \}} for various {T, \varepsilon > 0}.
  • Improved understanding of the dynamics of the zeroes {z_j(t)} of {H_t}.
  • Establishing the zero-free nature of {H_t(x+iy)} when {y \geq \varepsilon > 0} and {x} is sufficiently large depending on {t} and {\varepsilon}.

Below the fold, I will present each of these topics in turn, to initiate further discussion in each of them. (I thought about splitting this post into three to have three separate discussions, but given the current volume of comments, I think we should be able to manage for now having all the comments in a single post. If this changes then of course we can split up some of the discussion later.)

To begin with, let me present some formulae for computing {H_t} (inspired by similar computations in the Ki-Kim-Lee paper) which may be useful. The initial definition of {H_t} is

\displaystyle  H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du

where

\displaystyle  \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(- \pi n^2 e^{4u} )

is a variant of the Jacobi theta function. We observe that {\Phi} in fact extends analytically to the strip

\displaystyle  \{ u \in {\bf C}: -\frac{\pi}{8} < \mathrm{Im} u < \frac{\pi}{8} \}, \ \ \ \ \ (1)

as {e^{4u}} has positive real part on this strip. One can use the Poisson summation formula to verify that {\Phi} is even, {\Phi(-u) = \Phi(u)} (see this previous post for details). This lets us obtain a number of other formulae for {H_t}. Most obviously, one can unfold the integral to obtain

\displaystyle  H_t(z) = \frac{1}{2} \int_{-\infty}^\infty e^{tu^2} \Phi(u) e^{izu}\ du.

In my previous paper with Brad, we used this representation, combined with Fubini’s theorem to swap the sum and integral, to obtain a useful series representation for {H_t} in the {t<0} case. Unfortunately this is not possible in the {t>0} case because expressions such as {e^{tu^2} e^{9u} \exp( -\pi n^2 e^{4u} ) e^{izu}} diverge as {u} approaches {-\infty}. Nevertheless we can still perform the following contour integration manipulation. Let {0 \leq \theta < \frac{\pi}{8}} be fixed. The function {\Phi} decays super-exponentially fast (much faster than {e^{tu^2}}, in particular) as {\mathrm{Re} u \rightarrow +\infty} with {0 \leq \mathrm{Im} u \leq \theta}; as {\Phi} is even, we also have this decay as {\mathrm{Re} u \rightarrow -\infty} with {0 \leq \mathrm{Im} u \leq \theta} (this is despite each of the summands in {\Phi} having much slower decay in this direction – there is considerable cancellation!). Hence by the Cauchy integral formula we have

\displaystyle  H_t(z) = \frac{1}{2} \int_{i\theta-\infty}^{i\theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du.

Splitting the horizontal line from {i\theta-\infty} to {i\theta+\infty} at {i\theta} and using the even nature of {\Phi(u)}, we thus have

\displaystyle  H_t(z) = \frac{1}{2} ( \int_{i\theta}^{i\theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du + \int_{-i\theta}^{-i\theta+\infty} e^{tu^2} \Phi(u) e^{-izu}\ du ).

Using the functional equation {\Phi(\overline{u}) = \overline{\Phi(u)}}, we thus have the representation

\displaystyle  H_t(z) = \frac{1}{2} ( K_{t,\theta}(z) + \overline{K_{t,\theta}(\overline{z})} ) \ \ \ \ \ (2)

where

\displaystyle  K_{t,\theta}(z) := \int_{i\theta}^{i \theta+\infty} e^{tu^2} \Phi(u) e^{izu}\ du

\displaystyle  = \sum_{n=1}^\infty \left( 2 \pi^2 n^4 I_{t, \theta}( z - 9i, \pi n^2 ) - 3 \pi n^2 I_{t,\theta}( z - 5i, \pi n^2 ) \right)

where {I_{t,\theta}(b,\beta)} is the oscillatory integral

\displaystyle  I_{t,\theta}(b,\beta) := \int_{i\theta}^{i\theta+\infty} \exp( tu^2 - \beta e^{4u} + i b u )\ du. \ \ \ \ \ (3)

The formula (2) is valid for any {0 \leq \theta < \frac{\pi}{8}}. Naively one would think that it would be simplest to take {\theta=0}; however, when {z=x+iy} and {x} is large (with {y} bounded), it seems asymptotically better to take {\theta} closer to {\pi/8}, in particular something like {\theta = \frac{\pi}{8} - \frac{1}{4x}} seems to be a reasonably good choice. This is because the integrand in (3) becomes significantly less oscillatory and also much lower in amplitude; the {\exp(ibu)} term in (3) now generates a factor roughly comparable to {\exp( - \pi x/8 )} (which, as we will see below, is the main term in the decay asymptotics for {H_t(x+iy)}), while the {\exp( - \beta e^{4u} )} term still exhibits a reasonable amount of decay as {u \rightarrow \infty}. We will use the representation (2) in the asymptotic analysis of {H_t} below, but it may also be a useful representation to use for numerical purposes.
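
For reference, the defining integral for {H_t} can also be evaluated directly without much trouble for small and moderate {x} (the sketch below is illustrative only; the truncation points for the sum defining {\Phi} and for the {u}-integral are ad hoc choices justified by the super-exponential decay of {\Phi}), and this gives a slow but simple baseline against which an implementation of the representation (2) can be checked.

```python
# Direct numerical evaluation of H_t(z) = int_0^infty e^{t u^2} Phi(u) cos(zu) du.
from mpmath import mp, mpf, mpc, exp, cos, pi, quad

mp.dps = 30

def Phi(u, n_max=30):
    # Phi(u) = sum_{n>=1} (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u});
    # the tail beyond n_max is utterly negligible for real u >= 0.
    return sum((2*pi**2*n**4*exp(9*u) - 3*pi*n**2*exp(5*u)) * exp(-pi*n**2*exp(4*u))
               for n in range(1, n_max + 1))

def H(t, z):
    # the integrand is negligible beyond u = 4 thanks to the decay of Phi
    return quad(lambda u: exp(t*u**2) * Phi(u) * cos(z*u), [0, 1, 4])

print(H(mpf("0.4"), mpc(30, mpf("0.4"))))   # H_t(x+iy) at t = y = 0.4, x = 30
```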


I recently learned about a curious operation on square matrices known as sweeping, which is used in numerical linear algebra (particularly in applications to statistics), as a useful and more robust variant of the usual Gaussian elimination operations seen in undergraduate linear algebra courses. Given an {n \times n} matrix {A := (a_{ij})_{1 \leq i,j \leq n}} (with, say, complex entries) and an index {1 \leq k \leq n}, with the entry {a_{kk}} non-zero, the sweep {\hbox{Sweep}_k[A] = (\hat a_{ij})_{1 \leq i,j \leq n}} of {A} at {k} is the matrix given by the formulae

\displaystyle  \hat a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}

\displaystyle  \hat a_{ik} := \frac{a_{ik}}{a_{kk}}

\displaystyle  \hat a_{kj} := \frac{a_{kj}}{a_{kk}}

\displaystyle  \hat a_{kk} := \frac{-1}{a_{kk}}

for all {i,j \in \{1,\dots,n\} \backslash \{k\}}. Thus for instance if {k=1}, and {A} is written in block form as

\displaystyle  A = \begin{pmatrix} a_{11} & X \\ Y & B \end{pmatrix} \ \ \ \ \ (1)

for some {1 \times n-1} row vector {X}, {n-1 \times 1} column vector {Y}, and {n-1 \times n-1} minor {B}, one has

\displaystyle  \hbox{Sweep}_1[A] = \begin{pmatrix} -1/a_{11} & X / a_{11} \\ Y/a_{11} & B - a_{11}^{-1} YX \end{pmatrix}. \ \ \ \ \ (2)

The inverse sweep operation {\hbox{Sweep}_k^{-1}[A] = (\check a_{ij})_{1 \leq i,j \leq n}} is given by a nearly identical set of formulae:

\displaystyle  \check a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}

\displaystyle  \check a_{ik} := -\frac{a_{ik}}{a_{kk}}

\displaystyle  \check a_{kj} := -\frac{a_{kj}}{a_{kk}}

\displaystyle  \check a_{kk} := \frac{-1}{a_{kk}}

for all {i,j \in \{1,\dots,n\} \backslash \{k\}}. One can check that these operations invert each other. Actually, each sweep turns out to have order {4}, so that {\hbox{Sweep}_k^{-1} = \hbox{Sweep}_k^3}: an inverse sweep performs the same operation as three forward sweeps. Sweeps also preserve the space of symmetric matrices (allowing one to cut down computational run time in that case by a factor of two), and behave well with respect to principal minors; a sweep of a principal minor is a principal minor of a sweep, after adjusting indices appropriately.

Remarkably, the sweep operators all commute with each other: {\hbox{Sweep}_k \hbox{Sweep}_l = \hbox{Sweep}_l \hbox{Sweep}_k}. If {1 \leq k \leq n} and we perform the first {k} sweeps (in any order) to a matrix

\displaystyle  A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}

with {A_{11}} a {k \times k} minor, {A_{12}} a {k \times n-k} matrix, {A_{21}} a {n-k \times k} matrix, and {A_{22}} a {n-k \times n-k} matrix, one obtains the new matrix

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_k[A] = \begin{pmatrix} -A_{11}^{-1} & A_{11}^{-1} A_{12} \\ A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{pmatrix}.

Note the appearance of the Schur complement in the bottom right block. Thus, for instance, one can essentially invert a matrix {A} by performing all {n} sweeps:

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_n[A] = -A^{-1}.
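
These identities are easy to sanity-check numerically; the snippet below is a direct transcription of the sweep formulae into NumPy (illustrative only, with 0-based indices and no attention paid to pivoting or numerical robustness).

```python
import numpy as np

def sweep(A, k):
    """Sweep_k[A]: a direct transcription of the formulae above (k is 0-based)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    akk = A[k, k]
    out = np.empty_like(A)
    idx = [i for i in range(n) if i != k]
    for i in idx:
        for j in idx:
            out[i, j] = A[i, j] - A[i, k] * A[k, j] / akk
    out[idx, k] = A[idx, k] / akk
    out[k, idx] = A[k, idx] / akk
    out[k, k] = -1.0 / akk
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

S = A
for k in range(5):
    S = sweep(S, k)
print(np.allclose(S, -np.linalg.inv(A)))                          # all n sweeps give -A^{-1}
print(np.allclose(sweep(sweep(A, 1), 3), sweep(sweep(A, 3), 1)))  # sweeps commute
print(np.allclose(sweep(sweep(sweep(sweep(A, 2), 2), 2), 2), A))  # each sweep has order 4
```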

If a matrix has the form

\displaystyle  A = \begin{pmatrix} B & X \\ Y & a \end{pmatrix}

for a {n-1 \times n-1} minor {B}, {n-1 \times 1} column vector {X}, {1 \times n-1} row vector {Y}, and scalar {a}, then performing the first {n-1} sweeps gives

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_{n-1}[A] = \begin{pmatrix} -B^{-1} & B^{-1} X \\ Y B^{-1} & a - Y B^{-1} X \end{pmatrix}

and all the components of this matrix are usable for various numerical linear algebra applications in statistics (e.g. in least squares regression). Given that sweeps behave well with inverses, it is perhaps not surprising that sweeps also behave well under determinants: the determinant of {A} can be factored as the product of the entry {a_{kk}} and the determinant of the {n-1 \times n-1} matrix formed from {\hbox{Sweep}_k[A]} by removing the {k^{th}} row and column. As a consequence, one can compute the determinant of {A} fairly efficiently (so long as the sweep operations don’t come close to dividing by zero) by sweeping the matrix for {k=1,\dots,n} in turn, and multiplying together the {kk^{th}} entry of the matrix just before the {k^{th}} sweep for {k=1,\dots,n} to obtain the determinant.
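
The determinant recipe can be checked the same way (this snippet reuses the sweep function from the sketch above, and is likewise illustrative only).

```python
import numpy as np   # assumes sweep() from the previous snippet is in scope

def det_by_sweeping(A):
    # det(A) as the product of the kk entries seen just before each k-th sweep
    S = np.asarray(A, dtype=float)
    det = 1.0
    for k in range(S.shape[0]):
        det *= S[k, k]
        S = sweep(S, k)
    return det

A = np.random.default_rng(1).standard_normal((6, 6))
print(det_by_sweeping(A), np.linalg.det(A))   # the two values agree
```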

It turns out that there is a simple geometric explanation for these seemingly magical properties of the sweep operation. Any {n \times n} matrix {A} creates a graph {\hbox{Graph}[A] := \{ (X, AX): X \in {\bf R}^n \}} (where we think of {{\bf R}^n} as the space of column vectors). This graph is an {n}-dimensional subspace of {{\bf R}^n \times {\bf R}^n}. Conversely, most subspaces of {{\bf R}^n \times {\bf R}^n} arise as graphs; there are some that fail the vertical line test, but these form a positive codimension set of counterexamples.

We use {e_1,\dots,e_n,f_1,\dots,f_n} to denote the standard basis of {{\bf R}^n \times {\bf R}^n}, with {e_1,\dots,e_n} the standard basis for the first factor of {{\bf R}^n} and {f_1,\dots,f_n} the standard basis for the second factor. The operation of sweeping the {k^{th}} entry then corresponds to a ninety degree rotation {\hbox{Rot}_k: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n \times {\bf R}^n} in the {e_k,f_k} plane, that sends {f_k} to {e_k} (and {e_k} to {-f_k}), keeping all other basis vectors fixed: thus we have

\displaystyle  \hbox{Graph}[ \hbox{Sweep}_k[A] ] = \hbox{Rot}_k \hbox{Graph}[A]

for generic {n \times n} {A} (more precisely, those {A} with non-vanishing entry {a_{kk}}). For instance, if {k=1} and {A} is of the form (1), then {\hbox{Graph}[A]} is the set of tuples {(r,R,s,S) \in {\bf R} \times {\bf R}^{n-1} \times {\bf R} \times {\bf R}^{n-1}} obeying the equations

\displaystyle  a_{11} r + X R = s

\displaystyle  Y r + B R = S.

The image of {(r,R,s,S)} under {\hbox{Rot}_1} is {(s, R, -r, S)}. Since we can write the above system of equations (for {a_{11} \neq 0}) as

\displaystyle  \frac{-1}{a_{11}} s + \frac{X}{a_{11}} R = -r

\displaystyle  \frac{Y}{a_{11}} s + (B - a_{11}^{-1} YX) R = S

we see from (2) that {\hbox{Rot}_1 \hbox{Graph}[A]} is the graph of {\hbox{Sweep}_1[A]}. Thus the sweep operation is a multidimensional generalisation of the high school geometry fact that the line {y = mx} in the plane becomes {y = \frac{-1}{m} x} after applying a ninety degree rotation.

It is then an instructive exercise to use this geometric interpretation of the sweep operator to recover all the remarkable properties about these operations listed above. It is also useful to compare the geometric interpretation of sweeping as rotation of the graph to that of Gaussian elimination, which instead shears and reflects the graph by various elementary transformations (this is what is going on geometrically when one performs Gaussian elimination on an augmented matrix). Rotations are less distorting than shears, so one can see geometrically why sweeping can produce fewer numerical artefacts than Gaussian elimination.

In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).

Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the {P \neq NP} problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The {P \neq NP} problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as {k}-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and {k}-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.

My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or DreamWorks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector {x \in {\bf R}^n} in an affine space {\{ x \in {\bf R}^n: Ax = b \}} (where {b \in {\bf R}^m} and {A \in {\bf R}^{m\times n}} are given, and {m} is typically somewhat smaller than {n}) which minimises the {\ell^1}-norm {\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|} of the vector {x}. This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once {n} and {m} get moderately large (e.g. of the order of {10^6}), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix {A}. Much of the problem comes from the fact that the functional {x \mapsto \|x\|_1} is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint {Ax=b} with a convex penalty term {\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}, thus one is now trying to minimise the unconstrained functional

\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.

This functional is more convex, and is over a computationally simpler domain {{\bf R}^n} than the affine space {\{x \in {\bf R}^n: Ax=b\}}, so is easier (though still not entirely trivial) to optimise over. However, the minimiser {x^\epsilon} to this problem need not match the minimiser {x^0} to the original problem, particularly if the (sub-)gradient {\partial \|x\|_1} of the original functional {\|x\|_1} is large at {x^0}, and if {\epsilon} is not set to be small. (And setting {\epsilon} too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term

\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2

then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser {x^0} to the original problem, provided that {p} is (or more precisely, lies in) the (sub-)gradient {\partial \|x\|_1} of {\|x\|_1} at {x^0} – even if {\epsilon} is not small. But, one does not know in advance what the correct value of {p} should be, because one does not know what the minimiser {x^0} is.

With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for {x} and {p} simultaneously; given the current iterates {x^k} and {p^k}, one first updates {x^k} to the minimiser {x^{k+1} \in {\bf R}^n} of the convex functional

\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)

and then updates {p^{k+1}} to the natural value of the subgradient {\partial \|x\|_1} at {x^{k+1}}, namely

\displaystyle p^{k+1} := p^k - \nabla \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)

(note upon taking the first variation of (1) that {p^{k+1}} is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser {x^0} even for non-small values of {\epsilon}, and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
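
Here is a hedged sketch of the iteration just described (this is not the code of Osher and his coauthors: the inner minimisation of (1) is solved only approximately, by a simple proximal-gradient loop with soft thresholding, and the step sizes, iteration counts and random test problem are all illustrative choices).

```python
# Bregman iteration for basis pursuit: min ||x||_1 subject to Ax = b.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def solve_inner(A, b, p, eps, x0, n_steps=500):
    """Approximately minimise ||x||_1 - <p, x> + (1/(2*eps))*||Ax - b||^2 by ISTA."""
    eta = eps / np.linalg.norm(A, 2) ** 2       # step size 1/L for the smooth part
    x = x0.copy()
    for _ in range(n_steps):
        grad = -p + A.T @ (A @ x - b) / eps
        x = soft_threshold(x - eta * grad, eta)
    return x

def bregman_basis_pursuit(A, b, eps=1.0, n_outer=50):
    x = np.zeros(A.shape[1])
    p = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = solve_inner(A, b, p, eps, x)
        p = p - A.T @ (A @ x - b) / eps         # the subgradient update described above
    return x

# Illustrative test: recover a sparse vector from a few random measurements.
rng = np.random.default_rng(1)
n, m, k = 200, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

x_rec = bregman_basis_pursuit(A, b)
print(np.linalg.norm(x_rec - x_true), np.linalg.norm(A @ x_rec - b))
```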

Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth {n}-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space {{\bf R}^m}. The Nash embedding theorem guarantees that one can do this if {m} is large enough depending on {n}, but what is the minimal value of {m} one can get away with? Many years ago, my colleague Robert Greene showed that {m = \frac{n(n+1)}{2} + n} sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevski argument shows that {m = \frac{n(n+1)}{2}} is possible, and no better. So this suggests that {m = \frac{n(n+1)}{2}} is the threshold for the smooth problem also, but this remains open in general. The case {n=1} is trivial, and the {n=2} case is not too difficult (if the curvature is non-zero) as the codimension {m-n} is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the {n=3} case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.

This is the ninth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page.

The focus is now on bounding {H_1} unconditionally (in particular, without resorting to the Elliott-Halberstam conjecture or its generalisations). We can bound {H_1 \leq H(k)} whenever one can find a symmetric square-integrable function {F} supported on the simplex {{\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k \leq 1 \}} such that

\displaystyle  k \int_{{\cal R}_{k-1}} (\int_0^\infty F(t_1,\dots,t_k)\ dt_k)^2\ dt_1 \dots dt_{k-1} \ \ \ \ \ (1)

\displaystyle > 4 \int_{{\cal R}_{k}} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_{k-1} dt_k.

Our strategy for establishing this has been to restrict {F} to be a linear combination of symmetrised monomials {[t_1^{a_1} \dots t_k^{a_k}]_{sym}} (restricted of course to {{\cal R}_k}), where the degree {a_1+\dots+a_k} is small; actually, it seems convenient to work with the slightly different basis {(1-t_1-\dots-t_k)^i [t_1^{a_1} \dots t_k^{a_k}]_{sym}} where the {a_i} are restricted to be even. The criterion (1) then becomes a large quadratic program with explicit but complicated rational coefficients. This approach has lowered {k} down to {54}, which led to the bound {H_1 \leq 270}.
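
To see the kind of exact arithmetic involved, here is a small sketch (not the project's quadratic-program code) which evaluates both sides of (1) in closed form for the toy family {F(t_1,\dots,t_k) = \sum_j c_j (1-t_1-\dots-t_k)^j}, using the beta-type integral {\int_{{\cal R}_d} (1-t_1-\dots-t_d)^c\ dt = \frac{c!}{(c+d)!}}; for such simple choices of {F} the ratio stays below {1}, which is consistent with the need for the genuinely multivariate symmetrised-monomial basis described above.

```python
# Exact evaluation of criterion (1) for F(t) = sum_j c[j]*(1 - t_1 - ... - t_k)^j.
from math import factorial
from fractions import Fraction

def criterion_ratio(c, k):
    """(k * left side) / (4 * right side) of (1); a value > 1 would give H_1 <= H(k)."""
    lhs = rhs = Fraction(0)
    for j, cj in enumerate(c):
        for l, cl in enumerate(c):
            # the inner t_k-integral of (1-s)^j over [0, 1-s'] equals (1-s')^{j+1}/(j+1)
            lhs += Fraction(cj * cl, (j + 1) * (l + 1)) * Fraction(
                factorial(j + l + 2), factorial(j + l + 1 + k))
            rhs += cj * cl * Fraction(factorial(j + l), factorial(j + l + k))
    return k * lhs / (4 * rhs)

for k in (5, 50, 54):
    print(k, float(criterion_ratio([1, 2], k)))
```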

Actually, we know that the more general criterion

\displaystyle  k \int_{(1-\epsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\dots,t_k)\ dt_k)^2\ dt_1 \dots dt_{k-1} \ \ \ \ \ (2)

\displaystyle  > 4 \int F(t_1,\dots,t_k)^2\ dt_1 \dots dt_{k-1} dt_k

will suffice, whenever {0 \leq \epsilon < 1} and {F} is supported now on {2 \cdot {\cal R}_k} and obeys the vanishing marginal condition {\int_0^\infty F(t_1,\dots,t_k)\ dt_k = 0} whenever {t_1+\dots+t_k > 1+\epsilon}. The latter is in particular obeyed when {F} is supported on {(1+\epsilon) \cdot {\cal R}_k}. A modification of the preceding strategy has lowered {k} slightly to {53}, giving the bound {H_1 \leq 264} which is currently our best record.

However, the quadratic programs here have become extremely large and slow to run, and more efficient algorithms (or possibly more computer power) may be required to advance further.
