This is the fifth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage). In particular, the upper bound for {H_1} has been shaved a little from {272} to {270}, and we have very recently achieved the bound {H_1 \leq 8} on the generalised Elliott-Halberstam conjecture GEH, formulated as Conjecture 1 of this paper of Bombieri, Friedlander, and Iwaniec. We also have explicit bounds for {H_m} for {m \leq 5}, both with and without the assumption of the Elliott-Halberstam conjecture, as well as slightly sharper asymptotics for the upper bound for {H_m} as {m \rightarrow \infty}.

The basic strategy for bounding {H_m} still follows the general paradigm first laid out by Goldston, Pintz, and Yildirim: given an admissible {k}-tuple {(h_1,\dots,h_k)}, one needs to locate a non-negative sieve weight {\nu: {\bf Z} \rightarrow {\bf R}^+}, supported on an interval {[x,2x]} for a large {x}, such that the ratio

\displaystyle  \frac{\sum_{i=1}^k \sum_n \nu(n) 1_{n+h_i \hbox{ prime}}}{\sum_n \nu(n)} \ \ \ \ \ (1)

is asymptotically larger than {m} as {x \rightarrow \infty}; since {\nu} is non-negative, this forces at least {m+1} of {n+h_1,\dots,n+h_k} to be simultaneously prime for infinitely many {n}, which shows that {H_m \leq h_k-h_1}. Thus one wants to locate a sieve weight {\nu} for which one has good lower bounds on the numerator and good upper bounds on the denominator.
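
As a toy illustration of why a nontrivial weight is needed: for the trivial choice {\nu = 1}, each event {n+h_i \hbox{ prime}} has density roughly {1/\log x}, so the ratio (1) is only about {k/\log x}, far below {1}. A short numerical sketch (Python, with a hypothetical small tuple; not part of the actual argument):

    # Ratio (1) for the trivial weight nu(n) = 1 and the admissible
    # 3-tuple (0, 2, 6): it comes out near 3/log x, far below 1, which
    # is why a carefully designed sieve weight nu is essential.
    from sympy import isprime

    H = (0, 2, 6)          # an admissible 3-tuple
    x = 10**5

    numer = sum(sum(1 for h in H if isprime(n + h)) for n in range(x, 2 * x))
    denom = x              # sum of nu(n) = 1 over [x, 2x)
    print(numer / denom)   # approximately 0.25 at x = 10^5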

One can modify this paradigm slightly, for instance by adding the additional term {\sum_n \nu(n) 1_{n+h_1,\dots,n+h_k \hbox{ composite}}} to the numerator, or by subtracting the term {\sum_n \nu(n) 1_{n+h_1,n+h_k \hbox{ prime}}} from the numerator (which allows one to reduce the bound {h_k-h_1} to {\max(h_k-h_2,h_{k-1}-h_1)}); however, the numerical impact of these tweaks has proven to be negligible thus far.

Despite a number of experiments with other sieves, we are still relying primarily on the Selberg sieve

\displaystyle  \nu(n) := 1_{n=b\ (W)} 1_{[x,2x]}(n) \lambda(n)^2

where {\lambda(n)} is the divisor sum

\displaystyle  \lambda(n) := \sum_{d_1|n+h_1, \dots, d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R}, \dots, \frac{\log d_k}{\log R})

with {R = x^{\theta/2}}, where {\theta} is the level of distribution ({\theta=1/2-} if relying on Bombieri-Vinogradov, {\theta=1-} if assuming Elliott-Halberstam, and (in principle) {\theta = \frac{1}{2} + \frac{13}{540}-} if using Polymath8a technology), and {f: [0,+\infty)^k \rightarrow {\bf R}} is a smooth, compactly supported function. Most of the progress has come from enlarging the class of cutoff functions {f} one is permitted to use.
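
For concreteness, here is a direct (and deliberately naive) evaluation of the divisor sum {\lambda(n)} straight from the definition, with a stand-in cutoff supported on the simplex; this is purely illustrative, since the actual computations expand {\lambda(n)^2} and evaluate the resulting multiplicative sums analytically rather than term by term:

    # Brute-force evaluation of lambda(n); the cutoff f below is a
    # hypothetical stand-in supported on the simplex t_1+...+t_k < 1,
    # not the optimized Polymath8b cutoff.
    import itertools
    from math import log, prod
    from sympy import divisors, factorint

    def mobius(d):
        f = factorint(d)
        return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

    def f_cutoff(t):
        s = sum(t)
        return (1.0 - s) ** 3 if s < 1 else 0.0

    def lam(n, H, R):
        logR = log(R)
        total = 0.0
        for ds in itertools.product(*(divisors(n + h) for h in H)):
            w = f_cutoff([log(d) / logR for d in ds])
            if w:
                total += prod(mobius(d) for d in ds) * w
        return total

    print(lam(97, (0, 2, 6), R=50.0))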

The baseline bounds for the numerator and denominator in (1) (as established for instance in this previous post) are as follows. If {f} is supported on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k < 1 \},

and we define the mixed partial derivative {F: [0,+\infty)^k \rightarrow {\bf R}} by

\displaystyle  F(t_1,\dots,t_k) = \frac{\partial^k}{\partial t_1 \dots \partial t_k} f(t_1,\dots,t_k)

then the denominator in (1) is

\displaystyle  \frac{Bx}{W} (I_k(F) + o(1)) \ \ \ \ \ (2)

where

\displaystyle  B := (\frac{W}{\phi(W) \log R})^k

and

\displaystyle  I_k(F) := \int_{[0,+\infty)^k} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k.

Similarly, the numerator of (1) is

\displaystyle  \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_k(F) + o(1)) \ \ \ \ \ (3)

where

\displaystyle  J_k^{(m)}(F) := \int_{[0,+\infty)^{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

Thus, if we let {M_k} be the supremum of the ratio

\displaystyle  \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

over all {F} supported on {{\cal R}_k} that are not identically zero, then one can prove {H_m \leq h_k - h_1} whenever

\displaystyle  M_k > \frac{2m}{\theta}.
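
As a benchmark, the original Goldston-Pintz-Yildirim choice {F = (1-t_1-\dots-t_k)^a} can be evaluated in closed form: the Dirichlet integral formula {\int_{{\cal R}_j} (1-t_1-\dots-t_j)^c\ dt = c!/(c+j)!} gives {I_k(F) = (2a)!/(2a+k)!} and {\sum_{m=1}^k J_k^{(m)}(F) = k (2a+2)!/((a+1)^2 (2a+k+1)!)}, so the ratio is {2k(2a+1)/((a+1)(2a+k+1))}, which stays below {4} for every choice of {k} and {a}. A quick sketch (this one-parameter class is of course far weaker than the multidimensional cutoffs used in Polymath8b):

    # Lower bounds for M_k from the monomial cutoff F = (1 - t_1 - ... - t_k)^a;
    # the closed-form ratio approaches, but never reaches, 4.
    def gpy_ratio(k, a):
        return 2.0 * k * (2 * a + 1) / ((a + 1) * (2 * a + k + 1))

    for k in (5, 10, 50, 105):
        best = max(gpy_ratio(k, a) for a in range(4 * k))
        print(k, round(best, 4))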

We can improve this baseline in a number of ways. First, with regard to the denominator in (1): if one upgrades the Elliott-Halberstam hypothesis {EH[\theta]} to the generalised Elliott-Halberstam hypothesis {GEH[\theta]} (currently known for {\theta < 1/2}, thanks to Motohashi, but conjectured for {\theta < 1}), the asymptotic (2) holds under the more general hypothesis that {F} is supported in a polytope {R}, as long as {R} obeys the inclusion

\displaystyle  R + R \subset \bigcup_{m=1}^k \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 2; t_m < 2/\theta \} \cup \frac{2}{\theta} \cdot {\cal R}_k; \ \ \ \ \ (4)

examples of polytopes {R} obeying this constraint include the modified simplex

\displaystyle  {\cal R}'_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 1 \hbox{ for all } 1 \leq m \leq k \},

the prism

\displaystyle  {\cal R}_{k-1} \times [0, 1/\theta),

the dilated simplex

\displaystyle  \frac{1}{\theta} \cdot {\cal R}_k,

and the truncated simplex

\displaystyle  \frac{k}{k-1} \cdot {\cal R}_k \cap [0,1/\theta)^k.

See this previous post for a proof of these claims.
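
As a quick sanity check (random sampling, so not a proof, and with hypothetical small parameters), one can test the inclusion (4) numerically; here is a sketch for the prism {{\cal R}_{k-1} \times [0,1/\theta)}:

    # Monte Carlo test of the inclusion (4) for the prism
    # R_{k-1} x [0, 1/theta), with the illustrative choices k = 4,
    # theta = 1/2; no counterexample should be found.
    import random

    k, theta = 4, 0.5

    def sample_prism():
        while True:  # rejection-sample the simplex factor R_{k-1}
            t = [random.random() for _ in range(k - 1)]
            if sum(t) < 1:
                return t + [random.uniform(0, 1 / theta)]

    def in_union(t):  # membership in the right-hand side of (4)
        if sum(t) < 2 / theta:  # the dilated simplex (2/theta).R_k
            return True
        return any(sum(t) - t[m] < 2 and t[m] < 2 / theta for m in range(k))

    for _ in range(10**5):
        s = [a + b for a, b in zip(sample_prism(), sample_prism())]
        assert in_union(s)
    print("no counterexample found")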

With regard to the numerator, the asymptotic (3) is valid whenever, for each {1 \leq m \leq k}, the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {{\cal R}_{k-1}}. This is automatic if {F} is supported on {{\cal R}_k}, or on the slightly larger region {{\cal R}'_k}, but is an additional constraint when {F} is supported on one of the other polytopes {R} mentioned above.

More recently, we have obtained a more flexible version of the above asymptotic: if the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}} for some {0 < \varepsilon < 1}, then the numerator of (1) has a lower bound of

\displaystyle  \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_{k,\varepsilon}(F) + o(1))

where

\displaystyle  J_{k,\varepsilon}^{(m)}(F) := \int_{(1-\varepsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

A proof is given here. Putting all this together, we can conclude

Theorem 1 Suppose we can find {0 \leq \varepsilon < 1} and a function {F} supported on a polytope {R} obeying (4), not identically zero and with all marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanishing outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}}, and with

\displaystyle  \frac{\sum_{m=1}^k J_{k,\varepsilon}^{(m)}(F)}{I_k(F)} > \frac{2m}{\theta}.

Then {GEH[\theta]} implies {H_m \leq h_k-h_1}.
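
For orientation, taking {\varepsilon = 0} and {R = {\cal R}_k} in Theorem 1 recovers the baseline criterion: the marginal condition is then automatic, one has {J_{k,0}^{(m)}(F) = J_k^{(m)}(F)}, and the hypothesis reduces to {M_k > 2m/\theta}.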

In principle, this very flexible criterion for upper bounding {H_m} should lead to better bounds than before, and in particular we have now established {H_1 \leq 8} on GEH.

Another promising direction is to try to improve the analysis at medium {k} (more specifically, in the regime {k \sim 50}), which is the regime that currently yields the best bounds without EH or GEH, via numerical quadratic programming. Right now we are only using {\theta=1/2} and the baseline {M_k} analysis, basically for two reasons:

  • We do not have good numerical formulae for integrating polynomials on any region more complicated than the simplex {{\cal R}_k} in medium dimension (on the simplex itself one can integrate polynomials exactly, monomial by monomial; see the sketch after this list).
  • The estimates {MPZ^{(i)}[\varpi,\delta]} produced by Polymath8a involve a {\delta} parameter, which introduces additional restrictions on the support of {F} (conservatively, it restricts {F} to {[0,\delta']^k} where {\delta' := \frac{\delta}{1/4+\varpi}} and {\theta = 1/2 + 2 \varpi}; it should be possible to be looser than this (as was done in Polymath8a) but this has not been fully explored yet). This then triggers the previous obstacle of having to integrate on something other than a simplex.
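
For reference, the identity that makes the simplex itself tractable is the Dirichlet-type formula {\int_{{\cal R}_k} t_1^{a_1} \cdots t_k^{a_k}\ dt = a_1! \cdots a_k! / (k+a_1+\dots+a_k)!}, which integrates any polynomial exactly, monomial by monomial; no comparably clean formula is available for the other polytopes above. A minimal sketch (not the project's actual quadrature code):

    # Exact monomial integration over the simplex R_k; polynomials are
    # then integrated term by term.
    from math import factorial, prod

    def simplex_monomial_integral(exps):
        return prod(factorial(a) for a in exps) / factorial(len(exps) + sum(exps))

    # Worked check: the integral of (1 - t_1 - t_2)^2 over R_2, expanded
    # into monomials, should equal 2!/4! = 1/12.
    terms = {(0, 0): 1, (1, 0): -2, (0, 1): -2, (2, 0): 1, (0, 2): 1, (1, 1): 2}
    print(sum(c * simplex_monomial_integral(e) for e, c in terms.items()))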

However, these look like solvable problems, and so I would expect that further unconditional improvement for {H_1} should be possible.
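
To illustrate the flavour of the optimisation, here is a minimal sketch of how maximising the ratio {\sum_m J_k^{(m)}(F)/I_k(F)} over a finite-dimensional class of cutoffs becomes a generalised eigenvalue problem. For simplicity it restricts to {F} that are polynomials in {t_1+\dots+t_k} alone, a far smaller class than the symmetric polynomials actually used, so the printed value is only a crude lower bound for {M_5} (and the code is illustrative, not the project's):

    # With F = sum_a c_a (1 - P_1)^a, P_1 = t_1 + ... + t_k, both
    # quadratic forms I_k(F) = c^T M1 c and sum_m J_k^(m)(F) = c^T M2 c
    # have closed-form entries by the Dirichlet formula, and the best
    # ratio is the largest generalized eigenvalue of (M2, M1).
    from math import factorial
    import numpy as np
    from scipy.linalg import eigh

    k, deg = 5, 8
    M1 = np.array([[factorial(a + b) / factorial(a + b + k)
                    for b in range(deg + 1)] for a in range(deg + 1)])
    M2 = np.array([[k * factorial(a + b + 2)
                    / ((a + 1) * (b + 1) * factorial(a + b + k + 1))
                    for b in range(deg + 1)] for a in range(deg + 1)])
    print(eigh(M2, M1, eigvals_only=True)[-1])  # crude lower bound for M_5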