
This is the tenth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n);

the previous thread may be found here.

Numerical progress on these bounds has slowed in recent months, although we have very recently lowered the unconditional bound on H_1 from 252 to 246 (see the wiki page for more detailed results).  While there may still be scope for further improvement (particularly with respect to bounds for H_m with m=2,3,4,5, which we have not focused on for a while), it looks like we have reached the point of diminishing returns, and it is time to turn to the task of writing up the results.

A draft version of the paper so far may be found here (with the directory of source files here).  Currently, the introduction and the sieve-theoretic portions of the paper are written up, although the sieve-theoretic arguments are surprisingly lengthy, and some simplification (or other reorganisation) may well be possible.  Other portions of the paper that have not yet been written up include the asymptotic analysis of M_k for large k (leading in particular to results for m=2,3,4,5), and a description of the quadratic programming that is used to estimate M_k for small and medium k.  Also we will eventually need an appendix to summarise the material from Polymath8a that we would use to generate various narrow admissible tuples.

One issue here is that our current unconditional bounds on H_m for m=2,3,4,5 rely on a distributional estimate on the primes which we believed to be true in Polymath8a, but never actually worked out (among other things, there were some delicate algebraic geometry issues concerning the vanishing of certain cohomology groups that were never resolved).  This issue does not affect the m=1 calculations, which only use the Bombieri-Vinogradov theorem or else assume the generalised Elliott-Halberstam conjecture.  As such, we will have to rework the computations for these H_m, given that the task of trying to attain the conjectured distributional estimate on the primes would be a significant amount of work that is rather disjoint from the rest of the Polymath8b writeup.  One could simply dust off the old Maple code for this (e.g. one could tweak the code here, with the constraint 1080*varpi/13 + 330*delta/13 < 1 being replaced by 600*varpi/7 + 180*delta/7 < 1), but there is also a chance that our asymptotic bounds for M_k (currently given in messy detail here) could be sharpened.  I plan to look at this issue fairly soon.
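As a purely illustrative aside (the function names below are mine, not anything from the project's actual scripts), the constraint substitution amounts to nothing more than the following check, which lets candidate {(\varpi,\delta)} pairs be tested against the estimate that was actually proven in Polymath8a rather than the one that was merely conjectured:

from fractions import Fraction as Fr

def mpz_proven(varpi, delta):
    # the constraint actually established in Polymath8a
    return Fr(600, 7) * varpi + Fr(180, 7) * delta < 1

def mpz_conjectured(varpi, delta):
    # the stronger constraint that was believed but never fully verified
    return Fr(1080, 13) * varpi + Fr(330, 13) * delta < 1

# example: a pair allowed by the conjectured estimate but not by the proven one
varpi, delta = Fr(119, 10000), Fr(1, 10000)
print(mpz_proven(varpi, delta), mpz_conjectured(varpi, delta))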

Also, there are a number of smaller observations (e.g. the parity problem barrier that prevents us from ever getting a better bound on H_1 than 6) that should also go into the paper at some point; the current outline of the paper as given in the draft is not necessarily comprehensive.

This is the ninth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page.

The focus is now on bounding {H_1} unconditionally (in particular, without resorting to the Elliott-Halberstam conjecture or its generalisations). We can bound {H_1 \leq H(k)} whenever one can find a symmetric square-integrable function {F} supported on the simplex {{\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k \leq 1 \}} such that

\displaystyle  k \int_{{\cal R}_{k-1}} (\int_0^\infty F(t_1,\dots,t_k)\ dt_k)^2\ dt_1 \dots dt_{k-1} \ \ \ \ \ (1)

\displaystyle > 4 \int_{{\cal R}_{k}} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_{k-1} dt_k.

Our strategy for establishing this has been to restrict {F} to be a linear combination of symmetrised monomials {[t_1^{a_1} \dots t_k^{a_k}]_{sym}} (restricted of course to {{\cal R}_k}), where the degree {a_1+\dots+a_k} is small; actually, it seems convenient to work with the slightly different basis {(1-t_1-\dots-t_k)^i [t_1^{a_1} \dots t_k^{a_k}]_{sym}} where the {a_i} are restricted to be even. The criterion (1) then becomes a large quadratic program with explicit but complicated rational coefficients. This approach has lowered {k} down to {54}, which led to the bound {H_1 \leq 270}.
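To give a concrete feel for how a criterion such as (1) becomes a finite quadratic program, here is a small-scale sketch of mine in Python (it is not the project's actual code): it restricts {F} to polynomials of low total degree on the simplex, uses the plain monomial basis rather than the symmetrised basis described above, builds the two Gram matrices exactly via the Dirichlet integral {\int_{{\cal R}_n} t_1^{a_1} \dots t_n^{a_n} (1-t_1-\dots-t_n)^b\ dt = a_1! \dots a_n!\, b! / (a_1+\dots+a_n+b+n)!}, and then takes the largest generalized eigenvalue, which is the best value of {\sum_m J_k^{(m)}(F)/I_k(F)} attainable within that class; criterion (1) asks for this quantity to exceed {4}.

from fractions import Fraction
from itertools import product
from math import factorial
import numpy as np
from scipy.linalg import eigh

def simplex_moment(exps, extra=0):
    # exact value of int_{R_n} t_1^{e_1}...t_n^{e_n} (1 - t_1 - ... - t_n)^extra dt
    num = factorial(extra)
    for e in exps:
        num *= factorial(e)
    return Fraction(num, factorial(sum(exps) + extra + len(exps)))

def M_lower_bound(k, deg):
    basis = [a for a in product(range(deg + 1), repeat=k) if sum(a) <= deg]
    n = len(basis)
    gram_I = np.zeros((n, n))    # Gram matrix of I_k(F) = int_{R_k} F^2
    gram_J = np.zeros((n, n))    # Gram matrix of sum_m J_k^(m)(F)
    for i, a in enumerate(basis):
        for j, b in enumerate(basis):
            gram_I[i, j] = float(simplex_moment([x + y for x, y in zip(a, b)]))
            total = Fraction(0)
            for m in range(k):
                # integrating out t_m turns t_m^{a_m} into (1-s)^{a_m+1}/(a_m+1), s = sum of the rest
                rest = [a[r] + b[r] for r in range(k) if r != m]
                total += simplex_moment(rest, extra=a[m] + b[m] + 2) / ((a[m] + 1) * (b[m] + 1))
            gram_J[i, j] = float(total)
    return eigh(gram_J, gram_I, eigvals_only=True)[-1]   # largest generalized eigenvalue

for k in (3, 4, 5):
    print(k, M_lower_bound(k, deg=3))   # lower bounds on M_k; the criterion needs a k with value > 4

The symmetrised basis (and the extra factors of {(1-t_1-\dots-t_k)^i}) described above are what keep the basis size manageable at realistic values of {k}; the crude monomial basis used in this sketch becomes unwieldy long before {k} reaches the range of interest here.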

Actually, we know that the more general criterion

\displaystyle  k \int_{(1-\epsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\dots,t_k)\ dt_k)^2\ dt_1 \dots dt_{k-1} \ \ \ \ \ (2)

\displaystyle  > 4 \int F(t_1,\dots,t_k)^2\ dt_1 \dots dt_{k-1} dt_k

will suffice, whenever {0 \leq \epsilon < 1} and {F} is supported now on {2 \cdot {\cal R}_k} and obeys the vanishing marginal condition {\int_0^\infty F(t_1,\dots,t_k)\ dt_k = 0} whenever {t_1+\dots+t_k > 1+\epsilon}. The latter is in particular obeyed when {F} is supported on {(1+\epsilon) \cdot {\cal R}_k}. A modification of the preceding strategy has lowered {k} slightly to {53}, giving the bound {H_1 \leq 264} which is currently our best record.

However, the quadratic programs here have become extremely large and slow to run, and more efficient algorithms (or possibly more computer power) may be required to advance further.

This is the eighth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page.

The big news since the last thread is that we have managed to obtain the (sieve-theoretically) optimal bound of {H_1 \leq 6} assuming the generalised Elliott-Halberstam conjecture (GEH), which pretty much closes off that part of the story. Unconditionally, our bound on {H_1} is still {H_1 \leq 270}. This bound was obtained using the “vanilla” Maynard sieve, in which the cutoff {F} was supported in the original simplex {\{ t_1+\dots+t_k \leq 1\}}, and only Bombieri-Vinogradov was used. In principle, we can enlarge the sieve support a little bit further now; for instance, we can enlarge to {\{ t_1+\dots+t_k \leq \frac{k}{k-1} \}}, but then have to shrink the J integrals to {\{t_1+\dots+t_{k-1} \leq 1-\epsilon\}}, provided that the marginals vanish for {\{ t_1+\dots+t_{k-1} \geq 1+\epsilon \}}. However, we do not yet know how to numerically work with these expanded problems.

Given the substantial progress made so far, it looks like we are close to the point where we should declare victory and write up the results (though we should take one last look to see if there is any room to improve the {H_1 \leq 270} bound). There is actually a fair bit to write up:

  • Improvements to the Maynard sieve (pushing beyond the simplex, the epsilon trick, and pushing beyond the cube);
  • Asymptotic bounds for {M_k} and hence {H_m};
  • Explicit bounds for {H_m, m \geq 2} (using the Polymath8a results);
  • {H_1 \leq 270};
  • {H_1 \leq 6} on GEH (and parity obstructions to any further improvement).

I will try to create a skeleton outline of such a paper in the Polymath8 Dropbox folder soon. It shouldn’t be nearly as big as the Polymath8a paper, but it will still be quite sizeable.

This is the seventh thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page.

The current focus is on improving the upper bound on {H_1} under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from {H_1 \leq 8} to {H_1 \leq 6}. Very recently, we have been able to exploit GEH more fully, leading to a promising new expansion of the sieve support region. The problem now reduces to the following:

Problem 1 Does there exist a (not necessarily convex) polytope {R \subset [0,2]^3} with quantities {0 \leq \varepsilon_1,\varepsilon_2,\varepsilon_3 \leq 1}, and a non-trivial square-integrable function {F: {\bf R}^3 \rightarrow {\bf R}} supported on {R} such that

  • {R + R \subset \{ (x,y,z) \in [0,4]^3: \min(x+y,y+z,z+x) \leq 2 \},}
  • {\int_0^\infty F(x,y,z)\ dx = 0} when {y+z \geq 1+\varepsilon_1};
  • {\int_0^\infty F(x,y,z)\ dy = 0} when {x+z \geq 1+\varepsilon_2};
  • {\int_0^\infty F(x,y,z)\ dz = 0} when {x+y \geq 1+\varepsilon_3};

and such that we have the inequality

\displaystyle  \int_{y+z \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y,z)\ dx)^2\ dy dz

\displaystyle + \int_{z+x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y,z)\ dy)^2\ dz dx

\displaystyle + \int_{x+y \leq 1-\varepsilon_3} (\int_{\bf R} F(x,y,z)\ dz)^2\ dx dy

\displaystyle  > 2 \int_R F(x,y,z)^2\ dx dy dz?

An affirmative answer to this question will imply {H_1 \leq 6} on GEH. We are “within two percent” of this claim; we cannot quite reach {2} yet, but have got as far as {1.962998}. However, we have not yet fully optimised {F} in the above problem. In particular, the simplex

\displaystyle  R = \{ (x,y,z) \in [0,2]^3: x+y+z \leq 3/2 \}

is now available, and should lead to some noticeable improvement in the numerology.

There is also a very slim chance that the twin prime conjecture is now provable on GEH. It would require an affirmative solution to the following problem:

Problem 2 Does there exist a (not necessarily convex) polytope {R \subset [0,2]^2} with quantities {0 \leq \varepsilon_1,\varepsilon_2 \leq 1}, and a non-trivial square-integrable function {F: {\bf R}^2 \rightarrow {\bf R}} supported on {R} such that

  • {R + R \subset \{ (x,y) \in [0,4]^2: \min(x,y) \leq 2 \}}

    \displaystyle  = [0,2] \times [0,4] \cup [0,4] \times [0,2],

  • {\int_0^\infty F(x,y)\ dx = 0} when {y \geq 1+\varepsilon_1};
  • {\int_0^\infty F(x,y)\ dy = 0} when {x \geq 1+\varepsilon_2};

and such that we have the inequality

\displaystyle  \int_{y \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y)\ dx)^2\ dy

\displaystyle + \int_{x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y)\ dy)^2\ dx

\displaystyle  > 2 \int_R F(x,y)^2\ dx dy?

We suspect that the answer to this question is negative, but have not formally ruled it out yet.
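As a quick sanity check (a computation of my own, not from the discussion above), the very simplest candidate shows how little room there is in Problem 2: take {R = [0,1]^2}, {\varepsilon_1=\varepsilon_2=0}, and {F} the indicator function of {R}. Then {R+R = [0,2]^2} is contained in {[0,2] \times [0,4]}, the marginals {\int_0^\infty F\ dx} and {\int_0^\infty F\ dy} vanish for {y > 1} and {x > 1} respectively, and

\displaystyle  \int_{y \leq 1} (\int_{\bf R} F(x,y)\ dx)^2\ dy + \int_{x \leq 1} (\int_{\bf R} F(x,y)\ dy)^2\ dx = 1 + 1 = 2 = 2 \int_R F(x,y)^2\ dx dy,

so this choice achieves equality but not the strict inequality required; any affirmative answer would have to improve on this trivial example by a positive margin.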

For the rest of this post, I will justify why positive answers to these sorts of variational problems are sufficient to get bounds on {H_1} (or more generally {H_m}).


This is the sixth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage).

The current focus is on improving the upper bound on {H_1} under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from {H_1 \leq 8} to {H_1 \leq 6}, which looks to be the limit of the method (see this previous comment for a semi-rigorous reason as to why {H_1 \leq 4} is not possible with this method). With the most general Selberg sieve available, the problem reduces to the following three-dimensional variational one:

Problem 1 Does there exist a (not necessarily convex) polytope {R \subset [0,1]^3} with quantities {0 \leq \varepsilon_1,\varepsilon_2,\varepsilon_3 \leq 1}, and a non-trivial square-integrable function {F: {\bf R}^3 \rightarrow {\bf R}} supported on {R} such that

  • {R + R \subset \{ (x,y,z) \in [0,2]^3: \min(x+y,y+z,z+x) \leq 2 \},}
  • {\int_0^\infty F(x,y,z)\ dx = 0} when {y+z \geq 1+\varepsilon_1};
  • {\int_0^\infty F(x,y,z)\ dy = 0} when {x+z \geq 1+\varepsilon_2};
  • {\int_0^\infty F(x,y,z)\ dz = 0} when {x+y \geq 1+\varepsilon_3};

and such that we have the inequality

\displaystyle  \int_{y+z \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y,z)\ dx)^2\ dy dz

\displaystyle + \int_{z+x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y,z)\ dy)^2\ dz dx

\displaystyle + \int_{x+y \leq 1-\varepsilon_3} (\int_{\bf R} F(x,y,z)\ dz)^2\ dx dy

\displaystyle  > 2 \int_R F(x,y,z)^2\ dx dy dz?

(Initially it was assumed that {R} was convex, but we have now realised that this is not necessary.)

An affirmative answer to this question will imply {H_1 \leq 6} on GEH. We are “within almost two percent” of this claim; we cannot quite reach {2} yet, but have got as far as {1.959633}. However, we have not yet fully optimised {F} in the above problem.

The most promising route so far is to take the symmetric polytope

\displaystyle  R = \{ (x,y,z) \in [0,1]^3: x+y+z \leq 3/2 \}

with {F} symmetric as well, and {\varepsilon_1=\varepsilon_2=\varepsilon_3=\varepsilon} (we suspect that the optimal {\varepsilon} will be roughly {1/6}). (However, it is certainly worth also taking a look at easier model problems, such as the polytope {{\cal R}'_3 := \{ (x,y,z) \in [0,1]^3: x+y,y+z,z+x \leq 1\}}, which has no vanishing marginal conditions to contend with; more recently we have been looking at the non-convex polytope {R = \{x+y,x+z \leq 1 \} \cup \{ x+y,y+z \leq 1 \} \cup \{ x+z,y+z \leq 1\}}.) Some further details of this particular case are given below the fold.
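For what it is worth, here is a rough Monte Carlo sketch of mine (the trial function and all numerical choices are my own, purely for illustration) evaluating the quantity in Problem 1 for the model polytope {{\cal R}'_3} with the crudest possible choice {F = 1_{{\cal R}'_3}} and {\varepsilon_1=\varepsilon_2=\varepsilon_3=0}; the marginals of this {F} automatically vanish once the remaining two coordinates sum to more than {1}, and the resulting ratio is about {1.75}, comfortably short of the required {2}, so nontrivial choices of {F} are certainly needed even in this model problem.

import numpy as np

rng = np.random.default_rng(0)
N = 10**6
pts = rng.random((N, 3))                      # uniform samples from [0,1]^3
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
in_R = (x + y <= 1) & (y + z <= 1) & (z + x <= 1)
rhs = 2 * in_R.mean()                         # 2 * int_R F^2  (= 2 * vol(R'_3) for this F)

# for F = 1_{R'_3}, the x-marginal equals 1 - max(y,z) on {y + z <= 1} and 0 otherwise;
# by symmetry all three marginal terms are equal
def marginal_term(u, v):
    g = np.where(u + v <= 1, 1 - np.maximum(u, v), 0.0)
    return (g ** 2).mean()                    # average over [0,1]^2, i.e. int_{u+v<=1} g^2 du dv

uv = rng.random((N, 2))
lhs = 3 * marginal_term(uv[:, 0], uv[:, 1])
print(lhs, rhs, lhs / in_R.mean())            # roughly 0.44, 0.50, 1.75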

There should still be some progress to be made in the other regimes of interest – the unconditional bound on {H_1} (currently at {270}), and on any further progress in asymptotic bounds for {H_m} for larger {m} – but the current focus is certainly on the bound on {H_1} on GEH, as we seem to be tantalisingly close to an optimal result here.


This is the fifth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage). In particular, the upper bound for {H_1} has been shaved a little from {272} to {270}, and we have very recently achieved the bound {H_1 \leq 8} on the generalised Elliott-Halberstam conjecture GEH, formulated as Conjecture 1 of this paper of Bombieri, Friedlander, and Iwaniec. We also have explicit bounds for {H_m} for {m \leq 5}, both with and without the assumption of the Elliott-Halberstam conjecture, as well as slightly sharper asymptotics for the upper bound for {H_m} as {m \rightarrow \infty}.

The basic strategy for bounding {H_m} still follows the general paradigm first laid out by Goldston, Pintz, Yildirim: given an admissible {k}-tuple {(h_1,\dots,h_k)}, one needs to locate a non-negative sieve weight {\nu: {\bf Z} \rightarrow {\bf R}^+}, supported on an interval {[x,2x]} for a large {x}, such that the ratio

\displaystyle  \frac{\sum_{i=1}^k \sum_n \nu(n) 1_{n+h_i \hbox{ prime}}}{\sum_n \nu(n)} \ \ \ \ \ (1)

is asymptotically larger than {m} as {x \rightarrow \infty}; since the ratio is a weighted average of the integer-valued quantity {\sum_{i=1}^k 1_{n+h_i \hbox{ prime}}}, this forces at least {m+1} of the {n+h_i} to be simultaneously prime for some {n \in [x,2x]}, for all sufficiently large {x}, and hence shows that {H_m \leq h_k-h_1}. Thus one wants to locate a sieve weight {\nu} for which one has good lower bounds on the numerator and good upper bounds on the denominator.

One can modify this paradigm slightly, for instance by adding the additional term {\sum_n \nu(n) 1_{n+h_1,\dots,n+h_k \hbox{ composite}}} to the numerator, or by subtracting the term {\sum_n \nu(n) 1_{n+h_1,n+h_k \hbox{ prime}}} from the numerator (which allows one to reduce the bound {h_k-h_1} to {\max(h_k-h_2,h_{k-1}-h_1)}); however, the numerical impact of these tweaks has proven to be negligible thus far.

Despite a number of experiments with other sieves, we are still relying primarily on the Selberg sieve

\displaystyle  \nu(n) := 1_{n=b\ (W)} 1_{[x,2x]}(n) \lambda(n)^2

where {\lambda(n)} is the divisor sum

\displaystyle  \lambda(n) := \sum_{d_1|n+h_1, \dots, d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R}, \dots, \frac{\log d_k}{\log R})

where {R = x^{\theta/2}}, {\theta} is the level of distribution ({\theta=1/2-} if relying on Bombieri-Vinogradov, {\theta=1-} if assuming Elliott-Halberstam, and (in principle) {\theta = \frac{1}{2} + \frac{13}{540}-} if using Polymath8a technology), and {f: [0,+\infty)^k \rightarrow {\bf R}} is a smooth, compactly supported function. Most of the progress has come by enlarging the class of cutoff functions {f} one is permitted to use.
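Purely by way of illustration (this has nothing to do with the numerically serious parts of the project), the divisor sum {\lambda(n)} can be evaluated directly for very small parameters; in the sketch below the tuple {(0,2,6)}, the toy cutoff, the tiny value of {R}, and all function names are choices of mine rather than anything used in the actual computations.

from math import log
from itertools import product
from sympy import divisors, factorint

H = (0, 2, 6)      # a small admissible tuple, for illustration only
R = 30.0           # toy truncation level; in the sieve itself R = x^{theta/2} is enormous

def mu(d):
    # Moebius function via factorisation
    fac = factorint(d)
    if any(e > 1 for e in fac.values()):
        return 0
    return -1 if len(fac) % 2 else 1

def f(t):
    # crude stand-in for a smooth cutoff supported on the simplex t_1 + ... + t_k <= 1
    s = sum(t)
    return (1.0 - s) ** (len(t) + 1) if s <= 1 else 0.0

def lam(n):
    # lambda(n) = sum over d_i | n+h_i of mu(d_1)...mu(d_k) f(log d_1/log R, ..., log d_k/log R)
    total = 0.0
    divisor_lists = [[d for d in divisors(n + h) if d <= R] for h in H]
    for ds in product(*divisor_lists):
        w = f([log(d) / log(R) for d in ds])
        if w:
            m = 1
            for d in ds:
                m *= mu(d)
            total += m * w
    return total

for n in range(11, 20):
    print(n, [n + h for h in H], round(lam(n), 4))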

The baseline bounds for the numerator and denominator in (1) (as established for instance in this previous post) are as follows. If {f} is supported on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k < 1 \},

and we define the mixed partial derivative {F: [0,+\infty)^k \rightarrow {\bf R}} by

\displaystyle  F(t_1,\dots,t_k) = \frac{\partial^k}{\partial t_1 \dots \partial t_k} f(t_1,\dots,t_k)

then the denominator in (1) is

\displaystyle  \frac{Bx}{W} (I_k(F) + o(1)) \ \ \ \ \ (2)

where

\displaystyle  B := (\frac{W}{\phi(W) \log R})^k

and

\displaystyle  I_k(F) := \int_{[0,+\infty)^k} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k.

Similarly, the numerator of (1) is

\displaystyle  \frac{Bx}{W} \frac{\theta}{2} (\sum_{m=1}^k J^{(m)}_k(F) + o(1)) \ \ \ \ \ (3)

where

\displaystyle  J_k^{(m)}(F) := \int_{[0,+\infty)^{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

Thus, if we let {M_k} be the supremum of the ratio

\displaystyle  \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

whenever {F} is supported on {{\cal R}_k} and is non-vanishing, then one can prove {H_m \leq h_k - h_1} whenever

\displaystyle  M_k > \frac{2m}{\theta}.

We can improve this baseline in a number of ways. Firstly, with regards to the denominator in (1), if one upgrades the Elliott-Halberstam hypothesis {EH[\theta]} to the generalised Elliott-Halberstam hypothesis {GEH[\theta]} (currently known for {\theta < 1/2}, thanks to Motohashi, but conjectured for {\theta < 1}), the asymptotic (2) holds under the more general hypothesis that {F} is supported in a polytope {R}, as long as {R} obeys the inclusion

\displaystyle  R + R \subset \bigcup_{m=1}^k \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: \ \ \ \ \ (4)

\displaystyle  t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 2; t_m < 2/\theta \} \cup \frac{2}{\theta} \cdot {\cal R}_k;

examples of polytopes {R} obeying this constraint include the modified simplex

\displaystyle  {\cal R}'_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 1

\displaystyle \hbox{ for all } 1 \leq m \leq k \},

the prism

\displaystyle  {\cal R}_{k-1} \times [0, 1/\theta)

the dilated simplex

\displaystyle  \frac{1}{\theta} \cdot {\cal R}_k

and the truncated simplex

\displaystyle  \frac{k}{k-1} \cdot {\cal R}_k \cap [0,1/\theta)^k.

See this previous post for a proof of these claims.
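As a cheap sanity check on candidate polytopes (this is only an illustration of mine, not any part of the formal verification, and the helper names are invented), one can test the sumset condition (4) numerically by sampling random pairs of points from {R}; the sketch below does this for the prism {{\cal R}_{k-1} \times [0,1/\theta)} with {\theta = 1/2} and {k=4}, where no violations should be reported.

import numpy as np

rng = np.random.default_rng(1)
k, theta = 4, 0.5

def in_candidate(t):
    # the prism: first k-1 coordinates in the unit simplex, last coordinate in [0, 1/theta)
    return t[:-1].sum() < 1 and t[-1] < 1 / theta

def in_target(s):
    # the region on the right-hand side of (4)
    if s.sum() <= 2 / theta:                       # the (2/theta) * R_k piece
        return True
    for m in range(k):                             # the union over m
        if s.sum() - s[m] < 2 and s[m] < 2 / theta:
            return True
    return False

def sample_candidate():
    box = np.array([1.0] * (k - 1) + [1 / theta])  # bounding box for rejection sampling
    while True:
        t = rng.random(k) * box
        if in_candidate(t):
            return t

violations = sum(1 for _ in range(20000)
                 if not in_target(sample_candidate() + sample_candidate()))
print("violations found:", violations)             # expect 0 for this choice of R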

With regards to the numerator, the asymptotic (3) is valid whenever, for each {1 \leq m \leq k}, the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {{\cal R}_{k-1}}. This is automatic if {F} is supported on {{\cal R}_k}, or on the slightly larger region {{\cal R}'_k}, but is an additional constraint when {F} is supported on one of the other polytopes {R} mentioned above.

More recently, we have obtained a more flexible version of the above asymptotic: if the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}} for some {0 < \varepsilon < 1}, then the numerator of (1) has a lower bound of

\displaystyle  \frac{Bx}{W} \frac{\theta}{2} (\sum_{m=1}^k J^{(m)}_{k,\varepsilon}(F) + o(1))

where

\displaystyle  J_{k,\varepsilon}^{(m)}(F) := \int_{(1-\varepsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

A proof is given here. Putting all this together, we can conclude

Theorem 1 Suppose we can find {0 \leq \varepsilon < 1} and a function {F} supported on a polytope {R} obeying (4), not identically zero and with all marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanishing outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}}, and with

\displaystyle  \frac{\sum_{m=1}^k J_{k,\varepsilon}^{(m)}(F)}{I_k(F)} > \frac{2m}{\theta}.

Then {GEH[\theta]} implies {H_m \leq h_k-h_1}.

In principle, this very flexible criterion for upper bounding {H_m} should lead to better bounds than before, and in particular we have now established {H_1 \leq 8} on GEH.

Another promising direction is to try to improve the analysis at medium {k} (more specifically, in the regime {k \sim 50}), which is where we are currently at without EH or GEH through numerical quadratic programming. Right now we are only using {\theta=1/2} and using the baseline {M_k} analysis, basically for two reasons:

  • We do not have good numerical formulae for integrating polynomials on any region more complicated than the simplex {{\cal R}_k} in medium dimension.
  • The estimates {MPZ^{(i)}[\varpi,\delta]} produced by Polymath8a involve a {\delta} parameter, which introduces additional restrictions on the support of {F} (conservatively, it restricts {F} to {[0,\delta']^k} where {\delta' := \frac{\delta}{1/4+\varpi}} and {\theta = 1/2 + 2 \varpi}; it should be possible to be looser than this (as was done in Polymath8a) but this has not been fully explored yet). This then triggers the previous obstacle of having to integrate on something other than a simplex.
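To get a rough feel for how restrictive the conservative condition in the last item is (a back-of-envelope computation of mine, using only the relations quoted above), one can translate a sample {(\varpi,\delta)} pair allowed by the proven Polymath8a constraint into the corresponding {\theta} and {\delta'}:

from fractions import Fraction as Fr

def sieve_params(varpi, delta):
    # theta = 1/2 + 2*varpi and delta' = delta/(1/4 + varpi), as above
    assert Fr(600, 7) * varpi + Fr(180, 7) * delta < 1    # the proven MPZ constraint
    return Fr(1, 2) + 2 * varpi, delta / (Fr(1, 4) + varpi)

theta, delta_prime = sieve_params(Fr(1, 120), Fr(1, 600))
print(theta, delta_prime, float(delta_prime))
# for this sample pair delta' is roughly 0.006, so the conservative restriction of F
# to [0, delta']^k is very severe compared with the simplex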

However, these look like solvable problems, and so I would expect that further unconditional improvement for {H_1} should be possible.

This is the fourth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}.
  • (Polymath8b, tentative) {H_1 \leq 272}. Assuming Elliott-Halberstam, {H_2 \leq 272}.
  • (Polymath8b, tentative) {H_2 \leq 429{,}822}. Assuming Elliott-Halberstam, {H_4 \leq 493{,}408}.
  • (Polymath8b, tentative) {H_3 \leq 26{,}682{,}014}. (Presumably a comparable bound also holds for {H_6} on Elliott-Halberstam, but this has not been computed.)
  • (Polymath8b) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}. Assuming Elliott-Halberstam, {H_m \ll m e^{2m}} for sufficiently large {m}.

While the {H_1} bound on the Elliott-Halberstam conjecture has not improved since the start of the Polymath8b project, there is reason to hope that it will soon fall, hopefully to {8}. This is because we have begun to exploit more fully the fact that when using “multidimensional Selberg-GPY” sieves of the form

\displaystyle  \nu(n) := \sigma_{f,k}(n)^2

with

\displaystyle  \sigma_{f,k}(n) := \sum_{d_1|n+h_1,\dots,d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R},\dots,\frac{\log d_k}{\log R}),

where {R := x^{\theta/2}}, it is not necessary for the smooth function {f: [0,+\infty)^k \rightarrow {\bf R}} to be supported on the simplex

\displaystyle {\cal R}_k := \{ (t_1,\dots,t_k)\in [0,1]^k: t_1+\dots+t_k \leq 1\},

but can in fact be allowed to range on larger sets. First of all, {f} may instead be supported on the slightly larger polytope

\displaystyle {\cal R}'_k := \{ (t_1,\dots,t_k)\in [0,1]^k: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 1

\displaystyle  \hbox{ for all } j=1,\dots,k\}.

However, it turns out that more is true: given a sufficiently general version of the Elliott-Halberstam conjecture {EH[\theta]} at the given value of {\theta}, one may work with functions {f} supported on more general domains {R}, so long as the sumset {R+R := \{ t+t': t,t'\in R\}} is contained in the non-convex region

\displaystyle  \bigcup_{j=1}^k \{ (t_1,\dots,t_k)\in [0,\frac{2}{\theta}]^k: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 2 \} \cup \frac{2}{\theta} \cdot {\cal R}_k, \ \ \ \ \ (1)

and also provided that the restriction

\displaystyle  (t_1,\dots,t_{j-1},t_{j+1},\dots,t_k) \mapsto f(t_1,\dots,t_{j-1},0,t_{j+1},\dots,t_k) \ \ \ \ \ (2)

is supported on the simplex

\displaystyle {\cal R}_{k-1} := \{ (t_1,\dots,t_{j-1},t_{j+1},\dots,t_k)\in [0,1]^{k-1}:

\displaystyle t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 1\}.

More precisely, if {f} is a smooth function, not identically zero, with the above properties for some {R}, and the ratio

\displaystyle  \sum_{j=1}^k \int_{{\cal R}_{k-1}} f_{1,\dots,j-1,j+1,\dots,k}(t_1,\dots,t_{j-1},0,t_{j+1},\dots,t_k)^2 \ \ \ \ \ (3)

\displaystyle dt_1 \dots dt_{j-1} dt_{j+1} \dots dt_k

\displaystyle  / \int_R f_{1,\dots,k}^2(t_1,\dots,t_k)\ dt_1 \dots dt_k

is larger than {\frac{2m}{\theta}}, then the claim {DHL[k,m+1]} holds (assuming {EH[\theta]}), and in particular {H_m \leq H(k)}.

I’ll explain why one can do this below the fold. Taking this for granted, we can rewrite this criterion in terms of the mixed derivative {F := f_{1,\dots,k}}, the upshot being that if one can find a smooth function {F} supported on {R} that obeys the vanishing marginal conditions

\displaystyle  \int F( t_1,\dots,t_k )\ dt_j = 0

whenever {1 \leq j \leq k} and {t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k > 1}, and the ratio

\displaystyle  \frac{\sum_{j=1}^k J_k^{(j)}(F)}{I_k(F)} \ \ \ \ \ (4)

is larger than {\frac{2m}{\theta}}, where

\displaystyle  I_k(F) := \int_R F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k

and

\displaystyle  J_k^{(j)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1/\theta} F(t_1,\dots,t_k)\ dt_j)^2 dt_1 \dots dt_{j-1} dt_{j+1} \dots dt_k

then {DHL[k,m+1]} holds. (To equate these two formulations, it is convenient to assume that {R} is a downset, in the sense that whenever {(t_1,\dots,t_k) \in R}, the entire box {[0,t_1] \times \dots \times [0,t_k]} lies in {R}, but one can easily enlarge {R} to be a downset without destroying the containment of {R+R} in the non-convex region (1).) One initially requires {F} to be smooth, but a limiting argument allows one to relax to bounded measurable {F}. (To approximate a rough {F} by a smooth {F} while retaining the required moment conditions, one can first apply a slight dilation and translation so that the marginals of {F} are supported on a slightly smaller version of the simplex {{\cal R}_{k-1}}, and then convolve by a smooth approximation to the identity to make {F} smooth, while keeping the marginals supported on {{\cal R}_{k-1}}.)

We are now exploring various choices of {R} to work with, including the prism

\displaystyle  \{ (t_1,\dots,t_k) \in [0,1/\theta]^k: t_1+\dots+t_{k-1} \leq 1 \}

and the symmetric region

\displaystyle  \{ (t_1,\dots,t_k) \in [0,1/\theta]^k: t_1+\dots+t_k \leq \frac{k}{k-1} \}.

By suitably subdividing these regions into polytopes, and working with piecewise polynomial functions {F} that are polynomial of a specified degree on each subpolytope, one can phrase the problem of optimising (4) as a quadratic program, which we have managed to work with for {k=3}. If this program can be extended to {k=4}, there is a decent chance that we will be able to obtain {DHL[4,2]} on EH.

We have also been able to numerically optimise {M_k} quite accurately for medium values of {k} (e.g. {k \sim 50}), which has led to improved values of {H_1} without EH. For large {k}, we now also have the asymptotic {M_k=\log k - O(1)} with explicit error terms (details here) which have allowed us to slightly improve the {m=2} numerology, and also to get explicit {m=3} numerology for the first time.


This is the third thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}.
  • (Polymath8b, tentative) {H_1 \leq 330}. Assuming Elliott-Halberstam, {H_2 \leq 330}.
  • (Polymath8b, tentative) {H_2 \leq 484{,}126}. Assuming Elliott-Halberstam, {H_4 \leq 493{,}408}.
  • (Polymath8b) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}. Assuming Elliott-Halberstam, {H_m \ll e^{2m} m \log m} for sufficiently large {m}.

Much of the current focus of the Polymath8b project is on the quantity

\displaystyle M_k = M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

where {F} ranges over square-integrable functions on the simplex

\displaystyle {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}

with {I_k, J_k^{(m)}} being the quadratic forms

\displaystyle I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k

and

\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2

\displaystyle dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.

It was shown by Maynard that one has {H_m \leq H(k)} whenever {M_k > 4m}, where {H(k)} is the narrowest diameter of an admissible {k}-tuple. As discussed in the previous post, we have slight improvements to this implication, but they are currently difficult to implement, due to the need to perform high-dimensional integration. The quantity {M_k} does seem however to be close to the theoretical limit of what the Selberg sieve method can achieve for implications of this type (at the Bombieri-Vinogradov level of distribution, at least); it seems of interest to explore more general sieves, although we have not yet made much progress in this direction.

The best asymptotic bounds for {M_k} we have are

\displaystyle \log k - \log\log\log k + O(1) \leq M_k \leq \frac{k}{k-1} \log k \ \ \ \ \ (1)

 

which we prove below the fold. The upper bound holds for all {k > 1}; the lower bound is only valid for sufficiently large {k}, and gives the upper bound {H_m \ll e^{2m} m \log m} on Elliott-Halberstam.

For small {k}, the upper bound is quite competitive, for instance it provides the upper bound in the best values

\displaystyle 1.845 \leq M_4 \leq 1.848

and

\displaystyle 2.001162 \leq M_5 \leq 2.011797

we have for {M_4} and {M_5}. The situation is a little less clear for medium values of {k}, for instance we have

\displaystyle 3.95608 \leq M_{59} \leq 4.148

and so it is not yet clear whether {M_{59} > 4} (which would imply {H_1 \leq 300}). See this wiki page for some further upper and lower bounds on {M_k}.

The best lower bounds are not obtained through the asymptotic analysis, but rather through quadratic programming (extending the original method of Maynard). This has given significant numerical improvements to our best bounds (in particular lowering the {H_1} bound from {600} to {330}), but we have not yet been able to combine this method with the other potential improvements (enlarging the simplex, using MPZ distributional estimates, and exploiting upper bounds on two-point correlations) due to the computational difficulty involved.


This is the second thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) {H_1 \leq 600}.
  • (Polymath8b, tentative) {H_2 \leq 484,276}.
  • (Polymath8b, tentative) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}.
  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}, {H_2 \leq 600}, and {H_m \ll m^3 e^{2m}}.

Following the strategy of Maynard, the bounds on {H_m} proceed by combining four ingredients:

  1. Distribution estimates {EH[\theta]} or {MPZ[\varpi,\delta]} for the primes (or related objects);
  2. Bounds for the minimal diameter {H(k)} of an admissible {k}-tuple;
  3. Lower bounds for the optimal value {M_k} to a certain variational problem;
  4. Sieve-theoretic arguments to convert the previous three ingredients into a bound on {H_m}.

Accordingly, the most natural routes to improve the bounds on {H_m} are to improve one or more of the above four ingredients.

Ingredient 1 was studied intensively in Polymath8a. The following results are known or conjectured (see the Polymath8a paper for notation and proofs):

  • (Bombieri-Vinogradov) {EH[\theta]} is true for all {0 < \theta < 1/2}.
  • (Polymath8a) {MPZ[\varpi,\delta]} is true for {\frac{600}{7} \varpi + \frac{180}{7}\delta < 1}.
  • (Polymath8a, tentative) {MPZ[\varpi,\delta]} is true for {\frac{1080}{13} \varpi + \frac{330}{13} \delta < 1}.
  • (Elliott-Halberstam conjecture) {EH[\theta]} is true for all {0 < \theta < 1}.

Ingredient 2 was also studied intensively in Polymath8a, and is more or less a solved problem for the values of {k} of interest (with exact values of {H(k)} for {k \leq 342}, and quite good upper bounds for {H(k)} for {k < 5000}, available at this page). So the main focus currently is on improving Ingredients 3 and 4.

For Ingredient 3, the basic variational problem is to understand the quantity

\displaystyle  M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

for {F: {\cal R}_k \rightarrow {\bf R}} bounded measurable functions, not identically zero, on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}

with {I_k, J_k^{(m)}} being the quadratic forms

\displaystyle  I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k

and

\displaystyle  J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2 dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.

Equivalently, one has

\displaystyle  M_k({\cal R}_k) := \sup_F \frac{\int_{{\cal R}_k} F {\cal L}_k F}{\int_{{\cal R}_k} F^2}

where {{\cal L}_k: L^2({\cal R}_k) \rightarrow L^2({\cal R}_k)} is the positive semi-definite bounded self-adjoint operator

\displaystyle  {\cal L}_k F(t_1,\ldots,t_k) = \sum_{m=1}^k \int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_{m-1},s,t_{m+1},\ldots,t_k)\ ds,

so {M_k} is the operator norm of {{\cal L}_k}. Another interpretation of {M_k({\cal R}_k)} is that the probability that a rook moving randomly in the unit cube {[0,1]^k} stays in the simplex {{\cal R}_k} for {n} moves is asymptotically {(M_k({\cal R}_k)/k + o(1))^n}.
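The rook interpretation suggests a crude numerical check, which I sketch here purely for illustration (the serious lower bounds come from the quadratic programming discussed elsewhere on this page, and all numerical choices below are mine): simulate the random rook walk, and estimate the one-step conditional survival probability, which should approach {M_k({\cal R}_k)/k}; for {k=4} the resulting estimate of {M_4} should land near the bounds {1.845 \leq M_4 \leq 1.848} quoted elsewhere on this page.

import numpy as np

rng = np.random.default_rng(2)
k, walkers, steps = 4, 200000, 25

# uniform starting points in R_k = {t >= 0 : t_1 + ... + t_k <= 1} via the exponential trick
e = rng.exponential(size=(walkers, k + 1))
pts = e[:, :k] / e.sum(axis=1, keepdims=True)

for n in range(steps):
    coords = rng.integers(k, size=walkers)               # which coordinate the rook moves
    pts[np.arange(walkers), coords] = rng.random(walkers)
    alive = pts.sum(axis=1) <= 1                          # did the walker stay in the simplex?
    if n >= steps - 5:
        print(n, alive.mean(), "-> estimate of M_k:", k * alive.mean())
    survivors = pts[alive]
    # resample survivors back up to the full population (a simple branching step),
    # so that the conditional survival rate can be read off at every step
    pts = survivors[rng.integers(len(survivors), size=walkers)]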

We now have a fairly good asymptotic understanding of {M_k({\cal R}_k)}, with the bounds

\displaystyle  \log k - 2 \log\log k -2 \leq M_k({\cal R}_k) \leq \log k + \log\log k + 2

holding for sufficiently large {k}. There is however still room to tighten the bounds on {M_k({\cal R}_k)} for small {k}; I’ll summarise some of the ideas discussed so far below the fold.

For Ingredient 4, the basic tool is this:

Theorem 1 (Maynard) If {EH[\theta]} is true and {M_k({\cal R}_k) > \frac{2m}{\theta}}, then {H_m \leq H(k)}.

Thus, for instance, it is known that {M_{105} > 4} and {H(105)=600}, and this together with the Bombieri-Vinogradov inequality gives {H_1\leq 600}. This result is proven in Maynard’s paper and an alternate proof is also given in the previous blog post.

We have a number of ways to relax the hypotheses of this result, which we also summarise below the fold.


For each natural number {m}, let {H_m} denote the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

where {p_n} denotes the {n^{th}} prime. In other words, {H_m} is the least quantity such that there are infinitely many intervals of length {H_m} that contain {m+1} or more primes. Thus, for instance, the twin prime conjecture is equivalent to the assertion that {H_1 = 2}, and the prime tuples conjecture would imply that {H_m} is equal to the diameter of the narrowest admissible tuple of cardinality {m+1} (thus we conjecturally have {H_1 = 2}, {H_2 = 6}, {H_3 = 8}, {H_4 = 12}, {H_5 = 16}, and so forth; see this web page for further continuation of this sequence).
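To make the notion of an admissible tuple concrete, here is a small sketch (the helper functions are my own): a tuple is admissible if, for each prime {p}, its elements avoid at least one residue class mod {p}, and only primes {p} up to the cardinality of the tuple need to be checked, since a tuple of {k} integers cannot occupy all residue classes modulo a prime {p > k}. The code only verifies admissibility and reports the diameter (not minimality), but the tuples below are consistent with the conjectural values {H_1 = 2, H_2 = 6, H_3 = 8, H_4 = 12, H_5 = 16} quoted above.

from sympy import primerange

def is_admissible(tup):
    k = len(tup)
    for p in primerange(2, k + 1):
        if len({h % p for h in tup}) == p:     # every residue class mod p is hit
            return False
    return True

def diameter(tup):
    return max(tup) - min(tup)

for tup in [(0, 2), (0, 2, 6), (0, 2, 6, 8), (0, 2, 6, 8, 12), (0, 4, 6, 10, 12, 16)]:
    print(tup, is_admissible(tup), diameter(tup))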

In 2004, Goldston, Pintz, and Yildirim established the bound {H_1 \leq 16} conditional on the Elliott-Halberstam conjecture, which remains unproven. However, no unconditional proof of the finiteness of {H_1} was obtained (although they famously obtained the non-trivial bound {p_{n+1}-p_n = o(\log p_n)}), and even on the Elliott-Halberstam conjecture no finiteness result on the higher {H_m} was obtained either (although they were able to show {p_{n+2}-p_n=o(\log p_n)} on this conjecture). In the recent breakthrough of Zhang, the unconditional bound {H_1 \leq 70,000,000} was obtained, by establishing a weak partial version of the Elliott-Halberstam conjecture; by refining these methods, the Polymath8 project (which I suppose we could retroactively call the Polymath8a project) then lowered this bound to {H_1 \leq 4,680}.

With the very recent preprint of James Maynard, we have the following further substantial improvements:

Theorem 1 (Maynard’s theorem) Unconditionally, we have the following bounds:

  • {H_1 \leq 600}.
  • {H_m \leq C m^3 e^{4m}} for an absolute constant {C} and any {m \geq 1}.

If one assumes the Elliott-Halberstam conjecture, we have the following improved bounds:

  • {H_1 \leq 12}.
  • {H_2 \leq 600}.
  • {H_m \leq C m^3 e^{2m}} for an absolute constant {C} and any {m \geq 1}.

The final conclusion {H_m \leq C m^3 e^{2m}} on Elliott-Halberstam is not explicitly stated in Maynard’s paper, but follows easily from his methods, as I will describe below the fold. (At around the same time as Maynard’s work, I had also begun a similar set of calculations concerning {H_m}, but was only able to obtain the slightly weaker bound {H_m \leq C \exp( C m )} unconditionally.) In the converse direction, the prime tuples conjecture implies that {H_m} should be comparable to {m \log m}. Granville has also obtained the slightly weaker explicit bound {H_m \leq e^{8m+5}} for any {m \geq 1} by a slight modification of Maynard’s argument.

The arguments of Maynard avoid using the difficult partial results on (weakened forms of) the Elliott-Halberstam conjecture that were established by Zhang and then refined by Polymath8; instead, the main input is the classical Bombieri-Vinogradov theorem, combined with a sieve that is closer in spirit to an older sieve of Goldston and Yildirim, than to the sieve used later by Goldston, Pintz, and Yildirim on which almost all subsequent work is based.

The aim of the Polymath8b project is to obtain improved bounds on {H_1, H_2}, and higher values of {H_m}, either conditional on the Elliott-Halberstam conjecture or unconditional. The likeliest routes for doing this are by optimising Maynard’s arguments and/or combining them with some of the results from the Polymath8a project. This post is intended to be the first research thread for that purpose. To start the ball rolling, I am going to give below a presentation of Maynard’s results, with some minor technical differences (most significantly, I am using the Goldston-Pintz-Yildirim variant of the Selberg sieve, rather than the traditional “elementary Selberg sieve” that is used by Maynard (and also in the Polymath8 project), although it seems that the numerology obtained by both sieves is essentially the same). An alternate exposition of Maynard’s work has just been completed also by Andrew Granville.

