This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project. As the previous post was getting somewhat full, we are rolling the thread over to the current post.

In this post we will record a new truncation of the elementary Selberg sieve discussed in this previous post (and also analysed in the context of bounded prime gaps by Graham-Goldston-Pintz-Yildirim and Motohashi-Pintz) that was recently worked out by Janos Pintz, who has kindly given permission to share this new idea with the Polymath8 project. This new sieve decouples the ${\delta}$ parameter that was present in our previous analysis of Zhang’s argument into two parameters, a quantity ${\delta}$ that used to measure smoothness in the modulus, but now measures a weaker notion of “dense divisibility” which is what is really needed in the Elliott-Halberstam type estimates, and a second quantity ${\delta'}$ which still measures smoothness but is allowed to be substantially larger than ${\delta}$. Through this decoupling, it appears that the ${\kappa}$-type losses in the sieve theoretic part of the argument can be almost completely eliminated (they basically decay exponentially in ${\delta'}$ and have only mild dependence on ${\delta}$, whereas the Elliott-Halberstam analysis is sensitive only to ${\delta}$, allowing one to set ${\delta}$ far smaller than previously by keeping ${\delta'}$ large). This should lead to noticeable gains in the ${k_0}$ quantity in our analysis.

To describe this new truncation we need to review some notation. As in all previous posts (in particular, the first post in this series), we have an asymptotic parameter ${x}$ going off to infinity, and all quantities here are implicitly understood to be allowed to depend on ${x}$ (or to range in a set that depends on ${x}$) unless they are explicitly declared to be fixed. We use the usual asymptotic notation ${O(), o(), \ll}$ relative to this parameter ${x}$. To be able to ignore local factors (such as the singular series ${{\mathfrak G}}$), we also use the “${W}$-trick” (as discussed in the first post in this series): we introduce a parameter ${w}$ that grows very slowly with ${x}$, and set ${W := \prod_{p<w} p}$.

For any fixed natural number ${k_0}$, define an admissible ${k_0}$-tuple to be a fixed tuple ${{\mathcal H}}$ of ${k_0}$ distinct integers which avoids at least one residue class modulo ${p}$ for each prime ${p}$. Our objective is to obtain the following conjecture ${DHL[k_0,2]}$ for as small a value of the parameter ${k_0}$ as possible:

Conjecture 1 (${DHL[k_0,2]}$) Let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then there exist infinitely many translates ${n+{\mathcal H}}$ of ${{\mathcal H}}$ that contain at least two primes.

The twin prime conjecture asserts that ${DHL[k_0,2]}$ holds for ${k_0}$ as small as ${2}$, but currently we are only able to establish this result for ${k_0 \geq 6329}$ (see this comment). However, with the new truncated sieve of Pintz described in this post, we expect to be able to lower this threshold ${k_0 \geq 6329}$ somewhat.

In previous posts, we deduced ${DHL[k_0,2]}$ from a technical variant ${MPZ[\varpi,\delta]}$ of the Elliott-Halberstam conjecture for certain choices of parameters ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4+\varpi}$. We will use the following formulation of ${MPZ[\varpi,\delta]}$:

Conjecture 2 (${MPZ[\varpi,\delta]}$) Let ${{\mathcal H}}$ be a fixed ${k_0}$-tuple (not necessarily admissible) for some fixed ${k_0 \geq 2}$, and let ${b\ (W)}$ be a primitive residue class. Then

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} \sum_{a \in C(q)} |\Delta_{b,W}(\Lambda; q,a)| = O( x \log^{-A} x) \ \ \ \ \ (1)$

for any fixed ${A>0}$, where ${I = (w,x^{\delta})}$, ${{\mathcal S}_I}$ are the square-free integers whose prime factors lie in ${I}$, and ${\Delta_{b,W}(\Lambda;q,a)}$ is the quantity

$\displaystyle \Delta_{b,W}(\Lambda;q,a) := | \sum_{x \leq n \leq 2x: n=b\ (W); n = a\ (q)} \Lambda(n) \ \ \ \ \ (2)$

$\displaystyle - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x: n = b\ (W)} \Lambda(n)|.$

and ${C(q)}$ is the set of congruence classes

$\displaystyle C(q) := \{ a \in ({\bf Z}/q{\bf Z})^\times: P(a) = 0 \}$

and ${P}$ is the polynomial

$\displaystyle P(a) := \prod_{h \in {\mathcal H}} (a+h).$

The conjecture ${MPZ[\varpi,\delta]}$ is currently known to hold whenever ${87 \varpi + 17 \delta < \frac{1}{4}}$ (see this comment and this confirmation). Actually, we can prove a stronger result than ${MPZ[\varpi,\delta]}$ in this regime in a couple ways. Firstly, the congruence classes ${C(q)}$ can be replaced by a more general system of congruence classes obeying a certain controlled multiplicity axiom; see this post. Secondly, and more importantly for this post, the requirement that the modulus ${q}$ lies in ${{\mathcal S}_I}$ can be relaxed; see below.

To connect the two conjectures, the previously best known implication was the following (see Theorem 2 from this post):

Theorem 3 Let ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4 + \varpi}$ and ${k_0 \geq 2}$ be such that we have the inequality

$\displaystyle (1 +4 \varpi) (1-\kappa') > \frac{j^2_{k_0-2}}{k_0(k_0-1)} (1+\kappa) \ \ \ \ \ (3)$

where ${j_{k_0-2} = j_{k_0-2,1}}$ is the first positive zero of the Bessel function ${J_{k_0-2}}$, and ${\kappa,\kappa'>0}$ are the quantities

$\displaystyle \kappa := \sum_{1 \leq n < \frac{1+4\varpi}{4\delta}} \frac{3^n+1}{2} \frac{k_0^n}{n!} (\int_{4\delta/(1+4\varpi)}^1 (1-t)^{k_0/2}\ \frac{dt}{t})^n$

and

$\displaystyle \kappa' := \sum_{2 \leq n < \frac{1+4\varpi}{4\delta}} \frac{3^n-1}{2} \frac{(k_0-1)^n}{n!}$

$\displaystyle (\int_{4\delta/(1+4\varpi)}^1 (1-t)^{(k_0-1)/2}\ \frac{dt}{t})^n.$

Then ${MPZ[\varpi,\delta]}$ implies ${DHL[k_0,2]}$.

Actually there have been some slight improvements to the quantities ${\kappa,\kappa'}$; see the comments to this previous post. However, the main error ${\kappa}$ remains roughly of the order ${\delta^{-1} \exp( - 2 k_0\delta )}$, which limits one from taking ${\delta}$ too small.

To improve beyond this, the first basic observation is that the smoothness condition ${q \in {\mathcal S}_I}$, which implies that all prime divisors of ${q}$ are less than ${x^\delta}$, can be relaxed in the proof of ${MPZ[\varpi,\delta]}$. Indeed, if one inspects the proof of this proposition (described in these three previous posts), one sees that the key property of ${q}$ needed is not so much the smoothness, but a weaker condition which we will call (for lack of a better term) dense divisibility:

Definition 4 Let ${y > 1}$. A positive integer ${q}$ is said to be ${y}$-densely divisible if for every ${1 \leq R \leq q}$, one can find a factor of ${q}$ in the interval ${[y^{-1} R, R]}$. We let ${{\mathcal D}_y}$ denote the set of positive integers that are ${y}$-densely divisible.

Certainly every integer which is ${y}$-smooth (i.e. has all prime factors at most ${y}$) is also ${y}$-densely divisible, as can be seen from the greedy algorithm; but the property of being ${y}$-densely divisible is strictly weaker than ${y}$-smoothness, which is a fact we shall exploit shortly.
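To make Definition 4 concrete, here is a brute-force test of dense divisibility (a Python sketch for illustration only, feasible just for tiny ${q}$). It uses the equivalent formulation that consecutive divisors of ${q}$ never jump by more than a factor of ${y}$, and checks on small examples that ${y}$-smoothness implies ${y}$-dense divisibility but not conversely.

```python
def densely_divisible(q, y):
    """Brute-force test of Definition 4: q is y-densely divisible iff
    every real 1 <= R <= q admits a divisor of q in [R/y, R], which is
    equivalent to consecutive divisors of q never jumping by more than
    a factor of y."""
    divs = [d for d in range(1, q + 1) if q % d == 0]
    return all(b <= y * a for a, b in zip(divs, divs[1:]))

# A y-smooth number is y-densely divisible (greedy algorithm):
assert densely_divisible(2 * 3 * 5 * 7, 7)       # 210 is 7-smooth
# The converse fails: 2^7 * 101 has the large prime factor 101, yet its
# divisors 1, 2, 4, ..., 64, 101, 128, 202, ... never jump by more than
# a factor of 2, so it is 2-densely divisible without being 2-smooth.
assert densely_divisible(2**7 * 101, 2)
# Removing the powers of 2 destroys the property (divisor gap 2 -> 101):
assert not densely_divisible(2 * 101, 2)
```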

We now define ${MPZ'[\varpi,\delta]}$ to be the same statement as ${MPZ[\varpi,\delta]}$, but with the condition ${q \in {\mathcal S}_I}$ replaced by the weaker condition ${q \in {\mathcal S}_{[w,+\infty)} \cap {\mathcal D}_{x^\delta}}$. The arguments in previous posts then also establish ${MPZ'[\varpi,\delta]}$ whenever ${87 \varpi + 17 \delta < \frac{1}{4}}$.

The main result of this post is then the following implication, essentially due to Pintz:

Theorem 5 Let ${0 < \varpi < 1/4}$, ${0 < \delta \leq \delta' < 1/4 + \varpi}$, ${A \geq 0}$, and ${k_0 \geq 2}$ be such that

$\displaystyle (1 +4 \varpi) (1-2\kappa_1 - 2\kappa_2 - 2\kappa_3) > \frac{j^2_{k_0-2}}{k_0(k_0-1)}$

where

$\displaystyle \kappa_1 := \int_{\theta}^1 (1-t)^{(k_0-1)/2}\ \frac{dt}{t}$

$\displaystyle \kappa_2 := (k_0-1) \int_{\theta}^1 (1-t)^{k_0-1}\ \frac{dt}{t}$

$\displaystyle \kappa_3 := e^A \frac{G_{k_0-1,\tilde \theta}(0,0)}{G_{k_0-1}(0,0)} \sum_{0 \leq J \leq 1/\tilde \delta} \frac{(k_0-1)^J}{J!} (\int_{\tilde \delta}^\theta e^{-At} \frac{dt}{t})^J$

and

$\displaystyle \theta := \frac{\delta'}{1/4+\varpi}$

$\displaystyle \tilde \theta := \frac{(\delta' - \delta)/2 + \varpi}{1/4 + \varpi}$

$\displaystyle \tilde \delta := \frac{\delta}{1/4+\varpi}$

$\displaystyle G_{k_0-1}(0,0) := \int_0^1 f(t)^2 \frac{t^{k_0-2}}{(k_0-2)!}\ dt$

$\displaystyle G_{k_0-1,\tilde \theta}(0,0) := \int_0^{\tilde \theta} f(t)^2 \frac{t^{k_0-2}}{(k_0-2)!}\ dt$

and

$\displaystyle f(t) := t^{1-k_0/2} J_{k_0-2}( \sqrt{t} j_{k_0-2} ).$

Then ${MPZ'[\varpi,\delta]}$ implies ${DHL[k_0,2]}$.

This theorem has rather messy constants, but we can isolate some special cases which are a bit easier to compute with. Setting ${\delta' = \delta}$, we see that ${\kappa_3}$ vanishes (and the argument below will show that we only need ${MPZ[\varpi,\delta]}$ rather than ${MPZ'[\varpi,\delta]}$), and we obtain the following slight improvement of Theorem 3:

Theorem 6 Let ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4 + \varpi}$ and ${k_0 \geq 2}$ be such that we have the inequality

$\displaystyle (1 +4 \varpi) (1-2\kappa_1-2\kappa_2) > \frac{j^2_{k_0-2}}{k_0(k_0-1)} \ \ \ \ \ (4)$

where

$\displaystyle \kappa_1 := \int_{4\delta/(1+4\varpi)}^1 (1-t)^{(k_0-1)/2}\ \frac{dt}{t}$

$\displaystyle \kappa_2 := (k_0-1) \int_{4\delta/(1+4\varpi)}^1 (1-t)^{k_0-1}\ \frac{dt}{t}.$

Then ${MPZ[\varpi,\delta]}$ implies ${DHL[k_0,2]}$.

This is a little better than Theorem 3, because the error ${2\kappa_1+2\kappa_2}$ has size about ${\frac{1}{2 k_0 \delta} \exp( - 2 k_0 \delta) + \frac{1}{2 \delta} \exp(-4 k_0 \delta)}$, which compares favorably with the error in Theorem 3 which is about ${\frac{1}{\delta} \exp(-2 k_0 \delta)}$. This should already give a “cheap” improvement to our current threshold ${k_0 \geq 6329}$, though it will fall short of what one would get if one fully optimised over all parameters in the above theorem.
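As a quick numerical sanity check on Theorem 6, here is a Python sketch. The parameter choices ${\varpi = 0.002717}$, ${\delta = 0.0008}$ below are illustrative values satisfying ${87 \varpi + 17 \delta < \frac{1}{4}}$, not optimised ones, and the Bessel zero ${j_{k_0-2}}$ is approximated by its standard large-order asymptotic expansion rather than computed exactly.

```python
import math

def j_first_zero(nu):
    """First positive zero of J_nu via the classical asymptotic
    expansion (McMahon/Olver); very accurate for large nu."""
    c = nu ** (1.0 / 3.0)
    return (nu + 1.8557571 * c + 1.033150 / c
            - 0.00397 / nu - 0.0908 / c**5 + 0.043 / c**7)

def kappa_integral(m, theta, steps=100001):
    """Composite Simpson evaluation of int_theta^1 (1-t)^m dt/t."""
    h = (1.0 - theta) / (steps - 1)
    total = 0.0
    for i in range(steps):
        t = theta + i * h
        weight = 1 if i in (0, steps - 1) else (4 if i % 2 else 2)
        total += weight * (1.0 - t) ** m / t
    return total * h / 3.0

def theorem6_holds(k0, varpi, delta):
    """Numerically test inequality (4) of Theorem 6."""
    theta = 4 * delta / (1 + 4 * varpi)
    kappa1 = kappa_integral((k0 - 1) / 2, theta)
    kappa2 = (k0 - 1) * kappa_integral(k0 - 1, theta)
    j = j_first_zero(k0 - 2)
    lhs = (1 + 4 * varpi) * (1 - 2 * kappa1 - 2 * kappa2)
    return lhs > j * j / (k0 * (k0 - 1))
```

With these illustrative values, `theorem6_holds(6329, 0.002717, 0.0008)` returns `True`, while the smaller choice `theorem6_holds(6329, 0.0025, 0.0005)` returns `False`; the margin at ${k_0 = 6329}$ is quite thin, consistent with the discussion above.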

Returning to the full strength of Theorem 5, let us obtain a crude upper bound for ${\kappa_3}$ that is a little simpler to understand. Extending the ${J}$ summation to infinity and using the Taylor series for the exponential, we have

$\displaystyle \kappa_3 \leq \frac{G_{k_0-1,\tilde \theta}(0,0)}{G_{k_0-1}(0,0)} \exp( A + (k_0-1) \int_{\tilde \delta}^\theta e^{-At} \frac{dt}{t} ).$

We can crudely bound

$\displaystyle \int_{\tilde \delta}^\theta e^{-At} \frac{dt}{t} \leq \frac{1}{A \tilde \delta}$

and then optimise in ${A}$ (the expression ${A + (k_0-1)/(A \tilde \delta)}$ is minimised at ${A = (k_0-1)^{1/2} \tilde \delta^{-1/2}}$) to obtain

$\displaystyle \kappa_3 \leq \frac{G_{k_0-1,\tilde \theta}(0,0)}{G_{k_0-1}(0,0)} \exp( 2 (k_0-1)^{1/2} \tilde \delta^{-1/2} ).$

Because of the ${t^{k_0-2}}$ factor in the integrand for ${G_{k_0-1}}$ and ${G_{k_0-1,\tilde \theta}}$, we expect the ratio ${\frac{G_{k_0-1,\tilde \theta}(0,0)}{G_{k_0-1}(0,0)}}$ to be of the order of ${\tilde \theta^{k_0-1}}$, although one will need some theoretical or numerical estimates on Bessel functions to make this heuristic more precise. Setting ${\tilde \theta}$ to be something like ${1/2}$, we get a good bound here as long as ${\tilde \delta \gg 1/k_0}$, which at current values of ${\delta, k_0}$ is a mild condition.
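The optimisation in ${A}$ can also be checked numerically. The following sketch (with illustrative values of ${k_0, \tilde \delta, \theta}$, not those of the actual argument) evaluates the true exponent ${A + (k_0-1) \int_{\tilde \delta}^\theta e^{-At} \frac{dt}{t}}$ at the optimising choice ${A = (k_0-1)^{1/2} \tilde \delta^{-1/2}}$ and over a grid, confirming that it stays below the crude exponent ${2 (k_0-1)^{1/2} \tilde \delta^{-1/2}}$; the sizeable gap reflects the loss in the bound ${\int_{\tilde \delta}^\theta e^{-At} \frac{dt}{t} \leq \frac{1}{A \tilde \delta}}$.

```python
import math

def exponent(A, k0, delta_t, theta, steps=4000):
    """A + (k0-1) * int_{delta_t}^{theta} e^{-A t} dt / t, evaluated by
    the midpoint rule; exp() of this quantity bounds kappa_3 up to the
    ratio of G factors."""
    h = (theta - delta_t) / steps
    integral = sum(
        math.exp(-A * (delta_t + (i + 0.5) * h)) / (delta_t + (i + 0.5) * h)
        for i in range(steps)) * h
    return A + (k0 - 1) * integral

# Illustrative (non-optimised) parameter values:
k0, delta_t, theta = 1000, 0.02, 0.5
A_star = math.sqrt((k0 - 1) / delta_t)     # minimiser of A + (k0-1)/(A*delta_t)
crude = 2 * math.sqrt((k0 - 1) / delta_t)  # exponent in the crude bound
assert exponent(A_star, k0, delta_t, theta) <= crude
assert min(exponent(A, k0, delta_t, theta) for A in range(50, 500, 25)) <= crude
```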

Pintz’s argument uses the elementary Selberg sieve, discussed in this previous post, but with a more efficient estimation of the quantity ${\beta}$, in particular avoiding the truncation to moduli ${d}$ between ${x^{-\delta} R}$ and ${R}$ which was the main source of inefficiency in that previous post. The basic idea is to “linearise” the effect of the truncation of the sieve, so that this contribution can be dealt with by the union bound (basically, bounding the contribution of each large prime one at a time). This mostly avoids the more complicated combinatorial analysis that arose in the analytic Selberg sieve, as seen in this previous post.

— 1. Review of previous material —

In this section we collect some results from previous posts which we will need.

We first record an asymptotic for multiplicative functions. For any natural number ${k}$, define a ${k}$-dimensional multiplicative function to be a multiplicative function ${f: {\bf N} \rightarrow {\bf R}}$ which obeys the asymptotic

$\displaystyle f(p) = k + O(\frac{1}{p})$

for all ${p>w}$. The following result is Lemma 8 from this previous post:

Lemma 7 Let ${I = (w,+\infty)}$. Let ${k}$ be a fixed positive integer, and let ${f: {\bf N} \rightarrow {\bf R}}$ be a multiplicative function of dimension ${k}$. Then for any fixed compactly supported, Riemann-integrable function ${g: {\bf R} \rightarrow {\bf R}}$, and any ${R>x^c}$ for some fixed ${c>0}$, one has

$\displaystyle \sum_{d \in {\mathcal S}_I} \frac{f(d)}{d} g(\frac{\log d}{\log R}) = (\frac{\phi(W)}{W} \log R)^k ( \int_0^\infty g(t) \frac{t^{k-1}}{(k-1)!}\ dt + o(1) ).$
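As a crude numerical illustration of Lemma 7, consider the following Python sketch with arbitrarily chosen small parameters ${w = 6}$, ${R = 10^5}$, ${k = 1}$, ${f \equiv 1}$, ${g(t) = t(1-t)}$. Since the ${o(1)}$ in the lemma presumes ${w}$ growing slowly to infinity, at fixed ${w}$ one only expects agreement up to a factor ${\prod_{p > w} (1 - p^{-2}) \approx 0.95}$ plus finite-${R}$ fluctuation, so the comparison below is deliberately loose.

```python
import math

w, R = 6, 10**5
phiW_over_W = (1 / 2) * (2 / 3) * (4 / 5)   # phi(W)/W with W = 2*3*5

# Smallest-prime-factor sieve up to R.
spf = list(range(R + 1))
for p in range(2, math.isqrt(R) + 1):
    if spf[p] == p:
        for m in range(p * p, R + 1, p):
            if spf[m] == m:
                spf[m] = p

def in_S(d):
    """d lies in S_{(w,+infty)}: squarefree, all prime factors > w."""
    while d > 1:
        p = spf[d]
        if p <= w:
            return False
        d //= p
        if d % p == 0:              # repeated prime factor
            return False
    return True

g = lambda t: t * (1 - t)           # Riemann-integrable, g(0) = g(1) = 0
lhs = sum(g(math.log(d) / math.log(R)) / d for d in range(1, R + 1) if in_S(d))
rhs = phiW_over_W * math.log(R) * (1 / 6)   # k = 1, int_0^1 t(1-t) dt = 1/6
assert 0.7 < lhs / rhs < 1.3        # rough agreement only, as expected
```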

Next, we record a criterion for ${DHL[k_0,2]}$, which is Lemma 7 from this previous post:

Lemma 8 (Criterion for DHL) Let ${k_0 \geq 2}$. Suppose that for each fixed admissible ${k_0}$-tuple ${{\mathcal H}}$ and each congruence class ${b\ (W)}$ such that ${b+h}$ is coprime to ${W}$ for all ${h \in {\mathcal H}}$, one can find a non-negative weight function ${\nu: {\bf N} \rightarrow {\bf R}^+}$, fixed quantities ${\alpha,\beta > 0}$, a quantity ${B>0}$, and a fixed positive power ${R}$ of ${x}$ such that one has the upper bound

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) \leq (\alpha+o(1)) B\frac{x}{W}, \ \ \ \ \ (5)$

the lower bound

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) \theta(n+h_i) \geq (\beta-o(1)) B\frac{x}{W} \log R \ \ \ \ \ (6)$

for all ${h_i \in {\mathcal H}}$, and the key inequality

$\displaystyle \frac{\log R}{\log x} > \frac{1}{k_0} \frac{\alpha}{\beta} \ \ \ \ \ (7)$

holds. Then ${DHL[k_0,2]}$ holds. Here ${\theta(n)}$ is defined to equal ${\log n}$ when ${n}$ is prime and ${0}$ otherwise.

— 2. Pintz’s argument —

We can now prove Theorem 5. Fix ${\varpi,\delta,\delta',k_0}$ to obey the hypotheses of this theorem. Let ${b\ (W)}$ be a congruence class with ${b+h}$ coprime to ${W}$ for all ${h \in {\mathcal H}}$ (this class exists by the admissibility of ${{\mathcal H}}$). We apply Lemma 8 with

$\displaystyle B := (\frac{\phi(W)}{W} \log R)^{k_0}$

the elementary Selberg sieve ${\nu = \nu_{\mathcal X}}$ defined by

$\displaystyle \nu(n) := (\sum_{d \in {\mathcal S}_{(w,+\infty)}: d|P(n)} \mu(d) a_d)^2$

where

$\displaystyle a_d := \frac{1}{\Phi(d) \Delta(d)} \sum_{q \in {\mathcal S}_{(w,+\infty)}: (q,d)=1; qd \in {\mathcal X}} \frac{1}{\Phi(q)} f'( \frac{\log dq}{\log R} ), \ \ \ \ \ (8)$

${\Phi, \Delta}$ are the multiplicative functions

$\displaystyle \Phi(d) := \prod_{p|d} \frac{p-k_0}{k_0}$

and

$\displaystyle \Delta(d) :=\prod_{p|d} \frac{k_0}{p},$

the sieve level ${R}$ is given by the formula

$\displaystyle R := x^{1/4 + \varpi},$

${f: {\bf R} \rightarrow {\bf R}}$ is a fixed smooth function supported on ${[-1,1]}$, and ${{\mathcal X}}$ is a certain subset of ${{\mathcal S}_{(w,+\infty)}}$ to be chosen shortly. (The constraints ${q \in {\mathcal S}_{(w,+\infty)}}$ and ${(q,d)=1}$ are redundant given that ${qd \in {\mathcal X}}$, but we retain them for emphasis.) We will assume that ${f}$ is non-negative and non-increasing on ${[0,1]}$. In this previous post, we considered this sieve with ${{\mathcal X}}$ equal to either all of ${{\mathcal S}_{(w,+\infty)}}$, or the subset ${{\mathcal S}_{(w,x^\delta)}}$ consisting of smooth numbers. For now, we will discuss the estimates as far as we can without having to explicitly specify ${{\mathcal X}}$.

We first consider the asymptotic (5). By arguing exactly as in Section 2 (or Section 3) of this previous post, we can write the left-hand side of (5), up to errors of ${o(B \frac{x}{W})}$, as

$\displaystyle \sum_{d_0 \in {\mathcal X}} \frac{1}{\Phi(d_0)} f'(\frac{\log d_0}{\log R})^2.$

The summand here is non-negative, so we may crudely replace ${{\mathcal X}}$ by all of ${{\mathcal S}_{(w,+\infty)}}$ and apply Lemma 7 to obtain (5) with

$\displaystyle \alpha := \int_0^1 f'(t)^2 \frac{t^{k_0-1}}{(k_0-1)!}\ dt.$

Now we turn to the more difficult asymptotic (6). The left-hand side expands as

$\displaystyle \sum_{d_1,d_2 \in {\mathcal S}_{(w,+\infty)}} \mu(d_1) a_{d_1} \mu(d_2) a_{d_2} \sum_{x \leq n \leq 2x: [d_1,d_2] | P(n); n = b\ (W)} \theta(n+h_i).$

As observed in Section 2 of this previous post, we have

$\displaystyle \sum_{x \leq n \leq 2x: [d_1,d_2] | P(n); n = b\ (W)} \theta(n+h_i) = \frac{1}{\phi(W)} x \Delta^*([d_1,d_2]) + O( E^*([d_1,d_2]) )$

where

$\displaystyle \Delta^*(q) := \prod_{p|q} \frac{k_0-1}{p-1}$

and

$\displaystyle E^*(q) = \sum_{a \in C_i(q)} | \sum_{x \leq n \leq 2x: n=b\ (W); n = a\ (q)} \theta(n) - \frac{x}{\phi(Wq)}|.$

Now let ${\epsilon>0}$ be a small fixed constant to be chosen later, and suppose the following claim holds:

Claim 1 (Dense divisibility of moduli) Whenever ${q = [d_1,d_2]}$ and ${a_{d_1},a_{d_2}}$ are non-zero, then either ${q \leq x^{1/2-\epsilon}}$ or else ${q \in {\mathcal D}_{x^\delta}}$.

Then from the Bombieri-Vinogradov theorem (for the ${q \leq x^{1/2-\epsilon}}$ moduli) or the hypothesis ${MPZ'[\varpi,\delta]}$ (for the larger moduli, noting that ${q \leq R^2 = x^{1/2+2\varpi}}$) and standard arguments (cf. Proposition 5 of this post) we have

$\displaystyle \sum_q h(q) E^*(q) \ll x \log^{-A} x$

for any fixed ${A>0}$ and any multiplicative function ${h}$ of a fixed dimension ${k}$, where ${q}$ ranges only over those integers of the form ${q=[d_1,d_2]}$ with ${a_{d_1},a_{d_2}}$ non-zero. From this we easily see (arguing as in Section 2 of this previous post) that the contribution of the ${E^*}$ error term is ${o( B \frac{x}{W}\log R)}$, and we are left with establishing the lower bound

$\displaystyle \sum_{d_1,d_2 \in {\mathcal S}_{(w,\infty)}} \mu(d_1) a_{d_1} \mu(d_2) a_{d_2} \Delta^*([d_1,d_2])$

$\displaystyle \geq \beta (\frac{\phi(W)}{W} \log R)^{k_0+1}$

up to errors of ${o( (\frac{\phi(W)}{W} \log R)^{k_0+1} )}$ (henceforth referred to as negligible errors).

As in Section 2 of the previous post, we can write the left-hand side as

$\displaystyle \sum_{d_0 \in {\mathcal S}_{(w,\infty)}} \frac{h(d_0)}{d_0} (\sum_{m \in {\mathcal S}_{(w,\infty)}: (m,d_0)=1; md_0 \in {\mathcal X}} \frac{f'(\frac{\log d_0 m}{\log R})}{\phi(m)} )^2 \ \ \ \ \ (9)$

where ${h}$ is the ${k_0-1}$-dimensional multiplicative function

$\displaystyle h(d) := \prod_{p|d} (k_0-1) \frac{(p-1)^2}{p(p-k_0)}.$

So we would like to select ${{\mathcal X}}$ small enough that Claim 1 holds, but large enough that we can lower bound (9) by ${\beta (\frac{\phi(W)}{W} \log R)^{k_0+1}}$ up to negligible errors.

Pintz’s idea is to choose ${{\mathcal X}}$ to be the set of all elements ${d}$ of ${{\mathcal S}_{(w,x^{\delta'})}}$ with the property that

$\displaystyle \prod_{p|d:p < x^\delta} p \geq x^{(\delta' - \delta)/2 + \varpi + \epsilon/2}. \ \ \ \ \ (10)$

Let us first verify Claim 1 with this definition. Suppose that ${a_{d_1}}$ and ${a_{d_2}}$ are non-zero. Then from (8) and the support of ${f}$ there are multiples ${d_1r_1}$, ${d_2r_2}$ of ${d_1}$, ${d_2}$ respectively with ${d_1r_1,d_2r_2 \in {\mathcal X}}$ (in particular ${d_1,d_2,r_1,r_2 \in {\mathcal S}_{(w,x^{\delta'})}}$) and ${d_1r_1, d_2 r_2 \leq x^{1/4 + \varpi}}$. Since ${d_1r_1 \in {\mathcal X}}$, we have

$\displaystyle \prod_{p|d_1 r_1: p < x^\delta} p \geq x^{(\delta' - \delta)/2 + \varpi + \epsilon/2}$

and thus

$\displaystyle r_1 \prod_{p|d_1: p < x^\delta} p \geq x^{(\delta' - \delta)/2 + \varpi + \epsilon/2}.$

Similarly for ${r_2}$ and ${d_2}$. Multiplying, we obtain

$\displaystyle r_1 r_2 (d_1,d_2) \prod_{p|[d_1,d_2]: p < x^\delta} p \geq x^{\delta' - \delta + 2\varpi + \epsilon}.$

On the other hand,

$\displaystyle r_1 r_2 d_1 d_2 \leq x^{1/4+\varpi} x^{1/4+\varpi}$

and so on dividing we have

$\displaystyle \frac{1}{[d_1,d_2]} \prod_{p|[d_1,d_2]: p < x^\delta} p \geq x^{-1/2 + \delta' - \delta + \epsilon}.$

We conclude that either ${q := [d_1,d_2]}$ is less than ${x^{1/2-\epsilon}}$, or else

$\displaystyle \prod_{p|[d_1,d_2]: p < x^\delta} p \geq x^{\delta' - \delta}. \ \ \ \ \ (11)$

The latter conclusion implies that ${q}$ is ${x^\delta}$-densely divisible. Indeed, for any ${1 \leq R \leq q}$, we multiply together the prime divisors of ${q}$ between ${x^\delta}$ and ${x^{\delta'}}$ one at a time, stopping just before one reaches or exceeds ${R}$, or when one runs out of such prime divisors. In the former case one ends up at least as large as ${x^{-\delta'} R}$, and in the latter case one has reached ${q / \prod_{p|q: p < x^\delta} p}$. Next, one multiplies in the prime divisors of ${q}$ less than ${x^\delta}$ until one reaches or exceeds ${x^{-\delta} R}$; this is possible thanks to (11) in the former case and is automatic in the latter case, and gives a divisor between ${x^{-\delta} R}$ and ${R}$ as required.
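This two-stage greedy construction can be made concrete on a toy example (a Python sketch; the primes and the thresholds ${7}$ and ${17}$, standing in for ${x^\delta}$ and ${x^{\delta'}}$, are purely illustrative). The analogue of condition (11) here is ${2 \cdot 3 \cdot 5 \geq 17/7}$.

```python
def dense_factor(small_primes, medium_primes, R, y_small):
    """Greedy construction from the argument above, for squarefree
    q = prod(small)*prod(medium): returns a divisor of q in
    [R / y_small, R], assuming every small prime is < y_small, every
    medium prime is < y_big, prod(small_primes) >= y_big / y_small
    (the analogue of (11)), and 1 <= R <= q."""
    d = 1
    # Stage 1: fold in medium primes, stopping just before reaching R.
    for p in sorted(medium_primes):
        if d * p >= R:
            break
        d *= p
    # Stage 2: top up with small primes until d >= R / y_small; since
    # each small prime is < y_small, d never overshoots R.
    for p in sorted(small_primes):
        if d >= R / y_small:
            break
        d *= p
    return d

small, medium = [2, 3, 5], [11, 13]     # small < 7 <= medium < 17
q = 2 * 3 * 5 * 11 * 13                 # q = 4290
for R in range(1, q + 1):
    d = dense_factor(small, medium, R, 7)
    assert q % d == 0 and 7 * d >= R and d <= R
```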

Now we need to obtain a lower bound for (9). If we write

$\displaystyle F(d_0) := \sum_{m \in {\mathcal S}_{(w,\infty)}: (m,d_0)=1} \frac{-f'(\frac{\log d_0 m}{\log R})}{\phi(m)}$

and

$\displaystyle \tilde F(d_0) := \sum_{m \in {\mathcal S}_{(w,\infty)}: (m,d_0)=1; md_0 \in {\mathcal X}} \frac{-f'(\frac{\log d_0 m}{\log R})}{\phi(m)}$

(the minus sign being to compensate for the non-positive nature of ${f'}$) then we have

$\displaystyle 0\leq \tilde F(d_0) \leq F(d_0)$

and thus

$\displaystyle \tilde F(d_0)^2 \geq F(d_0)^2 - 2 F(d_0) (F(d_0)-\tilde F(d_0)).$

Note that this inequality replaces the quadratic expression ${\tilde F(d_0)^2}$ with a linear expression in the truncation error ${F(d_0)-\tilde F(d_0)}$, which will be more tractable for computing the effect of that error. We may thus lower bound (9) by the difference of

$\displaystyle \sum_{d_0 \in {\mathcal S}_{(w,\infty)}} \frac{h(d_0)}{d_0} F(d_0)^2 \ \ \ \ \ (12)$

and

$\displaystyle 2 \sum_{d_0 \in {\mathcal S}_{(w,\infty)}} \frac{h(d_0)}{d_0} F(d_0) (F(d_0)-\tilde F(d_0)). \ \ \ \ \ (13)$

In Section 2 of this post it is shown that

$\displaystyle F(d_0) = (\frac{\phi(W)}{W} \log R) (f(\frac{\log d_0}{\log R}) + O( \frac{d_0}{\phi(d_0)}-1 ) + o(1) ),$

which implies that (12) is equal to

$\displaystyle (\frac{\phi(W)}{W} \log R)^{k_0+1} \int_0^1 f(t)^2 \frac{t^{k_0-2}}{(k_0-2)!}\ dt$

up to negligible errors. Similar considerations show that (13) is equal to

$\displaystyle 2 (\frac{\phi(W)}{W} \log R) \sum_{d_0 \in {\mathcal S}_{(w,\infty)}} \frac{h(d_0)}{d_0} f(\frac{\log d_0}{\log R}) (F(d_0)-\tilde F(d_0)) \ \ \ \ \ (14)$

up to negligible errors. To upper bound (14), we need to upper bound

$\displaystyle F(d_0)-\tilde F(d_0) = \sum_{m \in {\mathcal S}_{(w,\infty)}: (m,d_0)=1; md_0 \not \in {\mathcal X}} \frac{-f'(\frac{\log d_0 m}{\log R})}{\phi(m)}.$

For this we need to catalog the ways in which ${md_0}$ can fail to be in ${{\mathcal X}}$. In order for this to occur, at least one of the following three statements must hold:

• (i) ${m}$ could be divisible by a prime ${p}$ with ${x^{\delta'} \leq p \leq R}$.
• (ii) ${d_0}$ could be divisible by a prime ${p}$ with ${x^{\delta'} \leq p \leq R}$.
• (iii) ${d_0}$ lies in ${{\mathcal S}_{(w,x^{\delta'})}}$, but ${\prod_{p|d_0: p < x^\delta} p < x^{(\delta' - \delta)/2 + \varpi + \epsilon/2}}$.

We consider the contributions of (i), (ii), (iii) to (14). We begin with the contribution of (i). This is bounded above by

$\displaystyle 2 (\frac{\phi(W)}{W} \log R) \sum_{x^{\delta'} \leq p \leq R} \sum_{d_0 \in {\mathcal S}_{(w,\infty)}} \frac{h(d_0)}{d_0}$

$\displaystyle \sum_{m \in {\mathcal S}_{(w,\infty)}} \frac{-f'(\frac{\log d_0 p m}{\log R})}{\phi(m)\phi(p)}.$

Applying Lemma 7, one can simplify this modulo negligible errors as

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^2 \sum_{x^{\delta'} \leq p \leq R} \frac{1}{\phi(p)} \sum_{d_0 \in {\mathcal S}_{(w,\infty)}} \frac{h(d_0)}{d_0} f(\frac{\log d_0}{\log R}) f( \frac{\log d_0}{\log R} + \frac{\log p}{\log R} )$

which by another application of Lemma 7 is equal to

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^{k_0+1} \sum_{x^{\delta'} \leq p \leq R} \frac{1}{\phi(p)} G_{k_0-1}( 0, \frac{\log p}{\log R})$

where

$\displaystyle G_{k_0-1}(t_1,t_2) := \int_0^1 f(t+t_1) f(t+t_2) \frac{t^{k_0-2}}{(k_0-2)!}\ dt.$

Applying Mertens’ theorem and summation by parts, this expression is equal up to negligible errors to

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^{k_0+1} \int_{\theta}^1 G_{k_0-1}(0,t)\ \frac{dt}{t}$

where

$\displaystyle \theta := \frac{\log x^{\delta'}}{\log R} = \frac{\delta'}{1/4 + \varpi}.$

Now we turn to the contribution of (ii) to (14). This is bounded above by

$\displaystyle 2 (\frac{\phi(W)}{W} \log R) \sum_{x^{\delta'} \leq p \le R} \frac{h(p)}{p} \sum_{d_0 \in {\mathcal S}_{(w,\infty)}} \frac{h(d_0)}{d_0} f(\frac{\log pd_0}{\log R}) F(pd_0).$

By Lemma 7 we have

$\displaystyle F(pd_0) \leq (\frac{\phi(W)}{W} \log R) (f(\frac{\log pd_0}{\log R}) + o(1) )$

and so we may bound this contribution up to negligible errors by

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^2 \sum_{x^{\delta'} \leq p \leq R} \frac{h(p)}{p} \sum_{d_0 \in {\mathcal S}_{(w,\infty)}} \frac{h(d_0)}{d_0} f(\frac{\log pd_0}{\log R})^2$

which by Lemma 7 again is equal (up to negligible errors) to

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^{k_0+1} \sum_{x^{\delta'} \leq p \leq R} \frac{h(p)}{p} G_{k_0-1}( \frac{\log p}{\log R}, \frac{\log p}{\log R} ).$

By definition, ${h(p) = k_0-1 + o(1)}$. By Mertens’ theorem, we can thus write the above expression up to negligible errors as

$\displaystyle 2(k_0-1) (\frac{\phi(W)}{W} \log R)^{k_0+1} \int_\theta^1 G_{k_0-1}(t,t)\ \frac{dt}{t}.$

Finally, we turn to the contribution of case (iii) to (14). By Lemma 7 we have

$\displaystyle F(d_0) \leq (\frac{\phi(W)}{W} \log R) (f(\frac{\log d_0}{\log R}) + o(1) )$

so we may bound this contribution up to negligible errors by

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^2 \sum_{d_0} \frac{h(d_0)}{d_0} f(\frac{\log d_0}{\log R})^2$

where ${d_0}$ is as in case (iii).

We introduce the quantities

$\displaystyle \tilde \theta := \frac{\log x^{(\delta' - \delta)/2 + \varpi + \epsilon/2}}{\log R} = \frac{(\delta' - \delta)/2 + \varpi + \epsilon/2}{1/4+\varpi}$

and

$\displaystyle \tilde \delta := \frac{\log x^\delta}{\log R} = \frac{\delta}{1/4+\varpi}$

so that case (iii) consists of those ${d_0}$ in ${{\mathcal S}_{(w, R^\theta)}}$ such that

$\displaystyle \prod_{p|d_0: p < R^{\tilde \delta}} p < R^{\tilde \theta}.$

From the support of ${F}$ we may also take ${d_0 \leq R}$. This implies that we may factor

$\displaystyle d_0 = p_1 \ldots p_J d$

for some primes

$\displaystyle R^{\tilde \delta} \leq p_1 < \ldots < p_J \leq R^{\theta}$

and some ${d \leq R^{\tilde \theta}}$ coprime to ${p_1,\ldots,p_J}$, with

$\displaystyle 0 \leq J \leq \frac{1}{\tilde \delta}.$

The contribution of this case may thus be bounded by

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^2 \sum_{0 \leq J \leq \frac{1}{\tilde \delta}} \sum_{R^{\tilde \delta} \leq p_1 < \ldots < p_J \leq R^\theta} \frac{h(p_1 \ldots p_J)}{p_1 \ldots p_J}$

$\displaystyle \sum_{d \leq R^{\tilde \theta}} \frac{h(d)}{d} f(\frac{\log d p_1 \ldots p_J}{\log R})^2.$

Evaluating the inner sum using Lemma 7, we obtain (up to negligible errors)

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^{k_0+1} \sum_{0 \leq J \leq \frac{1}{\tilde \delta}} \sum_{R^{\tilde \delta} \leq p_1 < \ldots < p_J \leq R^\theta} \frac{h(p_1 \ldots p_J)}{p_1 \ldots p_J}$

$\displaystyle G_{k_0-1,\tilde \theta}( \frac{\log p_1 \ldots p_J}{\log R}, \frac{\log p_1 \ldots p_J}{\log R})$

where ${G_{k_0-1,\tilde \theta}}$ is a truncation of ${G_{k_0-1}}$:

$\displaystyle G_{k_0-1, \tilde \theta}(t_1,t_2) := \int_0^{\tilde \theta} f(t+t_1) f(t+t_2) \frac{t^{k_0-2}}{(k_0-2)!}\ dt.$

Again we have ${h(p_1 \ldots p_J) = (k_0-1)^J + o(1)}$. By Mertens’ theorem we may write this (up to negligible errors) as

$\displaystyle 2 (\frac{\phi(W)}{W} \log R)^{k_0+1} \sum_{0 \leq J \leq \frac{1}{\tilde \delta}} (k_0-1)^J \int_{\tilde \delta \leq t_1 < \ldots < t_J \leq \theta}$

$\displaystyle G_{k_0-1,\tilde \theta}( t_1 + \ldots + t_J, t_1 + \ldots + t_J)\ \frac{dt_1 \ldots dt_J}{t_1 \ldots t_J}.$

Putting all this together, we have obtained the lower bound (6) with

$\displaystyle \beta = G_{k_0-1}(0,0) (1 - 2\kappa_1 - 2\kappa_2 - 2\kappa_3)$

where

$\displaystyle \kappa_1 := G_{k_0-1}(0,0)^{-1} \int_{\theta}^1 G_{k_0-1}(0,t)\ \frac{dt}{t}$

$\displaystyle \kappa_2 := (k_0-1) G_{k_0-1}(0,0)^{-1} \int_{\theta}^1 G_{k_0-1}(t,t)\ \frac{dt}{t}$

and

$\displaystyle \kappa_3 = G_{k_0-1}(0,0)^{-1} \sum_{0 \leq J \leq \frac{1}{\tilde \delta}} (k_0-1)^J \int_{\tilde \delta \leq t_1 < \ldots < t_J \leq \theta}$

$\displaystyle G_{k_0-1,\tilde \theta}( t_1 + \ldots + t_J, t_1 + \ldots + t_J)\ \frac{dt_1 \ldots dt_J}{t_1 \ldots t_J}.$

We now place upper bounds on ${\kappa_1,\kappa_2,\kappa_3}$. In this previous post, the bounds

$\displaystyle G_{k_0-1}(0,t) \leq (1-t)^{(k_0-1)/2} G_{k_0-1}(0,0)$

and

$\displaystyle G_{k_0-1}(t,t) \leq (1-t)^{k_0-1} G_{k_0-1}(0,0)$

for ${0 < t < 1}$ are proven. Thus we have

$\displaystyle \kappa_1 \leq \int_{\theta}^1 (1-t)^{(k_0-1)/2}\ \frac{dt}{t}$

$\displaystyle \kappa_2 \leq (k_0-1) \int_{\theta}^1 (1-t)^{k_0-1}\ \frac{dt}{t}.$

These are already quite small for ${\theta \approx 1/2}$, say, which would correspond to ${\delta' \approx 1/8}$.

For ${\kappa_3}$ we will use the crude estimate

$\displaystyle G_{k_0-1,\tilde \theta}( t_1 + \ldots + t_J, t_1 + \ldots + t_J) \leq G_{k_0-1,\tilde \theta}(0,0);$

this may surely be improved, but we will not do so here to simplify the exposition. Then we may bound

$\displaystyle \kappa_3 \leq \frac{G_{k_0-1,\tilde \theta}(0,0)}{G_{k_0-1}(0,0)} \sum_{0 \leq J \leq 1/\tilde \delta} \frac{(k_0-1)^J}{J!} (\log \frac{\theta}{\tilde \delta})^J.$

The point here is that the first term ${\frac{G_{k_0-1,\tilde \theta}(0,0)}{G_{k_0-1}(0,0)}}$ is exponentially decaying in ${k_0}$, which can compensate for the second term if ${1/\tilde \delta \ll k_0}$ which is currently the case in the regime of interest.

One can do a bit better than this. For any parameter ${A\geq 0}$, one has

$\displaystyle G_{k_0-1,\tilde \theta}( t_1 + \ldots + t_J, t_1 + \ldots + t_J) \leq e^A e^{-A(t_1 + \ldots + t_J)} G_{k_0-1,\tilde \theta}(0,0)$

since the left-hand side vanishes for ${t_1+\ldots+t_J \geq 1}$. This gives the bound

$\displaystyle \kappa_3 \leq e^A \frac{G_{k_0-1,\tilde \theta}(0,0)}{G_{k_0-1}(0,0)} \sum_{0 \leq J \leq 1/\tilde \delta} \frac{(k_0-1)^J}{J!} (\int_{\tilde \delta}^\theta e^{-At} \frac{dt}{t})^J.$

If we insert these bounds into (7), send ${\epsilon}$ to zero, and optimise in ${f}$ using Theorem 14 from this previous post, we obtain Theorem 5.