As in previous posts, we use the following asymptotic notation: ${x}$ is a parameter going off to infinity, and all quantities may depend on ${x}$ unless explicitly declared to be “fixed”. The asymptotic notation ${O(), o(), \ll}$ is then defined relative to this parameter. A quantity ${q}$ is said to be of polynomial size if one has ${q = O(x^{O(1)})}$, and bounded if ${q=O(1)}$. We also write ${X \lessapprox Y}$ for ${X \ll x^{o(1)} Y}$, and ${X \sim Y}$ for ${X \ll Y \ll X}$.

The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument (though not fully self-contained, as we will need some lemmas from previous posts).

In order to state the main result, we need to recall some definitions.

Definition 1 (Singleton congruence class system) Let ${I \subset {\bf R}}$, and let ${{\mathcal S}_I}$ denote the square-free numbers whose prime factors lie in ${I}$. A singleton congruence class system on ${I}$ is a collection ${{\mathcal C} = (\{a_q\})_{q \in {\mathcal S}_I}}$ of primitive residue classes ${a_q \in ({\bf Z}/q{\bf Z})^\times}$ for each ${q \in {\mathcal S}_I}$, obeying the Chinese remainder theorem property

$\displaystyle a_{qr}\ (qr) = (a_q\ (q)) \cap (a_r\ (r)) \ \ \ \ \ (1)$

whenever ${q,r \in {\mathcal S}_I}$ are coprime. We say that such a system ${{\mathcal C}}$ has controlled multiplicity if the quantity

$\displaystyle \tau_{\mathcal C}(n) := |\{ q \in {\mathcal S}_I: n = a_q\ (q) \}|$

obeys the estimate

$\displaystyle \sum_{C^{-1} x \leq n \leq Cx: n = a\ (r)} \tau_{\mathcal C}(n)^2 \ll \frac{x}{r} \tau(r)^{O(1)} \log^{O(1)} x + x^{o(1)} \ \ \ \ \ (2)$

for any fixed ${C>1}$ and any congruence class ${a\ (r)}$ with ${r \in {\mathcal S}_I}$. Here ${\tau}$ is the divisor function.

Next we need a relaxation of the concept of ${y}$-smoothness.

Definition 2 (Dense divisibility) Let ${y \geq 1}$. A positive integer ${q}$ is said to be ${y}$-densely divisible if, for every ${1 \leq R \leq q}$, there exists a factor of ${q}$ in the interval ${[y^{-1} R, R]}$. We let ${{\mathcal D}_y}$ denote the set of ${y}$-densely divisible positive integers.
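As an aside, the definition is easy to explore numerically. The following brute-force sketch (our own throwaway code, not needed for the argument) checks the key points: every ${y}$-smooth number is automatically ${y}$-densely divisible, but the converse fails; for instance ${14 = 2 \times 7}$ is ${4}$-densely divisible without being ${4}$-smooth.

```python
def divisors(q):
    """All divisors of q in increasing order (brute force, small q only)."""
    return [d for d in range(1, q + 1) if q % d == 0]

def largest_prime_factor(q):
    p, lpf = 2, 1
    while q > 1:
        while q % p == 0:
            q //= p
            lpf = p
        p += 1
    return lpf

def is_densely_divisible(q, y):
    # q is y-densely divisible iff consecutive divisors differ by a factor
    # of at most y: then every window [R/y, R] with 1 <= R <= q contains a
    # divisor of q (take the largest divisor not exceeding R).
    ds = divisors(q)
    return all(b <= y * a for a, b in zip(ds, ds[1:]))

# every y-smooth number is y-densely divisible (here with y = 5)...
for q in range(1, 300):
    if largest_prime_factor(q) <= 5:
        assert is_densely_divisible(q, 5)

# ...but not conversely: 14 = 2*7 is 4-densely divisible yet not 4-smooth
assert is_densely_divisible(14, 4) and largest_prime_factor(14) == 7
```

The smooth case follows by a greedy argument: if ${d}$ is the largest divisor of a ${y}$-smooth ${q}$ with ${d \leq R}$ and ${d < R/y}$, then multiplying ${d}$ by any prime factor of ${q/d}$ gives a larger divisor still at most ${R}$, a contradiction.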

Now we present a strengthened version ${MPZ'[\varpi,\delta]}$ of the Motohashi-Pintz-Zhang conjecture ${MPZ[\varpi,\delta]}$, which depends on parameters ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4}$.

Conjecture 3 (${MPZ'[\varpi,\delta]}$) Let ${I \subset {\bf R}}$, and let ${(\{a_q\})_{q \in {\mathcal S}_I}}$ be a singleton congruence class system with controlled multiplicity. Then

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}: q< x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a_q)| \ll x \log^{-A} x \ \ \ \ \ (3)$

for any fixed ${A>0}$, where ${\Lambda}$ is the von Mangoldt function.

The difference between this conjecture and the weaker conjecture ${MPZ[\varpi,\delta]}$ is that the modulus ${q}$ is constrained to be ${x^\delta}$-densely divisible rather than ${x^\delta}$-smooth (note that ${I}$ is no longer constrained to lie in ${[1,x^\delta]}$). This relaxation of the smoothness condition improves the Goldston-Pintz-Yildirim type sieving needed to deduce ${DHL[k_0,2]}$ from ${MPZ'[\varpi,\delta]}$; see this previous post.

The main result we will establish is

Theorem 4 ${MPZ'[\varpi,\delta]}$ holds for any ${\varpi,\delta>0}$ with

$\displaystyle 148\varpi+33\delta < 1. \ \ \ \ \ (4)$

This improves upon the previous constraints of ${87\varpi + 17 \delta < \frac{1}{4}}$ (see this blog comment) and ${207 \varpi + 43 \delta < \frac{1}{4}}$ (see Theorem 13 of this previous post), which were moreover only established for ${MPZ[\varpi,\delta]}$ rather than ${MPZ'[\varpi,\delta]}$. Inserting Theorem 4 into the Pintz sieve from this previous post gives ${DHL[k_0,2]}$ for ${k_0 = 1467}$ (see this blog comment), which when inserted in turn into the newly assembled tables of narrow prime tuples gives infinitely many pairs of primes of separation at most ${H = 12,012}$.
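As a quick sanity check (ours, with exact rational arithmetic, and not part of the proof) that (4) genuinely enlarges the admissible region: after rescaling the earlier constraints to have bound ${1}$, their coefficients dominate those of (4) componentwise, so every pair ${(\varpi,\delta)}$ they admit is admitted by (4), while pairs such as ${(\varpi,\delta) = (1/300, 0)}$ are admitted only by (4).

```python
from fractions import Fraction as F

def new_ok(w, d):   # constraint (4): 148w + 33d < 1
    return 148 * w + 33 * d < 1

def old1_ok(w, d):  # earlier constraint: 87w + 17d < 1/4
    return 87 * w + 17 * d < F(1, 4)

def old2_ok(w, d):  # earlier constraint: 207w + 43d < 1/4
    return 207 * w + 43 * d < F(1, 4)

# containment: scan a rational grid of nonnegative (varpi, delta)
for i in range(40):
    for j in range(40):
        w, d = F(i, 2000), F(j, 2000)
        assert (not old1_ok(w, d)) or new_ok(w, d)
        assert (not old2_ok(w, d)) or new_ok(w, d)

# strictness: admissible for (4), but for neither earlier constraint
w = F(1, 300)
assert new_ok(w, 0) and not old1_ok(w, 0) and not old2_ok(w, 0)
```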

— 1. Reduction to Type I/II and Type III estimates —

Following Zhang, we can perform a combinatorial reduction to reduce Theorem 4 to two sub-estimates. To state this properly we need some more notation. We need a large fixed constant ${A_0>0}$ (that determines how finely we slice up the scales).

Definition 5 (Coefficient sequences) A coefficient sequence is a finitely supported sequence ${\alpha: {\bf N} \rightarrow {\bf R}}$ that obeys the bounds

$\displaystyle |\alpha(n)| \ll \tau(n)^{O(1)} \log^{O(1)} x \ \ \ \ \ (5)$

for all ${n}$.

• (i) If ${\alpha}$ is a coefficient sequence and ${a\ (q) = a \hbox{ mod } q}$ is a primitive residue class, the (signed) discrepancy ${\Delta(\alpha; a\ (q))}$ of ${\alpha}$ in this residue class is defined to be the quantity

$\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n). \ \ \ \ \ (6)$

• (ii) A coefficient sequence ${\alpha}$ is said to be at scale ${N}$ for some ${N \geq 1}$ if it is supported on an interval of the form ${[(1-O(\log^{-A_0} x)) N, (1+O(\log^{-A_0} x)) N]}$.
• (iii) A coefficient sequence ${\alpha}$ at scale ${N}$ is said to obey the Siegel-Walfisz theorem if one has

$\displaystyle | \Delta(\alpha 1_{(\cdot,q)=1}; a\ (r)) | \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (7)$

for any ${q,r \geq 1}$, any fixed ${A>0}$, and any primitive residue class ${a\ (r)}$.

• (iv) A coefficient sequence ${\alpha}$ at scale ${N}$ is said to be smooth if it takes the form ${\alpha(n) = \psi(n/N)}$ for some smooth function ${\psi: {\bf R} \rightarrow {\bf R}}$ supported on ${[1-O(\log^{-A_0} x), 1+O(\log^{-A_0} x)]}$ obeying the derivative bounds

$\displaystyle \psi^{(j)}(t) = O( \log^{j A_0} x ) \ \ \ \ \ (8)$

for all fixed ${j \geq 0}$ (note that the implied constant in the ${O()}$ notation may depend on ${j}$).

We can now state the two subestimates needed. The first controls sums of Type I or Type II:

Theorem 6 (Type I/II estimate) Let ${\varpi, \delta, \sigma > 0}$ be fixed quantities such that

$\displaystyle 17 \varpi + 4\delta + \sigma < \frac{1}{4} \ \ \ \ \ (9)$

and

$\displaystyle 20 \varpi + 6\delta + 3\sigma < \frac{1}{2} \ \ \ \ \ (10)$

and

$\displaystyle 32 \varpi + 9\delta + \sigma < \frac{1}{2} \ \ \ \ \ (11)$

and

$\displaystyle 48\varpi + 7 \delta < \frac{1}{2} \ \ \ \ \ (12)$

and let ${\alpha,\beta}$ be coefficient sequences at scales ${M,N}$ respectively with

$\displaystyle MN \sim x \ \ \ \ \ (13)$

and

$\displaystyle x^{\frac{1}{2}-\sigma} \ll N \ll M \ll x^{\frac{1}{2}+\sigma} \ \ \ \ \ (14)$

with ${\beta}$ obeying a Siegel-Walfisz theorem. Then for any ${I \subset {\bf R}}$ and any singleton congruence class system ${(\{a_q\})_{q \in {\mathcal S}_I}}$ with controlled multiplicity we have

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}: q< x^{1/2+2\varpi}} |\Delta(\alpha \ast \beta; a_q)| \ll x \log^{-A} x$

for any fixed ${A>0}$.

This improves upon Theorem 16 of this previous post, in which the modulus was required to be ${x^\delta}$-smooth, the constraints (9), (10), (11) were replaced by the stronger constraint ${ 11 \varpi + 3\delta + 2 \sigma < \frac{1}{4}}$, and (12) was similarly replaced by the stronger constraint ${39 \varpi + 5\delta < \frac{1}{4}}$. Of the three constraints (9), (10), (11), the second constraint (10) is the most stringent in practice, while the constraint (12) is dominated by other constraints (such as (4)).

The second subestimate controls sums of Type III:

Theorem 7 (Type III estimate) Let ${\varpi, \delta > 0}$ be fixed quantities. Let ${N_1,N_2,N_3, M > 1}$ be scales obeying the relations

$\displaystyle N_1 \gg N_2,N_3$

$\displaystyle M N_1^4 N_2^4 N_3^5 > x^{4+16\varpi + \delta+\epsilon} \ \ \ \ \ (15)$

$\displaystyle N_1^3 N_2^3 N_3^4 > x^{3+12\varpi + \delta+\epsilon} \ \ \ \ \ (16)$

and

$\displaystyle N_1 N_2 \gtrapprox x^{1/2+6\varpi+\epsilon} \ \ \ \ \ (17)$

and

$\displaystyle MN_1 N_2 N_3 \sim x$

for some fixed ${\epsilon>0}$. Let ${\alpha,\psi_1,\psi_2,\psi_3}$ be coefficient sequences at scales ${M,N_1,N_2,N_3}$ respectively, with ${\psi_1,\psi_2,\psi_3}$ smooth. Then for any ${I \subset {\bf R}}$, and any singleton congruence class system ${(\{a_q\})_{q \in {\mathcal S}_I}}$ we have

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}: q< x^{1/2+2\varpi}} |\Delta(\alpha \ast \psi_1 \ast \psi_2 \ast \psi_3; a_q)| \ll x \log^{-A} x$

for any fixed ${A>0}$.

This improves upon Theorem 17 in this previous post, in which the modulus was required to be ${x^\delta}$-smooth, and the constraints (15), (16) were replaced by the stronger constraint ${N_1^4 N_2^4 N_3^5 > x^{4+16\varpi + \delta+\epsilon}}$. Of the two constraints (15), (16), the first constraint (15) is more stringent in practice.

Let us now recall the combinatorial argument (from this previous post) that allows one to deduce Theorem 4 from Theorems 6 and 7. As in Section 3 of this previous post, we let ${K \geq 1}$ be a fixed integer (${K=10}$ will suffice). Using the Heath-Brown identity as discussed in that section, we reduce to establishing the bound

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}: q< x^{1/2+2\varpi}} |\Delta(\alpha_1 \ast \ldots \ast \alpha_{2J}; a_q)| \ll x \log^{-A} x$

where ${1 \leq J \leq K}$, ${1 \ll N_1,\ldots,N_{2J} \ll x}$ are quantities with the following properties:

• (i) Each ${\alpha_i}$ is a coefficient sequence at scale ${N_i}$. More generally the convolution ${\alpha_S}$ of the ${\alpha_i}$ for ${i \in S}$ is a coefficient sequence at scale ${\prod_{i \in S} N_i}$.
• (ii) If ${N_i \gg x^{1/K+\epsilon}}$ for some fixed ${\epsilon>0}$, then ${\alpha_i}$ is smooth.
• (iii) If ${N_i \gg x^\epsilon}$ for some fixed ${\epsilon>0}$, then ${\alpha_i}$ obeys a Siegel-Walfisz theorem. More generally, ${\alpha_S}$ obeys a Siegel-Walfisz theorem if ${\prod_{i \in S} N_i \gg x^\epsilon}$ for some fixed ${\epsilon>0}$.
• (iv) ${N_1 \ldots N_{2J} \sim x}$.

We can write ${N_i \sim x^{t_i}}$ for ${i=1,\ldots,2J}$, where the ${t_i}$ are non-negative reals that sum to ${1}$. We apply Lemma 6 from this previous post with some parameter

$\displaystyle 1/10 < \sigma < 1/2 \ \ \ \ \ (18)$

to be chosen later and conclude one of the following:

• (Type 0) There is a ${t_i}$ with ${t_i \geq 1/2 + \sigma}$.
• (Type I/II) There is a partition ${\{1,\ldots,2J\} = S \cup T}$ such that

$\displaystyle \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \sum_{i \in T} t_i < \frac{1}{2} + \sigma.$

• (Type III) There exist distinct ${i,j,k}$ with ${2\sigma \leq t_i \leq t_j \leq t_k \leq 1/2-\sigma}$ and ${t_i+t_j,t_i+t_k,t_j+t_k \geq 1/2 + \sigma}$.

In the Type 0 case, we can write ${\alpha_1 \ast \ldots \ast \alpha_{2J}}$ in a form in which Theorem 15 from this previous post applies. Similarly, in the Type I/II case we can write ${\alpha_1 \ast \ldots \ast \alpha_{2J}}$ in a form in which Theorem 6 applies, provided that the conditions (9), (10), (11) are obeyed (the condition (12) is implied by (4)). Now suppose we are in the Type III case. For ${K}$ large enough (e.g. ${K=10}$), we see that ${t_i,t_j,t_k \geq \frac{1}{K}+\epsilon}$ for some fixed ${\epsilon}$. Theorem 7 will then apply with

$\displaystyle N_1 \sim x^{t_k}$

$\displaystyle N_2 \sim x^{t_j}$

$\displaystyle N_3 \sim x^{t_i}$

$\displaystyle M \sim x^{1-t_i-t_j-t_k}$

provided that we can verify the hypotheses (15), (16), (17), which will follow if we have

$\displaystyle (1-t_i-t_j-t_k) + 4t_k + 4t_j + 5t_i > 4 + 16 \varpi + \delta$

and

$\displaystyle 3t_k + 3 t_j + 4 t_i > 3 + 12 \varpi + \delta$

and

$\displaystyle t_k + t_j > \frac{1}{2} + 6 \varpi.$

Since ${t_i+t_j,t_i+t_k,t_j+t_k \geq 1/2+\sigma}$, and hence ${3t_k + 3t_j + 4t_i = 2(t_i+t_j) + 2(t_i+t_k) + (t_j+t_k) \geq 5(\frac{1}{2}+\sigma)}$, we have

$\displaystyle (1-t_i-t_j-t_k) + 4t_k + 4t_j + 5t_i > 1 + 5 (\frac{1}{2}+\sigma)$

and

$\displaystyle 3t_k + 3t_j + 4t_i > 5 (\frac{1}{2}+\sigma)$

so we will be done if we can find ${\sigma}$ obeying the constraints (18), (9), (10), (11) as well as the constraints

$\displaystyle 1 + 5 (\frac{1}{2}+\sigma) > 4 + 16 \varpi + \delta \ \ \ \ \ (19)$

and

$\displaystyle 5 (\frac{1}{2}+\sigma) > 3 + 12 \varpi + \delta \ \ \ \ \ (20)$

and

$\displaystyle \sigma > 6 \varpi. \ \ \ \ \ (21)$

The condition (20) is a consequence of (19) and can thus be omitted.
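Indeed, subtracting the left-hand side of (19) from that of (20) leaves only ${4\varpi}$, which is positive. One can check this mechanically (a throwaway computation of ours, in exact rational arithmetic):

```python
from fractions import Fraction as F

# represent a linear form a + b*varpi + c*delta + e*sigma as (a, b, c, e)
def sub(p, q):  # p - q, coefficientwise
    return tuple(x - y for x, y in zip(p, q))

# (19): [1 + 5*(1/2 + sigma)] - [4 + 16*varpi + delta] > 0
ineq19 = sub((1 + F(5, 2), 0, 0, 5), (4, 16, 1, 0))
# (20): [5*(1/2 + sigma)] - [3 + 12*varpi + delta] > 0
ineq20 = sub((F(5, 2), 0, 0, 5), (3, 12, 1, 0))

# (20) minus (19) is exactly 4*varpi, which is positive for varpi > 0,
# so (19) implies (20)
diff = sub(ineq20, ineq19)
assert diff == (0, 4, 0, 0)
```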

We rewrite all of the constraints in terms of upper and lower bounds on ${\sigma}$. The upper bounds take the form

$\displaystyle \sigma < 1/2 \ \ \ \ \ (22)$

$\displaystyle \sigma < \frac{1}{4} - 17 \varpi - 4 \delta \ \ \ \ \ (23)$

$\displaystyle \sigma < \frac{1}{6} - \frac{20}{3} \varpi - 2 \delta \ \ \ \ \ (24)$

$\displaystyle \sigma < \frac{1}{2} - 32 \varpi - 9\delta \ \ \ \ \ (25)$

while the lower bounds take the form

$\displaystyle \sigma > \frac{1}{10} \ \ \ \ \ (26)$

$\displaystyle \sigma > \frac{1}{10} + \frac{16}{5} \varpi + \frac{1}{5} \delta \ \ \ \ \ (27)$

$\displaystyle \sigma > 6\varpi \ \ \ \ \ (28)$

Clearly (22) is implied by (23), and (26) is implied by (27); restricting to the case ${\varpi < 1/60}$ (which follows from (4)), (28) is also implied by (26). Assuming (4), we also see that (23), (25) are implied by (24), so we reduce to establishing that

$\displaystyle \frac{1}{10} + \frac{16}{5} \varpi + \frac{1}{5} \delta < \frac{1}{6} - \frac{20}{3} \varpi - 2 \delta$

but this rearranges to (4).
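To double-check the rearrangement (again a throwaway exact-arithmetic computation of ours): collecting terms gives coefficients ${\frac{148}{15}}$ for ${\varpi}$, ${\frac{33}{15}}$ for ${\delta}$, and constant ${\frac{1}{15}}$, and normalising the constant to ${1}$ recovers ${148\varpi+33\delta<1}$.

```python
from fractions import Fraction as F

# move all terms of  1/10 + (16/5)w + (1/5)d < 1/6 - (20/3)w - 2d
# to one side:
w_coef = F(16, 5) + F(20, 3)   # coefficient of varpi
d_coef = F(1, 5) + 2           # coefficient of delta
bound = F(1, 6) - F(1, 10)     # constant remaining on the right

# normalise the bound to 1, as in (4)
assert (w_coef / bound, d_coef / bound) == (148, 33)
```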

It remains to establish Theorem 6 and Theorem 7.

— 2. Type I/II analysis —

We begin the proof of Theorem 6, closely following the arguments from Section 5 of this previous post. We can restrict ${q}$ to the range

$\displaystyle x^{1/2 - o(1)} \leq q$

for some sufficiently slowly decaying ${o(1)}$, since otherwise we may use the Bombieri-Vinogradov theorem (Theorem 4 from this previous post). Thus, by dyadic decomposition, we need to show that

$\displaystyle \sum_{d \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}: D \leq d < 2D} |\Delta(\alpha \ast \beta; a_d)| \ll NM \log^{-A} x \ \ \ \ \ (29)$

for any ${D}$ in the range

$\displaystyle x^{1/2} \lessapprox D \lessapprox x^{1/2+2\varpi}.$

Let

$\displaystyle \mu > 0 \ \ \ \ \ (30)$

be an exponent to be optimised later (in the Type I case, it will be infinitesimally close to zero, while in the Type II case, it will be infinitesimally larger than ${2\varpi}$).

By Lemma 11 of this previous post, we know that for all ${d}$ in ${[D,2D]}$ outside of a small number of exceptions, we have

$\displaystyle \prod_{p|d: p \leq D_0} p \lessapprox 1 \ \ \ \ \ (31)$

where

$\displaystyle D_0 := \exp(\log^{1/3} x).$

Specifically, the number of exceptions in the interval ${[D,2D]}$ is ${O(D \log^{-A} x)}$ for any fixed ${A>0}$. The contribution of the exceptional ${d}$ can be shown to be acceptable by Cauchy-Schwarz and trivial estimates (see Section 5 of this previous post), so we restrict attention to those ${d}$ for which (31) holds. In particular, as ${d}$ is restricted to be ${x^\delta}$-densely divisible we may factor

$\displaystyle d=qr$

with ${q,r}$ coprime and square-free, with ${q \in {\mathcal S}_{I'}}$, where ${I' := [D_0,\infty) \cap I}$, and

$\displaystyle x^{-\mu-\delta} N \lessapprox r \lessapprox x^{-\mu} N$

and

$\displaystyle x^{1/2} \lessapprox qr \lessapprox x^{1/2+2\varpi}.$
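This factorisation step is where dense divisibility earns its keep: one extracts a divisor ${r}$ of ${d}$ from a prescribed window of multiplicative width ${x^\delta}$, and sets ${q := d/r}$. A toy brute-force illustration of the extraction (our own notation, not part of the argument):

```python
def divisor_in_window(d, y, T):
    """Return a divisor of d lying in [T/y, T], which exists whenever
    d is y-densely divisible and 1 <= T <= d; None otherwise."""
    for r in sorted(k for k in range(1, d + 1) if d % k == 0):
        if T / y <= r <= T:
            return r
    return None

# d = 2*3*5*7 = 210 is squarefree and 7-smooth, hence 7-densely divisible,
# so every window [T/7, T] with 1 <= T <= 210 contains a divisor
for T in [2, 5, 11, 29, 60, 150, 210]:
    assert divisor_in_window(210, 7, T) is not None
```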

By dyadic decomposition, it thus suffices to show that

$\displaystyle \sum_{q \in {\mathcal S}_{I'}: q \sim Q} \sum_{r \in {\mathcal S}_I: r \sim R; (q,r)=1} |\Delta(\alpha \ast \beta; a_{qr})| \ll NM \log^{-A} x$

for any fixed ${A>0}$, where ${Q, R \geq 1}$ obey the size conditions

$\displaystyle x^{-\mu-\delta} N \lessapprox R \lessapprox x^{-\mu} N \ \ \ \ \ (32)$

and

$\displaystyle x^{1/2} \lessapprox QR \lessapprox x^{1/2 + 2\varpi}. \ \ \ \ \ (33)$

Fix ${Q,R}$. We abbreviate ${\sum_{q \in {\mathcal S}_{I'}: q \sim Q}}$ and ${\sum_{r \in {\mathcal S}_I: r \sim R}}$ by ${\sum_q}$ and ${\sum_r}$ respectively, thus our task is to show that

$\displaystyle \sum_q \sum_{r: (q,r)=1} |\Delta(\alpha \ast \beta; a_{qr})| \ll NM \log^{-A} x.$

We now split the discrepancy

$\displaystyle \Delta(\alpha \ast \beta; a_{qr}) = \sum_{n: n = a_{qr}\ (qr)} \alpha \ast \beta(n) - \frac{1}{\phi(qr)} \sum_{n: (n,qr)=1} \alpha \ast \beta(n)$

as the sum of the subdiscrepancies

$\displaystyle \sum_{n: n = a_{qr}\ (qr)} \alpha \ast \beta(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1; n = a_r\ (r)} \alpha \ast \beta(n)$

and

$\displaystyle \frac{1}{\phi(q)} \sum_{n: (n,q)=1; n = a_r\ (r)} \alpha \ast \beta(n) - \frac{1}{\phi(qr)} \sum_{n: (n,qr)=1} \alpha \ast \beta(n).$

In Section 5 of this previous post, it was established that

$\displaystyle \sum_{q} \sum_{r; (q,r)=1} |\frac{1}{\phi(q)} \sum_{n: (n,q)=1; n = a_r\ (r)} \alpha \ast \beta(n) - \frac{1}{\phi(qr)} \sum_{n: (n,qr)=1} \alpha \ast \beta(n)| \ll$

$\displaystyle NM \log^{-A} x$

so it suffices to show that

$\displaystyle \sum_{q} \sum_{r; (q,r)=1} |\sum_{n: n = a_{qr}\ (qr)} \alpha \ast \beta(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1; n = a_r\ (r)} \alpha \ast \beta(n)| \ \ \ \ \ (34)$

$\displaystyle \ll NM \log^{-A} x.$

As in the previous notes, we will not take advantage of the ${r}$ summation, and use crude estimates to reduce to showing that

$\displaystyle \sum_{q; (q,r)=1} |\sum_{n: n = a_q\ (q); n = a_r\ (r)} \alpha \ast \beta(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1; n = a_r\ (r)} \alpha \ast \beta(n)| \ \ \ \ \ (35)$

$\displaystyle \ll NM R^{-1} \tau(r)^{O(1)} \log^{-A} x$

for each individual ${r \in {\mathcal S}_I}$ with ${r \sim R}$, which we now fix. Repeating the previous arguments, it suffices to show that

$\displaystyle \sum_{q; (q,r)=1} |\sum_{n: n = b_q\ (q); n = a_r\ (r)} \alpha \ast \beta(n) - \sum_{n: n = b'_q\ (q); n = a_r\ (r)} \alpha \ast \beta(n)| \ \ \ \ \ (36)$

$\displaystyle \ll NM R^{-1} \tau(r)^{O(1)} \log^{-A} x$

whenever ${(b_q)_{q \in {\mathcal S}_{I'}}, (b'_q)_{q \in {\mathcal S}_{I'}}}$ are good singleton congruence class systems.

By duality and Cauchy-Schwarz exactly as in Section 5 of the previous post, it suffices to show that

$\displaystyle \sum_{m} \psi_M(m) |\sum_{q,n: mn = a_r\ (r); (q,r)=1} c_{q} \beta(n) (1_{mn = b_{q}\ (q)} - 1_{mn = b'_{q}\ (q)})|^2 \ \ \ \ \ (37)$

$\displaystyle \ll N^2 M R^{-2} \tau(r)^{O(1)} \log^{-A} x$

for any fixed ${A>0}$, where ${\psi_M}$ is a smooth coefficient sequence at scale ${M}$. Expanding out the square, it suffices to show that

$\displaystyle \sum_{m} \psi_M(m) \sum_{q_1,q_2,n_1,n_2: mn_1=mn_2 = a_r\ (r); (q_1,r)=(q_2,r)=1} \ \ \ \ \ (38)$

$\displaystyle c_{q_1} c_{q_2} \beta(n_1) \beta(n_2) 1_{mn_1 = b_{q_1}\ (q_1)} 1_{mn_2 = b'_{q_2}\ (q_2)}$

$\displaystyle = X + O( N^2 M R^{-2} \tau(r)^{O(1)} \log^{-A} x )$

where ${q_1,q_2}$ are subject to the same constraints as ${q}$ (thus ${q_i \in {\mathcal S}_{I'}}$ and ${q_i \sim Q}$ for ${i=1,2}$), and ${X}$ is some quantity that is independent of the choice of congruence classes ${(b_q)_{q \in {\mathcal S}_{I'}}}$, ${(b'_q)_{q \in {\mathcal S}_{I'}}}$.

As in the previous notes, we can dispose of the case when ${q_1,q_2}$ share a common factor by using the controlled multiplicity hypothesis, provided we have the hypothesis

$\displaystyle x^\mu \geq x^{-1/2+2\varpi+c} N \ \ \ \ \ (39)$

which we file away for later. (There was also the condition ${R \ll x^{-c+o(1)} M}$ from equation (33) of previous notes, but this condition is implied by (30) for ${c}$ small enough since ${N \ll M}$.) This reduces us to establishing

$\displaystyle \sum_{m} \psi_M(m) \sum_{q_1,q_2,n_1,n_2: mn_1=mn_2 = a_r\ (r); (q_1,r)=(q_2,r)=(q_1,q_2)=1} \ \ \ \ \ (40)$

$\displaystyle c_{q_1} c_{q_2} \beta(n_1) \beta(n_2) 1_{mn_1 = b_{q_1}\ (q_1)} 1_{mn_2 = b'_{q_2}\ (q_2)}$

$\displaystyle = X + O( N^2 M R^{-2} \log^{-A} x ).$

It remains to verify (40). Observe that ${n_1}$ must be coprime to ${q_1r}$ and ${n_2}$ coprime to ${q_2r}$, with ${n_1 = n_2\ (r)}$, to have a non-zero contribution to the sum. We then rearrange the left-hand side as

$\displaystyle \sum_{q_1,q_2: (q_1,r)=(q_2,r)=(q_1,q_2)=1} \sum_{m} \psi_M(m) \sum_{n_1,n_2: n_1=n_2\ (r); (n_1,q_1r)=(n_2,q_2)=1}$

$\displaystyle c_{q_1} c_{q_2} \beta(n_1) \beta(n_2) 1_{m = a_r/n_1\ (r); m = b_{q_1}/n_1\ (q_1); m = b'_{q_2}/n_2\ (q_2)};$

note that these inverses in the various rings ${{\bf Z}/r{\bf Z}}$, ${{\bf Z}/q_1{\bf Z}}$, ${{\bf Z}/q_2{\bf Z}}$ are well-defined thanks to the coprimality hypotheses.

We may write ${n_2 = n_1+kr}$ for some ${k = O(N/R)}$. By the triangle inequality, and relabeling ${n_1}$ as ${n}$, it thus suffices to show that for any particular

$\displaystyle k = O(N/R), \ \ \ \ \ (41)$

one has

$\displaystyle \sum_{q_1,q_2: (q_1,r)=(q_2,r)=(q_1,q_2)=1} \sum_{m} \psi_M(m) \sum_{n; (n,q_1r)=(n+kr,q_2)=1} \ \ \ \ \ (42)$

$\displaystyle c_{q_1} c_{q_2} \beta(n) \beta(n+kr) 1_{m = a_r/n\ (r); m = b_{q_1}/n\ (q_1); m = b'_{q_2}/(n+kr)\ (q_2)}$

$\displaystyle = X_k + O( N M R^{-1} \log^{-A} x )$

for some ${X_k}$ independent of the ${b_q}$ and ${b'_q}$.

Applying completion of sums (Section 2 from the previous post), we reduce to showing that

$\displaystyle \sum_{1 \leq h \leq H} \sum_{q_1,q_2 \sim Q} |\sum_{n} \beta(n) \beta(n+kr) \Phi(h,q_1,q_2; n)| \lessapprox x^{-\epsilon} Q^2 N \ \ \ \ \ (43)$

for a sufficiently small fixed ${\epsilon>0}$, where

$\displaystyle H := x^\epsilon Q^2 R/M \ \ \ \ \ (44)$

and ${\Phi = \Phi_{k,r}}$ is the phase

$\displaystyle \Phi(h,q_1,q_2;n) := 1_{(q_1,r)=(q_2,r)=(q_1,q_2)=(n,r)=(n,q_1)=(n+kr,q_2)=1} \ \ \ \ \ (45)$

$\displaystyle e_r( \frac{a_r h}{nq_1 q_2} ) e_{q_1}( \frac{b_{q_1}h}{n r q_2} ) e_{q_2}( \frac{b'_{q_2} h}{(n+kr) r q_1} ),$

and we have dropped all hypotheses on ${q_1,q_2}$ other than magnitude, and we abbreviate ${\sum_{1 \leq h \leq H}}$ as ${\sum_h}$.

We now split into two cases, one which works when ${M, N}$ are not too close to ${x^{1/2}}$, and one which works when ${M, N}$ are close to ${x^{1/2}}$. Here is the Type I estimate:

Theorem 8 (Type I case) If the inequalities

$\displaystyle 17\varpi + \sigma + 4\mu + 4\delta < \frac{1}{4} \ \ \ \ \ (46)$

and

$\displaystyle 20\varpi + 3\sigma + 6\mu + 6\delta < \frac{1}{2} \ \ \ \ \ (47)$

and

$\displaystyle 32 \varpi + \sigma + 9 \mu + 9\delta < \frac{1}{2} \ \ \ \ \ (48)$

and

$\displaystyle M \gtrapprox x^{1/2+2\varpi+c'} \ \ \ \ \ (49)$

hold for some fixed ${c'>0}$, then (43) holds for a sufficiently small fixed ${\epsilon>0}$.

The hypotheses (46), (47), (48) improve upon Theorem 13 from this previous post, which instead had the stricter condition

$\displaystyle 11\varpi + 3\mu + 3\delta + 2 \sigma < \frac{1}{4}.$

In practice the condition (47) is dominant.

Now we give the Type II estimate:

Theorem 9 (Type II case) If the inequality

$\displaystyle 24\varpi + 7 \mu + 7 \delta + 5 \sigma < \frac{1}{2} \ \ \ \ \ (50)$

holds, then (43) holds for a sufficiently small fixed ${\epsilon>0}$.

This result improves upon Theorem 14 from this previous post which had the stronger condition

$\displaystyle 24\varpi + 10 \mu + 10 \delta + 7 \sigma < \frac{1}{2}.$

In practice, (50) will not hold with the original value of ${\sigma}$ in Theorem 6; instead, we only use Theorem 9 in the case ${M \lessapprox x^{1/2+2\varpi+c'}}$ excluded by Theorem 8, in which case we can lower ${\sigma}$ to ${2\varpi+c'}$ and then verify (50).

Assuming these theorems, let us now conclude the proof of Theorem 6. First suppose we are in the “Type I” regime when (49) holds for some fixed ${c'>0}$. Then by (13) we have

$\displaystyle N \ll x^{1/2-2\varpi-c'}$

which means that the condition (39) is now weaker than (30) (for ${c}$ small enough) and may be omitted. By (9), (10), (11), we can simultaneously obey (30), (46), (47), (48) by setting ${\mu}$ sufficiently close to zero, and the claim now follows from Theorem 8.

Now suppose instead that we are in the “Type II” regime where (49) fails for some small ${c'>0}$, so that by (13) we have

$\displaystyle x^{1/2-2\varpi-c'} \ll N \ll M \ll x^{1/2+2\varpi+c'}.$

From this we see that we may replace ${\sigma}$ by ${2\varpi+c'}$ in (14) and in all of the above analysis. If we set ${\mu := 2\varpi + c'}$ then the conditions (30), (39) are obeyed (again taking ${c}$ small enough). Theorem 9 will then give us what we want provided that

$\displaystyle 24\varpi + 7 (2\varpi+c') + 7 \delta + 5 (2\varpi+c') < \frac{1}{2}$

which is satisfied for ${c'}$ small enough thanks to (12).
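The arithmetic behind this last step is simple enough to verify mechanically (our own throwaway check): substituting ${\mu = \sigma = 2\varpi+c'}$ into (50) collects the ${\varpi}$ coefficient to ${48}$ and the ${c'}$ coefficient to ${12}$, recovering (12) in the limit ${c' \rightarrow 0}$.

```python
# substitute mu = sigma = 2*varpi + cp into (50):
#   24*varpi + 7*mu + 7*delta + 5*sigma < 1/2
varpi_coef = 24 + 7 * 2 + 5 * 2   # coefficient of varpi after substitution
delta_coef = 7                    # coefficient of delta (unchanged)
cp_coef = 7 + 5                   # coefficient of the slack cp

# i.e. 48*varpi + 7*delta + 12*cp < 1/2, which as cp -> 0 is exactly (12)
assert (varpi_coef, delta_coef, cp_coef) == (48, 7, 12)
```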

In the next two sections we establish Theorem 8 and Theorem 9.

— 3. The Type I sum —

We now prove Theorem 8. It suffices to show that

$\displaystyle |\sum_h \sum_{q_1,q_2 \sim Q} c_{h,q_1,q_2} \sum_{n} \beta(n) \beta(n+kr) \Phi(h,q_1,q_2; n)| \lessapprox x^{-\epsilon} Q^2 N$

for any bounded real coefficients ${c_{h,q_1,q_2} = O(1)}$. Performing the manipulations from Section 6 of this previous post, we reduce to showing that

$\displaystyle \sum_{h,h'} \sum_{q_2,q'_2 \sim Q} |\sum_{n} \psi_N(n) \Phi(h,q_1,q_2;n) \overline{\Phi(h',q_1,q'_2;n)}| \lessapprox x^{-2\epsilon} Q^2 N \ \ \ \ \ (51)$

for any ${q_1 \sim Q}$.

To prove (51), we isolate the diagonal case ${h'q_2 = hq'_2}$ and the non-diagonal case ${h'q_2 \neq h q'_2}$. For the diagonal case, we argue as in Section 6 of the previous post and reduce to verifying that

$\displaystyle QR \lessapprox x^{-3\epsilon} M$

but this follows from (33), (49) for ${\epsilon}$ small enough.

Now we treat the non-diagonal case ${h'q_2 \neq hq'_2}$. The key estimate here is

Lemma 10 In the non-diagonal case ${h'q_2 \neq hq'_2}$, we have

$\displaystyle |\sum_{n} \psi_N(n) \Phi(h,q_1,q_2;n) \overline{\Phi(h',q_1,q'_2;n)}| \ \ \ \ \ (52)$

$\displaystyle \lessapprox N^{1/2} Q^{1/4} R^{1/4} + N^{1/2} Q + R^{-1/4} N (hq'_2-h'q_2,r)^{1/4}.$

Proof: From (45) we may of course assume that

$\displaystyle (q_1,r) = (q_2,r) = (q_1,q_2) = (q'_2,r) = (q_1,q'_2) = 1.$

Arguing as in Section 6 of this previous post, we may write the left-hand side of (52) as

$\displaystyle | \sum_{n} \psi_N(n) 1_{(n,d_1) = (n+kr,d_2)=1} e_{d_1}( \frac{c_1}{n} ) e_{d_2}( \frac{c_2}{n+kr} )|$

where ${d_1 := q_1 r}$, ${d_2 := [q_2, q'_2]}$, and ${c_1,c_2}$ are integers with

$\displaystyle (c_1, r) = (hq'_2-h'q_2, r). \ \ \ \ \ (53)$

Now for the new input that was not present in the previous Type I analysis. Applying Proposition 5(iii) from this previous post, and noting that ${d_1,d_2}$ are coprime, we can bound the left-hand side of (52) as

$\displaystyle \lessapprox N^{1/2} d_1^{1/4} +N^{1/2} d_2^{1/2} + \frac{(c_1,d_1)^{1/4}}{d_1^{1/4}} N.$

Since ${d_1 \ll QR}$, ${d_2 \ll Q^2}$, and

$\displaystyle \frac{(c_1,d_1)^{1/4}}{d_1^{1/4}} \leq \frac{(c_1,r)^{1/4}}{r^{1/4}} = \frac{(hq'_2-h'q_2,r)^{1/4}}{r^{1/4}}$

the claim follows. $\Box$

Note from the divisor bound that for each choice of ${h,q'_2}$ and ${a = O(HQ)}$, there are ${\lessapprox 1}$ choices of ${h', q_2}$ such that ${hq'_2 - h'q_2 = a}$. From this and Lemma 5 of this previous post we see that

$\displaystyle \sum_{h,h'} \sum_{q_2,q'_2 \sim Q} (hq'_2-h'q_2,r) 1_{hq'_2 \neq h'q_2} \lessapprox H^2 Q^2$

and thus the same bound also holds with ${(hq'_2-h'q_2,r)}$ replaced by ${(hq'_2-h'q_2,r)^{1/4}}$, since the gcd is at least ${1}$. From this and Lemma 10 we see that the non-diagonal contribution to (51) is

$\displaystyle \lessapprox H^2 Q^2 ( N^{1/2} Q^{1/4} R^{1/4} + N^{1/2} Q + R^{-1/4} N )$

so to conclude (51) we need to show that

$\displaystyle H^2 Q^{9/4} R^{1/4} N^{1/2} \lessapprox x^{-2\epsilon} Q^2 N$

and

$\displaystyle H^2 Q^3 N^{1/2} \lessapprox x^{-2\epsilon} Q^2 N$

and

$\displaystyle H^2 Q^2 R^{-1/4} N \lessapprox x^{-2\epsilon} Q^2 N.$

Using (44), (13) we can rewrite these criteria as

$\displaystyle (QR)^{17/4} \lessapprox x^{2-4\epsilon} N^{1/2} (R/N)^2$

and

$\displaystyle (QR)^5 \lessapprox x^{2-4\epsilon} N^{3/2} (R/N)^3$

and

$\displaystyle (QR)^4 \lessapprox x^{2-4\epsilon} N^{1/4} (R/N)^{9/4}$
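These rewritings are routine but error-prone. Here is a mechanical check of the first of them (our own throwaway monomial arithmetic; the other two criteria can be verified the same way), treating ${H = x^\epsilon Q^2 R/M}$ and ${M = x/N}$ as exact monomial identities:

```python
from fractions import Fraction as F

def mono(**e):                 # a monomial as {symbol: exponent}
    return {k: F(v) for k, v in e.items()}

def mul(*ms):                  # multiply monomials (add exponents)
    out = {}
    for m in ms:
        for k, v in m.items():
            out[k] = out.get(k, F(0)) + v
    return {k: v for k, v in out.items() if v}

def power(m, r):
    return {k: v * F(r) for k, v in m.items()}

def inv(m):
    return power(m, -1)

x, Q, R, N, eps = mono(x=1), mono(Q=1), mono(R=1), mono(N=1), mono(e=1)
# H = x^eps * Q^2 * R / M with M = x/N, i.e. H = x^(eps-1) Q^2 R N
H = mul(power(x, -1), eps, power(Q, 2), R, N)

# first criterion: H^2 Q^{9/4} R^{1/4} N^{1/2}  <~  x^{-2 eps} Q^2 N ...
lhs = mul(power(H, 2), power(Q, F(9, 4)), power(R, F(1, 4)), power(N, F(1, 2)))
rhs = mul(power(eps, -2), power(Q, 2), N)

# ... claimed equivalent to  (QR)^{17/4}  <~  x^{2 - 4 eps} N^{1/2} (R/N)^2
lhs2 = power(mul(Q, R), F(17, 4))
rhs2 = mul(power(x, 2), power(eps, -4), power(N, F(1, 2)),
           power(mul(R, inv(N)), 2))

# the two criteria have identical ratios, hence are equivalent
assert mul(lhs, inv(rhs)) == mul(lhs2, inv(rhs2))
```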

respectively. Applying (33), (32), it suffices to verify that

$\displaystyle \frac{17}{4} (\frac{1}{2} + 2 \varpi) < 2 + \frac{1}{2} ( \frac{1}{2}-\sigma ) - 2 (\mu + \delta)$

and

$\displaystyle 5 (\frac{1}{2} + 2 \varpi) < 2 + \frac{3}{2} ( \frac{1}{2}-\sigma ) - 3 (\mu + \delta)$

and

$\displaystyle 4 (\frac{1}{2} + 2 \varpi) < 2 + \frac{1}{4} ( \frac{1}{2}-\sigma ) - \frac{9}{4} (\mu + \delta)$

respectively. These rearrange to (46), (47), (48) respectively, and the claim follows.
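The final rearrangements can also be double-checked mechanically (our own throwaway exact-arithmetic computation): each inequality of the shape ${a(\frac{1}{2}+2\varpi) < 2 + b(\frac{1}{2}-\sigma) - c(\mu+\delta)}$ rearranges to ${2a\varpi + b\sigma + c(\mu+\delta) < 2 + \frac{b}{2} - \frac{a}{2}}$, and clearing denominators recovers (46), (47), (48).

```python
from fractions import Fraction as F

def rearranged(a, b, c):
    # a*(1/2 + 2w) < 2 + b*(1/2 - s) - c*(m + d)
    # <=>  (2a)*w + b*s + c*(m + d) < 2 + b/2 - a/2
    return (2 * F(a), F(b), F(c), 2 + F(b, 2) - F(a, 2))

def scaled(form, t):  # clear denominators by scaling both sides by t
    return tuple(t * v for v in form)

# the three Type I exponent conditions, as (w-coef, s-coef, (m+d)-coef, bound)
assert scaled(rearranged(F(17, 4), F(1, 2), 2), 2) == (17, 1, 4, F(1, 4))  # (46)
assert scaled(rearranged(5, F(3, 2), 3), 2) == (20, 3, 6, F(1, 2))         # (47)
assert scaled(rearranged(4, F(1, 4), F(9, 4)), 4) == (32, 1, 9, F(1, 2))   # (48)
```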

— 4. The Type II sum —

We now prove Theorem 9. Arguing as in Section 7 of this previous post, it suffices to show that

$\displaystyle \sum_{h,h'} \sum_{q_1, q'_1, q_2,q'_2 \sim Q} |\sum_{n} \psi_N(n) \Phi(h,q_1,q_2;n) \overline{\Phi(h',q'_1,q'_2;n)}| \ \ \ \ \ (54)$

$\displaystyle \lessapprox x^{-2\epsilon} Q^4 N.$

As in the previous post, the diagonal case ${h'q_1 q_2 = h q'_1 q'_2}$ is acceptable provided that

$\displaystyle R \ll x^{-3\epsilon+o(1)} M,$

but this is automatic from (32) and (30) if ${\epsilon}$ is small enough.

We have the following analogue of Lemma 10:

Lemma 11 In the off-diagonal case ${h'q_1q_2 \neq h q'_1 q'_2}$, we have

$\displaystyle |\sum_{n} \psi_N(n) \Phi(h,q_1,q_2;n) \overline{\Phi(h',q'_1,q'_2;n)}| \ \ \ \ \ (55)$

$\displaystyle \lessapprox Q^{2} R^{1/2} + R^{-1} N (hq'_1q'_2-h'q_1q_2,r).$

This is an improved version of the estimate (48) from this previous post in which several inefficiencies in the second term on the right-hand side have been removed.

Proof: From (45) we may assume

$\displaystyle (q_1,r) = (q'_1,r) = (q_2,r) = (q'_2,r) = (q_1,q_2) = (q'_1,q'_2) = 1$

and by the arguments from the previous post we may rewrite the left-hand side of (55) as

$\displaystyle | \sum_{n} \psi_N(n) 1_{(n,d_1) = (n+kr,d_2)=1} e_{d_1}( \frac{c_1}{n} ) e_{d_2}( \frac{c_2}{n+kr} ) |$

where

$\displaystyle d_1 := [q_1,q'_1]r; \quad d_2 := [q_2,q'_2]$

and

$\displaystyle (c_1, r) = (hq'_1q'_2-h'q_1q_2, r).$

By Proposition 5(ii) of this previous post, we may bound this quantity by

$\displaystyle \lessapprox [d_1,d_2]^{1/2} + \frac{N}{d_1 d_2} (d_1,d_2)^2 (c_1,d'_1) (c_2,d'_2)$

where ${d'_1:= d_1/(d_1,d_2)}$, ${d'_2 := d_2/(d_1,d_2)}$. We may bound

$\displaystyle \frac{N}{d_1 d_2} (d_1,d_2)^2 (c_1,d'_1) (c_2,d'_2) \leq N \frac{(c_1,d'_1)}{d'_1} \leq N \frac{(c_1,r)}{r}$

since ${r}$ divides ${d_1}$ but is coprime to ${d_2}$, and the claim follows. $\Box$

Arguing as in the previous section we have

$\displaystyle \sum_{h,h'} \sum_{q_1, q'_1, q_2,q'_2 \sim Q} 1_{hq'_1q'_2 \neq h'q_1q_2} (hq'_1q'_2-h'q_1q_2,r) \lessapprox H^2 Q^4$

and so the off-diagonal contribution to (54) is

$\displaystyle \lessapprox H^2 Q^6 R^{1/2} + H^2 Q^4 R^{-1} N.$

To conclude (54) we thus need to show that

$\displaystyle H^2 Q^6 R^{1/2} \lessapprox x^{-2\epsilon} Q^4 N$

and

$\displaystyle H^2 Q^4 R^{-1} N \lessapprox x^{-2\epsilon} Q^4 N.$

Using (44), (13) we can rewrite these criteria as

$\displaystyle (QR)^6 \lessapprox x^{2-4\epsilon} N^{5/2} (R/N)^{7/2}$

and

$\displaystyle (QR)^4 \lessapprox x^{2-4\epsilon} N (R/N)^3$

respectively. Applying (33), (32), it suffices to verify that

$\displaystyle 6 (\frac{1}{2} + 2 \varpi) < 2 + \frac{5}{2} ( \frac{1}{2}-\sigma ) - \frac{7}{2} (\mu + \delta)$

and

$\displaystyle 4 (\frac{1}{2} + 2 \varpi) < 2 + ( \frac{1}{2}-\sigma ) - 3 (\mu + \delta)$

which can be rearranged as

$\displaystyle 24 \varpi + 7 \mu + 7 \delta + 5 \sigma < 1/2$

and

$\displaystyle 8 \varpi + 3 \mu + 3 \delta + \sigma <1/2$

respectively, and thus both follow from (50).
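The same mechanical check as in the Type I case applies here (again our own throwaway computation): the first condition rearranges exactly to (50), and the second has componentwise smaller coefficients with the same bound, so it is indeed implied by (50).

```python
from fractions import Fraction as F

def rearranged(a, b, c):
    # a*(1/2 + 2w) < 2 + b*(1/2 - s) - c*(m + d)
    # <=>  (2a)*w + b*s + c*(m + d) < 2 + b/2 - a/2
    return (2 * F(a), F(b), F(c), 2 + F(b, 2) - F(a, 2))

first = tuple(2 * v for v in rearranged(6, F(5, 2), F(7, 2)))
second = rearranged(4, 1, 3)

assert first == (24, 5, 7, F(1, 2))   # 24w + 5s + 7(m+d) < 1/2, i.e. (50)
assert second == (8, 1, 3, F(1, 2))   # 8w + s + 3(m+d) < 1/2

# second has smaller coefficients and the same bound, hence follows from (50)
assert all(u <= v for u, v in zip(second[:3], first[:3]))
```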

— 5. The Type III estimate —

Now we prove Theorem 7. Our arguments will closely track those of Section 2 of this previous post, except that we will carry the ${\alpha}$ averaging with us for significantly longer in the argument.

Let ${\varpi,\delta,N_1,N_2,N_3,M,\alpha,\psi_1,\psi_2,\psi_3}$ obey the hypotheses of the theorem. It will suffice to establish the bound

$\displaystyle |\Delta(\alpha \ast \psi_1 \ast \psi_2 \ast \psi_3; a)| \lessapprox x^{-\epsilon} \frac{x}{d} \ \ \ \ \ (56)$

for all ${d \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}}$ with ${d < x^{1/2+2\varpi}}$ and all ${a \in ({\bf Z}/d{\bf Z})^\times}$, and some sufficiently small fixed ${\epsilon>0}$.


Fix ${d}$. It suffices to show that

$\displaystyle \sum_{n: n = a\ (d)} \psi_1 \ast \psi_2 \ast \alpha \ast \psi_3(n) = X + O( x^{-\epsilon+o(1)} \frac{x}{d} )$

for some ${X}$ that does not depend on ${a}$. Applying completion of sums, we can express the left-hand side as the main term

$\displaystyle \frac{1}{d} (\sum_{n_1} \psi_1(n_1)) (\sum_{n: (n,d)=1} \psi_2 \ast \alpha \ast \psi_3(n))$

plus the error terms

$\displaystyle O( (\log^{O(1)} x) \frac{N_1}{d} \sum_{1 \leq h \le H} |\sum_{n: (n,d)=1} \psi_2 \ast \alpha \ast \psi_3(n) e_d( \frac{ah}{n} )| )$

and a tiny error

$\displaystyle O( x^{-A+ O(1)} )$

for any fixed ${A>0}$, where

$\displaystyle H := x^\epsilon \frac{d}{N_1}.$

It thus suffices to show that

$\displaystyle \sum_{1 \leq h \le H} |\sum_{n: (n,d)=1} \psi_2 \ast \alpha \ast \psi_3(n) e_d( \frac{ah}{n} ) | \lessapprox x^{-\epsilon} M N_2 N_3. \ \ \ \ \ (57)$

To establish this, it will suffice to prove the following claim:

Proposition 12 Let ${\delta>0}$ be fixed, and let

$\displaystyle H, N_2, N_3, d, B \gg 1 \ \ \ \ \ (58)$

be such that ${d}$ is ${x^\delta}$-densely divisible and

$\displaystyle H \lessapprox x^{\epsilon} \frac{d}{N_2} \ \ \ \ \ (59)$

and

$\displaystyle M N_2^4 N_3^5 \gtrapprox B^{-6} d^4 H^4 x^{\delta+c} \ \ \ \ \ (60)$

and

$\displaystyle N_2^3 N_3^4 \gtrapprox B^{-4} d^3 H^3 x^{\delta+c} \ \ \ \ \ (61)$

and

$\displaystyle N_2 \gtrapprox H x^c \ \ \ \ \ (62)$

$\displaystyle M N_2^2 N_3 \gtrapprox B^{-2} d H^2 x^c \ \ \ \ \ (63)$

for some fixed ${c>0}$, let ${\alpha}$ be a coefficient sequence at scale ${M}$, and let ${\psi_2,\psi_3}$ be smooth coefficient sequences at scale ${N_2,N_3}$ respectively. Then

$\displaystyle \sum_{1 \leq h \le H: (h,d)=1} |\sum_{n: (n,d)=1} \psi_2 \ast \alpha \ast \psi_3(n) e_d( \frac{ah}{n} ) | \lessapprox x^{-\epsilon} B M N_2 N_3$

if ${\epsilon}$ is sufficiently small.

Let us now see why the above proposition implies (57). To prove (57), we may of course assume ${H \geq 1}$ as the claim is trivial otherwise. We can split

$\displaystyle \sum_{1 \leq h \leq H} F(h) = \sum_{d = d_1 d_2} \sum_{1 \leq h' \leq H/d_2: (h',d_1)=1} F( d_2 h' )$

for any function ${F(h)}$ of ${h}$, so that (57) can be written as

$\displaystyle \sum_{d = d_1 d_2} \sum_{1 \leq h' \leq H/d_2: (h',d_1)=1} |\sum_{n: (n,d_1 d_2)=1} \psi_2 \ast \alpha \ast \psi_3(n) e_{d_1}( \frac{ah'}{n} )|$

which we expand as

$\displaystyle \sum_{d = d_1 d_2} \sum_{1 \leq h' \leq H/d_2: (h',d_1)=1} |\sum_{m: (m,d_1 d_2)=1} \alpha(m) \sum_{n_2: (n_2,d_1 d_2)=1}$

$\displaystyle \sum_{n_3: (n_3,d_1d_2)=1} \psi_2(n_2) \psi_3(n_3) e_{d_1}( \frac{ah'}{mn_2 n_3} )|$
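The splitting identity used above (valid for squarefree ${d}$, via the bijection ${h \leftrightarrow (d_2,h')}$ with ${d_2 = (h,d)}$) can be sanity-checked numerically; the test function below is an arbitrary stand-in for ${F}$:

```python
import math

def split_sum(F, H, d):
    """RHS of the splitting identity: sum of F(d2*h') over factorizations
    d = d1*d2 and 1 <= h' <= H/d2 with h' coprime to d1."""
    total = 0
    for d2 in (t for t in range(1, d + 1) if d % t == 0):
        d1 = d // d2
        total += sum(F(d2 * hp) for hp in range(1, H // d2 + 1)
                     if math.gcd(hp, d1) == 1)
    return total

# Compare with the plain sum over 1 <= h <= H for a squarefree modulus d.
F = lambda h: h * h + 1
H, d = 50, 30          # d = 2*3*5 is squarefree
assert split_sum(F, H, d) == sum(F(h) for h in range(1, H + 1))
```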

In order to apply Proposition 12 we need to modify the ${(n_2,d_1d_2)=1}$, ${(n_3,d_1d_2)=1}$ constraints. By Möbius inversion one has

$\displaystyle \sum_{n_2: (n_2,d_1d_2)=1} F(n_2) = \sum_{b_2|d_2} \mu(b_2) \sum_{n_2: (n_2,d_1)=1} F(b_2 n_2)$

for any function ${F}$, and similarly for ${n_3}$, so by the triangle inequality we may bound the previous expression by

$\displaystyle \sum_{d = d_1 d_2} \sum_{b_2|d_2} \sum_{b_3|d_2} F( d_1, d_2, b_2, b_3 ) \ \ \ \ \ (64)$

where

$\displaystyle F(d_1,d_2,b_2,b_3) := \sum_{1 \leq h' \leq H/d_2: (h',d_1)=1}$

$\displaystyle |\sum_{m: (m,d_1d_2)=1} \alpha(m) \sum_{n_2: (n_2,d_1)=1} \sum_{n_3: (n_3,d_1)=1} \psi_2(b_2n_2) \psi_3(b_3n_3)$

$\displaystyle e_{d_1}( \frac{ah'}{mb_2b_3 n_2 n_3} )|$
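The Möbius-inversion identity invoked here can likewise be checked by brute force for a finitely supported ${F}$; the moduli below are arbitrary coprime squarefree toy choices:

```python
import math

def mobius(n):
    """Mobius function for small n, by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

# F is an arbitrary function supported on [1, 200].
F = lambda n: n % 7 if n <= 200 else 0
d1, d2 = 5, 6                   # coprime squarefree moduli

lhs = sum(F(n) for n in range(1, 201) if math.gcd(n, d1 * d2) == 1)
rhs = sum(mobius(b) * sum(F(b * n) for n in range(1, 201)
                          if math.gcd(n, d1) == 1)
          for b in range(1, d2 + 1) if d2 % b == 0)
assert lhs == rhs
```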

We may discard those values of ${d_2}$ for which ${H' := H/d_2}$ is less than one, as the summation is vacuous in that case. We then apply Proposition 12 with ${d,N_2,N_3,H}$ replaced by ${d_1,N_2/b_2,N_3/b_3,H'}$ respectively (but with ${M}$ unchanged), with ${\alpha}$ replaced by its restriction to values coprime to ${d_1 d_2}$, with ${B}$ set equal to ${b_2 b_3}$, with ${x^\delta}$ replaced by ${d_2 x^\delta}$, and with ${\psi_2,\psi_3}$ replaced by ${\psi_2(b_2\cdot)}$ and ${\psi_3(b_3\cdot)}$. One can check that all the hypotheses of Proposition 12 are obeyed (with (60) coming from (15), (61) coming from (16), and (62), (63) coming from (17)), so we may bound (64) by

$\displaystyle \lessapprox x^{-\epsilon} M N_2 N_3 \sum_{d = d_1 d_2} \sum_{b_2|d_2} \sum_{b_3|d_2} 1$

which by the divisor bound is ${\lessapprox x^{-\epsilon} M N_2 N_3}$, which is acceptable.

It remains to prove Proposition 12. Note from (58), (59) one has

$\displaystyle d \gg x^{-\epsilon} N_2. \ \ \ \ \ (65)$

Expanding out the ${\psi_2 \ast (\alpha \ast \psi_3)}$ convolution, our task is to show that

$\displaystyle \sum_{1 \leq h \le H: (h,d)=1} |\sum_{n_2: (n_2,d)=1} \sum_{n_3: (n_3,d)=1} \psi_2(n_2) (\alpha \ast \psi_3)(n_3) e_d( \frac{ah}{n_2n_3} )| \ \ \ \ \ (66)$

$\displaystyle \ll x^{-\epsilon} B MN_2 N_3.$

The next step is Weyl differencing. We will need a step size ${r \geq 1}$ which we will optimise later. We set

$\displaystyle K := \lfloor x^{-\epsilon} N_2 r^{-1} H^{-1}\rfloor; \ \ \ \ \ (67)$

we will make the hypothesis that

$\displaystyle K \geq 1 \ \ \ \ \ (68)$

and save this condition to be verified later.

By shifting ${n_2}$ by ${khr}$ for ${1 \leq k \leq K}$ and then averaging, we may write the left-hand side of (66) as

$\displaystyle \sum_{1 \leq h \le H: (h,d)=1} |\frac{1}{K} \sum_{1 \leq k \leq K} \sum_{n_2: (n_2,d)=1} \sum_{n_3: (n_3,d)=1}$

$\displaystyle \psi_2(n_2+hkr) (\alpha \ast \psi_3)(n_3) e_d( \frac{ah}{(n_2+hkr)n_3} )|.$

By the triangle inequality, it thus suffices to show that

$\displaystyle \sum_{1 \leq h \leq H: (h,d)=1} \sum_{n_2: (n_2,d)=1} |\sum_{1 \leq k \leq K} \psi_2(n_2+hkr) \ \ \ \ \ (69)$

$\displaystyle \sum_{n_3: (n_3,d)=1} (\alpha \ast \psi_3)(n_3) e_d( \frac{ah}{(n_2+hkr)n_3} )| \ll x^{-\epsilon} B K M N_2 N_3.$

Next, we combine the ${h}$ and ${n_2}$ summations into a single summation over ${{\bf Z}/d{\bf Z}}$. We first use a Taylor expansion and (67) to write

$\displaystyle \psi_2(n_2+hkr) = \sum_{j=0}^J \frac{1}{j!} (h/H)^j N_2^{j} \psi_2^{(j)}(n_2) (Hkr/N_2)^j + O( x^{-J\epsilon+o(1)})$

for any fixed ${J}$. If ${J}$ is large enough, then the error term will be acceptable, so it suffices to establish (69) with ${\psi_2(n_2+hkr)}$ replaced by ${(h/H)^j N_2^j \psi_2^{(j)}(n_2) (Hkr/N_2)^j}$ for any fixed ${j \geq 0}$. We can rewrite

$\displaystyle e_d( \frac{ah}{(n_2+hkr)n_3} ) = e_d( \frac{a}{(l+kr) n_3} )$

where ${l \in {\bf Z}/d{\bf Z}}$ is such that ${(l+kr,d)=1}$ and

$\displaystyle l = \frac{n_2}{h}\ (d).$
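This substitution rests on the congruence ${h(l+kr) = n_2 + hkr\ (d)}$; it can be verified numerically with a small prime modulus (all the specific values below are arbitrary toy choices):

```python
# Verify a*h / ((n2 + h*k*r)*n3) == a / ((l + k*r)*n3)  (mod d)
# with l = n2 / h (mod d), using Python's modular inverse pow(x, -1, d).
d = 101                        # prime, so all nonzero residues are invertible
a, h, n2, n3, k, r = 7, 13, 45, 22, 3, 4
l = n2 * pow(h, -1, d) % d
lhs = a * h * pow((n2 + h * k * r) * n3, -1, d) % d
rhs = a * pow((l + k * r) * n3, -1, d) % d
assert lhs == rhs
```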

Thus we can estimate the left-hand side of (69) by

$\displaystyle \sum_{l \in {\bf Z}/d{\bf Z}} \nu(l) |\sum_{1 \leq k \leq K: (l+kr,d)=1} (Hkr/N_2)^j \ \ \ \ \ (70)$

$\displaystyle \sum_{n_3: (n_3,d)=1} (\alpha \ast \psi_3)(n_3) e_d( \frac{a}{(l+kr) n_3})|$

where

$\displaystyle \nu(l) := \sum_{1 \leq h \leq H: (h,d)=1} \sum_{n_2} 1_{l = \frac{n_2}{h}\ (d)} N_2^j |\psi_2^{(j)}(n_2)|.$

Here we have bounded ${(h/H)^j}$ by ${O(1)}$.
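The mechanism behind the Taylor-expansion step, namely that shifting the argument by a quantity that is ${x^{-\epsilon}}$ times smaller than the scale costs only an ${x^{-J\epsilon}}$ error after ${J}$ terms, can be illustrated with a function whose derivatives are known exactly (a toy sketch, with the exponential standing in for the coefficient sequence ${\psi_2}$):

```python
import math

# All derivatives of exp at n equal exp(n), so the J-term Taylor expansion
# of f(n + s) has remainder O(s^(J+1)) for a small shift s, mirroring the
# x^(-J*eps) gain from the shift h*k*r = O(x^(-eps) * N2).
f = math.exp
n, s, J = 0.3, 0.01, 6
taylor = sum(s**j / math.factorial(j) * f(n) for j in range(J + 1))
assert abs(f(n + s) - taylor) < s**(J + 1) * math.e
```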

We will eliminate the ${\nu}$ expression via Cauchy-Schwarz. Observe from the smoothness of ${\psi_2}$ that

$\displaystyle \nu(l) \ll x^{o(1)} |\{ (h,n_2): 1 \leq h \leq H; 1 \ll n_2 \ll N_2; (h,d)=1; l = \frac{n_2}{h}\ (d) \}|$

and thus

$\displaystyle \sum_l \nu(l)^2 \ll x^{o(1)} |\{ (h,h',n_2,n'_2): 1 \leq h,h' \leq H; 1\ll n_2,n'_2 \ll N_2;$

$\displaystyle (h,d)=(h',d) = 1; \frac{n_2}{h} = \frac{n'_2}{h'}\ (d) \}|.$

Note that ${\frac{n_2}{h} = \frac{n'_2}{h'}\ (d)}$ implies ${n_2 h' = n'_2 h\ (d)}$. But from (59) we have ${1 \leq n_2 h', n'_2 h \ll x^\epsilon d}$, so in fact we have ${n_2 h' = n'_2 h + k d}$ for some ${k = O(x^\epsilon)}$. Thus

$\displaystyle \sum_l \nu(l)^2 \ll x^{\epsilon+o(1)} |\{ (h,h',n_2,n'_2): 1 \leq h,h' \leq H; 1\ll n_2,n'_2 \ll N_2;$

$\displaystyle n_2 h' = n'_2 h + k d \hbox{ for some } k = O(x^\epsilon)\}|.$

From the divisor bound, we see that for each fixed ${n_2, h'}$ there are ${O(x^{\epsilon+o(1)})}$ choices for ${n'_2,h}$, thus

$\displaystyle \sum_l \nu(l)^2 \lessapprox x^{\epsilon} N_2 H.$

From this, (70), and Cauchy-Schwarz, we see that to prove (69) it will suffice to show that

$\displaystyle \sum_{l \in {\bf Z}/d{\bf Z}} |\sum_{1 \leq k \leq K: (l+kr,d)=1} (Hkr/N_2)^j \ \ \ \ \ (71)$

$\displaystyle \sum_{n_3: (n_3,d)=1} (\alpha \ast \psi_3)(n_3) e_d( \frac{a}{(l+kr) n_3})|^2$

$\displaystyle \ll x^{-3\epsilon} B^{2} K^2 M^2 N_2 N_3^2 H^{-1}.$

We square out (71) as

$\displaystyle \sum_{1 \leq k,k' \leq K}\sum_{l \in {\bf Z}/d{\bf Z}: (l+kr,d)=(l+k'r,d)=1} (Hkr/N_2)^j (Hk'r/N_2)^j$

$\displaystyle \sum_{m,m': (mm',d)=1} \alpha(m) \overline{\alpha(m')}$

$\displaystyle \sum_{n_3,n'_3: (n_3,d)=(n'_3,d)=1} \psi_3(n_3) \overline{\psi_3}(n'_3) e_d( \frac{a}{(l+kr)mn_3} - \frac{a}{(l+k'r)m'n'_3} ).$

If we shift ${l}$ by ${kr}$, then relabel ${k'-k}$ by ${k}$, and use the fact that ${Hkr/N_2, Hk'r/N_2 = O(1)}$, we can reduce this to

$\displaystyle \sum_{|k| \leq K}$

$\displaystyle |\sum_{l \in {\bf Z}/d{\bf Z}: (l,d)=(l+kr,d)=1} \sum_{n_3,n'_3: (n_3,d)=(n'_3,d)=1} \sum_{m,m': (mm',d)=1} \alpha(m) \overline{\alpha(m')}$

$\displaystyle \psi_3(n_3) \overline{\psi_3}(n'_3) e_d( \frac{a}{lmn_3} - \frac{a}{(l+kr)m' n'_3} )|$

$\displaystyle \ll x^{-3\epsilon} M^2 B^{2} K N_2 N_3^2 H^{-1}.$

Next we perform another completion of sums, this time in the ${n_3,n'_3}$ variables, to bound

$\displaystyle |\sum_{l \in {\bf Z}/d{\bf Z}: (l,d)=(l+kr,d)=1} \sum_{n_3,n'_3: (n_3,d)=(n'_3,d)=1} \sum_{m,m': (mm',d)=1} \alpha(m) \overline{\alpha(m')}$

$\displaystyle \psi_3(n_3) \overline{\psi_3}(n'_3) e_d( \frac{a}{lmn_3} - \frac{a}{(l+kr)m'n'_3} )|$

by

$\displaystyle \lessapprox \sum_{|m|, |m'| \ll M: (mm',d)=1} \sum_{|t|, |t'| \leq M'} (\frac{N_3}{d})^2 | U(k; t,t'; m,m'; d)|+ x^{-A}$

for any fixed ${A>0}$, where

$\displaystyle M' := x^{\epsilon} \frac{d}{N_3} \ \ \ \ \ (72)$

(the prime is there to distinguish this quantity from ${M}$) and

$\displaystyle U(k;t,t';m,m';d) := \sum_{l \in {\bf Z}/d{\bf Z}: (l,d)=(l+kr,d)=1} \sum_{n_3,n'_3 \in ({\bf Z}/d{\bf Z})^\times}$

$\displaystyle e_d( \frac{a}{lmn_3} - \frac{a}{(l+kr)m'n'_3} + tn_3 - t' n'_3).$

Making the change of variables ${u := \frac{a}{mn_3}\ (d)}$ and ${u' := \frac{a}{m'n'_3}\ (d)}$, we see that

$\displaystyle U(k;t,t';m,m';d) = T( kr; at/m, at'/m'; d)$

where

$\displaystyle T(k; m,m'; q) := \sum_{l \in {\bf Z}/q{\bf Z}: (l,q)=(l+k,q)=1} \sum_{u \in ({\bf Z}/q{\bf Z})^\times} \sum_{u' \in ({\bf Z}/q{\bf Z})^\times}$

$\displaystyle e_q( \frac{u}{l} - \frac{u'}{l+k} + \frac{m}{u} - \frac{m'}{u'} ).$
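The change of variables above can be verified by brute force for a small prime modulus; all the specific values in the sketch below are arbitrary toy choices:

```python
import cmath

# Check U(k; t, t'; m, m'; d) == T(k*r; a*t/m, a*t'/m'; d) by direct
# summation over Z/dZ, with e_d(x) = exp(2*pi*i*x/d).
d, r, a = 13, 2, 5
e = lambda x: cmath.exp(2j * cmath.pi * (x % d) / d)
inv = lambda x: pow(x, -1, d)
units = list(range(1, d))

def U(k, t, tp, m, mp):
    return sum(e(a * inv(l * m * n3) - a * inv((l + k * r) * mp * n3p)
                 + t * n3 - tp * n3p)
               for l in range(1, d) if (l + k * r) % d != 0
               for n3 in units for n3p in units)

def T(k, m, mp):
    return sum(e(u * inv(l) - up * inv(l + k) + m * inv(u) - mp * inv(up))
               for l in range(1, d) if (l + k) % d != 0
               for u in units for up in units)

k, t, tp, m, mp = 3, 4, 7, 2, 11
lhs = U(k, t, tp, m, mp)
rhs = T(k * r, a * t * inv(m), a * tp * inv(mp))
assert abs(lhs - rhs) < 1e-6
```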

Applying the Bombieri-Birch bound (Theorem 4 from this previous post), and recalling that ${a \in ({\bf Z}/d{\bf Z})^\times}$, we reduce to showing that

$\displaystyle \sum_{|k| \leq K} \sum_{|m|, |m'| \leq M: (mm',d)=1} \sum_{|t|, |t'| \leq M'} \frac{(kr,t/m-t'/m',d)}{(kr,d)^{1/2}} (\frac{N_3}{d})^2 d^{3/2}$

$\displaystyle \lessapprox x^{-4\epsilon} B^{2} K M^2 N_2 N_3^2 H^{-1}.$

We may cross multiply and write

$\displaystyle (kr,t/m-t'/m',d) = (kr,tm'-t'm,d).$
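This cross-multiplication only uses the fact that multiplying the middle entry by the unit ${mm'\ (d)}$ does not affect gcds with ${d}$; a brute-force check with arbitrary small parameters:

```python
import math

# gcd(k*r, t/m - t'/m', d) is computed via the residue of t/m - t'/m'
# (mod d); multiplying by the unit m*m' (mod d) leaves gcds with d
# unchanged, giving gcd(k*r, t*m' - t'*m, d).
d, r = 60, 5
for k in range(1, 8):
    for (t, tp, m, mp) in [(3, 7, 11, 13), (4, 9, 7, 17), (5, 2, 19, 23)]:
        assert math.gcd(m * mp, d) == 1          # m, m' are units mod d
        frac = (t * pow(m, -1, d) - tp * pow(mp, -1, d)) % d
        cross = (t * mp - tp * m) % d
        lhs = math.gcd(k * r, math.gcd(frac, d))
        rhs = math.gcd(k * r, math.gcd(cross, d))
        assert lhs == rhs
```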

By the divisor bound, for each choice of ${t,m'}$ and ${tm'-t'm}$ there are ${\lessapprox 1}$ choices for ${t'}$ and ${m}$. Thus it suffices to show that

$\displaystyle MM' \sum_{|k| \leq K} \sum_{|b| \ll MM'} \frac{(kr,b,d)}{(kr,d)^{1/2}} (\frac{N_3}{d})^2 d^{3/2} \lessapprox x^{-4\epsilon} B^{2} K M^2 N_2 N_3^2 H^{-1}.$

We now choose ${r}$ to be a factor of ${d}$, thus

$\displaystyle d = qr$

for some ${q}$ coprime to ${r}$. We compute the sum on the left-hand side:

Lemma 13 If ${r|d}$, then we have

$\displaystyle \sum_{|k| \leq K} \sum_{|b| \ll MM'} \frac{(kr,b,d)}{(kr,d)^{1/2}}$

$\displaystyle \lessapprox ( r^{1/2} K + d^{1/2} + MM' K r^{-1/2} ).$

Proof: We first consider the contribution of the diagonal case ${b=0}$. This term may be estimated by

$\displaystyle \ll \sum_{|k| \leq K} (kr,d)^{1/2} = r^{1/2} \sum_{|k| \leq K} (k,q)^{1/2}.$

The ${k=0}$ term gives ${d^{1/2}}$, while the contribution of the non-zero ${k}$ is also acceptable by Lemma 5 from this previous post.

For the non-diagonal case ${b \neq 0}$, we see from Lemma 5 from this previous post that

$\displaystyle \sum_{|b| \ll M M': b \neq 0} (kr,b,d) \lessapprox MM';$

since ${(kr,d) \geq r}$, we obtain a bound of ${\lessapprox MM' K r^{-1/2}}$ from this case as required. $\Box$
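The diagonal computation rests on the elementary identity ${(kr,qr) = r(k,q)}$; a quick exhaustive check on small parameters:

```python
import math

# gcd(k*r, q*r) = r * gcd(k, q), used with d = q*r and (q, r) = 1.
for q in [7, 9, 11]:
    for r in [4, 5, 8]:
        if math.gcd(q, r) != 1:
            continue
        for k in range(1, 50):
            assert math.gcd(k * r, q * r) == r * math.gcd(k, q)
```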

From this lemma, we see that we are done if we can find ${r}$ obeying

$\displaystyle MM' (r^{1/2} K + d^{1/2} + MM' K r^{-1/2} ) (\frac{N_3}{d})^2 d^{3/2} \ll x^{-5\epsilon} B^{2} K M^2 N_2 N_3^2 H^{-1} \ \ \ \ \ (73)$

as well as the previously recorded condition (68). We can split the condition (73) into three subconditions:

$\displaystyle M' r^{1/2} d^{-1/2} \ll x^{-5\epsilon} B^{2} M N_2 H^{-1}$

$\displaystyle M' K^{-1} \ll x^{-5\epsilon} B^{2} M N_2 H^{-1}$

$\displaystyle (M')^2 r^{-1/2} d^{-1/2} \ll x^{-5\epsilon} B^{2} N_2 H^{-1}.$

Substituting the definitions (67), (72) of ${K, M'}$, we can rewrite all of these conditions as lower and upper bounds on ${r}$. Indeed, (68) follows from (say)

$\displaystyle r \ll x^{-2\epsilon} N_2 H^{-1} \ \ \ \ \ (74)$

while the other three conditions rearrange to

$\displaystyle r \ll x^{-12\epsilon} B^{4} M^2 N_2^2 N_3^2 H^{-2} d^{-1} \ \ \ \ \ (75)$

$\displaystyle r \ll x^{-7\epsilon} B^{2} M N_2^2 N_3 H^{-2} d^{-1} \ \ \ \ \ (76)$

and

$\displaystyle r \gg x^{14\epsilon} B^{-4} N_2^{-2} N_3^{-4} H^2 d^{3}.$

We can replace the two constraints (75), (76) by the single stronger constraint

$\displaystyle r \ll x^{-12\epsilon} B^{2} M N_2^2 N_3 H^{-2} d^{-1}. \ \ \ \ \ (77)$
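That (77) really is stronger than both (75) and (76) (given ${B, M, N_3 \geq 1}$ and ${x \geq 1}$) comes down to the ratios ${B^{-2} M^{-1} N_3^{-1} \leq 1}$ and ${x^{-5\epsilon} \leq 1}$; a random-sampling sketch, with variable names mirroring the displayed quantities:

```python
import random

# Compare the right-hand sides of (75), (76), (77) at random values.
random.seed(1)
for _ in range(1000):
    x = random.uniform(1, 100)
    eps = random.uniform(0, 0.1)
    B, M, N2, N3, H, d = (random.uniform(1, 50) for _ in range(6))
    r75 = x**(-12 * eps) * B**4 * M**2 * N2**2 * N3**2 * H**-2 / d
    r76 = x**(-7 * eps) * B**2 * M * N2**2 * N3 * H**-2 / d
    r77 = x**(-12 * eps) * B**2 * M * N2**2 * N3 * H**-2 / d
    assert r77 <= r75 + 1e-12 and r77 <= r76 + 1e-12
```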

We combine these constraints as

$\displaystyle R_1 \ll r \ll R_2, R_3$

where

$\displaystyle R_1 := x^{14\epsilon} B^{-4} N_2^{-2} N_3^{-4} H^2 d^{3}$

$\displaystyle R_2 := x^{-2\epsilon} N_2 H^{-1}$

$\displaystyle R_3 := x^{-12\epsilon} B^{2} M N_2^2 N_3 H^{-2} d^{-1}.$

From (60), (61) we have

$\displaystyle R_2,R_3 \gtrapprox x^{\delta+c} R_1;$

and from (62), (63) we have ${R_2,R_3 \gg 1}$. Since ${d}$ is ${x^\delta}$-densely divisible, we will be done as soon as we verify that

$\displaystyle R_1 \ll d,$

since ${d}$ will then have a factor in ${[x^{-\delta} R, R]}$ where ${R := \max(\min(R_2,R_3,d),1)}$, which works if ${R_1 \ll x^{-\delta} d}$ (and if ${R_1 \gg x^{-\delta} d}$ we can just take ${d}$ as the factor).
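The role of dense divisibility here, guaranteeing a factor of ${d}$ in any prescribed window ${[y^{-1} R, R]}$, can be illustrated with a direct implementation of Definition 2 (a sketch that only checks integer values of ${R}$, which suffices for illustration):

```python
# d is y-densely divisible if every interval [R/y, R] with 1 <= R <= d
# contains a divisor of d (Definition 2); checked here over integer R.
def is_densely_divisible(d, y):
    divisors = [t for t in range(1, d + 1) if d % t == 0]
    return all(any(R / y <= t <= R for t in divisors)
               for R in range(1, d + 1))

# Smooth numbers qualify: 720 = 2^4 * 3^2 * 5 is 5-densely divisible,
# while 202 = 2 * 101, with its large prime factor, is not.
assert is_densely_divisible(720, 5)
assert not is_densely_divisible(202, 5)
```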

It remains to establish ${R_1 \ll d}$. But this bound can be rewritten as

$\displaystyle x^{21\epsilon} B^{-6} d^3 H^3 \ll N_2^3 N_3^6$

and the claim follows from (61), (58).