Just a brief post to record some notable papers in my fields of interest that appeared on the arXiv recently.

  • “A sharp square function estimate for the cone in {\bf R}^3”, by Larry Guth, Hong Wang, and Ruixiang Zhang.  This paper establishes an optimal (up to epsilon losses) square function estimate for the three-dimensional light cone that was essentially conjectured by Mockenhaupt, Seeger, and Sogge, which has a number of other consequences including Sogge’s local smoothing conjecture for the wave equation in two spatial dimensions, which in turn implies the (already known) Bochner-Riesz, restriction, and Kakeya conjectures in two dimensions.   Interestingly, modern techniques such as polynomial partitioning and decoupling estimates are not used in this argument; instead, the authors mostly rely on an induction on scales argument and Kakeya type estimates.  Many previous authors (including myself) were able to get weaker estimates of this type by an induction on scales method, but there were always significant inefficiencies in doing so; in particular knowing the sharp square function estimate at smaller scales did not imply the sharp square function estimate at the given larger scale.  The authors here get around this issue by finding an even stronger estimate that implies the square function estimate, but behaves significantly better with respect to induction on scales.
  • “On the Chowla and twin primes conjectures over {\mathbb F}_q[T]”, by Will Sawin and Mark Shusterman.  This paper resolves a number of well known open conjectures in analytic number theory, such as the Chowla conjecture and the twin prime conjecture (in the strong form conjectured by Hardy and Littlewood), in the case of function fields where the order of the field is a prime power q=p^j which is fixed (in contrast to a number of existing results in the “large q” limit) but has a large exponent j.  The techniques here are orthogonal to those used in recent progress towards the Chowla conjecture over the integers (e.g., in this previous paper of mine); the starting point is an algebraic observation that in certain function fields, the Mobius function behaves like a quadratic Dirichlet character along certain arithmetic progressions.  In principle, this reduces problems such as Chowla’s conjecture to problems about estimating sums of Dirichlet characters, for which more is known; but the task is still far from trivial.
  • “Bounds for sets with no polynomial progressions”, by Sarah Peluse.  This paper can be viewed as part of a larger project to obtain quantitative density Ramsey theorems of Szemeredi type.  For instance, Gowers famously established a relatively good quantitative bound for Szemeredi’s theorem that all dense subsets of integers contain arbitrarily long arithmetic progressions a, a+r, \dots, a+(k-1)r.  The corresponding question for polynomial progressions a+P_1(r), \dots, a+P_k(r) is considered more difficult for a number of reasons.  One of them is that dilation invariance is lost; a dilation of an arithmetic progression is again an arithmetic progression, but a dilation of a polynomial progression will in general not be a polynomial progression with the same polynomials P_1,\dots,P_k.  Another issue is that the ranges of the two parameters a,r are now at different scales.  Peluse gets around these difficulties in the case when all the polynomials P_1,\dots,P_k have distinct degrees, which is in some sense the opposite case to that considered by Gowers (in particular, thanks to a degree lowering argument that is available in this case, she avoids the need to obtain quantitative inverse theorems for high order Gowers norms; such theorems were recently obtained in the integer setting by Manners, but with bounds that are probably not strong enough to recover the bounds in Peluse’s results).  To resolve the first difficulty one has to make all the estimates rather uniform in the coefficients of the polynomials P_j, so that one can still run a density increment argument efficiently.  To resolve the second difficulty one needs to find a quantitative concatenation theorem for Gowers uniformity norms.  Many of these ideas were developed in previous papers of Peluse and Peluse-Prendiville in simpler settings.
  • “On blow up for the energy super critical defocusing non linear Schrödinger equations”, by Frank Merle, Pierre Raphael, Igor Rodnianski, and Jeremie Szeftel.  This paper (when combined with two companion papers) resolves a long-standing problem as to whether finite time blowup occurs for the defocusing supercritical nonlinear Schrödinger equation (at least in certain dimensions and nonlinearities).  I had a previous paper establishing a result like this if one “cheated” by replacing the nonlinear Schrodinger equation by a system of such equations, but remarkably they are able to tackle the original equation itself without any such cheating.  Given the very analogous situation with Navier-Stokes, where again one can create finite time blowup by “cheating” and modifying the equation, it does raise hope that finite time blowup for the incompressible Navier-Stokes and Euler equations can be established…  In fact the connection may not just be at the level of analogy; a surprising key ingredient in the proofs here is the observation that a certain blowup ansatz for the nonlinear Schrodinger equation is governed by solutions to the (compressible) Euler equation, and finite time blowup examples for the latter can be used to construct finite time blowup examples for the former.

Kaisa Matomaki, Maksym Radziwill, and I have uploaded to the arXiv our paper “Correlations of the von Mangoldt and higher divisor functions I. Long shift ranges“, submitted to Proceedings of the London Mathematical Society. This paper is concerned with the estimation of correlations such as

\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+h) \ \ \ \ \ (1)

for medium-sized {h} and large {X}, where {\Lambda} is the von Mangoldt function; we also consider variants of this sum in which one of the von Mangoldt functions is replaced with a (higher order) divisor function, but for sake of discussion let us focus just on the sum (1). Understanding this sum is very closely related to the problem of finding pairs of primes that differ by {h}; for instance, if one could establish a lower bound

\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+2) \gg X

then this would easily imply the twin prime conjecture.

The (first) Hardy-Littlewood conjecture asserts an asymptotic

\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+h) = {\mathfrak S}(h) X + o(X) \ \ \ \ \ (2)

as {X \rightarrow \infty} for any fixed positive {h}, where the singular series {{\mathfrak S}(h)} is an arithmetic factor arising from the irregularity of distribution of {\Lambda} at small moduli, defined explicitly by

\displaystyle {\mathfrak S}(h) := 2 \Pi_2 \prod_{p|h; p>2} \frac{p-2}{p-1}

when {h} is even, and {{\mathfrak S}(h)=0} when {h} is odd, where

\displaystyle \Pi_2 := \prod_{p>2} (1-\frac{1}{(p-1)^2}) = 0.66016\dots

is (half of) the twin prime constant. See for instance this previous blog post for a heuristic explanation of this conjecture. From the previous discussion we see that (2) for {h=2} would imply the twin prime conjecture. Sieve theoretic methods are only able to provide an upper bound of the form { \sum_{n \leq X} \Lambda(n) \Lambda(n+h) \ll {\mathfrak S}(h) X}.
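
As a quick numerical illustration of (2) for {h=2} (this sketch is mine and plays no role in the paper; it uses Python with sympy, and of course no finite computation can verify an asymptotic):

```python
# Compare sum_{n <= X} Lambda(n) Lambda(n+2) with the Hardy-Littlewood
# prediction 2 * Pi_2 * X at a modest height X.  Illustrative only.
import math
from sympy import primerange

X = 10**5

# twin prime constant Pi_2 = prod_{p > 2} (1 - 1/(p-1)^2), truncated at 10^6
Pi2 = 1.0
for p in primerange(3, 10**6):
    Pi2 *= 1 - 1 / (p - 1) ** 2

# von Mangoldt function on [1, X+2]: Lambda(p^j) = log p, and 0 otherwise
Lam = [0.0] * (X + 3)
for p in primerange(2, X + 3):
    pk = p
    while pk <= X + 2:
        Lam[pk] = math.log(p)
        pk *= p

lhs = sum(Lam[n] * Lam[n + 2] for n in range(1, X + 1))
print(lhs, 2 * Pi2 * X)  # the two values should already be comparable at this height
```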

Needless to say, apart from the trivial case of odd {h}, there are no values of {h} for which the Hardy-Littlewood conjecture is known. However there are some results that say that this conjecture holds “on the average”: in particular, if {H} is a quantity depending on {X} that is somewhat large, there are results that show that (2) holds for most (i.e. for {1-o(1)}) of the {h} between {0} and {H}. Ideally one would like to get {H} as small as possible; in particular, one can view the full Hardy-Littlewood conjecture as the endpoint case when {H} is bounded.

The first results in this direction were by van der Corput and by Lavrik, who established such a result with {H = X} (with a subsequent refinement by Balog); Wolke lowered {H} to {X^{5/8+\varepsilon}}, and Mikawa lowered {H} further to {X^{1/3+\varepsilon}}. The main result of this paper is a further lowering of {H} to {X^{8/33+\varepsilon}}. In fact (as in the preceding works) we get a better error term than {o(X)}, namely an error of the shape {O_A( X \log^{-A} X)} for any {A}.

Our arguments initially proceed along standard lines. One can use the Hardy-Littlewood circle method to express the correlation in (2) as an integral involving exponential sums {S(\alpha) := \sum_{n \leq X} \Lambda(n) e(\alpha n)}. The contribution of the “major arcs” {\alpha} is known by a standard computation to recover the main term {{\mathfrak S}(h) X} plus acceptable errors, so it is a matter of controlling the “minor arcs”. After averaging in {h} and using the Plancherel identity, one is basically faced with establishing a bound of the form

\displaystyle \int_{\beta-1/H}^{\beta+1/H} |S(\alpha)|^2\ d\alpha \ll_A X \log^{-A} X

for any “minor arc” {\beta}. If {\beta} is somewhat close to a low height rational {a/q} (specifically, if it is within {X^{-1/6-\varepsilon}} of such a rational with {q = O(\log^{O(1)} X)}), then this type of estimate is roughly of comparable strength (by another application of Plancherel) to the best available prime number theorem in short intervals on the average, namely that the prime number theorem holds for most intervals of the form {[x, x + x^{1/6+\varepsilon}]}, and we can handle this case using standard mean value theorems for Dirichlet series. So we can restrict attention to the “strongly minor arc” case where {\beta} is far from such rationals.
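
For orientation (this aside is mine; the identity is standard and implicit in the above discussion): ignoring boundary effects in {n}, orthogonality gives

\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+h) = \int_0^1 |S(\alpha)|^2 e(-\alpha h)\ d\alpha,

and averaging in {h} over a range of length {H} resolves the frequency variable {\alpha} only at scale {1/H}, which is how integrals of {|S(\alpha)|^2} over windows of length {1/H}, such as the one above, arise.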

The next step (following some ideas we found in a paper of Zhan) is to rewrite this estimate not in terms of the exponential sums {S(\alpha) := \sum_{n \leq X} \Lambda(n) e(\alpha n)}, but rather in terms of the Dirichlet polynomial {F(s) := \sum_{n \sim X} \frac{\Lambda(n)}{n^s}}. After a certain amount of computation (including some oscillatory integral estimates arising from stationary phase), one is eventually reduced to the task of establishing an estimate of the form

\displaystyle \int_{t \sim \lambda X} (\int_{t-\lambda H}^{t+\lambda H} |F(\frac{1}{2}+it')|\ dt')^2\ dt \ll_A \lambda^2 H^2 X \log^{-A} X

for any {X^{-1/6-\varepsilon} \ll \lambda \ll \log^{-B} X} (with {B} sufficiently large depending on {A}).

The next step, which is again standard, is the use of the Heath-Brown identity (as discussed for instance in this previous blog post) to split up {\Lambda} into a number of components that have a Dirichlet convolution structure.  Because the exponent {8/33} we are shooting for is less than {1/4}, we end up with five types of components that arise, which we call “Type {d_1}”, “Type {d_2}”, “Type {d_3}”, “Type {d_4}”, and “Type II”.  The “Type II” sums are Dirichlet convolutions involving a factor supported on a range {[X^\varepsilon, X^{-\varepsilon} H]} and are quite easy to deal with; the “Type {d_j}” terms are Dirichlet convolutions that resemble (non-degenerate portions of) the {j^{th}} divisor function, formed from convolving together {j} portions of {1}.  The “Type {d_1}” and “Type {d_2}” terms can be estimated satisfactorily by standard moment estimates for Dirichlet polynomials; this already recovers the result of Mikawa (and our argument is in fact slightly more elementary in that no Kloosterman sum estimates are required).  It is the treatment of the “Type {d_3}” and “Type {d_4}” sums that requires some new analysis, with the Type {d_3} terms turning out to be the most delicate.  After using an existing moment estimate of Jutila for Dirichlet L-functions, matters reduce to obtaining a family of estimates, a typical one of which (relating to the more difficult Type {d_3} sums) is of the form

\displaystyle \int_{t - H}^{t+H} |M( \frac{1}{2} + it')|^2\ dt' \ll X^{\varepsilon^2} H \ \ \ \ \ (3)

for “typical” ordinates {t} of size {X}, where {M} is the Dirichlet polynomial {M(s) := \sum_{n \sim X^{1/3}} \frac{1}{n^s}} (a fragment of the Riemann zeta function). The precise definition of “typical” is a little technical (because of the complicated nature of Jutila’s estimate) and will not be detailed here. Such a claim would follow easily from the Lindelof hypothesis (which would imply that {M(1/2 + it) \ll X^{o(1)}}) but of course we would like to have an unconditional result.
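
(For reference, one standard formulation of the Heath-Brown identity invoked above, stated here for the reader's convenience rather than quoted from the paper: for any natural number {K}, any {z \geq 1}, and any {n \leq z^K}, one has

\displaystyle \Lambda(n) = \sum_{j=1}^{K} (-1)^{j-1} \binom{K}{j} \sum_{m_1 \dots m_j n_1 \dots n_j = n: m_1,\dots,m_j \leq z} \mu(m_1) \dots \mu(m_j) \log n_1,

so that choosing {z} to be a suitable small power of {X} expresses {\Lambda} as a linear combination of Dirichlet convolutions of truncated Möbius factors, constant factors, and a logarithmic factor; dyadic decomposition of these convolutions is what produces the Type {d_j} and Type II components mentioned earlier.)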

At this point, having exhausted all the Dirichlet polynomial estimates that are usefully available, we return to “physical space”. Using some further Fourier-analytic and oscillatory integral computations, we can estimate the left-hand side of (3) by an expression that is roughly of the shape

\displaystyle \frac{H}{X^{1/3}} \sum_{\ell \sim X^{1/3}/H} |\sum_{m \sim X^{1/3}} e( \frac{t}{2\pi} \log \frac{m+\ell}{m-\ell} )|.

The phase {\frac{t}{2\pi} \log \frac{m+\ell}{m-\ell}} can be Taylor expanded as the sum of {\frac{t \ell}{\pi m}} and a lower order term {\frac{t \ell^3}{3\pi m^3}}, plus negligible errors. If we could discard the lower order term then we would get quite a good bound using the exponential sum estimates of Robert and Sargos, which control averages of exponential sums with purely monomial phases, with the averaging allowing us to exploit the hypothesis that {t} is “typical”. Figuring out how to get rid of this lower order term caused some inefficiency in our arguments; the best we could do (after much experimentation) was to use Fourier analysis to shorten the sums, estimate a one-parameter average exponential sum with a binomial phase by a two-parameter average with a monomial phase, and then use the van der Corput {B} process followed by the estimates of Robert and Sargos. This rather complicated procedure works up to {H = X^{8/33+\varepsilon}}; it may be possible that some alternate way to proceed here could improve the exponent somewhat.
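
For the record (this expansion is classical): since {\log \frac{m+\ell}{m-\ell} = 2(\frac{\ell}{m} + \frac{\ell^3}{3m^3} + \frac{\ell^5}{5m^5} + \dots)} for {|\ell| < m}, one has

\displaystyle \frac{t}{2\pi} \log \frac{m+\ell}{m-\ell} = \frac{t \ell}{\pi m} + \frac{t \ell^3}{3 \pi m^3} + O( \frac{|t| \ell^5}{m^5} ),

and with {t} of size {X}, {m \sim X^{1/3}}, and {\ell \sim X^{1/3}/H} the quintic and higher terms are indeed negligible in the regime under consideration.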

In a sequel to this paper, we will use a somewhat different method to reduce {H} to a much smaller value of {\log^{O(1)} X}, but only if we replace the correlations {\sum_{n \leq X} \Lambda(n) \Lambda(n+h)} by either {\sum_{n \leq X} \Lambda(n) d_k(n+h)} or {\sum_{n \leq X} d_k(n) d_l(n+h)}, and also we now only save a {o(1)} in the error term rather than {O_A(\log^{-A} X)}.

The twin prime conjecture, still unsolved, asserts that there are infinitely many primes {p} such that {p+2} is also prime. A more precise form of this conjecture is a special case of the Hardy-Littlewood prime tuples conjecture, which asserts that

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (2\Pi_2+o(1)) x \ \ \ \ \ (1)

 

as {x \rightarrow \infty}, where {\Lambda} is the von Mangoldt function and {\Pi_2 = 0.6601\dots} is the twin prime constant

\displaystyle \prod_{p>2} (1 - \frac{1}{(p-1)^2}).

Because {\Lambda} is almost entirely supported on the primes, it is not difficult to see that (1) implies the twin prime conjecture.

One can give a heuristic justification of the asymptotic (1) (and hence the twin prime conjecture) via sieve theoretic methods. Recall that the von Mangoldt function can be decomposed as a Dirichlet convolution

\displaystyle \Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d}

where {\mu} is the Möbius function. Because of this, we can rewrite the left-hand side of (1) as

\displaystyle \sum_{d \leq x} \mu(d) \sum_{n \leq x: d|n} \log\frac{n}{d} \Lambda(n+2). \ \ \ \ \ (2)

 

To compute this double sum, it is thus natural to consider sums such as

\displaystyle \sum_{n \leq x: d|n} \log \frac{n}{d} \Lambda(n+2)

or (to simplify things by removing the logarithm)

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2).

The prime number theorem in arithmetic progressions suggests that one has an asymptotic of the form

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2) \approx \frac{g(d)}{d} x \ \ \ \ \ (3)

 

where {g} is the multiplicative function with {g(d)=0} for {d} even and

\displaystyle g(d) := \frac{d}{\phi(d)} = \prod_{p|d} (1-\frac{1}{p})^{-1}

for {d} odd. Summing by parts, one then expects

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2)\log \frac{n}{d}  \approx \frac{g(d)}{d} x \log \frac{x}{d}

and so we heuristically have

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) \approx x \sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log \frac{x}{d}.

The Dirichlet series

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s}

has an Euler product factorisation

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s} = \prod_p (1 - \frac{g(p)}{p^s})

for {\hbox{Re} s > 1}; comparing this with the Euler product factorisation

\displaystyle \zeta(s) = \prod_p (1 - \frac{1}{p^s})^{-1}

for the Riemann zeta function, and recalling that {\zeta} has a simple pole of residue {1} at {s=1}, we see that

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s} = \frac{1}{\zeta(s)} \prod_p \frac{1-g(p)/p^s}{1-1/p^s}

has a simple zero at {s=1} with first derivative

\displaystyle \prod_p \frac{1 - g(p)/p}{1-1/p} = 2 \Pi_2.

From this and standard multiplicative number theory manipulations, one can calculate the asymptotic

\displaystyle \sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log \frac{x}{d} = 2 \Pi_2 + o(1)

which concludes the heuristic justification of (1).
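
As a numerical aside (my own sanity check, using Python with sympy; the convergence here is slow, so only rough agreement should be expected at small {x}):

```python
# Observe sum_{d <= x} mu(d) g(d)/d * log(x/d) approaching 2 * Pi_2, where
# g(d) = d/phi(d) for odd d and g(d) = 0 for even d.  Illustrative only.
import math
from sympy import primerange, totient
from sympy.ntheory import mobius

x = 10**5
total = 0.0
for d in range(1, x + 1, 2):        # even d contribute nothing since g(d) = 0
    mu = mobius(d)
    if mu == 0:                     # g is only needed at squarefree d here
        continue
    total += mu * (d / totient(d)) / d * math.log(x / d)

Pi2 = 1.0
for p in primerange(3, 10**6):
    Pi2 *= 1 - 1 / (p - 1) ** 2
print(total, 2 * Pi2)
```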

What prevents us from making the above heuristic argument rigorous, and thus proving (1) and the twin prime conjecture? Note that the variable {d} in (2) can be as large as {x}. On the other hand, the prime number theorem in arithmetic progressions (3) is not expected to hold for {d} anywhere near that large (for instance, the left-hand side of (3) vanishes as soon as {d} exceeds {x}). The best unconditional result known of the type (3) is the Siegel-Walfisz theorem, which allows {d} to be as large as {\log^{O(1)} x}. Even the powerful generalised Riemann hypothesis (GRH) only lets one prove an estimate of the form (3) for {d} up to about {x^{1/2-o(1)}}.

However, because of the averaging effect of the summation in {d} in (2), we don’t need the asymptotic (3) to be true for all {d} in a particular range; having it true for almost all {d} in that range would suffice. Here the situation is much better; the celebrated Bombieri-Vinogradov theorem (sometimes known as “GRH on the average”) implies, roughly speaking, that the approximation (3) is valid for almost all {d \leq x^{1/2-\varepsilon}} for any fixed {\varepsilon>0}. While this is not enough to control (2) or (1), the Bombieri-Vinogradov theorem can at least be used to control variants of (1) such as

\displaystyle \sum_{n \leq x} (\sum_{d|n} \lambda_d) \Lambda(n+2)

for various sieve weights {\lambda_d} whose associated divisor function {\sum_{d|n} \lambda_d} is supposed to approximate the von Mangoldt function {\Lambda}, although that theorem only lets one do this when the weights {\lambda_d} are supported on the range {d \leq x^{1/2-\varepsilon}}. This is still enough to obtain some partial results towards (1); for instance, by selecting weights according to the Selberg sieve, one can use the Bombieri-Vinogradov theorem to establish the upper bound

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) \leq (4+o(1)) 2 \Pi_2 x, \ \ \ \ \ (4)

 

which is off from (1) by a factor of about {4}. See for instance this blog post for details.

It has been difficult to improve upon the Bombieri-Vinogradov theorem in its full generality, although there are various improvements to certain restricted versions of the Bombieri-Vinogradov theorem, for instance in the famous work of Zhang on bounded gaps between primes. Nevertheless, it is believed that the Elliott-Halberstam conjecture (EH) holds, which roughly speaking would mean that (3) now holds for almost all {d \leq x^{1-\varepsilon}} for any fixed {\varepsilon>0}. (Unfortunately, the {\varepsilon} factor cannot be removed, as investigated in a series of papers by Friedlander, Granville, and also Hildebrand and Maier.) This comes tantalisingly close to having enough distribution to control all of (1). Unfortunately, it still falls short. Using this conjecture in place of the Bombieri-Vinogradov theorem leads to various improvements to sieve theoretic bounds; for instance, the factor of {4+o(1)} in (4) can now be improved to {2+o(1)}.

In two papers from the 1970s (which can be found online here and here respectively, the latter starting on page 255 of the pdf), Bombieri developed what is now known as the Bombieri asymptotic sieve to clarify the situation more precisely. First, he showed that on the Elliott-Halberstam conjecture, while one still could not establish the asymptotic (1), one could prove the generalised asymptotic

\displaystyle \sum_{n \leq x} \Lambda_k(n) \Lambda(n+2) = (2\Pi_2+o(1)) k x \log^{k-1} x \ \ \ \ \ (5)

 

for all natural numbers {k \geq 2}, where the generalised von Mangoldt functions {\Lambda_k} are defined by the formula

\displaystyle \Lambda_k(n) := \sum_{d|n} \mu(d) \log^k \frac{n}{d}.

These functions behave like the von Mangoldt function, but are concentrated on {k}-almost primes (numbers with at most {k} prime factors) rather than primes. The right-hand side of (5) corresponds to what one would expect if one ran the same heuristics used to justify (1). Sadly, the {k=1} case of (5), which is just (1), is just barely excluded from Bombieri’s analysis.
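
As a quick concrete check of this support property (my own aside, using Python with sympy):

```python
# Lambda_k(n) = sum_{d|n} mu(d) * log(n/d)^k vanishes whenever n has more
# than k distinct prime factors; a brute-force check for k = 2.
import math
from sympy import divisors, primefactors
from sympy.ntheory import mobius

def Lambda_k(n, k):
    return sum(mobius(d) * math.log(n // d) ** k for d in divisors(n))

for n in [6, 12, 30, 210]:
    print(n, len(primefactors(n)), round(Lambda_k(n, 2), 6))
# Lambda_2 is nonzero at 6 and 12 (two distinct primes), but vanishes at
# 30 = 2*3*5 and 210 = 2*3*5*7 up to rounding error.
```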

More generally, on the assumption of EH, the Bombieri asymptotic sieve provides the asymptotic

\displaystyle \sum_{n \leq x} \Lambda_{(k_1,\dots,k_r)}(n) \Lambda(n+2) \ \ \ \ \ (6)

 

\displaystyle = (2\Pi_2+o(1)) \frac{\prod_{i=1}^r k_i!}{(k_1+\dots+k_r-1)!} x \log^{k_1+\dots+k_r-1} x

for any fixed {r \geq 1} and any tuple {(k_1,\dots,k_r)} of natural numbers other than {(1,\dots,1)}, where

\displaystyle \Lambda_{(k_1,\dots,k_r)} := \Lambda_{k_1} * \dots * \Lambda_{k_r}

is a further generalisation of the von Mangoldt function (now concentrated on {k_1+\dots+k_r}-almost primes). By combining these asymptotics with some elementary identities involving the {\Lambda_{(k_1,\dots,k_r)}}, together with the Weierstrass approximation theorem, Bombieri was able to control a wide family of sums including (1), except for one undetermined scalar {\delta_x \in [0,2]}. Namely, he was able to show (again on EH) that for any fixed {r \geq 1} and any continuous function {g_r} on the simplex {\Delta_r := \{ (t_1,\dots,t_r) \in {\bf R}^r: t_1+\dots+t_r = 1; 0 \leq t_1 \leq \dots \leq t_r\}} that had suitable vanishing at the boundary, the sum

\displaystyle \sum_{n \leq x: n=p_1 \dots p_r} g_r( \frac{\log p_1}{\log n}, \dots, \frac{\log p_r}{\log n} ) \Lambda(n+2)

was equal to

\displaystyle (\delta_x+o(1)) 2 \Pi_2 \int_{\Delta_r} g_r \frac{x}{\log x} \ \ \ \ \ (7)

 

when {r} was odd and

\displaystyle (2-\delta_x+o(1)) 2 \Pi_2 \int_{\Delta_r} g_r \frac{x}{\log x} \ \ \ \ \ (8)

 

when {r} was even, where the integral on {\Delta_r} is with respect to the measure {\frac{dt_1 \dots dt_{r-1}}{t_1 \dots t_r}} (this is Dirac measure in the case {r=1}). In particular, we have

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (\delta_x + o(1)) 2 \Pi_2 x

and the twin prime conjecture would be proved if one could show that {\delta_x} is bounded away from zero, while (1) is equivalent to the assertion that {\delta_x} is equal to {1+o(1)}. Unfortunately, no additional bound beyond the inequalities {0 \leq \delta_x \leq 2} provided by the Bombieri asymptotic sieve is known, even if one assumes all major conjectures in number theory other than the prime tuples conjecture and its variants (e.g. GRH, GEH, GUE, abc, Chowla, …).

To put it another way, the Bombieri asymptotic sieve is able (on EH) to compute asymptotics for sums

\displaystyle \sum_{n \leq x} f(n) \Lambda(n+2) \ \ \ \ \ (9)

 

without needing to know the unknown scalar {\delta_x}, when {f} is a function supported on almost primes of the form

\displaystyle f(p_1 \dots p_r) = g_r( \frac{\log p_1}{\log n}, \dots, \frac{\log p_r}{\log n} )

for {1 \leq r \leq r_*} and some fixed {r_*}, with {f} vanishing elsewhere and for some continuous (symmetric) functions {g_r: \Delta_r \rightarrow {\bf C}} obeying some vanishing at the boundary, so long as the parity condition

\displaystyle \sum_{r \hbox{ odd}} \int_{\Delta_r} g_r = \sum_{r \hbox{ even}} \int_{\Delta_r} g_r

is obeyed (informally: {f} gives the same weight to products of an odd number of primes as to products of an even number of primes, or to put it another way, {f} is asymptotically orthogonal to the Möbius function {\mu}). But when {f} violates the parity condition, the asymptotic involves the unknown {\delta_x}. This scalar {\delta_x} thus embodies the “parity problem” for the twin prime conjecture (discussed in these previous blog posts).
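
To make this concrete, here is a simple instance of the parity phenomenon (an illustration of mine rather than anything stated explicitly by Bombieri). Taking {r=1} and {g_1 \equiv 1} in (7) (where the integral degenerates to a point mass), and taking {r=2} in (8) with an admissible {g_2} (continuous, with suitable vanishing at the boundary of {\Delta_2}) normalised so that {\int_{\Delta_2} g_2 = 1}, one obtains on EH the two asymptotics

\displaystyle \sum_{p \leq x} \Lambda(p+2) = (\delta_x + o(1)) 2 \Pi_2 \frac{x}{\log x}

and

\displaystyle \sum_{n \leq x: n = p_1 p_2} g_2( \frac{\log p_1}{\log n}, \frac{\log p_2}{\log n} ) \Lambda(n+2) = (2 - \delta_x + o(1)) 2 \Pi_2 \frac{x}{\log x}.

Neither sum can be evaluated individually, but their sum is {(2+o(1)) 2 \Pi_2 \frac{x}{\log x}}: the combined weight gives equal mass to products of an odd and an even number of primes, so it obeys the parity condition and the unknown {\delta_x} cancels.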

Because the obstruction to the parity problem is only one-dimensional (on EH), one can replace any parity-violating weight (such as {\Lambda}) with any other parity-violating weight and obtain a logically equivalent estimate. For instance, to prove the twin prime conjecture on EH, it would suffice to show that

\displaystyle \sum_{p_1 p_2 p_3 \leq x: p_1,p_2,p_3 \geq x^\alpha} \Lambda(p_1 p_2 p_3 + 2) \gg \frac{x}{\log x}

for some fixed {\alpha>0}, or equivalently that there are {\gg \frac{x}{\log^2 x}} solutions to the equation {p - p_1 p_2 p_3 = 2} in primes with {p \leq x} and {p_1,p_2,p_3 \geq x^\alpha}. (In some cases, this sort of reduction can also be made using other sieves than the Bombieri asymptotic sieve, as was observed by Ng.) As another example, the Bombieri asymptotic sieve can be used to show that the asymptotic (1) is equivalent to the asymptotic

\displaystyle \sum_{n \leq x} \mu(n) 1_R(n) \Lambda(n+2) = o( \frac{x}{\log x})

where {R} is the set of numbers that are rough in the sense that they have no prime factors less than {x^\alpha} for some fixed {\alpha>0} (the function {\mu 1_R} clearly correlates with {\mu} and so must violate the parity condition). One can replace {1_R} with similar sieve weights (e.g. a Selberg sieve) that concentrate on almost primes if desired.

As it turns out, if one is willing to strengthen the assumption of the Elliott-Halberstam (EH) conjecture to the assumption of the generalised Elliott-Halberstam (GEH) conjecture (as formulated for instance in Claim 2.6 of the Polymath8b paper), one can also swap the {\Lambda(n+2)} factor in the above asymptotics with other parity-violating weights and obtain a logically equivalent estimate, as the Bombieri asymptotic sieve also applies to weights such as {\mu 1_R} under the assumption of GEH. For instance, on GEH one can use two such applications of the Bombieri asymptotic sieve to show that the twin prime conjecture would follow if one could show that there are {\gg \frac{x}{\log^2 x}} solutions to the equation

\displaystyle p_1 p_2 - p_3 p_4 = 2

in primes with {p_1,p_2,p_3,p_4 \geq x^\alpha} and {p_1 p_2 \leq x}, for some {\alpha > 0}. Similarly, on GEH the asymptotic (1) is equivalent to the asymptotic

\displaystyle \sum_{n \leq x} \mu(n) 1_R(n) \mu(n+2) 1_R(n+2) = o( \frac{x}{\log^2 x})

for some fixed {\alpha>0}, and similarly with {1_R} replaced by other sieves. This form of the quantitative twin prime conjecture is appealingly similar to the special case

\displaystyle \sum_{n \leq x} \mu(n) \mu(n+2) = o(x)

of the Chowla conjecture, for which there has been some recent progress (discussed for instance in these recent posts). Informally, the Bombieri asymptotic sieve lets us (on GEH) view the twin prime conjecture as a sort of Chowla conjecture restricted to almost primes. Unfortunately, the recent progress on the Chowla conjecture relies heavily on the multiplicativity of {\mu} at small primes, which is completely destroyed by inserting a weight such as {1_R}, so this does not yet yield a viable path towards the twin prime conjecture even assuming GEH. Still, the similarity is striking, and one can hope that further ways to attack the Chowla conjecture may emerge that could impact the twin prime conjecture. (Alternatively, if one assumes a sufficiently optimistic version of the GEH, one could perhaps relax the notion of “almost prime” to the extent that one could start usefully using multiplicativity at smallish primes, though this seems rather wishful at present, particularly since the most optimistic versions of GEH are known to be false.)

The Bombieri asymptotic sieve is already well explained in the original two papers of Bombieri; there is also a slightly different treatment of the sieve by Friedlander and Iwaniec, as well as a simplified version in the book of Friedlander and Iwaniec (in which the distribution hypothesis is strengthened in order to shorten the arguments). I’ve decided though to write up my own notes on the sieve below the fold; this is primarily for my own benefit, but may be useful to some readers also. I largely follow the treatment of Bombieri, with the one idiosyncratic twist of replacing the usual “elementary” Selberg sieve with the “analytic” Selberg sieve used in particular in many of the breakthrough works on small gaps between primes; I prefer working with the latter due to its Fourier-analytic flavour.

— 1. Controlling generalised von Mangoldt sums —

To prove (5), we shall first generalise it, by replacing the sequence {\Lambda(n+2)} by a more general sequence {a_n} obeying the following axioms:

  • (i) (Non-negativity) One has {a_n \geq 0} for all {n}.
  • (ii) (Crude size bound) One has {a_n \ll \tau(n)^{O(1)} \log^{O(1)} n} for all {n}, where {\tau} is the divisor function.
  • (iii) (Size) We have {\sum_{n \leq x} a_n = (C+o(1)) x} for some constant {C>0}.
  • (iv) (Elliott-Halberstam type conjecture) For any {\varepsilon,A>0}, one has

    \displaystyle \sum_{d \leq x^{1-\varepsilon}} |\sum_{n \leq x: d|n} a_n - C x \frac{g(d)}{d}| \ll_{\varepsilon,A} x \log^{-A} x

    where {g} is a multiplicative function with {g(p^j) = 1 + O(1/p)} for all primes {p} and {j \geq 1}.

These axioms are a little bit stronger than what is actually needed to make the Bombieri asymptotic sieve work, but we will not attempt to work with the weakest possible axioms here.

We introduce the function

\displaystyle G(s) := \prod_p \frac{1-g(p)/p^s}{1-1/p^s}

which is analytic for {\hbox{Re}(s) > 0}; in particular it can be evaluated at {s=1} to yield

\displaystyle G(1) = \prod_p \frac{1-g(p)/p}{1-1/p}.

There are two model examples of data {a_n, C, g} to keep in mind. The first, discussed in the introduction, is when {a_n =\Lambda(n+2)}; in this case {C = 1}, {g} is as in the introduction, and {G(1) = 2 \Pi_2} (one of course needs EH to justify axiom (iv) in this case). The other is when {a_n=1}, in which case {C=1} and {g(n)=1} for all {n}, so that {G(1)=1}. We will later take advantage of the second example to avoid doing some (routine, but messy) main term computations.
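
As a crude numerical illustration of axiom (iv) in the first model case (this sketch is mine; needless to say, no computation at small {x} can probe the conjectural range {d \leq x^{1-\varepsilon}}), one can compare {\sum_{n \leq x: d|n} \Lambda(n+2)} against {C x g(d)/d = x/\phi(d)} for a few small odd {d}:

```python
# Compare sum_{n <= x, d | n} Lambda(n+2) with x/phi(d) for small odd d,
# i.e. the prime number theorem in arithmetic progressions.  Illustrative only.
import math
from sympy import primerange, totient

x = 10**5
Lam = [0.0] * (x + 3)
for p in primerange(2, x + 3):
    pk = p
    while pk <= x + 2:
        Lam[pk] = math.log(p)
        pk *= p

for d in [3, 5, 15, 105]:
    s = sum(Lam[n + 2] for n in range(d, x + 1, d))
    print(d, round(s), round(x / totient(d)))
```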

The main result of this section is then

Theorem 1 Let {a_n, g, C, G} be as above. Let {\vec k = (k_1,\dots,k_r)} be a tuple of natural numbers (independent of {x}) that is not equal to {(1,\dots,1)}. Then one has the asymptotic

\displaystyle \sum_{n \leq x} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C x \log^{|\vec k|-1} x

as {x \rightarrow \infty}, where {|\vec k| := k_1 + \dots + k_r}.

Note that this recovers (5) (on EH) as a special case.

We now begin the proof of this theorem. Henceforth we allow implied constants in the {O()} or {\ll} notation to depend on {r, \vec k} and {g,G}.

It will be convenient to replace the range {n \leq x} by a shorter range by the following standard localisation trick. Let {B} be a large quantity depending on {r, \vec k} to be chosen later, and let {I} denote the interval {\{ n: x - x \log^{-B} x \leq n \leq x \}}. We will show the estimate

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C |I| \log^{|\vec k|-1} x \ \ \ \ \ (10)

 

from which the original claim follows by a routine summation argument. Observe from axiom (iv) and the triangle inequality that

\displaystyle \sum_{d \leq x^{1-\varepsilon}: \mu^2(d)=1} |\sum_{n \in I: d|n} a_n - C |I| \frac{g(d)}{d}| \ll_{\varepsilon,A} x \log^{-A} x

for any {\varepsilon,A > 0}.

Write {L} for the logarithm function {L(n) := \log n}, thus {\Lambda_k = \mu * L^k} for any {k}. Without loss of generality we may assume that {k_r > 1}; we then factor {\Lambda_{\vec k} = \mu_{\vec k} * L^{k_r}}, where

\displaystyle \mu_{\vec k} := \Lambda_{k_1} * \dots * \Lambda_{k_{r-1}} * \mu.

This function is just {\mu} when {r=1}. When {r>1} the function is more complicated, but we at least have the following crude bound:

Lemma 2 One has the pointwise bound {|\mu_{\vec k}| \leq L^{|\vec k|-k_r}}.

Proof: We induct on {r}. The case {r=1} is obvious, so suppose {r>1} and the claim has already been proven for {r-1}. Since {\mu_{\vec k} = \Lambda_{k_1} * \mu_{(k_2,\dots,k_r)}}, we see from induction hypothesis and the triangle inequality that

\displaystyle |\mu_{\vec k}| \leq \Lambda_{k_1} * L^{|\vec k| - k_r - k_1} \leq L^{|\vec k| - k_r - k_1} (\Lambda_{k_1} * 1).

Since {\Lambda_{k_1}*1 = L^{k_1}} by Möbius inversion, the claim follows. \Box

We can write

\displaystyle \Lambda_{\vec k}(n) = \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{n}{d}.

In the region {n \in I}, we have {\log^{k_r} \frac{n}{d} = \log^{k_r} \frac{x}{d} + O( \log^{-B+O(1)} x )}. Thus

\displaystyle \Lambda_{\vec k}(n) = \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} + O( \tau(n) \log^{-B+O(1)} x )

for {n \in I}. The contribution of the error term {O( \tau(n) \log^{-B+O(1)} x )} to (10) is easily seen to be negligible if {B} is large enough, so we may freely replace {\Lambda_{\vec k}(n)} with {\sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d}} with little difficulty.

If we insert this replacement directly into the left-hand side of (10) and rearrange, we get

\displaystyle \sum_{d \leq x} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \sum_{n \in I: d|n} a_n.

We can’t quite control this using axiom (iv) because the range of {d} is a bit too big, as explained in the introduction. So let us introduce a truncated function

\displaystyle \Lambda_{\vec k,\varepsilon}(n) := \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon( \frac{\log d}{\log x} ) \ \ \ \ \ (11)

 

where {\varepsilon>0} is a small quantity to be chosen later, and {\eta_\varepsilon: {\bf R} \rightarrow [0,1]} is a smooth function that equals {1} on {(-\infty,1-4\varepsilon)} and equals {0} on {(1-3\varepsilon,+\infty)}. Suppose one could establish the following two estimates for any fixed {\varepsilon>0}:

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (12)

 

and

\displaystyle \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) a_n = C Q_{\varepsilon,x} G(1) + o( |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (13)

 

where {Q_{\varepsilon,x}} is a quantity that depends on {\varepsilon, \eta_\varepsilon, \vec k, B, x} but not on {C, g,G}. Then on combining the two estimates we would have

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = C Q_{\varepsilon,x} G(1) + (O(\varepsilon) + o(1)) C |I| \log^{|\vec k|-1} x. \ \ \ \ \ (14)

 

One could in principle compute {Q_{\varepsilon,x}} explicitly from the proof of (13), but one can avoid doing so by the following comparison trick. In the special case {a_n=1}, standard multiplicative number theory (noting that the Dirichlet series {\sum_n \frac{\Lambda_{\vec k}(n)}{n^s}} has a pole of order {|\vec k|} at {s=1}, with top Laurent coefficient {\prod_{j=1}^r k_j!}) gives the asymptotic

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = (\frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} + o(1)) |I| \log^{|\vec k|-1} x

which when compared with (14) for {a_n=1} (recalling that {G(1)=C=1} in this case) gives the formula

\displaystyle Q_{\varepsilon,x} = (\frac{\prod_{j=1}^r k_j!}{(|\vec k|-1)!} + O(\varepsilon)) |I| \log^{|\vec k|-1} x.

Inserting this back into (14) and recalling that {\varepsilon>0} can be made arbitrarily small, we obtain (10).

As it turns out, the estimate (13) is easy to establish, but the estimate (12) is not, roughly speaking because the typical number {n} in {I} has too many divisors {d} in the range {[x^{1-4\varepsilon},x]}, each of which gives a contribution to the error term.  (In the book of Friedlander and Iwaniec, the estimate (12) is established anyway, but only after assuming a stronger version of (iv), roughly speaking in which {d} is allowed to be as large as {x \exp( -\log^{1/4} x)}.)  To resolve this issue, we will insert a preliminary sieve {\nu_\varepsilon} that will remove most of the potential divisors {d} in the range {[x^{1-4\varepsilon},x]} (leaving only about {O(1)} such divisors on the average for typical {n}), making the analogue of (12) easier to prove (at the cost of making the analogue of (13) more difficult). Namely, if one can find a function {\nu_\varepsilon: {\bf N} \rightarrow {\bf R}} for which one has the estimates

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = \sum_{n \in I} \Lambda_{\vec k}(n) \nu_\varepsilon(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ), \ \ \ \ \ (15)

 

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) \nu_\varepsilon(n) a_n

\displaystyle = \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (16)

 

and

\displaystyle \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n) a_n = C Q'_{\varepsilon,x} G(1) + o( |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (17)

 

for some quantity {Q'_{\varepsilon,x}} that depends on {\varepsilon, \eta_\varepsilon, \vec k, B, x} but not on {C, g, G}, then by repeating the previous arguments we will again be able to establish (10).

The key estimate is (16). As we shall see, when comparing {\Lambda_{\vec k}(n) \nu_\varepsilon(n)} with {\Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n)}, the weight {\nu_\varepsilon} will cost us a factor of {1/\varepsilon}, but the {\log^{k_r} \frac{x}{d}} term in the definitions of {\Lambda_{\vec k}} and {\Lambda_{\vec k,\varepsilon}} will recover a factor of {\varepsilon^{k_r}}, which will give the desired bound since we are assuming {k_r > 1}.

One has some flexibility in how to select the weight {\nu_\varepsilon}: basically any standard sieve that uses divisors of size at most {x^{2\varepsilon}} to localise (at least approximately) to numbers that are rough in the sense that they have no (or at least very few) factors less than {x^\varepsilon}, will do. We will use the analytic Selberg sieve choice

\displaystyle \nu_\varepsilon(n) := (\sum_{d|n} \mu(d) \psi( \frac{\log d}{\varepsilon \log x} ))^2 \ \ \ \ \ (18)

 

where {\psi: {\bf R} \rightarrow [0,1]} is a smooth function supported on {[-1,1]} that equals {1} on {[-1/2,1/2]}.
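
To get a feel for this weight, here is a small numerical sketch (mine; the particular bump function {\psi} below is one concrete choice among many and is not canonical):

```python
# The analytic Selberg sieve weight (18), with a concrete smooth bump psi
# equal to 1 on [-1/2,1/2] and vanishing outside [-1,1].  Illustrative only.
import math
from sympy import divisors
from sympy.ntheory import mobius

def psi(u):
    a = abs(u)
    if a <= 0.5:
        return 1.0
    if a >= 1.0:
        return 0.0
    t = (a - 0.5) / 0.5              # rescale (1/2, 1) to (0, 1)
    f0, f1 = math.exp(-1 / t), math.exp(-1 / (1 - t))
    return f1 / (f0 + f1)            # smooth step from 1 down to 0

def nu(n, x, eps):
    s = sum(mobius(d) * psi(math.log(d) / (eps * math.log(x)))
            for d in divisors(n))
    return s * s

x, eps = 10**6, 0.1
for n in [101 * 103, 991 * 997, 2 * 3 * 5 * 7 * 11]:
    print(n, round(nu(n, x, eps), 4))
# nu is ~1 on numbers free of small prime factors, and small otherwise.
```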

It remains to establish the bounds (15), (16), (17). To warm up and introduce the various methods needed, we begin with the standard bound

\displaystyle \sum_{n \in I} \nu_\varepsilon(n) a_n = \frac{C|I|}{\varepsilon \log x} ((\int_0^1 \psi'(u)^2\ du) G(1) + o(1)), \ \ \ \ \ (19)

 

where {\psi'} denotes the derivative of {\psi}. Note the loss of {1/\varepsilon} that had previously been pointed out. In the arguments that follow I will be a little brief with the details, as they are standard (see e.g. this previous post).

We now prove (19). The left-hand side can be expanded as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \sum_{n \in I: [d_1,d_2]|n} a_n

where {[d_1,d_2]} denotes the least common multiple of {d_1} and {d_2}. From the support of {\psi} we see that the summand is only non-vanishing when {[d_1,d_2] \leq x^{2\varepsilon}}. We now use axiom (iv) and split the left-hand side into a main term

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2])}{[d_1,d_2]} C |I|

and an error term that is at most

\displaystyle O_\varepsilon( \sum_{d \leq x^{2\varepsilon}} \tau(d)^{O(1)} | \sum_{n \in I: d|n} a_n - \frac{g(d)}{d} C |I|| ). \ \ \ \ \ (20)

 

From axiom (ii) and elementary multiplicative number theory, we have the bound

\displaystyle \sum_{d \leq x} \tau(d)^{O(1)} | \sum_{n \in I: d|n} a_n - \frac{g(d)}{d} C |I| | \ll C |I| \log^{O(1)} x

so from axiom (iv) and Cauchy-Schwarz we see that the error term (20) is acceptable. Thus it will suffice to establish the bound

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2])}{[d_1,d_2]}

\displaystyle = \frac{1}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1) + o(\frac{1}{\log x}). \ \ \ \ \ (21)

 

The summand here is almost, but not quite, multiplicative in {d_1,d_2}. To make it genuinely multiplicative, we perform a (shifted) Fourier expansion

\displaystyle \psi(u) = \int_{\bf R} e^{-(1+it)u} \Psi(t)\ dt \ \ \ \ \ (22)

 

for some rapidly decreasing function {\Psi} (essentially the Fourier transform of {e^u \psi(u)}). Thus

\displaystyle \psi( \frac{\log d}{\varepsilon \log x} ) = \int_{\bf R} \frac{1}{d^{\frac{1+it}{\varepsilon \log x}}} \Psi(t)\ dt,

and so the left-hand side of (21) can be rearranged using Fubini’s theorem as

\displaystyle \int_{\bf R} \int_{\bf R} E(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})\ \Psi(t_1) \Psi(t_2) dt_1 dt_2 \ \ \ \ \ (23)

 

where

\displaystyle E(s_1,s_2) := \sum_{d_1,d_2} \frac{\mu(d_1) \mu(d_2)}{d_1^{s_1}d_2^{s_2}} \frac{g([d_1,d_2])}{[d_1,d_2]}.

We can factorise {E(s_1,s_2)} as an Euler product:

\displaystyle E(s_1,s_2) = \prod_p (1 - \frac{g(p)}{p^{1+s_1}} - \frac{g(p)}{p^{1+s_2}} + \frac{g(p)}{p^{1+s_1+s_2}}).

Taking absolute values and using Mertens’ theorem leads to the crude bound

\displaystyle E(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x}) \ll_\varepsilon \log^{O(1)} x

which when combined with the rapid decrease of {\Psi}, allows us to restrict the region of integration in (23) to the square {\{ |t_1|, |t_2| \leq \sqrt{\log x} \}} (say) with negligible error. Next, we use the Euler product

\displaystyle \zeta(s) = \prod_p (1-\frac{1}{p^s})^{-1}

for {\hbox{Re} s > 1} to factorise

\displaystyle E(s_1,s_2) = \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} \prod_p E_p(s_1,s_2)

where

\displaystyle E_p(s_1,s_2) := \frac{(1 - \frac{g(p)}{p^{1+s_1}} - \frac{g(p)}{p^{1+s_2}} + \frac{g(p)}{p^{1+s_1+s_2}})(1 - \frac{1}{p^{1+s_1+s_2}})}{(1-\frac{1}{p^{1+s_1}})(1-\frac{1}{p^{1+s_2}})}.

For {s_1,s_2=o(1)} with nonnegative real part, one has

\displaystyle E_p(s_1,s_2) = 1 + O(1/p^2)

and so by the Weierstrass {M}-test, {\prod_p E_p(s_1,s_2)} is continuous at {s_1=s_2=0}. Since

\displaystyle \prod_p E_p(0,0) = G(1)

we thus have

\displaystyle \prod_p E_p(s_1,s_2) = G(1) + o(1)

Also, since {\zeta} has a pole of order {1} at {s=1} with residue {1}, we have

\displaystyle \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} = (1+o(1)) \frac{s_1 s_2}{s_1+s_2}

and thus

\displaystyle E(s_1,s_2) = (G(1)+o(1)) \frac{s_1s_2}{s_1+s_2}.

The quantity (23) can thus be written, up to errors of {o(\frac{1}{\log x})}, as

\displaystyle \frac{G(1)}{\varepsilon \log x} \int_{|t_1|, |t_2| \leq \sqrt{\log x}} \frac{(1+it_1)(1+it_2)}{1+it_1+1+it_2} \Psi(t_1) \Psi(t_2)\ dt_1 dt_2.

Using the rapid decrease of {\Psi}, we may remove the restriction on {t_1,t_2}, and it will now suffice to prove the identity

\displaystyle \int_{\bf R} \int_{\bf R} \frac{(1+it_1)(1+it_2)}{1+it_1+1+it_2} \Psi(t_1) \Psi(t_2)\ dt_1 dt_2 = \int_0^1 \psi'(u)^2\ du.

But on differentiating and then squaring (22) we have

\displaystyle \psi'(u)^2 = \int_{\bf R} \int_{\bf R} (1+it_1)(1+it_2) e^{-(1+it_1+1+it_2)u}\Psi(t_1) \Psi(t_2)\ dt_1 dt_2

and the claim follows by integrating in {u} from zero to infinity (noting that {\psi'} vanishes for {u>1}, and that {\int_0^\infty e^{-(1+it_1+1+it_2)u}\ du = \frac{1}{1+it_1+1+it_2}}).

We have the following variant of (19):

Lemma 3 For any {d \leq x^{1-3\varepsilon}}, one has

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n \ll \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} + R_d \ \ \ \ \ (24)

 

where the {R_d} are such that

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} R_d \ll_A |I| \log^{-A} x \ \ \ \ \ (25)

 

for any {A>0}. We also have the variant

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n/d) a_n \ll \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O(1)}{d} + R_d. \ \ \ \ \ (26)

 

If in addition {d} has no prime factors less than {x^\delta} for some fixed {\delta>0}, one has

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n

\displaystyle = \frac{1+o(1)}{d} \frac{C|I|}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1) + O(R_d). \ \ \ \ \ (27)

 

Roughly speaking, the above estimates assert that {\nu_\varepsilon} is concentrated on those numbers {n} with no prime factors much less than {x^\varepsilon}, but factors {d} without such small prime divisors occur with about the same relative density as they do in the integers.

Proof: The left-hand side of (24) can be expanded as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \sum_{n \in I: [d_1,d_2,d]|n} a_n.

If we define

\displaystyle R_d := \sum_{d' \leq x^{1-\varepsilon}: d|d'} \tau(d')^2 |\sum_{n \in I:d'|n} a_n - \frac{g(d')}{d'} C|I||

then the previous expression can be written as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2,d])}{[d_1,d_2,d]} C|I| + O(R_d),

while one has

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} R_d \leq \sum_{d' \leq x^{1-\varepsilon}} \tau(d')^3 |\sum_{n \in I:d'|n} a_n - \frac{g(d')}{d'} C|I||

which gives (25) from Axiom (iv). To prove (24), it now suffices to show that

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2,d])}{[d_1,d_2,d]}

\displaystyle \ll \frac{1}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d}. \ \ \ \ \ (28)

 

Arguing as before, the left-hand side is

\displaystyle \int_{\bf R} \int_{\bf R} E^{(d)}(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})\ \Psi(t_1) \Psi(t_2) dt_1 dt_2

where

\displaystyle E^{(d)}(s_1,s_2) := \sum_{d_1,d_2} \frac{\mu(d_1) \mu(d_2)}{d_1^{s_1}d_2^{s_2}} \frac{g([d_1,d_2,d])}{[d_1,d_2,d]}.

From Mertens’ theorem we have

\displaystyle E^{(d)}(s_1,s_2) \ll_\varepsilon \frac{\prod_{p|d} O(1)}{d} \log^{O(1)} x

when {\hbox{Re} s_1, \hbox{Re} s_2 = \frac{1}{\varepsilon \log x}}, so the contribution of the terms where {|t_1|, |t_2| \geq \sqrt{\log x}} can be absorbed into the {R_d} error (after increasing that error slightly). For the remaining contributions, we see that

\displaystyle E^{(d)}(s_1,s_2) = \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} \prod_p E^{(d)}_p(s_1,s_2)

where {E^{(d)}_p(s_1,s_2) = E_p(s_1,s_2)} if {p} does not divide {d}, and

\displaystyle E^{(d)}_p(s_1,s_2) = \frac{g(p^j)}{p^j} \frac{(1 - \frac{1}{p^{s_1}}) (1 - \frac{1}{p^{s_2}}) (1 - \frac{1}{p^{1+s_1+s_2}})}{(1-\frac{1}{p^{1+s_1}})(1-\frac{1}{p^{1+s_2}})}

if {p} divides {d} {j} times for some {j \geq 1}. In the latter case, Taylor expansion gives the bounds

\displaystyle |E^{(d)}_p(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})| \lesssim (1+|t_1|+|t_2|)^{O(1)} \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p}

and the claim (28) follows. When {p \geq x^\delta} and {|t_1|, |t_2| \leq \sqrt{\log x}} we have

\displaystyle E^{(d)}_p(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x}) = \frac{1+o(1)}{p^j}

and (27) follows by repeating the previous calculations. Finally, (26) is proven similarly to (24) (using {d[d_1,d_2]} in place of {[d_1,d_2,d]}). \Box

Now we can prove (15), (16), (17). We begin with (15). Using the Leibniz rule {L(f*g) = (Lf)*g + f*(Lg)} applied to the identity {\mu = \mu * 1 * \mu} and using {\Lambda = \mu*L} and Möbius inversion (and the associativity and commutativity of Dirichlet convolution) we see that

\displaystyle L\mu = - \mu * \Lambda. \ \ \ \ \ (29)

 

Next, by applying the Leibniz rule to {\Lambda_k = \mu * L^k} for some {k \geq 1} and using (29) we see that

\displaystyle L \Lambda_k = L \mu * L^k + \mu * L^{k+1}

\displaystyle = - \mu * \Lambda * L^k + \Lambda_{k+1}

and hence we have the recursive identity

\displaystyle \Lambda_{k+1} = L \Lambda_k + \Lambda *\Lambda_k. \ \ \ \ \ (30)
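
(As a quick numerical sanity check of (30) — an aside of mine, using Python with sympy:)

```python
# Verify the recursion (30): Lambda_{k+1} = L * Lambda_k plus the Dirichlet
# convolution of Lambda with Lambda_k, for k = 1 and small n.  Illustrative only.
import math
from sympy import divisors
from sympy.ntheory import mobius

def Lambda_k(n, k):
    # generalised von Mangoldt function, Lambda_k = mu * L^k
    return sum(mobius(d) * math.log(n // d) ** k for d in divisors(n))

for n in range(2, 200):
    lhs = Lambda_k(n, 2)
    rhs = math.log(n) * Lambda_k(n, 1) + sum(
        Lambda_k(d, 1) * Lambda_k(n // d, 1) for d in divisors(n))
    assert abs(lhs - rhs) < 1e-8
print("recursion (30) verified for k = 1 and all n < 200")
```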

 

In particular, from induction we see that {\Lambda_k} is supported on numbers with at most {k} distinct prime factors, and hence {\Lambda_{\vec k}} is supported on numbers with at most {|\vec k|} distinct prime factors. In particular, from (18) we see that {\nu_\varepsilon(n) = O(1)} on the support of {\Lambda_{\vec k}}. Thus it will suffice to show that

\displaystyle \sum_{n \in I: \nu_\varepsilon(n) \neq 1} \Lambda_{\vec k}(n) a_n \ll (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x.

If {\nu_\varepsilon(n) \neq 1} and {\Lambda_{\vec k}(n) \neq 0}, then {n} has at most {|\vec k|} distinct prime factors {p_1 < p_2 < \dots < p_r}, with {p_1 \leq x^\varepsilon}. If we factor {n = n_1 n_2}, where {n_1} is the contribution of those {p_i} with {p_i \leq x^{1/10|\vec k|}}, and {n_2} is the contribution of those {p_i} with {p_i > x^{1/10|\vec k|}}, then at least one of the following two statements holds:

  • (a) {n_1} (and hence {n}) is divisible by a square number of size at least {x^{1/10}}.
  • (b) {n_1 \leq x^{1/5}}.

The contribution of case (a) is easily seen to be acceptable by axiom (ii). For case (b), we observe from (30) and induction that

\displaystyle \Lambda_{\vec k}(n) \ll \log^{|\vec k|} x \prod_{j=1}^r \frac{\log p_j}{\log x}

and so it will suffice to show that

\displaystyle \sum_{n_1} (\prod_{p|n_1} \frac{\log p}{\log x}) \sum_{n \in I: n_1 | n} 1_R(n/n_1) a_n \ll (\varepsilon + o(1)) C |I| \log^{-1} x

where {n_1} ranges over numbers bounded by {x^{1/5}} with at most {|\vec k|} distinct prime factors, the smallest of which is at most {x^\varepsilon}, and {R} consists of those numbers with no prime factor less than or equal to {x^{1/10|\vec k|}}. Applying (26) (with {\varepsilon} replaced by {1/10|\vec k|}) gives the bound

\displaystyle \sum_{n \in I: n_1|n} 1_R(n/n_1) a_n \ll \frac{C|I|}{\log x} \frac{1}{n_1} + R_{n_1}

so by (25) it suffices to show that

\displaystyle \sum_{n_1} (\prod_{p|n_1} \frac{\log p}{\log x}) \frac{1}{n_1} \ll \varepsilon

subject to the same constraints on {n_1} as before. The contribution of those {n_1} with {r} distinct prime factors can be bounded by

\displaystyle O(\sum_{p_1 \leq x^\varepsilon} \frac{\log p_1}{p_1 \log x}) \times O(\sum_{p \leq x^{1/5}} \frac{\log p}{p\log x})^{r-1};

applying Mertens’ theorem and summing over {1 \leq r \leq |\vec k|}, one obtains the claim.
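
(The Mertens-theorem input used repeatedly here, {\sum_{p \leq y} \frac{\log p}{p} = \log y + O(1)}, is easily visible numerically; this small check is mine and not part of the argument.)

```python
# Mertens-type estimate: sum_{p <= y} log(p)/p = log y + O(1).
import math
from sympy import primerange

for y in [10**3, 10**4, 10**5]:
    s = sum(math.log(p) / p for p in primerange(2, y + 1))
    print(y, round(s, 3), round(math.log(y), 3))   # the difference stays bounded
```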

Now we show (16). As discussed previously in this section, we can replace {\Lambda_{\vec k}(n)} by {\sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d}} with negligible error. Comparing this with (16) and (11), we see that it suffices to show that

\displaystyle \sum_{n \in I} \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} (1 - \eta_\varepsilon(\frac{\log d}{\log x})) \nu_\varepsilon(n) a_n \ll (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x.

From the support of {\eta_\varepsilon}, the summand on the left-hand side is only non-zero when {d \geq x^{1-4\varepsilon}}, which makes {\log^{k_r} \frac{x}{d} \ll \varepsilon^{k_r} \log^{k_r} x \leq \varepsilon^2 \log^{k_r} x}, where we use the crucial hypothesis {k_r > 1} to gain enough powers of {\varepsilon} to make the argument here work. Applying Lemma 2, we reduce to showing that

\displaystyle \sum_{n \in I} \sum_{d|n: d \geq x^{1-4\varepsilon}} \nu_\varepsilon(n) a_n \ll \frac{1+o(1)}{\varepsilon \log x} C |I|.

We can make the change of variables {d \mapsto n/d} to flip the sum

\displaystyle \sum_{d|n: d \geq x^{1-4\varepsilon}} 1 \leq \sum_{d|n: d \leq x^{4\varepsilon}} 1

and then swap the sums to reduce to showing that

\displaystyle \sum_{d \leq x^{4\varepsilon}} \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n \ll \frac{1+o(1)}{\varepsilon \log x} C |I|.

By Lemma 3, it suffices to show that

\displaystyle \sum_{d \leq x^{4\varepsilon}} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} \ll 1.

To prove this, we use the Rankin trick, bounding the implied weight {1_{d \leq x^{4\varepsilon}}} by {O( \frac{1}{d^{1/\varepsilon \log x}} )}. We can then bound the left-hand side by the Euler product

\displaystyle \prod_p (1 + O( \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p^{1+1/\varepsilon \log x}} ))

which can be bounded by

\displaystyle \exp( O( \sum_p \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p^{1+1/\varepsilon \log x}} ) )

and the claim follows from Mertens’ theorem.

Finally, we show (17). By (11), the left-hand side expands as

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon(\frac{\log d}{\log x}) \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n.

We let {\delta>0} be a small constant to be chosen later. We divide the outer sum into two ranges, depending on whether {d} only has prime factors greater than {x^\delta} or not. In the former case, we can apply (27) to write this contribution as

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon(\frac{\log d}{\log x}) \frac{1+o(1)}{d} \frac{C|I|}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1)

plus a negligible error, where the {d} is implicitly restricted to numbers with all prime factors greater than {x^\delta}. The main term is messy, but it is of the required form {C Q'_{\varepsilon,x} G(1)} up to an acceptable error, so there is no need to compute it any further. It remains to consider those {d} that have at least one prime factor less than {x^\delta}. Here we use (24) instead of (27) as well as Lemma 3 to dominate this contribution by

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} O( \log^{|\vec k|} x \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} )

up to negligible errors, where {d} is now restricted to have at least one prime factor less than {x^\delta}. This forces at least one of the factors {\min( \frac{\log p}{\varepsilon \log x}, 1 )} to be at most {O_\varepsilon(\delta)}. A routine application of Rankin’s trick shows that

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 ) )}{d} \ll_\varepsilon 1

and so the total contribution of this case is {O_\varepsilon((\delta+o(1)) |I| \log^{|\vec k|-1} x)}. Since {\delta>0} can be made arbitrarily small, (17) follows.

— 2. Weierstrass approximation —

Having proved Theorem 1, we now take linear combinations of this theorem, combined with the Weierstrass approximation theorem, to give the asymptotics (7), (8) described in the introduction.

Let {a_n}, {g}, {C}, {G} be as in that theorem. It will be convenient to normalise the weights {\Lambda_{\vec k}} by {L^{1-|\vec k|}} to make their mean value comparable to {1}. From Theorem 1 and summation by parts we have

\displaystyle \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C x \ \ \ \ \ (31)

 

whenever {\vec k} does not consist entirely of ones.

We now take a closer look at what happens when {\vec k} does consist entirely of ones. Let {1^r} denote the {r}-tuple {(1,\dots,1)}. Convolving the {k=1} case of (30) with {r-1} copies of {\Lambda} for some {r \geq 1} and using the Leibniz rule, we see that

\displaystyle \Lambda_{(1^{r-1}, 2)} = \frac{1}{r} L \Lambda_{1^r} + \Lambda_{1^{r+1}}

and hence

\displaystyle L^{-r} \Lambda_{1^{r+1}} = L^{-r} \Lambda_{(1^{r-1},2)} - \frac{1}{r} L^{1-r} \Lambda_{1^r}.

Multiplying by {a_n} and summing over {n \leq x}, and using (31) to control the {\Lambda_{(1^{r-1},2)}} term, one has

\displaystyle \sum_{n \leq x} L^{-r} \Lambda_{1^{r+1}}(n) a_n = (G(1)+o(1)) \frac{2}{r!} C x - \frac{1}{r} \sum_{n \leq x} L^{1-r} \Lambda_{1^{r}}(n) a_n.

If we define {\delta_x} (up to an error of {o(1)}) by the formula

\displaystyle \sum_{n \leq x} \Lambda(n) a_n = (\delta_x G(1) + o(1)) C x

then an induction shows that

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \frac{1}{(r-1)!} (\delta_x G(1) + o(1)) C x

for odd {r}, and

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \frac{1}{(r-1)!} ((2-\delta_x) G(1) + o(1)) C x

for even {r}. In particular, after adjusting {\delta_x} by {o(1)} if necessary, we have {0 \leq \delta_x \leq 2} since the left-hand sides are non-negative.
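
For instance, here is the inductive step from an odd {r} to the even value {r+1} spelled out: inserting the odd-{r} asymptotic into the previous identity gives

\displaystyle \sum_{n \leq x} L^{-r} \Lambda_{1^{r+1}}(n) a_n = (G(1)+o(1)) \frac{2}{r!} C x - \frac{1}{r} \cdot \frac{1}{(r-1)!} (\delta_x G(1) + o(1)) C x = \frac{1}{r!} ((2-\delta_x) G(1) + o(1)) C x,

which is the even case of the claim with {r} replaced by {r+1} (note that {L^{-r} = L^{1-(r+1)}}).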

If we now define the comparison sequence {b_n := C G(1) (1 + (1-\delta_x) \mu(n))}, then standard multiplicative number theory shows that the above estimates also hold when {a_n} is replaced by {b_n}; thus

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) b_n + o( x )

for both odd and even {r}. The bound (31) also holds for {b_n} when {\vec k} does not consist entirely of ones, and hence

\displaystyle \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) a_n = \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) b_n + o( x )

for any fixed {\vec k} (which may or may not consist entirely of ones).

Next, from induction (on {j_1+\dots+j_r}), the Leibniz rule, and (30), we see that for any {r \geq 1} and any {j_1,\dots,j_r \geq 0} and {k_1,\dots,k_r \geq 1}, the function

\displaystyle L^{1-j_1-\dots-j_r-|\vec k|} ((L^{j_1} \Lambda_{k_1}) * \dots * (L^{j_r} \Lambda_{k_r})) \ \ \ \ \ (32)


is a finite linear combination of functions of the form {L^{1-|\vec k'|} \Lambda_{\vec k'}} for tuples {\vec k'} that may possibly consist entirely of ones. We thus have

\displaystyle \sum_{n \leq x} f(n) a_n = \sum_{n \leq x}f(n) b_n + o( x )

whenever {f} is one of these functions (32). Specialising to the case {k_1=\dots=k_r=1}, we thus have

\displaystyle \sum_{n_1 \dots n_r \leq x} a_{n} \log^{1-r} n \prod_{i=1}^r (\log n_i/\log n)^{j_i} \Lambda(n_i)

\displaystyle = \sum_{n_1 \dots n_r \leq x} b_{n} \log^{1-r} n \prod_{i=1}^r (\log n_i/\log n)^{j_i} \Lambda(n_i) + o(x )

where {n := n_1 \dots n_r}. The contribution of those {n_i} that are proper powers of primes (rather than primes) is easily seen to be negligible, leading to

\displaystyle \sum_{p_1 \dots p_r \leq x} a_{n} \log n \prod_{i=1}^r (\log p_i/\log n)^{j_i+1}

\displaystyle = \sum_{p_1 \dots p_r \leq x} b_{n} \log n \prod_{i=1}^r (\log p_i/\log n)^{j_i+1} + o(x)

where now {n := p_1 \dots p_r}. The contribution of the case where two of the primes {p_i} agree can also be seen to be negligible, as can the error incurred by replacing {\log n} with {\log x}; dividing through by {\log x} and using symmetry, we obtain

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} \prod_{i=1}^r (\log p_i/\log n)^{j_i+1}

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} \prod_{i=1}^r (\log p_i/\log n)^{j_i+1} + o(x / \log x).

By linearity, this implies that

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} P( \log p_1/\log n, \dots, \log p_r/\log n)

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} P( \log p_1/\log n, \dots, \log p_r/\log n) + o(x / \log x)

for any polynomial {P(t_1,\dots,t_r)} that vanishes on the coordinate hyperplanes {t_i=0}. The right-hand side can also be evaluated by Mertens’ theorem as

\displaystyle CG(1) \delta_x (\int_{\Delta_r} P) \frac{x}{\log x} + o(\frac{x}{\log x})

when {r} is odd and

\displaystyle CG(1) (2-\delta_x) (\int_{\Delta_r} P) \frac{x}{\log x} + o(\frac{x}{\log x})

when {r} is even. Using the Weierstrass approximation theorem, we then have

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} g_r( \log p_1/\log n, \dots, \log p_r/\log n)

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} g_r( \log p_1/\log n, \dots, \log p_r/\log n) + o(x / \log x)

for any continuous function {g_r} that is compactly supported in the interior of {\Delta_r}. Computing the right-hand side using Mertens’ theorem as before, we obtain the claimed asymptotics (7), (8).

Remark 4 The Bombieri asymptotic sieve has to use the full power of EH (or GEH); there are constructions due to Ford that show that if one only has a distributional hypothesis up to {x^{1-c}} for some fixed constant {c>0}, then the asymptotics of sums such as (5), or more generally (9), are not determined by a single scalar parameter {\delta_x}, but can also vary in other ways as well. Thus the Bombieri asymptotic sieve really is asymptotic; in order to get {o(1)} type error terms one needs the level {1-\varepsilon} of distribution to be asymptotically equal to {1} as {x \rightarrow \infty}. Related to this, the quantitative decay of the {o(1)} error terms in the Bombieri asymptotic sieve is extremely poor; in particular, it depends on the dependence of the implied constant in axiom (iv) on the parameters {\varepsilon,A}, for which there is no consensus on what one should conjecturally expect.

The twin prime conjecture is one of the oldest unsolved problems in analytic number theory. There are several reasons why this conjecture remains out of reach of current techniques, but the most important obstacle is the parity problem which prevents purely sieve-theoretic methods (or many other popular methods in analytic number theory, such as the circle method) from detecting pairs of prime twins in a way that can distinguish them from other twins of almost primes. The parity problem is discussed in these previous blog posts; this obstruction is ultimately powered by the Möbius pseudorandomness principle that asserts that the Möbius function {\mu} is asymptotically orthogonal to all “structured” functions (and in particular, to the weight functions constructed from sieve theory methods).

However, there is an intriguing “alternate universe” in which the Möbius function is strongly correlated with some structured functions, and specifically with some Dirichlet characters, leading to the existence of the infamous “Siegel zero“. In this scenario, the parity problem obstruction disappears, and it becomes possible, in principle, to attack problems such as the twin prime conjecture. In particular, we have the following result of Heath-Brown:

Theorem 1 At least one of the following two statements is true:
  • (Twin prime conjecture) There are infinitely many primes {p} such that {p+2} is also prime.
  • (No Siegel zeroes) There exists a constant {c>0} such that for every real Dirichlet character {\chi} of conductor {q > 1}, the associated Dirichlet {L}-function {s \mapsto L(s,\chi)} has no zeroes in the interval {[1-\frac{c}{\log q}, 1]}.

Informally, this result asserts that if one had an infinite sequence of Siegel zeroes, one could use this to generate infinitely many twin primes. See this survey of Friedlander and Iwaniec for more on this “illusory” or “ghostly” parallel universe in analytic number theory, which should not actually exist, but is surprisingly self-consistent and has to date proven impossible to banish from the realm of possibility.

The strategy of Heath-Brown’s proof is fairly straightforward to describe. The usual starting point is to try to lower bound

\displaystyle  \sum_{x \leq n \leq 2x} \Lambda(n) \Lambda(n+2) \ \ \ \ \ (1)

for some large value of {x}, where {\Lambda} is the von Mangoldt function. Actually, in this post we will work with the slight variant

\displaystyle  \sum_{x \leq n \leq 2x} \Lambda_2(n(n+2)) \nu(n(n+2))

where

\displaystyle  \Lambda_2(n) = (\mu * L^2)(n) = \sum_{d|n} \mu(d) \log^2 \frac{n}{d}

is the second von Mangoldt function, and {*} denotes Dirichlet convolution, and {\nu} is an (unsquared) Selberg sieve that damps out small prime factors. This sum also detects twin primes, but will lead to slightly simpler computations. For technical reasons we will also smooth out the interval {x \leq n \leq 2x} and remove very small primes from {n}, but we will skip over these steps for the purpose of this informal discussion. (In Heath-Brown’s original paper, the Selberg sieve {\nu} is essentially replaced by the more combinatorial restriction {1_{(n(n+2),q^{1/C}\#)=1}} for some large {C}, where {q^{1/C}\#} is the primorial of {q^{1/C}}, but I found the computations to be slightly easier if one works with a Selberg sieve, particularly if the sieve is not squared to make it nonnegative.)
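To see concretely how {\Lambda_2(n(n+2))} detects twin primes (and twin almost primes), one can tabulate it for small {n}. The sketch below (reusing a naive trial-division factorisation; it is purely illustrative and not part of the argument) exploits the fact that {\Lambda_2} is non-negative and vanishes unless its argument has at most two distinct prime factors; when {n} and {n+2} are both prime, {\Lambda_2(n(n+2))} equals {2 \log n \log(n+2)}:

```python
from math import log

def factorint(n):
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def Lambda2(n):
    # Lambda_2(n) = sum_{d|n} mu(d) log^2(n/d)
    def mu(m):
        f = factorint(m)
        return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)
    return sum(mu(d) * log(n // d) ** 2 for d in range(1, n + 1) if n % d == 0)

for n in range(3, 50):
    v = Lambda2(n * (n + 2))
    if v > 1e-9:
        print(n, n + 2, round(v, 2), factorint(n * (n + 2)))
```

Twin prime pairs such as {(5,7)} and {(11,13)} appear alongside almost-prime pairs such as {(7,9)} or {(25,27)} (and even {n}, via the prime {2}); damping these latter contributions is precisely the job of the sieve weight {\nu} and the removal of very small primes mentioned above.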

If there is a Siegel zero {L(\beta,\chi)=0} with {\beta} close to {1} and {\chi} a Dirichlet character of conductor {q}, then multiplicative number theory methods can be used to show that the Möbius function {\mu} “pretends” to be like the character {\chi} in the sense that {\mu(p) \approx \chi(p)} for “most” primes {p} near {q} (e.g. in the range {q^\varepsilon \leq p \leq q^C} for some small {\varepsilon>0} and large {C>0}). Traditionally, one uses complex-analytic methods to demonstrate this, but one can also use elementary multiplicative number theory methods to establish these results (qualitatively at least), as will be shown below the fold.

The fact that {\mu} pretends to be like {\chi} can be used to construct a tractable approximation (after inserting the sieve weight {\nu}) in the range {[x,2x]} (where {x = q^C} for some large {C}) for the second von Mangoldt function {\Lambda_2}, namely the function

\displaystyle  \tilde \Lambda_2(n) := (\chi * L^2)(n) = \sum_{d|n} \chi(d) \log^2 \frac{n}{d}.

Roughly speaking, we think of the periodic function {\chi} and the slowly varying function {\log^2} as being of about the same “complexity” as the constant function {1}, so that {\tilde \Lambda_2} is roughly of the same “complexity” as the divisor function

\displaystyle  \tau(n) := (1*1)(n) = \sum_{d|n} 1,

which is considerably simpler to obtain asymptotics for than the von Mangoldt function, as the Möbius function is no longer present. (For instance, note from the Dirichlet hyperbola method that one can estimate {\sum_{x \leq n \leq 2x} \tau(n)} to accuracy {O(\sqrt{x})} with little difficulty, whereas obtaining a comparable level of accuracy for {\sum_{x \leq n \leq 2x} \Lambda(n)} or {\sum_{x \leq n \leq 2x} \Lambda_2(n)} is essentially equivalent to the Riemann hypothesis.)
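The hyperbola method computation just alluded to fits in a few lines; the following Python sketch (illustrative only) uses the standard identity {\sum_{n \leq x} \tau(n) = 2\sum_{d \leq \sqrt{x}} \lfloor x/d \rfloor - \lfloor \sqrt{x}\rfloor^2} and compares against the asymptotic {x \log x + (2\gamma-1) x}, exhibiting the {O(\sqrt{x})} accuracy numerically:

```python
from math import isqrt, log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def tau_sum(x):
    """sum_{n <= x} tau(n), computed in O(sqrt(x)) steps via the hyperbola method."""
    s = isqrt(x)
    return 2 * sum(x // d for d in range(1, s + 1)) - s * s

for x in [10**4, 10**6, 10**8]:
    exact = tau_sum(x)
    approx = x * log(x) + (2 * GAMMA - 1) * x
    # last column: error normalised by sqrt(x); it stays bounded
    print(x, exact, round(exact - approx, 1), round((exact - approx) / x**0.5, 3))
```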

One expects {\tilde \Lambda_2(n)} to be a good approximant to {\Lambda_2(n)} if {n} is of size {O(x)} and has no prime factors less than {q^{1/C}} for some large constant {C}. The Selberg sieve {\nu} will be mostly supported on numbers with no prime factor less than {q^{1/C}}. As such, one can hope to approximate (1) by the expression

\displaystyle  \sum_{x \leq n \leq 2x} \tilde \Lambda_2(n(n+2)) \nu(n(n+2)); \ \ \ \ \ (2)

as it turns out, the error between this expression and (1) is easily controlled by sieve-theoretic techniques. Let us ignore the Selberg sieve for now and focus on the slightly simpler sum

\displaystyle  \sum_{x \leq n \leq 2x} \tilde \Lambda_2(n(n+2)).

As discussed above, this sum should be thought of as a slightly more complicated version of the sum

\displaystyle \sum_{x \leq n \leq 2x} \tau(n(n+2)). \ \ \ \ \ (3)

Accordingly, let us look (somewhat informally) at the task of estimating the model sum (3). One can think of this problem as basically that of counting solutions to the equation {ab+2=cd} with {a,b,c,d} in various ranges; this is clearly related to understanding the equidistribution of the hyperbola {\{ (a,b) \in ({\bf Z}/d{\bf Z})^2: ab + 2 = 0 \hbox{ mod } d \}} in {({\bf Z}/d{\bf Z})^2}. Taking Fourier transforms, the latter problem is closely related to estimation of the Kloosterman sums

\displaystyle  \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{a_1 m + a_2 \overline{m}}{r} )

where {\overline{m}} denotes the inverse of {m} in {({\bf Z}/r{\bf Z})^\times}. One can then use the Weil bound

\displaystyle  \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} ) \ll r^{1/2 + o(1)} (a,b,r)^{1/2} \ \ \ \ \ (4)

where {(a,b,r)} is the greatest common divisor of {a,b,r} (with the convention that this is equal to {r} if {a,b} both vanish), and the {o(1)} decays to zero as {r \rightarrow \infty}. The Weil bound yields good enough control on error terms to estimate (3), and as it turns out the same method also works to estimate (2) (provided that {x=q^C} with {C} large enough).
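For prime modulus the Weil bound takes the clean form {|S(a,b;p)| \leq 2\sqrt{p}} when {p \nmid ab}, which is easy to test numerically. Here is a brute-force sketch (not an efficient way to compute Kloosterman sums; the particular primes and the normalisation {b=1} are arbitrary choices for illustration):

```python
from cmath import exp, pi

def kloosterman(a, b, p):
    # S(a,b;p) = sum over units m mod p of e((a m + b m^{-1})/p)
    return sum(exp(2j * pi * ((a * m + b * pow(m, -1, p)) % p) / p)
               for m in range(1, p))

for p in [101, 499, 997]:
    worst = max(abs(kloosterman(a, 1, p)) for a in range(1, p))
    print(p, round(worst, 2), "<=", round(2 * p ** 0.5, 2))  # Weil bound
```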

Actually one does not need the full strength of the Weil bound here; any power savings over the trivial bound of {r} will do. In particular, it will suffice to use the weaker, but easier to prove, bounds of Kloosterman:

Lemma 2 (Kloosterman bound) One has

\displaystyle  \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} ) \ll r^{3/4 + o(1)} (a,b,r)^{1/4} \ \ \ \ \ (5)

whenever {r \geq 1} and {a,b} are coprime to {r}, where the {o(1)} is with respect to the limit {r \rightarrow \infty} (and is uniform in {a,b}).

Proof: Observe from change of variables that the Kloosterman sum {\sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} )} is unchanged if one replaces {(a,b)} with {(\lambda a, \lambda^{-1} b)} for {\lambda \in ({\bf Z}/r{\bf Z})^\times}. For fixed {a,b}, the number of such pairs {(\lambda a, \lambda^{-1} b)} is at least {r^{1-o(1)} / (a,b,r)}, thanks to the divisor bound. Thus it will suffice to establish the fourth moment bound

\displaystyle  \sum_{a,b \in {\bf Z}/r{\bf Z}} |\sum_{m \in ({\bf Z}/r{\bf Z})^\times} e\left( \frac{am+b\overline{m}}{r} \right)|^4 \ll r^{4+o(1)}.

The left-hand side can be rearranged as

\displaystyle  \sum_{m_1,m_2,m_3,m_4 \in ({\bf Z}/r{\bf Z})^\times} \sum_{a,b \in {\bf Z}/r{\bf Z}}

\displaystyle  e\left( \frac{a(m_1+m_2-m_3-m_4) + b(\overline{m_1}+\overline{m_2}-\overline{m_3}-\overline{m_4})}{r} \right)

which by Fourier summation is equal to

\displaystyle  r^2 \# \{ (m_1,m_2,m_3,m_4) \in (({\bf Z}/r{\bf Z})^\times)^4:

\displaystyle  m_1+m_2-m_3-m_4 = \frac{1}{m_1} + \frac{1}{m_2} - \frac{1}{m_3} - \frac{1}{m_4} = 0 \hbox{ mod } r \}.

Observe from the quadratic formula and the divisor bound that each pair {(x,y)\in ({\bf Z}/r{\bf Z})^2} has at most {O(r^{o(1)})} solutions {(m_1,m_2)} to the system of equations {m_1+m_2=x; \frac{1}{m_1} + \frac{1}{m_2} = y}. Hence the number of quadruples {(m_1,m_2,m_3,m_4)} of the desired form is {r^{2+o(1)}}, and the claim follows. \Box
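One can also confirm the quadruple count at the end of this proof directly for a small prime modulus, where every nonzero residue is invertible. In the brute-force sketch below ({r=101} is an arbitrary choice) the count comes out to be {O(r^2)}, in fact about {3r^2}: besides the diagonal quadruples with {\{m_3,m_4\} = \{m_1,m_2\}}, there is the family {m_2 = -m_1}, {m_4 = -m_3}, for which both congruences vanish identically. This is of course consistent with the {r^{2+o(1)}} bound:

```python
r = 101                                # a small prime, so Z/rZ is a field
inv = {m: pow(m, -1, r) for m in range(1, r)}
count = 0
for m1 in range(1, r):
    for m2 in range(1, r):
        for m3 in range(1, r):
            m4 = (m1 + m2 - m3) % r    # forced by the first congruence
            if m4 and (inv[m1] + inv[m2] - inv[m3] - inv[m4]) % r == 0:
                count += 1
print(count, "which is of order r^2 =", r * r)
```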

We will also need another easy case of the Weil bound to handle some other portions of (2):

Lemma 3 (Easy Weil bound) Let {\chi} be a primitive real Dirichlet character of conductor {q}, and let {a,b,c,d \in{\bf Z}/q{\bf Z}}. Then

\displaystyle  \sum_{n \in {\bf Z}/q{\bf Z}} \chi(an+b) \chi(cn+d) \ll q^{o(1)} (ad-bc, q).

Proof: As {q} is the conductor of a primitive real Dirichlet character, {q} is equal to {2^j} times a squarefree odd number for some {j \leq 3}. By the Chinese remainder theorem, it thus suffices to establish the claim when {q} is an odd prime {p}. We may assume that {ad-bc} is not divisible by {p}, as the claim is trivial otherwise. If {a} vanishes then {c} does not vanish, and the claim follows from the mean zero nature of {\chi}; similarly if {c} vanishes. Hence we may assume that {a,c} do not vanish, and then we can normalise them to equal {1}. By completing the square it now suffices to show that

\displaystyle  \sum_{n \in {\bf Z}/p{\bf Z}} \chi( n^2 - b ) \ll 1

whenever {b \neq 0 \hbox{ mod } p}. As {\chi} is {+1} on the quadratic residues and {-1} on the non-residues, it now suffices to show that

\displaystyle  \# \{ (m,n) \in ({\bf Z}/p{\bf Z})^2: n^2 - b = m^2 \} = p + O(1).

But by making the change of variables {(x,y) = (n+m,n-m)}, the left-hand side becomes {\# \{ (x,y) \in ({\bf Z}/p{\bf Z})^2: xy=b\}}, and the claim follows. \Box
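In fact, for odd prime modulus the character sum in the penultimate display can be evaluated exactly: the hyperbola count above gives {\sum_{n} \chi(n^2-b) = -1} whenever {b \neq 0 \hbox{ mod } p}. This is quick to confirm numerically with {\chi} the Legendre symbol, computed via Euler's criterion (the prime {p = 10007} and the sample values of {b} below are arbitrary choices):

```python
def legendre(a, p):
    """Legendre symbol (a|p) for odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

p = 10007
for b in [1, 2, 3, 1234, p - 1]:
    s = sum(legendre(n * n - b, p) for n in range(p))
    print(b, s)  # prints -1 each time
```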

While the basic strategy of Heath-Brown’s argument is relatively straightforward to describe, implementing it requires a large amount of computation to control both main terms and error terms. I experimented for a while with rearranging the argument to try to reduce this computation; I did not fully succeed in eliminating all of the superfluous calculation, but I was able to at least reduce it a bit, mostly by replacing a combinatorial sieve with a Selberg-type sieve (which did not need to be non-negative, so I dispensed with the squaring aspect of the Selberg sieve to simplify the calculations a little further; also, for minor reasons it was convenient to retain a tiny portion of the combinatorial sieve to eliminate extremely small primes). Some modest reductions in complexity can also be obtained by using the second von Mangoldt function {\Lambda_2(n(n+2))} in place of {\Lambda(n) \Lambda(n+2)}. These exercises were primarily for my own benefit, but I am placing them here in case they are of interest to some other readers.


We continue the discussion of sieve theory from Notes 4, but now specialise to the case of the linear sieve in which the sieve dimension {\kappa} is equal to {1}, which is one of the best understood sieving situations, and one of the rare cases in which the precise limits of the sieve method are known. A bit more specifically, let {z, D \geq 1} be quantities with {z = D^{1/s}} for some fixed {s>1}, and let {g} be a multiplicative function with

\displaystyle  g(p) = \frac{1}{p} + O(\frac{1}{p^2}) \ \ \ \ \ (1)

and

\displaystyle  0 \leq g(p) \leq 1-c \ \ \ \ \ (2)

for all primes {p} and some fixed {c>0} (we allow all constants below to depend on {c}). Let {P(z) := \prod_{p<z} p}, and for each prime {p < z}, let {E_p} be a set of integers, with {E_d := \bigcap_{p|d} E_p} for {d|P(z)}. We consider finitely supported sequences {(a_n)_{n \in {\bf Z}}} of non-negative reals for which we have bounds of the form

\displaystyle  \sum_{n \in E_d} a_n = g(d) X + r_d \ \ \ \ \ (3)

for all square-free {d \leq D} and some {X>0}, and some remainder terms {r_d}. One is then interested in upper and lower bounds on the quantity

\displaystyle  \sum_{n\not \in\bigcup_{p <z} E_p} a_n.

The fundamental lemma of sieve theory (Corollary 19 of Notes 4) gives us the bound

\displaystyle  \sum_{n\not \in\bigcup_{p <z} E_p} a_n = (1 + O(e^{-s})) X V(z) + O( \sum_{d \leq D: \mu^2(d)=1} |r_d| ) \ \ \ \ \ (4)

where {V(z)} is the quantity

\displaystyle  V(z) := \prod_{p<z} (1-g(p)). \ \ \ \ \ (5)

This bound is strong when {s} is large, but is not as useful for smaller values of {s}. We now give a sharp bound in this regime. We introduce the functions {F, f: (0,+\infty) \rightarrow {\bf R}^+} by

\displaystyle  F(s) := 2e^\gamma \left( \frac{1_{s>1}}{s} + \sum_{j \geq 3, \hbox{ odd}} \frac{1}{j!} \int_{[1,+\infty)^{j-1}} 1_{t_1+\dots+t_{j-1}\leq s-1} \frac{dt_1 \dots dt_{j-1}}{t_1 \dots t_j} \right) \ \ \ \ \ (6)

and

\displaystyle  f(s) := 2e^\gamma \sum_{j \geq 2, \hbox{ even}} \frac{1}{j!} \int_{[1,+\infty)^{j-1}} 1_{t_1+\dots+t_{j-1}\leq s-1} \frac{dt_1 \dots dt_{j-1}}{t_1 \dots t_j} \ \ \ \ \ (7)

where we adopt the convention {t_j := s - t_1 - \dots - t_{j-1}}. Note that for each {s} one has only finitely many non-zero summands in (6), (7). These functions are closely related to the Buchstab function {\omega} from Exercise 28 of Supplement 4; indeed from comparing the definitions one has

\displaystyle  F(s) + f(s) = 2 e^\gamma \omega(s)

for all {s>0}.

Exercise 1 (Alternate definition of {F, f}) Show that {F(s)} is continuously differentiable except at {s=1}, and {f(s)} is continuously differentiable except at {s=2} where it is continuous, obeying the delay-differential equations

\displaystyle  \frac{d}{ds}( s F(s) ) = f(s-1) \ \ \ \ \ (8)

for {s > 1} and

\displaystyle  \frac{d}{ds}( s f(s) ) = F(s-1) \ \ \ \ \ (9)

for {s>2}, with the initial conditions

\displaystyle  F(s) = \frac{2e^\gamma}{s} 1_{s>1}

for {s \leq 3} and

\displaystyle  f(s) = 0

for {s \leq 2}. Show that these properties of {F, f} determine {F, f} completely.

For future reference, we record the following explicit values of {F, f}:

\displaystyle  F(s) = \frac{2e^\gamma}{s} \ \ \ \ \ (10)

for {1 < s \leq 3}, and

\displaystyle  f(s) = \frac{2e^\gamma}{s} \log(s-1) \ \ \ \ \ (11)

for {2 \leq s \leq 4}.
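These explicit values can be checked for consistency against the delay-differential equations (8), (9) by numerical differentiation, on the ranges where both sides are covered by (10), (11). A small Python sketch (the step size {h} and the sample points are arbitrary choices, made purely for illustration):

```python
from math import exp, log

GAMMA = 0.5772156649015329
C2 = 2 * exp(GAMMA)

F = lambda s: C2 / s if s > 1 else 0.0                 # (10), valid for s <= 3
f = lambda s: C2 / s * log(s - 1) if s > 2 else 0.0    # (11), valid for s <= 4

h = 1e-6
for s in [2.5, 3.0, 3.5, 3.9]:      # check (9): d/ds (s f(s)) = F(s-1)
    lhs = ((s + h) * f(s + h) - (s - h) * f(s - h)) / (2 * h)
    print("(9) at s =", s, ":", round(lhs, 6), "vs", round(F(s - 1), 6))
for s in [2.2, 2.8]:                # check (8): d/ds (s F(s)) = f(s-1) = 0 here
    lhs = ((s + h) * F(s + h) - (s - h) * F(s - h)) / (2 * h)
    print("(8) at s =", s, ":", round(lhs, 6), "vs", round(f(s - 1), 6))
```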

We will show

Theorem 2 (Linear sieve) Let the notation and hypotheses be as above, with {s > 1}. Then, for any {\varepsilon > 0}, one has the upper bound

\displaystyle  \sum_{n\not \in\bigcup_{p <z} E_p} a_n \leq (F(s) + O(\varepsilon)) X V(z) + O( \sum_{d \leq D: \mu^2(d)=1} |r_d| ) \ \ \ \ \ (12)

and the lower bound

\displaystyle  \sum_{n\not \in\bigcup_{p <z} E_p} a_n \geq (f(s) - O(\varepsilon)) X V(z) + O( \sum_{d \leq D: \mu^2(d)=1} |r_d| ) \ \ \ \ \ (13)

if {D} is sufficiently large depending on {\varepsilon, s, c}. Furthermore, this claim is sharp in the sense that the quantity {F(s)} cannot be replaced by any smaller quantity, and similarly {f(s)} cannot be replaced by any larger quantity.
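Although the proof of Theorem 2 is deferred to below the fold, one can already see it in action in the model case {a_n = 1_{n \leq N}}, {E_p = p{\bf Z}}, {g(d) = 1/d}, {X = N}, where the count of {z}-rough numbers up to {N}, normalised by {X V(z)}, should land between {f(s)} and {F(s)}. A sketch of this experiment follows; the parameter choices {N = 10^7}, {D = N^{0.7}}, {s = 2.5} are arbitrary (the exponent {0.7} keeps the remainder sum {O(D)} negligible compared with {XV(z)}):

```python
from math import exp, log

GAMMA = 0.5772156649015329
C2 = 2 * exp(GAMMA)

def primes_below(m):
    flags = bytearray([1]) * m
    flags[0:2] = b"\x00\x00"
    for p in range(2, int(m ** 0.5) + 1):
        if flags[p]:
            flags[p * p::p] = bytearray(len(range(p * p, m, p)))
    return [p for p in range(2, m) if flags[p]]

N = 10 ** 7                    # X = N for the sequence a_n = 1_{n <= N}
s = 2.5
D = N ** 0.7                   # so that sum_{d <= D} |r_d| = O(D) = o(N V(z))
z = int(D ** (1 / s))          # sift by the primes below z = D^{1/s}

ps = primes_below(z)
V = 1.0
for p in ps:                   # V(z) = prod_{p < z} (1 - g(p)), g(p) = 1/p
    V *= 1 - 1 / p

flags = bytearray([1]) * (N + 1)
for p in ps:                   # remove the n divisible by some prime p < z
    flags[p::p] = bytearray(len(range(p, N + 1, p)))
survivors = sum(flags[1:])     # z-rough numbers up to N (including n = 1)

Fs = C2 / s                    # F(s) from (10), since 1 < s <= 3
fs = C2 / s * log(s - 1)       # f(s) from (11), since 2 <= s <= 4
print(f"ratio = {survivors / (N * V):.3f}  in  [f(s), F(s)] = [{fs:.3f}, {Fs:.3f}]")
```

For such a well-distributed sequence the ratio is in fact governed by the Buchstab function and comes out close to {1}; the bounds {f(s)}, {F(s)} describe the worst case over all sequences obeying the axioms, which is the content of the sharpness assertion in Theorem 2.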

Comparing the linear sieve with the fundamental lemma (and also testing using the sequence {a_n = 1_{1 \leq n \leq N}} for some extremely large {N}), we conclude that we necessarily have the asymptotics

\displaystyle  1 - O(e^{-s}) \leq f(s) \leq 1 \leq F(s) \leq 1 + O( e^{-s} )

for all {s \geq 1}; this can also be proven directly from the definitions of {F, f}, or from Exercise 1, though doing so is somewhat challenging; see e.g. Chapter 11 of Friedlander-Iwaniec for details.
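One way to get a feel for these asymptotics is to integrate the system (8), (9) forward from the closed forms (10), (11) and watch {F} decrease, and {f} increase, rapidly towards {1}. The following crude Euler scheme is a sketch only (the grid resolution and the horizon {s \leq 10} are arbitrary choices):

```python
from math import exp, log

GAMMA = 0.5772156649015329
C2 = 2 * exp(GAMMA)

M = 10000                     # grid points per unit of s
h = 1.0 / M
S = 10                        # integrate up to s = 10
n = S * M + 1
F = [0.0] * n
f = [0.0] * n
for k in range(n):            # seed with the closed forms (10), (11)
    s = k * h
    if 1 < s <= 3:
        F[k] = C2 / s
    if 2 < s <= 4:
        f[k] = C2 / s * log(s - 1)

for k in range(1, n):         # forward Euler; the delay s -> s-1 is a shift by M
    s = k * h
    if s > 3:                 # (8): d/ds (s F(s)) = f(s-1)
        F[k] = ((s - h) * F[k - 1] + h * f[k - M]) / s
    if s > 4:                 # (9): d/ds (s f(s)) = F(s-1)
        f[k] = ((s - h) * f[k - 1] + h * F[k - M]) / s

for sk in [3, 4, 5, 7, 10]:
    k = sk * M
    print(sk, round(F[k], 4), round(f[k], 4))   # both columns approach 1
```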

Exercise 3 Establish the integral identities

\displaystyle  F(s) = 1 + \frac{1}{s} \int_s^\infty (1 - f(t-1))\ dt

and

\displaystyle  f(s) = 1 + \frac{1}{s} \int_s^\infty (1 - F(t-1))\ dt

for {s \geq 2}. Argue heuristically that these identities are consistent with the bounds in Theorem 2 and the Buchstab identity (Equation (16) from Notes 4).

Exercise 4 Use the Selberg sieve (Theorem 30 from Notes 4) to obtain a slightly weaker version of (12) in the range {1 < s < 3} in which the error term {|r_d|} is worsened to {\tau_3(d) |r_d|}, but the main term is unchanged.

We will prove Theorem 2 below the fold. The optimality of {F, f} is closely related to the parity problem obstruction discussed in Section 5 of Notes 4; a naive application of the parity arguments there only gives the weak bounds {F(s) \geq \frac{2 e^\gamma}{s}} and {f(s)=0} for {s \leq 2}, but this can be sharpened by a more careful counting of various sums involving the Liouville function {\lambda}.

As an application of the linear sieve (specialised to the ranges in (10), (11)), we will establish a famous theorem of Chen, giving (in some sense) the closest approach to the twin prime conjecture that one can hope to achieve by sieve-theoretic methods:

Theorem 5 (Chen’s theorem) There are infinitely many primes {p} such that {p+2} is the product of at most two primes.

The same argument gives the version of Chen’s theorem for the even Goldbach conjecture, namely that for all sufficiently large even {N}, there exists a prime {p} between {2} and {N} such that {N-p} is the product of at most two primes.

The discussion in these notes loosely follows that of Friedlander-Iwaniec (who study sieving problems in more general dimension than {\kappa=1}).

