This is the final continuation of the online reading seminar of Zhang’s paper for the polymath8 project. (There are two other continuations: this previous post, which deals with the combinatorial aspects of the second part of Zhang’s paper, and this previous post, which covers the Type I and Type II sums.) The main purpose of this post is to present (and hopefully, to improve upon) the treatment of the final and most innovative of the key estimates in Zhang’s paper, namely the Type III estimate.

The main estimate was already stated as Theorem 17 in the previous post, but we quickly recall the relevant definitions here. As in other posts, we always take ${x}$ to be a parameter going off to infinity, with the usual asymptotic notation ${O(), o(), \ll}$ associated to this parameter.

Definition 1 (Coefficient sequences) A coefficient sequence is a finitely supported sequence ${\alpha: {\bf N} \rightarrow {\bf R}}$ that obeys the bounds

$\displaystyle |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (1)$

for all ${n}$, where ${\tau}$ is the divisor function.

• (i) If ${\alpha}$ is a coefficient sequence and ${a\ (q) = a \hbox{ mod } q}$ is a primitive residue class, the (signed) discrepancy ${\Delta(\alpha; a\ (q))}$ of ${\alpha}$ in this residue class is defined to be the quantity

$\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n). \ \ \ \ \ (2)$

• (ii) A coefficient sequence ${\alpha}$ is said to be at scale ${N}$ for some ${N \geq 1}$ if it is supported on an interval of the form ${[(1-O(\log^{-A_0} x)) N, (1+O(\log^{-A_0} x)) N]}$.
• (iii) A coefficient sequence ${\alpha}$ at scale ${N}$ is said to be smooth if it takes the form ${\alpha(n) = \psi(n/N)}$ for some smooth function ${\psi: {\bf R} \rightarrow {\bf C}}$ supported on ${[1-O(\log^{-A_0} x), 1+O(\log^{-A_0} x)]}$ obeying the derivative bounds

$\displaystyle \psi^{(j)}(t) = O( \log^{j A_0} x ) \ \ \ \ \ (3)$

for all fixed ${j \geq 0}$ (note that the implied constant in the ${O()}$ notation may depend on ${j}$).
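
To make this definition concrete, here is a quick illustrative computation of the discrepancy (2) in Python, using a hypothetical toy sequence (an interval indicator) rather than anything from the actual argument:

```python
from math import gcd

def phi(q):
    """Euler totient, by brute force (fine for small q)."""
    return sum(1 for b in range(1, q + 1) if gcd(b, q) == 1)

def discrepancy(alpha, a, q):
    """Signed discrepancy Delta(alpha; a (q)) of a finitely supported
    sequence alpha, given as a dict n -> alpha(n)."""
    main = sum(v for n, v in alpha.items() if n % q == a % q)
    avg = sum(v for n, v in alpha.items() if gcd(n, q) == 1) / phi(q)
    return main - avg

# toy sequence: indicator function of the interval [10, 30]
alpha = {n: 1.0 for n in range(10, 31)}

# this particular sequence is perfectly equidistributed among the four
# primitive classes mod 5, so each discrepancy vanishes; in general the
# discrepancies over the primitive classes always sum to zero
q = 5
total = sum(discrepancy(alpha, a, q) for a in range(1, q) if gcd(a, q) == 1)
```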

For any ${I \subset {\bf R}}$, let ${{\mathcal S}_I}$ denote the square-free numbers whose prime factors lie in ${I}$. The main result of this post is then the following result of Zhang:

Theorem 2 (Type III estimate) Let ${\varpi, \delta > 0}$ be fixed quantities, and let ${M, N_1, N_2, N_3 \gg 1}$ be quantities such that

$\displaystyle x \ll M N_1 N_2 N_3 \ll x$

and

$\displaystyle N_1 \gg N_2, N_3$

and

$\displaystyle N_1^4 N_2^4 N_3^5 \gg x^{4+16\varpi+\delta+c}$

for some fixed ${c>0}$. Let ${\alpha, \psi_1, \psi_2, \psi_3}$ be coefficient sequences at scale ${M,N_1,N_2,N_3}$ respectively with ${\psi_1,\psi_2,\psi_3}$ smooth. Then for any ${I \subset [1,x^\delta]}$ and any fixed ${A > 0}$ we have

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} \sup_{a \in ({\bf Z}/q{\bf Z})^\times} |\Delta(\alpha \ast \psi_1 \ast \psi_2 \ast \psi_3; a)| \ll x \log^{-A} x.$

In fact we have the stronger “pointwise” estimate

$\displaystyle |\Delta(\alpha \ast \psi_1 \ast \psi_2 \ast \psi_3; a)| \ll x^{-\epsilon} \frac{x}{q} \ \ \ \ \ (4)$

for all ${q \in {\mathcal S}_I}$ with ${q < x^{1/2+2\varpi}}$ and all ${a \in ({\bf Z}/q{\bf Z})^\times}$, and some fixed ${\epsilon>0}$.

(This is very slightly stronger than previously claimed, in that the condition ${N_2 \gg N_3}$ has been dropped.)

It turns out that Zhang does not exploit any averaging of the ${\alpha}$ factor, and matters reduce to the following:

Theorem 3 (Type III estimate without ${\alpha}$) Let ${\delta > 0}$ be fixed, and let ${1 \ll N_1, N_2, N_3, d \ll x^{O(1)}}$ be quantities such that

$\displaystyle N_1 \gg N_2, N_3$

and

$\displaystyle d \in {\mathcal S}_{[1,x^\delta]}$

and

$\displaystyle N_1^4 N_2^4 N_3^5 \gg d^8 x^{\delta+c}$

for some fixed ${c>0}$. Let ${\psi_1,\psi_2,\psi_3}$ be smooth coefficient sequences at scales ${N_1,N_2,N_3}$ respectively. Then we have

$\displaystyle |\Delta(\psi_1 \ast \psi_2 \ast \psi_3; a)| \ll x^{-\epsilon} \frac{N_1 N_2 N_3}{d}$

for all ${a \in ({\bf Z}/d{\bf Z})^\times}$ and some fixed ${\epsilon>0}$.

Let us quickly see how Theorem 3 implies Theorem 2. To show (4), it suffices to establish the bound

$\displaystyle \sum_{n = a\ (q)} \alpha \ast \psi_1 \ast \psi_2 \ast \psi_3(n) = X + O( x^{-\epsilon} \frac{x}{q} )$

for all ${a \in ({\bf Z}/q{\bf Z})^\times}$, where ${X}$ denotes a quantity that is independent of ${a}$ (but can depend on other quantities such as ${\alpha,\psi_1,\psi_2,\psi_3,q}$). The left-hand side can be rewritten as

$\displaystyle \sum_{b \in ({\bf Z}/q{\bf Z})^\times} \sum_{m = b\ (q)} \alpha(m) \sum_{n = a/b\ (q)} \psi_1 \ast \psi_2 \ast \psi_3(n).$

From Theorem 3 we have

$\displaystyle \sum_{n = a/b\ (q)} \psi_1 \ast \psi_2 \ast \psi_3(n) = Y + O( x^{-\epsilon} \frac{N_1 N_2 N_3}{q} )$

where the quantity ${Y}$ does not depend on ${a}$ or ${b}$. Inserting this asymptotic and using crude bounds on ${\alpha}$ (see Lemma 8 of this previous post) we conclude (4) as required (after modifying ${\epsilon}$ slightly).
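
The splitting of the convolution used in this deduction is elementary but easy to garble, so here is a brute-force numerical check of the identity (with small hypothetical sequences; the role of ${\psi_1 \ast \psi_2 \ast \psi_3}$ is played by a single sequence ${\psi}$, which does not affect the identity):

```python
from math import gcd
import random

def dirichlet_conv(f, g):
    """Dirichlet convolution (f*g)(n) = sum over factorisations n = m k."""
    h = {}
    for m, fm in f.items():
        for k, gk in g.items():
            h[m * k] = h.get(m * k, 0) + fm * gk
    return h

random.seed(0)
q, a = 7, 3  # modulus and a primitive residue class a (q)
alpha = {m: random.randint(-5, 5) for m in range(1, 40)}
psi = {n: random.randint(-5, 5) for n in range(1, 40)}

# left-hand side: sum of alpha * psi over the class a (q)
lhs = sum(v for n, v in dirichlet_conv(alpha, psi).items() if n % q == a)

# right-hand side: split according to the class b (q) of the alpha-variable;
# terms with gcd(m, q) > 1 cannot contribute since gcd(a, q) = 1
rhs = sum(sum(v for m, v in alpha.items() if m % q == b)
          * sum(v for n, v in psi.items() if n % q == (a * pow(b, -1, q)) % q)
          for b in range(1, q) if gcd(b, q) == 1)
```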

It remains to establish Theorem 3. This is done by a set of tools similar to that used to control the Type I and Type II sums:

• (i) completion of sums;
• (ii) the Weil conjectures and bounds on Ramanujan sums;
• (iii) factorisation of smooth moduli ${q \in {\mathcal S}_I}$;
• (iv) the Cauchy-Schwarz and triangle inequalities (Weyl differencing).

The specifics are slightly different though. For the Type I and Type II sums, it was the classical Weil bound on Kloosterman sums that was the key source of power saving; Ramanujan sums only played a minor role, controlling a secondary error term. For the Type III sums, one needs a significantly deeper consequence of the Weil conjectures, namely the estimate of Bombieri and Birch on a three-dimensional variant of a Kloosterman sum. Furthermore, the Ramanujan sums – which are a rare example of sums that actually exhibit better than square root cancellation, thus going beyond even what the Weil conjectures can offer – make a crucial appearance, when combined with the factorisation of the smooth modulus ${q}$ (this new argument is arguably the most original and interesting contribution of Zhang).

Tamar Ziegler and I have just uploaded to the arXiv our joint paper “A multi-dimensional Szemerédi theorem for the primes via a correspondence principle”. This paper is related to an earlier result of Ben Green and mine in which we established that the primes contain arbitrarily long arithmetic progressions. Actually, in that paper we proved a more general result:

Theorem 1 (Szemerédi’s theorem in the primes) Let ${A}$ be a subset of the primes ${{\mathcal P}}$ of positive relative density, thus ${\limsup_{N \rightarrow \infty} \frac{|A \cap [N]|}{|{\mathcal P} \cap [N]|} > 0}$. Then ${A}$ contains arbitrarily long arithmetic progressions.

This result was based in part on an earlier paper of Green that handled the case of progressions of length three. With the primes replaced by the integers, this is of course the famous theorem of Szemerédi.

Szemerédi’s theorem has now been generalised in many different directions. One of these is the multidimensional Szemerédi theorem of Furstenberg and Katznelson, who used ergodic-theoretic techniques to show that any dense subset of ${{\bf Z}^d}$ necessarily contained infinitely many constellations of any prescribed shape. Our main result is to relativise that theorem to the primes as well:

Theorem 2 (Multidimensional Szemerédi theorem in the primes) Let ${d \geq 1}$, and let ${A}$ be a subset of the ${d^{th}}$ Cartesian power ${{\mathcal P}^d}$ of the primes of positive relative density, thus

$\displaystyle \limsup_{N \rightarrow \infty} \frac{|A \cap [N]^d|}{|{\mathcal P}^d \cap [N]^d|} > 0.$

Then for any ${v_1,\ldots,v_k \in {\bf Z}^d}$, ${A}$ contains infinitely many “constellations” of the form ${a+r v_1, \ldots, a + rv_k}$ with ${a \in {\bf Z}^d}$ and ${r}$ a positive integer.

In the case when ${A}$ is itself a Cartesian product of one-dimensional sets (in particular, if ${A}$ is all of ${{\mathcal P}^d}$), this result already follows from Theorem 1, but there does not seem to be a similarly easy argument to deduce the general case of Theorem 2 from previous results. Simultaneously with this paper, an independent proof of Theorem 2 using a somewhat different method has been established by Cook, Magyar, and Titichetrakun.

The result is reminiscent of an earlier result of mine on finding constellations in the Gaussian primes (or dense subsets thereof). That paper followed closely the arguments of my original paper with Ben Green, namely it first enclosed (a W-tricked version of) the primes or Gaussian primes (in a sieve-theoretic sense) by a slightly larger set (or more precisely, a weight function ${\nu}$) of almost primes or almost Gaussian primes, which one could then verify (using methods closely related to the sieve-theoretic methods in the ongoing Polymath8 project) to obey certain pseudorandomness conditions, known as the linear forms condition and the correlation condition. Very roughly speaking, these conditions assert statements of the following form: if ${n}$ is a randomly selected integer, then the events of ${n+h_1,\ldots,n+h_k}$ simultaneously being an almost prime (or almost Gaussian prime) are approximately independent for most choices of ${h_1,\ldots,h_k}$. Once these conditions are satisfied, one can then run a transference argument (initially based on ergodic-theory methods, but nowadays there are simpler transference results based on the Hahn-Banach theorem, due to Gowers and Reingold-Trevisan-Tulsiani-Vadhan) to obtain relative Szemerédi-type theorems from their absolute counterparts.

However, when one tries to adapt these arguments to sets such as ${{\mathcal P}^2}$, a new difficulty occurs: the natural analogue of the almost primes would be the Cartesian square ${{\mathcal A}^2}$ of the almost primes – pairs ${(n,m)}$ whose entries are both almost primes. (Actually, for technical reasons, one does not work directly with a set of almost primes, but would instead work with a weight function such as ${\nu(n) \nu(m)}$ that is concentrated on a set such as ${{\mathcal A}^2}$, but let me ignore this distinction for now.) However, this set ${{\mathcal A}^2}$ does not enjoy as many pseudorandomness conditions as one would need for a direct application of the transference strategy to work. More specifically, given any fixed ${h, k}$, and random ${(n,m)}$, the four events

$\displaystyle (n,m) \in {\mathcal A}^2$

$\displaystyle (n+h,m) \in {\mathcal A}^2$

$\displaystyle (n,m+k) \in {\mathcal A}^2$

$\displaystyle (n+h,m+k) \in {\mathcal A}^2$

do not behave independently (as they would if ${{\mathcal A}^2}$ were replaced for instance by the Gaussian almost primes), because any three of these events imply the fourth. This blocks the transference strategy for constellations which contain some right angles (e.g. constellations of the form ${(n,m), (n+r,m), (n,m+r)}$), as such constellations soon turn into rectangles such as the one above after applying Cauchy-Schwarz a few times. (But a few years ago, Cook and Magyar showed that if one restricted attention to constellations which were in general position in the sense that any coordinate hyperplane contained at most one element in the constellation, then this obstruction does not occur and one can establish Theorem 2 in this case through the transference argument.) It’s worth noting that very recently, Conlon, Fox, and Zhao have succeeded in removing one of the pseudorandomness conditions (namely the correlation condition) from the transference principle, leaving only the linear forms condition as the remaining pseudorandomness condition to be verified, but unfortunately this does not completely solve the above problem because the linear forms condition also fails for ${{\mathcal A}^2}$ (or for weights concentrated on ${{\mathcal A}^2}$) when applied to rectangular patterns.

There are now two ways known to get around this problem and establish Theorem 2 in full generality. The approach of Cook, Magyar, and Titichetrakun proceeds by starting with one of the known proofs of the multidimensional Szemerédi theorem – namely, the proof that proceeds through hypergraph regularity and hypergraph removal – and attaching pseudorandom weights directly within the proof itself, rather than trying to add the weights to the result of that proof through a transference argument. (A key technical issue is that weights have to be added to all the levels of the hypergraph – not just the vertices and top-order edges – in order to circumvent the failure of naive pseudorandomness.) As one has to modify the entire proof of the multidimensional Szemerédi theorem, rather than use that theorem as a black box, the Cook-Magyar-Titichetrakun argument is lengthier than ours; on the other hand, it is more general and does not rely on some difficult theorems about primes that are used in our paper.

In our approach, we continue to use the multidimensional Szemerédi theorem (or more precisely, the equivalent theorem of Furstenberg and Katznelson concerning multiple recurrence for commuting shifts) as a black box. The difference is that instead of using a transference principle to connect the relative multidimensional Szemerédi theorem we need to the multiple recurrence theorem, we instead proceed by a version of the Furstenberg correspondence principle, similar to the one that connects the absolute multidimensional Szemerédi theorem to the multiple recurrence theorem. I had discovered this approach many years ago in an unpublished note, but had abandoned it because it required an infinite number of linear forms conditions (in contrast to the transference technique, which only needed a finite number of linear forms conditions and (until the recent work of Conlon-Fox-Zhao) a correlation condition). The reason for this infinite number of conditions is that the correspondence principle has to build a probability measure on an entire ${\sigma}$-algebra; for this, it is not enough to specify the measure ${\mu(A)}$ of a single set such as ${A}$, but one also has to specify the measure ${\mu( T^{n_1} A \cap \ldots \cap T^{n_m} A)}$ of “cylinder sets” such as ${T^{n_1} A \cap \ldots \cap T^{n_m} A}$ where ${m}$ could be arbitrarily large. The larger ${m}$ gets, the more linear forms conditions one needs to keep the correspondence under control.

With the sieve weights ${\nu}$ we were using at the time, standard sieve theory methods could indeed provide a finite number of linear forms conditions, but not an infinite number, so my idea was abandoned. However, with my later work with Green and Ziegler on linear equations in primes (and related work on the Mobius-nilsequences conjecture and the inverse conjecture on the Gowers norm), Tamar and I realised that the primes themselves obey an infinite number of linear forms conditions, so one can basically use the primes (or a proxy for the primes, such as the von Mangoldt function ${\Lambda}$) as the enveloping sieve weight, rather than a classical sieve. Thus my old idea of using the Furstenberg correspondence principle to transfer Szemerédi-type theorems to the primes could actually be realised. In the one-dimensional case, this simply produces a much more complicated proof of Theorem 1 than the existing one; but it turns out that the argument works as well in higher dimensions and yields Theorem 2 relatively painlessly, except for the fact that it needs the results on linear equations in primes, the known proofs of which are extremely lengthy (and also require some of the transference machinery mentioned earlier). The problem of correlations in rectangles is avoided in the correspondence principle approach because one can compensate for such correlations by performing a suitable weighted limit to compute the measure ${\mu( T^{n_1} A \cap \ldots \cap T^{n_m} A)}$ of cylinder sets, with each ${m}$ requiring a different weighted correction. (This may be related to the Cook-Magyar-Titichetrakun strategy of weighting all of the facets of the hypergraph in order to recover pseudorandomness, although our contexts are rather different.)

This is one of the continuations of the online reading seminar of Zhang’s paper for the polymath8 project. (There are two other continuations: this previous post, which deals with the combinatorial aspects of the second part of Zhang’s paper, and a post to come which covers the Type III sums.) The main purpose of this post is to present (and hopefully, to improve upon) the treatment of two of the three key estimates in Zhang’s paper, namely the Type I and Type II estimates.

The main estimate was already stated as Theorem 16 in the previous post, but we quickly recall the relevant definitions here. As in other posts, we always take ${x}$ to be a parameter going off to infinity, with the usual asymptotic notation ${O(), o(), \ll}$ associated to this parameter.

Definition 1 (Coefficient sequences) A coefficient sequence is a finitely supported sequence ${\alpha: {\bf N} \rightarrow {\bf R}}$ that obeys the bounds

$\displaystyle |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (1)$

for all ${n}$, where ${\tau}$ is the divisor function.

• (i) If ${\alpha}$ is a coefficient sequence and ${a\ (q) = a \hbox{ mod } q}$ is a primitive residue class, the (signed) discrepancy ${\Delta(\alpha; a\ (q))}$ of ${\alpha}$ in this residue class is defined to be the quantity

$\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n). \ \ \ \ \ (2)$

• (ii) A coefficient sequence ${\alpha}$ is said to be at scale ${N}$ for some ${N \geq 1}$ if it is supported on an interval of the form ${[(1-O(\log^{-A_0} x)) N, (1+O(\log^{-A_0} x)) N]}$.
• (iii) A coefficient sequence ${\alpha}$ at scale ${N}$ is said to obey the Siegel-Walfisz theorem if one has

$\displaystyle | \Delta(\alpha 1_{(\cdot,q)=1}; a\ (r)) | \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (3)$

for any ${q,r \geq 1}$, any fixed ${A}$, and any primitive residue class ${a\ (r)}$.

• (iv) A coefficient sequence ${\alpha}$ at scale ${N}$ is said to be smooth if it takes the form ${\alpha(n) = \psi(n/N)}$ for some smooth function ${\psi: {\bf R} \rightarrow {\bf C}}$ supported on ${[1-O(\log^{-A_0} x), 1+O(\log^{-A_0} x)]}$ obeying the derivative bounds

$\displaystyle \psi^{(j)}(t) = O( \log^{j A_0} x ) \ \ \ \ \ (4)$

for all fixed ${j \geq 0}$ (note that the implied constant in the ${O()}$ notation may depend on ${j}$).

In Lemma 8 of this previous post we established a collection of “crude estimates” which assert, roughly speaking, that for the purposes of averaged estimates one may ignore the ${\tau^{O(1)}(n)}$ factor in (1) and pretend that ${\alpha(n)}$ was in fact ${O( \log^{O(1)} n)}$. We shall rely frequently on these “crude estimates” without further citation to that precise lemma.

For any ${I \subset {\bf R}}$, let ${{\mathcal S}_I}$ denote the square-free numbers whose prime factors lie in ${I}$.

Definition 2 (Singleton congruence class system) Let ${I \subset {\bf R}}$. A singleton congruence class system on ${I}$ is a collection ${{\mathcal C} = (\{a_q\})_{q \in {\mathcal S}_I}}$ of primitive residue classes ${a_q \in ({\bf Z}/q{\bf Z})^\times}$ for each ${q \in {\mathcal S}_I}$, obeying the Chinese remainder theorem property

$\displaystyle a_{qr}\ (qr) = (a_q\ (q)) \cap (a_r\ (r)) \ \ \ \ \ (5)$

whenever ${q,r \in {\mathcal S}_I}$ are coprime. We say that such a system ${{\mathcal C}}$ has controlled multiplicity if the quantity

$\displaystyle \tau_{\mathcal C}(n) := |\{ q \in {\mathcal S}_I: n = a_q\ (q) \}|$

obeys the estimate

$\displaystyle \sum_{C^{-1} x \leq n \leq Cx: n = a\ (r)} \tau_{\mathcal C}(n)^2 \ll \frac{x}{r} \tau(r)^{O(1)} \log^{O(1)} x + x^{o(1)} \ \ \ \ \ (6)$

for any fixed ${C>1}$ and any congruence class ${a\ (r)}$ with ${r \in {\mathcal S}_I}$.
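
As a toy illustration (with hypothetical small parameters, not the actual choices used in the argument): fixing an integer ${y}$ coprime to all primes in ${I}$ and setting ${a_q := y\ (q)}$ for every ${q}$ gives a singleton congruence class system obeying (5), and ${\tau_{\mathcal C}(n)}$ then simply counts the moduli ${q \in {\mathcal S}_I}$ dividing ${n-y}$:

```python
# hypothetical toy parameters: the primes of I are taken to be {3, 5, 7}
I_primes = [3, 5, 7]
S_I = []  # square-free numbers with all prime factors in I (including 1)
for mask in range(1 << len(I_primes)):
    q = 1
    for i, p in enumerate(I_primes):
        if (mask >> i) & 1:
            q *= p
    S_I.append(q)

y = 1  # a_q := y mod q is primitive for every q here, since gcd(y, q) = 1

def tau_C(n):
    # number of moduli q in S_I with n = a_q (q), i.e. with q | n - y
    return sum(1 for q in S_I if (n - y) % q == 0)
```

The Chinese remainder theorem property (5) holds trivially for such a system, since ${y\ (qr)}$ determines ${y\ (q)}$ and ${y\ (r)}$.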

The main result of this post is then the following:

Theorem 3 (Type I/II estimate) Let ${\varpi, \delta, \sigma > 0}$ be fixed quantities such that

$\displaystyle 11 \varpi + 3\delta + 2 \sigma < \frac{1}{4} \ \ \ \ \ (7)$

and

$\displaystyle 37\varpi + 5 \delta < \frac{1}{4} \ \ \ \ \ (8)$

and let ${\alpha,\beta}$ be coefficient sequences at scales ${M,N}$ respectively with

$\displaystyle x \ll MN \ll x \ \ \ \ \ (9)$

and

$\displaystyle x^{\frac{1}{2}-\sigma} \ll N \ll M \ll x^{\frac{1}{2}+\sigma} \ \ \ \ \ (10)$

with ${\beta}$ obeying a Siegel-Walfisz theorem. Then for any ${I \subset [1,x^\delta]}$, any singleton congruence class system ${(\{a_q\})_{q \in {\mathcal S}_I}}$ with controlled multiplicity, and any fixed ${A > 0}$, we have

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} |\Delta(\alpha \ast \beta; a_q)| \ll x \log^{-A} x.$

The proof of this theorem relies on five basic tools:

• (i) the Bombieri-Vinogradov theorem;
• (ii) completion of sums;
• (iii) the Weil conjectures;
• (iv) factorisation of smooth moduli ${q \in {\mathcal S}_I}$; and
• (v) the Cauchy-Schwarz and triangle inequalities (the dispersion method).

For the purposes of numerics, it is the interplay between (ii), (iii), and (v) that drives the final conditions (7), (8). The Weil conjectures are the primary source of power savings (${x^{-c}}$ for some fixed ${c>0}$) in the argument, but they need to overcome power losses coming from completion of sums, and also each use of Cauchy-Schwarz tends to halve any power savings present in one’s estimates. Naively, one could thus expect to get better estimates by relying more on the Weil conjectures, and less on completion of sums and on Cauchy-Schwarz.

This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project. As the previous post was getting somewhat full, we are rolling the thread over to the current post. We also take the opportunity to correct some errors in the treatment of the truncated GPY sieve from this previous post.

As usual, we let ${x}$ be a large asymptotic parameter, and ${w}$ a sufficiently slowly growing function of ${x}$. Let ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4+\varpi}$ be such that ${MPZ[\varpi,\delta]}$ holds (see this previous post for a definition of this assertion). We let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple, let ${I := [w,x^\delta]}$, let ${{\mathcal S}_I}$ be the square-free numbers with prime divisors in ${I}$, and consider the truncated GPY sieve

$\displaystyle \nu(n) := \lambda(n)^2$

where

$\displaystyle \lambda(n) := \sum_{d \in {\mathcal S}_I: d|P(n)} \mu(d) g(\frac{\log d}{\log R})$

where ${R := x^{1/4+\varpi}}$, ${P}$ is the polynomial

$\displaystyle P(n) := \prod_{h \in {\mathcal H}} (n+h),$

and ${g: {\bf R} \rightarrow {\bf R}}$ is a fixed smooth function supported on ${[-1,1]}$. As discussed in the previous post, we are interested in obtaining an upper bound of the form

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) \leq (\alpha+o(1)) (\frac{W}{\phi(W)})^{k_0} \frac{x}{W \log^{k_0} R}$

as well as a lower bound of the form

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) \theta(n+h) \geq (\beta+o(1)) (\frac{W}{\phi(W)})^{k_0} \frac{x}{W \log^{k_0-1} R}$

for all ${h \in {\mathcal H}}$ (where ${\theta(n) = \log n}$ when ${n}$ is prime and ${\theta(n)=0}$ otherwise), since this will give the conjecture ${DHL[k_0,2]}$ (i.e. infinitely many ${n}$ for which ${n+{\mathcal H}}$ contains at least two primes, and hence infinitely many prime gaps of size at most the diameter of ${{\mathcal H}}$) whenever

$\displaystyle 1+4\varpi > \frac{4\alpha}{k_0 \beta}. \ \ \ \ \ (1)$
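
For the record, here is the standard comparison behind (1) (a sketch, using only the bounds just stated): since ${\theta(n+h) \leq \log(3x)}$ for ${x \leq n \leq 2x}$ and bounded ${h}$, any ${n}$ contributing a positive summand to

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) ( \sum_{h \in {\mathcal H}} \theta(n+h) - \log(3x) )$

must have at least two primes among the ${n+h}$, ${h \in {\mathcal H}}$. Inserting the upper and lower bounds above, this expression is at least

$\displaystyle (k_0 \beta \log R - \alpha \log x + o(\log x)) (\frac{W}{\phi(W)})^{k_0} \frac{x}{W \log^{k_0} R},$

which is positive for ${x}$ large precisely when ${k_0 \beta (\frac{1}{4}+\varpi) > \alpha}$, which rearranges to (1).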

It turns out we in fact have precise asymptotics

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) = (\alpha+o(1)) (\frac{W}{\phi(W)})^{k_0} \frac{x}{W \log^{k_0} R} \ \ \ \ \ (2)$

and

$\displaystyle \sum_{x \leq n \leq 2x} \nu(n) \theta(n+h) = (\beta+o(1)) (\frac{W}{\phi(W)})^{k_0} \frac{x}{W \log^{k_0-1} R} \ \ \ \ \ (3)$

although the exact formulae for ${\alpha,\beta}$ are a little complicated. (The fact that ${\alpha,\beta}$ could be computed exactly was already anticipated in Zhang’s paper; see the remark on page 24.) We proceed as in the previous post. Indeed, from the arguments in that post, (2) is equivalent to

$\displaystyle \sum_{d_1,d_2 \in {\mathcal S}_I} \mu(d_1) g(\frac{\log d_1}{\log R}) \mu(d_2) g(\frac{\log d_2}{\log R}) \frac{k_0^{\Omega([d_1,d_2])}}{[d_1,d_2]} \ \ \ \ \ (4)$

$\displaystyle = (\alpha + o(1)) (\frac{W}{\phi(W)})^{k_0} \log^{-k_0} R$

and (3) is similarly equivalent to

$\displaystyle \sum_{d_1,d_2 \in {\mathcal S}_I} \mu(d_1) g(\frac{\log d_1}{\log R}) \mu(d_2) g(\frac{\log d_2}{\log R}) \frac{(k_0-1)^{\Omega([d_1,d_2])}}{[d_1,d_2]} \ \ \ \ \ (5)$

$\displaystyle = (\beta + o(1)) (\frac{W}{\phi(W)})^{k_0-1} \log^{-k_0+1} R.$

Here ${\Omega(d)}$ is the number of prime factors of ${d}$.

We will work for now with (4), as the treatment of (5) is almost identical.

We would now like to replace the truncated interval ${I = [w,x^\delta]}$ with the untruncated interval ${I \cup J = [w,\infty)}$, where ${J = (x^\delta,\infty)}$. Unfortunately this replacement was not quite done correctly in the previous post, and this will now be corrected here. We first observe that if ${F(d_1,d_2)}$ is any finitely supported function, then by Möbius inversion we have

$\displaystyle \sum_{d_1,d_2 \in {\mathcal S}_I} F(d_1,d_2) = \sum_{d_1,d_2 \in {\mathcal S}_{I \cup J}} F(d_1,d_2) \sum_{a \in {\mathcal S}_J} \mu(a) 1_{a|[d_1,d_2]}.$

Note that ${a|[d_1,d_2]}$ if and only if we have a factorisation ${d_1 = a_1 d'_1}$, ${d_2 = a_2 d'_2}$ with ${[a_1,a_2] = a}$ and ${d'_1 d'_2}$ coprime to ${a_1 a_2}$, and that this factorisation is unique. From this, we see that we may rearrange the previous expression as

$\displaystyle \sum_{a_1,a_2 \in {\mathcal S}_J} \mu( [a_1,a_2] ) \sum_{d'_1,d'_2 \in {\mathcal S}_{I \cup J}: (d'_1 d'_2, a_1 a_2) = 1} F( a_1 d'_1, a_2 d'_2 ).$
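
Since the bookkeeping here was the source of the error being corrected, a brute-force numerical check of this rearrangement (with the toy choices of primes ${\{2,3\}}$ for ${I}$ and ${\{5,7\}}$ for ${J}$, and a random finitely supported ${F}$) may be reassuring:

```python
from math import gcd
import random

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    """Moebius function (the inputs here are always square-free)."""
    res, p = 1, 2
    while n > 1:
        if n % p == 0:
            n //= p
            res = -res
        p += 1
    return res

def lcm(a, b):
    return a * b // gcd(a, b)

S_I = divisors(6)     # square-free, prime factors in {2, 3}
S_J = divisors(35)    # square-free, prime factors in {5, 7}
S_IJ = divisors(210)  # square-free, prime factors in the union

random.seed(1)
F = {(d1, d2): random.randint(-3, 3) for d1 in S_IJ for d2 in S_IJ}

lhs = sum(F[(d1, d2)] for d1 in S_I for d2 in S_I)

# Moebius inversion over the J-part of [d1, d2]
mid = sum(F[(d1, d2)] * mu(a)
          for d1 in S_IJ for d2 in S_IJ
          for a in S_J if lcm(d1, d2) % a == 0)

# rearranged form via the unique factorisation d_i = a_i d'_i
rear = sum(mu(lcm(a1, a2)) * F[(a1 * d1, a2 * d2)]
           for a1 in S_J for a2 in S_J
           for d1 in S_IJ for d2 in S_IJ
           if gcd(d1 * d2, a1 * a2) == 1)
```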

Applying this to (4), and relabeling ${d'_1,d'_2}$ as ${d_1,d_2}$, we conclude that the left-hand side of (4) is equal to

$\displaystyle \sum_{a_1,a_2 \in {\mathcal S}_J} \mu( [a_1,a_2] ) \sum_{d_1,d_2 \in {\mathcal S}_{I \cup J}: (d_1d_2,a_1a_2)=1}$

$\displaystyle \mu(a_1d_1) g(\frac{\log a_1d_1}{\log R}) \mu(a_2d_2) g(\frac{\log a_2d_2}{\log R}) \frac{k_0^{\Omega([a_1 d_1,a_2 d_2])}}{[a_1 d_1,a_2 d_2]}$

which may be rearranged as

$\displaystyle \sum_{a_1,a_2 \in {\mathcal S}_J} \frac{\mu( (a_1,a_2) ) k_0^{\Omega([a_1,a_2])}}{[a_1,a_2]} \sum_{d_1,d_2\in {\mathcal S}_{I \cup J}: (d_1d_2,a_1a_2)=1} \ \ \ \ \ (6)$

$\displaystyle \mu(d_1) g(\frac{\log a_1d_1}{\log R}) \mu(d_2) g(\frac{\log a_2 d_2}{\log R}) \frac{k_0^{\Omega([d_1,d_2])}}{[d_1, d_2]}.$

This is almost the same formula that we had in the previous post, except that the Möbius function ${\mu((a_1,a_2))}$ of the greatest common divisor ${(a_1,a_2)}$ of ${a_1,a_2}$ was missing, and also the coprimality condition ${(d_1d_2,a_1a_2)=1}$ was not handled properly in the previous post.

We may now eliminate the condition ${(d_1d_2,a_1a_2)=1}$ as follows. Suppose that there is a prime ${p_* \in J}$ that divides both ${d_1d_2}$ and ${a_1a_2}$. The expression

$\displaystyle \sum_{a_1,a_2 \in {\mathcal S}_J} \frac{k_0^{\Omega([a_1,a_2])}}{[a_1,a_2]} \sum_{d_1,d_2 \in {\mathcal S}_{I \cup J}: p_* | (d_1d_2,a_1a_2)}$

$\displaystyle |g(\frac{\log a_1d_1}{\log R})| |g(\frac{\log a_2 d_2}{\log R})| \frac{k_0^{\Omega([d_1,d_2])}}{[d_1, d_2]}$

can then be bounded by

$\displaystyle \ll \sum_{a_1,a_2} \sum_{d_1,d_2: p_* | (d_1d_2,a_1a_2)} \frac{k_0^{\Omega([a_1,a_2])}}{[a_1,a_2]} \frac{k_0^{\Omega([d_1,d_2])}}{[d_1, d_2]} (a_1 a_2 d_1 d_2)^{-1/\log R}$

which may be factorised as

$\displaystyle \ll \frac{1}{p_*^2} \prod_p (1 + \frac{O(1)}{p^{1+1/\log R}})$

which by Mertens’ theorem (or the simple pole of ${\zeta(s)}$ at ${s=1}$) is

$\displaystyle \ll \frac{\log^{O(1)} R}{p_*^2}.$

Summing over all ${p_* > x^\delta}$ gives a negligible contribution to (6) for the purposes of (4). Thus we may effectively replace (6) by

$\displaystyle \sum_{a_1,a_2 \in {\mathcal S}_J} \frac{\mu( (a_1,a_2) ) k_0^{\Omega([a_1,a_2])}}{[a_1,a_2]} \sum_{d_1,d_2\in {\mathcal S}_{I \cup J}}$

$\displaystyle \mu(d_1) g(\frac{\log a_1d_1}{\log R}) \mu(d_2) g(\frac{\log a_2 d_2}{\log R}) \frac{k_0^{\Omega([d_1,d_2])}}{[d_1, d_2]}.$

The inner summation can be treated using Proposition 10 of the previous post. We can then reduce (4) to

$\displaystyle \sum_{a_1,a_2 \in {\mathcal S}_J} \frac{\mu( (a_1,a_2) ) k_0^{\Omega([a_1,a_2])}}{[a_1,a_2]} G_{k_0}( \frac{\log a_1}{\log R}, \frac{\log a_2}{\log R} ) = \alpha+o(1) \ \ \ \ \ (7)$

where ${G_{k_0}}$ is the function

$\displaystyle G_{k_0}(t_1,t_2) := \int_0^1 g^{(k_0)}(t+t_1) g^{(k_0)}(t+t_2) \frac{t^{k_0-1}}{(k_0-1)!}\ dt.$

Note that ${G_{k_0}(t_1,t_2)}$ vanishes if ${t_1 \geq 1}$ or ${t_2 \geq 1}$. In practice, we will work with functions ${g}$ in which ${g^{(k_0)}}$ has a definite sign (in our normalisations, ${g^{(k_0)}}$ will be non-positive), making ${G_{k_0}}$ non-negative.

We rewrite the left-hand side of (7) as

$\displaystyle \sum_{a \in {\mathcal S}_J} \frac{k_0^{\Omega(a)}}{a} \sum_{a_1,a_2: [a_1,a_2] = a} \mu((a_1,a_2)) G_{k_0}( \frac{\log a_1}{\log R}, \frac{\log a_2}{\log R} ).$

We may factor ${a = p_1 \ldots p_n}$ for some ${x^\delta < p_1 < \ldots < p_n}$ with ${p_1 \ldots p_n \leq R}$; in particular, ${n < \frac{1 + 4\varpi}{4\delta}}$. The previous expression now becomes

$\displaystyle \sum_{0 \leq n < \frac{1+4\varpi}{4\delta}} k_0^n \sum_{x^\delta < p_1 < \ldots < p_n} \frac{1}{p_1 \ldots p_n}$

$\displaystyle \sum_{\{1,\ldots,n\} = S \cup T} (-1)^{|S \cap T|} G_{k_0}( \sum_{i \in S} \frac{\log p_i}{\log R}, \sum_{j \in T} \frac{\log p_j}{\log R} ).$

Using Mertens’ theorem, we thus conclude an exact formula for ${\alpha}$, and similarly for ${\beta}$:

Proposition 1 (Exact formula) We have

$\displaystyle \alpha = \sum_{0 \leq n < \frac{1+4\varpi}{4\delta}} k_0^n \int_{\frac{4\delta}{1+4\varpi} < t_1 < \ldots < t_n} G_{k_0,n}(t_1,\ldots,t_n) \frac{dt_1 \ldots dt_n}{t_1 \ldots t_n}$

where

$\displaystyle G_{k_0,n}(t_1,\ldots,t_n) := \sum_{\{1,\ldots,n\} = S \cup T} (-1)^{|S \cap T|} G_{k_0}( \sum_{i \in S} t_i, \sum_{j \in T} t_j ).$

Similarly we have

$\displaystyle \beta = \sum_{0 \leq n < \frac{1+4\varpi}{4\delta}} (k_0-1)^n \int_{\frac{4\delta}{1+4\varpi} < t_1 < \ldots < t_n} G_{k_0-1,n}(t_1,\ldots,t_n) \frac{dt_1 \ldots dt_n}{t_1 \ldots t_n}$

where ${G_{k_0-1}}$ and ${G_{k_0-1,n}}$ are defined similarly to ${G_{k_0}}$ and ${G_{k_0,n}}$ by replacing all occurrences of ${k_0}$ with ${k_0-1}$.

These formulae are unwieldy. However if we make some monotonicity hypotheses, namely that ${g^{(k_0-1)}}$ is positive, ${g^{(k_0)}}$ is negative, and ${g^{(k_0+1)}}$ is positive on ${[0,1)}$, then we can get some good estimates on the ${G_{k_0}, G_{k_0-1}}$ (which are now non-negative functions) and hence on ${\alpha,\beta}$. Namely, if ${g^{(k_0)}}$ is negative but increasing then we have

$\displaystyle -g^{(k_0)}(t+t_1) \leq -g^{(k_0)}(\frac{t}{1-t_1})$

for ${0 \leq t_1 < 1}$ and ${t \in [0,1]}$, which implies that

$\displaystyle G_{k_0}(t_1,t_1) \leq (1-t_1)_+^{k_0} G_{k_0}(0,0)$

for any ${t_1 \geq 0}$. A similar argument in fact gives

$\displaystyle G_{k_0}(t_1+t_2,t_1+t_2) \leq (1-t_1)_+^{k_0} G_{k_0}(t_2,t_2)$

for any ${t_1,t_2 \geq 0}$. Iterating this we conclude that

$\displaystyle G_{k_0}(\sum_{i \in S} t_i, \sum_{i \in S} t_i) \leq (\prod_{i \in S} (1-t_i)_+^{k_0}) G_{k_0}(0,0)$

and similarly

$\displaystyle G_{k_0}(\sum_{i \in T} t_i, \sum_{i \in T} t_i) \leq (\prod_{i \in T} (1-t_i)_+^{k_0}) G_{k_0}(0,0).$

From Cauchy-Schwarz we thus have

$\displaystyle G_{k_0}( \sum_{i \in S} t_i, \sum_{i \in T} t_i ) \leq (\prod_{i=1}^n (1 - t_i)_+^{k_0/2}) G_{k_0}(0,0).$

Observe from the binomial formula that of the ${3^n}$ pairs ${(S,T)}$ with ${S \cup T = \{1,\ldots,n\}}$, ${\frac{3^n+1}{2}}$ of them have ${|S \cap T|}$ even, and ${\frac{3^n-1}{2}}$ of them have ${|S \cap T|}$ odd. We thus have

$\displaystyle -\frac{3^n-1}{2} (\prod_{i=1}^n (1 - t_i)_+^{k_0/2}) G_{k_0}(0,0) \leq G_{k_0,n}(t_1,\ldots,t_n) \ \ \ \ \ (8)$

$\displaystyle \leq \frac{3^n+1}{2} (\prod_{i=1}^n (1 - t_i)_+^{k_0/2}) G_{k_0}(0,0).$

We have thus established the upper bound

$\displaystyle \alpha \leq G_{k_0}(0,0) (1 + \kappa) \ \ \ \ \ (9)$

where

$\displaystyle \kappa := \sum_{1 \leq n < \frac{1+4\varpi}{4\delta}} \frac{3^n+1}{2} k_0^n \int_{\frac{4\delta}{1+4\varpi} < t_1 < \ldots < t_n} (\prod_{i=1}^n (1 - t_i)_+^{k_0/2}) \frac{dt_1 \ldots dt_n}{t_1 \ldots t_n}.$

By symmetry we may factorise

$\displaystyle \kappa := \sum_{1 \leq n < \frac{1+4\varpi}{4\delta}} \frac{3^n+1}{2} \frac{k_0^n}{n!} ( \int_{\frac{4\delta}{1+4\varpi} < t \leq 1} (1-t)^{k_0/2}\ \frac{dt}{t})^n.$

The expression ${\kappa}$ is explicitly computable for any given ${\varpi,\delta,k_0}$. Following the recent preprint of Pintz, one can get a slightly looser, but cleaner, bound by using the upper bound

$\displaystyle 1-t \leq \exp(-t)$

and so

$\displaystyle \kappa \leq \sum_{1 \leq n < \frac{1+4\varpi}{4\delta}} \frac{3^n+1}{2} \frac{k_0^n}{n!} (\int_{4\delta/(1+4\varpi)}^\infty \exp( - \frac{k_0}{2} t )\ \frac{dt}{t})^n.$

Note that

$\displaystyle \int_{4\delta/(1+4\varpi)}^\infty \exp( - \frac{k_0}{2} t )\ \frac{dt}{t} = \int_1^\infty \exp( - \frac{2k_0 \delta}{1+4\varpi} t)\ \frac{dt}{t}$

$\displaystyle < \int_1^\infty \exp( - \frac{2k_0 \delta}{1+4\varpi} t)\ dt$

$\displaystyle = \frac{1+4\varpi}{2k_0\delta} \exp( - \frac{2k_0 \delta}{1+4\varpi} )$

and hence

$\displaystyle \kappa \leq \tilde \kappa$

where

$\displaystyle \tilde \kappa := \sum_{1 \leq n < \frac{1+4\varpi}{4\delta}} \frac{1}{n!} \frac{3^n+1}{2} (\frac{1+4\varpi}{2\delta} \exp( - \frac{2k_0 \delta}{1+4\varpi} ))^n.$

In practice we expect the ${n=1}$ term to dominate, thus we have the heuristic approximation

$\displaystyle \kappa \lessapprox \frac{1+4\varpi}{\delta} \exp( - \frac{2k_0 \delta}{1+4\varpi} ).$

Now we turn to the estimation of ${\beta}$. We have an analogue of (8), namely

$\displaystyle -\frac{3^n-1}{2} (\prod_{i=1}^n (1-t_i)^{(k_0-1)/2}) G_{k_0-1}(0,0) \leq G_{k_0-1,n}(t_1,\ldots,t_n)$

$\displaystyle \leq \frac{3^n+1}{2} (\prod_{i=1}^n (1-t_i)^{(k_0-1)/2}) G_{k_0-1}(0,0).$

But we have an improvement in the lower bound in the ${n=1}$ case, because in this case we have

$\displaystyle G_{k_0-1,n}(t) = G_{k_0-1}(t,0) + G_{k_0-1}(0,t) - G_{k_0-1}(t,t).$

From the positive decreasing nature of ${g^{(k_0-1)}}$ we see that ${G_{k_0-1}(t,t) \leq G_{k_0-1}(t,0)}$ and so ${G_{k_0-1,n}(t)}$ is non-negative and can thus be ignored for the purposes of lower bounds. (There are similar improvements available for higher ${n}$ but this seems to only give negligible improvements and will not be pursued here.) Thus we obtain

$\displaystyle \beta \geq G_{k_0-1}(0,0) (1-\kappa') \ \ \ \ \ (10)$

where

$\displaystyle \kappa' := \sum_{2 \leq n < \frac{1+4\varpi}{4\delta}} \frac{3^n-1}{2} \frac{(k_0-1)^n}{n!}$

$\displaystyle (\int_{4\delta/(1+4\varpi)}^1 (1-t)^{(k_0-1)/2}\ \frac{dt}{t})^n.$

Estimating ${\kappa'}$ similarly to ${\kappa}$ we conclude that

$\displaystyle \kappa' \leq \tilde \kappa'$

where

$\displaystyle \tilde \kappa' := \sum_{2 \leq n < \frac{1+4\varpi}{4\delta}} \frac{1}{n!} \frac{3^n-1}{2} (\frac{1+4\varpi}{2\delta} \exp( - \frac{2(k_0-1) \delta}{1+4\varpi} ))^n.$
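Since ${\tilde \kappa}$ and ${\tilde \kappa'}$ are finite sums of explicitly computable terms, they are easy to evaluate numerically. The following minimal sketch (the function name is hypothetical; the parameter values ${\varpi = 1/899}$, ${\delta = 71/154628}$, ${k_0 = 23283}$ are taken from elsewhere in the Polymath8 discussion, purely for illustration) rewrites ${\frac{3^n+1}{2} \frac{x^n}{n!}}$ as ${\frac{(3x)^n + x^n}{2 \cdot n!}}$ and updates the terms recursively, avoiding huge intermediate powers and factorials:

```python
import math

def kappa_tildes(varpi, delta, k0):
    """Evaluate the bounds tilde-kappa and tilde-kappa' defined above.

    Hypothetical helper, for numerical illustration only.  Here
    x = (1+4*varpi)/(2*delta) * exp(-2*k0*delta/(1+4*varpi)), with k0
    replaced by k0-1 (and the sum starting at n=2) for tilde-kappa'.
    """
    # largest n strictly below the cutoff (1+4*varpi)/(4*delta)
    nmax = math.ceil((1 + 4 * varpi) / (4 * delta)) - 1

    def base(k):
        return (1 + 4 * varpi) / (2 * delta) * math.exp(-2 * k * delta / (1 + 4 * varpi))

    x, xp = base(k0), base(k0 - 1)
    kt = ktp = 0.0
    a = b = ap = bp = 1.0  # running x^n/n!, (3x)^n/n! and their primed analogues
    for n in range(1, nmax + 1):
        a *= x / n
        b *= 3 * x / n
        ap *= xp / n
        bp *= 3 * xp / n
        kt += (b + a) / 2         # (3^n + 1)/2 * x^n / n!
        if n >= 2:
            ktp += (bp - ap) / 2  # (3^n - 1)/2 * xp^n / n!
    return kt, ktp

kt, ktp = kappa_tildes(1 / 899, 71 / 154628, 23283)
```

With these illustrative parameters both quantities are tiny (of order ${10^{-6}}$ and below), consistent with the heuristic that the ${n=1}$ term dominates ${\tilde \kappa}$ and that ${\tilde \kappa'}$ is negligible.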

By (9), (10), we see that the condition (1) is implied by

$\displaystyle (1+4\varpi) (1-\kappa') > \frac{4G_{k_0}(0,0)}{k_0 G_{k_0-1}(0,0)} (1+\kappa).$

By Theorem 14 and Lemma 15 of this previous post, we may take the ratio ${\frac{4G_{k_0}(0,0)}{k_0 G_{k_0-1}(0,0)}}$ to be arbitrarily close to ${\frac{j_{k_0-2}^2}{k_0(k_0-1)}}$. We conclude the following theorem.

Theorem 2 Let ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4 + \varpi}$ be such that ${MPZ[\varpi,\delta]}$ holds. Let ${k_0 \geq 2}$ be an integer, define

$\displaystyle \kappa := \sum_{1 \leq n < \frac{1+4\varpi}{4\delta}} \frac{3^n+1}{2} \frac{k_0^n}{n!} (\int_{4\delta/(1+4\varpi)}^1 (1-t)^{k_0/2}\ \frac{dt}{t})^n$

and

$\displaystyle \kappa' := \sum_{2 \leq n < \frac{1+4\varpi}{4\delta}} \frac{3^n-1}{2} \frac{(k_0-1)^n}{n!}$

$\displaystyle (\int_{4\delta/(1+4\varpi)}^1 (1-t)^{(k_0-1)/2}\ \frac{dt}{t})^n$

and suppose that

$\displaystyle (1+4\varpi) (1-\kappa') > \frac{j_{k_0-2}^2}{k_0(k_0-1)} (1+\kappa).$

Then ${DHL[k_0,2]}$ holds.

As noted earlier, we heuristically have

$\displaystyle \tilde \kappa \approx \frac{1+4\varpi}{\delta} \exp( - \frac{2k_0 \delta}{1+4\varpi} )$

and ${\tilde \kappa'}$ is negligible. This constraint is a bit better than the previous condition, in which ${\tilde \kappa'}$ was not present and ${\tilde \kappa}$ was replaced by a quantity roughly of the form ${2 \log(2) k_0 \exp( - \frac{2k_0 \delta}{1+4\varpi})}$.

The purpose of this post is to isolate a combinatorial optimisation problem regarding subset sums; any improvement upon the current known bounds for this problem would lead to numerical improvements for the quantities pursued in the Polymath8 project. (UPDATE: Unfortunately no purely combinatorial improvement is possible, see comments.) We will also record the number-theoretic details of how this combinatorial problem is used in Zhang’s argument establishing bounded prime gaps.

First, some (rough) motivational background, omitting all the number-theoretic details and focusing on the combinatorics. (But readers who just want to see the combinatorial problem can skip the motivation and jump ahead to Lemma 5.) As part of the Polymath8 project we are trying to establish a certain estimate called ${MPZ[\varpi,\delta]}$ for as wide a range of ${\varpi,\delta > 0}$ as possible. Currently the best result we have is:

Theorem 1 (Zhang’s theorem, numerically optimised) ${MPZ[\varpi,\delta]}$ holds whenever ${207\varpi + 43\delta< \frac{1}{4}}$.

Enlarging this region would lead to better values of certain parameters ${k_0}$, ${H}$, which in turn control the best known bound on small gaps between consecutive primes. See this previous post for more discussion of this. At present, the best value ${k_0=23,283}$ of ${k_0}$ is attained by taking ${(\varpi,\delta)}$ sufficiently close to ${(1/899,71/154628)}$, so improving Theorem 1 in the neighbourhood of this value is particularly desirable.

I’ll state exactly what ${MPZ[\varpi,\delta]}$ is below the fold. For now, suffice it to say that it involves a certain number-theoretic function, the von Mangoldt function ${\Lambda}$. To prove the theorem, the first step is to use a certain identity (the Heath-Brown identity) to decompose ${\Lambda}$ into a large number of pieces, which take the form

$\displaystyle \alpha_{1} \ast \ldots \ast \alpha_{n} \ \ \ \ \ (1)$

for some bounded ${n}$ (in Zhang’s paper ${n}$ never exceeds ${20}$) and various weights ${\alpha_{1},\ldots,\alpha_n}$ supported at various scales ${N_1,\ldots,N_n \geq 1}$ that multiply up to approximately ${x}$:

$\displaystyle N_1 \ldots N_n \sim x.$

We can write ${N_i = x^{t_i}}$, thus ignoring negligible errors, ${t_1,\ldots,t_n}$ are non-negative real numbers that add up to ${1}$:

$\displaystyle t_1 + \ldots + t_n = 1.$

A key technical feature of the Heath-Brown identity is that the weights ${\alpha_i}$ associated to sufficiently large values of ${t_i}$ (e.g. ${t_i \geq 1/10}$) are “smooth” in a certain sense; this will be detailed below the fold.

The operation ${\ast}$ is Dirichlet convolution, which is commutative and associative. We can thus regroup the convolution (1) in a number of ways. For instance, given any partition ${\{1,\ldots,n\} = S \cup T}$ into disjoint sets ${S,T}$, we can rewrite (1) as

$\displaystyle \alpha_S \ast \alpha_T$

where ${\alpha_S}$ is the convolution of those ${\alpha_i}$ with ${i \in S}$, and similarly for ${\alpha_T}$.

Zhang’s argument splits into two major pieces, in which estimates for certain classes of the terms (1) are established. Cheating a little bit, the following three results are established:

Theorem 2 (Type 0 estimate, informal version) The term (1) gives an acceptable contribution to ${MPZ[\varpi,\delta]}$ whenever

$\displaystyle t_i > \frac{1}{2} + 2 \varpi$

for some ${i}$.

Theorem 3 (Type I/II estimate, informal version) The term (1) gives an acceptable contribution to ${MPZ[\varpi,\delta]}$ whenever one can find a partition ${\{1,\ldots,n\} = S \cup T}$ such that

$\displaystyle \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \sum_{i \in T} t_i < \frac{1}{2} + \sigma$

where ${\sigma}$ is a quantity such that

$\displaystyle 11 \varpi + 3\delta + 2 \sigma < \frac{1}{4}.$

Theorem 4 (Type III estimate, informal version) The term (1) gives an acceptable contribution to ${MPZ[\varpi,\delta]}$ whenever one can find ${t_i,t_j,t_k}$ with distinct ${i,j,k \in \{1,\ldots,n\}}$ with

$\displaystyle t_i \leq t_j \leq t_k \leq \frac{1}{2}$

and

$\displaystyle 4t_k + 4t_j + 5t_i > 4 + 16 \varpi + \delta.$

The above assertions are oversimplifications; there are some additional minor smallness hypotheses on ${\varpi,\delta}$ that are needed but at the current (small) values of ${\varpi,\delta}$ under consideration they are not relevant and so will be omitted.

The deduction of Theorem 1 from Theorems 2, 3, 4 is then accomplished from the following, purely combinatorial, lemma:

Lemma 5 (Subset sum lemma) Let ${\varpi,\delta > 0}$ be such that

$\displaystyle 207\varpi + 43\delta < \frac{1}{4}. \ \ \ \ \ (2)$

Let ${t_1,\ldots,t_n}$ be non-negative reals such that

$\displaystyle t_1 + \ldots + t_n = 1.$

Then at least one of the following statements hold:

• (Type 0) There is ${1 \leq i \leq n}$ such that ${t_i > \frac{1}{2} + 2 \varpi}$.
• (Type I/II) There is a partition ${\{1,\ldots,n\} = S \cup T}$ such that

$\displaystyle \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \sum_{i \in T} t_i < \frac{1}{2} + \sigma$

where ${\sigma}$ is a quantity such that

$\displaystyle 11 \varpi + 3\delta + 2 \sigma < \frac{1}{4}.$

• (Type III) One can find distinct ${t_i,t_j,t_k}$ with

$\displaystyle t_i \leq t_j \leq t_k \leq \frac{1}{2}$

and

$\displaystyle 4t_k + 4t_j + 5t_i > 4 + 16 \varpi + \delta.$

The purely combinatorial question is whether the hypothesis (2) can be relaxed here to a weaker condition. This would allow us to improve the ranges for Theorem 1 (and hence for the values of ${k_0}$ and ${H}$ alluded to earlier) without needing further improvement on Theorems 2, 3, 4 (although such improvement is also going to be a focus of Polymath8 investigations in the future).

Let us review how this lemma is currently proven. The key sublemma is the following:

Lemma 6 Let ${1/10 < \sigma < 1/2}$, and let ${t_1,\ldots,t_n}$ be non-negative numbers summing to ${1}$. Then one of the following three statements hold:

• (Type 0) There is a ${t_i}$ with ${t_i \geq 1/2 + \sigma}$.
• (Type I/II) There is a partition ${\{1,\ldots,n\} = S \cup T}$ such that

$\displaystyle \frac{1}{2} - \sigma < \sum_{i \in S} t_i \leq \sum_{i \in T} t_i < \frac{1}{2} + \sigma.$

• (Type III) There exist distinct ${i,j,k}$ with ${2\sigma \leq t_i \leq t_j \leq t_k \leq 1/2-\sigma}$ and ${t_i+t_j,t_i+t_k,t_j+t_k \geq 1/2 + \sigma}$.

Proof: Suppose that the Type I/II case never occurs; then every partial sum ${\sum_{i \in S} t_i}$ is either “small” in the sense that it is less than or equal to ${1/2-\sigma}$, or “large” in the sense that it is greater than or equal to ${1/2+\sigma}$, since otherwise we would be in the Type I/II case either with ${S}$ as is and ${T}$ the complement of ${S}$, or vice versa.

Call a summand ${t_i}$ “powerless” if it cannot be used to turn a small partial sum into a large partial sum, thus there are no ${S \subset \{1,\ldots,n\} \backslash \{i\}}$ such that ${\sum_{j \in S} t_j}$ is small and ${t_i + \sum_{j \in S} t_j}$ is large. We then split ${\{1,\ldots,n\} = A \cup B}$ where ${A}$ are the powerless elements and ${B}$ are the powerful elements.

By induction we see that if ${S \subset B}$ and ${\sum_{i \in S} t_i}$ is small, then ${\sum_{i \in S} t_i + \sum_{i \in A} t_i}$ is also small. Thus every sum of powerful summands is either less than ${1/2-\sigma-\sum_{i \in A} t_i}$ or larger than ${1/2+\sigma}$. Since a powerful element must be able to convert a small sum to a large sum (in fact it must be able to convert a small sum of powerful summands to a large sum, by stripping out the powerless summands), we conclude that every powerful element has size greater than ${2\sigma + \sum_{i \in A} t_i}$. We may assume that we are not in the Type 0 case; then every powerful summand is at least ${2\sigma + \sum_{i \in A} t_i}$ and at most ${1/2 - \sigma - \sum_{i \in A} t_i}$. In particular, there have to be at least three powerful summands, since otherwise ${\sum_{i=1}^n t_i}$ could not be as large as ${1}$. As ${\sigma > 1/10}$, we have ${4\sigma > 1/2-\sigma}$, and we conclude that the sum of any two powerful summands is large (which, incidentally, shows that there are exactly three powerful summands). Taking ${t_i,t_j,t_k}$ to be these three powerful summands in increasing order, we land in the Type III case. $\Box$
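The case analysis in this proof is easy to check by brute force on small examples. The following minimal sketch (the helper name `lemma6_case` is made up for illustration) classifies a given tuple by testing all subset sums directly:

```python
import itertools

def lemma6_case(t, sigma):
    """Classify a tuple of non-negative reals summing to 1 as in Lemma 6.

    Hypothetical helper; assumes 1/10 < sigma < 1/2.  Since the full sum
    is 1, a single subset sum strictly inside the window
    (1/2 - sigma, 1/2 + sigma) forces the complementary sum into the
    same window, giving a Type I/II partition.
    """
    n = len(t)
    if any(ti >= 0.5 + sigma for ti in t):
        return "Type 0"
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            if 0.5 - sigma < sum(t[i] for i in S) < 0.5 + sigma:
                return "Type I/II"
    for c in itertools.combinations(range(n), 3):
        ti, tj, tk = sorted(t[i] for i in c)
        # ti + tj is the smallest pairwise sum, so checking it suffices
        if 2 * sigma <= ti and tk <= 0.5 - sigma and ti + tj >= 0.5 + sigma:
            return "Type III"
    return None
```

For example, ${(0.3,0.3,0.4)}$ with ${\sigma = 0.15}$ falls into Type I/II, while ${(1/3,1/3,1/3)}$ with ${\sigma = 0.12}$ falls into Type III; running the classifier over randomly generated tuples never returns `None`, in accordance with the lemma.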

Now we see how Lemma 6 implies Lemma 5. Let ${\varpi,\delta}$ be as in Lemma 5. We take ${\sigma}$ almost as large as we can for the Type I/II case, thus we set

$\displaystyle \sigma := \frac{1}{8} - \frac{11}{2} \varpi - \frac{3}{2} \delta - \epsilon \ \ \ \ \ (3)$

for some sufficiently small ${\epsilon>0}$. We observe from (2) that we certainly have

$\displaystyle \sigma > 2 \varpi$

and

$\displaystyle \sigma > \frac{1}{10}$

with plenty of room to spare. We then apply Lemma 6. The Type 0 case of that lemma then implies the Type 0 case of Lemma 5, while the Type I/II case of Lemma 6 also implies the Type I/II case of Lemma 5. Finally, suppose that we are in the Type III case of Lemma 6. Since

$\displaystyle 4t_k + 4t_j + 5 t_i = \frac{5}{2} (t_i+t_j) + \frac{5}{2}(t_i+t_k) + \frac{3}{2} (t_j+t_k)$

we thus have

$\displaystyle 4t_k + 4t_j + 5 t_i \geq \frac{13}{2} (\frac{1}{2}+\sigma)$

and so we will be done if

$\displaystyle \frac{13}{2} (\frac{1}{2}+\sigma) > 4 + 16 \varpi + \delta.$

Inserting (3) and taking ${\epsilon}$ small enough, it suffices to verify that

$\displaystyle \frac{13}{2} (\frac{1}{2}+\frac{1}{8} - \frac{11}{2} \varpi - \frac{3}{2}\delta) > 4 + 16 \varpi + \delta$

but after some computation this is equivalent to (2).
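This final computation can be verified with exact rational arithmetic: viewing both sides of the inequality as linear forms in ${(\varpi,\delta)}$ and collecting coefficients shows that the difference of the two sides is exactly ${\frac{1}{4}(\frac{1}{4} - 207\varpi - 43\delta)}$. A quick sketch:

```python
from fractions import Fraction as F

# Treat both sides of the inequality as linear forms in (varpi, delta):
#   LHS = (13/2) * (1/2 + 1/8 - (11/2)*varpi - (3/2)*delta)
#   RHS = 4 + 16*varpi + delta
# and collect the coefficients of LHS - RHS exactly.
c0 = F(13, 2) * (F(1, 2) + F(1, 8)) - 4   # constant term
cw = F(13, 2) * F(-11, 2) - 16            # coefficient of varpi
cd = F(13, 2) * F(-3, 2) - 1              # coefficient of delta

# LHS - RHS = 1/16 - (207/4)*varpi - (43/4)*delta
#           = (1/4) * (1/4 - 207*varpi - 43*delta),
# so the inequality holds precisely when 207*varpi + 43*delta < 1/4.
```

Positivity of the difference is thus equivalent to condition (2), as claimed.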

It seems that there is some slack in this computation; some of the conclusions of the Type III case of Lemma 5, in particular, ended up being “wasted”, and it is possible that one did not fully exploit all the partial sums that could be used to create a Type I/II situation. So there may be a way to make improvements through purely combinatorial arguments. (UPDATE: As it turns out, this is sadly not the case: consideration of the case when ${n=4}$, ${t_1 = 1/4 - 3\sigma/2}$, and ${t_2=t_3=t_4 = 1/4+\sigma/2}$ shows that one cannot obtain any further improvement without actually improving the Type I/II and Type III analysis.)

A technical remark: for the application to Theorem 1, it is possible to enforce a bound on the number ${n}$ of summands in Lemma 5. More precisely, we may assume that ${n}$ is an even number with ${n \leq 2K}$ for any natural number ${K}$ we please, at the cost of adding the additional constraint ${t_i > \frac{1}{K}}$ to the Type III conclusion. Since ${t_i}$ is already at least ${2\sigma}$, which is at least ${\frac{1}{5}}$, one can safely take ${K=5}$, so ${n}$ can be taken to be an even number of size at most ${10}$, which in principle makes the problem of optimising Lemma 5 a fixed linear programming problem. (Zhang takes ${K=10}$, but this appears to be overkill. On the other hand, ${K}$ does not appear to be a parameter that overly influences the final numerical bounds.)

Below the fold I give the number-theoretic details of the combinatorial aspects of Zhang’s argument that correspond to the combinatorial problem described above.

This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project to improve the various parameters in Zhang’s proof that bounded gaps between primes occur infinitely often. Given that the comments on that page are getting quite lengthy, this is also a good opportunity to “roll over” that thread.

We will continue the notation from the previous post, including the concept of an admissible tuple, the use of an asymptotic parameter ${x}$ going to infinity, a quantity ${w}$ depending on ${x}$ that goes to infinity sufficiently slowly with ${x}$, and ${W := \prod_{p<w} p}$ (the ${W}$-trick).

The objective of this portion of the Polymath8 project is to make as efficient as possible the connection between two types of results, which we call ${DHL[k_0,2]}$ and ${MPZ[\varpi,\delta]}$. Let us first state ${DHL[k_0,2]}$, which has an integer parameter ${k_0 \geq 2}$:

Conjecture 1 (${DHL[k_0,2]}$) Let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then there are infinitely many translates ${n+{\mathcal H}}$ of ${{\mathcal H}}$ which contain at least two primes.

Zhang was the first to prove a result of this type with ${k_0 = 3,500,000}$. Since then the value of ${k_0}$ has been lowered substantially; at this time of writing, the current record is ${k_0 = 26,024}$.

There are two basic ways known currently to attain this conjecture. The first is to use the Elliott-Halberstam conjecture ${EH[\theta]}$ for some ${\theta>1/2}$:

Conjecture 2 (${EH[\theta]}$) One has

$\displaystyle \sum_{1 \leq q \leq x^\theta} \sup_{a \in ({\bf Z}/q{\bf Z})^\times} |\sum_{n < x: n = a\ (q)} \Lambda(n) - \frac{1}{\phi(q)} \sum_{n < x} \Lambda(n)|$

$\displaystyle = O( \frac{x}{\log^A x} )$

for all fixed ${A>0}$. Here we use the abbreviation ${n=a\ (q)}$ for ${n=a \hbox{ mod } q}$.

Here of course ${\Lambda}$ is the von Mangoldt function and ${\phi}$ the Euler totient function. It is conjectured that ${EH[\theta]}$ holds for all ${0 < \theta < 1}$, but this is currently only known for ${0 < \theta < 1/2}$, an important result known as the Bombieri-Vinogradov theorem.

In a breakthrough paper, Goldston, Yildirim, and Pintz established an implication of the form

$\displaystyle EH[\theta] \implies DHL[k_0,2] \ \ \ \ \ (1)$

for any ${1/2 < \theta < 1}$, where ${k_0 = k_0(\theta)}$ depends on ${\theta}$. This deduction was very recently optimised by Farkas, Pintz, and Revesz and also independently in the comments to the previous blog post, leading to the following implication:

Theorem 3 (EH implies DHL) Let ${1/2 < \theta < 1}$ be a real number, and let ${k_0 \geq 2}$ be an integer obeying the inequality

$\displaystyle 2\theta > \frac{j_{k_0-2}^2}{k_0(k_0-1)}, \ \ \ \ \ (2)$

where ${j_n}$ is the first positive zero of the Bessel function ${J_n(x)}$. Then ${EH[\theta]}$ implies ${DHL[k_0,2]}$.

Note that the right-hand side of (2) is larger than ${1}$, but tends asymptotically to ${1}$ as ${k_0 \rightarrow \infty}$. We give an alternate proof of Theorem 3 below the fold.
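To get a feel for the right-hand side of (2), one can use the classical leading-order asymptotic ${j_n \approx n + 1.8557571 n^{1/3}}$ for the first positive zero of ${J_n}$ (the constant here is taken from the standard asymptotic expansion; the sketch below is a rough approximation only, not a substitute for exact evaluation of the Bessel zero):

```python
import math

def bessel_ratio(k0):
    """Approximate j_{k0-2}^2 / (k0*(k0-1)) via the leading term of the
    standard asymptotic j_n ~ n + 1.8557571*n**(1/3) for the first
    positive zero of J_n (rough approximation only)."""
    n = k0 - 2
    j = n + 1.8557571 * n ** (1.0 / 3.0)
    return j * j / (k0 * (k0 - 1))
```

For instance, the ratio is already within about ${1\%}$ of ${1}$ for ${k_0 = 10{,}000}$, and decreases towards ${1}$ as ${k_0}$ grows.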

Motohashi and Pintz modified implications of the form of Theorem 3, replacing ${EH[\theta]}$ by an easier conjecture ${MPZ[\varpi,\delta]}$ for some ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4+\varpi}$, at the cost of degrading the sufficient condition (2) slightly. In our notation, this conjecture takes the following form for each choice of parameters ${\varpi,\delta}$:

Conjecture 4 (${MPZ[\varpi,\delta]}$) Let ${{\mathcal H}}$ be a fixed ${k_0}$-tuple (not necessarily admissible) for some fixed ${k_0 \geq 2}$, and let ${b\ (W)}$ be a primitive residue class. Then

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} \sum_{a \in C(q)} |\Delta_{b,W}(\Lambda; q,a)| = O( x \log^{-A} x) \ \ \ \ \ (3)$

for any fixed ${A>0}$, where ${I = (w,x^{\delta})}$, ${{\mathcal S}_I}$ are the square-free integers whose prime factors lie in ${I}$, and ${\Delta_{b,W}(\Lambda;q,a)}$ is the quantity

$\displaystyle \Delta_{b,W}(\Lambda;q,a) := | \sum_{x \leq n \leq 2x: n=b\ (W); n = a\ (q)} \Lambda(n) \ \ \ \ \ (4)$

$\displaystyle - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x: n = b\ (W)} \Lambda(n)|.$

and ${C(q)}$ is the set of congruence classes

$\displaystyle C(q) := \{ a \in ({\bf Z}/q{\bf Z})^\times: P(a) = 0 \}$

and ${P}$ is the polynomial

$\displaystyle P(a) := \prod_{h \in {\mathcal H}} (a+h).$

This is a weakened version of the Elliott-Halberstam conjecture:

Proposition 5 (EH implies MPZ) Let ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4+\varpi}$. Then ${EH[1/2+2\varpi+\epsilon]}$ implies ${MPZ[\varpi,\delta]}$ for any ${\epsilon>0}$. (In abbreviated form: ${EH[1/2+2\varpi+]}$ implies ${MPZ[\varpi,\delta]}$.)

In particular, since ${EH[\theta]}$ is conjecturally true for all ${0 < \theta < 1/2}$, we conjecture ${MPZ[\varpi,\delta]}$ to be true for all ${0 < \varpi < 1/4}$ and ${0<\delta<1/4+\varpi}$.

Proof: Define

$\displaystyle E(q) := \sup_{a \in ({\bf Z}/q{\bf Z})^\times} |\sum_{x \leq n \leq 2x: n = a\ (q)} \Lambda(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} \Lambda(n)|$

then the hypothesis ${EH[1/2+2\varpi+\epsilon]}$ (applied to ${x}$ and ${2x}$ and then subtracting) tells us that

$\displaystyle \sum_{1 \leq q \leq Wx^{1/2+2\varpi}} E(q) \ll x \log^{-A} x$

for any fixed ${A>0}$. From the Chinese remainder theorem and the Siegel-Walfisz theorem we have

$\displaystyle \sup_{a \in ({\bf Z}/q{\bf Z})^\times} \Delta_{b,W}(\Lambda;q,a) \ll E(qW) + \frac{1}{\phi(q)} x \log^{-A} x$

for any ${q}$ coprime to ${W}$ (and in particular for ${q \in {\mathcal S}_I}$). Since ${|C(q)| \leq k_0^{\Omega(q)}}$, where ${\Omega(q)}$ is the number of prime divisors of ${q}$, we can thus bound the left-hand side of (3) by

$\displaystyle \ll \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} k_0^{\Omega(q)} E(qW) + k_0^{\Omega(q)} \frac{1}{\phi(q)} x \log^{-A} x.$

The contribution of the second term is ${O(x \log^{-A+O(1)} x)}$ by standard estimates (see Proposition 8 below). Using the very crude bound

$\displaystyle E(q) \ll \frac{1}{\phi(q)} x \log x$

and standard estimates we also have

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} k_0^{2\Omega(q)} E(qW) \ll x \log^{O(1)} x$

and the claim now follows from the Cauchy-Schwarz inequality. $\Box$

In practice, the conjecture ${MPZ[\varpi,\delta]}$ is easier to prove than ${EH[1/2+2\varpi+]}$ due to the restriction of the residue classes ${a}$ to ${C(q)}$, and also the restriction of the modulus ${q}$ to ${x^\delta}$-smooth numbers. Zhang proved ${MPZ[\varpi,\varpi]}$ for any ${0 < \varpi < 1/1168}$. More recently, our Polymath8 group has analysed Zhang’s argument (using in part a corrected version of the analysis of a recent preprint of Pintz) to obtain ${MPZ[\varpi,\delta]}$ whenever ${\delta, \varpi > 0}$ are such that

$\displaystyle 207\varpi + 43\delta < \frac{1}{4}.$

The work of Motohashi and Pintz, and later Zhang, implicitly describe arguments that allow one to deduce ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$ provided that ${k_0}$ is sufficiently large depending on ${\varpi,\delta}$. The best implication of this sort that we have been able to verify thus far is the following result, established in the previous post:

Theorem 6 (MPZ implies DHL) Let ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4+\varpi}$, and let ${k_0 \geq 2}$ be an integer obeying the constraint

$\displaystyle 1+4\varpi > \frac{j_{k_0-2}^2}{k_0(k_0-1)} (1+\kappa) \ \ \ \ \ (5)$

where ${\kappa}$ is the quantity

$\displaystyle \kappa := \sum_{1 \leq n < \frac{1+4\varpi}{2\delta}} (1 - \frac{2n \delta}{1 + 4\varpi})^{k_0/2} \prod_{j=1}^{n} (1 + 3k_0 \log(1+\frac{1}{j})).$

Then ${MPZ[\varpi,\delta]}$ implies ${DHL[k_0,2]}$.

This complicated version of ${\kappa}$ is roughly of size ${3 \log(2) k_0 \exp( - k_0 \delta)}$. It is unlikely to be optimal; the work of Motohashi-Pintz and Pintz suggests that it can essentially be improved to ${\frac{1}{\delta} \exp(-k_0 \delta)}$, but currently we are unable to verify this claim. One of the aims of this post is to encourage further discussion as to how to improve the ${\kappa}$ term in results such as Theorem 6.

We remark that as (5) is an open condition, it is unaffected by infinitesimal modifications to ${\varpi,\delta}$, and so we do not ascribe much importance to such modifications (e.g. replacing ${\varpi}$ by ${\varpi-\epsilon}$ for some arbitrarily small ${\epsilon>0}$).

The known deductions of ${DHL[k_0,2]}$ from claims such as ${EH[\theta]}$ or ${MPZ[\varpi,\delta]}$ rely on the following elementary observation of Goldston, Pintz, and Yildirim (essentially a weighted pigeonhole principle), which we have placed in “${W}$-tricked form”:

Lemma 7 (Criterion for DHL) Let ${k_0 \geq 2}$. Suppose that for each fixed admissible ${k_0}$-tuple ${{\mathcal H}}$ and each congruence class ${b\ (W)}$ such that ${b+h}$ is coprime to ${W}$ for all ${h \in {\mathcal H}}$, one can find a non-negative weight function ${\nu: {\bf N} \rightarrow {\bf R}^+}$, fixed quantities ${\alpha,\beta > 0}$, a quantity ${A>0}$, and a fixed positive power ${R}$ of ${x}$ such that one has the upper bound

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) \leq (\alpha+o(1)) A\frac{x}{W}, \ \ \ \ \ (6)$

the lower bound

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) \theta(n+h_i) \geq (\beta-o(1)) A\frac{x}{W} \log R \ \ \ \ \ (7)$

for all ${h_i \in {\mathcal H}}$, and the key inequality

$\displaystyle \frac{\log R}{\log x} > \frac{1}{k_0} \frac{\alpha}{\beta} \ \ \ \ \ (8)$

holds. Then ${DHL[k_0,2]}$ holds. Here ${\theta(n)}$ is defined to equal ${\log n}$ when ${n}$ is prime and ${0}$ otherwise.

Proof: Consider the quantity

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) (\sum_{h \in {\mathcal H}} \theta(n+h) - \log(3x)). \ \ \ \ \ (9)$

By (6), (7), this quantity is at least

$\displaystyle k_0 \beta A\frac{x}{W} \log R - \alpha \log(3x) A\frac{x}{W} - o(A\frac{x}{W} \log x).$

By (8), this expression is positive for all sufficiently large ${x}$. On the other hand, (9) can only be positive if at least one summand is positive, which only can happen when ${n+{\mathcal H}}$ contains at least two primes for some ${x \leq n \leq 2x}$ with ${n=b\ (W)}$. Letting ${x \rightarrow \infty}$ we obtain ${DHL[k_0,2]}$ as claimed. $\Box$

In practice, the quantity ${R}$ (referred to as the sieve level) is a power of ${x}$ such as ${x^{\theta/2}}$ or ${x^{1/4+\varpi}}$, and reflects the strength of the distribution hypothesis ${EH[\theta]}$ or ${MPZ[\varpi,\delta]}$ that is available; the quantity ${R}$ will also be a key parameter in the definition of the sieve weight ${\nu}$. The factor ${A}$ reflects the order of magnitude of the expected density of ${\nu}$ in the residue class ${b\ (W)}$; it could be absorbed into the sieve weight ${\nu}$ by dividing that weight by ${A}$, but it is convenient not to enforce such a normalisation so as not to clutter up the formulae. In practice, ${A}$ will be some combination of ${\frac{\phi(W)}{W}}$ and ${\log R}$.

Once one has decided to rely on Lemma 7, the next main task is to select a good weight ${\nu}$ for which the ratio ${\alpha/\beta}$ is as small as possible (and for which the sieve level ${R}$ is as large as possible). To ensure non-negativity, we use the Selberg sieve

$\displaystyle \nu = \lambda^2, \ \ \ \ \ (10)$

where ${\lambda(n)}$ takes the form

$\displaystyle \lambda(n) = \sum_{d \in {\mathcal S}_I: d|P(n)} \mu(d) a_d$

for some weights ${a_d \in {\bf R}}$ vanishing for ${d>R}$ that are to be chosen, where ${I \subset (w,+\infty)}$ is an interval and ${P}$ is the polynomial ${P(n) := \prod_{h \in {\mathcal H}} (n+h)}$. If the distribution hypothesis is ${EH[\theta]}$, one takes ${R := x^{\theta/2}}$ and ${I := (w,+\infty)}$; if the distribution hypothesis is instead ${MPZ[\varpi,\delta]}$, one takes ${R := x^{1/4+\varpi}}$ and ${I := (w,x^\delta)}$.

One has a useful amount of flexibility in selecting the weights ${a_d}$ for the Selberg sieve. In the original work of Goldston, Pintz, and Yildirim, as well as in the subsequent paper of Zhang, the choice

$\displaystyle a_d := (\log \frac{R}{d})_+^{k_0+\ell_0}$

is used for some additional parameter ${\ell_0 > 0}$ to be optimised over. More generally, one can take

$\displaystyle a_d := g( \frac{\log d}{\log R} )$

for some suitable (in particular, sufficiently smooth) cutoff function ${g: {\bf R} \rightarrow {\bf R}}$. We will refer to this choice of sieve weights as the “analytic Selberg sieve”; this is the choice used in the analysis in the previous post.

However, there is a slight variant choice of sieve weights that one can use, which I will call the “elementary Selberg sieve”, and it takes the form

$\displaystyle a_d := \frac{1}{\Phi(d) \Delta(d)} \sum_{q \in {\mathcal S}_I: (q,d)=1} \frac{1}{\Phi(q)} f'( \frac{\log dq}{\log R}) \ \ \ \ \ (11)$

for a sufficiently smooth function ${f: {\bf R} \rightarrow {\bf R}}$, where

$\displaystyle \Phi(d) := \prod_{p|d} \frac{p-k_0}{k_0}$

for ${d \in {\mathcal S}_I}$ is a ${k_0}$-variant of the Euler totient function, and

$\displaystyle \Delta(d) := \prod_{p|d} \frac{k_0}{p} = \frac{k_0^{\Omega(d)}}{d}$

for ${d \in {\mathcal S}_I}$ is a ${k_0}$-variant of the function ${1/d}$. (The derivative on the ${f}$ cutoff is convenient for computations, as will be made clearer later in this post.) This choice of weights ${a_d}$ may seem somewhat arbitrary, but it arises naturally when considering how to optimise the quadratic form

$\displaystyle \sum_{d_1,d_2 \in {\mathcal S}_I} a_{d_1} a_{d_2} \Delta([d_1,d_2])$

(which arises naturally in the estimation of ${\alpha}$ in (6)) subject to a fixed value of ${a_1}$ (which morally is associated to the estimation of ${\beta}$ in (7)); this is discussed in any sieve theory text as part of the general theory of the Selberg sieve, e.g. Friedlander-Iwaniec.
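For concreteness, since ${\Phi}$ and ${\Delta}$ are defined multiplicatively over the distinct prime factors of a squarefree ${d}$, they can be evaluated exactly with rational arithmetic; here is a minimal sketch (helper names hypothetical, small illustrative inputs only):

```python
from fractions import Fraction

def distinct_prime_factors(d):
    """Distinct prime factors of d by trial division (d squarefree here)."""
    ps, p = [], 2
    while p * p <= d:
        if d % p == 0:
            ps.append(p)
            while d % p == 0:
                d //= p
        p += 1
    if d > 1:
        ps.append(d)
    return ps

def Phi(d, k0):
    """Phi(d) = product over p | d of (p - k0)/k0, the k0-variant of Euler phi."""
    out = Fraction(1)
    for p in distinct_prime_factors(d):
        out *= Fraction(p - k0, k0)
    return out

def Delta(d, k0):
    """Delta(d) = product over p | d of k0/p = k0^Omega(d)/d for squarefree d."""
    return Fraction(k0 ** len(distinct_prime_factors(d)), d)
```

For instance, with ${k_0 = 5}$ and ${d = 77 = 7 \cdot 11}$ one gets ${\Phi(77) = \frac{2}{5} \cdot \frac{6}{5} = \frac{12}{25}}$ and ${\Delta(77) = \frac{25}{77}}$.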

The use of the elementary Selberg sieve for the bounded prime gaps problem was studied by Motohashi and Pintz. Their arguments give an alternate derivation of ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$ for ${k_0}$ sufficiently large, although unfortunately we were not able to confirm some of their calculations regarding the precise dependence of ${k_0}$ on ${\varpi,\delta}$, and in particular we have not yet been able to improve upon the specific criterion in Theorem 6 using the elementary sieve. However it is quite plausible that such improvements could become available with additional arguments.

Below the fold we describe how the elementary Selberg sieve can be used to reprove Theorem 3, and discuss how it could potentially be used to improve upon Theorem 6. (But the elementary Selberg sieve and the analytic Selberg sieve are in any event closely related; see the appendix of this paper of mine with Ben Green for some further discussion.) For the purposes of polymath8, either developing the elementary Selberg sieve or continuing the analysis of the analytic Selberg sieve from the previous post would be a relevant topic of conversation in the comments to this post.

In a recent paper, Yitang Zhang has proven the following theorem:

Theorem 1 (Bounded gaps between primes) There exists a natural number ${H}$ such that there are infinitely many pairs of distinct primes ${p,q}$ with ${|p-q| \leq H}$.

Zhang obtained the explicit value of ${70,000,000}$ for ${H}$. A polymath project has been proposed to lower this value and also to improve the understanding of Zhang’s results; as of this time of writing, the current “world record” is ${H = 4,802,222}$ (and the link given should stay updated with the most recent progress).

Zhang’s argument naturally divides into three steps, which we describe in reverse order. The last step, which is the most elementary, is to deduce the above theorem from the following weak version of the Dickson-Hardy-Littlewood (DHL) conjecture for some ${k_0}$:

Theorem 2 (${DHL[k_0,2]}$) Let ${{\mathcal H}}$ be an admissible ${k_0}$-tuple, that is to say a tuple of ${k_0}$ distinct integers which avoids at least one residue class mod ${p}$ for every prime ${p}$. Then there are infinitely many translates of ${{\mathcal H}}$ that contain at least two primes.

Zhang obtained ${DHL[k_0,2]}$ for ${k_0 = 3,500,000}$. The current best value of ${k_0}$ is ${341,640}$, as discussed in this previous blog post. To get from ${DHL[k_0,2]}$ to Theorem 1, one has to exhibit an admissible ${k_0}$-tuple of diameter at most ${H}$. For instance, with ${k_0 = 341,640}$, the narrowest admissible ${k_0}$-tuple that we can construct has diameter ${4,802,222}$, which explains the current world record. There is an active discussion on trying to improve the constructions of admissible tuples at this blog post; it is conceivable that some combination of computer search and clever combinatorial constructions could obtain slightly better values of ${H}$ for a given value of ${k_0}$. The relationship between ${H}$ and ${k_0}$ is approximately of the form ${H \approx k_0 \log k_0}$ (and a classical estimate of Montgomery and Vaughan tells us that we cannot make ${H}$ much smaller than ${\frac{1}{2} k_0 \log k_0}$; see this previous post for some related discussion).

The second step in Zhang’s argument, which is somewhat less elementary (relying primarily on the sieve theory of Goldston, Yildirim, Pintz, and Motohashi), is to deduce ${DHL[k_0,2]}$ from a certain conjecture ${MPZ[\varpi,\delta]}$ for some ${\varpi,\delta > 0}$. Here is one formulation of the conjecture, more or less as (implicitly) stated in Zhang’s paper:

Conjecture 3 (${MPZ[\varpi,\delta]}$) Let ${{\mathcal H}}$ be an admissible tuple, let ${h_i}$ be an element of ${{\mathcal H}}$, let ${x}$ be a large parameter, and define

$\displaystyle D := x^{1/4+\varpi},$

$\displaystyle {\mathcal P} := \prod_{p: p < x^{\delta}} p,$

$\displaystyle P(n) := \prod_{h \in {\mathcal H}} (n+h),$

$\displaystyle C_i(d) := \{ c \in {\bf Z}/d{\bf Z}: (c,d) = 1; P(c-h_i) = 0 \hbox{ mod } d \}$

for any natural number ${d}$, and

$\displaystyle \Delta(\gamma;d,c) = \sum_{x \leq n \leq 2x: n = c \hbox{ mod } d} \gamma(n) - \frac{1}{\varphi(d)} \sum_{x \leq n \leq 2x: (n,d) = 1} \gamma(n)$

for any function ${\gamma: {\bf N} \rightarrow {\bf C}}$. Let ${\theta(n)}$ equal ${\log p}$ when ${n}$ is a prime ${p}$, and ${\theta(n)=0}$ otherwise. Then one has

$\displaystyle \sum_{d < D^2; d|{\mathcal P}} \sum_{c \in C_i(d)} |\Delta(\theta; d, c )| \ll x \log^{-A} x$

for any fixed ${A > 0}$.
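To make these definitions concrete, the following Python sketch computes the residue sets ${C_i(d)}$ for small moduli; the tuple ${{\mathcal H} = (0,2,6)}$ and the moduli are illustrative choices, not taken from Zhang's paper.

```python
from math import gcd

def residue_sets(H, h_i, d):
    """Compute C_i(d) = {c in Z/dZ : (c,d) = 1 and P(c - h_i) = 0 mod d},
    where P(n) = prod_{h in H} (n + h)."""
    def P_mod(n):
        prod = 1
        for h in H:
            prod = (prod * (n + h)) % d
        return prod
    return {c for c in range(d) if gcd(c, d) == 1 and P_mod(c - h_i) == 0}

# For H = (0, 2, 6) and h_i = 0: the residues mod 5 entering the MPZ sum
print(residue_sets((0, 2, 6), 0, 5))  # {3, 4}
```

The sum in the conjecture then ranges over squarefree moduli ${d < D^2}$ built from the primes below ${x^\delta}$, with ${c}$ running over exactly these sets.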

Note that this is slightly different from the formulation of ${MPZ[\varpi]}$ in the previous post; I have reverted to Zhang’s formulation here as the primary purpose of this post is to read through Zhang’s paper. However, I have distinguished two separate parameters here ${\varpi,\delta}$ instead of one, as it appears that there is some room to optimise by making these two parameters different.

In the previous post, I described how one can deduce ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$. Ignoring an exponentially small error ${\kappa}$, it turns out that one can deduce ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$ whenever one can find a smooth function ${g: [0,1] \rightarrow {\bf R}}$ vanishing to order at least ${k_0}$ at ${1}$ such that

$\displaystyle k_0 \int_0^1 g^{(k_0-1)}(x)^2 \frac{x^{k_0-2}}{(k_0-2)!}\ dx > \frac{4}{1+4\varpi} \int_0^1 g^{(k_0)}(x)^2 \frac{x^{k_0-1}}{(k_0-1)!}\ dx.$

By selecting ${g(x) := \frac{1}{(k_0+l_0)!} (1-x)^{k_0+l_0}}$ for a real parameter ${l_0>0}$ to optimise over, and ignoring the technical ${\kappa}$ term alluded to previously (which is the only quantity here that depends on ${\delta}$), this gives ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$ whenever

$\displaystyle k_0 > (\sqrt{1+4\varpi} - 1)^{-2} \ \ \ \ \ (1)$

It may be possible to do better than this by making smarter choices for ${g}$, or by performing some sort of numerical calculus of variations or spectral theory; people interested in this topic are invited to discuss it in the previous post.

The final, and deepest, part of Zhang’s work is the following theorem (Theorem 2 from Zhang’s paper, whose proof occupies Sections 6-13 of that paper, and is about 32 pages long):

Theorem 4 (Zhang) ${MPZ[\varpi,\varpi]}$ is true for all ${0 < \varpi \leq \frac{1}{1168}}$.

The significance of the fraction ${1/1168}$ is that Zhang’s argument proceeds for a general choice of ${\varpi > 0}$, but ultimately the argument only closes if one has

$\displaystyle \frac{31}{32} + 36 \varpi \leq 1 - \frac{\varpi}{2}$

(see page 53 of Zhang) which is equivalent to ${\varpi \leq 1/1168}$. Plugging in this choice of ${\varpi}$ into (1) then gives ${DHL[k_0,2]}$ with ${k_0 = 341,640}$ as stated previously.
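For concreteness, the threshold in (1) is easy to evaluate numerically; the sketch below (which ignores the ${\kappa}$ term, as in the discussion above) plugs in ${\varpi = 1/1168}$.

```python
import math

def k0_threshold(varpi):
    """Right-hand side of (1): ignoring the kappa term, DHL[k_0,2] follows
    from MPZ[varpi,delta] once k_0 exceeds this quantity."""
    return (math.sqrt(1 + 4 * varpi) - 1) ** (-2)

print(k0_threshold(1 / 1168))  # close to 341,640, the value of k_0 quoted above
```

Larger ${\varpi}$ lowers the threshold rapidly, which is why improvements to the value of ${\varpi}$ propagate to ${k_0}$ and then to ${H}$.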

Improving the value of ${\varpi}$ in Theorem 4 would lead to improvements in ${k_0}$ and then ${H}$ as discussed above. The purpose of this reading seminar is then twofold:

1. Going through Zhang’s argument in order to improve the value of ${\varpi}$ (perhaps by decreasing ${\delta}$); and
2. Gaining a more holistic understanding of Zhang’s argument (and perhaps to find some more “global” improvements to that argument), as well as related arguments such as the prior work of Bombieri, Fouvry, Friedlander, and Iwaniec that Zhang’s work is based on.

In addition to reading through Zhang’s paper, the following material is likely to be relevant:

I envisage a loose, unstructured format for the reading seminar. In the comments below, I am going to post my own impressions, questions, and remarks as I start going through the material, and I encourage other participants to do the same. The most obvious thing to do is to go through Zhang’s Sections 6-13 in linear order, but it may make sense for some participants to follow a different path. One obvious near-term goal is to carefully go through Zhang’s arguments for ${MPZ[\varpi,\delta]}$ instead of ${MPZ[1/1168,1/1168]}$, and record exactly how various exponents depend on ${\varpi,\delta}$, and what inequalities these parameters need to obey for the arguments to go through. It may be that this task can be done at a fairly superficial level without the need to carefully go through the analytic number theory estimates in that paper, though of course we should also be doing that as well. This may lead to some “cheap” optimisations of ${\varpi}$ which can then propagate to improved bounds on ${k_0}$ and ${H}$ thanks to the other parts of the Polymath project.

Everyone is welcome to participate in this project (as per the usual polymath rules); however, I would request that “meta” comments about the project that are not directly related to the task of reading Zhang’s paper and related works be placed instead on the polymath proposal page. (Similarly, comments regarding the optimisation of ${k_0}$ given ${\varpi}$ and ${\delta}$ should be placed at this post, while comments on the optimisation of ${H}$ given ${k_0}$ should be given at this post.) On the other hand, asking questions about Zhang’s paper, even (or especially!) “dumb” ones, would be very appropriate for this post and such questions are encouraged.

Suppose one is given a ${k_0}$-tuple ${{\mathcal H} = (h_1,\ldots,h_{k_0})}$ of ${k_0}$ distinct integers for some ${k_0 \geq 1}$, arranged in increasing order. When is it possible to find infinitely many translates ${n + {\mathcal H} = (n+h_1,\ldots,n+h_{k_0})}$ of ${{\mathcal H}}$ which consist entirely of primes? The case ${k_0=1}$ is just Euclid’s theorem on the infinitude of primes, but the case ${k_0=2}$ is already open in general, with the ${{\mathcal H} = (0,2)}$ case being the notorious twin prime conjecture.

On the other hand, there are some tuples ${{\mathcal H}}$ for which one can easily answer the above question in the negative. For instance, the only translate of ${(0,1)}$ that consists entirely of primes is ${(2,3)}$, basically because each translate of ${(0,1)}$ must contain an even number, and the only even prime is ${2}$. More generally, if there is a prime ${p}$ such that ${{\mathcal H}}$ meets each of the ${p}$ residue classes ${0 \hbox{ mod } p, 1 \hbox{ mod } p, \ldots, p-1 \hbox{ mod } p}$, then every translate of ${{\mathcal H}}$ contains at least one multiple of ${p}$; since ${p}$ is the only multiple of ${p}$ that is prime, this shows that there are only finitely many translates of ${{\mathcal H}}$ that consist entirely of primes.

To avoid this obstruction, let us call a ${k_0}$-tuple ${{\mathcal H}}$ admissible if it avoids at least one residue class ${\hbox{ mod } p}$ for each prime ${p}$. It is easy to check for admissibility in practice, since a ${k_0}$-tuple is automatically admissible at every prime ${p}$ larger than ${k_0}$, so one only needs to check a finite number of primes in order to decide on the admissibility of a given tuple. For instance, ${(0,2)}$ or ${(0,2,6)}$ are admissible, but ${(0,2,4)}$ is not (because it covers all the residue classes modulo ${3}$). We then have the famous Hardy-Littlewood prime tuples conjecture:

Conjecture 1 (Prime tuples conjecture, qualitative form) If ${{\mathcal H}}$ is an admissible ${k_0}$-tuple, then there exist infinitely many translates of ${{\mathcal H}}$ that consist entirely of primes.

This conjecture is extremely difficult (containing the twin prime conjecture, for instance, as a special case), and in fact there is no explicitly known example of an admissible ${k_0}$-tuple with ${k_0 \geq 2}$ for which we can verify this conjecture (although, thanks to the recent work of Zhang, we know that ${(0,d)}$ satisfies the conclusion of the prime tuples conjecture for some ${0 < d < 70,000,000}$, even if we can’t yet say what the precise value of ${d}$ is).
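As noted above, admissibility is a finite check, since only the primes ${p \leq k_0}$ can possibly be covered completely. A minimal Python sketch of such a checker (an illustration, not taken from any of the papers under discussion):

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(n + 1) if sieve[p]]

def is_admissible(H):
    """H is admissible iff it avoids at least one residue class mod p for
    every prime p; only primes p <= len(H) can cover all classes."""
    return all(len({h % p for h in H}) < p for p in primes_upto(len(H)))

print(is_admissible((0, 2)), is_admissible((0, 2, 6)), is_admissible((0, 2, 4)))
# True True False
```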

Actually, Hardy and Littlewood conjectured a more precise version of Conjecture 1. Given an admissible ${k_0}$-tuple ${{\mathcal H} = (h_1,\ldots,h_{k_0})}$, and for each prime ${p}$, let ${\nu_p = \nu_p({\mathcal H}) := |{\mathcal H} \hbox{ mod } p|}$ denote the number of residue classes modulo ${p}$ that ${{\mathcal H}}$ meets; thus we have ${1 \leq \nu_p \leq p-1}$ for all ${p}$ by admissibility, and also ${\nu_p = k_0}$ for all ${p>h_{k_0}-h_1}$. We then define the singular series ${{\mathfrak G} = {\mathfrak G}({\mathcal H})}$ associated to ${{\mathcal H}}$ by the formula

$\displaystyle {\mathfrak G} := \prod_{p \in {\mathcal P}} \frac{1-\frac{\nu_p}{p}}{(1-\frac{1}{p})^{k_0}}$

where ${{\mathcal P} = \{2,3,5,\ldots\}}$ is the set of primes; by the previous discussion we see that the infinite product in ${{\mathfrak G}}$ converges to a finite non-zero number.

We will also need some asymptotic notation (in the spirit of “cheap nonstandard analysis“). We will need a parameter ${x}$ that one should think of as going to infinity. Some mathematical objects (such as ${{\mathcal H}}$ and ${k_0}$) will be independent of ${x}$ and referred to as fixed; but unless otherwise specified we allow all mathematical objects under consideration to depend on ${x}$. If ${X}$ and ${Y}$ are two such quantities, we say that ${X = O(Y)}$ if one has ${|X| \leq CY}$ for some fixed ${C}$, and ${X = o(Y)}$ if one has ${|X| \leq c(x) Y}$ for some function ${c(x)}$ of ${x}$ (and of any fixed parameters present) that goes to zero as ${x \rightarrow \infty}$ (for each choice of fixed parameters).

Conjecture 2 (Prime tuples conjecture, quantitative form) Let ${k_0 \geq 1}$ be a fixed natural number, and let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then the number of natural numbers ${n < x}$ such that ${n+{\mathcal H}}$ consists entirely of primes is ${({\mathfrak G} + o(1)) \frac{x}{\log^{k_0} x}}$.

Thus, for instance, if Conjecture 2 holds, then the number of twin primes less than ${x}$ should equal ${(2 \Pi_2 + o(1)) \frac{x}{\log^2 x}}$, where ${\Pi_2}$ is the twin prime constant

$\displaystyle \Pi_2 := \prod_{p \in {\mathcal P}: p>2} (1 - \frac{1}{(p-1)^2}) = 0.6601618\ldots.$
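The product defining ${\Pi_2}$ converges quickly, so it is easy to approximate; the sketch below truncates it at the primes below ${10^5}$ (an illustrative cutoff).

```python
def twin_prime_constant(limit=10**5):
    """Approximate Pi_2 = prod_{p > 2} (1 - 1/(p-1)^2), truncated at `limit`."""
    sieve = [True] * (limit + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    prod = 1.0
    for p in range(3, limit + 1):
        if sieve[p]:
            prod *= 1 - 1 / (p - 1) ** 2
    return prod

print(twin_prime_constant())  # approx 0.66016
```

The tail of the product beyond ${10^5}$ contributes an error smaller than ${10^{-6}}$, since the factors are ${1 - O(1/p^2)}$.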

As this conjecture is stronger than Conjecture 1, it is of course open. However there are a number of partial results on this conjecture. For instance, this conjecture is known to be true if one introduces some additional averaging in ${{\mathcal H}}$; see for instance this previous post. From the methods of sieve theory, one can obtain an upper bound of ${(C_{k_0} {\mathfrak G} + o(1)) \frac{x}{\log^{k_0} x}}$ for the number of ${n < x}$ with ${n + {\mathcal H}}$ all prime, where ${C_{k_0}}$ depends only on ${k_0}$. Sieve theory can also give analogues of Conjecture 2 if the primes are replaced by a suitable notion of almost prime (or more precisely, by a weight function concentrated on almost primes).

Another type of partial result towards Conjectures 1, 2 comes from the results of Goldston-Pintz-Yildirim, Motohashi-Pintz, and of Zhang. Following the notation of this recent paper of Pintz, for each ${k_0>2}$, let ${DHL[k_0,2]}$ denote the following assertion (DHL stands for “Dickson-Hardy-Littlewood”):

Conjecture 3 (${DHL[k_0,2]}$) Let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then there are infinitely many translates ${n+{\mathcal H}}$ of ${{\mathcal H}}$ which contain at least two primes.

This conjecture gets harder as ${k_0}$ gets smaller. Note for instance that ${DHL[2,2]}$ would imply all the ${k_0=2}$ cases of Conjecture 1, including the twin prime conjecture. More generally, if one knew ${DHL[k_0,2]}$ for some ${k_0}$, then one would immediately conclude that there are an infinite number of pairs of consecutive primes of separation at most ${H(k_0)}$, where ${H(k_0)}$ is the minimal diameter ${h_{k_0}-h_1}$ amongst all admissible ${k_0}$-tuples ${{\mathcal H}}$. Values of ${H(k_0)}$ for small ${k_0}$ can be found at this link (with ${H(k_0)}$ denoted ${w}$ in that page). For large ${k_0}$, the best upper bounds on ${H(k_0)}$ have been found by using admissible ${k_0}$-tuples ${{\mathcal H}}$ of the form

$\displaystyle {\mathcal H} = ( - p_{m+\lfloor k_0/2\rfloor - 1}, \ldots, - p_{m+1}, -1, +1, p_{m+1}, \ldots, p_{m+\lfloor (k_0+1)/2\rfloor - 1} )$

where ${p_n}$ denotes the ${n^{th}}$ prime and ${m}$ is a parameter to be optimised over (in practice it is an order of magnitude or two smaller than ${k_0}$); see this blog post for details. The upshot is that one can bound ${H(k_0)}$ for large ${k_0}$ by a quantity slightly smaller than ${k_0 \log k_0}$ (and the large sieve inequality shows that this is sharp up to a factor of two, see e.g. this previous post for more discussion).
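For small ${k_0}$ one can experiment with this construction directly. In the sketch below, ${k_0 = 10}$ and ${m = 3}$ are illustrative choices rather than optimised ones (and for such small ${k_0}$ other constructions give narrower tuples):

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(n + 1) if sieve[p]]

def zhang_style_tuple(k0, m, prime_bound=10**5):
    """Build the k0-tuple (-p_{m+floor(k0/2)-1}, ..., -p_{m+1}, -1, +1,
    p_{m+1}, ..., p_{m+floor((k0+1)/2)-1}), with p_n the n-th prime."""
    p = primes_upto(prime_bound)  # p[0] = 2, so p_n = p[n - 1]
    left = [-p[m + j - 1] for j in range(k0 // 2 - 1, 0, -1)]
    right = [p[m + j - 1] for j in range(1, (k0 + 1) // 2)]
    return tuple(left + [-1, 1] + right)

H = zhang_style_tuple(10, 3)
print(H, H[-1] - H[0])  # a 10-tuple of diameter 34
```

In practice one would scan over ${m}$ and keep only the admissible tuples of smallest diameter.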

In a key breakthrough, Goldston, Pintz, and Yildirim were able to establish the following conditional result a few years ago:

Theorem 4 (Goldston-Pintz-Yildirim) Suppose that the Elliott-Halberstam conjecture ${EH[\theta]}$ is true for some ${1/2 < \theta < 1}$. Then ${DHL[k_0,2]}$ is true for some finite ${k_0}$. In particular, this establishes an infinite number of pairs of consecutive primes of separation ${O(1)}$.

The dependence of constants between ${k_0}$ and ${\theta}$ given by the Goldston-Pintz-Yildirim argument is basically of the form ${k_0 \sim (\theta-1/2)^{-2}}$. (UPDATE: as recently observed by Farkas, Pintz, and Revesz, this relationship can be improved to ${k_0 \sim (\theta-1/2)^{-3/2}}$.)

Unfortunately, the Elliott-Halberstam conjecture (which we will state properly below) is only known for ${\theta<1/2}$, an important result known as the Bombieri-Vinogradov theorem. If one uses the Bombieri-Vinogradov theorem instead of the Elliott-Halberstam conjecture, Goldston, Pintz, and Yildirim were still able to show the highly non-trivial result that there were infinitely many pairs ${p_{n+1},p_n}$ of consecutive primes with ${(p_{n+1}-p_n) / \log p_n \rightarrow 0}$ (actually they showed more than this; see e.g. this survey of Soundararajan for details).

Actually, the full strength of the Elliott-Halberstam conjecture is not needed for these results. There is a technical specialisation of the Elliott-Halberstam conjecture which does not presently have a commonly accepted name; I will call it the Motohashi-Pintz-Zhang conjecture ${MPZ[\varpi]}$ in this post, where ${0 < \varpi < 1/4}$ is a parameter. We will define this conjecture more precisely later, but let us remark for now that ${MPZ[\varpi]}$ is a consequence of ${EH[\frac{1}{2}+2\varpi]}$.

We then have the following two theorems. Firstly, we have the following strengthening of Theorem 4:

Theorem 5 (Motohashi-Pintz-Zhang) Suppose that ${MPZ[\varpi]}$ is true for some ${0 < \varpi < 1/4}$. Then ${DHL[k_0,2]}$ is true for some ${k_0}$.

A version of this result (with a slightly different formulation of ${MPZ[\varpi]}$) appears in this paper of Motohashi and Pintz, and in the paper of Zhang, Theorem 5 is proven for the concrete values ${\varpi = 1/1168}$ and ${k_0 = 3,500,000}$. We will supply a self-contained proof of Theorem 5 below the fold, which improves upon the constants in Zhang’s paper (in particular, for ${\varpi = 1/1168}$, we can take ${k_0}$ as low as ${341,640}$, with further improvements on the way). As with Theorem 4, we have an inverse quadratic relationship ${k_0 \sim \varpi^{-2}}$.

In his paper, Zhang obtained for the first time an unconditional advance on ${MPZ[\varpi]}$:

Theorem 6 (Zhang) ${MPZ[\varpi]}$ is true for all ${0 < \varpi \leq 1/1168}$.

This is a deep result, building upon the work of Fouvry-Iwaniec, Friedlander-Iwaniec and Bombieri-Friedlander-Iwaniec which established results of a similar nature to ${MPZ[\varpi]}$ but simpler in some key respects. We will not discuss this result further here, except to say that it relies on the (higher-dimensional case of the) Weil conjectures, which were famously proven by Deligne using methods from l-adic cohomology. Also, it was believed among at least some experts that the methods of Bombieri, Fouvry, Friedlander, and Iwaniec were not quite strong enough to obtain results of the form ${MPZ[\varpi]}$, making Theorem 6 a particularly impressive achievement.

Combining Theorem 6 with Theorem 5 we obtain ${DHL[k_0,2]}$ for some finite ${k_0}$; Zhang obtains this for ${k_0 = 3,500,000}$ but as detailed below, this can be lowered to ${k_0 = 341,640}$. This in turn gives infinitely many pairs of consecutive primes of separation at most ${H(k_0)}$. Zhang gives a simple argument that bounds ${H(3,500,000)}$ by ${70,000,000}$, giving his famous result that there are infinitely many pairs of primes of separation at most ${70,000,000}$; by being a bit more careful (as discussed in this post) one can lower the upper bound on ${H(3,500,000)}$ to ${57,554,086}$, and if one instead uses the newer value ${k_0 = 341,640}$ for ${k_0}$ one can instead use the bound ${H(341,640) \leq 4,982,086}$. (Many thanks to Scott Morrison for these numerics.) UPDATE: These values are now obsolete; see this web page for the latest bounds.

In this post we would like to give a self-contained proof of both Theorem 4 and Theorem 5, which are both sieve-theoretic results that are mainly elementary in nature. (But, as stated earlier, we will not discuss the deepest new result in Zhang’s paper, namely Theorem 6.) Our presentation will deviate a little bit from the traditional sieve-theoretic approach in a few places. Firstly, there is a portion of the argument that is traditionally handled using contour integration and properties of the Riemann zeta function; we will present a “cheaper” approach (which Ben Green and I used in our papers, e.g. in this one) using Fourier analysis, with the only property used about the zeta function ${\zeta(s)}$ being the elementary fact that it blows up like ${\frac{1}{s-1}}$ as one approaches ${1}$ from the right. To deal with the contribution of small primes (which is the source of the singular series ${{\mathfrak G}}$), it will be convenient to use the “${W}$-trick” (introduced in this paper of mine with Ben), passing to a single residue class mod ${W}$ (where ${W}$ is the product of all the small primes) to end up in a situation in which all small primes have been “turned off”, which leads to better pseudorandomness properties (for instance, once one eliminates all multiples of small primes, almost all pairs of remaining numbers will be coprime).
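The effect of the ${W}$-trick is easy to see numerically: after restricting to a single residue class mod ${W = 2 \cdot 3 \cdot 5 \cdot 7 = 210}$, the primes below ${11}$ are “turned off”, and almost all pairs of survivors are coprime. A minimal sketch (the cutoff ${w = 11}$ and the class ${1 \hbox{ mod } 210}$ are illustrative choices):

```python
from itertools import combinations
from math import gcd

small_primes = [2, 3, 5, 7]           # the primes below the cutoff w = 11
W = 2 * 3 * 5 * 7                     # W = 210
survivors = list(range(1, 10**4, W))  # the residue class 1 mod W, up to 10^4

# every survivor avoids all prime factors below w
print(all(n % p != 0 for n in survivors for p in small_primes))  # True

# and the vast majority of pairs of survivors are coprime
pairs = list(combinations(survivors, 2))
frac = sum(1 for a, b in pairs if gcd(a, b) == 1) / len(pairs)
print(round(frac, 3))
```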

One of the basic general problems in analytic number theory is to understand as much as possible the fluctuations of the Möbius function ${\mu(n)}$, defined as ${(-1)^k}$ when ${n}$ is the product of ${k}$ distinct primes, and zero otherwise. For instance, as ${\mu}$ takes values in ${\{-1,0,1\}}$, we have the trivial bound

$\displaystyle |\sum_{n \leq x} \mu(n)| \leq x$

and the seemingly slight improvement

$\displaystyle \sum_{n \leq x} \mu(n) = o(x) \ \ \ \ \ (1)$

is already equivalent to the prime number theorem, as observed by Landau (see e.g. this previous blog post for a proof), while the much stronger (and still open) improvement

$\displaystyle \sum_{n \leq x} \mu(n) = O(x^{1/2+o(1)})$

is equivalent to the notorious Riemann hypothesis.
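The strength of the cancellation is easy to observe for small ${x}$; the sketch below sieves out the Möbius function and tabulates ${M(x) := \sum_{n \leq x} \mu(n)}$ (the cutoff ${x = 10^5}$ is an illustrative choice).

```python
def mobius_upto(N):
    """Sieve mu(1..N): flip the sign once for each prime factor, then
    zero out anything divisible by the square of a prime."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N = 10**5
mu = mobius_upto(N)
M = sum(mu[1:])
print(abs(M), int(N ** 0.5))  # |M(x)| is far below x, and here even below sqrt(x)
```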

There is a general Möbius pseudorandomness heuristic that suggests that the sign pattern ${\mu}$ behaves so randomly (or pseudorandomly) that one should expect a substantial amount of cancellation in sums that involve the sign fluctuation of the Möbius function in a nontrivial fashion, with the amount of cancellation present comparable to the amount that an analogous random sum would provide; cf. the probabilistic heuristic discussed in this recent blog post. There are a number of ways to make this heuristic precise. As already mentioned, the Riemann hypothesis can be considered one such manifestation of the heuristic. Another manifestation is the following old conjecture of Chowla:

Conjecture 1 (Chowla’s conjecture) For any fixed integer ${m}$ and exponents ${a_1,a_2,\ldots,a_m \geq 0}$, with at least one of the ${a_i}$ odd (so as not to completely destroy the sign cancellation), we have

$\displaystyle \sum_{n \leq x} \mu(n+1)^{a_1} \ldots \mu(n+m)^{a_m} = o_{x \rightarrow \infty;m}(x).$

Note that as ${\mu^a = \mu^{a+2}}$ for any ${a \geq 1}$, we can reduce to the case when the ${a_i}$ take values in ${0,1,2}$ here. When only one of the ${a_i}$ is odd, this is essentially the prime number theorem in arithmetic progressions (after some elementary sieving), but when two or more of the ${a_i}$ are odd, the problem becomes completely open. For instance, the estimate

$\displaystyle \sum_{n \leq x} \mu(n) \mu(n+2) = o(x)$

is morally very close to the conjectured asymptotic

$\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) = 2\Pi_2 x + o(x)$

for the von Mangoldt function ${\Lambda}$, where ${\Pi_2 := \prod_{p > 2} (1 - \frac{1}{(p-1)^2}) = 0.66016\ldots}$ is the twin prime constant; this asymptotic in turn implies the twin prime conjecture. (To formally deduce estimates for von Mangoldt from estimates for Möbius, though, typically requires some better control on the error terms than ${o()}$, in particular gains of some power of ${\log x}$ are usually needed. See this previous blog post for more discussion.)
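The conjectured cancellation in the ${\mu(n)\mu(n+2)}$ sum is already visible numerically; the sketch below (a plain Möbius sieve, with the illustrative cutoff ${x = 10^5}$) computes the normalised sum ${\frac{1}{x} \sum_{n \leq x} \mu(n)\mu(n+2)}$.

```python
def mobius_upto(N):
    """Sieve mu(1..N)."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

x = 10**5
mu = mobius_upto(x + 2)
S = sum(mu[n] * mu[n + 2] for n in range(1, x + 1))
print(S / x)  # much smaller than 1, as the conjectured o(x) bound predicts
```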

Remark 1 The Chowla conjecture resembles an assertion that, for ${n}$ chosen randomly and uniformly from ${1}$ to ${x}$, the random variables ${\mu(n+1),\ldots,\mu(n+k)}$ become asymptotically independent of each other (in the probabilistic sense) as ${x \rightarrow \infty}$. However, this is not quite accurate, because some moments (namely those with all exponents ${a_i}$ even) have the “wrong” asymptotic value, leading to some unwanted correlation between the two variables. For instance, the events ${\mu(n)=0}$ and ${\mu(n+4)=0}$ have a strong correlation with each other, basically because they are both strongly correlated with the event of ${n}$ being divisible by ${4}$. A more accurate interpretation of the Chowla conjecture is that the random variables ${\mu(n+1),\ldots,\mu(n+k)}$ are asymptotically conditionally independent of each other, after conditioning on the zero pattern ${\mu(n+1)^2,\ldots,\mu(n+k)^2}$; thus, it is the sign of the Möbius function that fluctuates like random noise, rather than the zero pattern. (The situation is a bit cleaner if one works instead with the Liouville function ${\lambda}$ instead of the Möbius function ${\mu}$, as this function never vanishes, but we will stick to the traditional Möbius function formalism here.)

A more recent formulation of the Möbius randomness heuristic is the following conjecture of Sarnak. Given a bounded sequence ${f: {\bf N} \rightarrow {\bf C}}$, define the topological entropy of the sequence to be the least exponent ${\sigma}$ with the property that for any fixed ${\epsilon > 0}$, and for ${m}$ going to infinity, the set ${\{ (f(n+1),\ldots,f(n+m)): n \in {\bf N} \} \subset {\bf C}^m}$ of ${m}$-blocks of ${f}$ can be covered by ${O( \exp( \sigma m + o(m) ) )}$ balls of radius ${\epsilon}$. (If ${f}$ arises from a minimal topological dynamical system ${(X,T)}$ by ${f(n) := F(T^n x)}$ for some continuous ${F}$, the above notion is equivalent to the usual notion of the topological entropy of a dynamical system.) For instance, if the sequence is a bit sequence (i.e. it takes values in ${\{0,1\}}$), then there are only ${\exp(\sigma m + o(m))}$ ${m}$-bit patterns that can appear as blocks of ${m}$ consecutive bits in this sequence. As a special case, a Turing machine with bounded memory that had access to a random number generator at the rate of one random bit produced every ${T}$ units of time, but otherwise evolved deterministically, would have an output sequence that had a topological entropy of at most ${\frac{1}{T} \log 2}$. A bounded sequence is said to be deterministic if its topological entropy is zero. A typical example is a polynomial sequence such as ${f(n) := e^{2\pi i \alpha n^2}}$ for some fixed ${\alpha}$; the ${m}$-blocks of such polynomial sequences have covering numbers that only grow polynomially in ${m}$, rather than exponentially, thus yielding zero entropy. Unipotent flows, such as the horocycle flow on a compact hyperbolic surface, are another good source of deterministic sequences.

Conjecture 2 (Sarnak’s conjecture) Let ${f: {\bf N} \rightarrow {\bf C}}$ be a deterministic bounded sequence. Then

$\displaystyle \sum_{n \leq x} \mu(n) f(n) = o_{x \rightarrow \infty;f}(x).$

This conjecture in general is still quite far from being solved. However, special cases are known:

• For constant sequences, this is essentially the prime number theorem (1).
• For periodic sequences, this is essentially the prime number theorem in arithmetic progressions.
• For quasiperiodic sequences such as ${f(n) = F(\alpha n \hbox{ mod } 1)}$ for some continuous ${F}$, this follows from the work of Davenport.
• For nilsequences, this is a result of Ben Green and myself.
• For horocycle flows, this is a result of Bourgain, Sarnak, and Ziegler.
• For the Thue-Morse sequence, this is a result of Dartyge-Tenenbaum (with a stronger error term obtained by Mauduit-Rivat). A subsequent result of Bourgain handles all bounded rank one sequences (though the Thue-Morse sequence is actually of rank two), and a related result of Green establishes asymptotic orthogonality of the Möbius function to bounded depth circuits, although such functions are not necessarily deterministic in nature.
• For the Rudin-Shapiro sequence, I sketched out an argument at this MathOverflow post.
• The Möbius function is known to itself be non-deterministic, because its square ${\mu^2(n)}$ (i.e. the indicator function of the square-free numbers) is known to be non-deterministic (indeed, its topological entropy is ${\frac{6}{\pi^2}\log 2}$). (The corresponding question for the Liouville function ${\lambda(n)}$, however, remains open, as the square ${\lambda^2(n)=1}$ has zero entropy.)
• In the converse direction, it is easy to construct sequences of arbitrarily small positive entropy that correlate with the Möbius function (a rather silly example is ${\mu(n) 1_{k|n}}$ for some fixed large (squarefree) ${k}$, which has topological entropy at most ${\log 2/k}$ but clearly correlates with ${\mu}$).
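The entropy value ${\frac{6}{\pi^2} \log 2}$ reflects the classical fact that the squarefree numbers have density ${6/\pi^2}$, which is easy to confirm empirically (the cutoff ${10^6}$ below is an illustrative choice):

```python
import math

N = 10**6
squarefree = [True] * (N + 1)
for d in range(2, int(N ** 0.5) + 1):
    step = d * d
    for m in range(step, N + 1, step):
        squarefree[m] = False  # m is divisible by a perfect square > 1

density = sum(squarefree[1:]) / N
print(round(density, 4), round(6 / math.pi ** 2, 4))  # both approx 0.6079
```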

See this survey of Sarnak for further discussion of this and related topics.

In this post I wanted to give a very nice argument of Sarnak that links the above two conjectures:

Proposition 3 The Chowla conjecture implies the Sarnak conjecture.

The argument does not use any number-theoretic properties of the Möbius function; one could replace ${\mu}$ in both conjectures by any other function from the natural numbers to ${\{-1,0,+1\}}$ and obtain the same implication. The argument consists of the following ingredients:

1. To show that ${\sum_{n \leq x} \mu(n) f(n) = o(x)}$, it suffices to show that the expectation of the random variable ${\frac{1}{m} (\mu(n+1)f(n+1)+\ldots+\mu(n+m)f(n+m))}$, where ${n}$ is drawn uniformly at random from ${1}$ to ${x}$, can be made arbitrarily small by making ${m}$ large (and ${x}$ even larger).
2. By the union bound and the zero topological entropy of ${f}$, it suffices to show that for any bounded deterministic coefficients ${c_1,\ldots,c_m}$, the random variable ${\frac{1}{m}(c_1 \mu(n+1) + \ldots + c_m \mu(n+m))}$ concentrates with exponentially high probability.
3. Finally, this exponentially high concentration can be achieved by the moment method, using a slight variant of the moment method proof of the large deviation estimates such as the Chernoff inequality or Hoeffding inequality (as discussed in this blog post).

As is often the case, though, while the “top-down” order of steps presented above is perhaps the clearest way to think conceptually about the argument, in order to present the argument formally it is more convenient to present the arguments in the reverse (or “bottom-up”) order. This is the approach taken below the fold.

There has been a lot of recent interest in the abc conjecture, since the release a few weeks ago of the last of a series of papers by Shinichi Mochizuki which, as one of its major applications, claims to establish this conjecture. It’s still far too early to judge whether this proof is likely to be correct or not (the entire argument encompasses several hundred pages of argument, mostly in the area of anabelian geometry, which very few mathematicians are expert in, to the extent that we still do not even have a full outline of the proof strategy yet), and I don’t have anything substantial to add to the existing discussion around that conjecture. (But, for those that are interested, the Polymath wiki page on the ABC conjecture has collected most of the links to that discussion, and to various background materials.)

In the meantime, though, I thought I might give the standard probabilistic heuristic argument that explains why we expect the ABC conjecture to be true. The underlying heuristic is a common one, used throughout number theory, and it can be summarised as follows:

Heuristic 1 (Probabilistic heuristic) Even though number theory is a deterministic subject (one does not need to roll any dice to factorise a number, or figure out if a number is prime), one expects to get a good asymptotic prediction for the answers to many number-theoretic questions by pretending that various number-theoretic assertions ${E}$ (e.g. that a given number ${n}$ is prime) are probabilistic events (with a probability ${\mathop{\bf P}(E)}$ that can vary between ${0}$ and ${1}$) rather than deterministic events (that are either always true or always false). Furthermore:

• (Basic heuristic) If two or more of these heuristically probabilistic events have no obvious reason to be strongly correlated to each other, then we should expect them to behave as if they were (jointly) independent.
• (Advanced heuristic) If two or more of these heuristically probabilistic events have some obvious correlation between them, but no further correlations are suspected, then we should expect them to behave as if they were conditionally independent, relative to whatever data is causing the correlation.

This is, of course, an extremely vague and completely non-rigorous heuristic, requiring (among other things) a subjective and ad hoc determination of what an “obvious reason” is, but in practice it tends to give remarkably plausible predictions, some fraction of which can in fact be backed up by rigorous argument (although in many cases, the actual argument has almost nothing in common with the probabilistic heuristic). A famous special case of this heuristic is the Cramér random model for the primes, but this is not the only such instance for that heuristic.

To give the most precise predictions, one should use the advanced heuristic in Heuristic 1, but this can be somewhat complicated to execute, and so we shall focus instead on the predictions given by the basic heuristic (thus ignoring the presence of some number-theoretic correlations), which tends to give predictions that are quantitatively inaccurate but still reasonably good at the qualitative level.

Here is a basic “corollary” of Heuristic 1:

Heuristic 2 (Heuristic Borel-Cantelli) Suppose one has a sequence ${E_1, E_2, \ldots}$ of number-theoretic statements, which we heuristically interpret as probabilistic events with probabilities ${\mathop{\bf P}(E_1), \mathop{\bf P}(E_2), \ldots}$. Suppose also that we know of no obvious reason for these events to have much of a correlation with each other. Then:

• If ${\sum_{i=1}^\infty \mathop{\bf P}(E_i) < \infty}$, we expect only finitely many of the statements ${E_n}$ to be true. (And if ${\sum_{i=1}^\infty \mathop{\bf P}(E_i)}$ is much smaller than ${1}$, we in fact expect none of the ${E_n}$ to be true.)
• If ${\sum_{i=1}^\infty \mathop{\bf P}(E_i) = \infty}$, we expect infinitely many of the statements ${E_n}$ to be true.

This heuristic is motivated both by the Borel-Cantelli lemma, and by the standard probabilistic computation that if one is given jointly independent, and genuinely probabilistic, events ${E_1, E_2, \ldots}$ with ${\sum_{i=1}^\infty \mathop{\bf P}(E_i) = \infty}$, then one almost surely has an infinite number of the ${E_i}$ occurring.
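As a quick illustration of this dichotomy (a toy Monte Carlo sketch of my own, not part of the original discussion; the choice of probabilities ${1/n}$ and ${1/n^2}$ is purely illustrative), one can simulate genuinely independent events and count how many occur in the divergent and convergent cases:

```python
import random

random.seed(12345)  # fixed seed for reproducibility

def count_events(prob, N):
    """Simulate genuinely independent events E_1, ..., E_N with
    P(E_n) = prob(n); return how many of them occur."""
    return sum(1 for n in range(1, N + 1) if random.random() < prob(n))

N = 10 ** 6
# Divergent case: sum of 1/n is about log N, so we expect roughly
# log(10^6) + gamma, i.e. around 14, occurrences.
divergent = count_events(lambda n: 1 / n, N)
# Convergent case: sum of 1/n^2 is pi^2/6, about 1.64, so we expect
# only a handful of occurrences no matter how large N is.
convergent = count_events(lambda n: 1 / n ** 2, N)
print(divergent, convergent)
```

Increasing ${N}$ further makes the divergent count grow (slowly, like ${\log N}$) while the convergent count stays bounded, matching the two cases of the heuristic.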

Before we get to the ABC conjecture, let us give two simpler (and well known) demonstrations of these heuristics in action:

Example 1 (Twin prime conjecture) One can heuristically justify the twin prime conjecture as follows. Using the prime number theorem, one can heuristically assign a probability of ${1/\log n}$ to the event that any given large integer ${n}$ is prime. In particular, the probability that ${n+2}$ is prime will then be ${1/\log(n+2)}$. Making the assumption that there are no strong correlations between these events, we are led to the prediction that the probability that ${n}$ and ${n+2}$ are simultaneously prime is ${\frac{1}{(\log n)(\log(n+2))}}$. Since ${\sum_{n=2}^\infty \frac{1}{(\log n)(\log(n+2))} = \infty}$, the Borel-Cantelli heuristic then predicts that there should be infinitely many twin primes.

Note that the above argument is a bit too naive, because there are some non-trivial correlations between the primality of ${n}$ and the primality of ${n+2}$. Most obviously, if ${n}$ is prime, this greatly increases the probability that ${n}$ is odd, which implies that ${n+2}$ is odd, which then elevates the probability that ${n+2}$ is prime. A bit more subtly, if ${n}$ is prime, then ${n}$ is likely to avoid the residue class ${0 \hbox{ mod } 3}$, which means that ${n+2}$ avoids the residue class ${2 \hbox{ mod } 3}$, which ends up decreasing the probability that ${n+2}$ is prime. However, there is a standard way to correct for these local correlations; see for instance this previous blog post. As it turns out, these local correlations ultimately alter the prediction for the asymptotic density of twin primes by a constant factor (the twin prime constant), but do not affect the qualitative prediction of there being infinitely many twin primes.
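As a numerical sanity check (my own illustration, not part of the original argument), one can compare the actual count of twin prime pairs up to ${10^5}$ with the naive heuristic sum ${\sum 1/((\log n)(\log(n+2)))}$, and with that sum corrected by the twin prime constant ${2\Pi_2 \approx 1.3203}$ (the numerical value of the constant is quoted from the literature):

```python
import math

def sieve(limit):
    """Sieve of Eratosthenes; returns a boolean list is_prime[0..limit]."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return is_prime

N = 100_000
is_prime = sieve(N)

# Actual count of twin prime pairs (n, n+2) with n + 2 <= N.
twins = sum(1 for n in range(3, N - 1) if is_prime[n] and is_prime[n + 2])

# Naive heuristic, ignoring the local (mod 2, mod 3, ...) correlations.
naive = sum(1 / (math.log(n) * math.log(n + 2)) for n in range(3, N - 1))

# Correcting by the twin prime constant 2 * Pi_2, with Pi_2 ~ 0.6601618.
TWIN_PRIME_CONSTANT = 1.3203236
corrected = TWIN_PRIME_CONSTANT * naive

print(twins, round(naive), round(corrected))
```

The naive sum undercounts by roughly the predicted constant factor, while the corrected prediction lands within a percent or so of the true count.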

Example 2 (Fermat’s last theorem) Let us now heuristically count the number of solutions to ${x^n+y^n=z^n}$ for various ${n}$ and natural numbers ${x,y,z}$ (which we may take to be coprime if desired). We recast this (in the spirit of the ABC conjecture) as ${a+b=c}$, where ${a,b,c}$ are ${n^{th}}$ powers. The number of ${n^{th}}$ powers up to any given number ${N}$ is about ${N^{1/n}}$, so heuristically any given natural number ${a}$ has a probability about ${a^{1/n - 1}}$ of being an ${n^{th}}$ power. If we make the naive assumption that (in the coprime case at least) there is no strong correlation between the events that ${a}$ is an ${n^{th}}$ power, that ${b}$ is an ${n^{th}}$ power, and that ${a+b}$ is an ${n^{th}}$ power, then for typical ${a,b}$, the probability that ${a,b,a+b}$ are all simultaneously ${n^{th}}$ powers would then be ${a^{1/n-1} b^{1/n-1} (a+b)^{1/n-1}}$. For fixed ${n}$, the total number of solutions to the Fermat equation would then be predicted to be

$\displaystyle \sum_{a=1}^\infty \sum_{b=1}^\infty a^{1/n-1} b^{1/n-1} (a+b)^{1/n-1}.$

(Strictly speaking, we need to restrict to the coprime case, but given that a positive density of pairs of integers are coprime, it should not affect the qualitative conclusion significantly if we now omit this restriction.) It might not be immediately obvious whether this sum converges or diverges, but (as is often the case with these sorts of unsigned sums) one can clarify the situation by dyadic decomposition. Suppose for instance that we consider the portion of the sum where ${c=a+b}$ lies between ${2^k}$ and ${2^{k+1}}$. Then this portion of the sum can be controlled by

$\displaystyle \sum_{a \leq 2^{k+1}} \sum_{b \leq 2^{k+1}} a^{1/n-1} b^{1/n-1} O( ( 2^k )^{1/n - 1} )$

which simplifies to

$\displaystyle O( 2^{(3/n - 1)k} ).$

Summing in ${k}$, one thus expects infinitely many solutions for ${n=2}$, only finitely many solutions for ${n>3}$ (indeed, a refinement of this argument shows that one expects only finitely many solutions even if one considers all ${n>3}$ at once), and a borderline prediction of there being a barely infinite number of solutions when ${n=3}$. This is of course a place where a naive application of the probabilistic heuristic breaks down; there is enough arithmetic structure in the equation ${x^3+y^3=z^3}$ that the naive probabilistic prediction ends up being an inaccurate model. Indeed, while this heuristic suggests that a typical homogeneous cubic should have a logarithmic number of integer solutions of a given height ${N}$, it turns out that some homogeneous cubics (namely, those associated to elliptic curves of positive rank) end up with the bulk of these solutions, while other homogeneous cubics (including those associated to elliptic curves of zero rank, including the Fermat curve ${x^3+y^3=z^3}$) only get finitely many solutions. The reasons for this are subtle, but certainly the high degree of arithmetic structure present in an elliptic curve (starting with the elliptic curve group law which allows one to generate new solutions from old ones, and which also can be used to exclude solutions to ${x^3+y^3=z^3}$ via the method of descent) is a major contributing factor.
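One can also watch the dyadic decomposition in action numerically (a small sketch of my own, not from the original argument): summing the heuristic weight ${a^{1/n-1} b^{1/n-1} (a+b)^{1/n-1}}$ over a single dyadic block ${2^k \leq a+b < 2^{k+1}}$, consecutive blocks should differ by roughly the factor ${2^{3/n-1}}$, which is greater than ${1}$ for ${n=2}$ (divergence), equal to ${1}$ for ${n=3}$ (borderline), and less than ${1}$ for ${n \geq 4}$ (convergence):

```python
def dyadic_block(n, k):
    """Sum of a^(1/n-1) * b^(1/n-1) * (a+b)^(1/n-1) over pairs (a, b)
    of positive integers with 2^k <= a + b < 2^(k+1), grouped by c = a + b."""
    e = 1.0 / n - 1.0
    total = 0.0
    for c in range(2 ** k, 2 ** (k + 1)):
        inner = sum((a ** e) * ((c - a) ** e) for a in range(1, c))
        total += (c ** e) * inner
    return total

# The ratio of consecutive dyadic blocks should approach 2^(3/n - 1).
for n in (2, 3, 4):
    ratio = dyadic_block(n, 9) / dyadic_block(n, 8)
    print(n, round(ratio, 3), round(2 ** (3 / n - 1), 3))
```

The observed ratios (about ${1.41}$, ${1}$, and ${0.84}$ respectively) match the exponents ${2^{1/2}}$, ${2^0}$, ${2^{-1/4}}$ read off from the dyadic estimate ${O(2^{(3/n-1)k})}$ above.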

Below the fold, we apply similar heuristics to suggest the truth of the ABC conjecture.