
In analytic number theory, it is a well-known phenomenon that for many arithmetic functions ${f: {\bf N} \rightarrow {\bf C}}$ of interest in number theory, it is significantly easier to estimate logarithmic sums such as

$\displaystyle \sum_{n \leq x} \frac{f(n)}{n}$

than it is to estimate summatory functions such as

$\displaystyle \sum_{n \leq x} f(n).$

(Here we are normalising ${f}$ to be roughly constant in size, e.g. ${f(n) = O( n^{o(1)} )}$ as ${n \rightarrow \infty}$.) For instance, when ${f}$ is the von Mangoldt function ${\Lambda}$, the logarithmic sums ${\sum_{n \leq x} \frac{\Lambda(n)}{n}}$ can be adequately estimated by Mertens’ theorem, which can be easily proven by elementary means (see Notes 1); but a satisfactory estimate on the summatory function ${\sum_{n \leq x} \Lambda(n)}$ requires the prime number theorem, which is substantially harder to prove (see Notes 2). (From a complex-analytic or Fourier-analytic viewpoint, the problem is that the logarithmic sums ${\sum_{n \leq x} \frac{f(n)}{n}}$ can usually be controlled just from knowledge of the Dirichlet series ${\sum_n \frac{f(n)}{n^s}}$ for ${s}$ near ${1}$; but the summatory functions require control of the Dirichlet series ${\sum_n \frac{f(n)}{n^s}}$ for ${s}$ on or near a large portion of the line ${\{ 1+it: t \in {\bf R} \}}$. See Notes 2 for further discussion.)
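To make the contrast concrete, here is a quick numerical sanity check of these two statements for the von Mangoldt function (an illustrative Python sketch, not part of the mathematical argument): Mertens' theorem predicts that ${\sum_{n \leq x} \frac{\Lambda(n)}{n} - \log x}$ stays bounded, while the prime number theorem predicts that ${\frac{1}{x} \sum_{n \leq x} \Lambda(n)}$ tends to ${1}$.

```python
import math

def mangoldt_sums(x):
    """Return (sum_{n<=x} Lambda(n)/n, sum_{n<=x} Lambda(n)) via a sieve."""
    spf = list(range(x + 1))          # smallest prime factor of each n
    for p in range(2, int(x**0.5) + 1):
        if spf[p] == p:               # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    log_sum, full_sum = 0.0, 0.0
    for n in range(2, x + 1):
        # Lambda(n) = log p if n is a power of the prime p, else 0
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:
            log_sum += math.log(p) / n
            full_sum += math.log(p)
    return log_sum, full_sum

x = 10**5
log_sum, full_sum = mangoldt_sums(x)
print(log_sum - math.log(x))   # stays bounded as x grows (around -1.3 here)
print(full_sum / x)            # tends to 1 (prime number theorem)
```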

Viewed conversely, whenever one has a difficult estimate on a summatory function such as ${\sum_{n \leq x} f(n)}$, one can look to see if there is a “cheaper” version of that estimate that only controls the logarithmic sums ${\sum_{n \leq x} \frac{f(n)}{n}}$, which is easier to prove than the original, more “expensive” estimate. In this post, we shall do this for two theorems, a classical theorem of Halasz on mean values of multiplicative functions on long intervals, and a much more recent result of Matomaki and Radziwill on mean values of multiplicative functions in short intervals. The two are related; the former theorem is an ingredient in the latter (though in the special case of the Matomaki-Radziwill theorem considered here, we will not need Halasz’s theorem directly, instead using a key tool in the proof of that theorem).

We begin with Halasz’s theorem. Here is a version of this theorem, due to Montgomery and to Tenenbaum:

Theorem 1 (Halasz-Montgomery-Tenenbaum) Let ${f: {\bf N} \rightarrow {\bf C}}$ be a multiplicative function with ${|f(n)| \leq 1}$ for all ${n}$. Let ${x \geq 3}$ and ${T \geq 1}$, and set

$\displaystyle M := \min_{|t| \leq T} \sum_{p \leq x} \frac{1 - \hbox{Re}( f(p) p^{-it} )}{p}.$

Then one has

$\displaystyle \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M} + \frac{1}{\sqrt{T}}.$

Informally, this theorem asserts that ${\sum_{n \leq x} f(n)}$ is small compared with ${x}$, unless ${f}$ “pretends” to be like the character ${p \mapsto p^{it}}$ on primes for some small ${t}$. (This is the starting point of the “pretentious” approach of Granville and Soundararajan to analytic number theory, as developed for instance here.) We now give a “cheap” version of this theorem which is significantly weaker (it settles for controlling logarithmic sums rather than summatory functions, it requires ${f}$ to be completely multiplicative instead of just multiplicative, and it only gives qualitative decay rather than quantitative estimates), but which is easier to prove:

Theorem 2 (Cheap Halasz) Let ${x}$ be a parameter going to infinity, and let ${f: {\bf N} \rightarrow {\bf C}}$ be a completely multiplicative function (possibly depending on ${x}$) such that ${|f(n)| \leq 1}$ for all ${n}$. Suppose that

$\displaystyle \frac{1}{\log\log x} \sum_{p \leq x} \frac{1 - \hbox{Re}(f(p))}{p} \gg 1. \ \ \ \ \ (1)$

Then

$\displaystyle \frac{1}{\log x} \sum_{n \leq x} \frac{f(n)}{n} = o(1). \ \ \ \ \ (2)$

Note that, now that we are content with estimating logarithmic sums rather than summatory functions, we no longer need to preclude the possibility that ${f(p)}$ pretends to be like ${p^{it}}$; see Exercise 11 of Notes 1 for a related observation.

To prove this theorem, we first need a special case of the Turan-Kubilius inequality.

Lemma 3 (Turan-Kubilius) Let ${x}$ be a parameter going to infinity, and let ${1 < P < x}$ be a quantity depending on ${x}$ such that ${P = x^{o(1)}}$ and ${P \rightarrow \infty}$ as ${x \rightarrow \infty}$. Then

$\displaystyle \sum_{n \leq x} \frac{ | \frac{1}{\log \log P} \sum_{p \leq P: p|n} 1 - 1 |}{n} = o( \log x ).$

Informally, this lemma is asserting that

$\displaystyle \sum_{p \leq P: p|n} 1 \approx \log \log P$

for most large numbers ${n}$. Another way of writing this heuristically is in terms of Dirichlet convolutions:

$\displaystyle 1 \approx 1 * \frac{1}{\log\log P} 1_{{\mathcal P} \cap [1,P]}.$

This type of estimate was previously discussed as a tool to establish a criterion of Katai and Bourgain-Sarnak-Ziegler for Möbius orthogonality estimates in this previous blog post. See also Section 5 of Notes 1 for some similar computations.

Proof: By Cauchy-Schwarz it suffices to show that

$\displaystyle \sum_{n \leq x} \frac{ | \frac{1}{\log \log P} \sum_{p \leq P: p|n} 1 - 1 |^2}{n} = o( \log x ).$

Expanding out the square, it suffices to show that

$\displaystyle \sum_{n \leq x} \frac{ (\frac{1}{\log \log P} \sum_{p \leq P: p|n} 1)^j}{n} = \log x + o( \log x )$

for ${j=0,1,2}$.

We just show the ${j=2}$ case, as the ${j=0,1}$ cases are similar (and easier). We rearrange the left-hand side as

$\displaystyle \frac{1}{(\log\log P)^2} \sum_{p_1, p_2 \leq P} \sum_{n \leq x: p_1,p_2|n} \frac{1}{n}.$

We can estimate the inner sum as ${(1+o(1)) \frac{1}{[p_1,p_2]} \log x}$. But a routine application of Mertens’ theorem (handling the diagonal case when ${p_1=p_2}$ separately) shows that

$\displaystyle \sum_{p_1, p_2 \leq P} \frac{1}{[p_1,p_2]} = (1+o(1)) (\log\log P)^2$

and the claim follows. $\Box$
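The diagonal-versus-off-diagonal bookkeeping in this proof can be checked against an exact identity: since ${[p_1,p_2] = p_1 p_2}$ for distinct primes and ${[p,p]=p}$, one has ${\sum_{p_1, p_2 \leq P} \frac{1}{[p_1,p_2]} = (\sum_{p \leq P} \frac{1}{p})^2 + \sum_{p \leq P} \frac{1}{p} - \sum_{p \leq P} \frac{1}{p^2}}$, and Mertens' theorem then gives the ${(1+o(1))(\log\log P)^2}$ asymptotic. A brute-force numerical confirmation of the identity (illustrative sketch; the cutoff ${P=200}$ is arbitrary):

```python
import math

def primes_up_to(P):
    sieve = [True] * (P + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(P**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p in range(2, P + 1) if sieve[p]]

def lcm_double_sum(P):
    """Brute-force sum_{p1,p2 <= P} 1/[p1,p2]."""
    ps = primes_up_to(P)
    return sum(1.0 / (p1 * p2 // math.gcd(p1, p2)) for p1 in ps for p2 in ps)

P = 200
ps = primes_up_to(P)
s1 = sum(1.0 / p for p in ps)          # ~ loglog P + M by Mertens' theorem
s2 = sum(1.0 / p**2 for p in ps)
# off-diagonal + diagonal decomposition used in the proof of Lemma 3:
closed_form = s1**2 + s1 - s2
print(abs(lcm_double_sum(P) - closed_form))  # zero up to floating-point error
```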

Remark 4 As an alternative to the Turan-Kubilius inequality, one can use the Ramaré identity

$\displaystyle \sum_{p \leq P: p|n} \frac{1}{\# \{ p' \leq P: p'|n\}} = 1 - 1_{(p,n)=1 \hbox{ for all } p \leq P}$

(see e.g. Section 17.3 of Friedlander-Iwaniec). This identity turns out to give superior quantitative results than the Turan-Kubilius inequality in applications; see the paper of Matomaki and Radziwill for an instance of this.
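The identity is elementary (each of the ${\#\{p' \leq P: p'|n\}}$ prime divisors of ${n}$ below ${P}$ contributes equally to the sum), and can be machine-checked; the following sketch verifies it for small parameters (the choices ${P=13}$ and ${n \leq 10^4}$ are arbitrary):

```python
def primes_up_to(P):
    sieve = [True] * (P + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(P**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p in range(2, P + 1) if sieve[p]]

P = 13
ps = primes_up_to(P)
for n in range(1, 10001):
    divisors = [p for p in ps if n % p == 0]
    # left-hand side: sum over prime divisors p <= P of 1/#{p' <= P : p'|n}
    lhs = sum(1.0 / len(divisors) for p in divisors) if divisors else 0.0
    # right-hand side: 1 - indicator that n has no prime factor <= P
    rhs = 0.0 if not divisors else 1.0
    assert abs(lhs - (1.0 - (1.0 - rhs))) < 1e-12  # i.e. lhs == rhs
print("Ramare-type identity verified for n <= 10^4, P = 13")
```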

We now prove Theorem 2. Let ${Q}$ denote the left-hand side of (2); by the triangle inequality we have ${Q=O(1)}$. By Lemma 3 (for some ${P = x^{o(1)}}$ to be chosen later) and the triangle inequality we have

$\displaystyle \sum_{n \leq x} \frac{\frac{1}{\log\log P} (\sum_{p \leq P: p|n} 1) f(n)}{n} = Q \log x + o( \log x ).$

We rearrange the left-hand side, using the complete multiplicativity of ${f}$, as

$\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p} \sum_{m \leq x/p} \frac{f(m)}{m}.$

We now replace the constraint ${m \leq x/p}$ by ${m \leq x}$. The error incurred in doing so is

$\displaystyle O( \frac{1}{\log\log P} \sum_{p \leq P} \frac{1}{p} \sum_{x/P \leq m \leq x} \frac{1}{m} )$

which by Mertens' theorem is ${O(\log P) = o( \log x )}$. Thus we have

$\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p} \sum_{m \leq x} \frac{f(m)}{m} = Q \log x + o( \log x ).$

But by definition of ${Q}$, we have ${\sum_{m \leq x} \frac{f(m)}{m} = Q \log x}$, thus

$\displaystyle [1 - \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p}] Q = o(1).$

From Mertens' theorem, the expression in brackets can be rewritten as

$\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{1 - f(p)}{p} + o(1)$

and so the real part of this expression is at least

$\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{1 - \hbox{Re} f(p)}{p} + o(1).$

Note from Mertens' theorem and the hypothesis (1) on ${f}$ that

$\displaystyle \frac{1}{\log\log x^\varepsilon} \sum_{p \leq x^\varepsilon} \frac{1 - \hbox{Re} f(p)}{p} \gg 1$

for any fixed ${0 < \varepsilon < 1}$ (by Mertens' theorem, the primes in ${(x^\varepsilon, x]}$ contribute only ${O_\varepsilon(1)}$ to the sum, while ${\log\log x^\varepsilon = \log\log x + \log \varepsilon}$), and hence by diagonalisation that

$\displaystyle \frac{1}{\log\log P} \sum_{p \leq P} \frac{1 - \hbox{Re} f(p)}{p} \gg 1$

if ${P=x^{o(1)}}$ for a sufficiently slowly decaying ${o(1)}$. The claim follows.
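For the Liouville function ${\lambda}$ one has ${f(p) = -1}$, so the left-hand side of (1) equals ${2+o(1)}$ by Mertens' theorem and Theorem 2 applies; the resulting decay of the logarithmic mean is already visible numerically (illustrative sketch):

```python
import math

def liouville_values(x):
    """lambda(n) for 1 <= n <= x, via a smallest-prime-factor sieve."""
    spf = list(range(x + 1))
    for p in range(2, int(x**0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (x + 1)
    lam[1] = 1
    for n in range(2, x + 1):
        lam[n] = -lam[n // spf[n]]   # lambda is completely multiplicative
    return lam

for x in (10**3, 10**4, 10**5):
    lam = liouville_values(x)
    Q = sum(lam[n] / n for n in range(1, x + 1)) / math.log(x)
    print(x, Q)   # |Q| is already quite small, consistent with (2)
```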

Exercise 5 (Granville-Koukoulopoulos-Matomaki)

• (i) If ${g}$ is a completely multiplicative function with ${g(p) \in \{0,1\}}$ for all primes ${p}$, show that

$\displaystyle (e^{-\gamma}-o(1)) \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1} \leq \sum_{n \leq x} \frac{g(n)}{n} \leq \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1}$

as ${x \rightarrow \infty}$. (Hint: for the upper bound, expand out the Euler product. For the lower bound, show that ${\sum_{n \leq x} \frac{g(n)}{n} \times \sum_{n \leq x} \frac{h(n)}{n} \ge \sum_{n \leq x} \frac{1}{n}}$, where ${h}$ is the completely multiplicative function with ${h(p) = 1-g(p)}$ for all primes ${p}$.)

• (ii) If ${g}$ is multiplicative and takes values in ${[0,1]}$, show that

$\displaystyle \sum_{n \leq x} \frac{g(n)}{n} \asymp \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1}$

$\displaystyle \asymp \exp( \sum_{p \leq x} \frac{g(p)}{p} )$

for all ${x \geq 1}$.
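As a concrete instance of part (i), one can take ${g}$ to be the indicator of the odd numbers (${g(2)=0}$, ${g(p)=1}$ for odd primes ${p}$); the ratio of the two sides then lands in the predicted window ${[e^{-\gamma}-o(1), 1]}$, with ${e^{-\gamma} \approx 0.561}$. A numerical sketch (the cutoff ${x=10^5}$ is arbitrary):

```python
import math

def primes_up_to(P):
    sieve = [True] * (P + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(P**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p in range(2, P + 1) if sieve[p]]

x = 10**5
harmonic_odd = sum(1.0 / n for n in range(1, x + 1, 2))   # sum of g(n)/n
euler_prod = 1.0
for p in primes_up_to(x):
    if p != 2:                       # g(2) = 0, so the p = 2 factor is 1
        euler_prod /= (1.0 - 1.0 / p)
ratio = harmonic_odd / euler_prod
print(ratio)   # should lie between e^{-gamma} ~ 0.561 and 1
```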

Now we turn to a very recent result of Matomaki and Radziwill on mean values of multiplicative functions in short intervals. For sake of illustration we specialise their results to the simpler case of the Liouville function ${\lambda}$, although their arguments actually work (with some additional effort) for arbitrary multiplicative functions of magnitude at most ${1}$ that are real-valued (or more generally, stay far from complex characters ${p \mapsto p^{it}}$). Furthermore, we give a qualitative form of their estimates rather than a quantitative one:

Theorem 6 (Matomaki-Radziwill, special case) Let ${X}$ be a parameter going to infinity, and let ${2 \leq h \leq X}$ be a quantity going to infinity as ${X \rightarrow \infty}$. Then for all but ${o(X)}$ of the integers ${x \in [X,2X]}$, one has

$\displaystyle \sum_{x \leq n \leq x+h} \lambda(n) = o( h ).$

Equivalently, one has

$\displaystyle \sum_{X \leq x \leq 2X} |\sum_{x \leq n \leq x+h} \lambda(n)|^2 = o( h^2 X ). \ \ \ \ \ (3)$

A simple sieving argument (see Exercise 18 of Supplement 4) shows that one can replace ${\lambda}$ by the Möbius function ${\mu}$ and obtain the same conclusion. See this recent note of Matomaki and Radziwill for a simple proof of their (quantitative) main theorem in this special case.

Of course, (3) improves upon the trivial bound of ${O( h^2 X )}$. Prior to this paper, such estimates were only known (using arguments similar to those in Section 3 of Notes 6) for ${h \geq X^{1/6+\varepsilon}}$ unconditionally, or for ${h \geq \log^A X}$ for some sufficiently large ${A}$ if one assumed the Riemann hypothesis. This theorem also represents some progress towards Chowla’s conjecture (discussed in Supplement 4) that

$\displaystyle \sum_{n \leq x} \lambda(n+h_1) \dots \lambda(n+h_k) = o( x )$

as ${x \rightarrow \infty}$ for any fixed distinct ${h_1,\dots,h_k}$; indeed, it implies that this conjecture holds if one performs a small amount of averaging in the ${h_1,\dots,h_k}$.
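The gain in (3) over the trivial bound is visible even at modest heights: averaging ${|\sum_{x \leq n \leq x+h} \lambda(n)|^2}$ over ${x \in [X,2X]}$ numerically produces something of size roughly ${h}$ (consistent with square-root cancellation), far below ${h^2}$. The following sketch is a crude empirical illustration, not the Matomaki-Radziwill argument:

```python
def liouville_values(x):
    """lambda(n) for 1 <= n <= x (completely multiplicative, lambda(p) = -1)."""
    spf = list(range(x + 1))
    for p in range(2, int(x**0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (x + 1)
    lam[1] = 1
    for n in range(2, x + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

X, h = 10**5, 100
lam = liouville_values(2 * X + h)
prefix = [0] * (2 * X + h + 1)
for n in range(1, 2 * X + h + 1):
    prefix[n] = prefix[n - 1] + lam[n]
# mean square of the Liouville sums over windows of length h
mean_sq = sum((prefix[x + h] - prefix[x]) ** 2 for x in range(X, 2 * X + 1)) / (X + 1)
print(mean_sq, h * h)   # mean square of size about h, vastly below h^2
```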

Below the fold, we give a “cheap” version of the Matomaki-Radziwill argument. More precisely, we establish

Theorem 7 (Cheap Matomaki-Radziwill) Let ${X}$ be a parameter going to infinity, and let ${1 \leq T \leq X}$. Then

$\displaystyle \int_X^{X^A} \left|\sum_{x \leq n \leq e^{1/T} x} \frac{\lambda(n)}{n}\right|^2\frac{dx}{x} = o\left( \frac{\log X}{T^2} \right), \ \ \ \ \ (4)$

for any fixed ${A>1}$.

Note that (4) improves upon the trivial bound of ${O( \frac{\log X}{T^2} )}$. Again, one can replace ${\lambda}$ with ${\mu}$ if desired. Due to the cheapness of Theorem 7, the proof will require few ingredients; the deepest input is the improved zero-free region for the Riemann zeta function due to Vinogradov and Korobov. Other than that, the main tools are the Turan-Kubilius result established above, and some Fourier (or complex) analysis.

In the previous set of notes, we saw how zero-density theorems for the Riemann zeta function, when combined with the zero-free region of Vinogradov and Korobov, could be used to obtain prime number theorems in short intervals. It turns out that a more sophisticated version of this type of argument also works to obtain prime number theorems in arithmetic progressions, in particular establishing the celebrated theorem of Linnik:

Theorem 1 (Linnik’s theorem) Let ${a\ (q)}$ be a primitive residue class. Then ${a\ (q)}$ contains a prime ${p}$ with ${p \ll q^{O(1)}}$.

In fact it is known that one can find a prime ${p}$ with ${p \ll q^{5}}$, a result of Xylouris. For sake of comparison, recall from Exercise 65 of Notes 2 that the Siegel-Walfisz theorem gives this theorem with a bound of ${p \ll \exp( q^{o(1)} )}$, and from Exercise 48 of Notes 2 one can obtain a bound of the form ${p \ll \phi(q)^2 \log^2 q}$ if one assumes the generalised Riemann hypothesis. The probabilistic random models from Supplement 4 suggest that one should in fact be able to take ${p \ll q^{1+o(1)}}$.
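One can get a feel for these bounds by computing the least prime ${p(q,a)}$ in each primitive residue class for small moduli (an illustrative sketch; the exponents measured at such tiny ${q}$ are of course no substitute for the theorem):

```python
import math

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def least_prime(q, a):
    """Least prime p with p = a (mod q), for gcd(a, q) = 1."""
    p = a if a > 1 else a + q
    while not is_prime(p):
        p += q
    return p

worst_exponent = 0.0
for q in range(3, 101):
    for a in range(1, q):
        if math.gcd(a, q) == 1:
            p = least_prime(q, a)
            worst_exponent = max(worst_exponent, math.log(p) / math.log(q))
print(worst_exponent)   # comfortably below Xylouris' exponent 5 in this range
```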

We will not aim to obtain the optimal exponents for Linnik’s theorem here, and follow the treatment in Chapter 18 of Iwaniec and Kowalski. We will in fact establish the following more quantitative result (a special case of a more powerful theorem of Gallagher), which splits into two cases, depending on whether there is an exceptional zero or not:

Theorem 2 (Quantitative Linnik theorem) Let ${a\ (q)}$ be a primitive residue class for some ${q \geq 2}$. For any ${x > 1}$, let ${\psi(x;q,a)}$ denote the quantity

$\displaystyle \psi(x;q,a) := \sum_{n \leq x: n=a\ (q)} \Lambda(n).$

Assume that ${x \geq q^C}$ for some sufficiently large ${C}$.

• (i) (No exceptional zero) If all the real zeroes ${\beta}$ of ${L}$-functions ${L(\cdot,\chi)}$ of real characters ${\chi}$ of modulus ${q}$ are such that ${1-\beta \gg \frac{1}{\log q}}$, then

$\displaystyle \psi(x;q,a) = \frac{x}{\phi(q)} ( 1 + O( \exp( - c \frac{\log x}{\log q} ) ) + O( \frac{\log^2 q}{q} ) )$

for some absolute constant ${c>0}$.

• (ii) (Exceptional zero) If there is a zero ${\beta}$ of an ${L}$-function ${L(\cdot,\chi_1)}$ of a real character ${\chi_1}$ of modulus ${q}$ with ${\beta = 1 - \frac{\varepsilon}{\log q}}$ for some sufficiently small ${\varepsilon>0}$, then

$\displaystyle \psi(x;q,a) = \frac{x}{\phi(q)} ( 1 - \chi_1(a) \frac{x^{\beta-1}}{\beta} \ \ \ \ \ (1)$

$\displaystyle + O( \exp( - c \frac{\log x}{\log q} \log \frac{1}{\varepsilon} ) )$

$\displaystyle + O( \frac{\log^2 q}{q} ) )$

for some absolute constant ${c>0}$.

The implied constants here are effective.

Note from the Landau-Page theorem (Exercise 54 from Notes 2) that at most one exceptional zero exists (if ${\varepsilon}$ is small enough). A key point here is that the error term ${O( \exp( - c \frac{\log x}{\log q} \log \frac{1}{\varepsilon} ) )}$ in the exceptional zero case is an improvement over the error term when no exceptional zero is present; this compensates for the potential reduction in the main term coming from the ${\chi_1(a) \frac{x^{\beta-1}}{\beta}}$ term. The splitting into cases depending on whether an exceptional zero exists or not turns out to be an essential technique in many advanced results in analytic number theory (though presumably such a splitting will one day become unnecessary, once the possibility of exceptional zeroes is finally eliminated for good).

Exercise 3 Assuming Theorem 2, and assuming ${x \geq q^C}$ for some sufficiently large absolute constant ${C}$, establish the lower bound

$\displaystyle \psi(x;q,a) \gg \frac{x}{\phi(q)}$

when there is no exceptional zero, and

$\displaystyle \psi(x;q,a) \gg \varepsilon \frac{x}{\phi(q)}$

when there is an exceptional zero ${\beta = 1 - \frac{\varepsilon}{\log q}}$. Conclude that Theorem 2 implies Theorem 1, regardless of whether an exceptional zero exists or not.

Remark 4 The Brun-Titchmarsh theorem (Exercise 33 from Notes 4), in the sharp form of Montgomery and Vaughan, gives that

$\displaystyle \pi(x; q, a) \leq 2 \frac{x}{\phi(q) \log (x/q)}$

for any primitive residue class ${a\ (q)}$ and any ${x \geq q}$. This is (barely) consistent with the estimate (1). Any lowering of the coefficient ${2}$ in the Brun-Titchmarsh inequality (with reasonable error terms), in the regime when ${x}$ is a large power of ${q}$, would then lead to at least some elimination of the exceptional zero case. However, this has not led to any progress on the Landau-Siegel zero problem (and may well be just a reformulation of that problem). (When ${x}$ is a relatively small power of ${q}$, some improvements to Brun-Titchmarsh are possible that are not in contradiction with the presence of an exceptional zero; see this paper of Maynard for more discussion.)
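The Montgomery-Vaughan form of Brun-Titchmarsh is easy to test numerically (illustrative sketch; the moduli and cutoff are arbitrary):

```python
import math

def primes_up_to(x):
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p in range(2, x + 1) if sieve[p]]

def phi(q):
    return sum(1 for a in range(1, q + 1) if math.gcd(a, q) == 1)

x = 10**5
ps = primes_up_to(x)
for q in (3, 4, 7, 12, 30):
    bound = 2 * x / (phi(q) * math.log(x / q))
    for a in range(1, q):
        if math.gcd(a, q) == 1:
            count = sum(1 for p in ps if p % q == a)   # pi(x; q, a)
            assert count <= bound
print("Brun-Titchmarsh (Montgomery-Vaughan form) holds in the tested range")
```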

Theorem 2 is deduced in turn from facts about the distribution of zeroes of ${L}$-functions. Recall from the truncated explicit formula (Exercise 45(iv) of Notes 2) with (say) ${T := q^2}$ that

$\displaystyle \sum_{n \leq x} \Lambda(n) \chi(n) = - \sum_{\hbox{Re}(\rho) > 3/4; |\hbox{Im}(\rho)| \leq q^2; L(\rho,\chi)=0} \frac{x^\rho}{\rho} + O( \frac{x}{q^2} \log^2 q)$

for any non-principal character ${\chi}$ of modulus ${q}$, where we assume ${x \geq q^C}$ for some large ${C}$; for the principal character one has the same formula with an additional term of ${x}$ on the right-hand side (as is easily deduced from Theorem 21 of Notes 2). Using the Fourier inversion formula

$\displaystyle 1_{n = a\ (q)} = \frac{1}{\phi(q)} \sum_{\chi\ (q)} \overline{\chi(a)} \chi(n)$

(see Theorem 69 of Notes 1), we thus have

$\displaystyle \psi(x;q,a) = \frac{x}{\phi(q)} ( 1 - \sum_{\chi\ (q)} \overline{\chi(a)} \sum_{\hbox{Re}(\rho) > 3/4; |\hbox{Im}(\rho)| \leq q^2; L(\rho,\chi)=0} \frac{x^{\rho-1}}{\rho}$

$\displaystyle + O( \frac{\log^2 q}{q} ) )$

and so it suffices by the triangle inequality (bounding ${1/\rho}$ very crudely by ${O(1)}$, as the contribution of the low-lying zeroes already turns out to be quite dominant) to show that

$\displaystyle \sum_{\chi\ (q)} \sum_{\sigma > 3/4; |t| \leq q^2; L(\sigma+it,\chi)=0} x^{\sigma-1} \ll \exp( - c \frac{\log x}{\log q} ) \ \ \ \ \ (2)$

when no exceptional zero is present, and

$\displaystyle \sum_{\chi\ (q)} \sum_{\sigma > 3/4; |t| \leq q^2; L(\sigma+it,\chi)=0; \sigma+it \neq \beta} x^{\sigma-1} \ll \exp( - c \frac{\log x}{\log q} \log \frac{1}{\varepsilon} ) \ \ \ \ \ (3)$

when an exceptional zero is present.
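The expansion of ${1_{n = a\ (q)}}$ into characters used above can be verified directly for a prime modulus, where all ${\phi(q) = q-1}$ Dirichlet characters are generated from a primitive root; the sketch below uses ${q=7}$ with primitive root ${3}$ (both choices merely illustrative):

```python
import cmath

q, g = 7, 3          # q prime, g a primitive root mod q
# discrete logarithm table: index[g^k mod q] = k
index = {}
v = 1
for k in range(q - 1):
    index[v] = k
    v = (v * g) % q

def chi(t, n):
    """The t-th Dirichlet character mod q (0 <= t < q-1), zero on multiples of q."""
    if n % q == 0:
        return 0
    return cmath.exp(2j * cmath.pi * t * index[n % q] / (q - 1))

for a in range(1, q):
    for n in range(1, 50):
        # Fourier inversion: (1/phi(q)) sum_chi conj(chi(a)) chi(n) = 1_{n = a (q)}
        f = sum(chi(t, a).conjugate() * chi(t, n) for t in range(q - 1)) / (q - 1)
        expected = 1.0 if n % q == a else 0.0
        assert abs(f - expected) < 1e-9
print("Fourier inversion on (Z/7Z)^* verified")
```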

To handle the former case (2), one uses two facts about zeroes. The first is the classical zero-free region (Proposition 51 from Notes 2), which we reproduce in our context here:

Proposition 5 (Classical zero-free region) Let ${q, T \geq 2}$. Apart from a potential exceptional zero ${\beta}$, all zeroes ${\sigma+it}$ of ${L}$-functions ${L(\cdot,\chi)}$ with ${\chi}$ of modulus ${q}$ and ${|t| \leq T}$ are such that

$\displaystyle \sigma \leq 1 - \frac{c}{\log qT}$

for some absolute constant ${c>0}$.

Using this zero-free region, we have

$\displaystyle x^{\sigma-1} \ll \log x \int_{1/2}^{1-c/\log q} 1_{\alpha < \sigma} x^{\alpha-1}\ d\alpha$

whenever ${\sigma}$ contributes to the sum in (2), and so the left-hand side of (2) is bounded by

$\displaystyle \ll \log x \int_{1/2}^{1 - c/\log q} N( \alpha, q, q^2 ) x^{\alpha-1}\ d\alpha$

where we recall that ${N(\alpha,q,T)}$ is the number of zeroes ${\sigma+it}$ of any ${L}$-function of a character ${\chi}$ of modulus ${q}$ with ${\sigma \geq \alpha}$ and ${0 \leq t \leq T}$ (here we use conjugation symmetry to make ${t}$ non-negative, accepting a multiplicative factor of two).

In Exercise 25 of Notes 6, the grand density estimate

$\displaystyle N(\alpha,q,T) \ll (qT)^{4(1-\alpha)} \log^{O(1)}(qT) \ \ \ \ \ (4)$

is proven. If one inserts this bound into the above expression, one obtains a bound for (2) which is of the form

$\displaystyle \ll (\log^{O(1)} q) \exp( - c \frac{\log x}{\log q} ).$

Unfortunately this is off from what we need by a factor of ${\log^{O(1)} q}$ (and would lead to a weak form of Linnik’s theorem in which ${p}$ was bounded by ${O( \exp( \log^{O(1)} q ) )}$ rather than by ${q^{O(1)}}$). In the analogous problem for prime number theorems in short intervals, we could use the Vinogradov-Korobov zero-free region to compensate for this loss, but that region does not help here for the contribution of the low-lying zeroes with ${t = O(1)}$, which as mentioned before give the dominant contribution. Fortunately, it is possible to remove this logarithmic loss from the zero-density side of things:

Theorem 6 (Log-free grand density estimate) For any ${q, T > 1}$ and ${1/2 \leq \alpha \leq 1}$, one has

$\displaystyle N(\alpha,q,T) \ll (qT)^{O(1-\alpha)}.$

The implied constants are effective.

We prove this estimate below the fold. The proof follows the methods of the previous section, but one inserts various sieve weights to restrict sums over natural numbers to essentially become sums over “almost primes”, as this turns out to remove the logarithmic losses. (More generally, the trick of restricting to almost primes by inserting suitable sieve weights is quite useful for avoiding any unnecessary losses of logarithmic factors in analytic number theory estimates.)

Exercise 7 Use Theorem 6 to complete the proof of (2).

Now we turn to the case when there is an exceptional zero (3). The argument used to prove (2) applies here also, but does not gain the factor of ${\log \frac{1}{\varepsilon}}$ in the exponent. To achieve this, we need an additional tool, a version of the Deuring-Heilbronn repulsion phenomenon due to Linnik:

Theorem 8 (Deuring-Heilbronn repulsion phenomenon) Suppose ${q \geq 2}$ is such that there is an exceptional zero ${\beta = 1 - \frac{\varepsilon}{\log q}}$ with ${\varepsilon}$ small. Then all other zeroes ${\sigma+it}$ of ${L}$-functions of modulus ${q}$ are such that

$\displaystyle \sigma \leq 1 - c \frac{\log \frac{1}{\varepsilon}}{\log(q(2+|t|))}.$

In other words, the exceptional zero enlarges the classical zero-free region by a factor of ${\log \frac{1}{\varepsilon}}$. The implied constants are effective.

Exercise 9 Use Theorem 6 and Theorem 8 to complete the proof of (3), and thus Linnik’s theorem.

Exercise 10 Use Theorem 8 to give an alternate proof of (Tatuzawa’s version of) Siegel’s theorem (Theorem 62 of Notes 2). (Hint: if two characters have different moduli, then they can be made to have the same modulus by multiplying by suitable principal characters.)

Theorem 8 is proven by similar methods to that of Theorem 6, the basic idea being to insert a further weight of ${1 * \chi_1}$ (in addition to the sieve weights), the point being that the exceptional zero causes this weight to be quite small on the average. There is a strengthening of Theorem 8 due to Bombieri that is along the lines of Theorem 6, obtaining the improvement

$\displaystyle N'(\alpha,q,T) \ll \varepsilon (1 + \frac{\log T}{\log q}) (qT)^{O(1-\alpha)} \ \ \ \ \ (5)$

with effective implied constants for any ${1/2 \leq \alpha \leq 1}$ and ${T \geq 1}$ in the presence of an exceptional zero, where the prime in ${N'(\alpha,q,T)}$ means that the exceptional zero ${\beta}$ is omitted (thus ${N'(\alpha,q,T) = N(\alpha,q,T)-1}$ if ${\alpha \leq \beta}$). Note that the upper bound on ${N'(\alpha,q,T)}$ falls below one when ${\alpha > 1 - c \frac{\log \frac{1}{\varepsilon}}{\log(qT)}}$ for a sufficiently small ${c>0}$, thus recovering Theorem 8. Bombieri’s theorem can be established by the methods in this set of notes, and will be given as an exercise to the reader.

Remark 11 There are a number of alternate ways to derive the results in this set of notes, for instance using the Turan power sums method which is based on studying derivatives such as

$\displaystyle (\frac{L'}{L})^{(k)}(s,\chi) = (-1)^{k+1} \sum_n \frac{\Lambda(n) \chi(n) \log^k n}{n^s}$

$\displaystyle \approx (-1)^{k} k! \sum_\rho \frac{1}{(s-\rho)^{k+1}}$

for ${\hbox{Re}(s)>1}$ and large ${k}$, and performing various sorts of averaging in ${k}$ to attenuate the contribution of many of the zeroes ${\rho}$. We will not develop this method here, but see for instance Chapter 9 of Montgomery’s book. See the text of Friedlander and Iwaniec for yet another approach based primarily on sieve-theoretic ideas.

Remark 12 When one optimises all the exponents, it turns out that the exponent in Linnik’s theorem is extremely good in the presence of an exceptional zero – indeed Friedlander and Iwaniec showed that one can even get a bound of the form ${p \ll q^{2-c}}$ for some ${c>0}$, which is even stronger than what one can obtain from GRH! There are other places in which exceptional zeroes can be used to obtain results stronger than what one can obtain even on the Riemann hypothesis; for instance, Heath-Brown used the hypothesis of an infinite sequence of Siegel zeroes to obtain the twin prime conjecture.

In the previous set of notes, we studied upper bounds on sums such as ${|\sum_{N \leq n \leq N+M} n^{-it}|}$ for ${1 \leq M \leq N}$ that were valid for all ${t}$ in a given range, such as ${[T,2T]}$; this led in turn to upper bounds on the Riemann zeta function ${\zeta(\sigma+it)}$ for ${t}$ in the same range, and for various choices of ${\sigma}$. While some improvement over the trivial bound of ${O(N)}$ was obtained by these methods, we did not get close to the conjectural bound of ${O( N^{1/2+o(1)})}$ that one expects from pseudorandomness heuristics (assuming that ${T}$ is not too large compared with ${N}$, e.g. ${T = O(N^{O(1)})}$).

However, it turns out that one can get much better bounds if one settles for estimating sums such as ${|\sum_{N \leq n \leq N+M} n^{-it}|}$, or more generally finite Dirichlet series (also known as Dirichlet polynomials) such as ${|\sum_n a_n n^{-it}|}$, for most values of ${t}$ in a given range such as ${[T,2T]}$. Equivalently, we will be able to get some control on the large values of such Dirichlet polynomials, in the sense that we can control the set of ${t}$ for which ${|\sum_n a_n n^{-it}|}$ exceeds a certain threshold, even if we cannot show that this set is empty. These large value theorems are often closely tied with estimates for mean values such as ${\frac{1}{T}\int_T^{2T} |\sum_n a_n n^{-it}|^{2k}\ dt}$ of a Dirichlet series; these latter estimates are thus known as mean value theorems for Dirichlet series. Our approach to these theorems will follow the same sort of methods used in Notes 3, in particular relying on the generalised Bessel inequality from those notes.

Our main application of the large value theorems for Dirichlet polynomials will be to control the number of zeroes of the Riemann zeta function ${\zeta(s)}$ (or the Dirichlet ${L}$-functions ${L(s,\chi)}$) in various rectangles of the form ${\{ \sigma+it: \sigma \geq \alpha, |t| \leq T \}}$ for various ${T > 1}$ and ${1/2 < \alpha < 1}$. These rectangles will be larger than the zero-free regions for which we can exclude zeroes completely, but we will often be able to limit the number of zeroes in such rectangles to be quite small. For instance, we will be able to show the following weak form of the Riemann hypothesis: as ${T \rightarrow \infty}$, a proportion ${1-o(1)}$ of zeroes of the Riemann zeta function in the critical strip with ${|\hbox{Im}(s)| \leq T}$ will have real part ${1/2+o(1)}$. Related to this, the number of zeroes with ${|\hbox{Im}(s)| \leq T}$ and ${\hbox{Re}(s) \geq \alpha}$ can be shown to be bounded by ${O( T^{O(1-\alpha)+o(1)} )}$ as ${T \rightarrow \infty}$ for any ${1/2 < \alpha < 1}$.

In the next set of notes we will use refined versions of these theorems to establish Linnik’s theorem on the least prime in an arithmetic progression.

Our presentation here is broadly based on Chapters 9 and 10 in Iwaniec and Kowalski, who give a number of more sophisticated large value theorems than the ones discussed here.

We return to the study of the Riemann zeta function ${\zeta(s)}$, focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for ${\zeta}$, which in turn lead to improved estimates for the error term in the prime number theorem.

In equation (21) of Notes 2 we obtained the somewhat crude estimates

$\displaystyle \zeta(s) = \sum_{n \leq x} \frac{1}{n^s} - \frac{x^{1-s}}{1-s} + O( \frac{|s|}{\sigma} \frac{1}{x^\sigma} ) \ \ \ \ \ (1)$

for any ${x > 0}$ and ${s = \sigma+it}$ with ${\sigma>0}$ and ${s \neq 1}$. Setting ${x=1}$, we obtained the crude estimate

$\displaystyle \zeta(s) = \frac{1}{s-1} + O( \frac{|s|}{\sigma} )$

in this region. In particular, if ${0 < \varepsilon \leq \sigma \ll 1}$ and ${|t| \gg 1}$ then we had ${\zeta(s) = O_\varepsilon( |t| )}$. Using the functional equation and the Hadamard three lines lemma, we can improve this to ${\zeta(s) \ll_\varepsilon |t|^{\frac{1-\sigma}{2}+\varepsilon}}$; see Supplement 3.

Now we seek better upper bounds on ${\zeta}$. We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1 Let ${s = \sigma+it}$ with ${0 < \varepsilon \leq \sigma \ll 1}$ and ${|t| \gg 1}$. Then

$\displaystyle \zeta(s) \ll_\varepsilon \log(2+|t|) \sup_{1 \leq M \leq N \ll |t|} N^{1-\sigma} |\frac{1}{N} \sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n)|$

where ${e(x) := e^{2\pi i x}}$.

Proof: We fix a smooth function ${\eta: {\bf R} \rightarrow {\bf C}}$ with ${\eta(t)=1}$ for ${t \leq -1}$ and ${\eta(t)=0}$ for ${t \geq 1}$, and allow implied constants to depend on ${\eta}$. Let ${s=\sigma+it}$ with ${\varepsilon \leq \sigma \ll 1}$. From Exercise 33 of Supplement 3, we have

$\displaystyle \zeta(s) = \sum_n \frac{1}{n^s} \eta( \log n - \log C|t| ) + O_\varepsilon( 1 )$

for some sufficiently large absolute constant ${C}$. By dyadic decomposition, we thus have

$\displaystyle \zeta(s) \ll_{\varepsilon} 1 + \log(2+|t|) \sup_{1 \leq N \ll |t|} |\sum_{N \leq n < 2N} \frac{1}{n^s} \eta( \log n - \log C|t| )|.$

We can absorb the first term in the second using the ${N=1}$ case of the supremum. Writing ${\frac{1}{n^s} \eta( \log n - \log C|t| ) = N^{-\sigma} e( - \frac{t}{2\pi} \log n ) F_N(n)}$, where

$\displaystyle F_N(n) := (N/n)^\sigma \eta(\log n - \log C|t| ),$

it thus suffices to show that

$\displaystyle \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n) F_N(n) \ll \sup_{1 \leq M \leq N} |\sum_{N \leq n < N+M} e(-\frac{t}{2\pi} \log n)|$

for each ${N}$. But from the fundamental theorem of calculus, the left-hand side can be written as

$\displaystyle F_N(2N) \sum_{N \leq n < 2N} e(-\frac{t}{2\pi} \log n) - \int_N^{2N} (\sum_{N \leq n < M} e(-\frac{t}{2\pi} \log n)) F'_N(M)\ dM$

and the claim then follows from the triangle inequality and a routine calculation. $\Box$

We are thus interested in getting good bounds on the sum ${\sum_{N \leq n < N+M} e( -\frac{t}{2\pi} \log n )}$. More generally, we consider normalised exponential sums of the form

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ \ \ \ \ (2)$

where ${I \subset {\bf R}}$ is an interval of length at most ${N}$ for some ${N \geq 1}$, and ${f: {\bf R} \rightarrow {\bf R}}$ is a smooth function. We will assume smoothness estimates of the form

$\displaystyle |f^{(j)}(x)| = \exp( O(j^2) ) \frac{T}{N^j} \ \ \ \ \ (3)$

for some ${T>0}$, all ${x \in I}$, and all ${j \geq 1}$, where ${f^{(j)}}$ is the ${j}$-fold derivative of ${f}$; in the case ${f(x) := -\frac{t}{2\pi} \log x}$, ${I \subset [N,2N]}$ of interest for the Riemann zeta function, we easily verify that these estimates hold with ${T := |t|}$. (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative ${f^{(0)}=f}$ of ${f}$ directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to ${f(n)}$.
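For the phase ${f(x) = -\frac{t}{2\pi} \log x}$, the verification of (3) with ${T = |t|}$ is explicit: ${f^{(j)}(x) = -\frac{t}{2\pi} (-1)^{j-1} (j-1)!/x^j}$, so for ${x \in [N,2N]}$ the ratio of ${|f^{(j)}(x)|}$ to ${T/N^j}$ equals ${\frac{(j-1)!}{2\pi} (N/x)^j}$, which lies between ${e^{-Cj^2}}$ and ${e^{Cj^2}}$ for a modest absolute constant ${C}$. A quick machine check (with ${C=3}$, an illustrative choice):

```python
import math

T, N = 10.0**6, 10.0**3   # sample height t (with T = |t|) and scale N

def deriv_magnitude(j, x, t=T):
    """|f^(j)(x)| for f(x) = -(t/2pi) log x, from the closed form."""
    return (abs(t) / (2 * math.pi)) * math.factorial(j - 1) / x**j

for j in range(1, 11):
    for x in (N, 1.5 * N, 2 * N):
        ratio = deriv_magnitude(j, x) / (T / N**j)
        # the exp(O(j^2)) in (3) is witnessed here with constant C = 3
        assert math.exp(-3 * j * j) <= ratio <= math.exp(3 * j * j)
print("derivative bounds (3) hold for the zeta phase with T = |t|")
```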

The trivial bound for (2) is

$\displaystyle \frac{1}{N} \sum_{n \in I} e(f(n)) \ll 1 \ \ \ \ \ (4)$

and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of ${O_\varepsilon(N^{-1/2+\varepsilon})}$ for (2) for any ${\varepsilon>0}$ if ${T = O(N^{O(1)})}$; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 readily yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form ${O(N^{-c_K})}$ for (2) when ${T \ll N^K}$, where ${c_K > 0}$ depends on ${K}$. We present two such results here, which perform well for small and large values of ${K}$ respectively:

Theorem 2 Let ${2 \leq N \ll T}$, let ${I}$ be an interval of length at most ${N}$, and let ${f: I \rightarrow {\bf R}}$ be a smooth function obeying (3) for all ${j \geq 1}$ and ${x \in I}$.

• (i) (van der Corput estimate) For any natural number ${k \geq 2}$, one has

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll (\frac{T}{N^k})^{\frac{1}{2^k-2}} \log^{1/2} (2+T). \ \ \ \ \ (5)$

• (ii) (Vinogradov estimate) If ${k}$ is a natural number and ${T \leq N^{k}}$, then

$\displaystyle \frac{1}{N} \sum_{n \in I} e( f(n) ) \ll N^{-c/k^2} \ \ \ \ \ (6)$

for some absolute constant ${c>0}$.

The factor of ${\log^{1/2} (2+T)}$ can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of ${\log T}$. The estimate (6) is superior to (5) when ${T \sim N^K}$ for ${K}$ large, since (after optimising in ${K}$) (5) gives a gain of the form ${N^{-c/2^{cK}}}$ over the trivial bound, while (6) gives ${N^{-c/K^2}}$. We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on ${\zeta}$, and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.
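The cancellation promised by these bounds is easy to observe numerically; the following sketch (parameters chosen arbitrarily for illustration) evaluates the normalised zeta-type sum directly and compares it against the trivial bound (4):

```python
import cmath, math

# Direct evaluation of the normalised zeta-type sum (1/N)|Σ_{N<=n<2N} n^{-it}|,
# i.e. (2) with f(n) = -(t/2π) log n.  Parameters are illustrative only.
N, t = 10**4, 10**5
S = sum(cmath.exp(-1j * t * math.log(n)) for n in range(N, 2 * N))
normalized = abs(S) / N
# the trivial bound (4) is O(1); van der Corput (5) with k=2 predicts
# O((t/N^2)^{1/2} log^{1/2}(2+t)) ≈ O(0.11) for these parameters
print(f"normalised sum: {normalized:.4f} (trivial bound: 1)")
assert normalized < 0.2
```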

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating ${f}$ like a polynomial of degree roughly ${k}$; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about ${k}$ times, which is why the gain is only exponentially small in ${k}$. The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of ${k^2}$ – by using Hölder’s inequality in combination with the fact that the discrete curve ${\{ (n,n^2,\dots,n^k): n \in \{1,\dots,M\}\}}$ becomes roughly equidistributed in the box ${\{ (a_1,\dots,a_k): a_j = O( M^j ) \}}$ after taking the sumset of about ${k^2}$ copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “${p}$-adic” version of this equidistribution to reduce the claim at a given scale ${M}$ to a smaller scale ${M/p}$ with ${p \sim M^{1/k}}$, and then proceeding by induction.

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

• (i) Show that ${\zeta(\frac{1}{2}+it) \ll (1+|t|)^{1/6} \log^{O(1)}(1+|t|)}$ for all ${t \in {\bf R}}$. (Hint: use the ${k=3}$ case of the van der Corput estimate.)
• (ii) For any ${0 < \sigma < 1}$, show that ${\zeta(\sigma+it) \ll (1+|t|)^{\min( \frac{1-\sigma}{3}, \frac{1}{2} - \frac{\sigma}{3}) + o(1)}}$ as ${|t| \rightarrow \infty}$.

Exercise 4 Let ${t}$ be such that ${|t| \geq 100}$, and let ${\sigma \geq 1/2}$.

• (i) (Littlewood bound) Use the van der Corput estimate to show that ${\zeta(\sigma+it) \ll \log^{O(1)} |t|}$ whenever ${\sigma \geq 1 - O( \frac{(\log\log |t|)^2}{\log |t|} )}$.
• (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that ${\zeta(\sigma+it) \ll \log^{O(1)} |t|}$ whenever ${\sigma \geq 1 - O( \frac{(\log\log |t|)^{2/3}}{\log^{2/3} |t|} )}$.

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region ${\{ \sigma+it: \sigma > 1 - c \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}; |t| \geq 100 \}}$, which in turn leads to the prime number theorem with error term

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O\left( x \exp\left( - c \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)$

for ${x > 100}$. If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

$\displaystyle \{ \sigma+it: \sigma > 1 - c \frac{\log\log|t|}{\log |t|}; |t| \geq 100 \}$

(which is only slightly wider than the classical zero-free region) and an error term

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x \exp( - c \sqrt{\log x \log\log x} ) )$

in the prime number theorem.

Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let ${\chi}$ be a non-principal character of modulus ${q}$.

• (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that ${L(\sigma+it,\chi) \ll \log^{O(1)}(q|t|)}$ whenever ${|t| \geq 100}$ and

$\displaystyle \sigma \geq 1 - O( \min( \frac{\log\log(q|t|)}{\log q}, \frac{(\log\log(q|t|))^{2/3}}{\log^{2/3} |t|} ) ).$

(Hint: use the Vinogradov estimate and a change of variables to control ${\sum_{n \in I: n = a\ (q)} \exp( -it \log n)}$ for various intervals ${I}$ of length at most ${N}$ and residue classes ${a\ (q)}$, in the regime ${N \geq q^2}$ (say). For ${N < q^2}$, do not try to capture any cancellation and just use the triangle inequality instead.)

• (ii) Obtain a zero-free region

$\displaystyle \{ \sigma+it: \sigma > 1 - c \min( \frac{1}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}, \frac{1}{\log q} );$

$\displaystyle |t| \geq 100 \}$

for ${L(s,\chi)}$, for some (effective) absolute constant ${c>0}$.

• (iii) Obtain the prime number theorem in arithmetic progressions with error term

$\displaystyle \sum_{n \leq x: n = a\ (q)} \Lambda(n) = \frac{x}{\phi(q)} + O\left( x \exp\left( - c_A \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)$

whenever ${x > 100}$, ${q \leq \log^A x}$, ${a\ (q)}$ is primitive, and ${c_A>0}$ depends (ineffectively) on ${A}$.

We continue the discussion of sieve theory from Notes 4, but now specialise to the case of the linear sieve in which the sieve dimension ${\kappa}$ is equal to ${1}$, which is one of the best understood sieving situations, and one of the rare cases in which the precise limits of the sieve method are known. A bit more specifically, let ${z, D \geq 1}$ be quantities with ${z = D^{1/s}}$ for some fixed ${s>1}$, and let ${g}$ be a multiplicative function with

$\displaystyle g(p) = \frac{1}{p} + O(\frac{1}{p^2}) \ \ \ \ \ (1)$

and

$\displaystyle 0 \leq g(p) \leq 1-c \ \ \ \ \ (2)$

for all primes ${p}$ and some fixed ${c>0}$ (we allow all constants below to depend on ${c}$). Let ${P(z) := \prod_{p < z} p}$, and for each prime ${p < z}$, let ${E_p}$ be a set of integers, with ${E_d := \bigcap_{p|d} E_p}$ for ${d|P(z)}$. We consider finitely supported sequences ${(a_n)_{n \in {\bf Z}}}$ of non-negative reals for which we have bounds of the form

$\displaystyle \sum_{n \in E_d} a_n = g(d) X + r_d. \ \ \ \ \ (3)$

for all square-free ${d \leq D}$ and some ${X>0}$, and some remainder terms ${r_d}$. One is then interested in upper and lower bounds on the quantity

$\displaystyle \sum_{n\not \in\bigcup_{p < z} E_p} a_n. \ \ \ \ \ (4)$
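A minimal concrete instance of the above axioms is the Eratosthenes-type situation ${a_n = 1_{1 \leq n \leq X}}$, with ${E_p}$ the multiples of ${p}$ and ${g(d) = 1/d}$, for which the remainders obey ${|r_d| \leq 1}$. The following sketch (illustrative parameters of my own choosing) checks the axioms and compares the sifted count against the main term ${X V(z)}$:

```python
import itertools, math

# Eratosthenes-type instance of the sieve axioms: a_n = 1_{1<=n<=X},
# E_p = multiples of p, g(d) = 1/d, and |r_d| <= 1.
X, z = 10**5, 30
primes = [p for p in range(2, z) if all(p % q for q in range(2, p))]

# verify the axiom (3): Σ_{n in E_d} a_n = X/d + r_d with |r_d| <= 1
for r in range(1, len(primes) + 1):
    for combo in itertools.combinations(primes, r):
        d = math.prod(combo)
        assert abs(X // d - X / d) <= 1

# the sifted quantity: n in [1,X] with no prime factor below z,
# versus the main term X*V(z) with V(z) = Π_{p<z} (1 - g(p))
sifted = sum(1 for n in range(1, X + 1) if all(n % p for p in primes))
V = math.prod(1 - 1 / p for p in primes)
print(f"sifted count: {sifted}, main term X V(z): {X * V:.1f}")
assert abs(sifted - X * V) < 0.25 * X * V
```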

The fundamental lemma of sieve theory (Corollary 19 of Notes 4) gives us the bound

$\displaystyle \sum_{n\not \in\bigcup_{p < z} E_p} a_n = (1 + O(e^{-s})) X V(z) + O( \sum_{d \leq D: d | P(z)} |r_d| ) \ \ \ \ \ (5)$

where ${V(z)}$ is the quantity

$\displaystyle V(z) := \prod_{p < z} (1 - g(p)).$

This bound is strong when ${s}$ is large, but is not as useful for smaller values of ${s}$. We now give a sharp bound in this regime. We introduce the functions ${F, f: (0,+\infty) \rightarrow {\bf R}^+}$ by

$\displaystyle F(s) := 2e^\gamma ( \frac{1_{s>1}}{s} \ \ \ \ \ (6)$

$\displaystyle + \sum_{j \geq 3, \hbox{ odd}} \frac{1}{j!} \int_{[1,+\infty)^{j-1}} 1_{t_1+\dots+t_{j-1}\leq s-1} \frac{dt_1 \dots dt_{j-1}}{t_1 \dots t_j} )$

and

$\displaystyle f(s) := 2e^\gamma \sum_{j \geq 2, \hbox{ even}} \frac{1}{j!} \int_{[1,+\infty)^{j-1}} 1_{t_1+\dots+t_{j-1}\leq s-1} \frac{dt_1 \dots dt_{j-1}}{t_1 \dots t_j} \ \ \ \ \ (7)$

where we adopt the convention ${t_j := s - t_1 - \dots - t_{j-1}}$. Note that for each ${s}$ one has only finitely many non-zero summands in (6), (7). These functions are closely related to the Buchstab function ${\omega}$ from Exercise 28 of Supplement 4; indeed from comparing the definitions one has

$\displaystyle F(s) + f(s) = 2 e^\gamma \omega(s)$

for all ${s>0}$.

Exercise 1 (Alternate definition of ${F, f}$) Show that ${F(s)}$ is continuously differentiable except at ${s=1}$, and ${f(s)}$ is continuously differentiable except at ${s=2}$ where it is continuous, obeying the delay-differential equations

$\displaystyle \frac{d}{ds}( s F(s) ) = f(s-1) \ \ \ \ \ (8)$

for ${s > 1}$ and

$\displaystyle \frac{d}{ds}( s f(s) ) = F(s-1) \ \ \ \ \ (9)$

for ${s>2}$, with the initial conditions

$\displaystyle F(s) = \frac{2e^\gamma}{s} 1_{s>1}$

for ${s \leq 3}$ and

$\displaystyle f(s) = 0$

for ${s \leq 2}$. Show that these properties of ${F, f}$ determine ${F, f}$ completely.

For future reference, we record the following explicit values of ${F, f}$:

$\displaystyle F(s) = \frac{2e^\gamma}{s} \ \ \ \ \ (10)$

for ${1 < s \leq 3}$, and

$\displaystyle f(s) = \frac{2e^\gamma}{s} \log(s-1) \ \ \ \ \ (11)$

for ${2 \leq s \leq 4}$.
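These explicit values can be used to confirm the delay-differential equations of Exercise 1 symbolically; the following sketch checks (8) and (9) on the ranges covered by (10) and (11):

```python
import sympy as sp

# Symbolic check of the delay-differential equations (8), (9) using the
# explicit values (10), (11) of F and f.
s = sp.symbols('s', positive=True)
F = 2 * sp.exp(sp.EulerGamma) / s                  # F(s) for 1 < s <= 3, by (10)
f = 2 * sp.exp(sp.EulerGamma) / s * sp.log(s - 1)  # f(s) for 2 <= s <= 4, by (11)

# (8): d/ds( s F(s) ) = f(s-1); for 1 < s <= 3 the right-hand side vanishes
assert sp.simplify(sp.diff(s * F, s)) == 0
# (9): d/ds( s f(s) ) = F(s-1) on 2 < s <= 4
assert sp.simplify(sp.diff(s * f, s) - F.subs(s, s - 1)) == 0
print("(8) and (9) verified on the ranges of (10), (11)")
```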

We will show

Theorem 2 (Linear sieve) Let the notation and hypotheses be as above, with ${s > 1}$. Then, for any ${\varepsilon > 0}$, one has the upper bound

$\displaystyle \sum_{n\not \in\bigcup_{p < z} E_p} a_n \leq (F(s) + \varepsilon) X V(z) + O( \sum_{d \leq D: d | P(z)} |r_d| ) \ \ \ \ \ (12)$

and the lower bound

$\displaystyle \sum_{n\not \in\bigcup_{p < z} E_p} a_n \geq (f(s) - \varepsilon) X V(z) - O( \sum_{d \leq D: d | P(z)} |r_d| ) \ \ \ \ \ (13)$

if ${D}$ is sufficiently large depending on ${\varepsilon, s, c}$. Furthermore, this claim is sharp in the sense that the quantity ${F(s)}$ cannot be replaced by any smaller quantity, and similarly ${f(s)}$ cannot be replaced by any larger quantity.

Comparing the linear sieve with the fundamental lemma (and also testing using the sequence ${a_n = 1_{1 \leq n \leq N}}$ for some extremely large ${N}$), we conclude that we necessarily have the asymptotics

$\displaystyle 1 - O(e^{-s}) \leq f(s) \leq 1 \leq F(s) \leq 1 + O( e^{-s} )$

for all ${s \geq 1}$; this can also be proven directly from the definitions of ${F, f}$, or from Exercise 1, though it is somewhat challenging to do so; see e.g. Chapter 11 of Friedlander-Iwaniec for details.

Exercise 3 Establish the integral identities

$\displaystyle F(s) = 1 + \frac{1}{s} \int_s^\infty (1 - f(t-1))\ dt$

and

$\displaystyle f(s) = 1 + \frac{1}{s} \int_s^\infty (1 - F(t-1))\ dt$

for ${s \geq 2}$. Argue heuristically that these identities are consistent with the bounds in Theorem 2 and the Buchstab identity (Equation (16) from Notes 4).

Exercise 4 Use the Selberg sieve (Theorem 30 from Notes 4) to obtain a slightly weaker version of (12) in the range ${1 < s < 3}$ in which the error term ${|r_d|}$ is worsened to ${\tau_3(d) |r_d|}$, but the main term is unchanged.

We will prove Theorem 2 below the fold. The optimality of ${F, f}$ is closely related to the parity problem obstruction discussed in Section 5 of Notes 4; a naive application of the parity arguments there gives only the weak bounds ${F(s) \geq \frac{2 e^\gamma}{s}}$ and ${f(s)=0}$ for ${s \leq 2}$, but these can be sharpened by a more careful counting of various sums involving the Liouville function ${\lambda}$.

As an application of the linear sieve (specialised to the ranges in (10), (11)), we will establish a famous theorem of Chen, giving (in some sense) the closest approach to the twin prime conjecture that one can hope to achieve by sieve-theoretic methods:

Theorem 5 (Chen’s theorem) There are infinitely many primes ${p}$ such that ${p+2}$ is the product of at most two primes.

The same argument gives the version of Chen’s theorem for the even Goldbach conjecture, namely that for all sufficiently large even ${N}$, there exists a prime ${p}$ between ${2}$ and ${N}$ such that ${N-p}$ is the product of at most two primes.

The discussion in these notes loosely follows that of Friedlander-Iwaniec (who study sieving problems in more general dimension than ${\kappa=1}$).

A fundamental and recurring problem in analytic number theory is to demonstrate the presence of cancellation in an oscillating sum, a typical example of which might be a correlation

$\displaystyle \sum_{n} f(n) \overline{g(n)} \ \ \ \ \ (1)$

between two arithmetic functions ${f: {\bf N} \rightarrow {\bf C}}$ and ${g: {\bf N} \rightarrow {\bf C}}$, which to avoid technicalities we will assume to be finitely supported (or that the ${n}$ variable is localised to a finite range, such as ${\{ n: n \leq x \}}$). A key example to keep in mind for the purposes of this set of notes is the twisted von Mangoldt summatory function

$\displaystyle \sum_{n \leq x} \Lambda(n) \overline{\chi(n)} \ \ \ \ \ (2)$

that measures the correlation between the primes and a Dirichlet character ${\chi}$. One can get a “trivial” bound on such sums from the triangle inequality

$\displaystyle |\sum_{n} f(n) \overline{g(n)}| \leq \sum_{n} |f(n)| |g(n)|;$

for instance, from the triangle inequality and the prime number theorem we have

$\displaystyle |\sum_{n \leq x} \Lambda(n) \overline{\chi(n)}| \leq x + o(x) \ \ \ \ \ (3)$

as ${x \rightarrow \infty}$. But the triangle inequality is insensitive to the phase oscillations of the summands, and often we expect (e.g. from the probabilistic heuristics from Supplement 4) to be able to improve upon the trivial triangle inequality bound by a substantial amount; in the best case scenario, one typically expects a “square root cancellation” that gains a factor that is roughly the square root of the number of summands. (For instance, for Dirichlet characters ${\chi}$ of conductor ${O(x^{O(1)})}$, it is expected from probabilistic heuristics that the left-hand side of (3) should in fact be ${O_\varepsilon(x^{1/2+\varepsilon})}$ for any ${\varepsilon>0}$.)
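The square root cancellation heuristic is easy to simulate; the following sketch (a toy model with random signs standing in for arithmetic phases) illustrates the expected gain:

```python
import math, random

# Toy model of square root cancellation: x random signs in place of the
# arithmetic phases; the sum has size ~ sqrt(x), far below the
# triangle-inequality bound x.
random.seed(0)
x = 10**5
S = sum(random.choice((1, -1)) for _ in range(x))
print(f"|sum| = {abs(S)}, trivial bound = {x}, sqrt(x) ≈ {math.sqrt(x):.0f}")
assert abs(S) < 20 * math.sqrt(x)
```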

It has proven surprisingly difficult, however, to establish significant cancellation in many of the sums of interest in analytic number theory, particularly if the sums do not have a strong amount of algebraic structure (e.g. multiplicative structure) which allows for the deployment of specialised techniques (such as multiplicative number theory techniques). In fact, we are forced to rely (to an embarrassingly large extent) on (many variations of) a single basic tool to capture at least some cancellation, namely the Cauchy-Schwarz inequality. Indeed, in many cases the classical case

$\displaystyle |\sum_n f(n) \overline{g(n)}| \leq (\sum_n |f(n)|^2)^{1/2} (\sum_n |g(n)|^2)^{1/2}, \ \ \ \ \ (4)$

considered by Cauchy, where at least one of ${f, g: {\bf N} \rightarrow {\bf C}}$ is finitely supported, suffices for applications. Roughly speaking, the Cauchy-Schwarz inequality replaces the task of estimating a cross-correlation between two different functions ${f,g}$, to that of measuring self-correlations between ${f}$ and itself, or ${g}$ and itself, which are usually easier to compute (albeit at the cost of capturing less cancellation). Note that the Cauchy-Schwarz inequality requires almost no hypotheses on the functions ${f}$ or ${g}$, making it a very widely applicable tool.

There is however some skill required to decide exactly how to deploy the Cauchy-Schwarz inequality (and in particular, how to select ${f}$ and ${g}$); if applied blindly, one loses all cancellation and can even end up with a worse estimate than the trivial bound. For instance, if one tries to bound (2) directly by applying Cauchy-Schwarz with the functions ${\Lambda}$ and ${\chi}$, one obtains the bound

$\displaystyle |\sum_{n \leq x} \Lambda(n) \overline{\chi(n)}| \leq (\sum_{n \leq x} \Lambda(n)^2)^{1/2} (\sum_{n \leq x} |\chi(n)|^2)^{1/2}.$

The right-hand side may be bounded by ${\ll x \log^{1/2} x}$, but this is worse than the trivial bound (3) by a logarithmic factor. This can be “blamed” on the fact that ${\Lambda}$ and ${\chi}$ are concentrated on rather different sets (${\Lambda}$ is concentrated on primes, while ${\chi}$ is more or less uniformly distributed amongst the natural numbers); but even if one corrects for this (e.g. by weighting Cauchy-Schwarz with some suitable “sieve weight” that is more concentrated on primes), one still does not do any better than (3). Indeed, the Cauchy-Schwarz inequality suffers from the same key weakness as the triangle inequality: it is insensitive to the phase oscillation of the factors ${f, g}$.
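The asymptotic ${\sum_{n \leq x} \Lambda(n)^2 \sim x \log x}$ behind the bound ${\ll x \log^{1/2} x}$ is easy to check numerically; the following sketch does so by summing over prime powers:

```python
import math
from sympy import primerange

# Numerical check that Σ_{n<=x} Λ(n)^2 ~ x log x: each prime power p^k <= x
# contributes (log p)^2.  This is the source of the extra log^{1/2} x loss
# in the blind Cauchy-Schwarz bound.
x = 10**5
total = 0.0
for p in primerange(2, x + 1):
    pk = p
    while pk <= x:
        total += math.log(p) ** 2
        pk *= p
print(f"Σ Λ(n)^2 ≈ {total:.0f},  x log x ≈ {x * math.log(x):.0f}")
assert 0.8 * x * math.log(x) < total < 1.2 * x * math.log(x)
```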

While the Cauchy-Schwarz inequality can be poor at estimating a single correlation such as (1), its power improves when considering an average (or sum, or square sum) of multiple correlations. In this set of notes, we will focus on one such situation of this type, namely that of trying to estimate a square sum

$\displaystyle (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \ \ \ \ \ (5)$

that measures the correlations of a single function ${f: {\bf N} \rightarrow {\bf C}}$ with multiple other functions ${g_j: {\bf N} \rightarrow {\bf C}}$. One should think of the situation in which ${f}$ is a “complicated” function, such as the von Mangoldt function ${\Lambda}$, but the ${g_j}$ are relatively “simple” functions, such as Dirichlet characters. In the case when the ${g_j}$ are orthonormal functions, we of course have the classical Bessel inequality:

Lemma 1 (Bessel inequality) Let ${g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}}$ be finitely supported functions obeying the orthonormality relationship

$\displaystyle \sum_n g_j(n) \overline{g_{j'}(n)} = 1_{j=j'}$

for all ${1 \leq j,j' \leq J}$. Then for any function ${f: {\bf N} \rightarrow {\bf C}}$, we have

$\displaystyle (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \leq (\sum_n |f(n)|^2)^{1/2}.$

For sake of comparison, if one were to apply the Cauchy-Schwarz inequality (4) separately to each summand in (5), one would obtain the bound of ${J^{1/2} (\sum_n |f(n)|^2)^{1/2}}$, which is significantly inferior to the Bessel bound when ${J}$ is large. Geometrically, what is going on is this: the Cauchy-Schwarz inequality (4) is only close to sharp when ${f}$ and ${g}$ are close to parallel in the Hilbert space ${\ell^2({\bf N})}$. But if ${g_1,\dots,g_J}$ are orthonormal, then it is not possible for any other vector ${f}$ to be simultaneously close to parallel to too many of these orthonormal vectors, and so the inner products of ${f}$ with most of the ${g_j}$ should be small. (See this previous blog post for more discussion of this principle.) One can view the Bessel inequality as formalising a repulsion principle: if ${f}$ correlates too much with some of the ${g_j}$, then it does not have enough “energy” to have large correlation with the rest of the ${g_j}$.
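The gap between the Bessel bound and the term-by-term Cauchy-Schwarz bound is easy to see numerically; the following sketch compares the two for the orthonormal additive characters ${g_j(n) = e(jn/N)/\sqrt{N}}$ and a randomly generated test function ${f}$ (all parameters illustrative):

```python
import numpy as np

# Bessel versus term-by-term Cauchy-Schwarz, for the orthonormal additive
# characters g_j(n) = e(jn/N)/sqrt(N) and a random test function f.
rng = np.random.default_rng(0)
N, J = 512, 64
n = np.arange(N)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
G = np.exp(2j * np.pi * np.outer(np.arange(J), n) / N) / np.sqrt(N)  # J x N

corr = G.conj() @ f              # the inner products Σ_n f(n) conj(g_j(n))
square_sum = np.linalg.norm(corr)
f_norm = np.linalg.norm(f)
assert square_sum <= f_norm + 1e-9          # Bessel inequality
print(f"square-sum: {square_sum:.2f}, Bessel bound: {f_norm:.2f}, "
      f"term-by-term Cauchy-Schwarz bound: {np.sqrt(J) * f_norm:.2f}")
```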

In analytic number theory applications, it is useful to generalise the Bessel inequality to the situation in which the ${g_j}$ are not necessarily orthonormal. This can be accomplished via the Cauchy-Schwarz inequality:

Proposition 2 (Generalised Bessel inequality) Let ${g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}}$ be finitely supported functions, and let ${\nu: {\bf N} \rightarrow {\bf R}^+}$ be a non-negative function. Let ${f: {\bf N} \rightarrow {\bf C}}$ be a function that vanishes whenever ${\nu}$ vanishes. Then we have

$\displaystyle (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2} \ \ \ \ \ (6)$

$\displaystyle \times ( \sum_{j=1}^J \sum_{j'=1}^J c_j \overline{c_{j'}} \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)} )^{1/2}$

for some sequence ${c_1,\dots,c_J}$ of complex numbers with ${\sum_{j=1}^J |c_j|^2 = 1}$, with the convention that ${|f(n)|^2/\nu(n)}$ vanishes whenever ${f(n), \nu(n)}$ both vanish.

Note by relabeling that we may replace the domain ${{\bf N}}$ here by any other at most countable set, such as the integers ${{\bf Z}}$. (Indeed, one can give an analogue of this lemma on arbitrary measure spaces, but we will not do so here.) This result first appears in this paper of Boas.

Proof: We use the method of duality to replace the role of the function ${f}$ by a dual sequence ${c_1,\dots,c_J}$. By the converse to Cauchy-Schwarz, we may write the left-hand side of (6) as

$\displaystyle \sum_{j=1}^J \overline{c_j} \sum_{n} f(n) \overline{g_j(n)}$

for some complex numbers ${c_1,\dots,c_J}$ with ${\sum_{j=1}^J |c_j|^2 = 1}$. Indeed, if all of the ${\sum_{n} f(n) \overline{g_j(n)}}$ vanish, we can set the ${c_j}$ arbitrarily, otherwise we set ${(c_1,\dots,c_J)}$ to be the unit vector formed by dividing ${(\sum_{n} f(n) \overline{g_j(n)})_{j=1}^J}$ by its length. We can then rearrange this expression as

$\displaystyle \sum_n f(n) \overline{\sum_{j=1}^J c_j g_j(n)}.$

Applying Cauchy-Schwarz (dividing the first factor by ${\nu(n)^{1/2}}$ and multiplying the second by ${\nu(n)^{1/2}}$, after first removing those ${n}$ for which ${\nu(n)}$ vanishes), this is bounded by

$\displaystyle (\sum_n |f(n)|^2 / \nu(n))^{1/2} (\sum_n \nu(n) |\sum_{j=1}^J c_j g_j(n)|^2)^{1/2},$

and the claim follows by expanding out the second factor. $\Box$

Observe that Lemma 1 is a special case of Proposition 2 when ${\nu=1}$ and the ${g_j}$ are orthonormal. In general, one can expect Proposition 2 to be useful when the ${g_j}$ are almost orthogonal relative to ${\nu}$, in that the correlations ${\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}}$ tend to be small when ${j,j'}$ are distinct. In that case, one can hope for the diagonal term ${j=j'}$ in the right-hand side of (6) to dominate, in which case one can obtain estimates of comparable strength to the classical Bessel inequality. The flexibility to choose different weights ${\nu}$ in the above proposition has some technical advantages; for instance, if ${f}$ is concentrated in a sparse set (such as the primes), it is sometimes useful to tailor ${\nu}$ to a comparable set (e.g. the almost primes) in order not to lose too much in the first factor ${\sum_n |f(n)|^2 / \nu(n)}$. Also, it can be useful to choose a fairly “smooth” weight ${\nu}$, in order to make the weighted correlations ${\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}}$ small.
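The following sketch (with randomly generated data standing in for ${f, \nu, g_j}$) verifies (6) numerically, constructing the dual coefficients ${c_j}$ as in the proof:

```python
import numpy as np

# Numerical verification of the generalised Bessel inequality (6), with
# random data for f, ν and (non-orthogonal) g_j; the coefficients c_j are
# the dual unit vector from the proof.
rng = np.random.default_rng(1)
N, J = 200, 20
nu = rng.uniform(0.5, 2.0, N)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
G = rng.standard_normal((J, N)) + 1j * rng.standard_normal((J, N))

corr = G.conj() @ f              # (Σ_n f(n) conj(g_j(n)))_{j=1}^J
lhs = np.linalg.norm(corr)
c = corr / lhs                   # unit vector with Σ_j conj(c_j) corr_j = lhs
# the second factor (Σ_{j,j'} c_j conj(c_{j'}) Σ_n ν g_j conj(g_{j'}))^{1/2}
# equals || sqrt(ν) Σ_j c_j g_j ||_2 after expanding the square
second = np.linalg.norm(np.sqrt(nu) * (c @ G))
rhs = np.linalg.norm(f / np.sqrt(nu)) * second
assert lhs <= rhs + 1e-9
print(f"LHS = {lhs:.3f} <= RHS = {rhs:.3f}")
```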

Remark 3 In harmonic analysis, the use of tools such as Proposition 2 is known as the method of almost orthogonality, or the ${TT^*}$ method. The explanation for the latter name is as follows. For sake of exposition, suppose that ${\nu}$ is never zero (or we remove all ${n}$ from the domain for which ${\nu(n)}$ vanishes). Given a family of finitely supported functions ${g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}}$, consider the linear operator ${T: \ell^2(\nu^{-1}) \rightarrow \ell^2(\{1,\dots,J\})}$ defined by the formula

$\displaystyle T f := ( \sum_{n} f(n) \overline{g_j(n)} )_{j=1}^J.$

This is a bounded linear operator, and the left-hand side of (6) is nothing other than the ${\ell^2(\{1,\dots,J\})}$ norm of ${Tf}$. Without any further information on the function ${f}$ other than its ${\ell^2(\nu^{-1})}$ norm ${(\sum_n |f(n)|^2 / \nu(n))^{1/2}}$, the best estimate one can obtain on (6) here is clearly

$\displaystyle (\sum_n |f(n)|^2 / \nu(n))^{1/2} \times \|T\|_{op},$

where ${\|T\|_{op}}$ denotes the operator norm of ${T}$.

The adjoint ${T^*: \ell^2(\{1,\dots,J\}) \rightarrow \ell^2(\nu^{-1})}$ is easily computed to be

$\displaystyle T^* (c_j)_{j=1}^J := (\sum_{j=1}^J c_j \nu(n) g_j(n) )_{n \in {\bf N}}.$

The composition ${TT^*: \ell^2(\{1,\dots,J\}) \rightarrow \ell^2(\{1,\dots,J\})}$ of ${T}$ and its adjoint is then given by

$\displaystyle TT^* (c_j)_{j=1}^J := (\sum_{j'=1}^J c_{j'} \sum_n \nu(n) g_{j'}(n) \overline{g_j(n)} )_{j=1}^J.$

From the spectral theorem (or singular value decomposition), one sees that the operator norms of ${T}$ and ${TT^*}$ are related by the identity

$\displaystyle \|T\|_{op} = \|TT^*\|_{op}^{1/2},$

and as ${TT^*}$ is a self-adjoint, positive semi-definite operator, the operator norm ${\|TT^*\|_{op}}$ is also the supremum of the quantity

$\displaystyle \langle TT^* (c_j)_{j=1}^J, (c_j)_{j=1}^J \rangle_{\ell^2(\{1,\dots,J\})} = \sum_{j=1}^J \sum_{j'=1}^J c_j \overline{c_{j'}} \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}$

where ${(c_j)_{j=1}^J}$ ranges over unit vectors in ${\ell^2(\{1,\dots,J\})}$. Putting these facts together, we obtain Proposition 2; furthermore, we see from this analysis that the bound here is essentially optimal if the only information one is allowed to use about ${f}$ is its ${\ell^2(\nu^{-1})}$ norm.

For further discussion of almost orthogonality methods from a harmonic analysis perspective, see Chapter VII of this text of Stein.
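The identity ${\|T\|_{op} = \|TT^*\|_{op}^{1/2}}$ can also be checked numerically for the operator of Remark 3; in the following sketch, after the change of variables ${h := f/\nu^{1/2}}$ (which identifies ${\ell^2(\nu^{-1})}$ with ${\ell^2}$), ${T}$ is represented by the matrix with rows ${\overline{g_j}\, \nu^{1/2}}$, with all data randomly generated:

```python
import numpy as np

# Numerical check of ||T||_op = ||TT*||_op^{1/2}.  After the isometry
# h := f/sqrt(ν) from l^2(ν^{-1}) to l^2, the operator T of Remark 3 acts
# as the J x N matrix with rows conj(g_j) * sqrt(ν).
rng = np.random.default_rng(2)
N, J = 100, 10
nu = rng.uniform(0.5, 2.0, N)
G = rng.standard_normal((J, N)) + 1j * rng.standard_normal((J, N))
T = G.conj() * np.sqrt(nu)

TTstar = T @ T.conj().T
op_T = np.linalg.norm(T, 2)             # largest singular value
op_TTstar = np.linalg.norm(TTstar, 2)   # largest eigenvalue (Hermitian PSD)
assert abs(op_T - np.sqrt(op_TTstar)) < 1e-8
print(f"||T|| = {op_T:.4f} = ||TT*||^(1/2) = {np.sqrt(op_TTstar):.4f}")
```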

Exercise 4 Under the same hypotheses as Proposition 2, show that

$\displaystyle \sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}| \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2}$

$\displaystyle \times ( \sum_{j=1}^J \sum_{j'=1}^J |\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}| )^{1/2}$

as well as the variant inequality

$\displaystyle |\sum_{j=1}^J \sum_{n} f(n) \overline{g_j(n)}| \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2}$

$\displaystyle \times | \sum_{j=1}^J \sum_{j'=1}^J \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}|^{1/2}.$

Proposition 2 has many applications in analytic number theory; for instance, we will use it in later notes to control the large value of Dirichlet series such as the Riemann zeta function. One of the key benefits is that it largely eliminates the need to consider further correlations of the function ${f}$ (other than its self-correlation ${\sum_n |f(n)|^2 / \nu(n)}$ relative to ${\nu^{-1}}$, which is usually fairly easy to compute or estimate as ${\nu}$ is usually chosen to be relatively simple); this is particularly useful if ${f}$ is a function which is significantly more complicated to analyse than the functions ${g_j}$. Of course, the tradeoff for this is that one now has to deal with the coefficients ${c_j}$, which if anything are even less understood than ${f}$, since literally the only thing we know about these coefficients is their square sum ${\sum_{j=1}^J |c_j|^2}$. However, as long as there is enough almost orthogonality between the ${g_j}$, one can estimate the ${c_j}$ by fairly crude estimates (e.g. triangle inequality or Cauchy-Schwarz) and still get reasonably good estimates.

In this set of notes, we will use Proposition 2 to prove some versions of the large sieve inequality, which controls a square-sum of correlations

$\displaystyle \sum_n f(n) e( -\xi_j n )$

of an arbitrary finitely supported function ${f: {\bf Z} \rightarrow {\bf C}}$ with various additive characters ${n \mapsto e( \xi_j n)}$ (where ${e(x) := e^{2\pi i x}}$), or alternatively a square-sum of correlations

$\displaystyle \sum_n f(n) \overline{\chi_j(n)}$

of ${f}$ with various primitive Dirichlet characters ${\chi_j}$; it turns out that one can prove a (slightly sub-optimal) version of this inequality quite quickly from Proposition 2 if one first prepares the sum by inserting a smooth cutoff with well-behaved Fourier transform. The large sieve inequality has many applications (as the name suggests, it has particular utility within sieve theory). For the purposes of this set of notes, though, the main application we will need it for is the Bombieri-Vinogradov theorem, which in a very rough sense gives a prime number theorem in arithmetic progressions, which, “on average”, is of strength comparable to the results provided by the Generalised Riemann Hypothesis (GRH), but has the great advantage of being unconditional (it does not require any unproven hypotheses such as GRH); it can be viewed as a significant extension of the Siegel-Walfisz theorem from Notes 2. As we shall see in later notes, the Bombieri-Vinogradov theorem is a very useful ingredient in sieve-theoretic problems involving the primes.

There is however one additional important trick, beyond the large sieve, which we will need in order to establish the Bombieri-Vinogradov theorem. As it turns out, after some basic manipulations (and the deployment of some multiplicative number theory, and specifically the Siegel-Walfisz theorem), the task of proving the Bombieri-Vinogradov theorem is reduced to that of getting a good estimate on sums that are roughly of the form

$\displaystyle \sum_{j=1}^J |\sum_n \Lambda(n) \overline{\chi_j}(n)| \ \ \ \ \ (7)$

for some primitive Dirichlet characters ${\chi_j}$. This looks like the type of sum that can be controlled by the large sieve (or by Proposition 2), except that this is an ordinary sum rather than a square sum (i.e., an ${\ell^1}$ norm rather than an ${\ell^2}$ norm). One could of course try to control such a sum in terms of the associated square-sum through the Cauchy-Schwarz inequality, but this turns out to be very wasteful (it loses a factor of about ${J^{1/2}}$). Instead, one should try to exploit the special structure of the von Mangoldt function ${\Lambda}$, in particular the fact that it can be expressed as a Dirichlet convolution ${\alpha * \beta}$ of two further arithmetic sequences ${\alpha,\beta}$ (or as a finite linear combination of such Dirichlet convolutions). The reason for introducing this convolution structure is through the basic identity

$\displaystyle (\sum_n \alpha*\beta(n) \overline{\chi_j}(n)) = (\sum_n \alpha(n) \overline{\chi_j}(n)) (\sum_n \beta(n) \overline{\chi_j}(n)) \ \ \ \ \ (8)$

for any finitely supported sequences ${\alpha,\beta: {\bf N} \rightarrow {\bf C}}$, as can be easily seen by multiplying everything out and using the completely multiplicative nature of ${\chi_j}$. (This is the multiplicative analogue of the well-known relationship ${\widehat{f*g}(\xi) = \hat f(\xi) \hat g(\xi)}$ between ordinary convolution and Fourier coefficients.) This factorisation, together with yet another application of the Cauchy-Schwarz inequality, lets one control (7) by square-sums of the sort that can be handled by the large sieve inequality.
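As a quick numerical sanity check of (8), one can take the completely multiplicative Liouville function ${\lambda}$ standing in for ${\overline{\chi_j}}$, with random finitely supported ${\alpha, \beta}$:

```python
import random
from sympy import factorint

# Check of identity (8), with the completely multiplicative Liouville
# function λ in place of conj(χ_j), and random finitely supported α, β.
def liouville(n):
    return (-1) ** sum(factorint(n).values())

random.seed(0)
M = 30
alpha = {n: random.randint(-5, 5) for n in range(1, M + 1)}
beta = {n: random.randint(-5, 5) for n in range(1, M + 1)}

conv = {}                      # Dirichlet convolution α*β, supported on [1, M^2]
for d, a in alpha.items():
    for e, b in beta.items():
        conv[d * e] = conv.get(d * e, 0) + a * b

lhs = sum(v * liouville(n) for n, v in conv.items())
rhs = (sum(a * liouville(n) for n, a in alpha.items())
       * sum(b * liouville(n) for n, b in beta.items()))
assert lhs == rhs
print("identity (8) verified with λ:", lhs, "=", rhs)
```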

As we have seen in Notes 1, the von Mangoldt function ${\Lambda}$ does indeed admit several factorisations into Dirichlet convolution type, such as the factorisation ${\Lambda = \mu * L}$. One can try directly inserting this factorisation into the above strategy; it almost works, but there turns out to be a problem when considering the contribution of the portion of ${\mu}$ or ${L}$ that is supported at very small natural numbers, as the large sieve loses any gain over the trivial bound in such settings. Because of this, there is a need for a more sophisticated decomposition of ${\Lambda}$ into Dirichlet convolutions ${\alpha * \beta}$ which are non-degenerate in the sense that ${\alpha,\beta}$ are supported away from small values. (As a non-example, the trivial factorisation ${\Lambda = \Lambda * \delta}$ would be a totally inappropriate factorisation for this purpose.) Fortunately, it turns out that through some elementary combinatorial manipulations, some satisfactory decompositions of this type are available, such as the Vaughan identity and the Heath-Brown identity. By using one of these identities we will be able to complete the proof of the Bombieri-Vinogradov theorem. (These identities are also useful for other applications in which one wishes to control correlations between the von Mangoldt function ${\Lambda}$ and some other sequence; we will see some examples of this in later notes.)
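The identity ${\Lambda = \mu * L}$ itself, i.e. ${\Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d}}$, is easy to test numerically:

```python
import math
from sympy import factorint, divisors

# Numerical check of the factorisation Λ = μ * L, i.e.
# Λ(n) = Σ_{d|n} μ(d) log(n/d).
def mu(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def von_mangoldt(n):
    f = factorint(n)
    return math.log(next(iter(f))) if len(f) == 1 else 0.0

for n in range(2, 200):
    conv = sum(mu(d) * math.log(n / d) for d in divisors(n))
    assert abs(conv - von_mangoldt(n)) < 1e-9
print("Λ = μ * L checked for 2 <= n < 200")
```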

For further reading on these topics, including a significantly larger number of examples of the large sieve inequality, see Chapters 7 and 17 of Iwaniec and Kowalski.

Remark 5 We caution that the presentation given in this set of notes is highly ahistorical; we are using modern streamlined proofs of results that were first obtained by more complicated arguments.

A fundamental and recurring problem in analytic number theory is to demonstrate the presence of cancellation in an oscillating sum, a typical example of which might be a correlation

$\displaystyle \sum_{n} f(n) \overline{g(n)} \ \ \ \ \ (1)$

between two arithmetic functions ${f: {\bf N} \rightarrow {\bf C}}$ and ${g: {\bf N} \rightarrow {\bf C}}$, which to avoid technicalities we will assume to be finitely supported (or that the ${n}$ variable is localised to a finite range, such as ${\{ n: n \leq x \}}$). A key example to keep in mind for the purposes of this set of notes is the twisted von Mangoldt summatory function

$\displaystyle \sum_{n \leq x} \Lambda(n) \overline{\chi(n)} \ \ \ \ \ (2)$

that measures the correlation between the primes and a Dirichlet character ${\chi}$. One can get a “trivial” bound on such sums from the triangle inequality

$\displaystyle |\sum_{n} f(n) \overline{g(n)}| \leq \sum_{n} |f(n)| |g(n)|;$

for instance, from the triangle inequality and the prime number theorem we have

$\displaystyle |\sum_{n \leq x} \Lambda(n) \overline{\chi(n)}| \leq x + o(x) \ \ \ \ \ (3)$

as ${x \rightarrow \infty}$. But the triangle inequality is insensitive to the phase oscillations of the summands, and often we expect (e.g. from the probabilistic heuristics from Supplement 4) to be able to improve upon the trivial triangle inequality bound by a substantial amount; in the best case scenario, one typically expects a “square root cancellation” that gains a factor that is roughly the square root of the number of summands. (For instance, for Dirichlet characters ${\chi}$ of conductor ${O(x^{O(1)})}$, it is expected from probabilistic heuristics that the left-hand side of (3) should in fact be ${O_\varepsilon(x^{1/2+\varepsilon})}$ for any ${\varepsilon>0}$.)
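The square root cancellation heuristic is easy to illustrate numerically. The following quick simulation (purely illustrative, with arbitrary parameters; it plays no role in the rigorous development) shows that a sum of ${N}$ independent random signs is typically of size about ${\sqrt{N}}$, far below the trivial triangle-inequality bound of ${N}$:

```python
import math
import random

# Square root cancellation for random signs: a sum of N independent signs
# eps_n = +-1 is typically of size about sqrt(N), far below the trivial
# triangle-inequality bound of N.
random.seed(0)
N, trials = 10**4, 100
sums = [abs(sum(random.choice((-1, 1)) for _ in range(N))) for _ in range(trials)]
rms = math.sqrt(sum(s * s for s in sums) / trials)
print(f"trivial bound {N}; typical |sum| {rms:.0f}; sqrt(N) = {math.sqrt(N):.0f}")
```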

It has proven surprisingly difficult, however, to establish significant cancellation in many of the sums of interest in analytic number theory, particularly if the sums do not have a strong amount of algebraic structure (e.g. multiplicative structure) that allows for the deployment of specialised techniques (such as multiplicative number theory techniques). In fact, we are forced to rely (to an embarrassingly large extent) on (many variations of) a single basic tool to capture at least some cancellation, namely the Cauchy-Schwarz inequality. Indeed, in many cases the classical case

$\displaystyle |\sum_n f(n) \overline{g(n)}| \leq (\sum_n |f(n)|^2)^{1/2} (\sum_n |g(n)|^2)^{1/2}, \ \ \ \ \ (4)$

considered by Cauchy, where at least one of ${f, g: {\bf N} \rightarrow {\bf C}}$ is finitely supported, suffices for applications. Roughly speaking, the Cauchy-Schwarz inequality replaces the task of estimating a cross-correlation between two different functions ${f,g}$, to that of measuring self-correlations between ${f}$ and itself, or ${g}$ and itself, which are usually easier to compute (albeit at the cost of capturing less cancellation). Note that the Cauchy-Schwarz inequality requires almost no hypotheses on the functions ${f}$ or ${g}$, making it a very widely applicable tool.

There is however some skill required to decide exactly how to deploy the Cauchy-Schwarz inequality (and in particular, how to select ${f}$ and ${g}$); if applied blindly, one loses all cancellation and can even end up with a worse estimate than the trivial bound. For instance, if one tries to bound (2) directly by applying Cauchy-Schwarz with the functions ${\Lambda}$ and ${\chi}$, one obtains the bound

$\displaystyle |\sum_{n \leq x} \Lambda(n) \overline{\chi(n)}| \leq (\sum_{n \leq x} \Lambda(n)^2)^{1/2} (\sum_{n \leq x} |\chi(n)|^2)^{1/2}.$

The right-hand side may be bounded by ${\ll x \log^{1/2} x}$, but this is worse than the trivial bound (3) by a logarithmic factor. This can be “blamed” on the fact that ${\Lambda}$ and ${\chi}$ are concentrated on rather different sets (${\Lambda}$ is concentrated on primes, while ${\chi}$ is more or less uniformly distributed amongst the natural numbers); but even if one corrects for this (e.g. by weighting Cauchy-Schwarz with some suitable “sieve weight” that is more concentrated on primes), one still does not do any better than (3). Indeed, the Cauchy-Schwarz inequality suffers from the same key weakness as the triangle inequality: it is insensitive to the phase oscillation of the factors ${f, g}$.
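These bounds can be observed concretely in a small computation. The sketch below (using the real character mod ${5}$ as a convenient stand-in for a general Dirichlet character; the cutoff ${x = 10^4}$ is an arbitrary choice) computes the correlation (2) together with the trivial bound and the Cauchy-Schwarz bound, confirming that the latter is the weakest of the three:

```python
import math

# A small computation of (2) for the real character mod 5, compared with
# the trivial bound and the (wasteful) Cauchy-Schwarz bound.
x = 10**4
spf = list(range(x + 1))              # smallest-prime-factor sieve
for p in range(2, int(x**0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, x + 1, p):
            if spf[m] == m:
                spf[m] = p

def vonmangoldt(n):
    # Lambda(n) = log p if n = p^k for some prime p and k >= 1, else 0.
    if n < 2:
        return 0.0
    p = spf[n]
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

def chi5(n):
    # The real (Legendre symbol) Dirichlet character mod 5.
    return (0, 1, -1, -1, 1)[n % 5]

L = [vonmangoldt(n) for n in range(1, x + 1)]
C = [chi5(n) for n in range(1, x + 1)]
corr = abs(sum(l * c for l, c in zip(L, C)))
trivial = sum(l * abs(c) for l, c in zip(L, C))
cauchy = math.sqrt(sum(l * l for l in L)) * math.sqrt(sum(c * c for c in C))
print(corr < trivial < cauchy)  # True: Cauchy-Schwarz is the weakest bound
```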

While the Cauchy-Schwarz inequality can be poor at estimating a single correlation such as (1), its power improves when considering an average (or sum, or square sum) of multiple correlations. In this set of notes, we will focus on one such situation of this type, namely that of trying to estimate a square sum

$\displaystyle (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \ \ \ \ \ (5)$

that measures the correlations of a single function ${f: {\bf N} \rightarrow {\bf C}}$ with multiple other functions ${g_j: {\bf N} \rightarrow {\bf C}}$. One should think of the situation in which ${f}$ is a “complicated” function, such as the von Mangoldt function ${\Lambda}$, but the ${g_j}$ are relatively “simple” functions, such as Dirichlet characters. In the case when the ${g_j}$ are orthonormal functions, we of course have the classical Bessel inequality:

Lemma 1 (Bessel inequality) Let ${g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}}$ be finitely supported functions obeying the orthonormality relationship

$\displaystyle \sum_n g_j(n) \overline{g_{j'}(n)} = 1_{j=j'}$

for all ${1 \leq j,j' \leq J}$. Then for any function ${f: {\bf N} \rightarrow {\bf C}}$, we have

$\displaystyle (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \leq (\sum_n |f(n)|^2)^{1/2}.$

For sake of comparison, if one were to apply the Cauchy-Schwarz inequality (4) separately to each summand in (5), one would obtain the bound of ${J^{1/2} (\sum_n |f(n)|^2)^{1/2}}$, which is significantly inferior to the Bessel bound when ${J}$ is large. Geometrically, what is going on is this: the Cauchy-Schwarz inequality (4) is only close to sharp when ${f}$ and ${g}$ are close to parallel in the Hilbert space ${\ell^2({\bf N})}$. But if ${g_1,\dots,g_J}$ are orthonormal, then it is not possible for any other vector ${f}$ to be simultaneously close to parallel to too many of these orthonormal vectors, and so the inner products of ${f}$ with most of the ${g_j}$ should be small. (See this previous blog post for more discussion of this principle.) One can view the Bessel inequality as formalising a repulsion principle: if ${f}$ correlates too much with some of the ${g_j}$, then it does not have enough “energy” to have large correlation with the rest of the ${g_j}$.
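The gap between the Bessel bound and termwise Cauchy-Schwarz is easy to see in a finite-dimensional model. The sketch below (dimensions and random data are arbitrary choices) builds random orthonormal ${g_1,\dots,g_J}$ in ${{\bf C}^N}$ via a QR factorisation and compares the two bounds on (5):

```python
import numpy as np

# Compare the Bessel bound with termwise Cauchy-Schwarz in the finite
# model C^N: random orthonormal g_1,...,g_J via QR, random f.
rng = np.random.default_rng(0)
N, J = 200, 50
A = rng.standard_normal((N, J)) + 1j * rng.standard_normal((N, J))
g, _ = np.linalg.qr(A)                 # columns of g are orthonormal
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

corr = g.conj().T @ f                  # the J correlations sum_n f(n) conj(g_j(n))
lhs = np.linalg.norm(corr)             # the square-sum (5)
bessel = np.linalg.norm(f)             # Bessel bound
termwise = np.sqrt(J) * np.linalg.norm(f)   # bound from termwise Cauchy-Schwarz
print(lhs <= bessel <= termwise)       # True
```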

In analytic number theory applications, it is useful to generalise the Bessel inequality to the situation in which the ${g_j}$ are not necessarily orthonormal. This can be accomplished via the Cauchy-Schwarz inequality:

Proposition 2 (Generalised Bessel inequality) Let ${g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}}$ be finitely supported functions, and let ${\nu: {\bf N} \rightarrow {\bf R}^+}$ be a non-negative function. Let ${f: {\bf N} \rightarrow {\bf C}}$ be a function that vanishes whenever ${\nu}$ vanishes. Then we have

$\displaystyle (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2} \ \ \ \ \ (6)$

$\displaystyle \times ( \sum_{j=1}^J \sum_{j'=1}^J c_j \overline{c_{j'}} \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)} )^{1/2}$

for some sequence ${c_1,\dots,c_J}$ of complex numbers with ${\sum_{j=1}^J |c_j|^2 = 1}$, with the convention that ${|f(n)|^2/\nu(n)}$ vanishes whenever ${f(n), \nu(n)}$ both vanish.

Note by relabeling that we may replace the domain ${{\bf N}}$ here by any other at most countable set, such as the integers ${{\bf Z}}$. (Indeed, one can give an analogue of this lemma on arbitrary measure spaces, but we will not do so here.) This result first appears in this paper of Boas.

Proof: We use the method of duality to replace the role of the function ${f}$ by a dual sequence ${c_1,\dots,c_J}$. By the converse to Cauchy-Schwarz, we may write the left-hand side of (6) as

$\displaystyle \sum_{j=1}^J \overline{c_j} \sum_{n} f(n) \overline{g_j(n)}$

for some complex numbers ${c_1,\dots,c_J}$ with ${\sum_{j=1}^J |c_j|^2 = 1}$. Indeed, if all of the ${\sum_{n} f(n) \overline{g_j(n)}}$ vanish, we can set the ${c_j}$ arbitrarily, otherwise we set ${(c_1,\dots,c_J)}$ to be the unit vector formed by dividing ${(\sum_{n} f(n) \overline{g_j(n)})_{j=1}^J}$ by its length. We can then rearrange this expression as

$\displaystyle \sum_n f(n) \overline{\sum_{j=1}^J c_j g_j(n)}.$

Applying Cauchy-Schwarz (dividing the first factor by ${\nu(n)^{1/2}}$ and multiplying the second by ${\nu(n)^{1/2}}$, after first removing those ${n}$ for which ${\nu(n)}$ vanishes), this is bounded by

$\displaystyle (\sum_n |f(n)|^2 / \nu(n))^{1/2} (\sum_n \nu(n) |\sum_{j=1}^J c_j g_j(n)|^2)^{1/2},$

and the claim follows by expanding out the second factor. $\Box$
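As a sanity check on the proof, one can verify (6) numerically with the dual coefficients ${c_j}$ constructed above (the unit vector proportional to the correlations). A minimal sketch with random data and a random strictly positive weight ${\nu}$:

```python
import numpy as np

# Sanity check of the generalised Bessel inequality (6), with c taken to
# be the unit vector proportional to the correlations, as in the proof.
rng = np.random.default_rng(1)
N, J = 100, 20
g = rng.standard_normal((N, J)) + 1j * rng.standard_normal((N, J))
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
nu = rng.uniform(0.5, 2.0, N)          # a strictly positive weight

corr = g.conj().T @ f                  # sum_n f(n) conj(g_j(n)) for each j
lhs = np.linalg.norm(corr)
c = corr / lhs                         # the dual coefficients from the proof
first = np.sqrt(np.sum(np.abs(f) ** 2 / nu))
second = np.sqrt(np.sum(nu * np.abs(g @ c) ** 2))
print(lhs <= first * second + 1e-9)    # True
```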

Observe that Lemma 1 is a special case of Proposition 2 when ${\nu=1}$ and the ${g_j}$ are orthonormal. In general, one can expect Proposition 2 to be useful when the ${g_j}$ are almost orthogonal relative to ${\nu}$, in that the correlations ${\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}}$ tend to be small when ${j,j'}$ are distinct. In that case, one can hope for the diagonal term ${j=j'}$ in the right-hand side of (6) to dominate, in which case one can obtain estimates of comparable strength to the classical Bessel inequality. The flexibility to choose different weights ${\nu}$ in the above proposition has some technical advantages; for instance, if ${f}$ is concentrated in a sparse set (such as the primes), it is sometimes useful to tailor ${\nu}$ to a comparable set (e.g. the almost primes) in order not to lose too much in the first factor ${\sum_n |f(n)|^2 / \nu(n)}$. Also, it can be useful to choose a fairly “smooth” weight ${\nu}$, in order to make the weighted correlations ${\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}}$ small.

Remark 3 In harmonic analysis, the use of tools such as Proposition 2 is known as the method of almost orthogonality, or the ${TT^*}$ method. The explanation for the latter name is as follows. For sake of exposition, suppose that ${\nu}$ is never zero (or we remove all ${n}$ from the domain for which ${\nu(n)}$ vanishes). Given a family of finitely supported functions ${g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}}$, consider the linear operator ${T: \ell^2(\nu^{-1}) \rightarrow \ell^2(\{1,\dots,J\})}$ defined by the formula

$\displaystyle T f := ( \sum_{n} f(n) \overline{g_j(n)} )_{j=1}^J.$

This is a bounded linear operator, and the left-hand side of (6) is nothing other than the ${\ell^2(\{1,\dots,J\})}$ norm of ${Tf}$. Without any further information on the function ${f}$ other than its ${\ell^2(\nu^{-1})}$ norm ${(\sum_n |f(n)|^2 / \nu(n))^{1/2}}$, the best estimate one can obtain on this left-hand side is clearly

$\displaystyle (\sum_n |f(n)|^2 / \nu(n))^{1/2} \times \|T\|_{op},$

where ${\|T\|_{op}}$ denotes the operator norm of ${T}$.

The adjoint ${T^*: \ell^2(\{1,\dots,J\}) \rightarrow \ell^2(\nu^{-1})}$ is easily computed to be

$\displaystyle T^* (c_j)_{j=1}^J := (\sum_{j=1}^J c_j \nu(n) g_j(n) )_{n \in {\bf N}}.$

The composition ${TT^*: \ell^2(\{1,\dots,J\}) \rightarrow \ell^2(\{1,\dots,J\})}$ of ${T}$ and its adjoint is then given by

$\displaystyle TT^* (c_j)_{j=1}^J := (\sum_{j'=1}^J c_{j'} \sum_n \nu(n) g_{j'}(n) \overline{g_j(n)} )_{j=1}^J.$

From the spectral theorem (or singular value decomposition), one sees that the operator norms of ${T}$ and ${TT^*}$ are related by the identity

$\displaystyle \|T\|_{op} = \|TT^*\|_{op}^{1/2},$

and as ${TT^*}$ is a self-adjoint, positive semi-definite operator, the operator norm ${\|TT^*\|_{op}}$ is also the supremum of the quantity

$\displaystyle \langle TT^* (c_j)_{j=1}^J, (c_j)_{j=1}^J \rangle_{\ell^2(\{1,\dots,J\})} = \sum_{j=1}^J \sum_{j'=1}^J c_j \overline{c_{j'}} \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}$

where ${(c_j)_{j=1}^J}$ ranges over unit vectors in ${\ell^2(\{1,\dots,J\})}$. Putting these facts together, we obtain Proposition 2; furthermore, we see from this analysis that the bound here is essentially optimal if the only information one is allowed to use about ${f}$ is its ${\ell^2(\nu^{-1})}$ norm.
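In the special case ${\nu = 1}$, the operator ${T}$ is just the ${J \times N}$ matrix whose rows are the conjugates of the ${g_j}$, and the identity ${\|T\|_{op} = \|TT^*\|_{op}^{1/2}}$ can be checked directly by a singular value computation (a numerical illustration only, not needed for the rigorous argument):

```python
import numpy as np

# Numerical check of ||T||_op = ||T T^*||_op^{1/2} for the operator T of
# Remark 3 in the special case nu = 1, realised as the J x N matrix whose
# rows are the conjugates of the (randomly chosen) g_j.
rng = np.random.default_rng(2)
N, J = 60, 15
g = rng.standard_normal((N, J)) + 1j * rng.standard_normal((N, J))
T = g.conj().T                    # T f = (sum_n f(n) conj(g_j(n)))_j
TTstar = T @ T.conj().T           # J x J, self-adjoint and positive semi-definite
op_T = np.linalg.norm(T, 2)       # operator norm = largest singular value
op_TTstar = np.linalg.norm(TTstar, 2)
print(abs(op_T - np.sqrt(op_TTstar)) < 1e-9)  # True
```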

For further discussion of almost orthogonality methods from a harmonic analysis perspective, see Chapter VII of this text of Stein.

Exercise 4 Under the same hypotheses as Proposition 2, show that

$\displaystyle \sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}| \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2}$

$\displaystyle \times ( \sum_{j=1}^J \sum_{j'=1}^J |\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}| )^{1/2}$

as well as the variant inequality

$\displaystyle |\sum_{j=1}^J \sum_{n} f(n) \overline{g_j(n)}| \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2}$

$\displaystyle \times | \sum_{j=1}^J \sum_{j'=1}^J \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}|^{1/2}.$

Proposition 2 has many applications in analytic number theory; for instance, we will use it in later notes to control the large values of Dirichlet series such as the Riemann zeta function. One of the key benefits is that it largely eliminates the need to consider further correlations of the function ${f}$ (other than its self-correlation ${\sum_n |f(n)|^2 / \nu(n)}$ relative to ${\nu^{-1}}$, which is usually fairly easy to compute or estimate as ${\nu}$ is usually chosen to be relatively simple); this is particularly useful if ${f}$ is a function which is significantly more complicated to analyse than the functions ${g_j}$. Of course, the tradeoff for this is that one now has to deal with the coefficients ${c_j}$, which if anything are even less understood than ${f}$, since literally the only thing we know about these coefficients is their square sum ${\sum_{j=1}^J |c_j|^2}$. However, as long as there is enough almost orthogonality between the ${g_j}$, one can estimate the ${c_j}$ by fairly crude estimates (e.g. triangle inequality or Cauchy-Schwarz) and still get reasonably good estimates.

In this set of notes, we will use Proposition 2 to prove some versions of the large sieve inequality, which controls a square-sum of correlations

$\displaystyle \sum_n f(n) e( -\xi_j n )$

of an arbitrary finitely supported function ${f: {\bf Z} \rightarrow {\bf C}}$ with various additive characters ${n \mapsto e( \xi_j n)}$ (where ${e(x) := e^{2\pi i x}}$), or alternatively a square-sum of correlations

$\displaystyle \sum_n f(n) \overline{\chi_j(n)}$

of ${f}$ with various primitive Dirichlet characters ${\chi_j}$; it turns out that one can prove a (slightly sub-optimal) version of this inequality quite quickly from Proposition 2 if one first prepares the sum by inserting a smooth cutoff with well-behaved Fourier transform. The large sieve inequality has many applications (as the name suggests, it has particular utility within sieve theory). For the purposes of this set of notes, though, the main application we will need it for is the Bombieri-Vinogradov theorem, which in a very rough sense gives a prime number theorem in arithmetic progressions, which, “on average”, is of strength comparable to the results provided by the Generalised Riemann Hypothesis (GRH), but has the great advantage of being unconditional (it does not require any unproven hypotheses such as GRH); it can be viewed as a significant extension of the Siegel-Walfisz theorem from Notes 2. As we shall see in later notes, the Bombieri-Vinogradov theorem is a very useful ingredient in sieve-theoretic problems involving the primes.

There is however one additional important trick, beyond the large sieve, which we will need in order to establish the Bombieri-Vinogradov theorem. As it turns out, after some basic manipulations (and the deployment of some multiplicative number theory, and specifically the Siegel-Walfisz theorem), the task of proving the Bombieri-Vinogradov theorem is reduced to that of getting a good estimate on sums that are roughly of the form

$\displaystyle \sum_{j=1}^J |\sum_n \Lambda(n) \overline{\chi_j}(n)| \ \ \ \ \ (7)$

for some primitive Dirichlet characters ${\chi_j}$. This looks like the type of sum that can be controlled by the large sieve (or by Proposition 2), except that this is an ordinary sum rather than a square sum (i.e., an ${\ell^1}$ norm rather than an ${\ell^2}$ norm). One could of course try to control such a sum in terms of the associated square-sum through the Cauchy-Schwarz inequality, but this turns out to be very wasteful (it loses a factor of about ${J^{1/2}}$). Instead, one should try to exploit the special structure of the von Mangoldt function ${\Lambda}$, in particular the fact that it can be expressed as a Dirichlet convolution ${\alpha * \beta}$ of two further arithmetic sequences ${\alpha,\beta}$ (or as a finite linear combination of such Dirichlet convolutions). The reason for introducing this convolution structure is through the basic identity

$\displaystyle (\sum_n \alpha*\beta(n) \overline{\chi_j}(n)) = (\sum_n \alpha(n) \overline{\chi_j}(n)) (\sum_n \beta(n) \overline{\chi_j}(n)) \ \ \ \ \ (8)$

for any finitely supported sequences ${\alpha,\beta: {\bf N} \rightarrow {\bf C}}$, as can be easily seen by multiplying everything out and using the completely multiplicative nature of ${\chi_j}$. (This is the multiplicative analogue of the well-known relationship ${\widehat{f*g}(\xi) = \hat f(\xi) \hat g(\xi)}$ between ordinary convolution and Fourier coefficients.) This factorisation, together with yet another application of the Cauchy-Schwarz inequality, lets one control (7) by square-sums of the sort that can be handled by the large sieve inequality.
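The identity (8) is easy to confirm by direct computation. The sketch below (an illustration with arbitrary random data) convolves two finitely supported sequences and tests the factorisation against the real character mod ${5}$, which is completely multiplicative and real-valued (so the conjugation may be omitted):

```python
import random

def chi5(n):
    # The real Dirichlet character mod 5; completely multiplicative and
    # real-valued, so conjugation can be omitted.
    return (0, 1, -1, -1, 1)[n % 5]

random.seed(0)
M = 30
alpha = {n: random.uniform(-1, 1) for n in range(1, M + 1)}
beta = {n: random.uniform(-1, 1) for n in range(1, M + 1)}

# Dirichlet convolution of the two finitely supported sequences:
# alpha*beta(n) = sum over factorisations n = a b of alpha(a) beta(b).
conv = {}
for a, va in alpha.items():
    for b, vb in beta.items():
        conv[a * b] = conv.get(a * b, 0.0) + va * vb

lhs = sum(v * chi5(n) for n, v in conv.items())
rhs = sum(v * chi5(n) for n, v in alpha.items()) \
    * sum(v * chi5(n) for n, v in beta.items())
print(abs(lhs - rhs) < 1e-9)  # True
```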

As we have seen in Notes 1, the von Mangoldt function ${\Lambda}$ does indeed admit several factorisations of Dirichlet convolution type, such as the factorisation ${\Lambda = \mu * L}$. One can try directly inserting this factorisation into the above strategy; it almost works, but there turns out to be a problem when considering the contribution of the portion of ${\mu}$ or ${L}$ that is supported on very small natural numbers, as the large sieve loses any gain over the trivial bound in such settings. Because of this, there is a need for a more sophisticated decomposition of ${\Lambda}$ into Dirichlet convolutions ${\alpha * \beta}$ which are non-degenerate in the sense that ${\alpha,\beta}$ are supported away from small values. (As a non-example, the trivial factorisation ${\Lambda = \Lambda * \delta}$ would be a totally inappropriate factorisation for this purpose.) Fortunately, it turns out that through some elementary combinatorial manipulations, some satisfactory decompositions of this type are available, such as the Vaughan identity and the Heath-Brown identity. By using one of these identities we will be able to complete the proof of the Bombieri-Vinogradov theorem. (These identities are also useful for other applications in which one wishes to control correlations between the von Mangoldt function ${\Lambda}$ and some other sequence; we will see some examples of this in later notes.)

For further reading on these topics, including a significantly larger number of examples of the large sieve inequality, see Chapters 7 and 17 of Iwaniec and Kowalski.

Remark 5 We caution that the presentation given in this set of notes is highly ahistorical; we are using modern streamlined proofs of results that were first obtained by more complicated arguments.

We now move away from the world of multiplicative prime number theory covered in Notes 1 and Notes 2, and enter the wider, and complementary, world of non-multiplicative prime number theory, in which one studies statistics related to non-multiplicative patterns, such as twins ${n,n+2}$. This creates a major jump in difficulty; for instance, even the most basic multiplicative result about the primes, namely Euclid’s theorem that there are infinitely many of them, remains unproven for twin primes. Of course, the situation is even worse for stronger results, such as Euler’s theorem, Dirichlet’s theorem, or the prime number theorem. Finally, even many multiplicative questions about the primes remain open. The most famous of these is the Riemann hypothesis, which gives the asymptotic ${\sum_{n \leq x} \Lambda(n) = x + O( \sqrt{x} \log^2 x )}$ (see Proposition 24 from Notes 2). But even if one assumes the Riemann hypothesis, the precise distribution of the error term ${O( \sqrt{x} \log^2 x )}$ in the above asymptotic (or in related asymptotics, such as for the sum ${\sum_{x \leq n < x+y} \Lambda(n)}$ that measures the distribution of primes in short intervals) is not entirely clear.

Despite this, we do have a number of extremely convincing and well supported models for the primes (and related objects) that let us predict what the answer to many prime number theory questions (both multiplicative and non-multiplicative) should be, particularly in asymptotic regimes where one can work with aggregate statistics about the primes, rather than with a small number of individual primes. These models are based on taking some statistical distribution related to the primes (e.g. the primality properties of a randomly selected ${k}$-tuple), and replacing that distribution by a model distribution that is easy to compute with (e.g. a distribution with strong joint independence properties). One can then predict the asymptotic value of various (normalised) statistics about the primes by replacing the relevant statistical distributions of the primes with their simplified models. In this non-rigorous setting, many difficult conjectures on the primes reduce to relatively simple calculations; for instance, all four of the (still unsolved) Landau problems may now be justified in the affirmative by one or more of these models. Indeed, the models are so effective at this task that analytic number theory is in the curious position of being able to confidently predict the answer to a large proportion of the open problems in the subject, whilst not possessing a clear way forward to rigorously confirm these answers!

As it turns out, the models for primes that have turned out to be the most accurate in practice are random models, which involve (either explicitly or implicitly) one or more random variables. This is despite the prime numbers being obviously deterministic in nature; no coins are flipped or dice rolled to create the set of primes. The point is that while the primes have a lot of obvious multiplicative structure (for instance, the product of two primes is never another prime), they do not appear to exhibit much discernible non-multiplicative structure asymptotically, in the sense that they rarely exhibit statistical anomalies in the asymptotic limit that cannot be easily explained in terms of the multiplicative properties of the primes. As such, when considering non-multiplicative statistics of the primes, the primes appear to behave pseudorandomly, and can thus be modeled with reasonable accuracy by a random model. And even for multiplicative problems, which are in principle controlled by the zeroes of the Riemann zeta function, one can obtain good predictions by positing various pseudorandomness properties of these zeroes, so that the distribution of these zeroes can be modeled by a random model.

Of course, one cannot expect perfect accuracy when replicating a deterministic set such as the primes by a probabilistic model of that set, and each of the heuristic models we discuss below has some limitations on the range of statistics about the primes that it can be expected to track with reasonable accuracy. For instance, many of the models about the primes do not fully take into account the multiplicative structure of primes, such as the connection with a zeta function with a meromorphic continuation to the entire complex plane; at the opposite extreme, we have the GUE hypothesis which appears to accurately model the zeta function, but does not capture such basic properties of the primes as the fact that the primes are all natural numbers. Nevertheless, each of the models described below, when deployed within its sphere of reasonable application, has (possibly after some fine-tuning) given predictions that are in remarkable agreement with numerical computation and with known rigorous theoretical results, as well as with other models in overlapping spheres of application; they are also broadly compatible with the general heuristic (discussed in this previous post) that in the absence of any exploitable structure, asymptotic statistics should default to the most “uniform”, “pseudorandom”, or “independent” distribution allowable.

As hinted at above, we do not have a single unified model for the prime numbers (other than the primes themselves, of course), but instead have an overlapping family of useful models that each appear to accurately describe some, but not all, aspects of the prime numbers. In this set of notes, we will discuss four such models:

1. The Cramér random model and its refinements, which model the set ${{\mathcal P}}$ of prime numbers by a random set.
2. The Möbius pseudorandomness principle, which predicts that the Möbius function ${\mu}$ does not correlate with any genuinely different arithmetic sequence of reasonable “complexity”.
3. The equidistribution of residues principle, which predicts that the residue classes of a large number ${n}$ modulo a small or medium-sized prime ${p}$ behave as if they are independently and uniformly distributed as ${p}$ varies.
4. The GUE hypothesis, which asserts that the zeroes of the Riemann zeta function are distributed (at microscopic and mesoscopic scales) like the zeroes of a GUE random matrix, and which generalises the pair correlation conjecture regarding pairs of such zeroes.

This is not an exhaustive list of models for the primes and related objects; for instance, there is also the model in which the major arc contribution in the Hardy-Littlewood circle method is predicted to always dominate, and with regard to various finite groups of number-theoretic importance, such as the class groups discussed in Supplement 1, there are also heuristics of Cohen-Lenstra type. Historically, the first heuristic discussion of the primes along these lines was by Sylvester, who worked informally with a model somewhat related to the equidistribution of residues principle. However, we will not discuss any of these models here.

A word of warning: the discussion of the above four models will inevitably be largely informal, and “fuzzy” in nature. While one can certainly make precise formalisations of at least some aspects of these models, one should not be inflexibly wedded to a specific such formalisation as being “the” correct way to pin down the model rigorously. (To quote the statistician George Box: “all models are wrong, but some are useful”.) Indeed, we will see some examples below the fold in which some finer structure in the prime numbers leads to a correction term being added to a “naive” implementation of one of the above models to make it more accurate, and it is perfectly conceivable that some further such fine-tuning will be applied to one or more of these models in the future. These sorts of mathematical models are in some ways closer in nature to the scientific theories used to model the physical world, than they are to the axiomatic theories one is accustomed to in rigorous mathematics, and one should approach the discussion below accordingly. In particular, and in contrast to the other notes in this course, the material here is not directly used for proving further theorems, which is why we have marked it as “optional” material. Nevertheless, the heuristics and models here are still used indirectly for such purposes, for instance by

• giving a clearer indication of what results one expects to be true, thus guiding one to fruitful conjectures;
• providing a quick way to scan for possible errors in a mathematical claim (e.g. by finding that the main term is off from what a model predicts, or an error term is too small);
• gauging the relative strength of various assertions (e.g. classifying some results as “unsurprising”, others as “potential breakthroughs” or “powerful new estimates”, others as “unexpected new phenomena”, and yet others as “way too good to be true”); or
• setting up heuristic barriers (such as the parity barrier) that one has to resolve before resolving certain key problems (e.g. the twin prime conjecture).

See also my previous essay on the distinction between “rigorous” and “post-rigorous” mathematics, or Thurston’s essay discussing, among other things, the “definition-theorem-proof” model of mathematics and its limitations.

Remark 1 The material in this set of notes presumes some prior exposure to probability theory. See for instance this previous post for a quick review of the relevant concepts.

In 1946, Ulam, in response to a theorem of Anning and Erdös, posed the following problem:

Problem 1 (Erdös-Ulam problem) Let ${S \subset {\bf R}^2}$ be a set such that the distance between any two points in ${S}$ is rational. Is it true that ${S}$ cannot be (topologically) dense in ${{\bf R}^2}$?

The paper of Anning and Erdös settled, in the affirmative, the case in which all the distances between points in ${S}$ are integers rather than rationals.

The Erdös-Ulam problem remains open; it was discussed recently over at Gödel’s lost letter. It is in fact likely (as we shall see below) that the set ${S}$ in the above problem is not only forbidden to be topologically dense, but also cannot be Zariski dense either. If so, then the structure of ${S}$ is quite restricted; it was shown by Solymosi and de Zeeuw that if ${S}$ fails to be Zariski dense, then all but finitely many of the points of ${S}$ must lie on a single line, or a single circle. (Conversely, it is easy to construct examples of dense subsets of a line or circle in which all distances are rational, though in the latter case the square of the radius of the circle must also be rational.)

The main tool of the Solymosi-de Zeeuw analysis was Faltings’ celebrated theorem that every algebraic curve of genus at least two contains only finitely many rational points. The purpose of this post is to observe that an affirmative answer to the full Erdös-Ulam problem similarly follows from the conjectured analogue of Faltings’ theorem for surfaces, namely the following conjecture of Bombieri and Lang:

Conjecture 2 (Bombieri-Lang conjecture) Let ${X}$ be a smooth projective irreducible algebraic surface defined over the rationals ${{\bf Q}}$ which is of general type. Then the set ${X({\bf Q})}$ of rational points of ${X}$ is not Zariski dense in ${X}$.

In fact, the Bombieri-Lang conjecture has been made for varieties of arbitrary dimension, and for more general number fields than the rationals, but the above special case of the conjecture is the only one needed for this application. We will review what “general type” means (for smooth projective complex varieties, at least) below the fold.

The Bombieri-Lang conjecture is considered to be extremely difficult, in particular being substantially harder than Faltings’ theorem, which is itself a highly non-trivial result. So this implication should not be viewed as a practical route to resolving the Erdös-Ulam problem unconditionally; rather, it is a demonstration of the power of the Bombieri-Lang conjecture. Still, it was an instructive algebraic geometry exercise for me to carry out the details of this implication, which quickly boils down to verifying that a certain quite explicit algebraic surface is of general type (Theorem 4 below). As I am not an expert in the subject, my computations here will be rather tedious and pedestrian; it is likely that they could be made much slicker by exploiting more of the machinery of modern algebraic geometry, and I would welcome any such streamlining by actual experts in this area. (For similar reasons, there may be more typos and errors than usual in this post; corrections are welcome as always.) My calculations here are based on a similar calculation of van Luijk, who used analogous arguments to show (assuming Bombieri-Lang) that the set of perfect cuboids is not Zariski-dense in its projective parameter space.

We also remark that in a recent paper of Makhul and Shaffaf, the Bombieri-Lang conjecture (or more precisely, a weaker consequence of that conjecture) was used to show that if ${S}$ is a subset of ${{\bf R}^2}$ with rational distances which intersects any line in only finitely many points, then there is a uniform bound on the cardinality of the intersection of ${S}$ with any line. I have also recently learned (private communication) that an unpublished work of Shaffaf has obtained a result similar to the one in this post, namely that the Erdös-Ulam conjecture follows from the Bombieri-Lang conjecture, plus an additional conjecture about the rational curves in a specific surface.

Let us now give the elementary reductions to the claim that a certain variety is of general type. For sake of contradiction, let ${S}$ be a dense set such that the distance between any two points is rational. Then ${S}$ certainly contains two points that are a rational distance apart. By applying a translation, rotation, and a (rational) dilation, we may assume that these two points are ${(0,0)}$ and ${(1,0)}$. As ${S}$ is dense, there is a third point of ${S}$ not on the ${x}$ axis, which after a reflection we can place in the upper half-plane; we will write it as ${(a,\sqrt{b})}$ with ${b>0}$.

Given any two points ${P, Q}$ in ${S}$, the quantities ${|P|^2, |Q|^2, |P-Q|^2}$ are rational, and so by the cosine rule the dot product ${P \cdot Q}$ is rational as well. Since ${(1,0) \in S}$, this implies that the ${x}$-component of every point ${P}$ in ${S}$ is rational; this in turn implies that the product of the ${y}$-coordinates of any two points ${P,Q}$ in ${S}$ is rational as well (since this differs from ${P \cdot Q}$ by a rational number). In particular, ${a}$ and ${b}$ are rational, and all of the points in ${S}$ now lie in the lattice ${\{ ( x, y\sqrt{b}): x, y \in {\bf Q} \}}$. (This fact appears to have first been observed in the 1988 Habilitationsschrift of Kemnitz.)
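The rationality of the dot product is easy to check numerically. Here is a minimal Python sketch of the cosine-rule computation; the sample points are my own choice (not from the post), picked to have rational mutual distances:

```python
from fractions import Fraction

def dot_product(P_sq, Q_sq, PQ_sq):
    """Recover P.Q from squared lengths via the cosine rule:
    |P - Q|^2 = |P|^2 + |Q|^2 - 2 P.Q."""
    return (P_sq + Q_sq - PQ_sq) / 2

# Sample points (my own choice): P = (3/5, 4/5) and Q = (1, 0), which are at
# rational distance from the origin and from each other.
P = (Fraction(3, 5), Fraction(4, 5))
Q = (Fraction(1), Fraction(0))

P_sq = P[0]**2 + P[1]**2
Q_sq = Q[0]**2 + Q[1]**2
PQ_sq = (P[0] - Q[0])**2 + (P[1] - Q[1])**2

d = dot_product(P_sq, Q_sq, PQ_sq)
assert d == P[0]*Q[0] + P[1]*Q[1]   # agrees with the direct dot product
assert isinstance(d, Fraction)      # and is manifestly rational
```

Since only the three squared lengths enter the formula, rationality of the distances alone forces rationality of ${P \cdot Q}$, which is the point of the argument above.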

Now take four points ${(x_j,y_j \sqrt{b})}$, ${j=1,\dots,4}$ in ${S}$ in general position (so that the octuplet ${(x_1,y_1\sqrt{b},\dots,x_4,y_4\sqrt{b})}$ avoids any pre-specified hypersurface in ${{\bf C}^8}$); this can be done if ${S}$ is dense. (If one wished, one could re-use the three previous points ${(0,0), (1,0), (a,\sqrt{b})}$ to be three of these four points, although this ultimately makes little difference to the analysis.) If ${(x,y\sqrt{b})}$ is any point in ${S}$, then the distances ${r_j}$ from ${(x,y\sqrt{b})}$ to ${(x_j,y_j\sqrt{b})}$ are rationals that obey the equations

$\displaystyle (x - x_j)^2 + b (y-y_j)^2 = r_j^2$

for ${j=1,\dots,4}$, and thus determine a rational point in the affine complex variety ${V = V_{b,x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4} \subset {\bf C}^6}$ defined as

$\displaystyle V := \{ (x,y,r_1,r_2,r_3,r_4) \in {\bf C}^6:$

$\displaystyle (x - x_j)^2 + b (y-y_j)^2 = r_j^2 \hbox{ for } j=1,\dots,4 \}.$

By inspecting the projection ${(x,y,r_1,r_2,r_3,r_4) \rightarrow (x,y)}$ from ${V}$ to ${{\bf C}^2}$, we see that ${V}$ is a branched cover of ${{\bf C}^2}$, with the generic fibre having ${2^4=16}$ points (coming from the different ways to form the square roots ${r_1,r_2,r_3,r_4}$); in particular, ${V}$ is a complex affine algebraic surface, defined over the rationals. By inspecting the monodromy around the four singular base points ${(x,y) = (x_i,y_i)}$ (which switch the sign of one of the roots ${r_i}$, while keeping the other three roots unchanged), we see that the variety ${V}$ is connected away from its singular set, and thus irreducible. As ${S}$ is topologically dense in ${{\bf R}^2}$, it is Zariski-dense in ${{\bf C}^2}$, and so ${S}$ generates a Zariski-dense set of rational points in ${V}$. To solve the Erdös-Ulam problem, it thus suffices to show that
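The ${16}$-fold branched cover can be seen very concretely: over a generic base point ${(x,y)}$, each ${r_j}$ is determined up to sign. A small numerical sketch (the values of ${b}$ and the base points are arbitrary sample data of mine, not tied to any actual set ${S}$):

```python
import itertools
import math

# Arbitrary sample data: b and four base points (x_j, y_j) in general position.
b = 2.0
base = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def fibre(x, y):
    """Points of V lying above a generic (x, y): one sign choice per root r_j."""
    radii = [math.sqrt((x - xj)**2 + b * (y - yj)**2) for xj, yj in base]
    return [tuple(s * r for s, r in zip(signs, radii))
            for signs in itertools.product((1.0, -1.0), repeat=4)]

x, y = 0.3, 0.7
pts = fibre(x, y)
assert len(pts) == 16   # the generic fibre of the 16-fold branched cover
for rs in pts:          # each point satisfies the four defining equations
    for (xj, yj), rj in zip(base, rs):
        assert abs((x - xj)**2 + b * (y - yj)**2 - rj**2) < 1e-12
```

Over the branch points ${(x,y)=(x_j,y_j)}$ the corresponding ${r_j}$ vanishes and the fibre degenerates, which is exactly the monodromy behaviour used above to see irreducibility.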

Claim 3 For any non-zero rational ${b}$ and for rationals ${x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4}$ in general position, the set of rational points of the affine surface ${V = V_{b,x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4}}$ is not Zariski dense in ${V}$.

This is already very close to a claim that can be directly resolved by the Bombieri-Lang conjecture, but ${V}$ is affine rather than projective, and also contains some singularities. The first issue is easy to deal with, by working with the projectivisation

$\displaystyle \overline{V} := \{ [X,Y,Z,R_1,R_2,R_3,R_4] \in {\bf CP}^6: Q(X,Y,Z,R_1,R_2,R_3,R_4) = 0 \} \ \ \ \ \ (1)$

of ${V}$, where ${Q: {\bf C}^7 \rightarrow {\bf C}^4}$ is the homogeneous quadratic map

$\displaystyle Q(X,Y,Z,R_1,R_2,R_3,R_4) := (Q_j(X,Y,Z,R_1,R_2,R_3,R_4) )_{j=1}^4$

with

$\displaystyle Q_j(X,Y,Z,R_1,R_2,R_3,R_4) := (X-x_j Z)^2 + b (Y-y_jZ)^2 - R_j^2$

and the projective complex space ${{\bf CP}^6}$ is the space of all equivalence classes ${[X,Y,Z,R_1,R_2,R_3,R_4]}$ of tuples ${(X,Y,Z,R_1,R_2,R_3,R_4) \in {\bf C}^7 \backslash \{0\}}$ up to projective equivalence ${(\lambda X, \lambda Y, \lambda Z, \lambda R_1, \lambda R_2, \lambda R_3, \lambda R_4) \sim (X,Y,Z,R_1,R_2,R_3,R_4)}$. By identifying the affine point ${(x,y,r_1,r_2,r_3,r_4)}$ with the projective point ${[x,y,1,r_1,r_2,r_3,r_4]}$, we see that ${\overline{V}}$ consists of the affine variety ${V}$ together with the set ${\{ [X,Y,0,R_1,R_2,R_3,R_4]: X^2+bY^2=R_1^2; R_j = \pm R_1 \hbox{ for } j=2,3,4\}}$, which is the union of eight curves, each of which lies in the closure of ${V}$. Thus ${\overline{V}}$ is the projective closure of ${V}$, and is thus a complex irreducible projective surface, defined over the rationals. As ${\overline{V}}$ is cut out by four quadric equations in ${{\bf CP}^6}$ and has degree sixteen (as can be seen for instance by inspecting the intersection of ${\overline{V}}$ with a generic perturbation of a fibre over the generically defined projection ${[X,Y,Z,R_1,R_2,R_3,R_4] \mapsto [X,Y,Z]}$), it is also a complete intersection. To show Claim 3, it then suffices to show that the rational points in ${\overline{V}}$ are not Zariski dense in ${\overline{V}}$.

Heuristically, the reason why we expect few rational points in ${\overline{V}}$ is as follows. First observe from the projective nature of (1) that every rational point is equivalent to an integer point. But for a septuple ${(X,Y,Z,R_1,R_2,R_3,R_4)}$ of integers of size ${O(N)}$, the quantity ${Q(X,Y,Z,R_1,R_2,R_3,R_4)}$ is an integer point of ${{\bf Z}^4}$ of size ${O(N^2)}$, and so should only vanish about ${O(N^{-8})}$ of the time. Hence the number of integer points ${(X,Y,Z,R_1,R_2,R_3,R_4) \in {\bf Z}^7}$ of height comparable to ${N}$ should be about

$\displaystyle O(N)^7 \times O(N^{-8}) = O(N^{-1});$

this is a convergent sum if ${N}$ ranges over (say) powers of two, and so from standard probabilistic heuristics (see this previous post) we in fact expect only finitely many solutions, in the absence of any special algebraic structure (e.g. the structure of an abelian variety, or a birational reduction to a simpler variety) that could produce an unusually large number of solutions.
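One can probe this heuristic experimentally by brute force: enumerate integer triples ${[X,Y,Z]}$ in a box and ask when all four quantities ${(X-x_jZ)^2+b(Y-y_jZ)^2}$ are perfect squares, so that the ${R_j}$ can be chosen to be integers. The sketch below uses sample data of my own (${b=2}$ and four small integer base points, only roughly in general position); consistent with the heuristic, the counts it prints are tiny compared with the ${\sim N^3}$ triples searched:

```python
import math
from itertools import product

# Sample data (my own choice): b = 2 and four small integer base points.
b = 2
base = [(0, 0), (1, 0), (0, 1), (2, 3)]

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

def count_points(N):
    """Count triples [X, Y, Z] with |X|, |Y| <= N and 1 <= Z <= N such that
    every (X - x_j Z)^2 + b (Y - y_j Z)^2 is a perfect square, i.e. integer
    points on the affine part of the surface."""
    count = 0
    for Z in range(1, N + 1):
        for X, Y in product(range(-N, N + 1), repeat=2):
            if all(is_square((X - xj * Z)**2 + b * (Y - yj * Z)**2)
                   for xj, yj in base):
                count += 1
    return count

for N in (2, 3, 4, 5):
    print(N, count_points(N))
```

The handful of solutions that do appear (and their projective multiples) are consistent with the prediction of finitely many points rather than none at all.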

The Bombieri-Lang conjecture, Conjecture 2, can be viewed as a formalisation of the above heuristics (roughly speaking, it is one of the most optimistic natural conjectures one could make that is compatible with these heuristics while also being invariant under birational equivalence).

Unfortunately, ${\overline{V}}$ contains some singular points. As ${\overline{V}}$ is a complete intersection, these occur precisely when the Jacobian matrix of the map ${Q: {\bf C}^7 \rightarrow {\bf C}^4}$ has less than full rank, or equivalently when the gradient vectors

$\displaystyle \nabla Q_j = (2(X-x_j Z), 2b(Y-y_j Z), -2x_j (X-x_j Z) - 2by_j (Y-y_j Z), \ \ \ \ \ (2)$

$\displaystyle 0, \dots, 0, -2R_j, 0, \dots, 0)$

for ${j=1,\dots,4}$ are linearly dependent, where the ${-2R_j}$ is in the coordinate position associated to ${R_j}$. One way in which this can occur is if one of the gradient vectors ${\nabla Q_j}$ vanishes identically. This occurs at precisely ${4 \times 2^3 = 32}$ points, when ${[X,Y,Z]}$ is equal to ${[x_j,y_j,1]}$ for some ${j=1,\dots,4}$, and one has ${R_k = \pm ( (x_j - x_k)^2 + b (y_j - y_k)^2 )^{1/2}}$ for all ${k=1,\dots,4}$ (so in particular ${R_j=0}$). Let us refer to these as the obvious singularities; they arise from the geometrically evident fact that the distance function ${(x,y\sqrt{b}) \mapsto \sqrt{(x-x_j)^2 + b(y-y_j)^2}}$ is singular at ${(x_j,y_j\sqrt{b})}$.
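Both the vanishing of a gradient at an obvious singularity and its non-vanishing at generic points can be checked directly from the explicit formula (2); a plain-Python sketch, again with arbitrary sample data of my own:

```python
from math import sqrt

# Arbitrary sample data (not from the post): b and four base points (x_j, y_j).
b = 2.0
base = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]

def grad_Q(j, X, Y, Z, R):
    """Gradient of Q_j = (X - x_j Z)^2 + b (Y - y_j Z)^2 - R_j^2 (j is 0-based)."""
    xj, yj = base[j]
    u, v = X - xj * Z, Y - yj * Z
    g = [2 * u, 2 * b * v, -2 * xj * u - 2 * b * yj * v] + [0.0] * 4
    g[3 + j] = -2 * R[j]   # the -2 R_j entry in the coordinate slot of R_j
    return g

# An "obvious" singular point: [x_1, y_1, 1, 0, R_2, R_3, R_4] with
# R_k = ((x_1 - x_k)^2 + b (y_1 - y_k)^2)^{1/2}.
x1, y1 = base[0]
R = [0.0] + [sqrt((x1 - xk)**2 + b * (y1 - yk)**2) for xk, yk in base[1:]]
assert all(abs(c) < 1e-12 for c in grad_Q(0, x1, y1, 1.0, R))  # nabla Q_1 = 0

# At a generic point of V, every gradient is non-zero.
x, y = 0.3, 0.7
Rgen = [sqrt((x - xk)**2 + b * (y - yk)**2) for xk, yk in base]
assert all(any(abs(c) > 1e-6 for c in grad_Q(j, x, y, 1.0, Rgen))
           for j in range(4))
```

Running over the ${4}$ choices of ${j}$ and the ${2^3}$ sign choices for the remaining ${R_k}$ recovers the count of ${32}$ obvious singularities.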

The other way in which this could occur is if a non-trivial linear combination of at least two of the gradient vectors vanishes. From (2), this can only occur if ${R_j=R_k=0}$ for some distinct ${j,k}$, which from (1) implies that

$\displaystyle (X - x_j Z) = \pm \sqrt{b} i (Y - y_j Z) \ \ \ \ \ (3)$

and

$\displaystyle (X - x_k Z) = \pm \sqrt{b} i (Y - y_k Z) \ \ \ \ \ (4)$

for two choices of sign ${\pm}$. If the signs are equal, then (as ${x_j, y_j, x_k, y_k}$ are in general position) this implies that ${Z=0}$, and then we have the singular point

$\displaystyle [X,Y,Z,R_1,R_2,R_3,R_4] = [\pm \sqrt{b} i, 1, 0, 0, 0, 0, 0]. \ \ \ \ \ (5)$

If the non-trivial linear combination involved three or more gradient vectors, then by the pigeonhole principle at least two of the signs involved must be equal, and so the only singular points are (5). So the only remaining possibility is when we have two gradient vectors ${\nabla Q_j, \nabla Q_k}$ that are parallel but non-zero, with the signs in (3), (4) opposing. But then (as ${x_j,y_j,x_k,y_k}$ are in general position) the vectors ${(X-x_j Z, Y-y_j Z), (X-x_k Z, Y-y_k Z)}$ are non-zero and non-parallel to each other, a contradiction. Thus, outside of the ${32}$ obvious singular points mentioned earlier, the only other singular points are the two points (5).

We will shortly show that the ${32}$ obvious singularities are ordinary double points; the surface ${\overline{V}}$ near any of these points is analytically equivalent to an ordinary cone ${\{ (x,y,z) \in {\bf C}^3: z^2 = x^2 + y^2 \}}$ near the origin, which is a cone over a smooth conic curve ${\{ (x,y) \in {\bf C}^2: x^2+y^2=1\}}$. The two non-obvious singularities (5) are slightly more complicated than ordinary double points: they are elliptic singularities, which approximately resemble a cone over an elliptic curve. (As far as I can tell, this resemblance is exact in the category of real smooth manifolds, but not in the category of algebraic varieties.) If one blows up each of the point singularities of ${\overline{V}}$ separately, no further singularities are created, and one obtains a smooth projective surface ${X}$ (using the Segre embedding as necessary to embed ${X}$ back into projective space, rather than in a product of projective spaces). Away from the singularities, the rational points of ${\overline{V}}$ lift up to rational points of ${X}$. Assuming the Bombieri-Lang conjecture, we thus are able to answer the Erdös-Ulam problem in the affirmative once we establish

Theorem 4 The blowup ${X}$ of ${\overline{V}}$ is of general type.

This will be done below the fold, by the pedestrian device of explicitly constructing global differential forms on ${X}$; I will also be working from a complex analysis viewpoint rather than an algebraic geometry viewpoint as I am more comfortable with the former approach. (As mentioned above, though, there may well be a quicker way to establish this result by using more sophisticated machinery.)

I thank Mark Green and David Gieseker for helpful conversations (and a crash course in varieties of general type!).

Remark 5 The above argument shows in fact (assuming Bombieri-Lang) that sets ${S \subset {\bf R}^2}$ with all distances rational cannot be Zariski-dense, and thus (by Solymosi-de Zeeuw) must lie on a single line or circle with only finitely many exceptions. Assuming a stronger version of Bombieri-Lang involving a general number field ${K}$, we obtain a similar conclusion with “rational” replaced by “lying in ${K}$” (one has to extend the Solymosi-de Zeeuw analysis to more general number fields, but this should be routine, using the analogue of Faltings’ theorem for such number fields).

Kevin Ford, Ben Green, Sergei Konyagin, James Maynard, and I have just uploaded to the arXiv our paper “Long gaps between primes“. This is a followup work to our two previous papers (discussed in this previous post), in which we had simultaneously shown that the maximal gap

$\displaystyle G(X) := \sup_{p_n, p_{n+1} \leq X} p_{n+1}-p_n$

between primes up to ${X}$ exhibited a lower bound of the shape

$\displaystyle G(X) \geq f(X) \log X \frac{\log \log X \log\log\log\log X}{(\log\log\log X)^2} \ \ \ \ \ (1)$

for some function ${f(X)}$ that went to infinity as ${X \rightarrow \infty}$; this improved upon previous work of Rankin and other authors, who established the same bound but with ${f(X)}$ replaced by a constant. (Again, see the previous post for a more detailed discussion.)
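To get a concrete feel for the quantity ${G(X)}$ (the asymptotics above only bite for astronomically large ${X}$), one can compute it exactly for small ${X}$ with a sieve and compare it against ${\log X}$ and against the ${\log^2 X}$ scale of Cramér's conjecture; the ranges below are my own choices for illustration:

```python
import math

def max_prime_gap(X):
    """G(X) = largest gap between consecutive primes up to X, computed with a
    simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (X + 1)
    sieve[0:2] = b'\x00\x00'
    for p in range(2, math.isqrt(X) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    primes = [n for n in range(2, X + 1) if sieve[n]]
    return max(q - p for p, q in zip(primes, primes[1:]))

for X in (10**4, 10**5, 10**6):
    G = max_prime_gap(X)
    print(X, G, round(G / math.log(X), 2), round(G / math.log(X) ** 2, 3))
```

Even at these small scales one sees ${G(X)}$ growing noticeably faster than ${\log X}$ while staying well below ${\log^2 X}$.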

In our previous papers, we did not specify a particular growth rate for ${f(X)}$. In my paper with Kevin, Ben, and Sergei, there was a good reason for this: our argument relied (amongst other things) on the inverse conjecture on the Gowers norms, as well as the Siegel-Walfisz theorem, and the known proofs of both results have ineffective constants, rendering our growth function ${f(X)}$ similarly ineffective. Maynard’s approach ostensibly also relies on the Siegel-Walfisz theorem, but (as shown in another recent paper of his) can be made quite effective, even when tracking ${k}$-tuples of fairly large size (about ${\log^c x}$ for some small ${c}$). If one carefully makes all the bounds in Maynard’s argument quantitative, one eventually ends up with a growth rate ${f(X)}$ of shape

$\displaystyle f(X) \asymp \frac{\log \log \log X}{\log\log\log\log X}, \ \ \ \ \ (2)$

and hence a bound

$\displaystyle G(X) \gg \log X \frac{\log \log X}{\log\log\log X}$

on the gaps between primes for large ${X}$; this is an unpublished calculation of James’.

In this paper we make a further refinement of this calculation to obtain a growth rate

$\displaystyle f(X) \asymp \log \log \log X \ \ \ \ \ (3)$

leading to a bound of the form

$\displaystyle G(X) \geq c \log X \frac{\log \log X \log\log\log\log X}{\log\log\log X} \ \ \ \ \ (4)$

for large ${X}$ and some small constant ${c}$. Furthermore, this appears to be the limit of current technology (in particular, falling short of Cramér’s conjecture that ${G(X)}$ is comparable to ${\log^2 X}$); in the spirit of Erdös’ original prize on this problem, I would like to offer 10,000 USD for anyone who can show (in a refereed publication, of course) that the constant ${c}$ here can be replaced by an arbitrarily large constant ${C}$.

The reason for the growth rate (3) is as follows. After following the sieving process discussed in the previous post, the problem comes down to something like the following: can one sieve out all (or almost all) of the primes in ${[x,y]}$ by removing one residue class modulo ${p}$ for all primes ${p}$ in (say) ${[x/4,x/2]}$? Very roughly speaking, if one can solve this problem with ${y = g(x) x}$, then one can obtain a growth rate on ${f(X)}$ of the shape ${f(X) \sim g(\log X)}$. (This is an oversimplification, as one actually has to sieve out a random subset of the primes, rather than all the primes in ${[x,y]}$, but never mind this detail for now.)

Using the quantitative “dense clusters of primes” machinery of Maynard, one can find lots of ${k}$-tuples in ${[x,y]}$ which contain at least ${\gg \log k}$ primes, for ${k}$ as large as ${\log^c x}$ or so (so that ${\log k}$ is about ${\log\log x}$). By considering ${k}$-tuples in arithmetic progression, this means that one can find lots of residue classes modulo a given prime ${p}$ in ${[x/4,x/2]}$ that capture about ${\log\log x}$ primes. In principle, this means that the union of all these residue classes can cover about ${\frac{x}{\log x} \log\log x}$ primes, allowing one to take ${g(x)}$ as large as ${\log\log x}$, which corresponds to (3). However, there is a catch: the residue classes for different primes ${p}$ may collide with each other, reducing the efficiency of the covering. In our previous papers on the subject, we selected the residue classes randomly, which meant that we had to insert an additional logarithmic safety margin in the expected number of times each prime would be sifted out by one of the residue classes, in order to guarantee that we would (with high probability) sift out most of the primes. This additional safety margin is ultimately responsible for the ${\log\log\log\log X}$ loss in (2).
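The covering question in the preceding paragraphs can be simulated at toy scale. The sketch below (all parameters are my own choices, and the greedy selection of residue classes is only a crude stand-in for the actual sieve-theoretic arguments) removes, for each prime ${p \in [x/4, x/2]}$, the residue class mod ${p}$ containing the most surviving primes of ${[x,y]}$, and reports how many primes remain:

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes returning the primes up to n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b'\x00\x00'
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [k for k in range(2, n + 1) if sieve[k]]

# Toy parameters (my own choices): sieve the primes in [x, y] using one
# residue class for each prime p in [x/4, x/2].
x, y = 10_000, 30_000
ps = primes_upto(y)
targets = {q for q in ps if q >= x}
moduli = [p for p in ps if x // 4 <= p <= x // 2]
initial = len(targets)

for p in moduli:
    if not targets:
        break
    # greedily remove the residue class mod p capturing the most survivors
    counts = {}
    for q in targets:
        counts[q % p] = counts.get(q % p, 0) + 1
    best = max(counts, key=counts.get)
    targets = {q for q in targets if q % p != best}

print(f"{len(moduli)} residue classes sieve {initial - len(targets)} "
      f"of the {initial} primes in [{x}, {y}]")
```

Even this greedy toy shows the basic tension: each class can only capture a handful of primes, so the ratio ${y/x}$ one can afford is governed by how many primes a single class can be made to capture.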

The main innovation of this paper, beyond detailing James’ unpublished calculations, is to use ideas from the literature on efficient hypergraph covering, to avoid the need for a logarithmic safety margin. The hypergraph covering problem, roughly speaking, is to try to cover a set of ${n}$ vertices using as few “edges” from a given hypergraph ${H}$ as possible. If each edge has ${m}$ vertices, then one certainly needs at least ${n/m}$ edges to cover all the vertices, and the question is to see if one can come close to attaining this bound given some reasonable uniform distribution hypotheses on the hypergraph ${H}$. As before, random methods tend to require something like ${\frac{n}{m} \log r}$ edges before one expects to cover, say ${1-1/r}$ of the vertices.

However, it turns out that one can (under reasonable hypotheses on ${H}$) eliminate this logarithmic loss, by using what is now known as the “semi-random method” or the “Rödl nibble”. The idea is to randomly select a small number of edges (a first “nibble”) – small enough that the edges are unlikely to overlap much with each other, thus obtaining maximal efficiency. Then, one pauses to remove all the edges from ${H}$ that intersect edges from this first nibble, so that all remaining edges will not overlap with the existing edges. One then randomly selects another small number of edges (a second “nibble”), and repeats this process until enough nibbles are taken to cover most of the vertices. Remarkably, it turns out that under some reasonable assumptions on the hypergraph ${H}$, one can maintain control on the uniform distribution of the edges throughout the nibbling process, and obtain an efficient hypergraph covering. This strategy was carried out in detail in an influential paper of Pippenger and Spencer.
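Here is a toy implementation of the nibble idea on a random uniform hypergraph (the parameters and random model are mine; this is only a caricature of the Pippenger-Spencer analysis). Because each nibble is small and all edges meeting previously covered vertices are then discarded, the chosen edges are nearly disjoint, and the cover achieves close to the optimal rate of ${m}$ vertices per edge on the portion of the vertex set it covers:

```python
import random

def nibble_cover(n, edges, m, eps=0.1, seed=0):
    """Toy Rodl nibble: repeatedly take a small random 'nibble' of edges, then
    discard every remaining edge that meets a covered vertex, so that later
    nibbles cannot overlap earlier ones."""
    rng = random.Random(seed)
    covered, chosen, pool = set(), 0, list(edges)
    while pool:
        k = min(len(pool), max(1, int(eps * n / m)))   # nibble size
        nibble = rng.sample(pool, k)
        for e in nibble:
            covered.update(e)
        chosen += k
        # keep only the edges disjoint from everything covered so far
        pool = [e for e in pool if covered.isdisjoint(e)]
    return chosen, len(covered)

# A random m-uniform hypergraph on n vertices (sample parameters).
n, m = 3000, 30
rng = random.Random(1)
edges = [tuple(rng.sample(range(n), m)) for _ in range(20000)]
chosen, cov = nibble_cover(n, edges, m)
print(f"{chosen} edges cover {cov} vertices "
      f"({cov / (chosen * m):.2%} of the optimal m-per-edge rate)")
```

A genuinely random cover of the same quality would need roughly a logarithmic factor more edges; it is precisely this factor that the nibble removes.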

In our setup, the vertices are the primes in ${[x,y]}$, and the edges are the intersection of the primes with various residue classes. (Technically, we have to work with a family of hypergraphs indexed by a prime ${p}$, rather than a single hypergraph, but let me ignore this minor technical detail.) The semi-random method would in principle eliminate the logarithmic loss and recover the bound (3). However, there is a catch: the analysis of Pippenger and Spencer relies heavily on the assumption that the hypergraph is uniform, that is to say all edges have the same size. In our context, this requirement would mean that each residue class captures exactly the same number of primes, which is not the case; we only control the number of primes in an average sense, but we were unable to obtain any concentration of measure to come close to verifying this hypothesis. And indeed, the semi-random method, when applied naively, does not work well with edges of variable size – the problem is that edges of large size are much more likely to be eliminated after each nibble than edges of small size, since they have many more vertices that could overlap with the previous nibbles. Since the large edges are clearly more useful for the covering problem than small ones, this bias towards eliminating large edges significantly reduces the efficiency of the semi-random method (and also greatly complicates the analysis of that method).

Our solution to this is to iteratively reweight the probability distribution on edges after each nibble to compensate for this bias effect, giving larger edges a greater weight than smaller edges. It turns out that there is a natural way to do this reweighting that allows one to repeat the Pippenger-Spencer analysis in the presence of edges of variable size, and this ultimately allows us to recover the full growth rate (3).

To go beyond (3), one either has to find a lot of residue classes that can capture significantly more than ${\log\log x}$ primes of size ${x}$ (which is the limit of the multidimensional Selberg sieve of Maynard and myself), or else one has to find a very different method to produce large gaps between primes than the Erdös-Rankin method, which is the method used in all previous work on the subject.

It turns out that the arguments in this paper can be combined with the Maier matrix method to also produce chains of consecutive large prime gaps whose size is of the order of (4); three of us (Kevin, James, and myself) will detail this in a future paper. (A similar combination was also recently observed in connection with our earlier result (1) by Pintz, but there are some additional technical wrinkles required to recover the full gain of (3) for the chains of large gaps problem.)