
Many problems in non-multiplicative prime number theory can be recast as sieving problems. Consider for instance the problem of counting the number {N(x)} of pairs of twin primes {p,p+2} contained in {[x/2,x]} for some large {x}; note that the claim that {N(x) > 0} for arbitrarily large {x} is equivalent to the twin prime conjecture. One can obtain this count by any of the following variants of the sieve of Eratosthenes:

  1. Let {A} be the set of natural numbers in {[x/2,x-2]}. For each prime {p \leq \sqrt{x}}, let {E_p} be the union of the residue classes {0\ (p)} and {-2\ (p)}. Then {N(x)} is the cardinality of the sifted set {A \backslash \bigcup_{p \leq \sqrt{x}} E_p}.
  2. Let {A} be the set of primes in {[x/2,x-2]}. For each prime {p \leq \sqrt{x}}, let {E_p} be the residue class {-2\ (p)}. Then {N(x)} is the cardinality of the sifted set {A \backslash \bigcup_{p \leq \sqrt{x}} E_p}.
  3. Let {A} be the set of primes in {[x/2+2,x]}. For each prime {p \leq \sqrt{x}}, let {E_p} be the residue class {2\ (p)}. Then {N(x)} is the cardinality of the sifted set {A \backslash \bigcup_{p \leq \sqrt{x}} E_p}.
  4. Let {A} be the set {\{ n(n+2): x/2 \leq n \leq x-2 \}}. For each prime {p \leq \sqrt{x}}, let {E_p} be the residue class {0\ (p)}. Then {N(x)} is the cardinality of the sifted set {A \backslash \bigcup_{p \leq \sqrt{x}} E_p}.
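As a quick sanity check, here is a minimal Python sketch (not part of the original notes; the brute-force primality test and the test values are purely illustrative) comparing the first of these formulations against a direct count of twin prime pairs in {[x/2,x]}:

```python
from math import isqrt

def is_prime(n):
    return n > 1 and all(n % p for p in range(2, isqrt(n) + 1))

def twin_count_direct(x):
    # count pairs p, p+2 of primes contained in [x/2, x]
    return sum(1 for p in range((x + 1) // 2, x - 1)
               if is_prime(p) and is_prime(p + 2))

def twin_count_sifted(x):
    # formulation 1: sift A = [x/2, x-2] by E_p = {0 (p)} u {-2 (p)}, p <= sqrt(x)
    primes = [p for p in range(2, isqrt(x) + 1) if is_prime(p)]
    A = range((x + 1) // 2, x - 1)
    return sum(1 for n in A
               if all(n % p != 0 and (n + 2) % p != 0 for p in primes))

for x in (100, 1000, 10000):
    assert twin_count_direct(x) == twin_count_sifted(x)
    print(x, twin_count_direct(x))
```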

Exercise 1 Develop similar sifting formulations of the other three Landau problems.

In view of these sieving interpretations of number-theoretic problems, it becomes natural to try to estimate the size of sifted sets {A \backslash \bigcup_{p | P} E_p} for various finite sets {A} of integers, and subsets {E_p} of integers indexed by primes {p} dividing some squarefree natural number {P} (which, in the above examples, would be the product of all primes up to {\sqrt{x}}). As we see in the above examples, the sets {E_p} in applications are typically the union of one or more residue classes modulo {p}, but we will work at a more abstract level of generality here by treating {E_p} as more or less arbitrary sets of integers, without caring too much about the arithmetic structure of such sets.

It turns out to be conceptually more natural to replace sets by functions, and to consider the more general task of estimating sifted sums

\displaystyle  \sum_{n \in {\bf Z}} a_n 1_{n \not \in \bigcup_{p | P} E_p} \ \ \ \ \ (1)

for some finitely supported sequence {(a_n)_{n \in {\bf Z}}} of non-negative numbers; the previous combinatorial sifting problem then corresponds to the indicator function case {a_n=1_{n \in A}}. (One could also use other index sets here than the integers {{\bf Z}} if desired; for much of sieve theory the index set and its subsets {E_p} are treated as abstract sets, so the exact arithmetic structure of these sets is not of primary importance.)

Continuing with twin primes as a running example, we thus have the following sample sieving problem:

Problem 2 (Sieving problem for twin primes) Let {x, z \geq 1}, and let {\pi_2(x,z)} denote the number of natural numbers {n \leq x} which avoid the residue classes {0, -2\ (p)} for all primes {p < z}. In other words, we have

\displaystyle  \pi_2(x,z) := \sum_{n \in {\bf Z}} a_n 1_{n \not \in \bigcup_{p | P(z)} E_p}

where {a_n := 1_{n \in [1,x]}}, {P(z) := \prod_{p < z} p} is the product of all the primes strictly less than {z} (we omit {z} itself for minor technical reasons), and {E_p} is the union of the residue classes {0, -2\ (p)}. Obtain upper and lower bounds on {\pi_2(x,z)} which are as strong as possible in the asymptotic regime where {x} goes to infinity and the sifting level {z} grows with {x} (ideally we would like {z} to grow as fast as {\sqrt{x}}).

From the preceding discussion we know that the number of twin prime pairs {p,p+2} in {(x/2,x]} is equal to {\pi_2(x-2,\sqrt{x}) - \pi_2(x/2,\sqrt{x})}, if {x} is not a perfect square; one also easily sees that the number of twin prime pairs in {[1,x]} is at least {\pi_2(x-2,\sqrt{x})}, again if {x} is not a perfect square. Thus we see that a sufficiently good answer to Problem 2 would resolve the twin prime conjecture, particularly if we can get the sifting level {z} to be as large as {\sqrt{x}}.

We return now to the general problem of estimating (1). We may expand

\displaystyle  1_{n \not \in \bigcup_{p | P} E_p} = \prod_{p | P} (1 - 1_{E_p}(n)) \ \ \ \ \ (2)

\displaystyle  = \sum_{k=0}^\infty (-1)^k \sum_{p_1 \dots p_k|P: p_1 < \dots < p_k} 1_{E_{p_1}} \dots 1_{E_{p_k}}(n)

\displaystyle  = \sum_{d|P} \mu(d) 1_{E_d}(n)

where {E_d := \bigcap_{p|d} E_p} (with the convention that {E_1={\bf Z}}). We thus arrive at the Legendre sieve identity

\displaystyle  \sum_{n \in {\bf Z}} a_n 1_{n \not \in \bigcup_{p | P} E_p} = \sum_{d|P} \mu(d) \sum_{n \in E_d} a_n. \ \ \ \ \ (3)

Specialising to the case of an indicator function {a_n=1_{n \in A}}, we recover the inclusion-exclusion formula

\displaystyle  |A \backslash \bigcup_{p|P} E_p| = \sum_{d|P} \mu(d) |A \cap E_d|.
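This identity can be checked mechanically on small examples; the following sketch (with a toy choice of {A}, {P} and {E_p} of our own) evaluates both sides of the inclusion-exclusion formula:

```python
from itertools import combinations

A = set(range(1, 101))                  # A = {1, ..., 100}
P_primes = [2, 3, 5, 7]                 # P = 210
E = {p: {n for n in A if n % p == 0} for p in P_primes}   # E_p = 0 (p)

direct = len(A - set().union(*E.values()))

legendre = 0
for k in range(len(P_primes) + 1):
    for S in combinations(P_primes, k):   # d = product of S, mu(d) = (-1)^k
        A_cap_E_d = set(A)                # E_1 = Z, so A n E_1 = A
        for p in S:
            A_cap_E_d &= E[p]
        legendre += (-1) ** k * len(A_cap_E_d)

assert direct == legendre
print(direct, legendre)                  # both sides equal 22
```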

Such exact sieving formulae are already satisfactory for controlling sifted sets or sifted sums when the amount of sieving is relatively small compared to the size of {A}. For instance, let us return to the running example in Problem 2 for some {x,z \geq 1}. Observe that each {E_p} in this example consists of {\omega(p)} residue classes modulo {p}, where {\omega(p)} is defined to equal {1} when {p=2} and {2} when {p} is odd. By the Chinese remainder theorem, this implies that for each {d|P(z)}, {E_d} consists of {\prod_{p|d} \omega(p)} residue classes modulo {d}. Using the basic bound

\displaystyle  \sum_{n \leq x: n = a\ (q)} 1 = \frac{x}{q} + O(1) \ \ \ \ \ (4)

for any {x > 0} and any residue class {a\ (q)}, we conclude that

\displaystyle  \sum_{n \in E_d} a_n = g(d) x + O( \prod_{p|d} \omega(p) ) \ \ \ \ \ (5)

for any {d|P(z)}, where {g} is the multiplicative function

\displaystyle  g(d) := \prod_{p|d: p|P(z)} \frac{\omega(p)}{p}.

Since {\omega(p) \leq 2} and there are at most {\pi(z)} primes dividing {P(z)}, we may crudely bound {\prod_{p|d} \omega(p) \leq 2^{\pi(z)}}, thus

\displaystyle  \sum_{n \in E_d} a_n = g(d) x + O( 2^{\pi(z)} ). \ \ \ \ \ (6)

Also, the number of divisors of {P(z)} is at most {2^{\pi(z)}}. From the Legendre sieve (3), we thus conclude that

\displaystyle  \pi_2(x,z) = (\sum_{d|P(z)} \mu(d) g(d)) x + O( 4^{\pi(z)} ).

We can factorise the main term to obtain

\displaystyle  \pi_2(x,z) = x \prod_{p < z} (1-\frac{\omega(p)}{p}) + O( 4^{\pi(z)} ).

This is compatible with the heuristic

\displaystyle  \pi_2(x,z) \approx x \prod_{p < z} (1-\frac{\omega(p)}{p}) \ \ \ \ \ (7)

coming from the equidistribution of residues principle (Section 3 of Supplement 4), bearing in mind (from the modified Cramér model, see Section 1 of Supplement 4) that we expect this heuristic to become inaccurate when {z} becomes very large. We can simplify the right-hand side of (7) by recalling the twin prime constant

\displaystyle  \Pi_2 := \prod_{p>2} (1 - \frac{1}{(p-1)^2}) = 0.6601618\dots

(see equation (7) from Supplement 4); note that

\displaystyle  \prod_p (1-\frac{1}{p})^{-2} (1-\frac{\omega(p)}{p}) = 2 \Pi_2

so from Mertens’ third theorem (Theorem 42 from Notes 1) one has

\displaystyle  \prod_{p < z} (1-\frac{\omega(p)}{p}) = (2\Pi_2+o(1)) \frac{1}{(e^\gamma \log z)^2} \ \ \ \ \ (8)

as {z \rightarrow \infty}. Bounding {4^{\pi(z)}} crudely by {\exp(o(z))}, we conclude in particular that

\displaystyle  \pi_2(x,z) = (2\Pi_2 +o(1)) \frac{x}{(e^\gamma \log z)^2}

when {x,z \rightarrow \infty} with {z = O(\log x)}. This is somewhat encouraging for the purposes of getting a sufficiently good answer to Problem 2 to resolve the twin prime conjecture, but note that {z} is currently far too small: one needs to get {z} as large as {\sqrt{x}} before one is counting twin primes, and currently {z} can only get as large as {\log x}.
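This asymptotic can be illustrated numerically (with ad hoc parameters of our choosing): the sketch below evaluates {\pi_2(x,z)} by brute force at {z \approx \log x} and compares it with the main term {x \prod_{p < z} (1-\frac{\omega(p)}{p})}.

```python
from math import isqrt, log

def is_prime(n):
    return n > 1 and all(n % p for p in range(2, isqrt(n) + 1))

def pi2(x, z):
    # brute-force count of n <= x avoiding the classes 0, -2 (p) for all p < z
    primes = [p for p in range(2, z) if is_prime(p)]
    return sum(1 for n in range(1, x + 1)
               if all(n % p != 0 and (n + 2) % p != 0 for p in primes))

x = 10**6
z = int(log(x))                     # sifting level of order log x
main = x
for p in range(2, z):
    if is_prime(p):
        main *= 1 - (1 if p == 2 else 2) / p   # omega(2) = 1, omega(p) = 2
print(pi2(x, z), round(main))       # the two quantities agree closely
```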

The problem is that the number of terms in the Legendre sieve (3) basically grows exponentially in {z}, and so the error terms in (4) accumulate to an unacceptable extent once {z} is significantly larger than {\log x}. An alternative way to phrase this problem is that the estimate (4) is only expected to be truly useful in the regime {q=o(x)}; on the other hand, the moduli {d} appearing in (3) can be as large as {P}, which grows exponentially in {z} by the prime number theorem.

To resolve this problem, it is thus natural to try to truncate the Legendre sieve, in such a way that one only uses information about the sums {\sum_{n \in E_d} a_n} for a relatively small number of divisors {d} of {P}, such as those {d} which are below a certain threshold {D}. This leads to the following general sieving problem:

Problem 3 (General sieving problem) Let {P} be a squarefree natural number, and let {{\mathcal D}} be a set of divisors of {P}. For each prime {p} dividing {P}, let {E_p} be a set of integers, and define {E_d := \bigcap_{p|d} E_p} for all {d|P} (with the convention that {E_1={\bf Z}}). Suppose that {(a_n)_{n \in {\bf Z}}} is an (unknown) finitely supported sequence of non-negative reals, whose sums

\displaystyle  X_d := \sum_{n \in E_d} a_n \ \ \ \ \ (9)

are known for all {d \in {\mathcal D}}. What are the best upper and lower bounds one can conclude on the quantity (1)?

Here is a simple example of this type of problem (corresponding to the case {P = 6}, {{\mathcal D} = \{1, 2, 3\}}, {X_1 = 100}, {X_2 = 60}, and {X_3 = 10}):

Exercise 4 Let {(a_n)_{n \in {\bf Z}}} be a finitely supported sequence of non-negative reals such that {\sum_{n \in {\bf Z}} a_n = 100}, {\sum_{n \in {\bf Z}: 2|n} a_n = 60}, and {\sum_{n \in {\bf Z}: 3|n} a_n = 10}. Show that

\displaystyle  30 \leq \sum_{n \in {\bf Z}: (n,6)=1} a_n \leq 40

and give counterexamples to show that these bounds cannot be improved in general, even when {a_n} is an indicator function sequence.
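Exercise 4 is small enough to check mechanically. The sketch below (which assumes scipy is available; the reduction to the four atoms of the {\sigma}-algebra generated by {E_2} and {E_3} is our own encoding) recovers the extremal values {30} and {40} as a linear program:

```python
from scipy.optimize import linprog

# variables: total a_n-mass on the four atoms
#   [6 | n,  2 | n but not 3,  3 | n but not 2,  (n, 6) = 1]
A_eq = [[1, 1, 1, 1],    # total mass             = 100
        [1, 1, 0, 0],    # mass on even n         = 60
        [1, 0, 1, 0]]    # mass on multiples of 3 = 10
b_eq = [100, 60, 10]
bounds = [(0, None)] * 4          # non-negativity

for sign, label in [(1, "min"), (-1, "max")]:
    c = [0, 0, 0, sign]           # optimise the mass with (n, 6) = 1
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(label, sign * res.fun)  # prints min 30.0 and max 40.0
```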

Problem 3 is an example of a linear programming problem. By using linear programming duality (as encapsulated by results such as the Hahn-Banach theorem, the separating hyperplane theorem, or the Farkas lemma), we can rephrase the above problem in terms of upper and lower bound sieves:

Theorem 5 (Dual sieve problem) Let {P, {\mathcal D}, E_p, E_d, X_d} be as in Problem 3. We assume that Problem 3 is feasible, in the sense that there exists at least one finitely supported sequence {(a_n)_{n \in {\bf Z}}} of non-negative reals obeying the constraints in that problem. Define a (normalised) upper bound sieve to be a function {\nu^+: {\bf Z} \rightarrow {\bf R}} of the form

\displaystyle  \nu^+ = \sum_{d \in {\mathcal D}} \lambda^+_d 1_{E_d}

for some coefficients {\lambda^+_d \in {\bf R}}, and obeying the pointwise lower bound

\displaystyle  \nu^+(n) \geq 1_{n \not \in\bigcup_{p|P} E_p}(n) \ \ \ \ \ (10)

for all {n \in {\bf Z}} (in particular {\nu^+} is non-negative). Similarly, define a (normalised) lower bound sieve to be a function {\nu^-: {\bf Z} \rightarrow {\bf R}} of the form

\displaystyle  \nu^-(n) = \sum_{d \in {\mathcal D}} \lambda^-_d 1_{E_d}

for some coefficients {\lambda^-_d \in {\bf R}}, and obeying the pointwise upper bound

\displaystyle  \nu^-(n) \leq 1_{n \not \in\bigcup_{p|P} E_p}(n)

for all {n \in {\bf Z}}. Thus for instance {1} and {0} are (trivially) upper bound sieves and lower bound sieves respectively.

  • (i) The supremal value of the quantity (1), subject to the constraints in Problem 3, is equal to the infimal value of the quantity {\sum_{d \in {\mathcal D}} \lambda^+_d X_d}, as {\nu^+ = \sum_{d \in {\mathcal D}} \lambda^+_d 1_{E_d}} ranges over all upper bound sieves.
  • (ii) The infimal value of the quantity (1), subject to the constraints in Problem 3, is equal to the supremal value of the quantity {\sum_{d \in {\mathcal D}} \lambda^-_d X_d}, as {\nu^- = \sum_{d \in {\mathcal D}} \lambda^-_d 1_{E_d}} ranges over all lower bound sieves.

Proof: We prove part (i) only, and leave part (ii) as an exercise. Let {A} be the supremal value of the quantity (1) given the constraints in Problem 3, and let {B} be the infimal value of {\sum_{d \in {\mathcal D}} \lambda^+_d X_d}. We need to show that {A=B}.

We first establish the easy inequality {A \leq B}. If the sequence {a_n} obeys the constraints in Problem 3, and {\nu^+ = \sum_{d \in {\mathcal D}} \lambda^+_d 1_{E_d}} is an upper bound sieve, then

\displaystyle  \sum_n \nu^+(n) a_n = \sum_{d \in {\mathcal D}} \lambda^+_d X_d

and hence (by the non-negativity of {\nu^+} and {a_n})

\displaystyle  \sum_{n \not \in \bigcup_{p|P} E_p} a_n \leq \sum_{d \in {\mathcal D}} \lambda^+_d X_d;

taking suprema in {(a_n)_{n \in {\bf Z}}} and infima in {\nu^+} we conclude that {A \leq B}.

Now suppose for contradiction that {A<B}, thus {A < C < B} for some real number {C}. We will argue using the hyperplane separation theorem; one can also proceed using one of the other duality results mentioned above. (See this previous blog post for some discussion of the connections between these various forms of linear duality.) Consider the affine functional

\displaystyle  \rho_0: (a_n)_{n \in{\bf Z}} \mapsto C - \sum_{n \not \in \bigcup_{p|P} E_p} a_n

on the vector space of finitely supported sequences {(a_n)_{n \in {\bf Z}}} of reals. On the one hand, since {C > A}, this functional is positive for every sequence {(a_n)_{n \in{\bf Z}}} obeying the constraints in Problem 3. Next, let {K} be the space of affine functionals {\rho} of the form

\displaystyle  \rho: (a_n)_{n \in {\bf Z}} \mapsto -\sum_{d \in {\mathcal D}} \lambda^+_d ( \sum_{n \in E_d} a_n - X_d ) + \sum_n a_n \nu(n) + X

for some real numbers {\lambda^+_d \in {\bf R}}, some non-negative function {\nu: {\bf Z} \rightarrow {\bf R}^+} which is a finite linear combination of the {1_{E_d}} for {d|P}, and some non-negative {X}. This is a closed convex cone in a finite-dimensional vector space {V}; note also that {\rho_0} lies in {V}. Suppose first that {\rho_0 \in K}, thus we have a representation of the form

\displaystyle C - \sum_{n \not \in \bigcup_{p|P} E_p} a_n = -\sum_{d \in {\mathcal D}} \lambda^+_d ( \sum_{n \in E_d} a_n - X_d ) + \sum_n a_n \nu(n) + X

for any finitely supported sequence {(a_n)_{n \in {\bf Z}}}. Comparing coefficients, we conclude that

\displaystyle  \sum_{d \in {\mathcal D}} \lambda^+_d 1_{E_d}(n) \geq 1_{n \not \in \bigcup_{p|P} E_p}

for any {n} (i.e., {\sum_{d \in {\mathcal D}} \lambda^+_d 1_{E_d}} is an upper bound sieve), and also

\displaystyle  C \geq \sum_{d \in {\mathcal D}} \lambda^+_d X_d,

and thus {C \geq B}, a contradiction. Thus {\rho_0} lies outside of {K}. But then by the hyperplane separation theorem, we can find an affine functional {\iota: V \rightarrow {\bf R}} on {V} that is non-negative on {K} and negative on {\rho_0}. By duality, such an affine functional takes the form {\iota: \rho \mapsto \rho((b_n)_{n \in {\bf Z}}) + c} for some finitely supported sequence {(b_n)_{n \in {\bf Z}}} and {c \in {\bf R}} (indeed, {(b_n)_{n \in {\bf Z}}} can be supported on a finite set consisting of a single representative for each atom of the finite {\sigma}-algebra generated by the {E_p}). Since {\iota} is non-negative on the cone {K}, we see (on testing against multiples of the functionals {(a_n)_{n \in {\bf Z}} \mapsto \sum_{n \in E_d} a_n - X_d} or {(a_n)_{n \in {\bf Z}} \mapsto a_n}) that the {b_n} and {c} are non-negative, and that {\sum_{n \in E_d} b_n - X_d = 0} for all {d \in {\mathcal D}}; thus {(b_n)_{n \in {\bf Z}}} is feasible for Problem 3. Since {\iota} is negative on {\rho_0}, we see that

\displaystyle  \sum_{n \not \in \bigcup_{p|P} E_p} b_n \geq C

and thus {A \geq C}, giving the desired contradiction. \Box

Exercise 6 Prove part (ii) of the above theorem.

Exercise 7 Show that the infima and suprema in the above theorem are actually attained (so one can replace “infimal” and “supremal” by “minimal” and “maximal” if desired).

Exercise 8 What are the optimal upper and lower bound sieves for Exercise 4?

In the case when {{\mathcal D}} consists of all the divisors of {P}, we see that the Legendre sieve {\sum_{d|P} \mu(d) 1_{E_d}} is both the optimal upper bound sieve and the optimal lower bound sieve, regardless of what the quantities {X_d} are. However, in most cases of interest, {{\mathcal D}} will only be some strict subset of the divisors of {P}, and there will be a gap between the optimal upper and lower bounds.

Observe that a sequence {(\lambda^+_d)_{d \in {\mathcal D}}} of real numbers will form an upper bound sieve {\sum_d \lambda^+_d 1_{E_d}} if one has the inequalities

\displaystyle  \lambda^+_1 \geq 1

and

\displaystyle  \sum_{d|n} \lambda^+_d \geq 0

for all {n|P}; we will refer to such sequences as upper bound sieve coefficients. (Conversely, if the sets {E_p} are in “general position” in the sense that every set of the form {\bigcap_{p|n} E_p \backslash \bigcup_{p|P; p\not | n} E_p} for {n|P} is non-empty, we see that every upper bound sieve arises from a sequence of upper bound sieve coefficients.) Similarly, a sequence {(\lambda^-_d)_{d \in {\mathcal D}}} of real numbers will form a lower bound sieve {\sum_d \lambda^-_d 1_{E_d}} if one has the inequalities

\displaystyle  \lambda^-_1 \leq 1

and

\displaystyle  \sum_{d|n} \lambda^-_d \leq 0

for all {n|P} with {n>1}; we will refer to such sequences as lower bound sieve coefficients.

Exercise 9 (Brun pure sieve) Let {P} be a squarefree number, and {k} a non-negative integer. Show that the sequence {(\lambda_d)_{d | P}} defined by

\displaystyle  \lambda_d := 1_{\omega(d) \leq k} \mu(d),

where {\omega(d)} is the number of prime factors of {d}, is a sequence of upper bound sieve coefficients for even {k}, and a sequence of lower bound sieve coefficients for odd {k}. Deduce the Bonferroni inequalities

\displaystyle  \sum_{n \in {\bf Z}} a_n 1_{n \not \in \bigcup_{p | P} E_p} \leq \sum_{d|P: \omega(d) \leq k} \mu(d) X_d \ \ \ \ \ (11)

when {k} is even, and

\displaystyle  \sum_{n \in {\bf Z}} a_n 1_{n \not \in \bigcup_{p | P} E_p} \geq \sum_{d|P: \omega(d) \leq k} \mu(d) X_d \ \ \ \ \ (12)

when {k} is odd, whenever one is in the situation of Problem 3 (and {{\mathcal D}} contains all {d|P} with {\omega(d) \leq k}). The resulting upper and lower bound sieves are sometimes known as Brun pure sieves. The Legendre sieve can be viewed as the limiting case when {k \geq \omega(P)}.
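The bracketing behaviour of the Bonferroni inequalities is easy to observe numerically. In the hedged sketch below (toy parameters; each {E_p} is the single residue class {0\ (p)}), the truncated Legendre sums alternate above and below the exact sifted count as {k} increases:

```python
from itertools import combinations

A = range(1, 1001)
P_primes = [2, 3, 5, 7, 11, 13]

def count_in_E_d(S):               # |A n E_d| for d = prod(S), E_d = 0 (d)
    d = 1
    for p in S:
        d *= p
    return sum(1 for n in A if n % d == 0)

exact = sum(1 for n in A if all(n % p for p in P_primes))

partial = 0
for k in range(len(P_primes) + 1):
    for S in combinations(P_primes, k):
        partial += (-1) ** k * count_in_E_d(S)
    # after including all d | P with omega(d) <= k:
    assert partial >= exact if k % 2 == 0 else partial <= exact
    print(k, partial, exact)
```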

In many applications the sums {X_d} in (9) take the form

\displaystyle  \sum_{n \in E_d} a_n = g(d) X + r_d \ \ \ \ \ (13)

for some quantity {X} independent of {d}, some multiplicative function {g} with {0 \leq g(p) \leq 1}, and some remainder term {r_d} whose effect is expected to be negligible on average if {d} is restricted to be small, e.g. less than a threshold {D}; note for instance that (5) is of this form if {D \leq x^{1-\varepsilon}} for some fixed {\varepsilon>0} (note from the divisor bound, Lemma 23 of Notes 1, that {\prod_{p|d} \omega(p) \ll x^{o(1)}} if {d \ll x^{O(1)}}). We are thus led to the following idealisation of the sieving problem, in which the remainder terms {r_d} are ignored:

Problem 10 (Idealised sieving) Let {z, D \geq 1} (we refer to {z} as the sifting level and {D} as the level of distribution), let {g} be a multiplicative function with {0 \leq g(p) \leq 1}, and let {{\mathcal D} := \{ d|P(z): d \leq D \}}. How small can one make the quantity

\displaystyle  \sum_{d \in {\mathcal D}} \lambda^+_d g(d) \ \ \ \ \ (14)

for a sequence {(\lambda^+_d)_{d \in {\mathcal D}}} of upper bound sieve coefficients, and how large can one make the quantity

\displaystyle  \sum_{d \in {\mathcal D}} \lambda^-_d g(d) \ \ \ \ \ (15)

for a sequence {(\lambda^-_d)_{d \in {\mathcal D}}} of lower bound sieve coefficients?

Thus, for instance, the trivial upper bound sieve {\lambda^+_d := 1_{d=1}} and the trivial lower bound sieve {\lambda^-_d := 0} show that (14) can equal {1} and (15) can equal {0}. Of course, one hopes to do better than these trivial bounds in many situations; usually one can improve the upper bound quite substantially, but improving the lower bound is significantly more difficult, particularly when {z} is large compared with {D}.

If the remainder terms {r_d} in (13) are indeed negligible on average for {d \leq D}, then one expects the upper and lower bounds in Problem 3 to essentially be the optimal bounds in (14) and (15) respectively, multiplied by the normalisation factor {X}. Thus Problem 10 serves as a good model problem for Problem 3, in which all the arithmetic content of the original sieving problem has been abstracted into two parameters {z,D} and a multiplicative function {g}. In many applications, {g(p)} will be approximately {\kappa/p} on the average for some fixed {\kappa>0}, known as the sieve dimension; for instance, in the twin prime sieving problem discussed above, the sieve dimension is {2}. The larger one makes the level of distribution {D} compared to {z}, the more choices one has for the upper and lower bound sieves; it is thus of interest to obtain equidistribution estimates such as (13) for {d} as large as possible. When the sequence {a_n} is of arithmetic origin (for instance, if it is the von Mangoldt function {\Lambda}), then estimates such as the Bombieri-Vinogradov theorem, Theorem 17 from Notes 3, turn out to be particularly useful in this regard; in other contexts, the required equidistribution estimates might come from other sources, such as homogeneous dynamics, or the theory of expander graphs (the latter arises in the recent theory of the affine sieve, discussed in this previous blog post). However, the sieve-theoretic tools developed in this post are not particularly sensitive to how a certain level of distribution is attained, and are generally content to use sieve axioms such as (13) as “black boxes”.

In some applications one needs to modify Problem 10 in various technical ways (e.g. in altering the product {P(z)}, the set {{\mathcal D}}, or the definition of an upper or lower sieve coefficient sequence), but to simplify the exposition we will focus on the above problem without such alterations.

As the exercise below (or the heuristic (7)) suggests, the “natural” size of (14) and (15) is given by the quantity {V(z) := \prod_{p < z} (1 - g(p))} (so that the natural size for Problem 3 is {V(z) X}):

Exercise 11 Let {z,D,g} be as in Problem 10, and set {V(z) := \prod_{p < z} (1 - g(p))}.

  • (i) Show that the quantity (14) is always at least {V(z)} when {(\lambda^+_d)_{d \in {\mathcal D}}} is a sequence of upper bound sieve coefficients. Similarly, show that the quantity (15) is always at most {V(z)} when {(\lambda^-_d)_{d \in {\mathcal D}}} is a sequence of lower bound sieve coefficients. (Hint: compute the expected value of {\sum_{d|n} \lambda^\pm_d} when {n} is a random factor of {P(z)} chosen according to a certain probability distribution depending on {g}.) A Monte Carlo illustration of this hint is sketched after this exercise.
  • (ii) Show that (14) and (15) can both attain the value of {V(z)} when {D \geq P(z)}. (Hint: translate the Legendre sieve to this setting.)
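The hint in part (i) can be explored numerically: if one builds a random factor {n} of {P(z)} by including each prime {p < z} independently with probability {g(p)}, then the expected value of {\sum_{d|n} \lambda^+_d} is exactly the quantity (14), while the pointwise bound {\sum_{d|n} \lambda^+_d \geq 1_{n=1}} forces (14) to be at least {V(z)}. The Monte Carlo sketch below (our own parameters, with Brun's pure sieve at {k=2} supplying the upper bound coefficients) illustrates this:

```python
import random
from itertools import combinations
from math import prod

primes = [2, 3, 5, 7, 11]                           # primes below z = 12
g = {p: (1 if p == 2 else 2) / p for p in primes}   # twin-prime-type densities
V = prod(1 - g[p] for p in primes)                  # V(z)

def lam(S):   # Brun pure sieve, k = 2: lambda_d = mu(d) 1_{omega(d) <= 2}
    return (-1) ** len(S) if len(S) <= 2 else 0

quantity_14 = sum(lam(S) * prod(g[p] for p in S)
                  for k in range(3) for S in combinations(primes, k))

rng = random.Random(0)
trials, acc, empty = 200_000, 0.0, 0
for _ in range(trials):
    n = tuple(p for p in primes if rng.random() < g[p])  # random factor of P(z)
    acc += sum(lam(S) for k in range(3) for S in combinations(n, k))
    empty += (n == ())
print(quantity_14, acc / trials)   # the Monte Carlo mean matches (14)
print(V, empty / trials)           # both of which dominate V(z) = P(n = 1)
```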

The problem of finding good sequences of upper and lower bound sieve coefficients in order to solve problems such as Problem 10 is one of the core objectives of sieve theory, and has been intensively studied. This is more of an optimisation problem than a genuinely number theoretic problem; however, the optimisation problem is sufficiently complicated that it has not been solved exactly or even asymptotically, except in a few special cases. (It can be reduced to an optimisation problem involving multilinear integrals of certain unknown functions of several variables, but this problem is rather difficult to analyse further; see these lecture notes of Selberg for further discussion.) But while we do not yet have a definitive solution to this problem in general, we do have a number of good general-purpose upper and lower bound sieve coefficients that give fairly good values for (14), (15), often coming within a constant factor of the idealised value {V(z)}, and which work well for sifting levels {z} as large as a small power of the level of distribution {D}. Unfortunately, we also know of an important limitation to the sieve, known as the parity problem, that prevents one from taking {z} as large as {D^{1/2}} while still obtaining non-trivial lower bounds; as a consequence, sieve theory is not able, on its own, to sift out primes for such purposes as establishing the twin prime conjecture. However, it is still possible to use these sieves, in conjunction with additional tools, to produce various types of primes or prime patterns in some cases; examples of this include the theorem of Ben Green and myself in which an upper bound sieve is used to demonstrate the existence of primes in arbitrarily long arithmetic progressions, or the more recent theorem of Zhang in which (among other things) an upper bound sieve was used to demonstrate the existence of infinitely many pairs of primes whose difference was bounded. In such arguments, the upper bound sieve was used not so much to count the primes or prime patterns directly, but to serve instead as a sort of “container” to efficiently envelop such prime patterns; when used in such a manner, the upper bound sieves are sometimes known as enveloping sieves. If the original sequence was supported on primes, then the enveloping sieve can be viewed as a “smoothed out indicator function” that is concentrated on almost primes, which in this context refers to numbers with no small prime factors.

In a somewhat different direction, it can be possible in some cases to break the parity barrier by assuming additional equidistribution axioms on the sequence {a_n} than just (13), in particular controlling certain bilinear sums involving {a_{nm}} rather than just linear sums of the {a_n}. This approach was in particular pursued by Friedlander and Iwaniec, leading to their theorem that there are infinitely many primes of the form {n^2+m^4}.

The study of sieves is an immense topic; see for instance the recent 527-page text by Friedlander and Iwaniec. We will limit attention to two sieves which give good general-purpose results, if not necessarily the most optimal ones:

  • (i) The beta sieve (or Rosser-Iwaniec sieve), which is a modification of the classical combinatorial sieve of Brun. (A collection of sieve coefficients {\lambda_d^{\pm}} is called combinatorial if its coefficients lie in {\{-1,0,+1\}}.) The beta sieve is a family of upper and lower bound combinatorial sieves, and is particularly useful for efficiently sieving out all primes up to a parameter {z = x^{1/u}} from a set of integers of size {x}, in the regime where {u} is moderately large, leading to what is sometimes known as the fundamental lemma of sieve theory.
  • (ii) The Selberg upper bound sieve, which is a general-purpose sieve that can serve both as an upper bound sieve for classical sieving problems, as well as an enveloping sieve for sets such as the primes. (One can also convert the Selberg upper bound sieve into a lower bound sieve in a number of ways, but we will only touch upon this briefly.) A key advantage of the Selberg sieve is that, due to the “quadratic” nature of the sieve, the difficult optimisation problem in Problem 10 is replaced with a much more tractable quadratic optimisation problem, which can often be solved exactly.

Remark 12 It is possible to compose two sieves together, for instance by using the observation that the product of two upper bound sieves is again an upper bound sieve, or that the product of an upper bound sieve and a lower bound sieve is a lower bound sieve. Such a composition of sieves is useful in some applications, for instance if one wants to apply the fundamental lemma as a “preliminary sieve” to sieve out small primes, but then use a more precise sieve like the Selberg sieve to sieve out medium primes. We will see an example of this in later notes, when we discuss the linear beta-sieve.

We will also briefly present the (arithmetic) large sieve, which gives a rather different approach to Problem 3 in the case that each {E_p} consists of some number (typically a large number) of residue classes modulo {p}, and is powered by the (analytic) large sieve inequality of the preceding section. As an application of these methods, we will utilise the Selberg upper bound sieve as an enveloping sieve to establish Zhang’s theorem on bounded gaps between primes. Finally, we give an informal discussion of the parity barrier which gives some heuristic limitations on what sieve theory is able to accomplish with regards to counting prime patterns such as twin primes.

These notes are only an introduction to the vast topic of sieve theory; more detailed discussion can be found in the Friedlander-Iwaniec text, in these lecture notes of Selberg, and in many further texts.


A fundamental and recurring problem in analytic number theory is to demonstrate the presence of cancellation in an oscillating sum, a typical example of which might be a correlation

\displaystyle  \sum_{n} f(n) \overline{g(n)} \ \ \ \ \ (1)

between two arithmetic functions {f: {\bf N} \rightarrow {\bf C}} and {g: {\bf N} \rightarrow {\bf C}}, which to avoid technicalities we will assume to be finitely supported (or that the {n} variable is localised to a finite range, such as {\{ n: n \leq x \}}). A key example to keep in mind for the purposes of this set of notes is the twisted von Mangoldt summatory function

\displaystyle  \sum_{n \leq x} \Lambda(n) \overline{\chi(n)} \ \ \ \ \ (2)

that measures the correlation between the primes and a Dirichlet character {\chi}. One can get a “trivial” bound on such sums from the triangle inequality

\displaystyle  |\sum_{n} f(n) \overline{g(n)}| \leq \sum_{n} |f(n)| |g(n)|;

for instance, from the triangle inequality and the prime number theorem we have

\displaystyle  |\sum_{n \leq x} \Lambda(n) \overline{\chi(n)}| \leq x + o(x) \ \ \ \ \ (3)

as {x \rightarrow \infty}. But the triangle inequality is insensitive to the phase oscillations of the summands, and often we expect (e.g. from the probabilistic heuristics from Supplement 4) to be able to improve upon the trivial triangle inequality bound by a substantial amount; in the best case scenario, one typically expects a “square root cancellation” that gains a factor that is roughly the square root of the number of summands. (For instance, for Dirichlet characters {\chi} of conductor {O(x^{O(1)})}, it is expected from probabilistic heuristics that the left-hand side of (3) should in fact be {O_\varepsilon(x^{1/2+\varepsilon})} for any {\varepsilon>0}.)

It has proven surprisingly difficult, however, to establish significant cancellation in many of the sums of interest in analytic number theory, particularly if the sums do not have a strong amount of algebraic structure (e.g. multiplicative structure) which allow for the deployment of specialised techniques (such as multiplicative number theory techniques). In fact, we are forced to rely (to an embarrassingly large extent) on (many variations of) a single basic tool to capture at least some cancellation, namely the Cauchy-Schwarz inequality. In fact, in many cases the classical case

\displaystyle  |\sum_n f(n) \overline{g(n)}| \leq (\sum_n |f(n)|^2)^{1/2} (\sum_n |g(n)|^2)^{1/2}, \ \ \ \ \ (4)

considered by Cauchy, where at least one of {f, g: {\bf N} \rightarrow {\bf C}} is finitely supported, suffices for applications. Roughly speaking, the Cauchy-Schwarz inequality replaces the task of estimating a cross-correlation between two different functions {f,g}, to that of measuring self-correlations between {f} and itself, or {g} and itself, which are usually easier to compute (albeit at the cost of capturing less cancellation). Note that the Cauchy-Schwarz inequality requires almost no hypotheses on the functions {f} or {g}, making it a very widely applicable tool.

There is however some skill required to decide exactly how to deploy the Cauchy-Schwarz inequality (and in particular, how to select {f} and {g}); if applied blindly, one loses all cancellation and can even end up with a worse estimate than the trivial bound. For instance, if one tries to bound (2) directly by applying Cauchy-Schwarz with the functions {\Lambda} and {\chi}, one obtains the bound

\displaystyle  |\sum_{n \leq x} \Lambda(n) \overline{\chi(n)}| \leq (\sum_{n \leq x} \Lambda(n)^2)^{1/2} (\sum_{n \leq x} |\chi(n)|^2)^{1/2}.

The right-hand side may be bounded by {\ll x \log^{1/2} x}, but this is worse than the trivial bound (3) by a logarithmic factor. This can be “blamed” on the fact that {\Lambda} and {\chi} are concentrated on rather different sets ({\Lambda} is concentrated on primes, while {\chi} is more or less uniformly distributed amongst the natural numbers); but even if one corrects for this (e.g. by weighting Cauchy-Schwarz with some suitable “sieve weight” that is more concentrated on primes), one still does not do any better than (3). Indeed, the Cauchy-Schwarz inequality suffers from the same key weakness as the triangle inequality: it is insensitive to the phase oscillation of the factors {f, g}.
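This loss can be watched numerically. In the illustrative computation below (the sieve used to tabulate {\Lambda} and the choice of the non-principal character mod {4} are our own), the actual correlation is far smaller than either the trivial bound or the blind Cauchy-Schwarz bound:

```python
from math import isqrt, log, sqrt

def von_mangoldt_table(x):
    # Lambda(n) for 0 <= n <= x, filled in by marking prime powers
    lam = [0.0] * (x + 1)
    for p in range(2, x + 1):
        if all(p % q for q in range(2, isqrt(p) + 1)):   # p is prime
            m = p
            while m <= x:
                lam[m] = log(p)
                m *= p
    return lam

x = 10**4
lam = von_mangoldt_table(x)
chi = lambda n: (0, 1, 0, -1)[n % 4]   # non-principal character mod 4

actual = abs(sum(lam[n] * chi(n) for n in range(1, x + 1)))
trivial = sum(lam)                     # ~ x, as in (3)
cauchy = (sqrt(sum(v * v for v in lam))
          * sqrt(sum(chi(n) ** 2 for n in range(1, x + 1))))  # worse by ~ log^{1/2} x
print(round(actual), round(trivial), round(cauchy))
```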

While the Cauchy-Schwarz inequality can be poor at estimating a single correlation such as (1), its power improves when considering an average (or sum, or square sum) of multiple correlations. In this set of notes, we will focus on one situation of this type, namely that of trying to estimate a square sum

\displaystyle  (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \ \ \ \ \ (5)

that measures the correlations of a single function {f: {\bf N} \rightarrow {\bf C}} with multiple other functions {g_j: {\bf N} \rightarrow {\bf C}}. One should think of the situation in which {f} is a “complicated” function, such as the von Mangoldt function {\Lambda}, but the {g_j} are relatively “simple” functions, such as Dirichlet characters. In the case when the {g_j} are orthonormal functions, we of course have the classical Bessel inequality:

Lemma 1 (Bessel inequality) Let {g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}} be finitely supported functions obeying the orthonormality relationship

\displaystyle  \sum_n g_j(n) \overline{g_{j'}(n)} = 1_{j=j'}

for all {1 \leq j,j' \leq J}. Then for any function {f: {\bf N} \rightarrow {\bf C}}, we have

\displaystyle  (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \leq (\sum_n |f(n)|^2)^{1/2}.

For sake of comparison, if one were to apply the Cauchy-Schwarz inequality (4) separately to each summand in (5), one would obtain the bound of {J^{1/2} (\sum_n |f(n)|^2)^{1/2}}, which is significantly inferior to the Bessel bound when {J} is large. Geometrically, what is going on is this: the Cauchy-Schwarz inequality (4) is only close to sharp when {f} and {g} are close to parallel in the Hilbert space {\ell^2({\bf N})}. But if {g_1,\dots,g_J} are orthonormal, then it is not possible for any other vector {f} to be simultaneously close to parallel to too many of these orthonormal vectors, and so the inner products of {f} with most of the {g_j} should be small. (See this previous blog post for more discussion of this principle.) One can view the Bessel inequality as formalising a repulsion principle: if {f} correlates too much with some of the {g_j}, then it does not have enough “energy” to have large correlation with the rest of the {g_j}.
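Here is a short numpy sketch (random orthonormal data of our own construction, purely illustrative) contrasting the Bessel bound with the termwise Cauchy-Schwarz bound {J^{1/2} (\sum_n |f(n)|^2)^{1/2}}:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J = 200, 20
# an orthonormal family g_1, ..., g_J in C^N from a QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((N, J)) + 1j * rng.standard_normal((N, J)))
g = Q.T                              # row j is g_j
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

corr = g.conj() @ f                  # the inner products sum_n f(n) conj(g_j(n))
bessel_lhs = np.linalg.norm(corr)    # square root of the square sum (5)
bessel_rhs = np.linalg.norm(f)
termwise_cs = np.sqrt(J) * np.linalg.norm(f)
print(bessel_lhs, bessel_rhs, termwise_cs)
assert bessel_lhs <= bessel_rhs + 1e-9
```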

In analytic number theory applications, it is useful to generalise the Bessel inequality to the situation in which the {g_j} are not necessarily orthonormal. This can be accomplished via the Cauchy-Schwarz inequality:

Proposition 2 (Generalised Bessel inequality) Let {g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}} be finitely supported functions, and let {\nu: {\bf N} \rightarrow {\bf R}^+} be a non-negative function. Let {f: {\bf N} \rightarrow {\bf C}} be a function that vanishes whenever {\nu} vanishes. Then we have

\displaystyle  (\sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}|^2)^{1/2} \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2} \ \ \ \ \ (6)

\displaystyle  \times ( \sum_{j=1}^J \sum_{j'=1}^J c_j \overline{c_{j'}} \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)} )^{1/2}

for some sequence {c_1,\dots,c_J} of complex numbers with {\sum_{j=1}^J |c_j|^2 = 1}, with the convention that {|f(n)|^2/\nu(n)} vanishes whenever {f(n), \nu(n)} both vanish.

Note by relabeling that we may replace the domain {{\bf N}} here by any other at most countable set, such as the integers {{\bf Z}}. (Indeed, one can give an analogue of this lemma on arbitrary measure spaces, but we will not do so here.) This result first appears in this paper of Boas.

Proof: We use the method of duality to replace the role of the function {f} by a dual sequence {c_1,\dots,c_J}. By the converse to Cauchy-Schwarz, we may write the left-hand side of (6) as

\displaystyle  \sum_{j=1}^J \overline{c_j} \sum_{n} f(n) \overline{g_j(n)}

for some complex numbers {c_1,\dots,c_J} with {\sum_{j=1}^J |c_j|^2 = 1}. Indeed, if all of the {\sum_{n} f(n) \overline{g_j(n)}} vanish, we can set the {c_j} arbitrarily, otherwise we set {(c_1,\dots,c_J)} to be the unit vector formed by dividing {(\sum_{n} f(n) \overline{g_j(n)})_{j=1}^J} by its length. We can then rearrange this expression as

\displaystyle  \sum_n f(n) \overline{\sum_{j=1}^J c_j g_j(n)}.

Applying Cauchy-Schwarz (dividing the first factor by {\nu(n)^{1/2}} and multiplying the second by {\nu(n)^{1/2}}, after first removing those {n} for which {\nu(n)} vanishes), this is bounded by

\displaystyle  (\sum_n |f(n)|^2 / \nu(n))^{1/2} (\sum_n \nu(n) |\sum_{j=1}^J c_j g_j(n)|^2)^{1/2},

and the claim follows by expanding out the second factor. \Box

Observe that Lemma 1 is a special case of Proposition 2 when {\nu=1} and the {g_j} are orthonormal. In general, one can expect Proposition 2 to be useful when the {g_j} are almost orthogonal relative to {\nu}, in that the correlations {\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}} tend to be small when {j,j'} are distinct. In that case, one can hope for the diagonal term {j=j'} in the right-hand side of (6) to dominate, in which case one can obtain estimates of comparable strength to the classical Bessel inequality. The flexibility to choose different weights {\nu} in the above proposition has some technical advantages; for instance, if {f} is concentrated in a sparse set (such as the primes), it is sometimes useful to tailor {\nu} to a comparable set (e.g. the almost primes) in order not to lose too much in the first factor {\sum_n |f(n)|^2 / \nu(n)}. Also, it can be useful to choose a fairly “smooth” weight {\nu}, in order to make the weighted correlations {\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}} small.

Remark 3 In harmonic analysis, the use of tools such as Proposition 2 is known as the method of almost orthogonality, or the {TT^*} method. The explanation for the latter name is as follows. For sake of exposition, suppose that {\nu} is never zero (or we remove all {n} from the domain for which {\nu(n)} vanishes). Given a family of finitely supported functions {g_1,\dots,g_J: {\bf N} \rightarrow {\bf C}}, consider the linear operator {T: \ell^2(\nu^{-1}) \rightarrow \ell^2(\{1,\dots,J\})} defined by the formula

\displaystyle  T f := ( \sum_{n} f(n) \overline{g_j(n)} )_{j=1}^J.

This is a bounded linear operator, and the left-hand side of (6) is nothing other than the {\ell^2(\{1,\dots,J\})} norm of {Tf}. Without any further information on the function {f} other than its {\ell^2(\nu^{-1})} norm {(\sum_n |f(n)|^2 / \nu(n))^{1/2}}, the best estimate one can obtain on (6) here is clearly

\displaystyle  (\sum_n |f(n)|^2 / \nu(n))^{1/2} \times \|T\|_{op},

where {\|T\|_{op}} denotes the operator norm of {T}.

The adjoint {T^*: \ell^2(\{1,\dots,J\}) \rightarrow \ell^2(\nu^{-1})} is easily computed to be

\displaystyle  T^* (c_j)_{j=1}^J := (\sum_{j=1}^J c_j \nu(n) g_j(n) )_{n \in {\bf N}}.

The composition {TT^*: \ell^2(\{1,\dots,J\}) \rightarrow \ell^2(\{1,\dots,J\})} of {T} and its adjoint is then given by

\displaystyle  TT^* (c_j)_{j=1}^J := (\sum_{j'=1}^J c_{j'} \sum_n \nu(n) g_{j'}(n) \overline{g_j(n)} )_{j=1}^J.

From the spectral theorem (or singular value decomposition), one sees that the operator norms of {T} and {TT^*} are related by the identity

\displaystyle  \|T\|_{op} = \|TT^*\|_{op}^{1/2},

and as {TT^*} is a self-adjoint, positive semi-definite operator, the operator norm {\|TT^*\|_{op}} is also the supremum of the quantity

\displaystyle  \langle TT^* (c_j)_{j=1}^J, (c_j)_{j=1}^J \rangle_{\ell^2(\{1,\dots,J\})} = \sum_{j=1}^J \sum_{j'=1}^J c_j \overline{c_{j'}} \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}

where {(c_j)_{j=1}^J} ranges over unit vectors in {\ell^2(\{1,\dots,J\})}. Putting these facts together, we obtain Proposition 2; furthermore, we see from this analysis that the bound here is essentially optimal if the only information one is allowed to use about {f} is its {\ell^2(\nu^{-1})} norm.

For further discussion of almost orthogonality methods from a harmonic analysis perspective, see Chapter VII of this text of Stein.
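As a concrete check on the identities in this remark, the following numpy sketch (random {\nu} and {g_j} of our own choosing) verifies {\|T\|_{op} = \|TT^*\|_{op}^{1/2}} and the resulting bound (6):

```python
import numpy as np

rng = np.random.default_rng(1)
N, J = 100, 8
nu = rng.random(N) + 0.1                       # a strictly positive weight
g = rng.standard_normal((J, N)) + 1j * rng.standard_normal((J, N))

# Substituting f = nu^{1/2} h identifies T with the matrix M acting on h,
# so ||T||_op is the largest singular value of M.
M = g.conj() * np.sqrt(nu)
T_norm = np.linalg.norm(M, 2)
TTstar = M @ M.conj().T    # entries sum_n nu(n) g_{j'}(n) conj(g_j(n))
assert np.isclose(T_norm, np.sqrt(np.linalg.norm(TTstar, 2)))

f = rng.standard_normal(N) * nu                # f vanishes wherever nu does
lhs = np.linalg.norm(g.conj() @ f)             # left-hand side of (6)
rhs = np.sqrt(np.sum(np.abs(f) ** 2 / nu)) * T_norm
assert lhs <= rhs + 1e-9
print(lhs, rhs)
```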

Exercise 4 Under the same hypotheses as Proposition 2, show that

\displaystyle  \sum_{j=1}^J |\sum_{n} f(n) \overline{g_j(n)}| \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2}

\displaystyle  \times ( \sum_{j=1}^J \sum_{j'=1}^J |\sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}| )^{1/2}

as well as the variant inequality

\displaystyle  |\sum_{j=1}^J \sum_{n} f(n) \overline{g_j(n)}| \leq (\sum_n |f(n)|^2 / \nu(n))^{1/2}

\displaystyle  \times | \sum_{j=1}^J \sum_{j'=1}^J \sum_n \nu(n) g_j(n) \overline{g_{j'}(n)}|^{1/2}.

Proposition 2 has many applications in analytic number theory; for instance, we will use it in later notes to control the large values of Dirichlet series such as the Riemann zeta function. One of the key benefits is that it largely eliminates the need to consider further correlations of the function {f} (other than its self-correlation {\sum_n |f(n)|^2 / \nu(n)} relative to {\nu^{-1}}, which is usually fairly easy to compute or estimate as {\nu} is usually chosen to be relatively simple); this is particularly useful if {f} is a function which is significantly more complicated to analyse than the functions {g_j}. Of course, the tradeoff for this is that one now has to deal with the coefficients {c_j}, which if anything are even less understood than {f}, since literally the only thing we know about these coefficients is their square sum {\sum_{j=1}^J |c_j|^2}. However, as long as there is enough almost orthogonality between the {g_j}, one can estimate the {c_j} by fairly crude estimates (e.g. triangle inequality or Cauchy-Schwarz) and still get reasonably good estimates.

In this set of notes, we will use Proposition 2 to prove some versions of the large sieve inequality, which controls a square-sum of correlations

\displaystyle  \sum_n f(n) e( -\xi_j n )

of an arbitrary finitely supported function {f: {\bf Z} \rightarrow {\bf C}} with various additive characters {n \mapsto e( \xi_j n)} (where {e(x) := e^{2\pi i x}}), or alternatively a square-sum of correlations

\displaystyle  \sum_n f(n) \overline{\chi_j(n)}

of {f} with various primitive Dirichlet characters {\chi_j}; it turns out that one can prove a (slightly sub-optimal) version of this inequality quite quickly from Proposition 2 if one first prepares the sum by inserting a smooth cutoff with well-behaved Fourier transform. The large sieve inequality has many applications (as the name suggests, it has particular utility within sieve theory). For the purposes of this set of notes, though, the main application we will need it for is the Bombieri-Vinogradov theorem, which in a very rough sense gives a prime number theorem in arithmetic progressions, which, “on average”, is of strength comparable to the results provided by the Generalised Riemann Hypothesis (GRH), but has the great advantage of being unconditional (it does not require any unproven hypotheses such as GRH); it can be viewed as a significant extension of the Siegel-Walfisz theorem from Notes 2. As we shall see in later notes, the Bombieri-Vinogradov theorem is a very useful ingredient in sieve-theoretic problems involving the primes.

There is however one additional important trick, beyond the large sieve, which we will need in order to establish the Bombieri-Vinogradov theorem. As it turns out, after some basic manipulations (and the deployment of some multiplicative number theory, and specifically the Siegel-Walfisz theorem), the task of proving the Bombieri-Vinogradov theorem is reduced to that of getting a good estimate on sums that are roughly of the form

\displaystyle  \sum_{j=1}^J |\sum_n \Lambda(n) \overline{\chi_j}(n)| \ \ \ \ \ (7)

for some primitive Dirichlet characters {\chi_j}. This looks like the type of sum that can be controlled by the large sieve (or by Proposition 2), except that this is an ordinary sum rather than a square sum (i.e., an {\ell^1} norm rather than an {\ell^2} norm). One could of course try to control such a sum in terms of the associated square-sum through the Cauchy-Schwarz inequality, but this turns out to be very wasteful (it loses a factor of about {J^{1/2}}). Instead, one should try to exploit the special structure of the von Mangoldt function {\Lambda}, in particular the fact that it can be expressed as a Dirichlet convolution {\alpha * \beta} of two further arithmetic sequences {\alpha,\beta} (or as a finite linear combination of such Dirichlet convolutions). The reason for introducing this convolution structure lies in the basic identity

\displaystyle  (\sum_n \alpha*\beta(n) \overline{\chi_j}(n)) = (\sum_n \alpha(n) \overline{\chi_j}(n)) (\sum_n \beta(n) \overline{\chi_j}(n)) \ \ \ \ \ (8)

for any finitely supported sequences {\alpha,\beta: {\bf N} \rightarrow {\bf C}}, as can be easily seen by multiplying everything out and using the completely multiplicative nature of {\chi_j}. (This is the multiplicative analogue of the well-known relationship {\widehat{f*g}(\xi) = \hat f(\xi) \hat g(\xi)} between ordinary convolution and Fourier coefficients.) This factorisation, together with yet another application of the Cauchy-Schwarz inequality, lets one control (7) by square-sums of the sort that can be handled by the large sieve inequality.
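The identity (8) is elementary, but it is worth seeing in action. The sketch below (an arbitrary pair of finitely supported sequences and the real non-principal character mod {4}, all our own choices) confirms the factorisation numerically:

```python
chi = lambda n: (0, 1, 0, -1)[n % 4]   # completely multiplicative, real-valued

alpha = {2: 1.5, 3: -2.0, 7: 0.5}      # finitely supported sequences
beta = {1: 1.0, 5: 2.0, 6: -1.0}

conv = {}                              # Dirichlet convolution alpha * beta
for a, va in alpha.items():
    for b, vb in beta.items():
        conv[a * b] = conv.get(a * b, 0.0) + va * vb

lhs = sum(v * chi(n) for n, v in conv.items())
rhs = (sum(v * chi(n) for n, v in alpha.items())
       * sum(v * chi(n) for n, v in beta.items()))
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```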

As we have seen in Notes 1, the von Mangoldt function {\Lambda} does indeed admit several factorisations into Dirichlet convolution type, such as the factorisation {\Lambda = \mu * L}. One can try directly inserting this factorisation into the above strategy; it almost works, however there turns out to be a problem when considering the contribution of the portion of {\mu} or {L} that is supported at very small natural numbers, as the large sieve loses any gain over the trivial bound in such settings. Because of this, there is a need for a more sophisticated decomposition of {\Lambda} into Dirichlet convolutions {\alpha * \beta} which are non-degenerate in the sense that {\alpha,\beta} are supported away from small values. (As a non-example, the trivial factorisation {\Lambda = \Lambda * \delta} would be a totally inappropriate factorisation for this purpose.) Fortunately, it turns out that through some elementary combinatorial manipulations, some satisfactory decompositions of this type are available, such as the Vaughan identity and the Heath-Brown identity. By using one of these identities we will be able to complete the proof of the Bombieri-Vinogradov theorem. (These identities are also useful for other applications in which one wishes to control correlations between the von Mangoldt function {\Lambda} and some other sequence; we will see some examples of this in later notes.)

For further reading on these topics, including a significantly larger number of examples of the large sieve inequality, see Chapters 7 and 17 of Iwaniec and Kowalski.

Remark 5 We caution that the presentation given in this set of notes is highly ahistorical; we are using modern streamlined proofs of results that were first obtained by more complicated arguments.


We now move away from the world of multiplicative prime number theory covered in Notes 1 and Notes 2, and enter the wider, and complementary, world of non-multiplicative prime number theory, in which one studies statistics related to non-multiplicative patterns, such as twins {n,n+2}. This creates a major jump in difficulty; for instance, even the most basic multiplicative result about the primes, namely Euclid’s theorem that there are infinitely many of them, remains unproven for twin primes. Of course, the situation is even worse for stronger results, such as Euler’s theorem, Dirichlet’s theorem, or the prime number theorem. Finally, even many multiplicative questions about the primes remain open. The most famous of these is the Riemann hypothesis, which gives the asymptotic {\sum_{n \leq x} \Lambda(n) = x + O( \sqrt{x} \log^2 x )} (see Proposition 24 from Notes 2). But even if one assumes the Riemann hypothesis, the precise distribution of the error term {O( \sqrt{x} \log^2 x )} in the above asymptotic (or in related asymptotics, such as for the sum {\sum_{x \leq n < x+y} \Lambda(n)} that measures the distribution of primes in short intervals) is not entirely clear.

Despite this, we do have a number of extremely convincing and well supported models for the primes (and related objects) that let us predict what the answer to many prime number theory questions (both multiplicative and non-multiplicative) should be, particularly in asymptotic regimes where one can work with aggregate statistics about the primes, rather than with a small number of individual primes. These models are based on taking some statistical distribution related to the primes (e.g. the primality properties of a randomly selected {k}-tuple), and replacing that distribution by a model distribution that is easy to compute with (e.g. a distribution with strong joint independence properties). One can then predict the asymptotic value of various (normalised) statistics about the primes by replacing the relevant statistical distributions of the primes with their simplified models. In this non-rigorous setting, many difficult conjectures on the primes reduce to relatively simple calculations; for instance, all four of the (still unsolved) Landau problems may now be justified in the affirmative by one or more of these models. Indeed, the models are so effective at this task that analytic number theory is in the curious position of being able to confidently predict the answer to a large proportion of the open problems in the subject, whilst not possessing a clear way forward to rigorously confirm these answers!

As it turns out, the models for primes that have turned out to be the most accurate in practice are random models, which involve (either explicitly or implicitly) one or more random variables. This is despite the prime numbers being obviously deterministic in nature; no coins are flipped or dice rolled to create the set of primes. The point is that while the primes have a lot of obvious multiplicative structure (for instance, the product of two primes is never another prime), they do not appear to exhibit much discernible non-multiplicative structure asymptotically, in the sense that they rarely exhibit statistical anomalies in the asymptotic limit that cannot be easily explained in terms of the multiplicative properties of the primes. As such, when considering non-multiplicative statistics of the primes, the primes appear to behave pseudorandomly, and can thus be modeled with reasonable accuracy by a random model. And even for multiplicative problems, which are in principle controlled by the zeroes of the Riemann zeta function, one can obtain good predictions by positing various pseudorandomness properties of these zeroes, so that the distribution of these zeroes can be modeled by a random model.

Of course, one cannot expect perfect accuracy when replicating a deterministic set such as the primes by a probabilistic model of that set, and each of the heuristic models we discuss below have some limitations to the range of statistics about the primes that they can expect to track with reasonable accuracy. For instance, many of the models about the primes do not fully take into account the multiplicative structure of primes, such as the connection with a zeta function with a meromorphic continuation to the entire complex plane; at the opposite extreme, we have the GUE hypothesis which appears to accurately model the zeta function, but does not capture such basic properties of the primes as the fact that the primes are all natural numbers. Nevertheless, each of the models described below, when deployed within their sphere of reasonable application, has (possibly after some fine-tuning) given predictions that are in remarkable agreement with numerical computation and with known rigorous theoretical results, as well as with other models in overlapping spheres of application; they are also broadly compatible with the general heuristic (discussed in this previous post) that in the absence of any exploitable structure, asymptotic statistics should default to the most “uniform”, “pseudorandom”, or “independent” distribution allowable.

As hinted at above, we do not have a single unified model for the prime numbers (other than the primes themselves, of course), but instead have an overlapping family of useful models that each appear to accurately describe some, but not all, aspects of the prime numbers. In this set of notes, we will discuss four such models:

  1. The Cramér random model and its refinements, which model the set {{\mathcal P}} of prime numbers by a random set (a small simulation in this spirit is sketched just after this list).
  2. The Möbius pseudorandomness principle, which predicts that the Möbius function {\mu} does not correlate with any genuinely different arithmetic sequence of reasonable “complexity”.
  3. The equidistribution of residues principle, which predicts that the residue classes of a large number {n} modulo a small or medium-sized prime {p} behave as if they are independently and uniformly distributed as {p} varies.
  4. The GUE hypothesis, which asserts that the zeroes of the Riemann zeta function are distributed (at microscopic and mesoscopic scales) like the zeroes of a GUE random matrix, and which generalises the pair correlation conjecture regarding pairs of such zeroes.
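To give a flavour of the first of these models, the sketch below (a single naive Cramér sample of our own construction; no claims of rigour) draws a random set by including each {n} independently with probability {1/\log n} and compares its statistics with those of the actual primes. In a typical run the sample tracks the global count well, but its twin count falls short of the truth by roughly the factor {2\Pi_2 \approx 1.32} that the refinements mentioned above are designed to restore:

```python
import random
from math import isqrt, log

def is_prime(n):
    return n > 1 and all(n % p for p in range(2, isqrt(n) + 1))

rng = random.Random(0)
x = 10**5
primes = [n for n in range(2, x) if is_prime(n)]
model = [n for n in range(3, x) if rng.random() < 1 / log(n)]  # naive Cramér sample

prime_set, model_set = set(primes), set(model)
print(len(primes), len(model))   # both of size ~ x / log x
print(sum(1 for p in primes if p + 2 in prime_set),   # true twin count
      sum(1 for n in model if n + 2 in model_set))    # model twin count, smaller
```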

This is not an exhaustive list of models for the primes and related objects; for instance, there is also the model in which the major arc contribution in the Hardy-Littlewood circle method is predicted to always dominate, and with regards to various finite groups of number-theoretic importance, such as the class groups discussed in Supplement 1, there are also heuristics of Cohen-Lenstra type. Historically, the first heuristic discussion of the primes along these lines was by Sylvester, who worked informally with a model somewhat related to the equidistribution of residues principle. However, we will not discuss any of these models here.

A word of warning: the discussion of the above four models will inevitably be largely informal, and “fuzzy” in nature. While one can certainly make precise formalisations of at least some aspects of these models, one should not be inflexibly wedded to a specific such formalisation as being “the” correct way to pin down the model rigorously. (To quote the statistician George Box: “all models are wrong, but some are useful”.) Indeed, we will see some examples below the fold in which some finer structure in the prime numbers leads to a correction term being added to a “naive” implementation of one of the above models to make it more accurate, and it is perfectly conceivable that some further such fine-tuning will be applied to one or more of these models in the future. These sorts of mathematical models are in some ways closer in nature to the scientific theories used to model the physical world, than they are to the axiomatic theories one is accustomed to in rigorous mathematics, and one should approach the discussion below accordingly. In particular, and in contrast to the other notes in this course, the material here is not directly used for proving further theorems, which is why we have marked it as “optional” material. Nevertheless, the heuristics and models here are still used indirectly for such purposes, for instance by

  • giving a clearer indication of what results one expects to be true, thus guiding one to fruitful conjectures;
  • providing a quick way to scan for possible errors in a mathematical claim (e.g. by finding that the main term is off from what a model predicts, or an error term is too small);
  • gauging the relative strength of various assertions (e.g. classifying some results as “unsurprising”, others as “potential breakthroughs” or “powerful new estimates”, others as “unexpected new phenomena”, and yet others as “way too good to be true”); or
  • setting up heuristic barriers (such as the parity barrier) that one has to resolve before resolving certain key problems (e.g. the twin prime conjecture).

See also my previous essay on the distinction between “rigorous” and “post-rigorous” mathematics, or Thurston’s essay discussing, among other things, the “definition-theorem-proof” model of mathematics and its limitations.

Remark 1 The material in this set of notes presumes some prior exposure to probability theory. See for instance this previous post for a quick review of the relevant concepts.


In 1946, Ulam, in response to a theorem of Anning and Erdös, posed the following problem:

Problem 1 (Erdös-Ulam problem) Let {S \subset {\bf R}^2} be a set such that the distance between any two points in {S} is rational. Is it true that {S} cannot be (topologically) dense in {{\bf R}^2}?

The paper of Anning and Erdös had answered this question in the affirmative in the case when all the distances between points in {S} were required to be integers rather than merely rationals.

The Erdös-Ulam problem remains open; it was discussed recently over at Gödel’s lost letter. It is in fact likely (as we shall see below) that the set {S} in the above problem is not only forbidden to be topologically dense, but also cannot be Zariski dense either. If so, then the structure of {S} is quite restricted; it was shown by Solymosi and de Zeeuw that if {S} fails to be Zariski dense, then all but finitely many of the points of {S} must lie on a single line, or a single circle. (Conversely, it is easy to construct examples of dense subsets of a line or circle in which all distances are rational, though in the latter case the square of the radius of the circle must also be rational.)
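For instance, here is a minimal sketch (the specific Pythagorean triples are my own choice) of the classical dense-on-a-circle construction: placing a point at angle {2\theta} on the unit circle for each Pythagorean angle {\theta} produces rational points whose pairwise chord lengths {2|\sin(\theta_j - \theta_k)|} are rational:

```python
from fractions import Fraction
from math import isqrt

# Pythagorean triples (a, b, c): the angle t with cos t = a/c, sin t = b/c
triples = [(3, 4, 5), (5, 12, 13), (8, 15, 17), (20, 21, 29)]

# place a point at angle 2t on the unit circle: (cos 2t, sin 2t) is rational
points = [(Fraction(a * a - b * b, c * c), Fraction(2 * a * b, c * c))
          for (a, b, c) in triples]

def is_rational_square(q):  # q a non-negative Fraction (in lowest terms)
    return (isqrt(q.numerator) ** 2 == q.numerator
            and isqrt(q.denominator) ** 2 == q.denominator)

# the chord between angles 2t, 2u has length 2|sin(t-u)|, rational here
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        (x1, y1), (x2, y2) = points[i], points[j]
        d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
        assert is_rational_square(d2)
print("all pairwise distances rational")
```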

The main tool of the Solymosi-de Zeeuw analysis was Faltings’ celebrated theorem that every algebraic curve of genus at least two contains only finitely many rational points. The purpose of this post is to observe that an affirmative answer to the full Erdös-Ulam problem similarly follows from the conjectured analogue of Faltings’ theorem for surfaces, namely the following conjecture of Bombieri and Lang:

Conjecture 2 (Bombieri-Lang conjecture) Let {X} be a smooth projective irreducible algebraic surface defined over the rationals {{\bf Q}} which is of general type. Then the set {X({\bf Q})} of rational points of {X} is not Zariski dense in {X}.

In fact, the Bombieri-Lang conjecture has been made for varieties of arbitrary dimension, and for more general number fields than the rationals, but the above special case of the conjecture is the only one needed for this application. We will review what “general type” means (for smooth projective complex varieties, at least) below the fold.

The Bombieri-Lang conjecture is considered to be extremely difficult, in particular being substantially harder than Faltings’ theorem, which is itself a highly non-trivial result. So this implication should not be viewed as a practical route to resolving the Erdös-Ulam problem unconditionally; rather, it is a demonstration of the power of the Bombieri-Lang conjecture. Still, it was an instructive algebraic geometry exercise for me to carry out the details of this implication, which quickly boils down to verifying that a certain quite explicit algebraic surface is of general type (Theorem 4 below). As I am not an expert in the subject, my computations here will be rather tedious and pedestrian; it is likely that they could be made much slicker by exploiting more of the machinery of modern algebraic geometry, and I would welcome any such streamlining by actual experts in this area. (For similar reasons, there may be more typos and errors than usual in this post; corrections are welcome as always.) My calculations here are based on a similar calculation of van Luijk, who used analogous arguments to show (assuming Bombieri-Lang) that the set of perfect cuboids is not Zariski-dense in its projective parameter space.

We also remark that in a recent paper of Makhul and Shaffaf, the Bombieri-Lang conjecture (or more precisely, a weaker consequence of that conjecture) was used to show that if {S} is a subset of {{\bf R}^2} with rational distances which intersects any line in only finitely many points, then there is a uniform bound on the cardinality of the intersection of {S} with any line. I have also recently learned (private communication) that an unpublished work of Shaffaf has obtained a result similar to the one in this post, namely that the Erdös-Ulam conjecture follows from the Bombieri-Lang conjecture, plus an additional conjecture about the rational curves in a specific surface.

Let us now give the elementary reductions to the claim that a certain variety is of general type. For the sake of contradiction, let {S} be a topologically dense subset of {{\bf R}^2} such that the distance between any two points in {S} is rational. Then {S} certainly contains two points that are a rational distance apart. By applying a translation, rotation, and a (rational) dilation, we may assume that these two points are {(0,0)} and {(1,0)}. As {S} is dense, there is a third point of {S} not on the {x} axis, which after a reflection we can place in the upper half-plane; we will write it as {(a,\sqrt{b})} with {b>0}.

Given any two points {P, Q} in {S}, the quantities {|P|^2, |Q|^2, |P-Q|^2} are rational, and so by the cosine rule the dot product {P \cdot Q} is rational as well. Since {(1,0) \in S}, this implies that the {x}-component of every point {P} in {S} is rational; this in turn implies that the product of the {y}-coordinates of any two points {P,Q} in {S} is rational as well (since this differs from {P \cdot Q} by a rational number). In particular, {a} and {b} are rational, and all of the points in {S} now lie in the lattice {\{ ( x, y\sqrt{b}): x, y \in {\bf Q} \}}. (This fact appears to have first been observed in the 1988 Habilitationsschrift of Kemnitz.)

Now take four points {(x_j,y_j \sqrt{b})}, {j=1,\dots,4} in {S} in general position (so that the octuplet {(x_1,y_1\sqrt{b},\dots,x_4,y_4\sqrt{b})} avoids any pre-specified hypersurface in {{\bf C}^8}); this can be done if {S} is dense. (If one wished, one could re-use the three previous points {(0,0), (1,0), (a,\sqrt{b})} to be three of these four points, although this ultimately makes little difference to the analysis.) If {(x,y\sqrt{b})} is any point in {S}, then the distances {r_j} from {(x,y\sqrt{b})} to {(x_j,y_j\sqrt{b})} are rationals that obey the equations

\displaystyle (x - x_j)^2 + b (y-y_j)^2 = r_j^2

for {j=1,\dots,4}, and thus determine a rational point in the affine complex variety {V = V_{b,x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4} \subset {\bf C}^6} defined as

\displaystyle V := \{ (x,y,r_1,r_2,r_3,r_4) \in {\bf C}^6:

\displaystyle (x - x_j)^2 + b (y-y_j)^2 = r_j^2 \hbox{ for } j=1,\dots,4 \}.

By inspecting the projection {(x,y,r_1,r_2,r_3,r_4) \rightarrow (x,y)} from {V} to {{\bf C}^2}, we see that {V} is a branched cover of {{\bf C}^2}, with the generic fibre having {2^4=16} points (coming from the different ways to form the square roots {r_1,r_2,r_3,r_4}); in particular, {V} is a complex affine algebraic surface, defined over the rationals. By inspecting the monodromy around the four singular base points {(x,y) = (x_i,y_i)} (which switch the sign of one of the roots {r_i}, while keeping the other three roots unchanged), we see that the variety {V} is connected away from its singular set, and thus irreducible. As {S} is topologically dense in {{\bf R}^2}, it is Zariski-dense in {{\bf C}^2}, and so {S} generates a Zariski-dense set of rational points in {V}. To solve the Erdös-Ulam problem, it thus suffices to show that

Claim 3 For any non-zero rational {b} and for rationals {x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4} in general position, the set of rational points of the affine surface {V = V_{b,x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4}} is not Zariski dense in {V}.

This is already very close to a claim that can be directly resolved by the Bombieri-Lang conjecture, but {V} is affine rather than projective, and also contains some singularities. The first issue is easy to deal with, by working with the projectivisation

\displaystyle \overline{V} := \{ [X,Y,Z,R_1,R_2,R_3,R_4] \in {\bf CP}^6: Q(X,Y,Z,R_1,R_2,R_3,R_4) = 0 \} \ \ \ \ \ (1)

 

of {V}, where {Q: {\bf C}^7 \rightarrow {\bf C}^4} is the quadruple of homogeneous quadratic polynomials

\displaystyle Q(X,Y,Z,R_1,R_2,R_3,R_4) := (Q_j(X,Y,Z,R_1,R_2,R_3,R_4) )_{j=1}^4

with

\displaystyle Q_j(X,Y,Z,R_1,R_2,R_3,R_4) := (X-x_j Z)^2 + b (Y-y_jZ)^2 - R_j^2

and the projective complex space {{\bf CP}^6} is the space of all equivalence classes {[X,Y,Z,R_1,R_2,R_3,R_4]} of tuples {(X,Y,Z,R_1,R_2,R_3,R_4) \in {\bf C}^7 \backslash \{0\}} up to projective equivalence {(\lambda X, \lambda Y, \lambda Z, \lambda R_1, \lambda R_2, \lambda R_3, \lambda R_4) \sim (X,Y,Z,R_1,R_2,R_3,R_4)}. By identifying the affine point {(x,y,r_1,r_2,r_3,r_4)} with the projective point {[x,y,1,r_1,r_2,r_3,r_4]}, we see that {\overline{V}} consists of the affine variety {V} together with the set {\{ [X,Y,0,R_1,R_2,R_3,R_4]: X^2+bY^2=R_1^2; R_j = \pm R_1 \hbox{ for } j=2,3,4\}}, which is the union of eight curves, each of which lies in the closure of {V}. Thus {\overline{V}} is the projective closure of {V}, and is thus a complex irreducible projective surface, defined over the rationals. As {\overline{V}} is cut out by four quadric equations in {{\bf CP}^6} and has degree sixteen (as can be seen for instance by inspecting the intersection of {\overline{V}} with a generic perturbation of a fibre over the generically defined projection {[X,Y,Z,R_1,R_2,R_3,R_4] \mapsto [X,Y,Z]}), it is also a complete intersection. To establish Claim 3, it then suffices to show that the rational points in {\overline{V}} are not Zariski dense in {\overline{V}}.

Heuristically, the reason why we expect few rational points in {\overline{V}} is as follows. First observe from the projective nature of (1) that every rational point is equivalent to an integer point. But for a septuple {(X,Y,Z,R_1,R_2,R_3,R_4)} of integers of size {O(N)}, the quantity {Q(X,Y,Z,R_1,R_2,R_3,R_4)} is an integer point of {{\bf Z}^4} of size {O(N^2)}, and so should only vanish about {O(N^{-8})} of the time. Hence the number of integer points {(X,Y,Z,R_1,R_2,R_3,R_4) \in {\bf Z}^7} of height comparable to {N} should be about

\displaystyle O(N)^7 \times O(N^{-8}) = O(N^{-1});

this is a convergent sum if {N} ranges over (say) powers of two, and so from standard probabilistic heuristics (see this previous post) we in fact expect only finitely many solutions, in the absence of any special algebraic structure (e.g. the structure of an abelian variety, or a birational reduction to a simpler variety) that could produce an unusually large number of solutions.
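One can test this heuristic crudely by brute force. The sketch below (with the arbitrary sample choices {b=2} and base points {(0,0),(1,0),(0,1),(2,3)}, and ignoring projective normalisation) counts affine integer points of bounded height; the curves at infinity {Z=0} are deliberately excluded, as they are an example of the special algebraic structure just mentioned and would otherwise dominate the count:

```python
from math import isqrt

b = 2
base = [(0, 0), (1, 0), (0, 1), (2, 3)]   # sample (x_j, y_j), my own choice

def count_points(N):
    # count integer points (X,Y,Z,R_1,...,R_4) of height <= N with Z != 0
    # satisfying (X - x_j Z)^2 + b (Y - y_j Z)^2 = R_j^2 for j = 1,...,4
    count = 0
    for X in range(-N, N + 1):
        for Y in range(-N, N + 1):
            for Z in range(-N, N + 1):
                if Z == 0:
                    continue          # skip the eight curves at infinity
                rs = []
                for (xj, yj) in base:
                    v = (X - xj * Z) ** 2 + b * (Y - yj * Z) ** 2
                    r = isqrt(v)
                    if r * r != v or r > N:
                        break         # not a perfect square: no point here
                    rs.append(r)
                else:
                    count += 2 ** sum(1 for r in rs if r > 0)  # sign choices
    return count

for N in (5, 10, 20):
    print(N, count_points(N))
```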

The Bombieri-Lang conjecture, Conjecture 2, can be viewed as a formalisation of the above heuristics (roughly speaking, it is one of the most optimistic natural conjectures one could make that is compatible with these heuristics while also being invariant under birational equivalence).

Unfortunately, {\overline{V}} contains some singular points. As {\overline{V}} is a complete intersection, singularity occurs precisely when the Jacobian matrix of the map {Q: {\bf C}^7 \rightarrow {\bf C}^4} has less than full rank, or equivalently when the gradient vectors

\displaystyle \nabla Q_j = (2(X-x_j Z), 2b(Y-y_j Z), -2x_j (X-x_j Z) - 2by_j (Y-y_j Z), \ \ \ \ \ (2)

\displaystyle 0, \dots, 0, -2R_j, 0, \dots, 0)

for {j=1,\dots,4} are linearly dependent, where the {-2R_j} is in the coordinate position associated to {R_j}. One way in which this can occur is if one of the gradient vectors {\nabla Q_j} vanishes identically. This occurs at precisely {4 \times 2^3 = 32} points, when {[X,Y,Z]} is equal to {[x_j,y_j,1]} for some {j=1,\dots,4}, and one has {R_k = \pm ( (x_j - x_k)^2 + b (y_j - y_k)^2 )^{1/2}} for all {k=1,\dots,4} (so in particular {R_j=0}). Let us refer to these as the obvious singularities; they arise from the geometrically evident fact that the distance function {(x,y\sqrt{b}) \mapsto \sqrt{(x-x_j)^2 + b(y-y_j)^2}} is singular at {(x_j,y_j\sqrt{b})}.

The other way in which this could occur is if a non-trivial linear combination of at least two of the gradient vectors vanishes. From (2), this can only occur if {R_j=R_k=0} for some distinct {j,k}, which from (1) implies that

\displaystyle (X - x_j Z) = \pm \sqrt{b} i (Y - y_j Z) \ \ \ \ \ (3)

 

and

\displaystyle (X - x_k Z) = \pm \sqrt{b} i (Y - y_k Z) \ \ \ \ \ (4)

 

for two choices of sign {\pm}. If the signs are equal, then (as {x_j, y_j, x_k, y_k} are in general position) this implies that {Z=0}, and then we have the singular point

\displaystyle [X,Y,Z,R_1,R_2,R_3,R_4] = [\pm \sqrt{b} i, 1, 0, 0, 0, 0, 0]. \ \ \ \ \ (5)

 

If the non-trivial linear combination involved three or more gradient vectors, then by the pigeonhole principle at least two of the signs involved must be equal, and so the only singular points are (5). So the only remaining possibility is when we have two gradient vectors {\nabla Q_j, \nabla Q_k} that are parallel but non-zero, with the signs in (3), (4) opposing. But then (as {x_j,y_j,x_k,y_k} are in general position) the vectors {(X-x_j Z, Y-y_j Z), (X-x_k Z, Y-y_k Z)} are non-zero and non-parallel to each other, a contradiction. Thus, outside of the {32} obvious singular points mentioned earlier, the only other singular points are the two points (5).
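For readers who want to double-check the above case analysis, here is a short sympy computation (with sample parameters of my own choosing) that evaluates the rank of the {4 \times 7} Jacobian of {Q} at an obvious singularity, at one of the points (5), and at a generic point of {V}:

```python
import sympy as sp

X, Y, Z = sp.symbols('X Y Z')
R = sp.symbols('R1:5')                 # R1, R2, R3, R4
b = sp.Integer(2)                      # sample parameters, my own choice
pts = [(0, 0), (1, 0), (0, 1), (2, 3)]

Q = [(X - xj * Z) ** 2 + b * (Y - yj * Z) ** 2 - Rj ** 2
     for (xj, yj), Rj in zip(pts, R)]
J = sp.Matrix(Q).jacobian([X, Y, Z, *R])   # the 4 x 7 Jacobian matrix

# an "obvious" singularity: [X,Y,Z] = [x_1,y_1,1], R_k^2 = (x_1-x_k)^2 + b(y_1-y_k)^2
obvious = {X: 0, Y: 0, Z: 1, R[0]: 0, R[1]: 1, R[2]: sp.sqrt(b), R[3]: sp.sqrt(4 + 9 * b)}
print(J.subs(obvious).rank())    # 3 < 4: singular

# one of the two non-obvious singular points (5): [sqrt(b) i, 1, 0, 0, 0, 0, 0]
nonobvious = {X: sp.sqrt(b) * sp.I, Y: 1, Z: 0, **{Rj: 0 for Rj in R}}
print(J.subs(nonobvious).rank())  # 2 < 4: singular

# a generic point of V for comparison
generic = {X: 5, Y: 7, Z: 1}
generic.update({Rj: sp.sqrt(Qj.subs(generic) + Rj ** 2) for Qj, Rj in zip(Q, R)})
print(J.subs(generic).rank())    # 4: smooth away from the singular points
```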

We will shortly show that the {32} obvious singularities are ordinary double points; the surface {\overline{V}} near any of these points is analytically equivalent to an ordinary cone {\{ (x,y,z) \in {\bf C}^3: z^2 = x^2 + y^2 \}} near the origin, which is a cone over a smooth conic curve {\{ (x,y) \in {\bf C}^2: x^2+y^2=1\}}. The two non-obvious singularities (5) are slightly more complicated than ordinary double points; they are elliptic singularities, which approximately resemble a cone over an elliptic curve. (As far as I can tell, this resemblance is exact in the category of real smooth manifolds, but not in the category of algebraic varieties.) If one blows up each of the point singularities of {\overline{V}} separately, no further singularities are created, and one obtains a smooth projective surface {X} (using the Segre embedding as necessary to embed {X} back into projective space, rather than in a product of projective spaces). Away from the singularities, the rational points of {\overline{V}} lift up to rational points of {X}. Assuming the Bombieri-Lang conjecture, we thus are able to answer the Erdös-Ulam problem in the affirmative once we establish

Theorem 4 The blowup {X} of {\overline{V}} is of general type.

This will be done below the fold, by the pedestrian device of explicitly constructing global differential forms on {X}; I will also be working from a complex analysis viewpoint rather than an algebraic geometry viewpoint as I am more comfortable with the former approach. (As mentioned above, though, there may well be a quicker way to establish this result by using more sophisticated machinery.)

I thank Mark Green and David Gieseker for helpful conversations (and a crash course in varieties of general type!).

Remark 5 The above argument shows in fact (assuming Bombieri-Lang) that sets {S \subset {\bf R}^2} with all distances rational cannot be Zariski-dense, and thus (by Solymosi-de Zeeuw) must lie on a single line or circle with only finitely many exceptions. Assuming a stronger version of Bombieri-Lang involving a general number field {K}, we obtain a similar conclusion with “rational” replaced by “lying in {K}” (one has to extend the Solymosi-de Zeeuw analysis to more general number fields, but this should be routine, using the analogue of Faltings’ theorem for such number fields).


Kevin Ford, Ben Green, Sergei Konyagin, James Maynard, and I have just uploaded to the arXiv our paper “Long gaps between primes“. This is a followup work to our two previous papers (discussed in this previous post), in which we had simultaneously shown that the maximal gap

\displaystyle  G(X) := \sup_{p_n, p_{n+1} \leq X} p_{n+1}-p_n

between primes up to {X} exhibited a lower bound of the shape

\displaystyle  G(X) \geq f(X) \log X \frac{\log \log X \log\log\log\log X}{(\log\log\log X)^2} \ \ \ \ \ (1)

for some function {f(X)} that went to infinity as {X \rightarrow \infty}; this improved upon previous work of Rankin and other authors, who established the same bound but with {f(X)} replaced by a constant. (Again, see the previous post for a more detailed discussion.)

In our previous papers, we did not specify a particular growth rate for {f(X)}. In my paper with Kevin, Ben, and Sergei, there was a good reason for this: our argument relied (amongst other things) on the inverse conjecture on the Gowers norms, as well as the Siegel-Walfisz theorem, and the known proofs of both results have ineffective constants, rendering our growth function {f(X)} similarly ineffective. Maynard’s approach ostensibly also relies on the Siegel-Walfisz theorem, but (as shown in another recent paper of his) can be made quite effective, even when tracking {k}-tuples of fairly large size (about {\log^c x} for some small {c}). If one carefully makes all the bounds in Maynard’s argument quantitative, one eventually ends up with a growth rate {f(X)} of shape

\displaystyle  f(X) \asymp \frac{\log \log \log X}{\log\log\log\log X}, \ \ \ \ \ (2)

thus leading to a bound

\displaystyle  G(X) \gg \log X \frac{\log \log X}{\log\log\log X}

on the gaps between primes for large {X}; this is an unpublished calculation of James’.

In this paper we make a further refinement of this calculation to obtain a growth rate

\displaystyle  f(X) \asymp \log \log \log X \ \ \ \ \ (3)

leading to a bound of the form

\displaystyle  G(X) \geq c \log X \frac{\log \log X \log\log\log\log X}{\log\log\log X} \ \ \ \ \ (4)

for large {X} and some small constant {c}. Furthermore, this appears to be the limit of current technology (in particular, falling short of Cramér’s conjecture that {G(X)} is comparable to {\log^2 X}); in the spirit of Erdös’ original prize on this problem, I would like to offer 10,000 USD for anyone who can show (in a refereed publication, of course) that the constant {c} here can be replaced by an arbitrarily large constant {C}.
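Of course, the iterated logarithms in (4) only become appreciable at astronomically large {X}, so bounds such as (4) cannot meaningfully be tested numerically; still, as a quick illustration of the quantities involved (my own toy computation), one can compute {G(X)} exactly for small {X} and compare it against the conjectured Cramér order {\log^2 X}:

```python
import math

def max_gap(X):
    sieve = bytearray([1]) * (X + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(X) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, X + 1, p)))
    last, best = 2, 0
    for n in range(3, X + 1):
        if sieve[n]:
            best = max(best, n - last)
            last = n
    return best

for X in (10**4, 10**5, 10**6):
    # compare the true maximal gap with Cramér's conjectured order log^2 X
    print(X, max_gap(X), round(math.log(X) ** 2, 1))
```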

The reason for the growth rate (3) is as follows. After following the sieving process discussed in the previous post, the problem comes down to something like the following: can one sieve out all (or almost all) of the primes in {[x,y]} by removing one residue class modulo {p} for all primes {p} in (say) {[x/4,x/2]}? Very roughly speaking, if one can solve this problem with {y = g(x) x}, then one can obtain a growth rate on {f(X)} of the shape {f(X) \sim g(\log X)}. (This is an oversimplification, as one actually has to sieve out a random subset of the primes, rather than all the primes in {[x,y]}, but never mind this detail for now.)

Using the quantitative “dense clusters of primes” machinery of Maynard, one can find lots of {k}-tuples in {[x,y]} which contain at least {\gg \log k} primes, for {k} as large as {\log^c x} or so (so that {\log k} is about {\log\log x}). By considering {k}-tuples in arithmetic progression, this means that one can find lots of residue classes modulo a given prime {p} in {[x/4,x/2]} that capture about {\log\log x} primes. In principle, this means that the union of all these residue classes can cover about {\frac{x}{\log x} \log\log x} primes, allowing one to take {g(x)} as large as {\log\log x}, which corresponds to (3). However, there is a catch: the residue classes for different primes {p} may collide with each other, reducing the efficiency of the covering. In our previous papers on the subject, we selected the residue classes randomly, which meant that we had to insert an additional logarithmic safety margin in the expected number of times each prime would be sifted out by one of the residue classes, in order to guarantee that we would (with high probability) sift out most of the primes. This additional safety margin is ultimately responsible for the {\log\log\log\log X} loss in (2).

The main innovation of this paper, beyond detailing James’ unpublished calculations, is to use ideas from the literature on efficient hypergraph covering, to avoid the need for a logarithmic safety margin. The hypergraph covering problem, roughly speaking, is to try to cover a set of {n} vertices using as few “edges” from a given hypergraph {H} as possible. If each edge has {m} vertices, then one certainly needs at least {n/m} edges to cover all the vertices, and the question is to see if one can come close to attaining this bound given some reasonable uniform distribution hypotheses on the hypergraph {H}. As before, random methods tend to require something like {\frac{n}{m} \log r} edges before one expects to cover, say, {1-1/r} of the vertices.
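The logarithmic loss of the purely random method is easy to see experimentally; the following toy simulation (with arbitrary parameters {n=30000}, {m=100}) covers all but a {1/r} fraction of the vertices with fresh random edges, and the number of edges used is indeed about {\frac{n}{m} \log r}:

```python
import random

random.seed(1)
n, m = 30000, 100
vertices = range(n)

def edges_to_cover_all_but(target):
    covered, edges = set(), 0
    while n - len(covered) > target:
        covered.update(random.sample(vertices, m))  # a fresh random m-edge
        edges += 1
    return edges

for r in (10, 100):
    used = edges_to_cover_all_but(n // r)
    print(r, used, round(used / (n / m), 2))  # third column is ~ log r = 2.3, 4.6
```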

However, it turns out that (under reasonable hypotheses on {H}) one can eliminate this logarithmic loss, by using what is now known as the “semi-random method” or the “Rödl nibble”. The idea is to randomly select a small number of edges (a first “nibble”) – small enough that the edges are unlikely to overlap much with each other, thus obtaining maximal efficiency. Then, one pauses to remove all the edges from {H} that intersect edges from this first nibble, so that all remaining edges will not overlap with the existing edges. One then randomly selects another small number of edges (a second “nibble”), and repeats this process until enough nibbles are taken to cover most of the vertices. Remarkably, it turns out that under some reasonable assumptions on the hypergraph {H}, one can maintain control on the uniform distribution of the edges throughout the nibbling process, and obtain an efficient hypergraph covering. This strategy was carried out in detail in an influential paper of Pippenger and Spencer.
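Here is a deliberately crude toy version of the semi-random idea (for an unstructured random hypergraph, which is far simpler than the structured families needed in practice): discarding any proposed edge that meets previously selected edges makes each accepted edge cover exactly {m} new vertices, at the cost of stalling once few disjoint edges remain:

```python
import random

random.seed(2)
n, m = 20000, 10
universe = range(n)

def cover(nibble):
    covered, used, fails = set(), 0, 0
    while len(covered) < 0.99 * n and fails < 2000:
        e = random.sample(universe, m)
        if nibble and any(v in covered for v in e):
            fails += 1          # discard edges meeting earlier nibbles
            continue
        fails = 0
        covered.update(e)
        used += 1
    # report edges used, fraction covered, and edges per m vertices covered
    return used, len(covered) / n, used * m / len(covered)

print("pure random:", cover(False))  # efficiency ~ log(100) = 4.6 at 99% coverage
print("nibble-like:", cover(True))   # efficiency exactly 1.0 while disjoint edges last
```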

In our setup, the vertices are the primes in {[x,y]}, and the edges are the intersection of the primes with various residue classes. (Technically, we have to work with a family of hypergraphs indexed by a prime {p}, rather than a single hypergraph, but let me ignore this minor technical detail.) The semi-random method would in principle eliminate the logarithmic loss and recover the bound (3). However, there is a catch: the analysis of Pippenger and Spencer relies heavily on the assumption that the hypergraph is uniform, that is to say all edges have the same size. In our context, this requirement would mean that each residue class captures exactly the same number of primes, which is not the case; we only control the number of primes in an average sense, but we were unable to obtain any concentration of measure to come close to verifying this hypothesis. And indeed, the semi-random method, when applied naively, does not work well with edges of variable size – the problem is that edges of large size are much more likely to be eliminated after each nibble than edges of small size, since they have many more vertices that could overlap with the previous nibbles. Since the large edges are clearly the more useful ones for the covering problem than small ones, this bias towards eliminating large edges significantly reduces the efficiency of the semi-random method (and also greatly complicates the analysis of that method).

Our solution to this is to iteratively reweight the probability distribution on edges after each nibble to compensate for this bias effect, giving larger edges a greater weight than smaller edges. It turns out that there is a natural way to do this reweighting that allows one to repeat the Pippenger-Spencer analysis in the presence of edges of variable size, and this ultimately allows us to recover the full growth rate (3).

To go beyond (3), one either has to find a lot of residue classes that can capture significantly more than {\log\log x} primes of size {x} (which is the limit of the multidimensional Selberg sieve of Maynard and myself), or else one has to find a very different method to produce large gaps between primes than the Erdös-Rankin method, which is the method used in all previous work on the subject.

It turns out that the arguments in this paper can be combined with the Maier matrix method to also produce chains of consecutive large prime gaps whose size is of the order of (4); three of us (Kevin, James, and myself) will detail this in a future paper. (A similar combination was also recently observed in connection with our earlier result (1) by Pintz, but there are some additional technical wrinkles required to recover the full gain of (3) for the chains of large gaps problem.)

In Notes 2, the Riemann zeta function {\zeta} (and more generally, the Dirichlet {L}-functions {L(\cdot,\chi)}) were extended meromorphically into the region {\{ s: \hbox{Re}(s) > 0 \}} in and to the right of the critical strip. This is a sufficient amount of meromorphic continuation for many applications in analytic number theory, such as establishing the prime number theorem and its variants. The zeroes of the zeta function in the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}} are known as the non-trivial zeroes of {\zeta}, and thanks to the truncated explicit formulae developed in Notes 2, they control the asymptotic distribution of the primes (up to small errors).

The {\zeta} function obeys the trivial functional equation

\displaystyle  \zeta(\overline{s}) = \overline{\zeta(s)} \ \ \ \ \ (1)

for all {s} in its domain of definition. Indeed, as {\zeta(s)} is real-valued when {s} is real, the function {\zeta(s) - \overline{\zeta(\overline{s})}} vanishes on the real line and is also meromorphic, and hence vanishes everywhere. Similarly one has the functional equation

\displaystyle  \overline{L(s, \chi)} = L(\overline{s}, \overline{\chi}). \ \ \ \ \ (2)

From these equations we see that the zeroes of the zeta function are symmetric across the real axis, and the zeroes of {L(\cdot,\chi)} are the reflection of the zeroes of {L(\cdot,\overline{\chi})} across this axis.

It is a remarkable fact that these functions obey an additional, and more non-trivial, functional equation, this time establishing a symmetry across the critical line {\{ s: \hbox{Re}(s) = \frac{1}{2} \}} rather than the real axis. One consequence of this symmetry is that the zeta function and {L}-functions may be extended meromorphically to the entire complex plane. For the zeta function, the functional equation was discovered by Riemann, and reads as follows:

Theorem 1 (Functional equation for the Riemann zeta function) The Riemann zeta function {\zeta} extends meromorphically to the entire complex plane, with a simple pole at {s=1} and no other poles. Furthermore, one has the functional equation

\displaystyle  \zeta(s) = \alpha(s) \zeta(1-s) \ \ \ \ \ (3)

or equivalently

\displaystyle  \zeta(1-s) = \alpha(1-s) \zeta(s) \ \ \ \ \ (4)

for all complex {s} other than {s=0,1}, where {\alpha} is the function

\displaystyle  \alpha(s) := 2^s \pi^{s-1} \sin( \frac{\pi s}{2}) \Gamma(1-s). \ \ \ \ \ (5)

Here {\cos(z) := \frac{e^{iz} + e^{-iz}}{2}}, {\sin(z) := \frac{e^{iz}-e^{-iz}}{2i}} are the complex-analytic extensions of the classical trigonometric functions {\cos(x), \sin(x)}, and {\Gamma} is the Gamma function, whose definition and properties we review below the fold.
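The functional equation is easy to verify numerically to high precision; for instance, the following mpmath sketch checks (3) at a few arbitrarily chosen complex points:

```python
from mpmath import mp, zeta, gamma, sin, pi, mpc

mp.dps = 30

def alpha(s):
    # alpha(s) = 2^s pi^{s-1} sin(pi s / 2) Gamma(1-s), as in (5)
    return 2 ** s * pi ** (s - 1) * sin(pi * s / 2) * gamma(1 - s)

for s in (mpc("0.3", "14.1"), mpc("-2.5", "3.0"), mpc("0.7", "-25.0")):
    print(abs(zeta(s) - alpha(s) * zeta(1 - s)))  # ~1e-29 at 30-digit precision
```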

The functional equation can be placed in a more symmetric form as follows:

Corollary 2 (Functional equation for the Riemann xi function) The Riemann xi function

\displaystyle  \xi(s) := \frac{1}{2} s(s-1) \pi^{-s/2} \Gamma(\frac{s}{2}) \zeta(s) \ \ \ \ \ (6)

is analytic on the entire complex plane {{\bf C}} (after removing all removable singularities), and obeys the functional equations

\displaystyle  \xi(\overline{s}) = \overline{\xi(s)}

and

\displaystyle  \xi(s) = \xi(1-s). \ \ \ \ \ (7)

In particular, the zeroes of {\xi} consist precisely of the non-trivial zeroes of {\zeta}, and are symmetric about both the real axis and the critical line. Also, {\xi} is real-valued on the critical line and on the real axis.
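Again one can sanity-check these symmetries numerically (the sample points below are arbitrary):

```python
from mpmath import mp, zeta, gamma, pi, mpc

mp.dps = 25
# xi(s) = (1/2) s (s-1) pi^{-s/2} Gamma(s/2) zeta(s), as in (6)
xi = lambda s: s * (s - 1) / 2 * pi ** (-s / 2) * gamma(s / 2) * zeta(s)

s = mpc("0.3", "14.1")
print(abs(xi(s) - xi(1 - s)))   # ~0: the symmetry (7)
print(xi(mpc("0.5", "21.0")))   # imaginary part ~0: xi is real on the critical line
```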

Corollary 2 is an easy consequence of Theorem 1 together with the duplication theorem for the Gamma function, and the fact that {\zeta} has no zeroes to the right of the critical strip, and is left as an exercise to the reader (Exercise 19). The functional equation in Theorem 1 has many proofs, but most of them are related in one way or another to the Poisson summation formula

\displaystyle  \sum_n f(n) = \sum_m \hat f(2\pi m) \ \ \ \ \ (8)

(Theorem 34 from Supplement 2, at least in the case when {f} is twice continuously differentiable and compactly supported), which can be viewed as a Fourier-analytic link between the coarse-scale distribution of the integers and the fine-scale distribution of the integers. Indeed, there is a quick heuristic proof of the functional equation that comes from formally applying the Poisson summation formula to the function {1_{x>0} \frac{1}{x^s}}, and noting that the functions {x \mapsto \frac{1}{x^s}} and {\xi \mapsto \frac{1}{\xi^{1-s}}} are formally Fourier transforms of each other, up to some Gamma function factors, as well as some trigonometric factors arising from the distinction between the real line and the half-line. Such a heuristic proof can indeed be made rigorous, and we do so below the fold, while also providing Riemann’s two classical proofs of the functional equation.
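As a quick numerical sanity check of (8), using a Gaussian (which is not compactly supported, but decays fast enough for the formula to remain valid):

```python
from mpmath import mp, exp, sqrt, pi, nsum, inf

mp.dps = 25
f = lambda x: exp(-x ** 2 / 2)                    # a Gaussian test function
fhat = lambda t: sqrt(2 * pi) * exp(-t ** 2 / 2)  # \hat f(t) = \int f(x) e^{-itx} dx

lhs = nsum(f, [-inf, inf])
rhs = nsum(lambda m_: fhat(2 * pi * m_), [-inf, inf])
print(lhs, rhs)   # both equal sqrt(2 pi)(1 + 2 e^{-2 pi^2} + ...) = 2.5066...
```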

From the functional equation (and the poles of the Gamma function), one can see that {\zeta} has trivial zeroes at the negative even integers {-2,-4,-6,\dots}, in addition to the non-trivial zeroes in the critical strip. More generally, the following table summarises the zeroes and poles of the various special functions appearing in the functional equation, after they have been meromorphically extended to the entire complex plane, and with zeroes classified as “non-trivial” or “trivial” depending on whether they lie in the critical strip or not. (Exponential functions such as {2^{s-1}} or {\pi^{-s}} have no zeroes or poles, and will be ignored in this table; the zeroes and poles of rational functions such as {s(s-1)} are self-evident and will also not be displayed here.)

Function | Non-trivial zeroes | Trivial zeroes | Poles
{\zeta(s)} | Yes | {-2,-4,-6,\dots} | {1}
{\zeta(1-s)} | Yes | {1,3,5,\dots} | {0}
{\sin(\pi s/2)} | No | Even integers | No
{\cos(\pi s/2)} | No | Odd integers | No
{\sin(\pi s)} | No | Integers | No
{\Gamma(s)} | No | No | {0,-1,-2,\dots}
{\Gamma(s/2)} | No | No | {0,-2,-4,\dots}
{\Gamma(1-s)} | No | No | {1,2,3,\dots}
{\Gamma((1-s)/2)} | No | No | {2,4,6,\dots}
{\xi(s)} | Yes | No | No

Among other things, this table indicates that the Gamma and trigonometric factors in the functional equation are tied to the trivial zeroes and poles of zeta, but have no direct bearing on the distribution of the non-trivial zeroes, which is the most important feature of the zeta function for the purposes of analytic number theory, beyond the fact that they are symmetric about the real axis and critical line. In particular, the Riemann hypothesis is not going to be resolved just from further analysis of the Gamma function!

The zeta function computes the “global” sum {\sum_n \frac{1}{n^s}}, with {n} ranging all the way from {1} to infinity. However, by some Fourier-analytic (or complex-analytic) manipulation, it is possible to use the zeta function to also control more “localised” sums, such as {\sum_n \frac{1}{n^s} \psi(\log n - \log N)} for some {N \gg 1} and some smooth compactly supported function {\psi: {\bf R} \rightarrow {\bf C}}. It turns out that the functional equation (3) for the zeta function localises to this context, giving an approximate functional equation which roughly speaking takes the form

\displaystyle  \sum_n \frac{1}{n^s} \psi( \log n - \log N ) \approx \alpha(s) \sum_m \frac{1}{m^{1-s}} \psi( \log M - \log m )

whenever {s=\sigma+it} and {NM = \frac{|t|}{2\pi}}; see Theorem 38 below for a precise formulation of this equation. Unsurprisingly, this form of the functional equation is also very closely related to the Poisson summation formula (8), indeed it is essentially a special case of that formula (or more precisely, of the van der Corput {B}-process). This useful identity relates long smoothed sums of {\frac{1}{n^s}} to short smoothed sums of {\frac{1}{m^{1-s}}} (or vice versa), and can thus be used to shorten exponential sums involving terms such as {\frac{1}{n^s}}, which is useful when obtaining some of the more advanced estimates on the Riemann zeta function.

We will give two other basic uses of the functional equation. The first is to get a good count (as opposed to merely an upper bound) on the density of zeroes in the critical strip, establishing the Riemann-von Mangoldt formula that the number {N(T)} of zeroes of imaginary part between {0} and {T} is {\frac{T}{2\pi} \log \frac{T}{2\pi} - \frac{T}{2\pi} + O(\log T)} for large {T}. The other is to obtain untruncated versions of the explicit formula from Notes 2, giving a remarkable exact formula for sums involving the von Mangoldt function in terms of zeroes of the Riemann zeta function. These results are not strictly necessary for most of the material in the rest of the course, but certainly help to clarify the nature of the Riemann zeta function and its relation to the primes.

In view of the material in previous notes, it should not be surprising that there are analogues of all of the above theory for Dirichlet {L}-functions {L(\cdot,\chi)}. We will restrict attention to primitive characters {\chi}, since the {L}-function for imprimitive characters merely differs from the {L}-function of the associated primitive factor by a finite Euler product; indeed, if {\chi = \chi' \chi_0} for some principal {\chi_0} whose modulus {q_0} is coprime to that of {\chi'}, then

\displaystyle  L(s,\chi) = L(s,\chi') \prod_{p|q_0} (1 - \frac{\chi'(p)}{p^s}) \ \ \ \ \ (9)

(cf. equation (45) of Notes 2).

The main new feature is that the Poisson summation formula needs to be “twisted” by a Dirichlet character {\chi}, and this boils down to the problem of understanding the finite (additive) Fourier transform of a Dirichlet character. This is achieved by the classical theory of Gauss sums, which we review below the fold. There is one new wrinkle; the value of {\chi(-1) \in \{-1,+1\}} plays a role in the functional equation. More precisely, we have

Theorem 3 (Functional equation for {L}-functions) Let {\chi} be a primitive character of modulus {q} with {q>1}. Then {L(s,\chi)} extends to an entire function on the complex plane, with

\displaystyle  L(s,\chi) = \varepsilon(\chi) 2^s \pi^{s-1} q^{1/2-s} \sin(\frac{\pi}{2}(s+\kappa)) \Gamma(1-s) L(1-s,\overline{\chi})

or equivalently

\displaystyle  L(1-s,\overline{\chi}) = \varepsilon(\overline{\chi}) 2^{1-s} \pi^{-s} q^{s-1/2} \sin(\frac{\pi}{2}(1-s+\kappa)) \Gamma(s) L(s,\chi)

for all {s}, where {\kappa} is equal to {0} in the even case {\chi(-1)=+1} and {1} in the odd case {\chi(-1)=-1}, and

\displaystyle  \varepsilon(\chi) := \frac{\tau(\chi)}{i^\kappa \sqrt{q}} \ \ \ \ \ (10)

where {\tau(\chi)} is the Gauss sum

\displaystyle  \tau(\chi) := \sum_{n \in {\bf Z}/q{\bf Z}} \chi(n) e(n/q). \ \ \ \ \ (11)

and {e(x) := e^{2\pi ix}}, with the convention that the {q}-periodic function {n \mapsto e(n/q)} is also (by abuse of notation) applied to {n} in the cyclic group {{\bf Z}/q{\bf Z}}.

From this functional equation and (2) we see that, as with the Riemann zeta function, the non-trivial zeroes of {L(s,\chi)} (defined as the zeroes within the critical strip {\{ s: 0 < \hbox{Re}(s) < 1 \}}) are symmetric around the critical line (and, if {\chi} is real, are also symmetric around the real axis). In addition, {L(s,\chi)} acquires trivial zeroes at the negative even integers and at zero if {\chi(-1)=1}, and at the negative odd integers if {\chi(-1)=-1}. For imprimitive {\chi}, we see from (9) that {L(s,\chi)} also acquires some additional trivial zeroes on the left edge of the critical strip.
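The key quantities {\tau(\chi)} and {\varepsilon(\chi)} in Theorem 3 are easy to compute directly; for instance, the following sketch (for the Legendre symbol modulo the arbitrarily chosen prime {13}) numerically confirms that {|\tau(\chi)| = \sqrt{q}} and, in this even case, that {\varepsilon(\chi)=1}:

```python
import cmath

q = 13   # an arbitrarily chosen prime; chi is the Legendre symbol mod q
chi = lambda n: 0 if n % q == 0 else (1 if pow(n, (q - 1) // 2, q) == 1 else -1)

# the Gauss sum tau(chi) = sum_n chi(n) e(n/q), as in (11)
tau = sum(chi(n) * cmath.exp(2j * cmath.pi * n / q) for n in range(q))
print(tau, abs(tau), q ** 0.5)     # tau is ~ sqrt(13) + 0i; |tau| = sqrt(q)

# 13 = 1 mod 4, so chi(-1) = chi(12) = +1, kappa = 0, and eps(chi) = tau / sqrt(q)
print(chi(q - 1), tau / q ** 0.5)  # 1 and ~ 1.0 + 0i
```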

There is also a symmetric version of this equation, analogous to Corollary 2:

Corollary 4 Let {\chi,q,\varepsilon(\chi)} be as above, and set

\displaystyle  \xi(s,\chi) := (q/\pi)^{(s+\kappa)/2} \Gamma((s+\kappa)/2) L(s,\chi),

then {\xi(\cdot,\chi)} is entire with {\xi(1-s,\overline{\chi}) = \varepsilon(\overline{\chi}) \xi(s,\chi)}.

For further detail on the functional equation and its implications, I recommend the classic text of Titchmarsh or the text of Davenport.


In Notes 1, we approached multiplicative number theory (the study of multiplicative functions {f: {\bf N} \rightarrow {\bf C}} and their relatives) via elementary methods, in which attention was primarily focused on obtaining asymptotic control on summatory functions {\sum_{n \leq x} f(n)} and logarithmic sums {\sum_{n \leq x} \frac{f(n)}{n}}. Now we turn to the complex approach to multiplicative number theory, in which the focus is instead on obtaining various types of control on the Dirichlet series {{\mathcal D} f}, defined (at least for {s} of sufficiently large real part) by the formula

\displaystyle  {\mathcal D} f(s) := \sum_n \frac{f(n)}{n^s}.

These series also made an appearance in the elementary approach to the subject, but only for real {s} that were larger than {1}. But now we will exploit the freedom to extend the variable {s} to the complex domain; this gives enough freedom (in principle, at least) to recover control of elementary sums such as {\sum_{n\leq x} f(n)} or {\sum_{n\leq x} \frac{f(n)}{n}} from control on the Dirichlet series. Crucially, for many key functions {f} of number-theoretic interest, the Dirichlet series {{\mathcal D} f} can be analytically (or at least meromorphically) continued to the left of the line {\{ s: \hbox{Re}(s) = 1 \}}. The zeroes and poles of the resulting meromorphic continuations of {{\mathcal D} f} (and of related functions) then turn out to control the asymptotic behaviour of the elementary sums of {f}; the more one knows about the former, the more one knows about the latter. In particular, knowledge of where the zeroes of the Riemann zeta function {\zeta} are located can give very precise information about the distribution of the primes, by means of a fundamental relationship known as the explicit formula. There are many ways of phrasing this explicit formula (both in exact and in approximate forms), but they are all trying to formalise an approximation to the von Mangoldt function {\Lambda} (and hence to the primes) of the form

\displaystyle  \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1} \ \ \ \ \ (1)

where the sum is over zeroes {\rho} (counting multiplicity) of the Riemann zeta function {\zeta = {\mathcal D} 1} (with the sum often restricted so that {\rho} has large real part and bounded imaginary part), and the approximation is in a suitable weak sense, so that

\displaystyle  \sum_n \Lambda(n) g(n) \approx \int_0^\infty g(y)\ dy - \sum_\rho \int_0^\infty g(y) y^{\rho-1}\ dy \ \ \ \ \ (2)

for suitable “test functions” {g} (which in practice are restricted to be fairly smooth and slowly varying, with the precise amount of restriction dependent on the amount of truncation in the sum over zeroes one wishes to take). Among other things, such approximations can be used to rigorously establish the prime number theorem

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + o(x) \ \ \ \ \ (3)

as {x \rightarrow \infty}, with the size of the error term {o(x)} closely tied to the location of the zeroes {\rho} of the Riemann zeta function.
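To see this duality in action numerically, the following mpmath sketch (my own toy computation, using the first hundred zero pairs and the exact von Mangoldt form of the explicit formula, which carries the lower-order terms {-\log 2\pi - \frac{1}{2}\log(1-x^{-2})}) compares {\sum_{n \leq x} \Lambda(n)} with the zero sum at {x = 100.5}:

```python
from mpmath import mp, zetazero, log, pi, mpf

mp.dps = 20
x = mpf("100.5")   # kept away from prime powers, where psi jumps

# psi(x) = sum over prime powers p^k <= x of log p, computed directly
def psi(x):
    n, s = int(x), mpf(0)
    for p in range(2, n + 1):
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):  # p prime
            pk = p
            while pk <= n:
                s += log(p)
                pk *= p
    return s

# truncated von Mangoldt explicit formula:
# psi(x) = x - sum_rho x^rho/rho - log(2 pi) - (1/2) log(1 - x^{-2})
approx = x - log(2 * pi) - log(1 - x ** -2) / 2
for k in range(1, 101):                  # first 100 zero pairs rho = 1/2 + i t_k
    rho = zetazero(k)
    approx -= 2 * (x ** rho / rho).real  # rho and conj(rho) give conjugate terms
print(psi(x), approx)                    # close agreement already with 100 zeros
```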

The explicit formula (1) (or any of its more rigorous forms) is closely tied to the counterpart approximation

\displaystyle  -\frac{\zeta'}{\zeta}(s) \approx \frac{1}{s-1} - \sum_\rho \frac{1}{s-\rho} \ \ \ \ \ (4)

for the Dirichlet series {{\mathcal D} \Lambda = -\frac{\zeta'}{\zeta}} of the von Mangoldt function; note that (4) is formally the special case of (2) when {g(n) = n^{-s}}. Such approximations come from the general theory of local factorisations of meromorphic functions, as discussed in Supplement 2; the passage from (4) to (2) is accomplished by such tools as the residue theorem and the Fourier inversion formula, which were also covered in Supplement 2. The relative ease of uncovering the Fourier-like duality between primes and zeroes (sometimes referred to poetically as the “music of the primes”) is one of the major advantages of the complex-analytic approach to multiplicative number theory; this important duality tends to be rather obscured in the other approaches to the subject, although it can still in principle be discernible with sufficient effort.

More generally, one has an explicit formula

\displaystyle  \Lambda(n) \chi(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (5)

for any (non-principal) Dirichlet character {\chi}, where {\rho} now ranges over the zeroes of the associated Dirichlet {L}-function {L(s,\chi) := {\mathcal D} \chi(s)}; we view this formula as a “twist” of (1) by the Dirichlet character {\chi}. The explicit formula (5), proven similarly (in any of its rigorous forms) to (1), is important in establishing the prime number theorem in arithmetic progressions, which asserts that

\displaystyle  \sum_{n \leq x: n = a\ (q)} \Lambda(n) = \frac{x}{\phi(q)} + o(x) \ \ \ \ \ (6)

as {x \rightarrow \infty}, whenever {a\ (q)} is a fixed primitive residue class. Again, the size of the error term {o(x)} here is closely tied to the location of the zeroes of the Dirichlet {L}-function, with particular importance given to whether there is a zero very close to {s=1} (such a zero is known as an exceptional zero or Siegel zero).

While any information on the behaviour of zeta functions or {L}-functions is in principle welcome for the purposes of analytic number theory, some regions of the complex plane are more important than others in this regard, due to the differing weights assigned to each zero in the explicit formula. Roughly speaking, in descending order of importance, the most crucial regions on which knowledge of these functions is useful are

  1. The region on or near the point {s=1}.
  2. The region on or near the right edge {\{ 1+it: t \in {\bf R} \}} of the critical strip {\{ s: 0 \leq \hbox{Re}(s) \leq 1 \}}.
  3. The right half {\{ s: \frac{1}{2} < \hbox{Re}(s) < 1 \}} of the critical strip.
  4. The region on or near the critical line {\{ \frac{1}{2} + it: t \in {\bf R} \}} that bisects the critical strip.
  5. Everywhere else.

For instance:

  1. We will shortly show that the Riemann zeta function {\zeta} has a simple pole at {s=1} with residue {1}, which is already sufficient to recover many of the classical theorems of Mertens discussed in the previous set of notes, as well as results on mean values of multiplicative functions such as the divisor function {\tau}. For Dirichlet {L}-functions, the behaviour is instead controlled by the quantity {L(1,\chi)} discussed in Notes 1, which is in turn closely tied to the existence and location of a Siegel zero.
  2. The zeta function is also known to have no zeroes on the right edge {\{1+it: t \in {\bf R}\}} of the critical strip, which is sufficient to prove (and is in fact equivalent to) the prime number theorem. Any enlargement of the zero-free region for {\zeta} into the critical strip leads to improved error terms in that theorem, with larger zero-free regions leading to stronger error estimates. Similarly for {L}-functions and the prime number theorem in arithmetic progressions.
  3. The (as yet unproven) Riemann hypothesis prohibits {\zeta} from having any zeroes within the right half {\{ s: \frac{1}{2} < \hbox{Re}(s) < 1 \}} of the critical strip, and gives very good control on the number of primes in intervals, even when the intervals are relatively short compared to the size of the numbers in them. Even without assuming the Riemann hypothesis, zero density estimates in this region are available that give some partial control of this form. Similarly for {L}-functions, primes in short arithmetic progressions, and the generalised Riemann hypothesis.
  4. Assuming the Riemann hypothesis, further distributional information about the zeroes on the critical line (such as Montgomery’s pair correlation conjecture, or the more general GUE hypothesis) can give finer information about the error terms in the prime number theorem in short intervals, as well as other arithmetic information. Again, one has analogues for {L}-functions and primes in short arithmetic progressions.
  5. The functional equation of the zeta function describes the behaviour of {\zeta} to the left of the critical line, in terms of the behaviour to the right of the critical line. This is useful for building a “global” picture of the structure of the zeta function, and for improving a number of estimates about that function, but (in the absence of unproven conjectures such as the Riemann hypothesis or the pair correlation conjecture) it turns out that many of the basic analytic number theory results using the zeta function can be established without relying on this equation. Similarly for {L}-functions.

Remark 1 If one takes an “adelic” viewpoint, one can unite the Riemann zeta function {\zeta(\sigma+it) = \sum_n n^{-\sigma-it}} and all of the {L}-functions {L(\sigma+it,\chi) = \sum_n \chi(n) n^{-\sigma-it}} for various Dirichlet characters {\chi} into a single object, viewing {n \mapsto \chi(n) n^{-it}} as a general multiplicative character on the adeles; thus the imaginary coordinate {t} and the Dirichlet character {\chi} are really the Archimedean and non-Archimedean components respectively of a single adelic frequency parameter. This viewpoint was famously developed in Tate’s thesis, which among other things helps to clarify the nature of the functional equation, as discussed in this previous post. We will not pursue the adelic viewpoint further in these notes, but it does supply a “high-level” explanation for why so much of the theory of the Riemann zeta function extends to the Dirichlet {L}-functions. (The non-Archimedean character {\chi(n)} and the Archimedean character {n^{it}} behave similarly from an algebraic point of view, but not so much from an analytic point of view; as such, the adelic viewpoint is well suited for algebraic tasks (such as establishing the functional equation), but not for analytic tasks (such as establishing a zero-free region).)

Roughly speaking, the elementary multiplicative number theory from Notes 1 corresponds to the information one can extract from the complex-analytic method in region 1 of the above hierarchy, while the more advanced elementary number theory used to prove the prime number theorem (and which we will not cover in full detail in these notes) corresponds to what one can extract from regions 1 and 2.

As a consequence of this hierarchy of importance, information about the {\zeta} function away from the critical strip, such as Euler’s identity

\displaystyle  \zeta(2) = \frac{\pi^2}{6}

or equivalently

\displaystyle  1 + \frac{1}{2^2} + \frac{1}{3^2} + \dots = \frac{\pi^2}{6}

or the infamous identity

\displaystyle  \zeta(-1) = -\frac{1}{12},

which is often presented (slightly misleadingly, if one’s conventions for divergent summation are not made explicit) as

\displaystyle  1 + 2 + 3 + \dots = -\frac{1}{12},

are of relatively little direct importance in analytic prime number theory, although they are still of interest for some other, non-number-theoretic, applications. (The quantity {\zeta(2)} does play a minor role as a normalising factor in some asymptotics, see e.g. Exercise 28 from Notes 1, but its precise value is usually not of major importance.) In contrast, the value {L(1,\chi)} of an {L}-function at {s=1} turns out to be extremely important in analytic number theory, with many results in this subject relying ultimately on a non-trivial lower bound on this quantity coming from Siegel’s theorem, discussed below the fold.

For a more in-depth treatment of the topics in this set of notes, see Davenport’s “Multiplicative number theory“.


Analytic number theory is only one of many different approaches to number theory. Another important branch of the subject is algebraic number theory, which studies algebraic structures (e.g. groups, rings, and fields) of number-theoretic interest. With this perspective, the classical field of rationals {{\bf Q}}, and the classical ring of integers {{\bf Z}}, are placed inside the much larger field {\overline{{\bf Q}}} of algebraic numbers, and the much larger ring {{\mathcal A}} of algebraic integers, respectively. Recall that an algebraic number is a root of a polynomial with integer coefficients, and an algebraic integer is a root of a monic polynomial with integer coefficients; thus for instance {\sqrt{2}} is an algebraic integer (a root of {x^2-2}), while {\sqrt{2}/2} is merely an algebraic number (a root of {4x^2-2}). For the purposes of this post, we will adopt the concrete (but somewhat artificial) perspective of viewing algebraic numbers and integers as lying inside the complex numbers {{\bf C}}, thus {{\mathcal A} \subset \overline{{\bf Q}} \subset {\bf C}}. (From a modern algebraic perspective, it is better to think of {\overline{{\bf Q}}} as existing as an abstract field separate from {{\bf C}}, but which has a number of embeddings into {{\bf C}} (as well as into other fields, such as the completed p-adics {{\bf C}_p}), no one of which should be considered favoured over any other; cf. this mathOverflow post. But for the rudimentary algebraic number theory in this post, we will not need to work at this level of abstraction.) In particular, we identify the algebraic integer {\sqrt{-d}} with the complex number {\sqrt{d} i} for any natural number {d}.

Exercise 1 Show that the field of algebraic numbers {\overline{{\bf Q}}} is indeed a field, and that the ring of algebraic integers {{\mathcal A}} is indeed a ring, and is in fact an integral domain. Also, show that {{\bf Z} = {\mathcal A} \cap {\bf Q}}, that is to say the ordinary integers are precisely the algebraic integers that are also rational. Because of this, we will sometimes refer to elements of {{\bf Z}} as rational integers.

In practice, the field {\overline{{\bf Q}}} is too big to conveniently work with directly, having infinite dimension (as a vector space) over {{\bf Q}}. Thus, algebraic number theory generally restricts attention to intermediate fields {{\bf Q} \subset F \subset \overline{{\bf Q}}} between {{\bf Q}} and {\overline{{\bf Q}}}, which are of finite dimension over {{\bf Q}}; that is to say, finite degree extensions of {{\bf Q}}. Such fields are known as algebraic number fields, or number fields for short. Apart from {{\bf Q}} itself, the simplest examples of such number fields are the quadratic fields, which have dimension exactly two over {{\bf Q}}.

Exercise 2 Show that if {\alpha} is a rational number that is not a perfect square, then the field {{\bf Q}(\sqrt{\alpha})} generated by {{\bf Q}} and either of the square roots of {\alpha} is a quadratic field. Conversely, show that all quadratic fields arise in this fashion. (Hint: show that every element of a quadratic field is a root of a quadratic polynomial over the rationals.)

The ring of algebraic integers {{\mathcal A}} is similarly too large to conveniently work with directly, so in algebraic number theory one usually works with the rings {{\mathcal O}_F := {\mathcal A} \cap F} of algebraic integers inside a given number field {F}. One can (and does) study this situation in great generality, but for the purposes of this post we shall restrict attention to a simple but illustrative special case, namely the quadratic fields with a certain type of negative discriminant. (The positive discriminant case will be briefly discussed in Remark 42 below.)

Exercise 3 Let {d} be a square-free natural number with {d=1\ (4)} or {d=2\ (4)}. Show that the ring {{\mathcal O} = {\mathcal O}_{{\bf Q}(\sqrt{-d})}} of algebraic integers in {{\bf Q}(\sqrt{-d})} is given by

\displaystyle  {\mathcal O} = {\bf Z}[\sqrt{-d}] = \{ a + b \sqrt{-d}: a,b \in {\bf Z} \}.

If instead {d} is square-free with {d=3\ (4)}, show that the ring {{\mathcal O} = {\mathcal O}_{{\bf Q}(\sqrt{-d})}} is instead given by

\displaystyle  {\mathcal O} = {\bf Z}[\frac{1+\sqrt{-d}}{2}] = \{ a + b \frac{1+\sqrt{-d}}{2}: a,b \in {\bf Z} \}.

What happens if {d} is not square-free, or negative?

Remark 4 In the case {d=3\ (4)}, it may naively appear more natural to work with the ring {{\bf Z}[\sqrt{-d}]}, which is an index two subring of {{\mathcal O}}. However, because this ring only captures some of the algebraic integers in {{\bf Q}(\sqrt{-d})} rather than all of them, the algebraic properties of these rings are somewhat worse than those of {{\mathcal O}} (in particular, they generally fail to be Dedekind domains) and so are not convenient to work with in algebraic number theory.

We refer to fields of the form {{\bf Q}(\sqrt{-d})} for natural square-free numbers {d} as quadratic fields of negative discriminant, and similarly refer to {{\mathcal O}_{{\bf Q}(\sqrt{-d})}} as a ring of quadratic integers of negative discriminant. Quadratic fields and quadratic integers of positive discriminant are just as important to analytic number theory as their negative discriminant counterparts, but we will restrict attention to the latter here for simplicity of discussion.

Thus, for instance, when {d=1}, the ring of integers in {{\bf Q}(\sqrt{-1})} is the ring of Gaussian integers

\displaystyle  {\bf Z}[\sqrt{-1}] = \{ x + y \sqrt{-1}: x,y \in {\bf Z} \}

and when {d=3}, the ring of integers in {{\bf Q}(\sqrt{-3})} is the ring of Eisenstein integers

\displaystyle  {\bf Z}[\omega] := \{ x + y \omega: x,y \in {\bf Z} \}

where {\omega := e^{2\pi i /3}} is a cube root of unity.

As these examples illustrate, the additive structure of a ring {{\mathcal O} = {\mathcal O}_{{\bf Q}(\sqrt{-d})}} of quadratic integers is that of a two-dimensional lattice in {{\bf C}}, which is isomorphic as an additive group to {{\bf Z}^2}. Thus, from an additive viewpoint, one can view quadratic integers as “two-dimensional” analogues of rational integers. From a multiplicative viewpoint, however, the quadratic integers (and more generally, integers in a number field) behave very similarly to the rational integers (as opposed to being some sort of “higher-dimensional” version of such integers). Indeed, a large part of basic algebraic number theory is devoted to treating the multiplicative theory of integers in number fields in a unified fashion, that naturally generalises the classical multiplicative theory of the rational integers.

For instance, every rational integer {n \in {\bf Z}} has an absolute value {|n| \in {\bf N} \cup \{0\}}, with the multiplicativity property {|nm| = |n| |m|} for {n,m \in {\bf Z}}, and the positivity property {|n| > 0} for all {n \neq 0}. Among other things, the absolute value detects units: {|n| = 1} if and only if {n} is a unit in {{\bf Z}} (that is to say, it is multiplicatively invertible in {{\bf Z}}). Similarly, in any ring of quadratic integers {{\mathcal O} = {\mathcal O}_{{\bf Q}(\sqrt{-d})}} with negative discriminant, we can assign a norm {N(n) \in {\bf N} \cup \{0\}} to any quadratic integer {n \in {\mathcal O}_{{\bf Q}(\sqrt{-d})}} by the formula

\displaystyle  N(n) = n \overline{n}

where {\overline{n}} is the complex conjugate of {n}. (When working with other number fields than quadratic fields of negative discriminant, one instead defines {N(n)} to be the product of all the Galois conjugates of {n}.) Thus for instance, when {d=1,2\ (4)} one has

\displaystyle  N(x + y \sqrt{-d}) = x^2 + dy^2 \ \ \ \ \ (1)

and when {d=3\ (4)} one has

\displaystyle  N(x + y \frac{1+\sqrt{-d}}{2}) = x^2 + xy + \frac{d+1}{4} y^2. \ \ \ \ \ (2)

Analogously to the rational integers, we have the multiplicativity property {N(nm) = N(n) N(m)} for {n,m \in {\mathcal O}} and the positivity property {N(n) > 0} for {n \neq 0}, and the units in {{\mathcal O}} are precisely the elements of norm one.
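As a quick numerical sanity check of these claims (not a proof, which is Exercise 5 below), here is a sketch of the multiplicativity of the norm in the sample ring {{\bf Z}[\sqrt{-5}]} (note {d=5} is {1\ (4)}, so the norm form is (1)):

```python
import random

d = 5   # d = 1 mod 4, so O = Z[sqrt(-d)] and N(a + b sqrt(-d)) = a^2 + d b^2

def mul(u, v):   # (a + b sqrt(-d)) * (c + e sqrt(-d))
    (a, b), (c, e) = u, v
    return (a * c - d * b * e, a * e + b * c)

def norm(u):
    a, b = u
    return a * a + d * b * b

random.seed(0)
for _ in range(5):
    u = (random.randint(-9, 9), random.randint(-9, 9))
    v = (random.randint(-9, 9), random.randint(-9, 9))
    assert norm(mul(u, v)) == norm(u) * norm(v)   # N(nm) = N(n) N(m)
print("norm is multiplicative on these samples")
```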

Exercise 5 Establish the three claims of the previous paragraph. Conclude that the units (invertible elements) of {{\mathcal O}} consist of the four elements {\pm 1, \pm i} if {d=1}, the six elements {\pm 1, \pm \omega, \pm \omega^2} if {d=3}, and the two elements {\pm 1} if {d \neq 1,3}.

For the rational integers, we of course have the fundamental theorem of arithmetic, which asserts that every non-zero rational integer can be uniquely factored (up to permutation and units) as the product of irreducible integers, that is to say non-zero, non-unit integers that cannot be factored into the product of integers of strictly smaller norm. As it turns out, the same claim is true for a few additional rings of quadratic integers, such as the Gaussian integers and Eisenstein integers, but fails in general; for instance, in the ring {{\bf Z}[\sqrt{-5}]}, we have the famous counterexample

\displaystyle  6 = 2 \times 3 = (1+\sqrt{-5}) (1-\sqrt{-5})

that decomposes {6} non-uniquely into the product of irreducibles in {{\bf Z}[\sqrt{-5}]}. Nevertheless, it is an important fact that the fundamental theorem of arithmetic can be salvaged if one uses an “idealised” notion of a number in a ring of integers {{\mathcal O}}, now known in modern language as an ideal of that ring. For instance, in {{\bf Z}[\sqrt{-5}]}, the principal ideal {(6)} turns out to uniquely factor into the product of (non-principal) ideals {(2) + (1+\sqrt{-5}), (2) + (1-\sqrt{-5}), (3) + (1+\sqrt{-5}), (3) + (1-\sqrt{-5})}; see Exercise 27. We will review the basic theory of ideals in number fields (focusing primarily on quadratic fields of negative discriminant) below the fold.
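
The counterexample above is easy to verify by machine; the following minimal sketch (with elements of {{\bf Z}[\sqrt{-5}]} stored as pairs, purely as an illustrative convention) checks both factorisations of {6} and the irreducibility of the four factors via the norm.

```python
# Sketch: verify 6 = 2*3 = (1+sqrt(-5))(1-sqrt(-5)) in Z[sqrt(-5)], and that
# all four factors are irreducible.  Pairs (x, y) represent x + y*sqrt(-5).

def mult(a, b):
    x1, y1 = a
    x2, y2 = b
    return (x1 * x2 - 5 * y1 * y2, x1 * y2 + y1 * x2)

def norm(a):
    x, y = a
    return x * x + 5 * y * y

assert mult((2, 0), (3, 0)) == (6, 0)      # 2 * 3 = 6
assert mult((1, 1), (1, -1)) == (6, 0)     # (1+sqrt(-5))(1-sqrt(-5)) = 6

# The factors have norms 4, 9, 6, 6; a nontrivial factorisation of any of
# them would require an element of norm 2 or 3.  But x^2 + 5y^2 never takes
# the values 2 or 3 (any (x, y) outside the small box below has norm >= 5),
# so all four factors are irreducible.
attained = {norm((x, y)) for x in range(-2, 3) for y in range(-1, 2)}
assert 2 not in attained and 3 not in attained
print("factorisation of 6 in Z[sqrt(-5)] is genuinely non-unique")
```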

The norm forms (1), (2) can be viewed as examples of positive definite quadratic forms {Q: {\bf Z}^2 \rightarrow {\bf Z}} over the integers, by which we mean a polynomial of the form

\displaystyle  Q(x,y) = ax^2 + bxy + cy^2

for some integer coefficients {a,b,c}. One can declare two quadratic forms {Q, Q': {\bf Z}^2 \rightarrow {\bf Z}} to be equivalent if one can transform one to the other by an invertible linear transformation {T: {\bf Z}^2 \rightarrow {\bf Z}^2}, so that {Q' = Q \circ T}. For example, the quadratic forms {(x,y) \mapsto x^2 + y^2} and {(x',y') \mapsto 2 (x')^2 + 2 x' y' + (y')^2} are equivalent, as can be seen by using the invertible linear transformation {(x,y) = (x',x'+y')}. Such equivalences correspond to the different choices of basis available when expressing a ring such as {{\mathcal O}} (or an ideal thereof) additively as a copy of {{\bf Z}^2}.

There is an important and classical invariant of a quadratic form {(x,y) \mapsto ax^2 + bxy + c y^2}, namely the discriminant {\Delta := b^2 - 4ac}, which will be familiar to most readers via the quadratic formula; among other things, it tells us that a quadratic form will be positive definite precisely when its discriminant is negative. It is not difficult (particularly if one exploits the multiplicativity of the determinant of {2 \times 2} matrices) to show that two equivalent quadratic forms have the same discriminant. Thus for instance any quadratic form equivalent to (1) has discriminant {-4d}, while any quadratic form equivalent to (2) has discriminant {-d}. We thus see that each ring {{\mathcal O}_{{\bf Q}(\sqrt{-d})}} of quadratic integers is associated with a certain negative discriminant {D}, defined to equal {-4d} when {d=1,2\ (4)} and {-d} when {d=3\ (4)}.

Exercise 6 (Geometric interpretation of discriminant) Let {Q: {\bf Z}^2 \rightarrow {\bf Z}} be a quadratic form of negative discriminant {D}, and extend it to a real form {Q: {\bf R}^2 \rightarrow {\bf R}} in the obvious fashion. Show that for any {X>0}, the set {\{ (x,y) \in {\bf R}^2: Q(x,y) \leq X \}} is an ellipse of area {2\pi X / \sqrt{|D|}}.

It is natural to ask the converse question: if two quadratic forms have the same discriminant, are they necessarily equivalent? For certain choices of discriminant, this is the case:

Exercise 7 Show that any quadratic form {ax^2+bxy+cy^2} of discriminant {-4} is equivalent to the form {x^2+y^2}, and any quadratic form of discriminant {-3} is equivalent to {x^2+xy+y^2}. (Hint: use elementary transformations to try to make {|b|} as small as possible, to the point where one only has to check a finite number of cases; this argument is due to Legendre.) More generally, show that for any negative discriminant {D}, there are only finitely many quadratic forms of that discriminant up to equivalence (a result first established by Gauss).

Unfortunately, for most choices of discriminant, the converse question fails; for instance, the quadratic forms {x^2+5y^2} and {2x^2+2xy+3y^2} both have discriminant {-20}, but are not equivalent (Exercise 38). This particular failure of equivalence turns out to be intimately related to the failure of unique factorisation in the ring {{\bf Z}[\sqrt{-5}]}.
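
One can explore Exercise 7 and the discriminant {-20} example computationally. The sketch below enumerates the reduced forms ({|b| \leq a \leq c}, with {b \geq 0} when {|b| = a} or {a = c}) of a given negative discriminant; each equivalence class contains exactly one reduced form, so the length of the list is the class number. The function name and output format are just illustrative choices.

```python
# Sketch: enumerate reduced positive definite forms a x^2 + b xy + c y^2 of
# discriminant D < 0.  Each equivalence class contains exactly one reduced
# form, so the length of the output list is the class number h(D).
import math

def reduced_forms(D):
    assert D < 0 and D % 4 in (0, 1)   # discriminants are 0 or 1 mod 4
    forms = set()
    b = D % 2                          # b has the same parity as D
    while 3 * b * b <= -D:             # reduction forces 3b^2 <= |D|
        for bb in {b, -b}:
            n = (bb * bb - D) // 4     # n = a*c
            for a in range(max(abs(bb), 1), math.isqrt(n) + 1):
                if n % a == 0:
                    c = n // a
                    # reduced: |b| <= a <= c, and b >= 0 if |b| = a or a = c
                    if abs(bb) <= a <= c and not (bb < 0 and (abs(bb) == a or a == c)):
                        forms.add((a, bb, c))
        b += 2
    return sorted(forms)

print(reduced_forms(-4))    # [(1, 0, 1)]: only x^2 + y^2
print(reduced_forms(-3))    # [(1, 1, 1)]: only x^2 + xy + y^2
print(reduced_forms(-20))   # [(1, 0, 5), (2, 2, 3)]: the two classes above
```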

It turns out that there is a fundamental connection between quadratic fields, equivalence classes of quadratic forms of a given discriminant, and real Dirichlet characters, thus connecting the material discussed above with the last section of the previous set of notes. Here is a typical instance of this connection:

Proposition 8 Let {\chi_4: {\bf N} \rightarrow {\bf R}} be the real non-principal Dirichlet character of modulus {4}, or more explicitly {\chi_4(n)} is equal to {+1} when {n = 1\ (4)}, {-1} when {n = 3\ (4)}, and {0} when {n = 0,2\ (4)}.

  • (i) For any natural number {n}, the number of Gaussian integers {m \in {\bf Z}[\sqrt{-1}]} with norm {N(m)=n} is equal to {4(1 * \chi_4)(n)}. Equivalently, the number of solutions to the equation {n = x^2+y^2} with {x,y \in{\bf Z}} is {4(1*\chi_4)(n)}. (Here, as in the previous post, the symbol {*} denotes Dirichlet convolution.)
  • (ii) For any natural number {n}, the number of Gaussian integers {m \in {\bf Z}[\sqrt{-1}]} that divide {n} (thus {n = dm} for some {d \in {\bf Z}[\sqrt{-1}]}) is {4(1*1*1*\mu\chi_4)(n)}.

We will prove this proposition later in these notes. We observe that as a special case of part (i) of this proposition, we recover the Fermat two-square theorem: an odd prime {p} is expressible as the sum of two squares if and only if {p = 1\ (4)}. This proposition should also be compared with the fact, used crucially in the previous post to prove Dirichlet’s theorem, that {1*\chi(n)} is non-negative for any {n}, and at least one when {n} is a square, for any quadratic character {\chi}.
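
Proposition 8(i) is also easy to test numerically; here is a brute-force sketch, in which the helper names are illustrative and the bound {2000} is an arbitrary test range.

```python
# Sketch: check that the number of solutions to n = x^2 + y^2 in integers
# equals 4*(1*chi_4)(n), as in Proposition 8(i).
import math

def chi4(n):
    return (0, 1, 0, -1)[n % 4]

def r2(n):
    # number of (x, y) in Z^2 with x^2 + y^2 = n, by brute force
    count = 0
    for x in range(-math.isqrt(n), math.isqrt(n) + 1):
        y2 = n - x * x
        y = math.isqrt(y2)
        if y * y == y2:
            count += 2 if y else 1   # y and -y, except when y = 0
    return count

for n in range(1, 2000):
    conv = sum(chi4(d) for d in range(1, n + 1) if n % d == 0)  # (1*chi_4)(n)
    assert r2(n) == 4 * conv
print("r_2(n) = 4(1*chi_4)(n) verified for 1 <= n < 2000")
```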

As an illustration of the relevance of such connections to analytic number theory, let us now explicitly compute {L(1,\chi_4)}.

Corollary 9 {L(1,\chi_4) = \frac{\pi}{4}}.

This particular identity is also known as the Leibniz formula.

Proof: For a large number {x}, consider the quantity

\displaystyle  \sum_{n \in {\bf Z}[\sqrt{-1}]: N(n) \leq x} 1

which counts the Gaussian integers of norm at most {x}. On the one hand, this is the same as the number of lattice points of {{\bf Z}^2} in the disk {\{ (a,b) \in {\bf R}^2: a^2+b^2 \leq x \}} of radius {\sqrt{x}}. Placing a unit square centred at each such lattice point, we obtain a region which differs from the disk by a region contained in an annulus of area {O(\sqrt{x})}. As the area of the disk is {\pi x}, we conclude the Gauss bound

\displaystyle  \sum_{n \in {\bf Z}[\sqrt{-1}]: N(n) \leq x} 1 = \pi x + O(\sqrt{x}).

On the other hand, by Proposition 8(i) (and removing the {n=0} contribution), we see that

\displaystyle  \sum_{n \in {\bf Z}[\sqrt{-1}]: N(n) \leq x} 1 = 1 + 4 \sum_{n \leq x} 1 * \chi_4(n).

Now we use the Dirichlet hyperbola method to expand the right-hand side sum, first expressing

\displaystyle  \sum_{n \leq x} 1 * \chi_4(n) = \sum_{d \leq \sqrt{x}} \chi_4(d) \sum_{m \leq x/d} 1 + \sum_{m \leq \sqrt{x}} \sum_{d \leq x/m} \chi_4(d)

\displaystyle  - (\sum_{d \leq \sqrt{x}} \chi_4(d)) (\sum_{m \leq \sqrt{x}} 1)

and then using the bounds {\sum_{d \leq y} \chi_4(d) = O(1)}, {\sum_{m \leq y} 1 = y + O(1)}, {\sum_{d \leq \sqrt{x}} \frac{\chi_4(d)}{d} = L(1,\chi_4) + O(\frac{1}{\sqrt{x}})} from the previous set of notes to conclude that

\displaystyle  \sum_{n \leq x} 1 * \chi_4(n) = x L(1,\chi_4) + O(\sqrt{x}).

Comparing the two formulae for {\sum_{n \in {\bf Z}[\sqrt{-1}]: N(n) \leq x} 1} and sending {x \rightarrow \infty}, we obtain the claim. \Box
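
The two counts in this proof can be compared directly on a computer; the following sketch (with an arbitrary test value of {x}) illustrates both the exact agreement coming from Proposition 8(i) and the convergence of the normalised count to {\pi = 4 L(1,\chi_4)}.

```python
# Sketch: lattice points in the disk of radius sqrt(x) versus the divisor
# sum 1 + 4*sum_{n<=x} (1*chi_4)(n), and the limiting value pi.
import math

def chi4(n):
    return (0, 1, 0, -1)[n % 4]

x = 10 ** 6
r = math.isqrt(x)

# Gaussian integers of norm at most x, counted row by row
lattice = sum(2 * math.isqrt(x - a * a) + 1 for a in range(-r, r + 1))

# sum_{n<=x} (1*chi_4)(n) = sum_{d<=x} chi_4(d) * floor(x/d)
conv_sum = sum(chi4(d) * (x // d) for d in range(1, x + 1))

print(lattice == 1 + 4 * conv_sum)   # True: the two counts agree exactly
print(lattice / x, math.pi)          # the ratio approaches pi as x grows
```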

Exercise 10 Give an alternate proof of Corollary 9 that relies on obtaining asymptotics for the Dirichlet series {\sum_{n=1}^\infty \frac{1 * \chi_4(n)}{n^s}} as {s \rightarrow 1^+}, rather than using the Dirichlet hyperbola method.

Exercise 11 Give a direct proof of Corollary 9 that does not use Proposition 8, instead using Taylor expansion of the complex logarithm {\log(1+z)}. (One can also use Taylor expansions of some other functions related to the complex logarithm here, such as the arctangent function.)

More generally, one can relate {L(1,\chi)} for a real Dirichlet character {\chi} with the number of inequivalent quadratic forms of a certain discriminant, via the famous class number formula; we will give a special case of this formula below the fold.

The material here is only a very rudimentary introduction to algebraic number theory, and is not essential to the rest of the course. A slightly expanded version of the material here, from the perspective of analytic number theory, may be found in Sections 5 and 6 of Davenport’s book. A more in-depth treatment of algebraic number theory may be found in a number of texts, e.g. Fröhlich and Taylor.

Read the rest of this entry »

In analytic number theory, an arithmetic function is simply a function {f: {\bf N} \rightarrow {\bf C}} from the natural numbers {{\bf N} = \{1,2,3,\dots\}} to the real or complex numbers. (One occasionally also considers arithmetic functions taking values in more general rings than {{\bf R}} or {{\bf C}}, as in this previous blog post, but we will restrict attention here to the classical situation of real or complex arithmetic functions.) Experience has shown that a particularly tractable and relevant class of arithmetic functions for analytic number theory is the class of multiplicative functions, which are arithmetic functions {f: {\bf N} \rightarrow {\bf C}} with the additional property that

\displaystyle f(nm) = f(n) f(m) \ \ \ \ \ (1)


whenever {n,m \in{\bf N}} are coprime. (One also considers arithmetic functions, such as the logarithm function {L(n) := \log n} or the von Mangoldt function, that are not genuinely multiplicative, but interact closely with multiplicative functions, and can be viewed as “derived” versions of multiplicative functions; see this previous post.) A typical example of a multiplicative function is the divisor function

\displaystyle \tau(n) := \sum_{d|n} 1 \ \ \ \ \ (2)


that counts the number of divisors of a natural number {n}. (The divisor function {n \mapsto \tau(n)} is also denoted {n \mapsto d(n)} in the literature.) The study of asymptotic behaviour of multiplicative functions (and their relatives) is known as multiplicative number theory, and is a basic cornerstone of modern analytic number theory.
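
As a quick illustration of the defining property (1), here is a small sketch checking that the divisor function is multiplicative on coprime arguments, but not in general; the search ranges are arbitrary.

```python
# Sketch: tau(nm) = tau(n) tau(m) holds for coprime n, m, but fails in
# general, e.g. tau(8) = 4 while tau(2) tau(4) = 6.
from math import gcd

def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

for n in range(1, 60):
    for m in range(1, 60):
        if gcd(n, m) == 1:
            assert tau(n * m) == tau(n) * tau(m)

print(tau(8), tau(2) * tau(4))   # 4 6: multiplicativity needs coprimality
```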

There are various approaches to multiplicative number theory, each of which focuses on different asymptotic statistics of arithmetic functions {f}. In elementary multiplicative number theory, which is the focus of this set of notes, particular emphasis is given to the following two statistics of a given arithmetic function {f: {\bf N} \rightarrow {\bf C}}:

  1. The summatory functions

    \displaystyle \sum_{n \leq x} f(n)

    of an arithmetic function {f}, as well as the associated natural density

    \displaystyle \lim_{x \rightarrow \infty} \frac{1}{x} \sum_{n \leq x} f(n)

    (if it exists).

  2. The logarithmic sums

    \displaystyle \sum_{n\leq x} \frac{f(n)}{n}

    of an arithmetic function {f}, as well as the associated logarithmic density

    \displaystyle \lim_{x \rightarrow \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{f(n)}{n}

    (if it exists).

Here, we are normalising the arithmetic function {f} being studied to be of roughly unit size up to logarithms, obeying bounds such as {f(n)=O(1)}, {f(n) = O(\log^{O(1)} n)}, or at worst

\displaystyle f(n) = O(n^{o(1)}). \ \ \ \ \ (3)


A classical case of interest is when {f} is an indicator function {f=1_A} of some set {A} of natural numbers, in which case we also refer to the natural or logarithmic density of {f} as the natural or logarithmic density of {A} respectively. However, in analytic number theory it is usually more convenient to replace such indicator functions with other related functions that have better multiplicative properties. For instance, the indicator function {1_{\mathcal P}} of the primes is often replaced with the von Mangoldt function {\Lambda}.

Typically, the logarithmic sums are relatively easy to control, but the summatory functions require more effort in order to obtain satisfactory estimates; see Exercise 7 below.
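
To make the two statistics concrete, here is a small numerical sketch computing both densities for the squarefree numbers, whose natural and logarithmic densities are both the familiar {6/\pi^2}; the cutoff {10^5} is an arbitrary choice.

```python
# Sketch: natural and logarithmic density of the squarefree numbers, both
# of which should approach 6/pi^2 = 0.60792... as the cutoff x grows.
import math

def squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

x = 10 ** 5
summatory = sum(1 for n in range(1, x + 1) if squarefree(n))
log_sum = sum(1.0 / n for n in range(1, x + 1) if squarefree(n))

print(summatory / x)            # natural density estimate
print(log_sum / math.log(x))    # logarithmic density estimate (slower to converge)
print(6 / math.pi ** 2)         # 0.6079271...
```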

If an arithmetic function {f} is multiplicative (or closely related to a multiplicative function), then there is an important further statistic beyond the summatory function and the logarithmic sum, namely the Dirichlet series

\displaystyle {\mathcal D}f(s) := \sum_{n=1}^\infty \frac{f(n)}{n^s} \ \ \ \ \ (4)


for various real or complex numbers {s}. Under the hypothesis (3), this series is absolutely convergent for real numbers {s>1}, or more generally for complex numbers {s} with {\hbox{Re}(s)>1}. As we will see below the fold, when {f} is multiplicative then the Dirichlet series enjoys an important Euler product factorisation which has many consequences for analytic number theory.

In the elementary approach to multiplicative number theory presented in this set of notes, we consider Dirichlet series only for real numbers {s>1} (focusing particularly on the asymptotic behaviour as {s \rightarrow 1^+}); in later notes we will focus instead on the important complex-analytic approach to multiplicative number theory, in which the Dirichlet series (4) play a central role, and are defined not only for complex numbers with large real part, but are often extended analytically or meromorphically to the rest of the complex plane as well.
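
As a concrete preview of the Euler product factorisation mentioned above, the following sketch compares a truncated Dirichlet series for the constant function {f=1} at {s=2} (that is, a truncation of {\zeta(2)}) with the corresponding truncated product over primes; the truncation points are arbitrary.

```python
# Sketch: for f = 1 the Dirichlet series at s = 2 is zeta(2) = pi^2/6, and it
# factors as the Euler product of (1 - p^{-2})^{-1} over primes p.
import math

def primes_up_to(N):
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(N) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, N + 1) if sieve[p]]

s = 2.0
series = sum(n ** -s for n in range(1, 10 ** 6))
product = 1.0
for p in primes_up_to(10 ** 4):
    product *= 1.0 / (1.0 - p ** -s)

print(series, product, math.pi ** 2 / 6)   # all approximately 1.644934
```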

Remark 1 The elementary and complex-analytic approaches to multiplicative number theory are the two classical approaches to the subject. One could also consider a more “Fourier-analytic” approach, in which one studies convolution-type statistics such as

\displaystyle \sum_n \frac{f(n)}{n} G( t - \log n ) \ \ \ \ \ (5)


as {t \rightarrow \infty} for various cutoff functions {G: {\bf R} \rightarrow {\bf C}}, such as smooth, compactly supported functions. See for instance this previous blog post for an instance of such an approach. Another related approach is the “pretentious” approach to multiplicative number theory currently being developed by Granville-Soundararajan and their collaborators. We will occasionally make reference to these more modern approaches in these notes, but will primarily focus on the classical approaches.

To reverse the process and derive control on summatory functions or logarithmic sums starting from control of Dirichlet series is trickier, and usually requires one to allow {s} to be complex-valued rather than real-valued if one wants to obtain really accurate estimates; we will return to this point in subsequent notes. However, there is a cheap way to get upper bounds on such sums, known as Rankin’s trick, which we will discuss later in these notes.

The basic strategy of elementary multiplicative number theory is to first gather useful estimates on the statistics of “smooth” or “non-oscillatory” functions, such as the constant function {n \mapsto 1}, the harmonic function {n \mapsto \frac{1}{n}}, or the logarithm function {n \mapsto \log n}; one also considers the statistics of periodic functions such as Dirichlet characters. These functions can be understood without any multiplicative number theory, using basic tools from real analysis such as the (quantitative version of the) integral test or summation by parts. Once one understands the statistics of these basic functions, one can then move on to statistics of more arithmetically interesting functions, such as the divisor function (2) or the von Mangoldt function {\Lambda} that we will discuss below. A key tool to relate these functions to each other is that of Dirichlet convolution, which is an operation that interacts well with summatory functions, logarithmic sums, and particularly well with Dirichlet series.
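
Since Dirichlet convolution will be used constantly below, here is a minimal computational sketch of it, verifying the identities {1*1 = \tau} and {\mu*1 = \delta} (where {\delta} is the unit of convolution, equal to {1} at {n=1} and {0} elsewhere); the function names are illustrative.

```python
# Sketch: Dirichlet convolution of two arithmetic functions, with the checks
# 1*1 = tau and mu*1 = delta (the convolution identity).

def dirichlet(f, g, n):
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def mu(n):
    # Moebius function by trial factorisation
    sign, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0           # repeated prime factor
            sign = -sign
        d += 1
    return -sign if n > 1 else sign

one = lambda n: 1

for n in range(1, 300):
    tau_n = sum(1 for d in range(1, n + 1) if n % d == 0)
    assert dirichlet(one, one, n) == tau_n
    assert dirichlet(mu, one, n) == (1 if n == 1 else 0)
print("1*1 = tau and mu*1 = delta verified for n < 300")
```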

This is only an introduction to elementary multiplicative number theory techniques. More in-depth treatments may be found in this text of Montgomery-Vaughan, or this text of Bateman-Diamond.

Read the rest of this entry »

Many problems and results in analytic prime number theory can be formulated in the following general form: given a collection of (affine-)linear forms {L_1(n),\dots,L_k(n)}, none of which is a multiple of any other, find a number {n} such that a certain property {P( L_1(n),\dots,L_k(n) )} of the linear forms {L_1(n),\dots,L_k(n)} is true. For instance:

  • For the twin prime conjecture, one can use the linear forms {L_1(n) := n}, {L_2(n) := n+2}, and the property {P( L_1(n), L_2(n) )} in question is the assertion that {L_1(n)} and {L_2(n)} are both prime.
  • For the even Goldbach conjecture, the claim is similar but one uses the linear forms {L_1(n) := n}, {L_2(n) := N-n} for some even integer {N}.
  • For Chen’s theorem, we use the same linear forms {L_1(n),L_2(n)} as in the previous two cases, but now {P(L_1(n), L_2(n))} is the assertion that {L_1(n)} is prime and {L_2(n)} is an almost prime (in the sense that it has at most two prime factors).
  • In the recent results establishing bounded gaps between primes, we use the linear forms {L_i(n) = n + h_i} for some admissible tuple {h_1,\dots,h_k}, and take {P(L_1(n),\dots,L_k(n))} to be the assertion that at least two of {L_1(n),\dots,L_k(n)} are prime.

For these sorts of results, one can try a sieve-theoretic approach, which can broadly be formulated as follows:

  1. First, one chooses a carefully selected sieve weight {\nu: {\bf N} \rightarrow {\bf R}^+}, which could for instance be a non-negative function having a divisor sum form

    \displaystyle  \nu(n) := \sum_{d_1|L_1(n), \dots, d_k|L_k(n); d_1 \dots d_k \leq x^{1-\varepsilon}} \lambda_{d_1,\dots,d_k}

    for some coefficients {\lambda_{d_1,\dots,d_k}}, where {x} is a natural scale parameter. The precise choice of sieve weight is often quite a delicate matter, but will not be discussed here. (In some cases, one may work with multiple sieve weights {\nu_1, \nu_2, \dots}.)

  2. Next, one uses tools from analytic number theory (such as the Bombieri-Vinogradov theorem) to obtain upper and lower bounds for sums such as

    \displaystyle  \sum_n \nu(n) \ \ \ \ \ (1)

    or

    \displaystyle  \sum_n \nu(n) 1_{L_i(n) \hbox{ prime}} \ \ \ \ \ (2)

    or more generally of the form

    \displaystyle  \sum_n \nu(n) f(L_i(n)) \ \ \ \ \ (3)

    where {f(L_i(n))} is some “arithmetic” function involving the prime factorisation of {L_i(n)} (we will be a bit vague about what this means precisely, but a typical choice of {f} might be a Dirichlet convolution {\alpha*\beta(L_i(n))} of two other arithmetic functions {\alpha,\beta}).

  3. Using some combinatorial arguments, one manipulates these upper and lower bounds, together with the non-negative nature of {\nu}, to conclude the existence of an {n} in the support of {\nu} (or of at least one of the sieve weights {\nu_1, \nu_2, \dots} being considered) for which {P( L_1(n), \dots, L_k(n) )} holds.

For instance, in the recent results on bounded gaps between primes, one selects a sieve weight {\nu} for which one has upper bounds on

\displaystyle  \sum_n \nu(n)

and lower bounds on

\displaystyle  \sum_n \nu(n) 1_{n+h_i \hbox{ prime}}

so that one can show that the expression

\displaystyle  \sum_n \nu(n) (\sum_{i=1}^k 1_{n+h_i \hbox{ prime}} - 1)

is strictly positive, which implies the existence of an {n} in the support of {\nu} such that at least two of {n+h_1,\dots,n+h_k} are prime. As another example, to prove Chen’s theorem (which provides an {n} such that {L_1(n)} is prime and {L_2(n)} is almost prime), one uses a variety of sieve weights to produce a lower bound for

\displaystyle  S_1 := \sum_{n \leq x} 1_{L_1(n) \hbox{ prime}} 1_{L_2(n) \hbox{ rough}}

and an upper bound for

\displaystyle  S_2 := \sum_{z \leq p < x^{1/3}} \sum_{n \leq x} 1_{L_1(n) \hbox{ prime}} 1_{p|L_2(n)} 1_{L_2(n) \hbox{ rough}}

and

\displaystyle  S_3 := \sum_{n \leq x} 1_{L_1(n) \hbox{ prime}} 1_{L_2(n)=pqr \hbox{ for some } z \leq p \leq x^{1/3} < q \leq r},

where {z} is some parameter between {1} and {x^{1/3}}, and “rough” means that all prime factors are at least {z}. One can observe that if {S_1 - \frac{1}{2} S_2 - \frac{1}{2} S_3 > 0}, then there must be at least one {n} for which {L_1(n)} is prime and {L_2(n)} is almost prime, since for any rough number {m}, the quantity

\displaystyle  1 - \frac{1}{2} \sum_{z \leq p < x^{1/3}} 1_{p|m} - \frac{1}{2} \sum_{z \leq p \leq x^{1/3} < q \leq r} 1_{m = pqr}

is only positive when {m} is an almost prime (if {m} has three or more prime factors, then either it has at least two prime factors less than {x^{1/3}}, or it is of the form {pqr} for some {p \leq x^{1/3} < q \leq r}). The upper and lower bounds on {S_1,S_2,S_3} are ultimately produced via asymptotics for expressions of the form (1), (2), (3) for various divisor sums {\nu} and various arithmetic functions {f}.
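
The combinatorial inequality in the last paragraph can be tested by brute force, at least in the squarefree case where the counting of prime factors is unambiguous. The following sketch uses toy parameters {x = 10^4} and {z = 8} (arbitrary choices) and checks that a positive weight forces {m} to have at most two prime factors.

```python
# Sketch: for squarefree rough m <= x (all prime factors at least z), the
# weight 1 - (1/2)#{z <= p < x^(1/3): p | m} - (1/2)1_{m = pqr} is positive
# only when m has at most two prime factors.  Toy parameters below.

def prime_factors(m):
    # distinct prime factors in increasing order, by trial division
    out, d = [], 2
    while d * d <= m:
        if m % d == 0:
            out.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        out.append(m)
    return out

x, z = 10 ** 4, 8
cutoff = x ** (1.0 / 3.0)

for m in range(2, x + 1):
    ps = prime_factors(m)
    if min(ps) < z or any(m % (p * p) == 0 for p in ps):
        continue                      # keep only squarefree rough m
    small = sum(1 for p in ps if p < cutoff)
    is_pqr = len(ps) == 3 and ps[0] <= cutoff < ps[1]
    weight = 1 - 0.5 * small - 0.5 * (1 if is_pqr else 0)
    if weight > 0:
        assert len(ps) <= 2           # positive weight forces almost-primality
print("positive weight implies at most two prime factors (squarefree case)")
```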

Unfortunately, there is an obstruction to sieve-theoretic techniques working for certain types of properties {P(L_1(n),\dots,L_k(n))}, which Zeb Brady and I recently formalised at an AIM workshop this week. To state the result, we recall the Liouville function {\lambda(n)}, defined by setting {\lambda(n) = (-1)^j} whenever {n} is the product of exactly {j} primes (counting multiplicity). Define a sign pattern to be an element {(\epsilon_1,\dots,\epsilon_k)} of the discrete cube {\{-1,+1\}^k}. Given a property {P(l_1,\dots,l_k)} of {k} natural numbers {l_1,\dots,l_k}, we say that a sign pattern {(\epsilon_1,\dots,\epsilon_k)} is forbidden by {P} if there do not exist natural numbers {l_1,\dots,l_k} obeying {P(l_1,\dots,l_k)} for which

\displaystyle  (\lambda(l_1),\dots,\lambda(l_k)) = (\epsilon_1,\dots,\epsilon_k).

Example 1 Let {P(l_1,l_2,l_3)} be the property that at least two of {l_1,l_2,l_3} are prime. Then the sign patterns {(+1,+1,+1)}, {(+1,+1,-1)}, {(+1,-1,+1)}, {(-1,+1,+1)} are forbidden, because prime numbers have a Liouville function of {-1}, so that {P(l_1,l_2,l_3)} can only occur when at least two of {\lambda(l_1),\lambda(l_2), \lambda(l_3)} are equal to {-1}.

Example 2 Let {P(l_1,l_2)} be the property that {l_1} is prime and {l_2} is almost prime. Then the only forbidden sign patterns are {(+1,+1)} and {(+1,-1)}.

Example 3 Let {P(l_1,l_2)} be the property that {l_1} and {l_2} are both prime. Then {(+1,+1), (+1,-1), (-1,+1)} are all forbidden sign patterns.
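
One can verify the reasoning of Example 1 empirically: searching over small tuples satisfying {P}, the Liouville sign patterns realised are exactly those with at least two entries equal to {-1}. Here is a minimal sketch, with an arbitrary search range.

```python
# Sketch: for the property "at least two of l1, l2, l3 are prime", record the
# Liouville sign patterns realised by small tuples satisfying the property.
import math
from itertools import product

def liouville(n):
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return (-1) ** count

def isprime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

realised = set()
for l in product(range(2, 40), repeat=3):
    if sum(map(isprime, l)) >= 2:
        realised.add(tuple(liouville(x) for x in l))

print(sorted(set(product((1, -1), repeat=3)) - realised))
# [(-1, 1, 1), (1, -1, 1), (1, 1, -1), (1, 1, 1)]: exactly the four patterns
# of Example 1.  (A finite search only suggests a pattern is forbidden; the
# actual proof is that primes have Liouville function -1.)
```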

We then have a parity obstruction as soon as {P} has “too many” forbidden sign patterns, in the following (slightly informal) sense:

Claim 1 (Parity obstruction) Suppose {P(l_1,\dots,l_k)} is such that the convex hull of the forbidden sign patterns of {P} contains the origin. Then one cannot use the above sieve-theoretic approach to establish the existence of an {n} such that {P(L_1(n),\dots,L_k(n))} holds.

Thus for instance, the property in Example 3 is subject to the parity obstruction since {0} is a convex combination of {(+1,-1)} and {(-1,+1)}, whereas the properties in Examples 1, 2 are not. One can also check that the property “at least {j} of the {k} numbers {l_1,\dots,l_k} are prime” is subject to the parity obstruction as soon as {j \geq \frac{k}{2}+1}. Thus, the largest number of elements of a {k}-tuple that one can force to be prime by purely sieve-theoretic methods is {k/2}, rounded up.
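
The convex hull condition in Claim 1 is a linear programming feasibility problem, so it can be checked mechanically. Here is a sketch using scipy (assuming scipy and numpy are available; the function name is an illustrative choice).

```python
# Sketch: test whether the origin lies in the convex hull of a finite set of
# sign patterns, by looking for a probability vector p with sum_i p_i v_i = 0.
import numpy as np
from scipy.optimize import linprog

def origin_in_hull(patterns):
    V = np.array(patterns, dtype=float)          # rows are sign patterns
    m, k = V.shape
    A_eq = np.vstack([V.T, np.ones((1, m))])     # V^T p = 0 and sum p = 1
    b_eq = np.concatenate([np.zeros(k), [1.0]])
    res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.success                           # feasible iff 0 is in the hull

# Example 1 (at least two of three prime): no parity obstruction
print(origin_in_hull([(1, 1, 1), (1, 1, -1), (1, -1, 1), (-1, 1, 1)]))  # False
# Example 2 (prime and almost prime): no parity obstruction
print(origin_in_hull([(1, 1), (1, -1)]))                                # False
# Example 3 (twin primes): parity obstruction
print(origin_in_hull([(1, 1), (1, -1), (-1, 1)]))                       # True
```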

This claim is not precisely a theorem, because it presumes a certain “Liouville pseudorandomness conjecture” (a very close cousin of the better-known “Möbius pseudorandomness conjecture”) which is a bit difficult to formalise precisely. However, this conjecture is widely believed by analytic number theorists; see e.g. this blog post for a discussion. (Note though that there are scenarios, most notably the “Siegel zero” scenario, in which there is a severe breakdown of this pseudorandomness conjecture, and the parity obstruction then disappears. A typical instance of this is Heath-Brown’s proof of the twin prime conjecture (which would ordinarily be subject to the parity obstruction) under the hypothesis of a Siegel zero.) The obstruction also does not prevent the establishment of an {n} such that {P(L_1(n),\dots,L_k(n))} holds by introducing additional sieve axioms beyond upper and lower bounds on quantities such as (1), (2), (3). The proof of the Friedlander-Iwaniec theorem is a good example of this latter scenario.

Now we give a (slightly nonrigorous) proof of the claim.

Proof: (Nonrigorous) Suppose that the convex hull of the forbidden sign patterns contains the origin. Then we can find non-negative numbers {p_{\epsilon_1,\dots,\epsilon_k}} for sign patterns {(\epsilon_1,\dots,\epsilon_k)}, which sum to {1}, are non-zero only for forbidden sign patterns, and which have mean zero in the sense that

\displaystyle  \sum_{(\epsilon_1,\dots,\epsilon_k)} p_{\epsilon_1,\dots,\epsilon_k} \epsilon_i = 0

for all {i=1,\dots,k}. By Fourier expansion (or Lagrange interpolation), one can then write {p_{\epsilon_1,\dots,\epsilon_k}} as a polynomial

\displaystyle  p_{\epsilon_1,\dots,\epsilon_k} = \frac{1}{2^k} ( 1 + Q( \epsilon_1,\dots,\epsilon_k) )

where {Q(t_1,\dots,t_k)} is a polynomial in {k} variables that is a linear combination of monomials {t_{i_1} \dots t_{i_r}} with {i_1 < \dots < i_r} and {r \geq 2} (thus {Q} has no constant or linear terms, and no monomials with repeated terms). The point is that the mean zero condition allows one to eliminate the linear terms. If we now consider the weight function

\displaystyle  w(n) := 1 + Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) )

then {w} is non-negative, is supported solely on {n} for which {(\lambda(L_1(n)),\dots,\lambda(L_k(n)))} is a forbidden pattern, and is equal to {1} plus a linear combination of monomials {\lambda(L_{i_1}(n)) \dots \lambda(L_{i_r}(n))} with {r \geq 2}.
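
To see the Fourier expansion step in the simplest case, one can compute the coefficients explicitly for Example 3, where the mean-zero distribution places mass {1/2} on each of {(+1,-1)} and {(-1,+1)}; the sketch below recovers {Q(t_1,t_2) = -t_1 t_2}, so that the reweighting is the classical parity weight {w(n) = 1 - \lambda(L_1(n)) \lambda(L_2(n))}.

```python
# Sketch: Fourier expansion over the cube {-1,+1}^k.  For a probability
# vector p we have 2^k p(eps) = sum_S c_S prod_{i in S} eps_i, where
# c_S = sum_eps p(eps) prod_{i in S} eps_i; mean zero kills the linear c_S.
from itertools import product

k = 2
p = {(1, -1): 0.5, (-1, 1): 0.5}            # Example 3's forbidden patterns

cube = list(product((1, -1), repeat=k))
subsets = list(product((0, 1), repeat=k))   # subsets S of {1,...,k} as 0/1 masks

def chi(S, eps):
    out = 1
    for s, e in zip(S, eps):
        out *= e if s else 1
    return out

coeff = {S: sum(p.get(eps, 0.0) * chi(S, eps) for eps in cube) for S in subsets}
print(coeff)
# {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): -1.0}
# i.e. 2^2 p(eps) = 1 - eps_1 eps_2, so Q(t1, t2) = -t1 t2 and
# w(n) = 1 - lambda(L_1(n)) lambda(L_2(n)).
```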

The Liouville pseudorandomness principle then predicts that sums of the form

\displaystyle  \sum_n \nu(n) Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) )

and

\displaystyle  \sum_n \nu(n) Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) ) 1_{L_i(n) \hbox{ prime}}

or more generally

\displaystyle  \sum_n \nu(n) Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) ) f(L_i(n))

should be asymptotically negligible; intuitively, the point here is that the prime factorisation of {L_i(n)} should not influence the Liouville function of {L_j(n)}, even on the short arithmetic progressions that the divisor sum {\nu} is built out of, and so any monomial {\lambda(L_{i_1}(n)) \dots \lambda(L_{i_r}(n))} occurring in {Q( \lambda(L_1(n)), \dots, \lambda(L_k(n)) )} should exhibit strong cancellation for any of the above sums. If one accepts this principle, then all the expressions (1), (2), (3) should be essentially unchanged when {\nu(n)} is replaced by {\nu(n) w(n)}.

Suppose now for sake of contradiction that one could use sieve-theoretic methods to locate an {n} in the support of some sieve weight {\nu(n)} obeying {P( L_1(n),\dots,L_k(n))}. Then, by reweighting all sieve weights by the additional multiplicative factor of {w(n)}, the same arguments should also be able to locate {n} in the support of {\nu(n) w(n)} for which {P( L_1(n),\dots,L_k(n))} holds. But {w} is only supported on those {n} whose Liouville sign pattern is forbidden, a contradiction. \Box

Claim 1 is sharp in the following sense: if the convex hull of the forbidden sign patterns of {P} does not contain the origin, then by the Hahn-Banach theorem (in the hyperplane separation form), there exist real coefficients {c_1,\dots,c_k} such that

\displaystyle  c_1 \epsilon_1 + \dots + c_k \epsilon_k < -c

for all forbidden sign patterns {(\epsilon_1,\dots,\epsilon_k)} and some {c>0}. On the other hand, from Liouville pseudorandomness one expects that

\displaystyle  \sum_n \nu(n) (c_1 \lambda(L_1(n)) + \dots + c_k \lambda(L_k(n)))

is negligible (as compared against {\sum_n \nu(n)}) for any reasonable sieve weight {\nu}. We conclude that for some {n} in the support of {\nu},

\displaystyle  c_1 \lambda(L_1(n)) + \dots + c_k \lambda(L_k(n)) > -c \ \ \ \ \ (4)

and hence {(\lambda(L_1(n)),\dots,\lambda(L_k(n)))} is not a forbidden sign pattern. This does not actually imply that {P(L_1(n),\dots,L_k(n))} holds, but it does not prevent {P(L_1(n),\dots,L_k(n))} from holding purely from parity considerations. Thus, we do not expect a parity obstruction of the type in Claim 1 to hold when the convex hull of forbidden sign patterns does not contain the origin.

Example 4 Let {G} be a graph on {k} vertices {\{1,\dots,k\}}, and let {P(l_1,\dots,l_k)} be the property that one can find an edge {\{i,j\}} of {G} with {l_i,l_j} both prime. We claim that this property is subject to the parity problem precisely when {G} is two-colourable. Indeed, if {G} is two-colourable, then we can colour the vertices {\{1,\dots,k\}} with two colours (say, red and green) such that all edges in {G} connect a red vertex to a green vertex. If we then consider the two sign patterns in which all the red vertices have one sign and the green vertices have the opposite sign, these are two forbidden sign patterns whose convex hull contains the origin, and so the parity problem applies. Conversely, suppose that {G} is not two-colourable; then it contains an odd cycle. Any forbidden sign pattern then must contain more {+1}s on this odd cycle than {-1}s (since otherwise two of the {-1}s would be adjacent on this cycle by the pigeonhole principle, and such a pattern is not forbidden), and so by convexity any tuple in the convex hull of these sign patterns has a positive sum on this odd cycle. Hence the origin is not in the convex hull, and the parity obstruction does not apply. (See also this previous post for a similar obstruction ultimately coming from two-colourability.)

Example 5 An example of a parity-obstructed property (supplied by Zeb Brady) that does not come from two-colourability: we let {P( l_{\{1,2\}}, l_{\{1,3\}}, l_{\{1,4\}}, l_{\{2,3\}}, l_{\{2,4\}}, l_{\{3,4\}} )} be the property that {l_{A_1},\dots,l_{A_r}} are prime for some collection {A_1,\dots,A_r} of pair sets that cover {\{1,\dots,4\}}. For instance, this property holds if {l_{\{1,2\}}, l_{\{3,4\}}} are both prime, or if {l_{\{1,2\}}, l_{\{1,3\}}, l_{\{1,4\}}} are all prime, but not if {l_{\{1,2\}}, l_{\{1,3\}}, l_{\{2,3\}}} are the only primes. An example of a forbidden sign pattern is the pattern where {\{1,2\}, \{2,3\}, \{1,3\}} are given the sign {-1}, and the other three pairs are given {+1}. Averaging over permutations of {1,2,3,4} we see that zero lies in the convex hull, and so this example is blocked by parity. However, there is no sign pattern such that it and its negation are both forbidden, so this obstruction does not arise from the two-colourability mechanism of the previous example.

Of course, the absence of a parity obstruction does not automatically mean that the desired claim is true. For instance, given an admissible {5}-tuple {h_1,\dots,h_5}, parity obstructions do not prevent one from establishing the existence of infinitely many {n} such that at least three of {n+h_1,\dots,n+h_5} are prime; however, we are not yet able to actually establish this, even assuming strong sieve-theoretic hypotheses such as the generalised Elliott-Halberstam hypothesis. (However, the argument giving (4) does easily give the far weaker claim that there exist infinitely many {n} such that at least three of {n+h_1,\dots,n+h_5} have a Liouville function of {-1}.)

Remark 1 Another way to get past the parity problem in some cases is to take advantage of linear forms that are constant multiples of each other (which correlates the Liouville functions to each other). For instance, on GEH we can find two {E_3} numbers (products of exactly three primes) that differ by exactly {60}; a direct sieve approach using the linear forms {n,n+60} fails due to the parity obstruction, but instead one can first find {n} such that two of {n,n+4,n+10} are prime, and then among the pairs of linear forms {(15n,15n+60)}, {(6n,6n+60)}, {(10n+40,10n+100)} one can find a pair of {E_3} numbers that differ by exactly {60}. See this paper of Goldston, Graham, Pintz, and Yildirim for more examples of this type.

I thank John Friedlander and Sid Graham for helpful discussions and encouragement.
