You are currently browsing the tag archive for the ‘exponential sums’ tag.

As in all previous posts in this series, we adopt the following asymptotic notation: {x} is a parameter going off to infinity, and all quantities may depend on {x} unless explicitly declared to be “fixed”. The asymptotic notation {O(), o(), \ll} is then defined relative to this parameter. A quantity {q} is said to be of polynomial size if one has {q = O(x^{O(1)})}, and bounded if {q=O(1)}. We also write {X \lessapprox Y} for {X \ll x^{o(1)} Y}, and {X \sim Y} for {X \ll Y \ll X}.

The purpose of this (rather technical) post is both to roll over the polymath8 research thread from this previous post, and also to record the details of the latest improvement to the Type I estimates (based on exploiting additional averaging and using Deligne’s proof of the Weil conjectures), which leads to a slight improvement in the numerology.

In order to obtain this new Type I estimate, we need to strengthen the previously used properties of “dense divisibility” or “double dense divisibility” as follows.

Definition 1 (Multiple dense divisibility) Let {y \geq 1}. For each natural number {k \geq 0}, we define a notion of {k}-tuply {y}-dense divisibility recursively as follows:

  • Every natural number {n} is {0}-tuply {y}-densely divisible.
  • If {k \geq 1} and {n} is a natural number, we say that {n} is {k}-tuply {y}-densely divisible if, whenever {i,j \geq 0} are natural numbers with {i+j=k-1}, and {1 \leq R \leq n}, one can find a factorisation {n = qr} with {y^{-1} R \leq r \leq R} such that {q} is {i}-tuply {y}-densely divisible and {r} is {j}-tuply {y}-densely divisible.

We let {{\mathcal D}^{(k)}_y} denote the set of {k}-tuply {y}-densely divisible numbers. We abbreviate “{1}-tuply densely divisible” as “densely divisible”, “{2}-tuply densely divisible” as “doubly densely divisible”, and so forth; we also abbreviate {{\mathcal D}^{(1)}_y} as {{\mathcal D}_y}.
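To get a feel for the recursion, here is a small Python sketch (purely illustrative, and not part of the argument below) that tests {k}-tuply {y}-dense divisibility directly from Definition 1 by brute force over the divisors of {n}. For simplicity it only tests integer values of {R}, which can differ slightly from the real-valued condition in the definition, and it is of course far too slow for moduli of realistic size.

```python
from functools import lru_cache

def divisors(n):
    """All positive divisors of n (adequate for the small n used here)."""
    return [d for d in range(1, n + 1) if n % d == 0]

@lru_cache(maxsize=None)
def densely_divisible(n, y, k):
    """Brute-force test of k-tuply y-dense divisibility as in Definition 1.

    Caveat: only integer values of R are tested, so this is a slight
    simplification of the real-valued condition in the definition."""
    if k == 0:
        return True  # every natural number is 0-tuply y-densely divisible
    divs = divisors(n)
    for i in range(k):              # all splits i + j = k - 1
        j = k - 1 - i
        for R in range(1, n + 1):
            if not any(R / y <= r <= R
                       and densely_divisible(n // r, y, i)
                       and densely_divisible(r, y, j)
                       for r in divs):
                return False
    return True

print(densely_divisible(2 * 3 * 5 * 7, 4.0, 2))  # smooth moduli tend to qualify: True
print(densely_divisible(101, 4.0, 1))            # a large prime does not: False
```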

Given any finitely supported sequence {\alpha: {\bf N} \rightarrow {\bf C}} and any primitive residue class {a\ (q)}, we define the discrepancy

\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n).
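As a quick illustration (again not part of the argument), the following Python sketch evaluates this discrepancy for a finitely supported sequence stored as a dictionary; the test sequence, the indicator of the primes up to {100} with modulus {q=7}, is an arbitrary choice.

```python
from math import gcd

def phi(q):
    """Euler totient of q, by direct counting (fine for small q)."""
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

def discrepancy(alpha, a, q):
    """Delta(alpha; a (q)): the sum of alpha over n = a (q), minus the average of
    alpha over the primitive classes mod q.  `alpha` is a dict {n: alpha(n)}."""
    assert gcd(a, q) == 1, "a (q) should be a primitive residue class"
    main = sum(v for n, v in alpha.items() if n % q == a % q)
    average = sum(v for n, v in alpha.items() if gcd(n, q) == 1) / phi(q)
    return main - average

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

alpha = {n: 1.0 for n in range(2, 101) if is_prime(n)}   # an arbitrary test sequence
for a in range(1, 7):
    print(a, round(discrepancy(alpha, a, 7), 3))
```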

We now recall the key concept of a coefficient sequence, with some slight tweaks in the definitions that are technically convenient for this post.

Definition 2 A coefficient sequence is a finitely supported sequence {\alpha: {\bf N} \rightarrow {\bf R}} that obeys the bounds

\displaystyle  |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (1)

for all {n}, where {\tau} is the divisor function.

  • (i) A coefficient sequence {\alpha} is said to be located at scale {N} for some {N \geq 1} if it is supported on an interval of the form {[cN, CN]} for some {1 \ll c < C \ll 1}.
  • (ii) A coefficient sequence {\alpha} located at scale {N} for some {N \geq 1} is said to obey the Siegel-Walfisz theorem if one has

    \displaystyle  | \Delta(\alpha 1_{(\cdot,q)=1}; a\ (r)) | \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (2)

    for any {q,r \geq 1}, any fixed {A}, and any primitive residue class {a\ (r)}.

  • (iii) A coefficient sequence {\alpha} is said to be smooth at scale {N} for some {N > 0} if it takes the form {\alpha(n) = \psi(n/N)} for some smooth function {\psi: {\bf R} \rightarrow {\bf C}} supported on an interval of size {O(1)} and obeying the derivative bounds

    \displaystyle  |\psi^{(j)}(t)| \ll \log^{O(1)} x \ \ \ \ \ (3)

    for all fixed {j \geq 0} (note that the implied constant in the {O()} notation may depend on {j}).

Note that we allow sequences to be smooth at scale {N} without being located at scale {N}; for instance, if one arbitrarily translates a sequence that is both smooth and located at scale {N}, it will remain smooth at this scale but may not necessarily be located at this scale any more. Note also that we allow the smoothness scale {N} of a coefficient sequence to be less than one. This is to allow for the following convenient rescaling property: if {n \mapsto \psi(n)} is smooth at scale {N}, {q \geq 1}, and {a} is an integer, then {n \mapsto \psi(qn+a)} is smooth at scale {N/q}, even if {N/q} is less than one.

Now we adapt the Type I estimate to the {k}-tuply densely divisible setting.

Definition 3 (Type I estimates) Let {0 < \varpi < 1/4}, {0 < \delta < 1/4+\varpi}, and {0 < \sigma < 1/2} be fixed quantities, and let {k \geq 1} be a fixed natural number. We let {I} be an arbitrary bounded subset of {{\bf R}}, let {P_I := \prod_{p \in I} p}, and let {a\ (P_I)} be a primitive congruence class. We say that {Type^{(k)}_I[\varpi,\delta,\sigma]} holds if, whenever {M, N \gg 1} are quantities with

\displaystyle  M N \sim x \ \ \ \ \ (4)

and

\displaystyle  x^{1/2-\sigma} \lessapprox N \lessapprox x^{1/2-2\varpi-c} \ \ \ \ \ (5)

for some fixed {c>0}, and {\alpha,\beta} are coefficient sequences located at scales {M,N} respectively, with {\beta} obeying a Siegel-Walfisz theorem, we have

\displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^{(k)}: q \leq x^{1/2+2\varpi}} |\Delta(\alpha * \beta; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (6)

for any fixed {A>0}. Here, as in previous posts, {{\mathcal S}_I} denotes the square-free natural numbers whose prime factors lie in {I}.

The main theorem of this post is then

Theorem 4 (Improved Type I estimate) We have {Type^{(4)}_I[\varpi,\delta,\sigma]} whenever

\displaystyle  \frac{160}{3} \varpi + 16 \delta + \frac{34}{9} \sigma < 1

and

\displaystyle  64\varpi + 18\delta + 2\sigma < 1.

In practice, the first condition here is dominant. Except for weakening double dense divisibility to quadruple dense divisibility, this improves upon the previous Type I estimate that established {Type^{(2)}_I[\varpi,\delta,\sigma]} under the stricter hypothesis

\displaystyle  56 \varpi + 16 \delta + 4 \sigma < 1.

As in previous posts, Type I estimates (when combined with existing Type II and Type III estimates) lead to distribution results of Motohashi-Pintz-Zhang type. For any fixed {\varpi, \delta > 0} and {k \geq 1}, we let {MPZ^{(k)}[\varpi,\delta]} denote the assertion that

\displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^{(k)}: q \leq x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (7)

for any fixed {A > 0}, any bounded {I}, and any primitive {a\ (P_I)}, where {\Lambda} is the von Mangoldt function.

Corollary 5 We have {MPZ^{(4)}[\varpi,\delta]} whenever

\displaystyle  \frac{600}{7} \varpi + \frac{180}{7} \delta < 1 \ \ \ \ \ (8)

Proof: Setting {\sigma} sufficiently close to {1/10}, we see from the above theorem that {Type^{(4)}_{I}[\varpi,\delta,\sigma]} holds whenever

\displaystyle  \frac{600}{7} \varpi + \frac{180}{7} \delta < 1

and

\displaystyle  80 \varpi + \frac{45}{2} \delta < 1.

The second condition is implied by the first and can be deleted.

From this previous post we know that {Type^{(4)}_{II}[\varpi,\delta]} (which we define analogously to {Type'_{II}[\varpi,\delta], Type''_{II}[\varpi,\delta]} from previous posts) holds whenever

\displaystyle  68 \varpi + 14 \delta < 1

while {Type^{(4)}_{III}[\varpi,\delta,\sigma]} holds with {\sigma} sufficiently close to {1/10} whenever

\displaystyle  70 \varpi + 5 \delta < 1.

Again, these conditions are implied by (8). The claim then follows from the Heath-Brown identity and dyadic decomposition as in this previous post. \Box

As before, we let {DHL[k_0,2]} denote the claim that given any admissible {k_0}-tuple {{\mathcal H}}, there are infinitely many translates of {{\mathcal H}} that contain at least two primes.

Corollary 6 We have {DHL[k_0,2]} with {k_0 = 632}.

This follows from the Pintz sieve, as discussed below the fold. Combining this with the best known admissible tuples, we obtain that there are infinitely many prime gaps of size at most {4,680}, improving slightly over the previous record of {5,414}.

Read the rest of this entry »

[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function {\zeta}, defined by

\displaystyle  \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}

for {\hbox{Re}(s)>1} and extended meromorphically to other values of {s}, and asserts that the only zeroes of {\zeta} in the critical strip {\{ s: 0 \leq \hbox{Re}(s) \leq 1 \}} lie on the critical line {\{ s: \hbox{Re}(s)=\frac{1}{2} \}}.

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number {n} has a unique factorisation {n = p_1^{a_1} \ldots p_k^{a_k}} into primes. Taking logarithms, we obtain the identity

\displaystyle  \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)

for any natural number {n}, where {\Lambda} is the von Mangoldt function, thus {\Lambda(n) = \log p} when {n} is a power of a prime {p} and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

\displaystyle  \sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \sum_{d|n} \frac{\Lambda(d)}{n^s},

formally at least. Writing {n=dm}, the right-hand side factors as

\displaystyle (\sum_{d=1}^\infty \frac{\Lambda(d)}{d^s}) (\sum_{m=1}^\infty \frac{1}{m^s}) = \zeta(s) \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}

whereas the left-hand side is (formally, at least) equal to {-\zeta'(s)}. We conclude the identity

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)},

(formally, at least). If we integrate this, we are formally led to the identity

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = \log \zeta(s)

or equivalently to the exponential identity

\displaystyle  \zeta(s) = \exp( \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} ) \ \ \ \ \ (2)

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as {\zeta} has a simple pole at {s=1} and zeroes at various places {s=\rho} in the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

\displaystyle  \zeta(s) = \frac{1}{s-1} \times \prod_\rho (s-\rho) \times \ldots

(where we will be intentionally vague about what is hiding in the {\ldots} terms) and so we expect an expansion of the form

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = - \log(s-1) + \sum_\rho \log(s-\rho) + \ldots. \ \ \ \ \ (3)

Note that

\displaystyle  \frac{1}{s-\rho} = \int_1^\infty t^{\rho-s} \frac{dt}{t}

and hence on integrating in {s} we formally have

\displaystyle  \log(s-\rho) = -\int_1^\infty t^{\rho-s-1} \frac{dt}{\log t}

and thus we have the heuristic approximation

\displaystyle  \log(s-\rho) \approx - \sum_{n=1}^\infty \frac{n^{\rho-s-1}}{\log n}.

Comparing this with (3), we are led to a heuristic form of the explicit formula

\displaystyle  \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1}. \ \ \ \ \ (4)

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function {1_{[0,x]}(n)} to obtain the formula

\displaystyle  \sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (5)

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that {\hbox{Re}(\rho) = 1/2} for all zeroes {\rho}, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \ \ \ \ \ (6)

as {x \rightarrow \infty}, giving a near-optimal “square root cancellation” for the sum {\sum_{n \leq x} \Lambda(n)-1}. Conversely, if one can somehow establish a bound of the form

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )

for any fixed {\epsilon > 0}, then the explicit formula can be used to deduce that all zeroes {\rho} of {\zeta} have real part at most {1/2+\epsilon}, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )

can be automatically amplified to the stronger bound

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x )

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line {\hbox{Re}(s)=1}, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

\displaystyle  \sum_{n \leq x} \Lambda(n) =x + o(x);

see e.g. this previous blog post for more discussion.
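One can see the prime number theorem, and the strength of the conjectured bound (6), in small-scale numerics; the following Python sketch (my own illustration, not from the above discussion) computes {\sum_{n \leq x} \Lambda(n)} by a sieve and compares it with {x}. Of course, numerics at this scale cannot actually distinguish (6) from weaker error terms.

```python
from math import log, sqrt

def von_mangoldt_sum(x):
    """Compute sum_{n <= x} Lambda(n) via a smallest-prime-factor sieve."""
    x = int(x)
    spf = list(range(x + 1))                 # spf[n] = smallest prime factor of n
    for p in range(2, int(sqrt(x)) + 1):
        if spf[p] == p:                      # p is prime
            for m in range(p * p, x + 1, p):
                if spf[m] == m:
                    spf[m] = p
    total = 0.0
    for p in range(2, x + 1):
        if spf[p] == p:                      # add log p for every prime power p^k <= x
            pk = p
            while pk <= x:
                total += log(p)
                pk *= p
    return total

for x in [10**3, 10**4, 10**5, 10**6]:
    psi = von_mangoldt_sum(x)
    print(x, round(psi), round(psi - x, 1), round((psi - x) / sqrt(x), 3))
```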

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character {\chi: {\bf Z} \rightarrow {\bf C}}. The analogue of the Riemann zeta function is then the Dirichlet {L}-function

\displaystyle  L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.

The identity (1), which encoded the fundamental theorem of arithmetic, can be twisted by {\chi} to obtain

\displaystyle  \chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d) \chi(\frac{n}{d}) \ \ \ \ \ (7)

and essentially the same manipulations as before eventually lead to the exponential identity

\displaystyle  L(s,\chi) = \exp( \sum_{n=1}^\infty \frac{\chi(n) \Lambda(n)}{\log n} n^{-s} ) \ \ \ \ \ (8)

which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form

\displaystyle  \chi(n) \Lambda(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (9)

for non-principal {\chi}, where {\rho} now ranges over the zeroes of {L(s,\chi)} in the critical strip, rather than the zeroes of {\zeta(s)}; a more accurate formulation, following (5), would be

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) \approx - \sum_\rho \frac{x^{\rho}}{\rho}. \ \ \ \ \ (10)

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet {L}-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of {L(s,\chi)} in the critical strip also lie on the critical line, then we obtain the bound

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2} \log(x) \log(xq) )

for any non-principal Dirichlet character {\chi}, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2+o(1)} )

(where {o(1)} denotes a quantity that goes to zero as {x \rightarrow \infty} for any fixed {q}).

Next, one can consider other number systems than the natural numbers {{\bf N}} and integers {{\bf Z}}. For instance, one can replace the integers {{\bf Z}} with rings {{\mathcal O}_K} of integers in other number fields {K} (i.e. finite extensions of {{\bf Q}}), such as the quadratic extensions {K = {\bf Q}[\sqrt{D}]} of the rationals for various square-free integers {D}, in which case the ring of integers would be the ring of quadratic integers {{\mathcal O}_K = {\bf Z}[\omega]} for a suitable generator {\omega} (it turns out that one can take {\omega = \sqrt{D}} if {D=2,3\hbox{ mod } 4}, and {\omega = \frac{1+\sqrt{D}}{2}} if {D=1 \hbox{ mod } 4}). Here, it is not immediately obvious what the analogue of the natural numbers {{\bf N}} is in this setting, since rings such as {{\bf Z}[\omega]} do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number {n} generates a principal ideal {(n) = \{ an: a \in {\bf Z} \}} in the integers, and conversely every non-trivial ideal {{\mathfrak n}} in the integers is associated to precisely one natural number {n} in this fashion, namely the norm {N({\mathfrak n}) := |{\bf Z} / {\mathfrak n}|} of that ideal. So one can identify the natural numbers with the ideals of {{\bf Z}}. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if {p} is prime, and {a,b} are integers, then {ab \in (p)} if and only if one of {a \in (p)} or {b \in (p)} is true. Finally, even in number systems (such as {{\bf Z}[\sqrt{-5}]}) in which the classical version of the fundamental theorem of arithmetic fails (e.g. {6 = 2 \times 3 = (1-\sqrt{-5})(1+\sqrt{-5})}), we have the fundamental theorem of arithmetic for ideals: every ideal {\mathfrak{n}} in a Dedekind domain (which includes the ring {{\mathcal O}_K} of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals {\mathfrak{p}} (although these ideals might not necessarily be principal). For instance, in {{\bf Z}[\sqrt{-5}]}, the principal ideal {(6)} factors as the product of four prime (but non-principal) ideals {(2, 1+\sqrt{-5})}, {(2, 1-\sqrt{-5})}, {(3, 1+\sqrt{-5})}, {(3, 1-\sqrt{-5})}. (Note that the first two ideals {(2,1+\sqrt{-5}), (2,1-\sqrt{-5})} are actually equal to each other.)

Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. The analogue of the Riemann zeta function is now the Dedekind zeta function

\displaystyle  \zeta_K(s) := \sum_{{\mathfrak n}} \frac{1}{N({\mathfrak n})^s}

where the summation is over all non-trivial ideals in {{\mathcal O}_K}. One can also define a von Mangoldt function {\Lambda_K({\mathfrak n})}, defined as {\log N( {\mathfrak p})} when {{\mathfrak n}} is a power of a prime ideal {{\mathfrak p}}, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

\displaystyle  \log N({\mathfrak n}) = \sum_{{\mathfrak d}|{\mathfrak n}} \Lambda_K({\mathfrak d}) \ \ \ \ \ (11)

which leads as before to an exponential identity

\displaystyle  \zeta_K(s) = \exp( \sum_{{\mathfrak n}} \frac{\Lambda_K({\mathfrak n})}{\log N({\mathfrak n})} N({\mathfrak n})^{-s} ) \ \ \ \ \ (12)

and an explicit formula of the heuristic form

\displaystyle  \Lambda({\mathfrak n}) \approx 1 - \sum_\rho N({\mathfrak n})^{\rho-1}

or, a little more accurately,

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda({\mathfrak n}) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (13)

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda({\mathfrak n}) = x + O( \sqrt{x} \log(x) (d+\log(Dx)) )

where {D} is the conductor of {K} (which, in the case of number fields, is the absolute value of the discriminant of {K}) and {d = \hbox{dim}_{\bf Q}(K)} is the degree of the extension of {K} over {{\bf Q}}. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda({\mathfrak n}) = x + O( x^{1/2+o(1)} )

where {o(1)} denotes a quantity that goes to zero as {x \rightarrow \infty} (holding {K} fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.
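As a worked example (not taken from the discussion above), one can test {\sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) \approx x} numerically for the Gaussian field {K = {\bf Q}(i)}, using the classical splitting law in {{\bf Z}[i]}: the prime {2} ramifies, primes {p = 1 \hbox{ mod } 4} split into two prime ideals of norm {p}, and primes {p = 3 \hbox{ mod } 4} remain inert, generating a single prime ideal of norm {p^2}. The splitting law is standard; the Python sketch below is just an illustration.

```python
from math import log

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:: p] = bytearray(len(sieve[p * p:: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def von_mangoldt_sum_gaussian(x):
    """Sum of Lambda_K over the ideals of Z[i] of norm at most x."""
    total = 0.0
    for p in primes_up_to(int(x)):
        if p == 2 or p % 4 == 1:
            mult = 1 if p == 2 else 2        # one ramified / two split prime ideals of norm p
            norm = p
            while norm <= x:
                total += mult * log(p)
                norm *= p
        else:
            norm = p * p                     # inert: a single prime ideal of norm p^2
            while norm <= x:
                total += 2 * log(p)          # log N(p) = 2 log p
                norm *= p * p
    return total

for x in [10**3, 10**4, 10**5, 10**6]:
    s = von_mangoldt_sum_gaussian(x)
    print(x, round(s), round(s - x, 1))
```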

As was the case with the Dirichlet {L}-functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line {{\mathbb A}^1} and a finite field {{\mathbb F} = {\mathbb F}_q} of some order {q}. The polynomial functions on the affine line {{\mathbb A}^1/{\mathbb F}} are just the usual polynomial ring {{\mathbb F}[t]}, which then play the role of the integers {{\bf Z}} (or {{\mathcal O}_K}) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm {N(f)} of a polynomial is the order of {{\mathbb F}[t] / (f)}, which can be computed explicitly as

\displaystyle  N(f) = q^{\hbox{deg}(f)}.

Because of this, we will normalise things slightly differently here and use {\hbox{deg}(f)} in place of {\log N(f)} in what follows. The (local) zeta function {\zeta_{{\mathbb A}^1/{\mathbb F}}(s)} is then defined as

\displaystyle  \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = \sum_f \frac{1}{N(f)^s}

where {f} ranges over monic polynomials, and the von Mangoldt function {\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)} is defined to equal {\hbox{deg}(g)} when {f} is a power of a monic irreducible polynomial {g}, and zero otherwise. Note that because {N(f)} is always a power of {q}, the zeta function here is in fact periodic with period {2\pi i / \log q}. Because of this, it is customary to make a change of variables {T := q^{-s}}, so that

\displaystyle  \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = Z( {\mathbb A}^1/{\mathbb F}, T )

and {Z} is the renormalised zeta function

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_f T^{\hbox{deg}(f)}. \ \ \ \ \ (14)

We have the analogue of (1) (or (7) or (11)):

\displaystyle  \hbox{deg}(f) = \sum_{g|f} \Lambda_{{\mathbb A}^1/{\mathbb F}}(g), \ \ \ \ \ (15)

which leads as before to an exponential identity

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \exp( \sum_f \frac{\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}{\hbox{deg}(f)} T^{\hbox{deg}(f)} ) \ \ \ \ \ (16)

analogous to (2), (8), or (12). It also leads to the explicit formula

\displaystyle  \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - \sum_\rho N(f)^{\rho-1}

where {\rho} are the zeroes of the original zeta function {\zeta_{{\mathbb A}^1/{\mathbb F}}(s)} (counting each residue class of the period {2\pi i/\log q} just once), or equivalently

\displaystyle  \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - q^{-\hbox{deg}(f)} \sum_\alpha \alpha^{\hbox{deg}(f)},

where {\alpha} are the reciprocals of the roots of the normalised zeta function {Z( {\mathbb A}^1/{\mathbb F}, T )} (or to put it another way, {1-\alpha T} are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx q^n - \sum_\alpha \alpha^n.

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant factor, thus

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (17)

for an explicit integer {c} (independent of {n}) arising from any potential pole of {Z} at {T=1}. In the case of the affine line {{\mathbb A}^1}, the situation is particularly simple, because the zeta function {Z( {\mathbb A}^1/{\mathbb F}, T)} is easy to compute. Indeed, since there are exactly {q^n} monic polynomials of a given degree {n}, we see from (14) that

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1-qT}

so in fact there are no zeroes whatsoever, and no pole at {T=1} either, so we have an exact prime number theorem for this function field:

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n \ \ \ \ \ (18)

Among other things, this tells us that the number of irreducible monic polynomials of degree {n} is {q^n/n + O(q^{n/2})}.
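For instance (a quick numerical check, not from the text), the orbit decomposition discussed in the next few paragraphs gives {\sum_{d|n} d N_q(d) = q^n}, where {N_q(d)} is the number of monic irreducible polynomials of degree {d} over {{\mathbb F}_q}, and Möbius inversion then yields {N_q(n) = \frac{1}{n} \sum_{d|n} \mu(d) q^{n/d}}; the following sketch implements this formula and confirms that {\sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = \sum_{d|n} d N_q(d)} recovers {q^n} exactly.

```python
def mobius(n):
    """Moebius function mu(n) by trial division (fine for small n)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0                      # n is not square-free
            result = -result
        d += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def num_irreducibles(q, n):
    """Number N_q(n) of monic irreducible polynomials of degree n over F_q."""
    return sum(mobius(d) * q ** (n // d) for d in divisors(n)) // n

q = 5
for n in range(1, 8):
    # sum_{deg f = n} Lambda(f) = sum over d | n of d * N_q(d): one prime power g^{n/d}
    # of degree n for each monic irreducible g of degree d.  By (18) this equals q^n.
    lam_sum = sum(d * num_irreducibles(q, d) for d in divisors(n))
    print(n, num_irreducibles(q, n), lam_sum, q ** n)
```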

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial {f \in {\mathbb F}[t]} through its roots, which are a finite set of points in the algebraic closure {\overline{{\mathbb F}}} of the finite field {{\mathbb F}} (or more suggestively, as points on the affine line {{\mathbb A}^1( \overline{{\mathbb F}} )}). The number of such points (counting multiplicity) is the degree of {f}, and from the factor theorem, the set of points determines the monic polynomial {f} (or, if one removes the monic hypothesis, it determines the polynomial {f} projectively). These points have an action of the Galois group {\hbox{Gal}( \overline{{\mathbb F}} / {\mathbb F} )}. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map {\hbox{Frob}: x \mapsto x^q}, which fixes the elements of the original finite field {{\mathbb F}} but permutes the other elements of {\overline{{\mathbb F}}}. Thus the roots of a given polynomial {f} split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if {f} is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.

Now consider the degree {n} finite field extension {{\mathbb F}_n} of {{\mathbb F}} (it is a classical fact that there is exactly one such extension up to isomorphism for each {n}); this is a subfield of {\overline{{\mathbb F}}} of order {q^n}. (Here we are performing a standard abuse of notation by overloading the subscripts in the {{\mathbb F}} notation; thus {{\mathbb F}_q} denotes the field of order {q}, while {{\mathbb F}_n} denotes the extension of {{\mathbb F} = {\mathbb F}_q} of degree {n}, so that we in fact have {{\mathbb F}_n = {\mathbb F}_{q^n}} if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point {x} in this extension (or, more suggestively, the affine line {{\mathbb A}^1({\mathbb F}_n)} over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of {x}. Since the Frobenius action is periodic of period {n} on {{\mathbb F}_n}, the degree of this minimal polynomial must divide {n}. Conversely, every monic irreducible polynomial of degree {d} dividing {n} produces {d} distinct zeroes that lie in {{\mathbb F}_d} (here we use the classical fact that finite fields are perfect) and hence in {{\mathbb F}_n}. We have thus partitioned {{\mathbb A}^1({\mathbb F}_n)} into Frobenius orbits (also known as closed points), with each monic irreducible polynomial {f} of degree {d} dividing {n} contributing an orbit of size {d = \hbox{deg}(f) = \Lambda_{{\mathbb A}^1/{\mathbb F}}(f^{n/d})}. From this we conclude a geometric interpretation of the left-hand side of (18):

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = \sum_{x \in {\mathbb A}^1({\mathbb F}_n)} 1. \ \ \ \ \ (19)

The identity (18) is thus equivalent to the thoroughly boring fact that the number of {{\mathbb F}_n}-points on the affine line {{\mathbb A}^1} is equal to {q^n}. However, things become much more interesting if one then replaces the affine line {{\mathbb A}^1} by a more general (geometrically) irreducible curve {C} defined over {{\mathbb F}}; for instance one could take {C} to be an elliptic curve

\displaystyle  E = \{ (x,y): y^2 = x^3 + ax + b \} \ \ \ \ \ (20)

for some suitable {a,b \in {\mathbb F}}, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of {{\mathbb F}}-rational points removed). The analogue of {{\mathbb F}[t]} is then the coordinate ring of {C} (for instance, in the case of the elliptic curve (20) it would be {{\mathbb F}[x,y] / (y^2-x^3-ax-b)}), with polynomials in this ring producing a set of roots in the curve {C( \overline{\mathbb F})} that is again invariant with respect to the Frobenius action (acting on the {x} and {y} coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on {C} will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

\displaystyle  \zeta_{C/{\mathbb F}}(s) = \sum_{{\mathfrak f}} \frac{1}{N({\mathfrak f})^s}

and a von Mangoldt function {\Lambda_{C/{\mathbb F}}({\mathfrak f})} as before, where {{\mathfrak f}} would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve {C}; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points {\{x_1,\ldots,x_k\}} in {C}, or equivalently an effective divisor {[x_1] + \ldots + [x_k]} of {C}; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of {C}. With this dictionary, the zeta function becomes

\displaystyle  \zeta_{C/{\mathbb F}}(s) = \sum_{D \geq 0} \frac{1}{q^{s \hbox{deg}(D)}}

where the sum is over effective rational divisors {D} of {C} (with {k} being the degree of an effective divisor {[x_1] + \ldots + [x_k]}), or equivalently

\displaystyle  Z( C/{\mathbb F}, T ) = \sum_{D \geq 0} T^{\hbox{deg}(D)}.

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

\displaystyle  \sum_{N({\mathfrak f}) = q^n} \Lambda_{C/{\mathbb F}}({\mathfrak f}) = \sum_{x \in C({\mathbb F}_n)} 1

\displaystyle  = |C({\mathbb F}_n)|,

thus this sum is simply counting the number of {{\mathbb F}_n}-points of {C}. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

\displaystyle  Z( C/{\mathbb F}, T ) = \exp( \sum_{n \geq 1} \frac{|C({\mathbb F}_n)|}{n} T^n ) \ \ \ \ \ (21)

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

\displaystyle  |C({\mathbb F}_n)| = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (22)

where {\alpha} runs over the (reciprocal) zeroes of {Z( C/{\mathbb F}, T )} (counting multiplicity), and {c} is an integer independent of {n}. (As it turns out, {c} equals {1} when {C} is a projective curve, and more generally equals {1-k} when {C} is a projective curve with {k} rational points deleted.)

To evaluate {Z(C/{\mathbb F},T)}, one needs to count the number of effective divisors of a given degree on the curve {C}. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when {C} is projective) that {Z(C/{\mathbb F},T)} is in fact a rational function, with a finite number of zeroes, and a simple pole at both {1} and {1/q}, with similar results when one deletes some rational points from {C}; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

\displaystyle  |E({\mathbb F}_n)| = q^n - \alpha^n - \beta^n

for two complex numbers {\alpha,\beta} depending on {E} and {q}.

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of {\zeta_{C/{\mathbb F}}} lie on the critical line, or equivalently that all the roots {\alpha} in (22) have modulus {\sqrt{q}}, so that (22) then gives the asymptotic

\displaystyle  |C({\mathbb F}_n)| = q^n + O( q^{n/2} ) \ \ \ \ \ (23)

where the implied constant depends only on the genus of {C} (and on the number of points removed from {C}). For instance, for elliptic curves we have the Hasse bound

\displaystyle  | |E({\mathbb F}_n)| - q^n| \leq 2 \sqrt{q^n}.
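As an illustration (with an arbitrarily chosen non-singular curve, and not taken from the text), the Python sketch below counts the {{\mathbb F}_p}-points of (20) by brute force, reads off the trace {\alpha+\beta = p - |E({\mathbb F}_p)|}, and then uses the classical fact that {\alpha\beta = p} to generate the predicted counts {|E({\mathbb F}_{p^n})| = p^n - s_n} via the recurrence {s_n = (\alpha+\beta) s_{n-1} - p s_{n-2}} for {s_n := \alpha^n + \beta^n}, checking {|s_n| \leq 2\sqrt{p^n}} along the way.

```python
def affine_points(p, a, b):
    """Brute-force count of (x, y) in F_p^2 with y^2 = x^3 + a x + b, as in (20)."""
    sqrt_count = [0] * p
    for y in range(p):
        sqrt_count[y * y % p] += 1
    return sum(sqrt_count[(x * x * x + a * x + b) % p] for x in range(p))

p, a, b = 101, 2, 3                  # an arbitrary non-singular sample curve over F_101
n1 = affine_points(p, a, b)
trace = p - n1                       # alpha + beta, since |E(F_p)| = p - alpha - beta
print("|E(F_p)| =", n1, "  trace =", trace, "  2*sqrt(p) =", round(2 * p ** 0.5, 2))

# With alpha*beta = p, s_n = alpha^n + beta^n satisfies s_n = trace*s_{n-1} - p*s_{n-2},
# so the explicit formula predicts |E(F_{p^n})| = p^n - s_n, with |s_n| <= 2 p^{n/2}.
s_prev, s_cur = 2, trace             # s_0, s_1
for n in range(1, 6):
    print(n, p ** n - s_cur, abs(s_cur) <= 2 * p ** (n / 2))
    s_prev, s_cur = s_cur, trace * s_cur - p * s_prev
```

The point of the recurrence is that a single brute-force count over {{\mathbb F}_p} already pins down the whole (rational) zeta function of the curve, which is exactly the rationality phenomenon described above.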

As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

\displaystyle  |C({\mathbb F}_n)| = q^n + O( q^{n/2 + O(1)} ), \ \ \ \ \ (24)

then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large {n}, which then amplifies to the optimal bound (23) for all {n} (and in particular for {n=1}). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with {q}-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no {q}-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of {q}.

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet {L}-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve {C \subset {\mathbb A}^2} and an additive character {\psi: {\mathbb F}^2 \rightarrow {\bf C}^\times}, thus {\psi(x+y) = \psi(x) \psi(y)} for all {x,y \in {\mathbb F}^2}. Given a rational effective divisor {D = [x_1] + \ldots + [x_k]}, the sum {x_1+\ldots+x_k} is Frobenius-invariant and thus lies in {{\mathbb F}^2}. By abuse of notation, we may thus define {\psi} on such divisors by

\displaystyle  \psi( [x_1] + \ldots + [x_k] ) := \psi( x_1 + \ldots + x_k )

and observe that {\psi} is multiplicative in the sense that {\psi(D_1 + D_2) = \psi(D_1) \psi(D_2)} for rational effective divisors {D_1,D_2}. One can then define {\psi({\mathfrak f})} for any non-trivial ideal {{\mathfrak f}} by replacing that ideal with the associated rational effective divisor; for instance, if {f} is a polynomial in the coordinate ring of {C}, with zeroes at {x_1,\ldots,x_k \in C}, then {\psi((f))} is {\psi( x_1+\ldots+x_k )}. Again, we have the multiplicativity property {\psi({\mathfrak f} {\mathfrak g}) = \psi({\mathfrak f}) \psi({\mathfrak g})}. If we then form the twisted normalised zeta function

\displaystyle  Z( C/{\mathbb F}, \psi, T ) = \sum_{D \geq 0} \psi(D) T^{\hbox{deg}(D)}

then by twisting the previous analysis, we eventually arrive at the exponential identity

\displaystyle  Z( C/{\mathbb F}, \psi, T ) = \exp( \sum_{n \geq 1} \frac{S_n(C/{\mathbb F}, \psi)}{n} T^n ) \ \ \ \ \ (25)

in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums {S_n(C/{\mathbb F}, \psi)} are defined by

\displaystyle  S_n(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F}_n)} \psi( \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) )

where the trace {\hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x)} of an element {x} in the plane {{\mathbb A}^2( {\mathbb F}_n )} is defined by the formula

\displaystyle  \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x).

In particular, {S_1} is the exponential sum

\displaystyle  S_1(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F})} \psi(x)

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

\displaystyle  K(a,b;p) := \sum_{x \in {\mathbb F}_p^\times} e_p( ax + \frac{b}{x})

as a special case, where {a,b \in {\mathbb F}_p^\times}. (NOTE: the sign conventions for the companion sum {S_n} are not consistent across the literature, sometimes it is {-S_n} which is referred to as the companion sum.)

If {\psi} is non-principal (and {C} is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that {Z} is a rational function of {T}, with no pole at {T=q^{-1}}, and one then gets an explicit formula of the form

\displaystyle  S_n(C/{\mathbb F},\psi) = -\sum_\alpha \alpha^n + c \ \ \ \ \ (26)

for the companion sums, where {\alpha} are the reciprocals of the zeroes of {Z( C/{\mathbb F}, \psi, T )}, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

\displaystyle  \sum_{x \in {\mathbb F}_{p^n}^\times} e_p( a \hbox{Tr}(x) + b \hbox{Tr}(\frac{1}{x})) = -\alpha^n - \beta^n \ \ \ \ \ (27)

for all {n} and some complex numbers {\alpha,\beta} depending on {a,b,p}, where we have abbreviated {\hbox{Tr}_{{\mathbb F}_{p^n}/{\mathbb F}_p}} as {\hbox{Tr}}. As before, the Riemann hypothesis for {Z} then gives a square root cancellation bound of the form

\displaystyle  S_n(C/{\mathbb F},\psi) = O( q^{n/2} ) \ \ \ \ \ (28)

for the companion sums (and in particular gives the very explicit Weil bound {|K(a,b;p)| \leq 2\sqrt{p}} for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

\displaystyle  S_n(C/{\mathbb F},\psi) = O( q^{n/2 + O(1)} ).

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.
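To make the companion sums concrete, here is a sketch (my own illustration; the parameters {p=103}, {a=2}, {b=5} are arbitrary) that computes the Kloosterman sum {K(a,b;p) = S_1}, compares it against the Weil bound {2\sqrt{p}}, and then checks the {n=2} case of (27) directly. Writing {S_1 = -(\alpha+\beta)} with {\alpha\beta = p} (a classical fact about Kloosterman sums), one should have {S_2 = -(\alpha^2+\beta^2) = 2p - S_1^2}; here {{\mathbb F}_{p^2}} is modelled as {{\mathbb F}_p(t)} with {t^2 = -1}, taking {p = 3 \hbox{ mod } 4} so that {-1} is a non-residue, whence {\hbox{Frob}(t) = -t}, {\hbox{Tr}(u+vt) = 2u} and {(u+vt)^{-1} = (u-vt)/(u^2+v^2)}.

```python
from cmath import exp, pi

def e_p(n, p):
    return exp(2j * pi * (n % p) / p)

def kloosterman(a, b, p):
    """K(a, b; p) = sum over x in F_p^* of e_p(a x + b / x); this is S_1."""
    return sum(e_p(a * x + b * pow(x, -1, p), p) for x in range(1, p)).real

p, a, b = 103, 2, 5
K1 = kloosterman(a, b, p)
print("K(a,b;p) =", round(K1, 4), "  2*sqrt(p) =", round(2 * p ** 0.5, 4))

# S_2 computed directly over F_{p^2} = F_p(t), t^2 = -1: Tr(u + v t) = 2u and
# Tr((u + v t)^{-1}) = 2u / (u^2 + v^2).
S2 = 0.0
for u in range(p):
    for v in range(p):
        if u == 0 and v == 0:
            continue
        inv_norm = pow((u * u + v * v) % p, -1, p)
        S2 += e_p(a * (2 * u) + b * (2 * u * inv_norm), p).real

# By (27), with alpha + beta = -K_1 and alpha*beta = p, one has S_2 = 2p - K_1^2.
print("S_2 directly:", round(S2, 2), "  2p - K_1^2:", round(2 * p - K1 ** 2, 2))
```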

One can also twist the zeta function on a curve by a multiplicative character {\chi: {\mathbb F}^\times \rightarrow {\bf C}^\times} by similar arguments, except that instead of forming the sum {x_1+\ldots+x_k} of all the components of an effective divisor {[x_1]+\ldots+[x_k]}, one takes the product {x_1 \ldots x_k} instead, and similarly one replaces the trace

\displaystyle  \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x)

by the norm

\displaystyle  \hbox{Norm}_{{\mathbb F}_n/{\mathbb F}}(x) = x \cdot \hbox{Frob}(x) \cdot \ldots \cdot \hbox{Frob}^{n-1}(x).

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of {\ell}-adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to {\ell}-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an {\ell}-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.

Read the rest of this entry »

As in previous posts, we use the following asymptotic notation: {x} is a parameter going off to infinity, and all quantities may depend on {x} unless explicitly declared to be “fixed”. The asymptotic notation {O(), o(), \ll} is then defined relative to this parameter. A quantity {q} is said to be of polynomial size if one has {q = O(x^{O(1)})}, and bounded if {q=O(1)}. We also write {X \lessapprox Y} for {X \ll x^{o(1)} Y}, and {X \sim Y} for {X \ll Y \ll X}.

The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument. In order to state the main result, we need to recall some definitions. If {I} is a bounded subset of {{\bf R}}, let {{\mathcal S}_I} denote the square-free numbers whose prime factors lie in {I}, and let {P_I := \prod_{p \in I} p} denote the product of the primes {p} in {I}. Note by the Chinese remainder theorem that the set {({\bf Z}/P_I{\bf Z})^\times} of primitive congruence classes {a\ (P_I)} modulo {P_I} can be identified with the tuples {(a_q\ (q))_{q \in {\mathcal S}_I}} of primitive congruence classes {a_q\ (q)} modulo {q} for each {q \in {\mathcal S}_I} which obey the Chinese remainder theorem

\displaystyle  (a_{qr}\ (qr)) = (a_q\ (q)) \cap (a_r\ (r))

for all coprime {q,r \in {\mathcal S}_I}, since one can identify {a\ (P_I)} with the tuple {(a\ (q))_{q \in {\mathcal S}_I}} for each {a \in ({\bf Z}/P_I{\bf Z})^\times}.
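As a quick illustration of this identification (a sketch, not part of the argument), the following Python snippet reconstructs the class {a\ (P_I)} from a tuple of primitive classes {a_q\ (q)} at the prime moduli via the Chinese remainder theorem; the classes at the composite {q \in {\mathcal S}_I} are then determined by the displayed compatibility condition.

```python
from math import prod, gcd

def crt(classes):
    """Combine residue classes {q: a_q} with pairwise coprime moduli q into the
    unique class a modulo P = prod(q) with a = a_q (q) for every q."""
    P = prod(classes)
    a = 0
    for q, a_q in classes.items():
        M = P // q
        a = (a + a_q * M * pow(M, -1, q)) % P
    return a, P

classes = {3: 2, 5: 3, 7: 1}         # primitive classes a_q (q) at the primes dividing P_I
a, P = crt(classes)
print(a, "(mod", P, ")")             # the corresponding class a (P_I)
assert all(a % q == a_q for q, a_q in classes.items())
assert gcd(a, P) == 1                # and it is again primitive
```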

If {y > 1} and {n} is a natural number, we say that {n} is {y}-densely divisible if, for every {1 \leq R \leq n}, one can find a factor of {n} in the interval {[y^{-1} R, R]}. We say that {n} is doubly {y}-densely divisible if, for every {1 \leq R \leq n}, one can find a factor {m} of {n} in the interval {[y^{-1} R, R]} such that {m} is itself {y}-densely divisible. We let {{\mathcal D}_y^2} denote the set of doubly {y}-densely divisible natural numbers, and {{\mathcal D}_y} the set of {y}-densely divisible numbers.

Given any finitely supported sequence {\alpha: {\bf N} \rightarrow {\bf C}} and any primitive residue class {a\ (q)}, we define the discrepancy

\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n).

For any fixed {\varpi, \delta > 0}, we let {MPZ''[\varpi,\delta]} denote the assertion that

\displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^2: q \leq x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (1)

for any fixed {A > 0}, any bounded {I}, and any primitive {a\ (P_I)}, where {\Lambda} is the von Mangoldt function. Importantly, we do not require {I} or {a} to be fixed, in particular {I} could grow polynomially in {x}, and {a} could grow exponentially in {x}, but the implied constant in (1) would still need to be fixed (so it has to be uniform in {I} and {a}). (In previous formulations of these estimates, the system of congruence {a\ (q)} was also required to obey a controlled multiplicity hypothesis, but we no longer need this hypothesis in our arguments.) In this post we will record the proof of the following result, which is currently the best distribution result produced by the ongoing polymath8 project to optimise Zhang’s theorem on bounded gaps between primes:

Theorem 1 We have {MPZ''[\varpi,\delta]} whenever {\frac{280}{3} \varpi + \frac{80}{3} \delta < 1}.

This improves upon the previous constraint of {148 \varpi + 33 \delta < 1} (see this previous post), although that latter statement was stronger in that it only required single dense divisibility rather than double dense divisibility. However, thanks to the efficiency of the sieving step of our argument, the upgrade of the single dense divisibility hypothesis to double dense divisibility costs almost nothing with respect to the {k_0} parameter (which, using this constraint, gives a value of {k_0=720} as verified in these comments, which then implies a value of {H = 5,414}).

This estimate is deduced from three sub-estimates, which require a bit more notation to state. We need a fixed quantity {A_0>0}.

Definition 2 A coefficient sequence is a finitely supported sequence {\alpha: {\bf N} \rightarrow {\bf R}} that obeys the bounds

\displaystyle  |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (2)

for all {n}, where {\tau} is the divisor function.

  • (i) A coefficient sequence {\alpha} is said to be at scale {N} for some {N \geq 1} if it is supported on an interval of the form {[(1-O(\log^{-A_0} x)) N, (1+O(\log^{-A_0} x)) N]}.
  • (ii) A coefficient sequence {\alpha} at scale {N} is said to obey the Siegel-Walfisz theorem if one has

    \displaystyle  | \Delta(\alpha 1_{(\cdot,q)=1}; a\ (r)) | \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (3)

    for any {q,r \geq 1}, any fixed {A}, and any primitive residue class {a\ (r)}.

  • (iii) A coefficient sequence {\alpha} at scale {N} (relative to this choice of {A_0}) is said to be smooth if it takes the form {\alpha(n) = \psi(n/N)} for some smooth function {\psi: {\bf R} \rightarrow {\bf C}} supported on {[1-O(\log^{-A_0} x), 1+O(\log^{-A_0} x)]} obeying the derivative bounds

    \displaystyle  \psi^{(j)}(t) = O( \log^{j A_0} x ) \ \ \ \ \ (4)

    for all fixed {j \geq 0} (note that the implied constant in the {O()} notation may depend on {j}).

Definition 3 (Type I, Type II, Type III estimates) Let {0 < \varpi < 1/4}, {0 < \delta < 1/4+\varpi}, and {0 < \sigma < 1/2} be fixed quantities. We let {I} be an arbitrary bounded subset of {{\bf R}}, and {a\ (P_I)} a primitive congruence class.

  • (i) We say that {Type''_I[\varpi,\delta,\sigma]} holds if, whenever {M, N \gg 1} are quantities with

    \displaystyle  M N \sim x \ \ \ \ \ (5)

    and

    \displaystyle  x^{1/2-\sigma} \lessapprox N \lessapprox x^{1/2-2\varpi-c} \ \ \ \ \ (6)

    for some fixed {c>0}, and {\alpha,\beta} are coefficient sequences at scales {M,N} respectively, with {\beta} obeying a Siegel-Walfisz theorem, we have

    \displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^2: q \leq x^{1/2+2\varpi}} |\Delta(\alpha * \beta; a\ (q))| \ll x \log^{-A} x. \ \ \ \ \ (7)

  • (ii) We say that {Type''_{II}[\varpi,\delta]} holds if the conclusion (7) of {Type''_I[\varpi,\delta,\sigma]} holds under the same hypotheses as before, except that (6) is replaced with

    \displaystyle  x^{1/2-2\varpi-c} \lessapprox N \lessapprox x^{1/2} \ \ \ \ \ (8)

    for some sufficiently small fixed {c>0}.

  • (iii) We say that {Type''_{III}[\varpi,\delta,\sigma]} holds if, whenever {M, N_1,N_2,N_3 \gg 1} are quantities with

    \displaystyle  M N_1 N_2 N_3 \sim x

    and

    \displaystyle  N_1 N_2, N_1 N_3, N_2 N_3 \gtrapprox x^{1/2+\sigma} \ \ \ \ \ (9)

    and

    \displaystyle  x^{2\sigma} \lessapprox N_1,N_2,N_3 \lessapprox x^{1/2-\sigma} \ \ \ \ \ (10)

    and {\alpha,\psi_1,\psi_2,\psi_3} are coefficient sequences at scales {M,N_1,N_2,N_3} respectively, with {\psi_1,\psi_2,\psi_3} smooth, we have

    \displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^2: q \leq x^{1/2+2\varpi}} |\Delta(\alpha * \psi_1 * \psi_2 * \psi_3; a\ (q))| \ll x \log^{-A} x. \ \ \ \ \ (11)

Theorem 1 is then a consequence of the following four statements.

Theorem 4 (Type I estimate) {Type''_I[\varpi,\delta,\sigma]} holds whenever {\varpi,\delta,\sigma > 0} are fixed quantities such that

\displaystyle  56 \varpi + 16 \delta + 4\sigma < 1.

Theorem 5 (Type II estimate) {Type''_{II}[\varpi,\delta]} holds whenever {\varpi,\delta > 0} are fixed quantities such that

\displaystyle  68 \varpi + 14 \delta < 1.

Theorem 6 (Type III estimate) {Type''_{III}[\varpi,\delta,\sigma]} holds whenever {0 < \varpi < 1/4}, {0 < \delta < 1/4+\varpi}, and {\sigma > 0} are fixed quantities such that

\displaystyle  \sigma > \frac{1}{18} + \frac{28}{9} \varpi + \frac{2}{9} \delta \ \ \ \ \ (12)

and

\displaystyle  \varpi< \frac{1}{12}. \ \ \ \ \ (13)

In particular, if

\displaystyle  70 \varpi + 5 \delta < 1,

then all values of {\sigma} that are sufficiently close to {1/10} are admissible.

Lemma 7 (Combinatorial lemma) Let {0 < \varpi < 1/4}, {0 < \delta < 1/4+\varpi}, and {1/10 < \sigma < 1/2} be such that {Type''_I[\varpi,\delta,\sigma]}, {Type''_{II}[\varpi,\delta]}, and {Type''_{III}[\varpi,\delta,\sigma]} simultaneously hold. Then {MPZ''[\varpi,\delta]} holds.

Indeed, if {\frac{280}{3} \varpi + \frac{80}{3} \delta < 1}, one checks that the hypotheses for Theorems 4, 5, 6 are obeyed for {\sigma} sufficiently close to {1/10}, at which point the claim follows from Lemma 7.

The proofs of Theorems 4, 5, 6 will be given below the fold, while the proof of Lemma 7 follows from the arguments in this previous post. We remark that in our current arguments, the double dense divisibility is only fully used in the Type I estimates; the Type II and Type III estimates are also valid just with single dense divisibility.

Remark 1 Theorem 6 is vacuously true for {\sigma > 1/6}, as the condition (10) cannot be satisfied in this case. If we use this trivial case of Theorem 6, while keeping the full strength of Theorems 4 and 5, we obtain Theorem 1 in the regime

\displaystyle  168 \varpi + 48 \delta < 1.

Read the rest of this entry »

As in previous posts, we use the following asymptotic notation: {x} is a parameter going off to infinity, and all quantities may depend on {x} unless explicitly declared to be “fixed”. The asymptotic notation {O(), o(), \ll} is then defined relative to this parameter. A quantity {q} is said to be of polynomial size if one has {q = O(x^{O(1)})}, and bounded if {q=O(1)}. We also write {X \lessapprox Y} for {X \ll x^{o(1)} Y}, and {X \sim Y} for {X \ll Y \ll X}.

The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument (though not fully self-contained, as we will need some lemmas from previous posts).

In order to state the main result, we need to recall some definitions.

Definition 1 (Singleton congruence class system) Let {I \subset {\bf R}}, and let {{\mathcal S}_I} denote the square-free numbers whose prime factors lie in {I}. A singleton congruence class system on {I} is a collection {{\mathcal C} = (\{a_q\})_{q \in {\mathcal S}_I}} of primitive residue classes {a_q \in ({\bf Z}/q{\bf Z})^\times} for each {q \in {\mathcal S}_I}, obeying the Chinese remainder theorem property

\displaystyle  a_{qr}\ (qr) = (a_q\ (q)) \cap (a_r\ (r)) \ \ \ \ \ (1)

whenever {q,r \in {\mathcal S}_I} are coprime. We say that such a system {{\mathcal C}} has controlled multiplicity if the quantity

\displaystyle  \tau_{\mathcal C}(n) := |\{ q \in {\mathcal S}_I: n = a_q\ (q) \}|

obeys the estimate

\displaystyle  \sum_{C^{-1} x \leq n \leq Cx: n = a\ (r)} \tau_{\mathcal C}(n)^2 \ll \frac{x}{r} \tau(r)^{O(1)} \log^{O(1)} x + x^{o(1)} \ \ \ \ \ (2)

for any fixed {C>1} and any congruence class {a\ (r)} with {r \in {\mathcal S}_I}. Here {\tau} is the divisor function.

Next we need a relaxation of the concept of {y}-smoothness.

Definition 2 (Dense divisibility) Let {y \geq 1}. A positive integer {q} is said to be {y}-densely divisible if, for every {1 \leq R \leq q}, there exists a factor of {q} in the interval {[y^{-1} R, R]}. We let {{\mathcal D}_y} denote the set of {y}-densely divisible positive integers.

Now we present a strengthened version {MPZ'[\varpi,\delta]} of the Motohashi-Pintz-Zhang conjecture {MPZ[\varpi,\delta]}, which depends on parameters {0 < \varpi < 1/4} and {0 < \delta < 1/4}.

Conjecture 3 ({MPZ'[\varpi,\delta]}) Let {I \subset {\bf R}}, and let {(\{a_q\})_{q \in {\mathcal S}_I}} be a congruence class system with controlled multiplicity. Then

\displaystyle  \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}: q< x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a_q)| \ll x \log^{-A} x \ \ \ \ \ (3)

for any fixed {A>0}, where {\Lambda} is the von Mangoldt function.

The difference between this conjecture and the weaker conjecture {MPZ[\varpi,\delta]} is that the modulus {q} is constrained to be {x^\delta}-densely divisible rather than {x^\delta}-smooth (note that {I} is no longer constrained to lie in {[1,x^\delta]}). This relaxation of the smoothness condition improves the Goldston-Pintz-Yildirim type sieving needed to deduce {DHL[k_0,2]} from {MPZ'[\varpi,\delta]}; see this previous post.

The main result we will establish is

Theorem 4 {MPZ'[\varpi,\delta]} holds for any {\varpi,\delta>0} with

\displaystyle  148\varpi+33\delta < 1. \ \ \ \ \ (4)

This improves upon previous constraints of {87\varpi + 17 \delta < \frac{1}{4}} (see this blog comment) and {207 \varpi + 43 \delta < \frac{1}{4}} (see Theorem 13 of this previous post), which were also only established for {MPZ[\varpi,\delta]} instead of {MPZ'[\varpi,\delta]}. Inserting Theorem 4 into the Pintz sieve from this previous post gives {DHL[k_0,2]} for {k_0 = 1467} (see this blog comment), which when inserted in turn into newly set up tables of narrow prime tuples gives infinitely many prime gaps of separation at most {H = 12,012}.

Read the rest of this entry »

As in previous posts, we use the following asymptotic notation: {x} is a parameter going off to infinity, and all quantities may depend on {x} unless explicitly declared to be “fixed”. The asymptotic notation {O(), o(), \ll} is then defined relative to this parameter. A quantity {q} is said to be of polynomial size if one has {q = O(x^{O(1)})}, and said to be bounded if {q=O(1)}. Another convenient notation: we write {X \lessapprox Y} for {X \ll x^{o(1)} Y}. Thus for instance the divisor bound asserts that if {q} has polynomial size, then the number of divisors of {q} is {\lessapprox 1}.

This post is intended to highlight a phenomenon unearthed in the ongoing polymath8 project (and is in fact a key component of Zhang’s proof that there are bounded gaps between primes infinitely often), namely that one can get quite good bounds on relatively short exponential sums when the modulus {q} is smooth, through the basic technique of Weyl differencing (ultimately based on the Cauchy-Schwarz inequality, and also related to the van der Corput lemma in equidistribution theory). Improvements in the case of smooth moduli have appeared before in the literature (e.g. in this paper of Heath-Brown, this paper of Graham and Ringrose, this later paper of Heath-Brown, this paper of Chang, or this paper of Goldmakher); the arguments here are particularly close to that of the first paper of Heath-Brown. It now also appears that further optimisation of this Weyl differencing trick could lead to noticeable improvements in the numerology for the polymath8 project, so I am devoting this post to explaining this trick further.

To illustrate the method, let us begin with the classical problem in analytic number theory of estimating an incomplete character sum

\displaystyle  \sum_{M+1 \leq n \leq M+N} \chi(n)

where {\chi} is a primitive Dirichlet character of some conductor {q}, {M} is an integer, and {N} is some quantity between {1} and {q}. Clearly we have the trivial bound

\displaystyle  |\sum_{M+1 \leq n \leq M+N} \chi(n)| \leq N; \ \ \ \ \ (1)

we also have the classical Pólya-Vinogradov inequality

\displaystyle  |\sum_{M+1 \leq n \leq M+N} \chi(n)| \ll q^{1/2} \log q. \ \ \ \ \ (2)

This latter inequality gives improvements over the trivial bound when {N} is much larger than {q^{1/2}}, but not for {N} much smaller than {q^{1/2}}. The Pólya-Vinogradov inequality can be deduced via a little Fourier analysis from the completed exponential sum bound

\displaystyle  | \sum_{n \in {\bf Z}/q{\bf Z}} \chi(n) e_q( an )| \ll q^{1/2}

for any {a \in {\bf Z}/q{\bf Z}}, where {e_q(n) :=e^{2\pi i n/q}}. (In fact, from the classical theory of Gauss sums, this exponential sum is equal to {\tau(\chi) \overline{\chi(a)}} for some complex number {\tau(\chi)} of norm {\sqrt{q}}.)
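
Purely as a numerical illustration (this plays no role in what follows), one can check the square root cancellation in the completed sum directly for a small prime modulus, say for the Legendre symbol mod {101}; the following Python snippet is just such a sanity check:

    import cmath, math

    p = 101

    def legendre(a, p):
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

    # Gauss sum of the Legendre symbol mod p (the case a = 1 of the completed sum above)
    tau = sum(legendre(n, p) * cmath.exp(2j * cmath.pi * n / p) for n in range(p))
    print(abs(tau), math.sqrt(p))   # both equal sqrt(101) = 10.0498...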

In the case when {q} is a prime, improving upon the above two inequalities is an important but difficult problem, with only partially satisfactory results so far. To give just one indication of the difficulty, the seemingly modest improvement

\displaystyle |\sum_{M+1 \leq n \leq M+N} \chi(n)| \ll p^{1/2} \log \log p

to the Pólya-Vinogradov inequality when {q=p} was a prime required a 14-page paper in Inventiones by Montgomery and Vaughan to prove, and even then it was only conditional on the generalised Riemann hypothesis! See also this more recent paper of Granville and Soundararajan for an unconditional variant of this result in the case that {\chi} has odd order.

Another important improvement is the Burgess bound, which in our notation asserts that

\displaystyle  |\sum_{M+1 \leq n \leq M+N} \chi(n)| \lessapprox N^{1-1/r} q^{\frac{r+1}{4r^2}} \ \ \ \ \ (3)

for any fixed integer {r \geq 2}, assuming that {q} is square-free (for simplicity) and of polynomial size; see this previous post for a discussion of the Burgess argument. This is non-trivial for {N} as small as {q^{1/4+o(1)}}.

In the case when {q} is prime, there has been very little improvement to the Burgess bound (or its Fourier dual, which can give bounds for {N} as large as {q^{3/4-o(1)}}) in the last fifty years; an improvement to the exponents in (3) in this case (particularly anything that gave a power saving for {N} below {q^{1/4}}) would in fact be rather significant news in analytic number theory.

However, in the opposite case when {q} is smooth – that is to say, all of its factors are much smaller than {q} – then one can do better than the Burgess bound in some regimes. This fact has been observed in several places in the literature (in particular, in the papers of Heath-Brown, Graham-Ringrose, Chang, and Goldmakher mentioned previously), but also turns out to (implicitly) be a key insight in Zhang’s paper on bounded prime gaps. In the case of character sums, one such improved estimate (closely related to Theorem 2 of the Heath-Brown paper) is as follows:

Proposition 1 Let {q} be square-free with a factorisation {q = q_1 q_2} and of polynomial size, and let {M,N} be integers with {1 \leq N \leq q}. Then for any primitive character {\chi} with conductor {q}, one has

\displaystyle  | \sum_{M+1 \leq n \leq M+N} \chi(n) | \lessapprox N^{1/2} q_1^{1/2} + N^{1/2} q_2^{1/4}.

This proposition is particularly powerful when {q} is smooth, as this gives many factorisations {q = q_1 q_2} with the ability to specify {q_1,q_2} with a fair amount of accuracy. For instance, if {q} is {y}-smooth (i.e. all prime factors are at most {y}), then by the greedy algorithm one can find a divisor {q_1} of {q} with {y^{-2/3} q^{1/3} \leq q_1 \leq y^{1/3} q^{1/3}}; if we set {q_2 := q/q_1}, then {y^{-1/3} q^{2/3} \leq q_2 \leq y^{2/3} q^{2/3}}, and the above proposition then gives

\displaystyle  | \sum_{M+1 \leq n \leq M+N} \chi(n) | \lessapprox y^{1/6} N^{1/2} q^{1/6}

which can improve upon the Burgess bound when {y} is small. For instance, if {N = q^{1/2}}, then this bound becomes {\lessapprox y^{1/6} q^{5/12}}; in contrast the Burgess bound only gives {\lessapprox q^{7/16}} for this value of {N} (using the optimal choice {r=2} for {r}), which is inferior for {y < q^{1/8}}.
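
The greedy algorithm mentioned above is simple enough to write down explicitly. The following Python sketch (an illustration only, which assumes {q} is handed to us as its list of distinct prime factors, each at most {y}) multiplies prime factors together until the product first reaches {y^{-2/3} q^{1/3}}; since the last factor multiplied in is at most {y}, the product cannot overshoot {y^{1/3} q^{1/3}}:

    import math

    def greedy_divisor(prime_factors, y):
        # prime_factors: the distinct prime factors (each at most y) of a squarefree q
        q = math.prod(prime_factors)
        lower = y ** (-2 / 3) * q ** (1 / 3)
        q1 = 1
        for p in prime_factors:
            if q1 >= lower:
                break
            q1 *= p
        # the last prime multiplied in is at most y, so q1 <= y^(1/3) q^(1/3)
        return q1, q // q1

    q1, q2 = greedy_divisor([2, 3, 5, 7, 11, 13, 17, 19], y=19)
    print(q1, q2)   # q1 = 210, q2 = 46189 for q = 9699690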

The hypothesis that {q} be squarefree may be relaxed, but for applications to the Polymath8 project, it is only the squarefree moduli that are relevant.

Proof: If {N \ll q_1} then the claim follows from the trivial bound (1), while for {N \gg q_2} the claim follows from (2). Hence we may assume that

\displaystyle  q_1 < N < q_2.

We use the method of Weyl differencing, the key point being to difference in multiples of {q_1}.

Let {K := \lfloor N/q_1 \rfloor}, thus {K \geq 1}. For any {1 \leq k \leq K}, we have

\displaystyle  \sum_{M+1 \leq n \leq M+N} \chi(n) = \sum_n 1_{[M+1,M+N]}(n+kq_1) \chi(n+kq_1)

and thus on averaging

\displaystyle  \sum_{M+1 \leq n \leq M+N} \chi(n) = \frac{1}{K} \sum_n \sum_{k=1}^K 1_{[M+1,M+N]}(n+kq_1) \chi(n+kq_1). \ \ \ \ \ (4)

By the Chinese remainder theorem, we may factor

\displaystyle  \chi(n) = \chi_1(n) \chi_2(n)

where {\chi_1,\chi_2} are primitive characters of conductor {q_1,q_2} respectively. As {\chi_1} is periodic of period {q_1}, we thus have

\displaystyle  \chi(n+kq_1) = \chi_1(n) \chi_2(n+kq_1)

and so we can take {\chi_1} out of the inner summation of the right-hand side of (4) to obtain

\displaystyle  \sum_{M+1 \leq n \leq M+N} \chi(n) = \frac{1}{K} \sum_n \chi_1(n) \sum_{k=1}^K 1_{[M+1,M+N]}(n+kq_1) \chi_2(n+kq_1)

and hence by the triangle inequality

\displaystyle  |\sum_{M+1 \leq n \leq M+N} \chi(n)| \leq \frac{1}{K} \sum_n |\sum_{k=1}^K 1_{[M+1,M+N]}(n+kq_1) \chi_2(n+kq_1)|.

Note how the characters on the right-hand side only have period {q_2} rather than {q=q_1 q_2}. This reduction in the period is ultimately the source of the saving over the Pólya-Vinogradov inequality.

Note that the inner sum vanishes unless {n \in [M+1-Kq_1,M+N]}, which is an interval of length {O(N)} by choice of {K}. Thus by Cauchy-Schwarz one has

\displaystyle  | \sum_{M+1 \leq n \leq M+N} \chi(n) | \ll

\displaystyle  \frac{N^{1/2}}{K} (\sum_n |\sum_{k=1}^K 1_{[M+1,M+N]}(n+kq_1) \chi_2(n+kq_1)|^2)^{1/2}.

We expand the right-hand side as

\displaystyle  \frac{N^{1/2}}{K} |\sum_{1 \leq k,k' \leq K} \sum_n

\displaystyle  1_{[M+1,M+N]}(n+kq_1) 1_{[M+1,M+N]}(n+k'q_1) \chi_2(n+kq_1) \overline{\chi_2(n+k'q_1)}|^{1/2}.

We first consider the diagonal contribution {k=k'}. In this case we use the trivial bound {O(N)} for the inner summation, and we soon see that the total contribution here is {O( K^{-1/2} N ) = O( N^{1/2}q_1^{1/2} )}.

Now we consider the off-diagonal case; by symmetry we can take {k < k'}. Then the indicator functions {1_{[M+1,M+N]}(n+kq_1) 1_{[M+1,M+N]}(n+k'q_1)} restrict {n} to the interval {[M+1-kq_1, M+N-k'q_1]}. On the other hand, as a consequence of the Weil conjectures for curves one can show that

\displaystyle  |\sum_{n \in {\bf Z}/q_2{\bf Z}} \chi_2(n+kq_1) \overline{\chi_2(n+k'q_1)} e_{q_2}(an)| \lessapprox q_2^{1/2} (k-k',q_2)^{1/2}

for any {a \in {\bf Z}/q_2{\bf Z}}; indeed one can use the Chinese remainder theorem and the square-free nature of {q_2} to reduce to the case when {q_2} is prime, in which case one can apply (for instance) the original paper of Weil to establish this bound, noting also that {q_1} and {q_2} are coprime since {q} is squarefree. Applying the method of completion of sums (or the Parseval formula), this shows that

\displaystyle  |\sum_n 1_{[M+1,M+N]}(n+kq_1) 1_{[M+1,M+N]}(n+k'q_1) \chi_2(n+kq_1) \overline{\chi_2(n+k'q_1)}|

\displaystyle  \lessapprox q_2^{1/2} (k-k',q_2)^{1/2}.

Summing in {k,k'} (using Lemma 5 from this previous post) we see that the total contribution to the off-diagonal case is

\displaystyle  \lessapprox \frac{N^{1/2}}{K} ( K^2 q_2^{1/2} )^{1/2}

which simplifies to {\lessapprox N^{1/2} q_2^{1/4}}. The claim follows. \Box
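
To get a rough numerical feel for Proposition 1 (again only an illustration, playing no role in the argument), one can take {q} to be a product of a few small primes, let {\chi} be the quadratic character mod {q} (the product of the Legendre symbols at each prime factor, which is primitive of conductor {q}), and compare an incomplete sum of length {N \approx q^{1/2}} against the trivial bound, the Pólya-Vinogradov bound, and the factored bound of the proposition; the choices of primes, of {N}, and of the factorisation {q=q_1q_2} in the Python sketch below are of course arbitrary:

    import math

    def legendre(a, p):
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

    primes = [11, 13, 17, 19, 23, 29, 31, 37]   # q = product: squarefree and fairly smooth
    q = math.prod(primes)

    def chi(n):
        # quadratic character mod q: product of the Legendre symbols at each prime factor
        s = 1
        for p in primes:
            s *= legendre(n, p)
        return s

    N = math.isqrt(q)                           # an incomplete sum of length about q^(1/2)
    S = abs(sum(chi(n) for n in range(1, N + 1)))

    q1 = 11 * 13 * 17                           # a factorisation q = q1 * q2 with q1 near q^(1/3)
    q2 = q // q1
    print("incomplete sum          :", S)
    print("trivial bound N         :", N)
    print("Polya-Vinogradov        :", round(math.sqrt(q) * math.log(q)))
    print("factored bound (Prop. 1):", round(math.sqrt(N) * (math.sqrt(q1) + q2 ** 0.25)))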

A modification of the above argument (using more complicated versions of the Weil conjectures) allows one to replace the summand {\chi(n)} by more complicated summands such as {\chi(f(n)) e_q(g(n))} for some polynomials or rational functions {f,g} of bounded degree and obeying a suitable non-degeneracy condition (after restricting of course to those {n} for which the arguments {f(n),g(n)} are well-defined). We will not detail this here, but instead turn to the question of estimating slightly longer exponential sums, such as

\displaystyle  \sum_{1 \leq n \leq N} e_{d_1}( \frac{c_1}{n} ) e_{d_2}( \frac{c_2}{n+l} )

where {N} should be thought of as a little bit larger than {(d_1d_2)^{1/2}}.

Read the rest of this entry »

One of the most basic methods in additive number theory is the Hardy-Littlewood circle method. This method is based on expressing a quantity of interest to additive number theory, such as the number of representations {f_3(x)} of an integer {x} as the sum of three primes {x = p_1+p_2+p_3}, as a Fourier-analytic integral over the unit circle {{\bf R}/{\bf Z}} involving exponential sums such as

\displaystyle  S(x,\alpha) := \sum_{p \leq x} e( \alpha p) \ \ \ \ \ (1)

where the sum here ranges over all primes up to {x}, and {e(\theta) := e^{2\pi i \theta}}. For instance, the expression {f_3(x)} mentioned earlier can be written as

\displaystyle  f_3(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha. \ \ \ \ \ (2)

The strategy is then to obtain sufficiently accurate bounds on exponential sums such as {S(x,\alpha)} in order to obtain non-trivial bounds on quantities such as {f_3(x)}. For instance, if one can show that {f_3(x)>0} for all odd integers {x} greater than some given threshold {x_0}, this implies that all odd integers greater than {x_0} are expressible as the sum of three primes, thus establishing all but finitely many instances of the odd Goldbach conjecture.
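
Since {S(x,\alpha)} is a trigonometric polynomial of degree at most {x}, the integral (2) can be evaluated exactly by sampling {\alpha} at finitely many equally spaced points, which gives a quick (if computationally naive) way to see the identity in action for small {x}. Here is a Python sketch of such a check, purely for illustration:

    import cmath

    def primes_up_to(x):
        sieve = [True] * (x + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(x ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = [False] * len(sieve[p * p::p])
        return [n for n in range(2, x + 1) if sieve[n]]

    x = 31
    ps = primes_up_to(x)

    # direct count of ordered representations x = p1 + p2 + p3
    direct = sum(1 for p1 in ps for p2 in ps for p3 in ps if p1 + p2 + p3 == x)

    # S(x,alpha)^3 e(-x alpha) has frequencies in [6 - x, 2x], so sampling alpha at
    # Q = 3x + 1 equally spaced points evaluates the integral over R/Z exactly
    Q = 3 * x + 1
    total = 0
    for m in range(Q):
        alpha = m / Q
        S = sum(cmath.exp(2j * cmath.pi * alpha * p) for p in ps)
        total += S ** 3 * cmath.exp(-2j * cmath.pi * x * alpha)
    print(direct, round((total / Q).real))   # the two counts agree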

Remark 1 In practice, it can be more efficient to work with smoother sums than the partial sum (1), for instance by replacing the cutoff {p \leq x} with a smoother cutoff {\chi(p/x)} for a suitable choice of cutoff function {\chi}, or by replacing the restriction of the summation to primes by a more analytically tractable weight, such as the von Mangoldt function {\Lambda(n)}. However, these improvements to the circle method are primarily technical in nature and do not have much impact on the heuristic discussion in this post, so we will not emphasise them here. One can also certainly use the circle method to study additive combinations of numbers from other sets than the set of primes, but we will restrict attention to additive combinations of primes for sake of discussion, as it is historically one of the most studied sets in additive number theory.

In many cases, it turns out that one can get fairly precise evaluations on sums such as {S(x,\alpha)} in the major arc case, when {\alpha} is close to a rational number {a/q} with small denominator {q}, by using tools such as the prime number theorem in arithmetic progressions. For instance, the prime number theorem itself tells us that

\displaystyle  S(x,0) \approx \frac{x}{\log x}

and the prime number theorem in residue classes modulo {q} suggests more generally that

\displaystyle  S(x,\frac{a}{q}) \approx \frac{\mu(q)}{\phi(q)} \frac{x}{\log x}

when {q} is small and {a} is coprime to {q}, basically thanks to the elementary calculation that the phase {e(an/q)} has an average value of {\mu(q)/\phi(q)} when {n} is uniformly distributed amongst the residue classes modulo {q} that are coprime to {q}. Quantifying the precise error in these approximations can be quite challenging, though, unless one assumes powerful hypotheses such as the Generalised Riemann Hypothesis.
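
The elementary calculation alluded to here is the evaluation of the Ramanujan sum: when {(a,q)=1}, the average of {e(an/q)} over the {\phi(q)} residues {n} coprime to {q} is exactly {\mu(q)/\phi(q)}. This is easy to confirm numerically; here is a tiny Python sketch (illustration only):

    import cmath, math

    def mobius(n):
        # Moebius function by trial division (adequate for small n)
        result, d = 1, 2
        while d * d <= n:
            if n % d == 0:
                n //= d
                if n % d == 0:
                    return 0
                result = -result
            d += 1
        return -result if n > 1 else result

    q, a = 30, 7                                    # any a coprime to q
    coprime = [n for n in range(1, q + 1) if math.gcd(n, q) == 1]
    avg = sum(cmath.exp(2j * cmath.pi * a * n / q) for n in coprime) / len(coprime)
    print(round(avg.real, 6), mobius(q) / len(coprime))   # both equal mu(30)/phi(30) = -1/8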

In the minor arc case when {\alpha} is not close to a rational {a/q} with small denominator, one no longer expects to have such precise control on the value of {S(x,\alpha)}, due to the “pseudorandom” fluctuations of the quantity {e(\alpha p)}. Using the standard probabilistic heuristic (supported by results such as the central limit theorem or Chernoff’s inequality) that the sum of {k} “pseudorandom” phases should fluctuate randomly and be of typical magnitude {\sim \sqrt{k}}, one expects upper bounds of the shape

\displaystyle  |S(x,\alpha)| \lessapprox \sqrt{\frac{x}{\log x}} \ \ \ \ \ (3)

for “typical” minor arc {\alpha}. Indeed, a simple application of the Plancherel identity, followed by the prime number theorem, reveals that

\displaystyle  \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2\ d\alpha \sim \frac{x}{\log x} \ \ \ \ \ (4)

which is consistent with (though weaker than) the above heuristic. In practice, though, we are unable to rigorously establish bounds anywhere near as strong as (3); upper bounds such as {x^{4/5+o(1)}} are far more typical.

Because one only expects to have upper bounds on {|S(x,\alpha)|}, rather than asymptotics, in the minor arc case, one cannot realistically hope to make much use of phases such as {e(-x\alpha)} for the minor arc contribution to integrals such as (2) (at least if one is working with a single, deterministic, value of {x}, so that averaging in {x} is unavailable). In particular, from upper bound information alone, it is difficult to avoid the “conspiracy” that the magnitude {|S(x,\alpha)|^3} oscillates in sympathetic resonance with the phase {e(-x\alpha)}, thus essentially eliminating almost all of the possible gain in the bounds that could arise from exploiting cancellation from that phase. Thus, one basically has little option except to use the triangle inequality to control the portion of the integral on the minor arc region {\Omega_{minor}}:

\displaystyle  |\int_{\Omega_{minor}} |S(x,\alpha)|^3 e(-x\alpha)\ d\alpha| \leq \int_{\Omega_{minor}} |S(x,\alpha)|^3\ d\alpha.

Despite this handicap, though, it is still possible to get enough bounds on both the major and minor arc contributions of integrals such as (2) to obtain non-trivial lower bounds on quantities such as {f_3(x)}, at least when {x} is large. In particular, this sort of method can be developed to give a proof of Vinogradov’s famous theorem that every sufficiently large odd integer {x} is the sum of three primes; my own result that all odd numbers greater than {1} can be expressed as the sum of at most five primes is also proven by essentially the same method (modulo a number of minor refinements, and taking advantage of some numerical work on both the Goldbach problems and on the Riemann hypothesis). It is certainly conceivable that some further variant of the circle method (again combined with a suitable amount of numerical work, such as that of numerically establishing zero-free regions for the Generalised Riemann Hypothesis) can be used to settle the full odd Goldbach conjecture; indeed, under the assumption of the Generalised Riemann Hypothesis, this was already achieved by Deshouillers, Effinger, te Riele, and Zinoviev back in 1997. I am optimistic that an unconditional version of this result will be possible within a few years or so, though I should say that there are still significant technical challenges to doing so, and some clever new ideas will probably be needed to get either the Vinogradov-style argument or numerical verification to work unconditionally for the three-primes problem at medium-sized ranges of {x}, such as {x \sim 10^{50}}. (But the intermediate problem of representing all even natural numbers as the sum of at most four primes looks somewhat closer to being feasible, though even this would require some substantially new and non-trivial ideas beyond what is in my five-primes paper.)

However, I (like many other analytic number theorists) am considerably more skeptical that the circle method can be applied to the even Goldbach problem of representing a large even number {x} as the sum {x = p_1 + p_2} of two primes, or the similar (and marginally simpler) twin prime conjecture of finding infinitely many pairs of twin primes, i.e. finding infinitely many representations {2 = p_1 - p_2} of {2} as the difference of two primes. At first glance, the situation looks tantalisingly similar to that of the Vinogradov theorem: to settle the even Goldbach problem for large {x}, one has to find a non-trivial lower bound for the quantity

\displaystyle  f_2(x) = \int_{{\bf R}/{\bf Z}} S(x,\alpha)^2 e(-x\alpha)\ d\alpha \ \ \ \ \ (5)

for sufficiently large {x}, as this quantity {f_2(x)} is also the number of ways to represent {x} as the sum {x=p_1+p_2} of two primes {p_1,p_2}. Similarly, to settle the twin prime problem, it would suffice to obtain a lower bound for the quantity

\displaystyle  \tilde f_2(x) = \int_{{\bf R}/{\bf Z}} |S(x,\alpha)|^2 e(-2\alpha)\ d\alpha \ \ \ \ \ (6)

that goes to infinity as {x \rightarrow \infty}, as this quantity {\tilde f_2(x)} is also the number of ways to represent {2} as the difference {2 = p_1-p_2} of two primes less than or equal to {x}.

In principle, one can achieve either of these two objectives by a sufficiently fine level of control on the exponential sums {S(x,\alpha)}. Indeed, there is a trivial (and uninteresting) way to take any (hypothetical) solution of either the asymptotic even Goldbach problem or the twin prime problem and (artificially) convert it to a proof that “uses the circle method”; one simply begins with the quantity {f_2(x)} or {\tilde f_2(x)}, expresses it in terms of {S(x,\alpha)} using (5) or (6), and then uses (5) or (6) again to convert these integrals back into the combinatorial expression of counting solutions to {x=p_1+p_2} or {2=p_1-p_2}, and then uses the hypothetical solution to the given problem to obtain the required lower bounds on {f_2(x)} or {\tilde f_2(x)}.

Of course, this would not qualify as a genuine application of the circle method by any reasonable measure. One can then ask the more refined question of whether one could hope to get non-trivial lower bounds on {f_2(x)} or {\tilde f_2(x)} (or similar quantities) purely from the upper and lower bounds on {S(x,\alpha)} or similar quantities (and of various {L^p} type norms on such quantities, such as the {L^2} bound (4)). Of course, we do not yet know what the strongest possible upper and lower bounds on {S(x,\alpha)} are (otherwise we would already have made progress on major conjectures such as the Riemann hypothesis); but we can make plausible heuristic conjectures on such bounds. And this is enough to make the following heuristic conclusions:

  • (i) For “binary” problems such as computing (5), (6), the contribution of the minor arcs potentially dominates that of the major arcs (if all one is given about the minor arc sums is magnitude information), in contrast to “ternary” problems such as computing (2), in which it is the major arc contribution which is absolutely dominant.
  • (ii) Upper and lower bounds on the magnitude of {S(x,\alpha)} are not sufficient, by themselves, to obtain non-trivial bounds on (5), (6) unless these bounds are extremely tight (within a relative error of {O(1/\log x)} or better); but
  • (iii) obtaining such tight bounds is a problem of comparable difficulty to the original binary problems.

I will provide some justification for these conclusions below the fold; they are reasonably well known “folklore” to many researchers in the field, but it seems that they are rarely made explicit in the literature (in part because these arguments are, by their nature, heuristic instead of rigorous) and I have been asked about them from time to time, so I decided to try to write them down here.

In view of the above conclusions, it seems that the best one can hope to do by using the circle method for the twin prime or even Goldbach problems is to reformulate such problems into a statement of roughly comparable difficulty to the original problem, even if one assumes powerful conjectures such as the Generalised Riemann Hypothesis (which gives very precise control on major arc exponential sums, but not on minor arc ones). These are not rigorous conclusions – after all, we have already seen that one can always artificially insert the circle method into any viable approach on these problems – but they do strongly suggest that one needs a method other than the circle method in order to fully solve either of these two problems. I do not know what such a method would be, though I can give some heuristic objections to some of the other popular methods used in additive number theory (such as sieve methods, or more recently the use of inverse theorems); this will be done at the end of this post.

Read the rest of this entry »

I’ve just uploaded to the arXiv my paper “Every odd number greater than 1 is the sum of at most five primes“, submitted to Mathematics of Computation. The main result of the paper is as stated in the title, and is in the spirit of (though significantly weaker than) the even Goldbach conjecture (every even natural number is the sum of at most two primes) and odd Goldbach conjecture (every odd natural number greater than 1 is the sum of at most three primes). It also improves on a result of Ramaré that every even natural number is the sum of at most six primes. This result had previously also been established by Kaniecki under the additional assumption of the Riemann hypothesis, so one can view the main result here as an unconditional version of Kaniecki’s result.

The method used is the Hardy-Littlewood circle method, which was for instance also used to prove Vinogradov’s theorem that every sufficiently large odd number is the sum of three primes. Let’s quickly recall how this argument works. It is convenient to use a proxy for the primes, such as the von Mangoldt function {\Lambda}, which is mostly supported on the primes. To represent a large number {x} as the sum of three primes, it suffices to obtain a good lower bound for the sum

\displaystyle  \sum_{n_1,n_2,n_3: n_1+n_2+n_3=x} \Lambda(n_1) \Lambda(n_2) \Lambda(n_3).

By Fourier analysis, one can rewrite this sum as an integral

\displaystyle  \int_{{\bf R}/{\bf Z}} S(x,\alpha)^3 e(-x\alpha)\ d\alpha

where

\displaystyle  S(x,\alpha) := \sum_{n \leq x} \Lambda(n) e(n\alpha)

and {e(\theta) :=e^{2\pi i \theta}}. To control this integral, one then needs good bounds on {S(x,\alpha)} for various values of {\alpha}. To do this, one first approximates {\alpha} by a rational {a/q} with controlled denominator {q} (using a tool such as the Dirichlet approximation theorem). The analysis then broadly bifurcates into the major arc case when {q} is small, and the minor arc case when {q} is large. In the major arc case, the problem more or less boils down to understanding sums such as

\displaystyle  \sum_{n\leq x} \Lambda(n) e(an/q),

which in turn is almost equivalent to understanding the prime number theorem in arithmetic progressions modulo {q}. In the minor arc case, the prime number theorem is not strong enough to give good bounds (unless one is using some extremely strong hypotheses, such as the generalised Riemann hypothesis), so instead one uses a rather different method, using truncated versions of divisor sum identities such as {\Lambda(n) =\sum_{d|n} \mu(d) \log\frac{n}{d}} to split {S(x,\alpha)} into a collection of linear and bilinear sums that are more tractable to bound, typical examples of which (after using a particularly simple truncated divisor sum identity known as Vaughan’s identity) include the “Type I sum”

\displaystyle \sum_{d \leq U} \mu(d) \sum_{n \leq x/d} \log(n) e(\alpha dn)

and the “Type II sum”

\displaystyle  \sum_{d > U} \sum_{w > V} \mu(d) (\sum_{b|w: b > V} \Lambda(b)) e(\alpha dw) 1_{dw \leq x}.

After using tools such as the triangle inequality or Cauchy-Schwarz inequality to eliminate arithmetic functions such as {\mu(d)} or {\sum_{b|w: b>V}\Lambda(b)}, one ends up controlling plain exponential sums such as {\sum_{V < w < x/d} e(\alpha dw)}, which can be efficiently controlled in the minor arc case.
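
The divisor sum identity {\Lambda(n) = \sum_{d|n} \mu(d) \log\frac{n}{d}} underlying these decompositions is itself easy to sanity-check numerically; the following short Python sketch (purely for illustration) compares the two sides for small {n}:

    import math

    def mobius(n):
        result, d = 1, 2
        while d * d <= n:
            if n % d == 0:
                n //= d
                if n % d == 0:
                    return 0
                result = -result
            d += 1
        return -result if n > 1 else result

    def von_mangoldt(n):
        # log p if n is a power of a prime p, and 0 otherwise
        for p in range(2, n + 1):
            if n % p == 0:
                while n % p == 0:
                    n //= p
                return math.log(p) if n == 1 else 0.0
        return 0.0

    for n in range(2, 200):
        lhs = von_mangoldt(n)
        rhs = sum(mobius(d) * math.log(n / d) for d in range(1, n + 1) if n % d == 0)
        assert abs(lhs - rhs) < 1e-9
    print("Lambda(n) = sum_{d|n} mu(d) log(n/d) verified for 2 <= n < 200")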

This argument works well when {x} is extremely large, but starts running into problems for moderate sized {x}, e.g. {x \sim 10^{30}}. The first issue is that of logarithmic losses in the minor arc estimates. A typical minor arc estimate takes the shape

\displaystyle  |S(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^3 x \ \ \ \ \ (1)

when {\alpha} is close to {a/q} for some {1\leq q\leq x}. This only improves upon the trivial estimate {|S(x,\alpha)| \ll x} from the prime number theorem when {\log^6 x \ll q \ll x/\log^6 x}. As a consequence, it becomes necessary to obtain an accurate prime number theorem in arithmetic progressions with modulus as large as {\log^6 x}. However, with current technology, the error terms in such theorems are quite poor (terms such as {O(\exp(-c\sqrt{\log x}) x)} for some small {c>0} are typical, and there is also a notorious “Siegel zero” problem), and as a consequence, the method is generally only applicable for very large {x}. For instance, the best explicit result of Vinogradov type known currently is due to Liu and Wang, who established that all odd numbers larger than {10^{1340}} are the sum of three odd primes. (However, on the assumption of the GRH, the full odd Goldbach conjecture is known to be true; this is a result of Deshouillers, Effinger, te Riele, and Zinoviev.)

In this paper, we make a number of refinements to the general scheme, each one of which is individually rather modest and not all that novel, but which when added together turn out to be enough to resolve the five primes problem (though many more ideas would still be needed to tackle the three primes problem, and as is well known the circle method is very unlikely to be the route to make progress on the two primes problem). The first refinement, which is only available in the five primes case, is to take advantage of the numerical verification of the even Goldbach conjecture up to some large {N_0} (we take {N_0=4\times 10^{14}}, using a verification of Richstein, although there are now much larger values of {N_0} – as high as {2.6 \times 10^{18}} – for which the conjecture has been verified). As such, instead of trying to represent an odd number {x} as the sum of five primes, we can represent it as the sum of three odd primes and a natural number between {2} and {N_0}. This effectively brings us back to the three primes problem, but with the significant additional boost that one can essentially restrict the frequency variable {\alpha} to be of size {O(1/N_0)}. In practice, this eliminates all of the major arcs except for the principal arc around {0}. This is a significant simplification, in particular avoiding the need to deal with the prime number theorem in arithmetic progressions (and all the attendant theory of L-functions, Siegel zeroes, etc.).

In a similar spirit, by taking advantage of the numerical verification of the Riemann hypothesis up to some height {T_0}, and using the explicit formula relating the von Mangoldt function with the zeroes of the zeta function, one can safely deal with the principal major arc {\{ \alpha = O( T_0 / x ) \}}. For our specific application, we use the value {T_0= 3.29 \times 10^9}, arising from the verification of the Riemann hypothesis for the first {10^{10}} zeroes by van de Lune (unpublished) and Wedeniwski. (Such verifications have since been extended further, the latest being that the first {10^{13}} zeroes lie on the line.)

To make the contribution of the major arc as efficient as possible, we borrow an idea from a paper of Bourgain, and restrict one of the three primes in the three-primes problem to a somewhat shorter range than the other two (of size {O(x/K)} instead of {O(x)}, where we take {K} to be something like {10^3}), as this largely eliminates the “Archimedean” losses coming from trying to use Fourier methods to control convolutions on {{\bf R}}. In our paper, we set the scale parameter {K} to be {10^3} (basically, anything that is much larger than {1} but much less than {T_0} will work), but we found that an additional gain (which we ended up not using) could be obtained by averaging {K} over a range of scales, say between {10^3} and {10^6}. This sort of averaging could be a useful trick in future work on Goldbach-type problems.

It remains to treat the contribution of the “minor arc” {T_0/x \ll |\alpha| \ll 1/N_0}. To do this, one needs good {L^2} and {L^\infty} type estimates on the exponential sum {S(x,\alpha)}. Plancherel’s theorem gives an {L^2} estimate which loses a logarithmic factor, but it turns out that on this particular minor arc one can use tools from the theory of the large sieve (such as Montgomery’s uncertainty principle) to eliminate this logarithmic loss almost completely; it turns out that the most efficient way to do this is to use an effective upper bound of Siebert on the number of prime pairs {(p,p+h)} less than {x} to obtain an {L^2} bound that only loses a factor of {8} (or of {7}, once one cuts out the major arc).

For {L^\infty} estimates, it turns out that existing effective versions of (1) (in particular, the bound given by Chen and Wang) are insufficient, due to the three logarithmic factors of {\log x} in the bound. By using a smoothed out version {S_\eta(x,\alpha) :=\sum_{n}\Lambda(n) e(n\alpha) \eta(n/x)} of the sum {S(x,\alpha)}, for some suitable cutoff function {\eta}, one can save one factor of a logarithm, obtaining a bound of the form

\displaystyle  |S_\eta(x,\alpha)| \ll (\frac{x}{\sqrt{q}}+\frac{x}{\sqrt{x/q}} + x^{4/5}) \log^2 x

with effective constants. One can improve the constants further by restricting all summations to odd integers (which barely affects {S_\eta(x,\alpha)}, since {\Lambda} was mostly supported on odd numbers anyway), which in practice reduces the effective constants by a factor of two or so. One can also make further improvements in the constants by using the very sharp large sieve inequality to control the “Type II” sums that arise from Vaughan’s identity, and by using integration by parts to improve the bounds on the “Type I” sums. A final gain can then be extracted by optimising the cutoff parameters {U, V} appearing in Vaughan’s identity to minimise the contribution of the Type II sums (which, in practice, are the dominant term). Combining all these improvements, one ends up with bounds of the shape

\displaystyle  |S_\eta(x,\alpha)| \ll \frac{x}{q} \log^2 x + \frac{x}{\sqrt{q}} \log^2 q

when {q} is small (say {1 < q < x^{1/3}}) and

\displaystyle  |S_\eta(x,\alpha)| \ll \frac{x}{(x/q)^2} \log^2 x + \frac{x}{\sqrt{x/q}} \log^2(x/q)

when {q} is large (say {x^{2/3} < q < x}). (See the paper for more explicit versions of these estimates.) The point here is that the {\log x} factors have been partially replaced by smaller logarithmic factors such as {\log q} or {\log x/q}. Putting together all of these improvements, one can finally obtain a satisfactory bound on the minor arc. (There are still some terms with a {\log x} factor in them, but we use the effective Vinogradov theorem of Liu and Wang to upper bound {\log x} by {3100}, which ends up making the remaining terms involving {\log x} manageable.)
