
As in all previous posts in this series, we adopt the following asymptotic notation: ${x}$ is a parameter going off to infinity, and all quantities may depend on ${x}$ unless explicitly declared to be “fixed”. The asymptotic notation ${O(), o(), \ll}$ is then defined relative to this parameter. A quantity ${q}$ is said to be of polynomial size if one has ${q = O(x^{O(1)})}$, and bounded if ${q=O(1)}$. We also write ${X \lessapprox Y}$ for ${X \ll x^{o(1)} Y}$, and ${X \sim Y}$ for ${X \ll Y \ll X}$.

The purpose of this (rather technical) post is both to roll over the polymath8 research thread from this previous post, and to record the details of the latest improvement to the Type I estimates (based on exploiting additional averaging and using Deligne’s proof of the Weil conjectures), which leads to a slight improvement in the numerology.

In order to obtain this new Type I estimate, we need to strengthen the previously used properties of “dense divisibility” or “double dense divisibility” as follows.

Definition 1 (Multiple dense divisibility) Let ${y \geq 1}$. For each natural number ${k \geq 0}$, we define a notion of ${k}$-tuply ${y}$-dense divisibility recursively as follows:

• Every natural number ${n}$ is ${0}$-tuply ${y}$-densely divisible.
• If ${k \geq 1}$ and ${n}$ is a natural number, we say that ${n}$ is ${k}$-tuply ${y}$-densely divisible if, whenever ${i,j \geq 0}$ are natural numbers with ${i+j=k-1}$, and ${1 \leq R \leq n}$, one can find a factorisation ${n = qr}$ with ${y^{-1} R \leq r \leq R}$ such that ${q}$ is ${i}$-tuply ${y}$-densely divisible and ${r}$ is ${j}$-tuply ${y}$-densely divisible.

We let ${{\mathcal D}^{(k)}_y}$ denote the set of ${k}$-tuply ${y}$-densely divisible numbers. We abbreviate “${1}$-tuply densely divisible” as “densely divisible”, “${2}$-tuply densely divisible” as “doubly densely divisible”, and so forth; we also abbreviate ${{\mathcal D}^{(1)}_y}$ as ${{\mathcal D}_y}$.
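Since the definition is recursive, it may be clarifying to see it implemented directly. The following Python sketch (names illustrative; a brute-force check that is only feasible for small ${n}$) tests ${k}$-tuply ${y}$-dense divisibility; quantifying over all real ${1 \leq R \leq n}$ is equivalent to asking that the intervals ${[r, yr]}$, taken over the admissible divisors ${r}$, cover ${[1,n]}$.

```python
def is_ktuply_densely_divisible(n, k, y):
    """Brute-force check of k-tuply y-dense divisibility (Definition 1)."""
    if k == 0:
        return True  # every natural number is 0-tuply y-densely divisible
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    for i in range(k):
        j = k - 1 - i
        # divisors r for which the factorisation n = (n/r) * r qualifies
        good = sorted(r for r in divisors
                      if is_ktuply_densely_divisible(n // r, i, y)
                      and is_ktuply_densely_divisible(r, j, y))
        # a real R in [1, n] admits such an r with R/y <= r <= R exactly when
        # the intervals [r, y*r] over good r cover all of [1, n]
        reach = 1.0
        for r in good:
            if r > reach:
                return False  # a real R just above `reach` has no admissible r
            reach = max(reach, y * r)
        if reach < n:
            return False
    return True
```

For instance, ${64 = 2^6}$ is ${k}$-tuply ${2}$-densely divisible for every ${k}$, while ${62 = 2 \times 31}$ is not even (singly) ${2}$-densely divisible, since no divisor of ${62}$ lies between ${4}$ and ${31}$.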

Given any finitely supported sequence ${\alpha: {\bf N} \rightarrow {\bf C}}$ and any primitive residue class ${a\ (q)}$, we define the discrepancy

$\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n).$
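Concretely, the discrepancy of a toy sequence can be computed directly from this definition (the code below is an illustrative sketch; the function names are not from any standard library):

```python
import math

def discrepancy(alpha, a, q):
    """Delta(alpha; a (q)) for a finitely supported sequence, given as a dict n -> alpha(n)."""
    phi_q = sum(1 for t in range(1, q + 1) if math.gcd(t, q) == 1)  # Euler phi(q)
    along_class = sum(v for n, v in alpha.items() if n % q == a % q)
    average = sum(v for n, v in alpha.items() if math.gcd(n, q) == 1) / phi_q
    return along_class - average

alpha = {n: 1.0 for n in range(1, 101)}  # indicator of [1, 100]
delta = discrepancy(alpha, 1, 4)         # residues 1 and 3 mod 4 are equidistributed here
```

Here ${\Delta(\alpha; 1\ (4)) = 0}$ exactly, since the ${50}$ integers up to ${100}$ coprime to ${4}$ split evenly between the two primitive classes.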

We now recall the key concept of a coefficient sequence, with some slight tweaks in the definitions that are technically convenient for this post.

Definition 2 A coefficient sequence is a finitely supported sequence ${\alpha: {\bf N} \rightarrow {\bf R}}$ that obeys the bounds

$\displaystyle |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (1)$

for all ${n}$, where ${\tau}$ is the divisor function.

• (i) A coefficient sequence ${\alpha}$ is said to be located at scale ${N}$ for some ${N \geq 1}$ if it is supported on an interval of the form ${[cN, CN]}$ for some ${1 \ll c < C \ll 1}$.
• (ii) A coefficient sequence ${\alpha}$ located at scale ${N}$ for some ${N \geq 1}$ is said to obey the Siegel-Walfisz theorem if one has

$\displaystyle | \Delta(\alpha 1_{(\cdot,q)=1}; a\ (r)) | \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (2)$

for any ${q,r \geq 1}$, any fixed ${A}$, and any primitive residue class ${a\ (r)}$.

• (iii) A coefficient sequence ${\alpha}$ is said to be smooth at scale ${N}$ for some ${N > 0}$ if it takes the form ${\alpha(n) = \psi(n/N)}$ for some smooth function ${\psi: {\bf R} \rightarrow {\bf C}}$ supported on an interval of size ${O(1)}$ and obeying the derivative bounds

$\displaystyle |\psi^{(j)}(t)| \ll \log^{O(1)} x \ \ \ \ \ (3)$

for all fixed ${j \geq 0}$ (note that the implied constant in the ${O()}$ notation may depend on ${j}$).

Note that we allow sequences to be smooth at scale ${N}$ without being located at scale ${N}$; for instance, if one arbitrarily translates a sequence that is both smooth and located at scale ${N}$, it remains smooth at this scale but may no longer be located at this scale. Note also that we allow the smoothness scale ${N}$ of a coefficient sequence to be less than one. This is to allow for the following convenient rescaling property: if ${n \mapsto \psi(n)}$ is smooth at scale ${N}$, ${q \geq 1}$, and ${a}$ is an integer, then ${n \mapsto \psi(qn+a)}$ is smooth at scale ${N/q}$, even if ${N/q}$ is less than one.

Now we adapt the Type I estimate to the ${k}$-tuply densely divisible setting.

Definition 3 (Type I estimates) Let ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4+\varpi}$, and ${0 < \sigma < 1/2}$ be fixed quantities, and let ${k \geq 1}$ be a fixed natural number. We let ${I}$ be an arbitrary bounded subset of ${{\bf R}}$, let ${P_I := \prod_{p \in I} p}$, and let ${a\ (P_I)}$ be a primitive congruence class. We say that ${Type^{(k)}_I[\varpi,\delta,\sigma]}$ holds if, whenever ${M, N \gg 1}$ are quantities with

$\displaystyle M N \sim x \ \ \ \ \ (4)$

and

$\displaystyle x^{1/2-\sigma} \lessapprox N \lessapprox x^{1/2-2\varpi-c} \ \ \ \ \ (5)$

for some fixed ${c>0}$, and ${\alpha,\beta}$ are coefficient sequences located at scales ${M,N}$ respectively, with ${\beta}$ obeying a Siegel-Walfisz theorem, we have

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^{(k)}: q \leq x^{1/2+2\varpi}} |\Delta(\alpha * \beta; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (6)$

for any fixed ${A>0}$. Here, as in previous posts, ${{\mathcal S}_I}$ denotes the square-free natural numbers whose prime factors lie in ${I}$.

The main theorem of this post is then

Theorem 4 (Improved Type I estimate) We have ${Type^{(4)}_I[\varpi,\delta,\sigma]}$ whenever

$\displaystyle \frac{160}{3} \varpi + 16 \delta + \frac{34}{9} \sigma < 1$

and

$\displaystyle 64\varpi + 18\delta + 2\sigma < 1.$

In practice, the first condition here is dominant. Except for requiring quadruple dense divisibility in place of double dense divisibility, this improves upon the previous Type I estimate, which established ${Type^{(2)}_I[\varpi,\delta,\sigma]}$ under the stricter hypothesis

$\displaystyle 56 \varpi + 16 \delta + 4 \sigma < 1.$

As in previous posts, Type I estimates (when combined with existing Type II and Type III estimates) lead to distribution results of Motohashi-Pintz-Zhang type. For any fixed ${\varpi, \delta > 0}$ and ${k \geq 1}$, we let ${MPZ^{(k)}[\varpi,\delta]}$ denote the assertion that

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^{(k)}: q \leq x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (7)$

for any fixed ${A > 0}$, any bounded ${I}$, and any primitive ${a\ (P_I)}$, where ${\Lambda}$ is the von Mangoldt function.

Corollary 5 We have ${MPZ^{(4)}[\varpi,\delta]}$ whenever

$\displaystyle \frac{600}{7} \varpi + \frac{180}{7} \delta < 1 \ \ \ \ \ (8)$

Proof: Setting ${\sigma}$ sufficiently close to ${1/10}$, we see from the above theorem that ${Type^{(4)}_{I}[\varpi,\delta,\sigma]}$ holds whenever

$\displaystyle \frac{600}{7} \varpi + \frac{180}{7} \delta < 1$

and

$\displaystyle 80 \varpi + \frac{45}{2} \delta < 1.$

The second condition is implied by the first and can be deleted.
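The underlying numerology can be double-checked with exact rational arithmetic: substituting ${\sigma = 1/10}$ into the two conditions of Theorem 4 and normalising should recover the two displayed conditions. A quick sketch:

```python
from fractions import Fraction as F

sigma = F(1, 10)
# Theorem 4, first condition: (160/3)*varpi + 16*delta + (34/9)*sigma < 1
slack = 1 - F(34, 9) * sigma      # remaining room: 28/45
first_cond = (F(160, 3) / slack, F(16, 1) / slack)    # -> (600/7, 180/7)
# Theorem 4, second condition: 64*varpi + 18*delta + 2*sigma < 1
slack2 = 1 - 2 * sigma            # remaining room: 4/5
second_cond = (F(64, 1) / slack2, F(18, 1) / slack2)  # -> (80, 45/2)
```

Since ${600/7 > 80}$ and ${180/7 > 45/2}$ coefficientwise, the second condition is indeed implied by the first.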

From this previous post we know that ${Type^{(4)}_{II}[\varpi,\delta]}$ (which we define analogously to ${Type'_{II}[\varpi,\delta], Type''_{II}[\varpi,\delta]}$ from previous posts) holds whenever

$\displaystyle 68 \varpi + 14 \delta < 1$

while ${Type^{(4)}_{III}[\varpi,\delta,\sigma]}$ holds with ${\sigma}$ sufficiently close to ${1/10}$ whenever

$\displaystyle 70 \varpi + 5 \delta < 1.$

Again, these conditions are implied by (8). The claim then follows from the Heath-Brown identity and dyadic decomposition as in this previous post. $\Box$

As before, we let ${DHL[k_0,2]}$ denote the claim that given any admissible ${k_0}$-tuple ${{\mathcal H}}$, there are infinitely many translates of ${{\mathcal H}}$ that contain at least two primes.

Corollary 6 We have ${DHL[k_0,2]}$ with ${k_0 = 632}$.

This follows from the Pintz sieve, as discussed below the fold. Combining this with the best known admissible ${k_0}$-tuples, we obtain that there are infinitely many prime gaps of size at most ${4,680}$, improving slightly over the previous record of ${5,414}$.

If ${f: {\bf R}^n \rightarrow {\bf C}}$ and ${g: {\bf R}^n \rightarrow {\bf C}}$ are two absolutely integrable functions on a Euclidean space ${{\bf R}^n}$, then the convolution ${f*g: {\bf R}^n \rightarrow {\bf C}}$ of the two functions is defined by the formula

$\displaystyle f*g(x) := \int_{{\bf R}^n} f(y) g(x-y)\ dy = \int_{{\bf R}^n} f(x-z) g(z)\ dz.$

A simple application of the Fubini-Tonelli theorem shows that the convolution ${f*g}$ is well-defined almost everywhere, and yields another absolutely integrable function. In the case that ${f=1_F}$, ${g=1_G}$ are indicator functions, the convolution simplifies to

$\displaystyle 1_F*1_G(x) = m( F \cap (x-G) ) = m( (x-F) \cap G ) \ \ \ \ \ (1)$

where ${m}$ denotes Lebesgue measure. One can also define convolution on more general locally compact groups than ${{\bf R}^n}$, but we will restrict attention to the Euclidean case in this post.
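As a sanity check of (1), the one-dimensional convolution ${1_{[0,1]} * 1_{[0,1]}}$ can be computed by intersecting intervals; it is the triangle function, equal to ${x}$ on ${[0,1]}$, ${2-x}$ on ${[1,2]}$, and ${0}$ elsewhere. A minimal sketch:

```python
def overlap(a1, b1, a2, b2):
    """Lebesgue measure of [a1, b1] ∩ [a2, b2]."""
    return max(0.0, min(b1, b2) - max(a1, a2))

def conv_unit_intervals(x):
    # 1_[0,1] * 1_[0,1](x) = m([0,1] ∩ (x - [0,1])) = m([0,1] ∩ [x-1, x])
    return overlap(0.0, 1.0, x - 1.0, x)
```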

The convolution ${f*g}$ can also be defined by duality by observing the identity

$\displaystyle \int_{{\bf R}^n} f*g(x) h(x)\ dx = \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ f(y) dy g(z) dz$

for any bounded measurable function ${h: {\bf R}^n \rightarrow {\bf C}}$. Motivated by this observation, we may define the convolution ${\mu*\nu}$ of two finite Borel measures on ${{\bf R}^n}$ by the formula

$\displaystyle \int_{{\bf R}^n} h(x)\ d\mu*\nu(x) := \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (2)$

for any bounded (Borel) measurable function ${h: {\bf R}^n \rightarrow {\bf C}}$, or equivalently that

$\displaystyle \mu*\nu(E) = \int_{{\bf R}^n} \int_{{\bf R}^n} 1_E(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (3)$

for all Borel measurable ${E}$. (In another equivalent formulation: ${\mu*\nu}$ is the pushforward of the product measure ${\mu \times \nu}$ with respect to the addition map ${+: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n}$.) This can easily be verified to again be a finite Borel measure.

If ${\mu}$ and ${\nu}$ are probability measures, then the convolution ${\mu*\nu}$ also has a simple probabilistic interpretation: it is the law (i.e. probability distribution) of a random variable of the form ${X+Y}$, where ${X, Y}$ are independent random variables taking values in ${{\bf R}^n}$ with laws ${\mu,\nu}$ respectively. Among other things, this interpretation makes it obvious that ${\mu*\nu}$ is again a probability measure, and that the support of ${\mu*\nu}$ is the sumset of the supports of ${\mu}$ and ${\nu}$ (at least when those supports are compact; the situation is more subtle otherwise).
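For discrete (finitely supported) measures, the definition (3) (equivalently, the pushforward of ${\mu \times \nu}$ under addition) reduces to a finite double sum, and the probabilistic facts just mentioned can be read off directly. A minimal sketch, with measures represented as dicts mapping points to masses:

```python
from itertools import product

def convolve(mu, nu):
    """mu * nu for discrete measures {point: mass}, via formula (3)."""
    out = {}
    for (y, my), (z, mz) in product(mu.items(), nu.items()):
        out[y + z] = out.get(y + z, 0.0) + my * mz
    return out

mu = {0.0: 0.5, 1.0: 0.5}   # fair coin on {0, 1}
nu = {0.0: 0.5, 2.0: 0.5}   # fair coin on {0, 2}
law = convolve(mu, nu)       # law of X + Y for independent X ~ mu, Y ~ nu
```

The support of `law` is the sumset ${\{0,1,2,3\}}$ of the two supports, and the total mass is again ${1}$.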

While the above discussion gives a perfectly rigorous definition of the convolution of two measures, it does not always give helpful guidance as to how to compute the convolution of two explicit measures (e.g. the convolution of two surface measures on explicit examples of surfaces, such as the sphere). In simple cases, one can work from first principles directly from the definition (2), (3), perhaps after some application of tools from several variable calculus, such as the change of variables formula. Another technique proceeds by regularisation, approximating the measures ${\mu, \nu}$ involved as the weak limit (or vague limit) of absolutely integrable functions

$\displaystyle \mu = \lim_{\epsilon \rightarrow 0} f_\epsilon; \quad \nu =\lim_{\epsilon \rightarrow 0} g_\epsilon$

(where we identify an absolutely integrable function ${f}$ with the associated absolutely continuous measure ${dm_f(x) := f(x)\ dx}$) which then implies (assuming that the sequences ${f_\epsilon,g_\epsilon}$ are tight) that ${\mu*\nu}$ is the weak limit of the ${f_\epsilon * g_\epsilon}$. The latter convolutions ${f_\epsilon * g_\epsilon}$, being convolutions of functions rather than measures, can be computed (or at least estimated) by traditional integration techniques, at which point the only difficulty is to ensure that one has enough uniformity in ${\epsilon}$ to maintain control of the limit as ${\epsilon \rightarrow 0}$.

A third method proceeds using the Fourier transform

$\displaystyle \hat \mu(\xi) := \int_{{\bf R}^n} e^{-2\pi i x \cdot \xi}\ d\mu(x)$

of ${\mu}$ (and of ${\nu}$). We have

$\displaystyle \widehat{\mu*\nu}(\xi) = \hat{\mu}(\xi) \hat{\nu}(\xi)$

and so one can (in principle, at least) compute ${\mu*\nu}$ by taking Fourier transforms, multiplying them together, and applying the (distributional) inverse Fourier transform. Heuristically, this formula implies that the Fourier transform of ${\mu*\nu}$ should be concentrated in the intersection of the frequency region where the Fourier transform of ${\mu}$ is supported, and the frequency region where the Fourier transform of ${\nu}$ is supported. As the regularity of a measure is related to decay of its Fourier transform, this also suggests that the convolution ${\mu*\nu}$ of two measures will typically be more regular than each of the two original measures, particularly if the Fourier transforms of ${\mu}$ and ${\nu}$ are concentrated in different regions of frequency space (which should happen if the measures ${\mu,\nu}$ are suitably “transverse”). In particular, it can happen that ${\mu*\nu}$ is an absolutely continuous measure, even if ${\mu}$ and ${\nu}$ are both singular measures.
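The multiplicativity ${\widehat{\mu*\nu} = \hat\mu \hat\nu}$ is also easy to confirm numerically for discrete measures, whose Fourier transforms are finite exponential sums. A sketch (re-implementing the discrete convolution from definition (3)):

```python
import cmath
from itertools import product

def fourier(mu, xi):
    """Fourier transform of a discrete measure {point: mass} at frequency xi."""
    return sum(m * cmath.exp(-2j * cmath.pi * x * xi) for x, m in mu.items())

def convolve(mu, nu):
    """mu * nu for discrete measures {point: mass}, via formula (3)."""
    out = {}
    for (y, my), (z, mz) in product(mu.items(), nu.items()):
        out[y + z] = out.get(y + z, 0.0) + my * mz
    return out

mu = {0.0: 0.5, 1.0: 0.5}
nu = {0.0: 0.25, 0.5: 0.75}
xi = 0.3
lhs = fourier(convolve(mu, nu), xi)
rhs = fourier(mu, xi) * fourier(nu, xi)
```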

Using intuition from microlocal analysis, we can combine our understanding of the spatial and frequency behaviour of convolution to the following heuristic: a convolution ${\mu*\nu}$ should be supported in regions of phase space ${\{ (x,\xi): x \in {\bf R}^n, \xi \in {\bf R}^n \}}$ of the form ${(x,\xi) = (x_1+x_2,\xi)}$, where ${(x_1,\xi)}$ lies in the region of phase space where ${\mu}$ is concentrated, and ${(x_2,\xi)}$ lies in the region of phase space where ${\nu}$ is concentrated. It is a challenge to make this intuition perfectly rigorous, as one has to somehow deal with the obstruction presented by the Heisenberg uncertainty principle, but it can be made rigorous in various asymptotic regimes, for instance using the machinery of wave front sets (which describes the high frequency limit of the phase space distribution).

Let us illustrate these three methods and the final heuristic with a simple example. Let ${\mu}$ be a singular measure on the horizontal unit interval ${[0,1] \times \{0\} = \{ (x,0): 0 \leq x \leq 1 \}}$, given by weighting Lebesgue measure on that interval by some test function ${\phi: {\bf R} \rightarrow {\bf C}}$ supported on ${[0,1]}$:

$\displaystyle \int_{{\bf R}^2} f(x,y)\ d\mu(x,y) := \int_{\bf R} f(x,0) \phi(x)\ dx.$

Similarly, let ${\nu}$ be a singular measure on the vertical unit interval ${\{0\} \times [0,1] = \{ (0,y): 0 \leq y \leq 1 \}}$ given by weighting Lebesgue measure on that interval by another test function ${\psi: {\bf R} \rightarrow {\bf C}}$ supported on ${[0,1]}$:

$\displaystyle \int_{{\bf R}^2} g(x,y)\ d\nu(x,y) := \int_{\bf R} g(0,y) \psi(y)\ dy.$

We can compute the convolution ${\mu*\nu}$ using (2), which in this case becomes

$\displaystyle \int_{{\bf R}^2} h( x, y ) d\mu*\nu(x,y) = \int_{{\bf R}^2} \int_{{\bf R}^2} h(x_1+x_2, y_1+y_2)\ d\mu(x_1,y_1) d\nu(x_2,y_2)$

$\displaystyle = \int_{\bf R} \int_{\bf R} h( x_1, y_2 )\ \phi(x_1) dx_1 \psi(y_2) dy_2$

and we thus conclude that ${\mu*\nu}$ is an absolutely continuous measure on ${{\bf R}^2}$ with density function ${(x,y) \mapsto \phi(x) \psi(y)}$:

$\displaystyle d(\mu*\nu)(x,y) = \phi(x) \psi(y) dx dy. \ \ \ \ \ (4)$

In particular, ${\mu*\nu}$ is supported on the unit square ${[0,1]^2}$, which is of course the sumset of the two intervals ${[0,1] \times\{0\}}$ and ${\{0\} \times [0,1]}$.

We can arrive at the same conclusion from the regularisation method; the computations become lengthier, but more geometric in nature, and emphasise the role of transversality between the two segments supporting ${\mu}$ and ${\nu}$. One can view ${\mu}$ as the weak limit of the functions

$\displaystyle f_\epsilon(x,y) := \frac{1}{\epsilon} \phi(x) 1_{[0,\epsilon]}(y)$

as ${\epsilon \rightarrow 0}$ (where we continue to identify absolutely integrable functions with absolutely continuous measures, and of course we keep ${\epsilon}$ positive). We can similarly view ${\nu}$ as the weak limit of

$\displaystyle g_\epsilon(x,y) := \frac{1}{\epsilon} 1_{[0,\epsilon]}(x) \psi(y).$

Let us first look at the model case when ${\phi=\psi=1_{[0,1]}}$, so that ${f_\epsilon,g_\epsilon}$ are renormalised indicator functions of thin rectangles:

$\displaystyle f_\epsilon = \frac{1}{\epsilon} 1_{[0,1]\times [0,\epsilon]}; \quad g_\epsilon = \frac{1}{\epsilon} 1_{[0,\epsilon] \times [0,1]}.$

By (1), the convolution ${f_\epsilon*g_\epsilon}$ is then given by

$\displaystyle f_\epsilon*g_\epsilon(x,y) := \frac{1}{\epsilon^2} m( E_\epsilon )$

where ${E_\epsilon}$ is the intersection of two rectangles:

$\displaystyle E_\epsilon := ([0,1] \times [0,\epsilon]) \cap ((x,y) - [0,\epsilon] \times [0,1]).$

When ${(x,y)}$ lies in the square ${[\epsilon,1] \times [\epsilon,1]}$, one readily sees (especially if one draws a picture) that ${E_\epsilon}$ consists of an ${\epsilon \times \epsilon}$ square and thus has measure ${\epsilon^2}$; conversely, if ${(x,y)}$ lies outside ${[0,1+\epsilon] \times [0,1+\epsilon]}$, ${E_\epsilon}$ is empty and thus has measure zero. In the intermediate region, ${E_\epsilon}$ will have some measure between ${0}$ and ${\epsilon^2}$. From this we see that ${f_\epsilon*g_\epsilon}$ converges pointwise almost everywhere to ${1_{[0,1] \times [0,1]}}$ while also being dominated by an absolutely integrable function, and so converges weakly to ${1_{[0,1] \times [0,1]}}$, giving a special case of the formula (4).
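This computation can be mechanised: by (1), ${f_\epsilon * g_\epsilon(x,y)}$ is ${\epsilon^{-2}}$ times the area of ${E_\epsilon}$, and since ${E_\epsilon}$ is the intersection of two axis-parallel rectangles, its area factors into a product of two one-dimensional overlaps. A sketch:

```python
def overlap(a1, b1, a2, b2):
    """Length of [a1, b1] ∩ [a2, b2]."""
    return max(0.0, min(b1, b2) - max(a1, a2))

def f_eps_conv_g_eps(x, y, eps):
    # E_eps = ([0,1] x [0,eps]) ∩ ([x-eps, x] x [y-1, y]); its area factors
    area = overlap(0.0, 1.0, x - eps, x) * overlap(0.0, eps, y - 1.0, y)
    return area / eps**2
```

Evaluating at a point of ${[\epsilon,1]^2}$ gives ${1}$, and evaluating outside ${[0,1+\epsilon]^2}$ gives ${0}$, as computed above.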

Exercise 1 Use a similar method to verify (4) in the case that ${\phi, \psi}$ are continuous functions on ${[0,1]}$. (The argument also works for absolutely integrable ${\phi,\psi}$, but one needs to invoke the Lebesgue differentiation theorem to make it run smoothly.)

Now we compute with the Fourier-analytic method. The Fourier transform ${\hat \mu(\xi,\eta)}$ of ${\mu}$ is given by

$\displaystyle \hat \mu(\xi,\eta) =\int_{{\bf R}^2} e^{-2\pi i (x \xi + y \eta)}\ d\mu(x,y)$

$\displaystyle = \int_{\bf R} \phi(x) e^{-2\pi i x \xi}\ dx$

$\displaystyle = \hat \phi(\xi)$

where we abuse notation slightly by using ${\hat \phi}$ to refer to the one-dimensional Fourier transform of ${\phi}$. In particular, ${\hat \mu}$ decays in the ${\xi}$ direction (by the Riemann-Lebesgue lemma) but has no decay in the ${\eta}$ direction, which reflects the horizontally grained structure of ${\mu}$. Similarly we have

$\displaystyle \hat \nu(\xi,\eta) = \hat \psi(\eta),$

so that ${\hat \nu}$ decays in the ${\eta}$ direction. The convolution ${\mu*\nu}$ then has decay in both the ${\xi}$ and ${\eta}$ directions,

$\displaystyle \widehat{\mu*\nu}(\xi,\eta) = \hat \phi(\xi) \hat \psi(\eta)$

and by inverting the Fourier transform we obtain (4).

Exercise 2 Let ${AB}$ and ${CD}$ be two non-parallel line segments in the plane ${{\bf R}^2}$. If ${\mu}$ is the uniform probability measure on ${AB}$ and ${\nu}$ is the uniform probability measure on ${CD}$, show that ${\mu*\nu}$ is the uniform probability measure on the parallelogram ${AB + CD}$ with vertices ${A+C, A+D, B+C, B+D}$. What happens in the degenerate case when ${AB}$ and ${CD}$ are parallel?

Finally, we compare the above answers with what one gets from the microlocal analysis heuristic. The measure ${\mu}$ is supported on the horizontal interval ${[0,1] \times \{0\}}$, and the conormal directions at any point of this interval are vertical. Thus, the wave front set of ${\mu}$ should be supported on those points ${((x_1,x_2),(\xi_1,\xi_2))}$ in phase space with ${x_1 \in [0,1]}$, ${x_2 = 0}$ and ${\xi_1=0}$. Similarly, the wave front set of ${\nu}$ should be supported at those points ${((y_1,y_2),(\xi_1,\xi_2))}$ with ${y_1 = 0}$, ${y_2 \in [0,1]}$, and ${\xi_2=0}$. The convolution ${\mu * \nu}$ should then have wave front set supported on those points ${((x_1+y_1,x_2+y_2), (\xi_1,\xi_2))}$ with ${x_1 \in [0,1]}$, ${x_2 = 0}$, ${\xi_1=0}$, ${y_1=0}$, ${y_2 \in [0,1]}$, and ${\xi_2=0}$, i.e. it should be spatially supported on the unit square and have zero (rescaled) frequency, so the heuristic predicts a smooth function on the unit square, which is indeed what happens. (The situation is slightly more complicated in the non-smooth case ${\phi=\psi=1_{[0,1]}}$, because ${\mu}$ and ${\nu}$ then acquire some additional singularities at the endpoints; namely, the wave front set of ${\mu}$ now also contains those points ${((x_1,x_2),(\xi_1,\xi_2))}$ with ${x_1 \in \{0,1\}}$, ${x_2=0}$, and ${\xi_1,\xi_2}$ arbitrary, and ${\nu}$ similarly contains those points ${((y_1,y_2), (\xi_1,\xi_2))}$ with ${y_1=0}$, ${y_2 \in \{0,1\}}$, and ${\xi_1,\xi_2}$ arbitrary. I’ll leave it as an exercise to the reader to compute what this predicts for the wave front set of ${\mu*\nu}$, and how this compares with the actual wave front set.)

Exercise 3 Let ${\mu}$ be the uniform measure on the unit sphere ${S^{n-1}}$ in ${{\bf R}^n}$ for some ${n \geq 2}$. Use as many of the above methods as possible to establish multiple proofs of the following fact: the convolution ${\mu*\mu}$ is an absolutely continuous multiple ${f(x)\ dx}$ of Lebesgue measure, with ${f(x)}$ supported on the ball ${B(0,2)}$ of radius ${2}$ and obeying the bounds

$\displaystyle |f(x)| \ll \frac{1}{|x|}$

for ${|x| \leq 1}$ and

$\displaystyle |f(x)| \ll (2-|x|)^{(n-3)/2}$

for ${1 \leq |x| \leq 2}$, where the implied constants are allowed to depend on the dimension ${n}$. (Hint: try the ${n=2}$ case first, which is particularly simple due to the fact that the addition map ${+: S^1 \times S^1 \rightarrow {\bf R}^2}$ is mostly a local diffeomorphism. The Fourier-based approach is instructive, but requires either asymptotics of Bessel functions or the principle of stationary phase.)
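For the ${n=2}$ case of Exercise 3, at least, the answer can be tested numerically: if ${X, Y}$ are independent and uniform on ${S^1}$, then ${|X+Y| = 2|\cos(\theta/2)|}$ where ${\theta}$ is the (uniform) angle between them, so for instance ${{\bf P}(|X+Y| \leq 1) = 1/3}$ exactly. A Monte Carlo sketch (with a fixed seed for reproducibility):

```python
import math
import random

random.seed(0)
trials = 200000
count = 0
for _ in range(trials):
    # independent uniform points on the unit circle, parametrised by angle
    a = random.uniform(0.0, 2 * math.pi)
    b = random.uniform(0.0, 2 * math.pi)
    r = math.hypot(math.cos(a) + math.cos(b), math.sin(a) + math.sin(b))
    count += (r <= 1.0)
est = count / trials  # should approximate P(|X+Y| <= 1) = 1/3
```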

[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function ${\zeta}$, defined by

$\displaystyle \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$

for ${\hbox{Re}(s)>1}$ and extended meromorphically to other values of ${s}$, and asserts that the only zeroes of ${\zeta}$ in the critical strip ${\{ s: 0 \leq \hbox{Re}(s) \leq 1 \}}$ lie on the critical line ${\{ s: \hbox{Re}(s)=\frac{1}{2} \}}$.

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number ${n}$ has a unique factorisation ${n = p_1^{a_1} \ldots p_k^{a_k}}$ into primes. Taking logarithms, we obtain the identity

$\displaystyle \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)$

for any natural number ${n}$, where ${\Lambda}$ is the von Mangoldt function, thus ${\Lambda(n) = \log p}$ when ${n}$ is a power of a prime ${p}$ and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

$\displaystyle \sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \sum_{d|n} \frac{\Lambda(d)}{n^s},$

formally at least. Writing ${n=dm}$, the right-hand side factors as

$\displaystyle (\sum_{d=1}^\infty \frac{\Lambda(d)}{d^s}) (\sum_{m=1}^\infty \frac{1}{m^s}) = \zeta(s) \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}$

whereas the left-hand side is (formally, at least) equal to ${-\zeta'(s)}$. We conclude the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)},$

(formally, at least). If we integrate this, we are formally led to the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = \log \zeta(s)$

or equivalently to the exponential identity

$\displaystyle \zeta(s) = \exp( \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} ) \ \ \ \ \ (2)$

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using, of course, the fundamental theorem of arithmetic.) Now, as ${\zeta}$ has a simple pole at ${s=1}$ and zeroes at various places ${s=\rho}$ on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

$\displaystyle \zeta(s) = \frac{1}{s-1} \times \prod_\rho (s-\rho) \times \ldots$

(where we will be intentionally vague about what is hiding in the ${\ldots}$ terms) and so we expect an expansion of the form

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = - \log(s-1) + \sum_\rho \log(s-\rho) + \ldots. \ \ \ \ \ (3)$

Note that

$\displaystyle \frac{1}{s-\rho} = \int_1^\infty t^{\rho-s} \frac{dt}{t}$

and hence on integrating in ${s}$ we formally have

$\displaystyle \log(s-\rho) = -\int_1^\infty t^{\rho-s-1} \frac{dt}{\log t}$

and thus we have the heuristic approximation

$\displaystyle \log(s-\rho) \approx - \sum_{n=1}^\infty \frac{n^{\rho-s-1}}{\log n}.$

Comparing this with (3), we are led to a heuristic form of the explicit formula

$\displaystyle \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1}. \ \ \ \ \ (4)$

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function ${1_{[0,x]}(n)}$ to obtain the formula

$\displaystyle \sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (5)$

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that ${\hbox{Re}(\rho) = 1/2}$ for all zeroes ${\rho}$, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \ \ \ \ \ (6)$

as ${x \rightarrow \infty}$, giving a near-optimal “square root cancellation” for the sum ${\sum_{n \leq x} (\Lambda(n)-1)}$. Conversely, if one can somehow establish a bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )$

for any fixed ${\epsilon}$, then the explicit formula can be used to then deduce that all zeroes ${\rho}$ of ${\zeta}$ have real part at most ${1/2+\epsilon}$, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )$

can be automatically amplified to the stronger bound

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x )$

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line ${\hbox{Re}(s)=1}$, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

$\displaystyle \sum_{n \leq x} \Lambda(n) =x + o(x);$

see e.g. this previous blog post for more discussion.
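The identities above are easy to test numerically: the sketch below checks the divisor-sum identity (1) at ${n = 360}$, the exponential identity (2) at ${s=2}$ (where ${\zeta(2) = \pi^2/6}$) by truncating the sum, and the prime number theorem approximation ${\sum_{n \leq x} \Lambda(n) \approx x}$ at ${x = 1000}$.

```python
import math

def mangoldt(m):
    """Von Mangoldt function: log p if m = p^k for a prime p, else 0."""
    if m < 2:
        return 0.0
    p = next(d for d in range(2, m + 1) if m % d == 0)  # least prime factor
    while m % p == 0:
        m //= p
    return math.log(p) if m == 1 else 0.0

# identity (1): log n = sum of Lambda(d) over divisors d of n
divisor_sum = sum(mangoldt(d) for d in range(1, 361) if 360 % d == 0)

# identity (2) at s = 2, truncated at N = 2000: the sum approximates log zeta(2)
log_zeta_2 = sum(mangoldt(j) / math.log(j) * j**-2 for j in range(2, 2001))

# Chebyshev function psi(x) = sum of Lambda(n) for n <= x, at x = 1000
psi_1000 = sum(mangoldt(j) for j in range(1, 1001))
```

The truncation error in the second sum is dominated by ${\sum_{n > 2000} n^{-2}}$, which is far smaller than the tolerance used below.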

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character ${\chi: {\bf Z} \rightarrow {\bf C}}$. The analogue of the Riemann zeta function is then the Dirichlet ${L}$-function

$\displaystyle L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.$

As ${\chi}$ is completely multiplicative, the equation (1), which encoded the fundamental theorem of arithmetic, can be twisted by ${\chi}$ to obtain

$\displaystyle \chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d) \chi(\frac{n}{d}) \ \ \ \ \ (7)$

and essentially the same manipulations as before eventually lead to the exponential identity

$\displaystyle L(s,\chi) = \exp( \sum_{n=1}^\infty \frac{\chi(n) \Lambda(n)}{\log n} n^{-s} ), \ \ \ \ \ (8)$

which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form

$\displaystyle \chi(n) \Lambda(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (9)$

for non-principal ${\chi}$, where ${\rho}$ now ranges over the zeroes of ${L(s,\chi)}$ in the critical strip, rather than the zeroes of ${\zeta(s)}$; a more accurate formulation, following (5), would be

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) \approx - \sum_\rho \frac{x^{\rho}}{\rho}. \ \ \ \ \ (10)$

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet ${L}$-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of ${L(s,\chi)}$ in the critical strip also lie on the critical line, then we obtain the bound

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2} \log(x) \log(xq) )$

for any non-principal Dirichlet character ${\chi}$, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2+o(1)} )$

(where ${o(1)}$ denotes a quantity that goes to zero as ${x \rightarrow \infty}$ for any fixed ${q}$). Next, one can consider other number systems than the natural numbers ${{\bf N}}$ and integers ${{\bf Z}}$. For instance, one can replace the integers ${{\bf Z}}$ with rings ${{\mathcal O}_K}$ of integers in other number fields ${K}$ (i.e. finite extensions of ${{\bf Q}}$), such as the quadratic extensions ${K = {\bf Q}[\sqrt{D}]}$ of the rationals for various square-free integers ${D}$, in which case the ring of integers would be the ring of quadratic integers ${{\mathcal O}_K = {\bf Z}[\omega]}$ for a suitable generator ${\omega}$ (it turns out that one can take ${\omega = \sqrt{D}}$ if ${D=2,3\hbox{ mod } 4}$, and ${\omega = \frac{1+\sqrt{D}}{2}}$ if ${D=1 \hbox{ mod } 4}$). Here, it is not immediately obvious what the analogue of the natural numbers ${{\bf N}}$ is in this setting, since rings such as ${{\bf Z}[\omega]}$ do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number ${n}$ generates a principal ideal ${(n) = \{ an: a \in {\bf Z} \}}$ in the integers, and conversely every non-trivial ideal ${{\mathfrak n}}$ in the integers is associated to precisely one natural number ${n}$ in this fashion, namely the norm ${N({\mathfrak n}) := |{\bf Z} / {\mathfrak n}|}$ of that ideal. So one can identify the natural numbers with the ideals of ${{\bf Z}}$. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if ${p}$ is prime, and ${a,b}$ are integers, then ${ab \in (p)}$ if and only if one of ${a \in (p)}$ or ${b \in (p)}$ is true. Finally, even in number systems (such as ${{\bf Z}[\sqrt{-5}]}$) in which the classical version of the fundamental theorem of arithmetic fail (e.g. 
${6 = 2 \times 3 = (1-\sqrt{-5})(1+\sqrt{-5})}$), we have the fundamental theorem of arithmetic for ideals: every ideal ${\mathfrak{n}}$ in a Dedekind domain (which includes the ring ${{\mathcal O}_K}$ of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals ${\mathfrak{p}}$ (although these ideals might not necessarily be principal). For instance, in ${{\bf Z}[\sqrt{-5}]}$, the principal ideal ${(6)}$ factors as the product of four prime (but non-principal) ideals ${(2, 1+\sqrt{-5})}$, ${(2, 1-\sqrt{-5})}$, ${(3, 1+\sqrt{-5})}$, ${(3, 1-\sqrt{-5})}$. (Note that the first two ideals ${(2,1+\sqrt{-5}), (2,1-\sqrt{-5})}$ are actually equal to each other.) Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. The analogue of the Riemann zeta function is now the Dedekind zeta function

$\displaystyle \zeta_K(s) := \sum_{{\mathfrak n}} \frac{1}{N({\mathfrak n})^s}$

where the summation is over all non-trivial ideals in ${{\mathcal O}_K}$. One can also define a von Mangoldt function ${\Lambda_K({\mathfrak n})}$, defined as ${\log N( {\mathfrak p})}$ when ${{\mathfrak n}}$ is a power of a prime ideal ${{\mathfrak p}}$, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

$\displaystyle \log N({\mathfrak n}) = \sum_{{\mathfrak d}|{\mathfrak n}} \Lambda_K({\mathfrak d}) \ \ \ \ \ (11)$

which leads as before to an exponential identity

$\displaystyle \zeta_K(s) = \exp( \sum_{{\mathfrak n}} \frac{\Lambda_K({\mathfrak n})}{\log N({\mathfrak n})} N({\mathfrak n})^{-s} ) \ \ \ \ \ (12)$

and an explicit formula of the heuristic form

$\displaystyle \Lambda_K({\mathfrak n}) \approx 1 - \sum_\rho N({\mathfrak n})^{\rho-1}$

or, a little more accurately,

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (13)$

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( \sqrt{x} \log(x) (d+\log(Dx)) )$

where ${D}$ is the conductor of ${K}$ (which, in the case of number fields, is the absolute value of the discriminant of ${K}$) and ${d = \hbox{dim}_{\bf Q}(K)}$ is the degree of ${K}$ over ${{\bf Q}}$. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( x^{1/2+o(1)} )$

where ${o(1)}$ denotes a quantity that goes to zero as ${x \rightarrow \infty}$ (holding ${K}$ fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.

As was the case with the Dirichlet ${L}$-functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line ${{\mathbb A}^1}$ and a finite field ${{\mathbb F} = {\mathbb F}_q}$ of some order ${q}$. The polynomial functions on the affine line ${{\mathbb A}^1/{\mathbb F}}$ form the usual polynomial ring ${{\mathbb F}[t]}$, which then plays the role of the integers ${{\bf Z}}$ (or ${{\mathcal O}_K}$) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm ${N(f)}$ of a polynomial is the order of ${{\mathbb F}[t] / (f)}$, which can be computed explicitly as

$\displaystyle N(f) = q^{\hbox{deg}(f)}.$

Because of this, we will normalise things slightly differently here and use ${\hbox{deg}(f)}$ in place of ${\log N(f)}$ in what follows. The (local) zeta function ${\zeta_{{\mathbb A}^1/{\mathbb F}}(s)}$ is then defined as

$\displaystyle \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = \sum_f \frac{1}{N(f)^s}$

where ${f}$ ranges over monic polynomials, and the von Mangoldt function ${\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}$ is defined to equal ${\hbox{deg}(g)}$ when ${f}$ is a power of a monic irreducible polynomial ${g}$, and zero otherwise. Note that because ${N(f)}$ is always a power of ${q}$, the zeta function here is in fact periodic with period ${2\pi i / \log q}$. Because of this, it is customary to make a change of variables ${T := q^{-s}}$, so that

$\displaystyle \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = Z( {\mathbb A}^1/{\mathbb F}, T )$

and ${Z}$ is the renormalised zeta function

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_f T^{\hbox{deg}(f)}. \ \ \ \ \ (14)$

We have the analogue of (1) (or (7) or (11)):

$\displaystyle \hbox{deg}(f) = \sum_{g|f} \Lambda_{{\mathbb A}^1/{\mathbb F}}(g), \ \ \ \ \ (15)$

which leads as before to an exponential identity

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \exp( \sum_f \frac{\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}{\hbox{deg}(f)} T^{\hbox{deg}(f)} ) \ \ \ \ \ (16)$

analogous to (2), (8), or (12). It also leads to the explicit formula

$\displaystyle \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - \sum_\rho N(f)^{\rho-1}$

where ${\rho}$ are the zeroes of the original zeta function ${\zeta_{{\mathbb A}^1/{\mathbb F}}(s)}$ (counting each residue class of the period ${2\pi i/\log q}$ just once), or equivalently

$\displaystyle \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - q^{-\hbox{deg}(f)} \sum_\alpha \alpha^{\hbox{deg}(f)},$

where ${\alpha}$ are the reciprocals of the roots of the normalised zeta function ${Z( {\mathbb A}^1/{\mathbb F}, T )}$ (or to put it another way, ${1-\alpha T}$ are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx q^n - \sum_\alpha \alpha^n.$

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant term, thus

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (17)$

for an explicit integer ${c}$ (independent of ${n}$) arising from any potential pole of ${Z}$ at ${T=1}$. In the case of the affine line ${{\mathbb A}^1}$, the situation is particularly simple, because the zeta function ${Z( {\mathbb A}^1/{\mathbb F}, T)}$ is easy to compute. Indeed, since there are exactly ${q^n}$ monic polynomials of a given degree ${n}$, we see from (14) that

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1-qT}$

so in fact there are no zeroes whatsoever, and no pole at ${T=1}$ either, so we have an exact prime number theorem for this function field:

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n \ \ \ \ \ (18)$

Among other things, this tells us that the number of irreducible monic polynomials of degree ${n}$ is ${q^n/n + O(q^{n/2})}$.
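To see (18) in action, one can check it by brute force for small ${q}$ and ${n}$. The following Python sketch (an illustration only, not part of the argument) enumerates monic polynomials over ${{\mathbb F}_2}$ as bitmasks, sieves out the reducible ones, and verifies that the von Mangoldt sum in each degree equals ${2^n}$:

```python
# Brute-force check of the prime number theorem (18) for F_2[t].
# A polynomial over F_2 is stored as an integer bitmask
# (bit i = coefficient of t^i), so multiplication is carry-less.

def clmul(a, b):
    """Multiply two F_2[t] polynomials (carry-less multiplication)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

N = 6  # check degrees up to N

# Sieve out the reducible monic polynomials: every reducible monic
# polynomial of degree <= N is a product of two monic factors of
# positive degree.
composite = set()
for dg in range(1, N):
    for dh in range(1, N - dg + 1):
        for g in range(1 << dg, 1 << (dg + 1)):      # monic, deg g = dg
            for h in range(1 << dh, 1 << (dh + 1)):  # monic, deg h = dh
                composite.add(clmul(g, h))

# I[d] = number of monic irreducible polynomials of degree d over F_2
I = {d: sum(1 for f in range(1 << d, 1 << (d + 1)) if f not in composite)
     for d in range(1, N + 1)}

# Only prime powers g^k with k*deg(g) = n contribute to the von Mangoldt
# sum over degree n, each contributing deg(g), so the sum equals
# sum_{d | n} d * I[d]; by (18) this should be q^n = 2^n.
for n in range(1, N + 1):
    vm_sum = sum(d * I[d] for d in range(1, n + 1) if n % d == 0)
    assert vm_sum == 2 ** n
```

For instance, ${I_6 = 9}$, consistent with the count ${2^6/6 + O(2^3)}$ indicated above.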

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial ${f \in {\mathbb F}[t]}$ through its roots, which are a finite set of points in the algebraic closure ${\overline{{\mathbb F}}}$ of the finite field ${{\mathbb F}}$ (or more suggestively, as points on the affine line ${{\mathbb A}^1( \overline{{\mathbb F}} )}$). The number of such points (counting multiplicity) is the degree of ${f}$, and from the factor theorem, the set of points determines the monic polynomial ${f}$ (or, if one removes the monic hypothesis, it determines the polynomial ${f}$ projectively). These points have an action of the Galois group ${\hbox{Gal}( \overline{{\mathbb F}} / {\mathbb F} )}$. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map ${\hbox{Frob}: x \mapsto x^q}$, which fixes the elements of the original finite field ${{\mathbb F}}$ but permutes the other elements of ${\overline{{\mathbb F}}}$. Thus the roots of a given polynomial ${f}$ split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if ${f}$ is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.

Now consider the degree ${n}$ finite field extension ${{\mathbb F}_n}$ of ${{\mathbb F}}$ (it is a classical fact that there is exactly one such extension up to isomorphism for each ${n}$); this is a subfield of ${\overline{{\mathbb F}}}$ of order ${q^n}$. (Here we are performing a standard abuse of notation by overloading the subscripts in the ${{\mathbb F}}$ notation; thus ${{\mathbb F}_q}$ denotes the field of order ${q}$, while ${{\mathbb F}_n}$ denotes the extension of ${{\mathbb F} = {\mathbb F}_q}$ of order ${n}$, so that we in fact have ${{\mathbb F}_n = {\mathbb F}_{q^n}}$ if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point ${x}$ in this extension (or, more suggestively, the affine line ${{\mathbb A}^1({\mathbb F}_n)}$ over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of ${x}$. Since the Frobenius action is periodic of period ${n}$ on ${{\mathbb F}_n}$, the degree of this minimal polynomial must divide ${n}$. Conversely, every monic irreducible polynomial of degree ${d}$ dividing ${n}$ produces ${d}$ distinct zeroes that lie in ${{\mathbb F}_d}$ (here we use the classical fact that finite fields are perfect) and hence in ${{\mathbb F}_n}$. We have thus partitioned ${{\mathbb A}^1({\mathbb F}_n)}$ into Frobenius orbits (also known as closed points), with each monic irreducible polynomial ${f}$ of degree ${d}$ dividing ${n}$ contributing an orbit of size ${d = \hbox{deg}(f) = \Lambda_{{\mathbb A}^1/{\mathbb F}}(f^{n/d})}$. From this we conclude a geometric interpretation of the left-hand side of (18):

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = \sum_{x \in {\mathbb A}^1({\mathbb F}_n)} 1. \ \ \ \ \ (19)$
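As an illustration of (19) and the orbit decomposition just described, here is a short Python sketch (using the standard model ${{\mathbb F}_{16} = {\mathbb F}_2[t]/(t^4+t+1)}$, an arbitrary choice made for concreteness) that partitions ${{\mathbb A}^1({\mathbb F}_{16})}$ into Frobenius orbits and matches the orbit sizes against the counts of monic irreducible polynomials over ${{\mathbb F}_2}$:

```python
# Orbit decomposition behind (19) for F = F_2 and n = 4: build
# F_16 = F_2[t]/(t^4 + t + 1), iterate the Frobenius map x -> x^2,
# and compare orbit sizes with counts of monic irreducibles over F_2.
from collections import Counter

def gf16_mul(a, b):
    """Multiply in F_16, elements encoded as 4-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0b10011  # reduce modulo t^4 + t + 1
    return r

def frob(x):
    """The Frobenius map x -> x^2 on F_16."""
    return gf16_mul(x, x)

# Partition A^1(F_16) = {0, ..., 15} into Frobenius orbits (closed points).
seen, orbit_sizes = set(), []
for x in range(16):
    if x in seen:
        continue
    orbit, y = set(), x
    while y not in orbit:
        orbit.add(y)
        y = frob(y)
    seen |= orbit
    orbit_sizes.append(len(orbit))

# Each orbit is the root set of one monic irreducible polynomial of
# degree d | 4; over F_2 there are 2 of degree 1, 1 of degree 2, and
# 3 of degree 4, and the orbit sizes sum to 16 as in (19).
assert Counter(orbit_sizes) == Counter({1: 2, 2: 1, 4: 3})
assert sum(orbit_sizes) == 16
```

The orbit sizes come out as ${1,1,2,4,4,4}$, matching the two linear, one quadratic, and three quartic monic irreducible polynomials over ${{\mathbb F}_2}$.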

The identity (18) thus is equivalent to the thoroughly boring fact that the number of ${{\mathbb F}_n}$-points on the affine line ${{\mathbb A}^1}$ is equal to ${q^n}$. However, things become much more interesting if one then replaces the affine line ${{\mathbb A}^1}$ by a more general (geometrically) irreducible curve ${C}$ defined over ${{\mathbb F}}$; for instance one could take ${C}$ to be an elliptic curve

$\displaystyle E = \{ (x,y): y^2 = x^3 + ax + b \} \ \ \ \ \ (20)$

for some suitable ${a,b \in {\mathbb F}}$, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of ${{\mathbb F}}$-rational points removed). The analogue of ${{\mathbb F}[t]}$ is then the coordinate ring of ${C}$ (for instance, in the case of the elliptic curve (20) it would be ${{\mathbb F}[x,y] / (y^2-x^3-ax-b)}$), with polynomials in this ring producing a set of roots in the curve ${C( \overline{\mathbb F})}$ that is again invariant with respect to the Frobenius action (acting on the ${x}$ and ${y}$ coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on ${C}$ will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

$\displaystyle \zeta_{C/{\mathbb F}}(s) = \sum_{{\mathfrak f}} \frac{1}{N({\mathfrak f})^s}$

and a von Mangoldt function ${\Lambda_{C/{\mathbb F}}({\mathfrak f})}$ as before, where ${{\mathfrak f}}$ would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve ${C}$; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points ${\{x_1,\ldots,x_k\}}$ in ${C}$, or equivalently an effective divisor ${[x_1] + \ldots + [x_k]}$ of ${C}$; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of ${C}$. With this dictionary, the zeta function becomes

$\displaystyle \zeta_{C/{\mathbb F}}(s) = \sum_{D \geq 0} \frac{1}{q^{s \hbox{deg}(D)}}$

where the sum is over effective rational divisors ${D}$ of ${C}$ (with ${k}$ being the degree of an effective divisor ${[x_1] + \ldots + [x_k]}$), or equivalently

$\displaystyle Z( C/{\mathbb F}, T ) = \sum_{D \geq 0} T^{\hbox{deg}(D)}.$

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

$\displaystyle \sum_{N({\mathfrak f}) = q^n} \Lambda_{C/{\mathbb F}}({\mathfrak f}) = \sum_{x \in C({\mathbb F}_n)} 1$

$\displaystyle = |C({\mathbb F}_n)|,$

thus this sum is simply counting the number of ${{\mathbb F}_n}$-points of ${C}$. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

$\displaystyle Z( C/{\mathbb F}, T ) = \exp( \sum_{n \geq 1} \frac{|C({\mathbb F}_n)|}{n} T^n ) \ \ \ \ \ (21)$

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

$\displaystyle |C({\mathbb F}_n)| = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (22)$

where ${\alpha}$ runs over the (reciprocal) zeroes of ${Z( C/{\mathbb F}, T )}$ (counting multiplicity), and ${c}$ is an integer independent of ${n}$. (As it turns out, ${c}$ equals ${1}$ when ${C}$ is a projective curve, and more generally equals ${1-k}$ when ${C}$ is a projective curve with ${k}$ rational points deleted.)

To evaluate ${Z(C/{\mathbb F},T)}$, one needs to count the number of effective divisors of a given degree on the curve ${C}$. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when ${C}$ is projective) that ${Z(C/{\mathbb F},T)}$ is in fact a rational function, with a finite number of zeroes, and simple poles at both ${1}$ and ${1/q}$, with similar results when one deletes some rational points from ${C}$; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

$\displaystyle |E({\mathbb F}_n)| = q^n - \alpha^n - \beta^n$

for two complex numbers ${\alpha,\beta}$ depending on ${E}$ and ${q}$.

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of ${\zeta_{C/{\mathbb F}}}$ lie on the critical line, or equivalently that all the roots ${\alpha}$ in (22) have modulus ${\sqrt{q}}$, so that (22) then gives the asymptotic

$\displaystyle |C({\mathbb F}_n)| = q^n + O( q^{n/2} ) \ \ \ \ \ (23)$

where the implied constant depends only on the genus of ${C}$ (and on the number of points removed from ${C}$). For instance, for elliptic curves we have the Hasse bound

$\displaystyle | |E({\mathbb F}_n)| - q^n| \leq 2 q^{n/2}.$
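The ${n=1}$ case of the Hasse bound is easy to test numerically. The following Python sketch counts affine points of ${y^2 = x^3+ax+b}$ by brute force (the particular primes and coefficients are arbitrary choices for illustration):

```python
# Numerical check of the Hasse bound ||E(F_p)| - p| <= 2*sqrt(p)
# (the n = 1 case) for the affine curve y^2 = x^3 + a x + b,
# by brute-force point counting over small prime fields.
import math

def affine_point_count(p, a, b):
    """Number of (x, y) in F_p^2 with y^2 = x^3 + a*x + b."""
    sq_count = [0] * p                    # sq_count[c] = #{y : y^2 = c}
    for y in range(p):
        sq_count[y * y % p] += 1
    return sum(sq_count[(x * x * x + a * x + b) % p] for x in range(p))

for p in (5, 7, 11, 13, 17, 101):
    for (a, b) in ((1, 1), (2, 3), (1, 6)):
        if (4 * a ** 3 + 27 * b ** 2) % p == 0:
            continue  # singular curve; the bound does not apply
        cnt = affine_point_count(p, a, b)
        assert abs(cnt - p) <= 2 * math.sqrt(p)
```

For example, ${y^2 = x^3+x+1}$ has ${8}$ affine points over ${{\mathbb F}_5}$, and ${|8-5| = 3 \leq 2\sqrt{5}}$.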

As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

$\displaystyle |C({\mathbb F}_n)| = q^n + O( q^{n/2 + O(1)} ), \ \ \ \ \ (24)$

then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large ${n}$, which then amplifies to the optimal bound (23) for all ${n}$ (and in particular for ${n=1}$). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with ${q}$-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no ${q}$-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of ${q}$.

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet ${L}$-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve ${C \subset {\mathbb A}^2}$ and an additive character ${\psi: {\mathbb F}^2 \rightarrow {\bf C}^\times}$, thus ${\psi(x+y) = \psi(x) \psi(y)}$ for all ${x,y \in {\mathbb F}^2}$. Given a rational effective divisor ${D = [x_1] + \ldots + [x_k]}$, the sum ${x_1+\ldots+x_k}$ is Frobenius-invariant and thus lies in ${{\mathbb F}^2}$. By abuse of notation, we may thus define ${\psi}$ on such divisors by

$\displaystyle \psi( [x_1] + \ldots + [x_k] ) := \psi( x_1 + \ldots + x_k )$

and observe that ${\psi}$ is multiplicative in the sense that ${\psi(D_1 + D_2) = \psi(D_1) \psi(D_2)}$ for rational effective divisors ${D_1,D_2}$. One can then define ${\psi({\mathfrak f})}$ for any non-trivial ideal ${{\mathfrak f}}$ by replacing that ideal with the associated rational effective divisor; for instance, if ${f}$ is a polynomial in the coordinate ring of ${C}$, with zeroes at ${x_1,\ldots,x_k \in C}$, then ${\psi((f))}$ is ${\psi( x_1+\ldots+x_k )}$. Again, we have the multiplicativity property ${\psi({\mathfrak f} {\mathfrak g}) = \psi({\mathfrak f}) \psi({\mathfrak g})}$. If we then form the twisted normalised zeta function

$\displaystyle Z( C/{\mathbb F}, \psi, T ) = \sum_{D \geq 0} \psi(D) T^{\hbox{deg}(D)}$

then by twisting the previous analysis, we eventually arrive at the exponential identity

$\displaystyle Z( C/{\mathbb F}, \psi, T ) = \exp( \sum_{n \geq 1} \frac{S_n(C/{\mathbb F}, \psi)}{n} T^n ) \ \ \ \ \ (25)$

in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums ${S_n(C/{\mathbb F}, \psi)}$ are defined by

$\displaystyle S_n(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F}_n)} \psi( \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) )$

where the trace ${\hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x)}$ of an element ${x}$ in the plane ${{\mathbb A}^2( {\mathbb F}_n )}$ is defined by the formula

$\displaystyle \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x).$

In particular, ${S_1}$ is the exponential sum

$\displaystyle S_1(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F})} \psi(x)$

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

$\displaystyle K(a,b;p) := \sum_{x \in {\mathbb F}_p^\times} e_p( ax + \frac{b}{x})$

as a special case, where ${a,b \in {\mathbb F}_p^\times}$. (NOTE: the sign conventions for the companion sum ${S_n}$ are not consistent across the literature, sometimes it is ${-S_n}$ which is referred to as the companion sum.)
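Kloosterman sums are straightforward to compute directly from this definition. The Python sketch below does so for a small prime and observes numerically the square root cancellation ${|K(a,b;p)| \leq 2\sqrt{p}}$ discussed below (the choice ${p=31}$ is an arbitrary illustration):

```python
# Kloosterman sums K(a,b;p) computed directly from the definition,
# checked against the square-root cancellation bound 2*sqrt(p).
import cmath
import math

def kloosterman(a, b, p):
    """K(a,b;p) = sum over x in F_p^* of e_p(a x + b / x)."""
    total = 0.0 + 0.0j
    for x in range(1, p):
        xinv = pow(x, p - 2, p)           # inverse of x in F_p (Fermat)
        total += cmath.exp(2j * math.pi * ((a * x + b * xinv) % p) / p)
    return total

p = 31
for a in range(1, p):
    for b in range(1, p):
        K = kloosterman(a, b, p)
        assert abs(K.imag) < 1e-9         # Kloosterman sums are real
        assert abs(K) <= 2 * math.sqrt(p) + 1e-9
```

(The sums are real because the substitution ${x \mapsto -x}$ conjugates each summand.)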

If ${\psi}$ is non-principal (and ${C}$ is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that ${Z}$ is a rational function of ${T}$, with no pole at ${T=q^{-1}}$, and one then gets an explicit formula of the form

$\displaystyle S_n(C/{\mathbb F},\psi) = -\sum_\alpha \alpha^n + c \ \ \ \ \ (26)$

for the companion sums, where ${\alpha}$ are the reciprocals of the zeroes of ${Z}$, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

$\displaystyle \sum_{x \in {\mathbb F}_{p^n}^\times} e_p( \hbox{Tr}( ax + \frac{b}{x})) = -\alpha^n - \beta^n \ \ \ \ \ (27)$

for all ${n}$ and some complex numbers ${\alpha,\beta}$ depending on ${a,b,p}$, where we have abbreviated ${\hbox{Tr}_{{\mathbb F}_{p^n}/{\mathbb F}_p}}$ as ${\hbox{Tr}}$. As before, the Riemann hypothesis for ${Z}$ then gives a square root cancellation bound of the form

$\displaystyle S_n(C/{\mathbb F},\psi) = O( q^{n/2} ) \ \ \ \ \ (28)$

for the companion sums (and in particular gives the very explicit Weil bound ${|K(a,b;p)| \leq 2\sqrt{p}}$ for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

$\displaystyle S_n(C/{\mathbb F},\psi) = O( q^{n/2 + O(1)} ).$

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.
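Returning to the Kloosterman identity (27), one can sanity-check it numerically. Taking ${n=1}$ gives ${\alpha+\beta = -S_1}$, and the two eigenvalues also satisfy ${\alpha\beta = p}$ (a standard fact, which we take on faith here), so (27) predicts ${S_2 = -(\alpha^2+\beta^2) = 2p - S_1^2}$. The following Python sketch verifies this for the arbitrary choice ${a=b=1}$, ${p=7}$, realising ${{\mathbb F}_{49}}$ as ${{\mathbb F}_7[i]}$ with ${i^2=-1}$ (permissible since ${7 = 3 \hbox{ mod } 4}$):

```python
# Check of (27): the n = 2 companion Kloosterman sum S_2 over F_49
# should equal 2p - S_1^2, where S_1 = K(1,1;7).
import cmath
import math

p = 7

def e_p(t):
    """The additive character e_p on F_p (t an integer)."""
    return cmath.exp(2j * math.pi * (t % p) / p)

# S_1 = K(1,1;p) = sum over x in F_p^* of e_p(x + 1/x)
S1 = sum(e_p(x + pow(x, p - 2, p)) for x in range(1, p)).real

# S_2 = sum over nonzero x in F_{p^2} of e_p(Tr(x + 1/x)).
# Writing x = u + v*i with i^2 = -1:  Frob(x) = u - v*i, so
# Tr(x) = 2u, and 1/x = (u - v*i)/(u^2 + v^2), so
# Tr(1/x) = 2u / (u^2 + v^2); the norm u^2 + v^2 is nonzero mod 7
# for (u,v) != (0,0) since -1 is a nonresidue mod 7.
S2c = 0.0 + 0.0j
for u in range(p):
    for v in range(p):
        if (u, v) == (0, 0):
            continue
        inv_norm = pow(u * u + v * v, p - 2, p)
        S2c += e_p(2 * u * (1 + inv_norm))   # = e_p(Tr(x) + Tr(1/x))
S2 = S2c.real

assert abs(S2 - (2 * p - S1 ** 2)) < 1e-6
```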

One can also twist the zeta function on a curve by a multiplicative character ${\chi: {\mathbb F}^\times \rightarrow {\bf C}^\times}$ by similar arguments, except that instead of forming the sum ${x_1+\ldots+x_k}$ of all the components of an effective divisor ${[x_1]+\ldots+[x_k]}$, one takes the product ${x_1 \ldots x_k}$ instead, and similarly one replaces the trace

$\displaystyle \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x)$

by the norm

$\displaystyle \hbox{Norm}_{{\mathbb F}_n/{\mathbb F}}(x) = x \cdot \hbox{Frob}(x) \cdot \ldots \cdot \hbox{Frob}^{n-1}(x).$

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of ${\ell}$-adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to ${\ell}$-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an ${\ell}$-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.

Let ${n}$ be a natural number. We consider the question of how many “almost orthogonal” unit vectors ${v_1,\ldots,v_m}$ one can place in the Euclidean space ${{\bf R}^n}$. Of course, if we insist on ${v_1,\ldots,v_m}$ being exactly orthogonal, so that ${\langle v_i,v_j \rangle = 0}$ for all distinct ${i,j}$, then we can only pack at most ${n}$ unit vectors into this space. However, if one is willing to relax the orthogonality condition a little, so that ${\langle v_i,v_j\rangle}$ is small rather than zero, then one can pack a lot more unit vectors into ${{\bf R}^n}$, due to the important fact that pairs of vectors in high dimensions are typically almost orthogonal to each other. For instance, if one chooses ${v_i}$ uniformly and independently at random on the unit sphere, then a standard computation (based on viewing the ${v_i}$ as gaussian vectors projected onto the unit sphere) shows that each inner product ${\langle v_i,v_j \rangle}$ concentrates around the origin with standard deviation ${O(1/\sqrt{n})}$ and with gaussian tails, and a simple application of the union bound then shows that for any fixed ${K \geq 1}$, one can pack ${n^K}$ unit vectors into ${{\bf R}^n}$ whose inner products are all of size ${O( K^{1/2} n^{-1/2} \log^{1/2} n )}$.
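The concentration computation just sketched is easy to simulate (a quick illustration; the parameters ${n=100}$, ${m=150}$ are arbitrary):

```python
# Simulation of the concentration phenomenon: independent random unit
# vectors in R^n have pairwise inner products of size O(sqrt(log m / n)).
import math
import random

random.seed(0)
n, m = 100, 150

def random_unit_vector(n):
    """A uniformly random point on the unit sphere of R^n."""
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

vecs = [random_unit_vector(n) for _ in range(m)]
max_inner = max(abs(sum(x * y for x, y in zip(vecs[i], vecs[j])))
                for i in range(m) for j in range(i + 1, m))

# Each inner product behaves like an N(0, 1/n) variable, so the maximum
# over ~m^2/2 pairs should be of size about sqrt(4 log(m) / n) ~ 0.45.
assert max_inner < 4 * math.sqrt(math.log(m) / n)   # generous constant
```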

One can remove the logarithm by using some number theoretic constructions. For instance, if ${n}$ is twice a prime ${n=2p}$, one can identify ${{\bf R}^n}$ with the space ${\ell^2({\bf F}_p)}$ of complex-valued functions ${f: {\bf F}_p \rightarrow {\bf C}}$, where ${{\bf F}_p}$ is the field of ${p}$ elements, and if one then considers the ${p^2}$ different quadratic phases ${x \mapsto \frac{1}{\sqrt{p}} e_p( ax^2 + bx )}$ for ${a,b \in {\bf F}_p}$, where ${e_p(a) := e^{2\pi i a/p}}$ is the standard character on ${{\bf F}_p}$, then a standard application of Gauss sum estimates reveals that these ${p^2}$ unit vectors in ${{\bf R}^n}$ all have inner products of magnitude at most ${p^{-1/2}}$ with each other. More generally, if we take ${d \geq 1}$ and consider the ${p^d}$ different polynomial phases ${x \mapsto \frac{1}{\sqrt{p}} e_p( a_d x^d + \ldots + a_1 x )}$ for ${a_1,\ldots,a_d \in {\bf F}_p}$, then an application of the Weil conjectures for curves, proven by Weil, shows that the inner products of the associated ${p^d}$ unit vectors with each other have magnitude at most ${(d-1) p^{-1/2}}$.
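For the quadratic phase construction, the Gauss sum bound can be confirmed numerically. In the Python sketch below (with the arbitrary choice ${p=7}$), the worst pairwise inner product magnitude is exactly ${p^{-1/2}}$:

```python
# The quadratic phase construction for p = 7: the p^2 = 49 unit vectors
# x -> p^{-1/2} e_p(a x^2 + b x) in C^p have pairwise inner products of
# magnitude at most p^{-1/2}, by Gauss sum estimates.
import cmath
import math

p = 7

def e_p(t):
    """The standard additive character e_p(t) = exp(2*pi*i*t/p)."""
    return cmath.exp(2j * math.pi * (t % p) / p)

def phase_vector(a, b):
    """The normalised quadratic phase x -> e_p(a x^2 + b x)/sqrt(p)."""
    return [e_p(a * x * x + b * x) / math.sqrt(p) for x in range(p)]

vectors = [phase_vector(a, b) for a in range(p) for b in range(p)]

def inner(v, w):
    """Hermitian inner product on C^p."""
    return sum(x * y.conjugate() for x, y in zip(v, w))

worst = max(abs(inner(v, w))
            for i, v in enumerate(vectors) for w in vectors[i + 1:])
assert worst <= 1 / math.sqrt(p) + 1e-9
```

(Pairs with the same quadratic coefficient are exactly orthogonal; pairs with different quadratic coefficients give a complete Gauss sum of magnitude exactly ${\sqrt{p}}$, hence inner product ${p^{-1/2}}$.)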

As it turns out, this construction is close to optimal, in that there is a polynomial limit to how many unit vectors one can pack into ${{\bf R}^n}$ with an inner product of ${O(1/\sqrt{n})}$:

Theorem 1 (Cheap Kabatjanskii-Levenstein bound) Let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq A n^{-1/2}}$ for some ${1/2 \leq A \leq \frac{1}{2} \sqrt{n}}$. Then we have ${m \leq (\frac{Cn}{A^2})^{C A^2}}$ for some absolute constant ${C}$.

In particular, for fixed ${d}$ and large ${p}$, the number of unit vectors one can pack in ${{\bf R}^{2p}}$ whose inner products all have magnitude at most ${dp^{-1/2}}$ will be ${O( p^{O(d^2)} )}$. This doesn’t quite match the construction coming from the Weil conjectures, although it is worth noting that the upper bound of ${(d-1)p^{-1/2}}$ for the inner product is usually not sharp (the inner product is actually ${p^{-1/2}}$ times the sum of ${d-1}$ unit phases which one expects (cf. the Sato-Tate conjecture) to be uniformly distributed on the unit circle, and so the typical inner product is actually closer to ${(d-1)^{1/2} p^{-1/2}}$).

Note that for ${0 \leq A < 1/2}$, the ${A=1/2}$ case of the above theorem (or more precisely, Lemma 2 below) gives the bound ${m=O(n)}$, which is essentially optimal as the example of an orthonormal basis shows. For ${A \geq \sqrt{n}}$, the condition ${|\langle v_i, v_j \rangle| \leq A n^{-1/2}}$ is trivially true from Cauchy-Schwarz, and ${m}$ can be arbitrarily large. Finally, in the range ${\frac{1}{2} \sqrt{n} \leq A \leq \sqrt{n}}$, we can use a volume packing argument: we have ${\|v_i-v_j\|^2 \geq 2 (1 - A n^{-1/2})}$, so if we set ${r := 2^{-1/2} (1-A n^{-1/2})^{1/2}}$, then the open balls of radius ${r}$ around each ${v_i}$ are disjoint, while all lying in a ball of radius ${O(1)}$, giving rise to the bound ${m \leq C^n (1-A n^{-1/2})^{-n/2}}$ for some absolute constant ${C}$.

As I learned recently from Philippe Michel, a more precise version of this theorem is due to Kabatjanskii and Levenstein, who studied the closely related problem of sphere packing (or more precisely, cap packing) in the unit sphere ${S^{n-1}}$ of ${{\bf R}^n}$. However, I found a short proof of the above theorem which relies on one of my favorite tricks – the tensor power trick – so I thought I would give it here.

We begin with an easy case, basically the ${A=1/2}$ case of the above theorem:

Lemma 2 Let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq \frac{1}{2n^{1/2}}}$ for all distinct ${i,j}$. Then ${m < 2n}$.

Proof: Suppose for contradiction that ${m \geq 2n}$. We consider the ${2n \times 2n}$ Gram matrix ${( \langle v_i,v_j \rangle )_{1 \leq i,j \leq 2n}}$. This matrix is real symmetric with rank at most ${n}$, thus if one subtracts off the identity matrix, it has an eigenvalue of ${-1}$ with multiplicity at least ${n}$. Taking Hilbert-Schmidt norms, we conclude that

$\displaystyle \sum_{1 \leq i,j \leq 2n: i \neq j} |\langle v_i, v_j \rangle|^2 \geq n.$

But by hypothesis, the left-hand side is at most ${2n(2n-1) \frac{1}{4n} = n-\frac{1}{2}}$, giving the desired contradiction. $\Box$

To amplify the above lemma to cover larger values of ${A}$, we apply the tensor power trick. A direct application of the tensor power trick does not gain very much; however one can do a lot better by using the symmetric tensor power rather than the raw tensor power. This gives

Corollary 3 Let ${k}$ be a natural number, and let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq 2^{-1/k} (\binom{n+k-1}{k})^{-1/2k}}$ for all distinct ${i,j}$. Then ${m < 2\binom{n+k-1}{k}}$.

Proof: We work in the symmetric component ${\hbox{Sym}^k {\bf R}^n}$ of the tensor power ${({\bf R}^n)^{\otimes k} \equiv {\bf R}^{n^k}}$, which has dimension ${\binom{n+k-1}{k}}$. Applying the previous lemma to the tensor powers ${v_1^{\otimes k},\ldots,v_m^{\otimes k}}$, we obtain the claim. $\Box$
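The identity driving this corollary is simply that inner products multiply under tensor powers: ${\langle v^{\otimes k}, w^{\otimes k}\rangle = \langle v, w\rangle^k}$, so a small inner product becomes a much smaller one, while the relevant dimension only grows to ${\binom{n+k-1}{k}}$ in the symmetric component. Here is a quick Python sketch confirming this on a pair of explicit unit vectors (chosen arbitrarily for illustration):

```python
# Inner products multiply under tensor powers:
# <v^{tensor k}, w^{tensor k}> = <v, w>^k.
import itertools
import math

v = [3 / 5, 4 / 5, 0.0]          # two explicit unit vectors in R^3
w = [0.0, 4 / 5, 3 / 5]
n, k = 3, 4

# <v^{tensor k}, w^{tensor k}> computed directly as a sum over all
# multi-indices (i_1, ..., i_k) in {0, ..., n-1}^k.
tensor_inner = sum(
    math.prod(v[i] for i in idx) * math.prod(w[i] for i in idx)
    for idx in itertools.product(range(n), repeat=k))

plain_inner = sum(x * y for x, y in zip(v, w))     # = 16/25
assert abs(tensor_inner - plain_inner ** k) < 1e-12
assert math.comb(n + k - 1, k) == 15               # dim Sym^4(R^3)
```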

Using the trivial bounds ${\binom{n+k-1}{k} \leq \frac{(n+k-1)^k}{k!}}$ and ${e^k \geq \frac{k^k}{k!}}$, we can lower bound

$\displaystyle 2^{-1/k} (\binom{n+k-1}{k})^{-1/2k} \geq 2^{-1/k} (n+k-1)^{-1/2} (k!)^{1/2k}$

$\displaystyle \geq 2^{-1/k} e^{-1/2} k^{1/2} (n+k-1)^{-1/2} .$

We can thus prove Theorem 1 by setting ${k := \lfloor C A^2 \rfloor}$ for some sufficiently large absolute constant ${C}$.

Van Vu and I have just uploaded to the arXiv our joint paper “Local universality of zeroes of random polynomials“. This paper is a sequel to our previous work on local universality of eigenvalues of (non-Hermitian) random matrices ${M_n}$ with independent entries. One can re-interpret these previous results as a universality result for a certain type of random polynomial ${f: {\bf C} \rightarrow {\bf C}}$, namely the characteristic polynomial ${f(z) = \hbox{det}(M_n-z)}$ of the random matrix ${M_n}$. In this paper, we consider the analogous problem for a different model of random polynomial, namely polynomials ${f}$ with independent random coefficients. More precisely, we consider random polynomials ${f = f_n}$ of the form

$\displaystyle f(z) = \sum_{i=0}^n c_i \xi_i z^i$

where ${c_0,\ldots,c_n \in {\bf C}}$ are deterministic complex coefficients, and ${\xi_0,\ldots,\xi_n \equiv \xi}$ are independent identically distributed copies of a complex random variable ${\xi}$, which we normalise to have mean zero and variance one. For simplicity we will ignore the technical issue that the leading coefficient ${c_n \xi_n}$ is allowed to vanish; then ${f}$ has ${n}$ zeroes ${\zeta_1,\ldots,\zeta_n \in {\bf C}}$ (counting multiplicity), which can be viewed as a random point process ${\Sigma = \{\zeta_1,\ldots,\zeta_n\}}$ in the complex plane. In analogy with other models (such as random matrix models), we expect the (suitably normalised) asymptotic statistics of this point process in the limit ${n \rightarrow \infty}$ to be universal, in the sense that it is largely independent of the precise distribution of the atom variable ${\xi}$.

Our results are fairly general with regard to the choice of coefficients ${c_i}$, but we isolate three particular choices of coefficients that are particularly natural and well-studied in the literature:

• Flat polynomials (or Weyl polynomials) in which ${c_i := \frac{1}{\sqrt{i!}}}$.
• Elliptic polynomials (or binomial polynomials) in which ${c_i := \sqrt{\binom{n}{i}}}$.
• Kac polynomials in which ${c_i := 1}$.

The flat and elliptic polynomials enjoy special symmetries in the model case when the atom distribution ${\xi}$ is a complex Gaussian ${N(0,1)_{\bf C}}$. Indeed, the zeroes ${\Sigma}$ of elliptic polynomials with complex Gaussian coefficients have a distribution which is invariant with respect to isometries ${T: {\bf C} \cup \{\infty\} \rightarrow {\bf C} \cup \{\infty\}}$ of the Riemann sphere ${{\bf C} \cup \{\infty\}}$ (thus ${T\Sigma}$ has the same distribution as ${\Sigma}$), while the zeroes of the limiting case ${\sum_{i=0}^\infty \frac{1}{\sqrt{i!}} \xi_i z^i}$ of the flat polynomials with complex Gaussian coefficients are similarly invariant with respect to isometries ${T: {\bf C} \rightarrow {\bf C}}$ of the complex plane ${{\bf C}}$. (For a nice geometric proof of these facts, I recommend the book of Hough, Krishnapur, Peres, and Virag.)

The global (i.e. coarse-scale) distribution of zeroes of these polynomials is well understood, first in the case of gaussian distributions using the fundamental tool of the Kac-Rice formula, and then for more general choices of atom distribution in the recent work of Kabluchko and Zaporozhets. The zeroes of the flat polynomials are asymptotically distributed according to the circular law, normalised to be uniformly distributed on the disk ${B(0,\sqrt{n})}$ of radius ${\sqrt{n}}$ centred at the origin. To put it a bit informally, the zeroes are asymptotically distributed according to the measure ${\frac{1}{\pi} 1_{|z| \leq \sqrt{n}} dz}$, where ${dz}$ denotes Lebesgue measure on the complex plane. One can non-rigorously see the scale ${\sqrt{n}}$ appear by observing that when ${|z|}$ is much larger than ${\sqrt{n}}$, we expect the leading term ${\frac{1}{\sqrt{n!}} \xi_n z^n}$ of the flat polynomial ${\sum_{i=0}^n \frac{1}{\sqrt{i!}} \xi_i z^i}$ to dominate, so that the polynomial should not have any zeroes in this region.

Similarly, the zeroes of the elliptic polynomials are known to be asymptotically distributed according to a Cauchy-type distribution ${\frac{1}{\pi} \frac{1}{1+|z|^2/n} dz}$. The Kac polynomials ${\sum_{i=0}^n \xi_i z^i}$ behave differently; the zeroes concentrate uniformly on the unit circle ${|z|=1}$ (which is reasonable, given that one would expect the top order term ${\xi_n z^n}$ to dominate for ${|z| > 1}$ and the bottom order term ${\xi_0}$ to dominate for ${|z| < 1}$). In particular, whereas the typical spacing between zeroes in the flat and elliptic cases would be expected to be comparable to ${1}$, the typical spacing between zeroes for a Kac polynomial would be expected instead to be comparable to ${1/n}$.

In our paper we studied the local distribution of zeroes at the scale of the typical spacing. In the case of polynomials with continuous complex atom distribution ${\xi}$, the natural statistic to measure here is the ${k}$-point correlation function ${\rho^{(k)}(z_1,\ldots,z_k)}$, which for distinct complex numbers ${z_1,\ldots,z_k}$ can be defined as the probability that there is a zero in each of the balls ${B(z_1,\varepsilon),\ldots,B(z_k,\varepsilon)}$ for some infinitesimal ${\varepsilon >0}$, divided by the normalising factor ${(\pi \varepsilon^2)^k}$. (One can also define a ${k}$-point correlation function in the case of a discrete distribution, but it will be a measure rather than a function in that case.) Our first main theorem is a general replacement principle which asserts, very roughly speaking, that the asymptotic ${k}$-point correlation functions of two random polynomials ${f, \tilde f}$ will agree if the log-magnitudes ${\log |f(z)|, \log |\tilde f(z)|}$ have asymptotically the same distribution (actually we have to consider the joint distribution of ${\log |f(z_1)|, \ldots, \log |f(z_k)|}$ for several points ${z_1,\ldots,z_k}$, but let us ignore this detail for now), and if the polynomials ${f, \tilde f}$ obey a “non-clustering property” which asserts, roughly speaking, that not too many of the zeroes of ${f}$ can typically concentrate in a small ball. This replacement principle was implicit in our previous paper (and can be viewed as a local-scale version of the global-scale replacement principle in this earlier paper of ours). Specialising the replacement principle to the elliptic or flat polynomials, and using the Lindeberg swapping argument, we obtain a Two Moment Theorem that asserts, roughly speaking, that the asymptotic behaviour of the ${k}$-point correlation functions depends only on the first two moments of the real and imaginary components of ${\xi}$, as long as one avoids some regions of space where universality is expected to break down.
(In particular, because ${f(0) = c_0 \xi_0}$ does not have a universal distribution, one does not expect universality to hold near the origin; there is a similar problem near infinity.) Closely related results, by a slightly different method, have also been obtained recently by Ledoan, Merkli, and Starr. A similar result holds for the Kac polynomials after rescaling to account for the narrower spacing between zeroes.

We also have analogous results in the case of polynomials with real coefficients (so that the coefficients ${c_i}$ and the atom distribution ${\xi}$ are both real). In this case one expects to see a certain number of real zeroes, together with conjugate pairs of complex zeroes. Instead of the ${k}$-point correlation function ${\rho^{(k)}(z_1,\ldots,z_k)}$, the natural object of study is now the mixed ${(k,l)}$-point correlation function ${\rho^{(k,l)}(x_1,\ldots,x_k,z_1,\ldots,z_l)}$ that (roughly speaking) controls how often one can find a real zero near the real numbers ${x_1,\ldots,x_k}$, and a complex zero near the points ${z_1,\ldots,z_l}$. It turns out that one can disentangle the real and strictly complex zeroes and obtain separate universality results for both types of zeroes, provided that at least one of the polynomials involved obeys a “weak repulsion estimate” that shows that the real zeroes do not cluster very close to each other (and that the complex zeroes do not cluster very close to their complex conjugates). Such an estimate is needed to avoid the situation of two nearby real zeroes “colliding” to create a (barely) complex zero and its complex conjugate, or the time reversal of such a collision. Fortunately, in the case of real gaussian polynomials one can use the Kac-Rice formula to establish such a weak repulsion estimate, allowing analogues of the above universality results for complex random polynomials in the real case. Among other things, this gives universality results for the number ${N_{\bf R}}$ of real zeroes of a random flat or elliptic polynomial; it turns out this number is typically equal to ${\frac{2}{\pi} \sqrt{n} + O(n^{1/2-c})}$ and ${\sqrt{n} + O(n^{1/2-c})}$ respectively. (For Kac polynomials, the situation is somewhat different; it was already known that ${N_{\bf R} = \frac{2}{\pi} \log n + o(\log n)}$ thanks to a long series of papers, starting with the foundational work of Kac and culminating in the work of Ibragimov and Maslova.)

While our methods are based on our earlier work on eigenvalues of random matrices, the situation with random polynomials is actually somewhat easier to deal with. This is because the log-magnitude ${\log |f(z)|}$ of a random polynomial with independent coefficients is significantly easier to control than the log-determinant ${\log |\hbox{det}(M-z)|}$ of a random matrix, as the former can be controlled by the central limit theorem, while the latter requires significantly more difficult arguments (in particular, bounds on the least singular value combined with Girko’s Hermitization trick). As such, one could view the current paper as an introduction to our more complicated previous paper, and with this in mind we have written the current paper to be self-contained (though this did make the paper somewhat lengthy, coming in at 68 pages).

The purpose of this post is to link to a short unpublished note of mine that I wrote back in 2010 but forgot to put on my web page at the time. Entitled “A physical space proof of the bilinear Strichartz and local smoothing estimates for the Schrodinger equation“, it gives a proof of two standard estimates for the free (linear) Schrodinger equation in flat Euclidean space, namely the bilinear Strichartz estimate and the local smoothing estimate, using primarily “physical space” methods such as integration by parts, instead of “frequency space” methods based on the Fourier transform, although a small amount of Fourier analysis (basically sectoral projection to make the Schrodinger waves move roughly in a given direction) is still needed.  This is somewhat in the spirit of an older paper of mine with Klainerman and Rodnianski doing something similar for the wave equation, and is also very similar to a paper of Planchon and Vega from 2009.  The hope was that by avoiding the finer properties of the Fourier transform, one could obtain a more robust argument which could also extend to nonlinear, non-free, or non-flat situations.   These notes were cited once or twice by some people that I had privately circulated them to, so I decided to put them online here for reference.

UPDATE, July 24: Fabrice Planchon has kindly supplied another note in which he gives a particularly simple proof of local smoothing in one dimension, and discusses some other variants of the method (related to the paper of Planchon and Vega cited earlier).

As in previous posts, we use the following asymptotic notation: ${x}$ is a parameter going off to infinity, and all quantities may depend on ${x}$ unless explicitly declared to be “fixed”. The asymptotic notation ${O(), o(), \ll}$ is then defined relative to this parameter. A quantity ${q}$ is said to be of polynomial size if one has ${q = O(x^{O(1)})}$, and bounded if ${q=O(1)}$. We also write ${X \lessapprox Y}$ for ${X \ll x^{o(1)} Y}$, and ${X \sim Y}$ for ${X \ll Y \ll X}$.

The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument. In order to state the main result, we need to recall some definitions. If ${I}$ is a bounded subset of ${{\bf R}}$, let ${{\mathcal S}_I}$ denote the square-free numbers whose prime factors lie in ${I}$, and let ${P_I := \prod_{p \in I} p}$ denote the product of the primes ${p}$ in ${I}$. Note by the Chinese remainder theorem that the set ${({\bf Z}/P_I{\bf Z})^\times}$ of primitive congruence classes ${a\ (P_I)}$ modulo ${P_I}$ can be identified with the tuples ${(a_q\ (q))_{q \in {\mathcal S}_I}}$ of primitive congruence classes ${a_q\ (q)}$ modulo ${q}$ for each ${q \in {\mathcal S}_I}$ which obey the Chinese remainder theorem

$\displaystyle (a_{qr}\ (qr)) = (a_q\ (q)) \cap (a_r\ (r))$

for all coprime ${q,r \in {\mathcal S}_I}$, since one can identify ${a\ (P_I)}$ with the tuple ${(a\ (q))_{q \in {\mathcal S}_I}}$ for each ${a \in ({\bf Z}/P_I{\bf Z})^\times}$.
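To make this identification concrete, here is a minimal Python sketch (the function name `crt_glue` is ad hoc) that glues two primitive residue classes with coprime moduli into a single class modulo the product, exactly as in the display above:

```python
def crt_glue(a_q, q, a_r, r):
    """Find the unique a mod q*r with a = a_q (mod q) and a = a_r (mod r),
    assuming gcd(q, r) = 1; uses the modular inverse of q mod r."""
    t = (a_r - a_q) * pow(q, -1, r) % r
    return (a_q + q * t) % (q * r)

# Glue a = 2 (mod 3) with a = 3 (mod 5): the intersection is a = 8 (mod 15).
a = crt_glue(2, 3, 3, 5)
```

Iterating this pairwise gluing over all ${q \in {\mathcal S}_I}$ recovers the identification of ${({\bf Z}/P_I{\bf Z})^\times}$ with the tuples ${(a_q\ (q))_{q \in {\mathcal S}_I}}$.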

If ${y > 1}$ and ${n}$ is a natural number, we say that ${n}$ is ${y}$-densely divisible if, for every ${1 \leq R \leq n}$, one can find a factor of ${n}$ in the interval ${[y^{-1} R, R]}$. We say that ${n}$ is doubly ${y}$-densely divisible if, for every ${1 \leq R \leq n}$, one can find a factor ${m}$ of ${n}$ in the interval ${[y^{-1} R, R]}$ such that ${m}$ is itself ${y}$-densely divisible. We let ${{\mathcal D}_y^2}$ denote the set of doubly ${y}$-densely divisible natural numbers, and ${{\mathcal D}_y}$ the set of ${y}$-densely divisible numbers.
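In computational terms, single ${y}$-dense divisibility has a convenient equivalent form: the sorted divisors of ${n}$ never jump by more than a factor of ${y}$. The Python sketch below (a brute-force illustration for small ${n}$, not an efficient algorithm) uses this reformulation:

```python
def divisors(n):
    """All divisors of n in increasing order (brute force, for small n)."""
    return sorted(d for d in range(1, n + 1) if n % d == 0)

def is_densely_divisible(n, y):
    """n is y-densely divisible iff every window [R/y, R] with 1 <= R <= n
    contains a divisor of n; equivalently, consecutive divisors of n never
    jump by more than a factor of y."""
    ds = divisors(n)
    return all(ds[i + 1] <= y * ds[i] for i in range(len(ds) - 1))
```

For instance, ${12}$ is ${2}$-densely divisible (divisors ${1,2,3,4,6,12}$), while ${10}$ is not, since the jump from the divisor ${2}$ to the divisor ${5}$ exceeds a factor of ${2}$. A double dense divisibility check can be layered on top by additionally demanding that the factor found in each window be itself ${y}$-densely divisible.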

Given any finitely supported sequence ${\alpha: {\bf N} \rightarrow {\bf C}}$ and any primitive residue class ${a\ (q)}$, we define the discrepancy

$\displaystyle \Delta(\alpha; a \ (q)) := \sum_{n: n = a\ (q)} \alpha(n) - \frac{1}{\phi(q)} \sum_{n: (n,q)=1} \alpha(n).$
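For a finitely supported sequence the discrepancy can be computed directly from this definition; the following Python sketch (storing ${\alpha}$ as a dictionary, an illustrative choice) does so for a toy sequence:

```python
from math import gcd

def discrepancy(alpha, a, q):
    """Delta(alpha; a (q)): the mass of alpha on the residue class a (q),
    minus the average mass over all residue classes coprime to q."""
    in_class = sum(v for n, v in alpha.items() if n % q == a % q)
    coprime = sum(v for n, v in alpha.items() if gcd(n, q) == 1)
    phi = sum(1 for b in range(1, q + 1) if gcd(b, q) == 1)  # Euler phi(q)
    return in_class - coprime / phi

# Toy example: alpha = 1 on the interval [1, 10].
alpha = {n: 1 for n in range(1, 11)}
```

For this ${\alpha}$, taking ${q = 3}$ and ${a = 1}$ gives ${\Delta = 4 - \frac{7}{2} = \frac{1}{2}}$.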

For any fixed ${\varpi, \delta > 0}$, we let ${MPZ''[\varpi,\delta]}$ denote the assertion that

$\displaystyle \sum_{q \in {\mathcal S}_I \cap {\mathcal D}_{x^\delta}^2: q \leq x^{1/2+2\varpi}} |\Delta(\Lambda 1_{[x,2x]}; a\ (q))| \ll x \log^{-A} x \ \ \ \ \ (1)$

for any fixed ${A > 0}$, any bounded ${I}$, and any primitive ${a\ (P_I)}$, where ${\Lambda}$ is the von Mangoldt function. Importantly, we do not require ${I}$ or ${a}$ to be fixed; in particular, ${I}$ could grow polynomially in ${x}$, and ${a}$ could grow exponentially in ${x}$, but the implied constant in (1) would still need to be fixed (so it has to be uniform in ${I}$ and ${a}$). (In previous formulations of these estimates, the system of congruences ${a\ (q)}$ was also required to obey a controlled multiplicity hypothesis, but we no longer need this hypothesis in our arguments.) In this post we will record the proof of the following result, which is currently the best distribution result produced by the ongoing polymath8 project to optimise Zhang’s theorem on bounded gaps between primes:

Theorem 1 We have ${MPZ''[\varpi,\delta]}$ whenever ${\frac{280}{3} \varpi + \frac{80}{3} \delta < 1}$.

This improves upon the previous constraint of ${148 \varpi + 33 \delta < 1}$ (see this previous post), although that latter statement was stronger in that it only required single dense divisibility rather than double dense divisibility. However, thanks to the efficiency of the sieving step of our argument, the upgrade of the single dense divisibility hypothesis to double dense divisibility costs almost nothing with respect to the ${k_0}$ parameter (which, using this constraint, gives a value of ${k_0=720}$ as verified in these comments, which then implies a value of ${H = 5,414}$).

This estimate is deduced from three sub-estimates, which require a bit more notation to state. We need a fixed quantity ${A_0>0}$.

Definition 2 A coefficient sequence is a finitely supported sequence ${\alpha: {\bf N} \rightarrow {\bf R}}$ that obeys the bounds

$\displaystyle |\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x) \ \ \ \ \ (2)$

for all ${n}$, where ${\tau}$ is the divisor function.

• (i) A coefficient sequence ${\alpha}$ is said to be at scale ${N}$ for some ${N \geq 1}$ if it is supported on an interval of the form ${[(1-O(\log^{-A_0} x)) N, (1+O(\log^{-A_0} x)) N]}$.
• (ii) A coefficient sequence ${\alpha}$ at scale ${N}$ is said to obey the Siegel-Walfisz theorem if one has

$\displaystyle | \Delta(\alpha 1_{(\cdot,q)=1}; a\ (r)) | \ll \tau(qr)^{O(1)} N \log^{-A} x \ \ \ \ \ (3)$

for any ${q,r \geq 1}$, any fixed ${A}$, and any primitive residue class ${a\ (r)}$.

• (iii) A coefficient sequence ${\alpha}$ at scale ${N}$ (relative to this choice of ${A_0}$) is said to be smooth if it takes the form ${\alpha(n) = \psi(n/N)}$ for some smooth function ${\psi: {\bf R} \rightarrow {\bf C}}$ supported on ${[1-O(\log^{-A_0} x), 1+O(\log^{-A_0} x)]}$ obeying the derivative bounds

$\displaystyle \psi^{(j)}(t) = O( \log^{j A_0} x ) \ \ \ \ \ (4)$

for all fixed ${j \geq 0}$ (note that the implied constant in the ${O()}$ notation may depend on ${j}$).
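For illustration, a standard compactly supported bump gives a concrete smooth coefficient sequence in the sense of (iii); the window width ${0.05}$ below is an arbitrary stand-in for the ${O(\log^{-A_0} x)}$ neighbourhood of ${1}$, not a choice made in the post:

```python
import math

def psi(t, width=0.05):
    """A standard smooth bump centred at t = 1: exp(-1/(1 - s^2)) for
    s = (t - 1)/width in (-1, 1), and 0 outside; all derivatives exist
    and vanish at the endpoints of the support."""
    s = (t - 1) / width
    if abs(s) >= 1:
        return 0.0
    return math.exp(-1 / (1 - s * s))

def alpha(n, N):
    """A smooth coefficient sequence alpha(n) = psi(n/N) at scale N."""
    return psi(n / N)
```

Thus ${\alpha}$ is supported on ${n \approx N}$ and, for a genuinely shrinking window, the derivative bounds (4) would pick up the stated powers of ${\log^{A_0} x}$ from the chain rule.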

Definition 3 (Type I, Type II, Type III estimates) Let ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4+\varpi}$, and ${0 < \sigma < 1/2}$ be fixed quantities. We let ${I}$ be an arbitrary bounded subset of ${{\bf R}}$, and ${a\ (P_I)}$ a primitive congruence class.

Theorem 1 is then a consequence of the following four statements.

Theorem 4 (Type I estimate) ${Type''_I[\varpi,\delta,\sigma]}$ holds whenever ${\varpi,\delta,\sigma > 0}$ are fixed quantities such that

$\displaystyle 56 \varpi + 16 \delta + 4\sigma < 1.$

Theorem 5 (Type II estimate) ${Type''_{II}[\varpi,\delta]}$ holds whenever ${\varpi,\delta > 0}$ are fixed quantities such that

$\displaystyle 68 \varpi + 14 \delta < 1.$

Theorem 6 (Type III estimate) ${Type''_{III}[\varpi,\delta,\sigma]}$ holds whenever ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4+\varpi}$, and ${\sigma > 0}$ are fixed quantities such that

$\displaystyle \sigma > \frac{1}{18} + \frac{28}{9} \varpi + \frac{2}{9} \delta \ \ \ \ \ (12)$

and

$\displaystyle \varpi< \frac{1}{12}. \ \ \ \ \ (13)$

In particular, if

$\displaystyle 70 \varpi + 5 \delta < 1,$

then all values of ${\sigma}$ that are sufficiently close to ${1/10}$ are admissible.

Lemma 7 (Combinatorial lemma) Let ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4+\varpi}$, and ${1/10 < \sigma < 1/2}$ be such that ${Type''_I[\varpi,\delta,\sigma]}$, ${Type''_{II}[\varpi,\delta]}$, and ${Type''_{III}[\varpi,\delta,\sigma]}$ simultaneously hold. Then ${MPZ''[\varpi,\delta]}$ holds.

Indeed, if ${\frac{280}{3} \varpi + \frac{80}{3} \delta < 1}$, one checks that the hypotheses for Theorems 4, 5, 6 are obeyed for ${\sigma}$ sufficiently close to ${1/10}$, at which point the claim follows from Lemma 7.
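This numerology can be checked mechanically. The Python sketch below (illustrative only; the helper name `sigma_window` is ad hoc) samples pairs ${(\varpi,\delta)}$ obeying ${\frac{280}{3} \varpi + \frac{80}{3} \delta < 1}$ and verifies that the window of admissible ${\sigma}$ just above ${1/10}$ is nonempty and that the remaining hypotheses hold:

```python
import random

def sigma_window(varpi, delta):
    """Interval of admissible sigma: above both the Type III lower bound (12)
    and the 1/10 threshold from Lemma 7, and below the ceiling imposed by the
    Type I condition 56*varpi + 16*delta + 4*sigma < 1 of Theorem 4
    (Theorem 5's condition 68*varpi + 14*delta < 1 does not involve sigma)."""
    lower = max(1 / 10, 1 / 18 + 28 / 9 * varpi + 2 / 9 * delta)
    upper = (1 - 56 * varpi - 16 * delta) / 4
    return lower, upper

random.seed(0)
for _ in range(10000):
    varpi = random.uniform(0, 3 / 280)
    # The 0.999 factor just keeps the sample strictly inside the region.
    delta = random.uniform(0, (1 - 280 / 3 * varpi) * 3 / 80) * 0.999
    assert 280 / 3 * varpi + 80 / 3 * delta < 1
    lo, hi = sigma_window(varpi, delta)
    assert lo < hi                      # some sigma near 1/10 is admissible
    assert 68 * varpi + 14 * delta < 1  # Type II hypothesis (Theorem 5)
    assert varpi < 1 / 12               # condition (13)
```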

The proofs of Theorems 4, 5, 6 will be given below the fold, while the proof of Lemma 7 follows from the arguments in this previous post. We remark that in our current arguments, the double dense divisibility is only fully used in the Type I estimates; the Type II and Type III estimates are also valid just with single dense divisibility.

Remark 1 Theorem 6 is vacuously true for ${\sigma > 1/6}$, as the condition (10) cannot be satisfied in this case. If we use this trivial case of Theorem 6, while keeping the full strength of Theorems 4 and 5, we obtain Theorem 1 in the regime

$\displaystyle 168 \varpi + 48 \delta < 1.$