
If ${f: {\bf R}^n \rightarrow {\bf C}}$ and ${g: {\bf R}^n \rightarrow {\bf C}}$ are two absolutely integrable functions on a Euclidean space ${{\bf R}^n}$, then the convolution ${f*g: {\bf R}^n \rightarrow {\bf C}}$ of the two functions is defined by the formula

$\displaystyle f*g(x) := \int_{{\bf R}^n} f(y) g(x-y)\ dy = \int_{{\bf R}^n} f(x-z) g(z)\ dz.$

A simple application of the Fubini-Tonelli theorem shows that the convolution ${f*g}$ is well-defined almost everywhere, and yields another absolutely integrable function. In the case that ${f=1_F}$, ${g=1_G}$ are indicator functions, the convolution simplifies to

$\displaystyle 1_F*1_G(x) = m( F \cap (x-G) ) = m( (x-F) \cap G ) \ \ \ \ \ (1)$

where ${m}$ denotes Lebesgue measure. One can also define convolution on more general locally compact groups than ${{\bf R}^n}$, but we will restrict attention to the Euclidean case in this post.
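As a concrete sanity check of formula (1) (the code below is a quick illustration of my own, with an arbitrary discretisation parameter), one can approximate ${1_{[0,1]} * 1_{[0,1]}}$ on the real line by a Riemann sum; the result should be the tent function ${x \mapsto \max(0, 1-|x-1|)}$ supported on ${[0,2]}$:

```python
# Approximate f*g(x) = \int f(y) g(x-y) dy by a midpoint Riemann sum, for
# f = g = 1_{[0,1]} in dimension n = 1.  By formula (1), the answer is the
# length of [0,1] \cap (x - [0,1]), i.e. the tent function max(0, 1 - |x-1|).

def indicator_unit_interval(t):
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

def convolve_at(x, h=1e-4):
    # integrate f(y) g(x - y) over y in [0, 1], the support of f
    n = int(1.0 / h)
    return sum(indicator_unit_interval(x - (i + 0.5) * h) for i in range(n)) * h

for x in [0.25, 0.5, 1.0, 1.5, 1.9]:
    exact = max(0.0, 1.0 - abs(x - 1.0))
    assert abs(convolve_at(x) - exact) < 1e-3
```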

The convolution ${f*g}$ can also be defined by duality by observing the identity

$\displaystyle \int_{{\bf R}^n} f*g(x) h(x)\ dx = \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ f(y) dy g(z) dz$

for any bounded measurable function ${h: {\bf R}^n \rightarrow {\bf C}}$. Motivated by this observation, we may define the convolution ${\mu*\nu}$ of two finite Borel measures on ${{\bf R}^n}$ by the formula

$\displaystyle \int_{{\bf R}^n} h(x)\ d\mu*\nu(x) := \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (2)$

for any bounded (Borel) measurable function ${h: {\bf R}^n \rightarrow {\bf C}}$, or equivalently that

$\displaystyle \mu*\nu(E) = \int_{{\bf R}^n} \int_{{\bf R}^n} 1_E(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (3)$

for all Borel measurable ${E}$. (In another equivalent formulation: ${\mu*\nu}$ is the pushforward of the product measure ${\mu \times \nu}$ with respect to the addition map ${+: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n}$.) This can easily be verified to again be a finite Borel measure.
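For finitely supported (atomic) measures, the pushforward description makes the convolution completely explicit: each pair of atoms contributes an atom at the sum of their positions, with mass equal to the product of the masses. A minimal Python sketch of mine (the helper names are arbitrary):

```python
# Convolution of finitely supported measures on R via formula (3) / the
# pushforward description: (sum_i a_i delta_{x_i}) * (sum_j b_j delta_{y_j})
#   = sum_{i,j} a_i b_j delta_{x_i + y_j}.

from collections import defaultdict

def convolve_atomic(mu, nu):
    """mu, nu: dicts mapping atom position -> mass."""
    out = defaultdict(float)
    for x, a in mu.items():
        for y, b in nu.items():
            out[x + y] += a * b   # pushforward of mu x nu under addition
    return dict(out)

mu = {0: 0.5, 1: 0.5}             # fair coin flip
nu = {0: 0.5, 1: 0.5}
conv = convolve_atomic(mu, nu)
assert conv == {0: 0.25, 1: 0.5, 2: 0.25}     # Binomial(2, 1/2)
assert abs(sum(conv.values()) - 1.0) < 1e-12  # total mass multiplies
```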

If ${\mu}$ and ${\nu}$ are probability measures, then the convolution ${\mu*\nu}$ also has a simple probabilistic interpretation: it is the law (i.e. probability distribution) of a random variable of the form ${X+Y}$, where ${X, Y}$ are independent random variables taking values in ${{\bf R}^n}$ with law ${\mu,\nu}$ respectively. Among other things, this interpretation makes it obvious that the support of ${\mu*\nu}$ is the closure of the sumset of the supports of ${\mu}$ and ${\nu}$, and that ${\mu*\nu}$ will also be a probability measure.

While the above discussion gives a perfectly rigorous definition of the convolution of two measures, it does not always give helpful guidance as to how to compute the convolution of two explicit measures (e.g. the convolution of two surface measures on explicit examples of surfaces, such as the sphere). In simple cases, one can work from first principles directly from the definition (2), (3), perhaps after some application of tools from several variable calculus, such as the change of variables formula. Another technique proceeds by regularisation, approximating the measures ${\mu, \nu}$ involved as the weak limit (or vague limit) of absolutely integrable functions

$\displaystyle \mu = \lim_{\epsilon \rightarrow 0} f_\epsilon; \quad \nu =\lim_{\epsilon \rightarrow 0} g_\epsilon$

(where we identify an absolutely integrable function ${f}$ with the associated absolutely continuous measure ${dm_f(x) := f(x)\ dx}$) which then implies (assuming that the sequences ${f_\epsilon,g_\epsilon}$ are tight) that ${\mu*\nu}$ is the weak limit of the ${f_\epsilon * g_\epsilon}$. The latter convolutions ${f_\epsilon * g_\epsilon}$, being convolutions of functions rather than measures, can be computed (or at least estimated) by traditional integration techniques, at which point the only difficulty is to ensure that one has enough uniformity in ${\epsilon}$ to maintain control of the limit as ${\epsilon \rightarrow 0}$.
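As a toy instance of the regularisation method (the example and numerical parameters are mine), one can take ${\mu}$ to be the Dirac mass ${\delta_0}$, approximated by ${f_\epsilon = \frac{1}{\epsilon} 1_{[0,\epsilon]}}$, and convolve against a continuous ${g}$; then ${f_\epsilon * g(x) = \frac{1}{\epsilon} \int_0^\epsilon g(x-y)\ dy}$ converges to ${g(x)}$, at rate ${O(\epsilon)}$ when ${g}$ is Lipschitz:

```python
# Mollification in miniature: delta_0 is the weak limit of (1/eps) 1_{[0,eps]},
# and convolving with a continuous g recovers g in the limit eps -> 0.

import math

def mollified(g, x, eps, steps=1000):
    # midpoint rule for (1/eps) * integral of g(x - y) over y in [0, eps]
    h = eps / steps
    return sum(g(x - (i + 0.5) * h) for i in range(steps)) * h / eps

g = lambda x: math.exp(-x * x)
for eps in [0.1, 0.01, 0.001]:
    err = abs(mollified(g, 0.3, eps) - g(0.3))
    assert err < eps   # error is O(eps) since g is Lipschitz
```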

A third method proceeds using the Fourier transform

$\displaystyle \hat \mu(\xi) := \int_{{\bf R}^n} e^{-2\pi i x \cdot \xi}\ d\mu(x)$

of ${\mu}$ (and of ${\nu}$). We have

$\displaystyle \widehat{\mu*\nu}(\xi) = \hat{\mu}(\xi) \hat{\nu}(\xi)$

and so one can (in principle, at least) compute ${\mu*\nu}$ by taking Fourier transforms, multiplying them together, and applying the (distributional) inverse Fourier transform. Heuristically, this formula implies that the Fourier transform of ${\mu*\nu}$ should be concentrated in the intersection of the frequency region where the Fourier transform of ${\mu}$ is supported, and the frequency region where the Fourier transform of ${\nu}$ is supported. As the regularity of a measure is related to decay of its Fourier transform, this also suggests that the convolution ${\mu*\nu}$ of two measures will typically be more regular than each of the two original measures, particularly if the Fourier transforms of ${\mu}$ and ${\nu}$ are concentrated in different regions of frequency space (which should happen if the measures ${\mu,\nu}$ are suitably “transverse”). In particular, it can happen that ${\mu*\nu}$ is an absolutely continuous measure, even if ${\mu}$ and ${\nu}$ are both singular measures.
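The identity ${\widehat{\mu*\nu} = \hat \mu \hat \nu}$ can be verified exactly for atomic measures, whose Fourier transforms are finite exponential sums (a quick check of mine; the test measures are arbitrary):

```python
# For mu = sum_i a_i delta_{x_i}, the Fourier transform is the exponential sum
# mu^(xi) = sum_i a_i exp(-2 pi i x_i xi), and convolution of atomic measures
# multiplies these sums.

import cmath

def ft(measure, xi):
    return sum(a * cmath.exp(-2j * cmath.pi * x * xi) for x, a in measure.items())

def convolve_atomic(mu, nu):
    out = {}
    for x, a in mu.items():
        for y, b in nu.items():
            out[x + y] = out.get(x + y, 0.0) + a * b
    return out

mu = {0.0: 0.3, 1.0: 0.7}
nu = {-0.5: 0.4, 2.0: 0.6}
conv = convolve_atomic(mu, nu)
for xi in [0.1, 0.7, 1.3]:
    assert abs(ft(conv, xi) - ft(mu, xi) * ft(nu, xi)) < 1e-12
```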

Using intuition from microlocal analysis, we can combine our understanding of the spatial and frequency behaviour of convolution to the following heuristic: a convolution ${\mu*\nu}$ should be supported in regions of phase space ${\{ (x,\xi): x \in {\bf R}^n, \xi \in {\bf R}^n \}}$ of the form ${(x,\xi) = (x_1+x_2,\xi)}$, where ${(x_1,\xi)}$ lies in the region of phase space where ${\mu}$ is concentrated, and ${(x_2,\xi)}$ lies in the region of phase space where ${\nu}$ is concentrated. It is a challenge to make this intuition perfectly rigorous, as one has to somehow deal with the obstruction presented by the Heisenberg uncertainty principle, but it can be made rigorous in various asymptotic regimes, for instance using the machinery of wave front sets (which describes the high frequency limit of the phase space distribution).

Let us illustrate these three methods and the final heuristic with a simple example. Let ${\mu}$ be a singular measure on the horizontal unit interval ${[0,1] \times \{0\} = \{ (x,0): 0 \leq x \leq 1 \}}$, given by weighting Lebesgue measure on that interval by some test function ${\phi: {\bf R} \rightarrow {\bf C}}$ supported on ${[0,1]}$:

$\displaystyle \int_{{\bf R}^2} f(x,y)\ d\mu(x,y) := \int_{\bf R} f(x,0) \phi(x)\ dx.$

Similarly, let ${\nu}$ be a singular measure on the vertical unit interval ${\{0\} \times [0,1] = \{ (0,y): 0 \leq y \leq 1 \}}$ given by weighting Lebesgue measure on that interval by another test function ${\psi: {\bf R} \rightarrow {\bf C}}$ supported on ${[0,1]}$:

$\displaystyle \int_{{\bf R}^2} g(x,y)\ d\nu(x,y) := \int_{\bf R} g(0,y) \psi(y)\ dy.$

We can compute the convolution ${\mu*\nu}$ using (2), which in this case becomes

$\displaystyle \int_{{\bf R}^2} h( x, y ) d\mu*\nu(x,y) = \int_{{\bf R}^2} \int_{{\bf R}^2} h(x_1+x_2, y_1+y_2)\ d\mu(x_1,y_1) d\nu(x_2,y_2)$

$\displaystyle = \int_{\bf R} \int_{\bf R} h( x_1, y_2 )\ \phi(x_1) dx_1 \psi(y_2) dy_2$

and we thus conclude that ${\mu*\nu}$ is an absolutely continuous measure on ${{\bf R}^2}$ with density function ${(x,y) \mapsto \phi(x) \psi(y)}$:

$\displaystyle d(\mu*\nu)(x,y) = \phi(x) \psi(y) dx dy. \ \ \ \ \ (4)$

In particular, ${\mu*\nu}$ is supported on the unit square ${[0,1]^2}$, which is of course the sumset of the two intervals ${[0,1] \times\{0\}}$ and ${\{0\} \times [0,1]}$.

We can arrive at the same conclusion from the regularisation method; the computations become lengthier, but more geometric in nature, and emphasise the role of transversality between the two segments supporting ${\mu}$ and ${\nu}$. One can view ${\mu}$ as the weak limit of the functions

$\displaystyle f_\epsilon(x,y) := \frac{1}{\epsilon} \phi(x) 1_{[0,\epsilon]}(y)$

as ${\epsilon \rightarrow 0}$ (where we continue to identify absolutely integrable functions with absolutely continuous measures, and of course we keep ${\epsilon}$ positive). We can similarly view ${\nu}$ as the weak limit of

$\displaystyle g_\epsilon(x,y) := \frac{1}{\epsilon} 1_{[0,\epsilon]}(x) \psi(y).$

Let us first look at the model case when ${\phi=\psi=1_{[0,1]}}$, so that ${f_\epsilon,g_\epsilon}$ are renormalised indicator functions of thin rectangles:

$\displaystyle f_\epsilon = \frac{1}{\epsilon} 1_{[0,1]\times [0,\epsilon]}; \quad g_\epsilon = \frac{1}{\epsilon} 1_{[0,\epsilon] \times [0,1]}.$

By (1), the convolution ${f_\epsilon*g_\epsilon}$ is then given by

$\displaystyle f_\epsilon*g_\epsilon(x,y) := \frac{1}{\epsilon^2} m( E_\epsilon )$

where ${E_\epsilon}$ is the intersection of two rectangles:

$\displaystyle E_\epsilon := ([0,1] \times [0,\epsilon]) \cap ((x,y) - [0,\epsilon] \times [0,1]).$

When ${(x,y)}$ lies in the square ${[\epsilon,1] \times [\epsilon,1]}$, one readily sees (especially if one draws a picture) that ${E_\epsilon}$ consists of an ${\epsilon \times \epsilon}$ square and thus has measure ${\epsilon^2}$; conversely, if ${(x,y)}$ lies outside ${[0,1+\epsilon] \times [0,1+\epsilon]}$, ${E_\epsilon}$ is empty and thus has measure zero. In the intermediate region, ${E_\epsilon}$ will have some measure between ${0}$ and ${\epsilon^2}$. From this we see that ${f_\epsilon*g_\epsilon}$ converges pointwise almost everywhere to ${1_{[0,1] \times [0,1]}}$ while also being dominated by an absolutely integrable function, and so converges weakly to ${1_{[0,1] \times [0,1]}}$, giving a special case of the formula (4).
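The intersection-of-rectangles computation above is easy to mechanise (code mine): by (1), ${f_\epsilon * g_\epsilon(x,y) = \epsilon^{-2} m(E_\epsilon)}$, and since ${E_\epsilon}$ is an intersection of axis-parallel rectangles, its measure is a product of two interval overlaps:

```python
# E_eps = ([0,1] x [0,eps]) \cap ([x-eps,x] x [y-1,y]); its area is the product
# of the horizontal and vertical interval overlaps.

def overlap(a0, a1, b0, b1):
    # length of the intersection of [a0,a1] and [b0,b1]
    return max(0.0, min(a1, b1) - max(a0, b0))

def f_eps_conv_g_eps(x, y, eps):
    area = overlap(0.0, 1.0, x - eps, x) * overlap(0.0, eps, y - 1.0, y)
    return area / eps**2

eps = 0.01
assert abs(f_eps_conv_g_eps(0.5, 0.5, eps) - 1.0) < 1e-9  # interior of [eps,1]^2
assert f_eps_conv_g_eps(3.0, 0.5, eps) == 0.0             # outside [0,1+eps]^2
assert 0.0 < f_eps_conv_g_eps(0.005, 0.5, eps) < 1.0      # transition region
```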

Exercise 1 Use a similar method to verify (4) in the case that ${\phi, \psi}$ are continuous functions on ${[0,1]}$. (The argument also works for absolutely integrable ${\phi,\psi}$, but one needs to invoke the Lebesgue differentiation theorem to make it run smoothly.)

Now we compute with the Fourier-analytic method. The Fourier transform ${\hat \mu(\xi,\eta)}$ of ${\mu}$ is given by

$\displaystyle \hat \mu(\xi,\eta) =\int_{{\bf R}^2} e^{-2\pi i (x \xi + y \eta)}\ d\mu(x,y)$

$\displaystyle = \int_{\bf R} \phi(x) e^{-2\pi i x \xi}\ dx$

$\displaystyle = \hat \phi(\xi)$

where we abuse notation slightly by using ${\hat \phi}$ to refer to the one-dimensional Fourier transform of ${\phi}$. In particular, ${\hat \mu}$ decays in the ${\xi}$ direction (by the Riemann-Lebesgue lemma) but has no decay in the ${\eta}$ direction, which reflects the horizontally grained structure of ${\mu}$. Similarly we have

$\displaystyle \hat \nu(\xi,\eta) = \hat \psi(\eta),$

so that ${\hat \nu}$ decays in the ${\eta}$ direction. The convolution ${\mu*\nu}$ then has decay in both the ${\xi}$ and ${\eta}$ directions,

$\displaystyle \widehat{\mu*\nu}(\xi,\eta) = \hat \phi(\xi) \hat \psi(\eta)$

and by inverting the Fourier transform we obtain (4).

Exercise 2 Let ${AB}$ and ${CD}$ be two non-parallel line segments in the plane ${{\bf R}^2}$. If ${\mu}$ is the uniform probability measure on ${AB}$ and ${\nu}$ is the uniform probability measure on ${CD}$, show that ${\mu*\nu}$ is the uniform probability measure on the parallelogram ${AB + CD}$ with vertices ${A+C, A+D, B+C, B+D}$. What happens in the degenerate case when ${AB}$ and ${CD}$ are parallel?

Finally, we compare the above answers with what one gets from the microlocal analysis heuristic. The measure ${\mu}$ is supported on the horizontal interval ${[0,1] \times \{0\}}$, and the cotangent bundle at any point on this interval points in the vertical direction. Thus, the wave front set of ${\mu}$ should be supported on those points ${((x_1,x_2),(\xi_1,\xi_2))}$ in phase space with ${x_1 \in [0,1]}$, ${x_2 = 0}$ and ${\xi_1=0}$. Similarly, the wave front set of ${\nu}$ should be supported at those points ${((y_1,y_2),(\xi_1,\xi_2))}$ with ${y_1 = 0}$, ${y_2 \in [0,1]}$, and ${\xi_2=0}$. The convolution ${\mu * \nu}$ should then have wave front set supported on those points ${((x_1+y_1,x_2+y_2), (\xi_1,\xi_2))}$ with ${x_1 \in [0,1]}$, ${x_2 = 0}$, ${\xi_1=0}$, ${y_1=0}$, ${y_2 \in [0,1]}$, and ${\xi_2=0}$, i.e. it should be spatially supported on the unit square and have zero (rescaled) frequency, so the heuristic predicts a smooth function on the unit square, which is indeed what happens. (The situation is slightly more complicated in the non-smooth case ${\phi=\psi=1_{[0,1]}}$, because ${\mu}$ and ${\nu}$ then acquire some additional singularities at the endpoints; namely, the wave front set of ${\mu}$ now also contains those points ${((x_1,x_2),(\xi_1,\xi_2))}$ with ${x_1 \in \{0,1\}}$, ${x_2=0}$, and ${\xi_1,\xi_2}$ arbitrary, and ${\nu}$ similarly contains those points ${((y_1,y_2), (\xi_1,\xi_2))}$ with ${y_1=0}$, ${y_2 \in \{0,1\}}$, and ${\xi_1,\xi_2}$ arbitrary. I’ll leave it as an exercise to the reader to compute what this predicts for the wave front set of ${\mu*\nu}$, and how this compares with the actual wave front set.)

Exercise 3 Let ${\mu}$ be the uniform measure on the unit sphere ${S^{n-1}}$ in ${{\bf R}^n}$ for some ${n \geq 2}$. Use as many of the above methods as possible to establish multiple proofs of the following fact: the convolution ${\mu*\mu}$ is an absolutely continuous multiple ${f(x)\ dx}$ of Lebesgue measure, with ${f(x)}$ supported on the ball ${B(0,2)}$ of radius ${2}$ and obeying the bounds

$\displaystyle |f(x)| \ll \frac{1}{|x|}$

for ${|x| \leq 1}$ and

$\displaystyle |f(x)| \ll (2-|x|)^{(n-3)/2}$

for ${1 \leq |x| \leq 2}$, where the implied constants are allowed to depend on the dimension ${n}$. (Hint: try the ${n=2}$ case first, which is particularly simple due to the fact that the addition map ${+: S^1 \times S^1 \rightarrow {\bf R}^2}$ is mostly a local diffeomorphism. The Fourier-based approach is instructive, but requires either asymptotics of Bessel functions or the principle of stationary phase.)

[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function ${\zeta}$, defined by

$\displaystyle \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$

for ${\hbox{Re}(s)>1}$ and extended meromorphically to other values of ${s}$, and asserts that the only zeroes of ${\zeta}$ in the critical strip ${\{ s: 0 \leq \hbox{Re}(s) \leq 1 \}}$ lie on the critical line ${\{ s: \hbox{Re}(s)=\frac{1}{2} \}}$.

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number ${n}$ has a unique factorisation ${n = p_1^{a_1} \ldots p_k^{a_k}}$ into primes. Taking logarithms, we obtain the identity

$\displaystyle \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)$

for any natural number ${n}$, where ${\Lambda}$ is the von Mangoldt function, thus ${\Lambda(n) = \log p}$ when ${n}$ is a power of a prime ${p}$ and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

$\displaystyle \sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \sum_{d|n} \frac{\Lambda(d)}{n^s},$

formally at least. Writing ${n=dm}$, the right-hand side factors as

$\displaystyle (\sum_{d=1}^\infty \frac{\Lambda(d)}{d^s}) (\sum_{m=1}^\infty \frac{1}{m^s}) = \zeta(s) \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}$

whereas the left-hand side is (formally, at least) equal to ${-\zeta'(s)}$. We conclude the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)},$

(formally, at least). If we integrate this, we are formally led to the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = \log \zeta(s)$

or equivalently to the exponential identity

$\displaystyle \zeta(s) = \exp( \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} ) \ \ \ \ \ (2)$

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as ${\zeta}$ has a simple pole at ${s=1}$ and zeroes at various places ${s=\rho}$ on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

$\displaystyle \zeta(s) = \frac{1}{s-1} \times \prod_\rho (s-\rho) \times \ldots$

(where we will be intentionally vague about what is hiding in the ${\ldots}$ terms) and so we expect an expansion of the form

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = - \log(s-1) + \sum_\rho \log(s-\rho) + \ldots. \ \ \ \ \ (3)$

Note that

$\displaystyle \frac{1}{s-\rho} = \int_1^\infty t^{\rho-s} \frac{dt}{t}$

and hence on integrating in ${s}$ we formally have

$\displaystyle \log(s-\rho) = -\int_1^\infty t^{\rho-s-1} \frac{dt}{\log t}$

and thus we have the heuristic approximation

$\displaystyle \log(s-\rho) \approx - \sum_{n=1}^\infty \frac{n^{\rho-s-1}}{\log n}.$

Comparing this with (3), we are led to a heuristic form of the explicit formula

$\displaystyle \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1}. \ \ \ \ \ (4)$

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function ${1_{[0,x]}(n)}$ to obtain the formula

$\displaystyle \sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (5)$

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that ${\hbox{Re}(\rho) = 1/2}$ for all zeroes ${\rho}$, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \ \ \ \ \ (6)$

as ${x \rightarrow \infty}$, giving a near-optimal “square root cancellation” for the sum ${\sum_{n \leq x} \Lambda(n)-1}$. Conversely, if one can somehow establish a bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )$

for any fixed ${\epsilon}$, then the explicit formula can be used to then deduce that all zeroes ${\rho}$ of ${\zeta}$ have real part at most ${1/2+\epsilon}$, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )$

can be automatically amplified to the stronger bound

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x )$

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line ${\hbox{Re}(s)=1}$, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

$\displaystyle \sum_{n \leq x} \Lambda(n) =x + o(x);$

see e.g. this previous blog post for more discussion.
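The exponential identity (2) can be tested numerically (a sanity check of mine; the truncation point is arbitrary): at ${s=2}$ the right-hand side should converge to ${\zeta(2) = \pi^2/6}$, using the observation that ${\Lambda(p^k)/\log(p^k) = 1/k}$:

```python
# Check exp( sum_n Lambda(n)/log(n) * n^{-2} ) ~ zeta(2) = pi^2/6.
# Since Lambda(p^k)/log(p^k) = 1/k, the sum is sum_{p^k <= N} 1/(k p^{2k}).

import math

N = 100_000
# sieve of smallest prime factors
spf = list(range(N + 1))
for p in range(2, int(N**0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

def prime_power(n):
    """Return (p, k) if n = p^k, else None."""
    p = spf[n]
    k = 0
    while n > 1:
        if spf[n] != p:
            return None
        n //= p
        k += 1
    return (p, k)

total = 0.0
for n in range(2, N + 1):
    pk = prime_power(n)
    if pk:
        p, k = pk
        total += 1.0 / (k * n * n)

zeta2 = math.pi**2 / 6
assert abs(math.exp(total) - zeta2) < 1e-4   # tail is O(1/(N log N))
```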

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but to “twist” them by a Dirichlet character ${\chi: {\bf Z} \rightarrow {\bf C}}$. The analogue of the Riemann zeta function is then the Dirichlet ${L}$-function

$\displaystyle L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.$

The identity (1), which encoded the fundamental theorem of arithmetic, can be twisted by ${\chi}$ to obtain

$\displaystyle \chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d) \chi(\frac{n}{d}) \ \ \ \ \ (7)$

and essentially the same manipulations as before eventually lead to the exponential identity

$\displaystyle L(s,\chi) = \exp( \sum_{n=1}^\infty \frac{\chi(n) \Lambda(n)}{\log n} n^{-s} ). \ \ \ \ \ (8)$

which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form

$\displaystyle \chi(n) \Lambda(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (9)$

for non-principal ${\chi}$, where ${\rho}$ now ranges over the zeroes of ${L(s,\chi)}$ in the critical strip, rather than the zeroes of ${\zeta(s)}$; a more accurate formulation, following (5), would be

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) \approx - \sum_\rho \frac{x^{\rho}}{\rho}. \ \ \ \ \ (10)$

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet ${L}$-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of ${L(s,\chi)}$ in the critical strip also lie on the critical line, then we obtain the bound

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2} \log(x) \log(xq) )$

for any non-principal Dirichlet character ${\chi}$, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2+o(1)} )$

(where ${o(1)}$ denotes a quantity that goes to zero as ${x \rightarrow \infty}$ for any fixed ${q}$).

Next, one can consider other number systems than the natural numbers ${{\bf N}}$ and integers ${{\bf Z}}$. For instance, one can replace the integers ${{\bf Z}}$ with rings ${{\mathcal O}_K}$ of integers in other number fields ${K}$ (i.e. finite extensions of ${{\bf Q}}$), such as the quadratic extensions ${K = {\bf Q}[\sqrt{D}]}$ of the rationals for various square-free integers ${D}$, in which case the ring of integers would be the ring of quadratic integers ${{\mathcal O}_K = {\bf Z}[\omega]}$ for a suitable generator ${\omega}$ (it turns out that one can take ${\omega = \sqrt{D}}$ if ${D=2,3\hbox{ mod } 4}$, and ${\omega = \frac{1+\sqrt{D}}{2}}$ if ${D=1 \hbox{ mod } 4}$). Here, it is not immediately obvious what the analogue of the natural numbers ${{\bf N}}$ is in this setting, since rings such as ${{\bf Z}[\omega]}$ do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number ${n}$ generates a principal ideal ${(n) = \{ an: a \in {\bf Z} \}}$ in the integers, and conversely every non-trivial ideal ${{\mathfrak n}}$ in the integers is associated to precisely one natural number ${n}$ in this fashion, namely the norm ${N({\mathfrak n}) := |{\bf Z} / {\mathfrak n}|}$ of that ideal. So one can identify the natural numbers with the ideals of ${{\bf Z}}$. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if ${p}$ is prime, and ${a,b}$ are integers, then ${ab \in (p)}$ if and only if one of ${a \in (p)}$ or ${b \in (p)}$ is true.

Finally, even in number systems (such as ${{\bf Z}[\sqrt{-5}]}$) in which the classical version of the fundamental theorem of arithmetic fails (e.g. ${6 = 2 \times 3 = (1-\sqrt{-5})(1+\sqrt{-5})}$), we have the fundamental theorem of arithmetic for ideals: every ideal ${\mathfrak{n}}$ in a Dedekind domain (which includes the ring ${{\mathcal O}_K}$ of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals ${\mathfrak{p}}$ (although these ideals might not necessarily be principal). For instance, in ${{\bf Z}[\sqrt{-5}]}$, the principal ideal ${(6)}$ factors as the product of four prime (but non-principal) ideals ${(2, 1+\sqrt{-5})}$, ${(2, 1-\sqrt{-5})}$, ${(3, 1+\sqrt{-5})}$, ${(3, 1-\sqrt{-5})}$. (Note that the first two ideals ${(2,1+\sqrt{-5}), (2,1-\sqrt{-5})}$ are actually equal to each other.)

Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. The analogue of the Riemann zeta function is now the Dedekind zeta function

$\displaystyle \zeta_K(s) := \sum_{{\mathfrak n}} \frac{1}{N({\mathfrak n})^s}$

where the summation is over all non-trivial ideals in ${{\mathcal O}_K}$. One can also define a von Mangoldt function ${\Lambda_K({\mathfrak n})}$, defined as ${\log N( {\mathfrak p})}$ when ${{\mathfrak n}}$ is a power of a prime ideal ${{\mathfrak p}}$, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

$\displaystyle \log N({\mathfrak n}) = \sum_{{\mathfrak d}|{\mathfrak n}} \Lambda_K({\mathfrak d}) \ \ \ \ \ (11)$

which leads as before to an exponential identity

$\displaystyle \zeta_K(s) = \exp( \sum_{{\mathfrak n}} \frac{\Lambda_K({\mathfrak n})}{\log N({\mathfrak n})} N({\mathfrak n})^{-s} ) \ \ \ \ \ (12)$

and an explicit formula of the heuristic form

$\displaystyle \Lambda_K({\mathfrak n}) \approx 1 - \sum_\rho N({\mathfrak n})^{\rho-1}$

or, a little more accurately,

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (13)$

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( \sqrt{x} \log(x) (d+\log(Dx)) )$

where ${D}$ is the conductor of ${K}$ (which, in the case of number fields, is the absolute value of the discriminant of ${K}$) and ${d = \hbox{dim}_{\bf Q}(K)}$ is the degree of the extension of ${K}$ over ${{\bf Q}}$. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( x^{1/2+o(1)} )$

where ${o(1)}$ denotes a quantity that goes to zero as ${x \rightarrow \infty}$ (holding ${K}$ fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.
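As a concrete illustration of counting prime ideals (the script is mine; the splitting behaviour of rational primes in ${{\bf Z}[i]}$ used below is classical): for ${K = {\bf Q}(i)}$, the prime ${2}$ ramifies as ${(1+i)^2}$ (one prime ideal of norm ${2}$), primes ${p \equiv 1 \hbox{ mod } 4}$ split into two prime ideals of norm ${p}$, and primes ${p \equiv 3 \hbox{ mod } 4}$ remain inert with norm ${p^2}$. Tallying ${\Lambda_K}$ over prime ideal powers of norm at most ${x}$ should then give approximately ${x}$:

```python
# Compute psi_K(x) = sum over ideals of norm <= x of Lambda_K, for K = Q(i),
# using the classical splitting law for primes in Z[i], and check psi_K(x) ~ x.

import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def psi_K(x):
    total = 0.0
    for p in primes_up_to(x):
        if p == 2:
            norms = [(2, 1)]        # one ramified prime ideal (1+i), norm 2
        elif p % 4 == 1:
            norms = [(p, 2)]        # two split prime ideals, each of norm p
        else:
            norms = [(p * p, 1)]    # one inert prime ideal, norm p^2
        for q, mult in norms:
            if q > x:
                continue
            # prime ideal powers have norms q, q^2, ..., up to x,
            # each contributing Lambda_K = log q
            k_max = int(math.log(x) / math.log(q))
            total += mult * math.log(q) * k_max
    return total

x = 10**6
assert abs(psi_K(x) / x - 1.0) < 0.01
```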

As was the case with the Dirichlet ${L}$-functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line ${{\mathbb A}^1}$ and a finite field ${{\mathbb F} = {\mathbb F}_q}$ of some order ${q}$. The polynomial functions on the affine line ${{\mathbb A}^1/{\mathbb F}}$ form the usual polynomial ring ${{\mathbb F}[t]}$, which then plays the role of the integers ${{\bf Z}}$ (or ${{\mathcal O}_K}$) in the previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm ${N(f)}$ of a polynomial is the order of ${{\mathbb F}[t] / (f)}$, which can be computed explicitly as

$\displaystyle N(f) = q^{\hbox{deg}(f)}.$

Because of this, we will normalise things slightly differently here and use ${\hbox{deg}(f)}$ in place of ${\log N(f)}$ in what follows. The (local) zeta function ${\zeta_{{\mathbb A}^1/{\mathbb F}}(s)}$ is then defined as

$\displaystyle \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = \sum_f \frac{1}{N(f)^s}$

where ${f}$ ranges over monic polynomials, and the von Mangoldt function ${\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}$ is defined to equal ${\hbox{deg}(g)}$ when ${f}$ is a power of a monic irreducible polynomial ${g}$, and zero otherwise. Note that because ${N(f)}$ is always a power of ${q}$, the zeta function here is in fact periodic with period ${2\pi i / \log q}$. Because of this, it is customary to make a change of variables ${T := q^{-s}}$, so that

$\displaystyle \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = Z( {\mathbb A}^1/{\mathbb F}, T )$

and ${Z}$ is the renormalised zeta function

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_f T^{\hbox{deg}(f)}. \ \ \ \ \ (14)$

We have the analogue of (1) (or (7) or (11)):

$\displaystyle \hbox{deg}(f) = \sum_{g|f} \Lambda_{{\mathbb A}^1/{\mathbb F}}(g), \ \ \ \ \ (15)$

which leads as before to an exponential identity

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \exp( \sum_f \frac{\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}{\hbox{deg}(f)} T^{\hbox{deg}(f)} ) \ \ \ \ \ (16)$

analogous to (2), (8), or (12). It also leads to the explicit formula

$\displaystyle \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - \sum_\rho N(f)^{\rho-1}$

where ${\rho}$ are the zeroes of the original zeta function ${\zeta_{{\mathbb A}^1/{\mathbb F}}(s)}$ (counting each residue class of the period ${2\pi i/\log q}$ just once), or equivalently

$\displaystyle \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - q^{-\hbox{deg}(f)} \sum_\alpha \alpha^{\hbox{deg}(f)},$

where ${\alpha}$ are the reciprocals of the roots of the normalised zeta function ${Z( {\mathbb A}^1/{\mathbb F}, T )}$ (or to put it another way, ${1-\alpha T}$ are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx q^n - \sum_\alpha \alpha^n.$

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to an additive constant, thus

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (17)$

for an explicit integer ${c}$ (independent of ${n}$) arising from any potential pole of ${Z}$ at ${T=1}$. In the case of the affine line ${{\mathbb A}^1}$, the situation is particularly simple, because the zeta function ${Z( {\mathbb A}^1/{\mathbb F}, T)}$ is easy to compute. Indeed, since there are exactly ${q^n}$ monic polynomials of a given degree ${n}$, we see from (14) that

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1-qT}$

so in fact there are no zeroes whatsoever, and no pole at ${T=1}$ either, so we have an exact prime number theorem for this function field:

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n. \ \ \ \ \ (18)$

Among other things, this tells us that the number of irreducible monic polynomials of degree ${n}$ is ${q^n/n + O(q^{n/2})}$.
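One can verify (18) by brute force for small ${q}$ and ${n}$ (code mine, encoding polynomials over ${{\mathbb F}_2}$ as bitmasks): summing ${\Lambda_{{\mathbb A}^1/{\mathbb F}}}$ over prime powers of degree ${n}$ amounts to computing ${\sum_{d|n} d \cdot I_d}$, where ${I_d}$ is the number of monic irreducibles of degree ${d}$, and this should equal ${2^n}$:

```python
# Polynomials over F_2 encoded as bitmasks (bit i = coefficient of t^i).
# Check sum_{d | n} d * (#monic irreducibles of degree d) = 2^n, which is
# identity (18) after grouping prime powers f = g^{n/d} by their base g.

def polymod2(a, b):
    # remainder of a modulo b in F_2[t]
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)
    return a

def is_irreducible2(f):
    n = f.bit_length() - 1                   # degree of f
    if n <= 0:
        return False
    # trial division by all polynomials of degree 1..n//2
    for g in range(2, 1 << (n // 2 + 1)):
        if polymod2(f, g) == 0:
            return False
    return True

def count_irreducibles(d):
    # monic polynomials of degree d are the bitmasks in [2^d, 2^(d+1))
    return sum(is_irreducible2(f) for f in range(1 << d, 1 << (d + 1)))

for n in range(1, 9):
    total = sum(d * count_irreducibles(d) for d in range(1, n + 1) if n % d == 0)
    assert total == 2**n
```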

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial ${f \in {\mathbb F}[t]}$ through its roots, which are a finite set of points in the algebraic closure ${\overline{{\mathbb F}}}$ of the finite field ${{\mathbb F}}$ (or more suggestively, as points on the affine line ${{\mathbb A}^1( \overline{{\mathbb F}} )}$). The number of such points (counting multiplicity) is the degree of ${f}$, and from the factor theorem, the set of points determines the monic polynomial ${f}$ (or, if one removes the monic hypothesis, it determines the polynomial ${f}$ projectively). These points have an action of the Galois group ${\hbox{Gal}( \overline{{\mathbb F}} / {\mathbb F} )}$. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map ${\hbox{Frob}: x \mapsto x^q}$, which fixes the elements of the original finite field ${{\mathbb F}}$ but permutes the other elements of ${\overline{{\mathbb F}}}$. Thus the roots of a given polynomial ${f}$ split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if ${f}$ is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.
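This orbit decomposition can be seen explicitly in a small example (code mine; the choice of defining polynomial is one of several possibilities): realising ${{\mathbb F}_{16}}$ as ${{\mathbb F}_2[t]/(t^4+t+1)}$ and iterating the Frobenius map ${x \mapsto x^2}$ partitions the ${16}$ points of ${{\mathbb A}^1({\mathbb F}_{16})}$ into orbits whose sizes divide ${4}$, with the number of orbits of size ${d}$ matching the number of monic irreducibles of degree ${d}$ over ${{\mathbb F}_2}$ (namely ${2}$ of degree ${1}$, ${1}$ of degree ${2}$, and ${3}$ of degree ${4}$):

```python
# F_16 realised as F_2[t]/(t^4 + t + 1), elements encoded as 4-bit masks;
# decompose the 16 points into orbits of the Frobenius map x -> x^2.

MOD = 0b10011   # t^4 + t + 1, irreducible over F_2

def mul(a, b):
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    while out.bit_length() > 4:      # reduce modulo t^4 + t + 1
        out ^= MOD << (out.bit_length() - 5)
    return out

def frobenius(x):
    return mul(x, x)

orbits = {}                          # orbit size -> number of orbits
seen = set()
for x in range(16):
    if x in seen:
        continue
    orbit = {x}
    y = frobenius(x)
    while y != x:
        orbit.add(y)
        y = frobenius(y)
    seen |= orbit
    orbits[len(orbit)] = orbits.get(len(orbit), 0) + 1

assert orbits == {1: 2, 2: 1, 4: 3}              # matches irreducible counts
assert sum(d * c for d, c in orbits.items()) == 16
```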

Now consider the degree ${n}$ finite field extension ${{\mathbb F}_n}$ of ${{\mathbb F}}$ (it is a classical fact that there is exactly one such extension up to isomorphism for each ${n}$); this is a subfield of ${\overline{{\mathbb F}}}$ of order ${q^n}$. (Here we are performing a standard abuse of notation by overloading the subscripts in the ${{\mathbb F}}$ notation; thus ${{\mathbb F}_q}$ denotes the field of order ${q}$, while ${{\mathbb F}_n}$ denotes the extension of ${{\mathbb F} = {\mathbb F}_q}$ of order ${n}$, so that we in fact have ${{\mathbb F}_n = {\mathbb F}_{q^n}}$ if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point ${x}$ in this extension (or, more suggestively, the affine line ${{\mathbb A}^1({\mathbb F}_n)}$ over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of ${x}$. Since the Frobenius action is periodic of period ${n}$ on ${{\mathbb F}_n}$, the degree of this minimal polynomial must divide ${n}$. Conversely, every monic irreducible polynomial of degree ${d}$ dividing ${n}$ produces ${d}$ distinct zeroes that lie in ${{\mathbb F}_d}$ (here we use the classical fact that finite fields are perfect) and hence in ${{\mathbb F}_n}$. We have thus partitioned ${{\mathbb A}^1({\mathbb F}_n)}$ into Frobenius orbits (also known as closed points), with each monic irreducible polynomial ${f}$ of degree ${d}$ dividing ${n}$ contributing an orbit of size ${d = \hbox{deg}(f) = \Lambda(f^{n/d})}$. From this we conclude a geometric interpretation of the left-hand side of (18):

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = \sum_{x \in {\mathbb A}^1({\mathbb F}_n)} 1. \ \ \ \ \ (19)$

The identity (18) thus is equivalent to the thoroughly boring fact that the number of ${{\mathbb F}_n}$-points on the affine line ${{\mathbb A}^1}$ is equal to ${q^n}$. However, things become much more interesting if one then replaces the affine line ${{\mathbb A}^1}$ by a more general (geometrically) irreducible curve ${C}$ defined over ${{\mathbb F}}$; for instance one could take ${C}$ to be an elliptic curve

$\displaystyle E = \{ (x,y): y^2 = x^3 + ax + b \} \ \ \ \ \ (20)$

for some suitable ${a,b \in {\mathbb F}}$, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of ${{\mathbb F}}$-rational points removed). The analogue of ${{\mathbb F}[t]}$ is then the coordinate ring of ${C}$ (for instance, in the case of the elliptic curve (20) it would be ${{\mathbb F}[x,y] / (y^2-x^3-ax-b)}$), with polynomials in this ring producing a set of roots in the curve ${C( \overline{\mathbb F})}$ that is again invariant with respect to the Frobenius action (acting on the ${x}$ and ${y}$ coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on ${C}$ will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

$\displaystyle \zeta_{C/{\mathbb F}}(s) = \sum_{{\mathfrak f}} \frac{1}{N({\mathfrak f})^s}$

and a von Mangoldt function ${\Lambda_{C/{\mathbb F}}({\mathfrak f})}$ as before, where ${{\mathfrak f}}$ would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve ${C}$; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points ${\{x_1,\ldots,x_k\}}$ in ${C}$, or equivalently an effective divisor ${[x_1] + \ldots + [x_k]}$ of ${C}$; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of ${C}$. With this dictionary, the zeta function becomes

$\displaystyle \zeta_{C/{\mathbb F}}(s) = \sum_{D \geq 0} \frac{1}{q^{s\, \hbox{deg}(D)}}$

where the sum is over effective rational divisors ${D}$ of ${C}$ (with ${k}$ being the degree of an effective divisor ${[x_1] + \ldots + [x_k]}$), or equivalently

$\displaystyle Z( C/{\mathbb F}, T ) = \sum_{D \geq 0} T^{\hbox{deg}(D)}.$

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

$\displaystyle \sum_{N({\mathfrak f}) = q^n} \Lambda_{C/{\mathbb F}}({\mathfrak f}) = \sum_{x \in C({\mathbb F}_n)} 1$

$\displaystyle = |C({\mathbb F}_n)|,$

thus this sum is simply counting the number of ${{\mathbb F}_n}$-points of ${C}$. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

$\displaystyle Z( C/{\mathbb F}, T ) = \exp( \sum_{n \geq 1} \frac{|C({\mathbb F}_n)|}{n} T^n ) \ \ \ \ \ (21)$

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

$\displaystyle |C({\mathbb F}_n)| = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (22)$

where ${\alpha}$ runs over the (reciprocal) zeroes of ${Z( C/{\mathbb F}, T )}$ (counting multiplicity), and ${c}$ is an integer independent of ${n}$. (As it turns out, ${c}$ equals ${1}$ when ${C}$ is a projective curve, and more generally equals ${1-k}$ when ${C}$ is a projective curve with ${k}$ rational points deleted.)

To evaluate ${Z(C/{\mathbb F},T)}$, one needs to count the number of effective divisors of a given degree on the curve ${C}$. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when ${C}$ is projective) that ${Z(C/{\mathbb F},T)}$ is in fact a rational function, with a finite number of zeroes, and simple poles at both ${T=1}$ and ${T=1/q}$, with similar results when one deletes some rational points from ${C}$; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

$\displaystyle |E({\mathbb F}_n)| = q^n - \alpha^n - \beta^n$

for two complex numbers ${\alpha,\beta}$ depending on ${E}$ and ${q}$.
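One can watch this explicit formula in action numerically. The script below (a self-contained sketch; the choices ${p=7}$, ${a=b=1}$ and the non-residue ${r=3}$ are arbitrary demo parameters) brute-force counts the affine points of ${y^2 = x^3+x+1}$ over ${{\mathbb F}_7}$ and over ${{\mathbb F}_{49} = {\mathbb F}_7(s)}$ with ${s^2=3}$, and checks that the second count is predicted by the first via ${\alpha+\beta = q - |E({\mathbb F}_q)|}$, ${\alpha\beta = q}$.

```python
# Check |E(F_{q^2})| = q^2 - alpha^2 - beta^2, where alpha, beta are
# determined by |E(F_q)| = q - (alpha + beta) and alpha * beta = q.

p, a, b = 7, 1, 1          # small prime field and curve coefficients (demo choices)
r = 3                      # a quadratic non-residue mod 7, so F_49 = F_7[s], s^2 = r

def count_affine_Fp():
    """Brute-force count of affine points of y^2 = x^3 + a x + b over F_p."""
    squares = {}
    for y in range(p):
        squares.setdefault(y * y % p, []).append(y)
    return sum(len(squares.get((x ** 3 + a * x + b) % p, [])) for x in range(p))

def mul(z, w):
    """Multiplication in F_{p^2}; elements are pairs (u, v) meaning u + v*s."""
    (u1, v1), (u2, v2) = z, w
    return ((u1 * u2 + r * v1 * v2) % p, (u1 * v2 + u2 * v1) % p)

def count_affine_Fp2():
    """Brute-force count of affine points over F_{p^2}."""
    elems = [(u, v) for u in range(p) for v in range(p)]
    count = 0
    for x in elems:
        x3 = mul(mul(x, x), x)
        rhs = ((x3[0] + a * x[0] + b) % p, (x3[1] + a * x[1]) % p)
        count += sum(1 for y in elems if mul(y, y) == rhs)
    return count

N1 = count_affine_Fp()
N2 = count_affine_Fp2()
s1 = p - N1                # alpha + beta
s2 = s1 * s1 - 2 * p       # alpha^2 + beta^2, using alpha * beta = p
print(N1, N2, p * p - s2)  # the last two numbers agree
```

Here ${|E({\mathbb F}_7)| = 4}$, so ${\alpha+\beta = 3}$ and the formula forces ${|E({\mathbb F}_{49})| = 49 + 5 = 54}$, which the brute-force count confirms.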

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of ${\zeta_{C/{\mathbb F}}}$ lie on the critical line, or equivalently that all the roots ${\alpha}$ in (22) have modulus ${\sqrt{q}}$, so that (22) then gives the asymptotic

$\displaystyle |C({\mathbb F}_n)| = q^n + O( q^{n/2} ) \ \ \ \ \ (23)$

where the implied constant depends only on the genus of ${C}$ (and on the number of points removed from ${C}$). For instance, for elliptic curves we have the Hasse bound

$\displaystyle | |E({\mathbb F}_n)| - q^n | \leq 2 q^{n/2}.$

As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

$\displaystyle |C({\mathbb F}_n)| = q^n + O( q^{n/2 + O(1)} ), \ \ \ \ \ (24)$

then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large ${n}$, which then amplifies to the optimal bound (23) for all ${n}$ (and in particular for ${n=1}$). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with ${q}$-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no ${q}$-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of ${q}$.

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet ${L}$-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve ${C \subset {\mathbb A}^2}$ and an additive character ${\psi: {\mathbb F}^2 \rightarrow {\bf C}^\times}$, thus ${\psi(x+y) = \psi(x) \psi(y)}$ for all ${x,y \in {\mathbb F}^2}$. Given a rational effective divisor ${D = [x_1] + \ldots + [x_k]}$, the sum ${x_1+\ldots+x_k}$ is Frobenius-invariant and thus lies in ${{\mathbb F}^2}$. By abuse of notation, we may thus define ${\psi}$ on such divisors by

$\displaystyle \psi( [x_1] + \ldots + [x_k] ) := \psi( x_1 + \ldots + x_k )$

and observe that ${\psi}$ is multiplicative in the sense that ${\psi(D_1 + D_2) = \psi(D_1) \psi(D_2)}$ for rational effective divisors ${D_1,D_2}$. One can then define ${\psi({\mathfrak f})}$ for any non-trivial ideal ${{\mathfrak f}}$ by replacing that ideal with the associated rational effective divisor; for instance, if ${f}$ is a polynomial in the coordinate ring of ${C}$, with zeroes at ${x_1,\ldots,x_k \in C}$, then ${\psi((f))}$ is ${\psi( x_1+\ldots+x_k )}$. Again, we have the multiplicativity property ${\psi({\mathfrak f} {\mathfrak g}) = \psi({\mathfrak f}) \psi({\mathfrak g})}$. If we then form the twisted normalised zeta function

$\displaystyle Z( C/{\mathbb F}, \psi, T ) = \sum_{D \geq 0} \psi(D) T^{\hbox{deg}(D)}$

then by twisting the previous analysis, we eventually arrive at the exponential identity

$\displaystyle Z( C/{\mathbb F}, \psi, T ) = \exp( \sum_{n \geq 1} \frac{S_n(C/{\mathbb F}, \psi)}{n} T^n ) \ \ \ \ \ (25)$

in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums ${S_n(C/{\mathbb F}, \psi)}$ are defined by

$\displaystyle S_n(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F}_n)} \psi( \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) )$

where the trace ${\hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x)}$ of an element ${x}$ in the plane ${{\mathbb A}^2( {\mathbb F}_n )}$ is defined by the formula

$\displaystyle \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x).$

In particular, ${S_1}$ is the exponential sum

$\displaystyle S_1(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F})} \psi(x)$

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

$\displaystyle K(a,b;p) := \sum_{x \in {\mathbb F}_p^\times} e_p( ax + \frac{b}{x})$

as a special case, where ${a,b \in {\mathbb F}_p^\times}$. (NOTE: the sign conventions for the companion sum ${S_n}$ are not consistent across the literature, sometimes it is ${-S_n}$ which is referred to as the companion sum.)
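For concreteness, here is a short numerical check (with demo parameters of my own choosing) that the Kloosterman sums ${K(a,b;p)}$ are real and obey the Weil bound ${|K(a,b;p)| \leq 2\sqrt{p}}$ discussed below:

```python
# Compute K(a,b;p) = sum over x in F_p^* of e_p(a*x + b*x^{-1})
# and check the Weil bound |K(a,b;p)| <= 2*sqrt(p) empirically.

from cmath import exp, pi
from math import sqrt

def e_p(x, p):
    """The standard additive character e_p(x) = exp(2*pi*i*x/p)."""
    return exp(2j * pi * (x % p) / p)

def kloosterman(a, b, p):
    """K(a,b;p); the sum is real, since x -> -x conjugates each term."""
    total = sum(e_p(a * x + b * pow(x, p - 2, p), p) for x in range(1, p))
    return total.real

p = 101
values = [kloosterman(a, b, p) for a in range(1, 6) for b in range(1, 6)]
print(max(abs(K) for K in values), 2 * sqrt(p))
```

(The modular inverse ${x^{-1}}$ is computed as ${x^{p-2} \bmod p}$ via Fermat's little theorem.)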

If ${\psi}$ is non-principal (and ${C}$ is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that ${Z}$ is a rational function of ${T}$, with no pole at ${T=q^{-1}}$, and one then gets an explicit formula of the form

$\displaystyle S_n(C/{\mathbb F},\psi) = -\sum_\alpha \alpha^n + c \ \ \ \ \ (26)$

for the companion sums, where the ${\alpha}$ are the reciprocals of the zeroes of ${Z}$, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

$\displaystyle \sum_{x \in {\mathbb F}_{p^n}^\times} e_p( \hbox{Tr}( ax + \frac{b}{x}) ) = -\alpha^n - \beta^n \ \ \ \ \ (27)$

for all ${n}$ and some complex numbers ${\alpha,\beta}$ depending on ${a,b,p}$, where we have abbreviated ${\hbox{Tr}_{{\mathbb F}_{p^n}/{\mathbb F}_p}}$ as ${\hbox{Tr}}$. As before, the Riemann hypothesis for ${Z}$ then gives a square root cancellation bound of the form
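The ${n=2}$ case of this identity can be tested directly: with ${K = K(a,b;p) = -(\alpha+\beta)}$ and ${\alpha\beta = p}$, the companion sum ${\sum_{x \in {\mathbb F}_{p^2}^\times} e_p(\hbox{Tr}(ax + b/x))}$ should equal ${-\alpha^2-\beta^2 = 2p - K^2}$. The sketch below (demo parameters ${p=7}$, ${a=b=1}$, non-residue ${r=3}$) realises ${{\mathbb F}_{p^2} = {\mathbb F}_p(s)}$ with ${s^2 = r}$, so that ${\hbox{Frob}(u+vs) = u-vs}$ and ${\hbox{Tr}(u+vs) = 2u}$.

```python
# Verify the n = 2 case of the companion-sum identity (27):
# S_2 = -(alpha^2 + beta^2) = 2p - K^2 where K = K(a,b;p).

from cmath import exp, pi

p, a, b = 7, 1, 1
r = 3  # quadratic non-residue mod 7, so F_49 = F_7[s] with s^2 = r

def e_p(x):
    return exp(2j * pi * (x % p) / p)

def K1():
    """The Kloosterman sum K(a,b;p) over F_p^*."""
    return sum(e_p(a * x + b * pow(x, p - 2, p)) for x in range(1, p)).real

def S2():
    """The companion sum over F_{p^2}^*, using Tr(u + v*s) = 2u and
    (u + v*s)^{-1} = (u - v*s) / (u^2 - r*v^2)."""
    total = 0.0
    for u in range(p):
        for v in range(p):
            if (u, v) == (0, 0):
                continue
            norm_inv = pow((u * u - r * v * v) % p, p - 2, p)
            # Tr(a*z + b*z^{-1}) = 2*(a*u + b*u/norm) for z = u + v*s
            total += e_p(2 * (a * u + b * u * norm_inv)).real
    return total

K, S = K1(), S2()
print(S, 2 * p - K ** 2)  # these agree
```

The two printed values coincide (up to floating-point rounding), as (27) predicts.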

$\displaystyle S_n(C/{\mathbb F},\psi) = O( q^{n/2} ) \ \ \ \ \ (28)$

for the companion sums (and in particular gives the very explicit Weil bound ${|K(a,b;p)| \leq 2\sqrt{p}}$ for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

$\displaystyle S_n(C/{\mathbb F},\psi) = O( q^{n/2 + O(1)} ).$

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.

One can also twist the zeta function on a curve by a multiplicative character ${\chi: {\mathbb F}^\times \rightarrow {\bf C}^\times}$ by similar arguments, except that instead of forming the sum ${x_1+\ldots+x_k}$ of all the components of an effective divisor ${[x_1]+\ldots+[x_k]}$, one takes the product ${x_1 \ldots x_k}$ instead, and similarly one replaces the trace

$\displaystyle \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x)$

by the norm

$\displaystyle \hbox{Norm}_{{\mathbb F}_n/{\mathbb F}}(x) = x \cdot \hbox{Frob}(x) \cdot \ldots \cdot \hbox{Frob}^{n-1}(x).$

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of ${\ell}$-adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to ${\ell}$-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an ${\ell}$-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.

Let ${n}$ be a natural number. We consider the question of how many “almost orthogonal” unit vectors ${v_1,\ldots,v_m}$ one can place in the Euclidean space ${{\bf R}^n}$. Of course, if we insist on ${v_1,\ldots,v_m}$ being exactly orthogonal, so that ${\langle v_i,v_j \rangle = 0}$ for all distinct ${i,j}$, then we can only pack at most ${n}$ unit vectors into this space. However, if one is willing to relax the orthogonality condition a little, so that ${\langle v_i,v_j\rangle}$ is small rather than zero, then one can pack a lot more unit vectors into ${{\bf R}^n}$, due to the important fact that pairs of vectors in high dimensions are typically almost orthogonal to each other. For instance, if one chooses ${v_i}$ uniformly and independently at random on the unit sphere, then a standard computation (based on viewing the ${v_i}$ as gaussian vectors projected onto the unit sphere) shows that each inner product ${\langle v_i,v_j \rangle}$ concentrates around the origin with standard deviation ${O(1/\sqrt{n})}$ and with gaussian tails, and a simple application of the union bound then shows that for any fixed ${K \geq 1}$, one can pack ${n^K}$ unit vectors into ${{\bf R}^n}$ whose inner products are all of size ${O( K^{1/2} n^{-1/2} \log^{1/2} n )}$.
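This concentration phenomenon is easy to observe numerically. The quick simulation below (the dimensions ${n=200}$, ${m=1000}$ are arbitrary demo choices) samples random unit vectors as normalised gaussians and inspects the largest pairwise inner product, which indeed comes out of size comparable to ${\sqrt{\log m / n}}$:

```python
# Random unit vectors in high dimensions are nearly orthogonal:
# the maximal inner product among m random unit vectors in R^n
# is typically of size O(sqrt(log m / n)).

import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 1000
V = rng.standard_normal((m, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # project gaussians to the sphere

G = V @ V.T
np.fill_diagonal(G, 0.0)                       # ignore the <v_i, v_i> = 1 diagonal
max_inner = np.abs(G).max()
print(max_inner, np.sqrt(np.log(m) / n))
```

With these parameters the maximal inner product is a small constant multiple of ${\sqrt{\log m / n} \approx 0.19}$, rather than anything close to ${1}$.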

One can remove the logarithm by using some number theoretic constructions. For instance, if ${n}$ is twice a prime ${n=2p}$, one can identify ${{\bf R}^n}$ with the space ${\ell^2({\bf F}_p)}$ of complex-valued functions ${f: {\bf F}_p \rightarrow {\bf C}}$, where ${{\bf F}_p}$ is the field of ${p}$ elements, and if one then considers the ${p^2}$ different quadratic phases ${x \mapsto \frac{1}{\sqrt{p}} e_p( ax^2 + bx )}$ for ${a,b \in {\bf F}_p}$, where ${e_p(a) := e^{2\pi i a/p}}$ is the standard character on ${{\bf F}_p}$, then a standard application of Gauss sum estimates reveals that these ${p^2}$ unit vectors in ${{\bf R}^n}$ all have inner products of magnitude at most ${p^{-1/2}}$ with each other. More generally, if we take ${d \geq 1}$ and consider the ${p^d}$ different polynomial phases ${x \mapsto \frac{1}{\sqrt{p}} e_p( a_d x^d + \ldots + a_1 x )}$ for ${a_1,\ldots,a_d \in {\bf F}_p}$, then an application of the Weil conjectures for curves, proven by Weil, shows that the inner products of the associated ${p^d}$ unit vectors with each other have magnitude at most ${(d-1) p^{-1/2}}$.
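The quadratic (${d=2}$) case of this construction can be checked in a few lines; the sketch below (with the arbitrary demo choice ${p=11}$) builds all ${p^2}$ quadratic phase vectors and confirms that the distinct pairs have inner products of complex modulus at most ${p^{-1/2}}$ (in fact, exactly ${p^{-1/2}}$ when ${a \neq a'}$, and ${0}$ when ${a = a'}$, ${b \neq b'}$):

```python
# The p^2 quadratic phases x -> e_p(a x^2 + b x)/sqrt(p) form unit vectors
# in l^2(F_p) whose pairwise inner products have modulus at most p^{-1/2},
# by Gauss sum estimates.

import numpy as np

p = 11
x = np.arange(p)
phases = np.array([np.exp(2j * np.pi * ((a * x * x + b * x) % p) / p) / np.sqrt(p)
                   for a in range(p) for b in range(p)])

G = np.abs(phases @ phases.conj().T)  # moduli of all pairwise inner products
np.fill_diagonal(G, 0.0)              # each vector has unit norm; ignore diagonal
print(G.max(), 1 / np.sqrt(p))
```

The maximum off-diagonal modulus equals ${p^{-1/2}}$ up to floating-point rounding, matching the Gauss sum bound.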

As it turns out, this construction is close to optimal, in that there is a polynomial limit to how many unit vectors one can pack into ${{\bf R}^n}$ with an inner product of ${O(1/\sqrt{n})}$:

Theorem 1 (Cheap Kabatjanskii-Levenstein bound) Let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq A n^{-1/2}}$ for some ${1/2 \leq A \leq \frac{1}{2} \sqrt{n}}$. Then we have ${m \leq (\frac{Cn}{A^2})^{C A^2}}$ for some absolute constant ${C}$.

In particular, for fixed ${d}$ and large ${p}$, the number of unit vectors one can pack in ${{\bf R}^{2p}}$ whose inner products all have magnitude at most ${dp^{-1/2}}$ will be ${O( p^{O(d^2)} )}$. This doesn’t quite match the construction coming from the Weil conjectures, although it is worth noting that the upper bound of ${(d-1)p^{-1/2}}$ for the inner product is usually not sharp (the inner product is actually ${p^{-1/2}}$ times the sum of ${d-1}$ unit phases which one expects (cf. the Sato-Tate conjecture) to be uniformly distributed on the unit circle, and so the typical inner product is actually closer to ${(d-1)^{1/2} p^{-1/2}}$).

Note that for ${0 \leq A < 1/2}$, the ${A=1/2}$ case of the above theorem (or more precisely, Lemma 2 below) gives the bound ${m=O(n)}$, which is essentially optimal as the example of an orthonormal basis shows. For ${A \geq \sqrt{n}}$, the condition ${|\langle v_i, v_j \rangle| \leq A n^{-1/2}}$ is trivially true from Cauchy-Schwarz, and ${m}$ can be arbitrarily large. Finally, in the range ${\frac{1}{2} \sqrt{n} \leq A \leq \sqrt{n}}$, we can use a volume packing argument: we have ${\|v_i-v_j\|^2 \geq 2 (1 - A n^{-1/2})}$, so if we set ${r := 2^{-1/2} (1-A n^{-1/2})^{1/2}}$, then the open balls of radius ${r}$ around each ${v_i}$ are disjoint, while all lying in a ball of radius ${O(1)}$, giving rise to the bound ${m \leq C^n (1-A n^{-1/2})^{-n/2}}$ for some absolute constant ${C}$.

As I learned recently from Philippe Michel, a more precise version of this theorem is due to Kabatjanskii and Levenstein, who studied the closely related problem of sphere packing (or more precisely, cap packing) in the unit sphere ${S^{n-1}}$ of ${{\bf R}^n}$. However, I found a short proof of the above theorem which relies on one of my favorite tricks – the tensor power trick – so I thought I would give it here.

We begin with an easy case, basically the ${A=1/2}$ case of the above theorem:

Lemma 2 Let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq \frac{1}{2n^{1/2}}}$ for all distinct ${i,j}$. Then ${m < 2n}$.

Proof: Suppose for contradiction that ${m \geq 2n}$. We consider the ${2n \times 2n}$ Gram matrix ${( \langle v_i,v_j \rangle )_{1 \leq i,j \leq 2n}}$. This matrix is real symmetric with rank at most ${n}$, and thus has the eigenvalue ${0}$ with multiplicity at least ${n}$; subtracting off the identity matrix, we obtain a matrix with an eigenvalue of ${-1}$ of multiplicity at least ${n}$. Taking Hilbert-Schmidt norms (and noting that the diagonal entries of the Gram matrix are all ${1}$), we conclude that

$\displaystyle \sum_{1 \leq i,j \leq 2n: i \neq j} |\langle v_i, v_j \rangle|^2 \geq n.$

But by hypothesis, the left-hand side is at most ${2n(2n-1) \frac{1}{4n} = n-\frac{1}{2}}$, giving the desired contradiction. $\Box$
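As an aside, the inequality underlying this proof – that any ${2n}$ unit vectors in ${{\bf R}^n}$ must have off-diagonal Gram entries with ${\sum_{i \neq j} \langle v_i,v_j\rangle^2 \geq n}$ – holds unconditionally (not just under the hypothesis of the lemma), and is easy to observe numerically; here is a quick check on random unit vectors:

```python
# For ANY 2n unit vectors v_1, ..., v_{2n} in R^n, the rank argument in
# the proof of Lemma 2 forces sum over i != j of <v_i, v_j>^2 >= n.
# Here we test this on random unit vectors.

import numpy as np

rng = np.random.default_rng(1)
n = 25
V = rng.standard_normal((2 * n, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # normalise rows to unit vectors

G = V @ V.T
np.fill_diagonal(G, 0.0)       # keep only the off-diagonal Gram entries
S = (G ** 2).sum()
print(S, n)                    # S >= n, as the rank argument guarantees
```

For random vectors ${S}$ is in fact around ${4n}$ (each of the ${\sim 4n^2}$ off-diagonal entries has mean square ${1/n}$), comfortably above the guaranteed lower bound of ${n}$.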

To amplify the above lemma to cover larger values of ${A}$, we apply the tensor power trick. A direct application of the tensor power trick does not gain very much; however one can do a lot better by using the symmetric tensor power rather than the raw tensor power. This gives

Corollary 3 Let ${k}$ be a natural number, and let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq 2^{-1/k} (\binom{n+k-1}{k})^{-1/2k}}$ for all distinct ${i,j}$. Then ${m < 2\binom{n+k-1}{k}}$.

Proof: We work in the symmetric component ${\hbox{Sym}^k {\bf R}^n}$ of the tensor power ${({\bf R}^n)^{\otimes k} \equiv {\bf R}^{n^k}}$, which has dimension ${\binom{n+k-1}{k}}$. Applying the previous lemma to the tensor powers ${v_1^{\otimes k},\ldots,v_m^{\otimes k}}$, we obtain the claim. $\Box$
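The two facts this proof relies on – that tensor powers multiply inner products, ${\langle v^{\otimes k}, w^{\otimes k}\rangle = \langle v,w\rangle^k}$, and that ${\hbox{Sym}^k {\bf R}^n}$ has dimension ${\binom{n+k-1}{k}}$ – can be sanity-checked numerically, building ${v^{\otimes k}}$ as an iterated Kronecker product:

```python
# Check <v^{(x)k}, w^{(x)k}> = <v, w>^k, and compute dim Sym^k R^n.

import numpy as np
from math import comb

rng = np.random.default_rng(2)
n, k = 4, 3
v, w = rng.standard_normal(n), rng.standard_normal(n)
v, w = v / np.linalg.norm(v), w / np.linalg.norm(w)

vk, wk = v, w
for _ in range(k - 1):  # v^{(x)k} as a flattened iterated Kronecker product
    vk, wk = np.kron(vk, v), np.kron(wk, w)

print(np.dot(vk, wk), np.dot(v, w) ** k)  # agree up to rounding
print(comb(n + k - 1, k))                 # dim Sym^3 R^4 = 20
```

Note that the raw tensor power lives in dimension ${n^k = 64}$ here, while the symmetric component (to which the ${v_i^{\otimes k}}$ belong) has the much smaller dimension ${\binom{6}{3} = 20}$, which is what makes the symmetric version of the trick so much more efficient.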

Using the trivial bound ${e^k \geq \frac{k^k}{k!}}$, we can lower bound

$\displaystyle 2^{-1/k} (\binom{n+k-1}{k})^{-1/2k} \geq 2^{-1/k} (n+k-1)^{-1/2} (k!)^{1/2k}$

$\displaystyle \geq 2^{-1/k} e^{-1/2} k^{1/2} (n+k-1)^{-1/2} .$

We can thus prove Theorem 1 by setting ${k := \lfloor C A^2 \rfloor}$ for some sufficiently large absolute constant ${C}$.

For any ${H \geq 2}$, let ${B[H]}$ denote the assertion that there are infinitely many pairs of consecutive primes ${p_n, p_{n+1}}$ whose difference ${p_{n+1}-p_n}$ is at most ${H}$, or equivalently that

$\displaystyle \liminf_{n \rightarrow \infty} p_{n+1} - p_n \leq H;$

thus for instance ${B[2]}$ is the notorious twin prime conjecture. While this conjecture remains unsolved, we have the following recent breakthrough result of Zhang, building upon earlier work of Goldston-Pintz-Yildirim, Bombieri, Fouvry, Friedlander, and Iwaniec, and others:

Theorem 1 (Zhang’s theorem) ${B[H]}$ is true for some finite ${H}$.

In fact, Zhang’s paper shows that ${B[H]}$ is true with ${H = 70,000,000}$.

About a month ago, the Polymath8 project was launched with the objective of reading through Zhang’s paper, clarifying the arguments, and then making them more efficient, in order to improve the value of ${H}$. This project is still ongoing, but we have made significant progress; currently, we have confirmed that ${B[H]}$ holds for ${H}$ as low as ${12,006}$, and provisionally for ${H}$ as low as ${6,966}$ subject to certain lengthy arguments being checked. For several reasons, our methods (which are largely based on Zhang’s original argument structure, though with numerous refinements and improvements) will not be able to attain the twin prime conjecture ${B[2]}$, but there is still scope to lower the value of ${H}$ a bit further than what we have currently.

The precise arguments here are quite technical, and are discussed at length on other posts on this blog. In this post, I would like to give a “high level” summary of how Zhang’s argument works, and give some impressions of the improvements we have made so far; these would already be familiar to the active participants of the Polymath8 project, but perhaps may be of value to people who are following this project on a more casual basis.

While Zhang’s arguments (and our refinements of it) are quite lengthy, they are fortunately also very modular, that is to say they can be broken up into several independent components that can be understood and optimised more or less separately from each other (although we have on occasion needed to modify the formulation of one component in order to better suit the needs of another). At the top level, Zhang’s argument looks like this:

1. Statements of the form ${B[H]}$ are deduced from weakened versions of the Hardy-Littlewood prime tuples conjecture, which we have denoted ${DHL[k_0,2]}$ (the ${DHL}$ stands for “Dickson-Hardy-Littlewood”), by locating suitable narrow admissible tuples (see below). Zhang’s paper establishes for the first time an unconditional proof of ${DHL[k_0,2]}$ for some finite ${k_0}$; in his initial paper, ${k_0}$ was ${3,500,000}$, but we have lowered this value to ${1,466}$ (and provisionally to ${902}$). Any reduction in the value of ${k_0}$ leads directly to reductions in the value of ${H}$; a web site to collect the best known values of ${H}$ in terms of ${k_0}$ has recently been set up here (and is accepting submissions for anyone who finds narrower admissible tuples than are currently known).
2. Next, by adapting sieve-theoretic arguments of Goldston, Pintz, and Yildirim, the Dickson-Hardy-Littlewood type assertions ${DHL[k_0,2]}$ are deduced in turn from weakened versions of the Elliott-Halberstam conjecture that we have denoted ${MPZ[\varpi,\delta]}$ (the ${MPZ}$ stands for “Motohashi-Pintz-Zhang”). More recently, we have replaced the conjecture ${MPZ[\varpi,\delta]}$ by a slightly stronger conjecture ${MPZ'[\varpi,\delta]}$ to significantly improve the efficiency of this step (using some recent ideas of Pintz). Roughly speaking, these statements assert that the primes are more or less evenly distributed along many arithmetic progressions, including those that have relatively large spacing. A crucial technical fact here is that in contrast to the older Elliott-Halberstam conjecture, the Motohashi-Pintz-Zhang estimates only require one to control progressions whose spacings ${q}$ have a lot of small prime factors (the original ${MPZ[\varpi,\delta]}$ conjecture requires the spacing ${q}$ to be smooth, but the newer variant ${MPZ'[\varpi,\delta]}$ has relaxed this to “densely divisible” as this turns out to be more efficient). The ${\varpi}$ parameter is more important than the technical parameter ${\delta}$; we would like ${\varpi}$ to be as large as possible, as any increase in this parameter should lead to a reduced value of ${k_0}$. In Zhang’s original paper, ${\varpi}$ was taken to be ${1/1168}$; we have now increased this to be almost as large as ${1/148}$ (and provisionally ${1/108}$).
3. By a certain amount of combinatorial manipulation (combined with a useful decomposition of the von Mangoldt function due to Heath-Brown), estimates such as ${MPZ[\varpi,\delta]}$ can be deduced from three subestimates, the “Type I” estimate ${Type_I[\varpi,\delta,\sigma]}$, the “Type II” estimate ${Type_{II}[\varpi,\delta]}$, and the “Type III” estimate ${Type_{III}[\varpi,\delta,\sigma]}$, which all involve the distribution of certain Dirichlet convolutions in arithmetic progressions. Here ${1/10 < \sigma < 1/2}$ is an adjustable parameter that demarcates the border between the Type I and Type III estimates; raising ${\sigma}$ makes it easier to prove Type III estimates but harder to prove Type I estimates, and lowering ${\sigma}$ of course has the opposite effect. There is a combinatorial lemma that asserts that as long as one can find some ${\sigma}$ between ${1/10}$ and ${1/2}$ for which all three estimates ${Type_I[\varpi,\delta,\sigma]}$, ${Type_{II}[\varpi,\delta]}$, ${Type_{III}[\varpi,\delta,\sigma]}$ hold, one can prove ${MPZ[\varpi,\delta]}$. (The condition ${\sigma > 1/10}$ arises from the combinatorics, and appears to be rather essential; in fact, it is currently a major obstacle to further improvement of ${\varpi}$ and hence ${k_0}$ and ${H}$.)
4. The Type I estimates ${Type_I[\varpi,\delta,\sigma]}$ are asserting good distribution properties of convolutions of the form ${\alpha * \beta}$, where ${\alpha,\beta}$ are moderately long sequences which have controlled magnitude and length but are otherwise arbitrary. Estimates that are roughly of this type first appeared in a series of papers by Bombieri, Fouvry, Friedlander, Iwaniec, and other authors, and Zhang’s arguments here broadly follow those of previous authors, but with several new twists that take advantage of the many factors of the spacing ${q}$. In particular, the dispersion method of Linnik is used (which one can think of as a clever application of the Cauchy-Schwarz inequality) to ultimately reduce matters (after more Cauchy-Schwarz, as well as treatment of several error terms) to estimation of incomplete Kloosterman-type sums such as

$\displaystyle \sum_{n \leq N} e_d( \frac{c}{n} ).$

Zhang’s argument uses classical estimates on this Kloosterman sum (dating back to the work of Weil), but we have improved this using the “${q}$-van der Corput ${A}$-process” introduced by Heath-Brown and Ringrose.

5. The Type II estimates ${Type_{II}[\varpi,\delta]}$ are similar to the Type I estimates, but cover a small hole in the coverage of the Type I estimates which comes up when the two sequences ${\alpha,\beta}$ are almost equal in length. It turns out that one can modify the Type I argument to cover this case also. In practice, these estimates give less stringent conditions on ${\varpi,\delta}$ than the other two estimates, and so as a first approximation one can ignore the need to treat these estimates, although recently our Type I and Type III estimates have become so strong that it has become necessary to tighten the Type II estimates as well.
6. The Type III estimates ${Type_{III}[\varpi,\delta,\sigma]}$ are an averaged variant of the classical problem of understanding the distribution of the ternary divisor function ${\tau_3(n) := \sum_{abc=n} 1}$ in arithmetic progressions. There are various ways to attack this problem, but most of them ultimately boil down (after the use of standard devices such as Cauchy-Schwarz and completion of sums) to the task of controlling certain higher-dimensional Kloosterman-type sums such as

$\displaystyle \sum_{t,t' \in ({\bf Z}/d{\bf Z})^\times} \sum_{l \in {\bf Z}/d{\bf Z}: (l,d)=(l+k,d)=1} e_d( \frac{t}{l} - \frac{t'}{l+k} + \frac{m}{t} - \frac{m'}{t'} ).$

In principle, any such sum can be controlled by invoking Deligne’s proof of the Weil conjectures in arbitrary dimension (which, roughly speaking, establishes the analogue of the Riemann hypothesis for arbitrary varieties over finite fields), although in the higher dimensional setting some algebraic geometry is needed to ensure that one gets the full “square root cancellation” for these exponential sums. (For the particular sum above, the necessary details were worked out by Birch and Bombieri.) As such, this part of the argument is by far the least elementary component of the whole. Zhang’s original argument cleverly exploited some additional cancellation in the above exponential sums that goes beyond the naive square root cancellation heuristic; more recently, an alternate argument of Fouvry, Kowalski, Michel, and Nelson uses bounds on a slightly different higher-dimensional Kloosterman-type sum to obtain results that give better values of ${\varpi,\delta,\sigma}$. We have also been able to improve upon these estimates by exploiting some additional averaging that was left unused by the previous arguments.
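As a tangible (and of course much simpler) illustration of the square root cancellation that all of these estimates revolve around, one can numerically examine an incomplete Kloosterman-type sum of the shape ${\sum_{n \leq N} e_d(c/n)}$ from the Type I discussion. The trivial bound is ${N}$, while completion of the sum plus the Weil bound gives roughly ${\sqrt{d} \log d}$; the parameters below are demo choices of my own:

```python
# Square-root cancellation in the incomplete sum S = sum_{n <= N} e_d(c/n):
# the trivial bound is N, while completion + Weil gives about sqrt(d)*log(d).

from cmath import exp, pi
from math import sqrt, log

d, c = 1009, 5          # d prime, so every 1 <= n < d is invertible mod d
N = d // 2

S = sum(exp(2j * pi * ((c * pow(n, d - 2, d)) % d) / d) for n in range(1, N + 1))
print(abs(S), N, sqrt(d) * log(d))
```

In practice ${|S|}$ comes out around ${\sqrt{N}}$, far below both the trivial bound ${N = 504}$ and the completed-sum bound ${\sqrt{d}\log d \approx 220}$.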

As of this time of writing, our understanding of the first three stages of Zhang’s argument (getting from ${DHL[k_0,2]}$ to ${B[H]}$, getting from ${MPZ[\varpi,\delta]}$ or ${MPZ'[\varpi,\delta]}$ to ${DHL[k_0,2]}$, and getting to ${MPZ[\varpi,\delta]}$ or ${MPZ'[\varpi,\delta]}$ from Type I, Type II, and Type III estimates) is quite satisfactory, with the implications here being about as efficient as one could hope for with current methods, although one could still hope to get some small improvements in parameters by wringing out some of the last few inefficiencies. The remaining major sources of improvements to the parameters are then coming from gains in the Type I, II, and III estimates; we are currently in the process of making such improvements, but it will still take some time before they are fully optimised.

Below the fold I will discuss (mostly at an informal, non-rigorous level) the six steps above in a little more detail (full details can of course be found in the other polymath8 posts on this blog). This post will also serve as a new research thread, as the previous threads were getting quite lengthy.

This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project to improve the various parameters in Zhang’s proof that bounded gaps between primes occur infinitely often. Given that the comments on that page are getting quite lengthy, this is also a good opportunity to “roll over” that thread.

We will continue the notation from the previous post, including the concept of an admissible tuple, the use of an asymptotic parameter ${x}$ going to infinity, a quantity ${w}$ depending on ${x}$ that goes to infinity sufficiently slowly with ${x}$, and ${W := \prod_{p < w} p}$ (the ${W}$-trick).

The objective of this portion of the Polymath8 project is to make as efficient as possible the connection between two types of results, which we call ${DHL[k_0,2]}$ and ${MPZ[\varpi,\delta]}$. Let us first state ${DHL[k_0,2]}$, which has an integer parameter ${k_0 \geq 2}$:

Conjecture 1 (${DHL[k_0,2]}$) Let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then there are infinitely many translates ${n+{\mathcal H}}$ of ${{\mathcal H}}$ which contain at least two primes.

Zhang was the first to prove a result of this type with ${k_0 = 3,500,000}$. Since then the value of ${k_0}$ has been lowered substantially; at this time of writing, the current record is ${k_0 = 26,024}$.

There are two basic ways known currently to attain this conjecture. The first is to use the Elliott-Halberstam conjecture ${EH[\theta]}$ for some ${\theta>1/2}$:

Conjecture 2 (${EH[\theta]}$) One has

$\displaystyle \sum_{1 \leq q \leq x^\theta} \sup_{a \in ({\bf Z}/q{\bf Z})^\times} |\sum_{n < x: n = a\ (q)} \Lambda(n) - \frac{1}{\phi(q)} \sum_{n < x} \Lambda(n)|$

$\displaystyle = O( \frac{x}{\log^A x} )$

for all fixed ${A>0}$. Here we use the abbreviation ${n=a\ (q)}$ for ${n=a \hbox{ mod } q}$.

Here of course ${\Lambda}$ is the von Mangoldt function and ${\phi}$ the Euler totient function. It is conjectured that ${EH[\theta]}$ holds for all ${0 < \theta < 1}$, but this is currently only known for ${0 < \theta < 1/2}$, an important result known as the Bombieri-Vinogradov theorem.
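To get a concrete feel for the discrepancy being averaged in ${EH[\theta]}$, one can tabulate ${\Lambda}$ numerically and inspect a single small modulus. The following Python sketch is purely illustrative (the choices ${x = 10^5}$ and ${q = 7}$ are arbitrary and play no role in the actual argument); since ${7}$ is prime, ${\phi(7) = 6}$:

```python
import math

def mangoldt_table(x):
    """Tabulate the von Mangoldt function Lambda(n) for 0 <= n <= x."""
    lam = [0.0] * (x + 1)
    sieve = [True] * (x + 1)
    for p in range(2, x + 1):
        if sieve[p]:
            for m in range(2 * p, x + 1, p):
                sieve[m] = False
            pk = p
            while pk <= x:              # Lambda(p^k) = log p for all prime powers
                lam[pk] = math.log(p)
                pk *= p
    return lam

x, q = 100_000, 7
lam = mangoldt_table(x)
psi = sum(lam[1:])                       # psi(x) = sum_{n <= x} Lambda(n)
psi_class = [sum(lam[n] for n in range(a, x + 1, q)) for a in range(q)]

# Worst-case discrepancy over the primitive residue classes a mod q
# (phi(q) = q - 1 as q is prime)
E = max(abs(psi_class[a] - psi / (q - 1)) for a in range(1, q))
print(psi, E, E / x)
```

The discrepancy comes out far smaller than the trivial bound ${\psi(x)/\phi(q)}$, in line with the expectation that primes equidistribute among primitive residue classes.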

In a breakthrough paper, Goldston, Yildirim, and Pintz established an implication of the form

$\displaystyle EH[\theta] \implies DHL[k_0,2] \ \ \ \ \ (1)$

for any ${1/2 < \theta < 1}$, where ${k_0 = k_0(\theta)}$ depends on ${\theta}$. This deduction was very recently optimised by Farkas, Pintz, and Revesz and also independently in the comments to the previous blog post, leading to the following implication:

Theorem 3 (EH implies DHL) Let ${1/2 < \theta < 1}$ be a real number, and let ${k_0 \geq 2}$ be an integer obeying the inequality

$\displaystyle 2\theta > \frac{j_{k_0-2}^2}{k_0(k_0-1)}, \ \ \ \ \ (2)$

where ${j_n}$ is the first positive zero of the Bessel function ${J_n(x)}$. Then ${EH[\theta]}$ implies ${DHL[k_0,2]}$.

Note that the right-hand side of (2) is larger than ${1}$, but tends asymptotically to ${1}$ as ${k_0 \rightarrow \infty}$. We give an alternate proof of Theorem 3 below the fold.
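Since condition (2) involves zeros of Bessel functions, it may help to see the numerology concretely. Here is a small pure-Python sketch (a truncated power series plus bisection, adequate only for small ${k_0}$; in practice one would use a library routine) computing the minimal ${\theta}$ permitted by (2):

```python
import math

def bessel_j(n, x, terms=80):
    """J_n(x) via its power series; adequate for the small n, moderate x used here."""
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

def first_zero(n, step=0.05, tol=1e-9):
    """First positive zero j_n of J_n: scan for a sign change, then bisect."""
    a = step
    while not (bessel_j(n, a) > 0 > bessel_j(n, a + step)):
        a += step
    b = a + step
    while b - a > tol:
        mid = (a + b) / 2
        a, b = (mid, b) if bessel_j(n, mid) > 0 else (a, mid)
    return (a + b) / 2

def eh_threshold(k0):
    """Smallest theta allowed by (2): one needs 2*theta > j_{k0-2}^2 / (k0 (k0-1))."""
    j = first_zero(k0 - 2)
    return j * j / (2 * k0 * (k0 - 1))

print(first_zero(0))     # ~2.404826, the classical first zero of J_0
print(eh_threshold(6))   # ~0.9597: EH[theta] for theta above this gives DHL[6,2]
```

For instance ${k_0 = 6}$ requires ${\theta > j_4^2/60 \approx 0.96}$, illustrating how the threshold decreases towards ${1/2}$ only as ${k_0}$ grows large.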

The implication in Theorem 3 was modified by Motohashi and Pintz, who in our notation replace ${EH[\theta]}$ by an easier conjecture ${MPZ[\varpi,\delta]}$ for some ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4+\varpi}$, at the cost of slightly degrading the sufficient condition (2). In our notation, this conjecture takes the following form for each choice of parameters ${\varpi,\delta}$:

Conjecture 4 (${MPZ[\varpi,\delta]}$) Let ${{\mathcal H}}$ be a fixed ${k_0}$-tuple (not necessarily admissible) for some fixed ${k_0 \geq 2}$, and let ${b\ (W)}$ be a primitive residue class. Then

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} \sum_{a \in C(q)} |\Delta_{b,W}(\Lambda; q,a)| = O( x \log^{-A} x) \ \ \ \ \ (3)$

for any fixed ${A>0}$, where ${I = (w,x^{\delta})}$, ${{\mathcal S}_I}$ are the square-free integers whose prime factors lie in ${I}$, and ${\Delta_{b,W}(\Lambda;q,a)}$ is the quantity

$\displaystyle \Delta_{b,W}(\Lambda;q,a) := | \sum_{x \leq n \leq 2x: n=b\ (W); n = a\ (q)} \Lambda(n) \ \ \ \ \ (4)$

$\displaystyle - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x: n = b\ (W)} \Lambda(n)|,$

and ${C(q)}$ is the set of congruence classes

$\displaystyle C(q) := \{ a \in ({\bf Z}/q{\bf Z})^\times: P(a) = 0 \}$

and ${P}$ is the polynomial

$\displaystyle P(a) := \prod_{h \in {\mathcal H}} (a+h).$
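To make the set ${C(q)}$ concrete, one can enumerate it by brute force for small moduli. The following illustrative Python computation (with the toy choice ${{\mathcal H} = (0,2,6)}$, so ${k_0 = 3}$) exhibits the multiplicative bound ${|C(q)| \leq k_0^{\Omega(q)}}$ in a small case:

```python
from math import gcd

def residue_classes_C(q, H):
    """Brute-force C(q) = { a in (Z/qZ)^* : prod_{h in H} (a+h) == 0 mod q }."""
    out = []
    for a in range(q):
        if gcd(a, q) != 1:
            continue
        P = 1
        for h in H:
            P = (P * (a + h)) % q
        if P == 0:
            out.append(a)
    return out

H = (0, 2, 6)                         # admissible 3-tuple, k0 = 3
print(residue_classes_C(5, H))        # [3, 4]
print(residue_classes_C(7, H))        # [1, 5]
print(len(residue_classes_C(35, H))) # 4, consistent with |C(35)| <= 3^2 via CRT
```

Note that by the Chinese remainder theorem ${|C(35)| = |C(5)| \cdot |C(7)| = 4}$, comfortably below ${k_0^{\Omega(35)} = 9}$.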

This is a weakened version of the Elliott-Halberstam conjecture:

Proposition 5 (EH implies MPZ) Let ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4+\varpi}$. Then ${EH[1/2+2\varpi+\epsilon]}$ implies ${MPZ[\varpi,\delta]}$ for any ${\epsilon>0}$. (In abbreviated form: ${EH[1/2+2\varpi+]}$ implies ${MPZ[\varpi,\delta]}$.)

In particular, since ${EH[\theta]}$ is conjecturally true for all ${0 < \theta < 1/2}$, we conjecture ${MPZ[\varpi,\delta]}$ to be true for all ${0 < \varpi < 1/4}$ and ${0<\delta<1/4+\varpi}$.

Proof: Define

$\displaystyle E(q) := \sup_{a \in ({\bf Z}/q{\bf Z})^\times} |\sum_{x \leq n \leq 2x: n = a\ (q)} \Lambda(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} \Lambda(n)|$

then the hypothesis ${EH[1/2+2\varpi+\epsilon]}$ (applied to ${x}$ and ${2x}$ and then subtracting) tells us that

$\displaystyle \sum_{1 \leq q \leq Wx^{1/2+2\varpi}} E(q) \ll x \log^{-A} x$

for any fixed ${A>0}$. From the Chinese remainder theorem and the Siegel-Walfisz theorem we have

$\displaystyle \sup_{a \in ({\bf Z}/q{\bf Z})^\times} \Delta_{b,W}(\Lambda;q,a) \ll E(qW) + \frac{1}{\phi(q)} x \log^{-A} x$

for any ${q}$ coprime to ${W}$ (and in particular for ${q \in {\mathcal S}_I}$). Since ${|C(q)| \leq k_0^{\Omega(q)}}$, where ${\Omega(q)}$ is the number of prime divisors of ${q}$, we can thus bound the left-hand side of (3) by

$\displaystyle \ll \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} k_0^{\Omega(q)} E(qW) + k_0^{\Omega(q)} \frac{1}{\phi(q)} x \log^{-A} x.$

The contribution of the second term is ${O(x \log^{-A+O(1)} x)}$ by standard estimates (see Proposition 8 below). Using the very crude bound

$\displaystyle E(q) \ll \frac{1}{\phi(q)} x \log x$

and standard estimates we also have

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} k_0^{2\Omega(q)} E(qW) \ll x \log^{O(1)} x$

and the claim now follows from the Cauchy-Schwarz inequality. $\Box$

In practice, the conjecture ${MPZ[\varpi,\delta]}$ is easier to prove than ${EH[1/2+2\varpi+]}$ due to the restriction of the residue classes ${a}$ to ${C(q)}$, and also the restriction of the modulus ${q}$ to ${x^\delta}$-smooth numbers. Zhang proved ${MPZ[\varpi,\varpi]}$ for any ${0 < \varpi < 1/1168}$. More recently, our Polymath8 group has analysed Zhang’s argument (using in part a corrected version of the analysis of a recent preprint of Pintz) to obtain ${MPZ[\varpi,\delta]}$ whenever ${\delta, \varpi > 0}$ are such that

$\displaystyle 207\varpi + 43\delta < \frac{1}{4}.$

The work of Motohashi and Pintz, and later of Zhang, implicitly describes arguments that allow one to deduce ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$ provided that ${k_0}$ is sufficiently large depending on ${\varpi,\delta}$. The best implication of this sort that we have been able to verify thus far is the following result, established in the previous post:

Theorem 6 (MPZ implies DHL) Let ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4+\varpi}$, and let ${k_0 \geq 2}$ be an integer obeying the constraint

$\displaystyle 1+4\varpi > \frac{j_{k_0-2}^2}{k_0(k_0-1)} (1+\kappa) \ \ \ \ \ (5)$

where ${\kappa}$ is the quantity

$\displaystyle \kappa := \sum_{1 \leq n < \frac{1+4\varpi}{2\delta}} (1 - \frac{2n \delta}{1 + 4\varpi})^{k_0/2} \prod_{j=1}^{n} (1 + 3k_0 \log(1+\frac{1}{j})).$

Then ${MPZ[\varpi,\delta]}$ implies ${DHL[k_0,2]}$.

This complicated version of ${\kappa}$ is roughly of size ${3 \log(2) k_0 \exp( - k_0 \delta)}$. It is unlikely to be optimal; the work of Motohashi-Pintz and Pintz suggests that it can essentially be improved to ${\frac{1}{\delta} \exp(-k_0 \delta)}$, but currently we are unable to verify this claim. One of the aims of this post is to encourage further discussion as to how to improve the ${\kappa}$ term in results such as Theorem 6.
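Though the formula for ${\kappa}$ is complicated, it is trivial to evaluate numerically. The following Python sketch (the parameter values are arbitrary illustrative choices, not the optimised ones) computes ${\kappa}$ directly from the definition and prints the rough heuristic ${3 \log(2) k_0 \exp(-k_0 \delta)}$ alongside for comparison; for these values the two agree in order of magnitude:

```python
import math

def kappa(k0, varpi, delta):
    """The quantity kappa appearing in (5), computed from its definition."""
    total, prod, n = 0.0, 1.0, 1
    while n < (1 + 4 * varpi) / (2 * delta):
        prod *= 1 + 3 * k0 * math.log(1 + 1 / n)          # factor j = n of the product
        base = max(0.0, 1 - 2 * n * delta / (1 + 4 * varpi))  # guard against float edge cases
        total += base ** (k0 / 2) * prod
        n += 1
    return total

k0, varpi, delta = 1000, 0.01, 0.02                   # illustrative values only
print(kappa(k0, varpi, delta))
print(3 * math.log(2) * k0 * math.exp(-k0 * delta))   # the rough heuristic size
```

In particular ${\kappa}$ decays rapidly as ${k_0 \delta}$ grows, which is what makes condition (5) only slightly worse than (2) in practice.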

We remark that as (5) is an open condition, it is unaffected by infinitesimal modifications to ${\varpi,\delta}$, and so we do not ascribe much importance to such modifications (e.g. replacing ${\varpi}$ by ${\varpi-\epsilon}$ for some arbitrarily small ${\epsilon>0}$).

The known deductions of ${DHL[k_0,2]}$ from claims such as ${EH[\theta]}$ or ${MPZ[\varpi,\delta]}$ rely on the following elementary observation of Goldston, Pintz, and Yildirim (essentially a weighted pigeonhole principle), which we have placed in “${W}$-tricked form”:

Lemma 7 (Criterion for DHL) Let ${k_0 \geq 2}$. Suppose that for each fixed admissible ${k_0}$-tuple ${{\mathcal H}}$ and each congruence class ${b\ (W)}$ such that ${b+h}$ is coprime to ${W}$ for all ${h \in {\mathcal H}}$, one can find a non-negative weight function ${\nu: {\bf N} \rightarrow {\bf R}^+}$, fixed quantities ${\alpha,\beta > 0}$, a quantity ${A>0}$, and a fixed positive power ${R}$ of ${x}$ such that one has the upper bound

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) \leq (\alpha+o(1)) A\frac{x}{W}, \ \ \ \ \ (6)$

the lower bound

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) \theta(n+h_i) \geq (\beta-o(1)) A\frac{x}{W} \log R \ \ \ \ \ (7)$

for all ${h_i \in {\mathcal H}}$, and the key inequality

$\displaystyle \frac{\log R}{\log x} > \frac{1}{k_0} \frac{\alpha}{\beta} \ \ \ \ \ (8)$

holds. Then ${DHL[k_0,2]}$ holds. Here ${\theta(n)}$ is defined to equal ${\log n}$ when ${n}$ is prime and ${0}$ otherwise.

Proof: Consider the quantity

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) (\sum_{h \in {\mathcal H}} \theta(n+h) - \log(3x)). \ \ \ \ \ (9)$

By (6), (7), this quantity is at least

$\displaystyle k_0 \beta A\frac{x}{W} \log R - \alpha \log(3x) A\frac{x}{W} - o(A\frac{x}{W} \log x).$

By (8), this expression is positive for all sufficiently large ${x}$. On the other hand, (9) can only be positive if at least one summand is positive, which only can happen when ${n+{\mathcal H}}$ contains at least two primes for some ${x \leq n \leq 2x}$ with ${n=b\ (W)}$. Letting ${x \rightarrow \infty}$ we obtain ${DHL[k_0,2]}$ as claimed. $\Box$

In practice, the quantity ${R}$ (referred to as the sieve level) is a power of ${x}$ such as ${x^{\theta/2}}$ or ${x^{1/4+\varpi}}$, and reflects the strength of the distribution hypothesis ${EH[\theta]}$ or ${MPZ[\varpi,\delta]}$ that is available; the quantity ${R}$ will also be a key parameter in the definition of the sieve weight ${\nu}$. The factor ${A}$ reflects the order of magnitude of the expected density of ${\nu}$ in the residue class ${b\ (W)}$; it could be absorbed into the sieve weight ${\nu}$ by dividing that weight by ${A}$, but it is convenient to not enforce such a normalisation so as not to clutter up the formulae. In practice, ${A}$ will be some combination of ${\frac{\phi(W)}{W}}$ and ${\log R}$.

Once one has decided to rely on Lemma 7, the next main task is to select a good weight ${\nu}$ for which the ratio ${\alpha/\beta}$ is as small as possible (and for which the sieve level ${R}$ is as large as possible). To ensure non-negativity, we use the Selberg sieve

$\displaystyle \nu = \lambda^2, \ \ \ \ \ (10)$

where ${\lambda(n)}$ takes the form

$\displaystyle \lambda(n) = \sum_{d \in {\mathcal S}_I: d|P(n)} \mu(d) a_d$

for some weights ${a_d \in {\bf R}}$ vanishing for ${d>R}$ that are to be chosen, where ${I \subset (w,+\infty)}$ is an interval and ${P}$ is the polynomial ${P(n) := \prod_{h \in {\mathcal H}} (n+h)}$. If the distribution hypothesis is ${EH[\theta]}$, one takes ${R := x^{\theta/2}}$ and ${I := (w,+\infty)}$; if the distribution hypothesis is instead ${MPZ[\varpi,\delta]}$, one takes ${R := x^{1/4+\varpi}}$ and ${I := (w,x^\delta)}$.
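For concreteness, here is a toy Python computation of the Selberg sieve weight ${\lambda(n)}$ with the GPY-style choice ${a_d = (\log \frac{R}{d})_+^{k_0+\ell_0}}$ discussed below. All parameters (${{\mathcal H} = (0,2)}$, ${R = 30}$, ${w = 3}$, ${\ell_0 = 1}$) are made-up small values for illustration only, far from the ranges used in the actual argument; note also that only primes in ${(w, R]}$ can divide a modulus ${d \leq R}$, which is why the prime list is truncated at ${R}$:

```python
import math

H, k0, l0 = (0, 2), 2, 1        # toy tuple and parameters
w, R = 3, 30.0                  # toy W-trick threshold and sieve level
I_primes = [p for p in range(w + 1, int(R) + 1)
            if all(p % q for q in range(2, p))]          # primes in (w, R]

def squarefree_smooth_divisors(N, R):
    """Divisors d of N lying in S_I with d <= R (a_d vanishes for d > R)."""
    divs = [1]
    for p in I_primes:
        if N % p == 0:
            divs += [d * p for d in divs if d * p <= R]
    return sorted(divs)

def mu(d):                       # Moebius function on these squarefree d
    return 1 if d == 1 else (-1) ** sum(1 for p in I_primes if d % p == 0)

def lam(n):
    P = math.prod(n + h for h in H)
    return sum(mu(d) * math.log(R / d) ** (k0 + l0)
               for d in squarefree_smooth_divisors(P, R))

print(squarefree_smooth_divisors(9 * 11, R))   # [1, 11]
print(lam(9))                                  # about 38.34
```

The weight ${\nu(n) = \lambda(n)^2}$ is then manifestly non-negative, which is the whole point of the Selberg construction.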

One has a useful amount of flexibility in selecting the weights ${a_d}$ for the Selberg sieve. In the original work of Goldston, Pintz, and Yildirim, as well as in the subsequent paper of Zhang, the choice

$\displaystyle a_d := (\log \frac{R}{d})_+^{k_0+\ell_0}$

is used for some additional parameter ${\ell_0 > 0}$ to be optimised over. More generally, one can take

$\displaystyle a_d := g( \frac{\log d}{\log R} )$

for some suitable (in particular, sufficiently smooth) cutoff function ${g: {\bf R} \rightarrow {\bf R}}$. We will refer to this choice of sieve weights as the “analytic Selberg sieve”; this is the choice used in the analysis in the previous post.

However, there is a slight variant choice of sieve weights that one can use, which I will call the “elementary Selberg sieve”, and it takes the form

$\displaystyle a_d := \frac{1}{\Phi(d) \Delta(d)} \sum_{q \in {\mathcal S}_I: (q,d)=1} \frac{1}{\Phi(q)} f'( \frac{\log dq}{\log R}) \ \ \ \ \ (11)$

for a sufficiently smooth function ${f: {\bf R} \rightarrow {\bf R}}$, where

$\displaystyle \Phi(d) := \prod_{p|d} \frac{p-k_0}{k_0}$

for ${d \in {\mathcal S}_I}$ is a ${k_0}$-variant of the Euler totient function, and

$\displaystyle \Delta(d) := \prod_{p|d} \frac{k_0}{p} = \frac{k_0^{\Omega(d)}}{d}$

for ${d \in {\mathcal S}_I}$ is a ${k_0}$-variant of the function ${1/d}$. (The derivative on the ${f}$ cutoff is convenient for computations, as will be made clearer later in this post.) This choice of weights ${a_d}$ may seem somewhat arbitrary, but it arises naturally when considering how to optimise the quadratic form

$\displaystyle \sum_{d_1,d_2 \in {\mathcal S}_I} \mu(d_1) a_{d_1} \mu(d_2) a_{d_2} \Delta([d_1,d_2])$

(which arises naturally in the estimation of ${\alpha}$ in (6)) subject to a fixed value of ${a_1}$ (which morally is associated to the estimation of ${\beta}$ in (7)); this is discussed in any sieve theory text as part of the general theory of the Selberg sieve, e.g. Friedlander-Iwaniec.

The use of the elementary Selberg sieve for the bounded prime gaps problem was studied by Motohashi and Pintz. Their arguments give an alternate derivation of ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$ for ${k_0}$ sufficiently large, although unfortunately we were not able to confirm some of their calculations regarding the precise dependence of ${k_0}$ on ${\varpi,\delta}$, and in particular we have not yet been able to improve upon the specific criterion in Theorem 6 using the elementary sieve. However it is quite plausible that such improvements could become available with additional arguments.

Below the fold we describe how the elementary Selberg sieve can be used to reprove Theorem 3, and discuss how it could potentially be used to improve upon Theorem 6. (But the elementary Selberg sieve and the analytic Selberg sieve are in any event closely related; see the appendix of this paper of mine with Ben Green for some further discussion.) For the purposes of polymath8, either developing the elementary Selberg sieve or continuing the analysis of the analytic Selberg sieve from the previous post would be a relevant topic of conversation in the comments to this post.

Suppose one is given a ${k_0}$-tuple ${{\mathcal H} = (h_1,\ldots,h_{k_0})}$ of ${k_0}$ distinct integers for some ${k_0 \geq 1}$, arranged in increasing order. When is it possible to find infinitely many translates ${n + {\mathcal H} =(n+h_1,\ldots,n+h_{k_0})}$ of ${{\mathcal H}}$ which consists entirely of primes? The case ${k_0=1}$ is just Euclid’s theorem on the infinitude of primes, but the case ${k_0=2}$ is already open in general, with the ${{\mathcal H} = (0,2)}$ case being the notorious twin prime conjecture.

On the other hand, there are some tuples ${{\mathcal H}}$ for which one can easily answer the above question in the negative. For instance, the only translate of ${(0,1)}$ that consists entirely of primes is ${(2,3)}$, basically because each translate of ${(0,1)}$ must contain an even number, and the only even prime is ${2}$. More generally, if there is a prime ${p}$ such that ${{\mathcal H}}$ meets each of the ${p}$ residue classes ${0 \hbox{ mod } p, 1 \hbox{ mod } p, \ldots, p-1 \hbox{ mod } p}$, then every translate of ${{\mathcal H}}$ contains at least one multiple of ${p}$; since ${p}$ is the only multiple of ${p}$ that is prime, this shows that there are only finitely many translates of ${{\mathcal H}}$ that consist entirely of primes.

To avoid this obstruction, let us call a ${k_0}$-tuple ${{\mathcal H}}$ admissible if it avoids at least one residue class ${\hbox{ mod } p}$ for each prime ${p}$. It is easy to check for admissibility in practice, since a ${k_0}$-tuple is automatically admissible in every prime ${p}$ larger than ${k_0}$, so one only needs to check a finite number of primes in order to decide on the admissibility of a given tuple. For instance, ${(0,2)}$ or ${(0,2,6)}$ are admissible, but ${(0,2,4)}$ is not (because it covers all the residue classes modulo ${3}$). We then have the famous Hardy-Littlewood prime tuples conjecture:
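The admissibility test just described is easy to code up; as noted above, only primes ${p \leq k_0}$ need to be examined. A minimal Python sketch:

```python
def is_admissible(H):
    """H is admissible iff it omits at least one residue class mod p for every
    prime p. Only primes p <= len(H) can obstruct this, since a k0-tuple
    occupies at most k0 residue classes mod p."""
    k0 = len(H)
    for p in range(2, k0 + 1):
        if all(p % q for q in range(2, p)):           # p is prime (trial division)
            if len({h % p for h in H}) == p:          # H covers every class mod p
                return False
    return True

print(is_admissible((0, 2)))       # True
print(is_admissible((0, 2, 6)))    # True
print(is_admissible((0, 2, 4)))    # False: covers all residue classes mod 3
```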

Conjecture 1 (Prime tuples conjecture, qualitative form) If ${{\mathcal H}}$ is an admissible ${k_0}$-tuple, then there exist infinitely many translates of ${{\mathcal H}}$ that consist entirely of primes.

This conjecture is extremely difficult (containing the twin prime conjecture, for instance, as a special case), and in fact there is no explicitly known example of an admissible ${k_0}$-tuple with ${k_0 \geq 2}$ for which we can verify this conjecture (although, thanks to the recent work of Zhang, we know that ${(0,d)}$ satisfies the conclusion of the prime tuples conjecture for some ${0 < d < 70,000,000}$, even if we can’t yet say what the precise value of ${d}$ is).

Actually, Hardy and Littlewood conjectured a more precise version of Conjecture 1. Given an admissible ${k_0}$-tuple ${{\mathcal H} = (h_1,\ldots,h_{k_0})}$, and for each prime ${p}$, let ${\nu_p = \nu_p({\mathcal H}) := |{\mathcal H} \hbox{ mod } p|}$ denote the number of residue classes modulo ${p}$ that ${{\mathcal H}}$ meets; thus we have ${1 \leq \nu_p \leq p-1}$ for all ${p}$ by admissibility, and also ${\nu_p = k_0}$ for all ${p>h_{k_0}-h_1}$. We then define the singular series ${{\mathfrak G} = {\mathfrak G}({\mathcal H})}$ associated to ${{\mathcal H}}$ by the formula

$\displaystyle {\mathfrak G} := \prod_{p \in {\mathcal P}} \frac{1-\frac{\nu_p}{p}}{(1-\frac{1}{p})^{k_0}}$

where ${{\mathcal P} = \{2,3,5,\ldots\}}$ is the set of primes; by the previous discussion we see that the infinite product in ${{\mathfrak G}}$ converges to a finite non-zero number.

We will also need some asymptotic notation (in the spirit of “cheap nonstandard analysis“). We will need a parameter ${x}$ that one should think of going to infinity. Some mathematical objects (such as ${{\mathcal H}}$ and ${k_0}$) will be independent of ${x}$ and referred to as fixed; but unless otherwise specified we allow all mathematical objects under consideration to depend on ${x}$. If ${X}$ and ${Y}$ are two such quantities, we say that ${X = O(Y)}$ if one has ${|X| \leq CY}$ for some fixed ${C}$, and ${X = o(Y)}$ if one has ${|X| \leq c(x) Y}$ for some function ${c(x)}$ of ${x}$ (and of any fixed parameters present) that goes to zero as ${x \rightarrow \infty}$ (for each choice of fixed parameters).

Conjecture 2 (Prime tuples conjecture, quantitative form) Let ${k_0 \geq 1}$ be a fixed natural number, and let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then the number of natural numbers ${n < x}$ such that ${n+{\mathcal H}}$ consists entirely of primes is ${({\mathfrak G} + o(1)) \frac{x}{\log^{k_0} x}}$.

Thus, for instance, if Conjecture 2 holds, then the number of twin primes less than ${x}$ should equal ${(2 \Pi_2 + o(1)) \frac{x}{\log^2 x}}$, where ${\Pi_2}$ is the twin prime constant

$\displaystyle \Pi_2 := \prod_{p \in {\mathcal P}: p>2} (1 - \frac{1}{(p-1)^2}) = 0.6601618\ldots.$
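As a sanity check, the twin prime constant can be approximated by truncating the product over primes; the following short Python computation (truncating at ${10^5}$, which is ample for a few digits, since the tail of the product is ${1 + O(\sum_{p > N} p^{-2})}$) recovers the value above:

```python
def twin_prime_constant(N):
    """Partial product prod_{2 < p < N} (1 - 1/(p-1)^2) over primes, via a sieve."""
    sieve = [True] * N
    prod = 1.0
    for p in range(2, N):
        if sieve[p]:
            for m in range(p * p, N, p):
                sieve[m] = False
            if p > 2:                         # the product excludes p = 2
                prod *= 1 - 1 / (p - 1) ** 2
    return prod

print(twin_prime_constant(10**5))   # ~0.66016, matching Pi_2 to several digits
```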

As this conjecture is stronger than Conjecture 1, it is of course open. However there are a number of partial results on this conjecture. For instance, this conjecture is known to be true if one introduces some additional averaging in ${{\mathcal H}}$; see for instance this previous post. From the methods of sieve theory, one can obtain an upper bound of ${(C_{k_0} {\mathfrak G} + o(1)) \frac{x}{\log^{k_0} x}}$ for the number of ${n < x}$ with ${n + {\mathcal H}}$ all prime, where ${C_{k_0}}$ depends only on ${k_0}$. Sieve theory can also give analogues of Conjecture 2 if the primes are replaced by a suitable notion of almost prime (or more precisely, by a weight function concentrated on almost primes).

Another type of partial result towards Conjectures 1, 2 comes from the results of Goldston-Pintz-Yildirim, Motohashi-Pintz, and of Zhang. Following the notation of this recent paper of Pintz, for each ${k_0>2}$, let ${DHL[k_0,2]}$ denote the following assertion (DHL stands for “Dickson-Hardy-Littlewood”):

Conjecture 3 (${DHL[k_0,2]}$) Let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then there are infinitely many translates ${n+{\mathcal H}}$ of ${{\mathcal H}}$ which contain at least two primes.

This conjecture gets harder as ${k_0}$ gets smaller. Note for instance that ${DHL[2,2]}$ would imply all the ${k_0=2}$ cases of Conjecture 1, including the twin prime conjecture. More generally, if one knew ${DHL[k_0,2]}$ for some ${k_0}$, then one would immediately conclude that there are an infinite number of pairs of consecutive primes of separation at most ${H(k_0)}$, where ${H(k_0)}$ is the minimal diameter ${h_{k_0}-h_1}$ amongst all admissible ${k_0}$-tuples ${{\mathcal H}}$. Values of ${H(k_0)}$ for small ${k_0}$ can be found at this link (with ${H(k_0)}$ denoted ${w}$ in that page). For large ${k_0}$, the best upper bounds on ${H(k_0)}$ have been found by using admissible ${k_0}$-tuples ${{\mathcal H}}$ of the form

$\displaystyle {\mathcal H} = ( - p_{m+\lfloor k_0/2\rfloor - 1}, \ldots, - p_{m+1}, -1, +1, p_{m+1}, \ldots, p_{m+\lfloor (k_0+1)/2\rfloor - 1} )$

where ${p_n}$ denotes the ${n^{th}}$ prime and ${m}$ is a parameter to be optimised over (in practice it is an order of magnitude or two smaller than ${k_0}$); see this blog post for details. The upshot is that one can bound ${H(k_0)}$ for large ${k_0}$ by a quantity slightly smaller than ${k_0 \log k_0}$ (and the large sieve inequality shows that this is sharp up to a factor of two, see e.g. this previous post for more discussion).
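The construction just described is easy to experiment with. The following Python sketch builds the tuple for the toy values ${k_0 = 10}$ and ${m = 3}$ (real applications use far larger ${k_0}$, and ${m}$ genuinely needs tuning: ${m = 2}$ already fails here because the resulting tuple covers all residue classes mod ${5}$); the admissibility test is the criterion described earlier in this post:

```python
def primes(n_max):
    ps, c = [], 2
    while len(ps) < n_max:                 # first n_max primes; ps[i] = p_{i+1}
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

def symmetric_tuple(k0, m):
    """The candidate tuple (-p_{m+floor(k0/2)-1}, ..., -p_{m+1}, -1, +1,
    p_{m+1}, ..., p_{m+floor((k0+1)/2)-1}) from the text."""
    ps = primes(m + k0)                    # more than enough primes for both halves
    left = [-ps[i - 1] for i in range(m + k0 // 2 - 1, m, -1)]
    right = [ps[i - 1] for i in range(m + 1, m + (k0 + 1) // 2)]
    return tuple(left + [-1, 1] + right)

def is_admissible(H):
    for p in range(2, len(H) + 1):
        if all(p % q for q in range(2, p)) and len({h % p for h in H}) == p:
            return False
    return True

H = symmetric_tuple(10, 3)
print(H)                                      # (-17, -13, -11, -7, -1, 1, 7, 11, 13, 17)
print(len(H), H[-1] - H[0])                   # 10 34
print(is_admissible(H))                       # True
print(is_admissible(symmetric_tuple(10, 2)))  # False: m = 2 is too small here
```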

In a key breakthrough, Goldston, Pintz, and Yildirim were able to establish the following conditional result a few years ago:

Theorem 4 (Goldston-Pintz-Yildirim) Suppose that the Elliott-Halberstam conjecture ${EH[\theta]}$ is true for some ${1/2 < \theta < 1}$. Then ${DHL[k_0,2]}$ is true for some finite ${k_0}$. In particular, this establishes an infinite number of pairs of consecutive primes of separation ${O(1)}$.

The dependence of constants between ${k_0}$ and ${\theta}$ given by the Goldston-Pintz-Yildirim argument is basically of the form ${k_0 \sim (\theta-1/2)^{-2}}$. (UPDATE: as recently observed by Farkas, Pintz, and Revesz, this relationship can be improved to ${k_0 \sim (\theta-1/2)^{-3/2}}$.)

Unfortunately, the Elliott-Halberstam conjecture (which we will state properly below) is only known for ${\theta<1/2}$, an important result known as the Bombieri-Vinogradov theorem. If one uses the Bombieri-Vinogradov theorem instead of the Elliott-Halberstam conjecture, Goldston, Pintz, and Yildirim were still able to show the highly non-trivial result that there were infinitely many pairs ${p_{n+1},p_n}$ of consecutive primes with ${(p_{n+1}-p_n) / \log p_n \rightarrow 0}$ (actually they showed more than this; see e.g. this survey of Soundararajan for details).

Actually, the full strength of the Elliott-Halberstam conjecture is not needed for these results. There is a technical specialisation of the Elliott-Halberstam conjecture which does not presently have a commonly accepted name; I will call it the Motohashi-Pintz-Zhang conjecture ${MPZ[\varpi]}$ in this post, where ${0 < \varpi < 1/4}$ is a parameter. We will define this conjecture more precisely later, but let us remark for now that ${MPZ[\varpi]}$ is a consequence of ${EH[\frac{1}{2}+2\varpi]}$.

We then have the following two theorems. Firstly, we have the following strengthening of Theorem 4:

Theorem 5 (Motohashi-Pintz-Zhang) Suppose that ${MPZ[\varpi]}$ is true for some ${0 < \varpi < 1/4}$. Then ${DHL[k_0,2]}$ is true for some ${k_0}$.

A version of this result (with a slightly different formulation of ${MPZ[\varpi]}$) appears in this paper of Motohashi and Pintz, and in the paper of Zhang, Theorem 5 is proven for the concrete values ${\varpi = 1/1168}$ and ${k_0 = 3,500,000}$. We will supply a self-contained proof of Theorem 5 below the fold, which improves the constants upon those in Zhang’s paper (in particular, for ${\varpi = 1/1168}$, we can take ${k_0}$ as low as ${341,640}$, with further improvements on the way). As with Theorem 4, we have an inverse quadratic relationship ${k_0 \sim \varpi^{-2}}$.

In his paper, Zhang obtained for the first time an unconditional advance on ${MPZ[\varpi]}$:

Theorem 6 (Zhang) ${MPZ[\varpi]}$ is true for all ${0 < \varpi \leq 1/1168}$.

This is a deep result, building upon the work of Fouvry-Iwaniec, Friedlander-Iwaniec and Bombieri-Friedlander-Iwaniec which established results of a similar nature to ${MPZ[\varpi]}$ but simpler in some key respects. We will not discuss this result further here, except to say that it relies on the (higher-dimensional case of the) Weil conjectures, which were famously proven by Deligne using methods from l-adic cohomology. Also, it was believed among at least some experts that the methods of Bombieri, Fouvry, Friedlander, and Iwaniec were not quite strong enough to obtain results of the form ${MPZ[\varpi]}$, making Theorem 6 a particularly impressive achievement.

Combining Theorem 6 with Theorem 5 we obtain ${DHL[k_0,2]}$ for some finite ${k_0}$; Zhang obtains this for ${k_0 = 3,500,000}$ but as detailed below, this can be lowered to ${k_0 = 341,640}$. This in turn gives infinitely many pairs of consecutive primes of separation at most ${H(k_0)}$. Zhang gives a simple argument that bounds ${H(3,500,000)}$ by ${70,000,000}$, giving his famous result that there are infinitely many pairs of primes of separation at most ${70,000,000}$; by being a bit more careful (as discussed in this post) one can lower the upper bound on ${H(3,500,000)}$ to ${57,554,086}$, and if one instead uses the newer value ${k_0 = 341,640}$ for ${k_0}$ one can instead use the bound ${H(341,640) \leq 4,982,086}$. (Many thanks to Scott Morrison for these numerics.) UPDATE: These values are now obsolete; see this web page for the latest bounds.

In this post we would like to give a self-contained proof of both Theorem 4 and Theorem 5, which are both sieve-theoretic results that are mainly elementary in nature. (But, as stated earlier, we will not discuss the deepest new result in Zhang’s paper, namely Theorem 6.) Our presentation will deviate a little bit from the traditional sieve-theoretic approach in a few places. Firstly, there is a portion of the argument that is traditionally handled using contour integration and properties of the Riemann zeta function; we will present a “cheaper” approach (which Ben Green and I used in our papers, e.g. in this one) using Fourier analysis, with the only property used about the zeta function ${\zeta(s)}$ being the elementary fact that it blows up like ${\frac{1}{s-1}}$ as one approaches ${1}$ from the right. To deal with the contribution of small primes (which is the source of the singular series ${{\mathfrak G}}$), it will be convenient to use the “${W}$-trick” (introduced in this paper of mine with Ben), passing to a single residue class mod ${W}$ (where ${W}$ is the product of all the small primes) to end up in a situation in which all small primes have been “turned off” which leads to better pseudorandomness properties (for instance, once one eliminates all multiples of small primes, almost all pairs of remaining numbers will be coprime).

A finite group ${G=(G,\cdot)}$ is said to be a Frobenius group if there is a non-trivial subgroup ${H}$ of ${G}$ (known as the Frobenius complement of ${G}$) such that the conjugates ${gHg^{-1}}$ of ${H}$ are “disjoint as possible” in the sense that ${H \cap gHg^{-1} = \{1\}}$ whenever ${g \not \in H}$. This gives a decomposition

$\displaystyle G = \bigcup_{gH \in G/H} (gHg^{-1} \backslash \{1\}) \cup K \ \ \ \ \ (1)$

where the Frobenius kernel ${K}$ of ${G}$ is defined as the identity element ${1}$ together with all the non-identity elements that are not conjugate to any element of ${H}$. Taking cardinalities, we conclude that

$\displaystyle |G| = \frac{|G|}{|H|} (|H| - 1) + |K|$

and hence

$\displaystyle |H| |K| = |G|. \ \ \ \ \ (2)$
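Before proceeding, it may help to verify these identities in the smallest example: ${G = S_3}$ with ${H}$ generated by a transposition, where the kernel turns out to be ${A_3}$. A quick Python check (illustrative only, with permutations represented as tuples):

```python
from itertools import permutations

G = list(permutations(range(3)))                 # S_3, |G| = 6
e = (0, 1, 2)
def mul(p, q): return tuple(p[q[i]] for i in range(3))   # composition p after q
def inv(p): return tuple(sorted(range(3), key=lambda i: p[i]))
def conj(g, h): return mul(mul(g, h), inv(g))            # g h g^{-1}

H = [e, (1, 0, 2)]                               # order-2 subgroup: the complement

# Frobenius condition: distinct conjugates of H meet only in the identity
assert all(set(conj(g, h) for h in H) & set(H) == {e}
           for g in G if g not in H)

# Frobenius kernel: identity plus elements not conjugate into H
conjugates = {conj(g, h) for g in G for h in H if h != e}
K = [g for g in G if g == e or g not in conjugates]

print(sorted(K))                                  # the identity and both 3-cycles: A_3
assert len(H) * len(K) == len(G)                  # identity (2)
assert all(conj(g, k) in K for g in G for k in K) # K is normal, as Frobenius predicts
```

Here ${K = A_3}$ is indeed normal and ${S_3 = A_3 \rtimes H}$, in accordance with the theorem stated next.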

A remarkable theorem of Frobenius gives an unexpected amount of structure on ${K}$ and hence on ${G}$:

Theorem 1 (Frobenius’ theorem) Let ${G}$ be a Frobenius group with Frobenius complement ${H}$ and Frobenius kernel ${K}$. Then ${K}$ is a normal subgroup of ${G}$, and hence (by (2) and the disjointness of ${H}$ and ${K}$ outside the identity) ${G}$ is the semidirect product ${K \rtimes H}$ of ${H}$ and ${K}$.

I discussed Frobenius’ theorem and its proof in this recent blog post. This proof uses the theory of characters on a finite group ${G}$, in particular relying on the fact that a character on a subgroup ${H}$ can induce a character on ${G}$, which can then be decomposed into irreducible characters with natural number coefficients. Remarkably, even though a century has passed since Frobenius’ original argument, there is no proof known of this theorem which avoids character theory entirely; there are elementary proofs known when the complement ${H}$ has even order or when ${H}$ is solvable (we review both of these cases below the fold), which by the Feit-Thompson theorem does cover all the cases, but the proof of the Feit-Thompson theorem involves plenty of character theory (and also relies on Theorem 1). (The answers to this MathOverflow question give a good overview of the current state of affairs.)

I have been playing around recently with the problem of finding a character-free proof of Frobenius’ theorem. I didn’t succeed in obtaining a completely elementary proof, but I did find an argument which replaces character theory (which can be viewed as coming from the representation theory of the non-commutative group algebra ${{\bf C} G \equiv L^2(G)}$) with the Fourier analysis of class functions (i.e. the representation theory of the centre ${Z({\bf C} G) \equiv L^2(G)^G}$ of the group algebra), thus replacing non-commutative representation theory by commutative representation theory. This is not a particularly radical depature from the existing proofs of Frobenius’ theorem, but it did seem to be a new proof which was technically “character-free” (even if it was not all that far from character-based in spirit), so I thought I would record it here.

The main ideas are as follows. The space ${L^2(G)^G}$ of class functions can be viewed as a commutative algebra with respect to the convolution operation ${*}$; as the regular representation is unitary and faithful, this algebra contains no nilpotent elements. As such, (Gelfand-style) Fourier analysis suggests that one can analyse this algebra through the idempotents: class functions ${\phi}$ such that ${\phi*\phi = \phi}$. In terms of characters, idempotents are nothing more than sums of the form ${\sum_{\chi \in \Sigma} \chi(1) \chi}$ for various collections ${\Sigma}$ of characters, but we can perform a fair amount of analysis on idempotents directly without recourse to characters. In particular, it turns out that idempotents enjoy some important integrality properties that can be established without invoking characters: for instance, by taking traces one can check that ${\phi(1)}$ is a natural number, and more generally we will show that ${{\bf E}_{(a,b) \in S} {\bf E}_{x \in G} \phi( a x b^{-1} x^{-1} )}$ is a natural number whenever ${S}$ is a subgroup of ${G \times G}$ (see Corollary 4 below). For instance, the quantity

$\displaystyle \hbox{rank}(\phi) := {\bf E}_{a \in G} {\bf E}_{x \in G} \phi(a xa^{-1} x^{-1})$

is a natural number which we will call the rank of ${\phi}$ (as it is also the linear rank of the transformation ${f \mapsto f*\phi}$ on the space ${L^2(G)^G}$ of class functions).
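These integrality claims can be sanity-checked numerically. The following sketch (my own, not part of the argument) works in ${G = S_3}$, with convolution normalised as ${f*g(x) = {\bf E}_y f(y) g(y^{-1}x)}$ so that idempotents are the sums ${\sum_{\chi \in \Sigma} \chi(1) \chi}$ described above; it verifies that the constant function, the sign character, and ${|G| 1_{\{1\}}}$ are idempotent with ranks ${1}$, ${1}$ and ${3}$ (the number of conjugacy classes of ${S_3}$) respectively.

```python
from itertools import permutations

# Toy check of the integrality claims in G = S_3 (an added sketch, not part
# of the argument).  Convolution is normalised: f*g(x) = E_y f(y) g(y^{-1}x).
G = list(permutations(range(3)))                    # S_3, |G| = 6
def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):
    r = [0] * 3
    for i, pi in enumerate(p): r[pi] = i
    return tuple(r)
def sign(p):                                        # parity via inversion count
    return 1 if sum(p[i] > p[j] for i in range(3) for j in range(i+1, 3)) % 2 == 0 else -1
e = (0, 1, 2)

def conv(f, g):
    return {x: sum(f[y] * g[mul(inv(y), x)] for y in G) / len(G) for x in G}

def rank(f):                                        # E_a E_x f(a x a^{-1} x^{-1})
    return sum(f[mul(mul(a, x), mul(inv(a), inv(x)))] for a in G for x in G) / len(G)**2

one   = {x: 1.0 for x in G}                         # trivial character
sgn   = {x: float(sign(x)) for x in G}              # sign character
delta = {x: 6.0 if x == e else 0.0 for x in G}      # |G| 1_{1} = sum over all chi

for phi in (one, sgn, delta):
    assert conv(phi, phi) == phi                    # idempotent
assert (rank(one), rank(sgn), rank(delta)) == (1.0, 1.0, 3.0)
assert delta[e] == 6.0                              # phi(1) is a natural number
```

The rank of ${|G| 1_{\{1\}}}$ comes out to the number of conjugacy classes, as the corresponding convolution operator is the identity on class functions.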

In the case that ${G}$ is a Frobenius group with kernel ${K}$, the above integrality properties can be used after some elementary manipulations to establish that for any idempotent ${\phi}$, the quantity

$\displaystyle \frac{1}{|G|} \sum_{a \in K} {\bf E}_{x \in G} \phi( axa^{-1}x^{-1} ) - \frac{1}{|G| |K|} \sum_{a,b \in K} \phi(ab^{-1}) \ \ \ \ \ (3)$

is an integer. On the other hand, one can also show by elementary means that this quantity lies between ${0}$ and ${\hbox{rank}(\phi)}$. These two facts are not strong enough on their own to impose much further structure on ${\phi}$, unless one restricts attention to minimal idempotents ${\phi}$. In this case spectral theory (or Gelfand theory, or the fundamental theorem of algebra) tells us that ${\phi}$ has rank one, and then the integrality gap comes into play and forces the quantity (3) to always be either zero or one. This can be used to imply that the convolution action of every minimal idempotent ${\phi}$ either preserves ${\frac{|G|}{|K|} 1_K}$ or annihilates it, which makes ${\frac{|G|}{|K|} 1_K}$ itself an idempotent, which makes ${K}$ normal.

Suppose that ${G = (G,\cdot)}$ is a finite group of even order, thus ${|G|}$ is a multiple of two. By Cauchy’s theorem, this implies that ${G}$ contains an involution: an element ${g}$ in ${G}$ of order two. (Indeed, if no such involution existed, then ${G}$ would be partitioned into doubletons ${\{g,g^{-1}\}}$ together with the identity, so that ${|G|}$ would be odd, a contradiction.) Of course, groups of odd order have no involutions ${g}$, thanks to Lagrange’s theorem (since ${G}$ cannot split into doubletons ${\{ h, hg \}}$).
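The pairing argument above is easy to run mechanically; this quick check (an added illustration) does so for ${S_4}$, confirming that the non-involutions pair off into doubletons ${\{g,g^{-1}\}}$, so that the involutions must be odd in number (and in particular non-empty):

```python
from itertools import permutations

# Illustration (added): in S_4, of even order 24, the elements that are not
# involutions and not the identity pair off as {g, g^{-1}}, forcing the
# number of involutions to be odd (and in particular nonzero).
G = list(permutations(range(4)))
def mul(p, q): return tuple(p[q[i]] for i in range(4))
e = tuple(range(4))

involutions = [g for g in G if g != e and mul(g, g) == e]
paired      = [g for g in G if g != e and mul(g, g) != e]

assert len(paired) % 2 == 0          # doubletons {g, g^{-1}}
assert len(involutions) % 2 == 1     # |G| = 1 + odd + even is even
assert len(involutions) == 9         # 6 transpositions + 3 double transpositions
```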

The classical Brauer-Fowler theorem asserts that if a group ${G}$ has many involutions, then it must have a large non-trivial subgroup:

Theorem 1 (Brauer-Fowler theorem) Let ${G}$ be a finite group with at least ${|G|/n}$ involutions for some ${n > 1}$. Then ${G}$ contains a proper subgroup ${H}$ of index at most ${n^2}$.

This theorem (which is Theorem 2F in the original paper of Brauer and Fowler, who in fact manage to sharpen ${n^2}$ slightly to ${n(n+2)/2}$) has a number of quick corollaries which are also referred to as “the” Brauer-Fowler theorem. For instance, if ${g}$ is an involution of a group ${G}$, and the centraliser ${C_G(g) := \{ h \in G: gh = hg\}}$ has order ${n}$, then clearly ${n \geq 2}$ (as ${C_G(g)}$ contains ${1}$ and ${g}$) and the conjugacy class ${\{ aga^{-1}: a \in G \}}$ has order ${|G|/n}$ (since the map ${a \mapsto aga^{-1}}$ has preimages that are cosets of ${C_G(g)}$). Every conjugate of an involution is again an involution, so by the Brauer-Fowler theorem ${G}$ contains a subgroup of order at least ${\max( n, |G|/n^2)}$. In particular, we can conclude that every group ${G}$ of even order contains a proper subgroup of order at least ${|G|^{1/3}}$.
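The orbit-stabiliser count used here (the conjugacy class of ${g}$ has ${|G|/|C_G(g)|}$ elements) can be illustrated concretely; the following sketch (added) takes ${g}$ to be a transposition in ${S_4}$:

```python
from itertools import permutations

# Worked instance (added) of the counting in the text: for an involution g
# in S_4, the conjugacy class of g has exactly |G|/|C_G(g)| elements, and
# every conjugate of an involution is again an involution.
G = list(permutations(range(4)))
def mul(p, q): return tuple(p[q[i]] for i in range(4))
def inv(p):
    r = [0] * 4
    for i, pi in enumerate(p): r[pi] = i
    return tuple(r)
e = tuple(range(4))

g = (1, 0, 2, 3)                                   # the transposition (0 1)
centraliser = [a for a in G if mul(a, g) == mul(g, a)]
conj_class = {mul(mul(a, g), inv(a)) for a in G}

n = len(centraliser)
assert len(conj_class) == len(G) // n              # |G|/n conjugates
assert all(h != e and mul(h, h) == e for h in conj_class)   # all involutions
```

Here ${n = 4}$ and the class consists of the ${24/4 = 6}$ transpositions.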

Another corollary is that the size of a simple group of even order can be controlled by the size of a centraliser of one of its involutions:

Corollary 2 (Brauer-Fowler theorem) Let ${G}$ be a finite simple group with an involution ${g}$, and suppose that ${C_G(g)}$ has order ${n}$. Then ${G}$ has order at most ${(n^2)!}$.

Indeed, by the previous discussion ${G}$ has a proper subgroup ${H}$ of index at most ${n^2}$, which then gives a non-trivial permutation action of ${G}$ on the coset space ${G/H}$. The kernel of this action is a proper normal subgroup of ${G}$ and is thus trivial, so the action is faithful, and the claim follows.

If one assumes the Feit-Thompson theorem that all groups of odd order are solvable, then Corollary 2 suggests a strategy (first proposed by Brauer himself in 1954) to prove the classification of finite simple groups (CFSG) by induction on the order of the group. Namely, assume for contradiction that the CFSG failed, so that there is a counterexample ${G}$ of minimal order ${|G|}$ to the classification. This is a non-abelian finite simple group; by the Feit-Thompson theorem, it has even order and thus has at least one involution ${g}$. Take such an involution and consider its centraliser ${C_G(g)}$; this is a proper subgroup of ${G}$ of some order ${n < |G|}$. As ${G}$ is a minimal counterexample to the classification, one can in principle describe ${C_G(g)}$ in terms of the CFSG by factoring the group into simple components (via a composition series) and applying the CFSG to each such component. Now, the “only” thing left to do is to verify, for each isomorphism class of ${C_G(g)}$, that all the possible simple groups ${G}$ that could have this type of group as a centraliser of an involution obey the CFSG; Corollary 2 tells us that for each such isomorphism class for ${C_G(g)}$, there are only finitely many ${G}$ that could generate this class for one of its centralisers, so this task should be doable in principle for any given isomorphism class for ${C_G(g)}$. That’s all one needs to do to prove the classification of finite simple groups!

Needless to say, this program turns out to be far more difficult than the above summary suggests, and the actual proof of the CFSG does not quite proceed along these lines. However, a significant portion of the argument is based on a generalisation of this strategy, in which the concept of a centraliser of an involution is replaced by the more general notion of a normaliser of a ${p}$-group, and one studies not just a single normaliser but rather the entire family of such normalisers and how they interact with each other (and in particular, which normalisers of ${p}$-groups commute with each other), motivated in part by the theory of Tits buildings for Lie groups which dictates a very specific type of interaction structure between these ${p}$-groups in the key case when ${G}$ is a (sufficiently high rank) finite simple group of Lie type over a field of characteristic ${p}$. See the text of Aschbacher, Lyons, Smith, and Solomon for a more detailed description of this strategy.

The Brauer-Fowler theorem can be proven by a nice application of character theory, of the type discussed in this recent blog post, ultimately based on analysing the alternating tensor power of representations; I reproduce a version of this argument (taken from this text of Isaacs) below the fold. (The original argument of Brauer and Fowler is more combinatorial in nature.) However, I wanted to record a variant of the argument that relies not on the fine properties of characters, but on the cruder theory of quasirandomness for groups, the modern study of which was initiated by Gowers, and is discussed for instance in this previous post. It gives the following slightly weaker version of Corollary 2:

Corollary 3 (Weak Brauer-Fowler theorem) Let ${G}$ be a finite simple group with an involution ${g}$, and suppose that ${C_G(g)}$ has order ${n}$. Then ${G}$ can be identified with a subgroup of the unitary group ${U_{4n^3}({\bf C})}$.

One can get an upper bound on ${|G|}$ from this corollary using Jordan’s theorem, but the resulting bound is a bit weaker than that in Corollary 2 (and the best bounds on Jordan’s theorem require the CFSG!).

Proof: Let ${A}$ be the set of all involutions in ${G}$, then as discussed above ${|A| \geq |G|/n}$. We may assume that ${G}$ has no non-trivial unitary representation of dimension less than ${4n^3}$ (since such representations are automatically faithful by the simplicity of ${G}$); thus, in the language of quasirandomness, ${G}$ is ${4n^3}$-quasirandom, and is also non-abelian. We have the basic convolution estimate

$\displaystyle \|1_A * 1_A * 1_A - \frac{|A|^3}{|G|} \|_{\ell^\infty(G)} \leq (4n^3)^{-1/2} |G|^{1/2} |A|^{3/2}$

(see Exercise 10 from this previous blog post). In particular,

$\displaystyle 1_A * 1_A * 1_A(1) \geq \frac{|A|^3}{|G|} - (4n^3)^{-1/2} |G|^{1/2} |A|^{3/2} \geq \frac{1}{2n^3} |G|^2$

and so there are at least ${|G|^2/2n^3}$ pairs ${(g,h) \in A \times A}$ such that ${gh \in A^{-1} = A}$, i.e. involutions ${g,h}$ whose product is also an involution. But any such involutions necessarily commute, since

$\displaystyle g (gh) h = g^2 h^2 = 1 = (gh)^2 = g (hg) h.$

Thus there are at least ${|G|^2/2n^3}$ pairs ${(g,h) \in G \times G}$ of non-identity elements that commute, so by the pigeonhole principle there is a non-identity ${g \in G}$ whose centraliser ${C_G(g)}$ has order at least ${|G|/2n^3}$. This centraliser cannot be all of ${G}$ since this would make ${g}$ central which contradicts the non-abelian simple nature of ${G}$. But then the quasiregular representation of ${G}$ on ${G/C_G(g)}$ has dimension at most ${2n^3}$, contradicting the quasirandomness. $\Box$
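Both ingredients of this proof can be tested concretely. The following sketch (my own; the constants are assumptions) uses ${G = A_5}$, which is known to be ${3}$-quasirandom, with ${A}$ its ${15}$ involutions: the mixing inequality is checked with the true quasirandomness parameter ${D = 3}$ in the role played by ${4n^3}$ above, and the observation that involutions with involutive product commute is verified directly.

```python
from itertools import permutations

# Numerical check (added) in G = A_5, which is 3-quasirandom; A is the set
# of involutions.  We verify the mixing bound with D = 3 as the
# quasirandomness parameter, and that involutions whose product is an
# involution commute.
def parity(p):
    return sum(p[i] > p[j] for i in range(5) for j in range(i+1, 5)) % 2

G = [p for p in permutations(range(5)) if parity(p) == 0]   # A_5, order 60
def mul(p, q): return tuple(p[q[i]] for i in range(5))
def inv(p):
    r = [0] * 5
    for i, pi in enumerate(p): r[pi] = i
    return tuple(r)
e = tuple(range(5))

A = [g for g in G if g != e and mul(g, g) == e]             # 15 involutions
Aset = set(A)
f = {x: 1 if x in Aset else 0 for x in G}

def conv(f1, f2):                                           # counting convolution
    return {x: sum(f1[y] * f2[mul(inv(y), x)] for y in G) for x in G}

c3 = conv(conv(f, f), f)                                    # 1_A * 1_A * 1_A
deviation = max(abs(c3[x] - len(A)**3 / len(G)) for x in G)
bound = 3 ** -0.5 * len(G) ** 0.5 * len(A) ** 1.5           # D^{-1/2}|G|^{1/2}|A|^{3/2}
assert deviation <= bound

for g in A:                     # involutions with involutive product commute
    for h in A:
        if f[mul(g, h)]:
            assert mul(g, h) == mul(h, g)
```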

An abstract finite-dimensional complex Lie algebra, or Lie algebra for short, is a finite-dimensional complex vector space ${{\mathfrak g}}$ together with an anti-symmetric bilinear form ${[,] = [,]_{\mathfrak g}: {\mathfrak g} \times {\mathfrak g} \rightarrow {\mathfrak g}}$ that obeys the Jacobi identity

$\displaystyle [[x,y],z] + [[y,z],x] + [[z,x],y] = 0 \ \ \ \ \ (1)$

for all ${x,y,z \in {\mathfrak g}}$; by anti-symmetry one can also rewrite the Jacobi identity as

$\displaystyle [x,[y,z]] = [[x,y],z] + [y,[x,z]]. \ \ \ \ \ (2)$

We will usually omit the subscript from the Lie bracket ${[,]_{\mathfrak g}}$ when this will not cause ambiguity. A homomorphism ${\phi: {\mathfrak g} \rightarrow {\mathfrak h}}$ between two Lie algebras ${{\mathfrak g},{\mathfrak h}}$ is a linear map that respects the Lie bracket, thus ${\phi([x,y]_{\mathfrak g}) =[\phi(x),\phi(y)]_{\mathfrak h}}$ for all ${x,y \in {\mathfrak g}}$. As with many other classes of mathematical objects, the class of Lie algebras together with their homomorphisms then form a category. One can of course also consider Lie algebras in infinite dimension or over other fields, but we will restrict attention throughout these notes to the finite-dimensional complex case. The trivial, zero-dimensional Lie algebra is denoted ${0}$; Lie algebras of positive dimension will be called non-trivial.

Lie algebras come up in many contexts in mathematics, in particular arising as the tangent space of complex Lie groups. It is thus very profitable to think of Lie algebras as being the infinitesimal component of a Lie group, and in particular almost all of the notation and concepts that are applicable to Lie groups (e.g. nilpotence, solvability, extensions, etc.) have infinitesimal counterparts in the category of Lie algebras (often with exactly the same terminology). See this previous blog post for more discussion about the connection between Lie algebras and Lie groups (that post was focused over the reals instead of the complexes, but much of the discussion carries over to the complex case).

A particular example of a Lie algebra is the general linear Lie algebra ${{\mathfrak{gl}}(V)}$ of linear transformations ${x: V \rightarrow V}$ on a finite-dimensional complex vector space (or vector space for short) ${V}$, with the commutator Lie bracket ${[x,y] := xy-yx}$; one easily verifies that this is indeed an abstract Lie algebra. We will define a concrete Lie algebra to be a Lie algebra that is a subalgebra of ${{\mathfrak{gl}}(V)}$ for some vector space ${V}$, and similarly define a representation of a Lie algebra ${{\mathfrak g}}$ to be a homomorphism ${\rho: {\mathfrak g} \rightarrow {\mathfrak h}}$ into a concrete Lie algebra ${{\mathfrak h}}$. It is a deep theorem of Ado (discussed in this previous post) that every abstract Lie algebra is in fact isomorphic to a concrete one (or equivalently, that every abstract Lie algebra has a faithful representation), but we will not need or prove this fact here.
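As a quick concrete confirmation (added; any matrix size would do), one can check anti-symmetry and the Jacobi identity (1) for the commutator bracket on random integer ${3 \times 3}$ matrices:

```python
import random

# Direct check (added illustration): the commutator bracket on 3x3 matrices
# satisfies antisymmetry and the Jacobi identity (1), with exact integer
# arithmetic.
def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(3)] for i in range(3)]
def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(3)] for i in range(3)]
def bracket(a, b):
    return sub(matmul(a, b), matmul(b, a))

random.seed(0)
x, y, z = [[[random.randint(-5, 5) for _ in range(3)] for _ in range(3)] for _ in range(3)]
zero = [[0] * 3 for _ in range(3)]

assert add(bracket(x, y), bracket(y, x)) == zero            # antisymmetry
jac = add(add(bracket(bracket(x, y), z), bracket(bracket(y, z), x)),
          bracket(bracket(z, x), y))
assert jac == zero                                          # Jacobi identity (1)
```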

Even without Ado’s theorem, though, the structure of abstract Lie algebras is very well understood. As with objects in many other algebraic categories, a basic way to understand a Lie algebra ${{\mathfrak g}}$ is to factor it into two simpler algebras ${{\mathfrak h}, {\mathfrak k}}$ via a short exact sequence

$\displaystyle 0 \rightarrow {\mathfrak h} \rightarrow {\mathfrak g} \rightarrow {\mathfrak k} \rightarrow 0, \ \ \ \ \ (3)$

thus one has an injective homomorphism from ${{\mathfrak h}}$ to ${{\mathfrak g}}$ and a surjective homomorphism from ${{\mathfrak g}}$ to ${{\mathfrak k}}$ such that the image of the former homomorphism is the kernel of the latter. (To be pedantic, a short exact sequence in a general category requires these homomorphisms to be monomorphisms and epimorphisms respectively, but in the category of Lie algebras these turn out to reduce to the more familiar concepts of injectivity and surjectivity respectively.) Given such a sequence, one can (non-uniquely) identify ${{\mathfrak g}}$ with the vector space ${{\mathfrak h} \times {\mathfrak k}}$ equipped with a Lie bracket of the form

$\displaystyle [(t,x), (s,y)]_{\mathfrak g} = ([t,s]_{\mathfrak h} + A(t,y) - A(s,x) + B(x,y), [x,y]_{\mathfrak k}) \ \ \ \ \ (4)$

for some bilinear maps ${A: {\mathfrak h} \times {\mathfrak k} \rightarrow {\mathfrak h}}$ and ${B: {\mathfrak k} \times {\mathfrak k} \rightarrow {\mathfrak h}}$ that obey some Jacobi-type identities which we will not record here. Understanding exactly what maps ${A,B}$ are possible here (up to coordinate change) can be a difficult task (and is one of the key objectives of Lie algebra cohomology), but in principle at least, the problem of understanding ${{\mathfrak g}}$ can be reduced to that of understanding its factors ${{\mathfrak k}, {\mathfrak h}}$. To emphasise this, I will (perhaps idiosyncratically) express the existence of a short exact sequence (3) by the ATLAS-type notation

$\displaystyle {\mathfrak g} = {\mathfrak h} . {\mathfrak k} \ \ \ \ \ (5)$

although one should caution that for given ${{\mathfrak h}}$ and ${{\mathfrak k}}$, there can be multiple non-isomorphic ${{\mathfrak g}}$ that can form a short exact sequence with ${{\mathfrak h},{\mathfrak k}}$, so that ${{\mathfrak h} . {\mathfrak k}}$ is not a uniquely defined combination of ${{\mathfrak h}}$ and ${{\mathfrak k}}$; one could emphasise this by writing ${{\mathfrak h} ._{A,B} {\mathfrak k}}$ instead of ${{\mathfrak h} . {\mathfrak k}}$, though we will not do so here. We will refer to ${{\mathfrak g}}$ as an extension of ${{\mathfrak k}}$ by ${{\mathfrak h}}$, and read the notation (5) as “ ${{\mathfrak g}}$ is ${{\mathfrak h}}$-by-${{\mathfrak k}}$“; confusingly, these two notations reverse the subject and object of “by”, but unfortunately both notations are well entrenched in the literature. We caution that the operation ${.}$ is not commutative, and it is only partly associative: every Lie algebra of the form ${{\mathfrak k} . ({\mathfrak h} . {\mathfrak l})}$ is also of the form ${({\mathfrak k} . {\mathfrak h}) . {\mathfrak l}}$, but the converse is not true (see this previous blog post for some related discussion). As we are working in the infinitesimal world of Lie algebras (which have an additive group operation) rather than Lie groups (in which the group operation is usually written multiplicatively), it may help to think of ${{\mathfrak h} . {\mathfrak k}}$ as a (twisted) “sum” of ${{\mathfrak h}}$ and ${{\mathfrak k}}$ rather than a “product”; for instance, we have ${{\mathfrak g} = 0 . {\mathfrak g}}$ and ${{\mathfrak g} = {\mathfrak g} . 0}$, and also ${\dim {\mathfrak h} . {\mathfrak k} = \dim {\mathfrak h} + \dim {\mathfrak k}}$.

Special examples of extensions ${{\mathfrak h} .{\mathfrak k}}$ of ${{\mathfrak k}}$ by ${{\mathfrak h}}$ include the direct sum (or direct product) ${{\mathfrak h} \oplus {\mathfrak k}}$ (also denoted ${{\mathfrak h} \times {\mathfrak k}}$), which is given by the construction (4) with ${A}$ and ${B}$ both vanishing, and the split extension (or semidirect product) ${{\mathfrak h} : {\mathfrak k} = {\mathfrak h} :_\rho {\mathfrak k}}$ (also denoted ${{\mathfrak h} \ltimes {\mathfrak k} = {\mathfrak h} \ltimes_\rho {\mathfrak k}}$), which is given by the construction (4) with ${B}$ vanishing and the bilinear map ${A: {\mathfrak h} \times {\mathfrak k} \rightarrow {\mathfrak h}}$ taking the form

$\displaystyle A( t, x ) = \rho(x)(t)$

for some representation ${\rho: {\mathfrak k} \rightarrow \hbox{Der} {\mathfrak h}}$ of ${{\mathfrak k}}$ in the concrete Lie algebra of derivations ${\hbox{Der} {\mathfrak h} \subset {\mathfrak{gl}}({\mathfrak h})}$ of ${{\mathfrak h}}$, that is to say the algebra of linear maps ${D: {\mathfrak h} \rightarrow {\mathfrak h}}$ that obey the Leibniz rule

$\displaystyle D[s,t]_{\mathfrak h} = [Ds,t]_{\mathfrak h} + [s,Dt]_{\mathfrak h}$

for all ${s,t \in {\mathfrak h}}$. (The derivation algebra ${\hbox{Der} {\mathfrak g}}$ of a Lie algebra ${{\mathfrak g}}$ is analogous to the automorphism group ${\hbox{Aut}(G)}$ of a Lie group ${G}$, with the two concepts being intertwined by the tangent space functor ${G \mapsto {\mathfrak g}}$ from Lie groups to Lie algebras (i.e. the derivation algebra is the infinitesimal version of the automorphism group). Of course, this functor also intertwines the Lie algebra and Lie group versions of most of the other concepts discussed here, such as extensions, semidirect products, etc.)
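To make the construction (4) concrete, here is a minimal sketch (my own choice of data, not from the text): take ${{\mathfrak h} = {\bf C}^2}$ abelian, ${{\mathfrak k} = {\bf C}}$, ${B = 0}$, and ${\rho(x) = xD}$ for a fixed matrix ${D}$ (any linear map is a derivation of an abelian algebra, and ${\rho}$ is trivially a homomorphism since ${{\mathfrak k}}$ is one-dimensional); the resulting bracket then satisfies the Jacobi identity:

```python
import random

# Minimal instance (added, with assumed data) of the split extension (4):
# h = C^2 abelian, k = C, A(t, x) = rho(x)(t) with rho(x) = x*D for a fixed
# matrix D, and B = 0.  We verify the Jacobi identity for the bracket (4).
random.seed(1)
D = [[random.randint(-3, 3) for _ in range(2)] for _ in range(2)]

def rho(x, t):                      # rho(x)(t) = x * (D t)
    return [x * sum(D[i][j] * t[j] for j in range(2)) for i in range(2)]

def bracket(u, v):                  # (4) with [.,.]_h = 0 and B = 0
    (t, x), (s, y) = u, v
    return ([rho(y, t)[i] - rho(x, s)[i] for i in range(2)], 0)

def add(u, v):
    return ([u[0][i] + v[0][i] for i in range(2)], u[1] + v[1])

u, v, w = [([random.randint(-4, 4), random.randint(-4, 4)], random.randint(-4, 4))
           for _ in range(3)]
jac = add(add(bracket(bracket(u, v), w), bracket(bracket(v, w), u)),
          bracket(bracket(w, u), v))
assert jac == ([0, 0], 0)           # Jacobi identity holds for this extension
```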

There are two general ways to factor a Lie algebra ${{\mathfrak g}}$ as an extension ${{\mathfrak h} . {\mathfrak k}}$ of a smaller Lie algebra ${{\mathfrak k}}$ by another smaller Lie algebra ${{\mathfrak h}}$. One is to locate a Lie algebra ideal (or ideal for short) ${{\mathfrak h}}$ in ${{\mathfrak g}}$, thus ${[{\mathfrak h},{\mathfrak g}] \subset {\mathfrak h}}$, where ${[{\mathfrak h},{\mathfrak g}]}$ denotes the Lie algebra generated by ${\{ [x,y]: x \in {\mathfrak h}, y \in {\mathfrak g} \}}$, and then take ${{\mathfrak k}}$ to be the quotient space ${{\mathfrak g}/{\mathfrak h}}$ in the usual manner; one can check that ${{\mathfrak h}}$, ${{\mathfrak k}}$ are also Lie algebras and that we do indeed have a short exact sequence

$\displaystyle {\mathfrak g} = {\mathfrak h} . ({\mathfrak g}/{\mathfrak h}).$

Conversely, whenever one has a factorisation ${{\mathfrak g} = {\mathfrak h} . {\mathfrak k}}$, one can identify ${{\mathfrak h}}$ with an ideal in ${{\mathfrak g}}$, and ${{\mathfrak k}}$ with the quotient of ${{\mathfrak g}}$ by ${{\mathfrak h}}$.

The other general way to obtain such a factorisation is to start with a homomorphism ${\rho: {\mathfrak g} \rightarrow {\mathfrak m}}$ of ${{\mathfrak g}}$ into another Lie algebra ${{\mathfrak m}}$, take ${{\mathfrak k}}$ to be the image ${\rho({\mathfrak g})}$ of ${{\mathfrak g}}$, and ${{\mathfrak h}}$ to be the kernel ${\hbox{ker} \rho := \{ x \in {\mathfrak g}: \rho(x) = 0 \}}$. Again, it is easy to see that this does indeed create a short exact sequence:

$\displaystyle {\mathfrak g} = \hbox{ker} \rho . \rho({\mathfrak g}).$

Conversely, whenever one has a factorisation ${{\mathfrak g} = {\mathfrak h} . {\mathfrak k}}$, one can identify ${{\mathfrak k}}$ with the image of ${{\mathfrak g}}$ under some homomorphism, and ${{\mathfrak h}}$ with the kernel of that homomorphism. Note that if a representation ${\rho: {\mathfrak g} \rightarrow {\mathfrak m}}$ is faithful (i.e. injective), then the kernel is trivial and ${{\mathfrak g}}$ is isomorphic to ${\rho({\mathfrak g})}$.

Now we consider some examples of factoring some class of Lie algebras into simpler Lie algebras. The easiest examples of Lie algebras to understand are the abelian Lie algebras ${{\mathfrak g}}$, in which the Lie bracket identically vanishes. Every one-dimensional Lie algebra is automatically abelian, and thus isomorphic to the scalar algebra ${{\bf C}}$. Conversely, by using an arbitrary linear basis of ${{\mathfrak g}}$, we see that an abelian Lie algebra is isomorphic to the direct sum of one-dimensional algebras. Thus, a Lie algebra is abelian if and only if it is isomorphic to the direct sum of finitely many copies of ${{\bf C}}$.

Now consider a Lie algebra ${{\mathfrak g}}$ that is not necessarily abelian. We then form the derived algebra ${[{\mathfrak g},{\mathfrak g}]}$; this algebra is trivial if and only if ${{\mathfrak g}}$ is abelian. It is easy to see that ${[{\mathfrak h},{\mathfrak k}]}$ is an ideal whenever ${{\mathfrak h},{\mathfrak k}}$ are ideals, so in particular the derived algebra ${[{\mathfrak g},{\mathfrak g}]}$ is an ideal and we thus have the short exact sequence

$\displaystyle {\mathfrak g} = [{\mathfrak g},{\mathfrak g}] . ({\mathfrak g}/[{\mathfrak g},{\mathfrak g}]).$

The algebra ${{\mathfrak g}/[{\mathfrak g},{\mathfrak g}]}$ is the maximal abelian quotient of ${{\mathfrak g}}$, and is known as the abelianisation of ${{\mathfrak g}}$. If it is trivial, we call the Lie algebra perfect. If instead it is non-trivial, then the derived algebra has strictly smaller dimension than ${{\mathfrak g}}$. From this, it is natural to associate two series to any Lie algebra ${{\mathfrak g}}$, the lower central series

$\displaystyle {\mathfrak g}_1 = {\mathfrak g}; {\mathfrak g}_2 := [{\mathfrak g}, {\mathfrak g}_1]; {\mathfrak g}_3 := [{\mathfrak g}, {\mathfrak g}_2]; \ldots$

and the derived series

$\displaystyle {\mathfrak g}^{(1)} := {\mathfrak g}; {\mathfrak g}^{(2)} := [{\mathfrak g}^{(1)}, {\mathfrak g}^{(1)}]; {\mathfrak g}^{(3)} := [{\mathfrak g}^{(2)}, {\mathfrak g}^{(2)}]; \ldots.$

By induction we see that these are both decreasing series of ideals of ${{\mathfrak g}}$, with the derived series being slightly smaller (${{\mathfrak g}^{(k)} \subseteq {\mathfrak g}_k}$ for all ${k}$). We say that a Lie algebra is nilpotent if its lower central series is eventually trivial, and solvable if its derived series eventually becomes trivial. Thus, abelian Lie algebras are nilpotent, and nilpotent Lie algebras are solvable, but the converses are not necessarily true. For instance, in the general linear Lie algebra ${{\mathfrak{gl}}_n = {\mathfrak{gl}}({\bf C}^n)}$, which can be identified with the Lie algebra of ${n \times n}$ complex matrices, the subalgebra ${{\mathfrak n}}$ of strictly upper triangular matrices is nilpotent (but not abelian for ${n \geq 3}$), while the subalgebra ${{\mathfrak b}}$ of upper triangular matrices is solvable (but not nilpotent for ${n \geq 2}$). It is also clear that any subalgebra of a nilpotent algebra is nilpotent, and similarly for solvable or abelian algebras.
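These two examples can be verified by direct computation of the series. The sketch below (added) works inside ${{\mathfrak{gl}}_3}$ with exact rational arithmetic, computing the dimensions along the lower central and derived series of the strictly upper triangular algebra and of the full upper triangular algebra:

```python
from fractions import Fraction
from itertools import product

# Added check: dimensions along the lower central and derived series of the
# strictly upper triangular algebra (n_alg) and the upper triangular algebra
# (b_alg) inside gl_3.
def E(i, j):
    return [[1 if (r, c) == (i, j) else 0 for c in range(3)] for r in range(3)]
def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
def bracket(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(3)] for i in range(3)]

def dim(mats):                      # rank of the flattened matrices
    rows = [[Fraction(x) for row in m for x in row] for m in mats]
    rank = 0
    for col in range(9):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if piv is None: continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                fac = rows[r][col] / rows[rank][col]
                rows[r] = [a - fac * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def basis_of(mats):                 # extract a linearly independent subset
    out = []
    for m in mats:
        if dim(out + [m]) > len(out): out.append(m)
    return out

def brackets(B1, B2):
    return [bracket(a, b) for a, b in product(B1, B2)]

def series(g, lower=True):          # dimensions of g_k (lower) or g^{(k)} (derived)
    dims, cur = [dim(g)], g
    while dims[-1] > 0:
        cur = basis_of(brackets(g if lower else cur, cur))
        dims.append(len(cur))
        if dims[-1] == dims[-2]: break      # the series has stabilised
    return dims

n_alg = [E(0, 1), E(0, 2), E(1, 2)]             # strictly upper triangular
b_alg = n_alg + [E(0, 0), E(1, 1), E(2, 2)]     # upper triangular

assert series(n_alg, lower=True)  == [3, 1, 0]     # nilpotent
assert series(b_alg, lower=False) == [6, 3, 1, 0]  # solvable ...
assert series(b_alg, lower=True)  == [6, 3, 3]     # ... but not nilpotent
```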

From the above discussion we see that a Lie algebra is solvable if and only if it can be represented by a tower of abelian extensions, thus

$\displaystyle {\mathfrak g} = {\mathfrak a}_1 . ({\mathfrak a}_2 . \ldots ({\mathfrak a}_{k-1} . {\mathfrak a}_k) \ldots )$

for some abelian ${{\mathfrak a}_1,\ldots,{\mathfrak a}_k}$. Similarly, a Lie algebra ${{\mathfrak g}}$ is nilpotent if it is expressible as a tower of central extensions (so that in all the extensions ${{\mathfrak h} . {\mathfrak k}}$ in the above factorisation, ${{\mathfrak h}}$ is central in ${{\mathfrak h} . {\mathfrak k}}$, where we say that ${{\mathfrak h}}$ is central in ${{\mathfrak g}}$ if ${[{\mathfrak h},{\mathfrak g}]=0}$). We also see that an extension ${{\mathfrak h} . {\mathfrak k}}$ is solvable if and only if both factors ${{\mathfrak h}, {\mathfrak k}}$ are solvable. Splitting abelian algebras into cyclic (i.e. one-dimensional) ones, we thus see that a finite-dimensional Lie algebra is solvable if and only if it is polycyclic, i.e. it can be represented by a tower of cyclic extensions.

For our next fundamental example of using short exact sequences to split a general Lie algebra into simpler objects, we observe that every abstract Lie algebra ${{\mathfrak g}}$ has an adjoint representation ${\hbox{ad}: {\mathfrak g} \rightarrow \hbox{ad} {\mathfrak g} \subset {\mathfrak{gl}}({\mathfrak g})}$, where for each ${x \in {\mathfrak g}}$, ${\hbox{ad} x \in {\mathfrak{gl}}({\mathfrak g})}$ is the linear map ${(\hbox{ad} x)(y) := [x,y]}$; one easily verifies that this is indeed a representation (indeed, (2) is equivalent to the assertion that ${\hbox{ad} [x,y] = [\hbox{ad} x, \hbox{ad} y]}$ for all ${x,y \in {\mathfrak g}}$). The kernel of this representation is the center ${Z({\mathfrak g}) := \{ x \in {\mathfrak g}: [x,{\mathfrak g}] = 0\}}$, which is the maximal central subalgebra of ${{\mathfrak g}}$. We thus have the short exact sequence

$\displaystyle {\mathfrak g} = Z({\mathfrak g}) . \hbox{ad} {\mathfrak g} \ \ \ \ \ (6)$

which, among other things, shows that every abstract Lie algebra is a central extension of a concrete Lie algebra (which can serve as a cheap substitute for Ado’s theorem mentioned earlier).
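Both assertions (that ${\hbox{ad}}$ is a homomorphism, and that its kernel is the centre) are easy to check by hand in a small example; the following sketch (added) does so for ${{\mathfrak{gl}}_2}$, whose centre consists of the scalar matrices:

```python
import random
from itertools import product

# Added check in gl_2: ad[x,y] = [ad x, ad y], and ker(ad) = centre = scalars.
def matmul(a, b, n):
    return [[sum(a[i][k]*b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
def bracket(a, b, n=2):
    ab, ba = matmul(a, b, n), matmul(b, a, n)
    return [[ab[i][j] - ba[i][j] for j in range(n)] for i in range(n)]

basis = [[[1,0],[0,0]], [[0,1],[0,0]], [[0,0],[1,0]], [[0,0],[0,1]]]
def ad(x):        # matrix of ad x on gl_2 in the elementary-matrix basis
    cols = [[v for row in bracket(x, b) for v in row] for b in basis]
    return [[cols[j][i] for j in range(4)] for i in range(4)]

random.seed(2)
x, y = ([[random.randint(-5, 5) for _ in range(2)] for _ in range(2)] for _ in range(2))
assert ad(bracket(x, y)) == bracket(ad(x), ad(y), 4)    # ad is a homomorphism

zero4 = [[0] * 4 for _ in range(4)]
for a, b, c, d in product(range(-2, 3), repeat=4):      # small search for the centre
    m = [[a, b], [c, d]]
    if ad(m) == zero4:
        assert b == c == 0 and a == d                   # only scalar matrices
```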

For our next fundamental decomposition of Lie algebras, we need some more definitions. A Lie algebra ${{\mathfrak g}}$ is simple if it is non-abelian and has no ideals other than ${0}$ and ${{\mathfrak g}}$; thus simple Lie algebras cannot be factored ${{\mathfrak g} = {\mathfrak h} . {\mathfrak k}}$ into strictly smaller algebras ${{\mathfrak h},{\mathfrak k}}$. In particular, simple Lie algebras are automatically perfect and centerless. We have the following fundamental theorem:

Theorem 1 (Equivalent definitions of semisimplicity) Let ${{\mathfrak g}}$ be a Lie algebra. Then the following are equivalent:

• (i) ${{\mathfrak g}}$ does not contain any non-trivial solvable ideal.
• (ii) ${{\mathfrak g}}$ does not contain any non-trivial abelian ideal.
• (iii) The Killing form ${K: {\mathfrak g} \times {\mathfrak g} \rightarrow {\bf C}}$, defined as the bilinear form ${K(x,y) := \hbox{tr}_{\mathfrak g}( (\hbox{ad} x) (\hbox{ad} y) )}$, is non-degenerate on ${{\mathfrak g}}$.
• (iv) ${{\mathfrak g}}$ is isomorphic to the direct sum of finitely many non-abelian simple Lie algebras.

We review the proof of this theorem later in these notes. A Lie algebra obeying any (and hence all) of the properties (i)-(iv) is known as a semisimple Lie algebra. The statement (iv) is usually taken as the definition of semisimplicity; the equivalence of (iv) and (i) is a special case of Weyl’s complete reducibility theorem (see Theorem 32), and the equivalence of (iv) and (iii) is known as the Cartan semisimplicity criterion. (The equivalence of (i) and (ii) is easy.)
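The Cartan criterion (iii) is also pleasant to test numerically. The sketch below (added) computes the Killing form in a basis for ${\mathfrak{sl}}_2$ (semisimple, so the form should be non-degenerate) and for the solvable algebra of upper triangular ${2 \times 2}$ matrices (where it must degenerate, since the Killing form of a solvable algebra vanishes on the derived algebra):

```python
# Added numerical illustration of criterion (iii): the Killing form of sl_2
# is non-degenerate, while that of the solvable upper triangular algebra in
# gl_2 is degenerate.
def matmul(a, b, n):
    return [[sum(a[i][k]*b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
def bracket(a, b):
    ab, ba = matmul(a, b, 2), matmul(b, a, 2)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def killing(basis, coords):         # K(x,y) = tr(ad x ad y) in the given basis
    k = len(basis)
    def ad(x):
        cols = [coords(bracket(x, b)) for b in basis]
        return [[cols[j][i] for j in range(k)] for i in range(k)]
    ads = [ad(b) for b in basis]
    return [[sum(matmul(ads[i], ads[j], k)[t][t] for t in range(k))
             for j in range(k)] for i in range(k)]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

# sl_2 with basis (H, E, F); a traceless [[a,b],[c,-a]] has coordinates (a,b,c)
sl2 = [[[1,0],[0,-1]], [[0,1],[0,0]], [[0,0],[1,0]]]
K_sl2 = killing(sl2, lambda m: (m[0][0], m[0][1], m[1][0]))
assert det3(K_sl2) != 0             # non-degenerate: sl_2 is semisimple

# upper triangular algebra with basis (E00, E11, E01); [[a,b],[0,d]] -> (a,d,b)
tri = [[[1,0],[0,0]], [[0,0],[0,1]], [[0,1],[0,0]]]
K_tri = killing(tri, lambda m: (m[0][0], m[1][1], m[0][1]))
assert det3(K_tri) == 0             # degenerate: the algebra is solvable
```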

If ${{\mathfrak h}}$ and ${{\mathfrak k}}$ are solvable ideals of a Lie algebra ${{\mathfrak g}}$, then it is not difficult to see that the vector sum ${{\mathfrak h}+{\mathfrak k}}$ is also a solvable ideal (because on quotienting by ${{\mathfrak h}}$ we see that the derived series of ${{\mathfrak h}+{\mathfrak k}}$ must eventually fall inside ${{\mathfrak h}}$, and thence must eventually become trivial by the solvability of ${{\mathfrak h}}$). As our Lie algebras are finite dimensional, we conclude that ${{\mathfrak g}}$ has a unique maximal solvable ideal, known as the radical ${\hbox{rad} {\mathfrak g}}$ of ${{\mathfrak g}}$. The quotient ${{\mathfrak g}/\hbox{rad} {\mathfrak g}}$ is then a Lie algebra with trivial radical, and is thus semisimple by the above theorem, giving the Levi decomposition

$\displaystyle {\mathfrak g} = \hbox{rad} {\mathfrak g} . ({\mathfrak g} / \hbox{rad} {\mathfrak g})$

expressing an arbitrary Lie algebra as an extension of a semisimple Lie algebra ${{\mathfrak g}/\hbox{rad}{\mathfrak g}}$ by a solvable algebra ${\hbox{rad} {\mathfrak g}}$ (and it is not hard to see that this is the only possible such extension up to isomorphism). Indeed, a deep theorem of Levi allows one to upgrade this decomposition to a split extension

$\displaystyle {\mathfrak g} = \hbox{rad} {\mathfrak g} : ({\mathfrak g} / \hbox{rad} {\mathfrak g})$

although we will not need or prove this result here.

In view of the above decompositions, we see that we can factor any Lie algebra (using a suitable combination of direct sums and extensions) into a finite number of simple Lie algebras and the scalar algebra ${{\bf C}}$. In principle, this means that one can understand an arbitrary Lie algebra once one understands all the simple Lie algebras (which, being defined over ${{\bf C}}$, are somewhat confusingly referred to as simple complex Lie algebras in the literature). Amazingly, this latter class of algebras is completely classified:

Theorem 2 (Classification of simple Lie algebras) Up to isomorphism, every simple Lie algebra is of one of the following forms:

• ${A_n = \mathfrak{sl}_{n+1}}$ for some ${n \geq 1}$.
• ${B_n = \mathfrak{so}_{2n+1}}$ for some ${n \geq 2}$.
• ${C_n = \mathfrak{sp}_{2n}}$ for some ${n \geq 3}$.
• ${D_n = \mathfrak{so}_{2n}}$ for some ${n \geq 4}$.
• ${E_6, E_7}$, or ${E_8}$.
• ${F_4}$.
• ${G_2}$.

(The precise definition of the classical Lie algebras ${A_n,B_n,C_n,D_n}$ and the exceptional Lie algebras ${E_6,E_7,E_8,F_4,G_2}$ will be recalled later.)

(One can extend the families ${A_n,B_n,C_n,D_n}$ of classical Lie algebras a little bit to smaller values of ${n}$, but the resulting algebras are either isomorphic to other algebras on this list, or cease to be simple; see this previous post for further discussion.)

This classification is a basic starting point for the classification of many other related objects, including Lie algebras and Lie groups over more general fields (e.g. the reals ${{\bf R}}$), as well as finite simple groups. Being so fundamental to the subject, this classification is covered in almost every basic textbook in Lie algebras, and I myself learned it many years ago in an honours undergraduate course back in Australia. The proof is rather lengthy, though, and I have always had difficulty keeping it straight in my head. So I have decided to write some notes on the classification in this blog post, aiming to be self-contained (though moving rapidly). There is no new material in this post, though; it is all drawn from standard reference texts (I relied particularly on Fulton and Harris’s text, which I highly recommend). In fact it seems remarkably hard to deviate from the standard routes given in the literature to the classification; I would be interested in knowing about other ways to reach the classification (or substeps in that classification) that are genuinely different from the orthodox route.

The classification of finite simple groups (CFSG), first announced in 1983 but only fully completed in 2004, is one of the monumental achievements of twentieth century mathematics. Spanning hundreds of papers and tens of thousands of pages, it has been called the “enormous theorem”. A “second generation” proof of the theorem, which will be a little shorter (estimated at about five thousand pages in length), is nearing completion, but currently there is no reasonably sized proof of the classification.

An important precursor of the CFSG is the Feit-Thompson theorem from 1962-1963, which asserts that every finite group of odd order is solvable, or equivalently that every non-abelian finite simple group has even order. This is an immediate consequence of CFSG, and conversely the Feit-Thompson theorem is an essential starting point in the proof of the classification, since it allows one to reduce matters to groups of even order for which key additional tools (such as the Brauer-Fowler theorem) become available. The original proof of the Feit-Thompson theorem is 255 pages long, which is significantly shorter than the proof of the CFSG, but still far from short. While parts of the proof of the Feit-Thompson theorem have been simplified (and it has recently been converted, after six years of effort, into an argument that has been verified by the proof assistant Coq), the available proofs of this theorem are still extremely lengthy by any reasonable standard.

However, there is a significantly simpler special case of the Feit-Thompson theorem that was established previously by Suzuki in 1957, which was influential in the proof of the more general Feit-Thompson theorem (and thus indirectly in the proof of CFSG). Define a CA-group to be a group ${G}$ with the property that the centraliser ${C_G(x) := \{ g \in G: gx=xg \}}$ of any non-identity element ${x \in G}$ is abelian; equivalently, the commuting relation ${x \sim y}$ (defined as the relation that holds when ${x}$ commutes with ${y}$, thus ${xy=yx}$) is an equivalence relation on the non-identity elements ${G \backslash \{1\}}$ of ${G}$. Trivially, every abelian group is CA. A non-abelian example of a CA-group is the ${ax+b}$ group of invertible affine transformations ${x \mapsto ax+b}$ on a field ${F}$. A little less obviously, the special linear group ${SL_2(F_q)}$ over a finite field ${F_q}$ is a CA-group when ${q}$ is a power of two. The finite simple groups of Lie type are not, in general, CA-groups, but when the rank is bounded they tend to behave as if they were “almost CA”: for instance, when ${d}$ is bounded and ${q}$ is large, the centraliser of a generic element of ${SL_d(F_q)}$ is typically a maximal torus (because most elements in ${SL_d(F_q)}$ are regular semisimple), which is certainly abelian. In view of the CFSG, we thus see that CA or nearly CA groups form an important subclass of the simple groups, and it is thus of interest to study them separately. To this end, we have

Theorem 1 (Suzuki’s theorem on CA-groups) Every finite CA-group of odd order is solvable.

Of course, this theorem is superseded by the more general Feit-Thompson theorem, but Suzuki’s proof is substantially shorter (the original proof is nine pages) and will be given in this post. (See this survey of Solomon for some discussion of the link between Suzuki’s argument and the Feit-Thompson argument.) Suzuki’s analysis can be pushed further to give an essentially complete classification of all the finite CA-groups (of either odd or even order), but we will not pursue these matters here.
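The CA property is easy to test by brute force on small groups. As a quick sanity check (not part of any of the proofs discussed in this post), here is a short Python sketch verifying that the ${ax+b}$ group over ${F_5}$ (the choice of field is arbitrary) is a non-abelian CA-group:

```python
q = 5  # work over the field F_q; q = 5 is an arbitrary illustrative choice

# Elements of the ax+b group: pairs (a, b) with a invertible, acting as x -> a*x + b.
G = [(a, b) for a in range(1, q) for b in range(q)]

def mul(f, g):
    # Composition of affine maps: (f o g)(x) = a1*(a2*x + b2) + b1.
    (a1, b1), (a2, b2) = f, g
    return ((a1 * a2) % q, (a1 * b2 + b1) % q)

identity = (1, 0)

def centraliser(x):
    return [g for g in G if mul(g, x) == mul(x, g)]

# The group is non-abelian...
assert any(mul(f, g) != mul(g, f) for f in G for g in G)

# ...but the centraliser of every non-identity element is abelian (the CA property).
for x in G:
    if x != identity:
        C = centraliser(x)
        assert all(mul(u, v) == mul(v, u) for u in C for v in C)
```

(Of course, this group has even order ${q(q-1)}$, so it does not contradict Theorem 1; it merely illustrates the CA property itself.)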

Moving even further down the ladder of simple precursors of the CFSG is the following theorem of Frobenius from 1901. Define a Frobenius group to be a finite group ${G}$ which has a subgroup ${H}$ (called the Frobenius complement) with the property that all the non-trivial conjugates ${gHg^{-1}}$ of ${H}$, for ${g \in G \backslash H}$, intersect ${H}$ only at the identity. For instance, the ${ax+b}$ group is also a Frobenius group (take ${H}$ to be the affine transformations that fix a specified point ${x_0 \in F}$, e.g. the origin). This example suggests that there is some overlap between the notions of a Frobenius group and a CA-group. Indeed, note that if ${G}$ is a CA-group and ${H}$ is a maximal abelian subgroup of ${G}$, then any conjugate ${gHg^{-1}}$ of ${H}$ that is not identical to ${H}$ will intersect ${H}$ only at the identity (because ${H}$ and each of its conjugates consist of equivalence classes under the commuting relation ${\sim}$, together with the identity). So if a maximal abelian subgroup ${H}$ of a CA-group is its own normaliser (thus ${N(H) := \{ g \in G: gH=Hg\}}$ is equal to ${H}$), then the group is a Frobenius group.

Frobenius’ theorem places an unexpectedly strong amount of structure on a Frobenius group:

Theorem 2 (Frobenius’ theorem) Let ${G}$ be a Frobenius group with Frobenius complement ${H}$. Then there exists a normal subgroup ${K}$ of ${G}$ (called the Frobenius kernel of ${G}$) such that ${G}$ is the semi-direct product ${H \ltimes K}$ of ${H}$ and ${K}$.

Roughly speaking, this theorem indicates that all Frobenius groups “behave” like the ${ax+b}$ example (which is a quintessential example of a semi-direct product).

Note that if every CA-group of odd order were either Frobenius or abelian, then Theorem 2 would imply Theorem 1 by an induction on the order of ${G}$, since any subgroup of a CA-group is clearly again a CA-group. Indeed, the proof of Suzuki’s theorem does basically proceed by this route (Suzuki’s arguments do indeed imply that CA-groups of odd order are Frobenius or abelian, although we will not quite establish that fact here).

Frobenius’ theorem can be reformulated in the following concrete combinatorial form:

Theorem 3 (Frobenius’ theorem, equivalent version) Let ${G}$ be a group of permutations acting transitively on a finite set ${X}$, with the property that any non-identity permutation in ${G}$ fixes at most one point in ${X}$. Then the set of permutations in ${G}$ that fix no points in ${X}$, together with the identity, is closed under composition.

Again, a good example to keep in mind for this theorem is when ${G}$ is the group of affine permutations on a field ${F}$ (i.e. the ${ax+b}$ group for that field), and ${X}$ is the set of points on that field. In that case, the set of permutations in ${G}$ that do not fix any points are the non-trivial translations.
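Both the hypothesis and the conclusion of Theorem 3 can be checked directly in this example. Here is a brute-force Python sketch (again purely illustrative, with ${F_5}$ an arbitrary choice of field) confirming that every non-identity affine permutation fixes at most one point, and that the fixed-point-free elements together with the identity (namely, the translations) are closed under composition:

```python
q = 5  # the field F_q, with q = 5 chosen for illustration

# The ax+b group as permutations of X = {0, ..., q-1}: (a, b) sends x to a*x + b.
G = [(a, b) for a in range(1, q) for b in range(q)]

def apply(g, x):
    a, b = g
    return (a * x + b) % q

def mul(f, g):
    # Composition of affine maps: (f o g)(x) = a1*(a2*x + b2) + b1.
    (a1, b1), (a2, b2) = f, g
    return ((a1 * a2) % q, (a1 * b2 + b1) % q)

identity = (1, 0)

def fixed_points(g):
    return [x for x in range(q) if apply(g, x) == x]

# Hypothesis of Theorem 3: every non-identity element fixes at most one point.
assert all(len(fixed_points(g)) <= 1 for g in G if g != identity)

# Conclusion of Theorem 3: the fixed-point-free elements, together with the
# identity, form a set K closed under composition.
K = [g for g in G if not fixed_points(g)] + [identity]
assert all(mul(u, v) in K for u in K for v in K)

# In this example, K is exactly the group of translations x -> x + b.
assert sorted(K) == sorted((1, b) for b in range(q))
```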

To deduce Theorem 3 from Theorem 2, one applies Theorem 2 to the stabiliser of a single point in ${X}$. Conversely, to deduce Theorem 2 from Theorem 3, set ${X := G/H = \{ gH: g \in G \}}$ to be the space of left-cosets of ${H}$, with the obvious left ${G}$-action; one easily verifies that this action is faithful, transitive, and each non-identity element ${g}$ of ${G}$ fixes at most one left-coset of ${H}$ (basically because it lies in at most one conjugate of ${H}$). If we let ${K}$ be the elements of ${G}$ that do not fix any point in ${X}$, plus the identity, then by Theorem 3 ${K}$ is closed under composition; it is also clearly closed under inverse and conjugation, and is hence a normal subgroup of ${G}$. By construction, ${K}$ is the identity plus the complement of all the ${|G|/|H|}$ conjugates of ${H}$, which are all disjoint except at the identity, so by counting elements we see that

$\displaystyle |K| = |G| - \frac{|G|}{|H|}(|H|-1) = |G|/|H|.$

As ${H}$ normalises ${K}$ and is disjoint from ${K}$, we thus see that ${KH = H \ltimes K}$ is all of ${G}$, giving Theorem 2.
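The counting argument above can be checked numerically in the ${ax+b}$ example. The following Python sketch (with ${F_7}$ an arbitrary illustrative choice, and ${H}$ taken to be the stabiliser of the origin) verifies that ${H}$ has ${|G|/|H|}$ conjugates, pairwise disjoint away from the identity, and that the resulting kernel ${K}$ has the predicted order ${|G|/|H|}$:

```python
q = 7  # the ax+b group over F_q; q = 7 is an arbitrary illustrative choice

G = [(a, b) for a in range(1, q) for b in range(q)]

def mul(f, g):
    # Composition of affine maps: (f o g)(x) = a1*(a2*x + b2) + b1.
    (a1, b1), (a2, b2) = f, g
    return ((a1 * a2) % q, (a1 * b2 + b1) % q)

def inv(g):
    # Inverse of x -> a*x + b is x -> a^{-1}*x - a^{-1}*b.
    a, b = g
    a_inv = pow(a, -1, q)  # modular inverse (Python 3.8+)
    return (a_inv, (-a_inv * b) % q)

# H = stabiliser of the origin: the maps x -> a*x (a Frobenius complement).
H = [(a, 0) for a in range(1, q)]

# Collect the distinct conjugates g H g^{-1} of H.
conjugates = {tuple(sorted(mul(mul(g, h), inv(g)) for h in H)) for g in G}
assert len(conjugates) == len(G) // len(H)

# K = identity plus the complement of the union of all conjugates of H.
union = set().union(*[set(c) for c in conjugates])
K = [g for g in G if g not in union] + [(1, 0)]

# The counting identity: |K| = |G| - (|G|/|H|)(|H|-1) = |G|/|H|.
assert len(K) == len(G) - (len(G) // len(H)) * (len(H) - 1)
assert len(K) == len(G) // len(H)
```

Here ${K}$ comes out to be the group of translations, in accordance with Theorem 2 and the discussion of the ${ax+b}$ example.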

Despite the appealingly concrete and elementary form of Theorem 3, the only known proofs of that theorem (or equivalently, Theorem 2) in its full generality proceed via the machinery of group characters (which one can think of as a version of Fourier analysis for nonabelian groups). On the other hand, once one establishes the basic theory of these characters (reviewed below the fold), the proof of Frobenius’ theorem is very short, which gives quite a striking example of the power of character theory. The proof of Suzuki’s theorem also proceeds via character theory, and is basically a more involved version of the Frobenius argument; again, no character-free proof of Suzuki’s theorem is currently known. (The proofs of Feit-Thompson and CFSG also involve characters, but those proofs also contain many other arguments of much greater complexity than the character-based portions of the proof.)

It seems to me that the above four theorems (Frobenius, Suzuki, Feit-Thompson, and CFSG) provide a ladder of sorts (with exponentially increasing complexity at each step) to the full classification, and that any new approach to the classification might first begin by revisiting the earlier theorems on this ladder and finding new proofs of these results first (in particular, if one had a “robust” proof of Suzuki’s theorem that also gave non-trivial control on “almost CA-groups” – whatever that means – then this might lead to a new route to classifying the finite simple groups of Lie type and bounded rank). But even for the simplest two results on this ladder – Frobenius and Suzuki – it seems remarkably difficult to find any proof that is not essentially the character-based proof. (Even trying to replace character theory by its close cousin, representation theory, doesn’t seem to work unless one gives in to the temptation to take traces everywhere and put the characters back in; it seems that rather than abandon characters altogether, one needs to find some sort of “robust” generalisation of existing character-based methods.) In any case, I am recording here the standard character-based proofs of the theorems of Frobenius and Suzuki below the fold. There is nothing particularly novel here, but I wanted to collect all the relevant material in one place, largely for my own benefit.