
Define a partition of ${1}$ to be a finite or infinite multiset ${\Sigma}$ of real numbers in the interval ${I := (0,1]}$ (that is, an unordered set of real numbers in ${I}$, possibly with multiplicity) whose total sum is ${1}$: ${\sum_{t \in \Sigma}t = 1}$. For instance, ${\{1/2,1/4,1/8,1/16,\ldots\}}$ is a partition of ${1}$. Such partitions arise naturally when trying to decompose a large object into smaller ones, for instance:

1. (Prime factorisation) Given a natural number ${n}$, one can decompose it into prime factors ${n = p_1 \ldots p_k}$ (counting multiplicity), and then the multiset

$\displaystyle \Sigma_{PF}(n) := \{ \frac{\log p_1}{\log n}, \ldots,\frac{\log p_k}{\log n} \}$

is a partition of ${1}$.

2. (Cycle decomposition) Given a permutation ${\sigma \in S_n}$ on ${n}$ labels ${\{1,\ldots,n\}}$, one can decompose ${\sigma}$ into cycles ${C_1,\ldots,C_k}$, and then the multiset

$\displaystyle \Sigma_{CD}(\sigma) := \{ \frac{|C_1|}{n}, \ldots, \frac{|C_k|}{n} \}$

is a partition of ${1}$.

3. (Normalisation) Given a multiset ${\Gamma}$ of positive real numbers whose sum ${S := \sum_{x\in \Gamma}x}$ is finite and non-zero, the multiset

$\displaystyle \Sigma_N( \Gamma) := \frac{1}{S} \cdot \Gamma = \{ \frac{x}{S}: x \in \Gamma \}$

is a partition of ${1}$.
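To make the three examples concrete, here is a short Python sketch (my own illustration for this post; the function names are invented) that builds each of the three partitions of ${1}$:

```python
from math import log

def prime_factors(n):
    """Trial-division prime factorisation of n, with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def partition_from_factorisation(n):
    """Sigma_PF(n): the multiset {log p / log n : p | n, with multiplicity}."""
    return [log(p) / log(n) for p in prime_factors(n)]

def partition_from_cycles(perm):
    """Sigma_CD(sigma): cycle lengths of a permutation (a dict or list mapping
    each label to its image), each divided by n."""
    n, seen, parts = len(perm), set(), []
    for start in perm:
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        parts.append(length / n)
    return parts

def partition_from_normalisation(gamma):
    """Sigma_N(Gamma): divide each element of a finite multiset by the total sum."""
    s = sum(gamma)
    return [x / s for x in gamma]
```

In each case the output is a list of positive reals summing to ${1}$, i.e. a finite partition in the sense above.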

In the spirit of the universality phenomenon, one can ask what a “typical” partition should look like; thus one seeks a natural probability distribution on the space of all partitions, analogous to (say) the gaussian distributions on the real line, or GUE distributions on point processes on the line, and so forth. It turns out that there is one natural such distribution, related to all three examples above, known as the Poisson-Dirichlet distribution. To describe this distribution, we first have to deal with the problem that it is not immediately obvious how to cleanly parameterise the space of partitions, given that the cardinality of a partition can be finite or infinite, that multiplicity is allowed, and that we would like to identify two partitions that are permutations of each other.

One way to proceed is to view a random partition ${\Sigma}$ as a type of point process on the interval ${I}$, with the constraint that ${\sum_{x \in \Sigma} x = 1}$, in which case one can study statistics such as the counting functions

$\displaystyle N_{[a,b]} := |\Sigma \cap [a,b]| = \sum_{x \in\Sigma} 1_{[a,b]}(x)$

(where the cardinality here counts multiplicity). This can certainly be done, although in the case of the Poisson-Dirichlet process, the formulae for the joint distribution of such counting functions are moderately complicated. Another way to proceed is to order the elements of ${\Sigma}$ in decreasing order

$\displaystyle t_1 \geq t_2 \geq t_3 \geq \ldots \geq 0,$

with the convention that one pads the sequence ${t_n}$ by an infinite number of zeroes if ${\Sigma}$ is finite; this identifies the space of partitions with an infinite dimensional simplex

$\displaystyle \{ (t_1,t_2,\ldots) \in [0,1]^{\bf N}: t_1 \geq t_2 \geq \ldots; \sum_{n=1}^\infty t_n = 1 \}.$

However, it turns out that the process of ordering the elements is not “smooth” (basically because functions such as ${(x,y) \mapsto \max(x,y)}$ and ${(x,y) \mapsto \min(x,y)}$ are not smooth) and the formulae for the joint distribution in the case of the Poisson-Dirichlet process are again complicated.

It turns out that there is a better (or at least “smoother”) way to enumerate the elements of a partition ${\Sigma}$ than the ordered method, although it is random rather than deterministic. This procedure (which I learned from this paper of Donnelly and Grimmett) works as follows.

1. Given a partition ${\Sigma}$, let ${u_1}$ be an element of ${\Sigma}$ chosen at random, with each element ${t\in \Sigma}$ having a probability ${t}$ of being chosen as ${u_1}$ (so if ${t \in \Sigma}$ occurs with multiplicity ${m}$, the net probability that ${t}$ is chosen as ${u_1}$ is actually ${mt}$). Note that this is well-defined since the elements of ${\Sigma}$ sum to ${1}$.
2. Now suppose ${u_1}$ is chosen. If ${\Sigma \backslash \{u_1\}}$ is empty, we set ${u_2,u_3,\ldots}$ all equal to zero and stop. Otherwise, let ${u_2}$ be an element of ${\frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})}$ chosen at random, with each element ${t \in \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})}$ having a probability ${t}$ of being chosen as ${u_2}$. (For instance, if ${u_1}$ occurred with some multiplicity ${m>1}$ in ${\Sigma}$, then ${u_2}$ can equal ${\frac{u_1}{1-u_1}}$ with probability ${(m-1)u_1/(1-u_1)}$.)
3. Now suppose ${u_1,u_2}$ are both chosen. If ${\Sigma \backslash \{u_1,u_2\}}$ is empty, we set ${u_3, u_4, \ldots}$ all equal to zero and stop. Otherwise, let ${u_3}$ be an element of ${\frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})}$ chosen at random, with each element ${t \in \frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})}$ having a probability ${t}$ of being chosen as ${u_3}$.
4. We continue this process indefinitely to create elements ${u_1,u_2,u_3,\ldots \in [0,1]}$.

We denote the random sequence ${Enum(\Sigma) := (u_1,u_2,\ldots) \in [0,1]^{\bf N}}$ formed from a partition ${\Sigma}$ in the above manner as the random normalised enumeration of ${\Sigma}$; this is a random variable in the infinite unit cube ${[0,1]^{\bf N}}$, and can be defined recursively by the formula

$\displaystyle Enum(\Sigma) = (u_1, Enum(\frac{1}{1-u_1} \cdot (\Sigma\backslash \{u_1\})))$

with ${u_1}$ drawn randomly from ${\Sigma}$, with each element ${t \in \Sigma}$ chosen with probability ${t}$, except when ${\Sigma =\{1\}}$ in which case we instead have

$\displaystyle Enum(\{1\}) = (1, 0,0,\ldots).$

Note that one can recover ${\Sigma}$ from any of its random normalised enumerations ${Enum(\Sigma) := (u_1,u_2,\ldots)}$ by the formula

$\displaystyle \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\} \ \ \ \ \ (1)$

with the convention that one discards any zero elements on the right-hand side. Thus ${Enum}$ can be viewed as a (stochastic) parameterisation of the space of partitions by the unit cube ${[0,1]^{\bf N}}$, which is a simpler domain to work with than the infinite-dimensional simplex mentioned earlier.
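For finite partitions, both the random normalised enumeration and its inverse (1) can be implemented in a few lines of Python (a sketch of my own, not from the paper of Donnelly and Grimmett; partitions are represented as lists of positive reals summing to ${1}$):

```python
import random

def enum(sigma, length=10, rng=random):
    """Random normalised enumeration Enum(Sigma) of a finite partition of 1,
    padded with zeroes to the given length."""
    sigma = list(sigma)
    u = []
    remaining = 1.0          # total mass of the not-yet-chosen elements
    while sigma and len(u) < length:
        # choose an element with probability proportional to its size;
        # conditionally on the past, this is the required probability t/remaining
        t = rng.choices(sigma, weights=sigma)[0]
        sigma.remove(t)
        u.append(t / remaining)
        remaining -= t
    u += [0.0] * (length - len(u))
    return u

def partition_from_enum(u):
    """Invert via (1): Sigma = {u1, (1-u1)u2, (1-u1)(1-u2)u3, ...},
    discarding zero elements."""
    parts, scale = [], 1.0
    for x in u:
        if x == 0:
            break
        parts.append(scale * x)
        scale *= 1 - x
    return parts
```

Round-tripping a partition through `enum` and `partition_from_enum` recovers it (up to reordering and floating-point error), which is exactly the content of formula (1).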

Note that this random enumeration procedure can also be adapted to the three models described earlier:

1. Given a natural number ${n}$, one can randomly enumerate its prime factors ${n =p'_1 p'_2 \ldots p'_k}$ by letting each prime factor ${p}$ of ${n}$ be equal to ${p'_1}$ with probability ${\frac{\log p}{\log n}}$, then once ${p'_1}$ is chosen, let each remaining prime factor ${p}$ of ${n/p'_1}$ be equal to ${p'_2}$ with probability ${\frac{\log p}{\log(n/p'_1)}}$, and so forth.
2. Given a permutation ${\sigma\in S_n}$, one can randomly enumerate its cycles ${C'_1,\ldots,C'_k}$ by letting each cycle ${C}$ in ${\sigma}$ be equal to ${C'_1}$ with probability ${\frac{|C|}{n}}$, and once ${C'_1}$ is chosen, let each remaining cycle ${C}$ be equal to ${C'_2}$ with probability ${\frac{|C|}{n-|C'_1|}}$, and so forth. Alternatively, one can traverse the elements of ${\{1,\ldots,n\}}$ in random order, then let ${C'_1}$ be the first cycle one encounters when performing this traversal, let ${C'_2}$ be the next cycle (not equal to ${C'_1}$) one encounters when performing this traversal, and so forth.
3. Given a multiset ${\Gamma}$ of positive real numbers whose sum ${S := \sum_{x\in\Gamma} x}$ is finite and non-zero, we can randomly enumerate the elements ${x'_1,x'_2,\ldots}$ of this multiset by letting each ${x \in \Gamma}$ have a ${\frac{x}{S}}$ probability of being set equal to ${x'_1}$, and then once ${x'_1}$ is chosen, let each remaining ${x \in \Gamma\backslash \{x'_1\}}$ have a ${\frac{x}{S-x'_1}}$ probability of being set equal to ${x'_2}$, and so forth.
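For instance, the traversal-based enumeration of cycles in the second item can be sketched as follows (an illustration of mine; `perm` stores a permutation as a dict or list mapping each label to its image). Note that a cycle of length ${\ell}$ is encountered first with probability ${\ell/n}$, which is exactly the size-biased rule described above.

```python
import random

def random_cycle_enumeration(perm, rng=random):
    """Enumerate the cycles of a permutation in the random order in which they
    are first met by a uniformly random traversal of the labels; returns the
    list of cycle lengths in that (random) order."""
    order = list(perm)
    rng.shuffle(order)
    seen, lengths = set(), []
    for start in order:
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:      # walk along the cycle containing `start`
            seen.add(i)
            i = perm[i]
            length += 1
        lengths.append(length)
    return lengths
```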

We then have the following result:

Proposition 1 (Existence of the Poisson-Dirichlet process) There exists a random partition ${\Sigma}$ whose random enumeration ${Enum(\Sigma) = (u_1,u_2,\ldots)}$ has the uniform distribution on ${[0,1]^{\bf N}}$, thus ${u_1,u_2,\ldots}$ are independently and identically distributed copies of the uniform distribution on ${[0,1]}$.

A random partition ${\Sigma}$ with this property will be called the Poisson-Dirichlet process. This process, first introduced by Kingman, can be described explicitly using (1) as

$\displaystyle \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\},$

where ${u_1,u_2,\ldots}$ are iid copies of the uniform distribution on ${[0,1]}$, although it is not immediately obvious from this definition that ${Enum(\Sigma)}$ is indeed uniformly distributed on ${[0,1]^{\bf N}}$. We prove this proposition below the fold.
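In particular, a truncated sample of the Poisson-Dirichlet process is easy to generate from this stick-breaking description (a sketch of mine; the truncation depth is a parameter, and the neglected mass has expectation ${2^{-\hbox{depth}}}$):

```python
import random

def poisson_dirichlet_sample(depth=60, rng=None):
    """Truncated sample of the Poisson-Dirichlet process via
    Sigma = {u1, (1-u1)u2, (1-u1)(1-u2)u3, ...}, u_i iid uniform on [0,1]."""
    rng = rng or random.Random()
    parts, scale = [], 1.0
    for _ in range(depth):
        u = rng.random()
        parts.append(scale * u)
        scale *= 1 - u   # mass not yet assigned; E[scale] = 2^{-depth} at the end
    return parts
```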

An equivalent definition of a Poisson-Dirichlet process is a random partition ${\Sigma}$ with the property that

$\displaystyle (u_1, \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})) \equiv (U, \Sigma) \ \ \ \ \ (2)$

where ${u_1}$ is a random element of ${\Sigma}$ with each ${t \in\Sigma}$ having a probability ${t}$ of being equal to ${u_1}$, ${U}$ is a uniform variable on ${[0,1]}$ that is independent of ${\Sigma}$, and ${\equiv}$ denotes equality of distribution. This can be viewed as a sort of stochastic self-similarity property of ${\Sigma}$: if one randomly removes one element from ${\Sigma}$ and rescales, one gets a new copy of ${\Sigma}$.

It turns out that each of the three ways to generate partitions listed above can lead to the Poisson-Dirichlet process, either directly or in a suitable limit. We begin with the third way, namely by normalising a Poisson process to have sum ${1}$:

Proposition 2 (Poisson-Dirichlet processes via Poisson processes) Let ${a>0}$, and let ${\Gamma_a}$ be a Poisson process on ${(0,+\infty)}$ with intensity function ${t \mapsto \frac{1}{t} e^{-at}}$. Then the sum ${S :=\sum_{x \in \Gamma_a} x}$ is almost surely finite, and the normalisation ${\Sigma_N(\Gamma_a) = \frac{1}{S} \cdot \Gamma_a}$ is a Poisson-Dirichlet process.
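Proposition 2 can also be explored numerically. The sketch below (my own, using only the standard library) truncates the Poisson process to ${[\epsilon,\infty)}$ — the discarded points in ${(0,\epsilon)}$ have expected total mass ${\int_0^\epsilon e^{-at}\,dt < \epsilon}$ — using the fact that the number of points in ${[\epsilon,\infty)}$ is Poisson with mean ${\int_\epsilon^\infty t^{-1} e^{-at}\,dt = E_1(a\epsilon)}$ (the exponential integral), and samples each point by rejection from a shifted exponential proposal:

```python
import random
from math import exp, log

EULER_GAMMA = 0.5772156649015329

def exp1(x):
    """Exponential integral E1(x) for small x > 0, via the standard series
    E1(x) = -gamma - log x + sum_{k>=1} (-1)^{k+1} x^k / (k * k!)."""
    s, term = 0.0, 1.0
    for k in range(1, 40):
        term *= -x / k
        s -= term / k
    return -EULER_GAMMA - log(x) + s

def sample_poisson(mean, rng):
    """Knuth's Poisson sampler (fine for moderate means)."""
    limit, k, p = exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def truncated_gamma_process(a=1.0, eps=1e-3, rng=None):
    """Points of a Poisson process with intensity e^{-at}/t on [eps, infinity)."""
    rng = rng or random.Random()
    n = sample_poisson(exp1(a * eps), rng)
    points = []
    while len(points) < n:
        t = eps + rng.expovariate(a)  # proposal density a*exp(-a(t-eps))
        if rng.random() < eps / t:    # rejection step: target proportional to e^{-at}/t
            points.append(t)
    return points
```

Dividing the output of `truncated_gamma_process` by its sum then gives an approximate sample of the Poisson-Dirichlet process, per the proposition.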

Again, we prove this proposition below the fold. Now we turn to the second way (a topic, incidentally, that was briefly touched upon in this previous blog post):

Proposition 3 (Large cycles of a typical permutation) For each natural number ${n}$, let ${\sigma}$ be a permutation drawn uniformly at random from ${S_n}$. Then the random partition ${\Sigma_{CD}(\sigma)}$ converges in the limit ${n \rightarrow\infty}$ to a Poisson-Dirichlet process ${\Sigma}$ in the following sense: given any fixed sequence of intervals ${[a_1,b_1],\ldots,[a_k,b_k] \subset I}$ (independent of ${n}$), the joint discrete random variable ${(N_{[a_1,b_1]}(\Sigma_{CD}(\sigma)),\ldots,N_{[a_k,b_k]}(\Sigma_{CD}(\sigma)))}$ converges in distribution to ${(N_{[a_1,b_1]}(\Sigma),\ldots,N_{[a_k,b_k]}(\Sigma))}$.
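As a Monte Carlo sanity check of Proposition 3 (a sketch of mine, not part of the argument): one can compute that the Poisson-Dirichlet process has ${\mathop{\bf E} N_{[a,1]}(\Sigma) = \log \frac{1}{a}}$, while a uniform random permutation has on average ${\sum_{an \leq k \leq n} \frac{1}{k} \approx \log \frac{1}{a}}$ cycles of length at least ${an}$, and the agreement is visible empirically:

```python
import random

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list: i -> perm[i]."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        lengths.append(length)
    return lengths

def mean_large_cycle_count(n=100, a=0.5, trials=2000, seed=0):
    """Average number of cycles of relative length >= a in a uniform random
    permutation in S_n; should approach log(1/a) as n -> infinity."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += sum(1 for c in cycle_lengths(perm) if c >= a * n)
    return total / trials
```

With ${n=100}$ and ${a=1/2}$ the empirical mean is already close to ${\log 2 \approx 0.693}$.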

Finally, we turn to the first way:

Proposition 4 (Large prime factors of a typical number) Let ${x > 0}$, and let ${N_x}$ be a random natural number chosen according to one of the following three rules:

1. (Uniform distribution) ${N_x}$ is drawn uniformly at random from the natural numbers in ${[1,x]}$.
2. (Shifted uniform distribution) ${N_x}$ is drawn uniformly at random from the natural numbers in ${[x,2x]}$.
3. (Zeta distribution) Each natural number ${n}$ has a probability ${\frac{1}{\zeta(s)}\frac{1}{n^s}}$ of being equal to ${N_x}$, where ${s := 1 +\frac{1}{\log x}}$ and ${\zeta(s):=\sum_{n=1}^\infty \frac{1}{n^s}}$.

Then ${\Sigma_{PF}(N_x)}$ converges as ${x \rightarrow \infty}$ to a Poisson-Dirichlet process ${\Sigma}$ in the same fashion as in Proposition 3.

The process ${\Sigma_{PF}(N_x)}$ was first studied by Billingsley (and also later by Knuth-Trabb Pardo and by Vershik), but the formulae were initially rather complicated; the proposition above is due to Donnelly and Grimmett, although the third case of the proposition is substantially easier and appears in the earlier work of Lloyd. We prove the proposition below the fold.

The previous two propositions suggest an interesting analogy between large random integers and large random permutations; see this ICM article of Vershik and this non-technical article of Granville (which, incidentally, was once adapted into a play) for further discussion.

As a sample application, consider the problem of estimating the number ${\pi(x,x^{1/u})}$ of integers up to ${x}$ which are not divisible by any prime larger than ${x^{1/u}}$ (i.e. they are ${x^{1/u}}$-smooth), where ${u>0}$ is a fixed real number. This is essentially (modulo some inessential technicalities concerning the distinction between the intervals ${[x,2x]}$ and ${[1,x]}$) the probability that ${\Sigma_{PF}(N_x)}$ avoids ${[1/u,1]}$, which by the above proposition converges to the probability ${\rho(u)}$ that ${\Sigma}$ avoids ${[1/u,1]}$. Below the fold we will show that this function is given by the Dickman function, defined by setting ${\rho(u)=1}$ for ${u \leq 1}$ and ${u\rho'(u) = -\rho(u-1)}$ for ${u > 1}$, thus recovering the classical result of Dickman that ${\pi(x,x^{1/u}) = (\rho(u)+o(1))x}$.
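The Dickman function is easy to compute numerically from its delay-differential equation ${u\rho'(u) = -\rho(u-1)}$ (for ${u>1}$, with ${\rho = 1}$ on ${[0,1]}$); the sketch below (mine) integrates it with the trapezoid rule on a uniform grid, recovering for instance the classical value ${\rho(2) = 1 - \log 2}$:

```python
def dickman(u, m=1000):
    """Dickman's rho(u), from u rho'(u) = -rho(u-1) with rho = 1 on [0, 1],
    integrated by the trapezoid rule with m grid points per unit interval."""
    if u <= 1:
        return 1.0
    h = 1.0 / m
    n = int(round(u * m))
    rho = [1.0] * (n + 1)          # rho[k] approximates rho(k*h); exact for k <= m
    for k in range(m, n):
        t0, t1 = k * h, (k + 1) * h
        d0 = -rho[k - m] / t0      # rho'(t0), using the already-known delayed value
        d1 = -rho[k + 1 - m] / t1  # rho'(t1), likewise known thanks to the delay
        rho[k + 1] = rho[k] + h * (d0 + d1) / 2
    return rho[n]
```

(For ${1 < u \leq 2}$ this reduces to ${\rho(u) = 1 - \log u}$, which the scheme reproduces to within ${O(h^2)}$.)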

I thank Andrew Granville and Anatoly Vershik for showing me the nice link between prime factors and the Poisson-Dirichlet process. The material here is standard, and (like many of the other notes on this blog) was primarily written for my own benefit, but it may be of interest to some readers. In preparing this article I found this exposition by Kingman to be helpful.

Note: this article will emphasise the computations rather than rigour, and in particular will rely on informal use of infinitesimals to avoid dealing with stochastic calculus or other technicalities. We adopt the convention that we will neglect higher order terms in infinitesimal calculations, e.g. if ${dt}$ is infinitesimal then we will abbreviate ${dt + o(dt)}$ simply as ${dt}$.

In this previous post I recorded some (very standard) material on the structural theory of finite-dimensional complex Lie algebras (or Lie algebras for short), with a particular focus on those Lie algebras which were semisimple or simple. Among other things, these notes discussed the Weyl complete reducibility theorem (asserting that semisimple Lie algebras are the direct sum of simple Lie algebras) and the classification of simple Lie algebras (with all such Lie algebras being (up to isomorphism) of the form ${A_n}$, ${B_n}$, ${C_n}$, ${D_n}$, ${E_6}$, ${E_7}$, ${E_8}$, ${F_4}$, or ${G_2}$).

Among other things, the structural theory of Lie algebras can then be used to build analogous structures in nearby areas of mathematics, such as Lie groups and Lie algebras over more general fields than the complex field ${{\bf C}}$ (leading in particular to the notion of a Chevalley group), as well as finite simple groups of Lie type, which form the bulk of the classification of finite simple groups (with the exception of the alternating groups and a finite number of sporadic groups).

In the case of complex Lie groups, it turns out that every simple Lie algebra ${\mathfrak{g}}$ is associated with a finite number of connected complex Lie groups, ranging from a “minimal” Lie group ${G_{ad}}$ (the adjoint form of the Lie group) to a “maximal” Lie group ${\tilde G}$ (the simply connected form of the Lie group) that finitely covers ${G_{ad}}$, and occasionally also a number of intermediate forms which finitely cover ${G_{ad}}$, but are in turn finitely covered by ${\tilde G}$. For instance, ${\mathfrak{sl}_n({\bf C})}$ is associated with the projective special linear group ${\hbox{PSL}_n({\bf C}) = \hbox{PGL}_n({\bf C})}$ as its adjoint form and the special linear group ${\hbox{SL}_n({\bf C})}$ as its simply connected form, and intermediate groups can be created by quotienting out ${\hbox{SL}_n({\bf C})}$ by some subgroup of its centre (which is isomorphic to the ${n^{th}}$ roots of unity). The minimal form ${G_{ad}}$ is simple in the group-theoretic sense of having no normal subgroups, but the other forms of the Lie group are merely quasisimple, although traditionally all of the forms of a Lie group associated to a simple Lie algebra are known as simple Lie groups.

Thanks to the work of Chevalley, a very similar story holds for algebraic groups over arbitrary fields ${k}$; given any Dynkin diagram, one can define a simple Lie algebra with that diagram over that field, and also one can find a finite number of connected algebraic groups over ${k}$ (known as Chevalley groups) with that Lie algebra, ranging from an adjoint form ${G_{ad}}$ to a universal form ${G_u}$, with every form having an isogeny (the analogue of a finite cover for algebraic groups) to the adjoint form, and in turn receiving an isogeny from the universal form. Thus, for instance, one could construct the universal form ${E_7(q)_u}$ of the ${E_7}$ algebraic group over a finite field ${{\bf F}_q}$ of order ${q}$.

When one restricts the Chevalley group construction to adjoint forms over a finite field (e.g. ${\hbox{PSL}_n({\bf F}_q)}$), one usually obtains a finite simple group (with a finite number of exceptions when the rank and the field are very small, and in some cases one also has to pass to a bounded index subgroup, such as the derived group, first). One could also use other forms than the adjoint form, but one then recovers the same finite simple group as before if one quotients out by the centre. This construction was then extended by Steinberg, Suzuki, and Ree by taking a Chevalley group over a finite field and then restricting to the fixed points of a certain automorphism of that group; after some additional minor modifications such as passing to a bounded index subgroup or quotienting out a bounded centre, this gives some additional finite simple groups of Lie type, including classical examples such as the projective special unitary groups ${\hbox{PSU}_n({\bf F}_{q^2})}$, as well as some more exotic examples such as the Suzuki groups or the Ree groups.

While I learned most of the classical structural theory of Lie algebras back when I was an undergraduate, and have interacted with Lie groups in many ways in the past (most recently in connection with Hilbert’s fifth problem, as discussed in this previous series of lectures), I have only recently had the need to understand more precisely the concepts of a Chevalley group and of a finite simple group of Lie type, as well as better understand the structural theory of simple complex Lie groups. As such, I am recording some notes here regarding these concepts, mainly for my own benefit, but perhaps they will also be of use to some other readers. The material here is standard, and was drawn from a number of sources, but primarily from Carter, Gorenstein-Lyons-Solomon, and Fulton-Harris, as well as the lecture notes on Chevalley groups by my colleague Robert Steinberg. The arrangement of material also reflects my own personal preferences; in particular, I tend to favour complex-variable or Riemannian geometry methods over algebraic ones, and this influenced a number of choices I had to make regarding how to prove certain key facts. The notes below are far from a comprehensive or fully detailed discussion of these topics, and I would refer interested readers to the references above for a properly thorough treatment.

If ${f: {\bf R}^n \rightarrow {\bf C}}$ and ${g: {\bf R}^n \rightarrow {\bf C}}$ are two absolutely integrable functions on a Euclidean space ${{\bf R}^n}$, then the convolution ${f*g: {\bf R}^n \rightarrow {\bf C}}$ of the two functions is defined by the formula

$\displaystyle f*g(x) := \int_{{\bf R}^n} f(y) g(x-y)\ dy = \int_{{\bf R}^n} f(x-z) g(z)\ dz.$

A simple application of the Fubini-Tonelli theorem shows that the convolution ${f*g}$ is well-defined almost everywhere, and yields another absolutely integrable function. In the case that ${f=1_F}$, ${g=1_G}$ are indicator functions, the convolution simplifies to

$\displaystyle 1_F*1_G(x) = m( F \cap (x-G) ) = m( (x-F) \cap G ) \ \ \ \ \ (1)$

where ${m}$ denotes Lebesgue measure. One can also define convolution on more general locally compact groups than ${{\bf R}^n}$, but we will restrict attention to the Euclidean case in this post.

The convolution ${f*g}$ can also be defined by duality by observing the identity

$\displaystyle \int_{{\bf R}^n} f*g(x) h(x)\ dx = \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ f(y) dy g(z) dz$

for any bounded measurable function ${h: {\bf R}^n \rightarrow {\bf C}}$. Motivated by this observation, we may define the convolution ${\mu*\nu}$ of two finite Borel measures on ${{\bf R}^n}$ by the formula

$\displaystyle \int_{{\bf R}^n} h(x)\ d\mu*\nu(x) := \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (2)$

for any bounded (Borel) measurable function ${h: {\bf R}^n \rightarrow {\bf C}}$, or equivalently that

$\displaystyle \mu*\nu(E) = \int_{{\bf R}^n} \int_{{\bf R}^n} 1_E(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (3)$

for all Borel measurable ${E}$. (In another equivalent formulation: ${\mu*\nu}$ is the pushforward of the product measure ${\mu \times \nu}$ with respect to the addition map ${+: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n}$.) This can easily be verified to again be a finite Borel measure.

If ${\mu}$ and ${\nu}$ are probability measures, then the convolution ${\mu*\nu}$ also has a simple probabilistic interpretation: it is the law (i.e. probability distribution) of a random variable of the form ${X+Y}$, where ${X, Y}$ are independent random variables taking values in ${{\bf R}^n}$ with law ${\mu,\nu}$ respectively. Among other things, this interpretation makes it obvious that the support of ${\mu*\nu}$ is the sumset of the supports of ${\mu}$ and ${\nu}$, and that ${\mu*\nu}$ will also be a probability measure.
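This probabilistic interpretation also gives the most direct way to sample from a convolution of probability measures: draw the two variables independently and add them. A minimal sketch (mine, in one dimension for brevity):

```python
import random

def sample_convolution(sample_mu, sample_nu, trials, rng):
    """Draw `trials` samples from mu*nu, using the fact that mu*nu is the law
    of X + Y with X ~ mu and Y ~ nu independent."""
    return [sample_mu(rng) + sample_nu(rng) for _ in range(trials)]

# Example: uniform[0,1] convolved with itself gives the triangular law on [0,2].
rng = random.Random(0)
samples = sample_convolution(lambda r: r.random(), lambda r: r.random(), 20000, rng)
```

As predicted, the samples all land in the sumset ${[0,1]+[0,1]=[0,2]}$, and the empirical mean is close to the sum ${1/2+1/2}$ of the two means.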

While the above discussion gives a perfectly rigorous definition of the convolution of two measures, it does not always give helpful guidance as to how to compute the convolution of two explicit measures (e.g. the convolution of two surface measures on explicit examples of surfaces, such as the sphere). In simple cases, one can work from first principles directly from the definition (2), (3), perhaps after some application of tools from several variable calculus, such as the change of variables formula. Another technique proceeds by regularisation, approximating the measures ${\mu, \nu}$ involved as the weak limit (or vague limit) of absolutely integrable functions

$\displaystyle \mu = \lim_{\epsilon \rightarrow 0} f_\epsilon; \quad \nu =\lim_{\epsilon \rightarrow 0} g_\epsilon$

(where we identify an absolutely integrable function ${f}$ with the associated absolutely continuous measure ${dm_f(x) := f(x)\ dx}$) which then implies (assuming that the sequences ${f_\epsilon,g_\epsilon}$ are tight) that ${\mu*\nu}$ is the weak limit of the ${f_\epsilon * g_\epsilon}$. The latter convolutions ${f_\epsilon * g_\epsilon}$, being convolutions of functions rather than measures, can be computed (or at least estimated) by traditional integration techniques, at which point the only difficulty is to ensure that one has enough uniformity in ${\epsilon}$ to maintain control of the limit as ${\epsilon \rightarrow 0}$.

A third method proceeds using the Fourier transform

$\displaystyle \hat \mu(\xi) := \int_{{\bf R}^n} e^{-2\pi i x \cdot \xi}\ d\mu(x)$

of ${\mu}$ (and of ${\nu}$). We have

$\displaystyle \widehat{\mu*\nu}(\xi) = \hat{\mu}(\xi) \hat{\nu}(\xi)$

and so one can (in principle, at least) compute ${\mu*\nu}$ by taking Fourier transforms, multiplying them together, and applying the (distributional) inverse Fourier transform. Heuristically, this formula implies that the Fourier transform of ${\mu*\nu}$ should be concentrated in the intersection of the frequency region where the Fourier transform of ${\mu}$ is supported, and the frequency region where the Fourier transform of ${\nu}$ is supported. As the regularity of a measure is related to decay of its Fourier transform, this also suggests that the convolution ${\mu*\nu}$ of two measures will typically be more regular than each of the two original measures, particularly if the Fourier transforms of ${\mu}$ and ${\nu}$ are concentrated in different regions of frequency space (which should happen if the measures ${\mu,\nu}$ are suitably “transverse”). In particular, it can happen that ${\mu*\nu}$ is an absolutely continuous measure, even if ${\mu}$ and ${\nu}$ are both singular measures.

Using intuition from microlocal analysis, we can combine our understanding of the spatial and frequency behaviour of convolution to the following heuristic: a convolution ${\mu*\nu}$ should be supported in regions of phase space ${\{ (x,\xi): x \in {\bf R}^n, \xi \in {\bf R}^n \}}$ of the form ${(x,\xi) = (x_1+x_2,\xi)}$, where ${(x_1,\xi)}$ lies in the region of phase space where ${\mu}$ is concentrated, and ${(x_2,\xi)}$ lies in the region of phase space where ${\nu}$ is concentrated. It is a challenge to make this intuition perfectly rigorous, as one has to somehow deal with the obstruction presented by the Heisenberg uncertainty principle, but it can be made rigorous in various asymptotic regimes, for instance using the machinery of wave front sets (which describes the high frequency limit of the phase space distribution).

Let us illustrate these three methods and the final heuristic with a simple example. Let ${\mu}$ be a singular measure on the horizontal unit interval ${[0,1] \times \{0\} = \{ (x,0): 0 \leq x \leq 1 \}}$, given by weighting Lebesgue measure on that interval by some test function ${\phi: {\bf R} \rightarrow {\bf C}}$ supported on ${[0,1]}$:

$\displaystyle \int_{{\bf R}^2} f(x,y)\ d\mu(x,y) := \int_{\bf R} f(x,0) \phi(x)\ dx.$

Similarly, let ${\nu}$ be a singular measure on the vertical unit interval ${\{0\} \times [0,1] = \{ (0,y): 0 \leq y \leq 1 \}}$ given by weighting Lebesgue measure on that interval by another test function ${\psi: {\bf R} \rightarrow {\bf C}}$ supported on ${[0,1]}$:

$\displaystyle \int_{{\bf R}^2} g(x,y)\ d\nu(x,y) := \int_{\bf R} g(0,y) \psi(y)\ dy.$

We can compute the convolution ${\mu*\nu}$ using (2), which in this case becomes

$\displaystyle \int_{{\bf R}^2} h( x, y ) d\mu*\nu(x,y) = \int_{{\bf R}^2} \int_{{\bf R}^2} h(x_1+x_2, y_1+y_2)\ d\mu(x_1,y_1) d\nu(x_2,y_2)$

$\displaystyle = \int_{\bf R} \int_{\bf R} h( x_1, y_2 )\ \phi(x_1) dx_1 \psi(y_2) dy_2$

and we thus conclude that ${\mu*\nu}$ is an absolutely continuous measure on ${{\bf R}^2}$ with density function ${(x,y) \mapsto \phi(x) \psi(y)}$:

$\displaystyle d(\mu*\nu)(x,y) = \phi(x) \psi(y) dx dy. \ \ \ \ \ (4)$

In particular, ${\mu*\nu}$ is supported on the unit square ${[0,1]^2}$, which is of course the sumset of the two intervals ${[0,1] \times\{0\}}$ and ${\{0\} \times [0,1]}$.

We can arrive at the same conclusion from the regularisation method; the computations become lengthier, but more geometric in nature, and emphasise the role of transversality between the two segments supporting ${\mu}$ and ${\nu}$. One can view ${\mu}$ as the weak limit of the functions

$\displaystyle f_\epsilon(x,y) := \frac{1}{\epsilon} \phi(x) 1_{[0,\epsilon]}(y)$

as ${\epsilon \rightarrow 0}$ (where we continue to identify absolutely integrable functions with absolutely continuous measures, and of course we keep ${\epsilon}$ positive). We can similarly view ${\nu}$ as the weak limit of

$\displaystyle g_\epsilon(x,y) := \frac{1}{\epsilon} 1_{[0,\epsilon]}(x) \psi(y).$

Let us first look at the model case when ${\phi=\psi=1_{[0,1]}}$, so that ${f_\epsilon,g_\epsilon}$ are renormalised indicator functions of thin rectangles:

$\displaystyle f_\epsilon = \frac{1}{\epsilon} 1_{[0,1]\times [0,\epsilon]}; \quad g_\epsilon = \frac{1}{\epsilon} 1_{[0,\epsilon] \times [0,1]}.$

By (1), the convolution ${f_\epsilon*g_\epsilon}$ is then given by

$\displaystyle f_\epsilon*g_\epsilon(x,y) := \frac{1}{\epsilon^2} m( E_\epsilon )$

where ${E_\epsilon}$ is the intersection of two rectangles:

$\displaystyle E_\epsilon := ([0,1] \times [0,\epsilon]) \cap ((x,y) - [0,\epsilon] \times [0,1]).$

When ${(x,y)}$ lies in the square ${[\epsilon,1] \times [\epsilon,1]}$, one readily sees (especially if one draws a picture) that ${E_\epsilon}$ consists of an ${\epsilon \times \epsilon}$ square and thus has measure ${\epsilon^2}$; conversely, if ${(x,y)}$ lies outside ${[0,1+\epsilon] \times [0,1+\epsilon]}$, ${E_\epsilon}$ is empty and thus has measure zero. In the intermediate region, ${E_\epsilon}$ will have some measure between ${0}$ and ${\epsilon^2}$. From this we see that ${f_\epsilon*g_\epsilon}$ converges pointwise almost everywhere to ${1_{[0,1] \times [0,1]}}$ while also being dominated by an absolutely integrable function, and so converges weakly to ${1_{[0,1] \times [0,1]}}$, giving a special case of the formula (4).
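The overlap computation just described can be checked mechanically: since ${f_\epsilon}$ and ${g_\epsilon}$ are (normalised) indicators of axis-parallel rectangles, the area ${m(E_\epsilon)}$ factors into a product of two interval overlaps. A small sketch (mine):

```python
def overlap(a, b, c, d):
    """Length of the intersection of the intervals [a, b] and [c, d]."""
    return max(0.0, min(b, d) - max(a, c))

def f_eps_conv_g_eps(x, y, eps):
    """f_eps * g_eps at (x, y), computed exactly as 1/eps^2 times the area of
    the intersection of [0,1] x [0,eps] with (x,y) - [0,eps] x [0,1]."""
    ox = overlap(0.0, 1.0, x - eps, x)       # horizontal overlap
    oy = overlap(0.0, eps, y - 1.0, y)       # vertical overlap
    return ox * oy / (eps * eps)
```

Evaluating at points inside ${[\epsilon,1]^2}$, outside ${[0,1+\epsilon]^2}$, and in the intermediate region reproduces the three cases discussed above.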

Exercise 1 Use a similar method to verify (4) in the case that ${\phi, \psi}$ are continuous functions on ${[0,1]}$. (The argument also works for absolutely integrable ${\phi,\psi}$, but one needs to invoke the Lebesgue differentiation theorem to make it run smoothly.)

Now we compute with the Fourier-analytic method. The Fourier transform ${\hat \mu(\xi,\eta)}$ of ${\mu}$ is given by

$\displaystyle \hat \mu(\xi,\eta) =\int_{{\bf R}^2} e^{-2\pi i (x \xi + y \eta)}\ d\mu(x,y)$

$\displaystyle = \int_{\bf R} \phi(x) e^{-2\pi i x \xi}\ dx$

$\displaystyle = \hat \phi(\xi)$

where we abuse notation slightly by using ${\hat \phi}$ to refer to the one-dimensional Fourier transform of ${\phi}$. In particular, ${\hat \mu}$ decays in the ${\xi}$ direction (by the Riemann-Lebesgue lemma) but has no decay in the ${\eta}$ direction, which reflects the horizontally grained structure of ${\mu}$. Similarly we have

$\displaystyle \hat \nu(\xi,\eta) = \hat \psi(\eta),$

so that ${\hat \nu}$ decays in the ${\eta}$ direction. The convolution ${\mu*\nu}$ then has decay in both the ${\xi}$ and ${\eta}$ directions,

$\displaystyle \widehat{\mu*\nu}(\xi,\eta) = \hat \phi(\xi) \hat \psi(\eta)$

and by inverting the Fourier transform we obtain (4).

Exercise 2 Let ${AB}$ and ${CD}$ be two non-parallel line segments in the plane ${{\bf R}^2}$. If ${\mu}$ is the uniform probability measure on ${AB}$ and ${\nu}$ is the uniform probability measure on ${CD}$, show that ${\mu*\nu}$ is the uniform probability measure on the parallelogram ${AB + CD}$ with vertices ${A+C, A+D, B+C, B+D}$. What happens in the degenerate case when ${AB}$ and ${CD}$ are parallel?

Finally, we compare the above answers with what one gets from the microlocal analysis heuristic. The measure ${\mu}$ is supported on the horizontal interval ${[0,1] \times \{0\}}$, and the cotangent bundle at any point on this interval points in the vertical direction. Thus, the wave front set of ${\mu}$ should be supported on those points ${((x_1,x_2),(\xi_1,\xi_2))}$ in phase space with ${x_1 \in [0,1]}$, ${x_2 = 0}$ and ${\xi_1=0}$. Similarly, the wave front set of ${\nu}$ should be supported at those points ${((y_1,y_2),(\xi_1,\xi_2))}$ with ${y_1 = 0}$, ${y_2 \in [0,1]}$, and ${\xi_2=0}$. The convolution ${\mu * \nu}$ should then have wave front set supported on those points ${((x_1+y_1,x_2+y_2), (\xi_1,\xi_2))}$ with ${x_1 \in [0,1]}$, ${x_2 = 0}$, ${\xi_1=0}$, ${y_1=0}$, ${y_2 \in [0,1]}$, and ${\xi_2=0}$, i.e. it should be spatially supported on the unit square and have zero (rescaled) frequency, so the heuristic predicts a smooth function on the unit square, which is indeed what happens. (The situation is slightly more complicated in the non-smooth case ${\phi=\psi=1_{[0,1]}}$, because ${\mu}$ and ${\nu}$ then acquire some additional singularities at the endpoints; namely, the wave front set of ${\mu}$ now also contains those points ${((x_1,x_2),(\xi_1,\xi_2))}$ with ${x_1 \in \{0,1\}}$, ${x_2=0}$, and ${\xi_1,\xi_2}$ arbitrary, and ${\nu}$ similarly contains those points ${((y_1,y_2), (\xi_1,\xi_2))}$ with ${y_1=0}$, ${y_2 \in \{0,1\}}$, and ${\xi_1,\xi_2}$ arbitrary. I’ll leave it as an exercise to the reader to compute what this predicts for the wave front set of ${\mu*\nu}$, and how this compares with the actual wave front set.)

Exercise 3 Let ${\mu}$ be the uniform measure on the unit sphere ${S^{n-1}}$ in ${{\bf R}^n}$ for some ${n \geq 2}$. Use as many of the above methods as possible to establish multiple proofs of the following fact: the convolution ${\mu*\mu}$ is an absolutely continuous multiple ${f(x)\ dx}$ of Lebesgue measure, with ${f(x)}$ supported on the ball ${B(0,2)}$ of radius ${2}$ and obeying the bounds

$\displaystyle |f(x)| \ll \frac{1}{|x|}$

for ${|x| \leq 1}$ and

$\displaystyle |f(x)| \ll (2-|x|)^{(n-3)/2}$

for ${1 \leq |x| \leq 2}$, where the implied constants are allowed to depend on the dimension ${n}$. (Hint: try the ${n=2}$ case first, which is particularly simple due to the fact that the addition map ${+: S^1 \times S^1 \rightarrow {\bf R}^2}$ is mostly a local diffeomorphism. The Fourier-based approach is instructive, but requires either asymptotics of Bessel functions or the principle of stationary phase.)
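For the ${n=2}$ case, the claimed singularities can also be observed numerically: if ${X,Y}$ are independent and uniform on ${S^1}$, then ${|X+Y| = 2|\cos(\theta/2)|}$ with ${\theta}$ the (uniform) angle between them, so the radius has exact CDF ${F(t) = 1 - \frac{2}{\pi}\arccos(t/2)}$ and density ${\frac{2}{\pi\sqrt{4-t^2}}}$; dividing by the circumference ${2\pi t}$ gives a density for ${\mu*\mu}$ of size ${\frac{1}{\pi^2 |x| \sqrt{4-|x|^2}}}$, matching the exponents ${-1}$ at the origin and ${(n-3)/2 = -1/2}$ at the boundary. Here is a quick Monte Carlo sketch of mine (not part of the exercise) comparing the empirical radius distribution to this CDF:

```python
import math, random

# Sample X + Y for X, Y independent and uniform on the unit circle S^1,
# and compare the empirical distribution of the radius |X + Y| with the
# exact CDF F(t) = 1 - (2/pi) * arccos(t/2) derived above.
random.seed(0)
N = 200_000
radii = []
for _ in range(N):
    t1 = random.uniform(0, 2 * math.pi)
    t2 = random.uniform(0, 2 * math.pi)
    radii.append(math.hypot(math.cos(t1) + math.cos(t2),
                            math.sin(t1) + math.sin(t2)))

def F(t):
    return 1 - (2 / math.pi) * math.acos(t / 2)

for t in (0.5, 1.0, 1.5):
    emp = sum(r <= t for r in radii) / N
    print(f"t={t}: empirical {emp:.4f}, exact {F(t):.4f}")
```

(With ${2 \times 10^5}$ samples the empirical and exact values agree to about two decimal places.)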

[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function ${\zeta}$, defined by

$\displaystyle \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$

for ${\hbox{Re}(s)>1}$ and extended meromorphically to other values of ${s}$, and asserts that the only zeroes of ${\zeta}$ in the critical strip ${\{ s: 0 \leq \hbox{Re}(s) \leq 1 \}}$ lie on the critical line ${\{ s: \hbox{Re}(s)=\frac{1}{2} \}}$.

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number ${n}$ has a unique factorisation ${n = p_1^{a_1} \ldots p_k^{a_k}}$ into primes. Taking logarithms, we obtain the identity

$\displaystyle \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)$

for any natural number ${n}$, where ${\Lambda}$ is the von Mangoldt function, thus ${\Lambda(n) = \log p}$ when ${n}$ is a power of a prime ${p}$ and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

$\displaystyle \sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \sum_{d|n} \frac{\Lambda(d)}{n^s},$

formally at least. Writing ${n=dm}$, the right-hand side factors as

$\displaystyle (\sum_{d=1}^\infty \frac{\Lambda(d)}{d^s}) (\sum_{m=1}^\infty \frac{1}{m^s}) = \zeta(s) \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}$

whereas the left-hand side is (formally, at least) equal to ${-\zeta'(s)}$. We conclude the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)},$

(formally, at least). If we integrate this, we are formally led to the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = \log \zeta(s)$

or equivalently to the exponential identity

$\displaystyle \zeta(s) = \exp( \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} ) \ \ \ \ \ (2)$

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as ${\zeta}$ has a simple pole at ${s=1}$ and zeroes at various places ${s=\rho}$ in the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form
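Both the divisor-sum identity (1) and the exponential identity (2) are easy to check numerically. The following Python sketch (my own illustration; the cutoffs are arbitrary) verifies (1) for a few values of ${n}$, and evaluates truncations of both sides of (2) at ${s=2}$, where ${\zeta(2) = \pi^2/6}$:

```python
import math

def von_mangoldt(n):
    # Lambda(n) = log p if n is a power of a prime p, and 0 otherwise
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:          # p is the smallest prime factor of n
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0

# identity (1): log n = sum of Lambda(d) over divisors d of n
for n in (12, 97, 360):
    lhs = math.log(n)
    rhs = sum(von_mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    print(n, lhs, rhs)

# identity (2), truncated at N terms and evaluated at s = 2:
# both sides should be close to zeta(2) = pi^2/6 = 1.6449...
N = 20_000
zeta2 = sum(1.0 / n**2 for n in range(1, N + 1))
exp_side = math.exp(sum(von_mangoldt(n) / (math.log(n) * n**2)
                        for n in range(2, N + 1)))
print(zeta2, exp_side, math.pi**2 / 6)
```

Both truncations land within about ${10^{-4}}$ of ${\pi^2/6}$ at this cutoff.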

$\displaystyle \zeta(s) = \frac{1}{s-1} \times \prod_\rho (s-\rho) \times \ldots$

(where we will be intentionally vague about what is hiding in the ${\ldots}$ terms) and so we expect an expansion of the form

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = - \log(s-1) + \sum_\rho \log(s-\rho) + \ldots. \ \ \ \ \ (3)$

Note that

$\displaystyle \frac{1}{s-\rho} = \int_1^\infty t^{\rho-s} \frac{dt}{t}$

and hence on integrating in ${s}$ we formally have

$\displaystyle \log(s-\rho) = -\int_1^\infty t^{\rho-s-1} \frac{dt}{\log t}$

and thus we have the heuristic approximation

$\displaystyle \log(s-\rho) \approx - \sum_{n=1}^\infty \frac{n^{\rho-s-1}}{\log n}.$

Comparing this with (3), we are led to a heuristic form of the explicit formula

$\displaystyle \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1}. \ \ \ \ \ (4)$

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function ${1_{[0,x]}(n)}$ to obtain the formula

$\displaystyle \sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (5)$

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that ${\hbox{Re}(\rho) = 1/2}$ for all zeroes ${\rho}$, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \ \ \ \ \ (6)$

as ${x \rightarrow \infty}$, giving a near-optimal “square root cancellation” for the sum ${\sum_{n \leq x} (\Lambda(n)-1)}$. Conversely, if one can somehow establish a bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )$

for any fixed ${\epsilon > 0}$, then the explicit formula can be used to deduce that all zeroes ${\rho}$ of ${\zeta}$ have real part at most ${1/2+\epsilon}$, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )$

can be automatically amplified to the stronger bound

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x )$

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line ${\hbox{Re}(s)=1}$, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

$\displaystyle \sum_{n \leq x} \Lambda(n) =x + o(x);$

see e.g. this previous blog post for more discussion.
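One can watch this square root cancellation numerically. The following Python sketch (my own, with an arbitrary cutoff of ${10^5}$) sieves the von Mangoldt function and compares ${\sum_{n \leq x} \Lambda(n) - x}$ against the ${x^{1/2} \log^2 x}$ scale appearing in (6):

```python
import math

def psi_values(X):
    # smallest-prime-factor sieve, then Lambda, then prefix sums of Lambda
    spf = list(range(X + 1))
    for i in range(2, int(X**0.5) + 1):
        if spf[i] == i:                      # i is prime
            for j in range(i * i, X + 1, i):
                if spf[j] == j:
                    spf[j] = i
    Lam = [0.0] * (X + 1)
    for n in range(2, X + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:                           # n is a prime power p^k
            Lam[n] = math.log(p)
    out, s = [], 0.0
    for n in range(X + 1):
        s += Lam[n]
        out.append(s)
    return out

psi = psi_values(10**5)
for x in (10**3, 10**4, 10**5):
    print(x, psi[x] - x, (psi[x] - x) / (x**0.5 * math.log(x)**2))
```

At these heights the normalised error in the last column is already far smaller than ${1}$ in absolute value, consistent with (6) (though of course no finite computation can prove anything about the Riemann hypothesis).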

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character ${\chi: {\bf Z} \rightarrow {\bf C}}$. The analogue of the Riemann zeta function is then the Dirichlet ${L}$-function

$\displaystyle L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.$

The identity (1), which encoded the fundamental theorem of arithmetic, can be twisted by ${\chi}$ to obtain

$\displaystyle \chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d) \chi(\frac{n}{d}) \ \ \ \ \ (7)$

and essentially the same manipulations as before eventually lead to the exponential identity

$\displaystyle L(s,\chi) = \exp( \sum_{n=1}^\infty \frac{\chi(n) \Lambda(n)}{\log n} n^{-s} ). \ \ \ \ \ (8)$

This is a twisted version of (2). The same manipulations also yield a twisted explicit formula, which heuristically takes the form

$\displaystyle \chi(n) \Lambda(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (9)$

for non-principal ${\chi}$, where ${\rho}$ now ranges over the zeroes of ${L(s,\chi)}$ in the critical strip, rather than the zeroes of ${\zeta(s)}$; a more accurate formulation, following (5), would be

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) \approx - \sum_\rho \frac{x^{\rho}}{\rho}. \ \ \ \ \ (10)$
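For a concrete illustration of the cancellation predicted by (10), one can take the non-principal character modulo ${4}$ (so ${\chi(n)}$ equals ${+1}$ for ${n = 1 \hbox{ mod } 4}$, ${-1}$ for ${n = 3 \hbox{ mod } 4}$, and ${0}$ for even ${n}$) and compute the twisted sums directly. The Python sketch below (my own illustration) shows that these sums are dramatically smaller than the untwisted sums ${\sum_{n \leq x} \Lambda(n) \approx x}$:

```python
import math

def von_mangoldt(n):
    # Lambda(n) = log p if n is a power of a prime p, and 0 otherwise
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0

def chi4(n):
    # the non-principal Dirichlet character mod 4
    return {1: 1, 3: -1}.get(n % 4, 0)

for X in (10**2, 10**3, 10**4):
    s = sum(chi4(n) * von_mangoldt(n) for n in range(2, X + 1))
    print(X, round(s, 3), round(s / math.sqrt(X), 3))
```

Empirically the twisted sums stay on the order of ${\sqrt{X}}$ at these heights, as the generalised Riemann hypothesis predicts.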

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet ${L}$-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of ${L(s,\chi)}$ in the critical strip also lie on the critical line, then we obtain the bound

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2} \log(x) \log(xq) )$

for any non-principal Dirichlet character ${\chi}$, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2+o(1)} )$

(where ${o(1)}$ denotes a quantity that goes to zero as ${x \rightarrow \infty}$ for any fixed ${q}$).

Next, one can consider other number systems than the natural numbers ${{\bf N}}$ and integers ${{\bf Z}}$. For instance, one can replace the integers ${{\bf Z}}$ with rings ${{\mathcal O}_K}$ of integers in other number fields ${K}$ (i.e. finite extensions of ${{\bf Q}}$), such as the quadratic extensions ${K = {\bf Q}[\sqrt{D}]}$ of the rationals for various square-free integers ${D}$, in which case the ring of integers would be the ring of quadratic integers ${{\mathcal O}_K = {\bf Z}[\omega]}$ for a suitable generator ${\omega}$ (it turns out that one can take ${\omega = \sqrt{D}}$ if ${D=2,3\hbox{ mod } 4}$, and ${\omega = \frac{1+\sqrt{D}}{2}}$ if ${D=1 \hbox{ mod } 4}$).

Here, it is not immediately obvious what the analogue of the natural numbers ${{\bf N}}$ is in this setting, since rings such as ${{\bf Z}[\omega]}$ do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number ${n}$ generates a principal ideal ${(n) = \{ an: a \in {\bf Z} \}}$ in the integers, and conversely every non-trivial ideal ${{\mathfrak n}}$ in the integers is associated to precisely one natural number ${n}$ in this fashion, namely the norm ${N({\mathfrak n}) := |{\bf Z} / {\mathfrak n}|}$ of that ideal. So one can identify the natural numbers with the ideals of ${{\bf Z}}$. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if ${p}$ is prime, and ${a,b}$ are integers, then ${ab \in (p)}$ if and only if one of ${a \in (p)}$ or ${b \in (p)}$ is true.

Finally, even in number systems (such as ${{\bf Z}[\sqrt{-5}]}$) in which the classical version of the fundamental theorem of arithmetic fails (e.g. ${6 = 2 \times 3 = (1-\sqrt{-5})(1+\sqrt{-5})}$), we have the fundamental theorem of arithmetic for ideals: every ideal ${\mathfrak{n}}$ in a Dedekind domain (which includes the ring ${{\mathcal O}_K}$ of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals ${\mathfrak{p}}$ (although these ideals might not necessarily be principal). For instance, in ${{\bf Z}[\sqrt{-5}]}$, the principal ideal ${(6)}$ factors as the product of four prime (but non-principal) ideals ${(2, 1+\sqrt{-5})}$, ${(2, 1-\sqrt{-5})}$, ${(3, 1+\sqrt{-5})}$, ${(3, 1-\sqrt{-5})}$. (Note that the first two ideals ${(2,1+\sqrt{-5}), (2,1-\sqrt{-5})}$ are actually equal to each other.)

Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. The analogue of the Riemann zeta function is now the Dedekind zeta function

$\displaystyle \zeta_K(s) := \sum_{{\mathfrak n}} \frac{1}{N({\mathfrak n})^s}$

where the summation is over all non-trivial ideals in ${{\mathcal O}_K}$. One can also define a von Mangoldt function ${\Lambda_K({\mathfrak n})}$, defined as ${\log N( {\mathfrak p})}$ when ${{\mathfrak n}}$ is a power of a prime ideal ${{\mathfrak p}}$, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

$\displaystyle \log N({\mathfrak n}) = \sum_{{\mathfrak d}|{\mathfrak n}} \Lambda_K({\mathfrak d}) \ \ \ \ \ (11)$

which leads as before to an exponential identity

$\displaystyle \zeta_K(s) = \exp( \sum_{{\mathfrak n}} \frac{\Lambda_K({\mathfrak n})}{\log N({\mathfrak n})} N({\mathfrak n})^{-s} ) \ \ \ \ \ (12)$

and an explicit formula of the heuristic form

$\displaystyle \Lambda_K({\mathfrak n}) \approx 1 - \sum_\rho N({\mathfrak n})^{\rho-1}$

or, a little more accurately,

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (13)$

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( \sqrt{x} \log(x) (d+\log(Dx)) )$

where ${D}$ is the conductor of ${K}$ (which, in the case of number fields, is the absolute value of the discriminant of ${K}$) and ${d = \hbox{dim}_{\bf Q}(K)}$ is the degree of the extension of ${K}$ over ${{\bf Q}}$. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

$\displaystyle \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( x^{1/2+o(1)} )$

where ${o(1)}$ denotes a quantity that goes to zero as ${x \rightarrow \infty}$ (holding ${K}$ fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.
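As a concrete check of this circle of ideas, take ${K = {\bf Q}(i)}$, so that ${{\mathcal O}_K = {\bf Z}[i]}$ is a principal ideal domain: ideals of norm ${n}$ correspond to Gaussian integers of norm ${n}$ up to the four units ${\pm 1, \pm i}$. The classical factorisation ${\zeta_K(s) = \zeta(s) L(s,\chi_4)}$ (with ${\chi_4}$ the non-principal character mod ${4}$) then predicts that the number of ideals of norm ${n}$ equals ${\sum_{d|n} \chi_4(d)}$; here is a Python sketch of mine verifying this numerically:

```python
def ideals_of_norm(n):
    # ideals of Z[i] of norm n <-> Gaussian integers a+bi with a^2+b^2 = n,
    # counted up to the four units
    m = int(n ** 0.5) + 1
    r2 = sum(1 for a in range(-m, m + 1) for b in range(-m, m + 1)
             if a * a + b * b == n)
    return r2 // 4

def chi4(d):
    # the non-principal Dirichlet character mod 4
    return {1: 1, 3: -1}.get(d % 4, 0)

for n in range(1, 201):
    assert ideals_of_norm(n) == sum(chi4(d) for d in range(1, n + 1) if n % d == 0)
print("ideal counts of Z[i] match sum of chi4(d) over d | n, for all n <= 200")
```

For instance ${n = 5}$ gives two ideals, ${(2+i)}$ and ${(2-i)}$, while ${n = 3}$ gives none, reflecting the splitting behaviour of primes in ${{\bf Z}[i]}$.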

As was the case with the Dirichlet ${L}$-functions, one can twist the Dedekind zeta function by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line ${{\mathbb A}^1}$ and a finite field ${{\mathbb F} = {\mathbb F}_q}$ of some order ${q}$. The polynomial functions on the affine line ${{\mathbb A}^1/{\mathbb F}}$ are just the usual polynomial ring ${{\mathbb F}[t]}$, which then plays the role of the integers ${{\bf Z}}$ (or ${{\mathcal O}_K}$) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers is given by the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers is given by the irreducible monic polynomials. The norm ${N(f)}$ of a polynomial is the order of ${{\mathbb F}[t] / (f)}$, which can be computed explicitly as

$\displaystyle N(f) = q^{\hbox{deg}(f)}.$

Because of this, we will normalise things slightly differently here and use ${\hbox{deg}(f)}$ in place of ${\log N(f)}$ in what follows. The (local) zeta function ${\zeta_{{\mathbb A}^1/{\mathbb F}}(s)}$ is then defined as

$\displaystyle \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = \sum_f \frac{1}{N(f)^s}$

where ${f}$ ranges over monic polynomials, and the von Mangoldt function ${\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}$ is defined to equal ${\hbox{deg}(g)}$ when ${f}$ is a power of a monic irreducible polynomial ${g}$, and zero otherwise. Note that because ${N(f)}$ is always a power of ${q}$, the zeta function here is in fact periodic with period ${2\pi i / \log q}$. Because of this, it is customary to make a change of variables ${T := q^{-s}}$, so that

$\displaystyle \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = Z( {\mathbb A}^1/{\mathbb F}, T )$

and ${Z}$ is the renormalised zeta function

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_f T^{\hbox{deg}(f)}. \ \ \ \ \ (14)$

We have the analogue of (1) (or (7) or (11)):

$\displaystyle \hbox{deg}(f) = \sum_{g|f} \Lambda_{{\mathbb A}^1/{\mathbb F}}(g), \ \ \ \ \ (15)$

which leads as before to an exponential identity

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \exp( \sum_f \frac{\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}{\hbox{deg}(f)} T^{\hbox{deg}(f)} ) \ \ \ \ \ (16)$

analogous to (2), (8), or (12). It also leads to the explicit formula

$\displaystyle \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - \sum_\rho N(f)^{\rho-1}$

where ${\rho}$ are the zeroes of the original zeta function ${\zeta_{{\mathbb A}^1/{\mathbb F}}(s)}$ (counting each residue class of the period ${2\pi i/\log q}$ just once), or equivalently

$\displaystyle \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - q^{-\hbox{deg}(f)} \sum_\alpha \alpha^{\hbox{deg}(f)},$

where ${\alpha}$ are the reciprocals of the roots of the normalised zeta function ${Z( {\mathbb A}^1/{\mathbb F}, T )}$ (or to put it another way, ${1-\alpha T}$ are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx q^n - \sum_\alpha \alpha^n.$

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to an additive constant, thus

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (17)$

for an explicit integer ${c}$ (independent of ${n}$) arising from any potential pole of ${Z}$ at ${T=1}$. In the case of the affine line ${{\mathbb A}^1}$, the situation is particularly simple, because the zeta function ${Z( {\mathbb A}^1/{\mathbb F}, T)}$ is easy to compute. Indeed, since there are exactly ${q^n}$ monic polynomials of a given degree ${n}$, we see from (14) that

$\displaystyle Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1-qT}$

so in fact there are no zeroes whatsoever, and no pole at ${T=1}$ either, so we have an exact prime number theorem for this function field:

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n \ \ \ \ \ (18)$

Among other things, this tells us that the number of irreducible monic polynomials of degree ${n}$ is ${q^n/n + O(q^{n/2})}$.
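The exact prime number theorem (18) can be verified directly: by Möbius inversion, the number ${N_d}$ of monic irreducible polynomials of degree ${d}$ over ${{\mathbb F}_q}$ is ${\frac{1}{d} \sum_{e|d} \mu(e) q^{d/e}}$, and (18) is then the statement that ${\sum_{d|n} d N_d = q^n}$. A quick Python check (my own sketch):

```python
def mobius(n):
    # Mobius function by trial factorisation
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0            # square factor: mu vanishes
            res = -res
        p += 1
    return -res if n > 1 else res

def num_irreducibles(q, d):
    # number of monic irreducible polynomials of degree d over F_q
    return sum(mobius(e) * q ** (d // e) for e in range(1, d + 1) if d % e == 0) // d

for q in (2, 3, 5):
    for n in range(1, 9):
        lhs = sum(d * num_irreducibles(q, d) for d in range(1, n + 1) if n % d == 0)
        assert lhs == q ** n
print("sum over deg(f) = n of Lambda(f) equals q^n for q = 2, 3, 5 and n <= 8")
```

For instance, over ${{\mathbb F}_2}$ there is one irreducible of degree ${2}$ (namely ${t^2+t+1}$) and two of degree ${3}$, so ${n=3}$ gives ${1 \cdot 2 + 3 \cdot 2 = 8 = 2^3}$ as expected.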

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial ${f \in {\mathbb F}[t]}$ through its roots, which are a finite set of points in the algebraic closure ${\overline{{\mathbb F}}}$ of the finite field ${{\mathbb F}}$ (or more suggestively, as points on the affine line ${{\mathbb A}^1( \overline{{\mathbb F}} )}$). The number of such points (counting multiplicity) is the degree of ${f}$, and from the factor theorem, the set of points determines the monic polynomial ${f}$ (or, if one removes the monic hypothesis, it determines the polynomial ${f}$ projectively). These points have an action of the Galois group ${\hbox{Gal}( \overline{{\mathbb F}} / {\mathbb F} )}$. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map ${\hbox{Frob}: x \mapsto x^q}$, which fixes the elements of the original finite field ${{\mathbb F}}$ but permutes the other elements of ${\overline{{\mathbb F}}}$. Thus the roots of a given polynomial ${f}$ split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if ${f}$ is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.
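This orbit picture is easy to see in a small example. Take ${q=2}$, representing ${{\mathbb F}_{16}}$ as ${{\mathbb F}_2[t]/(t^4+t+1)}$ (a standard, though not canonical, choice of irreducible modulus); iterating the squaring map then partitions the ${16}$ points of ${{\mathbb A}^1(\overline{{\mathbb F}})}$ lying in this field into Frobenius orbits. The Python sketch below (my own) finds orbit sizes ${1,1,2,4,4,4}$, matching the counts of monic irreducibles over ${{\mathbb F}_2}$: two of degree ${1}$, one of degree ${2}$, and three of degree ${4}$.

```python
MOD = 0b10011          # t^4 + t + 1, irreducible over F_2

def mul(a, b):
    # multiply two elements of F_16 = F_2[t]/(t^4+t+1), encoded as bitmasks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def frob(x):
    return mul(x, x)   # the Frobenius map over F_2 is squaring

# partition the 16 field elements into orbits of the Frobenius map
seen, orbit_sizes = set(), []
for x in range(16):
    if x not in seen:
        orbit = {x}
        y = frob(x)
        while y not in orbit:
            orbit.add(y)
            y = frob(y)
        seen |= orbit
        orbit_sizes.append(len(orbit))

print(sorted(orbit_sizes))   # -> [1, 1, 2, 4, 4, 4]
```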

Now consider the degree ${n}$ finite field extension ${{\mathbb F}_n}$ of ${{\mathbb F}}$ (it is a classical fact that there is exactly one such extension up to isomorphism for each ${n}$); this is a subfield of ${\overline{{\mathbb F}}}$ of order ${q^n}$. (Here we are performing a standard abuse of notation by overloading the subscripts in the ${{\mathbb F}}$ notation; thus ${{\mathbb F}_q}$ denotes the field of order ${q}$, while ${{\mathbb F}_n}$ denotes the extension of ${{\mathbb F} = {\mathbb F}_q}$ of order ${n}$, so that we in fact have ${{\mathbb F}_n = {\mathbb F}_{q^n}}$ if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point ${x}$ in this extension (or, more suggestively, the affine line ${{\mathbb A}^1({\mathbb F}_n)}$ over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of ${x}$. Since the Frobenius action is periodic of period ${n}$ on ${{\mathbb F}_n}$, the degree of this minimal polynomial must divide ${n}$. Conversely, every monic irreducible polynomial of degree ${d}$ dividing ${n}$ produces ${d}$ distinct zeroes that lie in ${{\mathbb F}_d}$ (here we use the classical fact that finite fields are perfect) and hence in ${{\mathbb F}_n}$. We have thus partitioned ${{\mathbb A}^1({\mathbb F}_n)}$ into Frobenius orbits (also known as closed points), with each monic irreducible polynomial ${f}$ of degree ${d}$ dividing ${n}$ contributing an orbit of size ${d = \hbox{deg}(f) = \Lambda(f^{n/d})}$. From this we conclude a geometric interpretation of the left-hand side of (18):

$\displaystyle \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = \sum_{x \in {\mathbb A}^1({\mathbb F}_n)} 1. \ \ \ \ \ (19)$

The identity (18) thus is equivalent to the thoroughly boring fact that the number of ${{\mathbb F}_n}$-points on the affine line ${{\mathbb A}^1}$ is equal to ${q^n}$. However, things become much more interesting if one then replaces the affine line ${{\mathbb A}^1}$ by a more general (geometrically) irreducible curve ${C}$ defined over ${{\mathbb F}}$; for instance one could take ${C}$ to be an elliptic curve

$\displaystyle E = \{ (x,y): y^2 = x^3 + ax + b \} \ \ \ \ \ (20)$

for some suitable ${a,b \in {\mathbb F}}$, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of ${{\mathbb F}}$-rational points removed). The analogue of ${{\mathbb F}[t]}$ is then the coordinate ring of ${C}$ (for instance, in the case of the elliptic curve (20) it would be ${{\mathbb F}[x,y] / (y^2-x^3-ax-b)}$), with polynomials in this ring producing a set of roots in the curve ${C( \overline{\mathbb F})}$ that is again invariant with respect to the Frobenius action (acting on the ${x}$ and ${y}$ coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on ${C}$ will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

$\displaystyle \zeta_{C/{\mathbb F}}(s) = \sum_{{\mathfrak f}} \frac{1}{N({\mathfrak f})^s}$

and a von Mangoldt function ${\Lambda_{C/{\mathbb F}}({\mathfrak f})}$ as before, where ${{\mathfrak f}}$ would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve ${C}$; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points ${\{x_1,\ldots,x_k\}}$ in ${C}$, or equivalently an effective divisor ${[x_1] + \ldots + [x_k]}$ of ${C}$; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of ${C}$. With this dictionary, the zeta function becomes

$\displaystyle \zeta_{C/{\mathbb F}}(s) = \sum_{D \geq 0} \frac{1}{q^{s \hbox{deg}(D)}}$

where the sum is over effective rational divisors ${D}$ of ${C}$ (with ${k}$ being the degree of an effective divisor ${[x_1] + \ldots + [x_k]}$), or equivalently

$\displaystyle Z( C/{\mathbb F}, T ) = \sum_{D \geq 0} T^{\hbox{deg}(D)}.$

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

$\displaystyle \sum_{N({\mathfrak f}) = q^n} \Lambda_{C/{\mathbb F}}({\mathfrak f}) = \sum_{x \in C({\mathbb F}_n)} 1$

$\displaystyle = |C({\mathbb F}_n)|,$

thus this sum is simply counting the number of ${{\mathbb F}_n}$-points of ${C}$. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

$\displaystyle Z( C/{\mathbb F}, T ) = \exp( \sum_{n \geq 1} \frac{|C({\mathbb F}_n)|}{n} T^n ) \ \ \ \ \ (21)$

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

$\displaystyle |C({\mathbb F}_n)| = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (22)$

where ${\alpha}$ runs over the (reciprocal) zeroes of ${Z( C/{\mathbb F}, T )}$ (counting multiplicity), and ${c}$ is an integer independent of ${n}$. (As it turns out, ${c}$ equals ${1}$ when ${C}$ is a projective curve, and more generally equals ${1-k}$ when ${C}$ is a projective curve with ${k}$ rational points deleted.)

To evaluate ${Z(C/{\mathbb F},T)}$, one needs to count the number of effective divisors of a given degree on the curve ${C}$. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when ${C}$ is projective) that ${Z(C/{\mathbb F},T)}$ is in fact a rational function, with a finite number of zeroes, and simple poles at both ${T=1}$ and ${T=1/q}$, with similar results when one deletes some rational points from ${C}$; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

$\displaystyle |E({\mathbb F}_n)| = q^n - \alpha^n - \beta^n$

for two complex numbers ${\alpha,\beta}$ depending on ${E}$ and ${q}$.
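This identity can be tested by brute force. In the Python sketch below (with an arbitrarily chosen nonsingular curve ${y^2 = x^3 + x + 3}$ over ${{\mathbb F}_{11}}$; the parameters are mine, not from any reference), we count affine points over ${{\mathbb F}_p}$ to extract ${\alpha+\beta = p - |E({\mathbb F}_p)|}$, then use ${\alpha\beta = p}$ (so ${\alpha^2+\beta^2 = (\alpha+\beta)^2 - 2p}$) to predict the count over ${{\mathbb F}_{p^2}}$, and finally confirm the prediction by a direct count in ${{\mathbb F}_{p^2} = {\mathbb F}_p[t]/(t^2 - r)}$:

```python
p, a, b = 11, 1, 3     # arbitrary sample curve y^2 = x^3 + x + 3 (nonsingular mod 11)

def affine_count_Fp():
    # count pairs (x, y) in F_p^2 with y^2 = x^3 + a x + b
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    return sum(sq.get((x**3 + a * x + b) % p, 0) for x in range(p))

N1 = affine_count_Fp()
s = p - N1                        # s = alpha + beta in the explicit formula

# build F_{p^2} = F_p[t]/(t^2 - r) with r a quadratic non-residue mod p
r = next(z for z in range(2, p) if pow(z, (p - 1) // 2, p) == p - 1)

def mul(u, v):                    # (u0 + u1 t)(v0 + v1 t) with t^2 = r
    return ((u[0] * v[0] + r * u[1] * v[1]) % p,
            (u[0] * v[1] + u[1] * v[0]) % p)

def power(u, e):                  # square-and-multiply in F_{p^2}
    acc = (1, 0)
    while e:
        if e & 1:
            acc = mul(acc, u)
        u = mul(u, u)
        e >>= 1
    return acc

def affine_count_Fp2():
    n = 0
    for x0 in range(p):
        for x1 in range(p):
            x = (x0, x1)
            fx = mul(mul(x, x), x)                       # x^3
            fx = ((fx[0] + a * x0 + b) % p, (fx[1] + a * x1) % p)
            if fx == (0, 0):
                n += 1                                   # single solution y = 0
            elif power(fx, (p * p - 1) // 2) == (1, 0):
                n += 2                                   # nonzero square: two y's
    return n

N2 = affine_count_Fp2()
predicted = p * p - (s * s - 2 * p)   # q^2 - (alpha^2 + beta^2)
print(N1, N2, predicted)
```

Here the single count over ${{\mathbb F}_p}$ pins down ${\alpha^n + \beta^n}$ for every ${n}$ simultaneously, which is exactly the content of the rationality of the zeta function.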

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of ${\zeta_{C/{\mathbb F}}}$ lie on the critical line, or equivalently that all the roots ${\alpha}$ in (22) have modulus ${\sqrt{q}}$, so that (22) then gives the asymptotic

$\displaystyle |C({\mathbb F}_n)| = q^n + O( q^{n/2} ) \ \ \ \ \ (23)$

where the implied constant depends only on the genus of ${C}$ (and on the number of points removed from ${C}$). For instance, for elliptic curves we have the Hasse bound

$\displaystyle | |E({\mathbb F}_n)| - q^n | \leq 2 q^{n/2}.$

As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

$\displaystyle |C({\mathbb F}_n)| = q^n + O( q^{n/2 + O(1)} ), \ \ \ \ \ (24)$

then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large ${n}$, which then amplifies to the optimal bound (23) for all ${n}$ (and in particular for ${n=1}$). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with ${q}$-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no ${q}$-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of ${q}$.

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet ${L}$-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve ${C \subset {\mathbb A}^2}$ and an additive character ${\psi: {\mathbb F}^2 \rightarrow {\bf C}^\times}$, thus ${\psi(x+y) = \psi(x) \psi(y)}$ for all ${x,y \in {\mathbb F}^2}$. Given a rational effective divisor ${D = [x_1] + \ldots + [x_k]}$, the sum ${x_1+\ldots+x_k}$ is Frobenius-invariant and thus lies in ${{\mathbb F}^2}$. By abuse of notation, we may thus define ${\psi}$ on such divisors by

$\displaystyle \psi( [x_1] + \ldots + [x_k] ) := \psi( x_1 + \ldots + x_k )$

and observe that ${\psi}$ is multiplicative in the sense that ${\psi(D_1 + D_2) = \psi(D_1) \psi(D_2)}$ for rational effective divisors ${D_1,D_2}$. One can then define ${\psi({\mathfrak f})}$ for any non-trivial ideal ${{\mathfrak f}}$ by replacing that ideal with the associated rational effective divisor; for instance, if ${f}$ is a polynomial in the coordinate ring of ${C}$, with zeroes at ${x_1,\ldots,x_k \in C}$, then ${\psi((f))}$ is ${\psi( x_1+\ldots+x_k )}$. Again, we have the multiplicativity property ${\psi({\mathfrak f} {\mathfrak g}) = \psi({\mathfrak f}) \psi({\mathfrak g})}$. If we then form the twisted normalised zeta function

$\displaystyle Z( C/{\mathbb F}, \psi, T ) = \sum_{D \geq 0} \psi(D) T^{\hbox{deg}(D)}$

then by twisting the previous analysis, we eventually arrive at the exponential identity

$\displaystyle Z( C/{\mathbb F}, \psi, T ) = \exp( \sum_{n \geq 1} \frac{S_n(C/{\mathbb F}, \psi)}{n} T^n ) \ \ \ \ \ (25)$

in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums ${S_n(C/{\mathbb F}, \psi)}$ are defined by

$\displaystyle S_n(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F}_n)} \psi( \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) )$

where the trace ${\hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x)}$ of an element ${x}$ in the plane ${{\mathbb A}^2( {\mathbb F}_n )}$ is defined by the formula

$\displaystyle \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x).$

In particular, ${S_1}$ is the exponential sum

$\displaystyle S_1(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F})} \psi(x)$

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

$\displaystyle K(a,b;p) := \sum_{x \in {\mathbb F}_p^\times} e_p( ax + \frac{b}{x})$

as a special case, where ${a,b \in {\mathbb F}_p^\times}$. (NOTE: the sign conventions for the companion sum ${S_n}$ are not consistent across the literature, sometimes it is ${-S_n}$ which is referred to as the companion sum.)
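Kloosterman sums are simple to compute directly from the definition (using ${x^{-1} = x^{p-2} \hbox{ mod } p}$), and one can check numerically the square root cancellation ${|K(a,b;p)| \leq 2\sqrt{p}}$ that the Riemann hypothesis predicts in this setting. A short Python sketch of mine:

```python
import cmath, math

def kloosterman(a, b, p):
    # K(a,b;p) = sum over x in F_p^* of e((a x + b x^{-1})/p); the substitution
    # x -> -x conjugates each term, so the sum is real
    total = sum(cmath.exp(2j * math.pi * ((a * x + b * pow(x, p - 2, p)) % p) / p)
                for x in range(1, p))
    return total.real

# spot-check the bound |K(a,b;p)| <= 2 sqrt(p) for a small prime
p = 101
worst = max(abs(kloosterman(a, b, p)) for a in range(1, 11) for b in range(1, 11))
print(worst, 2 * math.sqrt(p))
```

For example, ${K(1,1;5) = 2 + 2\cos(4\pi/5) \approx 0.382}$, well inside the bound ${2\sqrt{5} \approx 4.47}$.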

If ${\psi}$ is non-principal (and ${C}$ is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that ${Z}$ is a rational function of ${T}$, with no pole at ${T=q^{-1}}$, and one then gets an explicit formula of the form

$\displaystyle S_n(C/{\mathbb F},\psi) = -\sum_\alpha \alpha^n + c \ \ \ \ \ (26)$

for the companion sums, where ${\alpha}$ are the reciprocals of the zeroes of ${Z}$, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

$\displaystyle \sum_{x \in {\mathbb F}_{p^n}^\times} e_p( \hbox{Tr}( ax + \frac{b}{x} ) ) = -\alpha^n - \beta^n \ \ \ \ \ (27)$

for all ${n}$ and some complex numbers ${\alpha,\beta}$ depending on ${a,b,p}$, where we have abbreviated ${\hbox{Tr}_{{\mathbb F}_{p^n}/{\mathbb F}_p}}$ as ${\hbox{Tr}}$. As before, the Riemann hypothesis for ${Z}$ then gives a square root cancellation bound of the form

$\displaystyle S_n(C/{\mathbb F},\psi) = O( q^{n/2} ) \ \ \ \ \ (28)$

for the companion sums (and in particular gives the very explicit Weil bound ${|K(a,b;p)| \leq 2\sqrt{p}}$ for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

$\displaystyle S_n(C/{\mathbb F},\psi) = O( q^{n/2 + O(1)} ).$

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.

One can also twist the zeta function on a curve by a multiplicative character ${\chi: {\mathbb F}^\times \rightarrow {\bf C}^\times}$ by similar arguments, except that instead of forming the sum ${x_1+\ldots+x_k}$ of all the components of an effective divisor ${[x_1]+\ldots+[x_k]}$, one takes the product ${x_1 \ldots x_k}$ instead, and similarly one replaces the trace

$\displaystyle \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x)$

by the norm

$\displaystyle \hbox{Norm}_{{\mathbb F}_n/{\mathbb F}}(x) = x \cdot \hbox{Frob}(x) \cdot \ldots \cdot \hbox{Frob}^{n-1}(x).$

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of ${\ell}$-adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to ${\ell}$-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an ${\ell}$-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.

Let ${n}$ be a natural number. We consider the question of how many “almost orthogonal” unit vectors ${v_1,\ldots,v_m}$ one can place in the Euclidean space ${{\bf R}^n}$. Of course, if we insist on ${v_1,\ldots,v_m}$ being exactly orthogonal, so that ${\langle v_i,v_j \rangle = 0}$ for all distinct ${i,j}$, then we can only pack at most ${n}$ unit vectors into this space. However, if one is willing to relax the orthogonality condition a little, so that ${\langle v_i,v_j\rangle}$ is small rather than zero, then one can pack a lot more unit vectors into ${{\bf R}^n}$, due to the important fact that pairs of vectors in high dimensions are typically almost orthogonal to each other. For instance, if one chooses ${v_i}$ uniformly and independently at random on the unit sphere, then a standard computation (based on viewing the ${v_i}$ as gaussian vectors projected onto the unit sphere) shows that each inner product ${\langle v_i,v_j \rangle}$ concentrates around the origin with standard deviation ${O(1/\sqrt{n})}$ and with gaussian tails, and a simple application of the union bound then shows that for any fixed ${K \geq 1}$, one can pack ${n^K}$ unit vectors into ${{\bf R}^n}$ whose inner products are all of size ${O( K^{1/2} n^{-1/2} \log^{1/2} n )}$.
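One can see this concentration directly in a small simulation (a pure-Python illustration of the phenomenon, not part of any argument below): sample more than ${n}$ random unit vectors in ${{\bf R}^n}$ and observe that all pairwise inner products stay at scale ${O(\sqrt{\log n / n})}$, far below ${1}$.

```python
import math
import random

random.seed(0)  # fixed seed so the experiment is reproducible

def random_unit_vector(n):
    """Uniform random point on S^{n-1}: normalise a standard gaussian vector."""
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

n, m = 150, 250  # note m > n: more vectors than dimensions
vs = [random_unit_vector(n) for _ in range(m)]
max_ip = max(abs(inner(vs[i], vs[j]))
             for i in range(m) for j in range(i + 1, m))
# Each inner product has standard deviation ~ 1/sqrt(n) ~ 0.08; the maximum
# over ~m^2/2 pairs picks up an extra sqrt(log) factor but remains small.
print(max_ip)
```

With these (arbitrarily chosen) parameters the maximum inner product lands around ${0.3}$–${0.4}$, despite there being many more vectors than dimensions.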

One can remove the logarithm by using some number theoretic constructions. For instance, if ${n}$ is twice a prime ${n=2p}$, one can identify ${{\bf R}^n}$ with the space ${\ell^2({\bf F}_p)}$ of complex-valued functions ${f: {\bf F}_p \rightarrow {\bf C}}$, where ${{\bf F}_p}$ is the field of ${p}$ elements, and if one then considers the ${p^2}$ different quadratic phases ${x \mapsto \frac{1}{\sqrt{p}} e_p( ax^2 + bx )}$ for ${a,b \in {\bf F}_p}$, where ${e_p(a) := e^{2\pi i a/p}}$ is the standard character on ${{\bf F}_p}$, then a standard application of Gauss sum estimates reveals that these ${p^2}$ unit vectors in ${{\bf R}^n}$ all have inner products of magnitude at most ${p^{-1/2}}$ with each other. More generally, if we take ${d \geq 1}$ and consider the ${p^d}$ different polynomial phases ${x \mapsto \frac{1}{\sqrt{p}} e_p( a_d x^d + \ldots + a_1 x )}$ for ${a_1,\ldots,a_d \in {\bf F}_p}$, then an application of the Weil conjectures for curves, proven by Weil, shows that the inner products of the associated ${p^d}$ unit vectors with each other have magnitude at most ${(d-1) p^{-1/2}}$.
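The quadratic phase construction is concrete enough to verify numerically for a small prime. The sketch below (an illustration only; it checks the magnitude of the complex inner product, which dominates the real inner product one gets after identifying ${{\bf C}^p}$ with ${{\bf R}^{2p}}$) confirms that all ${p^2}$ phase vectors have pairwise inner products of magnitude exactly ${p^{-1/2}}$ or ${0}$:

```python
import cmath
import math

p = 13  # a small prime; the vectors live in C^p, i.e. R^{2p}

def phase_vector(a, b):
    """The unit vector x -> e_p(a x^2 + b x)/sqrt(p) in l^2(F_p)."""
    return [cmath.exp(2j * math.pi * ((a * x * x + b * x) % p) / p)
            / math.sqrt(p) for x in range(p)]

def inner(u, v):
    return sum(x * y.conjugate() for x, y in zip(u, v))

vectors = {(a, b): phase_vector(a, b) for a in range(p) for b in range(p)}
max_ip = max(abs(inner(vectors[u], vectors[v]))
             for u in vectors for v in vectors if u < v)
# Gauss sum estimates: when the quadratic coefficients differ, the inner
# product is a normalised Gauss sum of magnitude exactly p^{-1/2}; when they
# agree (and the linear coefficients differ) the inner product vanishes.
print(abs(max_ip - 1 / math.sqrt(p)) < 1e-9)  # True
```

So ${169}$ unit vectors fit into a ${26}$-dimensional real space with all inner products of size at most ${13^{-1/2} \approx 0.277}$.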

As it turns out, this construction is close to optimal, in that there is a polynomial limit to how many unit vectors one can pack into ${{\bf R}^n}$ with an inner product of ${O(1/\sqrt{n})}$:

Theorem 1 (Cheap Kabatjanskii-Levenstein bound) Let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq A n^{-1/2}}$ for some ${1/2 \leq A \leq \frac{1}{2} \sqrt{n}}$. Then we have ${m \leq (\frac{Cn}{A^2})^{C A^2}}$ for some absolute constant ${C}$.

In particular, for fixed ${d}$ and large ${p}$, the number of unit vectors one can pack in ${{\bf R}^{2p}}$ whose inner products all have magnitude at most ${dp^{-1/2}}$ will be ${O( p^{O(d^2)} )}$. This doesn’t quite match the construction coming from the Weil conjectures, although it is worth noting that the upper bound of ${(d-1)p^{-1/2}}$ for the inner product is usually not sharp (the inner product is actually ${p^{-1/2}}$ times the sum of ${d-1}$ unit phases which one expects (cf. the Sato-Tate conjecture) to be uniformly distributed on the unit circle, and so the typical inner product is actually closer to ${(d-1)^{1/2} p^{-1/2}}$).

Note that for ${0 \leq A < 1/2}$, the ${A=1/2}$ case of the above theorem (or more precisely, Lemma 2 below) gives the bound ${m=O(n)}$, which is essentially optimal as the example of an orthonormal basis shows. For ${A \geq \sqrt{n}}$, the condition ${|\langle v_i, v_j \rangle| \leq A n^{-1/2}}$ is trivially true from Cauchy-Schwarz, and ${m}$ can be arbitrarily large. Finally, in the range ${\frac{1}{2} \sqrt{n} \leq A \leq \sqrt{n}}$, we can use a volume packing argument: we have ${\|v_i-v_j\|^2 \geq 2 (1 - A n^{-1/2})}$, so if we set ${r := 2^{-1/2} (1-A n^{-1/2})^{1/2}}$, then the open balls of radius ${r}$ around each ${v_i}$ are disjoint, while all lying in a ball of radius ${O(1)}$, giving rise to the bound ${m \leq C^n (1-A n^{-1/2})^{-n/2}}$ for some absolute constant ${C}$.

As I learned recently from Philippe Michel, a more precise version of this theorem is due to Kabatjanskii and Levenstein, who studied the closely related problem of sphere packing (or more precisely, cap packing) in the unit sphere ${S^{n-1}}$ of ${{\bf R}^n}$. However, I found a short proof of the above theorem which relies on one of my favorite tricks – the tensor power trick – so I thought I would give it here.

We begin with an easy case, basically the ${A=1/2}$ case of the above theorem:

Lemma 2 Let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq \frac{1}{2n^{1/2}}}$ for all distinct ${i,j}$. Then ${m < 2n}$.

Proof: Suppose for contradiction that ${m \geq 2n}$. We consider the ${2n \times 2n}$ Gram matrix ${( \langle v_i,v_j \rangle )_{1 \leq i,j \leq 2n}}$. This matrix is real symmetric with rank at most ${n}$, thus if one subtracts off the identity matrix, it has an eigenvalue of ${-1}$ with multiplicity at least ${n}$. Taking Hilbert-Schmidt norms, we conclude that

$\displaystyle \sum_{1 \leq i,j \leq 2n: i \neq j} |\langle v_i, v_j \rangle|^2 \geq n.$

But by hypothesis, the left-hand side is at most ${2n(2n-1) \frac{1}{4n} = n-\frac{1}{2}}$, giving the desired contradiction. $\Box$

To amplify the above lemma to cover larger values of ${A}$, we apply the tensor power trick. A direct application of the tensor power trick does not gain very much; however one can do a lot better by using the symmetric tensor power rather than the raw tensor power. This gives

Corollary 3 Let ${k}$ be a natural number, and let ${v_1,\ldots,v_m}$ be unit vectors in ${{\bf R}^n}$ such that ${|\langle v_i, v_j \rangle| \leq 2^{-1/k} (\binom{n+k-1}{k})^{-1/2k}}$ for all distinct ${i,j}$. Then ${m < 2\binom{n+k-1}{k}}$.

Proof: We work in the symmetric component ${\hbox{Sym}^k {\bf R}^n}$ of the tensor power ${({\bf R}^n)^{\otimes k} \equiv {\bf R}^{n^k}}$, which has dimension ${\binom{n+k-1}{k}}$. Applying the previous lemma to the tensor powers ${v_1^{\otimes k},\ldots,v_m^{\otimes k}}$, we obtain the claim. $\Box$
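The two facts doing the work in this proof are the identity ${\langle u^{\otimes k}, v^{\otimes k} \rangle = \langle u,v\rangle^k}$ (which amplifies a small inner product into a much smaller one) and the dimension count ${\dim \hbox{Sym}^k {\bf R}^n = \binom{n+k-1}{k} \ll n^k}$. Both are easy to check concretely; here is a small sketch (illustrative only, using the raw tensor power, which contains the symmetric tensors ${v^{\otimes k}}$):

```python
import math
from itertools import product

def tensor_power(v, k):
    """The k-fold tensor power of v, flattened into a list indexed by k-tuples."""
    return [math.prod(v[i] for i in idx)
            for idx in product(range(len(v)), repeat=k)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

u = [0.6, 0.8, 0.0]
v = [0.0, 0.6, 0.8]
k = 3
# The key identity behind the tensor power trick: <u^k, v^k> = <u,v>^k.
lhs = inner(tensor_power(u, k), tensor_power(v, k))
rhs = inner(u, v) ** k
print(abs(lhs - rhs) < 1e-12)  # True

# The symmetric tensors span a space of dimension C(n+k-1, k), much smaller
# than the full n^k, which is what makes Corollary 3 efficient.
n = 3
print(math.comb(n + k - 1, k))  # 10
```

Here an inner product of ${0.48}$ between ${u}$ and ${v}$ becomes ${0.48^3 \approx 0.11}$ between the tensor powers, at the cost of raising the ambient dimension from ${3}$ to ${10}$ (rather than ${27}$).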

Using the trivial bound ${e^k \geq \frac{k^k}{k!}}$, we can lower bound

$\displaystyle 2^{-1/k} (\binom{n+k-1}{k})^{-1/2k} \geq 2^{-1/k} (n+k-1)^{-1/2} (k!)^{1/2k}$

$\displaystyle \geq 2^{-1/k} e^{-1/2} k^{1/2} (n+k-1)^{-1/2} .$

We can thus prove Theorem 1 by setting ${k := \lfloor C A^2 \rfloor}$ for some sufficiently large absolute constant ${C}$.

For any ${H \geq 2}$, let ${B[H]}$ denote the assertion that there are infinitely many pairs of consecutive primes ${p_n, p_{n+1}}$ whose difference ${p_{n+1}-p_n}$ is at most ${H}$, or equivalently that

$\displaystyle \liminf_{n \rightarrow \infty} p_{n+1} - p_n \leq H;$

thus for instance ${B[2]}$ is the notorious twin prime conjecture. While this conjecture remains unsolved, we have the following recent breakthrough result of Zhang, building upon earlier work of Goldston-Pintz-Yildirim, Bombieri, Fouvry, Friedlander, and Iwaniec, and others:

Theorem 1 (Zhang’s theorem) ${B[H]}$ is true for some finite ${H}$.

In fact, Zhang’s paper shows that ${B[H]}$ is true with ${H = 70,000,000}$.

About a month ago, the Polymath8 project was launched with the objective of reading through Zhang’s paper, clarifying the arguments, and then making them more efficient, in order to improve the value of ${H}$. This project is still ongoing, but we have made significant progress; currently, we have confirmed that ${B[H]}$ holds for ${H}$ as low as ${12,006}$, and provisionally for ${H}$ as low as ${6,966}$ subject to certain lengthy arguments being checked. For several reasons, our methods (which are largely based on Zhang’s original argument structure, though with numerous refinements and improvements) will not be able to attain the twin prime conjecture ${B[2]}$, but there is still scope to lower the value of ${H}$ a bit further than what we have currently.

The precise arguments here are quite technical, and are discussed at length on other posts on this blog. In this post, I would like to give a “high level” summary of how Zhang’s argument works, and give some impressions of the improvements we have made so far; these would already be familiar to the active participants of the Polymath8 project, but perhaps may be of value to people who are following this project on a more casual basis.

While Zhang’s arguments (and our refinements of it) are quite lengthy, they are fortunately also very modular, that is to say they can be broken up into several independent components that can be understood and optimised more or less separately from each other (although we have on occasion needed to modify the formulation of one component in order to better suit the needs of another). At the top level, Zhang’s argument looks like this:

1. Statements of the form ${B[H]}$ are deduced from weakened versions of the Hardy-Littlewood prime tuples conjecture, which we have denoted ${DHL[k_0,2]}$ (the ${DHL}$ stands for “Dickson-Hardy-Littlewood”), by locating suitable narrow admissible tuples (see below). Zhang’s paper establishes for the first time an unconditional proof of ${DHL[k_0,2]}$ for some finite ${k_0}$; in his initial paper, ${k_0}$ was ${3,500,000}$, but we have lowered this value to ${1,466}$ (and provisionally to ${902}$). Any reduction in the value of ${k_0}$ leads directly to reductions in the value of ${H}$; a web site to collect the best known values of ${H}$ in terms of ${k_0}$ has recently been set up here (and is accepting submissions for anyone who finds narrower admissible tuples than are currently known).
2. Next, by adapting sieve-theoretic arguments of Goldston, Pintz, and Yildirim, the Dickson-Hardy-Littlewood type assertions ${DHL[k_0,2]}$ are deduced in turn from weakened versions of the Elliott-Halberstam conjecture that we have denoted ${MPZ[\varpi,\delta]}$ (the ${MPZ}$ stands for “Motohashi-Pintz-Zhang”). More recently, we have replaced the conjecture ${MPZ[\varpi,\delta]}$ by a slightly stronger conjecture ${MPZ'[\varpi,\delta]}$ to significantly improve the efficiency of this step (using some recent ideas of Pintz). Roughly speaking, these statements assert that the primes are more or less evenly distributed along many arithmetic progressions, including those that have relatively large spacing. A crucial technical fact here is that in contrast to the older Elliott-Halberstam conjecture, the Motohashi-Pintz-Zhang estimates only require one to control progressions whose spacings ${q}$ have a lot of small prime factors (the original ${MPZ[\varpi,\delta]}$ conjecture requires the spacing ${q}$ to be smooth, but the newer variant ${MPZ'[\varpi,\delta]}$ has relaxed this to “densely divisible” as this turns out to be more efficient). The ${\varpi}$ parameter is more important than the technical parameter ${\delta}$; we would like ${\varpi}$ to be as large as possible, as any increase in this parameter should lead to a reduced value of ${k_0}$. In Zhang’s original paper, ${\varpi}$ was taken to be ${1/1168}$; we have now increased this to be almost as large as ${1/148}$ (and provisionally ${1/108}$).
3. By a certain amount of combinatorial manipulation (combined with a useful decomposition of the von Mangoldt function due to Heath-Brown), estimates such as ${MPZ[\varpi,\delta]}$ can be deduced from three subestimates, the “Type I” estimate ${Type_I[\varpi,\delta,\sigma]}$, the “Type II” estimate ${Type_{II}[\varpi,\delta]}$, and the “Type III” estimate ${Type_{III}[\varpi,\delta,\sigma]}$, which all involve the distribution of certain Dirichlet convolutions in arithmetic progressions. Here ${1/10 < \sigma < 1/2}$ is an adjustable parameter that demarcates the border between the Type I and Type III estimates; raising ${\sigma}$ makes it easier to prove Type III estimates but harder to prove Type I estimates, and lowering ${\sigma}$ of course has the opposite effect. There is a combinatorial lemma that asserts that as long as one can find some ${\sigma}$ between ${1/10}$ and ${1/2}$ for which all three estimates ${Type_I[\varpi,\delta,\sigma]}$, ${Type_{II}[\varpi,\delta]}$, ${Type_{III}[\varpi,\delta,\sigma]}$ hold, one can prove ${MPZ[\varpi,\delta]}$. (The condition ${\sigma > 1/10}$ arises from the combinatorics, and appears to be rather essential; in fact, it is currently a major obstacle to further improvement of ${\varpi}$ and hence ${k_0}$ and ${H}$.)
4. The Type I estimates ${Type_I[\varpi,\delta,\sigma]}$ are asserting good distribution properties of convolutions of the form ${\alpha * \beta}$, where ${\alpha,\beta}$ are moderately long sequences which have controlled magnitude and length but are otherwise arbitrary. Estimates that are roughly of this type first appeared in a series of papers by Bombieri, Fouvry, Friedlander, Iwaniec, and other authors, and Zhang’s arguments here broadly follow those of previous authors, but with several new twists that take advantage of the many factors of the spacing ${q}$. In particular, the dispersion method of Linnik is used (which one can think of as a clever application of the Cauchy-Schwarz inequality) to ultimately reduce matters (after more Cauchy-Schwarz, as well as treatment of several error terms) to estimation of incomplete Kloosterman-type sums such as

$\displaystyle \sum_{n \leq N} e_d( \frac{c}{n} ).$

Zhang’s argument uses classical estimates on this Kloosterman sum (dating back to the work of Weil), but we have improved this using the “${q}$-van der Corput ${A}$-process” introduced by Heath-Brown and Ringrose.

5. The Type II estimates ${Type_{II}[\varpi,\delta]}$ are similar to the Type I estimates, but cover a small hole in the coverage of the Type I estimates which comes up when the two sequences ${\alpha,\beta}$ are almost equal in length. It turns out that one can modify the Type I argument to cover this case also. In practice, these estimates give less stringent conditions on ${\varpi,\delta}$ than the other two estimates, and so as a first approximation one can ignore the need to treat these estimates, although recently our Type I and Type III estimates have become so strong that it has become necessary to tighten the Type II estimates as well.
6. The Type III estimates ${Type_{III}[\varpi,\delta,\sigma]}$ are an averaged variant of the classical problem of understanding the distribution of the ternary divisor function ${\tau_3(n) := \sum_{abc=n} 1}$ in arithmetic progressions. There are various ways to attack this problem, but most of them ultimately boil down (after the use of standard devices such as Cauchy-Schwarz and completion of sums) to the task of controlling certain higher-dimensional Kloosterman-type sums such as

$\displaystyle \sum_{t,t' \in ({\bf Z}/d{\bf Z})^\times} \sum_{l \in {\bf Z}/d{\bf Z}: (l,d)=(l+k,d)=1} e_d( \frac{t}{l} - \frac{t'}{l+k} + \frac{m}{t} - \frac{m'}{t'} ).$

In principle, any such sum can be controlled by invoking Deligne’s proof of the Weil conjectures in arbitrary dimension (which, roughly speaking, establishes the analogue of the Riemann hypothesis for arbitrary varieties over finite fields), although in the higher dimensional setting some algebraic geometry is needed to ensure that one gets the full “square root cancellation” for these exponential sums. (For the particular sum above, the necessary details were worked out by Birch and Bombieri.) As such, this part of the argument is by far the least elementary component of the whole. Zhang’s original argument cleverly exploited some additional cancellation in the above exponential sums that goes beyond the naive square root cancellation heuristic; more recently, an alternate argument of Fouvry, Kowalski, Michel, and Nelson uses bounds on a slightly different higher-dimensional Kloosterman-type sum to obtain results that give better values of ${\varpi,\delta,\sigma}$. We have also been able to improve upon these estimates by exploiting some additional averaging that was left unused by the previous arguments.

As of this time of writing, our understanding of the first three stages of Zhang’s argument (getting from ${DHL[k_0,2]}$ to ${B[H]}$, getting from ${MPZ[\varpi,\delta]}$ or ${MPZ'[\varpi,\delta]}$ to ${DHL[k_0,2]}$, and getting to ${MPZ[\varpi,\delta]}$ or ${MPZ'[\varpi,\delta]}$ from Type I, Type II, and Type III estimates) is quite satisfactory, with the implications here being about as efficient as one could hope for with current methods, although one could still hope to get some small improvements in parameters by wringing out some of the last few inefficiencies. The remaining major sources of improvements to the parameters are then coming from gains in the Type I, II, and III estimates; we are currently in the process of making such improvements, but it will still take some time before they are fully optimised.

Below the fold I will discuss (mostly at an informal, non-rigorous level) the six steps above in a little more detail (full details can of course be found in the other polymath8 posts on this blog). This post will also serve as a new research thread, as the previous threads were getting quite lengthy.

This post is a continuation of the previous post on sieve theory, which is an ongoing part of the Polymath8 project to improve the various parameters in Zhang’s proof that bounded gaps between primes occur infinitely often. Given that the comments on that page are getting quite lengthy, this is also a good opportunity to “roll over” that thread.

We will continue the notation from the previous post, including the concept of an admissible tuple, the use of an asymptotic parameter ${x}$ going to infinity, and a quantity ${w}$ depending on ${x}$ that goes to infinity sufficiently slowly with ${x}$, and ${W := \prod_{p<w} p}$ (the ${W}$-trick).

The objective of this portion of the Polymath8 project is to make as efficient as possible the connection between two types of results, which we call ${DHL[k_0,2]}$ and ${MPZ[\varpi,\delta]}$. Let us first state ${DHL[k_0,2]}$, which has an integer parameter ${k_0 \geq 2}$:

Conjecture 1 (${DHL[k_0,2]}$) Let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then there are infinitely many translates ${n+{\mathcal H}}$ of ${{\mathcal H}}$ which contain at least two primes.

Zhang was the first to prove a result of this type with ${k_0 = 3,500,000}$. Since then the value of ${k_0}$ has been lowered substantially; at this time of writing, the current record is ${k_0 = 26,024}$.
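Recall that a ${k_0}$-tuple ${{\mathcal H}}$ of distinct integers is admissible if, for every prime ${p}$, the elements of ${{\mathcal H}}$ avoid at least one residue class mod ${p}$; only primes ${p \leq k_0}$ can possibly be obstructed, so this is a finite check. A minimal sketch of that check (illustrative; the actual searches for narrow admissible tuples use far more sophisticated methods):

```python
def is_admissible(H):
    """Check admissibility of the tuple H: for every prime p <= len(H),
    the elements of H must miss at least one residue class mod p.
    (Primes p > len(H) can never be fully covered by len(H) elements.)"""
    k = len(H)
    for p in range(2, k + 1):
        if any(p % q == 0 for q in range(2, p)):
            continue  # p is not prime
        if len({h % p for h in H}) == p:
            return False  # H covers every residue class mod p
    return True

# (0, 2) is admissible (the twin prime pattern), while (0, 1) is not:
# two consecutive integers always cover both classes mod 2.
print(is_admissible((0, 2)), is_admissible((0, 1)))  # True False
# A narrow admissible 5-tuple (realised e.g. by 7, 11, 13, 17, 19):
print(is_admissible((0, 4, 6, 10, 12)))  # True
```

The problem of finding the narrowest admissible ${k_0}$-tuple for a given ${k_0}$ is exactly the optimisation feeding into the deduction of ${B[H]}$ from ${DHL[k_0,2]}$.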

There are two basic ways known currently to attain this conjecture. The first is to use the Elliott-Halberstam conjecture ${EH[\theta]}$ for some ${\theta>1/2}$:

Conjecture 2 (${EH[\theta]}$) One has

$\displaystyle \sum_{1 \leq q \leq x^\theta} \sup_{a \in ({\bf Z}/q{\bf Z})^\times} |\sum_{n < x: n = a\ (q)} \Lambda(n) - \frac{1}{\phi(q)} \sum_{n < x} \Lambda(n)|$

$\displaystyle = O( \frac{x}{\log^A x} )$

for all fixed ${A>0}$. Here we use the abbreviation ${n=a\ (q)}$ for ${n=a \hbox{ mod } q}$.

Here of course ${\Lambda}$ is the von Mangoldt function and ${\phi}$ the Euler totient function. It is conjectured that ${EH[\theta]}$ holds for all ${0 < \theta < 1}$, but this is currently only known for ${0 < \theta < 1/2}$, an important result known as the Bombieri-Vinogradov theorem.
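To see concretely the kind of equidistribution these conjectures assert, one can compute the discrepancy in Conjecture 2 for tiny ${x}$ and prime moduli ${q}$ (a toy illustration only: the content of Elliott-Halberstam and Bombieri-Vinogradov is the uniformity in ${q}$ up to powers of ${x}$, which no finite computation captures):

```python
import math

def mangoldt(n):
    """von Mangoldt Lambda(n): log p if n is a power of the prime p, else 0."""
    if n < 2:
        return 0.0
    m = n
    for p in range(2, math.isqrt(n) + 1):
        if m % p == 0:
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return math.log(n)  # n itself is prime

x = 10000
Lam = [mangoldt(n) for n in range(x)]
total = sum(Lam)  # psi(x), which is ~ x by the prime number theorem

def discrepancy(q):
    """max over primitive a of |sum_{n<x, n=a (q)} Lambda(n) - psi(x)/phi(q)|,
    for prime q (so phi(q) = q-1)."""
    return max(abs(sum(Lam[a::q]) - total / (q - 1)) for a in range(1, q))

# The discrepancy is a small fraction of the main term psi(x)/phi(q):
for q in (3, 7, 11):
    print(q, discrepancy(q) / (total / (q - 1)))
```

Even at this tiny scale the relative discrepancies are only a few percent; the conjectures assert (in an averaged sense over ${q}$) power savings of ${\log^{-A} x}$ quality.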

In a breakthrough paper, Goldston, Pintz, and Yildirim established an implication of the form

$\displaystyle EH[\theta] \implies DHL[k_0,2] \ \ \ \ \ (1)$

for any ${1/2 < \theta < 1}$, where ${k_0 = k_0(\theta)}$ depends on ${\theta}$. This deduction was very recently optimised by Farkas, Pintz, and Revesz and also independently in the comments to the previous blog post, leading to the following implication:

Theorem 3 (EH implies DHL) Let ${1/2 < \theta < 1}$ be a real number, and let ${k_0 \geq 2}$ be an integer obeying the inequality

$\displaystyle 2\theta > \frac{j_{k_0-2}^2}{k_0(k_0-1)}, \ \ \ \ \ (2)$

where ${j_n}$ is the first positive zero of the Bessel function ${J_n(x)}$. Then ${EH[\theta]}$ implies ${DHL[k_0,2]}$.

Note that the right-hand side of (2) is larger than ${1}$, but tends asymptotically to ${1}$ as ${k_0 \rightarrow \infty}$. We give an alternate proof of Theorem 3 below the fold.
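The condition (2) is explicit enough to evaluate numerically. The following self-contained sketch computes ${j_{k_0-2}}$ from the power series of ${J_n}$ (a naive method that is adequate for the small orders used here; a serious computation would use a special-function library), and exhibits the right-hand side of (2) decreasing towards ${1}$:

```python
import math

def bessel_j(n, x):
    """J_n(x) for integer n >= 0, summed from the power series;
    adequate for the moderate values of n and x used here."""
    term = math.exp(n * math.log(x / 2) - math.lgamma(n + 1))
    total = term
    for m in range(1, 300):
        term *= -(x / 2) ** 2 / (m * (m + n))
        total += term
        if abs(term) < 1e-18 * max(abs(total), 1.0):
            break
    return total

def first_zero(n):
    """j_n, the first positive zero of J_n, by scan-and-bisect
    (valid since j_n > n and J_n > 0 on (0, j_n))."""
    x = max(n, 0.5)
    while bessel_j(n, x + 0.1) > 0:
        x += 0.1
    lo, hi = x, x + 0.1
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if bessel_j(n, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The right-hand side of (2): larger than 1, decreasing towards 1 as k0 grows.
for k0 in (2, 5, 10, 22):
    jn = first_zero(k0 - 2)
    print(k0, jn * jn / (k0 * (k0 - 1)))
```

For instance ${j_0 \approx 2.4048}$, so the ${k_0 = 2}$ value of the right-hand side is about ${2.89}$, while by ${k_0 = 22}$ it has dropped to about ${1.40}$.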

The implication in Theorem 3 was modified by Motohashi and Pintz, who replace ${EH[\theta]}$ by an easier conjecture ${MPZ[\varpi,\delta]}$ for some ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4+\varpi}$, at the cost of degrading the sufficient condition (2) slightly. In our notation, this conjecture takes the following form for each choice of parameters ${\varpi,\delta}$:

Conjecture 4 (${MPZ[\varpi,\delta]}$) Let ${{\mathcal H}}$ be a fixed ${k_0}$-tuple (not necessarily admissible) for some fixed ${k_0 \geq 2}$, and let ${b\ (W)}$ be a primitive residue class. Then

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} \sum_{a \in C(q)} |\Delta_{b,W}(\Lambda; q,a)| = O( x \log^{-A} x) \ \ \ \ \ (3)$

for any fixed ${A>0}$, where ${I = (w,x^{\delta})}$, ${{\mathcal S}_I}$ are the square-free integers whose prime factors lie in ${I}$, and ${\Delta_{b,W}(\Lambda;q,a)}$ is the quantity

$\displaystyle \Delta_{b,W}(\Lambda;q,a) := | \sum_{x \leq n \leq 2x: n=b\ (W); n = a\ (q)} \Lambda(n) \ \ \ \ \ (4)$

$\displaystyle - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x: n = b\ (W)} \Lambda(n)|,$

and ${C(q)}$ is the set of congruence classes

$\displaystyle C(q) := \{ a \in ({\bf Z}/q{\bf Z})^\times: P(a) = 0 \}$

and ${P}$ is the polynomial

$\displaystyle P(a) := \prod_{h \in {\mathcal H}} (a+h).$
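The set ${C(q)}$ is easy to compute directly for a small tuple, and one can check the bound ${|C(q)| \leq k_0^{\Omega(q)}}$ that will be used in the proof of Proposition 5 below (a toy computation for illustration; the tuple chosen here is arbitrary):

```python
import math

H = (0, 2, 6)  # an admissible 3-tuple, so k0 = 3

def residue_classes(q):
    """C(q) = { a in (Z/qZ)^* : P(a) = prod_{h in H} (a + h) = 0 mod q }."""
    return [a for a in range(q)
            if math.gcd(a, q) == 1
            and math.prod((a + h) % q for h in H) % q == 0]

def big_omega(q):
    """Omega(q): the number of prime factors of q, counted with multiplicity."""
    count, m, p = 0, q, 2
    while p * p <= m:
        while m % p == 0:
            m //= p
            count += 1
        p += 1
    return count + (1 if m > 1 else 0)

# |C(q)| versus the bound k0^Omega(q):
for q in (7, 15, 105):
    print(q, len(residue_classes(q)), len(H) ** big_omega(q))
```

For ${q=7}$ one finds ${C(7) = \{1,5\}}$ (the classes killing the factors ${a+6}$ and ${a+2}$; the root ${a \equiv 0}$ of the factor ${a+0}$ is excluded for not being a unit), comfortably within the bound ${3^1 = 3}$; by the Chinese remainder theorem ${|C(q)|}$ is multiplicative in ${q}$, which is the source of the general bound.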

This is a weakened version of the Elliott-Halberstam conjecture:

Proposition 5 (EH implies MPZ) Let ${0 < \varpi < 1/4}$ and ${0 < \delta < 1/4+\varpi}$. Then ${EH[1/2+2\varpi+\epsilon]}$ implies ${MPZ[\varpi,\delta]}$ for any ${\epsilon>0}$. (In abbreviated form: ${EH[1/2+2\varpi+]}$ implies ${MPZ[\varpi,\delta]}$.)

In particular, since ${EH[\theta]}$ is conjecturally true for all ${0 < \theta < 1}$, we conjecture ${MPZ[\varpi,\delta]}$ to be true for all ${0 < \varpi < 1/4}$ and ${0<\delta<1/4+\varpi}$.

Proof: Define

$\displaystyle E(q) := \sup_{a \in ({\bf Z}/q{\bf Z})^\times} |\sum_{x \leq n \leq 2x: n = a\ (q)} \Lambda(n) - \frac{1}{\phi(q)} \sum_{x \leq n \leq 2x} \Lambda(n)|$

then the hypothesis ${EH[1/2+2\varpi+\epsilon]}$ (applied to ${x}$ and ${2x}$ and then subtracting) tells us that

$\displaystyle \sum_{1 \leq q \leq Wx^{1/2+2\varpi}} E(q) \ll x \log^{-A} x$

for any fixed ${A>0}$. From the Chinese remainder theorem and the Siegel-Walfisz theorem we have

$\displaystyle \sup_{a \in ({\bf Z}/q{\bf Z})^\times} \Delta_{b,W}(\Lambda;q,a) \ll E(qW) + \frac{1}{\phi(q)} x \log^{-A} x$

for any ${q}$ coprime to ${W}$ (and in particular for ${q \in {\mathcal S}_I}$). Since ${|C(q)| \leq k_0^{\Omega(q)}}$, where ${\Omega(q)}$ is the number of prime divisors of ${q}$, we can thus bound the left-hand side of (3) by

$\displaystyle \ll \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} k_0^{\Omega(q)} E(qW) + k_0^{\Omega(q)} \frac{1}{\phi(q)} x \log^{-A} x.$

The contribution of the second term is ${O(x \log^{-A+O(1)} x)}$ by standard estimates (see Proposition 8 below). Using the very crude bound

$\displaystyle E(q) \ll \frac{1}{\phi(q)} x \log x$

and standard estimates we also have

$\displaystyle \sum_{q \in {\mathcal S}_I: q< x^{1/2+2\varpi}} k_0^{2\Omega(q)} E(qW) \ll x \log^{O(1)} x$

and the claim now follows from the Cauchy-Schwarz inequality. $\Box$

In practice, the conjecture ${MPZ[\varpi,\delta]}$ is easier to prove than ${EH[1/2+2\varpi+]}$ due to the restriction of the residue classes ${a}$ to ${C(q)}$, and also the restriction of the modulus ${q}$ to ${x^\delta}$-smooth numbers. Zhang proved ${MPZ[\varpi,\varpi]}$ for any ${0 < \varpi < 1/1168}$. More recently, our Polymath8 group has analysed Zhang’s argument (using in part a corrected version of the analysis of a recent preprint of Pintz) to obtain ${MPZ[\varpi,\delta]}$ whenever ${\delta, \varpi > 0}$ are such that

$\displaystyle 207\varpi + 43\delta < \frac{1}{4}.$

The work of Motohashi and Pintz, and later Zhang, implicitly describe arguments that allow one to deduce ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$ provided that ${k_0}$ is sufficiently large depending on ${\varpi,\delta}$. The best implication of this sort that we have been able to verify thus far is the following result, established in the previous post:

Theorem 6 (MPZ implies DHL) Let ${0 < \varpi < 1/4}$, ${0 < \delta < 1/4+\varpi}$, and let ${k_0 \geq 2}$ be an integer obeying the constraint

$\displaystyle 1+4\varpi > \frac{j_{k_0-2}^2}{k_0(k_0-1)} (1+\kappa) \ \ \ \ \ (5)$

where ${\kappa}$ is the quantity

$\displaystyle \kappa := \sum_{1 \leq n < \frac{1+4\varpi}{2\delta}} (1 - \frac{2n \delta}{1 + 4\varpi})^{k_0/2} \prod_{j=1}^{n} (1 + 3k_0 \log(1+\frac{1}{j})).$

Then ${MPZ[\varpi,\delta]}$ implies ${DHL[k_0,2]}$.

This complicated version of ${\kappa}$ is roughly of size ${3 \log(2) k_0 \exp( - k_0 \delta)}$. It is unlikely to be optimal; the work of Motohashi-Pintz and Pintz suggests that it can essentially be improved to ${\frac{1}{\delta} \exp(-k_0 \delta)}$, but currently we are unable to verify this claim. One of the aims of this post is to encourage further discussion as to how to improve the ${\kappa}$ term in results such as Theorem 6.
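Since ${\kappa}$ is given by a completely explicit finite sum, it can be evaluated numerically; the sketch below is a direct transcription of the formula, with arbitrary illustrative parameter values (not the optimised Polymath8 ones), and compares against the heuristic size just mentioned:

```python
import math

def kappa(k0, varpi, delta):
    """Numerical evaluation of the quantity kappa from Theorem 6."""
    total, n = 0.0, 1
    while 2 * n * delta < 1 + 4 * varpi:  # the range 1 <= n < (1+4*varpi)/(2*delta)
        prod = 1.0
        for j in range(1, n + 1):
            prod *= 1 + 3 * k0 * math.log(1 + 1 / j)
        total += (1 - 2 * n * delta / (1 + 4 * varpi)) ** (k0 / 2) * prod
        n += 1
    return total

# kappa only becomes small once k0*delta is fairly large; compare with the
# heuristic size 3 log(2) k0 exp(-k0 delta) quoted above.
k0, varpi, delta = 1000, 1 / 1000, 1 / 100
print(kappa(k0, varpi, delta), 3 * math.log(2) * k0 * math.exp(-k0 * delta))
```

With these values both quantities come out near ${0.09}$, consistent with the stated heuristic; increasing ${\delta}$ (hence ${k_0\delta}$) drives ${\kappa}$ down rapidly.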

We remark that as (5) is an open condition, it is unaffected by infinitesimal modifications to ${\varpi,\delta}$, and so we do not ascribe much importance to such modifications (e.g. replacing ${\varpi}$ by ${\varpi-\epsilon}$ for some arbitrarily small ${\epsilon>0}$).

The known deductions of ${DHL[k_0,2]}$ from claims such as ${EH[\theta]}$ or ${MPZ[\varpi,\delta]}$ rely on the following elementary observation of Goldston, Pintz, and Yildirim (essentially a weighted pigeonhole principle), which we have placed in “${W}$-tricked form”:

Lemma 7 (Criterion for DHL) Let ${k_0 \geq 2}$. Suppose that for each fixed admissible ${k_0}$-tuple ${{\mathcal H}}$ and each congruence class ${b\ (W)}$ such that ${b+h}$ is coprime to ${W}$ for all ${h \in {\mathcal H}}$, one can find a non-negative weight function ${\nu: {\bf N} \rightarrow {\bf R}^+}$, fixed quantities ${\alpha,\beta > 0}$, a quantity ${A>0}$, and a fixed positive power ${R}$ of ${x}$ such that one has the upper bound

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) \leq (\alpha+o(1)) A\frac{x}{W}, \ \ \ \ \ (6)$

the lower bound

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) \theta(n+h_i) \geq (\beta-o(1)) A\frac{x}{W} \log R \ \ \ \ \ (7)$

for all ${h_i \in {\mathcal H}}$, and the key inequality

$\displaystyle \frac{\log R}{\log x} > \frac{1}{k_0} \frac{\alpha}{\beta} \ \ \ \ \ (8)$

holds. Then ${DHL[k_0,2]}$ holds. Here ${\theta(n)}$ is defined to equal ${\log n}$ when ${n}$ is prime and ${0}$ otherwise.

Proof: Consider the quantity

$\displaystyle \sum_{x \leq n \leq 2x: n = b\ (W)} \nu(n) (\sum_{h \in {\mathcal H}} \theta(n+h) - \log(3x)). \ \ \ \ \ (9)$

By (6), (7), this quantity is at least

$\displaystyle k_0 \beta A\frac{x}{W} \log R - \alpha \log(3x) A\frac{x}{W} - o(A\frac{x}{W} \log x).$

By (8), this expression is positive for all sufficiently large ${x}$. On the other hand, (9) can only be positive if at least one summand is positive, which can only happen when ${n+{\mathcal H}}$ contains at least two primes for some ${x \leq n \leq 2x}$ with ${n=b\ (W)}$. Letting ${x \rightarrow \infty}$ we obtain ${DHL[k_0,2]}$ as claimed. $\Box$

In practice, the quantity ${R}$ (referred to as the sieve level) is a power of ${x}$ such as ${x^{\theta/2}}$ or ${x^{1/4+\varpi}}$, and reflects the strength of the distribution hypothesis ${EH[\theta]}$ or ${MPZ[\varpi,\delta]}$ that is available; the quantity ${R}$ will also be a key parameter in the definition of the sieve weight ${\nu}$. The factor ${A}$ reflects the order of magnitude of the expected density of ${\nu}$ in the residue class ${b\ (W)}$; it could be absorbed into the sieve weight ${\nu}$ by dividing that weight by ${A}$, but it is convenient to not enforce such a normalisation so as not to clutter up the formulae. In practice, ${A}$ will be some combination of ${\frac{\phi(W)}{W}}$ and ${\log R}$.

Once one has decided to rely on Lemma 7, the next main task is to select a good weight ${\nu}$ for which the ratio ${\alpha/\beta}$ is as small as possible (and for which the sieve level ${R}$ is as large as possible). To ensure non-negativity, we use the Selberg sieve

$\displaystyle \nu = \lambda^2, \ \ \ \ \ (10)$

where ${\lambda(n)}$ takes the form

$\displaystyle \lambda(n) = \sum_{d \in {\mathcal S}_I: d|P(n)} \mu(d) a_d$

for some weights ${a_d \in {\bf R}}$ vanishing for ${d>R}$ that are to be chosen, where ${I \subset (w,+\infty)}$ is an interval and ${P}$ is the polynomial ${P(n) := \prod_{h \in {\mathcal H}} (n+h)}$. If the distribution hypothesis is ${EH[\theta]}$, one takes ${R := x^{\theta/2}}$ and ${I := (w,+\infty)}$; if the distribution hypothesis is instead ${MPZ[\varpi,\delta]}$, one takes ${R := x^{1/4+\varpi}}$ and ${I := (w,x^\delta)}$.

One has a useful amount of flexibility in selecting the weights ${a_d}$ for the Selberg sieve. In the original work of Goldston, Pintz, and Yildirim, as well as in the subsequent paper of Zhang, the choice

$\displaystyle a_d := (\log \frac{R}{d})_+^{k_0+\ell_0}$

is used for some additional parameter ${\ell_0 > 0}$ to be optimised over. More generally, one can take

$\displaystyle a_d := g( \frac{\log d}{\log R} )$

for some suitable (in particular, sufficiently smooth) cutoff function ${g: {\bf R} \rightarrow {\bf R}}$. We will refer to this choice of sieve weights as the “analytic Selberg sieve”; this is the choice used in the analysis in the previous post.
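
As a quick numerical illustration of the divisor-sum structure of these weights (and only of the structure; all parameter values below are toy choices, far from those used in practice), one can tabulate ${\nu = \lambda^2}$ for the hypothetical choices ${{\mathcal H} = (0,2,6)}$, ${w = 3}$, ${R = 100}$ and cutoff ${g(t) = (1-t)^3}$:

```python
from math import log
from itertools import combinations

# Toy illustration of the analytic Selberg sieve weight (not the optimised sieve):
# lambda(n) is the sum, over squarefree d | P(n) with all prime factors > w and
# d <= R, of mu(d) * g(log d / log R).  All parameter choices are hypothetical.
H = (0, 2, 6)
w, R = 3, 100

def prime_factors(n):
    fs, p = [], 2
    while p * p <= n:
        while n % p == 0:
            fs.append(p)
            n //= p
        p += 1
    if n > 1:
        fs.append(n)
    return fs

def g(t):  # a simple cutoff supported on [0, 1]
    return (1 - t) ** 3 if 0 <= t <= 1 else 0.0

def lam(n):
    P = 1
    for h in H:
        P *= n + h
    ps = sorted({p for p in prime_factors(P) if p > w})
    total = 0.0
    for k in range(len(ps) + 1):
        for combo in combinations(ps, k):
            d = 1
            for p in combo:
                d *= p
            if d <= R:
                total += (-1) ** k * g(log(d) / log(R))  # mu(d) = (-1)^k
    return total

nu = {n: lam(n) ** 2 for n in range(10, 60)}  # manifestly non-negative weights
```

One can check for instance that ${\nu(n)}$ is relatively large at ${n=11}$, where ${n, n+2, n+6}$ are all prime and so ${P(n)}$ has few divisors in the relevant range.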

However, there is a slight variant choice of sieve weights that one can use, which I will call the “elementary Selberg sieve”, and it takes the form

$\displaystyle a_d := \frac{1}{\Phi(d) \Delta(d)} \sum_{q \in {\mathcal S}_I: (q,d)=1} \frac{1}{\Phi(q)} f'( \frac{\log dq}{\log R}) \ \ \ \ \ (11)$

for a sufficiently smooth function ${f: {\bf R} \rightarrow {\bf R}}$, where

$\displaystyle \Phi(d) := \prod_{p|d} \frac{p-k_0}{k_0}$

for ${d \in {\mathcal S}_I}$ is a ${k_0}$-variant of the Euler totient function, and

$\displaystyle \Delta(d) := \prod_{p|d} \frac{k_0}{p} = \frac{k_0^{\Omega(d)}}{d}$

for ${d \in {\mathcal S}_I}$ is a ${k_0}$-variant of the function ${1/d}$. (The derivative on the ${f}$ cutoff is convenient for computations, as will be made clearer later in this post.) This choice of weights ${a_d}$ may seem somewhat arbitrary, but it arises naturally when considering how to optimise the quadratic form

$\displaystyle \sum_{d_1,d_2 \in {\mathcal S}_I} \mu(d_1) a_{d_1} \mu(d_2) a_{d_2} \Delta([d_1,d_2])$

(which arises naturally in the estimation of ${\alpha}$ in (6)) subject to a fixed value of ${a_1}$ (which morally is associated to the estimation of ${\beta}$ in (7)); this is discussed in any sieve theory text as part of the general theory of the Selberg sieve, e.g. Friedlander-Iwaniec.

The use of the elementary Selberg sieve for the bounded prime gaps problem was studied by Motohashi and Pintz. Their arguments give an alternate derivation of ${DHL[k_0,2]}$ from ${MPZ[\varpi,\delta]}$ for ${k_0}$ sufficiently large, although unfortunately we were not able to confirm some of their calculations regarding the precise dependence of ${k_0}$ on ${\varpi,\delta}$, and in particular we have not yet been able to improve upon the specific criterion in Theorem 6 using the elementary sieve. However it is quite plausible that such improvements could become available with additional arguments.

Below the fold we describe how the elementary Selberg sieve can be used to reprove Theorem 3, and discuss how it could potentially be used to improve upon Theorem 6. (But the elementary Selberg sieve and the analytic Selberg sieve are in any event closely related; see the appendix of this paper of mine with Ben Green for some further discussion.) For the purposes of polymath8, either developing the elementary Selberg sieve or continuing the analysis of the analytic Selberg sieve from the previous post would be a relevant topic of conversation in the comments to this post.

Suppose one is given a ${k_0}$-tuple ${{\mathcal H} = (h_1,\ldots,h_{k_0})}$ of ${k_0}$ distinct integers for some ${k_0 \geq 1}$, arranged in increasing order. When is it possible to find infinitely many translates ${n + {\mathcal H} =(n+h_1,\ldots,n+h_{k_0})}$ of ${{\mathcal H}}$ which consist entirely of primes? The case ${k_0=1}$ is just Euclid’s theorem on the infinitude of primes, but the case ${k_0=2}$ is already open in general, with the ${{\mathcal H} = (0,2)}$ case being the notorious twin prime conjecture.

On the other hand, there are some tuples ${{\mathcal H}}$ for which one can easily answer the above question in the negative. For instance, the only translate of ${(0,1)}$ that consists entirely of primes is ${(2,3)}$, basically because each translate of ${(0,1)}$ must contain an even number, and the only even prime is ${2}$. More generally, if there is a prime ${p}$ such that ${{\mathcal H}}$ meets each of the ${p}$ residue classes ${0 \hbox{ mod } p, 1 \hbox{ mod } p, \ldots, p-1 \hbox{ mod } p}$, then every translate of ${{\mathcal H}}$ contains at least one multiple of ${p}$; since ${p}$ is the only multiple of ${p}$ that is prime, this shows that there are only finitely many translates of ${{\mathcal H}}$ that consist entirely of primes.

To avoid this obstruction, let us call a ${k_0}$-tuple ${{\mathcal H}}$ admissible if it avoids at least one residue class ${\hbox{ mod } p}$ for each prime ${p}$. It is easy to check for admissibility in practice, since a ${k_0}$-tuple is automatically admissible at every prime ${p}$ larger than ${k_0}$, so one only needs to check a finite number of primes in order to decide on the admissibility of a given tuple. For instance, ${(0,2)}$ or ${(0,2,6)}$ are admissible, but ${(0,2,4)}$ is not (because it covers all the residue classes modulo ${3}$). We then have the famous Hardy-Littlewood prime tuples conjecture:
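
This finite check is easy to mechanise; here is a short sketch (the helper names here are of course ad hoc):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(H):
    # Only primes p <= k_0 need checking: a k_0-tuple cannot occupy
    # all p residue classes when p > k_0.
    for p in primes_up_to(len(H)):
        if len({h % p for h in H}) == p:  # H covers every class mod p
            return False
    return True

assert is_admissible((0, 2)) and is_admissible((0, 2, 6))
assert not is_admissible((0, 2, 4))  # covers 0, 1, 2 mod 3
```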

Conjecture 1 (Prime tuples conjecture, qualitative form) If ${{\mathcal H}}$ is an admissible ${k_0}$-tuple, then there exist infinitely many translates of ${{\mathcal H}}$ that consist entirely of primes.

This conjecture is extremely difficult (containing the twin prime conjecture, for instance, as a special case), and in fact there is no explicitly known example of an admissible ${k_0}$-tuple with ${k_0 \geq 2}$ for which we can verify this conjecture (although, thanks to the recent work of Zhang, we know that ${(0,d)}$ satisfies the conclusion of the prime tuples conjecture for some ${0 < d < 70,000,000}$, even if we can’t yet say what the precise value of ${d}$ is).

Actually, Hardy and Littlewood conjectured a more precise version of Conjecture 1. Given an admissible ${k_0}$-tuple ${{\mathcal H} = (h_1,\ldots,h_{k_0})}$, and for each prime ${p}$, let ${\nu_p = \nu_p({\mathcal H}) := |{\mathcal H} \hbox{ mod } p|}$ denote the number of residue classes modulo ${p}$ that ${{\mathcal H}}$ meets; thus we have ${1 \leq \nu_p \leq p-1}$ for all ${p}$ by admissibility, and also ${\nu_p = k_0}$ for all ${p>h_{k_0}-h_1}$. We then define the singular series ${{\mathfrak G} = {\mathfrak G}({\mathcal H})}$ associated to ${{\mathcal H}}$ by the formula

$\displaystyle {\mathfrak G} := \prod_{p \in {\mathcal P}} \frac{1-\frac{\nu_p}{p}}{(1-\frac{1}{p})^{k_0}}$

where ${{\mathcal P} = \{2,3,5,\ldots\}}$ is the set of primes; by the previous discussion we see that the infinite product in ${{\mathfrak G}}$ converges to a finite non-zero number.
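
In fact one can approximate ${{\mathfrak G}}$ numerically by truncating the product: since ${\nu_p = k_0}$ for all ${p > h_{k_0}-h_1}$, the factors are ${1 + O(1/p^2)}$ and the truncation error decays quickly. A quick sketch (the cutoff ${10^5}$ is arbitrary):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def singular_series(H, prime_limit=10**5):
    """Truncation of the product defining the singular series of H."""
    k0 = len(H)
    G = 1.0
    for p in primes_up_to(prime_limit):
        nu_p = len({h % p for h in H})  # residue classes H meets mod p
        G *= (1 - nu_p / p) / (1 - 1 / p) ** k0
    return G
```

For instance, `singular_series((0, 2))` returns approximately ${1.32032}$, twice the twin prime constant ${\Pi_2}$, while `singular_series((0, 2, 4))` vanishes, reflecting the non-admissibility of that tuple.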

We will also need some asymptotic notation (in the spirit of “cheap nonstandard analysis“). We will need a parameter ${x}$ that one should think of going to infinity. Some mathematical objects (such as ${{\mathcal H}}$ and ${k_0}$) will be independent of ${x}$ and referred to as fixed; but unless otherwise specified we allow all mathematical objects under consideration to depend on ${x}$. If ${X}$ and ${Y}$ are two such quantities, we say that ${X = O(Y)}$ if one has ${|X| \leq CY}$ for some fixed ${C}$, and ${X = o(Y)}$ if one has ${|X| \leq c(x) Y}$ for some function ${c(x)}$ of ${x}$ (and of any fixed parameters present) that goes to zero as ${x \rightarrow \infty}$ (for each choice of fixed parameters).

Conjecture 2 (Prime tuples conjecture, quantitative form) Let ${k_0 \geq 1}$ be a fixed natural number, and let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then the number of natural numbers ${n < x}$ such that ${n+{\mathcal H}}$ consists entirely of primes is ${({\mathfrak G} + o(1)) \frac{x}{\log^{k_0} x}}$.

Thus, for instance, if Conjecture 2 holds, then the number of twin primes less than ${x}$ should equal ${(2 \Pi_2 + o(1)) \frac{x}{\log^2 x}}$, where ${\Pi_2}$ is the twin prime constant

$\displaystyle \Pi_2 := \prod_{p \in {\mathcal P}: p>2} (1 - \frac{1}{(p-1)^2}) = 0.6601618\ldots.$

As this conjecture is stronger than Conjecture 1, it is of course open. However there are a number of partial results on this conjecture. For instance, this conjecture is known to be true if one introduces some additional averaging in ${{\mathcal H}}$; see for instance this previous post. From the methods of sieve theory, one can obtain an upper bound of ${(C_{k_0} {\mathfrak G} + o(1)) \frac{x}{\log^{k_0} x}}$ for the number of ${n < x}$ with ${n + {\mathcal H}}$ all prime, where ${C_{k_0}}$ depends only on ${k_0}$. Sieve theory can also give analogues of Conjecture 2 if the primes are replaced by a suitable notion of almost prime (or more precisely, by a weight function concentrated on almost primes).

Another type of partial result towards Conjectures 1, 2 comes from the results of Goldston-Pintz-Yildirim, Motohashi-Pintz, and of Zhang. Following the notation of this recent paper of Pintz, for each ${k_0>2}$, let ${DHL[k_0,2]}$ denote the following assertion (DHL stands for “Dickson-Hardy-Littlewood”):

Conjecture 3 (${DHL[k_0,2]}$) Let ${{\mathcal H}}$ be a fixed admissible ${k_0}$-tuple. Then there are infinitely many translates ${n+{\mathcal H}}$ of ${{\mathcal H}}$ which contain at least two primes.

This conjecture gets harder as ${k_0}$ gets smaller. Note for instance that ${DHL[2,2]}$ would imply all the ${k_0=2}$ cases of Conjecture 1, including the twin prime conjecture. More generally, if one knew ${DHL[k_0,2]}$ for some ${k_0}$, then one would immediately conclude that there are an infinite number of pairs of consecutive primes of separation at most ${H(k_0)}$, where ${H(k_0)}$ is the minimal diameter ${h_{k_0}-h_1}$ amongst all admissible ${k_0}$-tuples ${{\mathcal H}}$. Values of ${H(k_0)}$ for small ${k_0}$ can be found at this link (with ${H(k_0)}$ denoted ${w}$ in that page). For large ${k_0}$, the best upper bounds on ${H(k_0)}$ have been found by using admissible ${k_0}$-tuples ${{\mathcal H}}$ of the form

$\displaystyle {\mathcal H} = ( - p_{m+\lfloor k_0/2\rfloor - 1}, \ldots, - p_{m+1}, -1, +1, p_{m+1}, \ldots, p_{m+\lfloor (k_0+1)/2\rfloor - 1} )$

where ${p_n}$ denotes the ${n^{th}}$ prime and ${m}$ is a parameter to be optimised over (in practice it is an order of magnitude or two smaller than ${k_0}$); see this blog post for details. The upshot is that one can bound ${H(k_0)}$ for large ${k_0}$ by a quantity slightly smaller than ${k_0 \log k_0}$ (and the large sieve inequality shows that this is sharp up to a factor of two, see e.g. this previous post for more discussion).
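
To see the construction in action with toy parameters (real applications take ${k_0}$ in the hundreds of thousands, and the resulting tuple need not be optimal for small ${k_0}$), one can search for the smallest ${m}$ that makes the displayed tuple admissible:

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(H):
    # only primes p <= k_0 need checking
    for p in primes_up_to(len(H)):
        if len({h % p for h in H}) == p:
            return False
    return True

primes = primes_up_to(10**4)  # primes[i-1] is the i-th prime p_i

def prime_based_tuple(k0, m):
    # (-p_{m+k0//2-1}, ..., -p_{m+1}, -1, +1, p_{m+1}, ..., p_{m+(k0+1)//2-1})
    left = [-primes[i - 1] for i in range(m + k0 // 2 - 1, m, -1)]
    right = [primes[i - 1] for i in range(m + 1, m + (k0 + 1) // 2)]
    return tuple(left) + (-1, 1) + tuple(right)

k0 = 6
m = next(m for m in range(1, 100) if is_admissible(prime_based_tuple(k0, m)))
H6 = prime_based_tuple(k0, m)
diameter = H6[-1] - H6[0]  # m = 3 gives (-11, -7, -1, 1, 7, 11), diameter 22
```

For comparison, narrower admissible ${6}$-tuples such as ${(0,4,6,10,12,16)}$ exist, with diameter ${16}$; the prime-based construction only becomes competitive for large ${k_0}$.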

In a key breakthrough, Goldston, Pintz, and Yildirim were able to establish the following conditional result a few years ago:

Theorem 4 (Goldston-Pintz-Yildirim) Suppose that the Elliott-Halberstam conjecture ${EH[\theta]}$ is true for some ${1/2 < \theta < 1}$. Then ${DHL[k_0,2]}$ is true for some finite ${k_0}$. In particular, this establishes an infinite number of pairs of consecutive primes of separation ${O(1)}$.

The dependence of constants between ${k_0}$ and ${\theta}$ given by the Goldston-Pintz-Yildirim argument is basically of the form ${k_0 \sim (\theta-1/2)^{-2}}$. (UPDATE: as recently observed by Farkas, Pintz, and Revesz, this relationship can be improved to ${k_0 \sim (\theta-1/2)^{-3/2}}$.)

Unfortunately, the Elliott-Halberstam conjecture (which we will state properly below) is only known for ${\theta<1/2}$, an important result known as the Bombieri-Vinogradov theorem. If one uses the Bombieri-Vinogradov theorem instead of the Elliott-Halberstam conjecture, Goldston, Pintz, and Yildirim were still able to show the highly non-trivial result that there were infinitely many pairs ${p_{n+1},p_n}$ of consecutive primes with ${(p_{n+1}-p_n) / \log p_n \rightarrow 0}$ (actually they showed more than this; see e.g. this survey of Soundararajan for details).

Actually, the full strength of the Elliott-Halberstam conjecture is not needed for these results. There is a technical specialisation of the Elliott-Halberstam conjecture which does not presently have a commonly accepted name; I will call it the Motohashi-Pintz-Zhang conjecture ${MPZ[\varpi]}$ in this post, where ${0 < \varpi < 1/4}$ is a parameter. We will define this conjecture more precisely later, but let us remark for now that ${MPZ[\varpi]}$ is a consequence of ${EH[\frac{1}{2}+2\varpi]}$.

We then have the following two theorems. Firstly, we have the following strengthening of Theorem 4:

Theorem 5 (Motohashi-Pintz-Zhang) Suppose that ${MPZ[\varpi]}$ is true for some ${0 < \varpi < 1/4}$. Then ${DHL[k_0,2]}$ is true for some ${k_0}$.

A version of this result (with a slightly different formulation of ${MPZ[\varpi]}$) appears in this paper of Motohashi and Pintz, while in the paper of Zhang, Theorem 5 is proven for the concrete values ${\varpi = 1/1168}$ and ${k_0 = 3,500,000}$. We will supply a self-contained proof of Theorem 5 below the fold, which improves the constants upon those in Zhang’s paper (in particular, for ${\varpi = 1/1168}$, we can take ${k_0}$ as low as ${341,640}$, with further improvements on the way). As with Theorem 4, we have an inverse quadratic relationship ${k_0 \sim \varpi^{-2}}$.

In his paper, Zhang obtained for the first time an unconditional advance on ${MPZ[\varpi]}$:

Theorem 6 (Zhang) ${MPZ[\varpi]}$ is true for all ${0 < \varpi \leq 1/1168}$.

This is a deep result, building upon the work of Fouvry-Iwaniec, Friedlander-Iwaniec and Bombieri-Friedlander-Iwaniec which established results of a similar nature to ${MPZ[\varpi]}$ but simpler in some key respects. We will not discuss this result further here, except to say that it relies on the (higher-dimensional case of the) Weil conjectures, which were famously proven by Deligne using methods from l-adic cohomology. Also, it was believed among at least some experts that the methods of Bombieri, Fouvry, Friedlander, and Iwaniec were not quite strong enough to obtain results of the form ${MPZ[\varpi]}$, making Theorem 6 a particularly impressive achievement.

Combining Theorem 6 with Theorem 5 we obtain ${DHL[k_0,2]}$ for some finite ${k_0}$; Zhang obtains this for ${k_0 = 3,500,000}$ but as detailed below, this can be lowered to ${k_0 = 341,640}$. This in turn gives infinitely many pairs of consecutive primes of separation at most ${H(k_0)}$. Zhang gives a simple argument that bounds ${H(3,500,000)}$ by ${70,000,000}$, giving his famous result that there are infinitely many pairs of primes of separation at most ${70,000,000}$; by being a bit more careful (as discussed in this post) one can lower the upper bound on ${H(3,500,000)}$ to ${57,554,086}$, and if one instead uses the newer value ${k_0 = 341,640}$ for ${k_0}$ one can instead use the bound ${H(341,640) \leq 4,982,086}$. (Many thanks to Scott Morrison for these numerics.) UPDATE: These values are now obsolete; see this web page for the latest bounds.

In this post we would like to give a self-contained proof of both Theorem 4 and Theorem 5, which are both sieve-theoretic results that are mainly elementary in nature. (But, as stated earlier, we will not discuss the deepest new result in Zhang’s paper, namely Theorem 6.) Our presentation will deviate a little bit from the traditional sieve-theoretic approach in a few places. Firstly, there is a portion of the argument that is traditionally handled using contour integration and properties of the Riemann zeta function; we will present a “cheaper” approach (which Ben Green and I used in our papers, e.g. in this one) using Fourier analysis, with the only property used about the zeta function ${\zeta(s)}$ being the elementary fact that it blows up like ${\frac{1}{s-1}}$ as one approaches ${1}$ from the right. To deal with the contribution of small primes (which is the source of the singular series ${{\mathfrak G}}$), it will be convenient to use the “${W}$-trick” (introduced in this paper of mine with Ben), passing to a single residue class mod ${W}$ (where ${W}$ is the product of all the small primes) to end up in a situation in which all small primes have been “turned off”, which leads to better pseudorandomness properties (for instance, once one eliminates all multiples of small primes, almost all pairs of remaining numbers will be coprime).

A finite group ${G=(G,\cdot)}$ is said to be a Frobenius group if there is a non-trivial subgroup ${H}$ of ${G}$ (known as the Frobenius complement of ${G}$) such that the conjugates ${gHg^{-1}}$ of ${H}$ are “as disjoint as possible” in the sense that ${H \cap gHg^{-1} = \{1\}}$ whenever ${g \not \in H}$. This gives a decomposition

$\displaystyle G = \bigcup_{gH \in G/H} (gHg^{-1} \backslash \{1\}) \cup K \ \ \ \ \ (1)$

where the Frobenius kernel ${K}$ of ${G}$ is defined as the identity element ${1}$ together with all the non-identity elements that are not conjugate to any element of ${H}$. Taking cardinalities, we conclude that

$\displaystyle |G| = \frac{|G|}{|H|} (|H| - 1) + |K|$

and hence

$\displaystyle |H| |K| = |G|. \ \ \ \ \ (2)$

A remarkable theorem of Frobenius gives an unexpected amount of structure on ${K}$ and hence on ${G}$:

Theorem 1 (Frobenius’ theorem) Let ${G}$ be a Frobenius group with Frobenius complement ${H}$ and Frobenius kernel ${K}$. Then ${K}$ is a normal subgroup of ${G}$, and hence (by (2) and the disjointness of ${H}$ and ${K}$ outside the identity) ${G}$ is the semidirect product ${K \rtimes H}$ of ${H}$ and ${K}$.
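
All of the above can be verified by brute force for a small Frobenius group such as ${G = S_3}$, with complement ${H}$ generated by a transposition and kernel ${K = A_3}$ (a quick sketch, with ad hoc permutation helpers):

```python
from itertools import permutations

def compose(a, b):  # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def conj(g, S):  # the conjugate g S g^{-1}
    return {compose(compose(g, s), inverse(g)) for s in S}

G = set(permutations(range(3)))  # S_3, a Frobenius group
e = (0, 1, 2)
H = {e, (1, 0, 2)}               # complement generated by a transposition

# Frobenius condition: H and gHg^{-1} meet only in the identity when g is not in H
assert all(conj(g, H) & H == {e} for g in G - H)

# Frobenius kernel: the identity together with the non-identity elements
# conjugate to no element of H
K = {e} | (G - set().union(*(conj(g, H) for g in G)))

assert len(H) * len(K) == len(G)        # the counting identity (2)
assert all(conj(g, K) == K for g in G)  # K is normal, as the theorem asserts
```

Here ${K}$ comes out to the set of ${3}$-cycles together with the identity, i.e. ${A_3}$, and ${G = K \rtimes H}$ as predicted.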

I discussed Frobenius’ theorem and its proof in this recent blog post. This proof uses the theory of characters on a finite group ${G}$, in particular relying on the fact that a character on a subgroup ${H}$ can induce a character on ${G}$, which can then be decomposed into irreducible characters with natural number coefficients. Remarkably, even though a century has passed since Frobenius’ original argument, there is no proof known of this theorem which avoids character theory entirely; there are elementary proofs known when the complement ${H}$ has even order or when ${H}$ is solvable (we review both of these cases below the fold), which by the Feit-Thompson theorem does cover all the cases, but the proof of the Feit-Thompson theorem involves plenty of character theory (and also relies on Theorem 1). (The answers to this MathOverflow question give a good overview of the current state of affairs.)

I have been playing around recently with the problem of finding a character-free proof of Frobenius’ theorem. I didn’t succeed in obtaining a completely elementary proof, but I did find an argument which replaces character theory (which can be viewed as coming from the representation theory of the non-commutative group algebra ${{\bf C} G \equiv L^2(G)}$) with the Fourier analysis of class functions (i.e. the representation theory of the centre ${Z({\bf C} G) \equiv L^2(G)^G}$ of the group algebra), thus replacing non-commutative representation theory by commutative representation theory. This is not a particularly radical departure from the existing proofs of Frobenius’ theorem, but it did seem to be a new proof which was technically “character-free” (even if it was not all that far from character-based in spirit), so I thought I would record it here.

The main ideas are as follows. The space ${L^2(G)^G}$ of class functions can be viewed as a commutative algebra with respect to the convolution operation ${*}$; as the regular representation is unitary and faithful, this algebra contains no nilpotent elements. As such, (Gelfand-style) Fourier analysis suggests that one can analyse this algebra through the idempotents: class functions ${\phi}$ such that ${\phi*\phi = \phi}$. In terms of characters, idempotents are nothing more than sums of the form ${\sum_{\chi \in \Sigma} \chi(1) \chi}$ for various collections ${\Sigma}$ of characters, but we can perform a fair amount of analysis on idempotents directly without recourse to characters. In particular, it turns out that idempotents enjoy some important integrality properties that can be established without invoking characters: for instance, by taking traces one can check that ${\phi(1)}$ is a natural number, and more generally we will show that ${{\bf E}_{(a,b) \in S} {\bf E}_{x \in G} \phi( a x b^{-1} x^{-1} )}$ is a natural number whenever ${S}$ is a subgroup of ${G \times G}$ (see Corollary 4 below). For instance, the quantity

$\displaystyle \hbox{rank}(\phi) := {\bf E}_{a \in G} {\bf E}_{x \in G} \phi(a xa^{-1} x^{-1})$

is a natural number which we will call the rank of ${\phi}$ (as it is also the linear rank of the transformation ${f \mapsto f*\phi}$ on ${L^2(G)}$).

In the case that ${G}$ is a Frobenius group with kernel ${K}$, the above integrality properties can be used after some elementary manipulations to establish that for any idempotent ${\phi}$, the quantity

$\displaystyle \frac{1}{|G|} \sum_{a \in K} {\bf E}_{x \in G} \phi( axa^{-1}x^{-1} ) - \frac{1}{|G| |K|} \sum_{a,b \in K} \phi(ab^{-1}) \ \ \ \ \ (3)$

is an integer. On the other hand, one can also show by elementary means that this quantity lies between ${0}$ and ${\hbox{rank}(\phi)}$. These two facts are not strong enough on their own to impose much further structure on ${\phi}$, unless one restricts attention to minimal idempotents ${\phi}$. In this case spectral theory (or Gelfand theory, or the fundamental theorem of algebra) tells us that ${\phi}$ has rank one, and then the integrality gap comes into play and forces the quantity (3) to always be either zero or one. This can be used to imply that the convolution action of every minimal idempotent ${\phi}$ either preserves ${\frac{|G|}{|K|} 1_K}$ or annihilates it, which makes ${\frac{|G|}{|K|} 1_K}$ itself an idempotent, which makes ${K}$ normal.

Suppose that ${G = (G,\cdot)}$ is a finite group of even order, thus ${|G|}$ is a multiple of two. By Cauchy’s theorem, this implies that ${G}$ contains an involution: an element ${g}$ in ${G}$ of order two. (Indeed, if no such involution existed, then ${G}$ would be partitioned into doubletons ${\{g,g^{-1}\}}$ together with the identity, so that ${|G|}$ would be odd, a contradiction.) Of course, groups of odd order have no involutions ${g}$, thanks to Lagrange’s theorem (an involution ${g}$ would partition ${G}$ into the doubletons ${\{ h, hg \}}$, forcing ${|G|}$ to be even).
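
For a concrete check, one can count involutions by brute force in a small even-order group such as ${S_4}$, which has order ${24}$ and exactly nine involutions (the six transpositions and the three double transpositions):

```python
from itertools import permutations

def compose(a, b):  # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(a)))

S4 = list(permutations(range(4)))
e = (0, 1, 2, 3)

# involutions: non-identity elements squaring to the identity
involutions = [g for g in S4 if g != e and compose(g, g) == e]

assert len(S4) == 24          # even order, so Cauchy guarantees involutions
assert len(involutions) == 9  # 6 transpositions + 3 double transpositions
```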

The classical Brauer-Fowler theorem asserts that if a group ${G}$ has many involutions, then it must have a large non-trivial subgroup:

Theorem 1 (Brauer-Fowler theorem) Let ${G}$ be a finite group with at least ${|G|/n}$ involutions for some ${n > 1}$. Then ${G}$ contains a proper subgroup ${H}$ of index at most ${n^2}$.

This theorem (which is Theorem 2F in the original paper of Brauer and Fowler, who in fact manage to sharpen ${n^2}$ slightly to ${n(n+2)/2}$) has a number of quick corollaries which are also referred to as “the” Brauer-Fowler theorem. For instance, if ${g}$ is an involution of a group ${G}$, and the centraliser ${C_G(g) := \{ h \in G: gh = hg\}}$ has order ${n}$, then clearly ${n \geq 2}$ (as ${C_G(g)}$ contains ${1}$ and ${g}$) and the conjugacy class ${\{ aga^{-1}: a \in G \}}$ has order ${|G|/n}$ (since the map ${a \mapsto aga^{-1}}$ has preimages that are cosets of ${C_G(g)}$). Every conjugate of an involution is again an involution, so by the Brauer-Fowler theorem ${G}$ contains a subgroup of order at least ${\max( n, |G|/n^2)}$. In particular, we can conclude that every group ${G}$ of even order contains a proper subgroup of order at least ${|G|^{1/3}}$.
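
The counting in the preceding paragraph is easy to confirm in a small example: in ${G = S_4}$, the involution ${g = (0\,1)}$ has centraliser of order ${n=4}$, and its conjugacy class (the six transpositions) indeed has order ${|G|/n = 6}$ (a brute-force sketch):

```python
from itertools import permutations

def compose(a, b):  # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

S4 = list(permutations(range(4)))
g = (1, 0, 2, 3)  # the transposition (0 1), an involution

centraliser = [h for h in S4 if compose(g, h) == compose(h, g)]
conj_class = {compose(compose(a, g), inverse(a)) for a in S4}

n = len(centraliser)
assert n * len(conj_class) == len(S4)  # |conjugacy class| = |G| / |centraliser|
assert len(conj_class) == 6            # the six transpositions of S_4
```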

Another corollary is that the size of a simple group of even order can be controlled by the size of a centraliser of one of its involutions:

Corollary 2 (Brauer-Fowler theorem) Let ${G}$ be a finite simple group with an involution ${g}$, and suppose that ${C_G(g)}$ has order ${n}$. Then ${G}$ has order at most ${(n^2)!}$.

Indeed, by the previous discussion ${G}$ has a proper subgroup ${H}$ of index at most ${n^2}$, which then gives a non-trivial permutation action of ${G}$ on the coset space ${G/H}$. The kernel of this action is a proper normal subgroup of ${G}$ and is thus trivial, so the action is faithful, and the claim follows.

If one assumes the Feit-Thompson theorem that all groups of odd order are solvable, then Corollary 2 suggests a strategy (first proposed by Brauer himself in 1954) to prove the classification of finite simple groups (CFSG) by induction on the order of the group. Namely, assume for contradiction that the CFSG failed, so that there is a counterexample ${G}$ of minimal order ${|G|}$ to the classification. This is a non-abelian finite simple group; by the Feit-Thompson theorem, it has even order and thus has at least one involution ${g}$. Take such an involution and consider its centraliser ${C_G(g)}$; this is a proper subgroup of ${G}$ of some order ${n < |G|}$. As ${G}$ is a minimal counterexample to the classification, one can in principle describe ${C_G(g)}$ in terms of the CFSG by factoring the group into simple components (via a composition series) and applying the CFSG to each such component. Now, the “only” thing left to do is to verify, for each isomorphism class of ${C_G(g)}$, that all the possible simple groups ${G}$ that could have this type of group as a centraliser of an involution obey the CFSG; Corollary 2 tells us that for each such isomorphism class for ${C_G(g)}$, there are only finitely many ${G}$ that could generate this class for one of its centralisers, so this task should be doable in principle for any given isomorphism class for ${C_G(g)}$. That’s all one needs to do to prove the classification of finite simple groups!

Needless to say, this program turns out to be far more difficult than the above summary suggests, and the actual proof of the CFSG does not quite proceed along these lines. However, a significant portion of the argument is based on a generalisation of this strategy, in which the concept of a centraliser of an involution is replaced by the more general notion of a normaliser of a ${p}$-group, and one studies not just a single normaliser but rather the entire family of such normalisers and how they interact with each other (and in particular, which normalisers of ${p}$-groups commute with each other), motivated in part by the theory of Tits buildings for Lie groups which dictates a very specific type of interaction structure between these ${p}$-groups in the key case when ${G}$ is a (sufficiently high rank) finite simple group of Lie type over a field of characteristic ${p}$. See the text of Aschbacher, Lyons, Smith, and Solomon for a more detailed description of this strategy.

The Brauer-Fowler theorem can be proven by a nice application of character theory, of the type discussed in this recent blog post, ultimately based on analysing the alternating tensor power of representations; I reproduce a version of this argument (taken from this text of Isaacs) below the fold. (The original argument of Brauer and Fowler is more combinatorial in nature.) However, I wanted to record a variant of the argument that relies not on the fine properties of characters, but on the cruder theory of quasirandomness for groups, the modern study of which was initiated by Gowers, and is discussed for instance in this previous post. It gives the following slightly weaker version of Corollary 2:

Corollary 3 (Weak Brauer-Fowler theorem) Let ${G}$ be a finite simple group with an involution ${g}$, and suppose that ${C_G(g)}$ has order ${n}$. Then ${G}$ can be identified with a subgroup of the unitary group ${U_{4n^3}({\bf C})}$.

One can get an upper bound on ${|G|}$ from this corollary using Jordan’s theorem, but the resulting bound is a bit weaker than that in Corollary 2 (and the best bounds on Jordan’s theorem require the CFSG!).

Proof: Let ${A}$ be the set of all involutions in ${G}$; then, as discussed above, ${|A| \geq |G|/n}$. We may assume that ${G}$ has no non-trivial unitary representation of dimension less than ${4n^3}$ (since such representations are automatically faithful by the simplicity of ${G}$); thus, in the language of quasirandomness, ${G}$ is ${4n^3}$-quasirandom, and is also non-abelian. We have the basic convolution estimate

$\displaystyle \|1_A * 1_A * 1_A - \frac{|A|^3}{|G|} \|_{\ell^\infty(G)} \leq (4n^3)^{-1/2} |G|^{1/2} |A|^{3/2}$

(see Exercise 10 from this previous blog post). In particular,

$\displaystyle 1_A * 1_A * 1_A(1) \geq \frac{|A|^3}{|G|} - (4n^3)^{-1/2} |G|^{1/2} |A|^{3/2} \geq \frac{1}{2n^3} |G|^2$

and so there are at least ${|G|^2/2n^3}$ pairs ${(g,h) \in A \times A}$ such that ${gh \in A^{-1} = A}$, i.e. involutions ${g,h}$ whose product is also an involution. But any such involutions necessarily commute, since

$\displaystyle g (gh) h = g^2 h^2 = 1 = (gh)^2 = g (hg) h.$

Thus there are at least ${|G|^2/2n^3}$ pairs ${(g,h) \in G \times G}$ of non-identity elements that commute, so by the pigeonhole principle there is a non-identity ${g \in G}$ whose centraliser ${C_G(g)}$ has order at least ${|G|/2n^3}$. This centraliser cannot be all of ${G}$ since this would make ${g}$ central which contradicts the non-abelian simple nature of ${G}$. But then the quasiregular representation of ${G}$ on ${G/C_G(g)}$ has dimension at most ${2n^3}$, contradicting the quasirandomness. $\Box$