[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]
The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function $\zeta$, defined by
$$\zeta(s) := \sum_{n=1}^\infty \frac{1}{n^s}$$
for $\mathrm{Re}(s) > 1$ and extended meromorphically to other values of $s$, and asserts that the only zeroes of $\zeta$ in the critical strip $\{ s: 0 \leq \mathrm{Re}(s) \leq 1 \}$ lie on the critical line $\{ s: \mathrm{Re}(s) = \frac{1}{2} \}$.
One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number $n$ has a unique factorisation into primes. Taking logarithms, we obtain the identity
$$\log n = \sum_{d|n} \Lambda(d) \qquad (1)$$
for any natural number $n$, where $\Lambda$ is the von Mangoldt function, thus $\Lambda(n) = \log p$ when $n$ is a power $p^j$ of a prime $p$ and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that
$$\sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \frac{\sum_{d|n} \Lambda(d)}{n^s},$$
formally at least. Writing $n = dm$, the right-hand side factors as
$$\Big( \sum_{d=1}^\infty \frac{\Lambda(d)}{d^s} \Big) \Big( \sum_{m=1}^\infty \frac{1}{m^s} \Big) = \Big( \sum_{d=1}^\infty \frac{\Lambda(d)}{d^s} \Big) \zeta(s),$$
whereas the left-hand side is (formally, at least) equal to $-\zeta'(s)$. We conclude the identity
$$\sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}$$
(formally, at least). If we integrate this, we are formally led to the identity
$$\sum_{n=2}^\infty \frac{\Lambda(n)}{\log n} \frac{1}{n^s} = \log \zeta(s),$$
or equivalently to the exponential identity
$$\zeta(s) = \exp\Big( \sum_{n=2}^\infty \frac{\Lambda(n)}{\log n} \frac{1}{n^s} \Big) \qquad (2)$$
which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as $\zeta(s)$ has a simple pole at $s = 1$ and zeroes at various places $s = \rho$ on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form
$$\zeta(s) = \frac{1}{s-1} \times \prod_\rho (s - \rho) \times \dots$$
(where we will be intentionally vague about what is hiding in the $\dots$ terms) and so we expect an expansion of the form
$$-\frac{\zeta'(s)}{\zeta(s)} = \frac{1}{s-1} - \sum_\rho \frac{1}{s-\rho} + \dots \qquad (3)$$
and hence on integrating in $s$ we formally have
$$\sum_{n=2}^\infty \frac{\Lambda(n)}{\log n} \frac{1}{n^s} = \log \frac{1}{s-1} + \sum_\rho \log(s-\rho) + \dots$$
and thus (invoking the formal approximation $\log \frac{1}{s-\sigma} \approx \sum_{n=2}^\infty \frac{n^{\sigma-1}}{\log n} \frac{1}{n^s}$) we have the heuristic approximation
$$\sum_{n=2}^\infty \frac{\Lambda(n)}{\log n} \frac{1}{n^s} \approx \sum_{n=2}^\infty \frac{1}{\log n} \Big( 1 - \sum_\rho n^{\rho-1} \Big) \frac{1}{n^s}.$$
Comparing this with (3) (or just comparing coefficients term by term), we are led to a heuristic form of the explicit formula
$$\Lambda(n) \approx 1 - \sum_\rho n^{\rho - 1}. \qquad (4)$$
When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function $1_{[1,x]}$ to obtain the formula
$$\sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^\rho}{\rho}, \qquad (5)$$
which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that $\mathrm{Re}(\rho) = \frac{1}{2}$ for all zeroes $\rho$, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that
$$\sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x )$$
as $x \to \infty$, giving a near-optimal “square root cancellation” for the sum $\sum_{n \leq x} \Lambda(n)$. Conversely, if one can somehow establish a bound of the form
$$\sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )$$
for any fixed $\epsilon > 0$, then the explicit formula can then be used to deduce that all zeroes $\rho$ of $\zeta$ have real part at most $\frac{1}{2}$, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form
$$\sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )$$
can be automatically amplified to the stronger bound
$$\sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \qquad (6)$$
with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line $\mathrm{Re}(s) = 1$, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem
$$\sum_{n \leq x} \Lambda(n) = x + o(x);$$
see e.g. this previous blog post for more discussion.
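The identity (1) and the square root cancellation heuristic are both easy to test numerically. The following Python snippet (my own illustration; the post itself contains no code) computes the von Mangoldt function by brute force, verifies that $\sum_{d|n} \Lambda(d) = \log n$, and measures how close the Chebyshev-type sum $\sum_{n \leq x} \Lambda(n)$ is to $x$:

```python
import math

def smallest_prime_factor(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power p^j of a prime p, else 0."""
    if n < 2:
        return 0.0
    p = smallest_prime_factor(n)
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

# the identity (1): summing Lambda over the divisors of n recovers log n
for n in range(1, 300):
    lhs = sum(von_mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    assert abs(lhs - math.log(n)) < 1e-9

# the explicit formula predicts this sum stays very close to x
def psi(x):
    return sum(von_mangoldt(n) for n in range(1, x + 1))

for x in (100, 1000, 10000):
    print(x, round(psi(x), 2))  # |psi(x) - x| is tiny compared to x
```

The error observed here is far smaller than the trivial bound, consistent with (but of course not proving) the square root cancellation predicted by the Riemann hypothesis.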
The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character $\chi: {\bf Z} \to {\bf C}$. The analogue of the Riemann zeta function is then the Dirichlet $L$-function
$$L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.$$
The identity (1), which encoded the fundamental theorem of arithmetic, can be twisted by $\chi$ to obtain
$$\chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d) \chi\big(\frac{n}{d}\big) \qquad (7)$$
and essentially the same manipulations as before eventually lead to the exponential identity
$$L(s,\chi) = \exp\Big( \sum_{n=2}^\infty \frac{\Lambda(n) \chi(n)}{\log n} \frac{1}{n^s} \Big) \qquad (8)$$
which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form
$$\Lambda(n) \chi(n) \approx -\sum_\rho n^{\rho-1} \qquad (9)$$
for non-principal $\chi$, where $\rho$ now ranges over the zeroes of $L(s,\chi)$ in the critical strip, rather than the zeroes of $\zeta$; a more accurate formulation, following (5), would be
$$\sum_{n \leq x} \Lambda(n) \chi(n) \approx -\sum_\rho \frac{x^\rho}{\rho}. \qquad (10)$$
(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet $L$-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of $L(s,\chi)$ in the critical strip also lie on the critical line, then we obtain the bound
$$\sum_{n \leq x} \Lambda(n) \chi(n) = O( x^{1/2} \log^2 x )$$
for any non-principal Dirichlet character $\chi$, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound
$$\sum_{n \leq x} \Lambda(n) \chi(n) = O( x^{1/2+o(1)} )$$
(where $o(1)$ denotes a quantity that goes to zero as $x \to \infty$ for any fixed $\chi$). Next, one can consider other number systems than the natural numbers ${\bf N}$ and integers ${\bf Z}$. For instance, one can replace the integers ${\bf Z}$ with rings ${\mathcal O}_K$ of integers in other number fields $K$ (i.e. finite extensions of the rationals ${\bf Q}$), such as the quadratic extensions ${\bf Q}[\sqrt{D}]$ of the rationals for various square-free integers $D$, in which case the ring of integers would be the ring of quadratic integers ${\mathcal O} = {\bf Z}[\omega]$ for a suitable generator $\omega$ (it turns out that one can take $\omega = \sqrt{D}$ if $D = 2, 3 \hbox{ mod } 4$, and $\omega = \frac{1+\sqrt{D}}{2}$ if $D = 1 \hbox{ mod } 4$). Here, it is not immediately obvious what the analogue of the natural numbers is in this setting, since rings such as ${\bf Z}[\sqrt{-5}]$ do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number $n$ generates a principal ideal $(n) = \{ an: a \in {\bf Z} \}$ in the integers, and conversely every non-trivial ideal in the integers is associated to precisely one natural number in this fashion, namely the norm $|{\bf Z}/(n)|$ of that ideal. So one can identify the natural numbers with the ideals of ${\bf Z}$. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if $p$ is prime, and $a, b$ are integers, then $ab \in (p)$ if and only if one of $a \in (p)$ or $b \in (p)$ is true. Finally, even in number systems (such as ${\bf Z}[\sqrt{-5}]$) in which the classical version of the fundamental theorem of arithmetic fails (e.g. $6 = 2 \times 3 = (1+\sqrt{-5})(1-\sqrt{-5})$), we have the fundamental theorem of arithmetic for ideals: every ideal in a Dedekind domain (which includes the ring of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals (although these ideals might not necessarily be principal). For instance, in ${\bf Z}[\sqrt{-5}]$, the principal ideal $(6)$ factors as the product of four prime (but non-principal) ideals $(2, 1+\sqrt{-5})$, $(2, 1-\sqrt{-5})$, $(3, 1+\sqrt{-5})$, $(3, 1-\sqrt{-5})$. (Note that the first two ideals $(2, 1+\sqrt{-5})$ and $(2, 1-\sqrt{-5})$ are actually equal to each other.) Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes.
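The failure of unique factorisation of elements in ${\bf Z}[\sqrt{-5}]$ is easy to verify numerically; the following sketch (my own illustration) uses the multiplicative norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$:

```python
# A small numeric illustration (my own, not from the post) of how unique
# factorisation of *elements* fails in Z[sqrt(-5)], using the multiplicative
# norm N(a + b*sqrt(-5)) = a^2 + 5*b^2.

def norm(a, b):
    """Norm of the element a + b*sqrt(-5)."""
    return a * a + 5 * b * b

# no element has norm 2 or 3: a^2 + 5b^2 is >= 5 when b != 0, and a square
# when b = 0, so it can never equal 2 or 3 (spot-checked on small a, b here)
attained = {norm(a, b) for a in range(-2, 3) for b in range(-1, 2)}
assert 2 not in attained and 3 not in attained

# so 2 (norm 4), 3 (norm 9) and 1 +- sqrt(-5) (norm 6) admit no non-trivial
# divisors and are irreducible, yet
#   2 * 3 = 6 = (1 + sqrt(-5)) * (1 - sqrt(-5)).
assert norm(2, 0) == 4 and norm(3, 0) == 9
assert norm(1, 1) == norm(1, -1) == 6
print("two genuinely different factorisations of 6 in Z[sqrt(-5)]")
```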
The analogue of the Riemann zeta function is now the Dedekind zeta function
$$\zeta_K(s) := \sum_{{\mathfrak n}} \frac{1}{N({\mathfrak n})^s}$$
where the summation is over all non-trivial ideals ${\mathfrak n}$ in ${\mathcal O}_K$, and $N({\mathfrak n})$ denotes the norm of ${\mathfrak n}$. One can also define a von Mangoldt function $\Lambda_K({\mathfrak n})$, defined as $\log N({\mathfrak p})$ when ${\mathfrak n}$ is a power of a prime ideal ${\mathfrak p}$, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),
$$\log N({\mathfrak n}) = \sum_{{\mathfrak d} | {\mathfrak n}} \Lambda_K({\mathfrak d}) \qquad (11)$$
which leads as before to an exponential identity
$$\zeta_K(s) = \exp\Big( \sum_{{\mathfrak n}} \frac{\Lambda_K({\mathfrak n})}{\log N({\mathfrak n})} \frac{1}{N({\mathfrak n})^s} \Big) \qquad (12)$$
and an explicit formula of the heuristic form
$$\Lambda_K({\mathfrak n}) \approx 1 - \sum_\rho N({\mathfrak n})^{\rho-1}$$
or, a little more accurately,
$$\sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) \approx x - \sum_\rho \frac{x^\rho}{\rho} \qquad (13)$$
in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function $\zeta_K$ leads to good asymptotics for the distribution of prime ideals, giving a bound of the form
$$\sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( x^{1/2} \log^2( D x^d ) )$$
where $D$ is the conductor of $\zeta_K$ (which, in the case of number fields, is the absolute value of the discriminant of ${\mathcal O}_K$) and $d$ is the degree of the extension of $K$ over ${\bf Q}$. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound
$$\sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( x^{1/2+o(1)} )$$
where $o(1)$ denotes a quantity that goes to zero as $x \to \infty$ (holding $K$ fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.
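To make the ideal-counting perspective concrete, here is a small numerical check (my own illustration, not taken from Iwaniec-Kowalski) for the simplest non-trivial number field $K = {\bf Q}(i)$. Here the Dedekind zeta function factors as $\zeta_K(s) = \zeta(s) L(s,\chi_4)$, so the number of ideals of ${\bf Z}[i]$ of norm $n$ should equal $\sum_{d|n} \chi_4(d)$:

```python
# Z[i] is a principal ideal domain, and each nonzero ideal has exactly 4
# generators (differing by the units 1, -1, i, -i), so the number of ideals
# of norm n is r2(n)/4, where r2(n) counts pairs (a, b) with a^2 + b^2 = n.
# Jacobi's formula says this equals sum_{d | n} chi4(d), with chi4 the
# non-trivial character mod 4 -- i.e. zeta_{Q(i)}(s) = zeta(s) L(s, chi4).

def ideals_of_norm(n):
    """Count ideals of Z[i] with norm n, by counting their generators."""
    r2 = sum(1 for a in range(-n, n + 1) for b in range(-n, n + 1)
             if a * a + b * b == n)
    assert r2 % 4 == 0
    return r2 // 4

def chi4(d):
    return {0: 0, 1: 1, 2: 0, 3: -1}[d % 4]

for n in range(1, 100):
    predicted = sum(chi4(d) for d in range(1, n + 1) if n % d == 0)
    assert ideals_of_norm(n) == predicted
print("zeta_{Q(i)} = zeta * L(chi4) verified coefficient-wise for n < 100")
```

For instance, $5 = (2+i)(2-i)$ splits, giving two ideals of norm $5$, while the inert prime $3$ gives no ideal of norm $3$.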
As was the case with the Dirichlet -functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.
Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line ${\bf A}^1$ and a finite field ${\bf F}_q$ of some order $q$. The polynomial functions on the affine line are just the usual polynomial ring ${\bf F}_q[t]$, which then plays the role of the integers ${\bf Z}$ (or natural numbers ${\bf N}$) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm $N(f)$ of a monic polynomial $f$ is the order of ${\bf F}_q[t]/(f)$, which can be computed explicitly as
$$N(f) = q^{\deg f}.$$
Because of this, we will normalise things slightly differently here and use $\deg f$ in place of $\log N(f)$ in what follows. The (local) zeta function $\zeta_{{\bf A}^1/{\bf F}_q}(s)$ is then defined as
$$\zeta_{{\bf A}^1/{\bf F}_q}(s) := \sum_f \frac{1}{N(f)^s}$$
where $f$ ranges over monic polynomials, and the von Mangoldt function $\Lambda(f)$ is defined to equal $\deg P$ when $f$ is a power of a monic irreducible polynomial $P$, and zero otherwise. Note that because $N(f)$ is always a power of $q$, the zeta function here is in fact periodic with period $\frac{2\pi i}{\log q}$. Because of this, it is customary to make a change of variables $T := q^{-s}$, so that
$$\zeta_{{\bf A}^1/{\bf F}_q}(s) = Z( {\bf A}^1/{\bf F}_q, T )$$
and $Z$ is the renormalised zeta function
$$Z( {\bf A}^1/{\bf F}_q, T ) := \sum_f T^{\deg f}. \qquad (14)$$
We have the analogue of (1) (or (7) or (11)):
$$\deg f = \sum_{g | f} \Lambda(g) \qquad (15)$$
where $g$ ranges over the monic polynomials dividing $f$; this leads as before to an exponential identity
$$Z( {\bf A}^1/{\bf F}_q, T ) = \exp\Big( \sum_f \frac{\Lambda(f)}{\deg f} T^{\deg f} \Big) \qquad (16)$$
analogous to (2), (8), or (12). It also leads to the explicit formula
$$\Lambda(f) \approx 1 - \sum_\rho N(f)^{\rho-1}$$
where $\rho$ are the zeroes of the original zeta function $\zeta_{{\bf A}^1/{\bf F}_q}$ (counting each residue class of the period $\frac{2\pi i}{\log q}$ just once), or equivalently
$$\Lambda(f) \approx 1 - \sum_\alpha \frac{\alpha^{\deg f}}{q^{\deg f}}$$
where the $\alpha = q^\rho$ are the reciprocals of the roots of the normalised zeta function (or to put it another way, $(1 - \alpha T)$ are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining
$$\sum_{\deg f = n} \Lambda(f) \approx q^n - \sum_\alpha \alpha^n.$$
As it turns out, in the function field setting, the zeta functions are always rational functions of $T$ (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant factor, thus
$$\sum_{\deg f = n} \Lambda(f) = q^n + c - \sum_\alpha \alpha^n \qquad (17)$$
for an explicit integer $c$ (independent of $n$) arising from any potential pole of $Z$ at $T = 1$. In the case of the affine line ${\bf A}^1$, the situation is particularly simple, because the zeta function is easy to compute. Indeed, since there are exactly $q^n$ monic polynomials of a given degree $n$, we see from (14) that
$$Z( {\bf A}^1/{\bf F}_q, T ) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1-qT},$$
so in fact there are no zeroes whatsoever, and no pole at $T = 1$ either, so we have an exact prime number theorem for this function field:
$$\sum_{\deg f = n} \Lambda(f) = q^n. \qquad (18)$$
Among other things, this tells us that the number of irreducible monic polynomials of degree $n$ is $\frac{q^n}{n} + O( \frac{q^{n/2}}{n} )$.
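These counts are easy to test numerically. The following sketch (my own illustration) brute-forces the irreducible monic polynomials over ${\bf F}_2$, compares the count in each degree $n$ with the exact Möbius-inversion formula $\frac{1}{n} \sum_{d|n} \mu(d) q^{n/d}$ (whose main term is $\frac{q^n}{n}$), and confirms the underlying identity $\sum_{d|n} d \cdot \#\{\hbox{irreducibles of degree } d\} = q^n$:

```python
# Brute-force check over F_2, encoding a polynomial as a bitmask
# (bit i = coefficient of t^i); all nonzero polynomials over F_2 are monic.

def polymod(f, g):
    """Remainder of f modulo g in F_2[t]."""
    dg = g.bit_length() - 1
    while f and f.bit_length() - 1 >= dg:
        f ^= g << (f.bit_length() - 1 - dg)
    return f

def is_irreducible(f):
    deg = f.bit_length() - 1
    if deg < 1:
        return False
    # trial division by every polynomial of degree 1 .. deg // 2
    return all(polymod(f, g) != 0 for g in range(2, 1 << (deg // 2 + 1)))

def mobius(n):
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            mu = -mu
        d += 1
    return -mu if n > 1 else mu

q = 2
counts = {}
for n in range(1, 11):
    # monic polynomials of degree n are the bitmasks 2^n .. 2^{n+1} - 1
    counts[n] = sum(1 for f in range(1 << n, 1 << (n + 1)) if is_irreducible(f))
    formula = sum(mobius(d) * q ** (n // d)
                  for d in range(1, n + 1) if n % d == 0) // n
    assert counts[n] == formula
    # the exact prime number theorem: sum of d * (#irreducibles of degree d)
    # over d dividing n equals q^n
    assert sum(d * counts[d] for d in range(1, n + 1) if n % d == 0) == q ** n
print(counts)  # 2 linear, 1 quadratic, 2 cubic, 3 quartic irreducibles, ...
```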
We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial $f \in {\bf F}_q[t]$ through its roots, which are a finite set of points in the algebraic closure $\overline{{\bf F}_q}$ of the finite field ${\bf F}_q$ (or more suggestively, as points on the affine line ${\bf A}^1( \overline{{\bf F}_q} )$). The number of such points (counting multiplicity) is the degree $\deg f$, and from the factor theorem, the set of points determines the monic polynomial $f$ (or, if one removes the monic hypothesis, it determines the polynomial $f$ projectively). These points have an action of the Galois group $\mathrm{Gal}( \overline{{\bf F}_q} / {\bf F}_q )$. It is a classical fact that this Galois group is in fact a cyclic group, (topologically) generated by a single element, the (geometric) Frobenius map $\mathrm{Frob}: x \mapsto x^q$, which fixes the elements of the original finite field ${\bf F}_q$ but permutes the other elements of $\overline{{\bf F}_q}$. Thus the roots of a given polynomial $f$ split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if $f$ is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.
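To see this orbit structure explicitly, here is a small computation (my own sketch) in the field of $16$ elements, realised as ${\bf F}_2[t]/(t^4+t+1)$; the orbit sizes of the squaring (Frobenius) map line up with the monic irreducible polynomials over ${\bf F}_2$ of degree dividing $4$ (two linear, one quadratic, three quartic):

```python
# F_16 = F_2[t]/(t^4 + t + 1), with elements encoded as 4-bit integers
# (bit i = coefficient of t^i).  The Frobenius map x -> x^2 partitions F_16
# into orbits; each orbit is the root set of one monic irreducible over F_2.

MOD = 0b10011  # t^4 + t + 1, irreducible over F_2

def mul(f, g):
    """Multiply in F_16: carry-less product reduced mod t^4 + t + 1."""
    out = 0
    while g:
        if g & 1:
            out ^= f
        f <<= 1
        g >>= 1
    for shift in range(out.bit_length() - 5, -1, -1):
        if out & (1 << (shift + 4)):
            out ^= MOD << shift
    return out

frob = {x: mul(x, x) for x in range(16)}  # the squaring map

seen, orbit_sizes = set(), []
for x in range(16):
    if x not in seen:
        orbit = {x}
        y = frob[x]
        while y not in orbit:
            orbit.add(y)
            y = frob[y]
        seen |= orbit
        orbit_sizes.append(len(orbit))

# two fixed points (F_2), one orbit of size 2 (F_4 \ F_2), three of size 4
print(sorted(orbit_sizes))  # [1, 1, 2, 4, 4, 4]
```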
Now consider the degree $n$ finite field extension ${\bf F}_{q^n}$ of ${\bf F}_q$ (it is a classical fact that there is exactly one such extension up to isomorphism for each $n$); this is a subfield of $\overline{{\bf F}_q}$ of order $q^n$. (Here we are performing a standard abuse of notation by overloading the subscripts in the ${\bf F}$ notation; thus ${\bf F}_{q^n}$ can denote either the field of order $q^n$, or the extension of ${\bf F}_q$ of degree $n$; fortunately, these two fields coincide, so we hope this overloading will not cause confusion.) Each point $x$ in this extension (or, more suggestively, each point of the affine line ${\bf A}^1({\bf F}_{q^n})$ over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of $x$. Since the Frobenius action is periodic of period $n$ on ${\bf F}_{q^n}$, the degree of this minimal polynomial must divide $n$. Conversely, every monic irreducible polynomial $P$ of degree $d$ dividing $n$ produces $d$ distinct zeroes that lie in ${\bf F}_{q^d}$ (here we use the classical fact that finite fields are perfect) and hence in ${\bf F}_{q^n}$. We have thus partitioned ${\bf A}^1({\bf F}_{q^n})$ into Frobenius orbits (also known as closed points), with each monic irreducible polynomial $P$ of degree $d$ dividing $n$ contributing an orbit of size $d$. From this we conclude a geometric interpretation of the left-hand side of (18):
$$\sum_{\deg f = n} \Lambda(f) = \sum_{x \in {\bf A}^1({\bf F}_{q^n})} 1. \qquad (19)$$
The identity (18) thus is equivalent to the thoroughly boring fact that the number of ${\bf F}_{q^n}$-points on the affine line ${\bf A}^1$ is equal to $q^n$. However, things become much more interesting if one then replaces the affine line by a more general (geometrically) irreducible curve $C$ defined over ${\bf F}_q$; for instance one could take $C$ to be an elliptic curve
$$C = \{ (x,y): y^2 = x^3 + ax + b \} \qquad (20)$$
for some suitable $a, b \in {\bf F}_q$, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of ${\bf F}_q$-rational points removed). The analogue of ${\bf F}_q[t]$ is then the coordinate ring of $C$ (for instance, in the case of the elliptic curve (20) it would be ${\bf F}_q[x,y] / (y^2 - x^3 - ax - b)$), with polynomials in this ring producing a set of roots in the curve that is again invariant with respect to the Frobenius action (acting on the $x$ and $y$ coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on $C$ will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function
$$\zeta_{C/{\bf F}_q}(s) := \sum_{{\mathfrak f}} \frac{1}{N({\mathfrak f})^s}$$
and a von Mangoldt function $\Lambda({\mathfrak f})$ as before, where ${\mathfrak f}$ would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve $C$; as the curve $C$ is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points in $C$, or equivalently an effective divisor of $C$; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of $C$. With this dictionary, the zeta function becomes
$$\zeta_{C/{\bf F}_q}(s) = \sum_D \frac{1}{q^{s \deg D}}$$
where the sum is over effective rational divisors $D$ of $C$ (with $\deg D$ being the degree of an effective divisor $D$), or equivalently
$$Z( C/{\bf F}_q, T ) = \sum_D T^{\deg D}.$$
The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes
$$\sum_{\deg {\mathfrak f} = n} \Lambda({\mathfrak f}) = \sum_{x \in C({\bf F}_{q^n})} 1;$$
thus this sum is simply counting the number of ${\bf F}_{q^n}$-points of $C$. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then
$$Z( C/{\bf F}_q, T ) = \exp\Big( \sum_{n=1}^\infty \frac{|C({\bf F}_{q^n})|}{n} T^n \Big) \qquad (21)$$
and the analogue of the explicit formula (17) (or (5), (10) or (13)) is
$$|C({\bf F}_{q^n})| = q^n + c - \sum_\alpha \alpha^n \qquad (22)$$
where $\alpha$ runs over the (reciprocal) zeroes of $Z( C/{\bf F}_q, T )$ (counting multiplicity), and $c$ is an integer independent of $n$. (As it turns out, $c$ equals $1$ when $C$ is a projective curve, and more generally equals $1-k$ when $C$ is a projective curve with $k$ rational points deleted.)
To evaluate $Z( C/{\bf F}_q, T )$, one needs to count the number of effective divisors of a given degree on the curve $C$. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when $C$ is projective) that $Z( C/{\bf F}_q, T )$ is in fact a rational function of $T$, with a finite number of zeroes, and a simple pole at both $T = 1$ and $T = 1/q$, with similar results when one deletes some rational points from $C$; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have
$$|C({\bf F}_{q^n})| = q^n - \alpha^n - \beta^n$$
for two complex numbers $\alpha, \beta$ depending on $C$ and $q$.
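As a sanity check (my own illustration, with an arbitrarily chosen small curve), one can verify this two-term structure numerically: counting points of $y^2 = x^3 + x + 1$ over ${\bf F}_5$ and ${\bf F}_{25}$, the two counts must be consistent with a single pair $\alpha, \beta$ satisfying $\alpha \beta = q$ and $\alpha + \beta = q - |C({\bf F}_q)|$, which for $n = 2$ forces $|C({\bf F}_{q^2})| = q^2 + 2q - (q - |C({\bf F}_q)|)^2$:

```python
# Verify |C(F_{q^n})| = q^n - alpha^n - beta^n for the affine curve
# y^2 = x^3 + x + 1 over F_5, by eliminating alpha, beta between n = 1, 2.
p, a, b = 5, 1, 1   # a nonsingular choice: 4a^3 + 27b^2 != 0 mod p

# count points over F_p directly
N1 = sum(1 for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)

# model F_{p^2} = F_p[t]/(t^2 - r) for a quadratic non-residue r
r = next(c for c in range(2, p) if all(z * z % p != c for z in range(p)))

def mul(u, v):
    """Multiply (u0 + u1 t)(v0 + v1 t) in F_{p^2}, with t^2 = r."""
    return ((u[0] * v[0] + r * u[1] * v[1]) % p,
            (u[0] * v[1] + u[1] * v[0]) % p)

def f(x):
    """x^3 + a*x + b in F_{p^2}."""
    x3 = mul(x, mul(x, x))
    return ((x3[0] + a * x[0] + b) % p, (x3[1] + a * x[1]) % p)

field = [(u, v) for u in range(p) for v in range(p)]
N2 = sum(1 for x in field for y in field if mul(y, y) == f(x))

# consistency with a single pair alpha, beta with alpha*beta = p:
assert N2 == p * p + 2 * p - (p - N1) ** 2
print(N1, N2)
```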
The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of $\zeta_{C/{\bf F}_q}$ lie on the critical line, or equivalently that all the roots $\alpha$ in (22) have modulus $\sqrt{q}$, so that (22) then gives the asymptotic
$$|C({\bf F}_{q^n})| = q^n + O( q^{n/2} ) \qquad (23)$$
where the implied constant depends only on the genus of $C$ (and on the number of points removed from $C$). For instance, for elliptic curves we have the Hasse bound
$$\big| |C({\bf F}_q)| - q \big| \leq 2 \sqrt{q}.$$
As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.
$$|C({\bf F}_{q^n})| = q^n + O( q^{(\frac{1}{2}+o(1)) n} ) \qquad (24)$$
(where $o(1)$ denotes a quantity that goes to zero as $n \to \infty$), then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for $n$ large, which then amplifies to the optimal bound (23) for all $n$ (and in particular for $n = 1$). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with $o(1)$-type losses over the optimal bound (such as (24)) into an essentially optimal bound with no such losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of $n$.
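The Hasse bound itself is easy to test exhaustively for small primes (again my own illustration, not from the post):

```python
# Brute-force check of the Hasse bound: the affine point count of
# y^2 = x^3 + a*x + b over F_p stays within 2*sqrt(p) of p whenever the
# curve is nonsingular (discriminant condition 4a^3 + 27b^2 != 0 mod p).
import math

def affine_points(p, a, b):
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - (x ** 3 + a * x + b)) % p == 0)

for p in (5, 7, 11, 13):
    for a in range(p):
        for b in range(p):
            if (4 * a ** 3 + 27 * b ** 2) % p == 0:
                continue  # singular curve: Hasse does not apply
            N = affine_points(p, a, b)
            assert abs(N - p) <= 2 * math.sqrt(p)
print("Hasse bound verified for all nonsingular curves over F_5 .. F_13")
```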
Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet $L$-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve $C \subset {\bf A}^2$ and a non-trivial additive character $\psi: {\bf F}_q \to {\bf C}^\times$, thus $\psi(x+y) = \psi(x) \psi(y)$ for all $x, y \in {\bf F}_q$. Given a rational effective divisor $D = (x_1,y_1) + \dots + (x_k,y_k)$ on $C$, the sum $x_1 + y_1 + \dots + x_k + y_k$ is Frobenius-invariant and thus lies in ${\bf F}_q$. By abuse of notation, we may thus define $\psi$ on such divisors by
$$\psi(D) := \psi( x_1 + y_1 + \dots + x_k + y_k )$$
and observe that $\psi$ is multiplicative in the sense that $\psi(D_1 + D_2) = \psi(D_1) \psi(D_2)$ for rational effective divisors $D_1, D_2$. One can then define $\psi({\mathfrak f})$ for any non-trivial ideal ${\mathfrak f}$ by replacing that ideal with the associated rational effective divisor; for instance, if $f$ is a polynomial in the coordinate ring of $C$, with zeroes at $(x_1,y_1), \dots, (x_k,y_k)$, then $\psi(f)$ is $\psi( x_1 + y_1 + \dots + x_k + y_k )$. Again, we have the multiplicativity property $\psi(fg) = \psi(f) \psi(g)$. If we then form the twisted normalised zeta function
$$Z( C/{\bf F}_q, \psi, T ) := \sum_D \psi(D) T^{\deg D}$$
then by twisting the previous analysis, we eventually arrive at the exponential identity
$$Z( C/{\bf F}_q, \psi, T ) = \exp\Big( \sum_{n=1}^\infty \frac{S_n}{n} T^n \Big) \qquad (25)$$
in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums $S_n$ are defined by
$$S_n := \sum_{(x,y) \in C({\bf F}_{q^n})} \psi( \mathrm{Tr}( x + y ) )$$
where the trace $\mathrm{Tr}(z)$ of an element $z$ of ${\bf F}_{q^n}$ is defined by the formula
$$\mathrm{Tr}(z) := z + z^q + z^{q^2} + \dots + z^{q^{n-1}}.$$
In particular, $S_1$ is the exponential sum
$$S_1 = \sum_{(x,y) \in C({\bf F}_q)} \psi( x + y )$$
which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum
$$K(a,b;p) := \sum_{x \in {\bf F}_p^\times} e_p\big( ax + \frac{b}{x} \big)$$
as a special case, where $q = p$ is a prime, $e_p(x) := e^{2\pi i x / p}$, and $C$ is the hyperbola $\{ (x,y): xy = ab \}$. (NOTE: the sign conventions for the companion sum $S_n$ are not consistent across the literature, sometimes it is $-S_n$ which is referred to as the companion sum.)
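The sum $K(a,b;p)$ can be computed directly; the following sketch (my own) checks numerically that it is real and exhibits square root cancellation $|K(a,b;p)| \leq 2\sqrt{p}$:

```python
# Direct computation of Kloosterman sums and a check of square root
# cancellation |K(a,b;p)| <= 2*sqrt(p) for a small prime.
import cmath, math

def kloosterman(a, b, p):
    """K(a,b;p) = sum over x in F_p^* of e_p(a*x + b*x^{-1})."""
    return sum(cmath.exp(2j * cmath.pi *
                         ((a * x + b * pow(x, p - 2, p)) % p) / p)
               for x in range(1, p))

p = 101
for a in range(1, p):
    K = kloosterman(a, 1, p)
    assert abs(K.imag) < 1e-8                 # x -> -x conjugates terms, so K is real
    assert abs(K) <= 2 * math.sqrt(p) + 1e-8  # square root cancellation
print("square root cancellation holds for K(a,1;101), a = 1..100")
```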
If $\psi$ is non-principal (and $C$ is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that $Z( C/{\bf F}_q, \psi, T )$ is a rational function of $T$, with no pole at $T = 1/q$, and one then gets an explicit formula of the form
$$S_n = -\sum_\alpha \alpha^n \qquad (26)$$
for the companion sums, where $\alpha$ are the reciprocals of the zeroes of $Z( C/{\bf F}_q, \psi, T )$, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form
$$\sum_{x \in {\bf F}_{p^n}^\times} e_p\big( \mathrm{Tr}( ax + \frac{b}{x} ) \big) = -\alpha^n - \beta^n \qquad (27)$$
for all $n$ and some complex numbers $\alpha, \beta$ depending on $a, b, p$, where we have abbreviated $\mathrm{Tr}_{{\bf F}_{p^n}/{\bf F}_p}$ as $\mathrm{Tr}$. As before, the Riemann hypothesis for $Z( C/{\bf F}_q, \psi, T )$ then gives a square root cancellation bound of the form
$$S_n = O( q^{n/2} ) \qquad (28)$$
for the companion sums (and in particular gives the very explicit Weil bound $|K(a,b;p)| \leq 2\sqrt{p}$ for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound
$$S_n = O( q^{(\frac{1}{2}+o(1)) n} ).$$
As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.
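The two-term structure of the Kloosterman companion sums can also be tested numerically: with the convention $S_n = -\alpha^n - \beta^n$ and the standard fact $\alpha \beta = p$ (sign conventions vary, as noted above), the $n = 2$ companion sum is forced to equal $2p - S_1^2$. The following sketch (my own illustration) verifies this by computing $S_2$ directly over ${\bf F}_{p^2}$:

```python
# Check S_2 = 2p - S_1^2 for Kloosterman companion sums, computing S_2
# directly over F_{p^2} = F_p[t]/(t^2 - r) with r a quadratic non-residue.
import cmath

p, a, b = 11, 3, 5          # an arbitrary small example
e = lambda k: cmath.exp(2j * cmath.pi * (k % p) / p)

# S_1 = K(a,b;p), the ordinary Kloosterman sum over F_p
S1 = sum(e(a * x + b * pow(x, p - 2, p)) for x in range(1, p))

# For z = u + v*t in F_{p^2}, Frobenius sends t to -t (r is a non-residue),
# so Tr(z) = 2u, and z^{-1} = (u - v*t)/(u^2 - r*v^2).
r = next(c for c in range(2, p) if all(z * z % p != c for z in range(p)))
S2 = 0j
for u in range(p):
    for v in range(p):
        if u == v == 0:
            continue
        ninv = pow((u * u - r * v * v) % p, p - 2, p)
        # Tr(a*z + b*z^{-1}) = 2*a*u + 2*b*u * (u^2 - r*v^2)^{-1}
        S2 += e(2 * a * u + 2 * b * u * ninv)

assert abs(S2 - (2 * p - S1 ** 2)) < 1e-6
print(round(S1.real, 6), round(S2.real, 6))
```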
One can also twist the zeta function on a curve by a multiplicative character $\chi: {\bf F}_q^\times \to {\bf C}^\times$ by similar arguments, except that instead of forming the sum $x_1 + y_1 + \dots + x_k + y_k$ of all the components of an effective divisor $D = (x_1,y_1) + \dots + (x_k,y_k)$, one takes the product $x_1 y_1 \dots x_k y_k$ instead, and similarly one replaces the trace
$$\mathrm{Tr}(z) = z + z^q + \dots + z^{q^{n-1}}$$
by the norm
$$\mathrm{N}(z) = z \cdot z^q \cdot \dots \cdot z^{q^{n-1}}.$$
Again, see Chapter 11 of Iwaniec-Kowalski for details.
Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of $\ell$-adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to $\ell$-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an $\ell$-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.