
Let $X$ be a quasiprojective variety defined over a finite field ${\bf F}_q$; thus for instance $X$ could be an affine variety

$\displaystyle X = \{ x \in {\bf A}^d : P_1(x) = \dots = P_m(x) = 0 \} \ \ \ \ \ (1)$

where ${\bf A}^d$ is $d$-dimensional affine space and $P_1,\dots,P_m$ are a finite collection of polynomials with coefficients in ${\bf F}_q$. Then one can define the set $X[{\bf F}_q]$ of ${\bf F}_q$-rational points, and more generally the set $X[{\bf F}_{q^n}]$ of ${\bf F}_{q^n}$-rational points for any $n \geq 1$, since ${\bf F}_{q^n}$ can be viewed as a field extension of ${\bf F}_q$. Thus for instance in the affine case (1) we have

$\displaystyle X[{\bf F}_{q^n}] = \{ x \in {\bf F}_{q^n}^d : P_1(x) = \dots = P_m(x) = 0 \}.$

The Weil conjectures are concerned with understanding the number

$\displaystyle S_n := |X[{\bf F}_{q^n}]| \ \ \ \ \ (2)$

of ${\bf F}_{q^n}$-rational points of a variety $X$. The first of these conjectures was proven by Dwork, and can be phrased as follows.

Theorem 1 (Rationality of the zeta function) Let $X$ be a quasiprojective variety defined over a finite field ${\bf F}_q$, and let $S_n$ be given by (2). Then there exist a finite number of algebraic integers $\alpha_1,\dots,\alpha_k, \beta_1,\dots,\beta_{k'}$ (known as *characteristic values* of $X$), such that

$\displaystyle S_n = \alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n$

for all $n \geq 1$.

After cancelling, we may of course assume that $\alpha_i \neq \beta_j$ for any $i$ and $j$, and then it is easy to see (as we will see below) that the $\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}$ become uniquely determined up to permutations of the $\alpha_1,\dots,\alpha_k$ and $\beta_1,\dots,\beta_{k'}$. These values are known as the *characteristic values* of $X$. Since $S_n$ is a rational integer (i.e. an element of ${\bf Z}$) rather than merely an algebraic integer (i.e. an element of the ring of integers of the algebraic closure $\overline{\bf Q}$ of ${\bf Q}$), we conclude from the above-mentioned uniqueness that the set of characteristic values is invariant with respect to the Galois group $\mathrm{Gal}(\overline{\bf Q}/{\bf Q})$. To emphasise this Galois invariance, we will not fix a specific embedding $\iota: \overline{\bf Q} \to {\bf C}$ of the algebraic numbers into the complex field ${\bf C}$, but work with all such embeddings simultaneously. (Thus, for instance, $\overline{\bf Q}$ contains three cube roots of $2$, but which of these is assigned to the complex numbers $2^{1/3}$, $e^{2\pi i/3} 2^{1/3}$, $e^{4\pi i/3} 2^{1/3}$ will depend on the choice of embedding $\iota$.)
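To see Theorem 1 in action on a toy example (the conic below is my own illustrative choice, not an example from the text), one can brute-force count the points of $x^2+y^2=1$ over ${\bf F}_3$ and its quadratic extension ${\bf F}_9$, and observe the pattern $S_n = 3^n - (-1)^n$, i.e. characteristic values $\alpha_1 = 3$ and $\beta_1 = -1$. A minimal Python sketch:

```python
# Toy illustration of Theorem 1 (rationality): count points of the conic
# x^2 + y^2 = 1 over F_3 and F_9, and check S_n = 3^n - (-1)^n, i.e. the
# characteristic values are alpha_1 = 3 and beta_1 = -1.
# (Illustrative example; not taken from the post.)
p = 3

# S_1: brute force over F_3
S1 = sum(1 for x in range(p) for y in range(p) if (x * x + y * y - 1) % p == 0)

# F_9 = F_3[u]/(u^2 + 1): -1 is a non-square mod 3, so u^2 = -1 works.
def mul(z, w):
    (a, b), (c, d) = z, w
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def add(z, w):
    return ((z[0] + w[0]) % p, (z[1] + w[1]) % p)

F9 = [(a, b) for a in range(p) for b in range(p)]
one = (1, 0)
S2 = sum(1 for x in F9 for y in F9 if add(mul(x, x), mul(y, y)) == one)

assert S1 == p - (-1) ** 1      # 3^1 - (-1)^1 = 4
assert S2 == p * p - (-1) ** 2  # 3^2 - (-1)^2 = 8
```

The same brute-force pattern works for any small affine variety, though the counts grow quickly with the field size.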

An equivalent way of phrasing Dwork’s theorem is that the ($T$-form of the) zeta function

$\displaystyle \zeta_X(T) := \exp\left( \sum_{n=1}^\infty \frac{S_n}{n} T^n \right)$

associated to $X$ (which is well defined as a formal power series in $T$, at least) is equal to a rational function of $T$ (with the $1/\alpha_i$ and $1/\beta_j$ being the poles and zeroes of $\zeta_X(T)$ respectively). Here, we use the formal exponential

$\displaystyle \exp(F) := 1 + F + \frac{F^2}{2!} + \frac{F^3}{3!} + \dots.$

Equivalently, the ($s$-form of the) zeta-function $s \mapsto \zeta_X(q^{-s})$ is a meromorphic function on the complex numbers which is also periodic with period $\frac{2\pi i}{\log q}$, and which has only finitely many poles and zeroes up to this periodicity.

Dwork’s argument relies primarily on $p$-adic analysis – an analogue of complex analysis, but over an algebraically complete (and metrically complete) extension ${\bf C}_p$ of the $p$-adic field ${\bf Q}_p$, rather than over the Archimedean complex numbers ${\bf C}$. The argument is quite effective, and in particular gives explicit upper bounds for the number $k+k'$ of characteristic values in terms of the complexity of the variety $X$; for instance, in the affine case (1) with $P_1,\dots,P_m$ of degree at most $d$, Bombieri used Dwork’s methods (in combination with Deligne’s theorem below) to obtain an explicit bound on $k+k'$ in terms of $d$ and $m$, and a subsequent paper of Hooley established a slightly weaker bound purely from Dwork’s methods (a similar bound had also been pointed out in unpublished work of Dwork). In particular, one has bounds that are uniform in the field ${\bf F}_q$, which is an important fact for many analytic number theory applications.

These $p$-adic arguments stand in contrast with Deligne’s resolution of the last (and deepest) of the Weil conjectures:

Theorem 2 (Riemann hypothesis) Let $X$ be a quasiprojective variety defined over a finite field ${\bf F}_q$, and let $\lambda$ be a characteristic value of $X$. Then there exists a natural number $h$ such that $|\iota(\lambda)| = q^{h/2}$ for every embedding $\iota: \overline{\bf Q} \to {\bf C}$, where $|\cdot|$ denotes the usual absolute value on the complex numbers ${\bf C}$. (Informally: $\lambda$ and all of its Galois conjugates have complex magnitude $q^{h/2}$.)

To put it another way that closely resembles the classical Riemann hypothesis, all the zeroes and poles of the $s$-form $s \mapsto \zeta_X(q^{-s})$ lie on the critical lines $\{ s : \mathrm{Re}(s) = h/2 \}$ for natural numbers $h$. (See this previous blog post for further comparison of various instantiations of the Riemann hypothesis.) Whereas Dwork uses $p$-adic analysis, Deligne uses the essentially orthogonal technique of $\ell$-adic cohomology to establish his theorem. However, $\ell$-adic methods can be used (via the Grothendieck-Lefschetz trace formula) to establish rationality, and conversely, in this paper of Kedlaya $p$-adic methods are used to establish the Riemann hypothesis. As pointed out by Kedlaya, the $\ell$-adic methods are tied to the *intrinsic* geometry of $X$ (such as the structure of sheaves and covers over $X$), while the $p$-adic methods are more tied to the *extrinsic* geometry of $X$ (how $X$ sits inside its ambient affine or projective space).

In this post, I would like to record my notes on Dwork’s proof of Theorem 1, drawing heavily on the expositions of Serre, Hooley, Koblitz, and others.

The basic strategy is to control the rational integers $S_n$ both in an “Archimedean” sense (embedding the rational integers inside the complex numbers ${\bf C}$ with the usual norm $|\cdot|$) as well as in the “$p$-adic” sense, with $p$ the characteristic of ${\bf F}_q$ (embedding the integers now in the “complexification” ${\bf C}_p$ of the $p$-adic numbers ${\bf Q}_p$, which is equipped with a norm $|\cdot|_p$ that we will recall later). (This is in contrast to the methods of $\ell$-adic cohomology, in which one primarily works over an $\ell$-adic field ${\bf Q}_\ell$ with $\ell \neq p$.) The Archimedean control is trivial:

Proposition 3 (Archimedean control of $S_n$) With $S_n$ as above, and any embedding $\iota: \overline{\bf Q} \to {\bf C}$, we have

$\displaystyle |\iota(S_n)| \leq C A^n$

for all $n$ and some constants $C, A > 0$ independent of $n$.

*Proof:* Since $S_n$ is a rational integer, $|\iota(S_n)|$ is just $|S_n|$. By decomposing $X$ into affine pieces, we may assume that $X$ is of the affine form (1); then we trivially have $|S_n| \leq q^{nd}$, and the claim follows.

Another way of thinking about this Archimedean control is that it guarantees that the zeta function $\zeta_X(T)$ can be defined holomorphically on the open disk in ${\bf C}$ of radius $1/A$ centred at the origin.

The $p$-adic control is significantly more difficult, and is the main component of Dwork’s argument:

Proposition 4 ($p$-adic control of $S_n$) With $X$ as above, and using an embedding of $\overline{\bf Q}$ into ${\bf C}_p$ (defined later), with $p$ the characteristic of ${\bf F}_q$, we can find for any real $A > 0$ a finite number of elements $\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'} \in {\bf C}_p$ such that

$\displaystyle |S_n - (\alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n)|_p \leq A^{-n}$

for all $n$.

Another way of thinking about this $p$-adic control is that it guarantees that the zeta function $\zeta_X(T)$ can be defined *meromorphically* on the entire $p$-adic complex field ${\bf C}_p$.

Proposition 4 is ostensibly much weaker than Theorem 1 because of (a) the error term of $p$-adic magnitude at most $A^{-n}$; (b) the fact that the number $k+k'$ of potential characteristic values here may go to infinity as $A \to \infty$; and (c) the potential characteristic values only exist inside the complexified $p$-adics ${\bf C}_p$, rather than in the algebraic integers. However, it turns out that by combining the $p$-adic control on $S_n$ in Proposition 4 with the trivial control on $S_n$ in Proposition 3, one can obtain Theorem 1 by an elementary argument that does not use any further properties of $S_n$ (other than the obvious fact that the $S_n$ are rational integers), with the $A$ in Proposition 4 chosen to exceed the $A$ in Proposition 3. We give this argument (essentially due to Borel) below the fold.

The proof of Proposition 4 can be split into two pieces. The first piece, which can be viewed as the number-theoretic component of the proof, uses external descriptions of $X$ such as (1) to obtain the following decomposition of $S_n$:

Proposition 5 (Decomposition of $S_n$) With $X$ and $S_n$ as above, we can decompose $S_n$ as a finite linear combination (over the integers) of sequences $(c_n)_{n \geq 1}$, such that for each such sequence, the zeta functions

$\displaystyle \zeta_c(T) := \exp\left( \sum_{n=1}^\infty \frac{c_n}{n} T^n \right) = \sum_{j=0}^\infty a_j T^j$

are entire in ${\bf C}_p$, by which we mean that

$\displaystyle |a_j|_p^{1/j} \rightarrow 0$

as $j \rightarrow \infty$.

This proposition will ultimately be a consequence of the properties of the Teichmüller lifting.

The second piece, which can be viewed as the “$p$-adic complex analytic” component of the proof, relates the $p$-adic entire nature of a zeta function with control on the associated sequence $(c_n)$, and can be interpreted (after some manipulation) as a $p$-adic version of the Weierstrass preparation theorem:

Proposition 6 ($p$-adic Weierstrass preparation theorem) Let $(c_n)_{n \geq 1}$ be a sequence in ${\bf C}_p$ with $|c_n|_p \leq 1$ for all $n$, such that the zeta function

$\displaystyle \zeta_c(T) := \exp\left( \sum_{n=1}^\infty \frac{c_n}{n} T^n \right)$

is entire in ${\bf C}_p$. Then for any real $A > 0$, there exist a finite number of elements $\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'} \in {\bf C}_p$ such that

$\displaystyle |c_n - (\alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n)|_p \leq C A^{-n}$

for all $n$ and some $C > 0$.

Clearly, the combination of Proposition 5 and Proposition 6 (and the non-Archimedean nature of the $p$-adic norm) imply Proposition 4.

[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function $\zeta$, defined by

$\displaystyle \zeta(s) := \sum_{n=1}^\infty \frac{1}{n^s}$

for $\mathrm{Re}(s) > 1$ and extended meromorphically to other values of $s$, and asserts that the only zeroes of $\zeta$ in the critical strip $\{ s : 0 \leq \mathrm{Re}(s) \leq 1 \}$ lie on the critical line $\{ s : \mathrm{Re}(s) = 1/2 \}$.

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number has a unique factorisation into primes. Taking logarithms, we obtain the identity

$\displaystyle \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)$

for any natural number $n$, where $\Lambda$ is the von Mangoldt function, thus $\Lambda(n) = \log p$ when $n$ is a power of a prime $p$ and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

$\displaystyle \sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \frac{\sum_{d|n} \Lambda(d)}{n^s},$

formally at least. Writing $n = dm$, the right-hand side factors as

$\displaystyle \left(\sum_{d=1}^\infty \frac{\Lambda(d)}{d^s}\right) \left(\sum_{m=1}^\infty \frac{1}{m^s}\right) = \zeta(s) \sum_{d=1}^\infty \frac{\Lambda(d)}{d^s},$

whereas the left-hand side is (formally, at least) equal to $-\zeta'(s)$. We conclude the identity

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}$

(formally, at least). If we integrate this, we are formally led to the identity

$\displaystyle \sum_{n=2}^\infty \frac{\Lambda(n)}{\log n} \frac{1}{n^s} = \log \zeta(s),$

or equivalently to the exponential identity

$\displaystyle \zeta(s) = \exp\left( \sum_{n=2}^\infty \frac{\Lambda(n)}{\log n} \frac{1}{n^s} \right), \ \ \ \ \ (2)$

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as $\zeta$ has a simple pole at $s=1$ and zeroes at various places $\rho$ on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

$\displaystyle \zeta(s) = \frac{1}{s-1} \times \prod_\rho (s - \rho) \times \dots$

(where we will be intentionally vague about what is hiding in the $\dots$ terms) and so we expect an expansion of the form

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = \frac{1}{s-1} - \sum_\rho \frac{1}{s-\rho} + \dots \ \ \ \ \ (3)$

and hence on integrating in $s$ we formally have

$\displaystyle \sum_{n=2}^\infty \frac{\Lambda(n)}{\log n} \frac{1}{n^s} = -\log(s-1) + \sum_\rho \log(s-\rho) + \dots$

and thus we have the heuristic approximation

$\displaystyle \sum_{n=1}^\infty \frac{1 - \sum_\rho n^{\rho-1}}{n^s} \approx \frac{1}{s-1} - \sum_\rho \frac{1}{s-\rho}.$

Comparing this with (3), we are led to a heuristic form of the *explicit formula*

$\displaystyle \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1}. \ \ \ \ \ (4)$

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function $1_{[0,x]}(n)$ to obtain the formula

$\displaystyle \sum_{n \leq x} \Lambda(n) = x - \sum_\rho \frac{x^\rho}{\rho} + \dots, \ \ \ \ \ (5)$

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that $\mathrm{Re}(\rho) = 1/2$ for all zeroes $\rho$, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \ \ \ \ \ (6)$

as $x \to \infty$, giving a near-optimal “square root cancellation” for the sum $\sum_{n \leq x} \Lambda(n) - x$. Conversely, if one can somehow establish a bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )$

for any fixed $\epsilon > 0$, then the explicit formula can be used to then deduce that all zeroes $\rho$ of $\zeta$ have real part at most $1/2 + \epsilon$, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )$

can be automatically amplified to the stronger bound

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x )$

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line $\{ \mathrm{Re}(s) = 1 \}$, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + o(x);$

see e.g. this previous blog post for more discussion.
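As a quick numerical sanity check of the divisor-sum identity for the von Mangoldt function, and of the prime number theorem heuristic $\sum_{n \leq x} \Lambda(n) \approx x$, here is a short Python sketch (the cutoff $x = 10^4$ is an arbitrary illustrative choice):

```python
from math import log, isqrt, isclose

def mangoldt(n):
    # Lambda(n) = log p if n = p^k for a prime p, and 0 otherwise
    if n < 2:
        return 0.0
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return log(n)  # no factor up to sqrt(n): n itself is prime

# the identity log n = sum of Lambda(d) over divisors d of n
for n in range(1, 500):
    rhs = sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    assert isclose(log(n), rhs, abs_tol=1e-9)

# prime number theorem heuristic: psi(x) = sum_{n <= x} Lambda(n) is close to x
x = 10**4
psi = sum(mangoldt(n) for n in range(1, x + 1))
assert abs(psi - x) < x**0.75  # comfortably true at this cutoff
```

The tolerance $x^{3/4}$ is far weaker than the square-root cancellation predicted by the Riemann hypothesis, but suffices for a smoke test at this scale.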

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character $\chi$. The analogue of the Riemann zeta function is then the Dirichlet $L$-function

$\displaystyle L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.$

The identity (1), which encoded the fundamental theorem of arithmetic, can be twisted by $\chi$ to obtain

$\displaystyle \chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d) \chi(n/d) \ \ \ \ \ (7)$

and essentially the same manipulations as before eventually lead to the exponential identity

$\displaystyle L(s,\chi) = \exp\left( \sum_{n=2}^\infty \frac{\chi(n) \Lambda(n)}{\log n} \frac{1}{n^s} \right), \ \ \ \ \ (8)$

which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form

$\displaystyle \chi(n) \Lambda(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (9)$

for non-principal $\chi$, where $\rho$ now ranges over the zeroes of $L(\cdot,\chi)$ in the critical strip, rather than the zeroes of $\zeta$; a more accurate formulation, following (5), would be

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = - \sum_\rho \frac{x^\rho}{\rho} + \dots \ \ \ \ \ (10)$

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function $\zeta(s)$ and the Dirichlet $L$-function $L(s,\chi)$.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of $L(\cdot,\chi)$ in the critical strip also lie on the critical line, then we obtain the bound

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2} \log^2(qx) )$

for any non-principal Dirichlet character $\chi$ of conductor $q$, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

$\displaystyle \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2+o(1)} )$

(where $o(1)$ denotes a quantity that goes to zero as $x \to \infty$ for any fixed $q$). Next, one can consider other number systems than the natural numbers ${\bf N}$ and integers ${\bf Z}$. For instance, one can replace the integers with rings of integers in other number fields $K$ (i.e. finite extensions of ${\bf Q}$), such as the quadratic extensions ${\bf Q}[\sqrt{D}]$ of the rationals for various square-free integers $D$, in which case the ring of integers would be the ring of quadratic integers ${\bf Z}[\omega]$ for a suitable generator $\omega$ (it turns out that one can take $\omega = \sqrt{D}$ if $D = 2, 3 \pmod 4$, and $\omega = \frac{1+\sqrt{D}}{2}$ if $D = 1 \pmod 4$). Here, it is not immediately obvious what the analogue of the natural numbers is in this setting, since rings such as ${\bf Z}[\sqrt{D}]$ do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number $n$ generates a principal ideal $(n)$ in the integers, and conversely every non-trivial ideal in the integers is associated to precisely one natural number in this fashion, namely the norm of that ideal. So one can identify the natural numbers with the ideals of ${\bf Z}$. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if $p$ is prime, and $n, m$ are integers, then $nm \in (p)$ if and only if one of $n \in (p)$ or $m \in (p)$ is true. Finally, even in number systems (such as ${\bf Z}[\sqrt{-5}]$) in which the classical version of the fundamental theorem of arithmetic fails (e.g. $6 = 2 \times 3 = (1+\sqrt{-5})(1-\sqrt{-5})$), we have *the fundamental theorem of arithmetic for ideals*: every ideal in a Dedekind domain (which includes the ring of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals (although these ideals might not necessarily be principal). For instance, in ${\bf Z}[\sqrt{-5}]$, the principal ideal $(6)$ factors as the product of four prime (but non-principal) ideals $(2, 1+\sqrt{-5})$, $(2, 1-\sqrt{-5})$, $(3, 1+\sqrt{-5})$, $(3, 1-\sqrt{-5})$. (Note that the first two ideals are actually equal to each other.) Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. 
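The failure of unique factorisation of elements in ${\bf Z}[\sqrt{-5}]$ is easy to verify with the norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$; a minimal Python sketch:

```python
# In Z[sqrt(-5)], the multiplicative norm N(a + b*sqrt(-5)) = a^2 + 5 b^2
# shows 6 = 2 * 3 = (1 + sqrt(-5)) * (1 - sqrt(-5)) are genuinely different
# factorisations into irreducibles.
def norm(a, b):
    return a * a + 5 * b * b

# both factorisations of 6 are consistent with N(6) = 36
assert norm(2, 0) * norm(3, 0) == 36
assert norm(1, 1) * norm(1, -1) == 36

# no element has norm 2 or 3 (b != 0 forces norm >= 5; b = 0 needs a^2 = 2 or 3),
# so 2, 3, 1 +/- sqrt(-5) (norms 4, 9, 6, 6) cannot factor further: irreducible
assert all(norm(a, b) not in (2, 3)
           for a in range(-10, 11) for b in range(-10, 11))
```

Since the factors have distinct norms (4 and 9 versus 6 and 6), neither factorisation can be a unit rearrangement of the other; only at the level of ideals is factorisation restored.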
The analogue of the Riemann zeta function is now the Dedekind zeta function

$\displaystyle \zeta_K(s) := \sum_{\mathfrak a} \frac{1}{N(\mathfrak a)^s}$

where the summation is over all non-trivial ideals $\mathfrak a$ in the ring of integers ${\mathcal O}_K$. One can also define a von Mangoldt function $\Lambda_K(\mathfrak a)$, defined as $\log N(\mathfrak p)$ when $\mathfrak a$ is a power of a prime ideal $\mathfrak p$, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

$\displaystyle \log N(\mathfrak a) = \sum_{\mathfrak d | \mathfrak a} \Lambda_K(\mathfrak d), \ \ \ \ \ (11)$

which leads as before to an exponential identity

$\displaystyle \zeta_K(s) = \exp\left( \sum_{\mathfrak a} \frac{\Lambda_K(\mathfrak a)}{\log N(\mathfrak a)} \frac{1}{N(\mathfrak a)^s} \right) \ \ \ \ \ (12)$

and an explicit formula of the heuristic form

$\displaystyle \sum_{N(\mathfrak a) \leq x} \Lambda_K(\mathfrak a) = x - \sum_\rho \frac{x^\rho}{\rho} + \dots \ \ \ \ \ (13)$

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function $\zeta_K$ leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

$\displaystyle \sum_{N(\mathfrak a) \leq x} \Lambda_K(\mathfrak a) = x + O( x^{1/2} \log^2( \Delta x^d ) )$

where $\Delta$ is the conductor of $\zeta_K$ (which, in the case of number fields, is the absolute value of the discriminant of ${\mathcal O}_K$) and $d$ is the degree of the extension of $K$ over ${\bf Q}$. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

$\displaystyle \sum_{N(\mathfrak a) \leq x} \Lambda_K(\mathfrak a) = x + O( x^{1/2+o(1)} )$

where $o(1)$ denotes a quantity that goes to zero as $x \to \infty$ (holding $K$ fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.

As was the case with the Dirichlet -functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line ${\bf A}^1$ and a finite field ${\bf F}_q$ of some order $q$. The polynomial functions on the affine line are just the usual polynomial ring ${\bf F}_q[t]$, which then plays the role of the integers ${\bf Z}$ (or ${\bf N}$) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm $N(f)$ of a monic polynomial $f$ is the order of ${\bf F}_q[t]/(f)$, which can be computed explicitly as

$\displaystyle N(f) = q^{\deg f}.$

Because of this, we will normalise things slightly differently here and use $\deg f$ in place of $\log N(f)$ in what follows. The (local) zeta function is then defined as

$\displaystyle \zeta_{{\bf A}^1/{\bf F}_q}(s) := \sum_f \frac{1}{N(f)^s} \ \ \ \ \ (14)$

where $f$ ranges over monic polynomials, and the von Mangoldt function $\Lambda(f)$ is defined to equal $\deg g$ when $f$ is a power of a monic irreducible polynomial $g$, and zero otherwise. Note that because $N(f)$ is always a power of $q$, the zeta function here is in fact periodic with period $\frac{2\pi i}{\log q}$. Because of this, it is customary to make a change of variables $T := q^{-s}$, so that

$\displaystyle \zeta_{{\bf A}^1/{\bf F}_q}(s) = Z( {\bf A}^1/{\bf F}_q, q^{-s} )$

and $Z$ is the renormalised zeta function

$\displaystyle Z( {\bf A}^1/{\bf F}_q, T ) := \sum_f T^{\deg f}. \ \ \ \ \ (15)$

We have the analogue of (1) (or (7) or (11)):

$\displaystyle \deg f = \sum_{g | f} \Lambda(g)$

(where $g$ ranges over the monic polynomials dividing $f$), which leads as before to an exponential identity

$\displaystyle Z( {\bf A}^1/{\bf F}_q, T ) = \exp\left( \sum_f \frac{\Lambda(f)}{\deg f} T^{\deg f} \right) \ \ \ \ \ (16)$

analogous to (2), (8), or (12). It also leads to the explicit formula

$\displaystyle \Lambda(f) \approx 1 - \sum_\rho N(f)^{\rho - 1}$

where $\rho$ are the zeroes of the original zeta function $\zeta_{{\bf A}^1/{\bf F}_q}(s)$ (counting each residue class of the period $\frac{2\pi i}{\log q}$ just once), or equivalently

$\displaystyle \Lambda(f) \approx 1 - \sum_\alpha (\alpha/q)^{\deg f}$

where the $\alpha = q^\rho$ are the reciprocals of the roots of the normalised zeta function $Z({\bf A}^1/{\bf F}_q, T)$ (or to put it another way, the $1 - \alpha T$ are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

$\displaystyle \sum_{\deg f = n} \Lambda(f) \approx q^n - \sum_\alpha \alpha^n. \ \ \ \ \ (17)$

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant factor, thus

$\displaystyle \sum_{\deg f = n} \Lambda(f) = q^n - \sum_\alpha \alpha^n + c$
for an explicit integer $c$ (independent of $n$) arising from any potential pole of $Z$ at $T = 1$. In the case of the affine line ${\bf A}^1$, the situation is particularly simple, because the zeta function is easy to compute. Indeed, since there are exactly $q^n$ monic polynomials of a given degree $n$, we see from (14) that

$\displaystyle Z( {\bf A}^1/{\bf F}_q, T ) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1 - qT},$

so in fact there are no zeroes whatsoever, and no pole at $T = 1$ either, so we have an exact prime number theorem for this function field:

$\displaystyle \sum_{\deg f = n} \Lambda(f) = q^n. \ \ \ \ \ (18)$

Among other things, this tells us that the number of irreducible monic polynomials of degree $n$ is $\frac{q^n}{n} + O( q^{n/2} )$.
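The exact prime number theorem (18) can be checked by brute force for small $q$: writing $N_d$ for the number of monic irreducibles of degree $d$, its left-hand side is $\sum_{d|n} d N_d$ (each prime power $g^{n/d}$ of degree $n$ contributes $\Lambda = d$). The following Python sketch counts irreducibles over ${\bf F}_2$ by trial division and verifies this identity:

```python
# Count monic irreducible polynomials over F_q (q = 2) by brute force, and
# verify the exact prime number theorem for F_q[t] in the form
#     sum over d | n of d * N_d = q^n,
# where N_d is the number of monic irreducibles of degree d.
from itertools import product

q = 2

def monic(deg):
    # monic polynomials of degree deg, as coefficient tuples (c_0, ..., c_deg)
    return [c + (1,) for c in product(range(q), repeat=deg)]

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % q
    return tuple(out)

def irreducible(f):
    n = len(f) - 1
    # f is reducible iff f = g * h with g monic of degree 1 <= d <= n/2
    for d in range(1, n // 2 + 1):
        for g in monic(d):
            if any(polymul(g, h) == f for h in monic(n - d)):
                return False
    return True

N = {d: sum(1 for f in monic(d) if irreducible(f)) for d in range(1, 6)}
assert N[1] == 2 and N[2] == 1 and N[3] == 2   # t, t+1; t^2+t+1; two cubics
for n in range(1, 6):
    assert sum(d * N[d] for d in range(1, n + 1) if n % d == 0) == q ** n
```

The identity $\sum_{d|n} d N_d = q^n$ is exactly the Frobenius-orbit partition of ${\bf F}_{q^n}$ discussed in the next paragraphs.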

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial $f$ through its roots, which are a finite set of points in the algebraic closure $\overline{{\bf F}_q}$ of the finite field ${\bf F}_q$ (or more suggestively, as points on the affine line ${\bf A}^1(\overline{{\bf F}_q})$). The number of such points (counting multiplicity) is the degree $\deg f$ of $f$, and from the factor theorem, the set of points determines the monic polynomial $f$ (or, if one removes the monic hypothesis, it determines the polynomial $f$ projectively). These points have an action of the Galois group $\mathrm{Gal}(\overline{{\bf F}_q}/{\bf F}_q)$. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map $\mathrm{Frob}: x \mapsto x^q$, which fixes the elements of the original finite field ${\bf F}_q$ but permutes the other elements of $\overline{{\bf F}_q}$. Thus the roots of a given polynomial $f$ split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if $f$ is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.

Now consider the degree $n$ finite field extension ${\bf F}_{q^n}$ of ${\bf F}_q$ (it is a classical fact that there is exactly one such extension up to isomorphism for each $n$); this is a subfield of $\overline{{\bf F}_q}$ of order $q^n$. (Here we are performing a standard abuse of notation by overloading the subscripts in the ${\bf F}$ notation; thus ${\bf F}_q$ denotes the field of order $q$, while ${\bf F}_{q^n}$ denotes the extension of ${\bf F}_q$ of degree $n$, which is consistent since that extension has order $q^n$. We hope this overloading will not cause confusion.) Each point $x$ in this extension (or, more suggestively, the affine line over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of $x$. Since the Frobenius action is periodic of period $n$ on ${\bf F}_{q^n}$, the degree of this minimal polynomial must divide $n$. Conversely, every monic irreducible polynomial of degree $d$ dividing $n$ produces $d$ distinct zeroes that lie in ${\bf F}_{q^d}$ (here we use the classical fact that finite fields are perfect) and hence in ${\bf F}_{q^n}$. We have thus partitioned ${\bf A}^1({\bf F}_{q^n})$ into Frobenius orbits (also known as *closed points*), with each monic irreducible polynomial of degree $d$ dividing $n$ contributing an orbit of size $d$. From this we conclude a geometric interpretation of the left-hand side of (18):

$\displaystyle \sum_{\deg f = n} \Lambda(f) = \sum_{x \in {\bf A}^1({\bf F}_{q^n})} 1. \ \ \ \ \ (19)$
The identity (18) thus is equivalent to the thoroughly boring fact that the number of ${\bf F}_{q^n}$-points on the affine line is equal to $q^n$. However, things become much more interesting if one then replaces the affine line by a more general (geometrically) irreducible curve $C$ defined over ${\bf F}_q$; for instance one could take $C$ to be an elliptic curve

$\displaystyle C = \{ (x,y) : y^2 = x^3 + ax + b \} \ \ \ \ \ (20)$

for some suitable $a, b \in {\bf F}_q$, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of ${\bf F}_q$-rational points removed). The analogue of ${\bf F}_q[t]$ is then the coordinate ring of $C$ (for instance, in the case of the elliptic curve (20) it would be ${\bf F}_q[x,y] / (y^2 - x^3 - ax - b)$), with polynomials in this ring producing a set of roots in the curve $C$ that is again invariant with respect to the Frobenius action (acting on the $x$ and $y$ coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on $C$ will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

$\displaystyle \zeta_{C/{\bf F}_q}(s) := \sum_{\mathfrak a} \frac{1}{N(\mathfrak a)^s}$

and a von Mangoldt function $\Lambda(\mathfrak a)$ as before, where $\mathfrak a$ would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve $C$; as the curve $C$ is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points in $C$, or equivalently an effective divisor of $C$; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be *rational* in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of $C$. With this dictionary, the zeta function becomes

$\displaystyle \zeta_{C/{\bf F}_q}(s) = \sum_D q^{-s \deg(D)}$

where the sum is over effective rational divisors $D$ of $C$ (with $\deg(D)$ being the degree of an effective divisor $D$), or equivalently

$\displaystyle Z( C/{\bf F}_q, T ) = \sum_D T^{\deg(D)}.$

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

$\displaystyle \sum_{\deg(\mathfrak a) = n} \Lambda(\mathfrak a) = \sum_{x \in C({\bf F}_{q^n})} 1;$

thus this sum is simply counting the number of ${\bf F}_{q^n}$-points of $C$. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

$\displaystyle Z( C/{\bf F}_q, T ) = \exp\left( \sum_{\mathfrak a} \frac{\Lambda(\mathfrak a)}{\deg \mathfrak a} T^{\deg \mathfrak a} \right) \ \ \ \ \ (21)$

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

$\displaystyle |C({\bf F}_{q^n})| = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (22)$

where $\alpha$ runs over the (reciprocal) zeroes of $Z(C/{\bf F}_q, T)$ (counting multiplicity), and $c$ is an integer independent of $n$. (As it turns out, $c$ equals $1$ when $C$ is a projective curve, and more generally equals $1-k$ when $C$ is a projective curve with $k$ rational points deleted.)

To evaluate $Z(C/{\bf F}_q, T)$, one needs to count the number of effective divisors of a given degree on the curve $C$. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when $C$ is projective) that $Z(C/{\bf F}_q, T)$ is in fact a rational function of $T$, with a finite number of zeroes, and a simple pole at both $T=1$ and $T=1/q$, with similar results when one deletes some rational points from $C$; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

$\displaystyle |C({\bf F}_{q^n})| = q^n - \alpha^n - \beta^n$

for two complex numbers $\alpha, \beta$ depending on $C$ and $q$.

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of $Z(C/{\bf F}_q, T)$ lie on the critical line, or equivalently that all the roots $\alpha$ in (22) have modulus $q^{1/2}$, so that (22) then gives the asymptotic

$\displaystyle |C({\bf F}_{q^n})| = q^n + O( q^{n/2} ) \ \ \ \ \ (23)$

where the implied constant depends only on the genus of $C$ (and on the number of points removed from $C$). For instance, for elliptic curves we have the *Hasse bound*

$\displaystyle \left| |C({\bf F}_{q^n})| - q^n \right| \leq 2 q^{n/2}.$
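These statements are easy to test numerically on a specific small curve. The following Python sketch (the prime $p = 7$ and the curve $y^2 = x^3 + x + 1$ are my own illustrative choices) brute-force counts affine points over ${\bf F}_p$ and ${\bf F}_{p^2}$, checks the point count over ${\bf F}_{p^2}$ forced by rationality of the zeta function, and checks the Hasse bound:

```python
# Count affine points of y^2 = x^3 + a x + b over F_p and F_{p^2}.  Rationality
# gives S_n = p^n - alpha^n - beta^n with alpha + beta = t := p - S_1 and
# alpha * beta = p, hence S_2 = p^2 - (t^2 - 2p); the Hasse bound is |t| <= 2 sqrt(p).
# Illustrative choices: p = 7, curve y^2 = x^3 + x + 1 (nonsingular mod 7).
p, a, b = 7, 1, 1

S1 = sum(1 for x in range(p) for y in range(p)
         if (y * y - (x**3 + a * x + b)) % p == 0)
t = p - S1

# F_49 = F_7[i] with i^2 = -1 (valid since 7 = 3 mod 4, so -1 is a non-square)
def mul(z, w):
    (u, v), (s, r) = z, w
    return ((u * s - v * r) % p, (u * r + v * s) % p)

def add(z, w):
    return ((z[0] + w[0]) % p, (z[1] + w[1]) % p)

def rhs(x):
    return add(add(mul(mul(x, x), x), mul((a, 0), x)), (b % p, 0))

F = [(u, v) for u in range(p) for v in range(p)]
S2 = sum(1 for x in F for y in F if mul(y, y) == rhs(x))

assert S2 == p * p - (t * t - 2 * p)  # rationality of the zeta function
assert t * t <= 4 * p                 # Hasse bound |t| <= 2 sqrt(p)
```

Here the single brute-force count $S_1$ pins down the whole zeta function of the curve, which then predicts the counts over every extension field; the $S_2$ check confirms this for the quadratic extension.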

As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

$\displaystyle |C({\bf F}_{q^n})| = q^n + O( q^{(\frac{1}{2}+\epsilon) n} ) \ \ \ \ \ (24)$

for some $\epsilon > 0$, then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the *proofs* of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for $n$ large, which then amplifies to the optimal bound (23) for all $n$ (and in particular for $n=1$). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with $\epsilon$-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no $\epsilon$-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of $q$.

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet $L$-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve $C \subset {\bf A}^2$ and an additive character $\psi: {\bf F}_q \to {\bf C}^\times$, thus $\psi(a+b) = \psi(a) \psi(b)$ for all $a, b \in {\bf F}_q$. Given a rational effective divisor $D = (x_1,y_1) + \dots + (x_k,y_k)$, the sum $x_1 + \dots + x_k$ is Frobenius-invariant and thus lies in ${\bf F}_q$. By abuse of notation, we may thus define $\psi$ on such divisors by

$\displaystyle \psi(D) := \psi( x_1 + \dots + x_k )$

and observe that $\psi$ is multiplicative in the sense that $\psi(D_1 + D_2) = \psi(D_1) \psi(D_2)$ for rational effective divisors $D_1, D_2$. One can then define $\psi(\mathfrak a)$ for any non-trivial ideal $\mathfrak a$ by replacing that ideal with the associated rational effective divisor; for instance, if $f$ is a polynomial in the coordinate ring of $C$, with zeroes at $(x_1,y_1),\dots,(x_k,y_k)$, then $\psi((f))$ is $\psi(x_1+\dots+x_k)$. Again, we have the multiplicativity property $\psi(\mathfrak a \mathfrak b) = \psi(\mathfrak a) \psi(\mathfrak b)$. If we then form the twisted normalised zeta function

$\displaystyle Z( C/{\bf F}_q, \psi, T ) := \sum_D \psi(D) T^{\deg(D)}$

then by twisting the previous analysis, we eventually arrive at the exponential identity

$\displaystyle Z( C/{\bf F}_q, \psi, T ) = \exp\left( \sum_{n=1}^\infty \frac{S_n}{n} T^n \right) \ \ \ \ \ (25)$

in analogy with (21) (or (2), (8), (12), or (16)), where the *companion sums* $S_n$ are defined by

$\displaystyle S_n := \sum_{(x,y) \in C({\bf F}_{q^n})} \psi( \mathrm{Tr}(x) ),$

where the trace $\mathrm{Tr}(x)$ of an element $x$ of ${\bf F}_{q^n}$ is defined by the formula

$\displaystyle \mathrm{Tr}(x) := x + x^q + \dots + x^{q^{n-1}}.$

In particular, $S_1$ is the exponential sum

$\displaystyle S_1 = \sum_{(x,y) \in C({\bf F}_q)} \psi(x),$

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

$\displaystyle K(a,b;p) := \sum_{x \in {\bf F}_p^\times} e_p( ax + b x^{-1} )$

as a special case, where $e_p(x) := e^{2\pi i x/p}$. (NOTE: the sign conventions for the companion sum $S_n$ are not consistent across the literature, sometimes it is $-S_n$ which is referred to as the companion sum.)
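For concreteness, here is a quick Python computation of Kloosterman sums together with a numerical check of the Weil bound $|K(a,b;p)| \leq 2\sqrt{p}$ quoted below (the modulus $p = 13$ is an arbitrary choice):

```python
# Compute K(a, b; p) = sum over x in F_p^* of exp(2 pi i (a x + b x^{-1}) / p),
# and check it is real and satisfies the Weil bound |K(a,b;p)| <= 2 sqrt(p).
# (p = 13 is an arbitrary illustrative choice.)
import cmath

p = 13

def kloosterman(a, b):
    total = 0
    for x in range(1, p):
        xinv = pow(x, p - 2, p)  # inverse of x mod p, by Fermat's little theorem
        total += cmath.exp(2j * cmath.pi * ((a * x + b * xinv) % p) / p)
    return total

for a in range(1, p):
    K = kloosterman(a, 1)
    assert abs(K.imag) < 1e-9            # Kloosterman sums are real (x -> -x pairing)
    assert abs(K) <= 2 * p**0.5 + 1e-9   # Weil bound
```

The realness of $K(a,b;p)$ follows from the substitution $x \mapsto -x$, which conjugates each summand; the Weil bound itself is of course the deep input being verified, not proven, here.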

If $\psi$ is non-principal (and $C$ is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that $Z(C/{\bf F}_q, \psi, T)$ is a rational function of $T$, with no pole at $T = 1/q$, and one then gets an explicit formula of the form

$\displaystyle S_n = - \sum_\alpha \alpha^n \ \ \ \ \ (26)$

for the companion sums, where the $\alpha$ are the reciprocals of the zeroes of $Z(C/{\bf F}_q, \psi, T)$, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

$\displaystyle \sum_{x \in {\bf F}_{p^n}^\times} e_p( \mathrm{Tr}( ax + b x^{-1} ) ) = -\alpha^n - \beta^n \ \ \ \ \ (27)$

for all $n$ and some complex numbers $\alpha, \beta$ depending on $a, b, p$, where we have abbreviated $e^{2\pi i \mathrm{Tr}(x)/p}$ as $e_p(\mathrm{Tr}(x))$. As before, the Riemann hypothesis for $Z(C/{\bf F}_q, \psi, T)$ then gives a square root cancellation bound of the form

$\displaystyle S_n = O( q^{n/2} ) \ \ \ \ \ (28)$

for the companion sums (and in particular gives the very explicit Weil bound $|K(a,b;p)| \leq 2\sqrt{p}$ for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

$\displaystyle S_n = O( q^{(\frac{1}{2}+\epsilon) n} )$

for any $\epsilon > 0$. As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.

One can also twist the zeta function on a curve by a multiplicative character $\chi: {\bf F}_q^\times \to {\bf C}^\times$ by similar arguments, except that instead of forming the sum $x_1 + \dots + x_k$ of all the components of an effective divisor $D$, one takes the product $x_1 \dots x_k$ instead, and similarly one replaces the trace

$\displaystyle \mathrm{Tr}(x) = x + x^q + \dots + x^{q^{n-1}}$

by the norm

$\displaystyle \mathrm{Norm}(x) = x \cdot x^q \cdots x^{q^{n-1}} = x^{(q^n-1)/(q-1)}.$

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of *$\ell$-adic sheaves* on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to $\ell$-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an $\ell$-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.

As in previous posts, we use the following asymptotic notation: $x$ is a parameter going off to infinity, and all quantities may depend on $x$ unless explicitly declared to be “fixed”. The asymptotic notation $O()$, $o()$, $\ll$ is then defined relative to this parameter. A quantity $q$ is said to be *of polynomial size* if one has $q = O(x^{O(1)})$, and said to be *bounded* if $q = O(1)$. Another convenient notation: we write $X \lessapprox Y$ for $X \ll x^{o(1)} Y$. Thus for instance the divisor bound asserts that if $q$ has polynomial size, then the number of divisors of $q$ is $\lessapprox 1$.

This post is intended to highlight a phenomenon unearthed in the ongoing polymath8 project (and is in fact a key component of Zhang’s proof that there are bounded gaps between primes infinitely often), namely that one can get quite good bounds on relatively short exponential sums when the modulus is smooth, through the basic technique of *Weyl differencing* (ultimately based on the Cauchy-Schwarz inequality, and also related to the van der Corput lemma in equidistribution theory). Improvements in the case of smooth moduli have appeared before in the literature (e.g. in this paper of Heath-Brown, this paper of Graham and Ringrose, this later paper of Heath-Brown, this paper of Chang, or this paper of Goldmakher); the arguments here are particularly close to that of the first paper of Heath-Brown. It now also appears that further optimisation of this Weyl differencing trick could lead to noticeable improvements in the numerology for the polymath8 project, so I am devoting this post to explaining this trick further.

To illustrate the method, let us begin with the classical problem in analytic number theory of estimating an *incomplete character sum*

$$\sum_{M+1 \leq n \leq M+N} \chi(n)$$

where $\chi$ is a primitive Dirichlet character of some conductor $q$, $M$ is an integer, and $N$ is some quantity between $1$ and $q$. Clearly we have the trivial bound

$$\left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \leq N; \qquad (1)$$
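As a concrete illustration (a Python sketch, taking $\chi$ to be the Legendre symbol, the simplest primitive character of prime conductor), one can compute such incomplete sums directly:

```python
# Sketch: the incomplete character sum sum_{M+1 <= n <= M+N} chi(n) for
# the Legendre symbol chi(n) = (n|q), a concrete primitive character of
# prime conductor q.  Names and the choice of q are illustrative.

def legendre(n: int, q: int) -> int:
    """Legendre symbol (n|q) for an odd prime q, via Euler's criterion."""
    n %= q
    if n == 0:
        return 0
    r = pow(n, (q - 1) // 2, q)
    return -1 if r == q - 1 else r

def incomplete_sum(M: int, N: int, q: int) -> int:
    return sum(legendre(n, q) for n in range(M + 1, M + N + 1))

q = 1009  # a prime conductor
print(incomplete_sum(0, q, q))          # complete sum over a full period: 0
print(abs(incomplete_sum(10, 200, q)))  # much smaller than the trivial bound 200
```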

we also have the classical Pólya-Vinogradov inequality

$$\left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \ll q^{1/2} \log q. \qquad (2)$$

This latter inequality gives improvements over the trivial bound when $N$ is much larger than $q^{1/2} \log q$, but not for $N$ much smaller than $q^{1/2}$. The Pólya-Vinogradov inequality can be deduced via a little Fourier analysis from the completed exponential sum bound

$$\left|\sum_{n \in {\bf Z}/q{\bf Z}} \chi(n) e_q(an)\right| \leq q^{1/2}$$

for any $a \in {\bf Z}/q{\bf Z}$, where $e_q(n) := e^{2\pi i n/q}$. (In fact, from the classical theory of Gauss sums, this exponential sum is equal to $\epsilon q^{1/2}$ for some complex number $\epsilon$ of norm $1$.)
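One can check the Gauss sum magnitude numerically for the Legendre symbol; the following Python sketch (illustrative, not from the post) does so for a small prime conductor:

```python
# Sketch: for the Legendre symbol chi mod a prime q and any a coprime
# to q, the completed sum  S(a) = sum_{n mod q} chi(n) e(an/q)  has
# magnitude exactly sqrt(q) (classical theory of Gauss sums).

import cmath
import math

def legendre(n: int, q: int) -> int:
    n %= q
    if n == 0:
        return 0
    r = pow(n, (q - 1) // 2, q)
    return -1 if r == q - 1 else r

def twisted_gauss_sum(a: int, q: int) -> complex:
    """S(a) = sum over n mod q of chi(n) e(an/q), chi the Legendre symbol."""
    return sum(legendre(n, q) * cmath.exp(2j * cmath.pi * a * n / q)
               for n in range(q))

q = 101  # a prime
for a in (1, 2, 57):
    assert abs(abs(twisted_gauss_sum(a, q)) - math.sqrt(q)) < 1e-9
print("|S(a)| = sqrt(q) verified for q =", q)
```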

In the case when $q$ is a prime, improving upon the above two inequalities is an important but difficult problem, with only partially satisfactory results so far. To give just one indication of the difficulty, the seemingly modest improvement

$$\left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \ll q^{1/2} \log\log q$$

to the Pólya-Vinogradov inequality when $q$ is a prime required a 14-page paper in Inventiones by Montgomery and Vaughan to prove, and even then it was only conditional on the generalised Riemann hypothesis! See also this more recent paper of Granville and Soundararajan for an unconditional variant of this result in the case that $\chi$ has odd order.

Another important improvement is the Burgess bound, which in our notation asserts that

$$\left|\sum_{M+1 \leq n \leq M+N} \chi(n)\right| \lessapprox N^{1-1/k} q^{\frac{k+1}{4k^2}} \qquad (3)$$

for any fixed integer $k \geq 2$, assuming that $q$ is square-free (for simplicity) and of polynomial size; see this previous post for a discussion of the Burgess argument. This is non-trivial for $N$ as small as $q^{1/4+o(1)}$.

In the case when $q$ is prime, there has been very little improvement to the Burgess bound (or its Fourier dual, which can give bounds for $N$ as large as $q$) in the last fifty years; an improvement to the exponents in (3) in this case (particularly anything that gave a power saving for $N$ below $q^{1/4}$) would in fact be rather significant news in analytic number theory.

However, in the opposite case when $q$ is *smooth* – that is to say, all of its prime factors are much smaller than $q$ – then one can do better than the Burgess bound in some regimes. This fact has been observed in several places in the literature (in particular, in the papers of Heath-Brown, Graham-Ringrose, Chang, and Goldmakher mentioned previously), but also turns out to (implicitly) be a key insight in Zhang’s paper on bounded prime gaps. In the case of character sums, one such improved estimate (closely related to Theorem 2 of the Heath-Brown paper) is as follows:

Proposition 1 Let $q$ be square-free with a factorisation $q = q_1 q_2$ and of polynomial size, and let $M, N$ be integers with $1 \leq N \leq q$. Then for any primitive character $\chi$ with conductor $q$, one has

This proposition is particularly powerful when $q$ is smooth, as this gives many factorisations $q = q_1 q_2$ with the ability to specify $q_1, q_2$ with a fair amount of accuracy. For instance, if $q$ is $y$-smooth (i.e. all prime factors are at most $y$), then by the greedy algorithm one can find a divisor $q_1$ of $q$ within a factor of $y$ of any specified target; if we set $q_2 := q/q_1$, then $q_2$ is similarly controlled, and the above proposition then gives
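The greedy step here is simple enough to spell out in code. The Python sketch below (illustrative, with a toy modulus) accumulates prime factors of a square-free smooth $q$ until a target $T$ would be exceeded; the resulting divisor $q_1$ then satisfies $T/p < q_1 \leq T$, where $p$ is the largest prime factor of $q$, provided $q > T$:

```python
# Sketch of the greedy step: for a smooth square-free q, multiply in
# prime factors of q as long as the product stays at most T.  Every
# skipped prime p satisfies (current product) * p > T, so the final
# divisor q1 obeys T/p_max < q1 <= T whenever q > T.

def prime_factors(q: int) -> list:
    """Prime factors of a square-free q, by trial division."""
    ps, d = [], 2
    while d * d <= q:
        if q % d == 0:
            ps.append(d)
            q //= d
        else:
            d += 1
    if q > 1:
        ps.append(q)
    return ps

def greedy_divisor(q: int, T: int) -> int:
    q1 = 1
    for p in prime_factors(q):
        if q1 * p <= T:
            q1 *= p
    return q1

q = 3 * 5 * 7 * 11 * 13 * 17 * 19  # a 19-smooth square-free modulus
T = 1000
q1 = greedy_divisor(q, T)
print(q1, q % q1 == 0, q // q1)
```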

which can improve upon the Burgess bound when is small. For instance, if , then this bound becomes ; in contrast the Burgess bound only gives for this value of (using the optimal choice for ), which is inferior for .

The hypothesis that $q$ be squarefree may be relaxed, but for applications to the Polymath8 project, it is only the squarefree moduli that are relevant.

*Proof:* If then the claim follows from the trivial bound (1), while for the claim follows from (2). Hence we may assume that

We use the method of Weyl differencing, the key point being to difference in multiples of $q_2$.

Let , thus . For any , we have

By the Chinese remainder theorem, we may factor

$$\chi = \chi_1 \chi_2$$

where $\chi_1, \chi_2$ are primitive characters of conductor $q_1, q_2$ respectively. As $\chi_2$ is periodic of period $q_2$, we thus have

and so we can take $\chi_2$ out of the inner summation of the right-hand side of (4) to obtain

and hence by the triangle inequality

Note how the characters $\chi_1$ on the right-hand side only have period $q_1$ rather than $q$. This reduction in the period is ultimately the source of the saving over the Pólya-Vinogradov inequality.
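The factorisation $\chi = \chi_1 \chi_2$ and the shift-invariance of $\chi_2$ can be seen concretely with real characters. The Python sketch below (an illustration, using the Jacobi symbol as a stand-in for a general primitive character of square-free modulus) checks both facts for $q = 7 \times 11$:

```python
# Sketch of the CRT factorisation chi = chi1 * chi2: here chi is the
# (real, primitive) Jacobi-symbol character mod q = q1*q2, which factors
# as the product of the Legendre symbols mod q1 and mod q2.  Shifting
# the argument by a multiple of q2 leaves chi2 unchanged, which is why
# chi2 can be taken out of the differenced inner sum.

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    t = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

q1, q2 = 7, 11
q = q1 * q2

# chi(n) = chi1(n) * chi2(n) for every n (Chinese remainder theorem):
assert all(jacobi(n, q) == jacobi(n, q1) * jacobi(n, q2) for n in range(q))

# chi2 is invariant under shifts by multiples of q2:
assert all(jacobi(n + 3 * q2, q2) == jacobi(n, q2) for n in range(q))
print("chi = chi1 * chi2 verified for q =", q)
```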

Note that the inner sum vanishes unless , which is an interval of length by choice of . Thus by Cauchy-Schwarz one has

We expand the right-hand side as

We first consider the diagonal contribution $k = k'$. In this case we use the trivial bound for the inner summation, and we soon see that the total contribution here is .

Now we consider the off-diagonal case; by symmetry we can take $k > k'$. Then the indicator functions restrict to the interval . On the other hand, as a consequence of the Weil conjectures for curves one can show that

for any ; indeed one can use the Chinese remainder theorem and the square-free nature of $q_1$ to reduce to the case when $q_1$ is prime, in which case one can apply (for instance) the original paper of Weil to establish this bound, noting also that $q_1$ and $q_2$ are coprime since $q$ is squarefree. Applying the method of completion of sums (or the Parseval formula), this shows that
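For the quadratic character one can even evaluate the relevant complete correlation sums exactly, which makes the square-root cancellation easy to check numerically. The following Python sketch (illustrative; the exact evaluation $-1$ is special to the Legendre symbol, while general characters only enjoy the $O(\sqrt{p})$ Weil bound) verifies this:

```python
# Sanity check of square-root cancellation in complete correlation sums:
# for the Legendre symbol chi mod an odd prime p and h != 0 (p), the
# complete sum  sum_{n mod p} chi(n) chi(n+h)  equals exactly -1, far
# below the trivial bound p and consistent with the Weil bound O(sqrt(p)).

def legendre(n: int, p: int) -> int:
    n %= p
    if n == 0:
        return 0
    r = pow(n, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def correlation(h: int, p: int) -> int:
    return sum(legendre(n, p) * legendre(n + h, p) for n in range(p))

p = 997  # a prime
for h in (1, 2, 500):
    assert correlation(h, p) == -1
print("complete correlation sums equal -1 for p =", p)
```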

Summing in (using Lemma 5 from this previous post) we see that the total contribution to the off-diagonal case is

which simplifies to . The claim follows.

A modification of the above argument (using more complicated versions of the Weil conjectures) allows one to replace the summand by more complicated summands such as $\chi(f(n)) e_q(g(n))$ for some polynomials or rational functions $f, g$ of bounded degree and obeying a suitable non-degeneracy condition (after restricting of course to those $n$ for which the arguments are well-defined). We will not detail this here, but instead turn to the question of estimating slightly longer exponential sums, such as

where should be thought of as a little bit larger than .

This is the final continuation of the online reading seminar of Zhang’s paper for the polymath8 project. (There are two other continuations: this previous post, which deals with the combinatorial aspects of the second part of Zhang’s paper, and this previous post, which covers the Type I and Type II sums.) The main purpose of this post is to present (and hopefully, to improve upon) the treatment of the final and most innovative of the key estimates in Zhang’s paper, namely the Type III estimate.

The main estimate was already stated as Theorem 17 in the previous post, but we quickly recall the relevant definitions here. As in other posts, we always take $x$ to be a parameter going off to infinity, with the usual asymptotic notation associated to this parameter.

Definition 1 (Coefficient sequences) A *coefficient sequence* is a finitely supported sequence $\alpha: {\bf N} \rightarrow {\bf R}$ that obeys the bounds

$$|\alpha(n)| \ll \tau^{O(1)}(n) \log^{O(1)}(x)$$

for all $n$, where $\tau$ is the divisor function.

- (i) If $\alpha$ is a coefficient sequence and $a\ (q)$ is a primitive residue class, the (signed) *discrepancy* of $\alpha$ in the sequence is defined to be the quantity
- (ii) A coefficient sequence $\alpha$ is said to be *at scale $N$* for some $N \geq 1$ if it is supported on an interval of the form .
- (iii) A coefficient sequence $\alpha$ at scale $N$ is said to be *smooth* if it takes the form $\alpha(n) = \psi(n/N)$ for some smooth function $\psi$ supported on obeying the derivative bounds for all fixed $j \geq 0$ (note that the implied constant in the $O()$ notation may depend on $j$).

For any $I \subset {\bf R}$, let $S_I$ denote the square-free numbers whose prime factors lie in $I$. The main result of this post is then the following result of Zhang:

Theorem 2 (Type III estimate) Let be fixed quantities, and let be quantities such that

and

and

for some fixed . Let be coefficient sequences at scale respectively with smooth. Then for any we have

In fact we have the stronger “pointwise” estimate

(This is very slightly stronger than previously claimed, in that the condition has been dropped.)

It turns out that Zhang does not exploit any averaging of the $\alpha$ factor, and matters reduce to the following:

Theorem 3 (Type III estimate without $\alpha$) Let be fixed, and let be quantities such that

and

and

for some fixed . Let be smooth coefficient sequences at scales respectively. Then we have

for all and some fixed .

Let us quickly see how Theorem 3 implies Theorem 2. To show (4), it suffices to establish the bound

for all , where denotes a quantity that is independent of (but can depend on other quantities such as ). The left-hand side can be rewritten as

From Theorem 3 we have

where the quantity does not depend on or . Inserting this asymptotic and using crude bounds on (see Lemma 8 of this previous post) we conclude (4) as required (after modifying slightly).

It remains to establish Theorem 3. This is done by a set of tools similar to that used to control the Type I and Type II sums:

- (i) completion of sums;
- (ii) the Weil conjectures and bounds on Ramanujan sums;
- (iii) factorisation of smooth moduli;
- (iv) the Cauchy-Schwarz and triangle inequalities (Weyl differencing).

The specifics are slightly different though. For the Type I and Type II sums, it was the classical Weil bound on Kloosterman sums that was the key source of power saving; Ramanujan sums only played a minor role, controlling a secondary error term. For the Type III sums, one needs a significantly deeper consequence of the Weil conjectures, namely the estimate of Bombieri and Birch on a three-dimensional variant of a Kloosterman sum. Furthermore, the Ramanujan sums – which are a rare example of sums that actually exhibit *better* than square root cancellation, thus going beyond even what the Weil conjectures can offer – make a crucial appearance, when combined with the factorisation of the smooth modulus (this new argument is arguably the most original and interesting contribution of Zhang).
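To see the unusually strong cancellation in Ramanujan sums concretely, the Python sketch below (illustrative; the closed form is classical and not taken from Zhang's paper) checks the identity $c_q(n) = \mu(q/d) \varphi(q)/\varphi(q/d)$ with $d = (n,q)$, which in particular gives $|c_q(n)| \leq (n,q)$ – far below $\sqrt{q}$ for typical $n$:

```python
# Sketch (classical formulas): the Ramanujan sum
#   c_q(n) = sum over a mod q with gcd(a,q)=1 of e(an/q)
# admits the closed form  c_q(n) = mu(q/d) * phi(q) / phi(q/d),
# d = gcd(n,q), so |c_q(n)| <= gcd(n,q): better than square-root
# cancellation for typical n.

import cmath
from math import gcd

def ramanujan_sum(q: int, n: int) -> int:
    """c_q(n) computed directly from the definition, rounded to an integer."""
    s = sum(cmath.exp(2j * cmath.pi * a * n / q)
            for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(s.real)

def mobius(n: int) -> int:
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # square factor
            result = -result
        d += 1
    return -result if n > 1 else result

def phi(n: int) -> int:
    result, d = n, 2
    while d * d <= n:
        if n % d == 0:
            while n % d == 0:
                n //= d
            result -= result // d
        d += 1
    if n > 1:
        result -= result // n
    return result

q = 210  # 2 * 3 * 5 * 7: square-free and smooth
for n in (1, 6, 35, 99, 210):
    d = gcd(n, q)
    assert ramanujan_sum(q, n) == mobius(q // d) * phi(q) // phi(q // d)
print("closed form for c_q(n) verified, q =", q)
```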

I’m closing my series of articles for the Princeton Companion to Mathematics with my article on “Ricci flow“. Of course, this flow on Riemannian manifolds is now very well known to mathematicians, due to its fundamental role in Perelman’s celebrated proof of the Poincaré conjecture. In this short article, I do not focus on that proof, but instead on the more basic questions as to what a Riemannian manifold is, what the Ricci curvature tensor is on such a manifold, and how Ricci flow qualitatively changes the geometry (and with surgery, the topology) of such manifolds over time.

I’ve saved this article for last, in part because it ties in well with my upcoming course on Perelman’s proof which will start in a few weeks (details to follow soon).

The last external article for the PCM that I would like to point out here is Brian Osserman‘s article on the Weil conjectures, which include the “Riemann hypothesis over finite fields” that was famously solved by Deligne. These (now solved) conjectures, which among other things give some quite precise control on the number of points on an algebraic variety over a finite field, were (and continue to be) a major motivating force behind much of modern arithmetic and algebraic geometry.

[*Update*, Mar 13: Actual link to Weil conjecture article added.]
