
[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function {\zeta}, defined by

\displaystyle  \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}

for {\hbox{Re}(s)>1} and extended meromorphically to other values of {s}, and asserts that the only zeroes of {\zeta} in the critical strip {\{ s: 0 \leq \hbox{Re}(s) \leq 1 \}} lie on the critical line {\{ s: \hbox{Re}(s)=\frac{1}{2} \}}.

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number {n} has a unique factorisation {n = p_1^{a_1} \ldots p_k^{a_k}} into primes. Taking logarithms, we obtain the identity

\displaystyle  \log n = \sum_{d|n} \Lambda(d) \ \ \ \ \ (1)

for any natural number {n}, where {\Lambda} is the von Mangoldt function, thus {\Lambda(n) = \log p} when {n} is a power of a prime {p} and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

\displaystyle  \sum_{n=1}^\infty \frac{\log n}{n^s} = \sum_{n=1}^\infty \sum_{d|n} \frac{\Lambda(d)}{n^s},

formally at least. Writing {n=dm}, the right-hand side factors as

\displaystyle (\sum_{d=1}^\infty \frac{\Lambda(d)}{d^s}) (\sum_{m=1}^\infty \frac{1}{m^s}) = \zeta(s) \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}

whereas the left-hand side is (formally, at least) equal to {-\zeta'(s)}. We conclude the identity

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)},

(formally, at least). If we integrate this, we are formally led to the identity

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = \log \zeta(s)

or equivalently to the exponential identity

\displaystyle  \zeta(s) = \exp( \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} ) \ \ \ \ \ (2)

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as {\zeta} has a simple pole at {s=1} and zeroes at various places {s=\rho} in the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

\displaystyle  \zeta(s) = \frac{1}{s-1} \times \prod_\rho (s-\rho) \times \ldots

(where we will be intentionally vague about what is hiding in the {\ldots} terms) and so we expect an expansion of the form

\displaystyle  \sum_{n=1}^\infty \frac{\Lambda(n)}{\log n} n^{-s} = - \log(s-1) + \sum_\rho \log(s-\rho) + \ldots. \ \ \ \ \ (3)

Note that

\displaystyle  \frac{1}{s-\rho} = \int_1^\infty t^{\rho-s} \frac{dt}{t}

and hence on integrating in {s} we formally have

\displaystyle  \log(s-\rho) = -\int_1^\infty t^{\rho-s-1} \frac{dt}{\log t}

and thus we have the heuristic approximation

\displaystyle  \log(s-\rho) \approx - \sum_{n=1}^\infty \frac{n^{\rho-s-1}}{\log n}.

Comparing this with (3), we are led to a heuristic form of the explicit formula

\displaystyle  \Lambda(n) \approx 1 - \sum_\rho n^{\rho-1}. \ \ \ \ \ (4)

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function {1_{[0,x]}(n)} to obtain the formula

\displaystyle  \sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (5)

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that {\hbox{Re}(\rho) = 1/2} for all zeroes {\rho}, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x ) \ \ \ \ \ (6)

as {x \rightarrow \infty}, giving a near-optimal “square root cancellation” for the sum {\sum_{n \leq x} (\Lambda(n)-1)}. Conversely, if one can somehow establish a bound of the form

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+\epsilon} )

for any fixed {\epsilon}, then the explicit formula can be used to deduce that all zeroes {\rho} of {\zeta} have real part at most {1/2+\epsilon}, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2+o(1)} )

can be automatically amplified to the stronger bound

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O( x^{1/2} \log^2 x )

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line {\hbox{Re}(s)=1}, and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

\displaystyle  \sum_{n \leq x} \Lambda(n) =x + o(x);

see e.g. this previous blog post for more discussion.
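
Before continuing, these manipulations can be checked numerically. Here is a short Python sketch (an illustration added to this discussion, not part of the original argument) that computes the von Mangoldt function by trial division, verifies the identity (1) for a few values of {n}, and compares {\sum_{n \leq x} \Lambda(n)} against the explicit formula (5) truncated to the first five zeroes {\rho = \frac{1}{2} \pm i\gamma} of zeta (with the ordinates {\gamma} quoted to six decimal places, and the constant term {-\log 2\pi} coming from the rigorous von Mangoldt explicit formula):

```python
import math

def mangoldt(n):
    """Lambda(n) = log p if n is a power of a prime p, and 0 otherwise."""
    for p in range(2, n + 1):
        if n % p == 0:            # p is the smallest prime factor of n
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

# identity (1): log n equals the sum of Lambda(d) over the divisors d of n
for n in (12, 360, 1001):
    divisor_sum = sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    assert abs(math.log(n) - divisor_sum) < 1e-9

# explicit formula (5), truncated to the first five zeroes rho = 1/2 + i*gamma
gammas = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def psi(x):
    """Chebyshev function: the sum of Lambda(n) over n <= x."""
    return sum(mangoldt(n) for n in range(2, int(x) + 1))

def psi_truncated(x):
    total = x - math.log(2 * math.pi)
    for g in gammas:
        rho = complex(0.5, g)
        total -= 2 * (x ** rho / rho).real   # each rho paired with its conjugate
    return total

for x in (100, 300, 1000):
    print(x, round(psi(x), 2), round(psi_truncated(x), 2))
```

Even with just five zeroes, the truncated right-hand side of (5) already tracks the size and oscillation of {\sum_{n \leq x} \Lambda(n)} reasonably well; adding more zeroes sharpens the fit.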

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character {\chi: {\bf Z} \rightarrow {\bf C}}. The analogue of the Riemann zeta function is then the Dirichlet {L}-function

\displaystyle  L(s,\chi) := \sum_{n=1}^\infty \frac{\chi(n)}{n^s}.

The identity (1), which encoded the fundamental theorem of arithmetic, can be twisted by {\chi} to obtain

\displaystyle  \chi(n) \log n = \sum_{d|n} \chi(d) \Lambda(d) \chi(\frac{n}{d}) \ \ \ \ \ (7)

and essentially the same manipulations as before eventually lead to the exponential identity

\displaystyle  L(s,\chi) = \exp( \sum_{n=1}^\infty \frac{\chi(n) \Lambda(n)}{\log n} n^{-s} ), \ \ \ \ \ (8)

which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form

\displaystyle  \chi(n) \Lambda(n) \approx - \sum_\rho n^{\rho-1} \ \ \ \ \ (9)

for non-principal {\chi}, where {\rho} now ranges over the zeroes of {L(s,\chi)} in the critical strip, rather than the zeroes of {\zeta(s)}; a more accurate formulation, following (5), would be

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) \approx - \sum_\rho \frac{x^{\rho}}{\rho}. \ \ \ \ \ (10)

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet {L}-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of {L(s,\chi)} in the critical strip also lie on the critical line, then we obtain the bound

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2} \log(x) \log(xq) )

for any non-principal Dirichlet character {\chi} of modulus {q}, again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

\displaystyle  \sum_{n \leq x} \chi(n) \Lambda(n) = O( x^{1/2+o(1)} )

(where {o(1)} denotes a quantity that goes to zero as {x \rightarrow \infty} for any fixed {q}).

Next, one can consider other number systems than the natural numbers {{\bf N}} and integers {{\bf Z}}. For instance, one can replace the integers {{\bf Z}} with rings {{\mathcal O}_K} of integers in other number fields {K} (i.e. finite extensions of {{\bf Q}}), such as the quadratic extensions {K = {\bf Q}[\sqrt{D}]} of the rationals for various square-free integers {D}, in which case the ring of integers would be the ring of quadratic integers {{\mathcal O}_K = {\bf Z}[\omega]} for a suitable generator {\omega} (it turns out that one can take {\omega = \sqrt{D}} if {D=2,3\hbox{ mod } 4}, and {\omega = \frac{1+\sqrt{D}}{2}} if {D=1 \hbox{ mod } 4}). Here, it is not immediately obvious what the analogue of the natural numbers {{\bf N}} is in this setting, since rings such as {{\bf Z}[\omega]} do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number {n} generates a principal ideal {(n) = \{ an: a \in {\bf Z} \}} in the integers, and conversely every non-trivial ideal {{\mathfrak n}} in the integers is associated to precisely one natural number {n} in this fashion, namely the norm {N({\mathfrak n}) := |{\bf Z} / {\mathfrak n}|} of that ideal. So one can identify the natural numbers with the ideals of {{\bf Z}}. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if {p} is prime, and {a,b} are integers, then {ab \in (p)} if and only if one of {a \in (p)} or {b \in (p)} is true.

Finally, even in number systems (such as {{\bf Z}[\sqrt{-5}]}) in which the classical version of the fundamental theorem of arithmetic fails (e.g. {6 = 2 \times 3 = (1-\sqrt{-5})(1+\sqrt{-5})}), we have the fundamental theorem of arithmetic for ideals: every ideal {\mathfrak{n}} in a Dedekind domain (which includes the ring {{\mathcal O}_K} of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals {\mathfrak{p}} (although these ideals might not necessarily be principal). For instance, in {{\bf Z}[\sqrt{-5}]}, the principal ideal {(6)} factors as the product of four prime (but non-principal) ideals {(2, 1+\sqrt{-5})}, {(2, 1-\sqrt{-5})}, {(3, 1+\sqrt{-5})}, {(3, 1-\sqrt{-5})}. (Note that the first two ideals {(2,1+\sqrt{-5}), (2,1-\sqrt{-5})} are actually equal to each other; a small numerical illustration of this example is given at the end of this discussion.) Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. The analogue of the Riemann zeta function is now the Dedekind zeta function

\displaystyle  \zeta_K(s) := \sum_{{\mathfrak n}} \frac{1}{N({\mathfrak n})^s}

where the summation is over all non-trivial ideals in {{\mathcal O}_K}. One can also define a von Mangoldt function {\Lambda_K({\mathfrak n})}, defined as {\log N( {\mathfrak p})} when {{\mathfrak n}} is a power of a prime ideal {{\mathfrak p}}, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

\displaystyle  \log N({\mathfrak n}) = \sum_{{\mathfrak d}|{\mathfrak n}} \Lambda_K({\mathfrak d}) \ \ \ \ \ (11)

which leads as before to an exponential identity

\displaystyle  \zeta_K(s) = \exp( \sum_{{\mathfrak n}} \frac{\Lambda_K({\mathfrak n})}{\log N({\mathfrak n})} N({\mathfrak n})^{-s} ) \ \ \ \ \ (12)

and an explicit formula of the heuristic form

\displaystyle  \Lambda_K({\mathfrak n}) \approx 1 - \sum_\rho N({\mathfrak n})^{\rho-1}

or, a little more accurately,

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) \approx x - \sum_\rho \frac{x^{\rho}}{\rho} \ \ \ \ \ (13)

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( \sqrt{x} \log(x) (d+\log(Dx)) )

where {D} is the conductor of {K} (which, in the case of number fields, is the absolute value of the discriminant of {K}) and {d = \hbox{dim}_{\bf Q}(K)} is the degree of {K} over {{\bf Q}}. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

\displaystyle  \sum_{N({\mathfrak n}) \leq x} \Lambda_K({\mathfrak n}) = x + O( x^{1/2+o(1)} )

where {o(1)} denotes a quantity that goes to zero as {x \rightarrow \infty} (holding {K} fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.
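
Returning briefly to the {{\bf Z}[\sqrt{-5}]} example mentioned earlier, here is a tiny Python sketch (an illustration, not part of the original text) of the failure of unique factorisation at the level of elements. The norm {N(a+b\sqrt{-5}) = a^2 + 5b^2} is multiplicative, and no element has norm {2} or {3}, so {2}, {3}, and {1 \pm \sqrt{-5}} are all irreducible even though {2 \times 3} and {(1+\sqrt{-5})(1-\sqrt{-5})} are genuinely different factorisations of {6}:

```python
# elements of Z[sqrt(-5)] represented as pairs (a, b), meaning a + b*sqrt(-5)
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return a * a + 5 * b * b   # multiplicative: norm(mul(x, y)) = norm(x) * norm(y)

two, three = (2, 0), (3, 0)
u, v = (1, 1), (1, -1)         # 1 + sqrt(-5) and 1 - sqrt(-5)

print(mul(two, three), mul(u, v))             # both equal (6, 0), i.e. the element 6
print([norm(x) for x in (two, three, u, v)])  # [4, 9, 6, 6]

# a^2 + 5b^2 = 2 or 3 has no integer solutions (it forces |a| <= 1 and b = 0),
# so none of the four factors above can split further: the *element* 6 has two
# distinct factorisations, even though the *ideal* (6) factors uniquely
assert all(norm((a, b)) not in (2, 3) for a in range(-2, 3) for b in range(-1, 2))
```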

As was the case with the Dirichlet {L}-functions, one can twist the Dedekind zeta function by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line {{\mathbb A}^1} and a finite field {{\mathbb F} = {\mathbb F}_q} of some order {q}. The polynomial functions on the affine line {{\mathbb A}^1/{\mathbb F}} are just the usual polynomial ring {{\mathbb F}[t]}, which then play the role of the integers {{\bf Z}} (or {{\mathcal O}_K}) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm {N(f)} of a polynomial is the order of {{\mathbb F}[t] / (f)}, which can be computed explicitly as

\displaystyle  N(f) = q^{\hbox{deg}(f)}.

Because of this, we will normalise things slightly differently here and use {\hbox{deg}(f)} in place of {\log N(f)} in what follows. The (local) zeta function {\zeta_{{\mathbb A}^1/{\mathbb F}}(s)} is then defined as

\displaystyle  \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = \sum_f \frac{1}{N(f)^s}

where {f} ranges over monic polynomials, and the von Mangoldt function {\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)} is defined to equal {\hbox{deg}(g)} when {f} is a power of a monic irreducible polynomial {g}, and zero otherwise. Note that because {N(f)} is always a power of {q}, the zeta function here is in fact periodic with period {2\pi i / \log q}. Because of this, it is customary to make a change of variables {T := q^{-s}}, so that

\displaystyle  \zeta_{{\mathbb A}^1/{\mathbb F}}(s) = Z( {\mathbb A}^1/{\mathbb F}, T )

and {Z} is the renormalised zeta function

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_f T^{\hbox{deg}(f)}. \ \ \ \ \ (14)

We have the analogue of (1) (or (7) or (11)):

\displaystyle  \hbox{deg}(f) = \sum_{g|f} \Lambda_{{\mathbb A}^1/{\mathbb F}}(g), \ \ \ \ \ (15)

which leads as before to an exponential identity

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \exp( \sum_f \frac{\Lambda_{{\mathbb A}^1/{\mathbb F}}(f)}{\hbox{deg}(f)} T^{\hbox{deg}(f)} ) \ \ \ \ \ (16)

analogous to (2), (8), or (12). It also leads to the explicit formula

\displaystyle  \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - \sum_\rho N(f)^{\rho-1}

where {\rho} are the zeroes of the original zeta function {\zeta_{{\mathbb A}^1/{\mathbb F}}(s)} (counting each residue class of the period {2\pi i/\log q} just once), or equivalently

\displaystyle  \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx 1 - q^{-\hbox{deg}(f)} \sum_\alpha \alpha^{\hbox{deg}(f)},

where {\alpha} are the reciprocals of the roots of the normalised zeta function {Z( {\mathbb A}^1/{\mathbb F}, T )} (or to put it another way, {1-\alpha T} are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) \approx q^n - \sum_\alpha \alpha^n.

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to an additive constant, thus

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (17)

for an explicit integer {c} (independent of {n}) arising from any potential pole of {Z} at {T=1}. In the case of the affine line {{\mathbb A}^1}, the situation is particularly simple, because the zeta function {Z( {\mathbb A}^1/{\mathbb F}, T)} is easy to compute. Indeed, since there are exactly {q^n} monic polynomials of a given degree {n}, we see from (14) that

\displaystyle  Z( {\mathbb A}^1/{\mathbb F}, T ) = \sum_{n=0}^\infty q^n T^n = \frac{1}{1-qT}

so in fact there are no zeroes whatsoever, and no pole at {T=1} either, so we have an exact prime number theorem for this function field:

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = q^n. \ \ \ \ \ (18)

Among other things, this tells us that the number of irreducible monic polynomials of degree {n} is {q^n/n + O(q^{n/2})}.
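
The exact prime number theorem (18) can also be verified by brute force for small {q}. In the sketch below (an illustration; the encoding of polynomials over {{\mathbb F}_2} as bitmasks, with bit {i} holding the coefficient of {t^i}, is a choice made here for convenience), monic irreducibles are detected by trial division, and each irreducible {g} of degree {d} dividing {n} contributes {\Lambda_{{\mathbb A}^1/{\mathbb F}}(g^{n/d}) = d} to the left-hand side of (18):

```python
def poly_rem(a, b):
    """Remainder of a modulo b, for polynomials over F_2 encoded as bitmasks."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible(f):
    d = f.bit_length() - 1
    # f is irreducible iff no polynomial of degree between 1 and d/2 divides it
    return d >= 1 and all(poly_rem(f, g) for g in range(2, 1 << (d // 2 + 1)))

q = 2
irr = {d: sum(1 for f in range(1 << d, 1 << (d + 1)) if is_irreducible(f))
       for d in range(1, 9)}
print(irr)   # {1: 2, 2: 1, 3: 2, 4: 3, 5: 6, 6: 9, 7: 18, 8: 30}

for n in range(1, 9):
    # left-hand side of (18): each irreducible of degree d | n contributes d once
    assert sum(d * irr[d] for d in irr if n % d == 0) == q ** n
```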

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial {f \in {\mathbb F}[t]} through its roots, which are a finite set of points in the algebraic closure {\overline{{\mathbb F}}} of the finite field {{\mathbb F}} (or more suggestively, as points on the affine line {{\mathbb A}^1( \overline{{\mathbb F}} )}). The number of such points (counting multiplicity) is the degree of {f}, and from the factor theorem, the set of points determines the monic polynomial {f} (or, if one removes the monic hypothesis, it determines the polynomial {f} projectively). These points have an action of the Galois group {\hbox{Gal}( \overline{{\mathbb F}} / {\mathbb F} )}. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map {\hbox{Frob}: x \mapsto x^q}, which fixes the elements of the original finite field {{\mathbb F}} but permutes the other elements of {\overline{{\mathbb F}}}. Thus the roots of a given polynomial {f} split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if {f} is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.
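
This orbit decomposition is easy to watch in a small example. The sketch below (an illustration; the defining polynomial {x^4+x+1} for {{\mathbb F}_{16}} is a choice made here, not something from the text) computes the orbits of the Frobenius map {a \mapsto a^2} on {{\mathbb F}_{16}} and recovers orbit sizes {1,1,2,4,4,4}, matching the monic irreducibles over {{\mathbb F}_2} of degree dividing {4}: two of degree {1}, one of degree {2}, and three of degree {4}.

```python
MOD = 0b10011   # x^4 + x + 1, irreducible over F_2, so F_16 = F_2[x]/(MOD)

def mul(a, b):
    """Multiply two elements of F_16, encoded as 4-bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

frob = lambda a: mul(a, a)   # the Frobenius map a -> a^2

seen, orbit_sizes = set(), []
for a in range(16):
    if a in seen:
        continue
    orbit, x = [], a
    while x not in orbit:    # iterate Frobenius until the orbit closes up
        orbit.append(x)
        x = frob(x)
    seen.update(orbit)
    orbit_sizes.append(len(orbit))

print(sorted(orbit_sizes))   # [1, 1, 2, 4, 4, 4]
```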

Now consider the degree {n} finite field extension {{\mathbb F}_n} of {{\mathbb F}} (it is a classical fact that there is exactly one such extension up to isomorphism for each {n}); this is a subfield of {\overline{{\mathbb F}}} of order {q^n}. (Here we are performing a standard abuse of notation by overloading the subscripts in the {{\mathbb F}} notation; thus {{\mathbb F}_q} denotes the field of order {q}, while {{\mathbb F}_n} denotes the extension of {{\mathbb F} = {\mathbb F}_q} of degree {n}, so that we in fact have {{\mathbb F}_n = {\mathbb F}_{q^n}} if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point {x} in this extension (or, more suggestively, the affine line {{\mathbb A}^1({\mathbb F}_n)} over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of {x}. Since the Frobenius action is periodic of period {n} on {{\mathbb F}_n}, the degree of this minimal polynomial must divide {n}. Conversely, every monic irreducible polynomial of degree {d} dividing {n} produces {d} distinct zeroes that lie in {{\mathbb F}_d} (here we use the classical fact that finite fields are perfect) and hence in {{\mathbb F}_n}. We have thus partitioned {{\mathbb A}^1({\mathbb F}_n)} into Frobenius orbits (also known as closed points), with each monic irreducible polynomial {f} of degree {d} dividing {n} contributing an orbit of size {d = \hbox{deg}(f) = \Lambda_{{\mathbb A}^1/{\mathbb F}}(f^{n/d})}. From this we conclude a geometric interpretation of the left-hand side of (18):

\displaystyle  \sum_{\hbox{deg}(f) = n} \Lambda_{{\mathbb A}^1/{\mathbb F}}(f) = \sum_{x \in {\mathbb A}^1({\mathbb F}_n)} 1. \ \ \ \ \ (19)

The identity (18) is thus equivalent to the thoroughly boring fact that the number of {{\mathbb F}_n}-points on the affine line {{\mathbb A}^1} is equal to {q^n}. However, things become much more interesting if one then replaces the affine line {{\mathbb A}^1} by a more general (geometrically) irreducible curve {C} defined over {{\mathbb F}}; for instance one could take {C} to be an elliptic curve

\displaystyle  E = \{ (x,y): y^2 = x^3 + ax + b \} \ \ \ \ \ (20)

for some suitable {a,b \in {\mathbb F}}, although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of {{\mathbb F}}-rational points removed). The analogue of {{\mathbb F}[t]} is then the coordinate ring of {C} (for instance, in the case of the elliptic curve (20) it would be {{\mathbb F}[x,y] / (y^2-x^3-ax-b)}), with polynomials in this ring producing a set of roots in the curve {C( \overline{\mathbb F})} that is again invariant with respect to the Frobenius action (acting on the {x} and {y} coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on {C} will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

\displaystyle  \zeta_{C/{\mathbb F}}(s) = \sum_{{\mathfrak f}} \frac{1}{N({\mathfrak f})^s}

and a von Mangoldt function {\Lambda_{C/{\mathbb F}}({\mathfrak f})} as before, where {{\mathfrak f}} would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve {C}; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points {\{x_1,\ldots,x_k\}} in {C}, or equivalently an effective divisor {[x_1] + \ldots + [x_k]} of {C}; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of {C}. With this dictionary, the zeta function becomes

\displaystyle  \zeta_{C/{\mathbb F}}(s) = \sum_{D \geq 0} \frac{1}{q^{s \hbox{deg}(D)}}

where the sum is over effective rational divisors {D} of {C} (with {k} being the degree of an effective divisor {[x_1] + \ldots + [x_k]}), or equivalently

\displaystyle  Z( C/{\mathbb F}, T ) = \sum_{D \geq 0} T^{\hbox{deg}(D)}.

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

\displaystyle  \sum_{N({\mathfrak f}) = q^n} \Lambda_{C/{\mathbb F}}({\mathfrak f}) = \sum_{x \in C({\mathbb F}_n)} 1

\displaystyle  = |C({\mathbb F}_n)|,

thus this sum is simply counting the number of {{\mathbb F}_n}-points of {C}. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

\displaystyle  Z( C/{\mathbb F}, T ) = \exp( \sum_{n \geq 1} \frac{|C({\mathbb F}_n)|}{n} T^n ) \ \ \ \ \ (21)

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

\displaystyle  |C({\mathbb F}_n)| = q^n - \sum_\alpha \alpha^n + c \ \ \ \ \ (22)

where {\alpha} runs over the (reciprocal) zeroes of {Z( C/{\mathbb F}, T )} (counting multiplicity), and {c} is an integer independent of {n}. (As it turns out, {c} equals {1} when {C} is a projective curve, and more generally equals {1-k} when {C} is a projective curve with {k} rational points deleted.)

To evaluate {Z(C/{\mathbb F},T)}, one needs to count the number of effective divisors of a given degree on the curve {C}. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when {C} is projective) that {Z(C/{\mathbb F},T)} is in fact a rational function, with a finite number of zeroes, and a simple pole at both {1} and {1/q}, with similar results when one deletes some rational points from {C}; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

\displaystyle  |E({\mathbb F}_n)| = q^n - \alpha^n - \beta^n

for two complex numbers {\alpha,\beta} depending on {E} and {q}.
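
The rationality of the zeta function makes point counts over extensions completely rigid: once {\alpha+\beta} has been read off from the count over {{\mathbb F}}, the count over every {{\mathbb F}_n} is determined. The following sketch checks this for {n=2} (an illustration; the specific curve {y^2 = x^3 + x + 6} over {{\mathbb F}_{11}}, and the model {{\mathbb F}_{121} = {\mathbb F}_{11}(\sqrt{r})} for a quadratic non-residue {r}, are choices made here):

```python
p, a, b = 11, 1, 6   # y^2 = x^3 + x + 6 over F_11 (nonsingular: 4a^3 + 27b^2 != 0 mod p)

def legendre(z):     # 1 for a nonzero square mod p, -1 for a non-square, 0 for zero
    z %= p
    return 0 if z == 0 else (1 if pow(z, (p - 1) // 2, p) == 1 else -1)

# affine count over F_p: each x contributes 1 + legendre(x^3 + a x + b) points
N1 = sum(1 + legendre(x**3 + a * x + b) for x in range(p))
t = p - N1                                # t = alpha + beta, with alpha * beta = p
predicted_N2 = p * p - (t * t - 2 * p)    # p^2 - (alpha^2 + beta^2), as in (22)

# brute-force count over F_{p^2} = F_p(sqrt(r)), elements encoded as pairs (u, v)
r = next(z for z in range(2, p) if legendre(z) == -1)

def f2mul(x, y):
    (u1, v1), (u2, v2) = x, y
    return ((u1 * u2 + r * v1 * v2) % p, (u1 * v2 + u2 * v1) % p)

def is_square(z):    # z in F_{p^2}^*: a square iff z^((p^2 - 1)/2) = 1
    result, base, e = (1, 0), z, (p * p - 1) // 2
    while e:
        if e & 1:
            result = f2mul(result, base)
        base = f2mul(base, base)
        e >>= 1
    return result == (1, 0)

N2 = 0
for u in range(p):
    for v in range(p):
        x = (u, v)
        x3 = f2mul(f2mul(x, x), x)
        fx = ((x3[0] + a * u + b) % p, (x3[1] + a * v) % p)   # x^3 + a x + b
        N2 += 1 if fx == (0, 0) else (2 if is_square(fx) else 0)

print(N1, N2, predicted_N2)   # the last two values agree, as (22) predicts
```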

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of {\zeta_{C/{\mathbb F}}} lie on the critical line, or equivalently that all the roots {\alpha} in (22) have modulus {\sqrt{q}}, so that (22) then gives the asymptotic

\displaystyle  |C({\mathbb F}_n)| = q^n + O( q^{n/2} ) \ \ \ \ \ (23)

where the implied constant depends only on the genus of {C} (and on the number of points removed from {C}). For instance, for elliptic curves we have the Hasse bound

\displaystyle  | |E({\mathbb F}_n)| - q^n | \leq 2 q^{n/2}.
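
Numerically, the Hasse bound is easy to confirm by exhaustive scan. The sketch below (illustrative; the choice {p=61} is made here just to keep the run fast) checks {| |E({\mathbb F}_p)| - p | \leq 2\sqrt{p}} for every nonsingular curve {y^2 = x^3 + ax + b} over {{\mathbb F}_p}:

```python
p = 61
roots = [0] * p
for y in range(p):
    roots[y * y % p] += 1        # number of square roots of each residue mod p

for a in range(p):
    for b in range(p):
        if (4 * a**3 + 27 * b**2) % p == 0:
            continue             # skip singular curves
        n = sum(roots[(x**3 + a * x + b) % p] for x in range(p))
        assert abs(n - p) <= 2 * p ** 0.5

print("Hasse bound verified for all nonsingular curves mod", p)
```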

As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

\displaystyle  |C({\mathbb F}_n)| = q^n + O( q^{n/2 + O(1)} ), \ \ \ \ \ (24)

then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large {n}, which then amplifies to the optimal bound (23) for all {n} (and in particular for {n=1}). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with {q}-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no {q}-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of {q}.

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet {L}-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve {C \subset {\mathbb A}^2} and an additive character {\psi: {\mathbb F}^2 \rightarrow {\bf C}^\times}, thus {\psi(x+y) = \psi(x) \psi(y)} for all {x,y \in {\mathbb F}^2}. Given a rational effective divisor {D = [x_1] + \ldots + [x_k]}, the sum {x_1+\ldots+x_k} is Frobenius-invariant and thus lies in {{\mathbb F}^2}. By abuse of notation, we may thus define {\psi} on such divisors by

\displaystyle  \psi( [x_1] + \ldots + [x_k] ) := \psi( x_1 + \ldots + x_k )

and observe that {\psi} is multiplicative in the sense that {\psi(D_1 + D_2) = \psi(D_1) \psi(D_2)} for rational effective divisors {D_1,D_2}. One can then define {\psi({\mathfrak f})} for any non-trivial ideal {{\mathfrak f}} by replacing that ideal with the associated rational effective divisor; for instance, if {f} is a polynomial in the coordinate ring of {C}, with zeroes at {x_1,\ldots,x_k \in C}, then {\psi((f))} is {\psi( x_1+\ldots+x_k )}. Again, we have the multiplicativity property {\psi({\mathfrak f} {\mathfrak g}) = \psi({\mathfrak f}) \psi({\mathfrak g})}. If we then form the twisted normalised zeta function

\displaystyle  Z( C/{\mathbb F}, \psi, T ) = \sum_{D \geq 0} \psi(D) T^{\hbox{deg}(D)}

then by twisting the previous analysis, we eventually arrive at the exponential identity

\displaystyle  Z( C/{\mathbb F}, \psi, T ) = \exp( \sum_{n \geq 1} \frac{S_n(C/{\mathbb F}, \psi)}{n} T^n ) \ \ \ \ \ (25)

in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums {S_n(C/{\mathbb F}, \psi)} are defined by

\displaystyle  S_n(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F}_n)} \psi( \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) )

where the trace {\hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x)} of an element {x} in the plane {{\mathbb A}^2( {\mathbb F}_n )} is defined by the formula

\displaystyle  \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x).

In particular, {S_1} is the exponential sum

\displaystyle  S_1(C/{\mathbb F},\psi) = \sum_{x \in C({\mathbb F})} \psi(x)

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

\displaystyle  K(a,b;p) := \sum_{x \in {\mathbb F}_p^\times} e_p( ax + \frac{b}{x})

as a special case, where {a,b \in {\mathbb F}_p^\times} and {e_p(x) := e^{2\pi i x / p}}. (NOTE: the sign conventions for the companion sum {S_n} are not consistent across the literature, sometimes it is {-S_n} which is referred to as the companion sum.)
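
For concreteness, here is a short sketch computing a few Kloosterman sums directly (an illustration; it uses the modular inverse pow(x, -1, p), which requires Python 3.8+). The sums come out real and, anticipating the Weil bound {|K(a,b;p)| \leq 2\sqrt{p}} discussed below, of square root size:

```python
import cmath

def kloosterman(a, b, p):
    """K(a, b; p): sum of e_p(a*x + b/x) over x in F_p^*, with e_p(t) = exp(2 pi i t / p)."""
    return sum(cmath.exp(2j * cmath.pi * ((a * x + b * pow(x, -1, p)) % p) / p)
               for x in range(1, p))

p = 97
for a, b in [(1, 1), (2, 5), (3, 11)]:
    K = kloosterman(a, b, p)
    print(a, b, round(K.real, 4), abs(K) <= 2 * p ** 0.5)   # Weil bound holds
```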

If {\psi} is non-principal (and {C} is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that {Z} is a rational function of {T}, with no pole at {T=q^{-1}}, and one then gets an explicit formula of the form

\displaystyle  S_n(C/{\mathbb F},\psi) = -\sum_\alpha \alpha^n + c \ \ \ \ \ (26)

for the companion sums, where {\alpha} are the reciprocals of the zeroes of {Z}, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

\displaystyle  \sum_{x \in {\mathbb F}_{p^n}^\times} e_p( a \hbox{Tr}(x) + b \hbox{Tr}(\frac{1}{x})) = -\alpha^n - \beta^n \ \ \ \ \ (27)

for all {n} and some complex numbers {\alpha,\beta} depending on {a,b,p}, where we have abbreviated {\hbox{Tr}_{{\mathbb F}_{p^n}/{\mathbb F}_p}} as {\hbox{Tr}}. As before, the Riemann hypothesis for {Z} then gives a square root cancellation bound of the form

\displaystyle  S_n(C/{\mathbb F},\psi) = O( q^{n/2} ) \ \ \ \ \ (28)

for the companion sums (and in particular gives the very explicit Weil bound {|K(a,b;p)| \leq 2\sqrt{p}} for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

\displaystyle  S_n(C/{\mathbb F},\psi) = O( q^{n/2 + O(1)} ).

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.
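
The identity (27) can also be watched in action numerically: writing {K = K(a,b;p) = -(\alpha+\beta)} and using the standard fact that {\alpha\beta = p} for Kloosterman sums, the {n=2} companion sum must equal {-(\alpha^2+\beta^2) = 2p - K^2}. The sketch below confirms this (an illustration; {{\mathbb F}_{p^2}} is modelled here as {{\mathbb F}_p(\sqrt{r})} for a quadratic non-residue {r}, so that {\hbox{Tr}(u+v\sqrt{r}) = 2u} and {\hbox{Tr}(1/(u+v\sqrt{r})) = 2u/(u^2-rv^2)}):

```python
import cmath

p, a, b = 97, 1, 1
e_p = lambda t: cmath.exp(2j * cmath.pi * (t % p) / p)

# n = 1: the Kloosterman sum K = -(alpha + beta), with alpha * beta = p
K = sum(e_p(a * x + b * pow(x, -1, p)) for x in range(1, p)).real

# n = 2: sum of e_p(a Tr(x) + b Tr(1/x)) over x in F_{p^2}^*
r = next(z for z in range(2, p) if pow(z, (p - 1) // 2, p) == p - 1)  # non-residue
S2 = 0.0
for u in range(p):
    for v in range(p):
        if u == 0 and v == 0:
            continue
        nrm = (u * u - r * v * v) % p   # norm of u + v*sqrt(r); nonzero as r is a non-residue
        S2 += e_p(2 * a * u + 2 * b * u * pow(nrm, -1, p)).real

print(round(S2, 4), round(2 * p - K * K, 4))   # (27) with n = 2: the two values agree
```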

One can also twist the zeta function on a curve by a multiplicative character {\chi: {\mathbb F}^\times \rightarrow {\bf C}^\times} by similar arguments, except that instead of forming the sum {x_1+\ldots+x_k} of all the components of an effective divisor {[x_1]+\ldots+[x_k]}, one takes the product {x_1 \ldots x_k} instead, and similarly one replaces the trace

\displaystyle  \hbox{Tr}_{{\mathbb F}_n/{\mathbb F}}(x) = x + \hbox{Frob}(x) + \ldots + \hbox{Frob}^{n-1}(x)

by the norm

\displaystyle  \hbox{Norm}_{{\mathbb F}_n/{\mathbb F}}(x) = x \cdot \hbox{Frob}(x) \cdot \ldots \cdot \hbox{Frob}^{n-1}(x).

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of {\ell}-adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to {\ell}-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an {\ell}-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.


I’ve just uploaded to the arXiv my paper “Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets“, submitted to Contrib. Disc. Math. The motivation of this paper is to understand a certain polynomial variant of the sum-product phenomenon in finite fields. This phenomenon asserts that if {A} is a non-empty subset of a finite field {F}, then either the sumset {A+A := \{a+b: a,b \in A\}} or product set {A \cdot A := \{ab: a,b \in A \}} will be significantly larger than {A}, unless {A} is close to a subfield of {F} (or to {\{1\}}). In particular, in the regime when {A} is large, say {|F|^{1-c} < |A| \leq |F|}, one expects an expansion bound of the form

\displaystyle  |A+A| + |A \cdot A| \gg (|F|/|A|)^{c'} |A| \ \ \ \ \ (1)

for some absolute constants {c, c' > 0}. Results of this type are known; for instance, Hart, Iosevich, and Solymosi obtained precisely this bound for {(c,c')=(3/10,1/3)} (in the case when {|F|} is prime), which was then improved by Garaev to {(c,c')=(1/3,1/2)}.

We have focused here on the case when {A} is a large subset of {F}, but sum-product estimates are also extremely interesting in the opposite regime in which {A} is allowed to be small (see for instance the papers of Katz-Shen and Li and of Garaev for some recent work in this case, building on some older papers of Bourgain, Katz and myself and of Bourgain, Glibichuk, and Konyagin). However, the techniques used in these two regimes are rather different. For large subsets of {F}, it is often profitable to use techniques such as the Fourier transform or the Cauchy-Schwarz inequality to “complete” a sum over a large set (such as {A}) into a set over the entire field {F}, and then to use identities concerning complete sums (such as the Weil bound on complete exponential sums over a finite field). For small subsets of {F}, such techniques are usually quite inefficient, and one has to proceed by somewhat different combinatorial methods which do not try to exploit the ambient field {F}. But my paper focuses exclusively on the large {A} regime, and unfortunately does not directly say much (except through reasoning by analogy) about the small {A} case.

Note that it is necessary to have both {A+A} and {A \cdot A} appear on the left-hand side of (1). Indeed, if one just has the sumset {A+A}, then one can set {A} to be a long arithmetic progression to give counterexamples to (1). Similarly, if one just has a product set {A \cdot A}, then one can set {A} to be a long geometric progression. The sum-product phenomenon can then be viewed as the assertion that it is not possible to simultaneously behave like a long arithmetic progression and a long geometric progression, unless one is already very close to behaving like a subfield.
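
A quick numerical illustration of this dichotomy (a toy example constructed for this discussion, not taken from the paper): in {{\bf F}_p}, an arithmetic progression has a small sumset but a large product set, while a geometric progression behaves in the opposite way.

```python
p = 10007                                  # a prime, so Z/pZ is the field F_p
AP = set(range(1, 1001))                   # arithmetic progression of length 1000
GP = {pow(3, k, p) for k in range(1000)}   # geometric progression of length 1000

def sumset_productset(A):
    sums = {(x + y) % p for x in A for y in A}
    prods = {(x * y) % p for x in A for y in A}
    return len(sums), len(prods)

print(sumset_productset(AP))   # sumset of size ~2|A|, product set far larger
print(sumset_productset(GP))   # product set of size ~2|A|, sumset far larger
```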

Now we consider a polynomial variant of the sum-product phenomenon, where we consider a polynomial image

\displaystyle  P(A,A) := \{ P(a,b): a,b \in A \}

of a set {A \subset F} with respect to a polynomial {P: F \times F \rightarrow F}; we can also consider the asymmetric setting of the image

\displaystyle  P(A,B) := \{ P(a,b): a \in A,b \in B \}

of two subsets {A,B \subset F}. The regime we will be interested in is the one where the field {F} is large, and the subsets {A, B} of {F} are also large, but the polynomial {P} has bounded degree. Actually, for technical reasons it will not be enough for us to assume that {F} has large cardinality; we will also need to assume that {F} has large characteristic. (The two concepts are synonymous for fields of prime order, but not in general; for instance, the field with {2^n} elements becomes large as {n \rightarrow \infty} while the characteristic remains fixed at {2}, and is thus not going to be covered by the results in this paper.)

In this paper of Vu, it was shown that one could replace {A \cdot A} with {P(A,A)} in (1), thus obtaining a bound of the form

\displaystyle  |A+A| + |P(A,A)| \gg (|F|/|A|)^{c'} |A|

whenever {|A| \geq |F|^{1-c}} for some absolute constants {c, c' > 0}, unless the polynomial {P} had the degenerate form {P(x,y) = Q(L(x,y))} for some linear function {L: F \times F \rightarrow F} and polynomial {Q: F \rightarrow F}, in which case {P(A,A)} behaves too much like {A+A} to get reasonable expansion. In this paper, we focus instead on the question of bounding {P(A,A)} alone. In particular, one can ask to classify the polynomials {P} for which one has the weak expansion property

\displaystyle |P(A,A)| \gg (|F|/|A|)^{c'} |A|

whenever {|A| \geq |F|^{1-c}} for some absolute constants {c, c' > 0}. One can also ask for stronger versions of this expander property, such as the moderate expansion property

\displaystyle |P(A,A)| \gg |F|

whenever {|A| \geq |F|^{1-c}}, or the almost strong expansion property

\displaystyle |P(A,A)| \geq |F| - O( |F|^{1-c'})

whenever {|A| \geq |F|^{1-c}}. (One can consider even stronger expansion properties, such as the strong expansion property {|P(A,A)| \geq |F|-O(1)}, but it was shown by Gyarmati and Sarkozy that this property cannot hold for polynomials of two variables of bounded degree when {|F| \rightarrow \infty}.) One can also consider asymmetric versions of these properties, in which one obtains lower bounds on {|P(A,B)|} rather than {|P(A,A)|}.

The example of a long arithmetic or geometric progression shows that the polynomials {P(x,y) = x+y} or {P(x,y) = xy} cannot be expanders in any of the above senses, and a similar construction also shows that polynomials of the form {P(x,y) = Q(f(x)+f(y))} or {P(x,y) = Q(f(x) f(y))} for some polynomials {Q, f: F \rightarrow F} cannot be expanders. On the other hand, there are a number of results in the literature establishing expansion for various polynomials in two or more variables that are not of this degenerate form (in part because such results are related to incidence geometry questions in finite fields, such as the finite field version of the Erdos distinct distances problem). For instance, Solymosi established weak expansion for polynomials of the form {P(x,y) = f(x)+y} when {f} is a nonlinear polynomial, with generalisations by Hart, Li, and Shen for various polynomials of the form {P(x,y) = f(x) + g(y)} or {P(x,y) = f(x) g(y)}. Further examples of expanding polynomials appear in the work of Shkredov, Iosevich-Rudnev, and Bukh-Tsimerman, as well as the previously mentioned papers of Vu and of Hart-Li-Shen, and these papers in turn cite many further results which are in the spirit of the polynomial expansion bounds discussed here (for instance, dealing with the small {A} regime, or working in other fields such as {{\bf R}} instead of in finite fields {F}). We will not summarise all these results here; they are summarised briefly in my paper, and in more detail in the papers of Hart-Li-Shen and of Bukh-Tsimerman. But we will single out one of the results of Bukh-Tsimerman, which is one of the most recent and general of these results, and closest to the results of my own paper. Roughly speaking, in this paper it is shown that a polynomial {P(x,y)} of two variables and bounded degree will be a moderate expander if it is non-composite (in the sense that it does not take the form {P(x,y) = Q(R(x,y))} for some non-linear polynomial {Q} and some polynomial {R}, possibly having coefficients in the algebraic closure of {F}) and is monic in both {x} and {y}, thus it takes the form {P(x,y) = x^d + S(x,y)} for some {d \geq 1} and some polynomial {S} of degree at most {d-1} in {x}, and similarly with the roles of {x} and {y} reversed, unless {P} is of the form {P(x,y) = f(x) + g(y)} or {P(x,y) = f(x) g(y)} (in which case the expansion theory is covered to a large extent by the previous work of Hart, Li, and Shen).

Our first main result improves upon the Bukh-Tsimerman result by strengthening the notion of expansion and removing the non-composite and monic hypotheses, but imposes a condition of large characteristic. I’ll state the result here slightly informally as follows:

Theorem 1 (Criterion for moderate expansion) Let {P: F \times F \rightarrow F} be a polynomial of bounded degree over a finite field {F} of sufficiently large characteristic, and suppose that {P} is not of the form {P(x,y) = Q(f(x)+g(y))} or {P(x,y) = Q(f(x)g(y))} for some polynomials {Q,f,g: F \rightarrow F}. Then one has the (asymmetric) moderate expansion property

\displaystyle  |P(A,B)| \gg |F|

whenever {|A| |B| \ggg |F|^{2-1/8}}.

This is basically a sharp necessary and sufficient condition for asymmetric moderate expansion for polynomials of two variables. In the paper, analogous sufficient conditions for weak or almost strong expansion are also given, although these are not quite as satisfactory (particularly the conditions for almost strong expansion, which include a somewhat complicated algebraic condition which is not easy to check, and which I would like to simplify further, but was unable to).
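
As a toy numerical illustration (an example constructed here, not taken from the paper) of moderate expansion, one can take {P(x,y) = x^2 + xy = x(x+y)}, which is not of either degenerate form in Theorem 1; even a set {A} of size well below {|F|} then has {P(A,A)} occupying a large fraction of the field:

```python
p = 2003                                   # a prime, F_p = Z/pZ
A = range(1, 400)                          # |A| is roughly p^{0.79}
image = {(x * x + x * y) % p for x in A for y in A}
print(len(image), p)                       # the image covers a large fraction of F_p
```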

The argument here resembles the Bukh-Tsimerman argument in many ways. One can view the result as an assertion about the expansion properties of the graph {\{ (a,b,P(a,b)): a,b \in F \}}, which can essentially be thought of as a somewhat sparse three-uniform hypergraph on {F}. Being sparse, it is difficult to directly apply techniques from dense graph or hypergraph theory for this situation; however, after a few applications of the Cauchy-Schwarz inequality, it turns out (as observed by Bukh and Tsimerman) that one can essentially convert the problem to one about the expansion properties of the set

\displaystyle  \{ (P(a,c), P(b,c), P(a,d), P(b,d)): a,b,c,d \in F \} \ \ \ \ \ (2)

(actually, one should view this as a multiset, but let us ignore this technicality) which one expects to be a dense set in {F^4}, except in the case when the associated algebraic variety

\displaystyle  \{ (P(a,c), P(b,c), P(a,d), P(b,d)): a,b,c,d \in \overline{F} \}

fails to be Zariski dense, but it turns out that in this case one can use some differential geometry and Riemann surface arguments (after first invoking the Lefschetz principle and the high characteristic hypothesis to work over the complex numbers instead of over a finite field) to show that {P} is of the form {Q(f(x)+g(y))} or {Q(f(x)g(y))}. This reduction is related to the classical fact that the only one-dimensional algebraic groups over the complex numbers are the additive group {({\bf C},+)}, the multiplicative group {({\bf C} \backslash \{0\},\times)}, or the elliptic curves (but the latter have a group law given by rational functions rather than polynomials, and so ultimately end up being eliminated from consideration, though they would play an important role if one wanted to study the expansion properties of rational functions).

It remains to understand the structure of the set (2). To understand dense graphs or hypergraphs, one of the standard tools of choice is the Szemerédi regularity lemma, which carves up such graphs into a bounded number of cells, with the graph behaving pseudorandomly on most pairs of cells. However, the bounds in this lemma are notoriously poor (the regularity obtained is an inverse tower exponential function of the number of cells), and this makes this lemma unsuitable for the type of expansion properties we seek (in which we want to deal with sets {A} which have a polynomial sparsity, e.g. {|A| \sim |F|^{1-c}}). Fortunately, in the case of sets such as (2) which are definable over the language of rings, it turns out that a much stronger regularity lemma is available, which I call the “algebraic regularity lemma”. I’ll state it (again, slightly informally) in the context of graphs as follows:

Lemma 2 (Algebraic regularity lemma) Let {F} be a finite field of large characteristic, and let {V, W} be definable sets over {F} of bounded complexity (i.e. {V, W} are subsets of {F^n}, {F^m} for some bounded {n,m} that can be described by some first-order predicate in the language of rings of bounded length and involving boundedly many constants). Let {E} be a definable subset of {V \times W}, again of bounded complexity (one can view {E} as a bipartite graph connecting {V} and {W}). Then one can partition {V, W} into a bounded number of cells {V_1,\ldots,V_a}, {W_1,\ldots,W_b}, still definable with bounded complexity, such that for all pairs {i=1,\ldots,a}, {j=1,\ldots,b}, one has the regularity property

\displaystyle  |E \cap (A \times B)| = d_{ij} |A| |B| + O( |F|^{-1/4} |V| |W| )

for all {A \subset V_i, B \subset W_j}, where {d_{ij} := \frac{|E \cap (V_i \times W_j)|}{|V_i| |W_j|}} is the density of {E} in {V_i \times W_j}.

This lemma resembles the Szemerédi regularity lemma, but regularises all pairs of cells (not just most pairs), and the regularity is of polynomial strength in {|F|}, rather than inverse tower exponential in the number of cells. Also, the cells are not arbitrary subsets of {V,W}, but are themselves definable with bounded complexity, which turns out to be crucial for applications. I am optimistic that this lemma will be useful not just for studying expanding polynomials, but for many other combinatorial questions involving dense subsets of definable sets over finite fields.

The above lemma is stated for graphs {E \subset V \times W}, but one can iterate it to obtain an analogous regularisation of hypergraphs {E \subset V_1 \times \ldots \times V_k} for any bounded {k} (for application to (2), we need {k=4}). This hypergraph regularity lemma, by the way, is not analogous to the strong hypergraph regularity lemmas of Rodl et al. and Gowers developed in the last six or so years, but closer in spirit to the older (but weaker) hypergraph regularity lemma of Chung which gives the same “order {1}” regularity that the graph regularity lemma gives, rather than higher order regularity.

One feature of the proof of Lemma 2 which I found striking was the need to use some fairly high powered technology from algebraic geometry, and in particular the Lang-Weil bound on counting points in varieties over a finite field (discussed in this previous blog post), and also the theory of the étale fundamental group. Let me try to briefly explain why this is the case. A model example of a definable set of bounded complexity {E} is a set {E \subset F^n \times F^m} of the form

\displaystyle  E = \{ (x,y) \in F^n \times F^m: \exists t \in F; P(x,y,t)=0 \}

for some polynomial {P: F^n \times F^m \times F \rightarrow F}. (Actually, it turns out that one can essentially write all definable sets as an intersection of sets of this form; see this previous blog post for more discussion.) To regularise the set {E}, it is convenient to square the adjacency matrix, which soon leads to the study of counting functions such as

\displaystyle  \mu(x,x') := | \{ (y,t,t') \in F^m \times F \times F: P(x,y,t) = P(x',y,t') = 0 \}|.

If one can show that this function {\mu} is “approximately finite rank” in the sense that (modulo lower order errors, of size {O(|F|^{-1/2})} smaller than the main term) this quantity depends only on a bounded number of bits of information about {x} and a bounded number of bits of information about {x'}, then a little bit of linear algebra will give the required regularity result.

One can recognise {\mu(x,x')} as counting {F}-points of a certain algebraic variety

\displaystyle  V_{x,x'} := \{ (y,t,t') \in \overline{F}^m \times \overline{F} \times \overline{F}: P(x,y,t) = P(x',y,t') = 0 \}.

The Lang-Weil bound (discussed in this previous post) provides a formula for this count, in terms of the number {c(x,x')} of geometrically irreducible components of {V_{x,x'}} that are defined over {F} (or equivalently, are invariant with respect to the Frobenius endomorphism associated to {F}). So the problem boils down to ensuring that this quantity {c(x,x')} is “generically bounded rank”, in the sense that for generic {x,x'}, its value depends only on a bounded number of bits of {x} and a bounded number of bits of {x'}.

Here is where the étale fundamental group comes in. One can view {V_{x,x'}} as a fibre product {V_x \times_{\overline{F}^m} V_{x'}} of the varieties

\displaystyle  V_x := \{ (y,t) \in \overline{F}^m \times \overline{F}: P(x,y,t) = 0 \}

and

\displaystyle  V_{x'} := \{ (y,t) \in \overline{F}^m \times \overline{F}: P(x',y,t) = 0 \}

over {\overline{F}^m}. If one is in sufficiently high characteristic (or even better, in zero characteristic, which one can reduce to by an ultraproduct (or nonstandard analysis) construction, similar to that discussed in this previous post), the varieties {V_x,V_{x'}} are generically finite étale covers of {\overline{F}^m}, and the fibre product {V_x \times_{\overline{F}^m} V_{x'}} is then also generically a finite étale cover. One can count the components of a finite étale cover of a connected variety by counting the number of orbits of the étale fundamental group acting on a fibre of that variety (much as the number of components of a cover of a connected manifold is the number of orbits of the topological fundamental group acting on that fibre). So if one understands the étale fundamental group of a certain generic subset of {\overline{F}^m} (formed by intersecting together an {x}-dependent generic subset of {\overline{F}^m} with an {x'}-dependent generic subset), this in principle controls {c(x,x')}. It turns out that one can decouple the {x} and {x'} dependence of this fundamental group by using an étale version of the van Kampen theorem for the fundamental group, which I discussed in this previous blog post. With this fact (and another deep fact about the étale fundamental group in zero characteristic, namely that it is topologically finitely generated), one can obtain the desired generic bounded rank property of {c(x,x')}, which gives the regularity lemma.

In order to expedite the deployment of all this algebraic geometry (as well as some Riemann surface theory), it is convenient to use the formalism of nonstandard analysis (or the ultraproduct construction), which among other things can convert quantitative, finitary problems in large characteristic into equivalent qualitative, infinitary problems in zero characteristic (in the spirit of this blog post). This allows one to use several tools from those fields as “black boxes”; not just the theory of étale fundamental groups (which are considerably simpler and more favorable in characteristic zero than they are in positive characteristic), but also some results limiting the morphisms between compact Riemann surfaces of high genus (such as the de Franchis theorem, the Riemann-Hurwitz formula, or the fact that all morphisms between elliptic curves are essentially group homomorphisms), which would be quite unwieldy to utilise if one did not first pass to the zero characteristic case (and thence to the complex case) via the ultraproduct construction (followed by the Lefschetz principle).

I found this project to be particularly educational for me, as it forced me to wander outside of my usual range by quite a bit in order to pick up the tools from algebraic geometry and Riemann surfaces that I needed (in particular, I read through several chapters of EGA and SGA for the first time). This did however put me in the slightly unnerving position of having to use results (such as the Riemann existence theorem) whose proofs I have not fully verified for myself, but which are easy to find in the literature, and widely accepted in the field. I suppose this type of dependence on results in the literature is more common in the more structured fields of mathematics than it is in analysis, which by its nature has fewer reusable black boxes, and so key tools often need to be rederived and modified for each new application. (This distinction is discussed further in this article of Gowers.)
