
In 1946, Ulam, in response to a theorem of Anning and Erdös, posed the following problem:

Problem 1 (Erdös-Ulam problem) Let $S \subset {\bf R}^2$ be a set such that the distance between any two points in $S$ is rational. Is it true that $S$ cannot be (topologically) dense in ${\bf R}^2$?

The paper of Anning and Erdös answered, in the affirmative, the variant of this question in which all the distances between points in $S$ are required to be integers rather than rationals.

The Erdös-Ulam problem remains open; it was discussed recently over at Gödel’s lost letter. It is in fact likely (as we shall see below) that the set $S$ in the above problem is not only forbidden to be topologically dense, but also cannot be Zariski dense either. If so, then the structure of $S$ is quite restricted; it was shown by Solymosi and de Zeeuw that if $S$ fails to be Zariski dense, then all but finitely many of the points of $S$ must lie on a single line, or a single circle. (Conversely, it is easy to construct examples of dense subsets of a line or circle in which all distances are rational, though in the latter case the square of the radius of the circle must also be rational.)

The main tool of the Solymosi-de Zeeuw analysis was Faltings’ celebrated theorem that every algebraic curve of genus at least two contains only finitely many rational points. The purpose of this post is to observe that an affirmative answer to the full Erdös-Ulam problem similarly follows from the conjectured analogue of Faltings’ theorem for surfaces, namely the following conjecture of Bombieri and Lang:

Conjecture 2 (Bombieri-Lang conjecture) Let $X$ be a smooth projective irreducible algebraic surface defined over the rationals which is of general type. Then the set $X({\bf Q})$ of rational points of $X$ is not Zariski dense in $X$.

In fact, the Bombieri-Lang conjecture has been made for varieties of arbitrary dimension, and for more general number fields than the rationals, but the above special case of the conjecture is the only one needed for this application. We will review what “general type” means (for smooth projective complex varieties, at least) below the fold.

The Bombieri-Lang conjecture is considered to be extremely difficult, in particular being substantially harder than Faltings’ theorem, which is itself a highly non-trivial result. So this implication should not be viewed as a practical route to resolving the Erdös-Ulam problem unconditionally; rather, it is a demonstration of the power of the Bombieri-Lang conjecture. Still, it was an instructive algebraic geometry exercise for me to carry out the details of this implication, which quickly boils down to verifying that a certain quite explicit algebraic surface is of general type (Theorem 4 below). As I am not an expert in the subject, my computations here will be rather tedious and pedestrian; it is likely that they could be made much slicker by exploiting more of the machinery of modern algebraic geometry, and I would welcome any such streamlining by actual experts in this area. (For similar reasons, there may be more typos and errors than usual in this post; corrections are welcome as always.) My calculations here are based on a similar calculation of van Luijk, who used analogous arguments to show (assuming Bombieri-Lang) that the set of perfect cuboids is not Zariski-dense in its projective parameter space.

We also remark that in a recent paper of Makhul and Shaffaf, the Bombieri-Lang conjecture (or more precisely, a weaker consequence of that conjecture) was used to show that if is a subset of with rational distances which intersects any line in only finitely many points, then there is a uniform bound on the cardinality of the intersection of with any line. I have also recently learned (private communication) that an unpublished work of Shaffaf has obtained a result similar to the one in this post, namely that the Erdös-Ulam conjecture follows from the Bombieri-Lang conjecture, plus an additional conjecture about the rational curves in a specific surface.

Let us now give the elementary reductions to the claim that a certain variety is of general type. For sake of contradiction, let $S \subset {\bf R}^2$ be a dense set such that the distance between any two points is rational. Then $S$ certainly contains two points that are a rational distance apart. By applying a translation, rotation, and a (rational) dilation, we may assume that these two points are $(0,0)$ and $(1,0)$. As $S$ is dense, there is a third point of $S$ not on the $x$ axis, which after a reflection we can place in the upper half-plane; we will write it as $(x_0, y_0)$ with $y_0 > 0$.

Given any two points $P, Q$ in $S$, the quantities $|P|^2, |Q|^2, |P-Q|^2$ are rational, and so by the cosine rule the dot product $P \cdot Q$ is rational as well. Since $S$ contains $(1,0)$, this implies that the $x$-component of every point in $S$ is rational; this in turn implies that the product of the $y$-coordinates of any two points in $S$ is rational as well (since this differs from a dot product by a rational number). In particular, $y_0^2$ and $y_0 y$ are rational for any point $(x,y)$ of $S$, and all of the points in $S$ now lie in the lattice ${\bf Q} \times y_0 {\bf Q}$. (This fact appears to have first been observed in the 1988 Habilitationsschrift of Kemnitz.)
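The cosine-rule step can be checked concretely in exact rational arithmetic. A minimal sketch (the side lengths below are hypothetical, chosen only to form a rational-sided triangle over the base from $(0,0)$ to $(1,0)$):

```python
from fractions import Fraction

# If P = (x, y) has rational distances r0 = |P - (0,0)| and r1 = |P - (1,0)|,
# then expanding r1^2 = (x - 1)^2 + y^2 = r0^2 - 2x + 1 isolates the
# x-coordinate as a rational expression in r0 and r1:
def x_coord(r0, r1):
    return (r0**2 - r1**2 + 1) / 2

# A triangle with rational side lengths 1, 3/4, 1/2 over the base (0,0)-(1,0):
r0, r1 = Fraction(3, 4), Fraction(1, 2)
x = x_coord(r0, r1)   # rational, as claimed
y2 = r0**2 - x**2     # y^2 is rational too (though y itself need not be)

assert x == Fraction(21, 32)
assert (x - 1)**2 + y2 == r1**2   # consistency: the distance to (1,0) is r1
```

Note that only $x$ and $y^2$ come out rational; the $y$-coordinate itself is generally an irrational multiple of a fixed $y_0$, as in the lattice description above.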

Now take four points , in in general position (so that the octuplet avoids any pre-specified hypersurface in ); this can be done if is dense. (If one wished, one could re-use the three previous points to be three of these four points, although this ultimately makes little difference to the analysis.) If is any point in , then the distances from to are rationals that obey the equations

for , and thus determine a rational point in the affine complex variety defined as

By inspecting the projection from this surface to the $(x,y)$-plane, we see that it is a branched cover of the plane, with the generic fibre having $2^4 = 16$ points (coming from the $16$ different ways to choose the signs of the square roots); in particular, it is a complex affine algebraic surface, defined over the rationals. By inspecting the monodromy around the four singular base points (which switch the sign of one of the roots, while keeping the other three roots unchanged), we see that the variety is connected away from its singular set, and thus irreducible. As $S$ is topologically dense in ${\bf R}^2$, it is Zariski-dense in ${\bf C}^2$, and so generates a Zariski-dense set of rational points in this surface. To solve the Erdös-Ulam problem, it thus suffices to show that

Claim 3 For any non-zero rational and for rationals in general position, the rational points of the affine surface defined above are not Zariski dense in it.

This is already very close to a claim that can be directly resolved by the Bombieri-Lang conjecture, but is affine rather than projective, and also contains some singularities. The first issue is easy to deal with, by working with the projectivisation

of , where is the homogeneous quadratic polynomial

with

and the projective complex space is the space of all equivalence classes of tuples up to projective equivalence . By identifying the affine point with the projective point , we see that consists of the affine variety together with the set , which is the union of eight curves, each of which lies in the closure of . Thus is the projective closure of , and is thus a complex irreducible projective surface, defined over the rationals. As is cut out by four quadric equations in and has degree sixteen (as can be seen for instance by inspecting the intersection of with a generic perturbation of a fibre over the generically defined projection ), it is also a complete intersection. To show (3), it then suffices to show that the rational points in are not Zariski dense in .

Heuristically, the reason why we expect few rational points here is as follows. First observe from the projective nature of (1) that every rational point is equivalent to an integer point. But for a septuple of integers of size $O(H)$, each of the four quadrics takes integer values of size $O(H^2)$, and so should only vanish about $O(H^{-2})$ of the time. Hence the number of integer points of height comparable to $H$ should be about

$\displaystyle H^7 \times (H^{-2})^4 = H^{-1};$

this is a convergent sum if $H$ ranges over (say) powers of two, and so from standard probabilistic heuristics (see this previous post) we in fact expect only finitely many solutions, in the absence of any special algebraic structure (e.g. the structure of an abelian variety, or a birational reduction to a simpler variety) that could produce an unusually large number of solutions.

The Bombieri-Lang conjecture, Conjecture 2, can be viewed as a formalisation of the above heuristics (roughly speaking, it is one of the most optimistic natural conjectures one could make that is compatible with these heuristics while also being invariant under birational equivalence).

Unfortunately, contains some singular points. Being a complete intersection, this occurs when the Jacobian matrix of the map has less than full rank, or equivalently when the gradient vectors

for are linearly dependent, where the is in the coordinate position associated to . One way in which this can occur is if one of the gradient vectors vanishes identically. This occurs at precisely points, when is equal to for some , and one has for all (so in particular ). Let us refer to these as the *obvious* singularities; they arise from the geometrically evident fact that the distance function is singular at .

The other way in which a singularity could occur is if a non-trivial linear combination of at least two of the gradient vectors vanishes. From (2), this can only occur if for some distinct , which from (1) implies that

for two choices of sign . If the signs are equal, then (as are in general position) this implies that , and then we have the singular point

If the non-trivial linear combination involved three or more gradient vectors, then by the pigeonhole principle at least two of the signs involved must be equal, and so the only singular points are (5). So the only remaining possibility is when we have two gradient vectors that are parallel but non-zero, with the signs in (3), (4) opposing. But then (as are in general position) the vectors are non-zero and non-parallel to each other, a contradiction. Thus, outside of the obvious singular points mentioned earlier, the only other singular points are the two points (5).

We will shortly show that the obvious singularities are *ordinary double points*; the surface near any of these points is analytically equivalent to an ordinary cone near the origin, which is a cone over a smooth conic curve. The two non-obvious singularities (5) are slightly more complicated than ordinary double points: they are *elliptic singularities*, which approximately resemble a cone over an elliptic curve. (As far as I can tell, this resemblance is exact in the category of real smooth manifolds, but not in the category of algebraic varieties.) If one blows up each of the point singularities separately, no further singularities are created, and one obtains a smooth projective surface (using the Segre embedding as necessary to embed back into projective space, rather than in a product of projective spaces). Away from the singularities, the rational points of the original surface lift up to rational points of the blown-up surface. Assuming the Bombieri-Lang conjecture, we thus are able to answer the Erdös-Ulam problem in the affirmative once we establish

Theorem 4 The smooth projective surface constructed above is of general type.
This will be done below the fold, by the pedestrian device of explicitly constructing global differential forms on ; I will also be working from a complex analysis viewpoint rather than an algebraic geometry viewpoint as I am more comfortable with the former approach. (As mentioned above, though, there may well be a quicker way to establish this result by using more sophisticated machinery.)

I thank Mark Green and David Gieseker for helpful conversations (and a crash course in varieties of general type!).

Remark 5 The above argument shows in fact (assuming Bombieri-Lang) that sets with all distances rational cannot be Zariski-dense, and thus (by Solymosi-de Zeeuw) must lie on a single line or circle with only finitely many exceptions. Assuming a stronger version of Bombieri-Lang involving a general number field $K$, we obtain a similar conclusion with “rational” replaced by “lying in $K$” (one has to extend the Solymosi-de Zeeuw analysis to more general number fields, but this should be routine, using the analogue of Faltings’ theorem for such number fields).

Let ${\bf F}_q$ be a finite field of order $q$, and let $C$ be an absolutely irreducible smooth projective curve defined over ${\bf F}_q$ (and hence over the algebraic closure of that field). For instance, $C$ could be the projective elliptic curve

$\displaystyle C = \{ [x:y:z]: y^2 z = x^3 + a x z^2 + b z^3 \}$

in the projective plane ${\bf P}^2$, where $a, b \in {\bf F}_q$ are coefficients whose discriminant $-16(4a^3 + 27b^2)$ is non-vanishing, which is the projective version of the affine elliptic curve $y^2 = x^3 + ax + b$.

To each such curve one can associate a genus $g$, which we will define later; for instance, elliptic curves have genus $1$. We can also count the cardinality $|C({\bf F}_q)|$ of the set of ${\bf F}_q$-points of $C$. The *Hasse-Weil bound* relates the two:

$\displaystyle | |C({\bf F}_q)| - q - 1 | \leq 2 g \sqrt{q}.$
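As a quick sanity check on the genus-one case $g = 1$ of this bound, one can count points on a small elliptic curve by brute force (the curve $y^2 = x^3 + x + 1$ and the primes below are arbitrary choices for this sketch):

```python
# Count the F_p-points of the projective elliptic curve y^2 = x^3 + a*x + b:
# all affine solutions, plus the single point at infinity [0:1:0].
def count_points(a, b, p):
    sqrt_table = {}
    for y in range(p):
        sqrt_table.setdefault(y * y % p, []).append(y)
    n = 1  # the point at infinity
    for x in range(p):
        n += len(sqrt_table.get((x**3 + a * x + b) % p, []))
    return n

# y^2 = x^3 + x + 1 has discriminant -16*31, nonzero modulo each prime below.
for p in [5, 7, 11, 13]:
    N = count_points(1, 1, p)
    # Hasse-Weil with g = 1: |N - (p+1)| <= 2*sqrt(p), checked exactly
    assert (N - (p + 1))**2 <= 4 * p
```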

The usual proofs of this bound proceed by first establishing a trace formula of the form

$\displaystyle |C({\bf F}_{q^n})| = q^n - \sum_{i=1}^{2g} \alpha_i^n + 1$

for some complex numbers $\alpha_1, \dots, \alpha_{2g}$ independent of $n$; this is in fact a special case of the Lefschetz-Grothendieck trace formula, and can be interpreted as an assertion that the zeta function associated to the curve $C$ is rational. The task is then to establish a bound $|\alpha_i| \leq \sqrt{q}$ for all $i$; this (or more precisely, the slightly stronger assertion $|\alpha_i| = \sqrt{q}$) is the Riemann hypothesis for such curves. This can be done either by passing to the Jacobian variety of $C$ and using a certain duality available on the cohomology of such varieties, known as Rosati involution; alternatively, one can pass to the product surface $C \times C$ and apply the Riemann-Roch theorem for that surface.

In 1969, Stepanov introduced an elementary method (a version of what is now known as the polynomial method) to count (or at least to upper bound) the quantity . The method was initially restricted to hyperelliptic curves, but was soon extended to general curves. In particular, Bombieri used this method to give a short proof of the following weaker version of the Hasse-Weil bound:

Theorem 2 (Weak Hasse-Weil bound)If is a perfect square, and , then .

In fact, the bound on can be sharpened a little bit further, as we will soon see.

Theorem 2 is only an upper bound on $|C({\bf F}_q)|$, but there is a Galois-theoretic trick to convert (a slight generalisation of) this upper bound to a matching lower bound, and if one then uses the trace formula (1) (and the “tensor power trick” of sending $n$ to infinity to control the weights $\alpha_i$) one can then recover the full Hasse-Weil bound. We discuss these steps below the fold.

I’ve discussed Bombieri’s proof of Theorem 2 in this previous post (in the special case of hyperelliptic curves), but now wish to present the full proof, with some minor simplifications from Bombieri’s original presentation; it is mostly elementary, with the deepest fact from algebraic geometry needed being Riemann’s inequality (a weak form of the Riemann-Roch theorem).

The first step is to reinterpret $|C({\bf F}_q)|$ as the number of points of intersection between two curves in the surface $C \times C$. Indeed, if we define the Frobenius endomorphism $\hbox{Frob}_q$ on any projective space by

$\displaystyle \hbox{Frob}_q( [x_0 : \dots : x_n] ) := [x_0^q : \dots : x_n^q],$

then this map preserves the curve $C$, and the fixed points of this map are precisely the ${\bf F}_q$ points of $C$:
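The fixed-point description of ${\bf F}_q$-points can be checked by hand in a tiny example. Here is a sketch over ${\bf F}_9 = {\bf F}_3[i]$ with $i^2 = -1$ (this particular model of ${\bf F}_9$ is chosen purely for illustration), where the Frobenius map $x \mapsto x^3$ fixes exactly the prime subfield ${\bf F}_3$:

```python
from itertools import product

P = 3  # F_9 = F_3[i] with i^2 = -1; elements are pairs (a, b) meaning a + b*i

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def frob(u):
    # the Frobenius endomorphism x -> x^3 (here q = 3), i.e. cubing
    return mul(u, mul(u, u))

fixed = [u for u in product(range(P), repeat=2) if frob(u) == u]
# The fixed points are exactly the prime field F_3 = {0, 1, 2} (those with b = 0)
assert fixed == [(0, 0), (1, 0), (2, 0)]
```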

Thus one can interpret $|C({\bf F}_q)|$ as the number of points of intersection between the diagonal curve

$\displaystyle \{ (x,x): x \in C \}$

and the Frobenius graph

$\displaystyle \{ (x, \hbox{Frob}_q(x)): x \in C \},$

which are copies of $C$ inside $C \times C$. But we can use the additional hypothesis that $q$ is a perfect square to write this more symmetrically, by taking advantage of the fact that the Frobenius map has a square root

$\displaystyle \hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}}^2,$

with $\hbox{Frob}_{\sqrt{q}}$ also preserving $C$. One can then also interpret $|C({\bf F}_q)|$ as the number of points of intersection between the curves $\{ (x, \hbox{Frob}_{\sqrt{q}}(x)): x \in C \}$ and $\{ (\hbox{Frob}_{\sqrt{q}}(y), y): y \in C \}$.

Let be the field of rational functions on (with coefficients in ), and define , , and analogously (although the latter is likely to be disconnected, so it will just be a ring rather than a field). We then (morally) have the commuting square

if we ignore the issue that a rational function on, say, , might blow up on all of and thus not have a well-defined restriction to . We use and to denote the restriction maps. Furthermore, we have obvious isomorphisms , coming from composing with the graphing maps and .

The idea now is to find a rational function on the surface of controlled degree which vanishes when restricted to , but is non-vanishing (and not blowing up) when restricted to . On , we thus get a non-zero rational function of controlled degree which vanishes on – which then lets us bound the cardinality of in terms of the degree of . (In Bombieri’s original argument, one required vanishing to high order on the side, but in our presentation, we have factored out a term which removes this high order vanishing condition.)

To find this , we will use linear algebra. Namely, we will locate a finite-dimensional subspace of (consisting of certain “controlled degree” rational functions) which projects injectively to , but whose projection to has strictly smaller dimension than itself. The rank-nullity theorem then forces the existence of a non-zero element of whose projection to vanishes, but whose projection to is non-zero.

Now we build . Pick a point of , which we will think of as being a point at infinity. (For the purposes of proving Theorem 2, we may clearly assume that is non-empty.) Thus is fixed by . To simplify the exposition, we will also assume that is fixed by the square root of ; in the opposite case when has order two when acting on , the argument is essentially the same, but all references to in the second factor of need to be replaced by (we leave the details to the interested reader).

For any natural number , define to be the set of rational functions which are allowed to have a pole of order up to at , but have no other poles on ; note that as we are assuming to be smooth, it is unambiguous what a pole is (and what order it will have). (In the fancier language of divisors and Cech cohomology, we have .) The space is clearly a vector space over ; one can view intuitively as the space of “polynomials” on of “degree” at most . When , consists just of the constant functions. Indeed, if , then the image of avoids and so lies in the affine line ; but as is projective, the image needs to be compact (hence closed) in , and must therefore be a point, giving the claim.

For higher , we have the easy relations

The former inequality just comes from the trivial inclusion . For the latter, observe that if two functions lie in , so that they each have a pole of order at most at , then some linear combination of these functions must have a pole of order at most at ; thus has codimension at most one in , giving the claim.

From (3) and induction we see that each of these spaces is finite dimensional, with the trivial upper bound

*Riemann’s inequality* complements this with the lower bound

thus one has for all but at most exceptions (in fact, exactly exceptions as it turns out). This is a consequence of the Riemann-Roch theorem; it can be proven from abstract nonsense (the snake lemma) if one defines the genus in a non-standard fashion (as the dimension of the first Cech cohomology of the structure sheaf of ), but to obtain this inequality with a standard definition of (e.g. as the dimension of the zeroth Cech cohomology of the line bundle of differentials) requires the more non-trivial tool of Serre duality.
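Writing $\ell(n)$ for the dimension of the space of rational functions with a pole of order at most $n$ at $P$ and no poles elsewhere, the bounds just discussed can be recorded together as

```latex
\ell(0) = 1, \qquad \ell(n-1) \leq \ell(n) \leq \ell(n-1) + 1, \qquad \ell(n) \geq n + 1 - g,
```

so in particular $1 \leq \ell(n) \leq n+1$ for all $n$. In the genus zero case of the projective line, with $P$ the point at infinity, every bound is attained: the functions with a pole of order at most $n$ at infinity are exactly the polynomials of degree at most $n$, so $\ell(n) = n+1$.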

At any rate, now that we have these vector spaces , we will define to be a tensor product space

for some natural numbers which we will optimise later. That is to say, is spanned by functions of the form with and . This is clearly a linear subspace of of dimension , and hence by Riemann’s inequality we have

Observe that maps a tensor product to a function . If and , then we see that the function has a pole of order at most at . We conclude that

and in particular by (4)

We will choose to be a bit bigger than , to make the image of smaller than that of . From (6), (10) we see that if we have the inequality

(together with (7)) then cannot be injective.

On the other hand, we have the following basic fact:

*Proof:* From (3), we can find a linear basis of such that each of the has a distinct order of pole at (somewhere between and inclusive). Similarly, we may find a linear basis of such that each of the has a distinct order of pole at (somewhere between and inclusive). The functions then span , and the order of pole at is . But since , these orders are all distinct, and so these functions must be linearly independent. The claim follows.

This gives us the following bound:

Proposition 4Let be natural numbers such that (7), (11), (12) hold. Then .

*Proof:* As is not injective, we can find with vanishing. By the above lemma, the function is then non-zero, but it must also vanish on , which has cardinality . On the other hand, by (8), has a pole of order at most at and no other poles. Since a non-zero rational function on a projective curve has equally many zeroes and poles (counted with multiplicity), the claim follows.
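The zero-counting step in this proof can be seen concretely in the genus zero setting: on the projective line, a function with a single pole of order $n$ at infinity is just a polynomial of degree $n$, and so has at most $n$ zeros. A quick brute-force check over a small field (the polynomial $x^3 - 1$ over ${\bf F}_7$ is an arbitrary choice for this sketch):

```python
# On the projective line over F_p, a function whose only pole is a pole of
# order n at infinity is a polynomial of degree n, hence has at most n zeros.
def roots_mod_p(coeffs, p):
    # coeffs[i] is the coefficient of x^i
    return [x for x in range(p)
            if sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == 0]

p = 7
r = roots_mod_p([-1, 0, 0, 1], p)   # x^3 - 1 over F_7
assert len(r) <= 3                  # degree bound on the number of zeros
```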

If , we may make the explicit choice

and a brief calculation then gives Theorem 2. In some cases one can optimise things a bit further. For instance, in the genus zero case (e.g. if is just the projective line ) one may make a slightly better choice of parameters and conclude the absolutely sharp bound $|C({\bf F}_q)| \leq q + 1$ in this case; in the case of the projective line, the function involved is in fact a very concrete one.

Remark 1When is not a perfect square, one can try to run the above argument using the factorisation instead of . This gives a weaker version of the above bound, of the shape . In the hyperelliptic case at least, one can erase this loss by working with a variant of the argument in which one requires to vanish to high order at , rather than just to first order; see this survey article of mine for details.

Let $P: {\bf R}^3 \rightarrow {\bf R}$ be an irreducible polynomial in three variables. As ${\bf R}$ is not algebraically closed, the zero set $Z = \{ x \in {\bf R}^3: P(x) = 0 \}$ can split into various components of dimension between $0$ and $2$. For instance, if , the zero set is a line; more interestingly, if , then $Z$ is the union of a line and a surface (or the product of an acnodal cubic curve with a line). We will assume that the two-dimensional component of $Z$ is non-empty, thus defining a real surface $S$ in ${\bf R}^3$. In particular, this hypothesis implies that $P$ is not just irreducible over ${\bf R}$, but is in fact absolutely irreducible (i.e. irreducible over ${\bf C}$), since otherwise one could use the complex factorisation of $P$ to contain $S$ inside the intersection of the complex zero locus of a complex polynomial and its complex conjugate, with the two factors having no common factor, forcing $S$ to be at most one-dimensional. (For instance, in the case , one can take .) Among other things, this makes $S$ a Zariski-dense subset of the complex surface $\{ z \in {\bf C}^3: P(z) = 0 \}$, thus any polynomial identity which holds true at every point of $S$ also holds true on all of this complex surface. This allows us to easily use tools from algebraic geometry in this real setting, even though the reals are not quite algebraically closed.

The surface $S$ is said to be ruled if, for a Zariski open dense set of points $x \in S$, there exists a line $\{ x + tv: t \in {\bf R} \}$ through $x$ for some non-zero $v \in {\bf R}^3$ which is completely contained in $S$, thus

$\displaystyle P(x+tv) = 0$

for all $t \in {\bf R}$. Also, a point $x \in S$ is said to be a flecnode if there exists a line through $x$ for some non-zero $v$ which is tangent to $S$ to third order, in the sense that the derivatives $\frac{d^j}{dt^j} P(x+tv)$ vanish at $t = 0$

for $j = 0, 1, 2, 3$. Clearly, if $S$ is a ruled surface, then a Zariski open dense set of points on $S$ are flecnodes. We then have the remarkable theorem of Cayley and Salmon asserting the converse:

Theorem 1 (Cayley-Salmon theorem) Let $P: {\bf R}^3 \rightarrow {\bf R}$ be an irreducible polynomial with the two-dimensional component $S$ of its zero set non-empty. Suppose that a Zariski dense set of points in $S$ are flecnodes. Then $S$ is a ruled surface.

Among other things, this theorem was used in the celebrated result of Guth and Katz that almost solved the Erdos distance problem in two dimensions, as discussed in this previous blog post. Vanishing to third order is necessary: observe that on a surface of negative curvature, such as the saddle surface $\{ (x,y,z): z = xy \}$, every point of the surface is tangent to second order to a line (the line in the direction for which the second fundamental form vanishes). This surface happens to be ruled, but a generic perturbation of this surface will no longer be ruled, although it still has negative curvature near the origin.
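As a concrete sanity check on the saddle example, one can verify in exact integer arithmetic that the surface $z = xy$ is ruled: through each surface point $(x_0, y_0, x_0 y_0)$ runs the line in direction $(1, 0, y_0)$, one of the two rulings of this hyperbolic paraboloid:

```python
# The saddle P(x, y, z) = z - x*y vanishes along the entire line
# t -> (x0 + t, y0, x0*y0 + t*y0) through any surface point (x0, y0, x0*y0).
def P(x, y, z):
    return z - x * y

def line_point(x0, y0, t):
    return (x0 + t, y0, x0 * y0 + t * y0)

for x0 in range(-3, 4):
    for y0 in range(-3, 4):
        assert all(P(*line_point(x0, y0, t)) == 0 for t in range(-5, 6))
```

Since the line lies entirely in the surface, every derivative of $t \mapsto P(x+tv)$ vanishes, so in particular every point of this surface is a flecnode, consistent with the Cayley-Salmon theorem.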

The original proof of the Cayley-Salmon theorem, dating back to at least 1915, is not easily accessible and not written in modern language. A modern proof of this theorem (together with substantial generalisations, for instance to higher dimensions) is given by Landsberg; the proof uses the machinery of modern algebraic geometry. The purpose of this post is to record an alternate proof of the Cayley-Salmon theorem based on classical differential geometry (in particular, the notion of torsion of a curve) and basic ODE methods (in particular, Gronwall’s inequality and the Picard existence theorem). The idea is to “integrate” the lines indicated by the flecnode to produce smooth curves on the surface ; one then uses the vanishing (1) and some basic calculus to conclude that these curves have zero torsion and are thus planar curves. Some further manipulation using (1) (now just to second order instead of third) then shows that these curves are in fact straight lines, giving the ruling on the surface.

Update: Janos Kollar has informed me that the above theorem was essentially known to Monge in 1809; see his recent arXiv note for more details.

I thank Larry Guth and Micha Sharir for conversations leading to this post.

The classical foundations of probability theory (discussed for instance in this previous blog post) rest on the notion of a probability space – a space $\Omega$ (the sample space) equipped with a $\sigma$-algebra ${\cal E}$ of subsets of $\Omega$ (the event space), together with a countably additive probability measure ${\bf P}: {\cal E} \rightarrow [0,1]$ that assigns a real number in the interval $[0,1]$ to each event.

One can generalise the concept of a probability space to a *finitely additive* probability space, in which the event space is now only a Boolean algebra rather than a $\sigma$-algebra, and the measure ${\bf P}$ is now only finitely additive instead of countably additive, thus ${\bf P}(E \vee F) = {\bf P}(E) + {\bf P}(F)$ when $E, F$ are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting.

In this post I would like to describe a further weakening of probability theory, which I will call *qualitative probability theory*, in which one does not assign a precise numerical probability value to each event, but instead merely records whether this probability is zero, one, or something in between. Thus ${\bf P}$ is now a function from the event space to the set $\{0, I, 1\}$, where $I$ is a new symbol that replaces all the elements of the open interval $(0,1)$. In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as *quantitative* probability theory, to distinguish it from its qualitative counterpart.)

The main reason I want to introduce this weak notion of probability theory is that it becomes well suited to talk about random variables living inside algebraic varieties, even if these varieties are defined over fields other than ${\bf R}$ or ${\bf C}$. In algebraic geometry one often talks about a “generic” element of a variety $V$ defined over a field $k$, which does not lie in any specified variety of lower dimension defined over $k$. Once $V$ has positive dimension, such generic elements do not exist as classical, deterministic $k$-points $x$ in $V$, since of course any such point lies in the $0$-dimensional subvariety $\{x\}$ of $V$. There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field $k$ to a sufficiently transcendental extension, in order to locate a sufficient number of generic points in $V$. Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in $V$ as being associated to the zero ideal in the function ring of $V$. However I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a *random variable* taking values in $V$, but which lies in any given lower-dimensional subvariety of $V$ with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over ${\bf R}$ or ${\bf C}$) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”.

It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable is the set of all predicates that are almost surely obeyed by . In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the *sample space* of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory.

Let $k$ be a field. A definable set over $k$ is a set of the form

$\displaystyle \{ x \in k^n: \phi(x) \hbox{ is true} \}$

where $n$ is a natural number, and $\phi(x)$ is a predicate involving the ring operations $+, \times$ of $k$, the equality symbol $=$, an arbitrary number of constants and free variables in $k$, the quantifiers $\forall, \exists$, boolean operators such as $\vee, \wedge, \neg$, and parentheses and colons, where the quantifiers are always understood to be over the field $k$. Thus, for instance, the set of quadratic residues

$\displaystyle \{ x \in k: \exists y: x = y \times y \}$

is definable over $k$, and any algebraic variety over $k$ is also a definable set over $k$. Henceforth we will abbreviate “definable over $k$” simply as “definable”.

If $k$ is a finite field, then every subset of $k^n$ is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the *complexity* of a definable set. We say that a set is a *definable set of complexity at most $M$* if $n \leq M$, and it can be written in the form (1) for some predicate $\phi$ of length at most $M$ (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface of bounded degree in a bounded number of dimensions would be a definable set of bounded complexity. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large.
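As a tiny illustration of both definability and the cardinality phenomena discussed later in this post, the set of squares in ${\bf F}_p$ (for $p$ an odd prime) is definable with bounded complexity, and its cardinality is exactly $(p+1)/2 = p/2 + O(1)$. A brute-force check over a few small primes:

```python
# Over F_p (p an odd prime), the definable set {x : exists y, x = y*y}
# of squares (including 0) has exactly (p+1)/2 elements: the squaring
# map is 2-to-1 away from 0, since y and -y have the same square.
def num_squares(p):
    return len({y * y % p for y in range(p)})

for p in [3, 5, 7, 11, 13, 101]:
    assert num_squares(p) == (p + 1) // 2   # i.e. p/2 up to a bounded error
```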

In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:

Lemma 1 (Algebraic regularity lemma) Let $F$ be a finite field, let $V, W$ be definable non-empty sets of complexity at most $M$, and let $E \subset V \times W$ also be definable with complexity at most $M$. Assume that the characteristic of $F$ is sufficiently large depending on $M$. Then we may partition $V = V_1 \cup \dots \cup V_m$ and $W = W_1 \cup \dots \cup W_n$ with $m, n = O_M(1)$, with the following properties:

My original proof of this lemma was quite complicated, based on an explicit calculation of the “square”

of using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)

Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields (although its content is trivial if the field is not sufficiently large depending on the complexity at most ).

Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:

Proposition 2 Let be a finite field, and let .

- (Discretised cardinality) If is a non-empty definable set of complexity at most , then one has
where is a natural number, and is a positive rational number with numerator and denominator . In particular, we have .

- (Definable cardinality) Assume is sufficiently large depending on . If , and are definable sets of complexity at most , so that can be viewed as a definable subset of that is definably parameterised by , then for each natural number and each positive rational with numerator and denominator , the set
is definable with complexity , where the implied constants in the asymptotic notation used to define (4) are the same as those appearing in (3). (Informally: the “dimension” and “measure” of depend definably on .)

We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)
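The “essentially discrete set of values” phenomenon can be seen numerically in a toy example of my own choosing (not taken from the Chatzidakis-van den Dries-Macintyre paper): the image of the cubing map on a prime field, whose density takes only the two values 1 and 1/3 depending on the residue of the characteristic mod 3. A quick Python check:

```python
def cube_image_size(p):
    """Cardinality of the definable set {x^3 : x in F_p}."""
    return len({pow(x, 3, p) for x in range(p)})

for p in [5, 7, 11, 13, 17, 19, 23, 29, 31, 37]:
    n = cube_image_size(p)
    # The cardinality is c*p + O(1) with c drawn from the
    # discrete set {1, 1/3}: cubing is a bijection unless p = 1 mod 3.
    if p % 3 == 1:
        assert n == (p - 1) // 3 + 1
    else:
        assert n == p
```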

The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most and cardinality , if is sufficiently large depending on . If are definable sets of complexity at most , it shows that for some rational with numerator and denominator ; furthermore, if , we may improve this bound to . In particular, we obtain the following “self-improving” properties:

- If are definable of complexity at most and for some , then (if is sufficiently small depending on and is sufficiently large depending on ) this forces .
- If are definable of complexity at most and for some and positive rational , then (if is sufficiently small depending on and is sufficiently large depending on ) this forces .

It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to ) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.

I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory“, submitted to the new journal “EMS surveys in the mathematical sciences“. This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”. There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, also come into play.

The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, or in Stepanov’s method for giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the Nullstellensatz of Alon is another relatively early use of the polynomial method. More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdös distance problem in the plane, and can be used to give a short proof of the Szemerédi-Trotter theorem. One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.
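The basic parameter-counting step underlying the polynomial method (a nonzero polynomial of degree d vanishing on a given point set exists as soon as the number of monomials exceeds the number of points) is just linear algebra, and can be sketched in a few lines of Python/NumPy; the function and the data below are mine, purely for illustration:

```python
import numpy as np

def vanishing_polynomial(points, d):
    """Find a nonzero bivariate polynomial of total degree <= d vanishing
    on all given points; one exists whenever the number of monomials
    C(d+2, 2) exceeds the number of points (parameter counting)."""
    monomials = [(i, j) for i in range(d + 1) for j in range(d + 1 - i)]
    M = np.array([[x**i * y**j for (i, j) in monomials]
                  for (x, y) in points], dtype=float)
    # Any nontrivial null vector of M gives the coefficients.
    _, _, vh = np.linalg.svd(M)
    return monomials, vh[-1]

# 5 points, degree 2: C(4, 2) = 6 monomials > 5 points, so a conic
# through all five points exists.
points = [(1.0, 2.0), (3.0, 1.0), (-1.0, 0.5), (2.0, -2.0), (0.0, 1.0)]
monomials, c = vanishing_polynomial(points, 2)
for (x, y) in points:
    val = sum(ci * x**i * y**j for ci, (i, j) in zip(c, monomials))
    assert abs(val) < 1e-8
```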

This is something of a first draft, so comments and suggestions are even more welcome than usual. (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.)

[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]

The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function , defined by

for and extended meromorphically to other values of , and asserts that the only zeroes of in the critical strip lie on the critical line .

One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number has a unique factorisation into primes. Taking logarithms, we obtain the identity

for any natural number , where is the von Mangoldt function, thus when is a power of a prime and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that

formally at least. Writing , the right-hand side factors as

whereas the left-hand side is (formally, at least) equal to . We conclude the identity

(formally, at least). If we integrate this, we are formally led to the identity

or equivalently to the exponential identity

which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as has a simple pole at and zeroes at various places on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form

(where we will be intentionally vague about what is hiding in the terms) and so we expect an expansion of the form

and hence on integrating in we formally have

and thus we have the heuristic approximation

Comparing this with (3), we are led to a heuristic form of the *explicit formula*

When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function to obtain the formula

which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that for all zeroes , it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that

as , giving a near-optimal “square root cancellation” for the sum . Conversely, if one can somehow establish a bound of the form

for any fixed , then the explicit formula can be used to then deduce that all zeroes of have real part at most , which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form

can be automatically amplified to the stronger bound

with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line , and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem

see e.g. this previous blog post for more discussion.
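Both the divisor-sum identity for the von Mangoldt function (the logarithmic form of the fundamental theorem of arithmetic) and the prime number theorem asymptotic for its summatory function are easy to test numerically. A small Python sanity check (the 1% tolerance at x = 10^4 is a deliberately crude choice):

```python
from math import log, isqrt

def von_mangoldt(n):
    """Lambda(n) = log p if n is a prime power p^k, and 0 otherwise."""
    if n < 2:
        return 0.0
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return log(n)  # n itself is prime

# Fundamental theorem of arithmetic: log n = sum over divisors d of
# n of Lambda(d).
for n in range(2, 200):
    s = sum(von_mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    assert abs(s - log(n)) < 1e-9

# Prime number theorem: the summatory function of Lambda up to x is
# asymptotic to x.
x = 10000
psi = sum(von_mangoldt(n) for n in range(1, x + 1))
assert abs(psi / x - 1) < 0.01
```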

The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character . The analogue of the Riemann zeta function is then the Dirichlet -function. Since a Dirichlet character is multiplicative, the equation (1), which encoded the fundamental theorem of arithmetic, can be twisted by the character to obtain

and essentially the same manipulations as before eventually lead to the exponential identity

which is a twisted version of (2), as well as twisted explicit formula, which heuristically takes the form

for non-principal , where now ranges over the zeroes of in the critical strip, rather than the zeroes of ; a more accurate formulation, following (5), would be

(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet -function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of in the critical strip also lie on the critical line, then we obtain the bound

for any non-principal Dirichlet character , again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound

(where denotes a quantity that goes to zero as for any fixed ).

Next, one can consider other number systems than the natural numbers and integers . For instance, one can replace the integers with rings of integers in other number fields (i.e. finite extensions of ), such as the quadratic extensions of the rationals for various square-free integers , in which case the ring of integers would be the ring of quadratic integers for a suitable generator (it turns out that one can take if , and if ). Here, it is not immediately obvious what the analogue of the natural numbers is in this setting, since rings such as do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number generates a principal ideal in the integers, and conversely every non-trivial ideal in the integers is associated to precisely one natural number in this fashion, namely the norm of that ideal. So one can identify the natural numbers with the ideals of . Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if is prime, and are integers, then if and only if one of or is true. Finally, even in number systems (such as ) in which the classical version of the fundamental theorem of arithmetic fails (e.g. ), we have *the fundamental theorem of arithmetic for ideals*: every ideal in a Dedekind domain (which includes the ring of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals (although these ideals might not necessarily be principal). For instance, in , the principal ideal factors as the product of four prime (but non-principal) ideals , , , . (Note that the first two ideals are actually equal to each other.) Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. 
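The textbook example of this failure is the two incompatible factorisations of 6 in the ring of quadratic integers generated by the square root of -5; irreducibility of all four factors is certified by the multiplicative norm a^2 + 5b^2, since no element of the ring has norm 2 or 3. A quick Python sanity check (representing elements as coefficient pairs; this is the standard example, not a computation from the post itself):

```python
# Elements of Z[sqrt(-5)] represented as pairs (a, b) meaning a + b*sqrt(-5).
def norm(a, b):
    """The multiplicative norm a^2 + 5*b^2."""
    return a * a + 5 * b * b

def mul(x, y):
    """(a + b*sqrt(-5)) * (c + d*sqrt(-5))."""
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

# 6 factors in two genuinely different ways:
# 2 * 3 and (1 + sqrt(-5)) * (1 - sqrt(-5)).
assert mul((2, 0), (3, 0)) == (6, 0)
assert mul((1, 1), (1, -1)) == (6, 0)

# No element has norm 2 or 3 (a^2 + 5b^2 = 2 or 3 has no integer
# solutions: any norm-2 or norm-3 element would need |a| <= 1, b = 0),
# so 2, 3, 1+sqrt(-5), 1-sqrt(-5) (of norms 4, 9, 6, 6) are all
# irreducible: unique factorisation of elements fails, and only the
# ideal version survives.
attained = {norm(a, b) for a in range(-3, 4) for b in range(-2, 3)}
assert 2 not in attained and 3 not in attained
```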
The analogue of the Riemann zeta function is now the Dedekind zeta function

where the summation is over all non-trivial ideals in . One can also define a von Mangoldt function , defined as when is a power of a prime ideal , and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),

which leads as before to an exponential identity

and an explicit formula of the heuristic form

in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form

where is the conductor of (which, in the case of number fields, is the absolute value of the discriminant of ) and is the degree of the extension of over . As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound

where denotes a quantity that goes to zero as (holding fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.

As was the case with the Dirichlet -functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.

Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line and a finite field of some order . The polynomial functions on the affine line are just the usual polynomial ring , which then play the role of the integers (or ) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm of a polynomial is the order of , which can be computed explicitly as

Because of this, we will normalise things slightly differently here and use in place of in what follows. The (local) zeta function is then defined as

where ranges over monic polynomials, and the von Mangoldt function is defined to equal when is a power of a monic irreducible polynomial , and zero otherwise. Note that because is always a power of , the zeta function here is in fact periodic with period . Because of this, it is customary to make a change of variables , so that

and is the renormalised zeta function

We have the analogue of (1) (or (7) or (11)):

which leads as before to an exponential identity

analogous to (2), (8), or (12). It also leads to the explicit formula

where are the zeroes of the original zeta function (counting each residue class of the period just once), or equivalently

where are the reciprocals of the roots of the normalised zeta function (or to put it another way, are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining

As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant factor, thus

for an explicit integer (independent of ) arising from any potential pole of at . In the case of the affine line , the situation is particularly simple, because the zeta function is easy to compute. Indeed, since there are exactly monic polynomials of a given degree , we see from (14) that

so in fact there are no zeroes whatsoever, and no pole at either, so we have an exact prime number theorem for this function field:

Among other things, this tells us that the number of irreducible monic polynomials of degree is .
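This count of irreducible monic polynomials can be verified by brute force over small prime fields, against the exact Möbius (“necklace”) formula; the same code also checks the identity expressing the total point count as a sum over degrees, which anticipates the Frobenius orbit decomposition discussed next. A Python sketch (all function names are mine):

```python
from itertools import product

def count_irreducible(p, n):
    """Brute-force count of monic irreducible polynomials of degree n
    over F_p (p prime), by sieving out all products of two monic
    polynomials of lower degree. Polynomials are coefficient tuples,
    lowest degree first."""
    def polymul(f, g):
        h = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                h[i + j] = (h[i + j] + a * b) % p
        return tuple(h)

    def monic(d):
        for cs in product(range(p), repeat=d):
            yield cs + (1,)

    reducible = set()
    for d in range(1, n // 2 + 1):
        for f in monic(d):
            for g in monic(n - d):
                reducible.add(polymul(f, g))
    return p ** n - len(reducible)

def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def necklace(q, n):
    """N_q(n) = (1/n) * sum over d dividing n of mu(d) * q^(n/d)."""
    return sum(mobius(d) * q ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

for p in [2, 3]:
    for n in [1, 2, 3, 4]:
        assert count_irreducible(p, n) == necklace(p, n)
    # q^n = sum over d | n of d * N_q(d): the q^n points of the degree-n
    # extension grouped into Frobenius orbits of size d.
    assert sum(d * necklace(p, d) for d in [1, 2, 4]) == p ** 4
```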

We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial through its roots, which are a finite set of points in the algebraic closure of the finite field (or more suggestively, as points on the affine line ). The number of such points (counting multiplicity) is the degree of , and from the factor theorem, the set of points determines the monic polynomial (or, if one removes the monic hypothesis, it determines the polynomial projectively). These points have an action of the Galois group . It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map , which fixes the elements of the original finite field but permutes the other elements of . Thus the roots of a given polynomial split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.

Now consider the degree finite field extension of (it is a classical fact that there is exactly one such extension up to isomorphism for each ); this is a subfield of of order . (Here we are performing a standard abuse of notation by overloading the subscripts in the notation; thus denotes the field of order , while denotes the extension of of order , so that we in fact have if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point in this extension (or, more suggestively, the affine line over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of . Since the Frobenius action is periodic of period on , the degree of this minimal polynomial must divide . Conversely, every monic irreducible polynomial of degree dividing produces distinct zeroes that lie in (here we use the classical fact that finite fields are perfect) and hence in . We have thus partitioned into Frobenius orbits (also known as *closed points*), with each monic irreducible polynomial of degree dividing contributing an orbit of size . From this we conclude a geometric interpretation of the left-hand side of (18):

The identity (18) thus is equivalent to the thoroughly boring fact that the number of -points on the affine line is equal to . However, things become much more interesting if one then replaces the affine line by a more general (geometrically) irreducible curve defined over ; for instance one could take to be an elliptic curve

for some suitable , although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of -rational points removed). The analogue of is then the coordinate ring of (for instance, in the case of the elliptic curve (20) it would be ), with polynomials in this ring producing a set of roots in the curve that is again invariant with respect to the Frobenius action (acting on the and coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function

and a von Mangoldt function as before, where would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve ; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points in , or equivalently an effective divisor of ; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be *rational* in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of . With this dictionary, the zeta function becomes

where the sum is over effective rational divisors of (with being the degree of an effective divisor ), or equivalently

The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes

thus this sum is simply counting the number of -points of . The analogue of the exponential identity (16) (or (2), (8), or (12)) is then

and the analogue of the explicit formula (17) (or (5), (10) or (13)) is

where runs over the (reciprocal) zeroes of (counting multiplicity), and is an integer independent of . (As it turns out, equals when is a projective curve, and more generally equals when is a projective curve with rational points deleted.)

To evaluate , one needs to count the number of effective divisors of a given degree on the curve . Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when is projective) that is in fact a rational function, with a finite number of zeroes, and a simple pole at both and , with similar results when one deletes some rational points from ; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have

for two complex numbers depending on and .

The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of lie on the critical line, or equivalently that all the roots in (22) have modulus , so that (22) then gives the asymptotic

where the implied constant depends only on the genus of (and on the number of points removed from ). For instance, for elliptic curves we have the *Hasse bound*

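Both the Hasse bound and the rationality structure above (the point counts over all extensions being governed by the same two numbers) can be tested by brute-force point counting. A Python sketch (the curve and primes are arbitrary choices of mine; the quadratic extension is modelled as a + b*i with i^2 = -1, which is valid when p = 3 mod 4, so that -1 is a non-residue):

```python
def points_fp(A, B, p):
    """Affine F_p-points on y^2 = x^3 + A*x + B."""
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    return sum(sq.get((x**3 + A * x + B) % p, 0) for x in range(p))

def points_fp2(A, B, p):
    """Affine points over the quadratic extension, modelled as
    {a + b*i : i^2 = -1}; requires p = 3 (mod 4)."""
    def mul(u, v):
        (a, b), (c, d) = u, v
        return ((a * c - b * d) % p, (a * d + b * c) % p)
    elems = [(a, b) for a in range(p) for b in range(p)]
    sq = {}
    for y in elems:
        s = mul(y, y)
        sq[s] = sq.get(s, 0) + 1
    count = 0
    for x in elems:
        x3 = mul(mul(x, x), x)
        rhs = ((x3[0] + A * x[0] + B) % p, (x3[1] + A * x[1]) % p)
        count += sq.get(rhs, 0)
    return count

for p in [7, 19, 23]:   # primes = 3 mod 4
    A, B = 2, 3
    if (4 * A**3 + 27 * B**2) % p == 0:
        continue        # skip singular curves
    N1, N2 = points_fp(A, B, p), points_fp2(A, B, p)
    a = p - N1          # a = alpha + beta, with alpha * beta = p
    assert (N1 - p) ** 2 <= 4 * p           # Hasse bound
    assert N2 == p * p - (a * a - 2 * p)    # N2 = p^2 - alpha^2 - beta^2
```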
As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.

then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the *proofs* of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large , which then amplifies to the optimal bound (23) for all (and in particular for ). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with -dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no -dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of .

Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet -function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve and an additive character , thus for all . Given a rational effective divisor , the sum is Frobenius-invariant and thus lies in . By abuse of notation, we may thus define on such divisors by

and observe that is multiplicative in the sense that for rational effective divisors . One can then define for any non-trivial ideal by replacing that ideal with the associated rational effective divisor; for instance, if is a polynomial in the coordinate ring of , with zeroes at , then is . Again, we have the multiplicativity property . If we then form the twisted normalised zeta function

then by twisting the previous analysis, we eventually arrive at the exponential identity

in analogy with (21) (or (2), (8), (12), or (16)), where the *companion sums* are defined by

where the trace of an element in the plane is defined by the formula

In particular, is the exponential sum

which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum

as a special case, where . (NOTE: the sign conventions for the companion sum are not consistent across the literature, sometimes it is which is referred to as the companion sum.)
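Kloosterman sums and the Weil bound they satisfy are easy to compute directly for small moduli. A Python sanity check (normalising e(t) = exp(2*pi*i*t/p), as in the definition above):

```python
import cmath

def kloosterman(a, b, p):
    """K(a, b; p): sum over invertible x mod p of e((a*x + b*x^{-1})/p)."""
    e = lambda t: cmath.exp(2j * cmath.pi * t / p)
    return sum(e(a * x + b * pow(x, -1, p)) for x in range(1, p))

# Kloosterman sums are real (the substitution x -> -x pairs each term
# with its complex conjugate), and the Weil bound asserts that
# |K(a, b; p)| <= 2*sqrt(p) when p does not divide a, b.
for p in [5, 7, 11, 13, 101]:
    K = kloosterman(1, 1, p)
    assert abs(K.imag) < 1e-9
    assert abs(K) <= 2 * p ** 0.5 + 1e-9
```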

If is non-principal (and is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that is a rational function of , with no pole at , and one then gets an explicit formula of the form

for the companion sums, where are the reciprocals of the zeroes of , in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form

for all and some complex numbers depending on , where we have abbreviated as . As before, the Riemann hypothesis for then gives a square root cancellation bound of the form

for the companion sums (and in particular gives the very explicit Weil bound for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound

As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.

One can also twist the zeta function on a curve by a multiplicative character by similar arguments, except that instead of forming the sum of all the components of an effective divisor , one takes the product instead, and similarly one replaces the trace

by the norm

Again, see Chapter 11 of Iwaniec-Kowalski for details.

Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of *-adic sheaves* on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to -adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an -adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.

The *rectification principle* in arithmetic combinatorics asserts, roughly speaking, that very small subsets (or, alternatively, small structured subsets) of an additive group or a field of large characteristic can be modeled (for the purposes of arithmetic combinatorics) by subsets of a group or field of zero characteristic, such as the integers or the complex numbers . The additive form of this principle is known as the *Freiman rectification principle*; it has several formulations, going back of course to the original work of Freiman. Here is one formulation as given by Bilu, Lev, and Ruzsa:

Proposition 1 (Additive rectification) Let be a subset of the additive group for some prime , and let be an integer. Suppose that . Then there exists a map into a subset of the integers which is a Freiman isomorphism of order in the sense that for any , one has if and only if

Furthermore is a right-inverse of the obvious projection homomorphism from to .

The original version of the rectification principle allowed the sets involved to be substantially larger in size (cardinality up to a small constant multiple of ), but with the additional hypothesis of bounded doubling involved; see the above-mentioned papers, as well as this later paper of Green and Ruzsa, for further discussion.

The proof of Proposition 1 is quite short (see Theorem 3.1 of Bilu-Lev-Ruzsa); the main idea is to use Minkowski’s theorem to find a non-trivial dilate of that is contained in a small neighbourhood of the origin in , at which point the rectification map can be constructed by hand.
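For tiny examples this dilate-and-lift step can be mimicked by exhaustive search rather than Minkowski's theorem: find a dilation of the set whose centered representatives land in a short interval around the origin, and lift. A Python sketch (the example set is mine; note that this rectifies a dilate of the set, which is Freiman-isomorphic to the original set):

```python
def rectify(A, p, s):
    """Search for a dilation lam such that lam*A, taken with centered
    representatives in (-p/2, p/2], fits inside (-p/(2s), p/(2s)).
    The centered lift of that dilate is then a Freiman isomorphism of
    order s into the integers, since sums of s images determine their
    common residue class mod p."""
    def centered(x):
        x %= p
        return x - p if x > p // 2 else x
    for lam in range(1, p):
        image = [centered(lam * a) for a in A]
        if all(abs(v) * 2 * s < p for v in image):
            return lam, dict(zip(A, image))
    return None

p, s = 101, 2
A = [1, 2, 50]
lam, phi = rectify(A, p, s)   # lam = 1 fails because of the element 50

# Check the order-2 Freiman property: a + b = c + d (mod p) if and
# only if phi(a) + phi(b) = phi(c) + phi(d) in the integers.
from itertools import product
for a, b, c, d in product(A, repeat=4):
    assert ((a + b - c - d) % p == 0) == \
        (phi[a] + phi[b] == phi[c] + phi[d])
```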

Very recently, Codrut Grosu obtained an arithmetic analogue of the above theorem, in which the rectification map preserves both additive and multiplicative structure:

Theorem 2 (Arithmetic rectification) Let be a subset of the finite field for some prime , and let be an integer. Suppose that . Then there exists a map into a subset of the complex numbers which is a Freiman field isomorphism of order in the sense that for any and any polynomial of degree at most and integer coefficients of magnitude summing to at most , one has if and only if

Note that it is necessary to use an algebraically closed field such as ${\bf C}$ for this theorem, in contrast to the integers used in Proposition 1, as ${\bf F}_p$ can contain objects such as square roots of $-1$ which can only map to $\pm i$ in the complex numbers (once $s$ is at least $2$).
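To see this concretely, here is a quick Python check (with an illustrative prime of my choosing): in ${\bf F}_{13}$ the polynomial $x^2+1$ has roots, and any Freiman field isomorphism of order at least $2$ must send such a root to a complex square root of $-1$, which exists in ${\bf C}$ but not in ${\bf Z}$:

```python
p = 13  # a prime with p = 1 (mod 4), so -1 is a quadratic residue mod p
roots = [a for a in range(p) if (a * a + 1) % p == 0]  # square roots of -1 in F_p
assert roots == [5, 8]

# An order >= 2 Freiman field isomorphism phi must preserve the relation
# a^2 + 1 = 0, forcing phi(a) to be a complex square root of -1:
assert (1j) ** 2 + 1 == 0
# ...while no integer n satisfies n^2 + 1 = 0, so the integers cannot
# serve as the target, in contrast to Proposition 1.
assert all(n * n + 1 != 0 for n in range(-p, p))
```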

Using Theorem 2, one can transfer results in arithmetic combinatorics (e.g. sum-product or Szemerédi-Trotter type theorems) regarding finite subsets of ${\bf C}$ to analogous results regarding sufficiently small subsets of ${\bf F}_p$; see the paper of Grosu for several examples of this. This should be compared with the paper of Vu, Wood, and Wood, which introduces a converse principle that embeds finite subsets of ${\bf C}$ (or more generally, of a characteristic zero integral domain) in a Freiman field-isomorphic fashion into finite subsets of ${\bf F}_p$ for arbitrarily large primes $p$, allowing one to transfer arithmetic combinatorial facts from the latter setting to the former.

Grosu’s argument uses some quantitative elimination theory, and in particular a quantitative variant of a lemma of Chang that was discussed previously on this blog. In that previous blog post, it was observed that (an ineffective version of) Chang’s theorem could be obtained using only qualitative algebraic geometry (as opposed to quantitative algebraic geometry tools such as elimination theory results with explicit bounds) by means of nonstandard analysis (or, in what amounts to essentially the same thing in this context, the use of ultraproducts). One can then ask whether one can similarly establish an ineffective version of Grosu’s result by nonstandard means. The purpose of this post is to record that this can indeed be done without much difficulty, though the result obtained, being ineffective, is somewhat weaker than that in Theorem 2. More precisely, we obtain

Theorem 3 (Ineffective arithmetic rectification) Let $s \geq 1$. Then if $F$ is a field of characteristic at least $C_s$ for some $C_s$ depending on $s$, and $A$ is a subset of $F$ of cardinality at most $s$, then there exists a map $\phi: A \rightarrow \phi(A)$ into a subset $\phi(A)$ of the complex numbers ${\bf C}$ which is a Freiman field isomorphism of order $s$.

Our arguments will not provide any effective bound on the quantity $C_s$ (though one could in principle eventually extract such a bound by deconstructing the proof of Proposition 4 below), making this result weaker than Theorem 2 (save for the minor generalisation that it can handle fields of prime power order as well as fields of prime order, as long as the characteristic remains large).

Following the principle that ultraproducts can be used as a bridge to connect quantitative and qualitative results (as discussed in these previous blog posts), we will deduce Theorem 3 from the following (well-known) qualitative version:

Proposition 4 (Baby Lefschetz principle) Let $k$ be a field of characteristic zero that is finitely generated over the rationals ${\bf Q}$. Then there is an isomorphism $\phi: k \rightarrow \phi(k)$ from $k$ to a subfield $\phi(k)$ of the complex numbers ${\bf C}$.

This principle (first laid out in an appendix of Lefschetz’s book), among other things, often allows one to use the methods of complex analysis (e.g. Riemann surface theory) to study many other fields of characteristic zero. There are many variants and extensions of this principle; see for instance this MathOverflow post for some discussion of these. I used this baby version of the Lefschetz principle recently in a paper on expanding polynomial maps.

*Proof:* We give two proofs of this fact, one using transcendence bases and the other using Hilbert’s nullstellensatz.

We begin with the former proof. As $k$ is finitely generated over ${\bf Q}$, it has finite transcendence degree, thus one can find algebraically independent elements $x_1,\ldots,x_m$ of $k$ over ${\bf Q}$ such that $k$ is a finite extension of ${\bf Q}(x_1,\ldots,x_m)$; in particular, by the primitive element theorem, $k$ is generated by $x_1,\ldots,x_m$ and an element $\alpha$ which is algebraic over ${\bf Q}(x_1,\ldots,x_m)$. (Here we use the fact that finite extensions of characteristic zero fields are separable.) If we then define $\phi$ by first mapping $x_1,\ldots,x_m$ to generic (and thus algebraically independent) complex numbers $z_1,\ldots,z_m$, and then setting $\phi(\alpha)$ to be a complex root of the minimal polynomial for $\alpha$ over ${\bf Q}(x_1,\ldots,x_m)$ after replacing each $x_i$ with the complex number $z_i$, we obtain a field isomorphism $\phi: k \rightarrow \phi(k)$ with the required properties.
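As a toy instance of this construction (my own example, not taken from the argument above): the field $k = {\bf Q}(x)(\alpha)$ with $\alpha^2 = x$ has transcendence degree one, and one admissible choice of embedding is

```latex
\phi(x) := \pi \quad (\text{any transcendental complex number}), \qquad
\phi(\alpha) := \sqrt{\pi} \quad (\text{a complex root of } T^2 - \phi(x)).
```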

Now we give the latter proof. Let $x_1,\ldots,x_m$ be elements of $k$ that generate that field over ${\bf Q}$, but which are not necessarily algebraically independent. Our task is then equivalent to that of finding complex numbers $z_1,\ldots,z_m$ with the property that, for any polynomial $P$ with rational coefficients, one has
$$ P(x_1,\ldots,x_m) = 0$$
if and only if
$$ P(z_1,\ldots,z_m) = 0.$$

Let $I$ be the collection of all polynomials $P$ with rational coefficients with $P(x_1,\ldots,x_m) = 0$, and let $\tilde I$ be the collection of all polynomials $P$ with rational coefficients with $P(x_1,\ldots,x_m) \neq 0$. The set
$$ V := \{ (z_1,\ldots,z_m) \in {\bf C}^m: P(z_1,\ldots,z_m) = 0 \hbox{ for all } P \in I \}$$
is the intersection of countably many algebraic sets and is thus also an algebraic set (by the Hilbert basis theorem or the Noetherian property of algebraic sets). If the desired claim failed, then $V$ could be covered by the algebraic sets $\{ P = 0 \}$ for $P \in \tilde I$. By decomposing $V$ into irreducible varieties and observing (e.g. from the Baire category theorem) that a variety of a given dimension over ${\bf C}$ cannot be covered by countably many varieties of smaller dimension, we conclude that $V$ must in fact be covered by a finite number of such sets, thus
$$ V \subset \{ P_1 = 0 \} \cup \ldots \cup \{ P_k = 0 \}$$

for some $P_1,\ldots,P_k \in \tilde I$. By the nullstellensatz, we thus have an identity of the form
$$ (P_1 \cdots P_k)^r = Q_1 S_1 + \ldots + Q_l S_l$$
for some natural numbers $r, l$, polynomials $Q_1,\ldots,Q_l$ with rational coefficients, and polynomials $S_1,\ldots,S_l$ in $I$. In particular, this identity also holds in $k$ (and its algebraic closure). Evaluating this identity at $(x_1,\ldots,x_m)$, we see that the right-hand side is zero (as each $S_j$ lies in $I$) but the left-hand side is non-zero (as each $P_i$ lies in $\tilde I$), a contradiction, and the claim follows.

From Proposition 4 one can now deduce Theorem 3 by a routine ultraproduct argument (the same one used in these previous blog posts). Suppose for contradiction that Theorem 3 fails. Then there exists a natural number $s$, a sequence $F_n$ of finite fields of characteristic at least $n$, and subsets $A_n$ of $F_n$ of cardinality at most $s$, such that for each $n$, there does not exist a Freiman field isomorphism of order $s$ from $A_n$ to the complex numbers. Now we select a non-principal ultrafilter $\alpha$, and construct the ultraproduct $F := \prod_{n \rightarrow \alpha} F_n$ of the finite fields $F_n$. This is again a field (and is a basic example of what is known as a pseudo-finite field); because the characteristic of $F_n$ goes to infinity as $n \rightarrow \infty$, it is easy to see (using Łoś's theorem) that $F$ has characteristic zero and can thus be viewed as an extension of the rationals ${\bf Q}$.

Now let $A$ be the ultralimit of the $A_n$ (so that $A$ is the ultraproduct $\prod_{n \rightarrow \alpha} A_n$ of the $A_n$); then $A$ is a subset of $F$ of cardinality at most $s$. In particular, if $k$ is the field generated by ${\bf Q}$ and $A$, then $k$ is a finitely generated extension of the rationals and thus, by Proposition 4, there is an isomorphism $\phi: k \rightarrow \phi(k)$ from $k$ to a subfield of the complex numbers. In particular, the $\phi(a)$ for $a \in A$ are complex numbers, and for any elements $a_1,\ldots,a_n$ of $A$ and any polynomial $P$ with integer coefficients, one has
$$ P(a_1,\ldots,a_n) = 0$$
if and only if
$$ P(\phi(a_1),\ldots,\phi(a_n)) = 0.$$

By Łoś's theorem, we then conclude that for all $n$ sufficiently close to $\alpha$, the following holds: for all polynomials $P$ of degree at most $s$ and whose coefficients are integers whose magnitudes sum up to at most $s$, one has
$$ P(a_{1,n},\ldots,a_{m,n}) = 0$$
if and only if
$$ P(\phi(a_1),\ldots,\phi(a_m)) = 0$$
(where each element $a_{i,n}$ of $A_n$ has ultralimit $a_i \in A$).

But this gives a Freiman field isomorphism of order $s$ between $A_n$ and $\phi(A)$, contradicting the construction of $A_n$, and Theorem 3 follows.

Given a function $f: X \rightarrow Y$ between two sets $X, Y$, we can form the graph
$$ \Sigma := \{ (x, f(x)): x \in X \},$$
which is a subset of the Cartesian product $X \times Y$.

There are a number of "closed graph theorems" in mathematics which relate the regularity properties of the function $f$ with the closure properties of the graph $\Sigma$, assuming some "completeness" properties of the domain $X$ and range $Y$. The most famous of these is the closed graph theorem from functional analysis, which I phrase as follows:

Theorem 1 (Closed graph theorem (functional analysis)) Let $X, Y$ be complete normed vector spaces over the reals (i.e. Banach spaces). Then a function $f: X \rightarrow Y$ is a continuous linear transformation if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is both linearly closed (i.e. it is a linear subspace of $X \times Y$) and topologically closed (i.e. closed in the product topology of $X \times Y$).

I like to think of this theorem as linking together qualitative and quantitative notions of regularity preservation properties of an operator $f$; see this blog post for further discussion.

The theorem is equivalent to the assertion that any continuous linear bijection $T: X \rightarrow Y$ from one Banach space to another is necessarily an isomorphism, in the sense that the inverse map $T^{-1}: Y \rightarrow X$ is also continuous and linear. Indeed, to see that this claim implies the closed graph theorem, one applies it to the projection from $\Sigma$ to $X$, which is a continuous linear bijection; conversely, to deduce this claim from the closed graph theorem, observe that the graph of the inverse $T^{-1}$ is the reflection of the graph of $T$. As such, the closed graph theorem is a corollary of the open mapping theorem, which asserts that any continuous linear *surjection* from one Banach space to another is open. (Conversely, one can deduce the open mapping theorem from the closed graph theorem by quotienting out the kernel of the continuous surjection to get a bijection.)

It turns out that there is a closed graph theorem (or equivalent reformulations of that theorem, such as an assertion that bijective morphisms between sufficiently “complete” objects are necessarily isomorphisms, or as an open mapping theorem) in many other categories in mathematics as well. Here are some easy ones:

Theorem 2 (Closed graph theorem (linear algebra)) Let $X, Y$ be vector spaces over a field $k$. Then a function $f: X \rightarrow Y$ is a linear transformation if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is linearly closed.

Theorem 3 (Closed graph theorem (group theory)) Let $X, Y$ be groups. Then a function $f: X \rightarrow Y$ is a group homomorphism if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is closed under the group operations (i.e. it is a subgroup of $X \times Y$).

Theorem 4 (Closed graph theorem (order theory)) Let $X, Y$ be totally ordered sets. Then a function $f: X \rightarrow Y$ is monotone increasing if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is totally ordered (using the product order on $X \times Y$).
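The order-theoretic statement is small enough to verify exhaustively; the following Python sketch (a toy check of my own, on the three-element chain) runs over all $27$ maps from $\{0,1,2\}$ to itself:

```python
from itertools import product

def monotone(f):
    """f (a tuple of values) is monotone increasing on the chain {0,1,2}."""
    return all(f[x] <= f[y] for x in range(3) for y in range(3) if x <= y)

def graph_totally_ordered(f):
    """The graph {(x, f(x))} is a chain under the product order on pairs."""
    graph = [(x, f[x]) for x in range(3)]
    return all((a[0] <= b[0] and a[1] <= b[1]) or (b[0] <= a[0] and b[1] <= a[1])
               for a in graph for b in graph)

# Theorem 4 for X = Y = {0,1,2}: monotonicity <=> totally ordered graph.
assert all(monotone(f) == graph_totally_ordered(f)
           for f in product(range(3), repeat=3))
```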

Remark 1 Similar results to the above three theorems (with similarly easy proofs) hold for other algebraic structures, such as rings (using the usual product of rings), modules, algebras, or Lie algebras, groupoids, or even categories (a map between categories is a functor iff its graph is again a category). (ADDED IN VIEW OF COMMENTS: further examples include affine spaces and $G$-sets (sets with an action of a given group $G$).) There are also various approximate versions of these theorems that are useful in arithmetic combinatorics, relating the property of a map being an "approximate homomorphism" in some sense with its graph being an "approximate group" in some sense. This is particularly useful for this subfield of mathematics because there are currently more theorems about approximate groups than about approximate homomorphisms, so that one can profitably use closed graph theorems to transfer results about the former to results about the latter.

A slightly more sophisticated result in the same vein:

Theorem 5 (Closed graph theorem (point set topology)) Let $X, Y$ be compact Hausdorff spaces. Then a function $f: X \rightarrow Y$ is continuous if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is topologically closed.

Indeed, the "only if" direction is easy, while for the "if" direction, note that if $\Sigma$ is a closed subset of $X \times Y$, then it is compact Hausdorff, and the projection map from $\Sigma$ to $X$ is then a bijective continuous map between compact Hausdorff spaces, which is then closed, thus open, and hence a homeomorphism, giving the claim.

Note that the compactness hypothesis is necessary: for instance, the function $f: {\bf R} \rightarrow {\bf R}$ defined by $f(x) := 1/x$ for $x \neq 0$ and $f(0) := 0$ is a function which has a closed graph, but is discontinuous.

A similar result (but relying on a much deeper theorem) is available in algebraic geometry, as I learned after asking this MathOverflow question:

Theorem 6 (Closed graph theorem (algebraic geometry)) Let $X, Y$ be normal projective varieties over an algebraically closed field $k$ of characteristic zero. Then a function $f: X \rightarrow Y$ is a regular map if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is Zariski-closed.

*Proof:* (Sketch) For the only if direction, note that the map $x \mapsto (x, f(x))$ is a regular map from the projective variety $X$ to the projective variety $X \times Y$ and is thus a projective morphism, hence is proper. In particular, the image $\Sigma$ of $X$ under this map is Zariski-closed.

Conversely, if $\Sigma$ is Zariski-closed, then it is also a projective variety, and the projection $(x,y) \mapsto x$ is a projective morphism from $\Sigma$ to $X$, which is clearly quasi-finite; by the characteristic zero hypothesis, it is also separable. Applying (Grothendieck's form of) Zariski's main theorem, this projection is the composition of an open immersion and a finite map. As projective varieties are complete, the open immersion is an isomorphism, and so the projection from $\Sigma$ to $X$ is finite. Being injective and separable, the degree of this finite map must be one, and hence (by normality of $X$) the projection is an isomorphism from $\Sigma$ to $X$, which makes the inverse map $x \mapsto (x, f(x))$ from $X$ to $\Sigma$ regular, which makes $f$ regular.

The counterexample of the map $f: {\bf C} \rightarrow {\bf C}$ given by $f(x) := 1/x$ for $x \neq 0$ and $f(0) := 0$ demonstrates why the projective hypothesis is necessary. The necessity of the normality condition (or more precisely, a weak normality condition) is demonstrated by (the projective version of) the map $(t^2, t^3) \mapsto t$ from the cuspidal curve $\{ (x,y): y^2 = x^3 \}$ to the line. (If one restricts attention to smooth varieties, though, normality becomes automatic.) The necessity of characteristic zero is demonstrated by (the projective version of) the inverse of the Frobenius map $x \mapsto x^p$ on a field of characteristic $p$.

There are also a number of closed graph theorems for topological groups, of which the following is typical (see Exercise 3 of these previous blog notes):

Theorem 7 (Closed graph theorem (topological group theory)) Let $X, Y$ be $\sigma$-compact, locally compact Hausdorff groups. Then a function $f: X \rightarrow Y$ is a continuous homomorphism if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is both group-theoretically closed and topologically closed.

The hypotheses of being $\sigma$-compact, locally compact, and Hausdorff can be relaxed somewhat, but I doubt that they can be eliminated entirely (though I do not have a ready counterexample for this).

In several complex variables, it is a classical theorem (see e.g. Lemma 4 of this blog post) that a holomorphic function from a domain in ${\bf C}^n$ to ${\bf C}^n$ is locally injective if and only if it is a local diffeomorphism (i.e. its derivative is everywhere non-singular). This leads to a closed graph theorem for complex manifolds:

Theorem 8 (Closed graph theorem (complex manifolds)) Let $X, Y$ be complex manifolds. Then a function $f: X \rightarrow Y$ is holomorphic if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is a complex manifold (using the complex structure inherited from $X \times Y$) of the same dimension as $X$.

Indeed, one applies the previous observation to the projection from $\Sigma$ to $X$. The dimension requirement is needed, as can be seen from the example of the map $f: {\bf C} \rightarrow {\bf C}$ defined by $f(z) := 1/z$ for $z \neq 0$ and $f(0) := 0$, whose graph has components of unequal dimension.

(ADDED LATER:) There is a real analogue to the above theorem:

Theorem 9 (Closed graph theorem (real manifolds)) Let $X, Y$ be real manifolds. Then a function $f: X \rightarrow Y$ is continuous if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is a real manifold of the same dimension as $X$.

This theorem can be proven by applying invariance of domain (discussed in this previous post) to the projection of $\Sigma$ to $X$, to show that this projection is open if $\Sigma$ has the same dimension as $X$.

Note though that the analogous claim for *smooth* real manifolds fails: the function $f: {\bf R} \rightarrow {\bf R}$ defined by $f(x) := x^{1/3}$ has a smooth graph, but is not itself smooth.
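Numerically (a quick illustration of my own): the graph is the image of the smooth curve $t \mapsto (t^3, t)$, but the difference quotients of $f(x) = x^{1/3}$ itself blow up at the origin:

```python
def f(x):
    """Real cube root of x."""
    return abs(x) ** (1 / 3) * (1 if x >= 0 else -1)

# The smooth parametrisation t -> (t**3, t) traces out the graph of f...
assert all(abs(f(t ** 3) - t) < 1e-12 for t in (-0.5, 0.0, 0.7))

# ...but the difference quotients (f(h) - f(0))/h = h^(-2/3) diverge as
# h -> 0, so f is not differentiable (let alone smooth) at the origin.
quotients = [f(h) / h for h in (1e-3, 1e-6, 1e-9)]
assert quotients[0] < quotients[1] < quotients[2]
```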

(ADDED YET LATER:) Here is an easy closed graph theorem in the symplectic category:

Theorem 10 (Closed graph theorem (symplectic geometry)) Let $X = (X, \omega_X)$ and $Y = (Y, \omega_Y)$ be smooth symplectic manifolds of the same dimension. Then a smooth map $f: X \rightarrow Y$ is a symplectic morphism (i.e. $f^* \omega_Y = \omega_X$) if and only if the graph $\Sigma := \{ (x, f(x)): x \in X \}$ is a Lagrangian submanifold of $X \times Y$ with the symplectic form $\omega_X \oplus -\omega_Y$.
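In the simplest linear case (a standard computation, included here for illustration): take $X = Y = {\bf R}^2$ with $\omega = dx \wedge dy$ and $f(v) = Mv$ linear. The graph $\{ (v, Mv) \}$ is Lagrangian for $\omega \oplus -\omega$ precisely when

```latex
(\omega \oplus -\omega)\big( (v_1, M v_1), (v_2, M v_2) \big)
  = \omega(v_1, v_2) - \omega(M v_1, M v_2) = 0
  \quad \text{for all } v_1, v_2,
```

i.e. when $\omega(M v_1, M v_2) = \omega(v_1, v_2)$, which in two dimensions is exactly the condition $\det M = 1$, i.e. $f^* \omega = \omega$.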

In view of the symplectic rigidity phenomenon, it is likely that the smoothness hypotheses on $f$ can be relaxed substantially, but I will not try to formulate such a result here.

There are presumably many further examples of closed graph theorems (or closely related theorems, such as criteria for inverting a morphism, or open mapping type theorems) throughout mathematics; I would be interested to know of further examples.
