
A *capset* in the vector space ${\bf F}_3^n$ over the finite field ${\bf F}_3$ of three elements is a subset $A$ of ${\bf F}_3^n$ that does not contain any lines $\{x, x+r, x+2r\}$, where $x, r \in {\bf F}_3^n$ and $r \neq 0$. A basic problem in additive combinatorics (discussed in one of the very first posts on this blog) is to obtain good upper and lower bounds for the maximal size of a capset in ${\bf F}_3^n$.

Trivially, one has $|A| \leq 3^n$. Using Fourier methods (and the density increment argument of Roth), the bound of $|A| \leq O(3^n/n)$ was obtained by Meshulam, and improved only as late as 2012 to $|A| \leq O(3^n/n^{1+c})$ for some absolute constant $c > 0$ by Bateman and Katz. But in a very recent breakthrough, Ellenberg (and independently Gijswijt) obtained the exponentially superior bound $|A| \leq O(2.756^n)$, using a version of the polynomial method recently introduced by Croot, Lev, and Pach. (In the converse direction, a construction of Edel gives capsets as large as $2.2174^n$.) Given the success of the polynomial method in superficially similar problems such as the finite field Kakeya problem (discussed in this previous post), it was natural to wonder whether this method could be applicable to the cap set problem (see for instance this MathOverflow comment of mine on this from 2010), but it took a surprisingly long time before Croot, Lev, and Pach were able to identify the precise variant of the polynomial method that would actually work here.
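As a quick sanity check of the small-$n$ values (an illustration of mine, not from the post), one can brute-force the maximal capset size for tiny $n$, using the fact that three distinct points of ${\bf F}_3^n$ form a line precisely when they sum to zero; the sketch below recovers the known maximal size $4$ in ${\bf F}_3^2$.

```python
from itertools import combinations, product

def max_capset_size(n):
    """Brute-force the largest capset in F_3^n (feasible only for tiny n)."""
    points = list(product(range(3), repeat=n))

    def is_capset(S):
        # Over F_3, three distinct points x, y, z form a line
        # {x, x+r, x+2r} with r != 0 if and only if x + y + z = 0.
        return not any(
            all((a + b + c) % 3 == 0 for a, b, c in zip(x, y, z))
            for x, y, z in combinations(S, 3))

    # Search subset sizes from largest to smallest.
    for k in range(len(points), 0, -1):
        if any(is_capset(S) for S in combinations(points, k)):
            return k

size = max_capset_size(2)
```

Already for $n = 3$ the search space is large, so this is purely a toy verification of the trivial range between the lower and upper bounds above.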

The proof of the capset bound is very short (Ellenberg's and Gijswijt's preprints are both 3 pages long, and Croot-Lev-Pach is 6 pages), but I thought I would present a slight reformulation of the argument which treats the three points on a line in ${\bf F}_3^n$ symmetrically (as opposed to treating the third point differently from the first two, as is done in the Ellenberg and Gijswijt papers; Croot-Lev-Pach also treat the middle point of a three-term arithmetic progression differently from the two endpoints, although this is a very natural thing to do in their context of ${\bf Z}_4^n$). The basic starting point is this: if $A$ is a capset, then one has the identity

$$\delta_{0^n}(x+y+z) = \sum_{a \in A} \delta_a(x)\, \delta_a(y)\, \delta_a(z) \quad (1)$$

for all $x, y, z \in A$, where $\delta_a(x) := 1_{x=a}$ is the Kronecker delta function, which we view as taking values in ${\bf F}_3$. Indeed, (1) reflects the fact that the equation $x + y + z = 0$ has solutions in $A$ precisely when $x, y, z$ are either all equal, or form a line, and the latter is ruled out precisely when $A$ is a capset.

To exploit (1), we will show that the left-hand side of (1) is "low rank" in some sense, while the right-hand side is "high rank". Recall that a function $F: A \times B \rightarrow {\bf F}$ taking values in a field ${\bf F}$ is of *rank one* if it is non-zero and of the form $(x,y) \mapsto f(x) g(y)$ for some $f: A \rightarrow {\bf F}$ and $g: B \rightarrow {\bf F}$, and that the rank of a general function $F: A \times B \rightarrow {\bf F}$ is the least number of rank one functions needed to express $F$ as a linear combination. More generally, if $k \geq 2$, we define the *rank* of a function $F: A_1 \times \dots \times A_k \rightarrow {\bf F}$ to be the least number of "rank one" functions of the form

$$(x_1, \dots, x_k) \mapsto f(x_i)\, g(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_k)$$

for some $i \in \{1, \dots, k\}$ and some functions $f: A_i \rightarrow {\bf F}$, $g: A_1 \times \dots \times A_{i-1} \times A_{i+1} \times \dots \times A_k \rightarrow {\bf F}$, that are needed to generate $F$ as a linear combination. For instance, when $k = 3$, the rank one functions take the form $(x,y,z) \mapsto f(x) g(y,z)$, $(x,y,z) \mapsto f(y) g(x,z)$, $(x,y,z) \mapsto f(z) g(x,y)$, and linear combinations of $r$ such rank one functions will give a function of rank at most $r$.

It is a standard fact in linear algebra that the rank of a diagonal matrix is equal to the number of non-zero entries. This phenomenon extends to higher dimensions:

Lemma 1 (Rank of diagonal hypermatrices) Let $k \geq 2$, let $A$ be a finite set, let ${\bf F}$ be a field, and for each $a \in A$, let $c_a \in {\bf F}$ be a coefficient. Then the rank of the function

$$(x_1, \dots, x_k) \mapsto \sum_{a \in A} c_a \delta_a(x_1) \cdots \delta_a(x_k) \quad (2)$$

is equal to the number of non-zero coefficients $c_a$.

*Proof:* We induct on $k$. As mentioned above, the case $k = 2$ follows from standard linear algebra, so suppose now that $k > 2$ and the claim has already been proven for $k - 1$.

It is clear that the function (2) has rank at most equal to the number of non-zero $c_a$ (since the summands on the right-hand side are rank one functions), so it suffices to establish the lower bound. By deleting from $A$ those elements $a$ with $c_a = 0$ (which cannot increase the rank), we may assume without loss of generality that all the $c_a$ are non-zero. Now suppose for contradiction that (2) has rank at most $|A| - 1$, then we obtain a representation

$$\sum_{a \in A} c_a \delta_a(x_1) \cdots \delta_a(x_k) = \sum_{i=1}^k \sum_{\alpha \in I_i} f_{i,\alpha}(x_i)\, g_{i,\alpha}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_k) \quad (3)$$

for some sets $I_1, \dots, I_k$ of cardinalities adding up to at most $|A| - 1$, and some functions $f_{i,\alpha}: A \rightarrow {\bf F}$ and $g_{i,\alpha}: A^{k-1} \rightarrow {\bf F}$.

Consider the space of functions $h: A \rightarrow {\bf F}$ that are orthogonal to all the $f_{k,\alpha}$, $\alpha \in I_k$, in the sense that

$$\sum_{x \in A} f_{k,\alpha}(x)\, h(x) = 0$$

for all $\alpha \in I_k$. This space is a vector space whose dimension $d$ is at least $|A| - |I_k|$. A basis of this space generates a $d \times |A|$ coordinate matrix of full rank, which implies that there is at least one non-singular $d \times d$ minor. This implies that there exists a function $h: A \rightarrow {\bf F}$ in this space which is nowhere vanishing on some subset $A'$ of $A$ of cardinality at least $|A| - |I_k|$.

If we multiply (3) by $h(x_k)$ and sum in $x_k$, we conclude that

$$\sum_{a \in A} c_a h(a)\, \delta_a(x_1) \cdots \delta_a(x_{k-1}) = \sum_{i=1}^{k-1} \sum_{\alpha \in I_i} f_{i,\alpha}(x_i)\, \tilde g_{i,\alpha}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_{k-1})$$

where

$$\tilde g_{i,\alpha}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_{k-1}) := \sum_{x_k \in A} g_{i,\alpha}(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_k)\, h(x_k),$$

the terms with $i = k$ having vanished thanks to the orthogonality of $h$ to the $f_{k,\alpha}$.

The right-hand side has rank at most $|A| - 1 - |I_k|$, since the summands are rank one functions. On the other hand, since $h$ is non-vanishing on at least $|A| - |I_k|$ elements of $A$, at least $|A| - |I_k|$ of the coefficients $c_a h(a)$ are non-zero, so from the induction hypothesis the left-hand side has rank at least $|A| - |I_k|$, giving the required contradiction.
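As a trivial numerical illustration (mine, not from the post) of the $k = 2$ base case used in this induction, a diagonal matrix with three non-zero entries row-reduces to exactly three pivots:

```python
def gaussian_rank(M):
    """Rank of a matrix of floats, computed by Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        # Find a pivot row for this column below the current rank.
        pivot = next((r for r in range(rank, rows) if abs(M[r][c]) > 1e-12), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank and abs(M[r][c]) > 1e-12:
                f = M[r][c] / M[rank][c]
                for j in range(cols):
                    M[r][j] -= f * M[rank][j]
        rank += 1
    return rank

# Diagonal matrix with diagonal (5, 0, 2, 0, 1): three non-zero entries.
c = [5.0, 0.0, 2.0, 0.0, 1.0]
D = [[c[i] if i == j else 0.0 for j in range(5)] for i in range(5)]
rank = gaussian_rank(D)
```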

On the other hand, we have the following (symmetrised version of a) beautifully simple observation of Croot, Lev, and Pach:

Lemma 2 (Low rank) Let $N$ denote the number of tuples $(a_1, \dots, a_n) \in \{0,1,2\}^n$ with $a_1 + \dots + a_n \leq 2n/3$. Then the function $(x,y,z) \mapsto \delta_{0^n}(x+y+z)$ on ${\bf F}_3^n \times {\bf F}_3^n \times {\bf F}_3^n$ has rank at most $3N$.

*Proof:* Using the identity $\delta_0(t) = 1 - t^2$ for $t \in {\bf F}_3$, we have

$$\delta_{0^n}(x+y+z) = \prod_{i=1}^n \left( 1 - (x_i + y_i + z_i)^2 \right).$$

The right-hand side is clearly a polynomial of degree $2n$ in $x, y, z$, which is then a linear combination of monomials

$$x_1^{a_1} \cdots x_n^{a_n}\, y_1^{b_1} \cdots y_n^{b_n}\, z_1^{c_1} \cdots z_n^{c_n}$$

with $a_1, \dots, a_n, b_1, \dots, b_n, c_1, \dots, c_n \in \{0,1,2\}$ with

$$a_1 + \dots + a_n + b_1 + \dots + b_n + c_1 + \dots + c_n \leq 2n.$$

In particular, from the pigeonhole principle, at least one of $a_1 + \dots + a_n$, $b_1 + \dots + b_n$, $c_1 + \dots + c_n$ is at most $2n/3$.

Consider the contribution of the monomials for which $a_1 + \dots + a_n \leq 2n/3$. We can regroup this contribution as

$$\sum_{\alpha} f_\alpha(x)\, g_\alpha(y, z)$$

where $\alpha$ ranges over those $(a_1, \dots, a_n) \in \{0,1,2\}^n$ with $a_1 + \dots + a_n \leq 2n/3$, $f_\alpha: {\bf F}_3^n \rightarrow {\bf F}_3$ is the monomial

$$f_\alpha(x) := x_1^{a_1} \cdots x_n^{a_n}$$

and $g_\alpha: {\bf F}_3^n \times {\bf F}_3^n \rightarrow {\bf F}_3$ is some explicitly computable function whose exact form will not be of relevance to our argument. The number of such $\alpha$ is equal to $N$, so this contribution has rank at most $N$. The remaining contributions arising from the cases $b_1 + \dots + b_n \leq 2n/3$ and $c_1 + \dots + c_n \leq 2n/3$ similarly have rank at most $N$ (grouping the monomials so that each monomial is only counted once), so the claim follows.

Upon restricting from ${\bf F}_3^n$ to $A$, the rank of $(x,y,z) \mapsto \delta_{0^n}(x+y+z)$ is still at most $3N$. The two lemmas then combine to give the Ellenberg-Gijswijt bound

$$|A| \leq 3N.$$

All that remains is to compute the asymptotic behaviour of $N$. This can be done using the general tool of Cramér's theorem, but can also be derived from Stirling's formula (discussed in this previous post). Indeed, if the number of indices $i$ with $a_i = 0, 1, 2$ is $(\alpha + o(1)) n$, $(\beta + o(1)) n$, $(\gamma + o(1)) n$ respectively, for some $\alpha, \beta, \gamma \geq 0$ summing to $1$, Stirling's formula gives

$$\binom{n}{\alpha n, \beta n, \gamma n} = \exp\left( (h(\alpha, \beta, \gamma) + o(1))\, n \right)$$

where $h$ is the entropy function

$$h(\alpha, \beta, \gamma) := \alpha \log \frac{1}{\alpha} + \beta \log \frac{1}{\beta} + \gamma \log \frac{1}{\gamma}.$$

We then have

$$N = \exp\left( (X + o(1))\, n \right)$$

where $X$ is the maximum entropy $h(\alpha, \beta, \gamma)$ subject to the constraints

$$\alpha + \beta + \gamma = 1; \quad \alpha, \beta, \gamma \geq 0; \quad \beta + 2\gamma \leq \frac{2}{3}.$$

A routine Lagrange multiplier computation shows that the maximum occurs when

and $e^X$ is approximately $2.755$, giving rise to the claimed bound of $O(2.756^n)$.
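One can also see this growth rate numerically rather than asymptotically. The quantity $N$ (the number of exponent vectors in $\{0,1,2\}^n$ of total degree at most $2n/3$) is exactly a partial coefficient sum of $(1+x+x^2)^n$, so it can be computed exactly; the following sketch (an illustration of mine, not from the post) evaluates $N^{1/n}$ for a moderate $n$, which creeps up towards the limiting value $\approx 2.755$ from below (convergence is slow because of polynomial-in-$n$ factors):

```python
from math import floor

def monomial_count(n):
    # coeffs[j] = #{(a_1,...,a_n) in {0,1,2}^n : a_1 + ... + a_n = j},
    # i.e. the coefficients of (1 + x + x^2)^n.
    coeffs = [1]
    for _ in range(n):
        new = [0] * (len(coeffs) + 2)
        for j, c in enumerate(coeffs):
            new[j] += c
            new[j + 1] += c
            new[j + 2] += c
        coeffs = new
    # N = number of exponent vectors of total degree at most 2n/3.
    return sum(coeffs[:floor(2 * n / 3) + 1])

n = 300
rate = monomial_count(n) ** (1.0 / n)  # approaches ~2.755 from below
```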

Remark 3 As noted in the Ellenberg and Gijswijt papers, the above argument extends readily to other fields than ${\bf F}_3$, to control the maximal size of a subset of ${\bf F}_q^n$ that has no non-trivial solutions to the equation $c_1 x_1 + \dots + c_k x_k = 0$, where $c_1, \dots, c_k$ are non-zero constants that sum to zero. Of course one replaces the identity $\delta_0(t) = 1 - t^2$ in Lemma 2 by $\delta_0(t) = 1 - t^{q-1}$ in this case.

Remark 4 This symmetrised formulation suggests one possible way to improve slightly on the numerical quantity $2.756$: find a more efficient way to decompose the left-hand side of (1) into rank one functions. However, I was not able to do so (though such improvements are reminiscent of the Strassen type algorithms for fast matrix multiplication).

Remark 5 It is tempting to see if this method can get non-trivial upper bounds for sets $A$ with no length four progressions, in (say) ${\bf F}_5^n$. One can run the above arguments, replacing the function $(x,y,z) \mapsto \delta_{0^n}(x+y+z)$ with

this leads to the bound where

Unfortunately, is asymptotic to and so this bound is in fact slightly worse than the trivial bound ! However, there is a slim chance that there is a more efficient way to decompose into rank one functions that would give a non-trivial bound on . I experimented with a few possible such decompositions but unfortunately without success.

Remark 6 Return now to the capset problem. Since Lemma 1 is valid for any field ${\bf F}$, one could perhaps hope to get better bounds by viewing the Kronecker delta function as taking values in another field than ${\bf F}_3$, such as the complex numbers ${\bf C}$. However, as soon as one works in a field of characteristic other than $3$, one can adjoin a cube root of unity $\omega$, and one now has the Fourier decomposition

$$\delta_{0^n}(x+y+z) = \frac{1}{3^n} \sum_{\xi \in {\bf F}_3^n} \omega^{\xi \cdot (x+y+z)}.$$

Moving to the Fourier basis, we conclude from Lemma 1 that the function $(x,y,z) \mapsto \delta_{0^n}(x+y+z)$ on ${\bf F}_3^n$ now has rank exactly $3^n$, and so one cannot improve upon the trivial bound of $|A| \leq 3^n$ by this method using fields of characteristic other than three as the range field. So it seems one has to stick with ${\bf F}_3$ (or the algebraic completion thereof).

Thanks to Jordan Ellenberg and Ben Green for helpful discussions.

Let ${\bf F}_q$ be a finite field of order $q$, and let $C$ be an absolutely irreducible smooth projective curve defined over ${\bf F}_q$ (and hence over the algebraic closure $\overline{{\bf F}_q}$ of that field). For instance, $C$ could be the projective elliptic curve

$$C = \{ [x, y, z] : y^2 z = x^3 + a x z^2 + b z^3 \}$$

in the projective plane ${\bf P}^2$, where $a, b \in {\bf F}_q$ are coefficients whose discriminant $-16(4a^3 + 27b^2)$ is non-vanishing, which is the projective version of the affine elliptic curve

$$\{ (x,y) : y^2 = x^3 + a x + b \}.$$

To each such curve one can associate a genus $g$, which we will define later; for instance, elliptic curves have genus $1$. We can also count the cardinality $|C({\bf F}_q)|$ of the set of ${\bf F}_q$-points of $C$. The *Hasse-Weil bound* relates the two:

Theorem 1 (Hasse-Weil bound) $| |C({\bf F}_q)| - q - 1 | \leq 2 g \sqrt{q}$.

The usual proofs of this bound proceed by first establishing a trace formula of the form

$$|C({\bf F}_{q^n})| = q^n - \alpha_1^n - \dots - \alpha_{2g}^n + 1 \quad (1)$$

for some complex numbers $\alpha_1, \dots, \alpha_{2g}$ independent of $n$; this is in fact a special case of the Lefschetz-Grothendieck trace formula, and can be interpreted as an assertion that the zeta function associated to the curve $C$ is rational. The task is then to establish a bound $|\alpha_i| \leq \sqrt{q}$ for all $i = 1, \dots, 2g$; this (or more precisely, the slightly stronger assertion $|\alpha_i| = \sqrt{q}$) is the Riemann hypothesis for such curves. This can be done either by passing to the Jacobian variety of $C$ and using a certain duality available on the cohomology of such varieties, known as Rosati involution; alternatively, one can pass to the product surface $C \times C$ and apply the Riemann-Roch theorem for that surface.

In 1969, Stepanov introduced an elementary method (a version of what is now known as the polynomial method) to count (or at least to upper bound) the quantity . The method was initially restricted to hyperelliptic curves, but was soon extended to general curves. In particular, Bombieri used this method to give a short proof of the following weaker version of the Hasse-Weil bound:

Theorem 2 (Weak Hasse-Weil bound) If $q$ is a perfect square, and $q \geq (g+1)^4$, then $|C({\bf F}_q)| \leq q + (2g+1) \sqrt{q} + 1$.

In fact, the bound on can be sharpened a little bit further, as we will soon see.
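As a concrete numerical illustration (mine, not from the post) of the genus one case of the Hasse-Weil bound, $||C({\bf F}_q)| - q - 1| \leq 2\sqrt{q}$, one can count the points of an elliptic curve by brute force; the prime and the coefficients below are arbitrary choices, and the function name is mine:

```python
def elliptic_curve_points(a, b, p):
    # |C(F_p)| for the projective curve y^2 z = x^3 + a x z^2 + b z^3:
    # the affine points of y^2 = x^3 + a x + b, plus one point at infinity.
    sq_counts = [0] * p
    for y in range(p):
        sq_counts[y * y % p] += 1  # number of square roots of each residue
    return 1 + sum(sq_counts[(x * x * x + a * x + b) % p] for x in range(p))

p, a, b = 101, 2, 3
assert (4 * a ** 3 + 27 * b ** 2) % p != 0  # non-vanishing discriminant
N = elliptic_curve_points(a, b, p)
```

By Hasse's theorem the count `N` must lie within $2\sqrt{101} \approx 20.1$ of $102$.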

Theorem 2 is only an upper bound on $|C({\bf F}_q)|$, but there is a Galois-theoretic trick to convert (a slight generalisation of) this upper bound to a matching lower bound, and if one then uses the trace formula (1) (and the "tensor power trick" of sending $n$ to infinity to control the weights $\alpha_i$) one can then recover the full Hasse-Weil bound. We discuss these steps below the fold.

I’ve discussed Bombieri’s proof of Theorem 2 in this previous post (in the special case of hyperelliptic curves), but now wish to present the full proof, with some minor simplifications from Bombieri’s original presentation; it is mostly elementary, with the deepest fact from algebraic geometry needed being Riemann’s inequality (a weak form of the Riemann-Roch theorem).

The first step is to reinterpret $|C({\bf F}_q)|$ as the number of points of intersection between two curves in the surface $C \times C$. Indeed, if we define the Frobenius endomorphism $\hbox{Frob}_q$ on any projective space by

$$\hbox{Frob}_q( [x_0, \dots, x_m] ) := [x_0^q, \dots, x_m^q]$$

then this map preserves the curve $C$, and the fixed points of this map are precisely the ${\bf F}_q$-points of $C$:

$$C({\bf F}_q) = \{ x \in C : \hbox{Frob}_q(x) = x \}.$$

Thus one can interpret as the number of points of intersection between the diagonal curve

and the Frobenius graph

which are copies of inside . But we can use the additional hypothesis that is a perfect square to write this more symmetrically, by taking advantage of the fact that the Frobenius map has a square root

with also preserving . One can then also interpret as the number of points of intersection between the curve

Let be the field of rational functions on (with coefficients in ), and define , , and analogously (although is likely to be disconnected, so will just be a ring rather than a field). We then (morally) have the commuting square

if we ignore the issue that a rational function on, say, , might blow up on all of and thus not have a well-defined restriction to . We use and to denote the restriction maps. Furthermore, we have obvious isomorphisms , coming from composing with the graphing maps and .

The idea now is to find a rational function on the surface of controlled degree which vanishes when restricted to , but is non-vanishing (and not blowing up) when restricted to . On , we thus get a non-zero rational function of controlled degree which vanishes on – which then lets us bound the cardinality of in terms of the degree of . (In Bombieri’s original argument, one required vanishing to high order on the side, but in our presentation, we have factored out a term which removes this high order vanishing condition.)

To find this , we will use linear algebra. Namely, we will locate a finite-dimensional subspace of (consisting of certain “controlled degree” rational functions) which projects injectively to , but whose projection to has strictly smaller dimension than itself. The rank-nullity theorem then forces the existence of a non-zero element of whose projection to vanishes, but whose projection to is non-zero.

Now we build . Pick a point of , which we will think of as being a point at infinity. (For the purposes of proving Theorem 2, we may clearly assume that is non-empty.) Thus is fixed by . To simplify the exposition, we will also assume that is fixed by the square root of ; in the opposite case when has order two when acting on , the argument is essentially the same, but all references to in the second factor of need to be replaced by (we leave the details to the interested reader).

For any natural number , define to be the set of rational functions which are allowed to have a pole of order up to at , but have no other poles on ; note that as we are assuming to be smooth, it is unambiguous what a pole is (and what order it will have). (In the fancier language of divisors and Cech cohomology, we have .) The space is clearly a vector space over ; one can view intuitively as the space of “polynomials” on of “degree” at most . When , consists just of the constant functions. Indeed, if , then the image of avoids and so lies in the affine line ; but as is projective, the image needs to be compact (hence closed) in , and must therefore be a point, giving the claim.

For higher , we have the easy relations

The former inequality just comes from the trivial inclusion . For the latter, observe that if two functions lie in , so that they each have a pole of order at most at , then some linear combination of these functions must have a pole of order at most at ; thus has codimension at most one in , giving the claim.

From (3) and induction we see that each of the are finite dimensional, with the trivial upper bound

*Riemann’s inequality* complements this with the lower bound

thus one has for all but at most exceptions (in fact, exactly exceptions as it turns out). This is a consequence of the Riemann-Roch theorem; it can be proven from abstract nonsense (the snake lemma) if one defines the genus in a non-standard fashion (as the dimension of the first Cech cohomology of the structure sheaf of ), but to obtain this inequality with a standard definition of (e.g. as the dimension of the zeroth Cech cohomology of the line bundle of differentials) requires the more non-trivial tool of Serre duality.

At any rate, now that we have these vector spaces , we will define to be a tensor product space

for some natural numbers which we will optimise later. That is to say, is spanned by functions of the form with and . This is clearly a linear subspace of of dimension , and hence by Riemann's inequality we have

Observe that maps a tensor product to a function . If and , then we see that the function has a pole of order at most at . We conclude that

and in particular by (4)

We will choose to be a bit bigger than , to make the image of smaller than that of . From (6), (10) we see that if we have the inequality

(together with (7)) then cannot be injective.

On the other hand, we have the following basic fact:

*Proof:* From (3), we can find a linear basis of such that each of the has a distinct order of pole at (somewhere between and inclusive). Similarly, we may find a linear basis of such that each of the has a distinct order of pole at (somewhere between and inclusive). The functions then span , and the order of pole at is . But since , these orders are all distinct, and so these functions must be linearly independent. The claim follows.

This gives us the following bound:

Proposition 4Let be natural numbers such that (7), (11), (12) hold. Then .

*Proof:* As is not injective, we can find with vanishing. By the above lemma, the function is then non-zero, but it must also vanish on , which has cardinality . On the other hand, by (8), has a pole of order at most at and no other poles. Since the number of poles and zeroes of a rational function on a projective curve must add up to zero, the claim follows.

If , we may make the explicit choice

and a brief calculation then gives Theorem 2. In some cases one can optimise things a bit further. For instance, in the genus zero case (e.g. if is just the projective line ) one may take and conclude the absolutely sharp bound in this case; in the case of the projective line , the function is in fact the very concrete function .

Remark 1 When is not a perfect square, one can try to run the above argument using the factorisation instead of . This gives a weaker version of the above bound, of the shape . In the hyperelliptic case at least, one can erase this loss by working with a variant of the argument in which one requires to vanish to high order at , rather than just to first order; see this survey article of mine for details.

Let ${\bf F}$ be a finite field, with algebraic closure $\overline{{\bf F}}$, and let $V$ be an (affine) algebraic variety defined over $\overline{{\bf F}}$, by which I mean a set of the form

$$V = \{ x \in \overline{{\bf F}}^d : P_1(x) = \dots = P_m(x) = 0 \}$$

for some ambient dimension $d \geq 0$, and some finite number of polynomials $P_1, \dots, P_m: \overline{{\bf F}}^d \rightarrow \overline{{\bf F}}$. In order to reduce the number of subscripts later on, let us say that $V$ has *complexity at most $M$* if $d \leq M$, $m \leq M$, and the degrees of the $P_1, \dots, P_m$ are all less than or equal to $M$. Note that we do not require at this stage that $V$ be irreducible (i.e. not the union of two strictly smaller varieties), or defined over ${\bf F}$, though we will often specialise to these cases later in this post. (Also, everything said here can also be applied with almost no changes to projective varieties, but we will stick with affine varieties for sake of concreteness.)

One can consider two crude measures of how “big” the variety is. The first measure, which is algebraic geometric in nature, is the *dimension* of the variety , which is an integer between and (or, depending on convention, , , or undefined, if is empty) that can be defined in a large number of ways (e.g. it is the largest for which the generic linear projection from to is dominant, or the smallest for which the intersection with a generic codimension subspace is non-empty). The second measure, which is number-theoretic in nature, is the number of -points of , i.e. points in all of whose coefficients lie in the finite field, or equivalently the number of solutions to the system of equations for with variables in .

These two measures are linked together in a number of ways. For instance, we have the basic Schwarz-Zippel type bound (which, in this qualitative form, goes back at least to Lemma 1 of the work of Lang and Weil in 1954).

Lemma 1 (Schwarz-Zippel type bound) Let $V$ be a variety of complexity at most $M$. Then we have $|V({\bf F})| \ll_M |{\bf F}|^{\dim(V)}$.

*Proof:* (Sketch) For the purposes of exposition, we will not carefully track the dependencies of implied constants on the complexity , instead simply assuming that all of these quantities remain controlled throughout the argument. (If one wished, one could obtain ineffective bounds on these quantities by an ultralimit argument, as discussed in this previous post, or equivalently by moving everything over to a nonstandard analysis framework; one could also obtain such uniformity using the machinery of schemes.)

We argue by induction on the ambient dimension of the variety . The case is trivial, so suppose and that the claim has already been proven for . By breaking up into irreducible components we may assume that is irreducible (this requires some control on the number and complexity of these components, but this is available, as discussed in this previous post). For each , the fibre is either one-dimensional (and thus all of ) or zero-dimensional. In the latter case, one has points in the fibre from the fundamental theorem of algebra (indeed one has a bound of in this case), and lives in the projection of to , which is a variety of dimension at most and controlled complexity, so the contribution of this case is acceptable from the induction hypothesis. In the former case, the fibre contributes -points, but lies in a variety in of dimension at most (since otherwise would contain a subvariety of dimension at least , which is absurd) and controlled complexity, and so the contribution of this case is also acceptable from the induction hypothesis.

One can improve the bound on the implied constant to be linear in the degree of (see e.g. Claim 7.2 of this paper of Dvir, Kollar, and Lovett, or Lemma A.3 of this paper of Ellenberg, Oberlin, and myself), but we will not be concerned with these improvements here.
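As a toy numerical illustration (mine, not from the post) of Lemma 1, the plane curve $x^2 + y^2 = 1$ is a one-dimensional variety of bounded complexity in ambient dimension two, and brute-force counting over a few primes shows its ${\bf F}_p$-point count is indeed $O(p)$ (in fact it is $p \pm 1$, consistent with the Lang-Weil bound below):

```python
def curve_point_count(p):
    # F_p-points of the plane curve x^2 + y^2 = 1 (dimension 1,
    # bounded complexity, ambient dimension 2).
    return sum((x * x + y * y - 1) % p == 0
               for x in range(p) for y in range(p))

counts = {p: curve_point_count(p) for p in (101, 103, 199)}
```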

Without further hypotheses on , the above upper bound is sharp (except for improvements in the implied constants). For instance, the variety

where are distinct, is the union of distinct hyperplanes of dimension , with and complexity ; similar examples can easily be concocted for other choices of . In the other direction, there is also no non-trivial lower bound for without further hypotheses on . For a trivial example, if is an element of that does not lie in , then the hyperplane

clearly has no -points whatsoever, despite being a -dimensional variety in of complexity . For a slightly less non-trivial example, if is an element of that is not a quadratic residue, then the variety

which is the union of two hyperplanes, still has no -points, even though this time the variety is defined over instead of (by which we mean that the defining polynomial(s) have all of their coefficients in ). There is however the important Lang-Weil bound that allows for a much better estimate as long as is both defined over *and* irreducible:

Theorem 2 (Lang-Weil bound) Let $V$ be a variety of complexity at most $M$. Assume that $V$ is defined over ${\bf F}$, and that $V$ is irreducible as a variety over $\overline{{\bf F}}$ (i.e. is *geometrically irreducible* or *absolutely irreducible*). Then

$$|V({\bf F})| = (1 + O_M(|{\bf F}|^{-1/2}))\, |{\bf F}|^{\dim(V)}.$$

Again, more explicit bounds on the implied constant here are known, but will not be the focus of this post. As the previous examples show, the hypotheses of definability over and geometric irreducibility are both necessary.

The Lang-Weil bound is already non-trivial in the model case of plane curves:

Theorem 3 (Hasse-Weil bound) Let $P: \overline{{\bf F}} \times \overline{{\bf F}} \rightarrow \overline{{\bf F}}$ be an irreducible polynomial of degree $D$ with coefficients in ${\bf F}$. Then

$$|\{ (x,y) \in {\bf F}^2 : P(x,y) = 0 \}| = |{\bf F}| + O_D( |{\bf F}|^{1/2} ).$$

Thus, for instance, if $4a^3 + 27b^2 \neq 0$, then the elliptic curve $\{ (x,y) \in {\bf F}^2 : y^2 = x^3 + a x + b \}$ has $|{\bf F}| + O(|{\bf F}|^{1/2})$ ${\bf F}$-points, a result first established by Hasse. The Hasse-Weil bound is already quite non-trivial, being the analogue of the Riemann hypothesis for plane curves. For hyper-elliptic curves, an elementary proof (due to Stepanov) is discussed in this previous post. For general plane curves, the first proof was by Weil (leading to his famous Weil conjectures); there is also a nice version of Stepanov’s argument due to Bombieri covering this case which is a little less elementary (relying crucially on the Riemann-Roch theorem for the upper bound, and a lifting trick to then get the lower bound), which I briefly summarise later in this post. The full Lang-Weil bound is deduced from the Hasse-Weil bound by an induction argument using generic hyperplane slicing, as I will also summarise later in this post.

The hypotheses of definability over and geometric irreducibility in the Lang-Weil can be removed after inserting a geometric factor:

Corollary 4 (Lang-Weil bound, alternate form) Let $V$ be a variety of complexity at most $M$. Then one has

$$|V({\bf F})| = (c(V) + O_M(|{\bf F}|^{-1/2}))\, |{\bf F}|^{\dim(V)}$$

where $c(V)$ is the number of top-dimensional components of $V$ (i.e. geometrically irreducible components of $V$ of dimension $\dim(V)$) that are definable over ${\bf F}$, or equivalently are invariant with respect to the Frobenius endomorphism that defines ${\bf F}$.

*Proof:* By breaking up a general variety $V$ into components (and using Lemma 1 to dispose of any lower-dimensional components), it suffices to establish this claim when $V$ is itself geometrically irreducible. If $V$ is definable over ${\bf F}$, the claim follows from Theorem 2. If $V$ is not definable over ${\bf F}$, then it is not fixed by the Frobenius endomorphism $\hbox{Frob}$ (since otherwise one could produce a set of defining polynomials that were fixed by Frobenius and thus defined over ${\bf F}$ by using some canonical basis (such as a reduced Grobner basis) for the associated ideal), and so $V \cap \hbox{Frob}(V)$ has strictly smaller dimension than $V$. But $V \cap \hbox{Frob}(V)$ captures all the ${\bf F}$-points of $V$, so in this case the claim follows from Lemma 1.

Note that if is reducible but is itself defined over , then the Frobenius endomorphism preserves itself, but may permute the components of around. In this case, is the number of fixed points of this permutation action of Frobenius on the components. In particular, is always a natural number between and ; thus we see that regardless of the geometry of , the normalised count is asymptotically restricted to a bounded range of natural numbers (in the regime where the complexity stays bounded and goes to infinity).

Example 1 Consider the variety

$$V = \{ (x,y) \in \overline{{\bf F}}^2 : y^2 = a x^2 \}$$

for some non-zero parameter $a \in {\bf F}$. Geometrically (by which we basically mean "when viewed over the algebraically closed field $\overline{{\bf F}}$"), this is the union of two lines, with slopes corresponding to the two square roots of $a$. If $a$ is a quadratic residue, then both of these lines are defined over ${\bf F}$, and are fixed by Frobenius, and there are two invariant top-dimensional components in this case. If $a$ is not a quadratic residue, then the lines are not defined over ${\bf F}$, and the Frobenius automorphism permutes the two lines while preserving $V$ as a whole, giving no invariant components in this case.

Corollary 4 effectively computes (at least to leading order) the number-theoretic size of a variety in terms of geometric information about , namely its dimension and the number of top-dimensional components fixed by Frobenius. It turns out that with a little bit more effort, one can extend this connection to cover not just a single variety , but a family of varieties indexed by points in some base space . More precisely, suppose we now have two affine varieties of bounded complexity, together with a regular map of bounded complexity (the definition of complexity of a regular map is a bit technical, see e.g. this paper, but one can think for instance of a polynomial or rational map of bounded degree as a good example). It will be convenient to assume that the base space is irreducible. If the map is a dominant map (i.e. the image is Zariski dense in ), then standard algebraic geometry results tell us that the fibres are an unramified family of -dimensional varieties outside of an exceptional subset of of dimension strictly smaller than (and with having dimension strictly smaller than ); see e.g. Section I.6.3 of Shafarevich.

Now suppose that , , and are defined over . Then, by Lang-Weil, has -points, and by Schwarz-Zippel, for all but of these -points (the ones that lie in the subvariety ), the fibre is an algebraic variety defined over of dimension . By using ultraproduct arguments (see e.g. Lemma 3.7 of this paper of mine with Emmanuel Breuillard and Ben Green), this variety can be shown to have bounded complexity, and thus by Corollary 4, has -points. One can then ask how the quantity is distributed. A simple but illustrative example occurs when the map is the squaring map $x \mapsto x^2$ on the affine line. Then the number of ${\bf F}$-points in the fibre over $x$ equals $2$ when $x$ is a non-zero quadratic residue and $0$ when $x$ is a non-zero quadratic non-residue (and $1$ when $x$ is zero, but this is a negligible fraction of all $x$). In particular, in the asymptotic limit $|{\bf F}| \rightarrow \infty$, the fibre count is equal to $2$ half of the time and $0$ half of the time.
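This quadratic residue example is easy to verify by brute force; the sketch below (an illustration of mine, not from the post) tabulates the fibre sizes of the squaring map over a sample prime field, and exactly half of the non-zero elements have fibre size $2$ and half have fibre size $0$:

```python
p = 101  # an arbitrary odd prime

# Fibre sizes of the squaring map y -> y^2 over F_p:
# fibre[x] is the number of y in F_p with y^2 = x.
fibre = [0] * p
for y in range(p):
    fibre[y * y % p] += 1

twos = sum(1 for x in range(1, p) if fibre[x] == 2)   # quadratic residues
zeros = sum(1 for x in range(1, p) if fibre[x] == 0)  # non-residues
```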

Now we describe the asymptotic distribution of the . We need some additional notation. Let be an -point in , and let be the connected components of the fibre . As is defined over , this set of components is permuted by the Frobenius endomorphism . But there is also an action by monodromy of the fundamental group (this requires a certain amount of étale machinery to properly set up, as we are working over a positive characteristic field rather than over the complex numbers, but I am going to ignore this rather important detail here, as I still don’t fully understand it). This fundamental group may be infinite, but (by the étale construction) is always profinite, and in particular has a *Haar probability measure*, in which every finite index subgroup (and their cosets) are measurable. Thus we may meaningfully talk about elements drawn uniformly at random from this group, so long as we work only with the profinite $\sigma$-algebra on that is generated by the cosets of the finite index subgroups of this group (which will be the only relevant sets we need to measure when considering the action of this group on finite sets, such as the components of a generic fibre).

Theorem 5 (Lang-Weil with parameters) Let be varieties of complexity at most with irreducible, and let be a dominant map of complexity at most . Let be an -point of . Then, for any natural number , one has for values of , where is the random variable that counts the number of components of a generic fibre that are invariant under , where is an element chosen uniformly at random from the étale fundamental group . In particular, in the asymptotic limit , and with chosen uniformly at random from , (or, equivalently, ) and have the same asymptotic distribution.

This theorem generalises Corollary 4 (which is the case when is just a point, so that is just and is trivial). Informally, the effect of a non-trivial parameter space on the Lang-Weil bound is to push around the Frobenius map by monodromy for the purposes of counting invariant components, and a randomly chosen set of parameters corresponds to a randomly chosen loop on which to perform monodromy.

Example 2 Let and for some fixed ; to avoid some technical issues let us suppose that is coprime to . Then can be taken to be , and for a base point we can take . The fibre – the roots of unity – can be identified with the cyclic group by using a primitive root of unity. The étale fundamental group is (I think) isomorphic to the profinite closure of the integers (excluding the part of that closure coming from the characteristic of ). Not coincidentally, the integers are the fundamental group of the complex analogue of . (Brian Conrad points out to me though that for more complicated varieties, such as covers of by a power of the characteristic, the etale fundamental group is more complicated than just a profinite closure of the ordinary fundamental group, due to the presence of Artin-Schreier covers that are only ramified at infinity.) The action of this fundamental group on the fibres can be given by translation. Meanwhile, the Frobenius map on is given by multiplication by . A random element then becomes a random affine map on , where is chosen uniformly at random from . The number of fixed points of this map is equal to the greatest common divisor of and when is divisible by , and equal to otherwise. This matches up with the elementary number fact that a randomly chosen non-zero element of will be an $n^{th}$ power with probability , and when this occurs, the number of $n^{th}$ roots in will be .
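The elementary number fact at the end of this example can be checked directly; in the sketch below (an illustration of mine, not from the post), every $n$-th power in ${\bf F}_p^\times$ has exactly $\gcd(n, p-1)$ roots, and the $n$-th powers make up a $1/\gcd(n, p-1)$ fraction of ${\bf F}_p^\times$:

```python
from math import gcd

p, n = 101, 6  # p prime, n coprime to the characteristic p

# roots[x] = number of y in F_p^x with y^n = x.
roots = {x: 0 for x in range(1, p)}
for y in range(1, p):
    roots[pow(y, n, p)] += 1

d = gcd(n, p - 1)
nth_powers = [x for x in roots if roots[x] > 0]
```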

Example 3 (Thanks to Jordan Ellenberg for this example.) Consider a random elliptic curve , where are chosen uniformly at random, and let . Let be the -torsion points of (i.e. those elements with using the elliptic curve addition law); as a group, this is isomorphic to (assuming that has sufficiently large characteristic, for simplicity), and consider the number of points of , which is a random variable taking values in the natural numbers between and . In this case, the base variety is the modular curve , and the covering variety is the modular curve . The generic fibre here can be identified with , the monodromy action projects down to the action of , and the action of Frobenius on this fibre can be shown to be given by a matrix with determinant (with the exact choice of matrix depending on the choice of fibre and of the identification), so the distribution of the number of -points of is asymptotic to the distribution of the number of fixed points of a random linear map of determinant on .

Theorem 5 seems to be well known “folklore” among arithmetic geometers, though I do not know of an explicit reference for it. I enjoyed deriving it for myself (though my derivation is somewhat incomplete due to my lack of understanding of étale cohomology) from the ordinary Lang-Weil theorem and the moment method. I’m recording this derivation later in this post, mostly for my own benefit (as I am still in the process of learning this material), though perhaps some other readers may also be interested in it.

Caveat: not all details are fully fleshed out in this writeup, particularly those involving the finer points of algebraic geometry and étale cohomology, as my understanding of these topics is not as complete as I would like it to be.

Many thanks to Brian Conrad and Jordan Ellenberg for helpful discussions on these topics.

The ham sandwich theorem asserts that, given bounded open sets in , there exists a hyperplane that bisects each of these sets , in the sense that each of the two half-spaces on either side of the hyperplane captures exactly half of the volume of . The shortest proof of this result proceeds by invoking the Borsuk-Ulam theorem.

A useful generalisation of the ham sandwich theorem is the *polynomial ham sandwich theorem*, which asserts that given bounded open sets in , there exists a hypersurface of degree (thus is a polynomial of degree such that the two semi-algebraic sets and capture half the volume of each of the sets). (More precisely, the degree will be at most , where is the first positive integer for which exceeds .) This theorem can be deduced from the Borsuk-Ulam theorem in the same manner that the ordinary ham sandwich theorem is (and can also be deduced directly from the ordinary ham sandwich theorem via the Veronese embedding).

The polynomial ham sandwich theorem is a theorem about continuous bodies (bounded open sets), but a simple limiting argument leads one to the following discrete analogue: given *finite* sets in , there exists a hypersurface of degree , such that each of the two semi-algebraic sets and contain at most half of the points on (note that some of the points of can certainly lie on the boundary ). This can be iterated to give a useful cell decomposition:

Proposition 1 (Cell decomposition). Let E be a finite set of points in R^d, and let D be a positive integer. Then there exists a polynomial Q of degree at most D, and a decomposition R^d = {Q = 0} ∪ C_1 ∪ … ∪ C_m into the hypersurface {Q = 0} and a collection C_1, …, C_m of cells bounded by {Q = 0}, such that m = O(D^d), and such that each cell C_i contains at most O(|E| / D^d) points of E.

A proof is sketched in this previous blog post. The cells in the argument are not necessarily connected (being instead formed by intersecting together a number of semi-algebraic sets such as and ), but it is a classical result (established independently by Oleinik-Petrovskii, Milnor, and Thom) that any degree hypersurface divides into connected components, so one can easily assume that the cells are connected if desired. (Incidentally, one does not need the full machinery of the results in the above cited papers – which control not just the number of components, but all the Betti numbers of the complement of – to get the bound on connected components; one can instead observe that every bounded connected component has a critical point where , and one can control the number of these points by Bezout’s theorem, after perturbing slightly to enforce genericity, and then count the unbounded components by an induction on dimension.)
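The iteration behind Proposition 1 can be mimicked in a toy setting of my own devising: instead of the genuine polynomial ham sandwich step, one bisects by median lines (alternating coordinates), so that after j rounds the product of the j linear forms is a degree-j polynomial cutting out at most 2^j cells, each holding roughly a 2^{-j} fraction of the points. The sketch below tracks only the cells, not the polynomial, and unlike the real construction a median line bisects only one set at a time.

```python
def bisect_cells(points, levels, axis=0):
    """Recursively split a planar point set by median lines, alternating
    between vertical and horizontal splits; points whose coordinate
    equals the median go to the right-hand cell.  After j levels this
    produces at most 2^j cells."""
    if levels == 0 or len(points) <= 1:
        return [points]
    vals = sorted(pt[axis] for pt in points)
    med = vals[len(vals) // 2]
    left = [pt for pt in points if pt[axis] < med]
    right = [pt for pt in points if pt[axis] >= med]
    cells = []
    for half in (left, right):
        cells.extend(bisect_cells(half, levels - 1, 1 - axis))
    return cells

# 64 grid points, 3 levels of bisection: at most 2^3 = 8 cells,
# each containing roughly 64/8 points.
pts = [(i % 8, i // 8) for i in range(64)]
cells = bisect_cells(pts, 3)
assert len(cells) <= 8
assert all(len(cell) <= 64 // 8 + 1 for cell in cells)
```

This toy version can be defeated by heavy ties along the splitting axis; the genuine polynomial ham sandwich step avoids that issue (and bisects all the sets simultaneously), which is exactly why the polynomial method is needed in the real proof.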

Remark 1. By setting as large as , we obtain as a limiting case of the cell decomposition the fact that any finite set of points in can be captured by a hypersurface of degree . This fact is in fact true over arbitrary fields (not just over ), and can be proven by a simple linear algebra argument (see e.g. this previous blog post). However, the cell decomposition is more flexible than this algebraic fact due to the ability to arbitrarily select the degree parameter .
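The linear algebra argument of this remark is short enough to implement: the space of polynomials of degree at most D in n variables has dimension binom(D+n, n), so as soon as this exceeds the number of points, the evaluation matrix has a nontrivial kernel, and any kernel vector is a nonzero polynomial vanishing on the whole set. Here is a self-contained sketch of my own over a small prime field (the argument works over any field; the choices p = 7, D = 2, and the five sample points are arbitrary).

```python
from itertools import combinations_with_replacement

def monomials(n, D):
    """Exponent tuples of all monomials of degree <= D in n variables."""
    out = []
    for d in range(D + 1):
        for combo in combinations_with_replacement(range(n), d):
            e = [0] * n
            for i in combo:
                e[i] += 1
            out.append(tuple(e))
    return out

def evaluate(coeffs, monos, pt, p):
    """Evaluate sum_j coeffs[j] * x^monos[j] at the point pt, mod p."""
    total = 0
    for c, e in zip(coeffs, monos):
        for xi, ei in zip(pt, e):
            c = c * pow(xi, ei, p) % p
        total = (total + c) % p
    return total

def vanishing_poly(points, D, p):
    """Nonzero polynomial of degree <= D over F_p vanishing at all given
    points, found as a kernel vector of the evaluation matrix (which has
    more columns than rows) via Gauss-Jordan elimination mod p."""
    monos = monomials(len(points[0]), D)
    m = len(monos)
    assert len(points) < m, "need fewer points than monomials"
    A = []
    for pt in points:
        row = []
        for e in monos:
            v = 1
            for xi, ei in zip(pt, e):
                v = v * pow(xi, ei, p) % p
            row.append(v)
        A.append(row)
    pivots, r = {}, 0
    for c in range(m):
        if r == len(A):
            break
        pr = next((i for i in range(r, len(A)) if A[i][c]), None)
        if pr is None:
            continue  # no pivot in this column; it stays free
        A[r], A[pr] = A[pr], A[r]
        inv = pow(A[r][c], p - 2, p)  # inverse mod p by Fermat
        A[r] = [v * inv % p for v in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(m)]
        pivots[c] = r
        r += 1
    free = next(c for c in range(m) if c not in pivots)
    v = [0] * m
    v[free] = 1
    for c, row in pivots.items():
        v[c] = -A[row][free] % p
    return v

p, D = 7, 2
pts = [(0, 0), (1, 2), (3, 3), (5, 1), (6, 4)]  # 5 points < 6 monomials
coeffs = vanishing_poly(pts, D, p)
monos = monomials(2, D)
assert any(coeffs)
assert all(evaluate(coeffs, monos, pt, p) == 0 for pt in pts)
```

This is precisely the "capture any point set by a low-degree hypersurface" fact; the cell decomposition then refines it by making the degree an adjustable parameter.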

The cell decomposition can be viewed as a structural theorem for arbitrary large configurations of points in space, much as the Szemerédi regularity lemma can be viewed as a structural theorem for arbitrary large dense graphs. Indeed, just as many problems in the theory of large dense graphs can be profitably attacked by first applying the regularity lemma and then inspecting the outcome, it now seems that many problems in combinatorial incidence geometry can be attacked by applying the cell decomposition (or a similar such decomposition), with a parameter to be optimised later, to a relevant set of points, and seeing how the cells interact with each other and with the other objects in the configuration (lines, planes, circles, etc.). This strategy was spectacularly illustrated recently with Guth and Katz's use of the cell decomposition to resolve the Erdős distinct distance problem (up to logarithmic factors), as discussed in this blog post.

In this post, I wanted to record a simpler (but still illustrative) version of this method (that I learned from Nets Katz), namely to provide yet another proof of the Szemerédi-Trotter theorem in incidence geometry:

Theorem 2 (Szemerédi-Trotter theorem). Given a finite set P of points and a finite set L of lines in R^2, the set of incidences I(P, L) := {(p, ℓ) ∈ P × L : p ∈ ℓ} has cardinality O(|P|^{2/3} |L|^{2/3} + |P| + |L|).

This theorem has many short existing proofs, including one via crossing number inequalities (as discussed in this previous post) or via a slightly different type of cell decomposition (as discussed here). The proof given below is, in particular, not all that different from the latter proof, but I believe it still serves as a good introduction to the polynomial method in combinatorial incidence geometry.
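As a sanity check on the shape of the bound |I(P, L)| = O(|P|^{2/3}|L|^{2/3} + |P| + |L|), one can count incidences by brute force on a small example. The snippet below is my own illustration; the grid, the family of lines, and the constant 4 in the final check are all arbitrary choices, and a small instance like this of course cannot witness the sharpness of the exponents.

```python
# Brute-force incidence count for a 4x4 grid of points and a family of
# lines y = m*x + c, compared against the Szemeredi-Trotter shape
# (|P||L|)^(2/3) + |P| + |L| with an arbitrary constant 4.
points = [(x, y) for x in range(4) for y in range(4)]
lines = [(m, c) for m in range(-2, 3) for c in range(-2, 6)]
incidences = sum(1 for (x, y) in points for (m, c) in lines
                 if y == m * x + c)
bound = (len(points) * len(lines)) ** (2 / 3) + len(points) + len(lines)
assert incidences > 0
assert incidences <= 4 * bound
```

Grid-and-lines configurations of this type (suitably scaled) are in fact the extremal examples showing that the exponent 2/3 in the theorem cannot be improved.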

Combinatorial incidence geometry is the study of the possible combinatorial configurations between geometric objects such as lines and circles. One of the basic open problems in the subject has been the Erdős distance problem, posed in 1946:

Problem 1 (Erdős distance problem). Let n be a large natural number. What is the least number of distances that are determined by n points in the plane?

Erdős called this least number g(n). For instance, one can check that g(2) = 1 and g(3) = 1, although the precise computation of g(n) rapidly becomes more difficult after this. By considering n points in arithmetic progression, we see that g(n) ≤ n - 1. By considering the slightly more sophisticated example of a √n × √n lattice grid (assuming that n is a square number for simplicity), and using some analytic number theory, one can obtain the slightly better asymptotic bound g(n) = O(n / √log n).
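Both constructions are small enough to tabulate directly; comparing distinct squared distances keeps the arithmetic exact, since for nonnegative reals distinct distances correspond to distinct squares. The following check of my own shows that already at 16 points the grid beats the arithmetic progression.

```python
from itertools import combinations

def distinct_distances(pts):
    """Number of distinct distances determined by a planar point set,
    computed via squared distances to stay in exact integer arithmetic."""
    return len({(p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                for p, q in combinations(pts, 2)})

# 16 points in arithmetic progression determine exactly 15 distances...
ap = [(i, 0) for i in range(16)]
assert distinct_distances(ap) == 15

# ...while the 4x4 grid, also 16 points, determines only 9.
grid = [(x, y) for x in range(4) for y in range(4)]
assert distinct_distances(grid) == 9
```

The advantage of the grid comes from the multiplicity of representations of integers as sums of two squares, which is where the analytic number theory enters in the asymptotic analysis.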

On the other hand, lower bounds are more difficult to obtain. As observed by Erdős, an easy argument, ultimately based on the incidence geometry fact that any two circles intersect in at most two points, gives the lower bound g(n) ≫ n^{1/2}. The exponent 1/2 has been slowly increased over the years by a series of increasingly intricate arguments combining facts from incidence geometry (most notably the Szemerédi-Trotter theorem) with some tools from additive combinatorics; however, these methods seemed to fall quite short of getting to the optimal exponent of 1. (Indeed, prior to last week, the best lower bound known was approximately n^{0.864}, due to Katz and Tardos.)

Very recently, though, Guth and Katz have obtained a near-optimal result: any n points in the plane determine at least c n / log n distinct distances, for some absolute constant c > 0.

The proof neatly combines together several powerful and modern tools in a new way: a recent geometric reformulation of the problem due to Elekes and Sharir; the polynomial method as used recently by Dvir, Guth, and Guth-Katz on related incidence geometry problems (and discussed previously on this blog); and the somewhat older method of cell decomposition (also discussed on this blog). A key new insight is that the polynomial method (and more specifically, the *polynomial Ham Sandwich theorem*, also discussed previously on this blog) can be used to efficiently create cells.

In this post, I thought I would sketch some of the key ideas used in the proof, though I will not give the full argument here (the paper itself is largely self-contained, well motivated, and of only moderate length). In particular I will not go through all the various cases of configuration types that one has to deal with in the full argument, but only some illustrative special cases.

To simplify the exposition, I will repeatedly rely on “pigeonholing cheats”. A typical such cheat: if I have objects (e.g. points or lines), each of which could be of one of two types, I will assume that either all of the objects are of the first type, or all of the objects are of the second type. (In truth, I can only assume that at least of the objects are of the first type, or at least of the objects are of the second type; but in practice, having instead of only ends up costing an unimportant multiplicative constant in the type of estimates used here.) A related such cheat: if one has objects (again, think of points or circles), and to each object one can associate some natural number (e.g. some sort of “multiplicity” for ) that is of “polynomial size” (of size ), then I will assume in fact that all the are in a fixed dyadic range for some . (In practice, the dyadic pigeonhole principle can only achieve this after throwing away all but about of the original objects; it is this type of logarithmic loss that eventually leads to the logarithmic factor in the main theorem.) Using the notation to denote the assertion that for an absolute constant , we thus have for all , thus is morally constant.
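The dyadic pigeonhole cheat can be made concrete: if n objects carry multiplicities of size at most n^2, then only about 2 log_2 n + 1 dyadic ranges [2^j, 2^{j+1}) are available, so some range must retain at least n / (2 log_2 n + 1) of the objects. A numerical sketch of my own (the particular multiplicities are an arbitrary example):

```python
import math
from collections import Counter

n = 10000
# Arbitrary "multiplicities" of polynomial size (between 1 and n^2).
mults = [(37 * i * i + 11 * i + 3) % (n * n) + 1 for i in range(n)]

# Assign each object to the dyadic range [2^j, 2^(j+1)) containing its
# multiplicity; since multiplicities are at most n^2, there are at most
# 2*log2(n) + 1 possible values of j.
buckets = Counter(m.bit_length() - 1 for m in mults)
j, count = buckets.most_common(1)[0]

# Pigeonhole: the most popular dyadic range keeps at least a
# 1 / (2*log2(n) + 1) fraction of the n objects.
assert count >= n / (2 * math.log2(n) + 1)
```

Restricting attention to this single dyadic range is exactly the step that costs a logarithmic factor, which is the source of the logarithmic loss in the main theorem.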

I will also use asymptotic notation rather loosely, to avoid cluttering the exposition with a certain amount of routine but tedious bookkeeping of constants. In particular, I will use the informal notation or to denote the statement that is “much less than” or is “much larger than” , by some large constant factor.

See also Janos Pach’s recent reaction to the Guth-Katz paper on Kalai’s blog.

Below the fold is a version of my talk “Recent progress on the Kakeya conjecture” that I gave at the Fefferman conference.

Jordan Ellenberg, Richard Oberlin, and I have just uploaded to the arXiv the paper “The Kakeya set and maximal conjectures for algebraic varieties over finite fields“, submitted to Mathematika. This paper builds upon some work of Dvir and later authors on the Kakeya problem in finite fields, which I have discussed in this earlier blog post. Dvir established the following:

Kakeya set conjecture for finite fields. Let F be a finite field, and let E be a subset of F^n that contains a line in every direction. Then E has cardinality at least c_n |F|^n for some constant c_n > 0 depending only on n.

The initial argument of Dvir gave . This was improved to for some explicit by Saraf and Sudan, and recently to by Dvir, Kopparty, Saraf, and Sudan, which is within a factor 2 of the optimal result.

In our work we investigate a somewhat different set of improvements to Dvir’s result. The first concerns the *Kakeya maximal function* of a function , defined for all directions in the projective hyperplane at infinity by the formula

where the supremum ranges over all lines in oriented in the direction . Our first result is the endpoint estimate for this operator, namely

Kakeya maximal function conjecture in finite fields. We have for some constant .

This result implies Dvir’s result, since if f is the indicator function of the set E in Dvir’s result, then for every . However, it also gives information on more general sets E which do not necessarily contain a line in every direction, but instead contain a certain fraction of a line in a subset of directions. The exponents here are best possible in the sense that all other mapping properties of the operator can be deduced (with bounds that are optimal up to constants) by interpolating the above estimate with more trivial estimates. This result is the finite field analogue of a long-standing (and still open) conjecture for the Kakeya maximal function in Euclidean spaces; we rely on the polynomial method of Dvir, which thus far has not extended to the Euclidean setting (but note the very interesting variant of this method by Guth that has established the endpoint multilinear Kakeya maximal function estimate in this setting, see this blog post for further discussion).

It turns out that a direct application of the polynomial method is not sufficient to recover the full strength of the maximal function estimate; but by combining the polynomial method with the Nikishin-Maurey-Pisier-Stein “method of random rotations” (as interpreted nowadays by Stein and later by Bourgain, and originally inspired by the factorisation theorems of Nikishin, Maurey, and Pisier), one can already recover a “restricted weak type” version of the above estimate. If one then enhances the polynomial method with the “method of multiplicities” (as introduced by Saraf and Sudan) we can then recover the full “strong type” estimate; a few more details below the fold.

It turns out that one can generalise the above results to more general affine or projective algebraic varieties over finite fields. In particular, we showed

Kakeya maximal function conjecture in algebraic varieties. Suppose that is an (n-1)-dimensional algebraic variety. Let be an integer. Then we have for some constant , where the supremum is over all irreducible algebraic curves of degree at most d that pass through x but do not lie in W, and W(F) denotes the F-points of W.

The ordinary Kakeya maximal function conjecture corresponds to the case when N=n, W is the hyperplane at infinity, and the degree d is equal to 1. One corollary of this estimate is a Dvir-type result: a subset of which contains, for each x in W, an irreducible algebraic curve of degree d passing through x but not lying in W, has cardinality if . (In particular this implies a lower bound for Nikodym sets worked out by Li.) The dependence of the implied constant on W is only via the degree of W.

The techniques used in the flat case can easily handle curves of higher degree (provided that we allow the implied constants to depend on d), but the method of random rotations does not seem to work directly on the algebraic variety W as there are usually no symmetries of this variety to exploit. Fortunately, we can get around this by using a “random projection trick” to “flatten” W into a hyperplane (after first expressing W as the zero locus of some polynomials, and then composing with the graphing map for such polynomials), reducing the non-flat case to the flat case.

Below the fold, I wish to sketch two of the key ingredients in our arguments, the random rotations method and the random projections trick. (We of course also use some algebraic geometry, but mostly low-tech stuff, on the level of Bezout’s theorem, though we do need one non-trivial result of Kleiman (from SGA6), that asserts that bounded degree varieties can be cut out by a bounded number of polynomials of bounded degree.)

[Update, March 14: See also Jordan’s own blog post on our paper.]

One of my favourite family of conjectures (and one that has preoccupied a significant fraction of my own research) is the family of Kakeya conjectures in geometric measure theory and harmonic analysis. There are many (not quite equivalent) conjectures in this family. The cleanest one to state is the set conjecture:

Kakeya set conjecture: Let , and let contain a unit line segment in every direction (such sets are known as Kakeya sets or Besicovitch sets). Then E has Hausdorff dimension and Minkowski dimension equal to n.

One reason why I find these conjectures fascinating is the sheer variety of mathematical fields that arise both in the partial results towards this conjecture, and in the applications of those results to other problems. See for instance this survey of Wolff, my Notices article and this article of Łaba on the connections between this problem and other problems in Fourier analysis, PDE, and additive combinatorics; there have even been some connections to number theory and to cryptography. At the other end of the pipeline, the mathematical tools that have gone *into* the proofs of various partial results have included:

- Maximal functions, covering lemmas, methods (Cordoba, Strömberg, Cordoba-Fefferman);
- Fourier analysis (Nagel-Stein-Wainger);
- Multilinear integration (Drury, Christ);
- Paraproducts (Katz);
- Combinatorial incidence geometry (Bourgain, Wolff);
- Multi-scale analysis (Barrionuevo, Katz-Łaba-Tao, Łaba-Tao, Alfonseca-Soria-Vargas);
- Probabilistic constructions (Bateman-Katz, Bateman);
- Additive combinatorics and graph theory (Bourgain, Katz-Łaba-Tao, Katz-Tao, Katz-Tao);
- Sum-product theorems (Bourgain-Katz-Tao);
- Bilinear estimates (Tao-Vargas-Vega);
- Perron trees (Perron, Schoenberg, Keich);
- Group theory (Katz);
- Low-degree algebraic geometry (Schlag, Tao, Mockenhaupt-Tao);
- High-degree algebraic geometry (Dvir, Saraf-Sudan);
- Heat flow monotonicity formulae (Bennett-Carbery-Tao).

[This list is not exhaustive.]

Very recently, I was pleasantly surprised to see yet another mathematical tool used to obtain new progress on the Kakeya conjecture, namely (a generalisation of) the famous Ham Sandwich theorem from algebraic topology. This was recently used by Guth to establish a certain endpoint multilinear Kakeya estimate left open by the work of Bennett, Carbery, and myself. With regards to the Kakeya set conjecture, Guth’s arguments assert, roughly speaking, that the only Kakeya sets that can fail to have full dimension are those which obey a certain “planiness” property, which informally means that the line segments that pass through a typical point in the set must be essentially coplanar. (This property first surfaced in my paper with Katz and Łaba.) Guth’s arguments can be viewed as a partial analogue of Dvir’s arguments in the finite field setting (which I discussed in this blog post) to the Euclidean setting; in particular, both arguments rely crucially on the ability to create a polynomial of controlled degree that vanishes at or near a large number of points. Unfortunately, while these arguments fully settle the Kakeya conjecture in the finite field setting, it appears that some new ideas are still needed to finish off the problem in the Euclidean setting. Nevertheless this is an interesting new development in the long history of this conjecture, in particular demonstrating that the polynomial method can be successfully applied to continuous Euclidean problems (i.e. it is not confined to the finite field setting).

In this post I would like to sketch some of the key ideas in Guth’s paper, in particular the role of the Ham Sandwich theorem (or more precisely, a polynomial generalisation of this theorem first observed by Gromov).

One of my favourite unsolved problems in mathematics is the Kakeya conjecture in geometric measure theory. This conjecture is descended from the

Kakeya needle problem. (1917) What is the least area in the plane required to continuously rotate a needle of unit length and zero thickness around completely (i.e. by 360°)?

For instance, one can rotate a unit needle inside a unit disk, which has area π/4. By using a deltoid one requires only an area of π/8.

In 1928, Besicovitch showed that in fact one could rotate a unit needle using an *arbitrarily small* amount of positive area. This unintuitive fact was a corollary of two observations. The first, which is easy, is that one can *translate* a needle using arbitrarily small area, by sliding the needle along the direction it points in for a long distance (which costs zero area), turning it slightly (costing a small amount of area), sliding back, and then undoing the turn. The second fact, which is less obvious, can be phrased as follows. Define a *Kakeya set* in to be any set which contains a unit line segment in each direction. (See this Java applet of mine, or the Wikipedia page, for some pictures of such sets.)

Theorem. (Besicovitch, 1919) There exist Kakeya sets of arbitrarily small area (or more precisely, Lebesgue measure).

In fact, one can construct such sets with zero Lebesgue measure. On the other hand, it was shown by Davies that even though these sets had zero area, they were still necessarily two-dimensional (in the sense of either Hausdorff dimension or Minkowski dimension). This led to an analogous conjecture in higher dimensions:

Kakeya conjecture. A Besicovitch set in (i.e. a subset of that contains a unit line segment in every direction) has Minkowski and Hausdorff dimension equal to n.

This conjecture remains open in dimensions three and higher (and gets more difficult as the dimension increases), although many partial results are known. For instance, when n=3, it is known that Besicovitch sets have Hausdorff dimension at least 5/2 and (upper) Minkowski dimension at least 5/2 + ε for some absolute constant ε > 0. See my Notices article for a general survey of this problem (and its connections with Fourier analysis, additive combinatorics, and PDE), my paper with Katz for a more technical survey, and Wolff’s survey for a systematic treatment of the field (up to about 1998 or so).

In 1999, Wolff proposed a simpler finite field analogue of the Kakeya conjecture as a model problem that avoided all the technical issues involving Minkowski and Hausdorff dimension. If is a vector space over a finite field F, define a *Kakeya set* to be a subset of which contains a line in every direction.

Finite field Kakeya conjecture. Let E be a Kakeya set in F^n. Then E has cardinality at least c_n |F|^n, where c_n > 0 depends only on n.

This conjecture has had a significant influence in the subject, in particular inspiring work on the *sum-product phenomenon* in finite fields, which has since proven to have many applications in number theory and computer science. Modulo minor technicalities, the progress on the finite field Kakeya conjecture was, until very recently, essentially the same as that of the original “Euclidean” Kakeya conjecture.

Last week, the finite field Kakeya conjecture was proven using a beautifully simple argument by Zeev Dvir, using the *polynomial method* in algebraic extremal combinatorics. The proof is so short that I can present it in full here.
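Dvir's argument gives the bound |E| ≥ binom(q + n - 1, n) for a Kakeya set in F_q^n, where q = |F|. In the smallest nontrivial case q = 3, n = 2 this reads |E| ≥ binom(4, 2) = 6, and it can be confirmed exhaustively: any Kakeya set contains one full line per direction, and such a union is itself a Kakeya set, so the minimum is attained by a union of one line per direction. The search below over all 3^4 = 81 such unions is my own sanity check, not part of the proof.

```python
from itertools import product

q = 3  # the field F_3; the smallest nontrivial case
F = range(q)

# The q + 1 = 4 directions of lines in F_3^2, one representative each.
directions = [(1, 0)] + [(s, 1) for s in F]

def line(base, d):
    """The line {base + t*d : t in F_3} as a frozenset of points."""
    return frozenset(((base[0] + t * d[0]) % q, (base[1] + t * d[1]) % q)
                     for t in range(q))

# The q = 3 parallel lines in each direction.
lines_by_dir = [{line(b, d) for b in product(F, repeat=2)}
                for d in directions]

# Minimum size of a union of one line per direction = minimum Kakeya size.
min_size = min(len(frozenset.union(*choice))
               for choice in product(*lines_by_dir))

# Dvir's bound in this case: |E| >= binom(q + n - 1, n) = binom(4, 2) = 6.
assert min_size >= 6
assert min_size <= 3 * (q + 1)  # trivially, a union of 4 lines suffices
```

Of course the whole point of Dvir's proof is that no search is needed: the binomial coefficient is just the dimension of the space of polynomials of degree less than q, and a Kakeya set smaller than that would admit a nonzero low-degree vanishing polynomial, which the line structure then rules out.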
