
Let $F$ be a finite field, with algebraic closure $\overline{F}$, and let $V$ be an (affine) algebraic variety defined over $\overline{F}$, by which I mean a set of the form

$V = \{ x \in \overline{F}^n : P_1(x) = \dots = P_m(x) = 0 \}$

for some ambient dimension $n$, and some finite number of polynomials $P_1,\dots,P_m : \overline{F}^n \to \overline{F}$. In order to reduce the number of subscripts later on, let us say that $V$ has *complexity at most $M$* if $n \le M$, $m \le M$, and the degrees of the $P_1,\dots,P_m$ are all less than or equal to $M$. Note that we do not require at this stage that $V$ be irreducible (i.e. not the union of two strictly smaller varieties), or defined over $F$, though we will often specialise to these cases later in this post. (Also, everything said here can also be applied with almost no changes to projective varieties, but we will stick with affine varieties for sake of concreteness.)

One can consider two crude measures of how “big” the variety $V$ is. The first measure, which is algebraic geometric in nature, is the *dimension* $\dim(V)$ of the variety $V$, which is an integer between $0$ and $n$ (or, depending on convention, $-1$, $-\infty$, or undefined, if $V$ is empty) that can be defined in a large number of ways (e.g. it is the largest $d$ for which the generic linear projection from $V$ to $\overline{F}^d$ is dominant, or the largest $d$ for which the intersection of $V$ with a generic codimension $d$ affine subspace is non-empty). The second measure, which is number-theoretic in nature, is the number $|V(F)|$ of $F$-points of $V$, i.e. points in $V$ all of whose coordinates lie in the finite field $F$, or equivalently the number of solutions to the system of equations $P_1(x) = \dots = P_m(x) = 0$ with the variable $x$ ranging in $F^n$.

These two measures are linked together in a number of ways. For instance, we have the basic Schwarz-Zippel type bound (which, in this qualitative form, goes back at least to Lemma 1 of the work of Lang and Weil in 1954).

Lemma 1 (Schwarz-Zippel type bound). Let $V$ be a variety of complexity at most $M$. Then we have $|V(F)| \ll_M |F|^{\dim(V)}$.

*Proof:* (Sketch) For the purposes of exposition, we will not carefully track the dependencies of implied constants on the complexity $M$, instead simply assuming that all of these quantities remain controlled throughout the argument. (If one wished, one could obtain ineffective bounds on these quantities by an ultralimit argument, as discussed in this previous post, or equivalently by moving everything over to a nonstandard analysis framework; one could also obtain such uniformity using the machinery of schemes.)

We argue by induction on the ambient dimension $n$ of the variety $V$. The case $n=0$ is trivial, so suppose $n \ge 1$ and that the claim has already been proven for $n-1$. By breaking up $V$ into irreducible components we may assume that $V$ is irreducible (this requires some control on the number and complexity of these components, but this is available, as discussed in this previous post). For each $x' \in \overline{F}^{n-1}$, the fibre $\{ t \in \overline{F} : (x',t) \in V \}$ is either one-dimensional (and thus all of $\overline{F}$) or zero-dimensional. In the latter case, one has $O_M(1)$ points in the fibre from the fundamental theorem of algebra (indeed one has a bound of $M$ in this case), and $x'$ lives in the projection of $V$ to $\overline{F}^{n-1}$, which is a variety of dimension at most $\dim(V)$ and controlled complexity, so the contribution of this case is acceptable from the induction hypothesis. In the former case, the fibre contributes $|F|$ $F$-points, but $x'$ lies in a subvariety of $\overline{F}^{n-1}$ of dimension at most $\dim(V)-1$ (since otherwise $V$ would contain a subvariety of dimension at least $\dim(V)+1$, which is absurd) and controlled complexity, and so the contribution of this case is also acceptable from the induction hypothesis.

One can improve the bound on the implied constant to be linear in the degree of $V$ (see e.g. Claim 7.2 of this paper of Dvir, Kollar, and Lovett, or Lemma A.3 of this paper of Ellenberg, Oberlin, and myself), but we will not be concerned with these improvements here.
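As a quick numerical sanity check of Lemma 1, one can brute-force count $F_p$-points of a curve and of a surface and observe the counts scaling like $q^{\dim(V)}$ (a small illustrative script; the two varieties chosen here are graphs, so the counts are exact):

```python
from itertools import product

def count_points(p, pred, n):
    """Brute-force count of F_p-points of {x in F_p^n : pred(x) holds}."""
    return sum(1 for v in product(range(p), repeat=n) if pred(v, p))

parabola = lambda v, p: (v[1] - v[0] * v[0]) % p == 0   # y = x^2, dimension 1
graph    = lambda v, p: (v[2] - v[0] * v[1]) % p == 0   # z = x y, dimension 2

for p in [3, 5, 7, 11, 13]:
    assert count_points(p, parabola, 2) == p            # ~ q^1 points
    assert count_points(p, graph, 3) == p * p           # ~ q^2 points
```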

Without further hypotheses on $V$, the above upper bound is sharp (except for improvements in the implied constants). For instance, the variety

$V = \{ x \in \overline{F}^n : (x_1 - a_1) \dots (x_1 - a_M) = 0 \},$

where $a_1,\dots,a_M \in F$ are distinct, is the union of $M$ distinct hyperplanes of dimension $n-1$, with $|V(F)| = M |F|^{n-1}$ and complexity $O(M)$; similar examples can easily be concocted for other choices of dimension. In the other direction, there is also no non-trivial lower bound for $|V(F)|$ without further hypotheses on $V$. For a trivial example, if $a$ is an element of $\overline{F}$ that does not lie in $F$, then the hyperplane

$V = \{ x \in \overline{F}^n : x_1 = a \}$

clearly has no $F$-points whatsoever, despite being an $(n-1)$-dimensional variety in $\overline{F}^n$ of complexity $O(1)$. For a slightly less non-trivial example, if $a$ is an element of $F$ that is not a quadratic residue, then the variety

$V = \{ x \in \overline{F}^n : x_1^2 = a \},$

which is the union of two hyperplanes, still has no $F$-points, even though this time the variety is defined over $F$ instead of $\overline{F}$ (by which we mean that the defining polynomial(s) have all of their coefficients in $F$). There is however the important Lang-Weil bound that allows for a much better estimate as long as $V$ is both defined over $F$ *and* irreducible:

Theorem 2 (Lang-Weil bound). Let $V$ be a variety of complexity at most $M$. Assume that $V$ is defined over $F$, and that $V$ is irreducible as a variety over $\overline{F}$ (i.e. $V$ is *geometrically irreducible* or *absolutely irreducible*). Then

$|V(F)| = (1 + O_M(|F|^{-1/2})) |F|^{\dim(V)}.$

Again, more explicit bounds on the implied constant here are known, but will not be the focus of this post. As the previous examples show, the hypotheses of definability over $F$ and geometric irreducibility are both necessary.

The Lang-Weil bound is already non-trivial in the model case of plane curves:

Theorem 3 (Hasse-Weil bound). Let $P: \overline{F}^2 \to \overline{F}$ be an irreducible polynomial of degree $d$ with coefficients in $F$. Then

$|\{ (x,y) \in F^2 : P(x,y) = 0 \}| = |F| + O_d( |F|^{1/2} ).$

Thus, for instance, if $F$ has characteristic greater than $3$, then the elliptic curve $\{ (x,y) \in F^2 : y^2 = x^3 + ax + b \}$ (with $a, b \in F$) has $|F| + O(|F|^{1/2})$ $F$-points, a result first established by Hasse. The Hasse-Weil bound is already quite non-trivial, being the analogue of the Riemann hypothesis for plane curves. For hyper-elliptic curves, an elementary proof (due to Stepanov) is discussed in this previous post. For general plane curves, the first proof was by Weil (leading to his famous Weil conjectures); there is also a nice version of Stepanov’s argument due to Bombieri covering this case which is a little less elementary (relying crucially on the Riemann-Roch theorem for the upper bound, and a lifting trick to then get the lower bound), which I briefly summarise later in this post. The full Lang-Weil bound is deduced from the Hasse-Weil bound by an induction argument using generic hyperplane slicing, as I will also summarise later in this post.
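Hasse's bound is easy to test numerically; the following sketch counts the affine points of the curve $y^2 = x^3 + x$ over several primes and checks that (after adding the point at infinity) the count stays within $2\sqrt{p}$ of $p+1$:

```python
import math

def count_affine(p, a, b):
    """Affine F_p-points of the elliptic curve y^2 = x^3 + a x + b."""
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - (x ** 3 + a * x + b)) % p == 0)

for p in [5, 7, 11, 101, 1009]:
    N = count_affine(p, 1, 0)                 # the curve y^2 = x^3 + x
    # Hasse: counting the point at infinity, |#E(F_p) - (p + 1)| <= 2 sqrt(p)
    assert abs((N + 1) - (p + 1)) <= 2 * math.sqrt(p)
```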

The hypotheses of definability over $F$ and geometric irreducibility in the Lang-Weil bound can be removed after inserting a geometric factor:

Corollary 4 (Lang-Weil bound, alternate form). Let $V$ be a variety of complexity at most $M$. Then one has

$|V(F)| = c(V) |F|^{\dim(V)} + O_M( |F|^{\dim(V) - 1/2} ),$

where $c(V)$ is the number of top-dimensional components of $V$ (i.e. geometrically irreducible components of $V$ of dimension $\dim(V)$) that are definable over $F$, or equivalently are invariant with respect to the Frobenius endomorphism $x \mapsto x^{|F|}$ that defines $F$.

*Proof:* By breaking up a general variety $V$ into components (and using Lemma 1 to dispose of any lower-dimensional components), it suffices to establish this claim when $V$ is itself geometrically irreducible. If $V$ is definable over $F$, the claim follows from Theorem 2. If $V$ is not definable over $F$, then it is not fixed by the Frobenius endomorphism $\mathrm{Frob}$ (since otherwise one could produce a set of defining polynomials that were fixed by Frobenius and thus defined over $F$ by using some canonical basis (such as a reduced Gröbner basis) for the associated ideal), and so $V \cap \mathrm{Frob}(V)$ has strictly smaller dimension than $V$. But $V \cap \mathrm{Frob}(V)$ captures all the $F$-points of $V$, so in this case the claim follows from Lemma 1.

Note that if a variety $V$ is reducible but is itself defined over $F$, then the Frobenius endomorphism preserves $V$ itself, but may permute the components of $V$ around. In this case, $c(V)$ is the number of fixed points of this permutation action of Frobenius on the components. In particular, $c(V)$ is always a natural number between $0$ and $O_M(1)$; thus we see that regardless of the geometry of $V$, the normalised count $|V(F)| / |F|^{\dim(V)}$ is asymptotically restricted to a bounded range of natural numbers (in the regime where the complexity stays bounded and $|F|$ goes to infinity).

Example 1. Consider the variety

$V = \{ (x,y) \in \overline{F}^2 : y^2 = a x^2 \}$

for some non-zero parameter $a \in F$. Geometrically (by which we basically mean “when viewed over the algebraically closed field $\overline{F}$“), this is the union of two lines, with slopes corresponding to the two square roots of $a$. If $a$ is a quadratic residue, then both of these lines are defined over $F$, and are fixed by Frobenius, and $c(V) = 2$ in this case. If $a$ is not a quadratic residue, then the lines are not defined over $F$, and the Frobenius automorphism permutes the two lines while preserving $V$ as a whole, giving $c(V) = 0$ in this case.
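One can see the dichotomy of Example 1 concretely by counting points: over $F_p$ the variety $\{y^2 = a x^2\}$ has $2p-1$ points (two lines sharing the origin, matching $c(V)=2$) when $a$ is a quadratic residue, and only the origin (matching $c(V)=0$, up to the lower-order term) otherwise. A small check:

```python
def count_V(p, a):
    """F_p-points of V = {(x, y) : y^2 = a x^2}."""
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - a * x * x) % p == 0)

p = 13
squares = {x * x % p for x in range(1, p)}      # non-zero quadratic residues
for a in range(1, p):
    # two Frobenius-fixed lines (sharing the origin), or none:
    assert count_V(p, a) == (2 * p - 1 if a in squares else 1)
```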

Corollary 4 effectively computes (at least to leading order) the number-theoretic size $|V(F)|$ of a variety in terms of geometric information about $V$, namely its dimension and the number of top-dimensional components fixed by Frobenius. It turns out that with a little bit more effort, one can extend this connection to cover not just a single variety $V$, but a family of varieties indexed by points in some base space $W$. More precisely, suppose we now have two affine varieties $V, W$ of bounded complexity, together with a regular map $\phi: V \to W$ of bounded complexity (the definition of complexity of a regular map is a bit technical, see e.g. this paper, but one can think for instance of a polynomial or rational map of bounded degree as a good example). It will be convenient to assume that the base space $W$ is irreducible. If $\phi$ is a dominant map (i.e. the image $\phi(V)$ is Zariski dense in $W$), then standard algebraic geometry results tell us that the fibres $\phi^{-1}(\{w\})$ are an unramified family of $(\dim(V)-\dim(W))$-dimensional varieties outside of an exceptional subset $W'$ of $W$ of dimension strictly smaller than $\dim(W)$ (and with $\phi^{-1}(W')$ having dimension strictly smaller than $\dim(V)$); see e.g. Section I.6.3 of Shafarevich.

Now suppose that $V$, $W$, and $\phi$ are defined over $F$. Then, by Lang-Weil, $W$ has $(1 + O(|F|^{-1/2})) |F|^{\dim(W)}$ $F$-points, and by Schwarz-Zippel, for all but $O(|F|^{\dim(W)-1})$ of these $F$-points $w$ (the ones that lie in the subvariety $W'$), the fibre $\phi^{-1}(\{w\})$ is an algebraic variety defined over $F$ of dimension $\dim(V) - \dim(W)$. By using ultraproduct arguments (see e.g. Lemma 3.7 of this paper of mine with Emmanuel Breuillard and Ben Green), this variety can be shown to have bounded complexity, and thus by Corollary 4, $\phi^{-1}(\{w\})$ has $(c(\phi^{-1}(\{w\})) + O(|F|^{-1/2})) |F|^{\dim(V)-\dim(W)}$ $F$-points. One can then ask how the quantity $c(\phi^{-1}(\{w\}))$ is distributed. A simple but illustrative example occurs when $V = W = \overline{F}$ and $\phi$ is the polynomial $\phi(x) := x^2$. Then $c(\phi^{-1}(\{w\}))$ equals $2$ when $w$ is a non-zero quadratic residue and $0$ when $w$ is a non-zero quadratic non-residue (and $1$ when $w$ is zero, but this is a negligible fraction of all $w$). In particular, in the asymptotic limit $|F| \to \infty$, $c(\phi^{-1}(\{w\}))$ is equal to $2$ half of the time and $0$ half of the time.

Now we describe the asymptotic distribution of the quantities $c(\phi^{-1}(\{w\}))$. We need some additional notation. Let $w_0$ be an $F$-point in $W \backslash W'$, and let $V_1,\dots,V_c$ be the connected components of the fibre $\phi^{-1}(\{w_0\})$. As this fibre is defined over $F$, this set of components is permuted by the Frobenius endomorphism $\mathrm{Frob}$. But there is also an action by monodromy of the fundamental group $\pi_1(W \backslash W')$ (this requires a certain amount of étale machinery to properly set up, as we are working over a positive characteristic field rather than over the complex numbers, but I am going to ignore this rather important detail here, as I still don’t fully understand it). This fundamental group may be infinite, but (by the étale construction) is always profinite, and in particular has a *Haar probability measure*, in which every finite index subgroup (and their cosets) are measurable. Thus we may meaningfully talk about elements drawn uniformly at random from this group, so long as we work only with the profinite $\sigma$-algebra on $\pi_1(W \backslash W')$ that is generated by the cosets of the finite index subgroups of this group (which will be the only relevant sets we need to measure when considering the action of this group on finite sets, such as the components of a generic fibre).

Theorem 5 (Lang-Weil with parameters). Let $V, W$ be varieties of complexity at most $M$ with $W$ irreducible, and let $\phi: V \to W$ be a dominant map of complexity at most $M$. Let $w_0$ be an $F$-point of $W \backslash W'$. Then, for any natural number $a$, one has $c(\phi^{-1}(\{w\})) = a$ for $(\mathbf{P}(X = a) + O_M(|F|^{-1/2})) |F|^{\dim(W)}$ values of $w \in W(F)$, where $X$ is the random variable that counts the number of components of a generic fibre that are invariant under $g\,\mathrm{Frob}$, where $g$ is an element chosen uniformly at random from the étale fundamental group $\pi_1(W \backslash W')$. In particular, in the asymptotic limit $|F| \to \infty$, and with $w$ chosen uniformly at random from $W(F)$, $c(\phi^{-1}(\{w\}))$ (or, equivalently, $|F|^{-(\dim(V)-\dim(W))} |\phi^{-1}(\{w\})(F)|$) and $X$ have the same asymptotic distribution.

This theorem generalises Corollary 4 (which is the case when $W$ is just a point, so that the fibre is just $V$ and the fundamental group is trivial). Informally, the effect of a non-trivial parameter space $W$ on the Lang-Weil bound is to push around the Frobenius map $\mathrm{Frob}$ by monodromy for the purposes of counting invariant components, and a randomly chosen set of parameters corresponds to a randomly chosen loop on which to perform monodromy.

Example 2. Let $V = W = \overline{F}$ and $\phi(x) := x^n$ for some fixed $n \ge 1$; to avoid some technical issues let us suppose that $n$ is coprime to $q := |F|$. Then $W'$ can be taken to be $\{0\}$, and for a base point $w_0$ we can take $w_0 = 1$. The fibre $\phi^{-1}(\{1\})$ – the $n^{th}$ roots of unity – can be identified with the cyclic group $\mathbf{Z}/n\mathbf{Z}$ by using a primitive $n^{th}$ root of unity. The étale fundamental group is (I think) isomorphic to the profinite closure $\hat{\mathbf{Z}}$ of the integers (excluding the part of that closure coming from the characteristic of $F$). Not coincidentally, the integers $\mathbf{Z}$ are the fundamental group of the complex analogue $\mathbf{C} \backslash \{0\}$ of $\overline{F} \backslash \{0\}$. (Brian Conrad points out to me though that for more complicated varieties, such as covers of $\overline{F} \backslash \{0\}$ by a power of the characteristic, the étale fundamental group is more complicated than just a profinite closure of the ordinary fundamental group, due to the presence of Artin-Schreier covers that are only ramified at infinity.) The action of this fundamental group on the fibre $\mathbf{Z}/n\mathbf{Z}$ is given by translation. Meanwhile, the Frobenius map $x \mapsto x^q$ on the fibre is given by multiplication by $q$. A random element $g\,\mathrm{Frob}$ then becomes a random affine map $x \mapsto qx + m$ on $\mathbf{Z}/n\mathbf{Z}$, where $m$ is chosen uniformly at random from $\mathbf{Z}/n\mathbf{Z}$. The number of fixed points of this map is equal to the greatest common divisor $(q-1,n)$ of $q-1$ and $n$ when $m$ is divisible by $(q-1,n)$, and equal to $0$ otherwise. This matches up with the elementary number theory fact that a randomly chosen non-zero element of $F$ will be an $n^{th}$ power with probability $1/(q-1,n)$, and when this occurs, the number of $n^{th}$ roots in $F$ will be $(q-1,n)$.
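The elementary number theory fact at the end of Example 2 is easy to verify by direct computation; here is a small check with $q = 31$ and $n = 4$, so that the relevant greatest common divisor is $(q-1, n) = 2$:

```python
from math import gcd

q, n = 31, 4
g = gcd(n, q - 1)                              # = 2 here
powers = {pow(x, n, q) for x in range(1, q)}   # non-zero n-th powers in F_q
# a random non-zero element is an n-th power with probability 1/(q-1, n):
assert len(powers) == (q - 1) // g
# and each n-th power has exactly (q-1, n) n-th roots in F_q:
for t in powers:
    assert sum(1 for x in range(1, q) if pow(x, n, q) == t) == g
```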

Example 3. (Thanks to Jordan Ellenberg for this example.) Consider a random elliptic curve $E = \{ y^2 = x^3 + ax + b \}$, where $a, b$ are chosen uniformly at random from $F$, and let $n \ge 1$ be a fixed integer. Let $E[n]$ be the $n$-torsion points of $E$ (i.e. those elements $P$ of $E$ with $nP = 0$ using the elliptic curve addition law); as a group, this is isomorphic to $(\mathbf{Z}/n\mathbf{Z})^2$ (assuming that $F$ has sufficiently large characteristic, for simplicity), and consider the number of $F$-points of $E[n]$, which is a random variable taking values in the natural numbers between $0$ and $n^2$. In this case, the base variety is the modular curve $X(1)$, and the covering variety is the modular curve $X(n)$. The generic fibre here can be identified with $(\mathbf{Z}/n\mathbf{Z})^2$, the monodromy action projects down to the action of $SL_2(\mathbf{Z}/n\mathbf{Z})$, and the action of Frobenius on this fibre can be shown to be given by a matrix with determinant $q$ (with the exact choice of matrix depending on the choice of fibre and of the identification), so the distribution of the number of $F$-points of $E[n]$ is asymptotic to the distribution of the number of fixed points of a random linear map of determinant $q$ on $(\mathbf{Z}/n\mathbf{Z})^2$.
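To get a feel for Example 3, one can enumerate all $2 \times 2$ matrices over $\mathbf{Z}/n\mathbf{Z}$ of determinant $q \bmod n$ and tabulate their fixed points; the fixed set of $M$ is the kernel of $M - 1$, so for $n = 5$ and $q = 7$ (say) the possible counts are $1$ and $5$ (the value $25$ would require $M = 1$, which has the wrong determinant). A small illustrative check:

```python
from itertools import product

def fixed_points(M, n):
    """Number of vectors in (Z/nZ)^2 fixed by the matrix M = (a, b, c, d)."""
    a, b, c, d = M
    return sum(1 for x, y in product(range(n), repeat=2)
               if (a * x + b * y) % n == x and (c * x + d * y) % n == y)

n, q = 5, 7
mats = [M for M in product(range(n), repeat=4)
        if (M[0] * M[3] - M[1] * M[2]) % n == q % n]   # determinant q mod n
counts = {fixed_points(M, n) for M in mats}
assert counts == {1, 5}                                # the sizes of ker(M - 1)
```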

Theorem 5 seems to be well known “folklore” among arithmetic geometers, though I do not know of an explicit reference for it. I enjoyed deriving it for myself (though my derivation is somewhat incomplete due to my lack of understanding of étale cohomology) from the ordinary Lang-Weil theorem and the moment method. I’m recording this derivation later in this post, mostly for my own benefit (as I am still in the process of learning this material), though perhaps some other readers may also be interested in it.

Caveat: not all details are fully fleshed out in this writeup, particularly those involving the finer points of algebraic geometry and étale cohomology, as my understanding of these topics is not as complete as I would like it to be.

Many thanks to Brian Conrad and Jordan Ellenberg for helpful discussions on these topics.

The ham sandwich theorem asserts that, given $n$ bounded open sets $U_1,\dots,U_n$ in $\mathbf{R}^n$, there exists a hyperplane $\{ x \in \mathbf{R}^n : x \cdot v = c \}$ that bisects each of these sets $U_i$, in the sense that each of the two half-spaces $\{ x : x \cdot v < c \}$, $\{ x : x \cdot v > c \}$ on either side of the hyperplane captures exactly half of the volume of $U_i$. The shortest proof of this result proceeds by invoking the Borsuk-Ulam theorem.

A useful generalisation of the ham sandwich theorem is the *polynomial ham sandwich theorem*, which asserts that given $m$ bounded open sets $U_1,\dots,U_m$ in $\mathbf{R}^n$, there exists a hypersurface $\{ Q = 0 \}$ of degree $O_n( m^{1/n} )$ (thus $Q: \mathbf{R}^n \to \mathbf{R}$ is a polynomial of degree $O_n(m^{1/n})$) such that the two semi-algebraic sets $\{ Q > 0 \}$ and $\{ Q < 0 \}$ capture half the volume of each of the $U_i$. (More precisely, the degree will be at most $D$, where $D$ is the first positive integer for which $\binom{D+n}{n}$ exceeds $m$.) This theorem can be deduced from the Borsuk-Ulam theorem in the same manner that the ordinary ham sandwich theorem is (and can also be deduced directly from the ordinary ham sandwich theorem via the Veronese embedding).

The polynomial ham sandwich theorem is a theorem about continuous bodies (bounded open sets), but a simple limiting argument leads one to the following discrete analogue: given $m$ *finite* sets $S_1,\dots,S_m$ in $\mathbf{R}^n$, there exists a hypersurface $\{ Q = 0 \}$ of degree $O_n( m^{1/n} )$, such that each of the two semi-algebraic sets $\{ Q > 0 \}$ and $\{ Q < 0 \}$ contains at most half of the points of each $S_i$ (note that some of the points of $S_i$ can certainly lie on the boundary $\{ Q = 0 \}$). This can be iterated to give a useful cell decomposition:

Proposition 1 (Cell decomposition). Let $P$ be a finite set of points in $\mathbf{R}^n$, and let $D \ge 1$ be a positive integer. Then there exists a polynomial $Q$ of degree at most $D$, and a decomposition

$\mathbf{R}^n = \{ Q = 0 \} \cup C_1 \cup \dots \cup C_m$

into the hypersurface $\{ Q = 0 \}$ and a collection $C_1,\dots,C_m$ of cells bounded by $\{ Q = 0 \}$, such that $m = O_n( D^n )$, and such that each cell $C_i$ contains at most $O_n( |P| / D^n )$ points.

A proof is sketched in this previous blog post. The cells $C_i$ in the argument are not necessarily connected (being instead formed by intersecting together a number of semi-algebraic sets such as $\{ Q_j > 0 \}$ and $\{ Q_j < 0 \}$), but it is a classical result (established independently by Oleinik-Petrovskii, Milnor, and Thom) that any degree $D$ hypersurface $\{ Q = 0 \}$ divides $\mathbf{R}^n$ into $O_n( D^n )$ connected components, so one can easily assume that the cells are connected if desired. (Incidentally, one does not need the full machinery of the results in the above cited papers – which control not just the number of components, but all the Betti numbers of the complement of $\{ Q = 0 \}$ – to get the $O_n(D^n)$ bound on connected components; one can instead observe that every bounded connected component has a critical point where $\nabla Q = 0$, and one can control the number of these points by Bezout’s theorem, after perturbing $Q$ slightly to enforce genericity, and then count the unbounded components by an induction on dimension.)

Remark 1. By setting $D$ as large as $O_n( |P|^{1/n} )$, we obtain as a limiting case of the cell decomposition the fact that any finite set of $|P|$ points in $\mathbf{R}^n$ can be captured by a hypersurface of degree $O_n( |P|^{1/n} )$. This fact is in fact true over arbitrary fields (not just over $\mathbf{R}$), and can be proven by a simple linear algebra argument (see e.g. this previous blog post). However, the cell decomposition is more flexible than this algebraic fact due to the ability to arbitrarily select the degree parameter $D$.
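The linear algebra argument of Remark 1 can be carried out quite explicitly: the evaluation map from polynomials of degree at most $D$ to their values at the given points has a nontrivial kernel as soon as the number of monomials $\binom{D+n}{n}$ exceeds the number of points. Here is a sketch in the plane, using exact rational arithmetic to extract a nullspace vector:

```python
from fractions import Fraction

def vanishing_poly(points, D):
    """A nonzero polynomial of degree <= D vanishing on all the given planar
    points, found from the nullspace of the evaluation matrix."""
    monos = [(i, j) for i in range(D + 1) for j in range(D + 1 - i)]
    mat = [[Fraction(x ** i * y ** j) for (i, j) in monos] for (x, y) in points]
    ncols, pivots, r = len(monos), [], 0
    for c in range(ncols):                       # Gauss-Jordan to RREF
        pr = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if pr is None:
            continue
        mat[r], mat[pr] = mat[pr], mat[r]
        pv = mat[r][c]
        mat[r] = [v / pv for v in mat[r]]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                f = mat[i][c]
                mat[i] = [u - f * v for u, v in zip(mat[i], mat[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(ncols) if c not in pivots)  # ncols > #points
    coeffs = [Fraction(0)] * ncols
    coeffs[free] = Fraction(1)
    for row, c in enumerate(pivots):             # back-substitute a kernel vector
        coeffs[c] = -mat[row][free]
    return monos, coeffs

pts = [(x, y) for x in range(3) for y in range(3)]   # 9 points; C(3+2,2) = 10 > 9
monos, coeffs = vanishing_poly(pts, 3)
assert any(coeffs)
for (x, y) in pts:
    assert sum(c * x ** i * y ** j for c, (i, j) in zip(coeffs, monos)) == 0
```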

The cell decomposition can be viewed as a structural theorem for arbitrary large configurations of points in space, much as the Szemerédi regularity lemma can be viewed as a structural theorem for arbitrary large dense graphs. Indeed, just as many problems in the theory of large dense graphs can be profitably attacked by first applying the regularity lemma and then inspecting the outcome, it now seems that many problems in combinatorial incidence geometry can be attacked by applying the cell decomposition (or a similar such decomposition), with a parameter to be optimised later, to a relevant set of points, and seeing how the cells interact with each other and with the other objects in the configuration (lines, planes, circles, etc.). This strategy was spectacularly illustrated recently with Guth and Katz‘s use of the cell decomposition to resolve the Erdös distinct distance problem (up to logarithmic factors), as discussed in this blog post.

In this post, I wanted to record a simpler (but still illustrative) version of this method (that I learned from Nets Katz), namely to provide yet another proof of the Szemerédi-Trotter theorem in incidence geometry:

Theorem 2 (Szemerédi-Trotter theorem). Given a finite set of points $P$ and a finite set of lines $L$ in the plane $\mathbf{R}^2$, the set of incidences $I(P,L) := \{ (p,\ell) \in P \times L : p \in \ell \}$ has cardinality

$|I(P,L)| \ll |P|^{2/3} |L|^{2/3} + |P| + |L|.$

This theorem has many short existing proofs, including one via crossing number inequalities (as discussed in this previous post) or via a slightly different type of cell decomposition (as discussed here). The proof given below is not that different, in particular, from the latter proof, but I believe it still serves as a good introduction to the polynomial method in combinatorial incidence geometry.
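For intuition, the standard grid construction shows that the $|P|^{2/3} |L|^{2/3}$ term in Theorem 2 cannot be improved; a small brute-force check of that construction:

```python
k = 4
points = [(x, y) for x in range(k) for y in range(2 * k * k)]   # |P| = 2k^3
lines = [(m, c) for m in range(k) for c in range(k * k)]        # y = m x + c

pts = set(points)
I = sum(1 for (m, c) in lines for x in range(k) if (x, m * x + c) in pts)

assert I == k ** 4                       # each line carries k points of the grid
# the incidence count is comparable to (|P| |L|)^(2/3), the main ST term:
assert 0.1 < I / (len(points) * len(lines)) ** (2 / 3) < 10
```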

Combinatorial incidence geometry is the study of the possible combinatorial configurations between geometric objects such as lines and circles. One of the basic open problems in the subject has been the Erdős distance problem, posed in 1946:

Problem 1 (Erdős distance problem). Let $N$ be a large natural number. What is the least number of distances that are determined by $N$ points in the plane?

Erdős called this least number $g(N)$. For instance, one can check that $g(3) = 1$ and $g(4) = 2$, although the precise computation of $g(N)$ rapidly becomes more difficult after this. By considering $N$ points in arithmetic progression, we see that $g(N) \le N - 1$. By considering the slightly more sophisticated example of a lattice grid $\{1,\dots,\sqrt{N}\}^2$ (assuming that $N$ is a square number for simplicity), and using some analytic number theory, one can obtain the slightly better asymptotic bound $g(N) = O( N / \sqrt{\log N} )$.
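Both upper-bound examples are easy to play with computationally; note that counting distinct *squared* distances suffices, since squaring is injective on nonnegative reals:

```python
from itertools import combinations

def distinct_distances(pts):
    """Number of distinct distances among pts (via squared distances)."""
    return len({(px - qx) ** 2 + (py - qy) ** 2
                for (px, py), (qx, qy) in combinations(pts, 2)})

# N collinear points in arithmetic progression determine N - 1 distances:
assert distinct_distances([(i, 0) for i in range(10)]) == 9

# a 10 x 10 grid (N = 100) does noticeably better than N - 1 = 99:
grid = [(i, j) for i in range(10) for j in range(10)]
print(distinct_distances(grid))          # comfortably below 99
```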

On the other hand, lower bounds are more difficult to obtain. As observed by Erdős, an easy argument, ultimately based on the incidence geometry fact that any two circles intersect in at most two points, gives the lower bound $g(N) \gg N^{1/2}$. The exponent $1/2$ has been slowly increasing over the years by a series of increasingly intricate arguments combining facts from combinatorial incidence geometry (most notably the Szemerédi-Trotter theorem) with some tools from additive combinatorics; however, these methods seemed to fall quite short of getting to the optimal exponent of $1$. (Indeed, prior to last week, the best lower bound known was approximately $N^{0.8641}$, due to Katz and Tardos.)

Very recently, though, Guth and Katz have obtained a near-optimal result:

Theorem 2 (Guth-Katz). $g(N) \gg N / \log N$.

The proof neatly combines together several powerful and modern tools in a new way: a recent geometric reformulation of the problem due to Elekes and Sharir; the polynomial method as used recently by Dvir, Guth, and Guth-Katz on related incidence geometry problems (and discussed previously on this blog); and the somewhat older method of cell decomposition (also discussed on this blog). A key new insight is that the polynomial method (and more specifically, the *polynomial Ham Sandwich theorem*, also discussed previously on this blog) can be used to efficiently create cells.

In this post, I thought I would sketch some of the key ideas used in the proof, though I will not give the full argument here (the paper itself is largely self-contained, well motivated, and of only moderate length). In particular I will not go through all the various cases of configuration types that one has to deal with in the full argument, but only some illustrative special cases.

To simplify the exposition, I will repeatedly rely on “pigeonholing cheats”. A typical such cheat: if I have $n$ objects (e.g. points or lines), each of which could be of one of two types, I will assume that either all $n$ of the objects are of the first type, or all $n$ of the objects are of the second type. (In truth, I can only assume that at least $n/2$ of the objects are of the first type, or at least $n/2$ of the objects are of the second type; but in practice, having $n/2$ instead of $n$ only ends up costing an unimportant multiplicative constant in the type of estimates used here.) A related such cheat: if one has $n$ objects (again, think of points or circles), and to each object $i$ one can associate some natural number $k_i$ (e.g. some sort of “multiplicity” for $i$) that is of “polynomial size” (of size $O(n^{O(1)})$), then I will assume in fact that all the $k_i$ are in a fixed dyadic range $[k, 2k]$ for some $k$. (In practice, the dyadic pigeonhole principle can only achieve this after throwing away all but about $1/\log n$ of the original objects; it is this type of logarithmic loss that eventually leads to the logarithmic factor in the main theorem.) Using the notation $X \sim Y$ to denote the assertion that $C^{-1} Y \le X \le C Y$ for an absolute constant $C$, we thus have $k_i \sim k$ for all $i$, thus $k_i$ is morally constant.
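The dyadic pigeonhole cheat can be sketched concretely: bucketing $n$ polynomial-size multiplicities by their dyadic range produces only $O(\log n)$ buckets, so the largest bucket retains at least a $1/O(\log n)$ fraction of the objects:

```python
import math, random
from collections import defaultdict

random.seed(0)
n = 10_000
mult = [random.randint(1, n) for _ in range(n)]   # multiplicities of polynomial size

buckets = defaultdict(list)
for i, k in enumerate(mult):
    buckets[k.bit_length()].append(i)             # dyadic range [2^(b-1), 2^b)

# only O(log n) dyadic ranges occur...
assert len(buckets) <= math.ceil(math.log2(n)) + 1
# ...so by pigeonhole some range retains a 1/O(log n) fraction of the objects:
kept = max(buckets.values(), key=len)
assert len(kept) >= n / len(buckets)
```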

I will also use asymptotic notation rather loosely, to avoid cluttering the exposition with a certain amount of routine but tedious bookkeeping of constants. In particular, I will use the informal notation $X \ll Y$ or $X \gg Y$ to denote the statement that $X$ is “much less than” $Y$ or $X$ is “much larger than” $Y$, by some large constant factor.

See also János Pach’s recent reaction to the Guth-Katz paper on Gil Kalai’s blog.

Below the fold is a version of my talk “Recent progress on the Kakeya conjecture” that I gave at the Fefferman conference.

Jordan Ellenberg, Richard Oberlin, and I have just uploaded to the arXiv the paper “The Kakeya set and maximal conjectures for algebraic varieties over finite fields“, submitted to Mathematika. This paper builds upon some work of Dvir and later authors on the Kakeya problem in finite fields, which I have discussed in this earlier blog post. Dvir established the following:

Kakeya set conjecture for finite fields. Let $F$ be a finite field, and let $E$ be a subset of $F^n$ that contains a line in every direction. Then $E$ has cardinality at least $c_n |F|^n$ for some $c_n > 0$ depending only on $n$.

The initial argument of Dvir gave $c_n = 1/n!$. This was improved to $c_n = c^n$ for some explicit $c > 0$ by Saraf and Sudan, and recently to $c_n = 2^{-n}$ by Dvir, Kopparty, Saraf, and Sudan, which is within a factor 2 of the optimal result.
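In the other direction, it is instructive to see how small a Kakeya set in $F_q^2$ can be. The following sketch builds the classical example of size roughly $q^2/2$ (for each slope $m$ it contains the line $y = mx + m^2/4$, so each point is covered by at most two slopes) and verifies the Kakeya property:

```python
def kakeya_set(q):
    """A small Kakeya set in F_q^2, q an odd prime: the vertical line x = 0
    together with, for each slope m, the line y = m x + m^2/4."""
    inv4 = pow(4, q - 2, q)                       # inverse of 4 in F_q
    E = {(t, (m * t + m * m * inv4) % q) for t in range(q) for m in range(q)}
    E |= {(0, y) for y in range(q)}
    return E

def is_kakeya(E, q):
    """Check E contains a full line in every direction (all slopes + vertical)."""
    vert = all((0, y) in E for y in range(q))
    slopes = all(any(all((t, (m * t + c) % q) in E for t in range(q))
                     for c in range(q))
                 for m in range(q))
    return vert and slopes

q = 13
E = kakeya_set(q)
assert is_kakeya(E, q)
assert len(E) <= (q * q + 3 * q) // 2             # about half of |F_q^2| = q^2
```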

In our work we investigate a somewhat different set of improvements to Dvir’s result. The first concerns the *Kakeya maximal function* $f^*$ of a function $f: F^n \to \mathbf{R}$, defined for all directions $\xi$ in the projective hyperplane $\mathbf{P}^{n-1}$ at infinity by the formula

$f^*(\xi) = \sup_{\ell \parallel \xi} \sum_{x \in \ell} |f(x)|,$

where the supremum ranges over all lines $\ell$ in $F^n$ oriented in the direction $\xi$. Our first result is the endpoint estimate for this operator, namely:

Kakeya maximal function conjecture in finite fields. We have $\| f^* \|_{\ell^n(\mathbf{P}^{n-1})} \le C_n |F|^{(n-1)/n} \| f \|_{\ell^n(F^n)}$ for some constant $C_n > 0$ depending only on $n$.

This result implies Dvir’s result, since if $f$ is the indicator function of the set $E$ in Dvir’s result, then $f^*(\xi) = |F|$ for every $\xi$. However, it also gives information on more general sets $E$ which do not necessarily contain a line in every direction, but instead contain a certain fraction of a line in a subset of directions. The exponents here are best possible in the sense that all other mapping properties of the operator can be deduced (with bounds that are optimal up to constants) by interpolating the above estimate with more trivial estimates. This result is the finite field analogue of a long-standing (and still open) conjecture for the Kakeya maximal function in Euclidean spaces; we rely on the polynomial method of Dvir, which thus far has not extended to the Euclidean setting (but note the very interesting variant of this method by Guth that has established the endpoint multilinear Kakeya maximal function estimate in this setting, see this blog post for further discussion).

It turns out that a direct application of the polynomial method is not sufficient to recover the full strength of the maximal function estimate; but by combining the polynomial method with the Nikishin-Maurey-Pisier-Stein “method of random rotations” (as interpreted nowadays by Stein and later by Bourgain, and originally inspired by the factorisation theorems of Nikishin, Maurey, and Pisier), one can already recover a “restricted weak type” version of the above estimate. If one then enhances the polynomial method with the “method of multiplicities” (as introduced by Saraf and Sudan) we can then recover the full “strong type” estimate; a few more details below the fold.

It turns out that one can generalise the above results to more general affine or projective algebraic varieties over finite fields. In particular, we showed

Kakeya maximal function conjecture in algebraic varieties. Suppose that $W \subset \overline{F}^N$ is an $(n-1)$-dimensional algebraic variety. Let $d \ge 1$ be an integer. Then we have

$\Big\| \sup_{\gamma \ni x, \gamma \not\subset W} \sum_{y \in \gamma(F)} |f(y)| \Big\|_{\ell^n_x(W(F))} \le C |F|^{(n-1)/n} \| f \|_{\ell^n(F^N)}$

for some constant $C > 0$, where the supremum is over all irreducible algebraic curves $\gamma$ of degree at most $d$ that pass through $x$ but do not lie in $W$, and $W(F)$ denotes the $F$-points of $W$.

The ordinary Kakeya maximal function conjecture corresponds to the case when $N = n$, $W$ is the hyperplane at infinity, and the degree $d$ is equal to $1$. One corollary of this estimate is a Dvir-type result: a subset of $F^N$ which contains, for each $x$ in $W$, an irreducible algebraic curve of degree $d$ passing through $x$ but not lying in $W$, has cardinality $\gg |F|^n$ if $|F|$ is sufficiently large depending on $n$ and $d$. (In particular this implies a lower bound for Nikodym sets worked out by Li.) The dependence of the implied constant on $W$ is only via the degree of $W$.

The techniques used in the flat case can easily handle curves of higher degree (provided that we allow the implied constants to depend on d), but the method of random rotations does not seem to work directly on the algebraic variety W as there are usually no symmetries of this variety to exploit. Fortunately, we can get around this by using a “random projection trick” to “flatten” W into a hyperplane (after first expressing W as the zero locus of some polynomials, and then composing with the graphing map for such polynomials), reducing the non-flat case to the flat case.

Below the fold, I wish to sketch two of the key ingredients in our arguments, the random rotations method and the random projections trick. (We of course also use some algebraic geometry, but mostly low-tech stuff, on the level of Bezout’s theorem, though we do need one non-trivial result of Kleiman (from SGA6), that asserts that bounded degree varieties can be cut out by a bounded number of polynomials of bounded degree.)

[Update, March 14: See also Jordan's own blog post on our paper.]

One of my favourite family of conjectures (and one that has preoccupied a significant fraction of my own research) is the family of Kakeya conjectures in geometric measure theory and harmonic analysis. There are many (not quite equivalent) conjectures in this family. The cleanest one to state is the set conjecture:

Kakeya set conjecture: Let $n \ge 1$, and let $E \subset \mathbf{R}^n$ contain a unit line segment in every direction (such sets are known as *Kakeya sets* or *Besicovitch sets*). Then $E$ has Hausdorff dimension and Minkowski dimension equal to $n$.

One reason why I find these conjectures fascinating is the sheer variety of mathematical fields that arise both in the partial results towards this conjecture, and in the applications of those results to other problems. See for instance this survey of Wolff, my Notices article and this article of Łaba on the connections between this problem and other problems in Fourier analysis, PDE, and additive combinatorics; there have even been some connections to number theory and to cryptography. At the other end of the pipeline, the mathematical tools that have gone *into* the proofs of various partial results have included:

- Maximal functions, covering lemmas, methods (Cordoba, Strömberg, Cordoba-Fefferman);
- Fourier analysis (Nagel-Stein-Wainger);
- Multilinear integration (Drury, Christ)
- Paraproducts (Katz);
- Combinatorial incidence geometry (Bourgain, Wolff);
- Multi-scale analysis (Barrionuevo, Katz-Łaba-Tao, Łaba-Tao, Alfonseca-Soria-Vargas);
- Probabilistic constructions (Bateman-Katz, Bateman);
- Additive combinatorics and graph theory (Bourgain, Katz-Łaba-Tao, Katz-Tao, Katz-Tao);
- Sum-product theorems (Bourgain-Katz-Tao);
- Bilinear estimates (Tao-Vargas-Vega);
- Perron trees (Perron, Schoenberg, Keich);
- Group theory (Katz);
- Low-degree algebraic geometry (Schlag, Tao, Mockenhaupt-Tao);
- High-degree algebraic geometry (Dvir, Saraf-Sudan);
- Heat flow monotonicity formulae (Bennett-Carbery-Tao)

[This list is not exhaustive.]

Very recently, I was pleasantly surprised to see yet another mathematical tool used to obtain new progress on the Kakeya conjecture, namely (a generalisation of) the famous Ham Sandwich theorem from algebraic topology. This was recently used by Guth to establish a certain endpoint multilinear Kakeya estimate left open by the work of Bennett, Carbery, and myself. With regards to the Kakeya set conjecture, Guth’s arguments assert, roughly speaking, that the only Kakeya sets that can fail to have full dimension are those which obey a certain “planiness” property, which informally means that the line segments that pass through a typical point in the set must be essentially coplanar. (This property first surfaced in my paper with Katz and Łaba.) Guth’s arguments can be viewed as a partial analogue of Dvir’s arguments in the finite field setting (which I discussed in this blog post) to the Euclidean setting; in particular, both arguments rely crucially on the ability to create a polynomial of controlled degree that vanishes at or near a large number of points. Unfortunately, while these arguments fully settle the Kakeya conjecture in the finite field setting, it appears that some new ideas are still needed to finish off the problem in the Euclidean setting. Nevertheless this is an interesting new development in the long history of this conjecture, in particular demonstrating that the polynomial method can be successfully applied to continuous Euclidean problems (i.e. it is not confined to the finite field setting).

In this post I would like to sketch some of the key ideas in Guth’s paper, in particular the role of the Ham Sandwich theorem (or more precisely, a polynomial generalisation of this theorem first observed by Gromov).

One of my favourite unsolved problems in mathematics is the Kakeya conjecture in geometric measure theory. This conjecture is descended from the

Kakeya needle problem. (1917) What is the least area in the plane required to continuously rotate a needle of unit length and zero thickness around completely (i.e. by 360°)?

For instance, one can rotate a unit needle inside a unit disk, which has area π/4. By using a deltoid one requires only π/8 area.

In 1928, Besicovitch showed that in fact one could rotate a unit needle using an *arbitrarily small* amount of positive area. This unintuitive fact was a corollary of two observations. The first, which is easy, is that one can *translate* a needle using arbitrarily small area, by sliding the needle along the direction it points in for a long distance (which costs zero area), turning it slightly (costing a small amount of area), sliding back, and then undoing the turn. The second fact, which is less obvious, can be phrased as follows. Define a *Kakeya set* in the plane to be any set which contains a unit line segment in each direction. (See this Java applet of mine, or the Wikipedia page, for some pictures of such sets.)

Theorem. (Besicovitch, 1919) There exist Kakeya sets of arbitrarily small area (or more precisely, Lebesgue measure).

In fact, one can construct such sets with zero Lebesgue measure. On the other hand, it was shown by Davies that even though these sets had zero area, they were still necessarily two-dimensional (in the sense of either Hausdorff dimension or Minkowski dimension). This led to an analogous conjecture in higher dimensions:

Kakeya conjecture. A Besicovitch set in R^n (i.e. a subset of R^n that contains a unit line segment in every direction) has Minkowski and Hausdorff dimension equal to n.

This conjecture remains open in dimensions three and higher (and gets more difficult as the dimension increases), although many partial results are known. For instance, when n=3, it is known that Besicovitch sets have Hausdorff dimension at least 5/2 and (upper) Minkowski dimension at least 5/2 + ε for some absolute constant ε > 0. See my Notices article for a general survey of this problem (and its connections with Fourier analysis, additive combinatorics, and PDE), my paper with Katz for a more technical survey, and Wolff’s survey for a systematic treatment of the field (up to about 1998 or so).

In 1999, Wolff proposed a simpler finite field analogue of the Kakeya conjecture as a model problem that avoided all the technical issues involving Minkowski and Hausdorff dimension. If F^n is the n-dimensional vector space over a finite field F, define a *Kakeya set* to be a subset of F^n which contains a line in every direction.

Finite field Kakeya conjecture. Let E ⊆ F^n be a Kakeya set. Then E has cardinality at least c_n |F|^n, where c_n > 0 depends only on n.
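To make the definition concrete, the Kakeya property over a small finite field can be checked by brute force. The following is a toy sketch only (the helper name `is_kakeya` and the choice of direction representatives are mine, and q is assumed prime so that arithmetic mod q gives the field F_q):

```python
from itertools import product

def is_kakeya(E, q, n):
    """Brute-force check: does E ⊆ F_q^n contain a line in every direction?

    Directions are taken up to scalar multiples, represented by vectors
    whose first nonzero coordinate is 1 (q is assumed prime, so that
    arithmetic mod q really is the field F_q).
    """
    E = set(E)
    for d in product(range(q), repeat=n):
        first = next((c for c in d if c != 0), None)
        if first != 1:
            continue  # skip the zero vector and non-canonical scalings
        # does some translate {x + t*d : t in F_q} lie entirely inside E?
        if not any(
            all(tuple((x[i] + t * d[i]) % q for i in range(n)) in E
                for t in range(q))
            for x in E):
            return False
    return True

q, n = 3, 2
full_space = set(product(range(q), repeat=n))
assert is_kakeya(full_space, q, n)  # the whole space trivially qualifies
# Deleting the column x = 0 punctures all three horizontal lines, so the
# direction (1, 0) no longer has a line inside the set.
assert not is_kakeya(full_space - {(0, 0), (0, 1), (0, 2)}, q, n)
```

For q = 3 and n = 2 there are q + 1 = 4 directions to check; note that removing a single point never destroys the Kakeya property here, since each direction has q parallel lines and only one of them can pass through the removed point.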

This conjecture has had a significant influence in the subject, in particular inspiring work on the *sum-product phenomenon* in finite fields, which has since proven to have many applications in number theory and computer science. Modulo minor technicalities, the progress on the finite field Kakeya conjecture was, until very recently, essentially the same as that of the original “Euclidean” Kakeya conjecture.

Last week, the finite field Kakeya conjecture was proven using a beautifully simple argument by Zeev Dvir, using the *polynomial method* in algebraic extremal combinatorics. The proof is so short that I can present it in full here.
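The linear-algebra step at the heart of the polynomial method is the observation that if a set E ⊆ F_p^n has fewer points than there are monomials of degree at most d, then some nonzero polynomial of degree at most d vanishes on all of E. This counting step (and only this step, not Dvir's full argument) can be illustrated directly; the helper names below are mine, and p is assumed prime:

```python
from itertools import product

def monomials(n, d):
    """Exponent vectors of total degree at most d in n variables."""
    return [e for e in product(range(d + 1), repeat=n) if sum(e) <= d]

def eval_mono(x, e, p):
    """Evaluate the monomial with exponent vector e at the point x, mod p."""
    v = 1
    for xi, ei in zip(x, e):
        v = v * pow(xi, ei, p) % p
    return v

def vanishing_poly(E, p, n, d):
    """Find a nonzero polynomial of degree <= d over F_p vanishing on E.

    Returns a dict {exponent vector: coefficient}, or None if only the
    zero polynomial works.  Uses Gaussian elimination mod p on the
    |E| x (#monomials) evaluation matrix; a nonzero solution is
    guaranteed whenever |E| < #monomials = C(n + d, n).
    """
    mons = monomials(n, d)
    pivots = {}  # pivot column -> fully reduced, normalized row
    for x in E:
        row = [eval_mono(x, e, p) for e in mons]
        for col, prow in pivots.items():
            if row[col]:
                f = row[col]
                row = [(a - f * b) % p for a, b in zip(row, prow)]
        lead = next((j for j, a in enumerate(row) if a), None)
        if lead is None:
            continue  # this point gave a redundant constraint
        inv = pow(row[lead], p - 2, p)  # Fermat inverse (p prime)
        row = [a * inv % p for a in row]
        for col in list(pivots):  # keep the system in reduced form
            if pivots[col][lead]:
                f = pivots[col][lead]
                pivots[col] = [(a - f * b) % p
                               for a, b in zip(pivots[col], row)]
        pivots[lead] = row
    free = next((j for j in range(len(mons)) if j not in pivots), None)
    if free is None:
        return None
    # read off a null-space vector: set the free coefficient to 1
    coeffs = [0] * len(mons)
    coeffs[free] = 1
    for col, prow in pivots.items():
        coeffs[col] = (-prow[free]) % p
    return dict(zip(mons, coeffs))

# 14 points in F_5^2 is fewer than C(4 + 2, 2) = 15 monomials of degree <= 4,
# so a nonzero vanishing polynomial of degree <= q - 1 = 4 must exist.
p, n, d = 5, 2, 4
E = list(product(range(p), repeat=n))[:14]
poly = vanishing_poly(E, p, n, d)
assert poly is not None and any(poly.values())
assert all(sum(c * eval_mono(x, e, p) for e, c in poly.items()) % p == 0
           for x in E)
```

Dvir's contribution is the converse half: a polynomial of degree at most q - 1 that vanishes on a Kakeya set must vanish identically, so a Kakeya set can never be this small.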
