I’ve just uploaded to the arXiv my paper “Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets”, submitted to Contrib. Disc. Math. The motivation of this paper is to understand a certain polynomial variant of the *sum-product phenomenon* in finite fields. This phenomenon asserts that if $A$ is a non-empty subset of a finite field $F$, then either the sumset $A+A$ or the product set $A \cdot A$ will be significantly larger than $A$, unless $A$ is close to a subfield of $F$ (or to a dilate of such a subfield). In particular, in the regime when $A$ is large, say $|A| \geq |F|^{2/3}$, one expects an expansion bound of the form

$\displaystyle \max( |A+A|, |A \cdot A| ) \gg |F|^{1/2} |A|^{1/2} \ \ \ \ \ (1)$

with an absolute implied constant. Results of this type are known; for instance, Hart, Iosevich, and Solymosi obtained precisely this bound for $|A| \gg |F|^{3/4}$ (in the case when $|F|$ is prime), which was then improved by Garaev to $|A| \gg |F|^{2/3}$.

We have focused here on the case when $A$ is a large subset of $F$, but sum-product estimates are also extremely interesting in the opposite regime in which $A$ is allowed to be small (see for instance the papers of Katz-Shen and Li and of Garaev for some recent work in this case, building on some older papers of Bourgain, Katz and myself and of Bourgain, Glibichuk, and Konyagin). However, the techniques used in these two regimes are rather different. For large subsets of $F$, it is often profitable to use techniques such as the Fourier transform or the Cauchy-Schwarz inequality to “complete” a sum over a large set (such as $A$) into a sum over the entire field $F$, and then to use identities concerning complete sums (such as the Weil bound on complete exponential sums over a finite field). For small subsets of $F$, such techniques are usually quite inefficient, and one has to proceed by somewhat different combinatorial methods which do not try to exploit the ambient field $F$. But my paper focuses exclusively on the large regime, and unfortunately does not directly say much (except through reasoning by analogy) about the small case.

Note that it is necessary to have both $|A+A|$ and $|A \cdot A|$ appear on the left-hand side of (1). Indeed, if one just has the sumset $A+A$, then one can set $A$ to be a long arithmetic progression to give counterexamples to (1). Similarly, if one just has a product set $A \cdot A$, then one can set $A$ to be a long geometric progression. The sum-product phenomenon can then be viewed as the assertion that a set cannot simultaneously behave like a long arithmetic progression and a long geometric progression, unless it is already very close to behaving like a subfield.
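As a quick numerical illustration of these two counterexamples (my own sketch, not taken from the paper; the prime $p = 101$ and the progression length are arbitrary choices), one can compare sumset and product set sizes for an arithmetic and a geometric progression in a prime field:

```python
# Sketch (not from the paper): in F_p, an arithmetic progression has a small
# sumset but a large product set, and a geometric progression the reverse,
# so neither |A+A| nor |A.A| alone can satisfy a bound like (1).
p = 101  # an arbitrary small prime

def sumset(A):
    return {(a + b) % p for a in A for b in A}

def productset(A):
    return {(a * b) % p for a in A for b in A}

A_arith = {(1 + 3 * k) % p for k in range(20)}  # 20-term arithmetic progression
A_geom = {pow(2, k, p) for k in range(20)}      # 20-term geometric progression

# The sumset of the AP and the product set of the GP both have only
# 2*20 - 1 = 39 elements, while the "opposite" operation expands each set.
print(len(sumset(A_arith)), len(productset(A_arith)))
print(len(sumset(A_geom)), len(productset(A_geom)))
```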

Now we consider a polynomial variant of the sum-product phenomenon, where we consider a polynomial image

$\displaystyle P(A) := \{ P(a,b) : a, b \in A \}$

of a set $A \subset F$ with respect to a polynomial $P: F \times F \to F$; we can also consider the asymmetric setting of the image

$\displaystyle P(A,B) := \{ P(a,b) : a \in A, b \in B \}$

of two subsets $A, B \subset F$. The regime we will be interested in is the one where the field $F$ is large, and the subsets $A, B$ of $F$ are also large, but the polynomial $P$ has bounded degree. Actually, for technical reasons it will not be enough for us to assume that $F$ has large cardinality; we will also need to assume that $F$ has large characteristic. (The two concepts are synonymous for fields of prime order, but not in general; for instance, the field of $2^n$ elements becomes large as $n \to \infty$ while the characteristic remains fixed at $2$, and such fields are thus not going to be covered by the results in this paper.)

In this paper of Vu, it was shown that one could replace $A \cdot A$ with $P(A)$ in (1), thus obtaining a bound of the form

$\displaystyle \max( |A+A|, |P(A)| ) \gg |F|^{1/2} |A|^{1/2}$

whenever $|A| \geq |F|^{1-c}$ for some absolute constant $c > 0$, unless the polynomial $P$ had the degenerate form $P(x,y) = Q(L(x,y))$ for some linear function $L$ and polynomial $Q$, in which case $P(A)$ behaves too much like $A+A$ to get reasonable expansion. In this paper, we focus instead on the question of bounding $|P(A)|$ alone. In particular, one can ask to classify the polynomials $P$ for which one has the *weak expansion property*

$\displaystyle |P(A)| \gg |F|^{1/2} |A|^{1/2}$

whenever $|A| \geq |F|^{1-c}$ for some absolute constant $c > 0$. One can also ask for stronger versions of this expander property, such as the *moderate expansion property*

$\displaystyle |P(A)| \gg |F|$

whenever $|A| \geq |F|^{1-c}$, or the *almost strong expansion property*

$\displaystyle |P(A)| \geq |F| - O( |F|^{1-c'} )$

whenever $|A| \geq |F|^{1-c}$. (One can consider even stronger expansion properties, such as the *strong expansion property* $|P(A)| \geq |F| - O(1)$, but it was shown by Gyarmati and Sárközy that this property cannot hold for polynomials of two variables of bounded degree in this regime.) One can also consider asymmetric versions of these properties, in which one obtains lower bounds on $|P(A,B)|$ rather than $|P(A)|$.

The example of a long arithmetic or geometric progression shows that the polynomials $P(x,y) = x+y$ or $P(x,y) = xy$ cannot be expanders in any of the above senses, and a similar construction also shows that polynomials of the form $Q(f(x)+g(y))$ or $Q(f(x)g(y))$ for some polynomials $Q, f, g$ cannot be expanders. On the other hand, there are a number of results in the literature establishing expansion for various polynomials in two or more variables that are not of this degenerate form (in part because such results are related to incidence geometry questions in finite fields, such as the finite field version of the Erdős distinct distances problem). For instance, Solymosi established weak expansion for polynomials of the form $f(x)+y$ when $f$ is a nonlinear polynomial, with generalisations by Hart, Li, and Shen for various polynomials of the form $f(x)+g(y)$ or $f(x)g(y)$. Further examples of expanding polynomials appear in the work of Shkredov, Iosevich-Rudnev, and Bukh-Tsimerman, as well as the previously mentioned papers of Vu and of Hart-Li-Shen, and these papers in turn cite many further results which are in the spirit of the polynomial expansion bounds discussed here (for instance, dealing with the small regime, or working in other fields such as the real numbers instead of in finite fields). We will not summarise all these results here; they are summarised briefly in my paper, and in more detail in the papers of Hart-Li-Shen and of Bukh-Tsimerman. But we will single out one of the results of Bukh-Tsimerman, which is one of the most recent and general of these results, and the closest to the results of my own paper.
Roughly speaking, in this paper it is shown that a polynomial $P(x,y)$ of two variables and bounded degree will be a moderate expander if it is *non-composite* (in the sense that it does not take the form $P(x,y) = Q(R(x,y))$ for some non-linear polynomial $Q$ and some polynomial $R$, possibly having coefficients in the algebraic closure of $F$) and is monic in both $x$ and $y$, thus it takes the form $P(x,y) = x^d + R(x,y)$ for some $d \geq 1$ and some polynomial $R$ of degree at most $d-1$ in $x$, and similarly with the roles of $x$ and $y$ reversed, unless $P$ is of the form $f(x) + g(y)$ or $f(x) g(y)$ (in which case the expansion theory is covered to a large extent by the previous work of Hart, Li, and Shen).

Our first main result improves upon the Bukh-Tsimerman result by strengthening the notion of expansion and removing the non-composite and monic hypotheses, but imposes a condition of large characteristic. I’ll state the result here slightly informally as follows:

Theorem 1 (Criterion for moderate expansion). Let $P: F \times F \to F$ be a polynomial of bounded degree over a finite field $F$ of sufficiently large characteristic, and suppose that $P$ is not of the form $P(x,y) = Q(f(x)+g(y))$ or $P(x,y) = Q(f(x)g(y))$ for some polynomials $Q, f, g$. Then one has the (asymmetric) moderate expansion property

$\displaystyle |P(A,B)| \gg |F|$

whenever $|A| |B| \geq |F|^{2 - 1/8}$.

This is basically a sharp necessary and sufficient condition for asymmetric moderate expansion for polynomials of two variables. In the paper, analogous sufficient conditions for weak or almost strong expansion are also given, although these are not quite as satisfactory (particularly the conditions for almost strong expansion, which include a somewhat complicated algebraic condition which is not easy to check, and which I would like to simplify further, but was unable to).
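To see the dichotomy of Theorem 1 numerically, here is a small experiment of my own (not from the paper; the prime, the progression, and the test polynomials are arbitrary choices): a polynomial of the degenerate form $Q(f(x)+g(y))$ fails to expand on an arithmetic progression, while a polynomial avoiding both degenerate forms has an image comparable to the whole field.

```python
# Sketch comparison (my own choices, not from the paper): a degenerate
# polynomial P1(x,y) = x + y versus P2(x,y) = x^2 + x*y, which is not of
# the form Q(f(x)+g(y)) or Q(f(x)g(y)).
p = 1009
A = list(range(1, 200))  # a long arithmetic progression in F_p
B = A

def image(P, A, B):
    return {P(a, b) % p for a in A for b in B}

deg = len(image(lambda x, y: x + y, A, B))          # degenerate: no expansion
exp = len(image(lambda x, y: x * x + x * y, A, B))  # a positive fraction of F_p
print(deg, exp)
```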

The argument here resembles the Bukh-Tsimerman argument in many ways. One can view the result as an assertion about the expansion properties of the graph $\{ (a, b, P(a,b)) : a, b \in F \}$, which can essentially be thought of as a somewhat sparse three-uniform hypergraph on $F$. Being sparse, it is difficult to directly apply techniques from dense graph or hypergraph theory for this situation; however, after a few applications of the Cauchy-Schwarz inequality, it turns out (as observed by Bukh and Tsimerman) that one can essentially convert the problem to one about the expansion properties of the set

$\displaystyle \{ (P(a,c), P(b,c)) : a, b, c \in F \} \ \ \ \ \ (2)$

(actually, one should view this as a multiset, but let us ignore this technicality) which one expects to be a dense subset of $F \times F$, except in the case when the associated algebraic variety

$\displaystyle \{ (P(a,c), P(b,c)) : a, b, c \in \overline{F} \}$

fails to be Zariski dense, but it turns out that in this case one can use some differential geometry and Riemann surface arguments (after first invoking the Lefschetz principle and the high characteristic hypothesis to work over the complex numbers instead of over a finite field) to show that $P$ is of the form $Q(f(x)+g(y))$ or $Q(f(x)g(y))$. This reduction is related to the classical fact that the only one-dimensional algebraic groups over the complex numbers are the additive group $({\bf C},+)$, the multiplicative group $({\bf C}^\times,\cdot)$, or the elliptic curves (but the latter have a group law given by rational functions rather than polynomials, and so ultimately end up being eliminated from consideration, though they would play an important role if one wanted to study the expansion properties of rational functions).

It remains to understand the structure of the set (2). To understand dense graphs or hypergraphs, one of the standard tools of choice is the Szemerédi regularity lemma, which carves up such graphs into a bounded number of cells, with the graph behaving pseudorandomly on most pairs of cells. However, the bounds in this lemma are notoriously poor (the regularity obtained is an inverse tower exponential function of the number of cells), and this makes the lemma unsuitable for the type of expansion properties we seek (in which we want to deal with sets whose density is only a small negative power of $|F|$). Fortunately, in the case of sets such as (2) which are definable over the language of rings, it turns out that a much stronger regularity lemma is available, which I call the “algebraic regularity lemma”. I’ll state it (again, slightly informally) in the context of graphs as follows:

Lemma 2 (Algebraic regularity lemma). Let $F$ be a finite field of large characteristic, and let $V, W$ be definable sets over $F$ of bounded complexity (i.e. $V, W$ are subsets of $F^n$, $F^m$ for some bounded $n, m$ that can be described by first-order predicates in the language of rings of bounded length and involving boundedly many constants). Let $E$ be a definable subset of $V \times W$, again of bounded complexity (one can view $E$ as a bipartite graph connecting $V$ and $W$). Then one can partition $V$ into a bounded number of cells $V_1, \ldots, V_a$, and $W$ into a bounded number of cells $W_1, \ldots, W_b$, still definable with bounded complexity, such that for *all* pairs $i = 1, \ldots, a$, $j = 1, \ldots, b$, one has the regularity property

$\displaystyle |E \cap (A \times B)| = d_{ij} |A| |B| + O( |F|^{-1/4} |V_i| |W_j| )$

for all $A \subset V_i$ and $B \subset W_j$, where $d_{ij}$ is the density of $E$ in $V_i \times W_j$.

This lemma resembles the Szemerédi regularity lemma, but regularises *all* pairs of cells (not just most pairs), and the regularity is of polynomial strength in $|F|$, rather than inverse tower exponential in the number of cells. Also, the cells are not arbitrary subsets of $V$ and $W$, but are themselves definable with bounded complexity, which turns out to be crucial for applications. I am optimistic that this lemma will be useful not just for studying expanding polynomials, but for many other combinatorial questions involving dense subsets of definable sets over finite fields.
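The polynomial (rather than tower-type) strength of the regularity can already be glimpsed in the simplest definable graph, the Paley graph. The following sketch (mine, not from the paper; the prime and the choice of test sets are arbitrary) checks that the edge density between two large vertex sets is within a power of $1/|F|$ of the global density $1/2$, with no partitioning needed at all:

```python
# Sketch (my own, not from the paper): the Paley graph x ~ y iff x - y is a
# nonzero square in F_p is a definable graph of bounded complexity, and is
# already regular with polynomially small error: edge densities between
# large sets are close to 1/2, with error on the order of a power of 1/p.
p = 401  # prime with p = 1 (mod 4), so "x - y is a square" is symmetric
squares = {(x * x) % p for x in range(1, p)}

def density(A, B):
    edges = sum(1 for a in A for b in B if (a - b) % p in squares)
    return edges / (len(A) * len(B))

A = set(range(0, 200))    # two arbitrary large subsets of F_p
B = set(range(100, 300))
print(abs(density(A, B) - 0.5))  # small, of size about p^{-1/2}
```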

The above lemma is stated for graphs, but one can iterate it to obtain an analogous regularisation of $k$-uniform hypergraphs for any bounded $k$ (for the application to (2), we need $k=3$). This hypergraph regularity lemma, by the way, is *not* analogous to the strong hypergraph regularity lemmas of Rödl et al. and Gowers developed in the last six or so years, but is closer in spirit to the older (but weaker) hypergraph regularity lemma of Chung, which gives the same “order $1$” regularity that the graph regularity lemma gives, rather than higher order regularity.

One feature of the proof of Lemma 2 which I found striking was the need to use some fairly high-powered technology from algebraic geometry, and in particular the Lang-Weil bound on counting points in varieties over a finite field (discussed in this previous blog post), and also the theory of the étale fundamental group. Let me try to briefly explain why this is the case. A model example of a definable set of bounded complexity is a set of the form

$\displaystyle E = \{ (x, y) \in F \times F : \exists z \in F, \ P(x,y,z) = 0 \}$

for some polynomial $P$. (Actually, it turns out that one can essentially write all definable sets as an intersection of sets of this form; see this previous blog post for more discussion.) To regularise the set $E$, it is convenient to square the adjacency matrix, which soon leads to the study of counting functions such as

$\displaystyle f(x_1, x_2) := | \{ y \in F : (x_1, y) \in E \hbox{ and } (x_2, y) \in E \} |.$

If one can show that this function is “approximately finite rank” in the sense that (modulo lower order errors, of size smaller than the main term) this quantity depends only on a bounded number of bits of information about $x_1$ and a bounded number of bits of information about $x_2$, then a little linear algebra will give the required regularity result.
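Here is a toy instance (my own, not from the paper) of this “bounded number of bits” phenomenon. For the definable graph given by $x - y$ being a nonzero square in $F_p$, the counting function $f(x_1,x_2)$ takes only two values, determined by the single “bit” of whether $x_1 - x_2$ is itself a nonzero square:

```python
# Sketch (my own): for E = {(x,y) : x - y is a nonzero square in F_p}, the
# codegree f(x1,x2) = #{y : (x1,y),(x2,y) in E} depends only on one bit of
# (x1,x2): whether x1 - x2 is a nonzero square.  (The Paley graph is strongly
# regular, with codegrees (p-5)/4 and (p-1)/4.)
p = 401
squares = {(x * x) % p for x in range(1, p)}

def f(x1, x2):
    return sum(1 for y in range(p)
               if (x1 - y) % p in squares and (x2 - y) % p in squares)

values = {f(0, x) for x in range(1, 40)}
print(sorted(values))  # only two distinct values occur
```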

One can recognise $f(x_1,x_2)$ as counting the $F$-points of a certain algebraic variety

$\displaystyle V_{x_1,x_2} := \{ (y, z_1, z_2) : P(x_1,y,z_1) = P(x_2,y,z_2) = 0 \}.$

The Lang-Weil bound (discussed in this previous post) provides a formula for this count, in terms of the number of geometrically irreducible components of $V_{x_1,x_2}$ that are defined over $F$ (or equivalently, are invariant with respect to the Frobenius endomorphism associated to $F$). So the problem boils down to ensuring that this number of components is “generically bounded rank”, in the sense that for generic $x_1, x_2$, its value depends only on a bounded number of bits of $x_1$ and a bounded number of bits of $x_2$.
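As a numerical sanity check on the Lang-Weil philosophy, one can count the points of a fixed absolutely irreducible variety directly. This sketch (mine; the curve $y^2 = x^3 + x + 1$ and the prime are arbitrary choices) confirms the count of $p + O(\sqrt{p})$ points per geometrically irreducible component defined over $F$:

```python
# Sketch (my own): Lang-Weil (here, in the special case of the Hasse bound
# for elliptic curves) predicts that an absolutely irreducible curve over F_p
# has p + O(sqrt(p)) points.  Curve: y^2 = x^3 + x + 1, nonsingular mod p.
p = 1009
count = sum(1 for x in range(p) for y in range(p)
            if (y * y - (x ** 3 + x + 1)) % p == 0)
print(count, abs(count - p))  # the deviation is O(sqrt(p))
```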

Here is where the étale fundamental group comes in. One can view $V_{x_1,x_2}$ as a fibre product of the varieties

$\displaystyle V_{x_1} := \{ (y, z_1) : P(x_1,y,z_1) = 0 \}$

and

$\displaystyle V_{x_2} := \{ (y, z_2) : P(x_2,y,z_2) = 0 \}$

over the $y$ variable. If one is in sufficiently high characteristic (or even better, in zero characteristic, which one can reduce to by an ultraproduct (or nonstandard analysis) construction, similar to that discussed in this previous post), the varieties $V_{x_1}, V_{x_2}$ are generically finite étale covers of the $y$ line, and the fibre product $V_{x_1,x_2}$ is then also generically a finite étale cover. One can count the components of a finite étale cover of a connected variety by counting the number of orbits of the étale fundamental group acting on a fibre of that cover (much as the number of components of a cover of a connected manifold is the number of orbits of the topological fundamental group acting on that fibre). So if one understands the étale fundamental group of a certain generic subset of the $y$ line (formed by intersecting together an $x_1$-dependent generic subset with an $x_2$-dependent generic subset), this in principle controls the number of components of $V_{x_1,x_2}$. It turns out that one can decouple the $x_1$ and $x_2$ dependence of this fundamental group by using an étale version of the van Kampen theorem for the fundamental group, which I discussed in this previous blog post. With this fact (and another deep fact about the étale fundamental group in zero characteristic, namely that it is topologically finitely generated), one can obtain the desired generic bounded rank property of $f(x_1,x_2)$, which gives the regularity lemma.

In order to expedite the deployment of all this algebraic geometry (as well as some Riemann surface theory), it is convenient to use the formalism of nonstandard analysis (or the ultraproduct construction), which among other things can convert quantitative, finitary problems in large characteristic into equivalent qualitative, infinitary problems in zero characteristic (in the spirit of this blog post). This allows one to use several tools from those fields as “black boxes”; not just the theory of étale fundamental groups (which are considerably simpler and more favorable in characteristic zero than they are in positive characteristic), but also some results limiting the morphisms between compact Riemann surfaces of high genus (such as the de Franchis theorem, the Riemann-Hurwitz formula, or the fact that all morphisms between elliptic curves are essentially group homomorphisms), which would be quite unwieldy to utilise if one did not first pass to the zero characteristic case (and thence to the complex case) via the ultraproduct construction (followed by the Lefschetz principle).

I found this project to be particularly educational for me, as it forced me to wander outside of my usual range by quite a bit in order to pick up the tools from algebraic geometry and Riemann surfaces that I needed (in particular, I read through several chapters of EGA and SGA for the first time). This did however put me in the slightly unnerving position of having to use results (such as the Riemann existence theorem) whose proofs I have not fully verified for myself, but which are easy to find in the literature, and widely accepted in the field. I suppose this type of dependence on results in the literature is more common in the more structured fields of mathematics than it is in analysis, which by its nature has fewer reusable black boxes, and so key tools often need to be rederived and modified for each new application. (This distinction is discussed further in this article of Gowers.)

## 16 comments


14 November, 2012 at 9:53 am

Maurice de Gosson

Fascinating! Useful?

14 November, 2012 at 10:23 am

Emmanuel Kowalski

The last link seems to be wrong…

I think that maybe, instead of passing to characteristic 0, one should in many cases here be able to argue that one works with tame covers and fundamental groups. Even in positive characteristic, these behave like the characteristic zero picture.

14 November, 2012 at 11:46 am

Terence Tao

Thanks, I’ve corrected the link now.

I agree that most of the arguments in the paper should also extend to the positive characteristic case (though one may still need some effective lower bound on that characteristic in terms of the degrees of the various varieties and morphisms involved, for instance to keep the varieties generically smooth and to stay in the prime-to-p component of the étale fundamental group). My intuition on these topics deteriorates quite rapidly, though, once I don’t get to use the Lefschetz principle to run to the complex algebraic geometry setting.

14 November, 2012 at 10:55 am

Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets « Guzman's Mathematics Weblog

[…] Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definab…. […]

14 November, 2012 at 11:48 am

acbd

Impressive breadth of tools and references!

A tiny nitpick: in the paper, there are some missing letters and accents in the tricky ref 27 (the others seem fine). According to the French Wikipedia, the TeX should rather read something like:

\bibitem{sga2}

A. Grothendieck, M. Raynaud, (Y. Laszlo, ed.),

Cohomologie locale des faisceaux coh\’erents et th\’eor\`emes de Lefschetz locaux et globaux (SGA 2). Documents Math\’ematiques, 4, Soci\’et\’e Math\’ematique de France, Paris (2005). [New edition of the 1968 original].

[Thanks, this will be corrected in the next revision of the ms – T.]

18 November, 2012 at 10:17 am

valuevar

For what it is worth, my paper on SL_2 included a proof that a function P(x,y) was expanding in your sense for any sets (large or small). This was then used in a crucial step (since P(x,y) appeared as a trace of a product of elements of SL_2). Of course, P wasn’t quite a polynomial – rather, it was a polynomial in x, y, x^{-1} and y^{-1}. I’ve remained interested in the question, though – for which polynomials (in x and y, or in x, y, x^{-1} and y^{-1}) can we prove expansion even for very small subsets of finite fields?

18 November, 2012 at 10:29 am

Terence Tao

Thanks, Harald, for pointing that out! (Though, strictly speaking, your examples (say, Proposition 3.3 of http://arxiv.org/pdf/math/0509024.pdf for sake of concreteness) are actually polynomial functions of the products and their inverses, and they don’t quite give expansion in the senses I state above because epsilon is not required to depend linearly on delta in the limit, but it is certainly in the same spirit.)

The methods in my paper (based on regularity lemmas) only work for very large sets (of size a sufficiently large power of $|F|$, basically). For extremely small sets (of size less than a small power of $|F|$) there should be some sort of Lefschetz principle that allows one to embed the configuration into the complex field, where the work of Elekes and Szabó gives satisfactory results. It seems reasonable to conjecture that the Elekes-Szabó theory extends to sets of cardinality up to $|F|^c$ for some absolute constant $c$ (for fields of prime order at least, to avoid subfield obstructions), but then there is presumably some crossover to the large set case when $A$ begins to have positive density in $F$.

19 November, 2012 at 11:27 am

valuevar

Terry, Prop. 3.3 of http://arxiv.org/pdf/math/0509024.pdf works uniformly for small and medium-sized sets, which are (in my view) the hardest cases; I didn’t optimize things for large sets because other (easier) methods worked for them in the context I was working in.

I gave these matters some thought a few years ago (I think Akshay and I talked about them). I would be pretty impressed if the transference arguments that work for very small sets could be extended to size $|F|^c$, $c$ a small constant. I would imagine there would still be a large gap between small and large sets even in this case, and that would be the main challenge.

My intuition is that there should be large-set methods that work for all $|A| \gg |F|^{1/2}$ (see: Deligne). Don’t you agree?

19 November, 2012 at 12:15 pm

Terence Tao

Sorry, I meant to say that the *conclusions* of the transference argument should extend to the case of sets of size up to $|F|^c$ (in the prime order case, at least), even if the transference method itself breaks down. (For instance, one could speculatively consider some sort of “approximate embedding” of such smallish sets into characteristic zero which isn’t an exact embedding in the Freiman sense, but is still somehow “good enough” for some vestige of the characteristic zero arguments to be applicable.)

For large sets, I agree that $|F|^{1/2}$ is the natural barrier (for weak expansion, at least) since it is the last place where subfield obstructions can occur. (For moderate or strong expansion, there is the possibility of larger counterexamples, e.g. by intersecting together large arithmetic progressions with large geometric progressions, so I don’t have a firm intuition for this case.) My arguments use Deligne’s work (via the Lang-Weil bound), but because of the need to use Cauchy-Schwarz several times to eliminate all the arbitrary sets A, they only start working at sizes close to $|F|$. I can believe that by being more careful, one could reduce the number of applications of Cauchy-Schwarz to get down somewhat further, but to get all the way down to $|F|^{1/2}$ would require a very different idea; it can’t just be using Cauchy-Schwarz to “complete” all sums involving A, followed by Deligne to estimate the completed sums. (In sufficiently “additive” or “multiplicative” situations one can use the relevant Fourier transform as a replacement for Cauchy-Schwarz, and this can get down to the right barrier of $|F|^{1/2}$, but this trick does not appear to be available in the general case.)

In any case, I agree that the theories for small, medium, and large sets will initially all be rather different from each other (much as is the case with the Bourgain-Gamburd expansion machinery), though perhaps a unified treatment will eventually emerge (for instance, one may optimistically hope that the small set theory will eventually extend all the way up to the $|F|^{1/2}$ barrier, and the large set theory down to it, thus ultimately eliminating the need for a medium set theory).

19 November, 2012 at 1:46 am

Milad

Sorry for asking this question here, but how do you use mathematical symbols between your sentences?

Do you use a special program or something?!

[The short answer is yes. See the second section of http://terrytao.wordpress.com/about/ – T.]

28 November, 2012 at 7:25 am

Derrick

Does the result on asymmetric moderate expanders extend to the case when one considers the image of arbitrary subsets of $F \times F$, as opposed to Cartesian products of sets $A \times B$?

Do you think analogous statements can be made in a geometric measure theory context? Perhaps, define strong expansion to be something along the lines of: the image of a set with large fractal dimension contains an interval; moderate expansion to say that large fractal dimension implies positive Lebesgue measure; and weak expansion to mean that for a set with large fractal dimension, the fractal dimension of the image would be bigger than a constant multiple of the fractal dimension of the set plus a constant.

28 November, 2012 at 5:40 pm

Terence Tao

For the first question, if one considers images of arbitrary subsets $E$ of $F \times F$ then one could simply take $E = P^{-1}(A)$ for some arbitrary set $A$, in which case one does not do any better than the easy bound $|P(E)| \gg |E|/|F|$ (assuming P non-constant, of course).

For the second question, there are some scattered results of this type. A typical result here is Wolff’s result on the Falconer distance conjecture: if $E \subset {\bf R}^2$ is a compact set with Hausdorff dimension greater than 4/3, then the distance set $\{ |x-y| : x, y \in E \}$ has positive Lebesgue measure; among other things, this (morally, at least) implies that the image of $E \times E$ under the polynomial $P(x,y) = (x_1-y_1)^2 + (x_2-y_2)^2$ has positive Lebesgue measure whenever $E$ is a compact set of dimension at least 4/3. I don’t know if anyone has studied the problem for arbitrary polynomials P though. (In principle, the results of Elekes and Szabó could be transferable to this setting, but in practice it is often quite difficult to convert a discrete incidence combinatorics result into a continuous geometric measure theory result; for instance, the near-resolution of the Erdős distance problem by Guth and Katz has, to date, not led to further progress on the continuous counterpart, namely the Falconer distance conjecture.)

11 December, 2012 at 9:04 pm

Mixing for progressions in non-abelian groups « What’s new

[…] online submission system). This paper is loosely related in subject topic to my two previous papers on polynomial expansion and on recurrence in quasirandom groups (with Vitaly Bergelson), although the methods here are […]

14 March, 2013 at 1:17 pm

Rectification and the Lefschetz principle | What's new

[…] This principle (first laid out in an appendix of Lefschetz’s book), among other things, often allows one to use the methods of complex analysis (e.g. Riemann surface theory) to study many other fields of characteristic zero. There are many variants and extensions of this principle; see for instance this MathOverflow post for some discussion of these. I used this baby version of the Lefschetz principle recently in a paper on expanding polynomial maps. […]

29 October, 2013 at 8:09 pm

A spectral theory proof of the algebraic regularity lemma | What's new

[…] a recent paper, I established (in the large characteristic case) the following regularity lemma for dense […]

16 November, 2013 at 10:12 pm

Qualitative probability theory, types, and the group chunk and group configuration theorems | What's new

[…] If $V, W$ are definable sets, then the uniform measure on $V \times W$ is the product of the uniform measure on $V$ and the uniform measure on $W$; the proof of the Fubini-Tonelli type statement that justifies this may be found for instance in Lemma 13 of this paper of mine. […]