For the first question, if one considers images of arbitrary subsets $E$ of the domain, then one could simply take $E = P^{-1}(B)$ for some arbitrary set $B$, in which case the image $P(E)$ is no larger than $B$, and one does not do any better than the easy bound (assuming $P$ non-constant, of course).

For the second question, there are some scattered results of this type. A typical result here is Wolff's result on the Falconer distance conjecture: if $E \subset {\bf R}^2$ is a compact set with Hausdorff dimension greater than $4/3$, then the distance set $\{ |x-y| : x, y \in E \}$ has positive Lebesgue measure; among other things, this (morally, at least) implies that the image of $E \times E$ under the polynomial $P(x,y) = (x_1-y_1)^2 + (x_2-y_2)^2$ has positive Lebesgue measure whenever $E$ is a compact set of dimension at least $4/3$. I don't know if anyone has studied the problem for arbitrary polynomials $P$, though. (In principle, the results of Elekes and Szabó could be transferable to this setting, but in practice it is often quite difficult to convert a discrete incidence combinatorics result into a continuous geometric measure theory result; for instance, the near-resolution of the Erdős distance problem by Guth and Katz has, to date, not led to further progress on the continuous counterpart, namely the Falconer distance conjecture.)
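Concretely, and with the caveat that this is a standard reformulation rather than Wolff's own statement, the link between the two assertions comes from viewing the distance set as a polynomial image:

```latex
\Delta(E) := \{ |x - y| : x, y \in E \}, \qquad
P(x,y) := (x_1 - y_1)^2 + (x_2 - y_2)^2,
```

so that $P(E \times E) = \{ |x-y|^2 : x, y \in E \}$; since $t \mapsto t^2$ is bi-Lipschitz on compact subsets of $(0,\infty)$, positive Lebesgue measure of $\Delta(E)$ is equivalent to positive Lebesgue measure of $P(E \times E)$.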

Do you think analogous statements can be made in a geometric measure theory context? Perhaps define strong expansion to be something along the lines of: the image of a set with large fractal dimension contains an interval; moderate expansion to say that large fractal dimension implies the image has positive Lebesgue measure; and weak expansion to mean that for a set with large fractal dimension, the fractal dimension of the image would be bigger than some fixed multiple of the fractal dimension of the set, plus a constant.
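One hedged way to make these three proposed notions precise (with an unspecified dimension threshold $d_0$ and placeholder constants $c, \epsilon > 0$, since the comment leaves the exact quantities open) would be:

```latex
\begin{aligned}
\textbf{strong:}   \quad & \dim_H(E) > d_0 \implies P(E \times E) \ \text{contains an interval},\\
\textbf{moderate:} \quad & \dim_H(E) > d_0 \implies P(E \times E) \ \text{has positive Lebesgue measure},\\
\textbf{weak:}     \quad & \dim_H(E) > d_0 \implies \dim_H\big( P(E \times E) \big) \ \ge\ c \, \dim_H(E) + \epsilon.
\end{aligned}
```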

Sorry, I meant to say that the *conclusions* of the transference argument should extend to this regime (in the prime order case, at least), even if the transference method itself breaks down. (For instance, one could speculatively consider some sort of “approximate embedding” of such smallish sets into characteristic zero which isn’t an exact embedding in the Freiman sense, but is still somehow “good enough” for some vestige of the characteristic zero arguments to be applicable.)

For large sets, I agree that $|F|^{1/2}$ is the natural barrier (for weak expansion, at least), since it is the last place where subfield obstructions can occur. (For moderate or strong expansion, there is the possibility of larger counterexamples, e.g. by intersecting together large arithmetic progressions with large geometric progressions, so I don’t have a firm intuition for this case.) My arguments use Deligne’s work (via the Lang–Weil bound), but because of the need to use Cauchy–Schwarz several times to eliminate all the arbitrary sets $A$, they only start working when $|A|$ is quite close to $|F|$ in size. I can believe that by being more careful, one could reduce the number of applications of Cauchy–Schwarz and lower this threshold somewhat, but to get all the way down to $|F|^{1/2}$ would require a very different idea; it can’t just be using Cauchy–Schwarz to “complete” all sums involving $A$, followed by Deligne to estimate the completed sums. (In sufficiently “additive” or “multiplicative” situations one can use the relevant Fourier transform as a replacement for Cauchy–Schwarz, and this can get down to the right barrier of $|F|^{1/2}$, but this trick does not appear to be available in the general case.)
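The completion step being alluded to can be sketched schematically (this is the standard bilinear template, not the precise chain of estimates in the paper): a single application of Cauchy–Schwarz trades the restriction to $A$ for a complete sum over $F$,

```latex
\Big| \sum_{a \in A} \sum_{b \in B} e\big(g(a,b)\big) \Big|^2
 \ \le\ |A| \sum_{x \in F} \Big| \sum_{b \in B} e\big(g(x,b)\big) \Big|^2
 \ =\ |A| \sum_{b, b' \in B} \sum_{x \in F} e\big(g(x,b) - g(x,b')\big),
```

after which the set $A$ has disappeared, and Deligne-type bounds give square-root cancellation $O(|F|^{1/2})$ in the complete sum over $x$ for the off-diagonal terms $b \neq b'$. Each further arbitrary set to be eliminated costs another such application, and each application weakens the final threshold on $|A|$, consistent with the method only becoming nontrivial for sets quite close to $|F|$ in size.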

In any case, I agree that the theories for small, medium, and large sets will initially all be rather different from each other (much as is the case with the Bourgain–Gamburd expansion machinery), though perhaps a unified treatment will eventually emerge (for instance, one may optimistically hope that the small set theory will eventually extend all the way up to $|F|^{1/2}$, and the large set theory down to $|F|^{1/2}$, thus ultimately eliminating the need for a medium set theory).

Terry, Prop. 3.3 of http://arxiv.org/pdf/math/0509024.pdf works uniformly for small and medium-sized sets, which are (in my view) the hardest cases; I didn’t optimize things for large sets because other (easier) methods worked for them in the context I was working in.

I gave these matters some thought a few years ago (I think Akshay and I talked about them). I would be pretty impressed if the transference arguments that work for very small sets could be extended to sets of size $p^{\delta}$ for $\delta$ a small constant. I would imagine there would still be a large gap between small and large sets even in this case, and that closing it would be the main challenge.

My intuition is that there should be large-set methods that work for all sets of size above the $|F|^{1/2}$ barrier (see: Deligne). Don’t you agree?

Do you use a special program or something?!

*[The short answer is yes. See the second section of https://terrytao.wordpress.com/about/ – T.]*

Thanks, Harald, for pointing that out! (Though, strictly speaking, your examples (say, Proposition 3.3 of http://arxiv.org/pdf/math/0509024.pdf for the sake of concreteness) are actually polynomial functions of the products and their inverses, and they don’t quite give expansion in the senses I state above, because epsilon is not required to depend linearly on delta in the limit $\delta \to 0$; but it is certainly in the same spirit.)

The methods in my paper (based on regularity lemmas) only work for very large sets (of size close to $|F|$, basically). For extremely small sets there should be some sort of Lefschetz principle that allows one to embed the configuration into the complex field, where the work of Elekes and Szabó gives satisfactory results. It seems reasonable to conjecture that the Elekes–Szabó theory extends to sets of cardinality up to $|F|^c$ for some absolute constant $c$ (for fields of prime order at least, to avoid subfield obstructions), but then there is presumably some crossover to the large set case when $|A|$ begins to have positive density in $F$.
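The subfield obstruction mentioned here (and earlier in the thread) is easy to see numerically; the following is my own self-contained sketch (the prime $p = 7$ and the sample polynomial are arbitrary choices, not from the discussion). Inside $F = F_{p^2}$, the subfield $A = F_p$ has size $|F|^{1/2}$, and any polynomial with integer coefficients maps $A \times A$ back into $A$, so no expansion is possible at that size.

```python
# Numerical illustration (my own sketch, not from the thread) of the subfield
# obstruction: in F = F_{p^2}, the subfield F_p has size |F|^{1/2}, and any
# polynomial with integer coefficients maps F_p x F_p back into F_p, so the
# image P(A x A) cannot expand beyond |F|^{1/2} when A is that subfield.

p = 7  # p = 3 mod 4, so x^2 + 1 is irreducible over F_p and F_{p^2} = F_p[i]

def add(a, b):
    """Add two elements a0 + a1*i and b0 + b1*i of F_{p^2}."""
    return ((a[0] + b[0]) % p, (a[1] + b[1]) % p)

def mul(a, b):
    """Multiply in F_{p^2} = F_p[x]/(x^2 + 1), i.e. using i^2 = -1."""
    (a0, a1), (b0, b1) = a, b
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def P(x, y):
    """A sample integer-coefficient polynomial, P(x, y) = x*y + x^2."""
    return add(mul(x, y), mul(x, x))

subfield = [(a, 0) for a in range(p)]  # the copy of F_p inside F_{p^2}
image = {P(x, y) for x in subfield for y in subfield}

# The image never leaves the subfield: |P(A x A)| <= |A| = |F|^{1/2}.
print(len(image), len(subfield), p * p)
assert image <= set(subfield)
```

For fields of prime order this obstruction disappears (there are no proper subfields), which is why the conjectured extension above is stated for prime order.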
