
Vitaly Bergelson, Tamar Ziegler, and I have just uploaded to the arXiv our joint paper “Multiple recurrence and convergence results associated to {{\bf F}_{p}^{\omega}}-actions“. This paper is primarily concerned with limit formulae in the theory of multiple recurrence in ergodic theory. Perhaps the most basic formula of this type is the mean ergodic theorem, which (among other things) asserts that if {(X,{\mathcal X}, \mu,T)} is a measure-preserving {{\bf Z}}-system (which, in this post, means that {(X,{\mathcal X}, \mu)} is a probability space and {T: X \rightarrow X} is measure-preserving and invertible, thus giving an action {(T^n)_{n \in {\bf Z}}} of the integers), and {f,g \in L^2(X,{\mathcal X}, \mu)} are functions, and {X} is ergodic (which means that {L^2(X,{\mathcal X}, \mu)} contains no {T}-invariant functions other than the constants (up to almost everywhere equivalence, of course)), then the average

\displaystyle  \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x)\ d\mu \ \ \ \ \ (1)

converges as {N \rightarrow \infty} to the expression

\displaystyle  (\int_X f(x)\ d\mu) (\int_X g(x)\ d\mu);

see e.g. this previous blog post. Informally, one can interpret this limit formula as an equidistribution result: if {x} is drawn at random from {X} (using the probability measure {\mu}), and {n} is drawn at random from {\{1,\ldots,N\}} for some large {N}, then the pair {(x, T^n x)} becomes uniformly distributed in the product space {X \times X} (using product measure {\mu \times \mu}) in the limit as {N \rightarrow \infty}.
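
As a quick numerical illustration of this (not from the paper; the rotation, the test functions, and all parameters below are arbitrary choices of mine), one can simulate the ergodic rotation {x \mapsto x + \sqrt{2} \hbox{ mod } 1} and watch the average (1) converge to the product of the integrals:

```python
import numpy as np

# Monte Carlo check of the mean ergodic theorem for the irrational rotation
# T: x -> x + alpha (mod 1), which is ergodic; f and g are arbitrary test
# functions.  (Toy illustration only; all parameters are made up.)
alpha = np.sqrt(2) % 1
f = lambda x: np.sin(2 * np.pi * x) + 2
g = lambda x: np.cos(2 * np.pi * x) + 1

x = np.random.rand(20000)   # samples from the invariant (Lebesgue) measure
N = 1000
avg = np.mean([np.mean(f(x) * g((x + n * alpha) % 1)) for n in range(1, N + 1)])
print(avg, np.mean(f(x)) * np.mean(g(x)))   # both should be close to 2.0
```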

If we allow {(X,\mu)} to be non-ergodic, then we still have a limit formula, but it is a bit more complicated. Let {{\mathcal X}^T} be the {T}-invariant measurable sets in {{\mathcal X}}; the {{\bf Z}}-system {(X, {\mathcal X}^T, \mu, T)} can then be viewed as a factor of the original system {(X, {\mathcal X}, \mu, T)}, which is equivalent (in the sense of measure-preserving systems) to a trivial system {(Z_0, {\mathcal Z}_0, \mu_{Z_0}, 1)} (known as the invariant factor) in which the shift is trivial. There is then a projection map {\pi_0: X \rightarrow Z_0} to the invariant factor which is a factor map, and the average (1) converges in the limit to the expression

\displaystyle  \int_{Z_0} (\pi_0)_* f(z) (\pi_0)_* g(z)\ d\mu_{Z_0}(z), \ \ \ \ \ (2)

where {(\pi_0)_*: L^2(X,{\mathcal X},\mu) \rightarrow L^2(Z_0,{\mathcal Z}_0,\mu_{Z_0})} is the pushforward map associated to the map {\pi_0: X \rightarrow Z_0}; see e.g. this previous blog post. We can interpret this as an equidistribution result. If {(x,T^n x)} is a pair as before, then we no longer expect complete equidistribution in {X \times X} in the non-ergodic case, because there are now non-trivial constraints relating {x} with {T^n x}; indeed, for any {T}-invariant function {f: X \rightarrow {\bf C}}, we have the constraint {f(x) = f(T^n x)}; putting all these constraints together we see that {\pi_0(x) = \pi_0(T^n x)} (for almost every {x}, at least). The limit (2) can be viewed as an assertion that the constraint {\pi_0(x) = \pi_0(T^n x)} is in some sense the “only” constraint between {x} and {T^n x}, and that the pair {(x,T^n x)} is uniformly distributed relative to this constraint.

Limit formulae are known for multiple ergodic averages as well, although the statement becomes more complicated. For instance, consider the expression

\displaystyle  \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x) h(T^{2n} x)\ d\mu \ \ \ \ \ (3)

for three functions {f,g,h \in L^\infty(X, {\mathcal X}, \mu)}; this is analogous to the combinatorial task of counting length three progressions in various sets. For simplicity we assume the system {(X,{\mathcal X},\mu,T)} to be ergodic. Naively one might expect this limit to then converge to

\displaystyle  (\int_X f\ d\mu) (\int_X g\ d\mu) (\int_X h\ d\mu)

which would roughly speaking correspond to an assertion that the triplet {(x,T^n x, T^{2n} x)} is asymptotically equidistributed in {X \times X \times X}. However, even in the ergodic case there can be additional constraints on this triplet that cannot be seen at the level of the individual pairs {(x,T^n x)}, {(x, T^{2n} x)}. The key obstruction here is that of eigenfunctions of the shift {T: X \rightarrow X}, that is to say non-trivial functions {f: X \rightarrow S^1} that obey the eigenfunction equation {Tf = \lambda f} almost everywhere for some constant (or {T}-invariant) {\lambda}. Each such eigenfunction generates a constraint

\displaystyle  f(x) \overline{f(T^n x)}^2 f(T^{2n} x) = 1 \ \ \ \ \ (4)

tying together {x}, {T^n x}, and {T^{2n} x}. However, it turns out that these are in some sense the only constraints on {x,T^n x, T^{2n} x} that are relevant for the limit (3). More precisely, if one sets {{\mathcal X}_1} to be the sub-{\sigma}-algebra of {{\mathcal X}} generated by the eigenfunctions of {T}, then it turns out that the factor {(X, {\mathcal X}_1, \mu, T)} is isomorphic to a shift system {(Z_1, {\mathcal Z}_1, \mu_{Z_1}, x \mapsto x+\alpha)} known as the Kronecker factor, for some compact abelian group {Z_1 = (Z_1,+)} and some (irrational) shift {\alpha \in Z_1}; the factor map {\pi_1: X \rightarrow Z_1} pushes eigenfunctions forward to (affine) characters on {Z_1}. It is then known that the limit of (3) is

\displaystyle  \int_\Sigma (\pi_1)_* f(x_1) (\pi_1)_* g(x_2) (\pi_1)_* h(x_3)\ d\mu_\Sigma(x_1,x_2,x_3)

where {\Sigma \subset Z_1^3} is the closed subgroup

\displaystyle  \Sigma = \{ (x_1,x_2,x_3) \in Z_1^3: x_1-2x_2+x_3=0 \}

and {\mu_\Sigma} is the Haar probability measure on {\Sigma}; see this previous blog post. The equation {x_1-2x_2+x_3=0} defining {\Sigma} corresponds to the constraint (4) mentioned earlier. Among other things, this limit formula implies Roth’s theorem, which in the context of ergodic theory is the assertion that the limit (or at least the limit inferior) of (3) is positive when {f=g=h} is non-negative and not identically vanishing.
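
For the record, the constraint (4) is verified by a one-line computation: if {Tf = \lambda f} with {|f| = |\lambda| = 1}, then {f(T^n x) = \lambda^n f(x)}, and hence

\displaystyle  f(x) \overline{f(T^n x)}^2 f(T^{2n} x) = \overline{\lambda}^{2n} \lambda^{2n} f(x) \overline{f(x)}^2 f(x) = |f(x)|^4 = 1.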

If one considers a quadruple average

\displaystyle  \frac{1}{N} \sum_{n=1}^N \int_X f(x) g(T^n x) h(T^{2n} x) k(T^{3n} x)\ d\mu \ \ \ \ \ (5)

(analogous to counting length four progressions) then the situation becomes more complicated still, even in the ergodic case. In addition to the (linear) eigenfunctions that already showed up in the computation of the triple average (3), a new type of constraint also arises from quadratic eigenfunctions {f: X \rightarrow S^1}, which obey an eigenfunction equation {Tf = \lambda f} in which {\lambda} is no longer constant, but is now a linear eigenfunction. For such functions, {f(T^n x)} behaves quadratically in {n}, and a short computation reveals the constraint

\displaystyle  f(x) \overline{f(T^n x)}^3 f(T^{2n} x)^3 \overline{f(T^{3n} x)} = 1 \ \ \ \ \ (6)

between {x}, {T^n x}, {T^{2n} x}, and {T^{3n} x} that is not detected at the triple average level. As it turns out, this is not the only type of constraint relevant for (5); there is a more general class of constraint involving two-step nilsystems which we will not detail here, but see e.g. this previous blog post for more discussion. Nevertheless there is still a similar limit formula to previous examples, involving a special factor {(Z_2, {\mathcal Z}_2, \mu_{Z_2}, S)} which turns out to be an inverse limit of two-step nilsystems; this limit theorem can be extracted from the structural theory in this paper of Host and Kra combined with a limit formula for nilsystems obtained by Lesigne, but will not be reproduced here. The pattern continues to higher averages (and higher step nilsystems); this was first done explicitly by Ziegler, and can also in principle be extracted from the structural theory of Host-Kra combined with nilsystem equidistribution results of Leibman. These sorts of limit formulae can lead to various recurrence results refining Roth’s theorem in various ways; see this paper of Bergelson, Host, and Kra for some examples of this.

The above discussion was concerned with {{\bf Z}}-systems, but one can adapt much of the theory to measure-preserving {G}-systems for other discrete countable abelian groups {G}, in which one now has a family {(T_g)_{g \in G}} of shifts indexed by {G} rather than a single shift, obeying the compatibility relation {T_{g+h}=T_g T_h}. The role of the intervals {\{1,\ldots,N\}} in this more general setting is replaced by that of Folner sequences. For arbitrary countable abelian {G}, the theory for double averages (1) and triple averages (3) is essentially identical to the {{\bf Z}}-system case. But when one turns to quadruple and higher limits, the situation becomes more complicated (and, for arbitrary {G}, still not fully understood). However one model case which is now well understood is the finite field case when {G = {\bf F}_p^\omega = \bigcup_{n=1}^\infty {\bf F}_p^n} is an infinite-dimensional vector space over a finite field {{\bf F}_p} (with the finite subspaces {{\bf F}_p^n} then being a good choice for the Folner sequence). Here, the analogue of the structural theory of Host and Kra was worked out by Vitaly, Tamar, and myself in these previous papers (treating the high characteristic and low characteristic cases respectively). In the finite field setting, it turns out that nilsystems no longer appear, and one only needs to deal with linear, quadratic, and higher order eigenfunctions (known collectively as phase polynomials). It is then natural to look for a limit formula that asserts, roughly speaking, that if {x} is drawn at random from an {{\bf F}_p^\omega}-system and {n} drawn randomly from a large subspace of {{\bf F}_p^\omega}, then the only constraints between {x, T^n x, \ldots, T^{(p-1)n} x} are those that arise from phase polynomials. The main theorem of this paper establishes this limit formula (which, again, is a little complicated to state explicitly and will not be done here). In particular, we establish for the first time that the limit actually exists (a result which, for {{\bf Z}}-systems, was one of the main results of this paper of Host and Kra).

As a consequence, we can recover finite field analogues of most of the results of Bergelson-Host-Kra, though interestingly some of the counterexamples demonstrating sharpness of their results for {{\bf Z}}-systems (based on Behrend set constructions) do not seem to be present in the finite field setting (cf. this previous blog post on the cap set problem). In particular, we are able to largely settle the question of when one has a Khintchine-type theorem that asserts that for any measurable set {A} in an ergodic {{\bf F}_p^\omega}-system and any {\epsilon>0}, one has

\displaystyle  \mu( T_{c_1 n} A \cap \ldots \cap T_{c_k n} A ) > \mu(A)^k - \epsilon

for a syndetic set of {n}, where {c_1,\ldots,c_k \in {\bf F}_p} are distinct residue classes. It turns out that Khintchine-type theorems always hold for {k=1,2,3} (with ergodicity not even required for {k=1,2}). For {k=4}, the theorem holds whenever {c_1,c_2,c_3,c_4} form a parallelogram, but not otherwise (though the counterexample here was such a painful computation that we ended up removing it from the paper, and may end up putting it online somewhere instead). For larger {k} we could show that the Khintchine property fails for generic choices of {c_1,\ldots,c_k}, though the problem of determining exactly which tuples exhibit this failure looked to be rather messy, and we did not completely settle it.

I’ve just uploaded to the arXiv my paper “Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets“, submitted to Contrib. Disc. Math. The motivation of this paper is to understand a certain polynomial variant of the sum-product phenomenon in finite fields. This phenomenon asserts that if {A} is a non-empty subset of a finite field {F}, then either the sumset {A+A := \{a+b: a,b \in A\}} or product set {A \cdot A := \{ab: a,b \in A \}} will be significantly larger than {A}, unless {A} is close to a subfield of {F} (or to {\{1\}}). In particular, in the regime when {A} is large, say {|F|^{1-c} < |A| \leq |F|}, one expects an expansion bound of the form

\displaystyle  |A+A| + |A \cdot A| \gg (|F|/|A|)^{c'} |A| \ \ \ \ \ (1)

for some absolute constants {c, c' > 0}. Results of this type are known; for instance, Hart, Iosevich, and Solymosi obtained precisely this bound for {(c,c')=(3/10,1/3)} (in the case when {|F|} is prime), which was then improved by Garaev to {(c,c')=(1/3,1/2)}.

We have focused here on the case when {A} is a large subset of {F}, but sum-product estimates are also extremely interesting in the opposite regime in which {A} is allowed to be small (see for instance the papers of Katz-Shen and Li and of Garaev for some recent work in this case, building on some older papers of Bourgain, Katz and myself and of Bourgain, Glibichuk, and Konyagin). However, the techniques used in these two regimes are rather different. For large subsets of {F}, it is often profitable to use techniques such as the Fourier transform or the Cauchy-Schwarz inequality to “complete” a sum over a large set (such as {A}) into a set over the entire field {F}, and then to use identities concerning complete sums (such as the Weil bound on complete exponential sums over a finite field). For small subsets of {F}, such techniques are usually quite inefficient, and one has to proceed by somewhat different combinatorial methods which do not try to exploit the ambient field {F}. But my paper focuses exclusively on the large {A} regime, and unfortunately does not directly say much (except through reasoning by analogy) about the small {A} case.

Note that it is necessary to have both {A+A} and {A \cdot A} appear on the left-hand side of (1). Indeed, if one just has the sumset {A+A}, then one can set {A} to be a long arithmetic progression to give counterexamples to (1). Similarly, if one just has a product set {A \cdot A}, then one can set {A} to be a long geometric progression. The sum-product phenomenon can thus be viewed as the assertion that a set cannot simultaneously behave like a long arithmetic progression and a long geometric progression, unless it is already very close to behaving like a subfield.
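
As a quick numerical illustration of this dichotomy (a toy computation; the prime, the set sizes, and the generators below are arbitrary choices):

```python
import random

# Toy comparison in F_p: an arithmetic progression has a tiny sumset, a
# geometric progression has a tiny product set, and a random set has neither.
# (Illustration only; p, the lengths, and the generators are arbitrary.)
p = 10007
A_ap  = [(5 + 3 * k) % p for k in range(100)]    # arithmetic progression
A_gp  = [pow(3, k, p) for k in range(100)]       # geometric progression
A_rnd = random.sample(range(1, p), 100)          # random set

for name, A in [("AP", A_ap), ("GP", A_gp), ("random", A_rnd)]:
    sums  = {(a + b) % p for a in A for b in A}
    prods = {(a * b) % p for a in A for b in A}
    print(name, len(sums), len(prods))
# AP: 199 sums but thousands of products; GP: 199 products but thousands
# of sums; the random set has thousands of each.
```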

Now we consider a polynomial variant of the sum-product phenomenon, where we consider a polynomial image

\displaystyle  P(A,A) := \{ P(a,b): a,b \in A \}

of a set {A \subset F} with respect to a polynomial {P: F \times F \rightarrow F}; we can also consider the asymmetric setting of the image

\displaystyle  P(A,B) := \{ P(a,b): a \in A,b \in B \}

of two subsets {A,B \subset F}. The regime we will be interested in is the one where the field {F} is large, and the subsets {A, B} of {F} are also large, but the polynomial {P} has bounded degree. Actually, for technical reasons it will not be enough for us to assume that {F} has large cardinality; we will also need to assume that {F} has large characteristic. (The two concepts are synonymous for fields of prime order, but not in general; for instance, the field with {2^n} elements becomes large as {n \rightarrow \infty} while the characteristic remains fixed at {2}, and is thus not going to be covered by the results in this paper.)

In this paper of Vu, it was shown that one could replace {A \cdot A} with {P(A,A)} in (1), thus obtaining a bound of the form

\displaystyle  |A+A| + |P(A,A)| \gg (|F|/|A|)^{c'} |A|

whenever {|A| \geq |F|^{1-c}} for some absolute constants {c, c' > 0}, unless the polynomial {P} had the degenerate form {P(x,y) = Q(L(x,y))} for some linear function {L: F \times F \rightarrow F} and polynomial {Q: F \rightarrow F}, in which case {P(A,A)} behaves too much like {A+A} to get reasonable expansion. In this paper, we focus instead on the question of bounding {P(A,A)} alone. In particular, one can ask to classify the polynomials {P} for which one has the weak expansion property

\displaystyle |P(A,A)| \gg (|F|/|A|)^{c'} |A|

whenever {|A| \geq |F|^{1-c}} for some absolute constants {c, c' > 0}. One can also ask for stronger versions of this expander property, such as the moderate expansion property

\displaystyle |P(A,A)| \gg |F|

whenever {|A| \geq |F|^{1-c}}, or the almost strong expansion property

\displaystyle |P(A,A)| \geq |F| - O( |F|^{1-c'})

whenever {|A| \geq |F|^{1-c}}. (One can consider even stronger expansion properties, such as the strong expansion property {|P(A,A)| \geq |F|-O(1)}, but it was shown by Gyarmati and Sarkozy that this property cannot hold for polynomials of two variables of bounded degree when {|F| \rightarrow \infty}.) One can also consider asymmetric versions of these properties, in which one obtains lower bounds on {|P(A,B)|} rather than {|P(A,A)|}.

The example of a long arithmetic or geometric progression shows that the polynomials {P(x,y) = x+y} or {P(x,y) = xy} cannot be expanders in any of the above senses, and a similar construction also shows that polynomials of the form {P(x,y) = Q(f(x)+f(y))} or {P(x,y) = Q(f(x) f(y))} for some polynomials {Q, f: F \rightarrow F} cannot be expanders. On the other hand, there are a number of results in the literature establishing expansion for various polynomials in two or more variables that are not of this degenerate form (in part because such results are related to incidence geometry questions in finite fields, such as the finite field version of the Erdos distinct distances problem). For instance, Solymosi established weak expansion for polynomials of the form {P(x,y) = f(x)+y} when {f} is a nonlinear polynomial, with generalisations by Hart, Li, and Shen for various polynomials of the form {P(x,y) = f(x) + g(y)} or {P(x,y) = f(x) g(y)}. Further examples of expanding polynomials appear in the work of Shkredov, Iosevich-Rudnev, and Bukh-Tsimerman, as well as the previously mentioned papers of Vu and of Hart-Li-Shen, and these papers in turn cite many further results in the spirit of the polynomial expansion bounds discussed here (for instance, dealing with the small {A} regime, or working in other fields such as {{\bf R}} instead of finite fields {F}). We will not summarise all these results here; they are summarised briefly in my paper, and in more detail in the papers of Hart-Li-Shen and of Bukh-Tsimerman. But we will single out one of the results of Bukh-Tsimerman, which is one of the most recent and general of these results, and the closest to the results of my own paper. Roughly speaking, it is shown in that paper that a polynomial {P(x,y)} of two variables and bounded degree will be a moderate expander if it is non-composite (in the sense that it does not take the form {P(x,y) = Q(R(x,y))} for some non-linear polynomial {Q} and some polynomial {R}, possibly having coefficients in the algebraic closure of {F}) and is monic in both {x} and {y} (thus it takes the form {P(x,y) = x^d + S(x,y)} for some {d \geq 1} and some polynomial {S} of degree at most {d-1} in {x}, and similarly with the roles of {x} and {y} reversed), unless {P} is of the form {P(x,y) = f(x) + g(y)} or {P(x,y) = f(x) g(y)} (in which case the expansion theory is covered to a large extent by the previous work of Hart, Li, and Shen).

Our first main result improves upon the Bukh-Tsimerman result by strengthening the notion of expansion and removing the non-composite and monic hypotheses, but imposes a condition of large characteristic. I’ll state the result here slightly informally as follows:

Theorem 1 (Criterion for moderate expansion) Let {P: F \times F \rightarrow F} be a polynomial of bounded degree over a finite field {F} of sufficiently large characteristic, and suppose that {P} is not of the form {P(x,y) = Q(f(x)+g(y))} or {P(x,y) = Q(f(x)g(y))} for some polynomials {Q,f,g: F \rightarrow F}. Then one has the (asymmetric) moderate expansion property

\displaystyle  |P(A,B)| \gg |F|

whenever {|A| |B| \ggg |F|^{2-1/8}}.
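
As a toy numerical illustration of this dichotomy (with an arbitrary small prime, far from the large-characteristic asymptotic regime the theorem actually concerns, and with example polynomials of my own choosing):

```python
# Toy comparison of a non-degenerate polynomial with a degenerate one of the
# shape Q(f(x)+g(y)).  The prime p and the set A are arbitrary, and of course
# a single small prime says nothing about the large-characteristic
# asymptotics in Theorem 1.
p = 2003
A = list(range(0, p, 2))                         # a dense set, |A| ~ p/2

P_good = lambda x, y: (x * x + x * y + y) % p    # one can check this is not
                                                 # of either excluded shape
P_bad  = lambda x, y: ((x * x + y * y) ** 2) % p # Q(f(x)+g(y)), Q(t) = t^2

for name, P in [("P_good", P_good), ("P_bad", P_bad)]:
    image = {P(a, b) for a in A for b in A}
    print(name, len(image), "of", p)
# P_good should hit almost all of F_p, while the image of P_bad is trapped
# inside the squares of F_p and so has size at most (p+1)/2.
```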

This is basically a sharp necessary and sufficient condition for asymmetric moderate expansion for polynomials of two variables. In the paper, analogous sufficient conditions for weak or almost strong expansion are also given, although these are not quite as satisfactory (particularly the conditions for almost strong expansion, which include a somewhat complicated algebraic condition which is not easy to check, and which I would like to simplify further, but was unable to).

The argument here resembles the Bukh-Tsimerman argument in many ways. One can view the result as an assertion about the expansion properties of the graph {\{ (a,b,P(a,b)): a,b \in F \}}, which can essentially be thought of as a somewhat sparse three-uniform hypergraph on {F}. Being sparse, it is difficult to directly apply techniques from dense graph or hypergraph theory for this situation; however, after a few applications of the Cauchy-Schwarz inequality, it turns out (as observed by Bukh and Tsimerman) that one can essentially convert the problem to one about the expansion properties of the set

\displaystyle  \{ (P(a,c), P(b,c), P(a,d), P(b,d)): a,b,c,d \in F \} \ \ \ \ \ (2)

(actually, one should view this as a multiset, but let us ignore this technicality) which one expects to be a dense set in {F^4}, except in the case when the associated algebraic variety

\displaystyle  \{ (P(a,c), P(b,c), P(a,d), P(b,d)): a,b,c,d \in \overline{F} \}

fails to be Zariski dense, but it turns out that in this case one can use some differential geometry and Riemann surface arguments (after first invoking the Lefschetz principle and the high characteristic hypothesis to work over the complex numbers instead of over a finite field) to show that {P} is of the form {Q(f(x)+g(y))} or {Q(f(x)g(y))}. This reduction is related to the classical fact that the only one-dimensional algebraic groups over the complex numbers are the additive group {({\bf C},+)}, the multiplicative group {({\bf C} \backslash \{0\},\times)}, or the elliptic curves (but the latter have a group law given by rational functions rather than polynomials, and so ultimately end up being eliminated from consideration, though they would play an important role if one wanted to study the expansion properties of rational functions).

It remains to understand the structure of the set (2). To understand dense graphs or hypergraphs, one of the standard tools of choice is the Szemerédi regularity lemma, which carves up such graphs into a bounded number of cells, with the graph behaving pseudorandomly on most pairs of cells. However, the bounds in this lemma are notoriously poor (the regularity obtained is an inverse tower exponential function of the number of cells), which makes the lemma unsuitable for the type of expansion properties we seek (in which we want to deal with sets {A} of polynomial sparsity, e.g. {|A| \sim |F|^{1-c}}). Fortunately, in the case of sets such as (2) which are definable in the language of rings, it turns out that a much stronger regularity lemma is available, which I call the “algebraic regularity lemma”. I’ll state it (again, slightly informally) in the context of graphs as follows:

Lemma 2 (Algebraic regularity lemma) Let {F} be a finite field of large characteristic, and let {V, W} be definable sets over {F} of bounded complexity (i.e. {V, W} are subsets of {F^n}, {F^m} for some bounded {n,m} that can be described by some first-order predicate in the language of rings of bounded length and involving boundedly many constants). Let {E} be a definable subset of {V \times W}, again of bounded complexity (one can view {E} as a bipartite graph connecting {V} and {W}). Then one can partition {V, W} into a bounded number of cells {V_1,\ldots,V_a}, {W_1,\ldots,W_b}, still definable with bounded complexity, such that for all pairs {i=1,\ldots,a}, {j=1,\ldots,b}, one has the regularity property

\displaystyle  |E \cap (A \times B)| = d_{ij} |A| |B| + O( |F|^{-1/4} |V| |W| )

for all {A \subset V_i, B \subset W_j}, where {d_{ij} := \frac{|E \cap (V_i \times W_j)|}{|V_i| |W_j|}} is the density of {E} in {V_i \times W_j}.

This lemma resembles the Szemerédi regularity lemma, but regularises all pairs of cells (not just most pairs), and the regularity is of polynomial strength in {|F|}, rather than inverse tower exponential in the number of cells. Also, the cells are not arbitrary subsets of {V,W}, but are themselves definable with bounded complexity, which turns out to be crucial for applications. I am optimistic that this lemma will be useful not just for studying expanding polynomials, but for many other combinatorial questions involving dense subsets of definable sets over finite fields.
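
To get some feel for the square-root cancellation underlying this polynomial strength, one can experiment with a simple definable graph such as {E = \{(x,y): x+y \hbox{ is a nonzero square}\}}; this is a toy stand-in of my own choosing (not an example from the paper), with arbitrary parameters:

```python
import random

# Discrepancy experiment for the definable bipartite graph
#   E = {(x, y) in F_p x F_p : x + y is a nonzero square}.
# The edge count over A x B should be ~ (1/2)|A||B|, with an error that is a
# small power of p times sqrt(|A||B|) -- a polynomial-strength gain,
# consistent with the algebraic regularity lemma (toy illustration only).
p = 1009
squares = {(t * t) % p for t in range(1, p)}

A = random.sample(range(p), 400)
B = random.sample(range(p), 400)
edges = sum((a + b) % p in squares for a in A for b in B)
main = 0.5 * len(A) * len(B)
print(edges, main, abs(edges - main))   # error << main term
```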

The above lemma is stated for graphs {E \subset V \times W}, but one can iterate it to obtain an analogous regularisation of hypergraphs {E \subset V_1 \times \ldots \times V_k} for any bounded {k} (for application to (2), we need {k=4}). This hypergraph regularity lemma, by the way, is not analogous to the strong hypergraph regularity lemmas of Rodl et al. and Gowers developed in the last six or so years, but closer in spirit to the older (but weaker) hypergraph regularity lemma of Chung which gives the same “order {1}” regularity that the graph regularity lemma gives, rather than higher order regularity.

One feature of the proof of Lemma 2 which I found striking was the need to use some fairly high powered technology from algebraic geometry, and in particular the Lang-Weil bound on counting points in varieties over a finite field (discussed in this previous blog post), and also the theory of the etale fundamental group. Let me try to briefly explain why this is the case. A model example of a definable set of bounded complexity {E} is a set {E \subset F^n \times F^m} of the form

\displaystyle  E = \{ (x,y) \in F^n \times F^m: \exists t \in F; P(x,y,t)=0 \}

for some polynomial {P: F^n \times F^m \times F \rightarrow F}. (Actually, it turns out that one can essentially write all definable sets as an intersection of sets of this form; see this previous blog post for more discussion.) To regularise the set {E}, it is convenient to square the adjacency matrix, which soon leads to the study of counting functions such as

\displaystyle  \mu(x,x') := | \{ (y,t,t') \in F^m \times F \times F: P(x,y,t) = P(x',y,t') = 0 \}|.

If one can show that this function {\mu} is “approximately finite rank”, in the sense that (modulo lower order errors of size {O(|F|^{-1/2})} relative to the main term) this quantity depends only on a bounded number of bits of information about {x} and a bounded number of bits of information about {x'}, then a little linear algebra will give the required regularity result.
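
One can observe this “approximately finite rank” behaviour experimentally in a toy case; the polynomial below is a hypothetical example of my own choosing, not one from the paper:

```python
# Toy instance of "approximately finite rank": over F_p, take n = m = 1 and
# the (hypothetical) example P(x, y, t) = t^2 - x*y - 1, so that
#   mu(x, x') = #{(y, t, t') : t^2 = x*y + 1, t'^2 = x'*y + 1}.
# Up to O(sqrt(p)) fluctuations, mu(x, x') should depend only on a few
# "bits" of (x, x'), such as whether x = x' or x = 0.
p = 101
nsol = [0] * p                      # nsol[a] = #{t in F_p : t^2 = a}
for t in range(p):
    nsol[(t * t) % p] += 1

def mu(x, xp):
    return sum(nsol[(x * y + 1) % p] * nsol[(xp * y + 1) % p] for y in range(p))

for x, xp in [(3, 7), (12, 55), (88, 2), (5, 5), (9, 9), (0, 7)]:
    print(x, xp, mu(x, xp))
# Generic pairs give mu = p + O(1); diagonal pairs x = x' give mu = 2p - 1;
# x = 0 (with x' nonzero) gives mu = 2p exactly.
```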

One can recognise {\mu(x,x')} as counting {F}-points of a certain algebraic variety

\displaystyle  V_{x,x'} := \{ (y,t,t') \in \overline{F}^m \times \overline{F} \times \overline{F}: P(x,y,t) = P(x',y,t') = 0 \}.

The Lang-Weil bound (discussed in this previous post) provides a formula for this count, in terms of the number {c(x,x')} of geometrically irreducible components of {V_{x,x'}} that are defined over {F} (or equivalently, are invariant with respect to the Frobenius endomorphism associated to {F}). So the problem boils down to ensuring that this quantity {c(x,x')} is “generically bounded rank”, in the sense that for generic {x,x'}, its value depends only on a bounded number of bits of {x} and a bounded number of bits of {x'}.

Here is where the étale fundamental group comes in. One can view {V_{x,x'}} as a fibre product {V_x \times_{\overline{F}^m} V_{x'}} of the varieties

\displaystyle  V_x := \{ (y,t) \in \overline{F}^m \times \overline{F}: P(x,y,t) = 0 \}

and

\displaystyle  V_{x'} := \{ (y,t) \in \overline{F}^m \times \overline{F}: P(x',y,t) = 0 \}

over {\overline{F}^m}. If one is in sufficiently high characteristic (or even better, in zero characteristic, which one can reduce to by an ultraproduct (or nonstandard analysis) construction, similar to that discussed in this previous post), the varieties {V_x,V_{x'}} are generically finite étale covers of {\overline{F}^m}, and the fibre product {V_x \times_{\overline{F}^m} V_{x'}} is then also generically a finite étale cover. One can count the components of a finite étale cover of a connected variety by counting the number of orbits of the étale fundamental group acting on a fibre of that variety (much as the number of components of a cover of a connected manifold is the number of orbits of the topological fundamental group acting on that fibre). So if one understands the étale fundamental group of a certain generic subset of {\overline{F}^m} (formed by intersecting together an {x}-dependent generic subset of {\overline{F}^m} with an {x'}-dependent generic subset), this in principle controls {c(x,x')}. It turns out that one can decouple the {x} and {x'} dependence of this fundamental group by using an étale version of the van Kampen theorem for the fundamental group, which I discussed in this previous blog post. With this fact (and another deep fact about the étale fundamental group in zero characteristic, namely that it is topologically finitely generated), one can obtain the desired generic bounded rank property of {c(x,x')}, which gives the regularity lemma.

In order to expedite the deployment of all this algebraic geometry (as well as some Riemann surface theory), it is convenient to use the formalism of nonstandard analysis (or the ultraproduct construction), which among other things can convert quantitative, finitary problems in large characteristic into equivalent qualitative, infinitary problems in zero characteristic (in the spirit of this blog post). This allows one to use several tools from those fields as “black boxes”; not just the theory of étale fundamental groups (which are considerably simpler and more favorable in characteristic zero than they are in positive characteristic), but also some results limiting the morphisms between compact Riemann surfaces of high genus (such as the de Franchis theorem, the Riemann-Hurwitz formula, or the fact that all morphisms between elliptic curves are essentially group homomorphisms), which would be quite unwieldy to utilise if one did not first pass to the zero characteristic case (and thence to the complex case) via the ultraproduct construction (followed by the Lefschetz principle).

I found this project to be particularly educational for me, as it forced me to wander outside of my usual range by quite a bit in order to pick up the tools from algebraic geometry and Riemann surfaces that I needed (in particular, I read through several chapters of EGA and SGA for the first time). This did however put me in the slightly unnerving position of having to use results (such as the Riemann existence theorem) whose proofs I have not fully verified for myself, but which are easy to find in the literature, and widely accepted in the field. I suppose this type of dependence on results in the literature is more common in the more structured fields of mathematics than it is in analysis, which by its nature has fewer reusable black boxes, and so key tools often need to be rederived and modified for each new application. (This distinction is discussed further in this article of Gowers.)

One of the first non-trivial theorems one encounters in classical algebraic geometry is Bézout’s theorem, which we will phrase as follows:

Theorem 1 (Bézout’s theorem) Let {k} be a field, and let {P, Q \in k[x,y]} be non-zero polynomials in two variables {x,y} with no common factor. Then the two curves {\{ (x,y) \in k^2: P(x,y) = 0\}} and {\{ (x,y) \in k^2: Q(x,y) = 0\}} have no common components, and intersect in at most {\hbox{deg}(P) \hbox{deg}(Q)} points.

This theorem can be proven by a number of means, for instance by using the classical tool of resultants. It has many strengthenings, generalisations, and variants; see for instance this previous blog post on Bézout’s inequality. Bézout’s theorem asserts a fundamental algebraic dichotomy, of importance in combinatorial incidence geometry: any two algebraic curves either share a common component, or else have a bounded finite intersection; there is no intermediate case in which the intersection is unbounded in cardinality, but falls short of a common component. This dichotomy is closely related to the integrality gap in algebraic dimension: an algebraic set can have an integer dimension such as {0} or {1}, but cannot attain any intermediate dimensions such as {1/2}. This stands in marked contrast to sets of analytic, combinatorial, or probabilistic origin, whose “dimension” is typically not necessarily constrained to be an integer.
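
Since resultants were just mentioned, here is that computation in a computer algebra system (a toy example with an arbitrary pair of curves):

```python
from sympy import symbols, resultant, degree, solve

# Bezout via resultants (toy example): eliminating y from P and Q gives a
# one-variable polynomial whose degree is at most deg(P) * deg(Q), bounding
# the number of intersection points.
x, y = symbols('x y')
P = y - x**2             # a parabola (degree 2)
Q = x**2 + y**2 - 1      # the unit circle (degree 2)

R = resultant(P, Q, y)   # polynomial in x; its roots are the intersection x's
print(R, degree(R, x))   # degree(R) <= 2 * 2 = 4
print(solve([P, Q], [x, y]))   # at most 4 common points, as Bezout predicts
```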

Bézout’s inequality tells us, roughly speaking, that the intersection of a curve of degree {D_1} and a curve of degree {D_2} forms a set of at most {D_1 D_2} points. One can consider the converse question: given a set {S} of {N} points in the plane {k^2}, can one find two curves of degrees {D_1,D_2} with {D_1 D_2 = O(N)} and no common components, whose intersection contains these points?

A model example that supports the possibility of such a converse is a grid {S = A \times B} that is a Cartesian product of two finite subsets {A, B} of {k} with {|A| |B| = N}. In this case, one can take one curve to be the union of {|A|} vertical lines, and the other curve to be the union of {|B|} horizontal lines, to obtain the required decomposition. Thus, if the proposed converse to Bézout’s inequality held, it would assert that any set of {N} points was essentially behaving like a “nonlinear grid” of size {N}.
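
In code, the grid decomposition is just a product of linear factors (a minimal sketch with made-up grid points):

```python
from sympy import symbols, Mul

# The grid S = A x B as an intersection of two curves: P is a union of |A|
# vertical lines and Q a union of |B| horizontal lines (toy data).
x, y = symbols('x y')
A = [0, 1, 4, 9]        # |A| = 4
B = [2, 3, 5]           # |B| = 3, so N = |A||B| = 12

P = Mul(*[x - a for a in A])    # degree |A|
Q = Mul(*[y - b for b in B])    # degree |B|

print(all(P.subs(x, a) == 0 and Q.subs(y, b) == 0 for a in A for b in B))
# True: every grid point lies on both curves, and deg(P)*deg(Q) = N exactly.
```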

Unfortunately, the naive converse to Bézout’s theorem is false. A counterexample can be given by considering a set {S = S_1 \cup S_2} of {2N} points for some large perfect square {N}, where {S_1} is a {\sqrt{N}} by {\sqrt{N}} grid of the form described above, and {S_2} consists of {N} points on a line {\ell} (e.g. a {1 \times N} or {N \times 1} grid). Each of the two component sets {S_1, S_2} can be written as the intersection between two curves whose degrees multiply up to {N}; in the case of {S_1}, we can take the two families of parallel lines (viewed as reducible curves of degree {\sqrt{N}}) as the curves, and in the case of {S_2}, one can take {\ell} as one curve, and the graph of a degree {N} polynomial on {\ell} vanishing on {S_2} for the second curve. But, if {N} is large enough, one cannot cover {S} by the intersection of a single pair {\gamma_1, \gamma_2} of curves with no common components whose degrees {D_1,D_2} multiply up to {D_1 D_2 = O(N)}. Indeed, if this were the case, then without loss of generality we may assume that {D_1 \leq D_2}, so that {D_1 = O(\sqrt{N})}. By Bézout’s theorem, {\gamma_1} either contains {\ell}, or intersects {\ell} in at most {O(D_1) = O(\sqrt{N})} points. Thus, in order for {\gamma_1} to capture all of {S}, it must contain {\ell}, which forces {\gamma_2} to not contain {\ell}. But {\gamma_2} has to intersect {\ell} in {N} points, so by Bézout’s theorem again we have {D_2 \geq N}, thus {D_1 = O(1)}. But then (by more applications of Bézout’s theorem) {\gamma_1} can only capture {O(\sqrt{N})} of the {N} points of {S_1}, a contradiction.

But the above counterexample suggests that even if an arbitrary set of {N} (or {2N}) points cannot be covered by the single intersection of a pair of curves with degree multiplying up to {O(N)}, one may be able to cover such a set by a small number of such intersections. The purpose of this post is to record the simple observation that this is, indeed, the case:

Theorem 2 (Partial converse to Bézout’s theorem) Let {k} be a field, and let {S} be a set of {N} points in the plane {k^2} for some {N > 1}. Then one can find {m = O(\log N)} and pairs {P_i,Q_i \in k[x,y]} of coprime non-zero polynomials for {i=1,\ldots,m} such that

\displaystyle  S \subset \bigcup_{i=1}^m \{ (x,y) \in k^2: P_i(x,y) = Q_i(x,y) = 0 \} \ \ \ \ \ (1)

and

\displaystyle  \sum_{i=1}^m \hbox{deg}(P_i) \hbox{deg}(Q_i) = O( N ). \ \ \ \ \ (2)

Informally, every finite set in the plane is (a dense subset of) the union of logarithmically many nonlinear grids. The presence of the logarithm is necessary, as can be seen by modifying the {S_1 \cup S_2} example to be the union of logarithmically many Cartesian products of distinct dimensions, rather than just a pair of such products.

Unfortunately I do not know of any application of this converse, but I thought it was cute anyways. The proof is given below the fold.


One of the basic problems in analytic number theory is to obtain bounds and asymptotics for sums of the form

\displaystyle  \sum_{n \leq x} f(n)

in the limit {x \rightarrow \infty}, where {n} ranges over natural numbers less than {x}, and {f: {\bf N} \rightarrow {\bf C}} is some arithmetic function of number-theoretic interest. (It is also often convenient to replace this sharply truncated sum with a smoother sum such as {\sum_n f(n) \psi(n/x)}, but we will not discuss this technicality here.) For instance, the prime number theorem is equivalent to the assertion

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + o(x)

where {\Lambda} is the von Mangoldt function, while the Riemann hypothesis is equivalent to the stronger assertion

\displaystyle  \sum_{n \leq x} \Lambda(n) = x + O(x^{1/2+o(1)}).

It is thus of interest to develop techniques to estimate such sums {\sum_{n \leq x} f(n)}. Of course, the difficulty of this task depends on how “nice” the function {f} is. The functions {f} that come up in number theory lie on a broad spectrum of “niceness”, with some particularly nice functions being quite easy to sum, and some being insanely difficult.
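
(For concreteness, here is a quick, unoptimised numerical tabulation of the Chebyshev sum {\sum_{n \leq x} \Lambda(n)} from the prime number theorem example above; the cutoffs are arbitrary.)

```python
import math
from sympy import primerange

# Tabulate psi(x) = sum_{n <= x} Lambda(n) against x, i.e. the prime number
# theorem in Chebyshev form.  Quick sketch, not optimised.
def psi(x):
    total = 0.0
    for p in primerange(2, x + 1):
        pk = p
        while pk <= x:              # Lambda(p^k) = log p at prime powers
            total += math.log(p)
            pk *= p
    return total

for x in [10**3, 10**4, 10**5]:
    print(x, round(psi(x)), psi(x) / x)   # the ratio tends to 1
```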

At the easiest end of the spectrum are those functions {f} that exhibit some sort of regularity or “smoothness”. Examples of smoothness include “Archimedean” smoothness, in which {f(n)} is the restriction of some smooth function {f: {\bf R} \rightarrow {\bf C}} from the reals to the natural numbers, and the derivatives of {f} are well controlled. A typical example is

\displaystyle  \sum_{n \leq x} \log n.

One can already get quite good bounds on this quantity by comparison with the integral {\int_1^x \log t\ dt}, namely

\displaystyle  \sum_{n \leq x} \log n = x \log x - x + O(\log x),

with sharper bounds available by using tools such as the Euler-Maclaurin formula (see this blog post). Exponentiating such asymptotics, incidentally, leads to one of the standard proofs of Stirling’s formula (as discussed in this blog post).

One can also consider “non-Archimedean” notions of smoothness, such as periodicity relative to a small period {q}. Indeed, if {f} is periodic with period {q} (and is thus essentially a function on the cyclic group {{\bf Z}/q{\bf Z}}), then one has the easy bound

\displaystyle  \sum_{n \leq x} f(n) = \frac{x}{q} \sum_{n \in {\bf Z}/q{\bf Z}} f(n) + O( \sum_{n \in {\bf Z}/q{\bf Z}} |f(n)| ).

In particular, we have the fundamental estimate

\displaystyle  \sum_{n \leq x: q|n} 1 = \frac{x}{q} + O(1). \ \ \ \ \ (1)

This is a good estimate when {q} is much smaller than {x}, but as {q} approaches {x} in magnitude, the error term {O(1)} begins to overwhelm the main term {\frac{x}{q}}, and one needs much more delicate information on the fractional part of {\frac{x}{q}} in order to obtain good estimates at this point.

One can also consider functions {f} which combine “Archimedean” and “non-Archimedean” smoothness into an “adelic” smoothness. We will not define this term precisely here (though the concept of a Schwartz-Bruhat function is one way to capture this sort of concept), but a typical example might be

\displaystyle  \sum_{n \leq x} \chi(n) \log n

where {\chi} is periodic with some small period {q}. By using techniques such as summation by parts, one can estimate such sums using the techniques used to estimate sums of periodic functions or functions with (Archimedean) smoothness.

Another class of functions that is reasonably well controlled are the multiplicative functions, in which {f(nm) = f(n) f(m)} whenever {n,m} are coprime. Here, one can use the powerful techniques of multiplicative number theory, for instance by working with the Dirichlet series

\displaystyle  \sum_{n=1}^\infty \frac{f(n)}{n^s}

which are clearly related to the partial sums {\sum_{n \leq x} f(n)} (essentially via the Mellin transform, a cousin of the Fourier and Laplace transforms); for this post we ignore the (important) issue of how to make sense of this series when it is not absolutely convergent (but see this previous blog post for more discussion). A primary reason that this technique is effective is that the Dirichlet series of a multiplicative function factorises as an Euler product

\displaystyle  \sum_{n=1}^\infty \frac{f(n)}{n^s} = \prod_p (\sum_{j=0}^\infty \frac{f(p^j)}{p^{js}}).

One also obtains similar types of representations for functions that are not quite multiplicative, but are closely related to multiplicative functions, such as the von Mangoldt function {\Lambda} (whose Dirichlet series {\sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}} is not given by an Euler product, but instead by the logarithmic derivative of an Euler product).
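
As a quick sanity check of the Euler product identity (with arbitrary truncation levels, for the simplest multiplicative function {f = 1} at {s = 2}):

```python
import math
from sympy import primerange

# Truncated Dirichlet series vs truncated Euler product for f(n) = 1, s = 2:
# both should approach zeta(2) = pi^2/6.  (Truncation levels are arbitrary.)
s = 2.0
series = sum(n ** -s for n in range(1, 100001))
product = 1.0
for p in primerange(2, 1000):
    product *= 1 / (1 - p ** -s)   # the local factor sum_j p^(-j*s)
print(series, product, math.pi ** 2 / 6)
```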

Moving another notch along the spectrum between well-controlled and ill-controlled functions, one can consider functions {f} that are divisor sums such as

\displaystyle  f(n) = \sum_{d \leq R; d|n} g(d) = \sum_{d \leq R} 1_{d|n} g(d)

for some other arithmetic function {g}, and some level {R}. This is a linear combination of periodic functions {1_{d|n} g(d)} and is thus technically periodic in {n} (with period equal to the least common multiple of all the numbers from {1} to {R}), but in practice this period is far too large to be useful (except for extremely small levels {R}, e.g. {R = O(\log x)}). Nevertheless, we can still control the sum {\sum_{n \leq x} f(n)} simply by rearranging the summation:

\displaystyle  \sum_{n \leq x} f(n) = \sum_{d \leq R} g(d) \sum_{n \leq x: d|n} 1

and thus by (1) one can bound this by the sum of a main term {x \sum_{d \leq R} \frac{g(d)}{d}} and an error term {O( \sum_{d \leq R} |g(d)| )}. As long as the level {R} is significantly less than {x}, one may expect the main term to dominate, and one can often estimate this term by a variety of techniques (for instance, if {g} is multiplicative, then multiplicative number theory techniques are quite effective, as mentioned previously). Similarly for other slight variants of divisor sums, such as expressions of the form

\displaystyle  \sum_{d \leq R; d | n} g(d) \log \frac{n}{d}

or expressions of the form

\displaystyle  \sum_{d \leq R} F_d(n)

where each {F_d} is periodic with period {d}.

One of the simplest examples of this comes when estimating the divisor function

\displaystyle  \tau(n) := \sum_{d|n} 1,

which counts the number of divisors of {n}. This is a multiplicative function, and is therefore most efficiently estimated using the techniques of multiplicative number theory; but for reasons that will become clearer later, let us “forget” the multiplicative structure and estimate the above sum by more elementary methods. By applying the preceding method, we see that

\displaystyle  \sum_{n \leq x} \tau(n) = \sum_{d \leq x} \sum_{n \leq x:d|n} 1

\displaystyle  = \sum_{d \leq x} (\frac{x}{d} + O(1))

\displaystyle  = x \log x + O(x). \ \ \ \ \ (2)

Here, we are (barely) able to keep the error term smaller than the main term; this is right at the edge of the divisor sum method, because the level {R} in this case is equal to {x}. Unfortunately, at this high choice of level, it is not always possible to keep the error term under control like this. For instance, if one wishes to use the standard divisor sum representation

\displaystyle  \Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d},

where {\mu} is the Möbius function, to compute {\sum_{n \leq x} \Lambda(n)}, then one ends up looking at

\displaystyle  \sum_{n \leq x} \Lambda(n) = \sum_{d \leq x} \mu(d) \sum_{n \leq x:d|n} \log \frac{n}{d}

\displaystyle  = \sum_{d \leq x} \mu(d) ( \frac{x}{d} \log \frac{x}{d} - \frac{x}{d} + O(\log \frac{x}{d}) ).

From Dirichlet series methods, it is not difficult to establish the identities

\displaystyle  \lim_{s\rightarrow 1^+} \sum_{n=1}^\infty \frac{\mu(n)}{n^s} = 0

and

\displaystyle  \lim_{s \rightarrow 1^+} \sum_{n=1}^\infty \frac{\mu(n) \log n}{n^s} = -1.

This suggests (but does not quite prove) that one has

\displaystyle  \sum_{n=1}^\infty \frac{\mu(n)}{n} = 0 \ \ \ \ \ (3)

and

\displaystyle  \sum_{n=1}^\infty \frac{\mu(n)\log n}{n} = -1 \ \ \ \ \ (4)

in the sense of conditionally convergent series. Assuming one can justify this (which, ultimately, requires one to exclude zeroes of the Riemann zeta function on the line {\hbox{Re}(s)=1}, as discussed in this previous post), one is eventually left with the estimate {x + O(x)}, which is useless as a lower bound (and recovers only the classical Chebyshev estimate {\sum_{n \leq x} \Lambda(n) \ll x} as the upper bound). The inefficiency here when compared to the situation with the divisor function {\tau} can be attributed to the signed nature of the Möbius function {\mu(n)}, which causes some cancellation in the divisor sum expansion that needs to be compensated for with improved estimates.
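
(As an aside, the conditional convergence in (3) and (4) can be observed numerically, though it is very slow; the following toy sieve computation, with an arbitrary cutoff, illustrates this.)

```python
import math

# Partial sums of mu(n)/n and mu(n)*log(n)/n, which should creep towards
# 0 and -1 respectively; the convergence is notoriously slow.
N = 10**6
mu = [1] * (N + 1)
is_comp = [False] * (N + 1)
for p in range(2, N + 1):
    if not is_comp[p]:                      # p is prime
        for m in range(p, N + 1, p):
            if m > p:
                is_comp[m] = True
            mu[m] = -mu[m]                  # flip sign for each prime factor
        for m in range(p * p, N + 1, p * p):
            mu[m] = 0                       # not squarefree => mu = 0
print(sum(mu[n] / n for n in range(1, N + 1)))                # -> 0
print(sum(mu[n] * math.log(n) / n for n in range(2, N + 1)))  # -> -1
```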

However, there are a number of tricks available to reduce the level of divisor sums. The simplest comes from exploiting the change of variables {d \mapsto \frac{n}{d}}, which can in principle reduce the level by a square root. For instance, when computing the divisor function {\tau(n) = \sum_{d|n} 1}, one can observe using this change of variables that every divisor of {n} above {\sqrt{n}} is paired with one below {\sqrt{n}}, and so we have

\displaystyle  \tau(n) = 2 \sum_{d \leq \sqrt{n}: d|n} 1 \ \ \ \ \ (5)

except when {n} is a perfect square, in which case one must subtract one from the right-hand side. Using this reduced-level divisor sum representation, one can obtain an improvement to (2), namely

\displaystyle  \sum_{n \leq x} \tau(n) = x \log x + (2\gamma-1) x + O(\sqrt{x}).

This type of argument is also known as the Dirichlet hyperbola method. A variant of this argument can also deduce the prime number theorem from (3), (4) (and with some additional effort, one can even drop the use of (4)); this is discussed at this previous blog post.
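
Here is the hyperbola method in code (a short sketch; the identity used is the standard {\sum_{n \leq x} \tau(n) = 2\sum_{d \leq \sqrt{x}} \lfloor x/d \rfloor - \lfloor \sqrt{x} \rfloor^2}):

```python
import math

# The Dirichlet hyperbola method: sum_{n <= x} tau(n) in O(sqrt(x)) steps,
# compared against the asymptotic x log x + (2*gamma - 1) x.
def tau_sum(x):
    r = math.isqrt(x)
    return 2 * sum(x // d for d in range(1, r + 1)) - r * r

gamma = 0.5772156649015329                  # Euler-Mascheroni constant
for x in [10**4, 10**6, 10**8]:
    err = tau_sum(x) - (x * math.log(x) + (2 * gamma - 1) * x)
    print(x, tau_sum(x), err)               # the error stays O(sqrt(x))
```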

Using this square root trick, one can now also control divisor sums such as

\displaystyle  \sum_{n \leq x} \tau(n^2+1).

(Note that {\tau(n^2+1)} has no multiplicativity properties in {n}, and so multiplicative number theory techniques cannot be directly applied here.) The level of the divisor sum here is initially of order {x^2}, which is too large to be useful; but using the square root trick, we can expand this expression as

\displaystyle  2 \sum_{n \leq x} \sum_{d \leq n: d | n^2+1} 1

which one can rewrite as

\displaystyle  2 \sum_{d \leq x} \sum_{d \leq n \leq x: n^2+1 = 0 \hbox{ mod } d} 1.

The constraint {n^2+1=0 \hbox{ mod } d} is periodic in {n} with period {d}, so we can write this as

\displaystyle  2 \sum_{d \leq x} ( \frac{x}{d} \rho(d) + O(\rho(d)) )

where {\rho(d)} is the number of solutions in {{\bf Z}/d{\bf Z}} to the equation {n^2+1 = 0 \hbox{ mod } d}, and so

\displaystyle  \sum_{n \leq x} \tau(n^2+1) = 2x \sum_{d \leq x} \frac{\rho(d)}{d} + O(\sum_{d \leq x} \rho(d)).

The function {\rho} is multiplicative, and can be easily computed at primes {p} and prime powers {p^j} using tools such as quadratic reciprocity and Hensel’s lemma. For instance, since {-1} is a quadratic residue modulo {p} precisely when {p=1 \hbox{ mod } 4} (the first supplement to quadratic reciprocity), {\rho(p)} is equal to {2} for {p=1 \hbox{ mod } 4} and {0} for {p=3 \hbox{ mod } 4}. From this and standard multiplicative number theory methods (e.g. by obtaining asymptotics on the Dirichlet series {\sum_d \frac{\rho(d)}{d^s}}), one eventually obtains the asymptotic

\displaystyle  \sum_{d \leq x} \frac{\rho(d)}{d} = \frac{3}{2\pi} \log x + O(1)

and also

\displaystyle  \sum_{d \leq x} \rho(d) = O(x)

and thus

\displaystyle  \sum_{n \leq x} \tau(n^2+1) = \frac{3}{\pi} x \log x + O(x).

Similar arguments give asymptotics for {\tau} on other quadratic polynomials; see for instance this paper of Hooley and these papers by McKee. Note that the irreducibility of the polynomial will be important. If one considers instead a sum involving a reducible polynomial, such as {\sum_{n \leq x} \tau(n^2-1)}, then the analogous quantity {\rho(n)} becomes significantly larger, leading to a larger growth rate (of order {x \log^2 x} rather than {x\log x}) for the sum.
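
Before moving on, the asymptotic just derived is easy to probe by brute force (a slow but transparent sketch, with a toy cutoff):

```python
import math
from sympy import divisor_count

# Brute-force probe of sum_{n <= x} tau(n^2 + 1) ~ (3/pi) x log x.
x = 5000
total = sum(divisor_count(n * n + 1) for n in range(1, x + 1))
main = (3 / math.pi) * x * math.log(x)
print(total, main, total / main)
# The ratio approaches 1 only slowly, since the O(x) secondary term is not
# negligible at this tiny scale.
```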

However, the square root trick is insufficient by itself to deal with higher order sums involving the divisor function, such as

\displaystyle \sum_{n \leq x} \tau(n^3+1);

the level here is initially of order {x^3}, and the square root trick only lowers this to about {x^{3/2}}, creating an error term that overwhelms the main term. And indeed, the asymptotic for this sum has not yet been rigorously established (although if one heuristically drops error terms, one can arrive at a reasonable conjecture for it); some results are known, however, if one averages over additional parameters (see e.g. this paper of Greaves, or this paper of Matthiesen).

Nevertheless, there is an ingenious argument of Erdös that allows one to obtain good upper and lower bounds for these sorts of sums, in particular establishing the asymptotic

\displaystyle  x \log x \ll \sum_{n \leq x} \tau(P(n)) \ll x \log x \ \ \ \ \ (6)

for any fixed irreducible non-constant polynomial {P} that maps {{\bf N}} to {{\bf N}} (with the implied constants depending of course on the choice of {P}). There is also the related moment bound

\displaystyle  \sum_{n \leq x} \tau^m(P(n)) \ll x \log^{O(1)} x \ \ \ \ \ (7)

for any fixed {P} (not necessarily irreducible) and any fixed {m \geq 1}, due to van der Corput; this bound is in fact used to dispose of some error terms in the proof of (6). These should be compared with what one can obtain from the divisor bound {\tau(n) \ll n^{O(1/\log \log n)}} and the trivial bound {\tau(n) \geq 1}, giving the bounds

\displaystyle  x \ll \sum_{n \leq x} \tau^m(P(n)) \ll x^{1 + O(\frac{1}{\log \log x})}

for any fixed {m \geq 1}.

The lower bound in (6) is easy, since one can simply lower the level in (5) to obtain the lower bound

\displaystyle  \tau(n) \geq \sum_{d \leq n^\theta: d|n} 1

for any {\theta>0}, and the preceding methods then easily allow one to obtain the lower bound by taking {\theta} small enough (more precisely, if {P} has degree {d}, one should take {\theta} equal to {1/d} or less). The upper bounds in (6) and (7) are more difficult. Ideally, if we could obtain upper bounds of the form

\displaystyle  \tau(n) \ll \sum_{d \leq n^\theta: d|n} 1 \ \ \ \ \ (8)

for any fixed {\theta > 0}, then the preceding methods would easily establish both results. Unfortunately, this bound can fail, as illustrated by the following example. Suppose that {n} is the product of {k} distinct primes {p_1 \ldots p_k}, each of which is close to {n^{1/k}}. Then {n} has {2^k} divisors, with {\binom{k}{j}} of them close to {n^{j/k}} for each {0 \leq j \leq k}. One can think of (the logarithms of) these divisors as being distributed according to what is essentially a Bernoulli distribution, thus a randomly selected divisor of {n} has magnitude about {n^{j/k}}, where {j} is a random variable which has the same distribution as the number of heads in {k} independently tossed fair coins. By the law of large numbers, {j} should concentrate near {k/2} when {k} is large, which implies that the majority of the divisors of {n} will be close to {n^{1/2}}. Sending {k \rightarrow \infty}, one can show that the bound (8) fails whenever {\theta < 1/2}.
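
This concentration is easy to see numerically (a toy sketch; the primes below are arbitrary choices of comparable size):

```python
import math
from itertools import combinations

# Divisors of n = p_1 ... p_k with all p_i ~ n^(1/k) concentrate near
# sqrt(n): log d / log n is essentially a Binomial(k, 1/2) mean.
primes = [101, 103, 107, 109, 113, 127, 131, 137, 139, 149,
          151, 157, 163, 167, 173, 179]                     # k = 16
k = len(primes)
n = math.prod(primes)

divisors = [math.prod(c) for r in range(k + 1) for c in combinations(primes, r)]
cutoff = n ** 0.35                              # theta = 0.35 < 1/2
small = sum(d <= cutoff for d in divisors)
print(len(divisors), small, small / len(divisors))
# Only ~10% of the 2^16 divisors lie below n^0.35, and this fraction tends
# to zero as k grows, for any fixed theta < 1/2 -- so (8) fails for such n.
```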

This however can be fixed in a number of ways. First of all, even when {\theta<1/2}, one can show weaker substitutes for (8). For instance, for any fixed {\theta > 0} and {m \geq 1} one can show a bound of the form

\displaystyle  \tau(n)^m \ll \sum_{d \leq n^\theta: d|n} \tau(d)^C \ \ \ \ \ (9)

for some {C} depending only on {m,\theta}. This nice elementary inequality (first observed by Landreau) already gives a quite short proof of van der Corput’s bound (7).

For Erdös’s upper bound (6), though, one cannot afford to lose these additional factors of {\tau(d)}, and one must argue more carefully. Here, the key observation is that the counterexample discussed earlier – when the natural number {n} is the product of a large number of fairly small primes – is quite atypical; most numbers have at least one large prime factor. For instance, the number of natural numbers less than {x} that contain a prime factor between {x^{1/2}} and {x} is equal to

\displaystyle  \sum_{x^{1/2} \leq p \leq x} (\frac{x}{p} + O(1)),

which, thanks to Mertens’ theorem

\displaystyle  \sum_{p \leq x} \frac{1}{p} = \log\log x + M+o(1)

for some absolute constant {M}, is comparable to {x}. In a similar spirit, one can show by similarly elementary means that the number of natural numbers less than {x} that are {x^{1/m}}-smooth, in the sense that all their prime factors are at most {x^{1/m}}, is only about {m^{-cm} x} or so. Because of this, one can hope that the bound (8), while not true in full generality, will still be true for most natural numbers {n}, with some slightly weaker substitute available (such as (7)) for the exceptional numbers {n}. This turns out to be the case by an elementary but careful argument.

The Erdös argument is quite robust; for instance, the more general inequality

\displaystyle  x \log^{2^m-1} x \ll \sum_{n \leq x} \tau(P(n))^m \ll x \log^{2^m-1} x

for fixed irreducible {P} and {m \geq 1}, which improves van der Corput’s inequality (7), was shown by Delmer using the same methods. (A slight error in the original paper of Erdös was also corrected in this latter paper.) In a forthcoming revision to my paper on the Erdös-Straus conjecture, Christian Elsholtz and I have also applied this method to obtain bounds such as

\displaystyle  \sum_{a \leq A} \sum_{b \leq B} \tau(a^2 b + 1) \ll AB \log(A+B),

which turn out to be enough to obtain the right asymptotics for the number of solutions to the equation {\frac{4}{p}= \frac{1}{x}+\frac{1}{y}+\frac{1}{z}}.

Below the fold I will provide some more details of the arguments of Landreau and of Erdös.


I have blogged several times in the past about nonstandard analysis, which among other things is useful in allowing one to import tools from infinitary (or qualitative) mathematics in order to establish results in finitary (or quantitative) mathematics. One drawback, though, to using nonstandard analysis methods is that the bounds one obtains by such methods are usually ineffective: in particular, the conclusions of a nonstandard analysis argument may involve an unspecified constant {C} that is known to be finite but for which no explicit bound is obviously available. (In many cases, a bound can eventually be worked out by performing proof mining on the argument, and in particular by carefully unpacking the proofs of all the various results from infinitary mathematics that were used in the argument, as opposed to simply using them as “black boxes”, but this is a time-consuming task and the bounds that one eventually obtains tend to be quite poor (e.g. tower exponential or Ackermann type bounds are not uncommon).)

Because of this fact, it would seem that quantitative bounds, such as polynomial type bounds {X \leq C Y^C} that show that one quantity {X} is controlled in a polynomial fashion by another quantity {Y}, are not easily obtainable through the ineffective methods of nonstandard analysis. Actually, this is not the case; as I will demonstrate by an example below, nonstandard analysis can certainly yield polynomial type bounds. The catch is that the exponent {C} in such bounds will be ineffective; but nevertheless such bounds are still good enough for many applications.

Let us now illustrate this by reproving a lemma from this paper of Mei-Chu Chang (Lemma 2.14, to be precise), which was recently pointed out to me by Van Vu. Chang’s paper is focused primarily on the sum-product problem, but she uses a quantitative lemma from algebraic geometry which is of independent interest. To motivate the lemma, let us first establish a qualitative version:

Lemma 1 (Qualitative solvability) Let {P_1,\ldots,P_r: {\bf C}^d \rightarrow {\bf C}} be a finite number of polynomials in several variables with rational coefficients. If there is a complex solution {z = (z_1,\ldots,z_d) \in {\bf C}^d} to the simultaneous system of equations

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

then there also exists a solution {z \in \overline{{\bf Q}}^d} whose coefficients are algebraic numbers (i.e. they lie in the algebraic closure {\overline{{\bf Q}}} of the rationals).

Proof: Suppose there was no solution to {P_1(z)=\ldots=P_r(z)=0} over {\overline{{\bf Q}}}. Applying Hilbert’s nullstellensatz (which is available as {\overline{{\bf Q}}} is algebraically closed), we conclude the existence of some polynomials {Q_1,\ldots,Q_r} (with coefficients in {\overline{{\bf Q}}}) such that

\displaystyle  P_1 Q_1 + \ldots + P_r Q_r = 1

as polynomials. In particular, we have

\displaystyle  P_1(z) Q_1(z) + \ldots + P_r(z) Q_r(z) = 1

for all {z \in {\bf C}^d}. This shows that there is no solution to {P_1(z)=\ldots=P_r(z)=0} over {{\bf C}}, as required. \Box
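As an aside, the nullstellensatz certificate {Q_1,\ldots,Q_r} appearing in this proof can be searched for mechanically over {{\bf Q}}: the constant {1} lies in the ideal generated by {P_1,\ldots,P_r} if and only if the reduced Gröbner basis of that ideal collapses to {\{1\}}. Here is a minimal sketch using sympy’s groebner routine (purely illustrative; the choice of examples is mine):

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# A solvable system (the unit circle meets the line y = x): the reduced
# Groebner basis is nontrivial, so 1 is not in the ideal and solutions exist.
G1 = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(G1.exprs)   # e.g. [x - y, 2*y**2 - 1]

# An unsolvable system (x = 0 and x = 1 simultaneously): the basis collapses
# to [1], which is exactly a nullstellensatz certificate of unsolvability.
G2 = groebner([x, x - 1], x, y, order='lex')
print(G2.exprs)   # [1]
```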

Remark 1 Observe that in the above argument, one could replace {{\bf Q}} and {{\bf C}} by any other pair of fields, with the latter containing the algebraic closure of the former, and still obtain the same result.

The above lemma asserts that if a system of rational equations is solvable at all, then it is solvable with some algebraic solution. But it gives no bound on the complexity of that solution in terms of the complexity of the original equation. Chang’s lemma provides such a bound. If {H \geq 1} is an integer, let us say that an algebraic number has height at most {H} if its minimal polynomial (after clearing denominators) has integer coefficients of magnitude at most {H}.
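To make this notion of height concrete, here is a short sketch (using sympy; the helper height is a hypothetical name of mine) that computes the height of a few algebraic numbers directly from the above definition:

```python
from sympy import Rational, Symbol, minimal_polynomial, sqrt

t = Symbol('t')

def height(alpha):
    """Largest coefficient magnitude of the minimal polynomial of alpha
    (sympy already clears denominators, giving integer coefficients)."""
    p = minimal_polynomial(alpha, t)
    return max(abs(c) for c in p.as_poly(t).all_coeffs())

print(height(sqrt(2)))              # t**2 - 2: height 2
print(height((1 + sqrt(5)) / 2))    # t**2 - t - 1: height 1
print(height(Rational(3, 7)))       # 7*t - 3: height 7
```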

Lemma 2 (Quantitative solvability) Let {P_1,\ldots,P_r: {\bf C}^d \rightarrow {\bf C}} be a finite number of polynomials of degree at most {D} with rational coefficients, each of height at most {H}. If there is a complex solution {z = (z_1,\ldots,z_d) \in {\bf C}^d} to the simultaneous system of equations

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

then there also exists a solution {z \in \overline{{\bf Q}}^d} whose coefficients are algebraic numbers of degree at most {C} and height at most {CH^C}, where {C = C_{D, d,r}} depends only on {D}, {d} and {r}.

Chang proves this lemma by essentially establishing a quantitative version of the nullstellensatz, via elementary elimination theory (somewhat similar, actually, to the approach I took to the nullstellensatz in my own blog post). She also notes that one could also establish the result through the machinery of Gröbner bases. In each of these arguments, it was not possible to use Lemma 1 (or the closely related nullstellensatz) as a black box; one actually had to unpack one of the proofs of that lemma or nullstellensatz to get the polynomial bound. However, using nonstandard analysis, it is possible to get such polynomial bounds (albeit with an ineffective value of the constant {C}) directly from Lemma 1 (or more precisely, the generalisation in Remark 1) without having to inspect the proof, and instead simply using it as a black box, thus providing a “soft” proof of Lemma 2 that is an alternative to the “hard” proofs mentioned above.

Here’s how the proof works. Informally, the idea is that Lemma 2 should follow from Lemma 1 after replacing the field of rationals {{\bf Q}} with “the field of rationals of polynomially bounded height”. Unfortunately, the latter object does not really make sense as a field in standard analysis; nevertheless, it is a perfectly sensible object in nonstandard analysis, and this allows the above informal argument to be made rigorous.

We turn to the details. As is common whenever one uses nonstandard analysis to prove finitary results, we use a “compactness and contradiction” argument (or more precisely, an “ultralimit and contradiction” argument). Suppose for contradiction that Lemma 2 failed. Carefully negating the quantifiers (and using the axiom of choice), we conclude that there exists {D, d, r} such that for each natural number {n}, there is a positive integer {H^{(n)}} and a family {P_1^{(n)}, \ldots, P_r^{(n)}: {\bf C}^d \rightarrow {\bf C}} of polynomials of degree at most {D} and rational coefficients of height at most {H^{(n)}}, such that there exists at least one complex solution {z^{(n)} \in {\bf C}^d} to

\displaystyle  P_1^{(n)}(z^{(n)}) = \ldots = P_r^{(n)}(z^{(n)}) = 0, \ \ \ \ \ (1)

but such that there does not exist any such solution whose coefficients are algebraic numbers of degree at most {n} and height at most {n (H^{(n)})^n}.

Now we take ultralimits (see e.g. this previous blog post for a quick review of ultralimit analysis, which we will assume familiarity with in the argument that follows). Let {p \in \beta {\bf N} \backslash {\bf N}} be a non-principal ultrafilter. For each {i=1,\ldots,r}, the ultralimit

\displaystyle  P_i := \lim_{n \rightarrow p} P_i^{(n)}

of the (standard) polynomials {P_i^{(n)}} is a nonstandard polynomial {P_i: {}^* {\bf C}^d \rightarrow {}^* {\bf C}} of degree at most {D}, whose coefficients now lie in the nonstandard rationals {{}^* {\bf Q}}. Actually, due to the height restriction, we can say more. Let {H := \lim_{n \rightarrow p} H^{(n)} \in {}^* {\bf N}} be the ultralimit of the {H^{(n)}}; this is a nonstandard natural number (which will almost certainly be unbounded, but we will not need to use this). Let us say that a nonstandard integer {a} is of polynomial size if we have {|a| \leq C H^C} for some standard natural number {C}, and say that a nonstandard rational number {a/b} is of polynomial height if {a}, {b} are of polynomial size. Let {{\bf Q}_{poly(H)}} be the collection of all nonstandard rationals of polynomial height. (In the language of nonstandard analysis, {{\bf Q}_{poly(H)}} is an external set rather than an internal one, because it is not itself an ultraproduct of standard sets; but this will not be relevant for the argument that follows.) It is easy to see that {{\bf Q}_{poly(H)}} is a field, basically because the sum or product of two integers of polynomial size remains of polynomial size. By construction, it is clear that the coefficients of {P_i} are nonstandard rationals of polynomial height, and thus {P_1,\ldots,P_r} are defined over {{\bf Q}_{poly(H)}}.

Meanwhile, if we let {z := \lim_{n \rightarrow p} z^{(n)} \in {}^* {\bf C}^d} be the ultralimit of the solutions {z^{(n)}} in (1), we have

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

thus {P_1,\ldots,P_r} are solvable in {{}^* {\bf C}}. Applying Lemma 1 (or more precisely, the generalisation in Remark 1), we see that {P_1,\ldots,P_r} are also solvable in {\overline{{\bf Q}_{poly(H)}}}. (Note that as {{\bf C}} is algebraically closed, {{}^*{\bf C}} is also (by Los’s theorem), and so {{}^* {\bf C}} contains {\overline{{\bf Q}_{poly(H)}}}.) Thus, there exists {w \in \overline{{\bf Q}_{poly(H)}}^d} with

\displaystyle  P_1(w) = \ldots = P_r(w) = 0.

As {\overline{{\bf Q}_{poly(H)}}^d} lies in {{}^* {\bf C}^d}, we can write {w} as an ultralimit {w = \lim_{n \rightarrow p} w^{(n)}} of standard complex vectors {w^{(n)} \in {\bf C}^d}. By construction, the coefficients of {w} each obey a non-trivial polynomial equation of degree at most {C} whose coefficients are nonstandard integers of magnitude at most {C H^C}, for some standard natural number {C}. Undoing the ultralimit, we conclude that for {n} sufficiently close to {p}, the coefficients of {w^{(n)}} obey a non-trivial polynomial equation of degree at most {C} whose coefficients are standard integers of magnitude at most {C (H^{(n)})^C}. In particular, these coefficients have height at most {C (H^{(n)})^C}. Also, we have

\displaystyle  P_1^{(n)}(w^{(n)}) = \ldots = P_r^{(n)}(w^{(n)}) = 0.

But for {n} larger than {C}, this contradicts the construction of the {P_i^{(n)}}, and the claim follows. (Note that as {p} is non-principal, any neighbourhood of {p} in {\beta {\bf N}} will contain arbitrarily large natural numbers.)

Remark 2 The same argument actually gives a slightly stronger version of Lemma 2, namely that the integer coefficients used to define the algebraic solution {z} can be taken to be polynomials in the coefficients of {P_1,\ldots,P_r}, with degree and coefficients bounded by {C_{D,d,r}}.

Tamar Ziegler and I have just uploaded to the arXiv our paper “The inverse conjecture for the Gowers norm over finite fields in low characteristic“, submitted to Annals of Combinatorics. This paper completes another case of the inverse conjecture for the Gowers norm, this time for vector spaces {{\bf F}^n} over a fixed finite field {{\bf F} = {\bf F}_p} of prime order; with Vitaly Bergelson, we had previously established this claim when the characteristic of the field was large, so the main new result here is the extension to the low characteristic case. (The case of a cyclic group {{\bf Z}/N{\bf Z}} or interval {[N]} was established by Ben Green and ourselves in another recent paper. For an arbitrary abelian (or nilpotent) group, a general but less explicit description of the obstructions to Gowers uniformity was recently obtained by Szegedy; the latter result recovers the high-characteristic case of our result (as was done in a subsequent paper of Szegedy), as well as our results with Green, but it is not immediately evident whether Szegedy’s description of the obstructions matches up with the one predicted by the inverse conjecture in low characteristic.)

The statement of the main theorem is as follows. Given a finite-dimensional vector space {V = {\bf F}^n} and a function {f: V \rightarrow {\bf C}}, and an integer {s \geq 0}, one can define the Gowers uniformity norm {\|f\|_{U^{s+1}(V)}} by the formula

\displaystyle  \|f\|_{U^{s+1}(V)} := \left( \mathop{\bf E}_{x,h_1,\ldots,h_{s+1} \in V} \Delta_{h_1} \ldots \Delta_{h_{s+1}} f(x) \right)^{1/2^{s+1}}

where {\Delta_h f(x) := f(x+h) \overline{f(x)}}. If {f} is bounded in magnitude by {1}, it is easy to see that {\|f\|_{U^{s+1}(V)}} is bounded by {1} also, with equality if and only if {f(x) = e(P)} for some non-classical polynomial {P: V \rightarrow {\bf R}/{\bf Z}} of degree at most {s}, where {e(x) := e^{2\pi ix}}, and a non-classical polynomial of degree at most {s} is a function whose {(s+1)^{th}} “derivatives” vanish in the sense that

\displaystyle  \partial_{h_1} \ldots \partial_{h_{s+1}} P(x) = 0

for all {x,h_1,\ldots,h_{s+1} \in V}, where {\partial_h P(x) := P(x+h) - P(x)}. Our result generalises this to the case when the uniformity norm is not equal to {1}, but is still bounded away from zero:

Theorem 1 (Inverse conjecture) Let {f: V \rightarrow {\bf C}} be bounded by {1} with {\|f\|_{U^{s+1}(V)} \geq \delta > 0} for some {s \geq 0}. Then there exists a non-classical polynomial {P: V \rightarrow {\bf R}/{\bf Z}} of degree at most {s} such that {|\langle f, e(P) \rangle_{L^2(V)}| := |{\bf E}_{x \in V} f(x) e(-P(x))| \geq c(s,p, \delta) > 0}, where {c(s,p, \delta)} is a positive quantity depending only on the indicated parameters.
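As a sanity check on the definitions (and on the equality case {\|f\|_{U^{s+1}(V)} = 1} discussed above), one can evaluate Gowers norms by brute force on very small spaces. The following naive Python sketch (the helper gowers_norm is mine, and simply follows the averaging in the definition literally) computes the {U^2} and {U^3} norms of the quadratic phase {e(x_1^2/3)} on {V = {\bf F}_3^2}:

```python
import cmath
import itertools

p, n = 3, 2
V = list(itertools.product(range(p), repeat=n))

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def add(x, h):
    return tuple((a + b) % p for a, b in zip(x, h))

def gowers_norm(f, s):
    """||f||_{U^{s+1}(V)} by direct averaging of the multiplicative
    derivatives Delta_{h_1} ... Delta_{h_{s+1}} f(x)."""
    total = 0.0
    for x in V:
        for hs in itertools.product(V, repeat=s + 1):
            term = 1
            for eps in itertools.product((0, 1), repeat=s + 1):
                y = x
                for e_i, h in zip(eps, hs):
                    if e_i:
                        y = add(y, h)
                v = f(y)
                # conjugate when an odd number of the eps_i vanish
                if (len(eps) - sum(eps)) % 2 == 1:
                    v = v.conjugate()
                term *= v
            total += term
    avg = total / len(V) ** (s + 2)
    return abs(avg) ** (1.0 / 2 ** (s + 1))

f = lambda x: e((x[0] * x[0] % p) / p)   # classical quadratic phase
print(gowers_norm(f, 1))   # U^2 norm: (1/3)^(1/4), about 0.76 < 1
print(gowers_norm(f, 2))   # U^3 norm: 1.0, since deg(x_1^2) = 2 <= s
```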

This theorem is trivial for {s=0}, and follows easily from Fourier analysis for {s=1}. The case {s=2} was done in odd characteristic by Ben Green and myself, and in even characteristic by Samorodnitsky. In two papers, one with Vitaly Bergelson, we established this theorem in the “high characteristic” case when the characteristic {p} of {{\bf F}} was greater than {s} (in which case there is essentially no distinction between non-classical polynomials and their classical counterparts, as discussed previously on this blog). The need to deal with genuinely non-classical polynomials is the main new difficulty in this paper that was not dealt with in previous literature.

In our previous paper with Bergelson, a “weak” version of the above theorem was proven, in which the polynomial {P} in the conclusion had bounded degree {O_{s,p}(1)}, rather than being of degree at most {s}. In the current paper, we use this weak inverse theorem to reduce the inverse conjecture to a statement purely about polynomials:

Theorem 2 (Inverse conjecture for polynomials) Let {s \geq 0}, and let {P: V \rightarrow {\bf R}/{\bf Z}} be a non-classical polynomial of degree at most {s+1} such that {\|e(P)\|_{U^{s+1}(V)} \geq \delta > 0}. Then {P} has bounded rank in the sense that {P} is a function of {O_{s,p,\delta}(1)} polynomials of degree at most {s}.

This type of inverse theorem was first introduced by Bogdanov and Viola. The deduction of Theorem 1 from Theorem 2 and the weak inverse Gowers conjecture is fairly standard, so the main difficulty is to show Theorem 2.

The quantity {-\log_{|{\bf F}|} \|e(P)\|_{U^{s+1}(V)}^{2^{s+1}}} of a polynomial {P} of degree at most {s+1} was denoted the analytic rank of {P} by Gowers and Wolf. They observed that the analytic rank of {P} was closely related to the rank of {P}, defined as the least number of degree {s} polynomials needed to express {P}. For instance, in the quadratic case {s=1} the two ranks are identical (in odd characteristic, at least). For general {s}, it was easy to see that bounded rank implied bounded analytic rank; Theorem 2 is the converse statement.

We tried a number of ways to show that bounded analytic rank implied bounded rank, in particular spending a lot of time on ergodic-theoretic approaches, but eventually we settled on a “brute force” approach that relies on classifying those polynomials of bounded analytic rank as precisely as possible. The argument splits up into establishing three separate facts:

  1. (Classical case) If a classical polynomial has bounded analytic rank, then it has bounded rank.
  2. (Multiplication by {p}) If a non-classical polynomial {P} (of degree at most {s+1}) has bounded analytic rank, then {pP} (which can be shown to have degree at most {\max(s-p,0)}) also has bounded analytic rank.
  3. (Division by {p}) If {Q} is a non-classical polynomial of degree at most {\max(s-p,0)} of bounded rank, then there is a non-classical polynomial {P} of degree at most {s+1} of bounded rank such that {pP = Q}.

The multiplication by {p} and division by {p} facts allow one to easily extend the classical case of the theorem to the non-classical case of the theorem, basically because classical polynomials are the kernel of the multiplication-by-{p} homomorphism. Indeed, if {P} is a non-classical polynomial of bounded analytic rank of the right degree, then the multiplication by {p} claim tells us that {pP} also has bounded analytic rank, which by an induction hypothesis implies that {pP} has bounded rank. Applying the division by {p} claim, we find a bounded rank polynomial {P'} such that {pP = pP'}, thus {P} differs from {P'} by a classical polynomial, which necessarily has bounded analytic rank, hence bounded rank by the classical claim, and the claim follows.

Of the three claims, the multiplication-by-{p} claim is the easiest to prove using known results; after a bit of Fourier analysis, it turns out to follow more or less immediately from the multidimensional Szemerédi theorem over finite fields of Bergelson, Leibman, and McCutcheon (one can also use the density Hales-Jewett theorem here if one desires).

The next easiest claim is the classical case. Here, the idea is to analyse a degree {s+1} classical polynomial {P: V \rightarrow {\bf F}} via its derivative {d^{s+1} P: V^{s+1} \rightarrow {\bf F}}, defined by the formula

\displaystyle  d^{s+1} P( h_1,\ldots,h_{s+1}) := \partial_{h_1} \ldots \partial_{h_{s+1}} P(x)

for any {x,h_1,\ldots,h_{s+1} \in V} (the RHS is independent of {x} as {P} has degree {s+1}). This is a multilinear form, and if {P} has bounded analytic rank, this form is biased (in the sense that the mean of {e(d^{s+1} P)} is large). Applying a general equidistribution theorem of Kaufman and Lovett (based on this earlier paper of Green and myself) this implies that {d^{s+1} P} is a function of a bounded number of multilinear forms of lower degree. Using some “regularity lemma” theory to clean up these forms so that they have good equidistribution properties, it is possible to understand exactly how the original multilinear form {d^{s+1} P} depends on these lower degree forms; indeed, the description one eventually obtains is so explicit that one can write down by inspection another bounded rank polynomial {Q} such that {d^{s+1} P} is equal to {d^{s+1} Q}. Thus {P} differs from the bounded rank polynomial {Q} by a lower degree error, which is automatically of bounded rank also, and the claim follows.
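To make the object {d^{s+1} P} concrete, here is a tiny brute-force sketch (in Python; the example is mine) computing the third derivative of the classical cubic {P(x) = x_1 x_2 x_3} on {{\bf F}_2^3}, and confirming that it is indeed independent of the basepoint {x}:

```python
import itertools

p, n = 2, 3
V = list(itertools.product(range(p), repeat=n))

def P(x):
    return (x[0] * x[1] * x[2]) % p

def add(x, h):
    return tuple((a + b) % p for a, b in zip(x, h))

def derivative(P, x, hs):
    """Additive derivative partial_{h_1}...partial_{h_k} P(x) over F_p."""
    total = 0
    for eps in itertools.product((0, 1), repeat=len(hs)):
        y = x
        for e_i, h in zip(eps, hs):
            if e_i:
                y = add(y, h)
        total += (-1) ** (len(hs) - sum(eps)) * P(y)
    return total % p

h1, h2, h3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
values = {derivative(P, x, (h1, h2, h3)) for x in V}
print(values)   # {1}: the same value for every basepoint x, as claimed
```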

The trickiest thing to establish is the division by {p} claim. The polynomial {Q} is some function {F(R_1,\ldots,R_m)} of lower degree polynomials {R_1,\ldots,R_m}. Ideally, one would like to find a function {F'(R_1,\ldots,R_m)} of the same polynomials with {pF' = F}, such that {F'(R_1,\ldots,R_m)} has the correct degree; however, we have counterexamples that show that this is not always possible. (These counterexamples are the main obstruction to making the ergodic theory approach work: in ergodic theory, one is only allowed to work with “measurable” functions, which are roughly analogous in this context to functions of the indicated polynomials {Q, R_1,\ldots,R_m} and their shifts.) To get around this we have to first apply a regularity lemma to place {R_1,\ldots,R_m} in a suitably equidistributed form (although the fact that {R_1,\ldots,R_m} may be non-classical leads to a rather messy and technical description of this equidistribution), and then we have to extend each {R_j} to a higher degree polynomial {R'_j} with {pR'_j = R_j}. There is a crucial “exact roots” property of polynomials that allows one to do this, with {R'_j} having degree exactly {p-1} higher than {R_j}. It turns out that it is possible to find a function {P = F'(R'_1,\ldots,R'_m)} of these extended polynomials that has the right degree and solves the required equation {pP=Q}; this is established by classifying completely all functions of the equidistributed polynomials {R_1,\ldots,R_m} or {R'_1,\ldots,R'_m} that are of a given degree.

In Notes 3, we saw that the number of additive patterns in a given set was (in principle, at least) controlled by the Gowers uniformity norms of functions associated to that set.

Such norms can be defined on any finite additive group (and also on some other types of domains, though we will not discuss this point here). In particular, they can be defined on the finite-dimensional vector spaces {V} over a finite field {{\bf F}}.

In this case, the Gowers norms {U^{d+1}(V)} are closely tied to the space {\hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})} of polynomials of degree at most {d}. Indeed, as noted in Exercise 20 of Notes 4, a function {f: V \rightarrow {\bf C}} of {L^\infty(V)} norm {1} has {U^{d+1}(V)} norm equal to {1} if and only if {f = e(\phi)} for some {\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}; thus polynomials solve the “{100\%} inverse problem” for the trivial inequality {\|f\|_{U^{d+1}(V)} \leq \|f\|_{L^\infty(V)}}. They are also a crucial component of the solution to the “{99\%} inverse problem” and “{1\%} inverse problem”. For the former, we will soon show:

Proposition 1 ({99\%} inverse theorem for {U^{d+1}(V)}) Let {f: V \rightarrow {\bf C}} be such that {\|f\|_{L^\infty(V)} \leq 1} and {\|f\|_{U^{d+1}(V)} \geq 1-\epsilon} for some {\epsilon > 0}. Then there exists {\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})} such that {\| f - e(\phi)\|_{L^1(V)} = O_{d, {\bf F}}( \epsilon^c )}, where {c = c_d > 0} is a constant depending only on {d}.

Thus, for the Gowers norm to be almost completely saturated, one must be very close to a polynomial. The converse assertion is easily established:

Exercise 1 (Converse to {99\%} inverse theorem for {U^{d+1}(V)}) If {\|f\|_{L^\infty(V)} \leq 1} and {\|f-e(\phi)\|_{L^1(V)} \leq \epsilon} for some {\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}, then {\|f\|_{U^{d+1}(V)} \geq 1 - O_{d,{\bf F}}( \epsilon^c )}, where {c = c_d > 0} is a constant depending only on {d}.

In the {1\%} world, one no longer expects to be close to a polynomial. Instead, one expects to correlate with a polynomial. Indeed, one has

Lemma 2 (Converse to the {1\%} inverse theorem for {U^{d+1}(V)}) If {f: V \rightarrow {\bf C}} and {\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})} are such that {|\langle f, e(\phi) \rangle_{L^2(V)}| \geq \epsilon}, where {\langle f, g \rangle_{L^2(V)} := {\bf E}_{x \in V} f(x) \overline{g(x)}}, then {\|f\|_{U^{d+1}(V)} \geq \epsilon}.

Proof: From the definition of the {U^1} norm (equation (18) from Notes 3), the monotonicity of the Gowers norms (Exercise 19 of Notes 3), and the polynomial phase modulation invariance of the Gowers norms (Exercise 21 of Notes 3), one has

\displaystyle  |\langle f, e(\phi) \rangle| = \| f e(-\phi) \|_{U^1(V)}

\displaystyle  \leq \|f e(-\phi) \|_{U^{d+1}(V)}

\displaystyle  = \|f\|_{U^{d+1}(V)}

and the claim follows. \Box

In the high characteristic case {\hbox{char}({\bf F}) > d} at least, this can be reversed:

Theorem 3 ({1\%} inverse theorem for {U^{d+1}(V)}) Suppose that {\hbox{char}({\bf F}) > d \geq 0}. If {f: V \rightarrow {\bf C}} is such that {\|f\|_{L^\infty(V)} \leq 1} and {\|f\|_{U^{d+1}(V)} \geq \epsilon}, then there exists {\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})} such that {|\langle f, e(\phi) \rangle_{L^2(V)}| \gg_{\epsilon,d,{\bf F}} 1}.

This result is sometimes referred to as the inverse conjecture for the Gowers norm (in high, but bounded, characteristic). For small {d}, the claim is easy:

Exercise 2 Verify the cases {d=0,1} of this theorem. (Hint: to verify the {d=1} case, use the Fourier-analytic identities {\|f\|_{U^2(V)} = (\sum_{\xi \in \hat V} |\hat f(\xi)|^4)^{1/4}} and {\|f\|_{L^2(V)} = (\sum_{\xi \in \hat V} |\hat f(\xi)|^2)^{1/2}}, where {\hat V} is the space of all homomorphisms {\xi: x \mapsto \xi \cdot x} from {V} to {{\bf R}/{\bf Z}}, and {\hat f(\xi) := \mathop{\bf E}_{x \in V} f(x) e(-\xi \cdot x)} are the Fourier coefficients of {f}.)
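For readers who wish to see the {d=1} hint in action numerically, here is a self-contained sketch verifying {\|f\|_{U^2(V)}^4 = \sum_{\xi \in \hat V} |\hat f(\xi)|^4} for a randomly chosen {f} on {{\bf F}_3^2} (conventions as in the hint; the test function is arbitrary):

```python
import cmath
import itertools
import random

p, n = 3, 2
V = list(itertools.product(range(p), repeat=n))
e = lambda t: cmath.exp(2j * cmath.pi * t)
add = lambda x, h: tuple((a + b) % p for a, b in zip(x, h))

random.seed(0)
fv = {x: complex(random.gauss(0, 1), random.gauss(0, 1)) for x in V}
f = lambda x: fv[x]

# ||f||_{U^2}^4 = E_{x,h1,h2} f(x+h1+h2) conj(f(x+h1)) conj(f(x+h2)) f(x)
u2_fourth = sum(f(add(add(x, h1), h2)) * f(add(x, h1)).conjugate()
                * f(add(x, h2)).conjugate() * f(x)
                for x in V for h1 in V for h2 in V) / len(V) ** 3

# sum_xi |fhat(xi)|^4 with fhat(xi) = E_x f(x) e(-xi.x)
fourier_fourth = sum(
    abs(sum(f(x) * e(-sum(a * b for a, b in zip(xi, x)) / p) for x in V)
        / len(V)) ** 4
    for xi in V)

print(abs(u2_fourth.real - fourier_fourth) < 1e-9)   # True
```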

This conjecture is more difficult to establish for larger values of {d}. The {d=2} case of the theorem was established by Ben Green and myself in the high characteristic case {\hbox{char}({\bf F}) > 2}; the low characteristic case {\hbox{char}({\bf F}) = d = 2} was independently and simultaneously established by Samorodnitsky. The cases {d>2} in the high characteristic case were established in two stages, firstly using a modification of the Furstenberg correspondence principle, due to Ziegler and myself, to convert the problem to an ergodic theory counterpart, and then using a modification of the methods of Host-Kra and Ziegler to solve that counterpart, as done in this paper of Bergelson, Ziegler, and myself.

The situation with the low characteristic case in general is still unclear. In the high characteristic case, we saw from Notes 4 that one could replace the space of non-classical polynomials {\hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})} in the above conjecture with the essentially equivalent space of classical polynomials {\hbox{Poly}_{\leq d}(V \rightarrow {\bf F})}. However, as we shall see below, this turns out not to be the case in certain low characteristic cases (a fact first observed by Lovett, Meshulam, and Samorodnitsky, and independently by Ben Green and myself), for instance if {\hbox{char}({\bf F}) = 2} and {d \geq 3}; this is ultimately due to the existence in those cases of non-classical polynomials which exhibit no significant correlation with classical polynomials of equal or lesser degree. This distinction between classical and non-classical polynomials appears to be a rather non-trivial obstruction to understanding the low characteristic setting; it may be necessary to obtain a more complete theory of non-classical polynomials in order to fully settle this issue.

The inverse conjecture has a number of consequences. For instance, it can be used to establish the analogue of Szemerédi’s theorem in this setting:

Theorem 4 (Szemerédi’s theorem for finite fields) Let {{\bf F} = {\bf F}_p} be a finite field, let {\delta > 0}, and let {A \subset {\bf F}^n} be such that {|A| \geq \delta |{\bf F}^n|}. If {n} is sufficiently large depending on {p,\delta}, then {A} contains an (affine) line {\{ x, x+r, \ldots, x+(p-1)r\}} for some {x,r \in {\bf F}^n} with { r\neq 0}.

Exercise 3 Use Theorem 4 to establish the following generalisation: with the notation as above, if {k \geq 1} and {n} is sufficiently large depending on {p,\delta}, then {A} contains an affine {k}-dimensional subspace.
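As a toy illustration of Theorem 4 in the model case {p=3} (where line-free sets are the well-known “cap sets”), one can hunt for affine lines by brute force. The sketch below (example mine) exhibits a line-free four-point subset of {{\bf F}_3^2}, which is of maximal size for {n=2}, and shows that adjoining the point {(2,2)} creates a line:

```python
import itertools

p, n = 3, 2
V = list(itertools.product(range(p), repeat=n))

def has_line(A):
    """Does A contain an affine line {x, x+r, x+2r} with r != 0?"""
    S = set(A)
    for x in S:
        for r in V:
            if any(c != 0 for c in r) and all(
                    tuple((a + k * b) % p for a, b in zip(x, r)) in S
                    for k in range(p)):
                return True
    return False

cap = [(0, 0), (0, 1), (1, 0), (1, 1)]   # line-free, of maximal size in F_3^2
print(has_line(cap))              # False
print(has_line(cap + [(2, 2)]))   # True: (0,0), (1,1), (2,2) is a line
```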

We will prove this theorem in two different ways, one using a density increment method, and the other using an energy increment method. We discuss some other applications below the fold.


In the previous lectures, we have focused mostly on equidistribution and linear patterns in subsets of the integers {{\bf Z}}, and in particular in intervals {[N]}. The integers are of course a very important domain to study in additive combinatorics; but there are also other fundamental model examples of domains to study. One of these is that of a vector space {V} over a finite field {{\bf F} = {\bf F}_p} of prime order. Such domains are of interest in computer science (particularly when {p=2}) and also in number theory; but they also serve as an important simplified “dyadic model” for the integers. See this survey article of Green for further discussion of this point.

The additive combinatorics of the integers {{\bf Z}}, and of vector spaces {V} over finite fields, are analogous, but not quite identical. For instance, the analogue of an arithmetic progression in {{\bf Z}} is a subspace of {V}. In many cases, the finite field theory is a little bit simpler than the integer theory; for instance, subspaces are closed under addition, whereas arithmetic progressions are only “almost” closed under addition in various senses. (For instance, {[N]} is closed under addition approximately half of the time.) However, there are some ways in which the integers are better behaved. For instance, because the integers can be generated by a single generator, a homomorphism from {{\bf Z}} to some other group {G} can be described by a single group element {g}: {n \mapsto g^n}. However, to specify a homomorphism from a vector space {V} to {G} one would need to specify one group element for each dimension of {V}. Thus we see that there is a tradeoff when passing from {{\bf Z}} (or {[N]}) to a vector space model; one gains a bounded torsion property, at the expense of conceding the bounded generation property. (Of course, if one wants to deal with arbitrarily large domains, one has to concede one or the other; the only additive groups that have both bounded torsion and boundedly many generators, are bounded.)

The starting point for this course (Notes 1) was the study of equidistribution of polynomials {P: {\bf Z} \rightarrow {\bf R}/{\bf Z}} from the integers to the unit circle. We now turn to the parallel theory of equidistribution of polynomials {P: V \rightarrow {\bf R}/{\bf Z}} from vector spaces over finite fields to the unit circle. Actually, for simplicity we will mostly focus on the classical case, when the polynomials in fact take values in the {p^{th}} roots of unity (where {p} is the characteristic of the field {{\bf F} = {\bf F}_p}). As it turns out, the non-classical case is also of importance (particularly in low characteristic), but the theory is more difficult; see these notes for some further discussion.


Jean-Pierre Serre (whose papers are, of course, always worth reading) recently posted a lovely lecture on the arXiv entitled “How to use finite fields for problems concerning infinite fields”. In it, he describes several ways in which algebraic statements over fields of zero characteristic, such as {{\mathbb C}}, can be deduced from their positive characteristic counterparts such as {F_{p^m}}, despite the fact that there is no non-trivial field homomorphism between the two types of fields. In particular finitary tools, including such basic concepts as cardinality, can now be deployed to establish infinitary results. This leads to some simple and elegant proofs of non-trivial algebraic results which are not easy to establish by other means.

One deduction of this type is based on the idea that positive characteristic fields can partially model zero characteristic fields, and proceeds like this: if a certain algebraic statement failed over (say) {{\mathbb C}}, then there should be a “finitary algebraic” obstruction that “witnesses” this failure over {{\mathbb C}}. Because this obstruction is both finitary and algebraic, it must also be definable in some (large) finite characteristic, thus leading to a comparable failure over a finite characteristic field. Taking contrapositives, one obtains the claim.

Algebra is definitely not my own field of expertise, but it is interesting to note that similar themes have also come up in my own area of additive combinatorics (and more generally arithmetic combinatorics), because the combinatorics of addition and multiplication on finite sets is definitely of a “finitary algebraic” nature. For instance, a recent paper of Vu, Wood, and Wood establishes a finitary “Freiman-type” homomorphism from (finite subsets of) the complex numbers to large finite fields that allows them to pull back many results in arithmetic combinatorics in finite fields (e.g. the sum-product theorem) to the complex plane. (Van Vu and I also used a similar trick to control the singularity property of random sign matrices by first mapping them into finite fields in which cardinality arguments became available.) And I have a particular fondness for correspondences between finitary and infinitary mathematics; the correspondence Serre discusses is slightly different from the one I discuss for instance in here or here, although there seems to be a common theme of “compactness” (or of model theory) tying these correspondences together.

As one of his examples, Serre cites one of my own favourite results in algebra, discovered independently by Ax and by Grothendieck (and then rediscovered many times since). Here is a special case of that theorem:

Theorem 1 (Ax-Grothendieck theorem, special case) Let {P: {\mathbb C}^n \rightarrow {\mathbb C}^n} be a polynomial map from a complex vector space to itself. If {P} is injective, then {P} is bijective.

The full version of the theorem allows one to replace {{\mathbb C}^n} by an algebraic variety {X} over any algebraically closed field, and for {P} to be a morphism from the algebraic variety {X} to itself, but for simplicity I will just discuss the above special case. This theorem is not at all obvious; it is not too difficult (see Lemma 4 below) to show that the Jacobian of {P} is non-degenerate, but this does not come close to solving the problem since one would then be faced with the notorious Jacobian conjecture. Also, the claim fails if “polynomial” is replaced by “holomorphic”, due to the existence of Fatou-Bieberbach domains.
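It is perhaps worth noting that the finite-field proof sketched by Serre ultimately reduces to the trivial observation that an injective self-map of a finite set is surjective. One can at least watch that mechanism operate on a concrete example; the polynomial map below (a hypothetical choice of mine) is injective on {{\bf F}_5^2}, and the sketch confirms it is therefore also surjective:

```python
import itertools

q = 5
F2 = list(itertools.product(range(q), repeat=2))

def P(v):
    """An invertible "triangular" polynomial map: (x, y) -> (x, y + x^2)."""
    x, y = v
    return (x % q, (y + x * x) % q)

image = {P(v) for v in F2}
injective = len(image) == len(F2)
print(injective, image == set(F2))   # True True: injective iff surjective
```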

In this post I would like to give the proof of Theorem 1 based on finite fields as mentioned by Serre, as well as another elegant proof of Rudin that combines algebra with some elementary complex variable methods. (There are several other proofs of this theorem and its generalisations, for instance a topological proof by Borel, which I will not discuss here.)

Update, March 8: Some corrections to the finite field proof. Thanks to Matthias Aschenbrenner also for clarifying the relationship with Tarski’s theorem and some further references.


Vitaly Bergelson, Tamar Ziegler, and I have just uploaded to the arXiv our paper “An inverse theorem for the uniformity seminorms associated with the action of F^\infty_p“. This paper establishes the ergodic inverse theorems that are needed in our other recent paper to establish the inverse conjecture for the Gowers norms over finite fields in high characteristic (and to establish a partial result in low characteristic), as follows:

Theorem. Let {\Bbb F} be a finite field of characteristic p.  Suppose that X = (X,{\mathcal B},\mu) is a probability space with an ergodic measure-preserving action (T_g)_{g \in {\Bbb F}^\omega} of {\Bbb F}^\omega.  Let f \in L^\infty(X) be such that the Gowers-Host-Kra seminorm \|f\|_{U^k(X)} (defined in a previous post) is non-zero.

  1. In the high-characteristic case p \geq k, there exists a phase polynomial g of degree <k (as defined in the previous post) such that |\int_X f \overline{g}\ d\mu| > 0.
  2. In general characteristic, there exists a phase polynomial g of degree <C(k) for some C(k) depending only on k such that |\int_X f \overline{g}\ d\mu| > 0.

This theorem is closely analogous to a similar theorem of Host and Kra on ergodic actions of {\Bbb Z}, in which the role of phase polynomials is played by functions that arise from nilsystem factors of X.  Indeed, our arguments rely heavily on the machinery of Host and Kra.

The paper is rather technical (60+ pages!) and difficult to describe in detail here, but I will try to sketch out (in very broad brush strokes) what the key steps in the proof of part 2 of the theorem are.  (Part 1 is similar but requires a more delicate analysis at various stages, keeping more careful track of the degrees of various polynomials.)

