Emmanuel Breuillard, Ben Green, and I have just uploaded to the arXiv our survey “Small doubling in groups“, for the proceedings of the upcoming Erdos Centennial.  This is a short survey of the known results on classifying finite subsets A of an (abelian) additive group G = (G,+) or a (not necessarily abelian) multiplicative group G = (G,\cdot) that have small doubling in the sense that the sum set A+A or product set A \cdot A is small.  Such sets behave approximately like finite subgroups of G (and there is a closely related notion of an approximate group in which the analogy is even tighter), and so this subject can be viewed as a sort of approximate version of finite group theory.  (Unfortunately, thus far the theory does not have much new to say about the classification of actual finite groups; progress has instead largely been made on classifying the (highly restricted) number of ways in which approximate groups can differ from a genuine group.)

In the classical case when G is the integers {\mathbb Z}, these sets were classified (in a qualitative sense, at least) by a celebrated theorem of Freiman, which roughly speaking says that such sets A are necessarily “commensurate” in some sense with a (generalised) arithmetic progression P of bounded rank.   There are a number of essentially equivalent ways to define what “commensurate” means here; for instance, in the original formulation of the theorem, one asks that A be a dense subset of P, but in modern formulations it is often more convenient to require instead that A be of comparable size to P and be covered by a bounded number of translates of P, or that A and P have an intersection that is of comparable size to both A and P (cf. the notion of commensurability in group theory).

Freiman’s original theorem was extended to more general abelian groups in a sequence of papers culminating in the paper of Green and Ruzsa that handled arbitrary abelian groups.   As such groups now contain non-trivial finite subgroups, the conclusion of the theorem must be  modified by allowing for “coset progressions” P+H, which can be viewed as “extensions”  of generalized arithmetic progressions P by genuine finite groups H.

The proof methods in these abelian results were Fourier-analytic in nature (except in the cases of sets of very small doubling, in which more combinatorial approaches can be applied, and there were also some geometric or combinatorial methods that gave some weaker structural results).  As such, it was a challenge to extend these results to nonabelian groups, although for various important special types of ambient group G (such as a linear group over a finite or infinite field) it turns out that one can use tools exploiting the special structure of those groups (e.g. for linear groups one would use tools from Lie theory and algebraic geometry) to obtain quite satisfactory results; see e.g. this survey of Pyber and Szabo for the linear case.  When the ambient group G is completely arbitrary, it turns out that the problem is closely related to the classical Hilbert’s fifth problem of determining the minimal requirements of a topological group in order for such groups to have Lie structure; this connection was first observed and exploited by Hrushovski, and then used by Breuillard, Green, and myself to obtain the analogue of Freiman’s theorem for an arbitrary nonabelian group.

This survey is too short to discuss in much detail the proof techniques used in these results (although the abelian case is discussed in this book of mine with Vu, and the nonabelian case discussed in this more recent book of mine), but instead focuses on the statements of the various known results, as well as some remaining open questions in the subject (in particular, there is substantial work left to be done in making the estimates more quantitative, particularly in the nonabelian setting).

I’ve just uploaded to the arXiv my paper “Mixing for progressions in non-abelian groups“, submitted to Forum of Mathematics, Sigma (which, along with sister publication Forum of Mathematics, Pi, has just opened up its online submission system). This paper is loosely related in subject matter to my two previous papers on polynomial expansion and on recurrence in quasirandom groups (with Vitaly Bergelson), although the methods here are rather different from those in those two papers. The starting motivation for this paper was a question posed in this foundational paper of Tim Gowers on quasirandom groups. In that paper, Gowers showed (among other things) that if {G} was a quasirandom group, patterns such as {(x,xg,xh, xgh)} were mixing in the sense that, for any four sets {A,B,C,D \subset G}, the number of such quadruples {(x,xg,xh, xgh)} in {A \times B \times C \times D} was equal to {(\mu(A) \mu(B) \mu(C) \mu(D) + o(1)) |G|^3}, where {\mu(A) := |A| / |G|}, and {o(1)} denotes a quantity that goes to zero as the quasirandomness of the group goes to infinity. In my recent paper with Vitaly, we also considered mixing properties of some other patterns, namely {(x,xg,gx)} and {(g,x,xg,gx)}. This paper is concerned instead with the pattern {(x,xg,xg^2)}, that is to say a geometric progression of length three. As observed by Gowers, by applying (a suitably quantitative version of) Roth’s theorem in (cosets of) a cyclic group, one can obtain a recurrence theorem for this pattern without much effort: if {G} is an arbitrary finite group, and {A} is a subset of {G} with {\mu(A) \geq \delta}, then there are at least {c(\delta) |G|^2} pairs {(x,g) \in G^2} such that {x, xg, xg^2 \in A}, where {c(\delta)>0} is a quantity depending only on {\delta}. However, this argument does not settle the question of whether there is a stronger mixing property, in that the number of pairs {(x,g) \in G^2} such that {(x,xg,xg^2) \in A \times B \times C} should be {(\mu(A)\mu(B)\mu(C)+o(1)) |G|^2} for any {A,B,C \subset G}. Informally, this would assert that for {x, g} chosen uniformly at random from {G}, the triplet {(x, xg, xg^2)} should resemble a uniformly selected element of {G^3} in some weak sense.

For non-quasirandom groups, such mixing properties can certainly fail. For instance, if {G} is the cyclic group {G = ({\bf Z}/N{\bf Z},+)} (which is abelian and thus highly non-quasirandom) with the additive group operation, and {A = \{1,\ldots,\lfloor \delta N\rfloor\}} for some small but fixed {\delta > 0}, then {\mu(A) = \delta + o(1)} in the limit {N \rightarrow \infty}, but the number of pairs {(x,g) \in G^2} with {x, x+g, x+2g \in A} is {(\delta^2/2 + o(1)) |G|^2} rather than {(\delta^3+o(1)) |G|^2}. The problem here is that the identity {(x+2g) = 2(x+g) - x} ensures that if {x} and {x+g} both lie in {A}, then {x+2g} has a highly elevated likelihood of also falling in {A}. One can view {A} as the preimage of a small ball under the one-dimensional representation {\rho: G \rightarrow U(1)} defined by {\rho(n) := e^{2\pi i n/N}}; similar obstructions to mixing can also be constructed from other low-dimensional representations.
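As a quick numerical sanity check of this computation (my own illustration, with arbitrary choices of {N} and {\delta}), the following Python sketch counts these pairs for an interval {A} in a cyclic group, and compares the answer both with the {(\delta^2/2)|G|^2} count derived above and with the {\delta^3 |G|^2} count that full mixing would have predicted:

```python
# Count pairs (x, g) in (Z/NZ)^2 with x, x+g, x+2g all lying in the
# interval A = {1, ..., floor(delta*N)}.  N and delta are arbitrary.
N = 2000
delta = 0.1
A = set(range(1, int(delta * N) + 1))

count = sum(1 for x in A for g in range(N)
            if (x + g) % N in A and (x + 2 * g) % N in A)

print(count / N**2)    # observed density of good pairs
print(delta**2 / 2)    # the actual count for an interval, as computed above
print(delta**3)        # what full mixing would have predicted
```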

However, by definition, quasirandom groups do not have low-dimensional representations, and Gowers asked whether mixing for {(x,xg,xg^2)} could hold for quasirandom groups. I do not know if this is the case for arbitrary quasirandom groups, but I was able to settle the question for a specific class of quasirandom groups, namely the special linear groups {G := SL_d(F)} over a finite field {F} in the regime where the dimension {d} is bounded (but is at least two) and {F} is large. Indeed, for such groups I can obtain a count of {(\mu(A) \mu(B) \mu(C) + O( |F|^{-\min(d-1,2)/8} )) |G|^2} for the number of pairs {(x,g) \in G^2} with {(x, xg, xg^2) \in A \times B \times C}. In fact, I have the somewhat stronger statement that there are {(\mu(A) \mu(B) \mu(C) \mu(D) + O( |F|^{-\min(d-1,2)/8} )) |G|^2} pairs {(x,g) \in G^2} with {(x,xg,xg^2,g) \in A \times B \times C \times D} for any {A,B,C,D \subset G}.
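To make the count concrete, here is a minimal brute-force Python sketch (purely illustrative, and in no way part of the argument of the paper) that performs this count in {SL_2(F_5)} for some arbitrarily chosen dense test sets; of course {|F| = 5} is very small, so the error term here is only moderately small:

```python
# Brute-force count of pairs (x, g) with (x, xg, xg^2) in A x B x C, for
# G = SL_2(F_p) with p = 5.  The sets A, B, C are arbitrary dense test sets.
import itertools, random

p = 5

def mul(m1, m2):
    (a, b, c, d), (e, f, g, h) = m1, m2
    return ((a * e + b * g) % p, (a * f + b * h) % p,
            (c * e + d * g) % p, (c * f + d * h) % p)

G = [(a, b, c, d) for a, b, c, d in itertools.product(range(p), repeat=4)
     if (a * d - b * c) % p == 1]

random.seed(0)
A = set(random.sample(G, len(G) // 2))
B = set(random.sample(G, len(G) // 3))
C = set(random.sample(G, len(G) // 4))

count = sum(1 for x in A for g in G
            if mul(x, g) in B and mul(mul(x, g), g) in C)

mu = lambda S: len(S) / len(G)
print(count / len(G)**2)        # observed density of good pairs
print(mu(A) * mu(B) * mu(C))    # the mixing prediction
```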

I was also able to obtain a partial result for the length four progression {(x,xg,xg^2, xg^3)} in the simpler two-dimensional case {G = SL_2(F)}, but I had to make the unusual restriction that the group element {g \in G} was hyperbolic in the sense that it was diagonalisable over the finite field {F} (as opposed to diagonalisable over the algebraic closure {\overline{F}} of that field); this amounts to the discriminant of the matrix being a quadratic residue, and this holds for approximately half of the elements of {G}. The result is then that for any {A,B,C,D \subset G}, one has {(\frac{1}{2} \mu(A) \mu(B) \mu(C) \mu(D) + o(1)) |G|^2} pairs {(x,g) \in G^2} with {g} hyperbolic and {(x,xg,xg^2,xg^3) \in A \times B \times C \times D}. (Again, I actually show a slightly stronger statement in which {g} is restricted to an arbitrary subset {E} of hyperbolic elements.)

For the length three argument, the main tools used are the Cauchy-Schwarz inequality, the quasirandomness of {G}, and some algebraic geometry to ensure that a certain family of probability measures on {G} that are defined algebraically are approximately uniformly distributed. The length four argument is significantly more difficult and relies on a rather ad hoc argument involving, among other things, expander properties related to the work of Bourgain and Gamburd, and also a “twisted” version of an argument of Gowers that is used (among other things) to establish an inverse theorem for the {U^3} norm.

I give some details of these arguments below the fold.


Perhaps the most important structural result about general large dense graphs is the Szemerédi regularity lemma. Here is a standard formulation of that lemma:

Lemma 1 (Szemerédi regularity lemma) Let {G = (V,E)} be a graph on {n} vertices, and let {\epsilon > 0}. Then there exists a partition {V = V_1 \cup \ldots \cup V_M} for some {M \leq M(\epsilon)} with the property that for all but at most {\epsilon M^2} of the pairs {1 \leq i \leq j \leq M}, the pair {V_i, V_j} is {\epsilon}-regular in the sense that

\displaystyle  | d( A, B ) - d( V_i, V_j ) | \leq \epsilon

whenever {A \subset V_i, B \subset V_j} are such that {|A| \geq \epsilon |V_i|} and {|B| \geq \epsilon |V_j|}, and {d(A,B) := |\{ (a,b) \in A \times B: \{a,b\} \in E \}|/|A| |B|} is the edge density between {A} and {B}. Furthermore, the partition is equitable in the sense that {||V_i| - |V_j|| \leq 1} for all {1 \leq i \leq j \leq M}.

There are many proofs of this lemma, which is actually not that difficult to establish; see for instance these previous blog posts for some examples. In this post I would like to record one further proof, based on the spectral decomposition of the adjacency matrix of {G}, which is essentially due to Frieze and Kannan. (Strictly speaking, Frieze and Kannan used a variant of this argument to establish a weaker form of the regularity lemma, but it is not difficult to modify the Frieze-Kannan argument to obtain the usual form of the regularity lemma instead. Some closely related spectral regularity lemmas were also developed by Szegedy.) I found recently (while speaking at the Abel conference in honour of this year’s laureate, Endre Szemerédi) that this particular argument is not as widely known among graph theory experts as I had thought, so I thought I would record it here.

For reasons of exposition, it is convenient to first establish a slightly weaker form of the lemma, in which one drops the hypothesis of equitability (but then has to weight the cells {V_i} by their magnitude when counting bad pairs):

Lemma 2 (Szemerédi regularity lemma, weakened variant) Let {G = (V,E)} be a graph on {n} vertices, and let {\epsilon > 0}. Then there exists a partition {V = V_1 \cup \ldots \cup V_M} for some {M \leq M(\epsilon)} with the property that for all pairs {(i,j) \in \{1,\ldots,M\}^2} outside of an exceptional set {\Sigma}, one has

\displaystyle  | E(A,B) - d_{ij} |A| |B| | \ll \epsilon |V_i| |V_j| \ \ \ \ \ (1)

whenever {A \subset V_i, B \subset V_j}, for some real number {d_{ij}}, where {E(A,B) := |\{ (a,b) \in A \times B: \{a,b\} \in E \}|} is the number of edges between {A} and {B}. Furthermore, we have

\displaystyle  \sum_{(i,j) \in \Sigma} |V_i| |V_j| \ll \epsilon |V|^2. \ \ \ \ \ (2)

Let us now prove Lemma 2. We enumerate {V} (after relabeling) as {V = \{1,\ldots,n\}}. The adjacency matrix {T} of the graph {G} is then a self-adjoint {n \times n} matrix, and thus admits an eigenvalue decomposition

\displaystyle  T = \sum_{i=1}^n \lambda_i u_i^* u_i

for some orthonormal basis {u_1,\ldots,u_n} of {{\bf C}^n} and some eigenvalues {\lambda_1,\ldots,\lambda_n \in {\bf R}}, which we arrange in decreasing order of magnitude:

\displaystyle  |\lambda_1| \geq \ldots \geq |\lambda_n|.

We can compute the trace of {T^2} as

\displaystyle  \hbox{tr}(T^2) = \sum_{i=1}^n |\lambda_i|^2.

But we also have {\hbox{tr}(T^2) = 2|E| \leq n^2}, so

\displaystyle  \sum_{i=1}^n |\lambda_i|^2 \leq n^2. \ \ \ \ \ (3)

Among other things, this implies that

\displaystyle  |\lambda_i| \leq \frac{n}{\sqrt{i}} \ \ \ \ \ (4)

for all {i \geq 1}.
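Both of these facts are easy to test numerically; the following numpy sketch (on an arbitrary random graph of my choosing) checks the identity {\hbox{tr}(T^2) = 2|E|} and the bound (4):

```python
# Verify tr(T^2) = 2|E| (which gives (3)) and the eigenvalue bound (4)
# on the adjacency matrix of a random graph; all parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 200
T = np.triu(rng.random((n, n)) < 0.3, k=1).astype(float)
T = T + T.T                          # symmetric adjacency matrix, zero diagonal

lam = np.linalg.eigvalsh(T)
lam = lam[np.argsort(-np.abs(lam))]  # arrange in decreasing order of magnitude

print(np.isclose((lam ** 2).sum(), T.sum()))   # tr(T^2) = sum lambda_i^2 = 2|E|
i = np.arange(1, n + 1)
print(bool(np.all(np.abs(lam) <= n / np.sqrt(i) + 1e-9)))   # bound (4)
```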

Let {F: {\bf N} \rightarrow {\bf N}} be a function (depending on {\epsilon}) to be chosen later, with {F(i) \geq i} for all {i}. Applying (3) and the pigeonhole principle (or the finite convergence principle, see this blog post), we can find {J \leq C(F,\epsilon)} such that

\displaystyle  \sum_{J \leq i < F(J)} |\lambda_i|^2 \leq \epsilon^3 n^2.

(Indeed, the bound on {J} is basically {F} iterated {1/\epsilon^3} times.) We can now split

\displaystyle  T = T_1 + T_2 + T_3, \ \ \ \ \ (5)

where {T_1} is the “structured” component

\displaystyle  T_1 := \sum_{i < J} \lambda_i u_i^* u_i, \ \ \ \ \ (6)

{T_2} is the “small” component

\displaystyle  T_2 := \sum_{J \leq i < F(J)} \lambda_i u_i^* u_i, \ \ \ \ \ (7)

and {T_3} is the “pseudorandom” component

\displaystyle  T_3 := \sum_{i \geq F(J)} \lambda_i u_i^* u_i. \ \ \ \ \ (8)

We now design a vertex partition to make {T_1} approximately constant on most cells. For each {i < J}, we partition {V} into {O_{J,\epsilon}(1)} cells on which {u_i} (viewed as a function from {V} to {{\bf C}}) only fluctuates by {O(\epsilon n^{-1/2} /J)}, plus an exceptional cell of size {O( \frac{\epsilon}{J} |V|)} coming from the values where {|u_i|} is excessively large (larger than {\sqrt{\frac{J}{\epsilon}} n^{-1/2}}). Combining all these partitions together, we can write {V = V_1 \cup \ldots \cup V_{M-1} \cup V_M} for some {M = O_{J,\epsilon}(1)}, where {V_M} has cardinality at most {\epsilon |V|}, and for all {1 \leq i \leq M-1}, the eigenfunctions {u_1,\ldots,u_{J-1}} all fluctuate by at most {O(\epsilon n^{-1/2}/J)}. In particular, if {1 \leq i,j \leq M-1}, then (by (4) and (6)) the entries of {T_1} fluctuate by at most {O(\epsilon)} on each block {V_i \times V_j}. If we let {d_{ij}} be the mean value of these entries on {V_i \times V_j}, we thus have

\displaystyle  1_B^* T_1 1_A = d_{ij} |A| |B| + O( \epsilon |V_i| |V_j| ) \ \ \ \ \ (9)

for any {1 \leq i,j \leq M-1} and {A \subset V_i, B \subset V_j}, where we view the indicator functions {1_A, 1_B} as column vectors of dimension {n}.
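The following Python sketch (an illustration of mine rather than a faithful implementation; in particular, clipping the large eigenvector values is a crude stand-in for the exceptional cell) shows how such a partition can be produced by bucketing vertices according to the rounded values of the top eigenvectors:

```python
# Bucket the vertices by the rounded values of the top J eigenvectors, so
# that each u_i (i < J) fluctuates by O(eps n^{-1/2} / J) on every cell.
# Values with |u_i| above the cap are clipped into the boundary buckets,
# which crudely plays the role of the exceptional cell V_M above.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
n, J, eps = 300, 3, 0.5
T = np.triu(rng.random((n, n)) < 0.4, k=1).astype(float)
T = T + T.T

lam, U = np.linalg.eigh(T)
U = U[:, np.argsort(-np.abs(lam))]     # columns sorted by |eigenvalue|

scale = eps * n ** -0.5 / J            # allowed fluctuation per eigenvector
cap = np.sqrt(J / eps) * n ** -0.5     # threshold for "excessively large" values
cells = defaultdict(list)
for v in range(n):
    key = tuple(int(np.floor(np.clip(U[v, i], -cap, cap) / scale))
                for i in range(J))
    cells[key].append(v)               # vertices with similar eigenvector data

print(len(cells), "cells; largest:", max(map(len, cells.values())), "vertices")
```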

Next, we observe from (7) and the choice of {J} that {\hbox{tr} T_2^2 \leq \epsilon^3 n^2}. If we let {x_{ab}} be the coefficients of {T_2}, we thus have

\displaystyle  \sum_{a,b \in V} |x_{ab}|^2 \leq \epsilon^3 n^2

and hence by Markov’s inequality we have

\displaystyle  \sum_{a \in V_i} \sum_{b \in V_j} |x_{ab}|^2 \leq \epsilon^2 |V_i| |V_j| \ \ \ \ \ (10)

for all pairs {(i,j) \in \{1,\ldots,M-1\}^2} outside of an exceptional set {\Sigma_1} with

\displaystyle  \sum_{(i,j) \in \Sigma_1} |V_i| |V_j| \leq \epsilon |V|^2.

If {(i,j) \in \{1,\ldots,M-1\}^2} avoids {\Sigma_1}, we thus have

\displaystyle  1_B^* T_2 1_A = O( \epsilon |V_i| |V_j| ) \ \ \ \ \ (11)

for any {A \subset V_i, B \subset V_j}, by (10) and the Cauchy-Schwarz inequality.

Finally, to control {T_3} we see from (4) and (8) that {T_3} has an operator norm of at most {n/\sqrt{F(J)}}. In particular, we have from the Cauchy-Schwarz inequality that

\displaystyle  1_B^* T_3 1_A = O( n^2 / \sqrt{F(J)} ) \ \ \ \ \ (12)

for any {A, B \subset V}.

Let {\Sigma} be the set of all pairs {(i,j) \in \{1,\ldots,M\}^2} where either {(i,j) \in \Sigma_1}, {i = M}, {j=M}, or

\displaystyle  \min(|V_i|, |V_j|) \leq \frac{\epsilon}{M} n.

One easily verifies that (2) holds. If {(i,j) \in \{1,\ldots,M\}^2} is not in {\Sigma}, then by summing (9), (11), (12) and using (5), we see that

\displaystyle  1_B^* T 1_A = d_{ij} |A| |B| + O( \epsilon |V_i| |V_j| ) + O( n^2 / \sqrt{F(J)} ) \ \ \ \ \ (13)

for all {A \subset V_i, B \subset V_j}. The left-hand side is just {E(A,B)}. As {(i,j) \not \in \Sigma}, we have

\displaystyle  |V_i|, |V_j| > \frac{\epsilon}{M} n

and so (since {M = O_{J,\epsilon}(1)})

\displaystyle  n^2 / \sqrt{F(J)} \ll_{J,\epsilon} |V_i| |V_j| / \sqrt{F(J)}.

If we let {F} be a sufficiently rapidly growing function of {J} that depends on {\epsilon}, the second error term in (13) can be absorbed in the first, and (1) follows. This concludes the proof of Lemma 2.

To prove Lemma 1, one argues similarly (after modifying {\epsilon} as necessary), except that the initial partition {V_1,\ldots,V_M} of {V} constructed above needs to be subdivided further into equitable components (of size {\epsilon |V|/M+O(1)}), plus some remainder sets which can be aggregated into an exceptional component of size {O( \epsilon |V| )} (and which can then be redistributed amongst the other components to arrive at a truly equitable partition). We omit the details.

Remark 1 It is easy to verify that {F} needs to be growing exponentially in {J} in order for the above argument to work, which leads to tower-exponential bounds in the number of cells {M} in the partition. It was shown by Gowers that a tower-exponential bound is actually necessary here. By varying {F}, one basically obtains the strong regularity lemma first established by Alon, Fischer, Krivelevich, and Szegedy; in the opposite direction, setting {F(J) := J} essentially gives the weak regularity lemma of Frieze and Kannan.

Remark 2 If we specialise to a Cayley graph, in which {V = (V,+)} is a finite abelian group and {E = \{ (a,b): a-b \in A \}} for some (symmetric) subset {A} of {V}, then the eigenvectors are characters, and one essentially recovers the arithmetic regularity lemma of Green, in which the vertex partition classes {V_i} are given by Bohr sets (and one can then place additional regularity properties on these Bohr sets with some additional arguments). The components {T_1, T_2, T_3} of {T}, representing high, medium, and low eigenvalues of {T}, then become a decomposition associated to high, medium, and low Fourier coefficients of {A}.

Remark 3 The use of spectral theory here is parallel to the use of Fourier analysis to establish results such as Roth’s theorem on arithmetic progressions of length three. In analogy with this, one could view hypergraph regularity as being a sort of “higher order spectral theory”, although this spectral perspective is not as convenient as it is in the graph case.

I’ve just uploaded to the arXiv my joint paper with Vitaly Bergelson, “Multiple recurrence in quasirandom groups“, which is submitted to Geom. Funct. Anal. This paper builds upon a paper of Gowers in which he introduced the concept of a quasirandom group, and established some mixing (or recurrence) properties of such groups. A {D}-quasirandom group is a finite group with no non-trivial unitary representations of dimension at most {D}. We will informally refer to a “quasirandom group” as a {D}-quasirandom group with the quasirandomness parameter {D} large (more formally, one can work with a sequence of {D_n}-quasirandom groups with {D_n} going to infinity). A typical example of a quasirandom group is {SL_2(F_p)} where {p} is a large prime. Quasirandom groups are discussed in depth in this blog post. One of the key properties of quasirandom groups established in Gowers’ paper is the following “weak mixing” property: if {A, B} are subsets of {G}, then for “almost all” {g \in G}, one has

\displaystyle  \mu( A \cap gB ) \approx \mu(A) \mu(B) \ \ \ \ \ (1)

where {\mu(A) := |A|/|G|} denotes the density of {A} in {G}. Here, we use {x \approx y} to informally represent an estimate of the form {x=y+o(1)} (where {o(1)} is a quantity that goes to zero when the quasirandomness parameter {D} goes to infinity), and “almost all {g \in G}” denotes “for all {g} in a subset of {G} of density {1-o(1)}“. As a corollary, if {A,B,C} have positive density in {G} (by which we mean that {\mu(A)} is bounded away from zero, uniformly in the quasirandomness parameter {D}, and similarly for {B,C}), then (if the quasirandomness parameter {D} is sufficiently large) we can find elements {g, x \in G} such that {g \in A}, {x \in B}, {gx \in C}. In fact we can find approximately {\mu(A)\mu(B)\mu(C) |G|^2} such pairs {(g,x)}. To put it another way: if we choose {g,x} uniformly and independently at random from {G}, then the events {g \in A}, {x \in B}, {gx \in C} are approximately independent (thus the random variable {(g,x,gx) \in G^3} resembles a uniformly distributed random variable on {G^3} in some weak sense). One can also express this mixing property in integral form as

\displaystyle  \int_G \int_G f_1(g) f_2(x) f_3(gx)\ d\mu(g) d\mu(x) \approx (\int_G f_1\ d\mu) (\int_G f_2\ d\mu) (\int_G f_3\ d\mu)

for any bounded functions {f_1,f_2,f_3: G \rightarrow {\bf R}}. (Of course, with {G} being finite, one could replace the integrals here by finite averages if desired.) Or in probabilistic language, we have

\displaystyle  \mathop{\bf E} f_1(g) f_2(x) f_3(gx) \approx \mathop{\bf E} f_1(x_1) f_2(x_2) f_3(x_3)

where {g, x, x_1, x_2, x_3} are drawn uniformly and independently at random from {G}.
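As a concrete (and very small-scale) illustration of (1), the following Python sketch of mine tabulates the deviation of {\mu(A \cap gB)} from {\mu(A) \mu(B)} over all {g} in {SL_2(F_5)}, for randomly chosen dense {A, B}; the quasirandomness parameter of this group is tiny, so the deviations are only moderately small:

```python
# For G = SL_2(F_5) and random dense A, B, tabulate the deviation
# |mu(A \cap gB) - mu(A) mu(B)| over all g in G.
import itertools, random

p = 5

def mul(m1, m2):
    (a, b, c, d), (e, f, g, h) = m1, m2
    return ((a * e + b * g) % p, (a * f + b * h) % p,
            (c * e + d * g) % p, (c * f + d * h) % p)

G = [(a, b, c, d) for a, b, c, d in itertools.product(range(p), repeat=4)
     if (a * d - b * c) % p == 1]

random.seed(1)
A = set(random.sample(G, len(G) // 2))
B = set(random.sample(G, len(G) // 2))
muA, muB = len(A) / len(G), len(B) / len(G)

devs = [abs(len(A & {mul(g, b) for b in B}) / len(G) - muA * muB) for g in G]
print(max(devs), sum(devs) / len(devs))   # worst-case and average deviation
```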

As observed in Gowers’ paper, one can iterate this observation to find “parallelopipeds” of any given dimension in dense subsets of {G}. For instance, applying (1) with {A,B,C} replaced by {A \cap hB}, {C \cap hD}, and {E \cap hF} one can assert (after some relabeling) that for {g,h,x} chosen uniformly and independently at random from {G}, the events {g \in A}, {h \in B}, {gh \in C}, {x \in D}, {gx \in E}, {hx \in F}, {ghx \in H} are approximately independent whenever {A,B,C,D,E,F,H} are dense subsets of {G}; thus the tuple {(g,h,gh,x,gx,hx,ghx)} resembles a uniformly distributed random variable in {G^7} in some weak sense.

However, there are other tuples for which the above iteration argument does not seem to apply. One of the simplest tuples in this vein is the tuple {(g, x, xg, gx)} in {G^4}, when {g, x} are drawn uniformly at random from a quasirandom group {G}. Here, one does not expect the tuple to behave as if it were uniformly distributed in {G^4}, because there is an obvious constraint connecting the last two components {gx, xg} of this tuple: they must lie in the same conjugacy class! In particular, if {A} is a subset of {G} that is the union of conjugacy classes, then the events {gx \in A}, {xg \in A} are perfectly correlated, so that {\mu( gx \in A, xg \in A)} is equal to {\mu(A)} rather than {\mu(A)^2}. Our main result, though, is that in a quasirandom group, this is (approximately) the only constraint on the tuple. More precisely, we have

Theorem 1 Let {G} be a {D}-quasirandom group, and let {g, x} be drawn uniformly at random from {G}. Then for any {f_1,f_2,f_3,f_4: G \rightarrow [-1,1]}, we have

\displaystyle  \mathop{\bf E} f_1(g) f_2(x) f_3(gx) f_4(xg) = \mathop{\bf E} f_1(x_1) f_2(x_2) f_3(x_3) f_4(x_4) + o(1)

where {o(1)} goes to zero as {D \rightarrow \infty}, {x_1,x_2,x_3} are drawn uniformly and independently at random from {G}, and {x_4} is drawn uniformly at random from the conjugates of {x_3} for each fixed choice of {x_1,x_2,x_3}.

This is the probabilistic formulation of the above theorem; one can also phrase the theorem in other formulations (such as an integral formulation), and this is detailed in the paper. This theorem leads to a number of recurrence results; for instance, as a corollary of this result, we have

\displaystyle  \mu(A) \mu(B)^2 - o(1) \leq \mu( A \cap gB \cap Bg ) \leq \mu(A) \mu(B) + o(1)

for almost all {g \in G}, and any dense subsets {A, B} of {G}; the lower and upper bounds are sharp, with the lower bound being attained when {B} is randomly distributed, and the upper bound when {B} is conjugation-invariant.
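The conjugacy constraint mentioned earlier is elementary to check by machine as well: since {xg = x(gx)x^{-1}}, the elements {gx} and {xg} always lie in the same conjugacy class, so the events {gx \in A} and {xg \in A} coincide whenever {A} is a union of conjugacy classes. Here is a tiny Python verification of this perfect correlation in the (decidedly non-quasirandom) group {S_4}, with {A} the class of transpositions:

```python
# gx and xg always lie in the same conjugacy class, since xg = x(gx)x^{-1}.
# Hence for a conjugation-invariant A the events gx in A and xg in A coincide;
# we verify this exhaustively in S_4 with A the class of transpositions.
import itertools

def compose(s, t):                       # (s*t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(len(t)))

def cycle_type(s):
    seen, lengths = set(), []
    for i in range(len(s)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j, L = s[j], L + 1
            lengths.append(L)
    return tuple(sorted(lengths))

G = list(itertools.permutations(range(4)))
A = {s for s in G if cycle_type(s) == (1, 1, 2)}    # the transpositions

agree = sum(1 for g in G for x in G
            if (compose(g, x) in A) == (compose(x, g) in A))
print(agree == len(G) ** 2)              # True: the two events always agree
```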

To me, the more interesting thing here is not the result itself, but how it is proven. Vitaly and I were not able to find a purely finitary way to establish this mixing theorem. Instead, we had to first use the machinery of ultraproducts (as discussed in this previous post) to convert the finitary statement about a quasirandom group to an infinitary statement about a type of infinite group which we call an ultra quasirandom group (basically, an ultraproduct of increasingly quasirandom finite groups). This is analogous to how the Furstenberg correspondence principle is used to convert a finitary combinatorial problem into an infinitary ergodic theory problem.

Ultra quasirandom groups come equipped with a finite, countably additive measure known as Loeb measure {\mu_G}, which is very analogous to the Haar measure of a compact group, except that in the case of ultra quasirandom groups one does not quite have a topological structure that would give compactness. Instead, one has a slightly weaker structure known as a {\sigma}-topology, which is like a topology except that open sets are only closed under countable unions rather than arbitrary ones. There are some interesting measure-theoretic and topological issues regarding the distinction between topologies and {\sigma}-topologies (and between Haar measure and Loeb measure), but for this post it is perhaps best to gloss over these issues and pretend that ultra quasirandom groups {G} come with a Haar measure. One can then recast Theorem 1 as a mixing theorem for the left and right actions of the ultra quasirandom group {G} on itself, which roughly speaking is the assertion that

\displaystyle  \int_G f_1(x) L_g f_2(x) L_g R_g f_3(x)\ d\mu_G(x) \approx 0 \ \ \ \ \ (2)

for “almost all” {g \in G}, if {f_1, f_2, f_3} are bounded measurable functions on {G}, with {f_3} having zero mean on all conjugacy classes of {G}, where {L_g, R_g} are the left and right translation operators

\displaystyle  L_g f(x) := f(g^{-1} x); \quad R_g f(x) := f(xg).

To establish this mixing theorem, we use the machinery of idempotent ultrafilters, which is a particularly useful tool for understanding the ergodic theory of actions of countable groups {G} that need not be amenable; in the non-amenable setting the classical ergodic averages do not make much sense, but ultrafilter-based averages are still available. To oversimplify substantially, the idempotent ultrafilter arguments let one establish mixing estimates of the form (2) for “many” elements {g} of an infinite-dimensional parallelopiped known as an IP system (provided that the actions {L_g,R_g} of this IP system obey some technical mixing hypotheses, but let’s ignore that for sake of this discussion). The claim then follows by using the quasirandomness hypothesis to show that if the estimate (2) failed for a large set of {g \in G}, then this large set would then contain an IP system, contradicting the previous claim.

Idempotent ultrafilters are an extremely infinitary type of mathematical object (one has to use Zorn’s lemma no fewer than three times just to construct one of these objects!). So it is quite remarkable that they can be used to establish a finitary theorem such as Theorem 1, though as is often the case with such infinitary arguments, one gets absolutely no quantitative control whatsoever on the error terms {o(1)} appearing in that theorem. (It is also mildly amusing to note that our arguments involve the use of ultrafilters in two completely different ways: firstly in order to set up the ultraproduct that converts the finitary mixing problem to an infinitary one, and secondly to solve the infinitary mixing problem. Despite some superficial similarities, there appear to be no substantial commonalities between these two usages of ultrafilters.) There is already a fair amount of literature on using idempotent ultrafilter methods in infinitary ergodic theory, and perhaps by further development of ultraproduct correspondence principles, one can use such methods to obtain further finitary consequences (although the state of the art for idempotent ultrafilter ergodic theory has not advanced much beyond the analysis of two commuting shifts {L_g, R_g} currently, which is the main reason why our arguments only handle the pattern {(g,x,xg,gx)} and not more sophisticated patterns).

We also have some miscellaneous other results in the paper. It turns out that by using the triangle removal lemma from graph theory, one can obtain a recurrence result that asserts that whenever {A} is a dense subset of a finite group {G} (not necessarily quasirandom), then there are {\gg |G|^2} pairs {(x,g)} such that {x, gx, xg} all lie in {A}. Using a hypergraph generalisation of the triangle removal lemma known as the hypergraph removal lemma, one can obtain more complicated versions of this statement; for instance, if {A} is a dense subset of {G^2}, then one can find {\gg |G|^2} triples {(x,y,g)} such that {(x,y), (gx, y), (gx, gy), (gxg^{-1}, gyg^{-1})} all lie in {A}. But the method is tailored to the specific types of patterns given here, and we do not have a general method for obtaining recurrence or mixing properties for arbitrary patterns of words in some finite alphabet such as {g,x,y}.

We also give some properties of a model example of an ultra quasirandom group, namely the ultraproduct {SL_2(F)} of {SL_2(F_{p_n})} where {p_n} is a sequence of primes going off to infinity. Thanks to the substantial recent progress (by Helfgott, Bourgain, Gamburd, Breuillard, and others) on understanding the expansion properties of the finite groups {SL_2(F_{p_n})}, we have a fair amount of knowledge on the ultraproduct {SL_2(F)} as well; for instance any two elements of {SL_2(F)} will almost surely generate a group with a spectral gap. We don’t have any direct application of this particular ultra quasirandom group, but it might be interesting to study it further.

I’ve just uploaded to the arXiv my paper “Expanding polynomials over finite fields of large characteristic, and a regularity lemma for definable sets“, submitted to Contrib. Disc. Math. The motivation of this paper is to understand a certain polynomial variant of the sum-product phenomenon in finite fields. This phenomenon asserts that if {A} is a non-empty subset of a finite field {F}, then either the sumset {A+A := \{a+b: a,b \in A\}} or product set {A \cdot A := \{ab: a,b \in A \}} will be significantly larger than {A}, unless {A} is close to a subfield of {F} (or to {\{1\}}). In particular, in the regime when {A} is large, say {|F|^{1-c} < |A| \leq |F|}, one expects an expansion bound of the form

\displaystyle  |A+A| + |A \cdot A| \gg (|F|/|A|)^{c'} |A| \ \ \ \ \ (1)

for some absolute constants {c, c' > 0}. Results of this type are known; for instance, Hart, Iosevich, and Solymosi obtained precisely this bound for {(c,c')=(3/10,1/3)} (in the case when {|F|} is prime), which was then improved by Garaev to {(c,c')=(1/3,1/2)}.

We have focused here on the case when {A} is a large subset of {F}, but sum-product estimates are also extremely interesting in the opposite regime in which {A} is allowed to be small (see for instance the papers of Katz-Shen and Li and of Garaev for some recent work in this case, building on some older papers of Bourgain, Katz and myself and of Bourgain, Glibichuk, and Konyagin). However, the techniques used in these two regimes are rather different. For large subsets of {F}, it is often profitable to use techniques such as the Fourier transform or the Cauchy-Schwarz inequality to “complete” a sum over a large set (such as {A}) into a set over the entire field {F}, and then to use identities concerning complete sums (such as the Weil bound on complete exponential sums over a finite field). For small subsets of {F}, such techniques are usually quite inefficient, and one has to proceed by somewhat different combinatorial methods which do not try to exploit the ambient field {F}. But my paper focuses exclusively on the large {A} regime, and unfortunately does not directly say much (except through reasoning by analogy) about the small {A} case.

Note that it is necessary to have both {A+A} and {A \cdot A} appear on the left-hand side of (1). Indeed, if one just has the sumset {A+A}, then one can set {A} to be a long arithmetic progression to give counterexamples to (1). Similarly, if one just has a product set {A \cdot A}, then one can set {A} to be a long geometric progression. The sum-product phenomenon can then be viewed as the assertion that a set cannot simultaneously behave like a long arithmetic progression and a long geometric progression, unless it is already very close to behaving like a subfield.
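This dichotomy is easy to observe experimentally; the following Python sketch (with an arbitrary prime {p} and arbitrary progression lengths, chosen by me for illustration) computes {|A+A|} and {|A \cdot A|} for an arithmetic progression and a geometric progression in {F_p}:

```python
# Compare |A+A| and |A.A| in F_p for an arithmetic progression and a
# geometric progression; p, the generator 5, and the size m are arbitrary.
p = 10007
m = 100

AP = [1 + d for d in range(m)]            # arithmetic progression
GP = [pow(5, k, p) for k in range(m)]     # geometric progression

for name, A in [("AP", AP), ("GP", GP)]:
    sums = {(a + b) % p for a in A for b in A}
    prods = {(a * b) % p for a in A for b in A}
    print(name, "|A+A| =", len(sums), " |A.A| =", len(prods))
```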

Now we consider a polynomial variant of the sum-product phenomenon, where we consider a polynomial image

\displaystyle  P(A,A) := \{ P(a,b): a,b \in A \}

of a set {A \subset F} with respect to a polynomial {P: F \times F \rightarrow F}; we can also consider the asymmetric setting of the image

\displaystyle  P(A,B) := \{ P(a,b): a \in A,b \in B \}

of two subsets {A,B \subset F}. The regime we will be interested in is the one where the field {F} is large, and the subsets {A, B} of {F} are also large, but the polynomial {P} has bounded degree. Actually, for technical reasons it will not be enough for us to assume that {F} has large cardinality; we will also need to assume that {F} has large characteristic. (The two concepts are synonymous for fields of prime order, but not in general; for instance, the field with {2^n} elements becomes large as {n \rightarrow \infty} while the characteristic remains fixed at {2}, and is thus not going to be covered by the results in this paper.)

In this paper of Vu, it was shown that one could replace {A \cdot A} with {P(A,A)} in (1), thus obtaining a bound of the form

\displaystyle  |A+A| + |P(A,A)| \gg (|F|/|A|)^{c'} |A|

whenever {|A| \geq |F|^{1-c}} for some absolute constants {c, c' > 0}, unless the polynomial {P} had the degenerate form {P(x,y) = Q(L(x,y))} for some linear function {L: F \times F \rightarrow F} and polynomial {Q: F \rightarrow F}, in which case {P(A,A)} behaves too much like {A+A} to get reasonable expansion. In this paper, we focus instead on the question of bounding {P(A,A)} alone. In particular, one can ask to classify the polynomials {P} for which one has the weak expansion property

\displaystyle |P(A,A)| \gg (|F|/|A|)^{c'} |A|

whenever {|A| \geq |F|^{1-c}} for some absolute constants {c, c' > 0}. One can also ask for stronger versions of this expander property, such as the moderate expansion property

\displaystyle |P(A,A)| \gg |F|

whenever {|A| \geq |F|^{1-c}}, or the almost strong expansion property

\displaystyle |P(A,A)| \geq |F| - O( |F|^{1-c'})

whenever {|A| \geq |F|^{1-c}}. (One can consider even stronger expansion properties, such as the strong expansion property {|P(A,A)| \geq |F|-O(1)}, but it was shown by Gyarmati and Sarkozy that this property cannot hold for polynomials of two variables of bounded degree when {|F| \rightarrow \infty}.) One can also consider asymmetric versions of these properties, in which one obtains lower bounds on {|P(A,B)|} rather than {|P(A,A)|}.

The example of a long arithmetic or geometric progression shows that the polynomials {P(x,y) = x+y} or {P(x,y) = xy} cannot be expanders in any of the above senses, and a similar construction also shows that polynomials of the form {P(x,y) = Q(f(x)+f(y))} or {P(x,y) = Q(f(x) f(y))} for some polynomials {Q, f: F \rightarrow F} cannot be expanders. On the other hand, there are a number of results in the literature establishing expansion for various polynomials in two or more variables that are not of this degenerate form (in part because such results are related to incidence geometry questions in finite fields, such as the finite field version of the Erdos distinct distances problem). For instance, Solymosi established weak expansion for polynomials of the form {P(x,y) = f(x)+y} when {f} is a nonlinear polynomial, with generalisations by Hart, Li, and Shen for various polynomials of the form {P(x,y) = f(x) + g(y)} or {P(x,y) = f(x) g(y)}. Further examples of expanding polynomials appear in the work of Shkredov, Iosevich-Rudnev, and Bukh-Tsimerman, as well as the previously mentioned paper of Vu and of Hart-Li-Shen, and these papers in turn cite many further results which are in the spirit of the polynomial expansion bounds discussed here (for instance, dealing with the small {A} regime, or working in other fields such as {{\bf R}} instead of in finite fields {F}). We will not summarise all these results here; they are summarised briefly in my paper, and in more detail in the papers of Hart-Li-Shen and of Bukh-Tsimerman. But we will single out one of the results of Bukh-Tsimerman, which is one of the most recent and general of these results, and closest to the results of my own paper. Roughly speaking, in this paper it is shown that a polynomial {P(x,y)} of two variables and bounded degree will be a moderate expander if it is non-composite (in the sense that it does not take the form {P(x,y) = Q(R(x,y))} for some non-linear polynomial {Q} and some polynomial {R}, possibly having coefficients in the algebraic closure of {F}) and is monic in both {x} and {y}, thus it takes the form {P(x,y) = x^d + S(x,y)} for some {d \geq 1} and some polynomial {S} of degree at most {d-1} in {x}, and similarly with the roles of {x} and {y} reversed, unless {P} is of the form {P(x,y) = f(x) + g(y)} or {P(x,y) = f(x) g(y)} (in which case the expansion theory is covered to a large extent by the previous work of Hart, Li, and Shen).
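As a quick experimental illustration of the distinction (my own toy example, not taken from any of the papers cited above), the following Python sketch compares {|P(A,A)|} over {F_p} for the degenerate polynomial {P(x,y) = xy} and for the Solymosi-type expander {P(x,y) = x^2+y}, with {A} a geometric progression (on which {xy} fails badly to expand):

```python
# Compare |P(A,A)| in F_p for the degenerate polynomial P(x,y) = xy and the
# Solymosi-type expander P(x,y) = x^2 + y, on a geometric progression A.
# All parameters are arbitrary illustrative choices.
p = 10007
A = [pow(5, k, p) for k in range(100)]

img_mult = {(a * b) % p for a in A for b in A}      # P(x,y) = xy
img_exp = {(a * a + b) % p for a in A for b in A}   # P(x,y) = x^2 + y

print(len(img_mult), len(img_exp))   # small image vs. large image
```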

Our first main result improves upon the Bukh-Tsimerman result by strengthening the notion of expansion and removing the non-composite and monic hypotheses, but imposes a condition of large characteristic. I’ll state the result here slightly informally as follows:

Theorem 1 (Criterion for moderate expansion) Let {P: F \times F \rightarrow F} be a polynomial of bounded degree over a finite field {F} of sufficiently large characteristic, and suppose that {P} is not of the form {P(x,y) = Q(f(x)+g(y))} or {P(x,y) = Q(f(x)g(y))} for some polynomials {Q,f,g: F \rightarrow F}. Then one has the (asymmetric) moderate expansion property

\displaystyle  |P(A,B)| \gg |F|

whenever {|A| |B| \ggg |F|^{2-1/8}}.

This is basically a sharp necessary and sufficient condition for asymmetric moderate expansion for polynomials of two variables. In the paper, analogous sufficient conditions for weak or almost strong expansion are also given, although these are not quite as satisfactory (particularly the conditions for almost strong expansion, which include a somewhat complicated algebraic condition which is not easy to check, and which I would like to simplify further, but was unable to).

The argument here resembles the Bukh-Tsimerman argument in many ways. One can view the result as an assertion about the expansion properties of the graph {\{ (a,b,P(a,b)): a,b \in F \}}, which can essentially be thought of as a somewhat sparse three-uniform hypergraph on {F}. Being sparse, it is difficult to directly apply techniques from dense graph or hypergraph theory for this situation; however, after a few applications of the Cauchy-Schwarz inequality, it turns out (as observed by Bukh and Tsimerman) that one can essentially convert the problem to one about the expansion properties of the set

\displaystyle  \{ (P(a,c), P(b,c), P(a,d), P(b,d)): a,b,c,d \in F \} \ \ \ \ \ (2)

(actually, one should view this as a multiset, but let us ignore this technicality) which one expects to be a dense set in {F^4}, except in the case when the associated algebraic variety

\displaystyle  \{ (P(a,c), P(b,c), P(a,d), P(b,d)): a,b,c,d \in \overline{F} \}

fails to be Zariski dense, but it turns out that in this case one can use some differential geometry and Riemann surface arguments (after first invoking the Lefschetz principle and the high characteristic hypothesis to work over the complex numbers instead of over a finite field) to show that {P} is of the form {Q(f(x)+g(y))} or {Q(f(x)g(y))}. This reduction is related to the classical fact that the only one-dimensional algebraic groups over the complex numbers are the additive group {({\bf C},+)}, the multiplicative group {({\bf C} \backslash \{0\},\times)}, or the elliptic curves (but the latter have a group law given by rational functions rather than polynomials, and so ultimately end up being eliminated from consideration, though they would play an important role if one wanted to study the expansion properties of rational functions).

It remains to understand what the structure of the set (2) is. To understand dense graphs or hypergraphs, one of the standard tools of choice is the Szemerédi regularity lemma, which carves up such graphs into a bounded number of cells, with the graph behaving pseudorandomly on most pairs of cells. However, the bounds in this lemma are notoriously poor (the regularity obtained is an inverse tower exponential function of the number of cells), and this makes this lemma unsuitable for the type of expansion properties we seek (in which we want to deal with sets {A} which have a polynomial sparsity, e.g. {|A| \sim |F|^{1-c}}). Fortunately, in the case of sets such as (2) which are definable over the language of rings, it turns out that a much stronger regularity lemma is available, which I call the “algebraic regularity lemma”. I’ll state it (again, slightly informally) in the context of graphs as follows:

Lemma 2 (Algebraic regularity lemma) Let {F} be a finite field of large characteristic, and let {V, W} be definable sets over {F} of bounded complexity (i.e. {V, W} are subsets of {F^n}, {F^m} for some bounded {n,m} that can be described by some first-order predicate in the language of rings of bounded length and involving boundedly many constants). Let {E} be a definable subset of {V \times W}, again of bounded complexity (one can view {E} as a bipartite graph connecting {V} and {W}). Then one can partition {V, W} into a bounded number of cells {V_1,\ldots,V_a}, {W_1,\ldots,W_b}, still definable with bounded complexity, such that for all pairs {i =1,\ldots a}, {j=1,\ldots,b}, one has the regularity property

\displaystyle  |E \cap (A \times B)| = d_{ij} |A| |B| + O( |F|^{-1/4} |V| |W| )

for all {A \subset V_i, B \subset W_j}, where {d_{ij} := \frac{|E \cap (V_i \times W_j)|}{|V_i| |W_j|}} is the density of {E} in {V_i \times W_j}.

This lemma resembles the Szemerédi regularity lemma, but regularises all pairs of cells (not just most pairs), and the regularity is of polynomial strength in {|F|}, rather than inverse tower exponential in the number of cells. Also, the cells are not arbitrary subsets of {V,W}, but are themselves definable with bounded complexity, which turns out to be crucial for applications. I am optimistic that this lemma will be useful not just for studying expanding polynomials, but for many other combinatorial questions involving dense subsets of definable sets over finite fields.

The above lemma is stated for graphs {E \subset V \times W}, but one can iterate it to obtain an analogous regularisation of hypergraphs {E \subset V_1 \times \ldots \times V_k} for any bounded {k} (for application to (2), we need {k=4}). This hypergraph regularity lemma, by the way, is not analogous to the strong hypergraph regularity lemmas of Rodl et al. and Gowers developed in the last six or so years, but closer in spirit to the older (but weaker) hypergraph regularity lemma of Chung which gives the same “order {1}” regularity that the graph regularity lemma gives, rather than higher order regularity.

One feature of the proof of Lemma 2 which I found striking was the need to use some fairly high-powered technology from algebraic geometry, and in particular the Lang-Weil bound on counting points in varieties over a finite field (discussed in this previous blog post), and also the theory of the étale fundamental group. Let me try to briefly explain why this is the case. A model example of a definable set of bounded complexity {E} is a set {E \subset F^n \times F^m} of the form

\displaystyle  E = \{ (x,y) \in F^n \times F^m: \exists t \in F; P(x,y,t)=0 \}

for some polynomial {P: F^n \times F^m \times F \rightarrow F}. (Actually, it turns out that one can essentially write all definable sets as an intersection of sets of this form; see this previous blog post for more discussion.) To regularise the set {E}, it is convenient to square the adjacency matrix, which soon leads to the study of counting functions such as

\displaystyle  \mu(x,x') := | \{ (y,t,t') \in F^m \times F \times F: P(x,y,t) = P(x',y,t') = 0 \}|.

If one can show that this function {\mu} is “approximately finite rank” in the sense that (modulo lower order errors, of size {O(|F|^{-1/2})} smaller than the main term), this quantity depends only on a bounded number of bits of information about {x} and a bounded number of bits of information about {x'}, then a little bit of linear algebra will then give the required regularity result.

One can recognise {\mu(x,x')} as counting {F}-points of a certain algebraic variety

\displaystyle  V_{x,x'} := \{ (y,t,t') \in \overline{F}^m \times \overline{F} \times \overline{F}: P(x,y,t) = P(x',y,t') = 0 \}.

The Lang-Weil bound (discussed in this previous post) provides a formula for this count, in terms of the number {c(x,x')} of geometrically irreducible components of {V_{x,x'}} that are defined over {F} (or equivalently, are invariant with respect to the Frobenius endomorphism associated to {F}). So the problem boils down to ensuring that this quantity {c(x,x')} is “generically bounded rank”, in the sense that for generic {x,x'}, its value depends only on a bounded number of bits of {x} and a bounded number of bits of {x'}.

Here is where the étale fundamental group comes in. One can view {V_{x,x'}} as a fibre product {V_x \times_{\overline{F}^m} V_{x'}} of the varieties

\displaystyle  V_x := \{ (y,t) \in \overline{F}^m \times \overline{F}: P(x,y,t) = 0 \}

and

\displaystyle  V_{x'} := \{ (y,t) \in \overline{F}^m \times \overline{F}: P(x',y,t) = 0 \}

over {\overline{F}^m}. If one is in sufficiently high characteristic (or even better, in zero characteristic, which one can reduce to by an ultraproduct (or nonstandard analysis) construction, similar to that discussed in this previous post), the varieties {V_x,V_{x'}} are generically finite étale covers of {\overline{F}^m}, and the fibre product {V_x \times_{\overline{F}^m} V_{x'}} is then also generically a finite étale cover. One can count the components of a finite étale cover of a connected variety by counting the number of orbits of the étale fundamental group acting on a fibre of that variety (much as the number of components of a cover of a connected manifold is the number of orbits of the topological fundamental group acting on that fibre). So if one understands the étale fundamental group of a certain generic subset of {\overline{F}^m} (formed by intersecting together an {x}-dependent generic subset of {\overline{F}^m} with an {x'}-dependent generic subset), this in principle controls {c(x,x')}. It turns out that one can decouple the {x} and {x'} dependence of this fundamental group by using an étale version of the van Kampen theorem for the fundamental group, which I discussed in this previous blog post. With this fact (and another deep fact about the étale fundamental group in zero characteristic, namely that it is topologically finitely generated), one can obtain the desired generic bounded rank property of {c(x,x')}, which gives the regularity lemma.

In order to expedite the deployment of all this algebraic geometry (as well as some Riemann surface theory), it is convenient to use the formalism of nonstandard analysis (or the ultraproduct construction), which among other things can convert quantitative, finitary problems in large characteristic into equivalent qualitative, infinitary problems in zero characteristic (in the spirit of this blog post). This allows one to use several tools from those fields as “black boxes”; not just the theory of étale fundamental groups (which are considerably simpler and more favorable in characteristic zero than they are in positive characteristic), but also some results limiting the morphisms between compact Riemann surfaces of high genus (such as the de Franchis theorem, the Riemann-Hurwitz formula, or the fact that all morphisms between elliptic curves are essentially group homomorphisms), which would be quite unwieldy to utilise if one did not first pass to the zero characteristic case (and thence to the complex case) via the ultraproduct construction (followed by the Lefschetz principle).

I found this project to be particularly educational for me, as it forced me to wander outside of my usual range by quite a bit in order to pick up the tools from algebraic geometry and Riemann surfaces that I needed (in particular, I read through several chapters of EGA and SGA for the first time). This did however put me in the slightly unnerving position of having to use results (such as the Riemann existence theorem) whose proofs I have not fully verified for myself, but which are easy to find in the literature, and widely accepted in the field. I suppose this type of dependence on results in the literature is more common in the more structured fields of mathematics than it is in analysis, which by its nature has fewer reusable black boxes, and so key tools often need to be rederived and modified for each new application. (This distinction is discussed further in this article of Gowers.)

One of the first non-trivial theorems one encounters in classical algebraic geometry is Bézout’s theorem, which we will phrase as follows:

Theorem 1 (Bézout’s theorem) Let {k} be a field, and let {P, Q \in k[x,y]} be non-zero polynomials in two variables {x,y} with no common factor. Then the two curves {\{ (x,y) \in k^2: P(x,y) = 0\}} and {\{ (x,y) \in k^2: Q(x,y) = 0\}} have no common components, and intersect in at most {\hbox{deg}(P) \hbox{deg}(Q)} points.

This theorem can be proven by a number of means, for instance by using the classical tool of resultants. It has many strengthenings, generalisations, and variants; see for instance this previous blog post on Bézout’s inequality. Bézout’s theorem asserts a fundamental algebraic dichotomy, of importance in combinatorial incidence geometry: any two algebraic curves either share a common component, or else have a bounded finite intersection; there is no intermediate case in which the intersection is unbounded in cardinality, but falls short of a common component. This dichotomy is closely related to the integrality gap in algebraic dimension: an algebraic set can have an integer dimension such as {0} or {1}, but cannot attain any intermediate dimensions such as {1/2}. This stands in marked contrast to sets of analytic, combinatorial, or probabilistic origin, whose “dimension” is typically not necessarily constrained to be an integer.

Bézout’s inequality tells us, roughly speaking, that the intersection of a curve of degree {D_1} and a curve of degree {D_2} forms a set of at most {D_1 D_2} points. One can consider the converse question: given a set {S} of {N} points in the plane {k^2}, can one find two curves of degrees {D_1,D_2} with {D_1 D_2 = O(N)} and no common components, whose intersection contains these points?

A model example that supports the possibility of such a converse is a grid {S = A \times B} that is a Cartesian product of two finite subsets {A, B} of {k} with {|A| |B| = N}. In this case, one can take one curve to be the union of {|A|} vertical lines, and the other curve to be the union of {|B|} horizontal lines, to obtain the required decomposition. Thus, if the proposed converse to Bézout’s inequality held, it would assert that any set of {N} points was essentially behaving like a “nonlinear grid” of size {N}.
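This model example is simple enough to write down completely explicitly; the following Python sketch (over an arbitrary small finite field, and with arbitrary choices of {A} and {B}) realises the grid {A \times B} as the intersection of two such curves:

```python
# Realise a grid A x B in F_p^2 as the intersection of a union of |A|
# vertical lines with a union of |B| horizontal lines; here p, A, B are
# arbitrary, and deg(P) * deg(Q) = |A| * |B| = N as in the discussion above.
p = 101
A = [3, 17, 42, 68]
B = [5, 9, 77]

def P(x, y):                 # vanishes exactly on the vertical lines x = a
    v = 1
    for a in A:
        v = v * (x - a) % p
    return v

def Q(x, y):                 # vanishes exactly on the horizontal lines y = b
    v = 1
    for b in B:
        v = v * (y - b) % p
    return v

variety = {(x, y) for x in range(p) for y in range(p)
           if P(x, y) == 0 and Q(x, y) == 0}
print(variety == {(a, b) for a in A for b in B})   # True
```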

Unfortunately, the naive converse to Bézout’s theorem is false. A counterexample can be given by considering a set {S = S_1 \cup S_2} of {2N} points for some large perfect square {N}, where {S_1} is a {\sqrt{N}} by {\sqrt{N}} grid of the form described above, and {S_2} consists of {N} points on a line {\ell} (e.g. a {1 \times N} or {N \times 1} grid). Each of the two component sets {S_1, S_2} can be written as the intersection between two curves whose degrees multiply up to {N}; in the case of {S_1}, we can take the two families of parallel lines (viewed as reducible curves of degree {\sqrt{N}}) as the curves, and in the case of {S_2}, one can take {\ell} as one curve, and the graph of a degree {N} polynomial on {\ell} vanishing on {S_2} for the second curve. But, if {N} is large enough, one cannot cover {S} by the intersection of a single pair {\gamma_1, \gamma_2} of curves with no common components whose degrees {D_1,D_2} multiply up to {D_1 D_2 = O(N)}. Indeed, if this were the case, then without loss of generality we may assume that {D_1 \leq D_2}, so that {D_1 = O(\sqrt{N})}. By Bézout’s theorem, {\gamma_1} either contains {\ell}, or intersects {\ell} in at most {O(D_1) = O(\sqrt{N})} points. Thus, in order for {\gamma_1} to capture all of {S}, it must contain {\ell}, which forces {\gamma_2} to not contain {\ell}. But {\gamma_2} has to intersect {\ell} in {N} points, so by Bézout’s theorem again we have {D_2 \geq N}, thus {D_1 = O(1)}. But then (by more applications of Bézout’s theorem) {\gamma_1} can only capture {O(\sqrt{N})} of the {N} points of {S_1}, a contradiction.

But the above counterexample suggests that even if an arbitrary set of {N} (or {2N}) points cannot be covered by the single intersection of a pair of curves with degree multiplying up to {O(N)}, one may be able to cover such a set by a small number of such intersections. The purpose of this post is to record the simple observation that this is, indeed, the case:

Theorem 2 (Partial converse to Bézout’s theorem) Let {k} be a field, and let {S} be a set of {N} points in {k^2} for some {N > 1}. Then one can find {m = O(\log N)} and pairs {P_i,Q_i \in k[x,y]} of coprime non-zero polynomials for {i=1,\ldots,m} such that

\displaystyle  S \subset \bigcup_{i=1}^m \{ (x,y) \in k^2: P_i(x,y) = Q_i(x,y) = 0 \} \ \ \ \ \ (1)

and

\displaystyle  \sum_{i=1}^m \hbox{deg}(P_i) \hbox{deg}(Q_i) = O( N ). \ \ \ \ \ (2)

Informally, every finite set in the plane is (a dense subset of) the union of logarithmically many nonlinear grids. The presence of the logarithm is necessary, as can be seen by modifying the {S_1 \cup S_2} example to be the union of logarithmically many Cartesian products of distinct dimensions, rather than just a pair of such products.

Unfortunately I do not know of any application of this converse, but I thought it was cute anyways. The proof is given below the fold.


Ben Green and I have just uploaded to the arXiv our new paper “On sets defining few ordinary lines“, submitted to Discrete and Computational Geometry. This paper asymptotically solves two old questions concerning finite configurations of points {P} in the plane {{\mathbb R}^2}. Given a set {P} of {n} points in the plane, define an ordinary line to be a line containing exactly two points of {P}. The classical Sylvester-Gallai theorem, first posed as a problem by Sylvester in 1893, asserts that as long as the points of {P} are not all collinear, {P} defines at least one ordinary line.

It is then natural to pose the question of what is the minimal number of ordinary lines that a set of {n} non-collinear points can generate. In 1940, Melchior gave an elegant proof of the Sylvester-Gallai theorem based on projective duality and Euler’s formula {V-E+F=2}, showing that at least three ordinary lines must be created; in 1951, Motzkin showed that there must be {\gg n^{1/2}} ordinary lines. Prior to this paper, the best lower bound was due to Csima and Sawyer, who in 1993 showed that there are at least {6n/13} ordinary lines. In the converse direction, if {n} is even, then by considering {n/2} equally spaced points on a circle, and {n/2} points on the line at infinity in equally spaced directions, one can find a configuration of {n} points that define just {n/2} ordinary lines.

As first observed by Böröczky, variants of this example also give few ordinary lines for odd {n}, though not quite as few as {n/2}; more precisely, when {n=1 \hbox{ mod } 4} one can find a configuration with {3(n-1)/4} ordinary lines, and when {n = 3 \hbox{ mod } 4} one can find a configuration with {3(n-3)/4} ordinary lines. Our first main result is that these configurations are best possible for sufficiently large {n}:

Theorem 1 (Dirac-Motzkin conjecture) If {n} is sufficiently large, then any set of {n} non-collinear points in the plane will define at least {\lfloor n/2\rfloor} ordinary lines. Furthermore, if {n} is odd, at least {3\lfloor n/4\rfloor} ordinary lines must be created.

The Dirac-Motzkin conjecture asserts that the first part of this theorem in fact holds for all {n}, not just for sufficiently large {n}; in principle, our theorem reduces that conjecture to a finite verification, although our bound for “sufficiently large” is far too poor to actually make this feasible (it is of double exponential type). (There are two known configurations for which one has {(n-1)/2} ordinary lines, one with {n=7} (discovered by Kelly and Moser), and one with {n=13} (discovered by Crowe and McKee).)

Our second main result concerns not the ordinary lines, but rather the {3}-rich lines of an {n}-point set – that is, the lines that meet exactly three points of that set. A simple double counting argument (counting pairs of distinct points in the set in two different ways) shows that there are at most

\displaystyle  \binom{n}{2} / \binom{3}{2} = \frac{1}{6} n^2 - \frac{1}{6} n

{3}-rich lines. On the other hand, on an elliptic curve, three distinct points P,Q,R on that curve are collinear precisely when they sum to zero with respect to the group law on that curve. Thus (as observed first by Sylvester in 1868), any finite subgroup of an elliptic curve (of which one can produce numerous examples, as elliptic curves in {{\mathbb R}^2} have the group structure of either {{\mathbb R}/{\mathbb Z}} or {{\mathbb R}/{\mathbb Z} \times ({\mathbb Z}/2{\mathbb Z})}) can provide examples of {n}-point sets with a large number of {3}-rich lines ({\lfloor \frac{1}{6} n^2 - \frac{1}{2} n + 1\rfloor}, to be precise). One can also shift such a finite subgroup by a third root of unity and obtain a similar example with only one fewer {3}-rich line. Sylvester then formally posed the question of determining whether this was best possible.
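
As a sanity check on the count {\lfloor \frac{1}{6} n^2 - \frac{1}{2} n + 1\rfloor}, one can model the group of a connected real elliptic curve by the cyclic group {{\mathbb Z}/n{\mathbb Z}}, so that (by the group law, as discussed above) three distinct points of the subgroup are collinear precisely when they sum to zero. The following brute-force snippet, an illustration only, confirms that the number of zero-sum triples matches the stated formula:

```python
# Model a finite subgroup of a connected real elliptic curve by Z/nZ: three
# distinct points are collinear iff they sum to zero in the group.  Check the
# resulting count of 3-rich lines against floor(n^2/6 - n/2 + 1).
from itertools import combinations
from math import floor

def three_rich(n):
    return sum(1 for t in combinations(range(n), 3) if sum(t) % n == 0)

for n in range(3, 40):
    assert three_rich(n) == floor(n * n / 6 - n / 2 + 1)
print("zero-sum triples in Z/nZ match floor(n^2/6 - n/2 + 1) for 3 <= n < 40")
```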

This problem was known as the Orchard planting problem, and was given a more poetic formulation by Jackson in 1821 (nearly fifty years prior to Sylvester!).

Our second main result answers this problem affirmatively in the large {n} case:

Theorem 2 (Orchard planting problem) If {n} is sufficiently large, then any set of {n} points in the plane will determine at most {\lfloor \frac{1}{6} n^2 - \frac{1}{2} n + 1\rfloor} {3}-rich lines.

Again, our threshold for “sufficiently large” {n} is extremely large (though slightly smaller than in the previous theorem), and so a full solution of the problem, while in principle reduced to a finitary computation, remains infeasible at present.

Our results also classify the extremisers (and near extremisers) for both of these problems; basically, the known examples mentioned previously are (up to projective transformation) the only extremisers when {n} is sufficiently large.

Our proof strategy follows the “inverse theorem method” from additive combinatorics. Namely, rather than try to prove direct theorems such as lower bounds on the number of ordinary lines, or upper bounds on the number of {3}-rich lines, we instead try to prove inverse theorems (also known as structure theorems), in which one attempts a structural classification of all configurations with very few ordinary lines (or very many {3}-rich lines). In principle, once one has a sufficiently explicit structural description of these sets, one simply has to compute the precise number of ordinary lines or {3}-rich lines in each configuration in the list provided by that structural description in order to obtain results such as the two theorems above.

Note from double counting that sets with many {3}-rich lines will necessarily have few ordinary lines. Indeed, if we let {N_k} denote the number of lines that meet exactly {k} points of an {n}-point configuration, so that {N_3} is the number of {3}-rich lines and {N_2} is the number of ordinary lines, then we have the double counting identity

\displaystyle  \sum_{k=2}^n \binom{k}{2} N_k = \binom{n}{2}

which among other things implies that any counterexample to the orchard problem can have at most {n+O(1)} ordinary lines. In particular, any structural theorem that lets us understand configurations with {O(n)} ordinary lines will, in principle, allow us to obtain results such as the above two theorems.
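
The identity is easy to verify numerically; here is a short illustrative script (the configuration and the line normalisation are arbitrary choices of mine) that groups the lines spanned by a point set and checks that {\sum_{k \geq 2} \binom{k}{2} N_k = \binom{n}{2}}:

```python
# Build a small n-point configuration (with some deliberate collinearities) and
# verify the double counting identity  sum_{k>=2} C(k,2) N_k = C(n,2),
# where N_k counts the lines meeting exactly k of the points.
from itertools import combinations
from math import comb, gcd
import random

random.seed(0)
pts = [(x, 3 * x + 1) for x in range(4)]                     # four collinear points
pts += [(random.randint(0, 50), random.randint(0, 50)) for _ in range(16)]
pts = sorted(set(pts))
n = len(pts)

def line_through(p, q):
    # Normalised integer triple (a, b, c) with a*x + b*y = c through p and q.
    a, b = q[1] - p[1], p[0] - q[0]
    c = a * p[0] + b * p[1]
    g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1
    a, b, c = a // g, b // g, c // g
    if (a, b, c) < (0, 0, 0):                 # fix an overall sign
        a, b, c = -a, -b, -c
    return (a, b, c)

lines = {}
for p, q in combinations(pts, 2):
    lines.setdefault(line_through(p, q), set()).update([p, q])

N = {}                                        # N[k] = number of k-rich lines
for s in lines.values():
    N[len(s)] = N.get(len(s), 0) + 1

assert sum(comb(k, 2) * Nk for k, Nk in N.items()) == comb(n, 2)
print("N_2 (ordinary):", N.get(2, 0), "  N_3 (3-rich):", N.get(3, 0))
```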

As it turns out, we do eventually obtain a structure theorem that is strong enough to achieve these aims, but it is difficult to prove this theorem directly. Instead we proceed more iteratively, beginning with a “cheap” structure theorem that is relatively easy to prove but provides only a partial amount of control on the configurations with {O(n)} ordinary lines. One then builds upon that theorem with additional arguments to obtain an “intermediate” structure theorem that gives better control, then a “weak” structure theorem that gives even more control, a “strong” structure theorem that gives yet more control, and then finally a “very strong” structure theorem that gives an almost complete description of the configurations (but only in the asymptotic regime when {n} is very large). It turns out that the “weak” theorem is enough for the orchard planting problem, and the “strong” version is enough for the Dirac-Motzkin conjecture. (So the “very strong” structure theorem ends up being unnecessary for the two applications given, but may be of interest for other applications.) Note that the stronger theorems do not completely supersede the weaker ones, because the quantitative bounds in the theorems get progressively worse as the control gets stronger.

Before we state these structure theorems, note that all the examples mentioned previously of sets with few ordinary lines involved cubic curves: either irreducible examples such as elliptic curves, or reducible examples such as the union of a circle (or more generally, a conic section) and a line. (We also allow singular cubic curves, such as the union of a conic section and a tangent line, or a singular irreducible curve such as {\{ (x,y): y^2 = x^3 \}}.) This turns out to be no coincidence; cubic curves happen to be very good at providing many {3}-rich lines (and thus, few ordinary lines), and conversely it turns out that they are essentially the only way to produce such lines. This is already illustrated by our cheap structure theorem:

Theorem 3 (Cheap structure theorem) Let {P} be a configuration of {n} points with at most {Kn} ordinary lines for some {K \geq 1}. Then {P} can be covered by at most {500K} cubic curves.

This theorem is already a non-trivial amount of control on sets with few ordinary lines, but because the result does not specify the nature of these curves, and how they interact with each other, it does not seem to be directly useful for applications. The intermediate structure theorem given below gives a more precise amount of control on these curves (essentially guaranteeing that all but at most one of the curve components are lines):

Theorem 4 (Intermediate structure theorem) Let {P} be a configuration of {n} points with at most {Kn} ordinary lines for some {K \geq 1}. Then one of the following is true:

  1. {P} lies on the union of an irreducible cubic curve and an additional {O(K^{O(1)})} points.
  2. {P} lies on the union of an irreducible conic section and an additional {O(K^{O(1)})} lines, with {n/2 + O(K^{O(1)})} of the points of {P} on each of the two components.
  3. {P} lies on the union of {O(K)} lines and an additional {O(K^{O(1)})} points.

By some additional arguments (including a very nice argument supplied to us by Luke Alexander Betts, an undergraduate at Cambridge, which replaces a much more complicated (and weaker) argument we originally had for this paper), one can cut down the number of lines in the above theorem to just one, giving a more useful structure theorem, at least when {n} is large:

Theorem 5 (Weak structure theorem) Let {P} be a configuration of {n} points with at most {Kn} ordinary lines for some {K \geq 1}. Assume that {n \geq \exp(\exp(CK^C))} for some sufficiently large absolute constant {C}. Then one of the following is true:

  1. {P} lies on the union of an irreducible cubic curve and an additional {O(K^{O(1)})} points.
  2. {P} lies on the union of an irreducible conic section, a line, and an additional {O(K^{O(1)})} points, with {n/2 + O(K^{O(1)})} of the points of {P} on each of the first two components.
  3. {P} lies on the union of a single line and an additional {O(K^{O(1)})} points.

As mentioned earlier, this theorem is already strong enough to resolve the orchard planting problem for large {n}. The presence of the double exponential here is extremely annoying, and is the main reason why the final thresholds for “sufficiently large” in our results are excessively large, but our methods seem to be unable to eliminate these exponentials from our bounds (though they can fortunately be confined to a lower bound for {n}, keeping the other bounds in the theorem polynomial in {K}).

For the Dirac-Motzkin conjecture one needs more precise control on the portion of {P} on the various low-degree curves indicated. This is given by the following result:

Theorem 6 (Strong structure theorem) Let {P} be a configuration of {n} points with at most {Kn} ordinary lines for some {K \geq 1}. Assume that {n \geq \exp(\exp(CK^C))} for some sufficiently large absolute constant {C}. Then, after adding or deleting {O(K^{O(1)})} points from {P} if necessary (modifying {n} appropriately), and then applying a projective transformation, one of the following is true:

  1. {P} is a finite subgroup of an elliptic curve (EDIT: as pointed out in comments, one also needs to allow for finite subgroups of acnodal singular cubic curves), possibly shifted by a third root of unity.
  2. {P} is the Böröczky example mentioned previously (the union of {n/2} equally spaced points on the circle, and {n/2} points on the line at infinity).
  3. {P} lies on a single line.

By applying a final “cleanup” we can replace the {O(K^{O(1)})} in the above theorem with the optimal {O(K)}, which is our “very strong” structure theorem. But the strong structure theorem is already sufficient to establish the Dirac-Motzkin conjecture for large {n}.

There are many tools that go into proving these theorems, some of which are extremely classical (with at least one going back to the ancient Greeks), and others being more recent. I will discuss some (not all) of these tools below the fold, and more specifically:

  1. Melchior’s argument, based on projective duality and Euler’s formula, initially used to prove the Sylvester-Gallai theorem;
  2. Chasles’ version of the Cayley-Bacharach theorem, which can convert dual triangular grids (produced by Melchior’s argument) into cubic curves that meet many points of the original configuration {P};
  3. Menelaus’s theorem, which is useful for producing ordinary lines when the point configuration lies on a few non-concurrent lines, particularly when combined with a sum-product estimate of Elekes, Nathanson, and Ruzsa;
  4. Betts’ argument, that produces ordinary lines when the point configuration lies on a few concurrent lines;
  5. A result of Poonen and Rubinstein that any point other than the origin and not lying on the unit circle can lie on at most seven chords connecting roots of unity; this, together with a variant for elliptic curves, gives the very strong structure theorem, and is also (a strong version of) what is needed to finish off the Dirac-Motzkin and orchard planting problems from the structure theorems given above.

There are also a number of more standard tools from arithmetic combinatorics (e.g. a version of the Balog-Szemerédi-Gowers lemma) which are needed to tie things together at various junctures, but I won’t say more about these methods here as they are (by now) relatively routine.

Read the rest of this entry »

Ben Green and I have just uploaded to the arXiv our paper “New bounds for Szemeredi’s theorem, Ia: Progressions of length 4 in finite field geometries revisited“, submitted to Proc. Lond. Math. Soc. This is both an erratum to, and a replacement for, our previous paper “New bounds for Szemeredi’s theorem. I. Progressions of length 4 in finite field geometries“. The main objective in both papers is to bound the quantity {r_4(F^n)} for a vector space {F^n} over a finite field {F} of characteristic greater than {4}, where {r_4(F^n)} is defined as the cardinality of the largest subset of {F^n} that does not contain an arithmetic progression of length {4}. In our earlier paper, we gave two arguments that bounded {r_4(F^n)} in the regime when the field {F} was fixed and {n} was large. The first “cheap” argument gave the bound

\displaystyle  r_4(F^n) \ll |F|^n \exp( - c \sqrt{\log n} )

and the more complicated “expensive” argument gave the improvement

\displaystyle  r_4(F^n) \ll |F|^n n^{-c} \ \ \ \ \ (1)

for some constant {c>0} depending only on {F}.

Unfortunately, while the cheap argument is correct, we discovered a subtle but serious gap in our expensive argument in the original paper. Roughly speaking, the strategy in that argument is to employ the density increment method: one begins with a large subset {A} of {F^n} that has no arithmetic progressions of length {4}, and seeks to locate a subspace on which {A} has a significantly increased density. Then, by using a “Koopman-von Neumann theorem”, ultimately based on an iteration of the inverse {U^3} theorem of Ben and myself (obtained independently by Samorodnitsky), one approximates {A} by a “quadratically structured” function {f}, which is (locally) a combination of a bounded number of quadratic phase functions, which one can prepare to be in a certain “locally equidistributed” or “locally high rank” form. (It is this reduction to the high rank case that distinguishes the “expensive” argument from the “cheap” one.) Because {A} has no progressions of length {4}, the count of progressions of length {4} weighted by {f} will also be small; by combining this with the theory of equidistribution of quadratic phase functions, one can then conclude that there will be a subspace on which {f} has increased density.

The error in the paper was to conclude from this that the original function {1_A} also had increased density on the same subspace; it turns out that the manner in which {f} approximates {1_A} is not strong enough to deduce this latter conclusion from the former. (One can strengthen the nature of approximation until one restores such a conclusion, but only at the price of deteriorating the quantitative bounds on {r_4(F^n)} one gets at the end of the day to be worse than the cheap argument.)

After trying unsuccessfully to repair this error, we eventually found an alternate argument, based on earlier papers of ourselves and of Bergelson-Host-Kra, that avoided the density increment method entirely and ended up giving a simpler proof of a stronger result than (1), and also gives the explicit value of {c = 2^{-22}} for the exponent {c} in (1). In fact, it gives the following stronger result:

Theorem 1 Let {A} be a subset of {F^n} of density at least {\alpha}, and let {\epsilon>0}. Then there is a subspace {W} of {F^n} of codimension {O( \epsilon^{-2^{20}})} such that the number of (possibly degenerate) progressions {a, a+r, a+2r, a+3r} in {A \cap W} is at least {(\alpha^4-\epsilon)|W|^2}.

The bound (1) is an easy consequence of this theorem after choosing {\epsilon := \alpha^4/2} and removing the degenerate progressions from the conclusion of the theorem.
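
For the reader’s convenience, here is a back-of-the-envelope version of that deduction (my own gloss, with all constants suppressed). Taking {\epsilon := \alpha^4/2} in Theorem 1, one obtains a subspace {W} of codimension {O(\alpha^{-2^{22}})} on which the number of (possibly degenerate) progressions is at least {\frac{\alpha^4}{2} |W|^2}. Since the characteristic exceeds {4}, the only degenerate progressions are those with common difference zero, and there are at most {|W|} of these. Thus, if {A} contains no genuine progression of length {4}, then {\frac{\alpha^4}{2} |W|^2 \leq |W|}, i.e. {|W| \leq 2\alpha^{-4}}; as {|W| = |F|^{n - O(\alpha^{-2^{22}})}}, this forces {n \ll \alpha^{-2^{22}}} for fixed {F}, which rearranges to give {\alpha \ll n^{-2^{-22}}}, recovering (1) with the stated exponent {c = 2^{-22}}.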

The main new idea is to work with a local Koopman-von Neumann theorem rather than a global one, trading a relatively weak global approximation to {1_A} for a significantly stronger local approximation to {1_A} on a subspace {W}. This is somewhat analogous to how sometimes in graph theory it is more efficient (from the point of view of quantitative estimates) to work with a local version of the Szemerédi regularity lemma which gives just a single regular pair of cells, rather than attempting to regularise almost all of the cells. This local approach is well adapted to the inverse {U^3} theorem we use (which also has this local aspect), and also makes the reduction to the high rank case much cleaner. At the end of the day, one ends up with a fairly large subspace {W} on which {A} is quite dense (of density {\alpha-O(\epsilon)}) and which can be well approximated by a “pure quadratic” object, namely a function of a small number of quadratic phases obeying a high rank condition. One can then exploit a special positivity property of the count of length four progressions weighted by pure quadratic objects, essentially due to Bergelson-Host-Kra, which then gives the required lower bound.

A few days ago, Endre Szemerédi was awarded the 2012 Abel prize “for his fundamental contributions to discrete mathematics and theoretical computer science, and in recognition of the profound and lasting impact of these contributions on additive number theory and ergodic theory.” The full citation for the prize may be found here, and the written notes for a talk given by Tim Gowers on Endre’s work at the announcement may be found here (and video of the talk can be found here).

As I was on the Abel prize committee this year, I won’t comment further on the prize, but will instead focus on what is arguably Endre’s most well known result, namely Szemerédi’s theorem on arithmetic progressions:

Theorem 1 (Szemerédi’s theorem) Let {A} be a set of integers of positive upper density, thus {\lim \sup_{N \rightarrow\infty} \frac{|A \cap [-N,N]|}{|[-N,N]|} > 0}, where {[-N,N] := \{-N, -N+1,\ldots,N\}}. Then {A} contains an arithmetic progression of length {k} for any {k>1}.

Szemerédi’s original proof of this theorem is a remarkably intricate piece of combinatorial reasoning. Most proofs of theorems in mathematics – even long and difficult ones – generally come with a reasonably compact “high-level” overview, in which the proof is (conceptually, at least) broken down into simpler pieces. There may well be technical difficulties in formulating and then proving each of the component pieces, and then in fitting the pieces together, but usually the “big picture” is reasonably clear. To give just one example, the overall strategy of Perelman’s proof of the Poincaré conjecture can be briefly summarised as follows: to show that a simply connected three-dimensional manifold is homeomorphic to a sphere, place a Riemannian metric on it and perform Ricci flow, excising any singularities that arise by surgery, until the entire manifold becomes extinct. By reversing the flow and analysing the surgeries performed, obtain enough control on the topology of the original manifold to establish that it is a topological sphere.

In contrast, the pieces of Szemerédi’s proof are highly interlocking, particularly with regard to all the epsilon-type parameters involved; it takes quite a bit of notational setup and foundational lemmas before the key steps of the proof can even be stated, let alone proved. Szemerédi’s original paper contains a logical diagram of the proof (reproduced in Gowers’ recent talk) which already gives a fair indication of this interlocking structure. (Many years ago I tried to present the proof, but I was unable to find much of a simplification, and my exposition is probably not that much clearer than the original text.) Even the use of nonstandard analysis, which is often helpful in cleaning up armies of epsilons, turns out to be a bit tricky to apply here. (In typical applications of nonstandard analysis, one can get by with a single nonstandard universe, constructed as an ultrapower of the standard universe; but to correctly model all the epsilons occurring in Szemerédi’s argument, one needs to repeatedly perform the ultrapower construction to obtain a (finite) sequence of increasingly nonstandard (and increasingly saturated) universes, each one containing unbounded quantities that are far larger than any quantity that appears in the preceding universe, as discussed at the end of this previous blog post. This sequence of universes does end up concealing all the epsilons, but it is not so clear that this is a net gain in clarity for the proof; I may return to the nonstandard presentation of Szemerédi’s argument at some future juncture.)

Instead of trying to describe the entire argument here, I thought I would instead show some key components of it, with only the slightest hint as to how to assemble the components together to form the whole proof. In particular, I would like to show how two particular ingredients in the proof – namely van der Waerden’s theorem and the Szemerédi regularity lemma – become useful. For reasons that will hopefully become clearer later, it is convenient not only to work with ordinary progressions {P_1 = \{ a, a+r_1, a+2r_1, \ldots, a+(k_1-1)r_1\}}, but also progressions of progressions {P_2 := \{ P_1, P_1 + r_2, P_1+2r_2, \ldots, P_1+(k_2-1)r_2\}}, progressions of progressions of progressions, and so forth. (In additive combinatorics, these objects are known as generalised arithmetic progressions of rank one, two, three, etc., and play a central role in the subject, although the way they are used in Szemerédi’s proof is somewhat different from the way that they are normally used in additive combinatorics.) Very roughly speaking, Szemerédi’s proof begins by building an enormous generalised arithmetic progression of high rank containing many elements of the set {A} (arranged in a “near-maximal-density” configuration), and then steadily prunes this progression to improve the combinatorial properties of the configuration, until one ends up with a single rank one progression of length {k} that consists entirely of elements of {A}.

To illustrate some of the basic ideas, let us first consider a situation in which we have located a progression {P, P + r, \ldots, P+(k-1)r} of progressions of length {k}, with each progression {P+ir}, {i=0,\ldots,k-1} being quite long, and containing a near-maximal amount of elements of {A}, thus

\displaystyle  |A \cap (P+ir)| \approx \delta |P|

where {\delta := \lim \sup_{|P| \rightarrow \infty} \frac{|A \cap P|}{|P|}} is the “maximal density” of {A} along arithmetic progressions. (There are a lot of subtleties in the argument about exactly how good the error terms are in various approximations, but we will ignore these issues for the sake of this discussion and just use the imprecise symbols such as {\approx} instead.) By hypothesis, {\delta} is positive. The objective is then to locate a progression {a, a+r', \ldots,a+(k-1)r'} in {A}, with each {a+ir'} in {P+ir} for {i=0,\ldots,k-1}. It may help to view the progression of progressions {P, P + r, \ldots, P+(k-1)r} as a tall thin rectangle {P \times \{0,\ldots,k-1\}}.

If we write {A_i := \{ a \in P: a+ir \in A \}} for {i=0,\ldots,k-1}, then the problem is equivalent to finding a (possibly degenerate) arithmetic progression {a_0,a_1,\ldots,a_{k-1}}, with each {a_i} in {A_i}.
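
This equivalence is concrete enough to test directly; the following toy script (all parameters are illustrative choices of mine, not from the argument) checks that the two counts agree exactly:

```python
# Toy check of the reformulation: progressions a_0,...,a_{k-1} in P with
# a_i in A_i correspond exactly (via x_i = a_i + i*r) to progressions
# x_0,...,x_{k-1} in A with x_i in P + i*r.  Parameters are illustrative.
import random

random.seed(1)
k, r, L = 4, 1000, 60
P = range(L)
A = set(random.sample(range(k * r + L), (k * r + L) // 3))
A_i = [{a for a in P if a + i * r in A} for i in range(k)]

def count_left():   # APs (a_0,...,a_{k-1}) in P with a_i in A_i
    return sum(1 for a in P for d in range(-L, L)
               if all(0 <= a + i * d < L and a + i * d in A_i[i] for i in range(k)))

def count_right():  # APs (x_0,...,x_{k-1}) in A with x_i in P + i*r
    total = 0
    for x in A:
        for d in range(r - L + 1, r + L):   # the difference must be r + O(L)
            if all(0 <= x + i * d - i * r < L and x + i * d in A for i in range(k)):
                total += 1
    return total

assert count_left() == count_right()
print("both counts agree:", count_left())
```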

By hypothesis, we know already that each set {A_i} has density about {\delta} in {P}:

\displaystyle  |A_i \cap P| \approx \delta |P|. \ \ \ \ \ (1)

Let us now make a “weakly mixing” assumption on the {A_i}, which roughly speaking asserts that

\displaystyle  |A_i \cap E| \approx \delta \sigma |P| \ \ \ \ \ (2)

for “most” subsets {E} of {P} of density {\approx \sigma} of a certain form to be specified shortly. This is a plausible type of assumption if one believes {A_i} to behave like a random set, and if the sets {E} are constructed “independently” of the {A_i} in some sense. Of course, we do not expect such an assumption to be valid all of the time, but we will postpone consideration of this point until later. Let us now see how this sort of weakly mixing hypothesis could help one count progressions {a_0,\ldots,a_{k-1}} of the desired form.

We will inductively consider the following (nonrigorously defined) sequence of claims {C(i,j)} for each {0 \leq i \leq j < k}:

  • {C(i,j)}: For most choices of {a_j \in P}, there are {\sim \delta^i |P|} arithmetic progressions {a_0,\ldots,a_{k-1}} in {P} with the specified choice of {a_j}, such that {a_l \in A_l} for all {l=0,\ldots,i-1}.

(Actually, to avoid boundary issues one should restrict {a_j} to lie in the middle third of {P}, rather than near the edges, but let us ignore this minor technical detail.) The quantity {\delta^i |P|} is natural here, given that there are {\sim |P|} arithmetic progressions {a_0,\ldots,a_{k-1}} in {P} that pass through {a_j} in the {j^{th}} position, and that each one ought to have a probability of {\delta^i} or so that the events {a_0 \in A_0, \ldots, a_{i-1} \in A_{i-1}} simultaneously hold. If one has the claim {C(k-1,k-1)}, then by selecting a typical {a_{k-1}} in {A_{k-1}}, we obtain a progression {a_0,\ldots,a_{k-1}} with {a_i \in A_i} for all {i=0,\ldots,k-1}, as required. (In fact, we obtain about {\delta^k |P|^2} such progressions by this method.)

We can heuristically justify the claims {C(i,j)} by induction on {i}. For {i=0}, the claims {C(0,j)} are clear just from direct counting of progressions (as long as we keep {a_j} away from the edges of {P}). Now suppose that {i>0}, and the claims {C(i-1,j)} have already been proven. For any {i \leq j < k} and for most {a_j \in P}, we have from the induction hypothesis that there are {\sim \delta^{i-1} |P|} progressions {a_0,\ldots,a_{k-1}} in {P} through {a_j} with {a_0 \in A_0,\ldots,a_{i-2}\in A_{i-2}}. Let {E = E(a_j)} be the set of all the values of {a_{i-1}} attained by these progressions; then {|E| \sim \delta^{i-1} |P|}, since a progression through {a_j} is determined by its value at any one other position. Invoking the weak mixing hypothesis, we (heuristically, at least) conclude that for most choices of {a_j}, we have

\displaystyle  |A_{i-1} \cap E| \sim \delta^i |P|

which then gives the desired claim {C(i,j)}.

The observant reader will note that we only needed the claim {C(i,j)} in the case {j=k-1} for the above argument, but for technical reasons, the full proof requires one to work with more general values of {j} (also the claim {C(i,j)} needs to be replaced by a more complicated version of itself, but let’s ignore this for sake of discussion).
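
As a quick illustration of the heuristic (and only of the heuristic, not of the actual proof), one can check numerically that for genuinely random independent sets {A_i} of density {\delta}, the proportion of progressions with {a_i \in A_i} for all {i} is indeed about {\delta^k}; a minimal simulation, with arbitrary parameters:

```python
# Monte Carlo sanity check: if each A_i is an independent random subset of
# P = {0,...,L-1} of density delta, then the number of k-term APs
# (a, a+d, ..., a+(k-1)d) with i-th term in A_i should be about delta^k
# times the total number of such APs.  Purely illustrative parameters.
import random

random.seed(2)
L, k, delta = 2000, 4, 0.4
P = range(L)
A = [{a for a in P if random.random() < delta} for _ in range(k)]

total = hits = 0
for a in P:
    for d in range(1, (L - 1 - a) // (k - 1) + 1):
        total += 1
        if all(a + i * d in A[i] for i in range(k)):
            hits += 1

# The two numbers below should agree up to random fluctuation.
print(f"observed fraction {hits/total:.4f}  vs  delta^k = {delta**k:.4f}")
```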

We now return to the question of how to justify the weak mixing hypothesis (2). For a single block {A_i} of {A}, one can easily concoct a scenario in which this hypothesis fails, by choosing {E} to overlap with {A_i} too strongly, or to be too disjoint from {A_i}. However, one can do better if one can select {A_i} from a long progression of blocks. The starting point is the following simple double counting observation that gives the right upper bound:

Proposition 2 (Single upper bound) Let {P, P+r, \ldots, P+(M-1)r} be a progression of progressions {P+ir} for some large {M}. Suppose that for each {i=0,\ldots,M-1}, the set {A_i := \{ a \in P: a+ir \in A \}} has density {\approx \delta} in {P} (i.e. (1) holds). Let {E} be a subset of {P} of density {\approx \sigma}. Then (if {M} is large enough) one can find an {i = 0,\ldots,M-1} such that

\displaystyle  |A_i \cap E| \lessapprox \delta \sigma |P|.

Proof: The key is the double counting identity

\displaystyle  \sum_{i=0}^{M-1} |A_i \cap E| = \sum_{a \in E} |A \cap \{ a, a+r, \ldots, a+(M-1) r\}|.

Because {A} has maximal density {\delta} and {M} is large, we have

\displaystyle  |A \cap \{ a, a+r, \ldots, a+(M-1) r\}| \lessapprox \delta M

for each {a}, and thus

\displaystyle  \sum_{i=0}^{M-1} |A_i \cap E| \lessapprox \delta M |E|.

The claim then follows from the pigeonhole principle. \Box
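
The double counting identity in this proof is exact, and can be confirmed mechanically; a small illustrative check (with arbitrary parameters of my choosing):

```python
# Direct check of the double counting identity in the proof of Proposition 2:
#   sum_i |A_i ∩ E| = sum_{a in E} |A ∩ {a, a+r, ..., a+(M-1)r}|.
# All parameters and sets are arbitrary illustrative choices.
import random

random.seed(3)
L, M, r = 500, 40, 10000
P = range(L)
A = set(random.sample(range(M * r + L), 5000))
E = set(random.sample(P, 200))

A_i = [{a for a in P if a + i * r in A} for i in range(M)]

lhs = sum(len(Ai & E) for Ai in A_i)
rhs = sum(sum(1 for i in range(M) if a + i * r in A) for a in E)
assert lhs == rhs
print("both sides equal:", lhs)
```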

Now suppose we want to obtain weak mixing not just for a single set {E}, but for a small number {E_1,\ldots,E_m} of such sets, i.e. we wish to find an {i} for which

\displaystyle  |A_i \cap E_j| \lessapprox \delta \sigma_j |P|. \ \ \ \ \ (3)

for all {j=1,\ldots,m}, where {\sigma_j} is the density of {E_j} in {P}. The above proposition gives, for each {j}, a choice of {i} for which (3) holds, but it could be a different {i} for each {j}, and so it is not immediately obvious how to use Proposition 2 to find an {i} for which (3) holds simultaneously for all {j}. However, it turns out that the van der Waerden theorem is the perfect tool for this amplification:

Proposition 3 (Multiple upper bound) Let {P, P+r, \ldots, P+(M-1)r} be a progression of progressions {P+ir} for some large {M}. Suppose that for each {i=0,\ldots,M-1}, the set {A_i := \{ a \in P: a+ir \in A \}} has density {\approx \delta} in {P} (i.e. (1) holds). For each {1 \leq j \leq m}, let {E_j} be a subset of {P} of density {\approx \sigma_j}. Then (if {M} is large enough depending on {m}) one can find an {i = 0,\ldots,M-1} such that

\displaystyle  |A_i \cap E_j| \lessapprox \delta \sigma_j |P|

simultaneously for all {1 \leq j \leq m}.

Proof: Suppose that the claim failed (for some suitably large {M}). Then, for each {i = 0,\ldots,M-1}, there exists {j \in \{1,\ldots,m\}} such that

\displaystyle  |A_i \cap E_j| \gg \delta \sigma_j |P|.

This can be viewed as a colouring of the interval {\{1,\ldots,M\}} by {m} colours. If we take {M} large compared to {m}, van der Waerden’s theorem allows us to find a long subprogression of {\{1,\ldots,M\}} which is monochromatic, so that {j} is constant on this progression. The blocks indexed by this subprogression form a new (long) progression of progressions, and so this furnishes a counterexample to Proposition 2 applied to the single set {E_j}. \Box
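
To make the van der Waerden step tangible, here is a small illustrative search (parameters mine) for a monochromatic arithmetic progression in a colouring of {\{0,\ldots,M-1\}}; van der Waerden’s theorem guarantees success once {M} is large enough in terms of the number of colours and the desired length:

```python
# Brute-force illustration of the van der Waerden step: colour {0,...,M-1}
# with m colours and search for a monochromatic AP of length T.  The theorem
# guarantees one exists once M is large enough in terms of m and T; the
# parameters here are illustrative.
import random

random.seed(4)
M, m, T = 400, 2, 5
colour = [random.randrange(m) for _ in range(M)]

def find_mono_ap(colour, T):
    M = len(colour)
    for a in range(M):
        for d in range(1, (M - 1 - a) // (T - 1) + 1):
            if len({colour[a + i * d] for i in range(T)}) == 1:
                return [a + i * d for i in range(T)]
    return None

print("monochromatic AP of length", T, ":", find_mono_ap(colour, T))
```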

One nice thing about this proposition is that the upper bounds can be automatically upgraded to an asymptotic:

Proposition 4 (Multiple mixing) Let {P, P+r, \ldots, P+(M-1)r} be a progression of progressions {P+ir} for some large {M}. Suppose that for each {i=0,\ldots,M-1}, the set {A_i := \{ a \in P: a+ir \in A \}} has density {\approx \delta} in {P} (i.e. (1) holds). For each {1 \leq j \leq m}, let {E_j} be a subset of {P} of density {\approx \sigma_j}. Then (if {M} is large enough depending on {m}) one can find an {i = 0,\ldots,M-1} such that

\displaystyle  |A_i \cap E_j| \approx \delta \sigma_j |P|

simultaneously for all {1 \leq j \leq m}.

Proof: By applying the previous proposition to the collection of sets {E_1,\ldots,E_m} and their complements {P\backslash E_1,\ldots,P \backslash E_m} (thus replacing {m} with {2m}), one can find an {i} for which

\displaystyle  |A_i \cap E_j| \lessapprox \delta \sigma_j |P|

and

\displaystyle  |A_i \cap (P \backslash E_j)| \lessapprox \delta (1-\sigma_j) |P|

Since these two quantities sum to {|A_i| \approx \delta |P|}, the two upper bounds force the matching lower bounds, which gives the claim. \Box

However, this improvement of Proposition 2 turns out to not be strong enough for applications. The reason is that the number {m} of sets {E_1,\ldots,E_m} for which mixing is established is too small compared with the length {M} of the progression one has to use in order to obtain that mixing. However, thanks to the magic of the Szemerédi regularity lemma, one can amplify the above proposition even further, to allow for a huge number of {E_i} to be mixed (at the cost of excluding a small fraction of exceptions):

Proposition 5 (Really multiple mixing) Let {P, P+r, \ldots, P+(M-1)r} be a progression of progressions {P+ir} for some large {M}. Suppose that for each {i=0,\ldots,M-1}, the set {A_i := \{ a \in P: a+ir \in A \}} has density {\approx \delta} in {P} (i.e. (1) holds). For each {v} in some (large) finite set {V}, let {E_v} be a subset of {P} of density {\approx \sigma_v}. Then (if {M} is large enough, but not dependent on the size of {V}) one can find an {i = 0,\ldots,M-1} such that

\displaystyle  |A_i \cap E_v| \approx \delta \sigma_v |P|

simultaneously for almost all {v \in V}.

Proof: We build a bipartite graph {G = (P, V, E)} connecting the progression {P} to the finite set {V} by placing an edge {(a,v)} between an element {a \in P} and an element {v \in V} whenever {a \in E_v}. The number {|E_v| \approx \sigma_v |P|} can then be interpreted as the degree of {v} in this graph, while the number {|A_i \cap E_v|} is the number of neighbours of {v} that land in {A_i}.

We now apply the regularity lemma to this graph {G}. Roughly speaking, what this lemma does is to partition {P} and {V} into almost equally sized cells {P = P_1 \cup \ldots \cup P_m} and {V = V_1 \cup \ldots \cup V_m} such that for most pairs {P_j, V_k} of cells, the graph {G} resembles a random bipartite graph of some density {d_{jk}} between these two cells. The key point is that the number {m} of cells here is bounded uniformly in the size of {P} and {V}. As a consequence of this lemma, one can show that for most vertices {v} in a typical cell {V_k}, the number {|E_v|} is approximately equal to

\displaystyle  |E_v| \approx \sum_{j=1}^m d_{jk} |P_j|

and the number {|A_i \cap E_v|} is approximately equal to

\displaystyle  |A_i \cap E_v| \approx \sum_{j=1}^m d_{jk} |A_i \cap P_j|.

The point here is that the {|V|} different statistics {|A_i \cap E_v|} are now controlled by a mere {m} statistics {|A_i \cap P_j|} (this is not unlike the use of principal component analysis in statistics, incidentally, but that is another story). Now, we invoke Proposition 4 to find an {i} for which

\displaystyle  |A_i \cap P_j| \approx \delta |P_j|

simultaneously for all {j=1,\ldots,m}, and the claim follows. \Box
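
The reduction of the {|V|} statistics to {m} statistics can be illustrated with a toy planted-cell model (this simulates the conclusion of the regularity lemma rather than invoking the lemma itself; all sizes and densities below are arbitrary choices of mine):

```python
# Toy planted-cell model of the reduction in Proposition 5: between each cell
# P_j and V_k the bipartite graph is random with density d[j][k], and the
# statistic |A ∩ E_v| is then predicted by the m statistics |A ∩ P_j|.
import random

random.seed(5)
m, cell_size = 3, 400
d = [[random.choice([0.2, 0.5, 0.8]) for _ in range(m)] for _ in range(m)]

P = [(j, t) for j in range(m) for t in range(cell_size)]
P_cell = [{p for p in P if p[0] == j} for j in range(m)]
A = set(random.sample(P, len(P) // 4))     # a fixed set "A_i" of density 1/4

for k in range(m):                         # one sample vertex v from each V_k
    E_v = {p for p in P if random.random() < d[p[0]][k]}
    actual = len(A & E_v)
    predicted = sum(d[j][k] * len(A & P_cell[j]) for j in range(m))
    print(f"cell V_{k}: |A ∩ E_v| = {actual},  sum_j d_jk |A ∩ P_j| = {predicted:.1f}")
```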

This proposition now suggests a way forward to establish the type of mixing properties (2) needed for the preceding attempt at proving Szemerédi’s theorem to actually work. Whereas in that attempt, we were working with a single progression of progressions {P, P+r, \ldots, P+(k-1)r} containing a near-maximal density of elements of {A}, we will now have to work with a family {(P_\lambda, P_\lambda+r_\lambda,\ldots,P_\lambda+(k-1)r_\lambda)_{\lambda \in \Lambda}} of such progressions of progressions, where {\Lambda} ranges over some suitably large parameter set. Furthermore, in order to invoke Proposition 5, this family must be “well-arranged” in some arithmetic sense; in particular, for a given {i}, it should be possible to find many reasonably large subfamilies of this family for which the {i^{th}} terms {P_\lambda + i r_\lambda} of the progressions of progressions in this subfamily are themselves in arithmetic progression. (Also, for technical reasons having to do with the fact that the sets {E_v} in Proposition 5 are not allowed to depend on {i}, one also needs the progressions {P_\lambda + i' r_\lambda} for any given {0 \leq i' < i} to be “similar” in the sense that they intersect {A} in the same fashion (thus the sets {A \cap (P_\lambda + i' r_\lambda)} as {\lambda} varies need to be translates of each other).) If one has this sort of family, then Proposition 5 allows us to “spend” some of the degrees of freedom of the parameter set {\Lambda} in order to gain good mixing properties for at least one of the sets {P_\lambda +i r_\lambda} in the progression of progressions.

Of course, we still have to figure out how to get such large families of well-arranged progressions of progressions. Szemerédi’s solution was to begin by working with generalised progressions of a much larger rank {d} than the rank {2} progressions considered here; roughly speaking, to prove Szemerédi’s theorem for length {k} progressions, one has to consider generalised progressions of rank as high as {2^k+1}. It is possible by a reasonably straightforward (though somewhat delicate) “density increment argument” to locate a huge generalised progression of this rank which is “saturated” by {A} in a certain rather technical sense (related to the concept of “near maximal density” used previously). Then, by another reasonably elementary argument, it is possible to locate inside a suitable large generalised progression of some rank {d}, a family of large generalised progressions of rank {d-1} which inherit many of the good properties of the original generalised progression, and which have the arithmetic structure needed for Proposition 5 to be applicable, at least for one value of {i}. (But getting this sort of property for all values of {i} simultaneously is tricky, and requires many careful iterations of the above scheme; there is also the problem that by obtaining good behaviour for one index {i}, one may lose good behaviour at previous indices, leading to a sort of “Tower of Hanoi” situation which may help explain the exponential factor in the rank {2^k+1} that is ultimately needed. It is an extremely delicate argument; all the parameters and definitions have to be set very precisely in order for the argument to work at all, and it is really quite remarkable that Endre was able to see it through to the end.)

This is an addendum to last quarter’s course notes on Hilbert’s fifth problem, which I am in the process of reviewing in order to transcribe them into a book (as was done similarly for several other sets of lecture notes on this blog). When reviewing the zeroth set of notes in particular, I found that I had made a claim (Proposition 11 from those notes) which asserted, roughly speaking, that any sufficiently large nilprogression was an approximate group, and promised to prove it later in the course when we had developed the ability to calculate efficiently in nilpotent groups. As it turned out, I managed to finish the course without the need to develop these calculations, and so the proposition remained unproven. In order to rectify this, I will use this post to lay out some of the basic algebra of nilpotent groups, and use it to prove the above proposition, which turns out to be a bit tricky. (In my paper with Breuillard and Green, we avoid the need for this proposition by restricting attention to a special type of nilprogression, which we call a nilprogression in {C}-normal form, for which the computations are simpler.)

There are several ways to think about nilpotent groups; for instance one can use the model example of the Heisenberg group

\displaystyle  H(R) :=\begin{pmatrix} 1 & R & R \\ 0 & 1 & R\\ 0 & 0 & 1 \end{pmatrix}

over an arbitrary ring {R} (which need not be commutative), or more generally any matrix group consisting of unipotent upper triangular matrices, and view a general nilpotent group as being an abstract generalisation of such concrete groups. (In the case of nilpotent Lie groups, at least, this is quite an accurate intuition, thanks to Engel’s theorem.) Or, one can adopt a Lie-theoretic viewpoint and try to think of nilpotent groups as somehow arising from nilpotent Lie algebras; this intuition is rigorous when working with nilpotent Lie groups (at least when the characteristic is large, in order to avoid issues coming from the denominators in the Baker-Campbell-Hausdorff formula), but also retains some conceptual value in the non-Lie setting. In particular, nilpotent groups (particularly finitely generated ones) can be viewed in some sense as “nilpotent Lie groups over {{\bf Z}}“, even though Lie theory does not quite work perfectly when the underlying scalars merely form an integral domain instead of a field.

Another point of view, which arises naturally both in analysis and in algebraic geometry, is to view nilpotent groups as modeling “infinitesimal” perturbations of the identity, where the infinitesimals have a certain finite order. For instance, given a (not necessarily commutative) ring {R} without identity (representing all the “small” elements of some larger ring or algebra), we can form the powers {R^j} for {j=1,2,\ldots}, defined as the ring generated by {j}-fold products {r_1 \ldots r_j} of elements {r_1,\ldots,r_j} in {R}; this is an ideal of {R} which represents the elements which are “{j^{th}} order” in some sense. If one then formally adjoins an identity {1} onto the ring {R}, then for any {s \geq 1}, the multiplicative group {G := 1+R \hbox{ mod } R^{s+1}} is a nilpotent group of step at most {s}. For instance, if {R} is the ring of strictly upper triangular {s \times s} matrices (over some base ring), then {R^{s+1}} vanishes and {G} becomes the group of unipotent upper triangular matrices over the same ring, thus recovering the previous matrix-based example. In analysis applications, {R} might be a ring of operators which are somehow of “order” {O(\epsilon)} or {O(\hbar)} for some small parameter {\epsilon} or {\hbar}, and one wishes to perform Taylor expansions up to order {O(\epsilon^s)} or {O(\hbar^s)}, thus discarding (i.e. quotienting out) all errors in {R^{s+1}}.
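
This construction is easy to experiment with; here is a small numpy sketch of mine for the group {1+R} with {R} the strictly upper triangular {4 \times 4} integer matrices (for which {R^4} already vanishes, so no quotient is needed), exhibiting the nilpotency one degree at a time:

```python
# Hands-on model of "G = 1 + R": R = strictly upper triangular 4x4 integer
# matrices, so G is the group of unipotent upper triangular matrices.  Since
# R^4 = 0, inverses come from the finite Neumann series
# (1+N)^{-1} = 1 - N + N^2 - N^3, commutators land in 1 + R^2, iterated
# commutators in 1 + R^3, and triple commutators are trivial: step 3.
import numpy as np

def rand_unipotent(rng):
    N = np.triu(rng.integers(-3, 4, (4, 4)), k=1)
    return np.eye(4, dtype=np.int64) + N

def inv(g):
    N = g - np.eye(4, dtype=np.int64)
    return np.eye(4, dtype=np.int64) - N + N @ N - N @ N @ N

def comm(g, h):  # group commutator [g,h] = g^{-1} h^{-1} g h
    return inv(g) @ inv(h) @ g @ h

rng = np.random.default_rng(6)
g, h, w = (rand_unipotent(rng) for _ in range(3))
c1 = comm(g, h)            # in 1 + R^2: first superdiagonal vanishes
c2 = comm(c1, w)           # in 1 + R^3: first two superdiagonals vanish
c3 = comm(c2, g)           # trivial, since R^4 = 0

assert np.all(np.diagonal(c1, 1) == 0)
assert np.all(np.diagonal(c2, 1) == 0) and np.all(np.diagonal(c2, 2) == 0)
assert np.array_equal(c3, np.eye(4, dtype=np.int64))
print("step-3 nilpotency verified on random unipotent matrices")
```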

From a dynamical or group-theoretic perspective, one can also view nilpotent groups as towers of central extensions of a trivial group. Finitely generated nilpotent groups can also be profitably viewed as a special type of polycyclic group; this is the perspective taken in this previous blog post. Last, but not least, one can view nilpotent groups from a combinatorial group theory perspective, as being words from some set of generators of various “degrees” subject to some commutation relations, with commutators of two low-degree generators being expressed in terms of higher degree objects, and all commutators of a sufficiently high degree vanishing. In particular, generators of a given degree can be moved freely around a word, as long as one is willing to generate commutator errors of higher degree.

With this last perspective, in particular, one can start computing in nilpotent groups by adopting the philosophy that the lowest order terms should be attended to first, without much initial concern for the higher order errors generated in the process of organising the lower order terms. Only after the lower order terms are in place should attention then turn to higher order terms, working successively up the hierarchy of degrees until all terms are dealt with. This turns out to be a relatively straightforward philosophy to implement in many cases (particularly if one is not interested in explicit expressions and constants, being content instead with qualitative expansions of controlled complexity), but the arguments are necessarily recursive in nature and as such can become a bit messy, and require a fair amount of notation to express precisely. So, unfortunately, the arguments here will be somewhat cumbersome and notation-heavy, even if the underlying methods of proof are relatively simple.

Read the rest of this entry »
