
In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence {G_n = (V_n, E_n)} of finite graphs, one can extract a subsequence {G_{n_j} = (V_{n_j}, E_{n_j})} which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function {p\colon [0,1] \times [0,1] \rightarrow [0,1]}. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon {p}. For instance, the edge density

\displaystyle  \frac{1}{|V_{n_j}|^2} |E_{n_j}|

converges to the integral

\displaystyle  \int_0^1 \int_0^1 p(x,y)\ dx dy,

the triangle density

\displaystyle  \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert

converges to the integral

\displaystyle  \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,

the four-cycle density

\displaystyle  \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert

converges to the integral

\displaystyle  \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,

and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
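As a quick numerical aside (not part of the theory, and with an arbitrarily chosen graphon and parameters), one can watch these densities converge in practice: the sketch below samples a large graph from a graphon {p} and compares its edge and triangle densities with the corresponding integrals. Edges are counted here as ordered pairs, to match the ordered-tuple normalisation used for the triangle density above.

```python
# Illustrative sketch only: sample a "p-random" graph on n vertices from the graphon
# p(x,y) = xy (an arbitrary choice), then compare subgraph densities with the integrals.
import numpy as np

rng = np.random.default_rng(0)

def p(x, y):
    return x * y                                   # symmetric, takes values in [0,1]

n = 1500
u = rng.random(n)                                  # latent vertex labels in [0,1]
P = p(u[:, None], u[None, :])                      # edge probabilities p(u_i, u_j)
A = np.triu(rng.random((n, n)) < P, 1).astype(float)
A = A + A.T                                        # symmetric 0/1 adjacency matrix, no loops

edge_density = A.sum() / n**2                      # ordered pairs (v1, v2)
triangle_density = np.trace(A @ A @ A) / n**3      # ordered triples (v1, v2, v3)

# Monte Carlo values of the graphon integrals (exactly 1/4 and 1/27 for p(x,y) = xy)
m = 10**6
x, y, z = rng.random(m), rng.random(m), rng.random(m)
print(edge_density, np.mean(p(x, y)))
print(triangle_density, np.mean(p(x, y) * p(y, z) * p(z, x)))
```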

One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence {G_n = (V_n,E_n)} of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter {\alpha \in\beta {\bf N} \backslash {\bf N}}) to obtain a nonstandard graph {G_\alpha = (V_\alpha,E_\alpha)}, where {V_\alpha = \prod_{n\rightarrow \alpha} V_n} is the ultraproduct of the {V_n}, and similarly for the {E_\alpha}. The set {E_\alpha} can then be viewed as a symmetric subset of {V_\alpha \times V_\alpha} which is measurable with respect to the Loeb {\sigma}-algebra {{\mathcal L}_{V_\alpha \times V_\alpha}} of the product {V_\alpha \times V_\alpha} (see this previous blog post for the construction of Loeb measure). A crucial point is that this {\sigma}-algebra is larger than the product {{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}} of the Loeb {\sigma}-algebra of the individual vertex set {V_\alpha}. This leads to a decomposition

\displaystyle  1_{E_\alpha} = p + e

where the “graphon” {p} is the orthogonal projection of {1_{E_\alpha}} onto {L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}, and the “regular error” {e} is orthogonal to all product sets {A \times B} for {A, B \in {\mathcal L}_{V_\alpha}}. The graphon {p\colon V_\alpha \times V_\alpha \rightarrow [0,1]} then captures the statistics of the nonstandard graph {G_\alpha}, in exact analogy with the more traditional graph limits: for instance, the edge density

\displaystyle  \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|

(or equivalently, the limit of the {\frac{1}{|V_n|^2} |E_n|} along the ultrafilter {\alpha}) is equal to the integral

\displaystyle  \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)

where {d\mu_V} denotes Loeb measure on a nonstandard finite set {V}; the triangle density

\displaystyle  \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert

(or equivalently, the limit along {\alpha} of the triangle densities of {E_n}) is equal to the integral

\displaystyle  \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),

and so forth. Note that with this construction, the graphon {p} is living on the Cartesian square of an abstract probability space {V_\alpha}, which is likely to be inseparable; but it is possible to cut down the Loeb {\sigma}-algebra on {V_\alpha} to a minimal countable {\sigma}-algebra for which {p} remains measurable (up to null sets), and then one can identify {V_\alpha} with {[0,1]}, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)

Additive combinatorics, which studies things like the additive structure of finite subsets {A} of an abelian group {G = (G,+)}, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.

It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group {A_\alpha} in a nonstandard group {G_\alpha = \prod_{n \rightarrow \alpha} G_n}, defined as the ultraproduct of finite {K}-approximate groups {A_n \subset G_n} for some standard {K}. (A {K}-approximate group {A_n} is a symmetric set containing the origin such that {A_n+A_n} can be covered by {K} or fewer translates of {A_n}.) We then let {O(A_\alpha)} be the external subgroup of {G_\alpha} generated by {A_\alpha}; equivalently, {O(A_\alpha)} is the union of {A_\alpha^m} over all standard {m}. This space has a Loeb measure {\mu_{O(A_\alpha)}}, defined by setting

\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}

whenever {E_\alpha} is an internal subset of {A_\alpha^m} for any standard {m}, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.

The Loeb measure {\mu_{O(A_\alpha)}} is a translation invariant measure on {O(A_{\alpha})}, normalised so that {A_\alpha} has Loeb measure one. As such, one should think of {O(A_\alpha)} as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that {O(A_\alpha)} is not actually a locally compact group with Haar measure, for two reasons:

  • There is not an obvious topology on {O(A_\alpha)} that makes it simultaneously locally compact, Hausdorff, and {\sigma}-compact. (One can get one or two out of three without difficulty, though.)
  • The addition operation {+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)} is not measurable from the product Loeb algebra {{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}} to {{\mathcal L}_{O(A_\alpha)}}. Instead, it is measurable from the coarser Loeb algebra {{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}} to {{\mathcal L}_{O(A_\alpha)}} (compare with the analogous situation for nonstandard graphs).

Nevertheless, the analogy is a useful guide for the arguments that follow.

Let {L(O(A_\alpha))} denote the space of bounded Loeb measurable functions {f\colon O(A_\alpha) \rightarrow {\bf C}} (modulo almost everywhere equivalence) that are supported on {A_\alpha^m} for some standard {m}; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation {\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}, defined by setting

\displaystyle  \hbox{st} f \star \hbox{st} g(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)

whenever {f\colon A_\alpha^m \rightarrow {}^* {\bf C}}, {g\colon A_\alpha^l \rightarrow {}^* {\bf C}} are bounded nonstandard functions (extended by zero to all of {O(A_\alpha)}), and then extending to arbitrary elements of {L(O(A_\alpha))} by density. Equivalently, {f \star g} is the pushforward of the {{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}-measurable function {(x,y) \mapsto f(x) g(y)} under the map {(x,y) \mapsto x+y}.

The basic structural theorem is then as follows.

Theorem 1 (Kronecker factor) Let {A_\alpha} be an ultra approximate group. Then there exists a (standard) locally compact abelian group {G} of the form

\displaystyle  G = {\bf R}^d \times {\bf Z}^m \times T

for some standard {d,m} and some compact abelian group {T}, equipped with a Haar measure {\mu_G} and a measurable homomorphism {\pi\colon O(A_\alpha) \rightarrow G} (using the Loeb {\sigma}-algebra on {O(A_\alpha)} and the Borel {\sigma}-algebra on {G}), with the following properties:

  • (i) {\pi} is surjective, and {\mu_G} is the pushforward of Loeb measure {\mu_{O(A_\alpha)}} by {\pi}.
  • (ii) There exist sets {\{0\} \subset U_0 \subset K_0 \subset G} with {U_0} open and {K_0} compact, such that

    \displaystyle  \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)

  • (iii) Whenever {K \subset U \subset G} with {K} compact and {U} open, there exists a nonstandard finite set {B} such that

    \displaystyle  \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)

  • (iv) If {f, g \in L}, then we have the convolution formula

    \displaystyle  f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)

    where {\pi_* f,\pi_* g} are the pushforwards of {f,g} to {L^2(G, \mu_G)}, the convolution {\star} on the right-hand side is convolution using {\mu_G}, and {\pi^*} is the pullback map from {L^2(G,\mu_G)} to {L^2(O(A_\alpha), \mu_{O(A_\alpha)})}. In particular, if {\pi_* f = 0}, then {f*g=0} for all {g \in L}.

One can view the locally compact abelian group {G} as a “model” or “Kronecker factor” for the ultra approximate group {A_\alpha} (in close analogy with the Kronecker factor from ergodic theory). In the case that {A_\alpha} is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components {{\bf R}^d \times {\bf Z}^m} of the Kronecker group {G} are trivial, and this theorem was implicitly established by Szegedy. The compact group {T} is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions {f}, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor {G}.

Given any sequence of uniformly bounded functions {f_n\colon A_n^m \rightarrow {\bf C}} for some fixed {m}, we can view the function {f \in L} defined by

\displaystyle  f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)

as an “additive limit” of the {f_n}, in much the same way that graphons {p\colon V_\alpha \times V_\alpha \rightarrow [0,1]} are limits of the indicator functions {1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}. The additive limits capture some of the statistics of the {f_n}, for instance the normalised means

\displaystyle  \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)

converge (along the ultrafilter {\alpha}) to the mean

\displaystyle  \int_G f(x)\ d\mu_G(x),

and for three sequences {f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}} of functions, the normalised correlation

\displaystyle  \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)

converges along {\alpha} to the correlation

\displaystyle  \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),

the normalised {U^2} Gowers norm

\displaystyle  ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}

converges along {\alpha} to the {U^2} Gowers norm

\displaystyle  ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(y+z-x)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}

and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised {\ell^2} norm

\displaystyle  (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}

does not necessarily converge to the {L^2} norm

\displaystyle  (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},

but can converge instead to a larger quantity, due to the presence of the orthogonal projection {\pi_*} in the definition (4) of {f}.

An important special case of an additive limit occurs when the functions {f_n\colon A_n^m \rightarrow {\bf C}} involved are indicator functions {f_n = 1_{E_n}} of some subsets {E_n} of {A_n^m}. The additive limit {f \in L} does not necessarily remain an indicator function, but instead takes values in {[0,1]} (much as a graphon {p} takes values in {[0,1]} even though the original indicators {1_{E_n}} take values in {\{0,1\}}). The convolution {f \star f\colon G \rightarrow [0,1]} is then the ultralimit of the normalised convolutions {\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}; in particular, the measure of the support of {f \star f} provides a lower bound on the limiting normalised cardinality {\frac{1}{|A_n|} |E_n + E_n|} of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset {2E_n = E_n + E_n} could contain a large number of elements which have very few ({o(|A_n|)}) representations as the sum of two elements of {E_n}, and in the limit these portions of the sumset fall outside of the support of {f \star f}. (One can think of the support of {f \star f} as describing the “essential” sumset of {2E_n = E_n + E_n}, discarding those elements that have only very few representations.) Similarly for higher convolutions of {f}. Thus one can use additive limits to partially control the growth {k E_n} of iterated sumsets of subsets {E_n} of approximate groups {A_n}, in the regime where {k} stays bounded and {n} goes to infinity.

Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.

Example 1 (Bohr sets) We take {A_n} to be the intervals {A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}, where {N_n} is a sequence going to infinity; these are {2}-approximate groups for all {n}. Let {\theta} be an irrational real number, let {I} be an interval in {{\bf R}/{\bf Z}}, and for each natural number {n} let {B_n} be the Bohr set

\displaystyle  B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.

In this case, the (reduced) Kronecker factor {G} can be taken to be the infinite cylinder {{\bf R} \times {\bf R}/{\bf Z}} with the usual Lebesgue measure {\mu_G}. The additive limits of {1_{A_n}} and {1_{B_n}} end up being {1_A} and {1_B}, where {A} is the finite cylinder

\displaystyle  A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}

and {B} is the rectangle

\displaystyle  B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.

Geometrically, one should think of {A_n} and {B_n} as being wrapped around the cylinder {{\bf R} \times {\bf R}/{\bf Z}} via the homomorphism {x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}, and then one sees that {B_n} is converging in some normalised weak sense to {B}, and similarly for {A_n} and {A}. In particular, the additive limit predicts the growth rate of the iterated sumsets {kB_n} to be quadratic in {k} until {k|I|} becomes comparable to {1}, at which point the growth transitions to linear growth, in the regime where {k} is bounded and {n} is large.

If {\theta = \frac{p}{q}} were rational instead of irrational, then one would need to replace {{\bf R}/{\bf Z}} by the finite subgroup {\frac{1}{q}{\bf Z}/{\bf Z}} here.
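As a purely numerical aside (with arbitrarily chosen parameters, and not needed for anything that follows), the quadratic-to-linear transition in the growth of {kB_n} predicted above is easy to observe directly; here is a small sketch in Python.

```python
# Illustrative sketch: growth of the iterated sumsets kB_n for the Bohr set of
# Example 1.  With |I| = 0.2 the normalised size |kB_n|/|A_n| should grow roughly
# quadratically in k up to about k ~ 1/|I| = 5, and roughly linearly afterwards.
import numpy as np

N = 2000
theta = np.sqrt(2.0)                     # an irrational frequency
I_len = 0.2                              # length of the interval I (centred at 0 here)

A = np.arange(-N, N + 1)
frac = (theta * A) % 1.0
B = set(A[(frac < I_len / 2) | (frac > 1 - I_len / 2)].tolist())

S = {0}
for k in range(1, 9):
    S = {s + b for s in S for b in B}    # S is now the k-fold sumset kB_n
    print(k, round(len(S) / len(A), 3))  # normalised size |kB_n| / |A_n|
```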

Example 2 (Structured subsets of progressions) We take {A_n} to be the rank two progression

\displaystyle  A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},

where {N_n} is a sequence going to infinity; these are {4}-approximate groups for all {n}. Let {B_n} be the subset

\displaystyle  B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.

Then the (reduced) Kronecker factor can be taken to be {G = {\bf R}^2} with Lebesgue measure {\mu_G}, and the additive limits of the {1_{A_n}} and {1_{B_n}} are then {1_A} and {1_B}, where {A} is the square

\displaystyle  A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}

and {B} is the disk

\displaystyle  B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.

Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism {a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})} for {a,b = O( N_n )} to embed the original sets {A_n, B_n} into the plane {{\bf R}^2}. In particular, one now expects the growth rate of the iterated sumsets {k A_n} and {k B_n} to be quadratic in {k}, in the regime where {k} is bounded and {n} is large.

Example 3 (Dissociated sets) Let {d} be a fixed natural number, and take

\displaystyle  A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}

where {v_1,\dots,v_d} are randomly chosen elements of a large cyclic group {{\bf Z}/p_n{\bf Z}}, where {p_n} is a sequence of primes going to infinity. These are {O(d)}-approximate groups. The (reduced) Kronecker factor {G} can (almost surely) then be taken to be {{\bf Z}^d} with counting measure, and the additive limit of {1_{A_n}} is {1_A}, where {A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}} and {e_1,\dots,e_d} is the standard basis of {{\bf Z}^d}. In particular, the growth rates of {k A_n} should grow approximately like {k^d} for {k} bounded and {n} large.

Example 4 (Random subsets of groups) Let {A_n = G_n} be a sequence of finite additive groups whose order is going to infinity. Let {B_n} be a random subset of {G_n} of some fixed density {0 \leq \lambda \leq 1}. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group {\{0\}}, and the additive limit of the {1_{B_n}} is the constant function {\lambda}. The convolutions {\frac{1}{|G_n|} 1_{B_n} * 1_{B_n}} then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of {\lambda^2}; this reflects the fact that {(1-o(1))|G_n|} of the elements of {G_n} can be represented as the sum of two elements of {B_n} in {(\lambda^2 + o(1)) |G_n|} ways. In particular, {B_n+B_n} occupies a proportion {1-o(1)} of {G_n}.
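The representation-count claim here is also easy to check numerically; the following quick sketch (with an arbitrary choice of group size and density) computes the convolution with a fast Fourier transform.

```python
# Illustrative sketch for Example 4: in Z/N, a random set B of density lambda has
# B+B covering almost all of the group, with most elements having about lambda^2 N
# representations as a sum of two elements of B.  N and lambda are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
N, lam = 100000, 0.1
f = (rng.random(N) < lam).astype(float)          # indicator function of B

# r[x] = #{(b1, b2) in B x B : b1 + b2 = x mod N}, via cyclic convolution
r = np.fft.ifft(np.fft.fft(f) ** 2).real

print(np.mean(r) / N, lam ** 2)                  # average normalised count, vs lambda^2
print(np.mean(r > 0.5))                          # proportion of Z/N covered by B+B
```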

Example 5 (Trigonometric series) Take {A_n = G_n = {\bf Z}/p_n {\bf Z}} for a sequence {p_n} of primes going to infinity, and for each {n} let {\xi_{n,1},\xi_{n,2},\dots} be an infinite sequence of frequencies chosen uniformly and independently from {{\bf Z}/p_n{\bf Z}}. Let {f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}} denote the random trigonometric series

\displaystyle  f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.

Then (almost surely) we can take the reduced Kronecker factor {G} to be the infinite torus {({\bf R}/{\bf Z})^{\bf N}} (with the Haar probability measure {\mu_G}), and the additive limit of the {f_n} then becomes the function {f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf C}} defined by the formula

\displaystyle  f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.

In fact, the pullback {\pi^* f} is the ultralimit of the {f_n}. As such, for any standard exponent {1 \leq q < \infty}, the normalised {l^q} norm

\displaystyle  (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}

can be seen to converge to the limit

\displaystyle  (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.
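Here is a small numerical sketch of this last convergence (purely illustrative; the parameters are arbitrary, and the series is truncated at a finite number of terms since the tail {2^{-j}} is negligible).

```python
# Illustrative sketch for Example 5: compare the normalised l^q norm of the random
# trigonometric series f_n on Z/p with a Monte Carlo estimate of the L^q norm of the
# limit f on the infinite torus.  The series is truncated at J terms.
import numpy as np

rng = np.random.default_rng(2)
p, J, q = 100003, 30, 4

x = np.arange(p)
xi = rng.integers(0, p, size=J)          # the frequencies xi_{n,1},...,xi_{n,J}
f_n = sum(2.0 ** -k * np.exp(2j * np.pi * xi[k - 1] * x / p) for k in range(1, J + 1))
print(np.mean(np.abs(f_n) ** q) ** (1 / q))

# the limit: x_1, x_2, ... independent and uniform on R/Z
m = 100000
xs = rng.random((m, J))
f = (2.0 ** -np.arange(1, J + 1) * np.exp(2j * np.pi * xs)).sum(axis=1)
print(np.mean(np.abs(f) ** q) ** (1 / q))   # should be close to the value above
```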

The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.

It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.

Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.

Read the rest of this entry »

In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).

Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the {P \neq NP} problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The {P \neq NP} problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as {k}-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and {k}-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.

My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or Dreamworks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector {x \in {\bf R}^n} in an affine space {\{ x \in {\bf R}^n: Ax = b \}} (where {b \in {\bf R}^m} and {A \in {\bf R}^{m\times n}} are given, and {m} is typically somewhat smaller than {n}) which minimises the {\ell^1}-norm {\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|} of the vector {x}. This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once {n} and {m} get moderately large (e.g. of the order of {10^6}), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix {A}. Much of the problem comes from the fact that the functional {x \mapsto \|x\|_1} is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint {Ax=b} with a convex penalty term {\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}, thus one is now trying to minimise the unconstrained functional

\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.

This functional is more convex, and is over a computationally simpler domain {{\bf R}^n} than the affine space {\{x \in {\bf R}^n: Ax=b\}}, so is easier (though still not entirely trivial) to optimise over. However, the minimiser {x^\epsilon} to this problem need not match the minimiser {x^0} to the original problem, particularly if the (sub-)gradient {\partial \|x\|_1} of the original functional {\|x\|_1} is large at {x^0}, and if {\epsilon} is not set to be small. (And setting {\epsilon} too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term

\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2

then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser {x^0} to the original problem, provided that {p} is (or more precisely, lies in) the (sub-)gradient {\partial \|x\|_1} of {\|x\|_1} at {x^0} – even if {\epsilon} is not small. But, one does not know in advance what the correct value of {p} should be, because one does not know what the minimiser {x^0} is.

With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for {x} and {p} simultaneously; given the current iterates {x^k} and {p^k}, one first updates {x^k} to the minimiser {x^{k+1} \in {\bf R}^n} of the convex functional

\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)

and then updates {p^{k+1}} to the natural value of the subgradient {\partial \|x\|_1} at {x^{k+1}}, namely

\displaystyle p^{k+1} := p^k - \nabla \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)

(note upon taking the first variation of (1) that {p^{k+1}} is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser {x^0} even for non-small values of {\epsilon}, and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
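To make the iteration concrete, here is a minimal sketch on a small synthetic basis pursuit problem (this is purely illustrative, and certainly not the actual code of Osher and his coauthors; the inner minimisation of (1) is done crudely here by a proximal gradient loop, and all parameters are arbitrary).

```python
# Minimal illustrative sketch of Bregman iteration for basis pursuit on synthetic data.
# Each outer step minimises ||x||_1 - <p^k, x> + (1/2 eps)||Ax - b||_2^2 (here by a
# crude proximal-gradient inner loop) and then sets p^{k+1} = p^k - (1/eps) A^T (Ax^{k+1} - b).
import numpy as np

rng = np.random.default_rng(3)
n, m, s, eps = 200, 80, 10, 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

def soft(v, t):                                   # prox of t*||.||_1 (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def minimise_inner(p, x, steps=500):
    t = eps / np.linalg.norm(A, 2) ** 2           # step size below 1/(Lipschitz constant)
    for _ in range(steps):
        grad = -p + A.T @ (A @ x - b) / eps       # gradient of the smooth part
        x = soft(x - t * grad, t)
    return x

x, p = np.zeros(n), np.zeros(n)
for k in range(15):
    x = minimise_inner(p, x)
    p = p - A.T @ (A @ x - b) / eps               # Bregman / subgradient update
    print(k, np.linalg.norm(A @ x - b), np.linalg.norm(x - x_true))
```

With these arbitrary parameters, both the residual {\|Ax-b\|_{\ell^2}} and the distance to the planted sparse vector should be observed to decrease markedly over the outer iterations, even though {\epsilon} is of unit size.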

Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth {n}-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space {{\bf R}^m}. The Nash embedding theorem guarantees that one can do this if {m} is large enough depending on {n}, but what is the minimal value of {m} one can get away with? Many years ago, my colleague Robert Greene showed that {m = \frac{n(n+1)}{2} + n} sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevskaya argument shows that {m = \frac{n(n+1)}{2}} is possible, and no better. So this suggests that {m = \frac{n(n+1)}{2}} is the threshold for the smooth problem also, but this remains open in general. The case {n=1} is trivial, and the {n=2} case is not too difficult (if the curvature is non-zero) as the codimension {m-n} is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the {n=3} case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.

This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture) Let {E} be a subset of {{\bf R}^3} that contains a unit line segment in every direction. Then {\hbox{dim}(E) = 3}.

This conjecture is not precisely formulated here, because we have not specified exactly what type of set {E} is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again) Let {{\cal L}} be a family of lines in {{\bf R}^3} that meet {B(0,1)} and contain a line in each direction. Let {E} be the union of the restriction {\ell \cap B(0,2)} to {B(0,2)} of every line {\ell} in {{\cal L}}. Then {\hbox{dim}(E) = 3}.

As the space of all directions in {{\bf R}^3} is two-dimensional, we thus see that {{\cal L}} is an (at least) two-dimensional subset of the four-dimensional space of lines in {{\bf R}^3} (actually, it lies in a compact subset of this space, since we have constrained the lines to meet {B(0,1)}). One could then ask if this is the only property of {{\cal L}} that is needed to establish the Kakeya conjecture, that is to say if any subset of {B(0,2)} which contains a two-dimensional family of lines (restricted to {B(0,2)}, and meeting {B(0,1)}) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in {B(0,2)} (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture) Let {{\cal L}} be a two-dimensional family of lines in {{\bf R}^3} that meet {B(0,1)}, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in {{\cal L}}. Let {E} be the union of the restriction {\ell \cap B(0,2)} of every line {\ell} in {{\cal L}}. Then {\hbox{dim}(E) = 3}.

Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that lie in different directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound {\hbox{dim}(E) \geq 5/2} (for various notions of dimension, e.g. Hausdorff dimension) for sets {E} in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the {5/2} barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals {{\bf R}}. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over {{\bf C}}) Let {{\cal L}} be a four (real) dimensional family of complex lines in {{\bf C}^3} that meet the unit ball {B(0,1)} in {{\bf C}^3}, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in {{\cal L}}. Let {E} be the union of the restriction {\ell \cap B(0,2)} of every complex line {\ell} in {{\cal L}}. Then {E} has real dimension {6}.

The argument of Wolff can be adapted to the complex case to show that all sets {E} occurring in Conjecture 4 have real dimension at least {5}. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample) Let {H \subset {\bf C}^3} be the Heisenberg group

\displaystyle  H = \{ (z_1,z_2,z_3) \in {\bf C}^3: \hbox{Im}(z_1) = \hbox{Im}(z_2 \overline{z_3}) \}

and let {{\cal L}} be the family of complex lines

\displaystyle  \ell_{s,t,\alpha} := \{ (\overline{\alpha} z + t, z, sz + \alpha): z \in {\bf C} \}

with {s,t \in {\bf R}} and {\alpha \in {\bf C}}. Then {H} is a five (real) dimensional subset of {{\bf C}^3} that contains every line in the four (real) dimensional set {{\cal L}}; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in {{\cal L}}. In particular, the strong Kakeya conjecture over the complex numbers is false.

This proposition is proven by a routine computation, which we omit here. The group structure on {H} is given by the group law

\displaystyle  (z_1,z_2,z_3) \cdot (w_1,w_2,w_3) = (z_1 + w_1 + z_2 \overline{w_3} - z_3 \overline{w_2}, z_2 +w_2, z_3+w_3),

giving {H} the structure of a {2}-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over {{\bf R}^2}. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines {{\cal L}} in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

\displaystyle  \ell_{0,t,0} = \{ (t, z, 0): z \in {\bf C}\}

with {t \in {\bf R}}; multiplying this family of lines on the right by a group element in {H} gives other families of parallel lines, which in fact sweep out all of {{\cal L}}.
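As an aside, the containment of the lines {\ell_{s,t,\alpha}} in {H} (one ingredient of the routine computation alluded to above) can be verified symbolically; here is a quick sanity check using sympy, which of course does not address the dimension counts or the plane condition in Proposition 5.

```python
# Sanity check (containment only): every point (conj(alpha) z + t, z, s z + alpha)
# of the line l_{s,t,alpha}, with s,t real, satisfies Im(z_1) = Im(z_2 conj(z_3)).
import sympy as sp

s, t, zr, zi, ar, ai = sp.symbols('s t zr zi ar ai', real=True)
z = zr + sp.I * zi
alpha = ar + sp.I * ai

z1 = sp.conjugate(alpha) * z + t
z2 = z
z3 = s * z + alpha

print(sp.simplify(sp.im(sp.expand(z1)) - sp.im(sp.expand(z2 * sp.conjugate(z3)))))  # prints 0
```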

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield {{\bf R}} of {{\bf C}}, which induces an involution {z \mapsto \overline{z}} which can then be used to define the Heisenberg group {H} through the formula

\displaystyle  H = \{ (z_1,z_2,z_3) \in {\bf C}^3: z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}.

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields {{\bf F}_{q^2}} that contain a “half-dimensional” subfield {{\bf F}_q}; we leave the details to the interested reader. Morally speaking, if {{\bf R}} in turn contained a subfield of dimension {1/2} (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdos and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.

We thus see that to go beyond the {5/2} dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

  • (a) Exploit the distinct directions of the lines in {{\mathcal L}} in a way that goes beyond the Wolff axiom; or
  • (b) Exploit the fact that {{\bf R}} does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of {5/2} for Kakeya sets very slightly to {5/2+10^{-10}} (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of {{\bf F}_p}, and then pursued route (b) to obtain a corresponding improvement {5/2+\epsilon} to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

  1. Assume that the (strong) Kakeya conjecture fails, so that there are sets {E} of the form in Conjecture 3 of dimension {3-\sigma} for some {\sigma>0}. Assume that {E} is “optimal”, in the sense that {\sigma} is as large as possible.
  2. Use the optimality of {E} (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets {E}, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining {E} to “behave like” a putative Heisenberg group counterexample.
  3. By playing all these structural properties off of each other, show that {E} can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set {E} for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.

Read the rest of this entry »

Roth’s theorem on arithmetic progressions asserts that every subset of the integers {{\bf Z}} of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:

Theorem 1 (Roth’s theorem) Let {G = (G,+)} be a compact abelian group, with Haar probability measure {\mu}, which is {2}-divisible (i.e. the map {x \mapsto 2x} is surjective) and let {A} be a measurable subset of {G} with {\mu(A) \geq \alpha} for some {0 < \alpha < 1}. Then we have

\displaystyle  \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) \gg_\alpha 1,

where {X \gg_\alpha Y} denotes the bound {X \geq c_\alpha Y} for some {c_\alpha > 0} depending only on {\alpha}.

This theorem is usually formulated in the case that {G} is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group {G = {\bf Z}/N{\bf Z}} of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of {2}-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant {c_\alpha} on {\alpha}, but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the {2}-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the {2r} shift in that case.
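As a concrete (and entirely optional) illustration of the quantity appearing in Theorem 1, the following quick sketch counts three-term progressions in a random subset of a cyclic group using the fast Fourier transform; the parameters are arbitrary.

```python
# Illustrative sketch: the normalised count of progressions x, x+r, x+2r inside a
# subset A of Z/N, computed with the FFT.  For a random set of density alpha this
# count is about alpha^3; Theorem 1 asserts a lower bound c_alpha for every such set.
import numpy as np

rng = np.random.default_rng(4)
N, alpha = 10001, 0.3                        # N odd, so Z/N is 2-divisible as in Theorem 1
f = (rng.random(N) < alpha).astype(float)    # indicator function of A

conv = np.fft.ifft(np.fft.fft(f) ** 2).real  # conv[y] = #{(a,c) in A^2 : a + c = y mod N}
count = np.sum(f * conv[(2 * np.arange(N)) % N])
print(count / N ** 2, alpha ** 3)
```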

We can deduce Theorem 1 from the following more general Khintchine-type statement. Let {\hat G} denote the Pontryagin dual of a compact abelian group {G}, that is to say the set of all continuous homomorphisms {\xi: x \mapsto \xi \cdot x} from {G} to the (additive) unit circle {{\bf R}/{\bf Z}}. Thus {\hat G} is a discrete abelian group, and functions {f \in L^2(G)} have a Fourier transform {\hat f \in \ell^2(\hat G)} defined by

\displaystyle  \hat f(\xi) := \int_G f(x) e^{-2\pi i \xi \cdot x}\ d\mu(x).

If {G} is {2}-divisible, then {\hat G} is {2}-torsion-free in the sense that the map {\xi \mapsto 2 \xi} is injective. For any finite set {S \subset \hat G} and any radius {\rho>0}, define the Bohr set

\displaystyle  B(S,\rho) := \{ x \in G: \sup_{\xi \in S} \| \xi \cdot x \|_{{\bf R}/{\bf Z}} < \rho \}

where {\|\theta\|_{{\bf R}/{\bf Z}}} denotes the distance of {\theta} to the nearest integer. We refer to the cardinality {|S|} of {S} as the rank of the Bohr set. We record a simple volume bound on Bohr sets:

Lemma 2 (Volume packing bound) Let {G} be a compact abelian group with Haar probability measure {\mu}. For any Bohr set {B(S,\rho)}, we have

\displaystyle  \mu( B( S, \rho ) ) \gg_{|S|, \rho} 1.

Proof: We can cover the torus {({\bf R}/{\bf Z})^S} by {O_{|S|,\rho}(1)} translates {\theta+Q} of the cube {Q := \{ (\theta_\xi)_{\xi \in S} \in ({\bf R}/{\bf Z})^S: \sup_{\xi \in S} \|\theta_\xi\|_{{\bf R}/{\bf Z}} < \rho/2 \}}. Then the sets {\{ x \in G: (\xi \cdot x)_{\xi \in S} \in \theta + Q \}} form a cover of {G}. But all of these sets lie in a translate of {B(S,\rho)}, and the claim then follows from the translation invariance of {\mu}. \Box

Given any Bohr set {B(S,\rho)}, we define a normalised “Lipschitz” cutoff function {\nu_{B(S,\rho)}: G \rightarrow {\bf R}} by the formula

\displaystyle  \nu_{B(S,\rho)}(x) = c_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})_+ \ \ \ \ \ (1)

where {c_{B(S,\rho)}} is the constant such that

\displaystyle  \int_G \nu_{B(S,\rho)}\ d\mu = 1,

thus

\displaystyle c_{B(S,\rho)} = \left( \int_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})\ d\mu(x) \right)^{-1}.

The function {\nu_{B(S,\rho)}} should be viewed as an {L^1}-normalised “tent function” cutoff to {B(S,\rho)}. Note from Lemma 2 that

\displaystyle  1 \ll_{|S|,\rho} c_{B(S,\rho)} \ll_{|S|,\rho} 1. \ \ \ \ \ (2)

We then have the following sharper version of Theorem 1:

Theorem 3 (Roth-Khintchine theorem) Let {G = (G,+)} be a {2}-divisible compact abelian group, with Haar probability measure {\mu}, and let {\epsilon>0}. Then for any measurable function {f: G \rightarrow [0,1]}, there exists a Bohr set {B(S,\rho)} with {|S| \ll_\epsilon 1} and {\rho \gg_\epsilon 1} such that

\displaystyle  \int_G \int_G f(x) f(x+r) f(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \ \ \ \ \ (3)

\displaystyle  \geq (\int_G f\ d\mu)^3 - O(\epsilon)

where {*} denotes the convolution operation

\displaystyle  f*g(x) := \int_G f(y) g(x-y)\ d\mu(y).

A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with {f := 1_A} and {\epsilon} equal to a small multiple of {\alpha^3} to conclude that there is a Bohr set {B(S,\rho)} with {|S| \ll_\alpha 1} and {\rho \gg_\alpha 1} such that

\displaystyle  \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \gg \alpha^3.

But from (2) we have the pointwise bound {\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \ll_\alpha 1}, and Theorem 1 follows.

Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set {B(S,\rho)} to capture all the “large Fourier coefficients” of {f}, but such that a certain “dilate” of {B(S,\rho)} does not capture much more Fourier energy of {f} than {B(S,\rho)} itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of {f} into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the {\nu_{B(S,\rho)}} considered above to achieve a similar effect.

Read the rest of this entry »

A core foundation of the subject now known as arithmetic combinatorics (and particularly the subfield of additive combinatorics) is the collection of elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets

\displaystyle  A+B := \{ a+b: a \in A, b \in B \}

and difference sets

\displaystyle  A-B := \{ a-b: a \in A, b \in B \},

as well as iterated sumsets such as {3A=A+A+A}, {2A-2A=A+A-A-A}, and so forth. Here, {A, B} are finite non-empty subsets of some additive group {G = (G,+)} (classically one took {G={\bf Z}} or {G={\bf R}}, but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:

Lemma 1 (Ruzsa covering lemma) Let {A, B} be finite non-empty subsets of {G}. Then {A} may be covered by at most {\frac{|A+B|}{|B|}} translates of {B-B}.

Proof: Consider a maximal set of disjoint translates {a+B} of {B} by elements {a \in A}. These translates have cardinality {|B|}, are disjoint, and lie in {A+B}, so there are at most {\frac{|A+B|}{|B|}} of them. By maximality, for any {a' \in A}, {a'+B} must intersect at least one of the selected {a+B}, thus {a' \in a+B-B}, and the claim follows. \Box

Lemma 2 (Ruzsa triangle inequality) Let {A,B,C} be finite non-empty subsets of {G}. Then {|A-C| \leq \frac{|A-B| |B-C|}{|B|}}.

Proof: Consider the addition map {+: (x,y) \mapsto x+y} from {(A-B) \times (B-C)} to {G}. Every element {a-c} of {A - C} has a preimage {\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}} under this map of cardinality at least {|B|}, thanks to the obvious identity {a-c = (a-b) + (b-c)} for each {b \in B}. Since {(A-B) \times (B-C)} has cardinality {|A-B| |B-C|}, the claim follows. \Box
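Both lemmas are easy to test numerically on random sets of integers; the quick sketch below (illustrative only) extracts the disjoint translates used in the proof of Lemma 1 greedily, and checks the bound of Lemma 2 directly.

```python
# Illustrative check of the Ruzsa covering lemma and triangle inequality on random
# finite sets of integers.  The greedy loop mirrors the proof of Lemma 1: each chosen
# a gives a translate a+B disjoint from the earlier ones, so the number of translates
# of B-B needed to cover A is at most |A+B|/|B|.
import random

random.seed(0)
A = set(random.sample(range(1000), 40))
B = set(random.sample(range(1000), 40))
C = set(random.sample(range(1000), 40))

def sumset(X, Y):
    return {x + y for x in X for y in Y}

def diffset(X, Y):
    return {x - y for x in X for y in Y}

BB = diffset(B, B)
uncovered, centres = set(A), []
while uncovered:
    a = uncovered.pop()
    centres.append(a)
    uncovered -= {a + d for d in BB}

print(len(centres), "<=", len(sumset(A, B)) / len(B))                               # Lemma 1
print(len(diffset(A, C)), "<=", len(diffset(A, B)) * len(diffset(B, C)) / len(B))   # Lemma 2
```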

Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets {A} of small doubling, in the sense that {|A+A| \leq K|A|} for some bounded {K}. (There are deeper theorems, most notably Freiman’s theorem, which give more control than what elementary Ruzsa calculus does, however the known bounds in the latter theorem are worse than polynomial in {K} (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in {K}.)

However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. {{\bf R}^n} or a torus {({\bf R}/{\bf Z})^n}) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets {A} remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group {O_n({\bf R})} of orthogonal {n \times n} matrices, and let {A} be the matrices obtained by starting with all of the orthogonal matrices in {O_n({\bf R})} and rounding each coefficient of each matrix in this set to the nearest multiple of {\epsilon}, for some small {\epsilon>0}. This forms a finite set (whose cardinality grows as {\epsilon\rightarrow 0} like a certain negative power of {\epsilon}). In the limit {\epsilon \rightarrow 0}, the set {A} is not a set of small doubling in the discrete sense. However, {A \cdot A} is still close to {A} in a metric sense, being contained in the {O_n(\epsilon)}-neighbourhood of {A}. Another key example comes from graphs {\Gamma := \{ (x, f(x)): x \in G \}} of maps {f: G \rightarrow H} from one additive group {G = (G,+)} to another {H = (H,+)}. If {f} is “approximately additive” in the sense that for all {x,y \in G}, {f(x+y)} is close to {f(x)+f(y)} in some metric, then {\Gamma} might not have small doubling in the discrete sense (because {f(x+y)-f(x)-f(y)} could take a large number of values), but could be considered a set of small doubling in a discretised sense.

One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric {d} on the underlying group {G = (G,+)} (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:

Definition 3 Let {(X,d)} be a metric space, let {E} be a subset of {X}, and let {r>0} be a radius.

  • The packing number {N^{pack}_r(E)} is the largest number of points {x_1,\dots,x_n} one can pack inside {E} such that the balls {B(x_1,r),\dots,B(x_n,r)} are disjoint.
  • The internal covering number {N^{int}_r(E)} is the smallest number of points {x_1,\dots,x_n \in E} such that the balls {B(x_1,r),\dots,B(x_n,r)} cover {E}.
  • The external covering number {N^{ext}_r(E)} is the smallest number of points {x_1,\dots,x_n \in X} such that the balls {B(x_1,r),\dots,B(x_n,r)} cover {E}.
  • The metric entropy {N^{ent}_r(E)} is the largest number of points {x_1,\dots,x_n} one can find in {E} that are {r}-separated, thus {d(x_i,x_j) \geq r} for all {i \neq j}.

It is an easy exercise to verify the inequalities

\displaystyle  N^{ent}_{2r}(E) \leq N^{pack}_r(E) \leq N^{ext}_r(E) \leq N^{int}_r(E) \leq N^{ent}_r(E)

for any {r>0}, and that {N^*_r(E)} is non-increasing in {r} and non-decreasing in {E} for the three choices {* = pack,ext,ent} (but monotonicity in {E} can fail for {*=int}!). It turns out that the metric entropy {N^{ent}_r(E)} is slightly more convenient than the other notions of metric entropy, so we will abbreviate {N_r(E) = N^{ent}_r(E)}. The cardinality {|E|} can be viewed as the limit of the entropies {N^*_r(E)} as {r \rightarrow 0}.
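To get a feel for these quantities, here is a quick one-dimensional sketch (illustrative only); in one dimension the greedy left-to-right sweeps below compute the separation and covering numbers exactly, so the chain of inequalities can be observed directly. (The packing number is omitted here, being sandwiched between the first two quantities displayed.)

```python
# Illustrative 1D computation of the quantities in Definition 3 for a random finite
# set of reals: greedy sweeps give N^{ent}_{2r} <= N^{ext}_r <= N^{int}_r <= N^{ent}_r.
import random

random.seed(1)
E = sorted(random.uniform(0, 10) for _ in range(60))
r = 0.4

def separated(points, gap):          # size of a maximum subset with pairwise gaps >= gap
    count, last = 0, None
    for x in points:
        if last is None or x - last >= gap:
            count, last = count + 1, x
    return count

def covering(points, radius, internal):   # fewest closed balls of this radius covering E
    count, i = 0, 0
    while i < len(points):
        if internal:                 # centre must lie in E: take the rightmost admissible point
            centre = max(q for q in points if q <= points[i] + radius)
        else:                        # centre can be anywhere
            centre = points[i] + radius
        count += 1
        while i < len(points) and points[i] <= centre + radius:
            i += 1
    return count

print(separated(E, 2 * r), "<=", covering(E, r, False), "<=",
      covering(E, r, True), "<=", separated(E, r))
```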

If we have the bounded doubling property that {B(0,2r)} is covered by {O(1)} translates of {B(0,r)} for each {r>0}, and one has a Haar measure {m} on {G} which assigns a positive finite mass to each ball, then any of the above entropies {N^*_r(E)} is comparable to {m( E + B(0,r) ) / m(B(0,r))}, as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter {r} by some absolute constant. Such losses can be acceptable in applications in which the underlying sets {A} are largely “transverse” to the balls {B(0,r)}, so that the {N_r}-entropy of {A} is largely independent of {r}; this is a situation which arises in particular in the case of graphs {\Gamma = \{ (x,f(x)): x \in G \}} discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)

Henceforth we work in an additive group {G} equipped with a translation-invariant metric {d}. (One can also generalise things slightly by allowing the metric to attain the values {0} or {+\infty}, without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set {E} will have finite entropy {N_r(E)} for any {r>0}. We now have analogues of the two basic Ruzsa lemmas above:

Lemma 4 (Ruzsa covering lemma) Let {A, B} be precompact non-empty subsets of {G}, and let {r>0}. Then {A} may be covered by at most {\frac{N_r(A+B)}{N_r(B)}} translates of {B-B+B(0,2r)}.

Proof: Let {a_1,\dots,a_n \in A} be a maximal set of points such that the sets {a_i + B + B(0,r)} are all disjoint. Then the sets {a_i+B} are disjoint in {A+B} and have entropy {N_r(a_i+B)=N_r(B)}, and furthermore any ball of radius {r} can intersect at most one of the {a_i+B}. We conclude that {N_r(A+B) \geq n N_r(B)}, so {n \leq \frac{N_r(A+B)}{N_r(B)}}. If {a \in A}, then {a+B+B(0,r)} must intersect one of the {a_i + B + B(0,r)}, so {a \in a_i + B-B + B(0,2r)}, and the claim follows. \Box

Lemma 5 (Ruzsa triangle inequality) Let {A,B,C} be precompact non-empty subsets of {G}, and let {r>0}. Then {N_{4r}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}.

Proof: Consider the addition map {+: (x,y) \mapsto x+y} from {(A-B) \times (B-C)} to {G}. The domain {(A-B) \times (B-C)} may be covered by {N_r(A-B) N_r(B-C)} product balls {B(x,r) \times B(y,r)}. Every element {a-c} of {A - C} has a preimage {\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}} under this map which projects to a translate of {B}, and thus must meet at least {N_r(B)} of these product balls. However, if two elements of {A-C} are separated by a distance of at least {4r}, then no product ball can intersect both preimages. We thus see that {N_{4r}^{ent}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}, and the claim follows. \Box

Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the discrete arguments are elementary enough that they can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)

It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.

Read the rest of this entry »

(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

  • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
  • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
  • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.

With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly well suited for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
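(As a standard illustration of Łoś’s theorem, added here for concreteness rather than taken from the talk: let {F = \prod_{p \rightarrow \alpha} {\bf F}_p} be the ultraproduct of the prime fields {{\bf F}_p} along a non-principal ultrafilter {\alpha} on the primes. For each fixed natural number {q}, the first-order sentence “{q \cdot 1 \neq 0}” holds in {{\bf F}_p} for all but finitely many {p}, hence for {\alpha}-most {p}, and therefore holds in {F}; thus {F} is a field of characteristic zero, even though every factor has positive characteristic. Conversely, any first-order sentence that holds in {F}, such as the solvability of a fixed system of polynomial equations with integer coefficients, must already hold in {{\bf F}_p} for an {\alpha}-large set of primes {p}; this is the basic mechanism by which qualitative facts about the limit object transfer back to the original finite objects.)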

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.

Read the rest of this entry »

Let {F} be a field. A definable set over {F} is a set of the form

\displaystyle  \{ x \in F^n | \phi(x) \hbox{ is true} \} \ \ \ \ \ (1)

where {n} is a natural number, and {\phi(x)} is a predicate involving the ring operations {+,\times} of {F}, the equality symbol {=}, an arbitrary number of constants and free variables in {F}, the quantifiers {\forall, \exists}, boolean operators such as {\vee,\wedge,\neg}, and parentheses and colons, where the quantifiers are always understood to be over the field {F}. Thus, for instance, the set of quadratic residues

\displaystyle  \{ x \in F | \exists y: x = y \times y \}

is definable over {F}, and any algebraic variety over {F} is also a definable set over {F}. Henceforth we will abbreviate “definable over {F}” simply as “definable”.

If {F} is a finite field, then every subset of {F^n} is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that {E \subset F^n} is a definable set of complexity at most {M} if {n \leq M}, and {E} can be written in the form (1) for some predicate {\phi} of length at most {M} (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in {n} dimensions of degree {d} would be a definable set of complexity {O_{n,d}(1)}. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large.

In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:

Lemma 1 (Algebraic regularity lemma) Let {F} be a finite field, let {V,W} be definable non-empty sets of complexity at most {M}, and let {E \subset V \times W} also be definable with complexity at most {M}. Assume that the characteristic of {F} is sufficiently large depending on {M}. Then we may partition {V = V_1 \cup \ldots \cup V_m} and {W = W_1 \cup \ldots \cup W_n} with {m,n = O_M(1)}, with the following properties:

  • (Definability) Each of the {V_1,\ldots,V_m,W_1,\ldots,W_n} are definable of complexity {O_M(1)}.
  • (Size) We have {|V_i| \gg_M |V|} and {|W_j| \gg_M |W|} for all {i=1,\ldots,m} and {j=1,\ldots,n}.
  • (Regularity) We have

    \displaystyle  |E \cap (A \times B)| = d_{ij} |A| |B| + O_M( |F|^{-1/4} |V| |W| ) \ \ \ \ \ (2)

    for all {i=1,\ldots,m}, {j=1,\ldots,n}, {A \subset V_i}, and {B\subset W_j}, where {d_{ij}} is a rational number in {[0,1]} with numerator and denominator {O_M(1)}.

My original proof of this lemma was quite complicated, based on an explicit calculation of the “square”

\displaystyle  \mu(w,w') := \{ v \in V: (v,w), (v,w') \in E \}

of {E} using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)

Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields {F} (although its content is trivial if the field is not sufficiently large depending on the complexity bound {M}).

Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:

Proposition 2 Let {F} be a finite field, and let {M > 0}.

  • (Discretised cardinality) If {E} is a non-empty definable set of complexity at most {M}, then one has

    \displaystyle  |E| = c |F|^d + O_M( |F|^{d-1/2} ) \ \ \ \ \ (3)

    where {d = O_M(1)} is a natural number, and {c} is a positive rational number with numerator and denominator {O_M(1)}. In particular, we have {|F|^d \ll_M |E| \ll_M |F|^d}.

  • (Definable cardinality) Assume {|F|} is sufficiently large depending on {M}. If {V, W}, and {E \subset V \times W} are definable sets of complexity at most {M}, so that {E_w := \{ v \in V: (v,w) \in E \}} can be viewed as a definable subset of {V} that is definably parameterised by {w \in W}, then for each natural number {d = O_M(1)} and each positive rational {c} with numerator and denominator {O_M(1)}, the set

    \displaystyle  \{ w \in W: |E_w| = c |F|^d + O_M( |F|^{d-1/2} ) \} \ \ \ \ \ (4)

    is definable with complexity {O_M(1)}, where the implied constants in the asymptotic notation used to define (4) are the same as those appearing in (3). (Informally: the “dimension” {d} and “measure” {c} of {E_w} depend definably on {w}.)

We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)

The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most {M} and cardinality {|F|^{1/2}}, if {|F|} is sufficiently large depending on {M}. If {E \subset V} are definable sets of complexity at most {M}, it shows that {|E| = (c+ O_M(|F|^{-1/2})) |V|} for some rational {0\leq c \leq 1} with numerator and denominator {O_M(1)}; furthermore, if {c=0}, we may improve this bound to {|E| = O_M( |F|^{-1} |V|)}. In particular, we obtain the following “self-improving” properties:
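As a concrete illustration of the discretised cardinality assertion (3) (an added numerical aside; the snippet and its helper name are ad hoc), consider the quadratic residues {\{ x \in F: \exists y: x = y \times y \}} in {F = {\bf F}_p}: their number is exactly {(p+1)/2} for odd primes {p}, which is of the form {c|F|^d + O(|F|^{d-1/2})} with {c = 1/2} and {d = 1}.

```python
def quadratic_residue_count(p):
    """Number of x in F_p (including 0) with x = y*y for some y."""
    return len({(y * y) % p for y in range(p)})

for p in [101, 1009, 10007, 100003]:            # a few primes
    count = quadratic_residue_count(p)
    main_term = 0.5 * p                          # c = 1/2, d = 1
    print(f"p = {p:6d}: |E| = {count:6d}, c*p^d = {main_term:9.1f}, "
          f"error = {count - main_term:5.1f}, p^(d-1/2) = {p ** 0.5:7.1f}")
```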

  • If {E \subset V} are definable of complexity at most {M} and {|E| \leq \epsilon |V|} for some {\epsilon>0}, then (if {\epsilon} is sufficiently small depending on {M} and {F} is sufficiently large depending on {M}) this forces {|E| = O_M( |F|^{-1} |V| )}.
  • If {E \subset V} are definable of complexity at most {M} and {||E| - c |V|| \leq \epsilon |V|} for some {\epsilon>0} and positive rational {c}, then (if {\epsilon} is sufficiently small depending on {M,c} and {F} is sufficiently large depending on {M,c}) this forces {|E| = c |V| + O_M( |F|^{-1/2} |V| )}.

It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to {E}) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.

Read the rest of this entry »

I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory”, submitted to the new journal “EMS surveys in the mathematical sciences”.  This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”.   There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, come into play also.

The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, or Stepanov’s method in giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the nullstellensatz of Alon is another relatively early use of the polynomial method.  More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdos distance problem in the plane, and can be used to give a short proof of the Szemeredi-Trotter theorem.  One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.

This is something of a first draft, so comments and suggestions are even more welcome than usual.  (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.)

Define a partition of {1} to be a finite or infinite multiset {\Sigma} of real numbers in the interval {I := (0,1]} (that is, an unordered set of real numbers in {I}, possibly with multiplicity) whose total sum is {1}: {\sum_{t \in \Sigma}t = 1}. For instance, {\{1/2,1/4,1/8,1/16,\ldots\}} is a partition of {1}. Such partitions arise naturally when trying to decompose a large object into smaller ones, for instance (a short computational sketch of these three constructions is given just after the list):

  1. (Prime factorisation) Given a natural number {n}, one can decompose it into prime factors {n = p_1 \ldots p_k} (counting multiplicity), and then the multiset

    \displaystyle  \Sigma_{PF}(n) := \{ \frac{\log p_1}{\log n}, \ldots,\frac{\log p_k}{\log n} \}

    is a partition of {1}.

  2. (Cycle decomposition) Given a permutation {\sigma \in S_n} on {n} labels {\{1,\ldots,n\}}, one can decompose {\sigma} into cycles {C_1,\ldots,C_k}, and then the multiset

    \displaystyle  \Sigma_{CD}(\sigma) := \{ \frac{|C_1|}{n}, \ldots, \frac{|C_k|}{n} \}

    is a partition of {1}.

  3. (Normalisation) Given a multiset {\Gamma} of positive real numbers whose sum {S := \sum_{x\in \Gamma}x} is finite and non-zero, the multiset

    \displaystyle  \Sigma_N( \Gamma) := \frac{1}{S} \cdot \Gamma = \{ \frac{x}{S}: x \in \Gamma \}

    is a partition of {1}.
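As an added computational aside (the helper names below are invented for illustration), the three constructions above are easy to realise in a few lines of Python, representing a partition simply as a list of positive reals summing to {1}:

```python
from math import log

def prime_factor_partition(n):
    """Sigma_PF(n): the multiset { log p / log n } over the prime factors of n (with multiplicity), for n >= 2."""
    factors, m, p = [], n, 2
    while p * p <= m:
        while m % p == 0:
            factors.append(p)
            m //= p
        p += 1
    if m > 1:
        factors.append(m)
    return [log(q) / log(n) for q in factors]

def cycle_partition(perm):
    """Sigma_CD(sigma): the multiset { |C| / n } over the cycles of a permutation,
    given as a list perm with perm[i] = image of i (0-based)."""
    n, seen, parts = len(perm), [False] * len(perm), []
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            parts.append(length / n)
    return parts

def normalised_partition(gamma):
    """Sigma_N(Gamma): divide each element of a finite multiset of positive reals by the total sum."""
    s = sum(gamma)
    return [x / s for x in gamma]

print(prime_factor_partition(360))          # 360 = 2^3 * 3^2 * 5
print(cycle_partition([1, 2, 0, 4, 3]))     # cycles (0 1 2) and (3 4)
print(normalised_partition([1.0, 2.0, 3.0, 4.0]))
```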

In the spirit of the universality phenomenon, one can ask what a “typical” partition should look like; thus one seeks a natural probability distribution on the space of all partitions, analogous to (say) the gaussian distributions on the real line, or GUE distributions on point processes on the line, and so forth. It turns out that there is one natural such distribution which is related to all three examples above, known as the Poisson-Dirichlet distribution. To describe this distribution, we first have to deal with the problem that it is not immediately obvious how to cleanly parameterise the space of partitions, given that the cardinality of the partition can be finite or infinite, that multiplicity is allowed, and that we would like to identify two partitions that are permutations of each other.

One way to proceed is to view a random partition {\Sigma} as a type of point process on the interval {I}, with the constraint that {\sum_{x \in \Sigma} x = 1}, in which case one can study statistics such as the counting functions

\displaystyle  N_{[a,b]} := |\Sigma \cap [a,b]| = \sum_{x \in\Sigma} 1_{[a,b]}(x)

(where the cardinality here counts multiplicity). This can certainly be done, although in the case of the Poisson-Dirichlet process, the formulae for the joint distribution of such counting functions is moderately complicated. Another way to proceed is to order the elements of {\Sigma} in decreasing order

\displaystyle  t_1 \geq t_2 \geq t_3 \geq \ldots \geq 0,

with the convention that one pads the sequence {t_n} by an infinite number of zeroes if {\Sigma} is finite; this identifies the space of partitions with an infinite dimensional simplex

\displaystyle  \{ (t_1,t_2,\ldots) \in [0,1]^{\bf N}: t_1 \geq t_2 \geq \ldots; \sum_{n=1}^\infty t_n = 1 \}.

However, it turns out that the process of ordering the elements is not “smooth” (basically because functions such as {(x,y) \mapsto \max(x,y)} and {(x,y) \mapsto \min(x,y)} are not smooth), and the formulae for the joint distribution in the case of the Poisson-Dirichlet process are again complicated.

It turns out that there is a better (or at least “smoother”) way to enumerate the elements {u_1,(1-u_1)u_2,(1-u_1)(1-u_2)u_3,\ldots} of a partition {\Sigma} than the ordered method, although it is random rather than deterministic. This procedure (which I learned from this paper of Donnelly and Grimmett) works as follows.

  1. Given a partition {\Sigma}, let {u_1} be an element of {\Sigma} chosen at random, with each element {t\in \Sigma} having a probability {t} of being chosen as {u_1} (so if {t \in \Sigma} occurs with multiplicity {m}, the net probability that {t} is chosen as {u_1} is actually {mt}). Note that this is well-defined since the elements of {\Sigma} sum to {1}.
  2. Now suppose {u_1} is chosen. If {\Sigma \backslash \{u_1\}} is empty, we set {u_2,u_3,\ldots} all equal to zero and stop. Otherwise, let {u_2} be an element of {\frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})} chosen at random, with each element {t \in \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})} having a probability {t} of being chosen as {u_2}. (For instance, if {u_1} occurred with some multiplicity {m>1} in {\Sigma}, then {u_2} can equal {\frac{u_1}{1-u_1}} with probability {(m-1)u_1/(1-u_1)}.)
  3. Now suppose {u_1,u_2} are both chosen. If {\Sigma \backslash \{u_1,u_2\}} is empty, we set {u_3, u_4, \ldots} all equal to zero and stop. Otherwise, let {u_3} be an element of {\frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})} chosen at random, with each element {t \in \frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})} having a probability {t} of being chosen as {u_3}.
  4. We continue this process indefinitely to create elements {u_1,u_2,u_3,\ldots \in [0,1]}.

We denote the random sequence {Enum(\Sigma) := (u_1,u_2,\ldots) \in [0,1]^{\bf N}} formed from a partition {\Sigma} in the above manner as the random normalised enumeration of {\Sigma}; this is a random variable in the infinite unit cube {[0,1]^{\bf N}}, and can be defined recursively by the formula

\displaystyle  Enum(\Sigma) = (u_1, Enum(\frac{1}{1-u_1} \cdot (\Sigma\backslash \{u_1\})))

with {u_1} drawn randomly from {\Sigma}, with each element {t \in \Sigma} chosen with probability {t}, except when {\Sigma =\{1\}} in which case we instead have

\displaystyle  Enum(\{1\}) = (1, 0,0,\ldots).

Note that one can recover {\Sigma} from any of its random normalised enumerations {Enum(\Sigma) := (u_1,u_2,\ldots)} by the formula

\displaystyle  \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\} \ \ \ \ \ (1)

with the convention that one discards any zero elements on the right-hand side. Thus {Enum} can be viewed as a (stochastic) parameterisation of the space of partitions by the unit cube {[0,1]^{\bf N}}, which is a simpler domain to work with than the infinite-dimensional simplex mentioned earlier.
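As an added aside, the enumeration procedure is easy to simulate; the following Python sketch (with invented helper names, and finite partitions only) performs the size-biased selection and checks the recovery formula (1):

```python
import random

def enum(sigma):
    """Random normalised enumeration Enum(Sigma) of a finite partition sigma
    (a list of positive reals summing to 1): repeatedly pick an element with
    probability proportional to its value, record its rescaled value, remove it."""
    remaining = list(sigma)
    u = []
    while remaining:
        total = sum(remaining)                       # = (1-u_1)...(1-u_{k-1})
        i = random.choices(range(len(remaining)), weights=remaining)[0]
        u.append(remaining[i] / total)               # the rescaled chosen element u_k
        remaining.pop(i)
    return u

def recover(u):
    """Recover the partition from an enumeration via formula (1)."""
    parts, scale = [], 1.0
    for uk in u:
        if uk == 0:                                  # zero-padding convention for finite partitions
            break
        parts.append(scale * uk)
        scale *= (1 - uk)
    return parts

random.seed(1)
sigma = [0.5, 0.25, 0.125, 0.0625, 0.0625]
u = enum(sigma)
print("enumeration:", u)
print("recovered  :", sorted(recover(u), reverse=True))   # same multiset as sigma, up to rounding
print("original   :", sorted(sigma, reverse=True))
```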

Note that this random enumeration procedure can also be adapted to the three models described earlier:

  1. Given a natural number {n}, one can randomly enumerate its prime factors {n =p'_1 p'_2 \ldots p'_k} by letting each prime factor {p} of {n} be equal to {p'_1} with probability {\frac{\log p}{\log n}}, then once {p'_1} is chosen, let each remaining prime factor {p} of {n/p'_1} be equal to {p'_2} with probability {\frac{\log p}{\log n/p'_1}}, and so forth.
  2. Given a permutation {\sigma\in S_n}, one can randomly enumerate its cycles {C'_1,\ldots,C'_k} by letting each cycle {C} in {\sigma} be equal to {C'_1} with probability {\frac{|C|}{n}}, and once {C'_1} is chosen, letting each remaining cycle {C} be equal to {C'_2} with probability {\frac{|C|}{n-|C'_1|}}, and so forth. Alternatively, one can traverse the elements of {\{1,\ldots,n\}} in random order, let {C'_1} be the first cycle one encounters when performing this traversal, let {C'_2} be the next cycle (not equal to {C'_1}) one encounters when performing this traversal, and so forth.
  3. Given a multiset {\Gamma} of positive real numbers whose sum {S := \sum_{x\in\Gamma} x} is finite, we can randomly enumerate the elements of this multiset as {x'_1,x'_2,\ldots} by letting each {x \in \Gamma} have a {\frac{x}{S}} probability of being set equal to {x'_1}, and then once {x'_1} is chosen, letting each remaining {x \in \Gamma\backslash \{x'_1\}} have a {\frac{x}{S-x'_1}} probability of being set equal to {x'_2}, and so forth.

We then have the following result:

Proposition 1 (Existence of the Poisson-Dirichlet process) There exists a random partition {\Sigma} whose random enumeration {Enum(\Sigma) = (u_1,u_2,\ldots)} has the uniform distribution on {[0,1]^{\bf N}}, thus {u_1,u_2,\ldots} are independently and identically distributed copies of the uniform distribution on {[0,1]}.

A random partition {\Sigma} with this property will be called the Poisson-Dirichlet process. This process, first introduced by Kingman, can be described explicitly using (1) as

\displaystyle  \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\},

where {u_1,u_2,\ldots} are iid copies of the uniform distribution on {[0,1]}, although it is not immediately obvious from this definition that {Enum(\Sigma)} is indeed uniformly distributed on {[0,1]^{\bf N}}. We prove this proposition below the fold.
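Read in the other direction, this gives a concrete sampling recipe: draw iid uniform variables {u_1,u_2,\ldots} and form the parts {u_1, (1-u_1)u_2, (1-u_1)(1-u_2)u_3,\ldots}. Here is a short Python sketch of this stick-breaking construction (an added illustration; truncating after finitely many parts is of course only an approximation of the infinite process):

```python
import random

def poisson_dirichlet_sample(num_parts=50, rng=random):
    """Approximate Poisson-Dirichlet sample via formula (1): parts
    u1, (1-u1)u2, (1-u1)(1-u2)u3, ... with u_k iid uniform on [0,1].
    Truncated after num_parts parts; the neglected mass is the remaining scale."""
    parts, scale = [], 1.0
    for _ in range(num_parts):
        u = rng.random()
        parts.append(scale * u)
        scale *= (1 - u)
    return sorted(parts, reverse=True), scale      # decreasing rearrangement + unassigned mass

random.seed(2)
parts, leftover = poisson_dirichlet_sample()
print("five largest parts:", [round(t, 4) for t in parts[:5]])
print("mass not yet assigned after truncation:", leftover)   # typically extremely small
```

For instance, averaging the largest part over many such samples should give a number close to the Golomb-Dickman constant {\approx 0.624}.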

An equivalent definition of a Poisson-Dirichlet process is a random partition {\Sigma} with the property that

\displaystyle  (u_1, \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})) \equiv (U, \Sigma) \ \ \ \ \ (2)

where {u_1} is a random element of {\Sigma} with each {t \in\Sigma} having a probability {t} of being equal to {u_1}, {U} is a uniform variable on {[0,1]} that is independent of {\Sigma}, and {\equiv} denotes equality of distribution. This can be viewed as a sort of stochastic self-similarity property of {\Sigma}: if one randomly removes one element from {\Sigma} and rescales, one gets a new copy of {\Sigma}.

It turns out that each of the three ways to generate partitions listed above can lead to the Poisson-Dirichlet process, either directly or in a suitable limit. We begin with the third way, namely by normalising a Poisson process to have sum {1}:

Proposition 2 (Poisson-Dirichlet processes via Poisson processes) Let {a>0}, and let {\Gamma_a} be a Poisson process on {(0,+\infty)} with intensity function {t \mapsto \frac{1}{t} e^{-at}}. Then the sum {S :=\sum_{x \in \Gamma_a} x} is almost surely finite, and the normalisation {\Sigma_N(\Gamma_a) = \frac{1}{S} \cdot \Gamma_a} is a Poisson-Dirichlet process.

Again, we prove this proposition below the fold. Now we turn to the second way (a topic, incidentally, that was briefly touched upon in this previous blog post):

Proposition 3 (Large cycles of a typical permutation) For each natural number {n}, let {\sigma} be a permutation drawn uniformly at random from {S_n}. Then the random partition {\Sigma_{CD}(\sigma)} converges in the limit {n \rightarrow\infty} to a Poisson-Dirichlet process {\Sigma} in the following sense: given any fixed sequence of intervals {[a_1,b_1],\ldots,[a_k,b_k] \subset I} (independent of {n}), the joint discrete random variable {(N_{[a_1,b_1]}(\Sigma_{CD}(\sigma)),\ldots,N_{[a_k,b_k]}(\Sigma_{CD}(\sigma)))} converges in distribution to {(N_{[a_1,b_1]}(\Sigma),\ldots,N_{[a_k,b_k]}(\Sigma))}.
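As an added numerical illustration of Proposition 3 (not part of the original discussion): the probability that a uniformly random permutation in {S_n} has a cycle of length greater than {n/2} is {\sum_{n/2 < k \leq n} \frac{1}{k} \approx \log 2}, and correspondingly the Poisson-Dirichlet process has a part greater than {1/2} with probability {\log 2 \approx 0.693}; both can be checked by simulation:

```python
import random
from math import log

def has_long_cycle(n, rng):
    """True if a uniformly random permutation of {0,...,n-1} has a cycle of length > n/2."""
    perm = list(range(n))
    rng.shuffle(perm)
    seen = [False] * n
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            if length > n / 2:
                return True
    return False

def pd_has_large_part(rng, threshold=0.5, depth=60):
    """True if a (truncated) stick-breaking Poisson-Dirichlet sample has a part > threshold."""
    scale = 1.0
    for _ in range(depth):
        u = rng.random()
        if scale * u > threshold:
            return True
        scale *= (1 - u)
        if scale <= threshold:          # no later part can exceed the threshold
            return False
    return False

rng = random.Random(3)
trials, n = 2000, 1000
perm_freq = sum(has_long_cycle(n, rng) for _ in range(trials)) / trials
pd_freq = sum(pd_has_large_part(rng) for _ in range(trials)) / trials
print(f"P(cycle > n/2), n={n}: {perm_freq:.3f}   P(PD part > 1/2): {pd_freq:.3f}   log 2 = {log(2):.3f}")
```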

Finally, we turn to the first way:

Proposition 4 (Large prime factors of a typical number) Let {x > 0}, and let {N_x} be a random natural number chosen according to one of the following three rules:

  1. (Uniform distribution) {N_x} is drawn uniformly at random from the natural numbers in {[1,x]}.
  2. (Shifted uniform distribution) {N_x} is drawn uniformly at random from the natural numbers in {[x,2x]}.
  3. (Zeta distribution) Each natural number {n} has a probability {\frac{1}{\zeta(s)}\frac{1}{n^s}} of being equal to {N_x}, where {s := 1 +\frac{1}{\log x}} and {\zeta(s):=\sum_{n=1}^\infty \frac{1}{n^s}}.

Then {\Sigma_{PF}(N_x)} converges as {x \rightarrow \infty} to a Poisson-Dirichlet process {\Sigma} in the same fashion as in Proposition 3.

The process {\Sigma_{PF}(N_x)} was first studied by Billingsley (and also later by Knuth-Trabb Pardo and by Vershik), but the formulae were initially rather complicated; the proposition above is due to Donnelly and Grimmett, although the third case of the proposition is substantially easier and appears in the earlier work of Lloyd. We prove the proposition below the fold.

The previous two propositions suggest an interesting analogy between large random integers and large random permutations; see this ICM article of Vershik and this non-technical article of Granville (which, incidentally, was once adapted into a play) for further discussion.

As a sample application, consider the problem of estimating the number {\pi(x,x^{1/u})} of integers up to {x} which are not divisible by any prime larger than {x^{1/u}} (i.e. they are {x^{1/u}}-smooth), where {u>0} is a fixed real number. The ratio {\pi(x,x^{1/u})/x} is essentially (modulo some inessential technicalities concerning the distinction between the intervals {[x,2x]} and {[1,x]}) the probability that {\Sigma_{PF}(N_x)} avoids {[1/u,1]}, which by the above proposition converges to the probability {\rho(u)} that {\Sigma} avoids {[1/u,1]}. Below the fold we will show that this function is given by the Dickman function, defined by setting {\rho(u)=1} for {u \leq 1} and {u\rho'(u) = -\rho(u-1)} for {u > 1}, thus recovering the classical result of Dickman that {\pi(x,x^{1/u}) = (\rho(u)+o(1))x}.
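As an added numerical check (not from the original text): for {u=2} one has {\rho(2) = 1 - \log 2 \approx 0.307}, so the number of {\sqrt{x}}-smooth integers up to {x} should be roughly {0.307 x} for large {x}. The following Python snippet counts these integers directly with a largest-prime-factor sieve; the two printed quantities should agree up to the {o(1)} error in Dickman's theorem:

```python
from math import log

def count_smooth(x, y):
    """Count integers n in [1, x] all of whose prime factors are at most y,
    by sieving the largest prime factor of every n <= x."""
    largest = list(range(x + 1))        # largest[n] becomes the largest prime factor of n
    for p in range(2, x + 1):
        if largest[p] == p:             # p is prime: no smaller prime has touched it
            for m in range(2 * p, x + 1, p):
                largest[m] = p
    return sum(1 for n in range(1, x + 1) if largest[n] <= y)

x, u = 10 ** 5, 2
smooth_count = count_smooth(x, int(round(x ** (1.0 / u))))
print("number of x^(1/2)-smooth integers up to", x, ":", smooth_count)
print("Dickman main term rho(2)*x =", round((1 - log(2)) * x))
```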

I thank Andrew Granville and Anatoly Vershik for showing me the nice link between prime factors and the Poisson-Dirichlet process. The material here is standard, and (like many of the other notes on this blog) was primarily written for my own benefit, but it may be of interest to some readers. In preparing this article I found this exposition by Kingman to be helpful.

Note: this article will emphasise the computations rather than rigour, and in particular will rely on informal use of infinitesimals to avoid dealing with stochastic calculus or other technicalities. We adopt the convention that we will neglect higher order terms in infinitesimal calculations, e.g. if {dt} is infinitesimal then we will abbreviate {dt + o(dt)} simply as {dt}.

Read the rest of this entry »

Emmanuel Breuillard, Ben Green, Bob Guralnick, and I have just uploaded to the arXiv our joint paper “Expansion in finite simple groups of Lie type“. This long-delayed paper (announced way back in 2010!) is a followup to our previous paper in which we showed that, with one possible exception, generic pairs of elements of a simple algebraic group (over an uncountable field) generated a free group which was strongly dense in the sense that any nonabelian subgroup of this group was Zariski dense. The main result of this paper is to establish the analogous result for finite simple groups of Lie type (as defined in the previous blog post) and bounded rank, namely that almost all pairs {a,b} of elements of such a group generate a Cayley graph which is a (two-sided) expander, with expansion constant bounded below by a quantity depending on the rank of the group. (Informally, this means that the random walk generated by {a,b} spreads out in logarithmic time to be essentially uniformly distributed across the group, as opposed for instance to being largely trapped in an algebraic subgroup. Thus if generic elements did not generate a strongly dense group, one would probably expect expansion to fail.)

There are also some related results established in the paper. Firstly, as we discovered after writing our first paper, there was one class of algebraic groups for which our demonstration of strongly dense subgroups broke down, namely the {Sp_4} groups in characteristic three. In the current paper we provide in a pair of appendices a new argument that covers this case (or more generally, {Sp_4} in odd characteristic), by first reducing to the case of affine groups {k^2 \rtimes SL_2(k)} (which can be found inside {Sp_4} as a subgroup) and then using a ping-pong argument (in a p-adic metric) in the latter context.

Secondly, we show that the distinction between one-sided expansion and two-sided expansion (see this set of lecture notes of mine for definitions) is erased in the context of Cayley graphs of bounded degree, in the sense that such graphs are one-sided expanders if and only if they are two-sided expanders (perhaps with slightly different expansion constants). The argument turns out to be an elementary combinatorial one, based on the “pivot” argument discussed in these lecture notes of mine.

Now to the main result of the paper, namely the expansion of random Cayley graphs. This result had previously been established for {SL_2} by Bourgain and Gamburd, and Ben, Emmanuel and I had used the Bourgain-Gamburd method to achieve the same result for Suzuki groups. For the other finite simple groups of Lie type, expander graphs had been constructed by Kassabov, Lubotzky, and Nikolov, but they required more than two generators, which were placed deterministically rather than randomly. (Here, I am skipping over a large number of other results on expanding Cayley graphs; see this survey of Lubotzky for a fairly recent summary of developments.) The current paper also uses the “Bourgain-Gamburd machine”, as discussed in these lecture notes of mine, to demonstrate expansion. This machine shows how expansion of a Cayley graph follows from three basic ingredients, which we state informally as follows:

  • Non-concentration (A random walk in this graph does not concentrate in a proper subgroup);
  • Product theorem (A medium-sized subset of this group which is not trapped in a proper subgroup will expand under multiplication); and
  • Quasirandomness (The group has no small non-trivial linear representations).

Quasirandomness of arbitrary finite simple groups of Lie type was established many years ago (predating, in fact, the introduction of the term “quasirandomness” by Gowers for this property) by Landazuri-Seitz and Seitz-Zalesskii, and the product theorem was already established by Pyber-Szabo and independently by Breuillard, Green, and myself. So the main problem is to establish non-concentration: that for a random Cayley graph on a finite simple group {G} of Lie type, random walks do not concentrate in proper subgroups.

The first step was to classify the proper subgroups of {G}. Fortunately, these are all known; in particular, such groups are either contained in proper algebraic subgroups of the algebraic group containing {G} (or a bounded cover thereof) with bounded complexity, or else arise (up to conjugacy) from a version {G(F')} of the same group {G = G(F)} associated to a proper subfield {F'} of the field {F}; this follows for instance from the work of Larsen and Pink, but also can be deduced using the classification of finite simple groups, together with some work of Aschbacher, Liebeck-Seitz, and Nori. We refer to the two types of subgroups here as “structural subgroups” and “subfield subgroups”.

To preclude concentration in a structural subgroup, we use our previous result that generic elements of an algebraic group generate a strongly dense subgroup, and so do not concentrate in any algebraic subgroup. To translate this result from the algebraic group setting to the finite group setting, we need a Schwartz-Zippel lemma for finite simple groups of Lie type. This is straightforward for Chevalley groups, but turns out to be a bit trickier for the Steinberg and Suzuki-Ree groups, and we have to go back to the Chevalley-type parameterisation of such groups in terms of (twisted) one-parameter subgroups, which can be found for instance in the text of Carter; this “twisted Schwartz-Zippel lemma” may possibly have further application to analysis on twisted simple groups of Lie type. Unfortunately, the Schwartz-Zippel estimate becomes weaker in twisted settings, and particularly in the case of triality groups {{}^3 D_4(q)}, which require a somewhat ad hoc additional treatment that relies on passing to a simpler subgroup present in a triality group, namely a central product of two different {SL_2}‘s.

To rule out concentration in a conjugate of a subfield group, we repeat an argument we introduced in our Suzuki paper and pass to a matrix model and analyse the coefficients of the characteristic polynomial of words in this Cayley graph, to prevent them from concentrating in a subfield. (Note that these coefficients are conjugation-invariant.)
