
A core foundation of the subject now known as arithmetic combinatorics (and particularly the subfield of additive combinatorics) is the collection of elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets

\displaystyle  A+B := \{ a+b: a \in A, b \in B \}

and difference sets

\displaystyle  A-B := \{ a-b: a \in A, b \in B \},

as well as iterated sumsets such as {3A=A+A+A}, {2A-2A=A+A-A-A}, and so forth. Here, {A, B} are finite non-empty subsets of some additive group {G = (G,+)} (classically one took {G={\bf Z}} or {G={\bf R}}, but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:

Lemma 1 (Ruzsa covering lemma) Let {A, B} be finite non-empty subsets of {G}. Then {A} may be covered by at most {\frac{|A+B|}{|B|}} translates of {B-B}.

Proof: Consider a maximal set of disjoint translates {a+B} of {B} by elements {a \in A}. These translates have cardinality {|B|}, are disjoint, and lie in {A+B}, so there are at most {\frac{|A+B|}{|B|}} of them. By maximality, for any {a' \in A}, {a'+B} must intersect at least one of the selected {a+B}, thus {a' \in a+B-B}, and the claim follows. \Box

Lemma 2 (Ruzsa triangle inequality) Let {A,B,C} be finite non-empty subsets of {G}. Then {|A-C| \leq \frac{|A-B| |B-C|}{|B|}}.

Proof: Consider the addition map {+: (x,y) \mapsto x+y} from {(A-B) \times (B-C)} to {G}. Every element {a-c} of {A - C} has a preimage {\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}} under this map of cardinality at least {|B|}, thanks to the obvious identity {a-c = (a-b) + (b-c)} for each {b \in B}. Since {(A-B) \times (B-C)} has cardinality {|A-B| |B-C|}, the claim follows. \Box
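As a quick sanity check, here is a small Python snippet that verifies both lemmas numerically on random subsets of the integers; the sets, seeds, and function names are arbitrary choices for illustration only.

```python
import random

def sumset(A, B):
    return {a + b for a in A for b in B}

def diffset(A, B):
    return {a - b for a in A for b in B}

random.seed(0)
A = set(random.sample(range(1000), 40))
B = set(random.sample(range(1000), 30))
C = set(random.sample(range(1000), 50))

# Ruzsa covering lemma: greedily select a maximal family of disjoint
# translates a + B with a in A, then check that A is covered by the
# translates a + (B - B) and that there are at most |A+B|/|B| of them.
centers, used = [], set()
for a in A:
    translate = {a + b for b in B}
    if translate.isdisjoint(used):
        centers.append(a)
        used |= translate
BB = diffset(B, B)
assert all(any(x - a in BB for a in centers) for x in A)
assert len(centers) <= len(sumset(A, B)) / len(B)

# Ruzsa triangle inequality: |A-C| |B| <= |A-B| |B-C|
assert len(diffset(A, C)) * len(B) <= len(diffset(A, B)) * len(diffset(B, C))
print("both Ruzsa estimates hold on this example")
```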

Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets {A} of small doubling, in the sense that {|A+A| \leq K|A|} for some bounded {K}. (There are deeper theorems, most notably Freiman’s theorem, which give more control than elementary Ruzsa calculus does; however, the known bounds in Freiman’s theorem are worse than polynomial in {K} (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in {K}.)

However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. {{\bf R}^n} or a torus {({\bf R}/{\bf Z})^n}) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets {A} remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group {O_n({\bf R})} of orthogonal {n \times n} matrices, and let {A} be the matrices obtained by starting with all of the orthogonal matrices in {O_n({\bf R})} and rounding each coefficient of each matrix in this set to the nearest multiple of {\epsilon}, for some small {\epsilon>0}. This forms a finite set (whose cardinality grows as {\epsilon\rightarrow 0} like a certain negative power of {\epsilon}). In the limit {\epsilon \rightarrow 0}, the set {A} is not a set of small doubling in the discrete sense. However, {A \cdot A} is still close to {A} in a metric sense, being contained in the {O_n(\epsilon)}-neighbourhood of {A}. Another key example comes from graphs {\Gamma := \{ (x, f(x)): x \in A \}} of maps {f: A \rightarrow H} from a subset {A} of one additive group {G = (G,+)} to another {H = (H,+)}. If {f} is “approximately additive” in the sense that for all {x,y \in A} with {x+y \in A}, {f(x+y)} is close to {f(x)+f(y)} in some metric, then {\Gamma} might not have small doubling in the discrete sense (because {f(x+y)-f(x)-f(y)} could take a large number of values), but could be considered a set of small doubling in a discretised sense.

One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric {d} on the underlying group {G = (G,+)} (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:

Definition 3 Let {(X,d)} be a metric space, let {E} be a subset of {X}, and let {r>0} be a radius.

  • The packing number {N^{pack}_r(E)} is the largest number of points {x_1,\dots,x_n} one can pack inside {E} such that the balls {B(x_1,r),\dots,B(x_n,r)} are disjoint.
  • The internal covering number {N^{int}_r(E)} is the smallest number of points {x_1,\dots,x_n \in E} such that the balls {B(x_1,r),\dots,B(x_n,r)} cover {E}.
  • The external covering number {N^{ext}_r(E)} is the smallest number of points {x_1,\dots,x_n \in X} such that the balls {B(x_1,r),\dots,B(x_n,r)} cover {E}.
  • The metric entropy {N^{ent}_r(E)} is the largest number of points {x_1,\dots,x_n} one can find in {E} that are {r}-separated, thus {d(x_i,x_j) \geq r} for all {i \neq j}.

It is an easy exercise to verify the inequalities

\displaystyle  N^{ent}_{2r}(E) \leq N^{pack}_r(E) \leq N^{ext}_r(E) \leq N^{int}_r(E) \leq N^{ent}_r(E)

for any {r>0}, and that {N^*_r(E)} is non-increasing in {r} and non-decreasing in {E} for the three choices {* = pack,ext,ent} (but monotonicity in {E} can fail for {*=int}!). It turns out that the external covering number {N^{ext}_r(E)} is slightly more convenient than the other notions of metric entropy, so we will abbreviate {N_r(E) = N^{ext}_r(E)}. The cardinality {|E|} can be viewed as the limit of the entropies {N^*_r(E)} as {r \rightarrow 0}.
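For a finite set of generic points on the real line, all four quantities can be computed exactly by greedy sweeps, and the chain of inequalities can be checked numerically; here is a small Python sketch doing so (the set {E}, the radius {r}, and the function names are arbitrary choices for illustration, and for generic points the open/closed ball distinction does not affect the counts).

```python
import random

def max_separated(E, sep):
    """Largest number of points of E that are pairwise >= sep apart (greedy is optimal in 1D)."""
    count, last = 0, None
    for x in sorted(E):
        if last is None or x - last >= sep:
            count, last = count + 1, x
    return count

def min_external_cover(E, r):
    """Fewest intervals [c-r, c+r] (centres anywhere) needed to cover E."""
    count, reach = 0, None
    for x in sorted(E):
        if reach is None or x > reach:
            count, reach = count + 1, x + 2 * r   # greedily cover [x, x + 2r]
    return count

def min_internal_cover(E, r):
    """Fewest intervals [c-r, c+r] with centres c in E needed to cover E."""
    pts = sorted(E)
    count, reach, i = 0, None, 0
    while i < len(pts):
        if reach is not None and pts[i] <= reach:
            i += 1
            continue
        # the rightmost point of E within distance r of the first uncovered
        # point covers it and reaches furthest to the right
        c = max(y for y in pts if y <= pts[i] + r)
        count, reach = count + 1, c + r
    return count

random.seed(0)
E = [random.uniform(0, 20) for _ in range(12)]   # generic points
r = 1.3

N_pack   = max_separated(E, 2 * r)   # disjoint balls B(x_i, r) with x_i in E
N_ext    = min_external_cover(E, r)
N_int    = min_internal_cover(E, r)
N_ent_r  = max_separated(E, r)
N_ent_2r = max_separated(E, 2 * r)

print(N_ent_2r, N_pack, N_ext, N_int, N_ent_r)
assert N_ent_2r <= N_pack <= N_ext <= N_int <= N_ent_r
```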

If we have the bounded doubling property that {B(0,2r)} is covered by {O(1)} translates of {B(0,r)} for each {r>0}, and one has a Haar measure {m} on {G} which assigns a positive finite mass to each ball, then any of the above entropies {N^*_r(E)} is comparable to {m( E + B(0,r) ) / m(B(0,r))}, as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter {r} by some absolute constant. Such losses can be acceptable in applications in which the underlying sets {A} are largely “transverse” to the balls {B(0,r)}, so that the {N_r}-entropy of {A} is largely independent of {r}; this is a situation which arises in particular in the case of graphs {\Gamma = \{ (x,f(x)): x \in A \}} discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)

Henceforth we work in an additive group {G} equipped with a translation-invariant metric {d}. (One can also generalise things slightly by allowing the metric to attain the values {0} or {+\infty}, without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set {E} will have finite entropy {N_r(E)} for any {r>0}. We now have analogues of the two basic Ruzsa lemmas above:

Lemma 4 (Ruzsa covering lemma) Let {A, B} be precompact non-empty subsets of {G}, and let {r>0}. Then {A} may be covered by at most {\frac{N_r(A+B)}{N_r(B)}} translates of {B-B+B(0,2r)}.

Proof: Let {a_1,\dots,a_n \in A} be a maximal set of points such that the sets {a_i + B + B(0,r)} are all disjoint. Then the sets {a_i+B} are disjoint, lie in {A+B}, and have entropy {N_r(a_i+B)=N_r(B)}; furthermore, any ball of radius {r} can intersect at most one of the {a_i+B} (as its centre would otherwise lie in two of the disjoint sets {a_i + B + B(0,r)}). We conclude that {N_r(A+B) \geq n N_r(B)}, so {n \leq \frac{N_r(A+B)}{N_r(B)}}. If {a \in A}, then by maximality {a+B+B(0,r)} must intersect one of the {a_i + B + B(0,r)}, so {a \in a_i + B-B + B(0,2r)}, and the claim follows. \Box

Lemma 5 (Ruzsa triangle inequality) Let {A,B,C} be precompact non-empty subsets of {G}, and let {r>0}. Then {N_{4r}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}.

Proof: Consider the addition map {+: (x,y) \mapsto x+y} from {(A-B) \times (B-C)} to {G}. The domain {(A-B) \times (B-C)} may be covered by {N_r(A-B) N_r(B-C)} product balls {B(x,r) \times B(y,r)}. Every element {a-c} of {A - C} has a preimage {\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}} under this map which contains the set {\{ (a-b, b-c): b \in B \}}, whose projection to the second coordinate is a translate of {B}; the preimage must therefore meet at least {N_r(B)} of these product balls. However, if two elements of {A-C} are separated by a distance of at least {4r}, then no product ball can intersect both preimages. We thus see that {N_{4r}^{ent}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}, and the claim follows. \Box
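As with the discrete lemmas, Lemma 5 is easy to test numerically; the following self-contained Python sketch checks it for random subsets of the real line, computing the external covering number {N_r} exactly by a greedy sweep (all parameters are arbitrary choices for illustration).

```python
import random

def N(E, r):
    """External covering number of a finite E in R: fewest intervals [c-r, c+r] covering E."""
    count, reach = 0, None
    for x in sorted(E):
        if reach is None or x > reach:
            count, reach = count + 1, x + 2 * r
    return count

def diffset(A, B):
    return {a - b for a in A for b in B}

random.seed(2)
A = {random.uniform(0, 50) for _ in range(15)}
B = {random.uniform(0, 50) for _ in range(15)}
C = {random.uniform(0, 50) for _ in range(15)}
r = 1.0

lhs = N(diffset(A, C), 4 * r)
rhs = N(diffset(A, B), r) * N(diffset(B, C), r) / N(B, r)
print(lhs, rhs)
assert lhs <= rhs   # the entropy Ruzsa triangle inequality of Lemma 5
```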

Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the discrete arguments are elementary enough that their proofs can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)

It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.


Let {n} be a natural number. We consider the question of how many “almost orthogonal” unit vectors {v_1,\ldots,v_m} one can place in the Euclidean space {{\bf R}^n}. Of course, if we insist on {v_1,\ldots,v_m} being exactly orthogonal, so that {\langle v_i,v_j \rangle = 0} for all distinct {i,j}, then we can only pack at most {n} unit vectors into this space. However, if one is willing to relax the orthogonality condition a little, so that {\langle v_i,v_j\rangle} is small rather than zero, then one can pack a lot more unit vectors into {{\bf R}^n}, due to the important fact that pairs of vectors in high dimensions are typically almost orthogonal to each other. For instance, if one chooses {v_i} uniformly and independently at random on the unit sphere, then a standard computation (based on viewing the {v_i} as gaussian vectors projected onto the unit sphere) shows that each inner product {\langle v_i,v_j \rangle} concentrates around the origin with standard deviation {O(1/\sqrt{n})} and with gaussian tails, and a simple application of the union bound then shows that for any fixed {K \geq 1}, one can pack {n^K} unit vectors into {{\bf R}^n} whose inner products are all of size {O( K^{1/2} n^{-1/2} \log^{1/2} n )}.
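This heuristic is easy to see experimentally; here is a quick Python illustration (the dimension {n} and exponent {K} below are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 1.5
m = int(n ** K)

# project independent Gaussian vectors onto the unit sphere
V = rng.standard_normal((m, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)

G = V @ V.T
np.fill_diagonal(G, 0.0)
# the largest off-diagonal inner product should be within a small constant
# multiple of sqrt(K log n / n)
print(np.abs(G).max(), np.sqrt(K * np.log(n) / n))
```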

One can remove the logarithm by using some number theoretic constructions. For instance, if {n} is twice a prime {n=2p}, one can identify {{\bf R}^n} with the space {\ell^2({\bf F}_p)} of complex-valued functions {f: {\bf F}_p \rightarrow {\bf C}}, where {{\bf F}_p} is the field of {p} elements, and if one then considers the {p^2} different quadratic phases {x \mapsto \frac{1}{\sqrt{p}} e_p( ax^2 + bx )} for {a,b \in {\bf F}_p}, where {e_p(a) := e^{2\pi i a/p}} is the standard character on {{\bf F}_p}, then a standard application of Gauss sum estimates reveals that these {p^2} unit vectors in {{\bf R}^n} all have inner products of magnitude at most {p^{-1/2}} with each other. More generally, if we take {d \geq 1} and consider the {p^d} different polynomial phases {x \mapsto \frac{1}{\sqrt{p}} e_p( a_d x^d + \ldots + a_1 x )} for {a_1,\ldots,a_d \in {\bf F}_p}, then an application of the Weil conjectures for curves, proven by Weil, shows that the inner products of the associated {p^d} unit vectors with each other have magnitude at most {(d-1) p^{-1/2}}.
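For a small prime {p}, the quadratic phase construction can be checked directly in a few lines of Python; the vectors are built in {{\bf C}^p \equiv {\bf R}^{2p}}, so the real inner products are bounded in magnitude by the complex ones computed here (the choice {p=17} is arbitrary).

```python
import numpy as np

p = 17
x = np.arange(p)
# the p^2 unit vectors x -> e_p(a x^2 + b x) / sqrt(p)
phases = np.array([np.exp(2j * np.pi * (a * x**2 + b * x) / p) / np.sqrt(p)
                   for a in range(p) for b in range(p)])

G = phases @ phases.conj().T
np.fill_diagonal(G, 0.0)
print(np.abs(G).max(), 1 / np.sqrt(p))
assert np.abs(G).max() <= 1 / np.sqrt(p) + 1e-9   # the Gauss sum bound
```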

As it turns out, this construction is close to optimal, in that there is a polynomial limit to how many unit vectors one can pack into {{\bf R}^n} with an inner product of {O(1/\sqrt{n})}:

Theorem 1 (Cheap Kabatjanskii-Levenstein bound) Let {v_1,\ldots,v_m} be unit vectors in {{\bf R}^n} such that {|\langle v_i, v_j \rangle| \leq A n^{-1/2}} for some {1/2 \leq A \leq \frac{1}{2} \sqrt{n}}. Then we have {m \leq (\frac{Cn}{A^2})^{C A^2}} for some absolute constant {C}.

In particular, for fixed {d} and large {p}, the number of unit vectors one can pack in {{\bf R}^{2p}} whose inner products all have magnitude at most {dp^{-1/2}} will be {O( p^{O(d^2)} )}. This doesn’t quite match the construction coming from the Weil conjectures, although it is worth noting that the upper bound of {(d-1)p^{-1/2}} for the inner product is usually not sharp (the inner product is actually {p^{-1/2}} times the sum of {d-1} unit phases which one expects (cf. the Sato-Tate conjecture) to be uniformly distributed on the unit circle, and so the typical inner product is actually closer to {(d-1)^{1/2} p^{-1/2}}).

Note that for {0 \leq A < 1/2}, the {A=1/2} case of the above theorem (or more precisely, Lemma 2 below) gives the bound {m=O(n)}, which is essentially optimal as the example of an orthonormal basis shows. For {A \geq \sqrt{n}}, the condition {|\langle v_i, v_j \rangle| \leq A n^{-1/2}} is trivially true from Cauchy-Schwarz, and {m} can be arbitrarily large. Finally, in the range {\frac{1}{2} \sqrt{n} \leq A \leq \sqrt{n}}, we can use a volume packing argument: we have {\|v_i-v_j\|^2 \geq 2 (1 - A n^{-1/2})}, so if we set {r := 2^{-1/2} (1-A n^{-1/2})^{1/2}}, then the open balls of radius {r} around each {v_i} are disjoint, while all lying in a ball of radius {O(1)}, giving rise to the bound {m \leq C^n (1-A n^{-1/2})^{-n/2}} for some absolute constant {C}.

As I learned recently from Philippe Michel, a more precise version of this theorem is due to Kabatjanskii and Levenstein, who studied the closely related problem of sphere packing (or more precisely, cap packing) in the unit sphere {S^{n-1}} of {{\bf R}^n}. However, I found a short proof of the above theorem which relies on one of my favorite tricks – the tensor power trick – so I thought I would give it here.

We begin with an easy case, basically the {A=1/2} case of the above theorem:

Lemma 2 Let {v_1,\ldots,v_m} be unit vectors in {{\bf R}^n} such that {|\langle v_i, v_j \rangle| \leq \frac{1}{2n^{1/2}}} for all distinct {i,j}. Then {m < 2n}.

Proof: Suppose for contradiction that {m \geq 2n}. We consider the {2n \times 2n} Gram matrix {( \langle v_i,v_j \rangle )_{1 \leq i,j \leq 2n}}. This matrix is real symmetric with rank at most {n}, thus if one subtracts off the identity matrix, it has an eigenvalue of {-1} with multiplicity at least {n}. Taking Hilbert-Schmidt norms, we conclude that

\displaystyle  \sum_{1 \leq i,j \leq 2n: i \neq j} |\langle v_i, v_j \rangle|^2 \geq n.

But by hypothesis, the left-hand side is at most {2n(2n-1) \frac{1}{4n} = n-\frac{1}{2}}, giving the desired contradiction. \Box
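The key inequality in the above proof, namely that any {2n} unit vectors in {{\bf R}^n} satisfy {\sum_{i \neq j} |\langle v_i,v_j\rangle|^2 \geq n}, is also easy to test numerically; here is a short Python check (the choice of {n} and of random vectors is for illustration only).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
V = rng.standard_normal((2 * n, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # 2n random unit vectors in R^n

G = V @ V.T                                     # Gram matrix, rank at most n
off_diag_sq = np.sum(G**2) - 2 * n              # subtract the 2n unit diagonal entries
print(off_diag_sq, n)
assert off_diag_sq >= n - 1e-9                  # forced by the rank bound
```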

To amplify the above lemma to cover larger values of {A}, we apply the tensor power trick. A direct application of the tensor power trick does not gain very much; however one can do a lot better by using the symmetric tensor power rather than the raw tensor power. This gives

Corollary 3 Let {k} be a natural number, and let {v_1,\ldots,v_m} be unit vectors in {{\bf R}^n} such that {|\langle v_i, v_j \rangle| \leq 2^{-1/k} (\binom{n+k-1}{k})^{-1/2k}} for all distinct {i,j}. Then {m < 2\binom{n+k-1}{k}}.

Proof: We work in the symmetric component {\hbox{Sym}^k {\bf R}^n} of the tensor power {({\bf R}^n)^{\otimes k} \equiv {\bf R}^{n^k}}, which has dimension {\binom{n+k-1}{k}}. Applying the previous lemma to the tensor powers {v_1^{\otimes k},\ldots,v_m^{\otimes k}}, which are unit vectors in this space with {\langle v_i^{\otimes k}, v_j^{\otimes k} \rangle = \langle v_i, v_j \rangle^k}, we obtain the claim. \Box

Using the trivial bound {e^k \geq \frac{k^k}{k!}}, we can lower bound

\displaystyle  2^{-1/k} (\binom{n+k-1}{k})^{-1/2k} \geq 2^{-1/k} (n+k-1)^{-1/2} (k!)^{1/2k}

\displaystyle  \geq 2^{-1/k} e^{-1/2} k^{1/2} (n+k-1)^{-1/2} .

We can thus prove Theorem 1 by setting {k := \lfloor C A^2 \rfloor} for some sufficiently large absolute constant {C}.
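For the record, here is one way the numerology can be checked (a sketch only, with no attempt to optimise constants). With {k := \lfloor C A^2 \rfloor}, the hypotheses {1/2 \leq A \leq \frac{1}{2}\sqrt{n}} give {k \geq CA^2 - 1} and {k \leq Cn/4}, and hence

\displaystyle  2^{-1/k} (\binom{n+k-1}{k})^{-1/2k} \geq 2^{-1/k} e^{-1/2} \frac{(CA^2-1)^{1/2}}{(1+C/4)^{1/2}} n^{-1/2} \geq A n^{-1/2}

once {C} is large enough: after squaring, the second inequality reduces to {2^{-2/k} e^{-1} \frac{CA^2-1}{1+C/4} \geq A^2}, which follows (using {CA^2 - 1 \geq (C-4)A^2}) from {2^{-2/k} e^{-1} \frac{C-4}{1+C/4} \geq 1}, and the left-hand side here tends to {4/e > 1} as {C \rightarrow \infty}. Corollary 3 then applies and gives

\displaystyle  m < 2 \binom{n+k-1}{k} \leq 2 \left(\frac{e(n+k-1)}{k}\right)^k \leq \left(\frac{C' n}{A^2}\right)^{C' A^2}

for a suitable absolute constant {C'} (using {n+k-1 \leq (1+C/4)n}, {(C-4)A^2 \leq k \leq CA^2}, and {n/A^2 \geq 4}), as required.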

In the last set of notes, we obtained the following structural theorem concerning approximate groups:

Theorem 1 Let {A} be a finite {K}-approximate group. Then there exists a coset nilprogression {P} of rank and step {O_K(1)} contained in {A^4}, such that {A} is covered by {O_K(1)} left-translates of {P} (and hence also by {O_K(1)} right-translates of {P}).

Remark 1 Under some mild additional hypotheses (e.g. if the dimensions of {P} are sufficiently large, or if {P} is placed in a certain “normal form”, details of which may be found in this paper), a coset nilprogression {P} of rank and step {O_K(1)} will be an {O_K(1)}-approximate group, thus giving a partial converse to Theorem 1. (It is not quite a full converse though, even if one works qualitatively and forgets how the constants depend on {K}: if {A} is covered by a bounded number of left- and right-translates {gP, Pg} of {P}, one needs the group elements {g} to “approximately normalise” {P} in some sense if one wants to then conclude that {A} is an approximate group.) The mild hypotheses alluded to above can be enforced in the statement of the theorem, but we will not discuss this technicality here, and refer the reader to the above-mentioned paper for details.

By placing the coset nilprogression in a virtually nilpotent group, we have the following corollary in the global case:

Corollary 2 Let {A} be a finite {K}-approximate group in an ambient group {G}. Then {A} is covered by {O_K(1)} left cosets of a virtually nilpotent subgroup {G'} of {G}.

In this final set of notes, we give some applications of the above results. The first application is to replace “{K}-approximate group” by “sets of bounded doubling”:

Proposition 3 Let {A} be a finite non-empty subset of a (global) group {G} such that {|A^2| \leq K |A|}. Then there exists a coset nilprogression {P} of rank and step {O_K(1)} and cardinality {|P| \gg_K |A|} such that {A} can be covered by {O_K(1)} left-translates of {P}, and also by {O_K(1)} right-translates of {P}.

We will also establish (a strengthening of) a well-known theorem of Gromov on groups of polynomial growth, as promised back in Notes 0, as well as a variant result (of a type known as a “generalised Margulis lemma”) controlling the almost stabilisers of discrete actions of isometries.

The material here is largely drawn from my recent paper with Emmanuel Breuillard and Ben Green.


In the previous set of notes, we introduced the notion of an ultra approximate group – an ultraproduct {A = \prod_{n \rightarrow\alpha} A_n} of finite {K}-approximate groups {A_n} for some {K} independent of {n}, where each {K}-approximate group {A_n} may lie in a distinct ambient group {G_n}. Although these objects arise initially from the “finitary” objects {A_n}, it turns out that ultra approximate groups {A} can be profitably analysed by means of infinitary groups {L} (and in particular, locally compact groups or Lie groups {L}), by means of certain models {\rho: \langle A \rangle \rightarrow L} of {A} (or of the group {\langle A \rangle} generated by {A}). We will define precisely what we mean by a model later, but as a first approximation one can view a model as a representation of the ultra approximate group {A} (or of {\langle A \rangle}) that is “macroscopically faithful” in that it accurately describes the “large scale” behaviour of {A} (or equivalently, that the kernel of the representation is “microscopic” in some sense). In the next section we will see how one can use “Gleason lemma” technology to convert this macroscopic control of an ultra approximate group into microscopic control, which will be the key to classifying approximate groups.

Models of ultra approximate groups can be viewed as the multiplicative combinatorics analogue of the more well known concept of an ultralimit of metric spaces, which we briefly review below the fold as motivation.

The crucial observation is that ultra approximate groups enjoy a local compactness property which allows them to be usefully modeled by locally compact groups (and hence, through the Gleason-Yamabe theorem from previous notes, by Lie groups also). As per the Heine-Borel theorem, the local compactness will come from a combination of a completeness property and a local total boundedness property. The completeness property turns out to be a direct consequence of the countable saturation property of ultraproducts, thus illustrating one of the key advantages of the ultraproduct setting. The local total boundedness property is more interesting. Roughly speaking, it asserts that “large bounded sets” (such as {A} or {A^{100}}) can be covered by finitely many translates of “small bounded sets” {S}, where “small” is meant in a topological group sense, implying in particular that large powers {S^m} of {S} lie inside a set such as {A} or {A^4}. The easiest way to obtain such a property comes from the following lemma of Sanders:

Lemma 1 (Sanders lemma) Let {A} be a finite {K}-approximate group in a (global) group {G}, and let {m \geq 1}. Then there exists a symmetric subset {S} of {A^4} with {|S| \gg_{K,m} |A|} containing the identity such that {S^m \subset A^4}.

This lemma has an elementary combinatorial proof, and is the key to endowing an ultra approximate group with locally compact structure. There is also a closely related lemma of Croot and Sisask which can achieve similar results, and which will also be discussed below. (The locally compact structure can also be established more abstractly using the much more general methods of definability theory, as was first done by Hrushovski, but we will not discuss this approach here.)

By combining the locally compact structure of ultra approximate groups {A} with the Gleason-Yamabe theorem, one ends up being able to model a large “ultra approximate subgroup” {A'} of {A} by a Lie group {L}. Such Lie models serve a number of important purposes in the structure theory of approximate groups. Firstly, as all Lie groups have a dimension which is a natural number, they allow one to assign a natural number “dimension” to ultra approximate groups, which opens up the ability to perform “induction on dimension” arguments. Secondly, Lie groups have an escape property (which is in fact equivalent to the no small subgroups property): if a group element {g} lies outside of a very small ball {B_\epsilon}, then some power {g^n} of it will escape a somewhat larger ball {B_1}. Or equivalently: if a long orbit {g, g^2, \ldots, g^n} lies inside the larger ball {B_1}, one can deduce that the original element {g} lies inside the small ball {B_\epsilon}. Because all Lie groups have this property, we will be able to show that all ultra approximate groups {A} “essentially” have a similar property, in that they are “controlled” by a nearby ultra approximate group which obeys a number of escape-type properties analogous to those enjoyed by small balls in a Lie group, and which we will call a strong ultra approximate group. This will be discussed in the next set of notes, where we will also see how these escape-type properties can be exploited to create a metric structure on strong approximate groups analogous to the Gleason metrics studied in previous notes, which can in turn be exploited (together with an induction on dimension argument) to fully classify such approximate groups (in the finite case, at least).

There are some cases where the analysis is particularly simple. For instance, in the bounded torsion case, one can show that the associated Lie model {L} is necessarily zero-dimensional, which allows for an easy classification of approximate groups of bounded torsion.

Some of the material here is drawn from my recent paper with Ben Green and Emmanuel Breuillard, which is in turn inspired by a previous paper of Hrushovski.


Emmanuel Breuillard, Ben Green, and I have just uploaded to the arXiv our paper “The structure of approximate groups“, submitted to Pub. IHES. We had announced the main results of this paper in various forums (including this blog) for a few months now, but it had taken some time to fully write up the paper and put in various refinements and applications.

As announced previously, the main result of this paper is a (virtually, qualitatively) complete description of finite approximate groups in an arbitrary (local or global) group {G}. For simplicity let us work in the much more familiar setting of global groups, although our results also apply (but are a bit more technical to state) in the local group setting.

Recall that in a global group {G = (G,\cdot)}, a {K}-approximate group is a symmetric subset {A} of {G} containing the identity, with the property that the product set {A \cdot A} is covered by {K} left-translates of {A}. Examples of {O(1)}-approximate groups include genuine groups, convex bodies in a bounded dimensional vector space, small balls in a bounded dimensional Lie group, large balls in a discrete nilpotent group of bounded rank or step, or generalised arithmetic progressions (or more generally, coset progressions) of bounded rank in an abelian group. Specialising now to finite approximate groups, a key example of such a group is what we call a coset nilprogression: a set of the form {\pi^{-1}(P)}, where {\pi: G' \rightarrow N} is a homomorphism with finite kernel from a subgroup {G'} of {G} to a nilpotent group {N} of bounded step, and {P = P(u_1,\ldots,u_r;N_1,\ldots,N_r)} is a nilprogression with a bounded number of generators {u_1,\ldots,u_r} in {N} and some lengths {N_1,\ldots,N_r \gg 1}, where {P(u_1,\ldots,u_r;N_1,\ldots,N_r)} consists of all the words involving at most {N_1} copies of {u_1^{\pm 1}}, {N_2} copies of {u_2^{\pm 1}}, and so forth up to {N_r} copies of {u_r^{\pm 1}}. One can show (by some nilpotent algebra) that all such coset nilprogressions are {O(1)}-approximate groups so long as the step and the rank {r} are bounded (and if {N_1,\ldots,N_r} are sufficiently large).

Our main theorem (which was essentially conjectured independently by Helfgott and by Lindenstrauss) asserts, roughly speaking, that coset nilprogressions are essentially the only examples of approximate groups.

Theorem 1 Let {A} be a {K}-approximate group. Then {A^4} contains a coset nilprogression {P} of rank and step {O_K(1)}, such that {A} can be covered by {O_K(1)} left-translates of {P}.

In the torsion-free abelian case, this result is essentially Freiman’s theorem (with an alternate proof by Ruzsa); for the general abelian case, it is due to Green and Ruzsa. Various partial results in this direction for some other groups (e.g. free groups, nilpotent groups, solvable groups, or simple groups of Lie type) are also known; see these previous blog posts for a summary of several of these results.

This result has a number of applications to geometric growth theory, and in particular to variants of Gromov’s theorem on groups of polynomial growth, which asserts that a finitely generated group is of polynomial growth if and only if it is virtually nilpotent. The connection lies in the fact that if the balls {B_S(R)} associated to a finite set of generators {S} have polynomial growth, then some simple volume-packing arguments combined with the pigeonhole principle will show that {B_S(R)} will end up being an {O(1)}-approximate group for many radii {R}. In fact, since our theorem only needs a single approximate group to obtain virtually nilpotent structure, we are able to obtain some new strengthenings of Gromov’s theorem. For instance, if {A} is any {K}-approximate group in a finitely generated group {G} that contains {B_S(R_0)} for some set of generators {S} and some {R_0} that is sufficiently large depending on {K}, our theorem implies that {G} is virtually nilpotent, answering a question of Petrunin. Among other things, this gives an alternate proof of a recent result of Kapovitch and Wilking (see also this previous paper of Cheeger and Colding) that a compact manifold of bounded diameter and Ricci curvature at least {-\epsilon} necessarily has a virtually nilpotent fundamental group if {\epsilon} is sufficiently small (depending only on dimension). The main point here is that no lower bound on the injectivity radius is required. Another application is a “Margulis-type lemma”, which asserts that if a metric space {X} has “bounded packing” (in the sense that any ball of radius (say) {4} is covered by a bounded number of balls of radius {1}), and {\Gamma} is a group of isometries on {X} that acts discretely (i.e. every orbit has only finitely many elements (counting multiplicity) in each bounded set), then the near-stabiliser {\{ \gamma \in \Gamma: d(\gamma x, x) \leq \epsilon \}} of a point {x} is virtually nilpotent if {\epsilon} is small enough depending on the packing constant.

There are also some variants and refinements to the main theorem proved in the paper, such as an extension to local groups, and also an improvement on the bound on the rank and step from {O_K(1)} to {O(\log K)} (but at the cost of replacing {A^4} in the theorem with {A^{O(1)}}).

I’ll be discussing the proof of the main theorem in detail in the next few lecture notes of my current graduate course. The full proof is somewhat lengthy (occupying about 50 pages of the 90-page paper), but can be summarised in the following steps:

  1. (Hrushovski) Take an arbitrary sequence {A_n} of finite {K}-approximate groups, and show that an appropriate limit {A} of such groups can be “modeled” in some sense by an open bounded subset of a locally compact group. (The precise definition of “model” is technical, but “macroscopically faithful representation” is a good first approximation.) As discussed in the previous lecture notes, we use an ultralimit for this purpose; the paper of Hrushovski where this strategy was first employed also considered more sophisticated model-theoretic limits. To build a locally compact topology, Hrushovski used some tools from definability theory; in our paper, we instead use a combinatorial lemma of Sanders (closely related to a similar result of Croot and Sisask.)
  2. (Gleason-Yamabe) The locally compact group can in turn be “modeled” by a Lie group (possibly after shrinking the group, and thus the ultralimit {A}, slightly). (This result arose from the solution to Hilbert’s fifth problem, as discussed here. For our extension to local groups, we use a recent local version of the Gleason-Yamabe theorem, due to Goldbring.)
  3. (Gleason) Using the escape properties of the Lie model, construct a norm {\| \|} (and thus a left-invariant metric {d}) on the ultralimit approximate group {A} (and also on the finitary groups {A_n}) that obeys a number of good properties, such as a commutator estimate {\| [g,h]\| \ll \|g\| \|h\|}. (This is modeled on an analogous construction used in the theory of Hilbert’s fifth problem, as discussed in this previous set of lecture notes.) This norm is essentially an escape norm associated to (a slight modification of) {A} or {A_n}.
  4. (Jordan-Bieberbach-Frobenius) We now take advantage of the finite nature of the {A_n} by locating the non-trivial element {e} of {A_n} with minimal escape norm (but one first has to quotient out the elements of zero escape norm). The commutator estimate mentioned previously ensures that this element is essentially “central” in {A_n}. One can then quotient out a progression {P(e;N)} generated by this central element (reducing the dimension of the Lie model by one in the process) and iterate the process until the dimension of the model drops to zero. Reversing the process, this constructs a coset nilprogression inside {A_n^4}. This argument is based on the classic proof of Jordan’s theorem due to Bieberbach and Frobenius, as discussed in this blog post.

One quirk of the argument is that it requires one to work in the category of local groups rather than global groups. (This is somewhat analogous to how, in the standard proofs of Freiman’s theorem, one needs to work with the category of Freiman homomorphisms, rather than group homomorphisms.) The reason for this arises when performing the quotienting step in the Jordan-Bieberbach-Frobenius leg of the argument. The obvious way to perform this step (and the thing that we tried first) would be to quotient out by the entire cyclic group {\langle e \rangle} generated by the element {e} of minimal escape norm. However, it turns out that this doesn’t work too well, because the group quotiented out is so “large” that it can create a lot of torsion in the quotient. In particular, elements which used to have positive escape norm, can now become trapped in the quotient of {A_n}, thus sending their escape norm to zero. This leads to an inferior conclusion (in which a coset nilprogression is replaced by a more complicated tower of alternating extensions between central progressions and finite groups, similar to the towers encountered in my previous paper on this topic). To prevent this unwanted creation of torsion, one has to truncate the cyclic group {\langle e \rangle} before it escapes {A_n}, so that one quotients out by a geometric progression {P(e;N)} rather than the cyclic group. But the operation of quotienting out by a {P(e;N)}, which is a local group rather than a global one, cannot be formalised in the category of global groups, but only in the category of local groups. Because of this, we were forced to carry out the entire argument using the language of local groups. As it turns out, the arguments are ultimately more natural in this setting, although there is an initial investment of notation required, given that global group theory is much more familiar and well-developed than local group theory.

One interesting feature of the argument is that it does not use much of the existing theory of Freiman-type theorems, instead building the coset nilprogression directly from the geometric properties of the approximate group. In particular, our argument gives a new proof of Freiman’s theorem in the abelian case, which largely avoids Fourier analysis (except through the use of the theory of Hilbert’s fifth problem, which uses the Peter-Weyl theorem (or, in the abelian case, Pontryagin duality), which is basically a version of Fourier analysis).

In this set of notes we will be able to finally prove the Gleason-Yamabe theorem from Notes 0, which we restate here:

Theorem 1 (Gleason-Yamabe theorem) Let {G} be a locally compact group. Then, for any open neighbourhood {U} of the identity, there exists an open subgroup {G'} of {G} and a compact normal subgroup {K} of {G'} in {U} such that {G'/K} is isomorphic to a Lie group.

In the next set of notes, we will combine the Gleason-Yamabe theorem with some topological analysis (and in particular, using the invariance of domain theorem) to establish some further control on locally compact groups, and in particular obtaining a solution to Hilbert’s fifth problem.

To prove the Gleason-Yamabe theorem, we will use three major tools developed in previous notes. The first (from Notes 2) is a criterion for Lie structure in terms of a special type of metric, which we will call a Gleason metric:

Definition 2 Let {G} be a topological group. A Gleason metric on {G} is a left-invariant metric {d: G \times G \rightarrow {\bf R}^+} which generates the topology on {G} and obeys the following properties for some constant {C>0}, writing {\|g\|} for {d(g,\hbox{id})}:

  • (Escape property) If {g \in G} and {n \geq 1} is such that {n \|g\| \leq \frac{1}{C}}, then {\|g^n\| \geq \frac{1}{C} n \|g\|}.
  • (Commutator estimate) If {g, h \in G} are such that {\|g\|, \|h\| \leq \frac{1}{C}}, then

    \displaystyle  \|[g,h]\| \leq C \|g\| \|h\|, \ \ \ \ \ (1)

    where {[g,h] := g^{-1}h^{-1}gh} is the commutator of {g} and {h}.
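As a quick numerical illustration of this definition, the following Python experiment checks that on the compact group {SO(3)} with the bi-invariant metric {d(g,h) := \|g-h\|_{op}}, randomly chosen small rotations satisfy both the escape property and the commutator estimate with a modest constant; the sampling scheme and the constants {4} and {0.5} below are ad hoc choices for illustration, and this is of course a check rather than a proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm(g):
    """||g|| = d(g, id) in the operator norm metric on SO(3)."""
    return np.linalg.norm(g - np.eye(3), 2)

def small_rotation(max_angle):
    """A rotation by a small random angle about a random axis (Rodrigues formula)."""
    axis = rng.standard_normal(3)
    axis /= np.linalg.norm(axis)
    theta = max_angle * rng.random()
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

for _ in range(100):
    g, h = small_rotation(0.05), small_rotation(0.05)
    # commutator estimate: ||[g,h]|| <= C ||g|| ||h||  (C = 4 suffices comfortably here)
    comm = np.linalg.inv(g) @ np.linalg.inv(h) @ g @ h
    assert norm(comm) <= 4 * norm(g) * norm(h) + 1e-12
    # escape property: while n ||g|| stays of size O(1), ||g^n|| >= c n ||g||
    n = 20
    assert norm(np.linalg.matrix_power(g, n)) >= 0.5 * n * norm(g)
```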

Theorem 3 (Building Lie structure from Gleason metrics) Let {G} be a locally compact group that has a Gleason metric. Then {G} is isomorphic to a Lie group.

The second tool is the existence of a left-invariant Haar measure on any locally compact group; see Theorem 3 from Notes 3. Finally, we will also need the compact case of the Gleason-Yamabe theorem (Theorem 8 from Notes 3), which was proven via the Peter-Weyl theorem:

Theorem 4 (Gleason-Yamabe theorem for compact groups) Let {G} be a compact Hausdorff group, and let {U} be a neighbourhood of the identity. Then there exists a compact normal subgroup {H} of {G} contained in {U} such that {G/H} is isomorphic to a linear group (i.e. a closed subgroup of a general linear group {GL_n({\bf C})}).

To finish the proof of the Gleason-Yamabe theorem, we have to somehow use the available structures on locally compact groups (such as Haar measure) to build good metrics on those groups (or on suitable subgroups or quotient groups). The basic construction is as follows:

Definition 5 (Building metrics out of test functions) Let {G} be a topological group, and let {\psi: G \rightarrow {\bf R}^+} be a bounded non-negative function. Then we define the pseudometric {d_\psi: G \times G \rightarrow {\bf R}^+} by the formula

\displaystyle  d_\psi(g,h) := \sup_{x \in G} |\tau(g) \psi(x) - \tau(h) \psi(x)|

\displaystyle  = \sup_{x \in G} |\psi(g^{-1} x ) - \psi(h^{-1} x)|

(where {\tau(g) \psi(x) := \psi(g^{-1} x)} denotes the left translation of {\psi} by {g}),

and the semi-norm {\| \|_\psi: G \rightarrow {\bf R}^+} by the formula

\displaystyle  \|g\|_\psi := d_\psi(g, \hbox{id}).

Note that one can also write

\displaystyle  \|g\|_\psi = \sup_{x \in G} |\partial_g \psi(x)|

where {\partial_g \psi(x) := \psi(x) - \psi(g^{-1} x)} is the “derivative” of {\psi} in the direction {g}.

Exercise 1 Let the notation and assumptions be as in the above definition. For any {g,h,k \in G}, establish the metric-like properties

  1. (Identity) {d_\psi(g,h) \geq 0}, with equality when {g=h}.
  2. (Symmetry) {d_\psi(g,h) = d_\psi(h,g)}.
  3. (Triangle inequality) {d_\psi(g,k) \leq d_\psi(g,h) + d_\psi(h,k)}.
  4. (Continuity) If {\psi \in C_c(G)}, then the map {d_\psi: G \times G \rightarrow {\bf R}^+} is continuous.
  5. (Boundedness) One has {d_\psi(g,h) \leq \sup_{x \in G} |\psi(x)|}. If {\psi \in C_c(G)} is supported in a set {K}, then equality occurs unless {g^{-1} h \in K K^{-1}}.
  6. (Left-invariance) {d_\psi(g,h) = d_\psi(kg,kh)}. In particular, {d_\psi(g,h) = \| h^{-1} g \|_\psi = \| g^{-1} h \|_\psi}.

In particular, we have the norm-like properties

  1. (Identity) {\|g\|_\psi \geq 0}, with equality when {g=\hbox{id}}.
  2. (Symmetry) {\|g\|_\psi = \|g^{-1}\|_\psi}.
  3. (Triangle inequality) {\|gh\|_\psi \leq \|g\|_\psi + \|h\|_\psi}.
  4. (Continuity) If {\psi \in C_c(G)}, then the map {\|\|_\psi: G \rightarrow {\bf R}^+} is continuous.
  5. (Boundedness) One has {\|g\|_\psi \leq \sup_{x \in G} |\psi(x)|}. If {\psi \in C_c(G)} is supported in a set {K}, then equality occurs unless {g \in K K^{-1}}.

We remark that the first three properties of {d_\psi} in the above exercise ensure that {d_\psi} is indeed a pseudometric.
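To make this construction concrete, here is a small Python example on the finite group {G = {\bf Z}/N{\bf Z}} (written additively, so that {\psi(g^{-1}x)} becomes {\psi(x-g)}) with a “tent” test function {\psi}; it computes {\|g\|_\psi} and confirms the norm-like properties above numerically. The value of {N} and the width of the tent are arbitrary choices for illustration.

```python
import itertools
import numpy as np

N = 60
x = np.arange(N)
# a tent function of height 1 supported within distance 10 of the origin
psi = np.maximum(0.0, 1.0 - np.minimum(x, N - x) / 10.0)

def norm_psi(g):
    """||g||_psi = sup_x |psi(x) - psi(x - g)| on G = Z/NZ."""
    return np.max(np.abs(psi - np.roll(psi, g)))

norms = np.array([norm_psi(g) for g in range(N)])
print(norms[:6])   # grows linearly (slope 1/10) for small g

assert norms[0] == 0.0                                                 # identity
assert all(np.isclose(norms[g], norms[(-g) % N]) for g in range(N))    # symmetry
for g, h in itertools.product(range(N), repeat=2):                     # triangle inequality
    assert norms[(g + h) % N] <= norms[g] + norms[h] + 1e-12
assert norms[N // 2] == 1.0   # boundedness: ||g||_psi = sup |psi| once g lies outside K - K
```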

To get good metrics (such as Gleason metrics) on groups {G}, it thus suffices to obtain test functions {\psi} that obey suitably good “regularity” properties. We will achieve this primarily by means of two tricks. The first trick is to obtain high-regularity test functions by convolving together two low-regularity test functions, taking advantage of the existence of a left-invariant Haar measure {\mu} on {G}. The second trick is to obtain low-regularity test functions by means of a metric-like object on {G}. This latter trick may seem circular, as our whole objective is to get a metric on {G} in the first place, but the key point is that the metric one starts with does not need to have as many “good properties” as the metric one ends up with, thanks to the regularity-improving properties of convolution. As such, one can use a “bootstrap argument” (or induction argument) to create a good metric out of almost nothing. It is this bootstrap miracle which is at the heart of the proof of the Gleason-Yamabe theorem (and hence of the solution of Hilbert’s fifth problem).

The arguments here are based on the nonstandard analysis arguments used to establish Hilbert’s fifth problem by Hirschfeld and by Goldbring (and also some unpublished lecture notes of Goldbring and van den Dries). However, we will not explicitly use any nonstandard analysis in this post.


One of the fundamental inequalities in convex geometry is the Brunn-Minkowski inequality, which asserts that if {A, B} are two non-empty bounded open subsets of {{\bf R}^d}, then

\displaystyle  \mu(A+B)^{1/d} \geq \mu(A)^{1/d} + \mu(B)^{1/d}, \ \ \ \ \ (1)

where

\displaystyle  A+B := \{a+b: a \in A, b \in B \}

is the sumset of {A} and {B}, and {\mu} denotes Lebesgue measure. The estimate is sharp, as can be seen by considering the case when {A, B} are convex bodies that are dilates of each other, thus {A = \lambda B := \{ \lambda b: b \in B \}} for some {\lambda>0}, since in this case one has {\mu(A) = \lambda^d \mu(B)}, {A+B = (\lambda+1)B}, and {\mu(A+B) = (\lambda+1)^d \mu(B)}.

The Brunn-Minkowski inequality has many applications in convex geometry. To give just one example, if we assume that {A} has a smooth boundary {\partial A}, and set {B} equal to a small ball {B = B(0,\epsilon)}, then {\mu(B)^{1/d} = \epsilon \mu(B(0,1))^{1/d}}, and in the limit {\epsilon \rightarrow 0} one has

\displaystyle  \mu(A+B) = \mu(A) + \epsilon |\partial A| + o(\epsilon)

where {|\partial A|} is the surface measure of {A}; applying the Brunn-Minkowski inequality and performing a Taylor expansion, one soon arrives at the isoperimetric inequality

\displaystyle  |\partial A| \geq d \mu(A)^{1-1/d} \mu(B(0,1))^{1/d}.

Thus one can view the isoperimetric inequality as an infinitesimal limit of the Brunn-Minkowski inequality.
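(To spell out the Taylor expansion step: the Brunn-Minkowski inequality gives

\displaystyle  \mu(A+B) \geq \left(\mu(A)^{1/d} + \epsilon \mu(B(0,1))^{1/d}\right)^d = \mu(A) + d \epsilon \mu(A)^{1-1/d} \mu(B(0,1))^{1/d} + O(\epsilon^2),

and comparing this with {\mu(A+B) = \mu(A) + \epsilon |\partial A| + o(\epsilon)} and sending {\epsilon \rightarrow 0} yields the claimed lower bound on {|\partial A|}.)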

There are many proofs known of the Brunn-Minkowski inequality. Firstly, the inequality is trivial in one dimension:

Lemma 1 (One-dimensional Brunn-Minkowski) If {A, B \subset {\bf R}} are non-empty measurable sets, then

\displaystyle  \mu(A+B) \geq \mu(A)+\mu(B).

Proof: By inner regularity we may assume that {A,B} are compact. The claim then follows since {A+B} contains the sets {\sup(A)+B} and {A+\inf(B)}, which meet only at a single point {\sup(A)+\inf(B)}. \Box

For the higher dimensional case, the inequality can be established from the Prékopa-Leindler inequality:

Theorem 2 (Prékopa-Leindler inequality in {{\bf R}^d}) Let {0 < \theta < 1}, and let {f, g, h: {\bf R}^d \rightarrow {\bf R}} be non-negative measurable functions obeying the inequality

\displaystyle  h(x+y) \geq f(x)^{1-\theta} g(y)^\theta \ \ \ \ \ (2)

for all {x,y \in {\bf R}^d}. Then we have

\displaystyle  \int_{{\bf R}^d} h \geq \frac{1}{(1-\theta)^{d(1-\theta)} \theta^{d\theta}} (\int_{{\bf R}^d} f)^{1-\theta} (\int_{{\bf R}^d} g)^\theta. \ \ \ \ \ (3)

This inequality is usually stated using {h((1-\theta)x + \theta y)} instead of {h(x+y)} in order to eliminate the ungainly factor {\frac{1}{(1-\theta)^{d(1-\theta)} \theta^{d\theta}}}. However, we formulate the inequality in this fashion in order to avoid any reference to the dilation maps {x \mapsto \lambda x}; the reason for this will become clearer later.

The Prékopa-Leindler inequality quickly implies the Brunn-Minkowski inequality. Indeed, if we apply it to the indicator functions {f := 1_A, g := 1_B, h := 1_{A+B}} (which certainly obey (2)), then (3) gives

\displaystyle  \mu(A+B)^{1/d} \geq \frac{1}{(1-\theta)^{1-\theta} \theta^{\theta}} \mu(A)^{\frac{1-\theta}{d}} \mu(B)^{\frac{\theta}{d}}

for any {0 < \theta < 1}. We can now optimise in {\theta}; the optimal value turns out to be

\displaystyle  \theta := \frac{\mu(B)^{1/d}}{\mu(A)^{1/d}+\mu(B)^{1/d}}

which yields (1).
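Indeed, writing {x := \mu(A)^{1/d}} and {y := \mu(B)^{1/d}}, this choice of {\theta} gives {1-\theta = \frac{x}{x+y}} and {\theta = \frac{y}{x+y}}, so that

\displaystyle  \frac{1}{(1-\theta)^{1-\theta} \theta^{\theta}} \mu(A)^{\frac{1-\theta}{d}} \mu(B)^{\frac{\theta}{d}} = \left(\frac{x}{1-\theta}\right)^{1-\theta} \left(\frac{y}{\theta}\right)^{\theta} = (x+y)^{1-\theta} (x+y)^{\theta} = x+y,

which is precisely the right-hand side of (1).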

To prove the Prékopa-Leindler inequality, we first observe that the inequality tensorises in the sense that if it is true in dimensions {d_1} and {d_2}, then it is automatically true in dimension {d_1+d_2}. Indeed, if {f, g, h: {\bf R}^{d_1} \times {\bf R}^{d_2} \rightarrow {\bf R}^+} are measurable functions obeying (2) in dimension {d_1+d_2}, then for any {x_1, y_1 \in {\bf R}^{d_1}}, the functions {f(x_1,\cdot), g(y_1,\cdot), h(x_1+y_1,\cdot): {\bf R}^{d_2} \rightarrow {\bf R}^+} obey (2) in dimension {d_2}. Applying the Prékopa-Leindler inequality in dimension {d_2}, we conclude that

\displaystyle  H(x_1+y_1) \geq \frac{1}{(1-\theta)^{d_2(1-\theta)} \theta^{d_2\theta}} F(x_1)^{1-\theta} G(y_1)^\theta

for all {x_1,y_1 \in {\bf R}^{d_1}}, where {F(x_1) := \int_{{\bf R}^{d_2}} f(x_1,x_2)\ dx_2} and similarly for {G, H}. But then if we apply the Prékopa-Leindler inequality again, this time in dimension {d_1} and to the functions {F}, {G}, and {(1-\theta)^{d_2(1-\theta)} \theta^{d_2\theta} H}, and then use the Fubini-Tonelli theorem, we obtain (3).

From tensorisation, we see that to prove the Prékopa-Leindler inequality it suffices to do so in the one-dimensional case. We can derive this from Lemma 1 by reversing the “Prékopa-Leindler implies Brunn-Minkowski” argument given earlier, as follows. We can normalise {f,g} to have sup norm {1}. If (2) holds (in one dimension), then the super-level sets {\{f>\lambda\}, \{g>\lambda\}, \{h>\lambda\}} are related by the set-theoretic inclusion

\displaystyle  \{ h > \lambda \} \supset \{ f > \lambda \} + \{ g > \lambda \}

and thus by Lemma 1

\displaystyle  \mu(\{ h > \lambda \}) \geq \mu(\{ f > \lambda \}) + \mu(\{ g > \lambda \})

whenever {0 < \lambda < 1}. On the other hand, from the Fubini-Tonelli theorem one has the distributional identity

\displaystyle  \int_{\bf R} h = \int_0^\infty \mu(\{h > \lambda\})\ d\lambda

(and similarly for {f,g}, but with {\lambda} restricted to {(0,1)}), and thus

\displaystyle  \int_{\bf R} h \geq \int_{\bf R} f + \int_{\bf R} g.

The claim then follows from the weighted arithmetic mean-geometric mean inequality {(1-\theta) x + \theta y \geq x^{1-\theta} y^\theta}, applied with {x := \frac{1}{1-\theta} \int_{\bf R} f} and {y := \frac{1}{\theta} \int_{\bf R} g}.

In this post, I wanted to record the simple observation (which appears in this paper of Leonardi and Masnou in the case of the Heisenberg group, but may have also been stated elsewhere in the literature) that the above argument carries through without much difficulty to the nilpotent setting, to give a nilpotent Brunn-Minkowski inequality:

Theorem 3 (Nilpotent Brunn-Minkowski) Let {G} be a connected, simply connected nilpotent Lie group of (topological) dimension {d}, and let {A, B} be bounded open subsets of {G}. Let {\mu} be a Haar measure on {G} (note that nilpotent groups are unimodular, so there is no distinction between left and right Haar measure). Then

\displaystyle  \mu(A \cdot B)^{1/d} \geq \mu(A)^{1/d} + \mu(B)^{1/d}. \ \ \ \ \ (4)

Here of course {A \cdot B := \{ ab: a \in A, b \in B \}} is the product set of {A} and {B}.

Indeed, by repeating the previous arguments, the nilpotent Brunn-Minkowski inequality will follow from

Theorem 4 (Nilpotent Prékopa-Leindler inequality) Let {G} be a connected, simply connected nilpotent Lie group of topological dimension {d} with a Haar measure {\mu}. Let {0 < \theta < 1}, and let {f, g, h: G \rightarrow {\bf R}} be non-negative measurable functions obeying the inequality

\displaystyle  h(xy) \geq f(x)^{1-\theta} g(y)^\theta \ \ \ \ \ (5)

for all {x,y \in G}. Then we have

\displaystyle  \int_G h\ d\mu \geq \frac{1}{(1-\theta)^{d(1-\theta)} \theta^{d\theta}} (\int_G f\ d\mu)^{1-\theta} (\int_G g\ d\mu)^\theta. \ \ \ \ \ (6)

To prove the nilpotent Prékopa-Leindler inequality, the key observation is that this inequality not only tensorises; it splits with respect to short exact sequences. Indeed, suppose one has a short exact sequence

\displaystyle  0 \rightarrow K \rightarrow G \rightarrow H \rightarrow 0

of connected, simply connected nilpotent Lie groups. The adjoint action of the connected group {G} on {K} acts nilpotently on the Lie algebra of {K} and is thus unimodular. Because of this, we can split a Haar measure {\mu_G} on {G} into Haar measures {\mu_K, \mu_H} on {K, H} respectively so that we have the Fubini-Tonelli formula

\displaystyle  \int_G f(g)\ d\mu_G(g) = \int_H F(h)\ d\mu_H(h)

for any measurable {f: G \rightarrow {\bf R}^+}, where {F(h)} is defined by the formula

\displaystyle F(h) := \int_K f(kg) d\mu_K(k) = \int_K f(gk)\ d\mu_K(k)

for any coset representative {g \in G} of {h} (the choice of {g} is not important, thanks to unimodularity of the conjugation action). It is then not difficult to repeat the proof of tensorisation (relying heavily on the unimodularity of conjugation) to conclude that the nilpotent Prékopa-Leindler inequality for {H} and {K} implies the Prékopa-Leindler inequality for {G}; we leave this as an exercise to the interested reader.

Now if {G} is a connected simply connected Lie group, then the abelianisation {G/[G,G]} is connected and simply connected and thus isomorphic to a vector space. This implies that {[G,G]} is a retract of {G} and is thus also connected and simply connected. From this and an induction on the step of the nilpotent group, we see that the nilpotent Prékopa-Leindler inequality follows from the abelian case, which we have already established in Theorem 2.

Remark 1 Some connected, simply connected nilpotent groups {G} (and specifically, the Carnot groups) can be equipped with a one-parameter family of dilations {x \mapsto \lambda \cdot x}, which are automorphisms of {G} that dilate the Haar measure by the formula

\displaystyle  \mu( \lambda \cdot E ) = \lambda^D \mu(E)

for all measurable {E \subset G} and some integer {D}, called the homogeneous dimension of {G}, which is typically larger than the topological dimension. For instance, in the case of the Heisenberg group

\displaystyle  G := \begin{pmatrix} 1 & {\bf R} & {\bf R} \\ 0 & 1 & {\bf R} \\ 0 & 0 & 1 \end{pmatrix},

which has topological dimension {d=3}, the natural family of dilations is given by

\displaystyle  \lambda: \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix} \mapsto \begin{pmatrix} 1 & \lambda x & \lambda^2 z \\ 0 & 1 & \lambda y \\ 0 & 0 & 1 \end{pmatrix}

with homogeneous dimension {D=4}. Because the two notions {d, D} of dimension are usually distinct in the nilpotent case, it is no longer helpful to try to use these dilations to simplify the proof of the Brunn-Minkowski inequality, in contrast to the Euclidean case. This is why we avoided using dilations in the preceding discussion. It is natural to wonder whether one could replace {d} by {D} in (4), but it can be easily shown that the exponent {d} is best possible (an observation that essentially appeared first in this paper of Monti). Indeed, working in the Heisenberg group for sake of concreteness, consider the set

\displaystyle  A := \{ \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix}: |x|, |y| \leq N, |z| \leq N^{10} \}

for some large parameter {N}. This set has measure {8N^{12}} using the standard Haar measure on {G}. The product set {A \cdot A} is contained in

\displaystyle  \{ \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix}: |x|, |y| \leq 2N, |z| \leq 2N^{10} + O(N^2) \}

and thus has measure at most {64 N^{12} + O(N^4) = (2^3 + o(1)) \mu(A)}. This already shows that the exponent in (4) cannot be improved beyond {d=3}; note that the homogeneous dimension {D=4} is making its presence known in the {O(N^4)} term in the measure of {A \cdot A}, but this is a lower order term only.
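The containment just used is also easy to test numerically; in the Python sketch below the parameter {N} is taken small (so that floating point issues do not interfere), and the group law is written in the coordinates {(x,y,z)} of the upper triangular matrices above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10.0

def heis_mult(p, q):
    """Product in the Heisenberg group in coordinates (x, y, z)."""
    x, y, z = p
    u, v, w = q
    return (x + u, y + v, z + w + x * v)

overshoot = 0.0
for _ in range(10 ** 4):
    p = (rng.uniform(-N, N), rng.uniform(-N, N), rng.uniform(-N**10, N**10))
    q = (rng.uniform(-N, N), rng.uniform(-N, N), rng.uniform(-N**10, N**10))
    x, y, z = heis_mult(p, q)
    assert abs(x) <= 2 * N and abs(y) <= 2 * N and abs(z) <= 2 * N**10 + N**2
    overshoot = max(overshoot, abs(z) - 2 * N**10)

print(overshoot, N**2, N**10)   # the z-overshoot is at most N^2, far below the main scale N^10
```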

It is somewhat unfortunate that the nilpotent Brunn-Minkowski inequality is adapted to the topological dimension rather than the homogeneous one, because it means that some of the applications of the inequality (such as the application to isoperimetric inequalities mentioned at the start of the post) break down. (Indeed, the topic of isoperimetric inequalities for the Heisenberg group is a subtle one, with many naive formulations of the inequality being false. See the paper of Monti for more discussion.)

Remark 2 The inequality can be extended to non-simply-connected connected nilpotent groups {G}, if {d} is now set to the dimension of the largest simply connected quotient of {G}. It seems to me that this is the best one can do in general; for instance, if {G} is a torus, then the inequality fails for any {d>0}, as can be seen by setting {A=B=G}.

Remark 3 Specialising the nilpotent Brunn-Minkowski inequality to the case {A=B}, we conclude that

\displaystyle  \mu(A \cdot A) \geq 2^d \mu(A).

This inequality actually has a much simpler proof (attributed to Tsachik Gelander in this paper of Hrushovski, as pointed out to me by Emmanuel Breuillard): one can show that for a connected, simply connected nilpotent Lie group {G}, the exponential map {\exp: {\mathfrak g} \rightarrow G} is a measure-preserving homeomorphism, for some choice of Haar measure {\mu_{{\mathfrak g}}} on {{\mathfrak g}}, so it suffices to show that

\displaystyle  \mu_{{\mathfrak g}}(\log(A \cdot A)) \geq 2^d \mu_{{\mathfrak g}}(\log A).

But {A \cdot A} contains all the squares {\{a^2: a \in A \}} of {A}, so {\log(A \cdot A)} contains the isotropic dilation {2 \cdot \log A}, and the claim follows. Note that if we set {A} to be a small ball around the origin, we can modify this argument to give another demonstration of why the topological dimension {d} cannot be replaced with any larger exponent in (4).

One may tentatively conjecture that the inequality {\mu(A \cdot A) \geq 2^d \mu(A)} in fact holds in all unimodular connected, simply connected Lie groups {G}, and all bounded open subsets {A} of {G}; I do not know if this bound is always true, however.

Hilbert’s fifth problem concerns the minimal hypotheses one needs to place on a topological group {G} to ensure that it is actually a Lie group. In the previous set of notes, we saw that one could reduce the regularity hypothesis imposed on {G} to a “{C^{1,1}}” condition, namely that there was an open neighbourhood of the identity in {G} that was isomorphic (as a local group) to an open subset {V} of a Euclidean space {{\bf R}^d} with identity element {0}, and with group operation {\ast} obeying the asymptotic

\displaystyle  x \ast y = x + y + O(|x| |y|)

for sufficiently small {x,y}. We will call such local groups {(V,\ast)} {C^{1,1}} local groups.

We now reduce the regularity hypothesis further, to one in which there is no explicit Euclidean space that is initially attached to {G}. Of course, Lie groups are still locally Euclidean, so if the hypotheses on {G} do not involve any explicit Euclidean spaces, then one must somehow build such spaces from other structures. One way to do so is to exploit an ambient space with Euclidean or Lie structure that {G} is embedded or immersed in. A trivial example of this is provided by the following basic fact from linear algebra:

Lemma 1 If {V} is a finite-dimensional vector space (i.e. it is isomorphic to {{\bf R}^d} for some {d}), and {W} is a linear subspace of {V}, then {W} is also a finite-dimensional vector space.

We will establish a non-linear version of this statement, known as Cartan’s theorem. Recall that a subset {S} of a {d}-dimensional smooth manifold {M} is a {d'}-dimensional smooth (embedded) submanifold of {M} for some {0 \leq d' \leq d} if for every point {x \in S} there is a smooth coordinate chart {\phi: U \rightarrow V} of a neighbourhood {U} of {x} in {M} that maps {x} to {0}, such that {\phi(U \cap S) = V \cap {\bf R}^{d'}}, where we identify {{\bf R}^{d'} \equiv {\bf R}^{d'} \times \{0\}^{d-d'}} with a subspace of {{\bf R}^d}. Informally, {S} locally sits inside {M} the same way that {{\bf R}^{d'}} sits inside {{\bf R}^d}.

Theorem 2 (Cartan’s theorem) If {H} is a (topologically) closed subgroup of a Lie group {G}, then {H} is a smooth submanifold of {G}, and is thus also a Lie group.

Note that the hypothesis that {H} is closed is essential; for instance, the rationals {{\bf Q}} are a subgroup of the (additive) group of reals {{\bf R}}, but the former is not a Lie group even though the latter is.

Exercise 1 Let {H} be a subgroup of a locally compact group {G}. Show that {H} is closed in {G} if and only if it is locally compact.

A variant of the above results is provided by using (faithful) representations instead of embeddings. Again, the linear version is trivial:

Lemma 3 If {V} is a finite-dimensional vector space, and {W} is another vector space with an injective linear transformation {\rho: W \rightarrow V} from {W} to {V}, then {W} is also a finite-dimensional vector space.

Here is the non-linear version:

Theorem 4 (von Neumann’s theorem) If {G} is a Lie group, and {H} is a locally compact group with an injective continuous homomorphism {\rho: H \rightarrow G}, then {H} also has the structure of a Lie group.

Actually, it will suffice for the homomorphism {\rho} to be locally injective rather than injective; related to this, von Neumann’s theorem localises to the case when {H} is a local group rather than a group. The requirement that {H} be locally compact is necessary, for much the same reason that the requirement that {H} be closed was necessary in Cartan’s theorem.

Example 1 Let {G = ({\bf R}/{\bf Z})^2} be the two-dimensional torus, let {H = {\bf R}}, and let {\rho: H \rightarrow G} be the map {\rho(x) := (x,\alpha x)}, where {\alpha \in {\bf R}} is a fixed real number. Then {\rho} is a continuous homomorphism which is locally injective, and is even globally injective if {\alpha} is irrational, and so Theorem 4 is consistent with the fact that {H} is a Lie group. On the other hand, note that when {\alpha} is irrational, then {\rho(H)} is not closed; and so Theorem 4 does not follow immediately from Theorem 2 in this case. (We will see, though, that Theorem 4 follows from a local version of Theorem 2.)
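As a quick numerical illustration of the non-closedness in the irrational case: the image {\rho(H)} winds densely around the torus, coming arbitrarily close to points that it never attains. The sketch below makes the arbitrary choices {\alpha = \sqrt{2}} and target point {(1/2,1/2)}; the printed minimum distance shrinks as the range of {x} grows.

```python
import numpy as np

alpha = np.sqrt(2.0)                 # an irrational slope
target = np.array([0.5, 0.5])        # a point not on the orbit rho(R)

xs = np.arange(0.0, 5000.0, 0.01)
pts = np.stack([xs, alpha * xs], axis=1) % 1.0    # rho(x) = (x, alpha x) mod 1

# Distance on the torus (R/Z)^2: take the shorter way around in each coordinate.
diff = np.abs(pts - target)
diff = np.minimum(diff, 1.0 - diff)
print(np.linalg.norm(diff, axis=1).min())    # shrinks towards 0 as the range of x grows
```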

As a corollary of Theorem 4, we observe that any locally compact Hausdorff group {H} with a faithful linear representation, i.e. a continuous injective homomorphism from {H} into a linear group such as {GL_n({\bf R})} or {GL_n({\bf C})}, is necessarily a Lie group. This suggests a representation-theoretic approach to Hilbert’s fifth problem. While this approach does not seem to readily solve the entire problem, it can be used to establish a number of important special cases with a well-understood representation theory, such as the compact case or the abelian case (for which the requisite representation theory is given by the Peter-Weyl theorem and Pontryagin duality respectively). We will discuss these cases further in later notes.

In all of these cases, one is not really building up Euclidean or Lie structure completely from scratch, because there is already a Euclidean or Lie structure present in another object in the hypotheses. Now we turn to results that can create such structure assuming only what is ostensibly a weaker amount of structure. In the linear case, one example of this is the following classical result in the theory of topological vector spaces.

Theorem 5 Let {V} be a locally compact Hausdorff topological vector space. Then {V} is isomorphic (as a topological vector space) to {{\bf R}^d} for some finite {d}.

Remark 1 The Banach-Alaoglu theorem asserts that in a normed vector space {V}, the closed unit ball in the dual space {V^*} is always compact in the weak-* topology. Of course, this dual space {V^*} may be infinite-dimensional. This however does not contradict the above theorem, because the closed unit ball is not a neighbourhood of the origin in the weak-* topology (it is only a neighbourhood with respect to the strong topology).

The full non-linear analogue of this theorem would be the Gleason-Yamabe theorem, which we are not yet ready to prove in this set of notes. However, by using methods similar to those used to prove Cartan’s theorem and von Neumann’s theorem, one can obtain a partial non-linear analogue which requires an additional hypothesis of a special type of metric, which we will call a Gleason metric:

Definition 6 Let {G} be a topological group. A Gleason metric on {G} is a left-invariant metric {d: G \times G \rightarrow {\bf R}^+} which generates the topology on {G} and obeys the following properties for some constant {C>0}, writing {\|g\|} for {d(g,\hbox{id})}:

  • (Escape property) If {g \in G} and {n \geq 1} is such that {n \|g\| \leq \frac{1}{C}}, then {\|g^n\| \geq \frac{1}{C} n \|g\|}.
  • (Commutator estimate) If {g, h \in G} are such that {\|g\|, \|h\| \leq \frac{1}{C}}, then

    \displaystyle  \|[g,h]\| \leq C \|g\| \|h\|, \ \ \ \ \ (1)

    where {[g,h] := g^{-1}h^{-1}gh} is the commutator of {g} and {h}.

Exercise 2 Let {G} be a topological group that contains a neighbourhood of the identity isomorphic to a {C^{1,1}} local group. Show that {G} admits at least one Gleason metric.
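To get some feel for these axioms, one can experiment numerically with a concrete compact group such as {U_2({\bf C})}, equipped with the bi-invariant (and in particular left-invariant) metric {d(g,h) := \|g-h\|_{op}}. The sketch below (with arbitrarily chosen scales, and intended only as an illustration rather than a verification of the axioms) shows the escape ratio {\|g^n\|/(n\|g\|)} staying comparable to {1}, and the commutator ratio staying bounded.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def small_unitary(scale):
    """A random element of U(2) at distance roughly `scale` from the identity."""
    X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return expm(scale * (X - X.conj().T) / 2)     # exponential of a skew-Hermitian matrix

norm = lambda g: np.linalg.norm(g - np.eye(2), ord=2)   # ||g|| = d(g, id), operator norm

g, h = small_unitary(1e-3), small_unitary(1e-3)

# Escape property: while n||g|| remains small, ||g^n|| grows essentially linearly in n.
for n in [1, 10, 100]:
    print(n, norm(np.linalg.matrix_power(g, n)) / (n * norm(g)))

# Commutator estimate: ||[g,h]|| is bounded by a constant times ||g|| ||h||.
comm = np.conj(g).T @ np.conj(h).T @ g @ h       # g^{-1} h^{-1} g h, using g^{-1} = g^* for unitary g
print(norm(comm) / (norm(g) * norm(h)))
```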

Theorem 7 (Building Lie structure from Gleason metrics) Let {G} be a locally compact group that has a Gleason metric. Then {G} is isomorphic to a Lie group.

We will rely on Theorem 7 to solve Hilbert’s fifth problem; this theorem reduces the task of establishing Lie structure on a locally compact group to that of building a metric with suitable properties. Thus, much of the remainder of the solution of Hilbert’s fifth problem will now be focused on the problem of how to construct good metrics on a locally compact group.

In all of the above results, a key idea is to use one-parameter subgroups to convert from the nonlinear setting to the linear setting. Recall from the previous notes that in a Lie group {G}, the one-parameter subgroups are in one-to-one correspondence with the elements of the Lie algebra {{\mathfrak g}}, which is a vector space. In a general topological group {G}, the concept of a one-parameter subgroup (i.e. a continuous homomorphism from {{\bf R}} to {G}) still makes sense; the main difficulties are then to show that the space of such subgroups continues to form a vector space, and that the associated exponential map {\exp: \phi \mapsto \phi(1)} is still a local homeomorphism near the origin.

Exercise 3 The purpose of this exercise is to illustrate the perspective that a topological group can be viewed as a non-linear analogue of a vector space. Let {G, H} be locally compact groups. For technical reasons we assume that {G, H} are both {\sigma}-compact and metrisable.

  • (i) (Open mapping theorem) Show that if {\phi: G \rightarrow H} is a continuous homomorphism which is surjective, then it is open (i.e. the image of open sets is open). (Hint: mimic the proof of the open mapping theorem for Banach spaces, as discussed for instance in these notes. In particular, take advantage of the Baire category theorem.)
  • (ii) (Closed graph theorem) Show that if a homomorphism {\phi: G \rightarrow H} is closed (i.e. its graph {\{ (g, \phi(g)): g \in G \}} is a closed subset of {G \times H}), then it is continuous. (Hint: mimic the derivation of the closed graph theorem from the open mapping theorem in the Banach space case, as again discussed in these notes.)
  • (iii) Let {\phi: G \rightarrow H} be a homomorphism, and let {\rho: H \rightarrow K} be a continuous injective homomorphism into another Hausdorff topological group {K}. Show that {\phi} is continuous if and only if {\rho \circ \phi} is continuous.
  • (iv) Relax the condition of metrisability to that of being Hausdorff. (Hint: Now one cannot use the Baire category theorem for metric spaces; but there is an analogue of this theorem for locally compact Hausdorff spaces.)


This fall (starting Monday, September 26), I will be teaching a graduate topics course which I have entitled “Hilbert’s fifth problem and related topics.” The course is going to focus on three related topics:

  • Hilbert’s fifth problem on the topological description of Lie groups, as well as the closely related (local) classification of locally compact groups (the Gleason-Yamabe theorem);
  • Approximate groups in nonabelian groups, and their classification via the Gleason-Yamabe theorem (this is very recent work of Emmanuel Breuillard, Ben Green, Tom Sanders, and myself, building upon earlier work of Hrushovski);
  • Gromov’s theorem on groups of polynomial growth, as proven via the classification of approximate groups (as well as some consequences to fundamental groups of Riemannian manifolds).

I have already blogged about these topics repeatedly in the past (particularly with regard to Hilbert’s fifth problem), and I intend to recycle some of that material in the lecture notes for this course.

The above three families of results exemplify two broad principles (part of what I like to call “the dichotomy between structure and randomness“):

  • (Rigidity) If a group-like object exhibits a weak amount of regularity, then it (or a large portion thereof) often automatically exhibits a strong amount of regularity as well;
  • (Structure) This strong regularity manifests itself either as Lie type structure (in continuous settings) or nilpotent type structure (in discrete settings). (In some cases, “nilpotent” should be replaced by sister properties such as “abelian“, “solvable“, or “polycyclic“.)

Let me illustrate what I mean by these two principles with two simple examples, one in the continuous setting and one in the discrete setting. We begin with a continuous example. Given an {n \times n} complex matrix {A \in M_n({\bf C})}, define the matrix exponential {\exp(A)} of {A} by the formula

\displaystyle  \exp(A) := \sum_{k=0}^\infty \frac{A^k}{k!} = 1 + A + \frac{1}{2!} A^2 + \frac{1}{3!} A^3 + \ldots

which can easily be verified to be an absolutely convergent series.

Exercise 1 Show that the map {A \mapsto \exp(A)} is a real analytic (and even complex analytic) map from {M_n({\bf C})} to {M_n({\bf C})}, and obeys the restricted homomorphism property

\displaystyle  \exp(sA) \exp(tA) = \exp((s+t)A) \ \ \ \ \ (1)

for all {A \in M_n({\bf C})} and {s,t \in {\bf C}}.
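As a quick numerical check of the absolute convergence of this series and of the restricted homomorphism property (1), here is a short sketch (the matrix {A} and the parameters {s,t} below are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Partial sums of the defining series converge rapidly to the matrix exponential...
partial, term = np.zeros((3, 3), dtype=complex), np.eye(3, dtype=complex)
for k in range(1, 30):
    partial += term              # add A^{k-1}/(k-1)!
    term = term @ A / k          # next term A^k/k!
print(np.linalg.norm(partial - expm(A)))

# ...and exp(sA) exp(tA) = exp((s+t)A), since sA and tA commute.
s, t = 0.7, -1.3
print(np.linalg.norm(expm(s * A) @ expm(t * A) - expm((s + t) * A)))
```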

Proposition 1 (Rigidity and structure of matrix homomorphisms) Let {n} be a natural number. Let {GL_n({\bf C})} be the group of invertible {n \times n} complex matrices. Let {\Phi: {\bf R} \rightarrow GL_n({\bf C})} be a map obeying two properties:

  • (Group-like object) {\Phi} is a homomorphism, thus {\Phi(s) \Phi(t) = \Phi(s+t)} for all {s,t \in {\bf R}}.
  • (Weak regularity) The map {t \mapsto \Phi(t)} is continuous.

Then:

  • (Strong regularity) The map {t \mapsto \Phi(t)} is smooth (i.e. infinitely differentiable). In fact it is even real analytic.
  • (Lie-type structure) There exists a (unique) complex {n \times n} matrix {A} such that {\Phi(t) = \exp(tA)} for all {t \in {\bf R}}.

Proof: Let {\Phi} be as above. Let {\epsilon > 0} be a small number (depending only on {n}). By the homomorphism property, {\Phi(0) = 1} (where we use {1} here to denote the identity element of {GL_n({\bf C})}), and so by continuity we may find a small {t_0>0} such that {\Phi(t) = 1 + O(\epsilon)} for all {t \in [-t_0,t_0]} (we use some arbitrary norm here on the space of {n \times n} matrices, and allow implied constants in the {O()} notation to depend on {n}).

The map {A \mapsto \exp(A)} is real analytic and, by the inverse function theorem, is a diffeomorphism near {0}. Thus, if {\epsilon} is small enough, we can find a matrix {B} of size {O(\epsilon)} such that {\Phi(t_0) = \exp(B)}. By the homomorphism property and (1), we thus have

\displaystyle  \Phi(t_0/2)^2 = \Phi(t_0) = \exp(B) = \exp(B/2)^2.

On the other hand, by another application of the inverse function theorem we see that the squaring map {A \mapsto A^2} is a diffeomorphism near {1} in {GL_n({\bf C})}, and thus (if {\epsilon} is small enough)

\displaystyle  \Phi(t_0/2) = \exp(B/2).

We may iterate this argument (for a fixed, but small, value of {\epsilon}) and conclude that

\displaystyle  \Phi(t_0/2^k) = \exp(B/2^k)

for all {k = 0,1,2,\ldots}. By the homomorphism property and (1) we thus have

\displaystyle  \Phi(qt_0) = \exp(qB)

whenever {q} is a dyadic rational, i.e. a rational of the form {a/2^k} for some integer {a} and natural number {k}. By continuity we thus have

\displaystyle  \Phi(st_0) = \exp(sB)

for all real {s}. Setting {A := B/t_0} we conclude that

\displaystyle  \Phi(t) = \exp(tA)

for all real {t}, which gives existence of the representation and also real analyticity and smoothness. Finally, uniqueness of the representation {\Phi(t) = \exp(tA)} follows from the identity

\displaystyle  A = \frac{d}{dt} \exp(tA)|_{t=0}.

\Box
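The above proof is essentially an algorithm: sample {\Phi} at a single small time {t_0}, take a matrix logarithm to find {B}, and rescale to obtain {A}. Here is a minimal numerical sketch along these lines (the hidden matrix, the value of {t_0}, and the test times are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(4)
A_hidden = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Phi = lambda t: expm(t * A_hidden)   # a continuous homomorphism R -> GL_3(C), treated as a black box

# Following the proof: write Phi(t_0) = exp(B) for a small matrix B (the matrix logarithm
# is the local inverse of exp near the identity), and then set A := B / t_0.
t0 = 1e-3
A = logm(Phi(t0)) / t0

# The recovered A reproduces Phi at all times, not just small ones (up to rounding error).
for t in [0.5, 2.0, -3.0]:
    print(t, np.linalg.norm(Phi(t) - expm(t * A)))
```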

Exercise 2 Generalise Proposition 1 by replacing the hypothesis that {\Phi} is continuous with the hypothesis that {\Phi} is Lebesgue measurable (Hint: use the Steinhaus theorem.). Show that the proposition fails (assuming the axiom of choice) if this hypothesis is omitted entirely.

Note how one needs both the group-like structure and the weak regularity in combination in order to ensure the strong regularity; neither is sufficient on its own. We will see variants of the above basic argument throughout the course. Here, the task of obtaining smooth (or real analytic) structure was relatively easy, because we could borrow the smooth (or real analytic) structure of the domain {{\bf R}} and range {M_n({\bf C})}; but, somewhat remarkably, we shall see that one can still build such smooth or analytic structures even when none of the original objects have any such structure to begin with.

Now we turn to a second illustration of the above principles, namely Jordan’s theorem, which uses a discreteness hypothesis to upgrade Lie type structure to nilpotent (and in this case, abelian) structure. We shall formulate Jordan’s theorem in a slightly stilted fashion in order to emphasise the adherence to the above-mentioned principles.

Theorem 2 (Jordan’s theorem) Let {G} be an object with the following properties:

  • (Group-like object) {G} is a group.
  • (Discreteness) {G} is finite.
  • (Lie-type structure) {G} is contained in {U_n({\bf C})} (the group of unitary {n \times n} matrices) for some {n}.

Then there is a subgroup {G'} of {G} such that

  • ({G'} is close to {G}) The index {|G/G'|} of {G'} in {G} is {O_n(1)} (i.e. bounded by {C_n} for some quantity {C_n} depending only on {n}).
  • (Nilpotent-type structure) {G'} is abelian.

A key observation in the proof of Jordan’s theorem is that if two unitary elements {g, h \in U_n({\bf C})} are close to the identity, then their commutator {[g,h] = g^{-1}h^{-1}gh} is even closer to the identity (in, say, the operator norm {\| \|_{op}}). Indeed, since multiplication on the left or right by unitary elements does not affect the operator norm, we have

\displaystyle  \| [g,h] - 1 \|_{op} = \| gh - hg \|_{op}

\displaystyle  = \| (g-1)(h-1) - (h-1)(g-1) \|_{op}

and so by the triangle inequality

\displaystyle  \| [g,h] - 1 \|_{op} \leq 2 \|g-1\|_{op} \|h-1\|_{op}. \ \ \ \ \ (2)
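This inequality is easy to test numerically; in the short sketch below (with arbitrary dimension and random unitary matrices, not necessarily close to the identity), the ratio of the left-hand side of (2) to the right-hand side stays at most {1}, as it must:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
op = lambda M: np.linalg.norm(M, ord=2)    # operator norm
I = np.eye(3)

worst = 0.0
for _ in range(1000):
    # random unitaries (exponentials of random skew-Hermitian matrices)
    X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    Y = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    g, h = expm((X - X.conj().T) / 2), expm((Y - Y.conj().T) / 2)
    lhs = op(np.conj(g).T @ np.conj(h).T @ g @ h - I)       # ||[g,h] - 1||
    rhs = 2 * op(g - I) * op(h - I)
    worst = max(worst, lhs / rhs)
print(worst)    # stays at most 1, consistent with (2)
```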

Now we can prove Jordan’s theorem.

Proof: We induct on {n}, the case {n=1} being trivial. Suppose first that {G} contains a central element {g} which is not a multiple of the identity. Then, by definition, {G} is contained in the centraliser {Z(g)} of {g}, which by the spectral theorem is isomorphic to a product {U_{n_1}({\bf C}) \times \ldots \times U_{n_k}({\bf C})} of smaller unitary groups. Projecting {G} to each of these factor groups and applying the induction hypothesis, we obtain the claim.

Thus we may assume that {G} contains no central elements other than multiples of the identity. Now pick a small {\epsilon > 0} (one could take {\epsilon=\frac{1}{10n}} in fact) and consider the subgroup {G'} of {G} generated by those elements of {G} that are within {\epsilon} of the identity (in the operator norm). By considering a maximal {\epsilon}-net of {G} we see that {G'} has index at most {O_{n,\epsilon}(1)} in {G}. By arguing as before, we may assume that {G'} has no central elements other than multiples of the identity.

If {G'} consists only of multiples of the identity, then we are done. If not, take an element {g} of {G'} that is not a multiple of the identity, and which is as close as possible to the identity (here is where we crucially use that {G} is finite). By (2), we see that if {\epsilon} is sufficiently small depending on {n}, and if {h} is one of the generators of {G'}, then {[g,h]} lies in {G'} and is closer to the identity than {g}, and is thus a multiple of the identity. On the other hand, {[g,h]} has determinant {1}. Given that it is so close to the identity, it must therefore be the identity (if {\epsilon} is small enough). In other words, {g} is central in {G'}, and is thus a multiple of the identity. But this contradicts the hypothesis that there are no central elements other than multiples of the identity, and we are done. \Box

Commutator estimates such as (2) will play a fundamental role in many of the arguments we will see in this course; as we saw above, such estimates combine very well with a discreteness hypothesis, but will also be very useful in the continuous setting.

Exercise 3 Generalise Jordan’s theorem to the case when {G} is a finite subgroup of {GL_n({\bf C})} rather than of {U_n({\bf C})}. (Hint: The elements of {G} are not necessarily unitary, and thus do not necessarily preserve the standard Hilbert inner product of {{\bf C}^n}. However, if one averages that inner product by the finite group {G}, one obtains a new inner product on {{\bf C}^n} that is preserved by {G}, which allows one to conjugate {G} to a subgroup of {U_n({\bf C})}. This averaging trick is (a small) part of Weyl’s unitary trick in representation theory.)

Exercise 4 (Inability to discretise nonabelian Lie groups) Show that if {n \geq 3}, then the orthogonal group {O_n({\bf R})} cannot contain arbitrarily dense finite subgroups, in the sense that there exists an {\epsilon = \epsilon_n > 0} depending only on {n} such that for every finite subgroup {G} of {O_n({\bf R})}, there exists a ball of radius {\epsilon} in {O_n({\bf R})} (with, say, the operator norm metric) that is disjoint from {G}. What happens in the {n=2} case?

Remark 1 More precise classifications of the finite subgroups of {U_n({\bf C})} are known, particularly in low dimensions. For instance, one can show that the only finite subgroups of {SO_3({\bf R})} (which {SU_2({\bf C})} is a double cover of) are isomorphic to either a cyclic group, a dihedral group, or the rotation group of one of the Platonic solids.


One of the most well known problems from ancient Greek mathematics was that of trisecting an angle by straightedge and compass, which was eventually proven impossible in 1837 by Pierre Wantzel, using methods from Galois theory.

Formally, one can set up the problem as follows. Define a configuration to be a finite collection {{\mathcal C}} of points, lines, and circles in the Euclidean plane. Define a construction step to be one of the following operations to enlarge the collection {{\mathcal C}}:

  • (Straightedge) Given two distinct points {A, B} in {{\mathcal C}}, form the line {\overline{AB}} that connects {A} and {B}, and add it to {{\mathcal C}}.
  • (Compass) Given two distinct points {A, B} in {{\mathcal C}}, and given a third point {O} in {{\mathcal C}} (which may or may not equal {A} or {B}), form the circle with centre {O} and radius equal to the length {|AB|} of the line segment joining {A} and {B}, and add it to {{\mathcal C}}.
  • (Intersection) Given two distinct curves {\gamma, \gamma'} in {{\mathcal C}} (thus {\gamma} is either a line or a circle in {{\mathcal C}}, and similarly for {\gamma'}), select a point {P} that is common to both {\gamma} and {\gamma'} (there are at most two such points), and add it to {{\mathcal C}}.

We say that a point, line, or circle is constructible by straightedge and compass from a configuration {{\mathcal C}} if it can be obtained from {{\mathcal C}} after applying a finite number of construction steps.

Problem 1 (Angle trisection) Let {A, B, C} be distinct points in the plane. Is it always possible to construct by straightedge and compass from {A,B,C} a line {\ell} through {A} that trisects the angle {\angle BAC}, in the sense that the angle between {\ell} and {BA} is one third of the angle {\angle BAC}?

Thanks to Wantzel’s result, the answer to this problem is known to be “no” in general; a generic angle {\angle BAC} cannot be trisected by straightedge and compass. (On the other hand, some special angles can certainly be trisected by straightedge and compass, such as a right angle. Also, one can certainly trisect generic angles using other methods than straightedge and compass; see the Wikipedia page on angle trisection for some examples of this.)

The impossibility of angle trisection stands in sharp contrast to the easy construction of angle bisection via straightedge and compass, which we briefly review as follows:

  1. Start with three points {A, B, C}.
  2. Form the circle {c_0} with centre {A} and radius {AB}, and intersect it with the line {\overline{AC}}. Let {D} be the point in this intersection that lies on the same side of {A} as {C}. ({D} may well be equal to {C}).
  3. Form the circle {c_1} with centre {B} and radius {AB}, and the circle {c_2} with centre {D} and radius {AB}. Let {E} be the point of intersection of {c_1} and {c_2} that is not {A}.
  4. The line {\ell := \overline{AE}} will then bisect the angle {\angle BAC}.
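As a quick sanity check on this construction, here is a short numerical sketch that carries out the steps above for a sample angle and confirms that the resulting line {\ell} bisects it (the specific angle of {0.9} radians is an arbitrary choice):

```python
import numpy as np

def circle_circle(P, Q, r):
    """Both intersection points of the two radius-r circles centred at P and Q."""
    d = np.linalg.norm(Q - P)
    mid = (P + Q) / 2
    h = np.sqrt(r**2 - (d / 2)**2)                  # distance from the midpoint to each intersection
    n = np.array([-(Q - P)[1], (Q - P)[0]]) / d     # unit normal to the segment PQ
    return mid + h * n, mid - h * n

A, B = np.array([0.0, 0.0]), np.array([1.0, 0.0])
theta = 0.9                                          # the angle BAC to be bisected
C = 2.0 * np.array([np.cos(theta), np.sin(theta)])

r = np.linalg.norm(B - A)                            # radius |AB|
D = A + r * (C - A) / np.linalg.norm(C - A)          # step 2: c0 meets line AC, same side as C
P1, P2 = circle_circle(B, D, r)                      # step 3: c1 meets c2 in A and E
E = P1 if np.linalg.norm(P1 - A) > np.linalg.norm(P2 - A) else P2
bisector_angle = np.arctan2(E[1] - A[1], E[0] - A[0])  # step 4: direction of the line l = AE
print(bisector_angle, theta / 2)                     # these agree up to rounding
```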

The key difference between angle trisection and angle bisection ultimately boils down to the following trivial number-theoretic fact:

Lemma 2 There is no power of {2} that is evenly divisible by {3}.

Proof: Obvious by modular arithmetic, by induction, or by the fundamental theorem of arithmetic. \Box

In contrast, there are of course plenty of powers of {2} that are evenly divisible by {2}, and this is ultimately why angle bisection is easy while angle trisection is hard.

The standard way in which Lemma 2 is used to demonstrate the impossibility of angle trisection is via Galois theory. The implication is quite short if one knows this theory, but quite opaque otherwise. We briefly sketch the proof of this implication here, though we will not need it in the rest of the discussion. Firstly, Lemma 2 implies the following fact about field extensions.

Corollary 3 Let {F} be a field, and let {E} be an extension of {F} that can be constructed out of {F} by a finite sequence of quadratic extensions. Then {E} does not contain any cubic extensions {K} of {F}.

Proof: If {E} contained a cubic extension {K} of {F}, then the dimension of {E} over {F} would be a multiple of three. On the other hand, if {E} is obtained from {F} by a tower of quadratic extensions, then the dimension of {E} over {F} is a power of two. The claim then follows from Lemma 2. \Box

To conclude the proof, one then notes that any point, line, or circle that can be constructed from a configuration {{\mathcal C}} is definable in a field obtained from the coefficients of all the objects in {{\mathcal C}} after taking a finite number of quadratic extensions, whereas a trisection of an angle {\angle BAC} will generically only be definable in a cubic extension of the field generated by the coordinates of {A,B,C}.

The Galois theory method also allows one to obtain many other impossibility results of this type, most famously the Abel-Ruffini theorem on the insolvability of the quintic equation by radicals. For this reason (and also because of the many applications of Galois theory to number theory and other branches of mathematics), the Galois theory argument is the “right” way to prove the impossibility of angle trisection within the broader framework of modern mathematics. However, this argument has the drawback that it requires one to first understand Galois theory (or at least field theory), which is usually not presented until an advanced undergraduate algebra or number theory course, whilst the angle trisection problem requires only high-school level mathematics to formulate. Even if one is allowed to “cheat” and sweep several technicalities under the rug, one still needs to possess a fair amount of solid intuition about advanced algebra in order to appreciate the proof. (This was undoubtedly one reason why, even after Wantzel’s impossibility result was published, a large amount of effort was still expended by amateur mathematicians to try to trisect a general angle.)

In this post I would therefore like to present a different proof (or perhaps more accurately, a disguised version of the standard proof) of the impossibility of angle trisection by straightedge and compass, that avoids explicit mention of Galois theory (though it is never far beneath the surface). With “cheats”, the proof is actually quite simple and geometric (except for Lemma 2, which is still used at a crucial juncture), based on the basic geometric concept of monodromy; unfortunately, some technical work is needed to remove these cheats.

To describe the intuitive idea of the proof, let us return to the angle bisection construction, that takes a triple {A, B, C} of points as input and returns a bisecting line {\ell} as output. We iterate the construction to create a quadrisecting line {m}, via the following sequence of steps that extend the original bisection construction:

  1. Start with three points {A, B, C}.
  2. Form the circle {c_0} with centre {A} and radius {AB}, and intersect it with the line {\overline{AC}}. Let {D} be the point in this intersection that lies on the same side of {A} as {C}. ({D} may well be equal to {C}).
  3. Form the circle {c_1} with centre {B} and radius {AB}, and the circle {c_2} with centre {D} and radius {AB}. Let {E} be the point of intersection of {c_1} and {c_2} that is not {A}.
  4. Let {F} be the point on the line {\ell := \overline{AE}} which lies on {c_0}, and is on the same side of {A} as {E}.
  5. Form the circle {c_3} with centre {F} and radius {AB}. Let {G} be the point of intersection of {c_1} and {c_3} that is not {A}.
  6. The line {m := \overline{AG}} will then quadrisect the angle {\angle BAC}.

Let us fix the points {A} and {B}, but not {C}, and view {m} (as well as intermediate objects such as {D}, {c_2}, {E}, {\ell}, {F}, {c_3}, {G}) as a function of {C}.

Let us now do the following: we begin rotating {C} counterclockwise around {A}, which drags around the other objects {D}, {c_2}, {E}, {\ell}, {F}, {c_3}, {G} that were constructed from {C} accordingly. For instance, here is an early stage of this rotation process, when the angle {\angle BAC} has become obtuse:

Now for the slightly tricky bit. We are going to keep rotating {C} beyond a half-rotation of {180^\circ}, so that {\angle BAC} now becomes a reflex angle. At this point, a singularity occurs; the point {E} collides into {A}, and so there is an instant in which the line {\ell = \overline{AE}} is not well-defined. However, this turns out to be a removable singularity (and the easiest way to demonstrate this will be to tap the power of complex analysis, as complex numbers can easily route around such a singularity), and we can blast through it to the other side, giving a picture like this:

Note that we have now deviated from the original construction in that {F} and {E} are no longer on the same side of {A}; we are thus now working in a continuation of that construction rather than with the construction itself. Nevertheless, we can still work with this continuation (much as, say, one works with analytic continuations of infinite series such as {\sum_{n=1}^\infty \frac{1}{n^s}} beyond their original domain of definition).

We now keep rotating {C} around {A}. Here, {\angle BAC} is approaching a full rotation of {360^\circ}:

When {\angle BAC} reaches a full rotation, a different singularity occurs: {c_1} and {c_2} coincide. Nevertheless, this is also a removable singularity, and we blast through to beyond a full rotation:

And now {C} is back where it started, as are {D}, {c_2}, {E}, and {\ell}… but the point {F} has moved, from one intersection point of {\ell \cap c_0} to the other. As a consequence, {c_3}, {G}, and {m} have also changed, with {m} being at right angles to where it was before. (In the jargon of modern mathematics, the quadrisection construction has a non-trivial monodromy.)

But nothing stops us from rotating {C} some more. If we continue this procedure, we see that after two full rotations of {C} around {A}, all points, lines, and circles constructed from {A, B, C} have returned to their original positions. Because of this, we shall say that the quadrisection construction described above is periodic with period {2}.
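This period-{2} behaviour can also be observed numerically, by stepping {C} around {A} in small increments and tracking the constructed points continuously (at each step choosing the intersection point nearest to its previous position). In the sketch below, the direction of the line {m} (measured modulo {180^\circ}) should start near {1.4^\circ}, sit near {91.4^\circ} after one full rotation of {C}, and return to its starting value only after two full rotations. (The use of reflections to pick out the second intersection point of two congruent circles through {A} is merely a computational convenience of this sketch.)

```python
import numpy as np

def reflect(P, Q1, Q2):
    """Reflect the point P across the line through Q1 and Q2."""
    d = (Q2 - Q1) / np.linalg.norm(Q2 - Q1)
    v = P - Q1
    return Q1 + 2 * (v @ d) * d - v

A, B, r = np.array([0.0, 0.0]), np.array([1.0, 0.0]), 1.0

def quadrisection_step(C, F_prev):
    """One step of the (continued) quadrisection construction, tracking F continuously."""
    D = A + r * (C - A) / np.linalg.norm(C - A)              # step 2
    E = reflect(A, B, D)                                     # step 3: point of c1 ∩ c2 other than A
    u = (E - A) / np.linalg.norm(E - A)                      # direction of the line l = AE
    cand = [A + r * u, A - r * u]                            # the two points of l ∩ c0
    F = min(cand, key=lambda X: np.linalg.norm(X - F_prev))  # step 4, continued by continuity
    G = reflect(A, B, F)                                     # step 5: point of c1 ∩ c3 other than A
    return F, G

theta0, steps = 0.1, 4000                                    # start angle; two full rotations of C
F, m_angles = B.copy(), []
for theta in np.linspace(theta0, theta0 + 4 * np.pi, steps + 1):
    C = np.array([np.cos(theta), np.sin(theta)])
    F, G = quadrisection_step(C, F)
    m_angles.append(np.degrees(np.arctan2(G[1], G[0])) % 180.0)   # direction of m = AG, mod 180

# m starts near theta0/4, is rotated by 90 degrees after one full turn of C,
# and only returns to its original position after two full turns (period 2).
print(m_angles[0], m_angles[steps // 2], m_angles[steps])
```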

Similarly, if one performs an octisection of the angle {\angle BAC} by bisecting the quadrisection, one can verify that this octisection is periodic with period {4}; it takes four full rotations of {C} around {A} before the configuration returns to where it started. More generally, one can show

Proposition 4 Any construction of straightedge and compass from the points {A,B,C} is periodic with period equal to a power of {2}.

The reason for this, ultimately, is because any two circles or lines will intersect each other in at most two points, and so at each step of a straightedge-and-compass construction there is an ambiguity of at most {2! = 2}. Each rotation of {C} around {A} can potentially flip one of these points to the other, but then if one rotates again, the point returns to its original position, and then one can analyse the next point in the construction in the same fashion until one obtains the proposition.

But now consider a putative trisection operation, that starts with an arbitrary angle {\angle BAC} and somehow uses some sequence of straightedge and compass constructions to end up with a trisecting line {\ell}:

What is the period of this construction? If we continuously rotate {C} around {A}, we observe that a full rotation of {C} only causes the trisecting line {\ell} to rotate by a third of a full rotation (i.e. by {120^\circ}):

Because of this, we see that the period of any construction that contains {\ell} must be a multiple of {3}. But this contradicts Proposition 4 and Lemma 2.

Below the fold, I will make the above proof rigorous. Unfortunately, in doing so, I had to again leave the world of high-school mathematics, as one needs a little bit of algebraic geometry and complex analysis to resolve the issues with singularities that we saw in the above sketch. Still, I feel that at an intuitive level at least, this argument is more geometric and accessible than the Galois-theoretic argument (though anyone familiar with Galois theory will note that there is really not that much difference between the proofs, ultimately, as one has simply replaced the Galois group with a closely related monodromy group instead).

