You are currently browsing the category archive for the ‘math.GT’ category.

Bill Thurston, who made fundamental contributions to our understanding of low-dimensional manifolds and related structures, died on Tuesday, aged 65.

Perhaps Thurston’s best known achievement is the proof of the hyperbolisation theorem for Haken manifolds, which showed that 3-manifolds obeying a certain number of topological conditions could always be given a hyperbolic geometry (i.e. a Riemannian metric that made the manifold isometric to a quotient of the hyperbolic 3-space H^3).  This difficult theorem connecting the topological and geometric structure of 3-manifolds led Thurston to give his influential geometrisation conjecture, which (in principle, at least) completely classifies the topology of an arbitrary compact 3-manifold as a combination of eight model geometries (now known as Thurston model geometries).  This conjecture has many consequences, including Thurston’s hyperbolisation theorem and (most famously) the Poincaré conjecture.  Indeed, by placing that conjecture in the context of a conceptually appealing general framework, many other cases of which could already be verified, Thurston provided one of the strongest pieces of evidence towards the truth of the Poincaré conjecture, until the work of Grisha Perelman in 2002-2003 proved both the Poincaré conjecture and the geometrisation conjecture by developing Hamilton’s Ricci flow methods.  (There are now several variants of Perelman’s proof of both conjectures; in the proof of geometrisation by Bessieres, Besson, Boileau, Maillot, and Porti, Thurston’s hyperbolisation theorem is a crucial ingredient, allowing one to bypass the need for the theory of Alexandrov spaces in a key step in Perelman’s argument.)

One of my favourite results of Thurston’s is his elegant method for everting the sphere (smoothly turning a sphere S^2 in {\bf R}^3 inside out without any folds or singularities).  The fact that sphere eversion can be achieved at all is highly unintuitive, and is often referred to as Smale’s paradox, as Stephen Smale was the first to give a proof that such an eversion exists.  However, prior to Thurston’s method, the known constructions for sphere eversion were quite complicated.  Thurston’s method, relying on corrugating and then twisting the sphere, is sufficiently conceptual and geometric that it can in fact be explained quite effectively in non-technical terms, as was done in the following excellent video entitled “Outside In” and produced by the Geometry Center:

In addition to his direct mathematical research contributions, Thurston was also an amazing mathematical expositor, having the rare knack of being able to describe the process of mathematical thinking in addition to the results of that process and the intuition underlying it.  His wonderful essay “On proof and progress in mathematics”, which I highly recommend, is the quintessential instance of this; more recent examples include his many insightful questions and answers on MathOverflow.

I unfortunately never had the opportunity to meet Thurston in person (although we did correspond a few times online), but I know many mathematicians who have been profoundly influenced by him and his work.  His death is a great loss for mathematics.

A topological space {X} is said to be metrisable if one can find a metric {d: X \times X \rightarrow [0,+\infty)} on it whose open balls {B(x,r) := \{ y \in X: d(x,y) < r \}} generate the topology.

There are some obvious necessary conditions on the space {X} in order for it to be metrisable. For instance, it must be Hausdorff, since all metric spaces are Hausdorff. It must also be first countable, because every point {x} in a metric space has a countable neighbourhood base of balls {B(x,1/n)}, {n=1,2,\ldots}.

In the converse direction, being Hausdorff and first countable is not always enough to guarantee metrisability, for a variety of reasons. For instance the long line is not metrisable despite being both Hausdorff and first countable, due to a failure of paracompactness, which prevents one from gluing together the local metric structures on this line into a global one. Even after adding in paracompactness, this is still not enough; the real line with the lower limit topology (also known as the Sorgenfrey line) is Hausdorff, first countable, and paracompact, but still not metrisable (because of a failure of second countability despite being separable).

However, there is one important setting in which the Hausdorff and first countability axioms do suffice to give metrisability, and that is the setting of topological groups:

Theorem 1 (Birkhoff-Kakutani theorem) Let {G} be a topological group (i.e. a topological space that is also a group, such that the group operations {\cdot: G \times G \rightarrow G} and {()^{-1}: G \rightarrow G} are continuous). Then {G} is metrisable if and only if it is both Hausdorff and first countable.

Remark 1 It is not hard to show that a topological group is Hausdorff if and only if the singleton set {\{\hbox{id}\}} is closed. More generally, in an arbitrary topological group, it is a good exercise to show that the closure of {\{\hbox{id}\}} is always a closed normal subgroup {H} of {G}, whose quotient {G/H} is then a Hausdorff topological group. Because of this, the study of topological groups can usually be reduced immediately to the study of Hausdorff topological groups. (Indeed, in many texts, topological groups are automatically understood to be an abbreviation for “Hausdorff topological group”.)

The standard proof of the Birkhoff-Kakutani theorem (which we have taken from this book of Montgomery and Zippin) relies on the following Urysohn-type lemma:

Lemma 2 (Urysohn-type lemma) Let {G} be a Hausdorff first countable group. Then there exists a bounded continuous function {f: G \rightarrow [0,1]} with the following properties:

  • (Unique maximum) {f(\hbox{id}) = 1}, and {f(x) < 1} for all {x \neq \hbox{id}}.
  • (Neighbourhood base) The sets {\{ x \in G: f(x) > 1-1/n \}} for {n=1,2,\ldots} form a neighbourhood base at the identity.
  • (Uniform continuity) For every {\varepsilon > 0}, there exists an open neighbourhood {U} of the identity such that {|f(gx)-f(x)| \leq \varepsilon} for all {g \in U} and {x \in G}.

Note that if {G} had a left-invariant metric, then the function {f(x) := \max( 1 - \hbox{dist}(x,\hbox{id}), 0)} would suffice for this lemma, which already gives some indication as to why this lemma is relevant to the Birkhoff-Kakutani theorem.

Let us assume Lemma 2 for now and finish the proof of the Birkhoff-Kakutani theorem. We only prove the difficult direction, namely that a Hausdorff first countable topological group {G} is metrisable. We let {f} be the function from Lemma 2, and define the function {d_f: G \times G \rightarrow [0,+\infty)} by the formula

\displaystyle  d_f( g, h ) := \| \tau_g f - \tau_h f \|_{BC(G)} = \sup_{x \in G} |f(g^{-1} x) - f(h^{-1} x)| \ \ \ \ \ (1)

where {BC(G)} is the space of bounded continuous functions on {G} (with the supremum norm) and {\tau_g} is the left-translation operator {\tau_g f(x) := f(g^{-1} x)}.

Clearly {d_f} obeys the identity {d_f(g,g) = 0} and symmetry {d_f(g,h) = d_f(h,g)} axioms, and the triangle inequality {d_f(g,k) \leq d_f(g,h) + d_f(h,k)} is also immediate. This already makes {d_f} a pseudometric. In order for {d_f} to be a genuine metric, what is needed is that {f} have no non-trivial translation invariances, i.e. one has {\tau_g f \neq f} for all {g \neq \hbox{id}}. But this follows since {f} attains its maximum at exactly one point, namely the group identity {\hbox{id}}.

To put it another way: because {f} has no non-trivial translation invariances, the left translation action {\tau} gives an embedding {g \mapsto \tau_g f}, and {G} then inherits a metric {d_f} from the metric structure on {BC(G)}.
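To see the metric (1) in a completely explicit special case, here is a small numerical sketch (my own illustration in Python, assuming numpy; the names are mine and this is not part of the argument) with {G = ({\bf R},+)} and {f} the tent function {\max(1-|x|,0)} mentioned after Lemma 2.  The supremum in (1) is approximated by a maximum over a fine grid; a short computation by hand shows the exact value should be {\min(|g-h|,1)}, which is indeed a left-invariant metric generating the usual topology on {{\bf R}}.

```python
import numpy as np

# f is the tent function max(1 - |x|, 0), which has a unique maximum at the
# identity 0 and no non-trivial translation invariances.
f = lambda x: np.maximum(1.0 - np.abs(x), 0.0)

def d_f(g, h, grid=np.linspace(-10.0, 10.0, 200001)):
    # approximate d_f(g,h) = sup_x |f(x - g) - f(x - h)| by a maximum over a fine grid
    return np.max(np.abs(f(grid - g) - f(grid - h)))

for g, h in [(0.0, 0.3), (2.0, 2.7), (0.0, 5.0)]:
    print(round(d_f(g, h), 3), min(abs(g - h), 1.0))   # pairs should agree: 0.3, 0.7, 1.0
```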

Now we have to check whether the metric {d_f} actually generates the topology. This amounts to verifying two things. Firstly, that every ball {B(x,r)} in this metric is open; and secondly, that every open neighbourhood of a point {x \in G} contains a ball {B(x,r)}.

To verify the former claim, it suffices to show that the map {g \mapsto \tau_g f} from {G} to {BC(G)} is continuous, which follows from the uniform continuity hypothesis. The second claim follows easily from the neighbourhood base hypothesis, since if {d_f(g,h) < 1/n} then {f(g^{-1} h) > 1-1/n}.

Remark 2 The above argument in fact shows that if a group {G} is metrisable, then it admits a left-invariant metric. The idea of using a suitable continuous function {f} to generate a useful metric structure on a topological group is a powerful one, for instance underlying the Gleason lemmas which are fundamental to the solution of Hilbert’s fifth problem. I hope to return to this topic in a future post.

Now we prove Lemma 2. By first countability, we can find a countable neighbourhood base

\displaystyle  V_1 \supset V_2 \supset \ldots \supset \{\hbox{id}\}

of the identity. As {G} is Hausdorff, we must have

\displaystyle  \bigcap_{n=1}^\infty V_n = \{\hbox{id}\}.

Using the continuity of the group axioms, we can recursively find a sequence of nested open neighbourhoods of the identity

\displaystyle  U_1 \supset U_{1/2} \supset U_{1/4} \supset \ldots \supset \{\hbox{id}\} \ \ \ \ \ (2)

such that each {U_{1/2^n}} is symmetric (i.e. {g \in U_{1/2^n}} if and only if {g^{-1} \in U_{1/2^n}}), is contained in {V_n}, and is such that {U_{1/2^{n+1}} \cdot U_{1/2^{n+1}} \subset U_{1/2^n}} for each {n \geq 0}. In particular the {U_{1/2^n}} are also a neighbourhood base of the identity with

\displaystyle  \bigcap_{n=1}^\infty U_{1/2^n} = \{\hbox{id}\}. \ \ \ \ \ (3)

For every dyadic rational {a/2^n} in {(0,1)}, we can now define the open sets {U_{a/2^n}} by setting

\displaystyle  U_{a/2^n} := U_{1/2^{n_k}} \cdot \ldots \cdot U_{1/2^{n_1}}

where {a/2^n = 2^{-n_1} + \ldots + 2^{-n_k}} is the binary expansion of {a/2^n} with {1 \leq n_1 < \ldots < n_k}. By repeated use of the hypothesis {U_{1/2^{n+1}} \cdot U_{1/2^{n+1}} \subset U_{1/2^n}} we see that the {U_{a/2^n}} are increasing in {a/2^n}; indeed, we have the inclusion

\displaystyle  U_{1/2^n} \cdot U_{a/2^n} \subset U_{(a+1)/2^n} \ \ \ \ \ (4)

for all {n \geq 1} and {1 \leq a < 2^n}.

We now set

\displaystyle  f(x) := \sup \{ 1 - \frac{a}{2^n}: n \geq 1; 1 \leq a < 2^n; x \in U_{a/2^n} \}

with the understanding that {f(x)=0} if the supremum is over the empty set. One easily verifies using (4) that {f} is continuous, and furthermore obeys the uniform continuity property. The neighbourhood base property follows since the {U_{1/2^n}} are a neighbourhood base of the identity, and the unique maximum property follows from (3). This proves Lemma 2.
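To see what this construction produces in the simplest model case, here is a small sketch (my own illustration in Python, not part of the proof) with {G = ({\bf R},+)} and {U_{1/2^n} := (-2^{-n},2^{-n})}; in this case {U_{a/2^n}} works out to be the interval {(-a/2^n, a/2^n)}, and the supremum defining {f} reproduces (up to the truncation of {n} in the code) the tent function {\max(1-|x|,0)}.

```python
def f_dyadic(x, n_max=12):
    # evaluate the supremum defining f, truncated at denominators 2^n_max
    best = 0.0
    for n in range(1, n_max + 1):
        for a in range(1, 2 ** n):
            t = a / 2 ** n              # the dyadic rational a/2^n (exact as a float)
            if abs(x) < t:              # x lies in U_{a/2^n} = (-a/2^n, a/2^n)
                best = max(best, 1.0 - t)
    return best

for x in [0.0, 0.25, 0.6, 1.5]:
    print(x, round(f_dyadic(x), 3), max(1.0 - abs(x), 0.0))   # the two values agree
```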

Remark 3 A very similar argument to the one above also establishes that every topological group {G} is completely regular.

Notice that the function {f} constructed in the above argument was localised to the set {V_1}. As such, it is not difficult to localise the Birkhoff-Kakutani theorem to local groups. A local group is a topological space {G} equipped with an identity {\hbox{id}}, a partially defined inversion operation {()^{-1}: \Lambda \rightarrow G}, and a partially defined product operation {\cdot: \Omega \rightarrow G}, where {\Lambda}, {\Omega} are open subsets of {G} and {G \times G}, obeying the following restricted versions of the group axioms:

  1. (Continuity) {\cdot} and {()^{-1}} are continuous on their domains of definition.
  2. (Identity) For any {g \in G}, {\hbox{id} \cdot g} and {g \cdot \hbox{id}} are well-defined and equal to {g}.
  3. (Inverse) For any {g \in \Lambda}, {g \cdot g^{-1}} and {g^{-1} \cdot g} are well-defined and equal to {\hbox{id}}. {\hbox{id}^{-1}} is well-defined and equal to {\hbox{id}}.
  4. (Local associativity) If {g, h, k \in G} are such that {g \cdot h}, {(g \cdot h) \cdot k}, {h \cdot k}, and {g \cdot (h \cdot k)} are all well-defined, then {(g \cdot h) \cdot k = g \cdot (h \cdot k)}.

Informally, one can view a local group as a topological group in which the closure axiom has been almost completely dropped, but with all the other axioms retained. A basic way to generate a local group is to start with an ordinary topological group {G} and restrict it to an open neighbourhood {U} of the identity, with {\Lambda := \{ g \in U: g^{-1} \in U \}} and {\Omega := \{ (g,h) \in U \times U: gh \in U \}}. However, this is not quite the only way to generate local groups (ultimately because the local associativity axiom does not necessarily imply a (stronger) global associativity axiom in which one considers two different ways to multiply more than three group elements together).

Remark 4 Another important example of a local group is that of a group chunk, in which the sets {\Lambda} and {\Omega} are somehow “generic”; for instance, {G} could be an algebraic variety, {\Lambda, \Omega} Zariski-open, and the group operations birational on their domains of definition. This is somewhat analogous to the notion of a “{99\%} group” in additive combinatorics. There are a number of group chunk theorems, starting with a theorem of Weil in the algebraic setting, which roughly speaking assert that a generic portion of a group chunk can be identified with the generic portion of a genuine group.

We then have

Theorem 3 (Birkhoff-Kakutani theorem for local groups) Let {G} be a local group which is Hausdorff and first countable. Then there exists an open neighbourhood {V_0} of the identity which is metrisable.

Proof: (Sketch) It is not difficult to see that in a local group {G}, one can find a symmetric neighbourhood {V_0} of the identity such that the product of any {100} (say) elements of {V_0} (multiplied together in any order) is well-defined, which effectively allows us to treat elements of {V_0} as if they belonged to a group for the purposes of simple algebraic manipulation, such as applying the cancellation laws {gh=gk \implies h=k} for {g,h,k \in V_0}. Inside this {V_0}, one can then repeat the previous arguments and eventually end up with a continuous function {f \in BC(G)} supported in {V_0} obeying the conclusions of Lemma 2 (but in the uniform continuity conclusion, one has to restrict {x} to, say, {V_0^{10}}, to avoid issues of ill-definedness). The definition (1) then gives a metric on {V_0} with the required properties, where we make the convention that {\tau_g f(x)} vanishes for {x \not \in V_0^{10}} (say) and {g \in V_0}. \Box

My motivation for studying local groups is that it turns out that there is a correspondence (first observed by Hrushovski) between the concept of an approximate group in additive combinatorics, and a locally compact local group in topological group theory; I hope to discuss this correspondence further in a subsequent post.

The ham sandwich theorem asserts that, given {d} bounded open sets {U_1,\ldots,U_d} in {{\bf R}^d}, there exists a hyperplane {\{ x \in {\bf R}^d: x \cdot v = c \}} that bisects each of these sets {U_i}, in the sense that each of the two half-spaces {\{ x \in {\bf R}^d: x \cdot v < c \}, \{ x \in {\bf R}^d: x \cdot v > c \}} on either side of the hyperplane captures exactly half of the volume of {U_i}. The shortest proof of this result proceeds by invoking the Borsuk-Ulam theorem.

A useful generalisation of the ham sandwich theorem is the polynomial ham sandwich theorem, which asserts that given {m} bounded open sets {U_1,\ldots,U_m} in {{\bf R}^d}, there exists a hypersurface {\{ x \in {\bf R}^d: Q(x)=0\}} of degree {O_d( m^{1/d} )} (thus {Q: {\bf R}^d \rightarrow {\bf R}} is a polynomial of degree {O_d(m^{1/d})}) such that the two semi-algebraic sets {\{ Q > 0 \}} and {\{ Q < 0\}} capture half the volume of each of the {U_i}. (More precisely, the degree will be at most {D}, where {D} is the first positive integer for which {\binom{D+d}{d}} exceeds {m}.) This theorem can be deduced from the Borsuk-Ulam theorem in the same manner that the ordinary ham sandwich theorem is (and can also be deduced directly from the ordinary ham sandwich theorem via the Veronese embedding).
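For concreteness, the degree bound {D} described in the parenthetical above is easy to compute; the following is a quick sketch of that computation (my own illustration in Python; the function name is mine).

```python
from math import comb

def ham_sandwich_degree(m, d):
    # smallest positive integer D with binom(D + d, d) > m
    D = 1
    while comb(D + d, d) <= m:
        D += 1
    return D

print(ham_sandwich_degree(100, 2))   # 13, since binom(15,2) = 105 > 100 >= binom(14,2) = 91
```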

The polynomial ham sandwich theorem is a theorem about continuous bodies (bounded open sets), but a simple limiting argument leads one to the following discrete analogue: given {m} finite sets {S_1,\ldots,S_m} in {{\bf R}^d}, there exists a hypersurface {\{ x \in {\bf R}^d: Q(x)=0\}} of degree {O_d( m^{1/d} )}, such that each of the two semi-algebraic sets {\{ Q > 0 \}} and {\{ Q < 0\}} contains at most half of the points of each {S_i} (note that some of the points of {S_i} can certainly lie on the boundary {\{Q=0\}}). This can be iterated to give a useful cell decomposition:

Proposition 1 (Cell decomposition) Let {P} be a finite set of points in {{\bf R}^d}, and let {D} be a positive integer. Then there exists a polynomial {Q} of degree at most {D}, and a decomposition

\displaystyle  {\bf R}^d = \{ Q = 0\} \cup C_1 \cup \ldots \cup C_m

into the hypersurface {\{Q=0\}} and a collection {C_1,\ldots,C_m} of cells bounded by {\{Q=0\}}, such that {m = O_d(D^d)}, and such that each cell {C_i} contains at most {O_d( |P|/D^d )} points of {P}.

A proof is sketched in this previous blog post. The cells in the argument are not necessarily connected (being instead formed by intersecting together a number of semi-algebraic sets such as {\{ Q > 0\}} and {\{Q<0\}}), but it is a classical result (established independently by Oleinik-Petrovskii, Milnor, and Thom) that any degree {D} hypersurface {\{Q=0\}} divides {{\bf R}^d} into {O_d(D^d)} connected components, so one can easily assume that the cells are connected if desired. (Incidentally, one does not need the full machinery of the results in the above cited papers – which control not just the number of components, but all the Betti numbers of the complement of {\{Q=0\}} – to get the bound on connected components; one can instead observe that every bounded connected component has a critical point where {\nabla Q = 0}, and one can control the number of these points by Bezout’s theorem, after perturbing {Q} slightly to enforce genericity, and then count the unbounded components by an induction on dimension.)
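The {d=1} case of Proposition 1 can already be carried out by hand, and may help fix ideas: cutting the real line at {D} suitably chosen points gives a degree {D} polynomial {Q} whose zero set splits {{\bf R}} into {D+1} cells, each containing at most {\lceil |P|/(D+1) \rceil} of the points. A small sketch of this (my own illustration in Python, assuming numpy; names are mine):

```python
import math
import numpy as np

def cell_decomposition_1d(points, D):
    # roots of a degree-D polynomial Q cutting R into D+1 intervals,
    # each containing at most ceil(|P|/(D+1)) of the (distinct) points
    pts = np.sort(np.asarray(points, dtype=float))
    n, k = len(pts), math.ceil(len(pts) / (D + 1))
    return [(pts[i * k - 1] + pts[i * k]) / 2 for i in range(1, D + 1) if i * k < n]

P = np.random.default_rng(0).uniform(0, 100, size=200)
roots = cell_decomposition_1d(P, D=9)        # at most 10 cells
cells = np.searchsorted(np.sort(roots), P)   # index of the cell containing each point
print(len(roots), max(np.bincount(cells)))   # 9 roots, and no cell has more than 20 points
```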

Remark 1 By setting {D} as large as {O_d(|P|^{1/d})}, we obtain as a limiting case of the cell decomposition the fact that any finite set {P} of points in {{\bf R}^d} can be captured by a hypersurface of degree {O_d(|P|^{1/d})}. This fact remains true over arbitrary fields (not just over {{\bf R}}), and can be proven by a simple linear algebra argument (see e.g. this previous blog post). However, the cell decomposition is more flexible than this algebraic fact due to the ability to arbitrarily select the degree parameter {D}.
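Here is a minimal numerical sketch of that dimension-counting argument (my own illustration in Python, assuming numpy; carried out over {{\bf R}} for convenience, though the argument works over any field): when {\binom{D+d}{d} > |P|}, the matrix evaluating all monomials of degree at most {D} at the points of {P} has a nontrivial kernel, and any kernel vector gives a nonzero polynomial of degree at most {D} vanishing at every point of {P}.

```python
import itertools
import numpy as np

def vanishing_polynomial(points, D):
    """Coefficients (indexed by exponent tuples) of a nonzero polynomial of
    degree <= D vanishing on the given points; assumes #monomials > #points."""
    d = len(points[0])
    monomials = [e for e in itertools.product(range(D + 1), repeat=d) if sum(e) <= D]
    M = np.array([[np.prod([p[i] ** e[i] for i in range(d)]) for e in monomials]
                  for p in points])                    # evaluation matrix
    kernel_vector = np.linalg.svd(M)[2][-1]            # last right singular vector
    return dict(zip(monomials, kernel_vector))

pts = np.random.rand(5, 2)                             # 5 points in the plane
poly = vanishing_polynomial(pts, 2)                    # binom(4,2) = 6 > 5
print(max(abs(sum(c * np.prod(p ** np.array(e)) for e, c in poly.items()))
          for p in pts))                               # ~1e-15: vanishes on all five points
```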

The cell decomposition can be viewed as a structural theorem for arbitrary large configurations of points in space, much as the Szemerédi regularity lemma can be viewed as a structural theorem for arbitrary large dense graphs. Indeed, just as many problems in the theory of large dense graphs can be profitably attacked by first applying the regularity lemma and then inspecting the outcome, it now seems that many problems in combinatorial incidence geometry can be attacked by applying the cell decomposition (or a similar such decomposition), with a parameter {D} to be optimised later, to a relevant set of points, and seeing how the cells interact with each other and with the other objects in the configuration (lines, planes, circles, etc.). This strategy was spectacularly illustrated recently with Guth and Katz’s use of the cell decomposition to resolve the Erdős distinct distance problem (up to logarithmic factors), as discussed in this blog post.

In this post, I wanted to record a simpler (but still illustrative) version of this method (that I learned from Nets Katz), namely to provide yet another proof of the Szemerédi-Trotter theorem in incidence geometry:

Theorem 2 (Szemerédi-Trotter theorem) Given a finite set of points {P} and a finite set of lines {L} in {{\bf R}^2}, the set of incidences {I(P,L):= \{ (p,\ell) \in P \times L: p \in \ell \}} has cardinality

\displaystyle  |I(P,L)| \ll |P|^{2/3} |L|^{2/3} + |P| + |L|.

This theorem has many short existing proofs, including one via crossing number inequalities (as discussed in this previous post) or via a slightly different type of cell decomposition (as discussed here). The proof given below is, in particular, not that different from the latter proof, but I believe it still serves as a good introduction to the polynomial method in combinatorial incidence geometry.
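As a sanity check on the quantity {|I(P,L)|} being bounded (and nothing more; this is of course not the method of proof), here is a naive brute-force incidence count, with lines encoded as triples {(a,b,c)} representing {ax+by=c}; the encoding and names are my own.

```python
def incidences(points, lines, tol=1e-9):
    # |I(P,L)|: the number of pairs (p, l) with the point p lying on the line l
    return sum(1 for (x, y) in points for (a, b, c) in lines
               if abs(a * x + b * y - c) < tol)

P = [(i, j) for i in range(4) for j in range(4)]                    # a 4 x 4 grid of points
L = [(0, 1, j) for j in range(4)] + [(1, 0, i) for i in range(4)]   # its horizontal and vertical lines
print(incidences(P, L))   # 32: each of the 8 lines passes through 4 of the 16 points
```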

Read the rest of this entry »

Emmanuel Breuillard, Ben Green, and I have just uploaded to the arXiv our paper “Approximate subgroups of linear groups”, submitted to GAFA. This paper contains the first part of the results announced previously by us; the second part of these results, concerning expander groups, will appear subsequently. The release of this paper has been coordinated with the release of a parallel paper by Pyber and Szabo (previously announced within an hour(!) of our own announcement).

Our main result describes (with polynomial accuracy) the “sufficiently Zariski dense” approximate subgroups of simple algebraic groups {{\bf G}(k)}, or more precisely absolutely almost simple algebraic groups over {k}, such as {SL_d(k)}. More precisely, define a {K}-approximate subgroup of a genuine group {G} to be a finite symmetric neighbourhood of the identity {A} (thus {1 \in A} and {A^{-1}=A}) such that the product set {A \cdot A} can be covered by {K} left-translates (and equivalently, {K} right-translates) of {A}.
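As a toy illustration of this definition (my own, not taken from the paper): the interval {A = \{-N,\ldots,N\}} in the additive group {{\bf Z}} is a {2}-approximate group, since {A+A = \{-2N,\ldots,2N\}} is covered by the two translates {A-N} and {A+N}. A quick computational check of this covering, in Python:

```python
def is_covered(A, translates, op=lambda x, y: x + y):
    # check that the product set A * A is covered by the given translates of A
    product = {op(a, b) for a in A for b in A}
    covered = {op(t, a) for t in translates for a in A}
    return product <= covered

N = 10
A = set(range(-N, N + 1))
print(is_covered(A, translates=[-N, N]))   # True: two translates of A cover A + A
```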

Let {k} be a field, and let {\overline{k}} be its algebraic closure. For us, an absolutely almost simple algebraic group over {k} is a linear algebraic group {{\bf G}(k)} defined over {k} (i.e. an algebraic subvariety of {GL_n(k)} for some {n} with group operations given by regular maps) which is connected (i.e. irreducible), and such that the completion {{\bf G}(\overline{k})} has no proper normal subgroups of positive dimension (i.e. the only normal subgroups are either finite, or are all of {{\bf G}(\overline{k})}). To avoid degeneracies we also require {{\bf G}} to be non-abelian (i.e. not one-dimensional). These groups can be classified in terms of their associated finite-dimensional simple complex Lie algebra, which of course is determined by its Dynkin diagram, together with a choice of weight lattice (and there are only finitely many such choices once the Lie algebra is fixed). However, the exact classification of these groups is not directly used in our work.

Our first main theorem classifies the approximate subgroups {A} of such a group {{\bf G}(k)} in the model case when {A} generates the entire group {{\bf G}(k)}, and {k} is finite; they are either very small or very large.

Theorem 1 (Approximate groups that generate) Let {{\bf G}(k)} be an absolutely almost simple algebraic group over {k}. If {k} is finite and {A} is a {K}-approximate subgroup of {{\bf G}(k)} that generates {{\bf G}(k)}, then either {|A| \leq K^{O(1)}} or {|A| \geq K^{-O(1)} |{\bf G}(k)|}, where the implied constants depend only on {{\bf G}}.

The hypothesis that {A} generates {{\bf G}(k)} cannot be removed completely, since if {A} was a proper subgroup of {{\bf G}(k)} of size intermediate between that of the trivial group and of {{\bf G}(k)}, then the conclusion would fail (with {K=O(1)}). However, one can relax the hypothesis of generation to that of being sufficiently Zariski-dense in {{\bf G}(k)}. More precisely, we have

Theorem 2 (Zariski-dense approximate groups) Let {{\bf G}(k)} be an absolutely almost simple algebraic group over {k}. If {A} is a {K}-approximate subgroup of {{\bf G}(k)} that is not contained in any proper algebraic subgroup of {{\bf G}(k)} of complexity at most {M} (where {M} is sufficiently large depending on {{\bf G}}), then either {|A| \leq K^{O(1)}} or {|A| \geq K^{-O(1)} |\langle A \rangle|}, where the implied constants depend only on {{\bf G}} and {\langle A \rangle} is the group generated by {A}.

Here, we say that an algebraic variety has complexity at most {M} if it can be cut out of an ambient affine or projective space of dimension at most {M} by using at most {M} polynomials, each of degree at most {M}. (Note that this is not an intrinsic notion of complexity, but will depend on how one embeds the algebraic variety into an ambient space; but we are assuming that our algebraic group {{\bf G}(k)} is a linear group and thus comes with such an embedding.)

In the case when {k = {\bf C}}, the second option of this theorem cannot occur since {{\bf G}({\bf C})} is infinite, leading to a satisfactory classification of the Zariski-dense approximate subgroups of almost simple connected algebraic groups over {{\bf C}}. On the other hand, every approximate subgroup of {GL_n({\bf C})} is Zariski-dense in some algebraic subgroup, which can then be split as an extension of a semisimple algebraic quotient group by a solvable algebraic group (the radical of the Zariski closure). Pursuing this idea (and glossing over some annoying technical issues relating to connectedness), together with the Freiman theory for solvable groups over {{\bf C}} due to Breuillard and Green, we obtain our third theorem:

Theorem 3 (Freiman’s theorem in {GL_n({\bf C})}) Let {A} be a {K}-approximate subgroup of {GL_n({\bf C})}. Then there exists a nilpotent {K}-approximate subgroup {B} of size at most {K^{O(1)}|A|}, such that {A} is covered by {K^{O(1)}} translates of {B}.

This can be compared with Gromov’s celebrated theorem that any finitely generated group of polynomial growth is virtually nilpotent. Indeed, the above theorem easily implies Gromov’s theorem in the case of finitely generated subgroups of {GL_n({\bf C})}.

By fairly standard arguments, the above classification theorems for approximate groups can be used to give bounds on the expansion and diameter of Cayley graphs; for instance, one can establish a conjecture of Babai and Seress that connected Cayley graphs on absolutely almost simple groups over a finite field have diameter at most polylogarithmic in the size of the group. Applications to expanders include the result on Suzuki groups mentioned in a previous post; further applications will appear in a forthcoming paper.

Apart from the general structural theory of algebraic groups, and some quantitative analogues of the basic theory of algebraic geometry (which we chose to obtain via ultrafilters, as discussed in this post), we rely on two basic tools. Firstly, we use a version of the pivot argument developed first by Konyagin and Bourgain-Glibichuk-Konyagin in the setting of sum-product estimates, and generalised to more non-commutative settings by Helfgott; this is discussed in this previous post. Secondly, we adapt an argument of Larsen and Pink (which we learned from a paper of Hrushovski) to obtain a sharp bound on the extent to which a sufficiently Zariski-dense approximate group can concentrate in a (bounded complexity) subvariety; this is discussed at the end of this blog post.

This week I was in my home town of Adelaide, Australia, for the 2009 annual meeting of the Australian Mathematical Society. This was a fairly large meeting (almost 500 participants). One of the highlights of such a large meeting is the ability to listen to plenary lectures in fields adjacent to one’s own, in which speakers can give high-level overviews of a subject without getting too bogged down in the technical details. From the talks here I learned a number of basic things which were well known to experts in that field, but which I had not fully appreciated, and so I wanted to share them here.

The first instance of this was from a plenary lecture by Danny Calegari entitled “faces of the stable commutator length (scl) ball”. One thing I learned from this talk is that in homotopy theory, there is a very close relationship between topological spaces (such as manifolds) on one hand, and groups (and generalisations of groups) on the other, so that homotopy-theoretic questions about the former can often be converted to purely algebraic questions about the latter, and vice versa; indeed, it seems that homotopy theorists almost think of topological spaces and groups as being essentially the same concept, despite looking very different at first glance. To get from a space {X} to a group, one looks at homotopy groups {\pi_n(X)} of that space, and in particular the fundamental group {\pi_1(X)}; conversely, to get from a group {G} back to a topological space one can use the Eilenberg-MacLane spaces {K(G,n)} associated to that group (and more generally, a Postnikov tower associated to a sequence of such groups, together with additional data). In Danny’s talk, he gave the following specific example: the problem of finding the least complicated embedded surface with prescribed (and homologically trivial) boundary in a space {X}, where “least complicated” is measured by genus (or more precisely, the negative component of Euler characteristic), is essentially equivalent to computing the commutator length of the element in the fundamental group {\pi_1(X)} corresponding to that boundary (i.e. the least number of commutators one is required to multiply together to express the element); and the stable version of this problem (where one allows the surface to wrap around the boundary {n} times for some large {n}, and one computes the asymptotic ratio between the Euler characteristic and {n}) is similarly equivalent to computing the stable commutator length of that group element. (Incidentally, there is a simple combinatorial open problem regarding commutator length in the free group, which I have placed on the polymath wiki.)

This theme was reinforced by another plenary lecture by Ezra Getzler entitled “{n}-groups”, in which he showed how sequences of groups (such as the first {n} homotopy groups {\pi_1(X),\ldots,\pi_n(X)}) can be enhanced into a more powerful structure known as an {n}-group, which is more complicated to define, requiring the machinery of simplicial complexes, sheaves, and nerves. Nevertheless, this gives a very topological and geometric interpretation of the concept of a group and its generalisations, which are of use in topological quantum field theory, among other things.

Mohammed Abuzaid gave a plenary lecture entitled “Functoriality in homological mirror symmetry”. One thing I learned from this talk was that the (partially conjectural) phenomenon of (homological) mirror symmetry is one of several types of duality, in which the behaviour of maps into one mathematical object {X} (e.g. immersed or embedded curves, surfaces, etc.) are closely tied to the behaviour of maps out of a dual mathematical object {\hat X} (e.g. functionals, vector fields, forms, sections, bundles, etc.). A familiar example of this is in linear algebra: by taking adjoints, a linear map into a vector space {X} can be related to an adjoint linear map mapping out of the dual space {X^*}. Here, the behaviour of curves in a two-dimensional symplectic manifold (or more generally, Lagrangian submanifolds in a higher-dimensional symplectic manifold), is tied to the behaviour of holomorphic sections on bundles over a dual algebraic variety, where the precise definition of “behaviour” is category-theoretic, involving some rather complicated gadgets such as the Fukaya category of a symplectic manifold. As with many other applications of category theory, it is not just the individual pairings between an object and its dual which are of interest, but also the relationships between these pairings, as formalised by various functors between categories (and natural transformations between functors). (One approach to mirror symmetry was discussed by Shing-Tung Yau at a distinguished lecture at UCLA, as transcribed in this previous post.)

There was a related theme in a talk by Dennis Gaitsgory entitled “The geometric Langlands program”. From my (very superficial) understanding of the Langlands program, the behaviour of specific maps into a reductive Lie group {G}, such as representations in {G} of a fundamental group, étale fundamental group, class group, or Galois group of a global field, is conjecturally tied to specific maps out of a dual reductive Lie group {\hat G}, such as irreducible automorphic representations of {\hat G}, or of various structures (such as derived categories) attached to vector bundles on {\hat G}. There are apparently some tentatively conjectured links (due to Witten?) between Langlands duality and mirror symmetry, but they seem at present to be fairly distinct phenomena (one is topological and geometric, the other is more algebraic and arithmetic). For abelian groups, Langlands duality is closely connected to the much more classical Pontryagin duality in Fourier analysis. (There is an analogue of Fourier analysis for nonabelian groups, namely representation theory, but the link from this to the Langlands program is somewhat murky, at least to me.)

Related also to this was a plenary talk by Akshay Venkatesh, entitled “The Cohen-Lenstra heuristics over global fields”. Here, the question concerned the conjectural behaviour of class groups of quadratic fields, and in particular to explain the numerically observed phenomenon that about {75.4\%} of all real quadratic fields {{\Bbb Q}[\sqrt{d}]} (with d prime) enjoy unique factorisation (i.e. have trivial class group). (Class groups, as I learned in these two talks, are arithmetic analogues of the (abelianised) fundamental groups in topology, with Galois groups serving as the analogue of the full fundamental group.) One thing I learned here was that there was a canonical way to randomly generate a (profinite) abelian group, by taking the product of randomly generated finite abelian {p}-groups for each prime {p}. The way to canonically randomly generate a finite abelian {p}-group is to take large integers {n, d}, and look at the cokernel of a random homomorphism from {({\mathbb Z}/p^n{\mathbb Z})^d} to {({\mathbb Z}/p^n{\mathbb Z})^d}. In the limit {n,d \rightarrow \infty} (or by replacing {{\mathbb Z}/p^n{\mathbb Z}} with the {p}-adics and just sending {d \rightarrow \infty}), this stabilises and generates any given finite abelian {p}-group {G} with probability

\displaystyle  \frac{1}{|\hbox{Aut}(G)|} \prod_{j=1}^\infty (1 - \frac{1}{p^j}), \ \ \ \ \ (1)

where {\hbox{Aut}(G)} is the group of automorphisms of {G}. In particular this leads to the strange identity

\displaystyle  \sum_G \frac{1}{|\hbox{Aut}(G)|} = \prod_{j=1}^\infty (1 - \frac{1}{p^j})^{-1} \ \ \ \ \ (2)

where {G} ranges over all finite abelian {p}-groups (up to isomorphism); I do not know how to prove this identity other than via the above probability computation, the proof of which I give below the fold.
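Formula (1) can at least be sanity-checked numerically in the simplest case when {G} is trivial: the cokernel of the map {({\mathbb Z}/p^n{\mathbb Z})^d \rightarrow ({\mathbb Z}/p^n{\mathbb Z})^d} is trivial precisely when the underlying matrix is invertible mod {p}, which happens with probability {\prod_{j=1}^d (1-p^{-j})}, and this converges to (1) (with {|\hbox{Aut}(G)|=1}) as {d \rightarrow \infty}. Here is a quick Monte Carlo check (my own sketch in plain Python; names are mine):

```python
import random

def rank_mod_p(M, p):
    # rank of a square matrix over the field F_p, by Gaussian elimination
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M)):
        pivot = next((r for r in range(rank, len(M)) if M[r][c] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][c], -1, p)
        M[rank] = [(x * inv) % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                factor = M[r][c]
                M[r] = [(x - factor * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

p, d, trials = 3, 6, 20000
hits = sum(rank_mod_p([[random.randrange(p) for _ in range(d)] for _ in range(d)], p) == d
           for _ in range(trials))
predicted = 1.0
for j in range(1, d + 1):
    predicted *= 1.0 - p ** (-j)
print(hits / trials, predicted)   # both should be close to 0.56
```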

Based on the heuristic that the class group should behave “randomly” subject to some “obvious” constraints, it is expected that a randomly chosen real quadratic field {{\Bbb Q}[\sqrt{d}]} has unique factorisation (i.e. the class group has trivial {p}-group component for every {p}) with probability

\displaystyle  \prod_{p \hbox{ odd}} \prod_{j=2}^\infty (1 - \frac{1}{p^j}) \approx 0.754,

whereas a randomly chosen imaginary quadratic field {{\Bbb Q}[\sqrt{-d}]} has unique factorisation with probability

\displaystyle  \prod_{p \hbox{ odd}} \prod_{j=1}^\infty (1 - \frac{1}{p^j}) = 0.

The former claim is conjectural, whereas the latter claim follows from (for instance) Siegel’s theorem on the size of the class group, as discussed in this previous post. Ellenberg, Venkatesh, and Westerland have recently established some partial results towards the function field analogues of these heuristics.
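The {\approx 0.754} constant appearing above is also straightforward to check numerically by truncating the product over odd primes {p} and over {j}; a quick sketch (my own, in plain Python):

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def real_quadratic_constant(prime_cutoff=10**5, j_max=60):
    # truncation of prod_{p odd prime} prod_{j >= 2} (1 - p^{-j})
    prod = 1.0
    for p in range(3, prime_cutoff, 2):
        if is_prime(p):
            for j in range(2, j_max + 1):
                prod *= 1.0 - p ** (-j)
    return prod

print(real_quadratic_constant())   # approximately 0.754
```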

Read the rest of this entry »

One of the most important topological concepts in analysis is that of compactness (as discussed for instance in my Companion article on this topic).  There are various flavours of this concept, but let us focus on sequential compactness: a subset E of a topological space X is sequentially compact if every sequence in E has a convergent subsequence whose limit is also in E.  This property allows one to do many things with the set E.  For instance, it allows one to maximise a functional on E:

Proposition 1. (Existence of extremisers)  Let E be a non-empty sequentially compact subset of a topological space X, and let F: E \to {\Bbb R} be a continuous function.  Then the supremum \sup_{x \in E} F(x) is attained at at least one point x_* \in E, thus F(x) \leq F(x_*) for all x \in E.  (In particular, this supremum is finite.)  Similarly for the infimum.

Proof. Let -\infty < L \leq +\infty be the supremum L := \sup_{x \in E} F(x).  By the definition of supremum (and the axiom of (countable) choice), one can find a sequence x^{(n)} in E such that F(x^{(n)}) \to L.  By compactness, we can refine this sequence to a subsequence (which, by abuse of notation, we shall continue to call x^{(n)}) such that x^{(n)} converges to a limit x in E.  Since we still have F(x^{(n)}) \to L, and F is continuous at x, we conclude that F(x)=L, and the claim for the supremum follows.  The claim for the infimum is similar.  \Box

Remark 1. An inspection of the argument shows that one can relax the continuity hypothesis on F somewhat: to attain the supremum, it suffices that F be upper semicontinuous, and to attain the infimum, it suffices that F be lower semicontinuous. \diamond

We thus see that sequential compactness is useful, among other things, for ensuring the existence of extremisers.  In finite-dimensional spaces (such as vector spaces), compact sets are plentiful; indeed, the Heine-Borel theorem asserts that every closed and bounded set is compact.  However, once one moves to infinite-dimensional spaces, such as function spaces, then the Heine-Borel theorem fails quite dramatically; most of the closed and bounded sets one encounters in a topological vector space are non-compact, if one insists on using a reasonably “strong” topology.  This causes a difficulty in (among other things) calculus of variations, which is often concerned with finding extremisers to a functional F: E \to {\Bbb R} on a subset E of an infinite-dimensional function space X.

In recent decades, mathematicians have found a number of ways to get around this difficulty.  One of them is to weaken the topology to recover compactness, taking advantage of such results as the Banach-Alaoglu theorem (or its sequential counterpart).  Of course, there is a tradeoff: weakening the topology makes compactness easier to attain, but makes the continuity of F harder to establish.  Nevertheless, if F enjoys enough “smoothing” or “cancellation” properties, one can hope to obtain continuity in the weak topology, allowing one to do things such as locate extremisers.  (The phenomenon that cancellation can lead to continuity in the weak topology is sometimes referred to as compensated compactness.)

Another option is to abandon trying to make all sequences have convergent subsequences, and settle just for extremising sequences to have convergent subsequences, as this would still be enough to retain Proposition 1.  Pursuing this line of thought leads to the Palais-Smale condition, which is a substitute for compactness in some calculus of variations situations.

But in many situations, one cannot weaken the topology to the point where the domain E becomes compact, without destroying the continuity (or semi-continuity) of F, though one can often at least find an intermediate topology (or metric) in which F is continuous, but for which E is still not quite compact.  Thus one can find sequences x^{(n)} in E which do not have any subsequences that converge to a constant element x \in E, even in this intermediate metric.  (As we shall see shortly, one major cause of this failure of compactness is the existence of a non-trivial action of a non-compact group G on E; such a group action can cause compensated compactness or the Palais-Smale condition to fail also.)  Because of this, it is a priori conceivable that a continuous function F need not attain its supremum or infimum.

Nevertheless, even though a sequence x^{(n)} does not have any subsequences that converge to a constant x, it may have a subsequence (which we also call x^{(n)}) which converges to some non-constant sequence y^{(n)} (in the sense that the distance d(x^{(n)},y^{(n)}) between the subsequence and the new sequence goes to zero in this intermediate metric), where the approximating sequence y^{(n)} is of a very structured form (e.g. “concentrating” to a point, or “travelling” off to infinity, or a superposition y^{(n)} = \sum_j y^{(n)}_j of several concentrating or travelling profiles of this form).  This weaker form of compactness, in which superpositions of a certain type of profile completely describe all the failures (or defects) of compactness, is known as concentration compactness, and the decomposition x^{(n)} \approx \sum_j y^{(n)}_j of the subsequence is known as the profile decomposition.  In many applications, it is a sufficiently good substitute for compactness that one can still do things like locate extremisers for functionals F – though one often has to make some additional assumptions on F to compensate for the more complicated nature of the compactness.  This phenomenon was systematically studied by P.L. Lions in the 80s, and found great application in calculus of variations and nonlinear elliptic PDE.  More recently, concentration compactness has been a crucial and powerful tool in the non-perturbative analysis of nonlinear dispersive PDE, in particular being used to locate “minimal energy blowup solutions” or “minimal mass blowup solutions” for such a PDE (analogously to how one can use calculus of variations to find minimal energy solutions to a nonlinear elliptic equation); see for instance this recent survey by Killip and Visan.

In typical applications, the concentration compactness phenomenon is exploited in moderately sophisticated function spaces (such as Sobolev spaces or Strichartz spaces), with the failure of traditional compactness being connected to a moderately complicated group G of symmetries (e.g. the group generated by translations and dilations).  Because of this, concentration compactness can appear to be a rather complicated and technical concept when it is first encountered.  In this note, I would like to illustrate concentration compactness in a simple toy setting, namely in the space X = l^1({\Bbb Z}) of absolutely summable sequences, with the uniform (l^\infty) metric playing the role of the intermediate metric, and the translation group {\Bbb Z} playing the role of the symmetry group G.  This toy setting is significantly simpler than any model that one would actually use in practice [for instance, in most applications X is a Hilbert space], but hopefully it serves to illuminate this useful concept in a less technical fashion.
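To make the toy setting tangible, here is a tiny numerical illustration (my own, in Python with numpy; not from the discussion below the fold): the travelling bumps x^{(n)} = e_n (standard basis vectors of l^1({\Bbb Z})) stay at pairwise l^1 distance 2 and pairwise l^\infty distance 1, so no subsequence converges to a fixed element in either metric; but undoing the translation symmetry turns every term into the single profile e_0.

```python
import numpy as np

N = 64                                    # truncate Z to the window {0, ..., N-1}
def e(k):
    v = np.zeros(N)                       # the "bump at k": the k-th standard basis vector
    v[k] = 1.0
    return v

x = {n: e(n) for n in range(10, 15)}      # a few terms of the travelling sequence
print(np.abs(x[10] - x[11]).sum())        # l^1 distance between distinct bumps: 2.0
print(np.abs(x[10] - x[11]).max())        # l^infinity distance: 1.0
print(all(np.array_equal(np.roll(xn, -n), e(0)) for n, xn in x.items()))   # True
```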

Read the rest of this entry »

This week I was in Columbus, Ohio, attending a conference on equidistribution on manifolds. I talked about my recent paper with Ben Green on the quantitative behaviour of polynomial sequences in nilmanifolds, which I have blogged about previously. During my talk (and inspired by the immediately preceding talk of Vitaly Bergelson), I stated explicitly for the first time a generalisation of the van der Corput trick which morally underlies our paper, though it is somewhat buried there as we specialised it to our application at hand (and also had to deal with various quantitative issues that made the presentation more complicated). After the talk, several people asked me for a more precise statement of this trick, so I am presenting it here, and as an application reproving an old theorem of Leon Green that gives a necessary and sufficient condition as to whether a linear sequence (g^n x)_{n=1}^\infty on a nilmanifold G/\Gamma is equidistributed, which generalises the famous theorem of Weyl on equidistribution of polynomials.

UPDATE, Feb 2013: It has been pointed out to me by Pavel Zorin that this argument does not fully recover the theorem of Leon Green; to cover all cases, one needs the more complicated van der Corput argument in our paper.

Read the rest of this entry »

Given that this blog is currently being devoted to a rather intensive study of flows on manifolds, I thought that it might be apropos to highlight an amazing 22-minute video from 1994 on this general topic by the (unfortunately now closed) Geometry Center, entitled “Outside In”, which depicts Smale’s paradox (which asserts that a 2-sphere in three-dimensional space can be smoothly inverted without ever ceasing to be an immersion), following a construction of Thurston (who was credited with the concept for the video). I first saw this video at the 1998 International Congress of Mathematicians in Berlin, where it won the first prize at the VideoMath Festival held there. It did a remarkably effective job of explaining the paradox, its resolution in three dimensions, and the lack of a similar paradox in two dimensions, all in a clear and non-technical manner.

A (rather low resolution) copy of the first half of the video can be found here, and the second half can be found here. Some higher resolution short movies of just the inversion process can be found at this Geometry Center page. Finally, the video (and an accompanying booklet with more details and background) can still be obtained today from A K Peters, although I believe the video is only available in the increasingly archaic VHS format.

There are a few other similar such high-quality expository videos of advanced mathematics floating around the internet, but I do not know of any page devoted to collecting such videos. If any readers have their own favourites, you are welcome to post some links or pointers to them here.

In order to motivate the lengthy and detailed analysis of Ricci flow that will occupy the rest of this course, I will spend this lecture giving a high-level overview of Perelman’s Ricci flow-based proof of the Poincaré conjecture, and in particular how that conjecture is reduced to verifying a number of (highly non-trivial) facts about Ricci flow.

At the risk of belaboring the obvious, here is the statement of that conjecture:

Theorem 1. (Poincaré conjecture) Let M be a compact 3-manifold which is simply connected (i.e. it is connected, and every loop is contractible to a point). Then M is homeomorphic to a 3-sphere S^3.

[Unless otherwise stated, all manifolds are assumed to be without boundary.]

I will take it for granted that this result is of interest, but you can read the Notices article of Milnor, the Bulletin article of Morgan, or the Clay Mathematics Institute description of the problem (also by Milnor) for background and motivation for this conjecture. Perelman’s methods also extend to establish further generalisations of the Poincaré conjecture, most notably Thurston’s geometrisation conjecture, but I will focus this course just on the Poincaré conjecture. (On the other hand, the geometrisation conjecture will be rather visibly lurking beneath the surface in the discussion of this lecture.)

Read the rest of this entry »

I’m continuing my series of articles for the Princeton Companion to Mathematics through the holiday season with my article on “Differential forms and integration”. This is my attempt to explain the concept of a differential form in differential geometry and several variable calculus, which I view as an extension of the concept of the signed integral in single variable calculus. I briefly touch on the important concept of de Rham cohomology, but mostly I stick to fundamentals.

I would also like to highlight Doron Zeilberger’s PCM article “Enumerative and Algebraic combinatorics”. This article describes the art of how to usefully count the number of objects of a given type exactly; this subject has a rather algebraic flavour to it, in contrast with asymptotic combinatorics, which is more concerned with computing the order of magnitude of the number of objects in a class. The two subjects complement each other; for instance, in my own work, I have found that enumerative and other algebraic methods tend to be useful for controlling “main terms” in a given expression, while asymptotic and other analytic methods tend to be good at controlling “error terms”.
