
Previous set of notes: Notes 3. Next set of notes: Notes 5.

In the previous set of notes we saw that functions ${f: U \rightarrow {\bf C}}$ that were holomorphic on an open set ${U}$ enjoyed a large number of useful properties, particularly if the domain ${U}$ was simply connected. In many situations, though, we need to consider functions ${f}$ that are only holomorphic (or even well-defined) on most of a domain ${U}$, thus they are actually functions ${f: U \backslash S \rightarrow {\bf C}}$ outside of some small singular set ${S}$ inside ${U}$. (In this set of notes we only consider interior singularities; one can also discuss singular behaviour at the boundary of ${U}$, but this is a whole separate topic and will not be pursued here.) Since we have only defined the notion of holomorphicity on open sets, we will require the singular sets ${S}$ to be closed, so that the domain ${U \backslash S}$ on which ${f}$ remains holomorphic is still open. A typical class of examples is given by the functions of the form ${\frac{f(z)}{z-z_0}}$ that were already encountered in the Cauchy integral formula; if ${f: U \rightarrow {\bf C}}$ is holomorphic and ${z_0 \in U}$, such a function would be holomorphic save for a singularity at ${z_0}$. Another basic class of examples is given by the rational functions ${P(z)/Q(z)}$, which are holomorphic outside of the zeroes of the denominator ${Q}$.

Singularities come in varying levels of “badness” in complex analysis. The least harmful type of singularity is the removable singularity – a point ${z_0}$ which is an isolated singularity (i.e., an isolated point of the singular set ${S}$) where the function ${f}$ is undefined, but for which one can extend the function across the singularity in such a fashion that the function becomes holomorphic in a neighbourhood of the singularity. A typical example is that of the complex sinc function ${\frac{\sin(z)}{z}}$, which has a removable singularity at the origin ${0}$, which can be removed by declaring the sinc function to equal ${1}$ at ${0}$. The detection of isolated removable singularities can be accomplished by Riemann’s theorem on removable singularities (Exercise 37 from Notes 3): if a holomorphic function ${f: U \backslash S \rightarrow {\bf C}}$ is bounded near an isolated singularity ${z_0 \in S}$, then the singularity at ${z_0}$ may be removed.
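As a quick illustration (a sympy sketch, not needed for anything else in these notes), one can confirm symbolically that the expansion of ${\frac{\sin(z)}{z}}$ at the origin has no negative powers of ${z}$, so the singularity there is removable, with the removed value given by the constant term:

```python
import sympy as sp

z = sp.symbols('z')
f = sp.sin(z) / z

# The expansion of sin(z)/z at 0 contains no negative powers of z, so
# the singularity at 0 is removable; the value to assign there is the
# constant term of the expansion, namely 1.
expansion = sp.series(f, z, 0, 6).removeO()
print(expansion)               # 1 - z**2/6 + z**4/120 (up to term order)
print(sp.limit(f, z, 0))       # 1, the removed value
```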

After removable singularities, the mildest form of singularity one can encounter is that of a pole – an isolated singularity ${z_0}$ such that ${f(z)}$ can be factored as ${f(z) = \frac{g(z)}{(z-z_0)^m}}$ for some ${m \geq 1}$ (known as the order of the pole), where ${g}$ has a removable singularity at ${z_0}$ (and is non-zero at ${z_0}$ once the singularity is removed). Such functions have already made a frequent appearance in previous notes, particularly the case of simple poles when ${m=1}$. The behaviour near ${z_0}$ of a function ${f}$ with a pole of order ${m}$ is well understood: for instance, ${|f(z)|}$ goes to infinity as ${z}$ approaches ${z_0}$ (at a rate comparable to ${|z-z_0|^{-m}}$). These singularities are not, strictly speaking, removable; but if one compactifies the range ${{\bf C}}$ of the holomorphic function ${f: U \backslash S \rightarrow {\bf C}}$ to a slightly larger space ${{\bf C} \cup \{\infty\}}$ known as the Riemann sphere, then the singularity can be removed. In particular, functions ${f: U \backslash S \rightarrow {\bf C}}$ which only have isolated singularities that are either poles or removable can be extended to holomorphic functions ${f: U \rightarrow {\bf C} \cup \{\infty\}}$ taking values in the Riemann sphere. Such functions are known as meromorphic functions, and are nearly as well-behaved as holomorphic functions in many ways. In fact, in one key respect, the family of meromorphic functions is better: the meromorphic functions on ${U}$ turn out to form a field, in particular the quotient of two meromorphic functions is again meromorphic (if the denominator is not identically zero).
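For instance (a small sympy sketch, with a rational function chosen purely for illustration), the order of a pole at ${z_0}$ can be read off as the unique ${m}$ for which ${(z-z_0)^m f(z)}$ has a nonzero finite limit at ${z_0}$:

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / (z**2 * (z - 1))    # a rational function with poles at z = 0 and z = 1

# (z - z0)^m * f(z) tends to a nonzero finite limit as z -> z0 exactly
# when m is the order of the pole at z0.
print(sp.limit(z**2 * f, z, 0))      # -1, so a pole of order 2 at 0
print(sp.limit((z - 1) * f, z, 1))   # 1, so a simple pole at 1
```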

Unfortunately, there are isolated singularities that are neither removable nor poles, and are known as essential singularities. A typical example is the function ${f(z) = e^{1/z}}$, which turns out to have an essential singularity at ${z=0}$. The behaviour of such essential singularities is quite wild; we will prove here the Casorati-Weierstrass theorem, which shows that the image of ${f}$ near the essential singularity is dense in the complex plane, as well as the more difficult great Picard theorem, which asserts that in fact the image can omit at most one point in the complex plane. Nevertheless, around any isolated singularity (even the essential ones) ${z_0}$, it is possible to expand ${f}$ as a variant of a Taylor series known as a Laurent series ${\sum_{n=-\infty}^\infty a_n (z-z_0)^n}$. The ${\frac{1}{z-z_0}}$ coefficient ${a_{-1}}$ of this series is particularly important for contour integration purposes, and is known as the residue of ${f}$ at the isolated singularity ${z_0}$. These residues play a central role in a common generalisation of Cauchy’s theorem and the Cauchy integral formula known as the residue theorem, which is a particularly useful tool for computing (or at least transforming) contour integrals of meromorphic functions, and has proven to be a particularly popular technique to use in analytic number theory. Within complex analysis, one important consequence of the residue theorem is the argument principle, which gives a topological (and analytical) way to control the zeroes and poles of a meromorphic function.
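Even at an essential singularity the residue can be extracted from a contour integral. As a numerical sketch (using a simple quadrature, not part of the text), one can recover the residue ${a_{-1} = 1}$ of ${e^{1/z}}$ at ${0}$ (visible from the expansion ${e^{1/z} = \sum_{n \geq 0} \frac{1}{n!} z^{-n}}$) via the formula ${a_{-1} = \frac{1}{2\pi i} \int_{|z|=1/2} f(z)\ dz}$:

```python
import numpy as np

# Approximate (1/(2*pi*i)) * \oint f(z) dz over the circle |z| = 1/2,
# discretised by the rectangle rule (spectrally accurate here, since the
# integrand is smooth and periodic in the parameter t).
N = 4096
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
gamma = 0.5 * np.exp(1j * t)       # circle of radius 1/2 about the origin
dgamma = 0.5j * np.exp(1j * t)     # gamma'(t)
vals = np.exp(1.0 / gamma) * dgamma
residue = vals.sum() * (2 * np.pi / N) / (2j * np.pi)
print(residue)                      # approximately 1
```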

Finally, there are the non-isolated singularities. Little can be said about these singularities in general (for instance, the residue theorem does not directly apply in the presence of such singularities), but certain types of non-isolated singularities are still relatively easy to understand. One particularly common example of such a non-isolated singularity arises when trying to invert a non-injective function, such as the complex exponential ${z \mapsto \exp(z)}$ or a power function ${z \mapsto z^n}$, leading to branches of multivalued functions such as the complex logarithm ${z \mapsto \log(z)}$ or the ${n^{th}}$ root function ${z \mapsto z^{1/n}}$ respectively. Such branches will typically have a non-isolated singularity along a branch cut; this branch cut can be moved around the complex domain by switching from one branch to another, but usually cannot be eliminated entirely, unless one is willing to lift up the domain ${U}$ to a more general type of domain known as a Riemann surface. As such, one can view branch cuts as being an “artificial” form of singularity, being an artefact of a choice of local coordinates of a Riemann surface, rather than reflecting any intrinsic singularity of the function itself. The further study of Riemann surfaces is an important topic in complex analysis (as well as the related fields of complex geometry and algebraic geometry), but this topic will be postponed to the next course in this sequence.
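The branch cut phenomenon is easy to see in floating point: Python’s `cmath.log` implements the standard branch of the logarithm, whose cut lies along the negative real axis, so the imaginary part jumps by ${2\pi}$ as one crosses the cut:

```python
import cmath

# Evaluate the standard branch of the logarithm just above and just
# below its branch cut on the negative real axis.
above = cmath.log(complex(-1.0, 1e-12))
below = cmath.log(complex(-1.0, -1e-12))
print(above.imag)    # close to +pi
print(below.imag)    # close to -pi: a jump of 2*pi across the cut
```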

Previous set of notes: Notes 2. Next set of notes: Notes 4.

[Warning: these notes have been substantially edited on Nov 9, 2021. Any references to theorem or exercise numbers before this date may now be inaccurate.]

We now come to perhaps the most central theorem in complex analysis (save possibly for the fundamental theorem of calculus), namely Cauchy’s theorem, which allows one to compute a large number of contour integrals ${\int_\gamma f(z)\ dz}$ even without knowing any explicit antiderivative of ${f}$. There are many forms and variants of Cauchy’s theorem. To give one such version, we need the basic topological notion of a homotopy:

Definition 1 (Homotopy) Let ${U}$ be an open subset of ${{\bf C}}$, and let ${\gamma_0: [a,b] \rightarrow U}$, ${\gamma_1: [a,b] \rightarrow U}$ be two curves in ${U}$.

• (i) If ${\gamma_0, \gamma_1}$ have the same initial point ${z_0}$ and terminal point ${z_1}$, we say that ${\gamma_0}$ and ${\gamma_1}$ are homotopic with fixed endpoints in ${U}$ if there exists a continuous map ${\gamma: [0,1] \times [a,b] \rightarrow U}$ such that ${\gamma(0,t) = \gamma_0(t)}$ and ${\gamma(1,t) = \gamma_1(t)}$ for all ${t \in [a,b]}$, and such that ${\gamma(s,a) = z_0}$ and ${\gamma(s,b) = z_1}$ for all ${s \in [0,1]}$.
• (ii) If ${\gamma_0, \gamma_1}$ are closed (but possibly with different initial points), we say that ${\gamma_0}$ and ${\gamma_1}$ are homotopic as closed curves in ${U}$ if there exists a continuous map ${\gamma: [0,1] \times [a,b] \rightarrow U}$ such that ${\gamma(0,t) = \gamma_0(t)}$ and ${\gamma(1,t) = \gamma_1(t)}$ for all ${t \in [a,b]}$, and such that ${\gamma(s,a) = \gamma(s,b)}$ for all ${s \in [0,1]}$.
• (iii) If ${\gamma_2: [c,d] \rightarrow U}$ and ${\gamma_3: [e,f] \rightarrow U}$ are curves with the same initial point and same terminal point, we say that ${\gamma_2}$ and ${\gamma_3}$ are homotopic with fixed endpoints up to reparameterisation in ${U}$ if there is a reparameterisation ${\tilde \gamma_2: [a,b] \rightarrow U}$ of ${\gamma_2}$ which is homotopic with fixed endpoints in ${U}$ to a reparameterisation ${\tilde \gamma_3: [a,b] \rightarrow U}$ of ${\gamma_3}$.
• (iv) If ${\gamma_2: [c,d] \rightarrow U}$ and ${\gamma_3: [e,f] \rightarrow U}$ are closed curves, we say that ${\gamma_2}$ and ${\gamma_3}$ are homotopic as closed curves up to reparameterisation in ${U}$ if there is a reparameterisation ${\tilde \gamma_2: [a,b] \rightarrow U}$ of ${\gamma_2}$ which is homotopic as closed curves in ${U}$ to a reparameterisation ${\tilde \gamma_3: [a,b] \rightarrow U}$ of ${\gamma_3}$.

In the first two cases, the map ${\gamma}$ will be referred to as a homotopy from ${\gamma_0}$ to ${\gamma_1}$, and we will also say that ${\gamma_0}$ can be continuously deformed to ${\gamma_1}$ (either with fixed endpoints, or as closed curves).

Example 2 If ${U}$ is a convex set, that is to say that ${(1-s) z_0 + s z_1 \in U}$ whenever ${z_0,z_1 \in U}$ and ${0 \leq s \leq 1}$, then any two curves ${\gamma_0, \gamma_1: [0,1] \rightarrow U}$ from one point ${z_0}$ to another ${z_1}$ are homotopic with fixed endpoints, by using the homotopy

$\displaystyle \gamma(s,t) := (1-s) \gamma_0(t) + s \gamma_1(t).$

For a similar reason, in a convex open set ${U}$, any two closed curves will be homotopic to each other as closed curves.
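As a sanity check of Example 2 (a numerical sketch with two arbitrarily chosen curves in the open unit disk, which is convex), the straight-line homotopy fixes the endpoints and never leaves the disk:

```python
import numpy as np

t = np.linspace(0, 1, 201)
# two curves in the open unit disk from z0 = 0.5 to z1 = -0.5
gamma0 = (0.5 - t) + 0.4j * np.sin(np.pi * t)
gamma1 = (0.5 - t) - 0.4j * np.sin(np.pi * t)

# the straight-line homotopy gamma(s,t) = (1-s) gamma0(t) + s gamma1(t)
s = np.linspace(0, 1, 201)[:, None]
homotopy = (1 - s) * gamma0 + s * gamma1

# the endpoints stay fixed, and every intermediate curve lies in the disk
assert np.allclose(homotopy[:, 0], 0.5) and np.allclose(homotopy[:, -1], -0.5)
print(np.abs(homotopy).max() < 1)    # True
```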

Exercise 3 Let ${U}$ be an open subset of ${{\bf C}}$.

• (i) Prove that the property of being homotopic with fixed endpoints in ${U}$ is an equivalence relation.
• (ii) Prove that the property of being homotopic as closed curves in ${U}$ is an equivalence relation.
• (iii) If ${\gamma_0: [a,b] \rightarrow U}$, ${\gamma_1: [c,d] \rightarrow U}$ are closed curves with the same initial point, show that ${\gamma_0}$ is homotopic to ${\gamma_1}$ as closed curves up to reparameterisation if and only if ${\gamma_0}$ is homotopic with fixed endpoints up to reparameterisation to ${\gamma_2 + \gamma_1 + (-\gamma_2)}$ for some closed curve ${\gamma_2}$ with the same initial point as ${\gamma_0}$ and ${\gamma_1}$.
• (iv) Define a point in ${U}$ to be a curve ${\gamma_1: [a,b] \rightarrow U}$ of the form ${\gamma_1(t) = z_0}$ for some ${z_0 \in U}$ and all ${t \in [a,b]}$. Let ${\gamma_0: [a,b] \rightarrow U}$ be a closed curve in ${U}$. Show that ${\gamma_0}$ is homotopic with fixed endpoints to a point in ${U}$ if and only if ${\gamma_0}$ is homotopic as a closed curve to a point in ${U}$. (In either case, we will call ${\gamma_0}$ homotopic to a point, null-homotopic, or contractible to a point in ${U}$.)
• (v) If ${\gamma_0, \gamma_1: [a,b] \rightarrow U}$ are curves with the same initial point and the same terminal point, show that ${\gamma_0}$ is homotopic to ${\gamma_1}$ with fixed endpoints in ${U}$ if and only if ${\gamma_0 + (-\gamma_1)}$ is homotopic to a point in ${U}$.
• (vi) If ${U}$ is connected, and ${\gamma_0, \gamma_1: [a,b] \rightarrow U}$ are any two curves in ${U}$, show that there exists a continuous map ${\gamma: [0,1] \times [a,b] \rightarrow U}$ such that ${\gamma(0,t) = \gamma_0(t)}$ and ${\gamma(1,t) = \gamma_1(t)}$ for all ${t \in [a,b]}$. Thus the notion of homotopy becomes rather trivial if one does not fix the endpoints or require the curve to be closed.
• (vii) Show that if ${\gamma_1: [a,b] \rightarrow U}$ is a reparameterisation of ${\gamma_0: [a,b] \rightarrow U}$, then ${\gamma_0}$ and ${\gamma_1}$ are homotopic with fixed endpoints in ${U}$.
• (viii) Prove that the property of being homotopic with fixed endpoints in ${U}$ up to reparameterisation is an equivalence relation.
• (ix) Prove that the property of being homotopic as closed curves in ${U}$ up to reparameterisation is an equivalence relation.

We can then phrase Cauchy’s theorem as an assertion that contour integration on holomorphic functions is a homotopy invariant. More precisely:

Theorem 4 (Cauchy’s theorem) Let ${U}$ be an open subset of ${{\bf C}}$, and let ${f: U \rightarrow {\bf C}}$ be holomorphic.

• (i) If ${\gamma_0: [a,b] \rightarrow U}$ and ${\gamma_1: [c,d] \rightarrow U}$ are rectifiable curves that are homotopic in ${U}$ with fixed endpoints up to reparameterisation, then

$\displaystyle \int_{\gamma_0} f(z)\ dz = \int_{\gamma_1} f(z)\ dz.$

• (ii) If ${\gamma_0: [a,b] \rightarrow U}$ and ${\gamma_1: [c,d] \rightarrow U}$ are closed rectifiable curves that are homotopic in ${U}$ as closed curves up to reparameterisation, then

$\displaystyle \int_{\gamma_0} f(z)\ dz = \int_{\gamma_1} f(z)\ dz.$

This version of Cauchy’s theorem is particularly useful for applications, as it explicitly brings into play the powerful technique of contour shifting, which allows one to compute a contour integral by replacing the contour with a homotopic contour on which the integral is easier to compute or estimate. This formulation of Cauchy’s theorem also highlights the close relationship between contour integrals and the algebraic topology of the complex plane (and open subsets ${U}$ thereof). Setting ${\gamma_1}$ to be a point, we obtain an important special case of Cauchy’s theorem (which is in fact equivalent to the full theorem):

Corollary 5 (Cauchy’s theorem, again) Let ${U}$ be an open subset of ${{\bf C}}$, and let ${f: U \rightarrow {\bf C}}$ be holomorphic. Then for any closed rectifiable curve ${\gamma}$ in ${U}$ that is contractible in ${U}$ to a point, one has ${\int_\gamma f(z)\ dz = 0}$.

Exercise 6 Show that Theorem 4 and Corollary 5 are logically equivalent.
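As a numerical sanity check of Corollary 5 (a sketch: the entire function ${e^z}$ is holomorphic on all of ${{\bf C}}$, where every closed curve is contractible), the integral of ${e^z}$ over the unit circle vanishes up to rounding error:

```python
import numpy as np

N = 4096
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
gamma = np.exp(1j * t)          # the unit circle, parameterised by t
dgamma = 1j * np.exp(1j * t)    # gamma'(t)

# rectangle-rule approximation of the contour integral of e^z
integral = (np.exp(gamma) * dgamma).sum() * (2 * np.pi / N)
print(abs(integral))            # essentially zero
```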

An important feature to note about Cauchy’s theorem is the global nature of its hypothesis on ${f}$. The conclusion of Cauchy’s theorem only involves the values of a function ${f}$ on the images of the two curves ${\gamma_0, \gamma_1}$. However, in order for the hypotheses of Cauchy’s theorem to apply, the function ${f}$ must be holomorphic not only on the images of ${\gamma_0, \gamma_1}$, but on an open set ${U}$ that is large enough (and sufficiently free of “holes”) to support a homotopy between the two curves. This point can be emphasised through the following fundamental near-counterexample to Cauchy’s theorem:

Example 7 (Key example) Let ${U := {\bf C} \backslash \{0\}}$, and let ${f: U \rightarrow {\bf C}}$ be the holomorphic function ${f(z) := \frac{1}{z}}$. Let ${\gamma_{0,1,\circlearrowleft}: [0,2\pi] \rightarrow {\bf C}}$ be the closed unit circle contour ${\gamma_{0,1,\circlearrowleft}(t) := e^{it}}$. Direct calculation shows that

$\displaystyle \int_{\gamma_{0,1,\circlearrowleft}} f(z)\ dz = 2\pi i \neq 0.$

As a consequence of this and Cauchy’s theorem, we conclude that the contour ${\gamma_{0,1,\circlearrowleft}}$ is not contractible to a point in ${U}$; note that this does not contradict Example 2 because ${U}$ is not convex. Thus we see that the lack of holomorphicity (or singularity) of ${f}$ at the origin can be “blamed” for the non-vanishing of the integral of ${f}$ on the closed contour ${\gamma_{0,1,\circlearrowleft}}$, even though this contour does not come anywhere near the origin. In other words, the global behaviour of ${f}$, not just the behaviour in the local neighbourhood of ${\gamma_{0,1,\circlearrowleft}}$, has an impact on the contour integral.
One can of course rewrite this example to involve non-closed contours instead of closed ones. For instance, if we let ${\gamma_0, \gamma_1: [0,\pi] \rightarrow U}$ denote the half-circle contours ${\gamma_0(t) := e^{it}}$ and ${\gamma_1(t) := e^{-it}}$, then ${\gamma_0,\gamma_1}$ are both contours in ${U}$ from ${+1}$ to ${-1}$, but one has

$\displaystyle \int_{\gamma_0} f(z)\ dz = +\pi i$

whereas

$\displaystyle \int_{\gamma_1} f(z)\ dz = -\pi i.$

In order for this to be consistent with Cauchy’s theorem, we conclude that ${\gamma_0}$ and ${\gamma_1}$ are not homotopic in ${U}$ (even after reparameterisation).
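All three integrals in this example are easy to reproduce numerically (a sketch using a trapezoidal discretisation of the contour integral; the contours are as above):

```python
import numpy as np

def contour_integral(f, gamma, dgamma, a, b, n=20001):
    """Trapezoidal approximation of int_gamma f(z) dz for gamma: [a,b] -> C."""
    t = np.linspace(a, b, n)
    vals = f(gamma(t)) * dgamma(t)
    h = (b - a) / (n - 1)
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

f = lambda z: 1 / z
# full unit circle: 2*pi*i
circle = contour_integral(f, lambda t: np.exp(1j * t), lambda t: 1j * np.exp(1j * t), 0, 2 * np.pi)
# upper half circle from +1 to -1: +pi*i
upper = contour_integral(f, lambda t: np.exp(1j * t), lambda t: 1j * np.exp(1j * t), 0, np.pi)
# lower half circle from +1 to -1: -pi*i
lower = contour_integral(f, lambda t: np.exp(-1j * t), lambda t: -1j * np.exp(-1j * t), 0, np.pi)
print(circle, upper, lower)
```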

In the specific case of functions of the form ${\frac{1}{z}}$, or more generally ${\frac{f(z)}{z-z_0}}$ for some point ${z_0}$ and some ${f}$ that is holomorphic in some neighbourhood of ${z_0}$, we can quantify the precise failure of Cauchy’s theorem through the Cauchy integral formula, and through the concept of a winding number. These turn out to be extremely powerful tools for understanding both the nature of holomorphic functions and the topology of open subsets of the complex plane, as we shall see in this and later notes.

If ${M}$ is a connected topological manifold, and ${p}$ is a point in ${M}$, the (topological) fundamental group ${\pi_1(M,p)}$ of ${M}$ at ${p}$ is traditionally defined as the space of equivalence classes of loops starting and ending at ${p}$, with two loops considered equivalent if they are homotopic to each other. (One can of course define the fundamental group for more general classes of topological spaces, such as locally path connected spaces, but we will stick with topological manifolds in order to avoid pathologies.) As the name suggests, it is one of the most basic topological invariants of a manifold, which among other things can be used to classify the covering spaces of that manifold. Indeed, given any such covering ${\phi: N \rightarrow M}$, the fundamental group ${\pi_1(M,p)}$ acts (on the right) by monodromy on the fibre ${\phi^{-1}(\{p\})}$, and conversely given any discrete set with a right action of ${\pi_1(M,p)}$, one can find a covering space with that monodromy action (this can be done by “tensoring” the universal cover with the given action, as illustrated below the fold). In more category-theoretic terms: monodromy produces an equivalence of categories between the category of covers of ${M}$, and the category of discrete ${\pi_1(M,p)}$-sets.

One of the basic tools used to compute fundamental groups is van Kampen’s theorem:

Theorem 1 (van Kampen’s theorem) Let ${M_1, M_2}$ be connected open sets covering a connected topological manifold ${M}$ with ${M_1 \cap M_2}$ also connected, and let ${p}$ be an element of ${M_1 \cap M_2}$. Then ${\pi_1(M_1 \cup M_2,p)}$ is isomorphic to the amalgamated free product ${\pi_1(M_1,p) *_{\pi_1(M_1\cap M_2,p)} \pi_1(M_2,p)}$.

Since the topological fundamental group is customarily defined using loops, it is not surprising that many proofs of van Kampen’s theorem (e.g. the one in Hatcher’s text) proceed by an analysis of the loops in ${M_1 \cup M_2}$, carefully deforming them into combinations of loops in ${M_1}$ or in ${M_2}$ and using the combinatorial description of the amalgamated free product (which was discussed in this previous blog post). But I recently learned (thanks to the responses to this recent MathOverflow question of mine) that by using the above-mentioned equivalence of categories, one can convert statements about fundamental groups to statements about coverings. In particular, van Kampen’s theorem turns out to be equivalent to a basic statement about how to glue a cover of ${M_1}$ and a cover of ${M_2}$ together to give a cover of ${M}$, and the amalgamated free product emerges through its categorical definition as a coproduct, rather than through its combinatorial description. One advantage of this alternate proof is that it can be extended to other contexts (such as the étale fundamental groups of varieties or schemes) in which the concept of a path or loop is no longer useful, but for which the notion of a covering is still important. I am thus recording (mostly for my own benefit) the covering-based proof of van Kampen’s theorem in the topological setting below the fold.

This is a sequel to my previous blog post “Cayley graphs and the geometry of groups“. In that post, the concept of a Cayley graph of a group ${G}$ was used to place some geometry on that group ${G}$. In this post, we explore a variant of that theme, in which (fragments of) a Cayley graph on ${G}$ is used to describe the basic algebraic structure of ${G}$, and in particular elementary word identities in ${G}$. Readers who are familiar with either category theory or group homology/cohomology will recognise these concepts lurking not far beneath the surface; we will remark briefly on these connections later in this post. However, no knowledge of categories or cohomology is needed for the main discussion, which is primarily focused on elementary group theory.

Throughout this post, we fix a single group ${G = (G,\cdot)}$, which is allowed to be non-abelian and/or infinite. All our graphs will be directed, with loops and multiple edges permitted.

In the previous post, we drew the entire Cayley graph of a group ${G}$. Here, we will be working much more locally, and will only draw the portions of the Cayley graph that are relevant to the discussion. In this graph, the vertices are elements ${x}$ of the group ${G}$, and one draws a directed edge from ${x}$ to ${xg}$ labeled (or “coloured”) by the group element ${g}$ for any ${x, g \in G}$; the graph consisting of all such vertices and edges will be denoted ${Cay(G,G)}$. Thus, a typical edge in ${Cay(G,G)}$ looks like this:

Figure 1.

One usually does not work with the complete Cayley graph ${Cay(G,G)}$. It is customary to instead work with smaller Cayley graphs ${Cay(G,S)}$, in which the edge colours ${g}$ are restricted to a smaller subset of ${G}$, such as a set of generators for ${G}$. As we will be working locally, we will in fact work with even smaller fragments of ${Cay(G,G)}$ at a time; in particular, we only use a handful of colours (no more than nine, in fact, for any given diagram), and we will not require these colours to generate the entire group (we do not care if the Cayley graph is connected or not, as this is a global property rather than a local one).

Cayley graphs are left-invariant: for any ${a \in G}$, the left translation map ${x \mapsto ax}$ is a graph isomorphism. To emphasise this left invariance, we will usually omit the vertex labels, and leave only the coloured directed edge, like so:

Figure 2.

This is analogous to how, in undergraduate mathematics and physics, vectors in Euclidean space are often depicted as arrows of a given magnitude and direction, with the initial and final points of this arrow being of secondary importance only. (Indeed, this depiction of vectors in a vector space can be viewed as an abelian special case of the more general depiction of group elements used in this post.)

Let us define a diagram to be a finite directed graph ${H = (V,E)}$, with edges coloured by elements of ${G}$, which has at least one graph homomorphism into the complete Cayley graph ${Cay(G,G)}$ of ${G}$; thus there exists a map ${\phi: V \rightarrow G}$ (not necessarily injective) with the property that ${\phi(w) = \phi(v) g}$ whenever ${(v,w)}$ is a directed edge in ${H}$ coloured by a group element ${g \in G}$. Informally, a diagram is a finite subgraph of a Cayley graph with the vertex labels omitted, and with distinct vertices permitted to represent the same group element. Thus, for instance, the single directed edge displayed in Figure 2 is a very simple example of a diagram. An even simpler example of a diagram would be a depiction of the identity element:

Figure 3.

We will however omit the identity loops in our diagrams in order to reduce clutter.

We make the obvious remark that any directed edge in a diagram can be coloured by at most one group element ${g}$, since ${y=xg, y=xh}$ implies ${g=h}$. This simple observation provides a way to prove group theoretic identities using diagrams: to show that two group elements ${g, h}$ are equal, it suffices to show that they connect together (with the same orientation) the same pair of vertices in a diagram.

Remark 1 One can also interpret these diagrams as commutative diagrams in a category in which all the objects are copies of ${G}$, and the morphisms are right-translation maps. However, we will deviate somewhat from the category theoretic way of thinking here by focusing on the geometric arrangement and shape of these diagrams, rather than on their abstract combinatorial description. In particular, we view the arrows more as distorted analogues of vector arrows, than as the abstract arrows appearing in category theory.

Just as vector addition can be expressed via concatenation of arrows, group multiplication can be described by concatenation of directed edges. Indeed, for any ${x,g,h \in G}$, the vertices ${x, xg, xgh}$ can be connected by the following triangular diagram:

Figure 4.

In a similar spirit, inversion is described by the following diagram:

Figure 5.

We make the pedantic remark though that we do not consider a ${g^{-1}}$ edge to be the reversal of the ${g}$ edge, but rather as a distinct edge that just happens to have the same initial and final endpoints as the reversal of the ${g}$ edge. (This will be of minor importance later, when we start integrating “${1}$-forms” on such edges.)

A fundamental operation for us will be that of gluing two diagrams together.

Lemma 1 ((Labeled) gluing) Let ${D_1 = (V_1,E_1), D_2 = (V_2,E_2)}$ be two diagrams of a given group ${G}$. Suppose that the intersection ${D_1 \cap D_2 := (V_1 \cap V_2, E_1 \cap E_2)}$ of the two diagrams connects all of ${V_1 \cap V_2}$ (i.e. any two elements of ${V_1 \cap V_2}$ are joined by a path in ${D_1 \cap D_2}$). Then the union ${D_1 \cup D_2 := (V_1 \cup V_2, E_1 \cup E_2)}$ is also a diagram of ${G}$.

Proof: By hypothesis, we have graph homomorphisms ${\phi_1: D_1 \rightarrow Cay(G,G)}$, ${\phi_2: D_2 \rightarrow Cay(G,G)}$. If they agree on ${D_1 \cap D_2}$ then one simply glues together the two homomorphisms to create a new graph homomorphism ${\phi: D_1 \cup D_2 \rightarrow Cay(G,G)}$. If they do not agree, one can apply a left translation to either ${\phi_1}$ or ${\phi_2}$ to make the two diagrams agree on at least one vertex of ${D_1 \cap D_2}$; then by the connected nature of ${D_1 \cap D_2}$ we see that they now must agree on all vertices of ${D_1 \cap D_2}$, and then we can form the glued graph homomorphism as before. $\Box$

The above lemma required one to specify the labels of the vertices of ${D_1,D_2}$ (in order to form the intersection ${D_1 \cap D_2}$ and union ${D_1 \cup D_2}$). However, if one is presented with two diagrams ${D_1, D_2}$ with unlabeled vertices, one can identify some partial set of vertices of ${D_1}$ with a partial set of vertices of ${D_2}$ of matching cardinality. Provided that the subdiagram common to ${D_1}$ and ${D_2}$ after this identification connects all of the common vertices together, we may use the above lemma to create a glued diagram ${D}$.
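The gluing procedure in the proof of Lemma 1 can be made quite concrete. Here is a toy sketch over the cyclic group ${{\bf Z}/12{\bf Z}}$ (written additively, so an edge coloured ${g}$ runs from ${x}$ to ${x+g}$, and left translation is addition of a constant); the two diagrams and their labelings below are invented purely for illustration:

```python
MOD = 12  # working in the cyclic group Z/12Z

# two diagrams, each a dict {(initial vertex, final vertex): colour}
D1 = {('A', 'B'): 3, ('B', 'C'): 5}
D2 = {('B', 'C'): 5, ('C', 'D'): 2}

phi1 = {'A': 0, 'B': 3, 'C': 8}   # a graph homomorphism of D1 into Cay(G,G)
phi2 = {'B': 7, 'C': 0, 'D': 2}   # a graph homomorphism of D2 into Cay(G,G)

def is_hom(D, phi):
    """Check that each g-coloured edge (v,w) satisfies phi(w) = phi(v) + g."""
    return all((phi[v] + g) % MOD == phi[w] for (v, w), g in D.items())

assert is_hom(D1, phi1) and is_hom(D2, phi2)

# phi1 and phi2 disagree on the common (connected) part {B, C}, so
# left-translate phi2 to agree with phi1 at B; connectivity of the
# intersection then forces agreement at C as well.
shift = (phi1['B'] - phi2['B']) % MOD
phi2 = {v: (x + shift) % MOD for v, x in phi2.items()}
assert phi2['C'] == phi1['C']

# gluing the two homomorphisms labels the union diagram
phi = {**phi1, **phi2}
assert is_hom({**D1, **D2}, phi)
print(phi)    # {'A': 0, 'B': 3, 'C': 8, 'D': 10}
```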

For instance, if a diagram ${D}$ contains two of the three edges in the triangular diagram in Figure 4, one can “fill in” the triangle by gluing in the third edge:

Figure 6.

One can use glued diagrams to demonstrate various basic group-theoretic identities. For instance, by gluing together two copies of the triangular diagram in Figure 4 to create the glued diagram

Figure 7.

and then filling in two more triangles, we obtain a tetrahedral diagram that demonstrates the associative law ${(gh)k = g(hk)}$:

Figure 8.

Similarly, by gluing together two copies of Figure 4 with three copies of Figure 5 in an appropriate order, we can demonstrate the Abel identity ${(gh)^{-1} = h^{-1} g^{-1}}$:

Figure 9.

In addition to gluing, we will also use the trivial operation of erasing: if ${D}$ is a diagram for a group ${G}$, then any subgraph of ${D}$ (formed by removing vertices and/or edges) is also a diagram of ${G}$. This operation is not strictly necessary for our applications, but serves to reduce clutter in the pictures.

If two group elements ${g, h}$ commute, then we obtain a parallelogram as a diagram, exactly as in the vector space case:

Figure 10.

In general, of course, two arbitrary group elements ${g,h}$ will fail to commute, and so this parallelogram is no longer available. However, various substitutes for this diagram exist. For instance, if we introduce the conjugate ${g^h := h^{-1} g h}$ of one group element ${g}$ by another, then we have the following slightly distorted parallelogram:

Figure 11.

By appropriate gluing and filling, this can be used to demonstrate the homomorphism properties of a conjugation map ${g \mapsto g^h}$:

Figure 12.

Figure 13.

Another way to replace the parallelogram in Figure 10 is to introduce the commutator ${[g,h] := g^{-1}h^{-1}gh}$ of two elements, in which case we can perturb the parallelogram into a pentagon:

Figure 14.

We will tend to depict commutator edges as being somewhat shorter than the edges generating that commutator, reflecting a “perturbative” or “nilpotent” philosophy. (Of course, to fully reflect a nilpotent perspective, one should orient commutator edges in a different dimension from their generating edges, but of course the diagrams drawn here do not have enough dimensions to display this perspective easily.) We will also be adopting a “Lie” perspective of interpreting groups as behaving like perturbations of vector spaces, in particular by trying to draw all edges of the same colour as being approximately (though not perfectly) parallel to each other (and with approximately the same length).

Gluing the above pentagon with the conjugation parallelogram and erasing some edges, we discover a “commutator-conjugate” triangle, describing the basic identity ${g^h = g [g,h]}$:

Figure 15.

Other gluings can also give the basic relations between commutators and conjugates. For instance, by gluing the pentagon in Figure 14 with its reflection, we see that ${[g,h] = [h,g]^{-1}}$. The following diagram, obtained by gluing together copies of Figures 11 and 15, demonstrates that ${[h,g^{-1}] = [g,h]^{g^{-1}}}$,

Figure 16.

while this figure demonstrates that ${[g,hk] = [g,k] [g,h]^k}$:

Figure 17.
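Since all of these are universal group identities, they can also be sanity-checked by brute force in any concrete group before (or after) chasing them through diagrams; the following sketch verifies them over the symmetric group ${S_4}$, with permutations encoded as tuples:

```python
from itertools import permutations

def mul(p, q):
    """Group product pq of permutations given as tuples."""
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    """Group inverse of a permutation tuple."""
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def conj(g, h):   # g^h = h^{-1} g h
    return mul(mul(inv(h), g), h)

def comm(g, h):   # [g,h] = g^{-1} h^{-1} g h
    return mul(mul(mul(inv(g), inv(h)), g), h)

S4 = list(permutations(range(4)))
for g in S4:
    for h in S4:
        assert conj(g, h) == mul(g, comm(g, h))             # g^h = g [g,h]
        assert comm(g, h) == inv(comm(h, g))                # [g,h] = [h,g]^{-1}
        assert comm(h, inv(g)) == conj(comm(g, h), inv(g))  # [h,g^{-1}] = [g,h]^{g^{-1}}
        for k in S4:
            assert comm(g, mul(h, k)) == mul(comm(g, k), conj(comm(g, h), k))  # [g,hk] = [g,k][g,h]^k
print("identities verified over S_4")
```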

Now we turn to a more sophisticated identity, the Hall-Witt identity

$\displaystyle [[g,h],k^g] [[k,g],h^k] [[h,k],g^h] = 1,$

which is the fully noncommutative version of the more well-known Jacobi identity for Lie algebras.
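Like the simpler commutator identities above, the Hall-Witt identity holds universally, so it can be checked by brute force in a concrete group before being established diagrammatically; the following sketch verifies it over all of ${S_4}$ (permutations encoded as tuples):

```python
from itertools import permutations

def mul(p, q):
    """Group product pq of permutations given as tuples."""
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    """Group inverse of a permutation tuple."""
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def conj(g, h):   # g^h = h^{-1} g h
    return mul(mul(inv(h), g), h)

def comm(g, h):   # [g,h] = g^{-1} h^{-1} g h
    return mul(mul(mul(inv(g), inv(h)), g), h)

e = (0, 1, 2, 3)
S4 = list(permutations(range(4)))
for g in S4:
    for h in S4:
        for k in S4:
            # [[g,h],k^g] [[k,g],h^k] [[h,k],g^h] = 1
            w = mul(mul(comm(comm(g, h), conj(k, g)),
                        comm(comm(k, g), conj(h, k))),
                    comm(comm(h, k), conj(g, h)))
            assert w == e
print("Hall-Witt identity verified over S_4")
```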

The full diagram for the Hall-Witt identity resembles a slightly truncated parallelopiped. Drawing this truncated parallelopiped in full would result in a rather complicated-looking diagram, so I will instead display three components of this diagram separately, and leave it to the reader to mentally glue these three components back to form the full parallelopiped. The first component of the diagram is formed by gluing together three pentagons from Figure 14, and looks like this:

Figure 18.

This should be thought of as the “back” of the truncated parallelopiped needed to establish the Hall-Witt identity.

While it is not needed for proving the Hall-Witt identity, we also observe for future reference that we may also glue in some distorted parallelograms and obtain a slightly more complicated diagram:

Figure 19.

To form the second component, let us now erase all interior components of Figure 18 or Figure 19:

Figure 20.

Then we fill in three distorted parallelograms:

Figure 21.

This is the second component, and is the “front” of the truncated parallelopiped, minus the portions exposed by the truncation.

Finally, we turn to the third component. We begin by erasing the outer edges from the second component in Figure 21:

Figure 22.

We glue in three copies of the commutator-conjugate triangle from Figure 15:

Figure 23.

But now we observe that we can fill in three pentagons, and obtain a small triangle with edges ${[[g,h],k^g] [[k,g],h^k] [[h,k],g^h]}$:

Figure 24.

Erasing everything except this triangle gives the Hall-Witt identity. Alternatively, one can glue together Figures 18, 21, and 24 to obtain a truncated parallelopiped which one can view as a geometric representation of the proof of the Hall-Witt identity.
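As with the simpler relations, the Hall-Witt identity itself can be confirmed mechanically. Here is a self-contained Python check of mine (not from the original text), again with the conventions ${g^h = h^{-1} g h}$ and ${[g,h] = g^{-1} h^{-1} g h}$, verifying the identity over all ${24^3}$ triples in ${S_4}$:

```python
from itertools import permutations

# Permutations of {0,...,n-1} as tuples; mul(a, b) applies a first, then b.
def mul(a, b):
    return tuple(b[a[i]] for i in range(len(a)))

def inv(a):
    r = [0] * len(a)
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

def conj(g, h):   # g^h = h^{-1} g h
    return mul(mul(inv(h), g), h)

def comm(g, h):   # [g,h] = g^{-1} h^{-1} g h
    return mul(mul(inv(g), inv(h)), mul(g, h))

S4 = list(permutations(range(4)))
e = tuple(range(4))
for g in S4:
    for h in S4:
        for k in S4:
            t1 = comm(comm(g, h), conj(k, g))   # [[g,h], k^g]
            t2 = comm(comm(k, g), conj(h, k))   # [[k,g], h^k]
            t3 = comm(comm(h, k), conj(g, h))   # [[h,k], g^h]
            assert mul(mul(t1, t2), t3) == e    # Hall-Witt identity
```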

Among other things, I found these diagrams to be useful to visualise group cohomology; I give a simple example of this below, developing an analogue of the Hall-Witt identity for ${2}$-cocycles.

The classical formulation of Hilbert’s fifth problem asks whether topological groups that have the topological structure of a manifold are necessarily Lie groups. This is indeed the case, thanks to the following theorem of Gleason and Montgomery-Zippin:

Theorem 1 (Hilbert’s fifth problem) Let ${G}$ be a topological group which is locally Euclidean. Then ${G}$ is isomorphic to a Lie group.

We have discussed the proof of this result, and of related results, in previous posts. There is however a generalisation of Hilbert’s fifth problem which remains open, namely the Hilbert-Smith conjecture, in which it is a space acted on by the group which has the manifold structure, rather than the group itself:

Conjecture 2 (Hilbert-Smith conjecture) Let ${G}$ be a locally compact topological group which acts continuously and faithfully (or effectively) on a connected finite-dimensional manifold ${X}$. Then ${G}$ is isomorphic to a Lie group.

Note that Conjecture 2 easily implies Theorem 1, as one can pass to the connected component ${G^\circ}$ of a locally Euclidean group (which is clearly locally compact), and then look at the action of ${G^\circ}$ on itself by left-multiplication.

The hypothesis that the action is faithful (i.e. each non-identity group element ${g \in G \backslash \{\hbox{id}\}}$ acts non-trivially on ${X}$) cannot be completely eliminated, as any group ${G}$ will have a trivial action on any space ${X}$. The requirement that ${G}$ be locally compact is similarly necessary: consider for instance the diffeomorphism group ${\hbox{Diff}(S^1)}$ of, say, the unit circle ${S^1}$, which acts on ${S^1}$ but is infinite dimensional and is not locally compact (with, say, the uniform topology). Finally, the connectedness of ${X}$ is also important: the infinite torus ${G = ({\bf R}/{\bf Z})^{\bf N}}$ (with the product topology) acts faithfully on the disconnected manifold ${X := {\bf R}/{\bf Z} \times {\bf N}}$ by the action

$\displaystyle (g_n)_{n \in {\bf N}} (\theta, m) := (\theta + g_m, m).$

The conjecture in full generality remains open. However, there are a number of partial results. For instance, it was observed by Montgomery and Zippin that the conjecture is true for transitive actions, by a modification of the argument used to establish Theorem 1. This special case of the Hilbert-Smith conjecture (or more precisely, a generalisation thereof in which “finite-dimensional manifold” was replaced by “locally connected, locally compact, finite-dimensional space”) was used in Gromov’s proof of his famous theorem on groups of polynomial growth. I record the argument of Montgomery and Zippin below the fold.

Another partial result is the reduction of the Hilbert-Smith conjecture to the ${p}$-adic case. Indeed, it is known that Conjecture 2 is equivalent to

Conjecture 3 (Hilbert-Smith conjecture for ${p}$-adic actions) It is not possible for the ${p}$-adic group ${{\bf Z}_p}$ to act continuously and effectively on a connected finite-dimensional manifold ${X}$.

The reduction to the ${p}$-adic case follows from the structural theory of locally compact groups (specifically, the Gleason-Yamabe theorem discussed in previous posts) and some results of Newman that sharply restrict the ability of periodic actions on a manifold ${X}$ to be close to the identity. I record this argument (which appears for instance in this paper of Lee) below the fold also.

Combinatorial incidence geometry is the study of the possible combinatorial configurations between geometric objects such as lines and circles. One of the basic open problems in the subject has been the Erdős distance problem, posed in 1946:

Problem 1 (Erdős distance problem) Let ${N}$ be a large natural number. What is the least number ${\# \{ |x_i-x_j|: 1 \leq i < j \leq N \}}$ of distances that are determined by ${N}$ points ${x_1,\ldots,x_N}$ in the plane?

Erdős called this least number ${g(N)}$. For instance, one can check that ${g(3)=1}$ and ${g(4)=2}$, although the precise computation of ${g}$ rapidly becomes more difficult after this. By considering ${N}$ points in arithmetic progression, we see that ${g(N) \leq N-1}$. By considering the slightly more sophisticated example of a ${\sqrt{N} \times \sqrt{N}}$ lattice grid (assuming that ${N}$ is a square number for simplicity), and using some analytic number theory, one can obtain the slightly better asymptotic bound ${g(N) = O( N / \sqrt{\log N} )}$.
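For concreteness, the two upper-bound configurations are easy to compare by brute force for small ${N}$. The following Python snippet (a hypothetical illustration of mine) counts distinct distances, working with squared distances since squaring is injective on the nonnegative reals; for ${N=16}$ the arithmetic progression determines ${N-1 = 15}$ distances, while the ${4 \times 4}$ grid determines only ${9}$:

```python
def distinct_distances(points):
    """Number of distinct distances among a finite planar point set
    (counted via squared distances, which are exact integers here)."""
    return len({(px - qx) ** 2 + (py - qy) ** 2
                for i, (px, py) in enumerate(points)
                for (qx, qy) in points[i + 1:]})

N = 16
ap = [(i, 0) for i in range(N)]                      # arithmetic progression
grid = [(i, j) for i in range(4) for j in range(4)]  # sqrt(N) x sqrt(N) grid

print(distinct_distances(ap))    # 15, i.e. N - 1
print(distinct_distances(grid))  # 9, already below N - 1
```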

On the other hand, lower bounds are more difficult to obtain. As observed by Erdős, an easy argument, ultimately based on the incidence geometry fact that any two circles intersect in at most two points, gives the lower bound ${g(N) \gg N^{1/2}}$. The exponent ${1/2}$ has slowly increased over the years through a series of increasingly intricate arguments combining such incidence geometry facts with other known results in combinatorial incidence geometry (most notably the Szemerédi-Trotter theorem) and with some tools from additive combinatorics; however, these methods seemed to fall quite short of the optimal exponent of ${1}$. (Indeed, prior to last week, the best known lower bound was approximately ${N^{0.8641}}$, due to Katz and Tardos.)

Very recently, though, Guth and Katz have obtained a near-optimal result:

Theorem 2 One has ${g(N) \gg N / \log N}$.

The proof neatly combines together several powerful and modern tools in a new way: a recent geometric reformulation of the problem due to Elekes and Sharir; the polynomial method as used recently by Dvir, Guth, and Guth-Katz on related incidence geometry problems (and discussed previously on this blog); and the somewhat older method of cell decomposition (also discussed on this blog). A key new insight is that the polynomial method (and more specifically, the polynomial Ham Sandwich theorem, also discussed previously on this blog) can be used to efficiently create cells.

In this post, I thought I would sketch some of the key ideas used in the proof, though I will not give the full argument here (the paper itself is largely self-contained, well motivated, and of only moderate length). In particular I will not go through all the various cases of configuration types that one has to deal with in the full argument, but only some illustrative special cases.

To simplify the exposition, I will repeatedly rely on “pigeonholing cheats”. A typical such cheat: if I have ${n}$ objects (e.g. ${n}$ points or ${n}$ lines), each of which could be of one of two types, I will assume that either all ${n}$ of the objects are of the first type, or all ${n}$ of the objects are of the second type. (In truth, I can only assume that at least ${n/2}$ of the objects are of the first type, or at least ${n/2}$ of the objects are of the second type; but in practice, having ${n/2}$ instead of ${n}$ only ends up costing an unimportant multiplicative constant in the type of estimates used here.) A related such cheat: if one has ${n}$ objects ${A_1,\ldots,A_n}$ (again, think of ${n}$ points or ${n}$ circles), and to each object ${A_i}$ one can associate some natural number ${k_i}$ (e.g. some sort of “multiplicity” for ${A_i}$) that is of “polynomial size” (of size ${O(N^{O(1)})}$), then I will assume in fact that all the ${k_i}$ are in a fixed dyadic range ${[k,2k]}$ for some ${k}$. (In practice, the dyadic pigeonhole principle can only achieve this after throwing away all but about ${n/\log N}$ of the original ${n}$ objects; it is this type of logarithmic loss that eventually leads to the logarithmic factor in the main theorem.) Using the notation ${X \sim Y}$ to denote the assertion that ${C^{-1} Y \leq X \leq CY}$ for an absolute constant ${C}$, we thus have ${k_i \sim k}$ for all ${i}$; in other words, ${k_i}$ is morally constant.
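The dyadic pigeonhole step can be made concrete with a short Python routine (an illustration of mine, with made-up multiplicities): bucket the ${k_i}$ into dyadic ranges ${[2^j, 2^{j+1})}$ and keep the most populous bucket. Since multiplicities of polynomial size occupy only ${O(\log N)}$ buckets, the retained bucket loses at most a logarithmic factor:

```python
import math
from collections import defaultdict

def dyadic_pigeonhole(ks):
    """Bucket positive integers into dyadic ranges [2^j, 2^(j+1)) and
    return (2^j, bucket) for the most populous bucket."""
    buckets = defaultdict(list)
    for k in ks:
        buckets[k.bit_length() - 1].append(k)   # j such that 2^j <= k < 2^(j+1)
    j, bucket = max(buckets.items(), key=lambda item: len(item[1]))
    return 2 ** j, bucket

n = 1000
ks = list(range(1, n + 1))        # hypothetical "multiplicities" of polynomial size
k0, bucket = dyadic_pigeonhole(ks)
# at most log2(n) + 1 nonempty buckets, so the largest one retains
# at least n / (log2(n) + 1) of the objects
assert len(bucket) >= n / (math.log2(n) + 1)
```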

I will also use asymptotic notation rather loosely, to avoid cluttering the exposition with a certain amount of routine but tedious bookkeeping of constants. In particular, I will use the informal notation ${X \lll Y}$ or ${Y \ggg X}$ to denote the statement that ${X}$ is “much less than” ${Y}$ or ${Y}$ is “much larger than” ${X}$, by some large constant factor.

Let ${d}$ be a natural number. A basic operation in the topology of oriented, connected, compact, ${d}$-dimensional manifolds (hereby referred to simply as manifolds for short) is that of connected sum: given two manifolds ${M, N}$, the connected sum ${M \# N}$ is formed by removing a small ball from each manifold and then gluing the boundary together (in the orientation-preserving manner). This gives another oriented, connected, compact manifold, and the exact nature of the balls removed and their gluing is not relevant for topological purposes (any two such procedures give homeomorphic manifolds). It is easy to see that this operation is associative and commutative up to homeomorphism, thus ${M \# N \cong N \# M}$ and ${(M \# N) \# O \cong M \# (N \# O)}$, where we use ${M \cong N}$ to denote the assertion that ${M}$ is homeomorphic to ${N}$.

(It is important that the orientation is preserved; if, for instance, ${d=3}$, and ${M}$ is an irreducible 3-manifold which is chiral (thus ${M \not \cong -M}$, where ${-M}$ is the orientation reversal of ${M}$), then the connect sum ${M \# M}$ of ${M}$ with itself is also chiral (by the prime decomposition; in fact one does not even need the irreducibility hypothesis for this claim), but ${M \# -M}$ is not. A typical example of an irreducible chiral manifold is the complement of a trefoil knot. Thanks to Danny Calegari for this example.)

The ${d}$-dimensional sphere ${S^d}$ is an identity element (up to homeomorphism) for connect sum: ${M \# S^d \cong M}$ for any ${M}$. A basic result in the subject is that the sphere is itself irreducible:

Theorem 1 (Irreducibility of the sphere) If ${S^d \cong M \# N}$, then ${M, N \cong S^d}$.

For ${d=1}$ (curves), this theorem is trivial because the only connected ${1}$-manifolds are homeomorphic to circles. For ${d=2}$ (surfaces), the theorem is also easy by considering the genus of ${M, N, M \# N}$. For ${d=3}$ the result follows from the prime decomposition. But for higher ${d}$, these ad hoc methods no longer work. Nevertheless, there is an elegant proof of Theorem 1, due to Mazur, and known as Mazur’s swindle. The reason for this name should become clear when one sees the proof, which I reproduce below.

Suppose ${M \# N \cong S^d}$. Now consider the infinite connected sum

$\displaystyle (M \# N) \# (M \# N) \# (M \# N) \# \ldots.$

This is an infinite connected sum of spheres, and can thus be viewed as a half-open cylinder, which is topologically equivalent to a sphere with a small ball removed; alternatively, one can contract the boundary at infinity to a point to recover the sphere ${S^d}$. On the other hand, by using the associativity of connected sum (which will still work for the infinite connected sum, if one thinks about it carefully), the above manifold is also homeomorphic to

$\displaystyle M \# (N \# M) \# (N \# M) \# \ldots$

which is the connected sum of ${M}$ with an infinite sequence of spheres, or equivalently ${M}$ with a small ball removed. Contracting the small balls to a point, we conclude that ${M \cong S^d}$, and a similar argument gives ${N \cong S^d}$.

A typical corollary of Theorem 1 is a generalisation of the Jordan curve theorem: any locally flat embedded copy of ${S^{d-1}}$ in ${S^d}$ divides the sphere ${S^d}$ into two regions homeomorphic to balls ${B^d}$. (Some sort of regularity hypothesis, such as local flatness, is essential, thanks to the counterexample of the Alexander horned sphere. If one assumes smoothness instead of local flatness, the problem is known as the Schönflies problem, and is apparently quite subtle, especially in the four-dimensional case ${d=4}$.)

One can ask whether there is a way to prove Theorem 1 for general ${d}$ without recourse to the infinite sum swindle. I do not know the complete answer to this, but some evidence against this hope can be seen by noting that if one works in the smooth category instead of the topological category (i.e. working with smooth manifolds, and only equating manifolds that are diffeomorphic, and not merely homeomorphic), then the exotic spheres in five and higher dimensions provide a counterexample to the smooth version of Theorem 1: it is possible to find two exotic spheres whose connected sum is diffeomorphic to the standard sphere. (Indeed, in five and higher dimensions, the exotic sphere structures on ${S^d}$ form a finite abelian group under connect sum, with the standard sphere being the identity element. The situation in four dimensions is much less well understood.) The problem with the swindle here is that the homeomorphism generated by the infinite number of applications of the associativity law is not smooth when one identifies the boundary with a point.

The basic idea of the swindle – grouping an alternating infinite sum in two different ways – also appears in a few other contexts. Most classically, it is used to show that the sum ${1-1+1-1+\ldots}$ does not converge in any sense which is consistent with the infinite associative law, since this would then imply that ${1=0}$; indeed, one can view the swindle as a dichotomy between the infinite associative law and the presence of non-trivial cancellation. (In the topological manifold category, one has the former but not the latter, whereas in the case of ${1-1+1-1+\ldots}$, one has the latter but not the former.) The alternating series test can also be viewed as a variant of the swindle.

Another variant of the swindle arises in the proof of the Cantor–Bernstein–Schröder theorem. Suppose one has two sets ${A, B}$, together with injections from ${A}$ to ${B}$ and from ${B}$ to ${A}$. The first injection leads to an identification ${B \cong C \uplus A}$ for some set ${C}$, while the second injection leads to an identification ${A \cong D \uplus B}$. Iterating this leads to identifications

$\displaystyle A \cong (D \uplus C \uplus D \uplus \ldots) \uplus X$

and

$\displaystyle B \cong (C \uplus D \uplus C \uplus \ldots) \uplus X$

for some additional set ${X}$. Using the identification ${D \uplus C \cong C \uplus D}$ then yields an explicit bijection between ${A}$ and ${B}$.
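For finite sets this construction can be carried out explicitly. The following Python sketch (my own illustration; the helper name and toy sets are hypothetical) traces each element's backward chain ${a \leftarrow g(b) \leftarrow f(a') \leftarrow \ldots}$ to decide whether to map it by ${f}$ or by ${g^{-1}}$, and then checks that the result is a bijection:

```python
def cbs_bijection(A, B, f, g):
    """Explicit bijection A -> B built from injections f: A -> B and g: B -> A,
    following the chain-tracing proof of Cantor-Bernstein-Schroeder."""
    g_inv = {g(b): b for b in B}
    f_inv = {f(a): a for a in A}

    def maps_by_f(a):
        # Follow the backward chain; if it exits at an element of A \ g(B),
        # use f; if it exits at an element of B \ f(A), use g^{-1}; on a
        # cycle (possible for finite sets) either choice works, so use f.
        seen = set()
        while a not in seen:
            seen.add(a)
            if a not in g_inv:
                return True            # chain stops at an "A-stopper"
            b = g_inv[a]
            if b not in f_inv:
                return False           # chain stops at a "B-stopper"
            a = f_inv[b]
        return True                    # chain is a cycle

    return {a: (f(a) if maps_by_f(a) else g_inv[a]) for a in A}

A, B = [0, 1, 2, 3], ['w', 'x', 'y', 'z']
f = {0: 'x', 1: 'w', 2: 'y', 3: 'z'}.__getitem__   # injection A -> B
g = {'w': 1, 'x': 2, 'y': 3, 'z': 0}.__getitem__   # injection B -> A
h = cbs_bijection(A, B, f, g)
assert sorted(h.values()) == sorted(B)             # h is onto, hence bijective
```

(For finite sets the theorem is of course trivial, since the two injections force ${|A|=|B|}$; the point of the example is only to make the chain decomposition in the proof concrete.)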

(Thanks to Danny Calegari for telling me about the swindle, while we were both waiting to catch an airplane.)

[Update, Oct 7: See the comments for several further examples of swindle-type arguments.]

This week I was in my home town of Adelaide, Australia, for the 2009 annual meeting of the Australian Mathematical Society. This was a fairly large meeting (almost 500 participants). One of the highlights of such a large meeting is the ability to listen to plenary lectures in fields adjacent to one’s own, in which speakers can give high-level overviews of a subject without getting too bogged down in the technical details. From the talks here I learned a number of basic things which were well known to experts in that field, but which I had not fully appreciated, and so I wanted to share them here.

The first instance of this was from a plenary lecture by Danny Calegari entitled “faces of the stable commutator length (scl) ball”. One thing I learned from this talk is that in homotopy theory, there is a very close relationship between topological spaces (such as manifolds) on one hand, and groups (and generalisations of groups) on the other, so that homotopy-theoretic questions about the former can often be converted to purely algebraic questions about the latter, and vice versa; indeed, it seems that homotopy theorists almost think of topological spaces and groups as being essentially the same concept, despite looking very different at first glance. To get from a space ${X}$ to a group, one looks at homotopy groups ${\pi_n(X)}$ of that space, and in particular the fundamental group ${\pi_1(X)}$; conversely, to get from a group ${G}$ back to a topological space one can use the Eilenberg-MacLane spaces ${K(G,n)}$ associated to that group (and more generally, a Postnikov tower associated to a sequence of such groups, together with additional data). In Danny’s talk, he gave the following specific example: the problem of finding the least complicated embedded surface with prescribed (and homologically trivial) boundary in a space ${X}$, where “least complicated” is measured by genus (or more precisely, the negative component of Euler characteristic), is essentially equivalent to computing the commutator length of the element in the fundamental group ${\pi_1(X)}$ corresponding to that boundary (i.e. the least number of commutators one is required to multiply together to express the element); and the stable version of this problem (where one allows the surface to wrap around the boundary ${n}$ times for some large ${n}$, and one computes the asymptotic ratio between the Euler characteristic and ${n}$) is similarly equivalent to computing the stable commutator length of that group element. 
(Incidentally, there is a simple combinatorial open problem regarding commutator length in the free group, which I have placed on the polymath wiki.)

This theme was reinforced by another plenary lecture by Ezra Getzler entitled “${n}$-groups”, in which he showed how sequences of groups (such as the first ${n}$ homotopy groups ${\pi_1(X),\ldots,\pi_n(X)}$) can be enhanced into a more powerful structure known as an ${n}$-group, which is more complicated to define, requiring the machinery of simplicial complexes, sheaves, and nerves. Nevertheless, this gives a very topological and geometric interpretation of the concept of a group and its generalisations, which are of use in topological quantum field theory, among other things.

Mohammed Abouzaid gave a plenary lecture entitled “Functoriality in homological mirror symmetry”. One thing I learned from this talk was that the (partially conjectural) phenomenon of (homological) mirror symmetry is one of several types of duality, in which the behaviour of maps into one mathematical object ${X}$ (e.g. immersed or embedded curves, surfaces, etc.) is closely tied to the behaviour of maps out of a dual mathematical object ${\hat X}$ (e.g. functionals, vector fields, forms, sections, bundles, etc.). A familiar example of this is in linear algebra: by taking adjoints, a linear map into a vector space ${X}$ can be related to an adjoint linear map mapping out of the dual space ${X^*}$. Here, the behaviour of curves in a two-dimensional symplectic manifold (or more generally, Lagrangian submanifolds in a higher-dimensional symplectic manifold), is tied to the behaviour of holomorphic sections on bundles over a dual algebraic variety, where the precise definition of “behaviour” is category-theoretic, involving some rather complicated gadgets such as the Fukaya category of a symplectic manifold. As with many other applications of category theory, it is not just the individual pairings between an object and its dual which are of interest, but also the relationships between these pairings, as formalised by various functors between categories (and natural transformations between functors). (One approach to mirror symmetry was discussed by Shing-Tung Yau at a distinguished lecture at UCLA, as transcribed in this previous post.)

There was a related theme in a talk by Dennis Gaitsgory entitled “The geometric Langlands program”. From my (very superficial) understanding of the Langlands program, the behaviour of specific maps into a reductive Lie group ${G}$, such as representations in ${G}$ of a fundamental group, étale fundamental group, class group, or Galois group of a global field, is conjecturally tied to specific maps out of a dual reductive Lie group ${\hat G}$, such as irreducible automorphic representations of ${\hat G}$, or of various structures (such as derived categories) attached to vector bundles on ${\hat G}$. There are apparently some tentatively conjectured links (due to Witten?) between Langlands duality and mirror symmetry, but they seem at present to be fairly distinct phenomena (one is topological and geometric, the other is more algebraic and arithmetic). For abelian groups, Langlands duality is closely connected to the much more classical Pontryagin duality in Fourier analysis. (There is an analogue of Fourier analysis for nonabelian groups, namely representation theory, but the link from this to the Langlands program is somewhat murky, at least to me.)

Related also to this was a plenary talk by Akshay Venkatesh, entitled “The Cohen-Lenstra heuristics over global fields”. Here, the question concerned the conjectural behaviour of class groups of quadratic fields, and in particular to explain the numerically observed phenomenon that about ${75.4\%}$ of all quadratic fields ${{\Bbb Q}[\sqrt{d}]}$ (with $d$ prime) enjoy unique factorisation (i.e. have trivial class group). (Class groups, as I learned in these two talks, are arithmetic analogues of the (abelianised) fundamental groups in topology, with Galois groups serving as the analogue of the full fundamental group.) One thing I learned here was that there was a canonical way to randomly generate a (profinite) abelian group, by taking the product of randomly generated finite abelian ${p}$-groups for each prime ${p}$. The way to canonically randomly generate a finite abelian ${p}$-group is to take large integers ${n, d}$, and look at the cokernel of a random homomorphism from ${({\mathbb Z}/p^n{\mathbb Z})^d}$ to ${({\mathbb Z}/p^n{\mathbb Z})^d}$. In the limit ${n,d \rightarrow \infty}$ (or by replacing ${{\mathbb Z}/p^n{\mathbb Z}}$ with the ${p}$-adics and just sending ${d \rightarrow \infty}$), this stabilises and generates any given ${p}$-group ${G}$ with probability

$\displaystyle \frac{1}{|\hbox{Aut}(G)|} \prod_{j=1}^\infty (1 - \frac{1}{p^j}), \ \ \ \ \ (1)$

where ${\hbox{Aut}(G)}$ is the group of automorphisms of ${G}$. In particular this leads to the strange identity

$\displaystyle \sum_G \frac{1}{|\hbox{Aut}(G)|} = \prod_{j=1}^\infty (1 - \frac{1}{p^j})^{-1} \ \ \ \ \ (2)$

where ${G}$ ranges over all ${p}$-groups; I do not know how to prove this identity other than via the above probability computation, the proof of which I give below the fold.
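One can at least confirm (2) numerically. Finite abelian ${p}$-groups correspond to partitions, and ${|\hbox{Aut}(G)|}$ has a classical closed form (a formula written down by Hillar and Rhea, among others; the encoding below is my own). The following Python sketch sums ${1/|\hbox{Aut}(G)|}$ over all abelian ${2}$-groups of order up to ${2^{30}}$ and compares with the truncated infinite product:

```python
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples (the trivial group is ())."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def aut_order(lam, p):
    """|Aut(G)| for G = direct sum of Z/p^{e_i}, with e_1 <= ... <= e_n the
    sorted parts of lam, via the Hillar-Rhea formula."""
    e = sorted(lam)
    n = len(e)
    d = [max(l for l in range(1, n + 1) if e[l - 1] == e[k - 1])
         for k in range(1, n + 1)]
    c = [min(l for l in range(1, n + 1) if e[l - 1] == e[k - 1])
         for k in range(1, n + 1)]
    out = 1
    for k in range(1, n + 1):
        out *= p ** d[k - 1] - p ** (k - 1)
        out *= p ** (e[k - 1] * (n - d[k - 1]))
        out *= p ** ((e[k - 1] - 1) * (n - c[k - 1] + 1))
    return out

p = 2
lhs = sum(1 / aut_order(lam, p) for n in range(0, 31) for lam in partitions(n))
rhs = 1.0
for j in range(1, 80):
    rhs /= 1 - p ** (-j)
print(lhs, rhs)  # both approximately 3.4627
```

The tail beyond order ${2^{30}}$ is negligible here, since ${|\hbox{Aut}(G)|}$ grows rapidly with the order of ${G}$.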

Based on the heuristic that the class group should behave “randomly” subject to some “obvious” constraints, it is expected that a randomly chosen real quadratic field ${{\Bbb Q}[\sqrt{d}]}$ has unique factorisation (i.e. the class group has trivial ${p}$-group component for every ${p}$) with probability

$\displaystyle \prod_{p \hbox{ odd}} \prod_{j=2}^\infty (1 - \frac{1}{p^j}) \approx 0.754,$

whereas a randomly chosen imaginary quadratic field ${{\Bbb Q}[\sqrt{-d}]}$ has unique factorisation with probability

$\displaystyle \prod_{p \hbox{ odd}} \prod_{j=1}^\infty (1 - \frac{1}{p^j}) = 0.$

The former claim is conjectural, whereas the latter claim follows from (for instance) Siegel’s theorem on the size of the class group, as discussed in this previous post. Ellenberg, Venkatesh, and Westerland have recently established some partial results towards the function field analogues of these heuristics.
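The numerical constant in the real quadratic case is straightforward to approximate; the following short Python computation of mine truncates the double product over odd primes (primes up to ${10^5}$ are ample for three decimal places):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

# prod over odd primes p of prod_{j >= 2} (1 - p^{-j})
value = 1.0
for p in primes_up_to(10 ** 5):
    if p == 2:
        continue
    for j in range(2, 60):
        t = p ** (-j)
        if t < 1e-18:
            break
        value *= 1 - t
print(value)  # approximately 0.754
```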

Below the fold is a version of my talk “Recent progress on the Kakeya conjecture” that I gave at the Fefferman conference.

A dynamical system is a space X, together with an action $(g,x) \mapsto gx$ of some group $G = (G,\cdot)$.  [In practice, one often places topological or measure-theoretic structure on X or G, but this will not be relevant for the current discussion.  In most applications, G is an abelian (additive) group such as the integers ${\Bbb Z}$ or the reals ${\Bbb R}$, but I prefer to use multiplicative notation here.]  A useful notion in the subject is that of an (abelian) cocycle; this is a function $\rho: G \times X \to U$ taking values in an abelian group $U = (U,+)$ that obeys the cocycle equation

$\rho(gh, x) = \rho(h,x) + \rho(g,hx)$ (1)

for all $g,h \in G$ and $x \in X$.  [Again, if one is placing topological or measure-theoretic structure on the system, one would want $\rho$ to be continuous or measurable, but we will ignore these issues.] The significance of cocycles in the subject is that they allow one to construct (abelian) extensions or skew products $X \times_\rho U$ of the original dynamical system X, defined as the Cartesian product $\{ (x,u): x \in X, u \in U \}$ with the group action $g(x,u) := (gx,u + \rho(g,x))$.  (The cocycle equation (1) is needed to ensure that one indeed has a group action, and in particular that $(gh)(x,u) = g(h(x,u))$.)  This turns out to be a useful means to build complex dynamical systems out of simpler ones.  (For instance, one can build nilsystems by starting with a point and taking a finite number of abelian extensions of that point by a certain type of cocycle.)

A special type of cocycle is a coboundary; this is a cocycle $\rho: G \times X \to U$ that takes the form $\rho(g,x) := F(gx) - F(x)$ for some function $F: X \to U$.  (Note that the cocycle equation (1) is automatically satisfied if $\rho$ is of this form.)  An extension $X \times_\rho U$ of a dynamical system by a coboundary $\rho(g,x) := F(gx) - F(x)$ can be conjugated to the trivial extension $X \times_0 U$ by the change of variables $(x,u) \mapsto (x,u-F(x))$.
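These definitions are easy to test on a toy system. In the following Python sketch (a hypothetical example of mine, with $G = {\Bbb Z}$ acting on $X = {\Bbb Z}/N{\Bbb Z}$ by translation and $U = {\Bbb Z}/M{\Bbb Z}$), we check that a coboundary satisfies the cocycle equation (1), that the skew product is a genuine group action, and that the change of variables $(x,u) \mapsto (x,u-F(x))$ conjugates the coboundary extension to the trivial one:

```python
N, M = 12, 7                       # X = Z/N, U = Z/M; G = Z acts by x -> x + g
F = [(3 * x * x + 5 * x) % M for x in range(N)]   # an arbitrary F: X -> U

def rho(g, x):                     # coboundary rho(g,x) = F(gx) - F(x)
    return (F[(x + g) % N] - F[x]) % M

def act(g, xu):                    # skew product: g(x,u) = (gx, u + rho(g,x))
    x, u = xu
    return ((x + g) % N, (u + rho(g, x)) % M)

def phi(xu):                       # change of variables (x,u) -> (x, u - F(x))
    x, u = xu
    return (x, (u - F[x]) % M)

for g in range(-6, 7):
    for h in range(-6, 7):
        for x in range(N):
            # cocycle equation (1): rho(gh, x) = rho(h, x) + rho(g, hx)
            assert rho(g + h, x) == (rho(h, x) + rho(g, (x + h) % N)) % M
            for u in range(M):
                # skew product is an action: (gh)(x,u) = g(h(x,u))
                assert act(g + h, (x, u)) == act(g, act(h, (x, u)))
                # phi intertwines the extension with the trivial one,
                # on which g acts by (x,u) -> (gx, u)
                assert phi(act(g, (x, u))) == ((x + g) % N, phi((x, u))[1])
```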

While every coboundary is a cocycle, the converse is not always true.  (For instance, if X is a point, the only coboundary is the zero function, whereas a cocycle is essentially the same thing as a homomorphism from G to U, so in many cases there will be more cocycles than coboundaries.  For a contrasting example, if X and G are finite (for simplicity) and G acts freely on X, it is not difficult to see that every cocycle is a coboundary.)  One can measure the extent to which this converse fails by introducing the first cohomology group $H^1(G,X,U) := Z^1(G,X,U) / B^1(G,X,U)$, where $Z^1(G,X,U)$ is the space of cocycles $\rho: G \times X \to U$ and $B^1(G,X,U)$ is the space of coboundaries (note that both spaces are abelian groups).  In my forthcoming paper with Vitaly Bergelson and Tamar Ziegler on the ergodic inverse Gowers conjecture (which should be available shortly), we make substantial use of some basic facts about this cohomology group (in the category of measure-preserving systems) that were established in a paper of Host and Kra.

The above terminology of cocycles, coboundaries, and cohomology groups of course comes from the theory of cohomology in algebraic topology.  Comparing the formal definitions of cohomology groups in that theory with the ones given above, there is certainly quite a bit of similarity, but in the dynamical systems literature the precise connection does not seem to be heavily emphasised.   The purpose of this post is to record the precise fashion in which dynamical systems cohomology is a special case of cochain complex cohomology from algebraic topology, and more specifically is analogous to singular cohomology (and can also be viewed as the group cohomology of the space of scalar-valued functions on X, when viewed as a G-module); this is not particularly difficult, but I found it an instructive exercise (especially given that my algebraic topology is extremely rusty), though perhaps this post is more for my own benefit than for anyone else.