
This fall (starting Monday, September 26), I will be teaching a graduate topics course which I have entitled “Hilbert’s fifth problem and related topics.” The course is going to focus on three related topics:

• Hilbert’s fifth problem on the topological description of Lie groups, as well as the closely related (local) classification of locally compact groups (the Gleason-Yamabe theorem);
• Approximate groups in nonabelian groups, and their classification via the Gleason-Yamabe theorem (this is very recent work of Emmanuel Breuillard, Ben Green, Tom Sanders, and myself, building upon earlier work of Hrushovski);
• Gromov’s theorem on groups of polynomial growth, as proven via the classification of approximate groups (as well as some consequences to fundamental groups of Riemannian manifolds).

I have already blogged about these topics repeatedly in the past (particularly with regard to Hilbert’s fifth problem), and I intend to recycle some of that material in the lecture notes for this course.

The above three families of results exemplify two broad principles (part of what I like to call “the dichotomy between structure and randomness”):

• (Rigidity) If a group-like object exhibits a weak amount of regularity, then it (or a large portion thereof) often automatically exhibits a strong amount of regularity as well;
• (Structure) This strong regularity manifests itself either as Lie type structure (in continuous settings) or nilpotent type structure (in discrete settings). (In some cases, “nilpotent” should be replaced by sister properties such as “abelian”, “solvable”, or “polycyclic”.)

Let me illustrate what I mean by these two principles with two simple examples, one in the continuous setting and one in the discrete setting. We begin with a continuous example. Given an ${n \times n}$ complex matrix ${A \in M_n({\bf C})}$, define the matrix exponential ${\exp(A)}$ of ${A}$ by the formula

$\displaystyle \exp(A) := \sum_{k=0}^\infty \frac{A^k}{k!} = 1 + A + \frac{1}{2!} A^2 + \frac{1}{3!} A^3 + \ldots$

which can easily be verified to be an absolutely convergent series.
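(As a numerical aside, not part of the argument: one can watch this absolute convergence directly in a short Python/NumPy sketch. The matrix size, random seed, and truncation levels below are arbitrary choices; the point is that once the tail of the series is tiny, doubling the number of terms changes nothing to machine precision.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

def exp_series(A, terms):
    """Partial sum sum_{k < terms} A^k / k! of the matrix exponential."""
    term = np.eye(A.shape[0], dtype=complex)
    total = term.copy()
    for k in range(1, terms):
        term = term @ A / k  # incrementally maintains A^k / k!
        total = total + term
    return total

# Absolute convergence: the 40-term and 80-term partial sums agree
# to machine precision, since the tail beyond k = 40 is negligible.
assert np.allclose(exp_series(A, 40), exp_series(A, 80))
```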

Exercise 1 Show that the map ${A \mapsto \exp(A)}$ is a real analytic (and even complex analytic) map from ${M_n({\bf C})}$ to ${M_n({\bf C})}$, and obeys the restricted homomorphism property

$\displaystyle \exp(sA) \exp(tA) = \exp((s+t)A) \ \ \ \ \ (1)$

for all ${A \in M_n({\bf C})}$ and ${s,t \in {\bf C}}$.
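One can probe Exercise 1 numerically; in the Python sketch below, the particular nilpotent test matrices and the values of ${s,t}$ are arbitrary choices. The identity (1) holds because ${sA}$ and ${tA}$ commute; for a pair of non-commuting matrices, the analogous identity ${\exp(A)\exp(B) = \exp(A+B)}$ generally fails.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via its (absolutely convergent) power series."""
    term = np.eye(A.shape[0], dtype=complex)
    total = term.copy()
    for k in range(1, terms):
        term = term @ A / k
        total = total + term
    return total

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent
B = np.array([[0.0, 0.0], [1.0, 0.0]])  # nilpotent, AB != BA
s, t = 0.7, -1.3

# (1): sA and tA commute, so the exponentials multiply as claimed.
assert np.allclose(expm(s * A) @ expm(t * A), expm((s + t) * A))

# But exp(A) exp(B) != exp(A + B) in general when A and B do not commute.
assert not np.allclose(expm(A) @ expm(B), expm(A + B))
```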

Proposition 1 (Rigidity and structure of matrix homomorphisms) Let ${n}$ be a natural number. Let ${GL_n({\bf C})}$ be the group of invertible ${n \times n}$ complex matrices. Let ${\Phi: {\bf R} \rightarrow GL_n({\bf C})}$ be a map obeying two properties:

• (Group-like object) ${\Phi}$ is a homomorphism, thus ${\Phi(s) \Phi(t) = \Phi(s+t)}$ for all ${s,t \in {\bf R}}$.
• (Weak regularity) The map ${t \mapsto \Phi(t)}$ is continuous.

Then:

• (Strong regularity) The map ${t \mapsto \Phi(t)}$ is smooth (i.e. infinitely differentiable). In fact it is even real analytic.
• (Lie-type structure) There exists a (unique) complex ${n \times n}$ matrix ${A}$ such that ${\Phi(t) = \exp(tA)}$ for all ${t \in {\bf R}}$.

Proof: Let ${\Phi}$ be as above. Let ${\epsilon > 0}$ be a small number (depending only on ${n}$). By the homomorphism property, ${\Phi(0) = 1}$ (where we use ${1}$ here to denote the identity element of ${GL_n({\bf C})}$), and so by continuity we may find a small ${t_0>0}$ such that ${\Phi(t) = 1 + O(\epsilon)}$ for all ${t \in [-t_0,t_0]}$ (we use some arbitrary norm here on the space of ${n \times n}$ matrices, and allow implied constants in the ${O()}$ notation to depend on ${n}$).

The map ${A \mapsto \exp(A)}$ is real analytic, and its derivative at the origin is the identity map, so by the inverse function theorem it is a diffeomorphism near ${0}$. Thus, if ${\epsilon}$ is small enough, we can find a matrix ${B}$ with ${\|B\| = O(\epsilon)}$ such that ${\Phi(t_0) = \exp(B)}$. By the homomorphism property and (1), we thus have

$\displaystyle \Phi(t_0/2)^2 = \Phi(t_0) = \exp(B) = \exp(B/2)^2.$

On the other hand, by another application of the inverse function theorem we see that the squaring map ${A \mapsto A^2}$ is a diffeomorphism near ${1}$ in ${GL_n({\bf C})}$, and thus (if ${\epsilon}$ is small enough)

$\displaystyle \Phi(t_0/2) = \exp(B/2).$

We may iterate this argument (for a fixed, but small, value of ${\epsilon}$) and conclude that

$\displaystyle \Phi(t_0/2^k) = \exp(B/2^k)$

for all ${k = 0,1,2,\ldots}$. By the homomorphism property and (1) we thus have

$\displaystyle \Phi(qt_0) = \exp(qB)$

whenever ${q}$ is a dyadic rational, i.e. a rational of the form ${a/2^k}$ for some integer ${a}$ and natural number ${k}$. By continuity we thus have

$\displaystyle \Phi(st_0) = \exp(sB)$

for all real ${s}$. Setting ${A := B/t_0}$ we conclude that

$\displaystyle \Phi(t) = \exp(tA)$

for all real ${t}$, which gives existence of the representation and also real analyticity and smoothness. Finally, uniqueness of the representation ${\Phi(t) = \exp(tA)}$ follows from the identity

$\displaystyle A = \frac{d}{dt} \exp(tA)|_{t=0}.$

$\Box$
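The uniqueness identity above also suggests how one might recover ${A}$ from samples of ${\Phi}$ in practice. In the Python sketch below (the test matrix ${A}$ and the step size ${h}$ are arbitrary choices), a symmetric difference quotient of ${\Phi}$ at ${t=0}$ recovers ${A}$ up to ${O(h^2)}$ error.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via its power series (adequate for small A)."""
    term = np.eye(A.shape[0], dtype=complex)
    total = term.copy()
    for k in range(1, terms):
        term = term @ A / k
        total = total + term
    return total

A = np.array([[0.2, -1.0], [1.0, 0.2]], dtype=complex)
Phi = lambda t: expm(t * A)  # the one-parameter group of Proposition 1

# A = d/dt exp(tA)|_{t=0}: a symmetric difference quotient recovers A
# with O(h^2) error.
h = 1e-5
A_est = (Phi(h) - Phi(-h)) / (2 * h)
assert np.allclose(A_est, A, atol=1e-8)
```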

Exercise 2 Generalise Proposition 1 by replacing the hypothesis that ${\Phi}$ is continuous with the hypothesis that ${\Phi}$ is Lebesgue measurable. (Hint: use the Steinhaus theorem.) Show that the proposition fails (assuming the axiom of choice) if this hypothesis is omitted entirely.

Note how one needs both the group-like structure and the weak regularity in combination in order to ensure the strong regularity; neither is sufficient on its own. We will see variants of the above basic argument throughout the course. Here, the task of obtaining smooth (or real analytic) structure was relatively easy, because we could borrow the smooth (or real analytic) structure of the domain ${{\bf R}}$ and range ${M_n({\bf C})}$; but, somewhat remarkably, we shall see that one can still build such smooth or analytic structures even when none of the original objects have any such structure to begin with.

Now we turn to a second illustration of the above principles, namely Jordan’s theorem, which uses a discreteness hypothesis to upgrade Lie type structure to nilpotent (and in this case, abelian) structure. We shall formulate Jordan’s theorem in a slightly stilted fashion in order to emphasise the adherence to the above-mentioned principles.

Theorem 2 (Jordan’s theorem) Let ${G}$ be an object with the following properties:

• (Group-like object) ${G}$ is a group.
• (Discreteness) ${G}$ is finite.
• (Lie-type structure) ${G}$ is contained in ${U_n({\bf C})}$ (the group of unitary ${n \times n}$ matrices) for some ${n}$.

Then there is a subgroup ${G'}$ of ${G}$ such that

• (${G'}$ is close to ${G}$) The index ${|G/G'|}$ of ${G'}$ in ${G}$ is ${O_n(1)}$ (i.e. bounded by ${C_n}$ for some quantity ${C_n}$ depending only on ${n}$).
• (Nilpotent-type structure) ${G'}$ is abelian.

A key observation in the proof of Jordan’s theorem is that if two unitary elements ${g, h \in U_n({\bf C})}$ are close to the identity, then their commutator ${[g,h] = g^{-1}h^{-1}gh}$ is even closer to the identity (in, say, the operator norm ${\| \|_{op}}$). Indeed, since multiplication on the left or right by unitary elements does not affect the operator norm, we have

$\displaystyle \| [g,h] - 1 \|_{op} = \| gh - hg \|_{op}$

$\displaystyle = \| (g-1)(h-1) - (h-1)(g-1) \|_{op}$

and so by the triangle inequality

$\displaystyle \| [g,h] - 1 \|_{op} \leq 2 \|g-1\|_{op} \|h-1\|_{op}. \ \ \ \ \ (2)$
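The estimate (2) is easy to probe numerically. In the Python sketch below, two non-commuting rotations in ${O_3({\bf R})}$ serve as an arbitrary choice of unitary elements near the identity; the commutator is not only within the bound (2) but, as the quadratic right-hand side predicts, strictly closer to the identity than either factor.

```python
import numpy as np

def rot_x(t):
    """Rotation by angle t about the x-axis (a unitary near 1 for small t)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):
    """Rotation by angle t about the z-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def op_norm(M):
    return np.linalg.norm(M, 2)  # operator norm = largest singular value

I = np.eye(3)
g, h = rot_x(0.1), rot_z(0.1)  # two non-commuting unitaries near 1
comm = np.linalg.inv(g) @ np.linalg.inv(h) @ g @ h  # [g,h]

lhs = op_norm(comm - I)
rhs = 2 * op_norm(g - I) * op_norm(h - I)
assert lhs <= rhs              # the estimate (2)
assert lhs < op_norm(g - I)    # the commutator is closer to the identity
```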

Now we can prove Jordan’s theorem.

Proof: We induct on ${n}$, the case ${n=1}$ being trivial. Suppose first that ${G}$ contains a central element ${g}$ which is not a multiple of the identity. Then, by definition, ${G}$ is contained in the centraliser ${Z(g)}$ of ${g}$, which by the spectral theorem is isomorphic to a product ${U_{n_1}({\bf C}) \times \ldots \times U_{n_k}({\bf C})}$ of smaller unitary groups. Projecting ${G}$ to each of these factor groups and applying the induction hypothesis, we obtain the claim.

Thus we may assume that ${G}$ contains no central elements other than multiples of the identity. Now pick a small ${\epsilon > 0}$ (one could take ${\epsilon=\frac{1}{10n}}$ in fact) and consider the subgroup ${G'}$ of ${G}$ generated by those elements of ${G}$ that are within ${\epsilon}$ of the identity (in the operator norm). By considering a maximal ${\epsilon}$-net of ${G}$ we see that ${G'}$ has index at most ${O_{n,\epsilon}(1)}$ in ${G}$. By arguing as before, we may assume that ${G'}$ has no central elements other than multiples of the identity.

If ${G'}$ consists only of multiples of the identity, then we are done. If not, take an element ${g}$ of ${G'}$ that is not a multiple of the identity, and which is as close as possible to the identity (here is where we crucially use that ${G}$ is finite). By (2), we see that if ${\epsilon}$ is sufficiently small depending on ${n}$, and if ${h}$ is one of the generators of ${G'}$, then ${[g,h]}$ lies in ${G'}$ and is closer to the identity than ${g}$, and is thus a multiple of the identity. On the other hand, ${[g,h]}$ has determinant ${1}$. Given that it is so close to the identity, it must therefore be the identity (if ${\epsilon}$ is small enough). In other words, ${g}$ is central in ${G'}$, and is thus a multiple of the identity. But this contradicts the hypothesis that there are no central elements other than multiples of the identity, and we are done. $\Box$

Commutator estimates such as (2) will play a fundamental role in many of the arguments we will see in this course; as we saw above, such estimates combine very well with a discreteness hypothesis, but will also be very useful in the continuous setting.

Exercise 3 Generalise Jordan’s theorem to the case when ${G}$ is a finite subgroup of ${GL_n({\bf C})}$ rather than of ${U_n({\bf C})}$. (Hint: The elements of ${G}$ are not necessarily unitary, and thus do not necessarily preserve the standard Hilbert inner product of ${{\bf C}^n}$. However, if one averages that inner product by the finite group ${G}$, one obtains a new inner product on ${{\bf C}^n}$ that is preserved by ${G}$, which allows one to conjugate ${G}$ to a subgroup of ${U_n({\bf C})}$. This averaging trick is (a small) part of Weyl’s unitary trick in representation theory.)
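The averaging trick in the hint is easy to carry out numerically. In the Python sketch below (the order-2 non-unitary group is an arbitrary example), we average the standard inner product over the group to get a ${G}$-invariant positive definite form ${P}$, factor ${P = C^* C}$, and check that conjugation by ${C}$ makes every element of ${G}$ unitary.

```python
import numpy as np

# A finite (order-2) subgroup of GL_2(C) that is not unitary.
g = np.array([[1.0, 1.0], [0.0, -1.0]])
assert np.allclose(g @ g, np.eye(2))  # g has order 2
G = [np.eye(2), g]

# Average the standard inner product over G: P is positive definite,
# and h^* P h = P for every h in G.
P = sum(h.conj().T @ h for h in G) / len(G)
for h in G:
    assert np.allclose(h.conj().T @ P @ h, P)

# Factor P = C^* C; then C h C^{-1} is unitary for every h in G,
# since (C h C^{-1})^* (C h C^{-1}) = C^{-*} (h^* P h) C^{-1} = 1.
L = np.linalg.cholesky(P)  # P = L @ L^*
C = L.conj().T             # so P = C^* @ C
for h in G:
    u = C @ h @ np.linalg.inv(C)
    assert np.allclose(u.conj().T @ u, np.eye(2))
```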

Exercise 4 (Inability to discretise nonabelian Lie groups) Show that if ${n \geq 3}$, then the orthogonal group ${O_n({\bf R})}$ cannot contain arbitrarily dense finite subgroups, in the sense that there exists an ${\epsilon = \epsilon_n > 0}$ depending only on ${n}$ such that for every finite subgroup ${G}$ of ${O_n({\bf R})}$, there exists a ball of radius ${\epsilon}$ in ${O_n({\bf R})}$ (with, say, the operator norm metric) that is disjoint from ${G}$. What happens in the ${n=2}$ case?

Remark 1 More precise classifications of the finite subgroups of ${U_n({\bf C})}$ are known, particularly in low dimensions. For instance, one can show that the only finite subgroups of ${SO_3({\bf R})}$ (which ${SU_2({\bf C})}$ is a double cover of) are isomorphic to either a cyclic group, a dihedral group, or the symmetry group of one of the Platonic solids.

This week there is a conference here at IPAM on expanders in pure and applied mathematics. I was an invited speaker, but I don’t actually work in expanders per se (though I am certainly interested in them). So I spoke instead about the recent simplified proof by Kleiner of the celebrated theorem of Gromov on groups of polynomial growth. (This proof does not directly mention expanders, but the argument nevertheless hinges on the absence of expansion in the Cayley graph of a group of polynomial growth, which is exhibited through the smoothness properties of harmonic functions on such graphs.)

In my discussion of the Oppenheim conjecture in my recent post on Ratner’s theorems, I mentioned in passing the simple but crucial fact that the (orthochronous) special orthogonal group $SO(Q)^+$ of an indefinite quadratic form on ${\Bbb R}^3$ can be generated by unipotent elements. This is not a difficult fact to prove, as one can simply diagonalise Q and then explicitly write down some unipotent elements (the magic words here are “null rotations”). But this is a purely algebraic approach; I thought it would also be instructive to show the geometric (or dynamic) reason for why unipotent elements appear in the orthogonal group of indefinite quadratic forms in three dimensions. (I’ll give away the punch line right away: it’s because the parabola is a conic section.) This is not a particularly deep or significant observation, and will not be surprising to the experts, but I would like to record it anyway, as it allows me to review some useful bits and pieces of elementary linear algebra.
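To make the algebraic route concrete, here is a Python sketch for the diagonalised form ${Q = \mathrm{diag}(1,1,-1)}$. The particular nilpotent matrix ${N}$ is one arbitrary choice of null-rotation generator in the Lie algebra ${\mathfrak{so}(Q)}$; exponentiating it (the series terminates, since ${N^3 = 0}$) produces a unipotent element preserving ${Q}$.

```python
import numpy as np

Q = np.diag([1.0, 1.0, -1.0])  # the indefinite form x^2 + y^2 - z^2

# A nilpotent element of the Lie algebra so(Q): N^T Q + Q N = 0, N^3 = 0.
N = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 1.0],
              [0.0,  1.0, 0.0]])
assert np.allclose(N.T @ Q + Q @ N, 0)
assert np.allclose(np.linalg.matrix_power(N, 3), 0)

# Its exponential is unipotent and lies in SO(Q): the series terminates.
U = np.eye(3) + N + N @ N / 2
assert np.allclose(U.T @ Q @ U, Q)  # U preserves the quadratic form
assert np.allclose(np.linalg.matrix_power(U - np.eye(3), 3), 0)  # unipotent
```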