This fall (starting Monday, September 26), I will be teaching a graduate topics course which I have entitled “Hilbert’s fifth problem and related topics.” The course is going to focus on three related topics:
- Hilbert’s fifth problem on the topological description of Lie groups, as well as the closely related (local) classification of locally compact groups (the Gleason-Yamabe theorem).
- Approximate groups in nonabelian groups, and their classification via the Gleason-Yamabe theorem (this is very recent work of Emmanuel Breuillard, Ben Green, Tom Sanders, and myself, building upon earlier work of Hrushovski);
- Gromov’s theorem on groups of polynomial growth, as proven via the classification of approximate groups (as well as some consequences to fundamental groups of Riemannian manifolds).
I have already blogged about these topics repeatedly in the past (particularly with regard to Hilbert’s fifth problem), and I intend to recycle some of that material in the lecture notes for this course.
The above three families of results exemplify two broad principles (part of what I like to call “the dichotomy between structure and randomness”):
- (Rigidity) If a group-like object exhibits a weak amount of regularity, then it (or a large portion thereof) often automatically exhibits a strong amount of regularity as well;
- (Structure) This strong regularity manifests itself either as Lie type structure (in continuous settings) or nilpotent type structure (in discrete settings). (In some cases, “nilpotent” should be replaced by sister properties such as “abelian”, “solvable”, or “polycyclic”.)
Let me illustrate what I mean by these two principles with two simple examples, one in the continuous setting and one in the discrete setting. We begin with a continuous example. Given an $n \times n$ complex matrix $A$, define the matrix exponential $\exp(A)$ of $A$ by the formula

$\displaystyle \exp(A) := \sum_{k=0}^\infty \frac{A^k}{k!} = 1 + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \dots,$
which can easily be verified to be an absolutely convergent series.
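The series can be checked numerically. Below is a minimal sketch in Python (NumPy assumed; the helper name `expm_series` is mine): the partial sums of the series are compared against a case where the series terminates exactly, and against the homomorphism identity.

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via the absolutely convergent power series
    exp(A) = sum_{k>=0} A^k / k!  (a partial sum; fine for moderate ||A||)."""
    result = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ A / k      # term is now A^k / k!
        result = result + term
    return result

# A nilpotent example where the series terminates exactly after two terms:
# A = [[0,1],[0,0]] gives exp(A) = [[1,1],[0,1]].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(expm_series(A).real)
```

Since $sA$ and $tA$ commute, one can also check numerically that $\exp(sA)\exp(tA) = \exp((s+t)A)$ to machine precision.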
This operation obeys the fundamental homomorphism identity $\exp((s+t)A) = \exp(sA) \exp(tA)$ for all $s, t \in {\bf C}$.
Proposition 1 Let $\phi: {\bf R} \rightarrow GL_n({\bf C})$ be a map obeying the following properties:
- (Group-like object) $\phi$ is a homomorphism, thus $\phi(s) \phi(t) = \phi(s+t)$ for all $s, t \in {\bf R}$.
- (Weak regularity) The map $\phi$ is continuous.
Then:
- (Strong regularity) The map $\phi$ is smooth (i.e. infinitely differentiable). In fact it is even real analytic.
- (Lie-type structure) There exists a (unique) complex $n \times n$ matrix $A$ such that $\phi(t) = \exp(tA)$ for all $t \in {\bf R}$.
Proof: Let $\phi$ be as above. Let $\varepsilon > 0$ be a small number (depending only on $n$). By the homomorphism property, $\phi(0) = 1$ (where we use $1$ here to denote the identity element of $GL_n({\bf C})$), and so by continuity we may find a small $t_0 > 0$ such that $\|\phi(t) - 1\| \leq \varepsilon$ for all $t \in [-t_0, t_0]$ (we use some arbitrary norm here on the space of $n \times n$ matrices, and allow implied constants in the $O()$ notation to depend on $n$).

The map $A \mapsto \exp(A)$ is real analytic and (by the inverse function theorem) is a diffeomorphism near $0$. Thus we can (if $\varepsilon$ is small enough) find a matrix $A$ of size $\|A\| = O(\varepsilon)$ such that

$\displaystyle \phi(t_0) = \exp(A). \ \ \ \ \ (1)$

By the homomorphism property and (1), we thus have

$\displaystyle \phi(t_0/2)^2 = \phi(t_0) = \exp(A) = \exp(A/2)^2.$
On the other hand, by another application of the inverse function theorem we see that the squaring map $B \mapsto B^2$ is a diffeomorphism near $1$ in $GL_n({\bf C})$, and thus (if $\varepsilon$ is small enough)

$\displaystyle \phi(t_0/2) = \exp(A/2).$

We may iterate this argument (for a fixed, but small, value of $\varepsilon$) and conclude that

$\displaystyle \phi(t_0/2^k) = \exp(A/2^k)$

for all $k = 0, 1, 2, \dots$. By the homomorphism property and (1) we thus have

$\displaystyle \phi(q t_0) = \exp(qA)$

whenever $q$ is a dyadic rational, i.e. a rational of the form $a/2^k$ for some integer $a$ and natural number $k$. By continuity we thus have

$\displaystyle \phi(s t_0) = \exp(sA)$

for all real $s$. Setting $B := A/t_0$ we conclude that

$\displaystyle \phi(t) = \exp(tB)$

for all real $t$, which gives existence of the representation and also real analyticity and smoothness. Finally, uniqueness of the representation follows from the identity

$\displaystyle B = \frac{d}{dt} \exp(tB) \Big|_{t=0}.$
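The key inverse-function-theorem step — that near the identity, the exponential is invertible, so a single sample of a homomorphism $\phi(t) = \exp(tA)$ at a small time already determines the generator $A$ — can be mimicked numerically. A sketch under that assumption (helper names mine, NumPy assumed):

```python
import numpy as np

def expm_series(A, terms=60):
    """exp(A) by partial sums of the power series."""
    E = np.eye(len(A)); term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

def logm_series(M, terms=60):
    """Matrix logarithm via log(1+X) = X - X^2/2 + X^3/3 - ...,
    valid when M is close to the identity (||M - 1|| < 1)."""
    X = M - np.eye(len(M))
    term = np.eye(len(M)); L = np.zeros_like(X)
    for k in range(1, terms):
        term = term @ X
        L = L + ((-1) ** (k + 1)) * term / k
    return L

# Sample the homomorphism phi(t) = exp(tA) at one small time t0
# and recover the generator as A = log(phi(t0)) / t0.
A = np.array([[0.0, -1.0], [1.0, 0.0]])   # generates plane rotations
t0 = 0.01
recovered = logm_series(expm_series(t0 * A)) / t0
print(np.round(recovered, 6))
```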
Exercise 2 Generalise Proposition 1 by replacing the hypothesis that $\phi$ is continuous with the hypothesis that $\phi$ is Lebesgue measurable. (Hint: use the Steinhaus theorem.) Show that the proposition fails (assuming the axiom of choice) if this hypothesis is omitted entirely.
Note how one needs both the group-like structure and the weak regularity in combination in order to ensure the strong regularity; neither is sufficient on its own. We will see variants of the above basic argument throughout the course. Here, the task of obtaining smooth (or real analytic) structure was relatively easy, because we could borrow the smooth (or real analytic) structure of the domain ${\bf R}$ and range $GL_n({\bf C})$; but, somewhat remarkably, we shall see that one can still build such smooth or analytic structures even when none of the original objects have any such structure to begin with.
Now we turn to a second illustration of the above principles, namely Jordan’s theorem, which uses a discreteness hypothesis to upgrade Lie type structure to nilpotent (and in this case, abelian) structure. We shall formulate Jordan’s theorem in a slightly stilted fashion in order to emphasise the adherence to the above-mentioned principles.
Theorem (Jordan's theorem) Let $G$ be an object with the following properties:
- (Group-like object) $G$ is a group.
- (Discreteness) $G$ is finite.
- (Lie-type structure) $G$ is contained in $U_n({\bf C})$ (the group of $n \times n$ unitary matrices) for some $n$.
Then there is a subgroup $G'$ of $G$ such that
- ($G'$ is close to $G$) The index $|G/G'|$ of $G'$ in $G$ is $O_n(1)$ (i.e. bounded by $C_n$ for some quantity $C_n$ depending only on $n$).
- (Nilpotent-type structure) $G'$ is abelian.
A key observation in the proof of Jordan's theorem is that if two unitary elements $g, h$ are close to the identity, then their commutator $[g,h] := g^{-1}h^{-1}gh$ is even closer to the identity (in, say, the operator norm $\| \cdot \|_{op}$). Indeed, since multiplication on the left or right by unitary elements does not affect the operator norm, we have

$\displaystyle \| [g,h] - 1 \|_{op} = \| gh - hg \|_{op} = \| (g-1)(h-1) - (h-1)(g-1) \|_{op} \leq 2 \| g - 1 \|_{op} \| h - 1 \|_{op}. \ \ \ \ \ (2)$
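The commutator estimate (that for unitary $g, h$ one has $\|[g,h]-1\|_{op} \leq 2\|g-1\|_{op}\|h-1\|_{op}$) can be spot-checked numerically; here is a sketch (NumPy assumed, helper names mine) which builds small unitaries as exponentials of skew-Hermitian matrices:

```python
import numpy as np

def expm_series(A, terms=60):
    E = np.eye(len(A), dtype=complex); term = np.eye(len(A), dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

def opnorm(M):
    return np.linalg.norm(M, 2)   # operator (spectral) norm

rng = np.random.default_rng(0)
n = 3
for _ in range(20):
    # small unitaries g = exp(i*eps*H) with H Hermitian
    H1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H2 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H1 = (H1 + H1.conj().T) / 2
    H2 = (H2 + H2.conj().T) / 2
    g = expm_series(0.05j * H1)
    h = expm_series(0.05j * H2)
    comm = np.linalg.inv(g) @ np.linalg.inv(h) @ g @ h
    I = np.eye(n)
    lhs = opnorm(comm - I)
    rhs = 2 * opnorm(g - I) * opnorm(h - I)
    assert lhs <= rhs + 1e-12
print("commutator estimate verified on 20 random samples")
```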
Now we can prove Jordan’s theorem.
Proof: We induct on $n$, the case $n=1$ being trivial. Suppose first that $G$ contains a central element $g$ which is not a multiple of the identity. Then, by definition, $G$ is contained in the centraliser of $g$, which by the spectral theorem is isomorphic to a product of smaller unitary groups. Projecting $G$ to each of these factor groups and applying the induction hypothesis, we obtain the claim.
Thus we may assume that $G$ contains no central elements other than multiples of the identity. Now pick a small $\varepsilon > 0$ (an explicit small constant suffices) and consider the subgroup $G'$ of $G$ generated by those elements of $G$ that are within $\varepsilon$ of the identity (in the operator norm). By considering a maximal $\varepsilon$-net of $U_n({\bf C})$ we see that $G'$ has index at most $O_{n,\varepsilon}(1)$ in $G$. By arguing as before, we may assume that $G'$ has no central elements other than multiples of the identity.

If $G'$ consists only of multiples of the identity, then we are done. If not, take an element $g$ of $G'$ that is not a multiple of the identity, and which is as close as possible to the identity (here is where we crucially use that $G$ is finite). By (2), we see that if $\varepsilon$ is sufficiently small depending on $n$, and if $h$ is one of the generators of $G'$, then $[g,h]$ lies in $G'$ and is closer to the identity than $g$, and is thus a multiple of the identity. On the other hand, $[g,h]$ has determinant $1$. Given that it is so close to the identity, it must therefore be the identity (if $\varepsilon$ is small enough). In other words, $g$ is central in $G'$, and is thus a multiple of the identity. But this contradicts the hypothesis that there are no central elements of $G'$ other than multiples of the identity, and we are done.
Commutator estimates such as (2) will play a fundamental role in many of the arguments we will see in this course; as we saw above, such estimates combine very well with a discreteness hypothesis, but will also be very useful in the continuous setting.
Exercise 3 Generalise Jordan's theorem to the case when $G$ is a finite subgroup of $GL_n({\bf C})$ rather than of $U_n({\bf C})$. (Hint: The elements of $G$ are not necessarily unitary, and thus do not necessarily preserve the standard Hilbert inner product of ${\bf C}^n$. However, if one averages that inner product by the finite group $G$, one obtains a new inner product on ${\bf C}^n$ that is preserved by $G$, which allows one to conjugate $G$ to a subgroup of $U_n({\bf C})$. This averaging trick is (a small) part of Weyl's unitary trick in representation theory.)
Exercise 4 (Inability to discretise nonabelian Lie groups) Show that if $n \geq 3$, then the orthogonal group $O_n({\bf R})$ cannot contain arbitrarily dense finite subgroups, in the sense that there exists an $\varepsilon = \varepsilon_n > 0$ depending only on $n$ such that for every finite subgroup $G$ of $O_n({\bf R})$, there exists a ball of radius $\varepsilon$ in $O_n({\bf R})$ (with, say, the operator norm metric) that is disjoint from $G$. What happens in the $n = 2$ case?
Remark 1 More precise classifications of the finite subgroups of $U_n({\bf C})$ are known, particularly in low dimensions. For instance, one can show that the only finite subgroups of $SO_3({\bf R})$ (which $SU(2)$ is a double cover of) are isomorphic to either a cyclic group, a dihedral group, or the symmetry group of one of the Platonic solids.
One of the most well known problems from ancient Greek mathematics was that of trisecting an angle by straightedge and compass, which was eventually proven impossible in 1837 by Pierre Wantzel, using methods from Galois theory.
Formally, one can set up the problem as follows. Define a configuration to be a finite collection $\mathcal{C}$ of points, lines, and circles in the Euclidean plane. Define a construction step to be one of the following operations to enlarge the collection $\mathcal{C}$:
- (Straightedge) Given two distinct points $A$, $B$ in $\mathcal{C}$, form the line $\overline{AB}$ that connects $A$ and $B$, and add it to $\mathcal{C}$.
- (Compass) Given two distinct points $A$, $B$ in $\mathcal{C}$, and given a third point $C$ in $\mathcal{C}$ (which may or may not equal $A$ or $B$), form the circle with centre $C$ and radius equal to the length $|AB|$ of the line segment joining $A$ and $B$, and add it to $\mathcal{C}$.
- (Intersection) Given two distinct curves $\gamma$, $\gamma'$ in $\mathcal{C}$ (thus $\gamma$ is either a line or a circle in $\mathcal{C}$, and similarly for $\gamma'$), select a point $P$ that is common to both $\gamma$ and $\gamma'$ (there are at most two such points), and add it to $\mathcal{C}$.
We say that a point, line, or circle is constructible by straightedge and compass from a configuration $\mathcal{C}$ if it can be obtained from $\mathcal{C}$ after applying a finite number of construction steps.
Problem 1 (Angle trisection) Let $A$, $B$, $C$ be distinct points in the plane. Is it always possible to construct by straightedge and compass from $A$, $B$, $C$ a line $\ell$ through $A$ that trisects the angle $\angle BAC$, in the sense that the angle between $\ell$ and $\overline{BA}$ is one third of the angle of $\angle BAC$?
Thanks to Wantzel’s result, the answer to this problem is known to be “no” in general; a generic angle cannot be trisected by straightedge and compass. (On the other hand, some special angles can certainly be trisected by straightedge and compass, such as a right angle. Also, one can certainly trisect generic angles using other methods than straightedge and compass; see the Wikipedia page on angle trisection for some examples of this.)
The impossibility of angle trisection stands in sharp contrast to the easy construction of angle bisection via straightedge and compass, which we briefly review as follows:
- Start with three points $A$, $B$, $C$.
- Form the circle $c_0$ with centre $A$ and radius $AB$, and intersect it with the line $\overline{AC}$. Let $D$ be the point in this intersection that lies on the same side of $A$ as $C$. ($D$ may well be equal to $C$.)
- Form the circle $c_1$ with centre $B$ and radius $AB$, and the circle $c_2$ with centre $D$ and radius $AB$. Let $E$ be the point of intersection of $c_1$ and $c_2$ that is not $A$.
- The line $\overline{AE}$ will then bisect the angle $\angle BAC$.
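The steps above can be checked numerically. In the sketch below (NumPy assumed) the labels $D$ and $E$ denote the two intersection points produced in the second and third steps; since the two circles have equal radii, the second intersection point $E$ is just the reflection of $A$ across the line $BD$:

```python
import numpy as np

A = np.array([0.0, 0.0])
B = np.array([2.0, 0.5])
C = np.array([1.0, 2.0])

r = np.linalg.norm(B - A)                        # radius AB
D = A + r * (C - A) / np.linalg.norm(C - A)      # on ray AC at distance AB

# second intersection of circles (centre B, radius AB) and (centre D, radius AB):
# reflect A across the line through B and D
u = (D - B) / np.linalg.norm(D - B)
proj = B + u * np.dot(A - B, u)
E = 2 * proj - A

def ang(u, v):
    """Angle between two plane vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

# AE makes equal angles with AB and AC, i.e. it bisects angle BAC
assert np.isclose(ang(B - A, E - A), ang(C - A, E - A))
print("AE bisects angle BAC")
```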
The key difference between angle trisection and angle bisection ultimately boils down to the following trivial number-theoretic fact:
Lemma 2 No power of $2$ is evenly divisible by $3$.

Proof: Obvious by modular arithmetic, by induction, or by the fundamental theorem of arithmetic.
In contrast, there are of course plenty of powers of $2$ that are evenly divisible by $2$, and this is ultimately why angle bisection is easy while angle trisection is hard.
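The modular-arithmetic proof of the lemma can be made concrete in two lines: $2 \equiv -1 \pmod 3$, so $2^k \bmod 3$ alternates between $1$ and $2$ and never vanishes.

```python
# Lemma 2 numerically: 2 is invertible mod 3 (indeed 2 = -1 mod 3),
# so 2^k mod 3 alternates between 1 and 2 and never hits 0.
powers_mod_3 = [pow(2, k, 3) for k in range(200)]
assert 0 not in powers_mod_3
assert powers_mod_3[:4] == [1, 2, 1, 2]
print("no power of 2 is divisible by 3")
```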
The standard way in which Lemma 2 is used to demonstrate the impossibility of angle trisection is via Galois theory. The implication is quite short if one knows this theory, but quite opaque otherwise. We briefly sketch the proof of this implication here, though we will not need it in the rest of the discussion. Firstly, Lemma 2 implies the following fact about field extensions.
Corollary 3 Let $k$ be a field, and let $l$ be an extension of $k$ that can be constructed out of $k$ by a finite sequence of quadratic extensions. Then $l$ does not contain any subfield that is a cubic extension of $k$.

Proof: If $l$ contained a cubic extension of $k$, then the dimension of $l$ over $k$ would be a multiple of three. On the other hand, if $l$ is obtained from $k$ by a tower of quadratic extensions, then the dimension of $l$ over $k$ is a power of two. The claim then follows from Lemma 2.
To conclude the proof, one then notes that any point, line, or circle that can be constructed from a configuration $\mathcal{C}$ is definable in a field obtained from the coefficients of all the objects in $\mathcal{C}$ after taking a finite number of quadratic extensions, whereas a trisection of an angle $\angle BAC$ will generically only be definable in a cubic extension of the field generated by the coordinates of $A$, $B$, $C$.
The Galois theory method also allows one to obtain many other impossibility results of this type, most famously the Abel-Ruffini theorem on the insolvability of the quintic equation by radicals. For this reason (and also because of the many applications of Galois theory to number theory and other branches of mathematics), the Galois theory argument is the “right” way to prove the impossibility of angle trisection within the broader framework of modern mathematics. However, this argument has the drawback that it requires one to first understand Galois theory (or at least field theory), which is usually not presented until an advanced undergraduate algebra or number theory course, whilst the angle trisection problem requires only high-school level mathematics to formulate. Even if one is allowed to “cheat” and sweep several technicalities under the rug, one still needs to possess a fair amount of solid intuition about advanced algebra in order to appreciate the proof. (This was undoubtedly one reason why, even after Wantzel’s impossibility result was published, a large amount of effort was still expended by amateur mathematicians to try to trisect a general angle.)
In this post I would therefore like to present a different proof (or perhaps more accurately, a disguised version of the standard proof) of the impossibility of angle trisection by straightedge and compass, that avoids explicit mention of Galois theory (though it is never far beneath the surface). With “cheats”, the proof is actually quite simple and geometric (except for Lemma 2, which is still used at a crucial juncture), based on the basic geometric concept of monodromy; unfortunately, some technical work is needed however to remove these cheats.
To describe the intuitive idea of the proof, let us return to the angle bisection construction, that takes a triple $(A, B, C)$ of points as input and returns a bisecting line $\overline{AE}$ as output. We iterate the construction to create a quadrisecting line $\overline{AG}$, via the following sequence of steps that extend the original bisection construction:
- Start with three points $A$, $B$, $C$.
- Form the circle $c_0$ with centre $A$ and radius $AB$, and intersect it with the line $\overline{AC}$. Let $D$ be the point in this intersection that lies on the same side of $A$ as $C$. ($D$ may well be equal to $C$.)
- Form the circle $c_1$ with centre $B$ and radius $AB$, and the circle $c_2$ with centre $D$ and radius $AB$. Let $E$ be the point of intersection of $c_1$ and $c_2$ that is not $A$.
- Let $F$ be the point on the line $\overline{AE}$ which lies on $c_0$, and is on the same side of $A$ as $E$.
- Form the circle $c_3$ with centre $F$ and radius $AB$. Let $G$ be the point of intersection of $c_1$ and $c_3$ that is not $A$.
- The line $\overline{AG}$ will then quadrisect the angle $\angle BAC$.
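Iterating the bisection is easy to check numerically as well; the sketch below (NumPy assumed, helper names mine) packages the bisection step as a function returning the direction of the bisecting line, then applies it twice:

```python
import numpy as np

def bisect_dir(A, B, C):
    """Direction of the straightedge-and-compass bisector of angle BAC
    (mirrors the construction above: D on ray AC, then E from the two circles)."""
    r = np.linalg.norm(B - A)
    D = A + r * (C - A) / np.linalg.norm(C - A)
    u = (D - B) / np.linalg.norm(D - B)
    proj = B + u * np.dot(A - B, u)
    E = 2 * proj - A                  # second intersection of the two circles
    return (E - A) / np.linalg.norm(E - A)

A = np.array([0.0, 0.0]); B = np.array([2.0, 0.5]); C = np.array([1.0, 2.0])
e = bisect_dir(A, B, C)               # bisects angle BAC
g = bisect_dir(A, B, A + e)           # bisects angle between AB and AE

def ang(u, v):
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

# the second bisection quadrisects the original angle
assert np.isclose(4 * ang(B - A, g), ang(B - A, C - A))
print("AG quadrisects angle BAC")
```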
Let us fix the points $A$ and $B$, but not $C$, and view the line $\overline{AG}$ (as well as intermediate objects such as $c_0$, $D$, $c_1$, $c_2$, $E$, $F$, $c_3$, $G$) as a function of $C$.
Let us now do the following: we begin rotating $C$ counterclockwise around $A$, which drags around the other objects $c_0$, $D$, $c_1$, $c_2$, $E$, $F$, $c_3$, $G$ that were constructed from $C$ accordingly. For instance, here is an early stage of this rotation process, when the angle $\angle BAC$ has become obtuse:
Now for the slightly tricky bit. We are going to keep rotating $C$ beyond a half-rotation of $180°$, so that $\angle BAC$ now becomes a reflex angle. At this point, a singularity occurs; the point $E$ collides into $A$, and so there is an instant in which the line $\overline{AE}$ is not well-defined. However, this turns out to be a removable singularity (and the easiest way to demonstrate this will be to tap the power of complex analysis, as complex numbers can easily route around such a singularity), and we can blast through it to the other side, giving a picture like this:
Note that we have now deviated from the original construction, in that some of the constructed points no longer lie on the side stipulated by the construction; we are thus now working in a continuation of that construction rather than with the construction itself. Nevertheless, we can still work with this continuation (much as, say, one works with analytic continuations of infinite series beyond their original domain of definition).
We now keep rotating $C$ around $A$. Here, $\angle BAC$ is approaching a full rotation of $360°$:
When $\angle BAC$ reaches a full rotation, a different singularity occurs: $B$ and $D$ coincide (so that the circles $c_1$ and $c_2$ coincide as well). Nevertheless, this is also a removable singularity, and we blast through to beyond a full rotation:
And now $C$ is back where it started, as are $c_0$, $D$, $c_1$, and $c_2$… but the point $E$ has moved, from one intersection point of $c_1 \cap c_2$ to the other. As a consequence, $F$, $G$, and $\overline{AG}$ have also changed, with $\overline{AE}$ being at right angles to where it was before. (In the jargon of modern mathematics, the quadrisection construction has a non-trivial monodromy.)
But nothing stops us from rotating $C$ some more. If we continue this procedure, we see that after two full rotations of $C$ around $A$, all points, lines, and circles constructed from $A$, $B$, $C$ have returned to their original positions. Because of this, we shall say that the quadrisection construction described above is periodic with period $2$.
Similarly, if one performs an octisection of the angle $\angle BAC$ by bisecting the quadrisection, one can verify that this octisection is periodic with period $4$; it takes four full rotations of $C$ around $A$ before the configuration returns to where it started. More generally, one can show

Proposition Any construction of a point, line, or circle by straightedge and compass from $A$, $B$, $C$ is periodic, with period equal to a power of two.
The reason for this, ultimately, is because any two circles or lines will intersect each other in at most two points, and so at each step of a straightedge-and-compass construction there is an ambiguity of at most two choices. Each rotation of $C$ around $A$ can potentially flip one of these points to the other, but then if one rotates again, the point returns to its original position, and then one can analyse the next point in the construction in the same fashion until one obtains the proposition.
But now consider a putative trisection operation, that starts with an arbitrary angle $\angle BAC$ and somehow uses some sequence of straightedge and compass constructions to end up with a trisecting line $\ell$:
What is the period of this construction? If we continuously rotate $C$ around $A$, we observe that a full rotation of $C$ only causes the trisecting line $\ell$ to rotate by a third of a full rotation (i.e. by $120°$). Thus it takes three full rotations of $C$ before $\ell$ returns to its original position, and so the period of this construction must be a multiple of three. But by Lemma 2 no power of two is a multiple of three, contradicting the proposition; this is the desired impossibility.
Below the fold, I will make the above proof rigorous. Unfortunately, in doing so, I had to again leave the world of high-school mathematics, as one needs a little bit of algebraic geometry and complex analysis to resolve the issues with singularities that we saw in the above sketch. Still, I feel that at an intuitive level at least, this argument is more geometric and accessible than the Galois-theoretic argument (though anyone familiar with Galois theory will note that there is really not that much difference between the proofs, ultimately, as one has simply replaced the Galois group with a closely related monodromy group instead).
Theorem 1 is a deep and difficult result, but the discussion in the previous posts has reduced the proof of this theorem to that of establishing two simpler results, involving the concepts of a no small subgroups (NSS) group, and that of a Gleason metric. We briefly recall the relevant definitions:
Definition 2 (NSS) A topological group $G$ is said to have no small subgroups, or is NSS for short, if there is an open neighbourhood $U$ of the identity in $G$ that contains no subgroups of $G$ other than the trivial subgroup $\{\mathrm{id}\}$.
Definition 3 (Gleason metric) Let $G$ be a topological group. A Gleason metric on $G$ is a left-invariant metric $d: G \times G \rightarrow {\bf R}^+$ which generates the topology on $G$ and obeys the following properties for some constant $C > 0$, writing $\|g\|$ for $d(g, \mathrm{id})$:
- (Escape property) If $g \in G$ and $n \geq 1$ is such that $n \|g\| \leq \frac{1}{C}$, then

$\displaystyle \|g^n\| \geq \frac{n}{C} \|g\|. \ \ \ \ \ (1)$

- (Commutator estimate) If $g, h \in G$ are such that $\|g\|, \|h\| \leq \frac{1}{C}$, then

$\displaystyle \| [g,h] \| \leq C \|g\| \|h\|, \ \ \ \ \ (2)$

where $[g,h] := g^{-1}h^{-1}gh$ is the commutator of $g$ and $h$.
The remaining steps in the resolution of Hilbert’s fifth problem are then as follows:
Theorem 4 (Reduction to the NSS case) Let $G$ be a locally compact group, and let $U$ be an open neighbourhood of the identity in $G$. Then there exists an open subgroup $G'$ of $G$, and a compact normal subgroup $N$ of $G'$ contained in $U$, such that $G'/N$ is NSS and locally compact.

Theorem 5 (Gleason metrics from NSS) Let $G$ be a locally compact NSS group. Then $G$ admits a Gleason metric.
The purpose of this post is to establish these two results, using arguments that are originally due to Gleason. We will split this task into several subtasks, each of which improves the structure on the group by some amount:
Proposition 6 (From locally compact to metrisable) Let $G$ be a locally compact group, and let $U$ be an open neighbourhood of the identity in $G$. Then there exists an open subgroup $G'$ of $G$, and a compact normal subgroup $N$ of $G'$ contained in $U$, such that $G'/N$ is locally compact and metrisable.
For any open neighbourhood $U$ of the identity in $G$, let $Q[U]$ be the union of all the subgroups of $G$ that are contained in $U$. (Thus, for instance, $G$ is NSS if and only if $Q[U]$ is trivial for all sufficiently small $U$.)
Proposition 7 (From metrisable to subgroup trapping) Let $G$ be a locally compact metrisable group. Then $G$ has the subgroup trapping property: for every open neighbourhood $U$ of the identity, there exists another open neighbourhood $V$ of the identity such that the union of all the subgroups contained in $V$ generates a subgroup contained in $U$.
Proposition 8 (From subgroup trapping to NSS) Let $G$ be a locally compact group with the subgroup trapping property, and let $U$ be an open neighbourhood of the identity in $G$. Then there exists an open subgroup $G'$ of $G$, and a compact normal subgroup $N$ of $G'$ contained in $U$, such that $G'/N$ is locally compact and NSS.
Proposition 9 (From NSS to the escape property) Let $G$ be a locally compact NSS group. Then there exists a left-invariant metric $d$ on $G$ generating the topology on $G$ which obeys the escape property (1) for some constant $C > 0$.

Proposition 10 (From the escape property to the commutator estimate) Let $G$ be a locally compact group with a left-invariant metric $d$ that obeys the escape property (1). Then $d$ also obeys the commutator estimate (2), after increasing the constant $C$ if necessary.
Propositions 6–10 are all proven separately, but their proofs share some common strategies and ideas. The first main idea is to construct metrics on a locally compact group $G$ by starting with a suitable “bump function” $\phi \in C_c(G)$ (i.e. a continuous, compactly supported function from $G$ to ${\bf R}$) and pulling back the metric structure on $C_c(G)$ by using the translation action $\tau_g \phi(x) := \phi(g^{-1} x)$, thus creating a (semi-)metric

$\displaystyle d_\phi(g, h) := \| \tau_g \phi - \tau_h \phi \|_{C_c(G)} := \sup_{x \in G} |\phi(g^{-1} x) - \phi(h^{-1} x)|.$

One easily verifies that this is indeed a (semi-)metric (in that it is non-negative, symmetric, and obeys the triangle inequality); it is also left-invariant, and so we have $d_\phi(g, h) = \| g^{-1} h \|_\phi$, where

$\displaystyle \| g \|_\phi := d_\phi(g, \mathrm{id}) = \| \partial_g \phi \|_{C_c(G)},$

where $\partial_g$ is the difference operator $\partial_g := 1 - \tau_g$,

$\displaystyle \partial_g \phi(x) = \phi(x) - \phi(g^{-1} x).$
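As a toy illustration of this pullback construction (my own discretisation: the cyclic group ${\bf Z}/24{\bf Z}$ stands in for a locally compact group, and the sup norm for the $C_c(G)$ norm), one can check left-invariance and the triangle inequality directly:

```python
import numpy as np

n = 24                       # the cyclic group Z/24Z as a toy group
x = np.arange(n)
phi = np.maximum(0, 4 - np.minimum(x, n - x))   # "bump" supported near 0

def tau(g, f):
    """Left translation: (tau_g f)(x) = f(g^{-1} x) = f(x - g) additively."""
    return np.roll(f, g)

def d(g, h):
    """Pullback (semi-)metric d_phi(g, h) = sup_x |phi(g^{-1}x) - phi(h^{-1}x)|."""
    return np.max(np.abs(tau(g, phi) - tau(h, phi)))

# left-invariance: d(kg, kh) = d(g, h) for every k
for k in range(n):
    assert d((k + 3) % n, (k + 7) % n) == d(3, 7)
# triangle inequality spot-check
assert d(2, 11) <= d(2, 5) + d(5, 11)
print("d is a left-invariant (semi-)metric on Z/24Z")
```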
This construction was already seen in the proof of the Birkhoff-Kakutani theorem, which is the main tool used to establish Proposition 6. For the other propositions, the idea is to choose a bump function $\phi$ that is “smooth” enough that it creates a metric with good properties such as the commutator estimate (2). Roughly speaking, to get a bound of the form (2), one needs $\phi$ to have “$C^{1,1}$ regularity” with respect to the “right” smooth structure on $G$. By $C^{1,1}$ regularity, we mean here something like a bound of the form

$\displaystyle \| \partial_g \partial_h \phi \|_{C_c(G)} \ll \| g \|_\phi \| h \|_\phi \ \ \ \ \ (4)$

for all $g, h \in G$. Here we use the usual asymptotic notation, writing $X \ll Y$ or $X = O(Y)$ if $X \leq CY$ for some constant $C$ (which can vary from line to line).
The following lemma illustrates how regularity can be used to build Gleason metrics.
Proof: We begin with the commutator property (2). Observe the identity

$\displaystyle \tau_{gh} - \tau_{hg} = \partial_g \partial_h - \partial_h \partial_g,$

so that by left-invariance

$\displaystyle \| [g,h] \|_\phi = \| \tau_{gh} \phi - \tau_{hg} \phi \|_{C_c(G)} \leq \| \partial_g \partial_h \phi \|_{C_c(G)} + \| \partial_h \partial_g \phi \|_{C_c(G)}.$
But from (4) (and the triangle inequality) we have

$\displaystyle \| \partial_{g^n} \phi - n \partial_g \phi \|_{C_c(G)} = \Big\| \sum_{j=0}^{n-1} \partial_{g^j} \partial_g \phi \Big\|_{C_c(G)} \ll n^2 \| g \|_\phi^2,$

and thus we have the “Taylor expansion”

$\displaystyle \partial_{g^n} \phi = n \partial_g \phi + O( n^2 \| g \|_\phi^2 ),$

which gives (1).
It remains to obtain bump functions $\phi$ that have the desired $C^{1,1}$ regularity property. In order to get such regular bump functions, we will use the trick of convolving together two lower regularity bump functions (such as two functions with “$C^{0,1}$ regularity” in some sense to be determined later). In order to perform this convolution, we will use the fundamental tool of (left-invariant) Haar measure $\mu$ on the locally compact group $G$. Here we exploit the basic fact that the convolution

$\displaystyle \phi_1 * \phi_2(x) := \int_G \phi_1(y) \phi_2(y^{-1} x)\ d\mu(y)$

of two functions $\phi_1, \phi_2 \in C_c(G)$ tends to be smoother than either of the two factors $\phi_1, \phi_2$. This is easiest to see in the abelian case, since in this case we can distribute derivatives according to the law

$\displaystyle \nabla (\phi_1 * \phi_2) = (\nabla \phi_1) * \phi_2 = \phi_1 * (\nabla \phi_2),$
which suggests that the order of “differentiability” of $\phi_1 * \phi_2$ should be the sum of the orders of $\phi_1$ and $\phi_2$ separately.
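The smoothing effect of convolution is easy to see in a discrete toy model (my own, NumPy assumed): convolving a box with a box produces a triangular hat, and the discrete analogue of $\nabla(\phi_1 * \phi_2) = (\nabla \phi_1) * \phi_2$ holds exactly for padded finite differences.

```python
import numpy as np

box = np.ones(5) / 5          # a "rough" bump: discontinuous jumps at its edges
hat = np.convolve(box, box)   # box * box: a triangular hat, one degree smoother

print(np.round(hat, 2))

# discrete analogue of  d(f*g) = (df)*g :
# differencing the convolution equals convolving the difference of one factor
dbox = np.diff(np.concatenate([[0.0], box, [0.0]]))
assert np.allclose(np.diff(np.concatenate([[0.0], hat, [0.0]])),
                   np.convolve(dbox, box))
```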
These ideas are already sufficient to establish Proposition 10 directly, and also Proposition 9 when combined with an additional bootstrap argument. The proofs of Proposition 7 and Proposition 8 use similar techniques, but are more difficult due to the potential presence of small subgroups, which require an application of the Peter-Weyl theorem to properly control. Both of these theorems will be proven below the fold, thus (when combined with the preceding posts) completing the proof of Theorem 1.
The presentation here is based on some unpublished notes of van den Dries and Goldbring on Hilbert’s fifth problem. I am indebted to Emmanuel Breuillard, Ben Green, and Tom Sanders for many discussions related to these arguments.
Hilbert’s fifth problem asks to clarify the extent to which the assumption of a differentiable or smooth structure is actually needed in the theory of Lie groups and their actions. While this question is not precisely formulated and is thus open to some interpretation, the following result of Gleason and Montgomery-Zippin answers at least one aspect of this question:
Theorem 1 can be viewed as an application of the more general structural theory of locally compact groups. In particular, Theorem 1 can be deduced from the following structural theorem of Gleason and Yamabe:
Theorem 2 (Gleason-Yamabe theorem) Let $G$ be a locally compact group, and let $U$ be an open neighbourhood of the identity in $G$. Then there exists an open subgroup $G'$ of $G$, and a compact normal subgroup $N$ of $G'$ contained in $U$, such that $G'/N$ is isomorphic to a Lie group.
The deduction of Theorem 1 from Theorem 2 proceeds using the Brouwer invariance of domain theorem and is discussed in this previous post. In this post, I would like to discuss the proof of Theorem 2. We can split this proof into three parts, by introducing two additional concepts. The first is the property of having no small subgroups:
Definition 3 (NSS) A topological group $G$ is said to have no small subgroups, or is NSS for short, if there is an open neighbourhood $U$ of the identity in $G$ that contains no subgroups of $G$ other than the trivial subgroup $\{\mathrm{id}\}$.
An equivalent definition of an NSS group is one which has an open neighbourhood $U$ of the identity that every non-identity element $g \in G$ escapes in finite time, in the sense that $g^n \notin U$ for some positive integer $n$. It is easy to see that all Lie groups are NSS; we shall shortly see that the converse statement (in the locally compact case) is also true, though significantly harder to prove.
Another useful property is that of having what I will call a Gleason metric:
Definition 4 Let $G$ be a topological group. A Gleason metric on $G$ is a left-invariant metric $d: G \times G \rightarrow {\bf R}^+$ which generates the topology on $G$ and obeys the following properties for some constant $C > 0$, writing $\|g\|$ for $d(g, \mathrm{id})$:
- (Escape property) If $g \in G$ and $n \geq 1$ is such that $n \|g\| \leq \frac{1}{C}$, then $\|g^n\| \geq \frac{n}{C} \|g\|$.
- (Commutator estimate) If $g, h \in G$ are such that $\|g\|, \|h\| \leq \frac{1}{C}$, then

$\displaystyle \| [g,h] \| \leq C \|g\| \|h\|, \ \ \ \ \ (1)$

where $[g,h] := g^{-1}h^{-1}gh$ is the commutator of $g$ and $h$.
For instance, the unitary group $U_n({\bf C})$ with the operator norm metric $d(g,h) := \| g - h \|_{op}$ can easily be verified to be a Gleason metric space, with the commutator estimate (1) coming from the inequality

$\displaystyle \| gh - hg \|_{op} \leq 2 \| g - 1 \|_{op} \| h - 1 \|_{op}.$
Similarly, any left-invariant Riemannian metric on a (connected) Lie group can be verified to be a Gleason metric. From the escape property one easily sees that all groups with Gleason metrics are NSS; again, we shall see that there is a partial converse.
Remark 1 The escape and commutator properties are meant to capture “Euclidean-like” structure of the group. Other metrics, such as Carnot-Carathéodory metrics on Carnot Lie groups such as the Heisenberg group, usually fail one or both of these properties.
The proof of Theorem 2 can then be split into three subtheorems:
Theorem 5 (Reduction to the NSS case) Let $G$ be a locally compact group, and let $U$ be an open neighbourhood of the identity in $G$. Then there exists an open subgroup $G'$ of $G$, and a compact normal subgroup $N$ of $G'$ contained in $U$, such that $G'/N$ is NSS, locally compact, and metrisable.

Theorem 6 (Gleason metrics from NSS) Let $G$ be a locally compact NSS group. Then $G$ admits a Gleason metric.

Theorem 7 (From Gleason metrics to Lie groups) Let $G$ be a locally compact group with a Gleason metric. Then $G$ is isomorphic to a Lie group.
Theorem 5 and Theorem 6 proceed by some elementary combinatorial analysis, together with the use of Haar measure (to build convolutions, and thence to build “smooth” bump functions with which to create a metric, in a variant of the analysis used to prove the Birkhoff-Kakutani theorem); Theorem 5 also requires the Peter-Weyl theorem (to dispose of certain compact subgroups that arise en route to the reduction to the NSS case), which was discussed previously on this blog.
In this post I would like to detail the final component to the proof of Theorem 2, namely Theorem 7. (I plan to discuss the other two steps, Theorem 5 and Theorem 6, in a separate post.) The strategy is similar to that used to prove von Neumann’s theorem, as discussed in this previous post (and von Neumann’s theorem is also used in the proof), but with the Gleason metric serving as a substitute for the faithful linear representation. Namely, one first gives the space of one-parameter subgroups of enough of a structure that it can serve as a proxy for the “Lie algebra” of ; specifically, it needs to be a vector space, and the “exponential map” needs to cover an open neighbourhood of the identity. This is enough to set up an “adjoint” representation of , whose image is a Lie group by von Neumann’s theorem; the kernel is essentially the centre of , which is abelian and can also be shown to be a Lie group by a similar analysis. To finish the job one needs to use arguments of Kuranishi and of Gleason, as discussed in this previous post.
The arguments here can be phrased either in the standard analysis setting (using sequences, and passing to subsequences often) or in the nonstandard analysis setting (selecting an ultrafilter, and then working with infinitesimals). In my view, the two approaches have roughly the same level of complexity in this case, and I have elected for the standard analysis approach.
Remark 2 From Theorem 7 we see that a Gleason metric structure is a good enough substitute for smooth structure that it can actually be used to reconstruct the entire smooth structure; roughly speaking, the commutator estimate (1) allows for enough “Taylor expansion” of group expressions that one can simulate the fundamentals of Lie theory (in particular, construction of the Lie algebra and the exponential map, and verification of their basic properties). The advantage of working with a Gleason metric rather than a smoother structure, though, is that it is relatively undemanding with regards to regularity; in particular, the commutator estimate (1) is roughly comparable to the imposition of a $C^{1,1}$ structure on the group $G$, as this is the minimal regularity to get the type of Taylor approximation (with quadratic errors) that would be needed to obtain a bound of the form (1). We will return to this point in a later post.
In a previous blog post, I discussed the recent result of Guth and Katz obtaining a near-optimal bound on the Erdos distance problem. One of the tools used in the proof (building upon the earlier work of Elekes and Sharir) was the observation that the incidence geometry of the Euclidean group $SE(2)$ of rigid motions of the plane was almost identical to that of lines in the Euclidean space ${\bf R}^3$:
Proof: A rigid motion is either a translation or a rotation, with the latter forming a Zariski-dense subset of $SE(2)$. Identify a rotation by an angle $\theta$ around a point $(x, y)$ with the element $(x, y, \cot(\theta/2))$ in ${\bf R}^3$. (Note that such rotations also form a Zariski-dense subset of ${\bf R}^3$.) Elementary trigonometry then reveals that if such a rotation maps a point $P$ to a point $Q$, then the centre $(x,y)$ lies on the perpendicular bisector of $PQ$, and $\cot(\theta/2)$ depends in a linear fashion on $(x,y)$ (for fixed $P$, $Q$). The claim follows.
As seen from the proof, this proposition is an easy (though ad hoc) application of elementary trigonometry, but it was still puzzling to me why such a simple parameterisation of the incidence structure of $SE(2)$ was possible. Certainly it was clear from general algebraic geometry considerations that some bounded-degree algebraic description was available, but why would these sets of rigid motions be expressible as lines and not as, say, quadratic or cubic curves?
In this post I would like to record some observations arising from discussions with Jordan Ellenberg, Jozsef Solymosi, and Josh Zahl which give a more conceptual (but less elementary) derivation of the above proposition that avoids the use of ad hoc coordinate transformations such as $\theta \mapsto \cot(\theta/2)$. The starting point is to view the Euclidean plane ${\bf R}^2$ as the scaling limit of the sphere $S^2$ (a fact which is familiar to all of us through the geometry of the Earth), which makes the Euclidean group $SE(2)$ a scaling limit of the rotation group $SO(3)$. The latter can then be lifted to a double cover, namely the spin group $Spin(3) = SU(2)$. This group has a natural interpretation as the unit quaternions, which is isometric to the unit sphere $S^3$. The analogue of the lines in this setting become great circles on this sphere; applying a projective transformation, one can map $S^3$ to ${\bf R}^3$ (or more precisely to the projective space ${\bf RP}^3$), at which point the great circles become lines. This gives a proof of Proposition 1.
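The ad hoc parameterisation is easy to test numerically. Under the Elekes-Sharir style identification, a rotation by angle $\theta$ about centre $(x, y)$ is sent to the point $(x, y, \cot(\theta/2))$; a quick check (NumPy assumed, function names mine) that the rotations mapping a fixed $P$ to a fixed $Q$ then trace out a line:

```python
import numpy as np

P = np.array([0.0, 0.0])
Q = np.array([3.0, 1.0])

def rotation_through(P, Q, s):
    """A rotation mapping P to Q: its centre c lies on the perpendicular
    bisector of PQ; s is the signed distance of c from the midpoint."""
    M = (P + Q) / 2
    d = Q - P
    u = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal to PQ
    c = M + s * u
    v, w = P - c, Q - c
    theta = np.arctan2(v[0] * w[1] - v[1] * w[0], v @ w)  # rotation angle at c
    return c, theta

def to_point(c, theta):
    # identification of the rotation with a point of R^3: (c_x, c_y, cot(theta/2))
    return np.array([c[0], c[1], np.cos(theta / 2) / np.sin(theta / 2)])

pts = [to_point(*rotation_through(P, Q, s)) for s in (-2.0, 0.5, 3.0)]
cross = np.cross(pts[1] - pts[0], pts[2] - pts[0])
assert np.linalg.norm(cross) < 1e-9   # the three image points are collinear
print("rotations taking P to Q map to a line in R^3")
```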
Details of the correspondence are provided below the fold. One by-product of this analysis, incidentally, is the observation that the Guth-Katz bound for the Erdős distance problem in the plane ${\bf R}^2$ immediately extends with almost no modification to the sphere $S^2$ as well (i.e. any $N$ points in $S^2$ determine $\gg N/\log N$ distinct distances), as well as to the hyperbolic plane ${\bf H}^2$.
Assaf Naor and I have just uploaded to the arXiv our joint paper “Scale-oblivious metric fragmentation and the nonlinear Dvoretzky theorem“.
Consider a finite metric space $(X,d)$, with $X$ being a set of $n$ points. It is not always the case that this space can be isometrically embedded into a Hilbert space such as $\ell^2$; for instance, in a Hilbert space every pair of points has a unique midpoint, but uniqueness can certainly fail for finite metric spaces (consider for instance the graph metric on a four-point diamond). The situation improves, however, if one allows (a) the embedding to have some distortion, and (b) one is willing to embed just a subset of $X$, rather than all of $X$. More precisely, for any integer $n$ and any distortion $D \ge 1$, let $k(n,D)$ be the largest integer with the property that given every $n$-point metric space $(X,d)$, there exists a $k$-point subset $Y$ of $X$, and a map $f: Y \to \ell^2$, which has distortion at most $D$ in the sense that

$d(y,y') \le \|f(y) - f(y')\| \le D\, d(y,y')$

for all $y, y' \in Y$.
Bourgain, Figiel, and Milman established that for any fixed $D > 1$, one has the lower bound $k(n,D) \gg_D \log n$, and also showed that for $D$ sufficiently close to $1$ there is a matching upper bound $k(n,D) \ll_D \log n$. This type of result came to be called a nonlinear Dvoretzky theorem, in analogy with the linear Dvoretzky theorem, which asserts that given any $n$-dimensional normed vector space and any $D > 1$, there exists a $k$-dimensional subspace which embeds with distortion at most $D$ into a Hilbert space, with $k \gg_D \log n$. In other words, one can ensure that about $\log n$ of the points in an arbitrary metric space behave in an essentially Euclidean manner, and that this is best possible if one wants the distortion to be small.
Bartal, Linial, Mendel, and Naor observed that there is a threshold phenomenon at $D = 2$. Namely, for $D < 2$, the Bourgain-Figiel-Milman bounds are sharp, with $k(n,D) \asymp_D \log n$; but for $D > 2$ one instead has a power law

$k(n,D) \gg_D n^{c(D)}$

for some exponent $c(D) > 0$. In other words, once one allows distortion by factors greater than $2$, one can now embed a polynomial portion of the points, rather than just a logarithmic portion, into a Hilbert space. This has some applications in theoretical computer science to constructing "approximate distance oracles" for high-dimensional sets of data. (The situation at the critical value $D = 2$ is still unknown.)
In the special case that the metric is an ultrametric, so that the triangle inequality is upgraded to the ultra-triangle inequality $d(x,z) \le \max(d(x,y), d(y,z))$, it is an easy exercise to show that $X$ can be embedded isometrically into a Hilbert space, and in fact into a sphere of radius $\mathrm{diam}(X)/\sqrt{2}$. Indeed, this can be established by an induction on the cardinality of $X$, using the ultrametric inequality to partition any finite ultrametric space of two or more points into sets of strictly smaller diameter that are all separated from each other by a distance of exactly $\mathrm{diam}(X)$, and then combining the embeddings given by the inductive hypothesis via Pythagoras' theorem.
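The induction can be made completely explicit. The sketch below (hypothetical code, not from the paper) gives each cluster of smaller diameter its own block of coordinates, plus one fresh coordinate per cluster whose weight is chosen by Pythagoras so that every point ends up at norm $\mathrm{diam}/\sqrt{2}$ and cross-cluster distances come out exactly equal to the diameter:

```python
import itertools
import math

def embed_ultrametric(pts, d):
    """Isometrically embed a finite ultrametric space (pts, d) into Euclidean
    space; every image lies on the sphere of radius diam/sqrt(2) about 0."""
    if len(pts) == 1:
        return {pts[0]: ()}
    D = max(d(x, y) for x, y in itertools.combinations(pts, 2))
    # In an ultrametric, "d(x, y) < D" is an equivalence relation, so the
    # space splits into clusters of strictly smaller diameter, any two of
    # which are at distance exactly D from each other.
    clusters = []
    for p in pts:
        for c in clusters:
            if d(p, c[0]) < D:
                c.append(p)
                break
        else:
            clusters.append([p])
    placed, dim = {}, 0
    for c in clusters:
        sub = embed_ultrametric(c, d)          # inductive hypothesis
        subdim = len(next(iter(sub.values())))
        Dc = max((d(x, y) for x, y in itertools.combinations(c, 2)), default=0.0)
        # One fresh coordinate per cluster; this weight makes every point of
        # the cluster have norm D/sqrt(2), so cross-cluster distances are D.
        w = math.sqrt((D * D - Dc * Dc) / 2.0)
        for p, v in sub.items():
            placed[p] = (dim, v + (w,))
        dim += subdim + 1
    out = {}
    for p, (off, v) in placed.items():
        vec = [0.0] * dim
        vec[off:off + len(v)] = list(v)
        out[p] = tuple(vec)
    return out
```

For instance, on a four-point ultrametric with two pairs at distance $1$ and all other distances $2$, this reproduces every distance exactly and places all four images at norm $2/\sqrt{2} = \sqrt{2}$.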
One can then replace the concept of embedding into a Hilbert space with the apparently stronger concept of embedding into an ultrametric space; this is useful for the computer science applications, as ultrametrics have a tree structure which allows for some efficient algorithms for computing distances in such spaces. As it turns out, all the preceding constructions carry over without difficulty to this setting; thus, for $D < 2$, one can embed a logarithmic number of points with distortion $D$ into an ultrametric space, and for $D > 2$ one can embed a polynomial number of points.
One can view the task of locating a subset of a metric space that is equivalent (up to bounded distortion) to an ultrametric as that of fragmenting a metric space into a tree-like structure. For instance, the standard metric on the arithmetic progression $\{1,\ldots,N\}$ is not an ultrametric (and in fact needs a huge distortion factor, comparable to $N$, in order to embed into an ultrametric space), but if one restricts to the Cantor-like subset of integers in $\{1,\ldots,N\}$ whose base $4$ expansion consists solely of $0$s and $1$s, then one easily verifies that the resulting set is fragmented enough that the standard metric is equivalent (up to a bounded distortion factor) to an ultrametric, namely the metric in which the distance between two distinct elements is $4^k$, where $k$ is the position of the most significant base $4$ digit at which the two elements differ.
The above fragmentation constructions were somewhat complicated and deterministic in nature. Mendel and Naor introduced a simpler probabilistic method, based on random partition trees of $X$, which reproved the above results in the high-distortion case (with exponent $c(D) \to 1$ in the limit $D \to \infty$). However, this method had inherent limitations in the low-distortion case, in particular failing once the distortion approaches the critical threshold $D = 2$ (basically because the method would establish a stronger embedding result which fails in that regime).
In this paper, we introduce a variant of the random partition tree method, which involves a random fragmentation at a randomly chosen set of scales, that works all the way down to the critical distortion threshold and gives a clean value for the exponent $c(D)$; in fact we have the more precise result that the exponent can be taken to be the unique solution of an explicit equation given in the paper, and that this value is the limit of the method if one chooses a "scale-oblivious" approach that applies uniformly to all metric spaces, ignoring the internal structure of each individual metric space.
The construction ends up being relatively simple (taking about three pages). The basic idea is as follows. Pick two scales $0 < r < R$, and let $y_1, y_2, y_3, \ldots$ be a random sequence of points in $X$ (of course, $X$ being finite, every point in $X$ will almost surely be visited infinitely often by this sequence). We can then partition $X$ into pieces $X_1, X_2, \ldots$, by defining $X_i$ to be the set of points $x \in X$ for which $y_i$ is the first point in the sequence to lie in the ball $B(x,R)$. We then let $Y_i$ be the subset of $X_i$ consisting of those points which actually fall in the smaller ball $B(y_i,r)$. As a consequence, we see that each $Y_i$ is contained in a ball of radius $r$ (namely $B(y_i,r)$), but that any two pieces $Y_i, Y_j$ are separated by a distance of at least $R - r$ (from the triangle inequality). This gives one layer of a quasi-ultrametric tree structure on $X$; if one iterates this over many different pairs of scales $(r, R)$, one gets a full quasi-ultrametric tree structure, which one can then adjust with bounded distortion to a genuine ultrametric structure. The game is then to optimise the choice of scales so as to maximise the portion of $X$ that remains in the tree; it turns out that a suitably random choice of such scales is the optimal one.
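One layer of this fragmentation is easy to simulate. The sketch below (hypothetical code, not from the paper; the metric and the scales $r, R$ are placeholders) captures each point by the first member of the random sequence whose $R$-ball contains it, and keeps only the points in the smaller $r$-ball:

```python
import random

def fragment_once(pts, d, r, R, rng):
    """One layer of the random fragmentation: each point is captured by the
    first point y_i of a random sequence whose R-ball contains it, and
    survives into a piece only if it also lies in the r-ball around y_i."""
    remaining = set(pts)
    pieces = []
    while remaining:
        y = rng.choice(pts)      # the random sequence y_1, y_2, ... in X
        captured = {x for x in remaining if d(x, y) <= R}
        remaining -= captured
        kept = sorted(x for x in captured if d(x, y) <= r)
        if kept:
            pieces.append(kept)
    return pieces

rng = random.Random(0)
pieces = fragment_once(range(30), lambda x, y: abs(x - y), 1, 4, rng)
# every piece has diameter at most 2r, and distinct pieces are separated by
# more than R - r: one layer of the quasi-ultrametric tree structure
```

The separation property holds because a point captured at a later step was, by definition, outside the $R$-ball of every earlier centre, while the surviving points sit within distance $r$ of their own centre.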
Assaf Naor and I have just uploaded to the arXiv our paper "Random Martingales and localization of maximal inequalities", to be submitted shortly. This paper investigates the best constant $C_d$ in generalisations of the classical Hardy-Littlewood maximal inequality

$|\{x \in {\bf R}^d : Mf(x) \ge \lambda\}| \le \frac{C_d}{\lambda} \int_{{\bf R}^d} |f(x)|\ dx$

for any absolutely integrable $f: {\bf R}^d \to {\bf C}$ and any $\lambda > 0$, where

$Mf(x) := \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy$

is the Hardy-Littlewood maximal function, $B(x,r)$ is the Euclidean ball of radius $r$ centred at $x$, and $|E|$ denotes the Lebesgue measure of a subset $E$ of ${\bf R}^d$. This inequality is fundamental to a large part of real-variable harmonic analysis, and in particular to Calderón-Zygmund theory. A similar inequality in fact holds with the Euclidean norm replaced by any other convex norm on ${\bf R}^d$.
The exact value of the constant $C_d$ is only known in the case $d = 1$, with a remarkable result of Melas establishing that $C_1 = \frac{11+\sqrt{61}}{12}$. Classical covering lemma arguments give an exponential upper bound $C_d \le C^d$ when properly optimised (a direct application of the Vitali covering lemma gives $C_d \le 5^d$, but one can reduce this to $C_d \le 3^d$ by being careful). In an important paper of Stein and Strömberg, the improved bound $C_d = O(d \log d)$ was obtained for any convex norm by a more intricate covering argument, and the slight improvement $C_d = O(d)$ was obtained in the Euclidean case by another argument, more adapted to the Euclidean setting, that relied on heat kernels. In the other direction, a recent result of Aldaz shows that $C_d \to \infty$ as $d \to \infty$ in the case of the $\ell^\infty$ norm, and in fact an even more recent preprint of Aubrun obtains the lower bound $C_d \gg_\epsilon \log^{1-\epsilon} d$ for any $\epsilon > 0$ in this case. However, these lower bounds do not apply in the Euclidean case, and one may still conjecture that $C_d$ is in fact uniformly bounded in that case.
Unfortunately, we do not make direct progress on these problems here. However, we do show that the Stein-Strömberg bound $O(n \log n)$ is extremely general, applying to a wide class of metric measure spaces obeying a certain "microdoubling condition at dimension $n$"; and conversely, at this level of generality, it is essentially the best estimate possible, even with additional metric measure hypotheses on the space. Thus, if one wants to improve this bound for a specific maximal inequality, one has to use specific properties of the geometry (such as the connections between Euclidean balls and heat kernels). Furthermore, in the general setting of metric measure spaces, one has a general localisation principle, which roughly speaking asserts that in order to prove a maximal inequality over all scales, it suffices to prove such an inequality for scales in a bounded range, uniformly in the location of that range. It is this localisation which ultimately explains the $n \log n$ growth in the Stein-Strömberg result (there are essentially $n \log n$ distinct scales in the relevant range). It also shows that if one restricts the radii to a lacunary range (such as powers of $2$), the best constant improves to $O(\log n)$; if one restricts the radii to an even sparser range, such as powers of $n$, the best constant becomes $O(1)$.
Yehuda Shalom and I have just uploaded to the arXiv our paper "A finitary version of Gromov's polynomial growth theorem", to be submitted to Geom. Funct. Anal. The purpose of this paper is to establish a quantitative version of Gromov's polynomial growth theorem which, among other things, is meaningful for finite groups. Here is a statement of Gromov's theorem:
Gromov's theorem. Let $G$ be a group generated by a finite (symmetric) set $S$, and suppose that one has the polynomial growth condition

$|B_S(R)| \le R^d \quad (1)$

for all sufficiently large $R$ and some fixed $d$, where $B_S(R)$ is the ball of radius $R$ generated by $S$ (i.e. the set of all words in $S$ of length at most $R$, evaluated in $G$). Then $G$ is virtually nilpotent, i.e. it has a finite index subgroup $G'$ which is nilpotent of some finite step $s$.
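To see the dichotomy the theorem captures, it is instructive to compare ball sizes in the abelian group ${\bf Z}^2$ (polynomial growth) with those in the free group $F_2$ (exponential growth, hence not virtually nilpotent). A quick illustration (hypothetical code, not from the paper) computing $|B_S(R)|$ by breadth-first search:

```python
def ball_sizes(generators, identity, multiply, radius):
    """Sizes |B_S(1)|, ..., |B_S(radius)| of word-balls, by breadth-first search."""
    ball = {identity}
    frontier = {identity}
    sizes = []
    for _ in range(radius):
        frontier = {multiply(g, s) for g in frontier for s in generators} - ball
        ball |= frontier
        sizes.append(len(ball))
    return sizes

# Z^2 with generators +/-e1, +/-e2: |B(R)| = 2R^2 + 2R + 1, polynomial in R.
gens_z2 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
z2 = ball_sizes(gens_z2, (0, 0), lambda g, s: (g[0] + s[0], g[1] + s[1]), 5)

# Free group F_2 as reduced words in a, A, b, B: |B(R)| = 2*3^R - 1, exponential.
def mul_free(w, s):
    inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    return w[:-1] if w and inverse[w[-1]] == s else w + s

f2 = ball_sizes(['a', 'A', 'b', 'B'], '', mul_free, 5)
print(z2)  # [5, 13, 25, 41, 61]
print(f2)  # [5, 17, 53, 161, 485]
```

The first sequence satisfies (1) with $d = 2$ (up to constants), while the second grows like $3^R$ and violates (1) at every large scale.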
As currently stated, Gromov's theorem is qualitative rather than quantitative; it does not specify any relationship between the input data (the growth exponent $d$ and the range of scales $R$ for which one has (1)) and the output parameters (in particular, the index of the nilpotent subgroup $G'$ of $G$, and the step $s$ of that subgroup). However, a compactness argument (sketched in this previous blog post) shows that some such relationship must exist; indeed, if one has (1) for all scales $R$ up to some sufficiently large threshold $R_0(d)$, then one can ensure that $G'$ has index at most $C(d)$ and step at most $s(d)$ for some quantities $C(d), s(d)$ depending only on $d$; thus Gromov's theorem is inherently a "local" result, which only requires one to multiply the generating set $S$ a finite number of times before one sees the virtual nilpotency of the group. However, the compactness argument does not give explicit values for the quantities $R_0(d), C(d), s(d)$, and the nature of Gromov's proof (using, in particular, the deep Montgomery-Zippin-Yamabe theory on Hilbert's fifth problem) does not easily allow such explicit values to be extracted.
Another point is that the original formulation of Gromov's theorem required the polynomial bound (1) at all sufficiently large scales $R$. A later proof of this theorem by van den Dries and Wilkie relaxed this hypothesis to requiring (1) just for infinitely many scales $R$; the later proof by Kleiner (which I blogged about here) also uses this relaxed hypothesis.
Our main result reduces the hypothesis (1) to a single large scale, and makes most of the qualitative dependencies in the theorem quantitative:
Theorem 1. If (1) holds for some $R > \exp(\exp(C d^C))$ for some sufficiently large absolute constant $C$, then $G$ contains a finite index subgroup $G'$ which is nilpotent of step at most $C^d$.
The argument does in principle provide a bound on the index of $G'$ in $G$, but it is very poor (of Ackermann type). If instead one is willing to relax "nilpotent" to "polycyclic", the bounds on the index are somewhat better (of tower exponential type), though still far from ideal.
There is a related finitary analogue of Gromov’s theorem by Makarychev and Lee, which asserts that any finite group of uniformly polynomial growth has a subgroup with a large abelianisation. The quantitative bounds in that result are quite strong, but on the other hand the hypothesis is also strong (it requires upper and lower bounds of the form (1) at all scales) and the conclusion is a bit weaker than virtual nilpotency. The argument is based on a modification of Kleiner’s proof.
Our argument also proceeds by modifying Kleiner's proof of Gromov's theorem (a significant fraction of which was already quantitative), and carefully removing all of the steps which require one to take an asymptotic limit. To ease this task, we look for the most elementary arguments available for each step of the proof (thus consciously avoiding powerful tools such as the Tits alternative). A key technical issue is that because there is only a single scale $R$ for which one has polynomial growth, one has to work at scales significantly less than $R$ in order to have any chance of keeping control of the various groups and other objects being generated.
Below the fold, I discuss a stripped down version of Kleiner’s argument, and then how we convert it to a fully finitary argument.
I am posting the last two talks in my Clay-Mahler lecture series here:
- “Structure and randomness in the prime numbers“. This public lecture is slightly updated from a previous talk of the same name given last year, but is largely the same material.
- “Perelman’s proof of the Poincaré conjecture“. Here I try (perhaps ambitiously) to compress an overview of Perelman’s proof of the Poincaré conjecture into an hour-length talk for a general mathematical audience. It is a little unpolished, as I have not given any version of this talk before, but I hope to update it a bit in the future.
[Update, Sep 14: Poincaré conjecture slides revised.]
[Update, Sep 18: Prime slides revised also.]
The celebrated Szemerédi-Trotter theorem gives a bound for the set of incidences $I(P,L) := \{(p,\ell) \in P \times L : p \in \ell\}$ between a finite set of points $P$ and a finite set of lines $L$ in the Euclidean plane ${\bf R}^2$. Specifically, the bound is

$|I(P,L)| \ll |P|^{2/3} |L|^{2/3} + |P| + |L|, \quad (1)$

where we use the asymptotic notation $X \ll Y$ or $X = O(Y)$ to denote the statement that $X \le CY$ for some absolute constant $C$. In particular, the number of incidences between $n$ points and $n$ lines is $O(n^{4/3})$. This bound is sharp; consider for instance the discrete box $P := \{(a,b) \in {\bf Z}^2 : 1 \le a \le N, 1 \le b \le 2N^2\}$, with $L$ being the collection of lines $\{(x, mx+b) : m, b \in {\bf Z}, 1 \le m \le N, 1 \le b \le N^2\}$. One easily verifies that $|P| = 2N^3$, $|L| = N^3$, and $|I(P,L)| = N^4$, showing that (1) is essentially sharp in the case $|P| \approx |L|$; one can concoct similar examples for other regimes of $|P|$ and $|L|$.
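The sharpness computation is mechanical enough to confirm by brute force. In the sketch below (hypothetical code, not from the post), the points are $\{1,\ldots,N\} \times \{1,\ldots,2N^2\}$ and the lines are $y = mx + c$ with $1 \le m \le N$ and $1 \le c \le N^2$, a standard sharp configuration: each line then meets exactly $N$ of the points, giving $N^4$ incidences, which is comparable to $(|P||L|)^{2/3}$:

```python
from itertools import product

N = 4
# the discrete box: 2N^3 points and N^3 lines y = m*x + c
P = set(product(range(1, N + 1), range(1, 2 * N * N + 1)))
lines = list(product(range(1, N + 1), range(1, N * N + 1)))

incidences = sum((x, m * x + c) in P
                 for (m, c) in lines for x in range(1, N + 1))
print(len(P), len(lines), incidences)  # 128 64 256: exactly N^4 incidences
```

The point of the construction is that for $1 \le x \le N$ one always has $1 < mx + c \le N^2 + N^2 = 2N^2$, so no incidence is lost at the boundary of the box.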
On the other hand, if one replaces the Euclidean plane ${\bf R}^2$ by a finite field geometry $F^2$, where $F$ is a finite field, then the estimate (1) is false. For instance, if $P$ is the entire plane $F^2$, and $L$ is the set of all lines in $F^2$, then $|P|$ and $|L|$ are both comparable to $|F|^2$, but $|I(P,L)|$ is comparable to $|F|^3$, thus violating (1) when $|F|$ is large. Thus any proof of the Szemerédi-Trotter theorem must use a special property of the Euclidean plane which is not enjoyed by finite field geometries. In particular, this strongly suggests that one cannot rely purely on algebra and combinatorics to prove (1); one must also use some Euclidean geometry or topology as well.
Nowadays, the slickest proof of the Szemerédi-Trotter theorem is via the crossing number inequality (as discussed in this previous post), which ultimately relies on Euler's famous formula $V - E + F = 2$; thus in this argument it is topology which is the feature of Euclidean space that one is exploiting, and which is not present in the finite field setting. Today, though, I would like to mention a different proof (closer in spirit to the original proof of Szemerédi-Trotter, and also to a later argument of Clarkson et al.), based on the method of cell decomposition, which has proven to be a very flexible method in combinatorial incidence geometry. Here, the distinctive feature of Euclidean geometry one is exploiting is convexity, which again has no finite field analogue.
The axiom that any two points determine at most one line, combined with the Cauchy-Schwarz inequality, gives the bound

$|I(P,L)| \ll |P| |L|^{1/2} + |L|,$

which is inferior to (1). (On the other hand, this estimate works in both Euclidean and finite field geometries, and is sharp in the latter case, as shown by the example given earlier.) Dually, the axiom that two lines determine at most one point gives the bound

$|I(P,L)| \ll |L| |P|^{1/2} + |P|. \quad (2)$
An inspection of the proof of (2) shows that it is only expected to be sharp when the bushes $B_p := \{\ell \in L : \ell \ni p\}$ associated to each point $p$ behave like "independent" subsets of $L$, so that there is no significant correlation between the bush of one point $p$ and the bush of another point $q$.
However, in Euclidean space, we have the phenomenon that the bush of a point $p$ is influenced by the region of space that $p$ lies in. Clearly, if $p$ lies in a convex region $\Omega$ (e.g. a convex polygon), then the only lines that can contribute to the bush of $p$ are those lines which pass through $\Omega$. If $\Omega$ is a small convex region of space, one expects only a fraction of the lines in $L$ to actually pass through $\Omega$. As such, if $p$ and $q$ both lie in $\Omega$, then their bushes are both compressed inside a smaller subset of $L$, namely the set of lines passing through $\Omega$, and so should be more likely to intersect than if they were independent. This should lead to an improvement to (2) (and indeed, as we shall see below, ultimately leads to (1)).
More formally, the argument proceeds by applying the following lemma:
Lemma 1 (Cell decomposition) Let $L$ be a finite collection of lines in ${\bf R}^2$, let $P$ be a finite set of points, and let $r \ge 1$. Then it is possible to find a set of $O(r)$ lines, plus some additional open line segments not containing any point in $P$, which subdivide ${\bf R}^2$ into $O(r^2)$ convex regions (or cells), such that the interior of each such cell is incident to at most $O(|L|/r)$ lines.
Let $r \ge 1$ be a parameter to be optimised later. We apply the cell decomposition to subdivide ${\bf R}^2$ into $O(r^2)$ open convex regions, plus a family of $O(r)$ lines. Each of the convex regions $\Omega$ has only $O(|L|/r)$ lines through it, and so by (2) contributes $O(|P \cap \Omega| |L|^{1/2}/r^{1/2} + |L|/r)$ incidences. Meanwhile, on each of the $O(r)$ lines $\ell$ used to perform this decomposition, there are at most $|L|$ transverse incidences (because each line in $L$ distinct from $\ell$ can intersect $\ell$ at most once), plus all the incidences along $\ell$ itself. Putting all this together, one obtains

$|I(P,L)| \ll |I(P,L')| + |P| |L|^{1/2}/r^{1/2} + |L| r,$

where $L'$ is the set of $O(r)$ lines used in the decomposition.

We optimise this by selecting $r \sim |P|^{2/3}/|L|^{1/3}$; after first reducing to the regime $|L|^{1/2} \ll |P| \ll |L|^2$ (outside of which (1) already follows from the trivial bounds), we can ensure that $1 \le r \le |L|/2$, so that $|L'| \le |L|/2$. One then obtains

$|I(P,L)| \ll |I(P,L')| + |P|^{2/3} |L|^{2/3}.$
We can iterate away the $|I(P,L')|$ error (halving the number of lines each time) and sum the resulting geometric series to obtain (1).
It remains to prove Lemma 1. If one subdivides ${\bf R}^2$ using $r$ arbitrary lines, one creates at most $O(r^2)$ cells (because each new line intersects the existing lines in at most $r$ points, and so can create at most $r+1$ new cells), and for a similar reason, every line in $L$ visits at most $r+1$ of these cells, and so by double counting one expects $O(|L|/r)$ lines per cell "on the average". The key difficulty is then to get $O(|L|/r)$ lines through every cell, not just on the average. It turns out that a probabilistic argument will almost work, but with a logarithmic loss (thus having $O(|L| \log |L| / r)$ lines per cell rather than $O(|L|/r)$); but with a little more work one can then iterate away this loss as well. The arguments here are loosely based on those of Clarkson et al.; a related (deterministic) decomposition also appears in the original paper of Szemerédi and Trotter. But I wish to focus here on the probabilistic approach.
It is also worth noting that the original (somewhat complicated) argument of Szemerédi-Trotter has been adapted by Tóth to establish the analogue of (1) in the complex plane ${\bf C}^2$, while the other known proofs of Szemerédi-Trotter have so far resisted extension to this setting (the Euler characteristic argument clearly breaks down, as does any proof based on using lines to divide the plane into half-spaces). So all three proofs have their advantages and disadvantages.