You are currently browsing the monthly archive for March 2011.

I’ve just finished writing the first draft of my third book coming out of the 2010 blog posts, namely “Higher order Fourier analysis“, which was based primarily on my graduate course on the topic, though it also contains material from some additional posts related to linear and higher order Fourier analysis on the blog. It is available online here. As usual, comments and corrections are welcome. There is also a stub page for the book, which at present does not contain much more than the above link.

Last year, Emmanuel Breuillard, Ben Green, Bob Guralnick, and I wrote a paper entitled “Strongly dense free subgroups of semisimple Lie groups“. The main theorem in that paper asserted that given any semisimple algebraic group $G(k)$ over an uncountable algebraically closed field $k$, there existed a free subgroup $\Gamma$ of $G(k)$ which was *strongly dense* in the sense that any non-abelian subgroup of $\Gamma$ was Zariski dense in $G(k)$. This type of result is useful for establishing expansion in finite simple groups of Lie type, as we will discuss in a subsequent paper.

An essentially equivalent formulation of the main result is that if $w_1, w_2$ are two non-commuting elements of the free group $F_2$ on two generators, and $(a,b)$ is a *generic* pair of elements in $G(k) \times G(k)$, then $w_1(a,b)$ and $w_2(a,b)$ are not contained in any proper closed algebraic subgroup of $G(k)$. Here, “generic” means “outside of at most countably many proper subvarieties”. In most cases, one expects that if $a, b$ are generically drawn from $G(k)$, then the pair $(w_1(a,b), w_2(a,b))$ will also be generically drawn from $G(k) \times G(k)$, but this is not always the case, which is a key source of difficulty in the paper. For instance, if $w_2$ is conjugate to $w_1$ in $F_2$, then $w_1(a,b)$ and $w_2(a,b)$ must be conjugate in $G(k)$ and so the pair $(w_1(a,b), w_2(a,b))$ lies in a proper subvariety of $G(k) \times G(k)$. It is currently an open question to determine all the pairs $w_1, w_2$ of words for which $(w_1(a,b), w_2(a,b))$ is not generic for generic $(a,b)$ (or equivalently, the double word map $(a,b) \mapsto (w_1(a,b), w_2(a,b))$ is not dominant).
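The conjugacy obstruction can be seen concretely in a few lines of pure Python (the helper names `mul`, `inv`, `trace` are ad hoc, for illustration only): for the conjugate words $w_1 = aba^{-1}b^{-1}$ and $w_2 = b w_1 b^{-1}$, the matrices $w_1(a,b)$ and $w_2(a,b)$ always share the same trace, so the pair $(w_1(a,b), w_2(a,b))$ is confined to the proper subvariety where traces agree.

```python
# 2x2 integer matrix helpers (ad hoc names, for illustration only)
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    # inverse of a 2x2 matrix of determinant 1
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

def trace(A):
    return A[0][0] + A[1][1]

# two elements of SL_2 (integer matrices of determinant 1)
a = [[1, 2], [0, 1]]
b = [[1, 0], [3, 1]]

w1 = mul(mul(a, b), mul(inv(a), inv(b)))   # the commutator a b a^{-1} b^{-1}
w2 = mul(mul(b, w1), inv(b))               # a conjugate word: b w1 b^{-1}

print(trace(w1), trace(w2))                # equal traces: 38 38
```

Of course, conjugation in the group preserves the trace whatever $a, b$ are, which is exactly why genericity of the pair fails for conjugate words.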

The main strategy of proof was as follows. It is not difficult to reduce to the case when $G$ is simple. Suppose for contradiction that we could find two non-commuting words $w_1, w_2$ such that $(w_1(a,b), w_2(a,b))$ were generically trapped in a proper closed algebraic subgroup. As it turns out, there are only finitely many conjugacy classes of such groups, and so one can assume that $(w_1(a,b), w_2(a,b))$ were generically trapped in a conjugate of a fixed proper closed algebraic subgroup $H$. One can show that $w_1(a,b)$ and $w_2(a,b)$ are generically regular semisimple, which implies that $H$ is a maximal rank semisimple subgroup. The key step was then to find another proper semisimple subgroup $H'$ of $G$ which was *not* a *degeneration* of $H$, by which we mean that there did not exist a pair $(a',b')$ in the Zariski closure of the set of pairs lying in conjugates of $H$, such that $a', b'$ generated a Zariski-dense subgroup of $H'$. This is enough to establish the theorem, because we could use an induction hypothesis to find $(a',b')$ in $H' \times H'$ (and hence in $G \times G$) such that $w_1(a',b'), w_2(a',b')$ generated a Zariski-dense subgroup of $H'$, which contradicts the hypothesis that $(w_1(a,b), w_2(a,b))$ was trapped in a conjugate of $H$ for generic $(a,b)$ (and hence in the closure of such pairs for all $(a,b)$).

To illustrate the concept of a degeneration, take $G$ to be an orthogonal group and let $H$ be the stabiliser of a non-degenerate subspace. All other stabilisers of non-degenerate subspaces of the same dimension are conjugate to $H$. However, stabilisers of *degenerate* subspaces of that dimension are not conjugate to $H$, but are still degenerations of $H$. For instance, the stabiliser of a totally singular subspace (which is isomorphic to an affine group extended by a further factor) is a degeneration of $H$.

A significant portion of the paper was then devoted to verifying that for each simple algebraic group , and each maximal rank proper semisimple subgroup of , one could find another proper semisimple subgroup which was not a degeneration of ; roughly speaking, this means that is so “different” from that no conjugate of can come close to covering . This required using the standard classification of algebraic groups via Dynkin diagrams, and knowledge of the various semisimple subgroups of these groups and their representations (as we used the latter as obstructions to degeneration, for instance one can show that a reducible representation cannot degenerate to an irreducible one).

During the refereeing process for this paper, we discovered that there was precisely one family of simple algebraic groups for which this strategy did not actually work, namely the group $Sp(4)$ (or the group $SO(5)$ that is double-covered by this group) in characteristic $3$. This group (which has Dynkin diagram $B_2 = C_2$, as discussed in this previous post) has one maximal rank proper semisimple subgroup up to conjugacy, namely $SO(4)$, which is the stabiliser of a line in the five-dimensional space $k^5$. To find a proper semisimple group that is not a degeneration of this group, we basically need to find a subgroup that does not stabilise any line in $k^5$. In characteristic larger than three (or characteristic zero), one can proceed by using the action of $SL_2$ on the five-dimensional space of homogeneous degree four polynomials on $k^2$, which preserves a non-degenerate symmetric form (the four-fold tensor power of the area form on $k^2$) and thus embeds into $SO(5)$; as no polynomial is fixed by all of $SL_2$, we see that this copy of $SL_2$ is not a degeneration of $SO(4)$.

Unfortunately, in characteristics two and three, the symmetric form on this five-dimensional space degenerates, and this embedding is lost. In the characteristic two case, one can proceed by using the characteristic two fact that $SO(5)$ is isomorphic to $Sp(4)$ (because in characteristic two, the space of null vectors is a hyperplane, and the symmetric form becomes symplectic on this hyperplane), and thus has an additional maximal rank proper semisimple subgroup which is not conjugate to the $SO(4)$ subgroup. But in characteristic three, it turns out that there are no further semisimple subgroups of $SO(5)$ that are not already contained in a conjugate of the $SO(4)$. (This is not a difficulty for larger groups, where there are plenty of other semisimple groups to utilise; it is only this smallish group $SO(5)$ that has the misfortune of having exactly one maximal rank proper semisimple group to play with, and not enough other semisimples lying around in characteristic three.)

As a consequence of this issue, our argument does not actually work in the case when the characteristic is three and the semisimple group contains a copy of $Sp(4)$ (or $SO(5)$), and we have had to modify our paper to delete this case from our results. We believe that such groups still do contain strongly dense free subgroups, but this appears to be just out of reach of our current method.

One thing that this experience has taught me is that algebraic groups behave somewhat pathologically in low characteristic; in particular, intuition coming from the characteristic zero case can become unreliable in characteristic two or three.

Classically, the fundamental object of study in *algebraic geometry* is the solution set

$\displaystyle V = \{ (x_1,\ldots,x_d) \in k^d: P_1(x_1,\ldots,x_d) = \ldots = P_m(x_1,\ldots,x_d) = 0 \}$

to multiple algebraic equations

$\displaystyle P_1(x_1,\ldots,x_d) = \ldots = P_m(x_1,\ldots,x_d) = 0$

in multiple unknowns $x_1,\ldots,x_d$ in a field $k$, where the $P_1,\ldots,P_m: k^d \rightarrow k$ are polynomials of various degrees $D_1,\ldots,D_m$. We adopt the classical perspective of viewing $V$ as a set (and specifically, as an algebraic set), rather than as a scheme. Without loss of generality we may order the degrees in non-increasing order:

$\displaystyle D_1 \geq D_2 \geq \ldots \geq D_m.$

We can distinguish between the *underdetermined case* $m < d$, when there are more unknowns than equations; the *determined case* $m = d$, when there are exactly as many unknowns as equations; and the *overdetermined case* $m > d$, when there are more equations than unknowns.

Experience has shown that the theory of such equations is significantly simpler if one assumes that the underlying field $k$ is algebraically closed, and so we shall make this assumption throughout the rest of this post. In particular, this covers the important case when $k = {\bf C}$ is the field of complex numbers (but it does *not* cover the case $k = {\bf R}$ of the real numbers – see below).

From the general “soft” theory of algebraic geometry, we know that the algebraic set $V$ is a union of finitely many algebraic varieties, each of dimension at least $d-m$, with none of these components contained in any other. In particular, in the underdetermined case $m < d$, there are no zero-dimensional components of $V$, and thus $V$ is either empty or infinite.

Now we turn to the determined case $m = d$, where we expect the solution set to be zero-dimensional and thus finite. Here, the basic control on the solution set is given by Bezout’s theorem, which we state first in the planar case $d = 2$. In our notation, this theorem states the following:

Theorem 1 (Bezout’s theorem) Let $P_1, P_2 \in k[x,y]$ be polynomials of degrees $D_1, D_2$ respectively. If the solution set $\{ (x,y) \in k^2: P_1(x,y) = P_2(x,y) = 0 \}$ is finite, then it has cardinality at most $D_1 D_2$.

This result can be found in any introductory algebraic geometry textbook; it can for instance be proven using the classical tool of resultants. The solution set will be finite when the two polynomials $P_1, P_2$ are coprime, but can (and will) be infinite if $P_1, P_2$ share a non-trivial common factor.

By defining the right notion of multiplicity on the solution set (and adopting a suitably “scheme-theoretic” viewpoint), and working in projective space rather than affine space, one can make the inequality $|\{ P_1 = P_2 = 0 \}| \leq D_1 D_2$ an equality. However, for many applications (and in particular, for the applications to combinatorial incidence geometry), the upper bound usually suffices.
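For a concrete (and deliberately tiny) instance of the bound, here is a pure-Python sketch intersecting a parabola with a line; the variable names are of course just for illustration.

```python
import math

# Bezout in the plane: P1(x,y) = y - x^2 (degree 2) and P2(x,y) = y - x - 2
# (degree 1) meet in at most 2 * 1 = 2 points.  Substituting the line into
# the parabola reduces the system to the single equation x^2 - x - 2 = 0.
a, b, c = 1, -1, -2
disc = b * b - 4 * a * c
roots = sorted((-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1))
solutions = [(x, x + 2) for x in roots]

print(solutions)                  # the common zero locus: [(-1.0, 1.0), (2.0, 4.0)]
assert len(solutions) <= 2 * 1    # the Bezout bound D1 * D2
```

Here the bound is attained: a generic line meets a conic in exactly $D_1 D_2 = 2$ points over an algebraically closed field.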

Bezout’s theorem can be generalised in a number of ways. For instance, the restriction on the finiteness of the solution set $V$ can be dropped by restricting attention to $V^{(0)}$, the union of the zero-dimensional irreducible components of $V$:

Corollary 2 (Bezout’s theorem, again) Let $P_1, P_2 \in k[x,y]$ be polynomials of degrees $D_1, D_2$ respectively. Then $V^{(0)}$ has cardinality at most $D_1 D_2$.

*Proof:* We factor $P_1, P_2$ into irreducible factors (using unique factorisation of polynomials). By removing repeated factors, we may assume $P_1, P_2$ are square-free. We then write $P_1 = Q R_1$ and $P_2 = Q R_2$, where $Q$ is the greatest common divisor of $P_1, P_2$ and $R_1, R_2$ are coprime. Observe that the zero-dimensional component of $\{ P_1 = P_2 = 0 \} = \{ Q = 0 \} \cup \{ R_1 = R_2 = 0 \}$ is contained in $\{ R_1 = R_2 = 0 \}$, which is finite from the coprimality of $R_1, R_2$. The claim follows.
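The gcd-splitting step in this proof can be illustrated in the (simpler) univariate setting with a short pure-Python Euclidean algorithm over the rationals; the function names `poly_mod` and `poly_gcd` are just for this sketch.

```python
from fractions import Fraction

def poly_mod(a, b):
    # remainder of polynomial a modulo b (coefficient lists, highest degree first)
    a = [Fraction(c) for c in a]
    while len(a) >= len(b):
        q = a[0] / Fraction(b[0])
        for i in range(len(b)):
            a[i] -= q * Fraction(b[i])
        a.pop(0)            # the leading coefficient is now zero
    while a and a[0] == 0:  # strip any remaining leading zeroes
        a.pop(0)
    return a

def poly_gcd(a, b):
    # Euclidean algorithm; returns a monic gcd
    while b:
        a, b = b, poly_mod(a, b)
    return [c / Fraction(a[0]) for c in a]

# P1 = (x-1)(x-2) and P2 = (x-1)(x-3): the common factor Q = x - 1 carries
# the shared zeroes, while the coprime parts R1 = x - 2 and R2 = x - 3 have
# no common zero at all
P1 = [1, -3, 2]   # x^2 - 3x + 2
P2 = [1, -4, 3]   # x^2 - 4x + 3
print(poly_gcd(P1, P2))   # [1, -1], i.e. x - 1
```

In the bivariate setting of the proof, the role of the common factor $Q$ is played by a curve (the positive-dimensional part), while the coprime parts only meet in finitely many points.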

It is also not difficult to use Bezout’s theorem to handle the overdetermined case in the plane:

Corollary 3 (Bezout’s theorem, yet again) Let $P_1,\ldots,P_m \in k[x,y]$ be polynomials of degrees $D_1 \geq \ldots \geq D_m$ respectively, with $m \geq 2$. Then $V^{(0)}$ has cardinality at most $D_1 D_2$.

*Proof:* We may assume all the $P_i$ are square-free. Write $F$ for the greatest common divisor of $P_1,\ldots,P_m$, and factor $P_i = F G_i$, so that the $G_i$ have no common factor. A zero-dimensional component of $V$ cannot lie on the curve $\{ F = 0 \}$ (as it would then be contained in a positive-dimensional component of $V$), and must therefore lie in $W := \{ G_1 = \ldots = G_m = 0 \}$. For each irreducible factor $g$ of $G_1$, one can find some $G_j$ with $2 \leq j \leq m$ that is not divisible by $g$ (as otherwise $g$ would be a common factor of all the $G_i$); by Theorem 1, $\{ g = G_j = 0 \}$ is then finite with cardinality at most $\hbox{deg}(g) D_j \leq \hbox{deg}(g) D_2$. Since $W$ is contained in the union of the sets $\{ g = G_j = 0 \}$ as $g$ ranges over the irreducible factors of $G_1$, and the degrees $\hbox{deg}(g)$ sum to at most $D_1$, the claim follows.

Remark 1 Of course, in the overdetermined case one generically expects the solution set to be empty, but if there is enough degeneracy or numerical coincidence then non-zero solution sets can occur. In particular, by considering the case when $P_1, P_2$ intersect in exactly $D_1 D_2$ points and $P_3 = \ldots = P_m = P_2$, we see that the bound $D_1 D_2$ can be sharp in some cases. However, one can do a little better in this situation; by decomposing the $P_i$ into irreducible components, for instance, one can improve the upper bound of $D_1 D_2$ slightly. However, this improvement seems harder to obtain in higher dimensions (see below).

Bezout’s theorem also extends to higher dimensions. Indeed, we have

Theorem 4 (Higher-dimensional Bezout’s theorem) Let $P_1,\ldots,P_d \in k[x_1,\ldots,x_d]$ be polynomials of degrees $D_1,\ldots,D_d$ respectively. If the solution set $V$ is finite, then it has cardinality at most $D_1 \ldots D_d$.

This is a standard fact, and can for instance be proved from the more general and powerful machinery of intersection theory. A typical application of this theorem is to show that, given a degree $D$ polynomial $P: {\bf R}^d \rightarrow {\bf R}$ over the reals, the number of connected components of $\{ x \in {\bf R}^d: P(x) = 0 \}$ is $O(D)^d$. The main idea of the proof is to locate a critical point inside each connected component, and use Bezout’s theorem to count the number of zeroes of the polynomial map $\nabla P: {\bf R}^d \rightarrow {\bf R}^d$. (This doesn’t quite work directly because some components may be unbounded, and because the fibre of $\nabla P$ at the origin may contain positive-dimensional components, but one can use truncation and generic perturbation to deal with these issues; see my recent paper with Solymosi for further discussion.)

Bezout’s theorem can be extended to the overdetermined case as before:

Theorem 5 (Bezout’s inequality) Let $P_1,\ldots,P_m \in k[x_1,\ldots,x_d]$ be polynomials of degrees $D_1 \geq \ldots \geq D_m$ respectively, with $m \geq d$. Then $V^{(0)}$ has cardinality at most $D_1 \ldots D_d$.

Remark 2 Theorem 5 ostensibly only controls the zero-dimensional components of $V$, but by throwing in a few generic affine-linear forms to the set of polynomials (thus intersecting $V$ with a bunch of generic hyperplanes) we can also control the total degree of all the $i$-dimensional components of $V$ for any fixed $i$. (Again, by using intersection theory one can get a slightly more precise bound than this, but the proof of that bound is more complicated than the arguments given here.)

This time, though, it is a slightly non-trivial matter to deduce Theorem 5 from Theorem 4, due to the standard difficulty that the intersection of irreducible varieties need not be irreducible (which can be viewed in some ways as the source of many other related difficulties, such as the fact that not every algebraic variety is a complete intersection), and so one cannot evade all irreducibility issues merely by assuming that the original polynomials are irreducible. Theorem 5 first appeared explicitly in the work of Heintz.

As before, the most systematic way to establish Theorem 5 is via intersection theory. In this post, though, I would like to give a slightly more elementary argument (essentially due to Schmid), based on generically perturbing the polynomials $P_1,\ldots,P_m$ in the problem; this method is less powerful than the intersection-theoretic methods, which can be used for a wider range of algebraic geometry problems, but suffices for the narrow objective of proving Theorem 5. The argument requires some of the “soft” or “qualitative” theory of algebraic geometry (in particular, one needs to understand the semicontinuity properties of preimages of dominant maps), as well as basic linear algebra. As such, the proof is not completely elementary, but it uses only a small amount of algebraic machinery, and as such I found it easier to understand than the intersection theory arguments.

Theorem 5 is a statement about *arbitrary* polynomials $P_1,\ldots,P_m$. However, it turns out (in the determined case $m = d$, at least) that upper bounds on $|V^{(0)}|$ are *Zariski closed* properties, and so it will suffice to establish this claim for *generic* polynomials $P_1,\ldots,P_d$. On the other hand, it is possible to use duality to deduce such upper bounds on $|V^{(0)}|$ from a *Zariski open* condition, namely that a certain collection of polynomials are linearly independent. As such, to verify the generic case of this open condition, it suffices to establish this condition for a *single* family of polynomials, such as a family of monomials, in which case the condition can be verified by direct inspection. Thus we see an example of the somewhat strange strategy of establishing the general case from a specific one, using the generic case as an intermediate step.

Remark 3 There is an important caveat to note here, which is that these theorems only hold for algebraically closed fields, and in particular can fail over the reals ${\bf R}$. For instance, in ${\bf R}^2$, the polynomials $P_1(x,y) = P_2(x,y) := \prod_{i=1}^N ((x-i)^2 + y^2)$ have degrees $2N, 2N$ respectively, but their common zero locus $\{ (1,0), \ldots, (N,0) \}$ has cardinality $N$, despite the two polynomials sharing a (maximal) common factor. In some cases one can safely obtain incidence bounds in ${\bf R}$ by embedding ${\bf R}$ inside ${\bf C}$, but as the above example shows, one needs to be careful when doing so.

Jozsef Solymosi and I have just uploaded to the arXiv our paper “An incidence theorem in higher dimensions“, submitted to Discrete and Computational Geometry. In this paper we use the polynomial Ham Sandwich method of Guth and Katz (as discussed previously on this blog) to establish higher-dimensional versions of the Szemerédi-Trotter theorem, albeit at the cost of an epsilon loss in exponents.

Recall that the Szemerédi-Trotter theorem asserts that given any finite set of points $P$ and lines $L$ in the plane ${\bf R}^2$, the number of incidences $I(P,L) := \{ (p,\ell) \in P \times L: p \in \ell \}$ has the upper bound

$\displaystyle |I(P,L)| \ll |P|^{2/3} |L|^{2/3} + |P| + |L|.$

Apart from the constant factor, this bound is sharp. As discussed in this previous blog post, this theorem can be proven by the polynomial method, and the strategy can be rapidly summarised as follows. Select a parameter $D \geq 1$. By the polynomial Ham Sandwich theorem, one can divide ${\bf R}^2$ into $O(D^2)$ cell interiors, each with $O(|P|/D^2)$ points, and incident to $O(|L|/D)$ lines on the average, plus a boundary set which is an algebraic curve of degree $O(D)$. To handle the contribution of each cell interior, one uses a more elementary incidence bound (such as the bound $|I(P,L)| \ll |P| |L|^{1/2} + |L|$ coming from the fact that two points determine at most one line); to handle the contribution on the cell boundary, one uses algebraic geometry tools such as Bezout’s theorem. One then combines all the bounds and optimises in $D$ to obtain the result.
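As a quick numerical sanity check of the incidence bound, here is a pure-Python brute-force count on (essentially) the standard grid construction that shows the $|P|^{2/3}|L|^{2/3}$ term is necessary; the constant $4$ in the final comparison is arbitrary, since the theorem only promises *some* absolute constant.

```python
from itertools import product

# Points on a 5 x 25 grid, and lines y = m*x + c with small integer slopes;
# every line here passes through exactly 5 of the grid points.
points = [(x, y) for x in range(5) for y in range(25)]
lines = [(m, c) for m in range(5) for c in range(5)]

incidences = sum(1 for (x, y), (m, c) in product(points, lines) if y == m * x + c)

N, M = len(points), len(lines)
bound = N ** (2 / 3) * M ** (2 / 3) + N + M
print(incidences, bound)          # 125 incidences, against a bound of ~364
assert incidences <= 4 * bound    # consistent with Szemeredi-Trotter
```

Here $N = 125$, $M = 25$, and the $125$ incidences are within a constant factor of $N^{2/3} M^{2/3} \approx 214$, illustrating the sharpness of the main term.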

As a general rule, the contribution of the cell interiors becomes easier to handle as $D$ increases, while the contribution of the cell boundaries becomes easier as $D$ *decreases*. As such, the optimal value of $D$ is often an intermediate one (in the case of Szemerédi-Trotter, the choice $D \sim |P|^{2/3} / |L|^{1/3}$ is typical). Among other things, this requires some control of moderately high degree algebraic sets, though in this planar case $d=2$, one only needs to control algebraic curves in the plane, which are very well understood.

In higher dimensions, though, the complexity of the algebraic geometry required to control medium degree algebraic sets increases sharply; compare for instance the algebraic geometry of ruled surfaces appearing in the three-dimensional work of Guth and Katz as discussed here, compared with the algebraic geometry of curves in the two-dimensional Szemerédi-Trotter theorem discussed here.

However, Jozsef and I discovered that it is also possible to use the polynomial method with a *non-optimised* value of $D$, and in particular with a *bounded* value $D = O(1)$, which makes the algebraic geometry treatment of the boundary significantly easier. The drawback to this is that the cell interiors can no longer be adequately controlled by trivial incidence estimates. However, if instead one controls the cell interiors by an *induction hypothesis*, then it turns out that in many cases one can recover a good estimate. For instance, let us consider the two-dimensional Szemerédi-Trotter theorem in the most interesting regime, namely when the term $|P|^{2/3} |L|^{2/3}$ on the RHS is dominant. If we perform the cell decomposition with some small parameter $D = O(1)$, then we obtain $O(D^2)$ cells with $O(|P|/D^2)$ points and $O(|L|/D)$ lines on the average; applying the Szemerédi-Trotter theorem inductively to each of these cells, we end up with a total contribution of

$\displaystyle O(D^2) \times O( (|P|/D^2)^{2/3} (|L|/D)^{2/3} ) = O( |P|^{2/3} |L|^{2/3} )$

for the cell interiors, and so provided that one can also control incidences on the (low degree) cell boundary, we see that we have closed the induction (up to changes in the implied constants in the $O()$ notation).

Unfortunately, as is well known, the fact that the implied constants in the $O()$ notation degrade when we do this prevents this induction argument from being rigorous. However, it turns out this method does work nicely to give the weaker incidence bound

$\displaystyle |I(P,L)| \ll_\epsilon |P|^{2/3+\epsilon} |L|^{2/3} + |P| + |L|$

for any fixed $\epsilon > 0$; the point being that this extra epsilon of room in the exponents means that the induction hypothesis gains a factor of $D^{2\epsilon}$ or so over the desired conclusion, allowing one to close the induction with no loss of implied constants if all the parameters are chosen properly. While this bound is weaker than the full Szemerédi-Trotter theorem, the argument is simpler in the sense that one only needs to understand bounded degree algebraic sets, rather than medium degree algebraic sets. As such, this argument is easier to extend to higher dimensions.

Indeed, as a consequence of this strategy (and also a generic projection argument to cut down the ambient dimension, as used in this previous paper with Ellenberg and Oberlin, and an induction on dimension), we obtain in our paper a higher-dimensional incidence theorem:

Theorem 1 (Higher-dimensional Szemerédi-Trotter theorem) Let $d \geq 2k \geq 2$, and let $P$ and $L$ be finite sets of points and $k$-planes in ${\bf R}^d$, such that any two $k$-planes in $L$ intersect in at most one point. Then we have $|I(P,L)| \ll_{k,\epsilon} |P|^{2/3+\epsilon} |L|^{2/3} + |P| + |L|$ for any $\epsilon > 0$.

Something like the transversality hypothesis that any two $k$-planes in $L$ intersect in at most one point is necessary, as can be seen by considering the example in which the points in $P$ are all collinear, and the $k$-planes in $L$ all contain the common line. As one particular consequence of this result, we recover (except for an epsilon loss in exponents) the complex Szemerédi-Trotter theorem of Toth, whose proof was significantly more complicated; it also gives some matrix and quaternionic analogues, in particular giving new sum-product theorems in these rings. Note that the condition $d \geq 2k$ is natural, because when the ambient dimension is less than $2k$, it is not possible for two $k$-planes to intersect in just one point. Using the generic projection trick, one can easily reduce to the critical case $d = 2k$.

Actually, for inductive reasons, we prove a more general result than Theorem 1, in which the $k$-planes are replaced instead by $k$-dimensional real algebraic varieties. The transversality condition is now that whenever a point $p$ is incident to two such varieties $S, S'$, the tangent cones of $S$ and $S'$ at $p$ only intersect at $p$. (Also, for technical reasons, it is convenient to consider a partial subset $I$ of the incidences in $I(P,L)$, rather than the full set of incidences. Indeed, it seems most natural to consider the triplet

$\displaystyle (P, I, L)$

as a single diagram, rather than to consider just the sets $P$ and $L$.)

The reason for working in this greater level of generality is that it becomes much easier to use an induction hypothesis to deal with the cell boundary; one simply intersects each variety $S$ in $L$ with the cell boundary, which usually lowers the dimension of the variety by one. (There are some exceptional cases in which $S$ is completely trapped inside the boundary, but due to the transversality hypothesis, this cannot contribute many incidences if the ambient dimension is $2k$ (so that the cell boundary is only $2k-1$ dimensional).)

As one application of this more general incidence result, we almost extend a classic result of Spencer, Szemerédi, and Trotter asserting that $N$ points in the plane determine $O(N^{4/3})$ unit distances from ${\bf R}^2$ to ${\bf C}^2$, at the cost of degrading $O(N^{4/3})$ to $O(N^{4/3+\epsilon})$.

Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Universality of eigenvectors“, submitted to Random Matrices: Theory and Applications. This paper concerns an extension of our four moment theorem for eigenvalues. Roughly speaking, that four moment theorem asserts (under mild decay conditions on the coefficients of the random matrix) that the fine-scale structure of individual eigenvalues of a Wigner random matrix depends only on the first four moments of each of the entries.

In this paper, we extend this result from eigenvalues to eigen*vectors*, and specifically to the coefficients of, say, the $i^{th}$ eigenvector $u_i(M_n)$ of a Wigner random matrix $M_n$. Roughly speaking, the main result is that the distribution of these coefficients also only depends on the first four moments of each of the entries. In particular, as the distribution of the coefficients of eigenvectors of invariant ensembles such as GOE or GUE is known to be asymptotically gaussian real (in the GOE case) or gaussian complex (in the GUE case), the same asymptotic automatically holds for Wigner matrices whose coefficients match GOE or GUE to fourth order.

(A technical point here: strictly speaking, the eigenvectors are only determined up to a phase, even when the eigenvalues are simple. So, to phrase the question properly, one has to perform some sort of normalisation, for instance by working with the coefficients of the spectral projection operators instead of the eigenvectors, or rotating each eigenvector by a random phase, or by fixing the first component of each eigenvector to be positive real. This is a fairly minor technical issue here, though, and will not be discussed further.)
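One of the normalisations mentioned above (rotating so that the first coordinate is positive real) can be sketched in a few lines of pure Python; the helper name `normalise_phase` is hypothetical and not from the paper.

```python
import cmath

def normalise_phase(v):
    # rotate a complex vector by a unit phase so that its first
    # nonzero coordinate becomes positive real
    z = next(c for c in v if abs(c) > 1e-12)
    return [c / (z / abs(z)) for c in v]

v = [complex(0.6, 0.0), complex(0.0, 0.8)]   # a unit vector in C^2
w = [c * cmath.exp(0.7j) for c in v]         # the same vector times a phase

# both representatives agree after normalisation
print(normalise_phase(v))
print(normalise_phase(w))
```

Any two eigenvectors differing only by a phase are thus mapped to the same canonical representative, which is what one needs before speaking of "the" distribution of a coefficient.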

This theorem strengthens a four moment theorem for eigenvectors recently established by Knowles and Yin (by a somewhat different method), in that the hypotheses are weaker (no level repulsion assumption is required, and the matrix entries only need to obey a finite moment condition rather than an exponential decay condition), and the conclusion is slightly stronger (less regularity is needed on the test function, and one can handle the joint distribution of polynomially many coefficients, rather than boundedly many coefficients). On the other hand, the Knowles-Yin paper can also handle generalised Wigner ensembles in which the variances of the entries are allowed to fluctuate somewhat.

The method used here is a variation of that in our original paper (incorporating the subsequent improvements to extend the four moment theorem from the bulk to the edge, and to replace exponential decay by a finite moment condition). That method was ultimately based on the observation that if one swapped a single entry (and its adjoint) in a Wigner random matrix, then an individual eigenvalue would not fluctuate much as a consequence (as long as one had already truncated away the event of an unexpectedly small eigenvalue gap). The same analysis shows that the spectral projection matrices obey the same stability property.

As an application of the eigenvector four moment theorem, we establish a four moment theorem for the coefficients of resolvent matrices $(M_n - z)^{-1}$, even when $z$ is on the real axis (though in that case we need to make a level repulsion hypothesis, which has been already verified in many important special cases and is likely to be true in general). This improves on an earlier four moment theorem for resolvents of Erdos, Yau, and Yin, which required $z$ to stay some distance away from the real axis (specifically, that $\hbox{Im}(z) \geq n^{-1+c}$ for some small $c > 0$).

As we are all now very much aware, tsunamis are water waves that start in the deep ocean, usually because of an underwater earthquake (though tsunamis can also be caused by underwater landslides or volcanoes), and then propagate towards shore. Initially, tsunamis have relatively small amplitude (a metre or so is typical), which would seem to render them as harmless as wind waves. And indeed, tsunamis often pass by ships in deep ocean without anyone on board even noticing.

However, being generated by an event as large as an earthquake, the *wavelength* of the tsunami is huge – 200 kilometres is typical (in contrast with wind waves, whose wavelengths are typically closer to 100 metres). In particular, the wavelength of the tsunami is far greater than the depth of the ocean (which is typically 2-3 kilometres). As such, even in the deep ocean, the dynamics of tsunamis are essentially governed by the shallow water equations. One consequence of these equations is that the speed $v$ of propagation of a tsunami can be approximated by the formula

$\displaystyle v \approx \sqrt{g d} \ \ \ \ \ (1)$

where $d$ is the depth of the ocean, and $g$ is the force of gravity. As such, tsunamis in deep water move *very* fast – speeds such as 500 kilometres per hour (300 miles per hour) are quite typical; enough to travel from Japan to the US, for instance, in less than a day. Ultimately, this is due to the incompressibility of water (and conservation of mass); the massive net pressure (or more precisely, spatial variations in this pressure) of a very broad and deep wave of water forces the profile of the wave to move horizontally at vast speeds. (Note though that this is the phase velocity of the tsunami wave, and not the velocity of the water molecules themselves, which are far slower.)
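Plugging representative numbers into (1) gives the quoted deep-water speeds; here is the two-line check in Python.

```python
import math

g = 9.8       # gravitational acceleration, m/s^2
d = 2000.0    # a typical deep-ocean depth, m

v = math.sqrt(g * d)   # shallow-water phase speed from (1), in m/s
print(v, v * 3.6)      # 140 m/s, i.e. about 500 km/h
```

At a depth of two kilometres this gives exactly $140$ metres per second, or just over $500$ kilometres per hour.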

As the tsunami approaches shore, the depth $d$ of course decreases, causing the tsunami to slow down, at a rate proportional to the square root of the depth, as per (1). Unfortunately, wave shoaling then forces the amplitude $A$ to increase at an inverse rate governed by *Green’s law*,

$\displaystyle A \propto \frac{1}{d^{1/4}} \ \ \ \ \ (2)$

at least until the amplitude becomes comparable to the water depth (at which point the assumptions that underlie the above approximate results break down; also, in two (horizontal) spatial dimensions there will be some decay of amplitude as the tsunami spreads outwards). If one starts with a tsunami whose initial amplitude was $A_0$ at depth $d_0$ and computes the point at which the amplitude and depth become comparable using the proportionality relationship (2), some high school algebra then reveals that at this point, the amplitude of the tsunami (and the depth of the water) is about $A_0^{4/5} d_0^{1/5}$. Thus, for instance, a tsunami with initial amplitude of one metre at a depth of 2 kilometres can end up with a final amplitude of about 5 metres near shore, while still traveling at about ten metres per second (35 kilometres per hour, or 22 miles per hour), and we have all now seen the impact that can have when it hits shore.
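The "high school algebra" referred to above can be replayed numerically: setting $A(d) = A_0 (d_0/d)^{1/4}$ equal to $d$ and solving gives $d = A_0^{4/5} d_0^{1/5}$, which the following snippet evaluates for the example in the text.

```python
A0 = 1.0      # initial amplitude, metres
d0 = 2000.0   # initial depth, metres

# Green's law (2): A(d) = A0 * (d0 / d) ** 0.25; setting A(d) = d and
# solving yields d = A0**(4/5) * d0**(1/5)
d_final = A0 ** 0.8 * d0 ** 0.2
print(d_final)   # about 4.6 metres
```

For $A_0 = 1$ metre and $d_0 = 2$ kilometres this gives roughly $4.6$ metres, consistent with the figure of "about 5 metres" quoted above.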

While tsunamis are far too massive of an event to be able to control (at least in the deep ocean), we can at least model them mathematically, allowing one to predict their impact at various places along the coast with high accuracy. (For instance, here is a video of the NOAA’s model of the March 11 tsunami, which has matched up very well with subsequent measurements.) The full equations and numerical methods used to perform such models are somewhat sophisticated, but by making a large number of simplifying assumptions, it is relatively easy to come up with a rough model that already predicts the basic features of tsunami propagation, such as the velocity formula (1) and the amplitude proportionality law (2). I give this (standard) derivation below the fold. The argument will largely be heuristic in nature; there are very interesting analytic issues in actually justifying many of the steps below rigorously, but I will not discuss these matters here.

A few days ago, I received the sad news that Yahya Ould Hamidoune had recently died. Hamidoune worked in additive combinatorics, and had recently solved a question on noncommutative Freiman-Kneser theorems posed by myself on this blog last year. Namely, Hamidoune showed

Theorem 1 (Noncommutative Freiman-Kneser theorem for small doubling) Let $0 < \epsilon \leq 1$, and let $S$ be a finite non-empty subset of a multiplicative group $G$ such that $|A \cdot S| \leq (2-\epsilon) |S|$ for some finite set $A$ of cardinality at least $|S|$, where $A \cdot S := \{ as: a \in A, s \in S \}$ is the product set of $A$ and $S$. Then there exists a finite subgroup $H$ of $G$ with cardinality $|H| \leq C(\epsilon) |S|$, such that $S$ is covered by at most $C'(\epsilon)$ right-cosets of $H$, where $C(\epsilon), C'(\epsilon)$ depend only on $\epsilon$.

One can of course specialise here to the case $A = S$, and view this theorem as a classification of those sets $S$ of doubling constant at most $2-\epsilon$.

In fact Hamidoune’s argument, which is completely elementary, gives very nice explicit values for the constants $C(\epsilon)$ and $C'(\epsilon)$, which are essentially optimal up to constant factors (as can be seen by considering an arithmetic progression in an additive group). This result was also independently established (in the $A = S$ case) by Tom Sanders (unpublished) by a more Fourier-analytic method, in particular drawing on Sanders’ deep results on the Wiener algebra $A(G)$ on arbitrary non-commutative groups $G$.

This type of result had previously been known when $2-\epsilon$ was less than the golden ratio $\frac{1+\sqrt{5}}{2} = 1.618\ldots$, as first observed by Freiman; see my previous blog post for more discussion.

Theorem 1 is not, strictly speaking, contained in Hamidoune’s paper, but can be extracted from his arguments, which share some similarity with the recent simple proof of the Ruzsa-Plünnecke inequality by Petridis (as discussed by Tim Gowers here), and this is what I would like to do below the fold. I also include (with permission) Sanders’ unpublished argument, which proceeds instead by Fourier-analytic methods.

For sake of concreteness we will work here over the complex numbers ${\bf C}$, although most of this discussion is valid for arbitrary algebraically closed fields (but some care needs to be taken in small characteristic, as always, particularly when defining the orthogonal and symplectic groups). Then one has the following four infinite families of classical Lie groups for $n \geq 1$:

- (Type $A_n$) The special linear group $SL_{n+1}({\bf C})$ of volume-preserving linear maps on ${\bf C}^{n+1}$.
- (Type $B_n$) The special orthogonal group $SO_{2n+1}({\bf C})$ of (orientation preserving) linear maps on ${\bf C}^{2n+1}$ preserving a non-degenerate symmetric form, such as the standard symmetric form
$\displaystyle \langle x, y \rangle := x_1 y_1 + \ldots + x_{2n+1} y_{2n+1}$
(this is the complexification of the more familiar *real special orthogonal group* $SO_{2n+1}({\bf R})$).
- (Type $C_n$) The symplectic group $Sp_{2n}({\bf C})$ of linear maps on ${\bf C}^{2n}$ preserving a non-degenerate antisymmetric form, such as the standard symplectic form
$\displaystyle \omega(x,y) := \sum_{j=1}^n x_j y_{n+j} - x_{n+j} y_j.$
- (Type $D_n$) The special orthogonal group $SO_{2n}({\bf C})$ of (orientation preserving) linear maps on ${\bf C}^{2n}$ preserving a non-degenerate symmetric form (such as the standard symmetric form).

For this post I will abuse notation somewhat and identify $A_n$ with $SL_{n+1}({\bf C})$, $B_n$ with $SO_{2n+1}({\bf C})$, etc., although it is more accurate to say that $SL_{n+1}({\bf C})$ is a Lie group of *type* $A_n$, etc., as there are other forms of the Lie algebras associated to $A_n, B_n, C_n, D_n$ over various fields. Over a non-algebraically closed field, such as ${\bf R}$, the list of Lie groups associated with a given type can in fact get quite complicated; see for instance this list. One can also view the double covers $Spin_{2n+1}({\bf C})$ and $Spin_{2n}({\bf C})$ of $SO_{2n+1}({\bf C})$, $SO_{2n}({\bf C})$ (i.e. the spin groups) as being of type $B_n, D_n$ respectively; however, I find the spin groups less intuitive to work with than the orthogonal groups and will therefore focus more on the orthogonal model.

The reason for this subscripting is that the subscript indicates the rank of each of the classical groups, i.e. the dimension of any maximal connected abelian subgroup of simultaneously diagonalisable elements (also known as a Cartan subgroup). For instance:

- (Type $A_{n-1}$) In $SL_n({\bf C})$, one Cartan subgroup is the diagonal matrices in $SL_n({\bf C})$, which has dimension $n-1$.
- (Type $B_n$) In $SO_{2n+1}({\bf C})$, all Cartan subgroups are isomorphic to $SO_2({\bf C})^n$, which has dimension $n$.
- (Type $C_n$) In $Sp_{2n}({\bf C})$, all Cartan subgroups are isomorphic to $SO_2({\bf C})^n$, which has dimension $n$.
- (Type $D_n$) In $SO_{2n}({\bf C})$, all Cartan subgroups are isomorphic to $SO_2({\bf C})^n$, which has dimension $n$.

(This same convention also underlies the notation for the exceptional simple Lie groups $G_2, F_4, E_6, E_7, E_8$, which we will not discuss further here.)

With two exceptions, the classical Lie groups are all simple, i.e. their Lie algebras are non-abelian and not expressible as the direct sum of smaller Lie algebras. The two exceptions are $SO_2({\bf C})$, which is abelian (isomorphic to ${\bf C}^\times$, in fact) and thus not considered simple, and $SO_4({\bf C})$, which turns out to "essentially" split as $SL_2({\bf C}) \times SL_2({\bf C})$, in the sense that the former group is double covered by the latter (and in particular, there is an isogeny from the latter to the former, and the Lie algebras are isomorphic).

The adjoint action of a Cartan subgroup of a Lie group $G$ on the Lie algebra ${\mathfrak g}$ splits that algebra into weight spaces; in the case of a simple Lie group, the associated weights are organised by a Dynkin diagram. The Dynkin diagrams for $A_n, B_n, C_n, D_n$ are of course well known, and can be found for instance here.

For small $n$, some of these Dynkin diagrams are isomorphic; this is a classic instance of the tongue-in-cheek strong law of small numbers, though in this case "strong law of small diagrams" would be more appropriate. These accidental isomorphisms then give rise to the exceptional isomorphisms between Lie algebras (and thence to *exceptional isogenies* between Lie groups). Excluding those isomorphisms involving the exceptional Lie algebras $E_n$ for $n \leq 5$, these isomorphisms are

- $A_1 = B_1 = C_1$;
- $B_2 = C_2$;
- $D_2 = A_1 \times A_1$;
- $D_3 = A_3$.

There is also a pair of exceptional isomorphisms from (the $D_4$ form of) $Spin_8({\bf C})$ to itself, a phenomenon known as triality.

These isomorphisms are most easily seen via algebraic and combinatorial tools, such as an inspection of the Dynkin diagrams (see e.g. this Wikipedia image). However, the isomorphisms listed above can also be seen by more "geometric" means, using the basic representations of the classical Lie groups on their natural vector spaces (${\bf C}^n$, ${\bf C}^{2n+1}$, ${\bf C}^{2n}$, and ${\bf C}^{2n}$ for $A_{n-1}, B_n, C_n, D_n$ respectively) and combinations thereof (such as exterior powers). (However, I don't know of a simple way to interpret triality geometrically; the descriptions I have seen tend to involve some algebraic manipulation of the octonions or of a Clifford algebra, in a manner that tends to obscure the geometry somewhat.) These isomorphisms are quite standard (I found them, for instance, in this book of Procesi), but it was instructive for me to work through them (as I have only recently needed to start studying algebraic group theory in earnest), and I am recording them here in case anyone else is interested.

Let $G = (G,+)$ be an abelian countable discrete group. A measure-preserving $G$-system (or *$G$-system* for short) is a probability space $(X, {\mathcal X}, \mu)$, equipped with a measure-preserving action $(T_g)_{g \in G}$ of the group $G$, thus

$$\mu(T_g(E)) = \mu(E)$$

for all $g \in G$ and $E \in {\mathcal X}$, and

$$T_g T_h = T_{g+h}$$

for all $g, h \in G$, with $T_0$ equal to the identity map. Classically, ergodic theory has focused on the cyclic case $G = {\bf Z}$ (in which the $T_n$ are iterates of a single map $T = T_1$, with elements of $G$ being interpreted as a time parameter), but one can certainly consider actions of other groups also (including continuous or non-abelian groups).

A $G$-system is said to be *strongly $2$-mixing*, or strongly mixing for short, if one has

$$\lim_{g \rightarrow \infty} \mu(A \cap T_g B) = \mu(A) \mu(B)$$

for all $A, B \in {\mathcal X}$, where the convergence is with respect to the one-point compactification of $G$ (thus, for every $\epsilon > 0$, there exists a compact (hence finite) subset $K$ of $G$ such that $|\mu(A \cap T_g B) - \mu(A) \mu(B)| \leq \epsilon$ for all $g \not \in K$).
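As a toy numerical illustration (mine, not from the post): for the Bernoulli shift on fair coin flips, cylinder sets on distinct coordinates are exactly independent, so $\mu(A \cap T_g B) = \mu(A)\mu(B)$ already for every nonzero shift $g$. The sets $A = B = \{x : x_0 = 1\}$ below are my illustrative choices.

```python
import random

# Monte Carlo estimate of mu(A ∩ T_g B) for the Bernoulli shift on {0,1}^Z,
# with A = B = {x : x_0 = 1}, so mu(A) = mu(B) = 1/2.
random.seed(1)
N = 200_000
g = 7                       # any nonzero shift; coordinates 0 and g are independent
hits_A = hits_AB = 0
for _ in range(N):
    x0 = random.getrandbits(1)   # coordinate 0 of a random sample point
    xg = random.getrandbits(1)   # coordinate g (independent fair bit)
    hits_A += x0
    hits_AB += x0 & xg

assert abs(hits_A / N - 0.5) < 0.01     # mu(A) = 1/2
est = hits_AB / N
assert abs(est - 0.25) < 0.01           # mu(A ∩ T_g B) = mu(A) mu(B) = 1/4
```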

Similarly, we say that a $G$-system is *strongly $3$-mixing* if one has

$$\lim_{g, h, h-g \rightarrow \infty} \mu(A \cap T_g B \cap T_h C) = \mu(A) \mu(B) \mu(C)$$

for all $A, B, C \in {\mathcal X}$, thus for every $\epsilon > 0$, there exists a finite subset $K$ of $G$ such that

$$|\mu(A \cap T_g B \cap T_h C) - \mu(A) \mu(B) \mu(C)| \leq \epsilon$$

whenever $g, h, h-g$ all lie outside $K$.

It is obvious that a strongly $3$-mixing system is necessarily strongly $2$-mixing. In the case of ${\bf Z}$-systems, it has been an open problem for some time, due to Rohlin, whether the converse is true:

Problem 1 (Rohlin's problem) Is every strongly mixing ${\bf Z}$-system necessarily strongly $3$-mixing?

This is a surprisingly difficult problem. In the positive direction, a routine application of the Cauchy-Schwarz inequality (via van der Corput's inequality) shows that every strongly mixing system is *weakly $3$-mixing*, which roughly speaking means that $\mu(A \cap T_g B \cap T_h C)$ converges to $\mu(A) \mu(B) \mu(C)$ for *most* $g, h$. Indeed, every weakly mixing system is in fact weakly mixing of all orders; see for instance this blog post of Carlos Matheus, or these lecture notes of mine. So the problem is to exclude the possibility of correlation between $A$, $T_g B$, and $T_h C$ for a small but non-trivial number of pairs $(g,h)$.

It is also known that the answer to Rohlin’s problem is affirmative for rank one transformations (a result of Kalikow) and for shifts with purely singular continuous spectrum (a result of Host; note that strongly mixing systems cannot have any non-trivial point spectrum). Indeed, any counterexample to the problem, if it exists, is likely to be highly pathological.

In the other direction, Rohlin's problem is known to have a negative answer for ${\bf Z}^2$-systems, by a well-known counterexample of Ledrappier which can be described as follows. One can view a ${\bf Z}^2$-system as being essentially equivalent to a stationary process $(x_{n,m})_{(n,m) \in {\bf Z}^2}$ of random variables in some range space $\Omega$ indexed by ${\bf Z}^2$, with $X$ being $\Omega^{{\bf Z}^2}$ with the obvious shift map

$$T_{(g,h)} \left( (x_{n,m})_{(n,m) \in {\bf Z}^2} \right) := (x_{n-g,m-h})_{(n,m) \in {\bf Z}^2}.$$

In Ledrappier's example, the $x_{n,m}$ take values in the finite field ${\bf F}_2$ of two elements, and are selected uniformly at random subject to the "Pascal's triangle" linear constraints

$$x_{n,m} + x_{n+1,m} + x_{n,m+1} = 0.$$

A routine application of the Kolmogorov extension theorem allows one to build such a process. The point is that due to the properties of Pascal's triangle modulo $2$ (known as Sierpinski's triangle), one has

$$x_{n,m} + x_{n+2^k,m} + x_{n,m+2^k} = 0$$

for all powers of two $2^k$.

This is enough to destroy strong $3$-mixing, because it shows a strong correlation between $x_{n,m}$, $x_{n+2^k,m}$, and $x_{n,m+2^k}$ for arbitrarily large $k$ and randomly chosen $(n,m)$. On the other hand, one can still show that $x_{n,m}$ and $x_{n',m'}$ are asymptotically uncorrelated as $(n,m)-(n',m')$ goes to infinity, giving strong $2$-mixing. Unfortunately, there are significant obstructions to converting Ledrappier's example from a ${\bf Z}^2$-system to a ${\bf Z}$-system, as pointed out by de la Rue.
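The Pascal/Sierpinski fact driving this can be checked directly (my own check, not from the post): iterating the three-term constraint $2^k$ times produces binomial coefficients $\binom{2^k}{j}$ modulo $2$, and these all vanish for $0 < j < 2^k$, so only the three "corner" variables survive.

```python
from math import comb

# C(2^k, j) is even for all 0 < j < 2^k (a special case of Lucas' theorem),
# while the two corner coefficients C(2^k, 0) = C(2^k, 2^k) = 1 are odd.
for k in range(1, 9):
    n = 2 ** k
    assert all(comb(n, j) % 2 == 0 for j in range(1, n))
    assert comb(n, 0) % 2 == 1 and comb(n, n) % 2 == 1
```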

In this post, I would like to record a "finite field" variant of Ledrappier's construction, in which ${\bf Z}$ is replaced by the function field ring ${\bf F}_3[t]$, which is a "dyadic" (or more precisely, "triadic") model for the integers (cf. this earlier blog post of mine). In other words:

Theorem 2 There exists an ${\bf F}_3[t]$-system that is strongly $2$-mixing but not strongly $3$-mixing.

The idea is much the same as that of Ledrappier; one builds a stationary ${\bf F}_3[t]$-process $(x_f)_{f \in {\bf F}_3[t]}$ in which the $x_f \in {\bf F}_3$ are chosen uniformly at random subject to the constraints

$$x_f + x_{f + t^n} + x_{f + 2t^n} = 0 \ \ \ \ \ (1)$$

for all $f \in {\bf F}_3[t]$ and all $n \geq 0$.

Again, this system is manifestly not strongly $3$-mixing, but can be shown to be strongly $2$-mixing; I give details below the fold.

As I discussed in this previous post, in many cases the dyadic model serves as a good guide for the non-dyadic model. However, in this case there is a curious rigidity phenomenon that seems to prevent Ledrappier-type examples from being transferable to the one-dimensional non-dyadic setting; once one restores the Archimedean nature of the underlying group, the constraints (1) not only reinforce each other strongly, but also force so much linearity on the system that one loses the strong mixing property.

In a previous blog post, I discussed the recent result of Guth and Katz obtaining a near-optimal bound on the Erdos distance problem. One of the tools used in the proof (building upon the earlier work of Elekes and Sharir) was the observation that the incidence geometry of the Euclidean group $SE(2)$ of rigid motions of the plane ${\bf R}^2$ was almost identical to that of lines in the Euclidean space ${\bf R}^3$:

Proposition 1 One can identify a (Zariski-)dense portion of $SE(2)$ with ${\bf R}^3$, in such a way that for any two points $A, B$ in the plane ${\bf R}^2$, the set $\ell_{A \rightarrow B}$ of rigid motions mapping $A$ to $B$ forms a line in ${\bf R}^3$.

*Proof:* A rigid motion is either a translation or a rotation, with the latter forming a Zariski-dense subset of $SE(2)$. Identify a rotation in $SE(2)$ by an angle $\theta$ with $0 < \theta < 2\pi$ around a point $P = (x,y)$ with the element $(x, y, \cot(\theta/2))$ in ${\bf R}^3$. (Note that such rotations also form a Zariski-dense subset of ${\bf R}^3$.) Elementary trigonometry then reveals that if such a rotation maps $A$ to $B$, then $P$ lies on the perpendicular bisector of $AB$, and $\cot(\theta/2)$ depends in a linear fashion on $P$ (for fixed $A, B$). The claim follows.

As seen from the proof, this proposition is an easy (though *ad hoc*) application of elementary trigonometry, but it was still puzzling to me why such a simple parameterisation of the incidence structure of $SE(2)$ was possible. Certainly it was clear from general algebraic geometry considerations that *some* bounded-degree algebraic description was available, but why would the sets $\ell_{A \rightarrow B}$ be expressible as lines and not as, say, quadratic or cubic curves?

In this post I would like to record some observations arising from discussions with Jordan Ellenberg, Jozsef Solymosi, and Josh Zahl which give a more conceptual (but less elementary) derivation of the above proposition that avoids the use of *ad hoc* coordinate transformations such as $(x,y,\theta) \mapsto (x, y, \cot(\theta/2))$. The starting point is to view the Euclidean plane ${\bf R}^2$ as the scaling limit of the sphere $S^2$ (a fact which is familiar to all of us through the geometry of the Earth), which makes the Euclidean group $SE(2)$ a scaling limit of the rotation group $SO(3)$. The latter can then be lifted to a double cover, namely the spin group $Spin(3)$. This group has a natural interpretation as the unit quaternions, which is isometric to the unit sphere $S^3$. The analogue of the lines $\ell_{A \rightarrow B}$ in this setting become great circles on this sphere; applying a projective transformation, one can map $S^3$ to ${\bf R}^3$ (or more precisely to the projective space ${\bf RP}^3$), at which point the great circles become lines. This gives a proof of Proposition 1.

Details of the correspondence are provided below the fold. One by-product of this analysis, incidentally, is the observation that the Guth-Katz bound for the Erdos distance problem in the plane ${\bf R}^2$ immediately extends with almost no modification to the sphere $S^2$ as well (i.e. any $N$ points in $S^2$ determine $\gg N/\log N$ distances), as well as to the hyperbolic plane ${\bf H}^2$.
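For a concrete feel for the distance problem (a brute-force experiment of my own, not from the post): Erdos' example of the $\sqrt{N} \times \sqrt{N}$ grid shows one cannot determine more than about $N/\sqrt{\log N}$ distinct distances, while the Guth-Katz theorem guarantees roughly $N/\log N$ from below. Even a small grid already exhibits far fewer distinct distances than the trivial bound of $\binom{N}{2}$ pairs.

```python
def distinct_distances(points):
    """Count distinct distances among the given planar points (exact,
    via squared distances over the integers)."""
    dists = set()
    for i, p in enumerate(points):
        for q in points[i + 1:]:
            dists.add((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
    return len(dists)

grid = [(i, j) for i in range(10) for j in range(10)]  # N = 100 points
d = distinct_distances(grid)
# Coincidences like 5^2 = 3^2 + 4^2 collapse many of the 4950 pairs.
assert 30 < d < 60
```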
