I have blogged a number of times in the past about the relationship between finitary (or “hard”, or “quantitative”) analysis, and infinitary (or “soft”, or “qualitative”) analysis. One way to connect the two types of analysis is via compactness arguments (and more specifically, contradiction and compactness arguments); such arguments can convert qualitative properties (such as continuity) into quantitative properties (such as boundedness), basically because of the fundamental fact that continuous functions on a compact space are bounded (or the closely related fact that sequentially continuous functions on a sequentially compact space are bounded).
A key stage in any such compactness argument is the following: one has a sequence $X_n$ of “quantitative” or “finitary” objects or spaces, and one has to somehow end up with a “qualitative” or “infinitary” limit object $X$ or limit space. One common way to achieve this is to embed everything inside some universal space and then use some weak compactness property of that space, such as the Banach-Alaoglu theorem (or its sequential counterpart). This is for instance the idea behind the Furstenberg correspondence principle relating ergodic theory to combinatorics; see for instance this post of mine on this topic.
However, there is a slightly different approach, which I will call ultralimit analysis, which proceeds via the machinery of ultrafilters and ultraproducts; typically, the limit objects one constructs are now the ultraproducts (or ultralimits) of the original objects $X_n$. There are two main facts that make ultralimit analysis powerful. The first is that one can take ultralimits of arbitrary sequences of objects, as opposed to more traditional tools such as metric completions, which only allow one to take limits of Cauchy sequences of objects. The second fact is Łoś's theorem, which tells us that the ultraproduct $X = \prod_{n \to \alpha} X_n$ is an elementary limit of the $X_n$ (i.e. every sentence in first-order logic which is true for the $X_n$ for an $\alpha$-large set of $n$ is true for $X$). This existence of elementary limits is a manifestation of the compactness theorem in logic; see this earlier blog post for more discussion. So we see that compactness methods and ultrafilter methods are closely intertwined. (See also my earlier class notes for a related connection between ultrafilters and compactness.)
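For readers unfamiliar with the construction, here is the definition in brief (a standard sketch, with $\alpha$ denoting a fixed non-principal ultrafilter on the natural numbers; the notation is a common one rather than anything specific to this post):

```latex
% Ultraproduct of a sequence of objects X_n along a non-principal
% ultrafilter \alpha: sequences are identified when they agree \alpha-often.
\prod_{n \to \alpha} X_n \;:=\; \Bigl(\prod_{n} X_n\Bigr) \Big/ \sim,
\qquad (x_n) \sim (y_n) \;\iff\; \{\, n : x_n = y_n \,\} \in \alpha.
```

In this notation, Łoś's theorem asserts that $\prod_{n \to \alpha} X_n \models \phi$ if and only if $\{ n : X_n \models \phi \} \in \alpha$, for every first-order sentence $\phi$.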
Ultralimit analysis is very closely related to nonstandard analysis. I already discussed some aspects of this relationship in an earlier post, and will expand upon it at the bottom of this post. Roughly speaking, the relationship between ultralimit analysis and nonstandard analysis is analogous to the relationship between measure theory and probability theory.
To illustrate how ultralimit analysis is actually used in practice, I will show later in this post how to take a qualitative infinitary theory – in this case, basic algebraic geometry – and apply ultralimit analysis to deduce a quantitative version of this theory, in which the complexity of the various algebraic sets and varieties that appear as outputs is controlled uniformly by the complexity of the inputs. The point of this exercise is to show how ultralimit analysis allows for a relatively painless conversion back and forth between the quantitative and qualitative worlds, though in some cases the quantitative translation of a qualitative result (or vice versa) may be somewhat unexpected. In an upcoming paper of myself, Ben Green, and Emmanuel Breuillard (announced in the previous blog post), we will rely on ultralimit analysis to reduce the messiness of various quantitative arguments by replacing them with a qualitative setting in which the theory becomes significantly cleaner.
For sake of completeness, I also redo some earlier instances of the correspondence principle via ultralimit analysis, namely the deduction of the quantitative Gromov theorem from the qualitative one, and of Szemerédi’s theorem from the Furstenberg recurrence theorem, to illustrate how close the two techniques are to each other.
Emmanuel Breuillard, Ben Green, and I have just uploaded to the arXiv our announcement “Linear approximate groups“, submitted to Electronic Research Announcements.
The main result is a step towards the classification of $K$-approximate groups, in the specific setting of simple and semisimple Lie groups (with some partial results for more general Lie groups). For $K \geq 1$, define a $K$-approximate group to be a finite subset $A$ of a group $G$ which is a symmetric neighbourhood of the origin (thus $\mathrm{id} \in A$ and $A^{-1}$ is equal to $A$), and such that the product set $A \cdot A$ is covered by $K$ left-translates (or equivalently, $K$ right-translates) of $A$. For $K = 1$, this is the same concept as a finite subgroup of $G$, but for larger values of $K$, one also gets some interesting objects which are close to, but not exactly, groups, such as geometric progressions $\{g^n : |n| \leq N\}$ for some $g \in G$ and $N \geq 1$.
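As a concrete illustration of this definition, here is a small Python sketch that tests the $K$-approximate group axioms by brute force. The greedy covering search over a restricted pool of candidate translates is a simplification of my own (so a `False` answer is not conclusive), but a `True` answer certifies the axioms.

```python
def is_K_approximate_group(A, op, inv, identity, K, candidate_shifts):
    """Test the K-approximate group axioms for a finite subset A of a group
    given by `op`, `inv`, `identity`.  The K covering translates are chosen
    greedily from the finite pool `candidate_shifts` (a heuristic: a greedy
    search over a restricted pool can miss covers that do exist)."""
    A = set(A)
    # A must be a symmetric neighbourhood of the identity
    if identity not in A or any(inv(a) not in A for a in A):
        return False
    uncovered = {op(a, b) for a in A for b in A}   # the product set A.A
    for _ in range(K):                             # pick K left-translates x.A
        if not uncovered:
            break
        x = max(candidate_shifts,
                key=lambda s: len(uncovered & {op(s, a) for a in A}))
        uncovered -= {op(x, a) for a in A}
    return not uncovered

# The progression {n*g : |n| <= N} in the integers (written additively)
# is a 2-approximate group:
g, N = 3, 10
A = [n * g for n in range(-N, N + 1)]
print(is_K_approximate_group(A, lambda x, y: x + y, lambda x: -x, 0,
                             K=2, candidate_shifts=[-N * g, N * g]))  # True
```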
The expectation is that $K$-approximate groups are $K^{O(1)}$-controlled by “structured” objects, such as actual groups and progressions, though the precise formulation of this has not yet been finalised. (We say that one finite set $B$ $K$-controls another $A$ if $B$ is at most $K$ times larger than $A$ in cardinality, and $A$ can be covered by at most $K$ left translates or right translates of $B$.) The task of stating and proving this statement is the noncommutative Freiman theorem problem, discussed in these earlier blog posts.
While this problem remains unsolved for general groups, significant progress has been made in special groups, notably abelian, nilpotent, and solvable groups. Furthermore, the work of Chang (over $\mathbb{C}$) and Helfgott (over $\mathbb{F}_p$) has established the important special cases of the special linear groups $SL_2$ and $SL_3$:
Theorem 1 (Helfgott's theorem) Let $d = 2, 3$, and let $G$ be either $SL_d(\mathbb{C})$ or $SL_d(\mathbb{F}_p)$ for some prime $p$. Let $A$ be a $K$-approximate subgroup of $G$.
- If $A$ generates the entire group $G$ (which is only possible in the finite case $G = SL_d(\mathbb{F}_p)$), then $A$ is either controlled by the trivial group or the whole group.
- If $G = SL_d(\mathbb{C})$, then $A$ is $K^{O(1)}$-controlled by a solvable $K^{O(1)}$-approximate subgroup $B$ of $G$, or by $G$ itself. If $d = 2$, the latter possibility cannot occur, and $B$ must be abelian.
Our main result is an extension of Helfgott's theorem to $SL_d$ for general $d$. In fact, we obtain an analogous result for any simple (or almost simple) Chevalley group over an arbitrary finite field (not necessarily of prime order), or over $\mathbb{C}$. (Standard embedding arguments then allow us to in fact handle arbitrary fields.) The results for simple groups can also be extended to (almost) semisimple Lie groups by an approximate version of Goursat's lemma. Given that general Lie groups are known to split as extensions of (almost) semisimple Lie groups by solvable Lie groups, and Freiman-type theorems are known for solvable groups also, this in principle gives a Freiman-type theorem for arbitrary Lie groups; we have already established this in the characteristic zero case, but there are some technical issues in the finite characteristic case that we are currently in the process of resolving.
We remark that a qualitative version of this result (with the polynomial bounds $K^{O(1)}$ replaced by an ineffective bound $O_K(1)$) was also recently obtained by Hrushovski.
Our arguments are based in part on Helfgott's arguments; in particular, maximal tori play a major role in our arguments for much the same reason they do in Helfgott's. Our main new ingredient is a surprisingly simple argument, which we call the pivot argument, which is an analogue of a corresponding argument of Konyagin and Bourgain-Glibichuk-Konyagin that was used to prove a sum-product estimate. Indeed, it seems that Helfgott-type results in these groups can be viewed as a manifestation of a product-conjugation phenomenon analogous to the sum-product phenomenon. Namely, the sum-product phenomenon asserts that it is difficult for a subset of a field to be simultaneously approximately closed under sums and products, without being close to an actual field; similarly, the product-conjugation phenomenon asserts that it is difficult for a union of (subsets of) tori to be simultaneously approximately closed under products and conjugations, unless it is coming from a genuine group. In both cases, the key is to exploit a sizeable gap between the behaviour of two types of “pivots” (which are scaling parameters in the sum-product case, and tori in the product-conjugation case): ones which interact strongly with the underlying set $A$, and ones which do not interact at all. The point is that there is no middle ground of pivots which only interact weakly with the set. This separation between interacting (or “involved”) and non-interacting (or “non-involved”) pivots can then be exploited to bootstrap approximate algebraic structure into exact algebraic structure. (Curiously, a similar argument is used all the time in PDE, where it goes under the name of the “bootstrap argument”.)
Below the fold we give more details of this crucial pivot argument.
One piece of trivia about the writing of this paper: this was the first time any of us had used modern version control software to collaboratively write a paper; specifically, we used Subversion, with the repository being hosted online by xp-dev. (See this post at the Secret Blogging Seminar for how to get started with this software.) There were a certain number of technical glitches in getting everything to install and run smoothly, but once it was set up, it was significantly easier to use than our traditional system of emailing draft versions of the paper back and forth, as one could simply download and upload the most recent versions whenever one wished, with all changes merged successfully. I had a positive impression of this software and am likely to try it again in future collaborations, particularly those involving at least three people. (It would also work well for polymath projects, modulo the technical barrier of every participant having to install some software.)
In mathematics, one frequently starts with some space $X$ and wishes to extend it to a larger space $Y$. Generally speaking, there are two ways in which one can extend a space $X$:
- By embedding $X$ into a space $Y$ that has $X$ (or at least an isomorphic copy of $X$) as a subspace.
- By covering $X$ by a space $Y$ that has $X$ (or an isomorphic copy thereof) as a quotient.
For many important categories of interest (such as abelian categories), the former type of extension can be represented by the exact sequence $0 \to X \to Y$, and the latter type of extension can be represented by the exact sequence $Y \to X \to 0$. In some cases, $X$ can be both embedded in, and covered by, $Y$, in a consistent fashion; in such cases we sometimes say that the above exact sequences split.
An analogy would be to that of digital images. When a computer represents an image, it is limited both by the scope of the image (what it is picturing), and by the resolution of the image (how much physical space is represented by a given pixel). To make the image “larger”, one could either embed the image in an image of larger scope but equal resolution (e.g. embedding an image of a person's face into an image that covers a region of space that is four times larger in both dimensions, e.g. the person's upper body), or cover the image with an image of higher resolution but of equal scope (e.g. enhancing a coarsely pixelated picture of a face to a finely pixelated picture of the same face). In the former case, the original image is a sub-image (or cropped image) of the extension, but in the latter case the original image is a quotient (or a pixelation) of the extension. In the former case, each pixel in the original image can be identified with a pixel in the extension, but not every pixel in the extension is covered. In the latter case, every pixel in the original image is covered by several pixels in the extension, but a pixel in the original image is not canonically identified with any particular pixel in the extension that covers it; it “loses its identity” by dispersing into higher resolution pixels.
(Note that “zooming in” the visual representation of an image by making each pixel occupy a larger region of the screen increases neither the scope nor the resolution; in this language, a zoomed-in version of an image is merely an isomorphic copy of the original image; it carries the same amount of information as the original image, but has been represented in a new coordinate system which may make it easier to view, especially to the visually impaired.)
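The two halves of this analogy are easy to make concrete. Here is a small numpy sketch (the array sizes are arbitrary choices for illustration): cropping exhibits the original image as a sub-image of a larger-scope extension, while block-averaging exhibits it as a quotient of a higher-resolution one.

```python
import numpy as np

rng = np.random.default_rng(0)
face = rng.random((100, 100))          # a 100x100 "image"

# Embedding: place the image inside an image of larger scope, equal resolution.
wide = np.zeros((200, 200))
wide[50:150, 50:150] = face            # the face is a sub-image: cropping recovers it
assert np.array_equal(wide[50:150, 50:150], face)

# Covering: refine to higher resolution, equal scope.
fine = np.kron(face, np.ones((2, 2)))  # each pixel disperses into a 2x2 block
# Pixelating (block-averaging) the refinement recovers the original image,
# so the face is a quotient of its refinement:
coarse = fine.reshape(100, 2, 100, 2).mean(axis=(1, 3))
assert np.allclose(coarse, face)
```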
In the study of a given category of spaces (e.g. topological spaces, manifolds, groups, fields, etc.), embeddings and coverings are both important; this is particularly true in the more topological areas of mathematics, such as manifold theory. But typically, the term extension is reserved for just one of these two operations. For instance, in the category of fields, coverings are quite trivial; if one covers a field $k$ by a field $l$, the kernel of the covering map $\phi: l \to k$ is necessarily trivial, and so $k$ and $l$ are in fact isomorphic. So in field theory, a field extension refers to an embedding of a field, rather than a covering of a field. Similarly, in the theory of metric spaces, there are no non-trivial isometric coverings of a metric space, and so the only useful notion of an extension of a metric space is the one given by embedding the original space in the extension.
On the other hand, in group theory (and in group-like theories, such as the theory of dynamical systems, which studies group actions), the term “extension” is reserved for coverings, rather than for embeddings. I think one of the main reasons for this is that coverings of groups automatically generate a special type of embedding (a normal embedding), whereas most embeddings don't generate coverings. More precisely, given a group extension $G$ of a base group $H$,

$\phi: G \twoheadrightarrow H,$

one can form the kernel $K := \ker(\phi)$ of the covering map $\phi$, which is a normal subgroup of $G$, and we thus can extend the above sequence canonically to a short exact sequence

$0 \to K \to G \to H \to 0.$

On the other hand, an embedding

$\phi: H \hookrightarrow G$

of $H$ into $G$ does not similarly extend to a short exact sequence unless the embedding is normal.
Another reason for the notion of extension varying between embeddings and coverings from subject to subject is that there are various natural duality operations (and more generally, contravariant functors) which turn embeddings into coverings and vice versa. For instance, an embedding of one vector space $V$ into another $W$ induces a covering of the dual space $V^*$ by the dual space $W^*$, and conversely; similarly, an embedding of a locally compact abelian group $H$ in another $G$ induces a covering of the Pontryagin dual $\widehat{H}$ by the Pontryagin dual $\widehat{G}$. In the language of images, embedding an image in an image of larger scope is largely equivalent to covering the Fourier transform of that image by a transform of higher resolution, and conversely; this is ultimately a manifestation of the basic fact that frequency is inversely proportional to wavelength.
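Spelled out in the Pontryagin case (a standard fact, recorded here as a sketch; surjectivity of the restriction map requires $H$ to be a closed subgroup), the duality turns an embedding into restriction of characters:

```latex
% Duality turns an embedding into a covering: characters restrict.
H \hookrightarrow G
\qquad\rightsquigarrow\qquad
\widehat{G} \twoheadrightarrow \widehat{H},
\qquad \chi \longmapsto \chi|_{H}.
```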
Similarly, a common duality operation arises in many areas of mathematics by starting with a space $X$ and then considering a space $C(X)$ of functions on that space (e.g. continuous real-valued functions, if $X$ was a topological space, or in more algebraic settings one could consider homomorphisms from $X$ to some fixed space). Embedding $X$ into $Y$ then induces a covering of $C(X)$ by $C(Y)$, and conversely, a covering of $X$ by $Y$ induces an embedding of $C(X)$ into $C(Y)$. Returning again to the analogy with images, if one looks at the collection of all images of a fixed scope and resolution, rather than just a single image, then increasing the available resolution causes an embedding of the space of low-resolution images into the space of high-resolution images (since of course every low-resolution image is an example of a high-resolution image), whereas increasing the available scope causes a covering of the space of narrow-scope images by the space of wide-scope images (since every wide-scope image can be cropped into a narrow-scope image). Note in the case of images that these extensions can be split: not only can a low-resolution image be viewed as a special case of a high-resolution image, but any high-resolution image can be pixelated into a low-resolution one. Similarly, not only can any wide-scope image be cropped into a narrow-scope one, a narrow-scope image can be extended to a wide-scope one simply by filling in all the new areas of scope with black (or by using more advanced image processing tools to create a more visually pleasing extension). (In the category of sets, the statement that every covering can be split is precisely the axiom of choice.)
I’ve recently found myself having to deal quite a bit with group extensions in my research, so I have decided to make some notes on the basic theory of such extensions here. This is utterly elementary material for a group theorist, but I found this task useful for organising my own thoughts on this topic, and also in pinning down some of the jargon in this field.
One theme in this course will be the central nature played by the gaussian random variables $N(\mu, \sigma^2)$. Gaussians have an incredibly rich algebraic structure, and many results about general random variables can be established by first using this structure to verify the result for gaussians, and then using universality techniques (such as the Lindeberg exchange strategy) to extend the results to more general variables.
One way to exploit this algebraic structure is to continuously deform the variance $t := \sigma^2$ from an initial variance of zero (so that the random variable is deterministic) to some final level $T$. We would like to use this to give a continuous family $(X_t)_{0 \leq t \leq T}$ of random variables, with $X_t \equiv N(\mu, t)$, as $t$ (viewed as a “time” parameter) runs from $0$ to $T$.

At present, we have not completely specified what this family should be, because we have only described the individual distribution $N(\mu, t)$ of each $X_t$, and not the joint distribution. However, there is a very natural way to specify a joint distribution of this type, known as Brownian motion. In these notes we lay the necessary probability theory foundations to set up this motion, and indicate its connection with the heat equation, the central limit theorem, and the Ornstein-Uhlenbeck process. This is the beginning of stochastic calculus, which we will not develop fully here.
We will begin with one-dimensional Brownian motion, but it is a simple matter to extend the process to higher dimensions. In particular, we can define Brownian motion on vector spaces of matrices, such as the space of $n \times n$ Hermitian matrices. This process is equivariant with respect to conjugation by unitary matrices, and so we can quotient out by this conjugation and obtain a new process on the quotient space, or in other words on the spectrum of $n \times n$ Hermitian matrices. This process is called Dyson Brownian motion, and turns out to have a simple description in terms of ordinary Brownian motion; it will play a key role in several of the subsequent notes in this course.
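As a quick illustration (not the construction used in these notes, and with one normalisation convention among several), one can simulate this matrix Brownian motion and watch the eigenvalue process in a few lines of Python:

```python
import numpy as np

def dyson_brownian_motion(n=5, steps=1000, T=1.0, seed=0):
    """Simulate Brownian motion on n x n Hermitian matrices by summing iid
    Hermitian gaussian increments, recording the spectrum at each step --
    a crude discretisation of Dyson Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    H = np.zeros((n, n), dtype=complex)
    spectra = np.empty((steps, n))
    for k in range(steps):
        G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        H += np.sqrt(dt) * (G + G.conj().T) / 2   # Hermitian gaussian increment
        spectra[k] = np.linalg.eigvalsh(H)        # eigenvalues, increasing order
    return spectra

paths = dyson_brownian_motion()
print(paths[-1])   # almost surely distinct: the eigenvalue paths "repel"
```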
After a hiatus of several months, I’ve made an effort to advance the writing of the second Polymath1 paper, entitled “Density Hales-Jewett and Moser numbers”. This is in part due to a request from the Szemerédi 70th birthday conference proceedings (which solicited the paper) to move the submission date up from April to February. (Also, the recent launch of Polymath5 on Tim Gowers’ blog reminds me that I should get this older project out of the way.)
The current draft of the paper is here, with source files here. I have been trimming the paper, in particular replacing some of the auxiliary or incomplete material in the paper with references to pages on the polymath wiki instead. Nevertheless this is still a large paper, at 51 pages. It is now focused primarily on the computation of the Density Hales-Jewett numbers and the Moser numbers for all $n$ up to 6, with the latter requiring a significant amount of computer assistance.
There are a number of minor issues remaining with the paper:
- A picture of a Fujimura set for the introduction would be nice.
- In the proof of Theorem 1.3 (asymptotic lower bound for DHJ numbers), it is asserted without proof that the circulant matrix with first row $1, 2, \ldots, k-1$ is nonsingular. One can prove this claim by computing the Fourier coefficients $\sum_{j=1}^{k-1} j e^{2\pi i jt/(k-1)}$ and checking that they are non-zero for all $t$, but is there a slicker way to see this (e.g. by citing a reference)? (A numerical sanity check is sketched after this list.)
- Reference [15] (which is Komlos's lower bound on the Moser numbers) is missing a volume number. The reference is currently given as

J. Komlos, solution to problem P.170 by Leo Moser, Canad. Math. Bull. vol ??? (1972), 312-313, 1970.
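Regarding the circulant matrix question in the first item above, a numerical sanity check (not, of course, a proof or a reference) is easy to run:

```python
import numpy as np

# Check nonsingularity of the circulant matrix with first row 1,2,...,k-1
# for small k: via its eigenvalues (the Fourier coefficients
# sum_j (j+1) omega^{jt}, omega = exp(2 pi i/(k-1))) and via the determinant.
for k in range(3, 12):
    row = np.arange(1, k)              # first row: 1, 2, ..., k-1
    m = len(row)
    omega = np.exp(2j * np.pi / m)
    eigs = [sum(row[j] * omega ** (j * t) for j in range(m)) for t in range(m)]
    C = np.array([[row[(j - i) % m] for j in range(m)] for i in range(m)])
    assert min(abs(e) for e in eigs) > 1e-9
    assert abs(np.linalg.det(C)) > 1e-9
print("circulant(1,...,k-1) nonsingular for k = 3,...,11")
```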
Finally, the text probably needs to be proofread one or two more times before it is ready to go, hopefully by early February. There is still also one last opportunity to propose non-trivial restructuring of the paper (in particular, if there are other ways to trim the size of the paper, this may be worth looking into).
Let $A$ be a Hermitian $n \times n$ matrix. By the spectral theorem for Hermitian matrices (which, for sake of completeness, we prove below), one can diagonalise $A$ using a sequence

$\lambda_1(A) \geq \lambda_2(A) \geq \ldots \geq \lambda_n(A)$

of $n$ real eigenvalues, together with an orthonormal basis of eigenvectors $u_1(A), \ldots, u_n(A) \in \mathbb{C}^n$. (The eigenvalues are uniquely determined by $A$, but the eigenvectors have a little ambiguity to them, particularly if there are repeated eigenvalues; for instance, one could multiply each eigenvector by a complex phase $e^{i\theta}$. In these notes we are arranging eigenvalues in descending order; of course, one can also arrange eigenvalues in increasing order, which causes some slight notational changes in the results below.) The set $\{\lambda_1(A), \ldots, \lambda_n(A)\}$ is known as the spectrum of $A$.
A basic question in linear algebra asks the extent to which the eigenvalues $\lambda_1(A) \geq \ldots \geq \lambda_n(A)$ and $\lambda_1(B) \geq \ldots \geq \lambda_n(B)$ of two Hermitian matrices $A, B$ constrain the eigenvalues $\lambda_1(A+B) \geq \ldots \geq \lambda_n(A+B)$ of the sum. For instance, the linearity of trace, $\mathrm{tr}(A+B) = \mathrm{tr}(A) + \mathrm{tr}(B)$, when expressed in terms of eigenvalues, gives the trace constraint

$\sum_{i=1}^n \lambda_i(A+B) = \sum_{i=1}^n \lambda_i(A) + \sum_{i=1}^n \lambda_i(B); \qquad (1)$

the identity

$\lambda_1(A) = \sup_{|v|=1} v^* A v \qquad (2)$

(together with the counterparts for $B$ and $A+B$) gives the inequality

$\lambda_1(A+B) \leq \lambda_1(A) + \lambda_1(B); \qquad (3)$

and so forth.
The complete answer to this problem is a fascinating one, requiring a strangely recursive description (once known as Horn’s conjecture, which is now solved), and connected to a large number of other fields of mathematics, such as geometric invariant theory, intersection theory, and the combinatorics of a certain gadget known as a “honeycomb”. See for instance my survey with Allen Knutson on this topic some years ago.
In typical applications to random matrices, one of the matrices (say, $B$) is “small” in some sense, so that $A+B$ is a perturbation of $A$. In this case, one does not need the full strength of the above theory, and can instead rely on a simple aspect of it pointed out by Helmke and Rosenthal and by Totaro, which generates several of the eigenvalue inequalities relating $A$, $B$, and $A+B$, of which (1) and (3) are examples. (Actually, this method eventually generates all of the eigenvalue inequalities, but this is a non-trivial fact to prove.) These eigenvalue inequalities can mostly be deduced from a number of minimax characterisations of eigenvalues (of which (2) is a typical example), together with some basic facts about intersections of subspaces. Examples include the Weyl inequalities

$\lambda_{i+j-1}(A+B) \leq \lambda_i(A) + \lambda_j(B),$

valid whenever $i, j \geq 1$ and $i+j-1 \leq n$, and the Ky Fan inequality

$\lambda_1(A+B) + \ldots + \lambda_k(A+B) \leq \lambda_1(A) + \ldots + \lambda_k(A) + \lambda_1(B) + \ldots + \lambda_k(B).$

One consequence of these inequalities is that the spectrum of a Hermitian matrix is stable with respect to small perturbations.
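These inequalities are easy to test numerically. Here is a short Python sketch (the random matrices and the tolerance are arbitrary illustrative choices) checking the Weyl and Ky Fan inequalities on a random pair of Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_hermitian(n):
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / 2

def eigs_desc(M):
    # eigenvalues in descending order, matching the convention of these notes
    return np.sort(np.linalg.eigvalsh(M))[::-1]

n = 6
A, B = rand_hermitian(n), rand_hermitian(n)
lA, lB, lAB = eigs_desc(A), eigs_desc(B), eigs_desc(A + B)

# Weyl: lambda_{i+j-1}(A+B) <= lambda_i(A) + lambda_j(B) when i+j-1 <= n
for i in range(1, n + 1):
    for j in range(1, n + 2 - i):
        assert lAB[i + j - 2] <= lA[i - 1] + lB[j - 1] + 1e-10

# Ky Fan: the sums of the top k eigenvalues are subadditive
for k in range(1, n + 1):
    assert lAB[:k].sum() <= lA[:k].sum() + lB[:k].sum() + 1e-10

print("Weyl and Ky Fan inequalities hold for this random pair")
```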
We will also establish some closely related inequalities concerning the relationships between the eigenvalues of a matrix, and the eigenvalues of its minors.
Many of the inequalities here have analogues for the singular values of non-Hermitian matrices (which is consistent with the discussion near Exercise 16 of Notes 3). However, the situation is markedly different when dealing with eigenvalues of non-Hermitian matrices; here, the spectrum can be far more unstable, if pseudospectrum is present. Because of this, the theory of the eigenvalues of a random non-Hermitian matrix requires an additional ingredient, namely upper bounds on the prevalence of pseudospectrum, which after recentering the matrix is basically equivalent to establishing lower bounds on least singular values. We will discuss this point in more detail in later notes.
We will work primarily here with Hermitian matrices, which can be viewed as self-adjoint transformations on complex vector spaces such as $\mathbb{C}^n$. One can of course specialise the discussion to real symmetric matrices, in which case one can restrict these complex vector spaces to their real counterparts $\mathbb{R}^n$. The specialisation of the complex theory below to the real case is straightforward and is left to the interested reader.
This week at UCLA, Pierre-Louis Lions gave one of this year’s Distinguished Lecture Series, on the topic of mean field games. These are a relatively novel class of systems of partial differential equations, that are used to understand the behaviour of multiple agents each individually trying to optimise their position in space and time, but with their preferences being partly determined by the choices of all the other agents, in the asymptotic limit when the number of agents goes to infinity. A good example here is that of traffic congestion: as a first approximation, each agent wishes to get from A to B in the shortest path possible, but the speed at which one can travel depends on the density of other agents in the area. A more light-hearted example is that of a Mexican wave (or audience wave), which can be modeled by a system of this type, in which each agent chooses to stand, sit, or be in an intermediate position based on his or her comfort level, and also on the position of nearby agents.
Under some assumptions, mean field games can be expressed as a coupled system of two equations: a Fokker-Planck type equation, evolving forward in time, that governs the evolution of the density function of the agents, and a Hamilton-Jacobi (or Hamilton-Jacobi-Bellman) type equation, evolving backward in time, that governs the computation of the optimal path for each agent. The combination of both forward propagation and backward propagation in time creates some unusual “elliptic” phenomena in the time variable that are not seen in more conventional evolution equations. For instance, for Mexican waves, this model predicts that such waves only form for stadiums exceeding a certain minimum size (and this phenomenon has apparently been confirmed experimentally!).
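For the record, here is one common way the coupled system is written (a heuristic sketch only; the sign and normalisation conventions, and the exact form of the coupling $F$, vary between authors):

```latex
% Heuristic mean field game system: u is the value function of a typical
% agent, m the density of agents, \nu \ge 0 a diffusion coefficient,
% H a Hamiltonian, and F the coupling of each agent to the density.
\begin{aligned}
 -\partial_t u - \nu \Delta u + H(x, \nabla u) &= F(x, m)
   && \text{(Hamilton--Jacobi--Bellman, backward in time),} \\
 \partial_t m - \nu \Delta m - \nabla \cdot \bigl( m \, \nabla_p H(x, \nabla u) \bigr) &= 0
   && \text{(Fokker--Planck, forward in time).}
\end{aligned}
```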
Due to lack of time and preparation, I was not able to transcribe Lions’ lectures in full detail; but I thought I would describe here a heuristic derivation of the mean field game equations, and mention some of the results that Lions and his co-authors have been working on. (Video of a related series of lectures (in French) by Lions on this topic at the Collège de France is available here.)
To avoid (rather important) technical issues, I will work at a heuristic level only, ignoring issues of smoothness, convergence, existence and uniqueness, etc.
Consider the sum $S_n := X_1 + \ldots + X_n$ of iid real random variables $X_1, \ldots, X_n \equiv X$ of finite mean $\mu$ and variance $\sigma^2$ for some $\sigma > 0$. Then the sum $S_n$ has mean $n\mu$ and variance $n\sigma^2$, and so (by Chebyshev's inequality) we expect $S_n$ to usually have size $n\mu + O(\sqrt{n}\sigma)$. To put it another way, if we consider the normalised sum

$Z_n := \frac{S_n - n\mu}{\sqrt{n}\sigma} \qquad (1)$

then $Z_n$ has been normalised to have mean zero and variance $1$, and is thus usually of size $O(1)$.
In the previous set of notes, we were able to establish various tail bounds on $Z_n$. For instance, from Chebyshev's inequality one has

$\mathbf{P}(|Z_n| \geq \lambda) \leq \lambda^{-2}, \qquad (2)$

and if the original distribution $X$ was bounded or subgaussian, we had the much stronger Chernoff bound

$\mathbf{P}(|Z_n| \geq \lambda) \leq C e^{-c\lambda^2} \qquad (3)$

for some absolute constants $C, c > 0$; in other words, the $Z_n$ are uniformly subgaussian.
Now we look at the distribution of $Z_n$. The fundamental central limit theorem tells us the asymptotic behaviour of this distribution:

Theorem 1 (Central limit theorem) Let $X_1, \ldots, X_n \equiv X$ be iid real random variables of finite mean $\mu$ and variance $\sigma^2$ for some $\sigma > 0$, and let $Z_n$ be the normalised sum (1). Then as $n \to \infty$, $Z_n$ converges in distribution to the standard normal distribution $N(0,1)$.
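One can see the theorem in action numerically. The sketch below uses exponential variables (mean $\mu = 1$, variance $\sigma^2 = 1$); the choice of distribution, of $n$, and of the number of trials is arbitrary, and the sum of $n$ iid exponentials is sampled directly as a Gamma variable for efficiency.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

n, trials = 10_000, 200_000
# The sum S_n of n iid Exp(1) variables has the Gamma(n, 1) distribution,
# so we may sample S_n directly rather than summing n-term arrays.
S = rng.gamma(shape=n, scale=1.0, size=trials)
Z = (S - n) / math.sqrt(n)                       # the normalised sums (1)
for lam in (0.0, 1.0, 2.0):
    empirical = np.mean(Z <= lam)
    gaussian = 0.5 * (1 + math.erf(lam / math.sqrt(2)))
    print(f"P(Z_n <= {lam}): empirical {empirical:.4f} vs N(0,1) {gaussian:.4f}")
```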
Exercise 2 Show that $Z_n$ does not converge in probability or in the almost sure sense. (Hint: the intuition here is that for two very different values $n_1 \ll n_2$ of $n$, the quantities $Z_{n_1}$ and $Z_{n_2}$ are almost independent of each other, since the bulk of the sum $S_{n_2}$ is determined by those $X_i$ with $i > n_1$. Now make this intuition precise.)
Exercise 3 Use Stirling's formula from Notes 0a to verify the central limit theorem in the case when $X$ is a Bernoulli distribution, taking the values $0$ and $1$ only. (This is a variant of Exercise 2 from those notes, or Exercise 2 from Notes 1. It is easy to see that once one does this, one can rescale and handle any other two-valued distribution also.)
Exercise 4 Use Exercise 9 from Notes 1 to verify the central limit theorem in the case when $X$ is gaussian.
Note we are only discussing the case of real iid random variables. The case of complex random variables (or more generally, vector-valued random variables) is a little bit more complicated, and will be discussed later in this post.
The central limit theorem (and its variants, which we discuss below) are extremely useful tools in random matrix theory, in particular through the control they give on random walks (which arise naturally from linear functionals of random matrices). But the central limit theorem can also be viewed as a “commutative” analogue of various spectral results in random matrix theory (in particular, we shall see in later lectures that the Wigner semicircle law can be viewed in some sense as a “noncommutative” or “free” version of the central limit theorem). Because of this, the techniques used to prove the central limit theorem can often be adapted to be useful in random matrix theory. For this reason, we shall use these notes to dwell on several different proofs of the central limit theorem, as this provides a convenient way to showcase some of the basic methods that we will encounter again (in a more sophisticated form) when dealing with random matrices.
Suppose we have a large number of scalar random variables $X_1, \ldots, X_n$, which each have bounded size on average (e.g. their mean and variance could be $O(1)$). What can one then say about their sum $S_n := X_1 + \ldots + X_n$? If each individual summand $X_i$ varies in an interval of size $O(1)$, then their sum of course varies in an interval of size $O(n)$. However, a remarkable phenomenon, known as concentration of measure, asserts that assuming a sufficient amount of independence between the component variables $X_1, \ldots, X_n$, this sum sharply concentrates in a much narrower range, typically in an interval of size $O(\sqrt{n})$. This phenomenon is quantified by a variety of large deviation inequalities that give upper bounds (often exponential in nature) on the probability that such a combined random variable deviates significantly from its mean. The same phenomenon applies not only to linear expressions such as $S_n = X_1 + \ldots + X_n$, but more generally to nonlinear combinations $F(X_1, \ldots, X_n)$ of such variables, provided that the nonlinear function $F$ is sufficiently regular (in particular, if it is Lipschitz, either separately in each variable, or jointly in all variables).
The basic intuition here is that it is difficult for a large number of independent variables to “work together” to simultaneously pull a sum $X_1 + \ldots + X_n$ or a more general combination $F(X_1, \ldots, X_n)$ too far away from its mean. Independence here is the key; concentration of measure results typically fail if the $X_i$ are too highly correlated with each other.
There are many applications of the concentration of measure phenomenon, but we will focus on a specific application which is useful in the random matrix theory topics we will be studying, namely on controlling the behaviour of random $n$-dimensional vectors with independent components, and in particular on the distance between such random vectors and a given subspace.
Once one has a sufficient amount of independence, the concentration of measure tends to be sub-gaussian in nature; thus the probability that one is at least $\lambda$ standard deviations from the mean tends to drop off like $C e^{-c\lambda^2}$ for some absolute constants $C, c > 0$. In particular, one is $O(\log^{1/2} n)$ standard deviations from the mean with high probability, and $O(\log^{O(1)} n)$ standard deviations from the mean with overwhelming probability. Indeed, concentration of measure is our primary tool for ensuring that various events hold with overwhelming probability (other moment methods can give high probability, but have difficulty ensuring overwhelming probability).
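As a minimal illustration of these tail bounds, consider sums of independent signs. The sketch below (sample sizes are arbitrary; the sum of $n$ signs is sampled via a binomial count for efficiency) shows the sum occupying a window of size $O(\sqrt{n})$ rather than $O(n)$, with a three-standard-deviation tail probability that stays small uniformly in $n$:

```python
import numpy as np

rng = np.random.default_rng(3)

for n in (100, 10_000, 1_000_000):
    # S_n = sum of n independent +-1 signs, via a Binomial(n, 1/2) count
    S = 2 * rng.binomial(n, 0.5, size=50_000) - n
    tail = np.mean(np.abs(S) > 3 * np.sqrt(n))   # ~3 standard deviations
    print(f"n={n:>9}: std(S_n)/sqrt(n) = {S.std() / np.sqrt(n):.3f}, "
          f"P(|S_n| > 3 sqrt(n)) = {tail:.4f}")
```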
This is only a brief introduction to the concentration of measure phenomenon. A systematic study of this topic can be found in this book by Ledoux.