You are currently browsing the monthly archive for February 2010.
<p>
Our study of random matrices, to date, has focused on somewhat general ensembles, such as iid random matrices or Wigner random matrices, in which the distribution of the individual entries of the matrices was essentially arbitrary (as long as certain moments, such as the mean and variance, were normalised). In these notes, we now focus on two much more special, and much more symmetric, ensembles:
<p>
<ul> <li> The <a href=”http://en.wikipedia.org/wiki/Gaussian_Unitary_Ensemble”>Gaussian Unitary Ensemble</a> (GUE), which is an ensemble of random Hermitian matrices
in which the upper-triangular entries are iid with distribution
, and the diagonal entries are iid with distribution
, and independent of the upper-triangular ones; and <li> The <em>Gaussian random matrix ensemble</em>, which is an ensemble of random
(non-Hermitian) matrices
whose entries are iid with distribution
.
</ul>
<p>
The symmetric nature of these ensembles will allow us to compute the spectral distribution by exact algebraic means, revealing a surprising connection with orthogonal polynomials and with determinantal processes. This will, for instance, recover the semi-circular law for GUE, but will also reveal <em>fine</em> spacing information, such as the distribution of the gap between <em>adjacent</em> eigenvalues, which is largely out of reach of tools such as the Stieltjes transform method and the moment method (although the moment method, with some effort, is able to control the extreme edges of the spectrum).
<p>
Similarly, we will see for the first time the <em>circular law</em> for eigenvalues of non-Hermitian matrices.
<p>
There are a number of other highly symmetric ensembles which can also be treated by the same methods, most notably the Gaussian Orthogonal Ensemble (GOE) and the Gaussian Symplectic Ensemble (GSE). However, for simplicity we shall focus just on the above two ensembles. For a systematic treatment of these ensembles, see the <a href=”http://www.ams.org/mathscinet-getitem?mr=1677884″>text by Deift</a>.
<p>
<!–more–>
<p>
<p align=center><b> — 1. The spectrum of GUE — </b></p>
<p>
We have already shown using Dyson Brownian motion in <a href=”https://terrytao.wordpress.com/2010/01/18/254a-notes-3b-brownian-motion-and-dyson-brownian-motion/”>Notes 3b</a> that we have the <a href=”http://www.ams.org/mathscinet-getitem?mr=173726″>Ginibre formula</a> <a name=”rhod”><p align=center></p>
</a> for the density function of the eigenvalues of a GUE matrix
, where <p align=center>
</p>
is the <a href=”http://en.wikipedia.org/wiki/Vandermonde_determinant”>Vandermonde determinant</a>. We now give an alternate proof of this result (omitting the exact value of the normalising constant ) that exploits unitary invariance and the change of variables formula (the latter of which we shall do from first principles). The one thing to be careful about is that one has to somehow quotient out by the invariances of the problem before being able to apply the change of variables formula.
<p>
One approach here would be to artificially “fix a gauge” and work on some slice of the parameter space which is “transverse” to all the symmetries. With such an approach, one can use the classical change of variables formula. While this can certainly be done, we shall adopt a more “gauge-invariant” approach and carry the various invariances with us throughout the computation. (For a comparison of the two approaches, see <a href=”https://terrytao.wordpress.com/2008/09/27/what-is-a-gauge/”>this previous blog post</a>.)
<p>
We turn to the details. Let be the space of Hermitian
matrices, then the distribution
of a GUE matrix
is a absolutely continuous probability measure on
, which can be written using the definition of GUE as <p align=center>
</p>
where is Lebesgue measure on
,
are the coordinates of
, and
is a normalisation constant (the exact value of which depends on how one normalises Lebesgue measure on
). We can express this more compactly as <p align=center>
</p>
Expressed this way, it is clear that the GUE ensemble is invariant under conjugations by any unitary matrix.
<p>
Let be the diagonal matrix whose entries
are the eigenvalues of
in descending order. Then we have
for some unitary matrix
. The matirx
is not uniquely determined; if
is diagonal unitary matrix, then
commutes with
, and so one can freely replace
with
. On the other hand, if the eigenvalues of
are simple, then the diagonal matrices are the <em>only</em> matrices that commute with
, and so this freedom to right-multiply
by diagonal unitaries is the only failure of uniqueness here. And in any case, from the unitary invariance of GUE, we see that even after conditioning on
, we may assume without loss of generality that
is drawn from the invariant Haar measure on
. In particular,
and
can be taken to be independent.
<p>
Fix a diagonal matrix for some
, let
be extremely small, and let us compute the probability <a name=”dprop”><p align=center>
</p>
</a> that lies within
of
in the Frobenius norm. On the one hand, the probability density of
is proportional to <p align=center>
</p>
near (where we write
) and the volume of a ball of radius
in the
-dimensional space
is proportional to
, so <a href=”#dprop”>(2)</a> is equal to <a name=”ciprian”><p align=center>
</p>
</a> for some constant depending only on
, where
goes to zero as
(keeping
and
fixed). On the other hand, if
, then by the Weyl inequality (or Hoffman-Weilandt inequality) we have
(we allow implied constants here to depend on
and on
). This implies
, thus
. As a consequence we see that the off-diagonal elements of
are of size
. We can thus use the inverse function theorem in this local region of parameter space and make the ansatz <p align=center>
</p>
where is a bounded diagonal matrix,
is a diagonal unitary matrix, and
is a bounded skew-adjoint matrix with zero diagonal. (Note here the emergence of the freedom to right-multiply
by diagonal unitaries.) Note that the map
has a non-degenerate Jacobian, so the inverse function theorem applies to uniquely specify
(and thus
) from
in this local region of parameter space.
<p>
Conversely, if take the above form, then we can Taylor expand and conclude that <p align=center>
</p>
and so <p align=center></p>
We can thus bound <a href=”#dprop”>(2)</a> from above and below by expressions of the form <a name=”esd”><p align=center></p>
</a> As is distributed using Haar measure on
,
is (locally) distributed using
times a constant multiple of Lebesgue measure on the space
of skew-adjoint matrices with zero diagonal, which has dimension
. Meanwhile,
is distributed using
times Lebesgue measure on the space of diagonal elements. Thus we can rewrite <a href=”#esd”>(4)</a> as <p align=center>
</p>
where and
denote Lebesgue measure and
depends only on
.
<p>
Observe that the map dilates the (complex-valued)
entry of
by
, and so the Jacobian of this map is
. Applying the change of variables, we can express the above as <p align=center>
</p>
The integral here is of the form for some other constant
. Comparing this formula with <a href=”#ciprian”>(3)</a> we see that <p align=center>
</p>
for yet another constant . Sending
we recover an exact formula <p align=center>
</p>
when is simple. Since almost all Hermitian matrices have simple spectrum (see Exercise 10 of <a href=”https://terrytao.wordpress.com/2010/01/12/254a-notes-3a-eigenvalues-and-sums-of-hermitian-matrices/”>Notes 3a</a>), this gives the full spectral distribution of GUE, except for the issue of the unspecified constant.
<p>
<blockquote><b>Remark 1</b> In principle, this method should also recover the explicit normalising constant in <a href=”#rhod”>(1)</a>, but to do this it appears one needs to understand the volume of the fundamental domain of
with respect to the logarithm map, or equivalently to understand the volume of the unit ball of Hermitian matrices in the operator norm. I do not know of a simple way to compute this quantity (though it can be inferred from <a href=”#rhod”>(1)</a> and the above analysis). One can also recover the normalising constant through the machinery of determinantal processes, see below. </blockquote>
<p>
<p>
<blockquote><b>Remark 2</b> <a name=”daft”></a> The above computation can be generalised to other -conjugation-invariant ensembles
whose probability distribution is of the form <p align=center>
</p>
for some potential function (where we use the spectral theorem to define
), yielding a density function for the spectrum of the form <p align=center>
</p>
Given suitable regularity conditions on , one can then generalise many of the arguments in these notes to such ensembles. See <a href=”http://www.ams.org/mathscinet-getitem?mr=1677884″>this book by Deift</a> for details. </blockquote>
<p>
<p>
<p align=center><b> — 2. The spectrum of gaussian matrices — </b></p>
<p>
The above method also works for gaussian matrices , as was first observed by Dyson (though the final formula was first obtained by Ginibre, using a different method). Here, the density function is given by <a name=”can”><p align=center>
</p>
</a> where is a constant and
is Lebesgue measure on the space
of all complex
matrices. This is invariant under both left and right multiplication by unitary matrices, so in particular is invariant under unitary conjugations as before.
<p>
This matrix has
complex (generalised) eigenvalues
, which are usually distinct:
<p>
<blockquote><b>Exercise 3</b> Let . Show that the space of matrices in
with a repeated eigenvalue has codimension
. </blockquote>
<p>
<p>
Unlike the Hermitian situation, though, there is no natural way to order these complex eigenvalues. We will thus consider all
possible permutations at once, and define the spectral density function
of
by duality and the formula <p align=center>
</p>
for all test functions . By the Riesz representation theorem, this uniquely defines
(as a distribution, at least), although the total mass of
is
rather than
due to the ambiguity in the spectrum.
<p>
Now we compute (up to constants). In the Hermitian case, the key was to use the factorisation
. This particular factorisation is of course unavailable in the non-Hermitian case. However, if the non-Hermitian matrix
has simple spectrum, it can always be factored instead as
, where
is unitary and
is upper triangular. Indeed, if one applies the <a href=”http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process”>Gram-Schmidt process</a> to the eigenvectors of
and uses the resulting orthonormal basis to form
, one easily verifies the desired factorisation. Note that the eigenvalues of
are the same as those of
, which in turn are just the diagonal entries of
.
<p>
<blockquote><b>Exercise 4</b> Show that this factorisation is also available when there are repeated eigenvalues. (Hint: use the <a href=”http://en.wikipedia.org/wiki/Jordan_normal_form”>Jordan normal form</a>.) </blockquote>
<p>
<p>
To use this factorisation, we first have to understand how unique it is, at least in the generic case when there are no repeated eigenvalues. As noted above, if , then the diagonal entries of
form the same set as the eigenvalues of
.
<p>
Now suppose we fix the diagonal of
, which amounts to picking an ordering of the
eigenvalues of
. The eigenvalues of
are
, and furthermore for each
, the eigenvector of
associated to
lies in the span of the last
basis vectors
of
, with a non-zero
coefficient (as can be seen by Gaussian elimination or Cramer’s rule). As
with
unitary, we conclude that for each
, the
column of
lies in the span of the eigenvectors associated to
. As these columns are orthonormal, they must thus arise from applying the Gram-Schmidt process to these eigenvectors (as discussed earlier). This argument also shows that once the diagonal entries
of
are fixed, each column of
is determined up to rotation by a unit phase. In other words, the only remaining freedom is to replace
by
for some unit diagonal matrix
, and then to replace
by
to counterbalance this change of
.
<p>
To summarise, the factorisation is unique up to specifying an enumeration
of the eigenvalues of
and right-multiplying
by diagonal unitary matrices, and then conjugating
by the same matrix. Given a matrix
, we may apply these symmetries randomly, ending up with a random enumeration
of the eigenvalues of
(whose distribution invariant with respect to permutations) together with a random factorisation
such that
has diagonal entries
in that order, and the distribution of
is invariant under conjugation by diagonal unitary matrices. Also, since
is itself invariant under unitary conjugations, we may also assume that
is distributed uniformly according to the Haar measure of
, and independently of
.
<p>
To summarise, the gaussian matrix ensemble , together with a randomly chosen enumeration
of the eigenvalues, can almost surely be factorised as
, where
is an upper-triangular matrix with diagonal entries
, distributed according to some distribution <p align=center>
</p>
which is invariant with respect to conjugating by diagonal unitary matrices, and
is uniformly distributed according to the Haar measure of
, independently of
.
<p>
Now let be an upper triangular matrix with complex entries whose entries
are distinct. As in the previous section, we consider the probability <a name=”gatf”><p align=center>
</p>
</a> On the one hand, since the space of complex
matrices has
real dimensions, we see from <a href=”#can”>(9)</a> that this expression is equal to <a name=”gatf2″><p align=center>
</p>
</a> for some constant .
<p>
Now we compute <a href=”#gatf”>(6)</a> using the factorisation . Suppose that
, so
As the eigenvalues of
are
, which are assumed to be distinct, we see (from the inverse function theorem) that for
small enough,
has eigenvalues
. With probability
, the diagonal entries of
are thus
(in that order). We now restrict to this event (the
factor will eventually be absorbed into one of the unspecified constants).
<p>
Let be eigenvector of
associated to
, then the Gram-Schmidt process applied to
(starting at
and working backwards to
) gives the standard basis
(in reverse order). By the inverse function theorem, we thus see that we have eigenvectors
of
, which when the Gram-Schmidt process is applied, gives a perturbation
in reverse order. This gives a factorisation
in which
, and hence
. This is however not the most general factorisation available, even after fixing the diagonal entries of
, due to the freedom to right-multiply
by diagonal unitary matrices
. We thus see that the correct ansatz here is to have <p align=center>
</p>
for some diagonal unitary matrix .
<p>
In analogy with the GUE case, we can use the inverse function theorem and make the more precise ansatz <p align=center></p>
where is skew-Hermitian with zero diagonal and size
,
is diagonal unitary, and
is an upper triangular matrix of size
. From the invariance
we see that
is distributed uniformly across all diagonal unitaries. Meanwhile, from the unitary conjugation invariance,
is distributed according to a constant multiple of
times Lebesgue measure
on the
-dimensional space of skew Hermitian matrices with zero diagonal; and from the definition of
,
is distributed according to a constant multiple of the measure <p align=center>
</p>
where is Lebesgue measure on the
-dimensional space of upper-triangular matrices. Furthermore, the invariances ensure that the random variables
are distributed independently. Finally, we have <p align=center>
</p>
Thus we may rewrite <a href=”#gatf”>(6)</a> as <a name=”gatf-2″><p align=center></p>
</a> for some (the
integration being absorbable into this constant
). We can Taylor expand <p align=center>
</p>
and so we can bound <a href=”#gatf-2″>(8)</a> above and below by expressions of the form <p align=center></p>
The Lebesgue measure is invariant under translations by upper triangular matrices, so we may rewrite the above expression as <a name=”can”><p align=center>
</p>
</a> where is the strictly lower triangular component of
.
<p>
The next step is to make the (linear) change of variables . We check dimensions:
ranges in the space
of skew-adjoint Hermitian matrices with zero diagonal, which has dimension
, as does the space of strictly lower-triangular matrices, which is where
ranges. So we can in principle make this change of variables, but we first have to compute the Jacobian of the transformation (and check that it is non-zero). For this, we switch to coordinates. Write
and
. In coordinates, the equation
becomes <p align=center>
</p>
or equivalently <p align=center></p>
Thus for instance <p align=center></p>
<p align=center></p>
<p align=center></p>
<p align=center></p>
<p align=center></p>
<p align=center></p>
etc. We then observe that the transformation matrix from to
is triangular, with diagonal entries given by
for
. The Jacobian of the (complex-linear) map
is thus given by <p align=center>
</p>
which is non-zero by the hypothesis that the are distinct. We may thus rewrite <a href=”#can”>(9)</a> as <p align=center>
</p>
where is Lebesgue measure on strictly lower-triangular matrices. The integral here is equal to
for some constant
. Comparing this with <a href=”#gatf”>(6)</a>, cancelling the factor of
, and sending
, we obtain the formula <p align=center>
</p>
for some constant . We can expand <p align=center>
</p>
If we integrate out the off-diagonal variables for
, we see that the density function for the diagonal entries
of
is proportional to <p align=center>
</p>
Since these entries are a random permutation of the eigenvalues of , we conclude the <em>Ginibre formula</em> <a name=”ginibre-gaussian”><p align=center>
</p>
</a> for the joint density of the eigenvalues of a gaussian random matrix, where is a constant.
<p>
<blockquote><b>Remark 5</b> Given that <a href=”#rhod”>(1)</a> can be derived using Dyson Brownian motion, it is natural to ask whether <a href=”#ginibre-gaussian”>(10)</a> can be derived by a similar method. It seems that in order to do this, one needs to consider a Dyson-like process not just on the eigenvalues , but on the entire triangular matrix
(or more precisely, on the moduli space formed by quotienting out the action of conjugation by unitary diagonal matrices). Unfortunately the computations seem to get somewhat complicated, and we do not present them here. </blockquote>
<p>
<p>
<p align=center><b> — 3. Mean field approximation — </b></p>
<p>
We can use the formula <a href=”#rhod”>(1)</a> for the joint distribution to heuristically derive the semicircular law, as follows.
<p>
It is intuitively plausible that the spectrum should concentrate in regions in which
is as large as possible. So it is now natural to ask how to optimise this function. Note that the expression in <a href=”#rhod”>(1)</a> is non-negative, and vanishes whenever two of the
collide, or when one or more of the
go off to infinity, so a maximum should exist away from these degenerate situations.
<p>
We may take logarithms and write <a name=”rhond”><p align=center></p>
</a> where is a constant whose exact value is not of importance to us. From a mathematical physics perspective, one can interpret <a href=”#rhond”>(11)</a> as a Hamiltonian for
particles at positions
, subject to a confining harmonic potential (these are the
terms) and a repulsive logarithmic potential between particles (these are the
terms).
<p>
Our objective is now to find a distribution of that minimises this expression.
<p>
We know from previous notes that the should have magnitude
. Let us then heuristically make a <a href=”http://en.wikipedia.org/wiki/Mean_field_theory”>mean field approximation</a>, in that we approximate the discrete spectral measure
by a continuous probability measure
. (Secretly, we know from the semi-circular law that we should be able to take
, but pretend that we do not know this fact yet.) Then we can heuristically approximate <a href=”#rhond”>(11)</a> as <p align=center>
</p>
and so we expect the distribution to minimise the functional <a name=”rhos”><p align=center>
</p>
</a>
<p>
One can compute the Euler-Lagrange equations of this functional:
<p>
<blockquote><b>Exercise 6</b> Working formally, and assuming that is a probability measure that minimises <a href=”#rhos”>(12)</a>, argue that <p align=center>
</p>
for some constant and all
in the support of
. For all
outside of the support, establish the inequality <p align=center>
</p>
</blockquote>
<p>
<p>
There are various ways we can solve this equation for ; we sketch here a complex-analytic method. Differentiating in
, we formally obtain <p align=center>
</p>
on the support of . But recall that if we let <p align=center>
</p>
be the Stieltjes transform of the probability measure , then we have <p align=center>
</p>
and <p align=center></p>
We conclude that <p align=center></p>
for all , which we rearrange as <p align=center>
</p>
This makes the function entire (it is analytic in the upper half-plane, obeys the symmetry
, and has no jump across the real line). On the other hand, as
as
,
goes to
at infinity. Applying <a href=”http://en.wikipedia.org/wiki/Liouville%27s_theorem_(complex_analysis)”>Liouville’s theorem</a>, we conclude that
is constant, thus we have the familiar equation <p align=center>
</p>
which can then be solved to obtain the semi-circular law as in <a href=”https://terrytao.wordpress.com/2010/02/02/254a-notes-4-the-semi-circular-law/”>previous notes</a>.
<p>
<blockquote><b>Remark 7</b> Recall from <a href=”https://terrytao.wordpress.com/2010/01/18/254a-notes-3b-brownian-motion-and-dyson-brownian-motion/”>Notes 3b</a> that Dyson Brownian motion can be used to derive the formula <a href=”#rhod”>(1)</a>. One can then interpret the Dyson Brownian motion proof of the semi-circular law for GUE in <a href=”https://terrytao.wordpress.com/2010/02/02/254a-notes-4-the-semi-circular-law/”>Notes 4</a> as a rigorous formalisation of the above mean field approximation heuristic argument. </blockquote>
<p>
<p>
One can perform a similar heuristic analysis for the spectral measure of a random gaussian matrix, giving a description of the limiting density:
<p>
<blockquote><b>Exercise 8</b> Using heuristic arguments similar to those above, argue that should be close to a continuous probability distribution
obeying the equation <p align=center>
</p>
on the support of , for some constant
, with the inequality <a name=”zaw”><p align=center>
</p>
</a> outside of this support. Using the <a href=”http://en.wikipedia.org/wiki/Newtonian_potential”>Newton potential</a> for the fundamental solution of the two-dimensional Laplacian
, conclude (non-rigorously) that
is equal to
on its support.
<p>
Also argue that should be rotationally symmetric. Use <a href=”#zaw”>(13)</a> and Green’s formula to argue why the support of
should be simply connected, and then conclude (again non-rigorously) the <em>circular law</em> <a name=”circular”><p align=center>
</p>
</a> </blockquote>
<p>
<p>
We will see more rigorous derivations of the circular law later in these notes, and also in subsequent notes.
<p>
<p align=center><b> — 4. Determinantal form of the GUE spectral distribution — </b></p>
<p>
In a previous section, we showed (up to constants) that the density function for the eigenvalues
of GUE was given by the formula <a href=”#rhod”>(1)</a>.
<p>
As is well known, the Vandermonde determinant that appears in <a href=”#rhod”>(1)</a> can be expressed up to sign as a determinant of an
matrix, namely the matrix
. Indeed, this determinant is clearly a polynomial of degree
in
which vanishes whenever two of the
agree, and the claim then follows from the factor theorem (and inspecting a single coefficient of the Vandermonde determinant, e.g. the
coefficient, to get the sign).
<p>
We can square the above fact (or more precisely, multiply the above matrix matrix by its adjoint) and conclude that is the determinant of the matrix <p align=center>
</p>
More generally, if are any sequence of polynomials, in which
has degree
, then we see from row operations that the determinant of <p align=center>
</p>
is a non-zero constant multiple of (with the constant depending on the leading coefficients of the
), and so the determinant of <p align=center>
</p>
is a non-zero constant multiple of . Comparing this with <a href=”#rhod”>(1)</a>, we obtain the formula <p align=center>
</p>
for some non-zero constant .
<p>
This formula is valid for any choice of polynomials of degree
. But the formula is particularly useful when we set
equal to the (normalised) <a href=”http://en.wikipedia.org/wiki/Hermite_polynomials”>Hermite polynomials</a>, defined by applying the Gram-Schmidt process in
to the polynomials
for
to yield
. (Equivalently, the
are the <a href=”http://en.wikipedia.org/wiki/Orthogonal_polynomials”>orthogonal polynomials</a> associated to the measure
.) In that case, the expression <a name=”knxy”><p align=center>
</p>
</a> becomes the integral kernel of the orthogonal projection operator in
to the span of the
, thus <p align=center>
</p>
for all , and so
is now a constant multiple of <p align=center>
</p>
<p>
The reason for working with orthogonal polynomials is that we have the trace identity <a name=”knxx”><p align=center></p>
</a> and the reproducing formula <a name=”knxy2″><p align=center></p>
</a> which reflects the identity . These two formulae have an important consequence:
<p>
<blockquote><b>Lemma 9 (Determinantal integration formula)</b> Let be any symmetric rapidly decreasing function obeying <a href=”#knxx”>(16)</a>, <a href=”#knxy2″>(17)</a>. Then for any
, one has <a name=”city”><p align=center>
</p>
</a> </blockquote>
<p>
<p>
<blockquote><b>Remark 10</b> This remarkable identity is part of the beautiful algebraic theory of <em>determinantal processes</em>, which I discuss further in <a href=”https://terrytao.wordpress.com/2009/08/23/determinantal-processes/”>this blog post</a>. </blockquote>
<p>
<p>
<em>Proof:</em> We induct on . When
this is just <a href=”#knxx”>(16)</a>. Now assume that
and that the claim has already been proven for
. We apply <a href=”http://en.wikipedia.org/wiki/Cofactor_(linear_algebra)”>cofactor expansion</a> to the bottom row of the determinant
. This gives a principal term <a name=”princ”><p align=center>
</p>
</a> plus a sum of additional terms, the
term of which is of the form <a name=”nonprinc”><p align=center>
</p>
</a> Using <a href=”#knxx”>(16)</a>, the principal term <a href=”#princ”>(19)</a> gives a contribution of to <a href=”#city”>(18)</a>. For each nonprincipal term <a href=”#nonprinc”>(20)</a>, we use the multilinearity of the determinant to absorb the
term into the
column of the matrix. Using <a href=”#knxy2″>(17)</a>, we thus see that the contribution of <a href=”#nonprinc”>(20)</a> to <a href=”#city”>(18)</a> can be simplified as <p align=center>
</p>
which after row exchange, simplifies to . The claim follows.
<p>
In particular, if we iterate the above lemma using the Fubini-Tonelli theorem, we see that <p align=center></p>
On the other hand, if we extend the probability density function symmetrically from the Weyl chamber
to all of
, its integral is also
. Since
is clearly symmetric in the
, we can thus compare constants and conclude the <a href=”http://www.ams.org/mathscinet-getitem?mr=112895″>Gaudin-Mehta formula</a> <p align=center>
</p>
More generally, if we define to be the function <a name=”rhok-1″><p align=center>
</p>
</a> then the above formula shows that is the <em>
-point correlation function</em> for the spectrum, in the sense that <a name=”rhok-2″><p align=center>
</p>
</a> <p align=center></p>
for any test function supported in the region
.
<p>
In particular, if we set , we obtain the explicit formula <p align=center>
</p>
for the expected empirical spectral measure of . Equivalently after renormalising by
, we have <a name=”mun”><p align=center>
</p>
</a>
<p>
It is thus of interest to understand the kernel better.
<p>
To do this, we begin by recalling that the functions were obtained from
by the Gram-Schmidt process. In particular, each
is orthogonal to the
for all
. This implies that
is orthogonal to
for
. On the other hand,
is a polynomial of degree
, so
must lie in the span of
for
. Combining the two facts, we see that
must be a linear combination of
, with the
coefficient being non-trivial. We rewrite this fact in the form <a name=”pii”><p align=center>
</p>
</a> for some real numbers (with
). Taking inner products with
and
we see that <a name=”low”><p align=center>
</p>
</a> and <p align=center></p>
and so <a name=”aii”><p align=center></p>
</a> (with the convention ).
<p>
We will continue the computation of later. For now, we we pick two distinct real numbers
and consider the <a href=”http://en.wikipedia.org/wiki/Wronskian”>Wronskian</a>-type expression <p align=center>
</p>
Using <a href=”#pii”>(24)</a>, <a href=”#aii”>(26)</a>, we can write this as <p align=center></p>
or in other words <p align=center></p>
<p align=center></p>
We telescope this and obtain the <em>Christoffel-Darboux formula</em> for the kernel <a href=”#knxy”>(15)</a>: <a name=”darbo”><p align=center></p>
</a> Sending using <a href=”http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule”>L’Hopital’s rule</a>, we obtain in particular that <a name=”darbo-2″><p align=center>
</p>
</a>
<p>
Inserting this into <a href=”#mun”>(23)</a>, we see that if we want to understand the expected spectral measure of GUE, we should understand the asymptotic behaviour of and the associated constants
. For this, we need to exploit the specific properties of the gaussian weight
. In particular, we have the identity <a name=”gauss”><p align=center>
</p>
</a> so upon integrating <a href=”#low”>(25)</a> by parts, we have <p align=center></p>
As has degree at most
, the first term vanishes by the orthonormal nature of the
, thus <a name=”ip”><p align=center>
</p>
</a> To compute this, let us denote the leading coefficient of as
. Then
is equal to
plus lower-order terms, and so we have <p align=center>
</p>
On the other hand, by inspecting the coefficient of <a href=”#pii”>(24)</a> we have <p align=center>
</p>
Combining the two formulae (and making the sign convention that the are always positive), we see that <p align=center>
</p>
and <p align=center></p>
Meanwhile, a direct computation shows that , and thus by induction <p align=center>
</p>
A similar method lets us compute the . Indeed, taking inner products of <a href=”#pii”>(24)</a> with
and using orthonormality we have <p align=center>
</p>
which upon integrating by parts using <a href=”#gauss”>(29)</a> gives <p align=center></p>
As is of degree strictly less than
, the integral vanishes by orthonormality, thus
. The identity <a href=”#pii”>(24)</a> thus becomes <em>Hermite recurrence relation</em> <a name=”i2″><p align=center>
</p>
</a> Another recurrence relation arises by considering the integral <p align=center></p>
On the one hand, as has degree at most
, this integral vanishes if
by orthonormality. On the other hand, integrating by parts using <a href=”#gauss”>(29)</a>, we can write the integral as <p align=center>
</p>
If , then
has degree less than
, so the integral again vanishes. Thus the integral is non-vanishing only when
. Using <a href=”#ip”>(30)</a>, we conclude that <a name=”i1″><p align=center>
</p>
</a> We can combine <a href=”#i1″>(32)</a> with <a href=”#i2″>(31)</a> to obtain the formula <p align=center></p>
which together with the initial condition gives the explicit representation <a name=”pn-form”><p align=center>
</p>
</a> for the Hermite polynomials. Thus, for instance, at one sees from Taylor expansion that <a name=”piano”><p align=center>
</p>
</a> when is even, and <a name=”piano-2″><p align=center>
</p>
</a> when is odd.
<p>
In principle, the formula <a href=”#pn-form”>(33)</a>, together with <a href=”#darbo-2″>(28)</a>, gives us an explicit description of the kernel (and thus of
, by <a href=”#mun”>(23)</a>). However, to understand the asymptotic behaviour as
, we would have to understand the asymptotic behaviour of
as
, which is not immediately discernable by inspection. However, one can obtain such asymptotics by a variety of means. We give two such methods here: a method based on ODE analysis, and a complex-analytic method, based on the <a href=”http://en.wikipedia.org/wiki/Method_of_steepest_descent”>method of steepest descent</a>.
<p>
We begin with the ODE method. Combining <a href=”#i2″>(31)</a> with <a href=”#i1″>(32)</a> we see that each polynomial obeys the <em>Hermite differential equation</em> <p align=center>
</p>
If we look instead at the Hermite functions , we obtain the differential equation <p align=center>
</p>
where is the <em>harmonic oscillator operator</em> <p align=center>
</p>
Note that the self-adjointness of here is consistent with the orthogonal nature of the
.
<p>
<blockquote><b>Exercise 11</b> Use <a href=”#knxy”>(15)</a>, <a href=”#darbo-2″>(28)</a>, <a href=”#pn-form”>(33)</a>, <a href=”#i2″>(31)</a>, <a href=”#i1″>(32)</a> to establish the identities <p align=center></p>
<p align=center></p>
and thus by <a href=”#mun”>(23)</a> <p align=center></p>
<p align=center></p>
</blockquote>
<p>
<p>
It is thus natural to look at the rescaled functions <p align=center></p>
which are orthonormal in and solve the equation <p align=center>
</p>
where is the <em>semiclassical harmonic oscillator operator</em> <p align=center>
</p>
thus <p align=center></p>
<a name=”wings”><p align=center></p>
</a>
<p>
The projection is then the spectral projection operator of
to
. According to <a href=”http://en.wikipedia.org/wiki/Semiclassical”>semi-classical analysis</a>, with
being interpreted as analogous to Planck’s constant, the operator
has symbol
, where
is the momentum operator, so the projection
is a projection to the region
of phase space, or equivalently to the region
. In the semi-classical limit
, we thus expect the diagonal
of the normalised projection
to be proportional to the projection of this region to the
variable, i.e. proportional to
. We are thus led to the semi-circular law via semi-classical analysis.
<p>
It is possible to make the above argument rigorous, but this would require developing the theory of <a href=”http://en.wikipedia.org/wiki/Microlocal_analysis”>microlocal analysis</a>, which would be overkill given that we are just dealing with an ODE rather than a PDE here (and an extremely classical ODE at that). We instead use a more basic semiclassical approximation, the <a href=”http://en.wikipedia.org/wiki/WKB_approximation”>WKB approximation</a>, which we will make rigorous using the classical <a href=”http://en.wikipedia.org/wiki/Method_of_variation_of_parameters”>method of variation of parameters</a> (one could also proceed using the closely related <em>Prüfer transformation</em>, which we will not detail here). We study the eigenfunction equation <p align=center></p>
where we think of as being small, and
as being close to
. We rewrite this as <a name=”phik”><p align=center>
</p>
</a> where , where we will only work in the “classical” region
(so
) for now.
<p>
Recall that the general solution to the constant coefficient ODE is given by
. Inspired by this, we make the ansatz <p align=center>
</p>
where is the antiderivative of
. Differentiating this, we have <p align=center>
</p>
<p align=center></p>
Because we are representing a single function by two functions
, we have the freedom to place an additional constraint on
. Following the usual variation of parameters strategy, we will use this freedom to eliminate the last two terms in the expansion of
, thus <a name=”aprime”><p align=center>
</p>
</a> We can now differentiate again and obtain <p align=center></p>
<p align=center></p>
Comparing this with <a href=”#phik”>(37)</a> we see that <p align=center></p>
Combining this with <a href=”#aprime”>(38)</a>, we obtain equations of motion for and
: <p align=center>
</p>
<p align=center></p>
We can simplify this using the <a href=”http://en.wikipedia.org/wiki/Integrating_factor”>integrating factor</a> substitution <p align=center></p>
to obtain <a name=”a1″><p align=center></p>
</a> <a name=”a2″><p align=center></p>
</a> The point of doing all these transformations is that the role of the parameter no longer manifests itself through amplitude factors, and instead only is present in a phase factor. In particular, we have <p align=center>
</p>
on any compact interval in the interior of the classical region
(where we allow implied constants to depend on
), which by <a href=”http://en.wikipedia.org/wiki/Gronwall’s_inequality”>Gronwall’s inequality</a> gives the bounds <p align=center>
</p>
on this interval . We can then insert these bounds into <a href=”#a1″>(39)</a>, <a href=”#a2″>(40)</a> again and integrate by parts (taking advantage of the non-stationary nature of
) to obtain the improved bounds <a name=”axo”><p align=center>
</p>
</a> on this interval. (More precise asymptotic expansions can be obtained by iterating this procedure, but we will not need them here.) This is already enough to get the asymptotics that we need:
<p>
<blockquote><b>Exercise 12</b> Use <a href=”#wings”>(36)</a> to Show that on any compact interval in
, the density of
is given by <p align=center>
</p>
where are as above with
and
. Combining this with <a href=”#axo”>(41)</a>, <a href=”#piano”>(34)</a>, <a href=”#piano-2″>(35)</a>, and Stirling’s formula, conclude that
converges in the vague topology to the semicircular law
. (Note that once one gets convergence inside
, the convergence outside of
can be obtained for free since
and
are both probability measures. </blockquote>
<p>
<p>
We now sketch out the approach using the <a href=”http://en.wikipedia.org/wiki/Method_of_steepest_descent”>method of steepest descent</a>. The starting point is the Fourier inversion formula <p align=center></p>
which upon repeated differentiation gives <p align=center></p>
and thus by <a href=”#pn-form”>(33)</a> <p align=center></p>
and thus <p align=center></p>
where <p align=center></p>
where we use a suitable branch of the complex logarithm to handle the case of negative .
<p>
The idea of the principle of steepest descent is to shift the contour of integration to where the real part of is as small as possible. For this, it turns out that the stationary points of
play a crucial role. A brief calculation using the quadratic formula shows that there are two such stationary points, at <p align=center>
</p>
When ,
is purely imaginary at these stationary points, while for
the real part of
is negative at both points. One then draws a contour through these two stationary points in such a way that near each such point, the imaginary part of
is kept fixed, which keeps oscillation to a minimum and allows the real part to decay as steeply as possible (which explains the name of the method). After a certain tedious amount of computation, one obtains the same type of asymptotics for
that were obtained by the ODE method when
(and exponentially decaying estimates for
).
<p>
<blockquote><b>Exercise 13</b> Let ,
be functions which are analytic near a complex number
, with
and
. Let
be a small number, and let
be the line segment
, where
is a complex phase such that
is a negative real. Show that for
sufficiently small, one has <p align=center>
</p>
as . This is the basic estimate behind the method of steepest descent; readers who are also familiar with the <a href=”http://en.wikipedia.org/wiki/Stationary_phase_approximation”>method of stationary phase</a> may see a close parallel. </blockquote>
<p>
<p>
<blockquote><b>Remark 14</b> The method of steepest descent requires an explicit representation of the orthogonal polynomials as contour integrals, and as such is largely restricted to the classical orthogonal polynomials (such as the Hermite polynomials). However, there is a non-linear generalisation of the method of steepest descent developed by Deift and Zhou, in which one solves a matrix Riemann-Hilbert problem rather than a contour integral; see <a href=”http://www.ams.org/mathscinet-getitem?mr=1677884″>this book by Deift</a> for details. Using these sorts of tools, one can generalise much of the above theory to the spectral distribution of -conjugation-invariant discussed in Remark <a href=”#daft”>2</a>, with the theory of Hermite polynomials being replaced by the more general theory of orthogonal polynomials; this is discussed in the above book of Deift, as well as the more recent <a href=”http://www.ams.org/mathscinet-getitem?mr=2514781″>book of Deift and Gioev</a>. </blockquote>
<p>
<p>
The computations performed above for the diagonal kernel can be summarised by the asymptotic <p align=center>
</p>
whenever is fixed and
, and
is the semi-circular law distribution. It is reasonably straightforward to generalise these asymptotics to the off-diagonal case as well, obtaining the more general result <a name=”kn”><p align=center>
</p>
</a> for fixed and
, where
is the <em>Dyson sine kernel</em> <p align=center>
</p>
In the language of semi-classical analysis, what is going on here is that the rescaling in the left-hand side of <a href=”#kn”>(42)</a> is transforming the phase space region to the region
in the limit
, and the projection to the latter region is given by the Dyson sine kernel. A formal proof of <a href=”#kn”>(42)</a> can be given by using either the ODE method or the steepest descent method to obtain asymptotics for Hermite polynomials, and thence (via the Christoffel-Darboux formula) to asymptotics for
; we do not give the details here, but see for instance the recent book of Anderson, Guionnet, and Zeitouni.
<p>
From <a href=”#kn”>(42)</a> and <a href=”#rhok-1″>(21)</a>, <a href=”#rhok-2″>(22)</a> we obtain the asymptotic formula <p align=center></p>
<p align=center></p>
<p align=center></p>
for the local statistics of eigenvalues. By means of further algebraic manipulations (using the general theory of determinantal processes), this allows one to control such quantities as the distribution of eigenvalue gaps near , normalised at the scale
, which is the average size of these gaps as predicted by the semicircular law. For instance, for any
, one can show (basically by the above formulae combined with the inclusion-exclusion principle) that the proportion of eigenvalues
with normalised gap
less than
converges as
to
, where
is defined by the formula
, and
is the integral operator with kernel
(this operator can be verified to be <a href=”http://en.wikipedia.org/wiki/Trace-class_operator”>trace class</a>, so the determinant can be defined in a <a href=”http://en.wikipedia.org/wiki/Fredholm_determinant”>Fredholm sense</a>). See for instance this <a href=”http://www.ams.org/mathscinet-getitem?mr=2129906″>book of Mehta</a> (and my <a href=”https://terrytao.wordpress.com/2009/08/23/determinantal-processes/”>blog post on determinantal processes</a> describe a finitary version of the inclusion-exclusion argument used to obtain such a result).
<p>
<blockquote><b>Remark 15</b> One can also analyse the distribution of the eigenvalues at the edge of the spectrum, i.e. close to . This ultimately hinges on understanding the behaviour of the projection
near the corners
of the phase space region
, or of the Hermite polynomials
for
close to
. For instance, by using steepest descent methods, one can show that <p align=center>
</p>
as for any fixed
, where
is the <a href=”http://en.wikipedia.org/wiki/Airy_function”>Airy function</a> <p align=center>
</p>
This asymptotic and the Christoffel-Darboux formula then gives the asymptotic <a name=”nkn”><p align=center></p>
</a> for any fixed , where
is the <em>Airy kernel</em> <p align=center>
</p>
(<em>Aside</em>: Semiclassical heuristics suggest that the rescaled kernel <a href=”#nkn”>(43)</a> should correspond to projection to the parabolic region of phase space , but I do not know of a connection between this region and the Airy kernel; I am not sure whether semiclassical heuristics are in fact valid at this scaling regime. On the other hand, these heuristics do explain the emergence of the length scale
that emerges in <a href=”#nkn”>(43)</a>, as this is the smallest scale at the edge which occupies a region in
consistent with the Heisenberg uncertainty principle.) This then gives an asymptotic description of the largest eigenvalues of a GUE matrix, which cluster in the region
. For instance, one can use the above asymptotics to show that the largest eigenvalue
of a GUE matrix obeys the <a href=”http://en.wikipedia.org/wiki/Tracy%E2%80%93Widom_distribution”>Tracy-Widom law</a> <p align=center>
</p>
for any fixed , where
is the integral operator with kernel
. See for instance the recent book of Anderson, Guionnet, and Zeitouni. </blockquote>
<p>
<p>
<p align=center><b> — 5. Determinantal form of the gaussian matrix distribution — </b></p>
<p>
One can perform an analogous analysis of the joint distribution function <a href=”#ginibre-gaussian”>(10)</a> of gaussian random matrices. Indeed, given any family of polynomials, with each
of degree
, much the same arguments as before show that <a href=”#ginibre-gaussian”>(10)</a> is equal to a constant multiple of <p align=center>
</p>
One can then select to be orthonormal in
. Actually in this case, the polynomials are very simple, being given explicitly by the formula <p align=center>
</p>
<p>
<blockquote><b>Exercise 16</b> Verify that the are indeed orthonormal, and then conclude that <a href=”#ginibre-gaussian”>(10)</a> is equal to
, where <p align=center>
</p>
Conclude further that the -point correlation functions
are given as <p align=center>
</p>
</blockquote>
<p>
<p>
<blockquote><b>Exercise 17</b> Show that as , one has <p align=center>
</p>
and deduce that the expected spectral measure converges vaguely to the circular measure
; this is a special case of the <em>circular law</em>. </blockquote>
<p>
<p>
<blockquote><b>Exercise 18</b> For any and
, show that <p align=center>
</p>
as . This formula (in principle, at least) describes the asymptotic local
-point correlation functions of the spectrum of gaussian matrices. </blockquote>
<p>
<p>
<blockquote><b>Remark 19</b> One can use the above formulae as the starting point for many other computations on the spectrum of random gaussian matrices; to give just one example, one can show that expected number of eigenvalues which are real is of the order of (see <a href=”http://www.ams.org/mathscinet-getitem?mr=1437734″>this paper of Edelman</a> for more precise results of this nature). It remains a challenge to extend these results to more general ensembles than the gaussian ensemble. </blockquote>
<p>
<p>
When solving the initial value problem to an ordinary differential equation, such as
where is the unknown solution (taking values in some finite-dimensional vector space
),
is the initial datum, and
is some nonlinear function (which we will take to be smooth for sake of argument), then one can construct a solution locally in time via the Picard iteration method. There are two basic ideas. The first is to use the fundamental theorem of calculus to rewrite the initial value problem (1) as the problem of solving an integral equation,
The second idea is to solve this integral equation by the contraction mapping theorem, showing that the integral operator defined by
is a contraction on a suitable complete metric space (e.g. a closed ball in the function space ), and thus has a unique fixed point in this space. This method works as long as one only seeks to construct local solutions (for time
in
for sufficiently small
), but the solutions constructed have a number of very good properties, including
- Existence: A solution
exists in the space
(and even in
) for
sufficiently small.
- Uniqueness: There is at most one solution
to the initial value problem in the space
(or in smoother spaces, such as
). (For solutions in the weaker space
we use the integral formulation (2) to define the solution concept.)
- Lipschitz continuous dependence on the data: If
is a sequence of initial data converging to
, then the associated solutions
converge uniformly to
on
(possibly after shrinking
slightly). In fact we have the Lipschitz bound
for
large enough and
, where
is an absolute constant.
This package of properties is referred to as (Lipschitz) wellposedness.
This method extends to certain partial differential equations, particularly those of a semilinear nature (linear except for lower order nonlinear terms). For instance, if trying to solve an initial value problem of the form
where now takes values in a function space
(e.g. a Sobolev space
),
is an initial datum,
is some (differential) operator (independent of
) that is (densely) defined on
, and
is a nonlinearity which is also (densely) defined on
, then (formally, at least) one can solve this problem by using Duhamel’s formula to convert the problem to that of solving an integral equation
and one can then hope to show that the associated nonlinear integral operator
is a contraction in a subset of a suitably chosen function space.
This method turns out to work surprisingly well for many semilinear partial differential equations, and in particular for semilinear parabolic, semilinear dispersive, and semilinear wave equations. As in the ODE case, when the method works, it usually gives the entire package of Lipschitz well-posedness: existence, uniqueness, and Lipschitz continuous dependence on the initial data, for short times at least.
However, when one moves from semilinear initial value problems to quasilinear initial value problems such as
in which the top order operator now depends on the solution
itself, then the nature of well-posedness changes; one can still hope to obtain (local) existence and uniqueness, and even continuous dependence on the data, but one usually is forced to give up Lipschitz continuous dependence at the highest available regularity (though one can often recover it at lower regularities). As a consequence, the Picard iteration method is not directly suitable for constructing solutions to such equations.
One can already see this phenomenon with a very simple equation, namely the one-dimensional constant-velocity transport equation
where we consider as part of the initial data. (If one wishes, one could view this equation as a rather trivial example of a system.
to emphasis this viewpoint, but this would be somewhat idiosyncratic.) One can solve this equation explicitly of course to get the solution
In particular, if we look at the solution just at time for simplicity, we have
Now let us see how this solution depends on the parameter
. One can ask whether this dependence is Lipschitz in
, in some function space
:
for some finite . But using the Newton approximation
we see that we should only expect such a bound when (and its translates) lie in
. Thus, we see a loss of derivatives phenomenon with regard to Lipschitz well-posedness; if the initial data
is in some regularity space, say
, then one only obtains Lipschitz dependence on
in a lower regularity space such as
.
We have just seen that if all one knows about the initial data is that it is bounded in a function space
, then one usually cannot hope to make the dependence of
on the velocity parameter
Lipschitz continuous. Indeed, one cannot even make it continuous uniformly in
. Given two values of
that are close together, e.g.
and
, and a reasonable function space
(e.g. a Sobolev space
, or a classical regularity space
) one can easily cook up a function
that is bounded in
but whose two solutions
and
separate in the
norm at time
, simply by choosing
to be supported on an interval of width
.
(Part of the problem here is that using a subtractive method to determine the distance between two solutions
is not a physically natural operation when transport mechanisms are present that could cause the key features of
(such as singularities) to be situated in slightly different locations. In such cases, the correct notion of distance may need to take transport into account, e.g. by using metrics of Wasserstein type.)
On the other hand, one still has non-uniform continuous dependence on the initial parameters: if lies in some reasonable function space
, then the map
is continuous in the
topology, even if it is not uniformly continuous with respect to
. (More succinctly: translation is a continuous but not uniformly continuous operation in most function spaces.) The reason for this is that we already have established this continuity in the case when
is so smooth that an additional derivative of
lies in
; and such smooth functions tend to be dense in the original space
, so the general case can then be established by a limiting argument, approximating a general function in
by a smoother function. We then see that the non-uniformity ultimately comes from the fact that a given function in
may be arbitrarily rough (or concentrated at an arbitrarily fine scale), and so the ability to approximate such a function by a smooth one can be arbitrarily poor.
In many quasilinear PDE, one often encounters qualitatively similar phenomena. Namely, one often has local well-posedness in sufficiently smooth function spaces (so that if the initial data lies in
, then for short times one has existence, uniqueness, and continuous dependence on the data in the
topology), but Lipschitz or uniform continuity in the
topology is usually false. However, if the data (and solution) is known to be in a high-regularity function space
, one can often recover Lipschitz or uniform continuity in a lower-regularity topology.
Because the continuous dependence on the data in quasilinear equations is necessarily non-uniform, the arguments needed to establish this dependence can be remarkably delicate. As with the simple example of the transport equation, the key is to approximate a rough solution by a smooth solution first, by smoothing out the data (this is the non-uniform step, as it depends on the physical scale (or wavelength) that the data features are located). But for quasilinear equations, keeping the rough and smooth solution together can require a little juggling of function space norms, in particular playing the low-frequency nature of the smooth solution against the high-frequency nature of the residual between the rough and smooth solutions.
Below the fold I will illustrate this phenomenon with one of the simplest quasilinear equations, namely the initial value problem for the inviscid Burgers’ equation
which is a modification of the transport equation (3) in which the velocity is no longer a parameter, but now depends (and is, in this case, actually equal to) the solution. To avoid technicalities we will work only with the classical function spaces
of
times continuously differentiable functions, though one can certainly work with other spaces (such as Sobolev spaces) by exploiting the Sobolev embedding theorem. To avoid having to distinguish continuity from uniform continuity, we shall work in a compact domain by assuming periodicity in space, thus for instance restricting
to the unit circle
.
This discussion is inspired by this survey article of Nikolay Tzvetkov, which further explores the distinction between well-posedness and ill-posedness in both semilinear and quasilinear settings.
A celebrated theorem of Gromov reads:
Theorem 1 Every finitely generated group of polynomial growth is virtually nilpotent.
The original proof of Gromov’s theorem was quite non-elementary, using an infinitary limit and exploiting the work surrounding the solution to Hilbert’s fifth problem. More recently, Kleiner provided a proof which was more elementary (based in large part on an earlier paper of Colding and Minicozzi), though still not entirely so, relying in part on (a weak form of the) Tits alternative and also on an ultrafilter argument of Korevaar-Schoen and Mok. I discuss Kleiner’s argument more in this previous blog post.
Recently, Yehuda Shalom and I established a quantitative version of Gromov’s theorem by making every component of Kleiner’s argument finitary. Technically, this provides a fully elementary proof of Gromov’s theorem (we do use one infinitary limit to simplify the argument a little bit, but this is not truly necessary); however, because we were trying to quantify as much of the result as possible, the argument became quite lengthy.
In this note I want to record a short version of the argument of Yehuda and myself which is not quantitative, but gives a self-contained and largely elementary proof of Gromov’s theorem. The argument is not too far from the Kleiner argument, but has a number of simplifications at various places. In a number of places, there was a choice to take between a short argument that was “inefficient” in the sense that it did not lead to a good quantitative bound, and a lengthier argument which led to better quantitative bounds. I have opted for the former in all such cases.
Yehuda and I plan to write a short paper containing this argument as well as some additional material, but due to some interest in this particular proof, we are detailing it here on this blog in advance of our paper.
Note: this post will assume familiarity with the basic terminology of group theory, and will move somewhat quickly through the technical details.
Ben Green and I have just uploaded to the arXiv a short (six-page) paper “Yet another proof of Szemeredi’s theorem“, submitted to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we put in print a folklore observation, namely that the inverse conjecture for the Gowers norm, together with the density increment argument, easily implies Szemerédi’s famous theorem on arithmetic progressions. This is unsurprising, given that Gowers’ proof of Szemerédi’s theorem proceeds through a weaker version of the inverse conjecture and a density increment argument, and also given that it is possible to derive Szemerédi’s theorem from knowledge of the characteristic factor for multiple recurrence (the ergodic theory analogue of the inverse conjecture, first established by Host and Kra), as was done by Bergelson, Leibman, and Lesigne (and also implicitly in the earlier paper of Bergelson, Host, and Kra); but to our knowledge the exact derivation of Szemerédi’s theorem from the inverse conjecture was not in the literature. Ordinarily this type of folklore might be considered too trifling (and too well known among experts in the field) to publish; but we felt that the Szemerédi birthday conference provided a natural venue for this particular observation.
The key point is that one can show (by an elementary argument relying primarily on an induction-on-dimension argument and the Weyl recurrence theorem, i.e. that given any real $\alpha$ and any integer $s \geq 1$, the expression $\alpha n^s$ gets arbitrarily close to an integer as $n$ varies) that given a (polynomial) nilsequence $n \mapsto F(g(n)\Gamma)$, one can subdivide any long arithmetic progression (such as $[N] = \{1,\ldots,N\}$) into a number of medium-sized progressions, where the nilsequence is nearly constant on each progression. As a consequence of this and the inverse conjecture for the Gowers norm, if a set has no arithmetic progressions, then it must have an elevated density on a subprogression; iterating this observation as per the usual density-increment argument as introduced long ago by Roth, one obtains the claim. (This is very close to the scheme of Gowers’ proof.)
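As an aside, the Weyl recurrence phenomenon invoked above is easy to observe numerically; the following quick sketch (an illustration of mine, with the arbitrary choices $\alpha = \sqrt{2}$ and $s = 2$) records successively better approximations of $\alpha n^s$ to an integer.

    import math

    # Weyl recurrence: for any real alpha and any s >= 1, alpha * n**s gets
    # arbitrarily close to an integer.  Brute-force search for alpha = sqrt(2),
    # s = 2; n is capped so alpha * n**2 stays well within double precision.

    alpha, s = math.sqrt(2), 2
    best = 1.0
    for n in range(1, 20001):
        value = alpha * n ** s
        dist = abs(value - round(value))  # distance to the nearest integer
        if dist < best:
            best = dist
            print(f"n = {n:6d}   ||alpha n^{s}|| = {best:.3e}")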
Technically, one might call this the shortest proof of Szemerédi’s theorem in the literature (and would be something like the sixteenth such genuinely distinct proof, by our count), but that would be cheating quite a bit, primarily due to the fact that it assumes the inverse conjecture for the Gowers norm, our current proof of which is checking in at about 100 pages…
Ben Green and I have just uploaded to the arXiv our paper “An arithmetic regularity lemma, an associated counting lemma, and applications“, submitted (a little behind schedule) to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we describe the general-degree version of the arithmetic regularity lemma, which can be viewed as the counterpart of the Szemerédi regularity lemma, in which the object being regularised is a function on a discrete interval $[N] = \{1,\ldots,N\}$ rather than a graph, and the type of patterns one wishes to count are additive patterns (such as arithmetic progressions $n, n+d, \ldots, n+(k-1)d$) rather than subgraphs. Very roughly speaking, this regularity lemma asserts that all such functions can be decomposed as a degree $\leq s$ nilsequence (or more precisely, a variant of a nilsequence that we call a virtual irrational nilsequence), plus a small error, plus a third error which is extremely tiny in the Gowers uniformity norm $U^{s+1}[N]$. In principle, at least, the latter two errors can be readily discarded in applications, so that the regularity lemma reduces many questions in additive combinatorics to questions concerning (virtual irrational) nilsequences. To work with these nilsequences, we also establish an arithmetic counting lemma that gives an integral formula for counting additive patterns weighted by such nilsequences.
The regularity lemma is a manifestation of the “dichotomy between structure and randomness”, as discussed for instance in my ICM article or FOCS article. In the degree $1$ case, this result is essentially due to Green. It is powered by the inverse conjecture for the Gowers norms, which we and Tamar Ziegler have recently established (paper to be forthcoming shortly; the $U^4$ case of our argument is discussed here). The counting lemma is established through the quantitative equidistribution theory of nilmanifolds, which Ben and I set out in this paper.
The regularity and counting lemmas are designed to be used together, and in the paper we give three applications of this combination. Firstly, we give a new proof of Szemerédi’s theorem, which proceeds via an energy increment argument rather than a density increment one. Secondly, we establish a conjecture of Bergelson, Host, and Kra, namely that if $A \subset [N]$ has density $\delta$, $1 \leq k \leq 4$, and $\varepsilon > 0$, then there exist $\gg_{\delta,\varepsilon} N$ shifts $r$ for which $A$ contains at least $(\delta^k - \varepsilon)N$ arithmetic progressions of length $k$ of spacing $r$. (The $k=3$ case of this conjecture was established earlier by Green; the $k \geq 5$ case is false, as was shown by Ruzsa in an appendix to the Bergelson-Host-Kra paper.) Thirdly, we establish a variant of a recent result of Gowers-Wolf, showing that the true complexity of a system of linear forms over the integers indeed matches the conjectured value predicted in their first paper.
In all three applications, the scheme of proof can be described as follows:
- Apply the arithmetic regularity lemma, and decompose a relevant function $f$ into three pieces, $f_{nil} + f_{sml} + f_{unf}$.
- The uniform part $f_{unf}$ is so tiny in the Gowers uniformity norm that its contribution can be easily dealt with by an appropriate “generalised von Neumann theorem”.
- The contribution of the (virtual, irrational) nilsequence $f_{nil}$ can be controlled using the arithmetic counting lemma.
- Finally, one needs to check that the contribution of the small error $f_{sml}$ does not overwhelm the main term $f_{nil}$. This is the trickiest bit; one often needs to use the counting lemma again to show that one can find a set of arithmetic patterns for $f_{nil}$ that is sufficiently “equidistributed” that it is not impacted by the small error.
To illustrate the last point, let us give the following example. Suppose we have a set $A \subset [N]$ of some positive density (say $|A| = N/2$) and we have managed to prove that $A$ contains a reasonable number of arithmetic progressions of length $4$ (say), e.g. it contains at least $\varepsilon N^2$ such progressions. Now we perturb $A$ by deleting a small number, say $\delta N$, elements from $A$ to create a new set $A'$. Can we still conclude that the new set $A'$ contains any arithmetic progressions of length $4$?

Unfortunately, the answer could be no; conceivably, all of the arithmetic progressions in $A$ could be wiped out by the $\delta N$ elements removed from $A$, since each such element of $A$ could be associated with up to $N$ (or even $4N$) arithmetic progressions in $A$.
But suppose we knew that the arithmetic progressions in $A$ were equidistributed, in the sense that each element in $A$ belonged to the same number of such arithmetic progressions, namely about $8 \varepsilon N$ of them (as one expects if the $\varepsilon N^2$ progressions, each containing four elements, are spread evenly among the $N/2$ elements of $A$). Then each element deleted from $A$ only removes about $8 \varepsilon N$ progressions, and so one can safely remove $\delta N$ elements from $A$ for any $\delta < 1/8$ and still retain some arithmetic progressions. The same argument works if the arithmetic progressions are only approximately equidistributed, in the sense that the number of progressions that a given element of $A$ belongs to concentrates sharply around its mean (for instance, by having a small variance), provided that the equidistribution is sufficiently strong. Fortunately, the arithmetic regularity and counting lemmas are designed to give precisely such a strong equidistribution result.
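To make this concrete, here is a toy computation (an illustration of mine, using length-three progressions and a random set so that the counts are easy to inspect) showing that in a generic set the number of progressions through each element concentrates around its mean, so that even an adversarial deletion of a few elements destroys only a small fraction of the progressions.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 3000
    # Random set of density about 1/2 in {0, ..., N-1}.
    A = set(int(i) for i in np.flatnonzero(rng.random(N) < 0.5))

    # For each x in A, count the 3-term progressions of A containing x.
    count = {x: 0 for x in A}
    total = 0
    for a in A:
        for r in range(1, (N - 1 - a) // 2 + 1):
            if a + r in A and a + 2 * r in A:
                total += 1
                for x in (a, a + r, a + 2 * r):
                    count[x] += 1

    per_elem = np.array(list(count.values()))
    print(f"total 3-term APs: {total}; per-element count: "
          f"mean {per_elem.mean():.0f}, std {per_elem.std():.0f}")

    # Adversarially delete the delta*N elements lying in the most progressions;
    # a union bound caps how many progressions can be destroyed.
    delta = 0.01
    deleted = sorted(count, key=count.get, reverse=True)[:int(delta * N)]
    print(f"deleting {len(deleted)} elements destroys at most "
          f"{sum(count[x] for x in deleted)} of the {total} progressions")

With these parameters one should find that the union bound in the last line is far smaller than the total number of progressions, which is exactly the equidistribution phenomenon described above.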
A succinct (but slightly inaccurate) summation of the regularity+counting lemma strategy would be that in order to solve a problem in additive combinatorics, it “suffices to check it for nilsequences”. But this should come with a caveat, due to the issue of the small error above; in addition to checking it for nilsequences, the answer in the nilsequence case must be sufficiently “dispersed” in a suitable sense, so that it can survive the addition of a small (but not completely negligible) perturbation.
One last “production note”. Like our previous paper with Emmanuel Breuillard, we used Subversion to write this paper, which turned out to be a significant efficiency boost as we could work on different parts of the paper simultaneously (this was particularly important this time round as the paper was somewhat lengthy and complicated, and there was a submission deadline). When doing so, we found it convenient to split the paper into a dozen or so pieces (one for each section of the paper, basically) in order to avoid conflicts, and to help coordinate the writing process. I’m also looking into git (a more advanced version control system), and am planning to use it for another of my joint projects; I hope to be able to comment on the relative strengths of these systems (and of plain old email) in the future.
As an experiment, I’ve recently started using Google Buzz as an outlet for various things I wanted to say or share, but which were too insubstantial to merit a mention on this blog. (In turn, one of the reasons for starting this blog was to share various bits of mathematics which were too insubstantial for a published paper. Presumably the process becomes degenerate if iterated any further…) I don’t know how frequently I will be updating, though.
In the foundations of modern probability, as laid out by Kolmogorov, the basic objects of study are constructed in the following order:
- Firstly, one selects a sample space $\Omega$, whose elements $\omega$ represent all the possible states that one’s stochastic system could be in.
- Then, one selects a $\sigma$-algebra ${\mathcal B}$ of events $E$ (modeled by subsets of $\Omega$), and assigns each of these events a probability ${\bf P}(E) \in [0,1]$ in a countably additive manner, so that the entire sample space has probability $1$.
- Finally, one builds (commutative) algebras of random variables $X$ (such as complex-valued random variables, modeled by measurable functions from $\Omega$ to ${\bf C}$), and (assuming suitable integrability or moment conditions) one can assign expectations ${\bf E} X$ to each such random variable.
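For concreteness, the following toy snippet (an illustration of mine; the $\sigma$-algebra here is simply the full power set of a four-element sample space) walks through the three layers of this hierarchy for two fair coin flips.

    from itertools import product
    from fractions import Fraction

    # Two fair coin flips, built up in the Kolmogorov order: sample space,
    # events and their probabilities, then random variables and expectations.

    Omega = list(product(["H", "T"], repeat=2))         # the sample space
    P = lambda E: Fraction(len(E), len(Omega))          # uniform counting measure

    first_is_heads = {w for w in Omega if w[0] == "H"}  # an event (a subset of Omega)
    print(P(first_is_heads))                            # 1/2

    X = lambda w: w.count("H")                          # a random variable on Omega
    E = lambda X: sum(Fraction(1, len(Omega)) * X(w) for w in Omega)
    print(E(X))                                         # its expectation: 1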
In measure theory, the underlying measure space $\Omega$ plays a prominent foundational role, with the measurable sets and measurable functions (the analogues of the events and the random variables) always being viewed as somehow being attached to that space. In probability theory, in contrast, it is the events and their probabilities that are viewed as being fundamental, with the sample space $\Omega$ being abstracted away as much as possible, and with the random variables and expectations being viewed as derived concepts. See Notes 0 for further discussion of this philosophy.
However, it is possible to take the abstraction process one step further, and view the algebra of random variables and their expectations as being the foundational concept, ignoring the presence of the original sample space, the algebra of events, and the probability measure.
There are two reasons for wanting to shed (or abstract away) these previously foundational structures. Firstly, it allows one to more easily take certain types of limits, such as the large $n$ limit $n \to \infty$ when considering $n \times n$ random matrices, because quantities built from the algebra of random variables and their expectations, such as the normalised moments $\frac{1}{n} {\bf E} \operatorname{tr} M_n^k$ of random matrices, tend to be quite stable in the large $n$ limit (as we have seen in previous notes), even as the sample space and event space varies with $n$. (This theme of using abstraction to facilitate the taking of the large $n$ limit also shows up in the application of ergodic theory to combinatorics via the correspondence principle; see this previous blog post for further discussion.)
Secondly, this abstract formalism allows one to generalise the classical, commutative theory of probability to the more general theory of non-commutative probability theory, which does not have a classical underlying sample space or event space, but is instead built upon a (possibly) non-commutative algebra of random variables (or “observables”) and their expectations (or “traces”). This more general formalism not only encompasses classical probability, but also spectral theory (with matrices or operators taking the role of random variables, and the trace taking the role of expectation), random matrix theory (which can be viewed as a natural blend of classical probability and spectral theory), and quantum mechanics (with physical observables taking the role of random variables, and their expected value on a given quantum state being the expectation). It is also part of a more general “non-commutative way of thinking” (of which non-commutative geometry is the most prominent example), in which a space is understood primarily in terms of the ring or algebra of functions (or function-like objects, such as sections of bundles) placed on top of that space, and then the space itself is largely abstracted away in order to allow the algebraic structures to become less commutative. In short, the idea is to make algebra the foundation of the theory, as opposed to other possible choices of foundations such as sets, measures, categories, etc..
[Note that this foundational preference is to some extent a metamathematical one rather than a mathematical one; in many cases it is possible to rewrite the theory in a mathematically equivalent form so that some other mathematical structure becomes designated as the foundational one, much as probability theory can be equivalently formulated as the measure theory of probability measures. However, this does not negate the fact that a different choice of foundations can lead to a different way of thinking about the subject, and thus to a different set of questions and a different set of proofs and solutions. Thus it is often of value to understand multiple foundational perspectives at once, to get a truly stereoscopic view of the subject.]
It turns out that non-commutative probability can be modeled using operator algebras such as $C^*$-algebras, von Neumann algebras, or algebras of bounded operators on a Hilbert space, with the latter being accomplished via the Gelfand-Naimark-Segal construction. We will discuss some of these models here, but just as probability theory seeks to abstract away its measure-theoretic models, the philosophy of non-commutative probability is also to downplay these operator algebraic models once some foundational issues are settled.
When one generalises the set of structures in one’s theory, for instance from the commutative setting to the non-commutative setting, the notion of what it means for a structure to be “universal”, “free”, or “independent” can change. The most familiar example of this comes from group theory. If one restricts attention to the category of abelian groups, then the “freest” object one can generate from two generators $a, b$ is the free abelian group of commutative words $a^n b^m$ with $n, m \in {\bf Z}$, which is isomorphic to the group ${\bf Z}^2$. If however one generalises to the non-commutative setting of arbitrary groups, then the “freest” object that can now be generated from two generators $a, b$ is the free group ${\bf F}_2$ of non-commutative words $a^{n_1} b^{m_1} \ldots a^{n_k} b^{m_k}$ with $n_1, m_1, \ldots, n_k, m_k \in {\bf Z}$, which is a significantly larger extension of the free abelian group ${\bf Z}^2$.
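The difference in size between these two “freest” objects can be made concrete by counting balls with respect to the generating set $\{a^{\pm 1}, b^{\pm 1}\}$: the ball of radius $r$ in ${\bf Z}^2$ has $2r^2 + 2r + 1$ elements, while the ball in ${\bf F}_2$ grows exponentially. Here is a quick computation of mine illustrating this (and, incidentally, the polynomial-versus-exponential growth dichotomy underlying Gromov's theorem discussed earlier in this archive):

    # Ball of radius r in Z^2 (commutative words a^n b^m with |n| + |m| <= r)
    # versus the ball of radius r in the free group F_2 (reduced words of
    # length <= r; there are exactly 4 * 3^(k-1) reduced words of length k).

    def ball_abelian(r):
        return sum(1 for n in range(-r, r + 1) for m in range(-r, r + 1)
                   if abs(n) + abs(m) <= r)

    def ball_free(r):
        return 1 + sum(4 * 3 ** (k - 1) for k in range(1, r + 1))

    for r in range(1, 9):
        print(r, ball_abelian(r), ball_free(r))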
Similarly, when generalising classical probability theory to non-commutative probability theory, the notion of what it means for two or more random variables to be independent changes. In the classical (commutative) setting, two (bounded, real-valued) random variables $X, Y$ are independent if one has

$${\bf E} f(X) g(Y) = 0$$

whenever $f, g: {\bf R} \to {\bf R}$ are well-behaved functions (such as polynomials) such that ${\bf E} f(X)$, ${\bf E} g(Y)$ both vanish. In the non-commutative setting, one can generalise the above definition to two commuting bounded self-adjoint variables; this concept is useful for instance in quantum probability, which is an abstraction of the theory of observables in quantum mechanics. But for two (bounded, self-adjoint) non-commutative random variables $X, Y$, the notion of classical independence no longer applies. As a substitute, one can instead consider the notion of being freely independent (or free for short), which means that

$${\bf E} f_1(X) g_1(Y) \ldots f_k(X) g_k(Y) = 0$$

whenever $f_1, g_1, \ldots, f_k, g_k: {\bf R} \to {\bf R}$ are well-behaved functions such that all of ${\bf E} f_1(X), {\bf E} g_1(Y), \ldots, {\bf E} f_k(X), {\bf E} g_k(Y)$ vanish.
The concept of free independence was introduced by Voiculescu, and its study is now known as the subject of free probability. We will not attempt a systematic survey of this subject here; for this, we refer the reader to the surveys of Speicher and of Biane. Instead, we shall just discuss a small number of topics in this area, only to give the flavour of the subject.
The significance of free probability to random matrix theory lies in the fundamental observation that random matrices which are independent in the classical sense also tend to be independent in the free probability sense, in the large $n$ limit $n \to \infty$. (This is only possible because of the highly non-commutative nature of these matrices; as we shall see, it is not possible for non-trivial commuting independent random variables to be freely independent.) Because of this, many tedious computations in random matrix theory, particularly those of an algebraic or enumerative combinatorial nature, can be done more quickly and systematically by using the framework of free probability, which by design is optimised for algebraic tasks rather than analytical ones.
Much as free groups are in some sense “maximally non-commutative”, freely independent random variables are about as far from being commuting as possible. For instance, if $X, Y$ are freely independent and of expectation zero, then ${\bf E} XYXY$ vanishes, but ${\bf E} XXYY$ instead factors as $({\bf E} X^2)({\bf E} Y^2)$. As a consequence, the behaviour of freely independent random variables can be quite different from the behaviour of their classically independent commuting counterparts. Nevertheless there is a remarkably strong analogy between the two types of independence, in that results which are true in the classically independent case often have an interesting analogue in the freely independent setting. For instance, the central limit theorem (Notes 2) for averages of classically independent random variables, which roughly speaking asserts that such averages become gaussian in the large $n$ limit, has an analogue for averages of freely independent variables, the free central limit theorem, which roughly speaking asserts that such averages become semicircular in the large $n$ limit. One can then use this theorem to provide yet another proof of Wigner’s semicircle law (Notes 4).
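These assertions are easy to test numerically against independent GUE matrices, which (as discussed above) become freely independent in the large $n$ limit. The following sketch of mine (the helper gue and its normalisation, chosen so that each matrix is approximately semicircular with unit variance, are my own conventions) checks that $\frac{1}{n} \operatorname{tr}(XYXY)$ tends to zero while $\frac{1}{n} \operatorname{tr}(X^2 Y^2)$ matches the product $(\frac{1}{n} \operatorname{tr} X^2)(\frac{1}{n} \operatorname{tr} Y^2)$.

    import numpy as np

    # Two independent GUE matrices, normalised so that each is approximately
    # semicircular with unit variance; freeness predicts tau(XYXY) -> 0 and
    # tau(X^2 Y^2) -> tau(X^2) * tau(Y^2) as n -> infinity.

    def gue(n, rng):
        G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        return (G + G.conj().T) / np.sqrt(2 * n)

    rng = np.random.default_rng(1)
    for n in [50, 200, 800]:
        X, Y = gue(n, rng), gue(n, rng)
        tau = lambda M: np.trace(M).real / n
        print(f"n = {n:4d}  tau(XYXY) = {tau(X @ Y @ X @ Y):+.4f}  "
              f"tau(X^2 Y^2) = {tau(X @ X @ Y @ Y):.4f}  "
              f"tau(X^2) tau(Y^2) = {tau(X @ X) * tau(Y @ Y):.4f}")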
Another important (and closely related) analogy is that while the distribution of sums of independent commutative random variables can be quickly computed via the characteristic function (i.e. the Fourier transform of the distribution), the distribution of sums of freely independent non-commutative random variables can be quickly computed using the Stieltjes transform instead (or with closely related objects, such as the $R$-transform of Voiculescu). This is strongly reminiscent of the appearance of the Stieltjes transform in random matrix theory, and indeed we will see many parallels between the use of the Stieltjes transform here and in Notes 4.
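To sketch how this works in the simplest case (a standard computation in the free probability literature, recorded here just for orientation): the semicircular law $\mu_\sigma$ of variance $\sigma^2$ has Stieltjes transform

$$G_{\mu_\sigma}(z) = \int_{\bf R} \frac{d\mu_\sigma(x)}{z-x} = \frac{z - \sqrt{z^2 - 4\sigma^2}}{2\sigma^2},$$

whose compositional inverse is $G_{\mu_\sigma}^{-1}(s) = \frac{1}{s} + \sigma^2 s$, so that the $R$-transform $R_{\mu_\sigma}(s) := G_{\mu_\sigma}^{-1}(s) - \frac{1}{s}$ is just $\sigma^2 s$. As the $R$-transform is additive for freely independent variables ($R_{\mu \boxplus \nu} = R_\mu + R_\nu$), the sum of two freely independent semicircular variables of variances $\sigma_1^2$ and $\sigma_2^2$ is again semicircular, with variance $\sigma_1^2 + \sigma_2^2$; this is the free analogue of the fact that sums of independent gaussians are gaussian.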
As mentioned earlier, free probability is an excellent tool for computing various expressions of interest in random matrix theory, such as asymptotic values of normalised moments in the large $n$ limit $n \to \infty$. Nevertheless, as it only covers the asymptotic regime in which $n$ is sent to infinity while holding all other parameters fixed, there are some aspects of random matrix theory which the tools of free probability are not sufficient by themselves to resolve (although it can be possible to combine free probability theory with other tools to then answer these questions). For instance, questions regarding the rate of convergence of normalised moments as $n \to \infty$ are not directly answered by free probability, though if free probability is combined with tools such as concentration of measure (Notes 1) then such rate information can often be recovered. For similar reasons, free probability lets one understand the behaviour of $k^{th}$ moments as $n \to \infty$ for fixed $k$, but has more difficulty dealing with the situation in which $k$ is allowed to grow slowly with $n$ (e.g. $k = O(\log n)$). Because of this, free probability methods are effective at controlling the bulk of the spectrum of a random matrix, but have more difficulty with the edges of that spectrum (as well as with related concepts such as the operator norm, Notes 3) as well as with fine-scale structure of the spectrum. Finally, free probability methods are most effective when dealing with matrices that are Hermitian with bounded operator norm, largely because the spectral theory of bounded self-adjoint operators in the infinite-dimensional setting of the large $n$ limit is non-pathological. (This is ultimately due to the stable nature of eigenvalues in the self-adjoint setting; see this previous blog post for discussion.) For non-self-adjoint operators, free probability needs to be augmented with additional tools, most notably by bounds on least singular values, in order to recover the required stability for the various spectral data of random matrices to behave continuously with respect to the large $n$ limit. We will discuss this latter point in a later set of notes.
I have just finished the first draft of my blog book for 2009, under the title of “An epsilon of room: pages from year three of a mathematical blog“. It largely follows the format of my previous two blog books, “Structure and Randomness” and “Poincaré’s legacies“.
There is still some amount of work to be done on the texts; for instance, I need to create an index (which I had neglected to do in the previous two books in the series), and will probably end up splitting the book into two volumes (as was done for “Poincaré’s legacies”).
As always, any feedback or comments are very welcome.
I’ve just uploaded the D.H.J. Polymath article “Density Hales-Jewett and Moser numbers” to the arXiv, submitted to the Szemeredi birthday conference proceedings.
This article investigates the Density Hales-Jewett numbers $c_{n,3}$, defined as the size of the largest subset of the $n$-dimensional cube $\{1,2,3\}^n$ containing no combinatorial line, as well as the related Moser numbers $c'_{n,3}$, defined as the size of the largest subset of the $n$-dimensional cube containing no geometric line. We compute these numbers for all $n \leq 6$ in this paper. For the DHJ numbers, they are 1, 2, 6, 18, 52, 150, 450, and for the Moser numbers they are 1, 2, 6, 16, 43, 124, 353. The last two elements of both sequences are new; the computation of the $n=6$ entries was the hardest and required a non-trivial amount of computer assistance.
We also establish the asymptotic lower bounds $c_{n,3} \geq 3^{n - O(\sqrt{\log n})}$ and $c'_{n,3} \gg 3^n/\sqrt{n}$.
In contrast, the best known upper bound on these quantities is $o(3^n)$, obtained by the sister project to this Polymath project.
We also exhibit a counterexample to a certain “hyper-optimistic conjecture” which would have generalised the Lubell-Yamamoto-Meshalkin (LYM) inequality to this setting.
Thanks to all the participants for this interesting experiment in collaborative mathematics. This is certainly a different type of project from the ones I normally am involved in, but I found it to be an enjoyable and educational experience.
There is still some time to make further (minor) corrections; I think the deadline for the final submission should be in April.
We can now turn attention to one of the centerpiece universality results in random matrix theory, namely the Wigner semi-circle law for Wigner matrices. Recall from previous notes that a Wigner Hermitian matrix ensemble is a random matrix ensemble $M_n = (\xi_{ij})_{1 \leq i,j \leq n}$ of Hermitian matrices (thus $\xi_{ij} = \overline{\xi_{ji}}$; this includes real symmetric matrices as an important special case), in which the upper-triangular entries $\xi_{ij}$, $i < j$ are iid complex random variables with mean zero and unit variance, and the diagonal entries $\xi_{ii}$ are iid real variables, independent of the upper-triangular entries, with bounded mean and variance. Particular special cases of interest include the Gaussian Orthogonal Ensemble (GOE), the symmetric random sign matrices (aka symmetric Bernoulli ensemble), and the Gaussian Unitary Ensemble (GUE).

In previous notes we saw that the operator norm of $M_n$ was typically of size $O(\sqrt{n})$, so it is natural to work with the normalised matrix $M_n/\sqrt{n}$. Accordingly, given any $n \times n$ Hermitian matrix $M_n$, we can form the (normalised) empirical spectral distribution (or ESD for short)

$$\mu_{M_n/\sqrt{n}} := \frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j(M_n)/\sqrt{n}}$$

of $M_n$, where $\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$ are the (necessarily real) eigenvalues of $M_n$, counting multiplicity. The ESD is a probability measure, which can be viewed as a distribution of the normalised eigenvalues of $M_n$.
When $M_n$ is a random matrix ensemble, the ESD $\mu_{M_n/\sqrt{n}}$ is now a random measure, i.e. a random variable taking values in the space $\Pr({\bf R})$ of probability measures on the real line. (Thus, the distribution of $\mu_{M_n/\sqrt{n}}$ is a probability measure on probability measures!)
Now we consider the behaviour of the ESD of a sequence of Hermitian matrix ensembles $M_n$ as $n \to \infty$. Recall from Notes 0 that for any sequence of random variables in a $\sigma$-compact metrisable space, one can define notions of convergence in probability and convergence almost surely. Specialising these definitions to the case of random probability measures on ${\bf R}$, and to deterministic limits, we see that a sequence of random ESDs $\mu_{M_n/\sqrt{n}}$ converge in probability (resp. converge almost surely) to a deterministic limit $\mu \in \Pr({\bf R})$ (which, confusingly enough, is a deterministic probability measure!) if, for every test function $\varphi \in C_c({\bf R})$, the quantities $\int_{\bf R} \varphi\ d\mu_{M_n/\sqrt{n}}$ converge in probability (resp. converge almost surely) to $\int_{\bf R} \varphi\ d\mu$.
Remark 1 As usual, convergence almost surely implies convergence in probability, but not vice versa. In the special case of random probability measures, there is an even weaker notion of convergence, namely convergence in expectation, defined as follows. Given a random ESD $\mu_{M_n/\sqrt{n}}$, one can form its expectation ${\bf E} \mu_{M_n/\sqrt{n}} \in \Pr({\bf R})$, defined via duality (the Riesz representation theorem) as

$$\int_{\bf R} \varphi\ d{\bf E} \mu_{M_n/\sqrt{n}} := {\bf E} \int_{\bf R} \varphi\ d\mu_{M_n/\sqrt{n}};$$

this probability measure can be viewed as the law of a random eigenvalue $\lambda_i(M_n)/\sqrt{n}$ drawn from a random matrix $M_n$ from the ensemble. We then say that the ESDs converge in expectation to a limit $\mu \in \Pr({\bf R})$ if ${\bf E} \mu_{M_n/\sqrt{n}}$ converges in the vague topology to $\mu$, thus

$${\bf E} \int_{\bf R} \varphi\ d\mu_{M_n/\sqrt{n}} \to \int_{\bf R} \varphi\ d\mu$$

for all $\varphi \in C_c({\bf R})$.
In general, these notions of convergence are distinct from each other; but in practice, one often finds in random matrix theory that these notions are effectively equivalent to each other, thanks to the concentration of measure phenomenon.
Exercise 1 Let $M_n$ be a sequence of $n \times n$ Hermitian matrix ensembles, and let $\mu$ be a continuous probability measure on ${\bf R}$.
- Show that $\mu_{M_n/\sqrt{n}}$ converges almost surely to $\mu$ if and only if $\mu_{M_n/\sqrt{n}}(-\infty, \lambda)$ converges almost surely to $\mu(-\infty, \lambda)$ for all $\lambda \in {\bf R}$.
- Show that $\mu_{M_n/\sqrt{n}}$ converges in probability to $\mu$ if and only if $\mu_{M_n/\sqrt{n}}(-\infty, \lambda)$ converges in probability to $\mu(-\infty, \lambda)$ for all $\lambda \in {\bf R}$.
- Show that $\mu_{M_n/\sqrt{n}}$ converges in expectation to $\mu$ if and only if ${\bf E} \mu_{M_n/\sqrt{n}}(-\infty, \lambda)$ converges to $\mu(-\infty, \lambda)$ for all $\lambda \in {\bf R}$.
We can now state the Wigner semi-circular law.
Theorem 1 (Semicircular law) Let $M_n$ be the top left $n \times n$ minors of an infinite Wigner matrix $(\xi_{ij})_{i,j \geq 1}$. Then the ESDs $\mu_{M_n/\sqrt{n}}$ converge almost surely (and hence also in probability and in expectation) to the Wigner semi-circular distribution

$$\mu_{sc} := \frac{1}{2\pi} (4 - |x|^2)_+^{1/2}\ dx. \qquad (1)$$

A numerical example of this theorem in action can be seen at the MathWorld entry for this law.
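For those who prefer to generate such a numerical example themselves, here is a short simulation of mine (the random sign ensemble, the matrix size, and the bin count are arbitrary choices) comparing the ESD of $M_n/\sqrt{n}$ against the density in (1):

    import numpy as np

    # ESD of a symmetric random sign (Bernoulli) Wigner matrix, compared with
    # the semicircular density (1/(2*pi)) * sqrt(4 - x^2) on [-2, 2].

    rng = np.random.default_rng(0)
    n = 2000
    signs = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
    M = signs + signs.T + np.diag(rng.choice([-1.0, 1.0], size=n))
    eigs = np.linalg.eigvalsh(M / np.sqrt(n))

    hist, edges = np.histogram(eigs, bins=40, range=(-2.2, 2.2), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    semicircle = np.sqrt(np.maximum(4 - centers ** 2, 0)) / (2 * np.pi)
    print("max deviation of ESD histogram from the semicircular density:",
          f"{np.max(np.abs(hist - semicircle)):.3f}")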
The semi-circular law nicely complements the upper Bai-Yin theorem from Notes 3, which asserts that (in the case when the entries have finite fourth moment, at least) the matrices $M_n$ almost surely have operator norm at most $(2+o(1))\sqrt{n}$. Note that the operator norm is the same thing as the largest magnitude of the eigenvalues. Because the semi-circular distribution (1) is supported on the interval $[-2,2]$ with positive density on the interior of this interval, Theorem 1 easily supplies the lower Bai-Yin theorem, that the operator norm of $M_n$ is almost surely at least $(2-o(1))\sqrt{n}$, and thus (in the finite fourth moment case) the norm is in fact equal to $(2+o(1))\sqrt{n}$. Indeed, we have just shown that the semi-circular law provides an alternate proof of the lower Bai-Yin bound (Proposition 11 of Notes 3).
As will hopefully become clearer in the next set of notes, the semi-circular law is the noncommutative (or free probability) analogue of the central limit theorem, with the semi-circular distribution (1) taking on the role of the normal distribution. Of course, there is a striking difference between the two distributions, in that the former is compactly supported while the latter is merely subgaussian. One reason for this is that the concentration of measure phenomenon is more powerful in the case of ESDs of Wigner matrices than it is for averages of iid variables; compare the concentration of measure results in Notes 3 with those in Notes 1.
There are several ways to prove (or at least to heuristically justify) the semi-circular law. In this set of notes we shall focus on the two most popular methods, the moment method and the Stieltjes transform method, together with a third (heuristic) method based on Dyson Brownian motion (Notes 3b). In the next set of notes we shall also study the free probability method, and in the set of notes after that we use the determinantal processes method (although this method is initially restricted to highly symmetric ensembles, such as GUE).