You are currently browsing the tag archive for the ‘GUE’ tag.

Let $n$ be a large natural number, and let $M_n$ be a matrix drawn from the Gaussian Unitary Ensemble (GUE), by which we mean that $M_n$ is a Hermitian matrix whose upper triangular entries are iid complex gaussians with mean zero and variance one, and whose diagonal entries are iid real gaussians with mean zero and variance one (and independent of the upper triangular entries). The eigenvalues $\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$ are then real and almost surely distinct, and can be viewed as a random point process $\Sigma^{(n)} := \{\lambda_1(M_n),\ldots,\lambda_n(M_n)\}$ on the real line. One can then form the $k$-point correlation functions $\rho_k^{(n)}: {\bf R}^k \rightarrow {\bf R}^+$ for every $k \geq 1$, which can be defined by duality by requiring

$\displaystyle \int_{{\bf R}^k} \rho_k^{(n)}(x_1,\ldots,x_k) F(x_1,\ldots,x_k)\ dx_1 \ldots dx_k = {\bf E} \sum_{i_1,\ldots,i_k \hbox{ distinct}} F(\lambda_{i_1}(M_n),\ldots,\lambda_{i_k}(M_n))$

for any test function $F: {\bf R}^k \rightarrow {\bf R}^+$. For GUE, which is a continuous matrix ensemble, one can also define $\rho_k^{(n)}(x_1,\ldots,x_k)$ for distinct $x_1,\ldots,x_k$ as the unique quantity such that the probability that there is an eigenvalue in each of the intervals $[x_i, x_i+\epsilon]$ for $i=1,\ldots,k$ is $(\rho_k^{(n)}(x_1,\ldots,x_k)+o(1))\epsilon^k$ in the limit $\epsilon \rightarrow 0$.
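For sake of concreteness, here is a quick numerical sketch of this setup (mine, not part of the original discussion): it samples a GUE matrix with the normalisation above and extracts its spectrum. The helper name `sample_gue` is of course just for illustration.

```python
import numpy as np

def sample_gue(n, rng):
    """Sample an n x n GUE matrix: off-diagonal entries are standard
    complex gaussians N(0,1)_C (real/imaginary parts N(0,1/2)),
    diagonal entries are standard real gaussians N(0,1)_R."""
    A = rng.normal(0, np.sqrt(0.5), (n, n)) + 1j * rng.normal(0, np.sqrt(0.5), (n, n))
    M = np.triu(A, 1)              # keep the strictly upper triangular part
    M = M + M.conj().T + np.diag(rng.normal(0, 1, n))
    return M

rng = np.random.default_rng(0)
n = 500
lam = np.linalg.eigvalsh(sample_gue(n, rng))   # real, almost surely distinct
print(lam.min() / np.sqrt(n), lam.max() / np.sqrt(n))  # ~ -2 and ~ 2
```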

As is well known, the GUE process is a determinantal point process, which means that $k$-point correlation functions can be explicitly computed as

$\displaystyle \rho_k^{(n)}(x_1,\ldots,x_k) = \det( K^{(n)}(x_i,x_j) )_{1 \leq i,j \leq k}$

for some kernel $K^{(n)}: {\bf R} \times {\bf R} \rightarrow {\bf R}$; explicitly, one has

$\displaystyle K^{(n)}(x,y) := \sum_{k=0}^{n-1} \phi_k(x) \phi_k(y)$

where $\phi_k(x) := P_k(x) e^{-x^2/4}$ and $P_0, P_1, \ldots$ are the (normalised) Hermite polynomials; see this previous blog post for details.

Using the asymptotics of Hermite polynomials (which then give asymptotics for the kernel $K^{(n)}$), one can take a limit of a (suitably rescaled) sequence of GUE processes to obtain the *Dyson sine process*, which is a determinantal point process $\Sigma$ on the real line with correlation functions

$\displaystyle \rho_k(x_1,\ldots,x_k) = \det( K(x_i,x_j) )_{1 \leq i,j \leq k} \ \ \ \ \ (1)$

where $K$ is the *Dyson sine kernel*

$\displaystyle K(x,y) := \frac{\sin \pi(x-y)}{\pi(x-y)}.$

A bit more precisely, for any fixed bulk energy $-2 < u < 2$, the renormalised point processes $\rho_{sc}(u) \sqrt{n} ( \Sigma^{(n)} - u \sqrt{n} )$ converge in distribution in the vague topology to $\Sigma$ as $n \rightarrow \infty$, where $\rho_{sc}(u) := \frac{1}{2\pi} (4-u^2)_+^{1/2}$ is the semi-circular law density.

On the other hand, an important feature of the GUE process $\Sigma^{(n)} = \{\lambda_1,\ldots,\lambda_n\}$ is its stationarity (modulo rescaling) under Dyson Brownian motion

$\displaystyle d\lambda_i = dB_i + \sum_{j \neq i} \frac{dt}{\lambda_i - \lambda_j}$

which describes the stochastic evolution of eigenvalues of a Hermitian matrix under independent Brownian motion of its entries, and is discussed in this previous blog post. To cut a long story short, this stationarity tells us that the self-similar $n$-point correlation function

$\displaystyle \rho_n^{(n)}(t,x) := t^{-n/2} \rho_n^{(n)}(x/\sqrt{t})$

obeys the *Dyson heat equation*

$\displaystyle \partial_t \rho_n^{(n)} = \frac{1}{2} \sum_{i=1}^n \partial_{x_i}^2 \rho_n^{(n)} - \sum_{1 \leq i,j \leq n: i \neq j} \partial_{x_i} \frac{\rho_n^{(n)}}{x_i - x_j}$

(see Exercise 11 of the previously mentioned blog post). Note that $\rho_n^{(n)}$ vanishes to second order whenever two of the $x_i$ coincide, so there is no singularity on the right-hand side. Setting $t=1$ and using self-similarity, we can rewrite this equation in time-independent form as

$\displaystyle -\frac{1}{2} \sum_{i=1}^n \partial_{x_i} ( x_i \rho_n^{(n)} ) = \frac{1}{2} \sum_{i=1}^n \partial_{x_i}^2 \rho_n^{(n)} - \sum_{1 \leq i,j \leq n: i \neq j} \partial_{x_i} \frac{\rho_n^{(n)}}{x_i - x_j}.$

One can then integrate out all but $k$ of these variables (after carefully justifying convergence) to obtain a system of equations for the $k$-point correlation functions $\rho_k^{(n)}$:

$\displaystyle -\frac{1}{2} \sum_{i=1}^k \partial_{x_i} ( x_i \rho_k^{(n)} ) = \frac{1}{2} \sum_{i=1}^k \partial_{x_i}^2 \rho_k^{(n)} - \sum_{1 \leq i,j \leq k: i \neq j} \partial_{x_i} \frac{\rho_k^{(n)}}{x_i - x_j} - \sum_{i=1}^k \partial_{x_i}\ p.v. \int_{\bf R} \frac{\rho_{k+1}^{(n)}(x_1,\ldots,x_k,y)}{x_i - y}\ dy \ \ \ \ \ (2)$

where the integral is interpreted in the principal value sense. This system is an example of a BBGKY hierarchy.

If one carefully rescales and takes limits (say at the energy level $u=0$, for simplicity), the left-hand side turns out to rescale to be a lower order term, and one ends up with a hierarchy for the Dyson sine process:

$\displaystyle 0 = \frac{1}{2} \sum_{i=1}^k \partial_{x_i}^2 \rho_k - \sum_{1 \leq i,j \leq k: i \neq j} \partial_{x_i} \frac{\rho_k}{x_i - x_j} - \sum_{i=1}^k \partial_{x_i}\ p.v. \int_{\bf R} \frac{\rho_{k+1}(x_1,\ldots,x_k,y)}{x_i - y}\ dy. \ \ \ \ \ (3)$

Informally, these equations show that the Dyson sine process $\Sigma = \{ \lambda_i \}$ is stationary with respect to the infinite Dyson Brownian motion

$\displaystyle d\lambda_i = dB_i + p.v. \sum_{j \neq i} \frac{dt}{\lambda_i - \lambda_j}$

where the $dB_i$ are independent Brownian increments, and the sum is interpreted in a suitable principal value sense.

I recently set myself the exercise of deriving the identity (3) directly from the definition (1) of the Dyson sine process, without reference to GUE. This turns out to not be too difficult when done the right way (namely, by modifying the proof of Gaudin’s lemma), although it did take me an entire day of work before I realised this, and I could not find it in the literature (though I suspect that many people in the field have privately performed this exercise in the past). In any case, I am recording the computation here, largely because I really don’t want to have to do it again, but perhaps it will also be of interest to some readers.

I’ve just uploaded to the arXiv my paper The asymptotic distribution of a single eigenvalue gap of a Wigner matrix, submitted to Probability Theory and Related Fields. This paper (like several of my previous papers) is concerned with the asymptotic distribution of the eigenvalues $\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$ of a random Wigner matrix $M_n$ in the limit $n \rightarrow \infty$, with a particular focus on matrices drawn from the Gaussian Unitary Ensemble (GUE). This paper is focused on the *bulk* of the spectrum, i.e. on eigenvalues $\lambda_i(M_n)$ with $\delta n \leq i \leq (1-\delta) n$ for some fixed $\delta > 0$.

The location of an individual eigenvalue $\lambda_i(M_n)$ is by now quite well understood. If we normalise the entries of the matrix to have mean zero and variance $1$, then in the asymptotic limit $n \rightarrow \infty$, the Wigner semicircle law tells us that with probability $1-o(1)$ one has

$\displaystyle \lambda_i(M_n) = \sqrt{n} ( \gamma_{i/n} + o(1) )$

where the *classical location* $\gamma_{i/n} \in [-2,2]$ of the eigenvalue is given by the formula

$\displaystyle \int_{-2}^{\gamma_{i/n}} \rho_{sc}(x)\ dx = \frac{i}{n}$

and the semicircular distribution $\rho_{sc}(x)\ dx$ is given by the formula

$\displaystyle \rho_{sc}(x) := \frac{1}{2\pi} (4 - x^2)_+^{1/2}.$
Actually, one can improve the error term here from $o(1)$ to $O(n^{-1+\epsilon})$ for any $\epsilon > 0$ (see this previous recent paper of Van Vu and myself for more discussion of these sorts of estimates, sometimes known as *eigenvalue rigidity* estimates).

From the semicircle law (and the fundamental theorem of calculus), one expects the eigenvalue spacing $\lambda_{i+1}(M_n) - \lambda_i(M_n)$ to have an average size of $\frac{1}{\sqrt{n} \rho_{sc}(\gamma_{i/n})}$. It is thus natural to introduce the normalised eigenvalue spacing

$\displaystyle X_i := \sqrt{n} \rho_{sc}(\gamma_{i/n}) ( \lambda_{i+1}(M_n) - \lambda_i(M_n) )$

and ask what the distribution of $X_i$ is.
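The classical locations and the associated mean spacing are easy to compute numerically; the following sketch (assuming scipy is available; the helper names are hypothetical) solves the defining equation with a root-finder.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def rho_sc(x):
    """Semicircular density (1/(2 pi)) sqrt((4 - x^2)_+), supported on [-2, 2]."""
    return np.sqrt(max(4 - x * x, 0.0)) / (2 * np.pi)

def classical_location(i, n):
    """gamma_{i/n}, defined by int_{-2}^{gamma} rho_sc(x) dx = i/n."""
    return brentq(lambda t: quad(rho_sc, -2, t)[0] - i / n, -2, 2)

n = 1000
g = classical_location(n // 2, n)        # middle of the bulk: gamma ~ 0
print(g, 1 / (np.sqrt(n) * rho_sc(g)))   # location and mean eigenvalue spacing
```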

As mentioned previously, we will focus on the bulk case $\delta n \leq i \leq (1-\delta) n$, and begin with the model case when $M_n$ is drawn from GUE. (In the edge case when $i$ is close to $1$ or to $n$, the distribution is given by the famous Tracy-Widom law.) Here, the distribution was almost (but as we shall see, not quite) worked out by Gaudin and Mehta. By using the theory of determinantal processes, they were able to compute a quantity closely related to $X_i$, namely the probability

$\displaystyle {\bf P}\Big( N_{[u\sqrt{n},\ u\sqrt{n} + \frac{s}{\sqrt{n} \rho_{sc}(u)}]} = 0 \Big) \ \ \ \ \ (1)$

that an interval near $u\sqrt{n}$ of length comparable to the expected eigenvalue spacing is devoid of eigenvalues, where $N_I$ denotes the number of eigenvalues in the interval $I$. For $u$ in the bulk $-2 + \delta \leq u \leq 2 - \delta$ and fixed $s$, they showed that this probability is equal to

$\displaystyle \det( 1 - 1_{[0,s]} P 1_{[0,s]} ) + o(1),$

where $P$ is the Dyson projection

$\displaystyle P f(x) := \int_{\bf R} \frac{\sin \pi(x-y)}{\pi(x-y)} f(y)\ dy$

to Fourier modes in $[-1/2,1/2]$, and $\det$ is the Fredholm determinant. As shown by Jimbo, Miwa, Môri, and Sato, this determinant can also be expressed in terms of a solution to a Painlevé V ODE, though we will not need this fact here. In view of this asymptotic and some standard integration by parts manipulations, it becomes plausible to propose that $X_i$ will be asymptotically distributed according to the *Gaudin-Mehta distribution* $p(s)\ ds$, where

$\displaystyle p(s) := \frac{d^2}{ds^2} \det( 1 - 1_{[0,s]} P 1_{[0,s]} ).$

A reasonably accurate approximation for $p$ is given by the *Wigner surmise* $p(s) \approx \frac{\pi}{2} s e^{-\pi s^2/4}$ [EDIT: as pointed out in comments, in this GUE setting the correct surmise is $p(s) \approx \frac{32}{\pi^2} s^2 e^{-4s^2/\pi}$], which was presciently proposed by Wigner as early as 1957; it is exact for $n=2$ but not in the asymptotic limit $n \rightarrow \infty$.
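Neither the Fredholm determinant nor the Gaudin-Mehta density is hard to evaluate numerically. The sketch below (my own check, using Bornemann's quadrature discretisation of the Fredholm determinant rather than anything from the papers cited here) computes the gap probability and differentiates it twice, then compares against the GUE surmise.

```python
import numpy as np

def gap_probability(s, m=40):
    """Fredholm determinant det(1 - K) on L^2[0,s] for the sine kernel
    K(x,y) = sin(pi(x-y))/(pi(x-y)), by Nystrom discretisation on m
    Gauss-Legendre nodes (Bornemann's method)."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    x = 0.5 * s * (nodes + 1)            # map [-1, 1] -> [0, s]
    w = 0.5 * s * weights
    K = np.sinc(np.subtract.outer(x, x)) # np.sinc(t) = sin(pi t)/(pi t)
    sq = np.sqrt(w)
    return np.linalg.det(np.eye(m) - sq[:, None] * K * sq[None, :])

def gaudin_mehta_density(s, h=1e-3):
    """p(s) = second derivative of the gap probability, by central differences."""
    return (gap_probability(s + h) - 2 * gap_probability(s) + gap_probability(s - h)) / h**2

s = 1.0
surmise = (32 / np.pi**2) * s**2 * np.exp(-4 * s**2 / np.pi)
print(gaudin_mehta_density(s), surmise)   # close, but not equal
```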

Unfortunately, when one tries to make this argument rigorous, one finds that the asymptotic for (1) does not control a single gap $X_i$, but rather an ensemble of gaps $X_i$, where $i$ is drawn from an interval $\{ i_0 - L, \ldots, i_0 + L \}$ of some moderate size $L$ (e.g. $L = \log^{O(1)} n$); see for instance this paper of Deift, Kriecherbauer, McLaughlin, Venakides, and Zhou for a more precise formalisation of this statement (which is phrased slightly differently, in which one samples all gaps inside a fixed window of spectrum, rather than inside a fixed range of eigenvalue indices $i$). (This result is stated for GUE, but can be extended to other Wigner ensembles by the Four Moment Theorem, at least if one assumes a moment matching condition; see this previous paper with Van Vu for details. The moment condition can in fact be removed, as was done in this subsequent paper with Erdos, Ramirez, Schlein, Vu, and Yau.)

The problem is that when one specifies a given window of spectrum such as $[u\sqrt{n},\ u\sqrt{n} + \frac{s}{\sqrt{n}\rho_{sc}(u)}]$, one cannot quite pin down in advance which eigenvalues $\lambda_i$ are going to lie to the left or right of this window; even with the strongest eigenvalue rigidity results available, there is a natural uncertainty of $\sqrt{\log n}$ or so in the index $i$ (as can be quantified quite precisely by this central limit theorem of Gustavsson).

The main difficulty here is that there could potentially be some strange coupling between the event (1) of an interval being devoid of eigenvalues, and the number $N_{(-\infty, u\sqrt{n})}$ of eigenvalues to the left of that interval. For instance, one could conceive of a possible scenario in which the interval in (1) tends to have many eigenvalues when $N_{(-\infty, u\sqrt{n})}$ is even, but very few when $N_{(-\infty, u\sqrt{n})}$ is odd. In this sort of situation, the gaps $X_i$ may have different behaviour for even $i$ than for odd $i$, and such anomalies would not be picked up in the averaged statistics in which $i$ is allowed to range over some moderately large interval.

The main result of the current paper is that these anomalies do not actually occur, and that all of the eigenvalue gaps $X_i$ in the bulk are asymptotically governed by the Gaudin-Mehta law without the need for averaging in the $i$ parameter. Again, this is shown first for GUE, and then extended to other Wigner matrices obeying a matching moment condition using the Four Moment Theorem. (It is likely that the moment matching condition can be removed here, but I was unable to achieve this, despite all the recent advances in establishing universality of local spectral statistics for Wigner matrices, mainly because the universality results in the literature are more focused on specific energy levels $u$ than on specific eigenvalue indices $i$. To make matters worse, in some cases universality is currently known only after an additional averaging in the energy parameter $u$.)

The main task in the proof is to show that the random variable $N_{(-\infty, u\sqrt{n})}$ is largely decoupled from the event in (1) when $M_n$ is drawn from GUE. To do this we use some of the theory of determinantal processes, and in particular the nice fact that when one conditions a determinantal process to the event that a certain spatial region (such as an interval) contains no points of the process, then one obtains a new determinantal process (with a kernel that is closely related to the original kernel). The main task is then to obtain a sufficiently good control on the distance between the new determinantal kernel and the old one, which we do by some functional-analytic considerations involving the manipulation of norms of operators (and specifically, the operator norm, Hilbert-Schmidt norm, and nuclear norm). Amusingly, the Fredholm alternative makes a key appearance, as I end up having to invert a compact perturbation of the identity at one point (specifically, I need to invert $1 - 1_J P 1_J$, where $P$ is the Dyson projection and $J$ is an interval). As such, the bounds in my paper become ineffective, though I am sure that with more work one can invert this particular perturbation of the identity by hand, without the need to invoke the Fredholm alternative.

Van Vu and I have just uploaded to the arXiv our paper A central limit theorem for the determinant of a Wigner matrix, submitted to Adv. Math. It studies the asymptotic distribution of the determinant $\det M_n$ of a random Wigner matrix $M_n$ (such as a matrix drawn from the Gaussian Unitary Ensemble (GUE) or Gaussian Orthogonal Ensemble (GOE)).

Before we get to these results, let us first discuss the simpler problem of studying the determinant $\det M_n$ of a random *iid* matrix $M_n = (\xi_{ij})_{1 \leq i,j \leq n}$, such as a real gaussian matrix (where all entries $\xi_{ij}$ are independently and identically distributed using the standard real normal distribution $N(0,1)_{\bf R}$), a complex gaussian matrix (where all entries are independently and identically distributed using the standard complex normal distribution $N(0,1)_{\bf C}$, thus the real and imaginary parts are independent with law $N(0,\frac{1}{2})_{\bf R}$), or the random sign matrix (in which all entries are independently and identically distributed according to the Bernoulli distribution, taking values $\pm 1$ with a $1/2$ chance of either sign). More generally, one can consider a matrix $M_n$ in which all the entries $\xi_{ij}$ are independently and identically distributed with mean zero and variance $1$.

We can expand $\det M_n$ using the Leibniz expansion

$\displaystyle \det M_n = \sum_{\sigma \in S_n} I_\sigma \ \ \ \ \ (1)$

where $\sigma$ ranges over the permutations of $\{1,\ldots,n\}$, and $I_\sigma$ is the product

$\displaystyle I_\sigma := \hbox{sgn}(\sigma) \prod_{i=1}^n \xi_{i\sigma(i)}.$

From the iid nature of the $\xi_{ij}$, we easily see that each $I_\sigma$ has mean zero and variance one, and that the $I_\sigma$ are pairwise uncorrelated as $\sigma$ varies. We conclude that $\det M_n$ has mean zero and variance $n!$ (an observation first made by Turán). In particular, from Chebyshev's inequality we see that $\det M_n$ is typically of size $O(\sqrt{n!})$.
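Turán's observation is easy to test by simulation; here is a throwaway check for the random sign matrix (the parameters are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, trials = 6, 200000
dets = np.array([np.linalg.det(rng.choice([-1.0, 1.0], size=(n, n)))
                 for _ in range(trials)])
print(dets.mean())                      # ~ 0
print(dets.var(), math.factorial(n))    # both ~ n! = 720
```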

It turns out, though, that this is not quite best possible. This is easiest to explain in the real gaussian case, by performing a computation first made by Goodman. In this case, the distribution of $\det M_n$ is clearly symmetric around the origin, so we can focus attention on the magnitude $|\det M_n|$. We can interpret this quantity geometrically as the volume of an $n$-dimensional parallelepiped whose generating vectors $X_1,\ldots,X_n$ are independent real gaussian vectors in ${\bf R}^n$ (i.e. their coefficients are iid with law $N(0,1)_{\bf R}$). Using the classical base-times-height formula, we thus have

$\displaystyle |\det M_n| = \prod_{i=1}^n \hbox{dist}( X_i, V_i )$

where $V_i$ is the $(i-1)$-dimensional linear subspace of ${\bf R}^n$ spanned by $X_1,\ldots,X_{i-1}$ (note that $X_1,\ldots,X_n$, having an absolutely continuous joint distribution, are almost surely linearly independent). Taking logarithms, we conclude

$\displaystyle \log |\det M_n| = \sum_{i=1}^n \log \hbox{dist}( X_i, V_i ). \ \ \ \ \ (2)$

Now, we take advantage of a fundamental symmetry property of the Gaussian vector distribution, namely its invariance with respect to the orthogonal group $O(n)$. Because of this, we see that if we fix $X_1,\ldots,X_{i-1}$ (and thus $V_i$), the random variable $\hbox{dist}(X_i, V_i)$ has the same distribution as $\hbox{dist}( X_i, {\bf R}^{i-1} )$, or equivalently the $\chi$ distribution

$\displaystyle \chi_{n-i+1} := \Big( \sum_{j=1}^{n-i+1} \xi_j^2 \Big)^{1/2}$

where $\xi_1,\ldots,\xi_{n-i+1}$ are iid copies of $N(0,1)_{\bf R}$. As this distribution does not depend on the $X_1,\ldots,X_{i-1}$, we conclude that the law of $\log|\det M_n|$ is given by the sum of $n$ independent $\chi$-variables:

$\displaystyle \log |\det M_n| \equiv \log \chi_1 + \ldots + \log \chi_n.$

A standard computation shows that each $\chi_k^2$ has mean $k$ and variance $2k$, and then a Taylor series (or Ito calculus) computation (using concentration of measure tools to control tails) shows that $\log \chi_k$ has mean $\frac{1}{2} \log k - \frac{1}{2k} + O(1/k^2)$ and variance $\frac{1}{2k} + O(1/k^2)$. As such, $\log |\det M_n|$ has mean $\frac{1}{2} \log n! - \frac{1}{2} \log n + O(1)$ and variance $\frac{1}{2} \log n + O(1)$. Applying a suitable version of the central limit theorem, one obtains the asymptotic law

$\displaystyle \frac{\log|\det M_n| - \frac{1}{2} \log n! + \frac{1}{2} \log n}{\sqrt{\frac{1}{2} \log n}} \rightarrow N(0,1)_{\bf R} \ \ \ \ \ (3)$

where $\rightarrow$ denotes convergence in distribution. A bit more informally, we have

$\displaystyle |\det M_n| \approx n^{-1/2} \sqrt{n!}\ e^{N(0, \frac{1}{2}\log n)_{\bf R}} \ \ \ \ \ (4)$

when $M_n$ is a real gaussian matrix; thus, for instance, the median value of $|\det M_n|$ is $n^{-1/2+o(1)} \sqrt{n!}$. At first glance, this appears to conflict with the second moment bound ${\bf E} |\det M_n|^2 = n!$ of Turán mentioned earlier, but once one recalls that $e^{N(0,t)_{\bf R}}$ has a second moment of $e^{2t}$, we see that the two facts are in fact perfectly consistent; the upper tail of the normal distribution in the exponent in (4) ends up dominating the second moment.
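Since (2) identifies $\log|\det M_n|$ in distribution with a sum of independent $\log \chi_k$ variables, one can also check the centring and normalisation in (3) without ever forming a matrix; in the sketch below the predicted mean and variance are only claimed up to the $O(1)$ corrections mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 400, 2000
# log|det M_n| has the same law as (1/2) sum_{k=1}^n log chi^2_k
samples = 0.5 * np.log(rng.chisquare(np.arange(1, n + 1), size=(trials, n))).sum(axis=1)
mean_pred = 0.5 * (np.log(np.arange(1, n + 1)).sum() - np.log(n))
print(samples.mean(), mean_pred)        # ~ (1/2) log n! - (1/2) log n, up to O(1)
print(samples.var(), 0.5 * np.log(n))   # ~ (1/2) log n, up to O(1)
```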

It turns out that the central limit theorem (3) is valid for any real iid matrix with mean zero, variance one, and an exponential decay condition on the entries; this was first claimed by Girko, though the arguments in that paper appear to be incomplete. Another proof of this result, with more quantitative bounds on the convergence rate, has been recently obtained by Hoi Nguyen and Van Vu. The basic idea in these arguments is to express the sum in (2) in terms of a martingale and apply the martingale central limit theorem.

If one works with complex gaussian random matrices instead of real gaussian random matrices, the above computations change slightly (one has to replace the real $\chi$ distributions with their complex counterparts, in which the $\xi_j$ are distributed according to the complex gaussian $N(0,1)_{\bf C}$ instead of the real one). At the end of the day, one ends up with the law

$\displaystyle \frac{\log|\det M_n| - \frac{1}{2} \log n! + \frac{1}{4} \log n}{\sqrt{\frac{1}{4} \log n}} \rightarrow N(0,1)_{\bf R}$

(but note that this new asymptotic is still consistent with Turán's second moment calculation).

We can now turn to the results of our paper. Here, we replace the iid matrices by *Wigner matrices* $M_n = (\xi_{ij})_{1 \leq i,j \leq n}$, which are defined similarly but are constrained to be Hermitian (or real symmetric), thus $\xi_{ij} = \overline{\xi_{ji}}$ for all $1 \leq i,j \leq n$. Model examples here include the Gaussian Unitary Ensemble (GUE), in which $\xi_{ij} \equiv N(0,1)_{\bf C}$ for $i < j$ and $\xi_{ii} \equiv N(0,1)_{\bf R}$ for $i = 1,\ldots,n$, the Gaussian Orthogonal Ensemble (GOE), in which $\xi_{ij} \equiv N(0,1)_{\bf R}$ for $i < j$ and $\xi_{ii} \equiv N(0,2)_{\bf R}$ for $i = 1,\ldots,n$, and the *symmetric Bernoulli ensemble*, in which $\xi_{ij} \equiv \pm 1$ for $i \leq j$ (with probability $1/2$ of either sign). In all cases, the upper triangular entries of the matrix are assumed to be jointly independent. For a more precise definition of the Wigner matrix ensembles we are considering, see the introduction to our paper.

The determinants $\det M_n$ of these matrices still have a Leibniz expansion (1). However, in the Wigner case, the mean and variance of the $I_\sigma$ are slightly different, and what is worse, they are not all pairwise uncorrelated any more. For instance, the mean of $I_\sigma$ is still usually zero, but equals $\hbox{sgn}(\sigma) = (-1)^{n/2}$ in the exceptional case when $\sigma$ is a perfect matching (i.e. the union of exactly $n/2$ $2$-cycles, a possibility that can of course only happen when $n$ is even). As such, the mean ${\bf E} \det M_n$ still vanishes when $n$ is odd, but for even $n$ it is equal to

$\displaystyle {\bf E} \det M_n = (-1)^{n/2} \frac{n!}{2^{n/2} (n/2)!}$

(the fraction here simply being the number of perfect matchings on $n$ vertices). Using Stirling's formula, one then computes that $|{\bf E} \det M_n|$ is comparable to $n^{-1/4} \sqrt{n!}$ when $n$ is large and even. The second moment calculation is more complicated (and uses facts about the distribution of cycles in random permutations, mentioned in this previous post), but one can compute that ${\bf E} |\det M_n|^2$ is comparable to $n^{1/2} n!$ for GUE and $n^{3/2} n!$ for GOE. (The discrepancy here comes from the fact that in the GOE case, $I_\sigma$ and $I_\rho$ can correlate when $\rho$ contains reversals of $k$-cycles of $\sigma$ for $k \geq 3$, but this does not happen in the GUE case.) For GUE, much more precise asymptotics for the moments of the determinant are known, starting from the work of Brezin and Hikami, though we do not need these more sophisticated computations here.
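As a quick sanity check of the mean formula (again mine, not from the paper): for $n = 4$ the prediction is ${\bf E} \det M_4 = (+1) \cdot 4!/(2^2 \cdot 2!) = 3$ for GUE, which matches simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 4, 100000
acc = 0.0
for _ in range(trials):
    A = rng.normal(0, np.sqrt(0.5), (n, n)) + 1j * rng.normal(0, np.sqrt(0.5), (n, n))
    M = np.triu(A, 1)
    M = M + M.conj().T + np.diag(rng.normal(0, 1, n))   # GUE, as before
    acc += np.linalg.det(M).real
# number of perfect matchings on 4 vertices is 3, with sign (-1)^{n/2} = +1
print(acc / trials)   # ~ 3
```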

Our main results are then as follows.

Theorem 1. Let $M_n$ be a Wigner matrix.

- If $M_n$ is drawn from GUE, then $\frac{\log|\det M_n| - \frac{1}{2}\log n! + \frac{1}{4}\log n}{\sqrt{\frac{1}{2} \log n}} \rightarrow N(0,1)_{\bf R}$.
- If $M_n$ is drawn from GOE, then $\frac{\log|\det M_n| - \frac{1}{2}\log n! + \frac{1}{4}\log n}{\sqrt{\log n}} \rightarrow N(0,1)_{\bf R}$.
- The previous two results also hold for more general Wigner matrices, assuming that the real and imaginary parts are independent, a finite moment condition is satisfied, and the entries match moments with those of GOE or GUE to fourth order. (See the paper for a more precise formulation of the result.)

Thus, we informally have

$\displaystyle |\det M_n| \approx n^{-1/4} \sqrt{n!}\ e^{N(0, \frac{1}{2}\log n)_{\bf R}}$

when $M_n$ is drawn from GUE, or from another Wigner ensemble matching GUE to fourth order (and obeying some additional minor technical hypotheses); and

$\displaystyle |\det M_n| \approx n^{-1/4} \sqrt{n!}\ e^{N(0, \log n)_{\bf R}}$

when $M_n$ is drawn from GOE, or from another Wigner ensemble matching GOE to fourth order. Again, these asymptotic limiting distributions are consistent with the asymptotic behaviour of the second moments.

The extension from the GUE or GOE case to more general Wigner ensembles is a fairly routine application of the *four moment theorem* for Wigner matrices, although for various technical reasons we do not quite use the existing four moment theorems in the literature, but adapt them to the log-determinant. The main idea is to express the log-determinant as an integral

$\displaystyle \log|\det M_n| = \log|\det(M_n - iT)| - n \int_0^T \hbox{Im}\ s_n(i\eta)\ d\eta \ \ \ \ \ (7)$

(in the limit $T \rightarrow \infty$) of the Stieltjes transform

$\displaystyle s_n(z) := \frac{1}{n} \hbox{tr}\big( (M_n - z)^{-1} \big)$

of $M_n$. Strictly speaking, the integral in (7) is divergent at infinity (and also can be ill-behaved near zero), but this can be addressed by standard truncation and renormalisation arguments (combined with known facts about the least singular value of Wigner matrices), which we omit here. We then use a variant of the four moment theorem for the Stieltjes transform, as used by Erdos, Yau, and Yin (based on a previous four moment theorem for individual eigenvalues introduced by Van Vu and myself). The four moment theorem is proven by the now-standard Lindeberg exchange method, combined with the usual resolvent identities to control the behaviour of the resolvent (and hence the Stieltjes transform) with respect to modifying one or two entries, together with the delocalisation of eigenvector property (which in turn arises from local semicircle laws) to control the error terms.

Somewhat surprisingly (to us, at least), it turned out that it was the first part of the theorem (namely, the verification of the limiting law for the invariant ensembles GUE and GOE) that was more difficult than the extension to the Wigner case. Even in an ensemble as highly symmetric as GUE, the rows are no longer independent, and the formula (2) is basically useless for getting any non-trivial control on the log-determinant. There is an explicit formula for the joint distribution of the eigenvalues of GUE (or GOE), which does eventually give the distribution of the cumulants of the log-determinant, which then gives the required central limit theorem; but this is a lengthy computation, first performed by Delannay and Le Caër.

Following a suggestion of my colleague, Rowan Killip, we give an alternate proof of this central limit theorem in the GUE and GOE cases, by using a beautiful observation of Trotter, namely that the GUE or GOE ensemble can be conjugated into a tractable tridiagonal form. Let me state it just for GUE:

Proposition 2 (Tridiagonal form of GUE). Let $D_n$ be the random tridiagonal real symmetric matrix

$\displaystyle D_n = \begin{pmatrix} a_1 & b_1 & 0 & \ldots & 0 & 0 \\ b_1 & a_2 & b_2 & \ldots & 0 & 0 \\ 0 & b_2 & a_3 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & a_{n-1} & b_{n-1} \\ 0 & 0 & 0 & \ldots & b_{n-1} & a_n \end{pmatrix}$

where the $a_1,\ldots,a_n,b_1,\ldots,b_{n-1}$ are jointly independent real random variables, with $a_1,\ldots,a_n \equiv N(0,1)_{\bf R}$ being standard real gaussians, and each $b_i$ having a $\chi$-distribution:

$\displaystyle b_i = \Big( \sum_{j=1}^{i} |z_{i,j}|^2 \Big)^{1/2}$

where the $z_{i,j} \equiv N(0,1)_{\bf C}$ are iid complex gaussians. Let $M_n$ be drawn from GUE. Then the joint eigenvalue distribution of $M_n$ is identical to the joint eigenvalue distribution of $D_n$.

*Proof:* Let $M_n$ be drawn from GUE. We can write

$\displaystyle M_n = \begin{pmatrix} M_{n-1} & X_n \\ X_n^* & a_n \end{pmatrix}$

where $M_{n-1}$ is drawn from the $(n-1) \times (n-1)$ GUE, $a_n \equiv N(0,1)_{\bf R}$, and $X_n \in {\bf C}^{n-1}$ is a random gaussian vector with all entries iid with distribution $N(0,1)_{\bf C}$. Furthermore, $M_{n-1}, X_n, a_n$ are jointly independent.

We now apply the tridiagonal matrix algorithm. Let $b_{n-1} := |X_n|$; then $b_{n-1}$ has the $\chi$-distribution indicated in the proposition. We then conjugate $M_n$ by a unitary matrix $U$ that preserves the final basis vector $e_n$, and maps $X_n$ to $b_{n-1} e_{n-1}$. Then we have

$\displaystyle U M_n U^* = \begin{pmatrix} \tilde M_{n-1} & b_{n-1} e_{n-1} \\ b_{n-1} e_{n-1}^* & a_n \end{pmatrix}$

where $\tilde M_{n-1}$ is conjugate to $M_{n-1}$. Now we make the crucial observation: because $M_{n-1}$ is distributed according to GUE (which is a unitarily invariant ensemble), and $U$ is a unitary matrix independent of $M_{n-1}$, $\tilde M_{n-1}$ is also distributed according to GUE, and remains independent of both $b_{n-1}$ and $a_n$.

We continue this process, expanding $\tilde M_{n-1}$ as

$\displaystyle \begin{pmatrix} M_{n-2} & X_{n-1} \\ X_{n-1}^* & a_{n-1} \end{pmatrix}.$

Applying a further unitary conjugation that fixes $e_{n-1}, e_n$ but maps $X_{n-1}$ to $b_{n-2} e_{n-2}$, we may replace $X_{n-1}$ by $b_{n-2} e_{n-2}$ while transforming $M_{n-2}$ to another GUE matrix independent of $a_n, b_{n-1}, a_{n-1}, b_{n-2}$. Iterating this process, we eventually obtain a coupling of $D_n$ to $M_n$ by unitary conjugations, and the claim follows.
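Proposition 2 is also easy to test empirically. A minimal sketch, using the fact that the squared norm of an $i$-dimensional complex gaussian vector is distributed as half of a chi-squared variable with $2i$ degrees of freedom:

```python
import numpy as np

def sample_gue_tridiagonal(n, rng):
    """Trotter's tridiagonal model: standard real gaussians a_i on the
    diagonal; b_i = norm of an i-dimensional complex gaussian vector,
    i.e. sqrt((1/2) chi^2 with 2i degrees of freedom), off the diagonal."""
    a = rng.normal(0, 1, n)
    b = np.sqrt(0.5 * rng.chisquare(2 * np.arange(1, n)))
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

rng = np.random.default_rng(3)
n = 300
lam = np.linalg.eigvalsh(sample_gue_tridiagonal(n, rng))
print(lam.min() / np.sqrt(n), lam.max() / np.sqrt(n))  # ~ -2 and ~ 2, as for GUE
```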

The determinant of a tridiagonal matrix is not quite as simple as the determinant of a triangular matrix (in which case it is simply the product of the diagonal entries), but it is pretty close: the determinant of the above matrix is given by solving the recursion

$\displaystyle d_k = a_k d_{k-1} - b_{k-1}^2 d_{k-2}$

(where $d_k$ denotes the determinant of the top-left $k \times k$ minor) with the initial conditions $d_0 = 1$ and $d_{-1} = 0$. Thus, instead of the product of a sequence of independent scalar $\chi$ distributions as in the gaussian matrix case, the determinant of GUE ends up being controlled by the product of a sequence of independent $2 \times 2$ matrices whose entries are given by gaussians and $\chi$ distributions. In this case, one cannot immediately take logarithms and hope to get something for which the martingale central limit theorem can be applied, but some *ad hoc* manipulation of these matrix products eventually does make this strategy work. (Roughly speaking, one has to work with the logarithm of the Frobenius norm of the matrix first.)
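To illustrate (without reproducing the actual argument of the paper), here is one way to evaluate this recursion stably, as a renormalised product of $2 \times 2$ transfer matrices, tracking logarithms of norms to avoid overflow:

```python
import numpy as np

def tridiag_logabsdet(a, b):
    """log|det| of the tridiagonal matrix with diagonal a and off-diagonal b,
    via d_k = a_k d_{k-1} - b_{k-1}^2 d_{k-2} in transfer-matrix form,
    renormalising the state vector at each step."""
    v = np.array([1.0, 0.0])        # (d_0, d_{-1})
    logscale = 0.0
    for k in range(len(a)):
        bb = b[k - 1] ** 2 if k > 0 else 0.0
        v = np.array([[a[k], -bb], [1.0, 0.0]]) @ v   # (d_k, d_{k-1})
        norm = np.linalg.norm(v)
        v /= norm                   # renormalise to avoid overflow
        logscale += np.log(norm)
    return logscale + np.log(abs(v[0]))
```

Combined with the tridiagonal sampler above, this gives a cheap Monte Carlo check of the central limit theorem in Theorem 1.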

This week I am at the American Institute of Mathematics, as an organiser on a workshop on the universality phenomenon in random matrices. There have been a number of interesting discussions so far in this workshop. Percy Deift, in a lecture on universality for invariant ensembles, gave some applications of what he only half-jokingly termed "the most important identity in mathematics", namely the formula

$\displaystyle \det( 1 + AB ) = \det( 1 + BA ) \ \ \ \ \ (1)$

whenever $A, B$ are $n \times k$ and $k \times n$ matrices respectively (or more generally, $A$ and $B$ could be linear operators with sufficiently good spectral properties that make both sides equal). Note that the left-hand side is an $n \times n$ determinant, while the right-hand side is a $k \times k$ determinant; this formula is particularly useful when computing determinants of large matrices (or of operators), as one can often use it to transform such determinants into much smaller determinants. In particular, the asymptotic behaviour of $n \times n$ determinants as $n \rightarrow \infty$ can be converted via this formula to determinants of a fixed size (independent of $n$), which is often a more favourable situation to analyse. Unsurprisingly, this trick is particularly useful for understanding the asymptotic behaviour of determinantal processes.

There are many ways to prove the identity (1). One is to observe first that when $A, B$ are invertible square matrices of the same size, $1+AB$ and $1+BA$ are conjugate to each other and thus clearly have the same determinant; a density argument then removes the invertibility hypothesis, and a padding-by-zeroes argument then extends the square case to the rectangular case. Another is to proceed via the spectral theorem, noting that $AB$ and $BA$ have the same non-zero eigenvalues.
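The identity (1) is also easy to check numerically, which I find useful for catching transposition errors when applying it:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 7, 3
A = rng.standard_normal((n, k))
B = rng.standard_normal((k, n))
lhs = np.linalg.det(np.eye(n) + A @ B)   # n x n determinant
rhs = np.linalg.det(np.eye(k) + B @ A)   # k x k determinant
print(lhs, rhs)                          # equal up to rounding
```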

By rescaling, one obtains the variant identity

$\displaystyle \det( z + AB ) = z^{n-k} \det( z + BA )$

which essentially relates the characteristic polynomial of $AB$ with that of $BA$. When $n = k$, a comparison of coefficients already gives important basic identities such as $\hbox{tr}(AB) = \hbox{tr}(BA)$ and $\det(AB) = \det(BA)$; when $n$ is not equal to $k$, an inspection of the $z^{n-k}$ coefficient similarly gives the Cauchy-Binet formula (which, incidentally, is also useful when performing computations on determinantal processes).
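For instance, here is a quick numerical check of the $z^{n-k}$ coefficient claim, in the form "sum of $k \times k$ principal minors of $AB$ equals $\det(BA)$":

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n, k = 5, 2
A = rng.standard_normal((n, k))
B = rng.standard_normal((k, n))
AB = A @ B
# coefficient of z^{n-k} in det(z + AB) is the sum of k x k principal minors
minors = sum(np.linalg.det(AB[np.ix_(S, S)]) for S in combinations(range(n), k))
print(minors, np.linalg.det(B @ A))      # equal up to rounding
```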

Thanks to this formula (and with a crucial insight of Alice Guionnet), I was able to solve a problem (on outliers for the circular law) that I had in the back of my mind for a few months, and initially posed to me by Larry Abbott; I hope to talk more about this in a future post.

Today, though, I wish to talk about another piece of mathematics that emerged from an afternoon of free-form discussion that we managed to schedule within the AIM workshop. Specifically, we hammered out a heuristic model of the *mesoscopic* structure of the eigenvalues $\lambda_1 \leq \ldots \leq \lambda_n$ of the $n \times n$ Gaussian Unitary Ensemble (GUE), where $n$ is a large integer. As is well known, the probability density of these eigenvalues is given by the *Ginibre distribution*

$\displaystyle \frac{1}{Z_n} e^{-H(\lambda)}\ d\lambda$

where $d\lambda = d\lambda_1 \ldots d\lambda_n$ is Lebesgue measure on the Weyl chamber $\{ (\lambda_1,\ldots,\lambda_n) \in {\bf R}^n: \lambda_1 \leq \ldots \leq \lambda_n \}$, $Z_n$ is a constant, and the Hamiltonian $H$ is given by the formula

$\displaystyle H(\lambda_1,\ldots,\lambda_n) := \sum_{j=1}^n \frac{\lambda_j^2}{2} - 2 \sum_{1 \leq i < j \leq n} \log |\lambda_i - \lambda_j|.$

At the macroscopic scale of $\sqrt{n}$, the eigenvalues are distributed according to the Wigner semicircle law

$\displaystyle \rho_{sc}(x) := \frac{1}{2\pi} (4 - x^2)_+^{1/2}.$

Indeed, if one defines the *classical location* $\gamma_i^{cl}$ of the $i$-th eigenvalue to be the unique solution in $[-2\sqrt{n}, 2\sqrt{n}]$ to the equation

$\displaystyle \int_{-2\sqrt{n}}^{\gamma_i^{cl}} \rho_{sc}\Big( \frac{x}{\sqrt{n}} \Big)\ \frac{dx}{\sqrt{n}} = \frac{i}{n}$

then it is known that the random variable $\lambda_i$ is quite close to $\gamma_i^{cl}$. Indeed, a result of Gustavsson shows that, in the bulk region when $\epsilon n \leq i \leq (1-\epsilon) n$, $\lambda_i$ is distributed asymptotically as a gaussian random variable with mean $\gamma_i^{cl}$ and variance comparable to $\log n$ times $\Big( \frac{1}{\sqrt{n} \rho_{sc}( \gamma_i^{cl} / \sqrt{n} )} \Big)^2$. Note that from the semicircular law, the factor $\frac{1}{\sqrt{n} \rho_{sc}( \gamma_i^{cl} / \sqrt{n} )}$ is the mean eigenvalue spacing.

At the other extreme, at the microscopic scale of the mean eigenvalue spacing (which is comparable to $n^{-1/2}$ in the bulk, but can be as large as $n^{-1/6}$ at the edge), the eigenvalues are asymptotically distributed with respect to a special determinantal point process, namely the Dyson sine process in the bulk (and the Airy process on the edge), as discussed in this previous post.

Here, I wish to discuss the *mesoscopic* structure of the eigenvalues, in which one involves scales that are intermediate between the microscopic scale $n^{-1/2}$ and the macroscopic scale $n^{1/2}$, for instance in correlating the eigenvalues $\lambda_i$ and $\lambda_j$ in the regime $|i-j| \sim n^\theta$ for some $0 < \theta < 1$. Here, there is a surprising phenomenon; there is quite a long-range correlation between such eigenvalues. The result of Gustavsson shows that both $\lambda_i$ and $\lambda_j$ behave asymptotically like gaussian random variables, but a further result from the same paper shows that the correlation between these two random variables is asymptotic to $1 - \theta$ (in the bulk, at least); thus, for instance, adjacent eigenvalues $\lambda_{i+1}$ and $\lambda_i$ are almost perfectly correlated (which makes sense, as their spacing is much less than either of their standard deviations), but even very distant eigenvalues, such as $\lambda_{n/4}$ and $\lambda_{3n/4}$, have a correlation comparable to $1/\log n$. One way to get a sense of this is to look at the trace

$\displaystyle \lambda_1 + \ldots + \lambda_n = \hbox{tr}(M_n).$

This is also the sum of the diagonal entries of a GUE matrix, and is thus normally distributed with a variance of $n$. In contrast, each of the $\lambda_i$ (in the bulk, at least) has a variance comparable to $\log n / n$. In order for these two facts to be consistent, the average correlation between pairs of eigenvalues then has to be of the order of $1/\log n$.

Below the fold, I give a heuristic way to see this correlation, based on Taylor expansion of the convex Hamiltonian $H(\lambda)$ around its minimum $\gamma = (\gamma_1^{cl},\ldots,\gamma_n^{cl})$, which gives a conceptual probabilistic model for the mesoscopic structure of the GUE eigenvalues. While this heuristic is in no way rigorous, it does seem to explain many of the features currently known or conjectured about GUE, and looks likely to extend also to other models.

Let $n$ be a large integer, and let $M_n$ be the Gaussian Unitary Ensemble (GUE), i.e. the random Hermitian matrix with probability distribution

$\displaystyle C_n e^{-\hbox{tr}(M^2)/2}\ dM$

where $dM$ is a Haar measure on Hermitian matrices and $C_n$ is the normalisation constant required to make the distribution of unit mass. The eigenvalues $\lambda_1 < \ldots < \lambda_n$ of this matrix are then a coupled family of $n$ real random variables. For any $1 \leq k \leq n$, we can define the *$k$-point correlation function* $\rho_k(x_1,\ldots,x_k)$ to be the unique symmetric measure on ${\bf R}^k$ such that

$\displaystyle \int_{{\bf R}^k} F(x_1,\ldots,x_k) \rho_k(x_1,\ldots,x_k)\ dx_1 \ldots dx_k = {\bf E} \sum_{i_1,\ldots,i_k \hbox{ distinct}} F(\lambda_{i_1},\ldots,\lambda_{i_k}).$

A standard computation (given for instance in these lecture notes of mine) gives the *Ginibre formula*

$\displaystyle \rho_n(x_1,\ldots,x_n) = C'_n e^{-\sum_{j=1}^n x_j^2/2} \prod_{1 \leq i < j \leq n} |x_i - x_j|^2$

for the $n$-point correlation function, where $C'_n$ is another normalisation constant. Using Vandermonde determinants, one can rewrite this expression in determinantal form as

$\displaystyle \rho_n(x_1,\ldots,x_n) = \det( K_n(x_i,x_j) )_{1 \leq i,j \leq n}$

where the kernel $K_n$ is given by

$\displaystyle K_n(x,y) := \sum_{k=0}^{n-1} \phi_k(x) \phi_k(y)$

where $\phi_k(x) := P_k(x) e^{-x^2/4}$ and $P_0, P_1, P_2, \ldots$ are the ($L^2$-normalised) Hermite polynomials (thus the $\phi_k$ are an orthonormal family in $L^2({\bf R})$, with each $P_k$ being a polynomial of degree $k$). Integrating out one or more of the variables, one is led to the *Gaudin-Mehta formula*

$\displaystyle \rho_k(x_1,\ldots,x_k) = \det( K_n(x_i,x_j) )_{1 \leq i,j \leq k}. \ \ \ \ \ (1)$

(In particular, the normalisation constant $C'_n$ in the previous formula turns out to simply be equal to $\prod_{j=0}^{n-1} \frac{1}{\sqrt{2\pi}\, j!}$.) Again, see these lecture notes for details.

The functions $\phi_k$ can be viewed as an orthonormal basis of eigenfunctions for the *harmonic oscillator operator*

$\displaystyle L \phi := -\frac{d^2 \phi}{dx^2} + \frac{x^2}{4} \phi;$

indeed it is a classical fact that

$\displaystyle L \phi_k = \Big( k + \frac{1}{2} \Big) \phi_k.$

As such, the kernel $K_n$ can be viewed as the integral kernel of the spectral projection operator $1_{(-\infty, n]}(L)$.

From (1) we see that the fine-scale structure of the eigenvalues of GUE is controlled by the asymptotics of $K_n$ as $n \rightarrow \infty$. The two main asymptotics of interest are given by the following lemmas:

Lemma 1 (Asymptotics of $K_n$ in the bulk). Let $-2 < u < 2$, and let $\rho_{sc}(u) := \frac{1}{2\pi} (4-u^2)_+^{1/2}$ be the semicircular law density at $u$. Then, we have

$\displaystyle \frac{1}{\sqrt{n} \rho_{sc}(u)} K_n\Big( u \sqrt{n} + \frac{x}{\sqrt{n} \rho_{sc}(u)},\ u \sqrt{n} + \frac{y}{\sqrt{n} \rho_{sc}(u)} \Big) \rightarrow \frac{\sin \pi(x-y)}{\pi(x-y)}$

as $n \rightarrow \infty$ for any fixed $x, y$ (removing the singularity at $x = y$ in the usual manner).

Lemma 2 (Asymptotics of $K_n$ at the edge). We have

$\displaystyle \frac{1}{n^{1/6}} K_n\Big( 2\sqrt{n} + \frac{x}{n^{1/6}},\ 2\sqrt{n} + \frac{y}{n^{1/6}} \Big) \rightarrow \frac{Ai(x) Ai'(y) - Ai'(x) Ai(y)}{x - y}$

as $n \rightarrow \infty$ for any fixed $x, y$, where $Ai$ is the Airy function

$\displaystyle Ai(x) := \frac{1}{\pi} \int_0^\infty \cos\Big( \frac{t^3}{3} + tx \Big)\ dt,$

again removing the singularity at $x = y$ in the usual manner.
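Both lemmas can be probed numerically. The sketch below (not from the notes) evaluates $K_n$ via the standard three-term recurrence for the Hermite functions, $x \phi_k = \sqrt{k+1}\, \phi_{k+1} + \sqrt{k}\, \phi_{k-1}$, and compares the rescaled kernel in the bulk against the sine kernel of Lemma 1:

```python
import numpy as np

def phis(x, kmax):
    """Hermite functions phi_0..phi_kmax at a point x, via the recurrence
    phi_{k+1} = (x phi_k - sqrt(k) phi_{k-1}) / sqrt(k+1), starting from
    phi_0(x) = (2 pi)^{-1/4} exp(-x^2/4)."""
    p = np.zeros(kmax + 1)
    p[0] = (2 * np.pi) ** -0.25 * np.exp(-x**2 / 4)
    if kmax >= 1:
        p[1] = x * p[0]
    for k in range(1, kmax):
        p[k + 1] = (x * p[k] - np.sqrt(k) * p[k - 1]) / np.sqrt(k + 1)
    return p

def K(n, x, y):
    return phis(x, n - 1) @ phis(y, n - 1)

n, u = 200, 0.5
rho = np.sqrt(4 - u**2) / (2 * np.pi)
s = np.sqrt(n) * rho                 # local density of eigenvalues at u sqrt(n)
x, y = 1.3, 0.2
print(K(n, u * np.sqrt(n) + x / s, u * np.sqrt(n) + y / s) / s)
print(np.sinc(x - y))                # sine kernel; the two approach each other as n grows
```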

The proof of these asymptotics usually proceeds via computing the asymptotics of Hermite polynomials, together with the Christoffel-Darboux formula; this is for instance the approach taken in the previous notes. However, there is a slightly different approach that is closer in spirit to the methods of semi-classical analysis, which was briefly mentioned in the previous notes but not elaborated upon. For sake of completeness, I am recording some notes on this approach here, although to focus on the main ideas I will not be completely rigorous in the derivation (ignoring issues such as convergence of integrals or of operators, or (removable) singularities in kernels caused by zeroes in the denominator).


Our study of random matrices, to date, has focused on somewhat general ensembles, such as iid random matrices or Wigner random matrices, in which the distribution of the individual entries of the matrices was essentially arbitrary (as long as certain moments, such as the mean and variance, were normalised). In these notes, we now focus on two much more special, and much more symmetric, ensembles:


- The Gaussian Unitary Ensemble (GUE), which is an ensemble of random $n \times n$ Hermitian matrices $M_n$ in which the upper-triangular entries $\xi_{ij}$ ($i < j$) are iid with distribution $N(0,1)_{\bf C}$, and the diagonal entries $\xi_{ii}$ are iid with distribution $N(0,1)_{\bf R}$, and independent of the upper-triangular ones; and

- The *Gaussian random matrix ensemble*, which is an ensemble of random $n \times n$ (non-Hermitian) matrices $M_n$ whose entries are iid with distribution $N(0,1)_{\bf C}$.


The symmetric nature of these ensembles will allow us to compute the spectral distribution by exact algebraic means, revealing a surprising connection with orthogonal polynomials and with determinantal processes. This will, for instance, recover the semi-circular law for GUE, but will also reveal *fine* spacing information, such as the distribution of the gap between *adjacent* eigenvalues, which is largely out of reach of tools such as the Stieltjes transform method and the moment method (although the moment method, with some effort, is able to control the extreme edges of the spectrum).


Similarly, we will see for the first time the *circular law* for eigenvalues of non-Hermitian matrices.


There are a number of other highly symmetric ensembles which can also be treated by the same methods, most notably the Gaussian Orthogonal Ensemble (GOE) and the Gaussian Symplectic Ensemble (GSE). However, for simplicity we shall focus just on the above two ensembles. For a systematic treatment of these ensembles, see the text by Deift.


— 1. The spectrum of GUE —


We have already shown using Dyson Brownian motion in Notes 3b that we have the *Ginibre formula*

$\displaystyle \rho_n(\lambda) = C_n e^{-|\lambda|^2/2} |\Delta_n(\lambda)|^2 \ \ \ \ \ (1)$

for the density function of the eigenvalues $(\lambda_1,\ldots,\lambda_n) \in {\bf R}^n_{\geq}$ of a GUE matrix $M_n$, where $|\lambda|^2 := \sum_{j=1}^n \lambda_j^2$ and

$\displaystyle \Delta_n(\lambda) := \prod_{1 \leq i < j \leq n} (\lambda_i - \lambda_j)$

is the *Vandermonde determinant*. We now give an alternate proof of this result (omitting the exact value of the normalising constant $C_n$) that exploits unitary invariance and the change of variables formula (the latter of which we shall do from first principles). The one thing to be careful about is that one has to somehow quotient out by the invariances of the problem before being able to apply the change of variables formula.


One approach here would be to artificially "fix a gauge" and work on some slice of the parameter space which is "transverse" to all the symmetries. With such an approach, one can use the classical change of variables formula. While this can certainly be done, we shall adopt a more "gauge-invariant" approach and carry the various invariances with us throughout the computation. (For a comparison of the two approaches, see this previous blog post.)


We turn to the details. Let $V$ be the space of Hermitian $n \times n$ matrices, then the distribution $\mu_{M_n}$ of a GUE matrix is an absolutely continuous probability measure on $V$, which can be written using the definition of GUE as

$\displaystyle \mu_{M_n} = C_n \Big( \prod_{1 \leq i < j \leq n} e^{-|\xi_{ij}|^2} \Big) \Big( \prod_{1 \leq i \leq n} e^{-\xi_{ii}^2/2} \Big)\ dM_n$

where $dM_n$ is Lebesgue measure on $V$, $\xi_{ij}$ are the coordinates of $M_n$, and $C_n$ is a normalisation constant (the exact value of which depends on how one normalises Lebesgue measure on $V$). We can express this more compactly as

$\displaystyle \mu_{M_n} = C_n e^{-\hbox{tr}(M_n^2)/2}\ dM_n.$

Expressed this way, it is clear that the GUE ensemble is invariant under conjugations $M_n \mapsto U M_n U^*$ by any unitary matrix $U$.


Let $D$ be the diagonal matrix whose entries $\lambda_1 \geq \ldots \geq \lambda_n$ are the eigenvalues of $M_n$ in descending order. Then we have $M_n = U D U^*$ for some unitary matrix $U \in U(n)$. The matrix $U$ is not uniquely determined; if $R$ is a diagonal unitary matrix, then $R$ commutes with $D$, and so one can freely replace $U$ with $UR$. On the other hand, if the eigenvalues of $M_n$ are simple, then the diagonal matrices are the *only* matrices that commute with $D$, and so this freedom to right-multiply by diagonal unitaries is the only failure of uniqueness here. And in any case, from the unitary invariance of GUE, we see that even after conditioning on $D$, we may assume without loss of generality that $U$ is drawn from the invariant Haar measure on $U(n)$. In particular, $U$ and $D$ can be taken to be independent.


Fix a diagonal matrix $D_0 = \hbox{diag}(\lambda_1^0,\ldots,\lambda_n^0)$ for some $\lambda_1^0 > \ldots > \lambda_n^0$, let $\epsilon > 0$ be extremely small, and let us compute the probability

$\displaystyle {\bf P}( \| M_n - D_0 \|_F \leq \epsilon ) \ \ \ \ \ (2)$

that $M_n$ lies within $\epsilon$ of $D_0$ in the Frobenius norm. On the one hand, the probability density of $M_n$ is proportional to

$\displaystyle e^{-|\lambda^0|^2/2}$

near $D_0$ (where we write $\lambda^0 := (\lambda_1^0,\ldots,\lambda_n^0)$) and the volume of a ball of radius $\epsilon$ in the $n^2$-dimensional space $V$ is proportional to $\epsilon^{n^2}$, so (2) is equal to

$\displaystyle ( C'_n + o(1) ) \epsilon^{n^2} e^{-|\lambda^0|^2/2} \ \ \ \ \ (3)$

for some constant $C'_n > 0$ depending only on $n$, where $o(1)$ goes to zero as $\epsilon \rightarrow 0$ (keeping $n$ and $D_0$ fixed). On the other hand, if $\| M_n - D_0 \|_F \leq \epsilon$, then by the Weyl inequality (or the Hoffman-Wielandt inequality) we have $D = D_0 + O(\epsilon)$ (we allow implied constants here to depend on $n$ and on $D_0$). This implies $U D U^* = D_0 + O(\epsilon)$, thus $UD - D_0 U = O(\epsilon)$. As a consequence we see that the off-diagonal elements of $U$ are of size $O(\epsilon)$. We can thus use the inverse function theorem in this local region of parameter space and make the ansatz

$\displaystyle D = D_0 + \epsilon E; \quad U = \exp( \epsilon S ) R$

where $E$ is a bounded diagonal matrix, $R$ is a diagonal unitary matrix, and $S$ is a bounded skew-adjoint matrix with zero diagonal. (Note here the emergence of the freedom to right-multiply by diagonal unitaries.) Note that the map $(E,S) \mapsto \exp(\epsilon S) (D_0 + \epsilon E) \exp(-\epsilon S)$ has a non-degenerate Jacobian, so the inverse function theorem applies to uniquely specify $E, S$ (and thus $D, U$) from $M_n$ and $R$ in this local region of parameter space.


Conversely, if $D, U$ take the above form, then we can Taylor expand and conclude that

$\displaystyle M_n = U D U^* = D_0 + \epsilon E + \epsilon ( S D_0 - D_0 S ) + O(\epsilon^2)$

and so

$\displaystyle \| M_n - D_0 \|_F = \epsilon \| E + S D_0 - D_0 S \|_F + O(\epsilon^2).$

We can thus bound (2) from above and below by expressions of the form

$\displaystyle {\bf P}( \| E + S D_0 - D_0 S \|_F \leq 1 + O(\epsilon) ). \ \ \ \ \ (4)$

As $U$ is distributed using Haar measure on $U(n)$, $S$ is (locally) distributed using $\epsilon^{n^2-n}$ times a constant multiple of Lebesgue measure $dS$ on the space $W$ of skew-adjoint matrices with zero diagonal, which has dimension $n^2-n$. Meanwhile, $D$ is distributed using $( \rho_n(\lambda^0) + o(1) ) \epsilon^n$ times Lebesgue measure $dE$ on the space of real diagonal matrices. Thus we can rewrite (4) as

$\displaystyle C''_n \epsilon^{n^2} ( \rho_n(\lambda^0) + o(1) ) \int\int_{\| E + S D_0 - D_0 S \|_F \leq 1 + O(\epsilon)} dE\ dS$

where $dE$ and $dS$ denote Lebesgue measure and $C''_n > 0$ depends only on $n$.


Observe that the map $S \mapsto S D_0 - D_0 S$ dilates the (complex-valued) $ij$ entry of $S$ by $\lambda_j^0 - \lambda_i^0$, and so the Jacobian of this map is

$\displaystyle \prod_{1 \leq i < j \leq n} |\lambda_i^0 - \lambda_j^0|^2 = |\Delta_n(\lambda^0)|^2.$

Applying the change of variables $T := S D_0 - D_0 S$, we can express the above as

$\displaystyle C''_n \epsilon^{n^2} ( \rho_n(\lambda^0) + o(1) ) |\Delta_n(\lambda^0)|^{-2} \int\int_{\| E + T \|_F \leq 1 + O(\epsilon)} dE\ dT.$

The integral here is of the form $C'''_n + O(\epsilon)$ for some other constant $C'''_n > 0$. Comparing this formula with (3) we see that

$\displaystyle \rho_n(\lambda^0) + o(1) = C''''_n e^{-|\lambda^0|^2/2} |\Delta_n(\lambda^0)|^2 + o(1)$

for yet another constant $C''''_n > 0$. Sending $\epsilon \rightarrow 0$ we recover an exact formula

$\displaystyle \rho_n(\lambda^0) = C''''_n e^{-|\lambda^0|^2/2} |\Delta_n(\lambda^0)|^2$

when the spectrum $\lambda^0$ is simple. Since almost all Hermitian matrices have simple spectrum (see Exercise 10 of Notes 3a), this gives the full spectral distribution (1) of GUE, except for the issue of the unspecified constant.


Remark 1. In principle, this method should also recover the explicit normalising constant $C_n$ in (1), but to do this it appears one needs to understand the volume of the fundamental domain of $U(n)$ with respect to the logarithm map, or equivalently to understand the volume of the unit ball of Hermitian matrices in the operator norm. I do not know of a simple way to compute this quantity (though it can be inferred from (1) and the above analysis). One can also recover the normalising constant through the machinery of determinantal processes, see below.


Remark 2. The above computation can be generalised to other $U(n)$-conjugation-invariant ensembles $M_n$ whose probability distribution is of the form

$\displaystyle \mu_{M_n} = C_n e^{-\hbox{tr} V(M_n)}\ dM_n$

for some potential function $V: {\bf R} \rightarrow {\bf R}$ (where we use the spectral theorem to define $V(M_n)$), yielding a density function for the spectrum of the form

$\displaystyle \rho_n(\lambda) = C'_n e^{-\sum_{j=1}^n V(\lambda_j)} |\Delta_n(\lambda)|^2.$

Given suitable regularity conditions on $V$, one can then generalise many of the arguments in these notes to such ensembles. See this book by Deift for details.


— 2. The spectrum of gaussian matrices —


The above method also works for gaussian matrices $G$, as was first observed by Dyson (though the final formula was first obtained by Ginibre, using a different method). Here, the density function is given by

$\displaystyle c_n e^{-\hbox{tr}(G G^*)}\ dG \ \ \ \ \ (5)$

where $c_n > 0$ is a constant and $dG$ is Lebesgue measure on the space of all complex $n \times n$ matrices. This is invariant under both left and right multiplication by unitary matrices, so in particular it is invariant under unitary conjugations as before.


This matrix $G$ has $n$ complex (generalised) eigenvalues $\lambda_1(G),\ldots,\lambda_n(G)$, which are usually distinct:


Exercise 3. Let $n \geq 2$. Show that the space of matrices in ${\bf C}^{n \times n}$ with a repeated eigenvalue has real codimension $2$.


Unlike the Hermitian situation, though, there is no natural way to order these complex eigenvalues. We will thus consider all possible permutations at once, and define the spectral density function $\rho_n(\lambda_1,\ldots,\lambda_n)$ of $G$ by duality and the formula

$\displaystyle \int_{{\bf C}^n} F(\lambda_1,\ldots,\lambda_n) \rho_n(\lambda_1,\ldots,\lambda_n)\ d\lambda_1 \ldots d\lambda_n = {\bf E} \sum_{\sigma \in S_n} F( \lambda_{\sigma(1)}(G), \ldots, \lambda_{\sigma(n)}(G) )$

for all test functions $F$. By the Riesz representation theorem, this uniquely defines $\rho_n$ (as a distribution, at least), although the total mass of $\rho_n$ is $n!$ rather than $1$ due to the ambiguity in the spectrum.


Now we compute $\rho_n$ (up to constants). In the Hermitian case, the key was to use the factorisation $M_n = U D U^*$. This particular factorisation is of course unavailable in the non-Hermitian case. However, if the non-Hermitian matrix $G$ has simple spectrum, it can always be factored instead as $G = U T U^*$, where $U$ is unitary and $T$ is upper triangular. Indeed, if one applies the Gram-Schmidt process to the eigenvectors of $G$ and uses the resulting orthonormal basis to form $U$, one easily verifies the desired factorisation. Note that the eigenvalues of $G$ are the same as those of $T$, which in turn are just the diagonal entries of $T$.


Exercise 4. Show that this factorisation is also available when there are repeated eigenvalues. (Hint: use the Jordan normal form.)


To use this factorisation, we first have to understand how unique it is, at least in the generic case when there are no repeated eigenvalues. As noted above, if $G = U T U^*$, then the diagonal entries of $T$ form the same set as the eigenvalues of $G$.


Now suppose we fix the diagonal $\lambda_1,\ldots,\lambda_n$ of $T$, which amounts to picking an ordering of the eigenvalues of $G$. The eigenvalues of $T$ are $\lambda_1,\ldots,\lambda_n$, and furthermore for each $1 \leq i \leq n$, the eigenvector of $T$ associated to $\lambda_i$ lies in the span of the first $i$ basis vectors $e_1,\ldots,e_i$, with a non-zero $e_i$ coefficient (as can be seen by Gaussian elimination or Cramer's rule). As $G = U T U^*$ with $U$ unitary, we conclude that for each $1 \leq i \leq n$, the $i$-th column of $U$ lies in the span of the eigenvectors associated to $\lambda_1,\ldots,\lambda_i$. As these columns are orthonormal, they must thus arise from applying the Gram-Schmidt process to these eigenvectors (as discussed earlier). This argument also shows that once the diagonal entries of $T$ are fixed, each column of $U$ is determined up to rotation by a unit phase. In other words, the only remaining freedom is to replace $U$ by $UR$ for some unit diagonal matrix $R$, and then to replace $T$ by $R^* T R$ to counterbalance this change of $U$.


To summarise, the factorisation $G = U T U^*$ is unique up to specifying an enumeration $\lambda_1,\ldots,\lambda_n$ of the eigenvalues of $G$ and right-multiplying $U$ by diagonal unitary matrices, and then conjugating $T$ by the same matrix. Given a matrix $G$, we may apply these symmetries randomly, ending up with a random enumeration $\lambda_1,\ldots,\lambda_n$ of the eigenvalues of $G$ (whose distribution is invariant with respect to permutations) together with a random factorisation $G = U T U^*$ such that $T$ has diagonal entries $\lambda_1,\ldots,\lambda_n$ in that order, and the distribution of $T$ is invariant under conjugation by diagonal unitary matrices. Also, since $G$ is itself invariant under unitary conjugations, we may also assume that $U$ is distributed uniformly according to the Haar measure of $U(n)$, and independently of $T$.


Putting all this together, the gaussian matrix ensemble $G$, together with a randomly chosen enumeration $\lambda_1,\ldots,\lambda_n$ of the eigenvalues, can almost surely be factorised as $U T U^*$, where $T = (t_{ij})_{1 \leq i \leq j \leq n}$ is an upper-triangular matrix with diagonal entries $t_{ii} = \lambda_i$, distributed according to some distribution

$\displaystyle \psi\big( (t_{ij})_{1 \leq i \leq j \leq n} \big)\ \prod_{1 \leq i \leq j \leq n} dt_{ij}$

which is invariant with respect to conjugating $T$ by diagonal unitary matrices, and $U$ is uniformly distributed according to the Haar measure of $U(n)$, independently of $T$.


Now let $T_0 = (t^0_{ij})_{1 \leq i \leq j \leq n}$ be an upper triangular matrix with complex entries whose diagonal entries $\lambda_1^0,\ldots,\lambda_n^0$ are distinct. As in the previous section, we consider the probability

$\displaystyle {\bf P}( \| G - T_0 \|_F \leq \epsilon ). \ \ \ \ \ (6)$

On the one hand, since the space of complex $n \times n$ matrices has $2n^2$ real dimensions, we see from (5) that this expression is equal to

$\displaystyle ( c'_n + o(1) ) \epsilon^{2n^2} e^{-\hbox{tr}(T_0 T_0^*)} \ \ \ \ \ (7)$

for some constant $c'_n > 0$.


Now we compute (6) using the factorisation $G = U T U^*$. Suppose that $\| G - T_0 \|_F \leq \epsilon$, so $G = T_0 + O(\epsilon)$. As the eigenvalues of $T_0$ are $\lambda_1^0,\ldots,\lambda_n^0$, which are assumed to be distinct, we see (from the inverse function theorem) that for $\epsilon$ small enough, $G$ has eigenvalues $\lambda_1^0 + O(\epsilon),\ldots,\lambda_n^0 + O(\epsilon)$. With probability $1/n!$, the diagonal entries of $T$ are thus $\lambda_1^0 + O(\epsilon),\ldots,\lambda_n^0 + O(\epsilon)$ in that order. We now restrict to this event (the factor $1/n!$ will eventually be absorbed into one of the unspecified constants).


Let $v_1,\ldots,v_n$ be eigenvectors of $T_0$ associated to $\lambda_1^0,\ldots,\lambda_n^0$ respectively; then the Gram-Schmidt process applied to $v_1,\ldots,v_n$ gives the standard basis $e_1,\ldots,e_n$ (up to phases). By the inverse function theorem, we thus see that we have eigenvectors $v_1 + O(\epsilon),\ldots,v_n + O(\epsilon)$ of $G$, which when the Gram-Schmidt process is applied, gives a perturbation $e_1 + O(\epsilon),\ldots,e_n + O(\epsilon)$ of the standard basis. This gives a factorisation $G = U T U^*$ in which $U = 1 + O(\epsilon)$, and hence $T = T_0 + O(\epsilon)$. This is however not the most general factorisation available, even after fixing the diagonal entries of $T$, due to the freedom to right-multiply $U$ by diagonal unitary matrices $R$. We thus see that the correct ansatz here is to have

$\displaystyle U = ( 1 + O(\epsilon) ) R; \quad T = R^* ( T_0 + O(\epsilon) ) R$

for some diagonal unitary matrix $R$.


In analogy with the GUE case, we can use the inverse function theorem and make the more precise ansatz

$\displaystyle U = \exp( \epsilon S ) R; \quad T = R^* ( T_0 + \epsilon E ) R$

where $S$ is skew-Hermitian with zero diagonal and size $O(1)$, $R$ is diagonal unitary, and $E$ is an upper triangular matrix of size $O(1)$. From the invariance $U \mapsto UR; T \mapsto R^* T R$ we see that $R$ is distributed uniformly across all diagonal unitaries. Meanwhile, from the unitary conjugation invariance, $S$ is distributed according to $\epsilon^{n^2-n}$ times a constant multiple of Lebesgue measure $dS$ on the space of skew-Hermitian matrices with zero diagonal, which has real dimension $n^2-n$; and from the definition of $\psi$, $E$ is distributed according to $( \psi(T_0) + o(1) ) \epsilon^{n^2+n}$ times Lebesgue measure $dE$ on the space of upper-triangular matrices, which has real dimension $n^2+n$. Furthermore, the invariances ensure that the random variables $R, S, E$ are distributed independently. Finally, we have

$\displaystyle G = U T U^* = \exp( \epsilon S ) ( T_0 + \epsilon E ) \exp( -\epsilon S ).$

Thus we may rewrite (6) as

$\displaystyle c''_n ( \psi(T_0) + o(1) ) \epsilon^{2n^2} \int\int_{\| \exp(\epsilon S)(T_0 + \epsilon E)\exp(-\epsilon S) - T_0 \|_F \leq \epsilon} dE\ dS \ \ \ \ \ (8)$

for some constant $c''_n > 0$ (the integration in $R$ being absorbable into this constant). We can Taylor expand

$\displaystyle \exp( \epsilon S ) ( T_0 + \epsilon E ) \exp( -\epsilon S ) = T_0 + \epsilon ( E + S T_0 - T_0 S ) + O(\epsilon^2)$

and so we can bound (8) above and below by expressions of the form

$\displaystyle c''_n ( \psi(T_0) + o(1) ) \epsilon^{2n^2} \int\int_{\| E + S T_0 - T_0 S \|_F \leq 1 + O(\epsilon)} dE\ dS.$

The Lebesgue measure $dE$ is invariant under translations by upper triangular matrices, so we may rewrite the above expression as

$\displaystyle c''_n ( \psi(T_0) + o(1) ) \epsilon^{2n^2} \int\int_{\| E + \pi( S T_0 - T_0 S ) \|_F \leq 1 + O(\epsilon)} dE\ dS \ \ \ \ \ (9)$

where $\pi( S T_0 - T_0 S )$ is the strictly lower triangular component of $S T_0 - T_0 S$.


The next step is to make the (linear) change of variables $z := \pi( S T_0 - T_0 S )$. We check dimensions: $S$ ranges in the space of skew-adjoint matrices with zero diagonal, which has dimension $n^2-n$, as does the space of strictly lower-triangular matrices, which is where $z$ ranges. So we can in principle make this change of variables, but we first have to compute the Jacobian of the transformation (and check that it is non-zero). For this, we switch to coordinates. Write $S = (s_{ij})_{1 \leq i,j \leq n}$ and $z = (z_{ij})_{1 \leq j < i \leq n}$. In coordinates, the equation $z = \pi( S T_0 - T_0 S )$ becomes

$\displaystyle z_{ij} = \sum_{k \leq j} s_{ik} t^0_{kj} - \sum_{k \geq i} t^0_{ik} s_{kj} \quad (1 \leq j < i \leq n)$

or equivalently

$\displaystyle z_{ij} = ( \lambda_j^0 - \lambda_i^0 ) s_{ij} + \sum_{k < j} s_{ik} t^0_{kj} - \sum_{k > i} t^0_{ik} s_{kj}.$

Thus for instance

$\displaystyle z_{n1} = ( \lambda_1^0 - \lambda_n^0 ) s_{n1}$

$\displaystyle z_{(n-1)1} = ( \lambda_1^0 - \lambda_{n-1}^0 ) s_{(n-1)1} - t^0_{(n-1)n} s_{n1}$

$\displaystyle z_{n2} = ( \lambda_2^0 - \lambda_n^0 ) s_{n2} + s_{n1} t^0_{12}$

etc. We then observe that the transformation matrix from the $s_{ij}$ to the $z_{ij}$ is triangular (when the coordinates are ordered with $i-j$ decreasing), with diagonal entries given by the $\lambda_j^0 - \lambda_i^0$ for $1 \leq j < i \leq n$. The Jacobian of the (complex-linear) map $S \mapsto z$ is thus given by

$\displaystyle \prod_{1 \leq j < i \leq n} |\lambda_j^0 - \lambda_i^0|^2 = |\Delta_n(\lambda^0)|^2$

which is non-zero by the hypothesis that the $\lambda_i^0$ are distinct. We may thus rewrite (9) as

$\displaystyle c''_n ( \psi(T_0) + o(1) ) \epsilon^{2n^2} |\Delta_n(\lambda^0)|^{-2} \int\int_{\| E + z \|_F \leq 1 + O(\epsilon)} dE\ dz$

where $dz$ is Lebesgue measure on strictly lower-triangular matrices. The integral here is equal to $c'''_n + O(\epsilon)$ for some constant $c'''_n > 0$. Comparing this with (7), cancelling the factor of $\epsilon^{2n^2}$, and sending $\epsilon \rightarrow 0$, we obtain the formula

$\displaystyle \psi(T_0) = c''''_n |\Delta_n(\lambda^0)|^2 e^{-\hbox{tr}(T_0 T_0^*)}$

for some constant $c''''_n > 0$. We can expand

$\displaystyle e^{-\hbox{tr}(T_0 T_0^*)} = e^{-\sum_{j=1}^n |\lambda_j^0|^2} e^{-\sum_{1 \leq i < j \leq n} |t^0_{ij}|^2}.$

If we integrate out the off-diagonal variables $t^0_{ij}$ for $1 \leq i < j \leq n$, we see that the density function for the diagonal entries $(\lambda_1,\ldots,\lambda_n)$ of $T$ is proportional to

$\displaystyle |\Delta_n(\lambda)|^2 e^{-\sum_{j=1}^n |\lambda_j|^2}.$

Since these entries are a random permutation of the eigenvalues of $G$, we conclude the *Ginibre formula*

$\displaystyle \rho_n(\lambda) = c'''''_n |\Delta_n(\lambda)|^2 e^{-\sum_{j=1}^n |\lambda_j|^2} \ \ \ \ \ (10)$

for the joint density of the eigenvalues $(\lambda_1,\ldots,\lambda_n) \in {\bf C}^n$ of a gaussian random matrix, where $c'''''_n > 0$ is a constant.


Remark 5. Given that (1) can be derived using Dyson Brownian motion, it is natural to ask whether (10) can be derived by a similar method. It seems that in order to do this, one needs to consider a Dyson-like process not just on the eigenvalues $\lambda_1,\ldots,\lambda_n$, but on the entire triangular matrix $T$ (or more precisely, on the moduli space formed by quotienting out the action of conjugation by unitary diagonal matrices). Unfortunately the computations seem to get somewhat complicated, and we do not present them here.


— 3. Mean field approximation —


We can use the formula (1) for the joint distribution of the eigenvalues of GUE to heuristically derive the semicircular law, as follows.


It is intuitively plausible that the spectrum should concentrate in regions in which $\rho_n(\lambda)$ is as large as possible. So it is now natural to ask how to optimise this function. Note that the expression in (1) is non-negative, and vanishes whenever two of the $\lambda_i$ collide, or when one or more of the $\lambda_i$ go off to infinity, so a maximum should exist away from these degenerate situations.


We may take logarithms and write

$\displaystyle - \log \rho_n(\lambda) = \sum_{j=1}^n \frac{\lambda_j^2}{2} - 2 \sum_{1 \leq i < j \leq n} \log |\lambda_i - \lambda_j| + C \ \ \ \ \ (11)$

where $C = C_n$ is a constant whose exact value is not of importance to us. From a mathematical physics perspective, one can interpret (11) as a Hamiltonian for $n$ particles at positions $\lambda_1,\ldots,\lambda_n$, subject to a confining harmonic potential (these are the $\frac{\lambda_j^2}{2}$ terms) and a repulsive logarithmic potential between particles (these are the $-2 \log|\lambda_i - \lambda_j|$ terms).


Our objective is now to find a distribution of $\lambda$ that minimises this expression.


We know from previous notes that the $\lambda_i$ should have magnitude $O(\sqrt{n})$. Let us then heuristically make a *mean field approximation*, in that we approximate the discrete spectral measure $\frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j/\sqrt{n}}$ by a continuous probability measure $\rho(x)\ dx$. (Secretly, we know from the semi-circular law that we should be able to take $\rho = \rho_{sc}$, but pretend that we do not know this fact yet.) Then we can heuristically approximate (11) as

$\displaystyle n^2 \Big( \int_{\bf R} \frac{x^2}{2} \rho(x)\ dx - \int_{\bf R} \int_{\bf R} \log|x-y|\ \rho(x) \rho(y)\ dx dy \Big) + \ldots$

(where the omitted terms are lower order in $n$ or do not depend on $\rho$), and so we expect the distribution $\rho$ to minimise the functional

$\displaystyle \int_{\bf R} \frac{x^2}{2} \rho(x)\ dx - \int_{\bf R} \int_{\bf R} \log|x-y|\ \rho(x) \rho(y)\ dx dy. \ \ \ \ \ (12)$


One can compute the Euler-Lagrange equations of this functional:


Exercise 6. Working formally, and assuming that $\rho$ is a probability measure that minimises (12), argue that

$\displaystyle \frac{x^2}{2} - 2 \int_{\bf R} \log|x-y|\ \rho(y)\ dy = C$

for some constant $C$ and all $x$ in the support of $\rho$. For all $x$ outside of the support, establish the inequality

$\displaystyle \frac{x^2}{2} - 2 \int_{\bf R} \log|x-y|\ \rho(y)\ dy \geq C.$


There are various ways we can solve this equation for $\rho$; we sketch here a complex-analytic method. Differentiating in $x$, we formally obtain

$\displaystyle x - 2\ p.v. \int_{\bf R} \frac{\rho(y)}{x-y}\ dy = 0$

on the support of $\rho$. But recall that if we let

$\displaystyle s(z) := \int_{\bf R} \frac{\rho(y)}{y - z}\ dy$

be the Stieltjes transform of the probability measure $\rho(x)\ dx$, then we have

$\displaystyle \hbox{Im}\ s(x + i0^+) = \pi \rho(x)$

and

$\displaystyle \hbox{Re}\ s(x + i0^+) = - p.v. \int_{\bf R} \frac{\rho(y)}{x-y}\ dy.$

We conclude that

$\displaystyle x + 2\ \hbox{Re}\ s(x + i0^+) = 0$

for all $x$ in the support of $\rho$, which we rearrange as

$\displaystyle \hbox{Im}\big( s^2(x+i0^+) + x s(x+i0^+) \big) = 0.$

This makes the function $s^2(z) + z s(z)$ entire (it is analytic in the upper half-plane, obeys the symmetry $s(\overline{z}) = \overline{s(z)}$, and has no jump across the real line). On the other hand, as $s(z) = -\frac{1}{z} + O(1/|z|^2)$ as $z \rightarrow \infty$, $s^2(z) + z s(z)$ goes to $-1$ at infinity. Applying Liouville's theorem, we conclude that $s^2(z) + z s(z)$ is constant, thus we have the familiar equation

$\displaystyle s^2 + z s + 1 = 0$

which can then be solved to obtain the semi-circular law as in previous notes.


Remark 7. Recall from Notes 3b that Dyson Brownian motion can be used to derive the formula (1). One can then interpret the Dyson Brownian motion proof of the semi-circular law for GUE in Notes 4 as a rigorous formalisation of the above mean field approximation heuristic argument.


One can perform a similar heuristic analysis for the spectral measure of a random gaussian matrix, giving a description of the limiting density:


Exercise 8. Using heuristic arguments similar to those above, argue that $\frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j(G)/\sqrt{n}}$ should be close to a continuous probability distribution $\rho(z)\ dz$ on the complex plane obeying the equation

$\displaystyle |z|^2 - 2 \int_{\bf C} \log|z-w|\ \rho(w)\ dw = C$

on the support of $\rho$, for some constant $C$, with the inequality

$\displaystyle |z|^2 - 2 \int_{\bf C} \log|z-w|\ \rho(w)\ dw \geq C \ \ \ \ \ (13)$

outside of this support. Using the fact that the Newton potential $\frac{1}{2\pi} \log|z|$ is the fundamental solution of the two-dimensional Laplacian $\Delta$, conclude (non-rigorously) that $\rho$ is equal to $\frac{1}{\pi}$ on its support.

Also argue that $\rho$ should be rotationally symmetric. Use (13) and Green's formula to argue why the support of $\rho$ should be simply connected, and then conclude (again non-rigorously) the *circular law*

$\displaystyle \rho = \frac{1}{\pi} 1_{|z| \leq 1}. \ \ \ \ \ (14)$


We will see more rigorous derivations of the circular law later in these notes, and also in subsequent notes.
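In the meantime, the circular law is easy to observe experimentally; for instance (a quick simulation, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
# entries iid N(0,1)_C: real and imaginary parts are N(0,1/2)
G = rng.normal(0, np.sqrt(0.5), (n, n)) + 1j * rng.normal(0, np.sqrt(0.5), (n, n))
lam = np.linalg.eigvals(G) / np.sqrt(n)     # rescale to unit scale
print(np.abs(lam).max())                    # ~ 1: the spectrum fills the unit disk
r = 0.5
print(np.mean(np.abs(lam) <= r), r**2)      # uniformity: fraction inside radius r ~ r^2
```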


— 4. Determinantal form of the GUE spectral distribution —


In a previous section, we showed (up to constants) that the density function for the eigenvalues $(\lambda_1,\ldots,\lambda_n) \in {\bf R}^n_{\geq}$ of GUE was given by the formula (1).


As is well known, the Vandermonde determinant $\Delta_n(\lambda)$ that appears in (1) can be expressed up to sign as the determinant of an $n \times n$ matrix, namely the matrix $( \lambda_i^{j-1} )_{1 \leq i,j \leq n}$. Indeed, this determinant is clearly a polynomial of degree $n(n-1)/2$ in $\lambda_1,\ldots,\lambda_n$ which vanishes whenever two of the $\lambda_i$ agree, and the claim then follows from the factor theorem (and inspecting a single coefficient of the Vandermonde determinant, e.g. the $\prod_{i=1}^n \lambda_i^{i-1}$ coefficient, to get the sign).


We can square the above fact (or more precisely, multiply the above matrix by its adjoint) and conclude that $|\Delta_n(\lambda)|^2$ is the determinant of the matrix

$\displaystyle \Big( \sum_{k=0}^{n-1} \lambda_i^k \lambda_j^k \Big)_{1 \leq i,j \leq n}.$

More generally, if $P_0,\ldots,P_{n-1}$ are any sequence of polynomials, in which $P_k$ has degree $k$, then we see from row operations that the determinant of

$\displaystyle ( P_{j-1}(\lambda_i) )_{1 \leq i,j \leq n}$

is a non-zero constant multiple of $\Delta_n(\lambda)$ (with the constant depending on the leading coefficients of the $P_k$), and so the determinant of

$\displaystyle \Big( \sum_{k=0}^{n-1} P_k(\lambda_i) P_k(\lambda_j) \Big)_{1 \leq i,j \leq n}$

is a non-zero constant multiple of $|\Delta_n(\lambda)|^2$. Comparing this with (1), we obtain the formula

$\displaystyle \rho_n(\lambda) = C \det\Big( \sum_{k=0}^{n-1} \phi_k(\lambda_i) \phi_k(\lambda_j) \Big)_{1 \leq i,j \leq n}$

for some non-zero constant $C$, where $\phi_k(x) := P_k(x) e^{-x^2/4}$.


This formula is valid for any choice of polynomials $P_k$ of degree $k$. But the formula is particularly useful when we set the $P_k$ equal to the (normalised) Hermite polynomials, defined by applying the Gram-Schmidt process in $L^2({\bf R})$ to the functions $x^k e^{-x^2/4}$ for $k = 0,\ldots,n-1$ to yield the orthonormal Hermite functions $\phi_k(x) = P_k(x) e^{-x^2/4}$. In that case, the expression

$\displaystyle K_n(x,y) := \sum_{k=0}^{n-1} \phi_k(x) \phi_k(y) \ \ \ \ \ (15)$

becomes the integral kernel of the orthogonal projection operator $\pi_{V_n}$ in $L^2({\bf R})$ to the span of the $\phi_0,\ldots,\phi_{n-1}$, thus

$\displaystyle \pi_{V_n} f(x) = \int_{\bf R} K_n(x,y) f(y)\ dy$

for all $f \in L^2({\bf R})$, and so $\rho_n(\lambda)$ is now a constant multiple of

$\displaystyle \det( K_n(\lambda_i,\lambda_j) )_{1 \leq i,j \leq n}.$


The reason for working with orthogonal polynomials is that we have the trace identity

$\displaystyle \int_{\bf R} K_n(x,x)\ dx = n \ \ \ \ \ (16)$

and the reproducing formula

$\displaystyle K_n(x,y) = \int_{\bf R} K_n(x,z) K_n(z,y)\ dz \ \ \ \ \ (17)$

which reflects the identity $\pi_{V_n}^2 = \pi_{V_n}$. These two formulae have an important consequence:


Lemma 9 (Determinantal integration formula). Let $K_n: {\bf R} \times {\bf R} \rightarrow {\bf R}$ be any symmetric rapidly decreasing function obeying (16), (17). Then for any $k \geq 0$, one has

$\displaystyle \int_{\bf R} \det( K_n(x_i,x_j) )_{1 \leq i,j \leq k+1}\ dx_{k+1} = ( n - k ) \det( K_n(x_i,x_j) )_{1 \leq i,j \leq k}. \ \ \ \ \ (18)$


Remark 10. This remarkable identity is part of the beautiful algebraic theory of *determinantal processes*, which I discuss further in this blog post.


*Proof:* We induct on $k$. When $k = 0$ this is just (16). Now assume that $k \geq 1$ and that the claim has already been proven for $k-1$. We apply cofactor expansion to the bottom row of the determinant $\det( K_n(x_i,x_j) )_{1 \leq i,j \leq k+1}$. This gives a principal term

$\displaystyle \det( K_n(x_i,x_j) )_{1 \leq i,j \leq k}\ K_n( x_{k+1}, x_{k+1} ) \ \ \ \ \ (19)$

plus a sum of $k$ additional terms, the $j$-th of which is of the form

$\displaystyle (-1)^{k+1+j} K_n( x_{k+1}, x_j ) \det\big( K_n(x_i,x_l) \big)_{1 \leq i \leq k;\ l \in \{1,\ldots,k+1\} \backslash \{j\}}. \ \ \ \ \ (20)$

Using (16), the principal term (19) gives a contribution of $n \det( K_n(x_i,x_j) )_{1 \leq i,j \leq k}$ to (18). For each nonprincipal term (20), we use the multilinearity of the determinant to absorb the $K_n(x_{k+1},x_j)$ term into the $x_{k+1}$ column of the matrix. Using (17), we thus see that the contribution of (20) to (18) can be simplified as

$\displaystyle (-1)^{k+1+j} \det\big( (K_n(x_i,x_l))_{l \neq j},\ (K_n(x_i,x_j)) \big)_{1 \leq i \leq k}$

which after column exchange, simplifies to $-\det( K_n(x_i,x_j) )_{1 \leq i,j \leq k}$. Summing over the $k$ nonprincipal terms, the claim follows. $\Box$
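As a numerical sanity check of (16) and (17) (my own, using numpy's probabilists' Hermite polynomials to build the orthonormal $\phi_k$; note the normalisation assumption in the helper):

```python
import numpy as np
from math import factorial

def phi(k, x):
    """Orthonormal Hermite function phi_k(x) = P_k(x) exp(-x^2/4), built
    from the probabilists' Hermite polynomial He_k (orthogonal for the
    weight exp(-x^2/2) with norm sqrt(2 pi) k!)."""
    c = np.zeros(k + 1); c[k] = 1.0
    He = np.polynomial.hermite_e.hermeval(x, c)
    return He * np.exp(-x**2 / 4) / ((2 * np.pi) ** 0.25 * np.sqrt(factorial(k)))

n = 5
nodes, weights = np.polynomial.legendre.leggauss(400)
x, w = 10 * nodes, 10 * weights              # quadrature on [-10, 10]
P = np.stack([phi(k, x) for k in range(n)])  # shape (n, 400)
K = P.T @ P                                  # K_n(x_i, x_j) on the grid
print(np.sum(w * np.diag(K)))                # trace identity (16): ~ n = 5
i, j = 190, 210
print(K[i, j], np.sum(w * K[i, :] * K[:, j]))  # reproducing formula (17)
```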

<p>

In particular, if we iterate the above lemma using the Fubini-Tonelli theorem, we see that <p align=center>$\displaystyle \int_{{\bf R}^n} \det( K_n(x_i,x_j) )_{1 \leq i,j \leq n}\,dx_1 \ldots dx_n = n!.$</p>

On the other hand, if we extend the probability density function $\rho_n(\lambda)$ symmetrically from the Weyl chamber to all of ${\bf R}^n$, its integral is also $n!$. Since $\det( K_n(\lambda_i,\lambda_j) )_{1 \leq i,j \leq n}$ is clearly symmetric in the $\lambda_i$, we can thus compare constants and conclude the <a href="http://www.ams.org/mathscinet-getitem?mr=112895">Gaudin-Mehta formula</a> <p align=center>$\displaystyle \rho_n(\lambda_1,\ldots,\lambda_n) = \det( K_n(\lambda_i,\lambda_j) )_{1 \leq i,j \leq n}.$</p>

More generally, if we define $\rho_k: {\bf R}^k \rightarrow {\bf R}^+$ to be the function <a name="rhok-1"><p align=center>$\displaystyle \rho_k(x_1,\ldots,x_k) := \det( K_n(x_i,x_j) )_{1 \leq i,j \leq k} \ \ \ \ \ (21)$</p>

</a> then the above formula shows that $\rho_k$ is the <em>$k$-point correlation function</em> for the spectrum, in the sense that <a name="rhok-2"><p align=center>$\displaystyle {\bf E} \sum_{1 \leq i_1 < \ldots < i_k \leq n} F( \lambda_{i_1}, \ldots, \lambda_{i_k} ) \ \ \ \ \ (22)$</p>

</a> <p align=center>$\displaystyle = \int_{{\bf R}^k} F(x_1,\ldots,x_k) \rho_k(x_1,\ldots,x_k)\,dx_1 \ldots dx_k$</p>

for any test function $F: {\bf R}^k \rightarrow {\bf C}$ supported in the region $\{ (x_1,\ldots,x_k): x_1 < \ldots < x_k \}$.

<p>

In particular, if we set $k = 1$, we obtain the explicit formula <p align=center>$\displaystyle {\bf E} \sum_{i=1}^n F(\lambda_i) = \int_{\bf R} F(x) K_n(x,x)\,dx$</p>

for the expected empirical spectral measure of $M_n$. Equivalently, after renormalising by $\sqrt{n}$, the expected empirical spectral measure ${\bf E} \mu_{M_n/\sqrt{n}}$ is absolutely continuous with density <a name="mun"><p align=center>$\displaystyle \frac{d\, {\bf E} \mu_{M_n/\sqrt{n}}}{dx} = \frac{\sqrt{n}}{n} K_n( \sqrt{n} x, \sqrt{n} x ). \ \ \ \ \ (23)$</p>

</a>
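The $k=1$ case of the Gaudin-Mehta formula is easy to confront with a simulation: the eigenvalue histogram of many sampled GUE matrices should match $\rho_1(x) = K_n(x,x)$. A sketch, with the conventions of these notes (expect agreement up to Monte Carlo noise of a few percent):

```python
# Sample many small GUE matrices and compare the eigenvalue histogram with
# rho_1(x) = K_n(x,x).  Off-diagonal entries: complex gaussians of variance 1;
# diagonal entries: real gaussians of variance 1.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt, pi

def phi(k, t):
    c = np.zeros(k + 1); c[k] = 1.0
    return hermeval(t, c) * np.exp(-t**2 / 4) / sqrt(sqrt(2 * pi) * factorial(k))

rng = np.random.default_rng(1)
n, trials = 6, 20000
eigs = []
for _ in range(trials):
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / sqrt(2)
    M = np.triu(A, 1)
    M = M + M.conj().T + np.diag(rng.standard_normal(n))
    eigs.extend(np.linalg.eigvalsh(M))

hist, edges = np.histogram(eigs, bins=60, range=(-6, 6))
mids = 0.5 * (edges[1:] + edges[:-1])
empirical = hist / trials / (edges[1] - edges[0])   # expected eigenvalue count density
rho1 = sum(phi(k, mids) ** 2 for k in range(n))     # K_n(x,x)
print(np.max(np.abs(empirical - rho1)))             # small (Monte Carlo noise)
```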

<p>

It is thus of interest to understand the kernel $K_n$ better.

<p>

To do this, we begin by recalling that the functions $\phi_k(x) = P_k(x) e^{-x^2/4}$ were obtained from $x^k e^{-x^2/4}$ by the Gram-Schmidt process. In particular, each $\phi_k$ is orthogonal to the $x^j e^{-x^2/4}$ for all $j < k$. This implies that $x \phi_k(x)$ is orthogonal to $x^j e^{-x^2/4}$ for $j < k-1$. On the other hand, $x P_k(x)$ is a polynomial of degree $k+1$, so $x \phi_k(x)$ must lie in the span of the $x^j e^{-x^2/4}$ for $j = 0,\ldots,k+1$. Combining the two facts, we see that $x \phi_k$ must be a linear combination of $\phi_{k-1}, \phi_k, \phi_{k+1}$, with the $\phi_{k+1}$ coefficient being non-trivial. We rewrite this fact in the form <a name="pii"><p align=center>$\displaystyle x \phi_k(x) = a_k \phi_{k+1}(x) + b_k \phi_k(x) + c_k \phi_{k-1}(x) \ \ \ \ \ (24)$</p>

</a> for some real numbers $a_k, b_k, c_k$ (with $c_0 = 0$). Taking inner products with $\phi_{k+1}$ and $\phi_{k-1}$ we see that <a name="low"><p align=center>$\displaystyle a_k = \int_{\bf R} x \phi_k(x) \phi_{k+1}(x)\,dx \ \ \ \ \ (25)$</p>

</a> and <p align=center>$\displaystyle c_k = \int_{\bf R} x \phi_k(x) \phi_{k-1}(x)\,dx$</p>

and so <a name="aii"><p align=center>$\displaystyle c_k = a_{k-1} \ \ \ \ \ (26)$</p>

</a> (with the convention $a_{-1} = 0$).

<p>

We will continue the computation of the $a_k, b_k$ later. For now, we pick two distinct real numbers $x, y$ and consider the <a href="http://en.wikipedia.org/wiki/Wronskian">Wronskian</a>-type expression <p align=center>$\displaystyle K_n(x,y) (x - y) = \sum_{k=0}^{n-1} \phi_k(x) \phi_k(y) (x - y).$</p>

Using <a href="#pii">(24)</a>, <a href="#aii">(26)</a>, we can write this as <p align=center>$\displaystyle \sum_{k=0}^{n-1} \big( a_k \phi_{k+1}(x) + b_k \phi_k(x) + a_{k-1} \phi_{k-1}(x) \big) \phi_k(y) - \phi_k(x) \big( a_k \phi_{k+1}(y) + b_k \phi_k(y) + a_{k-1} \phi_{k-1}(y) \big)$</p>

or in other words <p align=center>$\displaystyle \sum_{k=0}^{n-1} s_{k+1} - s_k,$</p>

<p align=center>$\displaystyle \hbox{where } s_k := a_{k-1} \big( \phi_k(x) \phi_{k-1}(y) - \phi_{k-1}(x) \phi_k(y) \big).$</p>

We telescope this and obtain the <em>Christoffel-Darboux formula</em> for the kernel <a href="#knxy">(15)</a>: <a name="darbo"><p align=center>$\displaystyle K_n(x,y) = a_{n-1} \frac{\phi_n(x) \phi_{n-1}(y) - \phi_{n-1}(x) \phi_n(y)}{x - y}. \ \ \ \ \ (27)$</p>

</a> Sending $y \rightarrow x$ using <a href="http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule">L'Hopital's rule</a>, we obtain in particular that <a name="darbo-2"><p align=center>$\displaystyle K_n(x,x) = a_{n-1} \big( \phi_n'(x) \phi_{n-1}(x) - \phi_{n-1}'(x) \phi_n(x) \big). \ \ \ \ \ (28)$</p>

</a>
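Anticipating the computation below that $a_{n-1} = \sqrt{n}$, the Christoffel-Darboux identity (27) can be checked to machine precision:

```python
# Christoffel-Darboux (27): the n-term sum collapses to a two-term Wronskian.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt, pi

def phi(k, t):
    c = np.zeros(k + 1); c[k] = 1.0
    return hermeval(t, c) * np.exp(-t**2 / 4) / sqrt(sqrt(2 * pi) * factorial(k))

n, x, y = 7, 0.9, -1.3
lhs = sum(phi(k, x) * phi(k, y) for k in range(n))
rhs = sqrt(n) * (phi(n, x) * phi(n - 1, y) - phi(n - 1, x) * phi(n, y)) / (x - y)
print(lhs, rhs)    # agree to machine precision
```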

<p>

Inserting this into <a href="#mun">(23)</a>, we see that if we want to understand the expected spectral measure of GUE, we should understand the asymptotic behaviour of $\phi_n$ and the associated constants $a_n$. For this, we need to exploit the specific properties of the gaussian weight $e^{-x^2/2}$. In particular, we have the identity <a name="gauss"><p align=center>$\displaystyle x e^{-x^2/2} = - \frac{d}{dx} e^{-x^2/2} \ \ \ \ \ (29)$</p>

</a> so upon integrating <a href="#low">(25)</a> by parts, we have <p align=center>$\displaystyle a_k = \int_{\bf R} \big( P_k'(x) P_{k+1}(x) + P_k(x) P_{k+1}'(x) \big) e^{-x^2/2}\,dx.$</p>

As $P_k'$ has degree at most $k-1$, the first term vanishes by the orthonormal nature of the $P_j$, thus <a name="ip"><p align=center>$\displaystyle a_k = \int_{\bf R} P_k(x) P_{k+1}'(x) e^{-x^2/2}\,dx. \ \ \ \ \ (30)$</p>

</a> To compute this, let us denote the leading coefficient of $P_k$ as $c_k$. Then $P_{k+1}'$ is equal to $(k+1) \frac{c_{k+1}}{c_k} P_k$ plus lower-order terms, and so we have <p align=center>$\displaystyle a_k = (k+1) \frac{c_{k+1}}{c_k}.$</p>

On the other hand, by inspecting the $x^{k+1}$ coefficient of <a href="#pii">(24)</a> we have <p align=center>$\displaystyle c_k = a_k c_{k+1}.$</p>

Combining the two formulae (and making the sign convention that the $c_k$ are always positive), we see that <p align=center>$\displaystyle a_k = \sqrt{k+1}$</p>

and <p align=center>$\displaystyle c_{k+1} = \frac{c_k}{\sqrt{k+1}}.$</p>

Meanwhile, a direct computation shows that $c_0 = \frac{1}{(2\pi)^{1/4}}$, and thus by induction <p align=center>$\displaystyle c_k = \frac{1}{(2\pi)^{1/4} \sqrt{k!}}.$</p>

A similar method lets us compute the $b_k$. Indeed, taking inner products of <a href="#pii">(24)</a> with $\phi_k$ and using orthonormality we have <p align=center>$\displaystyle b_k = \int_{\bf R} x P_k(x)^2 e^{-x^2/2}\,dx$</p>

which upon integrating by parts using <a href="#gauss">(29)</a> gives <p align=center>$\displaystyle b_k = 2 \int_{\bf R} P_k'(x) P_k(x) e^{-x^2/2}\,dx.$</p>

As $P_k'$ is of degree strictly less than $k$, the integral vanishes by orthonormality, thus $b_k = 0$. The identity <a href="#pii">(24)</a> thus becomes the <em>Hermite recurrence relation</em> <a name="i2"><p align=center>$\displaystyle x \phi_k(x) = \sqrt{k+1}\, \phi_{k+1}(x) + \sqrt{k}\, \phi_{k-1}(x). \ \ \ \ \ (31)$</p>

</a> Another recurrence relation arises by considering the integral <p align=center>$\displaystyle \int_{\bf R} \phi_k'(x) \phi_l(x)\,dx.$</p>

On the one hand, as $\phi_k'(x) = ( P_k'(x) - \frac{x}{2} P_k(x) ) e^{-x^2/4}$, where $P_k' - \frac{x}{2} P_k$ has degree at most $k+1$, this integral vanishes if $l > k+1$ by orthonormality. On the other hand, integrating by parts using <a href="#gauss">(29)</a>, we can write the integral as <p align=center>$\displaystyle - \int_{\bf R} \phi_k(x) \phi_l'(x)\,dx.$</p>

If $l < k-1$, then $P_l' - \frac{x}{2} P_l$ has degree less than $k$, so the integral again vanishes. Thus the integral is non-vanishing only when $l = k \pm 1$ (the case $l = k$ also vanishes, since $\int_{\bf R} \phi_k' \phi_k\,dx = \frac{1}{2} \int_{\bf R} (\phi_k^2)'\,dx = 0$). Using <a href="#ip">(30)</a>, we conclude that <a name="i1"><p align=center>$\displaystyle \phi_k'(x) = \frac{\sqrt{k}}{2} \phi_{k-1}(x) - \frac{\sqrt{k+1}}{2} \phi_{k+1}(x). \ \ \ \ \ (32)$</p>

</a> We can combine <a href="#i1">(32)</a> with <a href="#i2">(31)</a> to obtain the formula <p align=center>$\displaystyle \phi_{k+1}(x) = \frac{1}{\sqrt{k+1}} \Big( \frac{x}{2} - \frac{d}{dx} \Big) \phi_k(x),$</p>

which together with the initial condition $\phi_0(x) = \frac{1}{(2\pi)^{1/4}} e^{-x^2/4}$ gives the explicit representation <a name="pn-form"><p align=center>$\displaystyle \phi_n(x) = \frac{1}{(2\pi)^{1/4} \sqrt{n!}} \Big( \frac{x}{2} - \frac{d}{dx} \Big)^n e^{-x^2/4} \ \ \ \ \ (33)$</p>

</a> for the Hermite polynomials. Thus, for instance, at $x = 0$ one sees from Taylor expansion that <a name="piano"><p align=center>$\displaystyle \phi_n(0) = \frac{(-1)^{n/2} \sqrt{n!}}{2^{n/2} (n/2)!\, (2\pi)^{1/4}}, \qquad \phi_n'(0) = 0 \ \ \ \ \ (34)$</p>

</a> when $n$ is even, and <a name="piano-2"><p align=center>$\displaystyle \phi_n(0) = 0, \qquad \phi_n'(0) = \frac{(-1)^{(n-1)/2} \sqrt{n!}}{2^{(n-1)/2} (\frac{n-1}{2})!\, (2\pi)^{1/4}} \ \ \ \ \ (35)$</p>

</a> when $n$ is odd.
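The values (34), (35) can likewise be confirmed directly (here hermeder differentiates a Hermite series; names are illustrative):

```python
# Check the x = 0 values (34), (35) against the closed forms.
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermeder
from math import factorial, sqrt, pi

norm = lambda k: sqrt(sqrt(2 * pi) * factorial(k))

for k in [4, 6]:                      # even case (34): phi_k(0) = P_k(0)
    c = np.zeros(k + 1); c[k] = 1.0
    m = k // 2
    pred = (-1) ** m * sqrt(factorial(k)) / (2 ** m * factorial(m) * (2 * pi) ** 0.25)
    print(hermeval(0.0, c) / norm(k), pred)
for k in [5, 7]:                      # odd case (35): phi_k'(0) = P_k'(0)
    c = np.zeros(k + 1); c[k] = 1.0
    m = (k - 1) // 2
    pred = (-1) ** m * sqrt(factorial(k)) / (2 ** m * factorial(m) * (2 * pi) ** 0.25)
    print(hermeval(0.0, hermeder(c)) / norm(k), pred)
```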

<p>

In principle, the formula <a href="#pn-form">(33)</a>, together with <a href="#darbo-2">(28)</a>, gives us an explicit description of the kernel $K_n$ (and thus of ${\bf E} \mu_{M_n/\sqrt{n}}$, by <a href="#mun">(23)</a>). However, to understand the asymptotic behaviour as $n \rightarrow \infty$, we would have to understand the asymptotic behaviour of $\phi_n$ as $n \rightarrow \infty$, which is not immediately discernible by inspection. However, one can obtain such asymptotics by a variety of means. We give two such methods here: a method based on ODE analysis, and a complex-analytic method, based on the <a href="http://en.wikipedia.org/wiki/Method_of_steepest_descent">method of steepest descent</a>.

<p>

We begin with the ODE method. Combining <a href="#i2">(31)</a> with <a href="#i1">(32)</a> we see that each polynomial $P_k$ obeys the <em>Hermite differential equation</em> <p align=center>$\displaystyle P_k''(x) - x P_k'(x) + k P_k(x) = 0.$</p>

If we look instead at the Hermite functions $\phi_k(x) = P_k(x) e^{-x^2/4}$, we obtain the differential equation <p align=center>$\displaystyle L \phi_k = \big( k + \tfrac{1}{2} \big) \phi_k$</p>

where $L$ is the <em>harmonic oscillator operator</em> <p align=center>$\displaystyle L \phi := - \phi'' + \frac{x^2}{4} \phi.$</p>

Note that the self-adjointness of $L$ here is consistent with the orthogonal nature of the $\phi_k$.
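A quick finite-difference check of the eigenfunction equation $L \phi_k = (k+\frac{1}{2}) \phi_k$:

```python
# Finite-difference check of L phi_k = (k + 1/2) phi_k, L = -d^2/dx^2 + x^2/4.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt, pi

k, h = 5, 1e-4
c = np.zeros(k + 1); c[k] = 1.0
phi = lambda t: hermeval(t, c) * np.exp(-t**2 / 4) / sqrt(sqrt(2 * pi) * factorial(k))

x = np.linspace(-3, 3, 13)
second = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2
print(np.max(np.abs(-second + (x**2 / 4) * phi(x) - (k + 0.5) * phi(x))))  # tiny, ~1e-7
```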

<p>

<blockquote><b>Exercise 11</b> Use <a href="#knxy">(15)</a>, <a href="#darbo-2">(28)</a>, <a href="#pn-form">(33)</a>, <a href="#i2">(31)</a>, <a href="#i1">(32)</a> to establish the identities <p align=center></p>

<p align=center></p>

and thus by <a href="#mun">(23)</a> <p align=center></p>

<p align=center></p>

</blockquote>

<p>

<p>

It is thus natural to look at the rescaled functions <p align=center>$\displaystyle \tilde \phi_k(x) := n^{1/4} \phi_k( \sqrt{n} x ),$</p>

which are orthonormal in $L^2({\bf R})$ and solve the equation <p align=center>$\displaystyle L_{1/n} \tilde \phi_k = \frac{k + \frac{1}{2}}{n} \tilde \phi_k$</p>

where $L_h$ is the <em>semiclassical harmonic oscillator operator</em> <p align=center>$\displaystyle L_h \phi := - h^2 \phi'' + \frac{x^2}{4} \phi,$</p>

thus <p align=center>$\displaystyle K_n( \sqrt{n} x, \sqrt{n} y ) = \frac{1}{\sqrt{n}} \sum_{k=0}^{n-1} \tilde \phi_k(x) \tilde \phi_k(y)$</p>

<a name="wings"><p align=center>$\displaystyle \hbox{and } \frac{d\, {\bf E} \mu_{M_n/\sqrt{n}}}{dx} = \frac{1}{n} \sum_{k=0}^{n-1} \tilde \phi_k(x)^2. \ \ \ \ \ (36)$</p>

</a>

<p>

The projection $\pi_{V_n}$ (after rescaling) is then the spectral projection operator of $L_{1/n}$ to the interval $[0,1]$, since the eigenvalues $\frac{k+\frac{1}{2}}{n}$ for $0 \leq k \leq n-1$ all lie in this interval. According to <a href="http://en.wikipedia.org/wiki/Semiclassical">semi-classical analysis</a>, with $h := 1/n$ being interpreted as analogous to Planck's constant, the operator $L_h$ has symbol $p^2 + \frac{x^2}{4}$, where $p := \frac{h}{i} \frac{d}{dx}$ is the momentum operator, so the projection is a projection to the region $\{ (x,p): p^2 + \frac{x^2}{4} \leq 1 \}$ of phase space, or equivalently to the region $\{ (x,p): |p| \leq \frac{1}{2} (4-x^2)_+^{1/2} \}$. In the semi-classical limit $h \rightarrow 0$, we thus expect the diagonal $\frac{1}{n} \sum_{k=0}^{n-1} \tilde \phi_k(x)^2$ of the normalised projection to be proportional to the projection of this region to the $x$ variable, i.e. proportional to $(4-x^2)_+^{1/2}$. We are thus led to the semi-circular law via semi-classical analysis.
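One does not need to take the semi-classical limit on faith; already at moderate $n$ the rescaled diagonal kernel visibly hugs the semicircle prediction:

```python
# Rescaled diagonal kernel vs the semicircle density (1/2pi) sqrt((4 - x^2)_+).
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt, pi

def phi(k, t):
    c = np.zeros(k + 1); c[k] = 1.0
    return hermeval(t, c) * np.exp(-t**2 / 4) / sqrt(sqrt(2 * pi) * factorial(k))

n = 30
x = np.linspace(-2.5, 2.5, 11)
diag = sum(phi(k, sqrt(n) * x) ** 2 for k in range(n)) / sqrt(n)
semi = np.sqrt(np.maximum(4 - x**2, 0)) / (2 * pi)
for xi, d, s in zip(x, diag, semi):
    print(f"x={xi:+.2f}   n^(-1/2) K_n = {d:.4f}   rho_sc = {s:.4f}")
```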

<p>

It is possible to make the above argument rigorous, but this would require developing the theory of <a href="http://en.wikipedia.org/wiki/Microlocal_analysis">microlocal analysis</a>, which would be overkill given that we are just dealing with an ODE rather than a PDE here (and an extremely classical ODE at that). We instead use a more basic semiclassical approximation, the <a href="http://en.wikipedia.org/wiki/WKB_approximation">WKB approximation</a>, which we will make rigorous using the classical <a href="http://en.wikipedia.org/wiki/Method_of_variation_of_parameters">method of variation of parameters</a> (one could also proceed using the closely related <em>Prüfer transformation</em>, which we will not detail here). We study the eigenfunction equation <p align=center>$\displaystyle L_h \phi = \lambda \phi$</p>

where we think of $h$ as being small, and $\lambda$ as being close to $1$. We rewrite this as <a name="phik"><p align=center>$\displaystyle \phi'' = - \frac{k(x)^2}{h^2} \phi \ \ \ \ \ (37)$</p>

</a> where $k(x) := \sqrt{\lambda - \frac{x^2}{4}}$, where we will only work in the “classical” region $\frac{x^2}{4} < \lambda$ (so that $k(x)$ is well-defined and positive) for now.

<p>

Recall that the general solution to the constant coefficient ODE $\phi'' = - \frac{k^2}{h^2} \phi$ is given by $\phi(x) = A e^{ikx/h} + B e^{-ikx/h}$. Inspired by this, we make the ansatz <p align=center>$\displaystyle \phi(x) = A(x) e^{i\Phi(x)/h} + B(x) e^{-i\Phi(x)/h},$</p>

where $\Phi(x) := \int_0^x k(y)\,dy$ is the antiderivative of $k$. Differentiating this, we have <p align=center>$\displaystyle \phi'(x) = \frac{i k(x)}{h} \big( A(x) e^{i\Phi(x)/h} - B(x) e^{-i\Phi(x)/h} \big)$</p>

<p align=center>$\displaystyle +\, A'(x) e^{i\Phi(x)/h} + B'(x) e^{-i\Phi(x)/h}.$</p>

Because we are representing a single function $\phi$ by two functions $A, B$, we have the freedom to place an additional constraint on $A, B$. Following the usual variation of parameters strategy, we will use this freedom to eliminate the last two terms in the expansion of $\phi'$, thus <a name="aprime"><p align=center>$\displaystyle A'(x) e^{i\Phi(x)/h} + B'(x) e^{-i\Phi(x)/h} = 0. \ \ \ \ \ (38)$</p>

</a> We can now differentiate again and obtain <p align=center>$\displaystyle \phi''(x) = - \frac{k(x)^2}{h^2} \phi(x) + \frac{i k'(x)}{h} \big( A(x) e^{i\Phi(x)/h} - B(x) e^{-i\Phi(x)/h} \big)$</p>

<p align=center>$\displaystyle +\, \frac{i k(x)}{h} \big( A'(x) e^{i\Phi(x)/h} - B'(x) e^{-i\Phi(x)/h} \big).$</p>

Comparing this with <a href="#phik">(37)</a> we see that <p align=center>$\displaystyle \frac{i k'(x)}{h} \big( A e^{i\Phi/h} - B e^{-i\Phi/h} \big) + \frac{i k(x)}{h} \big( A' e^{i\Phi/h} - B' e^{-i\Phi/h} \big) = 0.$</p>

Combining this with <a href="#aprime">(38)</a>, we obtain equations of motion for $A$ and $B$: <p align=center>$\displaystyle A' = - \frac{k'}{2k} A + \frac{k'}{2k} e^{-2i\Phi/h} B$</p>

<p align=center>$\displaystyle B' = \frac{k'}{2k} e^{2i\Phi/h} A - \frac{k'}{2k} B.$</p>

We can simplify this using the <a href="http://en.wikipedia.org/wiki/Integrating_factor">integrating factor</a> substitution <p align=center>$\displaystyle \tilde A := k^{1/2} A, \qquad \tilde B := k^{1/2} B$</p>

to obtain <a name="a1"><p align=center>$\displaystyle \tilde A' = \frac{k'}{2k} e^{-2i\Phi/h} \tilde B \ \ \ \ \ (39)$</p>

</a> <a name="a2"><p align=center>$\displaystyle \tilde B' = \frac{k'}{2k} e^{2i\Phi/h} \tilde A. \ \ \ \ \ (40)$</p>

</a> The point of doing all these transformations is that the role of the parameter $h$ no longer manifests itself through amplitude factors, and instead only is present in a phase factor. In particular, we have <p align=center>$\displaystyle \tilde A', \tilde B' = O\big( |\tilde A| + |\tilde B| \big)$</p>

on any compact interval $I$ in the interior of the classical region (where we allow implied constants to depend on $I$), which by <a href="http://en.wikipedia.org/wiki/Gronwall%27s_inequality">Gronwall's inequality</a> gives the bounds <p align=center>$\displaystyle \tilde A(x), \tilde B(x) = O\big( |\tilde A(x_0)| + |\tilde B(x_0)| \big)$</p>

on this interval $I$ (for any fixed base point $x_0 \in I$). We can then insert these bounds into <a href="#a1">(39)</a>, <a href="#a2">(40)</a> again and integrate by parts (taking advantage of the non-stationary nature of $\Phi$) to obtain the improved bounds <a name="axo"><p align=center>$\displaystyle \tilde A(x) = \tilde A(x_0) + O(h), \qquad \tilde B(x) = \tilde B(x_0) + O(h) \ \ \ \ \ (41)$</p>

</a> on this interval. (More precise asymptotic expansions can be obtained by iterating this procedure, but we will not need them here.) This is already enough to get the asymptotics that we need:

<p>

<blockquote><b>Exercise 12</b> Use <a href="#wings">(36)</a> to show that on any compact interval $I$ in $(-2,2)$, the density of ${\bf E} \mu_{M_n/\sqrt{n}}$ is given by <p align=center></p>

where the $\tilde A_k, \tilde B_k$ are as above with $\lambda := \frac{k + \frac{1}{2}}{n}$ and $h := 1/n$. Combining this with <a href="#axo">(41)</a>, <a href="#piano">(34)</a>, <a href="#piano-2">(35)</a>, and Stirling's formula, conclude that ${\bf E} \mu_{M_n/\sqrt{n}}$ converges in the vague topology to the semicircular law $\mu_{sc} := \frac{1}{2\pi} (4-x^2)_+^{1/2}\,dx$. (Note that once one gets convergence inside $(-2,2)$, the convergence outside of $[-2,2]$ can be obtained for free since ${\bf E} \mu_{M_n/\sqrt{n}}$ and $\mu_{sc}$ are both probability measures.) </blockquote>
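The mechanism behind (41) can also be observed numerically: for the model equation (37), the quantity $k(x) ( \phi^2 + (h \phi'/k)^2 )$, which for real solutions equals $4 |\tilde A|^2$ up to oscillatory corrections, should be nearly constant on compact subsets of the classical region. A sketch (assuming scipy; the parameters are illustrative):

```python
# Integrate h^2 phi'' = -k(x)^2 phi across [-1, 1] (inside the classical region
# for lambda = 1) and watch the near-conservation of k * (phi^2 + (h phi'/k)^2).
import numpy as np
from scipy.integrate import solve_ivp

h, lam = 0.01, 1.0
k = lambda x: np.sqrt(lam - x**2 / 4)

def rhs(x, u):                      # u = (phi, phi')
    return [u[1], -(k(x) / h) ** 2 * u[0]]

sol = solve_ivp(rhs, (-1.0, 1.0), [1.0, 0.0], max_step=h / 10,
                rtol=1e-8, atol=1e-10, dense_output=True)
xs = np.linspace(-1.0, 1.0, 9)
phi_v, dphi_v = sol.sol(xs)
print(k(xs) * (phi_v**2 + (h * dphi_v / k(xs)) ** 2))   # nearly constant, drift O(h)
```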

<p>

<p>

We now sketch out the approach using the <a href="http://en.wikipedia.org/wiki/Method_of_steepest_descent">method of steepest descent</a>. The starting point is the Fourier inversion formula <p align=center>$\displaystyle e^{-x^2/2} = \frac{1}{\sqrt{2\pi}} \int_{\bf R} e^{itx} e^{-t^2/2}\,dt$</p>

which upon repeated differentiation gives <p align=center>$\displaystyle \frac{d^n}{dx^n} e^{-x^2/2} = \frac{1}{\sqrt{2\pi}} \int_{\bf R} (it)^n e^{itx} e^{-t^2/2}\,dt$</p>

and thus by <a href="#pn-form">(33)</a> <p align=center>$\displaystyle \phi_n(x) = \frac{(-1)^n}{(2\pi)^{3/4} \sqrt{n!}} e^{x^2/4} \int_{\bf R} (it)^n e^{itx} e^{-t^2/2}\,dt$</p>

and thus (substituting $t \mapsto \sqrt{n} t$ and inserting a convenient normalising factor) <p align=center>$\displaystyle \phi_n( \sqrt{n} x ) = \frac{(-1)^n i^n n^{(n+1)/2}}{(2\pi)^{3/4} \sqrt{n!}\, e^{n/2}} \int_{\bf R} e^{n \phi(t)}\,dt$</p>

where <p align=center>$\displaystyle \phi(t) := \log t + itx - \frac{t^2}{2} + \frac{x^2}{4} + \frac{1}{2},$</p>

where we use a suitable branch of the complex logarithm to handle the case of negative $t$.

<p>

The idea of the principle of steepest descent is to shift the contour of integration to where the real part of $\phi$ is as small as possible. For this, it turns out that the stationary points of $\phi$ play a crucial role. A brief calculation using the quadratic formula shows that there are two such stationary points, at <p align=center>$\displaystyle t = \frac{ix \pm \sqrt{4 - x^2}}{2}.$</p>

When $|x| < 2$, $\phi$ is purely imaginary at these stationary points, while for $|x| > 2$ the real part of $\phi$ is negative at both points. One then draws a contour through these two stationary points in such a way that near each such point, the imaginary part of $\phi$ is kept fixed, which keeps oscillation to a minimum and allows the real part of $\phi$ to decay as steeply as possible (which explains the name of the method). After a certain tedious amount of computation, one obtains the same type of asymptotics for $\phi_n$ that were obtained by the ODE method when $|x| < 2$ (and exponentially decaying estimates for $|x| > 2$).

<p>

<blockquote><b>Exercise 13</b> Let $\phi, \psi$ be functions which are analytic near a complex number $z_0$, with $\phi'(z_0) = 0$ and $\phi''(z_0) \neq 0$. Let $\varepsilon > 0$ be a small number, and let $\gamma$ be the line segment $\{ z_0 + t\omega: -\varepsilon \leq t \leq \varepsilon \}$, where $\omega$ is a complex phase such that $\phi''(z_0) \omega^2$ is a negative real. Show that for $\varepsilon$ sufficiently small, one has <p align=center>$\displaystyle \int_\gamma e^{\lambda \phi(z)} \psi(z)\,dz = (1+o(1)) \sqrt{\frac{2\pi}{\lambda |\phi''(z_0)|}}\, \omega\, \psi(z_0) e^{\lambda \phi(z_0)}$</p>

as $\lambda \rightarrow +\infty$. This is the basic estimate behind the method of steepest descent; readers who are also familiar with the <a href="http://en.wikipedia.org/wiki/Stationary_phase_approximation">method of stationary phase</a> may see a close parallel. </blockquote>

<p>

<p>

<blockquote><b>Remark 14</b> The method of steepest descent requires an explicit representation of the orthogonal polynomials as contour integrals, and as such is largely restricted to the classical orthogonal polynomials (such as the Hermite polynomials). However, there is a non-linear generalisation of the method of steepest descent developed by Deift and Zhou, in which one solves a matrix Riemann-Hilbert problem rather than evaluating a contour integral; see <a href="http://www.ams.org/mathscinet-getitem?mr=1677884">this book by Deift</a> for details. Using these sorts of tools, one can generalise much of the above theory to the spectral distribution of the unitary-conjugation-invariant ensembles discussed in Remark <a href="#daft">2</a>, with the theory of Hermite polynomials being replaced by the more general theory of orthogonal polynomials; this is discussed in the above book of Deift, as well as the more recent <a href="http://www.ams.org/mathscinet-getitem?mr=2514781">book of Deift and Gioev</a>. </blockquote>

<p>

<p>

The computations performed above for the diagonal kernel $K_n(x,x)$ can be summarised by the asymptotic <p align=center>$\displaystyle \frac{1}{\sqrt{n}} K_n( \sqrt{n} x, \sqrt{n} x ) = \rho_{sc}(x) + o(1)$</p>

whenever $x \in {\bf R}$ is fixed and $n \rightarrow \infty$, and $\rho_{sc}(x) := \frac{1}{2\pi} (4-x^2)_+^{1/2}$ is the semi-circular law distribution. It is reasonably straightforward to generalise these asymptotics to the off-diagonal case as well, obtaining the more general result <a name="kn"><p align=center>$\displaystyle \frac{1}{\sqrt{n} \rho_{sc}(x)} K_n\Big( \sqrt{n} x + \frac{y_1}{\sqrt{n} \rho_{sc}(x)}, \sqrt{n} x + \frac{y_2}{\sqrt{n} \rho_{sc}(x)} \Big) = K(y_1,y_2) + o(1) \ \ \ \ \ (42)$</p>

</a> for fixed $x \in (-2,2)$ and $y_1, y_2 \in {\bf R}$, where $K$ is the <em>Dyson sine kernel</em> <p align=center>$\displaystyle K(y_1,y_2) := \frac{\sin( \pi (y_1 - y_2) )}{\pi (y_1 - y_2)}.$</p>

In the language of semi-classical analysis, what is going on here is that the rescaling in the left-hand side of <a href="#kn">(42)</a> is transforming the phase space region $\{ (x,p): p^2 + \frac{x^2}{4} \leq 1 \}$ to the region $\{ (y,q): |q| \leq \pi \}$ in the limit $n \rightarrow \infty$, and the projection to the latter region is given by the Dyson sine kernel. A formal proof of <a href="#kn">(42)</a> can be given by using either the ODE method or the steepest descent method to obtain asymptotics for $\phi_n$, and thence (via the Christoffel-Darboux formula) to asymptotics for $K_n$; we do not give the details here, but see for instance the recent book of Anderson, Guionnet, and Zeitouni.
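At the centre of the spectrum ($x = 0$, where $\rho_{sc}(0) = 1/\pi$), the convergence (42) can again be watched numerically:

```python
# Rescaled kernel at the centre of the spectrum vs the Dyson sine kernel.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt, pi

def phi(k, t):
    c = np.zeros(k + 1); c[k] = 1.0
    return hermeval(t, c) * np.exp(-t**2 / 4) / sqrt(sqrt(2 * pi) * factorial(k))

n = 40
for y1, y2 in [(0.0, 0.5), (0.0, 1.0), (0.3, 1.7)]:
    lhs = (pi / sqrt(n)) * sum(phi(k, pi * y1 / sqrt(n)) * phi(k, pi * y2 / sqrt(n))
                               for k in range(n))
    print(lhs, np.sinc(y1 - y2))     # numpy's sinc(t) is sin(pi t)/(pi t); values close
```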

<p>

From <a href="#kn">(42)</a> and <a href="#rhok-1">(21)</a>, <a href="#rhok-2">(22)</a> we obtain the asymptotic formula <p align=center>$\displaystyle \rho_k\Big( \sqrt{n} x + \frac{y_1}{\sqrt{n} \rho_{sc}(x)}, \ldots, \sqrt{n} x + \frac{y_k}{\sqrt{n} \rho_{sc}(x)} \Big)$</p>

<p align=center>$\displaystyle = \big( \sqrt{n} \rho_{sc}(x) \big)^k \Big( \det( K(y_i,y_j) )_{1 \leq i,j \leq k} + o(1) \Big)$</p>

for the local statistics of eigenvalues. By means of further algebraic manipulations (using the general theory of determinantal processes), this allows one to control such quantities as the distribution of eigenvalue gaps near $\sqrt{n} x$, normalised at the scale $\frac{1}{\sqrt{n} \rho_{sc}(x)}$, which is the average size of these gaps as predicted by the semicircular law. For instance, for any $s > 0$, one can show (basically by the above formulae combined with the inclusion-exclusion principle) that the proportion of eigenvalues with normalised gap less than $s$ converges as $n \rightarrow \infty$ to $\int_0^s p(u)\,du$, where $p$ is defined by the formula $p(u) := \frac{d^2}{du^2} \det( 1 - {\mathcal K}_{[0,u]} )$, and ${\mathcal K}_{[0,u]}$ is the integral operator on $[0,u]$ with kernel $K(y_1,y_2)$ (this operator can be verified to be <a href="http://en.wikipedia.org/wiki/Trace-class_operator">trace class</a>, so the determinant can be defined in a <a href="http://en.wikipedia.org/wiki/Fredholm_determinant">Fredholm sense</a>). See for instance this <a href="http://www.ams.org/mathscinet-getitem?mr=2129906">book of Mehta</a> (my <a href="https://terrytao.wordpress.com/2009/08/23/determinantal-processes/">blog post on determinantal processes</a> also describes a finitary version of the inclusion-exclusion argument used to obtain such a result).
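In practice such Fredholm determinants are quite computable: discretising the operator with Gauss-Legendre quadrature (the approach popularised by Bornemann) takes only a few lines. A sketch for $\det( 1 - {\mathcal K}_{[0,s]} )$, the probability that the sine process has no point in $[0,s]$ (names illustrative):

```python
# det(1 - K restricted to [0,s]) for the sine kernel, via Gauss-Legendre quadrature.
import numpy as np

def no_point_probability(s, m=60):
    nodes, weights = np.polynomial.legendre.leggauss(m)
    x = 0.5 * s * (nodes + 1)                  # map nodes from [-1,1] to [0,s]
    w = 0.5 * s * weights
    K = np.sinc(np.subtract.outer(x, x))       # sin(pi d)/(pi d); equals 1 at d = 0
    A = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]
    return np.linalg.det(np.eye(m) - A)

for s in [0.5, 1.0, 2.0]:
    print(s, no_point_probability(s))
```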

<p>

<blockquote><b>Remark 15</b> One can also analyse the distribution of the eigenvalues at the edge of the spectrum, i.e. close to $\pm 2 \sqrt{n}$. This ultimately hinges on understanding the behaviour of the projection $\pi_{V_n}$ near the corners $(\pm 2, 0)$ of the phase space region $\{ (x,p): p^2 + \frac{x^2}{4} \leq 1 \}$, or of the Hermite polynomials $\phi_n(x)$ for $x$ close to $\pm 2 \sqrt{n}$. For instance, by using steepest descent methods, one can show that <p align=center>$\displaystyle n^{1/12} \phi_n\Big( 2 \sqrt{n} + \frac{t}{n^{1/6}} \Big) \rightarrow \mathrm{Ai}(t)$</p>

as $n \rightarrow \infty$ for any fixed $t \in {\bf R}$, where $\mathrm{Ai}$ is the <a href="http://en.wikipedia.org/wiki/Airy_function">Airy function</a> <p align=center>$\displaystyle \mathrm{Ai}(t) := \frac{1}{\pi} \int_0^\infty \cos\Big( \frac{s^3}{3} + st \Big)\,ds.$</p>

This asymptotic and the Christoffel-Darboux formula then gives the asymptotic <a name="nkn"><p align=center>$\displaystyle \frac{1}{n^{1/6}} K_n\Big( 2 \sqrt{n} + \frac{t_1}{n^{1/6}}, 2 \sqrt{n} + \frac{t_2}{n^{1/6}} \Big) \rightarrow K_{\mathrm{Ai}}(t_1,t_2) \ \ \ \ \ (43)$</p>

</a> for any fixed $t_1, t_2 \in {\bf R}$, where $K_{\mathrm{Ai}}$ is the <em>Airy kernel</em> <p align=center>$\displaystyle K_{\mathrm{Ai}}(t_1,t_2) := \frac{\mathrm{Ai}(t_1) \mathrm{Ai}'(t_2) - \mathrm{Ai}'(t_1) \mathrm{Ai}(t_2)}{t_1 - t_2}.$</p>

(<em>Aside</em>: Semiclassical heuristics suggest that the rescaled kernel <a href="#nkn">(43)</a> should correspond to projection to the parabolic region $\{ (x,p): p^2 \leq -(x-2) \}$ of phase space coming from the corner of the elliptical region above, but I do not know of a connection between this region and the Airy kernel; I am not sure whether semiclassical heuristics are in fact valid at this scaling regime. On the other hand, these heuristics do explain the emergence of the length scale $n^{-1/6}$ that emerges in <a href="#nkn">(43)</a>, as this is the smallest scale at the edge which occupies a region in phase space of area at least $h = 1/n$, consistent with the Heisenberg uncertainty principle.) This then gives an asymptotic description of the largest eigenvalues of a GUE matrix, which cluster in the region $2 \sqrt{n} + O(n^{-1/6})$. For instance, one can use the above asymptotics to show that the largest eigenvalue $\lambda_{\max}$ of a GUE matrix obeys the <a href="http://en.wikipedia.org/wiki/Tracy%E2%80%93Widom_distribution">Tracy-Widom law</a> <p align=center>$\displaystyle {\bf P}\Big( ( \lambda_{\max} - 2\sqrt{n} ) n^{1/6} \leq t \Big) \rightarrow \det( 1 - {\mathcal A}_{[t,+\infty)} )$</p>

for any fixed $t \in {\bf R}$, where ${\mathcal A}_{[t,+\infty)}$ is the integral operator on $[t,+\infty)$ with kernel $K_{\mathrm{Ai}}$. See for instance the recent book of Anderson, Guionnet, and Zeitouni. </blockquote>

<p>

<p>

<p align=center><b> — 5. Determinantal form of the gaussian matrix distribution — </b></p>

<p>

One can perform an analogous analysis of the joint distribution function <a href="#ginibre-gaussian">(10)</a> of gaussian random matrices. Indeed, given any family $\pi_0,\ldots,\pi_{n-1}$ of polynomials, with each $\pi_k$ of degree $k$, much the same arguments as before show that <a href="#ginibre-gaussian">(10)</a> is equal to a constant multiple of <p align=center>$\displaystyle \det\Big( \sum_{k=0}^{n-1} \pi_k(\lambda_i) e^{-|\lambda_i|^2/2} \overline{\pi_k(\lambda_j)} e^{-|\lambda_j|^2/2} \Big)_{1 \leq i,j \leq n}.$</p>

One can then select the $\pi_k$ to be orthonormal in $L^2({\bf C}, e^{-|\lambda|^2}\,d\lambda)$. Actually in this case, the polynomials are very simple, being given explicitly by the formula <p align=center>$\displaystyle \pi_k(\lambda) := \frac{\lambda^k}{\sqrt{\pi k!}}.$</p>
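Orthonormality of the $\pi_k$ (the content of the first part of Exercise 16 below) can be checked by two-dimensional quadrature in polar coordinates:

```python
# Orthonormality of pi_k(z) = z^k / sqrt(pi k!) against e^{-|z|^2} dA(z).
import numpy as np
from math import factorial, pi

r = np.linspace(0, 8, 1600)
theta = np.linspace(0, 2 * pi, 400, endpoint=False)
R, Th = np.meshgrid(r, theta)
Z = R * np.exp(1j * Th)

def inner(j, k):
    vals = Z**j * np.conj(Z) ** k * np.exp(-R**2) * R    # extra R: polar Jacobian
    radial = np.trapz(vals, r, axis=1)
    return radial.sum() * (theta[1] - theta[0]) / (pi * np.sqrt(factorial(j) * factorial(k)))

print(inner(3, 3), inner(4, 2))    # approximately 1 and 0
```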

<p>

<blockquote><b>Exercise 16</b> Verify that the $\pi_k$ are indeed orthonormal, and then conclude that <a href="#ginibre-gaussian">(10)</a> is equal to $\frac{1}{n!} \det( K_n(\lambda_i,\lambda_j) )_{1 \leq i,j \leq n}$, where <p align=center>$\displaystyle K_n(\lambda,\mu) := \frac{1}{\pi} e^{-(|\lambda|^2 + |\mu|^2)/2} \sum_{k=0}^{n-1} \frac{(\lambda \overline{\mu})^k}{k!}.$</p>

Conclude further that the $k$-point correlation functions $\rho_k(\lambda_1,\ldots,\lambda_k)$ are given as <p align=center>$\displaystyle \rho_k(\lambda_1,\ldots,\lambda_k) = \det( K_n(\lambda_i,\lambda_j) )_{1 \leq i,j \leq k}.$</p>
</blockquote>

<p>

<p>

<blockquote><b>Exercise 17</b> Show that as $n \rightarrow \infty$, one has <p align=center>$\displaystyle K_n( \sqrt{n} \lambda, \sqrt{n} \lambda ) \rightarrow \frac{1}{\pi} 1_{|\lambda| < 1}$</p>

for any fixed $\lambda$ with $|\lambda| \neq 1$, and deduce that the expected spectral measure ${\bf E} \mu_{G/\sqrt{n}}$ converges vaguely to the circular measure $\mu_{circ} := \frac{1}{\pi} 1_{|\lambda| \leq 1}\,d\lambda$; this is a special case of the <em>circular law</em>. </blockquote>
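Here is the circular law as seen in a single sample (for the uniform measure on the unit disk, the mass inside radius $r$ is $r^2$):

```python
# Eigenvalues of an iid complex gaussian matrix, rescaled by sqrt(n), fill out
# the unit disk uniformly: the fraction within radius r approaches r^2.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
lam = np.linalg.eigvals(G) / np.sqrt(n)
for r in [0.25, 0.5, 0.75, 1.0]:
    print(r, np.mean(np.abs(lam) <= r), r**2)
```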

<p>

<p>

<blockquote><b>Exercise 18</b> For any $\lambda$ with $|\lambda| < 1$ and any $w_1, w_2 \in {\bf C}$, show that <p align=center>$\displaystyle | K_n( \sqrt{n} \lambda + w_1, \sqrt{n} \lambda + w_2 ) | \rightarrow \frac{1}{\pi} e^{-|w_1 - w_2|^2/2}$</p>

as $n \rightarrow \infty$. (The phase of $K_n$ oscillates in $n$ and does not converge, but it cancels itself in all of the determinants $\det( K_n(\lambda_i,\lambda_j) )$, so the above limit suffices.) This formula (in principle, at least) describes the asymptotic local $k$-point correlation functions of the spectrum of gaussian matrices. </blockquote>

<p>

<p>

<blockquote><b>Remark 19</b> One can use the above formulae as the starting point for many other computations on the spectrum of random gaussian matrices; to give just one example, one can show that the expected number of eigenvalues of a real gaussian matrix which are real is of the order of $\sqrt{n}$ (see <a href="http://www.ams.org/mathscinet-getitem?mr=1437734">this paper of Edelman</a> for more precise results of this nature). It remains a challenge to extend these results to more general ensembles than the gaussian ensemble. </blockquote>

<p>

<p>

One theme in this course will be the central role played by the *gaussian random variables* $N(\mu, \sigma^2)$. Gaussians have an incredibly rich algebraic structure, and many results about general random variables can be established by first using this structure to verify the result for gaussians, and then using universality techniques (such as the Lindeberg exchange strategy) to extend the results to more general variables.

One way to exploit this algebraic structure is to continuously deform the variance from an initial variance of zero (so that the random variable is deterministic) to some final level $T$. We would like to use this to give a continuous family of random variables $B(t)$ as $t$ (viewed as a "time" parameter) runs from $0$ to $T$.

At present, we have not completely specified what $B(t)$ should be, because we have only described the individual distribution of each $B(t)$, and not the joint distribution. However, there is a very natural way to specify a joint distribution of this type, known as Brownian motion. In these notes we lay the necessary probability theory foundations to set up this motion, and indicate its connection with the heat equation, the central limit theorem, and the Ornstein-Uhlenbeck process. This is the beginning of stochastic calculus, which we will not develop fully here.

We will begin with one-dimensional Brownian motion, but it is a simple matter to extend the process to higher dimensions. In particular, we can define Brownian motion on vector spaces of matrices, such as the space of $n \times n$ Hermitian matrices. This process is equivariant with respect to conjugation by unitary matrices, and so we can quotient out by this conjugation and obtain a new process on the quotient space, or in other words on the *spectrum* of Hermitian matrices. This process is called *Dyson Brownian motion*, and turns out to have a simple description in terms of ordinary Brownian motion; it will play a key role in several of the subsequent notes in this course.
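As a taste of what is to come, Dyson Brownian motion can be simulated exactly as just described, by running Brownian motion in the matrix entries and tracking the (automatically non-colliding) eigenvalues; a minimal sketch, with one of several common normalisations of the increments:

```python
# Brownian motion on Hermitian matrices; the eigenvalue flow is Dyson Brownian motion.
import numpy as np

rng = np.random.default_rng(3)
n, steps, dt = 4, 200, 1e-2
M = np.zeros((n, n), dtype=complex)
for step in range(1, steps + 1):
    dG = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    M += (dG + dG.conj().T) / 2 * np.sqrt(dt)        # Hermitian gaussian increment
    if step % 50 == 0:
        print(round(step * dt, 2), np.round(np.linalg.eigvalsh(M), 3))
```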

Given a set $S$, a (simple) point process is a random subset $\Sigma$ of $S$. (A non-simple point process would allow multiplicity; more formally, $\Sigma$ is no longer a subset of $S$, but is a Radon measure on $S$, where we give $S$ the structure of a locally compact Polish space, but I do not wish to dwell on these sorts of technical issues here.) Typically, $\Sigma$ will be finite or countable, even when $S$ is uncountable. Basic examples of point processes include

- (Bernoulli point process) $S$ is an at most countable set, $0 \leq p \leq 1$ is a parameter, and $\Sigma$ a random set such that the events $x \in \Sigma$ for each $x \in S$ are jointly independent and occur with a probability of $p$ each. This process is automatically simple.
- (Discrete Poisson point process) $S$ is an at most countable space, $\lambda$ is a measure on $S$ (i.e. an assignment of a non-negative number $\lambda(\{x\})$ to each $x \in S$), and $\Sigma$ is a multiset where the multiplicity of $x$ in $\Sigma$ is a Poisson random variable with intensity $\lambda(\{x\})$, and the multiplicities of $x$ as $x$ varies in $S$ are jointly independent. This process is usually not simple.
- (Continuous Poisson point process) $S$ is a locally compact Polish space with a Radon measure $\lambda$, and for each $\Omega \subset S$ of finite measure, the number of points $|\Sigma \cap \Omega|$ that $\Sigma$ contains inside $\Omega$ is a Poisson random variable with intensity $\lambda(\Omega)$. Furthermore, if $\Omega_1,\ldots,\Omega_k$ are disjoint sets, then the random variables $|\Sigma \cap \Omega_1|,\ldots,|\Sigma \cap \Omega_k|$ are jointly independent. (The fact that Poisson processes exist at all requires a non-trivial amount of measure theory, and will not be discussed here; a minimal sampler for this process in a simple case is sketched just after this list.) This process is almost surely simple iff all points in $S$ have measure zero.
- (Spectral point processes) The spectrum of a random matrix is a point process in ${\bf C}$ (or in ${\bf R}$, if the random matrix is Hermitian). If the spectrum is almost surely simple, then the point process is almost surely simple. In a similar spirit, the zeroes of a random polynomial are also a point process.
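For concreteness, here is the sampler promised above, in the basic case where $S$ is a rectangle and $\lambda$ is a multiple $\lambda_0$ of Lebesgue measure: one draws a Poisson total count, then places the points iid uniformly.

```python
# Minimal sampler for a continuous Poisson point process on a rectangle.
import numpy as np

rng = np.random.default_rng(4)
lam0, width, height = 2.0, 5.0, 3.0
count = rng.poisson(lam0 * width * height)           # total count ~ Poisson(intensity)
points = rng.uniform((0, 0), (width, height), size=(count, 2))
print(count, points[:5])
```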

A remarkable fact is that many natural (simple) point processes are *determinantal processes*. Very roughly speaking, this means that there exists a positive semi-definite kernel $K: S \times S \rightarrow {\bf R}$ such that, for any $x_1,\ldots,x_k \in S$, the probability that $x_1,\ldots,x_k$ all lie in the random set $\Sigma$ is proportional to the determinant $\det( K(x_i,x_j) )_{1 \leq i,j \leq k}$. Examples of processes known to be determinantal include non-intersecting random walks, spectra of random matrix ensembles such as GUE, and zeroes of polynomials with gaussian coefficients.

I would be interested in finding a good explanation (even at the heuristic level) as to why determinantal processes are so prevalent in practice. I do have a very weak explanation, namely that determinantal processes obey a large number of rather pretty algebraic identities, and so it is plausible that any other process which has a very algebraic structure (in particular, any process involving gaussians, characteristic polynomials, etc.) would be connected in some way with determinantal processes. I’m not particularly satisfied with this explanation, but I thought I would at least describe some of these identities below to support this case. (This is partly for my own benefit, as I am trying to learn about these processes, particularly in connection with the spectral distribution of random matrices.) The material here is partly based on this survey of Hough, Krishnapur, Peres, and Virág.

The Riemann zeta function $\zeta(s)$, defined for $\Re(s) > 1$ by

$\displaystyle \zeta(s) := \sum_{n=1}^\infty \frac{1}{n^s} \ \ \ \ \ (1)$

and then continued meromorphically to other values of $s$ by analytic continuation, is a fundamentally important function in analytic number theory, as it is connected to the primes via the Euler product formula

$\displaystyle \zeta(s) = \prod_p \Big( 1 - \frac{1}{p^s} \Big)^{-1} \ \ \ \ \ (2)$

(for $\Re(s) > 1$, at least), where $p$ ranges over primes. (The equivalence between (1) and (2) is essentially the generating function version of the fundamental theorem of arithmetic.) The function $\zeta$ has a pole at $s = 1$ and a number of zeroes $\rho$. A formal application of the factor theorem gives

$\displaystyle \zeta(s) = \prod_\rho (s - \rho) \times \ldots \ \ \ \ \ (3)$

where $\rho$ ranges over zeroes of $\zeta$, and we will be vague about what the $\ldots$ factor is, how to make sense of the infinite product, and exactly which zeroes of $\zeta$ are involved in the product. Equating (2) and (3) and taking logarithms gives the formal identity

$\displaystyle \log \zeta(s) = - \sum_p \log\Big( 1 - \frac{1}{p^s} \Big) \ \ \ \ \ (4)$

$\displaystyle = \sum_\rho \log(s - \rho) + \ldots \ \ \ \ \ (5)$

and differentiating the above identity in $s$ yields the formal identity

$\displaystyle - \frac{\zeta'(s)}{\zeta(s)} = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = - \sum_\rho \frac{1}{s - \rho} + \ldots \ \ \ \ \ (6)$

where $\Lambda(n)$ is the von Mangoldt function, defined to be $\log p$ when $n$ is a power of a prime $p$, and zero otherwise. Thus we see that the behaviour of the primes (as encoded by the von Mangoldt function) is intimately tied to the distribution of the zeroes $\rho$. For instance, if we knew that the zeroes were far away from the axis $\Re(s) = 1$, then we would heuristically have

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^{1+it}} \approx \frac{1}{it}$

for real $t$ (the right-hand side coming from the pole of $\zeta$ at $s = 1$, which one can think of as a zero of order $-1$). On the other hand, the integral test suggests that

$\displaystyle \sum_{n=1}^\infty \frac{1}{n^{1+it}} \approx \frac{1}{it}$

and thus we see that $\Lambda(n)$ and $1$ have essentially the same (multiplicative) Fourier transform:

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^{1+it}} \approx \sum_{n=1}^\infty \frac{1}{n^{1+it}}.$

Inverting the Fourier transform (or performing a contour integral closely related to the inverse Fourier transform), one is led to the prime number theorem

$\displaystyle \sum_{n \leq x} \Lambda(n) = (1 + o(1)) x.$

In fact, the standard proof of the prime number theorem basically proceeds by making all of the above formal arguments precise and rigorous.
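The von Mangoldt form of the prime number theorem is easy to watch converge numerically (a sketch; the sieve bound $10^6$ is illustrative):

```python
# psi(x) = sum_{n <= x} Lambda(n) stays close to x, per the prime number theorem.
import numpy as np

N = 10**6
is_prime = np.ones(N + 1, dtype=bool); is_prime[:2] = False
for p in range(2, int(N**0.5) + 1):
    if is_prime[p]:
        is_prime[p*p::p] = False
lam = np.zeros(N + 1)
for p in np.flatnonzero(is_prime):        # Lambda(p^j) = log p
    pk = p
    while pk <= N:
        lam[pk] = np.log(p)
        pk *= p
psi = np.cumsum(lam)
for x in [10**4, 10**5, 10**6]:
    print(x, psi[x] / x)                  # ratios approach 1
```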

Unfortunately, we don’t know as much about the zeroes $\rho$ of the zeta function (and hence, about the function $\zeta$ itself) as we would like. The Riemann hypothesis (RH) asserts that all the zeroes (except for the “trivial” zeroes at the negative even numbers $-2, -4, -6, \ldots$) lie on the *critical line* $\Re(s) = \frac{1}{2}$; this hypothesis would make the error terms in the above proof of the prime number theorem significantly more accurate. Furthermore, the stronger *GUE hypothesis* asserts in addition to RH that the local distribution of these zeroes on the critical line should behave like the local distribution of the eigenvalues of a random matrix drawn from the gaussian unitary ensemble (GUE). I will not give a precise formulation of this hypothesis here, except to say that the adjective “local” in the context of distribution of zeroes means something like “at scale $O(1/\log T)$ when $\Im(s) = T$”.

Nevertheless, we do know some reasonably non-trivial facts about the zeroes $\rho$ and the zeta function $\zeta$, either unconditionally, or assuming RH (or GUE). Firstly, there are no zeroes for $\Re(s) > 1$ (as one can already see from the convergence of the Euler product (2) in this case) or for $\Re(s) = 1$ (this is trickier, relying on (6) and the elementary observation that

$\displaystyle \sum_{n=1}^\infty \frac{\Lambda(n)}{n^\sigma} \big( 3 + 4 \cos(t \log n) + \cos(2t \log n) \big)$

is non-negative for $\sigma > 1$ and $t \in {\bf R}$, thanks to the inequality $3 + 4\cos\theta + \cos(2\theta) = 2(1+\cos\theta)^2 \geq 0$); from the functional equation

$\displaystyle \pi^{-s/2} \Gamma(s/2) \zeta(s) = \pi^{-(1-s)/2} \Gamma\Big( \frac{1-s}{2} \Big) \zeta(1-s)$

(which can be viewed as a consequence of the Poisson summation formula, see e.g. my blog post on this topic) we know that there are no zeroes for $\Re(s) < 0$ either (except for the trivial zeroes at negative even integers, corresponding to the poles of the Gamma function). Thus all the non-trivial zeroes lie in the *critical strip* $0 \leq \Re(s) \leq 1$.

We also know that there are infinitely many non-trivial zeroes, and can approximately count how many zeroes there are in any large bounded region of the critical strip. For instance, for large $T$, the number of zeroes in this strip with $\Im(\rho) \in [T, T+1]$ is $O(\log T)$. This can be seen by applying (6) to $s = 2 + iT$ (say); the trivial zeroes at the negative integers end up giving a contribution of $O(\log T)$ to this sum (this is a heavily disguised variant of Stirling’s formula, as one can view the trivial zeroes as essentially being poles of the Gamma function), while the pole at $s = 1$ and the $\ldots$ terms end up being negligible (of size $O(1)$), while each non-trivial zero $\rho$ contributes a term which has a non-negative real part, and furthermore has size comparable to $1$ if $\Im(\rho) \in [T, T+1]$. (Here I am glossing over a technical renormalisation needed to make the infinite series in (6) converge properly.) Meanwhile, the left-hand side of (6) is absolutely convergent for $s = 2 + iT$ and of size $O(1)$, and the claim follows. A more refined version of this argument shows that the number of non-trivial zeroes with $0 \leq \Im(\rho) \leq T$ is $\frac{T}{2\pi} \log \frac{T}{2\pi} - \frac{T}{2\pi} + O(\log T)$, but we will not need this more precise formula here. (A fair fraction – at least 40%, in fact – of these zeroes are known to lie on the critical line; see this earlier blog post of mine for more discussion.)
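The refined zero-counting formula can be spot-checked against tabulated zeros; mpmath's zetazero returns the $k$-th nontrivial zero on the critical line:

```python
# Count zeros with imaginary part in (0, T] and compare with
# (T/2pi) log(T/2pi) - T/2pi.
import mpmath as mp

T = 50
count, k = 0, 1
while mp.im(mp.zetazero(k)) <= T:
    count, k = count + 1, k + 1
pred = T / (2 * mp.pi) * mp.log(T / (2 * mp.pi)) - T / (2 * mp.pi)
print(count, float(pred))   # 10 versus ~8.5; the gap sits inside the O(log T) term
```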

Another thing that we happen to know is how the *magnitude* $|\zeta(\frac{1}{2}+it)|$ of the zeta function is distributed as $t \rightarrow \infty$; it turns out to be log-normally distributed with log-variance about $\frac{1}{2} \log\log t$. More precisely, we have the following result of Selberg:

Theorem 1. Let $T$ be a large number, and let $t$ be chosen uniformly at random from between $T$ and $2T$ (say). Then the distribution of $\frac{1}{\sqrt{\frac{1}{2} \log\log T}} \log |\zeta(\frac{1}{2}+it)|$ converges (in distribution) to the normal distribution $N(0,1)$.

To put it more informally, $\log |\zeta(\frac{1}{2}+it)|$ behaves like $\sqrt{\frac{1}{2} \log\log t} \times N(0,1)$ plus lower order terms for “typical” large values of $t$. (Zeroes $\rho$ of $\zeta$ are, of course, certainly not typical, but one can show that one can usually stay away from these zeroes.) In fact, Selberg showed a slightly more precise result, namely that for any fixed $k \geq 1$, the $k^{th}$ moment of $\frac{1}{\sqrt{\frac{1}{2} \log\log T}} \log |\zeta(\frac{1}{2}+it)|$ converges to the $k^{th}$ moment of $N(0,1)$.

Remarkably, Selberg’s result does not need RH or GUE, though it is certainly consistent with such hypotheses. (For instance, the determinant of a GUE matrix asymptotically obeys a remarkably similar log-normal law to that given by Selberg’s theorem.) Indeed, the net effect of these hypotheses only affects some error terms in $\log |\zeta(\frac{1}{2}+it)|$ of magnitude $O(1)$, which are thus asymptotically negligible compared to the main term, which has magnitude about $\sqrt{\log\log T}$. So Selberg’s result, while very pretty, manages to finesse the question of what the zeroes of $\zeta$ are actually doing – he makes the primes do most of the work, rather than the zeroes.
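One can also peek at Selberg's theorem empirically, bearing in mind that $\log\log T$ grows so slowly that the match will only be rough (a sketch assuming mpmath; the sample size and height are illustrative, and the loop is slow):

```python
# Sample log|zeta(1/2 + it)| for random t in [T, 2T] and compare the sample
# variance with (1/2) log log T.
import mpmath as mp
import numpy as np

rng = np.random.default_rng(5)
T = 1e4
ts = rng.uniform(T, 2 * T, size=100)
samples = [float(mp.log(abs(mp.zeta(mp.mpc(0.5, t))))) for t in ts]
print(np.mean(samples), np.var(samples), 0.5 * np.log(np.log(T)))
```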

Selberg never actually published the above result, but it is reproduced in a number of places (e.g. in this book by Joyner, or this book by Laurincikas). As with many other results in analytic number theory, the actual details of the proof can get somewhat technical; but I would like to record here (partly for my own benefit) an informal sketch of some of the main ideas in the argument.
