Van Vu and I have just uploaded to the arXiv our paper “Random matrices: The Universality phenomenon for Wigner ensembles”. This survey is a longer version (58 pages) of a previous short survey we wrote up a few months ago. The survey focuses on recent progress in understanding the universality phenomenon for Hermitian Wigner ensembles, of which the Gaussian Unitary Ensemble (GUE) is the most well known. The one-sentence summary of this progress is that many of the asymptotic spectral statistics (e.g. correlation functions, eigenvalue gaps, determinants, etc.) that were previously known for GUE matrices are now known for very large classes of Wigner ensembles as well. There are however a wide variety of results of this type, due to the large number of interesting spectral statistics, the varying hypotheses placed on the ensemble, and the different modes of convergence studied, and it is currently difficult to isolate a single such result as the definitive universality result. (In particular, there is at present a tradeoff between generality of ensemble and strength of convergence; the universality results that are available for the most general classes of ensemble are only presently able to demonstrate a rather weak sense of convergence to the universal distribution (involving an additional averaging in the energy parameter), which limits the applicability of such results to a number of interesting questions in which energy averaging is not permissible, such as the study of the least singular value of a Wigner matrix, or of related quantities such as the condition number or determinant. But it is conceivable that this tradeoff is a temporary phenomenon and may be eliminated by future work in this area; in the case of Hermitian matrices whose entries have the same second moments as those of the GUE ensemble, for instance, the need for energy averaging has already been removed.)

Nevertheless, throughout the family of results that have been obtained recently, there are two main methods which have been fundamental to almost all of the recent progress in extending from special ensembles such as GUE to general ensembles. The first method, developed extensively by Erdos, Schlein, Yau, Yin, and others (and building on an initial breakthrough by Johansson), is the heat flow method, which exploits the rapid convergence to equilibrium of the spectral statistics of matrices undergoing Dyson-type flows towards GUE. (An important aspect of this method is the ability to accelerate the convergence to equilibrium by localising the Hamiltonian, in order to eliminate the slowest modes of the flow; this refinement of the method is known as the “local relaxation flow” method. Unfortunately, the translation mode is not accelerated by this process, which is the principal reason why results obtained by pure heat flow methods still require an energy averaging in the final conclusion; it would be of interest to find a way around this difficulty.) The other method, which goes all the way back to Lindeberg in his classical proof of the central limit theorem, and which was introduced to random matrix theory by Chatterjee and then developed for the universality problem by Van Vu and myself, is the swapping method, which is based on the observation that spectral statistics of Wigner matrices tend to be stable if one replaces just one or two entries of the matrix with another distribution, with the stability of the swapping process becoming stronger if one assumes that the old and new entries have many matching moments. The main formalisations of this observation are known as four moment theorems, because they require four matching moments between the entries, although there are some variant three moment theorems and two moment theorems in the literature as well. Our initial four moment theorems were focused on individual eigenvalues (and were later also extended to eigenvectors), but it was later observed by Erdos, Yau, and Yin that simpler four moment theorems could also be established for aggregate spectral statistics, such as the coefficients of the Green’s function, and Knowles and Yin also subsequently observed that these latter theorems could be used to recover a four moment theorem for eigenvalues and eigenvectors, giving an alternate approach to proving such theorems.

Interestingly, it seems that the heat flow and swapping methods are complementary to each other; the heat flow methods are good at removing moment hypotheses on the coefficients, while the swapping methods are good at removing regularity hypotheses. To handle general ensembles with minimal moment or regularity hypotheses, it is thus necessary to combine the two methods (though perhaps in the future a third method, or a unification of the two existing methods, might emerge).

Besides the heat flow and swapping methods, there are a number of other basic tools that are needed in these results, such as local semicircle laws and eigenvalue rigidity, which are also discussed in the survey. We also survey how universality has been established for a wide variety of spectral statistics; the ${k}$-point correlation functions are the most well known of these statistics, but they do not tell the whole story (particularly if one can only control these functions after an averaging in the energy), and there are a number of other statistics, such as eigenvalue counting functions, determinants, or spectral gaps, for which the above methods can be applied.

In order to prevent the survey from becoming too enormous, we decided to restrict attention to Hermitian matrix ensembles, whose entries off the diagonal are identically distributed, as this is the case in which the strongest results are available. There are several results that are applicable to more general ensembles than these which are briefly mentioned in the survey, but they are not covered in detail.

We plan to submit this survey eventually to the proceedings of a workshop on random matrix theory, and will continue to update the references on the arXiv version until the time comes to actually submit the paper.

Finally, in the survey we issue some errata for previous papers of Van and myself in this area, mostly centering around the three moment theorem (a variant of the more widely used four moment theorem), for which the original proof of Van and myself was incomplete. (Fortunately, as the three moment theorem had many fewer applications than the four moment theorem, and most of the applications that it did have ended up being superseded by subsequent papers, the actual impact of this issue was limited, but still an erratum is in order.)

Van Vu and I have just uploaded to the arXiv our paper Random matrices: Sharp concentration of eigenvalues, submitted to the Electronic Journal of Probability. As with many of our previous papers, this paper is concerned with the distribution of the eigenvalues ${\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)}$ of a random Wigner matrix ${M_n}$ (such as a matrix drawn from the Gaussian Unitary Ensemble (GUE) or Gaussian Orthogonal Ensemble (GOE)). To simplify the discussion we shall mostly restrict attention to the bulk of the spectrum, i.e. to eigenvalues ${\lambda_i(M_n)}$ with ${\delta n \leq i \leq (1-\delta) n}$ for some fixed ${\delta>0}$, although analogues of most of the results below have also been obtained at the edge of the spectrum.

If we normalise the entries of the matrix ${M_n}$ to have mean zero and variance ${1/n}$, then in the asymptotic limit ${n \rightarrow \infty}$, we have the Wigner semicircle law, which asserts that the eigenvalues are asymptotically distributed according to the semicircular distribution ${\rho_{sc}(x)\ dx}$, where

$\displaystyle \rho_{sc}(x) := \frac{1}{2\pi} (4-x^2)_+^{1/2}.$

An essentially equivalent way of saying this is that for large ${n}$, we expect the ${i^{th}}$ eigenvalue ${\lambda_i(M_n)}$ of ${M_n}$ to stay close to the classical location ${\gamma_i \in [-2,2]}$, defined by the formula

$\displaystyle \int_{-2}^{\gamma_i} \rho_{sc}(x)\ dx = \frac{i}{n}.$

In particular, from the Wigner semicircle law it can be shown that asymptotically almost surely, one has

$\displaystyle \lambda_i(M_n) = \gamma_i + o(1) \ \ \ \ \ (1)$

for all ${1 \leq i \leq n}$.
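
To make the classical locations concrete, here is a small numerical sketch (an illustration of my own, not from the paper; the matrix size, the GUE sampling convention, and the use of numpy/scipy are arbitrary choices): it samples a GUE matrix, computes the ${\gamma_i}$ from the semicircle cumulative distribution function, and reports how far the bulk eigenvalues stray from them.

```python
import numpy as np
from scipy.optimize import brentq

def semicircle_cdf(x):
    # cumulative distribution function of rho_sc(x) = (1/(2 pi)) sqrt(4 - x^2) on [-2, 2]
    return 0.5 + x * np.sqrt(4 - x ** 2) / (4 * np.pi) + np.arcsin(x / 2) / np.pi

def classical_locations(n):
    # gamma_i solves int_{-2}^{gamma_i} rho_sc = i/n for i = 1, ..., n (with gamma_n = 2)
    gammas = [brentq(lambda x, t=i / n: semicircle_cdf(x) - t, -2, 2) for i in range(1, n)]
    return np.array(gammas + [2.0])

def sample_gue(n, rng):
    # GUE with off-diagonal variance 1 and real N(0,1) diagonal, rescaled by 1/sqrt(n)
    # so that the spectrum fills out [-2, 2]
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (g + g.conj().T) / np.sqrt(2 * n)

rng = np.random.default_rng(0)
n = 1000
lam = np.sort(np.linalg.eigvalsh(sample_gue(n, rng)))
gam = classical_locations(n)
bulk = slice(n // 10, 9 * n // 10)  # the bulk, with delta = 0.1
print("max |lambda_i - gamma_i| over the bulk:", np.max(np.abs(lam[bulk] - gam[bulk])))
print("for comparison, log(n)/n =", np.log(n) / n)
```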

In the modern study of the spectrum of Wigner matrices (and in particular as a key tool in establishing universality results), it has become of interest to improve the error term in (1) as much as possible. A typical early result in this direction was by Bai, who used the Stieltjes transform method to obtain polynomial convergence rates of the shape ${O(n^{-c})}$ for some absolute constant ${c>0}$; see also the subsequent papers of Alon-Krivelevich-Vu and of Meckes, who were able to obtain such convergence rates (with exponentially high probability) by using concentration of measure tools, such as Talagrand’s inequality. On the other hand, in the case of the GUE ensemble it is known (by this paper of Gustavsson) that ${\lambda_i(M_n)}$ has variance comparable to ${\frac{\log n}{n^2}}$ in the bulk, so that the optimal error term in (1) should be about ${O(\log^{1/2} n/n)}$. (One may think that if one wanted bounds on (1) that were uniform in ${i}$, one would need to enlarge the error term further, but this does not appear to be the case, due to strong correlations between the ${\lambda_i}$; note for instance this recent result of Ben Arous and Bourgade that the largest gap between eigenvalues in the bulk is typically of order ${O(\log^{1/2} n/n)}$.)

A significant advance in this direction was achieved by Erdos, Schlein, and Yau in a series of papers where they used a combination of Stieltjes transform and concentration of measure methods to obtain local semicircle laws which showed, among other things, that one had asymptotics of the form

$\displaystyle N(I) = (1+o(1)) \int_I \rho_{sc}(x)\ dx$

with exponentially high probability for intervals ${I}$ in the bulk that were as short as ${n^{-1+\epsilon}}$ for some ${\epsilon>0}$, where ${N(I)}$ is the number of eigenvalues in ${I}$. These asymptotics are consistent with a good error term in (1), and are already sufficient for many applications, but do not quite imply a strong concentration result for individual eigenvalues ${\lambda_i}$ (basically because they do not preclude long-range or “secular” shifts in the spectrum that involve large blocks of eigenvalues at mesoscopic scales). Nevertheless, this was rectified in a subsequent paper of Erdos, Yau, and Yin, which roughly speaking obtained a bound of the form

$\displaystyle \lambda_i(M_n) = \gamma_i + O( \frac{\log^{O(\log\log n)} n}{n} )$

in the bulk with exponentially high probability, for Wigner matrices obeying some exponential decay conditions on the entries. This was achieved by a rather delicate high moment calculation, in which the contributions of the diagonal entries of the resolvent (whose average forms the Stieltjes transform) were shown to mostly cancel each other out.

As the GUE computations show, this concentration result is sharp up to the quasilogarithmic factor ${\log^{O(\log\log n)} n}$. The main result of this paper is to improve the concentration result to one more in line with the GUE case, namely

$\displaystyle \lambda_i(M_n) = \gamma_i + O( \frac{\log^{O(1)} n}{n} )$

with exponentially high probability (see the paper for a more precise statement of results). The one catch is that an additional hypothesis is required, namely that the entries of the Wigner matrix have vanishing third moment. We also obtain similar results for the edge of the spectrum (but with a different scaling).

Our arguments are rather different from those of Erdos, Yau, and Yin, and thus provide an alternate approach to establishing eigenvalue concentration. The main tool is the Lindeberg exchange strategy, which is also used to prove the Four Moment Theorem (although we do not directly invoke the Four Moment Theorem in our analysis). The main novelty is that this exchange strategy is now used to establish large deviation estimates (i.e. exponentially small tail probabilities) rather than universality of the limiting distribution. Roughly speaking, the basic point is as follows. The Lindeberg exchange strategy seeks to compare a function ${F(X_1,\ldots,X_n)}$ of many independent random variables ${X_1,\ldots,X_n}$ with the same function ${F(Y_1,\ldots,Y_n)}$ of a different set of random variables (which match moments with the original set of variables to some order, such as to second or fourth order) by exchanging the random variables one at a time. Typically, one tries to upper bound expressions such as

$\displaystyle {\bf E} \phi(F(X_1,\ldots,X_n)) - {\bf E} \phi(F(X_1,\ldots,X_{n-1},Y_n))$

for various smooth test functions ${\phi}$, by performing a Taylor expansion in the variable being swapped and taking advantage of the matching moment hypotheses. In previous implementations of this strategy, ${\phi}$ was a bounded test function, which allowed one to get control of the bulk of the distribution of ${F(X_1,\ldots,X_n)}$, and in particular to control probabilities such as

$\displaystyle {\bf P}( a \leq F(X_1,\ldots,X_n) \leq b )$

for various thresholds ${a}$ and ${b}$, but did not give good control on the tail as the error terms tended to be polynomially decaying in ${n}$ rather than exponentially decaying. However, it turns out that one can modify the exchange strategy to deal with moments such as

$\displaystyle {\bf E} (1 + F(X_1,\ldots,X_n)^2)^k$

for various moderately large ${k}$ (e.g. of size comparable to ${\log n}$), obtaining results such as

$\displaystyle {\bf E} (1 + F(Y_1,\ldots,Y_n)^2)^k = (1+o(1)) {\bf E} (1 + F(X_1,\ldots,X_n)^2)^k$

after performing all the relevant exchanges. As such, one can then use large deviation estimates on ${F(X_1,\ldots,X_n)}$ to deduce large deviation estimates on ${F(Y_1,\ldots,Y_n)}$.
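
As a toy illustration of the swapping philosophy in the simplest setting (the classical central limit theorem rather than random matrices; the test function, sample sizes, and the choice of Rademacher variables as the comparison ensemble below are my own, purely for illustration), one can check numerically that exchanging the summands of a normalised sum for Gaussians, a batch at a time, barely moves the expectation of a smooth statistic, since the two distributions match moments to third order.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 100, 50_000

def phi(t):
    # a fixed smooth bounded test function
    return 1.0 / (1.0 + t ** 2)

def mean_phi(num_gaussian):
    # F(X_1,...,X_n) = (X_1 + ... + X_n)/sqrt(n), with the first num_gaussian coordinates
    # standard Gaussian and the rest Rademacher (+-1); the two laws match moments to third order
    g = rng.standard_normal((trials, num_gaussian)).sum(axis=1)
    r = rng.choice([-1.0, 1.0], size=(trials, n - num_gaussian)).sum(axis=1)
    return phi((g + r) / np.sqrt(n)).mean()

# exchanging coordinates (here in two big steps rather than one at a time)
# changes E phi(F) only very slightly at each stage
for k in [0, n // 2, n]:
    print(f"{k:3d} Gaussian coordinates out of {n}: E phi(F) = {mean_phi(k):.4f}")
```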

In this paper we also take advantage of a simplification, first noted by Erdos, Yau, and Yin, that Four Moment Theorems become somewhat easier to prove if one works with resolvents ${(M_n-z)^{-1}}$ (and the closely related Stieltjes transform ${s(z) := \frac{1}{n} \hbox{tr}( (M_n-z)^{-1} )}$) rather than with individual eigenvalues, as the Taylor expansion of a resolvent is very simple (essentially being a Neumann series). The relationship between the Stieltjes transform and the location of individual eigenvalues can be seen by taking advantage of the identity

$\displaystyle \frac{\pi}{2} - \frac{\pi}{n} N((-\infty,E)) = \int_0^\infty \hbox{Re} s(E + i \eta)\ d\eta$

for any energy level ${E \in {\bf R}}$, which can be verified from elementary calculus. (In practice, we would truncate ${\eta}$ near zero and near infinity to avoid some divergences, but this is a minor technicality.) As such, a concentration result for the Stieltjes transform can be used to establish an analogous concentration result for the eigenvalue counting functions ${N((-\infty,E))}$, which in turn can be used to deduce concentration results for individual eigenvalues ${\lambda_i(M_n)}$ by some basic combinatorial manipulations.
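
The identity relating the Stieltjes transform to the counting function is easy to test numerically. Here is a quick sketch of my own (the matrix size, energy level, and use of scipy quadrature are arbitrary choices): it evaluates both sides for a sampled GUE matrix and one fixed energy ${E}$.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(2)
n = 60
g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
M = (g + g.conj().T) / np.sqrt(2 * n)     # GUE, normalised so the spectrum fills [-2, 2]
lam = np.linalg.eigvalsh(M)

E = 0.3
def re_stieltjes(eta):
    # Re s(E + i eta), where s(z) = (1/n) tr (M - z)^{-1} = (1/n) sum_j 1/(lambda_j - z)
    return np.mean((lam - E) / ((lam - E) ** 2 + eta ** 2))

lhs = np.pi / 2 - (np.pi / n) * np.sum(lam < E)   # pi/2 - (pi/n) N((-infty, E))
rhs, _ = quad(re_stieltjes, 0, np.inf)            # int_0^infty Re s(E + i eta) d eta
print("counting side:", lhs, "   integral side:", rhs)
```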

Van Vu and I have just uploaded to the arXiv our short survey article, “Random matrices: The Four Moment Theorem for Wigner ensembles“, submitted to the MSRI book series, as part of the proceedings on the MSRI semester program on random matrix theory from last year.  This is a highly condensed version (at 17 pages) of a much longer survey (currently at about 48 pages, though not completely finished) that we are currently working on, devoted to the recent advances in understanding the universality phenomenon for spectral statistics of Wigner matrices.  In this abridged version of the survey, we focus on a key tool in the subject, namely the Four Moment Theorem which roughly speaking asserts that the statistics of a Wigner matrix depend only on the first four moments of the entries.  We give a sketch of proof of this theorem, and two sample applications: a central limit theorem for individual eigenvalues of a Wigner matrix (extending a result of Gustavsson in the case of GUE), and the verification of a conjecture of Wigner, Dyson, and Mehta on the universality of the asymptotic k-point correlation functions even for discrete ensembles (provided that we interpret convergence in the vague topology sense).

For reasons of space, this paper is very far from an exhaustive survey even of the narrow topic of universality for Wigner matrices, but should hopefully be an accessible entry point into the subject nevertheless.

Van Vu and I have just uploaded to the arXiv our paper A central limit theorem for the determinant of a Wigner matrix, submitted to Adv. Math. It studies the asymptotic distribution of the determinant ${\det M_n}$ of a random Wigner matrix (such as a matrix drawn from the Gaussian Unitary Ensemble (GUE) or Gaussian Orthogonal Ensemble (GOE)).

Before we get to these results, let us first discuss the simpler problem of studying the determinant ${\det A_n}$ of a random iid matrix ${A_n = (\zeta_{ij})_{1 \leq i,j \leq n}}$, such as a real gaussian matrix (where all entries are independently and identically distributed using the standard real normal distribution ${\zeta_{ij} \equiv N(0,1)_{\bf R}}$), a complex gaussian matrix (where all entries are independently and identically distributed using the standard complex normal distribution ${\zeta_{ij} \equiv N(0,1)_{\bf C}}$, thus the real and imaginary parts are independent with law ${N(0,1/2)_{\bf R}}$), or the random sign matrix (in which all entries are independently and identically distributed according to the Bernoulli distribution ${\zeta_{ij} \equiv \pm 1}$, with a ${1/2}$ chance of either sign). More generally, one can consider a matrix ${A_n}$ in which all the entries ${\zeta_{ij}}$ are independently and identically distributed with mean zero and variance ${1}$.

We can expand ${\det A_n}$ using the Leibniz expansion

$\displaystyle \det A_n = \sum_{\sigma \in S_n} I_\sigma, \ \ \ \ \ (1)$

where ${\sigma: \{1,\ldots,n\} \rightarrow \{1,\ldots,n\}}$ ranges over the permutations of ${\{1,\ldots,n\}}$, and ${I_\sigma}$ is the product

$\displaystyle I_\sigma := \hbox{sgn}(\sigma) \prod_{i=1}^n \zeta_{i\sigma(i)}.$

From the iid nature of the ${\zeta_{ij}}$, we easily see that each ${I_\sigma}$ has mean zero and variance one, and that the ${I_\sigma}$ are pairwise uncorrelated as ${\sigma}$ varies. We conclude that ${\det A_n}$ has mean zero and variance ${n!}$ (an observation first made by Turán). In particular, from Chebyshev’s inequality we see that ${\det A_n}$ is typically of size ${O(\sqrt{n!})}$.
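
As a concrete (if trivial) check of Turán's observation, one can enumerate all ${2^{9} = 512}$ sign patterns of a ${3 \times 3}$ random sign matrix exactly; this is an illustrative computation of my own rather than anything from the paper.

```python
import itertools
import numpy as np

n = 3
dets = []
# enumerate all 2^(n^2) = 512 random sign matrices exactly
for signs in itertools.product([-1.0, 1.0], repeat=n * n):
    A = np.array(signs).reshape(n, n)
    dets.append(round(np.linalg.det(A)))
dets = np.array(dets)
print("E det A_n     =", dets.mean())          # expected: 0
print("E (det A_n)^2 =", (dets ** 2).mean())   # expected: n! = 6
```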

It turns out, though, that this is not quite best possible. This is easiest to explain in the real gaussian case, by performing a computation first made by Goodman. In this case, the distribution of ${\det A_n}$ is clearly symmetric, so we can focus attention on the magnitude ${|\det A_n|}$. We can interpret this quantity geometrically as the volume of an ${n}$-dimensional parallelepiped whose generating vectors ${X_1,\ldots,X_n}$ are independent real gaussian vectors in ${{\bf R}^n}$ (i.e. their coefficients are iid with law ${N(0,1)_{\bf R}}$). Using the classical base-times-height formula, we thus have

$\displaystyle |\det A_n| = \prod_{i=1}^n \hbox{dist}(X_i, V_i) \ \ \ \ \ (2)$

where ${V_i}$ is the ${i-1}$-dimensional linear subspace of ${{\bf R}^n}$ spanned by ${X_1,\ldots,X_{i-1}}$ (note that ${X_1,\ldots,X_n}$, having an absolutely continuous joint distribution, are almost surely linearly independent). Taking logarithms, we conclude

$\displaystyle \log |\det A_n| = \sum_{i=1}^n \log \hbox{dist}(X_i, V_i).$

Now, we take advantage of a fundamental symmetry property of the Gaussian vector distribution, namely its invariance with respect to the orthogonal group ${O(n)}$. Because of this, we see that if we fix ${X_1,\ldots,X_{i-1}}$ (and thus ${V_i}$), the random variable ${\hbox{dist}(X_i,V_i)}$ has the same distribution as ${\hbox{dist}(X_i,{\bf R}^{i-1})}$, or equivalently the ${\chi}$ distribution

$\displaystyle \chi_{n-i+1} := (\sum_{j=1}^{n-i+1} \xi_{n-i+1,j}^2)^{1/2}$

where ${\xi_{n-i+1,1},\ldots,\xi_{n-i+1,n-i+1}}$ are iid copies of ${N(0,1)_{\bf R}}$. As this distribution does not depend on the ${X_1,\ldots,X_{i-1}}$, we conclude that the law of ${\log |\det A_n|}$ is given by the sum of the logarithms of ${n}$ independent ${\chi}$-variables:

$\displaystyle \log |\det A_n| \equiv \sum_{j=1}^{n} \log \chi_j.$

A standard computation shows that each ${\chi_j^2}$ has mean ${j}$ and variance ${2j}$, and then a Taylor series (or Ito calculus) computation (using concentration of measure tools to control tails) shows that ${\log \chi_j}$ has mean ${\frac{1}{2} \log j - \frac{1}{2j} + O(1/j^{3/2})}$ and variance ${\frac{1}{2j}+O(1/j^{3/2})}$. As such, ${\log |\det A_n|}$ has mean ${\frac{1}{2} \log n! - \frac{1}{2} \log n + O(1)}$ and variance ${\frac{1}{2} \log n + O(1)}$. Applying a suitable version of the central limit theorem, one obtains the asymptotic law

$\displaystyle \frac{\log |\det A_n| - \frac{1}{2} \log n! + \frac{1}{2} \log n}{\sqrt{\frac{1}{2}\log n}} \rightarrow N(0,1)_{\bf R}, \ \ \ \ \ (3)$

where ${\rightarrow}$ denotes convergence in distribution. A bit more informally, we have

$\displaystyle |\det A_n| \approx n^{-1/2} \sqrt{n!} \exp( N( 0, \log n / 2 )_{\bf R} ) \ \ \ \ \ (4)$

when ${A_n}$ is a real gaussian matrix; thus, for instance, the median value of ${|\det A_n|}$ is ${n^{-1/2+o(1)} \sqrt{n!}}$. At first glance, this appears to conflict with the second moment bound ${\mathop{\bf E} |\det A_n|^2 = n!}$ of Turán mentioned earlier, but once one recalls that ${\exp(N(0,t)_{\bf R})}$ has a second moment of ${\exp(2t)}$, we see that the two facts are in fact perfectly consistent; the upper tail of the normal distribution in the exponent in (4) ends up dominating the second moment.
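
For what it is worth, (3) is easy to probe by simulation; the following sketch (my own, with arbitrary choices of matrix size and number of trials) compares the empirical mean and standard deviation of ${\log|\det A_n|}$ for real gaussian matrices against ${\frac{1}{2} \log n! - \frac{1}{2} \log n}$ and ${\sqrt{\frac{1}{2}\log n}}$; agreement is only up to the ${O(1)}$ corrections in the asymptotics.

```python
import numpy as np
from math import lgamma, log, sqrt

rng = np.random.default_rng(3)
n, trials = 300, 2000

logdets = []
for _ in range(trials):
    A = rng.standard_normal((n, n))           # real gaussian matrix with iid N(0,1) entries
    _sign, logabsdet = np.linalg.slogdet(A)   # log |det A_n|, computed stably
    logdets.append(logabsdet)
logdets = np.array(logdets)

mean_pred = 0.5 * lgamma(n + 1) - 0.5 * log(n)   # (1/2) log n! - (1/2) log n
std_pred = sqrt(0.5 * log(n))                    # sqrt((1/2) log n)
print("empirical mean:", logdets.mean(), " predicted:", mean_pred)
print("empirical std: ", logdets.std(), " predicted:", std_pred)
```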

It turns out that the central limit theorem (3) is valid for any real iid matrix with mean zero, variance one, and an exponential decay condition on the entries; this was first claimed by Girko, though the arguments in that paper appear to be incomplete. Another proof of this result, with more quantitative bounds on the convergence rate, has recently been obtained by Hoi Nguyen and Van Vu. The basic idea in these arguments is to express the logarithm of the product in (2) in terms of a martingale and apply the martingale central limit theorem.

If one works with complex gaussian random matrices instead of real gaussian random matrices, the above computations change slightly (one has to replace the real ${\chi}$ distribution with the complex ${\chi}$ distribution, in which the ${\xi_{i,j}}$ are distributed according to the complex gaussian ${N(0,1)_{\bf C}}$ instead of the real one). At the end of the day, one ends up with the law

$\displaystyle \frac{\log |\det A_n| - \frac{1}{2} \log n! + \frac{1}{4} \log n}{\sqrt{\frac{1}{4}\log n}} \rightarrow N(0,1)_{\bf R}, \ \ \ \ \ (5)$

$\displaystyle |\det A_n| \approx n^{-1/4} \sqrt{n!} \exp( N( 0, \log n / 4 )_{\bf R} ) \ \ \ \ \ (6)$

(but note that this new asymptotic is still consistent with Turán’s second moment calculation).

We can now turn to the results of our paper. Here, we replace the iid matrices ${A_n}$ by Wigner matrices ${M_n = (\zeta_{ij})_{1 \leq i,j \leq n}}$, which are defined similarly but are constrained to be Hermitian (or real symmetric), thus ${\zeta_{ij} = \overline{\zeta_{ji}}}$ for all ${i,j}$. Model examples here include the Gaussian Unitary Ensemble (GUE), in which ${\zeta_{ij} \equiv N(0,1)_{\bf C}}$ for ${1 \leq i < j \leq n}$ and ${\zeta_{ij} \equiv N(0,1)_{\bf R}}$ for ${1 \leq i=j \leq n}$, the Gaussian Orthogonal Ensemble (GOE), in which ${\zeta_{ij} \equiv N(0,1)_{\bf R}}$ for ${1 \leq i < j \leq n}$ and ${\zeta_{ij} \equiv N(0,2)_{\bf R}}$ for ${1 \leq i=j \leq n}$, and the symmetric Bernoulli ensemble, in which ${\zeta_{ij} \equiv \pm 1}$ for ${1 \leq i \leq j \leq n}$ (with probability ${1/2}$ of either sign). In all cases, the upper triangular entries of the matrix are assumed to be jointly independent. For a more precise definition of the Wigner matrix ensembles we are considering, see the introduction to our paper.

The determinants ${\det M_n}$ of these matrices still have a Leibniz expansion. However, in the Wigner case, the mean and variance of the ${I_\sigma}$ are slightly different, and what is worse, they are not all pairwise uncorrelated any more. For instance, the mean of ${I_\sigma}$ is still usually zero, but equals ${(-1)^{n/2}}$ in the exceptional case when ${\sigma}$ is a perfect matching (i.e. the union of exactly ${n/2}$ ${2}$-cycles, a possibility that can of course only happen when ${n}$ is even). As such, the mean ${\mathop{\bf E} \det M_n}$ still vanishes when ${n}$ is odd, but for even ${n}$ it is equal to

$\displaystyle (-1)^{n/2} \frac{n!}{(n/2)!2^{n/2}}$

(the fraction here simply being the number of perfect matchings on ${n}$ vertices). Using Stirling’s formula, one then computes that ${|\mathop{\bf E} \det M_n|}$ is comparable to ${n^{-1/4} \sqrt{n!}}$ when ${n}$ is large and even. The second moment calculation is more complicated (and uses facts about the distribution of cycles in random permutations, mentioned in this previous post), but one can compute that ${\mathop{\bf E} |\det M_n|^2}$ is comparable to ${n^{1/2} n!}$ for GUE and ${n^{3/2} n!}$ for GOE. (The discrepancy here comes from the fact that in the GOE case, ${I_\sigma}$ and ${I_\rho}$ can correlate when ${\rho}$ contains reversals of ${k}$-cycles of ${\sigma}$ for ${k \geq 3}$, but this does not happen in the GUE case.) For GUE, much more precise asymptotics for the moments of the determinant are known, starting from the work of Brezin and Hikami, though we do not need these more sophisticated computations here.
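
For a small sanity check of the mean formula (an exact enumeration of my own, just for illustration), one can run over all ${2^{10}}$ sign patterns of the diagonal and upper triangle of a ${4 \times 4}$ symmetric Bernoulli matrix; the average determinant should come out to exactly ${4!/(2! \cdot 2^{2}) = 3}$, the number of perfect matchings on ${4}$ vertices.

```python
import itertools
import numpy as np

n = 4
positions = [(i, j) for i in range(n) for j in range(i, n)]   # diagonal and upper triangle
dets = []
# enumerate all 2^10 = 1024 symmetric sign matrices exactly
for signs in itertools.product([-1.0, 1.0], repeat=len(positions)):
    M = np.zeros((n, n))
    for (i, j), s in zip(positions, signs):
        M[i, j] = M[j, i] = s
    dets.append(round(np.linalg.det(M)))
print("E det M_n =", np.mean(dets))   # expected: n!/((n/2)! 2^(n/2)) = 3 for n = 4
```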

Our main results are then as follows.

Theorem 1 Let ${M_n}$ be a Wigner matrix.

• If ${M_n}$ is drawn from GUE, then

$\displaystyle \frac{\log |\det M_n| - \frac{1}{2} \log n! + \frac{1}{4} \log n}{\sqrt{\frac{1}{2}\log n}} \rightarrow N(0,1)_{\bf R}.$

• If ${M_n}$ is drawn from GOE, then

$\displaystyle \frac{\log |\det M_n| - \frac{1}{2} \log n! + \frac{1}{4} \log n}{\sqrt{\log n}} \rightarrow N(0,1)_{\bf R}.$

• The previous two results also hold for more general Wigner matrices, assuming that the real and imaginary parts are independent, a finite moment condition is satisfied, and the entries match moments with those of GOE or GUE to fourth order. (See the paper for a more precise formulation of the result.)

Thus, we informally have

$\displaystyle |\det M_n| \approx n^{-1/4} \sqrt{n!} \exp( N( 0, \log n / 2 )_{\bf R} )$

when ${M_n}$ is drawn from GUE, or from another Wigner ensemble matching GUE to fourth order (and obeying some additional minor technical hypotheses); and

$\displaystyle |\det M_n| \approx n^{-1/4} \sqrt{n!} \exp( N( 0, \log n )_{\bf R} )$

when ${M_n}$ is drawn from GOE, or from another Wigner ensemble matching GOE to fourth order. Again, these asymptotic limiting distributions are consistent with the asymptotic behaviour for the second moments.

The extension from the GUE or GOE case to more general Wigner ensembles is a fairly routine application of the four moment theorem for Wigner matrices, although for various technical reasons we do not quite use the existing four moment theorems in the literature, but adapt them to the log determinant. The main idea is to express the log-determinant as an integral

$\displaystyle \log|\det M_n| = \frac{1}{2} n \log n - n \hbox{Im} \int_0^\infty s(\sqrt{-1}\eta)\ d\eta \ \ \ \ \ (7)$

of the Stieltjes transform

$\displaystyle s(z) := \frac{1}{n} \hbox{tr}( \frac{1}{\sqrt{n}} M_n - z )^{-1}$

of ${M_n}$. Strictly speaking, the integral in (7) is divergent at infinity (and also can be ill-behaved near zero), but this can be addressed by standard truncation and renormalisation arguments (combined with known facts about the least singular value of Wigner matrices), which we omit here. We then use a variant of the four moment theorem for the Stieltjes transform, as used by Erdos, Yau, and Yin (based on a previous four moment theorem for individual eigenvalues introduced by Van Vu and myself). The four moment theorem is proven by the now-standard Lindeberg exchange method, combined with the usual resolvent identities to control the behaviour of the resolvent (and hence the Stieltjes transform) with respect to modifying one or two entries, together with the eigenvector delocalisation property (which in turn arises from local semicircle laws) to control the error terms.

Somewhat surprisingly (to us, at least), it turned out that it was the first part of the theorem (namely, the verification of the limiting law for the invariant ensembles GUE and GOE) that was more difficult than the extension to the Wigner case. Even in an ensemble as highly symmetric as GUE, the rows are no longer independent, and the formula (2) is basically useless for getting any non-trivial control on the log determinant. There is an explicit formula for the joint distribution of the eigenvalues of GUE (or GOE), which does eventually give the cumulants of the log determinant, which then gives the required central limit theorem; but this is a lengthy computation, first performed by Delannay and Le Caer.

Following a suggestion of my colleague, Rowan Killip, we give an alternate proof of this central limit theorem in the GUE and GOE cases, by using a beautiful observation of Trotter, namely that the GUE or GOE ensemble can be conjugated into a tractable tridiagonal form. Let me state it just for GUE:

Proposition 2 (Tridiagonal form of GUE; Trotter) Let ${M'_n}$ be the random tridiagonal real symmetric matrix

$\displaystyle M'_n = \begin{pmatrix} a_1 & b_1 & 0 & \ldots & 0 & 0 \\ b_1 & a_2 & b_2 & \ldots & 0 & 0 \\ 0 & b_2 & a_3 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & a_{n-1} & b_{n-1} \\ 0 & 0 & 0 & \ldots & b_{n-1} & a_n \end{pmatrix}$

where the ${a_1,\ldots,a_n, b_1,\ldots,b_{n-1}}$ are jointly independent real random variables, with ${a_1,\ldots,a_n \equiv N(0,1)_{\bf R}}$ being standard real Gaussians, and each ${b_i}$ having a ${\chi}$-distribution:

$\displaystyle b_i = (\sum_{j=1}^i |z_{i,j}|^2)^{1/2}$

where ${z_{i,j} \equiv N(0,1)_{\bf C}}$ are iid complex gaussians. Let ${M_n}$ be drawn from GUE. Then the joint eigenvalue distribution of ${M_n}$ is identical to the joint eigenvalue distribution of ${M'_n}$.

Proof: Let ${M_n}$ be drawn from GUE. We can write

$\displaystyle M_n = \begin{pmatrix} M_{n-1} & X_n \\ X_n^* & a_n \end{pmatrix}$

where ${M_{n-1}}$ is drawn from the ${(n-1) \times (n-1)}$ GUE, ${a_n \equiv N(0,1)_{\bf R}}$, and ${X_n \in {\bf C}^{n-1}}$ is a random gaussian vector with all entries iid with distribution ${N(0,1)_{\bf C}}$. Furthermore, ${M_{n-1}, X_n, a_n}$ are jointly independent.

We now apply the tridiagonal matrix algorithm. Let ${b_{n-1} := |X_n|}$; then ${b_{n-1}}$ has the ${\chi}$-distribution indicated in the proposition. We then conjugate ${M_n}$ by a unitary matrix ${U}$ that preserves the final basis vector ${e_n}$, and maps ${X_n}$ to ${b_{n-1} e_{n-1}}$. Then we have

$\displaystyle U M_n U^* = \begin{pmatrix} \tilde M_{n-1} & b_{n-1} e_{n-1} \\ b_{n-1} e_{n-1}^* & a_n \end{pmatrix}$

where ${\tilde M_{n-1}}$ is conjugate to ${M_{n-1}}$. Now we make the crucial observation: because ${M_{n-1}}$ is distributed according to GUE (which is a unitarily invariant ensemble), and ${U}$ is a unitary matrix independent of ${M_{n-1}}$, ${\tilde M_{n-1}}$ is also distributed according to GUE, and remains independent of both ${b_{n-1}}$ and ${a_n}$.

We continue this process, expanding ${U M_n U^*}$ as

$\displaystyle \begin{pmatrix} M_{n-2} & X_{n-1} & 0 \\ X_{n-1}^* & a_{n-1} & b_{n-1} \\ 0 & b_{n-1} & a_n \end{pmatrix}.$

Applying a further unitary conjugation that fixes ${e_{n-1}, e_n}$ but maps ${X_{n-1}}$ to ${b_{n-2} e_{n-2}}$, we may replace ${X_{n-1}}$ by ${b_{n-2} e_{n-2}}$ while transforming ${M_{n-2}}$ to another GUE matrix ${\tilde M_{n-2}}$ independent of ${a_n, b_{n-1}, a_{n-1}, b_{n-2}}$. Iterating this process, we eventually obtain a coupling of ${M_n}$ to ${M'_n}$ by unitary conjugations, and the claim follows. $\Box$

The determinant of a tridiagonal matrix is not quite as simple as the determinant of a triangular matrix (in which it is simply the product of the diagonal entries), but it is pretty close: the determinant ${D_n}$ of the above matrix is given by solving the recursion

$\displaystyle D_i = a_i D_{i-1} - b_{i-1}^2 D_{i-2}$

with ${D_0=1}$ and ${D_{-1} = 0}$. Thus, instead of the product of a sequence of independent scalar ${\chi}$ distributions as in the gaussian matrix case, the determinant of GUE ends up being controlled by the product of a sequence of independent ${2\times 2}$ matrices whose entries are given by gaussians and ${\chi}$ distributions. In this case, one cannot immediately take logarithms and hope to get something for which the martingale central limit theorem can be applied, but some ad hoc manipulation of these ${2 \times 2}$ matrix products eventually does make this strategy work. (Roughly speaking, one has to work with the logarithm of the Frobenius norm of the matrix first.)

Let ${n}$ be a natural number, and let ${\sigma: \{1,\ldots,n\} \rightarrow \{1,\ldots,n\}}$ be a permutation of ${\{1,\ldots,n\}}$, drawn uniformly at random. Using the cycle decomposition, one can view ${\sigma}$ as the disjoint union of cycles of varying lengths (from ${1}$ to ${n}$). For each ${1 \leq k \leq n}$, let ${C_k}$ denote the number of cycles of ${\sigma}$ of length ${k}$; thus the ${C_k}$ are natural number-valued random variables with the constraint

$\displaystyle \sum_{k=1}^n k C_k = n. \ \ \ \ \ (1)$

We let ${C := \sum_{k=1}^n C_k}$ be the number of cycles (of arbitrary length); this is another natural number-valued random variable, of size at most ${n}$.

I recently had need to understand the distribution of the random variables ${C_k}$ and ${C}$. As it turns out this is an extremely classical subject, but as an exercise I worked out what I needed using a quite tedious computation involving generating functions that I will not reproduce here. But the resulting identities I got were so nice that they strongly suggested the existence of elementary bijective (or “double counting”) proofs, in which the identities are proven with a minimum of computation, by interpreting each side of the identity as the cardinality (or probability) of the same quantity (or event), viewed in two different ways. I then found these bijective proofs, which are rather cute; again, these are all extremely classical (closely related, for instance, to Stirling numbers of the first kind), but I thought some readers might be interested in trying to find these proofs themselves as an exercise (and I also wanted a place to write the identities down so I could retrieve them later), so I have listed the identities I found below.

1. For any ${1 \leq k \leq n}$, one has ${{\bf E} C_k = \frac{1}{k}}$. In particular, ${{\bf E} C = 1 + \frac{1}{2} + \ldots + \frac{1}{n} = \log n + O(1)}$.
2. More generally, for any ${1 \leq k \leq n}$ and ${j \geq 1}$ with ${jk \leq n}$, one has ${{\bf E} \binom{C_k}{j} = \frac{1}{k^j j!}}$.
3. More generally still, for any ${1 \leq k_1 < \ldots < k_r \leq n}$ and ${j_1,\ldots,j_r \geq 1}$ with ${\sum_{i=1}^r j_i k_i \leq n}$, one has

$\displaystyle {\bf E} \prod_{i=1}^r \binom{C_{k_i}}{j_i} = \prod_{i=1}^r \frac{1}{k_i^{j_i} j_i!}.$

4. In particular, we have Cauchy’s formula: if ${\sum_{k=1}^n j_k k = n}$, then the probability that ${C_k = j_k}$ for all ${k=1,\ldots,n}$ is precisely ${\prod_{k=1}^n \frac{1}{k^{j_k} j_k!}}$. (This in particular leads to a reasonably tractable formula for the joint generating function of the ${C_k}$, which is what I initially used to compute everything that I needed, before finding the slicker bijective proofs.)
5. For fixed ${k}$, ${C_k}$ converges in distribution as ${n \rightarrow \infty}$ to the Poisson distribution of intensity ${\frac{1}{k}}$.
6. More generally, for fixed ${1 \leq k_1 < \ldots < k_r}$, ${C_{k_1},\ldots,C_{k_r}}$ converge in joint distribution to ${r}$ independent Poisson distributions of intensity ${\frac{1}{k_1},\ldots,\frac{1}{k_r}}$ respectively. (A more precise version of this claim can be found in this paper of Arratia and Tavaré.)
7. One has ${{\bf E} 2^C = n+1}$.
8. More generally, one has ${{\bf E} m^C = \binom{n+m-1}{n}}$ for all natural numbers ${m}$.
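
A quick Monte Carlo check of a few of these identities (my own sketch; the choice ${n=8}$ and the number of samples are arbitrary): estimate ${{\bf E} C_k}$, ${{\bf E} C}$ and ${{\bf E} 2^C}$ over uniformly random permutations and compare with ${\frac{1}{k}}$, ${1 + \frac{1}{2} + \ldots + \frac{1}{n}}$ and ${n+1}$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 8, 100_000

def cycle_lengths(perm):
    # cycle lengths of a permutation of {0, ..., n-1} given as an array
    seen, lengths = [False] * len(perm), []
    for start in range(len(perm)):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            lengths.append(length)
    return lengths

counts_k = np.zeros(n)              # running totals of C_k for k = 1, ..., n
total_C, total_2C = 0.0, 0.0
for _ in range(trials):
    lengths = cycle_lengths(rng.permutation(n))
    for ell in lengths:
        counts_k[ell - 1] += 1
    total_C += len(lengths)
    total_2C += 2.0 ** len(lengths)

print("E C_k:", np.round(counts_k / trials, 3), " vs 1/k:", np.round(1 / np.arange(1, n + 1), 3))
print("E C  :", total_C / trials, " vs H_n =", sum(1 / k for k in range(1, n + 1)))
print("E 2^C:", total_2C / trials, " vs n+1 =", n + 1)
```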

Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Universality of eigenvectors”, submitted to Random Matrices: Theory and Applications. This paper concerns an extension of our four moment theorem for eigenvalues. Roughly speaking, that four moment theorem asserts (under mild decay conditions on the coefficients of the random matrix) that the fine-scale structure of individual eigenvalues of a Wigner random matrix depends only on the first four moments of each of the entries.

In this paper, we extend this result from eigenvalues to eigenvectors, and specifically to the coefficients of, say, the ${i^{th}}$ eigenvector ${u_i(M_n)}$ of a Wigner random matrix ${M_n}$. Roughly speaking, the main result is that the distribution of these coefficients also only depends on the first four moments of each of the entries. In particular, as the distributions of the coefficients of eigenvectors of invariant ensembles such as GOE or GUE are known to be asymptotically gaussian real (in the GOE case) or gaussian complex (in the GUE case), the same asymptotic automatically holds for Wigner matrices whose coefficients match GOE or GUE to fourth order.

(A technical point here: strictly speaking, the eigenvectors ${u_i(M_n)}$ are only determined up to a phase, even when the eigenvalues are simple. So, to phrase the question properly, one has to perform some sort of normalisation, for instance by working with the coefficients of the spectral projection operators ${P_i(M_n) := u_i(M_n) u_i(M_n)^*}$ instead of the eigenvectors, or rotating each eigenvector by a random phase, or by fixing the first component of each eigenvector to be positive real. This is a fairly minor technical issue here, though, and will not be discussed further.)

This theorem strengthens a four moment theorem for eigenvectors recently established by Knowles and Yin (by a somewhat different method), in that the hypotheses are weaker (no level repulsion assumption is required, and the matrix entries only need to obey a finite moment condition rather than an exponential decay condition), and the conclusion is slightly stronger (less regularity is needed on the test function, and one can handle the joint distribution of polynomially many coefficients, rather than boundedly many coefficients). On the other hand, the Knowles-Yin paper can also handle generalised Wigner ensembles in which the variances of the entries are allowed to fluctuate somewhat.

The method used here is a variation of that in our original paper (incorporating the subsequent improvements to extend the four moment theorem from the bulk to the edge, and to replace exponential decay by a finite moment condition). That method was ultimately based on the observation that if one swapped a single entry (and its adjoint) in a Wigner random matrix, then an individual eigenvalue ${\lambda_i(M_n)}$ would not fluctuate much as a consequence (as long as one had already truncated away the event of an unexpectedly small eigenvalue gap). The same analysis shows that the projection matrices ${P_i(M_n)}$ obey the same stability property.

As an application of the eigenvalue four moment theorem, we establish a four moment theorem for the coefficients of resolvent matrices ${(\frac{1}{\sqrt{n}} M_n - zI)^{-1}}$, even when ${z}$ is on the real axis (though in that case we need to make a level repulsion hypothesis, which has been already verified in many important special cases and is likely to be true in general). This improves on an earlier four moment theorem for resolvents of Erdos, Yau, and Yin, which required ${z}$ to stay some distance away from the real axis (specifically, that ${\hbox{Im}(z) \geq n^{-1-c}}$ for some small ${c>0}$).

Let ${G = (G,+)}$ be an abelian countable discrete group. A measure-preserving ${G}$-system ${X = (X, {\mathcal X}, \mu, (T_g)_{g \in G})}$ (or ${G}$-system for short) is a probability space ${(X, {\mathcal X}, \mu)}$, equipped with a measure-preserving action ${T_g: X \rightarrow X}$ of the group ${G}$, thus

$\displaystyle \mu( T_g(E) ) = \mu(E)$

for all ${E \in {\mathcal X}}$ and ${g \in G}$, and

$\displaystyle T_g T_h = T_{g+h}$

for all ${g, h \in G}$, with ${T_0}$ equal to the identity map. Classically, ergodic theory has focused on the cyclic case ${G={\bf Z}}$ (in which the ${T_g}$ are iterates of a single map ${T = T_1}$, with elements of ${G}$ being interpreted as a time parameter), but one can certainly consider actions of other groups ${G}$ also (including continuous or non-abelian groups).

A ${G}$-system is said to be strongly ${2}$-mixing, or strongly mixing for short, if one has

$\displaystyle \lim_{g \rightarrow \infty} \mu( A \cap T_g B ) = \mu(A) \mu(B)$

for all ${A, B \in {\mathcal X}}$, where the convergence is with respect to the one-point compactification of ${G}$ (thus, for every ${\epsilon > 0}$, there exists a compact (hence finite) subset ${K}$ of ${G}$ such that ${|\mu(A \cap T_g B) - \mu(A)\mu(B)| \leq \epsilon}$ for all ${g \not \in K}$).

Similarly, we say that a ${G}$-system is strongly ${3}$-mixing if one has

$\displaystyle \lim_{g,h,h-g \rightarrow \infty} \mu( A \cap T_g B \cap T_h C ) = \mu(A) \mu(B) \mu(C)$

for all ${A,B,C \in {\mathcal X}}$, thus for every ${\epsilon > 0}$, there exists a finite subset ${K}$ of ${G}$ such that

$\displaystyle |\mu( A \cap T_g B \cap T_h C ) - \mu(A) \mu(B) \mu(C)| \leq \epsilon$

whenever ${g, h, h-g}$ all lie outside ${K}$.

It is obvious that a strongly ${3}$-mixing system is necessarily strongly ${2}$-mixing. In the case of ${{\bf Z}}$-systems, it has been an open problem for some time, due to Rohlin, whether the converse is true:

Problem 1 (Rohlin’s problem) Is every strongly mixing ${{\bf Z}}$-system necessarily strongly ${3}$-mixing?

This is a surprisingly difficult problem. In the positive direction, a routine application of the Cauchy-Schwarz inequality (via van der Corput’s inequality) shows that every strongly mixing system is weakly ${3}$-mixing, which roughly speaking means that ${\mu(A \cap T_g B \cap T_h C)}$ converges to ${\mu(A) \mu(B) \mu(C)}$ for most ${g, h \in {\bf Z}}$. Indeed, every weakly mixing system is in fact weakly mixing of all orders; see for instance this blog post of Carlos Matheus, or these lecture notes of myself. So the problem is to exclude the possibility of correlation between ${A}$, ${T_g B}$, and ${T_h C}$ for a small but non-trivial number of pairs ${(g,h)}$.

It is also known that the answer to Rohlin’s problem is affirmative for rank one transformations (a result of Kalikow) and for shifts with purely singular continuous spectrum (a result of Host; note that strongly mixing systems cannot have any non-trivial point spectrum). Indeed, any counterexample to the problem, if it exists, is likely to be highly pathological.

In the other direction, Rohlin’s problem is known to have a negative answer for ${{\bf Z}^2}$-systems, by a well-known counterexample of Ledrappier which can be described as follows. One can view a ${{\bf Z}^2}$-system as being essentially equivalent to a stationary process ${(x_{n,m})_{(n,m) \in {\bf Z}^2}}$ of random variables ${x_{n,m}}$ in some range space ${\Omega}$ indexed by ${{\bf Z}^2}$, with ${X}$ being ${\Omega^{{\bf Z}^2}}$ with the obvious shift map

$\displaystyle T_{(g,h)} (x_{n,m})_{(n,m) \in {\bf Z}^2} := (x_{n-g,m-h})_{(n,m) \in {\bf Z}^2}.$

In Ledrappier’s example, the ${x_{n,m}}$ take values in the finite field ${{\bf F}_2}$ of two elements, and are selected uniformly at random subject to the “Pascal’s triangle” linear constraints

$\displaystyle x_{n,m} = x_{n-1,m} + x_{n,m-1}.$

A routine application of the Kolmogorov extension theorem allows one to build such a process. The point is that due to the properties of Pascal’s triangle modulo ${2}$ (known as Sierpinski’s triangle), one has

$\displaystyle x_{n,m} = x_{n-2^k,m} + x_{n,m-2^k}$

for all powers of two ${2^k}$. This is enough to destroy strong ${3}$-mixing, because it shows a strong correlation between ${x}$, ${T_{(2^k,0)} x}$, and ${T_{(0,2^k)} x}$ for arbitrarily large ${k}$ and randomly chosen ${x \in X}$. On the other hand, one can still show that ${x}$ and ${T_g x}$ are asymptotically uncorrelated for large ${g}$, giving strong ${2}$-mixing. Unfortunately, there are significant obstructions to converting Ledrappier’s example from a ${{\bf Z}^2}$-system to a ${{\bf Z}}$-system, as pointed out by de la Rue.
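
To see the failure of strong ${3}$-mixing concretely, here is a small simulation of Ledrappier's example (my own sketch; the finite window, the convention of propagating the constraint downward from an iid top row, and the choice of shifts tested are all arbitrary). It confirms that the three-term relation above holds identically on the window, while a generic pair correlation shows no visible bias.

```python
import numpy as np

rng = np.random.default_rng(5)
width, depth = 2000, 600

# y[0] is an iid uniform F_2 row; the constraint x_{n,m} = x_{n-1,m} + x_{n,m-1} determines
# each row *below* a given row: y[d+1][j] = y[d][j] + y[d][j-1] (mod 2).
# Entries y[d][j] with j < d are contaminated by the boundary and are ignored below.
y = np.zeros((depth, width), dtype=np.uint8)
y[0] = rng.integers(0, 2, size=width)
for d in range(depth - 1):
    y[d + 1, 1:] = (y[d, 1:] + y[d, :-1]) % 2

# the Sierpinski/Pascal identity: y[d][j] + y[d][j - 2^k] + y[d + 2^k][j] = 0 (mod 2)
d = 10
for k in range(6):
    step = 2 ** k
    j = np.arange(depth + 10, width)   # columns well inside the determined region
    bad = np.sum((y[d, j] + y[d, j - step] + y[d + step, j]) % 2)
    print(f"k = {k}: violations of the three-term relation: {bad}")

# by contrast, a generic shift shows no visible correlation
j = np.arange(depth + 10, width)
print("agreement rate between a row and the row 137 steps below:",
      np.mean(y[d, j] == y[d + 137, j]))   # should be near 1/2
```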

In this post, I would like to record a “finite field” variant of Ledrappier’s construction, in which ${{\bf Z}^2}$ is replaced by the function field ring ${{\bf F}_3[t]}$, which is a “dyadic” (or more precisely, “triadic”) model for the integers (cf. this earlier blog post of mine). In other words:

Theorem 2 There exists an ${{\bf F}_3[t]}$-system that is strongly ${2}$-mixing but not strongly ${3}$-mixing.

The idea is much the same as that of Ledrappier; one builds a stationary ${{\bf F}_3[t]}$-process ${(x_n)_{n \in {\bf F}_3[t]}}$ in which ${x_n \in {\bf F}_3}$ are chosen uniformly at random subject to the constraints

$\displaystyle x_n + x_{n + t^k} + x_{n + 2t^k} = 0 \ \ \ \ \ (1)$

for all ${n \in {\bf F}_3[t]}$ and all ${k \geq 0}$. Again, this system is manifestly not strongly ${3}$-mixing, but can be shown to be strongly ${2}$-mixing; I give details below the fold.

As I discussed in this previous post, in many cases the dyadic model serves as a good guide for the non-dyadic model. However, in this case there is a curious rigidity phenomenon that seems to prevent Ledrappier-type examples from being transferable to the one-dimensional non-dyadic setting; once one restores the Archimedean nature of the underlying group, the constraints (1) not only reinforce each other strongly, but also force so much linearity on the system that one loses the strong mixing property.

Van Vu and I have just uploaded to the arXiv our paper “The Wigner-Dyson-Mehta bulk universality conjecture for Wigner matrices“, submitted to the Proceedings of the National Academy of Sciences. This short note concerns the convergence of the ${k}$-point correlation functions of Wigner matrices in the bulk to the Dyson ${k}$-point functions, a statement conjectured by Wigner, Dyson, and Mehta. Thanks to the results of Erdös, Peche, Ramirez, Schlein, Vu, Yau, and myself, this conjecture has now been established for all Wigner matrices (assuming a finite moment condition on the entries), but only if one uses a quite weak notion of convergence, namely averaged vague convergence in which one averages in the energy parameter ${u}$. The main purpose of this note is to observe that by combining together existing results in the literature, one can improve the convergence to vague convergence (which is the natural notion of convergence in the discrete setting); and furthermore, if one assumes some regularity and decay conditions on the coefficient distribution, one can improve the convergence further to local ${L^1}$ convergence.

More precisely, let ${M_n}$ be an ${n \times n}$ Wigner matrix – a random Hermitian matrix whose off-diagonal elements ${\frac{1}{\sqrt{n}} \zeta_{ij}}$ for ${1 \leq i < j \leq n}$ are iid with mean zero and variance ${1/n}$ (and whose diagonal elements also obey similar hypotheses, which we omit here). For simplicity, we also assume that the real and imaginary parts of ${\zeta_{ij}}$ are iid (as is the case for instance for the Gaussian Unitary Ensemble (GUE)). The eigenvalues ${\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)}$ of such a matrix are known to be asymptotically distributed according to the Wigner semicircular distribution ${\rho_{sc}(u)\ du}$, where

$\displaystyle \rho_{sc}(u) := \frac{1}{2\pi} (4-u^2)_+^{1/2}.$

In particular, this suggests that at any energy level ${u}$ in the bulk ${(-2,2)}$ of the spectrum, the average eigenvalue spacing should be about ${\frac{1}{n \rho_{sc}(u)}}$. It is then natural to introduce the normalised ${k}$-point correlation function

$\displaystyle \rho_{n,u}^{(k)}(t_1,\ldots,t_k) := \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^k} {\bf P} E_\epsilon$

for any distinct reals ${t_1,\ldots,t_k}$ and ${k \geq 1}$, where ${E_\epsilon}$ is the event that there is an eigenvalue in each of the intervals ${[u + \frac{t_i}{n \rho_{sc}(u)}, u + \frac{t_i+\epsilon}{n \rho_{sc}(u)}]}$ for each ${1 \leq i \leq k}$. (This definition is valid when the Wigner ensemble is continuous; for discrete ensembles, one can define ${\rho_{n,u}^{(k)}}$ instead in a distributional sense.)

The Wigner-Dyson-Mehta conjecture asserts that ${\rho_{n,u}^{(k)}}$ converges (in various senses) as ${n \rightarrow \infty}$ to the Dyson ${k}$-point function

$\displaystyle \rho_{Dyson}^{(k)}(t_1,\ldots,t_k) := \hbox{det}( K( t_i,t_j) )_{1 \leq i,j \leq k}$

where ${K(t,t'):=\frac{\sin \pi(t-t')}{\pi(t-t')}}$ is the Dyson sine kernel. This conjecture was verified first for the GUE (with a quite strong notion of convergence, namely local uniform convergence) by Dyson, using an explicit formula for ${\rho_{n,u}^{(k)}}$ in the GUE case due to Gaudin and Mehta. Later results of Johansson, Erdos-Ramirez-Schlein-Yau, Erdos-Peche-Ramirez-Schlein-Yau, and Vu and myself, extended these results to increasingly wider ranges of Wigner matrices, but in the context of either weak convergence (which means that

$\displaystyle \int_{{\bf R}^k} \rho_{n,u}^{(k)}(t) F(t)\ dt \rightarrow \int_{{\bf R}^k} \rho_{Dyson}^{(k)}(t) F(t)\ dt \ \ \ \ \ (1)$

for any ${L^\infty}$, compactly supported function ${F}$), or the slightly weaker notion of vague convergence (which is the same as weak convergence, except that the function ${F}$ is also required to be continuous).

In a joint paper of Erdos, Ramirez, Schlein, Vu, Yau, and myself, we established the Wigner-Dyson-Mehta conjecture for all Wigner matrices (assuming only an exponential decay condition on the entries), but using a quite weak notion of convergence, namely averaged vague convergence, which allows for averaging in the energy parameter. Specifically, we showed that

$\displaystyle \lim_{b \rightarrow 0} \lim_{n \rightarrow \infty} \frac{1}{2b} \int_{u-b}^{u+b} \int_{{\bf R}^k} \rho_{n,u'}^{(k)}(t) F(t)\ dt\ du' = \int_{{\bf R}^k} \rho_{Dyson}^{(k)}(t) F(t)\ dt.$

Subsequently, Erdos, Schlein, and Yau introduced the powerful local relaxation flow method, which achieved a simpler proof of the same result which also generalised to other ensembles beyond the Wigner case. However, for technical reasons, this method was restricted to establishing averaged vague convergence only.

In the current paper, we show that by combining the argument of Erdos, Ramirez, Schlein, Vu, Yau, and myself with some more recent technical results, namely the relaxation of the exponential decay condition in the four moment theorem to a finite moment condition (established by Vu and myself) and a strong eigenvalue localisation bound of Erdos, Yau, and Yin, one can upgrade the averaged vague convergence to vague convergence, and handle all Wigner matrices obeying a finite moment condition. Vague convergence is the most natural notion of convergence for discrete random matrix ensembles; for such ensembles, the correlation function is a discrete measure, and so one does not expect convergence to a continuous limit in any stronger sense than the vague sense. Also, by carefully inspecting the earlier argument of Erdos, Peche, Ramirez, Schlein, and Yau, we were able to establish convergence in the stronger local ${L^1}$ sense once one assumed some regularity and positivity condition on the underlying coefficient distribution. These are somewhat modest and technical improvements over previous work on the Wigner-Dyson-Mehta conjecture, but they help to clarify and organise the profusion of results in this area, which are now reaching a fairly definitive form.

It may well be possible to go beyond local ${L^1}$ convergence in the case of smooth ensembles, for instance establishing local uniform convergence; this was recently accomplished in the ${k=1}$ case by Maltsev and Schlein. Indeed one may optimistically expect to even have convergence in the local smooth topology, which would basically be the strongest convergence one could hope for.

Last week I gave a talk at the Trinity Mathematical Society at Trinity College, Cambridge UK.  As the audience was primarily undergraduate, I gave a fairly non-technical talk on the universality phenomenon, based on this blog article of mine on the same topic.  It was a quite light and informal affair, and this is reflected in the talk slides (which, in particular, play up quite strongly the role of former students and Fellows of Trinity College in this story).   There was some interest in making these slides available publicly, so I have placed them on this site here.  (Note: copyright for the images in these slides has not been secured.)

I’ve just uploaded to the arXiv my paper “Outliers in the spectrum of iid matrices with bounded rank perturbations“, submitted to Probability Theory and Related Fields. This paper is concerned with outliers to the circular law for iid random matrices. Recall that if ${X_n}$ is an ${n \times n}$ matrix whose entries are iid complex random variables with mean zero and variance one, then the ${n}$ complex eigenvalues of the normalised matrix ${\frac{1}{\sqrt{n}} X_n}$ will almost surely be distributed according to the circular law distribution ${\frac{1}{\pi} 1_{|z| \leq 1} d^2 z}$ in the limit ${n \rightarrow \infty}$. (See these lecture notes for further discussion of this law.)

The circular law is also stable under bounded rank perturbations: if ${C_n}$ is a deterministic rank ${O(1)}$ matrix of polynomial size (i.e. of operator norm ${O(n^{O(1)})}$), then the circular law also holds for ${\frac{1}{\sqrt{n}} X_n + C_n}$ (this is proven in a paper of myself, Van Vu, and Manjunath Krishnapur). In particular, the bulk of the eigenvalues (i.e. ${(1-o(1)) n}$ of the ${n}$ eigenvalues) will lie inside the unit disk ${\{ z \in {\bf C}: |z| \leq 1 \}}$.

However, this leaves open the possibility for one or more outlier eigenvalues that lie significantly outside the unit disk; the arguments in the paper cited above give some upper bound on the number of such eigenvalues (of the form ${O(n^{1-c})}$ for some absolute constant ${c>0}$) but do not exclude them entirely. And indeed, numerical data shows that such outliers can exist for certain bounded rank perturbations.

In this paper, some results are given as to when outliers exist, and how they are distributed. The easiest case is of course when there is no bounded rank perturbation: ${C_n=0}$. In that case, an old result of Bai and Yin and of Geman shows that the spectral radius of ${\frac{1}{\sqrt{n}} X_n}$ is almost surely ${1+o(1)}$, thus all eigenvalues will be contained in a ${o(1)}$ neighbourhood of the unit disk, and so there are no significant outliers. The proof is based on the moment method.

Now we consider a bounded rank perturbation ${C_n}$ which is nonzero, but which has a bounded operator norm: ${\|C_n\|_{op} = O(1)}$. In this case, it turns out that the matrix ${\frac{1}{\sqrt{n}} X_n + C_n}$ will have outliers if the deterministic component ${C_n}$ has outliers. More specifically (and under the technical hypothesis that the entries of ${X_n}$ have bounded fourth moment), if ${\lambda}$ is an eigenvalue of ${C_n}$ with ${|\lambda| > 1}$, then (for ${n}$ large enough), ${\frac{1}{\sqrt{n}} X_n + C_n}$ will almost surely have an eigenvalue at ${\lambda+o(1)}$, and furthermore these will be the only outlier eigenvalues of ${\frac{1}{\sqrt{n}} X_n + C_n}$.

Thus, for instance, adding a bounded nilpotent low rank matrix to ${\frac{1}{\sqrt{n}} X_n}$ will not create any outliers, because the nilpotent matrix only has eigenvalues at zero. On the other hand, adding a bounded Hermitian low rank matrix will create outliers as soon as this matrix has an operator norm greater than ${1}$.
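
A quick numerical illustration of this dichotomy (a sketch of my own; the matrix size, the threshold, and the particular rank one perturbations are arbitrary): a Hermitian rank one perturbation with an eigenvalue at ${2}$ produces a single outlier near ${2}$, while a nilpotent rank one perturbation of the same operator norm produces none.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 800
X = rng.standard_normal((n, n)) / np.sqrt(n)   # normalised iid matrix; circular law on |z| <= 1

e1 = np.zeros(n); e1[0] = 1.0
e2 = np.zeros(n); e2[1] = 1.0
for name, C in [("Hermitian (eigenvalue 2)", 2.0 * np.outer(e1, e1)),   # rank one, eigenvalue 2
                ("nilpotent (norm 2)", 2.0 * np.outer(e1, e2))]:        # rank one, only eigenvalue 0
    eigs = np.linalg.eigvals(X + C)
    outliers = eigs[np.abs(eigs) > 1.2]
    print(name, "-> outliers beyond radius 1.2:", np.round(outliers, 3))
```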

When I first thought about this problem (which was communicated to me by Larry Abbott), I believed that it was quite difficult, because I knew that the eigenvalues of non-Hermitian matrices were quite unstable with respect to general perturbations (as discussed in this previous blog post), and that there were no interlacing inequalities in this case to control bounded rank perturbations (as discussed in this post). However, as it turns out I had arrived at the wrong conclusion, especially in the exterior of the unit disk in which the resolvent is actually well controlled and so there is no pseudospectrum present to cause instability. This was pointed out to me by Alice Guionnet at an AIM workshop last week, after I had posed the above question during an open problems session. Furthermore, at the same workshop, Percy Deift emphasised the point that the basic determinantal identity

$\displaystyle \det(1 + AB) = \det(1 + BA) \ \ \ \ \ (1)$

for ${n \times k}$ matrices ${A}$ and ${k \times n}$ matrices ${B}$ was a particularly useful identity in random matrix theory, as it converted problems about large (${n \times n}$) matrices into problems about small (${k \times k}$) matrices, which was particularly convenient in the regime when ${n \rightarrow \infty}$ and ${k}$ was fixed. (Percy was speaking in the context of invariant ensembles, but the point is in fact more general than this.)

From this, it turned out to be a relatively simple matter to transform what appeared to be an intractable ${n \times n}$ matrix problem into quite a well-behaved ${k \times k}$ matrix problem for bounded ${k}$. Specifically, suppose that ${C_n}$ had rank ${k}$, so that one can factor ${C_n = A_n B_n}$ for some (deterministic) ${n \times k}$ matrix ${A_n}$ and ${k \times n}$ matrix ${B_n}$. To find an eigenvalue ${z}$ of ${\frac{1}{\sqrt{n}} X_n + C_n}$, one has to solve the characteristic polynomial equation

$\displaystyle \det( \frac{1}{\sqrt{n}} X_n + A_n B_n - z ) = 0.$

This is an ${n \times n}$ determinantal equation, which looks difficult to control analytically. But we can manipulate it using (1). If we make the assumption that ${z}$ is outside the spectrum of ${\frac{1}{\sqrt{n}} X_n}$ (which we can do as long as ${z}$ is well away from the unit disk, as the unperturbed matrix ${\frac{1}{\sqrt{n}} X_n}$ has no outliers), we can divide by ${\frac{1}{\sqrt{n}} X_n - z}$ to arrive at

$\displaystyle \det( 1 + (\frac{1}{\sqrt{n}} X_n-z)^{-1} A_n B_n ) = 0.$

Now we apply the crucial identity (1) to rearrange this as

$\displaystyle \det( 1 + B_n (\frac{1}{\sqrt{n}} X_n-z)^{-1} A_n ) = 0.$

The crucial point is that this is now an equation involving only a ${k \times k}$ determinant, rather than an ${n \times n}$ one, and is thus much easier to solve. The situation is particularly simple for rank one perturbations

$\displaystyle \frac{1}{\sqrt{n}} X_n + u_n v_n^*$

in which case the eigenvalue equation is now just a scalar equation

$\displaystyle 1 + \langle (\frac{1}{\sqrt{n}} X_n-z)^{-1} u_n, v_n \rangle = 0$

that involves what is basically a single coefficient of the resolvent ${(\frac{1}{\sqrt{n}} X_n-z)^{-1}}$. (It is also an instructive exercise to derive this eigenvalue equation directly, rather than through (1).) There is by now a very well-developed theory for how to control such coefficients (particularly for ${z}$ in the exterior of the unit disk, in which case such basic tools as Neumann series work just fine); in particular, one has precise enough control on these coefficients to obtain the result on outliers mentioned above.
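
In the rank one case one can also check the scalar eigenvalue equation numerically (a sketch of my own, with arbitrary parameter choices): locate the outlier of ${\frac{1}{\sqrt{n}} X_n + u_n v_n^*}$ and confirm that ${1 + \langle (\frac{1}{\sqrt{n}} X_n-z)^{-1} u_n, v_n \rangle}$ essentially vanishes there, with the outlier itself sitting near ${v_n^* u_n}$.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 600
W = rng.standard_normal((n, n)) / np.sqrt(n)   # the normalised iid matrix (1/sqrt(n)) X_n

# non-Hermitian rank one perturbation u v^*; its only nonzero eigenvalue is v^* u = 1.5
u = np.zeros(n); u[0] = 1.5
v = np.zeros(n); v[0] = 1.0; v[1] = 1.0

eigs = np.linalg.eigvals(W + np.outer(u, v.conj()))
z = eigs[np.argmax(np.abs(eigs))]              # the outlier: the eigenvalue of largest modulus
print("outlier location:", np.round(z, 4), "(expected near v^* u = 1.5)")

# the n x n eigenvalue problem collapses to the scalar equation 1 + <(W - z)^{-1} u, v> = 0
scalar = 1.0 + v.conj() @ np.linalg.solve(W - z * np.eye(n), u)
print("1 + <(W - z)^{-1} u, v> at the outlier:", scalar, "(should be essentially zero)")
```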

The same method can handle some other bounded rank perturbations. One basic example comes from looking at iid matrices with a non-zero mean ${\mu}$ and variance ${1}$; this can be modeled by ${\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \phi_n^*}$ where ${\phi_n}$ is the unit vector ${\phi_n := \frac{1}{\sqrt{n}} (1,\ldots,1)^*}$. Here, the bounded rank perturbation ${\mu \sqrt{n} \phi_n \phi_n^*}$ has a large operator norm (equal to ${|\mu| \sqrt{n}}$), so the previous result does not directly apply. Nevertheless, the self-adjoint nature of the perturbation has a stabilising effect, and I was able to show that there is still only one outlier, and that it is at the expected location of ${\mu \sqrt{n}+o(1)}$.

If one moves away from the case of self-adjoint perturbations, though, the situation changes. Let us now consider a matrix of the form ${\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \psi_n^*}$, where ${\psi_n}$ is a randomised version of ${\phi_n}$, e.g. ${\psi_n := \frac{1}{\sqrt{n}} (\pm 1, \ldots, \pm 1)^*}$, where the ${\pm 1}$ are iid Bernoulli signs; such models were proposed recently by Rajan and Abbott as a model for neural networks in which some nodes are excitatory (and give columns with positive mean) and some are inhibitory (leading to columns with negative mean). Despite the superficial similarity with the previous example, the outlier behaviour is now quite different. Instead of having one extremely large outlier (of size ${\sim\sqrt{n}}$) at an essentially deterministic location, we now have a number of eigenvalues of size ${O(1)}$, scattered according to a random process. Indeed, (in the case when the entries of ${X_n}$ were real and bounded) I was able to show that the outlier point process converged (in the sense of converging ${k}$-point correlation functions) to the zeroes of a random Laurent series

$\displaystyle g(z) = 1 - \mu \sum_{j=0}^\infty \frac{g_j}{z^{j+1}}$

where ${g_0,g_1,g_2,\ldots \equiv N(0,1)}$ are iid real Gaussians. This is basically because the coefficients of the resolvent ${(\frac{1}{\sqrt{n}} X_n - zI)^{-1}}$ have a Neumann series whose coefficients enjoy a central limit theorem.

On the other hand, as already observed numerically (and rigorously, in the gaussian case) by Rajan and Abbott, if one projects such matrices to have row sum zero, then the outliers all disappear. This can be explained by another appeal to (1); this projection amounts to right-multiplying ${\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \psi_n^*}$ by the projection matrix ${P}$ to the zero-sum vectors. But by (1), the non-zero eigenvalues of the resulting matrix ${(\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \psi_n^*)P}$ are the same as those for ${P (\frac{1}{\sqrt{n}} X_n + \mu \sqrt{n} \phi_n \psi_n^*)}$. Since ${P}$ annihilates ${\phi_n}$, we thus see that in this case the bounded rank perturbation plays no role, and the question reduces to obtaining a circular law with no outliers for ${P \frac{1}{\sqrt{n}} X_n}$. As it turns out, this can be done by invoking the machinery of Van Vu and myself that we used to prove the circular law for various random matrix models.