Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Universality of local eigenvalue statistics up to the edge“, submitted to Comm. Math. Phys. This is a sequel to our previous paper, in which we studied universality of local eigenvalue statistics (such as normalised eigenvalue spacings $\sqrt{n} ( \lambda_{i+1}(M_n) - \lambda_i(M_n) )$) for random matrices $M_n$ of Wigner type, i.e. Hermitian (or symmetric) random matrices in which the upper-triangular entries are independent with mean zero and variance one (for technical reasons we also have to assume an exponential decay condition on the distribution of the entries). The results in the previous paper were almost entirely focused on the bulk region, in which the index $i$ of the eigenvalues involved was in the range $\varepsilon n \leq i \leq (1-\varepsilon) n$. The main purpose of this paper is to extend the main results of the previous paper all the way up to the edge, thus allowing one to control all indices $1 \leq i \leq n$. As an application, we obtain a variant of Soshnikov’s well-known result that the largest eigenvalue of Wigner matrices is distributed (after suitable normalisation) according to the Tracy-Widom law when the coefficients are symmetric, replacing the symmetry hypothesis with the assumption that the coefficients have vanishing third moment.
As one transitions from the bulk to the edge, the density of the eigenvalues decreases to zero (in accordance with the Wigner semicircular law), and so the average spacing between eigenvalues increases. (For instance, the spacing between eigenvalues in the bulk is of size $n^{-1/2}$, but at the extreme edge it increases to $n^{-1/6}$.) On the one hand, the increase in average spacing should make life easier, because one does not have to work at such a fine spatial scale in order to see the eigenvalue distribution. On the other hand, a certain key technical step in the previous paper (in which we adapted an argument of Erdős, Schlein, and Yau to show that eigenvectors of Wigner matrices were delocalised) seemed to require eigenvalue spacings to be of size $O(n^{-1/2})$, which was the main technical obstacle to extending the preceding results from the bulk to the edge.
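This disparity of scales is easy to observe numerically. The following sketch is my own illustration (not from the paper); the Gaussian ensemble, the matrix size and the number of trials are arbitrary choices.

```python
# Illustrative numerical sketch: for Wigner matrices with entries of mean zero
# and variance one, the spacing between adjacent eigenvalues is of size about
# n^{-1/2} in the bulk, but grows to about n^{-1/6} at the extreme edge.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 400, 10
edge_gaps, bulk_gaps = [], []
for _ in range(trials):
    G = rng.standard_normal((n, n))
    M = (G + G.T) / np.sqrt(2)               # symmetric Wigner matrix (GOE-like)
    lam = np.linalg.eigvalsh(M)              # eigenvalues in increasing order
    edge_gaps.append(lam[-1] - lam[-2])      # spacing at the extreme edge
    mid = n // 2
    bulk_gaps.append(lam[mid + 1] - lam[mid])  # spacing in the middle of the bulk

print(np.mean(edge_gaps), np.mean(bulk_gaps))  # edge gaps are typically much larger
```

The averaged edge gap is on the order of $n^{-1/6}$ while the bulk gap is on the order of $n^{-1/2}$, in line with the discussion above.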
The main new observation in the paper is that it is not the eigenvalue spacings $\lambda_{i+1}(M_n) - \lambda_i(M_n)$ which are of importance for eigenvector delocalisation, but rather the somewhat smaller interlaced eigenvalue spacings $\lambda_{i+1}(M_n) - \lambda_i(M_{n-1})$, where $M_{n-1}$ is the top left $(n-1) \times (n-1)$ minor of $M_n$. The Cauchy interlacing law asserts that the latter is smaller than the former. But the interesting thing is that at the edge (when $i$ is close to $n$), the interlaced spacings are much smaller than the non-interlaced ones, and in particular remain of size about $n^{-1/2}$ (up to log factors) even though the non-interlaced spacings increase to be as large as $n^{-1/6}$. This is ultimately due to a sort of “attractive force” on eigenvalues that draws them towards the origin and counteracts the usual “eigenvalue repulsion effect” that pushes eigenvalues away from each other. This induced “bias” for eigenvalues to move in towards the bulk rescues the delocalisation result, and the remainder of the arguments in our previous paper then continue with few changes.
Below the fold I wish to give some heuristic justification of the interlacing bias phenomenon, sketch why this is relevant for eigenvector delocalisation, and finally recall why eigenvector delocalisation in turn is relevant for universality.
[Update, Aug 16: sign error corrected.]
— Interlacing bias —
Let $A_n$ be an $n \times n$ Hermitian matrix, and let $A_{n-1}$ be a principal $(n-1) \times (n-1)$ minor, say the upper left minor for concreteness. The eigenvalues $\lambda_1(A_n) \leq \ldots \leq \lambda_n(A_n)$ of $A_n$ can be given by the minimax formula

$\displaystyle \lambda_i(A_n) = \min_{\dim(V) = i} \max_{v \in V: \|v\|=1} v^* A_n v, \ \ \ \ \ (1)$

where $V$ ranges over all $i$-dimensional subspaces of ${\bf C}^n$. The formula for $\lambda_i(A_{n-1})$ is similar, but $V$ is now constrained to lie in the hyperplane ${\bf C}^{n-1}$ of ${\bf C}^n$. Comparing the two formulae (and noting that any $(i+1)$-dimensional subspace of ${\bf C}^n$ intersects ${\bf C}^{n-1}$ in a space of dimension at least $i$) one is led to the Cauchy interlacing inequality

$\displaystyle \lambda_i(A_n) \leq \lambda_i(A_{n-1}) \leq \lambda_{i+1}(A_n).$

Thus, the eigenvalues of $A_{n-1}$ intersperse (or interlace) the eigenvalues of $A_n$.
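As a quick sanity check (my own illustration, not from the paper), the interlacing inequalities can be verified numerically for a small random Hermitian matrix:

```python
# Numerical sanity check (illustrative): the eigenvalues of the upper left
# (n-1) x (n-1) minor of a Hermitian matrix interlace those of the full matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 8
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (G + G.conj().T) / 2                     # a random Hermitian matrix A_n
lam = np.linalg.eigvalsh(A)                  # eigenvalues of A_n, increasing
mu = np.linalg.eigvalsh(A[:-1, :-1])         # eigenvalues of the minor A_{n-1}

# Cauchy interlacing: lambda_i(A_n) <= lambda_i(A_{n-1}) <= lambda_{i+1}(A_n).
interlaces = all(lam[i] <= mu[i] <= lam[i + 1] for i in range(n - 1))
print(interlaces)                            # True
```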
One can then ask the question (for various matrix models of $A_n$) how $\lambda_i(A_{n-1})$ is distributed in the interval $[\lambda_i(A_n), \lambda_{i+1}(A_n)]$, or how $\lambda_i(A_n)$ is distributed in the interval $[\lambda_{i-1}(A_{n-1}), \lambda_i(A_{n-1})]$.
The answer to the first question is rather interesting: if the eigenvalues of $A_n$ are fixed, and the eigenvectors of $A_n$ are chosen uniformly at random (or equivalently, if one conjugates $A_n$ by a randomly chosen unitary matrix), then the eigenvalue $\lambda_i(A_{n-1})$ turns out to be distributed according to a certain piecewise polynomial distribution (connected with Gelfand-Tsetlin patterns). The situation is easiest to describe when $n=2$, in which case $\lambda_1(A_1)$ is uniformly distributed in the interval $[\lambda_1(A_2), \lambda_2(A_2)]$. Amusingly, this fact is equivalent to Archimedes’ famous result (inscribed on his tombstone) on the equivalence of areas between a sphere and a cylinder, and is an instructive exercise.
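The $n=2$ case is easy to see in a Monte Carlo experiment (my own sketch; the sample size and the choice of eigenvalues $\pm 1$ are arbitrary): conjugating $\mathrm{diag}(-1,1)$ by a Haar-random unitary makes the $1 \times 1$ minor (the top left entry) uniformly distributed on $[-1,1]$.

```python
# Monte Carlo sketch: fix the eigenvalues of a 2x2 Hermitian matrix to be -1
# and 1, randomise the eigenvectors (Haar measure), and observe that the 1x1
# minor is uniformly distributed on [-1, 1] (mean 0, variance 1/3).
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary_2x2(rng):
    # QR factorisation of a complex Gaussian matrix, with the phases of the
    # diagonal of R fixed, yields a Haar-distributed unitary matrix.
    Z = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

def minor_sample(rng):
    U = haar_unitary_2x2(rng)
    A = U @ np.diag([-1.0, 1.0]) @ U.conj().T
    return A[0, 0].real                      # the 1x1 minor lambda_1(A_1)

samples = np.array([minor_sample(rng) for _ in range(20000)])
print(samples.mean(), samples.var())         # close to 0 and 1/3
```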
However, this type of analysis does not seem to lend itself well to general matrix ensembles, which are not invariant under unitary conjugation. An alternate approach proceeds by viewing $A_n$ as the matrix $A_{n-1}$ with an $(n-1)$-dimensional column vector $X$ and its adjoint $X^*$ attached, together with a final coordinate $a_{nn}$, like so:

$\displaystyle A_n = \begin{pmatrix} A_{n-1} & X \\ X^* & a_{nn} \end{pmatrix}.$

One can compute the characteristic polynomial of $A_n$ in terms of $X$, $a_{nn}$, and the eigenvalues $\lambda_j(A_{n-1})$ and unit eigenvectors $u_j(A_{n-1})$ of $A_{n-1}$ to be given by the formula

$\displaystyle \det(A_n - \lambda I) = \det(A_{n-1} - \lambda I) \left( a_{nn} - \lambda - \sum_{j=1}^{n-1} \frac{|u_j(A_{n-1})^* X|^2}{\lambda_j(A_{n-1}) - \lambda} \right), \ \ \ \ \ (2)$

which implies that $\lambda_i(A_n)$ is the unique solution in the interval $(\lambda_{i-1}(A_{n-1}), \lambda_i(A_{n-1}))$ to the equation

$\displaystyle \sum_{j=1}^{n-1} \frac{|u_j(A_{n-1})^* X|^2}{\lambda_j(A_{n-1}) - \lambda_i(A_n)} = a_{nn} - \lambda_i(A_n). \ \ \ \ \ (3)$
(Note that the left-hand side of (3) is increasing in $\lambda_i(A_n)$, while the right-hand side is decreasing, and the left-hand side also has poles at each eigenvalue of $A_{n-1}$; this establishes the uniqueness, and also provides an alternate proof of the Cauchy interlacing law.)
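One can check equation (3) numerically; the following is an illustration of my own (the size of the minor and the Gaussian model are arbitrary choices).

```python
# Numerical check (illustrative) of equation (3): each eigenvalue of the
# bordered matrix A_n solves the secular equation determined by the minor.
import numpy as np

rng = np.random.default_rng(0)
m = 6                                        # size of the minor A_{n-1}
G = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A_minor = (G + G.conj().T) / 2
X = rng.standard_normal(m) + 1j * rng.standard_normal(m)
a_nn = float(rng.standard_normal())

# Border A_{n-1} by the column X, its adjoint, and the corner entry a_nn.
A = np.block([[A_minor, X[:, None]],
              [X.conj()[None, :], np.array([[a_nn]])]])

mu, U = np.linalg.eigh(A_minor)              # spectrum of the minor
w = np.abs(U.conj().T @ X) ** 2              # numerators |u_j(A_{n-1})^* X|^2
residuals = [np.sum(w / (mu - lam)) - (a_nn - lam)
             for lam in np.linalg.eigvalsh(A)]
print(max(abs(r) for r in residuals))        # tiny (floating point error only)
```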
I like to interpret the equation (3) as describing the balance between various “forces” acting on the eigenvalue $\lambda_i(A_n)$. All the eigenvalues of $A_{n-1}$ to the left of $\lambda_i(A_n)$ (i.e. $\lambda_j(A_{n-1})$ for $j \leq i-1$) exert a rightward force on $\lambda_i(A_n)$ that is inversely proportional to the distance; meanwhile, the eigenvalues on the right exert a leftward force; thus the eigenvalues of $A_{n-1}$ “repel” the eigenvalues of $A_n$ (cf. the eigenvalue repulsion discussed in this previous blog post). On the other hand, the term $-\lambda_i(A_n)$ on the right-hand side of (3) can be viewed as an attractive force towards the origin, which becomes quite significant at the edge. (The $a_{nn}$ term tends to be insignificant in applications.) This attractive force (which is of magnitude about $2\sqrt{n}$ at the edge) needs to be counterbalanced by repulsion from eigenvalues nearer the bulk, and one can show (under reasonable hypotheses on the matrix ensemble) that the only way this can occur is if $\lambda_i(A_n)$ is within $n^{-1/2}$ or so of an eigenvalue $\lambda_j(A_{n-1})$ nearer the bulk (the numerators $|u_j(A_{n-1})^* X|^2$ have size about 1 on the average, as can be formalised by Berry-Esseen-type variants of the central limit theorem). This is basically the source of the bias in the interlacing law.
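This bias is easy to observe numerically (again an illustration of my own, with an arbitrary Gaussian ensemble): at the edge, $\lambda_n(A_n)$ hugs the top eigenvalue of the minor far more closely than it hugs $\lambda_{n-1}(A_n)$.

```python
# Illustrative sketch: at the edge, the interlaced gap
# lambda_n(A_n) - lambda_{n-1}(A_{n-1}) stays of size about n^{-1/2}, while
# the non-interlaced gap lambda_n(A_n) - lambda_{n-1}(A_n) is about n^{-1/6}.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
G = rng.standard_normal((n, n))
A = (G + G.T) / np.sqrt(2)
lam = np.linalg.eigvalsh(A)                  # eigenvalues of A_n
mu = np.linalg.eigvalsh(A[:-1, :-1])         # eigenvalues of the minor A_{n-1}

interlaced_gap = lam[-1] - mu[-1]
edge_gap = lam[-1] - lam[-2]
print(interlaced_gap, edge_gap, n ** -0.5, n ** (-1 / 6.0))
```

Note that $\mathrm{interlaced\_gap} \leq \mathrm{edge\_gap}$ is guaranteed by interlacing; the point of the experiment is the disparity in their sizes.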
(Another way to see why the interlacing gap at the edge must remain of size $n^{-1/2}$ rather than $n^{-1/6}$ is to consider the asymptotic $\lambda_n(A_n) \approx 2\sqrt{n}$, which suggests that the largest eigenvalue increases by about $2\sqrt{n} - 2\sqrt{n-1} \approx n^{-1/2}$, on the average at least, when passing from $A_{n-1}$ to $A_n$.)
— Eigenvector delocalisation —
The closeness between the eigenvalues of $A_{n-1}$ and those of $A_n$ impacts the eigenvectors of $A_n$. Indeed, suppose that $u = \begin{pmatrix} u' \\ u_n \end{pmatrix}$ was a unit eigenvector of $A_n$ with eigenvalue $\lambda_i(A_n)$, thus

$\displaystyle \begin{pmatrix} A_{n-1} & X \\ X^* & a_{nn} \end{pmatrix} \begin{pmatrix} u' \\ u_n \end{pmatrix} = \lambda_i(A_n) \begin{pmatrix} u' \\ u_n \end{pmatrix};$

the top $n-1$ rows of this equation give $(A_{n-1} - \lambda_i(A_n) I) u' = -u_n X$. If $\lambda_i(A_n)$ is close to an eigenvalue of $A_{n-1}$, this makes the shifted minor $A_{n-1} - \lambda_i(A_n) I$ ill-conditioned. This allows the vector $u' = -u_n (A_{n-1} - \lambda_i(A_n) I)^{-1} X$ to be quite large compared to $u_n$, which (since $u$ is normalised to be of unit size) makes $u_n$ small. Crunching the numbers, one can show that an eigenvalue gap of size $n^{-1/2}$ translates (using the random nature of $X$) to a bound of about $n^{-1/2}$ on $u_n$ (ignoring logs). By symmetry, we get a similar bound for the other coefficients of $u$; thus the coefficients of $u$ are extremely uniform in magnitude (i.e. the eigenvector is delocalised).
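The following sketch (my own illustration, not from the paper) exhibits this flatness for a sample Wigner matrix: every coordinate of a top unit eigenvector is of size comparable to $n^{-1/2}$.

```python
# Illustrative sketch of delocalisation: for a Wigner matrix, each coordinate
# of a unit eigenvector has magnitude about n^{-1/2}, so the rescaled sup norm
# sqrt(n) * max_p |u_p| stays bounded by a modest (logarithmic) factor.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
G = rng.standard_normal((n, n))
M = (G + G.T) / np.sqrt(2)
_, U = np.linalg.eigh(M)
u = U[:, -1]                                 # unit eigenvector at the edge

print(np.sqrt(n) * np.abs(u).max())          # O(1) up to logs, far below sqrt(n)
```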
— Why delocalisation is needed —
The delocalisation of eigenvectors is an essential ingredient in our Lindeberg-style proof of universality, in which we replace the coefficients of $M_n$ one at a time with coefficients drawn from a better understood distribution (e.g. a gaussian distribution). In order to keep the errors under control, it is necessary that the effect of any given coefficient of $M_n$ on an eigenvalue $\lambda_i(M_n)$ is significantly less than the mean spacing between eigenvalues. On the other hand, if one modifies the $(p,q)$ coefficient of $M_n$ by $O(1)$, then the Hadamard variation formula (see this previous blog post) suggests that the eigenvalue $\lambda_i(M_n)$ should move by about $2 |u_p| |u_q|$, where $u_p$ is the $p^{th}$ coordinate of the unit eigenvector $u$ corresponding to $\lambda_i(M_n)$. With delocalisation, these coefficients have size about $n^{-1/2}$, leading to a total movement of $O(1/n)$, which is indeed smaller than the mean spacing ($n^{-1/2}$ in the bulk, $n^{-1/6}$ at the extreme edge).
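This first-order heuristic can be tested numerically; the sketch below is my own illustration (the matrix size, the entry $(p,q)$ and the perturbation size are arbitrary choices).

```python
# Illustrative check of the Hadamard variation heuristic: bumping the (p, q)
# and (q, p) entries of a symmetric matrix by eps moves the eigenvalue
# lambda_i by about 2 * eps * u_p * u_q to first order.
import numpy as np

rng = np.random.default_rng(0)
n = 200
G = rng.standard_normal((n, n))
M = (G + G.T) / np.sqrt(2)
lam, U = np.linalg.eigh(M)
i, p, q = n - 1, 3, 7                        # track the largest eigenvalue
u = U[:, i]

eps = 1e-5
Mp = M.copy()
Mp[p, q] += eps
Mp[q, p] += eps                              # keep the matrix symmetric
shift = np.linalg.eigvalsh(Mp)[i] - lam[i]
predicted = 2 * eps * u[p] * u[q]
print(shift, predicted)                      # agree up to second-order terms
```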
[Note there is a bit of room here; one could accept a somewhat weaker delocalisation result, conceding some powers of $n$, but one would then need more moment assumptions on the distribution to compensate for this (one has to perform a higher order Taylor expansion before the net error terms become acceptable again).]