Van Vu and I have just uploaded to the arXiv our paper “Random matrices have simple spectrum“. Recall that a Hermitian matrix is said to have simple spectrum if all of its eigenvalues are distinct. This is a very typical property for a matrix to have: for instance, as discussed in this previous post, in the space of all $n \times n$ Hermitian matrices, the subspace of matrices with at least one repeated eigenvalue has codimension three, and in the real symmetric case this subspace has codimension two. In particular, given any random matrix ensemble of Hermitian or real symmetric matrices with an absolutely continuous distribution, we conclude that random matrices drawn from this ensemble will almost surely have simple spectrum.

For discrete random matrix ensembles, though, the above argument breaks down, even though general universality heuristics predict that the statistics of discrete ensembles should behave similarly to those of continuous ensembles. A model case here is the adjacency matrix $M_n$ of an Erdős–Rényi graph $G(n,p)$ – a graph on $n$ vertices in which any pair of vertices has an independent probability $p$ of being joined by an edge. For the purposes of this paper one should view $p$ as fixed, e.g. $p = 1/2$, while $n$ is an asymptotic parameter going to infinity. In this context, our main result is the following (answering a question of Babai):

Theorem 1. With probability $1 - o(1)$, $M_n$ has simple spectrum.

Our argument works for more general Wigner-type matrix ensembles, but for sake of illustration we will stick with the Erdős–Rényi case. Previous work on local universality for such matrix models (e.g. the work of Erdős, Knowles, Yau, and Yin) was able to show that any individual eigenvalue gap $\lambda_{i+1}(M_n) - \lambda_i(M_n)$ did not vanish with probability $1 - o(1)$ (in fact with probability $1 - O(n^{-c})$ for some absolute constant $c > 0$), but because there are $n-1$ different gaps that one has to simultaneously ensure to be non-zero, this did not give Theorem 1, as one is forced to apply the union bound.

Our argument in fact gives simplicity of the spectrum with probability $1 - n^{-A}$ for any fixed $A > 0$; in a subsequent paper we also show that it gives a quantitative lower bound on the eigenvalue gaps (analogous to how many results on the singularity probability of random matrices can be upgraded to a bound on the least singular value).

The basic idea of the argument can be sketched as follows. Suppose that $M_n$ has a repeated eigenvalue $\lambda$. We split

$$M_n = \begin{pmatrix} M_{n-1} & X \\ X^* & d \end{pmatrix}$$

for a random $(n-1) \times (n-1)$ minor $M_{n-1}$ and a random sign vector $X$; crucially, $X$ and $M_{n-1}$ are independent. If $M_n$ has a repeated eigenvalue $\lambda$, then by the Cauchy interlacing law, $M_{n-1}$ also has an eigenvalue $\lambda$. We now write down the eigenvector equation for $M_n$ at $\lambda$:

$$\begin{pmatrix} M_{n-1} & X \\ X^* & d \end{pmatrix} \begin{pmatrix} v \\ a \end{pmatrix} = \lambda \begin{pmatrix} v \\ a \end{pmatrix}.$$

Extracting the top $n-1$ coefficients, we obtain

$$(M_{n-1} - \lambda) v + a X = 0.$$

If we let $w$ be the $\lambda$-eigenvector of $M_{n-1}$, then by taking inner products with $w$ we conclude that

$$a (w \cdot X) = 0;$$

we typically expect $a$ to be non-zero, in which case we arrive at

$$w \cdot X = 0.$$

In other words, in order for $M_n$ to have a repeated eigenvalue, the top right column $X$ of $M_n$ has to be orthogonal to an eigenvector $w$ of the minor $M_{n-1}$. Note that $X$ and $w$ are going to be independent (once we specify which eigenvector of $M_{n-1}$ to take as $w$). On the other hand, thanks to inverse Littlewood–Offord theory (specifically, we use an inverse Littlewood–Offord theorem of Nguyen and Vu), we know that the vector $X$ is unlikely to be orthogonal to any given vector $w$ independent of $X$, unless the coefficients of $w$ are extremely special (specifically, unless most of them lie in a generalised arithmetic progression). The main remaining difficulty is then to show that eigenvectors of a random matrix are typically not of this special form, and this relies on a conditioning argument originally used by Komlós to bound the singularity probability of a random sign matrix. (Basically, if an eigenvector has this special form, then one can use a fraction of the rows and columns of the random matrix to determine the eigenvector completely, while still preserving enough randomness in the remaining portion of the matrix so that this vector will in fact not be an eigenvector with high probability.)

## 22 comments

5 December, 2014 at 6:09 am

Bogdan

Great. In particular, your result implies that the graph isomorphism problem is in P for a class of graphs containing almost all graphs. Is this the first result of this type? I mean, are graphs whose adjacency matrices have non-repeated eigenvalues the first class of graphs for which 1) isomorphism is known to be in P, and 2) almost all graphs are known to belong to this class?

5 December, 2014 at 10:07 pm

Quentin

Actually, it was already known, using the fact that the largest n^{0.15} degrees are unique (which turns out to be sufficient). A paper by Babai, Erdős, and Selkow is a good reference, but there might be more recent results.

[PDF: 1980-35.pdf]

5 December, 2014 at 6:42 am

arch1

“…not be an eigenvector with high probability” can be ambiguous to a newbie :-)

9 December, 2014 at 5:40 am

lingnaodai11

Reblogged this on lingnaodai and commented:

Omg.. cool.. I feel like this is a very interesting area of research.. I can finally see how linear algebra and graph theory are used in this, I hope..

11 December, 2014 at 10:56 am

Interested_nonexpert

Given the conjectures that 1) all zeta zeroes are simple, and 2) zeta zeroes behave like eigenvalues of random matrices, they both seem to be consistent with this result :) Is operator theory still an active research area for tackling RH?

11 December, 2014 at 11:16 am

Terence Tao

In the function field case, one can interpret the zeroes of the zeta function as (essentially) the eigenvalues of a Frobenius operator acting on certain cohomology groups, and this is the route taken in Deligne’s proof of the RH for varieties over finite fields (or, for that matter, Weil’s earlier proof of RH for curves). Some of the motivation of trying to rigorously develop a theory of the “field of one element” is to try to mimic this approach for the Riemann zeta function; see for instance some recent work by Connes and Consani in this direction. However, as I understand from talking to Connes, there is still a huge input missing, namely we do not currently have an F_1 analogue of the Riemann-Roch theorem.

My personal feeling, though, is that the empirical fact that zeta zero statistics appear to asymptotically resemble random matrix statistics is not primarily due to any random matrix (or Hilbert-Polya) type interpretation of the zeta function. Rather, I believe it is a manifestation of the universality of these sorts of statistical laws, in that these statistics are the generic behaviour for zeroes of many classes of random or deterministic polynomials or analytic functions (not just those polynomials that happen to be characteristic polynomials of random matrices), after restricting of course to those functions that obey a functional equation across the critical line.

14 December, 2014 at 3:09 pm

Thomas Strohmer

It may be worthwhile to mention that the first result of this kind is probably due to von Neumann and Wigner; see “Über das Verhalten von Eigenwerten bei adiabatischen Prozessen”, J. von Neumann and E. P. Wigner, Z. Phys. A, 1929.

20 January, 2015 at 9:36 pm

Steven Heilman

Can your subsequent paper also improve on the result of Erdős–Knowles–Yau–Yin that the second largest eigenvalue follows the Tracy–Widom distribution? That is, can you improve on the restriction that they place on the parameter p, where each edge in the Erdős–Rényi model occurs with probability p? If so, that would be nice!

20 January, 2015 at 11:19 pm

Terence Tao

We haven’t done all the computations yet, but the range of sparsities that we can handle is probably going to be comparable to what is needed in EKYY. Also, our arguments do not establish universality of eigenvalue or spacing distributions; they are instead getting good bounds on the event of having an exceptionally small eigenvalue gap, which is a somewhat different question (more about the error term in universality than the main term).

3 February, 2015 at 6:11 am

Edith

Definition 1.1 is not correct. First it requires all entries i<=j to be jointly independent, and later, in the same definition, it is stated that the elements of the diagonal can be correlated. All marginals of a jointly independent distribution are also independent; obviously this is true for the bivariate marginals. This means that the elements of the diagonal have to be uncorrelated.

[This is a typo and will be corrected in the next revision of the ms – only the upper triangular entries are required to be jointly independent. -T.]

4 February, 2015 at 4:10 am

New findings | jeevarajs[…] Addendum: He has blogged about it at – https://terrytao.wordpress.com/…/random-matrices-have-simp…/ […]

3 April, 2015 at 10:24 am

Random matrices: tail bounds for gaps between eigenvalues | What's new[…] matrices: tail bounds for gaps between eigenvalues“. This is a followup paper to my recent paper with Van in which we showed that random matrices of Wigner type (such as the adjacency graph of an […]

3 April, 2015 at 2:11 pm

Benjamin Peterson

Small correction: The link in the first sentence says the paper title is “Random matrices have simple eigenvalues” when the paper title is actually “Random matrices have simple spectrum”.

[Corrected, thanks – T.]

7 April, 2015 at 12:46 pm

Lam

Dear Terry,

there are two arguments which I do not manage to understand.

1. In the last paragraph of the proof of Proposition 3.4, you apply the pigeonhole principle to get the existence of a GAP P’ which contains at least k-m elements of P_I with probability >n^{-O(1)}. (Also, should P_I be replaced by V_I, even if that still holds, and epsilon by epsilon/2 in the volume?) I don’t get how to get such a GAP. Do you fix one P_I and call it P’? Then how does it follow that uniformly at random, another P_I intersects this P’ with size at least k-m, with high probability?

2. In the main proof of Proposition 2.3, in bounding P(E_3′), after embedding H in a hyperplane, how does it follow that each row has independent probability (1-mu) of lying in H?

Thank you!

7 April, 2015 at 4:07 pm

Terence Tao

1. Yes, $P_I$ should be $V_I$, and the exponent should be $\epsilon/2$.

The pigeonholing argument goes like this: for each $I$, we have a progression $P_I$, which is drawn from a set of $n^{O(1)}$ possible progressions. Thus, by the pigeonhole principle, there exists one of these progressions, call it $P'$, such that $P_I = P'$ with probability at least $n^{-O(1)}$ if $I$ is chosen uniformly at random.

2. This is the Odlyzko argument. As stated in the text, the hyperplane $H$ is expressible as a graph of one of the coordinates, say $x_n$, in terms of the other coordinates $x_1,\dots,x_{n-1}$. Now take one of the rows and condition on the coordinates $x_1,\dots,x_{n-1}$ to be fixed. The remaining coordinate $x_n$ is still random, and by hypothesis it attains any given value with probability at most $1-\mu$, so in particular it attains the value required to lie in $H$ with probability at most $1-\mu$. Now integrate out the conditioning to obtain the result.

7 April, 2015 at 5:35 pm

Lam

Thank you!

8 April, 2015 at 4:06 am

Anonymous

If the dimension $n$ is random, is it possible to extend Theorem 1 (with an appropriate condition on the probability distribution of the values of $n$) so that $M_n$ has simple spectrum with probability 1?

2 February, 2017 at 4:31 pm

Mike

This may seem pedantic, but I’m trying to attach a specific meaning to the statement that the space of Hermitian matrices with repeated eigenvalues has codimension 3. It seems that this space should be a finite union of submanifolds all having codimension at least 3. Is this true? And is there a simple way to see this more rigorously (or a reference that does so)?

4 February, 2017 at 5:11 pm

Terence Tao

One way to do this is to describe this space as the image (under some smooth or algebraic map) of a space of (real) dimension $n^2 - 3$. For instance, to describe a Hermitian matrix with repeated eigenvalues, one can list $n-1$ eigenvalues (one of which is repeated), and $n-2$ orthogonal eigenlines together with a further eigenplane orthogonal to the eigenlines. Actually one doesn’t need to specify the eigenplane, as it is determined as the orthogonal complement of the span of the other eigenlines. Clearly the space of possible eigenvalues is $(n-1)$-dimensional. The first eigenline is $2(n-1)$-dimensional (this being the real dimension of $\mathbf{CP}^{n-1}$), then once that eigenline is fixed, the dimension of the possible options for the next eigenline is $2(n-2)$, and so forth, giving a total dimension of $(n-1) + 2\big((n-1) + (n-2) + \cdots + 2\big) = n^2 - 3$.

6 February, 2017 at 12:48 am

Mike

Thanks very much!

2 February, 2017 at 10:58 pm

Anonymous

This seems to follow from the fact that this space (Hermitian matrices with repeated eigenvalues) can be represented as a (closed) semialgebraic subset of $\mathbf{R}^{n^2}$.

21 March, 2018 at 1:14 pm

Raphael

I have a question: Are you aware of a discussion of a possible connection between random matrix theory and the recent Bender-Brody-Müller approach?

One thought about that: It seems that, ideally, random matrix theory would be exactly the Heisenberg matrix formulation of the BBM “Schrödinger type” formulation.

The Heisenberg uncertainty principle tells us that no quantum object can have a defined momentum and position in the same state. Measuring momentum and position one after the other leads inevitably to a result for which only probability distributions rather than defined values can be given.

Would it make sense to interpret the Berry-Keating (or therefrom derived) operators containing products like p̂x̂ such that one would force the system into eigenstates with localised (defined) product of momentum and position?

One would then have to read these terms as p̂(x̂(ψ)),

leading to operators with a kind of paradoxical eigenstate structure, which already reflects itself in the fact that these operators are not self-adjoint.