Peter Denton, Stephen Parke, Xining Zhang, and I have just uploaded to the arXiv the short unpublished note “Eigenvectors from eigenvalues”. This note gives two proofs of a general eigenvector identity observed recently by Denton, Parke and Zhang in the course of some quantum mechanical calculations. The identity is as follows:

Theorem 1. Let $A$ be an $n \times n$ Hermitian matrix, with eigenvalues $\lambda_1(A),\dots,\lambda_n(A)$. Let $v_i$ be a unit eigenvector corresponding to the eigenvalue $\lambda_i(A)$, and let $v_{i,j}$ be the $j^{\mathrm{th}}$ component of $v_i$. Then
$$|v_{i,j}|^2 \prod_{k=1; k \neq i}^{n} \left(\lambda_i(A) - \lambda_k(A)\right) = \prod_{k=1}^{n-1} \left(\lambda_i(A) - \lambda_k(M_j)\right) \qquad (1)$$
where $M_j$ is the $n-1 \times n-1$ Hermitian matrix formed by deleting the $j^{\mathrm{th}}$ row and column from $A$.

For instance, if we have
$$A = \begin{pmatrix} a & X^* \\ X & M \end{pmatrix}$$
for some real number $a$, $n-1$-dimensional vector $X$, and $n-1 \times n-1$ Hermitian matrix $M$, then we have
$$|v_{i,1}|^2 = \frac{\prod_{k=1}^{n-1} \left(\lambda_i(A) - \lambda_k(M)\right)}{\prod_{k=1; k \neq i}^{n} \left(\lambda_i(A) - \lambda_k(A)\right)},$$
assuming that the denominator is non-zero.
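Identity (1) is easy to test numerically. The following sketch (mine, not from the note) draws a random complex Hermitian matrix with numpy and compares the two sides for one eigenvalue/coordinate pair:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2               # random Hermitian matrix

eigvals, eigvecs = np.linalg.eigh(A)   # ascending eigenvalues; columns are unit eigenvectors
i, j = 2, 3                            # test one eigenvalue/coordinate pair

# Left-hand side of (1): |v_{i,j}|^2 * prod_{k != i} (lambda_i(A) - lambda_k(A))
lhs = abs(eigvecs[j, i]) ** 2 * np.prod(np.delete(eigvals[i] - eigvals, i))

# Right-hand side of (1): prod_k (lambda_i(A) - lambda_k(M_j)),
# where M_j is A with the j-th row and column deleted
M_j = np.delete(np.delete(A, j, axis=0), j, axis=1)
rhs = np.prod(eigvals[i] - np.linalg.eigvalsh(M_j))

assert np.isclose(lhs, rhs)
```

(A random matrix has simple eigenvalues almost surely, so no degenerate cases arise here.)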

Once one is aware of the identity, it is not so difficult to prove it; we give two proofs, each about half a page long, one of which is based on a variant of the Cauchy-Binet formula, and the other based on properties of the adjugate matrix. But perhaps it is surprising that such a formula exists at all; one does not normally expect to learn much information about eigenvectors purely from knowledge of eigenvalues. In the random matrix theory literature, for instance in this paper of Erdos, Schlein, and Yau, or this later paper of Van Vu and myself, a related identity has been used, namely
$$|v_{i,1}|^2 = \frac{1}{1 + X^* \left(M - \lambda_i(A)\right)^{-2} X}, \qquad (2)$$
but it is not immediately obvious that one can derive the former identity from the latter. (I do so below the fold; we ended up not putting this proof in the note as it was longer than the two other proofs we found. I also give two other proofs below the fold, one from a more geometric perspective and one proceeding via Cramer’s rule.) It was certainly something of a surprise to me that there is no explicit appearance of the components of $X$ in the formula (1) (though they do indirectly appear through their effect on the eigenvalues $\lambda_k(A)$; for instance from taking traces one sees that $a = \sum_{k=1}^n \lambda_k(A) - \sum_{k=1}^{n-1} \lambda_k(M)$).

One can get some feeling for the identity (1) by considering some special cases. Suppose for instance that $A$ is a diagonal matrix with all distinct entries. The upper left entry $a$ of $A$ is then one of the eigenvalues of $A$. If it is equal to $\lambda_i(A)$, then the eigenvalues of $M$ are the other $n-1$ eigenvalues of $A$, and now the left and right-hand sides of (1) are both equal to $\prod_{k \neq i} (\lambda_i(A) - \lambda_k(A))$. At the other extreme, if the upper left entry is instead equal to a different eigenvalue of $A$, then $\lambda_i(A)$ now appears as an eigenvalue of $M$, and both sides of (1) now vanish. More generally, if we order the eigenvalues $\lambda_1(A) \leq \dots \leq \lambda_n(A)$ and $\lambda_1(M) \leq \dots \leq \lambda_{n-1}(M)$, then the Cauchy interlacing inequalities tell us that
$$0 \leq \lambda_i(A) - \lambda_k(M) \leq \lambda_i(A) - \lambda_k(A)$$
for $1 \leq k < i$, and
$$0 \leq \lambda_k(M) - \lambda_i(A) \leq \lambda_{k+1}(A) - \lambda_i(A)$$
for $i \leq k \leq n-1$, so that the right-hand side of (1) lies between $0$ and $\prod_{k \neq i} (\lambda_i(A) - \lambda_k(A))$, which is of course consistent with (1) as $v_i$ is a unit vector. Thus the identity relates the coefficient sizes of an eigenvector with the extent to which the Cauchy interlacing inequalities are sharp.
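The interlacing bounds, and the resulting consequence that the ratio of the two products lies in $[0,1]$, can also be checked numerically; here is a small numpy sketch (mine, not from the post), using a real symmetric matrix for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                 # real symmetric suffices for illustration

j = 0
M = np.delete(np.delete(A, j, axis=0), j, axis=1)
lam_A = np.linalg.eigvalsh(A)     # ascending
lam_M = np.linalg.eigvalsh(M)

# Cauchy interlacing: lambda_k(A) <= lambda_k(M) <= lambda_{k+1}(A)
assert np.all(lam_A[:-1] <= lam_M + 1e-9)
assert np.all(lam_M <= lam_A[1:] + 1e-9)

# Hence the ratio of the two products in (1), which equals |v_{i,j}|^2, lies in [0, 1]
for i in range(n):
    ratio = np.prod(lam_A[i] - lam_M) / np.prod(np.delete(lam_A[i] - lam_A, i))
    assert -1e-9 <= ratio <= 1 + 1e-9
```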

** — 1. Relating the two identities — **

We now show how (1) can be deduced from (2). By a limiting argument, it suffices to prove (1) in the case when $\lambda_i(A)$ is not an eigenvalue of $M$. Without loss of generality we may take $j = 1$. By subtracting the matrix $\lambda_i(A)\, I$ from $A$ (and $\lambda_i(A)\, I$ from $M$), thus shifting all the eigenvalues down by $\lambda_i(A)$, we may also assume without loss of generality that $\lambda_i(A) = 0$. So now we wish to show that
$$|v_{i,1}|^2 \prod_{k=1; k \neq i}^{n} \left(-\lambda_k(A)\right) = \prod_{k=1}^{n-1} \left(-\lambda_k(M)\right).$$

The right-hand side is just $\det(-M)$. If one differentiates the characteristic polynomial
$$\det(\lambda - A) = \prod_{k=1}^{n} \left(\lambda - \lambda_k(A)\right)$$
at $\lambda = 0$, one sees that
$$\left.\frac{d}{d\lambda} \det(\lambda - A)\right|_{\lambda = 0} = \prod_{k=1; k \neq i}^{n} \left(-\lambda_k(A)\right).$$

Finally, (2) can be rewritten as
$$|v_{i,1}|^2 = \frac{1}{1 + X^* M^{-2} X},$$
so our task is now to show that
$$\left.\frac{d}{d\lambda} \det(\lambda - A)\right|_{\lambda = 0} = \left(1 + X^* M^{-2} X\right) \det(-M). \qquad (3)$$

By Schur complement, we have
$$\det(\lambda - A) = \det(\lambda - M)\, \left(\lambda - a - X^* (\lambda - M)^{-1} X\right). \qquad (4)$$

Since $0$ is an eigenvalue of $A$, but not of $M$ (by hypothesis), the factor $\lambda - a - X^* (\lambda - M)^{-1} X$ vanishes when $\lambda = 0$. If we then differentiate (4) in $\lambda$ and set $\lambda = 0$ we obtain (3) as desired.
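The Schur complement identity (4) is itself easy to test numerically. The following numpy sketch (mine, not from the post) checks it at a point $\lambda$ that is, generically, not an eigenvalue of $M$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2

a = A[0, 0].real                  # top-left corner entry
X = A[1:, 0]                      # first column below the corner
M = A[1:, 1:]                     # lower-right block

lam0 = 0.37                       # generic point, almost surely not an eigenvalue of M
lhs = np.linalg.det(lam0 * np.eye(n) - A)
# Schur complement factor: lam0 - a - X^* (lam0 - M)^{-1} X
schur = lam0 - a - X.conj() @ np.linalg.solve(lam0 * np.eye(n - 1) - M, X)
rhs = np.linalg.det(lam0 * np.eye(n - 1) - M) * schur
assert np.isclose(lhs, rhs)
```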

** — 2. A geometric proof — **

Here is a more geometric way to think about the identity. One can view $\lambda_i(A) - A$ as a linear operator on $\mathbb{C}^n$ (mapping $u$ to $(\lambda_i(A) - A) u$ for any vector $u$); it then also acts on all the exterior powers $\bigwedge^m \mathbb{C}^n$ by mapping $u_1 \wedge \dots \wedge u_m$ to $(\lambda_i(A) - A) u_1 \wedge \dots \wedge (\lambda_i(A) - A) u_m$ for all vectors $u_1,\dots,u_m$. In particular, if one evaluates $\lambda_i(A) - A$ on the basis of $\bigwedge^{n-1} \mathbb{C}^n$ induced by the orthogonal eigenbasis $v_1,\dots,v_n$, we see that the action of $\lambda_i(A) - A$ on $\bigwedge^{n-1} \mathbb{C}^n$ is rank one, with
$$(\lambda_i(A) - A) \omega = \prod_{k=1; k \neq i}^{n} \left(\lambda_i(A) - \lambda_k(A)\right) \Big\langle \omega, \bigwedge_{k \neq i} v_k \Big\rangle \bigwedge_{k \neq i} v_k$$
for any $\omega \in \bigwedge^{n-1} \mathbb{C}^n$, where $\langle \cdot, \cdot \rangle$ is the inner product on $\bigwedge^{n-1} \mathbb{C}^n$ induced by the standard inner product on $\mathbb{C}^n$. If we now apply this to the $n-1$-form $\omega = e_1 \wedge \dots \wedge e_{j-1} \wedge e_{j+1} \wedge \dots \wedge e_n$, we have $\langle (\lambda_i(A) - A) \omega, \omega \rangle = \det(\lambda_i(A) - M_j)$, while $\omega$ is equal to $\langle \omega, \bigwedge_{k \neq i} v_k \rangle \bigwedge_{k \neq i} v_k$ plus some terms orthogonal to $\bigwedge_{k \neq i} v_k$. Since $|\langle \omega, \bigwedge_{k \neq i} v_k \rangle| = |v_{i,j}|$, Theorem 1 follows.

** — 3. A proof using Cramer’s rule — **

By a limiting argument we can assume that all the eigenvalues of $A$ are simple. From the spectral theorem we can compute the resolvent $(\lambda - A)^{-1}$ for $\lambda \neq \lambda_1(A),\dots,\lambda_n(A)$ as
$$(\lambda - A)^{-1} = \sum_{i=1}^{n} \frac{v_i v_i^*}{\lambda - \lambda_i(A)}.$$

Extracting the $jj$ component of both sides and using Cramer’s rule, we conclude that
$$\frac{\det(\lambda - M_j)}{\det(\lambda - A)} = \sum_{i=1}^{n} \frac{|v_{i,j}|^2}{\lambda - \lambda_i(A)},$$

or in terms of eigenvalues
$$\frac{\prod_{k=1}^{n-1} \left(\lambda - \lambda_k(M_j)\right)}{\prod_{k=1}^{n} \left(\lambda - \lambda_k(A)\right)} = \sum_{i=1}^{n} \frac{|v_{i,j}|^2}{\lambda - \lambda_i(A)}.$$

Both sides are rational functions with a simple pole at each of the eigenvalues $\lambda_i(A)$. Extracting the residue at $\lambda = \lambda_i(A)$ we conclude that
$$\frac{\prod_{k=1}^{n-1} \left(\lambda_i(A) - \lambda_k(M_j)\right)}{\prod_{k=1; k \neq i}^{n} \left(\lambda_i(A) - \lambda_k(A)\right)} = |v_{i,j}|^2,$$

and Theorem 1 follows. (Note that this approach also gives a formula for the products $v_{i,j} \overline{v_{i,j'}}$ for $j \neq j'$, although the formula becomes messier in this case because $\lambda$ no longer enters the relevant minor of $\lambda - A$ as a scalar multiple of the identity.)
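As a sanity check on this argument, the following numpy sketch (mine, not from the post) verifies both the Cramer’s rule step, at a point away from the spectrum, and the residue extraction:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2

lam, V = np.linalg.eigh(A)
j = 1
M = np.delete(np.delete(A, j, axis=0), j, axis=1)   # the minor M_j

# Cramer's rule: the jj entry of the resolvent equals det(lam0 - M_j)/det(lam0 - A)
lam0 = lam.max() + 1.5            # a point away from the spectrum
resolvent_jj = np.linalg.inv(lam0 * np.eye(n) - A)[j, j]
cramer = np.linalg.det(lam0 * np.eye(n - 1) - M) / np.linalg.det(lam0 * np.eye(n) - A)
assert np.isclose(resolvent_jj, cramer)

# Extracting the residue at lam[i] recovers |v_{i,j}|^2
i = 0
residue = np.prod(lam[i] - np.linalg.eigvalsh(M)) / np.prod(np.delete(lam[i] - lam, i))
assert np.isclose(residue, abs(V[j, i]) ** 2)
```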

## 24 comments


13 August, 2019 at 7:44 am

omarleoblog: What kind of mathematics is this?

13 August, 2019 at 8:16 am

Anon: Linear algebra.

1 September, 2019 at 7:27 am

That Kitty: At the research level, these results are sometimes called things like “matrix analysis,” especially when dealing with finite-dimensional normed spaces. Maybe because mathematicians often consider linear algebra “solved,” or because so much (especially applied) work can be called “linear algebra” that finer descriptors are needed?

[Insert any variation on the old joke about how all of maths is an attempt to reduce problems to (some generalization) of linear algebra]

14 August, 2019 at 7:31 am

cimmerianhydra: I had never visited your blog before, Mr. Tao, and the first article I read is about such an interesting equality!

The last proof is especially elegant. About the second one: would you say that geometric intuition is always a valuable perspective from which to understand one’s results? Are there situations where geometry “gets in the way” of seeing things clearly?

14 August, 2019 at 7:35 am

cimmerianhydra: For some reason this was posted as a reply to a comment. I’m sorry, I have no idea how this happened.

13 August, 2019 at 9:21 am

Andrew Krause: Are there any analogues of these identities for any interesting infinite-dimensional cases? I’m wondering specifically if one can relate the eigenfunctions of a differential operator to spectral properties of that operator. Of course Kac’s famous paper on the “shape of the drum” and the solutions in that case suggest that there is no simple relationship, but perhaps there is an analogue of the spectrum of these sub-matrices which helps clarify this.

13 August, 2019 at 1:46 pm

Terence Tao: The first part of the Cramer rule proof, based on the spectral theorem, will hold in some infinite-dimensional settings: for instance, if $-\Delta$ is the (positive semi-definite) Laplacian on a smooth compact Riemannian manifold and $\phi$ is an $L^2$-normalised eigenfunction corresponding to the eigenvalue $\lambda_k$, then it is still true that $\phi(x) \overline{\phi(y)}$ is the residue at $\lambda = \lambda_k$ of the integral kernel of the resolvent $(\lambda + \Delta)^{-1}$ evaluated at $(x,y)$. However it is not obvious how to evaluate this resolvent further; possibly there is still some remnant of Cramer’s rule involving Fredholm determinants, but I’m not sure how useful it would be. Certainly computing residues or other asymptotic expansions of resolvents is quite a common pastime in the spectral theory or microlocal analysis literature though.

13 August, 2019 at 1:22 pm

Anonymous: It seems that the LHS product (taken over all $i$) is the discriminant of the characteristic polynomial of the matrix $A$ (i.e. the resultant of the characteristic polynomial and its derivative), while the RHS product (taken over all $i$) is the resultant of the characteristic polynomials of the matrices $A$ and $M_j$.

Since these resultants representing the two products can be computed via determinants of their corresponding Sylvester matrices, this seems closely related to the proof using Cramer’s rule.

13 August, 2019 at 11:11 pm

Watts: If $A$ is Toeplitz, say the discrete Laplacian, then the right-hand side is constant for any $j$. Then the formula can only recover the first eigenvector. Right?

13 August, 2019 at 11:36 pm

Watts: This is incorrect; it is not constant for any $j$. Ignore this comment.

13 August, 2019 at 11:25 pm

Part III: A Physicist Completes a Linear Algebra Result - Nevin Manimala's Blog[…] a bit. We put the paper online last weekend and it finally appeared on the arXiv, along with a new Terry blog post! I’m so excited you guys don’t even […]

14 August, 2019 at 2:16 am

Raphael: Cool thing!

There is a chemical example which is immediately intuitive. A closed ring of n atoms has an adjacency matrix that is constant on the first off-diagonals (at the (k,k+1) and (k+1,k) elements) and at the elements (n,1) and (1,n). The eigenvalues of this matrix (the Hückel matrix) correspond to the energy levels of the molecule (in some crude approximation). The adjacency matrix of the open chain you get when you remove one atom from the ring is the above-described band matrix of dimension (n-1)×(n-1). Now the energies (eigenvalues) of the open-chain system with n-1 atoms exactly match the eigenvalues of the n+1 ring (they both are real parts of roots of unity: of order n in the ring case, and of order n+1 in the case of the open chain with n atoms). So removing one atom from the ring to get a chain is one example of making M out of A. In this example the eigenvalues of M without the first, and those of A, are identical. From chemical intuition it is immediately clear that both v_1 eigenvectors are (1,…,1). An interesting question would be how, or whether, to get information on the phases of the coefficients. In the case of symmetric systems they would correspond to the characters of irreducible representations.

14 August, 2019 at 2:30 am

Raphael: Oh sorry, there is a mistake; it is even a bit more interesting: the “ground state” eigenvectors of the two systems (A and M) are different; the ring obviously has $(1,\dots,1)$ while the open chain has $(\sin(\pi k/n))_{k=1}^{n-1}$ (up to normalization).

14 August, 2019 at 7:19 am

Anonymous: In the last formula, there is a typo on the RHS.

[Corrected, thanks – T.]

14 August, 2019 at 10:00 am

John Mangual: What are these used for?

14 August, 2019 at 11:09 am

Anonymous: These are used for developing a new way to write neutrino oscillation probabilities in matter, both exactly and with simpler approximate expressions.

14 August, 2019 at 7:28 pm

Anonymous: There seems to be something strange about the left-hand side of equation (1) in the arXiv paper. The product is not well defined if $B$ is an $n \times (n-1)$ matrix. It’s certainly not a square matrix, so how is that determinant calculated?

15 August, 2019 at 4:49 am

Anonymous: It’s not multiplication, it’s concatenation: stick the column vector after the last column of $B$ to “complete the square”.

18 August, 2019 at 1:22 am

Karl Svozil: In which way is this result different from the one mentioned, for instance, in Par. 79, p. 157 of Halmos’ “Finite-dimensional vector spaces”: Let $A$ be a self-adjoint operator on a finite-dimensional Hilbert space, with distinct eigenvalues $\lambda_1,\dots,\lambda_m$, and for each $j$ form the polynomial
$$p_j(\lambda) = \prod_{k \neq j} \frac{\lambda - \lambda_k}{\lambda_j - \lambda_k}.$$

Then the respective orthogonal projectors are $E_j = p_j(A)$.

18 August, 2019 at 9:27 am

Terence Tao: This identity is related to the ones discussed here, but not quite the same; it involves polynomials of the matrix $A$, rather than eigenvalues of the minor $M_j$. In the notation of this post, Halmos’s identity (assuming simple eigenvalues) would read
$$v_i v_i^* = \prod_{k \neq i} \frac{A - \lambda_k(A)}{\lambda_i(A) - \lambda_k(A)}.$$

It can be proven easily from the spectral theorem. Extracting the $jj$ component of this identity, we see that to derive Theorem 1 from this identity (or vice versa) one would have to establish the identity
$$\Big(\prod_{k \neq i} \left(A - \lambda_k(A)\right)\Big)_{jj} = \prod_{k=1}^{n-1} \left(\lambda_i(A) - \lambda_k(M_j)\right), \qquad (*)$$

where $B_{jj}$ denotes the $jj$ component of a matrix $B$. This identity has to be true (by combining Halmos’s identity with Theorem 1, and using a limiting argument to remove the hypothesis that the eigenvalues are simple), but actually I don’t see any direct way to prove (*) that doesn’t basically proceed via this route. (The right-hand side is $\det(\lambda_i(A) - M_j)$, which suggests some sort of invocation of Cramer’s rule, but if one pursues this idea, one is basically repeating the Cramer’s rule proof given in the blog post, followed by the spectral theory proof of Halmos’s identity. The left-hand side is definitely reminiscent of the Cayley-Hamilton theorem, but I have thus far been unable to use that theorem to give a more tractable expression for the left-hand side of (*), other than by the route of combining Halmos’s identity with Theorem 1.)

It is interesting that there seem to be a number of rather different looking expressions for the same quantity, with the equality between any two of them being a slightly non-trivial theorem. If one writes (assuming simple eigenvalues, and taking $j = 1$ for sake of discussion)
$$Q_1 := |v_{i,1}|^2, \qquad Q_2 := \frac{\prod_{k=1}^{n-1} \left(\lambda_i(A) - \lambda_k(M_1)\right)}{\prod_{k \neq i} \left(\lambda_i(A) - \lambda_k(A)\right)},$$
$$Q_3 := \frac{1}{1 + X^* \left(M_1 - \lambda_i(A)\right)^{-2} X}, \qquad Q_4 := \Big(\prod_{k \neq i} \frac{A - \lambda_k(A)}{\lambda_i(A) - \lambda_k(A)}\Big)_{11},$$

then Theorem 1 asserts that $Q_1 = Q_2$, the identity from Erdos-Schlein-Yau (and my paper with Van Vu) asserts that $Q_1 = Q_3$, and Halmos’s identity asserts that $Q_1 = Q_4$. The Cramer rule proof of Theorem 1 basically proceeds by showing that $Q_1$ and $Q_2$ are both equal to the residue of the $11$ component of the resolvent $(\lambda - A)^{-1}$ at $\lambda = \lambda_i(A)$. In the first proof in the blog post an independent proof is given that $Q_2 = Q_3$. However I don’t know of a direct way to establish that $Q_2 = Q_4$ or $Q_3 = Q_4$ without going through $Q_1$ or $Q_2$.

[UPDATE: I can now adapt the adjugate proof of Theorem 1 in the arXiv article to show (*) directly. One can check that
$$\prod_{k \neq i} \left(A - \lambda_k(A)\right) = \mathrm{adj}\left(\lambda_i(A) - A\right)$$
by testing against each of the eigenvectors $v_1,\dots,v_n$ separately (or, if one wishes, first checking the identity when $A$ is a diagonal matrix, and using the spectral theorem to extend to general Hermitian $A$). [Amusingly, if one multiplies both sides of the above identity by $A - \lambda_i(A)$, one also recovers the Cayley-Hamilton theorem.] Extracting the $jj$ coefficient, one obtains (*). In other words, one can show (*) by equating both sides of that identity with
$$\mathrm{adj}\left(\lambda_i(A) - A\right)_{jj}.$$

It is also rather easy to show that $\mathrm{adj}(\lambda_i(A) - A) = \prod_{k \neq i} (\lambda_i(A) - \lambda_k(A))\, v_i v_i^*$, so this proof is arguably “going through” the rank-one structure $v_i v_i^*$ in some sense.]

18 August, 2019 at 11:42 pm

Vassilis Papanicolaou: The RHS of (1) makes sense for arbitrary matrices, not only for Hermitian ones. One then suspects that the formula extends to any square matrix. The guess is that, in the general case, the LHS is the product of the j-th component of the i-th eigenvector with the conjugate of the j-th component of the eigenvector of the Hermitian adjoint of the matrix corresponding to the conjugate eigenvalue.

19 August, 2019 at 9:24 am

Terence Tao: Yes, this seems to indeed be the case; for instance, the Cramer rule based proof looks like it will extend to the non-Hermitian case in the manner indicated.

19 August, 2019 at 2:40 am

Bo B: Maybe it is worthwhile to write the identity in coordinate-free notation: We may assume $\lambda_i(A) = 0$; otherwise replace $A$ by $A - \lambda_i(A)$. For any vector $w$, let $D(w)$ be the determinant of the quadratic form $A$ restricted to the orthogonal complement of $w$. Let $v$ be the (a) vector with $Av = 0$. Then the identity says that
$$D(e) = |\langle v, e \rangle|^2\, D(v)$$

if $v$ and $e$ have norm 1.

One advantage of this is that we may assume that $v$, instead of $e$, is an element of an ON basis, which helps (a little) in the proof.

It also suggests that the formula might hold in the case of infinite dimension, for a suitable notion of determinants. For instance, when $A = 1 + K$ and $K$ is of trace class, it should hold for Fredholm determinants, since they are limits of finite-dimensional determinants. It might also hold for determinants defined by zeta functions (?).

19 August, 2019 at 9:26 am

Terence Tao: Nice! This fits well with the geometric proof. The quantity $D(w)$ clearly equals $D(v)$ when $w = v$, and also clearly vanishes when $w$ is orthogonal to $v$ (because the null vector $v$ of the quadratic form now lies in the orthogonal complement of $w$), while on rewriting $D(w) = \langle (\wedge^{n-1} A)(\ast w), \ast w \rangle$ (where $\wedge^{n-1} A$ represents the action of $A$ on the exterior power $\bigwedge^{n-1} \mathbb{C}^n$, and $\ast w$ is the Hodge dual of $w$) we see that $D(w)$ is a quadratic form in $w$, at which point the identity is forced upon us.

In less coordinate-free notation, the identity can also be written as $\mathrm{adj}(A) = D(v)\, v v^*$, with basically the same proof; this is then more or less the adjugate matrix proof in the arXiv posting.
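The rank-one behaviour of the adjugate at an eigenvalue, which underlies the adjugate proof discussed in the last few comments, can also be checked numerically. Here is a numpy sketch (mine, not from the thread), using a real symmetric matrix for simplicity and computing the adjugate directly from cofactors:

```python
import numpy as np

def adjugate(X):
    """Adjugate as the transpose of the cofactor matrix (fine for small n)."""
    n = X.shape[0]
    C = np.zeros_like(X)
    for r in range(n):
        for c in range(n):
            minor = np.delete(np.delete(X, r, axis=0), c, axis=1)
            C[r, c] = (-1) ** (r + c) * np.linalg.det(minor)
    return C.T

rng = np.random.default_rng(4)
n = 4
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                 # real symmetric for simplicity

lam, V = np.linalg.eigh(A)
i = 2
v = V[:, i]

# adj(lambda_i - A) is rank one: prod_{k != i}(lambda_i - lambda_k) * v v^T
lhs = adjugate(lam[i] * np.eye(n) - A)
rhs = np.prod(np.delete(lam[i] - lam, i)) * np.outer(v, v)
assert np.allclose(lhs, rhs)
```

Extracting the $jj$ diagonal entry of both sides recovers Theorem 1, since the $jj$ entry of the adjugate is the determinant of the matrix with the $j$-th row and column deleted.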