I was asked recently (in relation to my recent work with Van Vu on the spectral theory of random matrices) to explain some standard heuristics regarding how the eigenvalues of an $n \times n$ matrix $A$ behave under small perturbations. These heuristics can be summarised as follows:
- For normal matrices (and in particular, unitary or self-adjoint matrices), eigenvalues are very stable under small perturbations. For more general matrices, eigenvalues can become unstable if there is pseudospectrum present.
- For self-adjoint (Hermitian) matrices, eigenvalues that come too close together tend to repel each other quickly under such perturbations. For more general matrices, eigenvalues can either be repelled or be attracted to each other, depending on the type of perturbation.
In this post I would like to briefly explain why these heuristics are plausible.
— Pseudospectrum —
As any student of linear algebra knows, the spectrum $\sigma(A)$ of a matrix $A$ consists of all the complex numbers $\lambda$ such that $A - \lambda I$ fails to be invertible, thus there exists a non-zero vector $v$ such that $(A - \lambda I)v$ is zero. This is a finite set, consisting of at most $n$ points. But there is also an important set containing the spectrum (and which, at times, can be much larger than that spectrum), which is the pseudospectrum of the matrix $A$. Unlike the spectrum, which is canonically defined, there is more than one definition of the pseudospectrum; also, this concept depends on an additional parameter, a small number $\varepsilon > 0$. We will define the pseudospectrum (or more precisely, the $\varepsilon$-pseudospectrum) $\sigma_\varepsilon(A)$ to be the set of all the complex numbers $\lambda$ such that $A - \lambda I$ has least singular value at most $\varepsilon$, or equivalently that there exists a unit vector $v$ such that $\|(A - \lambda I)v\| \leq \varepsilon$. (Another equivalent definition is that the $\varepsilon$-pseudospectrum consists of the spectrum, together with those complex numbers $\lambda$ for which the resolvent $(A - \lambda I)^{-1}$ has operator norm at least $\varepsilon^{-1}$.)
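These equivalent definitions are easy to check numerically. The sketch below (using numpy; the random matrix, the test point $z$, and the threshold $\varepsilon$ are all arbitrary illustrative choices) verifies that the least singular value of $A - zI$ is the reciprocal of the operator norm of the resolvent:

```python
import numpy as np

# Check that the two definitions of the epsilon-pseudospectrum agree:
# the least singular value of A - zI equals 1 / ||(A - zI)^{-1}||_op.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))   # a generic (non-normal) matrix, for illustration
z = 0.5 + 0.25j                   # an arbitrary point in the complex plane

M = A - z * np.eye(n)
least_sv = np.linalg.svd(M, compute_uv=False)[-1]      # smallest singular value
resolvent_norm = np.linalg.norm(np.linalg.inv(M), 2)   # operator norm of the resolvent

assert abs(least_sv - 1.0 / resolvent_norm) < 1e-8

# z lies in the epsilon-pseudospectrum precisely when least_sv <= epsilon
eps = 0.5
print("z in pseudospectrum:", least_sv <= eps)
```
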
The significance of the pseudospectrum is that it describes where the spectrum can go to under small perturbations. Indeed, if $\lambda$ lies in the pseudospectrum $\sigma_\varepsilon(A)$, so that there exists a unit vector $v$ whose image $(A - \lambda I)v$ has magnitude at most $\varepsilon$, then we see that
$$\big(A - (Av - \lambda v)v^*\big) v = \lambda v$$
and so $\lambda$ lies in the spectrum of the perturbation $A - (Av - \lambda v)v^*$ of $A$. Note that the operator norm of $(Av - \lambda v)v^*$ is at most $\varepsilon$.
Conversely, if $\lambda$ does not lie in the pseudospectrum $\sigma_\varepsilon(A)$, and $A + E$ is a small perturbation of $A$ (with $E$ having operator norm at most $\varepsilon$), then for any unit vector $v$, one has
$$\|(A + E - \lambda I)v\| \geq \|(A - \lambda I)v\| - \|Ev\| > \varepsilon - \varepsilon = 0$$
by the triangle inequality, and so $\lambda$ cannot lie in the spectrum of $A + E$.
Thus, if the pseudospectrum is tightly clustered around the spectrum, the spectrum is stable under small perturbations; but if the pseudospectrum is widely dispersed, then the spectrum becomes unstable.
No matter what $A$ is, the pseudospectrum $\sigma_\varepsilon(A)$ always contains the $\varepsilon$-neighbourhood of the spectrum $\sigma(A)$. Indeed, if $v$ is a unit eigenvector with eigenvalue $\lambda$, then $(A - \lambda I)v = 0$, which implies that $\|(A - zI)v\| = |\lambda - z| \leq \varepsilon$ for any $z$ in the $\varepsilon$-neighbourhood of $\lambda$, and the claim follows.
Conversely, when $A$ is normal, $\sigma_\varepsilon(A)$ consists only of the $\varepsilon$-neighbourhood of $\sigma(A)$. This is easiest to see by using the spectral theorem to diagonalise $A$ and then computing everything explicitly. In particular, we conclude that if we perturb a normal matrix by a (possibly non-normal) perturbation of operator norm at most $\varepsilon$, then the spectrum moves by at most $\varepsilon$.
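One can see this stability numerically. The sketch below (numpy; the diagonal matrix and the random direction of perturbation are illustrative choices) perturbs a normal matrix by a non-normal $E$ with $\|E\|_{op} = \varepsilon$ and checks that no eigenvalue moves further than $\varepsilon$:

```python
import numpy as np

# Perturb a normal (here diagonal) matrix by an arbitrary, generally
# non-normal, matrix E of operator norm eps, and check that every eigenvalue
# of the perturbation stays within eps of the original spectrum.
rng = np.random.default_rng(1)
n = 8
eigs = rng.standard_normal(n)
A = np.diag(eigs)                      # a normal matrix with known spectrum

eps = 0.1
E = rng.standard_normal((n, n))
E *= eps / np.linalg.norm(E, 2)        # rescale so that ||E||_op = eps

perturbed = np.linalg.eigvals(A + E)
drift = max(min(abs(mu - eigs)) for mu in perturbed)
print("max eigenvalue drift:", drift)  # guaranteed to be at most eps = 0.1
assert drift <= eps + 1e-9
```
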
In the non-normal case, things can be quite different. A good example is provided by the $n \times n$ shift matrix
$$U = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
This matrix is nilpotent: $U^n = 0$. As such, the only eigenvalue is zero. But observe that for any complex number $z$,
$$(U - zI)(1, z, z^2, \ldots, z^{n-1})^T = -z^n e_n,$$
where $e_n$ is the $n$-th standard basis vector.
From this and a little computation, we see that if $|z| \leq \varepsilon^{1/n}$, then $z$ will lie in the $\varepsilon$-pseudospectrum of $U$ (the vector $(1, z, \ldots, z^{n-1})^T$ has norm at least $1$, while its image under $U - zI$ has norm $|z|^n \leq \varepsilon$). For fixed $\varepsilon$, we thus see that $\sigma_\varepsilon(U)$ fills up the unit disk in the high-dimensional limit $n \to \infty$. (The pseudospectrum will not venture far beyond the unit disk, as the operator norm of $U$ is $1$.) And indeed, it is easy to perturb $U$ so that its spectrum moves far away from the origin. For instance, observe that the perturbation
$$U + \varepsilon e_n e_1^*$$
of $U$ (which adds an entry $\varepsilon$ to the bottom left corner) has characteristic polynomial $\lambda^n - \varepsilon$ (or note that $(U + \varepsilon e_n e_1^*)^n = \varepsilon I$, which is basically the same fact thanks to the Cayley-Hamilton theorem) and so has eigenvalues equal to the $n$-th roots of $\varepsilon$; for fixed $\varepsilon$ and $n$ tending to infinity, this spectrum becomes asymptotically uniformly distributed on the unit circle, rather than at the origin.
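This computation is easy to reproduce numerically. The following sketch (numpy; $n = 50$ and $\varepsilon = 10^{-6}$ are illustrative values) confirms that all eigenvalues of the perturbed shift matrix have modulus $\varepsilon^{1/n}$:

```python
import numpy as np

# The shift matrix U with an eps placed in the bottom-left corner: its
# eigenvalues are the n-th roots of eps, i.e. they sit on the circle of
# radius eps**(1/n), far from the origin even though sigma(U) = {0}.
n, eps = 50, 1e-6
U = np.diag(np.ones(n - 1), k=1)   # ones on the superdiagonal
P = U.copy()
P[-1, 0] = eps                     # the rank-one corner perturbation

radii = np.abs(np.linalg.eigvals(P))
print("expected radius:", eps ** (1.0 / n))   # about 0.759 for n=50, eps=1e-6
assert np.max(np.abs(radii - eps ** (1.0 / n))) < 1e-4
```
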
[Update, Nov 8: Much more on the theory of pseudospectra can be found at this web site; thanks to Nick Trefethen for the link.]
— Spectral dynamics —
The pseudospectrum tells us, roughly speaking, how far the spectrum of a matrix A can move with respect to a small perturbation, but does not tell us the direction in which the spectrum moves. For this, it is convenient to use the language of calculus: we suppose that A=A(t) varies smoothly with respect to some time parameter t, and would like to “differentiate” the spectrum with respect to t.
Since it is a little unclear what it means to differentiate a set, let us work instead with the eigenvalues $\lambda_1(t), \ldots, \lambda_n(t)$ of $A(t)$. Note that generically (e.g. for almost all $A$), the eigenvalues will be distinct. (Proof: the eigenvalues are distinct when the characteristic polynomial has no repeated roots, or equivalently when the resultant of the characteristic polynomial with its derivative is non-zero. This is clearly a Zariski-open condition; since the condition is obeyed at least once, it is thus Zariski-dense.) So it is not unreasonable to assume that for all $t$ in some open interval, the $\lambda_1(t), \ldots, \lambda_n(t)$ are distinct; an application of the implicit function theorem then allows one to make the $\lambda_k(t)$ smooth in $t$. Similarly, we can make the eigenvectors $u_1(t), \ldots, u_n(t)$ vary smoothly in $t$. (There is some freedom to multiply each eigenvector by a scalar, but this freedom will cancel itself out in the end, as we are ultimately interested only in the eigenvalues rather than the eigenvectors.)
The eigenvectors $u_1(t), \ldots, u_n(t)$ form a basis of $\mathbb{C}^n$. Let $v_1(t), \ldots, v_n(t)$ be the dual basis, thus $\langle u_j, v_k \rangle = \delta_{jk}$ for all $1 \leq j, k \leq n$, and so we have the reproducing formula
$$u = \sum_{k=1}^n \langle u, v_k \rangle u_k \qquad (1)$$
for all vectors $u$. Combining this with the eigenvector equations
$$A u_k = \lambda_k u_k \qquad (2)$$
we obtain the adjoint eigenvector equations
$$A^* v_k = \overline{\lambda_k} v_k. \qquad (3)$$
Next, we differentiate (2) using the product rule to obtain
$$\dot{A} u_k + A \dot{u}_k = \dot{\lambda}_k u_k + \lambda_k \dot{u}_k. \qquad (4)$$
Taking the inner product of this with the dual vector $v_k$, and using (3) to cancel some terms, we obtain the first variation formula for eigenvalues:
$$\dot{\lambda}_k = \langle \dot{A} u_k, v_k \rangle. \qquad (5)$$
Note that if $A$ is normal, then we can take the eigenbasis $u_1, \ldots, u_n$ to be orthonormal, in which case the dual basis $v_1, \ldots, v_n$ is identical to $u_1, \ldots, u_n$. In particular we see that $|\dot{\lambda}_k| \leq \|\dot{A}\|_{op}$; the infinitesimal change of each eigenvalue does not exceed the infinitesimal size of the perturbation. This is consistent with the stability of the spectrum for normal operators mentioned in the previous section.
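The first variation formula (5) can be sanity-checked by finite differences. In the sketch below (numpy; the random matrices and the step size are illustrative choices), the dual basis is read off from the rows of $V^{-1}$, where the columns of $V$ are the eigenvectors, so that the predicted derivatives are the diagonal entries of $V^{-1} \dot{A} V$:

```python
import numpy as np

# Finite-difference check of the first variation formula: for A(t) = A0 + t*B,
# the derivative of the k-th eigenvalue at t = 0 should be <A' u_k, v_k>,
# i.e. the k-th diagonal entry of V^{-1} B V.
rng = np.random.default_rng(2)
n = 5
A0 = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))        # the direction of perturbation, A'(t) = B

lam0, V = np.linalg.eig(A0)
predicted = np.diag(np.linalg.inv(V) @ B @ V)   # formula (5) for each k

h = 1e-6
lam_h = np.linalg.eigvals(A0 + h * B)
# match each unperturbed eigenvalue to the nearest perturbed one
lam_h_matched = np.array([lam_h[np.argmin(abs(lam_h - l))] for l in lam0])
observed = (lam_h_matched - lam0) / h

assert np.allclose(observed, predicted, atol=1e-4)
```
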
[Side remark: if $A$ evolves by the Lax pair equation $\dot{A} = AP - PA$ for some matrix $P = P(t)$, then $\langle \dot{A} u_k, v_k \rangle = \langle P u_k, A^* v_k \rangle - \lambda_k \langle P u_k, v_k \rangle = 0$, and so from (5) we see that the spectrum of $A$ is invariant in time. This fact underpins the inverse scattering method for solving integrable systems.]
Now we look at how the eigenvectors vary. Taking the inner product of (4) instead with a dual vector $v_j$ for $j \neq k$, we obtain
$$\langle \dot{A} u_k, v_j \rangle = (\lambda_k - \lambda_j) \langle \dot{u}_k, v_j \rangle;$$
applying (1) we conclude a first variation formula for the eigenvectors $u_k$, namely that
$$\dot{u}_k = c_k u_k + \sum_{j \neq k} \frac{\langle \dot{A} u_k, v_j \rangle}{\lambda_k - \lambda_j} u_j \qquad (6)$$
for some scalar $c_k$ (the presence of this term reflects the freedom to multiply $u_k$ by a scalar). Similar considerations for the adjoint give
$$\dot{v}_k = -\overline{c_k} v_k + \sum_{j \neq k} \frac{\langle v_k, \dot{A} u_j \rangle}{\overline{\lambda_k - \lambda_j}} v_j \qquad (6')$$
(here we use the derivative of the identity $\langle u_k, v_k \rangle = 1$ to get the correct multiple $-\overline{c_k}$ of $v_k$ on the right).
We can use (5), (6), (6′) to obtain a second variation formula for the eigenvalues. Indeed, by differentiating (5) we obtain
$$\ddot{\lambda}_k = \langle \ddot{A} u_k, v_k \rangle + \langle \dot{A} \dot{u}_k, v_k \rangle + \langle \dot{A} u_k, \dot{v}_k \rangle;$$
applying (6), (6′) we conclude the second variation formula
$$\ddot{\lambda}_k = \langle \ddot{A} u_k, v_k \rangle + 2 \sum_{j \neq k} \frac{\langle \dot{A} u_k, v_j \rangle \langle \dot{A} u_j, v_k \rangle}{\lambda_k - \lambda_j}.$$
Now suppose that $A$ is self-adjoint, so as before we can take the eigenbasis $u_1, \ldots, u_n$ to be orthonormal, with $v_k = u_k$. The above formula then becomes
$$\ddot{\lambda}_k = \langle \ddot{A} u_k, u_k \rangle + 2 \sum_{j \neq k} \frac{|\langle \dot{A} u_k, u_j \rangle|^2}{\lambda_k - \lambda_j}.$$
One can view the terms on the right-hand side here as various “forces” acting on the eigenvalue $\lambda_k$; the acceleration $\ddot{A}$ of the original matrix $A$ provides one such force, while all the other eigenvalues provide a repulsive force (and note, as with Newton’s third law, that the force that $\lambda_j$ exerts on $\lambda_k$ is equal and opposite to the force that $\lambda_k$ exerts on $\lambda_j$; this is consistent with the trace formula $\sum_{k=1}^n \ddot{\lambda}_k = \operatorname{tr} \ddot{A}$). The closer two eigenvalues are to each other, the stronger the repulsive force becomes.
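The repulsion formula above can also be checked numerically. The sketch below (numpy; the random Hermitian matrices, dimension, and step size are illustrative) takes $A(t) = A_0 + tB$ with $B$ Hermitian, so that $\ddot{A} = 0$, and compares a second central difference of the eigenvalues against the sum of repulsion terms:

```python
import numpy as np

# Check the second variation formula in the Hermitian case: with
# A(t) = A0 + t*B and B Hermitian (so A'' = 0), one should have
#   lambda_k'' = 2 * sum_{j != k} |<B u_k, u_j>|^2 / (lambda_k - lambda_j).
rng = np.random.default_rng(3)
n = 4
def herm(M):
    return (M + M.conj().T) / 2

A0 = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

lam, U = np.linalg.eigh(A0)            # orthonormal eigenbasis u_1, ..., u_n
G = U.conj().T @ B @ U                 # G[j, k] = <B u_k, u_j>
predicted = np.array([
    2 * sum(abs(G[j, k]) ** 2 / (lam[k] - lam[j]) for j in range(n) if j != k)
    for k in range(n)
])

h = 1e-4                               # second central difference in t
second_diff = (np.linalg.eigvalsh(A0 + h * B)
               - 2 * lam
               + np.linalg.eigvalsh(A0 - h * B)) / h**2

assert np.allclose(second_diff, predicted, atol=1e-3)
```
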
When the matrix $A$ is not self-adjoint, then the interaction between $\lambda_k$ and $\lambda_j$ can be either repulsive or attractive. Consider for instance the matrices
$$A(t) = \begin{pmatrix} 1 & t \\ -t & -1 \end{pmatrix}.$$
The eigenvalues of this matrix are $\pm \sqrt{1 - t^2}$ for $-1 \leq t \leq 1$, so we see that they are attracted to each other as $t$ evolves, until the matrix becomes degenerate at $t = \pm 1$. In contrast, the self-adjoint matrices
$$A(t) = \begin{pmatrix} 1 & t \\ t & -1 \end{pmatrix}$$
have eigenvalues $\pm \sqrt{1 + t^2}$, which repel each other as $t$ evolves.
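These two $2 \times 2$ families are easy to tabulate side by side (a small numpy sketch, with a few illustrative values of $t$):

```python
import numpy as np

# The non-self-adjoint family has eigenvalues +-sqrt(1 - t^2), which attract
# and collide at t = 1; the self-adjoint family has eigenvalues
# +-sqrt(1 + t^2), which repel.
for t in (0.0, 0.5, 0.9):
    non_sa = np.array([[1.0, t], [-t, -1.0]])
    sa = np.array([[1.0, t], [t, -1.0]])
    gap_attract = abs(np.diff(np.sort(np.linalg.eigvals(non_sa).real)))[0]
    gap_repel = abs(np.diff(np.linalg.eigvalsh(sa)))[0]
    print(f"t={t}: attracting gap {gap_attract:.3f}, repelling gap {gap_repel:.3f}")
```
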
The repulsion effect of eigenvalues is also consistent with the smallness of the set of matrices with repeated eigenvalues. Consider for instance the space of $n \times n$ Hermitian matrices, which has real dimension $n^2$. The subset of Hermitian matrices with distinct eigenvalues can be described by a collection of $n$ orthogonal (complex) one-dimensional eigenspaces (which can be computed to have $n^2 - n$ degrees of freedom) plus $n$ real eigenvalues (for an additional $n$ degrees of freedom), thus the set of matrices with distinct eigenvalues has full dimension $n^2$.
Now consider the space of matrices with one repeated eigenvalue. This can be described by $n-2$ orthogonal complex one-dimensional eigenspaces, plus a complex two-dimensional orthogonal complement (which together have $n^2 - n - 2$ degrees of freedom), plus $n-1$ real eigenvalues, thus the set of matrices with repeated eigenvalues only has dimension $n^2 - 3$. Thus it is in fact very rare for eigenvalues to actually collide, which helps explain why there must be a repulsion effect in the first place.
An example can help illustrate this phenomenon. Consider the one-parameter family of Hermitian matrices
$$A(t) = \begin{pmatrix} t & 0 \\ 0 & -t \end{pmatrix}.$$
The eigenvalues of this matrix at time $t$ are of course $t$ and $-t$, which cross over each other when $t$ changes sign. Now consider instead the Hermitian perturbation
$$\begin{pmatrix} t & \varepsilon \\ \varepsilon & -t \end{pmatrix}$$
for some small $\varepsilon > 0$. The eigenvalues are now $+\sqrt{t^2 + \varepsilon^2}$ and $-\sqrt{t^2 + \varepsilon^2}$; they come close to each other as $t$ approaches $0$, but then “bounce” off of each other due to the repulsion effect.
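Numerically, this avoided crossing is easy to observe (a small numpy sketch; $\varepsilon = 0.1$ is an illustrative value): the eigenvalue gap $2\sqrt{t^2 + \varepsilon^2}$ never drops below $2\varepsilon$.

```python
import numpy as np

# Sweep t through the would-be crossing at t = 0: the gap between the two
# eigenvalues of [[t, eps], [eps, -t]] is 2*sqrt(t^2 + eps^2) >= 2*eps.
eps = 0.1
ts = np.linspace(-1, 1, 201)
gaps = [np.diff(np.linalg.eigvalsh(np.array([[t, eps], [eps, -t]])))[0]
        for t in ts]
print("minimum gap:", min(gaps))    # equals 2*eps = 0.2, attained at t = 0
assert abs(min(gaps) - 2 * eps) < 1e-9
```
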