Peter Denton, Stephen Parke, Xining Zhang, and I have just uploaded to the arXiv a completely rewritten version of our previous paper, now titled “Eigenvectors from Eigenvalues: a survey of a basic identity in linear algebra“. This paper is now a survey of the various literature surrounding the following basic identity in linear algebra, which we propose to call the eigenvector-eigenvalue identity:
Theorem 1 (Eigenvector-eigenvalue identity) Let $A$ be an $n \times n$ Hermitian matrix, with eigenvalues $\lambda_1(A), \dots, \lambda_n(A)$. Let $v_i$ be a unit eigenvector corresponding to the eigenvalue $\lambda_i(A)$, and let $v_{i,j}$ be the $j^{\mathrm{th}}$ component of $v_i$. Then
$$|v_{i,j}|^2 \prod_{k=1; k \neq i}^{n} \left(\lambda_i(A) - \lambda_k(A)\right) = \prod_{k=1}^{n-1} \left(\lambda_i(A) - \lambda_k(M_j)\right),$$
where $M_j$ is the $n-1 \times n-1$ Hermitian matrix formed by deleting the $j^{\mathrm{th}}$ row and column from $A$.
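As a quick numerical sanity check (my own illustration, not part of the paper), one can test the identity on a random Hermitian matrix; the following sketch uses numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (G + G.conj().T) / 2                      # random Hermitian matrix

evals, evecs = np.linalg.eigh(A)              # eigenvalues (ascending), columns are unit eigenvectors
i, j = 2, 3                                   # test the identity for lambda_i(A) and the j-th coordinate

Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)   # delete the j-th row and column
mu = np.linalg.eigvalsh(Mj)

lhs = abs(evecs[j, i]) ** 2 * np.prod([evals[i] - evals[k] for k in range(n) if k != i])
rhs = np.prod(evals[i] - mu)
print(np.isclose(lhs, rhs))                   # True, up to rounding error
```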
When we posted the first version of this paper, we were unaware of previous appearances of this identity in the literature; a related identity had been used by Erdos-Schlein-Yau and by myself and Van Vu for applications to random matrix theory, but to our knowledge this specific identity appeared to be new. Even two months after our preprint first appeared on the arXiv in August, we had only learned of one other place in the literature where the identity showed up (by Forrester and Zhang, who also cite an earlier paper of Baryshnikov).
The situation changed rather dramatically with the publication of a popular science article in Quanta on this identity in November, which gave this result significantly more exposure. Within a few weeks we became informed (through private communication, online discussion, and exploration of the citation tree around the references we were alerted to) of over three dozen places where the identity, or some other closely related identity, had previously appeared in the literature, in such areas as numerical linear algebra, various aspects of graph theory (graph reconstruction, chemical graph theory, and walks on graphs), inverse eigenvalue problems, random matrix theory, and neutrino physics. As a consequence, we have decided to completely rewrite our article in order to collate this crowdsourced information, and survey the history of this identity, all the known proofs (we collect seven distinct ways to prove the identity (or generalisations thereof)), and all the applications of it that we are currently aware of. The citation graph of the literature that this ad hoc crowdsourcing effort produced is only very weakly connected, which we found surprising:
The earliest explicit appearance of the eigenvector-eigenvalue identity we are now aware of is in a 1966 paper of Thompson, although this paper is only cited (directly or indirectly) by a fraction of the known literature, and also there is a precursor identity of Löwner from 1934 that can be shown to imply the identity as a limiting case. At the end of the paper we speculate on some possible reasons why this identity only achieved a modest amount of recognition and dissemination prior to the November 2019 Quanta article.
Peter Denton, Stephen Parke, Xining Zhang, and I have just uploaded to the arXiv the short unpublished note “Eigenvectors from eigenvalues“. This note gives two proofs of a general eigenvector identity observed recently by Denton, Parke and Zhang in the course of some quantum mechanical calculations. The identity is as follows:
Theorem 1 Let $A$ be an $n \times n$ Hermitian matrix, with eigenvalues $\lambda_1(A), \dots, \lambda_n(A)$. Let $v_i$ be a unit eigenvector corresponding to the eigenvalue $\lambda_i(A)$, and let $v_{i,j}$ be the $j^{\mathrm{th}}$ component of $v_i$. Then
$$|v_{i,j}|^2 \prod_{k=1; k \neq i}^{n} \left(\lambda_i(A) - \lambda_k(A)\right) = \prod_{k=1}^{n-1} \left(\lambda_i(A) - \lambda_k(M_j)\right),$$
where $M_j$ is the $n-1 \times n-1$ Hermitian matrix formed by deleting the $j^{\mathrm{th}}$ row and column from $A$.
For instance, if we have
$$A = \begin{pmatrix} a & X^* \\ X & M \end{pmatrix}$$
for some real number $a$, $n-1$-dimensional vector $X$, and $n-1 \times n-1$ Hermitian matrix $M$, then we have
$$|v_{i,1}|^2 = \frac{\prod_{k=1}^{n-1} \left(\lambda_i(A) - \lambda_k(M)\right)}{\prod_{k=1; k \neq i}^{n} \left(\lambda_i(A) - \lambda_k(A)\right)},$$
assuming that the denominator is non-zero.
Once one is aware of the identity, it is not so difficult to prove it; we give two proofs, each about half a page long, one of which is based on a variant of the Cauchy-Binet formula, and the other based on properties of the adjugate matrix. But perhaps it is surprising that such a formula exists at all; one does not normally expect to learn much information about eigenvectors purely from knowledge of eigenvalues. In the random matrix theory literature, for instance in this paper of Erdos, Schlein, and Yau, or this later paper of Van Vu and myself, a related identity has been used, namely
but it is not immediately obvious that one can derive the former identity from the latter. (I do so below the fold; we ended up not putting this proof in the note as it was longer than the two other proofs we found. I also give two other proofs below the fold, one from a more geometric perspective and one proceeding via Cramer’s rule.) It was certainly something of a surprise to me that there is no explicit appearance of the components of
in the formula (1) (though they do indirectly appear through their effect on the eigenvalues
; for instance from taking traces one sees that
).
One can get some feeling of the identity (1) by considering some special cases. Suppose for instance that is a diagonal matrix with all distinct entries. The upper left entry
of
is one of the eigenvalues of
. If it is equal to
, then the eigenvalues of
are the other
eigenvalues of
, and now the left and right-hand sides of (1) are equal to
. At the other extreme, if
is equal to a different eigenvalue of
, then
now appears as an eigenvalue of
, and both sides of (1) now vanish. More generally, if we order the eigenvalues
and
, then the Cauchy interlacing inequalities tell us that
for , and
for , so that the right-hand side of (1) lies between
and
, which is of course consistent with (1) as
is a unit vector. Thus the identity relates the coefficient sizes of an eigenvector with the extent to which the Cauchy interlacing inequalities are sharp.
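For readers who want to see the interlacing phenomenon concretely, here is a small numerical check (my own sketch, not from the post) that the eigenvalues of a minor interlace those of the full Hermitian matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (G + G.conj().T) / 2

lam = np.linalg.eigvalsh(A)                              # eigenvalues of A, ascending
M1 = np.delete(np.delete(A, 0, axis=0), 0, axis=1)       # minor: delete first row and column
mu = np.linalg.eigvalsh(M1)                              # eigenvalues of the minor, ascending

# Cauchy interlacing: lambda_k(A) <= lambda_k(M1) <= lambda_{k+1}(A) for every k
print(all(lam[k] <= mu[k] <= lam[k + 1] for k in range(n - 1)))   # True
```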
(This post is mostly intended for my own reference, as I found myself repeatedly looking up several conversions between polynomial bases on various occasions.)
Let denote the vector space of polynomials
of one variable
with real coefficients of degree at most
. This is a vector space of dimension
, and the sequence of these spaces form a filtration:
A standard basis for these vector spaces is given by the monomials $1, x, x^2, \dots$: every polynomial
in
can be expressed uniquely as a linear combination of the first
monomials
. More generally, if one has any sequence
of polynomials, with each
of degree exactly
, then an easy induction shows that
forms a basis for
.
In particular, if we have two such sequences and
of polynomials, with each
of degree
and each
of degree
, then
must be expressible uniquely as a linear combination of the polynomials
, thus we have an identity of the form
for some change of basis coefficients . These coefficients describe how to convert a polynomial expressed in the
basis into a polynomial expressed in the
basis.
Many standard combinatorial quantities involving two natural numbers
can be interpreted as such change of basis coefficients. The most familiar examples are the binomial coefficients $\binom{n}{k}$, which measure the conversion from the shifted monomial basis
to the monomial basis
, thanks to (a special case of) the binomial formula:
thus for instance
More generally, for any shift , the conversion from
to
is measured by the coefficients
, thanks to the general case of the binomial formula.
But there are other bases of interest too. For instance if one uses the falling factorial basis
then the conversion from falling factorials to monomials is given by the Stirling numbers of the first kind :
thus for instance
and the conversion back is given by the Stirling numbers of the second kind :
thus for instance
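As a concrete cross-check of these two conversions (my own addition; it uses sympy's `ff` falling factorial and its `stirling` helper), one can expand a falling factorial into monomials and a monomial back into falling factorials.

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
n = 4

# falling factorial x(x-1)(x-2)(x-3) in the monomial basis: signed Stirling numbers of the first kind
print(sp.expand(sp.ff(x, n)))                                         # x**4 - 6*x**3 + 11*x**2 - 6*x
print([stirling(n, k, kind=1, signed=True) for k in range(n + 1)])    # [0, -6, 11, -6, 1]

# monomial x**4 in the falling factorial basis: Stirling numbers of the second kind
recombined = sum(stirling(n, k, kind=2) * sp.ff(x, k) for k in range(n + 1))
print(sp.expand(recombined - x**n))                                   # 0
```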
If one uses the binomial functions as a basis instead of the falling factorials, one of course can rewrite these conversions as
and
thus for instance
and
As a slight variant, if one instead uses rising factorials
then the conversion to monomials yields the unsigned Stirling numbers of the first kind:
thus for instance
One final basis comes from the polylogarithm functions
For instance one has
and more generally one has
for all natural numbers and some polynomial
of degree
(the Eulerian polynomials), which when converted to the monomial basis yields the (shifted) Eulerian numbers
For instance
These particular coefficients also have useful combinatorial interpretations. For instance:
- The binomial coefficient
is of course the number of
-element subsets of
.
- The unsigned Stirling numbers
of the first kind are the number of permutations of
with exactly
cycles. The signed Stirling numbers
are then given by the formula
.
- The Stirling numbers
of the second kind are the number of ways to partition
into
non-empty subsets.
- The Eulerian numbers
are the number of permutations of
with exactly
ascents.
These coefficients behave similarly to each other in several ways. For instance, the binomial coefficients obey the well known Pascal identity
(with the convention that vanishes outside of the range
). In a similar spirit, the unsigned Stirling numbers
of the first kind obey the identity
and the signed counterparts obey the identity
The Stirling numbers of the second kind obey the identity
and the Eulerian numbers obey the identity
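These recurrences all have the same two-term shape, which is easy to see in code. The following sketch is my own; the recurrence forms written in the comments are the standard ones, supplied here since the displayed identities above did not survive formatting.

```python
def triangle(rows, step):
    """Build rows 0..rows-1 of a triangle with T[n][k] = step(n, k, lookup into row n-1)."""
    T = [[1]]                                          # row n = 0
    for n in range(1, rows):
        prev = T[-1]
        p = lambda j, prev=prev: prev[j] if 0 <= j < len(prev) else 0
        T.append([step(n, k, p) for k in range(n + 1)])
    return T

# Pascal:              C(n,k) = C(n-1,k-1) + C(n-1,k)
# Stirling, 1st kind:  c(n,k) = c(n-1,k-1) + (n-1)*c(n-1,k)      (unsigned)
# Stirling, 2nd kind:  S(n,k) = S(n-1,k-1) + k*S(n-1,k)
# Eulerian:            A(n,k) = (n-k)*A(n-1,k-1) + (k+1)*A(n-1,k)
binom    = triangle(6, lambda n, k, p: p(k - 1) + p(k))
stir1    = triangle(6, lambda n, k, p: p(k - 1) + (n - 1) * p(k))
stir2    = triangle(6, lambda n, k, p: p(k - 1) + k * p(k))
eulerian = triangle(6, lambda n, k, p: (n - k) * p(k - 1) + (k + 1) * p(k))

print(binom[5])      # [1, 5, 10, 10, 5, 1]
print(stir1[5])      # [0, 24, 50, 35, 10, 1]
print(stir2[5])      # [0, 1, 15, 25, 10, 1]
print(eulerian[4])   # [1, 11, 11, 1, 0]
```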
While talking mathematics with a postdoc here at UCLA (March Boedihardjo) we came across the following matrix problem which we managed to solve, but the proof was cute and the process of discovering it was fun, so I thought I would present the problem here as a puzzle without revealing the solution for now.
The problem involves word maps on a matrix group, which for sake of discussion we will take to be the special orthogonal group $SO(3)$ of real $3 \times 3$ matrices (one of the smallest matrix groups that contains a copy of the free group $F_2$, which incidentally is the key observation powering the Banach-Tarski paradox). Given any abstract word $w$ of two generators $x, y$ and their inverses (i.e., an element of the free group $F_2$), one can define the word map $w: SO(3) \times SO(3) \to SO(3)$ simply by substituting a pair of matrices in $SO(3)$
into these generators. For instance, if one has the word
, then the corresponding word map
is given by
for . Because
contains a copy of the free group, we see the word map is non-trivial (not equal to the identity) if and only if the word itself is nontrivial.
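To make the word-map concept concrete, here is a small numerical illustration of my own (the specific word used in the example above is not reproduced here, so I use the commutator $w = xyx^{-1}y^{-1}$ for definiteness), evaluating the map at explicit rotations and measuring the operator-norm distance to the identity.

```python
import numpy as np

def rot_x(t):                         # rotation about the x-axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):                         # rotation about the z-axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def w(A, B):                          # word map for the commutator word x y x^{-1} y^{-1}
    return A @ B @ A.T @ B.T          # for rotation matrices, the inverse is the transpose

A, B = rot_x(0.3), rot_z(0.5)
print(np.linalg.norm(w(A, B) - np.eye(3), ord=2))     # nonzero: this word map is not the identity map

# shrinking the inputs towards the identity shrinks the output, but the puzzle asks for
# non-trivial words whose word maps are uniformly small over all of SO(3) x SO(3)
print(np.linalg.norm(w(rot_x(0.01), rot_z(0.01)) - np.eye(3), ord=2))
```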
Anyway, here is the problem:
Problem. Does there exist a sequence
of non-trivial word maps
that converge uniformly to the identity map?
To put it another way, given any , does there exist a non-trivial word
such that
for all
, where
denotes (say) the operator norm, and
denotes the identity matrix in
?
As I said, I don’t want to spoil the fun of working out this problem, so I will leave it as a challenge. Readers are welcome to share their thoughts, partial solutions, or full solutions in the comments below.
Apoorva Khare and I have updated our paper “On the sign patterns of entrywise positivity preservers in fixed dimension“, announced at this post from last month. The quantitative results are now sharpened using a new monotonicity property of ratios of Schur polynomials, namely that such ratios are monotone non-decreasing in each coordinate of
if
is in the positive orthant, and the partition
is larger than that of
. (This monotonicity was also independently observed by Rachid Ait-Haddou, using the theory of blossoms.) In the revised version of the paper we give two proofs of this monotonicity. The first relies on a deep positivity result of Lam, Postnikov, and Pylyavskyy, which uses a representation-theoretic positivity result of Haiman to show that the polynomial combination
of skew-Schur polynomials is Schur-positive for any partitions (using the convention that the skew-Schur polynomial
vanishes if
is not contained in
, and where
and
denotes the pointwise min and max of
and
respectively). It is fairly easy to derive the monotonicity of
from this, by using the expansion
of Schur polynomials into skew-Schur polynomials (as was done in this previous post).
The second proof of monotonicity avoids representation theory by a more elementary argument establishing the weaker claim that the above expression (1) is non-negative on the positive orthant. In fact we prove a more general determinantal log-supermodularity claim which may be of independent interest:
Theorem 1 Let
be any
totally positive matrix (thus, every minor has a non-negative determinant). Then for any
-tuples
of increasing elements of
, one has
where
denotes the
minor formed from the rows in
and columns in
.
For instance, if is the matrix
for some real numbers , one has
(corresponding to the case ,
), or
(corresponding to the case ,
,
,
,
). It turns out that this claim can be proven relatively easily by an induction argument, relying on the Dodgson and Karlin identities from this previous post; the difficulties are largely notational in nature. Combining this result with the Jacobi-Trudi identity for skew-Schur polynomials (discussed in this previous post) gives the non-negativity of (1); it can also be used to directly establish the monotonicity of ratios
by applying the theorem to a generalised Vandermonde matrix.
(Log-supermodularity also arises as the natural hypothesis for the FKG inequality, though I do not know of any interesting application of the FKG inequality in this current setting.)
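As a sanity check of Theorem 1, here is a small numerical experiment (my own sketch; it assumes the reading in which the min and max of index tuples are taken entrywise) on a totally positive Cauchy-type matrix.

```python
import numpy as np
from itertools import combinations

# a small totally positive matrix: the Cauchy kernel 1/(x_i + y_j) with increasing positive nodes
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
T = 1.0 / (x[:, None] + y[None, :])

minor = lambda I, J: np.linalg.det(T[np.ix_(I, J)])

k = 2
tuples = list(combinations(range(5), k))          # increasing k-tuples of indices
worst = np.inf
for I1 in tuples:
    for I2 in tuples:
        for J1 in tuples:
            for J2 in tuples:
                lo_I, hi_I = np.minimum(I1, I2), np.maximum(I1, I2)   # entrywise min and max
                lo_J, hi_J = np.minimum(J1, J2), np.maximum(J1, J2)
                lhs = minor(lo_I, lo_J) * minor(hi_I, hi_J)
                rhs = minor(I1, J1) * minor(I2, J2)
                worst = min(worst, lhs - rhs)
print(worst >= -1e-12)    # True: the log-supermodularity inequality holds up to rounding
```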
Suppose we have an matrix
that is expressed in block-matrix form as
where is an
matrix,
is an
matrix,
is an
matrix, and
is a
matrix for some
. If
is invertible, we can use the technique of Schur complementation to express the inverse of
(if it exists) in terms of the inverse of
, and the other components
of course. Indeed, to solve the equation
where are
column vectors and
are
column vectors, we can expand this out as a system
Using the invertibility of , we can write the first equation as
and substituting this into the second equation yields
and thus (assuming that is invertible)
and then inserting this back into (1) gives
Comparing this with
we have managed to express the inverse of as
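Written out in full, this is the familiar block-inverse formula; the following numpy sketch (my own, with the Schur complement denoted S) checks it against a direct inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 4                                   # full size and top-left block size (illustrative values)
M = rng.standard_normal((n, n))
A, B = M[:k, :k], M[:k, k:]
C, D = M[k:, :k], M[k:, k:]

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                          # Schur complement of the block A
Sinv = np.linalg.inv(S)

block_inverse = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv,                   Sinv],
])
print(np.allclose(block_inverse, np.linalg.inv(M)))    # True
```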
One can consider the inverse problem: given the inverse of
, does one have a nice formula for the inverse
of the minor
? Trying to recover this directly from (2) looks somewhat messy. However, one can proceed as follows. Let
denote the
matrix
(with the
identity matrix), and let
be its transpose:
Then for any scalar (which we identify with
times the identity matrix), one has
and hence by (2)
noting that the inverses here will exist for large enough. Taking limits as
, we conclude that
On the other hand, by the Woodbury matrix identity (discussed in this previous blog post), we have
and hence on taking limits and comparing with the preceding identity, one has
This achieves the aim of expressing the inverse of the minor in terms of the inverse of the full matrix. Taking traces and rearranging, we conclude in particular that
In the case, this can be simplified to
where is the
basis column vector.
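One concrete way to read this in the top-left-minor case is the following numpy check (my own sketch): the inverse of the minor is recovered from the full inverse by subtracting a rank-one correction built from its last row and column.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
B = np.linalg.inv(A)

# (inverse of the top-left (n-1) x (n-1) minor) = B_minor - (last column of B)(last row of B)/B[n-1,n-1]
minor_inv = B[:-1, :-1] - np.outer(B[:-1, -1], B[-1, :-1]) / B[-1, -1]
print(np.allclose(minor_inv, np.linalg.inv(A[:-1, :-1])))    # True
```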
We can apply this identity to understand how the spectrum of an random matrix
relates to that of its top left
minor
. Subtracting any complex multiple
of the identity from
(and hence from
), we can relate the Stieltjes transform
of
with the Stieltjes transform
of
:
At this point we begin to proceed informally. Assume for sake of argument that the random matrix is Hermitian, with distribution that is invariant under conjugation by the unitary group
; for instance,
could be drawn from the Gaussian Unitary Ensemble (GUE), or alternatively
could be of the form
for some real diagonal matrix
and
a unitary matrix drawn randomly from
using Haar measure. To fix normalisations we will assume that the eigenvalues of
are typically of size
. Then
is also Hermitian and
-invariant. Furthermore, the law of
will be the same as the law of
, where
is now drawn uniformly from the unit sphere (independently of
). Diagonalising
into eigenvalues
and eigenvectors
, we have
One can think of as a random (complex) Gaussian vector, divided by the magnitude of that vector (which, by the Chernoff inequality, will concentrate to
). Thus the coefficients
with respect to the orthonormal basis
can be thought of as independent (complex) Gaussian vectors, divided by that magnitude. Using this and the Chernoff inequality again, we see (for
distance
away from the real axis at least) that one has the concentration of measure
and thus
(that is to say, the diagonal entries of are roughly constant). Similarly we have
Inserting this into (5) and discarding terms of size , we thus conclude the approximate relationship
This can be viewed as a difference equation for the Stieltjes transform of top left minors of . Iterating this equation, and formally replacing the difference equation by a differential equation in the large
limit, we see that when
is large and
for some
, one expects the top left
minor
of
to have Stieltjes transform
where solves the Burgers-type equation
with initial data .
Example 1 If
is a constant multiple
of the identity, then
. One checks that
is a steady state solution to (7), which is unsurprising given that all minors of
are also
times the identity.
Example 2 If
is GUE normalised so that each entry has variance
, then by the semi-circular law (see previous notes) one has
(using an appropriate branch of the square root). One can then verify the self-similar solution
to (7), which is consistent with the fact that a top
minor of
also has the law of GUE, with each entry having variance
when
.
One can justify the approximation (6) given a sufficiently good well-posedness theory for the equation (7). We will not do so here, but will note that (as with the classical inviscid Burgers equation) the equation can be solved exactly (formally, at least) by the method of characteristics. For any initial position , we consider the characteristic flow
formed by solving the ODE
with initial data , ignoring for this discussion the problems of existence and uniqueness. Then from the chain rule, the equation (7) implies that
and thus . Inserting this back into (8) we see that
and thus (7) may be solved implicitly via the equation
for all and
.
Remark 3 In practice, the equation (9) may stop working when
crosses the real axis, as (7) does not necessarily hold in this region. It is a cute exercise (ultimately coming from the Cauchy-Schwarz inequality) to show that this crossing always happens, for instance if
has positive imaginary part then
necessarily has negative or zero imaginary part.
Example 4 Suppose we have
as in Example 1. Then (9) becomes
for any
, which after making the change of variables
becomes
as in Example 1.
Example 5 Suppose we have
as in Example 2. Then (9) becomes
If we write
one can calculate that
and hence
One can recover the spectral measure from the Stieltjes transform
as the weak limit of
as
; we write this informally as
In this informal notation, we have for instance that
which can be interpreted as the fact that the Cauchy distributions converge weakly to the Dirac mass at
as
. Similarly, the spectral measure associated to (10) is the semicircular measure
.
If we let be the spectral measure associated to
, then the curve
from
to the space of measures is the high-dimensional limit
of a Gelfand-Tsetlin pattern (discussed in this previous post), if the pattern is randomly generated amongst all matrices
with spectrum asymptotic to
as
. For instance, if
, then the curve is
, corresponding to a pattern that is entirely filled with
‘s. If instead
is a semicircular distribution, then the pattern is
thus at height from the top, the pattern is semicircular on the interval
. The interlacing property of Gelfand-Tsetlin patterns translates to the claim that
(resp.
) is non-decreasing (resp. non-increasing) in
for any fixed
. In principle one should be able to establish these monotonicity claims directly from the PDE (7) or from the implicit solution (9), but it was not clear to me how to do so.
An interesting example of such a limiting Gelfand-Tsetlin pattern occurs when , which corresponds to
being
, where
is an orthogonal projection to a random
-dimensional subspace of
. Here we have
and so (9) in this case becomes
A tedious calculation then gives the solution
For , there are simple poles at
, and the associated measure is
This reflects the interlacing property, which forces of the
eigenvalues of the
minor to be equal to
(resp.
). For
, the poles disappear and one just has
For , one has an inverse semicircle distribution
There is presumably a direct geometric explanation of this fact (basically describing the singular values of the product of two random orthogonal projections to half-dimensional subspaces of ), but I do not know of one off-hand.
The evolution of can also be understood using the
-transform and
-transform from free probability. Formally, let
be the inverse of
, thus
for all , and then define the
-transform
The equation (9) may be rewritten as
and hence
See these previous notes for a discussion of free probability topics such as the -transform.
Example 6 If
then the
transform is
.
Example 7 If
is given by (10), then the
transform is
Example 8 If
is given by (11), then the
transform is
This simple relationship (12) is essentially due to Nica and Speicher (thanks to Dima Shlyakhtenko for this reference). It has the remarkable consequence that when is the reciprocal of a natural number
, then
is the free arithmetic mean of
copies of
, that is to say
is the free convolution
of
copies of
, pushed forward by the map
. In terms of random matrices, this is asserting that the top
minor of a random matrix
has spectral measure approximately equal to that of an arithmetic mean
of
independent copies of
, so that the process of taking top left minors is in some sense a continuous analogue of the process of taking freely independent arithmetic means. There ought to be a geometric proof of this assertion, but I do not know of one. In the limit
(or
), the
-transform becomes linear and the spectral measure becomes semicircular, which is of course consistent with the free central limit theorem.
In a similar vein, if one defines the function
and inverts it to obtain a function with
for all , then the
-transform
is defined by
Writing
for any ,
, we have
and so (9) becomes
which simplifies to
replacing by
we obtain
and thus
and hence
One can compute to be the
-transform of the measure
; from the link between
-transforms and free products (see e.g. these notes of Guionnet), we conclude that
is the free product of
and
. This is consistent with the random matrix theory interpretation, since
is also the spectral measure of
, where
is the orthogonal projection to the span of the first
basis elements, so in particular
has spectral measure
. If
is unitarily invariant then (by a fundamental result of Voiculescu) it is asymptotically freely independent of
, so the spectral measure of
is asymptotically the free product of that of
and of
.
Fix a non-negative integer . Define a (weak) integer partition of length
to be a tuple
of non-increasing non-negative integers
. (Here our partitions are “weak” in the sense that we allow some parts of the partition to be zero. Henceforth we will omit the modifier “weak”, as we will not need to consider the more usual notion of “strong” partitions.) To each such partition
, one can associate a Young diagram consisting of
left-justified rows of boxes, with the
row containing
boxes. A semi-standard Young tableau (or Young tableau for short)
of shape
is a filling of these boxes by integers in
that is weakly increasing along rows (moving rightwards) and strictly increasing along columns (moving downwards). The collection of such tableaux will be denoted
. The weight
of a tableau
is the tuple
, where
is the number of occurrences of the integer
in the tableau. For instance, if
and
, an example of a Young tableau of shape
would be
The weight here would be .
To each partition one can associate the Schur polynomial
on
variables
, which we will define as
using the multinomial convention
Thus for instance the Young tableau given above would contribute a term
to the Schur polynomial
. In the case of partitions of the form
, the Schur polynomial
is just the complete homogeneous symmetric polynomial
of degree
on
variables:
thus for instance
Schur polynomials are ubiquitous in the algebraic combinatorics of “type $A$ objects” such as the symmetric group
, the general linear group
, or the unitary group
. For instance, one can view
as the character of an irreducible polynomial representation of
associated with the partition
. However, we will not focus on these interpretations of Schur polynomials in this post.
This definition of Schur polynomials allows for a way to describe the polynomials recursively. If and
is a Young tableau of shape
, taking values in
, one can form a sub-tableau
of some shape
by removing all the appearances of
(which, among other things, necessarily deletes the
row). For instance, with
as in the previous example, the sub-tableau
would be
and the reduced partition in this case is
. As Young tableaux are required to be strictly increasing down columns, we can see that the reduced partition
must intersperse the original partition
in the sense that
for all ; we denote this interspersion relation as
(though we caution that this is not intended to be a partial ordering). In the converse direction, if
and
is a Young tableau with shape
with entries in
, one can form a Young tableau
with shape
and entries in
by appending to
an entry of
in all the boxes that appear in the
shape but not the
shape. This one-to-one correspondence leads to the recursion
where ,
, and the size
of a partition
is defined as
.
One can use this recursion (2) to prove some further standard identities for Schur polynomials, such as the determinant identity
for , where
denotes the Vandermonde determinant
with the convention that if
is negative. Thus for instance
We review the (standard) derivation of these identities via (2) below the fold. Among other things, these identities show that the Schur polynomials are symmetric, which is not immediately obvious from their definition.
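As a small symbolic cross-check of the tableau definition against the determinant (bialternant) identity, here is a sympy sketch of my own for the shape $(2,1)$ in three variables.

```python
import itertools
import sympy as sp

def schur_tableaux(lam, n):
    """Schur polynomial s_lambda(x_1..x_n) by brute-force enumeration of semistandard tableaux."""
    xs = sp.symbols(f'x1:{n + 1}')
    boxes = [(r, c) for r, row_len in enumerate(lam) for c in range(row_len)]
    total = sp.Integer(0)
    for filling in itertools.product(range(1, n + 1), repeat=len(boxes)):
        T = dict(zip(boxes, filling))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in boxes if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in boxes if (r + 1, c) in T)
        if rows_ok and cols_ok:
            total += sp.Mul(*[xs[v - 1] for v in filling])
    return sp.expand(total)

def schur_bialternant(lam, n):
    """Schur polynomial as the ratio det(x_j^{lam_i + n - i}) / det(x_j^{n - i})."""
    xs = sp.symbols(f'x1:{n + 1}')
    lam = list(lam) + [0] * (n - len(lam))
    num = sp.Matrix(n, n, lambda i, j: xs[j] ** (lam[i] + n - 1 - i)).det()
    den = sp.Matrix(n, n, lambda i, j: xs[j] ** (n - 1 - i)).det()        # Vandermonde determinant
    return sp.expand(sp.cancel(num / den))

print(schur_tableaux((2, 1), 3))                                          # the 8-term Schur polynomial
print(sp.simplify(schur_tableaux((2, 1), 3) - schur_bialternant((2, 1), 3)))   # 0
```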
One can also iterate (2) to write
where the sum is over all tuples , where each
is a partition of length
that intersperses the next partition
, with
set equal to
. We will call such a tuple an integral Gelfand-Tsetlin pattern based at
.
One can generalise (6) by introducing the skew Schur functions
for , whenever
is a partition of length
and
a partition of length
for some
, thus the Schur polynomial
is also the skew Schur polynomial
with
. (One could relabel the variables here to be something like
instead, but this labeling seems slightly more natural, particularly in view of identities such as (8) below.)
By construction, we have the decomposition
whenever , and
are partitions of lengths
respectively. This gives another recursive way to understand Schur polynomials and skew Schur polynomials. For instance, one can use it to establish the generalised Jacobi-Trudi identity
with the convention that for
larger than the length of
; we do this below the fold.
The Schur polynomials (and skew Schur polynomials) are “discretised” (or “quantised”) in the sense that their parameters are required to be integer-valued, and their definition similarly involves summation over a discrete set. It turns out that there are “continuous” (or “classical”) analogues of these functions, in which the parameters
now take real values rather than integers, and are defined via integration rather than summation. One can view these continuous analogues as a “semiclassical limit” of their discrete counterparts, in a manner that can be made precise using the machinery of geometric quantisation, but we will not do so here.
The continuous analogues can be defined as follows. Define a real partition of length to be a tuple
where
are now real numbers. We can define the relation
of interspersion between a length
real partition
and a length
real partition
precisely as before, by requiring that the inequalities (1) hold for all
. We can then define the continuous Schur functions
for
recursively by defining
for and
of length
, where
and the integral is with respect to
-dimensional Lebesgue measure, and
as before. Thus for instance
and
More generally, we can define the continuous skew Schur functions for
of length
,
of length
, and
recursively by defining
and
for . Thus for instance
and
By expanding out the recursion, one obtains the analogue
of (6), and more generally one has
We will call the tuples in the first integral real Gelfand-Tsetlin patterns based at
. The analogue of (8) is then
where the integral is over all real partitions of length
, with Lebesgue measure.
By approximating various integrals by their Riemann sums, one can relate the continuous Schur functions to their discrete counterparts by the limiting formula
as for any length
real partition
and any
, where
and
More generally, one has
as for any length
real partition
, any length
real partition
with
, and any
.
As a consequence of these limiting formulae, one expects all of the discrete identities above to have continuous counterparts. This is indeed the case; below the fold we shall prove the discrete and continuous identities in parallel. These are not new results by any means, but I was not able to locate a good place in the literature where they are explicitly written down, so I thought I would try to do so here (primarily for my own internal reference, but perhaps the calculations will be worthwhile to some others also).
The determinant of an $n \times n$ matrix (with coefficients in an arbitrary field) obeys many useful identities, starting of course with the fundamental multiplicativity
$$\det(AB) = \det(A) \det(B)$$
for $n \times n$ matrices $A, B$. This multiplicativity can in turn be used to establish many further identities; in particular, as shown in this previous post, it implies the Schur determinant identity
$$\det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A) \det(D - C A^{-1} B)$$
whenever $A$ is an invertible $k \times k$ matrix, $B$ is a $k \times (n-k)$ matrix, $C$ is an $(n-k) \times k$ matrix, and $D$ is an $(n-k) \times (n-k)$ matrix. The matrix $D - C A^{-1} B$ is known as the Schur complement of the block $A$.
I only recently discovered that this identity in turn immediately implies what I always found to be a somewhat curious identity, namely the Dodgson condensation identity (also known as the Desnanot-Jacobi identity)
$$\det(A) \det(A^{1,n}_{1,n}) = \det(A^1_1) \det(A^n_n) - \det(A^1_n) \det(A^n_1)$$
for any $n \geq 3$ and $n \times n$ matrix $A$, where $A^i_j$ denotes the $n-1 \times n-1$ matrix formed from $A$ by removing the $i^{\mathrm{th}}$ row and $j^{\mathrm{th}}$ column, and similarly $A^{i,i'}_{j,j'}$ denotes the $n-2 \times n-2$ matrix formed from $A$ by removing the $i^{\mathrm{th}}$ and $(i')^{\mathrm{th}}$ rows and $j^{\mathrm{th}}$ and $(j')^{\mathrm{th}}$ columns. Thus for instance when $n=3$ we obtain
$$e \cdot \det \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = \det \begin{pmatrix} e & f \\ h & i \end{pmatrix} \det \begin{pmatrix} a & b \\ d & e \end{pmatrix} - \det \begin{pmatrix} d & e \\ g & h \end{pmatrix} \det \begin{pmatrix} b & c \\ e & f \end{pmatrix}$$
for any scalars $a, b, c, d, e, f, g, h, i$. (Charles Dodgson, better known by his pen name Lewis Carroll, is of course also known for writing “Alice in Wonderland” and “Through the Looking Glass“.)
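A quick numerical check of the condensation identity (my own sketch), using a random $5 \times 5$ matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n))

def strike(M, rows, cols):
    """Delete the listed rows and columns of M."""
    keep_r = [i for i in range(M.shape[0]) if i not in rows]
    keep_c = [j for j in range(M.shape[1]) if j not in cols]
    return M[np.ix_(keep_r, keep_c)]

det = np.linalg.det
lhs = det(A) * det(strike(A, [0, n - 1], [0, n - 1]))
rhs = det(strike(A, [0], [0])) * det(strike(A, [n - 1], [n - 1])) \
    - det(strike(A, [0], [n - 1])) * det(strike(A, [n - 1], [0]))
print(np.isclose(lhs, rhs))    # True
```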
The derivation is not new; it is for instance noted explicitly in this paper of Brualdi and Schneider, though I do not know if this is the earliest place in the literature where it can be found. (EDIT: Apoorva Khare has pointed out to me that the original arguments of Dodgson can be interpreted as implicitly following this derivation.) I thought it is worth presenting the short derivation here, though.
Firstly, by swapping the first and rows, and similarly for the columns, it is easy to see that the Dodgson condensation identity is equivalent to the variant
Now write
where is an
matrix,
are
column vectors,
are
row vectors, and
are scalars. If
is invertible, we may apply the Schur determinant identity repeatedly to conclude that
and the claim (2) then follows by a brief calculation (and the explicit form of the
determinant). To remove the requirement that
be invertible, one can use a limiting argument, noting that one can work without loss of generality in an algebraically closed field, and in such a field, the set of invertible matrices is dense in the Zariski topology. (In the case when the scalars are reals or complexes, one can just use density in the ordinary topology instead if desired.)
The same argument gives the more general determinant identity of Sylvester
whenever ,
is a
-element subset of
, and
denotes the matrix formed from
by removing the rows associated to
and the columns associated to
. (The Dodgson condensation identity is basically the
case of this identity.)
A closely related proof of (2) proceeds by elementary row and column operations. Observe that if one adds some multiple of one of the first rows of
to one of the last two rows of
, then the left and right sides of (2) do not change. If the minor
is invertible, this allows one to reduce to the case where the components
of the matrix vanish. Similarly, using elementary column operations instead of row operations we may assume that
vanish. All matrices involved are now block-diagonal and the identity follows from a routine computation.
The latter approach can also prove the cute identity
for any , any
column vectors
, and any
matrix
, which can for instance be found in page 7 of this text of Karlin. Observe that both sides of this identity are unchanged if one adds some multiple of any column of
to one of
; for generic
, this allows one to reduce
to have only the first two entries allowed to be non-zero, at which point the determinants split into
and
determinants and we can reduce to the
case (eliminating the role of
One can now either proceed by a direct computation, or by observing that the left-hand side is quadrilinear in
and antisymmetric in
and
which forces it to be a scalar multiple of
, at which point one can test the identity at a single point (e.g.
and
for the standard basis
) to conclude the argument. (One can also derive this identity from the Sylvester determinant identity but I think the calculations are a little messier if one goes by that route. Conversely, one can recover the Dodgson condensation identity from Karlin’s identity by setting
,
(for instance) and then permuting some rows and columns.)
In July I will be spending a week at Park City, being one of the mini-course lecturers in the Graduate Summer School component of the Park City Summer Session on random matrices. I have chosen to give some lectures on least singular values of random matrices, the circular law, and the Lindeberg exchange method in random matrix theory; this is a slightly different set of topics than I had initially advertised (which was instead about the Lindeberg exchange method and the local relaxation flow method), but after consulting with the other mini-course lecturers I felt that this would be a more complementary set of topics. I have uploaded a draft of my lecture notes (some portion of which is derived from my monograph on the subject); as always, comments and corrections are welcome.
[Update, June 23: notes revised and reformatted to PCMI format. -T.]
[Update, Mar 19 2018: further revision. -T.]
Suppose is a continuous (but nonlinear) map from one normed vector space
to another
. The continuity means, roughly speaking, that if
are such that
is small, then
is also small (though the precise notion of “smallness” may depend on
or
, particularly if
is not known to be uniformly continuous). If
is known to be differentiable (in, say, the Fréchet sense), then we in fact have a linear bound of the form
for some depending on
, if
is small enough; one can of course make
independent of
(and drop the smallness condition) if
is known instead to be Lipschitz continuous.
In many applications in analysis, one would like more explicit and quantitative bounds that estimate quantities like in terms of quantities like
. There are a number of ways to do this. First of all, there is of course the trivial estimate arising from the triangle inequality:
This estimate is usually not very good when and
are close together. However, when
and
are far apart, this estimate can be more or less sharp. For instance, if the magnitude of
varies so much from
to
that
is more than (say) twice that of
, or vice versa, then (1) is sharp up to a multiplicative constant. Also, if
is oscillatory in nature, and the distance between
and
exceeds the “wavelength” of the oscillation of
at
(or at
), then one also typically expects (1) to be close to sharp. Conversely, if
does not vary much in magnitude from
to
, and the distance between
and
is less than the wavelength of any oscillation present in
, one expects to be able to improve upon (1).
When is relatively simple in form, one can sometimes proceed simply by substituting
. For instance, if
is the squaring function
in a commutative ring
, one has
and thus
or in terms of the original variables one has
If the ring is not commutative, one has to modify this to
Thus, for instance, if are
matrices and
denotes the operator norm, one sees from the triangle inequality and the sub-multiplicativity
of operator norm that
If involves
(or various components of
) in several places, one can sometimes get a good estimate by “swapping”
with
at each of the places in turn, using a telescoping series. For instance, if we again use the squaring function
in a non-commutative ring, we have
which for instance leads to a slight improvement of (2):
More generally, for any natural number , one has the identity
in a commutative ring, while in a non-commutative ring one must modify this to
and for matrices one has
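The bound one gets this way (the displayed version did not survive formatting; the standard consequence of the telescoping identity is $\|A^m - B^m\|_{op} \leq m \max(\|A\|_{op}, \|B\|_{op})^{m-1} \|A - B\|_{op}$) can be checked numerically, as in the following sketch of my own.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 4, 6
A = rng.standard_normal((n, n))
B = A + 0.01 * rng.standard_normal((n, n))

op = lambda M: np.linalg.norm(M, ord=2)        # operator norm
mp = np.linalg.matrix_power

# telescoping identity: A^m - B^m = sum_{i=0}^{m-1} A^i (A - B) B^{m-1-i}
telescoped = sum(mp(A, i) @ (A - B) @ mp(B, m - 1 - i) for i in range(m))
print(np.allclose(telescoped, mp(A, m) - mp(B, m)))                                   # True

# and the resulting operator-norm bound
print(op(mp(A, m) - mp(B, m)) <= m * max(op(A), op(B)) ** (m - 1) * op(A - B))        # True
```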
Exercise 1 If
and
are unitary
matrices, show that the commutator
obeys the inequality
(Hint: first control
.)
Now suppose (for simplicity) that is a map between Euclidean spaces. If
is continuously differentiable, then one can use the fundamental theorem of calculus to write
where is any continuously differentiable path from
to
. For instance, if one uses the straight line path
, one has
In the one-dimensional case , this simplifies to
Among other things, this immediately implies the factor theorem for functions: if
is a
function for some
that vanishes at some point
, then
factors as the product of
and some
function
. Another basic consequence is that if
is uniformly bounded in magnitude by some constant
, then
is Lipschitz continuous with the same constant
.
Applying (4) to the power function , we obtain the identity
which can be compared with (3). Indeed, for and
close to
, one can use logarithms and Taylor expansion to arrive at the approximation
, so (3) behaves a little like a Riemann sum approximation to (5).
Exercise 2 For each
, let
and
be random variables taking values in a measurable space
, and let
be a bounded measurable function.
- (i) (Lindeberg exchange identity) Show that
- (ii) (Knowles-Yin exchange identity) Show that
where
is a mixture of
and
, with
uniformly drawn from
independently of each other and of the
.
- (iii) Discuss the relationship between the identities in parts (i), (ii) with the identities (3), (5).
(The identity in (i) is the starting point for the Lindeberg exchange method in probability theory, discussed for instance in this previous post. The identity in (ii) can also be used in the Lindeberg exchange method; the terms in the right-hand side are slightly more symmetric in the indices
, which can be a technical advantage in some applications; see this paper of Knowles and Yin for an instance of this.)
Exercise 3 If
is continuously
times differentiable, establish Taylor’s theorem with remainder
If
is bounded, conclude that
For real scalar functions , the average value of the continuous real-valued function
must be attained at some point
in the interval
. We thus conclude the mean-value theorem
for some (that can depend on
,
, and
). This can for instance give a second proof of fact that continuously differentiable functions
with bounded derivative are Lipschitz continuous. However it is worth stressing that the mean-value theorem is only available for real scalar functions; it is false for instance for complex scalar functions. A basic counterexample is given by the function
; there is no
for which
. On the other hand, as
has magnitude
, we still know from (4) that
is Lipschitz of constant
, and when combined with (1) we obtain the basic bounds
which are already very useful for many applications.
Exercise 4 Let
be
matrices, and let
be a non-negative real.
- (i) Establish the Duhamel formula
where
denotes the matrix exponential of
. (Hint: Differentiate
or
in
.)
- (ii) Establish the iterated Duhamel formula
for any
.
- (iii) Establish the infinitely iterated Duhamel formula
- (iv) If
is an
matrix depending in a continuously differentiable fashion on
, establish the variation formula
where
is the adjoint representation
applied to
, and
is the function
(thus
for non-zero
), with
defined using functional calculus.
We remark that further manipulation of (iv) of the above exercise using the fundamental theorem of calculus eventually leads to the Baker-Campbell-Hausdorff-Dynkin formula, as discussed in this previous blog post.
Exercise 5 Let
be positive definite
matrices, and let
be an
matrix. Show that there is a unique solution
to the Sylvester equation
which is given by the formula
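The formula in question is presumably the standard integral representation $X = \int_0^\infty e^{-tA} C e^{-tB}\, dt$; here is a numerical cross-check of my own against scipy's Sylvester solver.

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester

rng = np.random.default_rng(5)
n = 4
def random_pos_def(n):
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)          # comfortably positive definite

A, B = random_pos_def(n), random_pos_def(n)
C = rng.standard_normal((n, n))

# X = integral over [0, infinity) of exp(-tA) C exp(-tB) dt, approximated by the trapezoid rule
ts = np.linspace(0.0, 10.0, 4001)           # the integrand decays exponentially, so T = 10 suffices
vals = np.array([expm(-t * A) @ C @ expm(-t * B) for t in ts])
dt = ts[1] - ts[0]
X_integral = vals.sum(axis=0) * dt - (vals[0] + vals[-1]) * dt / 2

X_direct = solve_sylvester(A, B, C)         # solves A X + X B = C directly
print(np.abs(X_integral - X_direct).max())  # small (quadrature error only)
```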
In the above examples we had applied the fundamental theorem of calculus along linear curves . However, it is sometimes better to use other curves. For instance, the circular arc
can be useful, particularly if
and
are “orthogonal” or “independent” in some sense; a good example of this is the proof by Maurey and Pisier of the gaussian concentration inequality, given in Theorem 8 of this previous blog post. In a similar vein, if one wishes to compare a scalar random variable
of mean zero and variance one with a Gaussian random variable
of mean zero and variance one, it can be useful to introduce the intermediate random variables
(where
and
are independent); note that these variables have mean zero and variance one, and after coupling them together appropriately they evolve by the Ornstein-Uhlenbeck process, which has many useful properties. For instance, one can use these ideas to establish monotonicity formulae for entropy; see e.g. this paper of Courtade for an example of this and further references. More generally, one can exploit curves
that flow according to some geometrically natural ODE or PDE; several examples of this occur famously in Perelman’s proof of the Poincaré conjecture via Ricci flow, discussed for instance in this previous set of lecture notes.
In some cases, it is difficult to compute or the derivative
directly, but one can instead proceed by implicit differentiation, or some variant thereof. Consider for instance the matrix inversion map
(defined on the open dense subset of
matrices consisting of invertible matrices). If one wants to compute
for
close to
, one can temporarily write
, thus
Multiplying both sides on the left by to eliminate the
term, and on the right by
to eliminate the
term, one obtains
and thus on reversing these steps we arrive at the basic identity
For instance, if are
matrices, and we consider the resolvents
then we have the resolvent identity
as long as does not lie in the spectrum of
or
(for instance, if
,
are self-adjoint then one can take
to be any strictly complex number). One can iterate this identity to obtain
for any natural number ; in particular, if
has operator norm less than one, one has the Neumann series
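With the convention $R_A(z) = (A - z)^{-1}$ (the displayed formulas above did not survive formatting, so the sign conventions in this sketch are my own), both the resolvent identity and the Neumann series are easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = (G1 + G1.T) / 2, (G2 + G2.T) / 2     # symmetric, so the spectra are real
z = 2.0j                                     # strictly complex, hence away from both spectra

R = lambda M: np.linalg.inv(M - z * np.eye(n))          # resolvent R_M(z) = (M - z)^{-1}

# resolvent identity: R_A(z) - R_B(z) = R_A(z) (B - A) R_B(z)
print(np.allclose(R(A) - R(B), R(A) @ (B - A) @ R(B)))                     # True

# Neumann series: for a small perturbation E, (A + E - z)^{-1} = sum_k R_A(z) (-E R_A(z))^k
E = 0.01 * rng.standard_normal((n, n))
series = sum(R(A) @ np.linalg.matrix_power(-E @ R(A), k) for k in range(30))
print(np.allclose(series, np.linalg.inv(A + E - z * np.eye(n))))           # True
```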
Similarly, if is a family of invertible matrices that depends in a continuously differentiable fashion on a time variable
, then by implicitly differentiating the identity
in using the product rule, we obtain
and hence
(this identity may also be easily derived from (6)). One can then use the fundamental theorem of calculus to obtain variants of (6), for instance by using the curve we arrive at
assuming that the curve stays entirely within the set of invertible matrices. While this identity may seem more complicated than (6), it is more symmetric, which conveys some advantages. For instance, using this identity it is easy to see that if are positive definite with
in the sense of positive definite matrices (that is,
is positive definite), then
. (Try to prove this using (6) instead!)
Exercise 6 If $A$ is an invertible $n \times n$ matrix and $u, v$ are $n$-dimensional vectors, establish the Sherman-Morrison formula
$$(A + t u v^*)^{-1} = A^{-1} - \frac{t A^{-1} u v^* A^{-1}}{1 + t v^* A^{-1} u}$$
whenever $t$ is a scalar such that $1 + t v^* A^{-1} u$ is non-zero. (See also this previous blog post for more discussion of these sorts of identities.)
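A short numerical verification of the formula (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
n, t = 5, 0.8
A = rng.standard_normal((n, n))
u, v = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))

Ainv = np.linalg.inv(A)
denom = 1 + t * (v.T @ Ainv @ u)                            # the scalar that must be non-zero
sherman_morrison = Ainv - t * (Ainv @ u @ v.T @ Ainv) / denom
print(np.allclose(sherman_morrison, np.linalg.inv(A + t * u @ v.T)))    # True
```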
One can use the Cauchy integral formula to extend these identities to other functions of matrices. For instance, if is an entire function, and
is a counterclockwise contour that goes around the spectrum of both
and
, then we have
and similarly
and hence by (7) one has
similarly, if depends on
in a continuously differentiable fashion, then
as long as goes around the spectrum of
.
Exercise 7 If
is an
matrix depending continuously differentiably on
, and
is an entire function, establish the tracial chain rule
In a similar vein, given that the logarithm function is the antiderivative of the reciprocal, one can express the matrix logarithm of a positive definite matrix by the fundamental theorem of calculus identity
(with the constant term needed to prevent a logarithmic divergence in the integral). Differentiating, we see that if
is a family of positive definite matrices depending continuously on
, that
This can be used for instance to show that is a monotone increasing function, in the sense that
whenever
in the sense of positive definite matrices. One can of course integrate this formula to obtain some formulae for the difference
of the logarithm of two positive definite matrices
.
To compare the square root of two positive definite matrices
is trickier; there are multiple ways to proceed. One approach is to use contour integration as before (but one has to take some care to avoid branch cuts of the square root). Another is to express the square root in terms of exponentials via the formula
where is the gamma function; this formula can be verified by first diagonalising
to reduce to the scalar case and using the definition of the Gamma function. Then one has
and one can use some of the previous identities to control . This is pretty messy though. A third way to proceed is via implicit differentiation. If for instance
is a family of positive definite matrices depending continuously differentiably on
, we can differentiate the identity
to obtain
This can for instance be solved using Exercise 5 to obtain
and this can in turn be integrated to obtain a formula for . This is again a rather messy formula, but it does at least demonstrate that the square root is a monotone increasing function on positive definite matrices:
implies
.
Several of the above identities for matrices can be (carefully) extended to operators on Hilbert spaces provided that they are sufficiently well behaved (in particular, if they have a good functional calculus, and if various spectral hypotheses are obeyed). We will not attempt to do so here, however.