In less coordinate-free notation, the identity can also be written as , with basically the same proof; this is then more or less the adjugate matrix proof in the arXiv posting.

if and have norm 1.

One advantage of this is that we may assume that , instead of , is an element of an orthonormal basis, which helps (a little) in the proof.

It also suggests that the formula might hold in infinite dimensions, for a suitable notion of determinant. For instance, when and is of trace class, it should hold for Fredholm determinants, since these are limits of finite-dimensional determinants. It might also hold for determinants defined by zeta functions (?).
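As a sanity check on the trace-class heuristic, here is a minimal numerical sketch (the diagonal operator with eigenvalues 2^{-k} is my own toy example, not from the comment): the Fredholm determinant det(1+A) of a trace-class operator is well approximated by the determinants of its finite sections.

```python
import numpy as np

# Toy trace-class operator: diagonal with eigenvalues 2^{-k}, k = 1, 2, ...
# Its Fredholm determinant is the convergent infinite product prod(1 + 2^{-k}).
exact = np.prod([1 + 2.0**-k for k in range(1, 200)])  # effectively the full product

for n in [2, 5, 10, 20]:
    A = np.diag([2.0**-k for k in range(1, n + 1)])    # finite section of the operator
    print(n, np.linalg.det(np.eye(n) + A))
# the finite-dimensional determinants converge rapidly to det(1 + A) ~ 2.3842...
```

The rapid convergence reflects the trace-class decay of the eigenvalues; for slower decay the finite sections would still converge, just less quickly.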

It can be proven easily from the spectral theorem. Extracting the component of this identity, we see that to derive Theorem 1 from it (or vice versa) one would have to establish the identity

where denotes the component of a matrix . This identity has to be true (by combining Halmos’s identity with Theorem 1, and using a limiting argument to remove the hypothesis that the eigenvalues are simple), but actually I don’t see any direct way to prove (*) that doesn’t basically proceed via this route. (The right-hand side is , which suggests some sort of invocation of Cramer’s rule, but if one pursues this idea, one is basically repeating the Cramer’s rule proof given in the blog post, followed by the spectral theory proof of Halmos’s identity. The left-hand side is definitely reminiscent of the Cayley-Hamilton theorem, but I have thus far been unable to use that theorem to give a more tractable expression for the left-hand side of (*), other than by the route of combining Halmos’s identity with Theorem 1.)

It is interesting that there seem to be a number of rather different looking expressions for the same quantity, with the equality between any two of them being a slightly non-trivial theorem. If one writes (assuming simple eigenvalues and for sake of discussion)

then Theorem 1 asserts that , the identity from Erdos-Schlein-Yau (and my paper with Van Vu) asserts that , and Halmos's identity asserts that . The Cramer's rule proof of Theorem 1 basically proceeds by showing that and . The first proof in the blog post independently establishes that . However I don't know of a direct way to establish that or without going through or .
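Theorem 1 here is the eigenvector-eigenvalue identity of the arXiv posting: writing $\lambda_k(A)$ for the eigenvalues of a Hermitian $A$, $v_{i,j}$ for the $j$-th coordinate of a unit eigenvector for $\lambda_i(A)$, and $M_j$ for the minor with the $j$-th row and column deleted, it states $|v_{i,j}|^2 \prod_{k \neq i} (\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M_j))$. A quick numerical sanity check (the random test matrix and index choices are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B + B.T                        # random real symmetric (hence Hermitian) matrix
lam, V = np.linalg.eigh(A)         # eigenvalues lam[k], eigenvectors V[:, k]

i, j = 2, 3                        # arbitrary eigenvalue / coordinate indices
Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)   # j-th minor of A
mu = np.linalg.eigvalsh(Mj)        # its n-1 eigenvalues

lhs = abs(V[j, i])**2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
rhs = np.prod(lam[i] - mu)
print(lhs, rhs)                    # agree up to floating-point error
```

Generically the eigenvalues of a random symmetric matrix are simple, so the simple-eigenvalue hypothesis in the discussion is satisfied here.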

*[UPDATE: I can now adapt the adjugate proof of Theorem 1 in the arXiv article to show (*) (or equivalently, ) directly. One can check that*

by testing against each of the eigenvectors separately (or, if one wishes, first checking the identity when is a diagonal matrix, and using the spectral theorem to extend to general Hermitian ). [Amusingly, if one multiplies both sides of the above identity by , one also recovers the Cayley-Hamilton theorem.] Extracting the coefficient, one obtains (*). In other words, one can show by equating both quantities with

*It is also rather easy to show that , so this proof is arguably “going through” in some sense.
]*

Then the respective orthogonal projectors are .
