Israel Gelfand, who made profound and prolific contributions to many areas of mathematics, including functional analysis, representation theory, operator algebras, and partial differential equations, died on Monday, age 96.
Gelfand’s beautiful theory of $C^*$-algebras and related spaces made quite an impact on me as a graduate student in Princeton, to the point where I was seriously considering working in this area; but there was not much activity in operator algebras there at the time, and I ended up working in harmonic analysis under Eli Stein instead. (Though I am currently involved in another operator algebras project, which I hope to be able to discuss in the near future. The commutative version of Gelfand’s theory is discussed in these lecture notes of mine.)
I met Gelfand only once, in one of the famous “Gelfand seminars” at the IHES in 2000. The speaker was Tim Gowers, on his new proof of Szemerédi’s theorem. (Endre Szemerédi, incidentally, was Gelfand’s student.) Gelfand’s introduction to the seminar, on the subject of Banach spaces, to which both mathematicians had contributed so greatly, was approximately as long as Gowers’ talk itself!
There are far too many contributions to mathematics by Gelfand to name here, so I will only mention two. The first are the Gelfand-Tsetlin patterns, induced by an $n \times n$ Hermitian matrix $A$. Such matrices have $n$ real eigenvalues $\lambda_1 \geq \ldots \geq \lambda_n$. If one takes the top $(n-1) \times (n-1)$ minor, this is another Hermitian matrix, whose $n-1$ eigenvalues $\mu_1 \geq \ldots \geq \mu_{n-1}$ intersperse the eigenvalues of the original matrix: $\lambda_i \geq \mu_i \geq \lambda_{i+1}$ for every $1 \leq i \leq n-1$. This interspersing can be easily seen from the minimax characterisation

$$\lambda_i = \max_{\dim(V) = i} \min_{v \in V \setminus \{0\}} \frac{\langle Av, v \rangle}{\langle v, v \rangle}$$

of the eigenvalues of $A$, with the eigenvalues of the minor being given by the same formula but with $V$ now restricted to a subspace of ${\bf C}^{n-1}$ rather than ${\bf C}^n$.
Similarly, the eigenvalues of the top $(n-2) \times (n-2)$ minor of $A$ intersperse those of the previous minor. Repeating this procedure, one eventually gets a pyramid of real numbers of height and width $n$, with the numbers in each row interspersing the ones in the row below. Such a pattern is known as a Gelfand-Tsetlin pattern. The space of such patterns forms a convex cone, and (if one fixes the initial eigenvalues $\lambda_1 \geq \ldots \geq \lambda_n$) becomes a compact convex polytope. If one fixes the initial eigenvalues of $A$ but chooses the eigenvectors randomly (using the Haar measure of the unitary group $U(n)$), then the resulting Gelfand-Tsetlin pattern is uniformly distributed across this polytope; the $n=2$ case of this observation is essentially the classic observation of Archimedes that the cross-sectional areas of a sphere and a circumscribing cylinder are the same. (Ultimately, the reason for this is that the Gelfand-Tsetlin pattern almost turns the space of all Hermitian matrices with a fixed spectrum (i.e. the co-adjoint orbit associated to that spectrum) into a toric variety. More precisely, there exists a mostly diffeomorphic map from the co-adjoint orbit to a (singular) toric variety, and the Gelfand-Tsetlin pattern induces a complete set of action variables on that variety.) There is also a “quantum” (or more precisely, representation-theoretic) version of this observation, in which one can decompose any irreducible representation of the unitary group $U(n)$ into a canonical basis (the Gelfand-Tsetlin basis), indexed by integer-valued Gelfand-Tsetlin patterns, by first decomposing this representation into irreducible representations of $U(n-1)$, then $U(n-2)$, and so forth.
The structure, symplectic geometry, and representation theory of Gelfand-Tsetlin patterns were enormously influential in my own work with Allen Knutson on honeycomb patterns, which control the sums of Hermitian matrices and also the structure constants of the tensor product operation for representations of $U(n)$; indeed, Gelfand-Tsetlin patterns arise as the degenerate limit of honeycombs in three different ways, and we in fact discovered honeycombs by trying to glue three Gelfand-Tsetlin patterns together. (See for instance our Notices article for more discussion. The honeycomb analogue of the representation-theoretic properties of these patterns was eventually established by Henriques and Kamnitzer, using crystals and their Kashiwara bases.)
The second contribution of Gelfand I want to discuss is the Gelfand-Levitan-Marchenko equation for solving the one-dimensional inverse scattering problem: given the scattering data of an unknown potential function $V(x)$, recover $V$. This is already interesting in and of itself, but is also instrumental in solving integrable systems such as the Korteweg-de Vries equation, because the Lax pair formulation of such equations implies that they can be linearised (and solved explicitly) by applying the scattering and inverse scattering transforms associated with the Lax operator. I discuss the derivation of this equation below the fold.
— 1. Inverse scattering —
Consider the one-dimensional wave equation

$$u_{tt} - u_{xx} + V(x) u = 0 \ \ \ \ \ (1)$$

for the scalar field $u: {\bf R} \times {\bf R} \to {\bf R}$, where the potential $V: {\bf R} \to {\bf R}$ is assumed to be smooth and compactly supported in some interval $[-R, R]$. If the potential were zero, then (1) would become the free wave equation, and the general solution of (1) would be the superposition of a rightward moving wave $f(x-t)$ and a leftward moving wave $g(x+t)$:

$$u(t,x) = f(x-t) + g(x+t).$$
Now suppose $V$ is non-zero. Then in the left region $x < -R$, $u$ solves the free wave equation and thus takes the form

$$u(t,x) = f_-(x-t) + g_-(x+t)$$

for some functions $f_-, g_-$, while in the right region $x > R$, $u$ solves the free wave equation and takes the form

$$u(t,x) = f_+(x-t) + g_+(x+t)$$
for some functions $f_+, g_+$. As for the central region $-R \leq x \leq R$, we will make the simplifying assumption that the equation (1) has no bound states, which implies in particular that $u(t,x) \to 0$ as $t \to \pm\infty$ for any fixed $x$ and any reasonable solution $u$ (e.g. finite energy will do). From spectral theory, this assumption is equivalent to the assertion that the Hill operator (or Schrödinger operator) $L := -\frac{d^2}{dx^2} + V$ has no eigenfunctions in $L^2({\bf R})$. Thus the wave is asymptotically equal to $f_-(x-t) + g_+(x+t)$ for large negative times $t$, then evolves to interact with the potential, then eventually is asymptotic to $f_+(x-t) + g_-(x+t)$ for large positive $t$.
When $V$ is zero, we have $f_+ = f_-$ and $g_- = g_+$. For non-zero $V$, the relationship between $f_\pm$ and $g_\pm$ involves something called the reflection coefficients and transmission coefficients of $V$. We will not define these just yet, but instead focus on two special cases. Firstly, consider the case when $f_-$ is the Dirac delta function $\delta$ and $g_+ = 0$; thus for large negative times, the wave consists just of a Dirac mass (or “photon”, if you will) arriving from the left, and this generates some special solution $u = u_\delta$ to (1) (a bit like the fundamental solution). (We will ignore the (mild) technical issues involved in solving PDE with data as rough as the Dirac distribution.) Evolving the wave equation (1), the solution eventually resolves to some transmitted wave $f_+(x-t)$ (which should be some perturbation of the Dirac delta function) plus a reflected wave $g_-(x+t)$, which we will call $F(x+t)$. This function $F$ can be thought of as “scattering data” for the potential $V$.
Next, we will consider the solution $u = K(t,x)$ to (1) for which $f_- = \delta$ and $g_- = 0$; one can view this solution as obtained by reversing the roles of space and time, and evolving (1) from $x = -\infty$ to $x = +\infty$ (with the potential now being a constant-in-space, variable-in-time potential rather than the other way around). One can construct this solution by the Picard iteration method, and using finite speed of propagation, one can show that $K$ is supported on the lower diagonal region $\{(t,x): t \leq x\}$, consisting of the Dirac mass $\delta(x-t)$ travelling along the diagonal plus a remainder which is smooth except at the diagonal $t = x$, where it can have a jump discontinuity. One can think of $K$ as a dual fundamental solution for the equation (1); it is also the Fourier transform of the Jost solutions to the eigenfunction equation (5).
We claim that the scattering data $F$ can be used to reconstruct the potential $V$. This claim is obtained from two sub-claims:
- Given the scattering data $F$, one can recover the kernel $K$.
- Given $K$, one can recover $V$.
The second claim is easy: one simply substitutes the decomposition $K(t,x) = \delta(x-t) + \tilde K(t,x)$ (with $\tilde K$ supported on the lower diagonal region $t \leq x$) into (1). Indeed, if one compares the coefficients of $\delta(x-t)$ on both sides, one sees that

$$V(x) = 2 \frac{d}{dx} \tilde K(x,x),$$

where $\tilde K(x,x)$ denotes the boundary value of $\tilde K$ on the diagonal, taken from the region $t < x$.
To verify the first claim, we exploit the time translation and time reversal symmetries of (1). Since $K(t,x)$ is a solution to (1), the time-reversal $K(-t,x)$ is also a solution. More generally, the time translation $K(s-t,x)$ is also a solution to (1) for any real number $s$. Multiplying this by $F(s)$ (the scattering data constructed earlier) and integrating in $s$, we see that the function

$$(t,x) \mapsto \int_{\bf R} F(s)\, K(s-t,x)\, ds$$
is also a solution to (1). Superimposing this with the solution $K(t,x)$ itself, we see that

$$u(t,x) = K(t,x) + \int_{\bf R} F(s)\, K(s-t,x)\, ds \ \ \ \ \ (2)$$
is also a solution to (1).
Now when $x < -R$, $K$ is just the Dirac mass $\delta(x-t)$; and so this solution (2) is equal to $\delta(x-t) + F(x+t)$ for $x < -R$. But this is also the form taken in that region by the fundamental solution $u_\delta$ constructed earlier (the solution generated by a single Dirac mass arriving from the left). Thus, by uniqueness of the evolution of (1) (in the $x$ direction), (2) must match the fundamental solution $u_\delta$. On the other hand, by finite speed of propagation, this solution vanishes when $x > t$. We thus obtain the Gelfand-Levitan-Marchenko equation

$$K(t,x) + \int_{\bf R} F(s)\, K(s-t,x)\, ds = 0 \hbox{ whenever } x > t. \ \ \ \ \ (3)$$

(Note that in this region the Dirac component of $K(t,x)$ vanishes, while the Dirac component of $K(s-t,x)$ contributes the term $F(x+t)$ to the integral.)
For any fixed $x$, this is a linear integral equation for the function $t \mapsto K(t,x)$, and one can use it to solve for $K$ given $F$ by inverting the integral operator $1 + T_F$, where $T_F K(t,x) := \int_{\bf R} F(s)\, K(s-t,x)\, ds$. (This inversion can be performed by Neumann series if $T_F$ is small enough, which is for instance the case when the potential $V$ is small; in fact, one can use scattering theory to show that this operator is in fact invertible for all $V$ without bound states, but we will not do so here.)
Now I’ll discuss briefly the relationship between the data $F$ and more traditional types of scattering data. We already have the dual fundamental solution $K(t,x)$, which equals $\delta(x-t)$ for $x < -R$ and for which the reflected component $g_-$ vanishes. By time translation, we have a solution $K(t-s,x)$ for any real number $s$, which equals $\delta(x-t+s)$ for $x < -R$ and for which $g_-$ again vanishes. Multiplying this solution by $e^{iks}$ for some frequency $k$ and then integrating in $s$, we obtain the (complex) solution

$$u(t,x) := \int_{\bf R} e^{iks} K(t-s,x)\, ds \ \ \ \ \ (4)$$

to (1), which equals

$$e^{ik(t-x)}$$

for $x < -R$, and equals some function of the form

$$\alpha(k) e^{ik(t-x)} + \beta(k) e^{ik(t+x)}$$

for $x > R$, where the complex coefficients $\alpha(k), \beta(k)$ are closely related to the transmission and reflection coefficients mentioned earlier.
On the other hand, we can write the solution (4) as $u(t,x) = e^{ikt} \phi_k(x)$, where $\phi_k(x) := \int_{\bf R} e^{-iks} K(s,x)\, ds$. This implies that, in the region $x < -R$, $\phi_k$ must be a multiple of the plane wave $e^{-ikx}$, thus $\phi_k(x) = c e^{-ikx}$ there for some complex number $c$ (comparing with the preceding formula, one in fact has $c = 1$). Substituting $u = e^{ikt} \phi_k(x)$ into (1), we see that $\phi_k$ is a generalised eigenfunction of the Schrödinger operator: