The Harish-Chandra-Itzykson-Zuber integral formula exactly computes the integral

$\displaystyle \int_{U(n)} e^{t \hbox{tr}(AUBU^{-1})}\ dU,$

where $A, B$ are $n \times n$ Hermitian matrices, $U$ is integrated over the Haar probability measure of the unitary group $U(n)$, and $t$ is a non-zero complex parameter, as the expression

$\displaystyle c_n \frac{\det( e^{t a_i b_j} )_{1 \leq i, j \leq n}}{t^{(n^2-n)/2} \Delta(a) \Delta(b)}$

when the eigenvalues $a_1 < \ldots < a_n$ of $A$ and $b_1 < \ldots < b_n$ of $B$ are simple, where $\Delta(a)$ denotes the Vandermonde determinant

$\displaystyle \Delta(a) := \prod_{1 \leq i < j \leq n} (a_j - a_i)$

and $c_n$ is the constant

$\displaystyle c_n := \prod_{i=1}^{n-1} i!.$
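As a quick numerical sanity check of this formula (a sketch of my own, using the normalization above, and sampling Haar unitaries via the QR decomposition of a complex Ginibre matrix), one can compare a Monte Carlo average over the unitary group against the right-hand side for small $n$:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # Haar-random unitary: QR of a complex Ginibre matrix, with the phases of
    # R's diagonal absorbed into Q (Mezzadri's recipe).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / math.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def vandermonde(x):
    # Delta(x) = prod_{i<j} (x_j - x_i)
    n = len(x)
    return np.prod([x[j] - x[i] for i in range(n) for j in range(i + 1, n)])

def hciz_rhs(a, b, t):
    # c_n det(e^{t a_i b_j}) / ( t^{(n^2-n)/2} Delta(a) Delta(b) )
    n = len(a)
    c_n = np.prod([math.factorial(i) for i in range(1, n)])
    num = np.linalg.det(np.exp(t * np.outer(a, b)))
    return c_n * num / (t ** ((n * n - n) // 2) * vandermonde(a) * vandermonde(b))

n, t = 2, 1.0
a = np.array([0.3, 1.0])   # simple eigenvalues of A
b = np.array([0.2, 0.8])   # simple eigenvalues of B
A, B = np.diag(a), np.diag(b)

# Monte Carlo average of e^{t tr(A U B U*)} over Haar measure
samples = [np.exp(t * np.trace(A @ U @ B @ U.conj().T)).real
           for U in (haar_unitary(n) for _ in range(20000))]
mc_estimate = np.mean(samples)
exact = hciz_rhs(a, b, t)
print(mc_estimate, exact)  # close agreement with this many samples
```

For $n = 2$ one can even verify the formula by hand, since $|U_{11}|^2$ is uniformly distributed on $[0,1]$ for a Haar-random $U \in U(2)$, so the integral reduces to a one-dimensional exponential integral.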
There are at least two standard ways to prove this formula in the literature. One way is by applying the Duistermaat-Heckman theorem to the pushforward of Liouville measure on the coadjoint orbit $O_B := \{ UBU^{-1}: U \in U(n) \}$ (or more precisely, a rotation of such an orbit by $t$) under the moment map $M \mapsto \hbox{tr}(AM)$, and then using a stationary phase expansion. Another way, which I only learned about recently, is to use the formulae for evolution of eigenvalues under Dyson Brownian motion (as well as the closely related formulae for the GUE ensemble), which were derived in this previous blog post. Both of these approaches can be found in several places in the literature (the former being observed in the original paper of Duistermaat and Heckman, and the latter observed in the paper of Itzykson and Zuber as well as in this later paper of Johansson), but I thought I would record both of these here for my own benefit.
The Harish-Chandra-Itzykson-Zuber formula can be extended to other compact Lie groups than $U(n)$. At first glance, this might suggest that these formulae could be of use in the study of the GOE ensemble, but unfortunately the Lie algebra associated to the orthogonal group $O(n)$ corresponds to real anti-symmetric matrices rather than real symmetric matrices. This also occurs in the $U(n)$ case, but there one can simply multiply by $i$ to rotate a complex skew-Hermitian matrix into a complex Hermitian matrix. This is consistent, though, with the fact that the (somewhat rarely studied) anti-symmetric GOE ensemble has cleaner formulae (in particular, having a determinantal structure similar to GUE) than the (much more commonly studied) symmetric GOE ensemble.
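The rotation trick is a one-line computation; here is a small numerical sketch (with hypothetical sample matrices) of why it works for skew-Hermitian matrices but does not help in the real anti-symmetric case:

```python
import numpy as np

rng = np.random.default_rng(1)

# A complex skew-Hermitian matrix S (i.e. S* = -S), as in the Lie algebra u(n):
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = M - M.conj().T            # skew-Hermitian by construction
H = 1j * S                    # "rotate" by i
print(np.allclose(H, H.conj().T))  # True: H is Hermitian

# A real anti-symmetric matrix T (i.e. T^T = -T), as in the Lie algebra so(n):
N = rng.standard_normal((3, 3))
T = N - N.T
iT = 1j * T
# iT is Hermitian too, but purely imaginary, so it is not a *real* symmetric
# matrix and does not land in the (symmetric) GOE setting:
print(np.allclose(iT, iT.conj().T), np.abs(iT.imag).max() > 0)  # True True
```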
The prize is, of course, richly deserved. I have mentioned some of Gromov’s work here on this blog, including the Bishop-Gromov inequality in Riemannian geometry (which (together with its parabolic counterpart, the monotonicity of Perelman reduced volume) plays an important role in Perelman’s proof of the Poincaré conjecture), the concept of Gromov-Hausdorff convergence (a version of which is also key in the proof of the Poincaré conjecture), and Gromov’s celebrated theorem on groups of polynomial growth, which I discussed in this post.
Another well-known result of Gromov that I am quite fond of is his nonsqueezing theorem in symplectic geometry (or Hamiltonian mechanics). In its original form, the theorem states that a ball $B(0,R)$ of radius $R$ in a symplectic vector space ${\bf R}^{2n}$ (with the usual symplectic structure $\omega := \sum_{i=1}^n dx_i \wedge dy_i$) cannot be mapped by a symplectomorphism into any cylinder $B^2(0,r) \times {\bf R}^{2n-2}$ which is narrower than the ball (i.e. $r < R$). This result, which was one of the foundational results in the modern theory of symplectic invariants, is sometimes referred to as the “principle of the symplectic camel”, as it has the amusing corollary that a large “camel” (or more precisely, a $2n$-dimensional ball of radius $R$ in phase space) cannot be deformed via canonical transformations to pass through a small “needle” (or more precisely through a $(2n-1)$-dimensional ball of radius less than $R$ in a hyperplane). It shows that Liouville’s theorem on the volume preservation of symplectomorphisms is not the only obstruction to mapping one object symplectically to another.
I can sketch Gromov’s original proof of the non-squeezing theorem here. The symplectic space ${\bf R}^{2n}$ can be identified with the complex space ${\bf C}^n$, and in particular gives an almost complex structure $J$ on the ball $B(0,R)$ (roughly speaking, $J$ allows one to multiply tangent vectors $v$ by complex numbers, and in particular $Jv$ can be viewed as $v$ multiplied by the unit imaginary $i$). This almost complex structure $J$ is compatible with the symplectic form $\omega$; in particular $J$ is tamed by $\omega$, which basically means that $\omega(v, Jv) > 0$ for all non-zero tangent vectors $v$.
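Concretely, in coordinates the standard structures are matrices, and for the standard pair the taming inequality even strengthens to the identity $\omega(v, Jv) = |v|^2$. A small numerical sketch of this computation:

```python
import numpy as np

n = 3  # so phase space R^{2n} is 6-dimensional
I = np.eye(n)
Z = np.zeros((n, n))

# Standard symplectic form omega = sum_i dx_i ^ dy_i, as a matrix:
# omega(u, w) = u^T Om w in (x, y) coordinates
Om = np.block([[Z, I], [-I, Z]])
# Standard complex structure J (multiplication by i under R^{2n} ~ C^n):
# J(x, y) = (-y, x)
J = np.block([[Z, -I], [I, Z]])

rng = np.random.default_rng(2)
v = rng.standard_normal(2 * n)
taming = v @ Om @ (J @ v)
print(taming, v @ v)  # the two values coincide, and are positive for v != 0
```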
Now suppose for contradiction that there is a symplectic embedding $\Phi: B(0,R) \rightarrow B^2(0,r) \times {\bf R}^{2n-2}$ from the ball to a smaller cylinder. Then we can push forward the almost complex structure $J$ on the ball to give an almost complex structure $\Phi_* J$ on the image $\Phi(B(0,R))$. This new structure is still tamed by the symplectic form $\omega$ on this image.
Just as complex structures can be used to define holomorphic functions, almost complex structures can be used to define pseudo-holomorphic or J-holomorphic curves. These are curves of one complex dimension (i.e. two real dimensions, that is to say a surface) which obey the analogue of the Cauchy-Riemann equations in the almost complex setting (i.e. the tangent space of the curve is preserved by $J$). The theory of such curves was pioneered by Gromov in the paper where the nonsqueezing theorem was proved. When $J$ is the standard almost complex structure on ${\bf C}^n$, pseudoholomorphic curves coincide with holomorphic curves. Among other things, such curves are minimal surfaces (for much the same reason that holomorphic functions are harmonic), and their symplectic areas and surface areas coincide.
Now, the point $\Phi(0)$ lies in the cylinder $B^2(0,r) \times {\bf R}^{2n-2}$, and in particular lies in a disk of symplectic area $\pi r^2$ spanning this cylinder. This disk will not be pseudo-holomorphic in general, but it turns out that it can be deformed to obtain a pseudo-holomorphic disk spanning the cylinder, passing through $\Phi(0)$, of symplectic area at most $\pi r^2$. Pulling this back by $\Phi$, we obtain a minimal surface spanning the sphere $\partial B(0,R)$ and passing through the origin that has surface area at most $\pi r^2$. However, any minimal surface spanning $\partial B(0,R)$ and passing through the origin is known to have area at least $\pi R^2$, giving the desired contradiction. [This latter fact, incidentally, is quite a fun fact to prove; the key point is to first show that any closed loop of length strictly less than $2\pi$ in the unit sphere must lie inside an open hemisphere, and so (since a minimal surface lies in the convex hull of its boundary, and the convex hull of an open hemisphere omits the centre) cannot be the boundary of any minimal surface spanning the unit ball and containing the origin. Thus, the symplectic camel theorem ultimately comes down to the fact that one cannot pass a unit ball through a loop of string of length less than $2\pi$.]
As is well known, the linear one-dimensional wave equation

$\displaystyle -\phi_{tt} + \phi_{xx} = 0, \ \ \ \ \ (1)$

where $\phi: {\bf R} \times {\bf R} \rightarrow {\bf R}$ is the unknown field (which, for simplicity, we assume to be smooth), can be solved explicitly; indeed, the general solution to (1) takes the form

$\displaystyle \phi(t,x) = f(t+x) + g(t-x) \ \ \ \ \ (2)$

for some arbitrary (smooth) functions $f, g: {\bf R} \rightarrow {\bf R}$. (One can of course determine $f$ and $g$ once one specifies enough initial data or other boundary conditions, but this is not the focus of my post today.)
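As a quick numerical sanity check, any profile of the form (2) annihilates the wave operator; the sketch below (with two arbitrarily chosen smooth profiles) verifies this by central finite differences:

```python
import numpy as np

# Two arbitrary smooth profiles
f = np.sin
g = lambda s: np.exp(-s**2)

def phi(t, x):
    # d'Alembert form: phi(t, x) = f(t + x) + g(t - x)
    return f(t + x) + g(t - x)

# Check -phi_tt + phi_xx = 0 at a sample point via second-order central differences
h = 1e-3
t0, x0 = 0.7, -0.3
phi_tt = (phi(t0 + h, x0) - 2 * phi(t0, x0) + phi(t0 - h, x0)) / h**2
phi_xx = (phi(t0, x0 + h) - 2 * phi(t0, x0) + phi(t0, x0 - h)) / h**2
residual = -phi_tt + phi_xx
print(residual)  # ~0 up to finite-difference error
```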
When one moves from linear wave equations to nonlinear wave equations, then in general one does not expect to have a closed-form solution such as (2). So I was pleasantly surprised recently while playing with the nonlinear wave equation

$\displaystyle -\phi_{tt} + \phi_{xx} = e^\phi \ \ \ \ \ (3)$

to discover that this equation can also be explicitly solved in closed form. (I hope to explain why I was interested in (3) in the first place in a later post.)
A posteriori, I now know the reason for this explicit solvability; (3) is the limiting case (as $\epsilon \rightarrow 0$) of the more general equation

$\displaystyle -\phi_{tt} + \phi_{xx} = e^\phi - \epsilon^2 e^{-\phi} \ \ \ \ \ (4)$

which (after applying the simple transformation $\phi = \psi + \log \epsilon$) becomes the sinh-Gordon equation

$\displaystyle -\psi_{tt} + \psi_{xx} = 2 \epsilon \sinh(\psi)$
(a close cousin of the more famous sine-Gordon equation $-\psi_{tt} + \psi_{xx} = \sin(\psi)$), which is known to be completely integrable, and exactly solvable. However, I only realised this after the fact, and stumbled upon the explicit solution to (3) by much more classical and elementary means. I thought I might share the computations here, as I found them somewhat cute, and they seem to serve as an example of how one might go about finding explicit solutions to PDE in general; accordingly, I will take a rather pedestrian approach to describing the hunt for the solution, rather than presenting the shortest or slickest route to the answer.
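One consistent way to realize the sinh-Gordon reduction is the constant shift $\phi = \psi + \log \epsilon$: since the shift is by a constant, all derivatives are untouched, and the zeroth-order terms $e^{\phi} - \epsilon^2 e^{-\phi}$ collapse to $2\epsilon \sinh(\psi)$. A quick numerical check of this pointwise identity (my own sketch, with an arbitrarily chosen $\epsilon$):

```python
import math

eps = 0.25  # a sample positive parameter
max_err = 0.0
for psi in [-1.3, 0.0, 0.7, 2.1]:
    phi = psi + math.log(eps)                      # the constant shift
    lhs = math.exp(phi) - eps**2 * math.exp(-phi)  # zeroth-order terms in phi
    rhs = 2 * eps * math.sinh(psi)                 # sinh-Gordon nonlinearity
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)  # ~0 (machine precision): the identity holds pointwise
```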
[The computations do seem to be very classical, though, and thus presumably already in the literature; if anyone knows of a place where the solvability of (3) is discussed, I would be very happy to learn of it.] [Update, Jan 22: Patrick Dorey has pointed out that (3) is, indeed, extremely classical; it is known as Liouville’s equation and was solved by Liouville in J. Math. Pure et Appl. vol 18 (1853), 71-74, with essentially the same solution as presented here.]
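For the record, the classical solution can be verified directly: with the sign convention $-\phi_{tt} + \phi_{xx} = e^\phi$, the two-function family $\phi = \log\big( 8 f'(x+t) g'(x-t) / (f(x+t) + g(x-t))^2 \big)$ (with $f' g' > 0$) solves the equation, since in the null coordinates $u = x+t$, $v = x-t$ the equation becomes $4 \phi_{uv} = e^\phi$. A numerical spot-check of this (my own sketch, with hypothetical profiles $f, g$):

```python
import math

# Profiles with f' g' > 0 (an arbitrary choice for illustration)
f  = lambda s: math.exp(s)
fp = lambda s: math.exp(s)
g  = lambda s: math.exp(2 * s)
gp = lambda s: 2 * math.exp(2 * s)

def phi(t, x):
    # Classical Liouville ansatz: phi = log( 8 f'(x+t) g'(x-t) / (f(x+t)+g(x-t))^2 )
    u, v = x + t, x - t
    return math.log(8 * fp(u) * gp(v) / (f(u) + g(v))**2)

# Verify -phi_tt + phi_xx = e^phi at a sample point by central differences
h = 1e-3
t0, x0 = 0.4, -0.2
phi_tt = (phi(t0 + h, x0) - 2 * phi(t0, x0) + phi(t0 - h, x0)) / h**2
phi_xx = (phi(t0, x0 + h) - 2 * phi(t0, x0) + phi(t0, x0 - h)) / h**2
residual = (-phi_tt + phi_xx) - math.exp(phi(t0, x0))
print(residual)  # ~0 up to O(h^2) finite-difference error
```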
My penultimate article for my PCM series is a very short one, on “Hamiltonians”. The PCM has a number of short articles to define terms which occur frequently in the longer articles, but are not substantive enough topics by themselves to warrant a full-length treatment. One of these is the term “Hamiltonian”, which is used in all the standard types of physical mechanics (classical or quantum, microscopic or statistical) to describe the total energy of a system. It is a remarkable feature of the laws of physics that this single object (which is a scalar-valued function in classical physics, and a self-adjoint operator in quantum mechanics) suffices to describe the entire dynamics of a system, although from a mathematical perspective it is not always easy to read off all the analytic aspects of this dynamics just from the form of the Hamiltonian.
In mathematics, Hamiltonians of course arise in the equations of mathematical physics (such as Hamilton’s equations of motion, or Schrödinger’s equations of motion), but also show up in symplectic geometry (as a special case of a moment map) and in microlocal analysis.
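As a toy illustration of how the Hamiltonian alone determines the dynamics, here is a sketch (with a hypothetical potential $V(q) = q^2/2$) of Hamilton's equations $\dot q = \partial H / \partial p$, $\dot p = -\partial H / \partial q$ integrated by the symplectic leapfrog scheme, which nearly conserves $H$ over long times:

```python
def leapfrog(q, p, dt, steps, dV):
    # Symplectic (Stormer-Verlet) integration of dq/dt = p, dp/dt = -dV/dq,
    # i.e. Hamilton's equations for H(q, p) = p^2/2 + V(q)
    for _ in range(steps):
        p -= 0.5 * dt * dV(q)  # half kick
        q += dt * p            # drift
        p -= 0.5 * dt * dV(q)  # half kick
    return q, p

# Harmonic oscillator: V(q) = q^2/2, so H = (p^2 + q^2)/2
q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, dt=0.01, steps=10000, dV=lambda q: q)
H0 = 0.5 * (p0**2 + q0**2)
H1 = 0.5 * (p1**2 + q1**2)
print(abs(H1 - H0))  # energy drift stays tiny for a symplectic scheme
```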
For this post, I would also like to highlight an article of my good friend Andrew Granville on one of my own favorite topics, “Analytic number theory”, focusing in particular on the classical problem of understanding the distribution of the primes, via such analytic tools as zeta functions and L-functions, sieve theory, and the circle method.