
One of the basic problems in the field of operator algebras is to develop a functional calculus for either a single operator $T$, or a collection $T_1, \dots, T_k$ of operators. These operators could in principle act on any function space, but typically one either considers complex matrices (which act on a complex finite dimensional space), or operators (either bounded or unbounded) on a complex Hilbert space. (One can of course also obtain analogous results for real operators, but we will work throughout with complex operators in this post.)

Roughly speaking, a functional calculus is a way to assign an operator $f(T)$ or $f(T_1,\dots,T_k)$ to any function $f$ in a suitable function space. Such a calculus should be linear over the complex numbers, should preserve the scalars (i.e. $f(T_1,\dots,T_k) = c$ whenever $f$ is the constant function $c$), and should be either an exact or approximate homomorphism in the sense that

$\displaystyle f(T_1,\dots,T_k) g(T_1,\dots,T_k) = (fg)(T_1,\dots,T_k) \ \ \ \ \ (1)$

should hold either exactly or approximately. In the case when the $T_1,\dots,T_k$ are self-adjoint operators acting on a Hilbert space (or Hermitian matrices), one often also desires the identity

$\displaystyle f(T_1,\dots,T_k)^* = \overline{f}(T_1,\dots,T_k) \ \ \ \ \ (2)$
to also hold either exactly or approximately. (Note that one cannot reasonably expect (1) and (2) to hold exactly for all $f, g$ if the $T_1,\dots,T_k$ and their adjoints do not commute with each other, so in those cases one has to be willing to allow some error terms in the above wish list of properties of the calculus.) Ideally, one should also be able to relate the operator norm of $f(T)$ or $f(T_1,\dots,T_k)$ with something like the uniform norm of $f$. In principle, the existence of a good functional calculus allows one to manipulate operators as if they were scalars (or at least approximately as if they were scalars), which is very helpful for a number of applications, such as partial differential equations, spectral theory, noncommutative probability, and semiclassical mechanics. A functional calculus for multiple operators $T_1,\dots,T_k$ can be particularly valuable as it allows one to treat $T_1,\dots,T_k$ as being exact or approximate scalars *simultaneously*. For instance, if one is trying to solve a linear differential equation that can (formally at least) be expressed in the form

$\displaystyle P(D_1,\dots,D_k) u = f$

for some data $f$, unknown function $u$, some differential operators $D_1,\dots,D_k$, and some nice function $P$, then if one's functional calculus is good enough (and $P$ is suitably "elliptic" in the sense that it does not vanish or otherwise degenerate too often), one should be able to solve this equation either exactly or approximately by the formula

$\displaystyle u = \frac{1}{P}(D_1,\dots,D_k) f,$

which is of course how one would solve this equation if one pretended that the operators $D_1,\dots,D_k$ were in fact scalars. Formalising this calculus rigorously leads to the theory of pseudodifferential operators, which allows one to (approximately) solve or at least simplify a much wider range of differential equations than one can achieve with more elementary algebraic transformations (e.g. integrating factors, change of variables, variation of parameters, etc.). In quantum mechanics, a functional calculus that allows one to treat operators as if they were approximately scalar can be used to rigorously justify the correspondence principle in physics, namely that the predictions of quantum mechanics approximate those of classical mechanics in the *semiclassical limit* $\hbar \rightarrow 0$.
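To make the "divide by the symbol" recipe concrete, here is a minimal numerical sketch (my own toy example, not from the text): for the constant-coefficient elliptic operator $P(D) = 1 - \frac{d^2}{dx^2}$ on the circle, the symbol $1 + k^2$ never vanishes, so the formula $u = \frac{1}{P}(D) f$ can be implemented exactly on the Fourier side.

```python
import numpy as np

# Toy instance of "dividing by the symbol": solve (1 - d^2/dx^2) u = f on the
# periodic interval [0, 2*pi).  The operator P(D) = 1 - d^2/dx^2 acts on the
# Fourier mode e^{ikx} by the symbol P(k) = 1 + k^2, which never vanishes
# ("elliptic"), so u = (1/P)(D) f is computed exactly on the Fourier side.
N = 256
x = 2 * np.pi * np.arange(N) / N
f = np.exp(np.sin(x))                  # some smooth periodic data

k = np.fft.fftfreq(N, d=1.0 / N)       # integer frequencies for period 2*pi
u = np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + k ** 2)))

# Check by applying P(D) back to u with the same spectral differentiation.
residual = np.real(np.fft.ifft((1.0 + k ** 2) * np.fft.fft(u))) - f
print(np.max(np.abs(residual)))
```

For variable-coefficient or non-commuting symbols this exact division is no longer available, which is precisely where the approximate calculi of pseudodifferential operator theory enter.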

There is no universal functional calculus that works in all situations; the strongest functional calculi, which are close to being an exact *-homomorphism on a very large class of functions, tend to only work under very restrictive hypotheses on $T$ or $T_1,\dots,T_k$ (in particular, when $k > 1$, one needs the $T_1,\dots,T_k$ to commute either exactly, or very close to exactly), while there are weaker functional calculi which have fewer nice properties and only work for a very small class of functions, but can be applied to quite general operators $T$ or $T_1,\dots,T_k$. In some cases the functional calculus is only formal, in the sense that $f(T)$ or $f(T_1,\dots,T_k)$ has to be interpreted as an infinite formal series that does not converge in a traditional sense. Also, when one wishes to select a functional calculus on non-commuting operators $T_1,\dots,T_k$, there is a certain amount of non-uniqueness: one generally has a number of slightly different functional calculi to choose from, which generally have the same properties but differ in some minor technical details (particularly with regards to the behaviour of "lower order" components of the calculus). This is similar to how one has a variety of slightly different coordinate systems available to parameterise a Riemannian manifold or Lie group. This is in contrast to the case when the underlying operator $T$ is (essentially) normal (so that $T$ commutes with its adjoint $T^*$); in this special case (which includes the important subcases when $T$ is unitary or (essentially) self-adjoint), spectral theory gives us a canonical and very powerful functional calculus which can be used without further modification in applications.
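As a concrete illustration of the canonical calculus in the normal case, here is a short sketch (mine, not from the text; `calc` is a hypothetical helper name) of the spectral-theorem functional calculus for a Hermitian matrix, with a numerical check that on this class of operators the calculus is an exact *-homomorphism.

```python
import numpy as np

# Minimal sketch: for a single Hermitian matrix T, the spectral theorem gives
# the canonical functional calculus f(T) = U f(Lam) U*, where T = U Lam U* is
# an eigendecomposition.  Here the calculus is an exact *-homomorphism:
# (fg)(T) = f(T) g(T), and conj(f)(T) = f(T)*.
def calc(T, f):
    lam, U = np.linalg.eigh(T)
    return U @ np.diag(f(lam)) @ U.conj().T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = (A + A.conj().T) / 2               # a Hermitian test matrix

f = lambda x: np.exp(1j * x)           # two sample functions of a real variable
g = lambda x: x ** 2 - 1

hom_err = np.linalg.norm(calc(T, lambda x: f(x) * g(x)) - calc(T, f) @ calc(T, g))
adj_err = np.linalg.norm(calc(T, lambda x: np.conj(f(x))) - calc(T, f).conj().T)
print(hom_err, adj_err)
```

Both defects vanish up to rounding; for several non-commuting Hermitian matrices no assignment can achieve this exactly, which is the starting point for the approximate calculi discussed below.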

Despite this lack of uniqueness, there is one standard choice for a functional calculus available for general operators $T_1,\dots,T_k$, namely the Weyl functional calculus; it is analogous in some ways to normal coordinates for Riemannian manifolds, or *exponential coordinates of the first kind* for Lie groups, in that it treats lower order terms in a reasonably nice fashion. (But it is important to keep in mind that, like its analogues in Riemannian geometry or Lie theory, there will be some instances in which the Weyl calculus is not the optimal calculus to use for the application at hand.)
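One concrete manifestation of the Weyl calculus's even-handed treatment of ordering can be sketched numerically (a toy discretisation of my own, not from the text): the real symbol $x \xi$ Weyl-quantises to the symmetrised operator $\frac{1}{2}(XD + DX)$, which is self-adjoint, whereas the naive ordering $XD$ is not.

```python
import numpy as np

# Hedged discrete sketch of one feature of the Weyl calculus: a real symbol
# should quantise to a self-adjoint operator.  For the symbol a(x, xi) = x*xi,
# the Weyl quantisation is the symmetrised (X D + D X)/2, in contrast with the
# non-self-adjoint ordering X D.  We model X as multiplication by the grid
# point and D = -i d/dx by a periodic central difference; both are Hermitian
# matrices, so the symmetrised combination is Hermitian as well.
N = 64
h = 2 * np.pi / N
x = h * np.arange(N)

X = np.diag(x)
shift = np.roll(np.eye(N), 1, axis=1)      # cyclic shift: (S v)_i = v_{i+1}
D = (shift - shift.T) / (2j * h)           # Hermitian central difference for -i d/dx

weyl = (X @ D + D @ X) / 2

herm_defect_weyl = np.linalg.norm(weyl - weyl.conj().T)
herm_defect_xd = np.linalg.norm(X @ D - (X @ D).conj().T)
print(herm_defect_weyl, herm_defect_xd)
```

The symmetrised quantisation is Hermitian to machine precision, while the one-sided ordering has a visibly nonzero self-adjointness defect.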

I decided to write some notes on the Weyl functional calculus (also known as Weyl quantisation), and to sketch the applications of this calculus to the theory of pseudodifferential operators. They are mostly for my own benefit (so that I won't have to redo these particular calculations again), but perhaps they will also be of interest to some readers here. (Of course, this material is also covered in many other places, e.g. Folland's "Harmonic Analysis in Phase Space".)

Let $n$ be a large integer, and let $M_n$ be drawn from the Gaussian Unitary Ensemble (GUE), i.e. let $M_n$ be the random Hermitian $n \times n$ matrix with probability distribution

$\displaystyle C_n e^{-\hbox{tr}(M^2)/2}\ dM$

where $dM$ is a Haar measure on Hermitian matrices and $C_n$ is the normalisation constant required to make the distribution of unit mass. The eigenvalues $\lambda_1 \leq \dots \leq \lambda_n$ of this matrix are then a coupled family of $n$ real random variables. For any $1 \leq k \leq n$, we can define the *$k$-point correlation function* $\rho_k(x_1,\dots,x_k)$ to be the unique symmetric measure on ${\bf R}^k$ such that

$\displaystyle \int_{{\bf R}^k} F(x_1,\dots,x_k) \rho_k(x_1,\dots,x_k)\ dx_1 \dots dx_k = {\bf E} \sum_{i_1,\dots,i_k \hbox{ distinct}} F(\lambda_{i_1},\dots,\lambda_{i_k})$

for all test functions $F$.
A standard computation (given for instance in these lecture notes of mine) gives the *Ginibre formula*

$\displaystyle \rho_n(\lambda_1,\dots,\lambda_n) = C'_n e^{-\sum_{j=1}^n \lambda_j^2/2} \prod_{1 \leq i < j \leq n} |\lambda_i - \lambda_j|^2$

for the $n$-point correlation function, where $C'_n$ is another normalisation constant. Using Vandermonde determinants, one can rewrite this expression in determinantal form as

$\displaystyle \rho_n(\lambda_1,\dots,\lambda_n) = \det( K_n(\lambda_i,\lambda_j) )_{1 \leq i,j \leq n}$
where the kernel $K_n$ is given by

$\displaystyle K_n(x,y) := \sum_{k=0}^{n-1} \phi_k(x) \phi_k(y)$

where $\phi_k(x) := P_k(x) e^{-x^2/4}$ and $P_0, P_1, P_2, \dots$ are the ($L^2$-normalised) Hermite polynomials associated to the weight $e^{-x^2/2}$ (thus the $\phi_k$ are an orthonormal family in $L^2({\bf R})$, with each $P_k$ being a polynomial of degree $k$). Integrating out one or more of the variables, one is led to the *Gaudin-Mehta formula*

$\displaystyle \rho_k(x_1,\dots,x_k) = \det( K_n(x_i,x_j) )_{1 \leq i,j \leq k}. \ \ \ \ \ (1)$

(In particular, this identity determines the normalisation constant $C'_n$ in the previous formula.) Again, see these lecture notes for details.
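As a sanity check on the Vandermonde rewriting, one can verify the $n=2$ case numerically (a small check of my own, not from the notes): with the first two Hermite functions $\phi_0(x) = e^{-x^2/4}/(2\pi)^{1/4}$ and $\phi_1(x) = x e^{-x^2/4}/(2\pi)^{1/4}$, the determinant $\det(K_2(x_i,x_j))$ matches the Ginibre density, the constant working out to $1/(2\pi)$ in this normalisation.

```python
import numpy as np

# Hedged numerical check (n = 2) of the Vandermonde rewriting: the Ginibre
# density C' * exp(-(x^2+y^2)/2) * (x-y)^2 agrees pointwise with
# det(K_2(x_i, x_j)) built from the first two L^2-normalised Hermite functions,
#   phi_0(x) = e^{-x^2/4} / (2 pi)^{1/4},  phi_1(x) = x e^{-x^2/4} / (2 pi)^{1/4},
# with the constant C' = 1 / (2 pi) in this normalisation.
phi0 = lambda x: np.exp(-x ** 2 / 4) / (2 * np.pi) ** 0.25
phi1 = lambda x: x * np.exp(-x ** 2 / 4) / (2 * np.pi) ** 0.25
K = lambda x, y: phi0(x) * phi0(y) + phi1(x) * phi1(y)

x, y = 0.7, -1.3
det_form = K(x, x) * K(y, y) - K(x, y) ** 2
ginibre_form = np.exp(-(x ** 2 + y ** 2) / 2) * (x - y) ** 2 / (2 * np.pi)
print(det_form, ginibre_form)
```

The agreement here is exact (not just asymptotic), reflecting the algebraic identity $\det(K_2(x_i,x_j)) = (\phi_0(x)\phi_1(y) - \phi_1(x)\phi_0(y))^2$.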

The functions $\phi_k$ can be viewed as an orthonormal basis of eigenfunctions for the *harmonic oscillator operator*

$\displaystyle L := -\frac{d^2}{dx^2} + \frac{x^2}{4};$

indeed it is a classical fact that

$\displaystyle L \phi_k = (k + \frac{1}{2}) \phi_k.$

As such, the kernel $K_n$ can be viewed as the integral kernel of the spectral projection operator $1_{[0,n]}(L)$.
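These facts can be probed numerically; the sketch below (my own, assuming the standard three-term recurrence $\phi_{k+1}(x) = (x\phi_k(x) - \sqrt{k}\,\phi_{k-1}(x))/\sqrt{k+1}$ for the Hermite functions in this normalisation) builds $K_n$ on a grid and verifies two hallmarks of a rank-$n$ spectral projection: $\int K_n(x,x)\ dx = n$, and the reproducing property $\int K_n(x,t) K_n(t,y)\ dt = K_n(x,y)$.

```python
import numpy as np

# Build phi_0, ..., phi_{n-1} by the three-term recurrence
#   phi_{k+1}(x) = (x phi_k(x) - sqrt(k) phi_{k-1}(x)) / sqrt(k+1)
# and check that K_n behaves like the kernel of a rank-n spectral projection:
# its diagonal integrates to n, and it reproduces itself under composition.
n = 10
t = np.linspace(-12, 12, 1201)
dt = t[1] - t[0]

phis = [np.exp(-t ** 2 / 4) / (2 * np.pi) ** 0.25]     # phi_0
phis.append(t * phis[0])                               # phi_1
for k in range(1, n - 1):
    phis.append((t * phis[k] - np.sqrt(k) * phis[k - 1]) / np.sqrt(k + 1))
Phi = np.array(phis)                   # rows phi_0 .. phi_{n-1} on the grid

Kn = Phi.T @ Phi                       # K_n(t_i, t_j)
trace = np.sum(np.diag(Kn)) * dt       # int K_n(x, x) dx, should be ~ n
proj_err = np.max(np.abs(Kn @ Kn * dt - Kn))
print(trace, proj_err)
```

The grid is taken wide enough to contain essentially all of the mass of $\phi_0,\dots,\phi_{n-1}$, which is concentrated in $|x| \lesssim 2\sqrt{n}$.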

From (1) we see that the fine-scale structure of the eigenvalues of GUE is controlled by the asymptotics of $K_n$ as $n \rightarrow \infty$. The two main asymptotics of interest are given by the following lemmas:

Lemma 1 (Asymptotics of $K_n$ in the bulk) Let $x_0 \in (-2,2)$, and let $\rho_{sc}(x_0) := \frac{1}{2\pi} (4 - x_0^2)^{1/2}$ be the semicircular law density at $x_0$. Then, we have

$\displaystyle \frac{1}{\sqrt{n} \rho_{sc}(x_0)} K_n\left( \sqrt{n} x_0 + \frac{y}{\sqrt{n} \rho_{sc}(x_0)}, \sqrt{n} x_0 + \frac{z}{\sqrt{n} \rho_{sc}(x_0)} \right) \rightarrow \frac{\sin(\pi(y-z))}{\pi(y-z)}$

as $n \rightarrow \infty$ for any fixed $y, z \in {\bf R}$ (removing the singularity at $y = z$ in the usual manner).

Lemma 2 (Asymptotics of $K_n$ at the edge) We have

$\displaystyle \frac{1}{n^{1/6}} K_n\left( 2\sqrt{n} + \frac{y}{n^{1/6}}, 2\sqrt{n} + \frac{z}{n^{1/6}} \right) \rightarrow \frac{Ai(y) Ai'(z) - Ai'(y) Ai(z)}{y - z}$

as $n \rightarrow \infty$ for any fixed $y, z \in {\bf R}$, where $Ai$ is the Airy function

$\displaystyle Ai(y) := \frac{1}{\pi} \int_0^\infty \cos\left( \frac{t^3}{3} + ty \right)\ dt,$

and again removing the singularity at $y = z$ in the usual manner.

The proof of these asymptotics usually proceeds via computing the asymptotics of Hermite polynomials, together with the Christoffel-Darboux formula; this is for instance the approach taken in the previous notes. However, there is a slightly different approach that is closer in spirit to the methods of semi-classical analysis, which was briefly mentioned in the previous notes but not elaborated upon. For sake of completeness, I am recording some notes on this approach here, although to focus on the main ideas I will not be completely rigorous in the derivation (ignoring issues such as convergence of integrals or of operators, or (removable) singularities in kernels caused by zeroes in the denominator).
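Before turning to the derivation, one can already probe the bulk asymptotics numerically (a quick check of my own, not from the notes): at the centre of the bulk one has $\rho_{sc}(0) = 1/\pi$, and the rescaled kernel should approach the Dyson sine kernel $\sin(\pi(y-z))/(\pi(y-z))$.

```python
import numpy as np

# Numerical check of the bulk asymptotics at x_0 = 0, where the semicircular
# density is rho_sc(0) = 1/pi: the rescaled kernel
#   K_n(y / (sqrt(n) rho), z / (sqrt(n) rho)) / (sqrt(n) rho)
# should approach the sine kernel sin(pi (y - z)) / (pi (y - z)) as n -> oo.
def Kn(n, xs):
    """K_n(x, x') on all pairs from the points xs, via the phi_k recurrence."""
    xs = np.asarray(xs, dtype=float)
    p_prev = np.exp(-xs ** 2 / 4) / (2 * np.pi) ** 0.25    # phi_0
    p_curr = xs * p_prev                                   # phi_1
    K = np.outer(p_prev, p_prev) + np.outer(p_curr, p_curr)
    for k in range(1, n - 1):
        p_prev, p_curr = p_curr, (xs * p_curr - np.sqrt(k) * p_prev) / np.sqrt(k + 1)
        K += np.outer(p_curr, p_curr)
    return K

n = 400
rho = np.sqrt(n) / np.pi               # eigenvalue density near the origin
y, z = 0.3, 0.0
K = Kn(n, [y / rho, z / rho]) / rho
print(K[0, 0], K[0, 1], np.sinc(y - z))   # np.sinc(u) = sin(pi u) / (pi u)
```

Already at $n = 400$ the diagonal value is close to $1$ and the off-diagonal value is close to $\hbox{sinc}(y-z)$, consistent with Lemma 1.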

I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on phase space. This brief article, which overlaps to some extent with my article on the Schrödinger equation, introduces the concept of *phase space*, which is used to describe both the positions and momenta of a system in both classical and quantum mechanics, although in the latter one has to accept a certain amount of ambiguity (or non-commutativity, if one prefers) in this description thanks to the uncertainty principle. (Note that positions alone are not sufficient to fully characterise the state of a system; this observation essentially goes all the way back to Zeno with his arrow paradox.)

Phase space is also used in pure mathematics, where it is used to simultaneously describe position (or time) and frequency; thus the term “time-frequency analysis” is sometimes used to describe phase space-based methods in analysis. The counterpart of classical mechanics is then symplectic geometry and Hamiltonian ODE, while the counterpart of quantum mechanics is the theory of linear differential and pseudodifferential operators. The former is essentially the “high-frequency limit” of the latter; this can be made more precise using the techniques of microlocal analysis, semi-classical analysis, and geometric quantisation.

As usual, I will highlight another author’s PCM article in this post, this one being Frank Kelly‘s article “The mathematics of traffic in networks“, a subject which, as a resident of Los Angeles, I can relate to on a personal level :-) . Frank’s article also discusses in detail Braess’s paradox, which is the rather unintuitive fact that adding extra capacity to a network can sometimes *increase* the overall delay in the network, by inadvertently redirecting more traffic through bottlenecks! If nothing else, this paradox demonstrates that the mathematics of traffic is non-trivial.

I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on the Schrödinger equation – the fundamental equation of motion of quantum particles, possibly in the presence of an external field. My focus here is on the relationship between the Schrödinger equation of motion for wave functions (and the closely related Heisenberg equation of motion for quantum observables), and Hamilton’s equations of motion for classical particles (and the closely related Poisson equation of motion for classical observables). There is also some brief material on semiclassical analysis, scattering theory, and spectral theory, though with only a little more than 5 pages to work with in all, I could not devote much detail to these topics. (In particular, nonlinear Schrödinger equations, a favourite topic of mine, are not covered at all.)
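The semiclassical correspondence discussed in the article can be illustrated with a toy computation (mine, not from the article): a free quantum wave packet with mean momentum $p_0$, evolved exactly via the Fourier multiplier $e^{-ik^2 t/2}$, has its centre of mass travel along the classical trajectory $x(t) = x_0 + p_0 t$.

```python
import numpy as np

# Toy illustration of the quantum-classical correspondence: evolve a Gaussian
# wave packet with mean momentum p0 under the free Schrodinger equation
# i u_t = -(1/2) u_xx, using the exact Fourier propagator exp(-i k^2 t / 2),
# and check that the packet's centre of mass follows x(t) = x0 + p0 t.
N, L = 1024, 80.0
x = -L / 2 + L * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

x0, p0, T = -10.0, 2.0, 5.0
u = np.exp(-(x - x0) ** 2 / 2) * np.exp(1j * p0 * x)    # packet at x0, momentum p0
uT = np.fft.ifft(np.exp(-1j * k ** 2 * T / 2) * np.fft.fft(u))

density = np.abs(uT) ** 2
centre = np.sum(x * density) / np.sum(density)
print(centre, x0 + p0 * T)             # quantum mean position vs classical x(T)
```

The packet spreads as it moves (a quantum effect with no classical analogue), but its mean position obeys Hamilton's equations exactly, in accordance with Ehrenfest's theorem for the free Hamiltonian.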

As I said before, I will try to link to at least one other PCM article in every post in this series. Today I would like to highlight Madhu Sudan‘s delightful article on information and coding theory, “Reliable transmission of information“.

[*Update*, Oct 3: typos corrected.]

[*Update*, Oct 9: more typos corrected.]
