In contrast to previous notes, in this set of notes we shall focus exclusively on Fourier analysis in the one-dimensional setting ${\bf R}$ for simplicity of notation, although all of the results here have natural extensions to higher dimensions. Depending on the physical context, one can view the physical domain ${\bf R}$ as representing either space or time; we will mostly think in terms of the former interpretation, even though the standard terminology of “time-frequency analysis”, which we will make more prominent use of in later notes, clearly originates from the latter.
In previous notes we have often performed various localisations in either physical space or Fourier space, for instance in order to take advantage of the uncertainty principle. One can formalise these operations in terms of the functional calculus of two basic operations on Schwartz functions $f \in \mathcal{S}({\bf R})$: the position operator $X: \mathcal{S}({\bf R}) \rightarrow \mathcal{S}({\bf R})$, defined by

$$ Xf(x) := x f(x), $$

and the momentum operator $D: \mathcal{S}({\bf R}) \rightarrow \mathcal{S}({\bf R})$, defined by

$$ Df(x) := \frac{1}{2\pi i} \frac{d}{dx} f(x). \ \ \ \ \ (1) $$

(The terminology comes from quantum mechanics, where it is customary to also insert a small constant $\hbar$ on the right-hand side of (1) in accordance with de Broglie’s law. Such a normalisation is also used in several branches of mathematics, most notably semiclassical analysis and microlocal analysis, where it becomes profitable to consider the semiclassical limit $\hbar \rightarrow 0$, but we will not emphasise this perspective here.) The momentum operator can be viewed as the counterpart to the position operator, but in frequency space instead of physical space, since we have the standard identity

$$ \widehat{Df}(\xi) = \xi \hat{f}(\xi) $$

for any $\xi \in {\bf R}$ and $f \in \mathcal{S}({\bf R})$. We observe that both operators $X, D$ are formally self-adjoint in the sense that

$$ \langle Xf, g \rangle = \langle f, Xg \rangle, \quad \langle Df, g \rangle = \langle f, Dg \rangle $$

for all $f, g \in \mathcal{S}({\bf R})$, where we use the $L^2({\bf R})$ Hermitian inner product

$$ \langle f, g \rangle := \int_{\bf R} f(x) \overline{g(x)}\ dx. $$
Clearly, for any polynomial $P(x)$ of one real variable $x$ (with complex coefficients), the operator $P(X)$ is given by the spatial multiplier operator

$$ (P(X) f)(x) = P(x) f(x), $$

and similarly the operator $P(D)$ is given by the Fourier multiplier operator

$$ \widehat{P(D) f}(\xi) = P(\xi) \hat f(\xi). $$
Inspired by this, if $m: {\bf R} \rightarrow {\bf C}$ is any smooth function that obeys the derivative bounds

$$ \frac{d^j}{dx^j} m(x) = O_{m,j}\left( \langle x \rangle^{O_{m,j}(1)} \right) \ \ \ \ \ (2) $$

for all $j \geq 0$ and $x \in {\bf R}$, where $\langle x \rangle := (1+|x|^2)^{1/2}$ (that is to say, all derivatives of $m$ grow at most polynomially), then we can define the spatial multiplier operator $m(X): \mathcal{S}({\bf R}) \rightarrow \mathcal{S}({\bf R})$ by the formula

$$ (m(X) f)(x) := m(x) f(x); $$

one can easily verify from several applications of the Leibniz rule that $m(X)$ maps Schwartz functions to Schwartz functions. We refer to $m$ as the symbol of this spatial multiplier operator. In a similar fashion, we define the Fourier multiplier operator $m(D)$ associated to the symbol $m$ by the formula

$$ \widehat{m(D) f}(\xi) := m(\xi) \hat f(\xi). $$
For instance, any constant coefficient linear differential operator $f \mapsto \sum_{k=0}^n c_k \frac{d^k f}{dx^k}$ can be written in this notation as

$$ \sum_{k=0}^n c_k \frac{d^k f}{dx^k} = m(D) f, \quad \hbox{where } m(\xi) := \sum_{k=0}^n c_k (2\pi i \xi)^k; $$

however there are many Fourier multiplier operators that are not of this form, such as the fractional derivative operators $\langle D \rangle^s$ for non-integer values of $s$, which is the Fourier multiplier operator with symbol $\langle \xi \rangle^s$. It is also very common to use spatial cutoffs $\psi(X)$ and Fourier cutoffs $\psi(D)$ for various bump functions $\psi$ to localise functions in either space or frequency; we have seen several examples of such cutoffs in action in previous notes (often in the higher dimensional setting ${\bf R}^d$).
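As a quick concrete illustration (my own sketch, not part of the original notes), one can realise a Fourier multiplier $m(D)$ numerically on a periodic grid via the fast Fourier transform; the grid parameters and the choice of symbol $\langle \xi \rangle^{1/2}$ below are arbitrary.

```python
# Sketch: applying a Fourier multiplier m(D) to a sampled function via the FFT,
# with the convention Df = (1/(2 pi i)) f', so that m(D) acts as multiplication
# by m(xi) on the frequency side (frequencies in cycles per unit length).
import numpy as np

N, L = 1024, 40.0                      # samples and window length (arbitrary)
x = (np.arange(N) - N // 2) * (L / N)  # spatial grid centred at the origin
xi = np.fft.fftfreq(N, d=L / N)        # dual frequency grid

f = np.exp(-np.pi * x**2)              # a Gaussian test function

def fourier_multiplier(m, f):
    """Compute m(D) f by multiplying the DFT of f by the symbol m(xi)."""
    return np.fft.ifft(m(xi) * np.fft.fft(f))

# Example: the "fractional derivative" multiplier <xi>^s with s = 1/2.
g = fourier_multiplier(lambda xi: (1 + xi**2) ** 0.25, f)

# Sanity check: the constant symbol m = 1 reproduces f up to rounding.
assert np.allclose(fourier_multiplier(lambda xi: np.ones_like(xi), f), f)
```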
We observe that the maps $m \mapsto m(X)$ and $m \mapsto m(D)$ are ring homomorphisms, thus for instance

$$ m_1(D) + m_2(D) = (m_1 + m_2)(D) $$

and

$$ m_1(D) m_2(D) = (m_1 m_2)(D) $$

for any $m_1, m_2$ obeying the derivative bounds (2); also $\overline{m}(D)$ is formally adjoint to $m(D)$ in the sense that

$$ \langle m(D) f, g \rangle = \langle f, \overline{m}(D) g \rangle $$

for $f, g \in \mathcal{S}({\bf R})$, and similarly for $m(X)$ and $\overline{m}(X)$. One can interpret these facts as part of the functional calculus of the operators $X, D$, which can be interpreted as densely defined self-adjoint operators on $L^2({\bf R})$. However, in this set of notes we will not develop the spectral theory necessary in order to fully set out this functional calculus rigorously.
In the field of PDE and ODE, it is also very common to study variable coefficient linear differential operators

$$ f \mapsto \sum_{k=0}^n c_k(x) \frac{d^k f}{dx^k}, $$

where the $c_0,\dots,c_n$ are now functions of the spatial variable $x$ obeying the derivative bounds (2). A simple example is the quantum harmonic oscillator Hamiltonian $Hf(x) := -f''(x) + x^2 f(x)$. One can rewrite this operator in our notation as

$$ H = 4\pi^2 D^2 + X^2, $$

and so it is natural to interpret this operator as a combination $a(X,D)$ of both the position operator $X$ and the momentum operator $D$, where the symbol $a: {\bf R} \times {\bf R} \rightarrow {\bf C}$ of this operator is the function

$$ a(x,\xi) := 4\pi^2 \xi^2 + x^2. $$

Indeed, from the Fourier inversion formula

$$ f(x) = \int_{\bf R} \hat f(\xi) e^{2\pi i x \xi}\ d\xi $$

for any $f \in \mathcal{S}({\bf R})$ we have

$$ x^j D^k f(x) = \int_{\bf R} x^j \xi^k \hat f(\xi) e^{2\pi i x \xi}\ d\xi, $$

and hence on multiplying by coefficients $c_{j,k}$ and summing we have

$$ \sum_{j,k} c_{j,k} x^j D^k f(x) = \int_{\bf R} a(x,\xi) \hat f(\xi) e^{2\pi i x \xi}\ d\xi, \quad \hbox{where } a(x,\xi) := \sum_{j,k} c_{j,k} x^j \xi^k. $$
Inspired by this, we can introduce the Kohn-Nirenberg quantisation by defining the operator $a(X,D): \mathcal{S}({\bf R}) \rightarrow \mathcal{S}({\bf R})$ by the formula

$$ a(X,D) f(x) := \int_{\bf R} a(x,\xi) \hat f(\xi) e^{2\pi i x \xi}\ d\xi $$

whenever $f \in \mathcal{S}({\bf R})$ and $a: {\bf R} \times {\bf R} \rightarrow {\bf C}$ is any smooth function obeying the derivative bounds

$$ \frac{\partial^j}{\partial x^j} \frac{\partial^k}{\partial \xi^k} a(x,\xi) = O_{a,j,k}\left( \langle x \rangle^{O_{a,j}(1)} \right) $$

for all $j, k \geq 0$ and $x, \xi \in {\bf R}$ (note carefully that the exponent in $\langle x \rangle$ on the right-hand side is required to be uniform in $k$). This quantisation clearly generalises both the spatial multiplier operators $m(X)$ and the Fourier multiplier operators $m(D)$ defined earlier, which correspond to the cases when the symbol $a(x,\xi)$ is a function of $x$ only or of $\xi$ only respectively. Thus we have combined the physical space ${\bf R} = \{ x: x \in {\bf R} \}$ and the frequency space ${\bf R} = \{ \xi: \xi \in {\bf R} \}$ into a single domain, known as phase space ${\bf R} \times {\bf R} = \{ (x,\xi): x, \xi \in {\bf R} \}$. The term “time-frequency analysis” encompasses analysis based on decompositions and other manipulations of phase space, in much the same way that “Fourier analysis” encompasses analysis based on decompositions and other manipulations of frequency space. We remark that the Kohn-Nirenberg quantization is not the only choice of quantization one could use; see Remark 19 below.
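To see the Kohn-Nirenberg formula in action, here is a small numerical sketch (mine, with arbitrary discretisation parameters) that implements $a(X,D)$ directly as a Riemann sum; as a check, with the symbol $a(x,\xi) = x^2 + \xi^2$ the standard gaussian $e^{-\pi x^2}$ should be an eigenfunction of $a(X,D) = X^2 + D^2$ with eigenvalue $\frac{1}{2\pi}$.

```python
# Sketch: direct discretisation of the Kohn-Nirenberg quantisation
#   a(X,D) f(x) = int a(x,xi) fhat(xi) e^{2 pi i x xi} dxi.
import numpy as np

def kohn_nirenberg(a, f, x, xi):
    """Apply a(X,D) to samples of f on the grid x, using frequency grid xi."""
    dx, dxi = x[1] - x[0], xi[1] - xi[0]
    # fhat(xi) = int f(x) e^{-2 pi i x xi} dx, as a Riemann sum
    fhat = np.exp(-2j * np.pi * np.outer(xi, x)) @ f * dx
    # integrate a(x,xi) fhat(xi) e^{2 pi i x xi} over the frequency window
    kernel = a(x[:, None], xi[None, :]) * np.exp(2j * np.pi * np.outer(x, xi))
    return kernel @ fhat * dxi

x = np.linspace(-10, 10, 801)
xi = np.linspace(-10, 10, 801)
f = np.exp(-np.pi * x**2)

# With a(x,xi) = x^2 + xi^2 one has a(X,D) = X^2 + D^2, and
# D^2 e^{-pi x^2} = (1/(2 pi) - x^2) e^{-pi x^2}, so a(X,D) f = f/(2 pi).
g = kohn_nirenberg(lambda x, xi: x**2 + xi**2, f, x, xi)
print(np.max(np.abs(g - f / (2 * np.pi))))  # small discretisation error
```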
In principle, the quantisations $a(X,D)$ are potentially very useful for such tasks as inverting variable coefficient linear operators, or localising a function simultaneously in physical and Fourier space. However, a fundamental difficulty arises: the map from symbols $a$ to operators $a(X,D)$ is now no longer a ring homomorphism; in particular,

$$ a_1(X,D) a_2(X,D) \neq (a_1 a_2)(X,D) $$

in general. Fundamentally, this is due to the fact that pointwise multiplication of symbols is a commutative operation, whereas the composition of operators such as $X$ and $D$ does not necessarily commute. This lack of commutativity can be measured by introducing the commutator

$$ [A,B] := AB - BA $$

of two operators $A, B$, and noting from the product rule that

$$ [X,D] = \frac{i}{2\pi} \neq 0. $$

(In the language of Lie groups and Lie algebras, this tells us that $X, D$ are (up to complex constants) the standard Lie algebra generators of the Heisenberg group.) From a quantum mechanical perspective, this lack of commutativity is the root cause of the uncertainty principle that prevents one from simultaneously localizing in both position and momentum past a certain point. Here is one basic way of formalising this principle:
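The commutator identity can be checked symbolically in a few lines; the following sketch (mine) assumes sympy and the normalisation $Df = \frac{1}{2\pi i} f'$ used above.

```python
# Symbolic check that [X, D] = i/(2 pi) as an operator identity.
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')(x)

X = lambda u: x * u                               # position operator
D = lambda u: sp.diff(u, x) / (2 * sp.pi * sp.I)  # momentum operator

commutator = sp.simplify(X(D(f)) - D(X(f)))
print(commutator)  # prints I*f(x)/(2*pi), i.e. [X, D] = i/(2 pi)
```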
Exercise 2 (Heisenberg uncertainty principle) For any $x_0, \xi_0 \in {\bf R}$ and $f \in \mathcal{S}({\bf R})$, show that

$$ \| (X - x_0) f \|_{L^2({\bf R})} \| (D - \xi_0) f \|_{L^2({\bf R})} \geq \frac{1}{4\pi} \| f \|_{L^2({\bf R})}^2. $$

(Hint: evaluate the expression $\langle [X - x_0, D - \xi_0] f, f \rangle$ in two different ways and apply the Cauchy-Schwarz inequality.) Informally, this exercise asserts that the spatial uncertainty $\Delta x$ and the frequency uncertainty $\Delta \xi$ of a function obey the Heisenberg uncertainty relation $\Delta x \cdot \Delta \xi \gtrsim 1$.
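As a numerical illustration of the exercise (my own sketch, not part of it): gaussian wave packets are the extremisers of this inequality, so for a modulated gaussian the uncertainty product should land essentially on the constant $\frac{1}{4\pi}$.

```python
# Numerical check of the Heisenberg product for a modulated gaussian,
# using Df = (1/(2 pi i)) f' approximated by finite differences.
import numpy as np

x = np.linspace(-30, 30, 20001)
dx = x[1] - x[0]
s, x0, xi0 = 2.0, 0.0, 3.0
f = np.exp(-np.pi * x**2 / s) * np.exp(2j * np.pi * xi0 * x)

norm2 = np.sum(np.abs(f)**2) * dx                 # ||f||^2
Xf = (x - x0) * f                                 # (X - x0) f
Df = np.gradient(f, dx) / (2j * np.pi) - xi0 * f  # (D - xi0) f

lhs = np.sqrt(np.sum(np.abs(Xf)**2) * dx * np.sum(np.abs(Df)**2) * dx)
print(lhs / norm2, 1 / (4 * np.pi))  # near-equality for gaussians
```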
Nevertheless, one still has the correspondence principle, which asserts that in certain regimes (which, with our choice of normalisations, corresponds to the high-frequency regime), quantum mechanics continues to behave like a commutative theory, and one can sometimes proceed as if the operators $X, D$ (and the various operators $a(X,D)$ constructed from them) commute up to “lower order” errors. This can be formalised using the pseudodifferential calculus, which we give below the fold, in which we restrict the symbol $a$ to certain “symbol classes” of various orders (which then restricts $a(X,D)$ to be pseudodifferential operators of various orders), and obtain approximate identities such as

$$ a_1(X,D) a_2(X,D) \approx (a_1 a_2)(X,D), $$

where the error between the left and right-hand sides is of “lower order” and in fact enjoys a useful asymptotic expansion. As a first approximation to this calculus, one can think of functions $f \in \mathcal{S}({\bf R})$ as having some sort of “phase space portrait” $\tilde f(x,\xi)$ which somehow combines the physical space representation $f(x)$ with its Fourier representation $\hat f(\xi)$, and pseudodifferential operators $a(X,D)$ behave approximately like “phase space multiplier operators” in this representation in the sense that

$$ \widetilde{a(X,D) f}(x,\xi) \approx a(x,\xi) \tilde f(x,\xi). $$
Unfortunately the uncertainty principle (or the non-commutativity of $X$ and $D$) prevents us from making these approximations perfectly precise, and it is not always clear how to even define a phase space portrait $\tilde f$ of a function $f$ precisely (although there are certain popular candidates for such a portrait, such as the FBI transform (also known as the Gabor transform in the signal processing literature), or the Wigner quasiprobability distribution, each of which has some advantages and disadvantages). Nevertheless, even if the concept of a phase space portrait is somewhat fuzzy, it is of great conceptual benefit both within mathematics and outside of it. For instance, the musical score one assigns to a piece of music can be viewed as a phase space portrait of the sound waves generated by that music.
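For instance, here is a minimal sketch (mine) of one such candidate portrait, the Gabor transform: one correlates $f$ against gaussian wave packets concentrated at phase space points $(x_0, \xi_0)$, and the modulus of the output concentrates near the phase space region occupied by $f$.

```python
# Sketch: Gabor coefficients <f, packet(x0, xi0)> as a crude phase space portrait.
import numpy as np

def gabor(f, x, x0, xi0):
    """Inner product of f with an L^2-normalised gaussian wave packet."""
    dx = x[1] - x[0]
    packet = 2**0.25 * np.exp(-np.pi * (x - x0)**2 + 2j * np.pi * xi0 * x)
    return np.sum(f * np.conj(packet)) * dx

x = np.linspace(-20, 20, 4001)
f = np.exp(-np.pi * (x - 3)**2 + 2j * np.pi * 5 * x)  # a packet at (3, 5)

# |Vf| is large near (3, 5) and decays away from it, over a region of area ~ 1.
for pt in [(3, 5), (3, 8), (-2, 5)]:
    print(pt, abs(gabor(f, x, *pt)))
```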
To complement the pseudodifferential calculus we have the basic Calderón-Vaillancourt theorem, which asserts that pseudodifferential operators of order zero are Calderón-Zygmund operators and thus bounded on $L^p({\bf R})$ for $1 < p < \infty$. The standard proof of this theorem is a classic application of one of the basic techniques in harmonic analysis, namely the exploitation of almost orthogonality; the proof we will give here will achieve this through the elegant device of the Cotlar-Stein lemma.
Pseudodifferential operators (especially when generalised to higher dimensions ${\bf R}^d$) are a fundamental tool in the theory of linear PDE, as well as related fields such as semiclassical analysis, microlocal analysis, and geometric quantisation. There is an even wider class of operators that is also of interest, namely the Fourier integral operators, which roughly speaking not only approximately multiply the phase space portrait $\tilde f(x,\xi)$ of a function by some multiplier $a(x,\xi)$, but also move the portrait around by a canonical transformation. However, the development of the theory of these operators is beyond the scope of these notes; see for instance the texts of Hörmander or Eskin.
This set of notes is only the briefest introduction to the theory of pseudodifferential operators. Many texts are available that cover the theory in more detail, for instance this text of Taylor.
The fundamental notions of calculus, namely differentiation and integration, are often viewed as being the quintessential concepts in mathematical analysis, as their standard definitions involve the concept of a limit. However, it is possible to capture most of the essence of these notions by purely algebraic means (almost completely avoiding the use of limits, Riemann sums, and similar devices), which turns out to be useful when trying to generalise these concepts to more abstract situations in which it becomes convenient to permit the underlying number systems involved to be something other than the real or complex numbers, even if this makes many standard analysis constructions unavailable. For instance, the algebraic notion of a derivation often serves as a substitute for the analytic notion of a derivative in such cases, by abstracting out the key algebraic properties of differentiation, namely linearity and the Leibniz rule (also known as the product rule).
Abstract algebraic analogues of integration are less well known, but can still be developed. To motivate such an abstraction, consider the integration functional $\int_{\bf R}: \mathcal{S}({\bf R}) \rightarrow {\bf C}$ from the space $\mathcal{S}({\bf R})$ of complex-valued Schwartz functions $f: {\bf R} \rightarrow {\bf C}$ to the complex numbers, defined by

$$ \int_{\bf R} f := \int_{\bf R} f(x)\ dx, $$

where the integration on the right is the usual Lebesgue integral (or improper Riemann integral) from analysis. This functional obeys two obvious algebraic properties. Firstly, it is linear over ${\bf C}$, thus

$$ \int_{\bf R} (f + g) = \int_{\bf R} f + \int_{\bf R} g \ \ \ \ \ (1) $$

and

$$ \int_{\bf R} cf = c \int_{\bf R} f \ \ \ \ \ (2) $$

for all $f, g \in \mathcal{S}({\bf R})$ and $c \in {\bf C}$. Secondly, it is translation invariant, thus

$$ \int_{\bf R} \tau_h f = \int_{\bf R} f \ \ \ \ \ (3) $$

for all $h \in {\bf R}$, where $\tau_h f(x) := f(x - h)$ is the translation of $f$ by $h$. Motivated by the uniqueness theory of Haar measure, one might expect that these two axioms already uniquely determine $\int_{\bf R}$ after one sets a normalisation, for instance by requiring that

$$ \int_{\bf R} e^{-\pi x^2}\ dx = 1. \ \ \ \ \ (4) $$
This is not quite true as stated (one can modify the proof of the Hahn-Banach theorem, after first applying a Fourier transform, to create pathological translation-invariant linear functionals on $\mathcal{S}({\bf R})$ that are not multiples of the standard integration functional), but if one adds a mild analytical axiom, such as continuity of the functional (using the usual Schwartz topology on $\mathcal{S}({\bf R})$), then the above axioms are enough to uniquely pin down the notion of integration. Indeed, if $I: \mathcal{S}({\bf R}) \rightarrow {\bf C}$ is a continuous linear functional that is translation invariant, then from the linearity and translation invariance axioms one has

$$ I\left( \frac{\tau_h f - f}{h} \right) = 0 $$

for all $f \in \mathcal{S}({\bf R})$ and non-zero reals $h$. If $f$ is Schwartz, then as $h \rightarrow 0$, one can verify that the Newton quotients $\frac{\tau_h f - f}{h}$ converge in the Schwartz topology to $-f'$, so by the continuity axiom one has

$$ I(f') = 0. $$

Next, note that any Schwartz function of integral zero has an antiderivative which is also Schwartz, and so $I$ annihilates all zero-integral Schwartz functions, and thus must be a scalar multiple of the usual integration functional. Using the normalisation (4), we see that $I$ must therefore be the usual integration functional, giving the claimed uniqueness.
Motivated by the above discussion, we can define the notion of an abstract integration functional $I: X \rightarrow R$ taking values in some vector space $R$, and applied to inputs $f$ in some other vector space $X$ that enjoys a linear action $h \mapsto \tau_h$ (the “translation action”) of some group $G$, as being a functional which is both linear and translation invariant, thus one has the axioms (1), (2), (3) for all $f, g \in X$, scalars $c$, and $h \in G$. The previous discussion then considered the special case when $R = {\bf C}$, $X = \mathcal{S}({\bf R})$, $G = {\bf R}$, and $\tau$ was the usual translation action.
Once we have performed this abstraction, we can now present analogues of classical integration which bear very little analytic resemblance to the classical concept, but which still have much of the algebraic structure of integration. Consider for instance the situation in which we keep the complex range $R = {\bf C}$, the translation group $G = {\bf R}$, and the usual translation action $\tau$, but we replace the space $\mathcal{S}({\bf R})$ of Schwartz functions by the space $\mathrm{Poly}_{\leq n}$ of polynomials $a_0 + a_1 x + \dots + a_n x^n$ of degree at most $n$ with complex coefficients, where $n$ is a fixed natural number; note that this space is translation invariant, so it makes sense to talk about an abstract integration functional $I: \mathrm{Poly}_{\leq n} \rightarrow {\bf C}$. Of course, one cannot apply traditional integration concepts to non-zero polynomials, as they are not absolutely integrable. But one can repeat the previous arguments to show that any abstract integration functional must annihilate derivatives of polynomials of degree at most $n$:

$$ I(P') = 0 \quad \hbox{ whenever } P \in \mathrm{Poly}_{\leq n}. \ \ \ \ \ (5) $$
Clearly, every polynomial of degree at most $n-1$ is thus annihilated by $I$, which makes $I$ a scalar multiple of the functional that extracts the top coefficient $a_n$ of a polynomial. Thus, if one sets a normalisation

$$ I(x^n) = c $$

for some constant $c$, then one has

$$ I( a_n x^n + \dots + a_1 x + a_0 ) = c a_n \ \ \ \ \ (6) $$

for any polynomial $a_n x^n + \dots + a_1 x + a_0 \in \mathrm{Poly}_{\leq n}$. So we see that up to a normalising constant, the operation of extracting the top order coefficient of a polynomial of fixed degree serves as the analogue of integration. In particular, despite the fact that integration is supposed to be the “opposite” of differentiation (as indicated for instance by (5)), we see in this case that integration is basically ($n$-fold) differentiation; indeed, compare (6) with the identity

$$ \frac{d^n}{dx^n} ( a_n x^n + \dots + a_1 x + a_0 ) = n!\, a_n. $$
In particular, we see that, in contrast to the usual Lebesgue integral, the integration functional (6) can be localised to an arbitrary location: one only needs to know the germ of the polynomial at a single point $x_0$ in order to determine the value of the functional (6). This localisation property may initially seem at odds with the translation invariance, but the two can be reconciled thanks to the extremely rigid nature of the class $\mathrm{Poly}_{\leq n}$, in contrast to the Schwartz class $\mathcal{S}({\bf R})$, which admits bump functions and so can generate local phenomena that can only be detected in small regions of the underlying spatial domain, and which therefore forces any translation-invariant integration functional on such function classes to measure the function at every single point in space.
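A few lines of sympy (my own sketch; the normalisation $c = 1$ is arbitrary) make the algebraic properties of this functional concrete: translation invariance and the annihilation of derivatives can be checked directly on the top coefficient.

```python
# Sketch: the abstract integration functional on Poly_{<= n} realised as
# extraction of the top coefficient a_n (with normalisation c = 1).
import sympy as sp

x, h = sp.symbols('x h')
n = 4

def I(P):
    """Return the coefficient of x^n, i.e. the functional (6) with c = 1."""
    return sp.Poly(P, x).coeff_monomial(x**n)

P = 7 * x**4 - 2 * x**3 + x - 5
assert I(P) == 7
# Translation invariance: I(tau_h P) = I(P) for a formal shift h.
assert sp.expand(I(P.subs(x, x - h)) - I(P)) == 0
# Derivatives of polynomials of degree <= n are annihilated, as in (5).
Q = x**4 + 3 * x**2 + 1
assert I(sp.diff(Q, x)) == 0
print("all checks pass")
```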
The reversal of the relationship between integration and differentiation is also reflected in the fact that the abstract integration operation on polynomials interacts with the scaling operation in essentially the opposite way from the classical integration operation. Indeed, for classical integration on ${\bf R}$, one has

$$ \int_{\bf R} f(x/\lambda)\ dx = \lambda \int_{\bf R} f(x)\ dx $$

for Schwartz functions $f \in \mathcal{S}({\bf R})$ and $\lambda > 0$, and so in this case the integration functional obeys the scaling law

$$ I( f(\cdot/\lambda) ) = \lambda I(f). $$

In contrast, the abstract integration operation defined in (6) obeys the opposite scaling law

$$ I( P(\cdot/\lambda) ) = \lambda^{-n} I(P). \ \ \ \ \ (7) $$
Remark 1 One way to interpret what is going on is to view the integration operation (6) as a renormalised version of integration. A polynomial $P(x) = a_0 + a_1 x + \dots + a_n x^n$ is, in general, not absolutely integrable, and the partial integrals

$$ \int_0^R P(x)\ dx $$

diverge as $R \rightarrow \infty$. But if one renormalises these integrals by the factor $\frac{n+1}{R^{n+1}}$, then one recovers convergence:

$$ \lim_{R \rightarrow \infty} \frac{n+1}{R^{n+1}} \int_0^R P(x)\ dx = a_n, $$

thus giving an interpretation of (6) (with the normalisation $c=1$) as a renormalised classical integral, with the renormalisation being responsible for the unusual scaling relationship in (7). However, this interpretation is a little artificial, and it seems that it is best to view functionals such as (6) from an abstract algebraic perspective, rather than to try to force an analytic interpretation on them.
Now we return to the classical Lebesgue integration functional

$$ \int_{\bf R}: \mathcal{S}({\bf R}) \rightarrow {\bf C}. \ \ \ \ \ (8) $$

As noted earlier, this integration functional has a translation invariance associated to translations along the real line ${\bf R}$, as well as a dilation invariance by real dilation parameters $\lambda > 0$. However, if we refine the class $\mathcal{S}({\bf R})$ of functions somewhat, we can obtain a stronger family of invariances, in which we allow complex translations and dilations. More precisely, let $\mathcal{SE}$ denote the space of all functions $f: {\bf C} \rightarrow {\bf C}$ which are entire (or equivalently, are given by a Taylor series with an infinite radius of convergence around the origin) and also admit rapid decay in a sectorial neighbourhood of the real line; more precisely, there exists an $\varepsilon > 0$ such that for every $A > 0$ there exists $C_A > 0$ such that one has the bound

$$ |f(z)| \leq C_A (1 + |z|)^{-A} $$

whenever $|\mathrm{Im}(z)| \leq \varepsilon (1 + |\mathrm{Re}(z)|)$. For want of a better name, we shall call elements of this space Schwartz entire functions. This is clearly a complex vector space. A typical example of a Schwartz entire function is the complex gaussian

$$ f(z) = e^{-\pi a z^2 + bz + c}, $$

where $a, b, c$ are complex numbers with $\mathrm{Re}(a) > 0$.
From the Cauchy integral formula (and its derivatives) we see that if $f$ lies in $\mathcal{SE}$, then the restriction of $f$ to the real line lies in $\mathcal{S}({\bf R})$; conversely, from analytic continuation we see that every function in $\mathcal{S}({\bf R})$ has at most one extension in $\mathcal{SE}$. Thus one can identify $\mathcal{SE}$ with a subspace of $\mathcal{S}({\bf R})$, and in particular the integration functional (8) is inherited by $\mathcal{SE}$, and by abuse of notation we denote the resulting functional $\int_{\bf R}: \mathcal{SE} \rightarrow {\bf C}$ as $\int_{\bf R}$ also. Note, in analogy with the situation with polynomials, that this abstract integration functional is somewhat localised; one only needs to evaluate the function $f$ on the real line, rather than the entire complex plane, in order to compute $\int_{\bf R} f$. This is consistent with the rigid nature of Schwartz entire functions, as one can uniquely recover the entire function from its values on the real line by analytic continuation.
Of course, the functional $\int_{\bf R}$ remains translation invariant with respect to real translation:

$$ \int_{\bf R} \tau_h f = \int_{\bf R} f \quad \hbox{ for all } h \in {\bf R}. $$

However, thanks to contour shifting, we now also have translation invariance with respect to complex translation:

$$ \int_{\bf R} \tau_w f = \int_{\bf R} f \quad \hbox{ for all } w \in {\bf C}, $$

where of course we continue to define the translation operator $\tau_w$ for complex $w$ by the usual formula $\tau_w f(z) := f(z - w)$. In a similar vein, we also have the scaling law

$$ \int_{\bf R} f(z/\lambda)\ dz = \lambda \int_{\bf R} f(z)\ dz $$

for any $f \in \mathcal{SE}$, if $\lambda$ is a complex number sufficiently close to $1$ (where “sufficiently close” depends on $f$, and more precisely depends on the sectoral aperture parameter $\varepsilon$ associated to $f$); again, one can verify that $f(\cdot/\lambda)$ lies in $\mathcal{SE}$ for $\lambda$ sufficiently close to $1$. These invariances (which relocalise the integration functional $\int_{\bf R}$ onto contours other than the real line ${\bf R}$) are very useful for computing integrals, and in particular for computing gaussian integrals. For instance, the complex translation invariance tells us (after shifting by $b$) that

$$ \int_{\bf R} e^{-\pi a (x+b)^2}\ dx = \int_{\bf R} e^{-\pi a x^2}\ dx $$

when $a, b \in {\bf C}$ with $\mathrm{Re}(a) > 0$, and then an application of the complex scaling law (and a continuity argument, observing that there is a compact path connecting $a$ to $1$ in the right half plane) gives

$$ \int_{\bf R} e^{-\pi a x^2}\ dx = a^{-1/2} \int_{\bf R} e^{-\pi x^2}\ dx, $$

using the branch of $a^{-1/2}$ on the right half-plane for which $1^{-1/2} = 1$. Using the normalisation (4) we thus have

$$ \int_{\bf R} e^{-\pi a x^2}\ dx = a^{-1/2}, $$

giving the usual gaussian integral formula

$$ \int_{\bf R} e^{-a x^2}\ dx = \sqrt{\pi}\, a^{-1/2}. $$
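This formula is easy to test numerically; the following sketch (mine) compares a Riemann sum against the principal branch of $a^{-1/2}$ for a sample complex $a$ in the right half-plane.

```python
# Numerical check of int_R e^{-pi a x^2} dx = a^{-1/2} for Re(a) > 0.
import numpy as np

a = 1.5 + 2.0j                       # any a with positive real part
x = np.linspace(-40, 40, 400001)
integral = np.sum(np.exp(-np.pi * a * x**2)) * (x[1] - x[0])
print(integral, a ** -0.5)           # agreement to many digits
```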
One can extend this sort of analysis to higher dimensions. For any natural number $n$, let $\mathcal{SE}_n$ denote the space of all functions $f: {\bf C}^n \rightarrow {\bf C}$ which are jointly entire in the sense that $f(z_1,\dots,z_n)$ can be expressed as a Taylor series in $z_1,\dots,z_n$ which is absolutely convergent for all choices of $z_1,\dots,z_n$, and such that there exists an $\varepsilon > 0$ such that for any $A > 0$ there is $C_A > 0$ for which one has the bound

$$ |f(z_1,\dots,z_n)| \leq C_A (1 + |z|)^{-A} $$

whenever $|\mathrm{Im}(z_j)| \leq \varepsilon (1 + |\mathrm{Re}(z_j)|)$ for all $1 \leq j \leq n$, where $z := (z_1,\dots,z_n)$ and $|z| := (|z_1|^2 + \dots + |z_n|^2)^{1/2}$. Again, we call such functions Schwartz entire functions; a typical example is the gaussian

$$ f(z) = e^{-\pi z^T A z + b^T z + c}, $$

where $A$ is an $n \times n$ complex symmetric matrix with positive definite real part, $b$ is a vector in ${\bf C}^n$, and $c$ is a complex number. We can then define an abstract integration functional $\int_{{\bf R}^n}: \mathcal{SE}_n \rightarrow {\bf C}$ by integration on the real slice ${\bf R}^n$:

$$ \int_{{\bf R}^n} f := \int_{{\bf R}^n} f(x)\ dx, $$

where $dx$ is the usual Lebesgue measure on ${\bf R}^n$.
By contour shifting in each of the $n$ variables $z_1,\dots,z_n$ separately, we see that $\int_{{\bf R}^n}$ is invariant with respect to complex translations of each of the $z_j$ variables, and is thus invariant under translating the joint variable $z$ by any vector in ${\bf C}^n$. One can also verify the scaling law

$$ \int_{{\bf R}^n} f(Lz)\ dz = \frac{1}{\det L} \int_{{\bf R}^n} f(z)\ dz $$

for complex $n \times n$ matrices $L$ sufficiently close to the identity, where $\det L$ denotes the determinant of $L$. This can be seen for shear transformations $L$ by Fubini's theorem and the aforementioned translation invariance, while for diagonal transformations near the identity this can be seen from $n$ applications of the one-dimensional scaling law, and the general case then follows by composition. Among other things, these laws then easily lead to the higher-dimensional generalisation

$$ \int_{{\bf R}^n} e^{-\pi z^T A z + b^T z + c}\ dz = (\det A)^{-1/2} e^{\frac{1}{4\pi} b^T A^{-1} b + c} $$
whenever $A$ is a complex symmetric matrix with positive definite real part, $b$ is a vector in ${\bf C}^n$, and $c$ is a complex number, basically by repeating the one-dimensional argument sketched earlier. Here, we choose the branch of $(\det A)^{-1/2}$, defined continuously for all matrices $A$ in the indicated class, for which $(\det A)^{-1/2} = 1$ when $A$ is the identity matrix.
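Here is a brute-force numerical confirmation (my own sketch) of this generalisation in dimension $n = 2$, with an arbitrarily chosen complex symmetric $A$ with positive definite real part.

```python
# Check: int_{R^2} e^{-pi x^T A x + b^T x + c} dx
#        = det(A)^{-1/2} exp(b^T A^{-1} b / (4 pi) + c).
import numpy as np

A = np.array([[1.0 + 0.3j, 0.2 - 0.1j],
              [0.2 - 0.1j, 1.5 + 0.2j]])   # complex symmetric, Re(A) > 0
b = np.array([0.4 + 0.2j, -0.3j])
c = 0.1 + 0.05j

t = np.linspace(-12, 12, 1201)
dx = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing='ij')
quad = A[0, 0]*X**2 + 2*A[0, 1]*X*Y + A[1, 1]*Y**2           # x^T A x
lhs = np.sum(np.exp(-np.pi*quad + b[0]*X + b[1]*Y + c)) * dx**2
rhs = np.linalg.det(A)**-0.5 * np.exp(b @ np.linalg.inv(A) @ b / (4*np.pi) + c)
print(lhs, rhs)   # equal up to discretisation error
```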
Now we turn to an integration functional suitable for computing complex gaussian integrals such as

$$ \int_{{\bf C}^n} e^{-2\pi z^* A z + 2\pi w^* z + 2\pi z^* v + c}\ dz, \ \ \ \ \ (11) $$

where $z$ is now a complex variable ranging over ${\bf C}^n$, $z^*$ is the adjoint row vector, $A$ is a complex $n \times n$ matrix with positive definite Hermitian part, $v, w$ are column vectors in ${\bf C}^n$, $c$ is a complex number, and $dz$ is $2^n$ times Lebesgue measure on ${\bf C}^n$. (The factors of two here turn out to be a natural normalisation, but they can be ignored on a first reading.) As we shall see later, such integrals are relevant when performing computations on the Gaussian Unitary Ensemble (GUE) in random matrix theory. Note that the integrand here is not complex analytic due to the presence of the complex conjugates. However, this can be dealt with by the trick of replacing the complex conjugate $\overline{z}$ by a variable $z^*$ which is formally conjugate to $z$, but which is allowed to vary independently of $z$. More precisely, let $\mathcal{SA}_n$ be the space of all functions $f(z, z^*)$ of two independent $n$-tuples

$$ z = (z_1,\dots,z_n), \quad z^* = (z_1^*,\dots,z_n^*) $$

of complex variables, which is jointly entire in all variables (in the sense defined previously, i.e. there is a joint Taylor series that is absolutely convergent for all independent choices of $z, z^*$), and such that there is an $\varepsilon > 0$ such that for every $A > 0$ there is $C_A > 0$ such that one has the bound

$$ |f(z, z^*)| \leq C_A (1 + |z|)^{-A} $$

whenever $|z_j^* - \overline{z_j}| \leq \varepsilon (1 + |z|)$ for all $1 \leq j \leq n$. We will call such functions Schwartz analytic. Note that the integrand in (11) is Schwartz analytic when $A$ has positive definite Hermitian part, if we reinterpret $z^*$ as the transpose of an independent $n$-tuple of complex variables rather than as the adjoint of $z$, in order to make the integrand entire in $z$ and $z^*$. We can then define an abstract integration functional $\int_{{\bf C}^n}: \mathcal{SA}_n \rightarrow {\bf C}$ by the formula

$$ \int_{{\bf C}^n} f := \int_{{\bf C}^n} f(z, \overline{z}^T)\ dz; $$

thus $\int_{{\bf C}^n}$ can be localised to the slice $\{ z^* = \overline{z}^T \}$ of ${\bf C}^n \times {\bf C}^n$ (though, as with previous functionals, one can use contour shifting to relocalise $\int_{{\bf C}^n}$ to other slices also). One can also write this integral as
$$ \int_{{\bf C}^n} f = \int_{{\bf R}^{2n}} f(x + iy, (x - iy)^T)\ 2^n\ dx\ dy, $$

and note that the integrand here is a Schwartz entire function on ${\bf C}^{2n}$, thus linking the Schwartz analytic integral with the Schwartz entire integral. Using this connection, one can verify that this functional $\int_{{\bf C}^n}$ is invariant with respect to translating $z$ and $z^*$ by independent shifts in ${\bf C}^n$ (thus giving a ${\bf C}^n \times {\bf C}^n$ translation symmetry), and one also has the independent dilation symmetry

$$ \int_{{\bf C}^n} f(Lz, z^* M)\ dz = \frac{1}{\det(LM)} \int_{{\bf C}^n} f(z, z^*)\ dz $$

for complex $n \times n$ matrices $L, M$ that are sufficiently close to the identity, where $\det(LM) = (\det L)(\det M)$. Arguing as before, we can then compute (11) as

$$ \int_{{\bf C}^n} e^{-2\pi z^* A z + 2\pi w^* z + 2\pi z^* v + c}\ dz = \frac{1}{\det A} e^{2\pi w^* A^{-1} v + c}. $$

In particular, this gives an integral representation for the determinant-reciprocal $\frac{1}{\det A}$ of a complex $n \times n$ matrix with positive definite Hermitian part, in terms of gaussian expressions in which $A$ only appears linearly in the exponential:

$$ \frac{1}{\det A} = \int_{{\bf C}^n} e^{-2\pi z^* A z}\ dz. $$
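In the scalar case $n = 1$ this representation reads $\int_{\bf C} e^{-2\pi \overline{z} a z}\ dz = 1/a$, with $dz$ twice Lebesgue measure on ${\bf C}$, which is simple to confirm numerically (a sketch of my own):

```python
# Check: 2 * int_{R^2} e^{-2 pi a (x^2 + y^2)} dx dy = 1/a for Re(a) > 0.
import numpy as np

a = 0.8 + 0.5j                      # positive definite Hermitian part
t = np.linspace(-8, 8, 2001)
dt = t[1] - t[0]
X, Y = np.meshgrid(t, t)
integral = 2 * np.sum(np.exp(-2 * np.pi * a * (X**2 + Y**2))) * dt**2
print(integral, 1 / a)              # the two values nearly coincide
```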
This formula is then convenient for computing statistics such as

$$ \mathop{\bf E} \frac{1}{\det(W_n - z)} $$

for random matrices $W_n$ drawn from the Gaussian Unitary Ensemble (GUE), and some choice of spectral parameter $z$ with $\mathrm{Im}(z) > 0$; we review this computation later in this post. By the trick of matrix differentiation of the determinant (as reviewed in this recent blog post), one can also use this method to compute matrix-valued statistics such as

$$ \mathop{\bf E} \frac{1}{\det(W_n - z)} \frac{1}{W_n - z}. $$

However, if one restricts attention to classical integrals over real or complex (and in particular, commuting or bosonic) variables, it does not seem possible to easily eradicate the negative determinant factors in such calculations; this is unfortunate, because many statistics of interest in random matrix theory, such as the expected Stieltjes transform

$$ \mathop{\bf E} \frac{1}{n} \hbox{tr} \frac{1}{W_n - z}, $$

which is the Stieltjes transform of the density of states, involve expressions in which such reciprocal determinants should cancel. However, it turns out (as I learned recently from Peter Sarnak and Tom Spencer) that it is possible to cancel out these negative determinant factors by balancing the bosonic gaussian integrals with an equal number of fermionic gaussian integrals, in which one integrates over a family of anticommuting variables. These fermionic integrals are closer in spirit to the polynomial integral (6) than to Lebesgue type integrals, and in particular obey a scaling law which is inverse to the Lebesgue scaling (in particular, a linear change of fermionic variables by a matrix $L$ ends up transforming a fermionic integral by $\det L$ rather than $\frac{1}{\det L}$), which conveniently cancels out the reciprocal determinants in the previous calculations. Furthermore, one can combine the bosonic and fermionic integrals into a unified integration concept, known as the Berezin integral (or Grassmann integral), in which one integrates functions of supervectors (vectors with both bosonic and fermionic components), and is of particular importance in the theory of supersymmetry in physics. (The prefix “super” in physics means, roughly speaking, that the object or concept that the prefix is attached to contains both bosonic and fermionic aspects.) When one applies this unified integration concept to gaussians, this can lead to quite compact and efficient calculations (provided that one is willing to work with “super”-analogues of various concepts in classical linear algebra, such as the supertrace or superdeterminant).
Abstract integrals of the flavour of (6) arose in quantum field theory, when physicists sought to formally compute integrals of the form

$$ \int F(x_1,\dots,x_n, \xi_1,\dots,\xi_m)\ dx_1 \dots dx_n\ d\xi_1 \dots d\xi_m, \ \ \ \ \ (14) $$

where $x_1,\dots,x_n$ are familiar commuting (or bosonic) variables (which, in particular, can often be localised to be scalar variables taking values in ${\bf R}$ or ${\bf C}$), while $\xi_1,\dots,\xi_m$ were more exotic anticommuting (or fermionic) variables, taking values in some vector space of fermions. (As we shall see shortly, one can formalise these concepts by working in a supercommutative algebra.) The integrand $F(x_1,\dots,x_n,\xi_1,\dots,\xi_m)$ was a formally analytic function of these variables, in that it could be expanded as a (formal, noncommutative) power series in the variables $x_1,\dots,x_n,\xi_1,\dots,\xi_m$. For functions $F$ that depend only on bosonic variables, it is certainly possible for such analytic functions to be in the Schwartz class and thus fall under the scope of the classical integral, as discussed previously. However, functions $F$ that depend on fermionic variables $\xi$ behave rather differently. Indeed, a fermionic variable $\xi$ must anticommute with itself, so that $\xi^2 = 0$. In particular, any power series in $\xi$ terminates after the linear term in $\xi$, so that a function $F(\xi)$ can only be analytic in $\xi$ if it is a polynomial of degree at most $1$ in $\xi$; more generally, an analytic function of $m$ fermionic variables $\xi_1,\dots,\xi_m$ must be a polynomial of degree at most $m$, and an analytic function of $n$ bosonic and $m$ fermionic variables can be Schwartz in the bosonic variables but will be polynomial in the fermionic variables. As such, to interpret the integral (14), one can use classical (Lebesgue) integration (or the variants discussed above for integrating Schwartz entire or Schwartz analytic functions) for the bosonic variables, but must use abstract integrals such as (6) for the fermionic variables, leading to the concept of Berezin integration mentioned earlier.
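To make the fermionic half of this concrete, here is a toy implementation (entirely my own sketch, with one common sign convention) of a Grassmann algebra and the Berezin integral, verifying the basic fermionic gaussian identity $\int e^{-a \theta^* \theta}\ d\theta\ d\theta^* = a$; note that the parameter $a$ appears to a positive power, the opposite of the bosonic case.

```python
# Toy Grassmann algebra: elements are dicts {tuple of generator indices: coeff},
# with anticommuting generators, plus a Berezin integral (coefficient extraction).

class Grassmann:
    def __init__(self, terms):
        self.terms = {k: v for k, v in terms.items() if v != 0}

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0) + v
        return Grassmann(out)

    def __mul__(self, other):
        out = {}
        for k1, c1 in self.terms.items():
            for k2, c2 in other.terms.items():
                if set(k1) & set(k2):
                    continue                    # theta^2 = 0
                key, sign = list(k1 + k2), 1
                for i in range(len(key)):       # sort indices, tracking the sign
                    for j in range(len(key) - 1):
                        if key[j] > key[j + 1]:
                            key[j], key[j + 1] = key[j + 1], key[j]
                            sign = -sign
                out[tuple(key)] = out.get(tuple(key), 0) + sign * c1 * c2
        return Grassmann(out)

def berezin(f, i):
    """Integrate out generator i: anticommute it to the front, then strip it."""
    out = {}
    for k, c in f.terms.items():
        if i in k:
            pos = k.index(i)
            rest = k[:pos] + k[pos + 1:]
            out[rest] = out.get(rest, 0) + (-1) ** pos * c
    return Grassmann(out)

a = 3.0
one = Grassmann({(): 1})
theta_star, theta = Grassmann({(0,): 1}), Grassmann({(1,): 1})
# The exponential series terminates: e^{-a theta* theta} = 1 - a theta* theta.
integrand = one + Grassmann({(): -a}) * theta_star * theta
result = berezin(berezin(integrand, 1), 0)  # d theta, then d theta*
print(result.terms)                         # {(): 3.0}, i.e. the integral is a
```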
In this post I would like to set out some of the basic algebraic formalism of Berezin integration, particularly with regards to integration of gaussian-type expressions, and then show how this formalism can be used to perform computations involving GUE (for instance, one can compute the density of states of GUE by this machinery without recourse to the theory of orthogonal polynomials). The use of supersymmetric gaussian integrals to analyse ensembles such as GUE appears in the work of Efetov (and was also proposed in the slightly earlier works of Parisi-Sourlas and McKane, with a related approach also appearing in the work of Wegner); the material here is adapted from this survey of Mirlin, as well as the later papers of Disertori-Pinson-Spencer and of Disertori.
One of the basic problems in the field of operator algebras is to develop a functional calculus for either a single operator $A$, or a collection $A_1, A_2, \dots, A_k$ of operators. These operators could in principle act on any function space, but typically one either considers complex matrices (which act on a complex finite dimensional space), or operators (either bounded or unbounded) on a complex Hilbert space. (One can of course also obtain analogous results for real operators, but we will work throughout with complex operators in this post.)
Roughly speaking, a functional calculus is a way to assign an operator $F(A)$ or $F(A_1,\dots,A_k)$ to any function $F$ in a suitable function space, which is linear over the complex numbers, preserves the scalars (i.e. $F(A) = c$ when $F \equiv c$ for some scalar $c \in {\bf C}$), and should be either an exact or approximate homomorphism in the sense that

$$ FG(A_1,\dots,A_k) = F(A_1,\dots,A_k)\, G(A_1,\dots,A_k) \ \ \ \ \ (1) $$

should hold either exactly or approximately. In the case when the $A_i$ are self-adjoint operators acting on a Hilbert space (or Hermitian matrices), one often also desires the identity

$$ \overline{F}(A_1,\dots,A_k) = F(A_1,\dots,A_k)^* \ \ \ \ \ (2) $$

to also hold either exactly or approximately. (Note that one cannot reasonably expect (1) and (2) to hold exactly for all $F, G$ if the $A_1,\dots,A_k$ and their adjoints $A_1^*,\dots,A_k^*$ do not commute with each other, so in those cases one has to be willing to allow some error terms in the above wish list of properties of the calculus.) Ideally, one should also be able to relate the operator norm of $F(A)$ or $F(A_1,\dots,A_k)$ with something like the uniform norm $\|F\|_{L^\infty}$ of $F$.
In principle, the existence of a good functional calculus allows one to manipulate operators as if they were scalars (or at least approximately as if they were scalars), which is very helpful for a number of applications, such as partial differential equations, spectral theory, noncommutative probability, and semiclassical mechanics. A functional calculus for multiple operators $A_1,\dots,A_k$ can be particularly valuable as it allows one to treat $A_1,\dots,A_k$ as being exact or approximate scalars simultaneously. For instance, if one is trying to solve a linear differential equation that can (formally at least) be expressed in the form

$$ F(A_1,\dots,A_k) u = f $$

for some data $f$, unknown function $u$, some differential operators $A_1,\dots,A_k$, and some nice function $F$, then if one's functional calculus is good enough (and $F$ is suitably “elliptic” in the sense that it does not vanish or otherwise degenerate too often), one should be able to solve this equation either exactly or approximately by the formula

$$ u = \frac{1}{F}(A_1,\dots,A_k) f, $$

which is of course how one would solve this equation if one pretended that the operators $A_1,\dots,A_k$ were in fact scalars. Formalising this calculus rigorously leads to the theory of pseudodifferential operators, which allows one to (approximately) solve or at least simplify a much wider range of differential equations than what one can achieve with more elementary algebraic transformations (e.g. integrating factors, change of variables, variation of parameters, etc.). In quantum mechanics, a functional calculus that allows one to treat operators as if they were approximately scalar can be used to rigorously justify the correspondence principle in physics, namely that the predictions of quantum mechanics approximate those of classical mechanics in the semiclassical limit $\hbar \rightarrow 0$.
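As a toy instance of this recipe (my own sketch), take a single operator $A_1 = D$ and the elliptic symbol $F(\xi) = 1 + 4\pi^2 \xi^2$, so that $F(D) = 1 - \frac{d^2}{dx^2}$; on a periodic grid the formula $u = \frac{1}{F}(D) f$ becomes an exact spectral solver.

```python
# Solving (1 - d^2/dx^2) u = f on a periodic grid by dividing by the symbol
# F(xi) = 1 + 4 pi^2 xi^2 on the Fourier side (F is elliptic: F >= 1 > 0).
import numpy as np

N, L = 512, 20.0
x = np.arange(N) * (L / N)
xi = np.fft.fftfreq(N, d=L / N)

f = np.exp(np.cos(2 * np.pi * x / L))    # arbitrary smooth periodic data
F = 1 + 4 * np.pi**2 * xi**2             # symbol of 1 - d^2/dx^2

u = np.fft.ifft(np.fft.fft(f) / F).real  # u = (1/F)(D) f

# Verify by reapplying the operator spectrally: should recover f exactly.
check = np.fft.ifft(F * np.fft.fft(u)).real
print(np.max(np.abs(check - f)))         # ~ machine precision
```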
There is no universal functional calculus that works in all situations; the strongest functional calculi, which are close to being exact *-homomorphisms on a very large class of functions, tend to only work under very restrictive hypotheses on $A$ or $A_1,\dots,A_k$ (in particular, when $k > 1$, one needs the $A_1,\dots,A_k$ to commute either exactly, or very close to exactly), while there are weaker functional calculi which have fewer nice properties and only work for a very small class of functions, but can be applied to quite general operators $A$ or $A_1,\dots,A_k$. In some cases the functional calculus is only formal, in the sense that $F(A)$ or $F(A_1,\dots,A_k)$ has to be interpreted as an infinite formal series that does not converge in a traditional sense. Also, when one wishes to select a functional calculus on non-commuting operators $A_1,\dots,A_k$, there is a certain amount of non-uniqueness: one generally has a number of slightly different functional calculi to choose from, which generally have the same properties but differ in some minor technical details (particularly with regards to the behaviour of “lower order” components of the calculus). This is similar to how one has a variety of slightly different coordinate systems available to parameterise a Riemannian manifold or Lie group. This is in contrast to the $k = 1$ case when the underlying operator $A$ is (essentially) normal (so that $A$ commutes with its adjoint $A^*$); in this special case (which includes the important subcases when $A$ is unitary or (essentially) self-adjoint), spectral theory gives us a canonical and very powerful functional calculus which can be used without further modification in applications.
Despite this lack of uniqueness, there is one standard choice for a functional calculus available for general operators $A_1,\dots,A_k$, namely the Weyl functional calculus; it is analogous in some ways to normal coordinates for Riemannian manifolds, or exponential coordinates of the first kind for Lie groups, in that it treats lower order terms in a reasonably nice fashion. (But it is important to keep in mind that, like its analogues in Riemannian geometry or Lie theory, there will be some instances in which the Weyl calculus is not the optimal calculus to use for the application at hand.)
I decided to write some notes on the Weyl functional calculus (also known as Weyl quantisation), and to sketch the applications of this calculus to the theory of pseudodifferential operators. They are mostly for my own benefit (so that I won't have to redo these particular calculations again), but perhaps they will also be of interest to some readers here. (Of course, this material is also covered in many other places, e.g. Folland's “Harmonic Analysis in Phase Space”.)
Given a set $S$, a (simple) point process is a random subset $A$ of $S$. (A non-simple point process would allow multiplicity; more formally, $A$ is no longer a subset of $S$, but is a Radon measure on $S$, where we give $S$ the structure of a locally compact Polish space, but I do not wish to dwell on these sorts of technical issues here.) Typically, $A$ will be finite or countable, even when $S$ is uncountable. Basic examples of point processes include:
- (Bernoulli point process) $S$ is an at most countable set, $0 \leq p \leq 1$ is a parameter, and $A$ is a random set such that the events $x \in A$ for each $x \in S$ are jointly independent and occur with a probability of $p$ each. This process is automatically simple.
- (Discrete Poisson point process) $S$ is an at most countable space, $\lambda$ is a measure on $S$ (i.e. an assignment of a non-negative number $\lambda(\{x\})$ to each $x \in S$), and $A$ is a multiset where the multiplicity of $x$ in $A$ is a Poisson random variable with intensity $\lambda(\{x\})$, and the multiplicities of $x$ as $x$ varies in $S$ are jointly independent. This process is usually not simple.
- (Continuous Poisson point process) $S$ is a locally compact Polish space with a Radon measure $\mu$, and for each $\Omega \subset S$ of finite measure, the number of points $|A \cap \Omega|$ that $A$ contains inside $\Omega$ is a Poisson random variable with intensity $\mu(\Omega)$. Furthermore, if $\Omega_1,\dots,\Omega_m$ are disjoint sets, then the random variables $|A \cap \Omega_1|, \dots, |A \cap \Omega_m|$ are jointly independent. (The fact that Poisson processes exist at all requires a non-trivial amount of measure theory, and will not be discussed here.) This process is almost surely simple iff all points in $S$ have measure zero. (A small sampling sketch for this and the Bernoulli example appears after this list.)
- (Spectral point processes) The spectrum of a random matrix is a point process in ${\bf C}$ (or in ${\bf R}$, if the random matrix is Hermitian). If the spectrum is almost surely simple, then the point process is also almost surely simple. In a similar spirit, the zeroes of a random polynomial are also a point process.
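As promised above, here is a minimal sampler (my own sketch; the parameters are arbitrary) for the first and third examples, using the standard fact that a Poisson process on $[0,1]$ with constant intensity can be sampled by drawing a Poisson number of independent uniform points.

```python
# Sampling a Bernoulli point process on a finite set, and a continuous Poisson
# point process on [0, 1] with intensity measure mu * Lebesgue.
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_process(S, p):
    """Each x in S lies in A independently with probability p."""
    return {x for x in S if rng.random() < p}

def poisson_process(mu):
    """Poisson(mu) many points, placed independently and uniformly in [0,1]."""
    return np.sort(rng.uniform(0.0, 1.0, size=rng.poisson(mu)))

print(bernoulli_process(range(10), 0.3))
print(poisson_process(5.0))
```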
A remarkable fact is that many natural (simple) point processes are determinantal processes. Very roughly speaking, this means that there exists a positive semi-definite kernel $K: S \times S \rightarrow {\bf C}$ such that, for any $x_1,\dots,x_n \in S$, the probability that $x_1,\dots,x_n$ all lie in the random set $A$ is proportional to the determinant $\det( K(x_i,x_j) )_{1 \leq i,j \leq n}$. Examples of processes known to be determinantal include non-intersecting random walks, spectra of random matrix ensembles such as GUE, and zeroes of polynomials with gaussian coefficients.
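One can at least verify the determinantal identity by brute force on a small finite set $S$; the sketch below (my own, using the standard L-ensemble construction with marginal kernel $K = L(I+L)^{-1}$) enumerates all outcomes and compares inclusion probabilities with the corresponding minors of $K$.

```python
# Brute-force check that P({x_1,...,x_k} subset of A) = det(K restricted),
# for the determinantal process with P(A = Y) = det(L_Y) / det(I + L).
import itertools
import numpy as np

n = 4
rng = np.random.default_rng(1)
B = rng.normal(size=(n, n))
L = B @ B.T                                  # positive semi-definite L-ensemble
K = L @ np.linalg.inv(np.eye(n) + L)         # marginal kernel K = L (I + L)^{-1}

Z = np.linalg.det(np.eye(n) + L)             # normalising constant
prob = {Y: (np.linalg.det(L[np.ix_(Y, Y)]) if Y else 1.0) / Z
        for r in range(n + 1) for Y in itertools.combinations(range(n), r)}
assert abs(sum(prob.values()) - 1) < 1e-9

for T in [(0,), (1, 3), (0, 2, 3)]:
    marginal = sum(p for Y, p in prob.items() if set(T) <= set(Y))
    print(marginal, np.linalg.det(K[np.ix_(T, T)]))  # these should match
```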
I would be interested in finding a good explanation (even at the heuristic level) as to why determinantal processes are so prevalent in practice. I do have a very weak explanation, namely that determinantal processes obey a large number of rather pretty algebraic identities, and so it is plausible that any other process which has a very algebraic structure (in particular, any process involving gaussians, characteristic polynomials, etc.) would be connected in some way with determinantal processes. I’m not particularly satisfied with this explanation, but I thought I would at least describe some of these identities below to support this case. (This is partly for my own benefit, as I am trying to learn about these processes, particularly in connection with the spectral distribution of random matrices.) The material here is partly based on this survey of Hough, Krishnapur, Peres, and Virág.
On Thursday, UCLA hosted a “Fields Medalist Symposium“, in which four of the six University of California-affiliated Fields Medalists (Vaughan Jones (1990), Efim Zelmanov (1994), Richard Borcherds (1998), and myself (2006)) gave talks of varying levels of technical sophistication. (The other two are Michael Freedman (1986) and Steven Smale (1966), who could not attend.) The slides for my own talks are available here.
The talks were in order of the year in which the medal was awarded: we began with Vaughan, who spoke on “Flatland: a great place to do algebra”, then Efim, who spoke on “Pro-finite groups”, Richard, who spoke on “What is a quantum field theory?”, and myself, on “Nilsequences and the primes.” The audience was quite mixed, ranging from mathematics faculty to undergraduates to alumni to curiosity seekers, and I severely doubt that every audience member understood every talk, but there was something for everyone, and for me personally it was fantastic to see some perspectives from first-class mathematicians on some wonderful areas of mathematics outside of my own fields of expertise.
Disclaimer: the summaries below are reconstructed from my notes and from some hasty web research; I don’t vouch for 100% accuracy of the mathematical content, and would welcome corrections.
This problem lies in the highly interconnected interface between algebraic combinatorics (esp. the combinatorics of Young tableaux and related objects, including honeycombs and puzzles), algebraic geometry (particularly classical and quantum intersection theory and geometric invariant theory), linear algebra (additive and multiplicative, real and tropical), and the representation theory (classical, quantum, crystal, etc.) of classical groups. (Another open problem in this subject is to find a succinct and descriptive name for the field.) I myself haven’t actively worked in this area for several years, but I still find it a fascinating and beautiful subject. (With respect to the dichotomy between structure and randomness, this subject lies deep within the “structure” end of the spectrum.)
As mentioned above, the problems in this area can be approached from a variety of quite diverse perspectives, but here I will focus on the linear algebra perspective, which is perhaps the most accessible. About nine years ago, Allen Knutson and I introduced a combinatorial gadget, called a honeycomb, which among other things controlled the relationship between the eigenvalues of two arbitrary Hermitian matrices A, B, and the eigenvalues of their sum A+B; this was not the first such gadget that achieved this purpose, but it was a particularly convenient one for studying this problem, in particular it was used to resolve two conjectures in the subject, the saturation conjecture and the Horn conjecture. (These conjectures have since been proven by a variety of other methods.) There is a natural multiplicative version of these problems, which now relates the eigenvalues of two arbitrary unitary matrices U, V and the eigenvalues of their product UV; this led to the “quantum saturation” and “quantum Horn” conjectures, which were proven a couple years ago. However, the quantum analogue of a “honeycomb” remains a mystery; this is the main topic of the current post.