[These are notes intended mostly for myself, as these topics are useful in random matrix theory, but may be of interest to some readers also. -T.]
One of the most fundamental partial differential equations in mathematics is the heat equation

$\displaystyle \partial_t u = \frac{1}{2} \Delta u \ \ \ \ \ (1)$

where $u: [0,+\infty) \times {\bf R}^n \rightarrow {\bf R}$ is a scalar function $u(t,x)$ of both time and space, and $\Delta$ is the Laplacian $\Delta u := \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2}$. For the purposes of this post, we will ignore all technical issues of regularity and decay, and always assume that the solutions to equations such as (1) have all the regularity and decay in order to justify all formal operations such as the chain rule, integration by parts, or differentiation under the integral sign. The factor of $\frac{1}{2}$ in the definition of the heat propagator $e^{t\Delta/2}$ is of course an arbitrary normalisation, chosen for some minor technical reasons; one can certainly continue the discussion below with other choices of normalisations if desired.
In probability theory, this equation takes on particular significance when $u$ is restricted to be non-negative, and furthermore to be a probability measure at each time, in the sense that

$\displaystyle \int_{{\bf R}^n} u(t,x)\ dx = 1 \ \ \ \ \ (2)$

for all $t$. (Actually, it suffices to verify this constraint at time $t=0$, as the heat equation (1) will then preserve this constraint.) Indeed, in this case, one can interpret $u(t,\cdot)$ as the probability distribution of a Brownian motion $x(t)$, where $x(t)$ is a stochastic process with initial probability distribution $u(0,\cdot)$; see for instance this previous blog post for more discussion.
A model example of a solution to the heat equation to keep in mind is that of the fundamental solution

$\displaystyle u(t,x) = \frac{1}{(2\pi t)^{n/2}} e^{-|x|^2/2t} \ \ \ \ \ (3)$

defined for any $t > 0$, which represents the distribution of Brownian motion of a particle starting at the origin $x = 0$ at time $t = 0$. At time $t$, $x(t)$ represents an ${\bf R}^n$-valued random variable, each coefficient of which is an independent random variable of mean zero and variance $t$. (As $t \rightarrow 0^+$, $u(t)$ converges in the sense of distributions to a Dirac mass $\delta_0$ at the origin.)
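As a quick numerical sanity check (my own illustration, not from the post; the grid sizes and tolerance are arbitrary choices), one can verify by finite differences that the fundamental solution really does solve the heat equation in dimension $n = 1$:

```python
import numpy as np

# Finite-difference check that the 1-D fundamental solution
# u(t,x) = (2*pi*t)^(-1/2) exp(-x^2/(2t)) satisfies u_t = (1/2) u_xx.
def u(t, x):
    return (2 * np.pi * t) ** -0.5 * np.exp(-x ** 2 / (2 * t))

t0, h = 1.0, 1e-4
x = np.linspace(-3.0, 3.0, 201)

u_t = (u(t0 + h, x) - u(t0 - h, x)) / (2 * h)                 # centred time derivative
u_xx = (u(t0, x + h) - 2 * u(t0, x) + u(t0, x - h)) / h ** 2  # centred second space derivative

residual = float(np.max(np.abs(u_t - 0.5 * u_xx)))            # should be ~ O(h^2)
```

The residual is of the size of the finite-differencing error, confirming the normalisation of the factor $\frac{1}{2}$.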
The heat equation can also be viewed as the gradient flow for the Dirichlet form

$\displaystyle E(u) := \frac{1}{2} \int_{{\bf R}^n} |\nabla u|^2\ dx \ \ \ \ \ (4)$

since one has the integration by parts identity

$\displaystyle \int_{{\bf R}^n} \nabla u \cdot \nabla v\ dx = - \int_{{\bf R}^n} (\Delta u) v\ dx \ \ \ \ \ (5)$

for all smooth, rapidly decreasing $u, v$, which formally implies that the right-hand side $\frac{1}{2} \Delta u$ of (1) is (half of) the negative gradient of the Dirichlet energy $E(u)$ with respect to the $L^2({\bf R}^n)$ inner product. Among other things, this implies that the Dirichlet energy decreases in time:

$\displaystyle \partial_t E(u) = - \frac{1}{2} \int_{{\bf R}^n} |\Delta u|^2\ dx. \ \ \ \ \ (6)$

For instance, for the fundamental solution (3), one can verify for any time $t > 0$ that

$\displaystyle E(u) = \frac{n}{2^{n+2} \pi^{n/2}} t^{-n/2-1} \ \ \ \ \ (7)$

(assuming I have not made a mistake in the calculation). In a similar spirit we have

$\displaystyle \partial_t \int_{{\bf R}^n} u^2\ dx = - \int_{{\bf R}^n} |\nabla u|^2\ dx. \ \ \ \ \ (8)$
Since $E(u)$ is non-negative, the formula (6) implies that $\int_{{\bf R}^n} |\Delta u|^2\ dx$ is integrable in time, and in particular we see that $\Delta u(t)$ converges to zero as $t \rightarrow \infty$, in some averaged $L^2$ sense at least; similarly, (8) suggests that $\nabla u(t)$ also converges to zero. This suggests that $u(t)$ converges to a constant function; but as $u(t)$ is also supposed to decay to zero at spatial infinity, we thus expect solutions to the heat equation in ${\bf R}^n$ to decay to zero in some sense as $t \rightarrow \infty$. However, the decay is only expected to be polynomial in nature rather than exponential; for instance, the solution (3) decays in the $L^\infty$ norm like $O(t^{-n/2})$.
Since $\Delta u = \nabla \cdot \nabla u$ is a divergence, we also observe the basic cancellation property

$\displaystyle \int_{{\bf R}^n} \Delta u\ dx = 0$

for any (smooth, rapidly decreasing) function $u$.
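Both the conservation of total mass and the polynomial decay of the sup norm can be seen numerically; the following small check is my own illustration (grid and sample times arbitrary) for the one-dimensional fundamental solution:

```python
import numpy as np

# For the 1-D fundamental solution: total mass stays 1 under heat flow,
# while the sup norm (2*pi*t)^(-1/2) decays like t^(-1/2).
def u(t, x):
    return (2 * np.pi * t) ** -0.5 * np.exp(-x ** 2 / (2 * t))

x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]

masses = [float(np.sum(u(t, x)) * dx) for t in (0.5, 1.0, 4.0)]  # Riemann sums, all ~ 1
sup_1, sup_4 = u(1.0, 0.0), u(4.0, 0.0)  # sup norms at t = 1 and t = 4; ratio should be 2
```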
There are other quantities relating to $u$ that also decrease in time under heat flow, particularly in the important case when $u$ is a probability measure. In this case, it is natural to introduce the entropy

$\displaystyle D(u) := \int_{{\bf R}^n} u \log u\ dx. \ \ \ \ \ (9)$

Thus, for instance, if $u$ is the uniform distribution on some measurable subset $E$ of ${\bf R}^n$ of finite measure $|E|$, the entropy would be $-\log |E|$. Intuitively, as the entropy decreases, the probability distribution gets wider and flatter. For instance, in the case of the fundamental solution (3), one has $D(u) = -\frac{n}{2} \log(2\pi e t)$ for any $t > 0$, reflecting the fact that $u(t)$ is approximately uniformly distributed on a ball of radius $O(\sqrt{t})$ (and thus of measure $O(t^{n/2})$).
A short formal computation shows (if one assumes for simplicity that $u$ is strictly positive, which is not an unreasonable hypothesis, particularly in view of the strong maximum principle) using (9), (5) that

$\displaystyle \partial_t D(u) = - \frac{1}{2} \int_{{\bf R}^n} \frac{|\nabla u|^2}{u}\ dx = - 2 \int_{{\bf R}^n} |\nabla v|^2\ dx$

where $v := u^{1/2}$ is the square root of $u$. For instance, if $u$ is the fundamental solution (3), one can check that

$\displaystyle \partial_t D(u) = - \frac{n}{2t}$

(note that this is a significantly cleaner formula than (7)!).
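These entropy formulas are easy to test numerically; the following is a check of my own (grid and tolerances arbitrary) that for the one-dimensional fundamental solution one has $D(u) = -\frac{1}{2}\log(2\pi e t)$, with time derivative $-\frac{1}{2t}$:

```python
import numpy as np

# Entropy D(t) = int u log u dx of the 1-D fundamental solution,
# computed by Riemann sum and compared with -(1/2) log(2*pi*e*t).
def entropy(t):
    x = np.linspace(-50.0, 50.0, 400001)
    dx = x[1] - x[0]
    u = (2 * np.pi * t) ** -0.5 * np.exp(-x ** 2 / (2 * t))
    return float(np.sum(u * np.log(u)) * dx)

t = 2.0
exact = -0.5 * np.log(2 * np.pi * np.e * t)
approx = entropy(t)
rate = (entropy(t + 1e-3) - entropy(t - 1e-3)) / 2e-3  # should be close to -1/(2t)
```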
In particular, the entropy is decreasing, which corresponds well to one’s intuition that the heat equation (or Brownian motion) should serve to spread out a probability distribution over time.
Actually, one can say more: the rate of decrease of the entropy is itself decreasing, or in other words the entropy is convex in time. I do not have a satisfactorily intuitive reason for this phenomenon, but it can be proved by straightforward application of basic several variable calculus tools (such as the chain rule, product rule, quotient rule, and integration by parts), and completing the square. Namely, by using the chain rule we have

$\displaystyle \partial_t F(u) = F'(u) \partial_t u,$

valid for any smooth function $F$, we see from (1) that

$\displaystyle \partial_t D(u) = \int_{{\bf R}^n} (\log u + 1) \frac{1}{2} \Delta u\ dx$

and thus (again assuming that $u(0)$, and hence $u(t)$, is strictly positive to avoid technicalities)

$\displaystyle \partial_t D(u) = - \frac{1}{2} \int_{{\bf R}^n} \frac{|\nabla u|^2}{u}\ dx.$

We thus have

$\displaystyle \partial_t^2 D(u) = - \frac{1}{2} \partial_t \int_{{\bf R}^n} \frac{|\nabla u|^2}{u}\ dx.$

It is now convenient to compute using the Einstein summation convention to hide the summation over indices $1 \leq i, j \leq n$. We have

$\displaystyle \partial_t \frac{\partial_i u\, \partial_i u}{u} = 2 \frac{\partial_i u\, \partial_i \partial_t u}{u} - \frac{\partial_i u\, \partial_i u\, \partial_t u}{u^2}$

and hence by (1)

$\displaystyle \partial_t \int_{{\bf R}^n} \frac{\partial_i u\, \partial_i u}{u}\ dx = \int_{{\bf R}^n} \frac{\partial_i u\, \partial_i \partial_j \partial_j u}{u}\ dx - \frac{1}{2} \int_{{\bf R}^n} \frac{\partial_i u\, \partial_i u\, \partial_j \partial_j u}{u^2}\ dx.$

By integration by parts and interchanging partial derivatives, we may write the first integral as

$\displaystyle - \int_{{\bf R}^n} \frac{\partial_i \partial_j u\, \partial_i \partial_j u}{u}\ dx + \int_{{\bf R}^n} \frac{\partial_i \partial_j u\, \partial_i u\, \partial_j u}{u^2}\ dx,$

and from the quotient and product rules, we may write the second integral as

$\displaystyle \int_{{\bf R}^n} \frac{\partial_i \partial_j u\, \partial_i u\, \partial_j u}{u^2}\ dx - \int_{{\bf R}^n} \frac{\partial_i u\, \partial_i u\, \partial_j u\, \partial_j u}{u^3}\ dx.$

Gathering terms, completing the square, and making the summations explicit again, we see that

$\displaystyle \partial_t^2 D(u) = \frac{1}{2} \int_{{\bf R}^n} u \sum_{i=1}^n \sum_{j=1}^n \left( \frac{\partial_i \partial_j u}{u} - \frac{\partial_i u\, \partial_j u}{u^2} \right)^2\ dx \geq 0,$

and so in particular the rate of entropy decrease $-\partial_t D(u)$ is always decreasing.

The above identity can also be written as

$\displaystyle \partial_t^2 D(u) = \frac{1}{2} \int_{{\bf R}^n} u |\nabla^2 \log u|^2\ dx$

where $\nabla^2 \log u := (\partial_i \partial_j \log u)_{1 \leq i,j \leq n}$ is the Hessian of $\log u$.
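To see both the monotonicity and the convexity in action for a non-Gaussian initial datum, here is a small numerical experiment of my own: the initial distribution is a two-bump Gaussian mixture (an arbitrary choice), for which the heat flow (1) simply inflates each component's variance by $t$, so the entropy can be evaluated directly on a grid.

```python
import numpy as np

# Entropy along heat flow for a mixture of two Gaussians: under
# d/dt u = (1/2) Laplacian u, an N(mu, s) component becomes N(mu, s + t).
x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def D(t):
    u = 0.5 * gauss(x, -2.0, 0.5 + t) + 0.5 * gauss(x, 2.0, 0.5 + t)
    return float(np.sum(u * np.log(u)) * dx)

ts = [0.5, 1.0, 1.5, 2.0, 2.5]
vals = [D(t) for t in ts]
decreasing = all(a > b for a, b in zip(vals, vals[1:]))                      # D(t) decreasing
midpoint_convex = all(vals[i - 1] + vals[i + 1] >= 2 * vals[i] for i in range(1, 4))  # D(t) convex
```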
Exercise 1 Give an alternate proof of the above identity by writing $u = v^2$, and deriving the equation

$\displaystyle \partial_t v = \frac{1}{2} \Delta v + \frac{1}{2} \frac{|\nabla v|^2}{v}$

for $v$.
It was observed in a well-known paper of Bakry and Emery that the above monotonicity properties hold for a much larger class of heat flow-type equations, and lead to a number of important relations between energy and entropy, such as the log-Sobolev inequality of Gross and of Federbush, and the hypercontractivity inequality of Nelson; we will discuss one such family of generalisations (or more precisely, variants) below the fold.
Let $A$ be a self-adjoint operator on a finite-dimensional Hilbert space $H$. The behaviour of this operator can be completely described by the spectral theorem for finite-dimensional self-adjoint operators (i.e. Hermitian matrices, when viewed in coordinates), which provides a sequence $\lambda_1, \ldots, \lambda_n$ of eigenvalues and an orthonormal basis $u_1, \ldots, u_n$ of eigenfunctions such that $A u_i = \lambda_i u_i$ for all $1 \leq i \leq n$. In particular, given any function $m: \sigma(A) \rightarrow {\bf C}$ on the spectrum $\sigma(A) := \{\lambda_1, \ldots, \lambda_n\}$ of $A$, one can then define the linear operator $m(A)$ by the formula

$\displaystyle m(A) f := \sum_{i=1}^n m(\lambda_i) \langle f, u_i \rangle u_i,$

which then gives a functional calculus, in the sense that the map $m \mapsto m(A)$ is a $*$-algebra isometric homomorphism from the algebra $C(\sigma(A) \rightarrow {\bf C})$ of bounded continuous functions from $\sigma(A)$ to ${\bf C}$, to the algebra $B(H \rightarrow H)$ of bounded linear operators on $H$.
Thus, for instance, one can define heat operators $e^{-tA}$ for $t > 0$, Schrödinger operators $e^{itA}$ for $t \in {\bf R}$, resolvents $\frac{1}{A-z}$ for $z \not\in \sigma(A)$, and (if $A$ is positive) wave operators $e^{it\sqrt{A}}$ for $t \in {\bf R}$. These will be bounded operators (and, in the case of the Schrödinger and wave operators, unitary operators, and in the case of the heat operators with $A$ positive, they will be contractions). Among other things, this functional calculus can then be used to solve differential equations such as the heat equation

$\displaystyle \partial_t u = -Au, \qquad u(0) = f. \ \ \ \ \ (1)$
The functional calculus can also be associated to a spectral measure. Indeed, for any vectors $f, g \in H$, there is a complex measure $\mu_{f,g}$ on $\sigma(A)$ with the property that

$\displaystyle \langle m(A) f, g \rangle = \int_{\sigma(A)} m(\lambda)\ d\mu_{f,g}(\lambda);$

indeed, one can set $\mu_{f,g}$ to be the discrete measure on $\sigma(A)$ defined by the formula

$\displaystyle \mu_{f,g}(E) := \sum_{i: \lambda_i \in E} \langle f, u_i \rangle \langle u_i, g \rangle.$

One can also view this complex measure as a coefficient

$\displaystyle \mu_{f,g}(E) = \langle \mu(E) f, g \rangle$

of a projection-valued measure $\mu$ on $\sigma(A)$, defined by setting

$\displaystyle \mu(E) f := \sum_{i: \lambda_i \in E} \langle f, u_i \rangle u_i.$
Finally, one can view $A$ as unitarily equivalent to a multiplication operator $M_g: f \mapsto gf$ on $\ell^2(\{1,\ldots,n\})$, where $g$ is the real-valued function $g(i) := \lambda_i$, and the intertwining map $U: \ell^2(\{1,\ldots,n\}) \rightarrow H$ is given by

$\displaystyle U \left( (c_i)_{i=1}^n \right) := \sum_{i=1}^n c_i u_i,$

so that $A = U M_g U^{-1}$.
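All of the above can be made concrete in a few lines of code; the following sketch (my own illustration, using `numpy.linalg.eigh` as the finite-dimensional spectral theorem) builds the functional calculus from the eigenvalues and eigenvectors of a random Hermitian matrix and checks the homomorphism, semigroup, and unitarity properties.

```python
import numpy as np

# Finite-dimensional spectral theorem: A = U diag(lam) U^*, with U unitary
# and lam real; the functional calculus is then m(A) = U diag(m(lam)) U^*.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2                        # Hermitian, hence self-adjoint

lam, U = np.linalg.eigh(A)                      # eigenvalues and orthonormal eigenbasis

def m_of_A(m):
    return (U * m(lam)) @ U.conj().T            # U diag(m(lam)) U^*

def heat(t):
    return m_of_A(lambda l: np.exp(-t * l))     # heat operator e^{-tA}

schrodinger = m_of_A(lambda l: np.exp(1j * l))  # Schrodinger operator e^{iA}

err_hom = float(np.max(np.abs(m_of_A(lambda l: l ** 2) - A @ A)))         # m(A) respects products
err_semigroup = float(np.max(np.abs(heat(1.0) @ heat(2.0) - heat(3.0))))  # e^{-A} e^{-2A} = e^{-3A}
err_unitary = float(np.max(np.abs(schrodinger @ schrodinger.conj().T - np.eye(4))))
```

All three errors are at the level of floating-point roundoff.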
It is an important fact in analysis that many of these above assertions extend to operators on an infinite-dimensional Hilbert space $H$, so long as one is careful about what “self-adjoint operator” means; these facts are collectively referred to as the spectral theorem. For instance, it turns out that most of the above claims have analogues for bounded self-adjoint operators $A: H \rightarrow H$. However, in the theory of partial differential equations, one often needs to apply the spectral theorem to unbounded, densely defined linear operators $A: D \rightarrow H$, which (initially, at least) are only defined on a dense subspace $D$ of the Hilbert space $H$. A very typical situation arises when $H = L^2(M)$ is the square-integrable functions on some domain or manifold $M$ (which may have a boundary or be otherwise “incomplete”), $D = C^\infty_c(M)$ are the smooth compactly supported functions on $M$, and $A$ is some linear differential operator. It is then of interest to obtain the spectral theorem for such operators, so that one can build operators such as $e^{-tA}$, $e^{itA}$, $\frac{1}{A-z}$, or $e^{it\sqrt{A}}$, or to solve equations such as (1), (2), (3), (4).
In order to do this, some necessary conditions on the densely defined operator $A: D \rightarrow H$ must be imposed. The most obvious is that of symmetry, which asserts that

$\displaystyle \langle Af, g \rangle = \langle f, Ag \rangle \ \ \ \ \ (5)$

for all $f, g \in D$. In some applications, one also wants to impose positive definiteness, which asserts that

$\displaystyle \langle Af, f \rangle \geq 0 \ \ \ \ \ (6)$

for all $f \in D$. These hypotheses are sufficient in the case when $A$ is bounded, and in particular when $H$ is finite dimensional. However, as it turns out, for unbounded operators these conditions are not, by themselves, enough to obtain a good spectral theory. For instance, one consequence of the spectral theorem should be that the resolvents $(A-z)^{-1}$ are well-defined for any strictly complex $z$ (i.e. $z \in {\bf C} \backslash {\bf R}$), which by duality implies that the image of $A - \bar{z}$ should be dense in $H$
. However, this can fail if one just assumes symmetry, or symmetry and positive definiteness. A well-known example occurs when $H$ is the Hilbert space $L^2((0,1))$, $D = C^\infty_c((0,1))$ is the space of test functions, and $A$ is the one-dimensional Laplacian $A := -\frac{d^2}{dx^2}$. Then $A$ is symmetric and positive, but the operator $A - z$ does not have dense image for any complex $z$, since

$\displaystyle \langle (A - z) f, g \rangle = 0$

for all test functions $f$, whenever $g \in L^2((0,1))$ solves the equation $-g'' = \bar{z} g$, as can be seen from a routine integration by parts. As such, the resolvent map is not everywhere uniquely defined. There is also a lack of uniqueness for the wave, heat, and Schrödinger equations for this operator (note that there are no spatial boundary conditions specified in these equations).
Another example occurs when $H = L^2((0,+\infty))$, $D = C^\infty_c((0,+\infty))$, and $A$ is the momentum operator $A := \frac{1}{i} \frac{d}{dx}$. Then the resolvent $(A-z)^{-1}$ can be uniquely defined for $z$ in the upper half-plane, but not in the lower half-plane, due to the obstruction

$\displaystyle \langle (A - z) f, g \rangle = 0$

for all test functions $f$, where $g(x) := e^{i \bar{z} x}$ (note that the function $g$ lies in $L^2((0,+\infty))$ when $z$ is in the lower half-plane). For related reasons, the translation operators $e^{itA}$ have a problem with either uniqueness or existence (depending on whether $t$ is positive or negative), due to the unspecified boundary behaviour at the origin.
The key property that lets one avoid this bad behaviour is that of essential self-adjointness. Once $A$ is essentially self-adjoint, then the spectral theorem becomes applicable again, leading to all the expected behaviour (e.g. existence and uniqueness for the various PDE given above).
Unfortunately, the concept of essential self-adjointness is defined rather abstractly, and is difficult to verify directly; unlike the symmetry condition (5) or the positivity condition (6), it is not a “local” condition that can be easily verified just by testing on various inputs, but is instead a more “global” condition. In practice, to verify this property, one needs to invoke one of a number of partial converses to the spectral theorem, which roughly speaking assert that if at least one of the expected consequences of the spectral theorem is true for some symmetric densely defined operator $A$, then $A$ is essentially self-adjoint. Examples of “expected consequences” include:
- Existence of resolvents $(A-z)^{-1}$ for all strictly complex $z$ (or equivalently, dense image for $A - z$);
- Existence of a contractive heat propagator semigroup $e^{-tA}$ (in the positive case);
- Existence of a unitary Schrödinger propagator group $e^{itA}$;
- Existence of a unitary wave propagator group $e^{it\sqrt{A}}$ (in the positive case);
- Existence of a “reasonable” functional calculus;
- Unitary equivalence with a multiplication operator.
Thus, to actually verify essential self-adjointness of a differential operator, one typically has to first solve a PDE (such as the wave, Schrödinger, heat, or Helmholtz equation) by some non-spectral method (e.g. by a contraction mapping argument, or a perturbation argument based on an operator already known to be essentially self-adjoint). Once one can solve one of the PDEs, then one can apply one of the known converse spectral theorems to obtain essential self-adjointness, and then by the forward spectral theorem one can then solve all the other PDEs as well. But there is no getting out of that first step, which requires some input (typically of an ODE, PDE, or geometric nature) that is external to what abstract spectral theory can provide. For instance, if one wants to establish essential self-adjointness of the Laplace-Beltrami operator $-\Delta_g$ on a smooth Riemannian manifold $(M,g)$ (using $C^\infty_c(M)$ as the domain space), it turns out (under reasonable regularity hypotheses) that essential self-adjointness is equivalent to geodesic completeness of the manifold, which is a global ODE condition rather than a local one: one needs geodesics to continue indefinitely in order to be able to (unitarily) solve PDEs such as the wave equation, which in turn leads to essential self-adjointness. (Note that the domains $(0,1)$ and $(0,+\infty)$ in the previous examples were not geodesically complete.) For this reason, essential self-adjointness of a differential operator is sometimes referred to as quantum completeness (with the completeness of the associated Hamilton-Jacobi flow then being the analogous classical completeness).
In these notes, I wanted to record (mostly for my own benefit) the forward and converse spectral theorems, and to verify essential self-adjointness of the Laplace-Beltrami operator on geodesically complete manifolds. This is extremely standard analysis (covered, for instance, in the texts of Reed and Simon), but I wanted to write it down myself to make sure that I really understood this foundational material properly.
One theme in this course will be the central role played by the gaussian random variables $N(\mu, \sigma^2)$. Gaussians have an incredibly rich algebraic structure, and many results about general random variables can be established by first using this structure to verify the result for gaussians, and then using universality techniques (such as the Lindeberg exchange strategy) to extend the results to more general variables.
One way to exploit this algebraic structure is to continuously deform the variance $t$ from an initial variance of zero (so that the random variable is deterministic) to some final level $t_0$. We would like to use this to give a continuous family $(B_t)_{0 \leq t \leq t_0}$ of random variables $B_t \equiv N(0,t)$ as $t$ (viewed as a “time” parameter) runs from $0$ to $t_0$.

At present, we have not completely specified what $(B_t)_{0 \leq t \leq t_0}$ should be, because we have only described the individual distribution $N(0,t)$ of each $B_t$, and not the joint distribution. However, there is a very natural way to specify a joint distribution of this type, known as Brownian motion. In these notes we lay the necessary probability theory foundations to set up this motion, and indicate its connection with the heat equation, the central limit theorem, and the Ornstein-Uhlenbeck process. This is the beginning of stochastic calculus, which we will not develop fully here.
We will begin with one-dimensional Brownian motion, but it is a simple matter to extend the process to higher dimensions. In particular, we can define Brownian motion on vector spaces of matrices, such as the space of $n \times n$ Hermitian matrices. This process is equivariant with respect to conjugation by unitary matrices, and so we can quotient out by this conjugation and obtain a new process on the quotient space, or in other words on the spectrum of $n \times n$ Hermitian matrices. This process is called Dyson Brownian motion, and turns out to have a simple description in terms of ordinary Brownian motion; it will play a key role in several of the subsequent notes in this course.
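As a rough illustration (my own, not the construction in the notes), one can simulate Hermitian-matrix Brownian motion directly and track its spectrum; the resulting eigenvalue paths are a discretised Dyson Brownian motion. The matrix size, step count, and increment normalisation below are arbitrary choices.

```python
import numpy as np

# Brownian motion on 4x4 Hermitian matrices: accumulate independent
# Hermitian Gaussian increments, recording the (real, ordered) spectrum.
rng = np.random.default_rng(1)
n, n_steps, dt = 4, 200, 0.005

H = np.zeros((n, n), dtype=complex)
spectra = []
for _ in range(n_steps):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = H + (G + G.conj().T) * np.sqrt(dt) / 2  # exactly Hermitian increment
    spectra.append(np.linalg.eigvalsh(H))       # eigenvalues in ascending order

spectra = np.array(spectra)                     # shape (n_steps, n): eigenvalue paths
hermitian_err = float(np.max(np.abs(H - H.conj().T)))
ordered = bool(np.all(np.diff(spectra, axis=1) >= 0))
```

The quotient by unitary conjugation is implemented here simply by passing to the sorted eigenvalues at each step.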