When solving the initial value problem to an ordinary differential equation, such as

$$\partial_t u(t) = F(u(t)); \quad u(0) = u_0, \qquad (1)$$

where $u: [0,T] \rightarrow {\bf R}^d$ is the unknown solution (taking values in some finite-dimensional vector space ${\bf R}^d$), $u_0 \in {\bf R}^d$ is the initial datum, and $F: {\bf R}^d \rightarrow {\bf R}^d$ is some nonlinear function (which we will take to be smooth for sake of argument), then one can construct a solution locally in time via the Picard iteration method. There are two basic ideas. The first is to use the fundamental theorem of calculus to rewrite the initial value problem (1) as the problem of solving an integral equation,

$$u(t) = u_0 + \int_0^t F(u(s))\ ds. \qquad (2)$$
The second idea is to solve this integral equation by the contraction mapping theorem, showing that the integral operator $N$ defined by

$$N(u)(t) := u_0 + \int_0^t F(u(s))\ ds$$

is a contraction on a suitable complete metric space (e.g. a closed ball in the function space $C^0([0,T] \rightarrow {\bf R}^d)$), and thus has a unique fixed point in this space. This method works as long as one only seeks to construct local solutions (for time $t$ in $[0,T]$ for sufficiently small $T$), but the solutions constructed have a number of very good properties, including
- Existence: A solution $u$ exists in the space $C^0([0,T] \rightarrow {\bf R}^d)$ (and even in $C^\infty([0,T] \rightarrow {\bf R}^d)$) for $T$ sufficiently small.
- Uniqueness: There is at most one solution $u$ to the initial value problem in the space $C^0([0,T] \rightarrow {\bf R}^d)$ (or in smoother spaces, such as $C^\infty([0,T] \rightarrow {\bf R}^d)$). (For solutions in the weaker space $C^0([0,T] \rightarrow {\bf R}^d)$ we use the integral formulation (2) to define the solution concept.)
- Lipschitz continuous dependence on the data: If $u_0^{(n)}$ is a sequence of initial data converging to $u_0$, then the associated solutions $u^{(n)}$ converge uniformly to $u$ on $[0,T]$ (possibly after shrinking $T$ slightly). In fact we have the Lipschitz bound

$$\| u^{(n)}(t) - u(t) \| \leq C \| u_0^{(n)} - u_0 \|$$

for $n$ large enough and $t \in [0,T]$, where $C$ is an absolute constant.
This package of properties is referred to as (Lipschitz) well-posedness.
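To make the iteration concrete, here is a minimal numerical sketch of the Picard scheme (my own illustration, not part of the original argument; the nonlinearity $F(u) = u^2$, the grid size, and the iteration count are assumptions chosen for simplicity). It repeatedly applies the integral operator $N$ on a time grid and compares against the exact solution:

```python
import numpy as np

def picard_iterate(F, u0, T, n_grid=200, n_iter=30):
    """Approximate the fixed point of N(u)(t) = u0 + int_0^t F(u(s)) ds
    by repeatedly applying the integral operator N on a time grid."""
    t = np.linspace(0.0, T, n_grid)
    u = np.full(n_grid, u0, dtype=float)  # initial guess: the constant function u0
    for _ in range(n_iter):
        integrand = F(u)
        # cumulative trapezoidal rule approximates int_0^t F(u(s)) ds
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        u = u0 + integral  # one application of the operator N
    return t, u

# Example: u' = u^2, u(0) = 1 has exact solution u(t) = 1/(1 - t),
# which exists only locally (it blows up at t = 1), so we take T small.
t, u = picard_iterate(lambda u: u**2, u0=1.0, T=0.5)
print(np.max(np.abs(u - 1.0 / (1.0 - t))))  # small iteration/discretisation error
```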
This method extends to certain partial differential equations, particularly those of a semilinear nature (linear except for lower order nonlinear terms). For instance, if trying to solve an initial value problem of the form

$$\partial_t u(t) = L u(t) + F(u(t)); \quad u(0) = u_0,$$

where now $u: [0,T] \rightarrow H$ takes values in a function space $H$ (e.g. a Sobolev space $H^s({\bf R}^n)$), $u_0 \in H$ is an initial datum, $L$ is some (differential) operator (independent of $u$) that is (densely) defined on $H$, and $F$ is a nonlinearity which is also (densely) defined on $H$, then (formally, at least) one can solve this problem by using Duhamel's formula to convert the problem to that of solving an integral equation

$$u(t) = e^{tL} u_0 + \int_0^t e^{(t-s)L} F(u(s))\ ds,$$

and one can then hope to show that the associated nonlinear integral operator

$$N(u)(t) := e^{tL} u_0 + \int_0^t e^{(t-s)L} F(u(s))\ ds$$

is a contraction in a subset of a suitably chosen function space.
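As an illustrative sketch (mine, not from the post), one can carry out this Duhamel iteration for the semilinear heat equation $\partial_t u = \partial_{xx} u + u^2$ on the periodic interval, applying the free propagator $e^{tL}$ through the Fourier transform; the initial datum, grid sizes, and iteration count below are arbitrary choices:

```python
import numpy as np

# Duhamel iteration for u_t = u_xx + u^2 on the circle R/Z (illustrative sketch).
n_x, n_t, T = 128, 40, 0.05
x = np.arange(n_x) / n_x
t = np.linspace(0.0, T, n_t)
k = 2j * np.pi * np.fft.fftfreq(n_x, d=1.0 / n_x)  # Fourier symbol of d/dx
L_symbol = k**2                                    # symbol of L = d^2/dx^2

def heat_propagator(v, dt):
    """Apply the free propagator e^{dt L} via the Fourier transform."""
    return np.real(np.fft.ifft(np.exp(dt * L_symbol) * np.fft.fft(v)))

u0 = np.sin(2 * np.pi * x)                           # illustrative initial datum
u = np.array([heat_propagator(u0, ti) for ti in t])  # first guess: the linear flow

for _ in range(20):  # iterate u <- e^{tL} u0 + int_0^t e^{(t-s)L} u(s)^2 ds
    new_u = []
    for i, ti in enumerate(t):
        duhamel = np.zeros(n_x)
        for j in range(i):  # left-endpoint rule in s over [0, t_i]
            ds = t[j + 1] - t[j]
            duhamel += heat_propagator(u[j] ** 2, ti - t[j]) * ds
        new_u.append(heat_propagator(u0, ti) + duhamel)
    u = np.array(new_u)
print(u[-1][:4])  # approximate solution at time T
```

For this short time interval the iteration map is a contraction in the sup norm, so a couple of dozen applications of the operator already give a fixed point to good accuracy.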
This method turns out to work surprisingly well for many semilinear partial differential equations, and in particular for semilinear parabolic, semilinear dispersive, and semilinear wave equations. As in the ODE case, when the method works, it usually gives the entire package of Lipschitz well-posedness: existence, uniqueness, and Lipschitz continuous dependence on the initial data, for short times at least.
However, when one moves from semilinear initial value problems to quasilinear initial value problems such as

$$\partial_t u = L_u u + F(u); \quad u(0) = u_0,$$

in which the top order operator $L_u$ now depends on the solution $u$ itself, then the nature of well-posedness changes; one can still hope to obtain (local) existence and uniqueness, and even continuous dependence on the data, but one usually is forced to give up Lipschitz continuous dependence at the highest available regularity (though one can often recover it at lower regularities). As a consequence, the Picard iteration method is not directly suitable for constructing solutions to such equations.
One can already see this phenomenon with a very simple equation, namely the one-dimensional constant-velocity transport equation

$$\partial_t u + c \partial_x u = 0; \quad u(0,x) = u_0(x), \qquad (3)$$

where we consider the velocity $c \in {\bf R}$ as part of the initial data. (If one wishes, one could view this equation as a rather trivial example of a quasilinear system, writing for instance

$$\partial_t u + c \partial_x u = 0; \quad \partial_t c = 0; \quad u(0,x) = u_0(x); \quad c(0) = c_0$$

to emphasise this viewpoint, but this would be somewhat idiosyncratic.) One can solve this equation explicitly of course to get the solution

$$u(t,x) = u_0(x - ct).$$

In particular, if we look at the solution just at time $t = 1$ for simplicity, we have

$$u(1,x) = u_0(x - c).$$
Now let us see how this solution depends on the parameter $c$. One can ask whether this dependence is Lipschitz in $c$, in some function space $X$:

$$\| u_0(\cdot - c) - u_0(\cdot - c') \|_X \leq A |c - c'|$$

for some finite $A$. But using the Newton approximation

$$u_0(\cdot - c) - u_0(\cdot - c') \approx (c' - c)\, \partial_x u_0(\cdot - c)$$

we see that we should only expect such a bound when $\partial_x u_0$ (and its translates) lie in $X$. Thus, we see a loss of derivatives phenomenon with regard to Lipschitz well-posedness; if the initial data $u_0$ is in some regularity space, say $C^k$, then one only obtains Lipschitz dependence on $c$ in a lower regularity space such as $C^{k-1}$.
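One can observe this loss of derivatives numerically. The following sketch (my own; the Gaussian bump profile is an illustrative assumption) estimates the Lipschitz ratio $\| u_0(\cdot - c) - u_0 \|_{C^0} / |c|$ and watches it blow up like $\| \partial_x u_0 \|_{C^0}$ as the bump is made narrower, exactly as the Newton approximation predicts:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
c = 1e-4  # a small translation

def lipschitz_ratio(u0_func):
    """sup-norm of u0(. - c) - u0, divided by |c|."""
    return np.max(np.abs(u0_func(x - c) - u0_func(x))) / c

for width in [1.0, 0.1, 0.01]:
    bump = lambda y: np.exp(-((y - 0.5) / width) ** 2)  # bump of the given width
    # The Newton approximation predicts a ratio of about sup |u0'|, which
    # grows like 1/width: Lipschitz dependence on c costs one derivative.
    print(width, lipschitz_ratio(bump))
```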
We have just seen that if all one knows about the initial data $u_0$ is that it is bounded in a function space $X$, then one usually cannot hope to make the dependence of $u$ on the velocity parameter $c$ Lipschitz continuous. Indeed, one cannot even make it continuous uniformly in $u_0$. Given two values of $c$ that are close together, e.g. $c = 0$ and $c = \varepsilon$, and a reasonable function space $X$ (e.g. a Sobolev space $H^s$, or a classical regularity space $C^k$) one can easily cook up a function $u_0$ that is bounded in $X$ but whose two solutions $u_0(\cdot)$ and $u_0(\cdot - \varepsilon)$ separate in the $X$ norm at time $1$, simply by choosing $u_0$ to be supported on an interval of width $\varepsilon$.
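Here is a quick numerical instance of this construction (again my own sketch; the triangular bump is an illustrative choice): once $u_0$ is supported on an interval of width $\varepsilon$, the translate $u_0(\cdot - \varepsilon)$ has essentially disjoint support, so the sup-norm separation of the two solutions stays comparable to $\|u_0\|_{C^0}$ no matter how small $\varepsilon$ is:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)

for eps in [1e-1, 1e-2, 1e-3]:
    # Triangle bump of height 1 supported on [0.5, 0.5 + eps].
    u0 = lambda y: np.clip(1.0 - np.abs(2.0 * (y - 0.5) / eps - 1.0), 0.0, None)
    # Solutions at time 1 for velocities c = 0 and c = eps:
    separation = np.max(np.abs(u0(x - eps) - u0(x)))
    print(eps, separation)  # stays ~ 1 even as the two velocities merge
```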
(Part of the problem here is that using a subtractive quantity such as $\|u - v\|_X$ to determine the distance between two solutions $u, v$ is not a physically natural operation when transport mechanisms are present that could cause the key features of $u, v$ (such as singularities) to be situated in slightly different locations. In such cases, the correct notion of distance may need to take transport into account, e.g. by using metrics of Wasserstein type.)
On the other hand, one still has non-uniform continuous dependence on the initial parameters: if $u_0$ lies in some reasonable function space $X$, then the map $c \mapsto u_0(\cdot - c)$ is continuous in the $X$ topology, even if it is not uniformly continuous with respect to $u_0$. (More succinctly: translation is a continuous but not uniformly continuous operation in most function spaces.) The reason for this is that we already have established this continuity in the case when $u_0$ is so smooth that an additional derivative of $u_0$ lies in $X$; and such smooth functions tend to be dense in the original space $X$, so the general case can then be established by a limiting argument, approximating a general function in $X$ by a smoother function. We then see that the non-uniformity ultimately comes from the fact that a given function in $X$ may be arbitrarily rough (or concentrated at an arbitrarily fine scale), and so the ability to approximate such a function by a smooth one can be arbitrarily poor.
In many quasilinear PDE, one often encounters qualitatively similar phenomena. Namely, one often has local well-posedness in sufficiently smooth function spaces (so that if the initial data lies in a space $X$, then for short times one has existence, uniqueness, and continuous dependence on the data in the $X$ topology), but Lipschitz or uniform continuity in the $X$ topology is usually false. However, if the data (and solution) is known to be in a high-regularity function space $X$, one can often recover Lipschitz or uniform continuity in a lower-regularity topology.
Because the continuous dependence on the data in quasilinear equations is necessarily non-uniform, the arguments needed to establish this dependence can be remarkably delicate. As with the simple example of the transport equation, the key is to first approximate a rough solution by a smooth solution, by smoothing out the data (this is the non-uniform step, as it depends on the physical scale (or wavelength) at which the data features are located). But for quasilinear equations, keeping the rough and smooth solutions together can require a little juggling of function space norms, in particular playing the low-frequency nature of the smooth solution against the high-frequency nature of the residual between the rough and smooth solutions.
Below the fold I will illustrate this phenomenon with one of the simplest quasilinear equations, namely the initial value problem for the inviscid Burgers' equation

$$\partial_t u + u \partial_x u = 0; \quad u(0,x) = u_0(x),$$

which is a modification of the transport equation (3) in which the velocity $c$ is no longer a parameter, but now depends on (and is, in this case, actually equal to) the solution $u$. To avoid technicalities we will work only with the classical function spaces $C^k$ of $k$ times continuously differentiable functions, though one can certainly work with other spaces (such as Sobolev spaces) by exploiting the Sobolev embedding theorem. To avoid having to distinguish continuity from uniform continuity, we shall work in a compact domain by assuming periodicity in space, thus for instance restricting $x$ to the unit circle ${\bf R}/{\bf Z}$.
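As a preview of how one computes with this equation, recall that Burgers' equation propagates the initial datum along characteristics: the solution is constant along the lines $x = x_0 + t\, u_0(x_0)$, at least until characteristics cross and a shock forms. The following sketch (my own, with an arbitrary smooth datum) samples the solution this way on the periodic domain:

```python
import numpy as np

def burgers_characteristics(u0, t, n=512):
    """Solution of u_t + u u_x = 0 on the circle R/Z via characteristics:
    u is constant along x(t) = x0 + t*u0(x0), valid before characteristics cross."""
    x0 = np.arange(n) / n
    x_t = (x0 + t * u0(x0)) % 1.0  # positions of the characteristics at time t
    return x_t, u0(x0)             # samples (x, u(t, x)) of the solution

u0 = lambda x: 0.2 * np.sin(2 * np.pi * x)
# Characteristics first cross when 1 + t * min u0' = 0, i.e. at
# t* = 1/(0.4*pi) here, so we sample the solution before that time.
x_t, u_t = burgers_characteristics(u0, t=0.5)
print(x_t[:4], u_t[:4])
```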
This discussion is inspired by this survey article of Nikolay Tzvetkov, which further explores the distinction between well-posedness and ill-posedness in both semilinear and quasilinear settings.
We can now turn attention to one of the centerpiece universality results in random matrix theory, namely the Wigner semi-circle law for Wigner matrices. Recall from previous notes that a Wigner Hermitian matrix ensemble is a random matrix ensemble $M_n = (\xi_{ij})_{1 \leq i,j \leq n}$ of Hermitian matrices (thus $\xi_{ij} = \overline{\xi_{ji}}$; this includes real symmetric matrices as an important special case), in which the upper-triangular entries $\xi_{ij}$, $i < j$ are iid complex random variables with mean zero and unit variance, and the diagonal entries $\xi_{ii}$ are iid real variables, independent of the upper-triangular entries, with bounded mean and variance. Particular special cases of interest include the Gaussian Orthogonal Ensemble (GOE), the symmetric random sign matrices (aka symmetric Bernoulli ensemble), and the Gaussian Unitary Ensemble (GUE).
In previous notes we saw that the operator norm of $M_n$ was typically of size $O(\sqrt{n})$, so it is natural to work with the normalised matrix $\frac{1}{\sqrt{n}} M_n$. Accordingly, given any $n \times n$ Hermitian matrix $M_n$, we can form the (normalised) empirical spectral distribution (or ESD for short)

$$\mu_{\frac{1}{\sqrt{n}} M_n} := \frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j(M_n)/\sqrt{n}}$$

of $M_n$, where $\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$ are the (necessarily real) eigenvalues of $M_n$, counting multiplicity. The ESD is a probability measure, which can be viewed as a distribution of the normalised eigenvalues $\frac{1}{\sqrt{n}} \lambda_1(M_n), \ldots, \frac{1}{\sqrt{n}} \lambda_n(M_n)$ of $M_n$.
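As a concrete illustration (a sketch of mine, not part of the notes; the complex Gaussian entries are just one admissible choice of atom distribution), one can sample a Wigner matrix and form its ESD directly, since integrating a test function against the ESD is simply an average over the normalised eigenvalues:

```python
import numpy as np

def wigner_esd(n, rng):
    """Sample an n x n Wigner Hermitian matrix (complex Gaussian entries,
    mean zero, unit variance off the diagonal) and return the normalised
    eigenvalues, i.e. the atoms of the ESD of M_n / sqrt(n)."""
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    M = (A + A.conj().T) / np.sqrt(2)      # Hermitian, off-diagonal variance 1
    return np.linalg.eigvalsh(M) / np.sqrt(n)

rng = np.random.default_rng(0)
lam = wigner_esd(1000, rng)
# The ESD is (1/n) * sum_j delta_{lambda_j(M_n)/sqrt(n)}, so integrating a
# test function against it is an average over the normalised eigenvalues:
print(np.mean(lam**2))  # close to 1, the second moment of the semicircular law
```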
When $M_n$ is a random matrix ensemble, then the ESD $\mu_{\frac{1}{\sqrt{n}} M_n}$ is now a random measure – i.e. a random variable taking values in the space $\Pr({\bf R})$ of probability measures on the real line. (Thus, the distribution of $\mu_{\frac{1}{\sqrt{n}} M_n}$ is a probability measure on probability measures!)
Now we consider the behaviour of the ESD of a sequence of Hermitian matrix ensembles $M_n$ as $n \rightarrow \infty$. Recall from Notes 0 that for any sequence of random variables in a $\sigma$-compact metrisable space, one can define notions of convergence in probability and convergence almost surely. Specialising these definitions to the case of random probability measures on ${\bf R}$, and to deterministic limits, we see that a sequence of random ESDs $\mu_{\frac{1}{\sqrt{n}} M_n}$ converge in probability (resp. converge almost surely) to a deterministic limit $\mu \in \Pr({\bf R})$ (which, confusingly enough, is a deterministic probability measure!) if, for every test function $\varphi \in C_c({\bf R})$, the quantities $\int_{\bf R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n}$ converge in probability (resp. converge almost surely) to $\int_{\bf R} \varphi\ d\mu$.
Remark 1 As usual, convergence almost surely implies convergence in probability, but not vice versa. In the special case of random probability measures, there is an even weaker notion of convergence, namely convergence in expectation, defined as follows. Given a random ESD $\mu_{\frac{1}{\sqrt{n}} M_n}$, one can form its expectation ${\bf E} \mu_{\frac{1}{\sqrt{n}} M_n} \in \Pr({\bf R})$, defined via duality (the Riesz representation theorem) as

$$\int_{\bf R} \varphi\ d{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n} := {\bf E} \int_{\bf R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n};$$

this probability measure can be viewed as the law of a random eigenvalue $\frac{1}{\sqrt{n}} \lambda_j(M_n)$ drawn from a random matrix $M_n$ from the ensemble, with the index $j$ drawn uniformly from $\{1,\ldots,n\}$. We then say that the ESDs converge in expectation to a limit $\mu \in \Pr({\bf R})$ if ${\bf E} \mu_{\frac{1}{\sqrt{n}} M_n}$ converges in the vague topology to $\mu$, thus

$${\bf E} \int_{\bf R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n} \rightarrow \int_{\bf R} \varphi\ d\mu$$

for all $\varphi \in C_c({\bf R})$.
In general, these notions of convergence are distinct from each other; but in practice, one often finds in random matrix theory that these notions are effectively equivalent to each other, thanks to the concentration of measure phenomenon.
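To see these notions in a computation (an illustrative sketch of mine; the test function and matrix sizes are arbitrary), one can estimate the pairing $\int_{\bf R} \varphi\ d{\bf E}\mu_{\frac{1}{\sqrt{n}} M_n}$ by averaging over independent draws of the matrix; the small spread across draws is the concentration of measure phenomenon at work:

```python
import numpy as np

def esd_pairing(phi, n, rng):
    """Integrate a test function phi against one sample of the ESD of M_n/sqrt(n)."""
    A = rng.standard_normal((n, n))
    M = (A + A.T) / np.sqrt(2)           # real symmetric Wigner matrix
    lam = np.linalg.eigvalsh(M) / np.sqrt(n)
    return np.mean(phi(lam))

rng = np.random.default_rng(2)
phi = lambda x: np.exp(-x**2)  # a smooth test function (the formal definition uses C_c)
# Averaging over draws approximates the pairing with the expected ESD E mu:
samples = [esd_pairing(phi, 300, rng) for _ in range(50)]
print(np.mean(samples), np.std(samples))  # tiny spread: concentration of measure
```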
Exercise 1 Let $M_n$ be a sequence of $n \times n$ Hermitian matrix ensembles, and let $\mu$ be a continuous probability measure on ${\bf R}$.
- Show that $\mu_{\frac{1}{\sqrt{n}} M_n}$ converges almost surely to $\mu$ if and only if $\mu_{\frac{1}{\sqrt{n}} M_n}(-\infty, \lambda)$ converges almost surely to $\mu(-\infty, \lambda)$ for all $\lambda \in {\bf R}$.
- Show that $\mu_{\frac{1}{\sqrt{n}} M_n}$ converges in probability to $\mu$ if and only if $\mu_{\frac{1}{\sqrt{n}} M_n}(-\infty, \lambda)$ converges in probability to $\mu(-\infty, \lambda)$ for all $\lambda \in {\bf R}$.
- Show that $\mu_{\frac{1}{\sqrt{n}} M_n}$ converges in expectation to $\mu$ if and only if ${\bf E} \mu_{\frac{1}{\sqrt{n}} M_n}(-\infty, \lambda)$ converges to $\mu(-\infty, \lambda)$ for all $\lambda \in {\bf R}$.
We can now state the Wigner semi-circular law.
Theorem 1 (Semicircular law) Let $M_n$ be the top left $n \times n$ minors of an infinite Wigner matrix $(\xi_{ij})_{i,j \geq 1}$. Then the ESDs $\mu_{\frac{1}{\sqrt{n}} M_n}$ converge almost surely (and hence also in probability and in expectation) to the Wigner semi-circular distribution

$$\mu_{sc} := \frac{1}{2\pi} (4 - |x|^2)_+^{1/2}\ dx. \qquad (1)$$

A numerical example of this theorem in action can be seen at the MathWorld entry for this law.
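Such a numerical example is easy to reproduce (the sketch below is mine; the matrix size and binning are arbitrary choices): sample a Wigner matrix, histogram the normalised eigenvalues, and compare against the semi-circular density $\frac{1}{2\pi}(4 - |x|^2)_+^{1/2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
A = rng.standard_normal((n, n))
M = (A + A.T) / np.sqrt(2)               # real symmetric Wigner matrix
lam = np.linalg.eigvalsh(M) / np.sqrt(n)  # normalised eigenvalues

# Compare the empirical density of the ESD with the semicircular density.
bins = np.linspace(-2.5, 2.5, 26)
hist, edges = np.histogram(lam, bins=bins, density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
semicircle = np.sqrt(np.clip(4.0 - mid**2, 0.0, None)) / (2.0 * np.pi)
print(np.max(np.abs(hist - semicircle)))  # small for large n
```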
The semi-circular law nicely complements the upper Bai-Yin theorem from Notes 3, which asserts that (in the case when the entries have finite fourth moment, at least) the matrices $\frac{1}{\sqrt{n}} M_n$ almost surely have operator norm at most $2 + o(1)$. Note that the operator norm is the same thing as the largest magnitude of the eigenvalues. Because the semi-circular distribution (1) is supported on the interval $[-2, 2]$ with positive density on the interior of this interval, Theorem 1 easily supplies the lower Bai-Yin theorem, that the operator norm of $\frac{1}{\sqrt{n}} M_n$ is almost surely at least $2 - o(1)$, and thus (in the finite fourth moment case) the norm is in fact equal to $2 + o(1)$. Indeed, we have just shown that the semi-circular law provides an alternate proof of the lower Bai-Yin bound (Proposition 11 of Notes 3).
As will hopefully become clearer in the next set of notes, the semi-circular law is the noncommutative (or free probability) analogue of the central limit theorem, with the semi-circular distribution (1) taking on the role of the normal distribution. Of course, there is a striking difference between the two distributions, in that the former is compactly supported while the latter is merely subgaussian. One reason for this is that the concentration of measure phenomenon is more powerful in the case of ESDs of Wigner matrices than it is for averages of iid variables; compare the concentration of measure results in Notes 3 with those in Notes 1.
There are several ways to prove (or at least to heuristically justify) the semi-circular law. In this set of notes we shall focus on the two most popular methods, the moment method and the Stieltjes transform method, together with a third (heuristic) method based on Dyson Brownian motion (Notes 3b). In the next set of notes we shall also study the free probability method, and in the set of notes after that we use the determinantal processes method (although this method is only directly applicable to highly symmetric ensembles, such as GUE).