This week I am at the American Institute of Mathematics, as an organiser of a workshop on the universality phenomenon in random matrices. There have been a number of interesting discussions so far in this workshop. Percy Deift, in a lecture on universality for invariant ensembles, gave some applications of what he only half-jokingly termed "the most important identity in mathematics", namely the formula

$\det(1 + AB) = \det(1 + BA)$

whenever $A$, $B$ are $k \times n$ and $n \times k$ matrices respectively (or more generally, $A$ and $B$ could be linear operators with sufficiently good spectral properties that make both sides equal). Note that the left-hand side is a $k \times k$ determinant, while the right-hand side is an $n \times n$ determinant; this formula is particularly useful when computing determinants of large matrices (or of operators), as one can often use it to transform such determinants into much smaller determinants. In particular, the asymptotic behaviour of $n \times n$ determinants as $n \to \infty$ can be converted via this formula to determinants of a fixed size (independent of $n$), which is often a more favourable situation to analyse. Unsurprisingly, this trick is particularly useful for understanding the asymptotic behaviour of determinantal processes.
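As a quick sanity check (an illustrative sketch of mine, not from the original discussion), the identity is easy to verify numerically for random rectangular matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 7
A = rng.standard_normal((k, n))   # k x n
B = rng.standard_normal((n, k))   # n x k

# A k x k determinant on the left, an n x n determinant on the right.
lhs = np.linalg.det(np.eye(k) + A @ B)
rhs = np.linalg.det(np.eye(n) + B @ A)
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

Note how the left side only requires working with a $3 \times 3$ matrix even though the right side involves a $7 \times 7$ one; this is the size reduction exploited in the asymptotic applications mentioned above.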
There are many ways to prove the identity. One is to observe first that when $A$, $B$ are invertible square matrices of the same size, $1+AB$ and $1+BA$ are conjugate to each other (indeed, $B(1+AB)B^{-1} = 1+BA$) and thus clearly have the same determinant; a density argument then removes the invertibility hypothesis, and a padding-by-zeroes argument then extends the square case to the rectangular case. Another is to proceed via the spectral theorem, noting that $AB$ and $BA$ have the same non-zero eigenvalues (with the same multiplicities).
By rescaling, one obtains the variant identity

$\det(1 + tAB) = \det(1 + tBA)$

which essentially relates the characteristic polynomial of $AB$ with that of $BA$. When $n = k$, a comparison of coefficients in this identity already gives important basic identities such as $\hbox{tr}(AB) = \hbox{tr}(BA)$ and $\det(AB) = \det(BA)$; when $n$ is not equal to $k$, an inspection of the $t^{\min(n,k)}$ coefficient similarly gives the Cauchy-Binet formula (which, incidentally, is also useful when performing computations on determinantal processes).
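Both consequences can also be checked numerically (again an illustrative sketch; the subset-sum form of Cauchy-Binet used in the code is the standard one):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
k, n = 2, 4
A = rng.standard_normal((k, n))   # k x n
B = rng.standard_normal((n, k))   # n x k

# The trace identity tr(AB) = tr(BA) holds even in the rectangular case.
assert abs(np.trace(A @ B) - np.trace(B @ A)) < 1e-12

# Cauchy-Binet: det(AB) is a sum over all k-element subsets S of the
# n columns of A (and the matching rows of B).
direct = np.linalg.det(A @ B)
cb = sum(np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
         for S in combinations(range(n), k))
assert abs(cb - direct) < 1e-9
```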
Thanks to this formula (and with a crucial insight of Alice Guionnet), I was able to solve a problem (on outliers for the circular law) that I had in the back of my mind for a few months, and which had initially been posed to me by Larry Abbott; I hope to talk more about this in a future post.
Today, though, I wish to talk about another piece of mathematics that emerged from an afternoon of free-form discussion that we managed to schedule within the AIM workshop. Specifically, we hammered out a heuristic model of the mesoscopic structure of the eigenvalues $\lambda_1 \leq \ldots \leq \lambda_n$ of the $n \times n$ Gaussian Unitary Ensemble (GUE), where $n$ is a large integer. As is well known, the probability density of these eigenvalues is given by the Ginibre distribution

$\frac{1}{Z_n} e^{-H(\lambda)}\ d\lambda$

where $d\lambda = d\lambda_1 \ldots d\lambda_n$ is Lebesgue measure on the Weyl chamber $\{ (\lambda_1,\ldots,\lambda_n) \in {\bf R}^n: \lambda_1 < \ldots < \lambda_n \}$, $Z_n$ is a normalisation constant, and the Hamiltonian $H$ is given by the formula

$H(\lambda) := \sum_{j=1}^n \frac{\lambda_j^2}{2} - 2 \sum_{1 \leq j < k \leq n} \log |\lambda_j - \lambda_k|.$
At the macroscopic scale of $\sqrt{n}$, the eigenvalues are distributed according to the Wigner semicircle law

$\rho_{sc}(x) := \frac{1}{2\pi} (4 - x^2)_+^{1/2}.$

Indeed, if one defines the classical location $\gamma_i$ of the $i^{th}$ eigenvalue to be the unique solution in $[-2\sqrt{n}, 2\sqrt{n}]$ to the equation

$\int_{-2\sqrt{n}}^{\gamma_i} \rho_{sc}(x/\sqrt{n})\ \frac{dx}{\sqrt{n}} = \frac{i}{n},$

then it is known that the random variable $\lambda_i$ is quite close to $\gamma_i$. Indeed, a result of Gustavsson shows that, in the bulk region when $\epsilon n \leq i \leq (1-\epsilon) n$ for some fixed $\epsilon > 0$, $\lambda_i$ is distributed asymptotically as a gaussian random variable with mean $\gamma_i$ and variance comparable to $\log n \times \left(\frac{1}{\sqrt{n} \rho_{sc}(\gamma_i/\sqrt{n})}\right)^2$. Note that from the semicircular law, the factor $\frac{1}{\sqrt{n} \rho_{sc}(\gamma_i/\sqrt{n})}$ is the mean eigenvalue spacing.
At the other extreme, at the microscopic scale of the mean eigenvalue spacing (which is comparable to $1/\sqrt{n}$ in the bulk, but can be as large as $n^{-1/6}$ at the edge), the eigenvalues are asymptotically distributed with respect to a special determinantal point process, namely the Dyson sine process in the bulk (and the Airy process on the edge), as discussed in this previous post.
Here, I wish to discuss the mesoscopic structure of the eigenvalues, which involves scales that are intermediate between the microscopic scale $1/\sqrt{n}$ and the macroscopic scale $\sqrt{n}$, for instance in correlating the eigenvalues $\lambda_i$ and $\lambda_j$ in the regime $|i-j| \sim n^\theta$ for some $0 < \theta < 1$. Here, there is a surprising phenomenon: there is quite a long-range correlation between such eigenvalues. The result of Gustavsson shows that both $\lambda_i$ and $\lambda_j$ behave asymptotically like gaussian random variables, but a further result from the same paper shows that the correlation between these two random variables is asymptotic to $1 - \theta$ (in the bulk, at least); thus, for instance, adjacent eigenvalues $\lambda_{i+1}$ and $\lambda_i$ are almost perfectly correlated (which makes sense, as their spacing is much less than either of their standard deviations), but even very distant eigenvalues, such as $\lambda_{n/4}$ and $\lambda_{3n/4}$, have a correlation comparable to $1/\log n$. One way to get a sense of this is to look at the trace

$\lambda_1 + \ldots + \lambda_n.$

This is also the sum of the diagonal entries of a GUE matrix, and is thus normally distributed with a variance of $n$. In contrast, each of the $\lambda_i$ (in the bulk, at least) has a variance comparable to $\log n / n$. In order for these two facts to be consistent, the average correlation between pairs of eigenvalues then has to be of the order of $1/\log n$.
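Here is a small numerical illustration of the trace computation (a sketch of mine, assuming the common normalisation in which the diagonal entries of a GUE matrix are standard real gaussians; other normalisations scale the variance accordingly):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 20, 4000

# Sample GUE matrices: M = (X + X^*)/2 with X having i.i.d. standard
# complex gaussian entries, so the diagonal entries of M are real N(0,1).
traces = np.empty(trials)
for t in range(trials):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    M = (X + X.conj().T) / 2
    traces[t] = np.trace(M).real

# The trace (= the sum of the eigenvalues) is a sum of n independent
# N(0,1) diagonal entries, so its variance should be close to n.
assert abs(traces.var() - n) < 0.2 * n
```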
Below the fold, I give a heuristic way to see this correlation, based on a Taylor expansion of the convex Hamiltonian $H(\lambda)$ around the minimum $\gamma = (\gamma_1,\ldots,\gamma_n)$, which gives a conceptual probabilistic model for the mesoscopic structure of the GUE eigenvalues. While this heuristic is in no way rigorous, it does seem to explain many of the features currently known or conjectured about GUE, and looks likely to extend also to other models.
One of the key difficulties in performing analysis in infinite-dimensional function spaces, as opposed to finite-dimensional vector spaces, is that the Bolzano-Weierstrass theorem no longer holds: a bounded sequence in an infinite-dimensional function space need not have any convergent subsequences (when viewed using the strong topology). To put it another way, the closed unit ball in an infinite-dimensional function space usually fails to be (sequentially) compact.
As compactness is such a useful property to have in analysis, various tools have been developed over the years to try to salvage some sort of substitute for the compactness property in infinite-dimensional spaces. One of these tools is concentration compactness, which was discussed previously on this blog. This can be viewed as a compromise between weak compactness (which is true in very general circumstances, but is often too weak for applications) and strong compactness (which would be very useful in applications, but is usually false), in which one obtains convergence in an intermediate sense that involves a group of symmetries acting on the function space in question.
Concentration compactness is usually stated and proved in the language of standard analysis: epsilons and deltas, limits and suprema, and so forth. In this post, I wanted to note that one could also state and prove the basic foundations of concentration compactness in the framework of nonstandard analysis, in which one now deals with infinitesimals and ultralimits instead of epsilons and ordinary limits. This is a fairly mild change of viewpoint, but I found it to be informative to view this subject from a slightly different perspective. The nonstandard proofs require a fair amount of general machinery to set up, but conversely, once all the machinery is up and running, the proofs become slightly shorter, and can exploit tools from (standard) infinitary analysis, such as orthogonal projections in Hilbert spaces, or the continuous-pure point decomposition of measures. Because of the substantial amount of setup required, nonstandard proofs tend to have significantly more net complexity than their standard counterparts when it comes to basic results (such as those presented in this post), but the gap between the two narrows when the results become more difficult, and for particularly intricate and deep results it can happen that nonstandard proofs end up being simpler overall than their standard analogues, particularly if the nonstandard proof is able to tap the power of some existing mature body of infinitary mathematics (e.g. ergodic theory, measure theory, Hilbert space theory, or topological group theory) which is difficult to directly access in the standard formulation of the argument.
Hans Lindblad and I have just uploaded to the arXiv our joint paper "Asymptotic decay for a one-dimensional nonlinear wave equation", submitted to Analysis & PDE. This paper, to our knowledge, is the first paper to analyse the asymptotic behaviour of the one-dimensional defocusing nonlinear wave equation

$-u_{tt} + u_{xx} = |u|^{p-1} u, \qquad (1)$

where $u: {\bf R} \times {\bf R} \to {\bf R}$ is the solution and $p > 1$ is a fixed exponent. Nowadays, this type of equation is considered a very simple example of a non-linear wave equation (there is only one spatial dimension, the equation is semilinear, the conserved energy is positive definite and coercive, and there are no derivatives in the nonlinear term), and indeed it is not difficult to show that any solution whose conserved energy

$E[u] := \int_{\bf R} \frac{1}{2} u_t^2 + \frac{1}{2} u_x^2 + \frac{1}{p+1} |u|^{p+1}\ dx$

is finite, will exist globally for all time (and remain finite energy, of course). In particular, from the one-dimensional Gagliardo-Nirenberg inequality (a variant of the Sobolev embedding theorem), such solutions will remain uniformly bounded in $L^\infty_x({\bf R})$ for all time.
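To get a feel for global existence and energy conservation, here is a crude finite-difference sketch of mine for the cubic case $u_{tt} - u_{xx} = -u^3$ (i.e. $p = 3$); the periodic boundary conditions and leapfrog scheme are my own choices for illustration, and the discrete energy is only approximately conserved:

```python
import numpy as np

# Leapfrog discretisation of the cubic defocusing wave equation
#   u_tt - u_xx = -u^3   (the case p = 3)
# on a periodic interval; the energy should stay nearly constant.
N, L = 256, 20.0
dx = L / N
dt, steps = 0.005, 2000
x = np.linspace(0, L, N, endpoint=False)

u = np.exp(-((x - L / 2) ** 2))   # smooth bump, zero initial velocity
u_prev = u.copy()

def laplacian(v):
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2

def energy(u_now, u_old):
    ut = (u_now - u_old) / dt                  # backward time difference
    ux = (np.roll(u_now, -1) - u_now) / dx     # forward space difference
    return np.sum(ut**2 / 2 + ux**2 / 2 + u_now**4 / 4) * dx

E0 = energy(u, u_prev)
for _ in range(steps):
    u, u_prev = 2 * u - u_prev + dt**2 * (laplacian(u) - u**3), u
E1 = energy(u, u_prev)

# Crude scheme, so only approximate conservation is expected.
assert abs(E1 - E0) / E0 < 0.05
```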
However, this leaves open the question of the asymptotic behaviour of such solutions in the limit as $t \to \infty$. In higher dimensions, there are a variety of scattering and asymptotic completeness results which show that solutions to nonlinear wave equations such as (1) decay asymptotically in various senses, at least if one is in the perturbative regime in which the solution is assumed small in some sense (e.g. small energy). For instance, a typical result might be that spatial norms such as $\|u(t)\|_{L^\infty_x}$ might go to zero (in an average sense, at least). In general, such results for nonlinear wave equations are ultimately based on the fact that the linear wave equation in higher dimensions also enjoys an analogous decay as $t \to \infty$, as linear waves in higher dimensions spread out and disperse over time. (This can be formalised by decay estimates on the fundamental solution of the linear wave equation, or by basic estimates such as the (long-time) Strichartz estimates and their relatives.) The idea is then to view the nonlinear wave equation as a perturbation of the linear one.
On the other hand, the solution to the linear one-dimensional wave equation

$-u_{tt} + u_{xx} = 0$

does not exhibit any decay in time; as one learns in an undergraduate PDE class, the general (finite energy) solution to such an equation is given by the superposition of two travelling waves,

$u(t,x) = f(x+t) + g(x-t), \qquad (3)$

where $f$ and $g$ also have finite energy, so in particular norms such as $\|u(t)\|_{L^\infty_x({\bf R})}$ cannot decay to zero as $t \to \infty$ unless the solution is completely trivial.
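One can see this non-decay concretely (an illustrative sketch of mine): a travelling bump satisfies the linear equation, and its sup norm simply translates without shrinking.

```python
import numpy as np

# d'Alembert: u(t, x) = f(x + t) + g(x - t); take g = 0 and a Gaussian f.
f = lambda s: np.exp(-s**2)
x = np.linspace(-50.0, 50.0, 5001)   # grid spacing 0.02

def u(t):
    return f(x + t)

# Check the PDE u_tt - u_xx = 0 by finite differences at some time t0.
h, t0 = 1e-3, 5.0
utt = (u(t0 + h) - 2 * u(t0) + u(t0 - h)) / h**2
uxx = np.gradient(np.gradient(u(t0), x), x)
assert np.abs(utt - uxx).max() < 1e-2

# No decay: the sup norm is the same at time 0 and at time 30; the bump
# has merely translated 30 units to the left.
sup0 = np.abs(u(0.0)).max()
supT = np.abs(u(30.0)).max()
assert abs(supT - sup0) < 1e-6
```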
Nevertheless, we were able to establish a nonlinear decay effect for equation (1), caused more by the nonlinear right-hand side of (1) than by the linear left-hand side, to obtain decay on the average:
Theorem 1. (Average decay) If $u$ is a finite energy solution to (1), then $\frac{1}{T} \int_0^T \|u(t)\|_{L^\infty_x({\bf R})}\ dt$ tends to zero as $T \to +\infty$.
Actually we prove a slightly stronger statement than Theorem 1, in that the decay is uniform among all solutions with a given energy bound, but I will stick to the above formulation of the main result for simplicity.
Informally, the reason for the nonlinear decay is as follows. The linear evolution tries to force waves to move at constant velocity (indeed, from (3) we see that linear waves move at the speed of light $c = 1$). But the defocusing nature of the nonlinearity will spread out any wave that is propagating along a constant velocity worldline. This intuition can be formalised by a Morawetz-type energy estimate that shows that the nonlinear potential energy must decay along any rectangular slab of spacetime (that represents the neighbourhood of a constant velocity worldline).
Now, just because the linear wave equation propagates along constant velocity worldlines, this does not mean that the nonlinear wave equation does too; one could imagine that a wave packet could propagate along a more complicated trajectory in which the velocity is not constant. However, energy methods still force the solution of the nonlinear wave equation to obey finite speed of propagation, which in the wave packet context means (roughly speaking) that the nonlinear trajectory $t \mapsto x(t)$ is a Lipschitz continuous function (with Lipschitz constant at most $1$).
And now we deploy a trick which appears to be new to the field of nonlinear wave equations: we invoke the Rademacher differentiation theorem (or Lebesgue differentiation theorem), which asserts that Lipschitz continuous functions are almost everywhere differentiable. (By coincidence, I am teaching this theorem in my current course, both in one dimension (which is the case of interest here) and in higher dimensions.) A compactness argument allows one to extract a quantitative estimate from this theorem (cf. this earlier blog post of mine) which, roughly speaking, tells us that there are large portions of the trajectory which behave approximately linearly at an appropriate scale. This turns out to be a good enough control on the trajectory that one can apply the Morawetz inequality and rule out the existence of persistent wave packets over long periods of time, which is what leads to Theorem 1.
There is still scope for further work to be done on the asymptotics. In particular, we still do not have a good understanding of what the asymptotic profile of the solution should be, even in the perturbative regime; standard nonlinear geometric optics methods do not appear to work very well due to the extremely weak decay.
In my previous post, I briefly discussed the work of the four Fields medalists of 2010 (Lindenstrauss, Ngo, Smirnov, and Villani). In this post I will discuss the work of Dan Spielman (winner of the Nevanlinna prize), Yves Meyer (winner of the Gauss prize), and Louis Nirenberg (winner of the Chern medal). Again by chance, the work of all three of the recipients overlaps to some extent with my own areas of expertise, so I will be able to discuss a sample contribution for each of them. Again, my choice of contribution is somewhat idiosyncratic and is not intended to represent the “best” work of each of the awardees.
As is now widely reported, the Fields medals for 2010 have been awarded to Elon Lindenstrauss, Ngo Bao Chau, Stas Smirnov, and Cedric Villani. Concurrently, the Nevanlinna prize (for outstanding contributions to mathematical aspects of information science) was awarded to Dan Spielman, the Gauss prize (for outstanding mathematical contributions that have found significant applications outside of mathematics) to Yves Meyer, and the Chern medal (for lifelong achievement in mathematics) to Louis Nirenberg. All of the recipients are of course exceptionally qualified and deserving for these awards; congratulations to all of them. (I should mention that I myself was only very tangentially involved in the awards selection process, and like everyone else, had to wait until the ceremony to find out the winners. I imagine that the work of the prize committees must have been extremely difficult.)
Today, I thought I would mention one result of each of the Fields medalists; by chance, three of the four medalists work in areas reasonably close to my own. (Ngo's work is rather more distant from my areas of expertise, but I will give it a shot anyway.) This will of course only be a tiny sample of each of their work, and I do not claim to be necessarily describing their "best" achievement, as I only know a portion of the research of each of them, and my selection choice may be somewhat idiosyncratic. (I may discuss the work of Spielman, Meyer, and Nirenberg in a later post.)
A recurring theme in mathematics is that of duality: a mathematical object $X$ can either be described internally (or in physical space, or locally), by describing what $X$ physically consists of (or what kind of maps exist into $X$), or externally (or in frequency space, or globally), by describing what $X$ globally interacts or resonates with (or what kind of maps exist out of $X$). These two fundamentally opposed perspectives on the object $X$ are often dual to each other in various ways: performing an operation on $X$ may transform it one way in physical space, but in a dual way in frequency space, with the frequency space description often being an "inversion" of the physical space description. In several important cases, one is fortunate enough to have some sort of fundamental theorem connecting the internal and external perspectives. Here are some (closely inter-related) examples of this perspective:
- Vector space duality A vector space $V$ over a field $F$ can be described either by the set of vectors inside $V$, or dually by the set of linear functionals $\lambda: V \to F$ from $V$ to the field $F$ (or equivalently, the set of vectors inside the dual space $V^*$). (If one is working in the category of topological vector spaces, one would work instead with continuous linear functionals; and so forth.) A fundamental connection between the two is given by the Hahn-Banach theorem (and its relatives).
- Vector subspace duality In a similar spirit, a subspace $W$ of $V$ can be described either by listing a basis or a spanning set, or dually by a list of linear functionals that cut out that subspace (i.e. a spanning set for the orthogonal complement $W^\perp$). Again, the Hahn-Banach theorem provides a fundamental connection between the two perspectives.
- Convex duality More generally, a (closed, bounded) convex body $K$ in a vector space $V$ can be described either by listing a set of (extreme) points whose convex hull is $K$, or else by listing a set of (irreducible) linear inequalities that cut out $K$. The fundamental connection between the two is given by the Farkas lemma.
- Ideal-variety duality In a slightly different direction, an algebraic variety $V$ in an affine space ${\bf A}^n$ can be viewed either "in physical space" or "internally" as a collection of points in $V$, or else "in frequency space" or "externally" as a collection of polynomials on ${\bf A}^n$ whose simultaneous zero locus cuts out $V$. The fundamental connection between the two perspectives is given by the nullstellensatz, which then leads to many of the basic fundamental theorems in classical algebraic geometry.
- Hilbert space duality An element $v$ in a Hilbert space $H$ can either be thought of in physical space as a vector in that space, or in momentum space as a covector $w \mapsto \langle v, w \rangle$ on that space. The fundamental connection between the two is given by the Riesz representation theorem for Hilbert spaces.
- Semantic-syntactic duality Much more generally still, a mathematical theory can either be described internally or syntactically via its axioms and theorems, or externally or semantically via its models. The fundamental connection between the two perspectives is given by the Gödel completeness theorem.
- Intrinsic-extrinsic duality A (Riemannian) manifold $M$ can either be viewed intrinsically (using only concepts that do not require an ambient space, such as the Levi-Civita connection), or extrinsically, for instance as the level set of some defining function in an ambient space. Some important connections between the two perspectives include the Nash embedding theorem and the theorema egregium.
- Group duality A group can be described either via presentations (lists of generators, together with relations between them) or representations (realisations of that group in some more concrete group of transformations). A fundamental connection between the two is Cayley’s theorem. Unfortunately, in general it is difficult to build upon this connection (except in special cases, such as the abelian case), and one cannot always pass effortlessly from one perspective to the other.
- Pontryagin group duality A (locally compact Hausdorff) abelian group $G$ can be described either by listing its elements $g \in G$, or by listing the characters $\chi: G \to {\bf R}/{\bf Z}$ (i.e. continuous homomorphisms from $G$ to the unit circle, or equivalently elements of the Pontryagin dual $\hat G$). The connection between the two is the focus of abstract harmonic analysis.
- Pontryagin subgroup duality A subgroup $H$ of a locally compact abelian group $G$ can be described either by generators in $H$, or generators in the orthogonal complement $H^\perp \subset \hat G$. One of the fundamental connections between the two is the Poisson summation formula.
- Fourier duality A (sufficiently nice) function $f: G \to {\bf C}$ on a locally compact abelian group $G$ (equipped with a Haar measure $\mu$) can either be described in physical space (by its values $f(x)$ at each element $x$ of $G$) or in frequency space (by the values $\hat f(\xi)$ at elements $\xi$ of the Pontryagin dual $\hat G$). The fundamental connection between the two is the Fourier inversion formula.
- The uncertainty principle The behaviour of a function $f$ at physical scales above (resp. below) a certain scale $R$ is almost completely controlled by the behaviour of its Fourier transform $\hat f$ at frequency scales below (resp. above) the dual scale $1/R$, and vice versa, thanks to various mathematical manifestations of the uncertainty principle. (The Poisson summation formula can also be viewed as a variant of this principle, using subgroups instead of scales.)
- Stone/Gelfand duality A (locally compact Hausdorff) topological space $X$ can be viewed in physical space (as a collection of points), or dually, via the algebra $C(X)$ of continuous complex-valued functions on that space, or (in the case when $X$ is compact and totally disconnected) via the boolean algebra of clopen sets (or equivalently, the idempotents of $C(X)$). The fundamental connection between the two is given by the Stone representation theorem or the (commutative) Gelfand-Naimark theorem.
I have discussed a fair number of these examples in previous blog posts (indeed, most of the links above are to my own blog). In this post, I would like to discuss the uncertainty principle, that describes the dual relationship between physical space and frequency space. There are various concrete formalisations of this principle, most famously the Heisenberg uncertainty principle and the Hardy uncertainty principle – but in many situations, it is the heuristic formulation of the principle that is more useful and insightful than any particular rigorous theorem that attempts to capture that principle. Unfortunately, it is a bit tricky to formulate this heuristic in a succinct way that covers all the various applications of that principle; the Heisenberg inequality is a good start, but it only captures a portion of what the principle tells us. Consider for instance the following (deliberately vague) statements, each of which can be viewed (heuristically, at least) as a manifestation of the uncertainty principle:
- A function which is band-limited (restricted to low frequencies) is featureless and smooth at fine scales, but can be oscillatory (i.e. containing plenty of cancellation) at coarse scales. Conversely, a function which is smooth at fine scales will be almost entirely restricted to low frequencies.
- A function which is restricted to high frequencies is oscillatory at fine scales, but is negligible at coarse scales. Conversely, a function which is oscillatory at fine scales will be almost entirely restricted to high frequencies.
- Projecting a function to low frequencies corresponds to averaging out (or spreading out) that function at fine scales, leaving only the coarse scale behaviour.
- Projecting a function to high frequencies corresponds to removing the averaged coarse scale behaviour, leaving only the fine scale oscillation.
- The number of degrees of freedom of a function is bounded by the product of its spatial uncertainty and its frequency uncertainty (or more generally, by the volume of the phase space uncertainty). In particular, there are not enough degrees of freedom for a non-trivial function to be simultaneously localised to both very fine scales and very low frequencies.
- To control the coarse scale (or global) averaged behaviour of a function, one essentially only needs to know the low frequency components of the function (and vice versa).
- To control the fine scale (or local) oscillation of a function, one only needs to know the high frequency components of the function (and vice versa).
- Localising a function to a region of physical space will cause its Fourier transform (or inverse Fourier transform) to resemble a plane wave on every dual region of frequency space.
- Averaging a function along certain spatial directions or at certain scales will cause the Fourier transform to become localised to the dual directions and scales. The smoother the averaging, the sharper the localisation.
- The smoother a function is, the more rapidly decreasing its Fourier transform (or inverse Fourier transform) is (and vice versa).
- If a function is smooth or almost constant in certain directions or at certain scales, then its Fourier transform (or inverse Fourier transform) will decay away from the dual directions or beyond the dual scales.
- If a function has a singularity spanning certain directions or certain scales, then its Fourier transform (or inverse Fourier transform) will decay slowly along the dual directions or within the dual scales.
- Localisation operations in position approximately commute with localisation operations in frequency so long as the product of the spatial uncertainty and the frequency uncertainty is significantly larger than one.
- In the high frequency (or large scale) limit, position and frequency asymptotically behave like a pair of classical observables, and partial differential equations asymptotically behave like classical ordinary differential equations. At lower frequencies (or finer scales), the former becomes a “quantum mechanical perturbation” of the latter, with the strength of the quantum effects increasing as one moves to increasingly lower frequencies and finer spatial scales.
- Etc., etc.
- Almost all of the above statements generalise to locally compact abelian groups other than ${\bf R}^n$ or ${\bf Z}^n$, in which the concept of a direction or scale is replaced by that of a subgroup or an approximate subgroup. (In particular, as we will see below, the Poisson summation formula can be viewed as another manifestation of the uncertainty principle.)
I think of all of the above (closely related) assertions as being instances of "the uncertainty principle", but it seems difficult to combine them all into a single unified assertion, even at the heuristic level; they seem to be better arranged as a cloud of tightly interconnected assertions, each of which is reinforced by several of the others. The famous inequality $\Delta x \cdot \Delta \xi \gtrsim 1$ is at the centre of this cloud, but is by no means the only aspect of it.
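The Heisenberg product at the centre of this cloud can be checked numerically with the discrete Fourier transform (an illustrative sketch of mine; the constant $1/2$ corresponds to the convention in which standard deviations are measured in position and angular frequency, with the Gaussian as extremiser):

```python
import numpy as np

# Discrete check of the Heisenberg product: Δx · Δω >= 1/2, with
# (approximate) equality for a Gaussian, the extremiser.
N, L = 4096, 80.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular frequencies

def heisenberg_product(f):
    fhat = np.fft.fft(f)
    px = np.abs(f) ** 2 / (np.abs(f) ** 2).sum()      # spatial density
    pw = np.abs(fhat) ** 2 / (np.abs(fhat) ** 2).sum()  # frequency density
    dx_ = np.sqrt((px * x**2).sum() - (px * x).sum() ** 2)
    dw_ = np.sqrt((pw * omega**2).sum() - (pw * omega).sum() ** 2)
    return dx_ * dw_

gauss = np.exp(-x**2 / 2)
bumps = np.exp(-(x - 5) ** 2) + np.exp(-(x + 5) ** 2)

p_gauss = heisenberg_product(gauss)
p_bumps = heisenberg_product(bumps)
assert abs(p_gauss - 0.5) < 1e-3   # Gaussian saturates the bound
assert p_bumps > 0.5               # any other profile exceeds it
```

The two-bump example is spatially spread out yet oscillatory in frequency, so its product is well above the minimum, illustrating that only very special functions sit at the boundary of the uncertainty constraint.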
The uncertainty principle (as interpreted in the above broad sense) is one of the most fundamental principles in harmonic analysis (and more specifically, to the subfield of time-frequency analysis), second only to the Fourier inversion formula (and more generally, Plancherel’s theorem) in importance; understanding this principle is a key piece of intuition in the subject that one has to internalise before one can really get to grips with this subject (and also with closely related subjects, such as semi-classical analysis and microlocal analysis). Like many fundamental results in mathematics, the principle is not actually that difficult to understand, once one sees how it works; and when one needs to use it rigorously, it is usually not too difficult to improvise a suitable formalisation of the principle for the occasion. But, given how vague this principle is, it is difficult to present this principle in a traditional “theorem-proof-remark” manner. Even in the more informal format of a blog post, I was surprised by how challenging it was to describe my own understanding of this piece of mathematics in a linear fashion, despite (or perhaps because of) it being one of the most central and basic conceptual tools in my own personal mathematical toolbox. In the end, I chose to give below a cloud of interrelated discussions about this principle rather than a linear development of the theory, as this seemed to more closely align with the nature of this principle.
A (smooth) Riemannian manifold is a smooth manifold $M$ without boundary, equipped with a Riemannian metric ${\rm g}$, which assigns a length $|v|_{{\rm g}(x)}$ to every tangent vector $v \in T_x M$ at a point $x \in M$, and more generally assigns an inner product

$\langle v, w \rangle_{{\rm g}(x)} \in {\bf R}$

to every pair of tangent vectors $v, w \in T_x M$ at a point $x \in M$. (We use Roman font for ${\rm g}$ here, as we will need to use $g$ to denote group elements later in this post.) This inner product is assumed to be symmetric, positive definite, and smoothly varying in $x$, and the length is then given in terms of the inner product by the formula

$|v|_{{\rm g}(x)}^2 := \langle v, v \rangle_{{\rm g}(x)}.$

In coordinates (and also using abstract index notation), the metric ${\rm g}$ can be viewed as an invertible symmetric rank $(0,2)$ tensor ${\rm g}_{ij}(x)$, with

$\langle v, w \rangle_{{\rm g}(x)} = {\rm g}_{ij}(x) v^i w^j.$

One can also view the Riemannian metric as providing a (self-adjoint) identification between the tangent bundle $TM$ of the manifold and the cotangent bundle $T^* M$; indeed, every tangent vector $v \in T_x M$ is then identified with the cotangent vector $v^* \in T_x^* M$, defined by the formula

$v^*(w) := \langle v, w \rangle_{{\rm g}(x)}.$

In coordinates, $v^*_i = {\rm g}_{ij} v^j$.
A fundamental dynamical system on the tangent bundle (or equivalently, the cotangent bundle, using the above identification) of a Riemannian manifold is that of geodesic flow. Recall that geodesics are smooth curves $\gamma: [a,b] \to M$ that minimise the length

$|\gamma| := \int_a^b |\gamma'(t)|_{{\rm g}(\gamma(t))}\ dt.$

There is some degeneracy in this definition, because one can reparameterise the curve $\gamma$ without affecting the length. In order to fix this degeneracy (and also because the square of the speed is a more tractable quantity analytically than the speed itself), it is better if one replaces the length with the energy

$E(\gamma) := \frac{1}{2} \int_a^b |\gamma'(t)|_{{\rm g}(\gamma(t))}^2\ dt.$

Minimising the energy of a parameterised curve turns out to be the same as minimising the length, together with an additional requirement that the speed $|\gamma'(t)|_{{\rm g}(\gamma(t))}$ stay constant in time. Minimisers (and more generally, critical points) of the energy functional (holding the endpoints fixed) are known as geodesics. From a physical perspective, geodesic flow governs the motion of a particle that is subject to no external forces and thus moves freely, save for the constraint that it must always lie on the manifold $M$.
One can also view geodesic flow as a dynamical system on the tangent bundle (with the state at any given time $t$ given by the position $x(t) \in M$ and the velocity $\dot x(t) \in T_{x(t)} M$) or on the cotangent bundle (with the state then given by the position $x(t) \in M$ and the momentum $p(t) \in T^*_{x(t)} M$). With the latter perspective (sometimes referred to as cogeodesic flow), geodesic flow becomes a Hamiltonian flow, with Hamiltonian given as

$H(x,p) := \frac{1}{2} \langle p, p \rangle_{{\rm g}^{-1}(x)},$

where $\langle \cdot, \cdot \rangle_{{\rm g}^{-1}(x)}$ is the inverse inner product to $\langle \cdot, \cdot \rangle_{{\rm g}(x)}$, which can be defined for instance by the formula

$\langle p, q \rangle_{{\rm g}^{-1}(x)} := {\rm g}^{ij}(x) p_i q_j,$

where ${\rm g}^{ij}(x)$ is the inverse matrix to ${\rm g}_{ij}(x)$. In coordinates, geodesic flow is given by Hamilton's equations of motion

$\dot x^i = {\rm g}^{ij}(x) p_j; \qquad \dot p_i = -\frac{1}{2} (\partial_i {\rm g}^{jk}(x)) p_j p_k.$
In terms of the velocity $v := \dot x$, we can rewrite these equations as the geodesic equation

$\ddot x^i + \Gamma^i_{jk} \dot x^j \dot x^k = 0,$

where $\Gamma^i_{jk} := \frac{1}{2} {\rm g}^{il} (\partial_j {\rm g}_{lk} + \partial_k {\rm g}_{lj} - \partial_l {\rm g}_{jk})$ are the Christoffel symbols of the metric ${\rm g}$.
If the manifold $M$ is an embedded submanifold of a larger Euclidean space ${\bf R}^N$, with the metric ${\rm g}$ on $M$ being induced from the standard metric on ${\bf R}^N$, then the geodesic flow equation can be rewritten in the equivalent form

$\ddot \gamma(t) \perp T_{\gamma(t)} M,$

where $\gamma$ is now viewed as taking values in ${\bf R}^N$, and $T_{\gamma(t)} M$ is similarly viewed as a subspace of ${\bf R}^N$. This is intuitively obvious from the geometric interpretation of geodesics: if the acceleration of a curve contains components that are tangential to the manifold rather than normal to it, then it is geometrically clear that one should be able to shorten the curve by shifting it along the indicated tangential direction. It is an instructive exercise to rigorously formulate the above intuitive argument. This fact also conforms well with one's physical intuition of geodesic flow as the motion of a free particle constrained to lie in $M$; the normal acceleration $\ddot \gamma(t)$ then corresponds to the centripetal force necessary to keep the particle lying in $M$ (otherwise it would fly off along a tangent line to $M$, as per Newton's first law). The precise value of the normal vector $\ddot \gamma(t)$ can be computed via the second fundamental form as $\ddot \gamma(t) = {\rm II}(\dot\gamma(t), \dot\gamma(t))$, but we will not need this formula here.
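For the unit sphere in ${\bf R}^3$, the normal direction at $x$ is $x$ itself, and the embedded geodesic equation for unit-speed curves reduces to $\ddot x = -|\dot x|^2 x$; the following sketch of mine integrates this and confirms the great-circle behaviour:

```python
import numpy as np

# Geodesic flow on the unit sphere S^2 embedded in R^3: the acceleration
# must be normal to the sphere, and since the unit normal at x is x
# itself, unit-speed geodesics satisfy x'' = -|x'|^2 x (great circles).
def deriv(s):
    x, v = s[:3], s[3:]
    return np.concatenate([v, -np.dot(v, v) * x])

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2)
    k4 = deriv(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x0 = np.array([1.0, 0.0, 0.0])
v0 = np.array([0.0, 1.0, 0.0])    # unit speed, tangent to the sphere at x0
state = np.concatenate([x0, v0])

steps = 1000
dt = 2 * np.pi / steps            # one full great circle has length 2*pi
for _ in range(steps):
    state = rk4_step(state, dt)

x, v = state[:3], state[3:]
assert abs(np.linalg.norm(x) - 1.0) < 1e-6   # stays on the sphere
assert abs(np.linalg.norm(v) - 1.0) < 1e-6   # speed is conserved
assert np.linalg.norm(x - x0) < 1e-5         # closes up after 2*pi
```

The conservation of speed in this computation reflects the earlier remark that energy-critical curves are traversed at constant speed.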
In a beautiful paper from 1966, Vladimir Arnold (who, sadly, passed away last week), observed that many basic equations in physics, including the Euler equations of motion of a rigid body, and also (in what is a priori a remarkable coincidence) the Euler equations of fluid dynamics of an inviscid incompressible fluid, can be viewed (formally, at least) as geodesic flows on a (finite or infinite dimensional) Riemannian manifold. And not just any Riemannian manifold: the manifold is a Lie group (or, to be truly pedantic, a torsor of that group), equipped with a right-invariant (or left-invariant, depending on one's conventions) metric. In the context of rigid bodies, the Lie group is the group of rigid motions; in the context of incompressible fluids, it is the group ${\rm Sdiff}(M)$ of measure-preserving diffeomorphisms of the fluid domain $M$. The right-invariance makes the Hamiltonian mechanics of geodesic flow in this context (where it is sometimes known as the Euler-Arnold equation or the Euler-Poisson equation) quite special; it becomes (formally, at least) completely integrable, and also indicates (in principle, at least) a way to reformulate these equations in a Lax pair formulation. And indeed, many further completely integrable equations, such as the Korteweg-de Vries equation, have since been reinterpreted as Euler-Arnold flows.
From a physical perspective, this all fits well with the interpretation of geodesic flow as the free motion of a system subject only to a physical constraint, such as rigidity or incompressibility. (I do not know, though, of a similarly intuitive explanation as to why the Korteweg-de Vries equation is a geodesic flow.)
One consequence of being a completely integrable system is that one has a large number of conserved quantities. In the case of the Euler equations of motion of a rigid body, the conserved quantities are the linear and angular momentum (as observed in an external reference frame, rather than the frame of the object). In the case of the two-dimensional Euler equations, the conserved quantities are the pointwise values of the vorticity (as viewed in Lagrangian coordinates, rather than Eulerian coordinates). In higher dimensions, the conserved quantity is now the (Hodge star of the) vorticity, again viewed in Lagrangian coordinates. The vorticity itself then evolves by the vorticity equation, and is subject to vortex stretching as the diffeomorphism between the initial and final state becomes increasingly sheared.
The elegant Euler-Arnold formalism is reasonably well-known in some circles (particularly in Lagrangian and symplectic dynamics, where it can be viewed as a special case of the Euler-Poincaré formalism or Lie-Poisson formalism respectively), but not in others; I for instance was only vaguely aware of it until recently, and I think that even in fluid mechanics this perspective to the subject is not always emphasised. Given the circumstances, I thought it would therefore be appropriate to present Arnold’s original 1966 paper here. (For a more modern treatment of these topics, see the books of Arnold-Khesin and Marsden-Ratiu.)
In order to avoid technical issues, I will work formally, ignoring questions of regularity or integrability, and pretending that infinite-dimensional manifolds behave in exactly the same way as their finite-dimensional counterparts. In the finite-dimensional setting, it is not difficult to make all of the formal discussion below rigorous; but the situation in infinite dimensions is substantially more delicate. (Indeed, it is a notorious open problem whether the Euler equations for incompressible fluids even form a global continuous flow in a reasonable topology in the first place!) However, I do not want to discuss these analytic issues here; see this paper of Ebin and Marsden for a treatment of these topics.
Semilinear dispersive and wave equations, of which the defocusing nonlinear wave equation
is a typical example (where is a fixed exponent, and is a scalar field), can be viewed as a “tug of war” between a linear dispersive equation, in this case the linear wave equation
and a nonlinear ODE, in this case the equation
If the nonlinear term was not present, leaving only the dispersive equation (2), then as the term “dispersive” suggests, in the asymptotic limit , the solution would spread out in space and decay in amplitude. For instance, in the model case when and the initial position vanishes (leaving only the initial velocity as non-trivial initial data), the solution for is given by the formula
where is surface measure on the sphere . (To avoid technical issues, let us restrict attention to classical (smooth) solutions.) Thus, if the initial velocity was bounded and compactly supported, then the solution would be bounded by and would thus decay uniformly to zero as . Similar phenomena occur for all dimensions greater than .
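This decay is easy to observe numerically. The following sketch (assuming NumPy; the radial bump g(y) = max(0, 1 - |y|^2) and the choice of evaluation point on the light cone are illustrative choices made here, not taken from the text) evaluates the spherical-means formula for the three-dimensional wave equation directly:

```python
import numpy as np

# For d = 3, vanishing initial position, and initial velocity g, the solution is
#   u(t, x) = (t / (4*pi)) * integral over the unit sphere of g(x + t*omega).
# Take g(y) = max(0, 1 - |y|^2) and evaluate on the light cone at x = (t, 0, 0).
# Then |x + t*omega|^2 = 2 t^2 (1 + omega_1), so the integrand depends only on
# omega_1 = s and the sphere integral collapses to a one-dimensional integral:
#   u(t) = (t/2) * int_{-1}^{1} max(0, 1 - 2 t^2 (1 + s)) ds.

def u_lightcone(t, n=2001):
    width = 1.0 / (2.0 * t**2)              # the integrand vanishes outside this window
    s = np.linspace(-1.0, -1.0 + width, n)
    integrand = 1.0 - 2.0 * t**2 * (1.0 + s)
    ds = s[1] - s[0]
    integral = np.sum((integrand[:-1] + integrand[1:]) / 2.0) * ds  # trapezoid rule
    return (t / 2.0) * integral

# t * u(t) is essentially constant, exhibiting uniform 1/t decay
decay = [t * u_lightcone(t) for t in (5.0, 10.0, 20.0, 40.0)]
```

For this particular bump the one-dimensional integral can be done by hand, giving u(t) = 1/(8t) exactly, and the computed products t·u(t) all come out equal to 1/8.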
Conversely, if the dispersive term was not present, leaving only the ODE (3), then one no longer expects decay; indeed, given the conserved energy for the ODE (3), we do not expect any decay at all (and indeed, solutions are instead periodic in time for each fixed , as can easily be seen by viewing the ODE (and the energy curves) in phase space).
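This non-decay can also be checked numerically. The sketch below (assuming NumPy; the exponent p = 3 is chosen here for concreteness) integrates the ODE u'' = -|u|^{p-1} u by the classical fourth-order Runge-Kutta method and monitors the conserved energy (u')^2/2 + |u|^{p+1}/(p+1):

```python
import numpy as np

def rk4_step(state, dt):
    # one RK4 step for the first-order system (u, v)' = (v, -u^3), i.e. u'' = -u^3
    def f(y):
        u, v = y
        return np.array([v, -u**3])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def energy(state):
    u, v = state
    return 0.5 * v**2 + 0.25 * u**4   # the conserved energy for p = 3

state = np.array([1.0, 0.0])          # initial data u(0) = 1, u'(0) = 0
dt, steps = 1e-3, 60000               # integrate up to time 60
E0 = energy(state)
late_amplitudes = []
for n in range(steps):
    state = rk4_step(state, dt)
    if (n + 1) * dt >= 50.0:
        late_amplitudes.append(abs(state[0]))
energy_drift = abs(energy(state) - E0)
late_max = max(late_amplitudes)       # stays near 1: no decay, consistent with periodicity
```

The energy drift stays tiny, and the amplitude late in the evolution is still essentially 1, consistent with the periodic (non-decaying) orbits in phase space.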
Depending on the relative “size” of the dispersive term and the nonlinear term , one can heuristically describe the behaviour of a solution at various positions and times as either being dispersion dominated (in which ), nonlinearity dominated (in which ), or contested (in which , are comparable in size). Very roughly speaking, when one is in the dispersion dominated regime, then perturbation theory becomes effective, and one can often show that the solution to the nonlinear equation indeed behaves like the solution to the linear counterpart, in particular exhibiting decay as . In principle, perturbation theory is also available in the nonlinearity dominated regime (in which the dispersion is now viewed as the perturbation, and the nonlinearity as the main term), but in practice this is often difficult to apply (due to the nonlinearity of the approximating equation and the large number of derivatives present in the perturbative term), and so one has to fall back on non-perturbative tools, such as conservation laws and monotonicity formulae. The contested regime is the most interesting, and gives rise to intermediate types of behaviour that are not present in the purely dispersive or purely nonlinear equations, such as solitary wave solutions (solitons) or solutions that blow up in finite time.
In order to analyse how solutions behave in each of these regimes rigorously, one usually works with a variety of function spaces (such as Lebesgue spaces and Sobolev spaces ). As such, one generally needs to first establish a number of function space estimates (e.g. Sobolev inequalities, Hölder-type inequalities, Strichartz estimates, etc.) in order to study these equations at the formal level.
Unfortunately, this emphasis on function spaces and their estimates can obscure the underlying physical intuition behind the dynamics of these equations, and the field of analysis of PDE sometimes acquires a reputation for being unduly technical as a consequence. However, as noted in a previous blog post, one can view function space norms as a way to formalise the intuitive notions of the “height” (amplitude) and “width” (wavelength) of a function (wave).
It turns out that one can analyse the behaviour of nonlinear dispersive equations on a similar heuristic level, as that of understanding the dynamics of the amplitude and wavelength (or frequency ) of a wave. Below the fold I give some examples of this heuristic; for sake of concreteness I restrict attention to the nonlinear wave equation (1), though one can of course extend this heuristic to many other models also. Rigorous analogues of the arguments here can be found in several places, such as the book of Shatah and Struwe, or my own book on the subject.
Our study of random matrices, to date, has focused on somewhat general ensembles, such as iid random matrices or Wigner random matrices, in which the distribution of the individual entries of the matrices was essentially arbitrary (as long as certain moments, such as the mean and variance, were normalised). In these notes, we now focus on two much more special, and much more symmetric, ensembles:
- The Gaussian Unitary Ensemble (GUE), which is an ensemble of random Hermitian matrices in which the upper-triangular entries are iid with distribution , and the diagonal entries are iid with distribution , and independent of the upper-triangular ones; and
- The Gaussian random matrix ensemble, which is an ensemble of random (non-Hermitian) matrices whose entries are iid with distribution .
The symmetric nature of these ensembles will allow us to compute the spectral distribution by exact algebraic means, revealing a surprising connection with orthogonal polynomials and with determinantal processes. This will, for instance, recover the semi-circular law for GUE, but will also reveal fine spacing information, such as the distribution of the gap between adjacent eigenvalues, which is largely out of reach of tools such as the Stieltjes transform method and the moment method (although the moment method, with some effort, is able to control the extreme edges of the spectrum).
Similarly, we will see for the first time the circular law for eigenvalues of non-Hermitian matrices.
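Both laws are easy to observe numerically. The following sketch (assuming NumPy; the matrix size N = 500 and the random seed are arbitrary choices made here) samples one GUE matrix and one iid complex Gaussian matrix, and inspects the rescaled spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

# GUE: symmetrising an iid complex Gaussian matrix produces real N(0,1) diagonal
# entries and complex N(0,1) off-diagonal entries, matching the definition above.
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
H = (A + A.conj().T) / np.sqrt(2.0)
gue_evals = np.linalg.eigvalsh(H) / np.sqrt(N)   # semicircle scaling
second_moment = np.mean(gue_evals**2)            # semicircle law on [-2, 2]: second moment 1
top_eig = gue_evals.max()                        # should sit near the spectral edge 2

# Circular law: eigenvalues of an iid complex Gaussian matrix, rescaled by sqrt(N),
# fill out the unit disk in the complex plane.
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
zs = np.linalg.eigvals(G) / np.sqrt(N)
frac_in_disk = np.mean(np.abs(zs) <= 1.05)       # almost all eigenvalues lie in the disk
```

Already at this modest matrix size, the empirical second moment is close to 1, the largest eigenvalue is close to 2, and essentially all of the non-Hermitian eigenvalues fall inside the unit disk.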
There are a number of other highly symmetric ensembles which can also be treated by the same methods, most notably the Gaussian Orthogonal Ensemble (GOE) and the Gaussian Symplectic Ensemble (GSE). However, for simplicity we shall focus just on the above two ensembles. For a systematic treatment of these ensembles, see the text by Deift.
When solving the initial value problem for an ordinary differential equation, such as
where is the unknown solution (taking values in some finite-dimensional vector space ), is the initial datum, and is some nonlinear function (which we will take to be smooth for sake of argument), then one can construct a solution locally in time via the Picard iteration method. There are two basic ideas. The first is to use the fundamental theorem of calculus to rewrite the initial value problem (1) as the problem of solving an integral equation,
The second idea is to solve this integral equation by the contraction mapping theorem, showing that the integral operator defined by
is a contraction on a suitable complete metric space (e.g. a closed ball in the function space ), and thus has a unique fixed point in this space. This method works as long as one only seeks to construct local solutions (for time in for sufficiently small ), but the solutions constructed have a number of very good properties, including
- Existence: A solution exists in the space (and even in ) for sufficiently small.
- Uniqueness: There is at most one solution to the initial value problem in the space (or in smoother spaces, such as ). (For solutions in the weaker space we use the integral formulation (2) to define the solution concept.)
- Lipschitz continuous dependence on the data: If is a sequence of initial data converging to , then the associated solutions converge uniformly to on (possibly after shrinking slightly). In fact we have the Lipschitz bound for large enough and , where is an absolute constant.
This package of properties is referred to as (Lipschitz) wellposedness.
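A minimal sketch of the Picard scheme in practice (assuming NumPy; the model problem u' = u with exact solution e^t, and the time horizon T = 1/2, are illustrative choices made here): one iterates the integral operator, with the time integral discretised by the trapezoidal rule.

```python
import numpy as np

# Picard iteration for u' = F(u), u(0) = u0, in its integral form
#   u(t) = u0 + int_0^t F(u(s)) ds,
# with the integral discretised by the trapezoidal rule on a time grid.

def picard(F, u0, T=0.5, n=1001, iterations=30):
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    u = np.full(n, u0, dtype=float)   # zeroth iterate: the constant function u0
    for _ in range(iterations):
        f = F(u)
        # cumulative trapezoid: the integral of F(u) from 0 to each grid point
        integral = np.concatenate(([0.0], np.cumsum((f[:-1] + f[1:]) * dt / 2.0)))
        u = u0 + integral
    return t, u

t, u = picard(lambda w: w, 1.0)
err = np.max(np.abs(u - np.exp(t)))   # the iterates converge to the true solution
```

Because the time interval is short, the integral operator is a contraction, and the iterates converge geometrically to the discretised solution, which in turn is close to e^t.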
This method extends to certain partial differential equations, particularly those of a semilinear nature (linear except for lower order nonlinear terms). For instance, if trying to solve an initial value problem of the form
where now takes values in a function space (e.g. a Sobolev space ), is an initial datum, is some (differential) operator (independent of ) that is (densely) defined on , and is a nonlinearity which is also (densely) defined on , then (formally, at least) one can solve this problem by using Duhamel’s formula to convert the problem to that of solving an integral equation
and one can then hope to show that the associated nonlinear integral operator
is a contraction in a subset of a suitably chosen function space.
This method turns out to work surprisingly well for many semilinear partial differential equations, and in particular for semilinear parabolic, semilinear dispersive, and semilinear wave equations. As in the ODE case, when the method works, it usually gives the entire package of Lipschitz well-posedness: existence, uniqueness, and Lipschitz continuous dependence on the initial data, for short times at least.
However, when one moves from semilinear initial value problems to quasilinear initial value problems such as
in which the top order operator now depends on the solution itself, then the nature of well-posedness changes; one can still hope to obtain (local) existence and uniqueness, and even continuous dependence on the data, but one usually is forced to give up Lipschitz continuous dependence at the highest available regularity (though one can often recover it at lower regularities). As a consequence, the Picard iteration method is not directly suitable for constructing solutions to such equations.
where we consider as part of the initial data. (If one wishes, one could view this equation as a rather trivial example of a system.
to emphasise this viewpoint, but this would be somewhat idiosyncratic.) One can solve this equation explicitly, of course, to get the solution
In particular, if we look at the solution just at time for simplicity, we have
Now let us see how this solution depends on the parameter . One can ask whether this dependence is Lipschitz in , in some function space :
for some finite . But using the Newton approximation
we see that we should only expect such a bound when (and its translates) lie in . Thus, we see a loss of derivatives phenomenon with regard to Lipschitz well-posedness; if the initial data is in some regularity space, say , then one only obtains Lipschitz dependence on in a lower regularity space such as .
We have just seen that if all one knows about the initial data is that it is bounded in a function space , then one usually cannot hope to make the dependence of on the velocity parameter Lipschitz continuous. Indeed, one cannot even make it continuous uniformly in . Given two values of that are close together, e.g. and , and a reasonable function space (e.g. a Sobolev space , or a classical regularity space ) one can easily cook up a function that is bounded in but whose two solutions and separate in the norm at time , simply by choosing to be supported on an interval of width .
(Part of the problem here is that using a subtractive method to determine the distance between two solutions is not a physically natural operation when transport mechanisms are present that could cause the key features of (such as singularities) to be situated in slightly different locations. In such cases, the correct notion of distance may need to take transport into account, e.g. by using metrics of Wasserstein type.)
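The loss-of-derivatives phenomenon above is easy to see concretely for the constant-coefficient transport equation, whose solution is just a translate of the data. The following sketch (assuming NumPy; the datum f(x) = sin(Nx) and the pair of nearby velocities are choices made here) measures the sup-norm separation of two solutions at time t = 1 as the frequency N increases:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 20001)
v, vprime = 0.0, 1e-4          # two nearby velocities
ratios = []
for N in (1, 10, 100):
    f = lambda y: np.sin(N * y)
    # the two solutions at time t = 1 are f(x - v) and f(x - vprime)
    separation = np.max(np.abs(f(x - v) - f(x - vprime)))
    ratios.append(separation / abs(v - vprime))
# the observed "Lipschitz constant" grows like N, i.e. like the sup norm of f'
```

The ratios come out close to 1, 10, and 100 respectively: the Lipschitz constant of the data-to-solution map is proportional to the norm of one additional derivative of the data, exactly the loss of derivatives described above.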
On the other hand, one still has non-uniform continuous dependence on the initial parameters: if lies in some reasonable function space , then the map is continuous in the topology, even if it is not uniformly continuous with respect to . (More succinctly: translation is a continuous but not uniformly continuous operation in most function spaces.) The reason for this is that we already have established this continuity in the case when is so smooth that an additional derivative of lies in ; and such smooth functions tend to be dense in the original space , so the general case can then be established by a limiting argument, approximating a general function in by a smoother function. We then see that the non-uniformity ultimately comes from the fact that a given function in may be arbitrarily rough (or concentrated at an arbitrarily fine scale), and so the ability to approximate such a function by a smooth one can be arbitrarily poor.
In many quasilinear PDE, one often encounters qualitatively similar phenomena. Namely, one often has local well-posedness in sufficiently smooth function spaces (so that if the initial data lies in , then for short times one has existence, uniqueness, and continuous dependence on the data in the topology), but Lipschitz or uniform continuity in the topology is usually false. However, if the data (and solution) is known to be in a high-regularity function space , one can often recover Lipschitz or uniform continuity in a lower-regularity topology.
Because the continuous dependence on the data in quasilinear equations is necessarily non-uniform, the arguments needed to establish this dependence can be remarkably delicate. As with the simple example of the transport equation, the key is to approximate a rough solution by a smooth solution first, by smoothing out the data (this is the non-uniform step, as it depends on the physical scale (or wavelength) at which the data features are located). But for quasilinear equations, keeping the rough and smooth solutions close together can require a little juggling of function space norms, in particular playing the low-frequency nature of the smooth solution against the high-frequency nature of the residual between the rough and smooth solutions.
Below the fold I will illustrate this phenomenon with one of the simplest quasilinear equations, namely the initial value problem for the inviscid Burgers’ equation
which is a modification of the transport equation (3) in which the velocity is no longer a parameter, but now depends on (and is, in this case, actually equal to) the solution. To avoid technicalities we will work only with the classical function spaces of times continuously differentiable functions, though one can certainly work with other spaces (such as Sobolev spaces) by exploiting the Sobolev embedding theorem. To avoid having to distinguish continuity from uniform continuity, we shall work in a compact domain by assuming periodicity in space, thus for instance restricting to the unit circle .
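As a concrete illustration (a sketch assuming NumPy; the datum sin x and the time t = 1/2 are choices made here), the method of characteristics gives the implicit equation u(t,x) = u₀(x - t u(t,x)) for Burgers' equation on the circle, which can be solved by fixed-point iteration as long as t sup|u₀'| < 1, i.e. before the characteristics cross:

```python
import numpy as np

def burgers_characteristics(u0, t, x, iterations=200):
    # solve the implicit equation u = u0(x - t*u) by fixed-point iteration;
    # the map is a contraction with factor t * sup|u0'|, so this converges
    # for t * sup|u0'| < 1 (before shock formation)
    u = u0(x)                   # initial guess: the datum itself
    for _ in range(iterations):
        u = u0(x - t * u)
    return u

x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
t = 0.5
u = burgers_characteristics(np.sin, t, x)
residual = np.max(np.abs(u - np.sin(x - t * u)))   # the implicit equation is satisfied
steepest = np.max(np.abs(np.gradient(u, x)))       # slope steepens from 1 towards 1/(1-t) = 2
```

The residual of the implicit equation is at machine-precision level, and the steepest slope of the solution has already grown from 1 to about 2, foreshadowing the shock that forms at t = 1.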
This discussion is inspired by this survey article of Nikolay Tzvetkov, which further explores the distinction between well-posedness and ill-posedness in both semilinear and quasilinear settings.