
These lecture notes are a continuation of the 254A lecture notes from the previous quarter.

We consider the Euler equations for incompressible fluid flow on a Euclidean space {{\bf R}^d}; we will label {{\bf R}^d} as the “Eulerian space” {{\bf R}^d_E} (or “Euclidean space”, or “physical space”) to distinguish it from the “Lagrangian space” {{\bf R}^d_L} (or “labels space”) that we will introduce shortly (but the reader is free to also ignore the {E} or {L} subscripts if he or she wishes). Elements of Eulerian space {{\bf R}^d_E} will be referred to by symbols such as {x}; we use {dx} to denote Lebesgue measure on {{\bf R}^d_E}, and {x^1,\dots,x^d} for the {d} coordinates of {x}, and we use indices such as {i,j,k} to index these coordinates (with the usual summation conventions), so that for instance {\partial_i} denotes partial differentiation along the {x^i} coordinate. (We use superscripts for coordinates {x^i} instead of subscripts {x_i} to be compatible with some differential geometry notation that we will use shortly; in particular, when using the summation notation, we will now be matching subscripts with superscripts for the pair of indices being summed.)

In Eulerian coordinates, the Euler equations read

\displaystyle  \partial_t u + u \cdot \nabla u = - \nabla p \ \ \ \ \ (1)

\displaystyle  \nabla \cdot u = 0

where {u: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_E} is the velocity field and {p: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}} is the pressure field. These are functions of the time variable {t \in [0,T)} and of the spatial location variable {x \in {\bf R}^d_E}. We will refer to the coordinates {(t,x) = (t,x^1,\dots,x^d)} as Eulerian coordinates. However, if one reviews the physical derivation of the Euler equations from 254A Notes 0, before one takes the continuum limit, the fundamental unknowns were not the velocity field {u} or the pressure field {p}, but rather the trajectories {(x^{(a)}(t))_{a \in A}}, which can be thought of as a single function {x: [0,T) \times A \rightarrow {\bf R}^d_E} from the coordinates {(t,a)} (where {t} is a time and {a} is an element of the label set {A}) to {{\bf R}^d_E}. The relationship between the trajectories {x^{(a)}(t) = x(t,a)} and the velocity field was given by the informal relationship

\displaystyle  \partial_t x(t,a) \approx u( t, x(t,a) ). \ \ \ \ \ (2)

We will refer to the coordinates {(t,a)} as (discrete) Lagrangian coordinates for describing the fluid.

In view of this, it is natural to ask whether there is an alternate way to formulate the continuum limit of incompressible inviscid fluids, by using a continuous version {(t,a)} of the Lagrangian coordinates, rather than Eulerian coordinates. This is indeed the case. Suppose for instance one has a smooth solution {u, p} to the Euler equations on a spacetime slab {[0,T) \times {\bf R}^d} in Eulerian coordinates; assume furthermore that the velocity field {u} is uniformly bounded. We introduce another copy {{\bf R}^d_L} of {{\bf R}^d}, which we call Lagrangian space or labels space; we use symbols such as {a} to refer to elements of this space, {da} to denote Lebesgue measure on {{\bf R}^d_L}, and {a^1,\dots,a^d} to refer to the {d} coordinates of {a}. We use indices such as {\alpha,\beta,\gamma} to index these coordinates, thus for instance {\partial_\alpha} denotes partial differentiation along the {a^\alpha} coordinate. We will use summation conventions for both the Eulerian coordinates {i,j,k} and the Lagrangian coordinates {\alpha,\beta,\gamma}, with an index being summed if it appears as both a subscript and a superscript in the same term. While {{\bf R}^d_L} and {{\bf R}^d_E} are of course isomorphic, we will try to refrain from identifying them, except perhaps at the initial time {t=0} in order to fix the initialisation of Lagrangian coordinates.

Given a smooth and bounded velocity field {u: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_E}, define a trajectory map for this velocity to be any smooth map {X: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E} that obeys the ODE

\displaystyle  \partial_t X(t,a) = u( t, X(t,a) ); \ \ \ \ \ (3)

in view of (2), this describes the trajectory (in {{\bf R}^d_E}) of a particle labeled by an element {a} of {{\bf R}^d_L}. From the Picard existence theorem and the hypothesis that {u} is smooth and bounded, such a map exists and is unique as long as one specifies the initial location {X(0,a)} assigned to each label {a}. Traditionally, one chooses the initial condition

\displaystyle  X(0,a) = a \ \ \ \ \ (4)

for {a \in {\bf R}^d_L}, so that we label each particle by its initial location at time {t=0}; we are also free to specify other initial conditions for the trajectory map if we please. Indeed, we have the freedom to “permute” the labels {a \in {\bf R}^d_L} by an arbitrary diffeomorphism: if {X: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E} is a trajectory map, and {\pi: {\bf R}^d_L \rightarrow{\bf R}^d_L} is any diffeomorphism (a smooth map whose inverse exists and is also smooth), then the map {X \circ \pi: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E} is also a trajectory map, albeit one with the different initial conditions {X(0,\pi(a))}.
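
(To verify that {X \circ \pi} is again a trajectory map, one can formally apply the chain rule in the time variable only, noting that {\pi} does not depend on {t}:

\displaystyle  \partial_t (X \circ \pi)(t,a) = \partial_t X(t,\pi(a)) = u( t, X(t,\pi(a)) ) = u( t, (X \circ \pi)(t,a) ),

so that (3) continues to hold.)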

Despite the popularity of the initial condition (4), we will try to keep conceptually separate the Eulerian space {{\bf R}^d_E} from the Lagrangian space {{\bf R}^d_L}, as they play different physical roles in the interpretation of the fluid; for instance, while the Euclidean metric {d\eta^2 = dx^1 dx^1 + \dots + dx^d dx^d} is an important feature of Eulerian space {{\bf R}^d_E}, it is not a geometrically natural structure to use in Lagrangian space {{\bf R}^d_L}. We have the following more general version of Exercise 8 from 254A Notes 2:

Exercise 1 Let {u: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_E} be smooth and bounded.

  • If {X_0: {\bf R}^d_L \rightarrow {\bf R}^d_E} is a smooth map, show that there exists a unique smooth trajectory map {X: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E} with initial condition {X(0,a) = X_0(a)} for all {a \in {\bf R}^d_L}.
  • Show that if {X_0} is a diffeomorphism and {t \in [0,T)}, then the map {X(t): a \mapsto X(t,a)} is also a diffeomorphism.

Remark 2 The first of the Euler equations (1) can now be written in the form

\displaystyle  \frac{d^2}{dt^2} X(t,a) = - \nabla p( t, X(t,a) ) \ \ \ \ \ (5)

which can be viewed as a continuum limit of Newton’s second law {m^{(a)} \frac{d^2}{dt^2} x^{(a)}(t) = F^{(a)}(t)}.
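
(Indeed, differentiating (3) once more in time and using the chain rule together with (3) again, one formally has

\displaystyle  \frac{d^2}{dt^2} X(t,a) = \partial_t u(t, X(t,a)) + (\partial_t X(t,a) \cdot \nabla) u(t, X(t,a)) = (\partial_t u + u \cdot \nabla u)(t, X(t,a)),

and the right-hand side equals {-\nabla p(t, X(t,a))} by the first equation of (1).)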

Call a diffeomorphism {Y: {\bf R}^d_L \rightarrow {\bf R}^d_E} (oriented) volume preserving if one has the equation

\displaystyle  \mathrm{det}( \nabla Y )(a) = 1 \ \ \ \ \ (6)

for all {a \in {\bf R}^d_L}, where the total differential {\nabla Y} is the {d \times d} matrix with entries {\partial_\alpha Y^i} for {\alpha = 1,\dots,d} and {i=1,\dots,d}, where {Y^1,\dots,Y^d:{\bf R}^d_L \rightarrow {\bf R}} are the components of {Y}. (If one wishes, one can also view {\nabla Y} as a linear transformation from the tangent space {T_a {\bf R}^d_L} of Lagrangian space at {a} to the tangent space {T_{Y(a)} {\bf R}^d_E} of Eulerian space at {Y(a)}.) Equivalently, {Y} is orientation preserving and one has a Jacobian-free change of variables formula

\displaystyle  \int_{{\bf R}^d_L} f( Y(a) )\ da = \int_{{\bf R}^d_E} f(x)\ dx

for all {f \in C_c({\bf R}^d_E \rightarrow {\bf R})}, which is in turn equivalent to {Y(E) \subset {\bf R}^d_E} having the same Lebesgue measure as {E} for any measurable set {E \subset {\bf R}^d_L}.
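
(This equivalence comes, formally, from the usual change of variables formula: for an orientation-preserving diffeomorphism {Y} one has

\displaystyle  \int_{{\bf R}^d_L} f( Y(a) )\, \mathrm{det}(\nabla Y)(a)\ da = \int_{{\bf R}^d_E} f(x)\ dx

for all {f \in C_c({\bf R}^d_E \rightarrow {\bf R})}, so the Jacobian factor becomes invisible precisely when (6) holds.)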

The divergence-free condition {\nabla \cdot u = 0} then can be nicely expressed in terms of volume-preserving properties of the trajectory maps {X}, in a manner which confirms the interpretation of this condition as an incompressibility condition on the fluid:

Lemma 3 Let {u: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_E} be smooth and bounded, let {X_0: {\bf R}^d_L \rightarrow {\bf R}^d_E} be a volume-preserving diffeomorphism, and let {X: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E} be the trajectory map. Then the following are equivalent:

  • {\nabla \cdot u = 0} on {[0,T) \times {\bf R}^d_E}.
  • {X(t): {\bf R}^d_L \rightarrow {\bf R}^d_E} is volume-preserving for all {t \in [0,T)}.

Proof: Since {X_0} is orientation-preserving, we see from continuity that {X(t)} is also orientation-preserving. Suppose first that {X(t)} is volume-preserving for all {t \in [0,T)}; then for any {f \in C^\infty_c({\bf R}^d_E \rightarrow {\bf R})} we have the conservation law

\displaystyle  \int_{{\bf R}^d_L} f( X(t,a) )\ da = \int_{{\bf R}^d_E} f(x)\ dx

for all {t \in [0,T)}. Differentiating in time using the chain rule and (3) we conclude that

\displaystyle  \int_{{\bf R}^d_L} (u(t) \cdot \nabla f)( X(t,a)) \ da = 0

for all {t \in [0,T)}, and hence by change of variables

\displaystyle  \int_{{\bf R}^d_E} (u(t) \cdot \nabla f)(x) \ dx = 0

which by integration by parts gives

\displaystyle  \int_{{\bf R}^d_E} (\nabla \cdot u(t,x)) f(x)\ dx = 0

for all {f \in C^\infty_c({\bf R}^d_E \rightarrow {\bf R})} and {t \in [0,T)}, so {u} is divergence-free.

To prove the converse implication, it is convenient to introduce the labels map {A:[0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_L}, defined by setting {A(t): {\bf R}^d_E \rightarrow {\bf R}^d_L} to be the inverse of the diffeomorphism {X(t): {\bf R}^d_L \rightarrow {\bf R}^d_E}, thus

\displaystyle  X(t, A(t, x)) = x

for all {(t,x) \in [0,T) \times {\bf R}^d_E}. By the implicit function theorem, {A} is smooth, and by differentiating the above equation in both time and space using (3) and the chain rule we see that

\displaystyle  D_t A(t,x) = 0

where {D_t} is the usual material derivative

\displaystyle  D_t := \partial_t + u \cdot \nabla \ \ \ \ \ (7)

acting on functions on {[0,T) \times {\bf R}^d_E}. If {u} is divergence-free, we have from integration by parts that

\displaystyle  \partial_t \int_{{\bf R}^d_E} \phi(t,x)\ dx = \int_{{\bf R}^d_E} D_t \phi(t,x)\ dx

for any test function {\phi: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}}. In particular, for any {g \in C^\infty_c({\bf R}^d_L \rightarrow {\bf R})}, we can calculate

\displaystyle \partial_t \int_{{\bf R}^d_E} g( A(t,x) )\ dx = \int_{{\bf R}^d_E} D_t (g(A(t,x)))\ dx

\displaystyle  = \int_{{\bf R}^d_E} 0\ dx

and hence

\displaystyle  \int_{{\bf R}^d_E} g(A(t,x))\ dx = \int_{{\bf R}^d_E} g(A(0,x))\ dx

for any {t \in [0,T)}. Since {X_0} is volume-preserving, so is {A(0)}, thus

\displaystyle  \int_{{\bf R}^d_E} g \circ A(t)\ dx = \int_{{\bf R}^d_L} g\ da.

Thus {A(t)} is volume-preserving, and hence {X(t)} is also. \Box

Exercise 4 Let {M: [0,T) \rightarrow \mathrm{GL}_d({\bf R})} be a continuously differentiable map from the time interval {[0,T)} to the general linear group {\mathrm{GL}_d({\bf R})} of invertible {d \times d} matrices. Establish Jacobi’s formula

\displaystyle  \partial_t \det(M(t)) = \det(M(t)) \mathrm{tr}( M(t)^{-1} \partial_t M(t) )

and use this and (6) to give an alternate proof of Lemma 3 that does not involve any integration.
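
(As a partial hint: formally, differentiating (3) in the label variables gives {\partial_t \nabla X(t,a) = (\nabla u)(t, X(t,a))\, \nabla X(t,a)}, so that Jacobi’s formula yields

\displaystyle  \partial_t \det( \nabla X(t,a) ) = \det( \nabla X(t,a) )\, (\nabla \cdot u)(t, X(t,a)),

which can then be compared with (6).)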

Remark 5 One can view the use of Lagrangian coordinates as an extension of the method of characteristics. Indeed, from the chain rule we see that for any smooth function {f: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}} of Eulerian spacetime, one has

\displaystyle  \frac{d}{dt} f(t,X(t,a)) = (D_t f)(t,X(t,a))

and hence any transport equation that in Eulerian coordinates takes the form

\displaystyle  D_t f = g

for smooth functions {f,g: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}} of Eulerian spacetime is equivalent to the ODE

\displaystyle  \frac{d}{dt} F = G

where {F,G: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}} are the smooth functions of Lagrangian spacetime defined by

\displaystyle  F(t,a) := f(t,X(t,a)); \quad G(t,a) := g(t,X(t,a)).
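
For instance, when {g=0}, this asserts that {F(t,a)} is constant in time, so that formally

\displaystyle  f(t, X(t,a)) = f(0, X(0,a)):

a scalar obeying the homogeneous transport equation {D_t f = 0} is simply carried along the particle trajectories.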

In this set of notes we recall some basic differential geometry notation, particularly with regards to pullbacks and Lie derivatives of differential forms and similar fields on manifolds such as {{\bf R}^d_E} and {{\bf R}^d_L}, and explore how the Euler equations look in this notation. Our discussion will be entirely formal in nature; we will assume that all functions have enough smoothness and decay at infinity to justify the relevant calculations. (It is possible to work rigorously in Lagrangian coordinates – see for instance the work of Ebin and Marsden – but we will not do so here.) As a general rule, Lagrangian coordinates tend to be somewhat less convenient to use than Eulerian coordinates for establishing the basic analytic properties of the Euler equations, such as local existence, uniqueness, and continuous dependence on the data; however, they are quite good at clarifying the more algebraic properties of these equations, such as conservation laws and the variational nature of the equations. It may well be that in the future we will be able to use the Lagrangian formalism more effectively on the analytic side of the subject also.

Remark 6 One can also write the Navier-Stokes equations in Lagrangian coordinates, but the equations are not expressed in a favourable form in these coordinates, as the Laplacian {\Delta} appearing in the viscosity term becomes replaced with a time-varying Laplace-Beltrami operator. As such, we will not discuss the Lagrangian coordinate formulation of Navier-Stokes here.


Note: this post is not required reading for this course, or for the sequel course in the winter quarter.

In Notes 2, we reviewed the classical construction of Leray of global weak solutions to the Navier-Stokes equations. We did not quite follow Leray’s original proof, in that the notes relied more heavily on the machinery of Littlewood-Paley projections, which have become increasingly common tools in modern PDE. On the other hand, we did use the same “exploiting compactness to pass to weakly convergent subsequence” strategy that is the standard one in the PDE literature used to construct weak solutions.

As I discussed in a previous post, the manipulation of sequences and their limits is analogous to a “cheap” version of nonstandard analysis in which one uses the Fréchet filter rather than an ultrafilter to construct the nonstandard universe. (The manipulation of generalised functions of Colombeau type can also be comfortably interpreted within this sort of cheap nonstandard analysis.) Augmenting the manipulation of sequences with the right to pass to subsequences whenever convenient is then analogous to a sort of “lazy” nonstandard analysis, in which the implied ultrafilter is never actually constructed as a “completed object”, but is instead lazily evaluated, in the sense that whenever membership of a given subsequence of the natural numbers in the ultrafilter needs to be determined, one either passes to that subsequence (thus placing it in the ultrafilter) or to its complement (placing it outside the ultrafilter). This process can be viewed as the initial portion of the transfinite induction that one usually uses to construct ultrafilters (as discussed using a voting metaphor in this post), except that there is generally no need in any given application to perform the induction for any uncountable ordinal (or indeed for most of the countable ordinals also).

On the other hand, it is also possible to work directly in the orthodox framework of nonstandard analysis when constructing weak solutions. This leads to an approach to the subject which is largely equivalent to the usual subsequence-based approach, though there are some minor technical differences (for instance, the subsequence approach occasionally requires one to work with separable function spaces, whereas in the ultrafilter approach the reliance on separability is largely eliminated, particularly if one imposes a strong notion of saturation on the nonstandard universe). The subject acquires a more “algebraic” flavour, as the quintessential analysis operation of taking a limit is replaced with the “standard part” operation, which is an algebra homomorphism. The notion of a sequence is replaced by the distinction between standard and nonstandard objects, and the need to pass to subsequences disappears entirely. The distinction between “bounded sequences” and “convergent sequences” is also largely eradicated, particularly when the space that the sequences ranged in enjoys some compactness properties on bounded sets. Moreover, in this framework, the notorious non-uniqueness features of weak solutions can be “blamed” on the non-uniqueness of the nonstandard extension of the standard universe (as well as on the multiple possible ways to construct nonstandard mollifications of the original standard PDE). However, many of these changes are largely cosmetic; switching from a subsequence-based theory to a nonstandard analysis-based theory does not, for instance, seem to bring one significantly closer to the global regularity problem for Navier-Stokes, but it could have been an alternate path for the historical development and presentation of the subject.

In any case, I would like to present below the fold this nonstandard analysis perspective, quickly translating the relevant components of real analysis, functional analysis, and distributional theory that we need to this perspective, and then use it to re-prove Leray’s theorem on existence of global weak solutions to Navier-Stokes.


I’ve just uploaded to the arXiv my paper “Embedding the Heisenberg group into a bounded dimensional Euclidean space with optimal distortion“, submitted to Revista Matematica Iberoamericana. This paper concerns the extent to which one can accurately embed the metric structure of the Heisenberg group

\displaystyle H := \begin{pmatrix} 1 & {\bf R} & {\bf R} \\ 0 & 1 & {\bf R} \\ 0 & 0 & 1 \end{pmatrix}

into Euclidean space, which we can write as {\{ [x,y,z]: x,y,z \in {\bf R} \}} with the notation

\displaystyle [x,y,z] := \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix}.

Here we give {H} the right-invariant Carnot-Carathéodory metric {d} coming from the right-invariant vector fields

\displaystyle X := \frac{\partial}{\partial x} + y \frac{\partial}{\partial z}; \quad Y := \frac{\partial}{\partial y}

but not from the commutator vector field

\displaystyle Z := [Y,X] = \frac{\partial}{\partial z}.

This gives {H} the geometry of a Carnot group. As observed by Semmes, it follows from the Carnot group differentiation theory of Pansu that there is no bilipschitz map from {(H,d)} to any Euclidean space {{\bf R}^D} or even to {\ell^2}, since such a map must be differentiable almost everywhere in the sense of Carnot groups; in particular, the derivative map annihilates {Z} almost everywhere, which is incompatible with being bilipschitz.

On the other hand, if one snowflakes the Heisenberg group by replacing the metric {d} with {d^{1-\varepsilon}} for some {0 < \varepsilon < 1}, then it follows from the general theory of Assouad on embedding snowflaked metrics of doubling spaces that {(H,d^{1-\varepsilon})} may be embedded in a bilipschitz fashion into {\ell^2}, or even to {{\bf R}^{D_\varepsilon}} for some {D_\varepsilon} depending on {\varepsilon}.

Of course, the distortion of this bilipschitz embedding must degenerate in the limit {\varepsilon \rightarrow 0}. From the work of Austin-Naor-Tessera and Naor-Neiman it follows that {(H,d^{1-\varepsilon})} may be embedded into {\ell^2} with a distortion of {O( \varepsilon^{-1/2} )}, but no better. The Naor-Neiman paper also embeds {(H,d^{1-\varepsilon})} into a finite-dimensional space {{\bf R}^D} with {D} independent of {\varepsilon}, but at the cost of worsening the distortion to {O(\varepsilon^{-1})}. They then posed the question of whether this worsening of the distortion is necessary.

The main result of this paper answers this question in the negative:

Theorem 1 There exists an absolute constant {D} such that {(H,d^{1-\varepsilon})} may be embedded into {{\bf R}^D} in a bilipschitz fashion with distortion {O(\varepsilon^{-1/2})} for any {0 < \varepsilon \leq 1/2}.

To motivate the proof of this theorem, let us first present a bilipschitz map {\Phi: {\bf R} \rightarrow \ell^2} from the snowflaked line {({\bf R},d_{\bf R}^{1-\varepsilon})} (with {d_{\bf R}} being the usual metric on {{\bf R}}) into complex Hilbert space {\ell^2({\bf C})}. The map is given explicitly as a Weierstrass type function

\displaystyle \Phi(x) := \sum_{k \in {\bf Z}} 2^{-\varepsilon k} (\phi_k(x) - \phi_k(0))

where for each {k}, {\phi_k: {\bf R} \rightarrow \ell^2} is the function

\displaystyle \phi_k(x) := 2^k e^{2\pi i x / 2^k} e_k.

Here {(e_k)_{k \in {\bf Z}}} is an orthonormal basis for {\ell^2({\bf C})}. The subtraction of the constant {\phi_k(0)} is purely in order to make the sum convergent as {k \rightarrow \infty}. If {x,y \in {\bf R}} are such that {2^{k_0-2} \leq d_{\bf R}(x,y) \leq 2^{k_0-1}} for some integer {k_0}, one can easily check the bounds

\displaystyle |2^{-\varepsilon k} (\phi_k(x) - \phi_k(y))| \lesssim d_{\bf R}(x,y)^{1-\varepsilon} \min( 2^{-(1-\varepsilon) (k_0-k)}, 2^{-\varepsilon (k-k_0)} )

with the lower bound

\displaystyle |2^{-\varepsilon k_0} (\phi_{k_0}(x) - \phi_{k_0}(y))| \gtrsim d_{\bf R}(x,y)^{1-\varepsilon}

at which point one finds that

\displaystyle d_{\bf R}(x,y)^{1-\varepsilon} \lesssim |\Phi(x) - \Phi(y)| \lesssim \varepsilon^{-1/2} d_{\bf R}(x,y)^{1-\varepsilon}

as desired.
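
(To see how the {\varepsilon^{-1/2}} factor arises: since the summands {2^{-\varepsilon k}(\phi_k(x) - \phi_k(y))} lie in the orthogonal directions {e_k}, Pythagoras’ theorem and the upper bounds above formally give

\displaystyle  |\Phi(x) - \Phi(y)|^2 \lesssim d_{\bf R}(x,y)^{2(1-\varepsilon)} \sum_{k \in {\bf Z}} \min( 2^{-2(1-\varepsilon)(k_0-k)}, 2^{-2\varepsilon(k-k_0)} ) \lesssim \varepsilon^{-1} d_{\bf R}(x,y)^{2(1-\varepsilon)},

the geometric series in {k} being {O(\varepsilon^{-1})} in the regime {0 < \varepsilon \leq 1/2}, say; the lower bound already follows from the single {k = k_0} term.)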

The key here was that each function {\phi_k} oscillated at a different spatial scale {2^k}, and the functions were all orthogonal to each other (so that the upper bound involved a factor of {\varepsilon^{-1/2}} rather than {\varepsilon^{-1}}). One can replicate this example for the Heisenberg group without much difficulty. Indeed, if we let {\Gamma := \{ [a,b,c]: a,b,c \in {\bf Z} \}} be the discrete Heisenberg group, then the nilmanifold {H/\Gamma} is a three-dimensional smooth compact manifold; thus, by the Whitney embedding theorem, it smoothly embeds into {{\bf R}^6}. This gives a smooth immersion {\phi: H \rightarrow {\bf R}^6} which is {\Gamma}-automorphic in the sense that {\phi(p\gamma) = \phi(p)} for all {p \in H} and {\gamma \in \Gamma}. If one then defines {\phi_k: H \rightarrow \ell^2 \otimes {\bf R}^6} to be the function

\displaystyle \phi_k(p) := 2^k \phi( \delta_{2^{-k}}(p) ) \otimes e_k

where {\delta_\lambda: H \rightarrow H} is the scaling map

\displaystyle \delta_\lambda([x,y,z]) := [\lambda x, \lambda y, \lambda^2 z],

then one can repeat the previous arguments to obtain the required bilipschitz bounds

\displaystyle  d(p,q)^{1-\varepsilon} \lesssim |\Phi(p) - \Phi(q)| \lesssim \varepsilon^{-1/2} d(p,q)^{1-\varepsilon}

for the function

\displaystyle \Phi(p) :=\sum_{k \in {\bf Z}} 2^{-\varepsilon k} (\phi_k(p) - \phi_k(0)).

To adapt this construction to bounded dimension, the main obstruction was the requirement that the {\phi_k} took values in orthogonal subspaces. But if one works things out carefully, it is enough to require the weaker orthogonality requirement

\displaystyle B( \phi_{k_0}, \sum_{k>k_0} 2^{-\varepsilon(k-k_0)} \phi_k ) = 0

for all {k_0 \in {\bf Z}}, where {B(\phi, \psi): H \rightarrow {\bf R}^2} is the bilinear form

\displaystyle B(\phi,\psi) := (X \phi \cdot X \psi, Y \phi \cdot Y \psi ).

One can then try to construct the {\phi_k: H \rightarrow {\bf R}^D} for bounded dimension {D} by an iterative argument. After some standard reductions, the problem becomes this (roughly speaking): given a smooth, slowly varying function {\psi: H \rightarrow {\bf R}^{D}} whose derivatives obey certain quantitative upper and lower bounds, construct a smooth oscillating function {\phi: H \rightarrow {\bf R}^{D}}, whose derivatives also obey certain quantitative upper and lower bounds, which obey the equation

\displaystyle B(\phi,\psi) = 0. \ \ \ \ \ (1)

 

We view this as an underdetermined system of differential equations for {\phi} (two equations in {D} unknowns; after some reductions, our {D} can be taken to be the explicit value {36}). The trivial solution {\phi=0} to this equation will be inadmissible for our purposes due to the lower bounds we will require on {\phi} (in order to obtain the quantitative immersion property mentioned previously, as well as for a stronger “freeness” property that is needed to close the iteration). Because this construction will need to be iterated, it will be essential that the regularity control on {\phi} is the same as that on {\psi}; one cannot afford to “lose derivatives” when passing from {\psi} to {\phi}.

This problem has some formal similarities with the isometric embedding problem (discussed for instance in this previous post), which can be viewed as the problem of solving an equation of the form {Q(\phi,\phi) = g}, where {(M,g)} is a Riemannian manifold and {Q} is the bilinear form

\displaystyle Q(\phi,\psi)_{ij} = \partial_i \phi \cdot \partial_j \psi.

The isometric embedding problem also has the key obstacle that naive attempts to solve the equation {Q(\phi,\phi)=g} iteratively can lead to an undesirable “loss of derivatives” that prevents one from iterating indefinitely. This obstacle was famously resolved by the Nash-Moser iteration scheme in which one alternates between perturbatively adjusting an approximate solution to improve the residual error term, and mollifying the resulting perturbation to counteract the loss of derivatives. The current equation (1) differs in some key respects from the isometric embedding equation {Q(\phi,\phi)=g}, in particular being linear in the unknown field {\phi} rather than quadratic; nevertheless the key obstacle is the same, namely that naive attempts to solve either equation lose derivatives. Our approach to solving (1) was inspired by the Nash-Moser scheme; in retrospect, I also found similarities with Uchiyama’s constructive proof of the Fefferman-Stein decomposition theorem, discussed in this previous post (and in this recent one).

To motivate this iteration, we first express {B(\phi,\psi)} using the product rule in a form that does not place derivatives directly on the unknown {\phi}:

\displaystyle B(\phi,\psi) = \left( W(\phi \cdot W \psi) - \phi \cdot WW \psi\right)_{W = X,Y} \ \ \ \ \ (2)
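
(Here we have simply used the Leibniz rule {W(\phi \cdot W \psi) = W\phi \cdot W\psi + \phi \cdot WW\psi} for the vector fields {W \in \{X,Y\}}, together with the definition {B(\phi,\psi) = (X\phi \cdot X\psi, Y\phi \cdot Y\psi)}.)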

 

This reveals that one can construct solutions {\phi} to (1) by solving the system of equations

\displaystyle \phi \cdot W \psi = \phi \cdot WW \psi = 0 \ \ \ \ \ (3)

 

for {W \in \{X, Y \}}. Because this system is zeroth order in {\phi}, this can easily be done by linear algebra (even in the presence of a forcing term {B(\phi,\psi)=F}) if one imposes a “freeness” condition (analogous to the notion of a free embedding in the isometric embedding problem) that {X \psi(p), Y \psi(p), XX \psi(p), YY \psi(p)} are linearly independent at each point {p}, which (together with some other technical conditions of a similar nature) one then adds to the list of upper and lower bounds required on {\psi} (with a related bound then imposed on {\phi}, in order to close the iteration). However, as mentioned previously, there is a “loss of derivatives” problem with this construction: due to the presence of the differential operators {W} in (3), a solution {\phi} constructed by this method can only be expected to have two degrees less regularity than {\psi} at best, which makes this construction unsuitable for iteration.

To get around this obstacle (which also prominently appears when solving (linearisations of) the isometric embedding equation {Q(\phi,\phi)=g}), we instead first construct a smooth, low-frequency solution {\phi_{\leq N_0} \colon H \rightarrow {\bf R}^{D}} to a low-frequency equation

\displaystyle B( \phi_{\leq N_0}, P_{\leq N_0} \psi ) = 0 \ \ \ \ \ (4)

 

where {P_{\leq N_0} \psi} is a mollification of {\psi} (of Littlewood-Paley type) applied at a small spatial scale {1/N_0} for some {N_0}, and then gradually relax the frequency cutoff {P_{\leq N_0}} to deform this low frequency solution {\phi_{\leq N_0}} to a solution {\phi} of the actual equation (1).

We will construct the low-frequency solution {\phi_{\leq N_0}} rather explicitly, using the Whitney embedding theorem to construct an initial oscillating map {f} into a very low dimensional space {{\bf R}^6}, composing it with a Veronese type embedding into a slightly larger dimensional space {{\bf R}^{27}} to obtain a required “freeness” property, and then composing further with a slowly varying isometry {U(p) \colon {\bf R}^{27} \rightarrow {\bf R}^{36}} depending on {P_{\leq N_0}} and constructed by a quantitative topological lemma (relying ultimately on the vanishing of the first few homotopy groups of high-dimensional spheres), in order to obtain the required orthogonality (4). (This sort of “quantitative null-homotopy” was first proposed by Gromov, with some recent progress on optimal bounds by Chambers-Manin-Weinberger and by Chambers-Dotterer-Manin-Weinberger, but we will not need these more advanced results here, as one can rely on the classical qualitative vanishing {\pi_k(S^d)=0} for {k < d} together with a compactness argument to obtain (ineffective) quantitative bounds, which suffice for this application).

To perform the deformation of {\phi_{\leq N_0}} into {\phi}, we must solve what is essentially the linearised equation

\displaystyle B( \dot \phi, \psi ) + B( \phi, \dot \psi ) = 0 \ \ \ \ \ (5)

 

of (1) when {\phi}, {\psi} (viewed as low frequency functions) are both being deformed at some rates {\dot \phi, \dot \psi} (which should be viewed as high frequency functions). To avoid losing derivatives, the magnitude of the deformation {\dot \phi} in {\phi} should not be significantly greater than the magnitude of the deformation {\dot \psi} in {\psi}, when measured in the same function space norms.

As before, if one directly solves the difference equation (5) using a naive application of (2) with {B(\phi,\dot \psi)} treated as a forcing term, one will lose at least one derivative of regularity when passing from {\dot \psi} to {\dot \phi}. However, observe that (2) (and the symmetry {B(\phi, \dot \psi) = B(\dot \psi,\phi)}) can be used to obtain the identity

\displaystyle B( \dot \phi, \psi ) + B( \phi, \dot \psi ) = \left( W(\dot \phi \cdot W \psi + \dot \psi \cdot W \phi) - (\dot \phi \cdot WW \psi + \dot \psi \cdot WW \phi)\right)_{W = X,Y} \ \ \ \ \ (6)

 

and then one can solve (5) by solving the system of equations

\displaystyle \dot \phi \cdot W \psi = - \dot \psi \cdot W \phi

for {W \in \{X,XX,Y,YY\}}. The key point here is that this system is zeroth order in both {\dot \phi} and {\dot \psi}, so one can solve this system without losing any derivatives when passing from {\dot \psi} to {\dot \phi}; compare this situation with that of the superficially similar system

\displaystyle \dot \phi \cdot W \psi = - \phi \cdot W \dot \psi

that one would obtain from naively linearising (3) without exploiting the symmetry of {B}. There is still however one residual “loss of derivatives” problem arising from the presence of a differential operator {W} on the {\phi} term, which prevents one from directly evolving this iteration scheme in time without losing regularity in {\phi}. It is here that we borrow the final key idea of the Nash-Moser scheme, which is to replace {\phi} by a mollified version {P_{\leq N} \phi} of itself (where the projection {P_{\leq N}} depends on the time parameter). This creates an error term in (5), but it turns out that this error term is quite small and smooth (being a “high-high paraproduct” of {\nabla \phi} and {\nabla\psi}, it ends up being far more regular than either {\phi} or {\psi}, even with the presence of the derivatives) and can be iterated away provided that the initial frequency cutoff {N_0} is large and the function {\psi} has a fairly high (but finite) amount of regularity (we will eventually use the Hölder space {C^{20,\alpha}} on the Heisenberg group to measure this).

We now turn to the local existence theory for the initial value problem for the incompressible Euler equations

\displaystyle  \partial_t u + (u \cdot \nabla) u = - \nabla p \ \ \ \ \ (1)

\displaystyle  \nabla \cdot u = 0

\displaystyle  u(0,x) = u_0(x).

For sake of discussion we will just work in the non-periodic domain {{\bf R}^d}, {d \geq 2}, although the arguments here can be adapted without much difficulty to the periodic setting. We will only work with solutions in which the pressure {p} is normalised in the usual fashion:

\displaystyle  p = - \Delta^{-1} \nabla \cdot \nabla \cdot (u \otimes u). \ \ \ \ \ (2)
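
This normalisation arises by (formally) taking the divergence of the first equation in (1) and using {\nabla \cdot u = 0} to write {(u \cdot \nabla) u = \nabla \cdot (u \otimes u)}, giving

\displaystyle  -\Delta p = \nabla \cdot \nabla \cdot (u \otimes u),

which yields (2) after (formally) inverting the Laplacian.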

Formally, the Euler equations (with normalised pressure) arise as the vanishing viscosity limit {\nu \rightarrow 0} of the Navier-Stokes equations

\displaystyle  \partial_t u + (u \cdot \nabla) u = - \nabla p + \nu \Delta u \ \ \ \ \ (3)

\displaystyle  \nabla \cdot u = 0

\displaystyle  p = - \Delta^{-1} \nabla \cdot \nabla \cdot (u \otimes u)

\displaystyle  u(0,x) = u_0(x)

that was studied in previous notes. However, because most of the bounds established in previous notes, either on the lifespan {T_*} of the solution or on the size of the solution itself, depended on {\nu}, it is not immediate how to justify passing to the limit and obtain either a strong well-posedness theory or a weak solution theory for the limiting equation (1). (For instance, weak solutions to the Navier-Stokes equations (or the approximate solutions used to create such weak solutions) have {\nabla u} lying in {L^2_{t,loc} L^2_x} for {\nu>0}, but the bound on the norm is {O(\nu^{-1/2})} and so one could lose this regularity in the limit {\nu \rightarrow 0}, at which point it is not clear how to ensure that the nonlinear term {u_j u} still converges in the sense of distributions to what one expects.)

Nevertheless, by carefully using the energy method (which we will do loosely following an approach of Bertozzi and Majda), it is still possible to obtain local-in-time estimates on (high-regularity) solutions to (3) that are uniform in the limit {\nu \rightarrow 0}. Such a priori estimates can then be combined with a number of variants of these estimates to obtain a satisfactory local well-posedness theory for the Euler equations. Among other things, we will be able to establish the Beale-Kato-Majda criterion – smooth solutions to the Euler (or Navier-Stokes) equations can be continued indefinitely unless the integral

\displaystyle  \int_0^{T_*} \| \omega(t) \|_{L^\infty_x( {\bf R}^d \rightarrow \wedge^2 {\bf R}^d )}\ dt

becomes infinite at the final time {T_*}, where {\omega := \nabla \wedge u} is the vorticity field. The vorticity has the important property that it is transported by the Euler flow, and in two spatial dimensions it can be used to establish global regularity for both the Euler and Navier-Stokes equations. (Unfortunately, in three and higher dimensions the phenomenon of vortex stretching has frustrated all attempts to date to use the vorticity transport property to establish global regularity of either equation.)
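
For instance, in two dimensions the vorticity can be identified with the scalar {\omega = \partial_1 u_2 - \partial_2 u_1}, and for the Euler equations it formally obeys the transport equation

\displaystyle  \partial_t \omega + (u \cdot \nabla) \omega = 0,

so that {\| \omega(t) \|_{L^\infty_x}} is conserved and the integral above stays finite on any bounded time interval. In three dimensions one instead has {D_t \omega = (\omega \cdot \nabla) u}, and it is precisely the stretching term {(\omega \cdot \nabla) u} on the right-hand side that blocks this argument.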

There is a rather different approach to establishing local well-posedness for the Euler equations, which relies on the vorticity-stream formulation of these equations. This will be discussed in a later set of notes.


In the previous set of notes we developed a theory of “strong” solutions to the Navier-Stokes equations. This theory, based around viewing the Navier-Stokes equations as a perturbation of the linear heat equation, has many attractive features: solutions exist locally, are unique, depend continuously on the initial data, have a high degree of regularity, can be continued in time as long as a sufficiently high regularity norm is under control, and tend to enjoy the same sort of conservation laws that classical solutions do. However, it is a major open problem as to whether these solutions can be extended to be (forward) global in time, because the norms that we know how to control globally in time do not have high enough regularity to be useful for continuing the solution. Also, the theory becomes degenerate in the inviscid limit {\nu \rightarrow 0}.

However, it is possible to construct “weak” solutions which lack many of the desirable features of strong solutions (notably, uniqueness, propagation of regularity, and conservation laws) but can often be constructed globally in time even when one is unable to do so for strong solutions. Broadly speaking, one usually constructs weak solutions by some sort of “compactness method”, which can generally be described as follows.

  1. Construct a sequence of “approximate solutions” to the desired equation, for instance by developing a well-posedness theory for some “regularised” approximation to the original equation. (This theory often follows similar lines to those in the previous set of notes, for instance using such tools as the contraction mapping theorem to construct the approximate solutions.)
  2. Establish some uniform bounds (over appropriate time intervals) on these approximate solutions, even in the limit as an approximation parameter is sent to zero. (Uniformity is key; non-uniform bounds are often easy to obtain if one puts enough “mollification”, “hyper-dissipation”, or “discretisation” in the approximating equation.)
  3. Use some sort of “weak compactness” (e.g., the Banach-Alaoglu theorem, the Arzela-Ascoli theorem, or the Rellich compactness theorem) to extract a subsequence of approximate solutions that converge (in a topology weaker than that associated to the available uniform bounds) to a limit. (Note that there is no reason a priori to expect such limit points to be unique, or to have any regularity properties beyond that implied by the available uniform bounds.)
  4. Show that this limit solves the original equation in a suitable weak sense.

The quality of these weak solutions is very much determined by the type of uniform bounds one can obtain on the approximate solution; the stronger these bounds are, the more properties one can obtain on these weak solutions. For instance, if the approximate solutions enjoy an energy identity leading to uniform energy bounds, then (by using tools such as Fatou’s lemma) one tends to obtain energy inequalities for the resulting weak solution; but if one somehow is able to obtain uniform bounds in a higher regularity norm than the energy then one can often recover the full energy identity. If the uniform bounds are at the regularity level needed to obtain well-posedness, then one generally expects to upgrade the weak solution to a strong solution. (This phenomenon is often formalised through weak-strong uniqueness theorems, which we will discuss later in these notes.) Thus we see that as far as attacking global regularity is concerned, both the theory of strong solutions and the theory of weak solutions encounter essentially the same obstacle, namely the inability to obtain uniform bounds on (exact or approximate) solutions at high regularities (and at arbitrary times).
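
For the Navier-Stokes equations, for instance, the basic uniform bound available at the energy level is the energy inequality

\displaystyle  \frac{1}{2} \| u(t) \|_{L^2_x}^2 + \nu \int_0^t \| \nabla u(s) \|_{L^2_x}^2\ ds \leq \frac{1}{2} \| u_0 \|_{L^2_x}^2,

formally obtained by testing the equation against {u} and integrating by parts; this bound is uniform in the regularisation parameter, and is inherited by the weak limit as an inequality (rather than an identity) via tools such as Fatou’s lemma.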

For simplicity, we will focus our discussion in these notes on finite energy weak solutions on {{\bf R}^d}. There is a completely analogous theory for periodic weak solutions on {{\bf R}^d} (or equivalently, weak solutions on the torus {{\bf R}^d/{\bf Z}^d}), which we will leave to the interested reader.

In recent years, a completely different way to construct weak solutions to the Navier-Stokes or Euler equations has been developed that are not based on the above compactness methods, but instead based on techniques of convex integration. These will be discussed in a later set of notes.


We now begin the rigorous theory of the incompressible Navier-Stokes equations

\displaystyle  	\partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p 	\ \ \ \ \ (1)

\displaystyle  \nabla \cdot u = 0,

where {\nu>0} is a given constant (the kinematic viscosity, or viscosity for short), {u: I \times {\bf R}^d \rightarrow {\bf R}^d} is an unknown vector field (the velocity field), and {p: I \times {\bf R}^d \rightarrow {\bf R}} is an unknown scalar field (the pressure field). Here {I} is a time interval, usually of the form {[0,T]} or {[0,T)}. We will either be interested in spatially decaying situations, in which {u(t,x)} decays to zero as {x \rightarrow \infty}, or {{\bf Z}^d}-periodic (or periodic for short) settings, in which one has {u(t, x+n) = u(t,x)} for all {n \in {\bf Z}^d}. (One can also require the pressure {p} to be periodic as well; this brings up a small subtlety in the uniqueness theory for these equations, which we will address later in this set of notes.) As is usual, we abuse notation by identifying a {{\bf Z}^d}-periodic function on {{\bf R}^d} with a function on the torus {{\bf R}^d/{\bf Z}^d}.

In order for the system (1) to even make sense, one requires some level of regularity on the unknown fields {u,p}; this turns out to be a relatively important technical issue that will require some attention later in this set of notes, and we will end up transforming (1) into other forms that are more suitable for lower regularity candidate solutions. Our focus here will be on local existence of these solutions in a short time interval {[0,T]} or {[0,T)}, for some {T>0}. (One could in principle also consider solutions that extend to negative times, but it turns out that the equations are not time-reversible, and the forward evolution is significantly more natural to study than the backwards one.) The study of the Euler equations, in which {\nu=0}, will be deferred to subsequent lecture notes.

As the unknown fields involve a time parameter {t}, and the first equation of (1) involves time derivatives of {u}, the system (1) should be viewed as describing an evolution for the velocity field {u}. (As we shall see later, the pressure {p} is not really an independent dynamical field, as it can essentially be expressed in terms of the velocity field without requiring any differentiation or integration in time.) As such, the natural question to study for this system is the initial value problem, in which an initial velocity field {u_0: {\bf R}^d \rightarrow {\bf R}^d} is specified, and one wishes to locate a solution {(u,p)} to the system (1) with initial condition

\displaystyle  	u(0,x) = u_0(x) 	\ \ \ \ \ (2)

for {x \in {\bf R}^d}. Of course, in order for this initial condition to be compatible with the second equation in (1), we need the compatibility condition

\displaystyle  	\nabla \cdot u_0 = 0 	\ \ \ \ \ (3)

and one should also impose some regularity, decay, and/or periodicity hypotheses on {u_0} in order to be compatible with corresponding level of regularity etc. on the solution {u}.

The fundamental questions in the local theory of an evolution equation are that of existence, uniqueness, and continuous dependence. In the context of the Navier-Stokes equations, these questions can be phrased (somewhat broadly) as follows:

  • (a) (Local existence) Given suitable initial data {u_0}, does there exist a solution {(u,p)} to the above initial value problem that exists for some time {T>0}? What can one say about the time {T} of existence? How regular is the solution?
  • (b) (Uniqueness) Is it possible to have two solutions {(u,p), (u',p')} of a certain regularity class to the same initial value problem on a common time interval {[0,T)}? To what extent does the answer to this question depend on the regularity assumed on one or both of the solutions? Does one need to normalise the solutions beforehand in order to obtain uniqueness?
  • (c) (Continuous dependence on data) If one perturbs the initial conditions {u_0} by a small amount, what happens to the solution {(u,p)} and to the time of existence {T}? (This question tends to only be sensible once one has a reasonable uniqueness theory.)

The answers to these questions tend to be more complicated than a simple “Yes” or “No”; for instance, they can depend on the precise regularity hypotheses one wishes to impose on the data and on the solution, and even on exactly how one interprets the concept of a “solution”. However, once one settles on such a set of hypotheses, it generally happens that one either gets a “strong” theory (in which one has existence, uniqueness, and continuous dependence on the data), a “weak” theory (in which one has existence of somewhat low-quality solutions, but with only limited uniqueness results (or even some spectacular failures of uniqueness) and almost no continuous dependence on data), or no satisfactory theory whatsoever. In the former case, we say (roughly speaking) that the initial value problem is locally well-posed, and one can then try to build upon the theory to explore more interesting topics such as global existence and asymptotics, classifying potential blowup, rigorous justification of conservation laws, and so forth. With a weak local theory, it becomes much more difficult to address these latter sorts of questions, and there are serious analytic pitfalls that one could fall into if one tries too strenuously to treat weak solutions as if they were strong. (For instance, conservation laws that are rigorously justified for strong, high-regularity solutions may well fail for weak, low-regularity ones.) Also, even if one is primarily interested in solutions at one level of regularity, the well-posedness theory at another level of regularity can be very helpful; for instance, if one is interested in smooth solutions in {{\bf R}^d}, it turns out that the well-posedness theory at the critical regularity of {\dot H^{\frac{d}{2}-1}({\bf R}^d)} can be used to establish globally smooth solutions from small initial data. As such, it can become quite important to know what kind of local theory one can obtain for a given equation.

This set of notes will focus on the “strong” theory, in which a substantial amount of regularity is assumed in the initial data and solution, giving a satisfactory (albeit largely local-in-time) well-posedness theory. “Weak” solutions will be considered in later notes.

The Navier-Stokes equations are not the simplest of partial differential equations to study, in part because they are an amalgam of three more basic equations, which behave rather differently from each other (for instance the first equation is nonlinear, while the latter two are linear):

  • (a) Transport equations such as {\partial_t u + (u \cdot \nabla) u = 0}.
  • (b) Diffusion equations (or heat equations) such as {\partial_t u = \nu \Delta u}.
  • (c) Systems such as {v = F - \nabla p}, {\nabla \cdot v = 0}, which (for want of a better name) we will call Leray systems; a formal solution is sketched just after this list.
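
(Formally, and assuming enough decay, a Leray system can be solved by taking divergences: applying {\nabla \cdot} to {v = F - \nabla p} and using {\nabla \cdot v = 0} gives {\Delta p = \nabla \cdot F}, so that

\displaystyle  p = \Delta^{-1} \nabla \cdot F, \quad v = F - \nabla \Delta^{-1} \nabla \cdot F;

the operator {F \mapsto F - \nabla \Delta^{-1} \nabla \cdot F}, often called the Leray projection, thus recovers the divergence-free part of {F}.)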

Accordingly, we will devote some time to getting some preliminary understanding of the linear diffusion and Leray systems before returning to the theory for the Navier-Stokes equation. Transport systems will be discussed further in subsequent notes; in this set of notes, we will instead focus on a more basic example of nonlinear equations, namely the first-order ordinary differential equation

\displaystyle  	\partial_t u = F(u) 	\ \ \ \ \ (4)

where {u: I \rightarrow V} takes values in some finite-dimensional (real or complex) vector space {V} on some time interval {I}, and {F: V \rightarrow V} is a given linear or nonlinear function. (Here, we use “interval” to denote a connected non-empty subset of {{\bf R}}; in particular, we allow intervals to be half-infinite or infinite, or to be open, closed, or half-open.) Fundamental results in this area include the Picard existence and uniqueness theorem, the Duhamel formula, and Grönwall’s inequality; they will serve as motivation for the approach to local well-posedness that we will adopt in this set of notes. (There are other ways to construct strong or weak solutions for Navier-Stokes and Euler equations, which we will discuss in later notes.)

A key role in our treatment here will be played by the fundamental theorem of calculus (in various forms and variations). Roughly speaking, this theorem, and its variants, allow us to recast differential equations (such as (1) or (4)) as integral equations. Such integral equations are less tractable algebraically than their differential counterparts (for instance, they are not ideal for verifying conservation laws), but are significantly more convenient for well-posedness theory, basically because integration tends to increase the regularity of a function, while differentiation reduces it. (Indeed, the problem of “losing derivatives”, or more precisely “losing regularity”, is a key obstacle that one often has to address when trying to establish well-posedness for PDE, particularly those that are quite nonlinear and with rough initial data, though for nonlinear parabolic equations such as Navier-Stokes the obstacle is not as serious as it is for some other PDE, due to the smoothing effects of the heat equation.)
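
For the model ODE (4), for instance, the fundamental theorem of calculus converts the initial value problem with {u(0) = u_0} into the equivalent integral equation

\displaystyle  u(t) = u_0 + \int_0^t F(u(s))\ ds,

to which contraction mapping arguments can be applied; the analogous reformulation of (1), with the heat propagator {e^{\nu t \Delta}} playing the role that the identity plays here, leads to what is often called the mild or Duhamel formulation of the Navier-Stokes equations.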

One weakness of the methods deployed here is that the quantitative bounds produced deteriorate to the point of uselessness in the inviscid limit {\nu \rightarrow 0}, rendering these techniques unsuitable for analysing the Euler equations in which {\nu=0}. However, some of the methods developed in later notes have bounds that remain uniform in the {\nu \rightarrow 0} limit, allowing one to also treat the Euler equations.

In this and subsequent set of notes, we use the following asymptotic notation (a variant of Vinogradov notation that is commonly used in PDE and harmonic analysis). The statement {X \lesssim Y}, {Y \gtrsim X}, or {X = O(Y)} will be used to denote an estimate of the form {|X| \leq CY} (or equivalently {Y \geq C^{-1} |X|}) for some constant {C}, and {X \sim Y} will be used to denote the estimates {X \lesssim Y \lesssim X}. If the constant {C} depends on other parameters (such as the dimension {d}), this will be indicated by subscripts, thus for instance {X \lesssim_d Y} denotes the estimate {|X| \leq C_d Y} for some {C_d} depending on {d}.


This coming fall quarter, I am teaching a class on topics in the mathematical theory of incompressible fluid equations, focusing particularly on the incompressible Euler and Navier-Stokes equations. These two equations are by no means the only equations used to model fluids, but I will focus on these two equations in this course to narrow the focus down to something manageable. I have not fully decided on the choice of topics to cover in this course, but I would probably begin with some core topics such as local well-posedness theory and blowup criteria, conservation laws, and construction of weak solutions, then move on to some topics such as boundary layers and the Prandtl equations, the Euler-Poincare-Arnold interpretation of the Euler equations as an infinite dimensional geodesic flow, and some discussion of the Onsager conjecture. I will probably also continue to more advanced and recent topics in the winter quarter.

In this initial set of notes, we begin by reviewing the physical derivation of the Euler and Navier-Stokes equations from the first principles of Newtonian mechanics, and specifically from Newton’s famous three laws of motion. Strictly speaking, this derivation is not needed for the mathematical analysis of these equations, which can be viewed if one wishes as an arbitrarily chosen system of partial differential equations without any physical motivation; however, I feel that the derivation sheds some insight and intuition on these equations, and is also worth knowing on purely intellectual grounds regardless of its mathematical consequences. I also find it instructive to actually see the journey from Newton’s law

\displaystyle F = ma

to the seemingly rather different-looking law

\displaystyle \partial_t u + (u \cdot \nabla) u = -\nabla p + \nu \Delta u

\displaystyle \nabla \cdot u = 0

for incompressible Navier-Stokes (or, if one drops the viscosity term {\nu \Delta u}, the Euler equations).

Our discussion in this set of notes is physical rather than mathematical, and so we will not be working at mathematical levels of rigour and precision. In particular we will be fairly casual about interchanging summations, limits, and integrals, we will manipulate approximate identities {X \approx Y} as if they were exact identities (e.g., by differentiating both sides of the approximate identity), and we will not attempt to verify any regularity or convergence hypotheses in the expressions being manipulated. (The same holds for the exercises in this text, which also do not need to be justified at mathematical levels of rigour.) Of course, once we resume the mathematical portion of this course in subsequent notes, such issues will be an important focus of careful attention. This is a basic division of labour in mathematical modeling: non-rigorous heuristic reasoning is used to derive a mathematical model from physical (or other “real-life”) principles, but once a precise model is obtained, the analysis of that model should be completely rigorous if at all possible (even if this requires applying the model to regimes which do not correspond to the original physical motivation of that model). See the discussion by John Ball quoted at the end of these slides of Gero Friesecke for an expansion of these points.

Note: our treatment here will differ slightly from that presented in many fluid mechanics texts, in that it will emphasise first-principles derivations from many-particle systems, rather than relying on bulk laws of physics, such as the laws of thermodynamics, which we will not cover here. (However, the derivations from bulk laws tend to be more robust, in that they are not as reliant on assumptions about the particular interactions between particles. In particular, the physical hypotheses we assume in this post are probably quite a bit stronger than the minimal assumptions needed to justify the Euler or Navier-Stokes equations, which can hold even in situations in which one or more of the hypotheses assumed here break down.)


We now approach conformal maps from yet another perspective. Given an open subset {U} of the complex numbers {{\bf C}}, define a univalent function on {U} to be a holomorphic function {f: U \rightarrow {\bf C}} that is also injective. We will primarily be studying this concept in the case when {U} is the unit disk {D(0,1) := \{ z \in {\bf C}: |z| < 1 \}}.

Clearly, a univalent function {f: D(0,1) \rightarrow {\bf C}} on the unit disk is a conformal map from {D(0,1)} to the image {f(D(0,1))}; in particular, {f(D(0,1))} is simply connected, and not all of {{\bf C}} (since otherwise the inverse map {f^{-1}: {\bf C} \rightarrow D(0,1)} would violate Liouville’s theorem). In the converse direction, the Riemann mapping theorem tells us that every open simply connected proper subset {V \subsetneq {\bf C}} of the complex numbers is the image of a univalent function on {D(0,1)}. Furthermore, if {V} contains the origin, then the univalent function {f: D(0,1) \rightarrow {\bf C}} with this image becomes unique once we normalise {f(0) = 0} and {f'(0) > 0}. Thus the Riemann mapping theorem provides a one-to-one correspondence between open simply connected proper subsets of the complex plane containing the origin, and univalent functions {f: D(0,1) \rightarrow {\bf C}} with {f(0)=0} and {f'(0)>0}. We will focus particular attention on the univalent functions {f: D(0,1) \rightarrow {\bf C}} with the normalisation {f(0)=0} and {f'(0)=1}; such functions will be called schlicht functions.

One basic example of a univalent function on {D(0,1)} is the Cayley transform {z \mapsto \frac{1+z}{1-z}}, which is a Möbius transformation from {D(0,1)} to the right half-plane {\{ \mathrm{Re}(z) > 0 \}}. (The slight variant {z \mapsto \frac{1-z}{1+z}} is also referred to as the Cayley transform, as is the closely related map {z \mapsto \frac{z-i}{z+i}}, which maps {D(0,1)} to the upper half-plane.) One can square this map to obtain a further univalent function {z \mapsto \left( \frac{1+z}{1-z} \right)^2}, which now maps {D(0,1)} to the complex numbers with the negative real axis {(-\infty,0]} removed. One can normalise this function to be schlicht to obtain the Koebe function

\displaystyle  f(z) := \frac{1}{4}\left( \left( \frac{1+z}{1-z} \right)^2 - 1\right) = \frac{z}{(1-z)^2}, \ \ \ \ \ (1)

which now maps {D(0,1)} to the complex numbers with the half-line {(-\infty,-1/4]} removed. A little more generally, for any {\theta \in {\bf R}} we have the rotated Koebe function

\displaystyle  f(z) := \frac{z}{(1 - e^{i\theta} z)^2} \ \ \ \ \ (2)

that is a schlicht function that maps {D(0,1)} to the complex numbers with the half-line {\{ -re^{-i\theta}: r \geq 1/4\}} removed.

Every schlicht function {f: D(0,1) \rightarrow {\bf C}} has a convergent Taylor expansion

\displaystyle  f(z) = a_1 z + a_2 z^2 + a_3 z^3 + \dots

for some complex coefficients {a_1,a_2,\dots} with {a_1=1}. For instance, the Koebe function has the expansion

\displaystyle  f(z) = z + 2 z^2 + 3 z^3 + \dots = \sum_{n=1}^\infty n z^n

and similarly the rotated Koebe function has the expansion

\displaystyle  f(z) = z + 2 e^{i\theta} z^2 + 3 e^{2i\theta} z^3 + \dots = \sum_{n=1}^\infty n e^{i(n-1)\theta} z^n.

Intuitively, the Koebe function and its rotations should be the “largest” schlicht functions available. This is formalised by the famous Bieberbach conjecture, which asserts that for any schlicht function, the coefficients {a_n} should obey the bound {|a_n| \leq n} for all {n}. After a large number of partial results, this conjecture was eventually solved by de Branges; see for instance this survey of Korevaar or this survey of Koepf for a history.
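As a quick sanity check (a small sympy sketch, offered only as an illustration), one can confirm that the Koebe function attains the Bieberbach bound {|a_n| \leq n} with equality:

```python
import sympy as sp

z = sp.symbols('z')
koebe = z / (1 - z)**2
# Taylor coefficients a_1, ..., a_7 of the Koebe function; they equal 1, 2, ..., 7,
# so the bound |a_n| <= n is attained with equality.
taylor = sp.series(koebe, z, 0, 8).removeO()
print([taylor.coeff(z, n) for n in range(1, 8)])
```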

It turns out that to resolve these sorts of questions, it is convenient to restrict attention to schlicht functions {g: D(0,1) \rightarrow {\bf C}} that are odd, thus {g(-z)=-g(z)} for all {z}, and the Taylor expansion now reads

\displaystyle  g(z) = b_1 z + b_3 z^3 + b_5 z^5 + \dots

for some complex coefficients {b_1,b_3,\dots} with {b_1=1}. One can transform a general schlicht function {f: D(0,1) \rightarrow {\bf C}} to an odd schlicht function {g: D(0,1) \rightarrow {\bf C}} by observing that the function {f(z^2)/z^2: D(0,1) \rightarrow {\bf C}}, after removing the singularity at zero, is a non-zero function that equals {1} at the origin, and thus (as {D(0,1)} is simply connected) has a unique holomorphic square root {(f(z^2)/z^2)^{1/2}} that also equals {1} at the origin. If one then sets

\displaystyle  g(z) := z (f(z^2)/z^2)^{1/2} \ \ \ \ \ (3)

it is not difficult to verify that {g} is an odd schlicht function which additionally obeys the equation

\displaystyle  f(z^2) = g(z)^2. \ \ \ \ \ (4)

Conversely, given an odd schlicht function {g}, the formula (4) uniquely determines a schlicht function {f}.

For instance, if {f} is the Koebe function (1), {g} becomes

\displaystyle  g(z) = \frac{z}{1-z^2} = z + z^3 + z^5 + \dots, \ \ \ \ \ (5)

which maps {D(0,1)} to the complex numbers with the two slits {\{ \pm iy: y \geq 1/2 \}} removed, and if {f} is the rotated Koebe function (2), {g} becomes

\displaystyle  g(z) = \frac{z}{1- e^{i\theta} z^2} = z + e^{i\theta} z^3 + e^{2i\theta} z^5 + \dots. \ \ \ \ \ (6)
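For the curious, here is a small sympy sketch (not needed for the discussion) checking the square-root transform (3) and the identity (4) on the Koebe function, recovering the expansion (5):

```python
import sympy as sp

z = sp.symbols('z')
f = z / (1 - z)**2                                   # Koebe function (1)
# the branch of (f(z^2)/z^2)^{1/2} equal to 1 at the origin, as a power series
h = sp.series(sp.sqrt(f.subs(z, z**2) / z**2), z, 0, 8).removeO()
g = sp.expand(z * h)                                  # the odd function (3)
print(g)                                              # z + z**3 + z**5 + z**7, matching (5)
# check f(z^2) = g(z)^2 up to the order of truncation
print(sp.series(g**2 - f.subs(z, z**2), z, 0, 8))     # O(z**8)
```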

De Branges established the Bieberbach conjecture by first proving an analogous conjecture for odd schlicht functions known as Robertson’s conjecture. More precisely, we have

Theorem 1 (de Branges’ theorem) Let {n \geq 1} be a natural number.

  • (i) (Robertson conjecture) If {g(z) = b_1 z + b_3 z^3 + b_5 z^5 + \dots} is an odd schlicht function, then

    \displaystyle  \sum_{k=1}^n |b_{2k-1}|^2 \leq n.

  • (ii) (Bieberbach conjecture) If {f(z) = a_1 z + a_2 z^2 + a_3 z^3 + \dots} is a schlicht function, then

    \displaystyle  |a_n| \leq n.

It is easy to see that the Robertson conjecture for a given value of {n} implies the Bieberbach conjecture for the same value of {n}. Indeed, if {f(z) = a_1 z + a_2 z^2 + a_3 z^3 + \dots} is schlicht, and {g(z) = b_1 z + b_3 z^3 + b_5 z^5 + \dots} is the odd schlicht function given by (3), then from extracting the {z^{2n}} coefficient of (4) we obtain a formula

\displaystyle  a_n = \sum_{j=1}^n b_{2j-1} b_{2(n+1-j)-1}

for the coefficients of {f} in terms of the coefficients of {g}. Applying the Cauchy-Schwarz inequality (and noting that the map {j \mapsto n+1-j} permutes {\{1,\dots,n\}}), we conclude that {|a_n| \leq \sum_{k=1}^n |b_{2k-1}|^2}, so the Bieberbach conjecture for this value of {n} follows from the Robertson conjecture for the same value of {n}. We remark that Littlewood and Paley had conjectured a stronger form {|b_{2k-1}| \leq 1} of Robertson’s conjecture, but this was disproved for {k=3} by Fekete and Szegö.
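To see the numbers work out in a concrete case, here is a sympy sketch using the extremal pair (1), (5), for which every inequality above is an equality:

```python
import sympy as sp

z = sp.symbols('z')
n = 5
f = z / (1 - z)**2                  # Koebe function, with a_n = n
g = z / (1 - z**2)                  # its odd square-root transform (5), with b_{2k-1} = 1
a = sp.series(f, z, 0, n + 1).removeO()
b = sp.series(g, z, 0, 2 * n).removeO()
# the convolution identity a_n = sum_j b_{2j-1} b_{2(n+1-j)-1}
conv = sum(b.coeff(z, 2*j - 1) * b.coeff(z, 2*(n + 1 - j) - 1) for j in range(1, n + 1))
print(a.coeff(z, n), conv)                                          # 5, 5
# the Robertson sum, which bounds |a_n| via Cauchy-Schwarz
print(sum(abs(b.coeff(z, 2*k - 1))**2 for k in range(1, n + 1)))    # 5
```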

To prove the Robertson and Bieberbach conjectures, one first takes a logarithm and deduces both conjectures from a similar conjecture about the Taylor coefficients of {\log \frac{f(z)}{z}}, known as the Milin conjecture. Next, one continuously enlarges the image {f(D(0,1))} of the schlicht function to cover all of {{\bf C}}; done properly, this places the schlicht function {f} as the initial function {f = f_0} in a sequence {(f_t)_{t \geq 0}} of univalent maps {f_t: D(0,1) \rightarrow {\bf C}} known as a Loewner chain. The functions {f_t} obey a useful differential equation known as the Loewner equation, which involves an unspecified forcing term {\mu_t} (or {\theta(t)}, in the case that the image is a slit domain) coming from the boundary; this in turn gives useful differential equations for the Taylor coefficients of {f(z)}, {g(z)}, or {\log \frac{f(z)}{z}}. After some elementary calculus manipulations to “integrate” these equations, the Bieberbach, Robertson, and Milin conjectures are then reduced to establishing the non-negativity of a certain explicit hypergeometric function, which is non-trivial to prove (and will not be done here, except for small values of {n}) but for which several proofs exist in the literature.

The theory of Loewner chains subsequently became fundamental to a more recent topic in complex analysis, that of the Schramm-Loewner evolution (SLE), which is the focus of the next and final set of notes.

Read the rest of this entry »

The Boussinesq equations for inviscid, incompressible two-dimensional fluid flow in the presence of gravity are given by

\displaystyle (\partial_t + u_x \partial_x+ u_y \partial_y) u_x = -\partial_x p \ \ \ \ \ (1)

\displaystyle (\partial_t + u_x \partial_x+ u_y \partial_y) u_y = \rho - \partial_y p \ \ \ \ \ (2)

\displaystyle (\partial_t + u_x \partial_x+ u_y \partial_y) \rho = 0 \ \ \ \ \ (3)

\displaystyle \partial_x u_x + \partial_y u_y = 0 \ \ \ \ \ (4)

where {u: {\bf R} \times {\bf R}^2 \rightarrow {\bf R}^2} is the velocity field, {p: {\bf R} \times {\bf R}^2 \rightarrow {\bf R}} is the pressure field, and {\rho: {\bf R} \times {\bf R}^2 \rightarrow {\bf R}} is the density field (or, in some physical interpretations, the temperature field). In this post we shall restrict ourselves to formal manipulations, assuming implicitly that all fields are regular enough (or sufficiently decaying at spatial infinity) that the manipulations are justified. Using the material derivative {D_t := \partial_t + u_x \partial_x + u_y \partial_y}, one can abbreviate these equations as

\displaystyle D_t u_x = -\partial_x p

\displaystyle D_t u_y = \rho - \partial_y p

\displaystyle D_t \rho = 0

\displaystyle \partial_x u_x + \partial_y u_y = 0.

One can eliminate the role of the pressure {p} by working with the vorticity {\omega := \partial_x u_y - \partial_y u_x}. A standard calculation then leads us to the equivalent “vorticity-stream” formulation

\displaystyle D_t \omega = \partial_x \rho

\displaystyle D_t \rho = 0

\displaystyle \omega = \partial_x u_y - \partial_y u_x

\displaystyle \partial_x u_x + \partial_y u_y = 0

of the Boussinesq equations. The latter two equations can be used to recover the velocity field {u} from the vorticity {\omega} by the Biot-Savart law

\displaystyle u_x := -\partial_y \Delta^{-1} \omega; \quad u_y = \partial_x \Delta^{-1} \omega.
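As an aside, this Biot-Savart law is easy to implement spectrally; here is a minimal numerical sketch (on a {2\pi}-periodic torus rather than on {{\bf R}^2}, and with a hypothetical test vorticity) that recovers a divergence-free velocity field {u} from {\omega} and checks that its curl returns {\omega}:

```python
import numpy as np

# grid and wavenumbers on the 2*pi-periodic torus
N, L = 128, 2 * np.pi
x = np.arange(N) * L / N
X, Y = np.meshgrid(x, x, indexing='ij')
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                   # avoid dividing by zero at the mean mode

omega = np.sin(X) * np.cos(2 * Y)                # hypothetical mean-zero test vorticity
phi_hat = -np.fft.fft2(omega) / K2               # phi = Delta^{-1} omega (Laplacian is -|k|^2)
phi_hat[0, 0] = 0.0
u_x = np.real(np.fft.ifft2(-1j * KY * phi_hat))  # u_x = -d_y Delta^{-1} omega
u_y = np.real(np.fft.ifft2( 1j * KX * phi_hat))  # u_y =  d_x Delta^{-1} omega

# checks: div u = 0 and curl u = omega, up to round-off
div  = np.fft.ifft2(1j * KX * np.fft.fft2(u_x) + 1j * KY * np.fft.fft2(u_y)).real
curl = np.fft.ifft2(1j * KX * np.fft.fft2(u_y) - 1j * KY * np.fft.fft2(u_x)).real
print(np.abs(div).max(), np.abs(curl - omega).max())
```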

It has long been observed (see e.g. Section 5.4.1 of Bertozzi-Majda) that the Boussinesq equations are very similar, though not quite identical, to the three-dimensional inviscid incompressible Euler equations under the hypothesis of axial symmetry (with swirl). The Euler equations are

\displaystyle \partial_t u + (u \cdot \nabla) u = - \nabla p

\displaystyle \nabla \cdot u = 0

where now the velocity field {u: {\bf R} \times {\bf R}^3 \rightarrow {\bf R}^3} and pressure field {p: {\bf R} \times {\bf R}^3 \rightarrow {\bf R}} are defined on the three-dimensional domain {{\bf R}^3}. If one expresses {{\bf R}^3} in cylindrical coordinates {(z,r,\theta)}, then one can write the velocity vector field {u} in these coordinates as

\displaystyle u = u^z \frac{d}{dz} + u^r \frac{d}{dr} + u^\theta \frac{d}{d\theta}.

If we make the axial symmetry assumption that these components, as well as {p}, do not depend on the {\theta} variable, thus

\displaystyle \partial_\theta u^z, \partial_\theta u^r, \partial_\theta u^\theta, \partial_\theta p = 0,

then after some calculation (which we give below the fold) one can eventually reduce the Euler equations to the system

\displaystyle \tilde D_t \omega = \frac{1}{r^4} \partial_z \rho \ \ \ \ \ (5)

\displaystyle \tilde D_t \rho = 0 \ \ \ \ \ (6)

\displaystyle \omega = \frac{1}{r} (\partial_z u^r - \partial_r u^z) \ \ \ \ \ (7)

\displaystyle \partial_z(ru^z) + \partial_r(ru^r) = 0 \ \ \ \ \ (8)

where {\tilde D_t := \partial_t + u^z \partial_z + u^r \partial_r} is the modified material derivative, and {\rho} is the field {\rho := (r u^\theta)^2}. This is almost identical with the Boussinesq equations except for some additional powers of {r}; thus, the intuition is that the Boussinesq equations are a simplified model for axially symmetric Euler flows when one stays away from the axis {r=0} and also does not wander off to {r=\infty}.

However, this heuristic is not rigorous; the above calculations do not actually give an embedding of the Boussinesq equations into Euler. (The equations do match on the cylinder {r=1}, but this is a measure zero subset of the domain, and so is not enough to give an embedding on any non-trivial region of space.) Recently, while playing around with trying to embed other equations into the Euler equations, I discovered that it is possible to make such an embedding into a four-dimensional Euler equation, albeit on a slightly curved manifold rather than in Euclidean space. More precisely, we use the Ebin-Marsden generalisation

\displaystyle \partial_t u + \nabla_u u = - \mathrm{grad}_g p

\displaystyle \mathrm{div}_g u = 0

of the Euler equations to an arbitrary Riemannian manifold {(M,g)} (ignoring any issues of boundary conditions for this discussion), where {u: {\bf R} \rightarrow \Gamma(TM)} is a time-dependent vector field, {p: {\bf R} \rightarrow C^\infty(M)} is a time-dependent scalar field, and {\nabla_u} is the covariant derivative along {u} using the Levi-Civita connection {\nabla}. In Penrose abstract index notation (using the Levi-Civita connection {\nabla}, and raising and lowering indices using the metric {g = g_{ij}}), the equations of motion become

\displaystyle \partial_t u^i + u^j \nabla_j u^i = - \nabla^i p \ \ \ \ \ (9)

 

\displaystyle \nabla_i u^i = 0;

in coordinates, this becomes

\displaystyle \partial_t u^i + u^j (\partial_j u^i + \Gamma^i_{jk} u^k) = - g^{ij} \partial_j p

\displaystyle \partial_i u^i + \Gamma^i_{ik} u^k = 0 \ \ \ \ \ (10)

where the Christoffel symbols {\Gamma^i_{jk}} are given by the formula

\displaystyle \Gamma^i_{jk} := \frac{1}{2} g^{il} (\partial_j g_{lk} + \partial_k g_{lj} - \partial_l g_{jk}),

where {g^{il}} is the inverse to the metric tensor {g_{il}}. If the coordinates are chosen so that the volume form {dg} is the Euclidean volume form {dx}, thus {\mathrm{det}(g)=1}, then on differentiating we have {g^{ij} \partial_k g_{ij} = 0}, and hence {\Gamma^i_{ik} = 0}, and so the divergence-free equation (10) simplifies in this case to {\partial_i u^i = 0}. The Ebin-Marsden Euler equations are the natural generalisation of the Euler equations to arbitrary manifolds; for instance, they (formally) conserve the kinetic energy

\displaystyle \frac{1}{2} \int_M |u|_g^2\ dg = \frac{1}{2} \int_M g_{ij} u^i u^j\ dg

and can be viewed as the formal geodesic flow equation on the infinite-dimensional manifold of volume-preserving diffeomorphisms on {M} (see this previous post for a discussion of this in the flat space case).

The specific four-dimensional manifold in question is the space {{\bf R} \times {\bf R}^+ \times {\bf R}/{\bf Z} \times {\bf R}/{\bf Z}} with metric

\displaystyle dx^2 + dy^2 + y^{-1} dz^2 + y dw^2

and solutions to the Boussinesq equations on {{\bf R} \times {\bf R}^+} can be transformed into solutions to the Euler equations on this manifold. This is part of a more general family of embeddings into the Euler equations in which passive scalar fields (such as the field {\rho} appearing in the Boussinesq equations) can be incorporated into the dynamics via fluctuations in the Riemannian metric {g}. I am writing the details below the fold (partly for my own benefit).
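As a quick check on two of the claims above (a small sympy sketch, not part of the derivation): the metric in question has unit determinant, so the divergence-free condition reduces to {\partial_i u^i = 0}, and the contracted Christoffel symbols {\Gamma^i_{ik}} indeed vanish.

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w', positive=True)
coords = (x, y, z, w)
g = sp.diag(1, 1, 1 / y, y)        # the metric dx^2 + dy^2 + y^{-1} dz^2 + y dw^2
ginv = g.inv()
print(sp.simplify(g.det()))        # 1: the volume form is the Euclidean one

def Gamma(i, j, k):
    # Christoffel symbol Gamma^i_{jk} computed from the formula above
    return sum(sp.Rational(1, 2) * ginv[i, l] *
               (sp.diff(g[l, k], coords[j]) + sp.diff(g[l, j], coords[k]) - sp.diff(g[j, k], coords[l]))
               for l in range(4))

# the contracted symbols Gamma^i_{ik} vanish, as claimed for unit-determinant metrics
print([sp.simplify(sum(Gamma(i, i, k) for i in range(4))) for k in range(4)])   # [0, 0, 0, 0]
```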

Read the rest of this entry »

Let {P(z) = z^n + a_{n-1} z^{n-1} + \dots + a_0} be a monic polynomial of degree {n} with complex coefficients. Then by the fundamental theorem of algebra, we can factor {P} as

\displaystyle  P(z) = (z-z_1) \dots (z-z_n) \ \ \ \ \ (1)

for some complex zeroes {z_1,\dots,z_n} (possibly with repetition).

Now suppose we evolve {P} with respect to time by heat flow, creating a function {P(t,z)} of two variables with given initial data {P(0,z) = P(z)} for which

\displaystyle  \partial_t P(t,z) = \partial_{zz} P(t,z). \ \ \ \ \ (2)

On the space of polynomials of degree at most {n}, the operator {\partial_{zz}} is nilpotent, and one can solve this equation explicitly both forwards and backwards in time by the Taylor series

\displaystyle  P(t,z) = \sum_{j=0}^\infty \frac{t^j}{j!} \partial_z^{2j} P(0,z).

For instance, if one starts with a quadratic {P(0,z) = z^2 + bz + c}, then the polynomial evolves by the formula

\displaystyle  P(t,z) = z^2 + bz + (c+2t).
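As a quick symbolic check of the Taylor-series formula (a sympy sketch; only the quadratic case appears in the text, and the quartic is an illustrative extra):

```python
import sympy as sp

z, t, b, c = sp.symbols('z t b c')

def heat_flow(P0, t):
    # P(t,z) = sum_j t^j/j! d^{2j}P0/dz^{2j}; the sum terminates since d_zz is nilpotent
    n = int(sp.degree(P0, z))
    return sp.expand(sum(t**j / sp.factorial(j) * sp.diff(P0, z, 2 * j) for j in range(n // 2 + 1)))

print(heat_flow(z**2 + b * z + c, t))    # z**2 + b*z + c + 2*t, as claimed
print(heat_flow(z**4, t))                # z**4 + 12*t*z**2 + 12*t**2
```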

As the polynomial {P(t)} evolves in time, the zeroes {z_1(t),\dots,z_n(t)} evolve also. Assuming for sake of discussion that the zeroes are simple, the inverse function theorem tells us that the zeroes will (locally, at least) evolve smoothly in time. What are the dynamics of this evolution?

For instance, in the quadratic case, the quadratic formula tells us that the zeroes are

\displaystyle  z_1(t) = \frac{-b + \sqrt{b^2 - 4(c+2t)}}{2}

and

\displaystyle  z_2(t) = \frac{-b - \sqrt{b^2 - 4(c+2t)}}{2}

after arbitrarily choosing a branch of the square root. If {b,c} are real and the discriminant {b^2 - 4c} is initially positive, we see that we start with two real zeroes centred around {-b/2}, which then approach each other until time {t = \frac{b^2-4c}{8}}, at which point the roots collide and then move off from each other in an imaginary direction.

In the general case, we can obtain the equations of motion by implicitly differentiating the defining equation

\displaystyle  P( t, z_i(t) ) = 0

in time using (2) to obtain

\displaystyle  \partial_{zz} P( t, z_i(t) ) + \partial_t z_i(t) \partial_z P(t,z_i(t)) = 0.

To simplify notation we drop the explicit dependence on time, thus

\displaystyle  \partial_{zz} P(z_i) + (\partial_t z_i) \partial_z P(z_i)= 0.

From (1) and the product rule, we see that

\displaystyle  \partial_z P( z_i ) = \prod_{j:j \neq i} (z_i - z_j)

and

\displaystyle  \partial_{zz} P( z_i ) = 2 \sum_{k:k \neq i} \prod_{j:j \neq i,k} (z_i - z_j)

(where all indices are understood to range over {1,\dots,n}) leading to the equations of motion

\displaystyle  \partial_t z_i = \sum_{k:k \neq i} \frac{2}{z_k - z_i}, \ \ \ \ \ (3)

at least when one avoids those times in which there is a repeated zero. In the case when the zeroes {z_i} are real, each term {\frac{2}{z_k-z_i}} represents a (first-order) attraction in the dynamics between {z_i} and {z_k}, but the dynamics are more complicated for complex zeroes (e.g. purely imaginary zeroes will experience repulsion rather than attraction, as one already sees in the quadratic example). Curiously, this system resembles that of Dyson Brownian motion (except with the Brownian motion term removed, and with time reversed). I learned of the connection between the ODE (3) and the heat equation from this paper of Csordas, Smith, and Varga, but perhaps it has been mentioned in earlier literature as well.
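To see the equivalence in action, here is a small numerical sketch (with hypothetical initial zeroes) that integrates the ODE (3) with a standard Runge-Kutta step and compares the result with the zeroes of the heat-evolved polynomial, computed from the terminating Taylor series in {t}:

```python
import math
import numpy as np
from numpy.polynomial import polynomial as P

def heat_coeffs(c, t):
    # coefficients (low degree first) of P(t,.) = sum_j t^j/j! d^{2j}/dz^{2j} P(0,.)
    out = np.zeros(len(c), dtype=complex)
    term = np.asarray(c, dtype=complex)
    j = 0
    while np.any(term):
        out[:term.size] += term * t**j / math.factorial(j)
        term = P.polyder(term, 2)
        j += 1
    return out

def velocity(z):
    # right-hand side of the ODE (3): dz_i/dt = sum_{k != i} 2/(z_k - z_i)
    diff = z[None, :] - z[:, None]          # diff[i, k] = z_k - z_i
    np.fill_diagonal(diff, 1.0)             # placeholder to avoid dividing by zero
    inv = 2.0 / diff
    np.fill_diagonal(inv, 0.0)              # drop the k = i terms
    return inv.sum(axis=1)

z = np.array([-2.0, -0.3, 0.5, 1.7], dtype=complex)   # hypothetical simple real zeroes
coeffs0 = P.polyfromroots(z)
t, dt = 0.0, 1e-4
while t < 0.05 - 1e-12:
    # classical RK4 step for the zero dynamics
    k1 = velocity(z)
    k2 = velocity(z + 0.5 * dt * k1)
    k3 = velocity(z + 0.5 * dt * k2)
    k4 = velocity(z + dt * k3)
    z = z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

print(np.sort_complex(z))
print(np.sort_complex(P.polyroots(heat_coeffs(coeffs0, t))))   # should agree to high accuracy
```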

One interesting consequence of these equations is that if the zeroes are real at some time, then they will stay real as long as the zeroes do not collide. Let us now restrict attention to the case of real simple zeroes, in which case we will rename the zeroes as {x_i} instead of {z_i}, and order them as {x_1 < \dots < x_n}. The evolution

\displaystyle  \partial_t x_i = \sum_{k:k \neq i} \frac{2}{x_k - x_i}

can now be thought of as reverse gradient flow for the “entropy”

\displaystyle  H := -\sum_{i,j: i \neq j} \log |x_i - x_j|,

(which is also essentially the logarithm of the discriminant of the polynomial) since we have

\displaystyle  \partial_t x_i = \frac{\partial H}{\partial x_i}.

In particular, we have the monotonicity formula

\displaystyle  \partial_t H = 4E

where {E} is the “energy”

\displaystyle  E := \frac{1}{4} \sum_i (\frac{\partial H}{\partial x_i})^2

\displaystyle  = \sum_i (\sum_{k:k \neq i} \frac{1}{x_k-x_i})^2

\displaystyle  = \sum_{i,k: i \neq k} \frac{1}{(x_k-x_i)^2} + \sum_{i,j,k: i,j,k \hbox{ distinct}} \frac{1}{(x_k-x_i)(x_j-x_i)}

\displaystyle  = \sum_{i,k: i \neq k} \frac{1}{(x_k-x_i)^2}

where in the last line we use the antisymmetrisation identity

\displaystyle  \frac{1}{(x_k-x_i)(x_j-x_i)} + \frac{1}{(x_i-x_j)(x_k-x_j)} + \frac{1}{(x_j-x_k)(x_i-x_k)} = 0.

Among other things, this shows that as one goes backwards in time, the entropy decreases, and so no collisions can occur to the past, only in the future, which is of course consistent with the attractive nature of the dynamics. As {H} is a convex function of the positions {x_1,\dots,x_n}, one expects {H} to also evolve in a convex manner in time, that is to say the energy {E} should be increasing. This is indeed the case:

Exercise 1 Show that

\displaystyle  \partial_t E = 2 \sum_{i,j: i \neq j} (\frac{2}{(x_i-x_j)^2} - \sum_{k: i,j,k \hbox{ distinct}} \frac{1}{(x_k-x_i)(x_k-x_j)})^2.
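As a sanity check on the cancellation of the cross terms in the computation of {E} above (a small sympy sketch for {n=4}; it is not needed for the exercise):

```python
import sympy as sp

xs = sp.symbols('x1:5')          # x1, x2, x3, x4
n = len(xs)
E_full = sum(sum(1 / (xs[k] - xs[i]) for k in range(n) if k != i)**2 for i in range(n))
E_diag = sum(1 / (xs[k] - xs[i])**2 for i in range(n) for k in range(n) if k != i)
print(sp.simplify(E_full - E_diag))     # 0: the cross terms cancel by antisymmetrisation
```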

Symmetric polynomials of the zeroes are polynomial functions of the coefficients and should thus evolve in a polynomial fashion. One can compute this explicitly in simple cases. For instance, the center of mass is an invariant:

\displaystyle  \partial_t \frac{1}{n} \sum_i x_i = 0.

The variance decreases linearly:

Exercise 2 Establish the virial identity

\displaystyle  \partial_t \sum_{i,j} (x_i-x_j)^2 = - 4n^2(n-1).

As the variance (which is proportional to {\sum_{i,j} (x_i-x_j)^2}) cannot become negative, this identity shows that “finite time blowup” must occur – that the zeroes must collide at or before the time {\frac{1}{4n^2(n-1)} \sum_{i,j} (x_i-x_j)^2}.
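One can verify the virial identity symbolically for small {n} (a sympy sketch with {n=3}, for which {-4n^2(n-1) = -72}):

```python
import sympy as sp

xs = sp.symbols('x1:4')          # x1, x2, x3
n = len(xs)
vel = [sum(2 / (xs[k] - xs[i]) for k in range(n) if k != i) for i in range(n)]   # ODE (3)
V = sum((xs[i] - xs[j])**2 for i in range(n) for j in range(n))
dV = sum(sp.diff(V, xs[i]) * vel[i] for i in range(n))   # time derivative of the variance along the flow
print(sp.simplify(dV))           # -72, i.e. -4*n^2*(n-1)
```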

Exercise 3 Show that the Stieltjes transform

\displaystyle  s(t,z) = \sum_i \frac{1}{x_i - z}

solves the viscous Burgers equation

\displaystyle  \partial_t s = \partial_{zz} s - 2 s \partial_z s,

either by using the original heat equation (2) and the identity {s = - \partial_z P / P}, or else by using the equations of motion (3). This relation between the Burgers equation and the heat equation is known as the Cole-Hopf transformation.
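Here is a quick symbolic confirmation of the first route (a sympy sketch, using a generic cubic evolved by the terminating heat-flow series):

```python
import sympy as sp

z, t, a, b, c = sp.symbols('z t a b c')
P0 = z**3 + a * z**2 + b * z + c
# exact heat-flow solution of (2); two terms suffice for a cubic
P = P0 + t * sp.diff(P0, z, 2)
assert sp.simplify(sp.diff(P, t) - sp.diff(P, z, 2)) == 0
s = -sp.diff(P, z) / P           # Stieltjes transform via s = -P_z/P
print(sp.simplify(sp.diff(s, t) - sp.diff(s, z, 2) + 2 * s * sp.diff(s, z)))   # 0
```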

The paper of Csordas, Smith, and Varga mentioned previously gives some other bounds on the lifespan of the dynamics; roughly speaking, they show that if there is one pair of zeroes that are much closer to each other than to the other zeroes, then they must collide in a short amount of time (unless there is a collision occurring even earlier at some other location). Their argument extends also to situations where there are an infinite number of zeroes, which they apply to get new results on Newman’s conjecture in analytic number theory. I would be curious to know of further places in the literature where this dynamics has been studied.
