We now begin the rigorous theory of the incompressible Navier-Stokes equations

$\displaystyle \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p \ \ \ \ \ (1)$

$\displaystyle \nabla \cdot u = 0,$

where ${\nu>0}$ is a given constant (the kinematic viscosity, or viscosity for short), ${u: I \times {\bf R}^d \rightarrow {\bf R}^d}$ is an unknown vector field (the velocity field), and ${p: I \times {\bf R}^d \rightarrow {\bf R}}$ is an unknown scalar field (the pressure field). Here ${I}$ is a time interval, usually of the form ${[0,T]}$ or ${[0,T)}$. We will either be interested in spatially decaying situations, in which ${u(t,x)}$ decays to zero as ${x \rightarrow \infty}$, or ${{\bf Z}^d}$-periodic (or periodic for short) settings, in which one has ${u(t, x+n) = u(t,x)}$ for all ${n \in {\bf Z}^d}$. (One can also require the pressure ${p}$ to be periodic as well; this brings up a small subtlety in the uniqueness theory for these equations, which we will address later in this set of notes.) As is usual, we abuse notation by identifying a ${{\bf Z}^d}$-periodic function on ${{\bf R}^d}$ with a function on the torus ${{\bf R}^d/{\bf Z}^d}$.

In order for the system (1) to even make sense, one requires some level of regularity on the unknown fields ${u,p}$; this turns out to be a relatively important technical issue that will require some attention later in this set of notes, and we will end up transforming (1) into other forms that are more suitable for lower regularity candidate solutions. Our focus here will be on local existence of these solutions in a short time interval ${[0,T]}$ or ${[0,T)}$, for some ${T>0}$. (One could in principle also consider solutions that extend to negative times, but it turns out that the equations are not time-reversible, and the forward evolution is significantly more natural to study than the backwards one.) The study of the Euler equations, in which ${\nu=0}$, will be deferred to subsequent lecture notes.

As the unknown fields involve a time parameter ${t}$, and the first equation of (1) involves time derivatives of ${u}$, the system (1) should be viewed as describing an evolution for the velocity field ${u}$. (As we shall see later, the pressure ${p}$ is not really an independent dynamical field, as it can essentially be expressed in terms of the velocity field without requiring any differentiation or integration in time.) As such, the natural question to study for this system is the initial value problem, in which an initial velocity field ${u_0: {\bf R}^d \rightarrow {\bf R}^d}$ is specified, and one wishes to locate a solution ${(u,p)}$ to the system (1) with initial condition

$\displaystyle u(0,x) = u_0(x) \ \ \ \ \ (2)$

for ${x \in {\bf R}^d}$. Of course, in order for this initial condition to be compatible with the second equation in (1), we need the compatibility condition

$\displaystyle \nabla \cdot u_0 = 0 \ \ \ \ \ (3)$

and one should also impose some regularity, decay, and/or periodicity hypotheses on ${u_0}$ in order to be compatible with corresponding level of regularity etc. on the solution ${u}$.

The fundamental questions in the local theory of an evolution equation are that of existence, uniqueness, and continuous dependence. In the context of the Navier-Stokes equations, these questions can be phrased (somewhat broadly) as follows:

• (a) (Local existence) Given suitable initial data ${u_0}$, does there exist a solution ${(u,p)}$ to the above initial value problem that exists for some time ${T>0}$? What can one say about the time ${T}$ of existence? How regular is the solution?
• (b) (Uniqueness) Is it possible to have two solutions ${(u,p), (u',p')}$ of a certain regularity class to the same initial value problem on a common time interval ${[0,T)}$? To what extent does the answer to this question depend on the regularity assumed on one or both of the solutions? Does one need to normalise the solutions beforehand in order to obtain uniqueness?
• (c) (Continuous dependence on data) If one perturbs the initial conditions ${u_0}$ by a small amount, what happens to the solution ${(u,p)}$ and on the time of existence ${T}$? (This question tends to only be sensible once one has a reasonable uniqueness theory.)

The answers to these questions tend to be more complicated than a simple “Yes” or “No”; for instance, they can depend on the precise regularity hypotheses one wishes to impose on the data and on the solution, and even on exactly how one interprets the concept of a “solution”. However, once one settles on such a set of hypotheses, it generally happens that one either gets a “strong” theory (in which one has existence, uniqueness, and continuous dependence on the data), a “weak” theory (in which one has existence of somewhat low-quality solutions, but with only limited uniqueness results (or even some spectacular failures of uniqueness) and almost no continuous dependence on data), or no satisfactory theory whatsoever. In the first case, we say (roughly speaking) that the initial value problem is locally well-posed, and one can then try to build upon the theory to explore more interesting topics such as global existence and asymptotics, classifying potential blowup, rigorous justification of conservation laws, and so forth. With a weak local theory, it becomes much more difficult to address these latter sorts of questions, and there are serious analytic pitfalls that one could fall into if one tries too strenuously to treat weak solutions as if they were strong. (For instance, conservation laws that are rigorously justified for strong, high-regularity solutions may well fail for weak, low-regularity ones.) Also, even if one is primarily interested in solutions at one level of regularity, the well-posedness theory at another level of regularity can be very helpful; for instance, if one is interested in smooth solutions in ${{\bf R}^d}$, it turns out that the well-posedness theory at the critical regularity of ${\dot H^{\frac{d}{2}-1}({\bf R}^d)}$ can be used to establish globally smooth solutions from small initial data. As such, it can become quite important to know what kind of local theory one can obtain for a given equation.

This set of notes will focus on the “strong” theory, in which a substantial amount of regularity is assumed in the initial data and solution, giving a satisfactory (albeit largely local-in-time) well-posedness theory. “Weak” solutions will be considered in later notes.

The Navier-Stokes equations are not the simplest of partial differential equations to study, in part because they are an amalgam of three more basic equations, which behave rather differently from each other (for instance the first equation is nonlinear, while the latter two are linear):

• (a) Transport equations such as ${\partial_t u + (u \cdot \nabla) u = 0}$.
• (b) Diffusion equations (or heat equations) such as ${\partial_t u = \nu \Delta u}$.
• (c) Systems such as ${v = F - \nabla p}$, ${\nabla \cdot v = 0}$, which (for want of a better name) we will call Leray systems.

Accordingly, we will devote some time to getting some preliminary understanding of the linear diffusion and Leray systems before returning to the theory for the Navier-Stokes equation. Transport systems will be discussed further in subsequent notes; in this set of notes, we will instead focus on a more basic example of nonlinear equations, namely the first-order ordinary differential equation

$\displaystyle \partial_t u = F(u) \ \ \ \ \ (4)$

where ${u: I \rightarrow V}$ takes values in some finite-dimensional (real or complex) vector space ${V}$ on some time interval ${I}$, and ${F: V \rightarrow V}$ is a given linear or nonlinear function. (Here, we use “interval” to denote a connected non-empty subset of ${{\bf R}}$; in particular, we allow intervals to be half-infinite or infinite, or to be open, closed, or half-open.) Fundamental results in this area include the Picard existence and uniqueness theorem, the Duhamel formula, and Grönwall’s inequality; they will serve as motivation for the approach to local well-posedness that we will adopt in this set of notes. (There are other ways to construct strong or weak solutions for Navier-Stokes and Euler equations, which we will discuss in later notes.)

A key role in our treatment here will be played by the fundamental theorem of calculus (in various forms and variations). Roughly speaking, this theorem, and its variants, allow us to recast differential equations (such as (1) or (4)) as integral equations. Such integral equations are less tractable algebraically than their differential counterparts (for instance, they are not ideal for verifying conservation laws), but are significantly more convenient for well-posedness theory, basically because integration tends to increase the regularity of a function, while differentiation reduces it. (Indeed, the problem of “losing derivatives”, or more precisely “losing regularity”, is a key obstacle that one often has to address when trying to establish well-posedness for PDE, particularly those that are quite nonlinear and with rough initial data, though for nonlinear parabolic equations such as Navier-Stokes the obstacle is not as serious as it is for some other PDE, due to the smoothing effects of the heat equation.)

One weakness of the methods deployed here is that the quantitative bounds produced deteriorate to the point of uselessness in the inviscid limit ${\nu \rightarrow 0}$, rendering these techniques unsuitable for analysing the Euler equations in which ${\nu=0}$. However, some of the methods developed in later notes have bounds that remain uniform in the ${\nu \rightarrow 0}$ limit, allowing one to also treat the Euler equations.

In this and subsequent set of notes, we use the following asymptotic notation (a variant of Vinogradov notation that is commonly used in PDE and harmonic analysis). The statement ${X \lesssim Y}$, ${Y \gtrsim X}$, or ${X = O(Y)}$ will be used to denote an estimate of the form ${|X| \leq CY}$ (or equivalently ${Y \geq C^{-1} |X|}$) for some constant ${C}$, and ${X \sim Y}$ will be used to denote the estimates ${X \lesssim Y \lesssim X}$. If the constant ${C}$ depends on other parameters (such as the dimension ${d}$), this will be indicated by subscripts, thus for instance ${X \lesssim_d Y}$ denotes the estimate ${|X| \leq C_d Y}$ for some ${C_d}$ depending on ${d}$.

— 1. Ordinary differential equations —

We now study solutions to ordinary differential equations (4), focusing in particular on the initial value problem when the initial state ${u(0) = u_0 \in V}$ is specified. We restrict attention to strong solutions ${u: I \rightarrow V}$, in which ${u}$ is continuously differentiable (${C^1}$) in the time variable, so that the derivative ${\partial_t}$ in (4) can be interpreted as the classical (strong) derivative, and one has the classical fundamental theorem of calculus

$\displaystyle u(t_2) - u(t_1) = \int_{t_1}^{t_2} \partial_t u(t)\ dt \ \ \ \ \ (5)$

whenever ${t_1,t_2 \in I}$ (in this post we use the signed definite integral, thus ${\int_{t_1}^{t_2} = -\int_{t_2}^{t_1}}$).

We begin with homogeneous linear equations

$\displaystyle \partial_t u = L u$

where ${L: V \rightarrow V}$ is a linear operator. Using the integrating factor ${e^{-tL}}$, where ${e^{-tL}: V \rightarrow V}$ is the matrix exponential of ${-tL}$, and noting that ${\frac{d}{dt} e^{-tL} = -L e^{-tL} = -e^{-tL} L}$, we see that this equation is equivalent to

$\displaystyle \partial_t (e^{-tL} u) = 0$

and hence from the fundamental theorem of calculus we see that if ${u(0) = u_0}$ then we have the unique global solution given by ${e^{-tL} u(t) = u_0}$, or equivalently

$\displaystyle u(t) = e^{tL} u_0.$

More generally, if one wishes to solve the inhomogeneous linear equation

$\displaystyle \partial_t u = L u + F$

for some continuous ${F: {\bf R} \rightarrow V}$ with initial condition ${u(0) = u_0}$, then from the fundamental theorem of calculus we have a unique global solution given by

$\displaystyle e^{-tL} u = u_0 + \int_0^t e^{-sL} F(s)\ ds$

or equivalently one has Duhamel’s formula

$\displaystyle u(t) = e^{tL} u_0 + \int_0^t e^{(t-s) L} F(s)\ ds, \ \ \ \ \ (6)$

which is continuously differentiable in time if ${F}$ is continuous. Intuitively, the first term ${e^{tL} u_0}$ represents the contribution of the initial data ${u_0}$ to the solution ${u(t)}$ at time ${t}$ (with the ${e^{tL}}$ factor representing the evolution from time ${0}$ to time ${t}$), while the integrand ${e^{(t-s)L} F(s)}$ represents the contribution of the forcing term ${F(s)}$ at time ${s}$ to the solution ${u(t)}$ at time ${t}$ (with the ${e^{(t-s)L}}$ factor representing the evolution from time ${s}$ to time ${t}$).
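As a quick sanity check (not needed for the rigorous theory), Duhamel’s formula (6) can be verified numerically against a direct ODE solve; the following Python sketch approximates the integral in (6) by a trapezoid rule. The particular matrix ${L}$, forcing ${F}$, and tolerances are illustrative choices, not part of the theory.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical 2x2 system u' = L u + F(t); compare Duhamel's formula (6)
# against a direct high-accuracy ODE solve.
L = np.array([[0.0, 1.0], [-1.0, -0.5]])
u0 = np.array([1.0, 0.0])
F = lambda t: np.array([np.cos(t), 0.0])   # continuous forcing term

def duhamel(t, n=2001):
    # u(t) = e^{tL} u_0 + int_0^t e^{(t-s)L} F(s) ds, trapezoid rule in s
    s = np.linspace(0.0, t, n)
    vals = np.stack([expm((t - si) * L) @ F(si) for si in s])
    ds = s[1] - s[0]
    integral = ds * (vals[1:-1].sum(axis=0) + 0.5 * (vals[0] + vals[-1]))
    return expm(t * L) @ u0 + integral

sol = solve_ivp(lambda t, u: L @ u + F(t), (0.0, 2.0), u0, rtol=1e-10, atol=1e-12)
print(np.allclose(duhamel(2.0), sol.y[:, -1], atol=1e-4))  # the two should agree
```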

One can apply a similar analysis to the differential inequality

$\displaystyle \partial_t u(t) \leq A(t) u(t) + F(t) \ \ \ \ \ (7)$

where ${u: I \rightarrow {\bf R}}$ is now a scalar continuously differentiable function, ${F, A: I \rightarrow {\bf R}}$ are continuous functions, and ${I}$ is an interval containing ${0}$ as its left endpoint; we also assume an initial condition ${u(0) = u_0 \in {\bf R}}$. Here, the natural integrating factor is ${t \mapsto \exp( - \int_0^t A(t')\ dt' )}$, whose derivative is ${t \mapsto - A(t) \exp( - \int_0^t A(t')\ dt' )}$ by the chain rule and the fundamental theorem of calculus. Applying this integrating factor to (7), we may write it as

$\displaystyle \partial_t ( \exp( - \int_0^t A(t')\ dt' ) u(t) ) \leq \exp( - \int_0^t A(t')\ dt' ) F(t)$

and hence by the fundamental theorem of calculus we have

$\displaystyle \exp( - \int_0^t A(t')\ dt' ) u(t) \leq u_0 + \int_0^t \exp( - \int_0^{s} A(t')\ dt' )F(s) \ ds$

or equivalently

$\displaystyle u(t) \leq \exp( \int_0^t A(t')\ dt' ) u_0 + \int_0^t \exp( \int_s^t A(t')\ dt' ) F(s)\ ds \ \ \ \ \ (8)$

for all ${t \in I}$ (compare with (6)). This is the differential form of Grönwall’s inequality. In the homogeneous case ${F=0}$, the inequality of course simplifies to

$\displaystyle u(t) \leq \exp( \int_0^t A(t')\ dt' ) u_0. \ \ \ \ \ (9)$
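As a numerical illustration, one can discretise the differential inequality (7) with ${F=0}$ and confirm that the resulting solution stays below the Grönwall bound (9); the coefficient ${A}$ below is an arbitrary illustrative choice.

```python
import numpy as np

# Discretised check of the homogeneous Gronwall bound (9): integrate
# u' = A(t) u by explicit Euler and compare with exp(int_0^t A) u_0.
ts = np.linspace(0.0, 1.0, 1001)
A = lambda t: 1.0 + np.sin(t)
u0 = 2.0

u = np.empty_like(ts)
u[0] = u0
for i in range(len(ts) - 1):
    dt = ts[i + 1] - ts[i]
    u[i + 1] = u[i] + dt * A(ts[i]) * u[i]   # Euler step for u' = A(t) u

# cumulative trapezoid rule for int_0^t A(t') dt'
intA = np.concatenate([[0.0], np.cumsum((A(ts[1:]) + A(ts[:-1])) / 2 * np.diff(ts))])
bound = np.exp(intA) * u0
print(bool(np.all(u <= bound + 1e-9)))       # Euler solution sits below the bound
```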

We continue assuming that ${F=0}$ for simplicity. From the fundamental theorem of calculus, (7) (and the initial condition ${u_0}$) implies the integral inequality

$\displaystyle u(t) \leq u_0 + \int_0^t A(s) u(s)\ ds, \ \ \ \ \ (10)$

although the converse implication of (7) from (10) is false in general. Nevertheless, there is an analogue of (9) just assuming the weaker inequality (10), and not requiring any differentiability on ${u}$, at least when all functions involved are non-negative:

Lemma 1 (Integral form of Grönwall inequality) Let ${I}$ be an interval containing ${0}$ as left endpoint, let ${u_0 \in [0,+\infty)}$, and let ${u, A: I \rightarrow [0,+\infty)}$ be continuous functions obeying the inequality (10) for all ${t \in I}$. Then one has (9) for all ${t \in I}$.

Proof: From (10) and the fundamental theorem of calculus, the function ${t \mapsto u_0 + \int_0^t A(s) u(s)\ ds}$ is continuously differentiable and obeys the differential inequality

$\displaystyle \frac{d}{dt}( u_0 + \int_0^t A(s) u(s)\ ds ) = A(t) u(t) \leq A(t) (u_0 + \int_0^t A(s) u(s)\ ds)$

(note here that we use the hypothesis that ${A(t)}$ is non-negative). Applying the differential form (9) of Grönwall’s inequality, we conclude that

$\displaystyle u_0 + \int_0^t A(s) u(s)\ ds \leq \exp( \int_0^t A(t')\ dt' ) u_0.$

The claim now follows from (10). $\Box$

Exercise 2 Relax the hypotheses of continuity on ${u,A}$ to that of being measurable and bounded on compact intervals. (You will need tools such as the fundamental theorem of calculus for absolutely continuous or Lipschitz functions, covered for instance in this previous set of notes.)

Grönwall’s inequality is an excellent tool for bounding the growth of a solution to an ODE or PDE, or the difference between two such solutions. Here is a basic example, one half of the Picard (or Picard–Lindelöf) theorem:

Theorem 3 (Picard uniqueness theorem) Let ${I}$ be an interval, let ${V}$ be a finite-dimensional vector space, let ${F: V \rightarrow V}$ be a function that is Lipschitz continuous on every bounded subset of ${V}$, and let ${u, v: I \rightarrow V}$ be continuously differentiable solutions to the ODE (4), thus

$\displaystyle \partial_t u = F(u); \quad \partial_t v = F(v)$

on ${I}$. If ${u(t_0) = v(t_0)}$ for some ${t_0 \in I}$, then ${u}$ and ${v}$ agree identically on ${I}$, thus ${u(t)=v(t)}$ for all ${t \in I}$.

Proof: By translating ${I}$ and ${t_0}$ we may assume without loss of generality that ${t_0=0}$. By splitting ${I}$ into at most two intervals, we may assume that ${0}$ is either the left or right endpoint of ${I}$; by applying the time reversal symmetry of replacing ${u,v}$ by ${t \mapsto u(-t), v \mapsto v(-t)}$ respectively, and also replacing ${I, F}$ by ${-I}$ and ${-F}$, we may assume without loss of generality that ${0}$ is the left endpoint of ${I}$. Finally, by writing ${I}$ as the union of compact intervals with left endpoint ${0}$, we may assume without loss of generality that ${I}$ is compact. In particular, ${u,v}$ are bounded and hence ${F}$ is Lipschitz continuous with some finite Lipschitz constant ${K}$ on the ranges of ${u}$ and ${v}$.

From the fundamental theorem of calculus we have

$\displaystyle u(t) = u(t_0) + \int_{t_0}^t F(u(s))\ ds$

and

$\displaystyle v(t) = v(t_0) + \int_{t_0}^t F(v(s))\ ds$

for every ${t \in I}$; subtracting, we conclude

$\displaystyle u(t) - v(t) = \int_{t_0}^t F(u(s)) - F(v(s))\ ds.$

Applying the Lipschitz property of ${F}$ and the triangle inequality, we conclude that

$\displaystyle |u(t)-v(t)| \leq K \int_{t_0}^t |u(s)-v(s)|\ ds.$

By the integral form of Grönwall’s inequality, we conclude that

$\displaystyle |u(t)-v(t)| \leq \exp( \int_{t_0}^t K\ ds) \cdot 0 = 0$

and the claim follows. $\Box$

Remark 4 The same result applies for infinite-dimensional normed vector spaces ${V}$, at least if one requires ${u,v}$ to be continuously differentiable in the strong (Fréchet) sense; the proof is identical.

Exercise 5 (Comparison principle) Let ${F: {\bf R} \rightarrow {\bf R}}$ be a function that is Lipschitz continuous on compact intervals. Let ${I}$ be an interval, and let ${u,v: I \rightarrow {\bf R}}$ be continuously differentiable functions such that

$\displaystyle \partial_t u(t) \leq F(u(t))$

and

$\displaystyle \partial_t v(t) \geq F(v(t))$

for all ${t \in I}$.

• (a) Suppose that ${u(t_0) \leq v(t_0)}$ for some ${t_0 \in I}$. Show that ${u(t) \leq v(t)}$ for all ${t \in I}$ with ${t \geq t_0}$. (Hint: there are several ways to proceed here. One is to try to verify the hypotheses of Grönwall’s inequality for the quantity ${\max(u(t)-v(t),0)}$ or ${\max(u(t)-v(t),0)^2}$.)
• (b) Suppose that ${u(t_0) < v(t_0)}$ for some ${t_0 \in I}$. Show that ${u(t) < v(t)}$ for all ${t \in I}$ with ${t \geq t_0}$.

Now we turn to the existence side of the Picard theorem.

Theorem 6 (Picard existence theorem) Let ${V}$ be a finite dimensional normed vector space, let ${R>0}$, and let ${u_0 \in V}$ lie in the closed ball ${\overline{B(0,R)} := \{ u \in V: \|u\| \leq R \}}$. Let ${F: V \rightarrow V}$ be a function which has a Lipschitz constant of ${K}$ on the ball ${\overline{B(0,2R)}}$. If one sets

$\displaystyle T := \frac{1}{2K + \|F(0)\|/R}, \ \ \ \ \ (11)$

then there exists a continuously differentiable solution ${u: [-T,T] \rightarrow V}$ to the ODE (4) with initial data ${u(0) = u_0}$ such that ${u(t) \in \overline{B(0,2R)}}$ for all ${t \in [-T,T]}$.

Note that the solution produced by this theorem is unique on ${[-T,T]}$, thanks to Theorem 3. We will be primarily concerned with the case ${F(0)=0}$, in which case the time of existence ${T}$ simplifies to ${T = \frac{1}{2K}}$.

Proof: Using the fundamental theorem of calculus, we write (4) (with initial condition ${u(0)=u_0}$) in integral form as

$\displaystyle u(t) = u_0 + \int_0^t F(u(s))\ ds. \ \ \ \ \ (12)$

Indeed, if ${u}$ is continuously differentiable and solves (4) with ${u(0)=u_0}$ on ${[-T,T]}$, then (12) holds on ${[-T,T]}$. Conversely, if ${u}$ is continuous and solves (12) on ${[-T,T]}$, then by the fundamental theorem of calculus the right-hand side of (12) (and hence ${u}$) is continuously differentiable and solves (4) with ${u(0)=u_0}$. Thus it suffices to solve the integral equation (12) with a solution taking values in ${\overline{B(0,2R)}}$.

We can view this as a fixed point problem. Let ${X = C([-T,T] \rightarrow \overline{B(0,2R)})}$ denote the space of continuous functions from ${[-T,T]}$ to ${\overline{B(0,2R)}}$. We give this the uniform metric

$\displaystyle d(u,v) := \sup_{t \in [-T,T]} \| u(t) - v(t) \|.$

As is well known, ${X}$ becomes a complete metric space with this metric. Let ${\Phi: X \rightarrow X}$ denote the map

$\displaystyle \Phi(u)(t) := u_0 + \int_0^t F(u(s))\ ds.$

Let us first verify that ${\Phi}$ does map ${X}$ to ${X}$. If ${u \in X}$, then ${\Phi(u)}$ is clearly continuous. For any ${t \in [-T,T]}$, one has from the triangle inequality that

$\displaystyle \| \Phi(u)(t) \| \leq \| u_0 \| + T \sup_{s \in [-T,T]} \| F(u(s)) \|$

$\displaystyle \leq R + T (\|F(0)\| + 2RK)$

$\displaystyle \leq 2R$

by choice of ${T}$, hence ${\Phi(u) \in X}$ as claimed. A similar argument shows that ${\Phi}$ is in fact a contraction on ${X}$. Namely, if ${u,v \in X}$, then

$\displaystyle \|\Phi(u)(t) - \Phi(v)(t)\| = \| \int_0^t F(u(s)) - F(v(s)) \ ds \|$

$\displaystyle \leq T \sup_{s \in [-T,T]} K \|u(s)-v(s)\|$

$\displaystyle \leq TK d(u,v)$

and hence ${d(\Phi(u),\Phi(v)) \leq \frac{1}{2} d(u,v)}$ by choice of ${T}$. Applying the contraction mapping theorem, we obtain a fixed point ${u \in X}$ to the equation ${u = \Phi(u)}$, which is precisely (12), and the claim follows. $\Box$
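The contraction argument in the proof is entirely constructive, and one can watch the Picard iterates converge numerically; the following Python sketch iterates the map ${\Phi}$ from the proof, starting at the constant function ${u_0}$. The choice of ODE (${\partial_t u = \cos(u)}$), the interval, and the iteration count are illustrative.

```python
import numpy as np

# Picard iteration u_{k+1} = Phi(u_k) from the proof, for the hypothetical
# scalar ODE u' = cos(u), u(0) = 1.  Here F = cos has Lipschitz constant
# K = 1, so T = 1/(2K) = 0.5 makes Phi a contraction with factor 1/2.
T, n = 0.5, 501
ts = np.linspace(0.0, T, n)
u0 = 1.0

def Phi(u):
    # Phi(u)(t) = u_0 + int_0^t cos(u(s)) ds, via a cumulative trapezoid rule
    f = np.cos(u)
    integral = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(ts))])
    return u0 + integral

u = np.full(n, u0)            # start the iteration at the constant function u_0
for _ in range(30):
    u = Phi(u)                # each step halves the distance to the fixed point

# the fixed point solves the ODE; check the residual of u' - cos(u)
du = np.gradient(u, ts)
print(bool(np.max(np.abs(du - np.cos(u))) < 1e-3))
```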

Remark 7 The proof extends without difficulty to infinite dimensional Banach spaces ${V}$. Up to a multiplicative constant, the result is sharp. For instance, consider the linear ODE ${\partial_t u = Ku}$ for some ${K>0}$, with ${\|u_0\|=R}$. Here, the function ${u \mapsto Ku}$ is of course Lipschitz with constant ${K}$ on all of ${V}$, and the solution is of the form ${u(t) = e^{Kt} u_0}$, hence ${u}$ will exit ${\overline{B(0,2R)}}$ in time ${\frac{\log 2}{K}}$, which is only larger than the time ${\frac{1}{2K}}$ given by the above theorem by a multiplicative constant.

We can iterate the Picard existence theorem (and combine it with the uniqueness theorem) to conclude that there is a maximal Cauchy development ${u: (T_-, T_+) \rightarrow V}$ to the ODE (4) with initial data ${u(0)=u_0}$, with the solution diverging to infinity (or “blowing up”) at the endpoint ${T_+}$ if this endpoint is finite, and similarly for ${T_-}$ (thus one has a dichotomy between global existence and finite time blowup). More precisely:

Theorem 8 (Maximal Cauchy development) Let ${V}$ be a finite dimensional normed vector space, let ${u_0 \in V}$, and let ${F: V \rightarrow V}$ be a function which is Lipschitz on bounded sets. Then there exists ${-\infty \leq T_- < 0 < T_+ \leq +\infty}$ and a continuously differentiable solution ${u: (T_-, T_+) \rightarrow V}$ to (4) with ${u(0) = u_0}$, such that ${\lim_{t \rightarrow T_+^-} \|u(t)\| = \infty}$ if ${T_+}$ is finite, and ${\lim_{t \rightarrow T_-^+} \|u(t)\| = \infty}$ if ${T_-}$ is finite. Furthermore, ${T_-, T_+}$, and ${u}$ are unique.

Proof: Uniqueness follows easily from Theorem 3. For existence, let ${I}$ be the union of all the intervals containing ${0}$ for which there is a continuously differentiable solution to (4) with ${u(0)=u_0}$. From Theorem 6, ${I}$ contains a neighbourhood of the origin. From Theorem 3, one can glue all the solutions together to obtain a continuously differentiable solution ${u: I \rightarrow V}$ to (4) with ${u(0)=u_0}$. If ${t_0}$ is contained in ${I}$, then by Theorem 6 (and time translation) one could find a solution ${\tilde u: (t_0-\varepsilon,t_0+\varepsilon) \rightarrow V}$ to (4) in a neighbourhood of ${t_0}$ such that ${u(t_0) = \tilde u(t_0)}$; by Theorem 3 we must then have ${(t_0-\varepsilon,t_0+\varepsilon) \subset I}$, otherwise we could glue ${\tilde u}$ to ${u}$ and obtain a solution on a larger domain than ${I}$, contradicting the definition of ${I}$. Thus ${I}$ is open, and is of the form ${I = (T_-, T_+)}$ for some ${-\infty \leq T_- < 0 < T_+ \leq +\infty}$.

Suppose for contradiction that ${T_+}$ is finite and ${\|u(t)\|}$ does not go to infinity as ${t \rightarrow T_+^-}$. Then there exists a finite ${R}$ and a sequence ${t_n \nearrow T_+}$ such that ${\|u(t_n)\| \leq R}$. Let ${K}$ be the Lipschitz constant of ${F}$ on ${\overline{B(0,2R)}}$. By Theorem 6, for each ${n}$ one can find a solution ${u_n}$ to (4) on ${(t_n-T, t_n+T)}$ with ${u_n(t_n) = u(t_n)}$, where ${T := \frac{1}{2K + \|F(0)\|/R}}$ does not depend on ${n}$. For ${n}$ large enough, this and Theorem 3 allow us to extend the solution ${u}$ outside of ${I}$, contradicting the definition of ${I}$. Thus we have ${\lim_{t \rightarrow T_+^-} \|u(t)\| = \infty}$ when ${T_+}$ is finite, and a similar argument gives ${\lim_{t \rightarrow T_-^+} \|u(t)\| = \infty}$ when ${T_-}$ is finite. $\Box$

Remark 9 Theorem 6 gives a more quantitative description of the blowup: if ${T_+}$ is finite, then for any ${T_- < t < T_+}$, one must have

$\displaystyle T_+ - t > \frac{1}{2K_{2\|u(t)\|} + \|F(0)\|/\|u(t)\|}$

where ${K_{2\|u(t)\|}}$ is the Lipschitz constant of ${F}$ on ${\overline{B(0,2\|u(t)\|)}}$. This can be used to give some explicit lower bound on blowup rates. For instance, if ${F(0)=0}$ and ${F(u)}$ behaves like ${|u|^p}$ for some ${p>1}$ in the sense that the Lipschitz constant of ${F}$ on ${\overline{B(0,R)}}$ is ${O(R^{p-1})}$ for any ${R>0}$, then we obtain a lower bound

$\displaystyle \| u(t) \| \gtrsim \frac{1}{(T_+-t)^{1/(p-1)}} \ \ \ \ \ (13)$

as ${t \nearrow T_+}$, if ${T_+}$ is finite, and similarly when ${T_-}$ is finite. This type of blowup rate is sharp. For instance, consider the scalar ODE

$\displaystyle \partial_t u = |u|^p$

where ${u}$ takes values in ${{\bf R}}$ and ${p>1}$ is fixed. Then for any ${T_+ \in {\bf R}}$, one has explicit solutions on ${(-\infty,T_+)}$ of the form

$\displaystyle u(t) = \frac{c}{(T_+-t)^{1/(p-1)}}$

where ${c := \frac{1}{(p-1)^{1/(p-1)}}}$ is a positive constant depending only on ${p}$. The blowup rate at ${T_+}$ is consistent with (13) and also with (11).
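One can confirm the algebra behind this explicit solution symbolically; the following sketch (using SymPy, with ${p=3/2}$ as an arbitrary illustrative exponent) checks that ${u = c s^{-1/(p-1)}}$ with ${s := T_+ - t}$ solves ${\partial_t u = |u|^p}$.

```python
import sympy as sp

# Writing s = T_+ - t > 0, check symbolically that u = c * s^{-1/(p-1)}
# with c = (p-1)^{-1/(p-1)} solves u' = |u|^p (here u > 0, so |u|^p = u^p).
s = sp.symbols('s', positive=True)      # s stands for T_+ - t
p = sp.Rational(3, 2)                   # any fixed p > 1 would do
c = (p - 1) ** (sp.Rational(-1) / (p - 1))
u = c * s ** (sp.Rational(-1) / (p - 1))

# since d/dt = -d/ds, the ODE u' = u^p reads -du/ds = u^p
residual = sp.simplify(-sp.diff(u, s) - u ** p)
print(residual == 0)
```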

Exercise 10 (Higher regularity) Let the notation and hypotheses be as in Theorem 8. Suppose that ${F: V \rightarrow V}$ is ${k}$ times continuously differentiable for some natural number ${k}$. Show that the maximal Cauchy development ${u}$ is ${k+1}$ times continuously differentiable. In particular, if ${F}$ is smooth, then so is ${u}$.

Exercise 11 (Lipschitz continuous dependence on data) Let ${V}$ be a finite-dimensional normed vector space.

• (a) Let ${R>0}$, let ${F: V \rightarrow V}$ be a function which has a Lipschitz constant of ${K}$ on the ball ${\overline{B(0,2R)}}$, and let ${T}$ be the quantity (11). If ${u_0, v_0 \in \overline{B(0,R)}}$, and ${u,v: [-T,T] \rightarrow V}$ are the solutions to (4) with ${u(0)=u_0, v(0)=v_0}$ given by Theorem 6, show that

$\displaystyle \sup_{t \in [-T,T]} \|u(t)-v(t)\| \leq 2 \|u_0-v_0\|.$

• (b) Let ${F: V \rightarrow V}$ be a function which is Lipschitz on bounded sets, let ${u_0 \in V}$, and let ${u: (T_-, T_+) \rightarrow V}$ be the maximal Cauchy development of (4) with initial data ${u(0)=u_0}$ given by Theorem 8. Show that for any compact interval ${I \subset (T_-,T_+)}$ containing ${0}$, there exists an open neighbourhood ${U}$ of ${u_0}$, such that for any ${v_0 \in U}$, there exists a solution ${v: I \rightarrow V}$ of (4) with initial data ${v(0)=v_0}$. Furthermore, the map from ${v_0}$ to ${v}$ is a Lipschitz continuous map from ${U}$ to ${C(I \rightarrow V)}$.

Exercise 12 (Non-autonomous Picard theorem) Let ${V}$ be a finite-dimensional normed vector space, and let ${F: {\bf R} \times V \rightarrow V}$ be a function which is Lipschitz on bounded sets. Let ${u_0 \in V}$. Show that there exist ${-\infty \leq T_- < 0 < T_+ \leq +\infty}$ and a continuously differentiable function ${u: (T_-,T_+) \rightarrow V}$ solving the non-autonomous ODE

$\displaystyle \partial_t u(t) = F(t,u(t))$

for ${t \in (T_-,T_+)}$ with initial data ${u(0) = u_0}$; furthermore one has ${\lim_{t \rightarrow T_+^-} \|u(t)\| = \infty}$ if ${T_+}$ is finite, and ${\lim_{t \rightarrow T_-^+} \|u(t)\| = \infty}$ if ${T_-}$ is finite. Finally, show that ${T_-, T_+, u}$ are unique. (Hint: this could be done by repeating all of the previous arguments, but there is also a way to deduce this non-autonomous version of the Picard theorem directly from the Picard theorem by adding one extra dimension to the space ${V}$.)

The above theory is symmetric with respect to the time reversal of replacing ${t \mapsto u(t)}$ with ${t \mapsto u(-t)}$ and ${F}$ with ${-F}$. However, one can break this symmetry by introducing a dissipative linear term, in which case one only obtains the forward-in-time portion of the Picard existence theorem:

Exercise 13 Let ${V}$ be a finite dimensional normed vector space, let ${R>0}$, and let ${u_0 \in V}$ lie in the closed ball ${\overline{B(0,R)} := \{ u \in V: \|u\| \leq R \}}$. Let ${F: V \rightarrow V}$ be a function which has a Lipschitz constant of ${K}$ on the ball ${\overline{B(0,2R)}}$. Let ${T}$ be the quantity in (11). Let ${L: V \rightarrow V}$ be a linear operator obeying the dissipative estimate

$\displaystyle \| e^{tL} u \| \leq \| u \|$

for all ${u \in V}$ and ${t \geq 0}$. Show that there exists a continuously differentiable solution ${u: [0,T] \rightarrow V}$ to the ODE

$\displaystyle \partial_t u = Lu + F(u) \ \ \ \ \ (14)$

with initial data ${u(0) = u_0}$ such that ${u(t) \in \overline{B(0,2R)}}$ for all ${t \in [0,T]}$.

Remark 14 With the hypotheses of the above exercise, one can also solve the ODE backwards in time by an amount ${\frac{1}{2K + 2 \|L\|_{op} + \|F(0)\|/R}}$, where ${\|L\|_{op}}$ denotes the operator norm of ${L}$. However, in the limit as the operator norm of ${L}$ goes to infinity, the amount to which one can evolve backwards in time goes to zero, whereas the time in which one can evolve forwards in time remains bounded away from zero, thus breaking the time symmetry.

— 2. Leray systems —

Now we discuss the Leray system of equations

$\displaystyle v = F - \nabla p; \quad \nabla \cdot v = 0 \ \ \ \ \ (15)$

where ${F: {\bf R}^d \rightarrow {\bf R}^d}$ is given, and the vector field ${v: {\bf R}^d \rightarrow {\bf R}^d}$ and the scalar field ${p: {\bf R}^d \rightarrow {\bf R}}$ are unknown. In other words, we wish to decompose a specified function ${F}$ as the sum of a gradient ${\nabla p}$ and a divergence-free vector field ${v}$. We will use the usual Lebesgue spaces ${L^q(X \rightarrow {\bf R}^m)}$ of measurable functions ${f: X \rightarrow {\bf R}^m}$ (up to almost everywhere equivalence) defined on some measure space ${(X,\mu)}$ (which in our case will always be either ${{\bf R}^d}$ or ${{\bf R}^d/{\bf Z}^d}$ with Lebesgue measure) such that the ${L^q}$ norm ${(\int_X |f|^q\ d\mu)^{1/q}}$ is finite. (For ${q=\infty}$, the ${L^\infty}$ norm is defined instead to be the essential supremum of ${|f|}$.)

Proceeding purely formally, we could solve this system by taking the divergence of the first equation to conclude that

$\displaystyle 0 = \nabla \cdot F - \Delta p$

where ${\Delta p = \nabla \cdot (\nabla p)}$ is the Laplacian of ${p}$, and then we could formally solve for ${p}$ as

$\displaystyle p = \Delta^{-1} (\nabla \cdot F) \ \ \ \ \ (16)$

and then solve for ${v}$ as

$\displaystyle v = F - \nabla \Delta^{-1} (\nabla \cdot F). \ \ \ \ \ (17)$

However, if one wishes to justify this rigorously one runs into the issue that the Laplacian ${\Delta}$ is not quite invertible. To sort this out and make this problem well-defined, we need to specify the regularity and decay one wishes to impose on the data ${F}$ and on the solution ${v, p}$. To begin with, let us suppose that ${v,F,p}$ are all smooth.

We first understand the uniqueness theory for this problem. By linearity, this amounts to solving the homogeneous equation when ${F=0}$, thus we wish to classify the smooth fields ${v: {\bf R}^d \rightarrow {\bf R}^d}$ and ${p: {\bf R}^d \rightarrow {\bf R}}$ solving the system

$\displaystyle v = -\nabla p; \quad \nabla \cdot v = 0.$

Of course, we can eliminate ${v}$ and write this as a single equation

$\displaystyle \Delta p = 0$

That is to say, the solutions to this equation arise by selecting ${p}$ to be a (smooth) harmonic function, and ${v}$ to be the negative gradient of ${p}$. This is consistent with our preceding discussion that identified the potential lack of invertibility of ${\Delta}$ as a key issue.

By linearity, this implies that (smooth) solutions ${(v,p)}$ to the system (15) are only unique up to the addition of an arbitrary harmonic function to ${p}$, and the subtraction of the gradient of that harmonic function from ${v}$.

We can largely eliminate this lack of uniqueness by imposing further requirements on ${F,p,v}$. For instance, suppose in addition that we require ${F,p,v}$ to all be ${{\bf Z}^d}$-periodic (or periodic for short), thus

$\displaystyle F(x+n) = F(x), p(x+n) = p(x), v(x+n) = v(x)$

for ${x \in {\bf R}^d}$ and ${n \in {\bf Z}^d}$. Then the only freedom we have is to modify ${p}$ by an arbitrary periodic harmonic function (and to subtract the gradient of that function from ${v}$). However, by Liouville’s theorem, the only periodic harmonic functions are the constants, whose gradient vanishes. Thus the only freedom in this setting is to add a constant to ${p}$. This freedom will be almost irrelevant when we consider the Euler and Navier-Stokes equations, since it is only the gradient of the pressure which appears in those equations, rather than the pressure itself. Nevertheless, if one wishes, one could remove this freedom by requiring that ${p}$ be of mean zero: ${\int_{{\bf R}^d/{\bf Z}^d} p(x)\ dx = 0}$.

Now suppose instead that we only require that ${F}$ and ${v}$ be ${{\bf Z}^d}$-periodic, but do not require ${p}$ to be ${{\bf Z}^d}$-periodic. Then we have the freedom to modify ${p}$ by a harmonic function ${u}$ which need not be ${{\bf Z}^d}$-periodic, but whose gradient ${\nabla u}$ is ${{\bf Z}^d}$-periodic. Since the gradient of a harmonic function is also harmonic, ${\nabla u}$ has to be constant, and so ${u}$ is an affine-linear function. Conversely, all affine-linear functions are harmonic, and their gradients are constant and thus also ${{\bf Z}^d}$-periodic. Thus, one has the freedom in this setting to add an arbitrary affine-linear function to ${p}$, and subtract the constant gradient of that function from ${v}$.

Instead of periodicity, one can also impose decay conditions on the various functions. Suppose for instance that we require the pressure to lie in an ${L^q({\bf R}^d \rightarrow {\bf R})}$ space for some ${1 \leq q < \infty}$; roughly speaking, this forces the pressure to decay to zero at infinity “on the average”. Then we only have the freedom to modify ${p}$ by a harmonic function ${u}$ that is also in the ${L^q}$ class (and modify ${v}$ by the negative gradient of this harmonic function). However, the mean value property of harmonic functions implies that

$\displaystyle u(x) = \frac{1}{|B(x,R)|} \int_{B(x,R)} u(y)\ dy$

for any ball ${B(x,R)}$ of radius ${R>0}$ centred at ${x}$, where ${|B(x,R)|}$ denotes the measure of the ball. By Hölder’s inequality, we conclude that

$\displaystyle |u(x)| \leq \frac{1}{|B(x,R)|} |B(x,R)|^{1-1/q} \|u\|_{L^q({\bf R}^d)}.$

Sending ${R \rightarrow \infty}$ we conclude that ${u}$ vanishes identically; thus there are no non-trivial harmonic functions in ${L^q({\bf R}^d \rightarrow {\bf R})}$. Thus there is uniqueness for the problem (15) if we require the pressure ${p}$ to lie in ${L^q({\bf R}^d \rightarrow {\bf R})}$. If instead we require the vector field ${v}$ to be in ${L^q({\bf R}^d \rightarrow {\bf R}^d)}$, then we can modify ${p}$ by a harmonic function ${u}$ with ${\nabla u}$ in ${L^q({\bf R}^d \rightarrow {\bf R}^{d^2})}$, thus ${\nabla u}$ vanishes identically and hence ${u}$ is constant. So if we require ${v \in L^q({\bf R}^d \rightarrow {\bf R}^d)}$ then we only have the freedom to adjust ${p}$ by arbitrary constants.

Having discussed uniqueness, we now turn to existence. We begin with the periodic setting in which ${F,v,p}$ are required to be ${{\bf Z}^d}$-periodic and smooth, so that they can also be viewed (by slight abuse of notation) as functions on the torus ${{\bf R}^d/{\bf Z}^d}$. The system (15) is linear and translation-invariant, which strongly suggests that one solve the system using the Fourier transform (which tends to diagonalise linear translation-invariant equations, because the plane waves ${x \mapsto e^{2\pi i k \cdot x}}$ that underlie the Fourier transform are the eigenfunctions of translation.) Indeed, we may expand ${F,v,p}$ as Fourier series

$\displaystyle F(x) = \sum_{k \in {\bf Z}^d} \hat F(k) e^{2\pi i k \cdot x}$

$\displaystyle v(x) = \sum_{k \in {\bf Z}^d} \hat v(k) e^{2\pi i k \cdot x}$

$\displaystyle p(x) = \sum_{k \in {\bf Z}^d} \hat p(k) e^{2\pi i k \cdot x}$

where the Fourier coefficients ${\hat F(k) \in {\bf C}^d}$, ${\hat v(k) \in{\bf C}^d}$, ${\hat p(k) \in {\bf C}}$ are given by the formulae

$\displaystyle \hat F(k) = \int_{{\bf R}^d/{\bf Z}^d} F(x) e^{-2\pi i k \cdot x}\ dx$

$\displaystyle \hat v(k) = \int_{{\bf R}^d/{\bf Z}^d} v(x) e^{-2\pi i k \cdot x}\ dx$

$\displaystyle \hat p(k) = \int_{{\bf R}^d/{\bf Z}^d} p(x) e^{-2\pi i k \cdot x}\ dx.$

When ${F,v,p}$ are smooth, then ${\hat F(k), \hat v(k), \hat p(k)}$ are rapidly decreasing as ${k \rightarrow \infty}$, which will allow us to justify manipulations such as interchanging summation and derivatives without difficulty. Expanding out (15) in Fourier series and then comparing Fourier coefficients (which are unique for smooth functions), we obtain the system

$\displaystyle \hat v(k) = \hat F(k) - 2\pi i k \hat p(k) \ \ \ \ \ (18)$

$\displaystyle 2\pi i k \cdot \hat v(k) = 0 \ \ \ \ \ (19)$

for each ${k \in {\bf Z}^d}$. As mentioned above, the Fourier transform has diagonalised the system (15), in that there are no interactions between different frequencies ${k \in {\bf Z}^d}$, and we now have a decoupled system of vector equations. To solve these equations, we can take the inner product of both sides of (18) with ${k}$ and apply (19) to conclude that

$\displaystyle 0 = k \cdot \hat F(k) - 2\pi i |k|^2 \hat p(k).$

For non-zero ${k}$, we can then solve for ${\hat p(k)}$ and hence ${\hat v(k)}$ by the formulae

$\displaystyle \hat p(k) = \frac{k}{2\pi i |k|^2} \cdot \hat F(k)$

and

$\displaystyle \hat v(k) = \hat F(k) - k (\frac{k}{|k|^2} \cdot \hat F(k)).$

For ${k = 0}$, these formulae no longer apply; however from (18) we see that ${\hat v(0) = \hat F(0)}$, while ${\hat p(0)}$ can be arbitrary (which corresponds to the aforementioned freedom to add an arbitrary constant to ${p}$). Thus we have the explicit general solution

$\displaystyle p(x) = C + \sum_{k \in {\bf Z}^d \backslash \{0\}} \frac{k}{2\pi i |k|^2} \cdot \hat F(k) e^{2\pi i k \cdot x}$

$\displaystyle v(x) = \hat F(0) + \sum_{k \in {\bf Z}^d \backslash \{0\}} (\hat F(k) - k (\frac{k}{|k|^2} \cdot \hat F(k))) e^{2\pi i k \cdot x},$

where ${C}$ is an arbitrary constant. Note that if ${F}$ is smooth, then ${\hat F(k)}$ is rapidly decreasing and the functions ${p,v}$ defined by the above formulae are also smooth.

We can write the above general solution in a form similar to (16), (17) as

$\displaystyle p = C + \Delta^{-1} (\nabla \cdot F)$

$\displaystyle v = F - \nabla \Delta^{-1} (\nabla \cdot F)$

where, by definition, the inverse Laplacian ${\Delta^{-1}}$ of a smooth periodic function ${f: {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}}$ of mean zero is given by the Fourier series formula

$\displaystyle \Delta^{-1} f(x) := \sum_{k \in {\bf Z}^d \backslash \{0\}} \frac{1}{-4\pi^2 |k|^2} \hat f(k) e^{2\pi i k \cdot x}.$

(Note that ${\nabla \cdot F}$ automatically has mean zero.) It is easy to see that ${\Delta \Delta^{-1} f = f}$ for such functions ${f}$, thus justifying the choice of notation. We refer to ${F - \nabla \Delta^{-1} (\nabla \cdot F)}$ as the (periodic) Leray projection of ${F}$ and denote it ${\mathbb{P} F}$; thus in the above solution we have ${v = \mathbb{P}(F)}$. By construction, ${\mathbb{P}(F)}$ is divergence-free, and ${\mathbb{P}(F)}$ vanishes whenever ${F}$ is a gradient ${F = \nabla p}$.
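As a quick sanity check (not needed for anything in these notes), the periodic Leray projection is easy to implement numerically with the fast Fourier transform. The sketch below, in two dimensions with an arbitrary grid size and test fields, verifies the two properties just mentioned: ${\mathbb{P}(F)}$ is divergence-free, and ${\mathbb{P}}$ annihilates gradients.

```python
import numpy as np

def leray(F1, F2):
    """Periodic Leray projection P F = F - grad Delta^{-1}(div F), via FFT.

    Implements vhat(k) = Fhat(k) - k (k . Fhat(k)) / |k|^2 mode by mode,
    with the k = 0 mode left untouched (vhat(0) = Fhat(0))."""
    n = F1.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer frequencies
    K1, K2 = np.meshgrid(k, k, indexing="ij")
    ksq = K1**2 + K2**2
    ksq[0, 0] = 1.0                            # avoid 0/0; the numerator vanishes there
    F1h, F2h = np.fft.fft2(F1), np.fft.fft2(F2)
    dot = K1 * F1h + K2 * F2h                  # k . Fhat(k)
    V1h = F1h - K1 * dot / ksq
    V2h = F2h - K2 * dot / ksq
    return np.fft.ifft2(V1h).real, np.fft.ifft2(V2h).real

n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(n, d=1.0 / n)
K1, K2 = np.meshgrid(k, k, indexing="ij")

# an arbitrary smooth Z^2-periodic test field
F1 = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
F2 = np.cos(2 * np.pi * (X + Y))
v1, v2 = leray(F1, F2)

# P(F) is divergence-free: k . vhat(k) = 0 for every frequency k
div_max = np.abs(K1 * np.fft.fft2(v1) + K2 * np.fft.fft2(v2)).max()

# P annihilates gradients: P(grad p) = 0 for smooth periodic p
ph = np.fft.fft2(np.sin(2 * np.pi * X) * np.sin(4 * np.pi * Y))
g1 = np.fft.ifft2(2j * np.pi * K1 * ph).real   # dp/dx via Fourier multiplier
g2 = np.fft.ifft2(2j * np.pi * K2 * ph).real
w1, w2 = leray(g1, g2)
grad_residual = np.abs(w1).max() + np.abs(w2).max()
```

Both `div_max` and `grad_residual` come out at the level of floating-point roundoff, as one would expect from the mode-by-mode cancellation in the Fourier formulae above.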

If we require ${F,v}$ to be ${{\bf Z}^d}$-periodic, but do not require ${p}$ to be ${{\bf Z}^d}$-periodic, then by the previous uniqueness discussion, the general solution is now

$\displaystyle p(x) = C + \Delta^{-1} (\nabla \cdot F)(x) + w \cdot x$

$\displaystyle v(x) = F(x) - \nabla \Delta^{-1} (\nabla \cdot F)(x) - w = \mathbb{P}(F) - w$

where ${C \in {\bf R}}$ and ${w \in {\bf R}^d}$ are arbitrary.

The above discussion was for smooth periodic functions ${F,p,v}$, but one can make the same construction in other function spaces. For instance, recall that for any ${s \geq 0}$, the Sobolev space ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$ consists of those elements ${f}$ of ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$ whose Sobolev norm

$\displaystyle \| f \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)} := (\sum_{k \in {\bf Z}^d} \langle k \rangle^{2s} |\hat f(k)|^2)^{1/2}$

is finite, where we use the “Japanese bracket” convention ${\langle k\rangle := (1 + |k|^2)^{1/2}}$. (One can also define Sobolev spaces for negative ${s}$, but we will not need them here.) Basic properties of these Sobolev spaces can be found in this previous post. From comparing Fourier coefficients we see that the operators ${\Delta^{-1}(\nabla \cdot)}$ and ${\nabla \Delta^{-1} (\nabla \cdot)}$ defined for smooth periodic functions can be extended without difficulty to ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$ (taking values in ${H^{s+1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$ and ${H^s({\bf R}^d/{\bf Z}^d \rightarrow{\bf R}^d)}$ respectively), with bounds of the form

$\displaystyle \| \nabla \Delta^{-1} (\nabla \cdot F)\|_{H^{s}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} \lesssim \| \Delta^{-1} (\nabla \cdot F)\|_{H^{s+1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$

$\displaystyle \lesssim \|F\|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}.$

Thus, if ${F \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$, then one can solve (15) (in the sense of distributions, at least) with some ${v \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$ and ${p \in H^{s+1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$, with bounds

$\displaystyle \| v \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}, \| p \|_{H^{s+1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \lesssim \|F\|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}.$

In particular, the Leray projection ${\mathbb{P}}$ is bounded on ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$. (In fact it is a non-expansive map; see Exercise 16.)

One can argue similarly in the non-periodic setting, as long as one avoids the one-dimensional case ${d=1}$ which contains some technical divergences. Recall (see e.g., these previous lecture notes on this blog) that functions ${f \in L^2({\bf R}^d \rightarrow {\bf R}^m)}$ have a Fourier transform ${\hat f \in L^2({\bf R}^d \rightarrow {\bf R}^m)}$, which for ${f}$ in the dense subclass ${L^1({\bf R}^d \rightarrow {\bf R}^m) \cap L^2({\bf R}^d \rightarrow {\bf R}^m)}$ of ${L^2({\bf R}^d \rightarrow {\bf R}^m)}$ is defined by the formula

$\displaystyle \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx$

and then is extended to the rest of ${L^2({\bf R}^d \rightarrow {\bf R}^m)}$ by continuous extension in the ${L^2}$ topology, taking advantage of the Plancherel identity

$\displaystyle \|\hat f\|_{L^2({\bf R}^d \rightarrow {\bf R}^m)} = \| f \|_{L^2({\bf R}^d \rightarrow {\bf R}^m)}. \ \ \ \ \ (20)$

The Fourier transform is then extended to tempered distributions in the usual fashion (see this previous set of notes).

We then define the Sobolev space ${H^s({\bf R}^d \rightarrow {\bf R}^m)}$ for ${s \geq 0}$ to be the collection of those functions ${f \in L^2({\bf R}^d \rightarrow {\bf R}^m)}$ for which the norm

$\displaystyle \| f \|_{H^s({\bf R}^d \rightarrow {\bf R}^m)} := (\int_{{\bf R}^d} \langle \xi \rangle^{2s} |\hat f(\xi)|^2\ d\xi)^{1/2}$

is finite; equivalently, one has

$\displaystyle \| f \|_{H^s({\bf R}^d \rightarrow {\bf R}^m)} = \| \langle \nabla \rangle^s f \|_{L^2({\bf R}^d \rightarrow{\bf R}^m)}$

where the Fourier multiplier ${\langle \nabla \rangle^s}$ is defined by

$\displaystyle \widehat{\langle \nabla \rangle^s f}(\xi) = \langle \xi \rangle^s \hat f(\xi).$

For any vector-valued function ${F: {\bf R}^d \rightarrow {\bf R}^d}$ in the Schwartz class, we define ${\Delta^{-1} (\nabla \cdot F)}$ to be the scalar tempered distribution whose (distributional) Fourier transform is given by the formula

$\displaystyle \widehat{\Delta^{-1} (\nabla \cdot F)}(\xi) = -\frac{2\pi i \xi \cdot \hat F(\xi)}{4\pi^2 |\xi|^2} \ \ \ \ \ (21)$

and define the Leray projection ${\mathbb{P} F}$ to be the vector-valued distribution

$\displaystyle \mathbb{P} F = F - \nabla \Delta^{-1} (\nabla \cdot F) \ \ \ \ \ (22)$

or in terms of the (distributional) Fourier transform

$\displaystyle \widehat{\mathbb{P} F}(\xi) = \hat F(\xi) - \frac{\xi}{|\xi|^2} \xi \cdot \hat F(\xi).$

Then by using the well-known relationship

$\displaystyle \widehat{\partial_j F}(\xi) = 2\pi i \xi_j \hat F(\xi)$

between (distributional) derivatives and (distributional) Fourier transforms we see that the tempered distributions

$\displaystyle p = \Delta^{-1} (\nabla \cdot F), v = F - \nabla \Delta^{-1} (\nabla \cdot F)$

solve the equation (15) in the distributional sense, and hence also in the classical sense since ${p,F}$ have rapidly decreasing Fourier transforms and are thus smooth.

As in the periodic case we see that we have the bound

$\displaystyle \| \mathbb{P}(F) \|_{H^s({\bf R}^d \rightarrow {\bf R}^d)} \lesssim \| F \|_{H^s({\bf R}^d \rightarrow {\bf R}^d)}$

for all Schwartz vector fields ${F}$ (in fact ${\mathbb{P}}$ is again a non-expansive map), so we can extend the Leray projection without difficulty to ${H^s({\bf R}^d \rightarrow {\bf R}^d)}$ functions. The operator ${\Delta^{-1} (\nabla \cdot)}$ can similarly be extended continuously to a map from ${H^s({\bf R}^d \rightarrow {\bf R}^d)}$ to the space ${\{ f: \nabla f \in H^s({\bf R}^d \rightarrow {\bf R}^d)\}}$ of scalar tempered distributions with gradient in ${H^s({\bf R}^d \rightarrow {\bf R}^d)}$, although we will not need to work directly with the pressure much in this course. This allows us to solve (15) in a distributional sense for all ${F \in H^s({\bf R}^d \rightarrow {\bf R}^d)}$.
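The non-expansiveness can be seen one frequency at a time: by the Fourier formula above, ${\mathbb{P}}$ acts at each ${\xi \neq 0}$ by the symmetric matrix ${I - \xi \xi^T / |\xi|^2}$, which is the orthogonal projection onto the hyperplane ${\xi^\perp}$ and so has operator norm ${1}$. A quick numerical confirmation of this piece of linear algebra (a sketch in ${d=3}$ with a randomly chosen frequency; not needed for the rigorous argument):

```python
import numpy as np

# The symbol of the Leray projection at a frequency xi != 0 is the matrix
# P(xi) = I - xi xi^T / |xi|^2, the orthogonal projection onto xi^perp.
rng = np.random.default_rng(0)
xi = rng.standard_normal(3)
P = np.eye(3) - np.outer(xi, xi) / np.dot(xi, xi)

assert np.allclose(P @ P, P)                    # idempotent
assert np.allclose(P, P.T)                      # symmetric
assert np.allclose(P @ xi, 0.0)                 # annihilates the xi direction
assert np.allclose(sorted(np.linalg.eigvalsh(P)), [0.0, 1.0, 1.0])
# operator norm 1, consistent with P being non-expansive on each H^s
assert np.isclose(np.linalg.norm(P, ord=2), 1.0)
```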

Remark 15 (Remark removed due to inaccuracy.)

Exercise 16 (Hodge decomposition) Define the following three subspaces of the Hilbert space ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$:

• ${d{\mathcal E}(\Omega^0)}$ is the space of all elements of ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$ of the form ${u = \nabla f}$ (in the sense of distributions) for some ${f \in H^1({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$;
• ${{\mathcal H}^1}$ is the space of all elements of ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$ that are weakly harmonic in the sense that ${\Delta u = 0}$ (in the sense of distributions).
• ${d^*{\mathcal E}(\Omega^2)}$ is the space of all elements ${u = (u_1,\dots,u_d)}$ of ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$ which take the form

$\displaystyle u_i = \partial_j \omega_{ij}$

(with the usual summation conventions) for some tensor ${(\omega_{ij})_{1 \leq i,j \leq d} \in H^1({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{d^2})}$ obeying the antisymmetry property ${\omega_{ji} = -\omega_{ij}}$.

• (a) Show that these three spaces are closed subspaces of ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$, and one has the orthogonal decomposition

$\displaystyle L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d) = d{\mathcal E}(\Omega^0) \oplus {\mathcal H}^1 \oplus d^*{\mathcal E}(\Omega^2).$

This is a simple case of a more general splitting known as the Hodge decomposition, which is available for more general differential forms on manifolds.

• (b) Show that on ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$, the Leray projection ${\mathbb{P}}$ is the orthogonal projection to ${{\mathcal H}^1 \oplus d^*{\mathcal E}(\Omega^2)}$.
• (c) Show that the Leray projection is a non-expansive map on ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$ for all ${s \geq 0}$ (that is to say, its operator norm is at most ${1}$).
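As a numerical illustration of this splitting (which is of course not a substitute for doing the exercise), one can decompose a vector field on the torus in Fourier space: for each ${k \neq 0}$, the coefficient ${\hat u(k)}$ splits into its component along ${k}$ (the ${d{\mathcal E}(\Omega^0)}$ part) and its component orthogonal to ${k}$ (the ${d^*{\mathcal E}(\Omega^2)}$ part), while the ${k=0}$ mode gives the constant (harmonic) part; the pieces are then ${L^2}$-orthogonal by Parseval. A sketch in ${d=2}$ with an arbitrary random field:

```python
import numpy as np

n = 32
rng = np.random.default_rng(1)
u = rng.standard_normal((2, n, n))       # a random vector field on the torus grid
uh = np.fft.fft2(u)                      # componentwise FFT

k = np.fft.fftfreq(n, d=1.0 / n)
K = np.array(np.meshgrid(k, k, indexing="ij"))
ksq = (K**2).sum(axis=0)
ksq[0, 0] = 1.0                          # avoid 0/0; the k = 0 mode is handled below

grad_part = K * (K * uh).sum(axis=0) / ksq   # component of uhat(k) along k
grad_part[:, 0, 0] = 0.0
harm_part = np.zeros_like(uh)
harm_part[:, 0, 0] = uh[:, 0, 0]             # the k = 0 (constant) mode
dstar_part = uh - grad_part - harm_part      # component orthogonal to k

ip = lambda a, b: (a * b.conj()).sum().real  # L^2 pairing, via Parseval
total = ip(uh, uh)
assert abs(ip(grad_part, harm_part)) < 1e-9 * total
assert abs(ip(grad_part, dstar_part)) < 1e-9 * total
assert abs(ip(harm_part, dstar_part)) < 1e-9 * total
# and the three pieces recover u
assert np.allclose(grad_part + harm_part + dstar_part, uh)
```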

Exercise 17 (Helmholtz decomposition) Define the following two subspaces of the Hilbert space ${L^2({\bf R}^d \rightarrow {\bf R}^d)}$:

• ${H_{\mathrm{df}}}$ is the space of functions ${u \in L^2({\bf R}^d \rightarrow {\bf R}^d)}$ which are divergence-free, by which we mean that ${\nabla \cdot u = 0}$ in the sense of distributions.
• ${H_{\mathrm{cf}}}$ is the space of functions ${u \in L^2({\bf R}^d \rightarrow {\bf R}^d)}$ which are curl-free, by which we mean that ${\nabla \wedge u = 0}$ in the sense of distributions, where ${\nabla \wedge u}$ is the rank two tensor with components ${(\nabla \wedge u)_{ij} := \partial_i u_j - \partial_j u_i}$.
• (a) Show that these two spaces are closed subspaces of ${L^2({\bf R}^d \rightarrow {\bf R}^d)}$, and one has the orthogonal decomposition

$\displaystyle L^2({\bf R}^d \rightarrow {\bf R}^d) = H_{\mathrm{df}} \oplus H_{\mathrm{cf}}.$

This is known as the Helmholtz decomposition (particularly in the three-dimensional case ${d=3}$, in which one can interpret ${\nabla \wedge u}$ as the curl of ${u}$).

• (b) Show that on ${L^2({\bf R}^d \rightarrow {\bf R}^d)}$, the Leray projection ${\mathbb{P}}$ is the orthogonal projection to ${H_{\mathrm{df}}}$.
• (c) Show that the Leray projection is a non-expansive map on ${H^s({\bf R}^d \rightarrow {\bf R}^d)}$ for all ${s \geq 0}$.

Exercise 18 (Singular integral form of Leray projection) Let ${d \geq 3}$. Then the function ${x \mapsto \frac{1}{|x|^{d-2}}}$ is locally integrable and thus well-defined as a distribution.

• (a) For ${i,j=1,\dots,d}$, show that the distribution ${\partial_i \partial_j \frac{1}{|x|^{d-2}}}$, defined on test functions ${\phi: {\bf R}^d \rightarrow {\bf R}}$ by the formula

$\displaystyle \langle \partial_i \partial_j \frac{1}{|x|^{d-2}}, \phi \rangle = \int_{{\bf R}^d} \frac{1}{|x|^{d-2}} \partial_j \partial_i \phi(x)\ dx,$

can be expressed in principal value form as

$\displaystyle \langle \partial_i \partial_j \frac{1}{|x|^{d-2}}, \phi \rangle = (d-2) \lim_{\varepsilon \rightarrow 0} \int_{|x| > \varepsilon} (\frac{d x_i x_j}{|x|^{d+2}} - \frac{\delta_{ij}}{|x|^d}) \phi(x)\ dx - \frac{d-2}{d} |S^{d-1}|\delta_{ij} \phi(0),$

where ${|S^{d-1}|}$ denotes the surface area of the unit sphere ${S^{d-1}}$ in ${{\bf R}^d}$ and ${\delta_{ij}}$ is the Kronecker delta.

• (b) Conclude in particular the Newtonian potential identity

$\displaystyle \Delta \frac{1}{|x|^{d-2}} = - (d-2) |S^{d-1}| \delta_0$

where (at the risk of a mild notational clash) ${\delta_0}$ is the Dirac delta distribution at ${0}$.

• (c) For a test vector field ${F: {\bf R}^d \rightarrow {\bf R}^d}$, establish the explicit form

$\displaystyle (\mathbb{P} F(x))_i = \frac{d-1}{d} F_i(x) + \lim_{\varepsilon \rightarrow 0} \frac{1}{|S^{d-1}|} \int_{|y| \geq \varepsilon} (\frac{dy_i y_j F_j(x-y)}{|y|^{d+2}} - \frac{F_i(x-y)}{|y|^d})\ dy.$

• (d) Extend part (c) to the case ${d=2}$. (Hint: Replace the role of ${\frac{1}{d-2} \frac{1}{|x|^{d-2}}}$ with ${\log |x|}$, in the spirit of the replica trick from physics.)

Remark 19 One can also solve (15) in ${L^q}$-based Sobolev spaces for exponents ${1 < q < \infty}$ other than ${q=2}$ by using Calderón-Zygmund theory and the singular integral form of the Leray projection given in Exercise 18. However, we will try to avoid having to rely on this theory in these notes.

— 3. The heat equation —

We now turn to the study of the heat equation

$\displaystyle \partial_t u = \nu \Delta u \ \ \ \ \ (23)$

on a spacetime region ${[0,T] \times {\bf R}^d}$, with initial data ${u(0) = u_0}$, where ${\nu>0}$ is a fixed constant; we also consider the inhomogeneous analog

$\displaystyle \partial_t u = \nu \Delta u + F \ \ \ \ \ (24)$

with some forcing term ${F: [0,T] \times {\bf R}^d \rightarrow {\bf R}}$.

Formally, the solution to the initial value problem for (23) should be given by ${u(t) = e^{\nu t \Delta} u_0}$, and (by the Duhamel formula (6)) the solution to (24) should similarly be

$\displaystyle u(t) = e^{\nu t \Delta} u_0 + \int_0^t e^{\nu (t-s) \Delta} F(s)\ ds;$

but there are subtleties arising from the unbounded nature of ${\Delta}$.

The first issue is that even if ${u_0}$ vanishes and ${u}$ is required to be smooth without any decay hypothesis at infinity, one can have non-uniqueness. The following counterexample is basically due to Tychonoff:

Exercise 20 (Tychonoff example) Let ${1 < \theta < 2}$ be a real number, and let ${\nu>0}$.

• (a) Show that there exists a smooth, compactly supported function ${\phi: {\bf R} \rightarrow {\bf R}}$, not identically zero, obeying the derivative bounds

$\displaystyle |\phi^{(k)}(t)| \leq (Ck)^{\theta k}$

for all ${k \geq 0}$ and ${t \in {\bf R}}$. (Hint: one can construct ${\phi = \psi_1 * \psi_2 * \dots}$ as the convolution of an infinite number of approximate identities ${\psi_n}$, where each ${\psi_n}$ is supported on an interval of length ${n^{-\theta}}$, and use the identity ${\frac{d}{dt}(f*g) = (\frac{d}{dt} f) * g = f * \frac{d}{dt} g}$ repeatedly. To justify things rigorously, one may need to first work with finite convolutions and take limits.)

• (b) With ${\phi}$ as in part (a), show that the function

$\displaystyle u(t,x) := \sum_{k=0}^\infty \frac{x^{2k}}{\nu^k (2k)!} \phi^{(k)}(t)$

is well-defined as a smooth function on ${{\bf R} \times {\bf R}}$ that is compactly supported in time, and obeys the heat equation (23) for ${d=1}$ without being identically zero.

• (c) Show that solutions to the initial value problem for (23) are not unique (in any dimension ${d \geq 1}$) if ${u}$ is only required to be smooth, even when ${u_0}$ vanishes.

Exercise 21 (Kowalevski example)

• (a) Let ${u_0: {\bf R} \rightarrow {\bf R}}$ be the function ${u_0(x) := \frac{1}{1+x^2}}$. Show that there does not exist any solution ${u: {\bf R} \times {\bf R} \rightarrow {\bf R}}$ to (23) that is jointly real analytic in ${t,x}$ at ${0}$ (that is to say, it can be expressed as an absolutely convergent power series in ${t,x}$ in a neighbourhood of ${0}$).
• (b) Modify the above example by replacing ${\frac{1}{1+x^2}}$ by a function that extends to an entire function on ${{\bf C}}$ (as opposed to ${z \mapsto \frac{1}{1+z^2}}$, which has poles at ${\pm i}$).

This classic example, due to Sofia Kowalevski, demonstrates the need for some hypotheses on the PDE in order to invoke the Cauchy-Kowalevski theorem.

One can recover uniqueness (forwards in time) by imposing some growth condition at infinity. We give a simple example of this, which illustrates a basic tool in the subject, namely the energy method, which is based on understanding the rate of change of various “energy” integrals, whose integrands primarily involve quadratic expressions of the solution or its derivatives. The reason for favouring quadratic expressions is that they are more likely to produce integrals with a definite sign (positive definite or negative definite), such as (squares of) ${L^2}$ norms or higher Sobolev norms of the solution, particularly after suitable application of integration by parts.

Proposition 22 (Uniqueness with energy bounds) Let ${T>0}$, and let ${u, v: [0,T] \times {\bf R}^d \rightarrow {\bf R}^m}$ be smooth solutions to (24) with common initial data ${u(0) = v(0) = u_0}$ and forcing term ${F: [0,T] \times {\bf R}^d \rightarrow {\bf R}^m}$ such that the norm

$\displaystyle \| u \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^d \rightarrow {\bf R}^m)} := \sup_{t \in [0,T]} \|u(t)\|_{L^2({\bf R}^d \rightarrow {\bf R}^m)}$

of ${u}$ is finite, and similarly for ${v}$. Then ${u=v}$.

Proof: As the heat equation (23) is linear, we may subtract ${v}$ from ${u}$ and assume without loss of generality that ${v=0}$, ${u_0=0}$, and ${F=0}$. By working with each component separately we may take ${m=1}$.

Let ${\eta: {\bf R}^d \rightarrow {\bf R}}$ be a non-negative test function supported on ${B(0,2)}$ that equals ${1}$ on ${B(0,1)}$. Let ${R>0}$ be a parameter, and consider the “energy” (or more precisely, “local mass”)

$\displaystyle E_R(t) := \int_{{\bf R}^d} u(t,x)^2 \eta(x/R)\ dx$

for ${t \in [0,T]}$. As ${u(0)=u_0=0}$, we have ${E_R(0)=0}$. As ${u}$ is smooth and ${\eta}$ is compactly supported, ${E_R(t)}$ depends smoothly on ${t}$, and we can differentiate under the integral sign to obtain

$\displaystyle \partial_t E_R(t) = 2 \int_{{\bf R}^d} u(t,x) \partial_t u(t,x) \eta(x/R)\ dx.$

Using (23) we thus have

$\displaystyle \partial_t E_R(t) = 2 \nu \int_{{\bf R}^d} u(t,x) \partial_i \partial_i u(t,x) \eta(x/R)\ dx$

using the usual summation conventions.

A basic rule of thumb in the energy method is this: whenever one is faced with an integral in which one term in the integrand has much lower regularity (or much less control on regularity) than any other, due to a large number of derivatives placed on that term, one should integrate by parts to move one or more derivatives off of that term to other terms in order to make the distribution of derivatives more balanced (which, as we shall see, tends to make the integrals easier to estimate, or to ascribe a definite sign to). Accordingly, we integrate by parts to write

$\displaystyle \partial_t E_R(t) = - 2 \nu \int_{{\bf R}^d} \partial_i u(t,x) \partial_i u(t,x) \eta(x/R)\ dx$

$\displaystyle - 2 \nu R^{-1} \int_{{\bf R}^d} u(t,x) \partial_i u(t,x) \partial_i \eta(x/R)\ dx.$

The first term is non-positive, thus we may discard it to obtain the inequality

$\displaystyle \partial_t E_R(t) \leq - 2 \nu R^{-1} \int_{{\bf R}^d} u(t,x) \partial_i u(t,x) \partial_i \eta(x/R)\ dx.$

Another rule of thumb in the energy method is to keep an eye out for opportunities to express some expression appearing in the integrand as a total derivative. In this case, we can write

$\displaystyle 2 u(t,x) \partial_i u(t,x) = \partial_i ( u(t,x)^2 )$

and then integrate by parts to move the derivative onto the much more slowly varying function ${\eta(x/R)}$ to conclude

$\displaystyle \partial_t E_R(t) \leq \nu R^{-2} \int_{{\bf R}^d} u(t,x)^2 \partial_i \partial_i \eta(x/R)\ dx.$

In particular we have a bound of the form

$\displaystyle \partial_t E_R(t) \lesssim_{\nu,\eta} R^{-2} \| u \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^d)}^2$

where the subscript indicates that the implied constant can depend on ${\nu}$ and ${\eta}$. Since ${E_R(0)=0}$, we conclude from the fundamental theorem of calculus that

$\displaystyle E_R(t) \lesssim_{\nu,\eta,T} R^{-2} \| u \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^d)}^2$

for all ${t \in [0,T]}$ (note how it is important here that we evolve forwards in time, rather than backwards). Sending ${R \rightarrow \infty}$ and using the dominated convergence theorem, we conclude that

$\displaystyle \int_{{\bf R}^d} u(t,x)^2\ dx \lesssim_{\nu,\eta,T} 0$

and thus ${u}$ vanishes identically, as required. $\Box$
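The mechanism in this proof — the ${-2\nu \int |\nabla u|^2}$ term has a favourable sign — has a simple discrete counterpart: for the standard explicit finite difference scheme for the heat equation with a stable time step, the discrete ${L^2}$ energy is non-increasing. A numerical sketch (on a periodic grid, with arbitrary parameters; this is an illustration, not part of the proof):

```python
import numpy as np

nu, n = 1.0, 200
dx = 1.0 / n
dt = 0.4 * dx**2 / nu               # stable choice: nu dt / dx^2 <= 1/2
x = np.arange(n) * dx
u = np.exp(-100 * (x - 0.5) ** 2)   # arbitrary initial data

energies = []
for _ in range(500):
    energies.append(np.sum(u**2) * dx)
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic discrete Laplacian
    u = u + nu * dt * lap

# discrete analogue of d/dt int u^2 = -2 nu int |grad u|^2 <= 0
assert all(b <= a + 1e-14 for a, b in zip(energies, energies[1:]))
assert energies[-1] < energies[0]
```

The stability constraint ${\nu \, dt/dx^2 \leq 1/2}$ ensures that each Fourier mode of the discrete update is multiplied by a factor of magnitude at most ${1}$, mirroring the non-expansiveness of the continuous propagator discussed below.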

Now we turn to existence for the heat equation, restricting attention to forward in time solutions. Formally, if one solves the heat equation (23), then on taking spatial Fourier transforms

$\displaystyle \hat u(t,\xi) := \int_{{\bf R}^d} u(t,x) e^{-2\pi i \xi \cdot x}\ dx$

the equation transforms to the ODE

$\displaystyle \partial_t \hat u(t,\xi) = - 4\pi^2 \nu |\xi|^2 \hat u(t,\xi)$

which when combined with the initial condition ${u(0) = u_0}$ gives

$\displaystyle \hat u(t,\xi) = e^{- 4\pi^2 \nu |\xi|^2 t} \hat u_0(\xi)$

and hence by the Fourier inversion formula we arrive (formally, at least) at the representation

$\displaystyle u(t,x) = \int_{{\bf R}^d} e^{- 4\pi^2 \nu |\xi|^2 t} \hat u_0(\xi) e^{2\pi i \xi \cdot x}\ d\xi. \ \ \ \ \ (25)$

As we are assuming forward time evolution ${t \geq 0}$, the exponential factor ${e^{- 4\pi^2 \nu |\xi|^2 t}}$ here is bounded. In the case that ${u_0:{\bf R}^d \rightarrow {\bf R}^m}$ is a Schwartz function, ${\hat u_0}$ is also Schwartz, and this formula is well-defined and gives a function that is smooth in both time and space (and rapidly decreasing in space for any fixed time), and in particular lies in ${L^\infty_t L^2_x([0,+\infty) \times {\bf R}^d)}$; one can easily justify differentiation under the integral sign to conclude that (23) is indeed verified, and the Fourier inversion formula shows that we have the initial data condition ${u(0)=u_0}$. So this is the unique solution to the initial value problem (23) for the heat equation that lies in ${L^\infty_t L^2_x}$. By definition we declare the right-hand side of (25) to be ${e^{\nu t\Delta} u_0}$, thus

$\displaystyle e^{\nu t \Delta} u_0(x) = \int_{{\bf R}^d} e^{- 4\pi^2 \nu |\xi|^2 t} \hat u_0(\xi) e^{2\pi i \xi \cdot x}\ d\xi \ \ \ \ \ (26)$

for all ${t \geq 0}$ and all Schwartz functions ${u_0}$; equivalently, one has

$\displaystyle \widehat{e^{\nu t \Delta} u_0}(\xi) = e^{- 4\pi^2 \nu |\xi|^2 t} \hat u_0(\xi). \ \ \ \ \ (27)$

(One can justify this choice of notation using the functional calculus of the self-adjoint operator ${\Delta}$, as discussed for instance in this previous blog post, but we will not do so here since the Fourier transform is available as a substitute.) It is also clear from (27) that ${e^{\nu t\Delta}}$ commutes with other Fourier multipliers such as ${\langle \nabla \rangle^s}$ or constant-coefficient differential operators, on Schwartz functions at least.

From (27) and Plancherel’s theorem we see that ${e^{\nu t \Delta}}$ for ${t \geq 0}$ is a non-expansive map in (the Schwartz functions of) ${L^2({\bf R}^d \rightarrow {\bf R}^m)}$, and more generally in ${H^s({\bf R}^d \rightarrow {\bf R}^m)}$ for any ${s \geq 0}$, thus

$\displaystyle \| e^{\nu t \Delta} u_0 \|_{H^s({\bf R}^d \rightarrow {\bf R}^m)} \leq \| u_0 \|_{H^s({\bf R}^d \rightarrow {\bf R}^m)}$

for any Schwartz ${u_0: {\bf R}^d \rightarrow {\bf R}^m}$ and any ${s, t \geq 0}$. Thus by density one can extend the heat propagator ${e^{\nu t \Delta}}$ for ${t \geq 0}$ to all of ${L^2({\bf R}^d \rightarrow {\bf R}^m)}$, in a fashion that is a non-expansive map on ${L^2({\bf R}^d \rightarrow {\bf R}^m)}$ and more generally on ${H^s({\bf R}^d \rightarrow {\bf R}^m)}$. By a limiting argument, (27) holds almost everywhere for all ${u_0 \in L^2({\bf R}^d \rightarrow {\bf R}^m)}$.
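For concreteness, here is a numerical sketch of the propagator defined by (27), realized with the discrete Fourier transform on a periodic grid (the grid, domain size, viscosity, and data below are arbitrary choices); it checks the non-expansiveness in ${L^2}$ just discussed, together with the semigroup property ${e^{\nu t \Delta} e^{\nu s \Delta} = e^{\nu (t+s) \Delta}}$.

```python
import numpy as np

def heat_propagate(u0, t, nu, L):
    """Apply e^{nu t Delta} on an n-point periodic grid of length L, as in (27):
    multiply each Fourier coefficient by exp(-4 pi^2 nu |xi|^2 t)."""
    n = len(u0)
    xi = np.fft.fftfreq(n, d=L / n)
    return np.fft.ifft(np.exp(-4 * np.pi**2 * nu * xi**2 * t) * np.fft.fft(u0)).real

n, L, nu = 256, 20.0, 0.5
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
u0 = np.exp(-(x**2)) * np.cos(3 * x)     # rapidly decaying initial data

u1 = heat_propagate(u0, 0.3, nu, L)

# non-expansive in L^2 (by Plancherel: the multiplier has modulus <= 1)
norm = lambda u: np.sqrt(np.sum(u**2) * L / n)
assert norm(u1) <= norm(u0)

# semigroup property: propagating by 0.1 then 0.2 equals propagating by 0.3
u_two = heat_propagate(heat_propagate(u0, 0.1, nu, L), 0.2, nu, L)
assert np.allclose(u_two, u1)
```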

There is also a smoothing effect:

Exercise 23 (Smoothing effect) Let ${s' \geq s \geq 0}$. Show that

$\displaystyle \| e^{\nu t \Delta} u_0 \|_{H^{s'}({\bf R}^d \rightarrow {\bf R}^m)} \lesssim_{s,s'} (1+(\nu t)^{-\frac{s'-s}{2}}) \| u_0 \|_{H^s({\bf R}^d \rightarrow {\bf R}^m)}$

for all ${u_0 \in H^s({\bf R}^d \rightarrow {\bf R}^m)}$ and ${\nu, t > 0}$.
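One can watch the ${t^{-(s'-s)/2}}$ blowup rate in this estimate numerically. In the sketch below (one dimension, ${s=0}$, ${s'=1}$; the grid size, seed, and range of times are arbitrary choices), the data is discrete white noise, which lies in (discrete) ${L^2}$ but in nothing better, and the ratio of ${\| e^{\nu t \Delta} u_0\|_{H^1}}$ to ${t^{-1/2} \|u_0\|_{L^2}}$ stays bounded as ${t \rightarrow 0}$:

```python
import numpy as np

nu, n = 1.0, 4096
rng = np.random.default_rng(0)
u0 = rng.standard_normal(n)                 # "white noise": in L^2 but very rough
u0h = np.fft.fft(u0) / n                    # Fourier coefficients
k = np.fft.fftfreq(n, d=1.0 / n)            # integer frequencies
l2 = np.sqrt(np.sum(np.abs(u0h) ** 2))      # L^2 norm, via Plancherel

ratios = []
for t in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    uh = np.exp(-4 * np.pi**2 * nu * k**2 * t) * u0h      # apply e^{nu t Delta}
    h1 = np.sqrt(np.sum((1 + k**2) * np.abs(uh) ** 2))    # Japanese-bracket H^1 norm
    ratios.append(h1 / (t ** -0.5 * l2))

# the ratio stays bounded as t -> 0 (here it is well below 1)
assert max(ratios) < 1.0
```

The boundedness reflects the elementary inequality ${\langle k \rangle e^{-4\pi^2 \nu k^2 t} \lesssim 1 + (\nu t)^{-1/2}}$, which is essentially the ${s=0,s'=1}$ case of the exercise.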

Exercise 24 (Fundamental solution for the heat equation) For ${u_0 \in L^2({\bf R}^d \rightarrow {\bf R}^m)}$ and ${t>0}$, establish the identity

$\displaystyle e^{\nu t \Delta} u_0(x) = \frac{1}{(4\pi \nu t)^{d/2}} \int_{{\bf R}^d} e^{-\frac{|x-y|^2}{4\nu t}} u_0(y)\ dy.$

for almost every ${x \in {\bf R}^d}$. (Hint: first work with Schwartz functions. Either compute the Fourier transform explicitly, or verify directly that the heat equation initial value problem is solved by the right-hand side.) Conclude in particular that (after modification on a measure zero set if necessary) ${e^{\nu t \Delta} u_0}$ is smooth for any ${t>0}$.
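As a consistency check (again an illustration, not a solution of the exercise), one can compare the Fourier multiplier definition (27) with convolution against this Gaussian kernel, on a periodic grid large enough that the tails of both the data and the kernel are negligible at the boundary; the two agree to roundoff.

```python
import numpy as np

n, L, nu, t = 512, 40.0, 1.0, 0.5
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
u0 = np.exp(-(x**2))                      # rapidly decaying data, d = 1

# propagator as a Fourier multiplier, as in (27)
xi = np.fft.fftfreq(n, d=L / n)
u_spec = np.fft.ifft(np.exp(-4 * np.pi**2 * nu * xi**2 * t) * np.fft.fft(u0)).real

# propagator as convolution with the kernel (4 pi nu t)^{-1/2} e^{-|x-y|^2/(4 nu t)}
kernel = np.exp(-((x[:, None] - x[None, :]) ** 2) / (4 * nu * t))
u_kern = kernel @ u0 * (L / n) / np.sqrt(4 * np.pi * nu * t)

assert np.max(np.abs(u_spec - u_kern)) < 1e-8
```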

Exercise 25 (Ill-posedness of the backwards heat equation) Show that there exists a Schwartz function ${u_0: {\bf R}^d \rightarrow {\bf R}}$ with the property that there is no solution ${u \in L^\infty_t L^2_x([-T,0] \times {\bf R}^d \rightarrow {\bf R})}$ to (23) with final data ${u(0)=u_0}$ for any ${T > 0}$. (Hint: choose ${u_0}$ so that the Fourier transform ${\hat u_0}$ decays somewhat, but not extremely rapidly. Then argue by contradiction using (27).)

Exercise 26 (Continuity in the strong operator topology) For any ${s \geq 0}$, let ${C^0_t H^s_x([0,+\infty) \times {\bf R}^d \rightarrow {\bf R}^m)}$ denote the Banach space of functions ${u: [0,+\infty) \times {\bf R}^d \rightarrow {\bf R}^m}$ such that for each ${t}$, ${u(t)}$ lies in ${H^s_x({\bf R}^d \rightarrow {\bf R}^m)}$ and varies continuously and boundedly in ${t}$ in the strong topology, with norm

$\displaystyle \| u \|_{C^0_t H^s_x([0,+\infty) \times {\bf R}^d \rightarrow {\bf R}^m)} := \sup_{t \in [0,+\infty)} \| u(t) \|_{H^s_x({\bf R}^d \rightarrow {\bf R}^m)}.$

Show that if ${u_0 \in H^s_x({\bf R}^d \rightarrow {\bf R}^m)}$ and ${u(t) = e^{\nu t \Delta} u_0}$ solves the heat equation on ${[0,+\infty) \times {\bf R}^d}$, then ${u \in C^0_t H^s_x([0,+\infty) \times {\bf R}^d \rightarrow {\bf R}^m)}$ with

$\displaystyle \| u \|_{C^0_t H^s_x([0,+\infty) \times {\bf R}^d \rightarrow {\bf R}^m)} \leq \| u_0 \|_{H^s_x({\bf R}^d \rightarrow {\bf R}^m)}.$

Similar considerations apply to the inhomogeneous heat equation (24). If ${u_0: {\bf R}^d \rightarrow {\bf R}^m}$ and ${F: [0,T] \times {\bf R}^d \rightarrow {\bf R}^m}$ are Schwartz for some ${T>0}$, then the function ${u: [0,T] \times {\bf R}^d \rightarrow {\bf R}^m}$ defined by the Duhamel formula

$\displaystyle u(t) := e^{\nu t \Delta} u_0 + \int_0^t e^{\nu(t-s) \Delta} F(s)\ ds \ \ \ \ \ (28)$

can easily be verified to also be Schwartz and solve (24) with initial data ${u_0}$; by Proposition 22, this is the only such solution in ${L^\infty_t L^2_x([0,T] \times {\bf R}^d \rightarrow {\bf R}^m)}$. It also obeys good estimates:

Exercise 27 (Energy estimates) Let ${u_0: {\bf R}^d \rightarrow {\bf R}}$, ${F: [0,T] \times {\bf R}^d \rightarrow {\bf R}}$, and ${G: [0,T] \times {\bf R}^d \rightarrow {\bf R}^d}$ be Schwartz functions for some ${T>0}$, and let ${u}$ be the solution to the equation

$\displaystyle \partial_t u = \nu \Delta u + F + \nabla \cdot G$

with initial condition ${u(0) = u_0}$ given by the Duhamel formula. For any ${s \geq 0}$, establish the energy estimate

$\displaystyle \| u \|_{C^0_t H^s_x([0,T] \times {\bf R}^d\rightarrow {\bf R})} + \nu^{1/2} \| \nabla u \|_{L^2_t H^s_x([0,T] \times {\bf R}^d\rightarrow {\bf R})} \ \ \ \ \ (29)$

$\displaystyle \lesssim \|u_0\|_{H^s_x({\bf R}^d \rightarrow {\bf R})} + \| F \|_{L^1_t H^s_x([0,T] \times {\bf R}^d\rightarrow {\bf R})}$

$\displaystyle + \nu^{-1/2} \| G \|_{L^2_t H^s_x([0,T] \times {\bf R}^d\rightarrow {\bf R}^d)}$

in two different ways:

• (i) By using the Fourier representation (27) and Plancherel’s formula;
• (ii) By using energy methods as in the proof of Proposition 22. (Hint: first reduce to the case ${s=0}$. You may find the arithmetic mean-geometric mean inequality ${ab \leq \frac{1}{2} a^2 + \frac{1}{2} b^2}$ to be useful.)

Here of course we are using the norms

$\displaystyle \| F \|_{L^1_t H^s_x([0,T] \times {\bf R}^d \rightarrow {\bf R}^m)} := \int_{[0,T]} \| F(t) \|_{H^s({\bf R}^d \rightarrow {\bf R}^m)}\ dt$

and

$\displaystyle \| G \|_{L^2_t H^s_x([0,T] \times {\bf R}^d \rightarrow {\bf R}^m)} := \left(\int_{[0,T]} \| G(t) \|_{H^s({\bf R}^d \rightarrow {\bf R}^m)}^2\ dt\right)^{1/2}.$

The energy estimate contains some smoothing effects similar (though not identical) to those in Exercise 23, since it shows that ${u}$ can in principle be one degree of regularity smoother than ${u_0}$ (if one averages in time in an ${L^2}$ sense, and the viscosity ${\nu}$ is not sent to zero), and two degrees of regularity smoother than the forcing term ${\nabla \cdot G}$ (with the same caveats). As we shall shortly see, this smoothing effect will allow us to handle the nonlinear terms in the Navier-Stokes equations for the purposes of setting up a local well-posedness theory.
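For orientation, here is one way the ${s=0}$ case of part (ii) of the exercise can begin (a sketch only; constants are not optimised):

```latex
% s = 0 sketch: pair the equation \partial_t u = \nu \Delta u + F + \nabla \cdot G
% with u and integrate by parts in space:
\frac{1}{2} \partial_t \| u(t) \|_{L^2_x}^2
  = \int_{{\bf R}^d} u \, (\nu \Delta u + F + \nabla \cdot G)\ dx
  = -\nu \| \nabla u(t) \|_{L^2_x}^2 + \int_{{\bf R}^d} u F\ dx
    - \int_{{\bf R}^d} \nabla u \cdot G\ dx.
% By Cauchy-Schwarz and the AM-GM inequality from the hint,
%   | \int \nabla u \cdot G\ dx |
%     \le \frac{\nu}{2} \| \nabla u \|_{L^2_x}^2 + \frac{1}{2\nu} \| G \|_{L^2_x}^2,
% so half the dissipation absorbs the gradient term; estimating the F term by
% \| u \|_{L^2_x} \| F \|_{L^2_x} and integrating in time then yields (29) at s = 0.
```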

Exercise 28 (Distributional solution) Let ${s \geq 0}$, let ${u_0 \in H^s({\bf R}^d \rightarrow {\bf R})}$, and let ${F \in C^0_t H^s_x( [0,T] \times {\bf R}^d \rightarrow {\bf R})}$ for some ${T>0}$. Let ${u \in C^0_t H^s_x([0,T] \times {\bf R}^d \rightarrow {\bf R})}$ be given by the Duhamel formula (28). Show that (24) is true in the spacetime distributional sense, or more precisely that

$\displaystyle \langle \partial_t u(t), \phi \rangle = \langle \nu\Delta u + F, \phi \rangle \ \ \ \ \ (30)$

in the sense of spacetime distributions for any test function ${\phi: [0,T] \times {\bf R}^d \rightarrow {\bf R}}$ supported in the interior of ${[0,T] \times {\bf R}^d}$.

Pretty much all of the above discussion can be extended to the periodic setting:

Exercise 29 Let ${d,m \geq 1}$ and ${\nu > 0}$.

• (a) If ${u_0: {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m}$ is smooth, define ${e^{\nu t\Delta} u_0: {\bf R}^d/ {\bf Z}^d \rightarrow {\bf R}^m}$ by the formula

$\displaystyle e^{\nu t \Delta} u_0(x) := \sum_{k \in {\bf Z}^d} e^{-4\pi^2 |k|^2 \nu t} \hat u_0(k) e^{2\pi i k \cdot x} \ \ \ \ \ (31)$

where ${\hat u_0(k) := \int_{{\bf R}^d/{\bf Z}^d} u_0(x) e^{-2\pi i k \cdot x}\ dx}$ are the Fourier coefficients of ${u_0}$. Show that ${e^{\nu t\Delta}}$ extends continuously to a non-expansive map on ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$ for every ${s \geq 0}$, and that if ${u_0 \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$ then the function ${(t,x) \mapsto e^{\nu t\Delta} u_0(x)}$ lies in ${C^0_t H^s_x([0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$.

• (b) For ${u_0 \in L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$ and ${t>0}$, establish the formula

$\displaystyle e^{\nu t \Delta} u_0(x) = \frac{1}{(4\pi \nu t)^{d/2}} \int_{{\bf R}^d} e^{-\frac{|x-y|^2}{4\nu t}} u_0(y)\ dy$

for almost every ${x \in {\bf R}^d}$, where (by abuse of notation) we identify functions ${u: {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m}$ with ${{\bf Z}^d}$-periodic functions ${u: {\bf R}^d \rightarrow {\bf R}^m}$ in the usual fashion.

• (c) If ${T > 0}$, and ${u_0: {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m}$ and ${F: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m}$ are smooth, show that the function ${u: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m}$ defined by (28) is smooth and solves the inhomogeneous equation (24) with initial data ${u(0) = u_0}$, and that this is the unique smooth solution to that initial value problem.
• (d) If ${T > 0}$, ${s \geq 0}$, and ${u_0: {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m}$, ${F: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m}$, and ${G: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{m \times d}}$ are smooth, and ${u}$ is the unique smooth solution to the heat equation ${\partial_t u = \nu \Delta u + F + \nabla \cdot G}$ with ${u(0)=u_0}$, establish the energy estimate

$\displaystyle \| u \|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)} + \nu^{1/2} \| \nabla u \|_{L^2_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$

$\displaystyle \lesssim \|u_0 \|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)} + \| F \|_{L^1_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$

$\displaystyle + \nu^{-1/2} \| G \|_{L^2_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{m \times d})}.$

• (e) If ${T>0}$, ${u_0 \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$ and ${F \in C^0_t H^s([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$, show that the function ${u}$ given by (28) is in ${C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^m)}$ and obeys (24) in the sense of spacetime distributions (30).

Remark 30 The heat equation for negative viscosities ${\nu < 0}$ can be transformed into a positive viscosity heat equation by time reversal: if ${u}$ solves the equation ${\partial_t u = - \nu \Delta u}$, then the function ${\tilde u(t,x) := u(-t, x)}$ solves the equation ${\partial_t \tilde u = \nu \Delta \tilde u}$. Thus one can solve negative viscosity heat equations (also known as backwards heat equations) backwards in time, but one tends not to have well-posedness forwards in time. In a similar spirit, if ${\nu}$ is positive, one can normalise it to (say) ${1}$ by an appropriate rescaling of the time variable, ${t \mapsto t/\nu}$. However, we will generally keep the parameter ${\nu}$ non-normalised in preparation for understanding the limit as ${\nu \rightarrow 0}$.

— 4. Local well-posedness for Navier-Stokes —

We now have all the ingredients necessary to create a local well-posedness theory for the Navier-Stokes equations (1).

We first dispose of the one-dimensional case ${d=1}$, which is rather degenerate as incompressible one-dimensional fluids are somewhat boring. Namely, suppose that one had a smooth solution to the one-dimensional Navier-Stokes equations

$\displaystyle \partial_t u + u \partial_x u = - \partial_x p + \nu \partial_{xx} u$

$\displaystyle \partial_x u = 0.$

The second equation implies that ${u}$ is just a function of time, ${u = u(t)}$, and the first equation becomes

$\displaystyle u'(t) = - \partial_x p(t,x).$

To solve this equation, one can set ${u: {\bf R} \rightarrow {\bf R}}$ to be an arbitrary smooth function of time, and then set

$\displaystyle p(t,x) = a(t) - u'(t) x$

for an arbitrary smooth function ${a: {\bf R} \rightarrow {\bf R}}$. If one requires the pressure to be bounded, then ${u'}$ vanishes identically, and then ${u}$ is constant in time, which among other things shows that the initial value problem is (rather trivially) well-posed in the category of smooth solutions, up to the ability to alter the pressure by an arbitrary constant ${a(t)}$. On the other hand, if one does not require the pressure to stay bounded, then one has a lot less uniqueness, since the function ${u(t)}$ is essentially unconstrained.

Now we work in two or higher dimensions ${d \geq 2}$, and consider solutions to (1) on the spacetime region ${[0,T] \times {\bf R}^d}$. To begin with, we assume that ${u: [0,T] \times {\bf R}^d \rightarrow {\bf R}^d}$ is smooth and periodic in space: ${u(t,x+n) = u(t,x)}$ for ${n \in {\bf Z}^d}$; we assume ${p}$ is smooth but do not place any periodicity hypotheses on it. Then, by (1), ${\nabla p}$ is periodic. In particular, for any ${n \in {\bf Z}^d}$ and ${t}$, the function ${x \mapsto p(t,x+n)-p(t,x)}$ has vanishing gradient and is thus constant in ${x}$, so that

$\displaystyle p(t,x+n) - p(t,x) = a_n(t)$

for all ${x \in {\bf R}^d}$ and some function ${a_n(t)}$ of ${t}$. The map ${n \mapsto a_n(t)}$ is a homomorphism for fixed ${t}$, so we can write ${a_n(t) = n \cdot a(t)}$ for some ${a: [0,T] \rightarrow {\bf R}^d}$, which will be smooth since ${p}$ is smooth. We thus have ${p(t,x) = p_0(t,x) + x \cdot a(t)}$ for some smooth ${{\bf Z}^d}$-periodic function ${p_0}$. By subtracting off the mean, we can further decompose

$\displaystyle p(t,x) = p_1(t,x) + x \cdot a(t) + r(t)$

for some smooth function ${r: [0,T] \rightarrow {\bf R}}$ and some smooth ${{\bf Z}^d}$-periodic function ${p_1: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}}$ which has mean zero at every time.

Note that one can simply omit the constant term ${r(t)}$ from the pressure without affecting the system (1). One can also eliminate the linear term ${x \cdot a(t)}$ by the following “generalised Galilean transformation”. If ${u, p, p_1, a, r}$ are as above, and one lets

$\displaystyle v(t) := \int_0^t a(s)\ ds$

be the primitive of ${a}$, and

$\displaystyle X(t) := \int_0^t v(s)\ ds$

be the primitive of ${v}$, then a short calculation reveals that the smooth function ${u_2: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ defined by

$\displaystyle u_2(t, x) := u(t, x - X(t)) + v(t)$

and the smooth function ${p_2: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}}$ defined by

$\displaystyle p_2(t,x) := p_1(t,x - X(t))$

solve the Navier-Stokes equations

$\displaystyle \partial_t u_2 + u_2 \cdot \nabla u_2 = - \nabla p_2 + \nu \Delta u_2$

$\displaystyle \nabla \cdot u_2 = 0$

with ${u_2}$ having the same initial data as ${u}$; conversely, if ${(u_2,p_2)}$ is a solution to Navier-Stokes, then so is ${(u,p)}$. In particular this reveals a lack of uniqueness for the periodic Navier-Stokes equations that is essentially the same lack of uniqueness that is present for the Leray system: one can add an arbitrary spatially affine function to the pressure ${p}$ by applying a suitable Galilean transform to ${u}$. On the other hand, we can eliminate this lack of uniqueness by requiring that the pressure be normalised in the sense that ${a(t)=0}$ and ${r(t)=0}$, that is to say we require ${p}$ to be ${{\bf Z}^d}$-periodic and mean zero. The above discussion shows that any smooth solution to Navier-Stokes with ${u}$ periodic can be transformed by a Galilean transformation to one in which the pressure is normalised.

Once the pressure is normalised, it turns out that one can recover uniqueness (much as was the case with the Leray system):

Theorem 31 (Uniqueness with normalised pressure) Let ${(u_1,p_1), (u_2,p_2)}$ be two smooth periodic solutions to (1) on ${[0,T] \times {\bf R}^d}$ with normalised pressure such that ${u_1(0)=u_2(0)}$. Then ${(u_1,p_1)=(u_2,p_2)}$.

Proof: We use the energy method. Write ${w := u_1 - u_2}$; subtracting (1) for ${(u_2,p_2)}$ from (1) for ${(u_1,p_1)}$, we see that ${w: [0,T] \times {\bf R}^d \rightarrow {\bf R}^d}$ is smooth with

$\displaystyle \partial_t w + (u_1 \cdot \nabla) w + (w \cdot \nabla) u_2 = \nu \Delta w - \nabla (p_1 - p_2)$

and

$\displaystyle \nabla \cdot w = 0.$

Now we consider the energy ${E(t) := \int_{{\bf R}^d/{\bf Z}^d} |w(t,x)|^2\ dx}$. This varies smoothly with ${t}$, and we can differentiate under the integral sign to obtain

$\displaystyle \partial_t E(t) = 2 \int_{{\bf R}^d/{\bf Z}^d} w(t,x) \cdot \partial_t w(t,x)\ dx$

$\displaystyle = A + B + C+ D$

where

$\displaystyle A := - 2 \int_{{\bf R}^d/{\bf Z}^d} w \cdot (u_1 \cdot \nabla) w\ dx$

$\displaystyle B := -2 \int_{{\bf R}^d/{\bf Z}^d} w\cdot (w \cdot \nabla) u_2\ dx$

$\displaystyle C := 2\nu \int_{{\bf R}^d/{\bf Z}^d} w \cdot \partial_i \partial_i w\ dx$

$\displaystyle D := - 2 \int_{{\bf R}^d/{\bf Z}^d} w \cdot \nabla (p_1-p_2)\ dx$

and we have omitted the explicit dependence on ${t}$ and ${x}$ for brevity.

For ${A}$, we observe the total derivative ${2 w \cdot (u_1 \cdot \nabla) w = (u_1 \cdot \nabla) |w|^2}$ and integrate by parts to conclude that

$\displaystyle A = \int_{{\bf R}^d/{\bf Z}^d} (\nabla \cdot u_1) |w|^2\ dx = 0$

since ${u_1}$ is divergence-free. Similarly, integration by parts shows that ${D}$ vanishes since ${w}$ is divergence-free. Another integration by parts gives

$\displaystyle C = - 2\nu \int_{{\bf R}^d/{\bf Z}^d} \partial_i w \cdot \partial_i w\ dx$

and hence ${C \leq 0}$. Finally, from Hölder’s inequality we have

$\displaystyle B \leq 2 E(t) \|\nabla u_2 \|_{L^\infty_t L^\infty_x([0,T] \times {\bf R}^d/{\bf Z}^d)}$

and hence

$\displaystyle \partial_t E(t) \leq 2 E(t) \|\nabla u_2 \|_{L^\infty_t L^\infty_x([0,T] \times {\bf R}^d/{\bf Z}^d)}.$

Since ${E(0)=0}$, we conclude from Gronwall’s inequality that ${E(t) \leq 0}$ for all ${t \in [0,T]}$; as ${E}$ is manifestly non-negative, it vanishes identically, and hence ${w}$ is identically zero, thus ${u_1=u_2}$. Substituting this into (1) we conclude that ${\nabla p_1 = \nabla p_2}$; as ${p_1,p_2}$ have mean zero, we conclude (e.g., from Fourier inversion) that ${p_1=p_2}$, and the claim follows. $\Box$
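For completeness, the Gronwall step used at the end of the proof can be spelled out as follows:

```latex
% Writing C := 2 \| \nabla u_2 \|_{L^\infty_t L^\infty_x([0,T] \times {\bf R}^d/{\bf Z}^d)},
% the differential inequality \partial_t E \le C E gives, via the integrating
% factor e^{-Ct},
\partial_t \big( e^{-Ct} E(t) \big) = e^{-Ct} \big( \partial_t E(t) - C E(t) \big) \le 0,
% so that e^{-Ct} E(t) \le E(0) = 0 for all t \in [0,T]; since E is manifestly
% non-negative, E vanishes identically.
```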

Now we turn to existence in the periodic setting, assuming normalised pressure. For various technical reasons, it is convenient to reduce to the case when the velocity field ${u}$ has zero mean. Observe that the right-hand sides ${\nu \Delta u}$, ${\nabla p}$ of (1) have zero mean on ${{\bf R}^d/{\bf Z}^d}$, thanks to integration by parts. A further integration by parts, using the divergence-free condition ${\nabla \cdot u = 0}$, reveals that the transport term ${(u \cdot \nabla) u}$ also has zero mean:

$\displaystyle \int_{{\bf R}^d/{\bf Z}^d} ((u \cdot \nabla) u)_i\ dx = \int_{{\bf R}^d/{\bf Z}^d} u_j \partial_j u_i\ dx$

$\displaystyle = - \int_{{\bf R}^d/{\bf Z}^d} (\partial_j u_j) u_i\ dx$

$\displaystyle = 0.$

Thus, we see that the mean ${\int_{{\bf R}^d/{\bf Z}^d} u(t,x)\ dx}$ is a conserved integral of motion: if ${v_0 := \int_{{\bf R}^d/{\bf Z}^d} u_0(x)\ dx}$ is the mean initial velocity, and ${(u,p)}$ is a solution to (1) (obeying some minimal regularity hypothesis), then ${u(t)}$ continues to have mean velocity ${v_0}$ for all subsequent times. On the other hand, if ${(u,p)}$ is a smooth periodic solution to (1) with normalised pressure and initial velocity ${u_0}$, then the Galilean transform ${(\tilde u, \tilde p)}$ defined by

$\displaystyle \tilde u(t,x) := u(t, x + v_0 t) - v_0$

$\displaystyle \tilde p(t,x) := p(t, x + v_0 t)$

can be easily verified to be a smooth periodic solution to (1) with normalised pressure and initial velocity ${u_0 - v_0}$. Of course, one can reconstruct ${(u,p)}$ from ${(\tilde u,\tilde p)}$ by the inverse transformation

$\displaystyle u(t,x) = \tilde u(t, x - v_0 t) + v_0$

$\displaystyle p(t,x) = \tilde p(t, x - v_0 t).$

Thus, up to this simple transformation, solving the initial value problem for (1) for ${u_0}$ is equivalent to that of ${u_0-v_0}$, so we may assume without loss of generality that the initial velocity (and hence the velocity at all subsequent times) has zero mean.

A general rule of thumb is that whenever an integral of a solution to a PDE can be proven to vanish (or be equal to boundary terms) by integration by parts, it is because the integrand can be rewritten in “divergence form” – as the divergence of a tensor of one higher rank. (This is because the integration by parts identity ${\int f \partial_i g = - \int (\partial_i f) g}$ arises from the divergence form ${\partial_i(fg)}$ of the expression ${f \partial_i g + (\partial_i f) g}$.) Thus we expect the transport term ${(u \cdot \nabla) u}$ to be in divergence form. Indeed, in components we have

$\displaystyle ((u \cdot \nabla) u)_i = u_j \partial_j u_i;$

since we have the divergence-free condition ${\partial_j u_j=0}$, we thus have from the Leibniz rule that

$\displaystyle ((u \cdot \nabla) u)_i = \partial_j (u_j u_i).$

We write this in coordinate-free notation as

$\displaystyle (u \cdot \nabla) u = \nabla \cdot (u \otimes u)$

where ${u \otimes u}$ is the tensor product ${(u \otimes u)_{ji} := u_j u_i}$ and ${\nabla \cdot (u \otimes u)}$ denotes the divergence

$\displaystyle (\nabla \cdot (u \otimes u))_i = \partial_j (u \otimes u)_{ji}.$
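This identity can also be checked symbolically. The following sketch verifies it in two dimensions for a divergence-free field generated by a stream function ${\psi}$; the particular choice of ${\psi}$ below is purely illustrative:

```python
import sympy as sp

# Check the divergence-form identity (u . grad) u = div(u (x) u) for a
# divergence-free field. Building u from a stream function psi guarantees
# d_x u_1 + d_y u_2 = 0; the choice of psi is an arbitrary example.
x, y = sp.symbols('x y')
psi = sp.sin(x) * sp.cos(2 * y)
u = [sp.diff(psi, y), -sp.diff(psi, x)]   # u = (psi_y, -psi_x)
X = [x, y]

# Divergence-free condition: d_x psi_y - d_y psi_x = 0.
assert sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y)) == 0

for i in range(2):
    transport = sum(u[j] * sp.diff(u[i], X[j]) for j in range(2))        # u_j d_j u_i
    divergence_form = sum(sp.diff(u[j] * u[i], X[j]) for j in range(2))  # d_j (u_j u_i)
    assert sp.simplify(transport - divergence_form) == 0
```

The difference of the two expressions is ${-(\partial_j u_j) u_i}$, which vanishes precisely because of the divergence-free condition.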

Thus we can rewrite (1) as the system

$\displaystyle \partial_t u + \nabla \cdot (u \otimes u) = \nu \Delta u - \nabla p \ \ \ \ \ (32)$

$\displaystyle \nabla \cdot u = 0.$

Next, we observe that we can use the Leray projection operator ${\mathbb{P}}$ to eliminate the role of the (normalised) pressure. Namely, if ${(u,p)}$ are a smooth periodic solution to (1) with normalised pressure, then on applying ${\mathbb{P}}$ (which preserves divergence-free vector fields such as ${\partial_t u}$ and ${\Delta u}$, but annihilates gradients such as ${\nabla p}$) we conclude an equation that does not involve the pressure at all:

$\displaystyle \partial_t u + \mathbb{P}( \nabla \cdot (u \otimes u) ) = \nu \Delta u. \ \ \ \ \ (33)$

Conversely, suppose that one has a smooth periodic solution ${u: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ to (33) with initial condition ${u(0)=u_0}$ for some smooth periodic divergence-free vector field ${u_0: {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$. Taking divergences of both sides of (33), we then conclude that

$\displaystyle \partial_t (\nabla \cdot u) = \nu \Delta (\nabla \cdot u),$

that is to say ${\nabla \cdot u}$ obeys the heat equation (23). Since ${\nabla \cdot u}$ is periodic, smooth, and vanishes at ${0}$, we see from Exercise 29(c) that ${\nabla \cdot u}$ vanishes on all of ${[0,T] \times {\bf R}^d/{\bf Z}^d}$, thus ${u}$ is divergence free on the entire time interval ${[0,T]}$. From (33) and (22) we thus see that if one defines ${p: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}}$ to be the function

$\displaystyle p := - \Delta^{-1} (\nabla \cdot \nabla \cdot (u \otimes u))$

(which can easily be verified to be a smooth function in both space and time) then ${(u,p)}$ is a smooth periodic solution to (1) with normalised pressure and initial condition ${u(0)=u_0}$ (and is thus the unique solution to this system, thanks to Theorem 31). Thus, the problem of finding a smooth solution to (1) in the smooth periodic setting with normalised pressure and divergence-free initial data ${u(0)=u_0}$ is equivalent to that of solving (33) with the same initial data.

By Duhamel’s formula (Exercise 29(c)), any smooth solution ${u: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ to the initial value problem (33) with ${u(0)=u_0}$ obeys the Duhamel formula

$\displaystyle u(t) = e^{\nu t \Delta} u_0 + \int_0^t e^{\nu (t-s) \Delta} \mathbb{P}( \nabla \cdot (u(s) \otimes u(s)) )\ ds. \ \ \ \ \ (34)$

(The operator ${e^{\nu (t-s) \Delta} \mathbb{P}}$ is sometimes referred to as the Oseen operator in the literature.) Conversely, a smooth solution ${u: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ to (34) will solve the initial value problem (33) with initial data ${u(0)=u_0}$.

To obtain existence of smooth periodic solutions (with normalised pressure) to the Navier-Stokes equations with given smooth divergence-free periodic initial data ${u_0}$, it thus suffices to find a smooth periodic solution to the integral equation (34). We will achieve this by a two-step procedure:

• (i) (Existence at finite regularity) Construct a solution ${u}$ to (34) in a certain function space with a finite amount of regularity (assuming that the initial data ${u_0}$ has a similarly finite amount of regularity); and then
• (ii) (Propagation of regularity) show that if ${u_0}$ is in fact smooth, then the solution constructed in (i) is also smooth.

The reason for this two step procedure is that one wishes to solve (34) using iteration-type methods (such as the contraction mapping theorem, which was used to prove the Picard existence theorem); however the function space ${C^\infty([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$ that one ultimately wishes the solution to lie in is not well adapted for such iteration (for instance, it is not a Banach space, instead being merely a Fréchet space). Instead, we iterate in an auxiliary lower regularity space first, and then “bootstrap” the lower regularity to the desired higher regularity. Observe that the same situation occurred with the Picard existence theorem, where one performed the iteration in the low regularity space ${C([0,T] \rightarrow \overline{B(0,2R)})}$, even though ultimately one desired the solution to be continuously differentiable or even smooth.
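The fixed-point scheme itself can be seen in a stripped-down setting. As a toy illustration (a scalar integral equation, unrelated to Navier-Stokes beyond the shared fixed-point structure), the following sketch runs the Picard iteration ${u \mapsto \Phi(u)}$ for ${\Phi(u)(t) := u_0 + \int_0^t u(s)^2\ ds}$, whose exact solution ${u(t) = u_0/(1-u_0 t)}$ exists only up to time ${1/u_0}$:

```python
import numpy as np

# Picard iteration for the scalar integral equation u(t) = u0 + int_0^t u(s)^2 ds,
# discretised with the trapezoid rule. All parameters are illustrative; note
# that T is chosen strictly below the blowup time 1/u0.
u0_val, T, N = 1.0, 0.5, 4000
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

u = np.full(N, u0_val)                   # start from the constant function u0
for _ in range(60):                       # apply u <- Phi(u) repeatedly
    f = u**2
    integral = np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2) * dt])
    u = u0_val + integral

exact = u0_val / (1 - u0_val * t)         # exact local solution
assert np.max(np.abs(u - exact)) < 1e-3
```

As with Theorem 37 below, the map is only guaranteed to be a contraction when the time horizon ${T}$ is small compared to the size of the data; taking ${T}$ close to ${1/u_0}$ degrades the convergence of the iteration.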

Of course, to run this procedure, one actually has to write down an explicit function space in which one will perform the iteration argument. Selection of this space is actually a non-trivial matter and often requires a substantial amount of trial and error, as well as experience with similar iteration arguments for other PDE. Often one is guided by the function space theory for the linearised counterpart of the PDE, which in this case is the heat equation (23). As such, the following definition can be at least partially motivated by the energy estimates in Exercise 29(d).

Definition 32 (Mild solution) Let ${s \geq 0}$, ${T>0}$, and let ${u_0 \in H^{s}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ be divergence-free, where ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ denotes the subspace of ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$ consisting of mean zero functions. An ${H^s}$-mild solution (or Fujita-Kato mild solution) to the Navier-Stokes equations with initial data ${u_0}$ is a function ${u: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ in the function space

$\displaystyle u \in C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0$

$\displaystyle \cap L^2_t H^{s+1}_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0$

that obeys the integral equation (34) (in the sense of distributions) for all ${t \in [0,T]}$. We say that ${u}$ is a mild solution on ${[0,T_*)}$ if it is a mild solution on ${[0,T]}$ for every ${0 < T < T_*}$.

Remark 33 The definition of a mild solution could be extended to those choices of initial data ${u_0}$ that are not divergence-free, but then this solution concept no longer has any direct connection with the Navier-Stokes equations, so we will not consider such “solutions” here. Similarly, one could also consider mild solutions without the mean zero hypothesis, but the function space estimates are slightly less favourable in this setting and so we shall restrict attention to mean zero solutions only.

Note that the regularity on ${u}$ places ${u \otimes u}$ in ${L^\infty_t L^1_x}$ (with plenty of room to spare), which is more than enough regularity to make sense of the right-hand side of (34) in a distributional sense at least. One can also define mild solutions for other function spaces than the one provided here, but we focus on this notion for now, which was introduced in the work of Fujita and Kato. We record a simple compatibility property of mild solutions:

Exercise 34 (Splitting) Let ${s \geq 0}$, ${T>0}$, let ${u_0 \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ be divergence-free, and let

$\displaystyle u \in C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0$

$\displaystyle \cap L^2_t H^{s+1}_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0.$

Let ${0 < \tau < T}$. Show that the following are equivalent:

• (i) ${u: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ is an ${H^s}$ mild solution to the Navier-Stokes equations on ${[0,T]}$ with initial data ${u_0}$.
• (ii) ${u: [0,\tau] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ is an ${H^s}$ mild solution to the Navier-Stokes equations on ${[0,\tau]}$ with initial data ${u_0}$, and the translated function ${u_\tau: [0,T-\tau] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ defined by ${u_\tau(t,x) := u(t+\tau,x)}$ is an ${H^s}$ mild solution to the Navier-Stokes equations with initial condition ${u(\tau)}$.

To use this notion of a mild solution, we will need the following harmonic analysis estimate:

Proposition 35 (Product estimate) Let ${s \geq 0}$, and let ${u,v \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}) \cap L^\infty({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$. Then one has ${uv \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$, with the estimate

$\displaystyle \| uv \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \lesssim_{d,s} \| u \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \| v \|_{L^\infty({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$

$\displaystyle + \| u \|_{L^\infty({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \| v \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}.$

When ${s=0}$ this claim follows immediately from Hölder’s inequality. For ${s=1}$ the claim is similarly immediate from the Leibniz rule ${\nabla(uv) = (\nabla u) v + u (\nabla v)}$ and the triangle and Hölder inequalities (noting that ${\| u \|_{H^1({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}}$ is comparable to ${\| u \|_{L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} + \| \nabla u \|_{L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}}$). For more general ${s}$ the claim is not quite so immediate (for instance, when ${s=2}$ one runs into difficulties controlling the intermediate term ${2 (\nabla u) \otimes (\nabla v)}$ arising in the Leibniz expansion of ${\nabla^2(uv)}$). Nevertheless the bound is still true. However, to prove it we will need to introduce a tool from harmonic analysis, namely Littlewood-Paley theory, and we defer the proof to the appendix.

We also need a simple case of Sobolev embedding:

Exercise 36 (Sobolev embedding)

• (a) If ${s > \frac{d}{2}}$, show that for any ${u \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$, one has ${u \in L^\infty({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$ with

$\displaystyle \| u \|_{L^\infty({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \lesssim_{d,s} \| u \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}.$

• (b) Show that the inequality fails at ${s = \frac{d}{2}}$.
• (c) Establish the same statements with ${{\bf R}^d/{\bf Z}^d}$ replaced by ${{\bf R}^d}$ throughout.

In particular, combining this exercise with Proposition 35 we see that for ${s > \frac{d}{2}}$, ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$ is a Banach algebra:

$\displaystyle \| uv \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \lesssim_{d,s} \| u \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \| v \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}. \ \ \ \ \ (35)$

Now we can construct mild solutions at high regularities ${s > \frac{d}{2}}$.

Theorem 37 (Local well-posedness of mild solutions at high regularity) Let ${s > \frac{d}{2}}$, and let ${u_0 \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ be divergence-free. Then there exists a time

$\displaystyle T \gtrsim_{d,s} \frac{\nu}{\|u_0\|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}^2},$

and an ${H^s}$ mild solution ${u: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ to (34). Furthermore, this mild solution is unique.

The hypothesis ${s > \frac{d}{2}}$ is not optimal; we return to this point later in these notes.

Proof: We begin with existence. We can write (34) in the fixed point form

$\displaystyle u = \Phi(u)$

where

$\displaystyle \Phi(u)(t) := e^{\nu t \Delta} u_0 + \int_0^t e^{\nu (t-s) \Delta} \mathbb{P}( \nabla \cdot (u(s) \otimes u(s)) )\ ds. \ \ \ \ \ (36)$

We remark that this expression automatically has mean zero since ${u_0}$ has mean zero. Let ${X_T}$ denote the function space

$\displaystyle X_T := C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0 \cap L^2_t H^{s+1}_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0$

with norm

$\displaystyle \|u\|_{X_T} := \| u \|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{1/2} \| u \|_{L^2_t H^{s+1}_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}.$

This is a Banach space. Because of the mean zero restriction on ${X_T}$, we may estimate

$\displaystyle \|u\|_{X_T} \lesssim_{s,d} \| u \|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{1/2} \| \nabla u \|_{L^2_t H^{s}_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{d^2})}.$

Note that if ${u \in X_T}$, then ${u \otimes u \in C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{d^2})}$ by (35), which by Exercise 29(d) (and the fact that ${\mathbb{P}}$ commutes with ${\nabla}$ and is a non-expansive map on ${H^s}$) implies that ${\Phi(u) \in X_T}$. Thus ${\Phi}$ is a map from ${X_T}$ to ${X_T}$. In fact we can obtain more quantitative control on this map. By using Exercise 29(d), (35), and the Hölder bound

$\displaystyle \| F \|_{L^2_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \leq T^{1/2} \| F \|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \ \ \ \ \ (37)$

we have

$\displaystyle \| \Phi(u) \|_{X_T} \lesssim \|u_0\|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{-1/2} \| \mathbb{P} (u \otimes u) \|_{L^2_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d\rightarrow {\bf R}^{d^2})}$

$\displaystyle \lesssim \|u_0\|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{-1/2} T^{1/2} \| u \otimes u \|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d\rightarrow {\bf R}^{d^2})}$

$\displaystyle \lesssim_{d,s} \|u_0\|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{-1/2} T^{1/2} \| u \|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d\rightarrow {\bf R}^d)}^2$

$\displaystyle \lesssim \|u_0\|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{-1/2} T^{1/2} \| u \|_{X_T}^2.$

Thus, if we set ${R := C_{d,s} \|u_0\|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}}$ for a suitably large constant ${C_{d,s} > 0}$, and set ${T := c_{d,s} \frac{\nu}{\|u_0\|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}^2}}$ for a sufficiently small constant ${c_{d,s} > 0}$, then ${\Phi}$ maps the closed ball ${\overline{B_{X_T}(0,R)}}$ in ${X_T}$ to itself. Furthermore, for ${u,v \in X_T}$, we have by similar arguments to above

$\displaystyle \| \Phi(u) - \Phi(v) \|_{X_T} \lesssim \nu^{-1/2} \| \mathbb{P} (u \otimes u - v \otimes v) \|_{L^2_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d\rightarrow {\bf R}^{d^2})}$

$\displaystyle \lesssim \nu^{-1/2} T^{1/2} \| u \otimes u - v \otimes v\|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d\rightarrow {\bf R}^{d^2})}$

$\displaystyle \lesssim_{d,s} \nu^{-1/2} T^{1/2} \| u - v\|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d\rightarrow {\bf R}^d)}$

$\displaystyle (\| u \|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d\rightarrow {\bf R}^d)} + \| v\|_{C^0_t H^s_x([0,T] \times {\bf R}^d/{\bf Z}^d\rightarrow {\bf R}^d)})$

$\displaystyle \lesssim_{d,s} \nu^{-1/2} T^{1/2} \| u - v\|_{X_T} (\|u\|_{X_T} + \|v\|_{X_T})$

and hence if the constant ${c_{d,s}}$ is chosen small enough, ${\Phi}$ is also a contraction (with constant, say, ${\frac{1}{2}}$) on ${\overline{B(0,R)}}$. Thus there exists ${u \in \overline{B(0,R)}}$ such that ${u=\Phi(u)}$; that is, ${u}$ is an ${H^s}$ mild solution.

Now we show that it is the only mild solution. Suppose for contradiction that there is another ${H^s}$ mild solution ${v}$ with the same initial data ${u_0}$. This solution ${v}$ might not lie in ${\overline{B_{X_T}(0,R)}}$, but it will lie in ${\overline{B_{X_{T}}(0,R')}}$ for some ${R'>R}$. By the same arguments as above, if ${0 < T' < T}$ is sufficiently small depending on ${d,s,R'}$ then ${\Phi}$ will be a contraction on ${\overline{B_{X_{T'}}(0,R')}}$, which implies that ${u}$ and ${v}$ agree on ${[0,T']}$. Now we apply Exercise 34 to advance in time by ${T'}$ and iterate this process (noting that ${T'}$ depends on ${d,s,R'}$ but does not otherwise depend on ${u}$ or ${v}$) until one concludes that ${u=v}$ on all of ${[0,T]}$. $\Box$
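The mechanics of the contraction mapping argument can be seen in miniature in a toy model. The following sketch (purely illustrative, and not part of the rigorous theory) replaces the Duhamel map by the scalar map ${\Phi(u) = u_0 + \varepsilon u^2}$, with the hypothetical scalar ${\varepsilon}$ playing the role of the factor ${\nu^{-1/2} T^{1/2}}$ in the estimates above; Picard iteration then converges geometrically when ${\varepsilon u_0}$ is small, and fails to converge once ${\varepsilon u_0 > 1/4}$, mirroring the restriction on ${T}$ in the above proof:

```python
# Toy scalar model of the contraction-mapping argument: the Duhamel map
# is replaced by Phi(u) = u0 + eps*u**2, where eps stands in for the
# factor nu^{-1/2} T^{1/2} appearing in the estimates above.  (Purely
# illustrative; u0 and eps are hypothetical scalars, not actual norms.)

def phi(u, u0, eps):
    return u0 + eps * u * u

def picard(u0, eps, iterations=100):
    """Iterate u_{n+1} = Phi(u_n), starting from u = 0."""
    u = 0.0
    for _ in range(iterations):
        u = phi(u, u0, eps)
    return u

u0, eps = 1.0, 0.1     # "short time": eps*u0 = 0.1 < 1/4
u = picard(u0, eps)
# The iteration converges to the smaller root of u = u0 + eps*u^2;
# the contraction constant near the fixed point is 2*eps*u < 1.
print(u, abs(u - phi(u, u0, eps)))
```

Note how shrinking ${\varepsilon}$ (i.e. shortening the time interval) restores the contraction no matter how large ${u_0}$ is, which is how a local existence time of the shape ${T \sim \nu / \|u_0\|_{H^s}^2}$ arises.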

Iterating this as in the proof of Theorem 8, we have

Theorem 38 (Maximal Cauchy development) Let ${s > \frac{d}{2}}$, and let ${u_0 \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ be divergence-free. Then there exists a time ${0 < T_* \leq \infty}$ and an ${H^s}$ mild solution ${u: [0,T_*) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ to (34), such that if ${T_* < \infty}$ then ${\|u(t)\|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} \rightarrow \infty}$ as ${t \rightarrow T_*^-}$. Furthermore, ${T_*}$ and ${u}$ are unique.

In principle, if the initial data ${u_0}$ belongs to multiple Sobolev spaces ${u_0 \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ the maximal time of existence ${T_*}$ could depend on ${s}$ (so that the solution exits different regularity classes ${H^s}$ at different times). However, this is not the case, because there is an ${s}$-independent blowup criterion:

Proposition 39 (Blowup criterion) Let ${s, u_0, T_*, u}$ be as in Theorem 38. If ${T_* < \infty}$, then ${\| u \|_{L^\infty_t L^\infty_x([0,T_*))} = \infty}$.

Note from Exercise 36 that ${\| u \|_{L^\infty_t L^\infty_x([0,T])}}$ is finite for any ${0 \leq T < T_*}$. This shows that ${T_*}$ is the unique time at which the ${L^\infty_t L^\infty_x}$ norm “blows up” (becomes infinite) and thus ${T_*}$ is independent of ${s}$.

Proof: Suppose for contradiction that ${T_* < \infty}$ but that the quantity ${A := \| u \|_{L^\infty_t L^\infty_x([0,T_*))}}$ is finite. Let ${0 < \tau_1 < \tau_2 < T_*}$ be parameters to be optimised later. We define the norm

$\displaystyle \|u\|_{X_{[\tau_1,\tau_2]}} := \| u \|_{C^0_t H^s_x([\tau_1,\tau_2] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{1/2} \| u \|_{L^2_t H^{s+1}_x([\tau_1,\tau_2] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}.$

As ${u}$ is a mild solution, this expression is finite.

We adapt the proof of Theorem 37. Using Exercise 29(d) (and Exercise 34) we have

$\displaystyle \|u\|_{X_{[\tau_1,\tau_2]}} \lesssim_{s,d} \| u(\tau_1) \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{-1/2} \| \mathbb{P}(u \otimes u) \|_{L^2_t H^s_x([\tau_1,\tau_2] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{d^2})}.$

Again we discard ${\mathbb{P}}$ and use (a variant of) (37) to conclude

$\displaystyle \|u\|_{X_{[\tau_1,\tau_2]}} \lesssim_{s,d} \| u(\tau_1) \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{-1/2} (\tau_2-\tau_1)^{1/2} \| u \otimes u \|_{L^\infty_t H^s_x([\tau_1,\tau_2] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{d^2})}.$

If we now use Proposition 35 in place of (35), we conclude that

$\displaystyle \|u\|_{X_{[\tau_1,\tau_2]}} \lesssim_{s,d} \| u(\tau_1) \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{-1/2} (\tau_2-\tau_1)^{1/2} A \| u \|_{L^\infty_t H^s_x([\tau_1,\tau_2] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$

$\displaystyle \lesssim_{s,d} \| u(\tau_1) \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \nu^{-1/2} (T_*-\tau_1)^{1/2} A \|u\|_{X_{[\tau_1,\tau_2]}}.$

If we choose ${\tau_1}$ to be sufficiently close to ${T_*}$ (depending on ${A}$ and ${\nu}$), we can absorb the second term on the RHS into the LHS and conclude that

$\displaystyle \|u\|_{X_{[\tau_1,\tau_2]}} \lesssim_{s,d} \| u(\tau_1) \|_{H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}.$

In particular, ${\|u(\tau_2)\|_{H^s_x({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}}$ stays bounded as ${\tau_2 \rightarrow T_*}$, contradicting Theorem 38. $\Box$

Corollary 40 (Existence of smooth solutions) If ${u_0: {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ is smooth and divergence-free then there is a ${0 < T_* \leq \infty}$ and a smooth periodic solution ${(u,p)}$ to the Navier-Stokes equations on ${[0,T_*) \times {\bf R}^d/{\bf Z}^d}$ with normalised pressure such that if ${T_*<\infty}$, then ${\| u \|_{L^\infty_t L^\infty_x([0,T_*))} = \infty}$. Furthermore, ${T_*}$ and ${(u,p)}$ are unique.

Proof: As discussed previously, we may assume without loss of generality that ${u_0}$ has mean zero. As ${u_0}$ is periodic and smooth, it lies in ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ for every ${s}$. From the preceding discussion we already have ${0 < T_* \leq \infty}$ and a function ${u: [0,T_*) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ that is an ${H^s}$ mild solution for every ${s > \frac{d}{2}}$, and with ${\|u\|_{L^\infty_t L^\infty_x([0,T_*))} = \infty}$ if ${T_*}$ is finite. It will suffice to show that ${u}$ is smooth, since we know from preceding discussion that a smooth solution to (33) can be converted to a smooth solution to (1).

By Exercise 29, one has

$\displaystyle \partial_t u = \nu \Delta u - \mathbb{P} \nabla \cdot (u \otimes u)$

in the sense of spacetime distributions. The right-hand side lies in ${C^0_t H^s_x}$ for every ${s}$, hence the left-hand side does also; this makes ${u}$ lie in ${C^1_t H^s_x}$. It is then easy to see that this implies that the right-hand side of the above equation lies in ${C^1_t H^s_x}$ for every ${s}$, and so ${u}$ now lies in ${C^2_t H^s_x}$ for every ${s}$. Iterating this (and using Sobolev embedding) we conclude that ${u}$ is smooth in space and time, giving the claim. $\Box$

Remark 41 When ${d=3}$, it is a notorious open problem whether the maximal lifespan ${T_*}$ given by the above corollary is always infinite.

Exercise 42 (Instantaneous smoothing) Let ${s > \frac{d}{2}}$, let ${u_0 \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ be divergence-free, and let ${u: [0,T_*) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ be the maximal Cauchy development provided by Theorem 38. Show that ${u}$ is smooth on ${(0,T_*) \times {\bf R}^d/{\bf Z}^d}$ (note the omission of the initial time ${t=0}$). (Hint: first show that ${t \mapsto u(t+\varepsilon)}$ is an ${H^{s+1}}$ mild solution for arbitrarily small ${0 < \varepsilon < T_*}$.)

Exercise 43 (Lipschitz continuous dependence on initial data) Let ${s > \frac{d}{2}}$, let ${T>0}$, and let ${u_0 \in H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ be divergence-free. Suppose one has an ${H^s}$ mild solution ${u: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ to the Navier-Stokes equations with initial data ${u_0}$. Show that there is a neighbourhood ${U}$ of ${u_0}$ in (the divergence-free elements of) ${H^s({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$, such that for every ${v_0 \in U}$, there exists an ${H^s}$ mild solution ${v: [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ to the Navier-Stokes equations with initial data ${v_0}$, with the map from ${v_0}$ to ${v}$ Lipschitz continuous (using the ${H^s}$ metric for the initial data ${v_0}$ and the ${C^0_t H^s}$ metric for the solution ${v}$).

Now we discuss the issue of relaxing the regularity condition ${s > \frac{d}{2}}$ in the above theory. The main inefficiency in the above arguments is the use of the crude estimate (37), which sacrifices some of the ${L^p}$ exponent in time in exchange for extracting a positive power of the lifespan ${T}$ that can be used to create a contraction mapping, as long as ${T}$ is small enough. It turns out that by using a different energy estimate than Exercise 29(d), one can avoid such an exchange, allowing one to construct solutions at lower regularity, and in particular at the “critical” regularity of ${H^{\frac{d}{2}-1}}$. Furthermore, in the category of smooth solutions, one can even achieve the desirable goal of ensuring that the time of existence ${T}$ is infinite – but only provided that the initial data is small. More precisely,

Proposition 44 Let ${u_0 \in H^{\frac{d}{2}-1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})^0}$ and let ${F \in L^1_t H^{\frac{d}{2}-1}_x([0,\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})^0}$. Then the function ${u: [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}}$ defined by the Duhamel formula

$\displaystyle u(t) = e^{\nu t\Delta} u_0 + \int_0^t e^{\nu(t-s)\Delta} F(s)\ ds$

also has mean zero for all ${t}$, and obeys the estimates

$\displaystyle \| u \|_{C^0_t H^{\frac{d}{2}-1}([0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} + \nu^{1/2} \| u \|_{L^2_t H^{\frac{d}{2}}_x([0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$

$\displaystyle + \nu^{1/2} \| u \|_{L^2_t L^\infty_x([0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$

$\displaystyle \lesssim_d \| u_0 \|_{H^{\frac{d}{2}-1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} + \|F \|_{L^1_t H^{\frac{d}{2}-1}_x([0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}.$

Proof: By Minkowski’s integral inequality, it will suffice to establish the bounds in the case ${F=0}$. The first two norms on the left-hand side are then already controlled by Exercise 29(d), so it remains to establish the estimate

$\displaystyle \nu^{1/2} \| u \|_{L^2_t L^\infty_x([0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \lesssim_d \| u_0 \|_{H^{\frac{d}{2}-1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}.$

By working with the rescaled function ${\tilde u(t,x) := u(\nu^{-1} t, x)}$ (and also rescaling ${\tilde F(t,x) = \nu^{-1} F(\nu^{-1} t, x)}$), we may normalise ${\nu=1}$. By a limiting argument we may assume without loss of generality that ${u_0}$ is Schwartz. We cannot directly apply Exercise 36 here due to the failure of endpoint Sobolev embedding; nevertheless we may argue as follows. For any ${t>0}$, we see from (31), the mean zero hypothesis, and the triangle inequality that

$\displaystyle \| u(t) \|_{L^\infty({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \leq \sum_{k \in {\bf Z}^d \backslash \{0\}} e^{- 4\pi^2 |k|^2 t} |\hat u_0(k)|.$

To proceed further, we choose an exponent ${\alpha}$ such that

$\displaystyle d-2 < \alpha < d \ \ \ \ \ (38)$

(for instance one can take ${\alpha=d-1}$), and use Cauchy-Schwarz to bound

$\displaystyle \| u(t) \|_{L^\infty({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \leq (\sum_{k \in {\bf Z}^d \backslash \{0\}} e^{- 4\pi^2 |k|^2 t} |k|^\alpha |\hat u_0(k)|^2)^{1/2}$

$\displaystyle \times (\sum_{k \in {\bf Z}^d \backslash \{0\}} e^{- 4\pi^2 |k|^2 t} |k|^{-\alpha})^{1/2}.$

As ${\alpha < d}$, one has the bound

$\displaystyle \sum_{k \in {\bf Z}^d \backslash \{0\}} e^{- 4\pi^2 |k|^2 t}|k|^{-\alpha} \lesssim_{d,\alpha} |t|^{-\frac{d}{2} + \frac{\alpha}{2}}$

(which can be verified using the integral test for ${t \leq 1}$, while for ${t \geq 1}$ the left-hand side decays exponentially in ${t}$, which is more than enough) and hence

$\displaystyle \| u(t) \|_{L^\infty({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})} \lesssim_{d,\alpha} |t|^{-\frac{d}{4} + \frac{\alpha}{4}} (\sum_{k \in {\bf Z}^d \backslash \{0\}} e^{- 4\pi^2 |k|^2 t} |k|^\alpha |\hat u_0(k)|^2)^{1/2}.$

Taking ${L^2_t}$ norms and using Fubini’s theorem, we conclude

$\displaystyle \| u \|_{L^2_t L^\infty_x([0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}^2$

$\displaystyle \lesssim_{d,\alpha} \sum_{k \in {\bf Z}^d \backslash \{0\}} |k|^\alpha (\int_0^\infty |t|^{-\frac{d}{2} + \frac{\alpha}{2}} e^{-4\pi^2 |k|^2 t}\ dt) |\hat u_0(k)|^2.$

As ${\alpha > d-2}$, the substitution ${s := 4\pi^2 |k|^2 t}$ shows that the inner integral is convergent and can be estimated as

$\displaystyle \int_0^\infty |t|^{-\frac{d}{2}+\frac{\alpha}{2}} e^{-4\pi^2 |k|^2 t}\ dt \lesssim_{d,\alpha} |k|^{d - 2 - \alpha}$

and the claim follows. $\Box$
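The key input in the above proof was the bound ${\sum_{k \neq 0} e^{- 4\pi^2 |k|^2 t}|k|^{-\alpha} \lesssim_{d,\alpha} t^{-\frac{d}{2} + \frac{\alpha}{2}}}$. As a quick numerical sanity check (a sketch only, with the sample choices ${d=2}$ and ${\alpha = d-1 = 1}$, which obey (38)), one can sum the lattice series directly and observe that ${t^{\frac{d-\alpha}{2}}}$ times the sum stays bounded as ${t \rightarrow 0}$:

```python
import math

# Numerical check (d = 2, alpha = 1, so d-2 < alpha < d) that
#   S(t) = sum_{k in Z^2, k != 0} exp(-4*pi^2*|k|^2*t) * |k|^(-alpha)
# grows no faster than t^{-(d-alpha)/2} = t^{-1/2} as t -> 0,
# i.e. that S(t) * sqrt(t) stays bounded.

def lattice_sum(t, alpha=1.0, cutoff=80):
    total = 0.0
    for kx in range(-cutoff, cutoff + 1):
        for ky in range(-cutoff, cutoff + 1):
            k2 = kx * kx + ky * ky
            if k2 == 0:
                continue  # the k = 0 term is excluded by the mean zero hypothesis
            total += math.exp(-4 * math.pi ** 2 * k2 * t) * k2 ** (-alpha / 2)
    return total

ratios = [lattice_sum(t) * math.sqrt(t) for t in (1e-2, 1e-3, 1e-4)]
print(ratios)  # roughly constant as t decreases, consistent with S(t) ~ t^{-1/2}
```

The cutoff of ${80}$ is an arbitrary truncation of the lattice sum, chosen so that the discarded tail is negligible at the smallest time scale sampled.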

This gives the following small data global existence result, also due to Fujita and Kato:

Theorem 45 (Small data global existence) Suppose that ${u_0 \in H^{\frac{d}{2}-1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ is divergence-free with norm at most ${\varepsilon_d \nu}$, where ${\varepsilon_d>0}$ is a sufficiently small constant depending only on ${d}$. Then there exists an ${H^{\frac{d}{2}-1}}$ mild solution to the Navier-Stokes equations on ${[0,+\infty)}$. Furthermore, if ${u_0}$ is smooth, then this mild solution is also smooth.

Proof: By working with the rescaled function ${\tilde u(t,x) := \nu^{-1} u(\nu^{-1} t, x)}$, we may normalise ${\nu=1}$. Let ${X}$ denote the Banach space of functions

$\displaystyle X = C^0_t H^{d/2-1}_x( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0 \cap L^2_t H^{d/2}_x( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0 \cap L^2_t L^\infty_x( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0$

with the obvious norm

$\displaystyle \|u\|_X = \|u\|_{C^0_t H^{d/2-1}_x( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \|u\|_{L^2_t H^{d/2}_x( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$

$\displaystyle + \|u\|_{L^2_t L^\infty_x( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}.$

Let ${\Phi}$ be the Duhamel operator (36). If ${u \in X}$, then by Proposition 44 and Proposition 35 one has

$\displaystyle \| \Phi(u) \|_X \lesssim_d \varepsilon_d + \| \mathbb{P}( \nabla \cdot (u \otimes u) ) \|_{L^1_t H^{\frac{d}{2}-1}( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$

$\displaystyle \lesssim_d \varepsilon_d + \| u \otimes u \|_{L^1_t H^{\frac{d}{2}}( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{d^2})}$

$\displaystyle \lesssim_d \varepsilon_d + \| u \|_{L^2_t H^{\frac{d}{2}}_x( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{d})} \| u \|_{L^2_t L^\infty_x( [0,+\infty) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^{d})}$

$\displaystyle \lesssim_d \varepsilon_d + \| u \|_X^2.$

In particular, ${\Phi}$ maps ${X}$ to ${X}$. A similar argument establishes the bound

$\displaystyle \| \Phi(u) - \Phi(v) \|_X \lesssim_d \| u-v\|_X ( \|u\|_X + \|v\|_X )$

for all ${u,v \in X}$. For ${\varepsilon_d}$ small enough, ${\Phi}$ will be a contraction on ${\overline{B(0, C_d \varepsilon_d)}}$ for some constant ${C_d}$ depending only on ${d}$, and hence has a fixed point ${u = \Phi(u)}$, which will be the desired ${H^{\frac{d}{2}-1}}$ mild solution.

Now suppose that ${u_0}$ is smooth. Let ${s > \frac{d}{2}}$, and let ${u: [0,T_*) \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d}$ be the maximal Cauchy development provided by Theorem 38. For any ${0 < T \leq T_*}$, if one defines

$\displaystyle \|u\|_{X_T} = \|u\|_{C^0_t H^{d/2-1}_x( [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \|u\|_{L^2_t H^{d/2}_x( [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$

$\displaystyle + \|u\|_{L^2_t L^\infty_x( [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$

then the preceding arguments give

$\displaystyle \|u\|_{X_T} \lesssim_d \varepsilon_d + \| u \|_{X_T}^2,$

thus either ${\|u\|_{X_T} \lesssim_d \varepsilon_d}$ or ${\|u\|_{X_T} \gtrsim_d 1}$. On the other hand, ${\|u\|_{X_T}}$ depends continuously on ${T}$ and converges to ${\|u_0\|_{H^{d/2-1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} \leq \varepsilon_d}$ as ${T \rightarrow 0}$. For ${\varepsilon_d}$ small enough, this implies that ${\|u\|_{X_T} \lesssim_d \varepsilon_d}$ for all ${T}$ (this is an example of a “continuity argument”). In particular, ${u}$ lies in ${X}$ with norm ${O(C_d \varepsilon_d)}$, and so agrees with the fixed point of ${\Phi}$ located previously. Next, if we set

$\displaystyle \|u\|_{X^s_T} = \|u\|_{C^0_t H^{s}_x( [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \|u\|_{L^2_t H^{s+1}_x( [0,T] \times {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$

then repeating the previous arguments also gives

$\displaystyle \| u \|_{X^s_T} \lesssim_d \| u_0 \|_{H^{s}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} + \| u \|_{X_T} \| u \|_{X^s_T};$

as ${\|u\|_{X^s_T}}$ is finite and ${\|u\|_{X_T} \lesssim_d \varepsilon_d}$, we conclude (for ${\varepsilon_d}$ small enough) that

$\displaystyle \| u \|_{X^s_T} \lesssim_d \| u_0 \|_{H^{s}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}.$

In particular we have

$\displaystyle \| u(t) \|_{H^{s}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)} \lesssim_d \| u_0 \|_{H^{s}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)}$

for all ${0 < t < T_*}$, and hence by Theorem 38 we have ${T_* = \infty}$. The argument used to prove Corollary 40 shows that ${u}$ is smooth, and the claim follows. $\Box$

Remark 46 Modifications of this argument also allow one to establish local existence of ${H^{\frac{d}{2}-1}}$ mild solutions when the initial data ${u_0}$ lies in ${H^{\frac{d}{2}-1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$, but has large norm rather than norm less than ${\varepsilon_d}$; see Exercise 55 below. However, in this case one does not have a lower bound on the time of existence that depends only on the norm of the data, as was the case with Theorem 37. Further modification of the argument also allows one to extend Theorem 38 to the entire “subcritical” range of regularities ${s > \frac{d}{2}-1}$. See the paper of Fujita and Kato for details.

We now turn attention to the non-periodic case in two and higher dimensions ${d \geq 2}$. The theory is largely identical, though with some minor technical differences. Unlike the periodic case, we will not attempt to reduce to the case of ${u}$ having mean zero (indeed, we will not even assume that ${u}$ is absolutely integrable, so that the mean might not even be well defined).

In the periodic case, we focused initially on smooth solutions. Smoothness is not sufficient by itself in the non-periodic setting to provide a good well-posedness theory, as we already saw in Section 3 when discussing the linear heat equation; some additional decay at spatial infinity is needed. There is some flexibility as to how much smoothness to prescribe. Let us say that a solution ${(u,p)}$ to Navier-Stokes is classical if ${u: [0,T] \times {\bf R}^d \rightarrow {\bf R}^d}$ and ${p: [0,T] \times {\bf R}^d \rightarrow {\bf R}}$ are smooth, and furthermore ${u}$ lies in ${C^0_t H^s_x( [0,T] \times {\bf R}^d \rightarrow {\bf R}^d )}$ for every ${s}$.

Now we work on normalising the pressure. Suppose ${(u,p)}$ is a classical solution. As before we may write the Navier-Stokes equation in divergence form as (32). Taking a further divergence we obtain the equation

$\displaystyle \Delta p = - \nabla \cdot \nabla \cdot (u \otimes u).$

The function ${u \otimes u}$ belongs to ${C^0_t H^s_x}$ for every ${s}$, so if we define the normalised pressure

$\displaystyle p_0 := - \Delta^{-1} (\nabla \cdot \nabla \cdot (u \otimes u))$

via the Fourier transform as

$\displaystyle \widehat{p_0}(\xi) = -\frac{\xi_i \xi_j}{|\xi|^2} \widehat{u_i u_j}(\xi)$

then ${p_0}$ will also belong to ${C^0_t H^s_x}$ for every ${s}$. We then have ${p = p_0 + h}$ for some harmonic function ${h: [0,T] \times {\bf R}^d \rightarrow {\bf R}}$ that is smooth in space and continuous in time (thus all spatial derivatives exist and are continuous in both space and time). To control this harmonic function, we return to (32), which we write as

$\displaystyle \nabla h = -\partial_t u + F$

where

$\displaystyle F := - \nabla \cdot (u \otimes u) + \nu \Delta u - \nabla p_0$

and apply the fundamental theorem of calculus to conclude that

$\displaystyle \int_{t_1}^{t_2} \nabla h(t)\ dt = - u(t_2) + u(t_1) + \int_{t_1}^{t_2} F(t)\ dt.$

The left-hand side is harmonic (thanks to differentiating under the integral sign), and the right-hand side lies in ${L^2({\bf R}^d \rightarrow {\bf R})}$ (in fact it is in ${H^s}$ for every ${s}$), hence both sides vanish. By the fundamental theorem of calculus this implies that ${\nabla h(t)}$ vanishes identically, thus ${h(t)}$ is constant in space. One can then subtract it from the pressure without affecting (1); also, we now have

$\displaystyle \nabla p = \nabla p_0 = - \nabla \Delta^{-1} (\nabla \cdot \nabla \cdot (u \otimes u))$

which implies that ${\nabla p \in C^0_t H^s_x}$ for all ${s}$, which by the Navier-Stokes equations implies that ${u \in C^1_t H^s_x}$ for all ${s}$; iterating this we find eventually that all time derivatives of ${p_0}$ exist in ${H^s_x}$, and hence ${p_0}$ is smooth. Thus, in the category of classical solutions, at least, we may assume without loss of generality that we have normalised pressure

$\displaystyle p_0 := - \Delta^{-1} (\nabla \cdot \nabla \cdot (u \otimes u))$

in which case the Navier-Stokes equations may be written as before as (33). (See also this paper of mine for some variants of this argument.)
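The Fourier multiplier formula for the normalised pressure is straightforward to implement numerically. The following sketch works on a discrete two-dimensional torus rather than ${{\bf R}^d}$ (so that the FFT can be used), and tests the multiplier against a Taylor-Green-type velocity field, for which the pressure can be computed in closed form by hand; the grid size ${N=64}$ is an arbitrary choice:

```python
import numpy as np

# Normalised pressure p_0 = -Delta^{-1}(div div (u tensor u)) on a discrete
# 2D torus, computed via the Fourier multiplier -xi_i xi_j / |xi|^2.
N = 64                                       # arbitrary grid size
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")

# Divergence-free Taylor-Green-type test field
u1 = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
u2 = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

k = np.fft.fftfreq(N, d=1.0 / N)             # integer frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX ** 2 + KY ** 2
K2[0, 0] = 1.0                               # avoid 0/0; the zero mode is set below

# p_0-hat(k) = -(k_i k_j / |k|^2) (u_i u_j)-hat(k), summed over i, j
p_hat = -(KX * KX * np.fft.fft2(u1 * u1)
          + 2 * KX * KY * np.fft.fft2(u1 * u2)
          + KY * KY * np.fft.fft2(u2 * u2)) / K2
p_hat[0, 0] = 0.0                            # mean-zero normalisation of p_0
p0 = np.real(np.fft.ifft2(p_hat))

# Closed-form pressure for this field (computed by hand, mean-zero)
p_exact = 0.25 * (np.cos(4 * np.pi * X) + np.cos(4 * np.pi * Y))
print(float(np.max(np.abs(p0 - p_exact))))
```

The multiplier is applied mode by mode; setting the zero mode to vanish implements the mean-zero normalisation of ${p_0}$, and the agreement with the closed-form pressure is at the level of machine precision since the test field has only finitely many Fourier modes.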

Exercise 47 (Uniqueness with normalised pressure) Let ${(u_1,p_1), (u_2,p_2)}$ be two classical solutions to (1) on ${[0,T] \times {\bf R}^d}$ with normalised pressure such that ${u_1(0)=u_2(0)}$. Show that ${(u_1,p_1)=(u_2,p_2)}$.

We can now define the notion of a Fujita-Kato ${H^s}$ mild solution as before, except that we replace all mention of the torus ${{\bf R}^d/{\bf Z}^d}$ with the Euclidean space ${{\bf R}^d}$, and omit all requirements for the solution to be of mean zero. As stated in the appendix, the product estimate in Proposition 35 continues to hold in ${{\bf R}^d}$, so one can obtain the analogue of Theorem 37, Theorem 38, Proposition 39, and Corollary 40 on ${{\bf R}^d}$ by repeating the proofs with the obvious changes; we leave the details as an exercise for the interested reader.

Exercise 48 Establish an analogue of Proposition 44 on ${{\bf R}^d}$, using the homogeneous Sobolev space ${\dot H^{\frac{d}{2}-1}({\bf R}^d \rightarrow {\bf R})}$ defined to be the closure of the Schwartz functions ${f: {\bf R}^d \rightarrow {\bf R}}$ with respect to the norm

$\displaystyle \| f \|_{\dot H^{\frac{d}{2}-1}({\bf R}^d \rightarrow {\bf R})} := (\int_{{\bf R}^d} |\xi|^{d-2} |\hat f(\xi)|^2\ d\xi)^{1/2},$

and use this to state and prove an analogue of Theorem 45.
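A defining feature of the critical space ${\dot H^{\frac{d}{2}-1}}$ is that its norm is invariant under the rescaling ${u_0(x) \mapsto \lambda u_0(\lambda x)}$ of the initial data. The following sketch (with ${d=3}$, so that the critical space is ${\dot H^{1/2}}$, and a hypothetical Gaussian profile ${\hat f(\xi) = e^{-\pi |\xi|^2}}$, evaluated by a crude radial Riemann sum) illustrates this invariance numerically:

```python
import math

# The Hdot^{1/2}(R^3) norm of f_lambda(x) = lambda * f(lambda * x) should be
# independent of lambda.  Here f-hat(xi) = exp(-pi*|xi|^2) is a hypothetical
# Gaussian profile, so f_lambda-hat(xi) = lambda^{-2} exp(-pi*|xi|^2/lambda^2),
# and the squared norm is the radial integral of 4*pi*r^3*|f_lambda-hat(r)|^2.

def hdot_half_norm_sq(lam, n=20000, dr=1e-3):
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        fhat = lam ** -2 * math.exp(-math.pi * r * r / (lam * lam))
        total += 4 * math.pi * r ** 3 * fhat ** 2 * dr
    return total

vals = [hdot_half_norm_sq(lam) for lam in (1.0, 2.0, 4.0)]
print(vals)  # approximately equal, independent of the scaling parameter
```

For this particular profile the common value can also be computed exactly to be ${\frac{1}{2\pi}}$, which the Riemann sum reproduces to several digits.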

— 5. Heuristics —

There are several further extensions of these types of local and global existence results for smooth solutions, in which the role of the Sobolev spaces ${H^s}$ here is played by other function spaces. For instance, in three dimensions in the non-periodic setting, the role of the critical space ${\dot H^{1/2}({\bf R}^3 \rightarrow {\bf R}^3)}$ was extended to the larger critical space ${L^3({\bf R}^3 \rightarrow {\bf R}^3)}$ by Kato, and to the even larger space ${BMO^{-1}({\bf R}^3 \rightarrow {\bf R}^3)}$ by Koch and Tataru, who also gave evidence that the latter space is essentially the limit of the method; in even larger spaces such as the Besov space ${B^{-1}_{\infty,\infty}}$, there are constructions of Bourgain and Pavlovic that demonstrate ill-posedness in the sense of “norm inflation” – solutions that start from initial data of arbitrarily small norm but become arbitrarily large in norm in arbitrarily small amounts of time. (This grossly abbreviated history skips over dozens of other results, both positive and negative, in yet further function spaces, such as Morrey spaces or Besov spaces. See for instance the recent text of Lemarie-Rieusset for a survey.)

Rather than detail these other results, let us present instead a scaling heuristic which can be used to interpret these results (and can clarify why all the positive well-posedness results discussed here involve either “subcritical” or “critical” function spaces, rather than “supercritical” ones). For simplicity we restrict our discussion to the non-periodic setting ${{\bf R}^d}$, although the discussion here could also be adapted without much difficulty to the periodic setting (which effectively just imposes an additional constraint ${N \gtrsim 1}$ on the frequency parameter ${N}$ to be introduced below).

In this heuristic discussion, we assume that at any given time ${t}$, the velocity field ${u(t)}$ is primarily located at a certain frequency ${N = N(t)}$ (or equivalently, at a certain wavelength ${1/N = 1/N(t)}$) in the sense that the spatial Fourier transform ${\hat u(t)}$ is largely concentrated in the region ${|\xi| \sim N}$. We also assume that at this time, the solution has an amplitude ${A(t)}$, in the sense that ${u(t,x)}$ tends to be of order ${A(t)}$ in magnitude in the region where it is concentrated. (We are deliberately leaving terms such as “concentrated” vague for the purposes of this discussion.) Using this ansatz, one can then heuristically compute the magnitude of various terms in the Navier-Stokes equations (1) or the projected version (33). For instance, if ${u}$ has amplitude ${\sim A}$ and frequency ${\sim N}$, then ${\Delta u}$ should have amplitude ${\sim AN^2}$ (and frequency ${\sim N}$), since the Laplacian operator ${\Delta}$ multiplies the Fourier transform ${\hat u(t,\xi)}$ by ${4\pi^2 |\xi|^2 \sim N^2}$; one can also take a more “physical space” viewpoint and view the second derivatives in ${\Delta}$ as being roughly like dividing out by the wavelength ${1/N}$ twice. Thus we see that the viscosity term ${\nu \Delta u}$ in (1) or (33) should have size about ${\nu AN^2}$. Similarly, the expression ${u \otimes u}$ in (33) should have magnitude ${\sim A^2}$ and frequency ${\sim N}$ (or maybe slightly less due to cancellation), so ${\nabla \cdot (u \otimes u)}$ and hence ${\mathbb{P} \nabla \cdot (u \otimes u)}$ should have magnitude ${\sim A^2 N}$. The terms ${(u \cdot \nabla) u}$ and ${\nabla p}$ in (1) can similarly be computed to have magnitude ${\sim A^2 N}$. Finally, if the solution oscillates (or blows up) in time in intervals of length ${T}$ (which one can think of as the natural time scale for the solution), then the term ${\partial_t u}$ should have magnitude ${\sim A/T}$.

This leads to the following heuristics:

• If ${\nu A N^2 \gg A^2 N}$ (or equivalently if ${A \ll \nu N}$), then the viscosity term ${\nu \Delta u}$ dominates the nonlinear terms in (1) or (33), and one should expect the Navier-Stokes equations to behave like the heat equation (23) in this regime. In particular solutions should exist and maintain (or even improve) their regularity as long as this regime persists. To balance the equation (1) or (33), one expects ${A/T \sim \nu A N^2}$, so the natural time scale here is ${T \sim \frac{1}{\nu N^2}}$.
• If ${\nu A N^2 \ll A^2 N}$ (or equivalently if ${A \gg \nu N}$), then nonlinear effects dominate, and the behaviour is likely to be quite different to that of the heat equation. One now expects ${A/T \sim A^2 N}$, so the natural time scale here is ${T \sim \frac{1}{AN}}$. In particular, one could theoretically have blowup or other bad behaviour after this time scale.
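These magnitude heuristics are exact for a single plane wave. The following sketch (a bookkeeping device only, with a hypothetical one-dimensional profile ${u(x) = A \sin(2\pi N x)}$ and all implied constants set to one) confirms the predicted sizes ${\sim AN^2}$ and ${\sim A^2 N}$ of the viscosity and nonlinear terms, and packages the resulting dichotomy into a crude classifier:

```python
import math

# For the plane wave u(x) = A*sin(2*pi*N*x), the heuristics predict
#   sup |Delta u| = 4*pi^2 * A * N^2   (each derivative costs a factor ~ N)
#   sup |u u_x|   = pi * A^2 * N       (the nonlinearity is quadratic in A).
A, N, nu = 3.0, 16, 1.0                      # hypothetical sample values
xs = [i / 4096 for i in range(4096)]

lap_u = [-A * (2 * math.pi * N) ** 2 * math.sin(2 * math.pi * N * x) for x in xs]
u_ux = [A * math.sin(2 * math.pi * N * x)
        * A * 2 * math.pi * N * math.cos(2 * math.pi * N * x) for x in xs]

lap_size = max(abs(v) for v in lap_u)
nonlin_size = max(abs(v) for v in u_ux)
print(lap_size / (A * N ** 2), nonlin_size / (A ** 2 * N))  # O(1) constants

def regime(A, N, nu):
    """Heuristic dichotomy: dominant term and the natural time scale."""
    if nu * A * N ** 2 > A ** 2 * N:         # A < nu*N: heat-like behaviour
        return "viscosity-dominated", 1.0 / (nu * N ** 2)
    return "nonlinear", 1.0 / (A * N)        # A > nu*N: nonlinearity dominates

print(regime(A=0.1, N=100.0, nu=1.0))
print(regime(A=1000.0, N=10.0, nu=1.0))
```

Of course, a real solution is a superposition of many such waves, and the point of the rigorous estimates earlier in these notes is precisely to control the interactions that this toy calculation ignores.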

As a general rule of thumb, the known well-posedness theory for the Navier-Stokes equations is only applicable when the hypotheses on the initial data (and on the timescale being considered) are compatible either with the viscosity-dominated regime ${A \ll \nu N}$, or the time-limited regime ${T \ll \frac{1}{AN}}$. Outside of these regimes, we expect the evolution to be highly nonlinear in nature, and techniques such as the ones in this set of notes, which are primarily based on approximating the evolution by the linear heat flow, are not expected to apply.

Let’s discuss some of the results in this set of notes using these heuristics. Suppose we are given that the initial data ${u_0}$ is bounded in ${H^s}$ norm by some bound ${B}$:

$\displaystyle \| u_0 \|_{H^s({\bf R}^d \rightarrow {\bf R}^d)} \leq B.$

As in the above heuristics, we assume that ${u_0}$ exhibits some amplitude ${A}$ and frequency ${N}$. Heuristically, the ${H^s}$ norm of ${u_0}$ should resemble ${(1+N)^s}$ times the ${L^2}$ norm of ${u_0}$, which should be roughly ${A V^{1/2}}$, where ${V}$ is the volume of the region in which ${u_0}$ is concentrated. Thus we morally have a bound of the form

$\displaystyle (1+N)^s A V^{1/2} \lesssim B.$

To use this bound, we invoke (at a heuristic level) the uncertainty principle ${\Delta x \Delta \xi \gtrsim 1}$, which indicates that the data ${u_0}$ should be spatially spread out at a scale of at least the wavelength ${1/N}$, which implies that the volume ${V}$ should be at least ${\gtrsim 1/N^d}$. Thus we have

$\displaystyle (1+N)^s N^{-\frac{d}{2}} A \lesssim B.$

Suppose that ${s \geq \frac{d}{2}}$; then we have the crude bound

$\displaystyle (1+N)^s \geq N^{\frac{d}{2}}, \ \ \ \ \ (39)$

so we expect to have an amplitude bound ${A \lesssim B}$. If we are in the nonlinear regime ${A \gg \nu N}$, this implies that ${AN \lesssim B^2 / \nu}$, and so the natural time scale ${T \sim \frac{1}{AN}}$ here is lower bounded by ${T \gtrsim \frac{\nu}{B^2}}$. This matches up with the local existence time given in Theorem 37 (or the non-periodic analogue of this theorem). However, the use of the crude bound (39) suggests that one can make improvements to this bound when ${s}$ is far from ${\frac{d}{2}}$:

Exercise 49 If ${\frac{d}{2}-1 < s < \frac{d}{2} + 1}$, make a heuristic argument as to why the optimal lower bound for the time of existence ${T}$ for the Navier-Stokes equation in terms of the ${H^s({\bf R}^d \rightarrow {\bf R}^d)}$ norm of the initial data ${u_0}$ should take the form

$\displaystyle T \gtrsim \frac{1}{\nu (\| u_0 \|_{H^s({\bf R}^d \rightarrow {\bf R}^d)}/\nu)^{\frac{2}{s-\frac{d}{2}+1}}}.$
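As a sanity check on this heuristic bound (a sketch only, with the exponent taken verbatim from the display above and all implied constants set to one), one can confirm that it reduces to the ${T \gtrsim \nu/B^2}$ bound of Theorem 37 at ${s = \frac{d}{2}}$, and that the guaranteed time degenerates for large data as ${s}$ approaches the critical exponent ${\frac{d}{2}-1}$:

```python
# Heuristic lower bound on the existence time,
#   T ~ 1 / (nu * (B/nu)**(2/(s - d/2 + 1))),   B = ||u_0||_{H^s},
# with all implied constants set to one (a bookkeeping sketch only).
def time_lower_bound(B, nu, s, d):
    return 1.0 / (nu * (B / nu) ** (2.0 / (s - d / 2 + 1)))

B, nu, d = 4.0, 0.5, 3                       # hypothetical sample values
# At s = d/2 this recovers the subcritical bound T ~ nu / B^2 of Theorem 37:
print(time_lower_bound(B, nu, s=d / 2, d=d), nu / B ** 2)
# As s decreases towards the critical exponent d/2 - 1, the guaranteed time
# shrinks rapidly for large data B > nu:
print([time_lower_bound(B, nu, s=d / 2 - 1 + e, d=d) for e in (1.0, 0.5, 0.25)])
```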

In a similar spirit, suppose we have the smallness hypothesis

$\displaystyle \| u_0 \|_{\dot H^{\frac{d}{2}-1}({\bf R}^d \rightarrow {\bf R}^d)} \leq \varepsilon_d \nu$

on the critical norm ${\| u_0 \|_{\dot H^{\frac{d}{2}-1}({\bf R}^d \rightarrow {\bf R}^d)}}$, then a similar analysis to above leads to

$\displaystyle N^{\frac{d}{2}-1} N^{-\frac{d}{2}} A \lesssim \varepsilon_d \nu$

and hence we will be in the viscosity dominated regime ${A \ll \nu N}$ if ${\varepsilon_d}$ is small enough, regardless of what time scale ${T}$ one uses; this is consistent with the global existence result in Theorem 45. On the other hand, if the norm ${\| u_0 \|_{\dot H^{\frac{d}{2}-1}({\bf R}^d \rightarrow {\bf R}^d)}}$ is much larger than ${\nu}$, then ${A}$ can be larger than ${\nu N}$, and we can fail to be in the viscosity dominated regime at any choice of frequency ${N}$; setting ${A}$ to be a large multiple of ${\nu N}$ and sending ${N}$ to infinity, we see that the natural time scale ${T}$ could be arbitrarily small.

Finally, if one only controls a supercritical norm such as ${\| u_0 \|_{H^{\frac{d}{2}-1-\delta}({\bf R}^d \rightarrow {\bf R}^d)}}$ for some ${\delta > 0}$, this gives a bound on a quantity of the form ${A N^{-1-\delta}}$, which allows one to leave the viscosity dominated regime ${A \ll \nu N}$ (with plenty of room to spare) when ${N}$ is large, creating examples of initial data for which the natural time scale can be made arbitrarily small. As ${N}$ increases (restricting to, say, powers of two), the supercritical norm of these examples decays geometrically, so one can superimpose an infinite number of these examples together, leading to a choice of initial data with arbitrarily small supercritical norm for which the natural time scale is in fact zero. This strongly suggests that there is no good local well-posedness theory at such regularities.
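The superposition construction can be made slightly more concrete with a hedged sketch: taking hypothetical “bubbles” of amplitude ${A_j = 10 \nu N_j}$ at frequencies ${N_j = 2^j}$ (the factor ${10}$ being an arbitrary choice that places each bubble outside the viscosity-dominated regime), the heuristic supercritical norm ${A_j N_j^{-1-\delta}}$ is summable while the natural timescales ${\frac{1}{A_j N_j}}$ tend to zero:

```python
# Superposition heuristic at a supercritical regularity s = d/2 - 1 - delta:
# each "bubble" at frequency N_j = 2^j has amplitude A_j = 10*nu*N_j, placing
# it outside the viscosity-dominated regime A << nu*N, yet its heuristic
# supercritical norm A_j * N_j^{-1-delta} decays geometrically in j.
nu, delta = 1.0, 0.5                         # hypothetical sample values
freqs = [2.0 ** j for j in range(1, 21)]
amps = [10.0 * nu * N for N in freqs]

norms = [A * N ** (-1 - delta) for A, N in zip(amps, freqs)]
timescales = [1.0 / (A * N) for A, N in zip(amps, freqs)]

print(sum(norms))       # finite: the total supercritical norm stays bounded
print(min(timescales))  # the natural time scale tends to zero with j
```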

Exercise 50 Discuss the product estimate in Proposition 35, the Sobolev estimate in Exercise 36, and the energy estimates in Exercise 29(d) and Proposition 44 using the above heuristics.

Remark 51 These heuristics can also be used to locate errors in many purported solutions to the Navier-Stokes global regularity problem that proceed through a sequence of estimates on a Navier-Stokes solution. At some point, the estimates have to rule out the scenario that the solution ${u}$ leaves the viscosity-dominated regime ${A \ll \nu N}$ at larger and larger frequencies ${N}$ (and at smaller and smaller time scales ${T}$), with the time scales converging to zero to achieve a finite time blowup. If the estimates in the proposed solution are strong enough to heuristically rule out this scenario by the end of the argument, but not at the beginning of the argument, then there must be some step inside the argument where one moves from “supercritical” estimates that are too weak to rule out this scenario, to “critical” or “subcritical” estimates which are capable of doing so. This step is often where the error in the argument may be found.

The above heuristics are closely tied to the classification of various function space norms as being “subcritical”, “supercritical”, or “critical”. Roughly speaking, a norm is subcritical if bounding that norm heuristically places one in the linear-dominated regime (which, for Navier-Stokes, is the viscosity-dominated regime) at high frequencies; critical if control of the norm very nearly places one in the linear-dominated regime at high frequencies; and supercritical if control of the norm completely fails to place one in the linear-dominated regime at high frequencies. When the equation in question enjoys a scaling symmetry, the distinction between subcritical, supercritical, and critical norms can be made by seeing how the top-order component of these norms varies with respect to scaling a function to be high frequency. In the case of the Navier-Stokes equations (1), the scaling ${(u,p) \mapsto (u_\lambda,p_\lambda)}$ is given by the formulae

$\displaystyle u_\lambda(t,x) := \lambda u( \lambda^2 t, \lambda x) \ \ \ \ \ (40)$

$\displaystyle p_\lambda(t,x) := \lambda^2 p(\lambda^2 t, \lambda x) \ \ \ \ \ (41)$

with the initial data ${u_0}$ similarly being scaled to

$\displaystyle u_{0,\lambda}(x) := \lambda u_0(\lambda x).$

Here ${\lambda > 0}$ is a scaling parameter; as ${\lambda \rightarrow \infty}$, the functions ${u_\lambda, p_\lambda, u_{0,\lambda}}$ are being sent to increasingly fine scales (i.e., high frequencies). One easily checks that if ${(u,p)}$ solves the Navier-Stokes equations (1) with initial data ${u_0}$, then ${(u_\lambda,p_\lambda)}$ solves the same equations with initial data ${u_{0,\lambda}}$; similarly for other formulations of the Navier-Stokes equations such as (33) or (34). (In terms of the parameters ${A,N,T,V}$ from the previous heuristic discussion, this scaling corresponds to the map ${(A,N,T,V) \mapsto (\lambda A, \lambda N, \lambda^{-2} T, \lambda^{-d} V)}$.)

Typically, if one considers a function space norm of ${u_{0,\lambda}}$ (or of ${u_\lambda}$ or ${p_\lambda}$) in the limit ${\lambda \rightarrow \infty}$, the top order behaviour will be given by some power ${\lambda^\alpha}$ of ${\lambda}$. A norm is called subcritical if the exponent ${\alpha}$ is positive, supercritical if the exponent is negative, and critical if the exponent is zero. For instance, one can calculate the Fourier transform

$\displaystyle \widehat{u_{0,\lambda}}(\xi) = \lambda^{1-d} \hat u_0(\xi/\lambda)$

and hence

$\displaystyle \| u_{0,\lambda} \|_{H^s({\bf R}^d \rightarrow {\bf R}^d)}^2 = \lambda^{2-2d} \int_{{\bf R}^d} \langle \xi \rangle^{2s} |\hat u_0(\xi/\lambda)|^2\ d\xi$

$\displaystyle = \lambda^{2-d} \int_{{\bf R}^d} \langle \lambda \xi \rangle^{2s} |\hat u_0(\xi)|^2\ d\xi.$

As ${\lambda \rightarrow \infty}$, this expression behaves like ${\lambda^{2-d+2s}}$ to top order; hence the ${H^s}$ norm is subcritical when ${s > \frac{d}{2}-1}$, supercritical when ${s < \frac{d}{2}-1}$, and critical when ${s = \frac{d}{2}-1}$.
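This scaling computation is easy to sanity-check numerically. The following sketch (illustrative only: the grid size, the Gaussian datum, and the tolerance are ad hoc choices) approximates the ${H^s({\bf R})}$ norm via the FFT and confirms that ${\| u_{0,\lambda} \|_{H^s}}$ grows like ${\lambda^{s+1-\frac{d}{2}}}$ in dimension ${d=1}$:

```python
import numpy as np

# One-dimensional grid fine enough to resolve u_{0,lam}(x) = lam * u0(lam x) at lam = 64
n, L = 2**16, 20.0
x = (np.arange(n) - n // 2) * (L / n)
dx = L / n
xi = np.fft.fftfreq(n, d=dx)   # frequency variable for the e^{2 pi i x xi} convention

def Hs_norm(f, s):
    # H^s(R) norm, ||f||^2 = int <xi>^{2s} |f_hat(xi)|^2 dxi, approximated by the FFT
    fhat = np.fft.fft(f) * dx
    return np.sqrt(np.sum((1 + xi**2) ** s * np.abs(fhat) ** 2) / L)

def u0(y):
    return np.exp(-y**2)       # a Schwartz-class initial datum

s, d = 1.0, 1
norms = [Hs_norm(lam * u0(lam * x), s) for lam in (32.0, 64.0)]
measured = np.log2(norms[1] / norms[0])   # growth exponent across one doubling of lam
assert abs(measured - (s + 1 - d / 2)) < 0.1
```

The exponent read off from one doubling of ${\lambda}$ already matches ${s+1-\frac{d}{2}}$ to within a few percent; the lower-order (subcritical) part of the ${H^s}$ norm accounts for the small discrepancy.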

Another way to phrase this classification is to use dimensional analysis. If we use ${L}$ to denote the unit of length, and ${T}$ the unit of time, then the velocity field ${u}$ should have units ${L T^{-1}}$, and the terms ${\partial_t u}$ and ${(u \cdot \nabla) u}$ in (1) then have units ${L T^{-2}}$. To be dimensionally consistent, the kinematic viscosity ${\nu}$ must then have the units ${L^2 T^{-1}}$, and the pressure ${p}$ should have units ${L^2 T^{-2}}$. (This differs from the usual units given in physics to the pressure, which is ${M L^{2-d} T^{-2}}$ where ${M}$ is the unit of mass; the discrepancy comes from the choice to normalise the density, which usually has units ${M L^{-d}}$, to equal ${1}$.) If we fix ${\nu}$ to be a dimensionless constant such as ${1}$, this forces a relation ${T = L^2}$ between the time and length units, so now ${u}$ and ${p}$ have the units ${L^{-1}}$ and ${L^{-2}}$ respectively (compare with (40) and (41)). Of course ${u_0}$ will then also have units ${L^{-1}}$. One can then declare a function space norm of ${u_0}$, ${u}$, or ${p}$ to be subcritical if its top order term has units of a negative power of ${L}$, supercritical if this is a positive power of ${L}$, and critical if it is dimensionless. For instance, the top order term in ${\| u_{0} \|_{H^s({\bf R}^d \rightarrow {\bf R}^d)}}$ is the ${L^2}$ norm of ${|\nabla|^s u_0}$; as ${|\nabla|^s u_0}$ has the units of ${L^{-1-s}}$, and Lebesgue measure ${dx}$ has the units of ${L^d}$, we see that ${\| u_{0} \|_{H^s({\bf R}^d \rightarrow {\bf R}^d)}}$ has the units of ${L^{-1-s} L^{d/2} = L^{\frac{d}{2}-1-s}}$, giving the same division into subcritical, supercritical, and critical spaces as before.
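This unit bookkeeping can be mechanized. The toy check below (units encoded as exponent pairs ${(a,b)}$ standing for ${L^a T^b}$; the encoding itself is our own convention, not anything in the text) verifies that all four terms of the first equation in (1) carry the units ${L T^{-2}}$, and that normalising ${\nu}$ to be dimensionless (so that ${T = L^2}$) gives ${u}$ and ${p}$ the units ${L^{-1}}$ and ${L^{-2}}$:

```python
# Units encoded as exponent pairs (a, b), meaning L^a T^b; products add exponents.
def mul(*units):
    return tuple(map(sum, zip(*units)))

u    = (1, -1)   # velocity: L T^-1
grad = (-1, 0)   # spatial derivative: L^-1
dt   = (0, -1)   # time derivative: T^-1
nu   = (2, -1)   # kinematic viscosity: L^2 T^-1
p    = (2, -2)   # pressure (with the density normalised to 1): L^2 T^-2

# all four terms of the first equation in (1) must share the units L T^-2
assert mul(dt, u) == mul(u, grad, u) == mul(nu, grad, grad, u) == mul(grad, p) == (1, -2)

# fixing nu to be dimensionless forces T = L^2, so L^a T^b collapses to L^(a+2b)
to_length = lambda ab: ab[0] + 2 * ab[1]
assert to_length(u) == -1 and to_length(p) == -2 and to_length(nu) == 0
```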

— 6. Appendix: some Littlewood-Paley theory —

We now prove Proposition 35. By a limiting argument it suffices to establish the claim for smooth ${u,v}$. The claim is immediate from Hölder’s inequality when ${s=0}$, so we will assume ${s>0}$. For brevity we shall abbreviate ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$ as ${L^2}$, and similarly for ${H^s}$, etc.

We use the technique of Littlewood-Paley projections. Let ${\phi: {\bf R}^d \rightarrow {\bf R}}$ be an even bump function (depending only on ${d}$) that equals ${1}$ on ${B(0,1/2)}$ and is supported on ${B(0,1)}$; for the purposes of asymptotic notation, any bound that depends on ${\phi}$ can thus be thought of as depending on ${d}$ instead. For any dyadic integer ${N}$ (by which we mean an integer that is a power of ${2}$), define the Littlewood-Paley projections ${P_{\leq N}, P_N}$ on periodic smooth functions ${f: {\bf R}^d/{\bf Z}^d \rightarrow {\bf R}}$ by the formulae

$\displaystyle P_{\leq N} f(x) := \sum_{k \in {\bf Z}^d} \phi( k/N ) \hat f(k) e^{2\pi i k \cdot x}$

and (for ${N > 1}$)

$\displaystyle P_N := P_{\leq N} - P_{\leq N/2}$

so one has the Littlewood-Paley decomposition

$\displaystyle f = P_{\leq 1} f + \sum_{N>1} P_N f.$

Here and in the sequel ${N}$ is always understood to be restricted to be a dyadic integer.
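For concreteness, here is a minimal numerical realisation of these projections on the one-dimensional torus (a sketch only: a ${\cos^2}$ cutoff stands in for the bump function ${\phi}$, and the grid size is an ad hoc choice). It checks that the telescoping decomposition recovers ${f}$ exactly once the frequency support of ${f}$ is exhausted:

```python
import numpy as np

def bump(xi):
    # even cutoff equal to 1 on |xi| <= 1/2 and 0 on |xi| >= 1,
    # with a cos^2 transition (a stand-in for a genuine smooth bump phi)
    a = np.abs(xi)
    return np.where(a <= 0.5, 1.0,
                    np.where(a >= 1.0, 0.0, np.cos(np.pi * (a - 0.5)) ** 2))

def P_leq(f, N):
    # P_{<=N} f: multiply the k-th Fourier coefficient of f by phi(k/N)
    n = len(f)
    k = np.fft.fftfreq(n) * n          # integer frequencies on the torus R/Z
    return np.real(np.fft.ifft(np.fft.fft(f) * bump(k / N)))

def P(f, N):
    # P_N := P_{<=N} - P_{<=N/2} for dyadic N > 1
    return P_leq(f, N) - P_leq(f, N // 2)

# random real trigonometric polynomial with frequencies up to 32
rng = np.random.default_rng(0)
n = 256
x = np.arange(n) / n
f = sum(rng.normal() * np.cos(2 * np.pi * k * x + rng.normal()) for k in range(33))

# telescoping: P_{<=1} f + sum_{1 < N <= 64} P_N f = P_{<=64} f = f,
# since phi(k/64) = 1 on the frequency support |k| <= 32
g = P_leq(f, 1)
N = 2
while N <= 64:
    g = g + P(f, N)
    N *= 2
assert np.allclose(f, g)
```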

The key point of this decomposition is that the ${L^p}$ and Sobolev norms of the individual components of this decomposition are easier to estimate than the original function ${f}$. The following estimates in particular will suffice for our applications:

Exercise 52 (Basic Littlewood-Paley estimates)

• (a) For any dyadic integer ${N}$, show that

$\displaystyle P_{\leq N} f(x) = \int_{{\bf R}^d} \check \phi(y) f(x - \frac{y}{N})\ dy$

where ${\check \phi(x) := \int_{{\bf R}^d} \phi(\xi) e^{2\pi i \xi \cdot x}\ d\xi}$ is the inverse Fourier transform of ${\phi}$ on ${{\bf R}^d}$, and the difference ${x - \frac{y}{N} \in {\bf R}^d/{\bf Z}^d}$ between the coset ${x \in {\bf R}^d/{\bf Z}^d}$ and the shift ${\frac{y}{N} \in {\bf R}^d}$ is defined in the obvious fashion. In particular, if ${f}$ is real-valued then so are ${P_{\leq N} f}$ and ${P_N f}$. Conclude the Bernstein inequality

$\displaystyle \| P_{\leq N} f \|_{L^q} \lesssim_{d} N^{\frac{d}{p}-\frac{d}{q}} \| f \|_{L^p}$

for all smooth functions ${f: {\bf R}^d/{\bf Z}^d \rightarrow{\bf R}}$, all ${N \geq 1}$ and ${1 \leq p \leq q \leq \infty}$; in particular

$\displaystyle \| P_{\leq N} f \|_{L^p} \lesssim_{d} \| f \|_{L^p}.$

(Recall our convention that constants that depend on ${\phi}$ can also be thought of as depending just on ${d}$.) By the triangle inequality, the same estimates also hold for ${P_N}$, ${N > 1}$.

• (b) For any ${s \geq 0}$, show that

$\displaystyle \| f \|_{H^s} \sim_{d,s} \| P_{\leq 1} f \|_{L^2} + \left( \sum_{N>1} N^{2s} \| P_N f \|_{L^2}^2\right)^{1/2}$
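This square-function characterisation of the ${H^s}$ norm can also be tested numerically. In the sketch below (again with a ${\cos^2}$ cutoff standing in for ${\phi}$, and with deliberately generous constants in the final assertion rather than the optimal ${\sim_{d,s}}$ constants), the ${H^s}$ norm computed directly from Fourier coefficients is compared against the Littlewood-Paley side:

```python
import numpy as np

def bump(xi):
    # cos^2 stand-in for the bump phi: 1 on |xi| <= 1/2, 0 on |xi| >= 1
    a = np.abs(xi)
    return np.where(a <= 0.5, 1.0,
                    np.where(a >= 1.0, 0.0, np.cos(np.pi * (a - 0.5)) ** 2))

def P_leq(f, N):
    n = len(f)
    k = np.fft.fftfreq(n) * n
    return np.real(np.fft.ifft(np.fft.fft(f) * bump(k / N)))

def P(f, N):
    return P_leq(f, N) - P_leq(f, N // 2)

def l2(f):
    # L^2(R/Z) norm; the midpoint rule is exact for trigonometric polynomials
    return np.sqrt(np.mean(f ** 2))

rng = np.random.default_rng(1)
n, s = 512, 2.0
x = np.arange(n) / n
f = sum(rng.normal() * (1 + k) ** -2.5 * np.cos(2 * np.pi * k * x + rng.normal())
        for k in range(64))   # a smooth-ish periodic test function

# direct H^s norm from the Fourier coefficients
fh = np.fft.fft(f) / n
k = np.fft.fftfreq(n) * n
direct = np.sqrt(np.sum((1 + k ** 2) ** s * np.abs(fh) ** 2))

# Littlewood-Paley side of the equivalence
lp_sq = l2(P_leq(f, 1)) ** 2
N = 2
while N <= 128:
    lp_sq += N ** (2 * s) * l2(P(f, N)) ** 2
    N *= 2
lp = np.sqrt(lp_sq)

assert 0.03 < direct / lp < 3   # comparable up to (d,s)-dependent constants
```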

Remark 53 The more advanced Littlewood-Paley inequality, which is usually proven using the Calderón-Zygmund theory of singular integrals, asserts that

$\displaystyle \| f \|_{L^p} \sim_{d,p} \| |P_{\leq 1} f| + (\sum_{N > 1} |P_N f|^2)^{1/2} \|_{L^p}$

for any ${1 < p < \infty}$. However, we will not use this estimate here.

We return now to the proof of Proposition 35. Let ${\phi}$ be as above. By Exercise 52, it suffices to establish the bounds

$\displaystyle \| P_{\leq 1}( uv) \|_{L^2} \lesssim_{d,s} \| u \|_{H^s} \| v \|_{L^\infty} + \| u \|_{L^\infty} \| v \|_{H^s} \ \ \ \ \ (42)$

and

$\displaystyle \sum_{N>1} N^{2s} \| P_{N}( uv) \|_{L^2}^2 \lesssim_{d,s} \| u \|_{H^s}^2 \| v \|_{L^\infty}^2 + \| u \|_{L^\infty}^2 \| v \|_{H^s}^2. \ \ \ \ \ (43)$

The estimate (42) follows by dropping ${P_{\leq 1}}$ (using Exercise 52) and applying Hölder’s inequality, so we turn to (43). We may restrict attention to those terms where ${N>8}$ (say) since the other terms can be treated by the same argument used to prove (42).

The basic strategy here is to split the product ${uv}$ (or the component ${P_N(uv)}$ of this product) into paraproducts in which some constraint is imposed between the frequencies of the ${u}$ and ${v}$ terms. There are many ways to achieve this splitting; we will use

$\displaystyle P_N(uv) = P_N( (P_{\leq N/8}u) v ) + \sum_{M>N/8} P_N( (P_M u) v).$

By the triangle inequality, it suffices to show the estimates

$\displaystyle \sum_{N>8} N^{2s} \| P_{N}( (P_{\leq N/8} u) v) \|_{L^2}^2 \lesssim_{d,s} \| u \|_{L^\infty}^2 \| v \|_{H^s}^2 \ \ \ \ \ (44)$

and

$\displaystyle \sum_{N>8} N^{2s} (\sum_{M>N/8} \| P_{N}( (P_M u) v) \|_{L^2})^2 \lesssim_{d,s} \| u \|_{H^s}^2 \| v \|_{L^\infty}^2 \ \ \ \ \ (45)$

We begin with (44). We can expand further

$\displaystyle P_N( (P_{\leq N/8} u) v ) = P_N( (P_{\leq N/8} u) P_{\leq 1} v ) + \sum_{M>1} P_N( (P_{\leq N/8} u) P_M v ).$

The key point now is that (by inspecting the Fourier series expansions) the first term on the RHS vanishes, and the summands in the second term also vanish unless ${M \sim N}$. Thus

$\displaystyle N^{2s} \| P_{N}( (P_{\leq N/8} u) v) \|_{L^2}^2 \lesssim \sum_{M \sim N} N^{2s} \| P_{N}( (P_{\leq N/8} u) (P_M v)) \|_{L^2}^2$

$\displaystyle \lesssim_d \sum_{M \sim N} M^{2s} \| (P_{\leq N/8} u) (P_M v) \|_{L^2}^2$

$\displaystyle \lesssim_d \sum_{M \sim N} M^{2s} \| P_{\leq N/8} u \|_{L^\infty}^2 \| P_M v \|_{L^2}^2$

$\displaystyle \lesssim_d \| u \|_{L^\infty}^2 \sum_{M \sim N} M^{2s} \| P_M v \|_{L^2}^2$

and the claim follows by summing in ${N}$, interchanging the summations, and using Exercise 52. Now we prove (45). We bound

$\displaystyle N^{2s} (\sum_{M>N/8} \| P_{N}( (P_M u) v) \|_{L^2})^2 \lesssim_d N^{2s} (\sum_{M>N/8} \| (P_M u) v \|_{L^2})^2$

$\displaystyle \lesssim_d N^{2s} (\sum_{M > N/8} \| P_M u \|_{L^2} \| v \|_{L^\infty})^2$

$\displaystyle \lesssim_{d,s} \| v\|_{L^\infty}^2 \sum_{M > N/8} (N/M)^{s} M^{2s} \| P_M u \|_{L^2}^2$

using Cauchy-Schwarz, and the claim again follows by summing in ${N}$, interchanging the summations, and using Exercise 52.
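The frequency-support claims underpinning this argument (the vanishing of the low-low interaction and the constraint ${M \sim N}$) can be sanity-checked numerically. In the sketch below (a ${\cos^2}$ cutoff stands in for ${\phi}$, and the test frequencies are hand-picked for illustration), the projection ${P_{32}}$ annihilates the products whose frequency support misses the annulus, while the ${M \sim N}$ interaction survives:

```python
import numpy as np

def bump(xi):
    # cos^2 stand-in for the bump phi: 1 on |xi| <= 1/2, 0 on |xi| >= 1
    a = np.abs(xi)
    return np.where(a <= 0.5, 1.0,
                    np.where(a >= 1.0, 0.0, np.cos(np.pi * (a - 0.5)) ** 2))

def P_leq(f, N):
    n = len(f)
    k = np.fft.fftfreq(n) * n
    return np.real(np.fft.ifft(np.fft.fft(f) * bump(k / N)))

def P(f, N):
    return P_leq(f, N) - P_leq(f, N // 2)

n = 512
x = np.arange(n) / n
u = np.cos(2 * np.pi * x) + 0.5 * np.cos(2 * np.pi * 3 * x)      # low frequencies
v = (1.0 + np.cos(2 * np.pi * x) + np.cos(2 * np.pi * 24 * x)
     + np.cos(2 * np.pi * 200 * x))                              # spread-out frequencies

N = 32
lo = P_leq(u, N // 8)          # frequencies <= N/8 = 4
sup = lambda f: np.max(np.abs(f))

# (P_{<=N/8} u)(P_{<=1} v) has frequencies <= N/8 + 1, so P_N annihilates it
assert sup(P(lo * P_leq(v, 1), N)) < 1e-10
# M far below N: product frequencies <= M + N/8 stay below the annulus of P_N
assert sup(P(lo * P(v, 2), N)) < 1e-10
# M far above N: product frequencies >= M/4 - N/8 overshoot the annulus of P_N
assert sup(P(lo * P(v, 256), N)) < 1e-10
# M ~ N: the interaction genuinely lands in the annulus and survives
assert sup(P(lo * P(v, N), N)) > 1e-3
```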

There is an essentially identical theory in the non-periodic setting, in which the role of smooth periodic functions is now played by Schwartz functions, the Littlewood-Paley projections ${P_{\leq N}}$ are now defined as

$\displaystyle P_{\leq N} f(x) := \int_{{\bf R}^d} \phi( \xi/N ) \hat f(\xi) e^{2\pi i \xi \cdot x}\ d\xi,$

and ${P_N}$ is defined as before.

Exercise 54 (Non-periodic Littlewood-Paley theory) With ${L^2}$ now denoting ${L^2({\bf R}^d \rightarrow {\bf R})}$ instead of ${L^2({\bf R}^d/{\bf Z}^d \rightarrow {\bf R})}$, and similarly for other function spaces, establish the non-periodic analogue of Exercise 52 for Schwartz functions ${f}$.

In particular, one obtains the non-periodic analogue of Proposition 35 by repeating the proof verbatim.

(The exercise below will be moved to a more suitable location after the conclusion of this course.)

Exercise 55 (Large data critical local existence) Suppose that ${u_0 \in H^{\frac{d}{2}-1}({\bf R}^d/{\bf Z}^d \rightarrow {\bf R}^d)^0}$ is divergence-free. Show that there exists ${T>0}$ and an ${H^{\frac{d}{2}-1}}$ mild solution to the Navier-Stokes equations on ${[0,T]}$. Furthermore, if ${u_0}$ is smooth, then this mild solution is also smooth. (Hint: By choosing ${T}$ small enough, one can ensure that the linear evolution ${t \mapsto e^{t\nu \Delta} u_0}$ is small in ${L^2_t H^{d/2}_x}$ and ${L^2_t L^\infty_x}$ norms. Now run a contraction mapping argument in a space of functions ${u}$ that are small in ${L^2_t H^{d/2}_x}$ and ${L^2_t L^\infty_x}$ norm and bounded in ${C^0_t H^{d/2}_x}$ norm. One will have to carefully choose all the relevant parameters in the right order, and to choose an appropriate weighted metric on this space of functions, in order to actually obtain a contraction.)