I’ve just uploaded to the arXiv my paper “Quantitative bounds for critically bounded solutions to the Navier-Stokes equations“, submitted to the proceedings of the Linde Hall Inaugural Math Symposium. (I unfortunately had to cancel my physical attendance at this symposium for personal reasons, but was still able to contribute to the proceedings.) In recent years I have been interested in working towards establishing the existence of classical solutions for the Navier-Stokes equations
$$\partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p, \qquad \nabla \cdot u = 0$$
that blow up in finite time, but this time for a change I took a look at the other side of the theory, namely the conditional regularity results for this equation. There are several such results that assert that if a certain norm of the solution stays bounded (or grows at a controlled rate), then the solution stays regular; taken in the contrapositive, they assert that if a solution blows up at a certain finite time $T_*$, then certain norms of the solution must also go to infinity. Here are some examples (not an exhaustive list) of such blowup criteria:
- (Leray blowup criterion, 1934) If $u$ blows up at a finite time $T_*$, and $3 < p \leq \infty$, then $\| u(t) \|_{L^p_x(\mathbf{R}^3)} \geq c_p (T_* - t)^{-\frac{1}{2}(1 - \frac{3}{p})}$ for some constant $c_p > 0$.
- (Prodi–Serrin–Ladyzhenskaya blowup criterion, 1959-1967) If $u$ blows up at a finite time $T_*$, and $3 < p \leq \infty$, then $\| u \|_{L^q_t L^p_x([0,T_*) \times \mathbf{R}^3)} = +\infty$, where $\frac{2}{q} + \frac{3}{p} = 1$.
- (Beale-Kato-Majda blowup criterion, 1984) If $u$ blows up at a finite time $T_*$, then $\int_0^{T_*} \| \omega(t) \|_{L^\infty_x(\mathbf{R}^3)}\,dt = +\infty$, where $\omega := \nabla \times u$ is the vorticity.
- (Kato blowup criterion, 1984) If $u$ blows up at a finite time $T_*$, then $\limsup_{t \to T_*} \| u(t) \|_{L^3_x(\mathbf{R}^3)} \geq c$ for some absolute constant $c > 0$.
- (Escauriaza-Seregin-Sverak blowup criterion, 2003) If $u$ blows up at a finite time $T_*$, then $\limsup_{t \to T_*} \| u(t) \|_{L^3_x(\mathbf{R}^3)} = +\infty$.
- (Seregin blowup criterion, 2012) If $u$ blows up at a finite time $T_*$, then $\lim_{t \to T_*} \| u(t) \|_{L^3_x(\mathbf{R}^3)} = +\infty$.
- (Phuc blowup criterion, 2015) If $u$ blows up at a finite time $T_*$, then $\limsup_{t \to T_*} \| u(t) \|_{L^{3,q}_x(\mathbf{R}^3)} = +\infty$ for any $3 < q < \infty$.
- (Gallagher-Koch-Planchon blowup criterion, 2016) If $u$ blows up at a finite time $T_*$, then $\limsup_{t \to T_*} \| u(t) \|_{\dot B^{-1+3/p}_{p,q}(\mathbf{R}^3)} = +\infty$ for any $3 < p, q < \infty$.
- (Albritton blowup criterion, 2016) If $u$ blows up at a finite time $T_*$, then $\lim_{t \to T_*} \| u(t) \|_{\dot B^{-1+3/p}_{p,q}(\mathbf{R}^3)} = +\infty$ for any $3 < p, q < \infty$.
My current paper is most closely related to the Escauriaza-Seregin-Sverak blowup criterion, which was the first to show that a critical (i.e., scale-invariant, or dimensionless) spatial norm, namely the $L^3_x(\mathbf{R}^3)$ norm, had to become large. This result now has many proofs; for instance, many of the subsequent blowup criterion results imply the Escauriaza-Seregin-Sverak one as a special case, and there are also additional proofs by Gallagher-Koch-Planchon (building on ideas of Kenig-Koch), and by Dong-Du. However, all of these proofs rely on some form of a compactness argument: given a finite time blowup, one extracts some suitable family of rescaled solutions that converges in some weak sense to a limiting solution that has some additional good properties (such as almost periodicity modulo symmetries), which one can then rule out using additional qualitative tools, such as unique continuation and backwards uniqueness theorems for parabolic heat equations. In particular, all known proofs use some version of the backwards uniqueness theorem of Escauriaza, Seregin, and Sverak. Because of this reliance on compactness, the existing proofs of the Escauriaza-Seregin-Sverak blowup criterion are qualitative, in that they do not provide any quantitative information on how fast the $L^3_x$ norm will go to infinity (along a subsequence of times).
On the other hand, it is a general principle that qualitative arguments established using compactness methods ought to have quantitative analogues that replace the use of compactness by more complicated substitutes that give effective bounds; see for instance these previous blog posts for more discussion. I therefore was interested in trying to obtain a quantitative version of this blowup criterion that gave reasonably good effective bounds (in particular, my objective was to avoid truly enormous bounds such as tower-exponential or Ackermann function bounds, which often arise if one “naively” tries to make a compactness argument effective). In particular, I obtained the following triple-exponential quantitative regularity bounds:
Theorem 1 If $u$ is a classical solution to Navier-Stokes on $[0,T] \times \mathbf{R}^3$ with
$$\| u \|_{L^\infty_t L^3_x([0,T] \times \mathbf{R}^3)} \leq A \qquad (1)$$
for some $A \geq 2$, then one has the derivative bounds
$$|\nabla^j u(t,x)| \leq \exp\exp\exp(A^{O(1)})\, t^{-\frac{j+1}{2}}$$
for $0 < t \leq T$, $x \in \mathbf{R}^3$, and $j = 0, 1$.
As a corollary, one can now improve the Escauriaza-Seregin-Sverak blowup criterion to
$$\limsup_{t \to T_*} \frac{\| u(t) \|_{L^3_x(\mathbf{R}^3)}}{\big(\log\log\log \frac{1}{T_* - t}\big)^c} = +\infty$$
for some absolute constant $c > 0$, which to my knowledge is the first (very slightly) supercritical blowup criterion for Navier-Stokes in the literature.
The proof uses many of the same quantitative inputs as previous arguments, most notably the Carleman inequalities used to establish unique continuation and backwards uniqueness theorems for backwards heat equations, but also some additional techniques that make the quantitative bounds more efficient. The proof focuses initially on points of concentration of the solution, which we define as points $(t_0, x_0)$ in spacetime where there is a frequency $N$ for which one has the bound
$$|P_N u(t_0, x_0)| \geq A^{-C_0} N \qquad (2)$$
for a large absolute constant $C_0$, where $P_N$ is a Littlewood-Paley projection to frequencies $\sim N$. (This can be compared with the upper bound of $O(AN)$ for the quantity on the left-hand side that follows from (1).) The factor of $N$ normalises the left-hand side of (2) to be dimensionless (i.e., critical). The main task is to show that the dimensionless quantity $N t_0^{1/2}$ cannot get too large; in particular, we end up establishing a bound of the form
$$N t_0^{1/2} \lesssim \exp\exp\exp(A^{O(1)}),$$
from which the above theorem ends up following from a routine adaptation of the local well-posedness and regularity theory for Navier-Stokes.
The strategy is to show that any concentration such as (2) with $N$ large must force a significant component of the $L^3_x$ norm of $u$ to also show up at many other locations than $x_0$, which eventually contradicts (1) if one can produce enough such regions of non-trivial $L^3_x$ norm. (This can be viewed as a quantitative variant of the “rigidity” theorems in some of the previous proofs of the Escauriaza-Seregin-Sverak theorem that rule out solutions that exhibit too much “compactness” or “almost periodicity” in the $L^3_x$ topology.) The chain of causality that leads from a concentration (2) at $(t_0, x_0)$ to significant $L^3_x$ norm at other regions of the time slice $\{t_0\} \times \mathbf{R}^3$ is somewhat involved (though simpler than the much more convoluted schemes I initially envisaged for this argument):
- Firstly, by using Duhamel’s formula, one can show that a concentration (2) can only occur (with $N$ large) if there was also a preceding concentration
$$|P_{N_1} u(t_1, x_1)| \geq A^{-C_0} N_1 \qquad (3)$$
at some slightly previous point $(t_1, x_1)$ in spacetime, with $N_1$ also close to $N$ (more precisely, we have $t_1 = t_0 - O(A^{O(1)} N^{-2})$, $x_1 = x_0 + O(A^{O(1)} N^{-1})$, and $N_1 = A^{-O(1)} N$). This can be viewed as a sort of contrapositive of a “local regularity theorem”, such as the ones established by Caffarelli, Kohn, and Nirenberg. A key point here is that the lower bound $A^{-C_0} N_1$ in the conclusion (3) is precisely of the same form as the lower bound in (2), so that this backwards propagation of concentration can be iterated.
- Iterating the previous step, one can find a sequence of concentration points $(t_n, x_n)$ with frequencies $N_n$ propagating backwards in time; by using estimates ultimately resulting from the dissipative term in the energy identity, one can extract such a sequence in which the time separations $t_0 - t_n$ increase geometrically with $n$, the $N_n$ are comparable (up to polynomial factors in $A$) to the natural frequency scale $(t_0 - t_n)^{-1/2}$, and one has
$$|P_{N_n} u(t_n, x_n)| \geq A^{-C_0} N_n. \qquad (4)$$
Using the “epochs of regularity” theory that ultimately dates back to Leray, and tweaking the $t_n$ slightly, one can also place the times $t_n$ in intervals $I_n$ (of length comparable to a small multiple of $t_0 - t_n$) in which the solution is quite regular (in particular, $u$ and $\nabla u$ enjoy good $L^\infty$ bounds on $I_n \times \mathbf{R}^3$).
- The concentration (4) can be used to establish a lower bound for the $L^2_x$ norm of the vorticity $\omega := \nabla \times u$ near $(t_n, x_n)$. As is well known, the vorticity obeys the vorticity equation
$$\partial_t \omega = \Delta \omega - (u \cdot \nabla) \omega + (\omega \cdot \nabla) u.$$
In the epoch of regularity $I_n \times \mathbf{R}^3$, the coefficients $u, \nabla u$ of this equation obey good $L^\infty$ bounds, allowing the machinery of Carleman estimates to come into play. Using a Carleman estimate that is used to establish unique continuation results for backwards heat equations, one can propagate this lower bound to also give lower $L^2$ bounds on the vorticity (and its first derivative) in annuli of the form $\{ (t,x) \in I_n \times \mathbf{R}^3: |x - x_n| \sim R \}$ for various radii $R$, although the lower bounds decay at a gaussian rate with $R$.
- Meanwhile, using an energy pigeonholing argument of Bourgain (which, in this Navier-Stokes context, is actually an enstrophy pigeonholing argument), one can locate some annuli $\{ x: |x - x_n| \sim R \}$ where (a slightly normalised form of) the enstrophy is small at time $t_n$; using a version of the localised enstrophy estimates from a previous paper of mine, one can then propagate this sort of control forward in time, obtaining an “annulus of regularity” of the form $\{ (t,x): t \in [t_n, t_0], |x - x_n| \sim R \}$ in which one has good estimates; in particular, one has $L^\infty$ type bounds on $u, \nabla u, \omega, \nabla \omega$ in this cylindrical annulus.
- By intersecting the previous epoch of regularity $I_n \times \mathbf{R}^3$ with the above annulus of regularity, we have some lower bounds on the $L^2$ norm of the vorticity (and its first derivative) in the annulus of regularity. Using a Carleman estimate first introduced by Escauriaza, Seregin, and Sverak, as well as a second application of the Carleman estimate used previously, one can then propagate this lower bound back up to time $t_0$, establishing a lower bound for the vorticity on the spatial annulus $\{ x: |x - x_0| \sim R \}$. By some basic Littlewood-Paley theory one can parlay this lower bound to a lower bound on the $L^3_x$ norm of the velocity $u(t_0)$; crucially, this lower bound is uniform in $n$.
- If $N t_0^{1/2}$ is very large (triple exponential in $A$!), one can then find enough scales $n$ with disjoint spatial annuli that the total lower bound on the $L^3_x$ norm of $u(t_0)$ provided by the above arguments is inconsistent with (1), thus establishing the claim.
The chain of causality is summarised in the following image:
It seems natural to conjecture that similar triply logarithmic improvements can be made to several of the other blowup criteria listed above, but I have not attempted to pursue this question. It seems difficult to improve the triple logarithmic factor using only the techniques here; the Bourgain pigeonholing argument inevitably costs one exponential, the Carleman inequalities cost a second, and the stacking of scales at the end to contradict the upper bound costs the third.
I was recently asked to contribute a short comment to Nature Reviews Physics, as part of a series of articles on fluid dynamics on the occasion of the 200th anniversary (this August) of the birthday of George Stokes. My contribution is now online as “Searching for singularities in the Navier–Stokes equations“, where I discuss the global regularity problem for Navier-Stokes and my thoughts on how one could try to construct a solution that blows up in finite time via an approximately discretely self-similar “fluid computer”. (The rest of the series does not currently seem to be available online, but I expect they will become so shortly.)
In the previous set of notes we developed a theory of “strong” solutions to the Navier-Stokes equations. This theory, based around viewing the Navier-Stokes equations as a perturbation of the linear heat equation, has many attractive features: solutions exist locally, are unique, depend continuously on the initial data, have a high degree of regularity, can be continued in time as long as a sufficiently high regularity norm is under control, and tend to enjoy the same sort of conservation laws that classical solutions do. However, it is a major open problem as to whether these solutions can be extended to be (forward) global in time, because the norms that we know how to control globally in time do not have high enough regularity to be useful for continuing the solution. Also, the theory becomes degenerate in the inviscid limit $\nu \to 0$.
However, it is possible to construct “weak” solutions which lack many of the desirable features of strong solutions (notably, uniqueness, propagation of regularity, and conservation laws) but can often be constructed globally in time even when one is unable to do so for strong solutions. Broadly speaking, one usually constructs weak solutions by some sort of “compactness method”, which can generally be described as follows.
- Construct a sequence of “approximate solutions” to the desired equation, for instance by developing a well-posedness theory for some “regularised” approximation to the original equation. (This theory often follows similar lines to those in the previous set of notes, for instance using such tools as the contraction mapping theorem to construct the approximate solutions.)
- Establish some uniform bounds (over appropriate time intervals) on these approximate solutions, even in the limit as an approximation parameter is sent to zero. (Uniformity is key; non-uniform bounds are often easy to obtain if one puts enough “mollification”, “hyper-dissipation”, or “discretisation” in the approximating equation.)
- Use some sort of “weak compactness” (e.g., the Banach-Alaoglu theorem, the Arzela-Ascoli theorem, or the Rellich compactness theorem) to extract a subsequence of approximate solutions that converge (in a topology weaker than that associated to the available uniform bounds) to a limit. (Note that there is no reason a priori to expect such limit points to be unique, or to have any regularity properties beyond that implied by the available uniform bounds.)
- Show that this limit solves the original equation in a suitable weak sense.
The quality of these weak solutions is very much determined by the type of uniform bounds one can obtain on the approximate solution; the stronger these bounds are, the more properties one can obtain on these weak solutions. For instance, if the approximate solutions enjoy an energy identity leading to uniform energy bounds, then (by using tools such as Fatou’s lemma) one tends to obtain energy inequalities for the resulting weak solution; but if one somehow is able to obtain uniform bounds in a higher regularity norm than the energy then one can often recover the full energy identity. If the uniform bounds are at the regularity level needed to obtain well-posedness, then one generally expects to upgrade the weak solution to a strong solution. (This phenomenon is often formalised through weak-strong uniqueness theorems, which we will discuss later in these notes.) Thus we see that as far as attacking global regularity is concerned, both the theory of strong solutions and the theory of weak solutions encounter essentially the same obstacle, namely the inability to obtain uniform bounds on (exact or approximate) solutions at high regularities (and at arbitrary times).
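The loss of energy under weak limits can be seen in a toy computation of my own (not part of the notes): the oscillating functions $u_n(x) = \sin(2\pi n x)$ converge weakly to zero in $L^2([0,1])$, yet each carries energy $\|u_n\|_{L^2}^2 = 1/2$, so the weak limit satisfies only an energy inequality rather than the energy identity.

```python
# A toy illustration of my own (not from the notes): u_n(x) = sin(2*pi*n*x)
# converges weakly to 0 in L^2([0,1]), meaning pairings against any fixed test
# function tend to zero, yet each u_n carries energy ||u_n||^2 = 1/2.  Weak
# lower semicontinuity of the norm then only gives an energy *inequality*
# for the limit, just as for weak solutions of Navier-Stokes.
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
phi = np.exp(-20 * (x - 0.4) ** 2)      # a fixed smooth test function

def inner(f, g):
    return float(np.mean(f * g))        # approximates the L^2([0,1]) inner product

for n in (1, 10, 100):
    u_n = np.sin(2 * np.pi * n * x)
    print(n, inner(u_n, phi), inner(u_n, u_n))
# the pairings <u_n, phi> shrink towards 0 while the energies stay near 1/2
```

The same mechanism, energy escaping to ever finer oscillations, is why weak limits of approximate Navier-Stokes solutions need not inherit the full energy identity.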
For simplicity, we will focus our discussion in these notes on finite energy weak solutions on $\mathbf{R}^3$. There is a completely analogous theory for periodic weak solutions on $\mathbf{R}^3$ (or equivalently, weak solutions on the torus $(\mathbf{R}/\mathbf{Z})^3$), which we will leave to the interested reader.
In recent years, a completely different way to construct weak solutions to the Navier-Stokes or Euler equations has been developed that is not based on the above compactness methods, but is instead based on techniques of convex integration. These will be discussed in a later set of notes.
This coming fall quarter, I am teaching a class on topics in the mathematical theory of incompressible fluid equations, focusing particularly on the incompressible Euler and Navier-Stokes equations. These two equations are by no means the only equations used to model fluids, but I will focus on these two equations in this course to narrow the focus down to something manageable. I have not fully decided on the choice of topics to cover in this course, but I would probably begin with some core topics such as local well-posedness theory and blowup criteria, conservation laws, and construction of weak solutions, then move on to some topics such as boundary layers and the Prandtl equations, the Euler-Poincare-Arnold interpretation of the Euler equations as an infinite dimensional geodesic flow, and some discussion of the Onsager conjecture. I will probably also continue to more advanced and recent topics in the winter quarter.
In this initial set of notes, we begin by reviewing the physical derivation of the Euler and Navier-Stokes equations from the first principles of Newtonian mechanics, and specifically from Newton’s famous three laws of motion. Strictly speaking, this derivation is not needed for the mathematical analysis of these equations, which can be viewed if one wishes as an arbitrarily chosen system of partial differential equations without any physical motivation; however, I feel that the derivation sheds some insight and intuition on these equations, and is also worth knowing on purely intellectual grounds regardless of its mathematical consequences. I also find it instructive to actually see the journey from Newton’s law
to the seemingly rather different-looking law
for incompressible Navier-Stokes (or, if one drops the viscosity term , the Euler equations).
Our discussion in this set of notes is physical rather than mathematical, and so we will not be working at mathematical levels of rigour and precision. In particular we will be fairly casual about interchanging summations, limits, and integrals, we will manipulate approximate identities as if they were exact identities (e.g., by differentiating both sides of the approximate identity), and we will not attempt to verify any regularity or convergence hypotheses in the expressions being manipulated. (The same holds for the exercises in this text, which also do not need to be justified at mathematical levels of rigour.) Of course, once we resume the mathematical portion of this course in subsequent notes, such issues will be an important focus of careful attention. This is a basic division of labour in mathematical modeling: non-rigorous heuristic reasoning is used to derive a mathematical model from physical (or other “real-life”) principles, but once a precise model is obtained, the analysis of that model should be completely rigorous if at all possible (even if this requires applying the model to regimes which do not correspond to the original physical motivation of that model). See the discussion by John Ball quoted at the end of these slides of Gero Friesecke for an expansion of these points.
Note: our treatment here will differ slightly from that presented in many fluid mechanics texts, in that it will emphasise first-principles derivations from many-particle systems, rather than relying on bulk laws of physics, such as the laws of thermodynamics, which we will not cover here. (However, the derivations from bulk laws tend to be more robust, in that they are not as reliant on assumptions about the particular interactions between particles. In particular, the physical hypotheses we assume in this post are probably quite a bit stronger than the minimal assumptions needed to justify the Euler or Navier-Stokes equations, which can hold even in situations in which one or more of the hypotheses assumed here break down.)
Many fluid equations are expected to exhibit turbulence in their solutions, in which a significant portion of their energy ends up in high frequency modes. A typical example arises from the three-dimensional periodic Navier-Stokes equations
$$\partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p + f, \qquad \nabla \cdot u = 0,$$
where $u: \mathbf{R} \times \mathbf{T}^3 \to \mathbf{R}^3$ is the velocity field, $f: \mathbf{R} \times \mathbf{T}^3 \to \mathbf{R}^3$ is a forcing term, $p: \mathbf{R} \times \mathbf{T}^3 \to \mathbf{R}$ is a pressure field, and $\nu > 0$ is the viscosity. To study the dynamics of energy for this system, we first pass to the Fourier transform
$$\hat u(t,k) := \int_{\mathbf{T}^3} u(t,x) e^{-2\pi i k \cdot x}\, dx, \qquad k \in \mathbf{Z}^3,$$
so that the system becomes
$$\partial_t \hat u(t,k) = -4\pi^2 \nu |k|^2 \hat u(t,k) - 2\pi i \sum_{k_1 + k_2 = k} \pi_{k^\perp}\big[ (\hat u(t,k_1) \cdot k_2)\, \hat u(t,k_2) \big] + \pi_{k^\perp} \hat f(t,k), \qquad (1)$$
where $\pi_{k^\perp}$ denotes the orthogonal projection to the plane perpendicular to $k$ (this projection eliminates the pressure). We may normalise $u$ (and $f$) to have mean zero, so that $\hat u(t,0) = 0$. Then we introduce the dyadic energies
$$E_N(t) := \sum_{|k| \sim N} |\hat u(t,k)|^2,$$
where $N$ ranges over the powers of two, and $|k| \sim N$ is shorthand for $N \leq |k| < 2N$. Taking the inner product of (1) with $\hat u(t,k)$ and summing over $|k| \sim N$, we obtain the energy flow equation
$$\partial_t E_N = \sum_{N_1, N_2} \Pi_{N,N_1,N_2} - D_N + F_N, \qquad (2)$$
where $N_1, N_2$ range over powers of two, $\Pi_{N,N_1,N_2}$ is the energy flow rate
$$\Pi_{N,N_1,N_2} := 4\pi \sum_{\substack{k_1+k_2=k \\ |k| \sim N, |k_1| \sim N_1, |k_2| \sim N_2}} \mathrm{Im}\big[ (\hat u(t,k_1) \cdot k_2)\, (\hat u(t,k_2) \cdot \overline{\hat u(t,k)}) \big],$$
$D_N$ is the energy dissipation rate
$$D_N := 8\pi^2 \nu \sum_{|k| \sim N} |k|^2 |\hat u(t,k)|^2,$$
and $F_N$ is the energy injection rate
$$F_N := 2 \sum_{|k| \sim N} \mathrm{Re}\big[ \hat f(t,k) \cdot \overline{\hat u(t,k)} \big].$$
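As a concrete illustration of the dyadic energies (a sketch of my own, not from the post; the grid size and the shells shown are arbitrary choices), one can compute the $E_N$ of a velocity field sampled on a grid of the torus with the FFT:

```python
# My own illustration (not from the post): computing the dyadic energies E_N
# of a velocity field sampled on an n x n x n grid of the torus T^3, where
# E_N sums |u_hat(k)|^2 over the dyadic shell N <= |k| < 2N.
import numpy as np

def dyadic_energies(u, shells=(1, 2, 4, 8)):
    """u has shape (3, n, n, n): the three velocity components on the grid."""
    n = u.shape[1]
    u_hat = np.fft.fftn(u, axes=(1, 2, 3)) / n**3     # Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)                  # integer frequencies
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    energy = np.sum(np.abs(u_hat) ** 2, axis=0)       # |u_hat(k)|^2, summed over components
    return {N: float(energy[(kmag >= N) & (kmag < 2 * N)].sum()) for N in shells}

# a single-mode field (sin(2*pi*3*y), 0, 0) has energy 1/2, all in the shell N=2
n = 32
y = np.arange(n) / n
u = np.zeros((3, n, n, n))
u[0] = np.sin(2 * np.pi * 3 * y)[None, :, None]
print(dyadic_energies(u))
```

(Here the mean-zero normalisation is automatic, since the sample field has no $k=0$ component.)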
The Navier-Stokes equations are notoriously difficult to solve in general. Despite this, Kolmogorov in 1941 was able to give a convincing heuristic argument for what the distribution of the dyadic energies $E_N$ should become over long times, assuming that some sort of distributional steady state is reached. It is common to present this argument in the form of dimensional analysis, but one can also give a more “first principles” form of Kolmogorov’s argument, which I will do here. Heuristically, one can divide the frequency scales $N$ into three regimes:
- The injection regime in which the energy injection rate $F_N$ dominates the right-hand side of (2);
- The energy flow regime in which the flow rates $\Pi_{N,N_1,N_2}$ dominate the right-hand side of (2); and
- The dissipation regime in which the dissipation $D_N$ dominates the right-hand side of (2).
If we assume a fairly steady and smooth forcing term $f$, then $\hat f$ will be supported on the low frequency modes $|k| \lesssim N_0$ for some $N_0 = O(1)$, and so we heuristically expect the injection regime to consist of the low scales $N \lesssim N_0$. Conversely, if we take the viscosity $\nu$ to be small, we expect the dissipation regime to only occur for very large frequencies $N$, with the energy flow regime occupying the intermediate frequencies.
We can heuristically predict the dividing line between the energy flow regime and the dissipation regime. Of all the flow rates $\Pi_{N,N_1,N_2}$, it turns out in practice that the terms in which $N, N_1, N_2$ are all comparable (i.e., interactions between comparable scales, rather than widely separated scales) will dominate the other flow rates, so we will focus just on these terms. It is convenient to return back to physical space, decomposing the velocity field $u$ into Littlewood-Paley components
$$u_N := P_N u$$
of the velocity field at frequencies $|k| \sim N$. By Plancherel’s theorem, this field will have an $L^2_x$ norm of $E_N^{1/2}$, and as a naive model of turbulence we expect this field to be spread out more or less uniformly on the torus, so we have the heuristic
$$|u_N(t,x)| \sim E_N^{1/2},$$
and a similar heuristic applied to $\nabla u_N$ gives
$$|\nabla u_N(t,x)| \sim N E_N^{1/2}.$$
(One can consider modifications of the Kolmogorov model in which $u_N$ is concentrated on a lower-dimensional subset of the three-dimensional torus, leading to some changes in the numerology below, but we will not consider such variants here.) Since the flow rate $\Pi_{N,N_1,N_2}$ is essentially an integral of expressions of the shape $u_{N_1} \cdot \nabla u_{N_2} \cdot u_N$ (for comparable $N, N_1, N_2$), we thus arrive at the heuristic
$$\Pi_{N,N_1,N_2} = O( N E_N^{3/2} ). \qquad (3)$$
Of course, there is the possibility that due to significant cancellation, the energy flow is significantly less than $N E_N^{3/2}$, but we will assume that cancellation effects are not that significant, so that we typically have
$$|\Pi_{N,N_1,N_2}| \sim N E_N^{3/2},$$
or (assuming that $E_N$ does not oscillate too much in $N$, and that $N_1, N_2$ are close to $N$)
$$|\Pi_{N,N_1,N_2}| \sim N E_N^{1/2} E_{N_1}^{1/2} E_{N_2}^{1/2}.$$
On the other hand, we clearly have
$$D_N \sim \nu N^2 E_N.$$
We thus expect to be in the dissipation regime when
$$\nu N \gtrsim E_N^{1/2} \qquad (4)$$
and in the energy flow regime when
$$\nu N \lesssim E_N^{1/2}. \qquad (5)$$
Now we study the energy flow regime further. We assume a “statistically scale-invariant” dynamics in this regime, in particular assuming a power law
$$E_N \sim C N^{-\alpha} \qquad (6)$$
for some $C, \alpha > 0$. From (3), we then expect an average asymptotic of the form
$$\Pi_{N,N_1,N_2} \approx c_{N,N_1,N_2}\, N E_N^{1/2} E_{N_1}^{1/2} E_{N_2}^{1/2} \qquad (7)$$
for some structure constants $c_{N,N_1,N_2} = O(1)$ that depend on the exact nature of the turbulence; here we have replaced the factor $E_N$ by the comparable term $E_{N_1}^{1/2} E_{N_2}^{1/2}$ to make things more symmetric. In order to attain a steady state in the energy flow regime, we thus need a cancellation in the structure constants:
$$\sum_{N_1, N_2} c_{N,N_1,N_2}\, N E_N^{1/2} E_{N_1}^{1/2} E_{N_2}^{1/2} \approx 0. \qquad (8)$$
On the other hand, if one is assuming statistical scale invariance, we expect the structure constants to be scale-invariant (in the energy flow regime), in that
$$c_{\lambda N, \lambda N_1, \lambda N_2} = c_{N, N_1, N_2} \qquad (9)$$
for dyadic $\lambda$. Also, since the Euler equations conserve energy, the energy flows $\Pi_{N,N_1,N_2}$ symmetrise to zero,
$$\sum_{\sigma} \Pi_{\sigma(N), \sigma(N_1), \sigma(N_2)} = 0,$$
where $\sigma$ ranges over permutations of the three scales, which from (7) suggests a similar cancellation among the structure constants,
$$\sum_{\sigma} c_{\sigma(N), \sigma(N_1), \sigma(N_2)} \approx 0.$$
Combining this with the scale-invariance (9), we see that for fixed ratios $N_1/N, N_2/N$, we may organise the structure constants $c_{N,N_1,N_2}$ for dyadic $N, N_1, N_2$ into sextuples which sum to zero (including some degenerate tuples of order less than six). This will automatically guarantee the cancellation (8) required for a steady state energy distribution, provided that the combination $N E_N^{3/2}$ appearing in (8) is independent of $N$, that is to say
$$\alpha = \frac{2}{3},$$
or in other words
$$E_N \sim C N^{-2/3}; \qquad (10)$$
for any other value of $\alpha$, there is no particular reason to expect this cancellation (8) to hold. Thus we are led to the heuristic conclusion that the most stable power law distribution for the energies $E_N$ is the $\alpha = 2/3$ law
$$E_N \sim C N^{-2/3},$$
or in terms of shell energies $E(k)$ (defined so that $E_N = \int_N^{2N} E(k)\,dk$), we have the famous Kolmogorov 5/3 law
$$E(k) \sim C k^{-5/3}.$$
Given that frequency interactions tend to cascade from low frequencies to high (if only because there are so many more high frequencies than low ones), the above analysis predicts a stabilising effect around this power law: scales at which a law (6) holds for some $\alpha < 2/3$ are likely to lose energy in the near-term, while scales at which a law (6) holds for some $\alpha > 2/3$ are conversely expected to gain energy, thus nudging the exponent of the power law towards $2/3$.
We can solve for $C$ in terms of the energy dissipation as follows. If we let $N_*$ be the frequency scale demarcating the transition from the energy flow regime (5) to the dissipation regime (4), we have
$$\nu N_* \sim E_{N_*}^{1/2}$$
and hence by (10)
$$\nu N_* \sim C^{1/2} N_*^{-1/3}.$$
On the other hand, if we let $\epsilon$ be the energy dissipation at this scale,
$$\epsilon \sim \nu N_*^2 E_{N_*}$$
(which we expect to be the dominant scale of energy dissipation), we have
$$\epsilon \sim \nu\, C\, N_*^{4/3}.$$
Some simple algebra then lets us solve for $N_*$ and $C$ as
$$N_* \sim \nu^{-3/4} \epsilon^{1/4}$$
and
$$C \sim \epsilon^{2/3}.$$
Thus, we have the Kolmogorov prediction
$$E_N \sim \epsilon^{2/3} N^{-2/3} \qquad \hbox{for} \qquad N_0 \lesssim N \lesssim \nu^{-3/4} \epsilon^{1/4},$$
with energy dissipation occurring at the high end $N \sim \nu^{-3/4} \epsilon^{1/4}$ of this range, which is counterbalanced by the energy injection at the low end $N \sim N_0$ of the range.
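One can double-check the exponents in this algebra numerically (a sketch of my own; the numerical values of $\nu$ and $\epsilon$ below are arbitrary): treating the heuristic relations as exact equations, the claimed transition scale and constant satisfy both constraints.

```python
# A numerical double-check of my own for the algebra above: treating the
# heuristic relations as exact equations, the claimed transition scale
# N_* = nu^(-3/4) eps^(1/4) and constant C = eps^(2/3) in the 2/3 law
# E_N = C N^(-2/3) should satisfy both the transition condition
# nu N = E_N^(1/2) and the dissipation relation eps = nu N^2 E_N.
import math

nu, eps = 0.013, 2.7                    # arbitrary positive test values
N_star = nu**-0.75 * eps**0.25          # claimed Kolmogorov transition scale
C = eps**(2 / 3)                        # claimed constant in the 2/3 law
E = C * N_star**(-2 / 3)                # dyadic energy at the transition scale

assert math.isclose(nu * N_star, math.sqrt(E))   # boundary between (5) and (4)
assert math.isclose(eps, nu * N_star**2 * E)     # dominant dissipation
print("both scaling relations hold at N_* =", N_star)
```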
I’ve just uploaded to the arXiv the paper “Finite time blowup for an averaged three-dimensional Navier-Stokes equation“, submitted to J. Amer. Math. Soc.. The main purpose of this paper is to formalise the “supercriticality barrier” for the global regularity problem for the Navier-Stokes equation, which roughly speaking asserts that it is not possible to establish global regularity by any “abstract” approach which only uses upper bound function space estimates on the nonlinear part of the equation, combined with the energy identity. This is done by constructing a modification of the Navier-Stokes equations with a nonlinearity that obeys essentially all of the function space estimates that the true Navier-Stokes nonlinearity does, and which also obeys the energy identity, but for which one can construct solutions that blow up in finite time. Results of this type had been previously established by Montgomery-Smith, Gallagher-Paicu, and Li-Sinai for variants of the Navier-Stokes equation without the energy identity, and by Katz-Pavlovic and by Cheskidov for dyadic analogues of the Navier-Stokes equations in five and higher dimensions that obeyed the energy identity (see also the work of Plechac and Sverak and of Hou and Lei that also suggest blowup for other Navier-Stokes type models obeying the energy identity in five and higher dimensions), but to my knowledge this is the first blowup result for a Navier-Stokes type equation in three dimensions that also obeys the energy identity. Intriguingly, the method of proof in fact hints at a possible route to establishing blowup for the true Navier-Stokes equations, which I am now increasingly inclined to believe is the case (albeit for a very small set of initial data).
To state the results more precisely, recall that the Navier-Stokes equations can be written in the form
$$\partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p, \qquad \nabla \cdot u = 0$$
for a divergence-free velocity field $u$ and a pressure field $p$, where $\nu > 0$ is the viscosity, which we will normalise to be one. We will work in the non-periodic setting, so the spatial domain is $\mathbf{R}^3$, and for sake of exposition I will not discuss matters of regularity or decay of the solution (but we will always be working with strong notions of solution here rather than weak ones). Applying the Leray projection $P$ to divergence-free vector fields to this equation, we can eliminate the pressure, and obtain an evolution equation
$$\partial_t u = \Delta u + B(u,u) \qquad (1)$$
purely for the velocity field, where $B$ is a certain bilinear operator on divergence-free vector fields (specifically, $B(u,v) = -\frac{1}{2} P\big( (u \cdot \nabla) v + (v \cdot \nabla) u \big)$). The global regularity problem for Navier-Stokes is then equivalent to the global regularity problem for the evolution equation (1).
An important feature of the bilinear operator $B$ appearing in (1) is the cancellation law
$$\langle B(u,u), u \rangle = 0$$
(using the $L^2$ inner product on divergence-free vector fields), which leads in particular to the fundamental energy identity
$$\frac{1}{2} \int_{\mathbf{R}^3} |u(T,x)|^2\, dx + \int_0^T \int_{\mathbf{R}^3} |\nabla u(t,x)|^2\, dx\, dt = \frac{1}{2} \int_{\mathbf{R}^3} |u(0,x)|^2\, dx.$$
This identity (and its consequences) provide essentially the only known a priori bound on solutions to the Navier-Stokes equations from large data and arbitrary times. Unfortunately, as discussed in this previous post, the quantities controlled by the energy identity are supercritical with respect to scaling, which is the fundamental obstacle that has defeated all attempts to solve the global regularity problem for Navier-Stokes without any additional assumptions on the data or solution (e.g. perturbative hypotheses, or a priori control on a critical norm such as the $L^3_x(\mathbf{R}^3)$ norm).
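For the record, the cancellation law follows from a one-line integration by parts (sketched here assuming enough smoothness and decay to justify the manipulations, and using that the Leray projection $P$ is self-adjoint and fixes divergence-free fields):

```latex
\langle B(u,u), u \rangle
= -\big\langle P\big((u \cdot \nabla) u\big), u \big\rangle
= -\int_{\mathbf{R}^3} (u \cdot \nabla) \tfrac{|u|^2}{2}\, dx
= \int_{\mathbf{R}^3} (\nabla \cdot u)\, \tfrac{|u|^2}{2}\, dx
= 0.
```

Pairing the evolution equation (1) against $u$ and integrating in time then yields the energy identity.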
Our main result is then (slightly informally stated) as follows
Theorem 1 There exists an averaged version $\tilde B$ of the bilinear operator $B$, of the form
$$\tilde B(u,v) := \int_\Omega m_{3,\omega}(D)\, \mathrm{Rot}_{3,\omega}\, B\big( m_{1,\omega}(D)\, \mathrm{Rot}_{1,\omega} u,\ m_{2,\omega}(D)\, \mathrm{Rot}_{2,\omega} v \big)\, d\mu(\omega)$$
for some probability space $(\Omega, \mu)$, some spatial rotation operators $\mathrm{Rot}_{i,\omega}$ for $i = 1, 2, 3$, and some Fourier multipliers $m_{i,\omega}(D)$ of order $0$, for which one still has the cancellation law
$$\langle \tilde B(u,u), u \rangle = 0$$
and for which the averaged Navier-Stokes equation
$$\partial_t u = \Delta u + \tilde B(u,u) \qquad (2)$$
admits solutions that blow up in finite time.
(There are some integrability conditions on the Fourier multipliers required in the above theorem in order for the conclusion to be non-trivial, but I am omitting them here for sake of exposition.)
Because spatial rotations and Fourier multipliers of order $0$ are bounded on most function spaces, $\tilde B$ automatically obeys almost all of the upper bound estimates that $B$ does. Thus, this theorem blocks any attempt to prove global regularity for the true Navier-Stokes equations which relies purely on the energy identity and on upper bound estimates for the nonlinearity; one must use some additional structure of the nonlinear operator $B$ which is not shared by an averaged version $\tilde B$. Such additional structure certainly exists – for instance, the Navier-Stokes equation has a vorticity formulation involving only differential operators rather than pseudodifferential ones, whereas a general equation of the form (2) does not. However, “abstract” approaches to global regularity generally do not exploit such structure, and thus cannot be used to affirmatively answer the Navier-Stokes problem.
It turns out that the particular averaged bilinear operator $\tilde B$ that we will use will be a finite linear combination of local cascade operators, which take the form
$$C(u,v) := \sum_{n \in \mathbf{Z}} \lambda^{\frac{5}{2} n} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \psi_{3,n}$$
where $\lambda := 1 + \epsilon_0$ for a small parameter $\epsilon_0 > 0$, $\psi_1, \psi_2, \psi_3$ are Schwartz vector fields whose Fourier transform is supported on an annulus, and $\psi_{i,n}(x) := \lambda^{\frac{3}{2} n} \psi_i( \lambda^n x )$ is an $L^2$-rescaled version of $\psi_i$ (basically a “wavelet” of wavelength about $\lambda^{-n}$ centred at the origin). Such operators were essentially introduced by Katz and Pavlovic as dyadic models for $B$; they have essentially the same scaling property as $B$ (except that one can only scale along powers of $\lambda$, rather than over all positive reals), and in fact they can be expressed as an average of $B$ in the sense of the above theorem, as can be shown after a somewhat tedious amount of Fourier-analytic symbol manipulations.
If we consider nonlinearities $\tilde B$ which are a finite linear combination of local cascade operators, then the equation (2) more or less collapses to a system of ODE in certain “wavelet coefficients” of $u$. The precise ODE that shows up depends on what precise combination of local cascade operators one is using. Katz and Pavlovic essentially considered a single cascade operator together with its “adjoint” (needed to preserve the energy identity), and arrived (more or less) at the system of ODE
$$\partial_t X_n = -\lambda^{2n} X_n + \lambda^{\frac{5}{2}(n-1)} X_{n-1}^2 - \lambda^{\frac{5}{2} n} X_n X_{n+1} \qquad (3)$$
where $X_n: \mathbf{R} \to \mathbf{R}$ are scalar fields for each integer $n$. (Actually, Katz-Pavlovic worked with a technical variant of this particular equation, but the differences are not so important for this current discussion.) Note that the quadratic terms on the RHS carry a higher exponent of $\lambda$ than the dissipation term; this reflects the supercritical nature of this evolution (the energy $\frac{1}{2} \sum_n X_n^2$ is monotone decreasing in this flow, so the natural size of $X_n$ given the control on the energy is $X_n = O(1)$). There is a slight technical issue with the dissipation if one wishes to embed (3) into an equation of the form (2), but it is minor and I will not discuss it further here.
In principle, if the mode $X_n$ has size comparable to $1$ at some time $t_n$, then energy should flow from $X_n$ to $X_{n+1}$ at a rate comparable to $\lambda^{\frac{5}{2} n}$, so that by time $t_n + O(\lambda^{-\frac{5}{2} n})$ or so, most of the energy of $X_n$ should have drained into the $X_{n+1}$ mode (with hardly any energy dissipated). Since the series $\sum_n \lambda^{-\frac{5}{2} n}$ is summable, this suggests finite time blowup for this ODE as the energy races ever more quickly to higher and higher modes. Such a scenario was indeed established by Katz and Pavlovic (and refined by Cheskidov) if the dissipation strength $\lambda^{2n}$ was weakened somewhat (the exponent $2$ in the dissipation term has to be lowered). As mentioned above, this is enough to give a version of Theorem 1 in five and higher dimensions.
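To get a quick feel for these dynamics, here is a small numerical experiment of my own (the truncation to eight modes, the choice $\lambda = 2$, and the forward Euler integrator are all arbitrary) with a dyadic system of the shape (3): the quadratic terms pass energy to higher modes, while the total energy $\frac{1}{2}\sum_n X_n^2$ can only be drained by the dissipation term.

```python
# My own sketch of a truncated dyadic cascade of the shape (3); the quadratic
# terms conserve energy exactly (the transfers telescope), so only the
# dissipation term can reduce the total energy as it migrates to higher modes.
import numpy as np

lam = 2.0                      # cascade ratio, a hypothetical stand-in for 1 + epsilon_0
n_modes = 8                    # truncation of the infinite system (arbitrary)
dt, steps = 2e-5, 50000        # crude forward Euler, fine for a qualitative look

def rhs(X):
    n = np.arange(n_modes)
    dX = -lam**(2 * n) * X                             # dissipation term
    dX[1:] += lam**(2.5 * n[:-1]) * X[:-1] ** 2        # energy arriving from mode n-1
    dX[:-1] -= lam**(2.5 * n[:-1]) * X[:-1] * X[1:]    # energy departing to mode n+1
    return dX

X = np.zeros(n_modes)
X[0] = 1.0                     # all energy initially at the coarsest mode
energy0 = 0.5 * np.sum(X**2)
for _ in range(steps):
    X += dt * rhs(X)
energy1 = 0.5 * np.sum(X**2)
print("energy:", energy0, "->", energy1)
```

Consistent with the Barbato-Morandin-Romito result quoted below, the strong $\lambda^{2n}$ dissipation kills this cascade well before any singularity can form.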
On the other hand, it was shown a few years ago by Barbato, Morandin, and Romito that (3) in fact admits global smooth solutions (at least in the dyadic case $\lambda = 2$, and assuming non-negative initial data). Roughly speaking, the problem is that as energy is being transferred from $X_n$ to $X_{n+1}$, energy is also simultaneously being transferred from $X_{n+1}$ to $X_{n+2}$, and as such the solution races off to higher modes a bit too prematurely, without absorbing all of the energy from lower modes. This weakens the strength of the blowup to the point where the moderately strong dissipation in (3) is enough to kill the high frequency cascade before a true singularity occurs. Because of this, the original Katz-Pavlovic model cannot quite be used to establish Theorem 1 in three dimensions. (Actually, the original Katz-Pavlovic model had some additional dispersive features which allowed for another proof of global smooth solutions, which is an unpublished result of Nazarov.)
To get around this, I had to "engineer" an ODE system with similar features to (3) (namely, a quadratic nonlinearity, a monotone total energy, and the indicated exponents of $\lambda$ for both the dissipation term and the quadratic terms), but for which the cascade of energy from scale $\lambda^n$ to scale $\lambda^{n+1}$ was not interrupted by the cascade of energy from scale $\lambda^{n+1}$ to scale $\lambda^{n+2}$. To do this, I needed to insert a delay in the cascade process (so that after energy was dumped into scale $\lambda^n$, it would take some time before the energy would start to transfer to scale $\lambda^{n+1}$), but the process also needed to be abrupt (once the process of energy transfer started, it needed to conclude very quickly, before the delayed transfer for the next scale kicked in). It turned out that one could build a "quadratic circuit" out of some basic "quadratic gates" (analogous to how an electrical circuit could be built out of basic gates such as amplifiers or resistors) that achieved this task, leading to an ODE system essentially of the form
where $K$ is a suitable large parameter and $\epsilon$ is a suitable small parameter (much smaller than $1/K$). To visualise the dynamics of such a system, I found it useful to describe this system graphically by a "circuit diagram" that is analogous (but not identical) to the circuit diagrams arising in electrical engineering:
The coupling constants here range widely from being very large to very small; in practice, this makes the $X_{1,n}$ and $X_{2,n}$ modes absorb very little energy, but exert a sizeable influence on the remaining modes. If a lot of energy is suddenly dumped into $X_{3,n}$, what happens next is roughly as follows: for a moderate period of time, nothing much happens other than a trickle of energy into $X_{1,n}$, which in turn causes a rapid exponential growth of $X_{2,n}$ (from a very low base). After this delay, $X_{2,n}$ suddenly crosses a certain threshold, at which point it causes $X_{3,n}$ and $X_{4,n}$ to exchange energy back and forth with extreme speed. The energy from $X_{4,n}$ then rapidly drains into $X_{3,n+1}$, and the process begins again (with a slight loss in energy due to the dissipation). If one plots the total energy at each scale as a function of time, it looks schematically like this:
As in the previous heuristic discussion, the time between cascades from one frequency scale to the next decays exponentially, leading to blowup at some finite time. (One could describe the dynamics here as being similar to the famous “lighting the beacons” scene in the Lord of the Rings movies, except that (a) as each beacon gets ignited, the previous one is extinguished, as per the energy identity; (b) the time between beacon lightings decreases exponentially; and (c) there is no soundtrack.)
There is a real (but remote) possibility that this sort of construction can be adapted to the true Navier-Stokes equations. The basic blowup mechanism in the averaged equation is that of a von Neumann machine, or more precisely a construct (built within the laws of the inviscid evolution) that, after some time delay, manages to suddenly create a replica of itself at a finer scale (and to largely erase its original instantiation in the process). In principle, such a von Neumann machine could also be built out of the laws of the inviscid form of the Navier-Stokes equations (i.e. the Euler equations). In physical terms, one would have to build the machine purely out of an ideal fluid (i.e. an inviscid incompressible fluid). If one could somehow create enough “logic gates” out of ideal fluid, one could presumably build a sort of “fluid computer”, at which point the task of building a von Neumann machine appears to reduce to a software engineering exercise rather than a PDE problem (provided that the gates are suitably stable with respect to perturbations, but (as with actual computers) this can presumably be done by converting the analog signals of fluid mechanics into a more error-resistant digital form). The key thing missing in this program (in both senses of the word) to establish blowup for Navier-Stokes is to construct the logic gates within the laws of ideal fluids. (Compare with the situation for cellular automata such as Conway’s “Game of Life“, in which Turing complete computers, universal constructors, and replicators have all been built within the laws of that game.)
A few days ago, I released a preprint entitled “Localisation and compactness properties of the Navier-Stokes global regularity problem“, discussed in this previous blog post. As it turns out, I was somewhat impatient to finalise the paper and move on to other things, and the original preprint was still somewhat rough in places (contradicting my own advice on this matter), with a number of typos of minor to moderate severity. But a bit more seriously, I discovered on a further proofreading that there was a subtle error in a component of the argument that I had believed to be routine – namely the persistence of higher regularity for mild solutions. As a consequence, some of the implications stated in the first version were not exactly correct as stated; but they can be repaired by replacing a “bad” notion of global regularity for a certain class of data with a “good” notion. I have completed (and proofread) an updated version of the ms, which should appear at the arXiv link of the paper in a day or two (and which I have also placed at this link). (In the meantime, it is probably best not to read the original ms too carefully, as this could lead to some confusion.) I’ve also added a new section that shows that, due to this technicality, one can exhibit smooth initial data to the Navier-Stokes equation for which there are no smooth solutions, which superficially sounds very close to a negative solution to the global regularity problem, but is actually nothing of the sort.
Let me now describe the issue in more detail (and also explain why I missed it previously). A standard principle in the theory of evolutionary partial differential equations is that regularity in space can be used to imply regularity in time. To illustrate this, consider a solution $u$ to the supercritical nonlinear wave equation
$$-\partial_{tt} u + \Delta u = u^7 \qquad (1)$$
for some field $u: {\bf R} \times {\bf R}^3 \to {\bf R}$. Suppose one already knew that $u$ had some regularity in space, and in particular that the $C^0_t C^2_x$ norm of $u$ was bounded (thus $u$ and up to two spatial derivatives of $u$ were bounded). Then, by (1), we see that two time derivatives of $u$ were also bounded, and one then gets the additional regularity of $u \in C^2_t C^0_x$.
In a similar vein, suppose one initially knew that $u$ had the regularity $C^0_t C^4_x$. Then (1) soon tells us that $u$ also has the regularity $C^2_t C^2_x$; then, if one differentiates (1) in time to obtain
$$-\partial_{ttt} u + \Delta \partial_t u = 7 u^6 \partial_t u,$$
one can conclude that $u$ also has the regularity $C^3_t C^1_x$, and iterating once more gives $C^4_t C^0_x$. One can continue this process indefinitely; in particular, if one knew that $u \in C^0_t C^k_x$ for every $k$, then these sorts of manipulations show that $u$ is infinitely smooth in both space and time.
The issue that caught me by surprise is that for the Navier-Stokes equations
$$\partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p; \quad \nabla \cdot u = 0 \qquad (2)$$
(setting the forcing term equal to zero for simplicity), infinite regularity in space does not automatically imply infinite regularity in time, even if one assumes the initial data lies in a standard function space such as the Sobolev space $H^\infty_x := \bigcap_k H^k_x$. The problem lies with the pressure term $p$, which is recovered from the velocity via the elliptic equation
$$\Delta p = -\partial_i \partial_j (u_i u_j) \qquad (3)$$
that can be obtained by taking the divergence of (2). This equation is solved by the non-local integral operator
$$p = -\Delta^{-1} \partial_i \partial_j (u_i u_j).$$
If, say, $u$ lies in $H^\infty_x$, then there is no difficulty establishing a bound on $p$ in terms of $u$ (for instance, one can use singular integral theory and Sobolev embedding to place $p$ in $H^\infty_x$). However, one runs into difficulty when trying to compute time derivatives of $p$. Differentiating (3) once in time, one gets
$$\Delta \partial_t p = -2 \partial_i \partial_j ( u_i \partial_t u_j ).$$
At the regularity of $H^\infty_x$, one can still (barely) control this quantity by using (2) to expand out $\partial_t u$ and using some integration by parts. But when one wishes to compute a second time derivative of the pressure, one obtains (after integration by parts) an expansion whose top-order terms are no longer under control: there is not enough regularity on $u$ available to bound $\partial_{tt} p$, even if one assumes that $u$ is smooth. Indeed, following this observation, I was able to show that given generic smooth $H^\infty_x$ data, the pressure $p$ will instantaneously fail to be smooth in time, and thence (by (2)) the velocity $u$ will also instantaneously fail to be smooth in time. (Switching to the vorticity formulation buys one further degree of time differentiability, but does not fully eliminate the problem; the vorticity $\omega = \nabla \times u$ will still fail to be smooth in time. Switching to material coordinates seems to make things very slightly better, but I believe there is still a breakdown of time regularity in these coordinates also.)
For later times $t>0$ (and assuming homogeneous data $f=0$ for simplicity), this issue no longer arises, because of the instantaneous smoothing effect of the Navier-Stokes flow, which upgrades spatial regularity to full spacetime regularity instantaneously. It is only at the initial time $t=0$ that this time irregularity can occur.
This breakdown of regularity does not actually impact the original formulation of the Clay Millennium Prize problem, though, because in that problem the initial velocity is required to be Schwartz class (so all derivatives are rapidly decreasing). In this class, the regularity theory works as expected; if one has a solution which already has some reasonable regularity (e.g. a mild solution) and the data is Schwartz, then the solution will be smooth in spacetime. (Another class where things work as expected is when the vorticity is Schwartz; in such cases, the solution remains smooth in both space and time (for short times, at least), and the Schwartz nature of the vorticity is preserved (because the vorticity is subject to fewer non-local effects than the velocity, as it is not directly affected by the pressure).)
This issue means that one of the implications in the original paper (roughly speaking, that global regularity for Schwartz data implies global regularity for smooth data) is not correct as stated. But this can be fixed by weakening the notion of global regularity in the latter setting, by limiting the amount of time differentiability available at the initial time. More precisely, call a solution $(u, p)$ almost smooth if

- $u$ and $p$ are smooth on the half-open slab $(0,T] \times {\bf R}^3$; and
- For every $k \geq 0$, the derivatives $\nabla_x^k u$, $\nabla_x^k p$, and $\partial_t \nabla_x^k u$ exist and are continuous on the full slab $[0,T] \times {\bf R}^3$.

Thus, an almost smooth solution is the same concept as a smooth solution, except that at time zero, the velocity field is only guaranteed one degree of time differentiability, and the pressure field none. This is still enough regularity to interpret the Navier-Stokes equation (2) in a classical manner, but falls slightly short of full smoothness.
(I had already introduced this notion of almost smoothness in the more general setting of smooth finite energy solutions in the first draft of this paper, but had failed to realise that it was also necessary in the smooth setting.)
One can now “fix” the global regularity conjectures for Navier-Stokes in the smooth or smooth finite energy setting by requiring the solutions to merely be almost smooth instead of smooth. Once one does so, the results in my paper then work as before: roughly speaking, if one knows that Schwartz data produces smooth solutions, one can conclude that smooth $H^1$ or smooth finite energy data produces almost smooth solutions (and the paper now contains counterexamples to show that one does not always have smooth solutions in this category).
The diagram of implications between conjectures has been adjusted to reflect this issue, and now reads as follows:
I’ve just uploaded to the arXiv my paper “Localisation and compactness properties of the Navier-Stokes global regularity problem“, submitted to Analysis and PDE. This paper concerns the global regularity problem for the Navier-Stokes system of equations
$$\partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p + f \qquad (1)$$
$$\nabla \cdot u = 0 \qquad (2)$$
$$u(0, \cdot) = u_0 \qquad (3)$$
in three dimensions. Thus, we specify initial data $(u_0, f, T)$, where $0 < T < \infty$ is a time, $u_0: {\bf R}^3 \to {\bf R}^3$ is the initial velocity field (which, in order to be compatible with (2), (3), is required to be divergence-free), $f: [0,T] \times {\bf R}^3 \to {\bf R}^3$ is the forcing term, and then seek to extend this initial data to a solution $(u, p, u_0, f, T)$ with this data, where the velocity field $u: [0,T] \times {\bf R}^3 \to {\bf R}^3$ and pressure term $p: [0,T] \times {\bf R}^3 \to {\bf R}$ are the unknown fields.
Roughly speaking, the global regularity problem asserts that given every smooth set of initial data $(u_0, f, T)$, there exists a smooth solution $(u, p, u_0, f, T)$ to the Navier-Stokes equation with this data. However, this is not a good formulation of the problem because it does not exclude the possibility that one or more of the fields $u_0, f, u, p$ grows too fast at spatial infinity. This problem is evident even for the much simpler heat equation
$$\partial_t u = \Delta u.$$
As long as one has some mild conditions at infinity on the smooth initial data $u_0$ (e.g. polynomial growth at spatial infinity), then one can solve this equation using the fundamental solution of the heat equation:
$$u(t,x) = \frac{1}{(4\pi t)^{3/2}} \int_{{\bf R}^3} e^{-|x-y|^2/4t} u_0(y)\, dy.$$
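For a concrete (and entirely standard) illustration of solving the heat equation by convolution with the fundamental solution, here is a one-dimensional numerical sketch, using Gaussian data so that the exact solution is available for comparison:

```python
import numpy as np

# Solve u_t = u_xx (one spatial dimension for simplicity) by convolving the
# data with the heat kernel K(t,z) = (4*pi*t)**(-1/2) * exp(-z**2/(4t)).
# For the Gaussian data exp(-x**2/2) the exact solution is
#   u(t,x) = (1+2t)**(-1/2) * exp(-x**2 / (2*(1+2t))),
# which lets us check the quadrature.

L, n = 20.0, 2001
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
u0 = np.exp(-x**2 / 2)

def heat_evolve(u0, t):
    # Riemann-sum approximation of u(t,x) = int K(t, x - y) u0(y) dy
    K = lambda z: np.exp(-z**2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return np.array([np.sum(K(xi - x) * u0) * dx for xi in x])

t = 0.5
u_num = heat_evolve(u0, t)
u_exact = np.exp(-x**2 / (2 * (1 + 2 * t))) / np.sqrt(1 + 2 * t)
err = np.abs(u_num - u_exact).max()
assert err < 1e-6
```

The domain truncation is harmless here because the Gaussian tails at $|x| = 20$ are far below machine precision.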
If furthermore $u_0$ is a tempered distribution, one can use Fourier-analytic methods to show that this is the unique solution to the heat equation with this data. But once one allows sufficiently rapid growth at spatial infinity, existence and uniqueness can break down. Consider for instance the backwards heat kernel
$$u(t,x) = \frac{1}{(T-t)^{3/2}} e^{|x|^2/4(T-t)}$$
for some $T > 0$, which is smooth (albeit exponentially growing) at time zero, and is a smooth solution to the heat equation for $0 \leq t < T$, but develops a dramatic singularity at time $t = T$. A famous example of Tychonoff from 1935, based on a power series construction, shows that uniqueness for the heat equation can also fail once growth conditions are removed. An explicit example of non-uniqueness for the heat equation is given by a certain contour integral along a contour consisting of the positive real axis and the upper imaginary axis, with the relevant power of the integration variable interpreted using the standard branch (with cut on the negative axis). One can show by contour integration that this function solves the heat equation and is smooth (but rapidly growing at infinity), and vanishes for $t \leq 0$, but is not identically zero for $t > 0$.
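The backwards heat kernel can be verified symbolically; here is a quick sympy check in one spatial dimension (dropping the immaterial normalising constant), confirming that $(T-t)^{-1/2} e^{x^2/4(T-t)}$ solves the forward heat equation for $t < T$ while blowing up as $t \to T$:

```python
import sympy as sp

# Symbolic check that the 1D backwards heat kernel solves u_t = u_xx.
t, x, T = sp.symbols('t x T', positive=True)
u = (T - t) ** sp.Rational(-1, 2) * sp.exp(x**2 / (4 * (T - t)))

# Normalising the residual by u leaves a rational function of (t, x, T),
# which sympy simplifies to zero.
residual = sp.simplify((sp.diff(u, t) - sp.diff(u, x, 2)) / u)
assert residual == 0
```

The same computation, with exponent $3/2$ in place of $1/2$, verifies the three-dimensional version.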
Thus, in order to obtain a meaningful (and physically realistic) problem, one needs to impose some decay (or at least limited growth) hypotheses on the data $u_0, f$ and solution $u, p$ in addition to smoothness. For the data, one can impose a variety of such hypotheses, including the following:

- (Finite energy data) One has $u_0 \in L^2_x({\bf R}^3)$ and $f \in L^1_t L^2_x([0,T] \times {\bf R}^3)$.
- ($H^1$ data) One has $u_0 \in H^1_x({\bf R}^3)$ and $f \in L^1_t H^1_x([0,T] \times {\bf R}^3)$.
- (Schwartz data) One has $\sup_{x} (1+|x|)^m |\nabla_x^k u_0(x)| < \infty$ and $\sup_{t,x} (1+|x|)^m |\nabla_{t,x}^k f(t,x)| < \infty$ for all $m, k \geq 0$.
- (Periodic data) There is some $L > 0$ such that $u_0(x + Lk) = u_0(x)$ and $f(t, x + Lk) = f(t,x)$ for all $t \in [0,T]$, $x \in {\bf R}^3$, and $k \in {\bf Z}^3$.
- (Homogeneous data) $f = 0$.

Note that smoothness alone does not necessarily imply finite energy, $H^1$, or the Schwartz property. For instance, a (scalar) function which decays slowly at infinity while oscillating increasingly rapidly there can be smooth and finite energy, but not in $H^1$ or Schwartz. Periodicity is of course incompatible with finite energy, $H^1$, or the Schwartz property, except in the trivial case when the data is identically zero.
Similarly, one can impose conditions at spatial infinity on the solution, such as the following:

- (Finite energy solution) One has $u \in L^\infty_t L^2_x \cap L^2_t \dot H^1_x$.
- ($H^1$ solution) One has $u \in L^\infty_t H^1_x$ and $u \in L^2_t H^2_x$.
- (Partially periodic solution) There is some $L > 0$ such that $u(t, x + Lk) = u(t,x)$ for all $t$, $x$, and $k \in {\bf Z}^3$.
- (Fully periodic solution) There is some $L > 0$ such that $u(t, x + Lk) = u(t,x)$ and $p(t, x + Lk) = p(t,x)$ for all $t$, $x$, and $k \in {\bf Z}^3$.

(The $L^2_t H^2_x$ component of the $H^1$ solution is imposed for technical reasons, and should not be paid too much attention for this discussion.) Note that we do not consider the notion of a Schwartz solution; as we shall see shortly, this is too restrictive a concept of solution to the Navier-Stokes equation.
Finally, one can downgrade the regularity of the solution down from smoothness. There are many ways to do so; two such examples include

- ($H^1$ mild solutions) The solution is not smooth, but is $H^1$ (in the preceding sense) and solves the equation (1) in the sense that the Duhamel formula
$$u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} \left( f - (u \cdot \nabla) u - \nabla p \right)(t')\, dt'$$
holds.
- (Leray-Hopf weak solution) The solution $u$ is not smooth, but lies in the energy class $L^\infty_t L^2_x \cap L^2_t \dot H^1_x$, solves (1) in the sense of distributions (after rewriting the system in divergence form), and obeys an energy inequality.
Finally, one can ask for two types of global regularity results on the Navier-Stokes problem: a qualitative regularity result, in which one merely establishes existence of a smooth solution without any explicit bounds on that solution, and a quantitative regularity result, which provides bounds on the solution in terms of the initial data, e.g. a bound of the shape
$$\| u \|_{X} \leq F( \|u_0\|_{Y}, \|f\|_{Z}, T )$$
for some function $F$ (and some schematic choice of norms $X, Y, Z$). One can make a further distinction between local quantitative results, in which $F$ is allowed to depend on $T$, and global quantitative results, in which there is no dependence on $T$ (the latter is only reasonable though in the homogeneous case $f = 0$, or if $f$ has some decay in time).
By combining these various hypotheses and conclusions, we see that one can write down quite a large number of slightly different variants of the global regularity problem. In the official formulation of the regularity problem for the Clay Millennium prize, a positive correct solution to either of the following two problems would be accepted for the prize:

- Conjecture 1.4 (Qualitative regularity for homogeneous periodic data) If $(u_0, 0, T)$ is periodic, smooth, and homogeneous, then there exists a smooth partially periodic solution $(u, p, u_0, 0, T)$ with this data.
- Conjecture 1.3 (Qualitative regularity for homogeneous Schwartz data) If $(u_0, 0, T)$ is Schwartz and homogeneous, then there exists a smooth finite energy solution $(u, p, u_0, 0, T)$ with this data.

(The numbering here corresponds to the numbering in the paper.)

Furthermore, a negative correct solution to either of the following two problems would also be accepted for the prize:

- Conjecture 1.6 (Qualitative regularity for periodic data) If $(u_0, f, T)$ is periodic and smooth, then there exists a smooth partially periodic solution $(u, p, u_0, f, T)$ with this data.
- Conjecture 1.5 (Qualitative regularity for Schwartz data) If $(u_0, f, T)$ is Schwartz, then there exists a smooth finite energy solution $(u, p, u_0, f, T)$ with this data.
I am not announcing any major progress on these conjectures here. What my paper does study, though, is the question of whether the answer to these conjectures is somehow sensitive to the choice of formulation. For instance:
- Note in the periodic formulations of the Clay prize problem that the solution is only required to be partially periodic, rather than fully periodic; thus the pressure has no periodicity hypothesis. One can ask the extent to which the above problems change if one also requires pressure periodicity.
- In another direction, one can ask the extent to which quantitative formulations of the Navier-Stokes problem are stronger than their qualitative counterparts; in particular, whether it is possible that each choice of initial data in a certain class leads to a smooth solution, but with no uniform bound on that solution in terms of various natural norms of the data.
- Finally, one can ask the extent to which the conjecture depends on the category of data. For instance, could it be that global regularity is true for smooth periodic data but false for Schwartz data? True for Schwartz data but false for smooth $H^1$ data? And so forth.
One motivation for the final question (which was posed to me by my colleague, Andrea Bertozzi) is that the Schwartz property on the initial data tends to be instantly destroyed by the Navier-Stokes flow. This can be seen by introducing the vorticity $\omega := \nabla \times u$. If $u$ is Schwartz, then from Stokes’ theorem we necessarily have vanishing of certain moments of the vorticity. On the other hand, some integration by parts using (1) reveals that such moments are usually not preserved by the flow: they obey an explicit evolution law, and one can easily concoct examples for which the right-hand side of that law is non-zero at time zero. This suggests that the Schwartz class may be unnecessarily restrictive for Conjecture 1.3 or Conjecture 1.5.
My paper arose out of an attempt to address these three questions, and I ended up obtaining partial results in all three directions. Roughly speaking, the results that address these three questions are as follows:
- (Homogenisation) If one only assumes partial periodicity instead of full periodicity, then the forcing term $f$ becomes irrelevant. In particular, Conjecture 1.4 and Conjecture 1.6 are equivalent.
- (Concentration compactness) In the $H^1$ category (both periodic and nonperiodic, homogeneous or nonhomogeneous), the qualitative and quantitative formulations of the Navier-Stokes global regularity problem are essentially equivalent.
- (Localisation) The (inhomogeneous) Navier-Stokes problems in the Schwartz, smooth $H^1$, and finite energy categories are essentially equivalent to each other, and are also implied by the (fully) periodic version of these problems.
The first two of these families of results are relatively routine, drawing on existing methods in the literature; the localisation results though are somewhat more novel, and introduce some new local energy and local enstrophy estimates which may be of independent interest.
Broadly speaking, the moral to draw from these results is that the precise formulation of the Navier-Stokes equation global regularity problem is only of secondary importance; modulo a number of caveats and technicalities, the various formulations are close to being equivalent, and a breakthrough on any one of the formulations is likely to lead (either directly or indirectly) to a comparable breakthrough on any of the others.
This is only a caricature of the actual implications, though. Below is the diagram from the paper indicating the various formulations of the Navier-Stokes equations, and the known implications between them:
The above three streams of results are discussed in more detail below the fold.
I’ve just uploaded to the arXiv my paper “Global regularity for a logarithmically supercritical hyperdissipative Navier-Stokes equation“, submitted to Analysis & PDE. It is a famous problem to establish the existence of global smooth solutions to the three-dimensional Navier-Stokes system of equations
$$\partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p; \quad \nabla \cdot u = 0,$$
given smooth, compactly supported, divergence-free initial data $u_0$.
I do not claim to have any substantial progress on this problem here. Instead, the paper makes a small observation about the hyper-dissipative version of the Navier-Stokes equations, namely
$$\partial_t u + (u \cdot \nabla) u = -(-\Delta)^\alpha u - \nabla p; \quad \nabla \cdot u = 0,$$
for some exponent $\alpha > 0$. It is a folklore result that global regularity for this equation holds for $\alpha \geq 5/4$; the significance of the exponent $5/4$ is that it is energy-critical, in the sense that the scaling which preserves this particular hyper-dissipative Navier-Stokes equation also preserves the energy.

Values of $\alpha$ below $5/4$ (including, unfortunately, the case $\alpha = 1$, which is the original Navier-Stokes equation) are supercritical, and establishing global regularity for them is thus beyond the reach of most known methods (see my earlier blog post for more discussion).
A few years ago, I observed (in the case of the spherically symmetric wave equation) that this “criticality barrier” had a very small amount of flexibility to it, in that one could push a critical argument to a slightly supercritical one by exploiting spacetime integral estimates a little bit more. I realised recently that the same principle applies to hyperdissipative Navier-Stokes; here, the relevant spacetime integral estimate is the energy dissipation inequality
$$\int_0^T \int_{{\bf R}^3} |(-\Delta)^{\alpha/2} u|^2\, dx\, dt \leq \frac{1}{2} \int_{{\bf R}^3} |u_0|^2\, dx,$$
which ensures that the energy dissipation is locally integrable (and in fact globally integrable) in time.

In this paper I push the global regularity results by a fraction of a logarithm from $\alpha \geq 5/4$ towards $\alpha = 1$. For instance, the argument shows that the logarithmically supercritical equation, in which the critical dissipation $(-\Delta)^{5/4}$ is weakened by a half-logarithmic factor,
$$\partial_t u + (u \cdot \nabla) u = - \frac{(-\Delta)^{5/4}}{\log^{1/2}(2 - \Delta)} u - \nabla p; \quad \nabla \cdot u = 0, \qquad (0)$$
admits global smooth solutions.
The argument is in fact quite simple (the paper is seven pages in length), and relies on known technology; one just applies the energy method and a logarithmically modified Sobolev inequality in the spirit of a well-known inequality of Brezis and Wainger. It looks like it will take quite a bit of effort though to improve the logarithmic factor much further.
One way to explain the tiny bit of wiggle room beyond the critical case is as follows. The standard energy method approach to the critical Navier-Stokes equation relies at one stage on Gronwall’s inequality, which among other things asserts that if a time-dependent non-negative quantity $E(t)$ obeys the differential inequality
$$\partial_t E(t) \leq a(t) E(t) \qquad (1)$$
and $a$ is locally integrable, then $E$ does not blow up in time; in fact, one has the inequality
$$E(t) \leq E(0) \exp\left( \int_0^t a(s)\, ds \right).$$
A slight modification of the argument shows that one can replace the linear inequality with a slightly superlinear inequality. For instance, the differential inequality
$$\partial_t E(t) \leq a(t) E(t) \log(2 + E(t)) \qquad (2)$$
also does not blow up in time; indeed, a separation of variables argument gives the explicit double-exponential bound
$$E(t) \leq \exp\left( \log(2 + E(0)) \exp\left( \int_0^t a(s)\, ds \right) \right)$$
(let’s take $E$ non-negative and all functions smooth, to avoid technicalities). It is this ability to go beyond Gronwall’s inequality by a little bit which is really at the heart of the logarithmically supercritical phenomenon. In the paper, I establish an inequality basically of the shape (2), where $E$ is a suitably high-regularity Sobolev norm of $u$, and $a$ is basically the energy dissipation mentioned earlier. The point is that the logarithmic loss in the dissipation can eventually be converted (by a Brezis-Wainger type argument) to a logarithmic loss in the high-regularity energy, as this energy can serve as a proxy for the frequency, which in turn serves as a proxy for the Laplacian appearing in the dissipation.
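As a numerical illustration of this double-exponential (but global) growth, here is a small sketch integrating the borderline model ODE $\partial_t E = E \log E$ (a clean caricature of the superlinear inequality, with constant coefficient) and comparing against its exact solution $E(t) = E(0)^{e^t}$; the initial value and time horizon are arbitrary choices:

```python
import math

# Integrate E' = E * log(E) with classical RK4 and compare with the exact
# solution: setting F = log(E) gives F' = F, so F(t) = F(0) * exp(t) and
# hence E(t) = E(0) ** exp(t) -- a double exponential, but never infinite.

def f(E):
    return E * math.log(E)

def solve(E0, t_end, steps=50000):
    dt, E = t_end / steps, E0
    for _ in range(steps):
        k1 = f(E)
        k2 = f(E + 0.5 * dt * k1)
        k3 = f(E + 0.5 * dt * k2)
        k4 = f(E + dt * k3)
        E += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return E

E0, t_end = 2.0, 1.0
E_num = solve(E0, t_end)
E_exact = E0 ** math.exp(t_end)      # exact solution E(t) = E(0)**exp(t)
assert abs(E_num - E_exact) / E_exact < 1e-6
```

Note that with a genuinely superlinear power nonlinearity such as $E' = E^2$ the same experiment would blow up in finite time; the logarithm is exactly on the global side of that divide.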
To put it another way, with a linear exponential growth model, such as $\partial_t E = E$, it takes a constant amount of time for $E$ to double, and so $E$ never becomes infinite in finite time. With an equation such as $\partial_t E = E \log E$, the time taken for $E$ to double from (say) $2^n$ to $2^{n+1}$ now shrinks to zero as $n \to \infty$, but only as quickly as the harmonic series $\frac{1}{n}$, so it still takes an infinite amount of time for $E$ to blow up. But because the divergence of the harmonic series $\sum_n \frac{1}{n}$ is logarithmically slow, the growth of $E$ is now a double exponential rather than a single one. So there is a little bit of room to exploit between exponential growth and blowup.
Interestingly, there is a heuristic argument that suggests that the half-logarithmic loss in (0) can be widened to a full logarithmic loss, which I give below the fold.
This problem is formulated in a qualitative way: the conjecture asserts that the velocity field remains smooth, without demanding any explicit bounds. More precisely, one can consider the following three conjectures:

- (Qualitative regularity conjecture) Given any smooth divergence-free data $u_0$, there exists a global smooth solution $u$ to the Navier-Stokes equations.
- (Local-in-time quantitative regularity conjecture) Given any smooth solution $u$ to the Navier-Stokes equations with $0 \leq t \leq T$, one has the a priori bound
$$\| u(t) \|_{H^1} \leq F( \| u_0 \|_{H^1}, T )$$
for some non-decreasing function $F$.
- (Global-in-time quantitative regularity conjecture) This is the same conjecture as 2, but with the condition $0 \leq t \leq T$ replaced by $0 \leq t < \infty$ (and with $F$ independent of $T$).
It is easy to see that Conjecture 3 implies Conjecture 2, which implies Conjecture 1. By using the compactness of the local periodic Navier-Stokes flow in $H^1$, one can show that Conjecture 1 implies Conjecture 2; and by using the energy identity (and in particular the fact that the energy dissipation is bounded) one can deduce Conjecture 3 from Conjecture 2. The argument uses only standard tools and is likely to generalise in a number of ways, which I discuss in the paper. (In particular, one should be able to replace the $H^1$ norm here by any other subcritical norm.)