It is always dangerous to venture an opinion as to why a problem is hard (cf. Clarke’s first law), but I’m going to stick my neck out on this one, because (a) it seems that there has been a lot of effort expended on this problem recently, sometimes perhaps without full awareness of the main difficulties, and (b) I would love to be proved wrong on this opinion :-) .

The global regularity problem for Navier-Stokes is of course a Clay Millennium Prize problem and it would be redundant to describe it again here. I will note, however, that it asks for existence of global smooth solutions to a Cauchy problem for a nonlinear PDE. There are countless other global regularity results of this type for many (but certainly not all) other nonlinear PDE; for instance, global regularity is known for Navier-Stokes in two spatial dimensions rather than three (this result essentially dates all the way back to Leray’s thesis in 1933!). Why is the three-dimensional Navier-Stokes global regularity problem considered so hard, when global regularity for so many other equations is easy, or at least achievable?
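For notation's sake (the scaling and energy arguments below refer to the velocity field and pressure), recall that the equations in question, in the formulation on $\mathbb{R}^3$ with viscosity $\nu > 0$, read:

```latex
% Incompressible Navier-Stokes Cauchy problem on R^3
\partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p,
\qquad \nabla \cdot u = 0,
\qquad u(0,x) = u_0(x),
```

where $u : [0,T) \times \mathbb{R}^3 \to \mathbb{R}^3$ is the divergence-free velocity field, $p$ the pressure, and $u_0$ the prescribed smooth (and suitably decaying) initial datum.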

(For this post, I am only considering the global regularity problem for Navier-Stokes, from a purely mathematical viewpoint, and in the precise formulation given by the Clay Institute; I will not discuss at all the question as to what implications a rigorous solution (either positive or negative) to this problem would have for physics, computational fluid dynamics, or other disciplines, as these are beyond my area of expertise. But if anyone qualified in these fields wants to make a comment along these lines, by all means do so.)

The standard response to this question is *turbulence* – the behaviour of three-dimensional Navier-Stokes equations at fine scales is much more nonlinear (and hence unstable) than at coarse scales. I would phrase the obstruction slightly differently, as *supercriticality*. Or more precisely, all of the globally controlled quantities for Navier-Stokes evolution which we are aware of (and we are not aware of very many) are either *supercritical* with respect to scaling, which means that they are much weaker at controlling fine-scale behaviour than controlling coarse-scale behaviour, or they are *non-coercive*, which means that they do not really control the solution at all, either at coarse scales or at fine. (I’ll define these terms more precisely later.) At present, all known methods for obtaining global smooth solutions to a (deterministic) nonlinear PDE Cauchy problem require either

1. Exact and explicit solutions (or at least an exact, explicit transformation to a significantly simpler PDE or ODE);
2. Perturbative hypotheses (e.g. small data, data close to a special solution, or more generally a hypothesis which involves an $\epsilon$ somewhere); or
3. One or more globally controlled quantities (such as the total energy) which are both coercive and either critical or subcritical.

(Note that the presence of (1), (2), or (3) is currently a *necessary* condition for a global regularity result, but far from *sufficient*; otherwise, papers on the global regularity problem for various nonlinear PDEs would be substantially shorter :-) . In particular, there have been many good, deep, and highly non-trivial papers recently on global regularity for Navier-Stokes, but they all assume either (1), (2) or (3) via additional hypotheses on the data or solution. For instance, in recent years we have seen good results on global regularity assuming (2), as well as good results on global regularity assuming (3); a complete bibliography of recent results is unfortunately too lengthy to be given here.)

The Navier-Stokes global regularity problem for arbitrary large smooth data lacks all of these three ingredients. Reinstating (2) is impossible without changing the statement of the problem, or adding some additional hypotheses; also, in perturbative situations the Navier-Stokes equation evolves almost linearly, while in the non-perturbative setting it behaves very nonlinearly, so there is basically no chance of a reduction of the non-perturbative case to the perturbative one unless one comes up with a highly nonlinear transform to achieve this (e.g. a naive scaling argument cannot possibly work). Thus, one is left with only three possible strategies if one wants to solve the full problem:

1. Solve the Navier-Stokes equation exactly and explicitly (or at least transform this equation exactly and explicitly to a simpler equation);
2. Discover a new globally controlled quantity which is both coercive and either critical or subcritical; or
3. Discover a new method which yields global smooth solutions even in the absence of the ingredients (1), (2), and (3) above.

For the rest of this post I refer to these strategies as “Strategy 1”, “Strategy 2”, and “Strategy 3”.

Much effort has been expended here, especially on Strategy 3, but the supercriticality of the equation presents a truly significant obstacle which already defeats all known methods. Strategy 1 is probably hopeless; the last century of experience has shown that (with the very notable exception of completely integrable systems, of which the Navier-Stokes equations are *not* an example) most nonlinear PDE, even those arising from physics, do not enjoy explicit formulae for solutions from *arbitrary* data (although it may well be the case that there are interesting exact solutions from special (e.g. symmetric) data). Strategy 2 may have a little more hope; after all, the Poincaré conjecture became solvable (though still very far from trivial) after Perelman introduced a new globally controlled quantity for Ricci flow (the *Perelman entropy*) which turned out to be both coercive and critical. (See also my exposition of this topic.) But we are still not very good at discovering new globally controlled quantities; to quote Klainerman, “the discovery of any new bound, stronger than that provided by the energy, for general solutions of *any* of our basic physical equations would have the significance of a major event” (emphasis mine).

I will return to Strategy 2 later, but let us now discuss Strategy 3. The first basic observation is that the Navier-Stokes equation, like many other of our basic model equations, obeys a *scale invariance*: specifically, given any scaling parameter $\lambda > 0$, and any smooth velocity field $u : [0,T) \times \mathbb{R}^3 \to \mathbb{R}^3$ solving the Navier-Stokes equations up to some time $T$, one can form a new velocity field $u^{(\lambda)} : [0, \lambda^2 T) \times \mathbb{R}^3 \to \mathbb{R}^3$ solving the Navier-Stokes equation up to time $\lambda^2 T$, by the formula

$$u^{(\lambda)}(t,x) := \frac{1}{\lambda}\, u\!\left(\frac{t}{\lambda^2}, \frac{x}{\lambda}\right).$$

(Strictly speaking, this scaling invariance is only present as stated in the absence of an external force, and with the non-periodic domain $\mathbb{R}^3$ rather than the periodic domain $\mathbb{R}^3 / \mathbb{Z}^3$. One can adapt the arguments here to these other settings with some minor effort, the key point being that an approximate scale invariance can play the role of a perfect scale invariance in the considerations below. The pressure field $p$ gets rescaled too, to $p^{(\lambda)}(t,x) := \frac{1}{\lambda^2}\, p(\frac{t}{\lambda^2}, \frac{x}{\lambda})$, but we will not need to study the pressure here. The viscosity $\nu$ remains unchanged.)
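One can check the scale invariance directly: writing $u^{(\lambda)}(t,x) = \frac{1}{\lambda} u(\frac{t}{\lambda^2}, \frac{x}{\lambda})$ and applying the chain rule, every term in the equation picks up the same factor of $\lambda^{-3}$ (with each right-hand side evaluated at the rescaled point $(\frac{t}{\lambda^2}, \frac{x}{\lambda})$):

```latex
\partial_t u^{(\lambda)} = \lambda^{-3}\, \partial_t u, \qquad
(u^{(\lambda)} \cdot \nabla)\, u^{(\lambda)} = \lambda^{-3}\, (u \cdot \nabla)\, u, \qquad
\nu \Delta u^{(\lambda)} = \lambda^{-3}\, \nu \Delta u, \qquad
\nabla p^{(\lambda)} = \lambda^{-3}\, \nabla p,
```

so $u^{(\lambda)}$ solves the same equation with the same viscosity $\nu$; in particular, the viscosity itself cannot be used to break the scaling symmetry.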

We shall think of the rescaling parameter $\lambda$ as being large (e.g. $\lambda = 2^n$ for some large $n$). One should then think of the transformation from $u$ to $u^{(\lambda)}$ as a kind of “magnifying glass”, taking fine-scale behaviour of $u$ and matching it with an identical (but rescaled, and slowed down) coarse-scale behaviour of $u^{(\lambda)}$. The point of this magnifying glass is that it allows us to treat both fine-scale and coarse-scale behaviour on an equal footing, by identifying both types of behaviour with something that goes on at a fixed scale (e.g. the unit scale). Observe that the scaling suggests that fine-scale behaviour should play out on much smaller time scales than coarse-scale behaviour ($T$ versus $\lambda^2 T$). Thus, for instance, if a unit-scale solution does something funny at time 1, then the rescaled fine-scale solution will exhibit something similarly funny at spatial scales $1/\lambda$ and at time $1/\lambda^2$. Blowup can occur when the solution shifts its energy into increasingly finer and finer scales, thus evolving more and more rapidly and eventually reaching a singularity in which the scale in both space and time on which the bulk of the evolution is occurring has shrunk to zero. In order to prevent blowup, therefore, we must arrest this motion of energy from coarse scales (or low frequencies) to fine scales (or high frequencies). (There are many ways in which to make these statements rigorous, for instance using Littlewood-Paley theory, which we will not discuss here, preferring instead to leave terms such as “coarse-scale” and “fine-scale” undefined.)

Now, let us take an arbitrary large-data smooth solution to Navier-Stokes, and let it evolve over a very long period of time [0,T), assuming that it stays smooth except possibly at time T. At very late times of the evolution, such as those near to the final time T, there is no reason to expect the solution to resemble the initial data any more (except in perturbative regimes, but these are not available in the arbitrary large-data case). Indeed, the only control we are likely to have on the late-time stages of the solution are those provided by globally controlled quantities of the evolution. Barring a breakthrough in Strategy 2, we only have two really useful globally controlled (i.e. bounded even for very large T) quantities:

- The *maximum kinetic energy* $\sup_{0 \le t < T} \frac{1}{2} \int_{\mathbb{R}^3} |u(t,x)|^2\,dx$; and
- The *cumulative energy dissipation* $\nu \int_0^T \int_{\mathbb{R}^3} |\nabla u(t,x)|^2\,dx\,dt$.

Indeed, the energy conservation law implies that these quantities are both bounded by the initial kinetic energy E, which could be large (we are assuming our data could be large) but is at least finite by hypothesis.
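Concretely, the bound comes from the energy identity: taking the (formal) inner product of the equation with $u$ and integrating by parts, the transport and pressure terms vanish thanks to the divergence-free condition, and one obtains, for every $0 \le t < T$,

```latex
\underbrace{\frac{1}{2} \int_{\mathbb{R}^3} |u(t,x)|^2 \, dx}_{\text{kinetic energy at time } t}
\;+\;
\underbrace{\nu \int_0^t \int_{\mathbb{R}^3} |\nabla u(s,x)|^2 \, dx \, ds}_{\text{energy dissipated up to time } t}
\;=\;
\frac{1}{2} \int_{\mathbb{R}^3} |u_0(x)|^2 \, dx \;=:\; E.
```

Since both terms on the left are non-negative, each is separately bounded by $E$, uniformly in $t$ and $T$.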

The above two quantities are *coercive*, in the sense that control of these quantities implies that the solution, even at very late times, stays in a bounded region of some function space. However, this is basically the only thing we know about the solution at late times (other than that it is smooth until time T, but this is a qualitative assumption and gives no bounds). So, unless there is a breakthrough in Strategy 2, we cannot rule out the worst-case scenario that the solution near time T is essentially an *arbitrary* smooth divergence-free vector field which is bounded both in kinetic energy and in cumulative energy dissipation by E. In particular, near time T the solution could be concentrating the bulk of its energy into fine-scale behaviour, say at some spatial scale $1/\lambda$. (Of course, cumulative energy dissipation is not a function of a single time, but is an integral over all time; let me suppress this fact for the sake of the current discussion.)

Now, let us take our magnifying glass and blow up this fine-scale behaviour by $\lambda$ to create a unit-scale solution $u^{(\lambda)}$ to Navier-Stokes. Given that the fine-scale solution could (in the worst-case scenario) be as bad as an arbitrary smooth vector field with kinetic energy and cumulative energy dissipation at most E, the rescaled unit-scale solution can be as bad as an arbitrary smooth vector field with kinetic energy and cumulative energy dissipation at most $\lambda E$, as a simple change-of-variables shows. Note that the control given by our two key quantities has worsened by a factor of $\lambda$; because of this worsening, we say that these quantities are *supercritical* – they become increasingly useless for controlling the solution as one moves to finer and finer scales. This should be contrasted with *critical* quantities (such as the energy for *two-dimensional* Navier-Stokes), which are invariant under scaling and thus control all scales equally well (or equally poorly), and *subcritical* quantities, control of which becomes increasingly powerful at fine scales (and increasingly useless at very coarse scales).
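The change of variables is worth doing once explicitly. Recalling $u^{(\lambda)}(t,x) = \frac{1}{\lambda} u(\frac{t}{\lambda^2}, \frac{x}{\lambda})$ and substituting $y = x/\lambda$ (so $dx = \lambda^3\,dy$), the kinetic energy transforms as

```latex
\frac{1}{2} \int_{\mathbb{R}^3} |u^{(\lambda)}(t,x)|^2 \, dx
= \frac{1}{2\lambda^2} \int_{\mathbb{R}^3} \Big| u\Big( \tfrac{t}{\lambda^2}, \tfrac{x}{\lambda} \Big) \Big|^2 \, dx
= \frac{\lambda^3}{2\lambda^2} \int_{\mathbb{R}^3} \Big| u\Big( \tfrac{t}{\lambda^2}, y \Big) \Big|^2 \, dy
\;\le\; \lambda E,
```

and an identical computation (using $\nabla u^{(\lambda)} = \lambda^{-2} (\nabla u)(\frac{t}{\lambda^2}, \frac{x}{\lambda})$ together with the time rescaling $ds = dt/\lambda^2$) shows that the cumulative energy dissipation is likewise multiplied by exactly $\lambda$.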

Now, suppose we know of examples of unit-scale solutions whose kinetic energy and cumulative energy dissipation are as large as $\lambda E$, but which can shift their energy to the next finer scale, e.g. a half-unit scale, in a bounded amount O(1) of time. Given the previous discussion, we cannot rule out the possibility that our rescaled solution behaves like this example. Undoing the scaling, this means that we cannot rule out the possibility that the original solution will shift its energy from spatial scale $1/\lambda$ to spatial scale $1/(2\lambda)$ in time $O(1/\lambda^2)$. If this bad scenario repeats over and over again, then convergence of geometric series shows that the solution may in fact blow up in finite time. Note that the bad scenarios do not have to happen immediately after each other (the *self-similar* blowup scenario); the solution could shift from scale $1/\lambda$ to $1/(2\lambda)$, wait for a little bit (in rescaled time) to “mix up” the system and return to an “arbitrary” (and thus potentially “worst-case”) state, and then shift to $1/(4\lambda)$, and so forth. While the cumulative energy dissipation bound can provide a little bit of a bound on how long the system can “wait” in such a “holding pattern”, it is far too weak to prevent blowup in finite time. To put it another way, we have no rigorous, deterministic way of preventing Maxwell’s demon from plaguing the solution at increasingly frequent (in absolute time) intervals, invoking various rescalings of the above scenario to nudge the energy of the solution into increasingly finer scales, until blowup is attained.
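The “convergence of geometric series” step is elementary but worth making concrete. Here is a toy sketch (with hypothetical constants, not derived from the actual equation): suppose each shift of energy from spatial scale $2^{-n}$ to $2^{-n-1}$ costs a time comparable to the parabolic time $4^{-n}$ at that scale. Then the total time consumed by infinitely many shifts is finite, so the cascade reaches scale zero at a finite time rather than "at time infinity":

```python
# Toy illustration (not a Navier-Stokes computation): if the n-th shift of
# energy from spatial scale 2^-n to 2^-(n+1) takes time c * 4^-n (the
# parabolic time scale at spatial scale 2^-n), the total time to "reach"
# scale zero is a convergent geometric series.

def time_to_reach_scale(n_shifts: int, c: float = 1.0) -> float:
    """Total time consumed by the first n_shifts dyadic energy shifts."""
    return sum(c * 4.0 ** (-n) for n in range(n_shifts))

# The partial sums approach the closed form c / (1 - 1/4) = 4c/3, so the
# hypothetical blowup time is finite.
limit = 4.0 / 3.0
print(time_to_reach_scale(5))   # already within 1% of the limit
print(limit)
```

Of course, the point of the discussion above is precisely that we cannot rule this cascade out, not that it must occur.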

Thus, in order for Strategy 3 to be successful, we basically need to rule out the scenario in which unit-scale solutions with *arbitrarily large* kinetic energy and cumulative energy dissipation shift their energy to the next finer scale. But every single analytic technique we are aware of (except for those involving *exact* solutions, i.e. Strategy 1) requires at least one bound on the size of the solution in order to have any chance at all. Basically, one needs at least one bound in order to control all nonlinear errors – and any strategy we know of which does not proceed via exact solutions will have at least one nonlinear error that needs to be controlled. The only thing we have here is a bound on the *scale* of the solution, which is not a bound in the sense that a norm of the solution is bounded; and so we are stuck.

To summarise, any argument which claims to yield global regularity for Navier-Stokes via Strategy 3 must inevitably (via the scale invariance) provide a radically new method for providing non-trivial control of nonlinear unit-scale solutions of arbitrary large size for unit time, which looks impossible without new breakthroughs on Strategy 1 or Strategy 2. (There are a couple of loopholes that one might try to exploit: one can instead try to refine the control on the “waiting time” or “amount of mixing” between each shift to the next finer scale, or try to exploit the fact that each such shift requires a certain amount of energy dissipation, but one can use similar scaling arguments to the preceding to show that these types of loopholes cannot be exploited without a new bound along the lines of Strategy 2, or some sort of argument which works for arbitrarily large data at unit scales.)

To rephrase in an even more jargon-heavy manner: the “energy surface” on which the dynamics is known to live can be quotiented by the scale invariance. After this quotienting, the solution can stray arbitrarily far from the origin even at unit scales, and so we lose all control of the solution unless we have exact control (Strategy 1) or can significantly shrink the energy surface (Strategy 2).

The above was a general critique of Strategy 3. Now I’ll turn to some known specific attempts to implement Strategy 3, and discuss where the difficulty lies with these:

1. *Using weaker or approximate notions of solution* (e.g. viscosity solutions, penalised solutions, super- or sub-solutions, etc.). This type of approach dates all the way back to Leray. It has long been known that by weakening the nonlinear portion of Navier-Stokes (e.g. taming the nonlinearity), or strengthening the linear portion (e.g. introducing hyperdissipation), or by performing a discretisation or regularisation of spatial scales, or by relaxing the notion of a “solution”, one can get global solutions to approximate Navier-Stokes equations. The hope is then to take limits and recover a smooth solution, as opposed to a mere global *weak* solution, which was already constructed by Leray for Navier-Stokes all the way back in 1933. But in order to ensure the limit is smooth, we need convergence in a strong topology. In fact, the same type of scaling arguments used before basically require that we obtain convergence in either a critical or subcritical topology. Absent a breakthrough in Strategy 2, the only types of convergence we have are in very rough – in particular, in supercritical – topologies. Attempting to upgrade such convergence to critical or subcritical topologies is the qualitative analogue of the quantitative problems discussed earlier, and ultimately faces the same problem (albeit in very different language) of trying to control unit-scale solutions of arbitrarily large size. Working in a purely qualitative setting (using limits, etc.) instead of a quantitative one (using estimates, etc.) can disguise these problems (and, unfortunately, can lead to errors if limits are manipulated carelessly), but the qualitative formalism does not magically make these problems disappear. Note that weak solutions are already known to be badly behaved for the closely related Euler equation. More generally, by recasting the problem in a sufficiently abstract formalism (e.g. formal limits of near-solutions), there are a number of ways to create an abstract object which could be considered as a kind of generalised solution, but the moment one tries to establish actual control on the regularity of this generalised solution, one will encounter all the supercriticality difficulties mentioned earlier.
2. *Iterative methods (e.g. contraction mapping principle, Nash-Moser iteration, power series, etc.) in a function space*. These methods are perturbative, and require *something* to be small: either the data has to be small, the nonlinearity has to be small, or the time of existence desired has to be small. These methods are excellent for constructing *local* solutions for large data, or global solutions for *small* data, but cannot handle global solutions for large data (running into the same problems as any other Strategy 3 approach). These approaches are also typically rather insensitive to the specific structure of the equation, which is already a major warning sign since one can easily construct (rather artificial) systems similar to Navier-Stokes for which blowup is known to occur. The optimal perturbative result is probably very close to that established by Koch-Tataru, for reasons discussed in that paper.
3. *Exploiting blowup criteria*. Perturbative theory can yield some highly non-trivial blowup criteria – that certain norms of the solution must diverge if the solution is to blow up. For instance, a celebrated result of Beale-Kato-Majda shows that the maximal vorticity must have a divergent time integral at the blowup point. However, all such blowup criteria are subcritical or critical in nature, and thus, barring a breakthrough in Strategy 2, the known globally controlled quantities cannot be used to reach a contradiction. Scaling arguments similar to those given above show that perturbative methods cannot achieve a supercritical blowup criterion.
4. *Asymptotic analysis of the blowup point(s)*. Another proposal is to rescale the solution near a blowup point and take some sort of limit, and then continue the analysis until a contradiction ensues. This type of approach is useful in many other contexts (for instance, in understanding Ricci flow). However, in order to actually extract a useful limit (in particular, one which still solves Navier-Stokes in a strong sense, and does not collapse to the trivial solution), one needs to uniformly control all rescalings of the solution – or in other words, one needs a breakthrough in Strategy 2. Another major difficulty with this approach is that blowup can occur not just at one point, but conceivably on a one-dimensional set; this is another manifestation of supercriticality.
5. *Analysis of a minimal blowup solution*. This is a strategy, initiated by Bourgain, which has recently been very successful in establishing large data global regularity for a variety of equations with a critical conserved quantity, namely to assume for contradiction that a blowup solution exists, and then extract a *minimal* blowup solution which minimises the conserved quantity. This strategy (which basically pushes the perturbative theory to its natural limit) seems set to become the standard method for dealing with large data critical equations. It has the appealing feature that there is enough compactness (or almost periodicity) in the minimal blowup solution (once one quotients out by the scaling symmetry) that one can begin to use subcritical and supercritical conservation laws and monotonicity formulae as well (see my survey on this topic). Unfortunately, as the strategy is currently understood, it does not seem to be directly applicable to a supercritical situation (unless one simply assumes that some critical norm is globally bounded), because it is impossible, in view of the scale invariance, to minimise a non-scale-invariant quantity.
6. *Abstract approaches* (avoiding the use of properties specific to the Navier-Stokes equation). At its best, abstraction can efficiently organise and capture the key difficulties of a problem, placing the problem in a framework which allows for a direct and natural resolution of these difficulties without being distracted by irrelevant concrete details. (Kato’s semigroup method is a good example of this in nonlinear PDE; regrettably for this discussion, it is limited to subcritical situations.) At its worst, abstraction conceals the difficulty within some subtle notation or concept (e.g. in various types of convergence to a limit), thus incurring the risk that the difficulty is “magically” avoided by an inconspicuous error in the abstract manipulations. An abstract approach which manages to breezily ignore the supercritical nature of the problem thus looks very suspicious. More substantively, there are many equations which enjoy a coercive conservation law yet still can exhibit finite time blowup (e.g. the mass-critical focusing NLS equation); an abstract approach thus would have to exploit some subtle feature of Navier-Stokes which is not present in all the examples in which blowup is known to be possible. Such a feature is unlikely to be discovered abstractly before it is first discovered concretely; the field of PDE has proven to be the type of mathematics where progress generally starts in the concrete and then flows to the abstract, rather than vice versa.
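To make the Beale-Kato-Majda criterion mentioned above concrete: if a smooth solution on $[0, T_*)$ cannot be continued smoothly past $T_*$, then the vorticity $\omega := \nabla \times u$ must satisfy

```latex
\int_0^{T_*} \| \omega(t) \|_{L^\infty(\mathbb{R}^3)} \, dt = \infty.
```

Contrapositively, a bound on this time integral would rule out blowup; the difficulty is that, by the scaling heuristics above, this quantity lives at the critical level, while the energy-type quantities we actually control are supercritical and therefore too weak to bound it.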

If we abandon Strategy 1 and Strategy 3, we are thus left with Strategy 2 – discovering new bounds, stronger than those provided by the (supercritical) energy. This is not *a priori* impossible, but there is a huge gap between simply wishing for a new bound and actually discovering and then rigorously establishing one. Simply plugging the existing energy bounds into the Navier-Stokes equation and seeing what comes out will provide a few more bounds, but they will all be supercritical, as a scaling argument quickly reveals. The only other way we know of to create global non-perturbative deterministic bounds is to discover a new conserved or monotone quantity. In the past, when such quantities have been discovered, they have always been connected either to geometry (symplectic, Riemannian, complex, etc.), to physics, or to some consistently favourable (defocusing) sign in the nonlinearity (or in various “curvatures” in the system). There appears to be very little usable geometry in the equation; on the one hand, the Euclidean structure enters the equation via the diffusive term and by the divergence-free nature of the vector field, but the nonlinearity is instead describing transport by the velocity vector field, which is basically just an arbitrary volume-preserving infinitesimal diffeomorphism (and in particular does not respect the Euclidean structure at all). One can try to quotient out by this diffeomorphism (i.e. work in material coordinates) but there are very few geometric invariants left to play with when one does so. (In the case of the Euler equations, the vorticity vector field is preserved modulo this diffeomorphism, as observed for instance by Li, but this invariant is very far from coercive, being almost purely topological in nature.)
The Navier-Stokes equation, being a system rather than a scalar equation, also appears to have almost no favourable sign properties, in particular ruling out the type of bounds which the maximum principle or similar comparison principles can give. This leaves physics, but apart from the energy, it is not clear if there are any physical quantities of fluids which are *deterministically* monotone. (Things look better on the stochastic level, in which the laws of thermodynamics might play a role, but the Navier-Stokes problem, as defined by the Clay Institute, is deterministic, and so we have Maxwell’s demon to contend with.) It would of course be fantastic to obtain a fourth source of non-perturbative controlled quantities, not arising from geometry, physics, or favourable signs, but this looks like something of a long shot at present. Indeed, given the turbulent, unstable, and chaotic nature of Navier-Stokes, it is quite conceivable that in fact no reasonable globally controlled quantities exist beyond those which arise from the energy.

Of course, given how hard it is to show global regularity, one might instead try to establish finite time blowup (this is also acceptable for the Millennium prize). Unfortunately, even though the Navier-Stokes equation is known to be very unstable, it is not clear at all how to pass from this to a rigorous demonstration of a blowup solution. All the rigorous finite time blowup results (as opposed to mere instability results) that I am aware of rely on one or more of the following ingredients:

1. Exact blowup solutions (or at least an exact transformation to a significantly simpler PDE or ODE, for which blowup can be established);
2. An ansatz for a blowup solution (or approximate solution), combined with some nonlinear stability theory for that ansatz;
3. A comparison principle argument, dominating the solution by another object which blows up in finite time, taking the solution with it; or
4. An indirect argument, constructing a functional of the solution which must attain an impossible value in finite time (e.g. a quantity which is manifestly non-negative for smooth solutions, but must become negative in finite time).

It may well be that there is some exotic symmetry reduction which gives (1), but no-one has located any good exactly solvable special case of Navier-Stokes (in fact, those which have been found are known to have global smooth solutions). (2) is problematic for two reasons: firstly, we do not have a good ansatz for a blowup solution, but perhaps more importantly it seems hopeless to establish a stability theory for any such ansatz, as this problem is essentially a more difficult version of the global regularity problem, and in particular is subject to the main difficulty, namely controlling the highly nonlinear behaviour at fine scales. (One of the ironies in pursuing method (2) is that in order to establish rigorous *blowup* in some sense, one must first establish rigorous *stability* in some other (renormalised) sense.) Method (3) would require a comparison principle, which as noted before appears to be absent for the non-scalar Navier-Stokes equations. Method (4) suffers from the same problem, ultimately coming back to the “Strategy 2” problem that we have virtually no globally monotone quantities in this system to play with (other than energy monotonicity, which clearly looks insufficient by itself). Obtaining a new type of mechanism to force blowup other than (1)-(4) above would be quite revolutionary, not just for Navier-Stokes; but I am unaware of even any proposals in these directions, though perhaps topological methods might have some effectiveness.

So, after all this negativity, do I have any positive suggestions for how to solve this problem? My opinion is that Strategy 1 is impossible, and Strategy 2 would require either some exceptionally good intuition from physics, or else an incredible stroke of luck. Which leaves Strategy 3 (and indeed, I think one of the main reasons why the Navier-Stokes problem is interesting is that it *forces* us to create a Strategy 3 technique). Given how difficult this strategy seems to be, as discussed above, I only have some extremely tentative and speculative thoughts in this direction, all of which I would classify as “blue-sky” long shots:

*Work with ensembles of data, rather than a single initial datum*. All of our current theory for deterministic evolution equations deals only with a single solution from a single initial datum. It may be more effective to work with parameterised families of data and solutions, or perhaps probability measures (e.g. Gibbs measures or other invariant measures). One obvious partial result to shoot for is to try to establish global regularity for *generic* large data rather than *all* large data; in other words, acknowledge that Maxwell's demon might exist, but show that the probability of it actually intervening is very small. The problem is that we have virtually no tools for dealing with generic (average-case) data other than by treating all (worst-case) data; the enemy is that the Navier-Stokes flow itself might have some perverse entropy-reducing property which somehow makes the average case drift towards (or at least recur near) the worst case over long periods of time. This is incredibly unlikely to be the truth, but we have no tools to prevent it from happening at present.

*Work with a much simpler (but still supercritical) toy model*. The Navier-Stokes model is parabolic, which is nice, but is complicated in many other ways, being relatively high-dimensional and also non-scalar in nature. It may make sense to work with other, simplified models which still contain the key difficulty that the only globally controlled quantities are supercritical. Examples include the Katz-Pavlovic dyadic model for the Euler equations (for which blowup can be demonstrated by a monotonicity argument; see this survey for more details), or the spherically symmetric defocusing supercritical nonlinear wave equation.

*Develop non-perturbative tools to control deterministic non-integrable dynamical systems*. Throughout this post we have been discussing PDEs, but actually there are similar issues arising in the nominally simpler context of finite-dimensional dynamical systems (ODEs). Except in perturbative contexts (such as the neighbourhood of a fixed point or invariant torus), the long-time evolution of a dynamical system for deterministic data is still largely only controllable by the classical tools of exact solutions, conservation laws and monotonicity formulae; a discovery of a new and effective tool for this purpose would be a major breakthrough. One natural place to start is to better understand the long-time, non-perturbative dynamics of the classical three-body problem, for which there are still fundamental unsolved questions.

*Establish really good bounds for critical or nearly-critical problems*. Recently, I showed that having a very good bound for a critical equation essentially implies that one also has a global regularity result for a slightly supercritical equation. The idea is to use a monotonicity formula which does weaken very slightly as one passes to finer and finer scales, but such that each such passage to a finer scale costs a significant amount of monotonicity; since there is only a bounded amount of monotonicity to go around, it turns out that the latter effect just barely manages to overcome the former in my equation to recover global regularity (though by doing so, the bounds worsen from polynomial in the critical case to double exponential in my logarithmically supercritical case). I severely doubt that my method can push to non-logarithmically supercritical equations, but it does illustrate that having very strong bounds at the critical level may lead to some modest progress on the problem.

*Try a topological method*. This is a special case of (1). It may well be that a primarily topological argument may be used either to construct solutions, or to establish blowup; there are some precedents for this type of construction in elliptic theory. Such methods are very global by nature, and thus not restricted to perturbative or nearly-linear regimes. However, there is no obvious topology here (except possibly for that generated by the vortex filaments) and as far as I know, there is not even a "proof-of-concept" version of this idea for any evolution equation. So this is really more of a wish than any sort of concrete strategy.

*Understand pseudorandomness*. This is an incredibly vague statement; but part of the difficulty with this problem, which also exists in one form or another in many other famous problems (e.g. the Riemann hypothesis, $P \neq NP$, the twin prime and Goldbach conjectures, normality of the digits of $\pi$, the Collatz conjecture, etc.) is that we expect any sufficiently complex (but deterministic) dynamical system to behave "chaotically" or "pseudorandomly", but we still have very few tools for actually making this intuition precise, especially if one is considering deterministic initial data rather than generic data. Understanding pseudorandomness in other contexts, even dramatically different ones, may indirectly shed some insight on the turbulent behaviour of Navier-Stokes.
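The supercriticality obstruction invoked throughout this post can be made quantitative with a standard one-line computation, using the parabolic rescaling $u^{(\lambda)}(t,x) := \frac{1}{\lambda} u(\frac{t}{\lambda^2}, \frac{x}{\lambda})$:

```latex
% Scaling of the kinetic energy, the main coercive globally controlled
% quantity, under the Navier-Stokes scaling symmetry in d spatial dimensions:
E(u)(t) := \frac{1}{2}\int_{\mathbb{R}^d} |u(t,x)|^2\,dx
\quad\Longrightarrow\quad
E\bigl(u^{(\lambda)}\bigr)(t)
  = \frac{1}{2}\int_{\mathbb{R}^d} \frac{1}{\lambda^2}
      \Bigl|u\Bigl(\frac{t}{\lambda^2},\frac{x}{\lambda}\Bigr)\Bigr|^2\,dx
  = \lambda^{d-2}\, E(u)\Bigl(\frac{t}{\lambda^2}\Bigr).
```

In $d = 2$ the exponent vanishes and the energy is scale-invariant (critical), consistent with the global regularity known there; in $d = 3$ the factor $\lambda$ means a fine-scale ($\lambda \ll 1$) configuration carries very little energy, so the energy bound controls fine-scale behaviour only very weakly: it is supercritical.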

In conclusion, while it is good to occasionally have a crack at impossible problems, just to try one's luck, I would personally spend much more of my time on other, more tractable PDE problems than the Clay prize problem, though one should certainly keep that problem in mind if, in the course of working on other problems, one indeed does stumble upon something that smells like a breakthrough in Strategy 1, 2, or 3 above. (In particular, there are many other serious and interesting questions in fluid equations that are not anywhere near as difficult as global regularity for Navier-Stokes, but still highly worthwhile to resolve.)

## 445 comments


25 February, 2019 at 1:18 am

R. Kh.: As noticed by M. Struve: O. A. Ladyzhenskaya would not have agreed with this formulation of the problem (the corresponding regularity question) for the sixth Millennium Problem. For O. A., the main problem is existence and uniqueness, obviously.

R. Kh. Z.

25 February, 2019 at 5:37 am

victorivrii: I am really puzzled by Mr Zeytounian's comments (as far as I can decrypt his writings).

1) Indeed, Ch. Fefferman made an omission in the official description https://www.claymath.org/sites/default/files/navierstokes.pdf , not mentioning that in the periodic framework not only the gradient of the pressure but the pressure itself must be periodic.

However, this hardly warrants the "very badly" statement. Furthermore, this omission was corrected on the 6th page of the official description. (Well, it was a patch; the original description contained 5 pages, with the 5th page having only a few lines, and the 6th page was sloppily added 6½ years later: from its appearance it follows that it was simply inserted into the pdf file, and looking at the shape of the symbols one can see that its author was a German mathematician.)

2) This problem was not resolved as far as I know.

3) For Olga Ladyzhenskaya the main question was definitely existence and uniqueness; both are in fact known, but in different classes. If the conjecture of global regularity holds, then existence and uniqueness would be established in the class of smooth functions.

However, if the conjecture of global regularity fails, the next problem would indeed be to find a class in which both existence and uniqueness hold.

25 February, 2019 at 6:36 am

victorivrii: A somewhat lame analogy: for a certain equation, uniqueness holds in the class of continuous functions, and global existence holds in the class of bounded functions. However, in the former class there is no global existence, and in the latter there is no uniqueness.

It was established 50+ years ago (if I am not mistaken) that in the class of bounded functions there is both global existence and uniqueness, if a dissipativity condition is added to the equation.

8 May, 2019 at 1:23 pm

Bob McCann: Physics perspective: Navier-Stokes is a model that applies to bulk fluids where the scale size is large enough that individual particle motion is not relevant. Nearing that scale it becomes necessary to switch models to something including distributions and collisions, e.g., the Vlasov or Boltzmann equations. As an exercise in pure math, the Millennium problem is an excellent challenge for extending the reach of our analysis tool set. Here is an idea I have used in Plasma Physics. If we can bound the problem between two simpler problems that are better behaved, it may be possible to constrain the more difficult behavior. For instance, if we use a Fourier expansion with index "n" and the behavior of the first few terms is well behaved but high order terms are too difficult, then expanding in (1/n) and moving to the limit of n->infinity sometimes results in a well behaved equation that bounds the small scale behavior. Using analytic matching, analytic continuation, or the Wiener-Hopf technique (depending on the character of the small and large scale equations) can then provide tighter bounds on the more difficult intermediate scale behavior.
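As a toy illustration of this bracketing idea in a vastly simpler setting (this sketch is mine, not part of the plasma computation described above): the integral test squeezes an awkward tail sum between two explicit, well-behaved expressions.

```python
# Hard-to-evaluate quantity: the tail T(N) = sum_{n > N} 1/n^2.
def tail(N, cutoff=1_000_000):
    """Truncated brute-force evaluation of sum_{n > N} 1/n^2."""
    return sum(1.0 / n**2 for n in range(N + 1, N + cutoff))

# Two simpler problems that bracket it (integral test):
#   1/(N+1) = int_{N+1}^inf dx/x^2  <=  T(N)  <=  int_N^inf dx/x^2 = 1/N.
N = 100
lower, upper = 1.0 / (N + 1), 1.0 / N
t_val = tail(N)
assert lower <= t_val <= upper
```

The point of the analogy: the difficult quantity is never computed in closed form; it is merely constrained from both sides by problems that are easy to solve exactly.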

16 May, 2019 at 5:09 am

ferhatmohamed Ferhat: After 10 years of effort, I think this puzzle requires developing new variational methods.

17 July, 2020 at 3:52 am

Colin McLarty: Terence: just a question about terminology, for use in a talk to historians and philosophers of math. I think that when you say "exact and explicit" you mean something like a closed-form solution (where "closed form" is also not a precisely defined expression). Are you happy with using "exact solution" for a solution which is in no sense closed form?

17 July, 2020 at 5:19 am

Anonymous: I'm not Terence, but I'd expect it to be possible to make a formal definition in terms of computable reals. E.g. an infinite series solution that converges to within $\epsilon$ after $f(\epsilon)$ terms, where $f$ is computable using an explicitly stated algorithm, seems explicit to me.
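A minimal sketch of that notion of explicitness (the series and the tail bound are standard; the function and tolerance names are mine): the partial sums of $\sum_k 1/k!$ converge to $e$, and the number of terms needed for accuracy $\varepsilon$ is an explicitly computable function of $\varepsilon$.

```python
from math import factorial, e

def exp1_to_within(eps):
    """Approximate e = sum_{k>=0} 1/k! with an explicit stopping rule.

    The tail after the first n+1 terms is at most 2/(n+1)!: a crude but
    explicitly computable bound, so the number of terms needed is a
    computable function of eps (the "explicit modulus of convergence").
    """
    n = 0
    while 2.0 / factorial(n + 1) >= eps:  # explicit, computable stopping rule
        n += 1
    return sum(1.0 / factorial(k) for k in range(n + 1))

approx = exp1_to_within(1e-9)
assert abs(approx - e) < 1e-9
```

The point is that the algorithm, not just the limit, is fully specified in advance, which is what makes the real number "explicit" in the computable-analysis sense.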

17 July, 2020 at 6:25 pm

Terence Tao: Yes, in this text I mean solutions that are both exact (not approximate) and explicit (describable in terms of operations simpler than "solve the Navier-Stokes equations"), with the point being that it should be an easier task to determine the regularity or singularity of such explicitly described solutions than arbitrary (exact) solutions to the Navier-Stokes equations. On the other hand, explicit approximate solutions are typically not, by themselves, enough to settle these sorts of questions: just because an explicit approximate solution exhibits an explicit singularity at a point, it is often not feasible to obtain a sufficiently strong perturbative theory to establish that there is also an exact solution close to the approximate solution that also exhibits a singularity. (Perturbation theory tends to only work well when the functions involved are relatively small, but singular solutions by their nature tend to be quite large in various norms. One can sometimes get around this by working in multiple norms simultaneously, but it can be quite delicate.)

20 August, 2020 at 1:32 am

Christian: Dear Prof. Tao,

I wonder if it's possible to explain your words "the behaviour of three-dimensional Navier-Stokes equations at fine scales is much more nonlinear (and hence unstable) than at coarse scales" with estimates using $u^{(\lambda)}$ and $u$.

Many thanks for your effort.

4 December, 2020 at 6:12 pm

Daniel: A proposed solution to the NS problem is at https://vixra.org/pdf/1911.0343v8.pdf

27 December, 2020 at 1:17 am

Anonymous: I recently came across this article, describing a fluid mechanics experiment (i.e. with actual water tanks and stuff) that seems to have uncovered some previously unknown conservation laws in near-turbulence that might extend into the turbulent regime. It probably doesn't reach the NS regularity question, but I wonder if it's been studied mathematically and is considered mathematically significant:

https://www.quantamagazine.org/an-unexpected-twist-lights-up-the-secrets-of-turbulence-20200903/

That’s a popular-level article but it links to the research article that was in Science Magazine a while earlier.

17 February, 2021 at 2:09 am

Robert: If the Navier-Stokes problem cannot be solved by arguably the greatest mathematical mind on Earth, could that mean that the premise, i.e., the formulation of the Navier-Stokes equation, is in itself incorrect?

Can anyone tell me whether proving that the premise is incorrect would be considered sufficient as a counter example?

In Professor Tao’s article he speaks of:

“Discover a new globally controlled quantity which is both coercive and either critical or subcritical”

If this was discovered, then this would render the premise incorrect since this new globally controlled quantity is not in the equation. Am I right or wrong here? I am just a layman in the field of mathematics and physics but I like to solve problems.

17 February, 2021 at 10:52 am

Anonymous: 1. It is a valid, interesting problem regardless of whether Terry (or any other expert) can solve it.

2. A hypothetical new globally controlled quantity would already be “in the equation,” just not yet identified by mathematicians.

25 April, 2021 at 7:51 am

Polihronov: Dear Prof. Tao,

Thank you for a very informative blog, I have read it many times!

I wanted to ask a question about the viscosity and NSE scale-invariance. In the blog you discuss one of the strategies and then mention that the pressure gets rescaled too; you then say that the viscosity remains unchanged (under rescaling).

Would you be able to share your thoughts on the meaning of this? In the Millennium problem, how is the viscosity set to behave under rescaling?

My attempt to understand it:

In the dimensionalized NSE, the viscosity is a fixed constant; however, since it has dimensions/units, it must be rescaled when the NSE is rescaled, because the units are rescaled.

Coming to the nondimensionalized NSE: here the viscosity is replaced by the Reynolds number. Since in the Millennium problem the viscosity is a fixed parameter, it would then follow that the Reynolds number is a fixed number as well.

About the scalability of the Reynolds number:

It appears that one can look upon the nondimensionalized NSE on its own merits, and, written in the nondimensionalized variables, it can be thought of as having scale-invariance in these variables. Leray's self-similar solutions seem to suggest this as well. They are written for the nondimensionalized NSE; their name, "self-similar", refers to rescaling. Their rescaling always yields the proper rescaling of the velocity and pressure. In other words, the nondimensionalized NSE must be scalable. Is then the Reynolds number scalable, just like the viscosity is scalable?

Would you be able to comment on this? Would there be a mathematical reason to set the viscosity or the Reynolds number as non-scalable? I'm wondering, since there is no unified theory of viscosity, and the molecular origins of viscosity are unclear.

26 April, 2021 at 9:23 am

Terence Tao: Mathematically, there is no distinction between a "rescalable" and a "non-rescalable" quantity (or a "dimensional" and a "dimensionless" quantity), as long as we are describing quantities such as the position coordinates $x$, the time $t$, and the viscosity $\nu$ in terms of real numbers. (If one wants to describe such quantities using torsors instead, then there are distinctions of this type, but this is not usually how the mathematical equations are formulated.)

In particular, one is free to apply any (smooth, invertible) change of variables one wants to the fields to transform the Navier-Stokes equation to another equation. Sometimes when one does so, the transformed equation is the same as the original equation (with no change in the viscosity parameter); in other cases, it transforms to a Navier-Stokes equation with a different viscosity parameter; and in yet other cases it transforms to a completely different equation which is not of Navier-Stokes type. All three such types of transformations are mathematically useful, though in any specific context one such transformation may be more useful than another.

More specifically: for any $\lambda > 0$, the change of variables $u \mapsto u^{(\lambda)}$, $p \mapsto p^{(\lambda)}$ defined by

$\displaystyle u^{(\lambda)}(t,x) := \frac{1}{\lambda}\, u\!\left(\frac{t}{\lambda^2}, \frac{x}{\lambda}\right); \qquad p^{(\lambda)}(t,x) := \frac{1}{\lambda^2}\, p\!\left(\frac{t}{\lambda^2}, \frac{x}{\lambda}\right)$

transforms solutions of the Navier-Stokes equation at a given viscosity $\nu$ to solutions of the Navier-Stokes equation with the same viscosity $\nu$. This gives a scale-invariance to the set of solutions of such an equation (with fixed viscosity $\nu$) which is very useful for understanding the structure of the space of such solutions, as is discussed in this post.
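This symmetry can be machine-checked on an explicit solution (an illustration of mine, not part of the original comment): the sketch below takes the classical 2D Taylor-Green vortex, an exact Navier-Stokes solution, and verifies with SymPy that its parabolic rescaling again solves the equations with the same viscosity.

```python
import sympy as sp

# Symbols: space, time, viscosity, and the scaling parameter lambda.
x, y, t, nu, lam = sp.symbols('x y t nu lam', positive=True)

def taylor_green(tt, xx, yy):
    """2D Taylor-Green vortex, a classical exact Navier-Stokes solution."""
    u = sp.cos(xx) * sp.sin(yy) * sp.exp(-2 * nu * tt)
    v = -sp.sin(xx) * sp.cos(yy) * sp.exp(-2 * nu * tt)
    p = -sp.Rational(1, 4) * (sp.cos(2 * xx) + sp.cos(2 * yy)) * sp.exp(-4 * nu * tt)
    return u, v, p

def ns_residual(u, v, p):
    """Residuals of incompressible NSE: momentum in x, momentum in y, divergence."""
    ru = (sp.diff(u, t) + u * sp.diff(u, x) + v * sp.diff(u, y)
          + sp.diff(p, x) - nu * (sp.diff(u, x, 2) + sp.diff(u, y, 2)))
    rv = (sp.diff(v, t) + u * sp.diff(v, x) + v * sp.diff(v, y)
          + sp.diff(p, y) - nu * (sp.diff(v, x, 2) + sp.diff(v, y, 2)))
    div = sp.diff(u, x) + sp.diff(v, y)
    return [sp.simplify(r) for r in (ru, rv, div)]

# The original solution solves NSE with viscosity nu ...
u0, v0, p0 = taylor_green(t, x, y)
assert ns_residual(u0, v0, p0) == [0, 0, 0]

# ... and so does its rescaling u^(lambda), p^(lambda), with the SAME nu.
U, V, P = taylor_green(t / lam**2, x / lam, y / lam)
assert ns_residual(U / lam, V / lam, P / lam**2) == [0, 0, 0]
```

The rescaled fields are a genuinely different solution (different initial data concentrated at scale $\lambda$), which is exactly the point made in the surrounding discussion.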

More generally, though, one can consider dimensional scalings $u \mapsto u^{(\lambda,\tau)}$, $p \mapsto p^{(\lambda,\tau)}$ for any $\lambda, \tau > 0$, defined by

$\displaystyle u^{(\lambda,\tau)}(t,x) := \frac{\lambda}{\tau}\, u\!\left(\frac{t}{\tau}, \frac{x}{\lambda}\right); \qquad p^{(\lambda,\tau)}(t,x) := \frac{\lambda^2}{\tau^2}\, p\!\left(\frac{t}{\tau}, \frac{x}{\lambda}\right),$

which transform a solution to Navier-Stokes at a given viscosity $\nu$ to a solution to Navier-Stokes at viscosity $\frac{\lambda^2}{\tau}\,\nu$. The previous scaling is the special case in which $\tau = \lambda^2$. This more general scaling is useful when one wants to analyse a specific time scale $\tau$ and a specific length scale $\lambda$, or if one wishes to investigate limits in which the viscosity tends to zero or to infinity. It is not used in this post, but this does not mean that it is not useful in other contexts.
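One can check the viscosity rescaling by direct substitution; the computation below uses one convenient (assumed) parameterization of the dimensional scaling by a length scale $\lambda$ and a time scale $\tau$:

```latex
u^{(\lambda,\tau)}(t,x) := \frac{\lambda}{\tau}\, u\!\left(\frac{t}{\tau}, \frac{x}{\lambda}\right),
\qquad
p^{(\lambda,\tau)}(t,x) := \frac{\lambda^2}{\tau^2}\, p\!\left(\frac{t}{\tau}, \frac{x}{\lambda}\right).
% Each term of the momentum equation picks up a scaling factor:
\partial_t u^{(\lambda,\tau)},\;
(u^{(\lambda,\tau)}\cdot\nabla)u^{(\lambda,\tau)},\;
\nabla p^{(\lambda,\tau)} \;\sim\; \frac{\lambda}{\tau^2},
\qquad
\Delta u^{(\lambda,\tau)} \;\sim\; \frac{1}{\lambda\tau},
% so if u_t + (u . grad)u = -grad p + nu Laplacian u, then
\partial_t u^{(\lambda,\tau)} + (u^{(\lambda,\tau)}\cdot\nabla)u^{(\lambda,\tau)}
  = -\nabla p^{(\lambda,\tau)} + \frac{\lambda^2}{\tau}\,\nu\,\Delta u^{(\lambda,\tau)}.
```

That is, the rescaled fields solve Navier-Stokes with viscosity $\frac{\lambda^2}{\tau}\,\nu$, and the choice $\tau = \lambda^2$ is precisely the one that leaves $\nu$ unchanged.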

As an example of the third type of transformation, consider the transformation $(u,p) \mapsto \omega$, where

$\displaystyle \omega := \nabla \times u$

is the vorticity field. This transforms the usual formulation of Navier-Stokes into the vorticity-stream formulation (consisting of the vorticity equation and the Biot-Savart law), in which the pressure is absent. This is a useful formulation for many purposes (particularly in two dimensions, where the vorticity becomes a scalar transported by the velocity field).
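For reference, a standard sketch of that formulation in three dimensions (the two-dimensional case simply drops the vortex-stretching term $(\omega\cdot\nabla)u$):

```latex
% Vorticity equation (take the curl of the momentum equation; pressure drops out):
\partial_t \omega + (u\cdot\nabla)\omega = (\omega\cdot\nabla)u + \nu\,\Delta\omega,
\qquad \omega := \nabla \times u,
% with the velocity recovered from the vorticity via the Biot-Savart law:
u(x) = \frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{\omega(y)\times(x-y)}{|x-y|^{3}}\,dy.
```

The pressure never appears: it is eliminated by the curl, and the system closes in $\omega$ alone.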

27 April, 2021 at 7:22 am

Polihronov: Dear Prof. Tao,

Thank you for your reply! Yes, then the solutions

$u^{(\lambda)}(t,x):=\frac{1}{\lambda}u(\frac{t}{\lambda^2},\frac{x}{\lambda})$;

$p^{(\lambda)}(t,x):=\frac{1}{\lambda^2}p(\frac{t}{\lambda^2},\frac{x}{\lambda})$

would fall into the scenario of the energy-supercritical NSE, as you said above.

These solutions are scalable; they are well known as Leray's self-similar solutions. One arrives at them when the viscosity $\nu$ is not changed by rescaling, as you mentioned. Their form is

$u(t,x):=\frac{1}{\sqrt{t}}\,U(\frac{x}{\sqrt{t}})$;

$p(t,x):=\frac{1}{t}\,P(\frac{x}{\sqrt{t}})$

and they are known not to be strong solutions (Cannone and Planchon, 1996). It would not then be feasible to use Beale-Kato-Majda to study them, since they are not strong.

Also, the Millennium problem requires smoothness of solutions at $t=0$. Leray's self-similar solutions are not defined at $t=0$ and are divergent at the origin (or along the coordinate axes).

To sum up: if the viscosity $\nu$ is not changed by rescaling, we arrive at an energy-supercritical NSE with scalable solutions that are not strong solutions, and which diverge at the initial moment and along the coordinate axes. Such an approach to showing NSE regularity would be bound not to succeed.

If the viscosity $\nu$ is allowed to scale, all of the above issues are resolved. Namely, smoothness at $t=0$ and along the axes, and criticality, are no longer a problem. Beale-Kato-Majda shows strong solutions at $T=\infty$. Would this be an acceptable approach to the Millennium problem?

3 May, 2021 at 7:06 am

Polihronov: Dear Prof. Tao,

If I could add this detail –

The solutions $u^{(\lambda)}$ and $p^{(\lambda)}$ form a group of scaling transformations. It is a Lie group, and the solutions have important invariant properties. From these properties, one can derive their functional form

and

where the two profiles are arbitrary functions. These are derived under the assumption that the viscosity is unchanged under scaling. One can see that the resulting solutions are not defined at $t=0$ and are not smooth functions.

It is true that supercriticality is an obstruction to proving NSE regularity, although supercriticality comes into view at a deeper level of analysis. It seems that one has more urgent issues coming from the NSE scaling transformation itself. What would then be the justification for analyzing NSE regularity through $u^{(\lambda)}$, $p^{(\lambda)}$? How could they be useful for understanding the structure of the space of NSE solutions?

4 May, 2021 at 2:16 pm

Terence Tao: The rescaled solutions $u^{(\lambda)}$, $p^{(\lambda)}$ are not required to equal the original solutions $u$, $p$; typically, they are a different solution to the Navier-Stokes equations, with a different set of initial data (a rescaling of the original set). So they do not need to take the scale-invariant form you describe.

5 May, 2021 at 8:51 am

Polihronov: The Lie scaling transformation preserves the NSE; if the viscosity $\nu$ is to remain unchanged, the transformation is

where denotes the transformed entity .

Then

.

These transform as expected; they are rescaled versions of the original solutions.

10 May, 2021 at 12:42 pm

Polihronov: I would add this as well:

The questions arise due to the complexity of Lie’s invariant theory. When speaking of invariants, the assumption is

,

while this is not the case; the correct invariance is

.

Understanding differential invariants in detail is crucial, as they are highly relevant to the NSE.

This is true specifically of the functional form of the various differential invariants.

Charles L. Bouton, a Harvard math professor, has written an excellent article on this subject:

Invariants of the General Linear Differential Equation and Their Relation to the Theory of Continuous Groups

American Journal of Mathematics, Vol. 21, No. 1 (Jan., 1899), pp. 25-84

http://www.jstor.org/stable/2369876

This article settles the matter, and also shows the importance of the viscosity in NSE regularity; this importance goes even further, posing the question of how the viscosity behaves at various scales in fluids.

8 July, 2021 at 5:01 am

J. Yong: There is a paper by A. Tsionskiy and M. Tsionskiy with the title "Existence, uniqueness and smoothness of a solution for 3D Navier-Stokes equations with any smooth initial velocity. A priori estimate of this solution", published in Adv. Theor. Math. Phys. 19 (2015), 701-743. Is there any comment on that? Does it mean that the problem has been solved? Or does the paper have some problems?

8 July, 2021 at 8:44 am

Anonymous: It's clear at a glance that this paper has basically no chance of being correct. Surely no expert has wasted their time checking it.