It is always dangerous to venture an opinion as to why a problem is hard (cf. Clarke’s first law), but I’m going to stick my neck out on this one, because (a) it seems that there has been a lot of effort expended on this problem recently, sometimes perhaps without full awareness of the main difficulties, and (b) I would love to be proved wrong on this opinion :-) .

The global regularity problem for Navier-Stokes is of course a Clay Millennium Prize problem and it would be redundant to describe it again here. I will note, however, that it asks for existence of global smooth solutions to a Cauchy problem for a nonlinear PDE. There are countless other global regularity results of this type for many (but certainly not all) other nonlinear PDE; for instance, global regularity is known for Navier-Stokes in two spatial dimensions rather than three (this result essentially dates all the way back to Leray’s thesis in 1933!). Why is the three-dimensional Navier-Stokes global regularity problem considered so hard, when global regularity for so many other equations is easy, or at least achievable?

(For this post, I am only considering the global regularity problem for Navier-Stokes, from a purely mathematical viewpoint, and in the precise formulation given by the Clay Institute; I will not discuss at all the question as to what implications a rigorous solution (either positive or negative) to this problem would have for physics, computational fluid dynamics, or other disciplines, as these are beyond my area of expertise. But if anyone qualified in these fields wants to make a comment along these lines, by all means do so.)

The standard response to this question is *turbulence* – the behaviour of three-dimensional Navier-Stokes equations at fine scales is much more nonlinear (and hence unstable) than at coarse scales. I would phrase the obstruction slightly differently, as *supercriticality*. Or more precisely, all of the globally controlled quantities for Navier-Stokes evolution which we are aware of (and we are not aware of very many) are either *supercritical* with respect to scaling, which means that they are much weaker at controlling fine-scale behaviour than controlling coarse-scale behaviour, or they are *non-coercive*, which means that they do not really control the solution at all, either at coarse scales or at fine. (I’ll define these terms more precisely later.) At present, all known methods for obtaining global smooth solutions to a (deterministic) nonlinear PDE Cauchy problem require either

- Exact and explicit solutions (or at least an exact, explicit transformation to a significantly simpler PDE or ODE);
- Perturbative hypotheses (e.g. small data, data close to a special solution, or more generally a hypothesis which involves an $\epsilon$ somewhere); or
- One or more globally controlled quantities (such as the total energy) which are both coercive and either critical or subcritical.

(Note that the presence of (1), (2), or (3) are currently *necessary* conditions for a global regularity result, but far from *sufficient*; otherwise, papers on the global regularity problem for various nonlinear PDEs would be substantially shorter :-) . In particular, there have been many good, deep, and highly non-trivial papers recently on global regularity for Navier-Stokes, but they all assume either (1), (2) or (3) via additional hypotheses on the data or solution. For instance, in recent years we have seen good results on global regularity assuming (2), as well as good results on global regularity assuming (3); a complete bibliography of recent results is unfortunately too lengthy to be given here.)

The Navier-Stokes global regularity problem for arbitrary large smooth data lacks all of these three ingredients. Reinstating (2) is impossible without changing the statement of the problem, or adding some additional hypotheses; also, in perturbative situations the Navier-Stokes equation evolves almost linearly, while in the non-perturbative setting it behaves very nonlinearly, so there is basically no chance of a reduction of the non-perturbative case to the perturbative one unless one comes up with a highly nonlinear transform to achieve this (e.g. a naive scaling argument cannot possibly work). Thus, one is left with only three possible strategies if one wants to solve the full problem:

- Solve the Navier-Stokes equation exactly and explicitly (or at least transform this equation exactly and explicitly to a simpler equation);
- Discover a new globally controlled quantity which is both coercive and either critical or subcritical; or
- Discover a new method which yields global smooth solutions even in the absence of the ingredients (1), (2), and (3) above.

For the rest of this post I refer to these strategies as “Strategy 1”, “Strategy 2”, and “Strategy 3”.

Much effort has been expended here, especially on Strategy 3, but the supercriticality of the equation presents a truly significant obstacle which already defeats all known methods. Strategy 1 is probably hopeless; the last century of experience has shown that (with the very notable exception of completely integrable systems, of which the Navier-Stokes equations are *not* an example) most nonlinear PDE, even those arising from physics, do not enjoy explicit formulae for solutions from *arbitrary* data (although it may well be the case that there are interesting exact solutions from special (e.g. symmetric) data). Strategy 2 may have a little more hope; after all, the Poincaré conjecture became solvable (though still very far from trivial) after Perelman introduced a new globally controlled quantity for Ricci flow (the *Perelman entropy*) which turned out to be both coercive and critical. (See also my exposition of this topic.) But we are still not very good at discovering new globally controlled quantities; to quote Klainerman, “the discovery of any new bound, stronger than that provided by the energy, for general solutions of *any* of our basic physical equations would have the significance of a major event” (emphasis mine).

I will return to Strategy 2 later, but let us now discuss Strategy 3. The first basic observation is that the Navier-Stokes equation, like many other of our basic model equations, obeys a *scale invariance*: specifically, given any scaling parameter $\lambda > 0$, and any smooth velocity field $u: [0,T) \times \mathbb{R}^3 \to \mathbb{R}^3$ solving the Navier-Stokes equations up to some time $T$, one can form a new velocity field $u^{(\lambda)}: [0, \lambda^2 T) \times \mathbb{R}^3 \to \mathbb{R}^3$ solving the Navier-Stokes equations up to time $\lambda^2 T$, by the formula

$$u^{(\lambda)}(t,x) := \frac{1}{\lambda} u\left( \frac{t}{\lambda^2}, \frac{x}{\lambda} \right).$$

(Strictly speaking, this scaling invariance is only present as stated in the absence of an external force, and with the non-periodic domain $\mathbb{R}^3$ rather than the periodic domain $\mathbb{T}^3$. One can adapt the arguments here to these other settings with some minor effort, the key point being that an approximate scale invariance can play the role of a perfect scale invariance in the considerations below. The pressure field $p(t,x)$ gets rescaled too, to $p^{(\lambda)}(t,x) := \frac{1}{\lambda^2} p( \frac{t}{\lambda^2}, \frac{x}{\lambda} )$, but we will not need to study the pressure here. The viscosity $\nu$ remains unchanged.)
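As a sanity check, the invariance can be verified term by term: writing Navier-Stokes as $\partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p$ with $\nabla \cdot u = 0$, substitution of the rescaled fields $u^{(\lambda)}(t,x) = \lambda^{-1} u(t/\lambda^2, x/\lambda)$ and $p^{(\lambda)}(t,x) = \lambda^{-2} p(t/\lambda^2, x/\lambda)$ shows every term acquiring the same factor $\lambda^{-3}$ (a routine computation, sketched here):

```latex
% Each term of Navier-Stokes, evaluated on the rescaled fields,
% picks up the common factor \lambda^{-3}:
\begin{align*}
\partial_t u^{(\lambda)}(t,x) &= \lambda^{-3}\, (\partial_t u)(t/\lambda^2, x/\lambda) \\
\big(u^{(\lambda)} \cdot \nabla\big) u^{(\lambda)}(t,x) &= \lambda^{-3}\, \big((u \cdot \nabla) u\big)(t/\lambda^2, x/\lambda) \\
\nu \Delta u^{(\lambda)}(t,x) &= \lambda^{-3}\, (\nu \Delta u)(t/\lambda^2, x/\lambda) \\
\nabla p^{(\lambda)}(t,x) &= \lambda^{-3}\, (\nabla p)(t/\lambda^2, x/\lambda)
\end{align*}
% so u^{(\lambda)} solves the same equation, with the same viscosity \nu,
% on the time interval [0, \lambda^2 T).  Divergence-freeness is preserved,
% since \nabla \cdot u^{(\lambda)} = \lambda^{-2} (\nabla \cdot u)(t/\lambda^2, x/\lambda) = 0.
```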

We shall think of the rescaling parameter $\lambda$ as being very large (e.g. $\lambda = 2^n$ for some large $n$). One should then think of the transformation from $u$ to $u^{(\lambda)}$ as a kind of “magnifying glass”, taking fine-scale behaviour of $u$ and matching it with an identical (but rescaled, and slowed down) coarse-scale behaviour of $u^{(\lambda)}$. The point of this magnifying glass is that it allows us to treat both fine-scale and coarse-scale behaviour on an equal footing, by identifying both types of behaviour with something that goes on at a fixed scale (e.g. the unit scale). Observe that the scaling suggests that fine-scale behaviour should play out on much smaller time scales than coarse-scale behaviour ($T$ versus $\lambda^2 T$). Thus, for instance, if a unit-scale solution does something funny at time 1, then the rescaled fine-scale solution will exhibit something similarly funny at spatial scales $1/\lambda$ and at time $1/\lambda^2$. Blowup can occur when the solution shifts its energy into increasingly finer and finer scales, thus evolving more and more rapidly and eventually reaching a singularity in which the scale in both space and time on which the bulk of the evolution is occurring has shrunk to zero. In order to prevent blowup, therefore, we must arrest this motion of energy from coarse scales (or low frequencies) to fine scales (or high frequencies). (There are many ways in which to make these statements rigorous, for instance using Littlewood-Paley theory, which we will not discuss here, preferring instead to leave terms such as “coarse-scale” and “fine-scale” undefined.)

Now, let us take an arbitrary large-data smooth solution to Navier-Stokes, and let it evolve over a very long period of time [0,T), assuming that it stays smooth except possibly at time T. At very late times of the evolution, such as those near to the final time T, there is no reason to expect the solution to resemble the initial data any more (except in perturbative regimes, but these are not available in the arbitrary large-data case). Indeed, the only control we are likely to have on the late-time stages of the solution are those provided by globally controlled quantities of the evolution. Barring a breakthrough in Strategy 2, we only have two really useful globally controlled (i.e. bounded even for very large T) quantities:

- The *maximum kinetic energy* $\sup_{0 \le t < T} \frac{1}{2} \int_{\mathbb{R}^3} |u(t,x)|^2\, dx$; and
- The *cumulative energy dissipation* $\nu \int_0^T \int_{\mathbb{R}^3} |\nabla u(t,x)|^2\, dx\, dt$.

Indeed, the energy conservation law implies that these quantities are both bounded by the initial kinetic energy E, which could be large (we are assuming our data could be large) but is at least finite by hypothesis.
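Both bounds follow from the classical energy identity: taking the inner product of the equation with $u$ and integrating by parts (the transport and pressure terms vanish since $\nabla \cdot u = 0$) gives

```latex
\begin{align*}
\frac{d}{dt} \frac{1}{2} \int_{\mathbb{R}^3} |u(t,x)|^2\, dx
  &= - \nu \int_{\mathbb{R}^3} |\nabla u(t,x)|^2\, dx \;\le\; 0,
\end{align*}
% and hence, integrating in time from 0 to any t < T,
\begin{align*}
\frac{1}{2} \int_{\mathbb{R}^3} |u(t,x)|^2\, dx
  + \nu \int_0^t \int_{\mathbb{R}^3} |\nabla u(s,x)|^2\, dx\, ds
  \;=\; \frac{1}{2} \int_{\mathbb{R}^3} |u(0,x)|^2\, dx \;=\; E.
\end{align*}
```

so the kinetic energy is non-increasing, and the cumulative dissipation up to any time is at most $E$.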

The above two quantities are *coercive*, in the sense that control of these quantities implies that the solution, even at very late times, stays in a bounded region of some function space. However, this is basically the only thing we know about the solution at late times (other than that it is smooth until time T, but this is a qualitative assumption and gives no bounds). So, unless there is a breakthrough in Strategy 2, we cannot rule out the worst-case scenario that the solution near time T is essentially an *arbitrary* smooth divergence-free vector field which is bounded both in kinetic energy and in cumulative energy dissipation by E. In particular, near time T the solution could be concentrating the bulk of its energy into fine-scale behaviour, say at some spatial scale $1/\lambda$. (Of course, cumulative energy dissipation is not a function of a single time, but is an integral over all time; let me suppress this fact for the sake of the current discussion.)

Now, let us take our magnifying glass and blow up this fine-scale behaviour by $\lambda$ to create a coarse-scale solution $u^{(\lambda)}$ to Navier-Stokes. Given that the fine-scale solution could (in the worst-case scenario) be as bad as an arbitrary smooth vector field with kinetic energy and cumulative energy dissipation at most $E$, the rescaled unit-scale solution can be as bad as an arbitrary smooth vector field with kinetic energy and cumulative energy dissipation at most $\lambda E$, as a simple change of variables shows. Note that the control given by our two key quantities has worsened by a factor of $\lambda$; because of this worsening, we say that these quantities are *supercritical* – they become increasingly useless for controlling the solution as one moves to finer and finer scales. This should be contrasted with *critical* quantities (such as the energy for *two-dimensional* Navier-Stokes), which are invariant under scaling and thus control all scales equally well (or equally poorly), and *subcritical* quantities, control of which becomes increasingly powerful at fine scales (and increasingly useless at very coarse scales).
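This change of variables is easy to check mechanically. As an illustrative sketch (in Python with SymPy, using a Gaussian as a stand-in profile – the specific profile is irrelevant, only the rescaling matters), one can verify that the kinetic energy of the rescaled field $u^{(\lambda)}(t,x) = \frac{1}{\lambda} u(\frac{t}{\lambda^2}, \frac{x}{\lambda})$ exceeds that of $u$ by exactly one power of $\lambda$:

```python
import sympy as sp

x, lam = sp.symbols('x lam', positive=True)

# Gaussian stand-in for one velocity component at a fixed time; in 3D the
# energy integral factorises over coordinates, so we integrate in one
# variable and cube the result.
f = sp.exp(-x**2)
I = sp.integrate(f**2, (x, -sp.oo, sp.oo))                       # 1D factor of the energy of u
I_lam = sp.integrate(f.subs(x, x / lam)**2, (x, -sp.oo, sp.oo))  # after x -> x/lam

E = I**3                    # kinetic energy of u (up to the constant 1/2)
E_lam = I_lam**3 / lam**2   # the prefactor 1/lam in u^(lam) contributes lam^{-2}
ratio = sp.simplify(E_lam / E)
print(ratio)  # lam: control worsens by one factor of the scaling parameter
```

The cumulative energy dissipation scales the same way, so both of our globally controlled quantities are supercritical.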

Now, suppose we know of examples of unit-scale solutions whose kinetic energy and cumulative energy dissipation are as large as $\lambda E$, but which can shift their energy to the next finer scale, e.g. a half-unit scale, in a bounded amount $O(1)$ of time. Given the previous discussion, we cannot rule out the possibility that our rescaled solution behaves like this example. Undoing the scaling, this means that we cannot rule out the possibility that the original solution will shift its energy from spatial scale $1/\lambda$ to spatial scale $1/2\lambda$ in time $O(1/\lambda^2)$. If this bad scenario repeats over and over again, then convergence of geometric series shows that the solution may in fact blow up in finite time. Note that the bad scenarios do not have to happen immediately after each other (the *self-similar* blowup scenario); the solution could shift from scale $1/\lambda$ to $1/2\lambda$, wait for a little bit (in rescaled time) to “mix up” the system and return to an “arbitrary” (and thus potentially “worst-case”) state, and then shift to $1/4\lambda$, and so forth. While the cumulative energy dissipation bound can provide a little bit of a bound on how long the system can “wait” in such a “holding pattern”, it is far too weak to prevent blowup in finite time. To put it another way, we have no rigorous, deterministic way of preventing Maxwell’s demon from plaguing the solution at increasingly frequent (in absolute time) intervals, invoking various rescalings of the above scenario to nudge the energy of the solution into increasingly finer scales, until blowup is attained.
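To see why convergence of the geometric series permits finite-time blowup, suppose (hypothetically) that the shift from spatial scale $2^{-n}$ to $2^{-n-1}$ takes time comparable to $c \cdot 4^{-n}$, which is the parabolic (time $\sim$ length$^2$) rescaling of a unit-scale event of duration $O(1)$; the constant $c$ below is an arbitrary stand-in. A quick Python check that the total elapsed time stays finite:

```python
# Hypothetical energy cascade: the n-th shift, from spatial scale 2^{-n}
# to 2^{-n-1}, takes time c * 4^{-n} by parabolic rescaling of a unit-scale
# event of duration O(1).  (c = 1 is an arbitrary illustrative constant.)
c = 1.0
shift_times = [c * 4.0 ** (-n) for n in range(60)]

# The partial sums converge to 4c/3, so infinitely many shifts -- and hence
# the concentration of energy at arbitrarily fine scales -- can complete
# within a finite time interval.
total_time = sum(shift_times)
print(total_time)
```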

Thus, in order for Strategy 3 to be successful, we basically need to rule out the scenario in which unit-scale solutions with *arbitrarily large* kinetic energy and cumulative energy dissipation shift their energy to the next finer scale. But every single analytic technique we are aware of (except for those involving *exact* solutions, i.e. Strategy 1) requires at least one bound on the size of the solution in order to have any chance at all. Basically, one needs at least one bound in order to control all nonlinear errors – and any strategy we know of which does not proceed via exact solutions will have at least one nonlinear error that needs to be controlled. The only thing we have here is a bound on the *scale* of the solution, which is not a bound in the sense that a norm of the solution is bounded; and so we are stuck.

To summarise, any argument which claims to yield global regularity for Navier-Stokes via Strategy 3 must inevitably (via the scale invariance) provide a radically new method for providing non-trivial control of nonlinear unit-scale solutions of arbitrary large size for unit time, which looks impossible without new breakthroughs on Strategy 1 or Strategy 2. (There are a couple of loopholes that one might try to exploit: one can instead try to refine the control on the “waiting time” or “amount of mixing” between each shift to the next finer scale, or try to exploit the fact that each such shift requires a certain amount of energy dissipation, but one can use similar scaling arguments to the preceding to show that these types of loopholes cannot be exploited without a new bound along the lines of Strategy 2, or some sort of argument which works for arbitrarily large data at unit scales.)

To rephrase in an even more jargon-heavy manner: the “energy surface” on which the dynamics is known to live can be quotiented by the scale invariance. After this quotienting, the solution can stray arbitrarily far from the origin even at unit scales, and so we lose all control of the solution unless we have exact control (Strategy 1) or can significantly shrink the energy surface (Strategy 2).

The above was a general critique of Strategy 3. Now I’ll turn to some known specific attempts to implement Strategy 3, and discuss where the difficulty lies with these:

- *Using weaker or approximate notions of solution* (e.g. viscosity solutions, penalised solutions, super- or sub-solutions, etc.). This type of approach dates all the way back to Leray. It has long been known that by weakening the nonlinear portion of Navier-Stokes (e.g. taming the nonlinearity), or strengthening the linear portion (e.g. introducing hyperdissipation), or by performing a discretisation or regularisation of spatial scales, or by relaxing the notion of a “solution”, one can get global solutions to approximate Navier-Stokes equations. The hope is then to take limits and recover a smooth solution, as opposed to a mere global *weak* solution, which was already constructed by Leray for Navier-Stokes all the way back in 1933. But in order to ensure the limit is smooth, we need convergence in a strong topology. In fact, the same type of scaling arguments used before basically require that we obtain convergence in either a critical or subcritical topology. Absent a breakthrough in Strategy 2, the only types of convergence we have are in very rough – in particular, in supercritical – topologies. Attempting to upgrade such convergence to critical or subcritical topologies is the qualitative analogue of the quantitative problems discussed earlier, and ultimately faces the same problem (albeit in very different language) of trying to control unit-scale solutions of arbitrarily large size. Working in a purely qualitative setting (using limits, etc.) instead of a quantitative one (using estimates, etc.) can disguise these problems (and, unfortunately, can lead to errors if limits are manipulated carelessly), but the qualitative formalism does not magically make these problems disappear. Note that weak solutions are already known to be badly behaved for the closely related Euler equation. More generally, by recasting the problem in a sufficiently abstract formalism (e.g. formal limits of near-solutions), there are a number of ways to create an abstract object which could be considered as a kind of generalised solution, but the moment one tries to establish actual control on the regularity of this generalised solution one will encounter all the supercriticality difficulties mentioned earlier.
- *Iterative methods* (e.g. contraction mapping principle, Nash-Moser iteration, power series, etc.) *in a function space*. These methods are perturbative, and require *something* to be small: either the data has to be small, the nonlinearity has to be small, or the time of existence desired has to be small. These methods are excellent for constructing *local* solutions for large data, or global solutions for *small* data, but cannot handle global solutions for large data (running into the same problems as any other Strategy 3 approach). These approaches are also typically rather insensitive to the specific structure of the equation, which is already a major warning sign, since one can easily construct (rather artificial) systems similar to Navier-Stokes for which blowup is known to occur. The optimal perturbative result is probably very close to that established by Koch-Tataru, for reasons discussed in that paper.
- *Exploiting blowup criteria*. Perturbative theory can yield some highly non-trivial blowup criteria – that certain norms of the solution must diverge if the solution is to blow up. For instance, a celebrated result of Beale-Kato-Majda shows that the maximal vorticity must have a divergent time integral at the blowup point. However, all such blowup criteria are subcritical or critical in nature, and thus, barring a breakthrough in Strategy 2, the known globally controlled quantities cannot be used to reach a contradiction. Scaling arguments similar to those given above show that perturbative methods cannot achieve a supercritical blowup criterion.
- *Asymptotic analysis of the blowup point(s)*. Another proposal is to rescale the solution near a blowup point and take some sort of limit, and then continue the analysis until a contradiction ensues. This type of approach is useful in many other contexts (for instance, in understanding Ricci flow). However, in order to actually extract a useful limit (in particular, one which still solves Navier-Stokes in a strong sense, and does not collapse to the trivial solution), one needs to uniformly control all rescalings of the solution – or in other words, one needs a breakthrough in Strategy 2. Another major difficulty with this approach is that blowup can occur not just at one point, but conceivably on a one-dimensional set; this is another manifestation of supercriticality.
- *Analysis of a minimal blowup solution*. This is a strategy, initiated by Bourgain, which has recently been very successful in establishing large-data global regularity for a variety of equations with a critical conserved quantity, namely to assume for contradiction that a blowup solution exists, and then extract a *minimal* blowup solution which minimises the conserved quantity. This strategy (which basically pushes the perturbative theory to its natural limit) seems set to become the standard method for dealing with large-data critical equations. It has the appealing feature that there is enough compactness (or almost periodicity) in the minimal blowup solution (once one quotients out by the scaling symmetry) that one can begin to use subcritical and supercritical conservation laws and monotonicity formulae as well (see my survey on this topic). Unfortunately, as the strategy is currently understood, it does not seem to be directly applicable to a supercritical situation (unless one simply assumes that some critical norm is globally bounded), because it is impossible, in view of the scale invariance, to minimise a non-scale-invariant quantity.
- *Abstract approaches* (avoiding the use of properties specific to the Navier-Stokes equation). At its best, abstraction can efficiently organise and capture the key difficulties of a problem, placing the problem in a framework which allows for a direct and natural resolution of these difficulties without being distracted by irrelevant concrete details. (Kato’s semigroup method is a good example of this in nonlinear PDE; regrettably for this discussion, it is limited to subcritical situations.) At its worst, abstraction conceals the difficulty within some subtle notation or concept (e.g. in various types of convergence to a limit), thus incurring the risk that the difficulty is “magically” avoided by an inconspicuous error in the abstract manipulations. An abstract approach which manages to breezily ignore the supercritical nature of the problem thus looks very suspicious. More substantively, there are many equations which enjoy a coercive conservation law yet still exhibit finite-time blowup (e.g. the mass-critical focusing NLS equation); an abstract approach would thus have to exploit some subtle feature of Navier-Stokes which is not present in all the examples in which blowup is known to be possible. Such a feature is unlikely to be discovered abstractly before it is first discovered concretely; the field of PDE has proven to be the type of mathematics where progress generally starts in the concrete and then flows to the abstract, rather than vice versa.

If we abandon Strategy 1 and Strategy 3, we are thus left with Strategy 2 – discovering new bounds, stronger than those provided by the (supercritical) energy. This is not *a priori* impossible, but there is a huge gap between simply wishing for a new bound and actually discovering and then rigorously establishing one. Simply inserting the existing energy bounds into the Navier-Stokes equation and seeing what comes out will provide a few more bounds, but they will all be supercritical, as a scaling argument quickly reveals. The only other way we know of to create global non-perturbative deterministic bounds is to discover a new conserved or monotone quantity. In the past, when such quantities have been discovered, they have always been connected either to geometry (symplectic, Riemannian, complex, etc.), to physics, or to some consistently favourable (defocusing) sign in the nonlinearity (or in various “curvatures” in the system). There appears to be very little usable geometry in the equation; on the one hand, the Euclidean structure enters the equation via the diffusive term and by the divergence-free nature of the vector field, but the nonlinearity is instead describing transport by the velocity vector field, which is basically just an arbitrary volume-preserving infinitesimal diffeomorphism (and in particular does not respect the Euclidean structure at all). One can try to quotient out by this diffeomorphism (i.e. work in material coordinates), but there are very few geometric invariants left to play with when one does so. (In the case of the Euler equations, the vorticity vector field is preserved modulo this diffeomorphism, as observed for instance by Li, but this invariant is very far from coercive, being almost purely topological in nature.) The Navier-Stokes equation, being a system rather than a scalar equation, also appears to have almost no favourable sign properties, in particular ruling out the type of bounds which the maximum principle or similar comparison principles can give. This leaves physics, but apart from the energy, it is not clear if there are any physical quantities of fluids which are *deterministically* monotone. (Things look better on the stochastic level, in which the laws of thermodynamics might play a role, but the Navier-Stokes problem, as defined by the Clay Institute, is deterministic, and so we have Maxwell’s demon to contend with.) It would of course be fantastic to obtain a fourth source of non-perturbative controlled quantities, not arising from geometry, physics, or favourable signs, but this looks somewhat of a long shot at present. Indeed, given the turbulent, unstable, and chaotic nature of Navier-Stokes, it is quite conceivable that no reasonable globally controlled quantities exist beyond those which arise from the energy.

Of course, given how hard it is to show global regularity, one might instead try to establish finite time blowup (this also is acceptable for the Millennium prize). Unfortunately, even though the Navier-Stokes equation is known to be very unstable, it is not clear at all how to pass from this to a rigorous demonstration of a blowup solution. All the rigorous finite time blowup results (as opposed to mere instability results) that I am aware of rely on one or more of the following ingredients:

- Exact blowup solutions (or at least an exact transformation to a significantly simpler PDE or ODE, for which blowup can be established);
- An ansatz for a blowup solution (or approximate solution), combined with some nonlinear stability theory for that ansatz;
- A comparison principle argument, dominating the solution by another object which blows up in finite time, taking the solution with it; or
- An indirect argument, constructing a functional of the solution which must attain an impossible value in finite time (e.g. a quantity which is manifestly non-negative for smooth solutions, but must become negative in finite time).

It may well be that there is some exotic symmetry reduction which gives (1), but no-one has located any good exactly solvable special case of Navier-Stokes (in fact, those which have been found are known to have global smooth solutions). (2) is problematic for two reasons: firstly, we do not have a good ansatz for a blowup solution, but perhaps more importantly it seems hopeless to establish a stability theory for any such ansatz thus created, as this problem is essentially a more difficult version of the global regularity problem, and in particular subject to the main difficulty, namely controlling the highly nonlinear behaviour at fine scales. (One of the ironies in pursuing method (2) is that in order to establish rigorous *blowup* in some sense, one must first establish rigorous *stability* in some other (renormalised) sense.) Method (3) would require a comparison principle, which as noted before appears to be absent for the non-scalar Navier-Stokes equations. Method (4) suffers from the same problem, ultimately coming back to the “Strategy 2” problem that we have virtually no globally monotone quantities in this system to play with (other than energy monotonicity, which clearly looks insufficient by itself). Obtaining a new type of mechanism to force blowup other than (1)-(4) above would be quite revolutionary, not just for Navier-Stokes; but I am unaware of even any proposals in these directions, though perhaps topological methods might have some effectiveness.
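For contrast, it may help to recall the scalar prototype behind mechanisms (3) and (4) – precisely the kind of structure Navier-Stokes appears to lack: the Riccati equation $\dot y = y^2$ with $y(0) = y_0 > 0$ has the exact solution $y(t) = y_0/(1 - y_0 t)$, which blows up at time $T_* = 1/y_0$, and a comparison principle then forces any quantity satisfying $\dot y \ge y^2$ to blow up by time $T_*$ as well. A minimal numerical sketch in Python:

```python
# Prototype comparison-principle blowup: y' = y^2, y(0) = y0 > 0 has the
# exact solution y(t) = y0 / (1 - y0*t), blowing up at T* = 1/y0.
y0 = 2.0
T_star = 1.0 / y0

def y_exact(t):
    return y0 / (1.0 - y0 * t)

# Forward-Euler integration up to just before the blowup time; the numerical
# solution tracks the exact one and grows without bound as t -> T*.
dt = 1e-6
t, y = 0.0, y0
while t < T_star - 1e-3:
    y += dt * y * y
    t += dt

print(t, y, y_exact(t))
```

Any quantity dominated from below by such an ODE is dragged to infinity with it; the absence of a comparison principle for the non-scalar Navier-Stokes system is exactly what blocks this route.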

So, after all this negativity, do I have any positive suggestions for how to solve this problem? My opinion is that Strategy 1 is impossible, and Strategy 2 would require either some exceptionally good intuition from physics, or else an incredible stroke of luck. Which leaves Strategy 3 (and indeed, I think one of the main reasons why the Navier-Stokes problem is interesting is that it *forces* us to create a Strategy 3 technique). Given how difficult this strategy seems to be, as discussed above, I only have some extremely tentative and speculative thoughts in this direction, all of which I would classify as “blue-sky” long shots:

* *Work with ensembles of data, rather than a single initial datum*. All of our current theory for deterministic evolution equations deals only with a single solution from a single initial datum. It may be more effective to work with parameterised families of data and solutions, or perhaps probability measures (e.g. Gibbs measures or other invariant measures). One obvious partial result to shoot for is to try to establish global regularity for *generic* large data rather than *all* large data; in other words, acknowledge that Maxwell’s demon might exist, but show that the probability of it actually intervening is very small. The problem is that we have virtually no tools for dealing with generic (average-case) data other than by treating all (worst-case) data; the enemy is that the Navier-Stokes flow itself might have some perverse entropy-reducing property which somehow makes the average case drift towards (or at least recur near) the worst case over long periods of time. This is incredibly unlikely to be the truth, but we have no tools to prevent it from happening at present.
* *Work with a much simpler (but still supercritical) toy model*. The Navier-Stokes model is parabolic, which is nice, but is complicated in many other ways, being relatively high-dimensional and also non-scalar in nature. It may make sense to work with other, simplified models which still contain the key difficulty that the only globally controlled quantities are supercritical. Examples include the Katz-Pavlovic dyadic model for the Euler equations (for which blowup can be demonstrated by a monotonicity argument; see this survey for more details), or the spherically symmetric defocusing supercritical nonlinear wave equation.
* *Develop non-perturbative tools to control deterministic non-integrable dynamical systems*. Throughout this post we have been discussing PDEs, but actually there are similar issues arising in the nominally simpler context of finite-dimensional dynamical systems (ODEs). Except in perturbative contexts (such as the neighbourhood of a fixed point or invariant torus), the long-time evolution of a dynamical system for deterministic data is still largely only controllable by the classical tools of exact solutions, conservation laws and monotonicity formulae; a discovery of a new and effective tool for this purpose would be a major breakthrough. One natural place to start is to better understand the long-time, non-perturbative dynamics of the classical three-body problem, for which there are still fundamental unsolved questions.
* *Establish really good bounds for critical or nearly-critical problems*. Recently, I showed that having a very good bound for a critical equation essentially implies that one also has a global regularity result for a slightly supercritical equation. The idea is to use a monotonicity formula which does weaken very slightly as one passes to finer and finer scales, but such that each such passage to a finer scale costs a significant amount of monotonicity; since there is only a bounded amount of monotonicity to go around, it turns out that the latter effect just barely manages to overcome the former in my equation to recover global regularity (though by doing so, the bounds worsen from polynomial in the critical case to double exponential in my logarithmically supercritical case). I severely doubt that my method can be pushed to non-logarithmically supercritical equations, but it does illustrate that having very strong bounds at the critical level may lead to some modest progress on the problem.
* *Try a topological method*. This is a special case of (1). It may well be that a primarily topological argument could be used either to construct solutions, or to establish blowup; there are some precedents for this type of construction in elliptic theory. Such methods are very global by nature, and thus not restricted to perturbative or nearly-linear regimes. However, there is no obvious topology here (except possibly for that generated by the vortex filaments) and as far as I know, there is not even a “proof-of-concept” version of this idea for any evolution equation. So this is really more of a wish than any sort of concrete strategy.
* *Understand pseudorandomness*. This is an incredibly vague statement; but part of the difficulty with this problem, which also exists in one form or another in many other famous problems (e.g. the Riemann hypothesis, the twin prime and Goldbach conjectures, normality of digits of π, the Collatz conjecture, etc.) is that we expect any sufficiently complex (but deterministic) dynamical system to behave “chaotically” or “pseudorandomly”, but we still have very few tools for actually making this intuition precise, especially if one is considering deterministic initial data rather than generic data. Understanding pseudorandomness in other contexts, even dramatically different ones, may indirectly shed some insight on the turbulent behaviour of Navier-Stokes.
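To see this mechanism in miniature, here is a small numerical sketch of a dyadic shell system in the spirit of the Katz-Pavlovic model (the specific coefficients, truncation, and initial data below are illustrative assumptions, not the precise model from the survey). The nonlinearity conserves the energy exactly, yet it pumps that energy to ever finer scales, so a supercritical controlled quantity coexists with rapid growth of higher norms:

```python
import numpy as np

# A dyadic shell model in the spirit of Katz-Pavlovic (coefficients are
# illustrative): da_n/dt = lam^n * a_{n-1}^2 - lam^(n+1) * a_n * a_{n+1}.
# The nonlinearity conserves the "energy" sum(a_n^2) exactly (the two sums
# telescope against each other), but transfers it towards high n.

LAM, N, DT, STEPS = 2.0, 10, 1e-4, 30000

def rhs(a):
    lo = np.concatenate(([0.0], a[:-1]))   # a_{n-1}, with a_{-1} = 0
    hi = np.concatenate((a[1:], [0.0]))    # a_{n+1}, truncated at a_N = 0
    n = np.arange(N)
    return LAM**n * lo**2 - LAM**(n + 1) * a * hi

a = np.zeros(N)
a[0] = 1.0                                  # all energy at the coarsest scale
e0 = np.sum(a**2)
h0 = np.sum(LAM**(2*np.arange(N)) * a**2)   # an H^1-like (supercritical) norm

for _ in range(STEPS):                      # classical RK4 time stepping
    k1 = rhs(a); k2 = rhs(a + 0.5*DT*k1)
    k3 = rhs(a + 0.5*DT*k2); k4 = rhs(a + DT*k3)
    a += (DT/6.0)*(k1 + 2*k2 + 2*k3 + k4)

energy_drift = abs(np.sum(a**2) - e0)
h_growth = np.sum(LAM**(2*np.arange(N)) * a**2) / h0
```

Running this, the energy drift stays at the level of the integrator’s error while the H^1-type norm grows substantially: the conserved (supercritical) quantity simply does not see the fine-scale growth.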

In conclusion, while it is good to occasionally have a crack at impossible problems, just to try one’s luck, I would personally spend much more of my time on other, more tractable PDE problems than the Clay prize problem, though one should certainly keep that problem in mind if, in the course of working on other problems, one indeed does stumble upon something that smells like a breakthrough in Strategy 1, 2, or 3 above. (In particular, there are many other serious and interesting questions in fluid equations that are not anywhere near as difficult as global regularity for Navier-Stokes, but still highly worthwhile to resolve.)

## 368 comments


18 March, 2007 at 7:14 pm

Greg Kuperberg

This is an interesting-looking review of a topic that I don’t know beans about. But I notice that you do not mention models of fluid flow with quantized vortices. These models approximate continuous 3+1-dimensional fluid equations by non-local 1+1-dimensional equations. It seems conceivable, although I know nothing about the analytic issues, that you could hope for some kind of convergence as the quantization parameter goes to zero.

My wife, Rena Zieve, studies fluids with quantized vortices. In her case, the motivation is not that the model approximates classical fluids, but that it is a true description of superfluid helium.

18 March, 2007 at 9:07 pm

Terence Tao

Dear Greg,

It is indeed the analytic issues (and specifically, the establishing of estimates which control the solution, or convergence of approximate solutions to the true solution) which are the heart of the matter. Suppose for instance that we do manage to construct a global solution to a quantised Navier-Stokes for every value of the quantisation parameter (this is certainly conceivable, as one can do similar things for other relaxations, regularisations, or discretisations of Navier-Stokes). Now, as you say, one needs to establish convergence in some topology of these quantised solutions in the limit as the quantisation parameter goes to zero. In order to get any useful sort of limit, as a bare minimum one needs the sequence to enjoy some sort of bound in some function space norm, uniformly in the quantisation parameter, otherwise all sorts of bad things could happen (e.g. breakdown of the energy conservation identity). The key phrase here is uniformity in the parameter. If one has some argument that bounds solutions to the quantised Navier-Stokes uniformly in the quantisation parameter, then by taking limits one expects the same argument to work directly in the zero-quantisation limit. In other words, one can dispense with the quantised Navier-Stokes and work with the original Navier-Stokes equation directly.

Thus, from the point of view of establishing estimates (which is the key problem), perturbing the equation by varying a small parameter does not really help. Such tricks are however useful for more

*qualitative* or *formal* aspects of the theory. For instance, to justify things like the energy conservation identity for very rough solutions (for which one cannot directly justify things like differentiation under the integral sign), a typical trick is to first relax the solution to a smoother solution (and perhaps a smoother equation), establish the relevant identity in that smooth setting, and then take limits to recover the identity in the original solution. These sorts of perturbations are also useful for a number of simple topological arguments, such as an application of the continuity method (showing that all solutions obey some property P by showing first that the set of solutions obeying P is open, closed, and non-empty, and then using the connectedness of the solution space).

From a physical viewpoint, it may well be that one of these modified equations is in fact a more realistic model for fluids than Navier-Stokes. But for the narrow purposes of solving the Clay Prize Problem, we’re stuck with the original Navier-Stokes equation :-) .

18 March, 2007 at 9:13 pm

Greg Kuperberg

You make it sound like it could be worthwhile to try to disprove the conjecture.

19 March, 2007 at 5:01 am

Nets Katz

Terry,

I think you are a little pessimistic about strategy 1 for proving blow-up – working with ensembles. For that strategy to work it is not absolutely essential that the typical solution blow up. In order for Navier-Stokes to blow up, you just need to have each scale “activated”, that is, have energy flow sufficiently into each scale.

Suppose you could prove with positive probability that energy cascades into a generic situation in which it is one scale higher. Generic has to mean that we get positive probability of flowing to the next scale. You might get a measure zero set of solutions with blow-up but nevertheless use a probabilistic argument to prove that it occurs.

19 March, 2007 at 8:50 am

Terence Tao

Greg, I certainly think one should pursue the blowup direction as well as the regularity problem. As noted in my main post, though, our technologies for establishing blowup are rather limited at present. Which brings me to Nets’ interesting idea. This idea may at least be able to establish a “norm inflation” scenario, in which some high regularity Sobolev norm is shown to increase quite rapidly in a bounded amount of time. That would be enough to disprove a fairly strong version of global regularity, namely that one can bound some critical global spacetime norm uniformly by, say, something depending only on the norm of the initial data. (This would imply as a corollary by standard persistence-of-regularity arguments that any Sobolev norm of the solution at a late time is controlled by the Sobolev norm at time zero, and the norm of the initial data.) I suspect that this type of global regularity result, while very common for critical problems, is probably false for supercritical problems, and might be disprovable by some sort of contradiction argument (e.g. by Bourgain’s induction on energy method). If one was extremely optimistic one might then hope to run a Baire category type argument to create an actual blowup, but this looks somewhat unlikely to me.

19 March, 2007 at 7:19 pm

Terence Tao

Dear Nets,

I thought about it a bit more, and this strategy to establish blowup may run up against the same supercriticality issues which plague the global regularity problem.

Thanks to the recent work of Escauriaza-Seregin-Sverak and others, we know that blowup solutions to Navier-Stokes must in fact blow up in the critical norm L^3. This rules out self-similar type solutions in which the solution shifts all of its energy from one scale to the next while keeping critical norms under control. It is plausible that these results can be localised, leading one to also rule out “self-similar + radiation” solutions (such as those appearing in the recent works of Merle and Raphael for NLS, as well as even more recent work on wave maps by Krieger-Schlag-Tataru and Rodnianski-Sterbenz) in which a lot of mass is radiated away to coarse scales but the concentrating portion of the solution stays bounded in L^3. If these scenarios are ruled out, then the blowup solution must become increasingly large (and hence increasingly nonlinear) in critical norms at fine scales as one approaches the blowup time. This creates a lot of scope for an anti-Maxwell’s demon (Maxwell’s angel?) to cause trouble – one may start with a promisingly randomly distributed ensemble at coarse scales, but as one pushes the ensemble into fine scales with large critical norm, and the evolution becomes very nonlinear and unpredictable, Maxwell’s angel could conceivably sneak in and drain a lot of entropy out of the ensemble, and eventually nudge the entire ensemble into states in which the energy all dissipates harmlessly and one has global regularity. (Now, if we could obtain some sort of rigorous version of the second law of thermodynamics here, one might be able to prevent this from happening, but this looks very remote at present.)

19 March, 2007 at 11:20 pm

Nets Katz

Terry,

That’s really unfortunate. For this to work, nonlinearity needs to be a friend and not an enemy. In fact the first test problem should be to prove blow-up for 3D Euler with finite energy.

Some advantages there: energy never dissipates away, and cascading always occurs exactly by vortex stretching, so that some structure seems to be preserved (for instance the locations of zeroes just convect).

Of course the passage to high scales should happen at time scales so startlingly fast that the linear term is having very little effect at all, so that blow-up for Euler should indicate blow-up for Navier-Stokes.

20 March, 2007 at 12:21 am

Nets Katz

In fact, so that I can understand your objection better, let’s restrict our attention to 3D Euler. The critical norm which you say we should be desperately clinging to is L^{infty} of vorticity. We don’t need a recent result here. Beale-Kato-Majda already prove that if we have blow up of Euler then the L^{infty} of vorticity must blow up. Does that mean the equation becomes “increasingly unstable”?

By contrast let me mention a 2D problem for Euler which is critical. There global solvability is guaranteed since vorticity is convected, so that L^{\infty} of vorticity is conserved. Beale-Kato-Majda show that the growth of Sobolev norms in time is at most doubly exponential. The question, perhaps somewhat less glamorous than global solvability, is whether this estimate is sharp and double exponential growth actually occurs.

As we’ve discussed in the past, if one looks at the highest order part of the growth exclusively, in a certain position, one is forced to consider a system of ODE for SL(2) valued variables, one variable for each active scale. The criticality enforces that different scales could be of equal strength and that many different scales have to work together to achieve double exponential growth. Moreover, if we look at this system of N equations which governs the growth of the frequencies that are like 2^{-N}, this system is highly nonlinear, in that, in the relevant time scale which is 1/N, you could have all of the N/2 highest scales change completely. You can make very precise that this critical setting becomes increasingly unstable as N gets larger. In particular the system has increasing numbers of variables more remote from each other in scale, all of which interact. That is tough going.

On the other hand, in the supercritical case, if you posit any reasonable worst cascading of energy, what happens is that the highest order scale dominates exponentially the lower order scales. The cascading process should indeed be nonlinear, but it seems self-similar as frequencies increase. In the time we can activate the N+1st scale, the Nth scale might get all messed up – that’s the nonlinearity – but we don’t care so much what happens at slightly lower scales. The problem is essentially local in scale.

Now you might argue that this local process could have a derandomizing effect on the ensemble. (And the more times you do it, the more derandomizing it gets.) But if you could show that were true, it would probably be the most amazing theorem in fluid mechanics ever. In any case, in this setting, it seems to me the supercritical problem might be easier to understand than the critical one. Am I wrong?

20 March, 2007 at 8:26 pm

Terence Tao

Dear Nets,

It’s a bit trickier to use scaling analysis for Euler, since it has a two-parameter scaling symmetry rather than a one-parameter one; the spatial scale, temporal scale, and velocity magnitude are related only by a single constraint rather than two. Nevertheless I agree that the problem of improving the double-exponential Beale-Kato-Majda bound for 2D Euler is a great problem – it is perhaps the one place where the wall separating the impossibly supercritical problems from the feasible critical problems is thinnest. It is clear that some improvement should be possible, and whatever technique is used to accomplish this will undoubtedly be interesting.
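The two-parameter symmetry can be checked term by term (a sketch; the exponent bookkeeping below is standard):

```latex
% Euler: u_t + (u \cdot \nabla) u = -\nabla p.  For any \lambda > 0 and any
% exponent a, set
u_{\lambda,a}(x,t) = \lambda^{a}\, u(\lambda x, \lambda^{1+a} t), \qquad
p_{\lambda,a}(x,t) = \lambda^{2a}\, p(\lambda x, \lambda^{1+a} t).
% Every term then acquires the same factor \lambda^{1+2a}:
\partial_t u_{\lambda,a} = \lambda^{1+2a}\, u_t, \quad
(u_{\lambda,a} \cdot \nabla) u_{\lambda,a} = \lambda^{1+2a}\, (u \cdot \nabla) u,
\quad \nabla p_{\lambda,a} = \lambda^{1+2a}\, \nabla p,
% so u_{\lambda,a} is again a solution for every (\lambda, a): the three
% exponents (space, time, amplitude) obey just one constraint.  Adding the
% viscosity term \nu \Delta u, which scales as \lambda^{2+a}, forces
% 2 + a = 1 + 2a, i.e. a = 1, leaving Navier-Stokes with only the
% one-parameter scaling u_\lambda(x,t) = \lambda\, u(\lambda x, \lambda^2 t).
```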

I also agree that it is against all physical intuition that fluid equations could derandomise ensembles of solutions – but we don’t seem to have any way at present to make this intuition rigorous. If we have to admit the existence of derandomisers to move the solution to either the best-case scenario or worst-case scenario at whim, then any supercritical equation could conceivably do just about anything from generic blowup to global regularity, depending on the mood of the derandomiser. This is why I feel that “understanding pseudorandomness” needs to be part of the solution (unless we find a new monotonicity formula or something, which lets us gain more control on either the best case or worst case and allows us to establish results even if the equation tries its best to derandomise the flow).

4 February, 2012 at 10:13 am

sdenisov

Hi Terry :). There is a simple perturbative argument which shows that double exponential growth for, say, higher Sobolev norms cannot be significantly improved (although I do not have the sharp estimate yet). This lower bound is true for arbitrarily long but finite time and assumes that the Sobolev norm is already large for t=0. A more interesting question is the lower bound which is valid for all time. In the case of 2d Euler on the torus, the best estimate is superlinear growth of the vorticity gradient. However, if one considers the dynamics of patches (a very different problem) then there is a mechanism for INFINITE in time double exponential rate of merging and singularity formation (with a regular strain present). Anyhow, the double exponential bound for Euler might be improved by only very little.

20 March, 2007 at 8:37 pm

Ars Mathematica » Blog Archive » Why Navier-Stokes is Hard

[...] Tao has a thoughtful post that explains why proving existence results for Navier-Stokes equations is so [...]

23 March, 2007 at 8:18 am

Not Even Wrong » Blog Archive » All Sorts of Links

[...] each have fascinating blog entries on Millenium problems. Terry Tao writes a long explanation of Why Global Regularity for Navier-Stokes is Hard. He also comments about the recent New York Times piece about him and about math education issues. [...]

29 March, 2007 at 5:52 am

Gil Kalai

Dear Terry

This is a wonderful post. I have a (rather generic) question: can looking at the problem in high dimensions (even asymptotically somehow) make things easier regarding the negative direction, or even regarding some aspects of the positive direction?

29 March, 2007 at 7:19 am

Terence Tao

Dear Gil,

Intuitively, the equation should behave worse in higher dimensions, because the relationship between the energy and the scaling becomes increasingly unfavourable. Certainly the equation is more unstable; but it is hard to convert instability to actual blowup (it’s the “Maxwell’s angel” problem discussed earlier – the instability might have the improbable effect of always rescuing the solution just before it blows up). Conversely, in two dimensions, where the energy becomes critical, the result of Beale-Kato-Majda shows that one has global smooth solutions for Navier Stokes.

Another popular model to study is the *hyperdissipation* model, in which the dissipative term Δ in the Navier-Stokes equation is replaced by a power of the Laplacian -(-Δ)^α, where α is a parameter which serves a similar purpose to dimension, in that it determines the relationship between energy and scaling. The level α = 5/4 is the threshold beyond which the energy becomes subcritical and one has global regularity. (I’m not sure what happens exactly at 5/4, but perhaps Nets does.) In this paper it is shown that as α moves from 5/4 continuously down to 1, the (upper bound on the) dimension of the singular set at blowup time moves continuously from 0 to 1.

31 March, 2007 at 2:10 pm

Hongjie

Dear Terry,

The 3D Navier-Stokes equation is globally well-posed when \alpha=5/4. More generally, the n-dimensional NSE is globally well-posed when \alpha=(n+2)/4. One proof of this can be found, for example, in the paper http://arxiv.org/abs/math/0104199 you mentioned. But it is not explicitly stated there…
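The threshold (n+2)/4 can be recovered by a quick scaling count (a sketch, using the standard hyperdissipative scaling):

```latex
% Hyperdissipative NSE: u_t + (u \cdot \nabla) u = -\nu (-\Delta)^\alpha u - \nabla p.
% The equation is invariant under
u_\lambda(x,t) = \lambda^{2\alpha - 1}\, u(\lambda x, \lambda^{2\alpha} t),
% since every term then scales by the same factor \lambda^{4\alpha - 1}.
% The energy of the rescaled data in n spatial dimensions is
\| u_\lambda(0) \|_{L^2}^2
  = \lambda^{2(2\alpha - 1)} \int_{\mathbb{R}^n} |u(\lambda x, 0)|^2 \, dx
  = \lambda^{4\alpha - 2 - n}\, \| u(0) \|_{L^2}^2 .
% The energy is scale-invariant (critical) exactly when 4\alpha - 2 - n = 0,
% i.e. \alpha = (n+2)/4; for n = 3 this is the threshold \alpha = 5/4, and
% for \alpha > (n+2)/4 the energy becomes subcritical.
```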

4 April, 2007 at 10:21 am

Y. Charles Li

Hi, Terry

This is a very nice article, and will be helpful especially to fresh Ph.D.’s.

About the scaling argument: simple as it is, it does dictate important estimates. E.g. for 3D NS, Leray obtained: for any initial condition u(0), when

t > T = C \nu^{-5} || u(0) ||^4_{L^2}

there is no more singularity. Using the scaling,

|| u(0) ||^4_{L^2} -> \lambda^2 || u(0) ||^4_{L^2} ,  T -> \lambda^2 T

as dictated by the scaling t -> \lambda^2 t. Pretend this is 2D (of course, 2D global regularity has an easier argument):

|| u(0) ||^4_{L^2} -> || u(0) ||^4_{L^2} ,  T -> T

while t -> \lambda^2 t by scaling. So any local solution can be rescaled to a global one.
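The dimension count behind this rescaling remark can be written out (a sketch, using the standard Navier-Stokes scaling):

```latex
% If u solves NSE, so does
u_\lambda(x,t) = \lambda\, u(\lambda x, \lambda^2 t),
% and in d spatial dimensions the data transforms as
\| u_\lambda(0) \|_{L^2}^2
  = \lambda^2 \int |u(\lambda x)|^2 \, dx
  = \lambda^{2-d}\, \| u(0) \|_{L^2}^2 .
% d = 3: \|u(0)\|_{L^2}^4 picks up a factor \lambda^{-2}, so the Leray time
% T = C \nu^{-5} \|u(0)\|_{L^2}^4 rescales exactly like the time variable
% itself; the bound is scale-consistent and yields no new information at
% fine scales (supercriticality).
% d = 2: \|u(0)\|_{L^2} is scale-invariant, so T is fixed while the
% lifespan of u_\lambda rescales; this is the heuristic behind rescaling
% any local 2D solution to a global one.
```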

I’m also glad you start to appreciate “turbulence”. I wrote a piece in the Mathematical Intelligencer:

http://www.math.missouri.edu/~cli/Nature-T.pdf

It should have appeared in hard copy by now. It is a follow-up of the famous article by Ruelle and Takens:

http://www.springerlink.com/content/h1760361517x10h2/

I hope it brings chaos “closer” to turbulence. How close? It is hard to judge.

From a different perspective, the situation is like the “global warming” problem: the debate over whether turbulence is chaos is more or less over; the question is what one can do about chaos (turbulence) in terms of a physically “reachable” description.

I wrote something along this line which is too immature to make public. I asked Joel Lebowitz to read it for me and he is reading it.

Best Regards

Charles

4 April, 2007 at 10:26 am

Y. Charles Li

The Ruelle and Takens paper link is not working properly:

D. Ruelle, F. Takens, On the nature of turbulence, Comm. Math. Phys., 20 (167-192), 23 (243-244), 1971.

Y. Charles Li

7 April, 2007 at 10:22 am

Dave Purvance

In the time evolution of a viscous, incompressible flow’s Fourier modes, convection’s quadratic nonlinearity is expressed as a convolution of an unknown flow, unshifted and shifted in wavenumber. When the unshifted portion is left as some unknown function and its shifted counterpart is assumed to be a time series of finite order, and when a discrete span of wavenumbers is considered simultaneously, then the three-space Navier-Stokes evolution equations become a large matrix differential equation. As the order of the time series and the number of wavenumbers considered get large, this Navier-Stokes matrix differential equation becomes ever closer to the continuous three-space Navier-Stokes evolution equations. Using conjugate symmetry and similarity transforms, it is argued at “http://arxiv.org/ftp/math/papers/0610/0610086.pdf” that the exact solution to the Navier-Stokes matrix differential equation is a stable matrix exponential and that the assumed time series is its truncated Taylor expansion in time.

Care to comment?

7 April, 2007 at 10:45 am

Terence Tao

Dear Dave,

This is an instance of type 1 of Strategy 3 listed in my post above: “Using weaker or approximate notions of solution”. A localisation of the frequency space is essentially equivalent via the uncertainty principle to a discretisation of physical space. As discussed above, there is indeed no difficulty creating a global solution for the approximated equation. When however one tries to take a limit to create a global strong solution for the original equation, it is necessary to obtain a uniform subcritical or critical bound on all approximate solutions (i.e. to achieve Strategy 2) in order to extract a smooth limit. Otherwise, the best one can achieve are the global weak solutions of Leray.

8 April, 2007 at 6:24 am

Dave Purvance

Dear Terence,

Thanks for your reply.

Unless something strange happens to a matrix eigendecomposition as the matrix size becomes infinitely large, convection remains oscillatory (purely imaginary eigenvalues) for all time. Convection is therefore absolutely bounded by the magnitude of the initial condition. Viscous shear (negative real eigenvalues) diffuses these convective oscillations to zero as time goes to infinity. The only problem I see in taking a limit here is that convective oscillations may become extremely (infinitely?) fast.

Is this the limit problem you are talking about?

8 April, 2007 at 10:12 am

Terence Tao

Dear Dave, this is part of the problem, but more important is the fact that as one transitions from finite-dimensional state spaces to infinite-dimensional ones, being bounded in one norm no longer implies being bounded in another. Thus for instance it is conceivable that the limit remains bounded in energy, but is no longer smooth, which is the whole point of the Navier-Stokes global regularity property. (Bounded-energy global weak solutions were constructed all the way back in 1933 by Leray, essentially by a variant of the method you describe, but it is known that weak solutions need not be smooth.) In particular, the energy may concentrate in finer and finer length scales until a singularity occurs in finite time. This singularity is not entirely visible from the finite dimensional approximations, as they only have finitely many scales in the first place; it is a phenomenon which emerges in the limit.

In order to obtain regularity control on the limit, as opposed to merely a weak solution, it is not enough to bound the energy (which is a supercritical quantity); one also needs to bound a critical or subcritical quantity, and no such global bounds for arbitrarily large data are currently known at present (this is the “Strategy 2″ I discuss above).

8 April, 2007 at 12:02 pm

Simons Lecture III: Structure and randomness in PDE « What’s new

[...] For , one has global smooth solutions for small data with either sign. For large data in the focusing case, finite time blowup is possible. For large data in the defocusing case, the existence of global smooth solutions are unknown even for spherically symmetric data, indeed this problem, being supercritical, is of comparable difficulty to the Navier-Stokes global regularity problem. [...]

9 April, 2007 at 5:45 am

One side of "our" difficult problem (i.e. turbulence, did you think about something else?) « Alex’s Blog

[...] Why global regularity for Navier-Stokes is hard « What’s new [...]

10 April, 2007 at 3:24 pm

Dave Purvance

With the best of respect, I really believe the new bound provided by the Navier-Stokes “matrix” differential equation is the [-1,1] bound on convection (purely imaginary eigenvalues). Convection “chirps up” in frequency in all powers of increasing time, but stays bounded in magnitude by [-1,1]. Viscous shear (real negative eigenvalues) smoothly diffuses these highly nonlinear oscillatory chirps to zero as time goes to infinity. There needs to be no mention of energy.

And with this I’ll quit babbling…

15 April, 2007 at 3:11 am

sgrajeev

What a thoughtful and inspiring discussion!

Some questions

Are there other examples of supercritical PDEs that are better understood, or are the difficulties you mention generic to all such cases? E.g. \Box\phi + \lambda\phi^p = 0 for p big enough.

The scaling arguments you start with also appear in quantum field theory, where supercritical means something like `non-renormalizable’. Is there more to this analogy? If there is, it is bad news for proving regularity.

There is some geometry in the Euler equations: the fluid follows a geodesic w.r.t. the L^2 metric on the group of volume preserving Diffeomorphisms (Arnold). Does this suggest anything for Navier-Stokes?

15 April, 2007 at 6:52 am

Terence Tao

Dear sgrajeev,

Most supercritical evolution equations are just as poorly understood as the Navier-Stokes equations, unfortunately, with the notable exception of the supercritical elliptic and parabolic equations for which there is a favourable sign which allows for some sort of maximum principle or comparison principle to take hold; basically, in such cases, the nonlinearity is extremely strong, but it is always acting in a favourable direction, using its strength to reduce the size of the solution rather than increase it. However, in oscillatory evolution equations such as wave or Schrodinger equations it appears that there is no similar phenomenon taking place; the nonlinearity can “try” to reduce the size of its solution, but it is so strong that it can “overshoot” and end up changing the sign or direction of the solution and making it much larger at the same time.

There do seem to be analogies between classical PDE and quantum field theory (which can be viewed as a kind of quantum PDE) but this is definitely an underexplored area of study. For instance, the Cauchy problem for quantum field theory has not been studied much, even in linear models (perhaps it is a bad question to ask).

It is true that the Euler equation has some nice geometrical structure, and I would indeed think this structure will be key in the future understanding of this equation, and thus indirectly to Navier-Stokes. There is however a major obstruction with using the diffeomorphism group structure to understand Navier-Stokes, namely that this equation contains the Laplacian (via the viscosity term), which relies on the Euclidean (or Riemannian) geometry of space. This type of geometry is not preserved at all by diffeomorphisms. Because of the presence of two very different types of geometry in Navier-Stokes it seems significantly less likely that we get the same type of “geometric miracles” (e.g. unexpected monotonicity formulae) that we do in, say, Ricci flow, which only involves one type of geometry. But there could still be many surprises in this equation.

15 April, 2007 at 1:02 pm

Rajeev

Thanks.

The Euclidean metric on the domain does go into the Euler equations as well; e.g., in determining the metric on the group of volume preserving diffeomorphisms. The length^2 of a tangent vector to the group is just the length^2 of the velocity field w.r.t. the Euclidean metric integrated on the domain of the fluid ( which is also the kinetic energy ). Euler equations are the geodesic equations for this metric, according to Arnold.

Viscosity introduces the Laplacian in a more direct way: the viscous force is the gradient of a certain function. Often dissipative equations can be thought of as a Hamiltonian system with a complex-valued Hamiltonian. (Sorry to plug my work here, but it is relatively recent, so may not be known: arXiv:quant-ph/0701141).

Not claiming that this gives a better control on the scaling properties of Navier-Stokes. But, it looks like Navier-Stokes has a geometrical meaning too, using complex geometry. Might be of independent interest.

13 May, 2007 at 4:35 am

Rajeev’s Journal » Blog Archive » Fuzzy Fluids

[...] Terrence Tao has made some deep observations on why the regularity of three dimensional Navier-Stokes is such a hard problem. He has gone on to many other equally fascinating topics, I remain fixated on his main point there: that Navier-Stokes is `supercritical’. The nonlinearities become stronger at small distance scales, making it impossible to know (using present techniques) whether solutions remain smooth for all time. Thus it is crucial to understand the scale dependence of non-linearities in fluid mechanics. [...]

22 May, 2007 at 8:00 am

Smooth Solution to the 3-Space Navier-Stokes Equations by David Purvance « Smooth Chaos in Fluid Dynamics

[...] post argues that viscous shear is both the coercive and critical quantity that assures smooth solutions to the 3-space-periodic Navier-Stokes [...]

22 May, 2007 at 8:19 am

DavePurvance

This post argues that viscous shear is both the “coercive” and “critical” quantity that assures smooth solutions to the 3-space Navier-Stokes equations. Isn’t this what physicists have long expected?

23 May, 2007 at 7:44 am

math student

Hi Prof. Tao,

I am curious: if we assume the solution is periodic, is there any partial result?

23 May, 2007 at 8:33 pm

Terence Tao

Dear math student,

The difficulty in global regularity for Navier-Stokes lies at the fine spatial scales – in particular, scales much smaller than the period L. Intuitively, we expect the behaviour at such scales not to “see” the periodicity, and so whatever difficulties are present in the non-periodic case are also expected to be present in the periodic case.

Dear Dave,

There is an error in your post when you attempt to derive (42) from (40), trying to diagonalise different matrices A_n simultaneously. The rotation matrices T_n used to diagonalise A_n depend on n, so you cannot deduce (42) from (7) by conjugating by a single matrix.

More generally, there is no quantitative advantage gained in discretising the problem by introducing some truncation parameters (such as the parameters N and M in your post). The only bounds in the discretised model which would be of use to the original continuous model would be those bounds which are uniform in the truncation parameters, since these are the only bounds which will survive the passage back to the limit. But if there was a bound in the discretised model that was independent of those parameters, one could also have proven it directly in the continuous model simply by taking the limit of the _proof_ rather than of the result. Truncated models are useful for justifying some qualitative statements (e.g. ensuring that all sums and integrals converge, etc.) but do not progress towards the heart of the matter, which is to establish global bounds on solutions to the continuous equation.

25 May, 2007 at 7:08 am

DavePurvance

Let’s try this again…

Dear Terence,

Even though it only becomes clear in the paragraphs below (47) of my post, convective matrix becomes known only after are known for , so a simultaneous diagonalization of all convective matrices is not possible. I don’t see how diagonalizing convective matrices one at a time in increasing time-order invalidates the Navier-Stokes matrix differential equation (7) or its matrix exponential solution (45). Could you explain? Also, the coercive, critical (Gaussian low-pass) nature of viscous shear holds independent of and and therefore should remain in the limit as .

I hope I am understanding your welcomed criticisms.

25 May, 2007 at 8:41 am

Terence Tao

Dear Dave,

Each time you apply one of the transformations T_n to diagonalise one of the matrices A_n in (7), you undo the diagonalisation of the preceding matrices. Thus, as I said above, the formulation (42) of your truncated Navier-Stokes equation is not valid (indeed, as you have just commented, the matrices A_n are not simultaneously diagonalisable in general, so there is no way that (42) can be deduced from (7)).

28 May, 2007 at 10:10 am

Frederick

Dear Terence,

Even if one puts the N-S equation in a cubic box, the essential difficulty remains, because the high frequency (ultra-violet) part remains. Can we truncate the ultra-violet part? Numerical (lattice) simulation tries to smear it, so lattice simulation is still an ostrich policy.

I guess this is the meaning of some of your remarks. I think this is right and is the main difficulty of nonlinear analysis. Actually I think the whole subject of nonlinear equations is trying to find some ways to deal with the ultra-violet part, because in a nonlinear equation one cannot omit even the 1 billionth Fourier mode. Take x^2 for example: even if x is a periodic function whose properties are known, nearly all the information about x^2 is lost. This is mysterious to me: why can even one nonlinear term destroy all the information and bring in lots of randomness? There seems to be no efficient method to calculate the Fourier coefficients of x^2 and to understand the properties of x^2.

The above is my understanding of nonlinear equations from the point of view of Fourier analysis.
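The mode-mixing point above can be checked directly: squaring a function convolves its Fourier coefficients, so two neighbouring high modes m and n immediately populate the low mode m - n (as well as 2n, m + n, 2m and 0). A minimal numpy illustration, with m = 100 and n = 99 as small stand-ins for the "one billion" example:

```python
import numpy as np

N = 4096                          # samples over one period
t = np.arange(N) / N
m, n = 100, 99                    # stand-ins for "1 billion + 1" and "1 billion"
x = np.cos(2 * np.pi * m * t) + np.cos(2 * np.pi * n * t)

def active_modes(sig, tol=1e-8):
    """Frequencies (as non-negative integers) carrying non-negligible energy."""
    c = np.fft.fft(sig) / N       # normalised DFT coefficients
    k = np.fft.fftfreq(N, d=1.0 / N)
    return sorted({abs(int(round(freq))) for freq, amp in zip(k, c) if abs(amp) > tol})

print(active_modes(x))        # [99, 100]
print(active_modes(x * x))    # [0, 1, 198, 199, 200]: squaring convolves the
                              # coefficients, so the mode m - n = 1 appears at once
```

Note that the product of just two modes already spreads energy across five, which is the coupling mechanism the comment describes.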

My questions are:

1. Is there some Fourier-like series, other than sines and cosines, that is friendly to nonlinear terms? I mean, when one expands x in this new series, x^2 could be easily calculated. If one expands x in terms of exp(i*n*omega*t), then one has to take into account all the cross terms such as exp(i*(m-n)*omega*t), even for m = 1 billion + 1 and n = 1 billion. Could we expand x in terms of NewFourier(n, omega, t) so that, when we calculate x^2, only a few m and n are needed to calculate x^2 accurately? Has such a new Fourier series been found? Or is there simply no such series at all, just as quintic polynomial equations have no general solution in radicals, by the Abel–Ruffini theorem?

2. Some have seen an analogy between the ultraviolet part of the N-S equation and the ultraviolet divergences of quantum field theory. Is the analogy proper and accurate? Surely, if it is accurate, the renormalization group would be a great tool for extracting information.

Frederick

29 May, 2007 at 10:34 am

Dave Purvance: Dear Terence,

I have added two extra steps to equation (42) to make clear that is used as a similarity transform on to change, if necessary, its wavenumber flow basis from to . It does not effect any lower-order matrix for . For one to argue that there is something wrong with (42), I would think one would have to argue that the 3-space Navier-Stokes equations and their solution cannot be described in terms of a single wavenumber flow basis. I haven't seen anyone argue this yet. Have you?

You are right, though. These similarity transforms cannot be “deduced” from (7) . They don’t invalidate (42) either.

30 May, 2007 at 4:15 am

Dave Purvance: Dear Terence,

I might also add that the similarity transform is necessary in my post’s (42) to keep the flow from “diffusing” into the one forbidden flow direction as a passive scalar can do in the advection-diffusion equation.

(Excuse my less-than-stellar grammar. I believe in my last comment "effect" should read "affect". Also, WordPress promises me that sooner or later they will get all of the bugs out of rendering latex in blog comments.)

3 August, 2007 at 9:31 am

2006 ICM: Étienne Ghys, “Knots and dynamics” « What’s new[...] have no discernible structure other than that of a general diffeomorphism. As I discussed in my own post on Navier-Stokes, the discovery of a new conserved quantity for fluid equations could potentially be extremely [...]

16 August, 2007 at 3:51 pm

Stephen Montgomery-Smith: My feeling is that there aren't any undiscovered globally controlled quantities of the type described in strategy (2). I admit that my reasoning is full of holes. But the energy estimate for the Navier-Stokes equation gives rise (via a heuristic argument) to the Kolmogorov 5/3 power law for the spectrum. However, in 2D we have enstrophy estimates giving rise to the Kraichnan power of 3 law. My understanding is that both experiment and numerics confirm this, at least to some extent.

Similarly, if there were some strange, new, undiscovered globally controlled quantity, perhaps one might expect this to also give rise to a law different to the 5/3 law. But then experiment would have picked this up.

I do admit that one of the huge holes in my argument is that energy and enstrophy in 3D and 2D come equipped with well defined dissipation rates. So in a way, it is just a thought. But, for example, if one could make helicity somehow “monotone”, then this would definitely qualify.

17 August, 2007 at 8:44 am

Terence Tao: Dear Stephen,

That’s a fairly plausible argument. (One could perhaps hypothesise that there are quantities which are globally controlled, but only start becoming strong enough to affect ensemble distributions such as the power law at very small scales, beyond what experiment and numerics can detect. Of course, if that was the case, then it would be a mathematical artefact rather than a physically or computationally useful quantity, but I presume it would still be technically legal to use it in order to claim the Millennium prize. :-) )

There is also the possibility that there is some sort of “adaptive” controlled quantity, which relies on the prior history of the solution as well as on the current state, and so is not cutting down the energy surface in a way which would distort the power law. This seems to go against thermodynamical intuition though; fluids don’t seem to have much “memory”. On balance I would agree with you that Strategy (2) is unlikely to work, though it can’t be ruled out completely, and it’s not as if the other strategies have a significantly higher probability of success right now anyway. :-)

17 August, 2007 at 11:26 am

Stephen Montgomery-Smith: I know that the Navier-Stokes equation is "memoryless", but whenever one actually looks at numerics, the solution really does look like it has a lot of memory. I have observed this quite dramatically with the 2D Euler equation. The vorticity scalar is pushed around by the flow, and after a certain amount of time seems to arrange itself in "puff pastry"-like thin layers of positive and negative vorticity, to the point that one could look at an instance of a flow and make a rather good guess as to how long the flow has evolved.

From watching the Science Channel, I get the impression that a similar thing happens with the solar magnetic field. Presumably this satisfies some kind of memoryless hydromagnetic equation. But according to the T.V. shows I watch, the magnetic fields get all twisted up around each other. At some point they “snap” (presumably get stretched so much that some kind of dissipation kicks in), and this causes release of huge amounts of energy, giving rise to solar flares that cause inconvenience to man made communications satellites.

And I know that you have spent some time with the "magnetization variable" formulation of the Navier-Stokes equation. While everyone who has worked on it seems to have gotten nowhere, the magnetization variable nevertheless differs from the velocity field by the gradient of a scalar field, and perhaps that scalar field encodes a kind of abstract memory.

(For those of you unfamiliar with the magnetization variable formulation, I did a short write up at http://www.qeden.com/wiki/Navier-Stokes_Existence_and_Smoothness).

And I guess that this paper:

http://www.iumj.indiana.edu/IUMJ/fulltext.php?artid=42034&year=1993&volume=42

is an example of what an adaptive controlled quantity would be.

18 August, 2007 at 11:33 am

Stephen Montgomery-Smith: I was looking back at the old comments (I only found this page a few days ago). I saw Hongjie's comment from March 31st that the hyperviscous NS is globally well-posed for alpha = 5/4. Actually this has a rather short proof, because in this situation we have the boundedness of

$\int_0^\infty \| (-\Delta)^{5/8} u \|_2^2 \, dt.$

Then the proof proceeds along the same lines as, say, the proof that the Prodi-Serrin conditions are sufficient for global well-posedness of the regular NS.

20 August, 2007 at 3:31 pm

“Math Doesn’t Suck”, and the Chayes-McKellar-Winn theorem « What’s new[...] In that case, Markov chain theory lets one conclude that if the solution started out at a fixed total energy E, and the system S was isolated, then the limiting distribution of microstates would just be the uniform distribution on the energy surface ; every state on this surface is equally likely to occur at any given instant of time (this is known as the fundamental postulate of statistical mechanics, though in this simple Markov chain model we can actually derive this postulate rigorously). This distribution is known as the microcanonical ensemble of S at energy E. It is remarkable that this ensemble is largely independent of the actual values of the transition probabilities; it is only the energy E and the function H which are relevant. (This analysis is perfectly rigorous in the Markov chain model, but in more realistic models such as Hamiltonian mechanics or quantum mechanics, it is much more difficult to rigorously justify convergence to the microcanonical ensemble. The trouble is that while these models appear to have a chaotic dynamics, which should thus exhibit very pseudorandom behaviour (similar to the genuinely random behaviour of a Markov chain model), it is very difficult to demonstrate this pseudorandomness rigorously; the same difficulty, incidentally, is present in the Navier-Stokes regularity problem.) [...]
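The convergence to the uniform (microcanonical) distribution described in the excerpt above can be illustrated in a few lines. A minimal sketch, assuming a chain whose transition matrix is a convex combination of permutation matrices (hence doubly stochastic), irreducible via a cyclic shift, and aperiodic via self-loops; for such a chain the stationary distribution is uniform regardless of the particular transition probabilities:

```python
import numpy as np

n = 6                                # microstates on one "energy surface"
I = np.eye(n)
C = np.roll(I, 1, axis=0)            # cyclic-shift permutation matrix
Q = I[np.random.default_rng(0).permutation(n)]   # a random permutation matrix

# Convex combination of permutation matrices: doubly stochastic, irreducible
# (the cycle C connects all states) and aperiodic (self-loops from I).
P = (I + C + Q) / 3

mu = np.zeros(n)
mu[0] = 1.0                          # start concentrated on one microstate
for _ in range(2000):                # run the chain
    mu = mu @ P

print(mu)   # ~[1/6, ..., 1/6]: uniform, independent of the details of P
```

The limiting distribution is the same for any admissible choice of the permutations, which is the point of the excerpt: only the energy surface matters, not the transition probabilities.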

7 September, 2007 at 6:23 am

Stephen Montgomery-Smith: I am having doubts about my earlier remark that the Kolmogorov 5/3 law suggests the non-existence of undiscovered global controlling quantities. The reason I say this is that in the 2D case we have the Kraichnan power of 3 law that comes from the enstrophy. But in the 2D case, there are also better controlling quantities than the enstrophy, namely the L_p norms of the vorticity for all p greater than 2.

2 October, 2007 at 4:15 am

Notes : Wiki page on NS equations « jtstnsp[...] Now ,the next job is to take a look at Terry Tao’s article on NS equations. [...]

9 October, 2007 at 11:23 am

A quantitative formulation of the global regularity problem for the periodic Navier-Stokes equation « What’s new[...] Tuesday, October 9th, 2007 in math.AP, paper Tags: compactness, Navier-Stokes equations I have just uploaded to the arXiv my paper “A quantitative formulation of the global regularity problem for the periodic Navier-Stokes equation”, submitted to Dynamics of PDE. This is a short note on one formulation of the Clay Millennium prize problem, namely that there exists a global smooth solution to the Navier-Stokes equation on the torus given any smooth divergence-free data. (I should emphasise right off the bat that I am not claiming any major breakthrough on this problem, which remains extremely challenging in my opinion.) [...]

29 December, 2007 at 12:36 am

nOnoscience » Blog Archive » Turbulence[...] If you can handle comfortably some mathematical language, read the excellent post titled “Why Global Regularity for Navier Stokes is Hard” by Terrence [...]

16 March, 2008 at 6:21 am

cgjoh@csc.kth.se: For computational evidence of blowup of incompressible Euler solutions, see the recent article (to appear in BIT Numerical Mathematics)

Blowup of incompressible Euler solutions

available on http://www.csc.kth.se/~cgjoh

20 March, 2008 at 8:47 am

Dave Purvance: Dear Terence,

In your comments above I believe you have confirmed my argument that for any spatially periodic flow with finite initial value, the incompressible Navier-Stokes equations can be posed as the nonlinear matrix differential equation

. (1)

This equation is nonlinear because matrix is a function of the unknown flow . When in is expanded as a time series

(2),

then (1) becomes

. (3)

When have a common flow basis, i.e., when commute, then the solution to (1) is the stable matrix exponential

. (4)

However, as you correctly pointed out in your comments above, commutativity of must be assumed a priori and, therefore, it is a mistake to assert (4) is a general solution to the Navier-Stokes (1).

To remedy this commutativity problem, I have recently argued in this arXiv paper that using the time expanded flow (2) everywhere in (3) gives the equation

. (5)

Matching coefficients in (5) when solves for the unknown flow coefficients . For they are

. (6)

Without assuming commutativity, let the Taylor expansion of the matrix exponential function (4) be

. (7)

This is a stable function of and converges even when do not commute. Obviously, when commute,

. (8)

What I have also argued in this same paper is that when do not commute, the elusive bound to the periodic Navier-Stokes flow is

(9)

The paper numerically confirms this bound (9) for using a white-noise initial flow. These numerical results have given me the courage to ask you again for your opinion of my recent findings (even though it was a humbling experience last time). Thank you.

Dave Purvance

20 March, 2008 at 9:41 am

A bound for periodic Navier-Stokes flows « Smooth Chaotic Fluids[...] as Terry Tao correctly pointed out, commutativity of must be assumed a priori and, therefore, it is a mistake [...]

20 March, 2008 at 11:10 am

Dave Purvance: Dear Anonymous,

It’s only a mistake to hold that my matrix exponential function is a general solution to the periodic Navier-Stokes equation. The bound that I have identified is this same matrix exponential function, but expanded without the commutativity assumption. It is NOT claimed to be a solution, just a bound.

Dave Purvance

22 March, 2008 at 2:14 pm

Terence Tao: Dear Dave,

Unfortunately there is an error in deriving (26) in your paper. You seem to be treating various monomials involving the as if they were non-negative numbers, when in fact they are matrices of indefinite signature. As such it is not necessarily the case that a difference is bounded in magnitude by a sum; for instance, might be larger than (e.g. consider the case when ), and similarly can be larger than . Also, it is not true that matrix product order has no influence on the "bound" of the product: for instance, it is entirely possible for to be different from . (Both are bounded above by , but this is not the type of expression you have on the right-hand side of (26). Similarly, is bounded above by , but this is again not the type of expression you have on the right-hand side of (26), in which the norm is outside the sum and product, rather than inside.)

More generally, one cannot control a non-commutative flow by its commutative counterpart, because cancellations in the latter do not necessarily force cancellations in the former. It’s easiest to illustrate this phenomenon with a discrete equation rather than a continuous equation . It is an easy matter to find non-commuting matrices A, B whose commutator has an eigenvalue larger than 1. If we then let U(t) vary periodically between A, B, -A, and -B in turn and let u(0) be an eigenvector of corresponding to that eigenvalue, then grows exponentially in time, even though the abelianisation stays bounded. It is easy to convert this discrete-time example to a continuous-time one but I will leave this as an exercise.

[The one time in which one can safely control a non-commutative flow by a commutative one is if the latter has no cancellation in it, in which case Gronwall's inequality applies.]
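Terence's example above is easy to run numerically. A hedged sketch (the particular matrices A and B are my choice, not taken from the comment): cycle the generator of u' = U(t) u through A, B, -A, -B for time h each. The per-period flow map is then close to exp(h^2 [B, A]) rather than the identity, so the solution grows exponentially, even though the integral of U over each period, and hence the abelianised flow, vanishes:

```python
import numpy as np

# Non-commuting generators with [A, B] = A@B - B@A = diag(1, -1) != 0.
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])

h = 0.5
# A and B are nilpotent (A @ A = 0), so the matrix exponential of s*X is
# exactly I + s*X.
ex = lambda X, s: np.eye(2) + s * X

# Flow map over one period of u' = U(t) u, with U(t) equal to A, B, -A, -B
# in turn for time h each (later factors act on the left).
P = ex(B, -h) @ ex(A, -h) @ ex(B, h) @ ex(A, h)

u = np.array([0., 1.])
for _ in range(200):                 # 200 periods
    u = P @ u

# The true flow grows exponentially (P is close to exp(h^2 [B, A]), not to I)...
print(np.linalg.norm(u))             # astronomically large
# ...while the abelianised flow exp(integral of U dt) = exp(0) = I is bounded,
# because the integral of U over each period is the zero matrix:
print(h * A + h * B - h * A - h * B)
```

Here P works out to [[0.75, -0.125], [0.125, 1.3125]], with spectral radius about 1.28, so the norm of u roughly multiplies by 1.28 every period while the commutative counterpart sits at the identity.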

18 April, 2008 at 9:16 am

Dave Purvance: Dear Terence,

While it may turn out that “commutative flows”, as you term them, may be the only flows physically possible, I did not mean my mathematical bounds to be derived from a commutative flow. My bounds, which I now denote as with corresponding matrices , use the commutative solution’s matrix exponential function, but this function is evaluated with a noncommutative solution’s . are therefore not solution coefficients to the Navier-Stokes matrix differential equation or any other differential equation that I know of, unless, of course, become commutative.

Relating to your criticism that the inequality

(1)

is not established, isn’t its opposite

(2)

contradictory? For instance, in the commutative case when with this latter inequality asserts

. (3)

and and the contested inequality (1) must accommodate both commutative and noncommutative cases.

Finally, the numerical tests. The 10th order is comprised of 512 matrix product terms and each matrix is 1029×1029. Using numerous “white-noise” initial flows, the sum of these matrix product terms always agrees with the bounds derived from (1). If (1) is not true, then how does one explain these (near miraculous) numerical results?

18 April, 2008 at 11:35 pm

Jonas: Dave,

As far as I can tell, there is no miracle going on here. That the inequality holds for white-noise initial data says very little. There are numerous examples of inequalities in mathematics which hold for a typical random sample of the objects in question, while the examples where the inequality fails are highly structured objects which a random sample has essentially no chance of ever turning up. A proper test of the bound would, e.g., start out with a single initial datum and then modify it locally to make the inequality come closer to equality, then modify this data and repeat until, possibly, data is found which violates the inequality.

In your discussion above about inequalities (1) and (2), you miss the alternative that in general neither inequality need hold: there could very well be matrices for which (1) holds and others for which (2) holds.

For matrices in general, let A be the 3×3 matrix which has a one in the upper left corner and the rest is 0, and let B be the matrix which has -1 at the last two positions in the top row and the rest is 0. If we let A be B_n and B be S_n, then inequality (1) fails, since the norm of A-B is 3^0.5 > 2, and the norm of A is 1.

19 April, 2008 at 7:00 am

Dave Purvance: Dear Jonas,

Your comments on my numerical tests may be true, and I would love to have the computing power to try what you describe as a valid numerical test. In fact, I hope someone is interested enough to try, even though if nothing negative is found, someone else might criticize that you just didn’t probe hard enough.

One point that I believe adds credence to my numerical tests is that both the solution matrices and the bound matrices are made from the same system matrices , and are made from the same flow coefficients . So, if is random, so is the bound , and if is structured, so is .

Also, I’m sure one can always produce simple 3×3 matrices that support any argument. The challenge is to make your argument with matrices that represent the periodic 3D incompressible Navier-Stokes equation.

21 April, 2008 at 10:56 am

285G, Lecture 7: Rescaling of Ricci flows and kappa-noncollapsing « What’s new[...] to begin the surgery program discussed above. (In contrast, the main reason why questions such as Navier-Stokes global regularity are so difficult is that no controlled quantity which is both coercive and critical or subcritical is known.) The [...]

24 April, 2008 at 12:15 pm

Lower and Upper Bounds for Periodic Navier-Stokes Flows « Smooth Chaotic Fluids[...] differential equation also has a general noncommutative time series solution. These all have been recognized by the world-renown expert Terry Tao (read in his blog’s comments). Terry also happens to be a very decent and fair person in my [...]

9 May, 2008 at 3:41 am

Dave Purvance: Terence et al,

I seem to have messed up the links.

My proposed solution to the periodic 3D Navier-Stokes problem will always be arXived here. My WordPress blog discussing this solution will always be here.

Your criticisms and comments are always welcome.

30 May, 2008 at 4:29 am

Dave Purvance: Dear Terence,

Thanks for acknowledging in your 22 March 2008 comment above my commutative and noncommutative solutions to my periodic Navier-Stokes matrix differential equation

, (1)

having initial condition .

In the same comment, though, you misunderstood the pair of matrix polynomial inequalities

(2)

I used to put bounds

(3)

on the general noncommutative solution coefficients . I take full responsibility for this misunderstanding because I didn’t really prove (2). My arXiv paper now does.

At the time of your comment I denoted in (2) as and in (3) as . Perhaps this notation was the source of the misunderstanding. Whatever the case, I am sure you understand that this is an important issue and I ask you for the opportunity to counter your criticism of (2).

The inequalities in (2) do not mean that a commutative flow of one initial condition somehow can control another noncommutative flow of another initial condition .

Matrix polynomials are defined by

. (4)

Bound coefficients are the Taylor time expansion coefficients of the matrix exponential

(5)

assuming the same noncommutative as in (4) and resulting in matrix polynomials with

. (6)

Again, as noted in (4) and (6), and are functions of the same and therefore both are a function of the same initial condition . The difference between and is that is a solution to (1) because assumes commutative . is not a solution to (1) because assumes noncommutative . They both, however, are stable functions of independent of their commutative properties, and simplify to the same polynomial when commute.

Inequalities (2) are generalizations of a pair of equivalent commutator inequalities, which for illustration, I will derive here for the two simple matrix monomials making up the matrix commutator

. (7)

is the first monomial and is the second. The reasoning for putting bounds on (7) is the same as the reasoning used to derive (2). The Navier-Stokes matrix polynomials and , however, are much more complicated than the monomials and used here. So to get the full derivation of (2), you will have to read my paper.

By the definition of any matrix norm

. (8)

Adding the two halves of (8) gives

(9)

Again by the definition of any matrix norm

. (10)

Adding (9) and (10) and simplifying gives

. (11)

Doubling both halves of (8) and subtracting the result of each from (11) gives

. (12)

which upon rearrangement yields the desired inequalities

. (13)

So actually, inequalities (13) are just two operations away from the well known matrix commutator inequalities (11). And again, the reasoning for deriving the Navier-Stokes (2) parallels the reasoning provided in (7)-(13).

Inequalities (13) can be easily checked numerically for arbitrary square matrices . I used random complex matrices and MATLAB’s “gallery” function to mix and match different types of . The experiments ranged in dimension from 2 to 2000 and the inequalities (13) never failed. And, of course, as detailed in my paper, I have also numerically checked the Navier-Stokes (2) through time order 10 using both random and structured initial flows. These checks have never failed either.
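For what it's worth, the commutator bound that (11) presumably refers to, ||AB - BA|| <= 2 ||A|| ||B|| for any submultiplicative matrix norm, follows immediately from the triangle inequality, so a random check in the style of the MATLAB experiments described above can never fail. A sketch in numpy (this reproduces only the standard bound, not the paper's inequalities (13), which are not reconstructed here):

```python
import numpy as np

rng = np.random.default_rng(1)

def commutator_bound_holds(n):
    """Check ||AB - BA|| <= 2 ||A|| ||B|| (spectral norm) for random complex A, B."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    nrm = lambda M: np.linalg.norm(M, 2)
    # Triangle inequality plus submultiplicativity:
    #   ||AB - BA|| <= ||AB|| + ||BA|| <= 2 ||A|| ||B||
    return nrm(A @ B - B @ A) <= 2 * nrm(A) * nrm(B) + 1e-9

print(all(commutator_bound_holds(n) for n in (2, 10, 100, 500)))  # True
```

Since this inequality is a theorem, a passing numerical check carries no evidence about the more delicate inequalities (13), which is essentially Jonas's point above.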

Sorry for the confusion and thanks for letting me clear it up.

5 June, 2008 at 7:39 pm

lib.mexmat.ru/forum: Tao (http://terrytao.wordpress.com/2007/03/18/why-globa…..ment-26446) wrote:

At present, all known methods for obtaining global smooth solutions to a (deterministic) nonlinear PDE Cauchy problem require either

1. Exact and explicit solutions (or at least an exact, explicit transformation to a significantly simpler PDE or ODE);

2. Perturbative hypotheses (e.g. small data, data close to a special solution, or more generally a hypothesis which involves an somewhere); or

3. One or more globally controlled quantities (such as the total energy) which are both coercive and either critical or subcritical.

Or perhaps it is worth asking the author himself about this, using the comment box of his blog at the address given in the quote. It may be that he, like you, does indeed tacitly assume item 1 in mandatory combination with item 3, although this does not seem to follow from the quoted passage at all. While we are at it, one could also invite Tao and his colleagues in this discussion to state their position on the statements of L.D. Landau and M. Kline, about which we have argued so much here.

12 June, 2008 at 9:23 pm

Shock waves « Hydrobates[...] If viscosity is included in the description of fluids then the Euler equations are replaced by the Navier-Stokes equations. There is reason to suspect that in this case shock waves are smoothed out and a smooth initial configuration remains smooth in the course of the evolution, for all time. There are simple examples where this can be seen but there is still no global regularity result for Navier-Stokes (and no counterexample). The Clay Foundation has offered a prize of one million dollars for the solution of this problem in either direction. The fact that the prize has not yet been collected is a sign of the difficulty of the problem. For a discussion of this question and its broader mathematical significance I recommend the excellent account of Tao. [...]

16 July, 2008 at 7:28 am

Anonymous: I am looking for more information about the use of Strichartz estimates in solving wave equations. Can anyone provide me with some references? I am not looking at research papers yet; I want to grasp the main ideas first. The info on DispersiveWiki is not particularly useful, and I don't see anything relevant except a lot of research papers, which I want to avoid at this first stage. Thanks.

17 July, 2008 at 2:42 pm

Terence Tao: Dear Anonymous,

You might try Chris Sogge’s book on nonlinear wave equations, or my own on nonlinear dispersive equations.

17 July, 2008 at 4:50 pm

Anonymous: Thanks. I'll have a look at both.

19 July, 2008 at 3:54 pm

Global existence and uniqueness results for weak solutions of the focusing mass-critical non-linear Schrödinger equation « What’s new[...] smooth solutions to the Navier-Stokes equation is one of the Clay Millennium problems that I have blogged about before, but global existence of weak solutions is quite easy with today’s technology and was first [...]

22 July, 2008 at 11:19 am

Claes Johnson: A resolution of the Clay Navier-Stokes problem is proposed in the article

Blowup of Incompressible Euler Solutions, published online July 19 2008

in BIT Numerical Mathematics.

22 July, 2008 at 7:20 pm

Anonymous: To Claes Johnson:

I don't think this paper can be called a "resolution"… it is just some numerical evidence. For incompressible Euler, whether there is finite-time blow-up or not has long been a controversial question. Both sides have numerical "evidence"…

For incompressible Navier-Stokes, physicists tend to believe there is no finite-time blow-up.

23 July, 2008 at 12:21 am

Claes Johnson: To Anonymous:

This discussion is important. I hope e.g. Terence will take part.

You express a common misconception, addressed in the article, that computational evidence is not mathematical evidence. The central concepts are wellposedness and turbulence, and in this context a computational solution is as much a solution as anything. Please read the article with an open mind. Looking forward to your comments.

Claes

29 July, 2008 at 6:34 pm

Anonymous: Claes,

The issue is that the NS equation is highly numerically unstable, so simulations showing blow up tell us very little; the blow up may be an artifact of the simulation rather than an indication that NS actually blows up.

[different anonymous from 7:20pm above]

29 July, 2008 at 11:21 pm

Claes Johnson: To Anonymous 2:

You are right that an unstable numerical scheme for a stable problem can give artifacts. But our computational Euler solution is validated by a posteriori output error estimation, using Euler residuals multiplied by stability factors obtained by solving dual linearized problems. A computed Euler solution is thus a representative solution and not an artifact. Since the computed solution shows blowup and is representative, there is blowup.

Correspondence (with e.g. Terence) on the blowup problem is published at http://www.nada.kth.se/~jhoffman/pmwiki/pmwiki.php?n=Forum.Clay

Claes

30 July, 2008 at 5:20 am

Anonymous: Claes, take a look at [[Goodstein's theorem]] in Wikipedia for how fast a function can grow and still be considered bounded in a mathematical sense. Double exponential is not bad at all. Triple or quadruple or 57-times-iterated exponential is only a little bit worse. Then you get functions like Ackermann's, which are worse than the n-times-iterated exponential for arbitrarily large n, but for which you can still write down a formula in terms of induction on more than one variable. And then there are functions that grow so fast that no expression for them can even be written down; one can only prove indirectly that they are still finite. Goodstein's theorem says that starting with any n, if you iterate a certain calculation for enough steps you'll eventually reach zero. But it turns out (proved by Kirby and Paris) that, as a function of n, the number of steps before you hit zero is so large that it's impossible to write down a formula for it. Yet that it's always finite (the process never goes on without bound) is a remarkable fact of pure mathematics, even though, for values as small as n=4, f(n) could not be written down as a decimal number even if you could engrave a billion digits on each electron in the physical universe.
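Goodstein sequences are easy to play with. A minimal sketch (my own implementation of the standard hereditary-base bump, not tied to any particular source): write n in hereditary base b, replace every b by b + 1, subtract one, and repeat with base b + 1:

```python
def bump_base(n, b):
    """Rewrite n in hereditary base-b notation and replace every b by b + 1."""
    if n == 0:
        return 0
    total, e = 0, 0
    while n:
        n, d = divmod(n, b)
        total += d * (b + 1) ** bump_base(e, b)   # exponents are rewritten too
        e += 1
    return total

def goodstein(n, steps):
    """First terms of the Goodstein sequence starting at n (base 2)."""
    seq, b = [n], 2
    for _ in range(steps):
        if n == 0:
            break
        n = bump_base(n, b) - 1
        b += 1
        seq.append(n)
    return seq

print(goodstein(3, 10))   # [3, 3, 3, 2, 1, 0]: hits zero almost immediately
print(goodstein(4, 5))    # [4, 26, 41, 60, 83, 109]: already growing steadily,
                          # yet provably returns to zero eventually
```

Starting at 4, the sequence keeps growing for an unimaginably long time before the eventual descent to zero kicks in, which is exactly the kind of "bounded but astronomically slow" behaviour the comment describes.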

A more practical example: suppose you have a computer program with array references like a[x*y + z]. It is provably impossible in general for a compiler to tell, without running the program, whether there will ever be a subscript-out-of-range error. Suppose instead the references are like a[37*y + 19*z + 3*w]; that is, you never multiply two variables together in a subscript, only add them or multiply them by constants. Then compile-time checking is possible (this is called Presburger arithmetic). The running time formula is a tower of iterated exponentials in the size of the expressions, which is absolutely intractable in the worst case, but it's a remarkable theoretical result that it's decidable at all. And it turns out that for most real practical cases, the worst-case exponential tower is avoided. And even without that, the decidability is of interest to pure mathematicians. Maybe these results are not of interest to anyone else, but pure mathematicians went into that field precisely because they're into that sort of thing. It's not up to anyone else to talk them out of it.

Who knows, maybe global stability will involve growth functions like that. If so, the Clay math people want to know about it. In a physical sense it’s as bad as a singularity but from a pure math point of view it’s not the same at all. And as Terence and others have said, reformulations of the problem may be very interesting from a physics perspective, but the pure math question posed as a Clay problem is very specific.

30 July, 2008 at 5:27 am

Anonymous: Re the above: an amusing question occurs to me for the math folks here (I'm just a computer guy). What happens if someone turns in a proof of NS global stability that is rigorous but depends on something like a large cardinal axiom? Is that even imaginable, mathematically speaking?

30 July, 2008 at 7:48 am

Claes Johnson: To Anonymous:

You touch an essential point: there is a big difference between 100 and 10^100 = googol. If you do not make this distinction in a quantitative analysis, it is not an analysis. I am surprised to see that some (pure) mathematicians do not make this distinction, and I wonder what mathematical tradition it can reflect.

Claes

8 August, 2008 at 5:39 am

westy31: Hi turbulence friends,

I happened to be thinking about the Navier Stokes equation and turbulence, when I found this discussion. I have some questions and comments.

I recently put a free n-dimensional Navier Stokes simulator on the web at:

http://www.xs4all.nl/~westy31/CellFlow/CellFlow.html

One question I am working on is how turbulence scales in n dimensions. It was remarked by Terence in this discussion that 3D turbulence is supercritical, while 2D turbulence is critical. This I would like to understand.

First, I will comment on scaling. I think the scaling

can be considered a special case of scaling behaviour that is most easily understood in terms of dimensional analysis. If you are given data about a fluid flow without mention of the units used, you would not be able to deduce the correct units. This is because classical physics has a symmetry corresponding to the fact that there are no natural length, time, and mass scales. The scaling law for a quantity is easily written down by just writing out the dimension of the quantity:

etc,

The scaling is a special case: .

Deriving an energy spectrum from dimensional analyses only, was just what Kolmogorov did in his 1941 paper. The outcome, which is easy to check yourself, is
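As a cross-check, the exponents in Kolmogorov's law can be recovered mechanically from the dimensions alone; a minimal sketch (solving for a, b in E(k) = C ε^a k^b, with the dimensional bookkeeping spelled out in the comments):

```python
from fractions import Fraction as F

# Dimensions written as length/time exponents, per unit mass:
#   [E(k)] = L^3 T^-2   (energy spectrum: energy per unit mass per wave number)
#   [eps]  = L^2 T^-3   (energy dissipation rate per unit mass)
#   [k]    = L^-1
# Matching [eps]^a [k]^b = [E(k)] gives two linear equations:
#   length:  2a - b = 3
#   time:   -3a     = -2
a = F(2, 3)      # from the time equation
b = 2 * a - 3    # from the length equation
print(a, b)      # -> 2/3 -5/3
```

So dimensional analysis alone forces the -5/3 exponent, independently of the space dimension, which is exactly the puzzle raised below.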

See also the Wikipedia page:

http://en.wikipedia.org/wiki/Turbulence

This outcome is independent of the dimension of the flow (note that k stands for the norm |k|, not the vector), which seems to contradict the claim that 3D and 2D have a different criticality. Personally, I have a hard time believing that the energy spectrum does not depend on the space dimension, so I would welcome any arguments to that end. I heard (see Stephen's comment) that in 2D you have Kraichnan's minus-3 power law, but it is unclear to me how this relates to Kolmogorov's theory. Clarification would be welcome here.

While thinking about the -5/3 law, I decided it needs to be modified anyhow, to meet some consistency requirements. If you try to integrate the dissipation spectrum (proportional to ν k² E(k)) over k, to check that you retrieve the total dissipation rate ε, you run into a diverging integral. To solve this, I propose adding a double exponential cut-off; note that the resulting formula is still scale invariant.

The double exponential cut-off arises because for wave numbers higher than the Kolmogorov wave number k_d = (ε/ν³)^(1/4), dissipation takes over. With the ultraviolet exponential cut-off, we can integrate the dissipation and get a consistent answer.
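A quick numerical version of this consistency check (the single exponential cut-off exp(-k/k_d) below is an illustrative stand-in for the proposed cut-off, whose exact form is not reproduced here; constants are dropped throughout):

```python
import math

def dissipation_integral(kmax, k_d=None, k0=1.0, n=20000):
    """Integrate the dissipation spectrum ~ k^2 * E(k) from k0 to kmax
    by the trapezoid rule, with E(k) = k^(-5/3) (constants dropped),
    optionally damped by an exponential cut-off exp(-k/k_d)."""
    h = (kmax - k0) / n
    total = 0.0
    for i in range(n + 1):
        k = k0 + i * h
        f = k ** (1.0 / 3.0)              # k^2 * k^(-5/3) = k^(1/3)
        if k_d is not None:
            f *= math.exp(-k / k_d)
        total += (0.5 if i in (0, n) else 1.0) * f * h  # trapezoid weights
    return total

# Without a cut-off the integral grows like kmax^(4/3), i.e. it diverges:
bare_100, bare_1000 = dissipation_integral(100), dissipation_integral(1000)
# With a cut-off at k_d = 10 the integral converges to a finite value:
cut_100, cut_1000 = dissipation_integral(100, k_d=10), dissipation_integral(1000, k_d=10)
```

Raising the upper limit from 100 to 1000 multiplies the bare integral by roughly 10^(4/3) ≈ 21, while the cut-off version barely moves, which is the point of the proposal.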

Next, try to integrate E(k) to get the total turbulent kinetic energy. Again, we get infinity, but now because of the infrared limit. This is solved by an abrupt low wave number cut-off k_0, which is justified because we have a fluid in a finite container.

Interestingly, we can now form a dimensionless number out of the ratio between the high and low wave number cut-offs: Re = (k_d/k_0)^(4/3).

This dimensionless number can be interpreted as the Reynolds number, if we insert the Kolmogorov velocity for the velocity, and 1/k_0 as the length scale, in the usual expression for the Reynolds number.

The idea of this Reynolds number makes sense. After all, a laminar flow is quite smooth, and this should somehow be reflected in the energy spectrum formula. The formula I propose automatically gives a smooth (= steeply declining in wave number) energy spectrum if the Reynolds number is low.

Because of the viscous cut-off, I personally believe that the Navier-Stokes equation does not blow up, i.e. has zero energy at infinite wave numbers.

Gerard

23 August, 2008 at 6:48 am

PDEbeginner: Dear Prof. Tao,

This article is very nice and very helpful for me in understanding the lecture on the NSE that I attended last term. As for the supercritical and critical business (I didn't understand it in the lecture), I was wondering if my present understanding is right:

When the energy transfers to finer and finer scales, the velocity will concentrate on a smaller and smaller region of space. This will possibly make the critical and subcritical norms of the velocity blow up.

On the other hand, some people work on partial regularity of the solutions (suitable solutions) and consider the Hausdorff dimension of the set of singular points. I was wondering if this is helpful for solving the problem.

Thanks a lot!

All the best,

PDEbeginner

24 August, 2008 at 9:15 am

Alberto G. P.: Hi,

The Strings 2008 conference was held at CERN this week. The talk by Minwalla was about 'Nonlinear Fluid Dynamics from Gravity' and seemed very interesting to me:

(pdf here: http://indico.cern.ch/getFile.py/access?contribId=24&resId=0&materialId=slides&confId=21917)

I have read Terence's post and I think that he didn't include this technique in Strategy 3. Unfortunately I am not an expert on this subject; more exactly, I know very little about physics and mathematics. Nevertheless I am very interested in the following questions:

1) Could we learn something about the Navier-Stokes equations (NSE) by means of the proposed duality between the NSE and the EFE (Einstein Field Equations)?

2) Could we discover analytic EFE solutions in AdS space and then obtain their dual NSE solutions?

3) Is the global regularity problem for the EFE resolved? If so, could the proposed duality be used to resolve the global regularity problem for the NSE?

Can somebody help me? I asked about Minwalla's talk on some physics blogs but nobody replied. Perhaps I am asking senseless questions :-) sorry.

25 August, 2008 at 2:25 am

Pedro Lauridsen Ribeiro: Hi Alberto,

The aforementioned transparencies are a bit hard to read, as some of the math symbols there are truncated. However, I can make some comments on your questions proper.

First, let's set the stage with a bit more detail. Minwalla's proposed duality uses a particular instance of the Maldacena-Witten AdS/CFT correspondence, which relates a particular class of solutions of Einstein's equations with negative cosmological constant in d+1 dimensions which possess "locally" a conformal infinity of anti-de Sitter (AdS) type with solutions of "generalized" Navier-Stokes equations in d dimensions. For that, he uses the Fefferman-Graham asymptotic expansion of the conformally rescaled bulk metric in a collar neighbourhood of conformal infinity, whose coefficients satisfy, due to the Einstein equations, a family of recursion relations which constitute a non-linear Fuchsian system (i.e., a system of differential equations whose coefficients present isolated algebraic singularities in one or more coordinates, here the distance to the boundary in geodesic normal coordinates for the conformally rescaled metric). Such a system uniquely determines the asymptotic expansion of the bulk metric near infinity (hence, its _long-distance_ or _long-wavelength_ behaviour) from the boundary metric and a symmetric rank-2 tensor on the boundary called the boundary stress-energy tensor. When conserved and traceless, this tensor reduces to the rescaled electric part of the bulk Weyl tensor, but this is not always the case: in general, the divergence and the trace of the boundary stress-energy tensor give rise to conformal invariants at the boundary which can often be computed exactly, due to the Fefferman-Graham equations. It's the inhomogeneous conservation law of the boundary stress-energy tensor that gives rise to the "generalized" Navier-Stokes equation of Minwalla, for a particular class of bulk metrics.

Now, about your questions (in some places without pretenses to rigour):

1.) The AdS/CFT correspondence exchanges large scales in the bulk with small scales in the boundary, but knowing the behaviour of the boundary conservation law in the large involves geometric information deep in the bulk. The class of solutions of the bulk Einstein's equations which give rise to the "generalized" Navier-Stokes equations has a small-codimension singular locus (a "black brane") deep inside. In this case, one may expect to obtain blowup for these "generalized" Navier-Stokes equations, but Minwalla presents only circumstantial evidence for the conjecture that the "generalized" Navier-Stokes equation always arises from the procedure above; settling this involves proving global existence and uniqueness for the Fefferman-Graham system for the bulk metric.

2.) Using the Fefferman-Graham formulae for the boundary stress-energy tensor in terms of the bulk metric, one can obtain the actual form of the coefficients of the "generalized" Navier-Stokes equations, but not their actual solutions, unless the form of the resulting equations is, of course, sufficiently simplified. Otherwise, the equation is as hard to solve as ever.

3.) When the boundary metric and the boundary stress-energy tensor are real-analytic, the Fefferman-Graham system admits a unique solution and, moreover, the asymptotic expansion actually converges in a sufficiently small collar neighbourhood of the conformal infinity (this expansion is a power series in the geodesic distance to the boundary if and only if the boundary stress-energy tensor is conserved and traceless; otherwise the expansion involves logarithmic terms as well). This was proved by Kichenassamy. As far as I know, almost nothing is known in general about the maximum domain of convergence. For non-analytic boundary data, there seems to be so far no sufficiently strong estimate of the solution in terms of the boundary data. This question was studied by Michael T. Anderson in several papers, almost all of them in Euclidean signature, because then one can use techniques from nonlinear boundary value problems (Leray-Schauder degree, etc.); but the problem is quite different from a PDE perspective, and Wick rotation (to move back and forth between Lorentzian and Euclidean signature) is a very tricky procedure in curved spacetimes, which in general cannot be done globally, especially if the metric is not stationary.

Only very few of Anderson's papers deal with Lorentzian signature. In one of these, he shows that a sequence of Cauchy data for geodesically complete, asymptotically AdS spacetimes (i.e., subject to the boundary condition that the conformal infinity should be that of AdS, or, more precisely, of its universal cover) which become asymptotically stationary at infinite times actually gives rise to a _globally_ stationary spacetime. This is rather different from the asymptotically flat case, since nonstationary (albeit sufficiently small) perturbations of the Cauchy data for Minkowski spacetime give rise to nontrivial, smooth and geodesically complete spacetimes, as proven by Christodoulou and Klainerman. The difference is that the boundary conditions needed for the Einstein equations in the case of a negative cosmological constant (as here the "unperturbed" solution, i.e. AdS, is not globally hyperbolic) do not let the nonlinear effects disperse at large times; they are focused back in an "almost-periodic" manner. The result of Anderson gives evidence for the possibility that AdS spacetime is globally nonlinearly unstable: according to his result, nonstationary perturbations will eventually lead to geodesic incompleteness. But here there's a caveat: to prove his result, Anderson needs a "unique continuation" property for the rescaled linearized Einstein equations across the boundary. Here he borrows his intuition from the Euclidean case, where such a property holds even in the nonanalytic case. However, the linearized Einstein equations are of wave type (modulo some gauge fixing), and in the nonanalytic case (which is even beyond the reach of Kichenassamy's approach) there are counterexamples to unique continuation, obtained by employing nonanalytic perturbations of the coefficients of lower order (Cohen, Hörmander, Alinhac-Baouendi) if, in particular, the boundary is totally geodesic (i.e., gliding: null geodesics tangent to the boundary always have infinite order contact), which is _always_ the case for conformal infinities. A proof of global nonlinear instability of AdS would probably involve proving that such perturbations always lead to geodesically incomplete bulk metrics, which might be the case for all we know, but it remains an open (albeit perhaps solvable in this form) problem.

Now, to move from this back to Minwalla's "generalized" Navier-Stokes equations, one needs estimates relating boundary quantities to bulk quantities (which is also needed if one wants to extend Kichenassamy's existence result to the nonanalytic case and prove, on the other hand, the _stability_ of AdS, which does hold locally in time in 4 dimensions, as proven by Friedrich). Such estimates exist in the case of Euclidean signature, but it's hardly likely that they carry through to Lorentzian signature, especially in nonanalytic cases. Still, since the boundary stress-energy tensor involves the Weyl tensor, and so do the estimates obtained by Christodoulou and Klainerman in their proof of the global nonlinear stability of Minkowski spacetime, there may be such estimates in spite of that.

You see, it's not that your questions are senseless, it's just that they are too hard, but nevertheless always worth asking… ;-)

Cheers,

Pedro

25 August, 2008 at 8:31 am

Pedro Lauridsen Ribeiro: Ah, one more remark on Alberto's third question: for large asymptotically flat data, Christodoulou recently proved that, under some reasonable technical assumptions, solutions of Einstein's equations develop trapped surfaces in finite time, which implies gravitational collapse by Penrose's singularity theorem. For those brave souls who wish to venture through the proof (594 pages!!!), the preprint can be found at http://arxiv.org/abs/0805.3880.

11 August, 2011 at 10:38 am

Anonymous: Klainerman and Rodnianski found a simplification of Christodoulou's proof; their paper is 117 pages long.

25 August, 2008 at 9:47 am

Alberto G. P.: Let me make some comments about what I have understood.

We can start with a bulk metric that defines an AdS space. The bulk metric is an exact solution of the EFE with negative cosmological constant. There is a unique asymptotic expansion of the bulk metric, valid only in a certain region of AdS (near infinity). The coefficients of this expansion are used to build a new tensor defined on the boundary of AdS (the stress-energy tensor). If some conservation law is imposed on this tensor, then the generalized NSE arise. But although we know the stress-energy tensor, we don't know the dual generalized NSE solutions (i.e. we haven't got the solutions yet).

The global regularity problem for the EFE is open. Moreover, the AdS/CFT correspondence cannot be used to solve global regularity for the generalized NSE, because nobody knows how to transform the bulk quantities into boundary quantities.

As thanks, let me conclude with a quotation from Don Quixote:

“nunca fuera caballero
de damas tan bien servido
como fuera Don Quijote
cuando de su aldea vino:
doncellas curaban dél,
princesas de su rocino”

(“never was a knight by ladies so well served as was Don Quixote when from his village he came: damsels cared for him, princesses for his nag”)

;-).

25 August, 2008 at 5:08 pm

Terence Tao: Dear PDEBeginner,

You are correct that the difficulty in the Navier-Stokes regularity problem is in preventing passage of energy to higher scales, which will cause subcritical norms to blow up (and critical norms to either blow up, concentrate, or otherwise develop a singularity). Thus far, the partial regularity results that are known do not prevent this blowup from occurring. To oversimplify, the above blowup scenario is consistent with concentration on any set of (parabolic) dimension 1 or less, but not on dimension higher than 1. And indeed, the known partial regularity theory, in particular the result of Kohn, Caffarelli, and Nirenberg excludes singularities with dimension higher than 1. (I am not sure if singularities of dimension exactly 1 are known to be excluded. Given that in two dimensions no singularities occur, this suggests that 1-dimensional singularities do not occur in three dimensions, though this is nowhere near a rigorous argument.)

26 August, 2008 at 2:39 pm

westy31: Hi again,

Question:

Would the Energy spectrum contain enough information to decide if the equation blows up?

For example, say the energy spectrum is k^-5/3. This would mean, I presume, that the velocity does not fluctuate infinitely fast in space, because high frequency components of the field get progressively smaller. Not so for higher derivatives of the field, though, since these get multiplied by k each time you differentiate.

As a second example, take k^-5/3*exp(-k/k_c).

This one stays small for large k, for all derivatives. It would seem to me that this guarantees smoothness.

Gerard

26 August, 2008 at 3:50 pm

Terence Tao: Dear Gerard,

Yes, smoothness of the velocity field is equivalent to the energy spectrum decaying faster than any power of k as k → ∞. So if the Kolmogorov power law managed to extend itself to arbitrarily high frequencies, then regularity of Navier-Stokes would break down; but I believe the expectation is that the viscosity term in Navier-Stokes should cause one to leave the Kolmogorov regime at sufficiently fine scales. (The derivation of the Kolmogorov law implicitly assumes that energy is spread evenly throughout space at each frequency scale, and that the rate of energy flow remains constant, whereas it is known that blowup only occurs if the energy concentrates on a low dimensional set, and if the rate of energy flow becomes infinite. So blowup is not expected to be caused by the same mechanism that gives the Kolmogorov law, or at least the two phenomena should occur at different scales.)
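The link between smoothness and rapid spectral decay is easy to see numerically; a minimal sketch (the two test functions below are illustrative choices, not taken from the discussion):

```python
import cmath
import math

def fourier_coeff(f, k, n=4096):
    """k-th Fourier coefficient of a 2*pi-periodic function, computed by
    the trapezoid rule (which is spectrally accurate for periodic integrands)."""
    s = sum(f(2 * math.pi * j / n) * cmath.exp(-2j * math.pi * k * j / n)
            for j in range(n))
    return s / n

smooth = lambda x: math.exp(math.cos(x))  # real-analytic: coefficients decay faster than any power of k
kinked = lambda x: abs(math.sin(x))       # continuous but with corners: coefficients decay only like 1/k^2
```

At k = 20 the smooth function's coefficient is already below 1e-10, while the kinked function's coefficient is still of size 2/(π·399) ≈ 1.6e-3: the corners keep energy at high frequencies, in miniature the same phenomenon as a singularity keeping energy at fine scales.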

28 August, 2008 at 1:26 pm

On the Convergence of Periodic Navier-Stokes Flows « Smooth Chaotic Navier-Stokes Flows[...] On the Convergence of Periodic Navier-Stokes Flows Filed under: Mathematics — DavePurvance @ 2:26 pm It’s not that the 3D spatially periodic, incompressible Navier-Stokes equation can be posed as a nonlinear matrix differential equation. It’s not that this matrix differential equation has a smooth and bounded matrix exponential solution, found when the flow in this differential equation is Taylor expanded in time and the resulting system matrices are all commutative. And, it’s not that this time expanded matrix differential equation also has a more general time series solution. It is called the noncommutative solution because its coefficients involve noncommutative matrix polynomials. This matrix differential equation and its commutative and noncommutative solutions have been recognized by the world-renown expert Terence Tao. Read Terence’s 22 March 2008 comments under his blog Why global regularity for the Navier-Stokes is so hard. [...]

28 August, 2008 at 10:26 pm

Claes Johnson: Dear Gerard,

It is necessary to give a quantitative meaning to “smooth”, which means that the size of Sobolev norms must be taken into account. If the C^1 norm of a function is a number of moderate size, say 10^2, you can say that the function is C^1-smooth, but if the C^1 norm is of size 10^100 = googol, then it is not C^1-smooth. The reason to introduce Sobolev spaces with norms is to measure norms, and to measure norms is to make a distinction between 10^2 and 10^100. It is very strange that this aspect seems to be completely lost in the discussion of the Clay problem. In order to answer whether a smooth solution exists, you first have to define what “smooth” is. This is not done now, and no progress is being made. So, Terence: what is your definition of “smooth”? Is the size of Sobolev norms not relevant? If not, how do you make a distinction between a smooth and a non-smooth function? What, then, is the meaning of Sobolev norms?

15 September, 2008 at 10:23 am

Bill Layton: Hi All,

This is a very interesting discussion that I've enjoyed reading!

Might helicity play a key role in NSE uniqueness / non-uniqueness, and does it deserve much more study?

It is the other key invariant of the 3d Euler equations and a control quantity for the 3d NSE. Of course it is much harder than enstrophy in 2d because it has 2 signs and dissipation reduces helicity mode by mode rather than by global magnitude.
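For reference, the helicity alluded to here has the standard definition (stated for completeness, not taken from the comment):

```latex
H(t) = \int_{\mathbb{R}^3} u(t,x) \cdot \omega(t,x) \, dx, \qquad \omega = \nabla \times u .
```

It is conserved by the 3d Euler flow and, unlike the energy, its integrand is not sign-definite, which is the two-signs difficulty mentioned above.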

The plan is one that surely many have thought about. If the NSE nonlinearity is zero (or sufficiently small) uniqueness is trivial. If the helicity is zero or sufficiently small, then the flow is also unique and smooth (a recent result of Berselli). The “only” problem is then to interpolate between these two cases locally.

This is surely a technically intricate problem. But, to me, it seems like one with more hope of success for the first few steps than squeezing a “new” a priori bound on ||grad u|| from what are in many ways essentially the same estimates that Leray derived in his seminal papers.

From the computational point of view (and computational insight is very useful, since it's generally easier to prove something that is true and understood at some level), helicity is a scalar, and so much easier to visualize.

So, …

Any ideas on this path among the many possibilities??

Bill

26 September, 2008 at 5:09 am

David Collins: Dear Terence,

I am a physics layman and a math undergraduate from Germany at the ODE level. I have a bulk of wild ideas, some probably pursued before and many false. Some at least seem to fit some of the problems outlined above.

Smooth … ? … Discrete

a) Instead of only using discrete models in order to find smooth limit tactics, maybe the Navier-Stokes equations lead in the other direction, to a ‘self-quantization’. A superposition of data ensembles might ‘wash out’ the many probable topological knots of vortex filaments (‘Shaking strings in a box leads to knot formation’, http://www.pnas.org/cgi/content/abstract/104/42/16432) (long shots 1 & 5)

b) Velocity field modeled as a rapid growth phenomenon.

A toy model which would be less rigid and would allow a bit more complexity than triangular or cubic lattices, and that appears frequently in nature, could be a recently further generalized model of primordia growth, the Snow map (http://maven.smith.edu/~phyllo/Assets/pdf/snow.pdf), which allows continuous diffeomorphism via irregular rhombic tilings (which are periodic weak attractors) between disjoint discrete lattices (strong asymptotic attractors). In its current form it is of course only an analog of laminar flow of molecule ensembles. But there is a striking interplay between random divergence angles in local topology & pseudorandomness in global topology. What happens when taking the limit in this model? (long shots 2, 5, 6)

c) With the unproved assumption that nature follows a ‘least crypticity principle’, and thus the simplest mathematics possible, one could heuristically ask: ‘If a smooth continuous solution to fluid dynamics does exist, why does nature bother with quanta (x > 0)?’

d) Maybe the mathematical superposition of all possible solutions leads to temporal and spatial aliasing effects, thus leading to quasilattice geometry. Thus blowups or quasi-blowups in Ackermann/Goodstein scenarios. (Very wild guess)

Dimensional analysis shows that fluid quantities have analogs in Planck's laws:

Dynamic Viscosity n (eta)[kg/ms] ~ Spectral Energy Density u

Surface Tension s (sigma) [kg/s²] ~ Spectral Radiance per Frequency I(f)

Pressure Change dP/dt [kg/ms²] ~ Spectral Radiance per Wavelength I(r)

Dimensional analysis also shows possible uncertainties or complements:

Dyn. Viscosity & Volume: n * V > h

Kinematic Visc. & Mass v * m > h

(measuring the volume of superfluid He-3 with exact viscosity n = 0?)

Again a very wild guess:

A ‘bubble’, or more precisely a ‘spherical Hill vortex’, might be the missing link between ‘hollow black body’ and ‘fluid’, having n, s, dP/dt aka u, I(f), I(r).

h) A vector that keeps appearing in this dimensional analysis and in Navier-Stokes is w := 1/rt, or [1/ms]. For instance h = mw, and Du = w in the viscous term.

i) First attempt to interpret w: this could be interpreted either as a rate of curvature change (dK/dt with K = 1/r) or as a change of spatial frequency. A 3-d Snow model might show which interpretation is appropriate, and when, via the parameters D (spatial frequency, periodic attractors), the ‘crookedness’ of the primordial front (local discrete curvature), i.e. the divergence angles, and/or the mean parastichy angle (global curvature). Actually a common formulation of NS has the (‘fluid kinetic energy’) dimensions

F/V = mw² (Navier Stokes)

n = mw (Dyn. viscosity as ‘fluid impulse’)

U ~ mw (Spect. Radiance per frequency)

P= mw/dt (Pressure as ‘fluid force’) etc.

j) Second attempt: after asking my calculus professor about the nature of this ‘dual’ vector, she gave me the hint to look up the Poisson summation formula, which relates a summation over the numbers x_n = n to a summation over the numbers y_n = 1/n.

This in turn can be generalized via Pontryagin duality to dual lattices, and, more generally via the Selberg trace formula, leads directly into the heart of pseudo/randomness and the Riemann zeta hypothesis.

Thanks for your time and patience

Dave

28 October, 2008 at 9:53 am

Turbulence « Unruled Notebook[...] If you can handle comfortably some mathematical language, read the excellent post titled “Why Global Regularity for Navier Stokes is Hard” by Terence [...]

15 November, 2008 at 4:29 am

isaac: Dear Terence,

What do you do during your free time? Any hobbies?

2 January, 2009 at 12:53 pm

Liangyu: Hi Terry, if the solution of the Navier-Stokes equation can develop a singularity in finite time, then the Navier-Stokes equation surely cannot be an accurate model of 3D fluid dynamics. Would this imply that some physical assumption in the formulation of the Navier-Stokes equations is wrong?

3 January, 2009 at 10:32 pm

Terence Tao: Dear Liangyu,

The derivation of the Navier-Stokes equations from more fundamental laws of physics (e.g. Newton’s laws of motion) involves a number of simplifying assumptions (most notably, treating a fluid as a continuum rather than as consisting of a large number of atoms). Presumably, if these equations lead to singularity, then these assumptions would break down before the actual singularity is reached. (Note also that it is known that singularity can only occur if the velocity goes to infinity; in practice, of course, relativistic effects (among other things) would kick in long before then.)

29 July, 2009 at 8:43 am

Jonas: Dear Terence,

How does one show that a singularity can only occur if the velocity is unbounded? Who proved this first?

8 August, 2009 at 3:46 pm

Terence Tao: This was first shown in a paper of Beale-Kato-Majda (the result was initially for the Euler equations, but the same arguments apply to the Navier-Stokes equations). I believe the method is basically an energy method, controlling the growth of Sobolev norms via integration by parts and standard inequalities (Hölder, Sobolev, Gagliardo-Nirenberg, etc.).
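For reference, the Beale-Kato-Majda criterion is usually stated in terms of the vorticity ω = ∇ × u: if a smooth solution cannot be continued past a finite time T_*, then

```latex
\int_0^{T_*} \| \omega(t) \|_{L^\infty} \, dt = \infty ;
```

equivalently, finiteness of this integral on [0, T] allows the solution to be continued past T.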

26 January, 2009 at 7:34 am

Michael Nielsen » Doing science online[...] information and insight. To understand how valuable Tao’s blog is, let’s look at a example post, about the Navier-Stokes equations. As many of you know, these are the standard equations used by [...]

27 January, 2009 at 7:14 pm

km: Hi,

In the following link:

http://knol.google.com/k/claes-johnson/the-clay-navier-stokes-millennium/yvfu3xg7d7wt/14#

a comment says that a blowup solution in Navier-Stokes means turbulence and “A turbulent solution is a non-smooth solution, and blowup from smooth data initial data is the same as transition from laminar to turbulent flow”.

So, if the above is true, then the Navier-Stokes equations can be used to describe turbulence only if they do give non-smooth solutions. Is that right?

27 January, 2009 at 10:11 pm

tmr: Is blowup known to occur for the 4-dimensional Navier-Stokes equations?

28 January, 2009 at 9:29 am

Terence Tao: Dear km: Unlike blowup (which means that the solution ceases to be smooth after a finite amount of time), which is a precise mathematical concept, turbulence (which means that energy shifts to higher and higher frequencies, or equivalently to finer and finer scales) does not have a canonical mathematical definition. But, at an intuitive level at least (and oversimplifying somewhat), one can think of blowup as an infinite amount of turbulence: a significant portion of the energy moves into infinitely fine scales within a finite amount of time, causing a singularity. So if there is no blowup for the Navier-Stokes equation, it means that the solution can become somewhat turbulent for a finite amount of time (as is of course seen in real-life fluids), but eventually the viscosity effects dominate and damp out the turbulence. If there were blowup, then the mathematical Navier-Stokes equation would become singular after some time, though in the physical setting what would happen instead is that the solution becomes so turbulent at such fine scales that the simplifying assumptions used to derive the Navier-Stokes equation (e.g. assuming that a fluid is a continuum, rather than being made of molecules) break down, and some other dynamics take over. This does not seem to occur in practice (fluids do become turbulent, but not all the way down to the molecular scale; at some point, viscosity effects assert themselves to remove the turbulent energy from the system). Whether this is always the case is, literally, a million-dollar question.

Dear tmr: No blowup result is known for any higher-dimensional Navier-Stokes equation, as far as I am aware. Heuristically, the higher-dimensional equations should be more unstable and so would have a better chance of exhibiting blowup, but by the same token the instability may make it even harder to rigorously demonstrate blowup in higher dimensions than in lower ones. But perhaps some low-dimensional reduction of a high-dimensional Navier-Stokes equation might be amenable to study. (In the periodic setting, one can view a low-dimensional Navier-Stokes solution as a solution of a high-dimensional Navier-Stokes equation also, simply by extending the solution trivially in some additional spatial dimensions; so in that case, at least, global regularity of high-dimensional NS implies the same for low-dimensional NS.)

28 January, 2009 at 1:10 am

Mathematics, Science, and Blogs « Combinatorics and more[...] and blogs. Michael’s primary example is a post over Terry Tao’s blog about the Navier-Stokes equation and he suggests blogs as a way of scaling up scientific conversation. Michael is writing a book [...]

28 January, 2009 at 4:05 pm

km: Hi Terence,

Thanks for the explanation. So, reality suggests that the Navier-Stokes equation should not give blow-up solutions, if we believe that the Navier-Stokes equation describes reality accurately.

29 January, 2009 at 3:51 am

big bangs spectator: The Laplacian is physically problematic; however, that we haven't observed a blowup doesn't mean it can't occur. E.g., there have been reports of strange new experiments, e.g., a hexagonal pattern in a rotating bucket of fluid. P.S. a quick off-topic perspective statistic: the Physical Review series database gives 4844, 3271, 1997, 299, 804, 564, 44 citations with the words Schrödinger, Josephson, Ginzburg-Landau, Landau-Ginzburg, sine-Gordon, Navier-Stokes, KdV in the title/abstract.

29 January, 2009 at 4:32 am

km: First, I have to say that I am a novice in turbulence and the Navier-Stokes (NS) equation.

Second, I think it is impossible to observe blowup, or an infinite amount of turbulence, in real life. An object in real life with singular properties is probably the black hole. I can think of no other physical phenomenon in reality that possesses singularities like infinite velocity or energy.

Third, your comment suggests the possibility that not observing a blowup or an infinite amount of turbulence in real life doesn't mean the NS equation can't give blowup solutions.

29 January, 2009 at 11:17 am

tmr: Thanks a lot, Prof. Tao, for your reply!

11 February, 2009 at 5:22 am

naresh: Hello friends,

I think everyone here is sticking to the Navier-Stokes equation; now let's think differently: is the Navier-Stokes equation even true? I don't think so, especially in the 3D case. My reasoning is this: as far as I know, in flow over an aircraft wing huge turbulence exists, so first of all we need to understand how viscosity varies in three dimensions with speed. Below Mach 1 it is a very simple case, and we can say viscosity is proportional to speed, but above Mach 1 shock waves begin to form, and that changes things. Hence, if we understand the variation of viscosity with speed, keeping the surface in mind, then turbulence can be solved.

The Navier-Stokes equation is wrong, in my opinion, because it relates velocity with pressure only. What about temperature? The viscosity of air changes with temperature, so we need to consider it: instead of taking the gradient of the kinematic viscosity, we should use the variation of temperature with the space coordinates, expressed in terms of viscosity. The next thing is that the coefficient of viscosity must be considered, but this is not done in the Navier-Stokes equation.

So my opinion is that the Navier-Stokes equation is wrong, and hence unproved. Friends and professors, I've got a solution for this problem, but I don't know how to proceed; please help me.

Thanks a lot for sparing your precious time in reading my opinion. Please reply with your questions and comments; I am eager to hear from you. Thanks again.

27 April, 2014 at 10:05 am

Shree Nidhi: NS equation is wrong! Great discovery!

7 March, 2009 at 7:51 pm

Tarun: Thanks for the great rundown, Professor Tao.

I was reading about the function space BMO (Bounded Mean Oscillation) when I came across a paper which said that the regularity of BMO is ideal for Navier-Stokes solutions. I was wondering why this is, and what role the Fefferman duality between BMO and H^1 does (or maybe should) play in the regularity of Navier-Stokes solutions?

Thanks a lot,

Tarun

7 March, 2009 at 8:52 pm

Terence Tao: Dear Tarun,

The space BMO (or more precisely the space BMO^{-1}) is generally believed to be the weakest function space in which one can plausibly hope to (locally) solve the Navier-Stokes equation by perturbation theory methods, at least without first applying some further renormalisation of the equation. Basically, in order for the equation to make sense even as a distribution, the velocity field u should be locally square-integrable in spacetime; and in particular the effect of the nonlinearity (viewed as a forcing term) should have this square-integrability. If one follows the perturbation theory philosophy of assuming that the nonlinear solution u behaves like its linear counterpart e^{t\Delta} u_0, and then one inspects the characterisation of BMO functions in terms of Carleson measure properties of their heat extensions, we see that what one is asking for is essentially that u_0 lies in BMO^{-1}.

This seems to not be too directly related to the Fefferman-Stein duality between BMO and H^1, although of course the Carleson measure characterisation of BMO dualises to give a version of the atomic decomposition for H^1.
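For readers following this exchange, the Carleson-measure characterisation alluded to above can be written out explicitly; this is the standard formulation (as in the Koch-Tataru theory), sketched from memory:

```latex
% u_0 lies in BMO^{-1} precisely when its heat extension e^{t\Delta}u_0
% satisfies a Carleson measure condition:
\|u_0\|_{BMO^{-1}} \sim \sup_{x \in \mathbb{R}^3,\; R > 0}
\left( \frac{1}{|B(x,R)|} \int_0^{R^2} \!\! \int_{B(x,R)}
|e^{t\Delta} u_0(y)|^2 \, dy \, dt \right)^{1/2} < \infty.
```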

20 March, 2009 at 12:28 am

Student: Dear Professor Tao:

I am a student studying stochastic differential equations and am a novice in physics and applied math. Some elaboration on your sentence below would be very much appreciated:

“Things look better on the stochastic level, in which the laws of thermodynamics might play a role, but the Navier-Stokes problem, as defined by the Clay institute, is deterministic, and so we have Maxwell’s demon to contend with.)”

Dear Tarun:

I would be interested to know which paper argued that the space BMO is ideal for NSE solutions.

Thank you.

21 March, 2009 at 10:31 pm

tmr: Probably that is the Koch-Tataru paper.

22 March, 2009 at 3:07 pm

Student: Dear tmr:

Thanks. That’s kind of what I thought, but I was wondering if there are any others that can be recommended.

26 March, 2009 at 9:27 am

Anonymous: Dear Terry,

Did you used to have notes on the Nash-Moser iteration scheme somewhere? I looked on your website and could not find it. Thanks!

29 March, 2009 at 1:37 pm

Terence Tao: Dear anonymous,

The Nash-Moser notes are at http://www.math.ucla.edu/%7Etao/preprints/Expository/nashmoser.dvi

29 March, 2009 at 8:52 pm

Anonymous: Dear Prof. Tao,

Excuse me, my question is not related to the NS equations, but since I do not know under which post we should write general questions, I am writing it here.

My question is: is it possible to have two dependent random variables such that one has a binomial distribution and the other has a normal distribution?

thanks

17 April, 2009 at 5:51 pm

Student: I would really appreciate it if you could explain why

“Riesz transforms are bounded in BMO, but not in L^oo”

a claim made by “Limiting Case of the Sobolev Inequality in BMO, with Application to the Euler Equations” by Kozono and Taniuchi.

Best regards,

17 April, 2009 at 6:01 pm

Terence Tao: Dear Student,

If you test the Riesz transform R_j against the indicator function of a box, you will obtain a logarithmic divergence. (This is easiest to see in one dimension, i.e. applying the Hilbert transform to the Heaviside function.) So R_j is not bounded on L^\infty. The bound on BMO is more complicated to show, and basically requires the use of Calderon-Zygmund theory; I would suggest Stein’s “Harmonic analysis” as a reference.
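The one-dimensional logarithmic divergence can be checked against the closed form: the Hilbert transform of the indicator of [-1, 1] is (1/π) log|(x+1)/(x-1)|, which blows up logarithmically at the endpoints even though the input is bounded. A minimal numerical sketch (the function name is mine):

```python
import math

def hilbert_indicator(x):
    """Closed-form Hilbert transform of the indicator of [-1, 1]:
    (1/pi) * p.v. integral over [-1, 1] of dt/(x - t)
    = (1/pi) * log|(x + 1)/(x - 1)|."""
    return math.log(abs((x + 1.0) / (x - 1.0))) / math.pi

# Approaching the endpoint x = 1 from the right: the values grow like
# (1/pi) * log(2/eps), so the transform of a bounded function is unbounded.
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, hilbert_indicator(1.0 + eps))
```

This illustrates exactly the failure of L^\infty boundedness described above; the BMO bound is the correct substitute.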

18 April, 2009 at 12:05 pm

anonymous: I have a question about strategy 2, discovering globally controlled quantities. Controlled quantities seem to be connected to invariances of the equations. For example, with the (linear) Schrodinger equation a phase rotation corresponds to the conserved quantity of mass, a space translation corresponds to momentum, etc. However, these invariances are precisely the defects of compactness of the linear Schrodinger operator, in the sense of the profile decomposition. In fact, one can think of the profile decomposition as the statement that the known invariances of the Schrodinger operator are the only (at least, noncompact) ones. It seems reasonable that we might be able to reverse this logic, and conclude that the known conserved quantities are the only ones. I don’t know very much about Navier-Stokes, but would it be plausible to attempt to classify the conserved quantities in a similar manner?

18 April, 2009 at 1:47 pm

Terence Tao: Dear anonymous,

For Hamiltonian equations (such as the Schrodinger equation), Noether’s theorem provides a very close connection between conserved quantities and invariances of the equation. But conservation laws aren’t necessarily the only ways to get control, and in any event Navier-Stokes is not a Hamiltonian system (it is not time-reversible).

Another obvious source of controlled quantities are monotonicity formulas – a formula that demonstrates that some coercive quantity is non-increasing in time. For instance, energy is a monotone quantity in Navier-Stokes. But unlike the situation with conservation laws of Hamiltonian systems, where we have a nice characterisation, there is no known general way to systematically classify all monotone quantities associated with an equation such as Navier-Stokes. But if one could find more such monotone quantities (and if they had a favourable scaling), then this would be major progress on the Navier-Stokes problem. (The solution to the Poincare conjecture, incidentally, was only made possible by the discovery by Perelman of certain scale-invariant monotone quantities for Ricci flow; one could conceivably imagine a similar miracle leading to a breakthrough for Navier-Stokes, though given that this equation is much less geometric in nature than Ricci flow, such miracles are perhaps less likely.)

24 April, 2009 at 11:29 pm

Student: Dear Professor Tao:

Thank you very much for your reply above. I have some follow-up questions.

I understand that the improvement below was made (sorry for not using LaTeX; I’ll retry if it’s not acceptable):

From Beale-Kato-Majda (1984):

||grad u||_{L^inf} <= C(1 + ||vort u||_{L^inf}(1 + log^+ ||u||_{W^{s+1,p}}) + ||vort u||_{L^2})

to Kozono-Taniuchi (2000): for 1 < p < inf and s > n/p, with C = C(n, s, p),

||f||_{L^inf} <= C(1 + ||f||_{BMO}(1 + log^+ ||f||_{W^{s,p}}))

In the words of Koji Ohkitani (2008), that is:

“At the level of the BKM theorem, which deals with a sup norm of the vorticity, it was not possible to rule out the possibility that vorticity blows up mildly as a logarithmic function of the space variables. According to the recent updates using a BMO norm, we can safely exclude such possibilities.”

So what is the next step? Unfortunately, Kozono and Taniuchi did not really leave any thoughts on possible subsequent advancements. Looking back, it seems, at least to my professor, that BMO was the obvious choice; but he thinks it is hard to see any further improvement beyond it.

Elaboration would be very much appreciated. I will reread Tataru’s paper you mentioned; it seems to be connected somewhat.

Finally, please share the source of the paper you are talking about, in reply to a comment by Katz below:

“Thanks to the recent work of Escauriaza-Seregin-Sverak and others, we know that blowup solutions to Navier-Stokes must in fact blow up in the critical norm L^3({\Bbb R}^3). “

6 May, 2009 at 9:07 pm

Student: “Try a topological method. This is a special case of (1). It may well be that a primarily topological argument may be used either to construct solutions, or to establish blowup; there are some precedents for this type of construction in elliptic theory.”

Please give us your specific recommended references (books, articles) to examples of these “precedents.”

Best regards,

7 May, 2009 at 3:51 am

Terence Tao: Dear Student,

One example of such methods that comes to mind is Gromov’s topological method to construct J-holomorphic curves in his celebrated paper on non-squeezing. There are also a number of methods for elliptic variational problems with a somewhat topological flavour (e.g. the mountain pass lemma).

18 May, 2009 at 9:56 pm

NSE Beginner: Dear Professor Tao:

1. Aside from using the Leray projection operator, one way to get rid of the pressure is to work with the vorticity equation. Solving the vorticity equation should be equivalent to solving the Navier-Stokes equation. Is that correct? And if so, are there any pros and cons to this idea?

2. Beale-Kato-Majda showed in 1984 that if the vorticity does not blow up, then the solution can be extended beyond time T. It seems obvious to me that if the gradient, first derivative, of the vorticity does not blow up, then the same result applies (I am asking this question because showing that the gradient of the vorticity does not blow up seems easier from looking at the vorticity equation). Is that correct?

Thank you.

19 May, 2009 at 4:16 pm

Terence Tao: Dear NSE beginner,

Well, the vorticity equation still involves the velocity u, which requires the Biot-Savart law to recover from the vorticity, so one is pretty much back to where one started. But it does seem clear that the dynamics of the vorticity (and in particular, the behaviour of vortex stream lines, vorticity shearing, etc.) is going to be an important aspect of the problem.

At fine scales (which is where all the difficulties to the problem lie), the derivative of vorticity is going to be much larger than the vorticity itself, so it is extremely likely that the BKM result extends to the derivative. By the same token, though, this derivative is going to be harder to control than the raw vorticity; in particular it will be sensitive to shearing effects caused by irregularity in the velocity field u, which as mentioned earlier cannot be so easily eliminated from the vorticity equation.

23 May, 2009 at 9:15 am

Student: Dear Professor Tao:

There was nowhere else for me to write this; at least somebody else also asked you about the Nash-Moser notes on this blog.

Below I am writing about your Nash-Moser Notes

http://www.math.ucla.edu/%7Etao/preprints/Expository/nashmoser.dvi

On page 3, in the middle you state

“we conclude from (5) (with x = u_n …”

but I don’t see any x in (5). You mean u = u_n?

On page 4 bottom, (11), I think the power of N should be 40, not 20.

In the middle page 6, you define the Littlewood-Paley operators, the case for N. I think there is one extra f^ that you do not need, for the case of > N that is.

Finally, my question: on the bottom of page 6 you claim a certain identity “Suppose for now that u_1, . . . , u_{n+1} \in 5B. Then from the identity…”

I do not know how you obtain the third equality; i.e. going from second line to third. Please explain.

23 May, 2009 at 10:32 am

Terence Tao: Thanks for the corrections! In the application of (5), x and h should be u and v (similarly the h in (5) should also be a v).

On page 6, on the third line of the equality, (you can see this also by looking at the next displayed inequality). The point is that is equal to .

26 May, 2009 at 2:31 pm

PDEbeginner: Dear Prof. Tao,

I read your nice note on the Nash-Moser iteration scheme http://www.math.ucla.edu/%7Etao/preprints/Expository/nashmoser.dvi shown above. It is a little abstract, though; could you point me to some literature with examples? (I tried to find some examples in your lecture notes on Nonlinear Dispersive and Wave Equations at New Mexico State University, but failed.)

Thanks in advance!

26 May, 2009 at 3:55 pm

timur: Dear PDEbeginner,

To me, the notes you mentioned were the most readable account of Nash-Moser; the others are very complicated, abstract, or long, or the main idea is buried in details. If you feel you lack intuition about what the “frequency” of a function is, you might want to look at the first couple of the following notes (which are again by Prof. Tao):

http://www.math.ucla.edu/~tao/254a.1.01w/

26 May, 2009 at 11:33 pm

PDEbeginner: Dear Timur,

Thanks a lot for your kindness and the reference! I have downloaded part of the lecture notes and plan to read them (especially the Littlewood-Paley projection part).

27 May, 2009 at 9:22 am

Anonymous: Hamilton’s paper, referenced in Tao’s notes, has a number of examples of how the Nash-Moser inverse function theorem can be used.

3 June, 2009 at 8:09 am

hegel triad: Hi Prof. Tao,

I would like your advice on the following. I am at a conference in France, and the famous Prof. Temam has just shared with his audience a model used in oceanography – the Lagrangian-averaged Navier-Stokes-alpha. He said that it regularizes the NS equations by modifying the nonlinearity so that scales smaller than alpha are swept along by the larger scales, while preserving fundamental inviscid transport properties such as the convection of vorticity. I have misgivings, as the audience seemed shell-shocked. Please share your opinion of the model.

4 June, 2009 at 12:53 pm

The Vlasov-Poisson system « Hydrobates: [...] of the equations. For a discussion of the significance of scaling properties in general see Terry Tao’s post on the Navier-Stokes equations. In the case the potential and kinetic energies satisfy an inequality of the form and this plays [...]

13 June, 2009 at 3:05 am

Student: Dear Professor Tao:

You stated:

“There are a couple of loopholes that one might try to exploit: one can instead try to refine the control on the “waiting time” or “amount of mixing” between each shift to the next finer scale, or try to exploit the fact that each such shift requires a certain amount of energy dissipation.”

I’m sorry, but I do not have a good grasp of what you mean here. It sounds as if there is some repeated rescaling to finer scales from time to time, with a shift between each. I can see why such a thing, if it exists, would be a problem, but how do you know that there IS such a periodic rescaling, and how should it be described mathematically? Or are you suggesting to suppose that there is such a periodic rescaling and try to rule it out?

And why does this shift require any energy dissipation? I think of the Laplacian in the NSE when you say dissipation, but I’m afraid that’s not what you mean.

Any elaboration would be appreciated.

13 June, 2009 at 7:35 am

Terence Tao: Well, if I could prove that a solution could periodically shift its scales to finer scales indefinitely, then I would have disproven the regularity conjecture; my point is that this is an “enemy” that one will have to address at some point if one is to solve the conjecture. I also don’t know whether the dissipation effect of the Laplacian will be strong enough to prevent an infinite number of such shifts, but this is one potential way to stop such a cascade from occurring. I used some ideas distantly related to this when establishing global regularity for the logarithmically supercritical wave equation, but that equation was only barely supercritical, and the method is unlikely to be adaptable as is to the Navier-Stokes equation (though perhaps some sort of “logarithmically supercritical Navier-Stokes equation” is within reach by these sorts of methods).

19 June, 2009 at 9:08 pm

Global regularity for a logarithmically supercritical hyperdissipative Navier-Stokes equation « What’s new: [...] supercritical and thus establishing global regularity beyond the reach of most known methods (see my earlier blog post for more [...]

29 July, 2009 at 12:25 pm

Liveblogging Science 2.0 | Serendipity: [...] to support community approaches to mathematics. In particular, he deconstructs one particular post: Why global regularity for Navier-Stokes is hard. (http://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/), which [...]

30 July, 2009 at 8:18 am

hyfen.net » Doing it in the open: Michael Nielsen at Science 2.0: [...] a lot of time thinking about open science and mass online collaboration. He began by highlighting a post on mathematician Terence Tao’s blog that outlined the Navier-Stokes problem and sketched a [...]

9 September, 2009 at 6:17 am

Student: Dear Professor Tao:

As you mentioned in the discussion above, taking \alpha as the power of Laplacian,

\alpha = 1/2 + d/4

implies global existence of strong solution (cf. Jiahong Wu, “Generalized Incompressible NSE in Besov Sp.”).

Hence, with \alpha = 5/4 in the 3-D case, the case is closed. Naively it seems that in 2-D the best we can hope for is the 2-D NSE. But I heard a rumor that 2-D Euler is also done (e.g. Nets Katz referring to “the 2-D problem for Euler, which is critical”). Yet 2-D Euler is basically 2-D NSE with \alpha = 0. So I guess it required a somewhat different proof, or some special advantage of low dimensions, or something.

Does a global strong solution to the 2-D Euler equation exist? And if so, how do we prove it? Could you give me a reference to a textbook or paper?

Thank you.
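The exponent \alpha = 1/2 + d/4 quoted above comes from a quick scaling heuristic, which can be sketched as follows:

```latex
% Generalized NSE: \partial_t u + (u\cdot\nabla)u = -\nu(-\Delta)^\alpha u - \nabla p.
% It is invariant under the rescaling
u_\lambda(t,x) = \lambda^{2\alpha-1}\, u(\lambda^{2\alpha} t,\, \lambda x),
% while the conserved/monotone energy scales as
\int |u_\lambda(t,x)|^2\,dx = \lambda^{2(2\alpha-1)-d} \int |u(\lambda^{2\alpha}t,y)|^2\,dy,
% so the energy is scale-invariant (critical) exactly when
2(2\alpha-1) = d \quad\Longleftrightarrow\quad \alpha = \tfrac{1}{2} + \tfrac{d}{4}.
```

For d = 3 this gives \alpha = 5/4, matching the hyperdissipative global regularity result cited in the comment.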

10 September, 2009 at 7:53 am

timur: Local well-posedness of the Euler equations for initial velocity in H^s with s > n/2+1 has been proven by Lichtenstein, Günther, and Ebin-Marsden. Correct me if I am wrong, but it seems there is no local existence result for initial velocity in H^s with s <= n/2+1.

The global well posedness theory in 2D is as follows:

* Global well-posedness for initial vorticity in L^\infty [Wolibner '33], [Yudovich '63], [Kato '67].

* Global existence of weak solutions for initial vorticity in L^p, p > 1.

* The above is extended to p=1 and measure-valued vorticities [Delort].

Global well posedness for Euler in 2D is possible because in vorticity formulation it becomes a transport equation.

12 September, 2009 at 9:42 pm

Student: Dear timur:

thank you very much. I found your information very helpful.

14 November, 2009 at 3:03 pm

Student: Dear Professor Tao:

Above you described the rescaling of NSE and stated that

“Strictly speaking, this scaling invariance is only … with the non-periodic domain {\Bbb R}^3 rather than the periodic domain {\Bbb T}^3″

I’m sorry if it’s obvious but it is not clear to me why the rescaling must change in T^{3}. If you could explain, I would appreciate it.

14 November, 2009 at 6:34 pm

Terence Tao: When one rescales a function which is periodic with period 1, one gets a function with a different period. Since functions on the torus only correspond to functions which are periodic with period 1 (in all three cardinal directions), this shows that scaling by an arbitrary real parameter cannot be done while still staying in the standard torus. (For scaling by an integer parameter one can still rescale, but one should caution that integrated quantities such as energy rescale with a different factor than their non-periodic counterparts. Alternatively, one can scale the length of the torus along with the solution, thus working with tori such as \lambda \mathbb{T}^3 for various \lambda > 0.)
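Concretely, the standard Navier-Stokes scaling in question can be written out as follows:

```latex
% If u, p solve Navier-Stokes, then so does the rescaled pair
u_\lambda(t,x) = \lambda\, u(\lambda^2 t, \lambda x), \qquad
p_\lambda(t,x) = \lambda^2\, p(\lambda^2 t, \lambda x).
% If u(t,\cdot) is 1-periodic in each coordinate, then u_\lambda(t,\cdot) is
% (1/\lambda)-periodic, so a noninteger \lambda takes one out of the standard
% torus \mathbb{T}^3; this is the obstruction described in the comment above.
```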

14 November, 2009 at 8:00 pm

Student: Dear Professor Tao:

I understood. Thank you very much.

23 November, 2009 at 11:18 am

Anonymous: Dear Prof. Tao, can I get the formula for the viscosity term in the NSE in Lagrangian coordinates? I think attacking the NSE in Lagrangian coordinates would be a step towards arriving at the solution, but the viscosity term is what I can’t figure out.

24 December, 2009 at 6:13 am

Student: Dear Professor Tao:

You stated

“It has long been known that by weakening the nonlinear portion of Navier-Stokes (e.g. taming the nonlinearity), or strengthening the linear portion (e.g. introducing hyperdissipation), …”

I am aware that by raising the power of the Laplacian, e.g. from 1 to any real number greater than or equal to 5/4 in the 3-D case, one can obtain global regularity for the Navier-Stokes equation (NSE). But what is the explicit modification of the nonlinear portion,

u \cdot \nabla u,

that will achieve the global regularity, even with the power of Laplacian is 1, in the case d = 3?

We certainly cannot make the \nabla part of the nonlinear portion hyperdissipative, because it would destroy the divergence-free advantage in many ways. E.g., dotting the NSE with u and integrating by parts no longer allows us to get rid of the nonlinear portion if the \nabla is made hyperdissipative.

Has such a modification been done?

25 December, 2009 at 1:32 pm

timur: For example, you can use (Su)\cdot\nabla u, where S is some smoothing operator like (1 - \alpha^2\Delta)^{-1}; (Su)\cdot\nabla(Su) is also possible. The key words that might be useful are Leray regularization, Leray-alpha, modified Leray-alpha, Navier-Stokes-alpha, simplified Bardina model, etc.
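For concreteness, one common instance of this family (the Leray-alpha model; the notation here is one standard choice, not the only one):

```latex
% Replace the nonlinearity u\cdot\nabla u by (Su)\cdot\nabla u,
% where S smooths the advecting velocity at spatial scale \alpha:
\partial_t u + (Su)\cdot\nabla u = \nu\Delta u - \nabla p, \qquad
S = (1 - \alpha^2\Delta)^{-1}, \qquad \nabla\cdot u = 0.
% Taming the fine-scale part of the advecting velocity in this way is what
% makes global regularity provable for these modified models.
```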

24 December, 2009 at 6:22 am

Student: Dear Professor Tao:

You stated

“most nonlinear PDE, even those arising from physics, do not enjoy explicit formulae for solutions from arbitrary data (although it may well be the case that there are interesting exact solutions from special (e.g. symmetric) data).”

Also,

“It may well be that there is some exotic symmetry reduction which gives (1), but no-one has located any good exactly solvable special case of Navier-Stokes (in fact, those which have been found, are known to have global smooth solutions).”

Could you give some references to work done on these ideas, for the NSE or other PDEs? Any elaboration would be appreciated too.

Thank you very much.

20 January, 2010 at 8:07 am

Critical wave maps « Hydrobates: [...] of activity at the moment. For more information about this classification and its significance see this post of Terence Tao. Actually there are a couple of results around on problems which could be called marginally [...]

2 April, 2010 at 1:42 pm

Amplitude-frequency dynamics for semilinear dispersive equations « What’s new: [...] analogous in many ways to the even more famous global regularity problem for Navier-Stokes (see my previous blog post on this [...]

22 April, 2010 at 8:27 am

Frank Trunk: The problem with the Navier-Stokes equations is with the physics. They don’t define the physics of 3D fluid flow completely. Part of that problem is with Newton’s second law. I have done some work on the problem, but unfortunately it has been classified by the Dept. of the Navy. Define the proper physics and you will have the solution to your problem.

23 April, 2010 at 12:33 pm

Robert Coulter: I find this post by Professor Tao to be very enlightening. I have been fascinated with this issue since about 2002, when I learned of the Clay Millennium prize.

Concerning the previous comment:

I believe, from the mathematician’s viewpoint, that this is mainly an issue about this class of equations; the fact that they have something to do with fluid flow is secondary.

That being said, I believe that having a great insight into fluid behavior may offer some aid in solving the problem.

23 April, 2010 at 1:08 pm

Robert Coulter: Hello Professor Tao:

You stated earlier:

“(Note also that it is known that singularity can only occur if the velocity goes to infinity; in practice, of course, relativistic effects (among other things) would kick in long before then.)”

Is the above correct? Does the vorticity blowing up necessarily imply the velocity blowing up as well? If infinite velocity is required for the singularity, then should one rule out searches for other types of singularities (e.g. infinite gradients, velocity step changes)?

25 April, 2010 at 9:58 am

Terence Tao: Oops, you’re right, the Beale-Kato-Majda criterion is with respect to the vorticity, not the velocity.

Nevertheless, an L^infty bound on the velocity does imply regularity for Navier-Stokes (though not for Euler, as far as I know) through energy arguments (at least in the case when the energy is finite, which is the usual hypothesis for the global regularity problem). Indeed, one can show inductively (using energy estimates and Gagliardo-Nirenberg, and the bounded velocity hypothesis) that the H^k norm of the velocity does not blow up in finite time as long as the solution stays smooth. In fact one can even relax L^infty to L^3 (thanks to the Escauriaza-Seregin-Sverak result).
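The inductive energy estimate mentioned here can be written schematically (constants and the commutator details are suppressed; this is a sketch of the shape of the argument, not the precise inequality):

```latex
% Testing the equation against \Lambda^{2k}u and using commutator and
% Gagliardo-Nirenberg estimates together with the bounded-velocity
% hypothesis yields, schematically,
\frac{d}{dt}\|u\|_{H^k}^2 + \nu\|u\|_{H^{k+1}}^2
\;\lesssim\; \nu^{-1}\,\|u\|_{L^\infty}^2\,\|u\|_{H^k}^2,
% and Gronwall's inequality then keeps the H^k norm finite on any
% bounded time interval on which \|u\|_{L^\infty} stays bounded.
```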

25 April, 2010 at 11:14 am

Robert Coulter: Professor Tao:

Thanks for your response. I will try to look up some of those sources.

26 April, 2010 at 5:06 am

Robert Coulter: Concerning NS and the initial conditions of the Clay problem:

It appears to have been proven that if the velocity becomes infinite, this is limited to a set of Hausdorff dimension zero. I believe this means that it has no metric scale – no length, no area, no volume, etc. – just a set of points, no more. Is this correct?

It seems like an earlier post said that a group of singularities might be possible along a “segment”. Did I misread this? Would this be Hausdorff dimension one?

Also, has it been proven that if the singularity exists that it has no time dimension measure?

26 April, 2010 at 11:04 am

Terence Tao: I believe that the best partial regularity result known for 3D Navier-Stokes is that the singularity set has Hausdorff dimension at most 1 in space, a result of Caffarelli-Kohn-Nirenberg, building on earlier work by Scheffer. There are some variants of this result by Fang-Hua Lin and by Katz-Pavlovic, and there may also be some subsequent literature in this direction (although the dimension 1 is very natural for scaling reasons and it would be a major breakthrough to lower this number). I believe there are also spacetime versions of this (using the usual heuristic that one time dimension has the same scaling as two spatial dimensions for parabolic equations, I would expect the best upper bound on the dimension of the set of times where the solution is singular to be 1/2).

4 May, 2010 at 4:59 am

Robert Coulter: Concerning the Clay Problem Fefferman Statement:

At (5) on the first page there is a specification of the forces allowed. Can someone explain the mathematical notation here? It says the bound holds for any K. Is it implied that K must be positive? Doesn’t a negative K imply the force can grow large with increasing x and t?

4 May, 2010 at 7:50 am

Terence Tao: The bound (5) is required to hold for all K, both positive and negative. But in this case, the inequality for positive K is stronger than that for negative K, which thus becomes redundant. This condition is basically asserting that the force lies in the Schwartz space,

http://en.wikipedia.org/wiki/Schwartz_space
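For reference, the decay condition in question has the following shape (schematically; consult the official Clay problem statement for the precise form and constants):

```latex
% The force f is required to satisfy, for every multi-index \alpha,
% every m \ge 0, and every K,
|\partial_x^\alpha \partial_t^m f(x,t)| \;\le\; C_{\alpha m K}\,(1+|x|+t)^{-K}
\quad \text{on } \mathbb{R}^3\times[0,\infty),
% i.e. f and all its derivatives decay faster than any polynomial,
% uniformly in time; large positive K gives the binding constraint.
```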

5 May, 2010 at 12:11 pm

Robert Coulter: Professor Tao:

Thanks for your response.

Also, again, thanks for having this blog.

I look forward to any other information you may want to share on this subject.

13 June, 2010 at 8:42 am

Bob: An insightful web-paper, thank you very much. I have two general questions in this regard:

You said: “There are countless other global regularity results of this type for many (but certainly not all) other nonlinear PDE”

This somehow suggests to me that there are similar problems for which such a desired theory exists, while NS remains unresolved (though, before reading your post, I thought NS was one of the simplest prototypes in the field of nonlinear PDE for which the desired theory is unknown).

1. You count only 2D NS as a sample from the mentioned “countless” examples. But it seems (I’m no expert) that the 2D situation is very different from 3D for nonlinear vector PDEs. My question is: are there similar nonlinear hydrodynamic 3D vector PDEs for which we have uniqueness and global-in-time regularity results (e.g. they may be compressible, and/or non-isothermal, or coupled with some scalar transport, and/or have some additional terms, etc.)?

It would be very helpful if you could give us some references in this regard, if the answer to the above is positive.

2. Also, are there modifications of 3D NS for which the desired regularity follows (and if so, could you please state the theoretically minimal ones)?

13 June, 2010 at 1:41 pm

Robert Coulter: I believe the ones that provide regularity strengthen the viscous drag term – e.g. Mattingly-Sinai (2003).

13 June, 2010 at 4:15 pm

Robert Coulter: Also, there were discussions along this line in earlier posts.

2 July, 2010 at 12:51 am

Bob: Robert,

Thanks for the hint. I meant something more than what was already stated above, in particular more realistic cases (in fact the goal is to know whether this set of equations is hard, or whether the alternatives are also hard problems).

E.g., are there regularity results if we ignore incompressibility and consider 3D compressible viscous flow?

Or if we add limited compressibility, commonly by a term proportional to the time derivative of the pressure field in the continuity equation?

2 July, 2010 at 7:52 am

PDE Beginner: Some comments, and a question, on Bob’s comments:

“It somehow suggests to me that there are some similar problems with such a desired theory, while NS remains unresolved”

I think similarly difficult problems that have caught much attention recently and remain basically unsolved in the supercritical regime include the Quasi-geostrophic (QG) Equation, the Porous Media Equation, and the Cordoba-Cordoba-Fontelos (CCF) model. The relation among these equations can be described as follows: take the 2-D NSE and take its curl to get a vorticity equation. The nonlinear term looks like

u\cdot\nabla v

where v is the vorticity of u, and u is now \nabla^{\perp}(-\triangle)^{-1}v.

The QG is the case where u is \nabla^{\perp}(-\triangle)^{-1/2}v, and hence a Riesz transform of v. The Porous Media Equation is slightly more complicated but can be reduced to a form very similar to the QG (identity plus a singular integral operator; cf. the work by Castro, Cordoba, Gancedo and Orive on this). The CCF model is the case where the Riesz transform of the QG is changed to the Hilbert transform. Finally, there is a modified QG which interpolates between the vorticity form of the Euler equation and the QG (cf. the work by Constantin, Iyer and Wu on this).

For all these equations the nonlinear term is more complicated than that of Burgers’. The best known conserved quantity for them is the sup norm, which makes the power 1/2 of the fractional Laplacian in the dissipation term the critical case, and almost nothing is known in the supercritical regime (cf. the work by Luis Silvestre on the slightly supercritical quasi-geostrophic equation). I find the case of the NSE similar: the best conserved quantity there is different, making the critical power of the fractional Laplacian 1/2 + d/4 (d being the dimension) (and similarly for the MHD equations; cf. the work by Wu on this), but essentially we have no results in the supercritical regime (cf. the work by Terence Tao on the logarithmically supercritical case).

I now have a question about your question. You stated:

“if we add limited compressibility, commonly by a term proportional to the time derivative of the pressure field in the continuity equation”

Can you give a reference for this method of adding limited compressibility by a term proportional to the time derivative of the pressure field? You said “commonly,” so I imagine there are examples. I am new to this idea.
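The family of active scalar equations described in this comment can be put into one schematic form (this parametrisation is one common convention, written out for reference):

```latex
% Active scalar transport with fractional dissipation:
\partial_t \theta + u\cdot\nabla\theta = -\kappa(-\Delta)^{\gamma}\theta, \qquad
u = \nabla^{\perp}(-\Delta)^{-1+s}\theta.
% s = 0: 2D Euler in vorticity form;  s = 1/2: (surface) quasi-geostrophic,
% where u is a Riesz transform of \theta.  Since \|\theta\|_{L^\infty} is the
% best controlled quantity, \gamma = 1/2 is the critical dissipation power
% for this family, as noted above.
```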

2 July, 2010 at 8:18 am

Bob: PDE Beginner: first, thanks for the comments, though I still did not get an effective response; setting aside the addition of higher-order smoothing terms (of which there are a lot), I am looking for known theory for related 3D problems.

On your question: in practice the sound speed in any medium is finite, so in principle the incompressibility constraint is unphysical; of course it plays a role at small scales, where the time scale of the vortices is comparable to that of the sound speed.

Anyway (I do not know of theoretical work; that was my question here), there are some computational works on this method, e.g.:

Adding limited compressibility to incompressible hydrocodes, Journal of Computational Physics, Volume 34 (1980) 390-400.

2 July, 2010 at 2:22 pm

Robert Coulter: Has anyone done any work on the action of entropy in relation to NS and/or Euler?

My hunch is that in the NS case the “process” would tend to evolve in the direction of a lower free energy state. Maybe this would be the Helmholtz free energy A = U - TS. In Euler there is no energy dissipation, so it appears only the TS term is active (the process always seeks higher S). In NS the process “arrow” is decided by a competition between seeking a lower energy state (lower U, via viscosity removing energy) and seeking higher entropy (higher S – the nonlinearity randomizing particle trajectories?). Both terms diminish the macro world’s ability to recover energy/work from the system (left unattended).

2 July, 2010 at 2:57 pm

Robert Coulter: In addition to my previous post:

In the case of NS there appears to be a pathway to T (temperature), in that there is an implicit assumption that the viscous term takes the kinetic energy and “magically” removes it from the system. Practically this is not hard to visualize if the system is isothermally maintained, with the heat Q passing into an external heat reservoir.

In the Euler case there is (from what I can tell) no easy visualization. There is no obvious pathway to T (T interpreted as random kinetic energy). The energy stays in finite discrete “velocity bundles” (for lack of a better term). Do the velocity bundles progress to such small scales that the Lagrangian particle trajectories collide, providing the only mechanism in the Euler case for dispersing the kinetic energy to T?

In the NS low-viscosity case the action is probably similar. The “entropy”-increasing effect is much stronger than the viscous (U-decreasing) action.

4 August, 2010 at 9:36 am

jj2: In case you are interested, you can find the proof of Statement D of the Clay Math. Institute Navier-Stokes problem in Electron. J. Diff. Equ. Vol. 2010 (2010), No. 93, pp. 1-14, at http://ejde.math.txstate.edu. The official problem statement did not require the pressure to be space-periodic and did not exclude a feedback force as the external force. Thus, a simple blow-up solution could be explicitly constructed. The changes to the problem that would be needed so that the presented example is no longer valid are not small modifications.

5 August, 2010 at 3:06 pm

Robert Coulter: The blow-up term g is only a function of the time variable. It looks like all velocity vectors go to infinity across the entire space at t = a. Doesn’t this imply infinite energy at blowup? If so, how did the energy enter the system? My hunch is that an infinite external force is in the math somewhere; otherwise, one would have to explain how the energy appears.

5 August, 2010 at 9:57 pm

jj2: The energy comes from the time derivatives of the velocity vector at t=0.

The external force does not have infinite energy for t=0 to t=infinity.

The official problem setting does not require solutions to be physically meaningful, as blow-up solutions in general are not physical. The physical problem of the pressure growing to infinity as the space dimensions grow to infinity is only in finding an infinite physical container. Give me such a container and I will give you the solution. Physically you can make a finite volume of fluid behave in this way if you first speed the higher-order time derivatives of velocity to suitable values. Then apply zero force, and the fluid will follow the solution in Lemma 1.

5 August, 2010 at 10:01 pm

jj2: Is it possible to recover the “original” problem by a small modification? Several modifications are needed; they are described at the end of the article, and they are not small. What is the “original” problem? Fefferman explicitly says that he does not give the whole NSE problem but four very small subproblems as the millennium prize problem. Only Fefferman has ever defined these problems, so they must be the original problems. The whole issue of modifications was whether the stated problem is poorly defined and needs to be modified, but it is well defined and solved in the paper.

If you pose a million-dollar question, try to check before posing it that it is the question you want solved.

5 August, 2010 at 10:06 pm

jj2: See also arXiv:0809.4935. This paper is 2.5 years old and was in journal review for 15.5 months; it is presently submitted to a journal. So far no serious errors have been found in 2.5 years, only small issues like a few typos.

5 August, 2010 at 10:20 pm

jj2: Why were the initial values of the time derivatives of the velocity vector not defined at t=0? There was an old theorem stating that the solutions were unique. As the solutions were unique, feedback forces did not need to be considered (there is nothing to select if there is only one solution).

Unfortunately, the proof of this old theorem implicitly assumed the pressure to be periodic. Proofs which implicitly assume something are wrong. This theorem seemed to be known and believed by many people.

6 August, 2010 at 2:12 am

Robert Coulter: I suspect that condition (9) of the Fefferman paper is violated. It appears that the external force is embedded in the pressure equation.

6 August, 2010 at 3:28 am

jj2: No condition is violated. You may say, if you want, that the external force is embedded in the pressure, but actually this is not the case. The external force is set to zero and kept at that value. The pressure is eliminated by integrability conditions, and the remaining equation for the velocity vector has many solutions. You cannot and should not say or think that what appears as pressure is in some way actually the external force. That is what a physicist might say (as they are not too precise); mathematically, the force is set to zero, the pressure is eliminated, the velocity is solved, the pressure is computed from the velocity, and that is how it is.

29 August, 2010 at 8:27 am

Robert Coulter: Redefining the pressure and external-force terms on the right side of the NS equations to be pressure-only does not mean the external force is set to zero. When the global energy in the system increases, this can only come about from the application of an external force. Also, to achieve infinite global energy in finite time would require an infinite external force.

Also, the initial global kinetic energy is finite and is calculated as the global integral of the velocity squared (at t=0). Time derivatives of the velocity do not store energy. There is no potential-energy term (incompressible fluid).

The apparent non-uniqueness claim also comes about because of the external force. One can arbitrarily determine multiple trajectories of the solution from a common initial condition if given latitude to vary the external force as needed.

29 August, 2010 at 7:52 pm

Jorma Jormakka: Professor Tao has made settings that do not allow jj2 to post any messages to this website. This is because I tried, as jj2, to post a small announcement of a discussion on reddit.com where some students, notably one called cowgod42, tried to break my peer-reviewed Navier-Stokes paper; naturally they could not break it. First Professor Tao removed my announcement of my paper solving the Navier-Stokes problem. I managed to put it back and to have a short discussion with Robert. Now, however, the censorship on this site has become tighter and jj2 cannot post anything.

I will first see if I can post a message using another name, and then I will answer you, Robert.

29 August, 2010 at 8:17 pm

Jorma Jormakka: Good, this new identity works, meaning that Professor Tao had put jj2 on a spam list (simply for announcing that reddit.com had not managed to break the proof and asking whether the people on this site could break it; there was no spam or offence in the attempted post. I am sad that there is censorship.)

To answer the comments by Robert.

Mass in accelerating movement has kinetic energy. Fluid in accelerating movement also has kinetic energy, not potential energy. If you observe a falling mass and select the origin of time as some point when the mass is already in accelerating movement, then surely the mass has kinetic energy stored in the time derivatives.

The external force selects a trajectory among the solutions requiring no external force. A physical example you can think of is knocking an egg standing on its sharper end to one side. This knock can be infinitesimally small and short in time. Zero energy is needed, since you can always make the energy smaller by making an even smaller knock. Another example is steering a boat or a car. If you let the boat or car move where it naturally goes, you do not need to use any energy; you do not turn the wheel. Assuming that the boat can naturally move by the current either to a waterfall or to a safe harbor depending on which direction it is originally pushed, you can steer the boat in the direction you want, stop the motor, and it will go where you wanted. The energy to select the direction where it goes can again be made infinitesimally small.

In the paper the external force is the difference of derivatives of two solutions: the selected solution having a finite-time blow-up, and the current solution. By the local-in-time existence and uniqueness theorem the solution is unique if the initial conditions are fully described. In this case, the external force sets the time derivatives of velocity at t=0 and the initial conditions for velocity are fully described. Thus, the solution is unique. One solution is that the selected solution equals the current solution, i.e., the force has zero value. As the solution is unique, this is the only solution. Thus, the external force has zero value from t=0 to infinity.

The energy of the external force can be obtained by integrating its value, and the energy is zero. The external force is not zero, though it has zero value. Feedback control forces still exercise control even if they are not used: if the current solution were not the selected solution, then the external force would not be zero. As it is zero, they must be equal.

You see the same external control force on the Internet. Even if your posts are not censored, there is control. Should you successfully prove one of the Clay math problems, you would see that there will be no publicity for the result even if you manage to publish it, after a very long peer review, in a respectable journal. And you may find that your identity is put on a spam list so that you cannot answer incorrect comments made against the proof. There is control even if you do not notice any force; a zero value of a feedback control force is not the same as zero force.

Hope these comments explain the issues you raised. The proof has been checked by over 10 mathematicians and many more have seen it. Nobody has been able to come up with a valid counterargument, so we must say that it is correct. It is so simple that any second-year undergraduate can check it. I very much doubt that an undergraduate could have found it. After all, it is a nonlinear partial differential equation in three dimensions. It is not at all so easy to find the solution as it is to check it.

29 August, 2010 at 10:02 pm

Terence Tao: Self-promotion of one’s own papers is against the stated comment policy of this blog; see

http://terrytao.wordpress.com/about/

If you wish to discuss your own research papers, please do so using another venue, such as your own personal web pages.

That said, I will make two mathematical comments pertaining to the above discussion:

1. It is indeed true that, as stated by Fefferman, the initial value problem for the periodic Navier-Stokes equation does not have unique solutions, due to the allowance of the pressure function to be unbounded at spatial infinity. It is this unbounded nature of the pressure function which also causes such unphysical effects as blowup of energy. However, if one adds the additional physical constraint that the pressure be bounded, then this lack of uniqueness disappears (it is easy to see that if the velocity field u and external force f are periodic, and the pressure p is smooth and bounded, then the pressure p is also periodic). In retrospect, it would probably have been best if the additional physical requirement that the pressure be bounded had been placed in the official description of the Navier-Stokes problem, though ultimately the presence or absence of this requirement does not affect the truth of either problem (B) or (D) in that description (because the unphysical solutions with unbounded pressure are just (variable velocity) Galilean transformations of a physical solution (p,u), at least when f=0; indeed it is not difficult to show that one must have $\tilde u(x,t) = u(x-a(t),t) + a'(t)$ and $\tilde p(x,t) = p(x-a(t),t) - a''(t) \cdot x$ for some smooth function $a(t)$. In particular, one has global regularity for at least one of these solutions if and only if one also has global regularity for the physical solution.)

Note also that one does not need any particularly fancy solutions to demonstrate non-uniqueness in the case of unbounded pressure. For instance, one can simply take $u(x,t) = a'(t)$, $p(x,t) = -a''(t) \cdot x$, $f = 0$ for any smooth function a to show non-uniqueness in this setting. (Among other things, this makes Theorems 2.2 and 2.3 of the EJDE paper trivial to prove.)
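One can verify directly that a spatially constant velocity field with a linear-in-space pressure solves Navier-Stokes with zero force; a sketch (my reconstruction of the formulas elided from the comment above):

```latex
% Take u(x,t) = a'(t) (constant in space), p(x,t) = -a''(t)\cdot x, f = 0. Then
\[
\partial_t u + (u \cdot \nabla) u = a''(t) + 0,
\qquad
-\nabla p + \nu \Delta u + f = a''(t) + 0 + 0,
\qquad
\nabla \cdot u = 0,
\]
% so all the equations hold. Every smooth a with the same value of a'(0)
% gives the same initial data u(x,0) = a'(0), yet distinct choices of a
% give distinct solutions: hence non-uniqueness when p may be unbounded.
```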

2. Even without the imposition of the physical condition that the pressure be bounded, the proof of Theorem 2.4 in the EJDE paper, which would answer part (D) of the official description of the Navier-Stokes problem, is incorrect, due to a circular definition of f. To prove Theorem 2.4, one must provide data u^0, f with the property that there are no global solutions (p,u) to the Navier-Stokes problem with this given data; or equivalently, every local solution (p,u) to the Navier-Stokes problem with this given data develops a singularity in finite time. Note in particular that f has to be specified prior to u. However, in the claimed proof of Theorem 2.4, f is not defined independently of u, but is instead defined in terms of u. This is not a valid definition of f for the purposes of demonstrating Theorem 2.4, since u only appears in the statement of that theorem after f has been selected; hence any definition of f that involves u would be circular. Using a control equation does not eliminate this circularity, because the solutions u appearing in the statement of Theorem 2.4 are not forced to obey any control equation, but are instead arbitrary solutions to the (uncontrolled) Navier-Stokes equation with the specified data.

In more detail: the purported proof of Theorem 2.4 defines f in terms of a putative solution (p,u) to a controlled Navier-Stokes equation by adding a control equation coupling f to u. However, once f is chosen in this fashion, there is nothing preventing the existence of a second solution $\tilde u$ to the original (uncontrolled) Navier-Stokes equation with this forcing term f, due to the lack of uniqueness. (Indeed, if one sets f=0 and u=U as indicated in the purported proof of Theorem 2.4, then there are plenty of such second solutions, thanks to Lemma 2.1.) Such solutions would contradict the conclusion of Theorem 2.4, but are not ruled out by the arguments in the purported proof of that theorem, because f is controlled by u, not by $\tilde u$.

I have no interest in discussing this paper further. I will permit exactly one final response on this topic; however, all subsequent posts that are in violation of the comment policy of this blog will be deleted.

29 August, 2010 at 11:04 pm

Jorma Jormakka: Dear Professor Tao,

I was not doing any self-promotion by posting to your site a message saying: in case you are interested, you can find the solution to the problem you are discussing on this site in a published paper. Instead, as you were discussing the problem, you could have been expected to be interested in published results concerning the problem. You can see that I have made no home page and do not run a blog. People who do self-promotion always have homepages and blogs. Those who do not have them, like me, rather dislike being in publicity. I thought you would be delighted that there is a new result in your field.

As there were posts from me and Robert Coulter responded to them, I think you should not have put my identity on a spam list. That is certainly a questionable practice. If you wanted me to stop, why did you not simply write a post explaining your policy rules?

As for your comments, they are wrong.

Fefferman in his problem statement says clearly that the solutions are unique up to some finite T. They are not, so Fefferman’s statement is definitely incorrect.

There could indeed have been a condition that the pressure be bounded. Instead, for the non-periodic case, Fefferman explicitly writes that no conditions are set on the pressure. Thus, you agree that conditions must be set on the pressure.

You state that the external force must be set first as a point function, not as a feedback control function. This is not stated in the problem statement. Thus, you agree with me that this is a necessary condition to be demanded. You must also agree that it is nowhere given in the official problem statement. This is discussed in my paper in Section 4.

Your last comment is clearly wrong. When the force is defined in this way, the solutions become unique because the time derivatives of the velocity vector are determined.

If you do not allow me to make posts, then also do not allow people like Robert Coulter to comment on my existing posts. Otherwise I cannot respond to incorrect claims against the paper.

If you think my paper is incorrect, then submit a paper to EJDE explaining your arguments. They are incorrect, and posting clearly incorrect arguments with the authority of a leading figure is highly questionable.

29 August, 2010 at 9:08 pm

Jorma Jormakka: Robert,

Mathematically there is no such thing as apparent non-uniqueness that is not real non-uniqueness. Nor are there such things as a pressure term which is actually a force, or a force that is actually pressure. Mathematically, pressure is pressure, and force is force.

The solutions are non-unique because there is a family of solutions allowing a free function g(t), such that g(0)=g'(0)=0. Thus, the solutions are mathematically non-unique and not in some way apparently non-unique.

Because the space-periodic problem is posed on R^3 x [0,infty), there are no conditions that deny these solutions. If the space-periodic problem were posed on R^3/Z^3 x [0,infty), we could argue that the presented solution is not acceptable, since the external force does not exist if it is defined using a velocity vector that does not exist on the torus times time.

However, this would not help. Statement B is proven in exactly the same way as Statement D. Notice that no similar definition of a nicer space can be made in the non-periodic case. The same construction (the transform at the beginning of Section 3) can be applied to any solution for zero force. Let us assume we have one such solution and it is smooth in the whole space-time. We apply the transform to this solution and get a new solution with finite-time blow-up. Even if we try to demand that in acceptable solutions the pressure must have finite energy, the new solution obtained by the transform is in this case a perfectly valid solution in R^3 x [0,infty), i.e., it exists and thus the external force also exists. The new solution does not satisfy the conditions that we imposed, so it is a counterexample: for those initial values and that external force there does not exist a smooth solution satisfying what we demanded, since the unique solution that exists in this case does not fulfill the demands we posed.

Posing more conditions is not a way to exclude counterexamples. In some way we must exclude the existence of the feedback force. In the space-periodic case this was possible by posing the problem in a torus, in the non-periodic case it is not possible.

The only way to correct the Clay problem setting is to impose periodicity of the pressure in the space-periodic case, growth conditions on the pressure in the non-periodic case (finite energy etc.), and in both cases to exclude feedback forces as external forces.

The motivation for why anybody should want to exclude feedback forces, and the mechanism for how to do it so that feedback forces cannot in some way be smuggled in, requires careful thinking. Basically, we would like to allow any external forces in the NSE, including feedback forces. They appear in many technical applications; for example, a propulsion system for a ship is feedback-controlled by the captain. There is no good reason to exclude these forces. If these forces are excluded, it is difficult to say we are still talking about the original problem, where the external force was general. The restricted problem without feedback forces would be more limited in applications.

Notice also that the uniqueness of solutions in this case was strongly believed in the field of PDEs, and apparently proved by a theorem, e.g., in Temam. It is wrong in this case. Remember also that people in PDEs like to call people from other fields cranks and crackpots, as is shown by the discussion on reddit.com. These people certainly need a small lesson.

30 August, 2010 at 1:48 pm

Robert Coulter: Per Professor Tao’s instruction, this is my last post concerning this paper:

The right side of the NS equations is the “sum of the forces.” The pressure-gradient term is one of these forces, along with the viscous-drag term and the external force (if present). If the total kinetic energy of the system increases over any delta t, then an external force is present. I cannot see any way around this. Per my interpretation of the problem, if the external force is present then it must comply with the Clay problem’s bounds on the force (it must be finite).

It seems like the main thrust of this strategy is to somehow get this external force under the umbrella of the pressure term, maybe because Fefferman simply forgot to forbid it. Maybe he also forgot to forbid converting the fluid mass to energy per E = mc^2.

It seems like the strategy here is to construct a trivial blowup solution and then look for weasel words in the Clay NS problem statement to allow it. This certainly doesn’t advance or enlighten anyone on the mysteries of the NS problem.

27 September, 2010 at 4:54 am

Thierry: Professor Tao,

As you mentioned the molecular limit for the physical NS problem, you might be interested in the fact that there are published papers on molecular simulations of the Rayleigh-Taylor instability at scales < µm.

The results show that the molecular simulation explains the phenomenon better than NS which begins to break down at these scales.

It is not surprising to find such a result given that the NSE were established for a continuum.

When this fundamental hypothesis breaks down, e.g., when the spatial scales are such that the fluid appears discrete rather than continuous, the NSE must be replaced by another theory which deals with discrete systems.

This suggests, by analogy with the infinite potential of point-like particles, that when the mathematical description of a physical theory (here the NSE) is extrapolated to a domain where it is physically invalid, the mathematical results are pathological too. This pathology generally manifests itself as the appearance of singularities.

I am well aware that this is no mathematical demonstration, but I am convinced that the NSE always blow up at small scales, because that is the clearest signal Nature can send to tell us that something is wrong with the theory at these scales.

I have also a question. Do you believe that a stochastic interpretation of NSE could be possible by proving some kind of generalised ergodic theorem?

27 September, 2010 at 10:56 am

Terence TaoIt is true that physics no longer follows the Navier-Stokes equations at microscopic scales, but this does not seem to shed much light either way on the regularity of the Navier-Stokes equations, other than to say that physical considerations cannot be used to justify regularity. For instance, in a two-dimensional universe one would encounter the same physical breakdown of Navier-Stokes at microscopic scales, yet the Navier-Stokes equations are known to enjoy global regularity in two dimensions.

The general question of rigorously deriving macroscopic equations of motion for continuous media from the microscopic equations of motion of N-body systems is an excellent one in general, and one for which we still have very few actually rigorous results (as you suggest, one often has to make a lot of unjustified assumptions about “mixing” before one can derive a macroscopic model, lest one run into Maxwell’s demon or some other similar obstruction). Typically, though, one has had to first understand the regularity theory of the macroscopic equations quite well before one could ensure convergence of the microscopic equations to the macroscopic model, so this task may be even harder than the global regularity problem (unless one worked locally in time, of course). But it is perhaps conceivable that Navier-Stokes with a stochastic noise component could be viewed, conjecturally at least, as the limit of some stochastic particle model.

28 September, 2010 at 8:33 am

Robert Coulter: Based on Professor Tao’s suggestion at the opening of this blog, the following is my current opinion about the implications of a Navier-Stokes resolution for conduit flow.

Note: In this post a “perfect fluid” is a fluid that has no molecular bumpiness (i.e., a perfect continuum), but may have any viscosity.

Dimensionless quantities in fluid flow (I will speak only in terms of flow in conduits, since this is what I know, not airplane wings, e.g.) are the Reynolds number, the friction factor (Darcy or Fanning), and the relative roughness. The relationship of these three quantities is depicted very well in the Moody diagram; see: http://www.mathworks.com/matlabcentral/fx_files/7747/1/moody.png (For those who prefer equations, see Colebrook-White: http://en.wikipedia.org/wiki/Darcy_friction_factor_formulae )

The relative roughness (on the right side of the diagram) is the degree of roughness of the inside wall of the conduit relative to the conduit diameter. The Reynolds number (at the bottom) is a measure of the degree of mixing/turbulence occurring inside the conduit. The friction factor (on the left) indicates the conduit’s resistance to flow and is usually the key parameter engineers want from the diagram. Upon examining the diagram, one minor peculiarity is that the velocity is embedded twice: in the Reynolds number and in the friction factor. This makes for some minor hassles, in that the velocity cannot be solved for explicitly from the other quantities. An iteration process is needed, which can be started by guessing the friction factor, solving for the Reynolds number, and then using the diagram to fetch a new friction factor, repeating the process until there is convergence at the correct point on the diagram.
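That iteration is easy to script. As a sketch (not from the comment itself; the function name, the initial guess f = 0.02, and the tolerances are my own choices), here is the Colebrook-White equation 1/sqrt(f) = -2 log10(eps/(3.7 D) + 2.51/(Re sqrt(f))) solved by fixed-point iteration on x = 1/sqrt(f):

```python
import math

def colebrook_friction_factor(re, rel_roughness, tol=1e-10, max_iter=100):
    """Darcy friction factor from the Colebrook-White equation.

    re: Reynolds number; rel_roughness: eps/D (wall roughness over diameter).
    Iterates x_new = -2 log10(rel_roughness/3.7 + 2.51 x / Re) with x = 1/sqrt(f).
    """
    if re < 4000:
        raise ValueError("Colebrook-White is meant for turbulent flow (Re >~ 4000)")
    x = 1.0 / math.sqrt(0.02)  # initial guess: f = 0.02
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return 1.0 / (x * x)
```

For Re = 1e5 and a relative roughness of 1e-4, this converges in a handful of iterations to a friction factor near 0.0185, consistent with reading the Moody diagram.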

The most striking oddity in the diagram, and one that has fascinated me for many years, is that even though it is a log/log plot of two carefully chosen dimensionless quantities, the curves are still highly nonlinear. Also, at Reynolds numbers of 2000-10000, there is a huge undefined gap between where laminar flow breaks down and where transition flow begins. This is called the “critical region.” Another important characteristic is that the friction factors in the laminar region depend only on the Reynolds number, while the friction factors in the fully turbulent region depend only on the relative roughness.

Note that the Moody diagram and the Colebrook equation are based on our experience with REAL fluids and REAL conduits. Real fluids have “bumpiness” at the molecular level, and there are no perfectly smooth conduits (so the bottom curve may not represent a perfectly smooth conduit).

What would the Moody diagram of a perfectly smooth fluid look like? Would the perfect-conduit line move? The resolution of the NS problem may help in this regard.

In summary, the main oddities, IMO, in the Moody diagram are:

1. The turbulent curves are highly nonlinear for decreasing Reynolds number.

2. Discontinuity (of the friction factor) between the laminar and turbulent regions. (As far as I know there is no known mathematical model for this region.)

3. Position and shape of the smooth-conduit line (for a perfect pipe / perfectly smooth fluid).

In regard to #1, the Colebrook equation illustrates the nonlinearity of the friction factor in the turbulent region. The friction factor is implicit in the Colebrook equation and must be solved for by iterative methods.

The major oddity is #2 above. This is the “critical” region, where laminar flow breaks down. Here there seems to be a complete absence of models and understanding. Engineers are instructed that this flow may be laminar or turbulent, but there is no guide for any given situation. Some versions of Moody’s diagram show curves in this region; I think this is done only to provide a “number,” as most engineers need one. The Colebrook equation appears to break down here (it does not converge); of course, it was not meant to represent this region. Does the breakdown of the Colebrook equation offer any insight here? Will resolving NS in either direction help here? I believe so.

The start of this “critical” region is generally understood to be where laminar flow is no longer “guaranteed.” It is generally accepted to be a Reynolds number of 2000. Below this value, laminar flow will be maintained regardless of any protrusions or interruptions to the laminar flow regime; the flow “dampens out” the interruptions. This is generally attributed to the viscosity of the fluid. Beyond Re 2000, there is the first onset of instability in the flow. An interruption may send at least some portion of the fluid into an “oscillatory” or “sinusoidal” state.
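For concreteness, here is a small sketch of the dimensionless bookkeeping. The function names are my own, and the 2000/10000 bounds simply package the rule-of-thumb thresholds quoted in this discussion (the exact cutoffs are conventions, not sharp physical constants):

```python
def reynolds_number(density, velocity, diameter, dynamic_viscosity):
    """Re = rho * v * D / mu for flow in a circular conduit (dimensionless)."""
    return density * velocity * diameter / dynamic_viscosity

def flow_regime(re):
    """Rule-of-thumb classification: laminar below Re ~ 2000, the unpredictable
    'critical' region up to ~ 10000, turbulent beyond (thresholds as quoted above)."""
    if re < 2000:
        return "laminar"
    if re <= 10000:
        return "critical"
    return "turbulent"

# Example: water (rho ~ 998 kg/m^3, mu ~ 1.0e-3 Pa*s) at 1 m/s in a 5 cm pipe
# gives Re ~ 49900, well into the turbulent region of the Moody diagram.
re = reynolds_number(998.0, 1.0, 0.05, 1.0e-3)
```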

At this point it may be useful to define laminar and turbulent flow. To engineers, the definitions are based on how the friction factors are calculated: laminar flow IS when the laminar-flow equation for the friction factor is applicable; turbulent flow IS when the Colebrook equation is applicable. These definitions, however, are not very descriptive of the fluid’s behavior. I will attempt descriptive definitions as follows:

For the purposes of these descriptive definitions, imagine fluid flowing in a circular conduit, and an arbitrary radial line from the center of the fluid flow to the inside wall of the conduit. We assume that the flow is at steady state and sufficiently downstream of any “macro” disturbances (the fluid is given time to recover):

1. Laminar flow: There are no flow reversals along the radial line.

2. Turbulent flow: There are flow reversals along the radial line. By extension, one can suggest that more turbulence corresponds to a greater degree/measure/magnitude of flow reversals along the radial line.

3. Critical region: We cannot predict with sufficient accuracy the degree (if any) of flow reversals that occur along the radial line for a given situation.

Another key point about the Moody diagram needs to be made. The curve at the very bottom is for an ideal, perfectly smooth conduit. This is a conduit with, at least in theory, no protrusions (disturbances) on the inside wall. Of course, perfectly smooth conduits do not exist. Also, there are no fluids that are perfectly “smooth” in the sense of a perfect continuum, because of molecular interactions at very fine scales (I will call such perfectly smooth fluids “perfect” from here on). If a perfectly smooth conduit and a perfect fluid existed, where would this curve be? Would it be near this one or further down? Would it be an extension of the laminar line, with no discontinuity? I will discuss this further below. I believe that a resolution of the NS problem will have its greatest impact on the position of this curve.

Warning: from here on out, this is my opinion and conjecture. Do not accept it as fact!

Upon learning about the NS Clay prize problem, I was very skeptical that any resolution of any supposed mathematical “difficulties” with the NS equations would have any impact on the practical matters of fluid-flow analysis and engineering. Part of this belief, or maybe bias at the time, was that theoretical models have little applicability to fluid flow, with the exception of the laminar regime. The more I looked into it, the more I came to believe that the mathematicians may have something to contribute here. Part of my conversion is based on:

1. NS appears to be a very good model within the context of its assumptions. More to the point, even the engineering models above (Colebrook, Darcy, etc.) make assumptions of their own. For example, a Newtonian fluid is assumed in both cases.

2. I now believe that the NS model is good enough that, regardless of the eventual outcome of the resolution of the NS singularity question (singularities exist, or singularities do not), something will be learned in either case that is applicable to our understanding of fluid behavior.
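(An illustrative aside, not part of the thread: the engineering models mentioned above can be sketched in a few lines. In the laminar regime the Darcy friction factor is f = 64/Re; in the turbulent regime the implicit Colebrook equation is commonly solved by fixed-point iteration. The Reynolds numbers and roughness values below are arbitrary examples.)

```python
import math

def friction_factor(re, rel_rough=0.0):
    """Darcy friction factor: 64/Re when laminar, otherwise the implicit
    Colebrook-White equation solved by fixed-point iteration on 1/sqrt(f)."""
    if re < 2300:                      # conventional laminar threshold
        return 64.0 / re
    inv_sqrt_f = 4.0                   # initial guess (f ~ 0.0625)
    for _ in range(50):
        inv_sqrt_f = -2.0 * math.log10(
            rel_rough / 3.7 + 2.51 * inv_sqrt_f / re)
    return 1.0 / inv_sqrt_f ** 2

print(friction_factor(1000))           # laminar: 0.064
print(friction_factor(1e5))            # smooth turbulent: ~0.018
print(friction_factor(1e5, 0.01))      # rough pipe: noticeably larger f
```

The iteration converges quickly because the Colebrook map is a contraction in 1/sqrt(f); increasing the relative roughness raises the friction factor, which is exactly the spreading of the curves on the Moody diagram.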

I will venture a guess as to how a resolution of the NS equations may affect our understanding of real fluid flow. It is likely that a resolution in either direction will significantly improve the ability to model perfect and real fluids. The bottom-line question is what a perfect fluid (no molecular bumpiness) would do in a perfect conduit (perfectly smooth inside wall) after moving past a disturbance for a sufficient length. I suggest that in the NS singularity-positive case turbulence occurs, and in the NS singularity-negative case it does not.

Case #1: NS develops singularities. That is, the flow velocity or its derivative is not defined at some point. IMO, this would be the most exciting development. A new regime of flow could then be defined; maybe it could be called “hard” turbulence. Hard turbulence, I conjecture, would correspond to the velocity becoming infinite along the radial line or experiencing a discontinuity or step. At this point, a number of questions will come up:

1. Where along the Moody diagram will hard turbulence appear? In the critical region? At the beginning of turbulence? Or much further to the right, in what is currently called full turbulence?

2. Depending on the type of singularity, will Lagrangian particle trajectories be conserved?

3. If Lagrangian particle trajectories are not preserved, what happens? Does the particle split? Or does it become like quantum mechanics, where it will not be possible to define a particular particle trajectory with certainty, just the probability of a specific path?

4. Will there be a new quantity, maybe call it gamma, that indicates the rate of singularity formation per unit volume of flow? Will this in time replace the Reynolds number as the de facto measure of turbulence? Will the gamma value depend on Reynolds number, relative roughness, or both?

5. Once disturbed, always disturbed (critical velocity, shear rate, etc.), even if the flow proceeds into a perfectly smooth conduit (the bottom curve)?

There would be no extension of the laminar line to high Re# for perfectly smooth pipes; the discontinuity stays in place for the perfectly smooth pipe / perfect fluid. Once turbulence starts, it can be maintained by holding the velocity constant even in perfectly smooth pipes.

6. Turbulence can be caused as a mathematical fact by the Re# components (viscosity, velocity, density, characteristic length), in addition to protrusions/disturbances (pipe roughness, molecular “bumpiness”). Perfect fluids can develop turbulence.

7. A critical region exists for perfect conduits and perfect fluids. It can be defined for certain Re# and roughness per NS.

8. The whole nature of fluids changes. Speculation about the behavior of perfect fluids in stars?

9. Since perfect fluids can develop hard turbulence, how close are they to real fluids in behavior? Will the perfectly smooth line move at all, or is it already fairly close to the real-world “smooth” conduit and real-world fluids?

Case #2: NS does not develop singularities.

From my readings, this seems to be what most mathematicians want. Unfortunately, IMO, it would have little impact outside the mathematical community. Better algorithms for CFD may become available, but that is about it.

Concerning the Moody diagram above:

1. Turbulent flow will be interpreted as simply highly involved, complex, or disturbed laminar flow, with “soft” turbulence maintained only by roughness in the conduit (for a perfect fluid).

2. If disturbed to any degree, the turbulent flow will recover if all disturbance is removed in a perfectly smooth pipe with a perfect fluid (velocity held constant). The laminar line extends to high Re# for perfectly smooth pipes and perfect fluids. Note: it may take a very long length of perfect pipe to damp a perfect fluid if the Re# is very high. Nevertheless, this will occur in a finite length for any nonzero viscosity. A perfect fluid in a perfect pipe in the inviscid case may oscillate indefinitely (after a disturbance). We assume no slip at the wall in all cases.

3. Turbulence will only be a macro phenomenon. The basic nature of fluids will not change much.

4. The Moody diagram for a perfect fluid would show the roughness curves spreading out more, with the curves for smooth pipes approaching the laminar line at the bottom.

5. The critical region does not exist for perfect conduits and perfect fluids. The critical region still holds for a certain ratio of Re# to pipe roughness: the point where obstructions are likely to create oscillatory flow.

25 October, 2010 at 2:00 am

Lizhi Du: Tao is the only smart guy in this world! Only you can solve Hard Problems, other people never can! So, reject him even without any reason!

31 December, 2010 at 5:35 am

Robert Coulter: Fefferman statement A, when applied to the Euler equations, appears true if one is allowed to arbitrarily specify the initial time derivative of the velocity field. The simple case is to set all the initial velocity time derivatives equal to zero. This creates the steady-state case for all time. The pressure field is solved from the remaining convective terms. This would appear to be true for any velocity field meeting condition (4).

I assume, though, that the time derivatives cannot be arbitrarily specified. One must show that statement A holds regardless of the value of the initial velocity time derivatives.

In the NS case, the steady state condition cannot be maintained without an external force to exactly counteract the drag term on the right.

24 May, 2012 at 3:37 am

rbcoulter: I now have reservations about the above post. The pressure field can be recovered from any arbitrarily chosen velocity field through the pressure Poisson equation. However, the pressure cannot be multivalued at a point, which, I believe, could happen from the integration along different spatial directions. The only way to “force” the pressure to be single-valued at every point is to allow the time derivatives to be nonzero. Because of this, I suspect most Euler flow fields are unstable. If anyone knows otherwise, please let me know.

29 January, 2011 at 2:25 pm

Hafid: Dear Terence,

If we have

1) An equation (E) for which existence of a solution is known, but uniqueness cannot be proved.

2) A perturbed equation (Ep) of (E) for which both existence and uniqueness of a solution hold.

3) The solutions (Sp) of (Ep) converge strongly to the solutions (S) of (E).

Question:

Do (2) and (3) imply uniqueness of the solution of (E)?

29 January, 2011 at 2:41 pm

Terence Tao: Unfortunately, no; there can exist “ghost solutions” that solve the original problem, but are not limits of the perturbed problem.

For instance, consider the ODE $u' = |u|^{1/2}$ with initial data $u(0)=0$. It has at least two solutions, $u(t) = 0$ and $u(t) = t^2/4$. But if one considers the smoothed ODE $u' = \min(|u|^{1/2}, |u|/\varepsilon)$ for any $\varepsilon > 0$ with the same initial condition $u(0)=0$, there is only one solution (the trivial zero solution), by Gronwall’s inequality. Thus we see that uniqueness is lost in the limit $\varepsilon \to 0$ due to the presence of the “ghost solution” $u(t) = t^2/4$.
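(An illustrative aside, not part of the thread: the non-uniqueness example can be checked numerically. The sketch below assumes the standard example u' = |u|^{1/2} with the Lipschitz smoothing min(|u|^{1/2}, |u|/eps); forward Euler from u(0) = 0 stays at zero because the right-hand side vanishes there, while u(t) = t^2/4 is verified to solve the unsmoothed ODE.)

```python
import numpy as np

def rhs_smoothed(u, eps):
    # Lipschitz regularization of |u|^(1/2): near u = 0 it equals |u|/eps
    return min(abs(u) ** 0.5, abs(u) / eps)

def euler(f, u0, t_end, n):
    # Plain forward-Euler integration
    dt, u = t_end / n, u0
    for _ in range(n):
        u = u + dt * f(u)
    return u

# Smoothed ODE from u(0) = 0: the trajectory never leaves zero,
# since the regularized right-hand side vanishes at u = 0.
u_final = euler(lambda u: rhs_smoothed(u, 1e-3), 0.0, 1.0, 1000)
print(u_final)  # 0.0 exactly

# The "ghost solution" u(t) = t^2/4 solves the unsmoothed ODE u' = |u|^(1/2):
t = np.linspace(0.1, 1.0, 10)
residual = np.max(np.abs(t / 2 - np.sqrt(t ** 2 / 4)))
print(residual)  # ~0, up to rounding
```

The point of the demo is that no numerical scheme applied to the smoothed problem can ever discover the ghost solution; it exists only in the singular limit.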

2 February, 2011 at 10:56 am

Hafid: Under what conditions do I get uniqueness of the solution of (E)?

6 February, 2011 at 9:12 pm

Santiago: Prof. Tao,

What is your opinion of Dr. Amador Muriel’s “An Exact Solution of the 3D Navier-Stokes Equation”?

7 February, 2011 at 6:50 am

Robert Coulter: It appears that the main argument of the Muriel paper is that some other fluid flow model (referred to as “TE” in the paper), which has its roots in various non-continuum theories (kinetic theory, Liouville’s equation, etc.), exactly solves the NSEs. The only supposed “proof” of this is that this “TE” model’s pressure calculation, for a certain flow condition, is “the same” as the pressure that would be calculated from the NSEs.

It appears that the author does not understand the Clay prize problem. Clay is not looking for models comparable or superior to NS; they are looking for inconsistencies within NS itself.

IMO, the author’s only intent here is to somehow validate a model by comparing it to NS and then argue that NS is solved, or that NS is lacking. The logic seems to go around in a circle.

9 February, 2011 at 9:17 am

Gandhi Viswanathan: Dear Professor Tao,

I understand the need for a (conserved or monotone) quantity which is subcritical or critical (under scaling) to show global-in-time regularity. However, what about for proving blow-up? Do you really need a subcritical or critical quantity?

I would guess that the answer is yes, but I am unable to see this clearly. For instance: imagine we find some time-dependent functional of the velocity W(t,u) (e.g., some Sobolev norm) which blows up at some finite time T* (where of course the finite time singularity obeys the BKM criterion for blow-up and so forth). Then, even if this quantity W is supercritical, are we not still able to show blow-up?

So, it seems to me that to show blow-up, we do not require subcritical or critical quantities. Is there something wrong with this reasoning?

If you could give a (even if brief y/n) reply, I would very much appreciate it.

Thank you.

9 February, 2011 at 10:09 am

Terence Tao: Interesting question! It is in principle conceivable that one could establish blowup via a monotonicity formula that gives lower bounds for the growth of a supercritical quantity. However, in practice, in order to establish such a monotonicity formula, one often has to obtain upper bounds on the growth of error terms that are just as supercritical as the main quantity in the formula, and this is unlikely to be possible for the reasons discussed in the main post. (This is particularly the case given that we know that there are many solutions to Navier-Stokes that do _not_ blow up, and so any such monotonicity formula must somehow use special properties of the initial data, and then must work hard to show that these properties are somehow preserved by the evolution, and then we are back to the problem in the main post.)

From a more heuristic perspective, the blowup mechanism (if such exists) is likely to come from the delicate nonlinear behaviour of the high frequency modes, and a supercritical quantity (which is dominated by low frequency behaviour) is unlikely to be capable of capturing this behaviour properly. While one cannot rule out the possibility of an algebraic miracle that somehow makes the supercritical quantity monotone, forcing blowup, it is still very strange that a property of the low modes would be responsible for making the high modes blow up, and so I would doubt that this possibility actually occurs.

9 February, 2011 at 11:51 am

Gandhi Viswanathan: Dear Professor Tao,

Thank you for the prompt reply, which goes straight to the heart of the matter.

23 February, 2011 at 6:35 am

David Purvance: Correct me if I am wrong, but it appears that a Fourier-Laplace analysis shows that 3D solutions only have to be integrable, not necessarily smooth: http://purvanced.wordpress.com

23 February, 2011 at 7:51 am

Gandhi Viswanathan: Dear David,

Your quantity $J$ has a $u$ inside it. This masks a nonlinearity in $u$.

The crucial point is this: the nonlinearity can excite higher frequency modes (because the quadratic nonlinearity becomes a convolution in the Fourier domain). This excitation of higher frequency modes may go on uncontrolled (the so-called direct cascade). Since the time scales become smaller for the higher frequency modes, what we cannot rule out is the Fourier transform of the velocity field acquiring a power-law tail in finite time. Such a power-law tail automatically guarantees the breakdown of smoothness, because all smooth functions have rapidly decaying Fourier transforms. If you go back to your own results, you will see that you cannot rule out the possibility of this type of finite time singularity.
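(An illustrative aside, not part of the thread: the convolution mechanism can be seen in a two-mode example. The grid size and modes below are arbitrary choices; squaring a field whose Fourier support stops at mode 5 excites mode 8 = 3 + 5, which was absent before.)

```python
import numpy as np

# A quadratic nonlinearity is a convolution in Fourier space:
# squaring u = cos(3x) + cos(5x) creates the sum mode 3 + 5 = 8.
N = 64
x = 2 * np.pi * np.arange(N) / N
u = np.cos(3 * x) + np.cos(5 * x)

spec_u = np.abs(np.fft.rfft(u)) / N        # normalized spectrum of u
spec_u2 = np.abs(np.fft.rfft(u * u)) / N   # spectrum of the quadratic term

print(spec_u[8])   # ~0: mode 8 is absent in u
print(spec_u2[8])  # 0.5: mode 8 excited by the nonlinearity
```

Iterating this step (each product of modes j and k feeding mode j + k) is the direct cascade: the support of the spectrum widens with every application of the nonlinearity.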

Moreover, the addition of dissipation (via Laplacians) is unable to control the cascade unless a power 5/4 of the Laplacian is used. Professor Tao has improved this 5/4 result at the margins, via an additional logarithmic factor. Excepting such marginal improvements, the 5/4 result, originally due to Lions, is still the best result to date. (I recently reviewed this 5/4 result of Lions in RSTA: doi: 10.1098/rsta.2010.0257.)

It is perhaps due to the nature of these obstructions that Professor Fefferman has suggested the investigation of weak solutions. Another intriguing problem is that there appears to exist a time T* such that, no matter how small the (positive) viscosity is made, half the energy is dissipated away by this time T*. This result is really amazing and suggests new questions: how can half the energy be dissipated away independently of the size of the viscosity? We need to understand how so much energy can be dissipated independently of the viscosity. There is something interesting going on…

23 February, 2011 at 11:20 am

David Purvance: Dear Gandhi,

Rather than here, perhaps Prof. Tao would prefer that we carry on this conversation at my blog, but…

My solution fully recognizes that the Navier-Stokes equation creates a nonlinear eigenproblem, precisely for the reason you cite. Many nonlinear eigenproblems are documented in the literature, and, in the case at hand it only means that Navier-Stokes’ eigenvectors have a relationship more complicated than traditional linear eigenproblems.

16 September, 2013 at 7:23 am

Anonymous: Dear Gandhi and Terry,

In this paper mentioned above

http://rsta.royalsocietypublishing.org/content/369/1935/359.abstract

The L^1 estimate on the frequency side is interesting to me, but I don’t see how the statement right after (4.10) works. Can either of you give a bit more clarification?

Thanks a lot!

bk

16 September, 2013 at 8:24 am

Gandhi: Dear Anonymous, thank you for pointing this out. You are right. TMV and I must see how much is salvageable. There is a gap, and we will see if it can be fixed. As soon as we figure it out, we will make public any correction and take any other necessary steps.

23 February, 2011 at 4:52 pm

Robert Coulter: “How can half the energy be dissipated away independently of the size of the viscosity?”

Gandhi Viswanathan

Isn’t this really more about why the “mean” Laplacian of the velocity increases as the viscosity decreases? Isn’t this intuitive from what we know of turbulence?

The system adjusts to lower viscosity by decreasing the length of the mixing scale.

If you meant something else, please let me know.

24 February, 2011 at 12:51 pm

Robert Coulter: It has occurred to me that it may be possible to construct a trivial blowup solution that stays within the constraints of Fefferman’s conditions (4), (5) and (7). Although the force appears to be sufficiently constrained, the “power draw” from the surroundings appears not to be. The local power draw density can be expressed as:

PDD = Fe * v

where Fe is the applied force and v is the local velocity. As long as the external force is present, the power draw can be made as large as needed by increasing the local velocity. One may imagine that a singularity could possibly be created this way, perhaps in some type of energy-focusing scenario. The singularity would have a locally infinite power draw from the surroundings (not from the flow field). The total bounded energy, condition (7), could be maintained, if need be, by applying a modest negative force to the rest of the flow field.

If I am missing something here, please let me know.

11 March, 2011 at 6:23 am

Existence And Smoothness of The Navier-Stokes Equation « Hezhigang's Blog[...] Why global regularity for Navier-Stokes is hard [...]

31 March, 2011 at 3:41 am

JUSTICUS: Can anyone comment on the total kinetic energy? What is its relation to regularity?

31 March, 2011 at 11:10 am

Robert Coulter: Apparently, the finiteness of the initial kinetic energy is enough to show that smooth initial conditions stay smooth for all time in 2D. In 3D this is not sufficient (see Professor Tao’s discussion above) because the energy scaling is different.

5 April, 2011 at 12:50 am

JUSTICUS: But basically, if the solution blows up, must the total kinetic energy increase as a function of time? (Normally one has dE/dt = -vZ, where Z is the enstrophy and v is the viscosity.)

5 April, 2011 at 2:42 am

Robert Coulter: No. In fact the total KE must decrease monotonically (assuming no external force and a nonzero initial velocity field) for NS. This is because the viscosity term is always removing energy from the system, and no new energy is being added.

This does not exclude the energy from concentrating in small regions of the flow field (at the expense of reducing it in other regions). Note that the total energy expression is an integral over the entire field and therefore cannot in itself exclude a point (velocity) having an infinite value.
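(An illustrative aside, not part of the thread: the two facts above, viscosity monotonically removing energy while the nonlinearity merely redistributes it between scales, can be seen in miniature in the 1D viscous Burgers equation, which shares the quadratic advection and viscous terms with NS. The discretization below is a minimal sketch; the grid size, viscosity and step count are arbitrary choices.)

```python
import numpy as np

# 1D viscous Burgers: u_t + u u_x = nu u_xx on [0, 2*pi), periodic.
# Spectral derivatives + forward Euler; parameters chosen for stability.
N, nu, dt, steps = 64, 0.1, 1e-3, 200
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers
u = np.sin(x)                            # smooth initial data

energies = []
for _ in range(steps):
    energies.append(np.mean(u ** 2))     # kinetic energy density
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(1j * k * u_hat))
    uxx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
    u = u + dt * (-u * ux + nu * uxx)
energies.append(np.mean(u ** 2))

print(energies[0], energies[-1])  # energy decays monotonically from 0.5
```

The advective term moves energy toward higher wavenumbers (where viscosity bites harder) but does not create energy; the recorded sequence decreases at every step.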

23 April, 2011 at 1:42 am

Hafid: 1) When u(0,x)=0, is the solution u=0 unique?

2) Does u(t,x)=0 satisfy the conditions claimed by the Clay Foundation?

Thank you.

23 April, 2011 at 4:04 am

Robert Coulter: No. There are many exact solutions to the Navier-Stokes equations.

See “Vorticity and Incompressible Flow” by Majda and Bertozzi for many examples.

Also see the Taylor-Green Vortex, http://en.wikipedia.org/wiki/Taylor–Green_vortex. The zero solution is the most trivial of them all.

The Clay Foundation is looking for either a blow-up solution that starts from a non-blowup state, or a proof that the Navier-Stokes equations do not “blow up” for reasonable starting conditions. (Note that the Clay problem defines what they (Fefferman) consider to be reasonable starting conditions.)

23 April, 2011 at 8:26 am

Hafid: Thank you for your reply.

What is your opinion on this paper?

http://arxiv.org/abs/1104.3255

23 April, 2011 at 9:16 am

Terence Tao: The deduction after (3.48) is incorrect. The supercritical expression appearing on the right-hand side in (3.48) has not been demonstrated to be bounded uniformly in $\varepsilon$, so one has not demonstrated that it converges to zero as $\varepsilon \to 0$. As discussed in the main blog post, control of a supercritical expression is in fact the principal difficulty in using standard techniques (such as energy estimates) to control Navier-Stokes solutions in three and higher dimensions.

Also: it is against the policy of this blog to use the comment space to promote one’s own research: see http://terrytao.wordpress.com/about/

Such comments will be subject to deletion.

23 April, 2011 at 1:10 pm

Hafid: Thank you for your reply.

Theorem 3.1 means that u satisfies this condition.

All my respect Professor Tao.

I thought the site’s goal is to enrich the ideas.

23 April, 2011 at 8:00 pm

Terence Tao: Theorem 3.1 shows that each $u_\varepsilon$ lies in $H^l$ for each $l$ and $\varepsilon$, but does not bound the $H^l$ norm of $u_\varepsilon$ uniformly in $\varepsilon$, which is what is needed for the argument after (3.48). Instead, it provides a bound which depends on $\varepsilon$ in an unspecified manner (and, for supercritical $l$, the bound will blow up as $\varepsilon$ goes to zero).

Your approach is an instance of Attempt 1 of Strategy 3 as listed in the blog post (using weaker or approximate notions of solution), and as mentioned in the discussion, the error commonly made in this approach is to mistake qualitative control (in this case, the assertion that $u_\varepsilon$ lies in $H^l$ for each $\varepsilon$) for quantitative control (in this case, the assertion that the $H^l$ norms of $u_\varepsilon$ are bounded uniformly in $\varepsilon$).

As stated in the blog policy at http://terrytao.wordpress.com/about/ , the comments of this blog are not to be used for promotion of one’s own research.

23 April, 2011 at 10:41 pm

Hafid: Thank you for your useful comments; I will send you the second version.

26 April, 2011 at 9:12 am

Anonymous: There are at least two typos in the abstract of your paper. How seriously do you expect to be taken?

Life is difficult for everybody, I sympathize if you really suffer from the situation, but try to reconsider your goals in life, reality is the best you will have. You will (most probably) gain nothing from taking time from Terence without putting serious work to make such interactions efficient.

Good luck.

28 April, 2011 at 11:57 am

Hafid: Dear Professor Tao,

As I promised, here is the second version (v2):

http://hal.archives-ouvertes.fr/hal-00586485

28 April, 2011 at 12:27 pm

Gandhi Viswanathan: Dear Hafid,

If you add hyperviscosity, it becomes a completely different problem. One can then show that finite time singularities are prohibited.

Lions showed more than 40 years ago that hyperviscosity leads to global regularity.

This happens precisely due to the issue of supercriticality and subcriticality discussed by Professor Tao. With the 5/4 Laplacian of Lions, the energy is critical.

The challenge is to study the standard parabolic Navier-Stokes system, for which the energy is supercritical.

Lions’s 5/4 result is the best to date (excepting marginal logarithmic improvements to the 5/4 result, due to Professor Tao).

Find us a critical or subcritical conserved or monotone globally controlled quantity for the standard parabolic N-S system and you will catch people’s attention!

28 April, 2011 at 2:11 pm

Hafid: Dear Gandhi Viswanathan,

Professor J. L. Lions did not prove the strong convergence.

The solution is in Roma.

28 April, 2011 at 7:31 pm

timur: In your cited work [9] it appears that you mistook subsequential convergence for sequential convergence. When you have a compact sequence you can only deduce that a subsequence converges, so in Theorem 3.3 of the current paper, which is recalled from [9], you cannot say the whole sequence u_epsilon converges to a weak solution of the NSE.

28 April, 2011 at 11:13 pm

Hafid: Thank you for your interest in my paper. But it is also true for a sequence!

29 April, 2011 at 7:02 am

Hafid: “Standard methods from PDE appear inadequate to settle the problem. Instead, we probably need some deep, new ideas.” (Charles L. Fefferman)

http://arxiv.org/abs/1104.3255

29 April, 2011 at 8:25 am

Terence Tao: The inequality (3.17) is incorrect. (3.11) bounds a supercritical norm, but in (3.16) it is a subcritical norm that needs to be bounded instead.

I strongly recommend that you learn the distinction between subcritical, critical, and supercritical quantities, as discussed in my blog post above. Without this understanding, any attempt to resolve the Navier-Stokes global regularity problem is certain to fail (and the error can usually be located by finding the first step in the argument in which a uniform bound on a critical or subcritical quantity is claimed).

To be blunt, your paper does not contain the deep, new ideas that Fefferman was alluding to. Please do not post here again.

29 April, 2011 at 10:45 am

Robert Coulter: This may be a trivial question for the mathematicians participating on this blog:

Much reference is made to scale invariance and its role in possible blowup of NS. Looking at Professor Tao’s explanation of this property: is the possible blowup occurring on constant rescalings of the original solution moving backward in time? In other words, if one has a solution up to time T, notices something odd at T, and then rescales, is the oddity magnified but at a previous time, because the rescaled solution (say u*) has the oddity occurring at T* = T/lambda^2, which for lambda > 1 seems to be previous to the original solution?

29 April, 2011 at 10:53 am

Terence Tao: It depends to some extent on how one normalises things, but if a solution does something bad (e.g. blows up) at some time, then the rescaled solution does so at the correspondingly rescaled time. (As mentioned in the blog post, for $\lambda > 1$, one should view the fine-scale solution as living at spatial scale $1/\lambda$ and time scale $1/\lambda^2$ relative to the original.)

5 May, 2011 at 7:14 am

Umesh: “Thus, for instance, if a unit-scale solution does something funny at time 1, then the rescaled fine-scale solution will exhibit something similarly funny at spatial scales 1/\lambda and at time 1/\lambda^2.”

Excuse my stupidity, Prof. Tao; you just corrected this statement in your previous post, and my initial confusion has cleared. But can you please also clarify why the above statement says “…rescaled fine-scale solution will exhibit something similarly funny…”? Does it actually mean “…rescaled coarse-scale solution…”? I know I am muddling up something very basic here, but a little of your time to clarify this would help me greatly. Thank you for your time, Sir.

5 May, 2011 at 7:22 am

Terence Tao: In that particular context, the unit scale solution is the original solution and the fine scale solution is its rescaling (or, if one wishes, one could reverse which is called the unit scale solution and which the fine scale solution; they are both rescalings of each other, one by $\lambda$ and one by $1/\lambda$).

6 May, 2011 at 5:55 am

Umesh: Thank you, Prof. Tao. I have another question regarding the Navier-Stokes equation problem. What exactly is meant by ‘arbitrarily large initial data’?

13 May, 2011 at 8:16 am

Terence Tao: This is not a precise technical term, but in the current discussion it basically refers to initial data which is not known to be bounded in some function space norm by a fixed quantity.

29 April, 2011 at 12:57 pm

Hafid: I told my students that you are a good example, but you lack the education.

Tchao

29 April, 2011 at 1:41 pm

Hafid: You are the best.

30 April, 2011 at 8:32 am

Robert Coulter: But if u is finite, say u(T,x) = a, then what is the value at the corresponding rescaled point?

30 April, 2011 at 2:14 pm

Robert Coulter: It seems to me all scale invariance implies is that later solutions can be “mapped” from earlier solutions. However, the later solutions have magnitudes reduced by the factor lambda and are further away from the origin. Earlier mappings have magnitudes increased by the factor lambda and are closer to the origin. This seems to make sense when imagining, e.g., a decaying vortex. Anyway, I am just having trouble visualizing how scale invariance may depict blowup later in time.

30 April, 2011 at 6:21 pm

Gandhi Viswanathan: Dear Robert,

When you refer above to ‘mapped’ solutions to earlier times, etc. we must take care about whether we are talking about mapping to the SAME solution or a NEW solution.

Scale invariance can refer to a single (same) solution, in which case the solution is said to be self-affine. Initially smooth self-affine blow-up solutions have been ruled out for the 3-D Navier-Stokes and Euler equations (I forget the refs.). On the other hand, scale invariance can refer to the PDEs themselves. In other words, given a solution, we can create a whole family of (distinct) rescaled solutions. The latter is the type of scale invariance which is relevant to the Navier-Stokes system. (Notice the superscript labelling the rescaled solution.)

If there is supercritical scale invariance AND nonlinearity, very interesting things can happen in principle, for the following reason. In some short period of time T, the quadratic nonlinearity can excite higher frequency Fourier spatial modes. So the Fourier transform can become ‘wider’ or ‘fatter’ (due to convolution). If this were the only effect, it would not be a big problem. But the supercritical scale invariance allows the same type of process to occur again, but now at a much faster NEW time scale. This can be repeated over and over, faster and faster, such that in finite time the Fourier transform may acquire a power law tail, i.e. it may become genuinely ‘fat tailed’ and no longer decay rapidly. If this happens, then a singularity has developed, because the Fourier transform of a smooth solution must decay more rapidly than any power law.

This is more or less my understanding of why Professor Tao states in his original post that the obstruction is supercriticality.

Indeed, as expected, when the Navier-Stokes equations are perturbed to make them critical (or subcritical), the ‘cascade’ process is disrupted because, as the energy cascades to the higher spatial frequency modes, the time scales cannot get faster and faster. So it takes an infinitely long time for the Fourier transform to sprout a power-law tail. In my understanding, this fact alone allows one to prove global-in-time regularity for these perturbed problems (such as hyperviscosity models).

About ‘visualizing’ scale invariance in blow-ups, etc.: one suggestion is to understand fractals, because they are very visual, and fluid turbulence has much in common with fractals and power laws, e.g., Kolmogorov’s 5/3 power-law spectrum. (There is actually a somewhat close relationship between scale invariance, fractals and power laws.)

1 May, 2011 at 3:54 am

Robert Coulter: Gandhi –

Thanks for your detailed and descriptive explanation. I understand the point now.

Leray had conjectured a (backward) self-similar singular solution. This was disproved by Necas-Ruzicka-Sverak (1996).

3 May, 2011 at 2:01 pm

Robert Coulter: The terms supercritical, critical and sub-critical appear to mean something different to mathematicians. Am I interpreting this correctly in saying that the scaling of the velocity (as it relates to NS) is sub-critical ($1/\lambda$), while the energy scaling is supercritical ($\lambda$)? Is this because the spatial domain (after rescaling) increases by the cube of the length, and this affects the calculation of the total energy? If this is not correct, can someone provide the explicit expression showing why the energy scaling is supercritical?

5 May, 2011 at 7:56 am

gandhiviswanathan: Dear Robert,

The terminology is explained in PDE books etc., for instance see this online wiki:

http://wiki.math.toronto.edu/DispersiveWiki/index.php/Critical

In the context of the Navier-Stokes system, we say that the energy is supercritical because it is not suitable for bounding or controlling the fine-scale behavior. In other words, given an initial condition with finite fixed energy E, the solution in time can evolve ever finer-scale structures, possibly even leading to a finite time singularity. Although energy is a globally controlled (monotone) quantity, it becomes useless at infinitely small scales.
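(An illustrative aside, not part of the thread: the supercriticality of the energy can be made explicit with the standard scaling computation below, a sketch using the rescaling convention $u^{(\lambda)}(t,x) = \lambda\, u(\lambda^2 t, \lambda x)$.)

```latex
% If u(t,x) solves the Navier-Stokes equations, then so does the rescaled
% field u^{(\lambda)}(t,x) := \lambda u(\lambda^2 t, \lambda x) (with the
% pressure rescaled accordingly): each term u_t, (u . \nabla) u, \nu \Delta u
% picks up the same factor \lambda^3. The kinetic energy transforms as
\[
E\bigl[u^{(\lambda)}\bigr](t)
  = \int_{\mathbb{R}^3} \lambda^2 \,\bigl|u(\lambda^2 t, \lambda x)\bigr|^2 \, dx
  = \lambda^{2-3} \int_{\mathbb{R}^3} \bigl|u(\lambda^2 t, y)\bigr|^2 \, dy
  = \lambda^{-1}\, E[u](\lambda^2 t),
\]
% so passing to fine scales (large \lambda) makes the energy arbitrarily
% small: a fixed energy bound gives ever weaker control at fine scales,
% which is precisely what "supercritical" means here.
```

The exponent $-1$ is the whole story: a critical quantity would scale with exponent $0$, and a subcritical one with a positive exponent, growing (and hence giving stronger control) at fine scales.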

5 May, 2011 at 11:15 am

Robert Coulter: Gandhi:

Thanks. That looks like an excellent link explaining these concepts.

6 June, 2011 at 2:13 am

Navier-Stokes方程简介 « Fight with Infinity[...] Tao Why global regularity for Navier-Stokes is hard [...]

5 July, 2011 at 12:43 pm

W Ethan Eagle: Hello,

This may appear to be stream of consciousness, but since this is a blog and not a refereed journal, perhaps I’m OK with that. I am a PhD student at Michigan in experimental fluid mechanics. I stumbled across this blog and wanted to add some thoughts focused on (my) questions about strategy 2, given that my thoughts are not yet fully developed. I study the effect of vorticity in high-speed flows, where questions about pathology in the NS are paramount to flow breakdown, which may be avoided through engineering control. Therefore the idea that topological considerations à la vorticity have some part yet to play in the unfolding drama of the NSE intrigues me.

Background

In particular I am interested in how vortex stretching terms (which exist as strain terms) develop and compete with viscous terms. Using a simple 1-D argument, a length scale is defined via this competing effect for a given initial condition (e.g. Burgers’ equation).

In 2-D NS, I am aware of an oddity commonly referred to as the ‘inverse energy cascade’ which allows energy-containing structures to grow unboundedly in the fluid domain until they reach the bounds (physical or computational)… and yet formal ‘regularity’ of these solutions has apparently been established. (I will admit that this was a TIL moment.)

This is curious to me in a number of respects, the first of which is that vorticity production in a 2-D flow seems inherently bound to point sources and is irrotational for any closed path that does not contain such point sources.

The 'topology' of the 3-D analog is much more exotic, but can be simply thought of as a closed path (bounded surface) containing a magnitude of 'circulation' (and a 2-D 'line' vortex which extends to infinity is unphysical). I frequently like to make the connection to physics via E&M: there are no magnetic monopoles. I will forgo looking up the formal definition and proceed colloquially (while crossing my fingers this doesn't ruin my argument).

Method(?)

Vorticity ‘closure’ would appear to me to be a strategy of type 2.

The curiosity of the second problem is that the energy transfer mechanism in the NS equations depends on the gradient, and is (might be?) dominated by high-gradient sources (the rapid-strain part). Thus the vorticity is most likely to be contained in small (self-organizing?) regions in the flow. As the flows organize, other mechanisms then compete to dissipate and diffuse vorticity.

Results(?)

My essential conjecture is a production term which provides for an inverse energy cascade in some (small) spatial/temporal range near high gradients (but perhaps such a conjecture is like Maxwell's demon and is entropy-destroying?!?). In any case, away from the boundary (or away from production regions of high gradient) this threshold value reverses, and dissipation and diffusion eventually match (and exceed) production.

My physical intuition of the problem suggests that at a boundary the flow behaves as if it were 2-D, and only after reaching some threshold does the flow recover its 3-D entropy-increasing behavior.

(this seems proof positive via an entropy argument over similar large scales)

Conclusion(?)

Given an energy source (high gradient), can entropy generation be negative (integrally) in a small region so long as it is positive (integrally) in some larger region? How would this manifest physically?

If so, would this explain the vulgarity of the NS by the attempts to find smaller and smaller regions over which entropy is still found to be positive (because of scale similarity?) when this is impossible due to the 3-D region locally appearing 2-D at the smaller scale – think 'the earth is flat' – and the relatively non-isotropic distribution of vorticity? The only(?!) way to avoid such a 2-D vs 3-D effect would be to go to infinite Reynolds number, but this would seem to violate the premise of starting with finite bounds.

Discussion(?)

Sorry for the confusion apparent in these thoughts. Any discussion which attempts to clarify them (if that is even possible) would be appreciated.

P.S. Do the problems with NSE formalism also occur for decaying flow in the absence of boundaries?

5 July, 2011 at 2:34 pm

Robert Coulter: “Given an energy source (high gradient), can entropy generation be negative (integrally) in a small region so long as it is positive (integrally) in some larger region? – how would this manifest physically?”

Yes. There is often a good bit of misunderstanding about entropy. Thermodynamic theory only states that system + surroundings must increase in entropy. The “system” may “choose” one or the other — whichever causes the greater increase in TOTAL entropy (system + surroundings). This is at the heart of why some chemical reactions are exothermic and others are endothermic.

20 July, 2011 at 9:34 am

Gandhi Viswanathan: Dear Professor Tao,

I have been trying for a long time to get a deeper understanding of the importance of scale invariance and have a question about supercriticality in the simpler viscous 3-D Burgers' equation $\partial_t u + (u\cdot\nabla)u = \Delta u$:

Here the viscosity parameter is set to 1 without loss of generality. The only difference compared to the 3-D Navier-Stokes equations is the lack of pressure, so there are no inverse Laplacians and no singular integrals, etc.

As far as I can tell, the scale-invariance argument you use for Navier-Stokes also applies to the above viscous 3-D Burgers' equation. Yet, we know that the viscous 3-D Burgers' equation has global-in-time regularity. This has been shown using a maximum principle for the kinetic energy density (an explanation is given in doi:10.1088/0951-7715/21/12/T02). Specifically, the 3-D viscous Burgers' equation has the same supercriticality issue, yet it is regular.

The only way that I see to reconcile the two facts is to assume that there must be a critical or subcritical monotone or conserved quantity for the Burgers' case which is absent for the NS case. Is this correct? And if so, is this monotone quantity the kinetic energy density? And if so, for the NS case, is the nonlocal dependence of pressure on velocity the reason why the energy density is no longer the required monotone quantity? Finally, if all the above are true, then does this not mean that in addition to supercriticality, an equally important obstruction in the NS case is the nonlocal property of the kernel of the inverse Laplacian (in the pressure)?

If you have time, I would greatly appreciate quick answers or hints.

Your blog is terrific.

-Gandhi

20 July, 2011 at 12:32 pm

Terence Tao: Yes, in the case of Burgers there is another monotone quantity, namely the supremum of the kinetic energy density (or equivalently, the $L^\infty$ norm of u), which makes the problem subcritical in this case.
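
To spell out the scaling arithmetic behind this answer (standard background, not in the comment itself): under the parabolic rescaling $u^{(\lambda)}(t,x) = \lambda\, u(\lambda^2 t, \lambda x)$, the sup norm grows with $\lambda$, so a bound on it gives stronger control at fine scales (a subcritical quantity), and for viscous Burgers the maximum principle supplies exactly such a bound, with no pressure term to spoil it:

```latex
\|u^{(\lambda)}(t,\cdot)\|_{L^\infty}
  = \lambda\,\|u(\lambda^2 t,\cdot)\|_{L^\infty}
  \;\longrightarrow\; \infty \quad\text{as } \lambda\to\infty
  \qquad\text{(subcritical)},

% while the maximum principle for \partial_t u + (u\cdot\nabla)u = \Delta u gives
\|u(t)\|_{L^\infty} \le \|u_0\|_{L^\infty} \quad\text{for all } t \ge 0.
```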

20 July, 2011 at 12:58 pm

Gandhi Viswanathan: Thanks!

4 August, 2011 at 6:56 pm

Localisation and compactness properties of the Navier-Stokes global regularity problem « What’s new[...] the lack of a controlled coercive quantity that is either subcritical or critical, as discussed in this post – because this norm is supercritical. Actually, one can guess at the existence of such a [...]

22 August, 2011 at 10:14 am

Why is Navier–Stokes existence and smoothness difficult to prove? - Quora [...] Ethan Eagle: A much longer and in-depth look at this topic can be found here: http://terrytao.wordpress.com/20… [...]

12 September, 2011 at 3:13 pm

The problem with open science | The Great Antidote[...] is) as the crux of his ‘open science is better science’ conjecture. He also mentioned Terence Tao’s blog, and his own personal use of sites like delicious and friendfeed. Then at some point, I realized [...]

21 January, 2012 at 11:20 am

David Brown: The Navier-Stokes equations are presumably a maximum likelihood estimator for some set of equations that are fully compatible with the Standard Model of particle physics and general relativity theory. By looking at this vastly more complicated Markov branching process, mathematicians and theoretical physicists might stumble upon new and promising approaches to the Navier-Stokes equations.

6 March, 2012 at 1:24 pm

Cy: Hi Terence,

I don’t have any education, but I like math stuff. I guess my question would be how this relates to the other Millennium Problems?

Correct me if I’m wrong, but the issue you’re talking about with criticality (plus super and sub) seems like it is a ‘rephrasing’ or at least equivalent (for some assumptions) to renormalization.

In other words, I get the impression that chaos in calculation is at the heart of Navier-Stokes, Yang-Mills theory, and the Poincare Conjecture. At least personally, I think the key to solving the Poincare conjecture was Perelman’s Entropy.

So what I’m trying to say (or ask, I guess) is how right am I in thinking that it amounts to the same problem? Renormalization in a Quantum Field Theory at least looks like turbulent vs laminar flow in that things are great within a certain bound, but then get unpredictable past a certain point. In other words, a QFT that is not asymptotically free has blow-ups similar to how Navier-Stokes has blow-ups. And a similar thing happens in Ricci flow, but Perelman got a handle on it with Perelman’s Entropy. Am I way off base with these comparisons?

I know this is apples and oranges in some sense, but they’re all PDE problems in 4D where we need some global invariant to wrestle with the chaos in perturbations. Right?

Anyway, I should probably go learn some more. But being a broke, nearly homeless person – maybe I have more pressing concerns, haha. Thanks for the wonderful post, it was very interesting.

-Cy

10 April, 2012 at 8:22 am

dcs24: Reblogged this on Daniel C. Sutton.

13 April, 2012 at 6:17 am

Juha-Matti Perkkiö: Dear Professor Tao,

I stumbled into a positive “proof” of the NSE with any initial data, posted by A. & M. Tsionskiy to the arXiv [http://arxiv.org/abs/1201.1609] on 8 Jan 2012. I have re-checked the formal computations leading to an integral equation formulation in the space variables, but I am not quite convinced by the functional-analytic argument leading to the application of the contraction principle. Has anyone else following this blog had a look at this paper?

13 April, 2012 at 6:55 am

Terence Tao: The second inequality in (6.7) is incorrect; one cannot control the C^2 norm by the L^2 norm, as can be seen for instance by considering high frequency sinusoids (smoothly truncated to be of compact support).
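
An explicit version of the high-frequency counterexample Tao describes (the normalization $N^{-1}$ is my choice, and the truncation only changes the norms by bounded factors):

```latex
% f_N(x) = N^{-1}\sin(Nx), smoothly truncated to a fixed compact set:
\|f_N\|_{L^2} = O(N^{-1}) \to 0,
\qquad
\|f_N\|_{C^2} \ge \sup_x |f_N''(x)| = N \to \infty
\quad\text{(before truncation)},
% so no inequality of the form \|f\|_{C^2} \le C\,\|f\|_{L^2} can hold.
```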

7 May, 2012 at 10:04 am

Anonymous: This article is very good quality. A person knowledgeable in engineering mathematics can understand it better. I know integration, differentiation, and limits; I don't know curl, the Legendre transform, etc. It would help if a website first explained smoothness: what is a weak solution, then what is a strong solution, etc., with some similar examples. Then one would at least be able to understand and admire it.

13 May, 2012 at 12:15 pm

Betul: I am a physics student and my English is not very good, but I am really interested in this topic. It is already a complicated one, even without my poor English. Could you write it in slightly simplified English, so that students like me with poor English could understand? :(

14 May, 2012 at 2:32 pm

nick: Hi Terence,

While thinking about your article, I came across an idea for a potential globally controlled quantity that might be coercive and subcritical.

The inspiration comes from the Heisenberg uncertainty principles. In physics these are supposed to represent the uncertainty of a measurement, but one could instead view them as curvature criteria on an embedded space. The x, y, z momentum criteria would be on the x, y, z curvature, and the E, t relationship would be analogous to the Ricci flow equation, where energy is the radius of the open ball and momentum is the sectional curvature in each orthogonal direction.

One then relates the probability current to the regular current in Navier-Stokes. The probability current is complex-valued, which is unsettling, and I don't know what to do with that. Perhaps this indicates that general solutions to these equations will take complex values, just as general solutions to polynomials take complex values?

In physics this relationship and the corresponding quantization of states has great coercive powers and definitely works at fine scales. Do you think such a quantity would be a possible candidate for analysis of the Navier Stokes equation?

Nick

15 May, 2012 at 4:16 am

Terence Tao: Well, a key difference here between physics and mathematics is that in the former it is often sufficient to work with reasoning which is expected to apply in almost all conceivable scenarios, but which can fail for some exceptional set of unlikely scenarios, e.g. “Maxwell’s demon” type scenarios which contradict the second law of thermodynamics. But for a mathematical proof one has to eliminate Maxwell demon type scenarios completely, as opposed to almost completely, which is a very different standard of proof, and which among other things tends to rule out most candidates for coercive controlled quantities in Navier-Stokes which arise from probabilistic or uncertainty-based considerations.

15 May, 2012 at 11:06 am

Anonymous: Dear Prof. Tao,

I have a problem about the global existence and uniqueness of Navier-Stokes equations. Given a good initial data , consider the following mild solution form . By a Banach fixed point argument, we can show that there exists a local solution where depends on .

To extend the solution, we consider the energy inequality . At the time with small, thanks to the smoothness effect of , we can get an estimate of for all where only depend on . Since , we can take to get an bound for . From this way, it seems we can get the global existence and uniqueness.

I don't know what is wrong with the above argument.

Thank you!

15 May, 2012 at 1:01 pm

Terence Tao: The smoothing effect you refer to is only directly applicable to the linear heat equation. To obtain the analogous smoothing effect for the Navier-Stokes equation, one must also deal with the contribution of the convection term (and also the pressure , though this can be viewed as a component of the convection term). Due to supercriticality, it is impossible to control the effect of this convection term purely in terms of the norm, and higher regularity norms such as or must be employed. This leads to a potential blowup in the norm, instead of uniform bounds and regularity.

16 May, 2012 at 2:40 am

Anonymous: Thanks! I still have some problem about this. Suppose that , writing and using some classical estimate of , we have the following a priori estimate for the mild solution ($1/2<\alpha<1$), with some constants to be chosen and with the metric ,

we can use the Banach fixed point theorem to show there exists a unique local mild solution .

Now let , there exists a local unique solution by Banach fixed point theorem. Thanks to the above argument and energy estimate, we can get the global solution in (of course ).

I can't find any problem for the above argument.

16 May, 2012 at 2:58 am

Anonymous: The system can’t show some long formulas, so I write my comment again.

Thanks! I still have some problem about this. Suppose that $B(u)=\Pi u \cdot \nabla u$, and using some classical estimate of B, we have the following a priori estimate for the mild solution ($1/2<\alpha<1$): is bounded by

further by .

Therefore,

Define the space with $B,T$

some constants to be chosen and with the metric ,

we can use the Banach fixed point theorem to show there exists a unique local mild solution .

Now let $u_0 \in H^{\alpha}$; there exists a unique local solution $u \in C([0,T]; H^\alpha)$ by the Banach fixed point theorem. Thanks to the above argument and the energy estimate, we can get the global solution in $C([0,\infty); H^\alpha)$ (of course $1/2<\alpha<1$).

I can't find any problem with the above argument.

17 May, 2012 at 3:15 am

Terence Tao: The equation following “Therefore” is incorrect; one cannot justify the insertion of the prefactor in the integrand because it can be much smaller than 1.

There are equations of the same strength as Navier-Stokes (in the sense that one has essentially the same energy estimates, and the same estimates for the nonlinearity and the linear evolution) but which are known to blow up in finite time (in particular, there is a dyadic model of Pavlovic and Katz which achieves this). As such, it is not possible to establish global regularity purely from energy methods and the usual linear and nonlinear estimates on the heat equation and on the nonlinearity. One can also see this from performing a scaling analysis or dimensional analysis, as indicated in the body of the main post.
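
The dyadic model Tao mentions can be illustrated with a toy "shell" system in the same spirit (this is an illustrative sketch, not the exact Katz-Pavlović system: the coefficient base LAM, the shell count N, the time step, and the initial data are all choices made here). Each amplitude a_n stands for the solution's strength at frequency scale LAM^n; the quadratic nonlinearity pushes energy from shell n to shell n+1 while exactly conserving the total energy E = (1/2) * sum(a_n^2) when the viscosity is zero.

```python
LAM = 2.0   # dyadic frequency ratio between consecutive shells
N = 8       # number of shells retained in the truncation
NU = 0.0    # viscosity (0 isolates the conservative cascade)

def rhs(a):
    """Time derivative of the shell amplitudes for the toy dyadic model."""
    da = [0.0] * N
    for n in range(N):
        prev = a[n - 1] if n > 0 else 0.0      # convention: a_{-1} = 0
        nxt = a[n + 1] if n < N - 1 else 0.0   # truncation: a_N = 0
        da[n] = (-NU * LAM ** (2 * n) * a[n]
                 + LAM ** n * prev ** 2          # energy arriving from shell n-1
                 - LAM ** (n + 1) * a[n] * nxt)  # energy leaving to shell n+1
    return da

def mean_shell(a):
    """Energy-weighted mean shell index: where the energy currently lives."""
    e = [x * x for x in a]
    return sum(n * en for n, en in enumerate(e)) / sum(e)

# Start with all the energy in the coarsest shell and take explicit Euler steps;
# the energy migrates toward higher shells (finer scales) as time passes.
a = [1.0] + [0.0] * (N - 1)
dt, steps = 1e-4, 3000
for _ in range(steps):
    da = rhs(a)
    a = [x + dt * d for x, d in zip(a, da)]

print("mean shell moved from 0.0 to", round(mean_shell(a), 3))
```

One can check by telescoping that the nonlinear terms cancel in sum(a_n * da_n), so the zero-viscosity cascade conserves energy exactly while relentlessly shifting it to finer scales, which is the worst-case mechanism discussed in the comment.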

17 May, 2012 at 4:12 am

Anonymous: Thank you so much for your patience and kindness, Prof. Tao! Actually I forgot the term in the integrand. I know my problem should not be a problem, but I still want to find the reason.

The inequality following ‘Therefore’ should be:

. Because is integrable, we can use a Banach fixed point theorem.

17 May, 2012 at 5:58 am

Terence Tao: The expression is not uniformly bounded as , and so the contraction mapping principle is inapplicable.

Due to lack of time, I will not be responding to any further arguments of this nature, and refer you to the discussion in the blog post.

20 May, 2012 at 4:13 am

Gil Kalai: Dear Terry, a few comments mentioned stochastic and quantum versions of NS, and you made an appealing case that, while NS might be easier in such versions and while one of these models may well be more realistic, they offer little help with the Clay problem, since proving uniform bounds for any perturbational version would translate into bounds for the original NS problem, while non-uniform estimates will not.

My question is if there are variants of the NS question of stochastic (or quantum) nature which are known or expected to be more realistic, or stochastic variants where it is known or expected that the situation is easier. (Even without hope towards the Clay Problem itself.)

20 May, 2012 at 7:30 am

Terence Tao: Well, one of the obvious things to try here is to put a stochastic forcing term in the equation (say in the periodic case, to avoid some technical issues). This ought to be easier, as the stochastic noise should keep pushing one away from what we believe to be the relatively rare blowup or near-blowup scenarios, but again there could be some exceedingly perverse “Maxwell’s demon” type dynamics which somehow manages to reverse all the entropy created by the noise and lead almost all of the solutions towards blowup. In the Hamiltonian case one has (in principle) the theory of invariant measures (such as Gibbs measures) to prevent this sort of entropy collapse (though this is more geared towards stochastic initial data than stochastic forcing terms), but as far as I am aware there are no such invariant measures in the dissipative setting of Navier-Stokes. Still, it is conceivable that one could do something nontrivial and rigorous with such a model, though to my knowledge the literature in this direction is quite scant.

13 June, 2012 at 2:32 am

Justicus: Dear Prof. Tao, what do you make of this article? It seems that by considering the Lamb vector divergence, one can derive an analytical expression for the pressure field through a Poisson equation, which will ensure that the time derivative of the L^2 norm of vorticity is always negative and thus ensures the regularity of the N-S equations with smooth, square-integrable and divergence-free initial data. This in turn implies a smooth exponential decay of the L^2 norms of vorticity and velocity.

Best regards,

Justicus

http://arxiv.org/abs/1201.1609

(please disregard the earlier versions)

13 June, 2012 at 10:43 am

JusticusSorry, here’s the correct link

http://arxiv.org/abs/1206.1281

13 June, 2012 at 11:02 am

Terence Tao: The pressure in the Navier-Stokes problem is not exogenous; it cannot be prescribed independently of the velocity and vorticity fields, but is instead coupled to those fields (as one can already see from equation (9) in the paper, which determines the pressure as the solution to a Poisson equation involving those fields). As such, if one declares the pressure to equal something (as was done in (16), (19), (24)), there is no longer any guarantee that one can actually solve the Navier-Stokes equation with this prescribed pressure; note that all the local existence theorems for Navier-Stokes in the literature do not permit an arbitrary prescription of the pressure. (And indeed, it appears that the prescription given in the paper is inconsistent with the Navier-Stokes equations: for instance, equations (9) and (29) will contradict each other, even at the initial time t=0, for most choices of initial velocity field u.)
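
For reference, the coupling Tao describes is the standard computation: taking the divergence of the Navier-Stokes equations and using incompressibility eliminates the time derivative and forces the pressure to solve a Poisson equation in terms of the velocity field alone.

```latex
% Divergence of \partial_t u + (u\cdot\nabla)u = \nu\,\Delta u - \nabla p,
% using \nabla\cdot u = 0:
-\Delta p = \partial_i \partial_j (u_i u_j) = \partial_i u_j\,\partial_j u_i,
\qquad
p = (-\Delta)^{-1}\,\partial_i\partial_j(u_i u_j),
% so p is determined (up to constants) by u and cannot be prescribed freely.
```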

20 June, 2012 at 2:42 am

rbcoulter: “It is a well known fact that the solutions are regular if the enstrophy of the system stays bounded, see [3]. This means that we will require ….”

Then it refers to the energy dissipation integral (Enstrophy). My understanding is that regular solutions meet this condition but it alone is not sufficient to ensure regularity. See Professor Tao’s opening comments at the top of this blog.

15 June, 2012 at 6:48 am

Daniel: Dear Terence Tao and others,

This is a very interesting discussion here… Thank you..

I am not an expert in this topic… I have just one probably elementary question. Why is it the case that we need to find an inadmissible solution of NS that has finite-time blow up to prove breakdown of solutions to NS? The actual “statements A, B, C, and D” don’t mention blow up at all.

Daniel

15 June, 2012 at 7:01 am

Terence Tao: This is because we have local existence theorems that say, roughly speaking, that given sufficiently “nice” initial data, one can solve the Navier-Stokes equation with this initial data and with a smooth solution up to any time for which the solution has not yet blown up (say, in the sense that the velocity becomes unbounded). So if we can rule out blow up, we obtain global smooth solutions as a corollary.
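
One common way to make this precise (the exact norm used varies in the literature; this is an illustrative formulation, not a quotation of a specific theorem):

```latex
% Continuation criterion: if T_* denotes the maximal time of existence
% of the smooth solution, then
T_* < \infty \;\Longrightarrow\; \limsup_{t \to T_*^-} \|u(t)\|_{L^\infty} = \infty,
% so a uniform bound on the velocity lets one re-apply the local
% existence theorem near T_* and continue the solution, forcing T_* = \infty.
```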

16 June, 2012 at 6:43 am

Daniel: Thanks for the quick reply, it was good!

24 June, 2012 at 4:24 am

Gil Kalai: Dear Terry, I have follow-up questions regarding Maxwell demons. 1) What precisely does one refer to as a Maxwell demon in the context of NS and other PDE? (I am sure this is clear to all experts.) 2) How central is the demon issue to the global regularity question? 3) You mentioned that stochastic versions may still have some demons. Still, are there stochastic versions which guarantee no demons, or that can be described as being conditioned on “no demons”?

24 June, 2012 at 10:32 am

Terence Tao: It’s not really a well-defined term, but basically from the local existence theory we know that if at some given time, the energy is mostly concentrated at a certain frequency scale N (or spatial scale 1/N), so that the solution tends to oscillate with frequency 1/N, then after a certain short amount of time (about time N^{-5/2}, if I recall correctly), the solution will still exist, but may now be concentrated in some combination of the frequencies N/2, N, and 2N (if we bin the frequencies in some dyadic way; this is a massive oversimplification and should not be taken too seriously). The worst case is if the bulk of the energy somehow gets transferred into the 2N frequency range, instead of the N and N/2 ranges, and if this worst case scenario happens over and over again, one could get blowup in finite time. This would be analogous to an (allegedly) pseudorandom walk favoring one direction over another so much that by time t, it has moved about O(t) away from its starting point, rather than the expected O(sqrt(t)), which is also basically the analogy behind Maxwell’s demon in thermodynamics. So any resolution of the global regularity conjecture must at some point confront and eliminate this Maxwell’s demon scenario, which is otherwise consistent with all the known conservation laws, monotonicity formulae, and local existence theory for this equation. (Actually, we can use these known facts to prevent the purely “ballistic” behaviour in which the worst case scenario happens _almost all_ of the time, analogous to a pseudorandom walk deviating about ct away from the origin in time t for some constant c > 0, but there are still intermediate scenarios, analogous to a pseudorandom walk deviating something like t/log t away from the origin, in which the worst case happens often enough to still cause blowup, which we cannot rule out from known facts.)

There are two levels of stochastic analogues of Navier-Stokes. The first is to have random initial data, but deterministic evolution; the second is to have both random initial data and stochastic evolution. (The first is like the output of a pseudorandom number generator with a random seed; the second is like a pseudorandom number generator which also has access to additional random bits as time progresses, but those bits are incorporated into the dynamics of the (allegedly) pseudorandom number generator in a complicated and nonlinear fashion.) Intuitively, this randomness (particularly in the second level) should make the demon scenario much less likely, but because everything is coupled together, and there is no obvious concentration of measure (or invariant measure) theory available, it is still theoretically possible (at our current level of understanding) that the demon scenario still occurs (much as a badly designed pseudorandom number generator may possibly create biased output even if given perfectly unbiased random inputs).
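
The arithmetic behind the worst-case cascade in Tao's first paragraph is a geometric series: if energy at frequency N takes roughly N^(-5/2) units of time to move up to frequency 2N (the exponent is taken from the comment; the starting frequency N0 = 1 and the number of doublings are choices made here for illustration), the doubling times sum to a finite total, so the hypothetical cascade would pass through all scales in finite time, i.e. finite-time blowup.

```python
N0 = 1.0  # starting frequency scale (illustrative choice)

# Time for the k-th doubling N0*2^k -> N0*2^(k+1), per the N^(-5/2) heuristic.
doubling_times = [(N0 * 2 ** k) ** -2.5 for k in range(200)]

total = sum(doubling_times)                 # time through 200 doublings
closed_form = N0 ** -2.5 / (1 - 2 ** -2.5)  # limit of the infinite geometric series

print("total:", total, "series limit:", closed_form)
```

The partial sum is already indistinguishable from the limit after a few dozen doublings, which is why this scenario cannot be excluded merely by bounding the cascade rate at each individual scale.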

26 June, 2012 at 10:00 pm

Justicus: Dear Prof. Tao,

I was wondering: is the regularity issue independent of the choice of the viscosity parameter \nu? In other words, if for some \nu the system is regular, will that imply regularity for all \nu > 0, other things being equal?

Thanks for the excellent blog!

26 June, 2012 at 10:29 pm

Terence Tao: Yes, one can easily renormalise the viscosity to be any value one wishes (to put it another way, viscosity is not dimensionless). For instance, if obeys the Navier-Stokes equation with viscosity and pressure , then obeys the Navier-Stokes equation with viscosity 1 and pressure . From these sorts of transformations it is a routine matter to deduce global regularity for arbitrary from global regularity for a single value of . (Note though that any quantitative bounds that might be associated to a global regularity result will almost certainly be affected by such renormalisations, and so one does not immediately gain much insight as to what happens in the zero viscosity limit . In particular, there is no known formal relationship between the global regularity problem for Navier-Stokes and for incompressible Euler, though it is widely believed that the latter is less likely to be globally regular.)
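
The formulas in this reply were evidently stripped by the blog software. One consistent reconstruction, easily checked by substitution (my reconstruction, not necessarily Tao's exact normalization), is:

```latex
% If u(t,x) solves \partial_t u + (u\cdot\nabla)u = \nu\,\Delta u - \nabla p
% with \nabla\cdot u = 0, then v(t,x) := u(\nu t, \nu x) satisfies
\partial_t v + (v\cdot\nabla)v = \Delta v - \nabla q,
\qquad q(t,x) := p(\nu t, \nu x),
% i.e. the Navier-Stokes equations with viscosity 1: indeed
% \partial_t v + (v\cdot\nabla)v = \nu(\partial_t u + (u\cdot\nabla)u)
%   = \nu(\nu\Delta u - \nabla p) = \Delta v - \nabla q.
```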

26 June, 2012 at 11:04 pm

Justicus: Thanks!

16 July, 2012 at 8:27 am

Anonymous: One of the ultimate aims of proving the global regularity of the NS equations has something to do with turbulence. SUPPOSE that statement (B) or (A) has been shown to be affirmative; how would the conclusion help us understand turbulence?

16 July, 2012 at 9:21 am

Anonymous: A partial answer: I think it was Mandelbrot who first conjectured that the onset of turbulence happens when solutions become fractal (in the space variables). This conjecture can only be true if initially smooth solutions break down in finite time. If solutions stay smooth forever in time, then turbulence is not as Mandelbrot conjectured it to be.

6 August, 2012 at 5:05 am

rbcoulter: There are some salient arguments that NS may not explain turbulence. IMO, if (B) or (A) is true, then turbulence is probably just an illusion in the sense that there would be some zoomed-in micro-scale that appears “laminar”. It would be much more interesting if (B)/(A) is not true, where “turbulence” persists to any micro-scale (or a singularity exists).

17 September, 2012 at 11:30 pm

Anonymous: Singularities may not be part of God’s mind.

How much do we know about 2d turbulence today? Have we learnt anything from the smooth 2d NS solution? Clearly the regularity and the uniqueness rule out the anomalies due to bifurcations, strange attractors and blow-up scenarios (note that these breakdown mechanisms exist in many 1d models).

On the physics side, there are of course differences as well as similarities between 2d and 3d turbulence; it is hard to believe that the 2d implication is categorically disconnected from its 3d counterpart.

One often attributes the 2d smoothness to the absence of vorticity stretching. This argument is not entirely convincing, because the convection may induce velocity (with large gradients) which, as a result, may well tear apart nearby flows.

16 August, 2014 at 8:31 am

Robert Coulter: The 2D versus 3D situation is a good question. The mathematicians seem to be almost completely focused on the time evolution problem in 3D. The 2D time evolution problem does not produce singularities, assuming there is not a nearby (local) zero-velocity boundary condition (constraint). However, in the real world, there is almost always a nearby BC that is zero — airplane wing, pipe wall, etc. It is generally accepted by myself and others that the culprit of turbulence starts at these zero-velocity boundaries, creating turbulence some distance away. In short, the NS model, IMO, has some issues even in 2D if one considers real-life BCs.

18 August, 2014 at 11:27 pm

Anonymous: The presence of a solid surface will complicate the initiation of turbulence but cannot be the core reason. (The experts must have been well aware of this fact; hence the formulation of the Clay NS problem.) Consider blowing a puff of smoke into open space. If one blows hard enough, turbulence ensues (you know it when you see it). Of course, we also observe smooth smoke rings as a result of gentle blows. With carefully controlled starting conditions in between these two extremes, a laminar puff will become turbulent.

20 August, 2014 at 3:01 pm

Anonymous: You might want to elaborate on what you mean by the “core” reason. The point was simply that there are some intriguing issues in the 2D case in conjunction with boundary conditions. The Clay problems have been written to avoid these boundary condition issues. Why is this? Caffarelli gave some hint in his video: he was worried about infinity causing problems. Why don’t they have a case with a BC of velocity = 0 at a finite distance? Maybe that would have been too easy — that is, to show that regularity does NOT exist in those cases. The entire universe is not an ocean, nor a periodic torus ocean.

Also, the puff of smoke is air blowing past the lips of the mouth (BC velocity = 0 at the lips). Also, turbulent smoke emanating from a pipe, for instance, has air going past a bluff body, with turbulence forming downstream of the bluff body (above the pipe).

21 August, 2014 at 1:43 am

Anonymous: (To Anonymous, 20 August, 2014)

The core reason for turbulence is the nonlinear term in the equations of motion (3d and 2d); the view imputing it to vorticity stretching is incomplete (this is exactly why we are talking about turbulence in 2d).

The BC v->0 at infinity is implied in the Clay formulation for a class of solutions, because any REGULAR solution must have bounded energy (roughly speaking, nothing happens at infinity). It is wise not to specify this decay BC, simply because a solution may become singular, say in finite time as some believe (statement C), and hence it or its influence would propagate to infinity instantaneously. (The arguments are valid for flows in R^2.)

The BC v=0 at the lips applies equally to the three cases: laminar, transitional and turbulent. Thus it cannot be the core reason for instigating turbulence.

10 August, 2012 at 7:51 am

G.K Chesterton: Why is the Navier–Stokes existence and smoothness question, especially regarding turbulence, still an open problem and still incomplete? Is it true that this is one of the hardest problems in mathematics? Regards

10 August, 2012 at 8:00 am

G.K Chesterton: Dear Prof Tao: What could we do to solve the Navier-Stokes equations more easily? Lastly, I wish to know why the official statement of the problem was given by Prof Charles Fefferman. As far as I know, Prof Charles Fefferman is a genius and brilliant; he also received the Fields Medal. By the way, is there a blog by Mr Grigory Perelman?

Sincerely Yours

19 September, 2012 at 6:06 pm

Daniel: Dear Terence Tao and others,

Following on from the comments of 15 June, I have another probably elementary question: to rule out “blow up”, does one have to include the external force vector in doing so, or is it adequate to rule out “blow up” with no external force vector?

Daniel

22 September, 2012 at 8:10 am

rbcoulter: Options A and B in the Clay problem statement do not have an external force. These are the “prove regularity” options, so one would not need to consider external forces for them. The other options allow for a cleverly constrained finite force field to be applied to possibly “assist” blowup.

22 September, 2012 at 10:29 pm

Daniel: Thank you rbcoulter for that logical reply; I gave you a thumbs up for that.

27 October, 2012 at 6:14 am

westy31: I was wondering about a related problem, which I pose as a question here:

If you insert a blob of paint into a fluid of ‘infinite Schmidt number’, does the blob ever split into separate islands of paint?

The Schmidt number is the ratio of viscosity (i.e. momentum diffusivity) to mass diffusivity. If this is infinite, then you have no mass diffusion. This means that the paint concentration does not smooth out: it is either zero or equal to the initial concentration.

I put in the infinite Schmidt number to make things more interesting. The blob of paint cannot smooth out its concentration; it can only deform into an ever more complex, octopus-like blob. So does this blob ever break up?

The second question: suppose the blob did break up into islands; would that imply a violation of regularity?

Gerard

26 December, 2012 at 11:26 am

timurAs long as the velocity field stays continuous in space and time, the blob will not break.
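A toy numerical illustration of this point (my own sketch, with a made-up smooth flow and parameters, not part of the original discussion): the flow map of a Lipschitz velocity field is a homeomorphism, so a ring of tracer particles marking the blob boundary deforms but never tears or merges. Here the steady 2D Taylor-Green field u = (sin x cos y, -cos x sin y) stands in for a regular flow:

```python
import numpy as np

def velocity(p):
    """Smooth (hence Lipschitz) steady 2D Taylor-Green cellular flow,
    used as a stand-in for an arbitrary regular velocity field."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)], axis=1)

def advect(pts, dt, steps):
    """Classical RK4 particle advection along the velocity field."""
    for _ in range(steps):
        k1 = velocity(pts)
        k2 = velocity(pts + 0.5 * dt * k1)
        k3 = velocity(pts + 0.5 * dt * k2)
        k4 = velocity(pts + dt * k3)
        pts = pts + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return pts

# a ring of markers on the boundary of a circular "paint blob" (made-up data)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
blob = np.stack([1.0 + 0.3 * np.cos(theta), 1.0 + 0.3 * np.sin(theta)], axis=1)
moved = advect(blob, dt=0.01, steps=500)

# the curve deforms, but adjacent markers stay close and never coincide
gaps = np.linalg.norm(np.roll(moved, -1, axis=0) - moved, axis=1)
print(gaps.max(), gaps.min())
```

The maximum gap between neighbouring markers grows as the blob stretches, but it stays finite and the minimum stays positive: the blob boundary remains a closed curve, it does not split.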

9 November, 2012 at 5:09 am

AnonymousGood question. I would say that Navier Stokes breaks down if the “octopus loses a leg” or develops a hole.

22 December, 2012 at 7:25 pm

CraigIn my humble opinion, the real reason why proving global regularity for the NS equations is hard is because it’s false. Is there a reason why finding a counter-example has been so difficult?

26 December, 2012 at 11:21 am

timurIMHO, the real reason why finding a counter-example has been so difficult is because the global regularity is true. (Sorry I could not help)

27 December, 2012 at 5:07 pm

Mathematics: What progress has been made till date towards the resolution of the Navier-Stokes existence and smoothness problem? - Quora[...] Fields Medalist Terry Tao has a long, detailed writeup about the problem here: http://terrytao.wordpress.com/20… [...]

29 January, 2013 at 8:58 am

MikeFairly interesting piece of work, apparently modelling some sort of causality and a possible connection of the Navier-Stokes equations to gravitation and economic theory:

http://jussilindgren.files.wordpress.com/2012/10/fieldtheory2.pdf

14 February, 2013 at 1:28 pm

jussilindgrenActually, there is a slightly improved version available; the key equation is what I call a Wick-rotated N-S equation. It should be compatible with the relativistically invariant respective quantum equation. This is the main equation

which is a sort of nonlinear wave equation system, any ideas how to get solutions?

14 February, 2013 at 1:35 pm

jussilindgrenand the respective quantum equation is

6 March, 2013 at 3:26 am

jlgallowayReblogged this on Concrete Dreams and commented:

Excellent opinion article by Terence Tao regarding why global regularity for Navier-Stokes is hard.

3 May, 2013 at 2:54 am

Jacek CyrankaWhat about computer simulations?

As I understand it, if a counter-example to global regularity of 3D NS exists, there should be some computer simulations which support this: I mean initial conditions which in computer simulations seem to develop a blow-up after a finite time (e.g. 1 hour of simulation).

Of course, one can never eliminate the possibility that the blow-up was caused by numerical method instabilities or the finite representation of numbers on a computer. But there are tools to deal with that; for instance, by using interval arithmetic instead of basic floating-point arithmetic one can produce rigorous results.
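A minimal sketch of the interval idea (my illustration, not from the comment): each arithmetic operation rounds the lower endpoint down and the upper endpoint up, so the computed interval is guaranteed to enclose the exact real-number result. Rigorous solvers used in computer-assisted proofs are far more sophisticated; this toy class only shows the enclosure principle.

```python
from math import inf, nextafter  # nextafter requires Python >= 3.9

class Interval:
    """A closed interval [lo, hi] with outward rounding: every operation
    widens its endpoints by one ulp, so the exact real result of the
    operation is always contained in the returned enclosure."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(nextafter(self.lo + other.lo, -inf),
                        nextafter(self.hi + other.hi, inf))

    def __mul__(self, other):
        # the product interval is spanned by the four endpoint products
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(nextafter(min(corners), -inf),
                        nextafter(max(corners), inf))

    def __contains__(self, x):
        return self.lo <= x <= self.hi

# the slightly widened enclosure of (1 + 2) * 3 must contain the exact 9
enclosure = (Interval(1.0) + Interval(2.0)) * Interval(3.0)
print(9.0 in enclosure)  # prints True
```

A numerical blow-up demonstrated with such enclosures rules out the possibility that the growth is an artifact of rounding, which is exactly the point made above.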

3 May, 2013 at 7:10 am

David PurvanceNumerical simulations using white noise flows in a discrete 3D periodic solution blow up. This solution and an explanation for the blow up are developed at purvanced.wordpress.com.

6 May, 2013 at 9:26 pm

DanielHi Terence,

I'm thinking that some of the comments are against your policy and need to be deleted. I have even seen some comments that are disrespectful to you.

Daniel

7 October, 2013 at 12:25 pm

David PurvanceDear Terence or other interested person,

In the development of the Navier-Stokes equation in wavenumber space, the projection tensor restricts flow to the PLANE perpendicular to the wavenumber to maintain incompressibility. For a deterministic equation, isn't this a bit arbitrary or nondeterministic? There are an infinite number of directions in that PLANE. What I have discovered is that if you require the projection tensor to be completely deterministic and restrict flow to the direction of the initial flow, which by definition must be in the PLANE perpendicular to the wavenumber, the flow remains stable and never blows up!
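For reference, the projection tensor being discussed is the Leray projector P_ij(k) = delta_ij - k_i k_j/|k|^2, which removes the component of the Fourier velocity parallel to the wavevector. A minimal numpy sketch (the grid size and random field are made-up demo data) showing that the projected field is divergence-free in Fourier space:

```python
import numpy as np

def leray_project(u_hat, K):
    """Apply the Leray projector P_ij(k) = delta_ij - k_i k_j/|k|^2 to a
    Fourier-space velocity field: subtract the component parallel to k,
    leaving a divergence-free (incompressible) field."""
    k2 = sum(Ki * Ki for Ki in K)
    k2 = np.where(k2 == 0, 1.0, k2)          # leave the k = 0 mode untouched
    k_dot_u = sum(Ki * ui for Ki, ui in zip(K, u_hat))
    return [ui - Ki * k_dot_u / k2 for Ki, ui in zip(K, u_hat)]

# hypothetical demo on an 8^3 periodic grid with a random complex field
n = 8
k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers
K = np.meshgrid(k, k, k, indexing="ij")
rng = np.random.default_rng(0)
u_hat = [rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n))
         for _ in range(3)]
w_hat = leray_project(u_hat, K)

# incompressibility in Fourier space: k . w(k) = 0 for every mode
div = sum(Ki * wi for Ki, wi in zip(K, w_hat))
print(np.abs(div).max())  # essentially zero (rounding error only)
```

Note that for each k ≠ 0 the projector leaves a whole two-dimensional plane of admissible directions, which is exactly the observation the comment starts from.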

27 November, 2013 at 8:24 am

Juha-Matti PerkkiöWouldn’t that imply that the standard NSE has a smooth eternal solution with that property, and by local uniqueness any smooth solution would have that property as well? That feels too funny.

2 December, 2013 at 10:10 am

David PurvanceSorry, I don’t follow your argument. What I have developed (purvanced.wordpress.com) is a regular solution for directionally stationary flows. I conjecture this solution is the general solution for flows devoid of external forces.

14 December, 2013 at 1:26 pm

Robert CoulterI would be suspicious also. Finding a way to make all initial conditions “stable and never blow up” is trivial if done with an external force acting against the flow. In other words, a force is applied to “assist” the viscosity damping term in preventing blowups. I think the problem we have here is that many work with the pressure-free versions of the NSE. The pressure must resolve to a scalar at every point (i.e. not multi-valued) to ensure there is no external force. Also, the pressure (or potential field) must be constant along the boundary (even if the boundary is infinite), or one would be introducing an energy source from infinity (or the boundary). If one has a proposed solution u = u(x,t), then the pressure field p = p(x,t) should then be calculated. This may not be easy, since it would involve integrating various products of the solution with itself and its derivatives (the left side of the NSE minus the right-side viscosity term). Also, ALL paths between two points (at each point in time) must integrate to the same delta P, and the pressure cannot change after integrating along a closed path back to its starting point.

30 November, 2013 at 8:31 pm

CraigTerence, why do you say that the Navier-Stokes equations exhibit pseudorandomness? You compare this problem with other famous problems, but those other problems are discrete, while the NS problem is continuous. I would really appreciate it if you could explain. Thank you.

6 December, 2013 at 8:09 pm

MickeyThis isn’t really a response to your post, but has anyone seen this paper by Tsionskiy-Tsionskiy on Navier-Stokes:

“Existence, uniqueness and smoothness of a solution for 3D Navier-Stokes equations with any smooth initial velocity”

Arkadiy Tsionskiy, Mikhail Tsionskiy

Electron. J. Diff. Equ., Vol. 2013 (2013), No. 83, pp. 1-17.

They say they’ve solved everything. I know there are many “proofs” floating around to the Clay problems, so normally I wouldn’t think too much about it. But this was published in a journal with peer review, so I wonder if anybody has looked at it.

7 December, 2013 at 8:10 am

MickeyI see now that you pointed out an error in the Tsionskiy-Tsionskiy paper in a comment on this thread in April. Please disregard my question!

11 January, 2014 at 9:12 am

susanEminent Kazakh mathematician

Mukhtarbay Otelbaev, Prof. Dr.

has JUST now published a claimed full proof of the Navier-Stokes Clay problem.

Is it correct?

http://bnews.kz/en/news/post/180213/

He has over 200 papers and 70 PhD students, is an academician of their academy of sciences, and has previous papers on Navier-Stokes and functional analysis.

SO HE IS NOT A CRACKPOT.

11 January, 2014 at 12:28 pm

susanA link to the paper (in Russian): http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf

11 January, 2014 at 7:48 pm

Sam ChioMukhtarbay Otelbaev published an article in which he claims to have provided a full solution to the problem with periodic boundary conditions in space variables. Is there any link to his paper in English?

12 January, 2014 at 8:17 am

Navier Stokes Solution Claimed and Twitter Algos « Pink Iguana[…] for Wavy Vortex flow on an IBM mainframe. Can’t parse the paper in Russian though. I hope Tao can read Russian, otherwise confirmation could take a […]

13 January, 2014 at 5:39 am

Emanuela GrossiTranslation of proposed Navier-Stokes solution by Mukhtarbay Otelbaev https://github.com/myw/navier_stokes_translate

13 January, 2014 at 5:46 pm

Juha-Matti PerkkiöDear professor Tao and all who read this blog,

assuming for a while that Otelbaev’s estimate is correct, I would like to better understand his reformulation of the problem. Referring to the translation in progress from https://github.com/myw/navier_stokes_translate he wrote that

“The system of equations (1.1) and initial/boundary constraints (1.2), (1.3) do not allow a unique solution for the pressure p(t; x). For this reason, we add the constraint […]”

How so? Taking the divergence of the NSE we see that

$\Delta p = -\nabla\cdot\big((v\cdot\nabla)v\big),$

and this Poisson equation most certainly has a unique solution in the torus Q. Should the integral of p over Q stay constant in the first place?
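As a side note on the solvability claim: on the torus the pressure Poisson equation diagonalizes under the Fourier transform, with every mode determined except k = 0, which is exactly the additive-constant ambiguity in the pressure. A toy numpy sketch (grid size and right-hand side are made up for illustration):

```python
import numpy as np

def solve_poisson_torus(rhs):
    """Solve  -laplacian(p) = rhs  on the periodic box [0, 2*pi)^3.
    Every Fourier mode is determined except k = 0, which we pin to zero,
    i.e. we normalize p to have zero mean over the torus."""
    n = rhs.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)         # integer wavenumbers
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    rhs_hat = np.fft.fftn(rhs)
    p_hat = np.zeros_like(rhs_hat)
    nz = k2 != 0                             # all modes except k = 0
    p_hat[nz] = rhs_hat[nz] / k2[nz]
    return np.fft.ifftn(p_hat).real

# hypothetical smooth zero-mean right-hand side (the solvability condition)
n = 16
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rhs = np.cos(X) * np.cos(2 * Y)
p = solve_poisson_torus(rhs)
# for this rhs the exact zero-mean solution is cos(x)cos(2y)/(1^2 + 2^2)
```

The pinned k = 0 mode is the normalization choice discussed in the surrounding comments: the PDE itself only ever sees the pressure gradient.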

Secondly, the Millennium problem is an initial value problem. The estimate would prove it if, for any smooth solenoidal velocity field v(t_0,x), one could construct a suitably smooth body force f(t,x) for 0<t<t_0 that drives v(0,x)=0 into v(t_0,x) in the whole torus Q. Is this indeed the case?

Yours,

Juha-Matti Perkkiö

15 January, 2014 at 12:12 am

uvsAs to 1: the normalization is ok. Indeed, if $p$ is a pressure field that solves the NSE, then so is $p + c(t)$ for any function $c(t)$.

There is something else about the pressure in Otelbaev’s paper that bugs many people. In (1.3) periodic BCs with respect to $x$ are stated. Even if $\nabla p$ is periodic, $p$ itself need not be periodic; by requiring that $p$ be periodic, potential multiple solutions are excluded, thus simplifying the problem. No BCs on the pressure are stated in the Millennium Prize formulation.

As to 2: Consider an IVP and define , this will satisfy a NSE with homogeneous IC and a modified force term. Solve this, so

15 January, 2014 at 12:47 am

uvsSorry, I was too fast and forgot NSE was nonlinear. The construction is indeed slightly more involved.

Start with arbitrary smooth initial data $v_0$. The IVP has a unique smooth solution $v$ locally in time, say for $0 < t < t^*$. Define $u(t,x) = \varphi(t)\,v(t,x)$, where $\varphi$ is a smooth cutoff with $\varphi(0)=0$ and $\varphi(t)=1$ for $t \ge t^*/2$. We have

$u_t + (u\cdot\nabla)u - \Delta u + \nabla(\varphi p) = \varphi'\,v + (\varphi^2-\varphi)(v\cdot\nabla)v,$

which can be computed, given $v$. Thus $u$ satisfies the original NSE for $t^*/2 < t < t^*$ (as $\varphi = 1$ there) and a modified NSE with homogeneous ICs for $0 < t < t^*/2$. Since $u$ coincides with $v$ for $t^*/2 < t < t^*$, the piece of $u$ on $0 < t < t^*/2$ is that startup flow which you requested.

14 January, 2014 at 12:25 am

AnonymousDear All,

Here are some comments:

(1) Otelbaev’s reformulation defines an over-determined PDE system.

Combined with the continuity equation, the NS momentum equation is a self-consistent system for the velocity field. Hence there is no need to specify any boundary conditions on the pressure; his periodic pressure stated in (1.3) is redundant.

(2) The Clay NS problem formulations (A to D) are mathematically consistent.

(3) The extra pressure constraint of Otelbaev (1.4) is equivalent to the pressure p being in L^1(Q). But there is no justification or derivation of this constraint presented in his paper; thus the constraint is not an a priori bound.

On the other hand, there has been no experimental evidence to indicate that the pressure field in fluid motions is generally constrained in this manner.

(If we were free to specify extra constraints in addition to the Clay velocity boundary conditions, the NS would be trivial to solve!)

(4) Finally, his analysis does not shed any new light on turbulence.

Elucidating the nature of turbulence in the light of NS solutions is in fact implied in the Clay formulations.

19 January, 2014 at 8:36 pm

Stephen Montgomery-SmithIn the incompressible Navier-Stokes equation, the pressure is only determined up to a constant. Therefore equation (1.4) is harmless and irrelevant. Also, by the time he gets to page 12, he has rewritten the Navier-Stokes equation in a much more abstract form in equation (3.4). And the pressure term is implicitly given by equation (4.3), which is the Leray projection.

20 January, 2014 at 5:32 pm

AnonymousEquation (1.4) basically rules out the possibility of a singularity (time-wise). To assign the pressure to L^1(Q) effectively means that Poisson’s equation for the pressure has an inverse a priori! When you say that the pressure is determined up to a constant, do you mean that the NS sees only the pressure gradient, or do you mean that there will definitely be no singularity in the pressure?

20 January, 2014 at 5:52 pm

Stephen Montgomery-SmithThe rest of the Navier-Stokes equations also make implicit assumptions on the solution (smoothness, etc.). That is why the Clay Millennium problem is stated as “existence of solutions” when in fact weak solutions are already known to exist. What the Clay problem is really asking is whether there is some initial data for which, when the NS equations are solved, the solution ceases to exist in the classical sense at some time t=a. It is well known that if one can prove that certain quantities remain bounded as t converges to a, then the classical solution extends beyond time t=a. It is in this sense that Otelbaev claims to have a solution. So implicitly stating that the pressure is in L^1 before time t=a does not affect the difficulty of the problem one bit. And the main statement of Otelbaev’s claimed result is that the appropriate quantities are bounded as t converges to a.

But more than that, Otelbaev never makes any use of equation (1.4) anywhere else in the paper.

22 January, 2014 at 5:17 am

AnonymousDon’t weak solutions include singularly-behaved functions?

To go beyond the time t=a, how do we know that the flow would not develop a singularity (even though the Leray type singularity has been ruled out)?

It is precisely in this sense that an analysis of the NS must make allowance for a possible singularity. Otelbaev has implicitly used BOTH (1.4) and the periodic boundary conditions (1.3); they are assumed in deriving (4.5) from (4.4).

The issue is that we are not free to apply the Helmholtz-Leray projection without constraints; any attempt to invert the Poisson equation for the pressure must be fully justified.

Otelbaev did not derive any a priori bound on the “RHS” function, nor did he address the constraints on the H-L projection. For a careful introduction, see Cannone’s review, S1.1 in the Handbook of Mathematical Fluid Dynamics (2003). An example of using the H-L projection with certain periodic BCs is given in pp. 36-39 of Navier-Stokes Equations and Turbulence by Foias et al. (2000). The issue may be further clarified by considering the NS in R^3; see Lemma 1.6 of Majda and Bertozzi (2002).

14 January, 2014 at 3:59 am

AnonymousDear all,

Another anonymous comment – not the same anonymous as at 14-01-2014 12:25.

It seems strange to me that Prof. Otelbaev puts $\nu = 1$ in his formulation of the problem. He writes, “we can do it without loss of generality”. I am not a specialist, but it seems very strange to me. For example, the canonical Clay formulation and the formulation in Prof. Ladyzhenskaya’s book both contain the $\nu$ coefficient.

Maybe the result of Prof. Otelbaev is valid only for $Re=1$!? Then it could not cover the turbulence consideration.

IT

14 January, 2014 at 11:23 pm

Juha-Matti PerkkiöThe value of the viscosity is totally irrelevant, by rescaling. Both of my questions are still unanswered. The first one probably has a positive answer, provided that one can tweak the body force; but then again, Prof. Otelbaev’s approach doesn’t immediately solve the problem, since one needs to play the game without arbitrary God forces.

17 January, 2014 at 6:02 am

CK ChioSetting $\nu = 1$ is possible because the equation can be written in dimensionless form. You can rescale the velocity and time units.
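The rescaling can be made explicit (a standard computation, added here for completeness rather than taken from the comment): if $(v,p)$ solves the NSE with viscosity $\nu$, then rescaling the velocity, pressure and time by powers of $\nu$ produces a solution with unit viscosity:

```latex
% Suppose (v,p) solves  v_t + (v\cdot\nabla)v = \nu\,\Delta v - \nabla p.
% Define the rescaled fields
\tilde v(t,x) \;=\; \frac{1}{\nu}\, v\!\Big(\frac{t}{\nu},\, x\Big),
\qquad
\tilde p(t,x) \;=\; \frac{1}{\nu^{2}}\, p\!\Big(\frac{t}{\nu},\, x\Big).
% A direct computation then gives
\tilde v_t + (\tilde v\cdot\nabla)\tilde v \;=\; \Delta\tilde v - \nabla\tilde p,
% so \tilde v solves the NSE with \nu = 1 on the same spatial domain,
% and regularity of \tilde v is equivalent to regularity of v.
```

This is why setting $\nu = 1$ loses no generality for the regularity question, even though the physical Reynolds number is of course not fixed to any one value.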

15 January, 2014 at 12:13 am

matthew millerI’m extremely excited about the prospects of a strong solution for NS. I just wanted to convey my enthusiasm, even if it’s of no positive worth. The first time I ever read that there were such unsolved classical problems was on this blog, and I think it’d be really great if Professor Tao brought the experience full circle by giving his comments, be they speculative or definitive, regarding the news of this proposed proof. That said, I’m very curious about the developments of this thread in relation to the proof. Thanks!

15 January, 2014 at 11:52 pm

Sam ChioHere is a link to an earlier publication, M. Otelbaev et al. 2006, in English – Existence Conditions for a Global Strong Solution to One Class of Nonlinear Evolution Equations in a Hilbert Space

http://enu.kz/repository/repository2013/Existence-Conditions-for-a-Global-Strong-Solution-to-One-Class-of-Nonlinear-Evolution-Equations-in-a-Hilbert-Space.pdf

18 January, 2014 at 4:27 am

La demostración de Otelbaev del problema del milenio de Navier-Stokes | La Ciencia de la Mula Francis[…] covered exquisitely by the brilliant Terry Tao in “Why global regularity for Navier-Stokes is hard,” What’s New, 18 Mar 2007; I also recommend reading O. A. Ladyzhenskaya, “Sixth problem of the millennium: […]

19 January, 2014 at 1:27 pm

1011Reblogged this on 1011 and commented:

Why is the Navier-Stokes problem so hard?

Otelbaev’s proof (100 pages, in Russian):

http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf

19 January, 2014 at 8:00 pm

Stephen Montgomery-SmithIs there a blog anywhere with people trying to see if his proof is correct? So far in my reading, I am up to Lemma 6.6. I think I found a small error, but I am not sure if it is significant. The equality in the last line of the second displayed equation on page 37 seems to disagree with the definition of on page 34 by a factor .

As yet, I don’t see the main idea in his proof, and I am at the stage of checking it line by line. Also I don’t know Russian, so I have to ask friends to translate for me. So right now, I don’t know the statement of Lemma 6.7.

19 January, 2014 at 8:37 pm

Terence TaoVillatoro’s blog has some detailed analysis, and has recently raised some serious issues with the paper as well, in the crucial Section 6. (It’s in Spanish, but this is easy to translate online.)

I can’t read Russian either, so I am happy to defer the detailed checking to others, but my feeling is that this sort of abstract approach to the regularity problem, using only the energy identity and harmonic analysis estimates on the nonlinearity rather than more precise geometric information specific to the Navier-Stokes equation (e.g. the vorticity equation) is necessarily doomed to failure. I think I can formalise a specific obstruction in this regard and hope to present it here in a couple weeks.

21 January, 2014 at 5:01 am

Stephen Montgomery-SmithI think the typo is insignificant. It should read . And since , and , you can then finish off with .

25 January, 2014 at 7:07 pm

AndreasPlease forgive my elementary question; I see that it is mentioned but not answered in the post: what would be the impact of a negative result for the Navier-Stokes equation?

Is there any other conclusion than that the NSE does not explain physical fluids? I have searched, but could not find any texts discussing this possibility. Do you have any reference? Thank you!

26 January, 2014 at 5:13 am

Robert CoulterThere are many who do not believe that NS adequately explains the transition from laminar to turbulent flow. This transition is thought by some to be modeled by a mathematical bifurcation. Search the web with the keywords “bifurcation”, “symmetry breaking” and “fluid dynamics” to find many articles on this subject. An interesting historical note is that Heisenberg did his dissertation on this transition (see the Orr-Sommerfeld equation). Apparently he struggled with it. Maybe it was easier to work on quantum mechanics!

26 January, 2014 at 6:50 pm

AnonymousThe structure of NS solutions has been shown to lend itself to a satisfactory explanation of the laminar-turbulent transition. The process of the transition has nothing to do with “instability” or “bifurcation”; the non-linearity in the equations of motion has an inherent capability of proliferating vorticity scales.

27 January, 2014 at 2:39 am

Robert CoulterAre you saying that “proliferating vorticity scales” occur gradually as the Reynolds number is increased? Or is there a point where the vortices suddenly appear, or at least rapidly increase in number? Experimental evidence does not support a positive answer to the first question. A positive answer to the second question is an admission of a transition point.

27 January, 2014 at 7:10 pm

AnonymousThe vortices emerge in large quantities in rows as the Reynolds number increases, so that the flow appears to undergo a process of successive “jumps”. Depending on the geometry and test conditions of experiments, there is no particular reason that a laminar flow cannot suddenly transition into turbulence: the number of vortices is an accumulation of those large quantities, and the vorticity scales appear within a tiny margin of Reynolds number variation; the solution of the NS in this case must be a strongly non-linear function of the Reynolds number. It is extremely difficult, if ever feasible, in most high-Re experiments to detect this intricate process, as the (numerous) vortices interact with each other according to the Biot-Savart relation.
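For reference (a standard formula, not part of the comment): the Biot-Savart relation mentioned above recovers the velocity from the vorticity $\omega = \nabla\times u$, and in $\mathbb{R}^3$ it reads:

```latex
u(x) \;=\; \frac{1}{4\pi}\int_{\mathbb{R}^{3}}
\frac{\omega(y)\times(x-y)}{|x-y|^{3}}\,dy,
\qquad \omega \;=\; \nabla\times u .
```

Because the kernel is nonlocal, every vortex induces a velocity on every other, which is what makes the interaction of many vortices so intricate to track experimentally.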

28 January, 2014 at 2:28 am

Robert CoulterThe first “jump” does not appear until the Reynolds number is just over 2000 (in pipe flow). Below that the flow is laminar. I have never heard of laminar flow “suddenly transitioning” into turbulence without increasing the Reynolds number.

29 January, 2014 at 6:01 pm

AnonymousThe critical Reynolds number (based on pipe diameter and mean pipe speed) in pipe-flow experiments has been quoted from approximately 1800 to 20000, depending on test set-up and conditions. There are experimental confirmations of abrupt transition in some laminar flows within a tiny margin of the critical Reynolds number VARIATION, for instance in the boundary layer on a rotating circular disk.

By the way, what is a transition point?

30 January, 2014 at 4:02 am

Robert CoulterIt is generally recognized that transition occurs above 2000. There is a wide range reported for where this transition occurs. However, I have never seen any report of the transition occurring below 2000 (I will grant you the 1800, but that seems a little low).

A transition point: an abrupt change (1) where a mathematical model breaks down and a new model takes over; (2) where a mathematical model has an inflection point; (3) where a mathematical model develops a singularity or becomes multi-valued (in any of its derivatives); (4) a point in a mathematical model where a characteristic changes, e.g. overdamped vs. underdamped, subsonic vs. sonic flow.

5 February, 2014 at 5:44 pm

AnonymousCare to read Mullin’s Annual Review article (2011)? Are we not talking about the experimental investigations any more? I hope that you find evidence in physics for a transition point in pipe flows.

2 February, 2014 at 8:06 am

anonimousIn the days following the last comment, several counter-examples have been found to some intermediate statements in the paper by Otelbaev, and one concrete gap in the proof. Details can be found at http://math.stackexchange.com/questions/634890/has-prof-otelbaev-shown-existence-of-strong-solutions-for-navier-stokes-equatio/658551#658551

Until these essential remarks are answered, the result should not be considered proved.