It is always dangerous to venture an opinion as to why a problem is hard (cf. Clarke’s first law), but I’m going to stick my neck out on this one, because (a) it seems that there has been a lot of effort expended on this problem recently, sometimes perhaps without full awareness of the main difficulties, and (b) I would love to be proved wrong on this opinion :-) .

The global regularity problem for Navier-Stokes is of course a Clay Millennium Prize problem and it would be redundant to describe it again here. I will note, however, that it asks for existence of global smooth solutions to a Cauchy problem for a nonlinear PDE. There are countless other global regularity results of this type for many (but certainly not all) other nonlinear PDE; for instance, global regularity is known for Navier-Stokes in two spatial dimensions rather than three (this result essentially dates all the way back to Leray’s thesis in 1933!). Why is the three-dimensional Navier-Stokes global regularity problem considered so hard, when global regularity for so many other equations is easy, or at least achievable?

(For this post, I am only considering the global regularity problem for Navier-Stokes, from a purely mathematical viewpoint, and in the precise formulation given by the Clay Institute; I will not discuss at all the question as to what implications a rigorous solution (either positive or negative) to this problem would have for physics, computational fluid dynamics, or other disciplines, as these are beyond my area of expertise. But if anyone qualified in these fields wants to make a comment along these lines, by all means do so.)

The standard response to this question is *turbulence* – the behaviour of three-dimensional Navier-Stokes equations at fine scales is much more nonlinear (and hence unstable) than at coarse scales. I would phrase the obstruction slightly differently, as *supercriticality*. Or more precisely, all of the globally controlled quantities for Navier-Stokes evolution which we are aware of (and we are not aware of very many) are either *supercritical* with respect to scaling, which means that they are much weaker at controlling fine-scale behaviour than controlling coarse-scale behaviour, or they are *non-coercive*, which means that they do not really control the solution at all, either at coarse scales or at fine. (I’ll define these terms more precisely later.) At present, all known methods for obtaining global smooth solutions to a (deterministic) nonlinear PDE Cauchy problem require either

1. Exact and explicit solutions (or at least an exact, explicit transformation to a significantly simpler PDE or ODE);
2. Perturbative hypotheses (e.g. small data, data close to a special solution, or more generally a hypothesis which involves an $\epsilon$ somewhere); or
3. One or more globally controlled quantities (such as the total energy) which are both coercive and either critical or subcritical.

(Note that ingredients (1), (2), and (3) are currently *necessary* conditions for a global regularity result, but far from *sufficient*; otherwise, papers on the global regularity problem for various nonlinear PDEs would be substantially shorter :-) . In particular, there have been many good, deep, and highly non-trivial papers recently on global regularity for Navier-Stokes, but they all assume either (1), (2) or (3) via additional hypotheses on the data or solution. For instance, in recent years we have seen good results on global regularity assuming (2), as well as good results on global regularity assuming (3); a complete bibliography of recent results is unfortunately too lengthy to be given here.)

The Navier-Stokes global regularity problem for arbitrary large smooth data lacks all of these three ingredients. Reinstating (2) is impossible without changing the statement of the problem, or adding some additional hypotheses; also, in perturbative situations the Navier-Stokes equation evolves almost linearly, while in the non-perturbative setting it behaves very nonlinearly, so there is basically no chance of a reduction of the non-perturbative case to the perturbative one unless one comes up with a highly nonlinear transform to achieve this (e.g. a naive scaling argument cannot possibly work). Thus, one is left with only three possible strategies if one wants to solve the full problem:

1. Solve the Navier-Stokes equation exactly and explicitly (or at least transform this equation exactly and explicitly to a simpler equation);
2. Discover a new globally controlled quantity which is both coercive and either critical or subcritical; or
3. Discover a new method which yields global smooth solutions even in the absence of the ingredients (1), (2), and (3) above.

For the rest of this post I refer to these strategies as “Strategy 1”, “Strategy 2”, and “Strategy 3”.

Much effort has been expended here, especially on Strategy 3, but the supercriticality of the equation presents a truly significant obstacle which already defeats all known methods. Strategy 1 is probably hopeless; the last century of experience has shown that (with the very notable exception of completely integrable systems, of which the Navier-Stokes equations are *not* an example) most nonlinear PDE, even those arising from physics, do not enjoy explicit formulae for solutions from *arbitrary* data (although it may well be the case that there are interesting exact solutions from special (e.g. symmetric) data). Strategy 2 may have a little more hope; after all, the Poincaré conjecture became solvable (though still very far from trivial) after Perelman introduced a new globally controlled quantity for Ricci flow (the *Perelman entropy*) which turned out to be both coercive and critical. (See also my exposition of this topic.) But we are still not very good at discovering new globally controlled quantities; to quote Klainerman, “the discovery of any new bound, stronger than that provided by the energy, for general solutions of *any* of our basic physical equations would have the significance of a major event” (emphasis mine).

I will return to Strategy 2 later, but let us now discuss Strategy 3. The first basic observation is that the Navier-Stokes equation, like many other of our basic model equations, obeys a *scale invariance*: specifically, given any scaling parameter $\lambda > 0$, and any smooth velocity field $u: [0,T) \times \mathbb{R}^3 \to \mathbb{R}^3$ solving the Navier-Stokes equations for some time $T$, one can form a new velocity field $u^{(\lambda)}$ solving the Navier-Stokes equation up to time $\lambda^2 T$, by the formula

$$u^{(\lambda)}(t,x) := \frac{1}{\lambda} u\left( \frac{t}{\lambda^2}, \frac{x}{\lambda} \right).$$

(Strictly speaking, this scaling invariance is only present as stated in the absence of an external force, and with the non-periodic domain $\mathbb{R}^3$ rather than the periodic domain $\mathbb{T}^3$. One can adapt the arguments here to these other settings with some minor effort, the key point being that an approximate scale invariance can play the role of a perfect scale invariance in the considerations below. The pressure field $p$ gets rescaled too, to $p^{(\lambda)}(t,x) := \frac{1}{\lambda^2} p\left(\frac{t}{\lambda^2}, \frac{x}{\lambda}\right)$, but we will not need to study the pressure here. The viscosity $\nu$ remains unchanged.)
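A quick chain-rule check confirms the invariance: under the rescaling $u^{(\lambda)}(t,x) = \lambda^{-1} u(t/\lambda^2, x/\lambda)$, $p^{(\lambda)}(t,x) = \lambda^{-2} p(t/\lambda^2, x/\lambda)$, every term of the equation $\partial_t u + (u \cdot \nabla) u = \nu\,\Delta u - \nabla p$ picks up the same factor of $\lambda^{-3}$:

```latex
\begin{aligned}
\partial_t u^{(\lambda)}(t,x) &= \lambda^{-3}\,(\partial_t u)(t/\lambda^2, x/\lambda),\\
\big(u^{(\lambda)}\cdot\nabla\big)u^{(\lambda)}(t,x) &= \lambda^{-3}\,\big((u\cdot\nabla)u\big)(t/\lambda^2, x/\lambda),\\
\nu\,\Delta u^{(\lambda)}(t,x) &= \lambda^{-3}\,\nu\,(\Delta u)(t/\lambda^2, x/\lambda),\\
\nabla p^{(\lambda)}(t,x) &= \lambda^{-3}\,(\nabla p)(t/\lambda^2, x/\lambda),
\end{aligned}
```

so $u^{(\lambda)}$ solves the same system (with the same viscosity $\nu$) whenever $u$ does; the divergence-free condition is similarly preserved.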

We shall think of the rescaling parameter $\lambda$ as being large (e.g. $\lambda \gg 1$). One should then think of the transformation from $u$ to $u^{(\lambda)}$ as a kind of “magnifying glass”, taking fine-scale behaviour of $u$ and matching it with an identical (but rescaled, and slowed down) coarse-scale behaviour of $u^{(\lambda)}$. The point of this magnifying glass is that it allows us to treat both fine-scale and coarse-scale behaviour on an equal footing, by identifying both types of behaviour with something that goes on at a fixed scale (e.g. the unit scale). Observe that the scaling suggests that fine-scale behaviour should play out on much smaller time scales than coarse-scale behaviour ($T$ versus $\lambda^2 T$). Thus, for instance, if a unit-scale solution does something funny at time 1, then the rescaled fine-scale solution will exhibit something similarly funny at spatial scales $1/\lambda$ and at time $1/\lambda^2$. Blowup can occur when the solution shifts its energy into increasingly finer and finer scales, thus evolving more and more rapidly and eventually reaching a singularity in which the scale in both space and time on which the bulk of the evolution is occurring has shrunk to zero. In order to prevent blowup, therefore, we must arrest this motion of energy from coarse scales (or low frequencies) to fine scales (or high frequencies). (There are many ways in which to make these statements rigorous, for instance using Littlewood-Paley theory, which we will not discuss here, preferring instead to leave terms such as “coarse-scale” and “fine-scale” undefined.)

Now, let us take an arbitrary large-data smooth solution to Navier-Stokes, and let it evolve over a very long period of time [0,T), assuming that it stays smooth except possibly at time T. At very late times of the evolution, such as those near to the final time T, there is no reason to expect the solution to resemble the initial data any more (except in perturbative regimes, but these are not available in the arbitrary large-data case). Indeed, the only control we are likely to have on the late-time stages of the solution are those provided by globally controlled quantities of the evolution. Barring a breakthrough in Strategy 2, we only have two really useful globally controlled (i.e. bounded even for very large T) quantities:

- The *maximum kinetic energy* $\frac{1}{2} \sup_{0 \le t < T} \int_{\mathbb{R}^3} |u(t,x)|^2\,dx$; and
- The *cumulative energy dissipation* $\nu \int_0^T \int_{\mathbb{R}^3} |\nabla u(t,x)|^2\,dx\,dt$.

Indeed, the energy conservation law implies that these quantities are both bounded by the initial kinetic energy E, which could be large (we are assuming our data could be large) but is at least finite by hypothesis.
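Both bounds follow from the standard energy identity, obtained (in sketch) by taking the dot product of the equation with $u$ and integrating by parts; the transport and pressure terms vanish because $u$ is divergence-free:

```latex
\frac{1}{2}\int_{\mathbb{R}^3} |u(t,x)|^2\,dx
\;+\; \nu \int_0^{t}\!\!\int_{\mathbb{R}^3} |\nabla u(s,x)|^2\,dx\,ds
\;=\; \frac{1}{2}\int_{\mathbb{R}^3} |u(0,x)|^2\,dx \;=\; E
\qquad (0 \le t < T),
```

so the maximum kinetic energy and the cumulative energy dissipation are each at most $E$.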

The above two quantities are *coercive*, in the sense that control of these quantities implies that the solution, even at very late times, stays in a bounded region of some function space. However, this is basically the only thing we know about the solution at late times (other than that it is smooth until time T, but this is a qualitative assumption and gives no bounds). So, unless there is a breakthrough in Strategy 2, we cannot rule out the worst-case scenario that the solution near time T is essentially an *arbitrary* smooth divergence-free vector field which is bounded both in kinetic energy and in cumulative energy dissipation by E. In particular, near time T the solution could be concentrating the bulk of its energy into fine-scale behaviour, say at some spatial scale $1/\lambda$. (Of course, cumulative energy dissipation is not a function of a single time, but is an integral over all time; let me suppress this fact for the sake of the current discussion.)

Now, let us take our magnifying glass and blow up this fine-scale behaviour by $\lambda$ to create a coarse-scale solution to Navier-Stokes. Given that the fine-scale solution could (in the worst-case scenario) be as bad as an arbitrary smooth vector field with kinetic energy and cumulative energy dissipation at most $E$, the rescaled unit-scale solution can be as bad as an arbitrary smooth vector field with kinetic energy and cumulative energy dissipation at most $\lambda E$, as a simple change-of-variables shows. Note that the control given by our two key quantities has worsened by a factor of $\lambda$; because of this worsening, we say that these quantities are *supercritical* – they become increasingly useless for controlling the solution as one moves to finer and finer scales. This should be contrasted with *critical* quantities (such as the energy for *two-dimensional* Navier-Stokes), which are invariant under scaling and thus control all scales equally well (or equally poorly), and *subcritical* quantities, control of which becomes increasingly powerful at fine scales (and increasingly useless at very coarse scales).
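The change of variables is worth recording explicitly: substituting $y = x/\lambda$ (so $dx = \lambda^3\,dy$) and $s = t/\lambda^2$ (so $dt = \lambda^2\,ds$),

```latex
\begin{aligned}
\int_{\mathbb{R}^3} |u^{(\lambda)}(t,x)|^2\,dx
  &= \lambda^{-2}\cdot\lambda^{3} \int_{\mathbb{R}^3} |u(t/\lambda^2,y)|^2\,dy
   \;=\; \lambda \int_{\mathbb{R}^3} |u(t/\lambda^2,y)|^2\,dy,\\
\nu \int_0^{\lambda^2 T}\!\!\int_{\mathbb{R}^3} |\nabla u^{(\lambda)}(t,x)|^2\,dx\,dt
  &= \lambda^{-4}\cdot\lambda^{3}\cdot\lambda^{2}\;\nu \int_0^{T}\!\!\int_{\mathbb{R}^3} |\nabla u(s,y)|^2\,dy\,ds
   \;=\; \lambda\,\nu \int_0^{T}\!\!\int_{\mathbb{R}^3} |\nabla u|^2\,dy\,ds,
\end{aligned}
```

so both controlled quantities for $u^{(\lambda)}$ are exactly $\lambda$ times those of $u$.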

Now, suppose we know of examples of unit-scale solutions whose kinetic energy and cumulative energy dissipation are as large as $\lambda E$, but which can shift their energy to the next finer scale, e.g. a half-unit scale, in a bounded amount O(1) of time. Given the previous discussion, we cannot rule out the possibility that our rescaled solution behaves like this example. Undoing the scaling, this means that we cannot rule out the possibility that the original solution will shift its energy from spatial scale $1/\lambda$ to spatial scale $1/2\lambda$ in time $O(1/\lambda^2)$. If this bad scenario repeats over and over again, then convergence of geometric series shows that the solution may in fact blow up in finite time. Note that the bad scenarios do not have to happen immediately after each other (the *self-similar* blowup scenario); the solution could shift from scale $1/\lambda$ to $1/2\lambda$, wait for a little bit (in rescaled time) to “mix up” the system and return to an “arbitrary” (and thus potentially “worst-case”) state, and then shift to $1/4\lambda$, and so forth. While the cumulative energy dissipation bound can provide a little bit of a bound on how long the system can “wait” in such a “holding pattern”, it is far too weak to prevent blowup in finite time. To put it another way, we have no rigorous, deterministic way of preventing Maxwell’s demon from plaguing the solution at increasingly frequent (in absolute time) intervals, invoking various rescalings of the above scenario to nudge the energy of the solution into increasingly finer scales, until blowup is attained.
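The convergence of the geometric series can be illustrated with a toy computation (this is pure bookkeeping for the hypothetical cascade, not a simulation of Navier-Stokes; the constant `C` is a hypothetical unit-scale shift time):

```python
# Toy bookkeeping for the hypothetical blowup cascade: if the bulk of the
# energy lives at spatial scale 2**-n and, by the parabolic scaling t ~ x**2,
# shifting it to scale 2**-(n+1) costs time C * 4**-n, then the total time
# spent over all shifts is a convergent geometric series -- so the cascade
# can reach "scale zero" (blowup) in finite time.
C = 1.0  # hypothetical time for one unit-scale energy shift
total_time = sum(C * 4.0 ** (-n) for n in range(60))
print(total_time)  # converges to 4*C/3
```

The point is only that the time costs shrink fast enough that infinitely many shifts fit into a finite time interval.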

Thus, in order for Strategy 3 to be successful, we basically need to rule out the scenario in which unit-scale solutions with *arbitrarily large* kinetic energy and cumulative energy dissipation shift their energy to the next finer scale. But every single analytic technique we are aware of (except for those involving *exact* solutions, i.e. Strategy 1) requires at least one bound on the size of the solution in order to have any chance at all. Basically, one needs at least one bound in order to control all nonlinear errors – and any strategy we know of which does not proceed via exact solutions will have at least one nonlinear error that needs to be controlled. The only thing we have here is a bound on the *scale* of the solution, which is not a bound in the sense that a norm of the solution is bounded; and so we are stuck.

To summarise, any argument which claims to yield global regularity for Navier-Stokes via Strategy 3 must inevitably (via the scale invariance) provide a radically new method for providing non-trivial control of nonlinear unit-scale solutions of arbitrary large size for unit time, which looks impossible without new breakthroughs on Strategy 1 or Strategy 2. (There are a couple of loopholes that one might try to exploit: one can instead try to refine the control on the “waiting time” or “amount of mixing” between each shift to the next finer scale, or try to exploit the fact that each such shift requires a certain amount of energy dissipation, but one can use similar scaling arguments to the preceding to show that these types of loopholes cannot be exploited without a new bound along the lines of Strategy 2, or some sort of argument which works for arbitrarily large data at unit scales.)

To rephrase in an even more jargon-heavy manner: the “energy surface” on which the dynamics is known to live can be quotiented by the scale invariance. After this quotienting, the solution can stray arbitrarily far from the origin even at unit scales, and so we lose all control of the solution unless we have exact control (Strategy 1) or can significantly shrink the energy surface (Strategy 2).

The above was a general critique of Strategy 3. Now I’ll turn to some known specific attempts to implement Strategy 3, and discuss where the difficulty lies with these:

1. *Using weaker or approximate notions of solution* (e.g. viscosity solutions, penalised solutions, super- or sub-solutions, etc.). This type of approach dates all the way back to Leray. It has long been known that by weakening the nonlinear portion of Navier-Stokes (e.g. taming the nonlinearity), or strengthening the linear portion (e.g. introducing hyperdissipation), or by performing a discretisation or regularisation of spatial scales, or by relaxing the notion of a “solution”, one can get global solutions to approximate Navier-Stokes equations. The hope is then to take limits and recover a smooth solution, as opposed to a mere global *weak* solution, which was already constructed by Leray for Navier-Stokes all the way back in 1933. But in order to ensure the limit is smooth, we need convergence in a strong topology. In fact, the same type of scaling arguments used before basically require that we obtain convergence in either a critical or subcritical topology. Absent a breakthrough in Strategy 2, the only types of convergence we have are in very rough – in particular, in supercritical – topologies. Attempting to upgrade such convergence to critical or subcritical topologies is the qualitative analogue of the quantitative problems discussed earlier, and ultimately faces the same problem (albeit in very different language) of trying to control unit-scale solutions of arbitrarily large size. Working in a purely qualitative setting (using limits, etc.) instead of a quantitative one (using estimates, etc.) can disguise these problems (and, unfortunately, can lead to errors if limits are manipulated carelessly), but the qualitative formalism does not magically make these problems disappear. Note that weak solutions are already known to be badly behaved for the closely related Euler equation. More generally, by recasting the problem in a sufficiently abstract formalism (e.g. formal limits of near-solutions), there are a number of ways to create an abstract object which could be considered as a kind of generalised solution, but the moment one tries to establish actual control on the regularity of this generalised solution one will encounter all the supercriticality difficulties mentioned earlier.
2. *Iterative methods* (e.g. contraction mapping principle, Nash-Moser iteration, power series, etc.) *in a function space*. These methods are perturbative, and require *something* to be small: either the data has to be small, the nonlinearity has to be small, or the time of existence desired has to be small. These methods are excellent for constructing *local* solutions for large data, or global solutions for *small* data, but cannot handle global solutions for large data (running into the same problems as any other Strategy 3 approach). These approaches are also typically rather insensitive to the specific structure of the equation, which is already a major warning sign, since one can easily construct (rather artificial) systems similar to Navier-Stokes for which blowup is known to occur. The optimal perturbative result is probably very close to that established by Koch-Tataru, for reasons discussed in that paper.
3. *Exploiting blowup criteria*. Perturbative theory can yield some highly non-trivial blowup criteria – that certain norms of the solution must diverge if the solution is to blow up. For instance, a celebrated result of Beale-Kato-Majda shows that the maximal vorticity must have a divergent time integral at the blowup point. However, all such blowup criteria are subcritical or critical in nature, and thus, barring a breakthrough in Strategy 2, the known globally controlled quantities cannot be used to reach a contradiction. Scaling arguments similar to those given above show that perturbative methods cannot achieve a supercritical blowup criterion.
4. *Asymptotic analysis of the blowup point(s)*. Another proposal is to rescale the solution near a blowup point and take some sort of limit, and then continue the analysis until a contradiction ensues. This type of approach is useful in many other contexts (for instance, in understanding Ricci flow). However, in order to actually extract a useful limit (in particular, one which still solves Navier-Stokes in a strong sense, and does not collapse to the trivial solution), one needs to uniformly control all rescalings of the solution – or in other words, one needs a breakthrough in Strategy 2. Another major difficulty with this approach is that blowup can occur not just at one point, but could conceivably occur on a one-dimensional set; this is another manifestation of supercriticality.
5. *Analysis of a minimal blowup solution*. This is a strategy, initiated by Bourgain, which has recently been very successful in establishing large data global regularity for a variety of equations with a critical conserved quantity, namely to assume for contradiction that a blowup solution exists, and then extract a *minimal* blowup solution which minimises the conserved quantity. This strategy (which basically pushes the perturbative theory to its natural limit) seems set to become the standard method for dealing with large data critical equations. It has the appealing feature that there is enough compactness (or almost periodicity) in the minimal blowup solution (once one quotients out by the scaling symmetry) that one can begin to use subcritical and supercritical conservation laws and monotonicity formulae as well (see my survey on this topic). Unfortunately, as the strategy is currently understood, it does not seem to be directly applicable to a supercritical situation (unless one simply assumes that some critical norm is globally bounded) because it is impossible, in view of the scale invariance, to minimise a non-scale-invariant quantity.
6. *Abstract approaches* (avoiding the use of properties specific to the Navier-Stokes equation). At its best, abstraction can efficiently organise and capture the key difficulties of a problem, placing the problem in a framework which allows for a direct and natural resolution of these difficulties without being distracted by irrelevant concrete details. (Kato’s semigroup method is a good example of this in nonlinear PDE; regrettably for this discussion, it is limited to subcritical situations.) At its worst, abstraction conceals the difficulty within some subtle notation or concept (e.g. in various types of convergence to a limit), thus incurring the risk that the difficulty is “magically” avoided by an inconspicuous error in the abstract manipulations. An abstract approach which manages to breezily ignore the supercritical nature of the problem thus looks very suspicious. More substantively, there are many equations which enjoy a coercive conservation law yet still can exhibit finite time blowup (e.g. the mass-critical focusing NLS equation); an abstract approach thus would have to exploit some subtle feature of Navier-Stokes which is not present in all the examples in which blowup is known to be possible. Such a feature is unlikely to be discovered abstractly before it is first discovered concretely; the field of PDE has proven to be the type of mathematics where progress generally starts in the concrete and then flows to the abstract, rather than vice versa.

If we abandon Strategy 1 and Strategy 3, we are thus left with Strategy 2 – discovering new bounds, stronger than those provided by the (supercritical) energy. This is not *a priori* impossible, but there is a huge gap between simply wishing for a new bound and actually discovering and then rigorously establishing one. Simply inserting the existing energy bounds into the Navier-Stokes equation and seeing what comes out will provide a few more bounds, but they will all be supercritical, as a scaling argument quickly reveals. The only other way we know of to create global non-perturbative deterministic bounds is to discover a new conserved or monotone quantity. In the past, when such quantities have been discovered, they have always been connected either to geometry (symplectic, Riemannian, complex, etc.), to physics, or to some consistently favourable (defocusing) sign in the nonlinearity (or in various “curvatures” in the system). There appears to be very little usable geometry in the equation; on the one hand, the Euclidean structure enters the equation via the diffusive term and by the divergence-free nature of the vector field, but the nonlinearity is instead describing transport by the velocity vector field, which is basically just an arbitrary volume-preserving infinitesimal diffeomorphism (and in particular does not respect the Euclidean structure at all). One can try to quotient out by this diffeomorphism (i.e. work in material coordinates) but there are very few geometric invariants left to play with when one does so. (In the case of the Euler equations, the vorticity vector field is preserved modulo this diffeomorphism, as observed for instance by Li, but this invariant is very far from coercive, being almost purely topological in nature.)
The Navier-Stokes equation, being a system rather than a scalar equation, also appears to have almost no favourable sign properties, in particular ruling out the type of bounds which the maximum principle or similar comparison principles can give. This leaves physics, but apart from the energy, it is not clear if there are any physical quantities of fluids which are *deterministically* monotone. (Things look better on the stochastic level, in which the laws of thermodynamics might play a role, but the Navier-Stokes problem, as defined by the Clay institute, is deterministic, and so we have Maxwell’s demon to contend with.) It would of course be fantastic to obtain a fourth source of non-perturbative controlled quantities, not arising from geometry, physics, or favourable signs, but this looks like somewhat of a long shot at present. Indeed, given the turbulent, unstable, and chaotic nature of Navier-Stokes, it is quite conceivable that in fact no reasonable globally controlled quantities exist beyond those which arise from the energy.
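The poverty of usable structure can be seen concretely in the vorticity formulation: taking the curl of the equation, the vorticity $\omega := \nabla \times u$ evolves by

```latex
\partial_t \omega + (u \cdot \nabla)\,\omega = (\omega \cdot \nabla)\, u + \nu\, \Delta \omega.
```

The transport and diffusion terms here are benign, but the vortex stretching term $(\omega \cdot \nabla) u$ destroys any hope of a maximum principle for $|\omega|$. In two dimensions this term vanishes identically (the vorticity is then a scalar which is merely transported and diffused), which is one way of seeing why the two-dimensional problem is so much easier.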

Of course, given how hard it is to show global regularity, one might instead try to establish finite time blowup (this also is acceptable for the Millennium prize). Unfortunately, even though the Navier-Stokes equation is known to be very unstable, it is not clear at all how to pass from this to a rigorous demonstration of a blowup solution. All the rigorous finite time blowup results (as opposed to mere instability results) that I am aware of rely on one or more of the following ingredients:

1. Exact blowup solutions (or at least an exact transformation to a significantly simpler PDE or ODE, for which blowup can be established);
2. An ansatz for a blowup solution (or approximate solution), combined with some nonlinear stability theory for that ansatz;
3. A comparison principle argument, dominating the solution by another object which blows up in finite time, taking the solution with it; or
4. An indirect argument, constructing a functional of the solution which must attain an impossible value in finite time (e.g. a quantity which is manifestly non-negative for smooth solutions, but must become negative in finite time).
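For contrast, method (4) does work for some other equations; the classical example (due to Glassey, and quoted here only as an illustration from a different equation) is the focusing mass-critical nonlinear Schrödinger equation $i\,\partial_t u + \Delta u = -|u|^{4/d} u$, where the variance plays the role of the impossible-value functional:

```latex
V(t) := \int_{\mathbb{R}^d} |x|^2\, |u(t,x)|^2\, dx \;\ge\; 0,
\qquad
V''(t) = 16\, E[u_0].
```

If the conserved energy $E[u_0]$ is negative, then integrating twice gives $V(t) \le V(0) + V'(0)\,t + 8\,E[u_0]\,t^2$, which becomes negative in finite time; this is absurd, so the solution cannot remain smooth (with finite variance) for all time. It is precisely this sort of manifestly signed functional with a usable second-derivative identity that seems to be missing for Navier-Stokes.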

It may well be that there is some exotic symmetry reduction which gives (1), but no-one has located any good exactly solvable special case of Navier-Stokes (in fact, those which have been found are known to have global smooth solutions). (2) is problematic for two reasons: firstly, we do not have a good ansatz for a blowup solution, but perhaps more importantly it seems hopeless to establish a stability theory for any such ansatz thus created, as this problem is essentially a more difficult version of the global regularity problem, and in particular subject to the main difficulty, namely controlling the highly nonlinear behaviour at fine scales. (One of the ironies in pursuing method (2) is that in order to establish rigorous *blowup* in some sense, one must first establish rigorous *stability* in some other (renormalised) sense.) Method (3) would require a comparison principle, which as noted before appears to be absent for the non-scalar Navier-Stokes equations. Method (4) suffers from the same problem, ultimately coming back to the “Strategy 2” problem that we have virtually no globally monotone quantities in this system to play with (other than energy monotonicity, which clearly looks insufficient by itself). Obtaining a new type of mechanism to force blowup other than (1)-(4) above would be quite revolutionary, not just for Navier-Stokes; but I am unaware of even any proposals in these directions, though perhaps topological methods might have some effectiveness.

So, after all this negativity, do I have any positive suggestions for how to solve this problem? My opinion is that Strategy 1 is impossible, and Strategy 2 would require either some exceptionally good intuition from physics, or else an incredible stroke of luck. Which leaves Strategy 3 (and indeed, I think one of the main reasons why the Navier-Stokes problem is interesting is that it *forces* us to create a Strategy 3 technique). Given how difficult this strategy seems to be, as discussed above, I only have some extremely tentative and speculative thoughts in this direction, all of which I would classify as “blue-sky” long shots:

* *Work with ensembles of data, rather than a single initial datum*. All of our current theory for deterministic evolution equations deals only with a single solution from a single initial datum. It may be more effective to work with parameterised families of data and solutions, or perhaps probability measures (e.g. Gibbs measures or other invariant measures). One obvious partial result to shoot for is to try to establish global regularity for *generic* large data rather than *all* large data; in other words, acknowledge that Maxwell's demon might exist, but show that the probability of it actually intervening is very small. The problem is that we have virtually no tools for dealing with generic (average-case) data other than by treating all (worst-case) data; the enemy is that the Navier-Stokes flow itself might have some perverse entropy-reducing property which somehow makes the average case drift towards (or at least recur near) the worst case over long periods of time. This is incredibly unlikely to be the truth, but we have no tools to prevent it from happening at present.
* *Work with a much simpler (but still supercritical) toy model*. The Navier-Stokes model is parabolic, which is nice, but is complicated in many other ways, being relatively high-dimensional and also non-scalar in nature. It may make sense to work with other, simplified models which still contain the key difficulty that the only globally controlled quantities are supercritical. Examples include the Katz-Pavlovic dyadic model for the Euler equations (for which blowup can be demonstrated by a monotonicity argument; see this survey for more details), or the spherically symmetric defocusing supercritical nonlinear wave equation.
* *Develop non-perturbative tools to control deterministic non-integrable dynamical systems*. Throughout this post we have been discussing PDEs, but actually there are similar issues arising in the nominally simpler context of finite-dimensional dynamical systems (ODEs). Except in perturbative contexts (such as the neighbourhood of a fixed point or invariant torus), the long-time evolution of a dynamical system for deterministic data is still largely only controllable by the classical tools of exact solutions, conservation laws and monotonicity formulae; a discovery of a new and effective tool for this purpose would be a major breakthrough. One natural place to start is to better understand the long-time, non-perturbative dynamics of the classical three-body problem, for which there are still fundamental unsolved questions.
* *Establish really good bounds for critical or nearly-critical problems*. Recently, I showed that having a very good bound for a critical equation essentially implies that one also has a global regularity result for a slightly supercritical equation. The idea is to use a monotonicity formula which does weaken very slightly as one passes to finer and finer scales, but such that each such passage to a finer scale costs a significant amount of monotonicity; since there is only a bounded amount of monotonicity to go around, it turns out that the latter effect just barely manages to overcome the former in my equation to recover global regularity (though by doing so, the bounds worsen from polynomial in the critical case to double exponential in my logarithmically supercritical case). I severely doubt that my method can be pushed to non-logarithmically supercritical equations, but it does illustrate that having very strong bounds at the critical level may lead to some modest progress on the problem.
* *Try a topological method*. This is a special case of (1). It may well be that a primarily topological argument may be used either to construct solutions, or to establish blowup; there are some precedents for this type of construction in elliptic theory. Such methods are very global by nature, and thus not restricted to perturbative or nearly-linear regimes. However, there is no obvious topology here (except possibly for that generated by the vortex filaments) and as far as I know, there is not even a "proof-of-concept" version of this idea for any evolution equation. So this is really more of a wish than any sort of concrete strategy.
* *Understand pseudorandomness*. This is an incredibly vague statement; but part of the difficulty with this problem, which also exists in one form or another in many other famous problems (e.g. the Riemann hypothesis, the twin prime and Goldbach conjectures, normality of the digits of π, the Collatz conjecture, etc.) is that we expect any sufficiently complex (but deterministic) dynamical system to behave "chaotically" or "pseudorandomly", but we still have very few tools for actually making this intuition precise, especially if one is considering deterministic initial data rather than generic data. Understanding pseudorandomness in other contexts, even dramatically different ones, may indirectly shed some insight on the turbulent behaviour of Navier-Stokes.
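The Katz-Pavlovic dyadic model mentioned above is simple enough to sketch numerically. The following is my own minimal illustration (not code from the post, and with schematic coefficients; the normalisations in the actual Katz-Pavlovic paper differ) of a dyadic shell model of the form a_n' = λ^n a_{n-1}² − λ^{n+1} a_n a_{n+1}, whose energy Σ a_n² is formally conserved yet is transferred monotonically to finer scales:

```python
# Hedged sketch of a dyadic (shell) model in the spirit of Katz-Pavlovic:
#   a_n'(t) = lam**n * a_{n-1}**2 - lam**(n+1) * a_n * a_{n+1}
# The quadratic terms telescope, so the energy sum(a_n**2) is formally
# conserved, yet it is pushed one-way toward higher shells n (finer
# scales) -- the cascade mechanism behind the blowup result cited above.
lam = 2.0                # dyadic scale ratio between shells
N = 12                   # number of shells retained
dt, steps = 1e-4, 2000   # short forward-Euler run up to t = 0.2

a = [0.0] * N
a[0] = 1.0               # all energy starts at the coarsest shell

for _ in range(steps):
    prev = a[:]
    for n in range(N):
        lo = prev[n - 1] if n > 0 else 0.0
        hi = prev[n + 1] if n < N - 1 else 0.0
        a[n] = prev[n] + dt * (lam**n * lo**2 - lam**(n + 1) * prev[n] * hi)

energy = sum(x * x for x in a)
# energy stays numerically near 1 while shells 1, 2, ... fill up
```

Even over this short time the coarsest shell drains and the finer shells fill, while the total energy is (numerically) conserved; the rigorous blowup argument shows this transfer cannot be stopped.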

In conclusion, while it is good to occasionally have a crack at impossible problems, just to try one's luck, I would personally spend much more of my time on other, more tractable PDE problems than the Clay prize problem, though one should certainly keep that problem in mind if, in the course of working on other problems, one indeed does stumble upon something that smells like a breakthrough in Strategy 1, 2, or 3 above. (In particular, there are many other serious and interesting questions in fluid equations that are not anywhere near as difficult as global regularity for Navier-Stokes, but still highly worthwhile to resolve.)
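For readers who want the calculation behind the word "supercritical" above, here is the standard scaling heuristic (a textbook computation, not quoted from the post):

```latex
% Navier-Stokes has the scaling symmetry: if u(t,x) solves the equation,
% so does the rescaled field
%   u_\lambda(t,x) = \lambda\, u(\lambda^2 t, \lambda x).
% The kinetic energy then transforms (substituting y = \lambda x) as
\int_{\mathbb{R}^3} |u_\lambda(t,x)|^2 \,dx
  = \lambda^2 \int_{\mathbb{R}^3} |u(\lambda^2 t, \lambda x)|^2 \,dx
  = \lambda^{-1} \int_{\mathbb{R}^3} |u(\lambda^2 t, y)|^2 \,dy .
% As \lambda \to \infty (zooming in to fine scales) the energy of the
% rescaled solution shrinks, so a bounded energy says almost nothing
% about fine-scale concentration: the energy is supercritical.
```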

## 756 comments


26 March, 2022 at 7:40 pm

Dwight Walsh: Well, it has been a week now since I made my postings showing that your constructed q functions needed to be constrained by the Navier-Stokes equation in order to be a valid rebuttal to my proof. Now, I know you would like nothing better than to shoot down my arguments on this forum, but it seems you cannot do so.

27 March, 2022 at 12:21 am

Antoine Deleforge: No. I already explained to you in full detail why our counter-example for q completely invalidates your proof in my last comment (13 February, 2022 at 9:57 am), regardless of whether it is constrained by NS, in a language that I believe should be understandable by a high-school pupil. I note you still haven't given any proper thought to this and keep displaying your misunderstandings over and over again. Again, if you are not able to follow this basic logic, I am afraid the situation is hopeless and there is no point in continuing to write here. Bye!

27 March, 2022 at 1:53 pm

Dwight Walsh: There is something else you've overlooked in your constructed q function which I probably should have picked up weeks ago. The fact that this must be a smooth blowup means that there is a time t_b < T_b such that for all t in the semi-open interval t_b <= t < T_b we have limit as t -> T_b of |grad q(0,t)| = 0. For your constructed q, however, we have limit as t -> T_b of |grad q(0,t)| = \infty.

Now I realize this might not be the most pleasant, ego-building news, but I too have had my fair share of mistakes and disappointments. So let’s be a sport about this. — OK

27 March, 2022 at 2:48 pm

Anonymous: Dwight, your proof does not even begin to address the difficulty of the problem. You're wasting everyone's time.

27 March, 2022 at 7:38 pm

Dwight Walsh: So fire me! (LOL)!

And don't tell me about wasting time until you have spent 10 years struggling unsuccessfully to rebuild a career after being laid off for "lack of contract support" despite excellent performance ratings, spent the next 20 years unable to work due to health problems resulting from the abuse of the first 10 years, and then dealt with a heart attack from it all!

But I did learn patience over these past 30 years. [Let’s face it — my very survival depended on it!] Therefore I won’t mind “wasting everyone’s time” until I get some straight answers about where these errors are in my proof and why they are errors. At least they are getting paid. And so-called “global errors” which are red-flags at best and provide no such information are not sufficient. I already explained why false red-flags are likely with my particular proof.

28 March, 2022 at 2:49 am

Anonymous: If you are so sure about the correctness of your proof, do you have a reasonable explanation for how so many experts missed such a simple and elegant solution to the NS problem?

28 March, 2022 at 6:44 am

Dwight Walsh: I have an explanation, though I doubt you would call it reasonable. It's the sort of thing that tends to happen when large amounts of credit and/or money are at stake. The current "experts" don't like competing with newcomers with new ideas that might actually work. I happen to be living proof of this, and it wouldn't surprise me in the slightest if others before me have had ideas similar to mine, but found themselves up against the same system that I am facing now. We simply don't hear about them.

28 March, 2022 at 9:59 pm

Dwight Walsh: Antoine and anon,

Here’s what you are missing in your postings of 10 February 2022 at 6:00 AM:

Since the blowup of q at |x| = 0 as t -> T_b must be a smooth blowup, there is a time t_b < T_b such that a global maximum in q occurs at this point

for all t in the semi-open interval t_b <= t < T_b, and the limit as t -> T_b of |grad p| must equal 0. For every constructed q function shown on this page, however, this limit is infinite.

Now from equation (108) the function Q and therefore the scalar pressure p must also blow up at |x| = 0 as t -> T_b. But again, since this is a smooth blowup, we must have |grad p(0,t)| -> 0 as t -> T_b. This means that |grad p| is continuous on the entire CLOSED time interval [0,T_b], and therefore the time integral of |grad p| from 0 to T_b is finite, consistent with equation (120). At this point, equation (128) implies that the maximum of K at time T_b is finite, and therefore THERE IS NO BLOWUP AFTER ALL!

Oh well — Better luck next time!

29 March, 2022 at 4:44 pm

Dwight Walsh: Antoine,

In your posting of 7 February 2022 at 2:14 AM, you stated:

“… the integral [in equation (118)] is unbounded at x=0 on the open-interval [0,T_b) (do the math, it’s pretty straightforward).”

In view of the facts presented in this current posting, do you still support your original statement that the integral in equation (118) is unbounded, or do you acknowledge that this statement is erroneous?

30 March, 2022 at 5:31 am

Antoine: Sorry, but even after reading your comment 10 times I can't make sense of it. There seems to be a display issue, because the phrase "for all t in the semi-open interval t_b <= t T_b of |grad p|…" makes no sense.

It does not matter though. I can reiterate, with 100% certainty, that the counter-example q that has been given to you by me and others constitutes a rigorous proof that the statement "(108) and (109) imply (118)" made in your paper is false in general, which invalidates your proof. The reason this implication is false has to do with an illegal implicit permutation of limits when you compute (116). But you do not seem to be able to understand this. That's why we prove it to you in the form of a counter-example, which is (usually, for most people) easier to understand, while being a completely rigorous refutation.

It is amazing to me that you keep replying with things like "but wait, your q has to verify this and this because of NS". It really shows that, after months, you still haven't understood our refutation on a fundamental, logical level. "Our q" does not have to verify anything that you see fit. As explained in detail in my Feb. 1 comment, it is just a counter-example to a specific, false logical implication made in your paper.

P.S.: Your recent comment where you write "The current 'experts' don't like competing with newcomers" shows how sadly and completely misguided you are. I am not at all working on NS; I work in a completely different, remote field of applied math. You are not a "competitor" to me, or most likely to anyone writing in this comment section. In fact, most reasonable people don't waste their time trying to frontally tackle huge problems like NS. This is very well explained by Terence Tao in this blogpost: https://terrytao.wordpress.com/career-advice/dont-prematurely-obsess-on-a-single-big-problem-or-big-theory/.

I, for one, would be absolutely thrilled if a random, lone researcher like you came up with the solution to a millennium problem. It just so happens that one basic mistake in your paper (pointed out by a previous anonymous commenter) was easy enough that I could understand it. I was then foolish enough to believe that I would be able to explain it to you clearly, hence saving your and everyone else's time. Obviously I strongly overestimated you.
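The "illegal implicit permutation of limits" invoked in the exchange above is a standard analysis pitfall. Here is a generic textbook counterexample (my own illustration, unrelated to the specific q functions disputed in this thread) showing that a limit and an integral cannot in general be interchanged:

```python
from fractions import Fraction

# Classic counterexample to swapping a limit with an integral:
#   f_n(x) = n for 0 < x <= 1/n, and 0 otherwise, on (0, 1].
# Each f_n integrates to exactly 1, but for any fixed x > 0 we have
# f_n(x) = 0 as soon as n > 1/x, so the pointwise limit is the zero
# function, whose integral is 0.

def integral_f(n):
    # f_n is a rectangle of height n and width 1/n, so its integral is 1.
    return Fraction(n) * Fraction(1, n)

def pointwise_limit(x):
    # lim_{n -> oo} f_n(x) = 0 for every fixed x in (0, 1].
    return Fraction(0)

lim_of_integrals = 1    # lim_n  \int f_n  = 1
integral_of_limit = 0   # \int  lim_n f_n  = 0
# The two operations do not commute: 1 != 0.
```

Whether this mechanism actually occurs at the disputed step of the paper is exactly what the thread is arguing about; the example only shows why the interchange needs justification.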

30 March, 2022 at 6:41 am

Anonymous: This article, *Ten Signs a Claimed Mathematical Breakthrough is Wrong*, is also relevant.

30 March, 2022 at 8:57 am

Dwight Walsh: In your posting on 5 February 2022 at 9:22 AM, you stated:

The counter example I gave for q *is* smooth. I never questioned or discussed smoothness in my argument, …

Perhaps we both should have thought a little more about what "smooth" means in the current context. What is happening as t -> T_b at |x| = 0 is a SMOOTH BLOWUP in the q function. This means that q must have a smooth spatial profile at all points x in R^3 for t < T_b; and, as discussed in the paper, there must be a time t_b < T_b such that the limit of |grad q(0,t)| as t -> T_b is zero. Unfortunately, none of the constructed q functions I have seen either here or in my email have this property, even though they may have a smooth temporal profile on the semi-open time interval [0,T_b) but clearly blow up as t -> T_b.

30 March, 2022 at 9:05 am

Dwight Walsh: I see, after posting of course, that the same rendering problem is occurring now which happened in my last posting. If you see the combination t_b T_b in the posting, then probably the system dropped a less-than sign between the t_b and the T_b.

30 March, 2022 at 6:41 pm

Dwight Walsh: Well this is incredible! I get thumb-downed for posting info on a rendering error. Some folks out there never cease to amaze me! (LOL)

30 March, 2022 at 9:26 am

Dwight Walsh: One important point I forgot to mention in this posting:

Even if you do find an acceptable q function, my posting of 28 March 2022 at 9:59 PM indicates that you are only a “hop and step” away from proving equation (120) which implies there is no blowup, thereby contradicting the conjectured q function.

30 March, 2022 at 5:36 pm

Dwight Walsh: WHOOPS! I TAKE THAT BACK!

There is no contradiction! The fact that there is no blowup shows that by using an acceptable constructed q function, we come to the same conclusion that we did when using the actual q function. Therefore — no counter-example!

Case closed.

31 March, 2022 at 6:15 am

Anonymous: Jesus, Dwight, would you just stop already?! What is it you're seeking? If you've truly discovered a complete solution, thus cracking NS, well good for you! If we mere mortals are too stupid to understand your solution, why do you keep shoving it down our throats? Can't you just revel in its beauty on your own? Isn't that a reward in itself already?

31 March, 2022 at 9:01 am

Dwight Walsh: Come on, Anonymous — You know full well what I am seeking. You and your comrades are not claiming you don't understand my proof. What you are claiming is that you have iron-clad proof that my proof is invalid. To make such a claim, however, you had better have a thorough understanding of what my arguments are and what invalidates them. But all you have been able to provide is a nebulous "global error" which is nothing more than a "red flag" at best. Then, on 31 March 2022, I showed that even when playing by your rules, finding a so-called "counter-example" is not possible. None of your constructed q functions meet the required constraints, and any others that do would lead to the same conclusion (i.e. equation (120)) as the actual q function. So it appears that you don't even have those red flags anymore.

Therefore, I am seeking at least acknowledgement that you don’t have conclusive proof that my proof of NS global regularity is invalid. Also, an apology for the insulting and offensive comments (such as your last one) would be most appropriate.

31 March, 2022 at 4:43 pm

Anonymous: DW, you are fooling yourself by assuming the result to prove the result itself. Not to mention that you make numerous undergraduate level mistakes.

You do not understand the potential theory you quoted, and you do not at all understand the Poisson equation. This demonstrates that your claim of ever passing a course in PDE is false.

"Also, the proof we present requires only an undergraduate background in calculus, differential equations (ordinary and partial), potential theory, and vector analysis for a reader to follow it." As a matter of fact, the "paper" is so badly written that you fool yourself into thinking it is correct.

In the way you dispute Antoine's comment, you don't even have a sense of mathematical logic, which is an absolute must for one's mathematical training. This is one of the reasons why you fool yourself.

If you love to fantasize about your victory, feel free to do so. But PLEASE STOP POLLUTING THE COMMENT AREA.

31 March, 2022 at 9:15 pm

Dwight Walsh: >DW, you are fooling yourself by assuming the result to prove the result itself.

Show me where I do this!

——————————————————–

> Not to mention that you make numerous undergraduate level mistakes.

Give me just one example!

——————————————————–

>You do not understand the potential theory you quoted, and you do not at all understand the Poisson equation. This demonstrates that your claim of ever passing a course in PDE is false.

This is immediately refuted with a quick look at my academic transcript. Not only that, but I made extensive use of my PDE background in the electromagnetics and quantum mechanics courses also on my transcript. Case closed!

——————————————————–

>"Also, the proof we present requires only an undergraduate background in calculus, differential equations (ordinary and partial), potential theory, and vector analysis for a reader to follow it." As a matter of fact, the "paper" is so badly written that you fool yourself into thinking it is correct.

This doesn’t jibe too well with the comment from one reviewer who stated “the proof is so easy that even an undergraduate reader could follow”.

——————————————————–

>In the way you dispute Antoine's comment, you don't even have a sense of mathematical logic, which is an absolute must for one's mathematical training. This is one of the reasons why you fool yourself.

Yeah, somehow use of a false statement in a proof has always given me heartburn! Also, over the past few days I have shown that Antoine's constructed q functions (along with everyone else's) do not meet the required constraints. However, use of q functions that are consistent with the constraints leads to the same result (equation (120)) as the actual q function, thereby foiling any claims of a "global error" and preserving the credibility of the proof by your own rules!

——————————————————–

>PLEASE STOP POLLUTING THE COMMENT AREA

Don’t worry — I was hoping to receive a simple acknowledgement that your “counter-example” showing a “global error” in my proof is invalid. At this point, however, you’ve got your foot so far into your mouth I doubt that will be necessary.

1 April, 2022 at 5:21 am

Anonymous: Yes, yes, Dwight, you got it. Congratulations, and gtfo.

1 April, 2022 at 7:14 pm

Dwight Walsh: Thank you for your acknowledgement that you do not have a valid counter-example for establishing a "global error" in my proof, even if it was done begrudgingly and unprofessionally. But it seems that your comrades need a little more convincing.

3 April, 2022 at 9:57 am

Jas, the Physicist: Can you guys speak mathematics please? This is really hard to read; I don't know what you're saying.

1 April, 2022 at 7:59 pm

Dwight Walsh: Antoine,

What about your version of q(x,t)? Do you now acknowledge that your constructed q(x,t) does not satisfy the smooth blowup constraint even though it may be smooth on the semi-open time interval [0,T_b)?

2 April, 2022 at 4:25 am

Anonymous: GTFO, DW. You proved the result you like and you should go publish it somewhere. Not here. You would probably also like to inform the media as well regarding your breakthrough. Don't forget to prepare your undergraduate transcript since the journalists will certainly ask for it. Congratulations.

Others' objections do not matter. It is very much disgraceful, annoying and disgusting that you bother others over and over. Stop. Go publish your result.

2 April, 2022 at 3:47 pm

Dwight Walsh: I asked Antoine, not you!

>Others' objections do not matter. It is very much disgraceful, annoying and disgusting that you bother others over and over.

That’s because others stuck with the (false) “global error” claims over and over and refused to read the paper. I only defended the paper over and over. BTW — Since you are such an “expert” mathematician, how come you didn’t find the actual (or “local”) error in the proof, but instead jumped on the “global error” bandwagon? It seems to me that an expert would have found the error in less time than what it took you to post all of those insulting messages about me. Then, the matter would be settled, wouldn’t it!?

2 April, 2022 at 6:10 pm

Anonymous: DW, if you have a damn beautiful proof that you think is correct, GO PUBLISH it. Or are you simply afraid that your "authorities" may easily point out your mistakes if you submit your result anywhere? What the f* do you get by harassing readers of this blog?!

If your proof is ultimately correct, it does NOT make a damn difference whether Antoine, or any other reader HERE, objects to you or not. There is no certification/credit to be awarded even if every reader HERE says you are right. There are tremendously many online instructions for amateur mathematicians on writing and publishing papers.

If you don't care about publication, then post your preprints on arXiv, as a professional would.

By all means, stop your nonsense and live your life. You have been at this for half a damn year.

2 April, 2022 at 6:27 pm

Anonymous: "I only defended the paper over and over." This is not the right place to "defend" your paper, DW. Don't you understand?!

You could either get into a math PhD program in a graduate school so that you can write your masterpiece as a PhD dissertation and defend it in front of the thesis committee, or submit your “paper” to an academic journal.

Please, stop your smooth blowup drama here and leave.

3 April, 2022 at 12:38 am

Dwight Walsh: What makes you think I didn't already attempt everything you suggested? Those suggestions are actually two-party decisions, and I am only one person. Therefore, they are not possible if I am simply ignored, which has been the case for 30 years now, so I am ready for a change. Also, don't tell me about professionalism when my research career was done away with after just eight years!

So, while I got the most hostile reception ever on this forum, at least I wasn’t ignored which is far better than what I got anyplace else.

3 April, 2022 at 9:56 am

Jas, the Physicist: Which school did you want to go to but didn't get accepted in, and why is your solution correct? I still don't even understand the problem; do you want to see my solution?

3 April, 2022 at 9:59 am

Jas, the Physicist: Go publish your result; it has to be in a journal for two years before you can win the prize, I think. You better do it or I'm gonna publish mine.

3 April, 2022 at 11:09 am

Anonymous: Maybe they will let the two of you share the prize.

3 April, 2022 at 9:44 pm

Dwight Walsh: >This is not the right place to "defend" your paper, DW. Don't you understand?!

Don’t you understand, Anonymous — This is not simply about defending a paper. At the time I announced the paper on this forum you claimed there was an error in equation (118) because I didn’t consider q functions such as r^{-100} which blow up at r=0. You didn’t seem to catch on to the fact that q was already defined in the proof, and therefore not adjustable at some future point in time. I clarified that this was not an error, but instead of acknowledging that I was only using q according to its definition, you simply went deeper into nonsensical claims. Eventually, it looked very much like you and your comrades were out to adjust the q function to your liking, thereby causing a fictitious blowup, and then claiming this blowup as a “counter-example” for proving a “global error”. Can we say “strawman fallacy on steroids”?

At this point, I have a better (though not perfect) understanding of what you were trying to do. It is basically an error-in-proof detection technique based on the writings of Prof. Terence Tao. While I won't argue over its legitimacy, I will say that the user must thoroughly understand how to use it or the results can be very misleading, and I strongly believe this is EXACTLY what happened in the case of my proof. As I explained in previous postings, the constructed q functions used in these arguments were not consistent with the required constraints. I was able to show, however, that any correctly constructed q function would lead to the same long-term result as the actual q function, namely a finite time integral of |grad p| for all t > 0. This, of course, implies there are no "global errors". Nevertheless, you and your comrades tenaciously held onto the claim that my proof was erroneous despite the fact that you could not actually find any errors.

Now, as we all know, postings on an online forum can be read all over the world. This being the case, I do my utmost to keep my postings as clear and accurate as possible. If I do make a mistake, I do my best to correct it whether I discover it myself or someone else informs me of it. Similarly, I expect others to acknowledge their errors, and that includes erroneous claims made about my work or my ideas. Unfortunately, however, that part has been rather lacking lately, and I simply will not allow such claims to go unchallenged.

3 April, 2022 at 11:59 pm

Anonymous: Dwight… if there's one thing you need to learn, it's the Dunning-Kruger effect. Go read up on it; if you think deeply about it, it just might help sort out your issues with NS.

But seriously, just look at what has transpired over all this time. You posted your purported proof, others tried to point out what’s wrong with it and your subsequent posts basically amounted to shouting them down and accusing them of denying you the glory of solving this big-ass problem.

BUT YOU ARE WRONG! Jesus I have no background in hardcore PDEs and even I could understand what Antoine and others have been trying kindly to tell you, only to be met with frenzied denial. Really, stop thinking about PDEs for a while and try to understand the (rather basic, tbh) logic of Antoine’s argument. Leave your ego at the door for just this once. Sit down and try to earnestly understand what all these reasonably intelligent people have got to say.

I already suspect you’re fuming and foaming by now but here’s the thing. You can proceed as I suggested – and I’m sincerely trying to help you – or you can continue shitposting here and wallowing in all that misery until TT blocks you. Be wise.

6 April, 2022 at 12:02 am

Dwight Walsh: Well, except for the Dunning-Kruger effect, I've discussed all of these issues with you and your comrades a million and one times already, so there isn't much point in me repeating these arguments again. Evidently, you simply don't have the mathematical literacy to understand the concept of a smooth blowup which is central to the proof. Now, as I recall, you called yourself an "expert" in one of your posts, but I see no evidence of this whatsoever — not in mathematics anyway. Was this a self-appointed title?

BTW – It is ABSOLUTELY INCORRECT to place the lim as \epsilon -> 0 operation behind the integral sign in equation (117). In fact, you actually end up with an expression that doesn’t even make sense since \epsilon appears only as the lower limit of integration. Therefore, in your version of the Green’s function integral for solving the Poisson equation, you integrate over R^3 the lim as \epsilon -> 0 of a function that does not depend on \epsilon. Not only that, but now there is also a completely undefined \epsilon in the lower limit of the integral. Can we say COMPLETE UTTER NONSENSE! Now if I have completely misunderstood this “theorem” of yours, please send me or post a copy of it along with your proof complete with correctly rendered math symbols and equations. I won’t divulge your email address to anyone without your permission.

Anyway, your comrade Antoine applied this "theorem", or at least attempted to do so, in his posting on 7 February 2022 at 2:14 AM to construct the version of the q function used as a "counter-example" to my proof. Since the entire attempt was based on faulty claims from the start, however, I figured this was a moot point and didn't raise the issue any longer.

In regard to me "fuming and foaming", all I ever did was react as any normal homo sapiens would when experiencing the type of frustrations I indicated in my last post on 3 April 2022 at 9:44 PM. While we are on the subject, however, I should add that you have used profanities over the last few posts whereas I have not.

Regarding Prof. Tao, he doesn't seem to give a flip about what's put up on this forum. He went on to bigger and better things over five years ago, probably after concluding that this problem truly was the "impossible problem" he described in this article. In fact, as I understand it, he is now looking for a counter-example to global regularity rather than a proof of it. As I see it, however, he got stuck on the "infamous" nonlinear convective term and didn't adequately consider other constraints on the solution such as those imposed by the smooth blowup conditions. I tried to inform him of this possibility on this forum a few times over the past 18 months, but got the same non-response that I have been accustomed to for the past 30 years.

So, I have a suggestion for you. Since you seem to have lots of clout on this forum, perhaps you could contact TT, inform him of my ideas and my paper, and suggest that he get in touch with me. Although you are unable to comprehend the smooth blowup concept, perhaps he can. And, if I am correct, it could save him from decades of pursuing dead end leads. Now doesn’t that sound much more positive than claiming I will be wallowing in all that misery until TT blocks me?
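One narrow technical point in the exchange a few comments above concerned moving lim ε→0 under the integral sign in a Green's function representation for the Poisson equation. Without adjudicating the thread's dispute about equation (117), the standard background fact is that in R³ the Newtonian kernel 1/|x−y| is locally integrable, so ε-truncated integrals of it converge as ε → 0. A small sketch of mine, using the radially symmetric case where the truncated integral is exact:

```python
import math

# In R^3, the Newtonian kernel 1/|y| is integrable near the origin:
# in spherical coordinates,
#   int_{eps < |y| < 1} |y|**-1 dy
#     = 4*pi * int_eps^1 r**-1 * r**2 dr = 2*pi*(1 - eps**2),
# which converges to 2*pi as eps -> 0. This is why epsilon-truncated
# Green's function integrals have a well-defined limit for nice data.

def truncated_kernel_integral(eps):
    # Exact value of the integral over the shell eps < |y| < 1.
    return 2 * math.pi * (1 - eps**2)

limit_value = 2 * math.pi  # the eps -> 0 limit
```

The delicate question in the thread is not this convergence but whether a particular interchange of limits is justified for the specific functions in the paper, which the example above does not settle.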

6 April, 2022 at 8:33 am

Anonymous: Dwight, you seem not to have looked at Tao's progress toward disproving global regularity, because he demonstrates the existence of what you call a "smooth blowup" for a slightly modified Navier-Stokes equation.

https://arxiv.org/abs/1402.0290

Your argument would just as well imply global regularity for this equation.

6 April, 2022 at 9:48 am

Dwight Walsh: >Your argument would just as well imply global regularity for this equation.

I’m afraid this statement needs proof. In addition to the smooth blowup constraints, my arguments use the relatively simple form of the exact NS equation to establish upper bounds on the fluid velocity and scalar pressure from the initial conditions. With Prof. Tao’s HORRENDOUSLY COMPLICATED “averaged” convective term, however, I’m not convinced this is even possible. So this time, the burden of proof is on you. And if you can’t follow even my relatively simple proof, you are in for one rough ride!

6 April, 2022 at 11:50 pm

Dwight Walsh: >Your argument would just as well imply global regularity for this equation.

There is something I should have asked in my last post about this statement of yours. Since you don’t understand my argument, how do you know it “would just as well imply global regularity” for Tao’s “averaged” NS equation?

10 April, 2022 at 8:10 am

Anonymous: Dwight, something I think you don't appreciate is that there's nothing that special about the pressure p. From an analytic standpoint it is almost the same thing as the nonlinearity. If you take the averaged nonlinearity in Tao's paper and apply some order -1 operator to it, then you can call it p and apply the arguments from your paper (which I do understand; as you point out, it's not difficult).

10 April, 2022 at 9:30 pm

Dwight Walsh: I'm not sure what you mean by "there's nothing that special about the pressure". This doesn't jibe very well with your statement on 22 September 2021 at 8:19 AM, which reads as

——————————————-

If you actually prove a bound on the time integral of grad p, concluding global regularity would be routine.

——————————————-

Also, if you follow equations (121)-(125) of the paper, you will see that grad p(x_b, t) is the only force (per unit volume) that can cause an increase in the kinetic energy density K(x_b, t) at a potential blowup point x_b after this point becomes a global maximum of K. Therefore, it seems to me there is something quite special about the scalar pressure p. Its gradient is the only quantity that could possibly cause a blowup.

Finally, there is one important aspect to my paper that I don’t believe you understand. In establishing upper bounds on the fluid velocity components u_i, the nonlinear (convective) term is used in its EXACT form instead of some sort of AVERAGE form. Now I don’t follow at all how Dr. Tao comes up with his average nonlinear term, but in general, detailed information is lost in any averaging process as we choose a single value as “representative” of a bunch of values, and there is no single inverse operator that magically restores this lost information. I believe Dr. Tao calls this missing information the “finer structure” of the nonlinear term which he states is necessary for a positive solution of the Navier-Stokes global regularity problem. Anyway, it seems there is no suitable operator that can be applied to the averaged nonlinear term to restore the correct value of p. Therefore, the arguments from my paper cannot be applied to the average nonlinear term, and attempts to do so would produce meaningless results.

6 April, 2022 at 6:39 pm

Anonymous: Look at what you're doing again, Dwight. When people suggest something new, you dismiss it as 'horrendously complicated' and say you're 'not convinced this is possible'. Be honest, have you really sat down and made attempts to understand what Prof Tao wrote? Or did you immediately dismiss it simply because it's completely beyond your reach?

6 April, 2022 at 9:34 pm

Dwight Walsh: I've had a copy of this paper by Prof. Tao since before I even knew about this forum, so it is not exactly "something new" to me. However, since he and I are not working the same problem, it is irrelevant. So let's get back to the question as to what, if anything, is actually wrong with my proof of NS global regularity. And, since you can't understand smooth blowups or their implications, why don't you and maybe your comrades suggest to Prof. Tao that he get in touch with me?

9 April, 2022 at 11:11 pm

Dwight Walsh: In the "Statement of main result" section of this paper, Prof. Tao states that

——————

“… any proposed positive solution to the regularity problem which does not use the finer structure of the nonlinearity cannot possibly be successful.”

——————

In my paper, I use the nonlinear term in its EXACT form. How much more “finer structure” of this term is needed to resolve NS global regularity?

3 April, 2022 at 9:53 am

Jas, the Physicist: Okay, but what about that topology we are supposed to construct? Has anyone found the open sets yet? I remember thinking it was something very "not-smooth", maybe even fractal, but can someone please correct me, because I solved the problem in isolation and I don't think it's completely correct.

How would I turn a fractal into an open set anyway?

3 April, 2022 at 10:02 am

Jas, the Physicist

> Why is the three-dimensional Navier-Stokes global regularity problem considered so hard, when global regularity for so many other equations is easy, or at least achievable?

My first thought was: knots.

A knot is usually thought of as a one-dimensional object in three-dimensional space. Sometimes we refer to this as the knot having codimension 2 (that is, 3 − 1).

Codimension 2 turns out to make knots more interesting than they otherwise would be. In codimension 1, you don’t have much freedom to deform your object–imagine a circle in a plane, you can certainly push and prod it into many shapes, but they’re all the “same” in a sense, unless you cross the circle over itself, a transformation which isn’t allowed. In codimension 3, where you have a one-dimensional object embedded in a four-dimensional space, you have too much freedom for anything interesting to happen–you can always unravel your knot.

As it turns out, however, we can talk about higher-dimensional knot theory, but we need to change our idea of what a knot is. Instead of imagining one-dimensional objects in higher-dimensional spaces, we want to consider codimension-two objects: a two-dimensional surface embedded in four dimensions, and so on.

There is actually a freely available book appropriately called High-dimensional Knot Theory. It is likely unreadable to a layperson, but the entire book focuses on these codimension 2 embeddings.

23 April, 2022 at 10:12 pm

Dwight Walsh

On 26 January 2022 at 10:08 AM, the following comment was posted by Anonymous:

———————————–

I gave you specific equation numbers and a counterexample for your inequalities for the pressure (equations (117)-(118)). Because you don’t understand analysis well enough to pass an undergraduate course on the subject, you dismissed it immediately.

Everyone, please stop engaging with this guy. His paper is getting rejected from journals because it’s wrong. It’s clear as an expert reading his paper that there is no new idea, just an elementary analysis mistake.

———————————–

Is that so!? Then perhaps you would like to explain how an article titled “A shorter solution to the Clay millennium problem about regularity of the Navier-Stokes equations” recently appeared in the Journal of Scientific Research and Studies [Vol. 9(1), pp. 8-16, February, 2022], in which the author, Konstantinos E. Kyritsis, proves Navier-Stokes global regularity using the EXACT same principles as I did, but with somewhat different notation. Evidently, not everyone is buying your “counterexamples” or your “expertise”, but I guess success is based on know-who and not know-how.

24 April, 2022 at 8:15 am

Anonymous

My comments apply just as well to that paper. You should contact the author, Dwight; perhaps you can collaborate toward a proof of another millennium problem.

24 April, 2022 at 4:15 pm

Dwight Walsh

Well then, I think this proves that just because a paper is “wrong” doesn’t mean it will be rejected, right!? So let’s not have any more nonsense about “his paper is getting rejected from journals because it’s wrong”. And, since you are such an “expert”, why don’t you fill me in a little on how an author can get a “wrong” paper accepted?

25 April, 2022 at 4:25 pm

Dwight Walsh

In looking back over some of the recent comments, I noticed that you experts have been strangely silent about the question I posted on 09 April 2022 at 11:11 PM regarding how much “finer structure” of this notorious nonlinear term is needed to resolve NS global regularity. In my paper, I obtain EXACT upper bounds for the u_i based on the original form of the entire NS equation, whereas everyone else (including Prof. Tao himself) uses approximations in the nonlinear term. Which would you say is more reliable? All right, experts — you can ignore this question now.