I’ve just uploaded to the arXiv the paper “Finite time blowup for an averaged three-dimensional Navier-Stokes equation”, submitted to J. Amer. Math. Soc. The main purpose of this paper is to formalise the “supercriticality barrier” for the global regularity problem for the Navier-Stokes equation, which roughly speaking asserts that it is not possible to establish global regularity by any “abstract” approach which only uses upper bound function space estimates on the nonlinear part of the equation, combined with the energy identity. This is done by constructing a modification of the Navier-Stokes equations with a nonlinearity that obeys essentially all of the function space estimates that the true Navier-Stokes nonlinearity does, and which also obeys the energy identity, but for which one can construct solutions that blow up in finite time. Results of this type had been previously established by Montgomery-Smith, Gallagher-Paicu, and Li-Sinai for variants of the Navier-Stokes equation without the energy identity, and by Katz-Pavlovic and by Cheskidov for dyadic analogues of the Navier-Stokes equations in five and higher dimensions that obeyed the energy identity (see also the work of Plechac and Sverak and of Hou and Lei that also suggest blowup for other Navier-Stokes type models obeying the energy identity in five and higher dimensions), but to my knowledge this is the first blowup result for a Navier-Stokes type equation in three dimensions that also obeys the energy identity. Intriguingly, the method of proof in fact hints at a possible route to establishing blowup for the true Navier-Stokes equations, which I am now increasingly inclined to believe is the case (albeit for a very small set of initial data).

To state the results more precisely, recall that the Navier-Stokes equations can be written in the form

$$\partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p, \qquad \nabla \cdot u = 0$$
for a divergence-free velocity field $u$ and a pressure field $p$, where $\nu > 0$ is the viscosity, which we will normalise to be one. We will work in the non-periodic setting, so the spatial domain is ${\mathbb R}^3$, and for sake of exposition I will not discuss matters of regularity or decay of the solution (but we will always be working with strong notions of solution here rather than weak ones). Applying the Leray projection $P$ onto divergence-free vector fields to this equation, we can eliminate the pressure, and obtain an evolution equation

$$\partial_t u = \Delta u + B(u,u) \ \ \ \ \ (1)$$

purely for the velocity field, where $B$ is a certain bilinear operator on divergence-free vector fields (specifically, $B(u,v) = -\frac{1}{2} P\big( (u \cdot \nabla) v + (v \cdot \nabla) u \big)$). The global regularity problem for Navier-Stokes is then equivalent to the global regularity problem for the evolution equation (1).

An important feature of the bilinear operator $B$ appearing in (1) is the cancellation law

$$\langle B(u,u), u \rangle = 0$$

(using the $L^2$ inner product on divergence-free vector fields), which leads in particular to the fundamental energy identity

$$\frac{1}{2} \int_{{\mathbb R}^3} |u(T,x)|^2\, dx + \int_0^T \int_{{\mathbb R}^3} |\nabla u(t,x)|^2\, dx\, dt = \frac{1}{2} \int_{{\mathbb R}^3} |u(0,x)|^2\, dx.$$
This identity (and its consequences) provides essentially the only known *a priori* bound on solutions to the Navier-Stokes equations for large data and arbitrary times. Unfortunately, as discussed in this previous post, the quantities controlled by the energy identity are supercritical with respect to scaling, which is the fundamental obstacle that has defeated all attempts to solve the global regularity problem for Navier-Stokes without any additional assumptions on the data or solution (e.g. perturbative hypotheses, or *a priori* control on a critical norm such as the $H^{1/2}$ norm).
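To spell out the link between the cancellation law and the energy identity (ignoring regularity justifications): pairing the evolution equation (1) with $u$ gives

```latex
\frac{d}{dt} \frac{1}{2} \int_{\mathbb{R}^3} |u|^2\, dx
  = \langle \Delta u + B(u,u), u \rangle
  = \langle \Delta u, u \rangle
  = - \int_{\mathbb{R}^3} |\nabla u|^2\, dx,
```

using $\langle B(u,u), u \rangle = 0$ and integration by parts; integrating in time then yields the energy identity.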

Our main result is then (slightly informally stated) as follows

**Theorem 1** There exists an *averaged* version $\tilde B$ of the bilinear operator $B$, of the form

$$\tilde B(u,v) = \int_\Omega m_{3,\omega}(D) Rot_{3,\omega} B\big( m_{1,\omega}(D) Rot_{1,\omega} u, m_{2,\omega}(D) Rot_{2,\omega} v \big)\, d\mu(\omega)$$

for some probability space $(\Omega,\mu)$, some spatial rotation operators $Rot_{i,\omega}$ for $i=1,2,3$, and some Fourier multipliers $m_{i,\omega}$ of order $0$, for which one still has the cancellation law

$$\langle \tilde B(u,u), u \rangle = 0$$

and for which the averaged Navier-Stokes equation

$$\partial_t u = \Delta u + \tilde B(u,u) \ \ \ \ \ (2)$$

admits solutions that blow up in finite time.
(There are some integrability conditions on the Fourier multipliers required in the above theorem in order for the conclusion to be non-trivial, but I am omitting them here for sake of exposition.)

Because spatial rotations and Fourier multipliers of order $0$ are bounded on most function spaces, $\tilde B$ automatically obeys almost all of the upper bound estimates that $B$ does. Thus, this theorem blocks any attempt to prove global regularity for the true Navier-Stokes equations which relies purely on the energy identity and on upper bound estimates for the nonlinearity; one must use some additional structure of the nonlinear operator $B$ which is not shared by an averaged version $\tilde B$ of that operator. Such additional structure certainly exists – for instance, the Navier-Stokes equation has a vorticity formulation involving only differential operators rather than pseudodifferential ones, whereas a general equation of the form (2) does not. However, “abstract” approaches to global regularity generally do not exploit such structure, and thus cannot be used to affirmatively answer the Navier-Stokes problem.

It turns out that the particular averaged bilinear operator $\tilde B$ that we will use will be a finite linear combination of *local cascade operators*, which take the form

$$C(u,v) := \sum_{n \in {\mathbb Z}} (1+\epsilon_0)^{5n/2} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \psi_{3,n}$$
where $\epsilon_0 > 0$ is a small parameter, $\psi_1, \psi_2, \psi_3$ are Schwartz vector fields whose Fourier transform is supported on an annulus, and $\psi_{i,n}(x) := (1+\epsilon_0)^{3n/2} \psi_i( (1+\epsilon_0)^n x )$ is an $L^2$-rescaled version of $\psi_i$ (basically a “wavelet” of wavelength about $(1+\epsilon_0)^{-n}$ centred at the origin). Such operators were essentially introduced by Katz and Pavlovic as dyadic models for $B$; they have essentially the same scaling property as $B$ (except that one can only scale along powers of $1+\epsilon_0$, rather than over all positive reals), and in fact they can be expressed as an average of $B$ in the sense of the above theorem, as can be shown after a somewhat tedious amount of Fourier-analytic symbol manipulations.
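With the normalisation $\psi_{i,n}(x) := (1+\epsilon_0)^{3n/2} \psi_i( (1+\epsilon_0)^n x )$ (the convention assumed in this post), the rescaling is an $L^2$ isometry, so the wavelets at every scale carry the same energy; substituting $y = (1+\epsilon_0)^n x$, $dy = (1+\epsilon_0)^{3n}\, dx$:

```latex
\| \psi_{i,n} \|_{L^2}^2
  = \int_{\mathbb{R}^3} (1+\epsilon_0)^{3n} \big| \psi_i\big( (1+\epsilon_0)^n x \big) \big|^2 \, dx
  = \int_{\mathbb{R}^3} |\psi_i(y)|^2 \, dy
  = \| \psi_i \|_{L^2}^2 .
```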

If we consider nonlinearities $\tilde B$ which are a finite linear combination of local cascade operators, then the equation (2) more or less collapses to a system of ODE in certain “wavelet coefficients” of $u$. The precise ODE that shows up depends on what precise combination of local cascade operators one is using. Katz and Pavlovic essentially considered a single cascade operator together with its “adjoint” (needed to preserve the energy identity), and arrived (more or less) at the system of ODE

$$\partial_t X_n = -(1+\epsilon_0)^{2n} X_n + (1+\epsilon_0)^{5(n-1)/2} X_{n-1}^2 - (1+\epsilon_0)^{5n/2} X_n X_{n+1} \ \ \ \ \ (3)$$

where $X_n: [0,T] \rightarrow {\mathbb R}$ are scalar fields for each integer $n$. (Actually, Katz-Pavlovic worked with a technical variant of this particular equation, but the differences are not so important for this current discussion.) Note that the quadratic terms on the RHS carry a higher exponent of $1+\epsilon_0$ than the dissipation term; this reflects the supercritical nature of this evolution (the energy $\frac{1}{2} \sum_n X_n^2$ is monotone decreasing in this flow, so the natural size of $X_n$ given the control on the energy is $O(1)$). There is a slight technical issue with the dissipation if one wishes to embed (3) into an equation of the form (2), but it is minor and I will not discuss it further here.
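As a crude sanity check on the monotone-energy claim, here is a forward-Euler sketch of a truncated version of the cascade system (3) in the dyadic case $1+\epsilon_0 = 2$. The mode count, time step, and integration time are illustrative choices of mine, not taken from the paper:

```python
# Forward-Euler sketch of a truncated Katz-Pavlovic-type cascade system,
# dyadic case lambda = 1 + eps_0 = 2.  Illustrative parameters only.

LAM = 2.0
N = 6                      # retained modes X_0, ..., X_{N-1}
DT = 1e-5
STEPS = 100_000            # integrate up to t = 1

DISS = [LAM ** (2 * n) for n in range(N)]     # dissipation coefficients lam^{2n}
CASC = [LAM ** (2.5 * n) for n in range(N)]   # cascade coefficients lam^{5n/2}

def rhs(X):
    """dX_n/dt = -lam^{2n} X_n + lam^{5(n-1)/2} X_{n-1}^2 - lam^{5n/2} X_n X_{n+1},
    truncated so that X_{-1} = X_N = 0."""
    F = []
    for n in range(N):
        f = -DISS[n] * X[n]
        if n >= 1:
            f += CASC[n - 1] * X[n - 1] ** 2    # inflow from the coarser mode
        if n + 1 < N:
            f -= CASC[n] * X[n] * X[n + 1]      # outflow to the finer mode
        F.append(f)
    return F

X = [1.0] + [0.0] * (N - 1)    # all energy initially at the coarsest mode
energies = [sum(x * x for x in X)]
max_X1 = 0.0
for step in range(1, STEPS + 1):
    F = rhs(X)
    X = [x + DT * f for x, f in zip(X, F)]
    max_X1 = max(max_X1, X[1])
    if step % 1000 == 0:
        energies.append(sum(x * x for x in X))
```

The nonlinear terms telescope in the energy $\sum_n X_n^2$, so (up to Euler discretisation error) the recorded energies are non-increasing, while a visible fraction of the energy briefly migrates into the $X_1$ mode on its way to higher frequencies.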

In principle, if the mode $X_n$ has size comparable to $1$ at some time $t_n$, then energy should flow from $X_n$ to $X_{n+1}$ at a rate comparable to $(1+\epsilon_0)^{5n/2}$, so that by time $t_n + O( (1+\epsilon_0)^{-5n/2} )$ or so, most of the energy of $X_n$ should have drained into the mode $X_{n+1}$ (with hardly any energy dissipated). Since the series $\sum_n (1+\epsilon_0)^{-5n/2}$ is summable, this suggests finite time blowup for this ODE as the energy races ever more quickly to higher and higher modes. Such a scenario was indeed established by Katz and Pavlovic (and refined by Cheskidov) if the dissipation strength was weakened somewhat (the exponent $2n$ in the dissipation term has to be lowered somewhat). As mentioned above, this is enough to give a version of Theorem 1 in five and higher dimensions.

On the other hand, it was shown a few years ago by Barbato, Morandin, and Romito that (3) in fact admits global smooth solutions (at least in the dyadic case $1+\epsilon_0 = 2$, and assuming non-negative initial data). Roughly speaking, the problem is that as energy is being transferred from $X_n$ to $X_{n+1}$, energy is also simultaneously being transferred from $X_{n+1}$ to $X_{n+2}$, and as such the solution races off to higher modes a bit too prematurely, without absorbing all of the energy from lower modes. This weakens the strength of the blowup to the point where the moderately strong dissipation in (3) is enough to kill the high frequency cascade before a true singularity occurs. Because of this, the original Katz-Pavlovic model cannot quite be used to establish Theorem 1 in three dimensions. (Actually, the original Katz-Pavlovic model had some additional dispersive features which allowed for another proof of global smooth solutions, which is an unpublished result of Nazarov.)

To get around this, I had to “engineer” an ODE system with similar features to (3) (namely, a quadratic nonlinearity, a monotone total energy, and the indicated exponents of $1+\epsilon_0$ for both the dissipation term and the quadratic terms), but for which the cascade of energy from scale $n$ to scale $n+1$ was not interrupted by the cascade of energy from scale $n+1$ to scale $n+2$. To do this, I needed to insert a *delay* in the cascade process (so that after energy was dumped into scale $n$, it would take some time before the energy would start to transfer to scale $n+1$), but the process also needed to be *abrupt* (once the process of energy transfer started, it needed to conclude very quickly, before the delayed transfer for the next scale kicked in). It turned out that one could build a “quadratic circuit” out of some basic “quadratic gates” (analogous to how an electrical circuit could be built out of basic gates such as amplifiers or resistors) that achieved this task, leading to an ODE system essentially of the form

where $K$ is a suitable large parameter and $\epsilon$ is a suitable small parameter (much smaller than $1/K$). To visualise the dynamics of such a system, I found it useful to describe this system graphically by a “circuit diagram” that is analogous (but not identical) to the circuit diagrams arising in electrical engineering:

The coupling constants here range widely from being very large to very small; in practice, this makes some of the modes absorb very little energy, but exert a sizeable influence on the remaining modes. If a lot of energy is suddenly dumped into one of the modes at a given scale $n$, what happens next is roughly as follows: for a moderate period of time, nothing much happens other than a trickle of energy into a second mode, which in turn causes a rapid exponential growth of a third mode (from a very low base). After this delay, that third mode suddenly crosses a certain threshold, at which point it causes two further modes to exchange energy back and forth with extreme speed. The energy then rapidly drains into the corresponding modes at the next scale $n+1$, and the process begins again (with a slight loss in energy due to the dissipation). If one plots the total energy as a function of time, it looks schematically like this:

As in the previous heuristic discussion, the time between cascades from one frequency scale to the next decays exponentially, leading to blowup at some finite time $T$. (One could describe the dynamics here as being similar to the famous “lighting the beacons” scene in the Lord of the Rings movies, except that (a) as each beacon gets ignited, the previous one is extinguished, as per the energy identity; (b) the time between beacon lightings decreases exponentially; and (c) there is no soundtrack.)
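The flavour of a single energy-conserving “quadratic gate” can be caricatured by the simplest conservative quadratic pair $a' = -Cab$, $b' = Ca^2$, which conserves $a^2 + b^2$ exactly and drains mode $a$ into mode $b$: the transfer is slow while $b$ is small, and then completes rapidly. (This toy is my own illustration of the general notion of a quadratic gate; it is not one of the gates actually constructed in the paper, and the parameter values are arbitrary.)

```python
# Toy energy-conserving quadratic interaction: a' = -C*a*b, b' = C*a*a.
# Conserves a^2 + b^2 exactly (in continuous time) while draining a into b.

C = 5.0            # coupling constant (illustrative)
DT = 1e-4
STEPS = 50_000     # integrate to t = 5

a, b = 1.0, 1e-3   # almost all energy starts in mode a
E0 = a * a + b * b
half_time = None   # first time at which b overtakes a
t = 0.0

for _ in range(STEPS):
    da = -C * a * b
    db = C * a * a
    a, b = a + DT * da, b + DT * db
    t += DT
    if half_time is None and b * b > a * a:
        half_time = t

E1 = a * a + b * b
```

With these parameters essentially all of the energy ends up in $b$ (the exact solution is a tanh profile), and the crossover happens only after an initial quiet period while $b$ builds up from its tiny seed value.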

There is a real (but remote) possibility that this sort of construction can be adapted to the true Navier-Stokes equations. The basic blowup mechanism in the averaged equation is that of a von Neumann machine, or more precisely a construct (built within the laws of the inviscid evolution $\partial_t u = \tilde B(u,u)$) that, after some time delay, manages to suddenly create a replica of itself at a finer scale (and to largely erase its original instantiation in the process). In principle, such a von Neumann machine could also be built out of the laws of the inviscid form of the Navier-Stokes equations (i.e. the Euler equations). In physical terms, one would have to build the machine purely out of an ideal fluid (i.e. an inviscid incompressible fluid). If one could somehow create enough “logic gates” out of ideal fluid, one could presumably build a sort of “fluid computer”, at which point the task of building a von Neumann machine appears to reduce to a software engineering exercise rather than a PDE problem (provided that the gates are suitably stable with respect to perturbations, but (as with actual computers) this can presumably be done by converting the analog signals of fluid mechanics into a more error-resistant digital form). The key thing missing in this program (in both senses of the word) to establish blowup for Navier-Stokes is to construct the logic gates within the laws of ideal fluids. (Compare with the situation for cellular automata such as Conway’s “Game of Life”, in which Turing complete computers, universal constructors, and replicators have all been built within the laws of that game.)
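To make the Game of Life comparison concrete, here is a minimal sparse implementation of the game’s update rule (B3/S23), together with the classic “glider” pattern, which rebuilds an exact translated copy of itself every four generations purely by following the local law. This is an illustration of the analogy only; nothing in it is specific to the paper:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life (B3/S23); `live` is a set of (row, col) cells."""
    # Count how many live neighbours each cell (live or dead) has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation iff it has exactly 3 live neighbours,
    # or it is currently alive and has exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = life_step(pattern)
# after four generations the glider has reconstructed itself,
# shifted one cell diagonally
```

The replicators and universal constructors mentioned above are (vastly) larger patterns built on the same principle: structure that persists and reproduces purely as a consequence of the local update law.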

## 158 comments


4 February, 2014 at 2:30 pm

Mitzpe Ramon: This seems to be the first post on the Navier-Stokes problem where you do *not* say from the beginning that you do not claim substantial progress. So would you say that this is a big step forward towards a solution of the problem? Maybe the biggest step in recent decades, because you found a program which is not unrealistic?

6 February, 2014 at 3:44 pm

Liam: Of course, other people have to decide this.

28 September, 2015 at 5:57 am

Warren D Smith: I suggest reading my 2003 paper, available online as #58 here

http://rangevoting.org/WarrenSmithPages/homepage/works.html

which proved that EITHER

1. the Navier Stokes equations of hydrodynamics (incompressible fluid with viscosity; confined within a certain compact strange-shaped rigid container) would in finite time solve the Turing “halting problem,” OR

2. no solution of the NS equations necessarily exists, OR

3. solutions of the NS equations do always exist, but disagree hugely with experiments on real fluids like water & air in certain simple and seemingly innocent scenarios, said experiments having been performed millions of times.

(3 options, all rather ruinous for the discipline of “computational fluid dynamics.”)

Unfortunately this paper was rejected by an idiot anonymous referee who did not understand the meaning of the word “undecidability” but thought he did, and no amount of supporting letters from, e.g. professors who’d written books on the topic of undecidability, etc, would convince him otherwise. Anyhow, the paper is correct.

It also contains some philosophical discussion, e.g. arguing that the so-called “hydrodynamic limit” is a fiction.

I personally think my paper kills CFD. It does not, however, establish which of the 3 options above, is correct.

20 January, 2016 at 2:55 pm

Mason Bogue: Hey, this paper is wrong. Essentially you fail to prove that the additional error-correction circuitry on a “child” fluid computer does not effectively make it larger than its “parent”. In fact I think it’s highly likely that this kills your idea in its present form. There are also some issues with style; you should be familiar with how academic papers are written.

Also I strongly suggest not attempting to solve problems in fields where you do not yet have true literacy. Inventing your own notation (before an exhaustive search for extant alternatives) is usually a sign you’re on the wrong path.

4 February, 2014 at 2:52 pm

E.L. Wisty: Reblogged this on Pink Iguana and commented:

Tao’s Navier-Stokes paper

5 February, 2014 at 8:19 am

Anonymous: As a first-year undergraduate, I am curious as to what this actually means. Given the timing of this and the statement “…using a blowup solution to a certain averaged version of the Navier-Stokes equation to demonstrate that any proposed positive solution to the regularity problem which does not use the finer structure of the nonlinearity cannot possibly be successful.”, would it be safe to read this as “Otelbaev’s claimed proof is wrong.”?

5 February, 2014 at 10:35 am

samuelfhopkins: Of course it’s always possible that both proofs are correct and mathematics is inconsistent. :)

5 February, 2014 at 1:06 pm

jussilindgren: Dear Terry, what do you think of this simple argument:

The full Navier-Stokes equations can be stated as

with the usual incompressibility condition

We transform the equation in a more useful form using the following vector calculus identity

Substituting this back into the Navier-Stokes equation one has

Operating with the curl operator, one gets the vorticity equation

In order to ultimately obtain the enstrophy equation, one needs to dot the Navier-Stokes equation with vorticity:

Now we use the following vector calculus identity, which is the key in this rather short proof of regularity:

Let us make then the following identification:

and

We then have

We substitute this expression back into the enstrophy equation to get:

The only problematic term now is the second one on the right side of the equation. Let us consider it more closely:

We can write it as

Where we have introduced the matrix differential operator

It is important to note that this representation matrix is skew-symmetric.

Now we know that a scalar is invariant under transposition, so we have the equality

From skew-symmetry of the operator matrix we then have

So that finally we have the rather surprising equality

From the enstrophy equation we can solve for this!

Substituting this back into the latter version of the enstrophy equation, one gets

Now if we integrate over the whole space and keep in mind that the velocity field decays sufficiently fast at infinity, by direct use of the divergence theorem the divergence term on the right-hand side is killed by the space-integral, and we get that

It is well known that if the total enstrophy stays bounded, the solutions stay regular. Note that in particular for the Euler equations, total enstrophy is conserved. QED. Best, Jussi Lindgren

9 March, 2014 at 12:59 am

Anonymous: The proof of this dissipation law is clearly erroneous, but it does not imply the claim is false. Do there exist examples of suitably regular solutions, say, in H^1([0,T],H^1(R^3,R^3)), that violate this inequality? (I know that there are solutions of infinite total energy that produce singularities.)

25 April, 2014 at 11:25 pm

Anonymous: All of the vector calculus stuff looks fine, at least.

5 February, 2014 at 1:28 pm

jussilindgren: You can comment at my blog at http://navierstokesregularity.wordpress.com

5 February, 2014 at 3:37 pm

Vlado Vrhovski: http://www.newscientist.com/article/dn24915-kazakh-mathematician-may-have-solved-1-million-puzzle.html#.UvLKT7TvnIU

6 February, 2014 at 1:23 am

friend48: Jussi wrote

Now we know that a scalar is invariant under transposition, so we have the equality

$(\mathbf{u}\times \mathbf{\omega})^T \mathbf{R}\mathbf{\omega}=\mathbf{\omega}^T \mathbf{R}^T (\mathbf{u}\times \mathbf{\omega})$

This would be correct if R were a usual matrix. However with an operator matrix this is wrong. Just try the product of 2 scalar functions with a differential operator in between. Then the formula obviously fails.

Another issue.

Prof. Tao’s paper does not formally kill Otelbaev’s one. O., while considering the periodic problem, has the linear part, the Laplacian on the torus, i.e., the operator with discrete spectrum. In Prof. Tao’s realization of his abstract setting, the linear part is the Laplacian in the whole space, i.e., the operator with the whole spectrum continuous. Moreover, Otelbaev’s ‘proof’ is heavily based upon the lowest point in the spectrum being isolated.

6 February, 2014 at 5:01 am

Anonymous: Otelbaev has implicitly used BOTH the integral condition (1.4) and the periodic boundary conditions (1.3); they are assumed in deriving (4.5) from (4.4). These conditions are extra to the Clay formulation in periodic domains. Hence he is working on an over-determined system of PDE. The (subtle) point is that we are constrained in applying the Helmholtz-Leray projection; any attempt to invert the pressure Poisson equation must be fully justified. Otelbaev did not derive any *a priori* bound on the “RHS” function of the Poisson equation. At most, his “proof” is formal and is not for the Clay NS problems.

6 February, 2014 at 9:13 am

Arie Israel: Dear Terry, Thanks for the incredible blog post. This has already become a personal favorite of mine. The idea of using intuition from electrical engineering to help construct blow-up solutions really struck me as profound. As a non-expert on fluid mechanics, I have a very simple question: Do you believe that one can make any formal equivalence or connection between regularity for Navier-Stokes and regularity for averaged Navier-Stokes, e.g., for given fixed initial data, is the solution to the averaged version of Navier-Stokes _more_ regular than the solution to true Navier-Stokes (with the same initial data)?

7 February, 2014 at 4:31 pm

Terence Tao: For short time (local) theory, in which the evolution is close to linear, one expects any local existence theory for the non-averaged NS equation to carry over to the averaged NS equation, basically because local existence theory is based on linear or multilinear estimates, which behave well with respect to averaging (Minkowski’s inequality). But for long time theory the two could be quite different. In particular the NS equation has the vorticity equation formulation (with all the attendant phenomena such as vortex stretching), which the averaged one does not. There is some work of Hou and Lei, http://www.ams.org/mathscinet-getitem?mr=2492706, that I recently learned about that suggests that the true NS equations may have some stabilising effects in their nonlinearity that make their behaviour better (at least for typical data) than other NS-like equations. This suggests there are going to be real challenges in transferring the blowup results from the averaged NS setting to the true NS setting, but they do not seem to be completely insurmountable. (Even if true NS behaves better than averaged NS for “most” data, one just needs a very special set of initial data in which the true NS can exhibit the “fluid logic gate” behaviour that the averaged NS equation enjoys by design, in order to (in principle, at least) replicate the blowup scenario.)

20 July, 2015 at 2:25 am

Anonymous: Is it possible that any possible blowup is represented near the singular time by exactly one type of singularity from a finite set of singularities? (e.g. as in the case of Ricci flow near the singular time.)

24 August, 2015 at 1:55 am

Anonymous: Is it possible that the condition on the “very special initial data” needed for the “fluid logic gate” behavior in the true NS setting may be formulated as a PDE (for the initial data)? (If so, it is very important to explicitly find this PDE and check if it has any solution.)

6 February, 2014 at 10:51 am

gowers: I have a variant of Mitzpe Ramon’s question, but would understand if you didn’t want to answer it. You say that your proof hints at a way of establishing blow-up for the true Navier-Stokes equation. Would it be correct to deduce from your willingness to go public with this that there are some clearly identifiable serious difficulties in turning that hint into a proof? Given your speed at doing mathematics, the deduction is not all that convincing, but I’m still curious, as a total non-expert, to know whether there’s any prospect of seeing another Clay problem fall in the next few years.

6 February, 2014 at 11:11 am

Felipe Voloch: @gowers re: Clay problem falls. My understanding, from asking Jaffe a question many years ago, is that there is no Clay prize for disproving any of the conjectures featured in the prizes, except perhaps PvsNP.

6 February, 2014 at 11:26 am

comment: “To give reasonable lee-way to solvers while retaining the heart of the problem, we ask for a proof of one of the following four statements. […] (C) Breakdown of Navier–Stokes solutions on ${\mathbb R}^3$”

http://www.claymath.org/sites/default/files/navierstokes.pdf

6 February, 2014 at 11:34 am

Sniffnoy: Not so; see the rules. For P vs NP and the Navier-Stokes problems, either a proof or counterexample will get the full prize. (And for Navier-Stokes, doing it for either the non-periodic version or the periodic version suffices for the prize.) For the other ones, a counterexample may be awarded the prize, or may get a smaller prize, or nothing; it’s up to them.

6 February, 2014 at 11:56 am

Felipe Voloch: Thanks for the corrections. I was misinformed by Jaffe. Since I asked him that at the end of one of the Millennium lectures at UT, his answer may be on tape. https://www.ma.utexas.edu/millenium_site/mlectures.html

12 February, 2014 at 6:19 am

David Brown: How long will the Clay Mathematics Institute survive?

“The average life expectancy of a multinational corporation—Fortune 500 or its equivalent—is between 40 and 50 years.” http://www.businessweek.com/chapter/degeus.htm

7 February, 2014 at 4:37 pm

Terence Tao: I think there’s a limit to how much one can deduce from “dogs that don’t bark in the night” :-). The paper reflects my current thinking on the subject, which is that (a) proving global regularity for Navier-Stokes is a hopeless task for the foreseeable future, but (b) proving blowup for Navier-Stokes is not. (This is a shift from my previous viewpoint that (a) and the negation of (b) were both true; it was only through the course of trying to formalise (a) that I was able to glimpse a possible route to (b) that I didn’t see before.)

But there is still quite a long way to go to actually reach a proof of blowup for Navier-Stokes. Most obviously, we currently don’t have any designs for logic gates made out of pure fluid (although the pointer to the fluidics literature that Arie Israel makes below is extremely intriguing), whereas in the averaged Navier-Stokes equation I could “bake in” these gates into the laws of physics by fiat. I certainly intend to look at these issues further, but I can’t predict what will come out of this program yet.

6 February, 2014 at 12:13 pm

Gil Kalai: Dear Terry, you draw the analogy with Conway’s game of life which allows universal computation. Can you show such universal computation for the (dyadic?) variants of NS you considered in the paper? One apparatus that can be helpful is the fault-tolerance apparatus (which also goes back to von Neumann) which allows universal computation for noisy logical gates.

(Apropos of that, to the best of my knowledge it is not known if noisy game of life allows universal computation: http://cstheory.stackexchange.com/questions/17914/does-a-noisy-version-of-conways-game-of-life-support-universal-computation .)

7 February, 2014 at 8:08 am

Anonymous: Gil, why is error correction important? It’s needed for practical quantum computation because the qubits can’t be initialized perfectly. But for the purely mathematical NS problem, isn’t it enough if there’s some set of initial conditions (i.e. of measure 0) where the computation goes through? For that matter, it seems odd that the Millennium problem calls for an explicit counterexample in the case of a negative answer. Maybe there’s a blowup that only has a non-constructive existence proof. What then?

7 February, 2014 at 4:50 pm

Terence Tao: One would have to be precise about the computational model, but I do have the impression that if one was allowed to chain together an unlimited number of ‘quadratic gates’ connecting an unlimited number of modes (i.e. to solve arbitrary quadratic ODE $\partial_t u = B(u,u)$, subject to the energy conservation law $\langle B(u,u), u \rangle = 0$), one could then perform an essentially Turing-complete set of (continuous) computing tasks. This is reasoning in analogy with systems of quadratic equations over, say, a finite field; one can convert any bounded degree algebraic system of equations over this field into a system of quadratic equations by expanding the number of variables by a bounded amount. In particular, I think one can already show that solving quadratic equations over finite fields is NP-complete (it’s pretty close to 3-SAT, for instance).

Some noise tolerance is going to be needed, because it would be hopeless to expect the von Neumann machine to create a perfect rescaled version of itself, while completely deleting all previous traces of itself. (In particular, the rescaling we are using does not preserve the viscosity term (it is supercritical in that regard), so one cannot expect perfect self-replication.) However, it may well be that by choosing parameters appropriately the noise tolerance could be obtained by PDE methods (e.g. Gronwall inequality) rather than by deliberately encoding error correction into the software; this is what happened in the averaged NS model I considered (although it took a while to figure out exactly how to select the parameters and to control the noise levels, leading to a rather lengthy and complicated bootstrap argument in the paper). I have a vague hope that if one can make the fluidic circuitry be based on “digital” signals rather than “analog” ones, then this will automatically give a certain degree of noise tolerance (the same way that physical electronic computers are somewhat resistant to low levels of external radiation due to the digital nature of the signals) and this may be all that is needed for the purposes of creating the von Neumann machine that blows up in finite time.

8 February, 2014 at 1:51 pm

Anonymous: Thanks… the Wikipedia article on fluidics was interesting and I didn’t realize they were that computationally powerful. But I thought fluidic devices (such as automatic transmissions in old cars) involved putting carefully designed physical obstructions in the moving fluid: if the NS problem is supposed to be obstruction-free then can those techniques still work? Could there be a way to use solitons (since they are self-reinforcing) to communicate between stages, like gliders in Conway’s Life game, instead of fancy digital error correction? As this probably shows, I’m pretty ignorant about this general topic, and don’t know if solitons even arise in NS.

10 February, 2014 at 9:28 am

Terence Tao: Solitons (or at least travelling waves) are a possibility, although stability of these solutions will be an issue.

It’s true that before one can use the fluidics literature, one has to first solve the “materials science” problem of constructing materials out of pure ideal fluid which can function as the physical obstructions used in fluidic gates. This looks practically impossible, but perhaps not mathematically impossible: if for instance one can create vortex sheets of extremely high vorticity that are reasonably stable, and not penetrable by lower-vorticity streams of fluids, then they might be able to functionally serve as the walls and other obstructions of fluidic gates. (There is a price one pays for using such fancy fluid formations in one’s machine, though, which is that the task of constructing the replica of this machine becomes more difficult. Still, this feels like it is “merely” an extremely difficult engineering problem, rather than a fundamental obstruction.)

13 February, 2014 at 5:20 am

David Brown: “… if for instance one can create vortex sheets of extremely high vorticity that are reasonably stable, and not penetrable by lower-vorticity streams of fluids, then they might be able to serve as the walls and other obstructions of fluidic gates. …” Could it be of some value to look into MHD solutions? If there are ideal fluid vortices that could approximate an infinite number of electron energy-density levels, then perhaps spintronics could suggest a way of approximating ideal fluidic gates. http://en.wikipedia.org/wiki/Spintronics

4 March, 2014 at 9:55 am

Jay: Suppose we could construct a bunch of fluidic gates from water and some material we can move and shrink at will (say for 2-4 orders of magnitude).

How should we arrange the gates so that it looks like a physical proof of concept for your mathematical proposal?

4 March, 2014 at 10:12 am

Terence Tao: Well, there would be a large variety of possible designs (cf. the many possible ways to design “spaceships” in the Game of Life, with many of the larger designs using more primitive objects such as “glider guns” as an analogue of the hypothetical fluidic logic gates considered here). The conservation laws of the Euler equations, particularly conservation of energy, however provide a significant additional challenge which is not present in the Game of Life (in which there is no upper bound on the number of active cells one can generate).

In analogy with the constructions in my paper, once one has enough gates to build reliable and programmable machines, one could imagine a design that consists of a large, slow machine A whose primary purpose is to create a tiny, fast, and low-energy machine B, which then dismantles (and cannibalises) the large machine A to create a smaller copy of that large machine, which holds a large fraction (say 99%) of the original energy of A. The smaller machine B then “turns on” the smaller copy to repeat the process, and then moves away from that copy (at which point we don’t care too much what happens next to B, though it may be a good idea to put in some sort of “self-destruct” mechanism into B’s programming to guarantee that it doesn’t come back to disrupt the dynamics). The process then repeats itself, with the majority of the energy shifting itself to the next finer scale at increasingly rapid rates. As long as the energy transfer process is efficient enough, one should be able to “outrun” the effects of viscosity, which can then be treated as a negligible error (it becomes weaker at an exponential rate as one moves from one scale to the next, while renormalising the dynamics appropriately).
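[Editor's note: to make the “outrunning viscosity” heuristic quantitative, here is a schematic scaling computation, not taken from the paper; the scale ratio $\lambda>1$ and energy retention factor $\rho$ close to $1$ (e.g. $\rho=0.99$) are hypothetical parameters. If stage $n$ of the cascade carries energy $E_n=\rho^n E_0$ at spatial scale $\lambda^{-n}$, then the velocity amplitude obeys $E_n \sim u_n^2 \lambda^{-3n}$, i.e. $u_n \sim E_n^{1/2}\lambda^{3n/2}$, and the nonlinear time scale of that stage is

$$T_n \sim \frac{1}{u_n \lambda^n} \sim E_0^{-1/2}\,\bigl(\rho^{1/2}\lambda^{5/2}\bigr)^{-n},$$

so the total time $\sum_n T_n$ is a convergent geometric series, consistent with blowup at the finite time $T_* = \sum_n T_n$. Meanwhile the ratio of the viscous rate $\nu\lambda^{2n}$ to the nonlinear rate $T_n^{-1}$ is

$$\nu\lambda^{2n} T_n \sim \nu E_0^{-1/2}\,(\rho\lambda)^{-n/2},$$

which decays exponentially in $n$ whenever $\rho\lambda>1$; this is the sense in which viscosity becomes a smaller and smaller perturbation at each successive stage.]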

4 March, 2014 at 10:55 am

Jay: Is there any guarantee that, if you can construct A, there’s a machine B that can dismantle A?

4 March, 2014 at 11:01 am

Terence Tao: Well, if B has sufficiently advanced programming, I believe this is possible in principle, particularly if it is possible to first “turn A off”, effectively converting A into a collection of much smaller, mostly inert, components. In particular, even though A is much more massive than B, each individual component of A could be a lot smaller than B, so B could work on deactivating and then reassembling individual components of A one at a time.

Of course, the task of actually engineering the required hardware and software for these machines would be enormously difficult, but I don’t see why it is necessarily impossible.

4 March, 2014 at 11:07 am

Jay: Yep, if A were inert that sounds “easy”. But what if we can’t turn A off? Actually, it would most probably need to be self-correcting, no?

4 March, 2014 at 11:22 am

Jay: (Ok, let’s just add a backdoor A could not correct for)

Thank you very much for your answers!

5 March, 2014 at 7:03 pm

Anonymous: I notice that the Clay problem statement has a force term, that we think of as constant (like gravity), but it’s written as completely parametrized in time and space, so it can do anything it wants, subject to its Jacobian decaying at superpolynomial (but maybe only slightly superpolynomial) speed. So if the von Neumann machine can scale itself down at superpolynomial speed asymptotically faster than the force decay, maybe the force term can be used to “operate the machinery”, i.e. control errors by nudging stuff back into place as needed? As the force decays more slowly than the machine scales down, it becomes more and more powerful (relative to the machine) as time increases.

Am I reading that right? Is the force constraint really given too loosely to resemble a physical system? Your paper doesn’t seem to use the force term, but maybe I missed it.

5 March, 2014 at 7:41 pm

Terence Tao: In the global regularity problem, the forcing term is also required to be smooth as well as decaying in space; in particular, all derivatives of the forcing term remain bounded. Because of this, the forcing term is not much use for directly manipulating the fine-scale dynamics of a solution; in fact, at fine scales, the strength of this term is even smaller than the viscosity term, which is already being treated perturbatively. (However, the forcing term can be used to deal with coarse scale components of the solution, and as such is useful for such tasks as passing back and forth between periodic and non-periodic formulations of the Navier-Stokes problem; I exploited this fact in a previous paper on Navier-Stokes.)

For related reasons, fine tuning the initial data at fine scales will also not help in engineering blowup: by the time the blowup machine actually gets to such fine scales, the fine scale components of the initial data would have long since dissipated away. The blowup mechanism has to be completely endogenous at fine scales: the initial data and/or forcing term can set things up at the initial scale, but after that the solution has to “blow itself up” rather than rely on data or forcing term.

6 February, 2014 at 10:28 pm

Gil Kalai: The question of whether NS evolutions in three dimensions and in two dimensions support universal classical computation and classical fault tolerance was also raised in my debate with Aram Harrow regarding quantum computation in this comment (to the seventh post). NS equations were also considered a couple of times earlier in the debate by John Sidles. The context was the question of how to define classical processes “without classical fault-tolerance.” A specific question that was asked is whether 2-dimensional Navier-Stokes evolutions can be approximated (at all scales) by bounded depth (probabilistic) circuits. (Or at least, is it the case that they do not support universal classical computation?)

See also this earlier comment there speculating on a connection between the computational complexity/fault tolerance capabilities of a class of classical evolutions and questions about regularity, well-posedness, Maxwell demons, and other self-defeating behavior. The comments and threads following these comments contain further interesting links: to a paper by Andy Yau, “Classical physics and the Church-Turing Thesis,” to an MO question by Mariano Suárez-Alvarez, and to an earlier post on self-defeating behavior over this blog.

7 February, 2014 at 11:34 am

Arie Israel: Gil, I was surprised to learn that experimental fluid mechanics people had thought of this analogy before. Apparently the key name is ‘Fluidics’, and those ideas date back at least to the sixties. Not sure what the state of the art is. Additionally, early electrical engineers believed that electricity followed laws similar to fluid dynamics. This is called the ‘hydraulic analogy’. Historically, that’s where the word ‘current’ comes from. All this can be found on Wikipedia.

9 February, 2014 at 6:03 pm

Anonymous: So the ideas have been around for more than 50 years.

7 February, 2014 at 3:24 am

Navier-Stokes Fluid Computers | Combinatorics and more[…] Tao posted a very intriguing post on the Navier-Stokes equation, based on a recently uploaded paper Finite time blowup for an […]

7 February, 2014 at 1:48 pm

kurt: Well, I thought this was clear when looking at the projections

$u_\theta=\langle u,\theta\rangle,\;\theta \in S^2$ and writing NS as

$$\partial_t u_\theta + div(u_\theta u -\nu \nabla u_\theta +\theta p)=0$$

followed by some estimates on the vector field $X=u_\theta u -\nu \nabla u_\theta +\theta p$ using linear theory for the scalar functions $u_\theta$,

($\Vert\nabla u_\theta + u_\theta X\Vert = \ldots$).

But I’m certainly wrong as I abandoned NS a long time ago :)

At any rate a great result. Thanks.

8 February, 2014 at 1:14 am

Anonymous: Typographical comment: For inner products, one should use

\usepackage{mathtools}

\DeclarePairedDelimiter{\inner}{\langle}{\rangle}

Then “\inner{v,v}” will produce “$\langle v,v\rangle$”.

8 February, 2014 at 2:18 am

Anonymous: This was a comment on the paper, not your blog post. :)

8 February, 2014 at 4:18 am

MrCactu5 (@MonsieurCactus): You can build computers out of many things. I am impressed you can build a computer out of the Navier-Stokes equation.

A long time ago, I read the paper “Minesweeper is NP-Complete.” Nobody plays Minesweeper anymore; instead they play Candy Crush Saga.

When I solve Sudokus I always imagine the numbers moving across the page. It is not that effective on the harder puzzles, though.

8 February, 2014 at 5:43 am

Anonymous: Another comment: For maps, one should use “\colon” to get the correct horizontal spacing. A possibility is to invoke

\newcommand*\map[3]{#1\colon #2\to #3}

in the preamble and then use, say, “\map{f}{A}{B}” to get “$f\colon A\to B$”.

8 February, 2014 at 1:21 pm

John Sidles: The computational potentiality of Navier-Stokes flow, and its relation to open questions in quantum dynamical simulation, were both discussed as long ago as 2008, in the context of Scott Aaronson’s lecture

“Quantum Computing Since Democritus, Lecture 13: How Big Are Quantum States?” (per comment #45 of Scott’s post). It’s *WONDERFUL* to see that high-level mathematical techniques and creativity are now being focused upon these tough-but-crucial questions, which are important equally to mathematicians, scientists, engineers … and (in the long run) even medical researchers like me.

Please accept my appreciation of these fine new results, my thanks for the work they represent, and my sincere hopes for sustained progress in this fascinating and far-reaching line of research.

9 February, 2014 at 9:19 pm

Anonymous: Tao is unable to prove Navier-Stokes global regularity in the foreseeable future. That’s all, folks!

9 February, 2014 at 9:52 pm

Me: Dear Terry,

The analogy with electrical engineering is truly fascinating.

I was wondering what would be the obstruction to applying your ideas in 2D (where the Navier-Stokes and Euler equations are known to have global regular solutions)? In other words, could we build in 2D such quadratic circuits that quickly replicate signals to finer and finer scales? Maybe in 2 dimensions you get fewer $X_n$ modes to play with for a given $n$, and so the circuits you can build are smaller.

Thanks

10 February, 2014 at 8:54 am

Terence Tao: See my answer to a similar question at https://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/#comment-270129 . Basically, in 2D the dissipation term is much stronger, and I don’t think my construction adapts to 2D Navier-Stokes. It is perhaps possible that one could modify the methods to create an averaged 2D Euler-type equation which exhibits rapid growth for a fixed amount of time (similar to a result I had worked out with Colliander, Keel, Takaoka and Staffilani for 2D periodic cubic NLS, see https://terrytao.wordpress.com/2008/08/14/weakly-turbulent-solutions-for-the-cubic-defocusing-nonlinear-schrodinger-equation/ ), but over long enough time, all the error terms will eventually pile up and prevent any accurate analysis of the situation. (With finite time blowup, there is not enough time for the low frequency modes to cause much mischief as the von Neumann machine replicates to ever finer scales, but I don’t see how to deal with these modes for arbitrary amounts of time. In particular, in the absence of viscosity, there is the bizarre but not entirely impossible scenario in which the low frequency errors eventually manage to spontaneously form into their own von Neumann machine, which is “faster” than the original one and can “intercept” and then “disable” the original machine, halting the cascade to finer and finer scales.)

11 February, 2014 at 3:06 am

AnonymousHi Terry Tao,

I have some comments and queries. All equation numbers refer to arXiv:1402.0290. (a) For large initial data and for long-time solutions, equations (1.1) and (1.5) have not been shown to be equivalent.

A “blowup” is a weak solution. (b) The NS equations are locally well-posed (or globally for small data) for t in [0, t_a], where t_a depends on the size of initial data. Within Schwartz class, infinitely many initial flows, u_0(x), can be specified. Over [0, t_a], (1.9) is, if not false, irrelevant to the Navier-Stokes (1.1). (c) Initial value problem (1.9) defines a new set of equations for fluid motions but describes a stochastic field which is already in existence. In view of (b), when/where do the stochastic characters of (1.9) originate? (d) Denote your blowup time by t_b. Following (b) and (c), IVP (1.9) with u_0(x) is irrelevant to the Navier-Stokes (1.1) for any t in (t_a,t_b]. (e) Given any *finite-energy* initial data (Schwartz), what is the implication in physics for the finite-time blowup in a *stochastic field*? What happens to the field *beyond* t_b?

Thanks

11 February, 2014 at 8:21 am

Terence Tao: (a) The precise sense in which I show (1.1) and (1.5) to be equivalent is stated and proved in Lemma 1.3.

(b)-(d) It is true that the averaged Navier-Stokes equation (1.9) (which, by the way, is a deterministic equation, not a stochastic one, as the probabilistic (or averaging) variable in its definition is integrated out) is not directly related to the true Navier-Stokes flow (1.1) (or its equivalent form (1.5)). So the results of this paper do not directly say anything about the true Navier-Stokes equations. However, as described in the introduction, a blowup result for the averaged Navier-Stokes equation does create a barrier to certain strategies for proving global regularity for the true Navier-Stokes equation, in that any such strategy must be capable of distinguishing (1.1) from (1.9) if it is to have any chance of working. Many proposed strategies for establishing global regularity for true Navier-Stokes (including some that were proposed very recently) fail this test and can thus be excluded as viable strategies.

14 February, 2014 at 3:09 am

Anonymous: It is a salient point that eqn (1.5) is a DERIVED equation from (1.1). Close to blow-up time t_b, u does not necessarily stay in L^1, for example. The velocity Hessian of mild solutions tends to be unbounded throughout R^3. Therefore it is invalid to invert the pressure Laplacian without knowledge of velocity decay at infinity. In other words, (1.5) may well become an increasingly useless substitute for (1.1) as time approaches t_b. Above all, (1.5) must be fully justified to exist for blow-ups. The generalisation from (1.5) to (1.9) would not have a well-defined meaning, particularly for solutions involving finite-time singularities. Wherever potential blow-ups are implied, such as for the Euler equations in S1.3, this line of reasoning has ramifications.

14 February, 2014 at 8:34 am

Terence Tao: The justification of the equivalence of the global regularity problems (or equivalently, after taking contrapositives, the blowup problems) for (1.1) and (1.5) is indeed non-trivial, and occupies a significant portion of my previous paper http://msp.org/apde/2013/6-1/p02.xhtml . As I said in my previous comment, the precise sense in which I show (1.1) and (1.5) to be equivalent is stated and proved in Lemma 1.3 (which relies heavily on the results of that previous paper).

11 February, 2014 at 1:27 pm

Marcelo de Almeida: Reblogged this on Being simple.

12 February, 2014 at 8:55 am

friend48: Prof. Tao,

concerning your final remark

Many proposed strategies for establishing global regularity for true Navier-Stokes (including some that were proposed very recently) fail this test and can thus be excluded as viable strategies.

Your great paper does not formally exclude as a viable strategy the recent paper by Otelbaev, since in the abstract part the latter contains an extra condition on the main linear operator (the Laplacian, for NS), a condition that is not present in your paper.

The condition requires that the lowest eigenvalue be isolated. This is correct for the periodic problem, but wrong for the problem for the averaged NS in the whole space. Moreover, it is not easily visible how your construction can be adapted to the periodic case, since it uses the averaging over rotations, which is prevented by the geometry of the torus.

12 February, 2014 at 9:07 am

Terence Tao: My paper is set in the non-periodic setting for technical convenience, but one can transfer from the non-periodic setting to the periodic setting in a number of ways. For instance, in this previous paper of mine I showed that global regularity for the homogeneous non-periodic Navier-Stokes problem follows from global regularity for the inhomogeneous periodic Navier-Stokes problem with forcing term, and so any obstruction to solving the former problem also gives an obstruction to the latter. Alternatively, one can take the local cascade operators in my current paper and adapt them to the periodic setting by throwing away all negative frequency scales (which were never actually excited by the local cascade evolution in any event) and restricting the frequency variable to be integer. The resulting periodic equation has essentially the same dynamics as the non-periodic cascade equation; it is no longer an average of the periodic Navier-Stokes equation (since, as you say, rotations are no longer directly available on the torus), but the periodic local cascade operator still obeys essentially the same estimates as the periodic Euler operator, because the non-periodic version of the former is still an average of the non-periodic version of the latter, and essentially all periodic estimates one uses on these operators can be derived from their non-periodic counterparts (together with estimates that exploit the compactness of the domain, e.g. Hölder’s inequality).

12 February, 2014 at 8:56 am

friend48: “the lowest eigenvalue…”

I meant, of course, the lowest point of the spectrum.

15 February, 2014 at 12:33 am

Anonymous: Otelbaev attempts to solve *a* NS problem which demands additional symmetry (via periodic BCs), and his flow is assumed to satisfy further integral constraints. He deals with a PDE system (with specific ICs/BCs) which has an essentially DIFFERENT formulation compared to the present periodic setting. We cannot say that his proof is free from inconsistencies or that he breaks new ground. The point is: implications concluded from an NS-irrelevant PDE throw little new light on how to overcome the NS difficulties.

14 February, 2014 at 3:48 am

jussilindgren: What about the following argument:

It seems to me that the following geometric argument is sufficient to ensure smooth solutions:

Given that the question of regularity depends on the behaviour of the volume integral (over the whole space) of the following scalar product:

By utilising the scalar triple product, one can easily see that this term depends only on the mutually perpendicular parts of the velocity, vorticity and curl-of-vorticity fields (as swapping the order of the terms using the scalar triple product always kills the parallel part in the cross product). Then we can reduce the question of regularity to such fields where these three fields are mutually perpendicular. But on the other hand, using the divergence theorem, it is clear that the enstrophy is identically the volume integral of

as the component which is perpendicular to the curl of vorticity is killed by the scalar product.

So, in other words, for such perpendicular fields the enstrophy is identically zero and so the solution is regular. This means that the solutions must be regular for all fields, as adding non-perpendicular parts to the fields does not change the critical integral.

The whole story is at my blog: http://navierstokesregularity.wordpress.com/

14 February, 2014 at 10:04 am

friend48: jussilindgren: “What about the following argument: ……….”

Wouldn’t it be polite and professional to start your post by admitting that, yes, your latest ‘proof’ is wrong, and that what you are writing now is not a continuation of that latest one but something completely new?

14 February, 2014 at 1:20 pm

Anonymous: Indeed, I fully agree with you, friend48. Sorry. I don’t care about the millions, I just care about whether we could establish some certainty. It would be fantastic if we knew the limits of computers.

15 February, 2014 at 10:43 am

friend48: jussilindgren: “Then we can reduce the question of regularity into such fields where these three fields are mutually perpendicular.”

——————–

This statement is not proved. You should have described the process of your reduction. You did not do this; moreover, you cannot do this.

24 February, 2014 at 4:31 pm

More Quick Links | Not Even Wrong[…] Tao has some new ideas about the Navier-Stokes equation. See his blog here, a paper here, and a story by Erica Klarreich at Quanta […]

24 February, 2014 at 7:13 pm

weather_or_not: So does your work have any bearing on long-term climate predictions?

25 February, 2014 at 8:43 am

none: I’d say this has no implications at all since it’s about transferring energy to finer and finer scales without limit. The NS equations for air or water (i.e. in physics) break down at the molecular scale.

25 February, 2014 at 12:08 pm

Conserved quantities for the Euler equations | What's new[…] Euler equations are the inviscid limit of the Navier-Stokes equations; as discussed in my previous post, one potential route to establishing finite time blowup for the latter equations when is to be […]

1 March, 2014 at 3:04 pm

Navier Stokes looks like its gonna blow « Pink Iguana[…] Tao has some new ideas about the Navier-Stokes equation. See his blog here, a paper here, and a story by Erica Klarreich at […]

2 March, 2014 at 3:01 pm

La mystérieuse équation de Navier-Stokes | Science étonnante […] Another post by Terry Tao that seems to dash Kazakh hopes […]

2 March, 2014 at 3:27 pm

Shtetl-Optimized » Blog Archive » Recent papers by Susskind and Tao illustrate the long reach of computation[…] see that, in blog comments here and here, Tao says that the crucial difference between the 2- and 3-dimensional Navier-Stokes […]

4 March, 2014 at 1:01 am

Anonymous: Prof. Tao, do you think it’s feasible and interesting for someone to do a numerical simulation of the blowup in this result, making some nice pictures, or is there just too much “stuff” going on for simulation to be practical? Thanks.

4 March, 2014 at 10:23 am

Terence Tao: The five-dimensional ODE in Section 5.5 of my paper, which models a single stage of the energy transition process, should be solvable numerically for reasonable choices of the parameters; the dynamics should then look like the schematic depicted in Figure 6 of my paper. One could then try to chain several of these ODEs together to give a system similar to (6.3)-(6.6), which describes the full blowup dynamics.
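[Editor's note: the five-dimensional ODE from Section 5.5 is not reproduced in this thread, so as a stand-in, here is a minimal sketch of the kind of numerical experiment being suggested, applied to the related dyadic (Katz-Pavlovic-type) cascade model mentioned in the post's introduction. The shell count `N`, scale ratio `lam`, and viscosity `nu` are hypothetical parameters chosen only for illustration.]

```python
# Hedged sketch: integrate a truncated dyadic cascade model of Katz-Pavlovic
# type (NOT the exact 5D ODE from Section 5.5 of the paper):
#   da_n/dt = -nu * lam^(2n) a_n + lam^(n-1) a_{n-1}^2 - lam^n a_n a_{n+1}.
# The quadratic terms telescope, so without viscosity the energy sum(a_n^2)
# is conserved; with viscosity it is non-increasing.
import numpy as np
from scipy.integrate import solve_ivp

N = 8        # number of dyadic shells (hypothetical truncation)
lam = 2.0    # frequency ratio between consecutive shells
nu = 1e-4    # small viscosity; set to 0 for the inviscid cascade

def rhs(t, a):
    da = np.zeros_like(a)
    for n in range(N):
        inflow = lam**(n - 1) * a[n - 1]**2 if n > 0 else 0.0
        outflow = lam**n * a[n] * a[n + 1] if n < N - 1 else 0.0
        da[n] = -nu * lam**(2 * n) * a[n] + inflow - outflow
    return da

a0 = np.zeros(N)
a0[0] = 1.0  # all energy starts in the coarsest shell
sol = solve_ivp(rhs, (0.0, 2.0), a0, rtol=1e-8, atol=1e-10)
energy = np.sum(sol.y**2, axis=0)  # dissipated only by viscosity
```

One can then plot `sol.y[n]` against `sol.t` to watch the energy move to finer shells, in the spirit of the schematic in Figure 6.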

6 March, 2014 at 9:07 am

arch1: Maybe I’m reading too much into the “fluid computer” analogy, but-

In order to programmably self-replicate, it seems that the Navier-Stokes computer’s hardware would need to support not only computation, but also at least some minimal fabrication primitives. If so, would the latter capability somehow come for free with Navier-Stokes based computation (which I understand is itself still just a glimmer), or must it be explicitly designed in?

6 March, 2014 at 9:18 am

Terence Tao: Yes, this will have to be done also; strictly speaking, one needs a “universal constructor” in addition to a “universal computer” in order to build a self-replicating machine by this route. But given that the computer will literally be (fluid) mechanical in nature, I expect the “fabrication primitives” to be not so different from the “logic gate primitives” that are also needed, and if one can engineer the latter then it is reasonable to expect that one can also engineer the former. (This is not to say that the task is trivial: after all, we still have not engineered a true von Neumann machine in the real world, despite having more or less perfected the art of computation. On the other hand, in Conway’s Game of Life, my understanding is that the tasks of building a universal computer and of building a universal constructor, while technically different, were of comparable levels of difficulty.)

6 March, 2014 at 12:29 pm

arch1: I see, thanks.

11 March, 2014 at 8:06 pm

arch1: Do physics-of-computation results such as Landauer’s energy minimum for bit erasure have any relevance to the potential N-S blowup scenario sketched here?

12 March, 2014 at 7:02 am

Terence Tao: Perhaps not directly, but the second law of thermodynamics is certainly a concern, as it would suggest that the total disorder present in a fluid machine will tend to increase over time. One way to ameliorate this is to try to rely on reversible computing, but I think the more promising route is simply to accept the increase in total disorder, and try instead to reduce local disorder, so that a von Neumann machine can create a near-perfect replica of itself at a smaller scale (and with smaller energy and, hopefully, smaller entropy too), at the cost of leaving behind a discarded and highly disordered remnant that is absorbing the entropy (but which has been moved sufficiently far away in either physical space or frequency space so as not to disrupt the remaining dynamics).

Note also that other model equations in physics (e.g. focusing nonlinear Schrodinger equation) can exhibit stable self-similar blowup solutions in finite time, so finite time blowup is not intrinsically in contradiction to the second law of thermodynamics (there is some small radiation dispersing away from the bulk of the self-similar solution which is basically carrying the disorder in the system).

15 March, 2014 at 7:58 am

Could We Have Felt Evidence For SDP ≠ P? | Gödel's Lost Letter and P=NP[…] the latter, Terry Tao’s recent breakthrough on the Navier-Stokes equations is an example of how much the same ideas keep recirculating, and how […]

10 April, 2014 at 10:14 am

Multiple-Credit Tests | Gödel's Lost Letter and P=NP[…] Tree” is Terence Tao, and our extrapolation of his new result on Navier-Stokes is a flight of […]

15 April, 2014 at 2:12 am

JOE: I think this is probably the wrong place to put this, but I read your previous article “Does one have to be a genius to do maths?” and I think I must be ailing from one of the conditions you spoke of. Now where does the Navier-Stokes problem come in? Well, I threw much of myself into tackling this one for 2 years now. I am no professional mathematician, but I believe I may have cracked something; like I have found the main basis of boundedness in the NSE; smoothness isn’t that hard to achieve afterwards.

I wish to publish these findings, but I am discouraged by the fact that as an amateur I will be taking down an icon in an already protective scientific community, & I don’t mean Prof. Otelbaev. Having gone through Otelbaev’s own work I have found an error consistent with other people’s attempts at attacking the NSE.

Dear Dr Tao, I love your new way of looking at this problem and something interesting hit me! Forgive me for suggesting this, but would you mind looking into whether you could fit this idea into galactic clusters? I thought of gamma ray bursts, but it would be interesting to see what you might come up with. By the way, can you give me, in your professional capacity, advice on what to do?

Thanks.

16 April, 2014 at 10:03 pm

Gil Kalai: Dear Terry, following is a question about another (rather natural) possible direction from your philosophy, but this time towards a positive solution rather than a negative solution for the NS question. Namely, perhaps a key for showing that finite-time blowup is not possible would be to show (an apparently weaker statement) that the NS/Euler equations do not support “deep” computation. You drew the analogy with quantum computing (and fault-tolerance), and indeed in this area there are several results which assert that under certain conditions “deep computation” is not available. Limitation on computation immediately leads to a large number of conserved quantities, which in the quantum case are sometimes referred to as “phases of matter.” (Before, I proposed to impose such limitation on computation and to derive conservation laws on top of the equations in order to explain why deep computation is absent from fluids in nature, but it is a possibility that limitation on deep computation can be derived from the equation itself.) On the technical level (on the quantum side) results of this kind usually refer to “gapped systems” (so some spectral gap is assumed), and a prominent technical tool that is used (with much success in the last decade) is the Lieb-Robinson inequality. (See, e.g., this review paper.) I don’t know if spectral gaps and Lieb-Robinson have some analogues for the NS/Euler evolutions. Anyway, this is a direction worth noting.

17 April, 2014 at 9:22 am

Terence Tao: My personal feeling is that because the Navier-Stokes/Euler equations are nonlinear rather than linear, there will be far fewer barriers to computation than in the quantum setting, and that one should be able to almost freely traverse the “energy surface” of state space coming from the various conserved quantities of the Euler equations (energy, momentum, angular momentum, circulation, helicity). (Well, I’m not 100% sure about circulation, because this is essentially a pointwise conservation law (conservation of the vorticity 2-form) and could potentially be a serious barrier to how freely the state can evolve, but the other conservation laws at least do not seem to constrain the dynamics to the extent that they preclude computation.)

28 June, 2014 at 12:40 pm

Gil Kalai: It seems (but I am not sure about it at all) that perhaps the crux of the matter for supporting a von Neumann machine is to show that NS supports computation of arbitrary depth. (Namely, that there is no absolute bound on the depth of the computation independent of the number of bits you can use, and that you do not need arbitrarily long computation with a fixed number of bits.) If this is the case, this is good news in the sense that the mathematical distinction between bounded depth computation and computation beyond bounded depth is much more definite and clear compared to other computational complexity issues. So a crucial question is if you can implement with an NS-machine something like the “majority function.” (If NS supports only bounded depth computation (is it the case for D=2?), I don’t know if this has direct consequences on the regularity conjectures, but at least it will imply (by a theorem of Green) Möbius randomness. :) )

26 April, 2014 at 7:28 am

Viktor Ivanov: Global regularity is proven in my paper “A SOLUTION OF THE 3D NAVIER-STOKES PROBLEM”, published in Int. Journal of Pure and Applied Mathematics, Vol. 91, No. 3 (2014), 321-328.

28 May, 2014 at 8:05 pm

Baiamos: But what about Otelbaev’s proof?

25 June, 2014 at 1:03 am

So what happened to the abc conjecture and Navier-Stokes? | The Aperiodical[…] the arXiv, which states some limits on what a solution to Navier-Stokes can look like, but comments on his blog post about the paper say that it doesn’t rule out Otelbaev’s […]

22 August, 2014 at 4:44 am

Un ordinateur liquide | www.Affectueusement.Biz […] https://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier… (last paragraph) […]

29 August, 2014 at 2:21 pm

Links For February - My blog[…] Tao, whom I like to admire from afar, has posted what is maybe a takedown of Otelbaev’s claimed proof of Navier-Stokes, but the best part? […]

30 August, 2014 at 1:12 am

The Other Clay Maths Problem | Ajit Jadhav's Weblog[…] That, indeed, turns out to have been the actual case. Terry Tao didn’t directly tackle the Clay Maths problem itself. See the Simons Foundation’s original coverage here [^], or the San Francisco-based Scientific American’s copy-paste job, here [^]. What Terry instead did is to pose a similar, and related, problem, and then solved it [^]. […]

18 September, 2014 at 10:03 am

Fred Chapman (@fwchapman): Prof. Tao, I enjoyed your lecture on this topic at Lehigh University last week. I have a follow-up question.

I am intrigued by your idea of building a Turing-complete computer out of water. If you can do this, it would be of considerable interest in its own right!

Are you familiar with Stephen Wolfram’s work on universal Turing machines in his “New Kind of Science” (NKS) research initiative? Could there be some useful connections between various NKS representations of universal Turing machines and the machine you want to build using fluid dynamics?

In 2007, Wolfram awarded a prize to Alex Smith for proving that a particular Turing machine with 2 states and 3 symbols (colors) is universal. This may be the simplest universal Turing machine that exists. Here’s more info:

http://www.wolframscience.com/prizes/tm23/

http://en.wikipedia.org/wiki/Wolfram's_2-state_3-symbol_Turing_machine

Wishing you success,

Fred Chapman

Bethlehem, PA

18 September, 2014 at 11:05 am

Fred Chapman (@fwchapman): P.S. When Wolfram was at the Institute for Advanced Study in 1983-1986, he simulated physical processes like turbulent fluid flow using cellular automata. Would it be fruitful to collaborate with Wolfram on your approach to Navier-Stokes via fluid-mechanical universal Turing machines?

http://en.wikipedia.org/wiki/Stephen_Wolfram#Complex_systems_and_cellular_automata

2 November, 2014 at 5:23 pm

If You Can Explain What Happens When Smoke Comes Off A Cigarette, You'll Get A $1 Million Prize | Business Insider[…] In a more recent blog post and paper preprint, Tao constructs a set of differential equations that behave like an “averaged” version of Navier-Stokes and devises a clever model of a very abstract “fluid machine,” vaguely analogous to electronic computers built out of electronic components. […]

9 November, 2014 at 6:31 pm

Choro Tukembaev: Dear Prof. Terence Tao,

Harness a stallion with a timid doe in one bridle together?

Can one harness compressibility and incompressibility in one bridle together? Alexander Pushkin warned that this is impossible (poem “Poltava”, 1829).

“Socrates first brought down philosophy from heaven to earth.” Your counter-example does not apply to an incompressible fluid, so Cicero’s words fit well here. One must first descend from heaven to earth, and then solve the Navier-Stokes equations for an incompressible fluid, strictly following the requirements of “The Millennium Prize Problems”.

Finally, Professor Omurov has found a solution to the Navier-Stokes equations exactly as required in “The Millennium Prize Problems” of the Clay Mathematics Institute, and for this purpose he has invented a new analytical method; see:

(English) http://literatura.kg/articles/?aid=2030

(in Russian) http://literatura.kg/articles/?aid=2042

“Nullius in verba” ■

14 January, 2015 at 12:56 am

joe: Prof. Tao,

Could the Navier-Stokes equations in their current form include momentum-conservation terms, such as conservation of angular momentum in space? For example, ocean waves carry large momentum and energy due to gravitational forces; this could counter the positive-curvature inflow with a negative outflow. The constant velocity of this flow would follow a geodesic path along a smooth surface, conserving both momentum and constant flow velocity, much like a Riemannian surface.

The entropy of this flow volume would minimize energy loss, since all particles are confined to a Riemannian surface of constant total curvature irrespective of its shape, like turbulence? Even in 3 dimensions!

You would expect a smooth group velocity over this surface, since the geodesic path conserves energy and momentum at every point. This could resolve the problem of blowup, if angular momentum is taken into account. If the Gaussian curvature K>0, you get a circular inflow motion; if K<0, the flow diverges away; if K=0, a steady flow.

14 January, 2015 at 3:56 pm

Andreas: I found this topic highly interesting due to the unexpected methods used here, and especially the great potential for future breakthroughs. There has not been any update on this line of research for nearly a year, so I’m very curious about the progress. Is the proposed method still an active area of research of yours, or have you suspended it for some time? Much success with your project, and thanks a lot for this entertaining blog!

14 January, 2015 at 9:49 pm

joe: Andreas, are you referring to me, or to Prof. Tao?

I have the complete equations but have never submitted them to any journal. I was hoping to help Prof. Tao by submitting or co-authoring the papers, since he has the deep mathematical skill and reputation on the subject.

15 January, 2015 at 12:35 am

Ainuru: See the research on 3D incompressible fluids, published at the end of 2014:

http://www.smolensk.ru/user/sgma/MMORPH/N-43-html/cont.htm

The proximity of solutions of the Navier-Stokes and Euler equations is given in a separate paragraph. The results are generalized to the Navier-Stokes equations in multidimensional space.

27 March, 2015 at 12:48 am

matthew miller: For Professor Tao,

If you’ll oblige me, what are your views with regard to the overlap between a recent publication on FPU and your own on Navier-Stokes?

Does this special business about 6 wave modes (and not another number) leading to dissipation in the absence of viscosity bear positively, negatively, or neutrally on global regularity for the Navier-Stokes problem – especially as regards your program/suggestion to establish finite time blowup?


27 March, 2015 at 1:01 am

matthew miller: Following up on my comment above, here is the name of the paper, published in PNAS on the 23rd of March. It can also be found on the arXiv.

Route to thermalization in the α-Fermi-Pasta-Ulam system.

27 March, 2015 at 6:08 am

Terence Tao: As far as I can tell, the analysis in that paper is specific to the Fermi-Pasta-Ulam system and does not appear to have any analogue for Navier-Stokes evolution, which has quite a different dynamics.

27 March, 2015 at 2:12 am

shaurabh aggarwal: Can we solve the Navier-Stokes equations in more than 4 dimensions?

28 March, 2015 at 12:00 am

Anonymous: It seems like the crux of the matter is to bound the ratio of the energy distributed in the high frequencies to the low frequencies, so extra dimensions should make the problem even harder.

28 March, 2015 at 12:24 am

Anonymous: I apologize if this is the wrong forum for this kind of query, but there are hints that a global existence result for regular solutions would require a global _lower_ bound for the energy of the solutions. There are results such as Schönbek’s, which say that if the Fourier transform of the solution does not vanish too fast at the origin, then the energy of the solution decays more slowly than C(1+t)^-(n/2+1). Is there a known lower bound for the energy for any initial data in, say, the Schwartz class?

28 March, 2015 at 10:52 am

Terence Tao: I consider this unlikely: blowup or lack thereof is going to be decided by the dynamics of the high-frequency component of the solution, whereas questions of energy decay are mostly decided by the dynamics of the low-frequency components (as the result of Schonbek indicates). Note that if a solution has unusually rapid energy decay, then one should be able to modify that solution to one which does not exhibit such decay simply by placing an additional low-frequency component to the initial velocity that is supported sufficiently far away from the rest of the initial data that it does not significantly affect the dynamics of that data (other than by an approximately linear superposition of the original solution with the evolution of the low-frequency component). This suggests that the singularity behaviour of the solution is more or less decoupled from the energy decay behaviour. (Note also that the construction in my paper suggests that blowup can be achieved using arbitrarily small amounts of energy.)

29 March, 2015 at 8:32 am

Juha-Matti Perkkiö: Dear Prof. Tao,

Of course the route from a lower bound on the energy to possible long-time existence of regular solutions is far from obvious. However, they are at least somewhat connected already at a very primitive level: if there were a solution whose energy decays too rapidly, say at the exact rate E(t)=(T-t)^a for some interval [T’,T] and a<1, then the H^1-norm would blow up at t=T. This naive remark is of course not in discord with your argument, but it already suggests that energy decay and regularity are at least somewhat coupled.
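The remark above can be written out using the standard energy identity (a sketch only; viscosity is normalised to one as in the post, and E, a, T’, T are the quantities from the comment):

```latex
% Energy identity for Navier-Stokes with unit viscosity:
\[
  E(t) := \tfrac{1}{2}\int_{\mathbb{R}^3} |u(t,x)|^2\,dx,
  \qquad
  E'(t) = -\int_{\mathbb{R}^3} |\nabla u(t,x)|^2\,dx .
\]
% If the energy decays exactly like E(t) = (T-t)^a on [T',T] with a < 1, then
\[
  \|\nabla u(t)\|_{L^2}^2 = -E'(t) = a\,(T-t)^{a-1}
  \;\longrightarrow\; \infty
  \quad \text{as } t \to T^- ,
\]
% so the H^1 norm of u must blow up at t = T.
```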

30 May, 2015 at 2:16 pm

Andreas Z.: This is one of the most fascinating posts I have read on your blog. Is there any chance that you will continue to post about this topic and your progress or ideas? Thank you a lot! :)

2 June, 2015 at 10:59 pm

Sergey: Dear Dr. Tao, let me present my article regarding the general solution of the Navier-Stokes equations:

http://arxiv.org/abs/1502.01206

– I kindly ask you to give a few comments, if possible…

12 June, 2015 at 1:51 am

Daniel: Dear Prof. Tao,

It is kinda hard to imagine how it is possible to create blowups while obeying the energy identity. Is the key here “finite time”? If your fluid has to be ideal in this system and you are taking dissipation terms into consideration, doesn’t that mean your fluid can be non-ideal, too? I mean, if your system actually needs dissipation to operate as gates, why can’t it work with non-ideal fluids that come with intrinsic dissipation? It sounds to me like a system where the wires have to be perfect conductors, but you put resistive components into the circuit so you can do the gating. Did I misunderstand what an ideal fluid or dissipation is? Another question: is there a fundamental reason why this approach won’t work with electromagnetic waves?

Kind regards, Daniel.

12 June, 2015 at 5:13 am

Terence Tao: The energy identity controls the total energy of the fluid, integrated over all of space, but it does not control the pointwise energy density of the fluid (or equivalently, the square of the speed of the fluid), because the energy could be concentrated in an arbitrarily small ball of space. In particular, in the blowup scenario envisaged here, the energy remains bounded but is being concentrated into smaller and smaller balls, and in finite time one arrives at a singularity in which a finite nonzero amount of energy is supposed to concentrate into a single point, which cannot occur for a smooth solution to Navier-Stokes.
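A back-of-the-envelope version of the concentration scenario just described (a sketch; E denotes the bounded total energy and ρ(t) the radius of the ball into which it concentrates):

```latex
% Total energy stays bounded:
\[
  \tfrac{1}{2}\int_{\mathbb{R}^3} |u(t,x)|^2\,dx \;\le\; E
  \quad \text{for all } t,
\]
% but if an amount of energy comparable to E sits in a ball of radius \rho(t),
% the typical speed inside that ball is
\[
  |u| \;\sim\; \Bigl( \frac{2E}{\tfrac{4}{3}\pi\,\rho(t)^3} \Bigr)^{1/2}
  \;\longrightarrow\; \infty
  \quad \text{as } \rho(t) \to 0,
\]
% so the pointwise energy density diverges even though the total energy does not.
```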

By “ideal fluid” here, I mean an incompressible fluid without dissipation, whereas Navier-Stokes models incompressible fluids with dissipation. So a system that is designed to work for ideal fluids does not work perfectly when one instead substitutes the viscous fluids modeled by Navier-Stokes. However, if one shrinks the system down to a very small spatial scale (and scales up the velocity field accordingly, keeping the total energy constant), this rescaling is effectively equivalent (in the sense of matching Reynolds number) to scaling down the viscosity to become very small while keeping the physical length scale unchanged, making the ideal fluid approximation much more accurate (and it will become exponentially more accurate if the system evolves as predicted to smaller and smaller scales). Because of this, I expect that if one can create the desired approximately self-similar dynamics for an ideal fluid (with suitable “error correction” coded in if necessary to give enough stability), then one can also obtain such dynamics for viscous fluids if one initialises the system to be supported in a sufficiently small scale (or equivalently, one assumes the viscosity parameter to be sufficiently small).
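The rescaling in this reply can be sketched with the Reynolds number (a heuristic dimensional-analysis sketch; U denotes a typical speed, L the spatial scale of the system, ν the viscosity, and the fluid density is normalised to 1 — all my notation, not the reply’s):

```latex
% Shrink the spatial scale by \lambda > 1 while keeping the total kinetic
% energy E \sim U^2 L^3 fixed:
\[
  L \mapsto L/\lambda,
  \qquad
  U \mapsto \lambda^{3/2}\,U
  \quad (\text{so } U^2 L^3 \text{ is unchanged}).
\]
% The Reynolds number then grows without bound:
\[
  \mathrm{Re} = \frac{UL}{\nu}
  \;\mapsto\;
  \frac{(\lambda^{3/2} U)(L/\lambda)}{\nu}
  = \lambda^{1/2}\,\mathrm{Re},
\]
% which matches keeping the length scale fixed and sending \nu \to 0: the
% ideal-fluid approximation improves as the dynamics moves to smaller scales.
```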

12 June, 2015 at 6:25 am

arch1: “…the energy remains bounded but is being concentrated into smaller and smaller balls, and in finite time one arrives at a singularity in which a finite nonzero amount of energy is supposed to concentrate into a single point…”

This reminds me of something I think I once read concerning the crack of a bullwhip (namely, that it results from the tip going supersonic).

1 July, 2015 at 9:56 am

rbcoulter: Hello Professor Tao: Is it necessarily so that the energy is finite at a point in the blowup scenario? Assuming that the energy density is infinite in the blowup scenario, I would imagine that one would need to take limits to calculate the actual energy at that point. For example, if r is the radius of a ball surrounding the blowup point, then the mass of the ball is proportional to r cubed. If the velocity blows up proportionally to 1/r^n, then the energy density blows up like 1/r^2n. Since the energy at the blowup point is the product of the energy density and the mass of the ball, only in the case n = 1.5 will the energy be finite at the point. For n > 1.5 the energy at the point is infinite.

30 July, 2015 at 10:06 pm

danield: I wonder what could be shown if the fluid were acted on by an external force – say the fluid was a turbulent plasma under the influence of a magnetic field – and that magnetic field were somehow altered to control the direction of the turbulence,

basically, using an external force to do this: “the energy remains bounded but is being concentrated into smaller and smaller balls”.

31 July, 2015 at 7:55 am

Terence Tao: It depends on how smooth the external force is, or equivalently how quickly it oscillates at small scales. If one applies a smooth external force, then by the time the energy is concentrated into a small ball, the force is effectively constant, and can be normalised to be negligible by applying a Galilean change of coordinates. (The analogy I sometimes use is that a smooth force is like the ability to manipulate an object with very fat, clumsy fingers; one can do all sorts of macroscopic changes to the state with such a force, but it is difficult to obtain precise fine-scale control.)

If one allows for a very rough external force, then one could certainly exhibit blowup as well – a singular external force can certainly produce a singular solution. But this is rather easy to accomplish and doesn’t seem to shed much light on the global regularity question (which requires a smooth external force).

One possible interesting scenario, which has neither been constructed nor prohibited to my knowledge, is to find some singular (but still bounded velocity) initial data that leads to blowup (in the sense of, say, the L^3 norm of velocity diverging) in finite time, without the assistance of a singular external force. In principle, one could imagine singularities in the data being somehow sustained until the time comes that they are needed to guide the solution from one fine scale to an even finer scale. There is a little bit of hope that such a scenario could be constructed for active scalar equations such as SQG, where the scalar is transported and so “remembers” in some sense its initial configuration. This would still be fairly far from a finite time blowup from smooth data with smooth external force, though.

14 August, 2015 at 6:25 am

Anonymous: What do you mean by singular initial data (but finite velocity)? How can we generate a fluid motion with such data?

14 August, 2015 at 12:10 pm

Terence Tao: By “singular” I mean here “not smooth”; for instance, the velocity may be bounded while the derivative of the velocity (or related quantities such as the vorticity) is unbounded (or perhaps it is the second derivatives of the velocity that are unbounded). This allows for nontrivial fine-scale structure in the initial data which could conceivably be used to “steer” the solution, through its evolution through finer and finer scales, into finite time blowup.

25 August, 2015 at 1:26 am

Anonymous: There is a paper (arXiv:1104.3615 or CommMathPhys 312(3)) whose initial data is close to the type you mentioned. It was claimed that the critical L3-velocity norm blows up in finite time from smooth initial data with compact support (i.e. finite initial energy in R^3). Apart from a lack of convincing a priori bounds and a few technical glitches, one assumption made in that paper was that the velocity field (and pressure) might be split into two parts: one part linear and governed by the Stokes system, and the rest by the NSEs. Moreover, the separation assumption was taken to hold independently of the size of the initial data, and of the time interval ahead of the possible singularity. No justification or qualification was given. By the well-known NS regularity for small data, the assumption cannot be valid for arbitrary initial data. In general, the claimed out-of-bound condition (Theorem 1.1) at most implies that the assumed flowfield breaks down in finite time; the blowup does not necessarily represent a genuine singularity for the NS equations. (Similar arguments apply to the paper arXiv:1508.05313.)

12 June, 2015 at 6:01 am

Sergey_Ershkov: Dear Prof. Tao, as for the ansatz in the arXiv reference above, the momentum equation of the NSE has been presented as a system of PDEs (each solved accordingly): an invariant for the pressure, plus the sum of 2 equations – one with zero curl for the flow velocity field (viscosity-free), and the proper equation with viscous effects but variable curl.

The solenoidal equation with viscous effects is represented by the proper heat equation for each component of the flow velocity with variable curl.

The non-viscous case is represented by a PDE system of 3 linear differential equations (with respect to the time parameter), depending on the components of the solution of the above heat equation. The general solution of this PDE system is composed of the solutions of 2 complex Riccati equations (which are chosen so that the composed solution is a real function in every case).

So, the existence of the general solution of the Navier-Stokes equations is proved to be a question of the existence of a proper solution to such a PDE system of linear equations. The final solution is proved to be the sum of 2 components: an irrotational (curl-free) one and a solenoidal (variable-curl) one.

17 June, 2015 at 10:16 pm

Anonymous: Paper arXiv:1502.01206v3 is absolutely INCORRECT. The curl of every term in the large brackets in (2.1) equals zero, because curl(grad A)=0 for any scalar A. But the expression in the 1st eqn of (2.3) does not vanish and is NOT the Bernoulli principle in general. Helmholtz’s decomposition of the velocity (vector) field has nothing to do with the (scalar) pressure. Eqns (2.3)-(2.5) are nonsense. It may be helpful to go back to basic textbooks.

18 June, 2015 at 12:17 am

Sergey_Ershkov: “The curl of every term in the large brackets in (2.1) equals zero, because curl(grad A)=0 for any scalar A” – yes, this is true (a trivial, obvious observation). And what else?

“But the expression in the 1st eqn of (2.3) does not vanish and is NOT the Bernoulli principle in general” – I do not suggest that it vanishes or equals the Bernoulli invariant (I supposed it to be Bernoulli-like).

I suggest presenting 1 non-linear PDE (Navier-Stokes) as a sum of 3 parts: a Bernoulli-like invariant, and 2 others (curl-free and with variable curl).

You should be more attentive when you read a text!

“Helmholtz’s decomposition of the velocity (vector) field has nothing to do with the (scalar) pressure” – I have no aim to do anything “with the (scalar) pressure”. I merely present the gradient vector field of the scalar pressure as dependent on the appropriate components of the velocity field.

“Eqns (2.3)-(2.5) are nonsense” – this is my approach: to represent the initial NSE as a sum of 3 equations (I have explained it above already). Such a decomposition is valid if we can find the proper solution to each of them.

“It may be helpful to go back to basic textbooks” – I kindly advise you not to be so dismissive, and also to be more attentive when you read any scientific material.

Kind regards!

18 July, 2015 at 7:51 pm

Anonymous: Ershkov’s solution is CORRECT and must enter the basic textbooks as the one closest to the problem.

4 August, 2015 at 4:54 am

Sergey_Ershkov: Thank you for your esteemed opinion, my unknown friend Anonymous (#2). We are under attack together :) {I mean the enormous number of likes/dislikes}.

Yes, I think that my solution has been presented in a more general form than ever before. But it concerns only the time-dependent structure of the solution; the spatial part is determined by 4 first-order PDEs {for the curl-free part of the solution} – i.e., by 1 continuity equation and 3 “zero curl” conditions – and additionally by the heat-transfer equation {for the part of the solution with variable curl}.

Such a decomposition – curl-free vs. variable curl – is given by the fundamental Helmholtz theorem of vector calculus.

As for the decomposition of one non-linear PDE (for the curl-free part of the solution) into a system of 1 Bernoulli-type invariant + a system of linear PDEs with respect to the time parameter t: of course, you should know Carathéodory’s existence theorem – it proves the existence of a solution in such a case.

So, in the future the spatial part of the solution should be properly investigated (I mean the solving of the 4 first-order PDEs above + the heat-transfer equation), and the appropriate energy estimates for the flow should be calculated, according to the demands of the Clay Mathematics Institute.

I hope that my first result (concerning the presentation of the general solution of the Navier-Stokes equations) will make it possible for some unknown genius to solve this problem… if you have any questions regarding my paper or about collaboration {maybe future joint publications about the NSE}, you can contact me through ResearchGate.

17 June, 2015 at 1:26 am

Sergey_Ershkov: Here is the up-to-date reference, for your perusal:

“On Existence of General Solution of the Navier − Stokes Equations for 3D Non-Stationary Incompressible Flow”

http://www.dl.begellhouse.com/ru/journals/71cb29ca5b40f8f8,669062760250c799,0679e1964365ade8.html

18 July, 2015 at 8:23 pm

Anonymous: So many likes/dislikes

28 July, 2015 at 6:51 am

Lars Ericson: Regarding the von Neumann machine that self-replicates at finer scales, would the ideas of digital physics be relevant? (https://en.wikipedia.org/wiki/Digital_physics) In digital physics, the universe is a cellular automaton. There is nothing smaller than a single cell, and the speed of light is the “clock speed”, the rate at which information can move from one cell to the next. In that model, you can’t keep self-replicating at ever smaller scales, because you can’t replicate smaller than a single cell.

Also there is that intuition that computation = energy, in the sense that if I have an idle GPU it consumes 35W. When it is 100% utilized it consumes 235W and heats up. You can try to speed up the GPU by freezing it or by increasing the clock speed. At higher clock speeds, the GPU consumes quadratically more energy. The freezing also takes energy. Both of these imply a physical limit on miniaturizing computation. (http://electronics.stackexchange.com/questions/81344/is-cpu-gpgpu-heat-dissipation-quadratic-in-clock-frequency)

31 July, 2015 at 1:49 pm

Anonymous: NS regularity is a problem in pure mathematics, inspired by physics but not constrained by it. It’s set in a continuum, so there is no “Planck constant”. It’s just like how geometry was inspired by surveying, but intuitions from surveying are of no use in understanding the Banach-Tarski paradox. You have to work out all the details, and in the case of NS, it’s hard enough that nobody has been able to do that.

3 August, 2015 at 6:01 am

Lars Ericson: It’s a thought experiment to make computers out of water that make tinier computers out of water. It’s a real experiment to take a chip and overclock it to make it go faster and then discover that you have to add a tower of liquid nitrogen so it doesn’t melt, and that the amount of energy consumption and concomitant required cooling grows exponentially (not quadratically as I said mistakenly in the post) with clock speed. Tiny fast computers need giant hot coolers so they can be tiny and fast. There are all kinds of physically-induced tradeoffs. These are experimentally observable. Theoretically, digital physics posits a lower bound (the cell) on even-tinier self-replication. John Wheeler posited the “it from bit” connection. (https://en.wikipedia.org/wiki/Digital_physics#Wheeler.27s_.22it_from_bit.22) Ed Fredkin posited that all things are discrete rather than continuous. (http://www.bottomlayer.com/bottom/finite-all.html) Cellular models explain the speed of light as the clock speed to move information from one cell to the next. (https://en.wikipedia.org/wiki/Speed_of_light_%28cellular_automaton%29)

A non-trivial thought experiment would take into consideration both the real, observable physical limits to computing, and the theoretical ones. So to say that NS is inspired by physics but not constrained by it is, pragmatically speaking, nonsense, because physically unconstrained solutions will have no physical relevance. Yes, you can mathematically construct an infinite sequence of numbers, but you can’t construct them all and pile them on a plate.

2 August, 2015 at 3:38 am

Anonymous: According to the official problem description (by the Clay Math. Institute), “… if there is a solution with a finite blowup time T, then the velocity becomes unbounded near the blowup time.” (pages 2-3).

Therefore, it seems reasonable to expect that there is a (special) relativistic version of the NS equations without a finite blowup time.

2 August, 2015 at 7:11 pm

arithmetica: So who’s the dingus who downvoted all of Terry’s comments for no reason? What juvenile behavior.

3 August, 2015 at 12:14 am

Sergey_Ershkov: arithmetica, this is indeed juvenile behavior.

I suspect Anonymous, who commented before you, as the main hooligan (see the enormous number of likes/dislikes on his reply to me from 12-17 June 2015, as well as on my posts). This is abnormal behaviour.

As for me, I respect the opinion of Dr. Tao.

21 August, 2015 at 10:39 am

Philip L: I saw your talk at the Einstein Memorial Lecture, and it was interesting. Your method reminds me of several things, aside from, of course, cellular automata. Some automata are able to perform all computations; I suppose this includes non-linear dynamics.

1. Density functional theory. Density functional theory was originally used in chemistry to determine the spectra of molecules. It allows approximation of relatively intractable multi-body systems (i.e. electron-electron interaction, electron-nucleus interaction). It uses correlations (k-space mean field), providing additional structure (in the literal sense). The approximation can be refined by including 3-body correlations, 4-body correlations, etc. Ground states are of main interest to chemists, physicists, and maybe molecular biologists, because these are normally the only states that are thermodynamically accessible, and the ground state is closest to the actual symmetry of the molecule. I would also like to mention the correspondence between (self-interacting) solitons and the Schroedinger inverse problem.

2. Formation of (essential) singularities in finite time. This is also found in relativity, for instance in calculations of black hole formation. The GR equations are nonlinear and do not (yet?) have a proper Feynman diagrammatic perturbative description.

3. Diagrammatic expansion. This relates to points 1 and 2. However, your work would differ in the sense that it gives an exact description of non-renormalizable dynamics, where the approximation is refinable. The Feynman approximation involves linear interactions only (though QED might remain true for high amplitude/energy interactions, I think). There is no theory like this for gravity that I know of. But my knowledge is humble.

Also, in fluid mechanics I learned about the Richardson-Kolmogorov cascade.

21 August, 2015 at 11:05 am

Philip L: There are also multi-time/multi-scale approximations, there being the possibility of separable time scales despite the absence of superposition. How do the frequencies of these modes change as parameters are changed? There may be “topological phase transitions” in the spectrum, and other global changes (e.g. bifurcations, cusps).

21 August, 2015 at 2:38 pm

Philip L: The GR case may be a bit different. Here the singularity is more like a 3-sphere in hyperbolic space. I have not encountered metrics with black hole formation after a certain amount of time. It seems the topology or geometry is different in that case (maybe with a 3-sphere cut out, for example)? But apparently there are theorems about this.

17 September, 2015 at 5:19 am

Anonymous: It should be that it applies at ‘high energy’ instead of ‘large amplitude’, I believe.

1 October, 2015 at 3:42 pm

Sergey_Ershkov: A very interesting article by Michael Thambynayagam regarding an ansatz for resolving the Navier-Stokes equations:

http://arxiv.org/abs/1509.08766

– whoever is keen on the matter should recognize this article as worthy of review at the Annals of Mathematics.

29 December, 2015 at 11:11 am

Gil Kalai: Let me try to give a more restrictive definition of what it would mean that

(*) “NS only supports ‘easy’ computation”.

This can serve two purposes. First, in case Terry’s conjecture is correct and full computation is supported by 3D NS evolutions, (*) can be used to describe additional (implicit) conditions on realistic NS evolutions.

The second, more exciting, possibility is that (*) can actually be proved for the 3D Navier-Stokes equation. This would be interesting on its own and may be a step toward proving regularity.

In a comment above I proposed to take “easy computation” to mean “bounded depth computation”. Namely, (*) would mean that every computation described by NS can be approximated by a bounded depth computation (circuit). A considerably weaker form of computation (and thus a much stronger form of (*)) would take “easy computation” to refer to the ability to describe (approximate) the computation by bounded-degree polynomials. (This is related to the notion of “noise stability” used by Benjamini, Schramm and myself.) A recent paper where “easy computation” of this kind was demonstrated for certain quantum systems is one by Guy Kindler and me on Gaussian noise sensitivity and BosonSampling: http://arxiv.org/abs/1409.3093

1 February, 2016 at 11:06 pm

Finite time blowup for an Euler-type equation in vorticity stream form | What's new[…] been meaning to return to fluids for some time now, in order to build upon my construction two years ago of a solution to an averaged Navier-Stokes equation that exhibited finite time blowup. (I recently […]

9 February, 2016 at 10:47 pm

Anonymous: http://www.navier-stokes-equations.com/problem

This is an interesting website.

27 February, 2016 at 3:26 pm

rbcoulter: It seems that the averaged operator, T, can be viewed as the NS transport operator, NS, minus an external force (F).

Specifically T = NS – F

or F = NS – T

If F satisfies the conditions of (5) or (9), then this may be acceptable as a blowup solution under the Clay rules.

The main question, in my mind at least, is: does F stay bounded at the blowup time?

In this interpretation the external force “pushes”, or “is pushed by”, the fluid. It can only be allowed to do this in a macro (smooth) way, since any other way that I can imagine would violate conditions (5) or (9). The trick, however, is that at the instant of blowup, the last bit of energy must come naturally from the fluid itself, to avoid the trivial blowup scenario of simply extracting energy at an infinite power density rate at the blowup point (from the external force field).

27 February, 2016 at 7:16 pm

Finite time blowup for a supercritical defocusing nonlinear wave system | What's new[…] a question asked of me by Sergiu Klainerman recently, regarding whether there were any analogues of my blowup example for Navier-Stokes type equations in the setting of nonlinear wave […]

8 March, 2016 at 5:05 am

Finite time blowup for high dimensional nonlinear wave systems with bounded smooth nonlinearity | What's new[…] sine-Gordon equation is not covered by our arguments. Nevertheless (as with previous finite-time blowup results discussed on this blog), one can view this result as a barrier to trying to prove regularity for […]

29 June, 2016 at 6:44 am

Finite time blowup for Lagrangian modifications of the three-dimensional Euler equation | What's new[…] of the three-dimensional Euler equation“. This paper is loosely in the spirit of other recent papers of mine in which I explore how close one can get to supercritical PDE of physical […]

29 June, 2016 at 7:29 am

Jhon Manugal: Can you explain a little bit more about your “water computer”? I am afraid that after reading this once or twice I could not find the operators.

I know that ODE can be discretized into recurrence relations (e.g. Euler Method), but that Euler Method “ought to” converge to the ODE.

Yet cellular automata can also be encoded as recurrence relations… therefore there should be some vague relation between ODEs and the model of computation of your choice.

All of that is excellent in theory, except that in any one specific case I wouldn’t know how to build such a computer. Not if my life depended on it.
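(As a minimal illustration of the discretization step the comment mentions, and nothing more: here is a generic forward Euler sketch turning an ODE into a recurrence relation. The function names are invented for this sketch and are not from the post; this says nothing about the “water computer” itself.)

```python
# Forward Euler: discretize u'(t) = f(t, u) into the recurrence
# u_{n+1} = u_n + h * f(t_n, u_n), which converges to the ODE as h -> 0.
def euler(f, u0, t0, t1, n):
    h = (t1 - t0) / n
    t, u = t0, u0
    for _ in range(n):
        u = u + h * f(t, u)
        t = t + h
    return u

# Example: u' = u with u(0) = 1 has exact solution u(1) = e, and the
# Euler iterate equals (1 + 1/n)^n, which approaches e as n grows.
approx = euler(lambda t, u: u, 1.0, 0.0, 1.0, 100000)
```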

29 June, 2016 at 8:14 am

Anonymous: Is there any explicit(!) lower bound (in terms of the initial data) for a possible blowup time?

30 June, 2016 at 11:56 am

Terence Tao: In two of the theorems, the data is carefully selected, with an upper bound of 1 for the blowup time, but the result does not say anything about generic data. For the first blowup result, which is more stable, if the initial data is supported in a narrow cylinder of width $r$ around the origin with a total circulation of $\Gamma$, then the blowup time will be bounded by $O(r^2/\Gamma)$, which is consistent with dimensional analysis and also the Beale-Kato-Majda criterion (vorticity has units of inverse time, while circulation has units of length squared per unit time).

1 July, 2016 at 3:06 am

Anonymous: Is the implied constant in this bound absolute (i.e. independent of the initial data) and effectively computable?

Is there also a similar lower(!) bound for the blowup time?

1 July, 2016 at 7:43 am

Terence Tao: Assuming that the data is supported inside the cylinder of radius r (at least in a large neighbourhood of the origin), the constant is absolute. With enough control on higher derivatives of the initial vorticity, one should also be able to obtain a matching lower bound from the usual local existence theory (since one can morally rescale both time and space to normalise r and the circulation to both be comparable to 1), but I haven’t checked this.
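(The dimensional analysis invoked in these replies can be checked mechanically. The following sketch is my own illustration, not from the thread: it tracks units as (length exponent, time exponent) pairs and confirms that width squared divided by circulation has units of pure time, matching the blowup-time bound.)

```python
# Represent a physical unit by its (length_exponent, time_exponent) pair.
def mul(a, b):
    return (a[0] + b[0], a[1] + b[1])

def power(a, k):
    return (a[0] * k, a[1] * k)

LENGTH = (1, 0)
TIME = (0, 1)

width_units = LENGTH                                         # cylinder width
circulation_units = mul(power(LENGTH, 2), power(TIME, -1))   # length^2 / time
bound_units = mul(power(width_units, 2), power(circulation_units, -1))
# bound_units comes out as (0, 1): width^2 / circulation has units of
# pure time, consistent with the Beale-Kato-Majda heuristic above.
```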

16 August, 2016 at 2:39 am

Fahad: I know that the Navier-Stokes existence problem is really hard (probably that is the reason why you chose to study variations of the Navier-Stokes equations), but I am just curious to know whether energy dissipation can be used along with local existence and smoothness to prove global existence and smoothness in 2D or 3D (as in the problem posed by the Clay Mathematics Institute)?

I am just a high school student, so my proof will most likely be turned down by the math community.

19 August, 2016 at 10:30 am

Dejan Kovacevic: Fahad, I hope that Terry will answer your question. However, regarding the likelihood of being turned down by the math community: don’t worry about that. Follow your instincts, consult others, and make sure that you cover and analyze all possible issues and question everything, even yourself. Keep questioning and finding the answers, as there is only one truth, regardless of communities of practice or interest. Eventually, what is truthful surfaces as such, inevitably so.

20 August, 2016 at 3:43 pm

Terence Tao: This is discussed in detail in my other blog post https://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/ . Basically, the answer is no, because the time of existence provided by the local theory can be arbitrarily small even when there is very little energy left to dissipate, and iteration of the local existence theory could thus conceivably lead to a convergent series of existence times, which is consistent with finite time blowup. Note also that the equation considered in my paper here also has local existence and regularity as well as an energy dissipation inequality, yet still manages to exhibit solutions that blow up in finite time.
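(The “convergent series” point in this reply can be illustrated with a toy model. The rates below are invented purely for illustration and are not claimed to match Navier-Stokes: suppose each application of the local theory guarantees an existence time proportional to the square of the remaining energy, and half the energy dissipates in each step. The guaranteed lifespans then form a convergent geometric series, so iterating the local theory only certifies regularity up to a finite time.)

```python
# Toy model of iterating a local existence theorem: step n grants an
# existence time E_n^2, where E_n = (1/2)^n is the remaining energy,
# so the granted lifespans are T_n = (1/4)^n.
energy = 1.0
total_time = 0.0
for _ in range(200):
    total_time += energy ** 2   # lifespan granted by the local theory
    energy *= 0.5               # half the energy dissipates each step

# The lifespans sum to the geometric series 1 + 1/4 + 1/16 + ... = 4/3,
# so this iteration never certifies existence past time 4/3, even though
# almost all of the energy has already been dissipated.
```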

21 August, 2016 at 12:10 am

Fahad: I thought the same when I first encountered the problem. But the energy dissipation estimate in (28, Vorticity and Incompressible Flow – Bertozzi, Majda) implies that before a singularity is formed, a small region of high energy is formed. This, in turn, forces the energy of the remaining regions to be approximately zero. Let this region be Ω. The energy at the boundary ∂Ω is approximately zero compared to the absolute maximum in Ω. This almost forces the velocity field and its derivatives to be almost zero, due to the above energy dissipation. Now, the proof proceeds as in the link below:

https://drive.google.com/file/d/0Bw5XWeTV9WGOLXhHMUxPUFFsY3M/view?usp=sharing

leading to the conclusion that the energy at the point where the energy is maximum in Ω should decrease. Therefore, the solution remains in Schwartz space for all time.

21 August, 2016 at 1:49 am

Tao Chi: hi… Terence Tao… my name is Tao Chi… and I live in Slovenia…

I saw… that you have deleted my video…

I give you the information… that I don’t care about this… that you have deleted it… but I do care… that you didn’t give any reason for this action… or any explanation regarding my knowledge… which of course is not only mine… because it is universal knowledge… like all knowledge… and that also means… that no human being should give himself any special awards for any knowledge… all the human should do is… be grateful for this knowledge… and apply this knowledge with ethics…

and ethics is… that you answer this message and that you explain to me… what is wrong or what is right in my video…

if not… then I ask you… to also delete my previous videos…

thank you…

P.S. If you see and work only with intelligence… you can be very wrong… because this is only a little part of wholeness…

intelligence is only a muscle which you can pump up… but there are always limits… like there are for the muscles of the human body… but if you cross these limits… then you do more damage to yourself… than you gain… with intelligence we call this madness…

math… geometry… etc… are very welcome tools… for understanding how the universe works… but these tools are tools of the source… and that’s why we must use these tools with ethics… gratefulness… love… and in balance… if not… then you do more damage to yourself and to others… than you gain…

and that’s why the primes must be in interaction with non-primes… otherwise the intelligence will be in imbalance and in a state of ignorance…

with love and gratefulness…

TaoChi…

21 August, 2016 at 3:09 am

Fahad: Well, Terence Tao has put forth some requests (rules) for commenting on his blog, which read:

“I welcome comments from people with all kinds of mathematical backgrounds and levels of expertise; my only requests are that the discussions are kept constructive, polite, and at least tangentially relevant to the topic at hand. Comments which are spam, self-promoting, off-topic, or otherwise not fulfilling the above requests will be summarily deleted; repeated offenders in this regard may be subject to blocking. In particular, comments devoted primarily to promoting one’s own research are subject to deletion.”

And that is probably the reason why yours got deleted.

If you wish your work to be promoted, formalise it in a form acceptable to the modern math community, and then publish it in a peer-reviewed journal. You can ask questions here, but they should be specific to the post (or thread).

21 August, 2016 at 4:48 am

Tao Chi: hi Fahad…

thank you for your explanation… I didn’t know about these rules… and certainly my presentation is not for promoting myself… it’s just knowledge for which I am happy and which I wish to share… discuss and use with others…

with love and gratefulness…

TaoChi

25 August, 2016 at 4:44 am

Tao Chi: because I see… that you didn’t understand… what I have given you… I will explain this with some other facts…

5 x 5 = 25

5 x 7 = 35

7 x 7 = 49

11 x 5 = 55

13 x 5 = 65

etc…

we see… that primes create non-primes for a meaningful reason… because without them… they cannot work as a wholeness or in harmony…

all systems like military systems… banking systems… etc… which apply only the prime numbers… are always in a chaotic state… which is shown by everything they do… and that’s why they must always fight for their own existence… because otherwise they would collapse immediately… and the reason for this is… that they don’t use the wholeness… and this is easy to see through perception where a man sees only himself… and that’s why he is always in chaos… but when he also sees a woman of course as an equivalent… then suddenly everything is in balance… and this simple equation or algorithm… I have given to you in this video… etc…

all mathematicians… who continuously search for meaning only in primes… will always be in chaos… because they will not see the wholeness or harmony… and they will continuously be chasing some ghost… which always disappears in the middle of some algorithm… as I have seen in my research… therefore… there are no twin primes… sexy primes… etc… because they appear only here and there… and in mathematics and geometry… this is not a proof or an algorithm… which works into infinity…

so… now comes the decision… as always… we can accept the solution or not… but the reality of this decision… always shows us the real picture in our daily lives… which we see as…

action = reaction

cause = consequence

with love and gratefulness…

TaoChi…

25 August, 2016 at 5:57 am

Anonymous: ???

25 August, 2016 at 6:32 am

Tao Chi: Anonymous… ask me… what you wish to know… and then I can give you an answer… if I can…