I’ve just uploaded to the arXiv my paper “Quantitative bounds for critically bounded solutions to the Navier-Stokes equations”, submitted to the proceedings of the Linde Hall Inaugural Math Symposium. (I unfortunately had to cancel my physical attendance at this symposium for personal reasons, but was still able to contribute to the proceedings.) In recent years I have been interested in working towards establishing the existence of classical solutions for the Navier-Stokes equations that blow up in finite time, but this time, for a change, I took a look at the other side of the theory, namely the conditional regularity results for this equation. There are several such results that assert that if a certain norm of the solution stays bounded (or grows at a controlled rate), then the solution stays regular; taken in the contrapositive, they assert that if a solution blows up at a certain finite time $T_*$, then certain norms of the solution must also go to infinity. Here are some examples (not an exhaustive list) of such blowup criteria:
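For orientation, here is the system in question, written (as in the paper, and as noted in the comments below) with the viscosity normalised to one; this is the standard formulation of the equations rather than anything specific to the paper:

$$\partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p, \qquad \nabla \cdot u = 0,$$

posed on $[0,T] \times \mathbb{R}^3$, where $u : [0,T] \times \mathbb{R}^3 \to \mathbb{R}^3$ is the velocity field and $p : [0,T] \times \mathbb{R}^3 \to \mathbb{R}$ is the pressure field.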
- (Leray blowup criterion, 1934) If $u$ blows up at a finite time $T_*$, and $0 < t < T_*$, then $\|u(t)\|_{L^\infty_x(\mathbb{R}^3)} \geq c\,(T_*-t)^{-1/2}$ for an absolute constant $c$.
- (Prodi–Serrin–Ladyzhenskaya blowup criterion, 1959-1967) If $u$ blows up at a finite time $T_*$, and $3 < p \leq \infty$, then $\|u\|_{L^q_t L^p_x([0,T_*) \times \mathbb{R}^3)} = +\infty$, where $\frac{2}{q} + \frac{3}{p} = 1$.
- (Beale-Kato-Majda blowup criterion, 1984) If $u$ blows up at a finite time $T_*$, then $\int_0^{T_*} \|\omega(t)\|_{L^\infty_x(\mathbb{R}^3)}\,dt = +\infty$, where $\omega := \nabla \times u$ is the vorticity.
- (Kato blowup criterion, 1984) If $u$ blows up at a finite time $T_*$, then $\limsup_{t \to T_*} \|u(t)\|_{L^3_x(\mathbb{R}^3)} \geq c$ for some absolute constant $c$.
- (Escauriaza-Seregin-Sverak blowup criterion, 2003) If $u$ blows up at a finite time $T_*$, then $\limsup_{t \to T_*} \|u(t)\|_{L^3_x(\mathbb{R}^3)} = +\infty$.
- (Seregin blowup criterion, 2012) If $u$ blows up at a finite time $T_*$, then $\lim_{t \to T_*} \|u(t)\|_{L^3_x(\mathbb{R}^3)} = +\infty$.
- (Phuc blowup criterion, 2015) If $u$ blows up at a finite time $T_*$, then $\limsup_{t \to T_*} \|u(t)\|_{L^{3,q}_x(\mathbb{R}^3)} = +\infty$ for any $3 \leq q < \infty$.
- (Gallagher-Koch-Planchon blowup criterion, 2016) If $u$ blows up at a finite time $T_*$, then $\limsup_{t \to T_*} \|u(t)\|_{\dot B^{-1+3/p}_{p,q}(\mathbb{R}^3)} = +\infty$ for any $3 < p, q < \infty$.
- (Albritton blowup criterion, 2016) If $u$ blows up at a finite time $T_*$, then $\lim_{t \to T_*} \|u(t)\|_{\dot B^{-1+3/p}_{p,q}(\mathbb{R}^3)} = +\infty$ for any $3 < p, q < \infty$.
My current paper is most closely related to the Escauriaza-Seregin-Sverak blowup criterion, which was the first to show that a critical (i.e., scale-invariant, or dimensionless) spatial norm, namely $L^3_x(\mathbb{R}^3)$, had to become large. This result now has many proofs; for instance, many of the subsequent blowup criterion results imply the Escauriaza-Seregin-Sverak one as a special case, and there are also additional proofs by Gallagher-Koch-Planchon (building on ideas of Kenig-Koch), and by Dong-Du. However, all of these proofs rely on some form of a compactness argument: given a finite time blowup, one extracts some suitable family of rescaled solutions that converges in some weak sense to a limiting solution that has some additional good properties (such as almost periodicity modulo symmetries), which one can then rule out using additional qualitative tools, such as unique continuation and backwards uniqueness theorems for parabolic heat equations. In particular, all known proofs use some version of the backwards uniqueness theorem of Escauriaza, Seregin, and Sverak. Because of this reliance on compactness, the existing proofs of the Escauriaza-Seregin-Sverak blowup criterion are qualitative, in that they do not provide any quantitative information on how fast the $L^3_x$ norm will go to infinity (along a subsequence of times).
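To make the word “critical” concrete (this is standard material, not specific to the paper): the Navier-Stokes equations are invariant under the rescaling

$$u(t,x) \mapsto u_\lambda(t,x) := \lambda u(\lambda^2 t, \lambda x), \qquad p(t,x) \mapsto \lambda^2 p(\lambda^2 t, \lambda x)$$

for any $\lambda > 0$, and the $L^3_x(\mathbb{R}^3)$ norm is unchanged by this rescaling, since by the change of variables $y = \lambda x$ one has

$$\| u_\lambda(t) \|_{L^3_x(\mathbb{R}^3)}^3 = \int_{\mathbb{R}^3} \lambda^3 |u(\lambda^2 t, \lambda x)|^3\, dx = \int_{\mathbb{R}^3} |u(\lambda^2 t, y)|^3\, dy = \| u(\lambda^2 t) \|_{L^3_x(\mathbb{R}^3)}^3.$$

A bound such as (1) below therefore carries no intrinsic length scale, which is what makes the Escauriaza-Seregin-Sverak criterion (and Theorem 1) critical rather than subcritical.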
On the other hand, it is a general principle that qualitative arguments established using compactness methods ought to have quantitative analogues that replace the use of compactness by more complicated substitutes that give effective bounds; see for instance these previous blog posts for more discussion. I therefore was interested in trying to obtain a quantitative version of this blowup criterion that gave reasonably good effective bounds (in particular, my objective was to avoid truly enormous bounds such as tower-exponential or Ackermann function bounds, which often arise if one “naively” tries to make a compactness argument effective). In particular, I obtained the following triple-exponential quantitative regularity bounds:
Theorem 1 If $u : [0,T] \times \mathbb{R}^3 \to \mathbb{R}^3$ is a classical solution to Navier-Stokes on $[0,T] \times \mathbb{R}^3$ with
$$\| u \|_{L^\infty_t L^3_x([0,T] \times \mathbb{R}^3)} \leq A \ \ \ \ (1)$$
and $A \geq 2$, then
$$\| \nabla^j u(t) \|_{L^\infty_x(\mathbb{R}^3)} \leq \exp\exp\exp(A^{O(1)})\, t^{-\frac{j+1}{2}}$$
for $0 < t \leq T$ and $j = 0, 1$.
As a corollary, one can now improve the Escauriaza-Seregin-Sverak blowup criterion to
$$\limsup_{t \to T_*} \frac{\| u(t) \|_{L^3_x(\mathbb{R}^3)}}{\left(\log\log\log \frac{1}{T_*-t}\right)^{c}} = +\infty$$
for some absolute constant $c$, which to my knowledge is the first (very slightly) supercritical blowup criterion for Navier-Stokes in the literature.
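As a rough sanity check (this is only a heuristic with constants suppressed, not the actual derivation in the paper), one can see how a triple-exponential regularity bound translates into a triple-logarithmic blowup rate by combining Theorem 1 with the Leray criterion listed above. Suppose $\| u(t) \|_{L^3_x(\mathbb{R}^3)} \leq A$ for all $0 \leq t \leq t_0$, where $t_0 < T_*$ is close to the blowup time. Then the $j=0$ case of Theorem 1 and the Leray lower bound give

$$c\, (T_* - t_0)^{-1/2} \leq \| u(t_0) \|_{L^\infty_x(\mathbb{R}^3)} \leq \exp\exp\exp(A^{O(1)})\, t_0^{-1/2},$$

so that $\exp\exp\exp(A^{O(1)}) \geq c \left( \frac{t_0}{T_* - t_0} \right)^{1/2}$; taking logarithms three times, $A \gtrsim \left( \log\log\log \frac{1}{T_* - t_0} \right)^{c'}$ once $t_0$ is sufficiently close to $T_*$, which is the shape of the criterion displayed above.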
The proof uses many of the same quantitative inputs as previous arguments, most notably the Carleman inequalities used to establish unique continuation and backwards uniqueness theorems for backwards heat equations, but also some additional techniques that make the quantitative bounds more efficient. The proof focuses initially on points of concentration of the solution, which we define as points $(t_0, x_0)$ in spacetime where there is a frequency $N_0$ for which one has the bound
$$|P_{N_0} u(t_0, x_0)| \geq A^{-C_0} N_0 \ \ \ \ (2)$$
for a large absolute constant $C_0$, where $P_{N_0}$ is a Littlewood-Paley projection to frequencies $\sim N_0$. (This can be compared with the upper bound of $O(A N_0)$ for the quantity on the left-hand side that follows from (1).) The factor of $N_0$ normalises the left-hand side of (2) to be dimensionless (i.e., critical). The main task is to show that the dimensionless quantity $N_0 t_0^{1/2}$ cannot get too large; in particular, we end up establishing a bound of the form
$$N_0 t_0^{1/2} \lesssim \exp\exp\exp(A^{O(1)}),$$
from which the above theorem ends up following from a routine adaptation of the local well-posedness and regularity theory for Navier-Stokes.
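To spell out the dimensional bookkeeping behind this normalisation (standard material, not specific to the paper): under the parabolic rescaling $u(t,x) \mapsto \lambda u(\lambda^2 t, \lambda x)$, a concentration point transforms as

$$(t_0, x_0, N_0) \mapsto (t_0/\lambda^2,\ x_0/\lambda,\ \lambda N_0),$$

and both sides of (2) acquire the same factor of $\lambda$, so that the ratio $|P_{N_0} u(t_0, x_0)|/N_0$ and the combination

$$N_0\, t_0^{1/2}$$

are left unchanged. Thus (2), and the bound on $N_0 t_0^{1/2}$ that the argument aims for, are genuinely dimensionless statements, uniform over all scales.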
The strategy is to show that any concentration such as (2) when $N_0 t_0^{1/2}$ is large must force a significant component of the $L^3_x$ norm of $u(t_0)$ to also show up at many other locations than $x_0$, which eventually contradicts (1) if one can produce enough such regions of non-trivial $L^3_x$ norm. (This can be viewed as a quantitative variant of the “rigidity” theorems in some of the previous proofs of the Escauriaza-Seregin-Sverak theorem that rule out solutions that exhibit too much “compactness” or “almost periodicity” in the $L^3_x$ topology.) The chain of causality that leads from a concentration (2) at $(t_0, x_0)$ to significant $L^3_x$ norm at other regions of the time slice $\{t_0\} \times \mathbb{R}^3$ is somewhat involved (though simpler than the much more convoluted schemes I initially envisaged for this argument):
- Firstly, by using Duhamel’s formula, one can show that a concentration (2) can only occur (with $N_0 t_0^{1/2}$ large) if there was also a preceding concentration
$$|P_{N_1} u(t_1, x_1)| \geq A^{-C_0} N_1 \ \ \ \ (3)$$
at some slightly previous point $(t_1, x_1)$ in spacetime, with $N_1$ also close to $N_0$ (more precisely, $N_1$ is comparable to $N_0$ up to factors of $A^{O(1)}$, $t_1 = t_0 - O(A^{O(1)} N_0^{-2})$, and $x_1 = x_0 + O(A^{O(1)} N_0^{-1})$). This can be viewed as a sort of contrapositive of a “local regularity theorem”, such as the ones established by Caffarelli, Kohn, and Nirenberg. A key point here is that the lower bound $A^{-C_0} N_1$ in the conclusion (3) is of precisely the same form as the lower bound in (2), so that this backwards propagation of concentration can be iterated.
- Iterating the previous step, one can find a sequence of concentration points
$$|P_{N_n} u(t_n, x_n)| \geq A^{-C_0} N_n \ \ \ \ (4)$$
with the $t_n$ propagating backwards in time; by using estimates ultimately resulting from the dissipative term in the energy identity, one can extract such a sequence in which the separations $t_0 - t_n$ increase geometrically in $n$, the $N_n$ are comparable (up to polynomial factors in $A$) to the natural frequency scale $(t_0 - t_n)^{-1/2}$, and one has $x_n = x_0 + O(A^{O(1)} (t_0 - t_n)^{1/2})$. Using the “epochs of regularity” theory that ultimately dates back to Leray, and tweaking the $t_n$ slightly, one can also place the times $t_n$ in intervals $I_n$ (of length comparable to a small multiple of $t_0 - t_n$) in which the solution is quite regular (in particular, $u$ and $\nabla u$ enjoy good $L^\infty$ bounds on $I_n \times \mathbb{R}^3$).
- The concentration (4) can be used to establish a lower bound for the $L^2_x$ norm of the vorticity $\omega := \nabla \times u$ near $(t_n, x_n)$. As is well known, the vorticity obeys the vorticity equation
$$\partial_t \omega = \Delta \omega - (u \cdot \nabla) \omega + (\omega \cdot \nabla) u.$$
In the epoch of regularity $I_n \times \mathbb{R}^3$, the coefficients $u, \nabla u$ of this equation obey good $L^\infty$ bounds, allowing the machinery of Carleman estimates to come into play. Using a Carleman estimate that is used to establish unique continuation results for backwards heat equations, one can propagate this lower bound to also give lower $L^2$ bounds on the vorticity (and its first derivative) in annuli of the form $\{ x : R \leq |x - x_n| \leq 2R \}$ for various radii $R$, although the lower bounds decay at a gaussian rate with $R$.
- Meanwhile, using an energy pigeonholing argument of Bourgain (which, in this Navier-Stokes context, is actually an enstrophy pigeonholing argument), one can locate some annuli $\{ x : |x - x_0| \sim R' \}$ where (a slightly normalised form of) the enstrophy is small at a suitably chosen time; using a version of the localised enstrophy estimates from a previous paper of mine, one can then propagate this sort of control forward in time, obtaining an “annulus of regularity” of the form $\{ (t,x) : R' \leq |x - x_0| \leq A^{O(1)} R' \}$ on a range of times going up to $t_0$, in which one has good estimates; in particular, one has $L^\infty$ type bounds on $u$, $\nabla u$, and the vorticity in this cylindrical annulus.
- By intersecting the previous epoch of regularity $I_n \times \mathbb{R}^3$ with the above annulus of regularity, we have some lower bounds on the $L^2$ norm of the vorticity (and its first derivative) in the annulus of regularity. Using a Carleman estimate first introduced by Escauriaza, Seregin, and Sverak, as well as a second application of the Carleman estimate used previously, one can then propagate this lower bound back up to time $t_0$, establishing a lower bound for the vorticity on the spatial annulus $\{ x : R' \leq |x - x_0| \leq A^{O(1)} R' \}$. By some basic Littlewood-Paley theory one can parlay this lower bound to a lower bound on the $L^3_x$ norm of the velocity $u(t_0)$; crucially, this lower bound is uniform in $R'$.
- If $N_0 t_0^{1/2}$ is very large (triple exponential in $A$!), one can then find enough scales $R'$ with disjoint annuli $\{ x : R' \leq |x - x_0| \leq A^{O(1)} R' \}$ that the total lower bound on the $L^3_x$ norm of $u(t_0)$ provided by the above arguments is inconsistent with (1), thus establishing the claim. (A schematic version of this final counting step is sketched just after this list.)
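Purely as a schematic of that final counting step (with the actual dependence of the constants on $A$, which is where the exponentials enter, suppressed; see the paper for the real bookkeeping): if the argument produces $K$ pairwise disjoint annuli $\Omega_1, \dots, \Omega_K$ on the time slice $t_0$, each carrying $\| u(t_0) \|_{L^3_x(\Omega_k)} \geq \epsilon$ for some $\epsilon > 0$ depending only on $A$, then

$$A^3 \geq \| u(t_0) \|_{L^3_x(\mathbb{R}^3)}^3 \geq \sum_{k=1}^{K} \| u(t_0) \|_{L^3_x(\Omega_k)}^3 \geq K \epsilon^3,$$

so $K \leq (A/\epsilon)^3$. Since the admissible scales $R'$ are geometrically separated, an upper bound on the number of disjoint annuli converts (after exponentiating) into an upper bound on $N_0 t_0^{1/2}$; how good that bound is depends on how small $\epsilon$ has to be taken, which is where the exponential losses from the pigeonholing and Carleman steps accumulate.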
The chain of causality is summarised in the following image:
It seems natural to conjecture that similar triply logarithmic improvements can be made to several of the other blowup criteria listed above, but I have not attempted to pursue this question. It seems difficult to improve the triple logarithmic factor using only the techniques here; the Bourgain pigeonholing argument inevitably costs one exponential, the Carleman inequalities cost a second, and the stacking of scales at the end to contradict the upper bound costs the third.
45 comments
15 August, 2019 at 12:20 pm
omarleoblog
Wow sir you are an amazing person. Only a genius understands
15 August, 2019 at 12:26 pm
Anonymous
In the section describing the attempt “Exploiting blowup criteria” of implementing “strategy 3” in “Why global regularity for Navier-Stokes is hard” you say that “However, all such blowup criteria are subcritical or critical in nature, and thus, barring a breakthrough in Strategy 2, the known globally controlled quantities cannot be used to reach a contradiction.” Now that you have a supercritical blowup criterion, how would you say this affects the analysis of the hardness of global regularity of Navier-Stokes?
15 August, 2019 at 12:48 pm
Terence Tao
This result would fall in the category of “Positive approach 4” in the blog post you mention. Roughly speaking, the results here assert that in order to ensure regularity, one does not need to uniformly bound the critical norm $\| u(t) \|_{L^3_x(\mathbb{R}^3)}$; it would suffice to bound a slightly supercritical norm such as $\| u(t) \|_{L^3_x(\mathbb{R}^3)} / (\log\log\log \frac{1}{T_*-t})^{c}$ (actually I don’t quite prove this statement in the paper, but I would expect it would follow from a variant of the methods used there). This is technically a little bit closer to what one would need to positively resolve the global regularity problem, namely to be able to deduce regularity from a uniform bound on just the energy norm, but the gap between what implies global regularity and what one can actually control is still absolutely enormous. Note that in the case of dyadic models of Navier-Stokes, it is known that slightly supercritical levels of dissipation can still establish global regularity, but significantly supercritical levels of dissipation, such as those seen in the actual 3D Navier-Stokes equations, cause blowup. So I would not view slightly supercritical results as being significant evidence towards being able to control strongly supercritical situations; instead I view them primarily as optimisations of the critical theory, which can penetrate a little bit, but not too far, into the supercritical regime.
16 August, 2019 at 7:04 am
Anonymous
Dear Pro.Tao
I am very amusing to see your paper. You keep up your paper. A top secret from 2006 to 2019: around 13 years. So far I have wanted to tell a story that related to Pro.Tao and Perelmann. After Dr.Perelmann solved Poincare conjecture in 2006, I started to go to China in 2010 , a long journey was very hard. My aim to China is to meet the best man in the world(not mathematics).He is a wizard. He is a reclusive man not be on media, magazine.Firstly , I was very hard to meet him, but with my patient, I finally met him. Initially , I took 2 portraits of Dr.Tao and Perelmann. Obviously, he did not know who they were. I asked him that how Sir valued both of them.He answered they were the best heroes in the world. Next , I asked who was better(Obviously, he did not know Perelmann is Russian, Terence Tao is Australian). An amazing result: his finger on Terence Tao. Imediately , I had argument against him and I said Sir was wrong, Perelmann was better. I said that Perelmann solved the most difficult problem in 7 ones in the world. Now , there are only 6 ones. Terence Tao not yet. He did not wait me to say more. Sir quickly threw his pen pierce through out the picture. This made my heart thump. Sir opened big eyes and announced that Terence Tao solved 4 out of 6. For my viewpoint , I cannot affirm Sir was right or not, but to examine his talent , I asked Sir which football team would be champion in the world in 2014.Wow, Sir is a truly wizard ,the best man in the world. I got a sum of big money from betting .Thanks Sir a lot
16 August, 2019 at 11:47 am
Anonymous
“I was very hard to meet him” haha
20 August, 2019 at 8:54 am
Anonymous
Dear Tao,
After reading the last sentence at the end of your blog post, I want to offer you some advice. I can’t give you money or diamonds. I give you an uncountable thing, that is, spirit. Never give up. Although I am not a mathematician, I know that you have gone 85% of the way on the Navier-Stokes problem. Trust me! You will certainly succeed. In 2019, you will have a great surprise. I will give you two wings in order to make you fly higher in the sky.
15 August, 2019 at 2:00 pm
Anonymous
What is the precise definition of blowup at finite time ?
15 August, 2019 at 2:50 pm
Terence Tao
There are several equivalent definitions, but one would be “inability to extend the solution as a classical solution beyond that time”. (This in turn requires one to specify what a classical solution is; one definition for instance would be a solution where the velocity and pressure fields $u, p$ are smooth, and all derivatives of those fields are square-integrable in space, locally uniformly in time.) One can specify other notions of what it means to be a classical solution, but generally speaking, as long as the notion involves some subcritical regularity control, they end up being essentially equivalent to each other. Once one accepts any of the blowup criteria listed in the blog post, one can then take one of those criteria to be an alternate equivalent definition of finite time blowup if one wishes.
17 August, 2019 at 1:48 pm
Anonymous
Is there any conjecture on the magnitude of the best possible bound in theorem 1 ? (i.e. how much the triple exponential bound may be improved)
17 August, 2019 at 2:27 pm
Terence Tao
I don’t think there is much data on this to make a good conjecture yet. Certainly the bound is at least linear, since this is what happens for the linear heat equation; it should be possible to get some sort of superlinear bound by adapting some of the illposedness theory for supercritical equations. A polynomial bound would come close to settling the global regularity conjecture in the positive (if the exponent was low enough), but is unlikely to be true. There is a small chance that one can lower the triple exponential bound to a double exponential one by somehow avoiding the use of Bourgain’s energy pigeonholing argument, but I think this would require one to use more Carleman estimates than the “off the shelf” ones that I simply took from the Escauriaza-Seregin-Sverak paper.
18 August, 2019 at 2:44 am
Anonymous
Is it possible for the best possible bound to be sub-exponential in $A$?
[See my previous response – T.]
18 August, 2019 at 11:45 pm
Anonymous coward
There’s something I just don’t understand: why bother with all this crap? There are much easier ways to earn 1 million than to go for a millennium problem. What is the point of math at all? This world is all about shallowness, money, status; only a naive person who never grew out of that innocent teen mentality would waste his life on these useless and intractable problems. You publish some result that takes years in the making, only to have fewer than 50 people in the world who can understand it; you “live for the grudging appreciation of your few colleagues” (some famous mathematician said this, can’t remember the name). While savvy, cunning, manipulative people live in heaven on earth. I slowly saw the world for what it is, I was severely ostracized by certain people I really wanted to get to know because of my “career decisions”, outgrew that naive teen mentality, and quit this nightmare years ago. People like you, Terry, and my supervisors… I will never understand you people.
19 August, 2019 at 5:09 am
Michel Eve
What a stupid comment! If you wonder about the usefulness of doing research in Math, then why do you bother reading this blog? You are very lucky that your inappropriate comment has not been removed… especially coming from an anonymous coward.
19 August, 2019 at 5:17 am
Anonymous
You are extremely wrong. Prof. Tao does not need money. Conquering the best problems in the world comes from his love of math since he was a child. If he needed money… (Maybe Prof. Tao has gotten the Abel prize (nearly $1 million) in recent years, but he is humble; he yields to other men. You should not ask me how I know.) With his famous name in the world, he makes money very easily.
15 September, 2019 at 10:55 pm
Ari
I don’t know if you are trolling but I will respond.
1. He is not in it for the money, duh. This may be hard for “normal” people to understand but some people just love math just like some people love baseball.
2. He is doing a less selfish thing than making millions, and you are blaming him for what? I don’t think he is savvy-cunning; if he were, he’d be making millions in finance or something. In fact he is the complete opposite. Savvy-cunning-manipulative people don’t spend decades working on math problems that don’t pay returns.
3. I do agree that people need to cut out any naive teen mentality, e.g. that doing hard math work will make you happy. People who want to do serious research might end up unhappy, depressed or even on the street. But this message should be for the young guys, not for Tao.
4. Be happy that there are people working on these hard problems. Even though a lot of math will have relatively little value, almost all of it increases in utility in some way. For example, solving the Navier-Stokes equations might make for better and safer turbines for airplanes, or even help model blood flow in human veins, etc. Without Fourier and other transforms we wouldn’t have MRI or digital imaging, etc.
p.s. Stop contaminating his fine blog with these bad comments.
19 August, 2019 at 11:03 pm
m.r.s
Hi Prof. Tao,
Please write a complete book on the Navier-Stokes equation. That book should be:
1. A book that deals with the details and prerequisites of this equation!
2. A book that is simple and detailed!
19 August, 2019 at 11:14 pm
m.r.s
Many books have been written about Navier Stokes equations. But they are not perfect at all. Not for beginners.
Please write a book for beginners on Navier Stokes equations.
For example with this title : “Navier Stokes equations for beginners”
19 August, 2019 at 11:17 pm
m.r.s
Such a book will surely be very bulky. Maybe 1000 pages!!!!!!!!
20 August, 2019 at 10:28 am
Anonymous
Thanks for the Note.
Can you express or refer us to rigorous definitions and distinctions between
a) Blow-up
b) Singularity
c) Uniqueness
d) Well-posed
e) Regularity
I can see these terms have been used interchangeably sometimes. And please let us know what has not been proven yet.
If the initial condition for 3D NS is small (in some norms), it seems that the solution remains bounded. If so, what is left to be proven regarding existence and uniqueness?
Thanks again,
20 August, 2019 at 10:30 am
Anonymous
I forgot to mention that it has been proved that the weak solution for Navier-Stokes is not unique. Is there any consequence for convergence to the strong solution?
Thanks,
Mahdi
20 August, 2019 at 11:09 am
Terence Tao
The short answer is “it’s complicated”.
A technical discussion of some of the basic results and concepts can be found for instance in my recent lecture notes on this topic. But for the abridged version: first one has to select a solution concept, which can range all the way from classical solutions (smooth and decaying in space) to weak solutions (only solving the equation in a distributional sense), with a number of intermediate notions of solution that have been studied (e.g. Leray-Hopf weak solution, suitable weak solution, Fujita-Kato mild solution, etc.). One also has to select a matching class of initial data (e.g., smooth data, finite energy data, etc.). Once one has chosen a solution concept and an initial data class, one can then ask the standard questions of existence, uniqueness, well-posedness, and regularity for that combination.
As it turns out for Navier-Stokes, solution concepts basically divide into two types, “strong solutions” and “weak solutions”. For strong solutions, one typically has local existence, uniqueness, well-posedness and regularity, but global existence or regularity is not known unless one is in a perturbative regime where the initial data is very close to zero (or very close to some other well understood initial state); if the solution is not global, then there are various blowup criteria, such as the ones listed in the blog post, which then demonstrate various types of blowup and singularity formation at the final time of existence. For weak solutions, uniqueness, well-posedness and regularity are either false or not known unconditionally (it depends on the precise weak solution concept), but there are conditional results such as “weak-strong uniqueness” results. Also, for weak solutions one typically has global existence (so in particular weak solutions usually do not “blow up” so badly that they exit the category of weak solutions), and some partial information is known about the singularities (e.g., control on the Hausdorff dimension of the singular set).
Weak solutions typically arise as weak limits of strong solutions with high frequency forcing terms. A non-uniqueness result for weak solutions then indicates that some sequences of strong solutions, with high frequency forcing terms and initial data converging weakly to some limiting initial data, need not converge weakly. This may either indicate some limitations of the strong well-posedness theory, or result primarily from the low-frequency consequences of adding a high frequency forcing term. In my opinion many of the currently known non-uniqueness results for weak solutions are of the second category.
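For readers unfamiliar with the terminology, here is the standard Leray-Hopf notion (a textbook definition, not specific to the discussion above): a Leray-Hopf weak solution is a divergence-free field $u \in L^\infty_t L^2_x \cap L^2_t \dot H^1_x$ that solves the equations in the distributional sense and obeys the energy inequality

$$\frac{1}{2} \int_{\mathbb{R}^3} |u(t,x)|^2\, dx + \int_0^t \int_{\mathbb{R}^3} |\nabla u(s,x)|^2\, dx\, ds \leq \frac{1}{2} \int_{\mathbb{R}^3} |u(0,x)|^2\, dx$$

for almost every $t > 0$; classical solutions obey this with equality, which is the energy identity whose dissipative term is exploited in the main post.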
20 August, 2019 at 11:26 am
Anonymous
Thanks a lot for prompt and comprehensive response.
Wish you all the best,
Mahdi
20 August, 2019 at 1:38 pm
Anonymous
It seems that the viscosity coefficient is missing in the NS equation above.
[In this paper the viscosity is normalised to be 1 for simplicity. -T]
21 August, 2019 at 12:56 am
Anonymous
The viscosity normalization to 1 may be viewed as a gauge transformation.
It seems that such normalization is possible (in certain cases) even for dimensionless parameters.
20 August, 2019 at 1:59 pm
m.r.s
What does the “cj” notation on page 8 mean? It does not seem to be used anywhere else in the paper.
20 August, 2019 at 2:09 pm
m.r.s
The notation cj in this equation: Aj = c0j, on page 8 of your paper!!!
20 August, 2019 at 6:30 pm
Terence Tao
25 August, 2019 at 5:07 pm
Anonymous
Dear Sir Tao,
I’d like to contribute a little merit:
To discuss Navier-Stokes, allow me to tell a 100-year-old story. “100 years ago, the inventor Edison met an old woman; she complained that she was tired and wished there were some means to help her go home.” Immediately, Edison thought about the first electric car in the world. A great invention from a little idea of an old woman. “Sometimes the great heroes of the world must respect a little idea from a normal person.” Likewise with Navier-Stokes: why do hundreds of mathematicians in the world keep their heads in solving the “ghost” Navier-Stokes equation? Like crossing to the other bank of a river: instead of building a bridge that costs much money, material and men, we only use a boat to reach the destination. Sir Tao, why don’t you invent another formula to describe fluid flows, replacing the “ghost” Navier-Stokes equation? After that, Sir, compare the two equations together.
Best wishes
22 August, 2019 at 8:05 am
matemático joven
Dear Prof. Tao,
this might be a really dumb suggestion, but did you try an evolutionary algorithm for building your water computer?
Perhaps those criteria could restrict your search space.
22 August, 2019 at 3:12 pm
rjgoodin
Hi Prof. Tao,
I’m sorry to see all the bizarre comments that come up any time you mention the phrase “Navier-Stokes”, but I’m impressed by your graciousness.
For my part, as an interested layperson, I was interested to see all of these qualitative results for the blow-up, but really surprised that there were no quantitative results so far. It’s fascinating to see all of these open problems, regardless of how far they get us on the Big Ones.
23 August, 2019 at 6:39 am
matemático joven
I think the lack of quantitative results was with respect to the Escauriaza-Seregin-Sverak-type estimates. But you’d have to look it up.
26 August, 2019 at 5:18 am
Stupid&lame
This blog is stupid and lame
14 November, 2019 at 4:27 am
claesjohnson
I have posted a comment on my blog
https://claesjohnson.blogspot.com/2019/11/solving-clay-navier-stokes-problem-with.html
Would very much appreciate your view on the meaningfulness of a triple exponential bound on gradients and the relation between smoothness and turbulence. Can turbulent solutions be smooth solutions? If so, what is the meaning of a turbulent solution?
14 November, 2019 at 2:16 pm
Daniel
I have previously found an exact family of solutions to the Euler equations that appear very turbulent, and they are smooth.
27 December, 2019 at 9:43 pm
Anonymous
In the paper, when you get (5.17) from the previous line, why is it not a triple exponential of $A_6$? $t_0$ is lower bounded by an exponential of $R'$, which is itself bounded by an exponential of $A_6$.
28 December, 2019 at 3:42 pm
Terence Tao
Thanks for pointing this out. The radius $R'$ for Proposition 4.3 here was selected to be too large just after (5.15). If one makes a smaller choice of this radius (and adjusts the accompanying parameter accordingly), then I believe the argument goes through as before, and now there is only a double exponential loss rather than a triple exponential one. This will be corrected in the next revision of the ms.
28 December, 2019 at 4:26 pm
Anonymous
I see, thanks! I’m glad it doesn’t have to be exp exp exp exp.
30 December, 2019 at 8:37 pm
Daoguo Zhou
Are there more references in PDE about turning qualitative results obtained by compactness methods into quantitative analogues?
The method in your paper does not appear often.
31 December, 2019 at 10:50 am
Terence Tao
There are some very general “proof mining” techniques (see e.g., https://www.brics.dk/RS/02/31/BRICS-RS-02-31.pdf ) that allow one to start with a qualitative result obtained by a compactness method and convert it into a quantitative result, but these methods often give extremely poor bounds (tower exponential type bounds are common). In high dimensional geometry there is some literature on replacing results based on compact embeddings of Banach spaces with more quantitative estimates on metric entropy numbers, though I don’t know if such results have been applied very much to PDE settings. In the reverse direction, to attack critical PDE, Bourgain introduced a quantitative “induction on energy” method which was later simplified by Kenig and Merle to a qualitative “non-existence of minimal-energy blowup solution” argument.
In an old paper of mine http://front.math.ucdavis.edu/math.AP/0402130 I obtained quantitative bounds for the global regularity of spherically symmetric NLS which had previously been obtained by arguments that were either qualitative or used the induction on energy method that provided only tower exponentially poor bounds. Much of the literature on global regularity for slightly supercritical equations (starting with my paper https://arxiv.org/abs/math/0606145 ) also implicitly requires a very quantitative understanding of the critical theory; roughly speaking, the extent to which one can penetrate into the supercritical regime tends to be inversely proportional to the strength of the quantitative bounds one gets in the critical regime.
Certainly I think it is a productive exercise to look at other qualitative results in PDE (or in analysis in general) and try to make them more quantitative, with bounds as strong as possible; this exercise tends to force one to really understand the underlying mechanics of the argument, and also allows one to quantitatively compare the strength of various techniques for attacking the problem in a way that might not be visible at the purely qualitative level.
11 January, 2020 at 9:45 pm
Leo
https://link.springer.com/chapter/10.1007/978-3-642-32589-2_51
smoothness really help or it’s just hiding the goal.
if i want to collapse in a little hole who represent the best gradient but i can’t define the hole.
Or inverse smoothness doesn’t change complexity so i’m working with the same entity. Therefore we always collapse the optimal way by applying it?
3 October, 2020 at 12:15 pm
zaker
Hello professor.
I want to ask you a few questions about the Navier-Stokes equation of incompressible fluid flow.
Question 1- Can the Navier-Stokes equation be explained like Euler’s equation by Arnold’s theory of geodesics?
Question 2 – Suppose we have written the Navier-Stokes equation on a Riemannian manifold. How is the “Millennium Prize Problem” expressed now? That is, how is the above problem written for a Riemannian manifold?
30 October, 2020 at 4:26 pm
.
Would research like this 1) help you, or 2) motivate you to work on deep learning, or 3) suggest that deep learning might be a tool in your arsenal some day in your work on PDEs, or maybe other (ir)relevant areas? https://www.technologyreview.com/2020/10/30/1011435/ai-fourier-neural-network-cracks-navier-stokes-and-partial-differential-equations/
If so or if not, why or why not, respectively?