You are currently browsing the tag archive for the ‘Nash embedding theorem’ tag.

# Tag Archive

## 255B, Notes 2: Onsager’s conjecture

8 January, 2019 in 255B - incompressible Euler equations, math.AP | Tags: convex integration, Euler equations, Nash embedding theorem, Onsager's conjecture | by Terence Tao | 25 comments

We consider the incompressible Euler equations on the (Eulerian) torus $(\mathbb{R}/\mathbb{Z})^d$, which we write in divergence form as

$\displaystyle \partial_t u^i + \partial_j(u^j u^i) = - \eta^{ij} \partial_j p \ \ \ \ \ (1)$

$\displaystyle \partial_i u^i = 0, \ \ \ \ \ (2)$

where $\eta^{ij}$ is the (inverse) Euclidean metric. Here we use the usual summation conventions for repeated indices (reserving certain symbols for other purposes), and retain the convention from Notes 1 of denoting vector fields using superscripted indices rather than subscripted indices, as we will eventually need to change variables to Lagrangian coordinates. In principle, much of the discussion in this set of notes (particularly regarding the positive direction of Onsager's conjecture) could be modified to treat non-periodic solutions that decay at infinity if desired, but some non-trivial technical issues arise in non-periodic settings for the negative direction. As noted previously, the kinetic energy

$\displaystyle E(t) := \int_{(\mathbb{R}/\mathbb{Z})^d} \frac{1}{2} \eta_{ij} u^i u^j\ dx$

is formally conserved by the flow, where $\eta_{ij}$ is the Euclidean metric. Indeed, if one assumes that $u, p$ are continuously differentiable in both space and time, then one can contract the equation (1) against $\eta_{ij} u^j$, rearrange using (2) and the product rule into a total divergence, and then integrate this identity over the torus and use Stokes' theorem to obtain the required energy conservation law

$\displaystyle E(t) = E(0). \ \ \ \ \ (3)$

It is then natural to ask whether the energy conservation law continues to hold for lower regularity solutions, in particular weak solutions that only obey (1), (2) in a distributional sense. The above argument no longer works as stated, because $u$ is not a test function and so one cannot immediately integrate (1) against $u$. And indeed, as we shall soon see, it is now known that once the regularity of $u$ is low enough, energy can "escape to frequency infinity", leading to failure of the energy conservation law, a phenomenon known in physics as *anomalous energy dissipation*.
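Since the displayed identities in this derivation did not survive formatting, here is a hedged reconstruction of the formal computation in coordinates (the index conventions are my best guess at the notes' notation):

```latex
% Euler in divergence form: \partial_t u^i + \partial_j(u^j u^i) = -\eta^{ij}\partial_j p,
% with the divergence-free condition \partial_i u^i = 0.
% Contracting the evolution equation against \eta_{ij} u^j and using the product
% rule together with \partial_i u^i = 0 gives the local energy identity
\[
\partial_t \Big( \tfrac{1}{2} \eta_{ij} u^i u^j \Big)
  + \partial_j \Big( \tfrac{1}{2} \eta_{ik} u^i u^k \, u^j + p\, u^j \Big) = 0,
\]
% and integrating over the torus (Stokes' theorem kills the divergence term) yields
\[
\frac{d}{dt} \int \tfrac{1}{2} \eta_{ij} u^i u^j \, dx = 0.
\]
```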

But what is the precise level of regularity needed for this anomalous energy dissipation to occur? To make this question precise, we need a quantitative notion of regularity. One such measure is given by the Hölder space $C^{0,\alpha}$ for $0 < \alpha < 1$, defined as the space of continuous functions $u$ whose norm

$\displaystyle \|u\|_{C^{0,\alpha}} := \sup_x |u(x)| + \sup_{x \neq y} \frac{|u(x)-u(y)|}{|x-y|^\alpha}$

is finite. The space $C^{0,\alpha}$ lies between the space $C^0$ of continuous functions and the space $C^1$ of continuously differentiable functions, and informally describes a space of functions that are "$\alpha$ times differentiable" in some sense. The above derivation of the energy conservation law involved a cubic integral in $u$ carrying a single derivative, which roughly speaking measures the fluctuation in energy. Informally, if we could somehow "integrate by parts" to split that derivative "equally" amongst the three factors, one would morally arrive at an expression involving three factors of "$1/3$ of a derivative" of $u$, which suggests that the integral can be made sense of once $\alpha > 1/3$. More precisely, one can make

Conjecture 1 (Onsager's conjecture) Let $0 < \alpha < 1$ and $d \geq 2$.

- (i) If $\alpha > 1/3$, then any weak solution $u \in C^{0,\alpha}$ to the Euler equations (in Leray form) obeys the energy conservation law (3).
- (ii) If $\alpha < 1/3$, then there exist weak solutions $u \in C^{0,\alpha}$ to the Euler equations (in Leray form) which do not obey energy conservation.

This conjecture was originally arrived at by Onsager by a somewhat different heuristic derivation; see Remark 7. The numerology is also compatible with that arising from the Kolmogorov theory of turbulence (discussed in this previous post), but we will not discuss this interesting connection further here.

The positive part (i) of the Onsager conjecture was established by Constantin, E, and Titi, building upon earlier partial results by Eyink; the proof is a relatively straightforward application of Littlewood-Paley theory, and they were also able to work in function spaces slightly larger than $C^{0,\alpha}$ (using $L^3$-based Besov spaces instead of Hölder spaces; see Exercise 3 below). The negative part (ii) is harder. Discontinuous weak solutions to the Euler equations that did not conserve energy were first constructed by Scheffer, with an alternate construction later given by Shnirelman. De Lellis and Szekelyhidi noticed the resemblance of this problem to that of the Nash-Kuiper theorem in the isometric embedding problem, and began adapting the *convex integration* technique used in that theorem to construct weak solutions of the Euler equations. This began a long series of papers in which increasingly regular weak solutions that failed to conserve energy were constructed, culminating in a recent paper of Isett establishing part (ii) of the Onsager conjecture in the non-endpoint case $\alpha < 1/3$ in three and higher dimensions; the endpoint $\alpha = 1/3$ remains open. (In two dimensions it may be the case that the positive results extend to a larger range of $\alpha$ than Onsager's conjecture predicts; see this paper of Cheskidov, Lopes Filho, Nussenzveig Lopes, and Shvydkoy for more discussion.) Further work continues into several variations of the Onsager conjecture, in which one looks at other differential equations, other function spaces, or other criteria for bad behavior than breakdown of energy conservation. See this recent survey of de Lellis and Szekelyhidi for more discussion.

In these notes we will first establish (i), then discuss the convex integration method in the original context of the Nash-Kuiper embedding theorem. Before tackling the Onsager conjecture (ii) directly, we discuss a related construction of high-dimensional weak solutions in a low-regularity Sobolev space, which is slightly easier to establish, though still rather intricate. Finally, we discuss the modifications of that construction needed to establish (ii), though we shall stop short of a full proof of that part of the conjecture.

We thank Phil Isett for some comments and corrections.

## Embedding the Heisenberg group into a bounded dimensional Euclidean space with optimal distortion

26 November, 2018 in math.AP, math.MG, paper | Tags: bilipschitz embedding, Heisenberg group, Nash embedding theorem | by Terence Tao | 13 comments

I've just uploaded to the arXiv my paper "Embedding the Heisenberg group into a bounded dimensional Euclidean space with optimal distortion", submitted to Revista Matematica Iberoamericana. This paper concerns the extent to which one can accurately embed the metric structure of the Heisenberg group $H$ into Euclidean space; here $H$ can be modelled as the group of $3 \times 3$ upper triangular unipotent real matrices. We give $H$ the right-invariant Carnot-Carathéodory metric $d$ coming from two of the right-invariant vector fields (the "horizontal" directions), but not from their commutator vector field (the central or "vertical" direction); this gives $H$ the geometry of a Carnot group. As observed by Semmes, it follows from the Carnot group differentiation theory of Pansu that there is no bilipschitz map from $(H,d)$ to any Euclidean space $\mathbb{R}^D$, or even to infinite-dimensional Hilbert space, since such a map must be differentiable almost everywhere in the sense of Carnot groups, which in particular forces its derivative to annihilate the vertical direction almost everywhere; this is incompatible with the map being bilipschitz.

On the other hand, if one *snowflakes* the Heisenberg group by replacing the metric $d$ with $d^{1-\varepsilon}$ for some $0 < \varepsilon < 1$, then it follows from the general theory of Assouad on embedding snowflaked metrics of doubling spaces that the snowflaked group $H^{1-\varepsilon}$ may be embedded in a bilipschitz fashion into Hilbert space, or even into some finite-dimensional Euclidean space $\mathbb{R}^D$ with $D$ depending on $\varepsilon$.

Of course, the distortion of this bilipschitz embedding must degenerate in the limit $\varepsilon \to 0$. From the work of Austin-Naor-Tessera and Naor-Neiman it follows that $H^{1-\varepsilon}$ may be embedded into Hilbert space with a distortion of $O(\varepsilon^{-1/2})$, but no better. The Naor-Neiman paper also embeds $H^{1-\varepsilon}$ into a finite-dimensional space $\mathbb{R}^D$ with $D$ independent of $\varepsilon$, but at the cost of worsening the distortion. They then posed the question of whether this worsening of the distortion is necessary.

The main result of this paper answers this question in the negative:

Theorem 1. There exists an absolute constant $D$ such that $H^{1-\varepsilon}$ may be embedded into $\mathbb{R}^D$ in a bilipschitz fashion with distortion $O(\varepsilon^{-1/2})$ for any $0 < \varepsilon \leq 1/2$.

To motivate the proof of this theorem, let us first present a bilipschitz map $\Phi: \mathbb{R} \to \ell^2$ from the snowflaked line $(\mathbb{R}, |x-y|^{1-\varepsilon})$ (with $|x-y|$ being the usual metric on $\mathbb{R}$) into complex Hilbert space $\ell^2$. The map is given explicitly as a Weierstrass type function

$\displaystyle \Phi(x) := \sum_{k \in \mathbb{Z}} \phi_k(x)$

where for each $k \in \mathbb{Z}$, $\phi_k$ is the function

$\displaystyle \phi_k(x) := 2^{-(1-\varepsilon)k} \left( e^{2\pi i 2^k x} - 1 \right) e_k$

and $(e_k)_{k \in \mathbb{Z}}$ are an orthonormal basis for $\ell^2$. The subtraction of the constant term is purely in order to make the sum convergent as $k \to -\infty$. If $x, y \in \mathbb{R}$ are such that $2^{-k_0} \leq |x-y| \leq 2^{-k_0+1}$ for some integer $k_0$, one can easily check the bounds

$\displaystyle \| \phi_k(x) - \phi_k(y) \|_{\ell^2} \lesssim 2^{-(1-\varepsilon)k} \min( 1, 2^k |x-y| )$

for all $k$, with the lower bound

$\displaystyle \| \phi_k(x) - \phi_k(y) \|_{\ell^2} \gtrsim |x-y|^{1-\varepsilon}$

for at least one scale $k = k_0 + O(1)$, at which point one finds that

$\displaystyle \| \Phi(x) - \Phi(y) \|_{\ell^2} \asymp |x-y|^{1-\varepsilon}$

as desired.
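A quick numerical sanity check of this Weierstrass-type construction (all normalisations below are my own choices, not necessarily the paper's):

```python
import numpy as np

def phi(x, eps=0.3, kmin=-20, kmax=20):
    """Truncated Weierstrass-type embedding of the snowflaked line
    (R, |x - y|^(1 - eps)) into a finite-dimensional slice of complex
    Hilbert space.  The k-th coordinate oscillates at spatial scale
    2^(-k); subtracting the constant 1 makes the k -> -infinity tail
    converge.  (Normalisations here are mine, not the paper's.)"""
    k = np.arange(kmin, kmax + 1)
    return 2.0 ** (-(1 - eps) * k) * (np.exp(2j * np.pi * 2.0 ** k * x) - 1)

def distortion(eps=0.3, n=2000, seed=0):
    """Sample the ratio ||phi(x) - phi(y)|| / |x - y|^(1 - eps); for a
    bilipschitz embedding this ratio stays inside a fixed band [c, C]."""
    rng = np.random.default_rng(seed)
    r = 10.0 ** rng.uniform(-3, 1, n)        # separations over four decades
    x = rng.uniform(-5.0, 5.0, n)
    ratios = np.array([
        np.linalg.norm(phi(xi + ri, eps) - phi(xi, eps)) / ri ** (1 - eps)
        for xi, ri in zip(x, r)
    ])
    return ratios.min(), ratios.max()

lo, hi = distortion()
print(f"ratio range: [{lo:.2f}, {hi:.2f}]")  # stays in a bounded band
```

The point of the experiment is that the ratio band does not widen as the separations range over several decades, which is exactly the bilipschitz property for the snowflaked metric.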

The key here was that each function $\phi_k$ oscillated at a different spatial scale $2^{-k}$, and the functions were all orthogonal to each other (so that the upper bound summed in an $\ell^2$ rather than $\ell^1$ fashion). One can replicate this example for the Heisenberg group without much difficulty. Indeed, if we let $\Gamma$ be the discrete Heisenberg group (the integer points of $H$), then the associated nilmanifold is a three-dimensional smooth compact manifold; thus, by the Whitney embedding theorem, it smoothly embeds into some Euclidean space $\mathbb{R}^{D_0}$. This gives a smooth immersion $\phi: H \to \mathbb{R}^{D_0}$ which is $\Gamma$-automorphic in the sense that $\phi(\gamma p) = \phi(p)$ for all $p \in H$ and $\gamma \in \Gamma$. If one then defines $\phi_k: H \to \ell^2 \otimes \mathbb{R}^{D_0}$ to be the function

$\displaystyle \phi_k(p) := 2^{-(1-\varepsilon)k} \phi( \delta_{2^k}(p) ) \otimes e_k$

where $\delta_{2^k}: H \to H$ is the scaling automorphism (dilating the horizontal directions by $2^k$ and the central direction by $4^k$), then one can repeat the previous arguments to obtain the required bilipschitz bounds

$\displaystyle \| \Phi(p) - \Phi(q) \| \asymp d(p,q)^{1-\varepsilon}$

for the function $\Phi := \sum_{k \in \mathbb{Z}} \phi_k$.

To adapt this construction to bounded dimension, the main obstruction was the requirement that the $\phi_k$ took values in orthogonal subspaces. But if one works things out carefully, it is enough to require the weaker orthogonality condition

$\displaystyle B( \phi_j, \phi_k ) = 0 \ \ \ \ \ (4)$

for all $j \neq k$, where $B$ is a certain symmetric bilinear form built out of first derivatives.

One can then try to construct the $\phi_k$ in bounded dimension by an iterative argument. After some standard reductions, the problem becomes this (roughly speaking): given a smooth, slowly varying function $f$ whose derivatives obey certain quantitative upper and lower bounds, construct a smooth oscillating function $\phi$, whose derivatives also obey certain quantitative upper and lower bounds, which obeys an equation of the shape

$\displaystyle B( f, \phi ) = 0. \ \ \ \ \ (1)$

We view this as an underdetermined system of differential equations for $\phi$ (two equations in $D$ unknowns; after some reductions, $D$ can be taken to be an explicit absolute constant). The trivial solution $\phi = 0$ will be inadmissible for our purposes due to the lower bounds we will require on $\phi$ (in order to obtain the quantitative immersion property mentioned previously, as well as a stronger "freeness" property that is needed to close the iteration). Because this construction will need to be iterated, it will be essential that the regularity control on $\phi$ is the same as that on $f$; one cannot afford to "lose derivatives" when passing from $f$ to $\phi$.

This problem has some formal similarities with the isometric embedding problem (discussed for instance in this previous post), which can be viewed as the problem of solving an equation of the form $Q(u,u) = g$, where $(M,g)$ is a Riemannian manifold and $Q$ is the bilinear form

$\displaystyle Q(u,v)_{ij} := \frac{1}{2} \left( \partial_i u \cdot \partial_j v + \partial_j u \cdot \partial_i v \right).$

The isometric embedding problem also has the key obstacle that naive attempts to solve the equation iteratively can lead to an undesirable "loss of derivatives" that prevents one from iterating indefinitely. This obstacle was famously resolved by the Nash-Moser iteration scheme, in which one alternates between perturbatively adjusting an approximate solution to improve the residual error term, and mollifying the resulting perturbation to counteract the loss of derivatives. The current equation (1) differs in some key respects from the isometric embedding equation $Q(u,u) = g$, in particular being linear in the unknown field $\phi$ rather than quadratic; nevertheless the key obstacle is the same, namely that naive attempts to solve either equation lose derivatives. Our approach to solving (1) was inspired by the Nash-Moser scheme; in retrospect, I also found similarities with Uchiyama's constructive proof of the Fefferman-Stein decomposition theorem, discussed in this previous post (and in this recent one).

To motivate this iteration, we first express the left-hand side of (1) using the product rule in a form (2) that does not place derivatives directly on the unknown $\phi$. This reveals that one can construct solutions to (1) by solving a system of equations (3) for $\phi$. Because this system is zeroth order in $\phi$, it can easily be solved by linear algebra (even in the presence of a forcing term) if one imposes a "freeness" condition (analogous to the notion of a free embedding in the isometric embedding problem) that certain derivatives of $f$ are linearly independent at each point, which (together with some other technical conditions of a similar nature) one then adds to the list of upper and lower bounds required on $f$ (with a related bound then imposed on $\phi$, in order to close the iteration). However, as mentioned previously, there is a "loss of derivatives" problem with this construction: due to the presence of the differential operators in (3), a solution $\phi$ constructed by this method can only be expected to have two degrees less regularity than $f$ at best, which makes this construction unsuitable for iteration.
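As a toy illustration of why a zeroth-order ("free") system costs no derivatives: at each point one merely solves a wide linear system, and the solution depends algebraically (hence with no loss of regularity) on the coefficients. The matrix sizes and data below are invented for illustration and are not the paper's actual system:

```python
import numpy as np

def solve_free_system(A, b):
    """Solve the pointwise linear system A x = b for the unknown x.
    Here A plays the role of a wide matrix built from the known data f;
    the "freeness" condition -- rows of A linearly independent at each
    point -- guarantees an exact solution exists, and since the system
    is zeroth order in x, no derivatives fall on the unknown.
    Schematic only."""
    # lstsq returns the minimum-norm exact solution when A has full row rank
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Two equations in D unknowns, as in the text (D = 12 chosen arbitrarily):
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 12))   # full row rank almost surely
b = rng.standard_normal(2)
x = solve_free_system(A, b)
print(np.allclose(A @ x, b))       # the underdetermined system is solved exactly
```

The design point being illustrated: because no differential operator acts on `x`, iterating this pointwise solve cannot by itself degrade regularity, which is exactly the property needed to close the iteration.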

To get around this obstacle (which also prominently appears when solving (linearisations of) the isometric embedding equation $Q(u,u) = g$), we instead first construct a smooth, low-frequency solution to a low-frequency version of the equation, in which the data $f$ is replaced by a mollification of $f$ (of Littlewood-Paley type) applied at a small spatial scale, and then gradually relax the frequency cutoff to deform this low frequency solution to a solution of the actual equation (1).

We will construct the low-frequency solution rather explicitly, using the Whitney embedding theorem to construct an initial oscillating map into a very low dimensional space, composing it with a Veronese type embedding into a slightly larger dimensional space to obtain a required "freeness" property, and then composing further with a slowly varying isometry constructed by a quantitative topological lemma (relying ultimately on the vanishing of the first few homotopy groups of high-dimensional spheres), in order to obtain the required orthogonality (4). (This sort of "quantitative null-homotopy" was first proposed by Gromov, with some recent progress on optimal bounds by Chambers-Manin-Weinberger and by Chambers-Dotterer-Manin-Weinberger, but we will not need these more advanced results here, as one can rely on the classical qualitative vanishing together with a compactness argument to obtain (ineffective) quantitative bounds, which suffice for this application.)
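A minimal sketch of a frequency cutoff of the kind invoked here, with a sharp Fourier truncation standing in for the smooth Littlewood-Paley mollification of the actual argument:

```python
import numpy as np

def lowpass(f_vals, N):
    """P_{<=N} f: kill all Fourier modes of frequency above N on a periodic
    grid -- a crude stand-in for the smooth Littlewood-Paley-type
    mollification used in the argument (which would use a smooth cutoff)."""
    fhat = np.fft.fft(f_vals)
    freqs = np.fft.fftfreq(len(f_vals), d=1.0 / len(f_vals))  # integer frequencies
    fhat[np.abs(freqs) > N] = 0.0
    return np.fft.ifft(fhat).real

# A rough (discontinuous) profile and its low-frequency mollification:
x = np.arange(256) / 256.0
f = np.sign(np.sin(2 * np.pi * x))
g = lowpass(f, 8)
# P_{<=N} is a projection: applying it twice changes nothing, and all
# modes of frequency <= N pass through untouched.
```

Gradually raising the cutoff `N` toward the full spectrum is the discrete analogue of "relaxing the frequency cutoff" to deform the low-frequency solution into a solution of the actual equation.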

To perform the deformation of the low-frequency solution into a solution of (1), we must solve what is essentially the linearised equation (5) of (1), in which $f$ and $\phi$ (viewed as low frequency functions) are both being deformed at some rates $\dot f$, $\dot \phi$ (which should be viewed as high frequency functions). To avoid losing derivatives, the magnitude of the deformation $\dot \phi$ should not be significantly greater than the magnitude of the deformation $\dot f$, when measured in the same function space norms.

As before, if one directly solves the difference equation (5) using a naive application of (2) with $\dot f$ treated as a forcing term, one will lose at least one derivative of regularity when passing from $\dot f$ to $\dot \phi$. However, observe that (2) (and the symmetry of $B$) can be used to obtain a more favourable identity, and then one can solve (5) by solving a system of equations for $\dot \phi$. The key point here is that this system is zeroth order in both $\phi$ and $\dot \phi$, so one can solve it without losing any derivatives when passing from $\dot f$ to $\dot \phi$; compare this situation with that of the superficially similar system that one would obtain from naively linearising (3) without exploiting the symmetry of $B$. There is still however one residual "loss of derivatives" problem, arising from the presence of a differential operator on the $\phi$ term, which prevents one from directly evolving this iteration scheme in time without losing regularity in $\phi$. It is here that we borrow the final key idea of the Nash-Moser scheme, which is to replace $\phi$ by a mollified version of itself (where the mollification scale depends on the time parameter). This creates an error term in (5), but it turns out that this error term is quite small and smooth (being a "high-high paraproduct", it ends up being far more regular than either of its factors, even with the presence of the derivatives) and can be iterated away provided that the initial frequency cutoff is large and the function $f$ has a fairly high (but finite) amount of regularity (we will eventually use a Hölder space on the Heisenberg group to measure this).

## Finite time blowup for a supercritical defocusing nonlinear Schrodinger system

5 December, 2016 in paper | Tags: conservation laws, Nash embedding theorem, nonlinear Schrodinger equation | by Terence Tao | 19 comments

I’ve just uploaded to the arXiv my paper Finite time blowup for a supercritical defocusing nonlinear Schrödinger system, submitted to Analysis and PDE. This paper is an analogue of a recent paper of mine in which I constructed a supercritical defocusing nonlinear wave (NLW) system which exhibited smooth solutions that developed singularities in finite time. Here, we achieve essentially the same conclusion for the (inhomogeneous) supercritical defocusing nonlinear Schrödinger (NLS) equation

where $u$ is now a system of scalar fields, $V$ is a potential which is strictly positive and homogeneous of the appropriate degree (and invariant under phase rotations $u \mapsto e^{i\theta} u$), and the equation carries a smooth compactly supported forcing term, needed for technical reasons.

To oversimplify somewhat, the equation (1) is known to be globally regular in the *energy-subcritical* case when $d \leq 2$, or when $d \geq 3$ and $p < 1 + \frac{4}{d-2}$; global regularity is also known (but is significantly more difficult to establish) in the *energy-critical* case when $d \geq 3$ and $p = 1 + \frac{4}{d-2}$. (This is an oversimplification for a number of reasons; in particular, in higher dimensions one only knows global well-posedness instead of global regularity. See this previous post for some exploration of this issue in the context of nonlinear wave equations.) The main result of this paper is to show that global regularity can break down in the remaining *energy-supercritical* case when $d \geq 3$ and $p > 1 + \frac{4}{d-2}$, at least when the target dimension $m$ is allowed to be sufficiently large depending on the spatial dimension $d$ (I did not try to achieve the optimal value of $m$ here, but the argument gives a value of $m$ that grows quadratically in $d$). Unfortunately, this result does not directly impact the most interesting case of the defocusing scalar NLS equation

$\displaystyle i \partial_t u + \Delta u = |u|^{p-1} u \ \ \ \ \ (2)$

in which $m = 1$; however it does establish a rigorous *barrier* to any attempt to prove global regularity for the scalar NLS equation, in that such an attempt needs to crucially use some property of the scalar NLS that is not shared by the more general systems in (1). For instance, any approach that is primarily based on the conservation laws of mass, momentum, and energy (which are common to both (1) and (2)) will not be sufficient to establish global regularity of supercritical defocusing scalar NLS.

The method of proof in this paper is broadly similar to that in the previous paper for NLW, but with a number of additional technical complications. Both proofs begin by reducing matters to constructing a discretely self-similar solution. In the case of NLW, this solution lived on a forward light cone and obeyed a self-similarity

The ability to restrict to a light cone arose from the finite speed of propagation properties of NLW. For NLS, the solution will instead live on the domain

and obey a parabolic self-similarity

and solve the homogeneous version of (1). (The inhomogeneity emerges when one truncates the self-similar solution so that the initial data is compactly supported in space.) A key technical point is that the solution has to be smooth everywhere in this domain, including its boundary. This unfortunately rules out many of the existing constructions of self-similar solutions, which typically have some sort of singularity at the spatial origin.

The remaining steps of the argument can broadly be described as quantifier elimination: one systematically eliminates each of the degrees of freedom of the problem in turn by locating the necessary and sufficient conditions required of the remaining degrees of freedom in order for the constraints of a particular degree of freedom to be satisfiable. The first such degree of freedom to eliminate is the potential function $V$. The task here is to determine what constraints must exist on a putative solution $u$ in order for there to exist a (positive, homogeneous, smooth away from origin) potential $V$ obeying the homogeneous NLS equation

Firstly, the requirement that $V$ be homogeneous implies the Euler identity

$\displaystyle \langle u, (\nabla V)(u) \rangle = \deg(V)\, V(u)$

(where $\langle,\rangle$ denotes the standard real inner product on $\mathbb{C}^m \equiv \mathbb{R}^{2m}$), while the requirement that $V$ be phase invariant similarly yields the variant identity

$\displaystyle \langle iu, (\nabla V)(u) \rangle = 0,$

so if one defines the *potential energy field* $v := V(u)$, we obtain from the chain rule a corresponding system of equations on $u$ and $v$.

Conversely, it turns out (roughly speaking) that if one can locate fields $u$ and $v$ obeying the above equations (as well as some other technical regularity and non-degeneracy conditions), then one can find a potential $V$ with all the required properties. The first of these equations can be thought of as a definition of the potential energy field $v$, and the other three equations are basically disguised versions of the conservation laws of mass, energy, and momentum respectively. The construction of $V$ relies on a classical extension theorem of Seeley that is a relative of the Whitney extension theorem.

Now that the potential is eliminated, the next degree of freedom to eliminate is the solution field $u$. One can observe that the above equations involving $u$ and $v$ can be expressed instead in terms of $v$ and the *Gram-type matrix* $G$ of $u$, which is a matrix consisting of the inner products $\langle D_1 u, D_2 u \rangle$ as $D_1, D_2$ range amongst a certain finite list of differential operators

To eliminate $u$, one thus needs to answer the question of what properties are required of a matrix for it to be the Gram-type matrix $G$ of some field $u$. Amongst some obvious necessary conditions are that $G$ needs to be symmetric and positive semi-definite; there are also additional constraints coming from identities such as

and

Ideally one would like a theorem that asserts (for $m$ large enough) that as long as $G$ obeys all of the "obvious" constraints, then there exists a suitably non-degenerate map $u$ with $G$ as its Gram-type matrix. In the case of NLW, the analogous claim was basically a consequence of the Nash embedding theorem (which can be viewed as a theorem about the solvability of the system of equations $\langle \partial_i u, \partial_j u \rangle = g_{ij}$ for a given positive definite symmetric collection of fields $g_{ij}$). However, the presence of the complex structure in the NLS case poses some significant technical challenges (note for instance that the naive complex version of the Nash embedding theorem is false, due to obstructions such as Liouville's theorem that prevent a compact complex manifold from being embedded holomorphically into $\mathbb{C}^m$). Nevertheless, by adapting the *proof* of the Nash embedding theorem (in particular, the simplified proof of Gunther that avoids the need to use the Nash-Moser iteration scheme) we were able to obtain a partial complex analogue of the Nash embedding theorem that sufficed for our application; it required an artificial additional "curl-free" hypothesis on the Gram-type matrix $G$, but fortunately this hypothesis ends up being automatic in our construction. Also, this version of the Nash embedding theorem is unable to prescribe one component of the Gram-type matrix, but fortunately this component is not used in any of the conservation laws and so its loss does not cause any difficulty.

After applying the above-mentioned Nash embedding theorem, the task is now to locate a matrix $G$ obeying all the hypotheses of that theorem, as well as the conservation laws for mass, momentum, and energy (after defining the potential energy field $v$ in terms of $G$). This is quite a lot of fields and constraints, but one can cut down significantly on the degrees of freedom by requiring that $G$ is spherically symmetric (in a tensorial sense) and also continuously self-similar (not just discretely self-similar). Note that this hypothesis is weaker than the assertion that the original field $u$ is spherically symmetric and continuously self-similar; indeed we do not know if non-trivial solutions of this type actually exist. These symmetry hypotheses reduce the number of independent components of the matrix $G$ to just six, which now take as their domain a space of much lower dimension

One now has to construct these six fields, together with a potential energy field , that obey a number of constraints, notably some positive definiteness constraints as well as the aforementioned conservation laws for mass, momentum, and energy.

The first of these fields arises only in the equation for the potential (coming from Euler's identity) and can easily be eliminated. Similarly, a second field makes an appearance only in the current of the energy conservation law, and so can also be easily eliminated so long as the total energy is conserved. But in the energy-supercritical case the total energy is infinite, and so it is relatively easy to eliminate this field from the problem also. This leaves us with the task of constructing just five fields obeying a number of positivity conditions, symmetry conditions, regularity conditions, and conservation laws for mass and momentum.

The potential field can effectively be absorbed into the angular stress field (after placing an appropriate counterbalancing term in the radial stress field so as not to disrupt the conservation laws), so we can also eliminate this field. The angular stress field is then only constrained through the momentum conservation law and a requirement of positivity; one can then eliminate this field by converting the momentum conservation law from an equality to an inequality. Finally, the radial stress field is also only constrained through a positive definiteness constraint and the momentum conservation inequality, so it can also be eliminated from the problem after some further modification of the momentum conservation inequality.

The task then reduces to locating just two fields that obey a mass conservation law

together with an additional inequality that is the remnant of the momentum conservation law. One can solve for the mass conservation law in terms of a single scalar field using the ansatz

so the problem has finally been simplified to the task of locating a single scalar field with some scaling and homogeneity properties that obeys a certain differential inequality relating to momentum conservation. This turns out to be possible by explicitly writing down a specific scalar field using some asymptotic parameters and cutoff functions.

## Notes on the Nash embedding theorem

11 May, 2016 in expository, math.AP, math.DG, math.MG | Tags: Nash embedding theorem, Riemannian geometry, Whitney embedding theorem | by Terence Tao | 29 comments

Throughout this post we shall always work in the smooth category, thus all manifolds, maps, coordinate charts, and functions are assumed to be smooth unless explicitly stated otherwise.

A (real) manifold can be defined in at least two ways. On one hand, one can define the manifold *extrinsically*, as a subset of some standard space such as a Euclidean space $\mathbb{R}^D$. On the other hand, one can define the manifold *intrinsically*, as a topological space equipped with an atlas of coordinate charts. The fundamental *embedding theorems* show that, under reasonable assumptions, the intrinsic and extrinsic approaches give the same classes of manifolds (up to isomorphism in various categories). For instance, we have the following special case of the Whitney embedding theorem:

Theorem 1 (Whitney embedding theorem) Let $M$ be a compact manifold. Then there exists an embedding from $M$ to a Euclidean space $\mathbb{R}^D$.

In fact, if $M$ is $d$-dimensional, one can take $D$ equal to $2d$, which is often best possible (easy examples include the circle $S^1$, which embeds into $\mathbb{R}^2$ but not $\mathbb{R}^1$, or the Klein bottle, which embeds into $\mathbb{R}^4$ but not $\mathbb{R}^3$). One can also relax the compactness hypothesis on $M$ to second countability, but we will not pursue this extension here. We give a "cheap" proof of this theorem below the fold which allows one to take $D$ equal to $2d+1$.
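To spell out the simplest instance of this numerology (my own elaboration of the parenthetical example):

```latex
% The circle S^1 is 1-dimensional, and embeds into R^2 = R^{2d} via
\[
\theta \;\mapsto\; (\cos\theta, \sin\theta),
\]
% but not into R^1: a continuous injection of the compact connected circle
% into R would be a homeomorphism onto a compact interval, and removing an
% interior point disconnects the interval but not the circle.
```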

A significant strengthening of the Whitney embedding theorem is (a special case of) the Nash embedding theorem:

Theorem 2 (Nash embedding theorem) Let $(M,g)$ be a compact Riemannian manifold. Then there exists an isometric embedding from $M$ to a Euclidean space $\mathbb{R}^D$.

In order to obtain the isometric embedding, the dimension $D$ has to be a bit larger than what is needed for the Whitney embedding theorem; in this article of Gunther the bound

$\displaystyle D = \max\left( \frac{d(d+3)}{2} + 5, \frac{d(d+5)}{2} \right)$

is attained, which I believe is still the record for large $d$. (In the converse direction, one cannot do better than $D = \frac{d(d+1)}{2}$, basically because this is the number of degrees of freedom in the Riemannian metric $g$.) Nash's original proof of this theorem used what is now known as the Nash-Moser inverse function theorem, but a subsequent simplification of Gunther allowed one to proceed using just the ordinary inverse function theorem (in Banach spaces).

I recently had the need to invoke the Nash embedding theorem to establish a blowup result for a nonlinear wave equation, which motivated me to go through the proof of the theorem more carefully. Below the fold I give a proof of the theorem that does not attempt to give an optimal value of $D$, but which hopefully isolates the main ideas of the argument (as simplified by Gunther). One advantage of not optimising in $D$ is that it allows one to freely exploit the very useful tool of *pairing* together two maps $u_1: M \to \mathbb{R}^{D_1}$, $u_2: M \to \mathbb{R}^{D_2}$ to form a combined map $(u_1, u_2): M \to \mathbb{R}^{D_1 + D_2}$ that can be closer to an embedding or an isometric embedding than either of the original maps. This lets one perform a "divide and conquer" strategy, in which one first starts with the simpler problem of constructing some "partial" embeddings of $M$ and then pairs them together to form a "better" embedding.
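The reason pairing interacts well with isometric embedding can be recorded metrically as follows (a standard identity, in notation of my own choosing):

```latex
% If u_1 : M -> R^{D_1} and u_2 : M -> R^{D_2}, the paired map
% (u_1, u_2) : M -> R^{D_1 + D_2} pulls back the Euclidean metric additively:
\[
(u_1, u_2)^*\, e_{\mathbb{R}^{D_1 + D_2}}
  \;=\; u_1^*\, e_{\mathbb{R}^{D_1}} + u_2^*\, e_{\mathbb{R}^{D_2}}.
\]
% Thus if u_1 attains some piece g_1 of the target metric g and u_2 attains
% the remainder g - g_1, the pair (u_1, u_2) is isometric for g; this is what
% enables the "divide and conquer" strategy of combining partial solutions.
```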

In preparing these notes, I found the articles of Deane Yang and of Siyuan Lu to be helpful.

## Finite time blowup for a supercritical defocusing nonlinear wave system

27 February, 2016 in math.AP, paper | Tags: finite time blowup, Nash embedding theorem, nonlinear wave equations | by Terence Tao | 23 comments

I’ve just uploaded to the arXiv my paper Finite time blowup for a supercritical defocusing nonlinear wave system, submitted to Analysis and PDE. This paper was inspired by a question asked of me by Sergiu Klainerman recently, regarding whether there were any analogues of my blowup example for Navier-Stokes type equations in the setting of nonlinear wave equations.

Recall that the *defocusing nonlinear wave (NLW) equation* reads

$\displaystyle \Box u = |u|^{p-1} u \ \ \ \ \ (1)$

where $u: \mathbb{R}^{1+d} \to \mathbb{R}$ is the unknown scalar field, $\Box = -\partial_t^2 + \Delta$ is the d'Alembertian operator, and $p > 1$ is an exponent. We can generalise this equation to the *defocusing nonlinear wave system*

$\displaystyle \Box u = (\nabla V)(u) \ \ \ \ \ (2)$

where $u: \mathbb{R}^{1+d} \to \mathbb{R}^m$ is now a system of $m$ scalar fields, and $V: \mathbb{R}^m \to \mathbb{R}$ is a potential which is homogeneous of degree $p+1$ and strictly positive away from the origin; the scalar equation corresponds to the case $m = 1$, $V(v) = \frac{1}{p+1} |v|^{p+1}$. We will be interested in smooth solutions to (2). It is only natural to restrict to the smooth category when the potential $V$ is also smooth; unfortunately, if one requires $V$ to be homogeneous all the way down to the origin, then $V$ cannot be smooth unless it is identically zero or $p$ is an odd integer. This is too restrictive for us, so we will only require that $V$ be homogeneous away from the origin (e.g. outside the unit ball). In any event it is the behaviour of $V(v)$ for large $v$ which will be decisive in understanding regularity or blowup for the equation (2).

Formally, solutions to the equation (2) enjoy a conserved energy
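The displayed energy functional did not survive formatting; it should take the standard form (my reconstruction, with $V$ the potential of the system):

```latex
\[
E[u](t) \;=\; \int_{\mathbb{R}^d} \tfrac{1}{2} |\partial_t u(t,x)|^2
  + \tfrac{1}{2} |\nabla u(t,x)|^2 + V(u(t,x)) \, dx,
\]
% formally constant in time: differentiating under the integral sign and
% substituting u_{tt} = \Delta u - (\nabla V)(u) leaves only the term
% \int \nabla \cdot ( \partial_t u \, \nabla u ) \, dx, a total spatial
% divergence, which integrates to zero.
```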

Using this conserved energy, it is possible to establish global regularity for the Cauchy problem (2) in the *energy-subcritical* case when $d \leq 2$, or when $d \geq 3$ and $p < 1 + \frac{4}{d-2}$. This means that for any smooth initial position $u_0$ and initial velocity $u_1$, there exists a (unique) smooth global solution $u$ to the equation (2) with $u(0) = u_0$ and $\partial_t u(0) = u_1$. These classical global regularity results (essentially due to Jörgens) were famously extended to the *energy-critical* case when $d \geq 3$ and $p = 1 + \frac{4}{d-2}$ by Grillakis, Struwe, and Shatah-Struwe (though for various technical reasons, the global regularity component of these results was limited to a range of low dimensions). A key tool used in the energy-critical theory is the *Morawetz estimate*
$\displaystyle \int_0^T \int_{{\bf R}^d} \frac{|u(t,x)|^{p+1}}{|x|}\ dx\, dt \lesssim E[u]$

which can be proven by manipulating the properties of the stress-energy tensor

$\displaystyle T_{\alpha \beta} := \langle \partial_\alpha u, \partial_\beta u \rangle - \frac{1}{2} \eta_{\alpha \beta} \left( \langle \partial^\gamma u, \partial_\gamma u \rangle + 2 F(u) \right)$

(with the usual summation conventions involving the Minkowski metric $\eta_{\alpha \beta}\, dx^\alpha dx^\beta = -dt^2 + |dx|^2$) and in particular exploiting the divergence-free nature $\partial^\alpha T_{\alpha \beta} = 0$ of this tensor. See for instance the text of Shatah-Struwe, or my own PDE book, for more details. The energy-critical regularity results have also been extended to slightly supercritical settings in which the potential grows by a logarithmic factor or so faster than the critical rate; see the results of myself and of Roy.

This leaves the question of global regularity for the *energy supercritical* case when $d \geq 3$ and $p > 1 + \frac{4}{d-2}$. On the one hand, global smooth solutions are known for small data (if $F$ vanishes to sufficiently high order at the origin, see e.g. the work of Lindblad and Sogge), and global weak solutions for large data were constructed long ago by Segal. On the other hand, the solution map, if it exists, is known to be extremely unstable, particularly at high frequencies; see for instance this paper of Lebeau, this paper of Christ, Colliander, and myself, this paper of Brenner and Kumlin, or this paper of Ibrahim, Majdoub, and Masmoudi for various formulations of this instability. In the case of the focusing NLW $-\partial_{tt} u + \Delta u = -|u|^{p-1} u$, one can easily create solutions that blow up in finite time by ODE constructions, for instance one can take $u(t,x) = c (T-t)^{-\frac{2}{p-1}}$ with $c := \left( \frac{2(p+1)}{(p-1)^2} \right)^{\frac{1}{p-1}}$, which blows up as $t$ approaches $T$. However the situation in the defocusing supercritical case is less clear. The strongest positive results are of Kenig-Merle and Killip-Visan, which show (under some additional technical hypotheses) that global regularity for such equations holds under the additional assumption that the critical Sobolev norm of the solution stays bounded. Roughly speaking, this shows that "Type II blowup" cannot occur for (2).
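The ODE blowup ansatz above can be verified symbolically; the sketch below takes the illustrative choice $p = 3$ (so that $c = \sqrt{2}$) and checks that $u(t) = c(T-t)^{-2/(p-1)}$ solves the spatially independent focusing equation $u_{tt} = |u|^{p-1} u$:

```python
import sympy as sp

# ODE blowup for the focusing NLW with no x-dependence: u_tt = u^p.
# Illustrative choice p = 3, where c = (2(p+1)/(p-1)^2)^{1/(p-1)} = sqrt(2).
t, T = sp.symbols('t T', positive=True)
p = 3
c = sp.sqrt(sp.Rational(2 * (p + 1), (p - 1)**2))
u = c * (T - t)**sp.Rational(-2, p - 1)

# Residual of the ODE; it vanishes identically, and u blows up as t -> T.
residual = sp.simplify(sp.diff(u, t, 2) - u**p)
```

The same computation with general symbolic $p$ recovers the stated formula for $c$.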

Our main result is that finite time blowup can in fact occur, at least for three-dimensional systems where the number of degrees of freedom is sufficiently large:

Theorem 1. Let $d = 3$, $p > 5$, and $m \geq 40$. Then there exists a smooth potential $F: {\bf R}^m \rightarrow {\bf R}$, positive and homogeneous of degree $p+1$ away from the origin, and a solution to (2) with smooth initial data that develops a singularity in finite time.

The rather large lower bound of $40$ on the number $m$ of degrees of freedom here is primarily due to our use of the Nash embedding theorem (which is the first time I have actually had to use this theorem in an application!). It can certainly be lowered, but unfortunately our methods do not seem to be able to bring $m$ all the way down to $1$, so we do not directly exhibit finite time blowup for the scalar supercritical defocusing NLW. Nevertheless, this result presents a barrier to any attempt to prove global regularity for that equation, in that such an attempt must somehow use a property of the scalar equation which is not available for systems. It is likely that the methods can be adapted to higher dimensions than three, but we take advantage of some special structure to the equations in three dimensions (related to the strong Huygens principle) which does not seem to be available in higher dimensions.

The blowup will in fact be of discrete self-similar type in a backwards light cone, thus $u$ will obey a relation of the form

$\displaystyle u(t,x) = \lambda^{\frac{2}{p-1}} u(\lambda t, \lambda x)$

for some fixed $\lambda > 1$ (the exponent $\frac{2}{p-1}$ is mandated by dimensional analysis considerations). It would be natural to consider *continuously self-similar* solutions (in which the above relation holds for *all* $\lambda > 0$, not just one $\lambda$). And *rough* self-similar solutions have been constructed in the literature by perturbative methods (see this paper of Planchon, or this paper of Ribaud and Youssfi). However, it turns out that continuously self-similar solutions to a defocusing equation have to obey an additional monotonicity formula which causes them to not exist in three spatial dimensions; this argument is given in my paper. So we have to work just with discretely self-similar solutions.
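The dimensional analysis behind the exponent is a one-line computation: applying $u \mapsto \lambda^a u(\lambda t, \lambda x)$ to $\Box u = |u|^{p-1} u$ scales the left-hand side by $\lambda^{a+2}$ and the right-hand side by $\lambda^{ap}$, forcing $a + 2 = ap$. A sympy sketch:

```python
import sympy as sp

# Scaling u -> lam^a u(lam t, lam x): the wave operator gains lam^{a+2},
# the nonlinearity |u|^{p-1} u gains lam^{a p}.  Scale invariance of
# Box u = |u|^{p-1} u forces a + 2 = a p, i.e. a = 2/(p-1).
a, p = sp.symbols('a p')
sol = sp.solve(sp.Eq(a + 2, a * p), a)
```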

Because of the discrete self-similarity, the finite time blowup solution will be “locally Type II” in the sense that scale-invariant norms inside the backwards light cone stay bounded as one approaches the singularity. But it will not be “globally Type II” in that scale-invariant norms stay bounded outside the light cone as well; indeed energy will leak from the light cone at every scale. This is consistent with the results of Kenig-Merle and Killip-Visan which preclude “globally Type II” blowup solutions to these equations in many cases.

We now sketch the arguments used to prove this theorem. Usually when studying the NLW, we think of the potential $F$ (and the initial data $u_0, u_1$) as being given in advance, and then try to solve for $u$ as an unknown field. However, in this problem we have the freedom to select $F$. So we can look at this problem from a "backwards" direction: we first choose the field $u$, and *then* fit the potential $F$ (and the initial data) to match that field.

Now, one cannot write down a completely arbitrary field $u$ and hope to find a potential $F$ obeying (2), as there are some constraints coming from the homogeneity of $F$. Namely, from the Euler identity

$\displaystyle \langle u, (\nabla F)(u) \rangle = (p+1) F(u)$

we see that $F(u)$ can be recovered from (2) by the formula

$\displaystyle F(u) = \frac{1}{p+1} \langle u, -\partial_{tt} u + \Delta u \rangle \ \ \ \ \ (3)$

so the defocusing nature of $F$ (positivity away from the origin) imposes a constraint

$\displaystyle \langle u, -\partial_{tt} u + \Delta u \rangle > 0.$
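The Euler identity is simply the statement that a degree-$(p+1)$ homogeneous potential obeys $\langle u, (\nabla F)(u) \rangle = (p+1) F(u)$; here is a quick symbolic check with the illustrative radial choice $F(u) = |u|^{p+1}$ in $m = 2$ components and $p = 3$:

```python
import sympy as sp

# Euler identity for a homogeneous potential of degree p + 1, illustrated
# with m = 2, p = 3, and F(u) = |u|^{p+1} = (u1^2 + u2^2)^2.
u1, u2 = sp.symbols('u1 u2')
p = 3
F = (u1**2 + u2**2)**sp.Rational(p + 1, 2)

# <u, grad F(u)> - (p+1) F(u) should vanish identically.
euler = sp.simplify(u1 * sp.diff(F, u1) + u2 * sp.diff(F, u2) - (p + 1) * F)
```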
Furthermore, taking a derivative of (3) we obtain another constraining equation

$\displaystyle \langle -\partial_{tt} u + \Delta u, \partial_\beta u \rangle = \frac{1}{p+1} \partial_\beta \langle u, -\partial_{tt} u + \Delta u \rangle$

that does not explicitly involve the potential $F$. Actually, one can write this equation in the more familiar form

$\displaystyle \partial^\alpha T_{\alpha \beta} = 0$

where $T_{\alpha \beta}$ is the stress-energy tensor

$\displaystyle T_{\alpha \beta} := \langle \partial_\alpha u, \partial_\beta u \rangle - \frac{1}{2} \eta_{\alpha \beta} \left( \langle \partial^\gamma u, \partial_\gamma u \rangle + \frac{2}{p+1} \langle u, -\partial_{tt} u + \Delta u \rangle \right),$

now written in a manner that does not explicitly involve $F$.

With this reformulation, this suggests a strategy for locating $u$: first one selects a stress-energy tensor $T_{\alpha \beta}$ that is divergence-free and obeys suitable positive definiteness and self-similarity properties, and then locates a self-similar map $u$ from the backwards light cone to ${\bf R}^m$ that has that stress-energy tensor (one also needs the map $u$ (or more precisely the direction component $u/|u|$ of that map) to be injective up to the discrete self-similarity, in order to define the potential $F$ consistently). If the stress-energy tensor was replaced by the simpler "energy tensor"

$\displaystyle g_{\alpha \beta} := \langle \partial_\alpha u, \partial_\beta u \rangle$
then the question of constructing an (injective) map $u$ with the specified energy tensor is precisely the isometric embedding problem that was famously solved by Nash (viewing $g_{\alpha \beta}$ as a Riemannian metric on the domain of $u$, which in this case is a backwards light cone quotiented by a discrete self-similarity to make it compact). It turns out that one can adapt the Nash embedding theorem to also work with the stress-energy tensor as well (as long as one also specifies the mass density, and as long as a certain positive definiteness property, related to the positive semi-definiteness of Gram matrices, is obeyed). Here is where the dimension shows up:
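The positive semi-definiteness alluded to here rests on the elementary fact that a Gram matrix $G_{ij} = \langle v_i, v_j \rangle$ is always positive semi-definite, since $G = V V^T$ gives $x^T G x = |V^T x|^2 \geq 0$. A quick numeric illustration (the choice of six random vectors in ${\bf R}^{40}$ is purely illustrative):

```python
import numpy as np

# The Gram matrix G_ij = <v_i, v_j> of any collection of vectors is
# positive semi-definite: G = V V^T, so x^T G x = |V^T x|^2 >= 0.
rng = np.random.default_rng(0)
V = rng.standard_normal((6, 40))   # six illustrative vectors in R^40
G = V @ V.T

# All eigenvalues should be nonnegative (up to floating point error).
min_eig = np.linalg.eigvalsh(G).min()
```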

Proposition 2. Let $M$ be a smooth compact Riemannian manifold, and let $m$ be sufficiently large depending on the dimension of $M$. Then $M$ smoothly isometrically embeds into a round sphere in ${\bf R}^m$.

*Proof:* The Nash embedding theorem (in the form given in this ICM lecture of Gunther) shows that $M$ can be smoothly isometrically embedded into a Euclidean space ${\bf R}^N$, and thus into a box $[0,R]^N$ for some large $N$ and $R$. Using an irrational slope, the interval $[0,R]$ can be smoothly isometrically embedded into the $2$-torus, and so $[0,R]^N$ and hence $M$ can be smoothly embedded in the $2N$-torus $T^{2N}$. But from Pythagoras' theorem, $T^{2N}$ (a product of $2N$ circles) can be identified with a subset of a round sphere in ${\bf R}^{4N}$, and hence with a subset of a round sphere in ${\bf R}^m$ for any $m \geq 4N$, and the claim follows.
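The Pythagoras step can be illustrated numerically: a product of circles of radii $r_1, \dots, r_N$, embedded in ${\bf R}^{2N}$ as a flat torus, lies on the round sphere of radius $(r_1^2 + \cdots + r_N^2)^{1/2}$. A quick check with random radii and sample points:

```python
import numpy as np

# A product of N circles of radii r_1,...,r_N, embedded in R^{2N} as
# (r_i cos(theta_i), r_i sin(theta_i))_{i=1..N}, lies on the round sphere
# of radius sqrt(r_1^2 + ... + r_N^2), by Pythagoras.
rng = np.random.default_rng(1)
radii = rng.uniform(0.5, 2.0, size=5)
angles = rng.uniform(0.0, 2.0 * np.pi, size=(1000, 5))

pts = np.concatenate(
    [np.stack([r * np.cos(angles[:, i]), r * np.sin(angles[:, i])], axis=1)
     for i, r in enumerate(radii)],
    axis=1,
)  # 1000 sample points of the torus in R^10
sphere_radius = np.sqrt(np.sum(radii**2))
max_dev = np.abs(np.linalg.norm(pts, axis=1) - sphere_radius).max()
```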

One can presumably improve upon the bound by being more efficient with the embeddings (e.g. by modifying the proof of Nash embedding to embed directly into a round sphere), but I did not try to optimise the bound here.

The remaining task is to construct the stress-energy tensor $T_{\alpha \beta}$. One can reduce to tensors that are invariant with respect to rotations around the spatial origin, but this still leaves a fair amount of degrees of freedom (it turns out that there are four scalar fields that need to be specified, which are given explicit names in my paper). However a small miracle occurs in three spatial dimensions, in that the divergence-free condition involves only two of the four degrees of freedom (or three out of four, depending on whether one considers a function that is even or odd to only be half a degree of freedom). This is easiest to illustrate with the scalar NLW (1). Assuming spherical symmetry, this equation becomes

$\displaystyle -\partial_{tt} u + \partial_{rr} u + \frac{2}{r} \partial_r u = |u|^{p-1} u.$
Making the substitution $v := ru$, we can eliminate the lower order term $\frac{2}{r} \partial_r u$ completely to obtain

$\displaystyle -\partial_{tt} v + \partial_{rr} v = \frac{|v|^{p-1} v}{r^{p-1}}.$
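The disappearance of the lower order term reflects the classical identity $\Delta u = \frac{1}{r} \partial_{rr}(ru)$ for the radial Laplacian in three dimensions; a symbolic check:

```python
import sympy as sp

# With u = v/r, the radial 3D wave operator loses its first-order term:
# u_rr + (2/r) u_r = (1/r) v_rr, so Box u = (1/r)(-v_tt + v_rr).
t, r = sp.symbols('t r', positive=True)
v = sp.Function('v')(t, r)
u = v / r

wave_u = -sp.diff(u, t, 2) + sp.diff(u, r, 2) + (2 / r) * sp.diff(u, r)
wave_v = (-sp.diff(v, t, 2) + sp.diff(v, r, 2)) / r
identity = sp.simplify(wave_u - wave_v)
```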
(This can be compared with the situation in higher dimensions, in which an undesirable zeroth order term shows up.) In particular, if one introduces the null energy density

$\displaystyle \rho := \frac{1}{2} \left( \partial_t v + \partial_r v \right)^2$

and the potential energy density

$\displaystyle V := \frac{|v|^{p+1}}{(p+1) r^{p-1}}$

then one can verify the equation

$\displaystyle (\partial_t - \partial_r) \rho + (\partial_t + \partial_r) V + \frac{p-1}{r} V = 0,$

which can be viewed as a transport equation for $\rho$ with forcing term depending on $V$ (or vice versa), and is thus quite easy to solve explicitly by choosing one of these fields and then solving for the other. As it turns out, once one is in the supercritical regime $p > 5$, one can solve this equation while giving $\rho$ and $V$ the right homogeneity (they have to be homogeneous of order $-\frac{4}{p-1}$, which is greater than $-1$ in the supercritical case) and positivity properties, and from this it is possible to prescribe all the other fields one needs to satisfy the conclusions of the main theorem. (It turns out that $\rho$ and $V$ will be concentrated near the boundary of the light cone, so this is where the solution will concentrate also.)
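Since this identity is the key algebraic step, it is worth verifying symbolically. The sketch below takes the null energy density $\rho := \frac{1}{2}(\partial_t v + \partial_r v)^2$ and potential energy density $V := \frac{|v|^{p+1}}{(p+1) r^{p-1}}$ for the reduced equation $-\partial_{tt} v + \partial_{rr} v = |v|^{p-1} v / r^{p-1}$, with the illustrative odd exponent $p = 7$ (so that $|v|^{p-1} v = v^p$), eliminates $\partial_{tt} v$ using the equation, and confirms that the residual vanishes:

```python
import sympy as sp

# Transport identity for the reduced radial NLW, illustrated with p = 7.
t, r = sp.symbols('t r', positive=True)
v = sp.Function('v')(t, r)
p = 7

rho = sp.Rational(1, 2) * (sp.diff(v, t) + sp.diff(v, r))**2   # null energy density
V = v**(p + 1) / ((p + 1) * r**(p - 1))                        # potential energy density

lhs = (sp.diff(rho, t) - sp.diff(rho, r)) \
    + (sp.diff(V, t) + sp.diff(V, r)) + (p - 1) / r * V
# Impose the equation -v_tt + v_rr = v^p / r^{p-1} by eliminating v_tt.
lhs = lhs.subs(sp.Derivative(v, (t, 2)),
               sp.Derivative(v, (r, 2)) - v**p / r**(p - 1))
residual = sp.simplify(lhs)
```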
