
I’ve just uploaded to the arXiv my paper “Localisation and compactness properties of the Navier-Stokes global regularity problem“, submitted to Analysis and PDE. This paper concerns the global regularity problem for the Navier-Stokes system of equations

\displaystyle  \partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p + f \ \ \ \ \ (1)

\displaystyle  \nabla \cdot u = 0 \ \ \ \ \ (2)

\displaystyle  u(0,\cdot) = u_0 \ \ \ \ \ (3)

in three dimensions. Thus, we specify initial data {(u_0,f,T)}, where {0 < T < \infty} is a time, {u_0: {\bf R}^3 \rightarrow {\bf R}^3} is the initial velocity field (which, in order to be compatible with (2), (3), is required to be divergence-free), {f: [0,T] \times {\bf R}^3 \rightarrow {\bf R}^3} is the forcing term, and then seek to extend this initial data to a solution {(u,p,u_0,f,T)} with this data, where the velocity field {u: [0,T] \times {\bf R}^3 \rightarrow {\bf R}^3} and pressure term {p: [0,T] \times {\bf R}^3 \rightarrow {\bf R}} are the unknown fields.

Roughly speaking, the global regularity problem asserts that for every smooth set of initial data {(u_0,f,T)}, there exists a smooth solution {(u,p,u_0,f,T)} to the Navier-Stokes equation with this data. However, this is not a good formulation of the problem because it does not exclude the possibility that one or more of the fields {u_0, f, u, p} grows too fast at spatial infinity. This issue is already evident for the much simpler heat equation

\displaystyle  \partial_t u = \Delta u

\displaystyle  u(0,\cdot) = u_0.

As long as one has some mild conditions at infinity on the smooth initial data {u_0: {\bf R}^3 \rightarrow {\bf R}} (e.g. polynomial growth at spatial infinity), one can solve this equation using the fundamental solution of the heat equation:

\displaystyle  u(t,x) = \frac{1}{(4\pi t)^{3/2}} \int_{{\bf R}^3} u_0(y) e^{-|x-y|^2/4t}\ dy.
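(As a quick sanity check, and as a formal computation valid under such growth conditions (which allow one to differentiate under the integral sign), one can verify that the Gaussian kernel here is annihilated by the heat operator:

\displaystyle  (\partial_t - \Delta_x) \left[ \frac{1}{(4\pi t)^{3/2}} e^{-|x-y|^2/4t} \right] = \left[ \left( -\frac{3}{2t} + \frac{|x-y|^2}{4t^2} \right) - \left( -\frac{3}{2t} + \frac{|x-y|^2}{4t^2} \right) \right] \frac{e^{-|x-y|^2/4t}}{(4\pi t)^{3/2}} = 0,

while the kernel converges to the Dirac mass at {y} as {t \rightarrow 0^+}, so that the initial condition {u(0,\cdot) = u_0} is attained in the limit.)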

If furthermore {u} is a tempered distribution, one can use Fourier-analytic methods to show that this is the unique solution to the heat equation with this data. But once one allows sufficiently rapid growth at spatial infinity, existence and uniqueness can break down. Consider for instance the backwards heat kernel

\displaystyle  u(t,x) = \frac{1}{(4\pi(T-t))^{3/2}} e^{|x|^2/4(T-t)}

for some {T>0}, which is smooth (albeit rapidly growing) at time zero, and is a smooth solution to the heat equation for {0 \leq t < T}, but develops a dramatic singularity at time {t=T}. A famous example of Tychonoff from 1935, based on a power series construction, shows that uniqueness for the heat equation can also fail once growth conditions are removed. An explicit example of non-uniqueness for the heat equation is given by the contour integral

\displaystyle  u(t,x_1,x_2,x_3) = \int_\gamma \exp(e^{-\pi i/4} x_1 z + e^{5\pi i/8} z^{3/2} - itz^2)\ dz

where {\gamma} is the {L}-shaped contour consisting of the positive real axis and the upper imaginary axis, with {z^{3/2}} being interpreted with the standard branch (with cut on the negative axis). One can show by contour integration that this function solves the heat equation and is smooth (but rapidly growing at infinity), and vanishes for {t<0}, but is not identically zero for {t>0}.
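(That these examples solve the heat equation can be verified by a formal computation, assuming enough decay to differentiate under the integral sign and to discard boundary terms. For the backwards heat kernel, writing {s := T-t}, one has

\displaystyle  \partial_t u = \left( \frac{3}{2s} + \frac{|x|^2}{4s^2} \right) u = \Delta u,

while for the contour integral, each differentiation in time brings down a factor of {-iz^2} from the integrand, which matches the factor {(e^{-\pi i/4} z)^2 = -iz^2} produced by two differentiations in {x_1}; the negativity of {\hbox{Re}(e^{5\pi i/8} z^{3/2})} on {\gamma} supplies the decay needed to justify these manipulations.)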

Thus, in order to obtain a meaningful (and physically realistic) problem, one needs to impose some decay (or at least limited growth) hypotheses on the data {u_0,f} and solution {u,p} in addition to smoothness. For the data, one can impose a variety of such hypotheses, including the following:

  • (Finite energy data) One has {\|u_0\|_{L^2_x({\bf R}^3)} < \infty} and {\| f \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^3)} < \infty}.
  • ({H^1} data) One has {\|u_0\|_{H^1_x({\bf R}^3)} < \infty} and {\| f \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} < \infty}.
  • (Schwartz data) One has {\sup_{x \in {\bf R}^3} ||x|^m \nabla_x^k u_0(x)| < \infty} and {\sup_{(t,x) \in [0,T] \times {\bf R}^3} ||x|^m \nabla_x^k \partial_t^l f(t,x)| < \infty} for all {m,k,l \geq 0}.
  • (Periodic data) There is some {0 < L < \infty} such that {u_0(x+Lk) = u_0(x)} and {f(t,x+Lk) = f(t,x)} for all {(t,x) \in [0,T] \times {\bf R}^3} and {k \in {\bf Z}^3}.
  • (Homogeneous data) {f=0}.

Note that smoothness alone does not necessarily imply finite energy, {H^1}, or the Schwartz property. For instance, the (scalar) function {u(x) = \exp( i |x|^{10} ) (1+|x|)^{-2}} is smooth and finite energy, but not in {H^1} or Schwartz. Periodicity is of course incompatible with finite energy, {H^1}, or the Schwartz property, except in the trivial case when the data is identically zero.
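(To see the failure of the {H^1} property in this example, note that the rapidly oscillating phase makes the gradient large: one has {|\nabla u(x)| \geq 10 |x|^9 (1+|x|)^{-2} - 2(1+|x|)^{-3}}, so that

\displaystyle  \int_{{\bf R}^3} |\nabla u|^2\ dx \gtrsim \int_1^\infty r^{14} \cdot r^2\ dr = \infty,

whereas {\int_{{\bf R}^3} |u|^2\ dx = 4\pi \int_0^\infty r^2 (1+r)^{-4}\ dr} is finite.)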

Similarly, one can impose conditions at spatial infinity on the solution, such as the following:

  • (Finite energy solution) One has {\| u \|_{L^\infty_t L^2_x([0,T] \times {\bf R}^3)} < \infty}.
  • ({H^1} solution) One has {\| u \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} < \infty} and {\| u \|_{L^2_t H^2_x([0,T] \times {\bf R}^3)} < \infty}.
  • (Partially periodic solution) There is some {0 < L < \infty} such that {u(t,x+Lk) = u(t,x)} for all {(t,x) \in [0,T] \times {\bf R}^3} and {k \in {\bf Z}^3}.
  • (Fully periodic solution) There is some {0 < L < \infty} such that {u(t,x+Lk) = u(t,x)} and {p(t,x+Lk) = p(t,x)} for all {(t,x) \in [0,T] \times {\bf R}^3} and {k \in {\bf Z}^3}.

(The {L^2_t H^2_x} component of the {H^1} solution concept is imposed for technical reasons, and need not be given too much attention in this discussion.) Note that we do not consider the notion of a Schwartz solution; as we shall see shortly, this is too restrictive a concept of solution to the Navier-Stokes equation.

One can also downgrade the regularity of the solution from smoothness. There are many ways to do so; two such examples include

  • ({H^1} mild solutions) The solution is not required to be smooth, but is {H^1} (in the preceding sense) and solves the equation (1) in the sense that the Duhamel formula

    \displaystyle  u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} (-(u\cdot\nabla) u-\nabla p+f)(t')\ dt'

    holds; see the remark following this list for where this formula comes from.

  • (Leray-Hopf weak solution) The solution {u} is not required to be smooth, but lies in {L^\infty_t L^2_x \cap L^2_t H^1_x}, solves (1) in the sense of distributions (after rewriting the system in divergence form), and obeys an energy inequality.
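(For orientation: the Duhamel formula above is simply the integral form of (1), obtained by treating everything other than the linear heat flow as a source term. Formally, a sufficiently decaying solution of the forced heat equation {\partial_t u - \Delta u = F} with initial data {u_0} is given by the variation-of-constants formula

\displaystyle  u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} F(t')\ dt',

and the mild solution concept applies this with the choice {F := -(u\cdot\nabla) u - \nabla p + f}, which requires far less regularity on {u} than the differential formulation of (1) does.)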

Finally, one can ask for two types of global regularity results on the Navier-Stokes problem: a qualitative regularity result, in which one merely asserts the existence of a smooth solution without any explicit bounds on that solution, and a quantitative regularity result, which provides bounds on the solution in terms of the initial data, e.g. a bound of the form

\displaystyle  \| u \|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)} \leq F( \|u_0\|_{H^1_x({\bf R}^3)} + \|f\|_{L^\infty_t H^1_x([0,T] \times {\bf R}^3)}, T )

for some function {F: {\bf R}^+ \times {\bf R}^+ \rightarrow {\bf R}^+}. One can make a further distinction between local quantitative results, in which {F} is allowed to depend on {T}, and global quantitative results, in which there is no dependence on {T} (the latter is only reasonable though in the homogeneous case, or if {f} has some decay in time).

By combining these various hypotheses and conclusions, we see that one can write down quite a large number of slightly different variants of the global regularity problem. In the official formulation of the regularity problem for the Clay Millennium prize, a positive correct solution to either of the following two problems would be accepted for the prize:

  • Conjecture 1.4 (Qualitative regularity for homogeneous periodic data) If {(u_0,0,T)} is periodic, smooth, and homogeneous, then there exists a smooth partially periodic solution {(u,p,u_0,0,T)} with this data.
  • Conjecture 1.3 (Qualitative regularity for homogeneous Schwartz data) If {(u_0,0,T)} is Schwartz and homogeneous, then there exists a smooth finite energy solution {(u,p,u_0,0,T)} with this data.

(The numbering here corresponds to the numbering in the paper.)

Furthermore, a negative correct solution to either of the following two problems would also be accepted for the prize:

  • Conjecture 1.6 (Qualitative regularity for periodic data) If {(u_0,f,T)} is periodic and smooth, then there exists a smooth partially periodic solution {(u,p,u_0,f,T)} with this data.
  • Conjecture 1.5 (Qualitative regularity for Schwartz data) If {(u_0,f,T)} is Schwartz, then there exists a smooth finite energy solution {(u,p,u_0,f,T)} with this data.

I am not announcing any major progress on these conjectures here. What my paper does study, though, is the question of whether the answer to these conjectures is somehow sensitive to the choice of formulation. For instance:

  1. Note in the periodic formulations of the Clay prize problem that the solution is only required to be partially periodic, rather than fully periodic; thus the pressure has no periodicity hypothesis. One can ask the extent to which the above problems change if one also requires pressure periodicity.
  2. In another direction, one can ask the extent to which quantitative formulations of the Navier-Stokes problem are stronger than their qualitative counterparts; in particular, whether it is possible that each choice of initial data in a certain class leads to a smooth solution, but with no uniform bound on that solution in terms of various natural norms of the data.
  3. Finally, one can ask the extent to which the conjecture depends on the category of data. For instance, could it be that global regularity is true for smooth periodic data but false for Schwartz data? True for Schwartz data but false for smooth {H^1} data? And so forth.

One motivation for the final question (which was posed to me by my colleague, Andrea Bertozzi) is that the Schwartz property on the initial data {u_0} tends to be instantly destroyed by the Navier-Stokes flow. This can be seen by introducing the vorticity {\omega := \nabla \times u}. If {u(t)} is Schwartz, then from Stokes’ theorem we necessarily have vanishing of certain moments of the vorticity, for instance:

\displaystyle  \int_{{\bf R}^3} \omega_1 (x_2^2-x_3^2)\ dx = 0.
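(This particular vanishing can be justified, assuming enough decay to integrate by parts (which the Schwartz hypothesis certainly provides), by writing {\omega_1 = \partial_2 u_3 - \partial_3 u_2} and using the divergence-free condition (2):

\displaystyle  \int_{{\bf R}^3} \omega_1 (x_2^2-x_3^2)\ dx = -2 \int_{{\bf R}^3} (x_2 u_3 + x_3 u_2)\ dx = -2 \int_{{\bf R}^3} u \cdot \nabla(x_2 x_3)\ dx = 2 \int_{{\bf R}^3} x_2 x_3 (\nabla \cdot u)\ dx = 0.)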

On the other hand, some integration by parts using (1) reveals that such moments are usually not preserved by the flow; for instance, one has the law

\displaystyle \partial_t \int_{{\bf R}^3} \omega_1(t,x) (x_2^2-x_3^2)\ dx = -4\int_{{\bf R}^3} u_2(t,x) u_3(t,x)\ dx,

and one can easily concoct examples for which the right-hand side is non-zero at time zero. This suggests that the Schwartz class may be unnecessarily restrictive for Conjecture 1.3 or Conjecture 1.5.
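(For the record, here is the formal computation behind this law, assuming enough spatial decay to integrate by parts and taking {f=0} for simplicity. As in the computation above, one has {\int_{{\bf R}^3} \omega_1 (x_2^2-x_3^2)\ dx = -2\int_{{\bf R}^3} (x_2 u_3 + x_3 u_2)\ dx} for sufficiently decaying {u(t)}, while from (1), (2) one has

\displaystyle  \partial_t \int_{{\bf R}^3} x_2 u_3\ dx = \int_{{\bf R}^3} x_2 \left( - \nabla \cdot (u u_3) + \Delta u_3 - \partial_3 p \right)\ dx = \int_{{\bf R}^3} u_2 u_3\ dx,

since the dissipation and pressure terms integrate to zero against {x_2}; similarly {\partial_t \int_{{\bf R}^3} x_3 u_2\ dx = \int_{{\bf R}^3} u_2 u_3\ dx}, and combining these identities gives the law above. In the forced case the right-hand side acquires an additional term of {-2\int_{{\bf R}^3} (x_2 f_3 + x_3 f_2)\ dx}.)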

My paper arose out of an attempt to address these three questions, and ended up obtaining partial results in all three directions. Roughly speaking, the results that address these three questions are as follows:

  1. (Homogenisation) If one only assumes partial periodicity instead of full periodicity, then the forcing term {f} becomes irrelevant. In particular, Conjecture 1.4 and Conjecture 1.6 are equivalent.
  2. (Concentration compactness) In the {H^1} category (both periodic and nonperiodic, homogeneous or nonhomogeneous), the qualitative and quantitative formulations of the Navier-Stokes global regularity problem are essentially equivalent.
  3. (Localisation) The (inhomogeneous) Navier-Stokes problems in the Schwartz, smooth {H^1}, and finite energy categories are essentially equivalent to each other, and are also implied by the (fully) periodic version of these problems.

The first two of these families of results are relatively routine, drawing on existing methods in the literature; the localisation results though are somewhat more novel, and introduce some new local energy and local enstrophy estimates which may be of independent interest.

Broadly speaking, the moral to draw from these results is that the precise formulation of the Navier-Stokes equation global regularity problem is only of secondary importance; modulo a number of caveats and technicalities, the various formulations are close to being equivalent, and a breakthrough on any one of the formulations is likely to lead (either directly or indirectly) to a comparable breakthrough on any of the others.

This is only a caricature of the actual implications, though. Below is the diagram from the paper indicating the various formulations of the Navier-Stokes equations, and the known implications between them:

[Figure: implication diagram between the various formulations of the Navier-Stokes problem]

The above three streams of results are discussed in more detail below the fold.


One of the key difficulties in performing analysis in infinite-dimensional function spaces, as opposed to finite-dimensional vector spaces, is that the Bolzano-Weierstrass theorem no longer holds: a bounded sequence in an infinite-dimensional function space need not have any convergent subsequences (when viewed using the strong topology). To put it another way, the closed unit ball in an infinite-dimensional function space usually fails to be (sequentially) compact.
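(The standard example: in the Hilbert space \ell^2, an orthonormal sequence e^{(1)}, e^{(2)}, \ldots is certainly bounded, but \|e^{(n)} - e^{(m)}\|_{\ell^2} = \sqrt{2} whenever n \neq m, so no subsequence can be Cauchy, let alone convergent, in the norm topology.)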

As compactness is such a useful property to have in analysis, various tools have been developed over the years to try to salvage some sort of substitute for the compactness property in infinite-dimensional spaces. One of these tools is concentration compactness, which was discussed previously on this blog. This can be viewed as a compromise between weak compactness (which is true in very general circumstances, but is often too weak for applications) and strong compactness (which would be very useful in applications, but is usually false), in which one obtains convergence in an intermediate sense that involves a group of symmetries acting on the function space in question.

Concentration compactness is usually stated and proved in the language of standard analysis: epsilons and deltas, limits and suprema, and so forth. In this post, I wanted to note that one could also state and prove the basic foundations of concentration compactness in the framework of nonstandard analysis, in which one now deals with infinitesimals and ultralimits instead of epsilons and ordinary limits. This is a fairly mild change of viewpoint, but I found it to be informative to view this subject from a slightly different perspective. The nonstandard proofs require a fair amount of general machinery to set up, but conversely, once all the machinery is up and running, the proofs become slightly shorter, and can exploit tools from (standard) infinitary analysis, such as orthogonal projections in Hilbert spaces, or the continuous-pure point decomposition of measures. Because of the substantial amount of setup required, nonstandard proofs tend to have significantly more net complexity than their standard counterparts when it comes to basic results (such as those presented in this post), but the gap between the two narrows when the results become more difficult, and for particularly intricate and deep results it can happen that nonstandard proofs end up being simpler overall than their standard analogues, particularly if the nonstandard proof is able to tap the power of some existing mature body of infinitary mathematics (e.g. ergodic theory, measure theory, Hilbert space theory, or topological group theory) which is difficult to directly access in the standard formulation of the argument.


One of the most important topological concepts in analysis is that of compactness (as discussed for instance in my Companion article on this topic).  There are various flavours of this concept, but let us focus on sequential compactness: a subset E of a topological space X is sequentially compact if every sequence in E has a convergent subsequence whose limit is also in E.  This property allows one to do many things with the set E.  For instance, it allows one to maximise a functional on E:

Proposition 1. (Existence of extremisers)  Let E be a non-empty sequentially compact subset of a topological space X, and let F: E \to {\Bbb R} be a continuous function.  Then the supremum \sup_{x \in E} F(x) is attained at at least one point x_* \in E, thus F(x) \leq F(x_*) for all x \in E.  (In particular, this supremum is finite.)  Similarly for the infimum.

Proof. Let -\infty < L \leq +\infty be the supremum L := \sup_{x \in E} F(x).  By the definition of supremum (and the axiom of (countable) choice), one can find a sequence x^{(n)} in E such that F(x^{(n)}) \to L.  By compactness, we can refine this sequence to a subsequence (which, by abuse of notation, we shall continue to call x^{(n)}) such that x^{(n)} converges to a limit x in E.  Since we still have F(x^{(n)}) \to L, and F is continuous at x, we conclude that F(x)=L, and the claim for the supremum follows.  The claim for the infimum is similar.  \Box

Remark 1. An inspection of the argument shows that one can relax the continuity hypothesis on F somewhat: to attain the supremum, it suffices that F be upper semicontinuous, and to attain the infimum, it suffices that F be lower semicontinuous. \diamond

We thus see that sequential compactness is useful, among other things, for ensuring the existence of extremisers.  In finite-dimensional normed vector spaces (such as {\Bbb R}^n), compact sets are plentiful; indeed, the Heine-Borel theorem asserts that every closed and bounded set is compact.  However, once one moves to infinite-dimensional spaces, such as function spaces, then the Heine-Borel theorem fails quite dramatically; most of the closed and bounded sets one encounters in a topological vector space are non-compact, if one insists on using a reasonably “strong” topology.  This causes a difficulty in (among other things) the calculus of variations, which is often concerned with finding extremisers to a functional F: E \to {\Bbb R} on a subset E of an infinite-dimensional function space X.

In recent decades, mathematicians have found a number of ways to get around this difficulty.  One of them is to weaken the topology to recover compactness, taking advantage of such results as the Banach-Alaoglu theorem (or its sequential counterpart).  Of course, there is a tradeoff: weakening the topology makes compactness easier to attain, but makes the continuity of F harder to establish.  Nevertheless, if F enjoys enough “smoothing” or “cancellation” properties, one can hope to obtain continuity in the weak topology, allowing one to do things such as locate extremisers.  (The phenomenon that cancellation can lead to continuity in the weak topology is sometimes referred to as compensated compactness.)

Another option is to abandon trying to make all sequences have convergent subsequences, and settle for just requiring that extremising sequences have convergent subsequences, as this would still be enough to retain Proposition 1.  Pursuing this line of thought leads to the Palais-Smale condition, which is a substitute for compactness in some calculus of variations situations.

But in many situations, one cannot weaken the topology to the point where the domain E becomes compact, without destroying the continuity (or semi-continuity) of F, though one can often at least find an intermediate topology (or metric) in which F is continuous, but for which E is still not quite compact.  Thus one can find sequences x^{(n)} in E which do not have any subsequences that converge to a constant element x \in E, even in this intermediate metric.  (As we shall see shortly, one major cause of this failure of compactness is the existence of a non-trivial action of a non-compact group G on E; such a group action can cause compensated compactness or the Palais-Smale condition to fail also.)  Because of this, it is a priori conceivable that a continuous function F need not attain its supremum or infimum.

Nevertheless, even though a sequence x^{(n)} does not have any subsequences that converge to a constant x, it may have a subsequence (which we also call x^{(n)}) which converges to some non-constant sequence y^{(n)} (in the sense that the distance d(x^{(n)},y^{(n)}) between the subsequence and the new sequence goes to zero in this intermediate metric), where the approximating sequence y^{(n)} is of a very structured form (e.g. “concentrating” to a point, or “travelling” off to infinity, or a superposition y^{(n)} = \sum_j y^{(n)}_j of several concentrating or travelling profiles of this form).  This weaker form of compactness, in which superpositions of a certain type of profile completely describe all the failures (or defects) of compactness, is known as concentration compactness, and the decomposition x^{(n)} \approx \sum_j y^{(n)}_j of the subsequence is known as the profile decomposition.  In many applications, it is a sufficiently good substitute for compactness that one can still do things like locate extremisers for functionals F –  though one often has to make some additional assumptions on F to compensate for the more complicated nature of the compactness.  This phenomenon was systematically studied by P.L. Lions in the 80s, and found great application in calculus of variations and nonlinear elliptic PDE.  More recently, concentration compactness has been a crucial and powerful tool in the non-perturbative analysis of nonlinear dispersive PDE, in particular being used to locate “minimal energy blowup solutions” or “minimal mass blowup solutions” for such a PDE (analogously to how one can use calculus of variations to find minimal energy solutions to a nonlinear elliptic equation); see for instance this recent survey by Killip and Visan.
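(A basic example of the travelling scenario: in L^2({\Bbb R}), the translates x^{(n)} := \phi(\cdot - n) of a fixed non-zero continuous compactly supported bump function \phi form a bounded sequence with no strongly convergent subsequence, since the translates eventually have disjoint supports and so stay uniformly separated from each other; nevertheless, the sequence is described exactly by the single travelling profile y^{(n)} := \phi(\cdot - n).)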

In typical applications, the concentration compactness phenomenon is exploited in moderately sophisticated function spaces (such as Sobolev spaces or Strichartz spaces), with the failure of traditional compactness being connected to a moderately complicated group G of symmetries (e.g. the group generated by translations and dilations).  Because of this, concentration compactness can appear to be a rather complicated and technical concept when it is first encountered.  In this note, I would like to illustrate concentration compactness in a simple toy setting, namely in the space X = l^1({\Bbb Z}) of absolutely summable sequences, with the uniform (l^\infty) metric playing the role of the intermediate metric, and the translation group {\Bbb Z} playing the role of the symmetry group G.  This toy setting is significantly simpler than any model that one would actually use in practice [for instance, in most applications X is a Hilbert space], but hopefully it serves to illuminate this useful concept in a less technical fashion.
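(To fix ideas, here is a simple example of the phenomenon in this toy setting: the sequence x^{(n)} := \delta_0 + \delta_n, where \delta_m \in l^1({\Bbb Z}) denotes the Kronecker delta at m, is bounded in l^1 (each term has norm 2), but no subsequence converges in the uniform metric to an element of l^1, since any such limit would have to equal the pointwise limit \delta_0, while \|x^{(n)} - \delta_0\|_{l^\infty} = 1 for every n \geq 1.  Nevertheless, x^{(n)} is exactly the superposition of the stationary profile \delta_0 and the travelling profile \delta_0(\cdot - n).)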


I’ve just uploaded to the arXiv the paper “The cubic nonlinear Schrödinger equation in two dimensions with radial data“, joint with Rowan Killip and Monica Visan, and submitted to the Annals of Mathematics. This is a sequel of sorts to my paper with Monica and Xiaoyi Zhang, in which we established global well-posedness and scattering for the defocusing mass-critical nonlinear Schrödinger equation (NLS) iu_t + \Delta u = |u|^{4/d} u in three and higher dimensions d \geq 3 assuming spherically symmetric data. (This is another example of the recently active field of critical dispersive equations, in which both coarse and fine scales are (just barely) nonlinearly active, and propagate at different speeds, leading to significant technical difficulties.)
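(Recall that the terminology “mass-critical” refers to the fact that the natural scaling symmetry of this equation, u(t,x) \mapsto \lambda^{d/2} u(\lambda^2 t, \lambda x), preserves the mass \int_{{\Bbb R}^d} |u(t,x)|^2\ dx of the solution, which is also a conserved quantity of the flow.)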

In this paper we obtain the same result for the defocusing two-dimensional mass-critical NLS iu_t + \Delta u= |u|^2 u, as well as in the focusing case iu_t + \Delta u= -|u|^2 u under the additional assumption that the mass of the initial data is strictly less than the mass of the ground state. (When the mass equals that of the ground state, there is an explicit example, built using the pseudoconformal transformation, which shows that solutions can blow up in finite time.) In fact we can show a slightly stronger statement: for spherically symmetric solutions to the focusing equation with arbitrary mass, the first singularity that forms concentrates at least as much mass as the ground state.


[This lecture is also doubling as this week's "open problem of the week", as it (eventually) discusses the soliton resolution conjecture.]

In this third lecture, I will talk about how the dichotomy between structure and randomness pervades the study of two different types of partial differential equations (PDEs):

  • Parabolic PDE, such as the heat equation u_t = \Delta u, which turn out to play an important role in the modern study of geometric topology; and
  • Hamiltonian PDE, such as the Schrödinger equation u_t = i \Delta u, which are heuristically related (via Liouville’s theorem) to measure-preserving actions of the real line (or time axis) {\Bbb R}, somewhat in analogy to how combinatorial number theory and graph theory were related to measure-preserving actions of {\Bbb Z} and S_\infty respectively, as discussed in the previous lecture.

(In physics, one would also insert some physical constants, such as Planck’s constant \hbar, but for the discussion here it is convenient to normalise away all of these constants.)

