
In the previous set of notes we developed a theory of “strong” solutions to the Navier-Stokes equations. This theory, based around viewing the Navier-Stokes equations as a perturbation of the linear heat equation, has many attractive features: solutions exist locally, are unique, depend continuously on the initial data, have a high degree of regularity, can be continued in time as long as a sufficiently high regularity norm is under control, and tend to enjoy the same sort of conservation laws that classical solutions do. However, it is a major open problem as to whether these solutions can be extended to be (forward) global in time, because the norms that we know how to control globally in time do not have high enough regularity to be useful for continuing the solution. Also, the theory becomes degenerate in the inviscid limit ${\nu \rightarrow 0}$.
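(To briefly recall the perspective of those notes: schematically, and up to sign and normalisation conventions, one rewrites the equation in the Duhamel (or “mild”) form

$\displaystyle u(t) = e^{\nu t \Delta} u_0 - \int_0^t e^{\nu (t-s) \Delta} P \nabla \cdot (u(s) \otimes u(s))\ ds,$

where $P$ denotes the Leray projection onto divergence-free vector fields, and then treats the nonlinear integral term as a perturbation of the linear heat flow $e^{\nu t \Delta} u_0$, to be handled for instance by the contraction mapping theorem.)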

However, it is possible to construct “weak” solutions which lack many of the desirable features of strong solutions (notably, uniqueness, propagation of regularity, and conservation laws) but can often be constructed globally in time even when one is unable to do so for strong solutions. Broadly speaking, one usually constructs weak solutions by some sort of “compactness method”, which can generally be described as follows.

1. Construct a sequence of “approximate solutions” to the desired equation, for instance by developing a well-posedness theory for some “regularised” approximation to the original equation. (This theory often follows similar lines to those in the previous set of notes, for instance using such tools as the contraction mapping theorem to construct the approximate solutions.)
2. Establish some uniform bounds (over appropriate time intervals) on these approximate solutions, even in the limit as an approximation parameter is sent to zero. (Uniformity is key; non-uniform bounds are often easy to obtain if one puts enough “mollification”, “hyper-dissipation”, or “discretisation” in the approximating equation.)
3. Use some sort of “weak compactness” (e.g., the Banach-Alaoglu theorem, the Arzelà-Ascoli theorem, or the Rellich compactness theorem) to extract a subsequence of approximate solutions that converge (in a topology weaker than that associated to the available uniform bounds) to a limit. (Note that there is no reason a priori to expect such limit points to be unique, or to have any regularity properties beyond that implied by the available uniform bounds.)
4. Show that this limit solves the original equation in a suitable weak sense.
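To give a schematic illustration of steps 2 and 3 (a schematic only; the details depend on the specific approximation scheme): if the approximate solutions $u^{(\epsilon)}$ obey a uniform energy-type bound such as

$\displaystyle \sup_{\epsilon} \left( \| u^{(\epsilon)} \|_{L^\infty_t L^2_x} + \| \nabla u^{(\epsilon)} \|_{L^2_t L^2_x} \right) < \infty,$

then the Banach-Alaoglu theorem allows one to pass to a subsequence for which $u^{(\epsilon)}$ converges weak-* in $L^\infty_t L^2_x$ and $\nabla u^{(\epsilon)}$ converges weakly in $L^2_t L^2_x$ to a limit $u$; to pass to the limit in the nonlinear terms in step 4, one typically also needs a locally compact mode of convergence, supplied by results such as the Rellich or Aubin-Lions compactness theorems.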

The quality of these weak solutions is very much determined by the type of uniform bounds one can obtain on the approximate solution; the stronger these bounds are, the more properties one can obtain on these weak solutions. For instance, if the approximate solutions enjoy an energy identity leading to uniform energy bounds, then (by using tools such as Fatou’s lemma) one tends to obtain energy inequalities for the resulting weak solution; but if one somehow is able to obtain uniform bounds in a higher regularity norm than the energy then one can often recover the full energy identity. If the uniform bounds are at the regularity level needed to obtain well-posedness, then one generally expects to upgrade the weak solution to a strong solution. (This phenomenon is often formalised through weak-strong uniqueness theorems, which we will discuss later in these notes.) Thus we see that as far as attacking global regularity is concerned, both the theory of strong solutions and the theory of weak solutions encounter essentially the same obstacle, namely the inability to obtain uniform bounds on (exact or approximate) solutions at high regularities (and at arbitrary times).
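For instance (as a standard illustration, independent of the precise approximation scheme): if each approximate solution $u^{(\epsilon)}$ obeys the energy identity

$\displaystyle \frac{1}{2} \| u^{(\epsilon)}(t) \|_{L^2}^2 + \nu \int_0^t \| \nabla u^{(\epsilon)}(s) \|_{L^2}^2\ ds = \frac{1}{2} \| u^{(\epsilon)}(0) \|_{L^2}^2,$

then the weak convergence of step 3 and the weak lower semicontinuity of norms (or Fatou-type arguments) only yield the energy inequality

$\displaystyle \frac{1}{2} \| u(t) \|_{L^2}^2 + \nu \int_0^t \| \nabla u(s) \|_{L^2}^2\ ds \leq \frac{1}{2} \| u(0) \|_{L^2}^2$

for the limit $u$ (for almost every time $t$), since a portion of the $L^2$ norm can be lost in the weak limit.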

For simplicity, we will focus our discussion in these notes on finite energy weak solutions on ${{\bf R}^d}$. There is a completely analogous theory for periodic weak solutions on ${{\bf R}^d}$ (or equivalently, weak solutions on the torus ${{\bf R}^d/{\bf Z}^d}$), which we will leave to the interested reader.

In recent years, a completely different way to construct weak solutions to the Navier-Stokes or Euler equations has been developed that is not based on the above compactness methods, but instead on techniques of convex integration. These will be discussed in a later set of notes.

One of the most useful concepts for analysis that arise from topology and metric spaces is the concept of compactness; recall that a space ${X}$ is compact if every open cover of ${X}$ has a finite subcover, or equivalently if any collection of closed sets with the finite intersection property (i.e. every finite subcollection of these sets has non-empty intersection) has non-empty intersection. In these notes, we explore how compactness interacts with other key topological concepts: the Hausdorff property, bases and sub-bases, product spaces, and equicontinuity, in particular establishing the useful Tychonoff and Arzelà-Ascoli theorems that give criteria for compactness (or precompactness).
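(The equivalence of these two formulations is a quick application of de Morgan's laws: a family $(F_\alpha)_{\alpha \in A}$ of closed sets satisfies

$\displaystyle \bigcap_{\alpha \in A} F_\alpha = \emptyset \hbox{ if and only if } \bigcup_{\alpha \in A} (X \backslash F_\alpha) = X,$

so collections of closed sets with empty intersection correspond exactly to open covers, and finite subcollections correspond to finite subcovers.)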

Exercise 1 (Basic properties of compact sets)

• Show that any finite set is compact.
• Show that any finite union of compact subsets of a topological space is still compact.
• Show that any image of a compact space under a continuous map is still compact.

Show that these three statements continue to hold if “compact” is replaced by “sequentially compact”.

To progress further in our study of function spaces, we will need to develop the standard theory of metric spaces, and of the closely related theory of topological spaces (i.e. point-set topology).  I will be assuming that students in my class will already have encountered these concepts in an undergraduate topology or real analysis course, but for sake of completeness I will briefly review the basics of both spaces here.

In the previous lecture, we studied the recurrence properties of compact systems, which are systems in which all measurable functions exhibit almost periodicity – they almost return completely to themselves after repeated shifting. Now, we consider the opposite extreme of mixing systems – those in which all measurable functions (of mean zero) exhibit mixing – they become orthogonal to themselves after repeated shifting. (Actually, there are two different types of mixing, strong mixing and weak mixing, depending on whether the orthogonality occurs individually or on the average; it is the latter concept which is of more importance to the task of establishing the Furstenberg recurrence theorem.)
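(One way to phrase the distinction in formulas: for a mean-zero function $f \in L^2(X,\mu)$, strong mixing asks that the correlations $\langle T^n f, f \rangle$ tend to zero as $n \to \infty$, whereas weak mixing asks only for convergence to zero on average, i.e. $\frac{1}{N} \sum_{n=1}^{N} |\langle T^n f, f \rangle| \to 0$ as $N \to \infty$.)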

We shall see that for weakly mixing systems, averages such as $\frac{1}{N} \sum_{n=0}^{N-1} T^n f \ldots T^{(k-1)n} f$ can be computed very explicitly (in fact, this average converges to the constant $(\int_X f\ d\mu)^{k-1}$). More generally, we shall see that weakly mixing components of a system tend to average themselves out and thus become irrelevant when studying many types of ergodic averages. Our main tool here will be the humble Cauchy-Schwarz inequality, and in particular a certain consequence of it, known as the van der Corput lemma.
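For reference, one common Hilbert space formulation of this lemma (there are several essentially equivalent versions in the literature) is the following: if $v_1, v_2, v_3, \ldots$ is a bounded sequence of vectors in a Hilbert space obeying

$\displaystyle \lim_{H \to \infty} \frac{1}{H} \sum_{h=1}^{H} \limsup_{N \to \infty} \left| \frac{1}{N} \sum_{n=1}^{N} \langle v_{n+h}, v_n \rangle \right| = 0,$

then the averages $\frac{1}{N} \sum_{n=1}^{N} v_n$ converge to zero in norm.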

As one application of this theory, we will be able to establish Roth’s theorem (the k=3 case of Szemerédi’s theorem).

I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on compactness and compactification. This is a fairly recent article for the PCM, which is now at the stage where most of the specialised articles have been written and the general articles on topics such as compactness are being finished up. The topic of this article is self-explanatory; it is a brief and non-technical introduction to the incredibly useful concept of compactness in topology, analysis, geometry, and other areas of mathematics, and to the closely related concept of a compactification, which allows one to rigorously take limits of what would otherwise be divergent sequences.

The PCM has an extremely broad scope, covering not just mathematics itself, but the context that mathematics is placed in. To illustrate this, I will mention Michael Harris’s essay for the Companion, “‘Why mathematics?’, you may ask”.

I have just uploaded to the arXiv my paper “A quantitative formulation of the global regularity problem for the periodic Navier-Stokes equation”, submitted to Dynamics of PDE. This is a short note on one formulation of the Clay Millennium prize problem, namely that there exists a global smooth solution to the Navier-Stokes equation on the torus $({\Bbb R}/{\Bbb Z})^3$ given any smooth divergence-free data. (I should emphasise right off the bat that I am not claiming any major breakthrough on this problem, which remains extremely challenging in my opinion.)
This problem is formulated in a qualitative way: the conjecture asserts that the velocity field $u$ stays smooth for all time, but does not ask for a quantitative bound on the smoothness of that field in terms of the smoothness of the initial data. Nevertheless, it turns out that the compactness properties of the periodic Navier-Stokes flow allow one to equate the qualitative claim with a more concrete quantitative one. More precisely, the paper shows that the following three statements are equivalent:

1. (Qualitative regularity conjecture) Given any smooth divergence-free data $u_0: ({\Bbb R}/{\Bbb Z})^3 \to {\Bbb R}^3$, there exists a global smooth solution $u: [0,+\infty) \times ({\Bbb R}/{\Bbb Z})^3 \to {\Bbb R}^3$ to the Navier-Stokes equations.
2. (Local-in-time quantitative regularity conjecture) Given any smooth solution $u: [0,T] \times ({\Bbb R}/{\Bbb Z})^3 \to {\Bbb R}^3$ to the Navier-Stokes equations with $0 < T \leq 1$, one has the a priori bound $\| u(T) \|_{H^1(({\Bbb R}/{\Bbb Z})^3)} \leq F( \| u(0) \|_{H^1(({\Bbb R}/{\Bbb Z})^3)} )$ for some non-decreasing function $F:[0,+\infty) \to [0,+\infty)$.
3. (Global-in-time quantitative regularity conjecture) This is the same conjecture as 2, but with the condition $0 < T \leq 1$ replaced by $0 < T < \infty$.

It is easy to see that Conjecture 3 implies Conjecture 2, which implies Conjecture 1. By using the compactness of the local periodic Navier-Stokes flow in $H^1$, one can show that Conjecture 1 implies Conjecture 2; and by using the energy identity (and in particular the fact that the energy dissipation is bounded) one can deduce Conjecture 3 from Conjecture 2. The argument uses only standard tools and is likely to generalise in a number of ways, which I discuss in the paper. (In particular one should be able to replace the $H^1$ norm here by any other subcritical norm.)
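Roughly speaking, the deduction of Conjecture 3 from Conjecture 2 runs as follows (this is only a loose paraphrase of the argument): the energy identity

$\displaystyle \frac{1}{2} \| u(T) \|_{L^2}^2 + \nu \int_0^T \| \nabla u(t) \|_{L^2}^2\ dt = \frac{1}{2} \| u(0) \|_{L^2}^2$

shows that the cumulative energy dissipation $\nu \int_0^\infty \| \nabla u(t) \|_{L^2}^2\ dt$ is controlled by the initial data, so (for instance by the pigeonhole principle) every unit time interval contains times at which the $H^1$ norm of $u$ is bounded in terms of the initial data; one can then repeatedly apply the local-in-time conjecture starting from such times to control the solution up to any finite time $T$.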