One of the most important topological concepts in analysis is that of compactness (as discussed for instance in my Companion article on this topic). There are various flavours of this concept, but let us focus on sequential compactness: a subset E of a topological space X is sequentially compact if every sequence in E has a convergent subsequence whose limit is also in E. This property allows one to do many things with the set E. For instance, it allows one to maximise a functional on E:
Proposition 1. (Existence of extremisers) Let E be a non-empty sequentially compact subset of a topological space X, and let $F: X \to \mathbb{R}$ be a continuous function. Then the supremum $\sup_{x \in E} F(x)$ is attained at at least one point $x_* \in E$, thus $F(x_*) \geq F(x)$ for all $x \in E$. (In particular, this supremum is finite.) Similarly for the infimum.
Proof. Let $L$ be the supremum $L := \sup_{x \in E} F(x)$. By the definition of supremum (and the axiom of (countable) choice), one can find a sequence $x^{(n)}$ in E such that $F(x^{(n)}) \to L$. By compactness, we can refine this sequence to a subsequence (which, by abuse of notation, we shall continue to call $x^{(n)}$) such that $x^{(n)}$ converges to a limit x in E. Since we still have $F(x^{(n)}) \to L$, and F is continuous at x, we conclude that $F(x) = L$, and the claim for the supremum follows. The claim for the infimum is similar. $\Box$
Remark 1. An inspection of the argument shows that one can relax the continuity hypothesis on F somewhat: to attain the supremum, it suffices that F be upper semicontinuous, and to attain the infimum, it suffices that F be lower semicontinuous.
We thus see that sequential compactness is useful, among other things, for ensuring the existence of extremisers. In finite-dimensional spaces (such as the Euclidean spaces $\mathbb{R}^n$), compact sets are plentiful; indeed, the Heine-Borel theorem asserts that every closed and bounded set is compact. However, once one moves to infinite-dimensional spaces, such as function spaces, then the Heine-Borel theorem fails quite dramatically; most of the closed and bounded sets one encounters in a topological vector space are non-compact, if one insists on using a reasonably “strong” topology. This causes a difficulty in (among other things) the calculus of variations, which is often concerned with finding extremisers to a functional F on a subset E of an infinite-dimensional function space X.
In recent decades, mathematicians have found a number of ways to get around this difficulty. One of them is to weaken the topology to recover compactness, taking advantage of such results as the Banach-Alaoglu theorem (or its sequential counterpart). Of course, there is a tradeoff: weakening the topology makes compactness easier to attain, but makes the continuity of F harder to establish. Nevertheless, if F enjoys enough “smoothing” or “cancellation” properties, one can hope to obtain continuity in the weak topology, allowing one to do things such as locate extremisers. (The phenomenon that cancellation can lead to continuity in the weak topology is sometimes referred to as compensated compactness.)
Another option is to abandon trying to make all sequences have convergent subsequences, and settle just for extremising sequences to have convergent subsequences, as this would still be enough to retain Proposition 1. Pursuing this line of thought leads to the Palais-Smale condition, which is a substitute for compactness in some calculus of variations situations.
But in many situations, one cannot weaken the topology to the point where the domain E becomes compact, without destroying the continuity (or semi-continuity) of F, though one can often at least find an intermediate topology (or metric) in which F is continuous, but for which E is still not quite compact. Thus one can find sequences $x^{(n)}$ in E which do not have any subsequences that converge to a constant element $x \in E$, even in this intermediate metric. (As we shall see shortly, one major cause of this failure of compactness is the existence of a non-trivial action of a non-compact group G on E; such a group action can cause compensated compactness or the Palais-Smale condition to fail also.) Because of this, it is a priori conceivable that a continuous function F need not attain its supremum or infimum.
Nevertheless, even though a sequence $x^{(n)}$ does not have any subsequences that converge to a constant x, it may have a subsequence (which we also call $x^{(n)}$) which converges to some non-constant sequence $y^{(n)}$ (in the sense that the distance between $x^{(n)}$ and $y^{(n)}$ in this intermediate metric goes to zero), where the approximating sequence $y^{(n)}$ is of a very structured form (e.g. “concentrating” to a point, or “travelling” off to infinity, or a superposition of several concentrating or travelling profiles of this form). This weaker form of compactness, in which superpositions of a certain type of profile completely describe all the failures (or defects) of compactness, is known as concentration compactness, and the decomposition of the subsequence is known as the profile decomposition. In many applications, it is a sufficiently good substitute for compactness that one can still do things like locate extremisers for functionals F – though one often has to make some additional assumptions on F to compensate for the more complicated nature of the compactness. This phenomenon was systematically studied by P.L. Lions in the 80s, and found great application in the calculus of variations and nonlinear elliptic PDE. More recently, concentration compactness has been a crucial and powerful tool in the non-perturbative analysis of nonlinear dispersive PDE, in particular being used to locate “minimal energy blowup solutions” or “minimal mass blowup solutions” for such a PDE (analogously to how one can use the calculus of variations to find minimal energy solutions to a nonlinear elliptic equation); see for instance this recent survey by Killip and Visan.
In typical applications, the concentration compactness phenomenon is exploited in moderately sophisticated function spaces (such as Sobolev spaces or Strichartz spaces), with the failure of traditional compactness being connected to a moderately complicated group G of symmetries (e.g. the group generated by translations and dilations). Because of this, concentration compactness can appear to be a rather complicated and technical concept when it is first encountered. In this note, I would like to illustrate concentration compactness in a simple toy setting, namely in the space $\ell^1(\mathbb{Z})$ of absolutely summable sequences, with the uniform ($\ell^\infty$) metric playing the role of the intermediate metric, and the translation group $\mathbb{Z}$ playing the role of the symmetry group G. This toy setting is significantly simpler than any model that one would actually use in practice [for instance, in most applications X is a Hilbert space], but hopefully it serves to illuminate this useful concept in a less technical fashion.
— Defects of compactness in $\ell^1(\mathbb{Z})$ —
Consider the space

$X := \ell^1(\mathbb{Z}) = \{ (x_m)_{m \in \mathbb{Z}} : \sum_{m \in \mathbb{Z}} |x_m| < \infty \}$

of absolutely summable doubly infinite sequences $x = (x_m)_{m \in \mathbb{Z}}$; this is a normed vector space generated by the basis vectors $e_n := (\delta_{n,m})_{m \in \mathbb{Z}}$ for $n \in \mathbb{Z}$ (here $\delta_{n,m}$ is the Kronecker delta). We can place several topologies on this space X:
Definition 1. Let $x^{(n)} = (x^{(n)}_m)_{m \in \mathbb{Z}}$, $n = 1, 2, 3, \ldots$ be a sequence in X (i.e. a sequence of sequences!), and let $x = (x_m)_{m \in \mathbb{Z}}$ be another element in X.
- (Strong topology) We say that $x^{(n)}$ converges to x in the strong topology (or $\ell^1$ topology) if the $\ell^1$ distance $\|x^{(n)} - x\|_{\ell^1} := \sum_{m \in \mathbb{Z}} |x^{(n)}_m - x_m|$ converges to zero.
- (Intermediate topology) We say that $x^{(n)}$ converges to x in the intermediate topology (or uniform topology) if the $\ell^\infty$ distance $\|x^{(n)} - x\|_{\ell^\infty} := \sup_{m \in \mathbb{Z}} |x^{(n)}_m - x_m|$ converges to zero.
- (Weak topology) We say that $x^{(n)}$ converges to x in the weak topology (or pointwise topology) if $x^{(n)}_m \to x_m$ as $n \to \infty$ for each m. [Strictly speaking, this only describes the weak topology for bounded sequences, but these are the only sequences we will be considering here.]
Example 1. The sequence $x^{(n)} := e_n$ for $n = 1, 2, 3, \ldots$ converges weakly to zero, but is not convergent in the strong or intermediate topologies. The sequence $x^{(n)} := \frac{1}{n}(e_1 + \ldots + e_n)$ converges in the intermediate and weak topologies to zero, but is not convergent in the strong topology.
It is easy to see that strong convergence implies intermediate convergence, which in turn implies weak convergence, thus justifying the names “strong”, “intermediate”, and “weak”. For bounded sequences, the intermediate topology can also be described by a number of other norms, e.g. the $\ell^p$ norm for any $1 < p < \infty$ (this is an easy application of Hölder’s inequality).
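To make the three topologies concrete, here is a small numerical sketch (my own illustration, not part of the original argument): it models finitely supported elements of $\ell^1(\mathbb{Z})$ as Python dictionaries and evaluates the relevant distances to 0 for the two sequences of Example 1.

```python
# Toy model of l^1(Z): finitely supported sequences as {index: value} dicts.

def l1(x):  # strong (l^1) norm
    return sum(abs(v) for v in x.values())

def linf(x):  # intermediate (l^infty / uniform) norm
    return max((abs(v) for v in x.values()), default=0.0)

def e(n):  # basis vector e_n
    return {n: 1.0}

def avg(n):  # the averaged element (e_1 + ... + e_n)/n
    return {k: 1.0 / n for k in range(1, n + 1)}

for n in [10, 100, 1000]:
    # distances to the candidate weak limit 0
    print(n, l1(e(n)), linf(e(n)), l1(avg(n)), linf(avg(n)))
# e_n: both the l^1 and l^infty distances to 0 stay at 1 (no strong or
# intermediate convergence), yet each fixed coordinate is eventually 0
# (weak convergence).  avg(n): the l^1 distance stays at 1, but the
# l^infty distance is 1/n -> 0, so avg(n) -> 0 in the intermediate
# topology only.
```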
The space X also has the translation action of the group of integers $G = \mathbb{Z}$, defined using the shift operators $T^h: X \to X$ for $h \in \mathbb{Z}$, defined by the formula

$T^h (x_m)_{m \in \mathbb{Z}} := (x_{m-h})_{m \in \mathbb{Z}}$

(in particular, $T^h e_n = e_{n+h}$; each $T^h$ is linear with $(T^h)^{-1} = T^{-h}$). This action is continuous with respect to all three of the above topologies. (We give G the discrete topology.)
Inside the infinite-dimensional space X, we let E be the “unit sphere” (though it looks more like an octahedron, actually)

$E := \{ x \in X : \|x\|_{\ell^1} = 1 \}.$
E is clearly invariant under the translation action of G. It is easy to see that E is closed and bounded in the strong topology (or metric). However, it is not closed in the weak topology: the sequence $e_n$ of basis vectors for $n = 1, 2, 3, \ldots$ converges weakly to the origin 0, which lies outside of E. It is also not closed in the intermediate topology; the sequence $\frac{1}{n}(e_1 + \ldots + e_n)$ lies in E but converges in the intermediate topology to 0, which lies outside of E.
The failure of closure in the weak topology causes failure of compactness in the strong or intermediate topologies. Indeed, the sequence $e_n$ cannot have any convergent subsequence in those topologies, since the limit of such a subsequence would have to equal its weak limit, which is zero; but $e_n$ clearly does not converge in either the strong or intermediate topologies to 0. (To put it another way, the embedding of $\ell^1(\mathbb{Z})$ into $\ell^\infty(\mathbb{Z})$ is not compact.)
More generally, for any fixed profile $\phi \in E$, the “travelling wave” (or “travelling profile”) $T^n \phi$ for $n = 1, 2, 3, \ldots$ converges weakly to zero, and so by the above argument has no convergent subsequence in the strong or intermediate topologies. A little more generally still, given any sequence $h_n$ of integers going off to infinity, $T^{h_n} \phi$ is a sequence in E which has no convergent subsequence in the strong or intermediate topologies. Thus we see that the action of the (non-compact) group G is causing a failure of compactness of E in the strong and intermediate topologies.
Because of the linear nature of the vector space X, one can also create examples of sequences in E with no convergent subsequences by taking superpositions of travelling profiles. For instance, if $\phi_1, \phi_2 \in X$ are two non-negative sequences with $\|\phi_1\|_{\ell^1} + \|\phi_2\|_{\ell^1} = 1$, and $h_{1,n}$, $h_{2,n}$ are two sequences of integers which both go off to infinity, then the superposition

$x^{(n)} := T^{h_{1,n}} \phi_1 + T^{h_{2,n}} \phi_2$

of the two travelling profiles $T^{h_{1,n}} \phi_1$ and $T^{h_{2,n}} \phi_2$ will be a sequence in E that continues to converge weakly to zero, and so again has no convergent subsequence in the strong or intermediate topologies.
If $\phi_1$ and $\phi_2$ are not non-negative, then there can be cancellations between $T^{h_{1,n}} \phi_1$ and $T^{h_{2,n}} \phi_2$, which could cause $x^{(n)}$ to have $\ell^1$ norm significantly less than 1 (thus straying away from E). However, if one also imposes the asymptotic orthogonality condition

$|h_{1,n} - h_{2,n}| \to \infty$ as $n \to \infty,$

we see that these cancellations vanish in the limit $n \to \infty$, and so in this case we can build a modified superposition

$x^{(n)} := \frac{T^{h_{1,n}} \phi_1 + T^{h_{2,n}} \phi_2}{\|T^{h_{1,n}} \phi_1 + T^{h_{2,n}} \phi_2\|_{\ell^1}}$
that lies in E, with $x^{(n)} - T^{h_{1,n}} \phi_1 - T^{h_{2,n}} \phi_2$ converging to zero in the strong and uniform topologies, and $x^{(n)}$ will once again be a sequence with no convergent subsequence. [If the asymptotic orthogonality condition fails, then one can collapse the superposition of two travelling profiles into a single travelling profile, after passing to a subsequence if necessary. Indeed, if $h_{1,n} - h_{2,n}$ does not go to infinity, then we can find a subsequence for which $h_{1,n} - h_{2,n}$ is equal to a constant c, in which case $T^{h_{1,n}} \phi_1 + T^{h_{2,n}} \phi_2 = T^{h_{2,n}}(T^c \phi_1 + \phi_2)$ is equal to a single travelling profile $T^{h_{2,n}} \phi$ with $\phi := T^c \phi_1 + \phi_2$.] More generally, given any collection $\phi_1, \phi_2, \phi_3, \ldots$ of non-zero elements of X with

$\sum_j \|\phi_j\|_{\ell^1} \leq 1 \quad (1)$
and any sequences $h_{1,n}, h_{2,n}, h_{3,n}, \ldots$ of integers obeying the asymptotic orthogonality condition

$|h_{j,n} - h_{j',n}| \to \infty$ as $n \to \infty \quad (2)$
for all $j \neq j'$, we can find a sequence $x^{(n)}$ in E that takes the form

$x^{(n)} = \sum_j T^{h_{j,n}} \phi_j + w^{(n)} \quad (3)$
where $w^{(n)}$ converges to zero in the intermediate topology. [If one has equality in (1), one can make $w^{(n)}$ converge in the strong topology also.] If $h_{j,n}$ goes off to infinity for at least one j with $\phi_j$ non-zero, then this sequence will have no convergent subsequence.
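The role of the asymptotic orthogonality condition can be checked numerically. The following sketch (my own illustration; the specific signed profiles are made up) shows how cancellation between two signed profiles disappears once their translates are well separated:

```python
# Two signed profiles that cancel when superposed at the same location,
# but not once their translates are far apart.

def shift(x, h):  # the translation operator T^h on finitely supported dicts
    return {m + h: v for m, v in x.items()}

def add(x, y):
    z = dict(x)
    for m, v in y.items():
        z[m] = z.get(m, 0.0) + v
    return z

def l1(x):
    return sum(abs(v) for v in x.values())

phi1 = {0: 0.6, 1: 0.4}     # ||phi1||_1 = 1
phi2 = {0: -0.4, 1: -0.6}   # ||phi2||_1 = 1, opposite signs to phi1
for sep in [0, 1, 2, 10]:
    s = add(phi1, shift(phi2, sep))
    print(sep, l1(s))
# At sep = 0 the superposition has l^1 norm only 0.4 (heavy cancellation);
# once the supports are disjoint (sep >= 2) the norm is exactly
# ||phi1||_1 + ||phi2||_1 = 2, so after normalisation the superposition
# barely moves and stays essentially on the sphere E.
```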
We have thus demonstrated a large number of ways that compactness of E fails in the strong and intermediate topologies. The concentration compactness phenomenon, in this setting, tells us that these are essentially the only ways in which compactness fails in the intermediate topology. More precisely, one has
Theorem 2. (Profile decomposition) Let $x^{(n)}$ be a sequence in E. Then, after passing to a subsequence (which we still call $x^{(n)}$), there exist $\phi_1, \phi_2, \phi_3, \ldots \in X$ obeying (1), and sequences $h_{1,n}, h_{2,n}, \ldots$ of integers obeying (2), such that we have the decomposition (3), where the error $w^{(n)}$ converges to zero in the intermediate topology. Furthermore, we can improve (1) to

$\sum_j \|\phi_j\|_{\ell^1} + \lim_{n \to \infty} \|w^{(n)}\|_{\ell^1} = 1. \quad (4)$
Remark 2. The situation is vastly different in the strong topology; in this case, virtually every sequence in E fails to have a convergent subsequence (consider for instance the sequence $\frac{1}{n}(e_1 + \ldots + e_n)$ from Example 1), and there are so many different ways a sequence can behave that there is no meaningful profile decomposition. A more quantitative way to see this is via a computation of metric entropy constants (i.e. covering numbers). Pick a small number $\epsilon > 0$ (e.g. $\epsilon = 0.1$) and a large number N, and consider how many balls of radius $\epsilon$ in the $\ell^1$ norm are needed to cover the unit sphere $E_N$ in $\ell^1(\{1, \ldots, N\})$. A simple volume packing argument shows that this number must grow exponentially in N. On the other hand, if one wants to cover $E_N$ with the (much larger) balls of radius $\epsilon$ in the $\ell^\infty$ topology instead, the number of balls needed grows only polynomially with N. Indeed, after rounding down each coefficient of an element of $E_N$ to a multiple of $\epsilon$, there are only at most $1/\epsilon$ non-zero coefficients, and so the total number of possibilities for this rounded down approximant is about $O_\epsilon(N^{1/\epsilon})$. Thus, the metric entropy constants for both the strong and intermediate topologies go to infinity in the infinite dimensional limit $N \to \infty$ (thus demonstrating the lack of compactness for both), but much more rapidly for the former than for the latter.
— Proof sketch of Theorem 2 —
We now sketch how one would prove Theorem 2. The idea is to hunt down and “domesticate” the large values of $x^{(n)}$, as these are the only obstructions to convergence in the intermediate topology. (I believe the use of the term “domesticate” here is due to Kyril Tintarev.) Each large piece of the $x^{(n)}$ that we capture in this manner will decrease the total “mass” in play, which guarantees that eventually one runs out of such large pieces, at which point one obtains the decomposition (3). [Curiously, the strategy here is very similar to that underlying the structural theorems that arise in additive combinatorics and ergodic theory; I touched upon these analogies before in my Simons lectures.] In this process we rely heavily on the freedom to pass to a subsequence at will, which is useful to eliminate any fluctuations so long as they range over a compact space of possibilities.
Let’s see how this procedure works. We begin with our bounded sequence $x^{(n)}$, whose $\ell^1$ norms are all equal to 1. If this sequence is already converging to zero in the intermediate topology, we are done (we let j range over the empty set, and set $w^{(n)}$ equal to all of $x^{(n)}$). So suppose that the $x^{(n)}$ are not converging to zero in this topology. Passing to a subsequence if necessary, this implies the existence of an $\epsilon_1 > 0$ such that $\|x^{(n)}\|_{\ell^\infty} \geq \epsilon_1$ for all n. Thus we can find integers $h_{1,n}$ such that $|x^{(n)}_{h_{1,n}}| \geq \epsilon_1$ for all n, or equivalently that the shifts $T^{-h_{1,n}} x^{(n)}$ have their zero coefficient uniformly bounded below in magnitude by $\epsilon_1$.
We have used the symmetry group G to move a large component of each of the $x^{(n)}$ to the origin. Now we take advantage of sequential compactness of the unit ball of X in the weak topology. This allows one (after passing to another subsequence) to assume that the shifted elements $T^{-h_{1,n}} x^{(n)}$ converge weakly to some limit $\phi_1 \in X$; since the $T^{-h_{1,n}} x^{(n)}$ are uniformly non-trivial at the origin, the weak limit $\phi_1$ is also; in particular, we have $\|\phi_1\|_{\ell^1} \geq \epsilon_1$. Undoing the shift, we have obtained a decomposition
$x^{(n)} = T^{h_{1,n}} \phi_1 + w_1^{(n)}$

where the residual $w_1^{(n)} := x^{(n)} - T^{h_{1,n}} \phi_1$ is such that $T^{-h_{1,n}} w_1^{(n)}$ converges weakly to zero (thus, $w_1^{(n)}$ in some sense vanishes asymptotically near the position $h_{1,n}$). It is then not difficult to show the “asymptotic orthogonality” relationship

$\|x^{(n)}\|_{\ell^1} = \|\phi_1\|_{\ell^1} + \|w_1^{(n)}\|_{\ell^1} + o(1)$
where $o(1)$ is a quantity that goes to zero as $n \to \infty$; this implies, in particular, that the residual $w_1^{(n)}$ eventually has mass strictly less than that of the original sequence $x^{(n)}$:

$\limsup_{n \to \infty} \|w_1^{(n)}\|_{\ell^1} \leq 1 - \epsilon_1;$
in fact we have the more precise relationship

$\lim_{n \to \infty} \|w_1^{(n)}\|_{\ell^1} = 1 - \|\phi_1\|_{\ell^1}.$
Now we take this residual and repeat the whole process. Namely, if $w_1^{(n)}$ converges in the intermediate topology to zero, then we are done; otherwise, as before, we can find (after passing to a subsequence) $\epsilon_2 > 0$ and integers $h_{2,n}$, for which $T^{-h_{2,n}} w_1^{(n)}$ is bounded from below in magnitude by $\epsilon_2$ at the origin. Because $T^{-h_{1,n}} w_1^{(n)}$ already converged weakly to zero, one can conclude that $h_{1,n}$ and $h_{2,n}$ must be asymptotically orthogonal in the sense of (2).
Passing to a subsequence again, we can assume that $T^{-h_{2,n}} w_1^{(n)}$ converges weakly to a limit $\phi_2$ with mass $\|\phi_2\|_{\ell^1}$ at least $\epsilon_2$, leading to a decomposition

$x^{(n)} = T^{h_{1,n}} \phi_1 + T^{h_{2,n}} \phi_2 + w_2^{(n)}$

where the residual $w_2^{(n)}$ is such that $T^{-h_{1,n}} w_2^{(n)}$ and $T^{-h_{2,n}} w_2^{(n)}$ both converge weakly to zero, and has $\ell^1$ norm

$\limsup_{n \to \infty} \|w_2^{(n)}\|_{\ell^1} \leq 1 - \epsilon_1 - \epsilon_2;$

in fact we have the more precise relationship

$\lim_{n \to \infty} \|w_2^{(n)}\|_{\ell^1} = 1 - \|\phi_1\|_{\ell^1} - \|\phi_2\|_{\ell^1}.$
One can continue in this vein, extracting more and more travelling profiles $T^{h_{j,n}} \phi_j$ on finer and finer subsequences, with residuals $w_j^{(n)}$ that are getting smaller and smaller. The subsequences involved depend on j, but by the usual Cantor (or Arzelà-Ascoli) diagonalisation argument, one can work with a single sequence throughout. Note that the amounts $\epsilon_1, \epsilon_2, \ldots$ of mass that are extracted in this process cannot exceed 1 in total:

$\sum_j \epsilon_j \leq \sum_j \|\phi_j\|_{\ell^1} \leq 1$

(in fact we have the slightly stronger statement (4)). In particular, the $\epsilon_j$ must go to zero as $j \to \infty$ (the infinite convergence principle!). If the $\epsilon_j$ were selected in a “greedy” manner, this shows that the asymptotic $\ell^\infty$ norm of the residuals $w_j^{(n)}$ as $n \to \infty$ must decay to zero as $j \to \infty$. Carefully rearranging the epsilons, this gives the decomposition (3) with residual $w^{(n)}$ converging to zero in the intermediate topology, and the verification of the rest of the theorem is routine.
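The greedy extraction strategy can be caricatured in code. The sketch below is my own toy implementation, not from the original argument: the window parameter W and the synthetic data are ad hoc choices, and a finite computation can only approximate the weak-limit step by reading off a local window around each peak. It extracts travelling profiles from a single large-n element built from two well-separated profiles plus a thin residual:

```python
def shift(x, h):  # the translation operator T^h
    return {m + h: v for m, v in x.items()}

def add(x, y):
    z = dict(x)
    for m, v in y.items():
        z[m] = z.get(m, 0.0) + v
    return z

def linf(x):  # intermediate (l^infty) norm
    return max((abs(v) for v in x.values()), default=0.0)

def extract_profiles(x, eps, W):
    """Greedy 'domestication': repeatedly find the peak coefficient,
    record the windowed profile around it (recentred at the origin),
    and subtract it, until the residual is small in l^infty."""
    x = dict(x)
    profiles = []
    while linf(x) >= eps:
        h = max(x, key=lambda m: abs(x[m]))  # location of the peak
        phi = {m - h: x[m] for m in range(h - W, h + W + 1) if m in x}
        profiles.append((h, phi))
        for m in list(phi):  # subtract the travelling profile T^h phi
            del x[m + h]
    return profiles, x      # x is now the leftover residual

# Synthetic data: two travelling profiles at separation ~n, plus a thin
# residual of total mass 0.2 spread over many distant coordinates.
n = 1000
phi1 = {0: 0.3, 1: 0.2}
phi2 = {0: 0.25, -1: 0.05}
residual = {m: 0.2 / n for m in range(3 * n, 4 * n)}
x = add(add(shift(phi1, n), shift(phi2, 2 * n)), residual)

profiles, w = extract_profiles(x, eps=0.01, W=5)
print(profiles)  # recovers (n, phi1) and (2n, phi2)
print(linf(w))   # the residual is small in l^infty but has l^1 mass 0.2
```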
Remark 3. It is tempting to view Theorem 2 as asserting that the space E with the intermediate metric can be “compactified” by throwing in some idealised superpositions of profiles that are “infinitely far apart” from each other.
— An application of concentration compactness —
As mentioned in the introduction, one can use the profile decomposition of Theorem 2 as a substitute for compactness in establishing results analogous to Proposition 1. The catch is that one needs more hypotheses on the functional F in order to be able to handle the complicated profiles that come up. It is difficult to formalise the “best” set of hypotheses that would cover all conceivable situations; it seems better to just adapt the general arguments to each individual situation separately. Here is a typical (but certainly not optimal) result of this type:
Theorem 3. Let X, E be as above. Let $F: X \to [0, +\infty)$ be a non-negative function with the following properties:
- (Continuity) F is continuous in the intermediate topology on E.
- (Homogeneity) F is homogeneous of some degree $p > 1$, thus $F(\lambda x) = \lambda^p F(x)$ for all $\lambda > 0$ and $x \in X$. (In particular, F(0)=0.)
- (Invariance) F is G-invariant: $F(T^h x) = F(x)$ for all $h \in \mathbb{Z}$ and $x \in X$.
- (Asymptotic additivity) If $\phi_1, \phi_2, \phi_3, \ldots \in X$ and $h_{1,n}, h_{2,n}, \ldots$ are a collection of sequences obeying the asymptotic orthogonality condition (2), and are such that $\sum_j \|\phi_j\|_{\ell^1} < \infty$, then $\sum_j F(\phi_j)$ converges and $F(\sum_j T^{h_{j,n}} \phi_j) \to \sum_j F(\phi_j)$ as $n \to \infty$. More generally, if $w^{(n)}$ is bounded in $\ell^1$ and converges to zero in the intermediate topology, then $F(\sum_j T^{h_{j,n}} \phi_j + w^{(n)}) \to \sum_j F(\phi_j)$. (Note that this generalises both 1. and 3.)
Then F is bounded on E, and attains its supremum.
A typical example of a functional F obeying the above properties is

$F(x) := \|x\|_{\ell^p}^p = \sum_{m \in \mathbb{Z}} |x_m|^p$

for some $1 < p < \infty$.
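For concreteness, here is a quick numerical check (my own illustration, taking p = 2 as an arbitrary choice) that for this functional, concentrated elements of E attain the supremum while split or spread-out mass gives strictly smaller values, in line with the equality analysis in the proof of Theorem 3:

```python
# The functional F(x) = sum |x_m|^p on the l^1 unit sphere E, with p = 2.

def F(x, p=2):
    return sum(abs(v) ** p for v in x.values())

e0 = {0: 1.0}                                # concentrated at one point
split = {0: 0.5, 10**6: 0.5}                 # two far-apart bumps
spread = {m: 1.0 / 100 for m in range(100)}  # mass spread thinly

print(F(e0), F(split), F(spread))
# Up to rounding: F(e0) = 1, F(split) = 0.5^2 + 0.5^2 = 0.5, and
# F(spread) = 100 * 0.01^2 = 0.01.  Splitting the l^1 mass into profiles
# of norms a_j yields only sum a_j^p < 1 when p > 1, so only the
# concentrated elements attain the supremum L = 1.
```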
Proof. We repeat the proof of Proposition 1. Let $L := \sup_{x \in E} F(x)$. Clearly $L \geq 0$; we can assume that $L > 0$, since the claim is trivial when $L = 0$. As before, we have an extremising sequence $x^{(n)} \in E$ with $F(x^{(n)}) \to L$. Applying Theorem 2, and passing to a subsequence, we obtain a decomposition (3) with the stated properties. Applying the asymptotic additivity hypothesis 4., we have

$F(x^{(n)}) \to \sum_j F(\phi_j)$

and in particular

$\sum_j F(\phi_j) = L.$
This implies in particular that L is finite.
Now, we use the homogeneity assumption. Since $F(\phi_j / \|\phi_j\|_{\ell^1}) \leq L$ when $\phi_j \neq 0$, we obtain the bound $F(\phi_j) \leq L \|\phi_j\|_{\ell^1}^p$. We conclude that

$L = \sum_j F(\phi_j) \leq L \sum_j \|\phi_j\|_{\ell^1}^p.$
Combining this with (1) (noting that $\|\phi_j\|_{\ell^1} \leq 1$ and $p > 1$) we obtain

$L \leq L \sum_j \|\phi_j\|_{\ell^1}^p \leq L \sum_j \|\phi_j\|_{\ell^1} \leq L.$
Thus all these inequalities must be equalities. Analysing this (recalling that $L > 0$ and $p > 1$), we see that all but one of the $\phi_j$ must vanish, with the remaining one (say $\phi_{j_0}$) having norm 1. Since $\sum_j F(\phi_j) = L$, we thus have $F(\phi_{j_0}) = L$ (and from (4), the error $w^{(n)}$ in fact converges to zero in the strong topology), and we have obtained the desired extremiser. $\Box$
[Update, Nov 6: some corrections, in particular with regard to the closure of E in the intermediate topology.]