You are currently browsing the monthly archive for May 2011.

This is yet another post in a series on basic ingredients in the structural theory of locally compact groups, which is closely related to Hilbert’s fifth problem.

In order to understand the structure of a topological group $G$, a basic strategy is to try to *split* $G$ into two smaller factor groups $H, K$ by exhibiting a short exact sequence

$$0 \rightarrow H \rightarrow G \rightarrow K \rightarrow 0.$$

If one has such a sequence, then $G$ is an extension of $K$ by $H$ (which includes direct products and semidirect products as examples, but can be more general than these situations, as discussed in this previous blog post). In principle, the problem of understanding the structure of $G$ then splits into three simpler problems:

- (Horizontal structure) Understanding the structure of the “horizontal” group $K$.
- (Vertical structure) Understanding the structure of the “vertical” group $H$.
- (Cohomology) Understanding the ways in which one can extend $K$ by $H$.

The “cohomological” aspect to this program can be nontrivial. However, in principle at least, this strategy reduces the study of the large group $G$ to the study of the smaller groups $H, K$. (This type of splitting strategy is not restricted to topological groups, but can also be adapted to many other categories, particularly those of groups or group-like objects.) Typically, splitting alone does not fully kill off a structural classification problem, but it can reduce matters to studying those objects which are somehow “simple” or “irreducible”. For instance, this strategy can often be used to reduce questions about arbitrary finite groups to finite simple groups.

A simple example of splitting is as follows. Given any locally compact group $G$, one can form the connected component $G^\circ$ of the identity – the maximal connected set containing the identity. It is not difficult to show that $G^\circ$ is a closed (and thus also locally compact) normal subgroup of $G$, whose quotient $G/G^\circ$ is another locally compact group. Furthermore, due to the maximal connected nature of $G^\circ$, the quotient $G/G^\circ$ is totally disconnected – the only connected sets are the singletons. In particular, $G/G^\circ$ is Hausdorff (the identity element is closed). Thus we have obtained a splitting

$$0 \rightarrow G^\circ \rightarrow G \rightarrow G/G^\circ \rightarrow 0$$

of an arbitrary locally compact group $G$ into a connected locally compact group $G^\circ$, and a totally disconnected locally compact group $G/G^\circ$. In principle at least, the study of locally compact groups thus splits into the study of connected locally compact groups, and the study of totally disconnected locally compact groups (though the cohomological issues are not always trivial).

In the structural theory of totally disconnected locally compact groups, the first basic theorem in the subject is van Dantzig’s theorem (which we prove below the fold):

Theorem 1 (van Dantzig’s theorem) Every totally disconnected locally compact group contains a compact open subgroup (which will of course still be totally disconnected).

Example 1 Let $p$ be a prime. Then the $p$-adic field $\mathbb{Q}_p$ (with the usual $p$-adic valuation) is totally disconnected locally compact, and the $p$-adic integers $\mathbb{Z}_p$ are a compact open subgroup.
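To make Example 1 concrete: $\mathbb{Z}_p$ (intersected with $\mathbb{Q}$) is the unit ball of the $p$-adic norm, and the ultrametric inequality makes this ball a subgroup. A small sketch (the function `padic_norm` is our own naming, not from the post):

```python
from fractions import Fraction

def padic_norm(x, p):
    """p-adic absolute value |x|_p of a rational x: writing x = p^v * a/b
    with p dividing neither a nor b, |x|_p = p^(-v); by convention |0|_p = 0."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(1, p) ** v  # equals p^(-v), exactly

# The ultrametric inequality |x+y|_p <= max(|x|_p, |y|_p) shows that the
# unit ball {x : |x|_p <= 1} (the p-adic integers, met with Q) is closed
# under addition and negation, hence is a subgroup; it is open since it
# contains the ball {x : |x|_p < p} around each of its points.
```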

Of course, this situation is the polar opposite of what occurs in the connected case, in which the only open subgroup is the whole group.

In view of van Dantzig’s theorem, we see that the “local” behaviour of totally disconnected locally compact groups can be modeled by the compact totally disconnected groups, which are better understood (for instance, one can start analysing them using the Peter-Weyl theorem, as discussed in this previous post). The global behaviour however remains more complicated, in part because the compact open subgroup given by van Dantzig’s theorem need not be normal, and so does not necessarily induce a splitting of into compact and discrete factors.

Example 2 Let $p$ be a prime, and let $G := \mathbb{Z} \ltimes \mathbb{Q}_p$ be the semi-direct product of $\mathbb{Z}$ and $\mathbb{Q}_p$, where the integers $\mathbb{Z}$ act on $\mathbb{Q}_p$ by the maps $x \mapsto p^n x$, and we give $G$ the product of the discrete topology of $\mathbb{Z}$ and the $p$-adic topology on $\mathbb{Q}_p$. One easily verifies that $G$ is a totally disconnected locally compact group. It certainly has compact open subgroups, such as $\{0\} \times \mathbb{Z}_p$. However, it is easy to show that $G$ has no non-trivial compact normal subgroups (the problem is that the conjugation action of $\mathbb{Z}$ on $\mathbb{Q}_p$ has all non-trivial orbits unbounded).

Returning to more general locally compact groups, we obtain an immediate corollary:

Corollary 2 Every locally compact group $G$ contains an open subgroup $G'$ which is “compact-by-connected” in the sense that $G'/(G')^\circ$ is compact.

Indeed, one applies van Dantzig’s theorem to the totally disconnected group $G/G^\circ$, and then pulls back the resulting compact open subgroup.

Now we mention another application of van Dantzig’s theorem, of more direct relevance to Hilbert’s fifth problem. Define a *generalised Lie group* to be a topological group $G$ with the property that given any open neighbourhood $U$ of the identity, there exists an open subgroup $G'$ of $G$ and a compact normal subgroup $N$ of $G'$ contained in $U$ such that $G'/N$ is isomorphic to a Lie group. It is easy to see that such groups are locally compact. The deep *Gleason-Yamabe theorem*, which among other things establishes a satisfactory solution to Hilbert’s fifth problem (and which we will not prove here), asserts the converse:

Theorem 3 (Gleason-Yamabe theorem) Every locally compact group is a generalised Lie group.

Example 3 We consider the locally compact group $G := \mathbb{Z} \ltimes \mathbb{Q}_p$ from Example 2. This is of course not a Lie group. However, any open neighbourhood $U$ of the identity in $G$ will contain the compact subgroup $N := \{0\} \times p^n \mathbb{Z}_p$ for some integer $n$. The open subgroup $G' := \{0\} \times \mathbb{Z}_p$ then has $G'/N$ isomorphic to the discrete finite group $\mathbb{Z}/p^n\mathbb{Z}$, which is certainly a Lie group. Thus $G$ is a generalised Lie group.

One important example of generalised Lie groups are those locally compact groups which are an inverse limit (or *projective limit*) of Lie groups. Indeed, suppose we have a family $(G_\alpha)_{\alpha \in A}$ of Lie groups indexed by a partially ordered set $A$ which is directed in the sense that every finite subset of $A$ has an upper bound, together with continuous homomorphisms $\pi_{\alpha \rightarrow \beta}: G_\alpha \rightarrow G_\beta$ for all $\alpha > \beta$ which form a category in the sense that $\pi_{\alpha \rightarrow \gamma} = \pi_{\beta \rightarrow \gamma} \circ \pi_{\alpha \rightarrow \beta}$ for all $\alpha > \beta > \gamma$. Then we can form the inverse limit

$$G := \lim_{\leftarrow} G_\alpha,$$

which is the subgroup of $\prod_{\alpha \in A} G_\alpha$ consisting of all tuples $(g_\alpha)_{\alpha \in A}$ which are compatible with the $\pi_{\alpha \rightarrow \beta}$ in the sense that $\pi_{\alpha \rightarrow \beta}(g_\alpha) = g_\beta$ for all $\alpha > \beta$. If we endow $\prod_{\alpha \in A} G_\alpha$ with the product topology, then $G$ is a closed subgroup of $\prod_{\alpha \in A} G_\alpha$, and thus has the structure of a topological group, with continuous homomorphisms $\pi_\alpha: G \rightarrow G_\alpha$ which are compatible with the $\pi_{\alpha \rightarrow \beta}$ in the sense that $\pi_\beta = \pi_{\alpha \rightarrow \beta} \circ \pi_\alpha$ for all $\alpha > \beta$. Such an inverse limit need not be locally compact; for instance, the inverse limit

$$\lim_{\leftarrow} \mathbb{R}^n$$

of Euclidean spaces $\mathbb{R}^n$ with the usual coordinate projection maps is isomorphic to the infinite product space $\mathbb{R}^{\mathbb{N}}$ with the product topology, which is not locally compact. However, if an inverse limit

$$G = \lim_{\leftarrow} G_\alpha$$

of Lie groups *is* locally compact, it can be easily seen to be a generalised Lie group. Indeed, by local compactness, any open neighbourhood of the identity will contain an open precompact neighbourhood of the identity; by construction of the product topology (and the directed nature of $A$), this smaller neighbourhood will in turn contain the kernel of one of the $\pi_\alpha$, which will be compact since the preceding neighbourhood was precompact. Quotienting out by this kernel, we obtain a locally compact subgroup of the Lie group $G_\alpha$, which is necessarily again a Lie group by Cartan’s theorem, and the claim follows.
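Example 1 fits this framework: $\mathbb{Z}_p$ is the inverse limit of the finite (hence Lie, with the discrete topology) groups $\mathbb{Z}/p^n\mathbb{Z}$ under the reduction maps, and the compatibility condition above becomes concrete. A minimal sketch (function names are ours):

```python
def inverse_limit_element(x, p, depth):
    """The image of an integer x in the inverse limit lim_<- Z/p^n Z: the
    compatible tuple (x mod p, x mod p^2, ..., x mod p^depth)."""
    return [x % p ** n for n in range(1, depth + 1)]

def is_compatible(tup, p):
    """Check compatibility with the projection maps Z/p^(n+1)Z -> Z/p^nZ,
    i.e. that each entry reduces to the previous one."""
    return all(tup[n + 1] % p ** (n + 1) == tup[n] for n in range(len(tup) - 1))
```

Componentwise addition of compatible tuples is again compatible, reflecting the fact that the inverse limit is a (closed) subgroup of the product.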

In the converse direction, it is possible to use Corollary 2 to obtain the following observation of Gleason:

Theorem 4 Every Hausdorff generalised Lie group contains an open subgroup that is an inverse limit of Lie groups.

We show Theorem 4 below the fold. Combining this with the (substantially more difficult) Gleason-Yamabe theorem, we obtain quite a satisfactory description of the local structure of locally compact groups. (The situation is particularly simple for connected groups, which have no non-trivial open subgroups; we then conclude that every connected locally compact Hausdorff group is the inverse limit of Lie groups.)

Example 4 The locally compact group $G := \mathbb{Z} \ltimes \mathbb{Q}_p$ is not an inverse limit of Lie groups, because (as noted earlier) it has no non-trivial compact normal subgroups, which would contradict the preceding analysis that showed that all locally compact inverse limits of Lie groups were generalised Lie groups. On the other hand, $G$ contains the open subgroup $\{0\} \times \mathbb{Z}_p$, which is the inverse limit of the discrete (and thus Lie) groups $\mathbb{Z}_p / p^n \mathbb{Z}_p$ for $n \in \mathbb{N}$ (where we give $\mathbb{N}$ the usual ordering, and use the obvious projection maps).

This is another post in a series on various components to the solution of Hilbert’s fifth problem. One interpretation of this problem is to ask for a purely topological classification of the topological groups which are isomorphic to Lie groups. (Here we require Lie groups to be finite-dimensional, but allow them to be disconnected.)

There are some obvious necessary conditions on a topological group in order for it to be isomorphic to a Lie group; for instance, it must be Hausdorff and locally compact. These two conditions, by themselves, are not quite enough to force a Lie group structure; consider for instance a $p$-adic field $\mathbb{Q}_p$ for some prime $p$, which is a locally compact Hausdorff topological group which is not a Lie group (the topology is locally that of a Cantor set). Nevertheless, it turns out that by adding some key additional assumptions on the topological group, one can recover Lie structure. One such result, which is a key component of the full solution to Hilbert’s fifth problem, is the following result of von Neumann:

Theorem 1 Let $G$ be a locally compact Hausdorff topological group that has a faithful finite-dimensional linear representation, i.e. an injective continuous homomorphism $\rho: G \rightarrow GL_d(\mathbb{C})$ into some linear group. Then $G$ can be given the structure of a Lie group. Furthermore, after giving this Lie structure, $\rho$ becomes smooth (and even analytic) and non-degenerate (the Jacobian always has full rank).

This result is closely related to a theorem of Cartan:

Theorem 2 (Cartan’s theorem)Any closed subgroup of a Lie group , is again a Lie group (in particular, is an analytic submanifold of , with the induced analytic structure).

Indeed, Theorem 1 immediately implies Theorem 2 in the important special case when the ambient Lie group is a linear group, and in any event it is not difficult to modify the proof of Theorem 1 to give a proof of Theorem 2. However, Theorem 1 is more general than Theorem 2 in some ways. For instance, let $G$ be the real line $\mathbb{R}$, which we faithfully represent in the $2$-torus $(\mathbb{R}/\mathbb{Z})^2$ using an irrational embedding $t \mapsto (t, \alpha t) \bmod \mathbb{Z}^2$ for some fixed irrational $\alpha$. The $2$-torus can in turn be embedded in a linear group (e.g. by identifying it with $U(1) \times U(1)$, or $SO(2) \times SO(2)$), thus giving a faithful linear representation of $G$. However, the image is not closed (it is a dense subgroup of a $2$-torus), and so Cartan’s theorem does not directly apply (the image fails to be a Lie group). Nevertheless, Theorem 1 still applies and guarantees that the original group $G$ is a Lie group.

(On the other hand, the image of any *compact* subset of $G$ under a faithful representation must be closed, and so Theorem 1 is very close to the version of Theorem 2 for *local* groups.)
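The failure of closedness in the irrational-embedding example can be seen numerically: the image of the line comes within any tolerance of any point of the $2$-torus, so its closure is the whole torus. A crude search sketch (the function names and search parameters are our own choices):

```python
def winding_point(t, alpha):
    """The point (t, alpha*t) mod Z^2 of the irrationally embedded line."""
    return (t % 1.0, (alpha * t) % 1.0)

def approach(target, alpha, tol, steps=200000, dt=0.01):
    """Search for a time t at which the line comes within torus-distance tol
    of the target point; such t exists for every target when alpha is
    irrational, since the image is dense (and hence not closed)."""
    for k in range(steps):
        t = k * dt
        x, y = winding_point(t, alpha)
        dx = min(abs(x - target[0]), 1 - abs(x - target[0]))
        dy = min(abs(y - target[1]), 1 - abs(y - target[1]))
        if max(dx, dy) < tol:
            return t
    return None
```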

The key to building the Lie group structure on a topological group $G$ is to first build the associated Lie *algebra* structure, by means of *one-parameter subgroups*.

Definition 3 A *one-parameter subgroup* of a topological group $G$ is a continuous homomorphism $\phi: \mathbb{R} \rightarrow G$ from the real line (with the additive group structure) to $G$.

Remark 1 Technically, $\phi$ is a parameterisation of a subgroup $\phi(\mathbb{R})$, rather than a subgroup itself, but we will abuse notation and refer to $\phi$ as the subgroup.

In a Lie group $G$, the one-parameter subgroups are in one-to-one correspondence with the Lie algebra $\mathfrak{g}$, with each element $X \in \mathfrak{g}$ giving rise to a one-parameter subgroup $\phi(t) := \exp(tX)$, and conversely each one-parameter subgroup $\phi$ giving rise to an element $\phi'(0)$ of the Lie algebra; we will establish these basic facts in the special case of linear groups below the fold. On the other hand, the notion of a one-parameter subgroup can be defined in an arbitrary topological group. So this suggests the following strategy if one is to try to represent a topological group $G$ as a Lie group:

- First, form the space $L(G)$ of one-parameter subgroups of $G$.
- Show that $L(G)$ has the structure of a (finite-dimensional) Lie algebra.
- Show that $L(G)$ “behaves like” the tangent space of $G$ at the identity (in particular, the one-parameter subgroups in $L(G)$ should cover a neighbourhood of the identity in $G$).
- Conclude that $G$ has the structure of a Lie group.
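In the linear case, the correspondence between Lie algebra elements and one-parameter subgroups is given concretely by the matrix exponential. A small numerical sketch (our own `expm` via Taylor series, adequate for these small matrices):

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via its Taylor series (fine for small matrices)."""
    result, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

# One-parameter subgroups of a linear group are t -> exp(tX) for X in the
# Lie algebra; here X generates the rotation subgroup SO(2) of GL_2(R).
X = np.array([[0.0, -1.0], [1.0, 0.0]])

def phi(t):
    return expm(t * X)
```

The homomorphism property $\phi(s+t) = \phi(s)\phi(t)$ and the recovery of $X$ as $\phi'(0)$ can both be checked directly.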

It turns out that this strategy indeed works to give Theorem 1 (and variants of this strategy are ubiquitous in the rest of the theory surrounding Hilbert’s fifth problem).

Below the fold, I record the proof of Theorem 1 (based on the exposition of Montgomery and Zippin). I plan to organise these disparate posts surrounding Hilbert’s fifth problem (and its application to related topics, such as Gromov’s theorem or to the classification of approximate groups) at a later date.

A basic problem in harmonic analysis (as well as in linear algebra, random matrix theory, and high-dimensional geometry) is to estimate the operator norm $\|T\|_{op}$ of a linear map $T: H \rightarrow H'$ between two Hilbert spaces, which we will take to be complex for sake of discussion. Even the finite-dimensional case is of interest, as this operator norm is the same as the largest singular value of the matrix associated to $T$.

In general, this operator norm is hard to compute precisely, except in special cases. One such special case is that of a *diagonal operator*, such as that associated to an $n \times n$ diagonal matrix $D = \mathrm{diag}(a_1, \ldots, a_n)$. In this case, the operator norm is simply the supremum norm of the diagonal coefficients:

$$\|D\|_{op} = \sup_{1 \leq i \leq n} |a_i|. \qquad (1)$$
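This identity is easy to confirm numerically; a quick NumPy check on a random complex diagonal matrix (our own example data):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(6) + 1j * rng.standard_normal(6)  # diagonal entries
D = np.diag(a)

# Largest singular value of a diagonal matrix vs. sup norm of the diagonal:
op_norm = np.linalg.norm(D, 2)   # ord=2 on a matrix gives the operator norm
sup_norm = np.abs(a).max()
```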
A variant of (1) is Schur’s test, which for simplicity we will phrase in the setting of finite-dimensional operators $T: \mathbb{C}^m \rightarrow \mathbb{C}^n$ given by a matrix $(a_{ij})_{1 \leq i \leq n, 1 \leq j \leq m}$ via the usual formula

$$(Tx)_i := \sum_{j=1}^m a_{ij} x_j.$$

A simple version of this test is as follows: if all the absolute row sums and column sums of $(a_{ij})$ are bounded by some constant $M$, thus

$$\sum_{j=1}^m |a_{ij}| \leq M \qquad (2)$$

for all $i$ and

$$\sum_{i=1}^n |a_{ij}| \leq M \qquad (3)$$

for all $j$, then

$$\|T\|_{op} \leq M \qquad (4)$$

(note that this generalises (the upper bound in) (1).) Indeed, to see (4), it suffices by duality and homogeneity to show that

$$\left| \sum_{i=1}^n \left( \sum_{j=1}^m a_{ij} x_j \right) \overline{y_i} \right| \leq M$$

whenever $(x_j)_{j=1}^m$ and $(y_i)_{i=1}^n$ are sequences with $\sum_j |x_j|^2 = \sum_i |y_i|^2 = 1$; but this easily follows from the arithmetic mean-geometric mean inequality

$$|x_j| |y_i| \leq \frac{1}{2} |x_j|^2 + \frac{1}{2} |y_i|^2.$$
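Schur’s test is likewise easy to sanity-check numerically; in this sketch the matrix is an arbitrary random one, and $M$ is the larger of the maximal absolute row and column sums:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))  # an arbitrary real matrix

row_sums = np.abs(A).sum(axis=1)   # absolute row sums
col_sums = np.abs(A).sum(axis=0)   # absolute column sums
M = max(row_sums.max(), col_sums.max())

# Schur's test (4): the operator norm is at most M.
op_norm = np.linalg.norm(A, 2)
```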

Schur’s test (4) (and its many generalisations to weighted situations, or to Lebesgue or Lorentz spaces) is particularly useful for controlling operators in which the role of oscillation (as reflected in the *phase* of the coefficients , as opposed to just their magnitudes ) is not decisive. However, it is of limited use in situations that involve a lot of cancellation. For this, a different test, known as the Cotlar-Stein lemma, is much more flexible and powerful. It can be viewed in a sense as a non-commutative variant of Schur’s test (4) (or of (1)), in which the scalar coefficients or are replaced by operators instead.

To illustrate the basic flavour of the result, let us return to the bound (1), and now consider instead a *block-diagonal* matrix

$$A = \begin{pmatrix} \Lambda_1 & 0 & \cdots & 0 \\ 0 & \Lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Lambda_m \end{pmatrix}, \qquad (5)$$

where each $\Lambda_i$ is now an $n_i \times n_i$ matrix, and so $A$ is an $n \times n$ matrix with $n = n_1 + \cdots + n_m$. Then we have

$$\|A\|_{op} = \sup_{1 \leq i \leq m} \|\Lambda_i\|_{op}. \qquad (6)$$

Indeed, the lower bound is trivial (as can be seen by testing $A$ on vectors which are supported on the $i^{th}$ block of coordinates), while to establish the upper bound, one can make use of the orthogonal decomposition

$$\mathbb{C}^n \equiv \bigoplus_{i=1}^m \mathbb{C}^{n_i} \qquad (7)$$

to decompose an arbitrary vector $x \in \mathbb{C}^n$ as

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix}$$

with $x_i \in \mathbb{C}^{n_i}$, in which case we have

$$Ax = \begin{pmatrix} \Lambda_1 x_1 \\ \vdots \\ \Lambda_m x_m \end{pmatrix}$$

and the upper bound in (6) then follows from a simple computation.

The operator $T$ associated to the matrix $A$ in (5) can be viewed as a sum $T = \sum_{i=1}^m T_i$, where each $T_i$ corresponds to the block $\Lambda_i$ of $A$, in which case (6) can also be written as

$$\|T\|_{op} = \sup_{1 \leq i \leq m} \|T_i\|_{op}. \qquad (8)$$

When $m$ is large, this is a significant improvement over the triangle inequality, which merely gives

$$\|T\|_{op} \leq \sum_{i=1}^m \|T_i\|_{op}.$$

The reason for this gain can ultimately be traced back to the “orthogonality” of the $T_i$; that they “occupy different columns” and “different rows” of the range and domain of $T$. This is obvious when viewed in the matrix formalism, but can also be described in the more abstract Hilbert space operator formalism via the identities

$$T_i^* T_j = 0 \qquad (9)$$

and

$$T_i T_j^* = 0 \qquad (10)$$

whenever $i \neq j$. (The first identity asserts that the ranges of the $T_i$ are orthogonal to each other, and the second asserts that the coranges of the $T_i$ (the ranges of the adjoints $T_i^*$) are orthogonal to each other.) By replacing (7) with a more abstract orthogonal decomposition into these ranges and coranges, one can in fact deduce (8) directly from (9) and (10).
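The identities (6)/(8) and the orthogonality relations (9), (10) can be checked concretely; in this sketch the embedding of each block $\Lambda_i$ into an operator $T_i$ on the big space is our own bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(2)
blocks = [rng.standard_normal((k, k)) for k in (2, 3, 4)]  # the Lambda_i
n = sum(b.shape[0] for b in blocks)

# Embed each block Lambda_i as an operator T_i on C^n (zero off its block),
# so that A = sum_i T_i is the block-diagonal matrix (5).
Ts, offset = [], 0
for b in blocks:
    k = b.shape[0]
    T = np.zeros((n, n))
    T[offset:offset + k, offset:offset + k] = b
    Ts.append(T)
    offset += k
A = sum(Ts)
```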

The *Cotlar-Stein lemma* is an extension of this observation to the case where the $T_i$ are merely *almost orthogonal* rather than *orthogonal*, in a manner somewhat analogous to how Schur’s test (partially) extends (1) to the non-diagonal case. Specifically, we have

Lemma 1 (Cotlar-Stein lemma) Let $T_1, \ldots, T_N$ be a finite sequence of bounded linear operators from one Hilbert space $H$ to another $H'$, obeying the bounds

$$\sum_{j=1}^N \|T_i T_j^*\|_{op}^{1/2} \leq M \qquad (11)$$

and

$$\sum_{j=1}^N \|T_i^* T_j\|_{op}^{1/2} \leq M \qquad (12)$$

for all $i = 1, \ldots, N$ and some $M > 0$ (compare with (2), (3)). Then one has

$$\left\| \sum_{i=1}^N T_i \right\|_{op} \leq M. \qquad (13)$$

Note from the basic identity

$$\|T\|_{op} = \|TT^*\|_{op}^{1/2} = \|T^* T\|_{op}^{1/2} \qquad (14)$$

that the hypothesis (11) (or (12)) already gives the bound

$$\|T_i\|_{op} \leq M \qquad (15)$$

on each component $T_i$ of the sum $T := \sum_{i=1}^N T_i$, which by the triangle inequality gives the inferior bound

$$\|T\|_{op} \leq NM;$$

the point of the Cotlar-Stein lemma is that the dependence on $N$ in this bound is eliminated in (13), which in particular makes the bound suitable for extension to the limit $N \rightarrow \infty$ (see Remark 1 below).

The Cotlar-Stein lemma was first established by Cotlar in the special case of commuting self-adjoint operators, and then independently by Cotlar and Stein in full generality, with the proof appearing in a subsequent paper of Knapp and Stein.

The Cotlar-Stein lemma is often useful in controlling operators such as singular integral operators or pseudo-differential operators which “do not mix scales together too much”, in that such operators map functions “that oscillate at a given scale $2^{-j}$” to functions that still mostly oscillate at the same scale $2^{-j}$. In that case, one can often split $T$ into components $T_j$ which essentially capture the scale $2^{-j}$ behaviour, and understanding boundedness properties of $T$ then reduces to establishing the boundedness of the simpler operators $T_j$ (and of establishing a sufficient decay in products such as $T_j^* T_{j'}$ or $T_j T_{j'}^*$ when $j$ and $j'$ are separated from each other). In some cases, one can use Fourier-analytic tools such as Littlewood-Paley projections to generate the $T_j$, but the true power of the Cotlar-Stein lemma comes from situations in which the Fourier transform is not suitable, such as when one has a complicated domain (e.g. a manifold or a non-abelian Lie group), or very rough coefficients (which would then have badly behaved Fourier behaviour). One can then select the decomposition $T = \sum_j T_j$ in a fashion that is tailored to the particular operator $T$, and is not necessarily dictated by Fourier-analytic considerations.

Once one is in the almost orthogonal setting, as opposed to the genuinely orthogonal setting, the previous arguments based on orthogonal projection seem to fail completely. Instead, the proof of the Cotlar-Stein lemma proceeds via an elegant application of the tensor power trick (or perhaps more accurately, the power method), in which the operator norm of $T$ is understood through the operator norm of a large power of $T$ (or more precisely, of its self-adjoint square $TT^*$ or $T^* T$). Indeed, from an iteration of (14) we see that for any natural number $m$, one has

$$\|T\|_{op}^{2m} = \|(TT^*)^m\|_{op}. \qquad (16)$$

To estimate the right-hand side, we expand out the right-hand side and apply the triangle inequality to bound it by

$$\sum_{i_1, j_1, \ldots, i_m, j_m \in \{1,\ldots,N\}} \|T_{i_1} T_{j_1}^* T_{i_2} T_{j_2}^* \cdots T_{i_m} T_{j_m}^*\|_{op}. \qquad (17)$$

Recall that when we applied the triangle inequality directly to $T$, we lost a factor of $N$ in the final estimate; it will turn out that we will lose a similar factor here, but this factor will eventually be attenuated into nothingness by the tensor power trick.

To bound (17), we use the basic inequality $\|ST\|_{op} \leq \|S\|_{op} \|T\|_{op}$ in two different ways. If we group the product $T_{i_1} T_{j_1}^* \cdots T_{i_m} T_{j_m}^*$ in pairs, we can bound the summand of (17) by

$$\|T_{i_1} T_{j_1}^*\|_{op} \cdots \|T_{i_m} T_{j_m}^*\|_{op}.$$

On the other hand, we can group the product by pairs in another way, to obtain the bound of

$$\|T_{i_1}\|_{op} \|T_{j_1}^* T_{i_2}\|_{op} \cdots \|T_{j_{m-1}}^* T_{i_m}\|_{op} \|T_{j_m}^*\|_{op}.$$

We bound $\|T_{i_1}\|_{op}$ and $\|T_{j_m}^*\|_{op}$ crudely by $M$ using (15). Taking the geometric mean of the above bounds, we can thus bound (17) by

$$\sum_{i_1, j_1, \ldots, i_m, j_m \in \{1,\ldots,N\}} M \|T_{i_1} T_{j_1}^*\|_{op}^{1/2} \|T_{j_1}^* T_{i_2}\|_{op}^{1/2} \cdots \|T_{i_m} T_{j_m}^*\|_{op}^{1/2}.$$

If we then sum this series first in $j_m$, then in $i_m$, then moving back all the way to $i_1$, using (11) and (12) alternately, we obtain a final bound of

$$N M^{2m}$$

for (16). Taking $2m^{th}$ roots, we obtain

$$\|T\|_{op} \leq N^{1/2m} M.$$

Sending $m \rightarrow \infty$, we obtain the claim.
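The mechanism of the power method can be seen numerically: the operator norm of a power of $TT^*$ recovers the norm of $T$ exactly, while the inefficiency factor $N^{1/2m}$ from the triangle inequality tends to $1$ as $m$ grows. A small sketch (the matrix is an arbitrary random one of our choosing):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((6, 6))

def op(A):
    """Operator norm = largest singular value."""
    return np.linalg.norm(A, 2)

# The identity ||T||^2 = ||T T*|| iterates to ||T||^(2m) = ||(T T*)^m||,
# so the operator norm of T can be read off from high powers of T T*.
powers = {m: op(np.linalg.matrix_power(T @ T.T, m)) for m in (1, 2, 5)}
```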

Remark 1 As observed in a number of places (see e.g. page 318 of Stein’s book, or this paper of Comech), the Cotlar-Stein lemma can be extended to infinite sums $\sum_{i=1}^\infty T_i$ (with the obvious changes to the hypotheses (11), (12)). Indeed, one can show that for any $f \in H$, the sum $\sum_{i=1}^\infty T_i f$ is unconditionally convergent in $H'$ (and furthermore has bounded $2$-variation), and the resulting operator $\sum_{i=1}^\infty T_i$ is a bounded linear operator with an operator norm bound of $M$.

Remark 2 If we specialise to the case where all the $T_i$ are equal, we see that the bound in the Cotlar-Stein lemma is sharp, at least in this case. Thus we see how the tensor power trick can convert an inefficient argument, such as that obtained using the triangle inequality or crude bounds such as (15), into an efficient one.

Remark 3 One can prove Schur’s test by a similar method. Indeed, starting from the inequality

$$\|T\|_{op}^{2m} \leq \operatorname{tr}((TT^*)^m)$$

(which follows easily from the singular value decomposition), we can bound $\|T\|_{op}^{2m}$ by

$$\sum_{i_1, \ldots, i_m, j_1, \ldots, j_m} |a_{i_1 j_1}| |a_{i_2 j_1}| |a_{i_2 j_2}| \cdots |a_{i_m j_m}| |a_{i_1 j_m}|.$$

Estimating one of the factors in the summand crudely by $M$ (to break the cycle of indices), and then repeatedly summing the indices one at a time as before, we obtain

$$\|T\|_{op}^{2m} \leq n M^{2m},$$

and the claim follows from the tensor power trick as before. On the other hand, in the converse direction, I do not know of any way to prove the Cotlar-Stein lemma that does not basically go through the tensor power argument.

Recall that a (real) topological vector space is a real vector space $V$ equipped with a topology that makes the vector space operations $+: V \times V \rightarrow V$ and $\cdot: \mathbb{R} \times V \rightarrow V$ continuous. One often restricts attention to Hausdorff topological vector spaces; in practice, this is not a severe restriction because it turns out that any topological vector space can be made Hausdorff by quotienting out the closure $\overline{\{0\}}$ of the origin. One can also discuss complex topological vector spaces, and the theory is not significantly different; but for sake of exposition we shall restrict attention here to the real case.

An obvious example of a topological vector space is a finite-dimensional vector space such as $\mathbb{R}^n$ with the usual topology. Of course, there are plenty of infinite-dimensional topological vector spaces also, such as infinite-dimensional normed vector spaces (with the strong, weak, or weak-* topologies) or Frechet spaces.

One way to distinguish the finite and infinite dimensional topological vector spaces is via local compactness. Recall that a topological space is locally compact if every point in that space has a compact neighbourhood. From the Heine-Borel theorem, all finite-dimensional vector spaces (with the usual topology) are locally compact. In infinite dimensions, one can trivially make a vector space locally compact by giving it a trivial topology, but once one restricts to the Hausdorff case, it seems impossible to make an infinite-dimensional vector space locally compact. For instance, in an infinite-dimensional normed vector space $V$ with the strong topology, an iteration of the Riesz lemma shows that the closed unit ball $B$ in that space contains an infinite sequence with no convergent subsequence, which (by the Heine-Borel theorem) implies that $V$ cannot be locally compact. If one gives $V$ the weak-* topology instead (in the case when $V$ is the dual of another normed space), then $B$ is now compact by the Banach-Alaoglu theorem, but is no longer a neighbourhood of the identity in this topology. In fact, we have the following result:

Theorem 1 Every locally compact Hausdorff topological vector space is finite-dimensional.

The first proof of this theorem that I am aware of is by André Weil. There is also a related result:

Theorem 2 Every finite-dimensional Hausdorff topological vector space has the usual topology.

As a corollary, every locally compact Hausdorff topological vector space is in fact isomorphic to $\mathbb{R}^n$ with the usual topology for some $n$. This can be viewed as a very special case of the theorem of Gleason, which is a key component of the solution to Hilbert’s fifth problem, that a locally compact group $G$ with *no small subgroups* (in the sense that there is a neighbourhood of the identity that contains no non-trivial subgroups) is necessarily isomorphic to a Lie group. Indeed, Theorem 1 is in fact used in the proof of Gleason’s theorem (the rough idea being to first locate a “tangent space” to $G$ at the origin, with the tangent vectors described by “one-parameter subgroups” of $G$, and show that this space is a locally compact Hausdorff topological vector space, and hence finite dimensional by Theorem 1).

Theorem 2 may seem devoid of content, but it does contain some subtleties, as it hinges crucially on the *joint* continuity of the vector space operations $+: V \times V \rightarrow V$ and $\cdot: \mathbb{R} \times V \rightarrow V$, and not just on the separate continuity in each coordinate. Consider for instance the one-dimensional vector space $\mathbb{R}$ with the *co-compact* topology (a non-empty set is open iff its complement is compact in the usual topology). In this topology, the space is a $T_1$ space (though not Hausdorff), the scalar multiplication map is jointly continuous as long as the scalar is not zero, and the addition map is continuous in each coordinate (i.e. translations are continuous), but not jointly continuous; for instance, the set $\{(x,y) \in \mathbb{R}^2 : x + y \neq 0\}$ does not contain a non-trivial Cartesian product of two sets that are open in the co-compact topology. So this is not a counterexample to Theorem 2. Similarly for the cocountable or cofinite topologies on $\mathbb{R}$ (the latter topology, incidentally, is the same as the Zariski topology on $\mathbb{R}$).

Another near-counterexample comes from the topology of $\mathbb{R}$ inherited by pulling back the usual topology on the unit circle $\mathbb{R}/\mathbb{Z}$. Admittedly, this pullback topology is not quite Hausdorff, but the addition map is jointly continuous. On the other hand, the scalar multiplication map is not continuous at all. A slight variant of this topology comes from pulling back the usual topology on the torus $(\mathbb{R}/\mathbb{Z})^2$ under the map $x \mapsto (x, \alpha x) \bmod \mathbb{Z}^2$ for some irrational $\alpha$; this restores the Hausdorff property, and addition is still jointly continuous, but multiplication remains discontinuous.

As some final examples, consider $\mathbb{R}$ with the discrete topology; here, the topology is Hausdorff, addition is jointly continuous, and every dilation $x \mapsto \lambda x$ is continuous, but multiplication is not jointly continuous. If one instead gives $\mathbb{R}$ the half-open topology, then again the topology is Hausdorff and addition is jointly continuous, but scalar multiplication is only jointly continuous once one restricts the scalar to be non-negative.

Below the fold, I record the textbook proof of Theorem 2 and Theorem 1. There is nothing particularly original in this presentation, but I wanted to record it here for my own future reference, and perhaps these results will also be of interest to some other readers.

If $f: \mathbb{R}^n \rightarrow \mathbb{C}$ is a locally integrable function, we define the Hardy-Littlewood maximal function $Mf$ by the formula

$$Mf(x) := \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\, dy,$$

where $B(x,r)$ is the ball of radius $r$ centred at $x$, and $|E|$ denotes the measure of a set $E$. The *Hardy-Littlewood maximal inequality* asserts that

$$|\{x \in \mathbb{R}^n : Mf(x) > \lambda\}| \leq \frac{C_n}{\lambda} \|f\|_{L^1(\mathbb{R}^n)} \qquad (1)$$

for all $f \in L^1(\mathbb{R}^n)$, all $\lambda > 0$, and some constant $C_n > 0$ depending only on $n$. By a standard density argument, this implies in particular that we have the *Lebesgue differentiation theorem*

$$\lim_{r \rightarrow 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} f(y)\, dy = f(x)$$

for all $f \in L^1(\mathbb{R}^n)$ and almost every $x \in \mathbb{R}^n$. See for instance my lecture notes on this topic.
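For intuition, one can discretise the one-dimensional maximal function and observe the weak-type behaviour (1) directly; the following is a rough sketch (the discretisation, with windows truncated at the boundary of the grid, is our own choice):

```python
import numpy as np

def maximal_function(f, dx):
    """Centred Hardy-Littlewood maximal function of a function sampled on a
    uniform 1-d grid with spacing dx (windows are truncated at the boundary)."""
    n = len(f)
    Mf = np.zeros(n)
    for i in range(n):
        best = 0.0
        for r in range(1, n):  # radius r * dx
            lo, hi = max(0, i - r), min(n, i + r + 1)
            best = max(best, float(np.abs(f[lo:hi]).mean()))
        Mf[i] = best
    return Mf

# f = indicator of an interval of length 1 on a grid for [0, 10]:
dx = 0.05
f = np.zeros(200)
f[90:110] = 1.0
Mf = maximal_function(f, dx)
```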

By combining the Hardy-Littlewood maximal inequality with the Marcinkiewicz interpolation theorem (and the trivial inequality $\|Mf\|_{L^\infty} \leq \|f\|_{L^\infty}$) we see that

$$\|Mf\|_{L^p(\mathbb{R}^n)} \leq C_{p,n} \|f\|_{L^p(\mathbb{R}^n)} \qquad (2)$$

for all $p > 1$ and $f \in L^p(\mathbb{R}^n)$, and some constant $C_{p,n}$ depending on $p$ and $n$.

The exact dependence of $C_{p,n}$ on $p$ and $n$ is still not completely understood. The standard Vitali-type covering argument used to establish (1) has an exponential dependence on dimension, giving a constant of the form $C_n = C^n$ for some absolute constant $C > 1$. Inserting this into the Marcinkiewicz theorem, one obtains a constant of the form $C_{p,n} = \frac{C^{n/p}}{p-1}$ for some $C > 1$ (and taking $p$ bounded away from infinity, for simplicity). The dependence on $p$ is about right, but the dependence on $n$ should not be exponential.

In 1982, Stein gave an elegant argument (with full details appearing in a subsequent paper of Stein and Strömberg), based on the Calderón-Zygmund method of rotations, to eliminate the dependence on the dimension $n$:

Theorem 1 For any $p > 1$, one can take $C_{p,n} = C_p$ for some constant $C_p$ depending only on $p$.

The argument is based on an earlier bound of Stein from 1976 on the *spherical maximal function*

$$M_S f(x) := \sup_{r > 0} A_r |f|(x),$$

where $A_r$ are the spherical averaging operators

$$A_r f(x) := \int_{S^{n-1}} f(x + r\omega)\, d\sigma^{n-1}(\omega)$$

and $d\sigma^{n-1}$ is normalised surface measure on the sphere $S^{n-1}$. Because this is an uncountable supremum, and the averaging operators $A_r$ do not have good continuity properties in $r$, it is not *a priori* obvious that $M_S f$ is even a measurable function for, say, locally integrable $f$; but we can avoid this technical issue, at least initially, by restricting attention to continuous functions $f$. The Stein maximal theorem for the spherical maximal function then asserts that if $n \geq 3$ and $p > \frac{n}{n-1}$, then we have

$$\|M_S f\|_{L^p(\mathbb{R}^n)} \leq C_{p,n} \|f\|_{L^p(\mathbb{R}^n)} \qquad (3)$$

for all (continuous) $f$. We will sketch a proof of this theorem below the fold. (Among other things, one can use this bound to show the pointwise convergence $\lim_{r \rightarrow 0} A_r f(x) = f(x)$ of the spherical averages for any $f \in L^p(\mathbb{R}^n)$ when $n \geq 3$ and $p > \frac{n}{n-1}$, although we will not focus on this application here.)

The condition $p > \frac{n}{n-1}$ can be seen to be necessary as follows. Take $f$ to be any fixed non-zero bump function. A brief calculation then shows that $M_S f(x)$ decays like $|x|^{1-n}$ as $|x| \rightarrow \infty$, and hence $M_S f$ does not lie in $L^p(\mathbb{R}^n)$ unless $p > \frac{n}{n-1}$. By taking $f$ to be a rescaled bump function supported on a small ball, one can show that the condition $p > \frac{n}{n-1}$ is necessary even if we replace $\mathbb{R}^n$ with a compact region (and similarly restrict the radius parameter $r$ to be bounded). The condition $n \geq 3$ however is not quite necessary; the result is also true when $n = 2$ and $p > 2$, but this turned out to be a more difficult result, obtained first by Bourgain, with a simplified proof (based on the local smoothing properties of the wave equation) later given by Mockenhaupt-Seeger-Sogge.

The Hardy-Littlewood maximal operator $Mf$, which involves averaging over balls, is clearly related to the spherical maximal operator $M_S f$, which averages over spheres. Indeed, by using polar co-ordinates, one easily verifies the pointwise inequality

$$Mf(x) \leq M_S f(x)$$

for any (continuous) $f$, which intuitively reflects the fact that one can think of a ball as an average of spheres. Thus, we see that the spherical maximal inequality (3) implies the Hardy-Littlewood maximal inequality (2) with the same constant $C_{p,n}$. (This implication is initially only valid for continuous functions, but one can then extend the inequality (2) to the rest of $L^p(\mathbb{R}^n)$ by a standard limiting argument.)

At first glance, this observation does not immediately establish Theorem 1 for two reasons. Firstly, Stein’s spherical maximal theorem is restricted to the case when $n \geq 3$ and $p > \frac{n}{n-1}$; and secondly, the constant $C_{p,n}$ in that theorem still depends on dimension $n$. The first objection can be easily disposed of, for if $p > 1$, then the hypotheses $n \geq 3$ and $p > \frac{n}{n-1}$ will automatically be satisfied for $n$ sufficiently large (depending on $p$); note that the case when $n$ is bounded (with a bound depending on $p$) is already handled by the classical maximal inequality (2).

We still have to deal with the second objection, namely that the constant $C_{p,n}$ in (3) depends on $n$. However, here we can use the method of rotations to show that the constants $C_{p,n}$ can be taken to be non-increasing (and hence bounded) in $n$. The idea is to view high-dimensional spheres as an average of rotated low-dimensional spheres. We illustrate this with a demonstration that $C_{p,n+1} \leq C_{p,n}$, in the sense that any bound of the form

$$\|M_S f\|_{L^p(\mathbb{R}^n)} \leq A \|f\|_{L^p(\mathbb{R}^n)} \qquad (4)$$

for the $n$-dimensional spherical maximal function, implies the same bound

$$\|M_S f\|_{L^p(\mathbb{R}^{n+1})} \leq A \|f\|_{L^p(\mathbb{R}^{n+1})} \qquad (5)$$

for the $(n+1)$-dimensional spherical maximal function, with exactly the same constant $A$. For any direction $\omega_0 \in S^n \subset \mathbb{R}^{n+1}$, consider the averaging operators

$$M_S^{\omega_0} f(x) := \sup_{r > 0} A_r^{\omega_0} |f|(x)$$

for any continuous $f: \mathbb{R}^{n+1} \rightarrow \mathbb{C}$, where

$$A_r^{\omega_0} f(x) := \int_{S^{n-1}} f(x + r U_{\omega_0} \omega)\, d\sigma^{n-1}(\omega),$$

where $U_{\omega_0}$ is some orthogonal transformation mapping the sphere $S^{n-1}$ to the sphere $S^{n-1}_{\omega_0} := \{\omega \in S^n : \omega \perp \omega_0\}$; the exact choice of orthogonal transformation is irrelevant due to the rotation-invariance of surface measure on the sphere $S^{n-1}$. A simple application of Fubini’s theorem (after first rotating $\omega_0$ to be, say, the standard unit vector $e_{n+1}$) using (4) then shows that

$$\|M_S^{\omega_0} f\|_{L^p(\mathbb{R}^{n+1})} \leq A \|f\|_{L^p(\mathbb{R}^{n+1})} \qquad (6)$$

uniformly in $\omega_0$. On the other hand, by viewing the $n$-dimensional sphere $S^n$ as an average of the spheres $S^{n-1}_{\omega_0}$, we have the identity

$$A_r f(x) = \int_{S^n} A_r^{\omega_0} f(x)\, d\sigma^n(\omega_0);$$

indeed, one can deduce this from the uniqueness of Haar measure by noting that both the left-hand side and right-hand side are invariant means of $f$ on the sphere $\{y \in \mathbb{R}^{n+1} : |y - x| = r\}$. This implies that

$$M_S f(x) \leq \int_{S^n} M_S^{\omega_0} f(x)\, d\sigma^n(\omega_0)$$

and thus by Minkowski’s inequality for integrals, we may deduce (5) from (6).

Remark 1 Unfortunately, the method of rotations does not work to show that the constant $C_n$ for the weak $(1,1)$ inequality (1) is independent of dimension, as the weak $L^1$ quasinorm $\|\cdot\|_{L^{1,\infty}}$ is not a genuine norm and does not obey the Minkowski inequality for integrals. Indeed, the question of whether $C_n$ in (1) can be taken to be independent of dimension remains open. The best known positive result is due to Stein and Strömberg, who showed that one can take $C_n = Cn$ for some absolute constant $C$, by comparing the Hardy-Littlewood maximal function with the heat kernel maximal function

$$\sup_{t > 0} e^{t\Delta} |f|(x).$$

The abstract semigroup maximal inequality of Dunford and Schwartz (discussed for instance in these lecture notes of mine) shows that the heat kernel maximal function is of weak-type $(1,1)$ with a constant of $O(1)$, and this can be used, together with a comparison argument, to give the Stein-Strömberg bound. In the converse direction, it is a recent result of Aldaz that if one replaces the balls $B(x,r)$ with cubes, then the weak $(1,1)$ constant $C_n$ must go to infinity as $n \rightarrow \infty$.

I recently reposted my favourite logic puzzle, namely the blue-eyed islander puzzle. I am fond of this puzzle because in order to properly understand the correct solution (and to properly understand why the alternative solution is incorrect), one has to think very clearly (but unintuitively) about the nature of knowledge.

There is however an additional subtlety to the puzzle that was pointed out in comments, in that the correct solution to the puzzle has two components, a (necessary) upper bound and a (possible) lower bound (I’ll explain this further below the fold, in order to avoid blatantly spoiling the puzzle here). Only the upper bound is correctly explained in the puzzle (and even then, there are some slight inaccuracies, as will be discussed below). The lower bound, however, is substantially more difficult to establish, in part because the bound is merely possible and not necessary. Ultimately, this is because to demonstrate the upper bound, one merely has to show that a certain statement is logically deducible from an islander’s state of knowledge, which can be done by presenting an appropriate chain of logical deductions. But to demonstrate the lower bound, one needs to show that certain statements are *not* logically deducible from an islander’s state of knowledge, which is much harder, as one has to rule out *all* possible chains of deductive reasoning that could arrive at this particular conclusion. In fact, to rigorously establish such impossibility statements, one ends up having to leave the “syntactic” side of logic (deductive reasoning), and move instead to the dual “semantic” side of logic (creation of models). As we shall see, semantics requires substantially more mathematical setup than syntax, and the demonstration of the lower bound will therefore be much lengthier than that of the upper bound.

To complicate things further, the particular logic that is used in the blue-eyed islander puzzle is not the same as the logics that are commonly used in mathematics, namely propositional logic and first-order logic. Because the logical reasoning here depends so crucially on the concept of knowledge, one must work instead with an epistemic logic (or more precisely, an *epistemic modal logic*) which can properly work with, and model, the knowledge of various agents. To add even more complication, the role of time is also important (an islander may not know a certain fact on one day, but learn it on the next day), so one also needs to incorporate the language of temporal logic in order to fully model the situation. This makes both the syntax and semantics of the logic quite intricate; to see this, one only needs to contemplate the task of programming a computer with enough epistemic and temporal deductive reasoning powers that it would be able to solve the islander puzzle (or even a smaller version thereof, say with just three or four islanders) without being deliberately “fed” the solution. (The fact that humans can grasp the correct solution without any formal logical training is therefore quite remarkable.)

As difficult as the syntax of temporal epistemic modal logic is, though, the semantics is more intricate still. For instance, it turns out that in order to completely model the epistemic state of a finite number of agents (such as 1000 islanders), one requires an *infinite* model, due to the existence of arbitrarily long nested chains of knowledge (e.g. “ knows that knows that knows that has blue eyes”), which cannot be automatically reduced to shorter chains of knowledge. Furthermore, because each agent has only an incomplete knowledge of the world, one must take into account multiple *hypothetical worlds*, which differ from the real world but which are considered to be possible worlds by one or more agents, thus introducing modality into the logic. More subtly, one must also consider worlds which each agent knows to be impossible, but are not commonly known to be impossible, so that (for instance) one agent is willing to admit the possibility that another agent considers that world to be possible; it is the consideration of such worlds which is crucial to the resolution of the blue-eyed islander puzzle. And this is even before one adds the temporal aspect (e.g. “On Tuesday, knows that on Monday, knew that by Wednesday, will know that has blue eyes”).

Despite all this fearsome complexity, it *is* still possible to set up both the syntax and semantics of temporal epistemic modal logic in such a way that one can formulate the blue-eyed islander problem rigorously, and in such a way that one has both an upper and a lower bound in the solution. The purpose of this post is to construct such a setup and to explain the lower bound in particular. The same logic is also useful for analysing another well-known paradox, the unexpected hanging paradox, and I will do so at the end of the post. Note though that there is more than one way to set up epistemic logics, and they are not all equivalent to each other.

(On the other hand, for puzzles such as the islander puzzle in which there are only a finite number of atomic propositions and no free variables, one at least can avoid the need to admit predicate logic, in which one has to discuss quantifiers such as and . A fully formed predicate temporal epistemic modal logic would indeed be of terrifying complexity.)

Our approach here will be a little different from the approach commonly found in the epistemic logic literature, in which one jumps straight to “arbitrary-order epistemic logic” in which arbitrarily long nested chains of knowledge (“ knows that knows that knows that …”) are allowed. Instead, we will adopt a hierarchical approach, recursively defining for a “-order epistemic logic” in which knowledge chains of depth up to , but no greater, are permitted. The arbitrary-order epistemic logic is then obtained as a limit (a direct limit on the syntactic side, and an inverse limit on the semantic side, which is dual to the syntactic side) of the finite order epistemic logics.
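One can get a feel for the possible-worlds semantics, before any formalism, with a brute-force simulation of a small islander puzzle. The sketch below is my own illustration (a standard Kripke-style world model, not the hierarchical syntax developed in this post): each world is a tuple of eye colours, an islander considers possible the worlds that agree with everything they can see, and at the end of each day on which nobody leaves, the worlds in which somebody would have deduced their own colour are eliminated.

```python
from itertools import product

def blue_eyed_days(n_blue, n_brown):
    """Day on which the blue-eyed islanders leave, after the public
    announcement that at least one islander has blue eyes."""
    n = n_blue + n_brown
    actual = tuple([True] * n_blue + [False] * n_brown)   # True = blue
    # worlds still considered possible by everyone after the announcement
    worlds = {w for w in product([True, False], repeat=n) if any(w)}

    def knows_own_colour(i, w, worlds):
        # worlds islander i cannot distinguish from w: same eyes for everyone else
        candidates = {v[i] for v in worlds
                      if all(v[j] == w[j] for j in range(n) if j != i)}
        return len(candidates) == 1

    day = 0
    while True:
        day += 1
        leavers = [i for i in range(n) if knows_own_colour(i, actual, worlds)]
        if leavers:
            return day, leavers
        # nobody left today: every world in which someone would have deduced
        # their own colour is now commonly known to be impossible
        worlds = {w for w in worlds
                  if not any(knows_own_colour(i, w, worlds) for i in range(n))}
```

For instance, `blue_eyed_days(3, 2)` returns `(3, [0, 1, 2])`: with three blue-eyed islanders, exactly the blue-eyed ones leave on day three. Note that the state space is exponential in the number of islanders, which already hints at why the full 1000-islander puzzle calls for actual logic rather than enumeration.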

I should warn that this is going to be a rather formal and mathematical post. Readers who simply want to know the answer to the islander puzzle would probably be better off reading the discussion at the puzzle’s own blog post instead.

A topological space is said to be *metrisable* if one can find a metric on it whose open balls generate the topology.

There are some obvious necessary conditions on the space in order for it to be metrisable. For instance, it must be Hausdorff, since all metric spaces are Hausdorff. It must also be first countable, because every point in a metric space has a countable neighbourhood base of balls , .

In the converse direction, being Hausdorff and first countable is not always enough to guarantee metrisability, for a variety of reasons. For instance the long line is not metrisable despite being both Hausdorff and first countable, due to a failure of paracompactness, which prevents one from gluing together the local metric structures on this line into a global one. Even after adding in paracompactness, this is still not enough; the real line with the lower limit topology (also known as the *Sorgenfrey line*) is Hausdorff, first countable, and paracompact, but still not metrisable (because of a failure of second countability despite being separable).

However, there is one important setting in which the Hausdorff and first countability axioms *do* suffice to give metrisability, and that is the setting of topological groups:

Theorem 1 (Birkhoff-Kakutani theorem). Let be a topological group (i.e. a topological space that is also a group, such that the group operations and are continuous). Then is metrisable if and only if it is both Hausdorff and first countable.

Remark 1. It is not hard to show that a topological group is Hausdorff if and only if the singleton set is closed. More generally, in an arbitrary topological group, it is a good exercise to show that the closure of is always a closed normal subgroup of , whose quotient is then a Hausdorff topological group. Because of this, the study of topological groups can usually be reduced immediately to the study of Hausdorff topological groups. (Indeed, in many texts, topological groups are automatically understood to be an abbreviation for “Hausdorff topological group”.)

The standard proof of the Birkhoff-Kakutani theorem (which we have taken from this book of Montgomery and Zippin) relies on the following Urysohn-type lemma:

Lemma 2 (Urysohn-type lemma). Let be a Hausdorff, first countable topological group. Then there exists a bounded continuous function with the following properties:

- (Unique maximum) , and for all .
- (Neighbourhood base) The sets for form a neighbourhood base at the identity.
- (Uniform continuity) For every , there exists an open neighbourhood of the identity such that for all and .

Note that if had a left-invariant metric, then the function would suffice for this lemma, which already gives some indication as to why this lemma is relevant to the Birkhoff-Kakutani theorem.

Let us assume Lemma 2 for now and finish the proof of the Birkhoff-Kakutani theorem. We only prove the difficult direction, namely that a Hausdorff first countable topological group is metrisable. We let be the function from Lemma 2, and define the function by the formula

where is the space of bounded continuous functions on (with the supremum norm) and is the left-translation operator .

Clearly obeys the identity and symmetry axioms, and the triangle inequality is also immediate. This already makes a pseudometric. In order for to be a genuine metric, what is needed is that have no non-trivial translation invariances, i.e. one has for all . But this follows since attains its maximum at exactly one point, namely the group identity .

To put it another way: because has no non-trivial translation invariances, the left translation action gives an embedding , and then inherits a metric from the metric structure on .

Now we have to check whether the metric actually generates the topology. This amounts to verifying two things. Firstly, that every ball in this metric is open; and secondly, that every open neighbourhood of a point contains a ball .

To verify the former claim, it suffices to show that the map from to is continuous, which follows from the uniform continuity hypothesis. The second claim follows easily from the neighbourhood base hypothesis, since if then .

Remark 2. The above argument in fact shows that if a group is metrisable, then it admits a left-invariant metric. The idea of using a suitable continuous function to generate a useful metric structure on a topological group is a powerful one, for instance underlying the Gleason lemmas which are fundamental to the solution of Hilbert’s fifth problem. I hope to return to this topic in a future post.
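As a concrete numerical illustration of this construction (all choices below, the group $({\bf R},+)$, the tent function in the role of the bump function from Lemma 2, and the discretisation grid, are mine and purely illustrative), the sup-norm distance between left translates really does behave like a left-invariant metric:

```python
import numpy as np

def phi(x):
    # continuous bump with a unique maximum at the identity 0 of (R, +)
    return np.maximum(0.0, 1.0 - np.abs(x))

xs = np.linspace(-5.0, 5.0, 20_001)      # discretisation grid, spacing 5e-4

def d(g, h):
    # sup-norm distance between the left translates of phi by g and by h
    return float(np.max(np.abs(phi(xs - g) - phi(xs - h))))

# pseudometric axioms plus left-invariance (translating both points by 1)
assert d(0.3, 0.3) == 0.0
assert abs(d(0.3, 0.7) - d(1.3, 1.7)) < 1e-9
assert d(0.1, 0.9) <= d(0.1, 0.5) + d(0.5, 0.9) + 1e-12
# positivity: distinct points are separated, because phi has a unique maximum
assert d(0.0, 0.25) > 0.1
```

Of course $({\bf R},+)$ already carries an obvious invariant metric; the point of the sketch is only to make the abstract recipe (embed the group into bounded continuous functions via translation, then pull back the sup metric) tangible.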

Now we prove Lemma 2. By first countability, we can find a countable neighbourhood base

of the identity. As is Hausdorff, we must have

Using the continuity of the group axioms, we can recursively find a sequence of nested open neighbourhoods of the identity

such that each is symmetric (i.e. if and only if ), is contained in , and is such that for each . In particular the are also a neighbourhood base of the identity with

For every dyadic rational in , we can now define the open sets by setting

where is the binary expansion of with . By repeated use of the hypothesis we see that the are increasing in ; indeed, we have the inclusion

We now set

with the understanding that if the supremum is over the empty set. One easily verifies using (4) that is continuous, and furthermore obeys the uniform continuity property. The neighbourhood base property follows since the are a neighbourhood base of the identity, and the unique maximum property follows from (3). This proves Lemma 2.

Remark 3. A very similar argument to the one above also establishes that every topological group is completely regular.

Notice that the function constructed in the above argument was localised to the set . As such, it is not difficult to localise the Birkhoff-Kakutani theorem to *local groups*. A local group is a topological space equipped with an identity , a *partially defined* inversion operation , and a *partially defined* product operation , where , are open subsets of and , obeying the following restricted versions of the group axioms:

- (Continuity) and are continuous on their domains of definition.
- (Identity) For any , and are well-defined and equal to .
- (Inverse) For any , and are well-defined and equal to . is well-defined and equal to .
- (Local associativity) If are such that , , , and are all well-defined, then .

Informally, one can view a local group as a topological group in which the closure axiom has been almost completely dropped, but with all the other axioms retained. A basic way to generate a local group is to start with an ordinary topological group and restrict it to an open neighbourhood of the identity, with and . However, this is not quite the only way to generate local groups (ultimately because the local associativity axiom does not necessarily imply a (stronger) global associativity axiom in which one considers two different ways to multiply more than three group elements together).
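A toy model of the restriction construction just described is easy to write out explicitly: take the ordinary group $({\bf R},+)$ and restrict it to the open neighbourhood $(-1,1)$ of the identity. The sketch below (my own illustrative code, not from the post) makes the product partially defined and checks the local group axioms on the domains where everything is defined:

```python
# Local group obtained by restricting (R, +) to the open set U = (-1, 1):
# the product x*y = x + y is only defined when the sum stays inside U.
LO, HI = -1.0, 1.0

def mult(x, y):
    s = x + y
    return s if LO < s < HI else None    # partially defined product

def inv(x):
    return -x                            # inversion happens to be total here

# identity axiom
assert mult(0.0, 0.375) == 0.375 and mult(0.375, 0.0) == 0.375
# inverse axiom
assert mult(0.375, inv(0.375)) == 0.0
# closure fails: the product can be undefined
assert mult(0.7, 0.6) is None
# local associativity, on a triple where all four products are defined
x, y, z = 0.25, -0.125, 0.375
assert mult(mult(x, y), z) == mult(x, mult(y, z)) == 0.5
```

(The test values are exact dyadic rationals so that floating-point addition is exact; the example is of course globally associative, unlike the more exotic local groups alluded to above.)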

Remark 4. Another important example of a local group is that of a group chunk, in which the sets and are somehow “generic”; for instance, could be an algebraic variety, Zariski-open, and the group operations birational on their domains of definition. This is somewhat analogous to the notion of a “ group” in additive combinatorics. There are a number of group chunk theorems, starting with a theorem of Weil in the algebraic setting, which roughly speaking assert that a generic portion of a group chunk can be identified with the generic portion of a genuine group.

We then have

Theorem 3 (Birkhoff-Kakutani theorem for local groups). Let be a local group which is Hausdorff and first countable. Then there exists an open neighbourhood of the identity which is metrisable.

*Proof:* (Sketch) It is not difficult to see that in a local group , one can find a symmetric neighbourhood of the identity such that the product of any (say) elements of (multiplied together in any order) are well-defined, which effectively allows us to treat elements of as if they belonged to a group for the purposes of simple algebraic manipulation, such as applying the cancellation laws for . Inside this , one can then repeat the previous arguments and eventually end up with a continuous function supported in obeying the conclusions of Lemma 2 (but in the uniform continuity conclusion, one has to restrict to, say, , to avoid issues of ill-definedness). The definition (1) then gives a metric on with the required properties, where we make the convention that vanishes for (say) and .

My motivation for studying local groups is that it turns out that there is a correspondence (first observed by Hrushovski) between the concept of an approximate group in additive combinatorics, and a locally compact local group in topological group theory; I hope to discuss this correspondence further in a subsequent post.

Suppose one has a measure space and a sequence of operators that are bounded on some space, with . Suppose that on some dense subclass of functions in (e.g. continuous compactly supported functions, if the space is reasonable), one already knows that converges pointwise almost everywhere to some limit , for another bounded operator (e.g. could be the identity operator). What additional ingredient does one need to pass to the limit and conclude that converges almost everywhere to for *all* in (and not just for in a dense subclass)?

One standard way to proceed here is to study the *maximal operator*

and aim to establish a *weak-type maximal inequality*

for all (or all in the dense subclass), and some constant , where is the weak norm

A standard approximation argument using (1) then shows that will now indeed converge to pointwise almost everywhere for all in , and not just in the dense subclass. See for instance these lecture notes of mine, in which this method is used to deduce the Lebesgue differentiation theorem from the Hardy-Littlewood maximal inequality. This is by now a very standard approach to establishing pointwise almost everywhere convergence theorems, but it is natural to ask whether it is strictly necessary. In particular, is it possible to have a pointwise convergence result without being able to obtain a weak-type maximal inequality of the form (1)?
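To make the weak norm concrete: the function $f(x)=1/|x|$ is the standard example of a function lying in weak $L^1$ but not in $L^1$, since $\lambda\,\mu(\{|f|>\lambda\})=2$ for every $\lambda>0$. The following numerical sketch (my own, on a truncated grid chosen for illustration) computes the weak quasinorm by brute force:

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 2_000_001)
dx = xs[1] - xs[0]                     # grid spacing 1e-5
xs = xs[np.abs(xs) > dx / 2]           # drop the grid point at the origin
f = 1.0 / np.abs(xs)                   # f(x) = 1/|x|, truncated to [-10, 10]

def weak_l1(f, lams):
    # ||f||_{L^{1,infty}} = sup_lambda  lambda * mu({ |f| > lambda })
    return max(lam * dx * np.count_nonzero(np.abs(f) > lam) for lam in lams)

# lambda * mu({|f| > lambda}) = 2 at every tested level: f lies in weak L^1
assert abs(weak_l1(f, [0.5, 1.0, 2.0, 5.0, 10.0]) - 2.0) < 0.01
# but the L^1 norm diverges logarithmically as the grid is refined;
# already on this grid the discrete integral is large
assert dx * np.sum(f) > 20.0
```

This is exactly the gap that makes weak-type maximal inequalities such as (1) weaker, and often easier to prove, than their strong-type counterparts, while still sufficing for the approximation argument above.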

In the case of *norm* convergence (in which one asks for to converge to in the norm, rather than in the pointwise almost everywhere sense), the answer is no, thanks to the uniform boundedness principle, which among other things shows that norm convergence is only possible if one has the uniform bound

for some and all ; and conversely, if one has the uniform bound, and one has already established norm convergence of to on a dense subclass of , then (2) will extend that norm convergence to all of .

Returning to pointwise almost everywhere convergence, the answer in general is “yes”. Consider for instance the rank one operators

from to . It is clear that converges pointwise almost everywhere to zero as for any , and the operators are uniformly bounded on , but the maximal function does not obey (1). One can modify this example in a number of ways to defeat almost any reasonable conjecture that something like (1) should be necessary for pointwise almost everywhere convergence.

In spite of this, a remarkable observation of Stein, now known as *Stein’s maximal principle*, asserts that the maximal inequality *is* necessary to prove pointwise almost everywhere convergence, if one is working on a compact group and the operators are translation invariant, and if the exponent is at most :

Theorem 1 (Stein maximal principle). Let be a compact group, let be a homogeneous space of with a finite Haar measure , let , and let be a sequence of bounded linear operators commuting with translations, such that converges pointwise almost everywhere for each . Then (1) holds.

This is not quite the most general version of the principle; some additional variants and generalisations are given in the original paper of Stein. For instance, one can replace the discrete sequence of operators with a continuous sequence without much difficulty. As a typical application of this principle, we see that Carleson’s celebrated theorem that the partial Fourier series of an function converge almost everywhere is in fact equivalent to the estimate

And unsurprisingly, most of the proofs of this (difficult) theorem have proceeded by first establishing (3), and Stein’s maximal principle strongly suggests that this is the optimal way to try to prove this theorem.

On the other hand, the theorem does fail for , and almost everywhere convergence results in for can be proven by other methods than weak estimates. For instance, the convergence of Bochner-Riesz multipliers in for any (and for in the range predicted by the Bochner-Riesz conjecture) was verified for by Carbery, Rubio de Francia, and Vega, despite the fact that the weak of even a *single* Bochner-Riesz multiplier, let alone the maximal function, has still not been completely verified in this range. (Carbery, Rubio de Francia and Vega use weighted estimates for the maximal Bochner-Riesz operator, rather than type estimates.) For , though, Stein’s principle (after localising to a torus) does apply, and pointwise almost everywhere convergence of Bochner-Riesz means is equivalent to the weak estimate (1).

Stein’s principle is restricted to compact groups (such as the torus or the rotation group ) and their homogeneous spaces (such as the torus again, or the sphere ). As stated, the principle fails in the noncompact setting; for instance, in , the convolution operators are such that converges pointwise almost everywhere to zero for every , but the maximal function is not of weak-type . However, in many applications on non-compact domains, the are “localised” enough that one can transfer from a non-compact setting to a compact setting and then apply Stein’s principle. For instance, Carleson’s theorem on the real line is equivalent to Carleson’s theorem on the circle (due to the localisation of the Dirichlet kernels), which as discussed before is equivalent to the estimate (3) on the circle, which by a scaling argument is equivalent to the analogous estimate on the real line .

Stein’s argument from his 1961 paper can be viewed nowadays as an application of the probabilistic method; starting with a sequence of increasingly bad counterexamples to the maximal inequality (1), one randomly combines them together to create a single “infinitely bad” counterexample. To make this idea work, Stein employs two basic ideas:

- The *random rotations (or random translations) trick*. Given a subset of of small but positive measure, one can randomly select about translates of that cover most of .
- The *random sums trick*. Given a collection of signed functions that may possibly cancel each other in a deterministic sum , one can perform a random sum instead to obtain a random function whose magnitude will usually be comparable to the square function ; this can be made rigorous by concentration of measure results, such as Khintchine’s inequality.

These ideas have since been used repeatedly in harmonic analysis. For instance, I used the random rotations trick in a recent paper with Jordan Ellenberg and Richard Oberlin on Kakeya-type estimates in finite fields. The random sums trick is by now a standard tool to build various counterexamples to estimates (or to convergence results) in harmonic analysis, for instance being used by Fefferman in his famous paper disproving the boundedness of the ball multiplier on for , . Another use of the random sum trick is to show that Theorem 1 fails once ; see Stein’s original paper for details.
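The random sums trick is easy to see in action numerically. In the scalar sketch below (my own construction; the coefficients $a_i = 1/i$ are an arbitrary illustrative choice), attaching random signs to the coefficients produces a sum whose typical magnitude is comparable to the square function $(\sum_i a_i^2)^{1/2}$, in line with Khintchine's inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0 / np.arange(1, 101)                  # coefficients a_i = 1/i
square_fn = np.sqrt(np.sum(a ** 2))          # the square function (sum a_i^2)^{1/2}

eps = rng.choice([-1.0, 1.0], size=(100_000, a.size))   # i.i.d. random signs
sums = np.abs(eps @ a)                        # |sum_i eps_i a_i|, many samples

# the second moment equals the square function squared exactly
# (orthogonality of independent signs); check it empirically
assert abs(np.mean(sums ** 2) - square_fn ** 2) < 0.05 * square_fn ** 2
# and the typical (median) magnitude is comparable to the square function,
# with no cancellation down to zero
assert 0.5 * square_fn < np.median(sums) < 1.1 * square_fn
```

By contrast, the deterministic alternating sum $\sum_i (-1)^i a_i$ of these coefficients is much smaller than the square function; the randomisation is what rules out such conspiratorial cancellation for "most" sign patterns.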

Another use of the random rotations trick, closely related to Theorem 1, is the *Nikishin-Stein factorisation theorem*. Here is Stein’s formulation of this theorem:

Theorem 2 (Stein factorisation theorem). Let be a compact group, let be a homogeneous space of with a finite Haar measure , let and , and let be a bounded linear operator commuting with translations and obeying the estimate for all and some . Then also maps to , with

for all , with depending only on .

This result is trivial with , but becomes useful when . In this regime, the translation invariance allows one to freely “upgrade” a strong-type result to a weak-type result. In other words, bounded linear operators from to automatically factor through the inclusion , which helps explain the name “factorisation theorem”. Factorisation theory has been developed further by many authors, including Maurey and Pisier.

Stein’s factorisation theorem (or more precisely, a variant of it) is useful in the theory of Kakeya and restriction theorems in Euclidean space, as first observed by Bourgain.

In 1970, Nikishin obtained the following generalisation of Stein’s factorisation theorem in which the translation-invariance hypothesis can be dropped, at the cost of excluding a set of small measure:

Theorem 3 (Nikishin-Stein factorisation theorem). Let be a finite measure space, let and , and let be a bounded linear operator obeying the estimate for all and some . Then for any , there exists a subset of of measure at most such that

One can recover Theorem 2 from Theorem 3 by an averaging argument to eliminate the exceptional set; we omit the details.

Recall that a (complex) abstract Lie algebra is a complex vector space (either finite or infinite dimensional) equipped with a bilinear antisymmetric form that obeys the Jacobi identity

(One can of course define Lie algebras over other fields than the complex numbers , but in order to avoid some technical issues we shall work solely with the complex case in this post.)

An important special case of the abstract Lie algebras are the *concrete Lie algebras*, in which is a vector space of linear transformations on a vector space (which again can be either finite or infinite dimensional), and the bilinear form is given by the usual Lie bracket

It is easy to verify that every concrete Lie algebra is an abstract Lie algebra. In the converse direction, we have

Theorem 1. Every abstract Lie algebra is isomorphic to a concrete Lie algebra.

To prove this theorem, we introduce the useful algebraic tool of the universal enveloping algebra of the abstract Lie algebra . This is the free (associative, complex) algebra generated by (viewed as a complex vector space), subject to the constraints

This algebra is described by the Poincaré-Birkhoff-Witt theorem, which asserts that, given an ordered basis of as a vector space, a basis of is given by “monomials” of the form

where is a natural number, the are an increasing sequence of indices in , and the are positive integers. Indeed, given two such monomials, one can express their product as a finite linear combination of further monomials of the form (3) by repeatedly applying (2) (which we rewrite as ) to reorder the terms in the product modulo lower order terms, until all monomials have their indices in the required increasing order. It is then a routine exercise in basic abstract algebra (using all the axioms of an abstract Lie algebra) to verify that this multiplication rule on monomials does indeed define a complex associative algebra which has the universal properties required of the universal enveloping algebra.

The abstract Lie algebra acts on its universal enveloping algebra by left-multiplication: , thus giving a map from to . It is easy to verify that this map is a Lie algebra homomorphism (so this is indeed an action (or representation) of the Lie algebra), and this action is clearly faithful (i.e. the map from to is injective), since each element of maps the identity element of to a different element of , namely . Thus is isomorphic to its image in , proving Theorem 1.
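A concrete example of the "concrete Lie algebra" side of this theorem, acting on a finite-dimensional space rather than on the (infinite-dimensional) enveloping algebra, is $\mathfrak{sl}(2,{\bf C})$. The sketch below is my own numerical sanity check, using the standard basis $e, f, h$; it verifies the bracket relations and the Jacobi identity for the matrix bracket:

```python
import numpy as np

# sl(2, C): trace-zero 2x2 complex matrices, with standard basis e, f, h
e = np.array([[0, 1], [0, 0]], dtype=complex)
f = np.array([[0, 0], [1, 0]], dtype=complex)
h = np.array([[1, 0], [0, -1]], dtype=complex)

def br(a, b):
    # the concrete Lie bracket [a, b] = ab - ba
    return a @ b - b @ a

# structure relations: [h, e] = 2e, [h, f] = -2f, [e, f] = h
assert np.allclose(br(h, e), 2 * e)
assert np.allclose(br(h, f), -2 * f)
assert np.allclose(br(e, f), h)

# Jacobi identity [a,[b,c]] + [b,[c,a]] + [c,[a,b]] = 0, checked on the basis
for a in (e, f, h):
    for b in (e, f, h):
        for c in (e, f, h):
            jac = br(a, br(b, c)) + br(b, br(c, a)) + br(c, br(a, b))
            assert np.allclose(jac, 0)
```

By bilinearity it suffices to check the Jacobi identity on basis triples, which is what the triple loop does.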

In the converse direction, every representation of a Lie algebra “factors through” the universal enveloping algebra, in that it extends to an algebra homomorphism from to , which by abuse of notation we shall also call .

One drawback of Theorem 1 is that the space that the concrete Lie algebra acts on will almost always be infinite-dimensional, even when the original Lie algebra is finite-dimensional. However, there is a useful theorem of Ado that rectifies this:

Theorem 2 (Ado’s theorem). Every finite-dimensional abstract Lie algebra is isomorphic to a concrete Lie algebra over a finite-dimensional vector space .

Among other things, this theorem can be used (in conjunction with the Baker-Campbell-Hausdorff formula) to show that every abstract (finite-dimensional) Lie group (or abstract local Lie group) is locally isomorphic to a linear group. (It is well-known, though, that abstract Lie groups are not necessarily *globally* isomorphic to a linear group, but we will not discuss these global obstructions here.)

Ado’s theorem is surprisingly tricky to prove in general, but some special cases are easy. For instance, one can try using the adjoint representation of on itself, defined by the action ; the Jacobi identity (1) ensures that this is indeed a representation of . The kernel of this representation is the centre . This already gives Ado’s theorem in the case when is semisimple, in which case the centre is trivial.
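The obstruction just described, that the adjoint representation kills the centre, is visible in the smallest interesting example, the Heisenberg algebra. In the sketch below (my own illustration), the algebra has a faithful concrete realisation by strictly upper-triangular $3\times 3$ matrices, and yet its central element brackets to zero with everything, so the adjoint representation is not faithful:

```python
import numpy as np

def br(a, b):
    return a @ b - b @ a

# Heisenberg algebra: basis x = E_12, y = E_23, z = E_13 among the strictly
# upper-triangular 3x3 matrices, with [x, y] = z and z central
x = np.zeros((3, 3)); x[0, 1] = 1.0
y = np.zeros((3, 3)); y[1, 2] = 1.0
z = np.zeros((3, 3)); z[0, 2] = 1.0
assert np.allclose(br(x, y), z)

# z is central: it brackets to zero with every basis element, so ad(z) = 0
# even though z != 0, i.e. the adjoint representation fails to be faithful
# precisely on the centre
for w in (x, y, z):
    assert np.allclose(br(z, w), 0)
```

This is the simplest case where the direct-sum strategy in the text is needed: one representation (the matrix realisation) is faithful on the centre, and its direct sum with the adjoint representation is faithful on the whole algebra.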

The adjoint representation does not suffice, by itself, to prove Ado’s theorem in the non-semisimple case. However, it does provide an important reduction in the proof, namely it reduces matters to showing that every finite-dimensional Lie algebra has a finite-dimensional representation which is faithful on the centre . Indeed, if one has such a representation, one can then take the direct sum of that representation with the adjoint representation to obtain a new finite-dimensional representation which is now faithful on all of , which then gives Ado’s theorem for .

It remains to find a finite-dimensional representation of which is faithful on the centre . In the case when is abelian, so that the centre is all of , this is again easy, because then acts faithfully on by the infinitesimal shear maps . In matrix form, this representation identifies each in this abelian Lie algebra with an “upper-triangular” matrix:

This construction gives a faithful finite-dimensional representation of the centre of any finite-dimensional Lie algebra. The standard proof of Ado’s theorem (which I believe dates back to work of Harish-Chandra) then proceeds by gradually “extending” this representation of the centre to larger and larger sub-algebras of , while preserving the finite-dimensionality of the representation and the faithfulness on , until one obtains a representation on the entire Lie algebra with the required properties. (For technical inductive reasons, one also needs to carry along an additional property of the representation, namely that it maps the nilradical to nilpotent elements, but we will discuss this technicality later.)

This procedure is a little tricky to execute in general, but becomes simpler in the nilpotent case, in which the lower central series becomes trivial for sufficiently large :

Theorem 3 (Ado’s theorem for nilpotent Lie algebras). Let be a finite-dimensional nilpotent Lie algebra. Then there exists a finite-dimensional faithful representation of . Furthermore, there exists a natural number such that , i.e. one has for all .

The second conclusion of Ado’s theorem here is useful for induction purposes. (By Engel’s theorem, this conclusion is also equivalent to the assertion that every element of is nilpotent, but we can prove Theorem 3 without explicitly invoking Engel’s theorem.)

Below the fold, I give a proof of Theorem 3, and then extend the argument to cover the full strength of Ado’s theorem. This is not a new argument – indeed, I am basing this particular presentation on the one in Fulton and Harris – but it was an instructive exercise for me to try to extract the proof of Ado’s theorem from the more general structural theory of Lie algebras (e.g. Engel’s theorem, Lie’s theorem, Levi decomposition, etc.) in which the result is usually placed. (However, the proof I know of still needs Engel’s theorem to establish the solvable case, and the Levi decomposition to then establish the general case.)

Igor Rodnianski and I have just uploaded to the arXiv our paper “Effective limiting absorption principles, and applications“, submitted to Communications in Mathematical Physics. In this paper we derive limiting absorption principles (of type discussed in this recent post) for a general class of Schrödinger operators on a wide class of manifolds, namely the *asymptotically conic* manifolds. The precise definition of such manifolds is somewhat technical, but they include as a special case the *asymptotically flat* manifolds, which in turn include as a further special case the smooth compact perturbations of Euclidean space (i.e. the smooth Riemannian manifolds that are identical to outside of a compact set). The potential is assumed to be a *short range potential*, which roughly speaking means that it decays faster than as ; for several of the applications (particularly at very low energies) we need to in fact assume that is a *strongly short range potential*, which roughly speaking means that it decays faster than .

To begin with, we make no hypotheses about the topology or geodesic geometry of the manifold ; in particular, we allow to be *trapping* in the sense that it contains geodesic flows that do not escape to infinity, but instead remain trapped in a bounded subset of . We also allow the potential to be signed, which in particular allows bound states (eigenfunctions of negative energy) to be created. For standard technical reasons we restrict attention to dimensions three and higher: .

It is well known that such Schrödinger operators $H$ are essentially self-adjoint, and their spectrum consists of purely absolutely continuous spectrum on $[0,+\infty)$, together with possibly some eigenvalues at zero and negative energy (and at zero energy, in dimensions three and four, there is also the possibility of *resonances* which, while not strictly eigenvalues, have a somewhat analogous effect on the dynamics of the Laplacian and related objects, such as resolvents). In particular, the resolvents $R(\lambda \pm i\epsilon) := (H - \lambda \mp i\epsilon)^{-1}$ make sense as bounded operators on $L^2(M)$ for any $\lambda \in {\bf R}$ and $\epsilon > 0$. As discussed in the previous blog post, it is of interest to obtain bounds for the behaviour of these resolvents as $\epsilon \to 0$, as this can then be used via some functional calculus manipulations to obtain control on many other operators and PDE relating to the Schrödinger operator $H$, such as the Helmholtz equation, the time-dependent Schrödinger equation, and the wave equation. In particular, it is of interest to obtain *limiting absorption estimates* such as

$$\| R(\lambda \pm i\epsilon) f \|_{H^{0,-1/2-\sigma}(M)} \leq C(\lambda) \| f \|_{H^{0,1/2+\sigma}(M)} \ \ \ \ \ (1)$$

for $\epsilon > 0$ and $\lambda \in {\bf R}$ (and particularly in the positive energy regime $\lambda > 0$), where $\sigma > 0$ is fixed and $f$ is an arbitrary test function. The constant $C(\lambda)$ needs to be independent of $\epsilon$ for such estimates to be truly useful, but it is also of interest to determine the extent to which these constants depend on $\lambda$, $M$, and $V$. The dependence on $\sigma$ is relatively uninteresting and henceforth we will suppress it. In particular, our paper focused to a large extent on *quantitative* methods that could give *effective* bounds on $C(\lambda)$ in terms of quantities such as the magnitude of the potential $V$ in a suitable norm.
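
For context, two standard formulas (textbook facts, not results of the paper) indicate why boundary values of the resolvent are so central: Stone's formula recovers the spectral measure of $H$ from the jump of the resolvent across the real axis, and in the free case $V = 0$ on ${\bf R}^3$ the outgoing resolvent has an explicit kernel.

```latex
% Stone's formula: spectral measure from the resolvent jump
\frac{dE_\lambda}{d\lambda}
  = \lim_{\epsilon \to 0^+} \frac{1}{2\pi i}
    \bigl( R(\lambda + i\epsilon) - R(\lambda - i\epsilon) \bigr),
  \qquad R(z) := (H - z)^{-1}.

% Free outgoing resolvent on R^3 at energy \lambda = k^2 > 0
R_0(\lambda + i0) f(x)
  = \int_{{\bf R}^3} \frac{e^{i k |x - y|}}{4 \pi |x - y|}\, f(y)\, dy,
  \qquad k = \sqrt{\lambda}.
```

Bounding operators of the form $F(H) = \int F(\lambda)\, dE_\lambda$ then reduces, via the first formula, to bounding the resolvents uniformly in $\epsilon$, which is exactly what limiting absorption estimates provide.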

It turns out to be convenient to distinguish between three regimes:

- The *high-energy regime* $\lambda \gg 1$;
- The *medium-energy regime* $\lambda \sim 1$; and
- The *low-energy regime* $0 < \lambda \ll 1$.

Our methods actually apply more or less uniformly to all three regimes, but the nature of the conclusions is quite different in each of the three regimes.

The high-energy regime was essentially worked out by Burq, although we give an independent treatment of Burq’s results here. In this regime it turns out that we have an unconditional estimate of the form (1) with a constant of the shape

$$C(\lambda) \leq C e^{C \sqrt{\lambda}}$$
where $C$ is a constant that depends only on $M$ and on a parameter $A$ that controls the size of the potential $V$. This constant, while exponentially growing, is still finite, which among other things is enough to rule out the possibility that $H$ contains eigenfunctions (i.e. point spectrum) embedded in the high-energy portion of the spectrum. As is well known, if $M$ contains a certain type of trapped geodesic (in particular those arising from positively curved portions of the manifold, such as the equator of a sphere), then it is possible to construct *pseudomodes* that show that this sort of exponential growth is necessary. On the other hand, if we make the *non-trapping hypothesis* that all geodesics in $M$ escape to infinity, then we can obtain a much stronger high-energy limiting absorption estimate, namely

$$C(\lambda) \leq C \lambda^{-1/2}.$$
The exponent $-1/2$ here is closely related to the standard fact that on non-trapping manifolds, there is a local smoothing effect for the time-dependent Schrödinger equation that gains half a derivative of regularity (cf. this previous blog post). In the high-energy regime, the dynamics are well approximated by semi-classical methods, and in particular one can use tools such as the positive commutator method and pseudo-differential calculus to obtain the desired estimates. In the case of trapping one also needs the standard technique of Carleman inequalities to control the compact (and possibly trapping) core of the manifold, and in particular one needs the delicate two-weight Carleman inequalities of Burq.
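
To indicate the mechanism behind the positive commutator method (a schematic identity only; the actual semi-classical argument is more involved): if $u(t) = e^{-itH} u_0$ solves the time-dependent Schrödinger equation and $A$ is a self-adjoint observable, then

```latex
\frac{d}{dt} \langle u(t), A\, u(t) \rangle
  = \langle u(t), i[H, A]\, u(t) \rangle .
```

Thus a microlocal lower bound $i[H,A] \gtrsim c > 0$ (a Mourre-type estimate, available away from trapped sets) forces $\langle u, A u \rangle$ to grow, which is only compatible with the solution dispersing to infinity; integrating such identities is one route to resolvent and local smoothing bounds.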

In the medium and low energy regimes one needs to work harder. In the medium-energy regime $\lambda \sim 1$, we were able to obtain a uniform bound

$$C(\lambda) \leq C$$
for all asymptotically conic manifolds (trapping or not) and all short-range potentials. To establish this bound, we have to supplement the existing tools of the positive commutator method and Carleman inequalities with an additional ODE-type analysis of various energies of the solution to a Helmholtz equation on large spheres, as will be discussed in more detail below the fold.
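
As a one-dimensional caricature of this ODE-type energy analysis (a simplified model, not the actual argument on spheres): for solutions of the radial Helmholtz equation $-u'' + V u = \lambda u$, one can track the energy

```latex
E(r) := |u'(r)|^2 + \bigl( \lambda - V(r) \bigr)\, |u(r)|^2,
\qquad
\frac{dE}{dr} = -V'(r)\, |u(r)|^2 ,
```

so monotonicity or smallness of $V'$ gives monotonicity or near-conservation of $E$, letting one propagate control of the solution from the asymptotic region back into the core of the manifold.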

The methods also extend to the low-energy regime $0 < \lambda \ll 1$. Here, the bounds become somewhat interesting, with a subtle distinction between *effective* estimates that are uniform over all potentials $V$ which are bounded in a suitable sense by a parameter $A$ (e.g. obeying $|V(x)| \leq A \langle x \rangle^{-1-\epsilon}$ for all $x$), and *ineffective* estimates that exploit qualitative properties of $V$ (such as the absence of eigenfunctions or resonances at zero) and are thus not uniform over $V$. On the effective side, and for potentials that are strongly short range (at least at local scales $O(\lambda^{-1/2})$; one can tolerate merely short-range behaviour at more global scales, but this is a technicality that we will not discuss further here) we were able to obtain a polynomial bound of the form

$$C(\lambda) \leq C \lambda^{-C}$$
that blew up at a large polynomial rate at the origin. Furthermore, by carefully designing a sequence of potentials that induce near-eigenfunctions that resemble two different Bessel functions of the radial variable glued together, we are able to show that this type of polynomial bound is sharp in the following sense: given any constant $C_0$, there exists a sequence of potentials $V_n$ on Euclidean space uniformly bounded by some fixed $A$, and a sequence of energies $\lambda_n$ going to zero, such that

$$C(\lambda_n) \geq \lambda_n^{-C_0}.$$
This shows that if one wants bounds that are *uniform* in the potential $V$, then arbitrary polynomial blowup is necessary.
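
Since the counterexamples are built by gluing Bessel functions of the radial variable, it may help to recall how these functions behave numerically. The following sketch (the helper `bessel_j` and all parameters are mine, not the paper's) evaluates $J_\nu$ by its power series; radial solutions of the free Helmholtz equation in dimension $d$ are built from the Bessel functions of order $(d-2)/2$.

```python
import math

def bessel_j(nu: float, x: float, terms: int = 30) -> float:
    """Bessel function of the first kind J_nu(x) via its power series.

    J_nu(x) = sum_{m>=0} (-1)^m / (m! * Gamma(m+nu+1)) * (x/2)^(2m+nu),
    which converges rapidly for moderate x.
    """
    return sum(
        (-1) ** m / (math.factorial(m) * math.gamma(m + nu + 1))
        * (x / 2) ** (2 * m + nu)
        for m in range(terms)
    )

# Sanity check against the closed form J_{1/2}(x) = sqrt(2/(pi x)) * sin(x):
x = 0.5
print(bessel_j(0.5, x))                            # series value
print(math.sqrt(2 / (math.pi * x)) * math.sin(x))  # closed form
```

For half-integer order the series can be checked against the elementary closed form, as the final lines do; this is only a numerical illustration, not part of the construction in the paper.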

Interestingly, though, if we *fix* the potential $V$, and then ask for bounds that are not necessarily uniform in $V$, then one can do better, as was already observed in a classic paper of Jensen and Kato concerning power series expansions of the resolvent near the origin. In particular, if we make the spectral assumption that $H$ has no eigenfunctions or resonances at zero, then an argument based on (a variant of) the Fredholm alternative (which, as discussed in this recent blog post, gives ineffective bounds) gives a bound of the form

$$C(\lambda) \leq C_V$$
in the low-energy regime (but note carefully here that the constant $C_V$ on the right-hand side depends on the potential $V$ itself, and not merely on the parameter $A$ that upper bounds it). Even if there are eigenvalues or resonances at zero, it turns out that one can still obtain a similar bound, but with a somewhat worse exponent of $\lambda$. This limited blowup at the origin is in sharp contrast to the arbitrarily large polynomial blowup rate that can occur if one demands uniform bounds. (This particular subtlety between uniform and non-uniform estimates confused us, by the way, for several weeks; for a long time we thought that we had somehow found a contradiction between our results and the results of Jensen and Kato.)
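
The appearance of the Fredholm alternative can be traced to the standard resolvent identity (again a textbook formula, recorded here for context): writing $R_0$ for the free resolvent,

```latex
R_V(\lambda \pm i\epsilon)
  = \bigl( 1 + R_0(\lambda \pm i\epsilon)\, V \bigr)^{-1}
    R_0(\lambda \pm i\epsilon) .
```

Roughly speaking, under short-range decay $R_0 V$ is compact on suitable weighted spaces, so $1 + R_0 V$ is invertible at $\lambda = 0$ precisely when there are no zero-energy eigenfunctions or resonances; the Fredholm alternative supplies the inverse but gives no quantitative bound on its norm, which is why the resulting constants depend on $V$ itself rather than just on an upper bound for it.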

As applications of our limiting absorption estimates, we give local smoothing and dispersive estimates for solutions to the time-dependent Schrödinger and wave equations (as well as closely related RAGE-type theorems), and also reprove standard facts about the spectrum of Schrödinger operators in this setting.
