One of the basic problems in the field of operator algebras is to develop a functional calculus for either a single operator , or a collection of operators. These operators could in principle act on any function space, but typically one either considers complex matrices (which act on a complex finite dimensional space), or operators (either bounded or unbounded) on a complex Hilbert space. (One can of course also obtain analogous results for real operators, but we will work throughout with complex operators in this post.)
Roughly speaking, a functional calculus is a way to assign an operator or to any function in a suitable function space, which is linear over the complex numbers, preserves the scalars (i.e. when ), and is either an exact or approximate homomorphism in the sense that
should hold either exactly or approximately. In the case when the are self-adjoint operators acting on a Hilbert space (or Hermitian matrices), one often also desires the identity
to also hold either exactly or approximately. (Note that one cannot reasonably expect (1) and (2) to hold exactly for all if the and their adjoints do not commute with each other, so in those cases one has to be willing to allow some error terms in the above wish list of properties of the calculus.) Ideally, one should also be able to relate the operator norm of or with something like the uniform norm on . In principle, the existence of a good functional calculus allows one to manipulate operators as if they were scalars (or at least approximately as if they were scalars), which is very helpful for a number of applications, such as partial differential equations, spectral theory, noncommutative probability, and semiclassical mechanics. A functional calculus for multiple operators can be particularly valuable as it allows one to treat as being exact or approximate scalars simultaneously. For instance, if one is trying to solve a linear differential equation that can (formally at least) be expressed in the form
for some data , unknown function , some differential operators , and some nice function , then if one’s functional calculus is good enough (and is suitably “elliptic” in the sense that it does not vanish or otherwise degenerate too often), one should be able to solve this equation either exactly or approximately by the formula
which is of course how one would solve this equation if one pretended that the operators were in fact scalars. Formalising this calculus rigorously leads to the theory of pseudodifferential operators, which allows one to (approximately) solve or at least simplify a much wider range of differential equations than what one can achieve with more elementary algebraic transformations (e.g. integrating factors, change of variables, variation of parameters, etc.). In quantum mechanics, a functional calculus that allows one to treat operators as if they were approximately scalar can be used to rigorously justify the correspondence principle in physics, namely that the predictions of quantum mechanics approximate those of classical mechanics in the semiclassical limit .
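The "treat the operator as a scalar" recipe can be illustrated numerically in a very simple setting. The sketch below (a hypothetical example, not taken from the post) solves the constant-coefficient elliptic equation (1 - d^2/dx^2) u = g on a periodic interval by dividing by the Fourier symbol 1 + xi^2, which never vanishes — the "elliptic" non-degeneracy condition mentioned above. All names and parameter choices here are illustrative.

```python
import numpy as np

# Solve (1 - d^2/dx^2) u = g on [0, 2*pi) with periodic boundary conditions,
# by "pretending the operator is a scalar": divide by its symbol 1 + xi^2.
N = 256
x = 2 * np.pi * np.arange(N) / N
g = np.exp(np.cos(x))                     # arbitrary smooth periodic data
xi = np.fft.fftfreq(N, d=1.0 / N)         # integer frequencies 0, 1, ..., -1
symbol = 1.0 + xi**2                      # symbol of 1 - d^2/dx^2; never zero
u = np.fft.ifft(np.fft.fft(g) / symbol).real

# Sanity check: applying the operator to u recovers g to spectral accuracy.
u_xx = np.fft.ifft(-(xi**2) * np.fft.fft(u)).real
residual = np.max(np.abs((u - u_xx) - g))
print(residual < 1e-8)
```

For genuinely variable-coefficient operators the symbol depends on both position and frequency, and this naive division is only an approximate inverse; that is precisely where the pseudodifferential calculus earns its keep.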
There is no universal functional calculus that works in all situations; the strongest functional calculi, which are close to being an exact *-homomorphism on a very large class of functions, tend to only work under very restrictive hypotheses on or (in particular, when , one needs the to commute either exactly, or very close to exactly), while there are weaker functional calculi which have fewer nice properties and only work for a very small class of functions, but can be applied to quite general operators or . In some cases the functional calculus is only formal, in the sense that or has to be interpreted as an infinite formal series that does not converge in a traditional sense. Also, when one wishes to select a functional calculus on non-commuting operators , there is a certain amount of non-uniqueness: one generally has a number of slightly different functional calculi to choose from, which generally have the same properties but differ in some minor technical details (particularly with regards to the behaviour of “lower order” components of the calculus). This is similar to how one has a variety of slightly different coordinate systems available to parameterise a Riemannian manifold or Lie group. This is in contrast to the case when the underlying operator is (essentially) normal (so that commutes with ); in this special case (which includes the important subcases when is unitary or (essentially) self-adjoint), spectral theory gives us a canonical and very powerful functional calculus which can be used without further modification in applications.
Despite this lack of uniqueness, there is one standard choice for a functional calculus available for general operators , namely the Weyl functional calculus; it is analogous in some ways to normal coordinates for Riemannian manifolds, or exponential coordinates of the first kind for Lie groups, in that it treats lower order terms in a reasonably nice fashion. (But it is important to keep in mind that, like its analogues in Riemannian geometry or Lie theory, there will be some instances in which the Weyl calculus is not the optimal calculus to use for the application at hand.)
I decided to write some notes on the Weyl functional calculus (also known as Weyl quantisation), and to sketch the applications of this calculus to the theory of pseudodifferential operators. They are mostly for my own benefit (so that I won’t have to redo these particular calculations again), but perhaps they will also be of interest to some readers here. (Of course, this material is also covered in many other places, e.g. Folland’s “Harmonic Analysis in Phase Space”.)
Nonstandard analysis is a mathematical framework in which one extends the standard mathematical universe of standard numbers, standard sets, standard functions, etc. into a larger nonstandard universe of nonstandard numbers, nonstandard sets, nonstandard functions, etc., somewhat analogously to how one places the real numbers inside the complex numbers, or the rationals inside the reals. This nonstandard universe enjoys many of the same properties as the standard one; in particular, we have the transfer principle that asserts that any statement in the language of first order logic is true in the standard universe if and only if it is true in the nonstandard one. (For instance, because Fermat’s last theorem is known to be true for standard natural numbers, it is automatically true for nonstandard natural numbers as well.) However, the nonstandard universe also enjoys some additional useful properties that the standard one does not, most notably the countable saturation property, which is a property somewhat analogous to the completeness property of a metric space; much as metric completeness allows one to assert that the intersection of a countable family of nested closed balls is non-empty, countable saturation allows one to assert that the intersection of a countable family of nested satisfiable formulae is simultaneously satisfiable. (See this previous blog post for more on the analogy between the use of nonstandard analysis and the use of metric completions.) 
Furthermore, by viewing both the standard and nonstandard universes externally (placing them both inside a larger metatheory, such as a model of Zermelo-Fraenkel-Choice (ZFC) set theory; in some more advanced set-theoretic applications one may also wish to add some large cardinal axioms), one can place some useful additional definitions and constructions on these universes, such as defining the concept of an infinitesimal nonstandard number (a number which is smaller in magnitude than any positive standard number). The ability to rigorously manipulate infinitesimals is of course one of the most well-known advantages of working with nonstandard analysis.
To build a nonstandard universe from a standard one , the most common approach is to take an ultrapower of with respect to some non-principal ultrafilter over the natural numbers; see e.g. this blog post for details. Once one is comfortable with ultrafilters and ultrapowers, this becomes quite a simple and elegant construction, and greatly demystifies the nature of nonstandard analysis.
On the other hand, nonprincipal ultrafilters do have some unappealing features. The most notable one is that their very existence requires the axiom of choice (or more precisely, a weaker form of this axiom known as the boolean prime ideal theorem). Closely related to this is the fact that one cannot actually write down any explicit example of a nonprincipal ultrafilter, but must instead rely on nonconstructive tools such as Zorn’s lemma, the Hahn-Banach theorem, Tychonoff’s theorem, the Stone-Čech compactification, or the boolean prime ideal theorem to locate one. As such, ultrafilters definitely belong to the “infinitary” side of mathematics, and one may feel that it is inappropriate to use such tools for “finitary” mathematical applications, such as those which arise in hard analysis. From a more practical viewpoint, because of the presence of the infinitary ultrafilter, it can be quite difficult (though usually not impossible, with sufficient patience and effort) to take a finitary result proven via nonstandard analysis and coax an effective quantitative bound from it.
There is however a “cheap” version of nonstandard analysis which is less powerful than the full version, but is not as infinitary in that it is constructive (in the sense of not requiring any sort of choice-type axiom), and which can be translated into standard analysis somewhat more easily than a fully nonstandard argument; indeed, a cheap nonstandard argument can often be presented (by judicious use of asymptotic notation) in a way which is nearly indistinguishable from a standard one. It is obtained by replacing the nonprincipal ultrafilter in fully nonstandard analysis with the more classical Fréchet filter of cofinite subsets of the natural numbers, which is the filter that implicitly underlies the concept of the classical limit of a sequence when the underlying asymptotic parameter goes off to infinity. As such, “cheap nonstandard analysis” aligns very well with traditional mathematics, in which one often allows one’s objects to be parameterised by some external parameter such as , which is then allowed to approach some limit such as . The catch is that the Fréchet filter is merely a filter and not an ultrafilter, and as such some of the key features of fully nonstandard analysis are lost. Most notably, the law of the excluded middle does not transfer over perfectly from standard analysis to cheap nonstandard analysis; much as there exist bounded sequences of real numbers (such as ) which do not converge to a (classical) limit, there exist statements in cheap nonstandard analysis which are neither true nor false (at least without passing to a subsequence, see below). The loss of such a fundamental law of mathematical reasoning may seem like a major disadvantage for cheap nonstandard analysis, and it does indeed make cheap nonstandard analysis somewhat weaker than fully nonstandard analysis. 
But in some situations (particularly when one is reasoning in a “constructivist” or “intuitionistic” fashion, and in particular if one is avoiding too much reliance on set theory) it turns out that one can survive the loss of this law; and furthermore, the law of the excluded middle is still available for standard analysis, and so one can often proceed by working from time to time in the standard universe to temporarily take advantage of this law, and then transferring the results obtained there back to the cheap nonstandard universe once one no longer needs to invoke the law of the excluded middle. Furthermore, the law of the excluded middle can be recovered by adopting the freedom to pass to subsequences with regards to the asymptotic parameter ; this technique is already in widespread use in the analysis of partial differential equations, although it is generally referred to by names such as “the compactness method” rather than as “cheap nonstandard analysis”.
Below the fold, I would like to describe this cheap version of nonstandard analysis, which I think can serve as a pedagogical stepping stone towards fully nonstandard analysis, as it is formally similar to (though weaker than) fully nonstandard analysis, but on the other hand is closer in practice to standard analysis. As we shall see below, the relation between cheap nonstandard analysis and standard analysis is analogous in many ways to the relation between probabilistic reasoning and deterministic reasoning; it also resembles somewhat the preference in much of modern mathematics for viewing mathematical objects as belonging to families (or to categories) to be manipulated en masse, rather than treating each object individually. (For instance, nonstandard analysis can be used as a partial substitute for scheme theory in order to obtain uniformly quantitative results in algebraic geometry, as discussed for instance in this previous blog post.)
One of the most fundamental principles in Fourier analysis is the uncertainty principle. It does not have a single canonical formulation, but one typical informal description of the principle is that if a function is restricted to a narrow region of physical space, then its Fourier transform must be necessarily “smeared out” over a broad region of frequency space. Some versions of the uncertainty principle are discussed in this previous blog post.
In this post I would like to highlight a useful instance of the uncertainty principle, due to Hugh Montgomery, which is useful in analytic number theory contexts. Specifically, suppose we are given a complex-valued function on the integers. To avoid irrelevant issues at spatial infinity, we will assume that the support of this function is finite (in practice, we will only work with functions that are supported in an interval for some natural numbers ). Then we can define the Fourier transform by the formula
where . (In some literature, the sign in the exponential phase is reversed, but this will make no substantial difference to the arguments below.)
The classical uncertainty principle, in this context, asserts that if is localised in an interval of length , then must be “smeared out” at a scale of at least (and essentially constant at scales less than ). For instance, if is supported in , then we have the Plancherel identity
while from the Cauchy-Schwarz inequality we have
for each frequency , and in particular that
for any arc in the unit circle (with denoting the length of ). In particular, an interval of length significantly less than can only capture a fraction of the energy of the Fourier transform of , which is consistent with the above informal statement of the uncertainty principle.
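The Plancherel identity underlying this discussion can be verified numerically. The sketch below (an illustrative example of my own, using the hypothetical convention e(θ) = e^{2πiθ}, which may differ in sign from the convention in the post) checks that the physical-space energy of a finitely supported function equals the average of the squared magnitude of its Fourier transform over the unit circle.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # f(1), ..., f(N)

def S(alpha):
    # Fourier transform of f at frequency alpha on the unit circle
    n = np.arange(1, N + 1)
    return np.sum(f * np.exp(2j * np.pi * alpha * n))

# |S(alpha)|^2 is a trigonometric polynomial of degree < N, so a Riemann
# sum over M > 2N equally spaced points computes the integral exactly.
M = 4 * N
alphas = np.arange(M) / M
energy_freq = np.mean([abs(S(a)) ** 2 for a in alphas])
energy_phys = np.sum(np.abs(f) ** 2)
print(np.isclose(energy_freq, energy_phys))
```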
Another manifestation of the classical uncertainty principle is the large sieve inequality. A particularly nice formulation of this inequality is due independently to Montgomery and Vaughan, and to Selberg: if is supported in , and are frequencies in that are -separated for some , thus for all (where denotes the distance of to the origin in ), then
The reader is encouraged to see how this inequality is consistent with the Plancherel identity (1) and the intuition that is essentially constant at scales less than . The factor can in fact be amplified a little bit to , which is essentially optimal, by using a neat dilation trick of Paul Cohen, in which one dilates to (and replaces each frequency by its roots), and then sending (cf. the tensor product trick); see this survey of Montgomery for details. But we will not need this refinement here.
In the above instances of the uncertainty principle, the concept of narrow support in physical space was formalised in the Archimedean sense, using the standard Archimedean metric on the integers (in particular, the parameter is essentially the Archimedean diameter of the support of ). However, in number theory, the Archimedean metric is not the only metric of importance on the integers; the -adic metrics play an equally important role; indeed, it is common to unify the Archimedean and -adic perspectives together into a unified adelic perspective. In the -adic world, the metric balls are no longer intervals, but are instead residue classes modulo some power of . Intersecting these balls from different -adic metrics together, we obtain residue classes with respect to various moduli (which may be either prime or composite). As such, another natural manifestation of the concept of “narrow support in physical space” is “vanishes on many residue classes modulo “. This notion of narrowness is particularly common in sieve theory, when one deals with functions supported on thin sets such as the primes, which naturally tend to avoid many residue classes (particularly if one throws away the first few primes).
In this context, the uncertainty principle is this: the more residue classes modulo that avoids, the more the Fourier transform must spread out along multiples of . To illustrate a very simple example of this principle, let us take , and suppose that is supported only on odd numbers (thus it completely avoids the residue class ). We write out the formulae for and :
If is supported on the odd numbers, then is always equal to on the support of , and so we have . Thus, whenever has a significant presence at a frequency , it also must have an equally significant presence at the frequency ; there is a spreading out across multiples of . Note that one has a similar effect if was supported on the even integers instead of the odd integers.
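This sign flip is easy to test numerically. The following sketch (illustrative, again with the hypothetical convention e(θ) = e^{2πiθ}) builds a random function supported on the odd integers and confirms that its Fourier transform satisfies S(α + 1/2) = −S(α).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
f = rng.standard_normal(N + 1)
f[::2] = 0.0          # zero out even n, so f is supported on the odd integers

def S(alpha):
    n = np.arange(N + 1)
    return np.sum(f * np.exp(2j * np.pi * alpha * n))

alpha = 0.137  # an arbitrary frequency
# For odd n, e((alpha + 1/2) n) = -e(alpha n), so the transform flips sign.
print(np.isclose(S(alpha + 0.5), -S(alpha)))
```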
A little more generally, suppose now that avoids a single residue class modulo a prime ; for sake of argument let us say that it avoids the zero residue class , although the situation for the other residue classes is similar. For any frequency and any , one has
From basic Fourier analysis, we know that the phases sum to zero as ranges from to whenever is not a multiple of . We thus have
In particular, if is large, then one of the other has to be somewhat large as well; using the Cauchy-Schwarz inequality, we can quantify this assertion in an sense via the inequality
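The orthogonality of the phases gives a clean exact identity that one can check by computer: if f vanishes on the zero residue class mod p, then the values of its transform at the p frequencies α, α + 1/p, …, α + (p−1)/p sum to zero. Here is an illustrative sketch (my own construction, with the hypothetical convention e(θ) = e^{2πiθ}).

```python
import numpy as np

rng = np.random.default_rng(2)
p, N = 5, 200
f = rng.standard_normal(N)
n = np.arange(N)
f[n % p == 0] = 0.0   # f avoids the zero residue class mod p

def S(alpha):
    return np.sum(f * np.exp(2j * np.pi * alpha * n))

# Sum over j of S(alpha + j/p) picks out p times the mass on multiples
# of p (by orthogonality of the p-th roots of unity), which is zero here.
alpha = 0.271
total = sum(S(alpha + j / p) for j in range(p))
print(abs(total) < 1e-8)
```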
Let us continue this analysis a bit further. Now suppose that avoids residue classes modulo a prime , for some . (We exclude the case as it clearly degenerates by forcing to be identically zero.) Let be the function that equals on these residue classes and zero away from these residue classes, then
Using the periodic Fourier transform, we can write
for some coefficients , thus
Some Fourier-analytic computations reveal that
and
and so after some routine algebra and the Cauchy-Schwarz inequality, we obtain a generalisation of (3):
Thus we see that the more residue classes mod we exclude, the more Fourier energy has to disperse along multiples of . It is also instructive to consider the extreme case , in which is supported on just a single residue class ; in this case, one clearly has , and so spreads its energy completely evenly along multiples of .
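The extreme case of support on a single residue class can also be confirmed numerically: the transform then has exactly the same magnitude at every shift by a multiple of 1/p. The sketch below is illustrative (same hypothetical e(θ) = e^{2πiθ} convention as before).

```python
import numpy as np

rng = np.random.default_rng(3)
p, N, a = 7, 210, 3
f = rng.standard_normal(N)
n = np.arange(N)
f[n % p != a] = 0.0   # support on the single residue class a mod p

def S(alpha):
    return np.sum(f * np.exp(2j * np.pi * alpha * n))

# For n = a mod p, e((alpha + j/p) n) = e(j a / p) e(alpha n), a unimodular
# factor, so |S(alpha + j/p)| is the same for every j: perfectly even spreading.
alpha = 0.4142
mags = [abs(S(alpha + j / p)) for j in range(p)]
print(np.allclose(mags, mags[0]))
```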
In 1968, Montgomery observed the following useful generalisation of the above calculation to arbitrary modulus:
Proposition 1 (Montgomery’s uncertainty principle) Let be a finitely supported function which, for each prime , avoids residue classes modulo for some . Then for each natural number , one has
where is the Möbius function.
We give a proof of this proposition below the fold.
Following the “adelic” philosophy, it is natural to combine this uncertainty principle with the large sieve inequality to take simultaneous advantage of localisation both in the Archimedean sense and in the -adic senses. This leads to the following corollary:
Corollary 2 (Arithmetic large sieve inequality) Let be a function supported on an interval which, for each prime , avoids residue classes modulo for some . Let , and let be a finite set of natural numbers. Suppose that the frequencies with , , and are -separated. Then one has
where was defined in (4).
Indeed, from the large sieve inequality one has
while from Proposition 1 one has
whence the claim.
There is a great deal of flexibility in the above inequality, due to the ability to select the set , the frequencies , the omitted classes , and the separation parameter . Here is a typical application concerning the original motivation for the large sieve inequality, namely in bounding the size of sets which avoid many residue classes:
Corollary 3 (Large sieve) Let be a set of integers contained in which avoids residue classes modulo for each prime , and let . Then
where
Proof: We apply Corollary 2 with , , , , and . The key point is that the Farey sequence of fractions with and is -separated, since
whenever are distinct fractions in this sequence.
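The Farey separation used in this proof can be verified exactly with rational arithmetic: distinct reduced fractions a/q, a'/q' with denominators at most Q differ by at least 1/(qq') ≥ 1/Q² in the circle metric. A small illustrative check (the value of Q is arbitrary):

```python
from fractions import Fraction

Q = 12
# All reduced fractions a/q in [0, 1) with denominator q <= Q (Fraction reduces).
farey = sorted({Fraction(a, q) for q in range(1, Q + 1) for a in range(q)})

def circ_dist(x, y):
    # distance on the unit circle R/Z
    d = abs(x - y)
    return min(d, 1 - d)

# Gaps between circularly consecutive Farey fractions, including the wrap-around.
gaps = [circ_dist(farey[i], farey[(i + 1) % len(farey)])
        for i in range(len(farey))]
print(min(gaps) >= Fraction(1, Q * Q))
```

Exact rationals make the comparison with 1/Q² free of rounding issues; in fact consecutive Farey fractions a/q < a'/q' satisfy a'q − aq' = 1, so each gap is exactly 1/(qq').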
If, for instance, is the set of all primes in larger than , then one can set for all , which makes , where is the Euler totient function. It is a classical estimate that
Using this fact and optimising in , we obtain (a special case of) the Brun-Titchmarsh inequality
where is the prime counting function; a variant of the same argument gives the more general Brun-Titchmarsh inequality
for any primitive residue class , where is the number of primes less than or equal to that are congruent to . By performing a more careful optimisation using a slightly sharper version of the large sieve inequality (2) that exploits the irregular spacing of the Farey sequence, Montgomery and Vaughan were able to delete the error term in the Brun-Titchmarsh inequality, thus establishing the very nice inequality
for any natural numbers with . This is a particularly useful inequality in non-asymptotic analytic number theory (when one wishes to study number theory at explicit orders of magnitude, rather than the number theory of sufficiently large numbers), due to the absence of asymptotic notation.
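Because the Montgomery-Vaughan form of the Brun-Titchmarsh inequality is free of asymptotic notation, one can spot-check it directly at explicit orders of magnitude. The sketch below (an illustrative test of my own, reading the bound as π(x+y) − π(x) ≤ 2y/log y) counts primes in a few short intervals by trial division.

```python
import math

def primes_in(lo, hi):
    """Count primes p with lo < p <= hi by trial division (fine for short ranges)."""
    def is_prime(m):
        if m < 2:
            return False
        return all(m % d for d in range(2, math.isqrt(m) + 1))
    return sum(1 for m in range(lo + 1, hi + 1) if is_prime(m))

# Spot-check pi(x + y) - pi(x) <= 2 y / log y at a few explicit points.
checks = []
for x, y in [(10**4, 500), (10**5, 1000), (10**6, 2000)]:
    checks.append(primes_in(x, x + y) <= 2 * y / math.log(y))
print(all(checks))
```

Of course a few numerical instances prove nothing; the point is only that the inequality is directly testable, which is exactly the virtue of non-asymptotic bounds.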
I recently realised that Corollary 2 also establishes a stronger version of the “restriction theorem for the Selberg sieve” that Ben Green and I proved some years ago (indeed, one can view Corollary 2 as a “restriction theorem for the large sieve”). I’m placing the details below the fold.
In the previous set of notes we introduced the notion of expansion in arbitrary -regular graphs. For the rest of the course, we will now focus attention primarily on a special type of -regular graph, namely a Cayley graph.
Definition 1 (Cayley graph) Let be a group, and let be a finite subset of . We assume that is symmetric (thus whenever ) and does not contain the identity (this is to avoid loops). Then the (right-invariant) Cayley graph is defined to be the graph with vertex set and edge set , thus each vertex is connected to the elements for , and so is a -regular graph.
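As a concrete illustration of this definition (my own example, not from the notes), the sketch below builds the Cayley graph of the symmetric group on three letters with the symmetric generating set consisting of the three transpositions, and checks that the result is 3-regular.

```python
from itertools import permutations

def compose(s, t):
    # Composition of permutations written as tuples: (s o t)(i) = s[t[i]].
    return tuple(s[i] for i in t)

G = list(permutations(range(3)))          # the six elements of S_3
S = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]     # the three transpositions; each is
                                          # its own inverse, so S is symmetric,
                                          # and none is the identity
# Right-invariant Cayley graph: connect x to x*s for each generator s.
edges = {frozenset((x, compose(x, s))) for x in G for s in S}
degree = {x: sum(1 for e in edges if x in e) for x in G}
print(sorted(degree.values()) == [3] * 6)
```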
Example 1 The graph in Exercise 3 of Notes 1 is the Cayley graph on with generators .
Remark 1 We call the above Cayley graphs right-invariant because every right translation on is a graph automorphism of . This group of automorphisms acts transitively on the vertex set of the Cayley graph. One can thus view a Cayley graph as a homogeneous space of , as it “looks the same” from every vertex. One could of course also consider left-invariant Cayley graphs, in which is connected to rather than . However, the two such graphs are isomorphic using the inverse map , so we may without loss of generality restrict our attention throughout to right-invariant Cayley graphs.
Remark 2 For minor technical reasons, it will be convenient later on to allow to contain the identity and to come with multiplicity (i.e. it will be a multiset rather than a set). If one does so, of course, the resulting Cayley graph will now contain some loops and multiple edges.
For the purposes of building expander families, we would of course want the underlying group to be finite. However, it will be convenient at various times to “lift” a finite Cayley graph up to an infinite one, and so we permit to be infinite in our definition of a Cayley graph.
We will also sometimes consider a generalisation of a Cayley graph, known as a Schreier graph:
Definition 2 (Schreier graph) Let be a finite group that acts (on the left) on a space , thus there is a map from to such that and for all and . Let be a symmetric subset of which acts freely on in the sense that for all and , and for all distinct and . Then the Schreier graph is defined to be the graph with vertex set and edge set .
Example 2 Every Cayley graph is also a Schreier graph , using the obvious left-action of on itself. The -regular graphs formed from permutations that were studied in the previous set of notes are also Schreier graphs provided that for all distinct , with the underlying group being the permutation group (which acts on the vertex set in the obvious manner), and .
Exercise 1 If is an even integer, show that every -regular graph is a Schreier graph involving a set of generators of cardinality . (Hint: first show that every -regular graph can be decomposed into unions of cycles, each of which partitions the vertex set, then use the previous example.)
We return now to Cayley graphs. It is easy to characterise qualitative expansion properties of Cayley graphs:
Exercise 2 (Qualitative expansion) Let be a finite Cayley graph.
- (i) Show that is a one-sided -expander for for some if and only if generates .
- (ii) Show that is a two-sided -expander for for some if and only if generates , and furthermore intersects each index subgroup of .
We will however be interested in more quantitative expansion properties, in which the expansion constant is independent of the size of the Cayley graph, so that one can construct non-trivial expander families of Cayley graphs.
One can analyse the expansion of Cayley graphs in a number of ways. For instance, by taking the edge expansion viewpoint, one can study Cayley graphs combinatorially, using the product set operation
of subsets of .
Exercise 3 (Combinatorial description of expansion) Let be a family of finite -regular Cayley graphs. Show that is a one-sided expander family if and only if there is a constant independent of such that for all sufficiently large and all subsets of with .
One can also give a combinatorial description of two-sided expansion, but it is more complicated and we will not use it here.
Exercise 4 (Abelian groups do not expand) Let be a family of finite -regular Cayley graphs, with the all abelian, and the generating . Show that are a one-sided expander family if and only if the Cayley graphs have bounded cardinality (i.e. ). (Hint: assume for contradiction that is a one-sided expander family with , and show by two different arguments that grows at least exponentially in and also at most polynomially in , giving the desired contradiction.)
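The failure of abelian expansion is visible numerically in the simplest case, the cycles Cay(Z/nZ, {+1, −1}): the spectral gap of the normalised adjacency operator shrinks to zero as n grows. The following is an illustrative sketch of my own.

```python
import numpy as np

def cycle_gap(n):
    # Normalised adjacency operator of the cycle Cay(Z/nZ, {+1, -1}).
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 0.5
    evals = np.sort(np.linalg.eigvalsh(A))[::-1]
    return 1.0 - evals[1]    # gap below the trivial eigenvalue 1

# The gap decays like 1 - cos(2*pi/n) ~ 2*pi^2/n^2, so no uniform expansion.
gaps = [cycle_gap(n) for n in (8, 32, 128)]
print(all(g1 > g2 for g1, g2 in zip(gaps, gaps[1:])) and gaps[-1] < 0.01)
```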
The left-invariant nature of Cayley graphs also suggests that such graphs can be profitably analysed using some sort of Fourier analysis; as the underlying symmetry group is not necessarily abelian, one should use the Fourier analysis of non-abelian groups, which is better known as (unitary) representation theory. The Fourier-analytic nature of Cayley graphs can be highlighted by recalling the operation of convolution of two functions , defined by the formula
This convolution operation is bilinear and associative (at least when one imposes a suitable decay condition on the functions, such as compact support), but is not commutative unless is abelian. (If one is more algebraically minded, one can also identify (when is finite, at least) with the group algebra , in which case convolution is simply the multiplication operation in this algebra.) The adjacency operator on a Cayley graph can then be viewed as a convolution
where is the probability density
where is the Kronecker delta function on . Using the spectral definition of expansion, we thus see that is a one-sided expander if and only if
whenever is orthogonal to the constant function , and is a two-sided expander if and only if
whenever is orthogonal to the constant function .
We remark that the above spectral definition of expansion can be easily extended to symmetric sets which contain the identity or have multiplicity (i.e. are multisets). (We retain symmetry, though, in order to keep the operation of convolution by self-adjoint.) In particular, one can say (with some slight abuse of notation) that a set of elements of (possibly with repetition, and possibly with some elements equalling the identity) generates a one-sided or two-sided -expander if the associated symmetric probability density obeys the corresponding spectral estimates above.
We saw in the last set of notes that expansion can be characterised in terms of random walks. One can of course specialise this characterisation to the Cayley graph case:
Exercise 5 (Random walk description of expansion) Let be a family of finite -regular Cayley graphs, and let be the associated probability density functions. Let be a constant.
- Show that the are a two-sided expander family if and only if there exists a such that for all sufficiently large , one has for some , where denotes the convolution of copies of .
- Show that the are a one-sided expander family if and only if there exists a such that for all sufficiently large , one has for some .
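The random-walk characterisation can be made concrete on the abelian examples above, where convolution powers are cheap to compute by the fast Fourier transform. In the illustrative sketch below (my own, using the lazy walk on Z/nZ, i.e. the multiset S = {0, +1, −1}), the same number of steps mixes the small cycle to uniform but leaves the large cycle far from uniform, reflecting the absence of a uniform spectral gap.

```python
import numpy as np

def walk_distance(n, steps):
    # Density of the lazy random walk on Z/nZ with multiset S = {0, +1, -1}.
    mu = np.zeros(n)
    mu[0] = mu[1] = mu[-1] = 1.0 / 3.0
    dist = np.zeros(n)
    dist[0] = 1.0
    for _ in range(steps):
        # circular convolution dist * mu via the FFT
        dist = np.real(np.fft.ifft(np.fft.fft(dist) * np.fft.fft(mu)))
    return np.max(np.abs(dist - 1.0 / n))

# Same number of steps: the small cycle is essentially uniform, the large one
# is not -- mixing time grows with n, so the cycles do not expand uniformly.
print(walk_distance(11, 200) < 1e-6 and walk_distance(101, 200) > 1e-3)
```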
In this set of notes, we will connect expansion of Cayley graphs to an important property of certain infinite groups, known as Kazhdan’s property (T) (or property (T) for short). In 1973, Margulis exploited this property to create the first known explicit and deterministic examples of expanding Cayley graphs. As it turns out, property (T) is somewhat overpowered for this purpose; in particular, we now know that there are many families of Cayley graphs for which the associated infinite group does not obey property (T) (or weaker variants of this property, such as property ). In later notes we will therefore turn to other methods of creating Cayley graphs that do not rely on property (T). Nevertheless, property (T) is of substantial intrinsic interest, and also has many connections to other parts of mathematics beyond the theory of expander graphs, so it is worth spending some time to discuss it here.
The material here is based in part on this recent text on property (T) by Bekka, de la Harpe, and Valette (available online here).
In this set of notes we will be able to finally prove the Gleason-Yamabe theorem from Notes 0, which we restate here:
Theorem 1 (Gleason-Yamabe theorem) Let be a locally compact group. Then, for any open neighbourhood of the identity, there exists an open subgroup of and a compact normal subgroup of in such that is isomorphic to a Lie group.
In the next set of notes, we will combine the Gleason-Yamabe theorem with some topological analysis (and in particular, using the invariance of domain theorem) to establish some further control on locally compact groups, and in particular to obtain a solution to Hilbert’s fifth problem.
To prove the Gleason-Yamabe theorem, we will use three major tools developed in previous notes. The first (from Notes 2) is a criterion for Lie structure in terms of a special type of metric, which we will call a Gleason metric:
Definition 2 Let be a topological group. A Gleason metric on is a left-invariant metric which generates the topology on and obeys the following properties for some constant , writing for :
- (Escape property) If and is such that , then .
- (Commutator estimate) If are such that , then
where is the commutator of and .
Theorem 3 (Building Lie structure from Gleason metrics) Let be a locally compact group that has a Gleason metric. Then is isomorphic to a Lie group.
The second tool is the existence of a left-invariant Haar measure on any locally compact group; see Theorem 3 from Notes 3. Finally, we will also need the compact case of the Gleason-Yamabe theorem (Theorem 8 from Notes 3), which was proven via the Peter-Weyl theorem:
Theorem 4 (Gleason-Yamabe theorem for compact groups) Let be a compact Hausdorff group, and let be a neighbourhood of the identity. Then there exists a compact normal subgroup of contained in such that is isomorphic to a linear group (i.e. a closed subgroup of a general linear group ).
To finish the proof of the Gleason-Yamabe theorem, we have to somehow use the available structures on locally compact groups (such as Haar measure) to build good metrics on those groups (or on suitable subgroups or quotient groups). The basic construction is as follows:
Definition 5 (Building metrics out of test functions) Let $G$ be a topological group, and let $\phi: G \rightarrow {\bf R}^+$ be a bounded non-negative function. Then we define the pseudometric $d_\phi: G \times G \rightarrow {\bf R}^+$ by the formula
$$d_\phi(g,h) := \sup_{x \in G} |\phi(g^{-1} x) - \phi(h^{-1} x)|$$
and the semi-norm $\| \cdot \|_\phi: G \rightarrow {\bf R}^+$ by the formula
$$\|g\|_\phi := d_\phi(g, \mathrm{id}).$$
Note that one can also write
$$\|g\|_\phi = \sup_{x \in G} |\partial_g \phi(x)|,$$
where $\partial_g \phi(x) := \phi(x) - \phi(g^{-1} x)$ is the “derivative” of $\phi$ in the direction $g$.
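As a concrete illustration (my own, not from the notes), on a finite group the suprema in this definition become maxima, so the pseudometric and its basic properties can be checked by direct computation; here on the cyclic group ${\bf Z}/12{\bf Z}$ with an arbitrary bump-like choice of test function:

```python
# Numerical sketch: the pseudometric d_phi on the finite cyclic group Z_12.
# The test function phi is an arbitrary illustrative "bump" peaked at the identity.
n = 12
G = range(n)

def inv(g):          # group inverse in Z_n
    return (-g) % n

def mul(g, h):       # group law in Z_n
    return (g + h) % n

phi = [max(0, 4 - min(x, n - x)) for x in G]

def d(g, h):
    # d_phi(g,h) = sup_x |phi(g^{-1} x) - phi(h^{-1} x)|
    return max(abs(phi[mul(inv(g), x)] - phi[mul(inv(h), x)]) for x in G)

def norm(g):         # ||g||_phi = d_phi(g, id)
    return d(g, 0)

# metric-like properties from the exercise below
assert all(d(g, h) == d(h, g) for g in G for h in G)                       # symmetry
assert all(d(g, k) <= d(g, h) + d(h, k) for g in G for h in G for k in G)  # triangle
assert all(d(g, h) == d(mul(k, g), mul(k, h))
           for g in G for h in G for k in G)                               # left-invariance
assert all(d(g, h) == norm(mul(inv(g), h)) for g in G for h in G)          # d(g,h) = ||g^{-1}h||
```

The same script can be rerun with any non-negative `phi` to see how the choice of test function shapes the resulting pseudometric.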
Exercise 1 Let the notation and assumptions be as in the above definition. For any $g, h, k \in G$, establish the metric-like properties
- (Identity) $d_\phi(g,h) \geq 0$, with equality when $g = h$.
- (Symmetry) $d_\phi(g,h) = d_\phi(h,g)$.
- (Triangle inequality) $d_\phi(g,k) \leq d_\phi(g,h) + d_\phi(h,k)$.
- (Continuity) If $\phi \in C_c(G)$, then the map $d_\phi: G \times G \rightarrow {\bf R}^+$ is continuous.
- (Boundedness) One has $d_\phi(g,h) \leq \sup_{x \in G} \phi(x)$. If $\phi$ is supported in a set $K$, then equality occurs unless $g^{-1} h \in K K^{-1}$.
- (Left-invariance) $d_\phi(g,h) = d_\phi(kg, kh)$. In particular, $d_\phi(g,h) = \|g^{-1} h\|_\phi = \|h^{-1} g\|_\phi$.
In particular, we have the norm-like properties
- (Identity) $\|g\|_\phi \geq 0$, with equality when $g = \mathrm{id}$.
- (Symmetry) $\|g\|_\phi = \|g^{-1}\|_\phi$.
- (Triangle inequality) $\|gh\|_\phi \leq \|g\|_\phi + \|h\|_\phi$.
- (Continuity) If $\phi \in C_c(G)$, then the map $g \mapsto \|g\|_\phi$ is continuous.
- (Boundedness) One has $\|g\|_\phi \leq \sup_{x \in G} \phi(x)$. If $\phi$ is supported in a set $K$, then equality occurs unless $g \in K K^{-1}$.
We remark that the first three properties of $d_\phi$ in the above exercise ensure that $d_\phi$ is indeed a pseudometric.
To get good metrics (such as Gleason metrics) on groups $G$, it thus suffices to obtain test functions $\phi$ that obey suitably good “regularity” properties. We will achieve this primarily by means of two tricks. The first trick is to obtain high-regularity test functions by convolving together two low-regularity test functions, taking advantage of the existence of a left-invariant Haar measure on $G$. The second trick is to obtain low-regularity test functions by means of a metric-like object on $G$. This latter trick may seem circular, as our whole objective is to get a metric on $G$ in the first place, but the key point is that the metric one starts with does not need to have as many “good properties” as the metric one ends up with, thanks to the regularity-improving properties of convolution. As such, one can use a “bootstrap argument” (or induction argument) to create a good metric out of almost nothing. It is this bootstrap miracle which is at the heart of the proof of the Gleason-Yamabe theorem (and hence to the solution of Hilbert’s fifth problem).
The arguments here are based on the nonstandard analysis arguments used to establish Hilbert’s fifth problem by Hirschfeld and by Goldbring (and also some unpublished lecture notes of Goldbring and van den Dries). However, we will not explicitly use any nonstandard analysis in this post.
In the last few notes, we have been steadily reducing the amount of regularity needed on a topological group in order to be able to show that it is in fact a Lie group, in the spirit of Hilbert’s fifth problem. Now, we will work on Hilbert’s fifth problem from the other end, starting with the minimal assumption of local compactness on a topological group $G$, and seeing what kind of structures one can build using this assumption. (For simplicity we shall mostly confine our discussion to global groups rather than local groups for now.) In view of the preceding notes, we would like to see two types of structures emerge in particular:
- representations of $G$ into some more structured group, such as a matrix group $GL_n({\bf C})$; and
- metrics on $G$ that capture the escape and commutator structure of $G$ (i.e. Gleason metrics).
To build either of these structures, a fundamentally useful tool is that of (left-) Haar measure – a left-invariant Radon measure $\mu$ on $G$. (One can of course also consider right-Haar measures; in many cases (such as for compact or abelian groups), the two concepts are the same, but this is not always the case.) This concept generalises the concept of Lebesgue measure on Euclidean spaces ${\bf R}^d$, which is of course fundamental in analysis on those spaces.
Haar measures will help us build useful representations and useful metrics on locally compact groups $G$. For instance, a Haar measure $\mu$ gives rise to the regular representation $\tau: G \rightarrow U(L^2(G, d\mu))$ that maps each element $g \in G$ to the unitary translation operator $\tau(g): L^2(G, d\mu) \rightarrow L^2(G, d\mu)$ on the Hilbert space $L^2(G, d\mu)$ of square-integrable measurable functions on $G$ with respect to this Haar measure by the formula
$$\tau(g) f(x) := f(g^{-1} x).$$
(The presence of the inverse $g^{-1}$ is convenient in order to obtain the homomorphism property $\tau(gh) = \tau(g) \tau(h)$ without a reversal in the group multiplication.) In general, this is an infinite-dimensional representation; but in many cases (and in particular, in the case when $G$ is compact) we can decompose this representation into a useful collection of finite-dimensional representations, leading to the Peter-Weyl theorem, which is a fundamental tool for understanding the structure of compact groups. This theorem is particularly simple in the compact abelian case, where it turns out that the representations can be decomposed into one-dimensional representations $\chi: G \rightarrow U({\bf C}) \equiv S^1$, better known as characters, leading to the theory of Fourier analysis on general compact abelian groups. With this and some additional (largely combinatorial) arguments, we will also be able to obtain satisfactory structural control on locally compact abelian groups as well.
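As a minimal illustration (with names of my own choosing, not from the post), one can see the homomorphism property and unitarity of the regular representation $\tau(g) f(x) = f(g^{-1} x)$ concretely on a finite cyclic group, where Haar measure is just counting measure:

```python
# Sketch: the regular representation of Z_8 acting on functions f: G -> R,
# represented as lists of length 8.  Counting measure plays the role of
# Haar measure, so translation is manifestly measure-preserving.
n = 8

def tau(g, f):
    # translate f by g: (tau(g) f)(x) = f(g^{-1} x) = f(x - g mod n)
    return [f[(x - g) % n] for x in range(n)]

f = [x * x % 7 for x in range(n)]   # an arbitrary test function

# homomorphism property: tau(gh) = tau(g) tau(h) -- the inverse in the
# definition is what prevents a reversal of the group multiplication
for g in range(n):
    for h in range(n):
        assert tau((g + h) % n, f) == tau(g, tau(h, f))

# unitarity: translation preserves the L^2 norm (with counting measure)
norm2 = sum(v * v for v in f)
assert all(sum(v * v for v in tau(g, f)) == norm2 for g in range(n))
```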
The link between Haar measure and useful metrics on $G$ is a little more complicated. Firstly, once one has the regular representation $\tau$, and given a suitable “test” function $\phi: G \rightarrow {\bf C}$, one can then embed $G$ into $L^2(G, d\mu)$ (or into other function spaces on $G$, such as $C_c(G)$ or $C_0(G)$) by mapping a group element $g \in G$ to the translate $\tau(g) \phi$ of $\phi$ in that function space. (This map might not actually be an embedding if $\phi$ enjoys a non-trivial translation symmetry $\tau(g) \phi = \phi$, but let us ignore this possibility for now.) One can then pull the metric structure on the function space back to a metric on $G$, for instance defining an $L^2(G, d\mu)$-based metric
$$d(g,h) := \|\tau(g) \phi - \tau(h) \phi\|_{L^2(G, d\mu)} \qquad (1)$$
if $\phi$ is square-integrable, or perhaps a $C_0(G)$-based metric
$$d(g,h) := \|\tau(g) \phi - \tau(h) \phi\|_{C_0(G)}$$
if $\phi$ is continuous and compactly supported (with $\|f\|_{C_0(G)} := \sup_{x \in G} |f(x)|$ denoting the supremum norm). These metrics tend to have several nice properties (for instance, they are automatically left-invariant), particularly if the test function $\phi$ is chosen to be sufficiently “smooth”. For instance, if we introduce the differentiation (or more precisely, finite difference) operators
$$\partial_g := 1 - \tau(g)$$
(so that $\partial_g f(x) = f(x) - f(g^{-1} x)$) and use the metric (1), then a short computation (relying on the translation-invariance of the $L^2$ norm) shows that
$$d([g,h], \mathrm{id}) = \|\partial_g \partial_h \phi - \partial_h \partial_g \phi\|_{L^2(G, d\mu)}$$
for all $g, h \in G$. This suggests that commutator estimates, such as those appearing in the definition of a Gleason metric in Notes 2, might be available if one can control “second derivatives” of $\phi$; informally, we would like our test functions $\phi$ to have a “$C^{1,1}$” type regularity.
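The computation behind this rests on the translation-invariance of the norm, and it can be checked concretely. The following sketch (my own illustration; the group, test function, and the use of the sup norm in place of $L^2$ are arbitrary choices) verifies numerically on the nonabelian group $S_3$ that the norm of $\partial_{[g,h]} \phi$, where $[g,h] = g^{-1} h^{-1} g h$ and $\partial_g = 1 - \tau(g)$, agrees with the norm of $\partial_g \partial_h \phi - \partial_h \partial_g \phi$:

```python
# Numerical check of ||[g,h]||_phi = ||d_g d_h phi - d_h d_g phi|| in S_3,
# using a translation-invariant (sup) norm.  phi is an arbitrary function.
from itertools import permutations

G = list(permutations(range(3)))           # the six elements of S_3

def mul(g, h):                             # composition (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    out = [0, 0, 0]
    for i, gi in enumerate(g):
        out[gi] = i
    return tuple(out)

phi = {g: float(sum(i * gi for i, gi in enumerate(g))) for g in G}

def tau(g, f):                             # (tau(g) f)(x) = f(g^{-1} x)
    return {x: f[mul(inv(g), x)] for x in G}

def d(g, f):                               # difference operator (1 - tau(g)) f
    tf = tau(g, f)
    return {x: f[x] - tf[x] for x in G}

def sup(f):
    return max(abs(v) for v in f.values())

for g in G:
    for h in G:
        c = mul(mul(inv(g), inv(h)), mul(g, h))   # commutator [g,h]
        lhs = sup(d(c, phi))                      # ||[g,h]||_phi
        comm = {x: d(g, d(h, phi))[x] - d(h, d(g, phi))[x] for x in G}
        assert abs(lhs - sup(comm)) < 1e-9
```

The underlying operator identity is $\partial_g \partial_h - \partial_h \partial_g = \tau(gh) - \tau(hg)$, which differs from $\partial_{[g,h]}$ only by a left translation, and translations do not change the norm.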
If $G$ was already a Lie group (or something similar, such as a local group) then it would not be too difficult to concoct such a function $\phi$ by using local coordinates. But of course the whole point of Hilbert’s fifth problem is to do without such regularity hypotheses, and so we need to build test functions by other means. And here is where the Haar measure comes in: it provides the fundamental tool of convolution
$$\phi * \psi(g) := \int_G \phi(h) \psi(h^{-1} g)\ d\mu(h)$$
between two suitable functions $\phi, \psi: G \rightarrow {\bf C}$, which can be used to build smoother functions out of rougher ones. For instance:
Exercise 1 Let $\phi, \psi: {\bf R}^d \rightarrow {\bf C}$ be continuous, compactly supported functions which are Lipschitz continuous. Show that the convolution $\phi * \psi$ using Lebesgue measure on ${\bf R}^d$ obeys the $C^{1,1}$-type commutator estimate
$$\|\partial_g \partial_h (\phi * \psi)\|_{C_0({\bf R}^d)} \leq C |g| |h|$$
for all $g, h \in {\bf R}^d$ and some finite quantity $C$ depending only on $\phi, \psi$.
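A discrete analogue (my own illustration, not part of the exercise) shows the mechanism at work: on ${\bf Z}$, a second difference of a convolution factors so that one difference lands on each factor, $\partial_g \partial_h (\phi * \psi) = (\partial_g \phi) * (\partial_h \psi)$, which is exactly why two Lipschitz factors yield a $C^{1,1}$-type bound:

```python
# Discrete check on Z: the second difference of a convolution equals the
# convolution of first differences, so each factor absorbs one difference.
def conv(a, b):
    # full convolution of two finitely supported sequences (lists)
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def diff(a, g):
    # (D_g a)(n) = a(n) - a(n - g), padding with zeros (finite support)
    padded = [0.0] * g + a + [0.0] * g
    return [padded[k] - padded[k - g] for k in range(len(padded))]

def strip(a):
    # drop leading/trailing zeros so shifted sequences can be compared
    lo = next(i for i, v in enumerate(a) if v != 0)
    hi = max(i for i, v in enumerate(a) if v != 0)
    return a[lo:hi + 1]

phi = [0.0, 1.0, 2.0, 1.0, 0.0]     # a "Lipschitz" tent sequence
psi = [0.0, 1.0, 1.0, 1.0, 0.0]     # a "Lipschitz" plateau sequence

g, h = 1, 2
lhs = strip(diff(diff(conv(phi, psi), g), h))   # D_g D_h (phi * psi)
rhs = strip(conv(diff(phi, g), diff(psi, h)))   # (D_g phi) * (D_h psi)
assert lhs == rhs
```

Since each first difference of a Lipschitz factor is $O(|g|)$ (resp. $O(|h|)$) in size, the factored form immediately gives the claimed bilinear bound.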
This exercise suggests a strategy to build Gleason metrics by convolving together some “Lipschitz” test functions and then using the resulting convolution as a test function to define a metric. This strategy may seem somewhat circular because one needs a notion of metric in order to define Lipschitz continuity in the first place, but it turns out that the properties required on that metric are weaker than those that the Gleason metric will satisfy, and so one will be able to break the circularity by using a “bootstrap” or “induction” argument.
We will discuss this strategy – which is due to Gleason, and is fundamental to all currently known solutions to Hilbert’s fifth problem – in later posts. In this post, we will construct Haar measure on general locally compact groups, and then establish the Peter-Weyl theorem, which in turn can be used to obtain a reasonably satisfactory structural classification of both compact groups and locally compact abelian groups.
My graduate text on measure theory (based on these lecture notes) is now published by the AMS as part of the Graduate Studies in Mathematics series. (See also my own blog page for this book, which among other things contains a draft copy of the book in PDF format.)
The classical inverse function theorem reads as follows:
Theorem 1 ($C^1$ inverse function theorem) Let $\Omega \subset {\bf R}^n$ be an open set, and let $f: \Omega \rightarrow {\bf R}^n$ be a continuously differentiable function, such that for every $x_0 \in \Omega$, the derivative map $Df(x_0): {\bf R}^n \rightarrow {\bf R}^n$ is invertible. Then $f$ is a local homeomorphism; thus, for every $x_0 \in \Omega$, there exists an open neighbourhood $U$ of $x_0$ and an open neighbourhood $V$ of $f(x_0)$ such that $f$ is a homeomorphism from $U$ to $V$.
It is also not difficult to show by inverting the Taylor expansion
$$f(x) = f(x_0) + Df(x_0)(x - x_0) + o(\|x - x_0\|)$$
that at each $x_0$, the local inverses $f^{-1}: V \rightarrow U$ are also differentiable at $f(x_0)$ with derivative
$$Df^{-1}(f(x_0)) = Df(x_0)^{-1}. \qquad (1)$$
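The inversion of the Taylor expansion is a one-line manipulation; a sketch (my own arrangement, using the comparability of $\|x - x_0\|$ and $\|y - y_0\|$ that follows from the invertibility of $Df(x_0)$):

```latex
% Writing y = f(x) and y_0 = f(x_0), the Taylor expansion gives
\begin{aligned}
y - y_0 &= Df(x_0)(x - x_0) + o(\|x - x_0\|), \\
\text{hence}\quad x - x_0 &= Df(x_0)^{-1}(y - y_0) + o(\|y - y_0\|),
\end{aligned}
% where the error stays little-o because \|x - x_0\| and \|y - y_0\| are
% comparable up to constants; this is precisely differentiability of
% f^{-1} at y_0 with derivative Df(x_0)^{-1}.
```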
The textbook proof of the inverse function theorem proceeds by an application of the contraction mapping theorem. Indeed, one may normalise $x_0 = f(x_0) = 0$ and $Df(0)$ to be the identity map; continuity of $Df$ then shows that $Df(x)$ is close to the identity for small $x$, which may be used (in conjunction with the fundamental theorem of calculus) to make $x \mapsto x - f(x) + y$ a contraction on a small ball around the origin for small $y$, at which point the contraction mapping theorem readily finishes off the problem.
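A quick numerical sketch (my own; the map $f$ below is an arbitrary illustrative choice with $Df(0)$ close to the identity) of this contraction at work: iterating $x \mapsto x + (y - f(x))$ converges rapidly to the local preimage of $y$:

```python
# Contraction-mapping iteration for a local inverse near the origin.
import math

def f(x):
    # an illustrative differentiable map R^2 -> R^2 with Df(0) near the identity
    return (x[0] + 0.1 * math.sin(x[1]), x[1] + 0.1 * x[0] ** 2)

def local_inverse(y, steps=50):
    # iterate T(x) = x + (y - f(x)); T is a contraction on a small ball
    # because its derivative I - Df(x) is small near the origin
    x = (0.0, 0.0)
    for _ in range(steps):
        x = (x[0] + y[0] - f(x)[0], x[1] + y[1] - f(x)[1])
    return x

y = (0.05, -0.02)
x = local_inverse(y)
assert max(abs(f(x)[0] - y[0]), abs(f(x)[1] - y[1])) < 1e-12
```

The fixed point of $T$ satisfies $f(x) = y$ exactly, which is the local surjectivity statement; uniqueness of the fixed point within the small ball is the local injectivity that the contraction mapping theorem supplies for free (and which becomes the hard part once $Df$ is merely everywhere defined rather than continuous).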
I recently learned (after I asked this question on Math Overflow) that the hypothesis of continuous differentiability may be relaxed to just everywhere differentiability:
Theorem 2 (Everywhere differentiable inverse function theorem) Let $\Omega \subset {\bf R}^n$ be an open set, and let $f: \Omega \rightarrow {\bf R}^n$ be an everywhere differentiable function, such that for every $x_0 \in \Omega$, the derivative map $Df(x_0): {\bf R}^n \rightarrow {\bf R}^n$ is invertible. Then $f$ is a local homeomorphism; thus, for every $x_0 \in \Omega$, there exists an open neighbourhood $U$ of $x_0$ and an open neighbourhood $V$ of $f(x_0)$ such that $f$ is a homeomorphism from $U$ to $V$.
As before, one can recover the differentiability of the local inverses, with the derivative of the inverse given by the usual formula (1).
This result implicitly follows from the more general results of Cernavskii about the structure of finite-to-one open and closed maps; however, the arguments there are somewhat complicated (and subsequent proofs of those results, such as the one by Vaisala, use some powerful tools from algebraic topology, such as dimension theory). There is however a more elementary proof of Saint Raymond that was pointed out to me by Julien Melleray. It only uses basic point-set topology (for instance, the concept of a connected component) and the basic topological and geometric structure of Euclidean space (in particular relying primarily on local compactness, local connectedness, and local convexity). I decided to present (an arrangement of) Saint Raymond’s proof here.
To obtain a local homeomorphism near $x_0$, there are basically two things to show: local surjectivity near $x_0$ (thus, for $y$ near $f(x_0)$, one can solve $f(x) = y$ for some $x$ near $x_0$) and local injectivity near $x_0$ (thus, for distinct $x_1, x_2$ near $x_0$, $f(x_1)$ is not equal to $f(x_2)$). Local surjectivity is relatively easy; basically, the standard proof of the inverse function theorem works here, after replacing the contraction mapping theorem (which is no longer available due to the possibly discontinuous nature of $Df$) with the Brouwer fixed point theorem instead (or one could also use degree theory, which is more or less an equivalent approach). The difficulty is local injectivity – one needs to preclude the existence of nearby points $x_1, x_2$ with $f(x_1) = f(x_2)$; note that in contrast to the contraction mapping theorem that provides both existence and uniqueness of fixed points, the Brouwer fixed point theorem only gives existence and not uniqueness.
In one dimension one can proceed by using Rolle’s theorem. Indeed, given two distinct points $a, b$ with $f(a) = f(b)$, as one traverses the interval from $a$ to $b$, one must encounter some intermediate point $x_*$ which maximises the quantity $f(x_*)$, so that $f$ is instantaneously non-increasing both to the left and to the right of $x_*$. But, by hypothesis, $f'(x_*)$ is non-zero, and this easily leads to a contradiction.
Saint Raymond’s argument for the higher dimensional case proceeds in a broadly similar way. Starting with two nearby points $a, b$ with $f(a) = f(b)$, one finds a point $x_0$ which “locally extremises” $f$ in the following sense: $\|f(x_0) - f(a)\|$ is equal to some quantity $r > 0$, but $x_0$ is adherent to at least two distinct connected components $U_1, U_2$ of the set $U = \{x : \|f(x) - f(a)\| < r\}$. (This is an oversimplification, as one has to restrict the available points $x$ to a suitably small compact set, but let us ignore this technicality for now.) Note from the non-degenerate nature of $Df(x_0)$ that $x_0$ was already adherent to $U$; the point is that $x_0$ “disconnects” $U$ in some sense. Very roughly speaking, the way such a critical point $x_0$ is found is to look at the sets $\{x : \|f(x) - f(a)\| \leq r\}$ as $r$ shrinks from a large initial value down to zero, and one finds the first value of $r$ below which this set disconnects $a$ from $b$. (Morally, one is performing some sort of Morse theory here on the function $x \mapsto \|f(x) - f(a)\|$, though this function does not have anywhere near enough regularity for classical Morse theory to apply.)
The point $x_0$ is mapped to a point $f(x_0)$ on the boundary of the ball $B(f(a), r)$, while the components $U_1, U_2$ are mapped to the interior of this ball. By using a continuity argument, one can show (again very roughly speaking) that $f(U_1)$ must contain a “hemispherical” neighbourhood of $f(x_0)$ inside $B(f(a), r)$, and similarly for $f(U_2)$. But then from differentiability of $f$ at $x_0$, one can then show that $f(U_1)$ and $f(U_2)$ overlap near $f(x_0)$, giving a contradiction.
The rigorous details of the proof are provided below the fold.
This is another installment of my series of posts on Hilbert’s fifth problem. One formulation of this problem is answered by the following theorem of Gleason and Montgomery-Zippin:
Theorem 1 (Hilbert’s fifth problem) Let $G$ be a topological group which is locally Euclidean. Then $G$ is isomorphic to a Lie group.
Theorem 1 is a deep and difficult result, but the discussion in the previous posts has reduced the proof of this Theorem to that of establishing two simpler results, involving the concepts of a no small subgroups (NSS) group, and that of a Gleason metric. We briefly recall the relevant definitions:
Definition 2 (NSS) A topological group $G$ is said to have no small subgroups, or is NSS for short, if there is an open neighbourhood $U$ of the identity in $G$ that contains no subgroups of $G$ other than the trivial subgroup $\{\mathrm{id}\}$.
Definition 3 (Gleason metric) Let $G$ be a topological group. A Gleason metric on $G$ is a left-invariant metric $d: G \times G \rightarrow {\bf R}^+$ which generates the topology on $G$ and obeys the following properties for some constant $C > 0$, writing $\|g\|$ for $d(g,\mathrm{id})$:
- (Escape property) If $g \in G$ and $n \geq 1$ is such that $n \|g\| \leq \frac{1}{C}$, then
$$\|g^n\| \geq \frac{1}{C} n \|g\| \qquad (1)$$
- (Commutator estimate) If $g, h \in G$ are such that $\|g\|, \|h\| \leq \frac{1}{C}$, then
$$\|[g,h]\| \leq C \|g\| \|h\| \qquad (2)$$
where $[g,h] := g^{-1} h^{-1} g h$ is the commutator of $g$ and $h$.
The remaining steps in the resolution of Hilbert’s fifth problem are then as follows:
Theorem 4 (Reduction to the NSS case) Let $G$ be a locally compact group, and let $U$ be an open neighbourhood of the identity in $G$. Then there exists an open subgroup $G'$ of $G$, and a compact normal subgroup $N$ of $G'$ contained in $U$, such that $G'/N$ is NSS and locally compact.
Theorem 5 (Gleason’s lemma) Let $G$ be a locally compact NSS group. Then $G$ has a Gleason metric.
The purpose of this post is to establish these two results, using arguments that are originally due to Gleason. We will split this task into several subtasks, each of which improves the structure on the group by some amount:
Proposition 6 (From locally compact to metrisable) Let $G$ be a locally compact group, and let $U$ be an open neighbourhood of the identity in $G$. Then there exists an open subgroup $G'$ of $G$, and a compact normal subgroup $N$ of $G'$ contained in $U$, such that $G'/N$ is locally compact and metrisable.
For any open neighbourhood $U$ of the identity in $G$, let $Q[U]$ be the union of all the subgroups of $G$ that are contained in $U$. (Thus, for instance, $G$ is NSS if and only if $Q[U]$ is trivial for all sufficiently small $U$.)
Proposition 7 (From metrisable to subgroup trapping) Let $G$ be a locally compact metrisable group. Then $G$ has the subgroup trapping property: for every open neighbourhood $U$ of the identity, there exists another open neighbourhood $V$ of the identity such that $Q[V]$ generates a subgroup $\langle Q[V] \rangle$ contained in $U$.
Proposition 8 (From subgroup trapping to NSS) Let $G$ be a locally compact group with the subgroup trapping property, and let $U$ be an open neighbourhood of the identity in $G$. Then there exists an open subgroup $G'$ of $G$, and a compact normal subgroup $N$ of $G'$ contained in $U$, such that $G'/N$ is locally compact and NSS.
Proposition 9 (From NSS to the escape property) Let $G$ be a locally compact NSS group. Then there exists a left-invariant metric $d$ on $G$ generating the topology on $G$ which obeys the escape property (1) for some constant $C > 0$.
Proposition 10 (From escape to the commutator estimate) Let $G$ be a locally compact group with a left-invariant metric $d$ that obeys the escape property (1). Then $d$ also obeys the commutator property (2).
It is clear that Propositions 6, 7, and 8 combine to give Theorem 4, and Propositions 9, 10 combine to give Theorem 5.
Propositions 6-10 are all proven separately, but their proofs share some common strategies and ideas. The first main idea is to construct metrics on a locally compact group $G$ by starting with a suitable “bump function” $\phi \in C_c(G)$ (i.e. a continuous, compactly supported function from $G$ to ${\bf R}$) and pulling back the metric structure on $C_c(G)$ by using the translation action $\tau(g) \phi(x) := \phi(g^{-1} x)$, thus creating a (semi-)metric
$$d_\phi(g,h) := \|\tau(g) \phi - \tau(h) \phi\|_{C_c(G)}.$$
One easily verifies that this is indeed a (semi-)metric (in that it is non-negative, symmetric, and obeys the triangle inequality); it is also left-invariant, and so we have $d_\phi(g,h) = \|g^{-1} h\|_\phi$, where
$$\|g\|_\phi := d_\phi(g, \mathrm{id}) = \|\partial_g \phi\|_{C_c(G)},$$
where $\partial_g$ is the difference operator $\partial_g = 1 - \tau(g)$,
$$\partial_g \phi(x) = \phi(x) - \phi(g^{-1} x).$$
This construction was already seen in the proof of the Birkhoff-Kakutani theorem, which is the main tool used to establish Proposition 6. For the other propositions, the idea is to choose a bump function $\phi$ that is “smooth” enough that it creates a metric with good properties such as the commutator estimate (2). Roughly speaking, to get a bound of the form (2), one needs $\phi$ to have “$C^{1,1}$ regularity” with respect to the “right” smooth structure on $G$. By $C^{1,1}$ regularity, we mean here something like a bound of the form
$$\|\partial_g \partial_h \phi\|_{C_c(G)} \ll \|g\|_\phi \|h\|_\phi \qquad (4)$$
for all $g, h \in G$. Here we use the usual asymptotic notation, writing $X \ll Y$ or $X = O(Y)$ if $X \leq C Y$ for some constant $C$ (which can vary from line to line).
The following lemma illustrates how $C^{1,1}$ regularity can be used to build Gleason metrics.
Lemma 11 Suppose that $\phi \in C_c(G)$ obeys (4). Then the (semi-)metric $d_\phi$ (and associated (semi-)norm $\| \cdot \|_\phi$) obey the escape property (1) and the commutator property (2).
Proof: We begin with the commutator property (2). Observe the identity
$$\tau(hg) \partial_{[g,h]} = \partial_h \partial_g - \partial_g \partial_h,$$
whence
$$\|[g,h]\|_\phi = \|\partial_g \partial_h \phi - \partial_h \partial_g \phi\|_{C_c(G)}.$$
From the triangle inequality (and translation-invariance of the $C_c(G)$ norm) we thus see that (2) follows from (4). Similarly, to obtain the escape property (1), observe the telescoping identity
$$\partial_{g^n} = n \partial_g - \sum_{i=0}^{n-1} \partial_{g^i} \partial_g$$
for any $g \in G$ and natural number $n$, and thus by the triangle inequality
$$\|g^n\|_\phi \geq n \|g\|_\phi - \sum_{i=0}^{n-1} \|\partial_{g^i} \partial_g \phi\|_{C_c(G)}.$$
But from (4) (and the triangle inequality) we have
$$\|\partial_{g^i} \partial_g \phi\|_{C_c(G)} \ll i \|g\|_\phi^2 \ll n \|g\|_\phi^2,$$
and thus we have the “Taylor expansion”
$$\|g^n\|_\phi \geq n \|g\|_\phi - O(n^2 \|g\|_\phi^2),$$
which gives (1) when $n \|g\|_\phi$ is sufficiently small. $\Box$
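The telescoping identity $\partial_{g^n} = n \partial_g - \sum_{i=0}^{n-1} \partial_{g^i} \partial_g$ (with $\partial_g = 1 - \tau(g)$) that drives the escape-property argument is an exact algebraic identity, and can be verified numerically; the following sketch (my own; the cyclic group and test function are arbitrary illustrative choices) checks it on ${\bf Z}/12{\bf Z}$:

```python
# Exact check of the telescoping identity
#   d_{g^n} phi = n d_g phi - sum_{i=0}^{n-1} d_{g^i} d_g phi
# on the cyclic group Z_12, where g^n is n*g mod 12.
N = 12
phi = [float((x * x) % 5) for x in range(N)]   # arbitrary test function

def tau(g, f):                                 # (tau(g) f)(x) = f(x - g mod N)
    return [f[(x - g) % N] for x in range(N)]

def d(g, f):                                   # difference operator (1 - tau(g)) f
    tf = tau(g, f)
    return [f[x] - tf[x] for x in range(N)]

g, n = 2, 5
lhs = d((g * n) % N, phi)                      # d_{g^n} phi
rhs = [n * v for v in d(g, phi)]               # n d_g phi ...
for i in range(n):
    dd = d((g * i) % N, d(g, phi))             # ... minus d_{g^i} d_g phi
    rhs = [r - v for r, v in zip(rhs, dd)]
assert lhs == rhs
```

Taking norms and using the $C^{1,1}$ bound on the second-difference terms then turns this identity into the quantitative escape estimate.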
It remains to obtain bump functions $\phi$ that have the desired $C^{1,1}$ regularity property. In order to get such regular bump functions, we will use the trick of convolving together two lower regularity bump functions (such as two functions with “$C^{0,1}$ regularity” in some sense to be determined later). In order to perform this convolution, we will use the fundamental tool of (left-invariant) Haar measure $\mu$ on the locally compact group $G$. Here we exploit the basic fact that the convolution
$$\phi_1 * \phi_2(g) := \int_G \phi_1(h) \phi_2(h^{-1} g)\ d\mu(h)$$
of two functions $\phi_1, \phi_2 \in C_c(G)$ tends to be smoother than either of the two factors $\phi_1, \phi_2$. This is easiest to see in the abelian case, since in this case we can distribute derivatives according to the law
$$\partial_g \partial_h (\phi_1 * \phi_2) = (\partial_g \phi_1) * (\partial_h \phi_2),$$
which suggests that the order of “differentiability” of $\phi_1 * \phi_2$ should be the sum of the orders of $\phi_1$ and $\phi_2$ separately.
These ideas are already sufficient to establish Proposition 10 directly, and also Proposition 9 when combined with an additional bootstrap argument. The proofs of Proposition 7 and Proposition 8 use similar techniques, but are more difficult due to the potential presence of small subgroups, which require an application of the Peter-Weyl theorem to properly control. Both of these theorems will be proven below the fold, thus (when combined with the preceding posts) completing the proof of Theorem 1.
The presentation here is based on some unpublished notes of van den Dries and Goldbring on Hilbert’s fifth problem. I am indebted to Emmanuel Breuillard, Ben Green, and Tom Sanders for many discussions related to these arguments.