
In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).

Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the {P \neq NP} problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The {P \neq NP} problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as {k}-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and {k}-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.

My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or Dreamworks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector {x \in {\bf R}^n} in an affine space {\{ x \in {\bf R}^n: Ax = b \}} (where {b \in {\bf R}^m} and {A \in {\bf R}^{m\times n}} are given, and {m} is typically somewhat smaller than {n}) which minimises the {\ell^1}-norm {\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|} of the vector {x}. This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once {n} and {m} get moderately large (e.g. of the order of {10^6}), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix {A}. Much of the problem comes from the fact that the functional {x \mapsto \|x\|_1} is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint {Ax=b} with a convex penalty term {\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}, thus one is now trying to minimise the unconstrained functional

\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.

This functional is more convex, and is over a computationally simpler domain {{\bf R}^n} than the affine space {\{x \in {\bf R}^n: Ax=b\}}, so is easier (though still not entirely trivial) to optimise over. However, the minimiser {x^\epsilon} to this problem need not match the minimiser {x^0} to the original problem, particularly if the (sub-)gradient {\partial \|x\|_1} of the original functional {\|x\|_1} is large at {x^0}, and if {\epsilon} is not set to be small. (And setting {\epsilon} too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term

\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2

then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser {x^0} to the original problem, provided that {p} is (or more precisely, lies in) the (sub-)gradient {\partial \|x\|_1} of {\|x\|_1} at {x^0} – even if {\epsilon} is not small. But, one does not know in advance what the correct value of {p} should be, because one does not know what the minimiser {x^0} is.

With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for {x} and {p} simultaneously; given the current iterates {x^k, p^k}, one first updates {x^k} to the minimiser {x^{k+1} \in {\bf R}^n} of the convex functional

\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)

and then updates {p^{k+1}} to the natural value of the subgradient {\partial \|x\|_1} at {x^{k+1}}, namely

\displaystyle p^{k+1} := p^k - \nabla \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)

(note upon taking the first variation of (1) that {p^{k+1}} is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser {x^0} even for non-small values of {\epsilon}, and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
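To make the discussion concrete, here is a minimal numerical sketch of this Bregman iteration on a small random basis pursuit instance, with the inner minimisation of (1) performed by a few hundred steps of proximal gradient descent (soft thresholding). All problem sizes and parameters below are illustrative choices, not taken from the paper of Yin, Goldfarb, Darbon, and Osher:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                      # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0                               # consistent measurements

def shrink(v, t):
    """Soft thresholding: the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

eps = 1.0
tau = eps / np.linalg.norm(A, 2) ** 2    # safe step size for the inner solve
x = np.zeros(n)
p = np.zeros(n)
for _ in range(80):                      # outer Bregman updates
    for _ in range(400):                 # inner solve of the functional (1) by ISTA
        grad = -p + (A.T @ (A @ x - b)) / eps
        x = shrink(x - tau * grad, tau)
    p = p - (A.T @ (A @ x - b)) / eps    # update the subgradient variable

print(float(np.linalg.norm(A @ x - b)))  # constraint residual, should be tiny
```

Note that {p} accumulates the (scaled) constraint residuals over the iterations, which is what allows the scheme to reach the exact constrained minimiser even though {\epsilon} is not small.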

Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth {n}-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space {{\bf R}^m}. The Nash embedding theorem guarantees that one can do this if {m} is large enough depending on {n}, but what is the minimal value of {m} one can get away with? Many years ago, my colleague Robert Greene showed that {m = \frac{n(n+1)}{2} + n} sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevski argument shows that {m = \frac{n(n+1)}{2}} is possible, and no better. So this suggests that {m = \frac{n(n+1)}{2}} is the threshold for the smooth problem also, but this remains open in general. The case {n=1} is trivial, and the {n=2} case is not too difficult (if the curvature is non-zero) as the codimension {m-n} is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the {n=3} case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.

Let {f: {\bf R}^3 \rightarrow {\bf R}} be an irreducible polynomial in three variables. As {{\bf R}} is not algebraically closed, the zero set {Z_{\bf R}(f) = \{ x \in{\bf R}^3: f(x)=0\}} can split into various components of dimension between {0} and {2}. For instance, if {f(x_1,x_2,x_3) = x_1^2+x_2^2}, the zero set {Z_{\bf R}(f)} is a line; more interestingly, if {f(x_1,x_2,x_3) = x_3^2 + x_2^2 - x_2^3}, then {Z_{\bf R}(f)} is the union of a line and a surface (equivalently, it is the product of an acnodal cubic curve with a line). We will assume that the {2}-dimensional component {Z_{{\bf R},2}(f)} is non-empty, thus defining a real surface in {{\bf R}^3}. In particular, this hypothesis implies that {f} is not just irreducible over {{\bf R}}, but is in fact absolutely irreducible (i.e. irreducible over {{\bf C}}), since otherwise one could use the complex factorisation of {f} to contain {Z_{\bf R}(f)} inside the intersection {{\bf Z}_{\bf C}(g) \cap {\bf Z}_{\bf C}(\bar{g})} of the complex zero locus of a complex polynomial {g} and its complex conjugate, with {g,\bar{g}} having no common factor, forcing {Z_{\bf R}(f)} to be at most one-dimensional. (For instance, in the case {f(x_1,x_2,x_3)=x_1^2+x_2^2}, one can take {g(z_1,z_2,z_3) = z_1 + i z_2}.) Among other things, this makes {{\bf Z}_{{\bf R},2}(f)} a Zariski-dense subset of {{\bf Z}_{\bf C}(f)}, thus any polynomial identity which holds true at every point of {{\bf Z}_{{\bf R},2}(f)}, also holds true on all of {{\bf Z}_{\bf C}(f)}. This allows us to easily use tools from algebraic geometry in this real setting, even though the reals are not quite algebraically closed.

The surface {Z_{{\bf R},2}(f)} is said to be ruled if, for a Zariski open dense set of points {x \in Z_{{\bf R},2}(f)}, there exists a line {l_x = \{ x+tv_x: t \in {\bf R} \}} through {x} for some non-zero {v_x \in {\bf R}^3} which is completely contained in {Z_{{\bf R},2}(f)}, thus

\displaystyle f(x+tv_x)=0

for all {t \in {\bf R}}. Also, a point {x \in {\bf Z}_{{\bf R},2}(f)} is said to be a flecnode if there exists a line {l_x = \{ x+tv_x: t \in {\bf R}\}} through {x} for some non-zero {v_x \in {\bf R}^3} which is tangent to {Z_{{\bf R},2}(f)} to third order, in the sense that

\displaystyle f(x+tv_x)=O(t^4)

as {t \rightarrow 0}, or equivalently that

\displaystyle  \frac{d^j}{dt^j} f(x+tv_x)|_{t=0} = 0 \ \ \ \ \ (1)

for {j=0,1,2,3}. Clearly, if {Z_{{\bf R},2}(f)} is a ruled surface, then a Zariski open dense set of points on {Z_{{\bf R},2}(f)} are flecnodes. We then have the remarkable theorem of Cayley and Salmon asserting the converse:

Theorem 1 (Cayley-Salmon theorem) Let {f: {\bf R}^3 \rightarrow {\bf R}} be an irreducible polynomial with {{\bf Z}_{{\bf R},2}} non-empty. Suppose that a Zariski dense set of points in {Z_{{\bf R},2}(f)} are flecnodes. Then {Z_{{\bf R},2}(f)} is a ruled surface.

Among other things, this theorem was used in the celebrated result of Guth and Katz that almost solved the Erdos distance problem in two dimensions, as discussed in this previous blog post. Vanishing to third order is necessary: observe that in a surface of negative curvature, such as the saddle {\{ (x_1,x_2,x_3): x_3 = x_1^2 - x_2^2 \}}, every point on the surface is tangent to second order to a line (the line in the direction for which the second fundamental form vanishes).
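As an aside, the flecnode conditions (1) are easy to test symbolically. For the saddle just mentioned, the asymptotic directions at each point are in fact directions of ruling lines (the saddle happens to be doubly ruled), so the tangency there is to infinite order; here is a small sympy check (the choice of direction {v} below is worked out by hand and should be treated as illustrative):

```python
import sympy as sp

a, b, t = sp.symbols('a b t', real=True)
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

f = x3 - (x1**2 - x2**2)                 # the saddle x3 = x1^2 - x2^2 as a zero set

point = (a, b, a**2 - b**2)              # an arbitrary point on the saddle
v = (1, 1, 2*(a - b))                    # one of the two asymptotic directions

# restrict f to the line point + t*v; if this vanishes identically in t,
# the line lies entirely in the surface
restriction = f.subs({x1: point[0] + t*v[0],
                      x2: point[1] + t*v[1],
                      x3: point[2] + t*v[2]})
print(sp.expand(restriction))            # 0
```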

The original proof of the Cayley-Salmon theorem, dating back to at least 1915, is not easily accessible and not written in modern language. A modern proof of this theorem (together with substantial generalisations, for instance to higher dimensions) is given by Landsberg; the proof uses the machinery of modern algebraic geometry. The purpose of this post is to record an alternate proof of the Cayley-Salmon theorem based on classical differential geometry (in particular, the notion of torsion of a curve) and basic ODE methods (in particular, Gronwall’s inequality and the Picard existence theorem). The idea is to “integrate” the lines {l_x} indicated by the flecnode to produce smooth curves {\gamma} on the surface {{\bf Z}_{{\bf R},2}}; one then uses the vanishing (1) and some basic calculus to conclude that these curves have zero torsion and are thus planar curves. Some further manipulation using (1) (now just to second order instead of third) then shows that these curves are in fact straight lines, giving the ruling on the surface.

Update: Janos Kollar has informed me that the above theorem was essentially known to Monge in 1809; see his recent arXiv note for more details.

I thank Larry Guth and Micha Sharir for conversations leading to this post.


In this set of notes, we describe the basic analytic structure theory of Lie groups, by relating them to the simpler concept of a Lie algebra. Roughly speaking, the Lie algebra encodes the “infinitesimal” structure of a Lie group, but is a simpler object, being a vector space rather than a nonlinear manifold. Nevertheless, thanks to the fundamental theorems of Lie, the Lie algebra can be used to reconstruct the Lie group (at a local level, at least), by means of the exponential map and the Baker-Campbell-Hausdorff formula. As such, the local theory of Lie groups is completely described (in principle, at least) by the theory of Lie algebras, which leads to a number of useful consequences, such as the following:

  • (Local Lie implies Lie) A topological group {G} is Lie (i.e. it is isomorphic to a Lie group) if and only if it is locally Lie (i.e. the group operations are smooth near the origin).
  • (Uniqueness of Lie structure) A topological group has at most one smooth structure on it that makes it Lie.
  • (Weak regularity implies strong regularity, I) Lie groups are automatically real analytic. (In fact one only needs a “local {C^{1,1}}” regularity on the group structure to obtain real analyticity.)
  • (Weak regularity implies strong regularity, II) A continuous homomorphism from one Lie group to another is automatically smooth (and real analytic).

The connection between Lie groups and Lie algebras also highlights the role of one-parameter subgroups of a topological group, which will play a central role in the solution of Hilbert’s fifth problem.

We note that there is also a very important algebraic structure theory of Lie groups and Lie algebras, in which the Lie algebra is split into solvable and semisimple components, with the latter being decomposed further into simple components, which can then be completely classified using Dynkin diagrams. This classification is of fundamental importance in many areas of mathematics (e.g. representation theory, arithmetic geometry, and group theory), and many of the deeper facts about Lie groups and Lie algebras are proven via this classification (although in such cases it can be of interest to also find alternate proofs that avoid the classification). However, it turns out that we will not need this theory in this course, and so we will not discuss it further here (though it can of course be found in any graduate text on Lie groups and Lie algebras).


Over the past few months or so, I have been brushing up on my Lie group theory, as part of my project to fully understand the theory surrounding Hilbert’s fifth problem. Every so often, I encounter a basic fact in Lie theory which requires a slightly non-trivial “trick” to prove; I am recording two of them here, so that I can find these tricks again when I need to.

The first fact concerns the exponential map {\exp: {\mathfrak g} \rightarrow G} from a Lie algebra {{\mathfrak g}} of a Lie group {G} to that group. (For this discussion we will only consider finite-dimensional Lie groups and Lie algebras over the reals {{\bf R}}.) A basic fact in the subject is that the exponential map is locally a homeomorphism: there is a neighbourhood of the origin in {{\mathfrak g}} that is mapped homeomorphically by the exponential map to a neighbourhood of the identity in {G}. This local homeomorphism property is the foundation of an important dictionary between Lie groups and Lie algebras.

It is natural to ask whether the exponential map is globally a homeomorphism, and not just locally: in particular, whether the exponential map remains both injective and surjective. For instance, this is the case for connected, simply connected, nilpotent Lie groups (as can be seen from the Baker-Campbell-Hausdorff formula.)

The circle group {S^1}, which has {{\bf R}} as its Lie algebra, already shows that global injectivity fails for any group that contains a circle subgroup, which is a huge class of examples (including, for instance, the positive-dimensional compact Lie groups, or non-simply-connected Lie groups). Surjectivity also obviously fails for disconnected groups, since the Lie algebra is necessarily connected, and so the image under the exponential map must be connected also. However, even for connected Lie groups, surjectivity can fail. To see this, first observe that if the exponential map were surjective, then every group element {g \in G} would have a square root (i.e. an element {h \in G} with {h^2 = g}), since {\exp(x)} has {\exp(x/2)} as a square root for any {x \in {\mathfrak g}}. However, there exist elements in connected Lie groups without square roots. A simple example is provided by the matrix

\displaystyle  g = \begin{pmatrix} -4 & 0 \\ 0 & -1/4 \end{pmatrix}

in the connected Lie group {SL_2({\bf R})}. This matrix has eigenvalues {-4}, {-1/4}. Thus, if {h \in SL_2({\bf R})} is a square root of {g}, we see (from the Jordan normal form) that it must have at least one eigenvalue in {\{-2i,+2i\}}, and at least one eigenvalue in {\{-i/2,i/2\}}. On the other hand, as {h} has real coefficients, the complex eigenvalues must come in conjugate pairs {\{ a+bi, a-bi\}}. Since {h} can only have at most {2} eigenvalues, we obtain a contradiction.
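One can also play with this example numerically: the principal matrix square root of {g} comes out purely imaginary, consistent with the eigenvalue argument above. (A quick sketch with scipy; of course the numerics only exhibit one particular square root, and it is the argument above that rules out all real ones.)

```python
import numpy as np
from scipy.linalg import sqrtm

g = np.array([[-4.0, 0.0],
              [0.0, -0.25]])          # determinant 1, so g lies in SL_2(R)

h = sqrtm(g)                          # principal square root: diag(2i, i/2)
print(h)
print(np.allclose(h @ h, g))          # h is a genuine (complex) square root
print(np.max(np.abs(h.imag)) > 0.1)   # ...but it is not real
```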

However, there is an important case where surjectivity is recovered:

Proposition 1 If {G} is a compact connected Lie group, then the exponential map is surjective.

Proof: The idea here is to relate the exponential map in Lie theory to the exponential map in Riemannian geometry. We first observe that every compact Lie group {G} can be given the structure of a Riemannian manifold with a bi-invariant metric. This can be seen in one of two ways. Firstly, one can put an arbitrary positive definite inner product on {{\mathfrak g}} and average it against the adjoint action of {G} using Haar probability measure (which is available since {G} is compact); this gives an ad-invariant positive-definite inner product on {{\mathfrak g}} that one can then translate by either left or right translation to give a bi-invariant Riemannian structure on {G}. Alternatively, one can use the Peter-Weyl theorem to embed {G} in a unitary group {U(n)}, at which point one can induce a bi-invariant metric on {G} from the one on the space {M_n({\bf C}) \equiv {\bf C}^{n^2}} of {n \times n} complex matrices.

As {G} is connected and compact and thus complete, we can apply the Hopf-Rinow theorem and conclude that any two points are connected by at least one geodesic, so that the Riemannian exponential map from {{\mathfrak g}} to {G} formed by following geodesics from the origin is surjective. But one can check that the Lie exponential map and Riemannian exponential map agree; for instance, this can be seen by noting that the group structure naturally defines a connection on the tangent bundle which is both torsion-free and preserves the bi-invariant metric, and must therefore agree with the Levi-Civita connection. (Alternatively, one can embed into a unitary group {U(n)} and observe that {G} is totally geodesic inside {U(n)}, because the geodesics in {U(n)} can be described explicitly in terms of one-parameter subgroups.) The claim follows. \Box
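For a concrete illustration of the proposition, take {G = SO(3)}: the principal matrix logarithm of a (generic) rotation is a real skew-symmetric matrix, i.e. an element of the Lie algebra, and exponentiating it recovers the rotation. A numerical sketch (rotations by angle exactly {\pi}, where the principal logarithm degenerates, are avoided here with probability one):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)

# a random rotation matrix via the QR decomposition of a Gaussian matrix
Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
Q = Q * np.sign(np.diag(R))           # normalise column signs
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                # force det = +1, so Q lies in SO(3)

X = logm(Q).real                      # principal logarithm (real for angle < pi)
print(np.allclose(X, -X.T))           # X is skew-symmetric: X lies in so(3)
print(np.allclose(expm(X), Q))        # the exponential map hits Q
```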

Remark 1 While it is quite nice to see Riemannian geometry come in to prove this proposition, I am curious to know if there is any other proof of surjectivity for compact connected Lie groups that does not require explicit introduction of Riemannian geometry concepts.

The other basic fact I learned recently concerns the algebraic nature of Lie groups and Lie algebras. An important family of examples of Lie groups are the algebraic groups – algebraic varieties with a group law given by algebraic maps. Given that one can always automatically upgrade the smooth structure on a Lie group to analytic structure (by using the Baker-Campbell-Hausdorff formula), it is natural to ask whether one can upgrade the structure further to an algebraic structure. Unfortunately, this is not always the case. A prototypical example of this is given by the one-parameter subgroup

\displaystyle  G := \{ \begin{pmatrix} t & 0 \\ 0 & t^\alpha \end{pmatrix}: t \in {\bf R}^+ \} \ \ \ \ \ (1)

of {GL_2({\bf R})}. This is a Lie group for any exponent {\alpha \in {\bf R}}, but if {\alpha} is irrational, then the curve that {G} traces out is not an algebraic subset of {GL_2({\bf R})} (as one can see by playing around with Puiseux series).

This is not a true counterexample to the claim that every Lie group can be given the structure of an algebraic group, because one can give {G} a different algebraic structure than the one inherited from the ambient group {GL_2({\bf R})}. Indeed, {G} is clearly isomorphic to the additive group {{\bf R}}, which is of course an algebraic group. However, a modification of the above construction works:

Proposition 2 There exists a Lie group {G} that cannot be given the structure of an algebraic group.

Proof: We use an example from the text of Tauvel and Yu (that I found via this MathOverflow posting). We consider the subgroup

\displaystyle  G := \{ \begin{pmatrix} 1 & 0 & 0 \\ x & t & 0 \\ y & 0 & t^\alpha \end{pmatrix}: x, y \in {\bf R}; t \in {\bf R}^+ \}

of {GL_3({\bf R})}, with {\alpha} an irrational number. This is a three-dimensional (metabelian) Lie group, whose Lie algebra {{\mathfrak g} \subset {\mathfrak gl}_3({\bf R})} is spanned by the elements

\displaystyle  X := \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \alpha \end{pmatrix}

\displaystyle  Y := \begin{pmatrix} 0 & 0 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}

\displaystyle  Z := \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -\alpha & 0 & 0 \end{pmatrix}

with the Lie bracket given by

\displaystyle  [Y,X] = -Y; [Z,X] = -\alpha Z; [Y,Z] = 0.
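These bracket relations are a quick computation; for the skeptical, here is a numerical verification with a sample irrational value of {\alpha}:

```python
import numpy as np

alpha = np.sqrt(2.0)                  # a sample irrational exponent

X = np.diag([0.0, 1.0, alpha])
Y = np.zeros((3, 3)); Y[1, 0] = -1.0
Z = np.zeros((3, 3)); Z[2, 0] = -alpha

def bracket(A, B):
    """Matrix Lie bracket [A, B] = AB - BA."""
    return A @ B - B @ A

print(np.allclose(bracket(Y, X), -Y))                # [Y, X] = -Y
print(np.allclose(bracket(Z, X), -alpha * Z))        # [Z, X] = -alpha Z
print(np.allclose(bracket(Y, Z), np.zeros((3, 3))))  # [Y, Z] = 0
```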

As such, we see that if we use the basis {X, Y, Z} to identify {{\mathfrak g}} with {{\bf R}^3}, then the adjoint representation of {G} is the identity map.

If {G} is an algebraic group, it is easy to see that the adjoint representation {\hbox{Ad}: G \rightarrow GL({\mathfrak g})} is also algebraic, and so {\hbox{Ad}(G) = G} is algebraic in {GL({\mathfrak g})}. Specialising to our specific example, in which the adjoint representation is the identity, we conclude that if {G} has any algebraic structure, then it must also be an algebraic subgroup of {GL_3({\bf R})}; but {G} projects to the group (1) which is not algebraic, a contradiction. \Box

A slight modification of the same argument also shows that not every Lie algebra is algebraic, in the sense that it is isomorphic to a Lie algebra of an algebraic group. (However, there are important classes of Lie algebras that are automatically algebraic, such as nilpotent or semisimple Lie algebras.)

Hilbert’s fifth problem asks to clarify the extent to which the assumption of a differentiable or smooth structure is actually needed in the theory of Lie groups and their actions. While this question is not precisely formulated and is thus open to some interpretation, the following result of Gleason and Montgomery-Zippin answers at least one aspect of this question:

Theorem 1 (Hilbert’s fifth problem) Let {G} be a topological group which is locally Euclidean (i.e. it is a topological manifold). Then {G} is isomorphic to a Lie group.

Theorem 1 can be viewed as an application of the more general structural theory of locally compact groups. In particular, Theorem 1 can be deduced from the following structural theorem of Gleason and Yamabe:

Theorem 2 (Gleason-Yamabe theorem) Let {G} be a locally compact group, and let {U} be an open neighbourhood of the identity in {G}. Then there exists an open subgroup {G'} of {G}, and a compact subgroup {N} of {G'} contained in {U}, such that {G'/N} is isomorphic to a Lie group.

The deduction of Theorem 1 from Theorem 2 proceeds using the Brouwer invariance of domain theorem and is discussed in this previous post. In this post, I would like to discuss the proof of Theorem 2. We can split this proof into three parts, by introducing two additional concepts. The first is the property of having no small subgroups:

Definition 3 (NSS) A topological group {G} is said to have no small subgroups, or is NSS for short, if there is an open neighbourhood {U} of the identity in {G} that contains no subgroups of {G} other than the trivial subgroup {\{ \hbox{id}\}}.

An equivalent definition of an NSS group is one which has an open neighbourhood {U} of the identity from which every non-identity element {g \in G \backslash \{\hbox{id}\}} escapes in finite time, in the sense that {g^n \not \in U} for some positive integer {n}. It is easy to see that all Lie groups are NSS; we shall shortly see that the converse statement (in the locally compact case) is also true, though significantly harder to prove.

Another useful property is that of having what I will call a Gleason metric:

Definition 4 Let {G} be a topological group. A Gleason metric on {G} is a left-invariant metric {d: G \times G \rightarrow {\bf R}^+} which generates the topology on {G} and obeys the following properties for some constant {C>0}, writing {\|g\|} for {d(g,\hbox{id})}:

  • (Escape property) If {g \in G} and {n \geq 1} is such that {n \|g\| \leq \frac{1}{C}}, then {\|g^n\| \geq \frac{1}{C} n \|g\|}.
  • (Commutator estimate) If {g, h \in G} are such that {\|g\|, \|h\| \leq \frac{1}{C}}, then

    \displaystyle  \|[g,h]\| \leq C \|g\| \|h\|, \ \ \ \ \ (1)

    where {[g,h] := g^{-1}h^{-1}gh} is the commutator of {g} and {h}.

For instance, for the unitary group {U(n)}, the operator norm metric {d(g,h) := \|g-h\|_{op}} can easily be verified to be a Gleason metric, with the commutator estimate (1) coming from the inequality

\displaystyle  \| [g,h] - 1 \|_{op} = \| gh - hg \|_{op}

\displaystyle  = \| (g-1) (h-1) - (h-1) (g-1) \|_{op}

\displaystyle  \leq 2 \|g-1\|_{op} \|h-1\|_{op}.
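This chain of inequalities is easy to stress-test numerically on random unitary matrices (a sketch; for unitary {g,h} the bound in fact holds without any smallness assumption on {\|g-1\|_{op}, \|h-1\|_{op}}):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(n):
    """Haar-distributed unitary via QR of a complex Gaussian matrix."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, R = np.linalg.qr(M)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

op = lambda M: np.linalg.norm(M, 2)   # operator norm
I = np.eye(4)

worst = -np.inf
for _ in range(100):
    g, h = random_unitary(4), random_unitary(4)
    comm = np.conj(g.T) @ np.conj(h.T) @ g @ h    # [g,h] = g^{-1} h^{-1} g h
    worst = max(worst, op(comm - I) - 2 * op(g - I) * op(h - I))
print(worst <= 1e-9)                  # the estimate holds in every trial
```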

Similarly, any left-invariant Riemannian metric on a (connected) Lie group can be verified to be a Gleason metric. From the escape property one easily sees that all groups with Gleason metrics are NSS; again, we shall see that there is a partial converse.

Remark 1 The escape and commutator properties are meant to capture “Euclidean-like” structure of the group. Other metrics, such as Carnot-Carathéodory metrics on Carnot Lie groups such as the Heisenberg group, usually fail one or both of these properties.

The proof of Theorem 2 can then be split into three subtheorems:

Theorem 5 (Reduction to the NSS case) Let {G} be a locally compact group, and let {U} be an open neighbourhood of the identity in {G}. Then there exists an open subgroup {G'} of {G}, and a compact subgroup {N} of {G'} contained in {U}, such that {G'/N} is NSS, locally compact, and metrisable.

Theorem 6 (Gleason’s lemma) Let {G} be a locally compact metrisable NSS group. Then {G} has a Gleason metric.

Theorem 7 (Building a Lie structure) Let {G} be a locally compact group with a Gleason metric. Then {G} is isomorphic to a Lie group.

Clearly, by combining Theorem 5, Theorem 6, and Theorem 7 one obtains Theorem 2 (and hence Theorem 1).

Theorem 5 and Theorem 6 proceed by some elementary combinatorial analysis, together with the use of Haar measure (to build convolutions, and thence to build “smooth” bump functions with which to create a metric, in a variant of the analysis used to prove the Birkhoff-Kakutani theorem); Theorem 5 also requires the Peter-Weyl theorem (to dispose of certain compact subgroups that arise en route to the reduction to the NSS case), which was discussed previously on this blog.

In this post I would like to detail the final component to the proof of Theorem 2, namely Theorem 7. (I plan to discuss the other two steps, Theorem 5 and Theorem 6, in a separate post.) The strategy is similar to that used to prove von Neumann’s theorem, as discussed in this previous post (and von Neumann’s theorem is also used in the proof), but with the Gleason metric serving as a substitute for the faithful linear representation. Namely, one first gives the space {L(G)} of one-parameter subgroups of {G} enough of a structure that it can serve as a proxy for the “Lie algebra” of {G}; specifically, it needs to be a vector space, and the “exponential map” needs to cover an open neighbourhood of the identity. This is enough to set up an “adjoint” representation of {G}, whose image is a Lie group by von Neumann’s theorem; the kernel is essentially the centre of {G}, which is abelian and can also be shown to be a Lie group by a similar analysis. To finish the job one needs to use arguments of Kuranishi and of Gleason, as discussed in this previous post.

The arguments here can be phrased either in the standard analysis setting (using sequences, and passing to subsequences often) or in the nonstandard analysis setting (selecting an ultrafilter, and then working with infinitesimals). In my view, the two approaches have roughly the same level of complexity in this case, and I have elected for the standard analysis approach.

Remark 2 From Theorem 7 we see that a Gleason metric structure is a good enough substitute for smooth structure that it can actually be used to reconstruct the entire smooth structure; roughly speaking, the commutator estimate (1) allows for enough “Taylor expansion” of expressions such as {g^n h^n} that one can simulate the fundamentals of Lie theory (in particular, the construction of the Lie algebra and the exponential map, and their basic properties). The advantage of working with a Gleason metric rather than a smoother structure, though, is that it is relatively undemanding with regards to regularity; in particular, the commutator estimate (1) is roughly comparable to the imposition of a {C^{1,1}} structure on the group {G}, as this is the minimal regularity needed to get the type of Taylor approximation (with quadratic errors) that would be needed to obtain a bound of the form (1). We will return to this point in a later post.


A (smooth) Riemannian manifold is a smooth manifold {M} without boundary, equipped with a Riemannian metric {{\rm g}}, which assigns a length {|v|_{{\rm g}(x)} \in {\bf R}^+} to every tangent vector {v \in T_x M} at a point {x \in M}, and more generally assigns an inner product

\displaystyle  \langle v, w \rangle_{{\rm g}(x)} \in {\bf R}

to every pair of tangent vectors {v, w \in T_x M} at a point {x \in M}. (We use Roman font for {g} here, as we will need to use {g} to denote group elements later in this post.) This inner product is assumed to be symmetric, positive definite, and smoothly varying in {x}, and the length is then given in terms of the inner product by the formula

\displaystyle  |v|_{{\rm g}(x)}^2 := \langle v, v \rangle_{{\rm g}(x)}.

In coordinates (and also using abstract index notation), the metric {{\rm g}} can be viewed as an invertible symmetric rank {(0,2)} tensor {{\rm g}_{ij}(x)}, with

\displaystyle  \langle v, w \rangle_{{\rm g}(x)} = {\rm g}_{ij}(x) v^i w^j.

One can also view the Riemannian metric as providing a (self-adjoint) identification between the tangent bundle {TM} of the manifold and the cotangent bundle {T^* M}; indeed, every tangent vector {v \in T_x M} is then identified with the cotangent vector {\iota_{TM \rightarrow T^* M}(v) \in T_x^* M}, defined by the formula

\displaystyle \iota_{TM \rightarrow T^* M}(v)(w) := \langle v, w \rangle_{{\rm g}(x)}.

In coordinates, {\iota_{TM \rightarrow T^* M}(v)_i = {\rm g}_{ij} v^j}.
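The index-lowering map just described is easy to test concretely. The following sketch (not from the post; the choice of metric and point are illustrative assumptions) checks, for the round metric on {S^2} in {(\theta,\phi)} coordinates, that lowering an index with {{\rm g}_{ij}} reproduces the inner product, and that the inverse metric raises the index back:

```python
import numpy as np

# A sample metric g_ij at a point of a 2-manifold: the round metric on
# S^2 in (theta, phi) coordinates, evaluated at theta = pi/3.
theta = np.pi / 3
g = np.array([[1.0, 0.0],
              [0.0, np.sin(theta) ** 2]])

v = np.array([2.0, 1.0])   # a tangent vector v^i
w = np.array([0.5, -1.0])  # another tangent vector w^j

# Inner product <v, w>_g = g_ij v^i w^j
inner = v @ g @ w

# Lowering an index: the cotangent vector (iota v)_i = g_ij v^j
v_flat = g @ v

# The identification is self-adjoint: (iota v)(w) = <v, w>_g
assert np.isclose(v_flat @ w, inner)

# Raising the index with the inverse metric g^{ij} recovers v
g_inv = np.linalg.inv(g)
assert np.allclose(g_inv @ v_flat, v)
```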

A fundamental dynamical system on the tangent bundle (or equivalently, the cotangent bundle, using the above identification) of a Riemannian manifold is that of geodesic flow. Recall that geodesics are smooth curves {\gamma: [a,b] \rightarrow M} that minimise the length

\displaystyle  |\gamma| := \int_a^b |\gamma'(t)|_{{\rm g}(\gamma(t))}\ dt.

There is some degeneracy in this definition, because one can reparameterise the curve {\gamma} without affecting the length. In order to fix this degeneracy (and also because the square of the speed is a more tractable quantity analytically than the speed itself), it is better if one replaces the length with the energy

\displaystyle  E(\gamma) := \frac{1}{2} \int_a^b |\gamma'(t)|_{{\rm g}(\gamma(t))}^2\ dt.

Minimising the energy of a parameterised curve {\gamma} turns out to be the same as minimising the length, together with an additional requirement that the speed {|\gamma'(t)|_{{\rm g}(\gamma(t))}} stay constant in time. Minimisers (and more generally, critical points) of the energy functional (holding the endpoints fixed) are known as geodesics. From a physical perspective, geodesic flow governs the motion of a particle that is subject to no external forces and thus moves freely, save for the constraint that it must always lie on the manifold {M}.
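The characterisation of geodesics as energy minimisers lends itself to a direct numerical experiment. The following sketch (purely illustrative, not from the post) minimises a discretised energy {\frac{1}{2}\sum_k |x_{k+1}-x_k|^2/h} over polylines on the unit sphere with fixed endpoints, by projected gradient descent; the minimiser is an equally spaced polyline along a great-circle arc, illustrating both the length minimisation and the constant-speed property:

```python
import numpy as np

# Fixed endpoints on the unit sphere (both on the equator z = 0).
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

n = 20  # number of interior points of the polyline

# Crude initial polyline: normalise the chord interpolation onto S^2.
ts = np.linspace(0, 1, n + 2)[1:-1]
pts = np.array([(1 - t) * a + t * b for t in ts])
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

for _ in range(2000):
    full = np.vstack([a, pts, b])
    # Gradient of the discrete energy at x_k is proportional to
    # 2 x_k - x_{k-1} - x_{k+1} (the step size absorbs the 1/h factor).
    grad = 2 * full[1:-1] - full[:-2] - full[2:]
    pts = pts - 0.2 * grad
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project to S^2

full = np.vstack([a, pts, b])
seglens = np.linalg.norm(np.diff(full, axis=0), axis=1)
# Constant speed: the segment lengths equalise.
assert seglens.std() / seglens.mean() < 1e-2
# Great circle: the polyline stays in the plane z = 0 through a and b.
assert np.abs(full[:, 2]).max() < 1e-6
```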

One can also view geodesic flows as a dynamical system on the tangent bundle (with the state at any time {t} given by the position {\gamma(t) \in M} and the velocity {\gamma'(t) \in T_{\gamma(t)} M}) or on the cotangent bundle (with the state then given by the position {\gamma(t) \in M} and the momentum {\iota_{TM \rightarrow T^* M}( \gamma'(t) ) \in T_{\gamma(t)}^* M}). With the latter perspective (sometimes referred to as cogeodesic flow), geodesic flow becomes a Hamiltonian flow, with Hamiltonian {H: T^* M \rightarrow {\bf R}} given as

\displaystyle  H( x, p ) := \frac{1}{2} \langle p, p \rangle_{{\rm g}(x)^{-1}} = \frac{1}{2} {\rm g}^{ij}(x) p_i p_j

where {\langle ,\rangle_{{\rm g}(x)^{-1}}: T^*_x M \times T^*_x M \rightarrow {\bf R}} is the inverse inner product to {\langle, \rangle_{{\rm g}(x)}: T_x M \times T_x M \rightarrow {\bf R}}, which can be defined for instance by the formula

\displaystyle  \langle p_1, p_2 \rangle_{{\rm g}(x)^{-1}} = \langle \iota_{TM \rightarrow T^* M}^{-1}(p_1), \iota_{TM \rightarrow T^* M}^{-1}(p_2)\rangle_{{\rm g}(x)}.

In coordinates, geodesic flow is given by Hamilton’s equations of motion

\displaystyle  \frac{d}{dt} x^i = {\rm g}^{ij} p_j; \quad \frac{d}{dt} p_i = - \frac{1}{2} (\partial_i {\rm g}^{jk}(x)) p_j p_k.

In terms of the velocity {v^i := \frac{d}{dt} x^i = {\rm g}^{ij} p_j}, we can rewrite these equations as the geodesic equation

\displaystyle  \frac{d}{dt} v^i = - \Gamma^i_{jk} v^j v^k

where

\displaystyle  \Gamma^i_{jk} = \frac{1}{2} {\rm g}^{im} (\partial_k {\rm g}_{mj} + \partial_j {\rm g}_{mk} - \partial_m {\rm g}_{jk} )

are the Christoffel symbols; using the Levi-Civita connection {\nabla}, this can be written more succinctly as

\displaystyle  (\gamma^* \nabla)_t v = 0.
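As a concrete sanity check on the geodesic equation, one can integrate it numerically on the round unit 2-sphere in {(\theta,\phi)} coordinates, where {{\rm g} = \hbox{diag}(1, \sin^2\theta)} and the nonzero Christoffel symbols are {\Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta} and {\Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \cot\theta}. The sketch below (illustrative only; the initial data is an arbitrary choice) verifies the constancy of the speed {|\gamma'|_{\rm g}} along the flow:

```python
import numpy as np

def rhs(state):
    """Right-hand side of dv^i/dt = -Gamma^i_{jk} v^j v^k on the unit S^2."""
    theta, phi, vtheta, vphi = state
    atheta = np.sin(theta) * np.cos(theta) * vphi ** 2
    aphi = -2.0 * (np.cos(theta) / np.sin(theta)) * vtheta * vphi
    return np.array([vtheta, vphi, atheta, aphi])

def speed2(state):
    """Squared speed |v|_g^2 = (v^theta)^2 + sin^2(theta) (v^phi)^2."""
    theta, _, vtheta, vphi = state
    return vtheta ** 2 + np.sin(theta) ** 2 * vphi ** 2

# Initial data: a point near the equator with a generic velocity.
state = np.array([np.pi / 2 - 0.3, 0.0, 0.2, 1.0])
s0 = speed2(state)

dt = 1e-3
for _ in range(5000):  # classical RK4 time steps
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Geodesics have constant speed, so |v|_g^2 should be preserved.
assert abs(speed2(state) - s0) < 1e-8
```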

If the manifold {M} is an embedded submanifold of a larger Euclidean space {{\bf R}^n}, with the metric {{\rm g}} on {M} being induced from the standard metric on {{\bf R}^n}, then the geodesic flow equation can be rewritten in the equivalent form

\displaystyle  \gamma''(t) \perp T_{\gamma(t)} M,

where {\gamma} is now viewed as taking values in {{\bf R}^n}, and {T_{\gamma(t)} M} is similarly viewed as a subspace of {{\bf R}^n}. This is intuitively obvious from the geometric interpretation of geodesics: if the curvature of a curve {\gamma} contains components that are tangential to the manifold rather than normal to it, then it is geometrically clear that one should be able to shorten the curve by shifting it along the indicated tangential direction. It is an instructive exercise to rigorously formulate the above intuitive argument. This fact also conforms well with one’s physical intuition of geodesic flow as the motion of a free particle constrained to be in {M}; the normal quantity {\gamma''(t)} then corresponds to the centripetal force necessary to keep the particle lying in {M} (otherwise it would fly off along a tangent line to {M}, as per Newton’s first law). The precise value of the normal vector {\gamma''(t)} can be computed via the second fundamental form as {\gamma''(t) = \Pi_{\gamma(t)}( \gamma'(t), \gamma'(t) )}, but we will not need this formula here.
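The extrinsic picture is equally easy to test. For the unit sphere in {{\bf R}^3}, the condition {\gamma'' \perp T_{\gamma} M} forces {\gamma'' = -|\gamma'|^2 \gamma} (the centripetal acceleration), whose solutions are great circles traversed at constant speed. The sketch below (illustrative; initial data chosen for convenience) integrates this ODE and compares against the exact great circle:

```python
import numpy as np

def step(x, v, dt):
    """One RK4 step for x'' = -|x'|^2 x, the unit-sphere geodesic flow."""
    def f(x, v):
        return v, -np.dot(v, v) * x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    return (x + (dt / 6) * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + (dt / 6) * (k1v + 2 * k2v + 2 * k3v + k4v))

x = np.array([1.0, 0.0, 0.0])   # start on the equator
v = np.array([0.0, 1.0, 0.0])   # unit-speed tangent vector
dt, n = 1e-3, 1000
for _ in range(n):
    x, v = step(x, v, dt)

t = n * dt  # total time elapsed
# The exact geodesic is the great circle (cos t, sin t, 0).
assert np.allclose(x, [np.cos(t), np.sin(t), 0.0], atol=1e-9)
# The particle stayed on the manifold.
assert abs(np.dot(x, x) - 1.0) < 1e-9
```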

In a beautiful paper from 1966, Vladimir Arnold (who, sadly, passed away last week), observed that many basic equations in physics, including the Euler equations of motion of a rigid body, and also (by what is a priori a remarkable coincidence) the Euler equations of fluid dynamics of an inviscid incompressible fluid, can be viewed (formally, at least) as geodesic flows on a (finite or infinite dimensional) Riemannian manifold. And not just any Riemannian manifold: the manifold is a Lie group (or, to be truly pedantic, a torsor of that group), equipped with a right-invariant (or left-invariant, depending on one’s conventions) metric. In the context of rigid bodies, the Lie group is the group {SE(3) = {\bf R}^3 \rtimes SO(3)} of rigid motions; in the context of incompressible fluids, it is the group {Sdiff({\bf R}^3)} of measure-preserving diffeomorphisms. The right-invariance makes the Hamiltonian mechanics of geodesic flow in this context (where it is sometimes known as the Euler-Arnold equation or the Euler-Poisson equation) quite special; it becomes (formally, at least) completely integrable, and also indicates (in principle, at least) a way to reformulate these equations in a Lax pair formulation. And indeed, many further completely integrable equations, such as the Korteweg-de Vries equation, have since been reinterpreted as Euler-Arnold flows.

From a physical perspective, this all fits well with the interpretation of geodesic flow as the free motion of a system subject only to a physical constraint, such as rigidity or incompressibility. (I do not know, though, of a similarly intuitive explanation as to why the Korteweg de Vries equation is a geodesic flow.)

One consequence of being a completely integrable system is that one has a large number of conserved quantities. In the case of the Euler equations of motion of a rigid body, the conserved quantities are the linear and angular momentum (as observed in an external reference frame, rather than the frame of the object). In the case of the two-dimensional Euler equations, the conserved quantities are the pointwise values of the vorticity (as viewed in Lagrangian coordinates, rather than Eulerian coordinates). In higher dimensions, the conserved quantity is now the (Hodge star of) the vorticity, again viewed in Lagrangian coordinates. The vorticity itself then evolves by the vorticity equation, and is subject to vortex stretching as the diffeomorphism between the initial and final state becomes increasingly sheared.

The elegant Euler-Arnold formalism is reasonably well-known in some circles (particularly in Lagrangian and symplectic dynamics, where it can be viewed as a special case of the Euler-Poincaré formalism or Lie-Poisson formalism respectively), but not in others; I for instance was only vaguely aware of it until recently, and I think that even in fluid mechanics this perspective on the subject is not always emphasised. Given the circumstances, I thought it would therefore be appropriate to present Arnold’s original 1966 paper here. (For a more modern treatment of these topics, see the books of Arnold-Khesin and Marsden-Ratiu.)

In order to avoid technical issues, I will work formally, ignoring questions of regularity or integrability, and pretending that infinite-dimensional manifolds behave in exactly the same way as their finite-dimensional counterparts. In the finite-dimensional setting, it is not difficult to make all of the formal discussion below rigorous; but the situation in infinite dimensions is substantially more delicate. (Indeed, it is a notorious open problem whether the Euler equations for incompressible fluids even form a global continuous flow in a reasonable topology in the first place!) However, I do not want to discuss these analytic issues here; see this paper of Ebin and Marsden for a treatment of these topics.

Read the rest of this entry »

“Gauge theory” is a term which has connotations of being a fearsomely complicated part of mathematics – for instance, playing an important role in quantum field theory, general relativity, geometric PDE, and so forth.  But the underlying concept is really quite simple: a gauge is nothing more than a “coordinate system” that varies depending on one’s “location” with respect to some “base space” or “parameter space”, a gauge transform is a change of coordinates applied to each such location, and a gauge theory is a model for some physical or mathematical system to which gauge transforms can be applied (and is typically gauge invariant, in that all physically meaningful quantities are left unchanged (or transform naturally) under gauge transformations).  By fixing a gauge (thus breaking or spending the gauge symmetry), the model becomes something easier to analyse mathematically, such as a system of partial differential equations (in classical gauge theories) or a perturbative quantum field theory (in quantum gauge theories), though the tractability of the resulting problem can be heavily dependent on the choice of gauge that one fixed.  Deciding exactly how to fix a gauge (or whether one should spend the gauge symmetry at all) is a key question in the analysis of gauge theories, and one that often requires the input of geometric ideas and intuition into that analysis.

I was asked recently to explain what a gauge theory was, and so I will try to do so in this post.  For simplicity, I will focus exclusively on classical gauge theories; quantum gauge theories are the quantization of classical gauge theories and have their own set of conceptual difficulties (coming from quantum field theory) that I will not discuss here. While gauge theories originated from physics, I will not discuss the physical significance of these theories much here, instead focusing just on their mathematical aspects.  My discussion will be informal, as I want to try to convey the geometric intuition rather than the rigorous formalism (which can, of course, be found in any graduate text on differential geometry).

Read the rest of this entry »

Peter Petersen and I have just uploaded to the arXiv our paper, “Classification of Almost Quarter-Pinched Manifolds“, submitted to Proc. Amer. Math. Soc..  This is perhaps the shortest paper (3 pages) I have ever been involved in, because we were fortunate enough that we could simply cite (as a black box) a reference for every single fact that we needed here.

The paper is related to the famous sphere theorem from Riemannian geometry.  This theorem asserts that any n-dimensional complete simply connected Riemannian manifold which is strictly quarter-pinched (i.e. the sectional curvatures all lie in the interval (K/4,K] for some K > 0) must necessarily be homeomorphic to the n-sphere S^n.    (In dimensions 3 or less, this already follows from simple connectedness thanks to the Poincaré conjecture (and Myers theorem), so the theorem is really only interesting in higher dimensions.  One can easily drop the simple connectedness hypothesis by passing to a universal cover, but then one has to admit sphere quotients S^n/\Gamma as well as spheres.)

Due to the existence of exotic spheres in higher dimensions, being homeomorphic to a sphere does not necessarily imply being diffeomorphic to a sphere.  (For instance, an example of an exotic sphere with positive sectional curvature (but not quarter-pinched) was recently constructed by Petersen and Wilhelm.)  Nevertheless, Brendle and Schoen recently proved the diffeomorphic version of the sphere theorem: every strictly quarter-pinched complete simply connected Riemannian manifold is diffeomorphic to a sphere.  The proof is based on Ricci flow, and involves three main steps:

  1. A verification that if M is quarter-pinched, then the manifold M \times {\Bbb R}^2 has non-negative isotropic curvature.  (The same statement is true without adding the two additional flat dimensions, but these additional dimensions are very convenient for simplifying the analysis by allowing certain two-planes to wander freely in the product tangent space.)
  2. A verification that the property of having non-negative isotropic curvature is preserved by Ricci flow.  (By contrast, the quarter-pinched property is not preserved by Ricci flow.)
  3. The pinching theory of Böhm and Wilking, which is a refinement of the work of Hamilton (who handled the three and four-dimensional cases).

Brendle and Schoen in fact proved a slightly stronger statement in which the curvature bound K is allowed to vary with position x, but we will not discuss this strengthening here.

The quarter-pinching is sharp; the Fubini-Study metric on complex projective spaces {\Bbb CP}^n is non-strictly quarter-pinched (the sectional curvatures lie in {}[K/4,K]) but is not homeomorphic to a sphere.  Nevertheless, by refining the above methods, an endpoint result was established by Brendle and Schoen (see also a later refinement by Seshadri): any complete simply-connected manifold which is non-strictly quarter-pinched is diffeomorphic to either a sphere or a compact rank one symmetric space (or CROSS, for short) such as complex projective space.  (In the latter case one also has some further control on the metric, which we will not detail here.)  The homeomorphic version of this statement was established earlier by Berger and by Klingenberg.

Our result pushes this further by an epsilon.  More precisely, we show for each dimension n that there exists \varepsilon_n > 0 such that any \frac{1}{4}-\varepsilon_n-pinched complete simply connected manifold (i.e. the curvatures lie in {}[K (\frac{1}{4}-\varepsilon_n), K]) is diffeomorphic to either a sphere or a CROSS.  (The homeomorphic version of this statement was established earlier in even dimensions by Berger.)  We do not know if \varepsilon_n can be made independent of n.

Read the rest of this entry »

I have just uploaded to the arXiv the second installment of my “heatwave” project, entitled “Global regularity of wave maps IV.  Absence of stationary or self-similar solutions in the energy class“.  In the first installment of this project, I was able to establish the global existence of smooth wave maps from 2+1-dimensional spacetime {\Bbb R}^{1+2} to hyperbolic space {\bf H} = {\bf H}^m from arbitrary smooth initial data, conditionally on five claims:

  1. A construction of an energy space for maps into hyperbolic space obeying a certain set of reasonable properties, such as compatibility with symmetries, approximability by smooth maps, and existence of a well-defined stress-energy tensor.
  2. A large data local well-posedness result for wave maps in the above energy space.
  3. The existence of an almost periodic “minimal-energy blowup solution” to the wave maps equation in the energy class, if this equation is such that singularities can form in finite time.
  4. The non-existence of any non-trivial degenerate maps into hyperbolic space in the energy class, where “degenerate” means that one of the partial derivatives of this map vanishes identically.
  5. The non-existence of any travelling or self-similar solution to the wave maps equation in the energy class.

In this paper, the second of four in this series (or, as the title suggests, the fourth in a series of six papers on wave maps, the first two of which can be found here and here), I verify Claims 1, 4, and 5.  (The third paper in the series will tackle Claim 2, while the fourth paper will tackle Claim 3.)  These claims are largely “elliptic” in nature (as opposed to the “hyperbolic” Claims 2, 3), but I will establish them by a “parabolic” method, relying very heavily on the harmonic map heat flow, and on the closely associated caloric gauge introduced in an earlier paper of mine.  The results of this paper can be viewed as nonlinear analogues of standard facts about the linear energy space \dot H^1({\Bbb R}^2) \times L^2({\Bbb R}^2), for instance the fact that smooth compactly supported functions are dense in that space, and that this space contains no non-trivial harmonic functions, or functions which are constant in one of the two spatial directions.  The paper turned out a little longer than I had expected (77 pages) due to some surprisingly subtle technicalities, especially when excluding self-similar wave maps.  On the other hand, the heat flow and caloric gauge machinery developed here will be reused in the last two papers in this series, hopefully keeping their length to under 100 pages as well.

A key stumbling block here, related to the critical (scale-invariant) nature of the energy space (or to the failure of the endpoint Sobolev embedding \dot H^1({\Bbb R}^2) \not \subset L^\infty({\Bbb R}^2)), is that changing coordinates in hyperbolic space can be a non-uniformly-continuous operation in the energy space.  Thus, for the purposes of making quantitative estimates in that space, it is preferable to work in as covariant (or coordinate-free) a manner as possible, or, if one is to use coordinates, to pick them in some canonical manner which is optimally adapted to the tasks at hand.  Ideally, one would work directly with maps \phi: {\Bbb R}^2 \to {\bf H} (as well as their velocity field \partial_t \phi: {\Bbb R}^2 \to T{\bf H}) without using any coordinates on {\bf H}, but then it becomes difficult to perform basic analytical operations on such maps, such as taking the Fourier transform, or (even more elementarily) taking the difference of two maps in order to measure how distinct they are from each other.

Read the rest of this entry »

In the previous lecture, we studied high curvature regions of Ricci flows t \mapsto (M,g(t)) on some time interval {}[0,T), and concluded that (as long as a mild topological condition was obeyed) they all had canonical neighbourhoods. This is enough control to now study the limits of such flows as one approaches the singularity time T. It turns out that one can subdivide the manifold M into a continuing region C in which the geometry remains well behaved (for instance, the curvature does not blow up, and in fact converges smoothly to an (incomplete) limit), and a disappearing region D, whose topology is well controlled. (For instance, the interface \Sigma between C and D will be a finite union of disjoint surfaces homeomorphic to S^2.) This allows one (at the topological level, at least) to perform surgery on the interface \Sigma, removing the disappearing region D and replacing it with a finite number of “caps” homeomorphic to the 3-ball B^3. The relationship between the topology of the post-surgery manifold and the pre-surgery manifold is as described way back in Lecture 2.

However, once surgery is completed, one needs to restart the Ricci flow process, at which point further singularities can occur. In order to apply surgery to these further singularities, we need to check that all the properties we have been exploiting about Ricci flows – notably the Hamilton-Ivey pinching property, the \kappa-noncollapsing property, and the existence of canonical neighbourhoods for every point of high curvature – persist even in the presence of a large number of surgeries (indeed, with the way the constants are structured, all quantitative bounds on a fixed time interval [0,T] have to be uniform in the number of surgery times, although we will of course need the set of such times to be discrete). To ensure that surgeries do not disrupt any of these properties, it turns out that one has to perform these surgeries deep in certain \varepsilon-horns of the Ricci flow at the singular time, in which the geometry is extremely close to being cylindrical (in particular, it should be a \delta-neck and not just an \varepsilon-neck, where the surgery control parameter \delta is much smaller than \varepsilon; selection of this parameter can get a little tricky if one wants to evolve Ricci flow with surgery indefinitely, although for the purposes of the Poincaré conjecture the situation is simpler as there is a fixed upper bound on the time for which one needs to evolve the flow). Furthermore, the geometry of the manifolds one glues in to replace the disappearing regions has to be carefully chosen (in particular, it has to not disrupt the pinching condition, and the geometry of these glued-in regions has to resemble a (C,\varepsilon)-cap for a significant amount of (rescaled) time). The construction of the “standard solution” needed to achieve all these properties is somewhat delicate, although we will not discuss this issue much here.

In this, the final lecture, we shall present these issues from a high-level perspective; due to lack of time and space we will not cover the finer details of the surgery procedure. More detailed versions of the material here can be found in Perelman’s second paper, the notes of Kleiner-Lott, the book of Morgan-Tian, and the paper of Cao-Zhu. (See also a forthcoming paper of Bessières, Besson, Boileau, Maillot, and Porti.)

Read the rest of this entry »
