You are currently browsing the category archive for the ‘talk’ category.

Just a short post here to note that the cover story of this month’s Notices of the AMS, by John Friedlander, is about the recent work on bounded gaps between primes by Zhang, Maynard, our own Polymath project, and others.

I may as well take this opportunity to upload some slides of my own talks on this subject: here are my slides on small and large gaps between the primes that I gave at the “Latinos in the Mathematical Sciences” conference back in April, and here are my slides on the Polymath project for the Schock Prize symposium last October.  (I also gave an abridged version of the latter talk at an AAAS Symposium in February, as well as at the Breakthrough Symposium last November.)

Due to some requests, I’m uploading to my blog the slides for my recent talk in Segovia (for the birthday conference of Michael Cowling) on “Hilbert’s fifth problem and approximate groups”.  The slides cover essentially the same range of topics as this series of lecture notes, or this text of mine, though of course in considerably less detail, given that the slides are meant to be presented in an hour.

This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture) Let ${E}$ be a subset of ${{\bf R}^3}$ that contains a unit line segment in every direction. Then ${\hbox{dim}(E) = 3}$.

This conjecture is not precisely formulated here, because we have not specified exactly what type of set ${E}$ is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again) Let ${{\cal L}}$ be a family of lines in ${{\bf R}^3}$ that meet ${B(0,1)}$ and contain a line in each direction. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ to ${B(0,2)}$ of every line ${\ell}$ in ${{\cal L}}$. Then ${\hbox{dim}(E) = 3}$.

As the space of all directions in ${{\bf R}^3}$ is two-dimensional, we thus see that ${{\cal L}}$ is an (at least) two-dimensional subset of the four-dimensional space of lines in ${{\bf R}^3}$ (actually, it lies in a compact subset of this space, since we have constrained the lines to meet ${B(0,1)}$). One could then ask if this is the only property of ${{\cal L}}$ that is needed to establish the Kakeya conjecture, that is to say if any subset of ${B(0,2)}$ which contains a two-dimensional family of lines (restricted to ${B(0,2)}$, and meeting ${B(0,1)}$) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in ${B(0,2)}$ (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture) Let ${{\cal L}}$ be a two-dimensional family of lines in ${{\bf R}^3}$ that meet ${B(0,1)}$, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in ${{\cal L}}$. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ of every line ${\ell}$ in ${{\cal L}}$. Then ${\hbox{dim}(E) = 3}$.

Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that lie in different directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound ${\hbox{dim}(E) \geq 5/2}$ (for various notions of dimension, e.g. Hausdorff dimension) for sets ${E}$ in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the ${5/2}$ barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals ${{\bf R}}$. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over ${{\bf C}}$) Let ${{\cal L}}$ be a four (real) dimensional family of complex lines in ${{\bf C}^3}$ that meet the unit ball ${B(0,1)}$ in ${{\bf C}^3}$, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in ${{\cal L}}$. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ of every complex line ${\ell}$ in ${{\cal L}}$. Then ${E}$ has real dimension ${6}$.

The argument of Wolff can be adapted to the complex case to show that all sets ${E}$ occurring in Conjecture 4 have real dimension at least ${5}$. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample) Let ${H \subset {\bf C}^3}$ be the Heisenberg group

$\displaystyle H = \{ (z_1,z_2,z_3) \in {\bf C}^3: \hbox{Im}(z_1) = \hbox{Im}(z_2 \overline{z_3}) \}$

and let ${{\cal L}}$ be the family of complex lines

$\displaystyle \ell_{s,t,\alpha} := \{ (\overline{\alpha} z + t, z, sz + \alpha): z \in {\bf C} \}$

with ${s,t \in {\bf R}}$ and ${\alpha \in {\bf C}}$. Then ${H}$ is a five (real) dimensional subset of ${{\bf C}^3}$ that contains every line in the four (real) dimensional set ${{\cal L}}$; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in ${{\cal L}}$. In particular, the strong Kakeya conjecture over the complex numbers is false.

This proposition is proven by a routine computation, which we omit here. The group structure on ${H}$ is given by the group law

$\displaystyle (z_1,z_2,z_3) \cdot (w_1,w_2,w_3) = (z_1 + w_1 + z_2 \overline{w_3} - z_3 \overline{w_2}, z_2 +w_2, z_3+w_3),$

giving ${H}$ the structure of a ${2}$-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over ${{\bf R}^2}$. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines ${{\cal L}}$ in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

$\displaystyle \ell_{0,t,0} = \{ (t, z, 0): z \in {\bf C}\}$

with ${t \in {\bf R}}$; multiplying this family of lines on the right by a group element in ${H}$ gives other families of parallel lines, which in fact sweep out all of ${{\cal L}}$.
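The “routine computation” behind Proposition 5 can be spot-checked numerically. Here is a minimal sketch (my own illustration, not part of the original discussion) that samples random parameters and verifies both that every point of every line ${\ell_{s,t,\alpha}}$ lies in ${H}$, and that ${H}$ is closed under the group law:

```python
import random

def random_complex(scale=1.0):
    """A random complex number with real and imaginary parts in [-scale, scale]."""
    return complex(random.uniform(-scale, scale), random.uniform(-scale, scale))

def in_H(point, tol=1e-9):
    """Check the defining equation Im(z1) = Im(z2 * conj(z3)) of H."""
    z1, z2, z3 = point
    return abs(z1.imag - (z2 * z3.conjugate()).imag) < tol

def line_point(s, t, alpha, z):
    """The point of the complex line ell_{s,t,alpha} at parameter z."""
    return (alpha.conjugate() * z + t, z, s * z + alpha)

def mult(p, q):
    """The group law on H."""
    z1, z2, z3 = p
    w1, w2, w3 = q
    return (z1 + w1 + z2 * w3.conjugate() - z3 * w2.conjugate(),
            z2 + w2,
            z3 + w3)

random.seed(0)
for _ in range(1000):
    s, t = random.uniform(-1, 1), random.uniform(-1, 1)
    p = line_point(s, t, random_complex(), random_complex())
    q = line_point(t, s, random_complex(), random_complex())
    assert in_H(p)           # every point of every line lies in H
    assert in_H(mult(p, q))  # H is closed under the group law
print("all checks passed")
```

Of course, a numerical check of this kind is no substitute for the (easy) algebraic verification, but it makes the identities ${\hbox{Im}(\overline{\alpha} z + t) = \hbox{Im}(z \overline{(sz+\alpha)})}$ underlying the proposition concrete.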

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield ${{\bf R}}$ of ${{\bf C}}$, which induces an involution ${z \mapsto \overline{z}}$ which can then be used to define the Heisenberg group ${H}$ through the formula

$\displaystyle H = \{ (z_1,z_2,z_3) \in {\bf C}^3: z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}.$

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields ${{\bf F}_{q^2}}$ that contain a “half-dimensional” subfield ${{\bf F}_q}$; we leave the details to the interested reader. Morally speaking, if ${{\bf R}}$ in turn contained a subfield of dimension ${1/2}$ (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdős and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
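The sum-product phenomenon just mentioned can be illustrated by a quick numerical experiment (an informal illustration of mine only; the quantitative discretised version needed for the Kakeya application is far more delicate). A finite set of reals or integers cannot simultaneously have a small sumset and a small product set: an arithmetic progression has a tiny sumset but a large product set, a geometric progression the reverse, and in either case the larger of the two greatly exceeds the size of the original set.

```python
def sums_and_products(A):
    """Return (|A+A|, |A.A|): the sizes of the sumset and product set of A."""
    sums = {a + b for a in A for b in A}
    products = {a * b for a in A for b in A}
    return len(sums), len(products)

AP = list(range(1, 51))           # arithmetic progression: tiny sumset
GP = [2 ** i for i in range(50)]  # geometric progression: tiny product set

for A in (AP, GP):
    s, p = sums_and_products(A)
    # one of the two sets is always much larger than |A|
    assert max(s, p) > len(A) ** 1.5
    print(len(A), s, p)
```

The Erdős–Szemerédi conjecture asserts that ${\max(|A+A|, |A \cdot A|) \gg |A|^{2-\epsilon}}$ for finite sets of integers; the exponent ${3/2}$ tested above is far weaker, but already rules out “approximate subring” behaviour at this scale.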

We thus see that to go beyond the ${5/2}$ dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

• (a) Exploit the distinct directions of the lines in ${{\mathcal L}}$ in a way that goes beyond the Wolff axiom; or
• (b) Exploit the fact that ${{\bf R}}$ does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of ${5/2}$ for Kakeya sets very slightly to ${5/2+10^{-10}}$ (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of ${{\bf F}_p}$, and then pursued route (b) to obtain a corresponding improvement ${5/2+\epsilon}$ to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

1. Assume that the (strong) Kakeya conjecture fails, so that there are sets ${E}$ of the form in Conjecture 3 of dimension ${3-\sigma}$ for some ${\sigma>0}$. Assume that ${E}$ is “optimal”, in the sense that ${\sigma}$ is as large as possible.
2. Use the optimality of ${E}$ (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets ${E}$, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining ${E}$ to “behave like” a putative Heisenberg group counterexample.
3. By playing all these structural properties off of each other, show that ${E}$ can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set ${E}$ for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.

(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

 (Discrete) | (Continuous) | (Limit method)
 Ramsey theory | Topological dynamics | Compactness
 Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
 Graph/hypergraph regularity | Measure theory | Graph limits
 Polynomial regularity | Linear algebra | Ultralimits
 Structural decompositions | Hilbert space geometry | Ultralimits
 Fourier analysis | Spectral theory | Direct and inverse limits
 Quantitative algebraic geometry | Algebraic geometry | Schemes
 Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
 Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology.
• Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms.
• Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct.

With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two properties which make it particularly well suited to extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$, will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). 
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
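For reference, Łoś’s theorem can be stated in the above notation as follows (this is the standard formulation, with ${\alpha}$ denoting the underlying ultrafilter): for any first-order formula ${\phi}$, one has

$\displaystyle \prod_{{\bf n} \rightarrow \alpha} X_{\bf n} \models \phi\big( \lim_{{\bf n} \rightarrow \alpha} x_{\bf n} \big) \iff \{ {\bf n}: X_{\bf n} \models \phi( x_{\bf n} ) \} \in \alpha,$

that is to say, the ultraproduct satisfies ${\phi}$ at the limit objects precisely when “${\alpha}$-most” of the original spaces satisfy ${\phi}$ at the original objects.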

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.

Last week I gave a talk at the Trinity Mathematical Society at Trinity College, Cambridge UK.  As the audience was primarily undergraduate, I gave a fairly non-technical talk on the universality phenomenon, based on this blog article of mine on the same topic.  It was quite a light and informal affair, and this is reflected in the talk slides (which, in particular, play up quite strongly the role of former students and Fellows of Trinity College in this story).   There was some interest in making these slides available publicly, so I have placed them on this site here.  (Note: copyright for the images in these slides has not been secured.)

This week I once again gave some public lectures on the cosmic distance ladder in astronomy, once at Stanford and once at UCLA.  The slides I used were similar to the “version 3.0” slides I used for the same talk last year in Australia and elsewhere, but the images have been updated (and the permissions for copyrighted images secured), and some additional data has also been placed on them.    I am placing these slides here on this blog, in PowerPoint format and also in PDF format.  (Video for the UCLA talk should also be available on the UCLA web site at some point; I’ll add a link when it becomes available.)

These slides have evolved over a period of almost five years, particularly with regards to the imagery, but this is likely to be close to the final version.  Here are some of the older iterations of the slides:

I have found that working on and polishing a single public lecture over a period of several years has been very rewarding and educational, especially given that I had very little public speaking experience at the beginning; there are several other mathematicians I know of who are also putting some effort into giving good talks that communicate mathematics and science to the general public, but I think there could potentially be many more talks like this.

A note regarding copyright: I am happy to have the text or layout of these slides used as the basis for other presentations, so long as the source is acknowledged.  However, some of the images in these slides are copyrighted by others, and permission by the copyright holders was granted only for the display of the slides in their current format.  (The list of such images is given at the end of the slides.)  So if you wish to adapt the slides for your own purposes, you may need to use slightly different imagery.

(Update, October 20: Some photos from the UCLA talk are available here.)

(Update, October 25: Video from the talk is available on Youtube and on Itunes.)

This week at UCLA, Pierre-Louis Lions gave one of this year’s Distinguished Lecture Series, on the topic of mean field games. These are a relatively novel class of systems of partial differential equations, which are used to understand the behaviour of multiple agents each individually trying to optimise their position in space and time, but with their preferences being partly determined by the choices of all the other agents, in the asymptotic limit when the number of agents goes to infinity. A good example here is that of traffic congestion: as a first approximation, each agent wishes to get from A to B in the shortest path possible, but the speed at which one can travel depends on the density of other agents in the area. A more light-hearted example is that of a Mexican wave (or audience wave), which can be modeled by a system of this type, in which each agent chooses to stand, sit, or be in an intermediate position based on his or her comfort level, and also on the position of nearby agents.

Under some assumptions, mean field games can be expressed as a coupled system of two equations, a Fokker-Planck type equation evolving forward in time that governs the evolution of the density function ${m}$ of the agents, and a Hamilton-Jacobi (or Hamilton-Jacobi-Bellman) type equation evolving backward in time that governs the computation of the optimal path for each agent. The combination of both forward propagation and backward propagation in time creates some unusual “elliptic” phenomena in the time variable that are not seen in more conventional evolution equations. For instance, for Mexican waves, this model predicts that such waves only form for stadiums exceeding a certain minimum size (and this phenomenon has apparently been confirmed experimentally!).
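Schematically, and in one standard formulation from the Lasry-Lions literature (the precise form depends on the modelling assumptions, and this is not taken verbatim from the lectures), the coupled system takes the shape

$\displaystyle -\partial_t u - \nu \Delta u + H(x, \nabla u) = F(x, m) \quad \hbox{(backward Hamilton-Jacobi-Bellman)}$

$\displaystyle \partial_t m - \nu \Delta m - \hbox{div}\big( m \, \nabla_p H(x, \nabla u) \big) = 0 \quad \hbox{(forward Fokker-Planck)}$

where ${u}$ is the value function of a typical agent, ${m}$ is the density of agents, and the system is closed by an initial condition on ${m}$ and a terminal condition on ${u}$; the coupling ${F(x,m)}$ encodes how an agent’s cost depends on the density of the other agents.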

Due to lack of time and preparation, I was not able to transcribe Lions’ lectures in full detail; but I thought I would describe here a heuristic derivation of the mean field game equations, and mention some of the results that Lions and his co-authors have been working on. (Video of a related series of lectures (in French) by Lions on this topic at the Collège de France is available here.)

To avoid (rather important) technical issues, I will work at a heuristic level only, ignoring issues of smoothness, convergence, existence and uniqueness, etc.

This is an adaptation of a talk I gave recently for a program at IPAM. In this talk, I gave a (very informal and non-rigorous) overview of Hrushovski’s use of model-theoretic techniques to establish new Freiman-type theorems in non-commutative groups, and some recent work in progress of Ben Green, Tom Sanders and myself to establish combinatorial proofs of some of Hrushovski’s results.

This week I was in my home town of Adelaide, Australia, for the 2009 annual meeting of the Australian Mathematical Society. This was a fairly large meeting (almost 500 participants). One of the highlights of such a large meeting is the ability to listen to plenary lectures in fields adjacent to one’s own, in which speakers can give high-level overviews of a subject without getting too bogged down in the technical details. From the talks here I learned a number of basic things which were well known to experts in that field, but which I had not fully appreciated, and so I wanted to share them here.

The first instance of this was from a plenary lecture by Danny Calegari entitled “faces of the stable commutator length (scl) ball”. One thing I learned from this talk is that in homotopy theory, there is a very close relationship between topological spaces (such as manifolds) on one hand, and groups (and generalisations of groups) on the other, so that homotopy-theoretic questions about the former can often be converted to purely algebraic questions about the latter, and vice versa; indeed, it seems that homotopy theorists almost think of topological spaces and groups as being essentially the same concept, despite looking very different at first glance. To get from a space ${X}$ to a group, one looks at homotopy groups ${\pi_n(X)}$ of that space, and in particular the fundamental group ${\pi_1(X)}$; conversely, to get from a group ${G}$ back to a topological space one can use the Eilenberg-MacLane spaces ${K(G,n)}$ associated to that group (and more generally, a Postnikov tower associated to a sequence of such groups, together with additional data). In Danny’s talk, he gave the following specific example: the problem of finding the least complicated embedded surface with prescribed (and homologically trivial) boundary in a space ${X}$, where “least complicated” is measured by genus (or more precisely, the negative component of Euler characteristic), is essentially equivalent to computing the commutator length of the element in the fundamental group ${\pi_1(X)}$ corresponding to that boundary (i.e. the least number of commutators one is required to multiply together to express the element); and the stable version of this problem (where one allows the surface to wrap around the boundary ${n}$ times for some large ${n}$, and one computes the asymptotic ratio between the Euler characteristic and ${n}$) is similarly equivalent to computing the stable commutator length of that group element. 
(Incidentally, there is a simple combinatorial open problem regarding commutator length in the free group, which I have placed on the polymath wiki.)

This theme was reinforced by another plenary lecture by Ezra Getzler entitled “${n}$-groups”, in which he showed how sequences of groups (such as the first ${n}$ homotopy groups ${\pi_1(X),\ldots,\pi_n(X)}$) can be enhanced into a more powerful structure known as an ${n}$-group, which is more complicated to define, requiring the machinery of simplicial complexes, sheaves, and nerves. Nevertheless, this gives a very topological and geometric interpretation of the concept of a group and its generalisations, which are of use in topological quantum field theory, among other things.

Mohammed Abouzaid gave a plenary lecture entitled “Functoriality in homological mirror symmetry”. One thing I learned from this talk was that the (partially conjectural) phenomenon of (homological) mirror symmetry is one of several types of duality, in which the behaviour of maps into one mathematical object ${X}$ (e.g. immersed or embedded curves, surfaces, etc.) are closely tied to the behaviour of maps out of a dual mathematical object ${\hat X}$ (e.g. functionals, vector fields, forms, sections, bundles, etc.). A familiar example of this is in linear algebra: by taking adjoints, a linear map into a vector space ${X}$ can be related to an adjoint linear map mapping out of the dual space ${X^*}$. Here, the behaviour of curves in a two-dimensional symplectic manifold (or more generally, Lagrangian submanifolds in a higher-dimensional symplectic manifold), is tied to the behaviour of holomorphic sections on bundles over a dual algebraic variety, where the precise definition of “behaviour” is category-theoretic, involving some rather complicated gadgets such as the Fukaya category of a symplectic manifold. As with many other applications of category theory, it is not just the individual pairings between an object and its dual which are of interest, but also the relationships between these pairings, as formalised by various functors between categories (and natural transformations between functors). (One approach to mirror symmetry was discussed by Shing-Tung Yau at a distinguished lecture at UCLA, as transcribed in this previous post.)

There was a related theme in a talk by Dennis Gaitsgory entitled “The geometric Langlands program”. From my (very superficial) understanding of the Langlands program, the behaviour of specific maps into a reductive Lie group ${G}$, such as representations in ${G}$ of a fundamental group, étale fundamental group, class group, or Galois group of a global field, is conjecturally tied to specific maps out of a dual reductive Lie group ${\hat G}$, such as irreducible automorphic representations of ${\hat G}$, or of various structures (such as derived categories) attached to vector bundles on ${\hat G}$. There are apparently some tentatively conjectured links (due to Witten?) between Langlands duality and mirror symmetry, but they seem at present to be fairly distinct phenomena (one is topological and geometric, the other is more algebraic and arithmetic). For abelian groups, Langlands duality is closely connected to the much more classical Pontryagin duality in Fourier analysis. (There is an analogue of Fourier analysis for nonabelian groups, namely representation theory, but the link from this to the Langlands program is somewhat murky, at least to me.)

Related also to this was a plenary talk by Akshay Venkatesh, entitled “The Cohen-Lenstra heuristics over global fields”. Here, the question concerned the conjectural behaviour of class groups of quadratic fields, and in particular to explain the numerically observed phenomenon that about ${75.4\%}$ of all real quadratic fields ${{\Bbb Q}[\sqrt{d}]}$ (with $d$ prime) enjoy unique factorisation (i.e. have trivial class group). (Class groups, as I learned in these two talks, are arithmetic analogues of the (abelianised) fundamental groups in topology, with Galois groups serving as the analogue of the full fundamental group.) One thing I learned here was that there was a canonical way to randomly generate a (profinite) abelian group, by taking the product of randomly generated finite abelian ${p}$-groups for each prime ${p}$. The way to canonically randomly generate a finite abelian ${p}$-group is to take large integers ${n, d}$, and look at the cokernel of a random homomorphism from ${({\mathbb Z}/p^n{\mathbb Z})^d}$ to ${({\mathbb Z}/p^n{\mathbb Z})^d}$. In the limit ${n,d \rightarrow \infty}$ (or by replacing ${{\mathbb Z}/p^n{\mathbb Z}}$ with the ${p}$-adics and just sending ${d \rightarrow \infty}$), this stabilises and generates any given finite abelian ${p}$-group ${G}$ with probability

$\displaystyle \frac{1}{|\hbox{Aut}(G)|} \prod_{j=1}^\infty (1 - \frac{1}{p^j}), \ \ \ \ \ (1)$

where ${\hbox{Aut}(G)}$ is the group of automorphisms of ${G}$. In particular this leads to the strange identity

$\displaystyle \sum_G \frac{1}{|\hbox{Aut}(G)|} = \prod_{j=1}^\infty (1 - \frac{1}{p^j})^{-1} \ \ \ \ \ (2)$

where ${G}$ ranges over all finite abelian ${p}$-groups (up to isomorphism); I do not know how to prove this identity other than via the above probability computation, the proof of which I give below the fold.
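As a small sanity check (not from the talk), one can verify the trivial-group case of (1) by brute force: a homomorphism of ${({\mathbb Z}/p^n{\mathbb Z})^d}$ has trivial cokernel precisely when its matrix is invertible mod ${p}$, and the exact proportion of invertible ${d \times d}$ matrices over ${{\mathbb F}_p}$ is ${\prod_{j=1}^d (1 - p^{-j})}$, which converges to the ${G = \{0\}}$ case of (1) as ${d \rightarrow \infty}$. A short Python sketch confirming this by exhaustive enumeration for small ${p, d}$:

```python
from fractions import Fraction
from itertools import product

def rank_mod_p(mat, p):
    """Rank of an integer matrix over the field F_p, via Gaussian elimination."""
    m = [[x % p for x in row] for row in mat]
    nrows, ncols, r = len(m), len(m[0]), 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if m[i][c]), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        inv = pow(m[r][c], -1, p)  # modular inverse (Python 3.8+)
        m[r] = [(x * inv) % p for x in m[r]]
        for i in range(nrows):
            if i != r and m[i][c]:
                f = m[i][c]
                m[i] = [(m[i][j] - f * m[r][j]) % p for j in range(ncols)]
        r += 1
    return r

def invertible_fraction(p, d):
    """Exact proportion of invertible d x d matrices over F_p, by enumeration."""
    count = sum(
        rank_mod_p([list(e[i * d:(i + 1) * d]) for i in range(d)], p) == d
        for e in product(range(p), repeat=d * d)
    )
    return Fraction(count, p ** (d * d))

def finite_product(p, d):
    """The truncated product prod_{j=1}^d (1 - p^{-j})."""
    out = Fraction(1)
    for j in range(1, d + 1):
        out *= 1 - Fraction(1, p ** j)
    return out
```

For instance, `invertible_fraction(2, 2)` and `finite_product(2, 2)` both give ${3/8}$, reflecting the ${6}$ invertible matrices among the ${16}$ binary ${2 \times 2}$ matrices.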

Based on the heuristic that the class group should behave “randomly” subject to some “obvious” constraints, it is expected that a randomly chosen real quadratic field ${{\Bbb Q}[\sqrt{d}]}$ has unique factorisation (i.e. the class group has trivial ${p}$-group component for every ${p}$) with probability

$\displaystyle \prod_{p \hbox{ odd}} \prod_{j=2}^\infty (1 - \frac{1}{p^j}) \approx 0.754,$

whereas a randomly chosen imaginary quadratic field ${{\Bbb Q}[\sqrt{-d}]}$ has unique factorisation with probability

$\displaystyle \prod_{p \hbox{ odd}} \prod_{j=1}^\infty (1 - \frac{1}{p^j}) = 0.$

The former claim is conjectural, whereas the latter claim follows from (for instance) Siegel’s theorem on the size of the class group, as discussed in this previous post. Ellenberg, Venkatesh, and Westerland have recently established some partial results towards the function field analogues of these heuristics.
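As an aside, the numerical constant ${0.754\ldots}$ above is easy to reproduce by truncating the product over odd primes; here is a quick Python computation (primes up to ${10^6}$ suffice for the displayed accuracy, since the omitted factors differ from ${1}$ by at most about ${p^{-2}}$):

```python
def odd_primes_up_to(n):
    """Odd primes <= n, via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i in range(3, n + 1) if sieve[i]]

prob = 1.0
for p in odd_primes_up_to(10 ** 6):
    term = 1.0 / (p * p)  # start the inner product at j = 2
    while term > 1e-18:   # tail terms are below double precision
        prob *= 1.0 - term
        term /= p

print(f"{prob:.3f}")  # ≈ 0.754
```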

Next month, I am scheduled to give a short speech (three to five minutes in length) at the annual induction ceremony of the American Academy of Arts and Sciences in Boston.  This is a bit different from the usual scientific talks that I am used to giving; there are no projectors, blackboards, or other visual aids available, and the audience of Academy members is split evenly between the humanities and the sciences (as well as people in industry and politics), so this will be an interesting new experience for me.  (The last time I gave a speech was in 1985.)

My chosen topic is on the future impact of internet-based technologies on academia (somewhat similar in theme to my recent talk on this topic).  I have a draft text below the fold, though it is currently too long and my actual speech is likely to be a significantly abridged version of the one below.  [Update, Oct 12: The abridged speech is now at the bottom of the post.]  In the spirit of the theme of the talk, I would of course welcome any comments and suggestions.

For comparison, the talks from last year’s ceremony, by Jim Simons, Peter Kim, Susan Athey, Earl Lewis, and Indra Nooyi, can be found here.  Jim’s chosen topic, incidentally, was what mathematics is, and why mathematicians do it.

[Update, Nov 3: Video of the various talks by myself and the other speakers (Emmylou Harris, James Earl Jones, Elizabeth Nabel, Ronald Marc George, and Edward Villela) is now available on the Academy web site here.]