On Thursday, UCLA hosted a “Fields Medalist Symposium”, in which four of the six University of California-affiliated Fields Medalists (Vaughan Jones (1990), Efim Zelmanov (1994), Richard Borcherds (1998), and myself (2006)) gave talks of varying levels of technical sophistication. (The other two are Michael Freedman (1986) and Stephen Smale (1966), who could not attend.) The slides for my own talk are available here.

The talks were in order of the year in which the medal was awarded: we began with Vaughan, who spoke on “Flatland: a great place to do algebra”, then Efim, who spoke on “Pro-finite groups”, Richard, who spoke on “What is a quantum field theory?”, and myself, on “Nilsequences and the primes.” The audience was quite mixed, ranging from mathematics faculty to undergraduates to alumni to curiosity seekers, and I seriously doubt that every audience member understood every talk, but there was something for everyone, and for me personally it was fantastic to see some perspectives from first-class mathematicians on some wonderful areas of mathematics outside of my own fields of expertise.

Disclaimer: the summaries below are reconstructed from my notes and from some hasty web research; I don’t vouch for 100% accuracy of the mathematical content, and would welcome corrections.

Vaughan Jones – “Flatland: a great place to do algebra”

Vaughan gave a very accessible and engaging public lecture that managed the rare feat of being both non-technical and yet packing in a surprising amount of meaty mathematics. He began by noting how the Cartesian co-ordinate system of Descartes had demystified the notion of dimension, reducing the two-dimensional plane to collections of pairs of numbers, three-dimensional space to triplets of numbers, and so forth. Of course, even so, the notion of the “fourth dimension” and beyond still retains a certain almost magical appeal, and Vaughan illustrated this by discussing the classic book “Flatland” by Edwin Abbott, which mostly revolves around a society of two-dimensional intelligent beings and the difficulty they have with even conceiving of the third dimension. (As an amusing side note, Vaughan quoted a reviewer of that book from that era who approved of Abbott’s higher-dimensional speculations as likely to be more significant than those of his contemporary Hamilton, the inventor of the quaternionic number system.)

Vaughan then talked about his own mathematical journey through dimensions, starting out in the infinite-dimensional theory of von Neumann algebras (in particular, in his celebrated paper developing an index theory for subfactors of von Neumann algebras), and then descending to three dimensions (through his achievements in knot theory) and more recently to two dimensions (through planar algebras). To describe the connections between all these topics, Vaughan first recalled how knots can be obtained from braids by tying the ends together; or equivalently, how braids can be viewed as “knots-with-boundary”. This brings group theory into the picture, because braids form a group. He then talked about Louis Kauffman’s fundamental insight that braids (which can be viewed as three-dimensional objects) can be converted by a simple operation to a planar object (an element of the Temperley-Lieb algebra, which had also appeared in Vaughan’s paper on index theory). This led naturally to the more general notion of a planar algebra, which can very vaguely be viewed as a two-dimensional version of the more “linear” model of algebra (a sequence of one-input and one-output functions composed together), in which inputs and outputs are connected together by a planar tangle (a collection of non-crossing curves and loops). Other interesting examples of planar algebras included knots with boundary (as proposed by Conway) and tensors (as proposed by Penrose). He then noted how crucial the planarity was in order to obtain a rich algebraic structure; apparently, much of this interesting structure collapses if one allows the components in a tangle to cross each other. (As an analogy, he mentioned how much more rigid and easy a game of sudoku would be if one worked in {\Bbb F}_3^4 and required all coordinate 2-planes (from all six families) to be labeled with some permutation of {1,…,9}, as opposed to just three of the families (the rows, the columns, and the squares, which are also all connected planar objects).)
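
To make Kauffman’s insight slightly more concrete, here is a minimal computational sketch (my own illustration, not from the talk) of the state-sum definition of the Kauffman bracket, evaluated on a standard planar-diagram (PD) code for the trefoil knot. The smoothing conventions below are one of several standard choices; a different choice merely replaces A by A^{-1}, i.e. passes to the mirror knot.

```python
# State-sum computation of the Kauffman bracket of a trefoil diagram.
# Each crossing is replaced by one of its two planar smoothings, and each
# closed loop in the resulting picture contributes a factor of
# d = -A^2 - A^{-2} (the Temperley-Lieb loop parameter).
from itertools import product
import sympy as sp

A = sp.symbols('A')
d = -A**2 - A**(-2)

# Planar-diagram (PD) code of a trefoil: each crossing lists its four
# incident edge labels.
crossings = [(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]
edges = {e for c in crossings for e in c}

def count_loops(pairings):
    """Count the closed loops formed by gluing paired edges (union-find)."""
    parent = {e: e for e in edges}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for x, y in pairings:
        parent[find(x)] = find(y)
    return len({find(e) for e in edges})

bracket = sp.Integer(0)
for state in product('AB', repeat=len(crossings)):   # all 2^3 smoothings
    pairings, exponent = [], 0
    for (a, b, c, e), smoothing in zip(crossings, state):
        if smoothing == 'A':
            pairings += [(a, b), (c, e)]             # A-smoothing
            exponent += 1
        else:
            pairings += [(a, e), (b, c)]             # B-smoothing
            exponent -= 1
    bracket += A**exponent * d**(count_loops(pairings) - 1)

print(sp.expand(bracket))   # A**7 - A**3 - A**(-5)
# Normalising by (-A^3)^(-writhe) and substituting A = t^(-1/4) then
# recovers the Jones polynomial of the (appropriately oriented) trefoil.
```

Note that the computation never leaves the plane: the only data used are the planar matchings of edges at each crossing, which is precisely the Temperley-Lieb structure mentioned above.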

To close, Vaughan mentioned how vertex algebra structures also arise naturally in the quantum theory of lattices, and in particular (as proposed by Freedman and others) could be useful in designing quantum computers. He then noted that the theory of the braid group (including of course the Jones polynomial) could also be viewed as the theory of dynamics of non-colliding points in the plane (by viewing time as a third dimension); quantising this, one then expects braid groups to play a role in the quantum Hall effect (and its fractional generalisation) that manifests itself at incredibly low temperatures, although despite many breakthroughs in the area, there has not yet been a physical model proposed that would truly be governed by a non-abelian braid group.

Efim Zelmanov – “Pro-finite groups”

Efim gave a much more technical, but also very beautiful, talk on some cutting-edge research in group theory, revolving around the extent to which a group can be understood from a prescribed set of relations. One seeks to study infinite groups here, but Efim clearly distinguished between the “hopelessly infinite” groups, and the groups which are at least residually finite – groups which have enough finite models that one can distinguish points. For instance, the integers {\Bbb Z} are residually finite because given any two distinct integers x, y, one can find a homomorphism {\Bbb Z} \to {\Bbb Z}/N{\Bbb Z} into a finite group which sends x and y to different values. One can then view a residually finite group as a subgroup of an infinite product of finite groups. If this subgroup is topologically complete (i.e. closed in the product), we say the group is pro-finite; the p-adics are a good example, as are the Galois groups \hbox{Gal}(L/K). One can localise these notions to a fixed (rational) prime p (replacing the notion of “finite group” with “finite p-group“), giving rise to the stricter notions of residually p-finite and pro-p-finite groups. The p-adics are again a good example of a pro-p-finite group. A little less obviously, the free (non-abelian) group F_m on m generators is residually p-finite for every p, and has a pro-p-finite completion F := (F_m)_{\hat p}. An important “linear” example of a pro-p-finite group comes from the congruence subgroup GL^1(n,\Lambda) of n \times n matrices over a commutative complete Noetherian ring \Lambda which reduce to the identity modulo a maximal ideal M whose residue field is a finite field of characteristic p; more generally, we say that a pro-p-finite group is linear if it is isomorphic to a subgroup of such a congruence subgroup.
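
As a concrete illustration of the pro-p-finite (inverse limit) perspective (my own toy example, not Efim’s), one can exhibit an element of the pro-7-finite group {\Bbb Z}_7 through its finite quotients: a square root of 2 in {\Bbb Z}_7, obtained by Hensel-lifting the solution 3^2 = 2 \hbox{ mod } 7 up the tower of finite 7-groups {\Bbb Z}/7^k{\Bbb Z}:

```python
# An element of the pro-p group Z_p (here p = 7) is exactly a coherent
# sequence of residues in the finite p-groups Z/p^k Z.  We build a square
# root of 2 in Z_7, starting from 3^2 = 2 (mod 7), by Hensel/Newton lifting.
# (pow(x, -1, mod) for modular inverses needs Python 3.8+.)
p = 7
x = 3                                     # 3^2 = 2 (mod 7)
for k in range(1, 8):
    mod = p**k
    # Newton step x <- x - (x^2 - 2)/(2x), computed in Z/p^k Z
    x = (x - (x*x - 2) * pow(2*x, -1, mod)) % mod
    print(f"mod 7^{k}: x = {x}, x^2 mod 7^{k} = {(x*x) % mod}")
```

Each residue reduces to the previous one modulo the smaller power of 7; this coherence under reduction is exactly the inverse-limit structure that the topological completeness mentioned above encodes.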

The pro-p-completed free group F on m generators is universal, in the sense that every pro-p-group G with m generators can be realised as a quotient of F by the closed normal subgroup generated by a collection of relations R. In principle, the relations R describe the group G completely (up to isomorphism, of course), but in practice, even such basic questions as whether G is finite or infinite can be difficult to discern just from the relations R. A fundamental result in this area is the Golod-Shafarevich theorem, which asserts that if there are sufficiently few relations (more precisely, |R| < m^2/4) and if one makes the technical hypothesis that the relations lie in the p-commutator F^p [F,F], then the group is infinite. (For instance, any pro-p group with m = 5 generators and at most six such relations is automatically infinite, since 6 < 5^2/4.) Groups which obey these hypotheses are known as GS-groups. Notably, many Galois groups turn out to be GS-groups and thus infinite, which is an important fact to know in algebraic number theory.

Another important class of GS groups arises as the fundamental groups \Gamma = \pi_1(X) of compact hyperbolic 3-manifolds; Lubotzky showed that these are GS groups for all sufficiently large p. An important conjecture in this area is the virtual positive Betti number conjecture (due to Thurston and Waldhausen), which asserts that such groups have a large abelian subquotient, or more precisely that there exists a finite index subgroup H which admits a surjective homomorphism onto {\Bbb Z}. This conjecture is backed up by some impressive numerical evidence. Lubotzky and Sarnak observed that if this conjecture were true, it would imply the weaker conjecture that \Gamma does not have property \tau; this is now known as the Lubotzky-Sarnak conjecture. Recently, Lackenby showed a converse implication (deducing the Betti number conjecture from the Lubotzky-Sarnak conjecture) in the case of arithmetic groups. Efim and Lubotzky then conjectured that the LS conjecture in fact extends to all GS groups, but this was disproven last year by an explicit counterexample of Ershov. So it seems that there is more to these fundamental groups than just the GS property, at least if one believes these conjectures.

Efim then turned from low-dimensional topology to number theory, and in particular to questions relating to the Fontaine-Mazur conjecture, which asserts that the Galois groups \hbox{Gal}({\Bbb Q}_S/{\Bbb Q}) (where {\Bbb Q}_S denotes the maximal extension of {\Bbb Q} unramified outside a finite set S of primes) are so “nonlinear” that any continuous homomorphism into GL(n, {\Bbb Q}_p) has finite image. A weaker version of this conjecture asserts that such Galois groups are not linear in the sense mentioned earlier. These Galois groups are GS groups, and Efim showed that GS groups contain a copy of the pro-p-finite group F, and so to prove this weaker conjecture it would suffice to show that F is not linear, i.e. that F does not embed into GL^1(n,\Lambda) for any n and \Lambda. This was shown by Zubkov when n=2, and very recently Efim verified the conjecture when n is arbitrary and p is sufficiently large depending on n. The question reduces to one of locating universal identities for generic n \times n matrices over the p-adics (since the free group F, by definition, will not obey such identities). To prove this Efim had to generalise the problem to a representation-theoretic version and “induct on representations”; I unfortunately didn’t understand this bit too well. As a consequence Efim showed that such universal identities exist, but his argument was non-constructive, so no explicit such identity is known.

Efim then closed by talking a little more about universal identities; he mentioned the Specht conjecture (proven by Kemer) that over a field of characteristic 0, there are only finitely many universal identities which generate all the others in a syntactical sense (i.e. by the laws of algebra). He then made the interesting remark that there appears to be essentially one and only one technique known to establish finiteness theorems (such as this one) in algebra, namely to appeal to Hilbert’s basis theorem (which asserts that every ideal in a finitely generated commutative ring is itself finitely generated).

Richard Borcherds – “What is a quantum field theory?”

Richard is best known for his work in lattices and group theory, most notably in explaining the monstrous moonshine phenomenon, but in recent years he has moved to a completely different area of mathematics, namely mathematical quantum field theory (QFT), which Richard did a very admirable job of explaining. He began by contrasting the very different perspectives of mathematicians and physicists on the subject; from the mathematical side of things, he mentioned the various axiomatic formulations proposed for QFT (Wightman axioms, Haag-Kastler axioms, Osterwalder-Schrader axioms, etc.), but then mentioned the main difficulty with these formulations, namely that none of the major interacting four-dimensional spacetime QFTs (QED, QCD, the Standard Model, etc.) are known to obey any of these axioms. (The free QFTs obey the axioms, as do many two-dimensional and a few three-dimensional ones.) On the physical side, the emphasis is more on computing the Green’s functions for a QFT, which formally can be expressed as Feynman path integrals, which in turn are formally expandable as infinite sums (essentially Dyson series) over Feynman diagrams of various finite-dimensional integrals; these sums are often horribly divergent, but nevertheless by means of various tricks of varying levels of mathematical rigour, physicists have been able to compute at least the first few terms of these sums and get predictions which are in extraordinary agreement with experimental data. Most of Richard’s talk was on explaining how the mathematical and physical viewpoints could (hopefully) be reconciled.
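
The flavour of these divergent expansions can already be seen in a zero-dimensional “toy path integral”, in which the field lives at a single point and the path integral is an honest one-dimensional integral. The following sketch (my own toy model, not from the talk) expands Z(\lambda) = \frac{1}{\sqrt{2\pi}} \int_{\Bbb R} e^{-x^2/2 - \lambda x^4}\ dx in powers of the coupling constant \lambda; the coefficient of (-\lambda)^n is (4n-1)!!/n!, which counts the Feynman-diagram pairings and grows factorially, so the series has zero radius of convergence, and yet its first few partial sums match the true integral very well.

```python
# Zero-dimensional "path integral" Z(lam) versus its formal perturbative
# (Dyson-type) expansion  sum_n (-lam)^n (4n-1)!! / n!.  Low orders are
# excellent approximations; high orders diverge badly.
# (math.prod needs Python 3.8+.)
import math

def Z(lam):
    # crude numerical integration over [-10, 10]; the tails are negligible
    dx = 1e-3
    total = sum(math.exp(-x*x/2 - lam*x**4)
                for x in (i*dx for i in range(-10000, 10001)))
    return total * dx / math.sqrt(2*math.pi)

def dyson(lam, order):
    # partial sums of the formal series; (4n-1)!! counts pairings (diagrams)
    return sum((-lam)**n * math.prod(range(4*n - 1, 0, -2)) / math.factorial(n)
               for n in range(order + 1))

lam = 0.01
print("integral:", Z(lam))
for order in (1, 2, 5, 20, 40):
    print("order", order, ":", dyson(lam, order))
# orders 1-5 agree with the integral to about three decimal places;
# orders 20 and 40 have already blown up.
```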

Richard gave us a very interesting and useful “complexity hierarchy” with which to view the various spaces in both classical and quantum field theory, using things like the symmetric algebra construction V \mapsto S(V) to go from one level to the next (thus one can view spaces in level n+1 as consisting of some sort of “polynomials” of objects in a level n space). According to Richard, one of the main reasons why QFT is conceptually difficult is that it routinely uses spaces which are very high up in the hierarchy. For example, in a classical field theory (CFT), ignoring all analytic questions of convergence, differentiability, integrability, etc.:

  • “Level 0” spaces are finite-dimensional spaces such as the spacetime M, the gauge group G, the principal bundle B over M, and so forth. (For instance, in a scalar field theory, G is the real line, and B is just M \times {\Bbb R}.) Classical fields \phi are then just sections of these bundles; for instance, a scalar field is just a map \phi: M \to {\Bbb R}. The jet bundles of B also qualify as Level 0 spaces.
  • “Level 1” spaces include things like the space of differential operators on M (or on the bundle B), which can be viewed as polynomials over the “Level 0” vector fields \partial_i. For instance, the d’Alembertian \partial^\alpha \partial_\alpha would belong to this Level 1 space. A little more generally, the Poisson algebra of a jet bundle is a Level 1 space.
  • “Level 2” spaces include the space of polynomial combinations of objects from Level 1 spaces applied to a classical field \phi. In particular the space {\mathcal L} of Lagrangian densities, of which L(\phi) := \partial^\alpha \phi \partial_\alpha \phi + m^2 \phi^2 + \lambda \phi^4 is a typical example, is a Level 2 object. This is the level where standard explanations of classical field theory usually stop; the theory asserts that classical fields must be critical points for the associated action S(\phi) := \int_M L(\phi) (see the worked Euler-Lagrange computation after this list), and that is an adequate description of the theory. But one can continue onward:
  • “Level 3” spaces include the Poisson algebra generated by {\mathcal L}, which contains such objects as the Poisson bracket \{ S_1, S_2 \} between two actions. This algebra is implicit in things such as Noether’s theorem, but is usually not discussed explicitly. Using the Poisson bracket structure, elements of this Level 3 space can be viewed as “vector fields” or “flows” on the space of all fields in the classical field theory; in particular, infinitesimal symmetries live in a Level 3 space.
  • “Level 4” spaces include the universal enveloping algebra of the previously mentioned Poisson algebra (which is of course a Lie algebra). This is where “differential operators” on the space of all fields will live. I believe that canonical transformations (such as those given by non-infinitesimal symmetries, e.g. spatial translation by a non-zero distance) are also supposed to (formally) lie in a Level 4 space, though I am a bit uncertain on this point.
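
To make the Level 2 discussion above a little more concrete, here is the standard worked computation (mine, taking the signs and normalisations of the example Lagrangian at face value): a field \phi is a critical point of the action S(\phi) = \int_M L(\phi) precisely when it obeys the Euler-Lagrange equation \partial_\alpha \frac{\partial L}{\partial(\partial_\alpha \phi)} = \frac{\partial L}{\partial \phi}. For the example L(\phi) = \partial^\alpha \phi \partial_\alpha \phi + m^2 \phi^2 + \lambda \phi^4 this reads 2 \partial_\alpha \partial^\alpha \phi = 2 m^2 \phi + 4 \lambda \phi^3, i.e. the nonlinear Klein-Gordon equation \partial^\alpha \partial_\alpha \phi = m^2 \phi + 2 \lambda \phi^3.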

So while CFTs mostly top out at Level 2, QFTs seem to really require all levels up to Level 4:

  • “Level 0” spaces of a QFT are much the same as those of the associated CFT: the spacetime, the bundle, etc. The quantum fields \phi(x) are no longer sections of the bundle, though, but should be interpreted for each x as (nastily singular and unbounded) operators on some abstract Hilbert space (it seems to be unprofitable to try to make this space concrete until much later in the theory). (Incidentally, these quantum fields are not the wave function |\psi\rangle that one is used to from the Schrödinger formulation of non-relativistic quantum mechanics, but are instead the spacetime analogue of the time-dependent position operators x(t) from the Heisenberg formulation.)
  • “Level 1” spaces again include the space of differential operators, but now acting on quantum fields rather than classical fields. (There is of course the usual problem that these operators might be unbounded and thus only be densely defined on the Hilbert space of interest, but there are standard ways to deal with these difficulties.)
  • “Level 2” spaces again include the space {\mathcal L} of all Lagrangians, which are polynomials that convert a quantum field \phi(x) into another (formally) operator-valued function of spacetime.
  • “Level 3” spaces include the space of all Feynman path integrals, e.g. \int \phi^4(x) \phi^3(y) e^{i \int_M L(\phi)} D\phi. In particular they include Green’s functions.
  • “Level 4” spaces include the space of generalised Wightman distributions, which include things like \langle \emptyset | \phi(f_1) \ldots \phi(f_n) | \emptyset \rangle where |\emptyset \rangle is the vacuum state and f_1,\ldots,f_n are various bump functions in spacetime, but also include more general objects in which the \phi(f_1) \ldots \phi(f_n) factors are replaced by any other time-ordered operators, such as those coming from the Level 3 space. (I admit I didn’t understand this point very well.) Apparently, the space of all renormalisations is also a Level 4 space. (A toy computation of such a vacuum expectation value, in the simplest possible setting, follows this list.)
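
As promised above, here is a toy illustration (my own, not Richard’s) of such vacuum expectation values in the simplest possible setting: a “field” over a zero-dimensional space, so that the quantum field is just the Heisenberg-picture position operator x(t) of a harmonic oscillator. The sketch truncates the Hilbert space to a (hypothetical) 40 dimensions purely for computational purposes, and computes the two-point Wightman function \langle \emptyset | x(t) x(0) | \emptyset \rangle, which in these units should equal e^{-it}/2.

```python
# Two-point Wightman function <0| x(t) x(0) |0> for a harmonic oscillator,
# i.e. a "QFT" over zero-dimensional space, computed in a truncated Hilbert
# space.  Units: hbar = m = omega = 1, so the exact answer is exp(-i t)/2.
import numpy as np

N = 40                                    # truncation dimension
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), 1)            # annihilation operator
x = (a + a.T) / np.sqrt(2)                # position operator x = (a + a†)/√2
E = n + 0.5                               # spectrum of H = a†a + 1/2

def x_heisenberg(t):
    # Heisenberg picture: x(t) = e^{iHt} x e^{-iHt}; H is diagonal here
    phase = np.exp(1j * E * t)
    return phase[:, None] * x * np.conj(phase)[None, :]

for t in (0.0, 0.5, 1.0):
    W = (x_heisenberg(t) @ x)[0, 0]       # <0| x(t) x(0) |0>
    print(t, W, np.exp(-1j * t) / 2)      # the two values agree
```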

Richard then talked about the various attempts to build a QFT starting from the Lagrangian as the foundational object. He mentioned Dirac’s philosophy of building the Lie algebra structures first, and only worrying about exactly what the Hilbert space H was at a very late stage of the theory; indeed, trying to apply standard “prequantisation” methods such as proposing L^2(M) as the Hilbert space seemed to run into fundamental difficulties (e.g. the action of the center was wrong). There was some fix to this involving the choice of a “polarisation”, but this seemed somewhat ad hoc and didn’t seem to work in all cases (I didn’t follow this bit well). [Incidentally, Richard made a cute observation, which was that the theory becomes a little cleaner notationally when Hilbert spaces are viewed not as complex vector spaces, but rather as complex bimodules, with the complex numbers acting in the usual linear manner on the left but in an antilinear manner on the right, vz = \overline{z} v. More generally, operators should act on vectors on the left in the usual manner but on the right by the adjoint operator. This ends up reconciling the “mathematical” and “physical” notation in the subject quite nicely.]

A more promising approach was to start by computing the Wightman distributions W_n(x_1,\ldots,x_n) = \langle \emptyset | \phi(x_1) \ldots \phi(x_n) | \emptyset \rangle (which are slight variants of the Green’s functions, the differences being technical and having to do with the time-ordering of the spacetime points x_1,\ldots,x_n; the two can be related to each other via analytic continuation), and use those to construct the Hilbert space, or at least the portion of the Hilbert space generated from the vacuum state, via the GNS construction. (There are problems due to \phi(x) being singular, but this can basically be dealt with by the theory of distributions.) This approach fits well with the Wightman axiom formulation of QFT mentioned earlier; unfortunately, it is not known how to make all the relevant series converge in order to verify these axioms for any of the physically relevant QFTs.

However, one can often proceed perturbatively, which can be formalised by replacing the underlying field {\Bbb C} (a field in the mathematical sense, not the physical one!) with the ring of formal power series {\Bbb C}[[\lambda]], where \lambda denotes the (dimensionless) coupling constants of the theory (e.g. the fine structure constant, which basically represents the charge of the electron) and which show up in the interaction (i.e. non-quadratic) terms of the Lagrangian. There are some significant mathematical problems in working with this perturbative base ring (in particular, the notion of positivity, or of completeness, has to be carefully redefined) but these are reasonably tractable issues.

But even when working perturbatively, the individual terms in the Dyson series used to compute path integrals are usually still divergent. To extract meaningful values for these expressions, physicists employ the twin devices of regularisation and renormalisation. Regularisation introduces a smoothing parameter s to help the integral converge, sending s \to 0 at the end of the day; one can either use s to modify the ambient dimension of the spacetime (which is somewhat dubious mathematically, since spacetime is only supposed to have a nonnegative integer number of dimensions rather than a complex one!), or to strengthen the dissipative and dispersive nature of the Laplacian by raising it to another power (here one is on firmer ground mathematically, thanks to the well-established theory of pseudo-differential operators). However, even when one regularises, the terms still blow up in the limit s \to 0 if the parameters \lambda are kept fixed. One can then renormalise away this issue by redefining \lambda to be a certain meromorphic function of s, in such a way that a finite limit can now be extracted. This means that \lambda goes to infinity as s goes to zero; for instance in QED, this is the assertion that the electron actually has an infinite “bare” charge but a finite “effective” charge. There can be multiple choices of renormalisation, but there are transformations which convert one to the other (I didn’t understand this point well). There is also some way to view renormalisations as some sort of coproduct-preserving homomorphism from the Level 3 space of all Green’s function-type objects to itself, but I was a little lost by this point.
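
Here is a tiny caricature (again my own, not Richard’s) of the regularise-then-renormalise procedure, in the “raise the operator to another power” style: the divergent integral \int_0^\infty k (k^2+m^2)^{-1}\ dk is regularised to \int_0^\infty k (k^2+m^2)^{-1-s}\ dk, which an elementary antiderivative evaluates to m^{-2s}/2s; this has a simple pole at s=0, and subtracting the pole (a minimal-subtraction-style renormalisation, one choice among many) leaves a finite, m-dependent value.

```python
# Regularisation and renormalisation in caricature.  The regularised
# integral I(s) = int_0^oo k (k^2 + m^2)^(-1-s) dk = m^(-2s)/(2s) diverges
# as s -> 0, but removing the pole 1/(2s) leaves a finite part.
import sympy as sp

m, s = sp.symbols('m s', positive=True)
I = m**(-2*s) / (2*s)                     # closed form of the regularised integral

print(sp.series(I, s, 0, 2))              # 1/(2*s) - log(m) + s*log(m)**2 + O(s**2)

finite = sp.limit(I - 1/(2*s), s, 0)      # minimal-subtraction-style step
print(finite)                             # -log(m)
```

The finite part depends on how the pole is subtracted, which is a toy version of the remark above that there can be multiple renormalisations, related to each other by explicit transformations.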

Richard closed by giving an “executive summary” of QFT as being the unitary representation theory of a certain “Level 4” space (some sort of Lie algebra, I think) which incorporated the Lagrangian, the Poincaré group, and the complex structure (but quotiented somehow by the constraint of locality); all of these ingredients are basically mandated by the Wightman axioms.

Terence Tao – “Nilsequences and the primes”

Here I gave a more detailed presentation of the material I had discussed in my first Simons lecture, focussing a bit more on the role of nilflows in characterising the “structure” or “conspiracies” which seem to control the additive behaviour of the primes, as arising in my work with Ben Green, and also on the very recent connections between this work and Ratner’s theorem. My slides are available here.