
Last week, we had Peter Scholze give an interesting distinguished lecture series here at UCLA on “Prismatic Cohomology”, which is a new type of cohomology theory worked out by Scholze and Bhargav Bhatt. (Video of the talks will be available shortly; for now we have some notes taken by two note-takers in the audience on that web page.) My understanding of this (speaking as someone who is rather far removed from this area) is that it is progress towards the “motivic” dream of being able to define cohomology for varieties (or similar objects) defined over an arbitrary commutative ring $R$, and with coefficients in another arbitrary commutative ring $A$. Currently, we have various flavours of cohomology that only work for certain types of domain ring $R$ and coefficient ring $A$:

- Singular cohomology, which roughly speaking works when the domain ring $R$ is a characteristic zero field such as $\mathbb{Q}$ or $\mathbb{C}$, but can allow for arbitrary coefficients $A$;
- de Rham cohomology, which roughly speaking works as long as the coefficient ring $A$ is the same as the domain ring $R$ (or a homomorphic image thereof), as one can only talk about $A$-valued differential forms if the underlying space is also defined over $A$;
- $\ell$-adic cohomology, which is a remarkably powerful application of étale cohomology, but only works well when the coefficient ring $A$ is localised around a prime $\ell$ that is different from the characteristic $p$ of the domain ring $R$; and
- Crystalline cohomology, in which the domain ring $R$ is a field $k$ of some finite characteristic $p$, but the coefficient ring $A$ can be a slight deformation of $k$, such as the ring $W(k)$ of Witt vectors of $k$.

There are various relationships between these cohomology theories; for instance, de Rham cohomology coincides with singular cohomology for smooth varieties in the limiting case of characteristic zero. The following picture Scholze drew in his first lecture captures these sorts of relationships nicely:

The new prismatic cohomology of Bhatt and Scholze unifies many of these cohomologies in the “neighbourhood” of the point $(p,p)$ in the above diagram, in which the domain ring $R$ and the coefficient ring $A$ are both thought of as being “close to characteristic $p$” in some sense, so that the dilates $pR$, $pA$ of these rings are either zero, or “small”. For instance, the $p$-adic ring $\mathbb{Z}_p$ is technically of characteristic $0$, but $p\mathbb{Z}_p$ is a “small” subring of $\mathbb{Z}_p$ (it consists of those elements of $\mathbb{Z}_p$ of $p$-adic norm at most $1/p$), so one can think of $\mathbb{Z}_p$ as being “close to characteristic $p$” in some sense. Scholze drew a “zoomed in” version of the previous diagram to informally describe the types of rings $R$, $A$ for which prismatic cohomology is effective:

To define the prismatic cohomology rings, one needs a “prism”: a ring homomorphism from a ring $A$ to a quotient $\overline{A}$, with $A$ equipped with a “Frobenius-like” endomorphism $\phi$ obeying some axioms. By tuning these homomorphisms one can recover existing cohomology theories such as crystalline or de Rham cohomology as special cases of prismatic cohomology. These specialisations are analogous to how a prism splits white light into various individual colours, giving rise to the terminology “prismatic”, as depicted in this further diagram of Scholze:

(And yes, Peter confirmed that he and Bhargav were inspired by the Dark Side of the Moon album cover in selecting the terminology.)

There was an abstract definition of prismatic cohomology (as being the essentially unique cohomology arising from prisms that obeyed certain natural axioms), but there was also a more concrete way to view them in terms of coordinates, as a “$q$-deformation” of de Rham cohomology. Whereas in de Rham cohomology one works with derivative operators $\frac{d}{dx}$ that for instance apply to monomials by the usual formula

$$\frac{d}{dx} x^n = n x^{n-1},$$

prismatic cohomology in coordinates can be computed using a “$q$-derivative” operator that for instance applies to monomials by the formula

$$\frac{d_q}{d_q x} x^n = [n]_q x^{n-1},$$

where

$$[n]_q := 1 + q + q^2 + \dots + q^{n-1} = \frac{q^n - 1}{q - 1}$$

is the “$q$-analogue” of $n$ (a polynomial in $q$ that equals $n$ in the limit $q \to 1$). (The $q$-analogues become more complicated for more general forms than these.) In this more concrete setting, the fact that prismatic cohomology is independent of the choice of coordinates apparently becomes quite a non-trivial theorem.
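The $q$-deformation is easy to play with numerically. The following sketch (the function names are my own, not from the talk) checks, in exact rational arithmetic, that the difference quotient $\frac{f(qx)-f(x)}{qx-x}$ applied to $f(x) = x^n$ reproduces $[n]_q x^{n-1}$ with $[n]_q = 1 + q + \dots + q^{n-1}$, and that $[n]_q$ reduces to $n$ at $q = 1$:

```python
from fractions import Fraction

def q_int(n, q):
    """The q-analogue [n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q**i for i in range(n))

def q_derivative(f, x, q):
    """The q-difference quotient (f(qx) - f(x)) / (qx - x)."""
    return (f(q * x) - f(x)) / (q * x - x)

n = 4
f = lambda x: x**n
q, x = Fraction(2), Fraction(3)

# d_q(x^n) = [n]_q x^(n-1); here [4]_2 * 3^3 = 15 * 27 = 405
assert q_derivative(f, x, q) == q_int(n, q) * x**(n - 1)

# [n]_q equals n in the limit q -> 1 (exactly, at q = 1)
assert q_int(n, Fraction(1)) == n
```

Using `Fraction` keeps the check exact, so the identity is verified without any floating-point tolerance.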

In July I will be spending a week at Park City, being one of the mini-course lecturers in the Graduate Summer School component of the Park City Summer Session on random matrices. I have chosen to give some lectures on least singular values of random matrices, the circular law, and the Lindeberg exchange method in random matrix theory; this is a slightly different set of topics than I had initially advertised (which was instead about the Lindeberg exchange method and the local relaxation flow method), but after consulting with the other mini-course lecturers I felt that this would be a more complementary set of topics. I have uploaded a draft of my lecture notes (some portion of which is derived from my monograph on the subject); as always, comments and corrections are welcome.

*[Update, June 23: notes revised and reformatted to PCMI format. -T.]*

*[Update, Mar 19 2018: further revision. -T.]*

Just a short post here to note that the cover story of this month’s Notices of the AMS, by John Friedlander, is about the recent work on bounded gaps between primes by Zhang, Maynard, our own Polymath project, and others.

I may as well take this opportunity to upload some slides of my own talks on this subject: here are my slides on small and large gaps between the primes that I gave at the “Latinos in the Mathematical Sciences” conference back in April, and here are my slides on the Polymath project for the Schock Prize symposium last October. (I also gave an abridged version of the latter talk at an AAAS Symposium in February, as well as at the Breakthrough Symposium last November.)

Due to some requests, I’m uploading to my blog the slides for my recent talk in Segovia (for the birthday conference of Michael Cowling) on “Hilbert’s fifth problem and approximate groups”. The slides cover essentially the same range of topics as this series of lecture notes, or as this text of mine, though of course in considerably less detail, given that the slides are meant to be presented in an hour.

This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture). Let $E$ be a subset of $\mathbb{R}^3$ that contains a unit line segment in every direction. Then $E$ has dimension $3$.

This conjecture is not precisely formulated here, because we have not specified exactly what type of set $E$ is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again). Let $\mathcal{L}$ be a family of lines in $\mathbb{R}^3$ that meet the unit ball $B(0,1)$, and contain a line in each direction. Let $E$ be the union of the restriction $\ell \cap B(0,2)$ of every line $\ell$ in $\mathcal{L}$. Then $E$ has dimension $3$.

As the space of all directions in $\mathbb{R}^3$ is two-dimensional, we thus see that $\mathcal{L}$ is an (at least) two-dimensional subset of the four-dimensional space of lines in $\mathbb{R}^3$ (actually, it lies in a compact subset of this space, since we have constrained the lines to meet $B(0,1)$). One could then ask if this is the only property of $\mathcal{L}$ that is needed to establish the Kakeya conjecture, that is to say if any subset of $\mathbb{R}^3$ which contains a two-dimensional family of lines (restricted to $B(0,2)$, and meeting $B(0,1)$) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in $\mathbb{R}^3$ (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture). Let $\mathcal{L}$ be a two-dimensional family of lines in $\mathbb{R}^3$ that meet $B(0,1)$, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in $\mathcal{L}$. Let $E$ be the union of the restriction $\ell \cap B(0,2)$ of every line $\ell$ in $\mathcal{L}$. Then $E$ has dimension $3$.

Actually, to make things work out we need a more quantitative version of the Wolff axiom, in which we constrain the metric entropy (and not just the dimension) of the lines that lie *close* to a plane, rather than exactly *on* the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that point in different directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound $\dim(E) \geq 5/2$ (for various notions of dimension, e.g. Hausdorff dimension) for sets $E$ in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the $5/2$ barrier, coming from the possible existence of *half-dimensional (approximate) subfields* of the reals $\mathbb{R}$. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over $\mathbb{C}$). Let $\mathcal{L}$ be a four (real) dimensional family of complex lines in $\mathbb{C}^3$ that meet the unit ball in $\mathbb{C}^3$, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in $\mathcal{L}$. Let $E$ be the union of the restriction of every complex line in $\mathcal{L}$. Then $E$ has real dimension $6$.

The argument of Wolff can be adapted to the complex case to show that all sets $E$ occurring in Conjecture 4 have real dimension at least $5$. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample). Let $H$ be the Heisenberg group

$$H := \{ (z_1,z_2,z_3) \in \mathbb{C}^3 : \mathrm{Im}(z_3) = \mathrm{Im}(z_1 \overline{z_2}) \}$$

and let $\mathcal{L}$ be the four (real) dimensional family of complex lines contained in $H$. Then $H$ is a five (real) dimensional subset of $\mathbb{C}^3$ that contains every line in $\mathcal{L}$; however, each four (real) dimensional (affine) subspace of $\mathbb{C}^3$ contains at most a two (real) dimensional set of lines in $\mathcal{L}$. In particular, the strong Kakeya conjecture over the complex numbers is false.

This proposition is proven by a routine computation, which we omit here. The group structure on $H$ is given by the group law

$$(z_1,z_2,z_3) \cdot (w_1,w_2,w_3) := (z_1+w_1,\ z_2+w_2,\ z_3+w_3+z_1\overline{w_2}+w_1\overline{z_2}),$$

giving $H$ the structure of a $2$-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over $\mathbb{R}^2$. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

$$\{ (z, 0, t) : z \in \mathbb{C} \}$$

with $t \in \mathbb{R}$; multiplying this family of lines on the right by a group element in $H$ gives other families of parallel lines, which in fact sweep out all of $H$.
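A quick numerical sanity check of the membership criterion $\mathrm{Im}(z_3) = \mathrm{Im}(z_1\overline{z_2})$ for the Heisenberg group: the lines sampled below, of the form $\{(z, w, t + z\overline{w}) : z \in \mathbb{C}\}$ with $w \in \mathbb{C}$ and $t \in \mathbb{R}$, are one convenient slice of the family of lines in $H$ (this particular parametrisation is my own choice for illustration, not from the talk):

```python
def in_heisenberg(z1, z2, z3, tol=1e-9):
    """Membership in H = {(z1,z2,z3) in C^3 : Im(z3) = Im(z1 * conj(z2))}."""
    return abs(z3.imag - (z1 * z2.conjugate()).imag) < tol

def line_point(z, w, t):
    """A point of the complex line {(z, w, t + z*conj(w)) : z in C}."""
    return (z, w, t + z * w.conjugate())

# Every point of every sampled line satisfies the membership criterion:
# Im(t + z*conj(w)) = Im(z*conj(w)) since t is real, matching Im(z1*conj(z2)).
zs = [complex(a, b) for a in (-2, 0, 1.5) for b in (-1, 0.5, 3)]
for w in zs:
    for t in (-1.0, 0.0, 2.5):
        for z in zs:
            assert in_heisenberg(*line_point(z, w, t))
```

The check passes exactly, since $\mathrm{Im}(t + z\overline{w}) = \mathrm{Im}(z\overline{w})$ for real $t$.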

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield $\mathbb{R}$ of $\mathbb{C}$, which induces an involution $z \mapsto \overline{z}$ which can then be used to define the Heisenberg group through the formula

$$H = \{ (z_1,z_2,z_3) \in \mathbb{C}^3 : z_3 - \overline{z_3} = z_1 \overline{z_2} - \overline{z_1} z_2 \}.$$

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields $k$ that contain a “half-dimensional” subfield (e.g. $k = \mathbb{F}_{p^2}$, with subfield $\mathbb{F}_p$); we leave the details to the interested reader. Morally speaking, if $\mathbb{R}$ in turn contained a subfield of dimension $1/2$ (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdős and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
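The sum-product phenomenon asserts, roughly, that a finite set of numbers cannot be simultaneously structured additively and multiplicatively: at least one of $|A+A|$, $|A \cdot A|$ must be large. A quick numerical illustration (a toy example of my own, not from the talk) contrasts an arithmetic progression with a geometric one:

```python
def sums(A):
    """The sumset A + A."""
    return {a + b for a in A for b in A}

def products(A):
    """The product set A . A."""
    return {a * b for a in A for b in A}

N = 100
AP = set(range(1, N + 1))       # arithmetic progression: additively structured
GP = {2**k for k in range(N)}   # geometric progression: multiplicatively structured

assert len(sums(AP)) == 2 * N - 1          # |A+A| as small as possible...
assert len(products(AP)) > N**2 // 10      # ...but |A.A| is large
assert len(products(GP)) == 2 * N - 1      # |A.A| as small as possible...
assert len(sums(GP)) == N * (N + 1) // 2   # ...but |A+A| is maximal
```

For the geometric progression, distinct pairs of powers of two give distinct sums (binary expansions are unique), so the sumset has the maximum possible size $N(N+1)/2$.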

We thus see that to go beyond the $5/2$ dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

- (a) Exploit the distinct directions of the lines in $\mathcal{L}$ in a way that goes beyond the Wolff axiom; or
- (b) Exploit the fact that $\mathbb{R}$ does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of $5/2$ for Kakeya sets very slightly to $5/2 + \epsilon$ for a small absolute constant $\epsilon > 0$ (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of $\mathbb{F}_p$, and then pursued route (b) to obtain a corresponding improvement to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.
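To make the finite field setting concrete, here is a small experiment (using the standard parabola construction, not something from the talk): in $\mathbb{F}_p^2$, the union of the tangent lines $y = mx - m^2/4$ to the parabola $y = x^2$, one for each slope $m$, together with one vertical line, contains a full line in every direction while occupying only about half of the $p^2$ points of the plane:

```python
p = 13                   # a small odd prime
inv4 = pow(4, -1, p)     # inverse of 4 mod p (Python 3.8+)

# K = union over all slopes m of the line y = m*x - m^2/4 (tangent to y = x^2),
# together with the vertical line x = 0.
K = {(x, (m * x - m * m * inv4) % p) for m in range(p) for x in range(p)}
K |= {(0, y) for y in range(p)}

# K contains a full line in every direction...
for m in range(p):
    assert all((x, (m * x - m * m * inv4) % p) in K for x in range(p))
assert all((0, y) in K for y in range(p))

# ...yet it has only p(p+1)/2 + (p-1)/2 points, roughly half of p^2:
assert len(K) == p * (p + 1) // 2 + (p - 1) // 2
```

Completing the square shows each column of the tangent-line union is $\{x^2 - t^2/4 : t \in \mathbb{F}_p\}$, which has $(p+1)/2$ elements, giving the size count above; Dvir's theorem shows one cannot do much better than this factor of two in the plane.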

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

- Assume that the (strong) Kakeya conjecture fails, so that there are sets $E$ of the form in Conjecture 3 of dimension $3 - \sigma$ for some $\sigma > 0$. Assume that $E$ is “optimal”, in the sense that $\sigma$ is as large as possible.
- Use the optimality of $E$ (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets $E$, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties constrain $E$ to “behave like” a putative Heisenberg group counterexample.
- By playing all these structural properties off of each other, show that $E$ can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set $E$, for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method to obtain a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.

(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of *discrete* (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to *continuous* (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the *arguments* used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as *limits* of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

| (Discrete) | (Continuous) | (Limit method) |
|---|---|---|
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

- **Topological and metric limits**. These notions of limits are commonly used by analysts. Here, one starts with a sequence $x_n$ (or perhaps a net) of objects in a common space $X$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object $\lim_n x_n$, which remains in the same space $X$, and is “close” to many of the original objects $x_n$ with respect to the given metric or topology.
- **Categorical limits**. These notions of limits are commonly used by algebraists. Here, one starts with a sequence $x_n$ (or more generally, a diagram) of objects in a category $\mathcal{C}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit $\varinjlim x_n$ or the inverse limit $\varprojlim x_n$ of these objects, which is another object in the same category $\mathcal{C}$, and is connected to the original objects by various morphisms.
- **Logical limits**. These notions of limits are commonly used by model theorists. Here, one starts with a sequence $x_n$ of objects or a sequence $X_n$ of spaces, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, the $X_n$ might be groups and the $x_n$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object $x$ or a new space $X$, which is still a model of the same language (e.g. if the spaces $X_n$ were all groups, then the limiting space $X$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if $X$ is an abelian group, then the $X_n$ will also be abelian groups for many $n$.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects $x_n$ to all lie in a common space $X$ in order to form an ultralimit; they are permitted to lie in different spaces $X_n$. This is more natural in many discrete contexts, for instance when considering graphs on $n$ vertices in the limit when $n$ goes to infinity. Also, no convergence properties on the $x_n$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces $X_n$ involved are required in order to construct the ultraproduct.

With so few requirements on the objects $x_n$ or spaces $X_n$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly suited to extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the $x_n$ will be *exactly* obeyed by the limit object; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction. Taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is closely analogous to compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses. This is particularly useful for upgrading qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.
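Genuine ultraproducts are non-constructive (one cannot exhibit a nonprincipal ultrafilter explicitly), but the flavour of the construction can be sketched by using the Fréchet (cofinite) filter as a stand-in: model a “nonstandard natural number” as a sequence of standard naturals, with relations declared to hold when they hold for all sufficiently large indices. The class below is purely illustrative (the names and the finite “horizon” are my own devices); the sequence $n \mapsto n$ then behaves as an unbounded element $\omega$:

```python
class Hypernat:
    """A toy model of a nonstandard natural number: a sequence of naturals,
    compared by 'eventually true'. This is only a sketch of a reduced product
    over the Frechet filter: a true ultraproduct needs a nonprincipal
    ultrafilter to decide the cases where neither relation holds eventually."""

    def __init__(self, seq, horizon=10_000):
        self.seq = seq          # function n -> natural number
        self.horizon = horizon  # crude finite stand-in for "eventually"

    @classmethod
    def standard(cls, k):
        """Embed a standard natural as a constant sequence."""
        return cls(lambda n: k)

    def eventually(self, other, rel):
        n0 = self.horizon
        return all(rel(self.seq(n), other.seq(n)) for n in range(n0, 2 * n0))

    def __lt__(self, other):
        return self.eventually(other, lambda a, b: a < b)

    def __eq__(self, other):
        return self.eventually(other, lambda a, b: a == b)

omega = Hypernat(lambda n: n)              # the "limit" of 0, 1, 2, ...
assert Hypernat.standard(9999) < omega     # omega exceeds standard naturals...
assert omega < Hypernat(lambda n: n * n)   # ...and omega < omega^2
```

The trichotomy failures of the cofinite filter (e.g. a sequence alternating between two values) are exactly what the ultrafilter in a genuine ultraproduct is there to resolve.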

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.

Last week I gave a talk at the Trinity Mathematical Society at Trinity College, Cambridge UK. As the audience was primarily undergraduate, I gave a fairly non-technical talk on the universality phenomenon, based on this blog article of mine on the same topic. It was a quite light and informal affair, and this is reflected in the talk slides (which, in particular, play up quite strongly the role of former students and Fellows of Trinity College in this story). There was some interest in making these slides available publicly, so I have placed them on this site here. (Note: copyright for the images in these slides has not been secured.)

This week I once again gave some public lectures on the cosmic distance ladder in astronomy, once at Stanford and once at UCLA. The slides I used were similar to the “version 3.0” slides I used for the same talk last year in Australia and elsewhere, but the images have been updated (and the permissions for copyrighted images secured), and some additional data has also been placed on them. I am placing these slides here on this blog, in Powerpoint format and also in PDF format. (Video for the UCLA talk should also be available on the UCLA web site at some point; I’ll add a link when it becomes available.)

These slides have evolved over a period of almost five years, particularly with regards to the imagery, but this is likely to be close to the final version. Here are some of the older iterations of the slides:

- (Version 1.0, 2006) A text-based version of the slides, together with accompanying figures.
- (Version 2.0, 2007) First conversion to Powerpoint format.
- (Version 3.0, 2009) Second conversion to Powerpoint format, with completely new imagery and a slightly different arrangement.
- (Version 4.0, 2010) Images updated from the previous version, with copyright permissions secured.
- (Version 4.1, 2010) The version used for the UCLA talk, with some additional data and calculations added.
- (Version 4.2, 2010) A slightly edited version, incorporating some corrections and feedback.
- (Version 4.3, 2017) Some further corrections.

I have found that working on and polishing a single public lecture over a period of several years has been very rewarding and educational, especially given that I had very little public speaking experience at the beginning; there are several other mathematicians I know of who are also putting some effort into giving good talks that communicate mathematics and science to the general public, but I think there could potentially be many more such talks like this.

A note regarding copyright: I am happy to have the text or layout of these slides used as the basis for other presentations, so long as the source is acknowledged. However, some of the images in these slides are copyrighted by others, and permission by the copyright holders was granted only for the display of the slides in their current format. (The list of such images is given at the end of the slides.) So if you wish to adapt the slides for your own purposes, you may need to use slightly different imagery.

(*Update*, October 11: Version 4.2 uploaded, and notice on copyright added.)

(*Update*, October 20: Some photos from the UCLA talk are available here.)

(*Update*, October 25: Video from the talk is available on YouTube and on iTunes.)

This week at UCLA, Pierre-Louis Lions gave one of this year’s Distinguished Lecture Series, on the topic of *mean field games*. These are a relatively novel class of systems of partial differential equations that are used to understand the behaviour of multiple agents, each individually trying to optimise their position in space and time, but with their preferences being partly determined by the choices of all the other agents, in the asymptotic limit when the number of agents goes to infinity. A good example here is that of traffic congestion: as a first approximation, each agent wishes to get from A to B in the shortest path possible, but the speed at which one can travel depends on the density of other agents in the area. A more light-hearted example is that of a Mexican wave (or audience wave), which can be modeled by a system of this type, in which each agent chooses to stand, sit, or be in an intermediate position based on his or her comfort level, and also on the position of nearby agents.

Under some assumptions, mean field games can be expressed as a coupled system of two equations: a Fokker-Planck type equation evolving forward in time that governs the evolution of the density function of the agents, and a Hamilton-Jacobi (or Hamilton-Jacobi-Bellman) type equation evolving *backward* in time that governs the computation of the optimal path for each agent. The combination of both forward propagation and backward propagation in time creates some unusual “elliptic” phenomena in the time variable that are not seen in more conventional evolution equations. For instance, for Mexican waves, this model predicts that such waves only form in stadiums exceeding a certain minimum size (and this phenomenon has apparently been confirmed experimentally!).

Due to lack of time and preparation, I was not able to transcribe Lions’ lectures in full detail; but I thought I would describe here a heuristic derivation of the mean field game equations, and mention some of the results that Lions and his co-authors have been working on. (Video of a related series of lectures (in French) by Lions on this topic at the Collège de France is available here.)

To avoid (rather important) technical issues, I will work at a heuristic level only, ignoring issues of smoothness, convergence, existence and uniqueness, etc.
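As a toy illustration of the fixed-point structure (though not of the actual forward-backward PDE system), one can set up a *static* mean field game on finitely many positions: each agent pays a distance cost plus a congestion penalty proportional to the density of agents at its position, and an equilibrium density is a fixed point of the best-response map. All names and parameter values below are invented for illustration:

```python
# A toy static mean field game: agents pick a position, paying distance to a
# preferred target plus a congestion charge proportional to local density.
# The equilibrium density is found by damped best-response iteration.

POSITIONS = range(10)
TARGET = 5          # every agent would like to stand here...
CONGESTION = 5.0    # ...but crowded positions are expensive

def cost(pos, density):
    return abs(pos - TARGET) + CONGESTION * density[pos]

def best_response(density):
    """Distribution of agents if each picks a cheapest position (ties split evenly)."""
    costs = {p: cost(p, density) for p in POSITIONS}
    cmin = min(costs.values())
    best = [p for p in POSITIONS if costs[p] <= cmin + 1e-12]
    return [1.0 / len(best) if p in best else 0.0 for p in POSITIONS]

# Damped fixed-point iteration: m <- 0.99*m + 0.01*best_response(m).
m = [1.0 / len(POSITIONS)] * len(POSITIONS)
for _ in range(2000):
    br = best_response(m)
    m = [0.99 * a + 0.01 * b for a, b in zip(m, br)]

assert abs(sum(m) - 1.0) < 1e-9   # still a probability distribution
assert max(m) < 0.5               # congestion spreads the crowd out
assert m[TARGET] == max(m)        # but the target is still most popular
```

At equilibrium the costs of all occupied positions equalise (here near the profile $(0.04, 0.24, 0.44, 0.24, 0.04)$ on positions $3$ through $7$); the damping plays the role that the coupling between the forward and backward equations plays in the continuum model.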

This is an adaptation of a talk I gave recently for a program at IPAM. In this talk, I gave a (very informal and non-rigorous) overview of Hrushovski’s use of model-theoretic techniques to establish new Freiman-type theorems in non-commutative groups, and some recent work in progress of Ben Green, Tom Sanders and myself to establish combinatorial proofs of some of Hrushovski’s results.
