
Before we begin our study of dynamical systems, topological dynamical systems, and measure-preserving systems (as defined in the previous lecture), it is convenient to give these three classes the structure of a category. One of the basic insights of category theory is that mathematical objects in a given class (such as dynamical systems) are best studied not in isolation, but in relation to each other, via morphisms. Furthermore, many other basic concepts pertaining to these objects (e.g. subobjects, factors, direct sums, irreducibility, etc.) can be defined in terms of these morphisms. One advantage of taking this perspective here is that it provides a unified way of defining these concepts for the three different categories of dynamical systems, topological dynamical systems, and measure-preserving systems that we will study in this course, thus sparing us the need to give any of our definitions (except for our first one below) in triplicate.
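As a small illustration of the morphism concept in the simplest of the three categories (this toy example is mine, not from the lecture): a morphism between two dynamical systems (X, T) and (Y, S) is a map φ that intertwines the two shifts, i.e. φ ∘ T = S ∘ φ. For finite systems this condition can be checked pointwise:

```python
# A minimal sketch (my own example, not from the lecture) of a morphism of
# dynamical systems: a map phi: (X, T) -> (Y, S) with phi(T(x)) = S(phi(x)).

def is_morphism(X, T, S, phi):
    """Check the equivariance condition phi . T = S . phi at every point of X."""
    return all(phi[T[x]] == S[phi[x]] for x in X)

# Example: X = Z/4Z with the shift x -> x+1, Y = Z/2Z with the shift y -> y+1,
# and phi the reduction map mod 2.  phi intertwines the two shifts, so it is a
# morphism (indeed a factor map, exhibiting (Y, S) as a factor of (X, T)).
X = range(4)
T = {x: (x + 1) % 4 for x in X}
S = {y: (y + 1) % 2 for y in range(2)}
phi = {x: x % 2 for x in X}

print(is_morphism(X, T, S, phi))  # True
```

A constant map from X to Y, by contrast, fails the equivariance condition here, so not every map between the underlying spaces is a morphism of the systems.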

The Distinguished Lecture Series at UCLA for this winter quarter is given by Avi Wigderson, who is lecturing on “some topics in computational complexity“. In his first lecture on Wednesday, Avi gave a wonderful talk (in his inimitably entertaining style) on “The power and weakness of randomness in computation“. The talk was based on these slides. He also gave a sort of “encore” on zero-knowledge proofs in more informal discussions after the main talk.

As always, any errors here are due to my transcription and interpretation.

In this lecture, I define the basic notion of a *dynamical system* (as well as the more structured notions of a *topological dynamical system* and a *measure-preserving system*), and describe the main topics we will cover in this course.

This week I am in San Diego for the annual joint mathematics meeting of the American Mathematical Society and the Mathematical Association of America. I am giving two talks here. One is a lecture (for the AMS “Current Events” Bulletin) on recent developments (by Martel-Merle, Merle-Raphael, and others) on stability of solitons; I will post on that lecture at some point in the near future, once the survey paper associated to that lecture is finalised.

The other, which I am presenting here, is an address on “structure and randomness in the prime numbers“. Of course, I’ve talked about this general topic many times before, (e.g. at my Simons lecture at MIT, my Milliman lecture at U. Washington, and my Science Research Colloquium at UCLA), and I have given similar talks to the one here – which focuses on my original 2004 paper with Ben Green on long arithmetic progressions in the primes – about a dozen or so times. As such, this particular talk has probably run its course, and so I am “retiring” it by posting it here.

p.s. At this meeting, Endre Szemerédi was awarded the 2008 Steele prize for a seminal contribution to research, for his landmark paper establishing what is now known as Szemerédi’s theorem, which underlies the result I discuss in this talk. This prize is richly deserved – congratulations Endre! [The AMS and MAA also awarded prizes to several dozen other mathematicians, including many mentioned previously on this blog; rather than list them all here, let me just point you to their prize booklet.]

I’m continuing my series of articles for the Princeton Companion to Mathematics ahead of the winter quarter here at UCLA (during which I expect this blog to become dominated by ergodic theory posts) with my article on generalised solutions to PDE. (I have three more PCM articles to release here, but they will have to wait until spring break.) This article ties in to some extent with my previous PCM article on distributions, because distributional solutions are one good example of a “generalised solution” or “weak solution” to a PDE. They are not the only such notion though; one also has variational and stationary solutions, viscosity solutions, penalised solutions, solutions outside of a singular set, and so forth. These notions of generalised solution are necessary when dealing with PDE that can exhibit singularities, shocks, oscillations, or other non-smooth behaviour. Also, in the foundational existence theory for many PDE, it has often been profitable to first construct a fairly weak solution and then use additional arguments to upgrade that solution to a stronger solution (e.g. a “classical” or “smooth” solution), rather than attempt to construct the stronger solution directly. On the other hand, there is a tradeoff between how easy it is to construct a weak solution, and how easy it is to upgrade that solution; solution concepts which are so weak that they cannot be upgraded at all seem to be significantly less useful in the subject, even if (or *especially* if) existence of such solutions is a near-triviality. [This is one manifestation of the somewhat whimsical “law of conservation of difficulty”: in order to prove any genuinely non-trivial result, some hard work has to be done *somewhere*. In particular, it is often the case that the behaviour of PDE depends quite sensitively on the exact structure of that PDE (e.g. on the sign of various key terms), and so any result that captures such behaviour must, at some point, exploit that structure in a non-trivial manner; one usually cannot get very far in PDE by relying just on general-purpose theorems that apply to all PDE, regardless of structure.]
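As a concrete illustration of one such weak solution concept (my example, not one drawn from the article): for a scalar conservation law $u_t + f(u)_x = 0$ with initial data $u_0$, a bounded function $u$ is a distributional (weak) solution if all the derivatives are moved onto a test function:

```latex
% u is a distributional solution of u_t + f(u)_x = 0, u(\cdot,0) = u_0, if for
% every smooth compactly supported test function \phi one has
\int_0^\infty \int_{\mathbb{R}} \Bigl( u\, \partial_t \phi + f(u)\, \partial_x \phi \Bigr)\, dx\, dt
  \;+\; \int_{\mathbb{R}} u_0(x)\, \phi(x,0)\, dx \;=\; 0.
```

Since no derivatives of $u$ itself appear, this identity makes sense even when $u$ develops shocks or other singularities, which is precisely the non-smooth behaviour that motivates generalised solutions in the first place.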

The Companion also has a section on history of mathematics; for instance, here is Leo Corry‘s PCM article “The development of the idea of proof“, covering the period from Euclid to Frege. We take for granted nowadays that we have precise, rigorous, and standard frameworks for proving things in set theory, number theory, geometry, analysis, probability, etc., but it is worth remembering that for the majority of the history of mathematics, this was not completely the case; even Euclid’s axiomatic approach to geometry contained some implicit assumptions about topology, order, and sets which were not fully formalised until the work of Hilbert in the modern era. (Even nowadays, there are still a few parts of mathematics, such as mathematical quantum field theory, which still do not have a completely satisfactory formalisation, though hopefully the situation will improve in the future.)

[*Update*, Jan 4: bad link fixed.]

I’m continuing my series of articles for the Princeton Companion to Mathematics through the winter break with my article on distributions. These “generalised functions” can be viewed either as limits of actual functions, or as elements of the dual of a suitable space of “test” functions. Having such a space of virtual functions to work in is very convenient for several reasons; in particular, it allows one to perform various algebraic manipulations while avoiding (or at least deferring) technical analytical issues, such as how to differentiate a non-differentiable function. You can also find a more recent draft of my article at the PCM web site (username Guest, password PCM).
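The “limits of actual functions” viewpoint can be seen numerically (a toy illustration of my own, not from the article): narrow Gaussians are genuine functions, and their pairings with a smooth test function φ converge to φ(0), the defining property of the Dirac delta distribution ⟨δ, φ⟩ = φ(0).

```python
# A numerical sketch (my example, not from the article): the delta distribution
# as a limit of ordinary functions.  The Gaussians
#     g_eps(x) = exp(-x^2 / (2 eps^2)) / (eps sqrt(2 pi))
# pair with a smooth test function phi via <g_eps, phi> = \int g_eps(x) phi(x) dx,
# and these pairings converge to phi(0) as eps -> 0.

import math

def pair(eps, phi, lo=-10.0, hi=10.0, n=200001):
    """Riemann-sum approximation of the pairing <g_eps, phi>."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        g = math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
        total += g * phi(x) * h
    return total

phi = math.cos  # a smooth test function with phi(0) = 1
for eps in (1.0, 0.1, 0.01):
    print(eps, pair(eps, phi))
# the pairings approach phi(0) = 1 as eps shrinks
```

(For this particular φ the pairing can be computed exactly as exp(-eps²/2), via the characteristic function of the Gaussian, so the convergence to 1 can be checked against a closed form.)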

Today I will highlight Carl Pomerance‘s informative PCM article on “Computational number theory“, which in particular focuses on topics such as primality testing and factoring, which are of major importance in modern cryptography. Interestingly, sieve methods play a critical role in making modern factoring arguments (such as the quadratic sieve and number field sieve) practical even for rather large numbers, although the use of sieves here is rather different from the use of sieves in additive prime number theory.
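To give a flavour of one of the topics Pomerance discusses, here is a textbook Miller–Rabin probabilistic primality test (this particular implementation is mine, not taken from his article). It rests on the fact that if n is prime and n − 1 = 2^s · d with d odd, then for any base a one has a^d ≡ 1 or a^(2^r · d) ≡ −1 (mod n) for some r < s; most bases a detect the failure of this for composite n.

```python
# A hedged sketch of a Miller-Rabin primality test (a standard textbook
# algorithm; this code is illustrative, not from Pomerance's PCM article).

import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # dispose of small factors first
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True                   # n is prime with overwhelming probability

print(is_probable_prime(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: 2^61 + 1 is divisible by 3
```

Note that this test only certifies primality probabilistically; turning such tests into deterministic, provably correct ones (and, separately, actually *factoring* the composites one finds) is where the deeper theory in Pomerance's article begins.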

[*Update*, Jan 1: Link fixed.]
