I was very pleased today to obtain a courtesy copy of the Princeton Companion to Mathematics, which is now in print.  I have discussed several of the individual articles (including my own) in this book elsewhere in this blog, and Tim Gowers, the main editor of the Companion, has of course also discussed it on his blog.  Browsing through it, though, I do get the sense that the whole is greater than the sum of its parts. One particularly striking example of this is the final section on advice to younger mathematicians, with contributions by Sir Michael Atiyah, Béla Bollobás, Alain Connes, Dusa McDuff, and Peter Sarnak; the individual contributions are already very insightful (and almost linearly independent of each other!), but collectively they give a remarkably comprehensive and accurate portrait of how mathematical progress is made these days.

The other immediate impression I got from the book was the sheer weight (physical and otherwise – the book comprises 1034 pages) of mathematics that is out there, much of which I still only have a very partial grasp of at best (see also Einstein’s famous quote on the subject).  But the book also demonstrates that mathematics, while large, is at least connected (and reasonably bounded in diameter, modulo a small exceptional set).  I myself certainly plan to use this book as a first reference the next time I need to look up some mathematical theory or concept that I haven’t had occasion to really use much before.

Given that I have been heavily involved in certain parts of this project, I will not review the book fully here – I am sure that will be done more objectively elsewhere – but comments on the book by other readers are more than welcome here.

[Update, Sep 29: link to advice chapter added.]

I’m closing my series of articles for the Princeton Companion to Mathematics with my article on “Ricci flow”. Of course, this flow on Riemannian manifolds is now very well known to mathematicians, due to its fundamental role in Perelman’s celebrated proof of the Poincaré conjecture. In this short article, I do not focus on that proof, but instead on the more basic questions as to what a Riemannian manifold is, what the Ricci curvature tensor is on such a manifold, and how Ricci flow qualitatively changes the geometry (and with surgery, the topology) of such manifolds over time.
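
For the reader who wants the formula in hand: Ricci flow evolves a Riemannian metric g on a manifold by the equation

\partial_t g_{ij} = -2 R_{ij},

where R_{ij} is the Ricci curvature tensor of g; one can usefully think of this as a nonlinear analogue of the heat equation, which tends (in favourable situations) to smooth out the geometry of the manifold over time.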

I’ve saved this article for last, in part because it ties in well with my upcoming course on Perelman’s proof which will start in a few weeks (details to follow soon).

The last external article for the PCM that I would like to point out here is Brian Osserman’s article on the Weil conjectures, which include the “Riemann hypothesis over finite fields” that was famously proven by Deligne. These (now solved) conjectures, which among other things give some quite precise control on the number of points in an algebraic variety over a finite field, were (and continue to be) a major motivating force behind much of modern arithmetic and algebraic geometry.
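
To give the flavour of this control: for a smooth projective variety X over a finite field \mathbb{F}_q, one packages the point counts N_n = |X(\mathbb{F}_{q^n})| into the zeta function

Z(X,t) = \exp\left( \sum_{n \geq 1} N_n \frac{t^n}{n} \right).

The Weil conjectures assert (among other things) that Z(X,t) is a rational function of t obeying a functional equation, and (this being the “Riemann hypothesis” component) that its zeroes and poles all have absolute value q^{-i/2} for various integers 0 \leq i \leq 2\dim(X), which translates into square-root-type error terms for the counts N_n.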

[Update, Mar 13: Actual link to Weil conjecture article added.]

My penultimate article for my PCM series is a very short one, on “Hamiltonians”. The PCM has a number of short articles to define terms which occur frequently in the longer articles, but are not substantive enough topics by themselves to warrant a full-length treatment. One of these is the term “Hamiltonian”, which is used in all the standard types of physical mechanics (classical or quantum, microscopic or statistical) to describe the total energy of a system. It is a remarkable feature of the laws of physics that this single object (which is a scalar-valued function in classical physics, and a self-adjoint operator in quantum mechanics) suffices to describe the entire dynamics of a system, although from a mathematical perspective it is not always easy to read off all the analytic aspects of this dynamics just from the form of the Hamiltonian.

In mathematics, Hamiltonians of course arise in the equations of mathematical physics (such as Hamilton’s equations of motion, or Schrödinger’s equations of motion), but also show up in symplectic geometry (as a special case of a moment map) and in microlocal analysis.
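
For instance, in classical mechanics a Hamiltonian H(q,p) of the position and momentum coordinates generates the dynamics via Hamilton’s equations

\frac{dq}{dt} = \frac{\partial H}{\partial p}, \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q},

while in quantum mechanics the Hamiltonian, now a self-adjoint operator H, generates the dynamics via the Schrödinger equation

i \hbar \partial_t \psi = H \psi.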

For this post, I would also like to highlight an article of my good friend Andrew Granville on one of my own favorite topics, “Analytic number theory”, focusing in particular on the classical problem of understanding the distribution of the primes, via such analytic tools as zeta functions and L-functions, sieve theory, and the circle method.
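
The basic link between zeta functions and the primes is Euler’s product formula

\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \prod_p \left(1 - \frac{1}{p^s}\right)^{-1}

(valid for \mathrm{Re}(s) > 1), which converts multiplicative information about the primes into analytic information about \zeta; for instance, the prime number theorem is essentially equivalent to the assertion that \zeta does not vanish on the line \mathrm{Re}(s) = 1.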

From Tim Gowers, I hear the good news that the editing process of the Princeton Companion to Mathematics is finally nearing completion. It therefore seems like a good time to resume my own series of Companion articles, while there is still time to correct any errors.

I’ll start today with my article on “Function spaces”. Just as the analysis of numerical quantities relies heavily on the concept of magnitude or absolute value to measure the size of such quantities, or the extent to which two such quantities are close to each other, the analysis of functions relies on the concept of a norm to measure various “sizes” of such functions, as well as the extent to which two functions resemble each other. But while numbers mainly have just one notion of magnitude (not counting the p-adic valuations, which are of importance in number theory), functions have a wide variety of such magnitudes, such as “height” (L^\infty or C^0 norm), “mass” (L^1 norm), “mean square” or “energy” (L^2 or H^1 norms), “slope” (Lipschitz or C^1 norms), and so forth. In modern mathematics, we use the framework of function spaces to understand the properties of functions and their magnitudes; they provide a precise and rigorous way to formalise such “fuzzy” notions as a function being tall, thin, flat, smooth, oscillating, etc. In this article I focus primarily on the analytic aspects of these function spaces (inequalities, interpolation, etc.), leaving aside the algebraic aspects and the connections with mathematical physics.
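
For instance, for a function f on \mathbb{R}, some of the norms mentioned above are given explicitly by

\|f\|_{L^\infty} = \sup_x |f(x)|, \qquad \|f\|_{L^1} = \int |f(x)|\, dx, \qquad \|f\|_{L^2} = \left( \int |f(x)|^2\, dx \right)^{1/2},

with the Sobolev norm \|f\|_{H^1} also taking into account the L^2 norm of the derivative f'. Each norm captures a different aspect of the “size” of f, and a large part of the subject consists of the inequalities relating one norm to another.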

The Companion has several short articles describing specific landmark achievements in mathematics. For instance, here is Peter Cameron‘s short article on “Gödel’s theorem“, on what is arguably one of the most popularised (and most misunderstood) theorems in all of mathematics.

I’m continuing my series of articles for the Princeton Companion to Mathematics ahead of the winter quarter here at UCLA (during which I expect this blog to become dominated by ergodic theory posts) with my article on generalised solutions to PDE. (I have three more PCM articles to release here, but they will have to wait until spring break.) This article ties in to some extent with my previous PCM article on distributions, because distributional solutions are one good example of a “generalised solution” or “weak solution” to a PDE. They are not the only such notion though; one also has variational and stationary solutions, viscosity solutions, penalised solutions, solutions outside of a singular set, and so forth. These notions of generalised solution are necessary when dealing with PDE that can exhibit singularities, shocks, oscillations, or other non-smooth behaviour. Also, in the foundational existence theory for many PDE, it has often been profitable to first construct a fairly weak solution and then use additional arguments to upgrade that solution to a stronger solution (e.g. a “classical” or “smooth” solution), rather than attempt to construct the stronger solution directly. On the other hand, there is a tradeoff between how easy it is to construct a weak solution, and how easy it is to upgrade that solution; solution concepts which are so weak that they cannot be upgraded at all seem to be significantly less useful in the subject, even if (or especially if) existence of such solutions is a near-triviality. [This is one manifestation of the somewhat whimsical “law of conservation of difficulty”: in order to prove any genuinely non-trivial result, some hard work has to be done somewhere. In particular, it is often the case that the behaviour of PDE depends quite sensitively on the exact structure of that PDE (e.g. on the sign of various key terms), and so any result that captures such behaviour must, at some point, exploit that structure in a non-trivial manner; one usually cannot get very far in PDE by relying just on general-purpose theorems that apply to all PDE, regardless of structure.]
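
To give the simplest example of the weak solution concept mentioned above: a locally integrable function u is a distributional solution of the Poisson equation \Delta u = f if one has

\int u\, \Delta \phi = \int f\, \phi

for every smooth, compactly supported test function \phi. Note that this definition makes sense even when u is not differentiable in any classical sense, since all the derivatives have been moved onto the test function via (formal) integration by parts.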

The Companion also has a section on history of mathematics; for instance, here is Leo Corry’s PCM article “The development of the idea of proof”, covering the period from Euclid to Frege. We take for granted nowadays that we have precise, rigorous, and standard frameworks for proving things in set theory, number theory, geometry, analysis, probability, etc., but it is worth remembering that for the majority of the history of mathematics, this was not completely the case; even Euclid’s axiomatic approach to geometry contained some implicit assumptions about topology, order, and sets which were not fully formalised until the work of Hilbert in the modern era. (Even nowadays, there are a few parts of mathematics, such as mathematical quantum field theory, which still do not have a completely satisfactory formalisation, though hopefully the situation will improve in the future.)

[Update, Jan 4: bad link fixed.]

I’m continuing my series of articles for the Princeton Companion to Mathematics through the winter break with my article on distributions. These “generalised functions” can be viewed either as limits of actual functions, or as elements of the dual of a suitable space of “test” functions. Having such a space of virtual functions to work in is very convenient for several reasons; in particular, it allows one to perform various algebraic manipulations while avoiding (or at least deferring) technical analytical issues, such as how to differentiate a non-differentiable function. You can also find a more recent draft of my article at the PCM web site (username Guest, password PCM).
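
For instance, the derivative T' of a distribution T is defined by duality, via the formula

\langle T', \phi \rangle := -\langle T, \phi' \rangle

for all test functions \phi, in analogy with integration by parts. Thus, for example, the derivative of the Heaviside step function (which is certainly not differentiable in the classical sense) is the Dirac delta function.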

Today I will highlight Carl Pomerance’s informative PCM article on “Computational number theory”, which focuses in particular on topics such as primality testing and factoring that are of major importance in modern cryptography. Interestingly, sieve methods play a critical role in making modern factoring arguments (such as the quadratic sieve and number field sieve) practical even for rather large numbers, although the use of sieves here is rather different from the use of sieves in additive prime number theory.
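
As a quick illustration of how cheap primality testing is compared to factoring, here is a minimal sketch (in Python; a textbook rendering of the Miller-Rabin probabilistic test, not code from Pomerance’s article):

import random

def is_probable_prime(n, rounds=20):
    # Miller-Rabin: declares n "probably prime" or "definitely composite".
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # a composite n slips through with probability at most 4^(-rounds)

Each round costs only a modular exponentiation, so numbers with hundreds of digits can be tested in a fraction of a second; by contrast, no comparably fast classical method is known for actually factoring such numbers, and this asymmetry is precisely what cryptosystems such as RSA rely upon.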

[Update, Jan 1: Link fixed.]

I’m continuing my series of articles for the Princeton Companion to Mathematics through the holiday season with my article on “Differential forms and integration”. This is my attempt to explain the concept of a differential form in differential geometry and several-variable calculus, which I view as an extension of the concept of the signed integral in single-variable calculus. I briefly touch on the important concept of de Rham cohomology, but mostly I stick to fundamentals.
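
The result that ties the whole subject together is the general Stokes theorem

\int_M d\omega = \int_{\partial M} \omega

for a k-form \omega on an oriented (k+1)-dimensional manifold M with boundary \partial M; the fundamental theorem of calculus, Green’s theorem, and the divergence theorem are all special cases of this single identity.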

I would also like to highlight Doron Zeilberger’s PCM article “Enumerative and Algebraic combinatorics”. This article describes the art of usefully counting the number of objects of a given type exactly; this subject has a rather algebraic flavour to it, in contrast with asymptotic combinatorics, which is more concerned with computing the order of magnitude of the number of objects in a class. The two subjects complement each other; for instance, in my own work, I have found that enumerative and other algebraic methods tend to be useful for controlling “main terms” in a given expression, while asymptotic and other analytic methods tend to be good at controlling “error terms”.
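
A typical example of such exact counting: the number of ways to triangulate a convex (n+2)-gon (equivalently, the number of ways to fully bracket a product of n+1 terms) is the Catalan number C_n, whose generating function obeys the exact identity

\sum_{n \geq 0} C_n x^n = \frac{1 - \sqrt{1-4x}}{2x},

leading to the closed formula C_n = \frac{1}{n+1} \binom{2n}{n}; an asymptotic combinatorialist, by contrast, would be content with the growth rate C_n \sim 4^n / (\sqrt{\pi}\, n^{3/2}).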

I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on phase space. This brief article, which overlaps to some extent with my article on the Schrödinger equation, introduces the concept of phase space, which is used to simultaneously describe the positions and momenta of a system in both classical and quantum mechanics, although in the latter one has to accept a certain amount of ambiguity (or non-commutativity, if one prefers) in this description, thanks to the uncertainty principle. (Note that positions alone are not sufficient to fully characterise the state of a system; this observation essentially goes all the way back to Zeno with his arrow paradox.)
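
In quantum mechanics, for instance, the uncertainty principle asserts that the position uncertainty \Delta x and the momentum uncertainty \Delta p of a state always obey

\Delta x \cdot \Delta p \geq \hbar/2,

so that a quantum state can only be localised to a region of phase space of “volume” comparable to Planck’s constant or larger, rather than to a single point.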

Phase space is also used in pure mathematics, where it is used to simultaneously describe position (or time) and frequency; thus the term “time-frequency analysis” is sometimes used to describe phase space-based methods in analysis. The counterpart of classical mechanics is then symplectic geometry and Hamiltonian ODE, while the counterpart of quantum mechanics is the theory of linear differential and pseudodifferential operators. The former is essentially the “high-frequency limit” of the latter; this can be made more precise using the techniques of microlocal analysis, semi-classical analysis, and geometric quantisation.

As usual, I will highlight another author’s PCM article in this post, this one being Frank Kelly’s article “The mathematics of traffic in networks”, a subject which, as a resident of Los Angeles, I can relate to on a personal level :-). Frank’s article also discusses in detail Braess’s paradox, which is the rather unintuitive fact that adding extra capacity to a network can sometimes increase the overall delay in the network, by inadvertently redirecting more traffic through bottlenecks! If nothing else, this paradox demonstrates that the mathematics of traffic is non-trivial.
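
To see Braess’s paradox in action, here is a toy computation (in Python; the four-node network and its delay functions are the standard textbook example of the paradox, not data from Frank’s article):

N = 4000  # drivers travelling from S to E

# Without the shortcut there are two routes, S-A-E and S-B-E.
# Edge S-A costs x/100 minutes when x drivers use it, and edge A-E costs
# a fixed 45 minutes (symmetrically for the other route), so at
# equilibrium the traffic splits evenly between the two routes.
half = N // 2
cost_without = half / 100 + 45          # 65.0 minutes per driver

# Now add a free shortcut from A to B.  The only equilibrium sends
# everyone along S-A-B-E, loading both variable edges with all N drivers.
cost_with = N / 100 + 0 + N / 100       # 80.0 minutes per driver

# Sanity check that S-A-B-E really is an equilibrium: a lone deviator
# would do strictly worse on either alternative route.
deviate_SAE = N / 100 + 45              # 85.0
deviate_SBE = 45 + N / 100              # 85.0

print(cost_without, cost_with, deviate_SAE, deviate_SBE)

Adding the free edge thus raises every driver’s commute from 65 to 80 minutes, even though no individual driver can do better by unilaterally switching routes.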

I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on compactness and compactification. This is a fairly recent article for the PCM, which has now reached the stage at which most of the specialised articles have been written, and the general articles on topics such as compactness are being finished up. The topic of this article is self-explanatory: it is a brief and non-technical introduction to the incredibly useful concept of compactness in topology, analysis, geometry, and other areas of mathematics, and to the closely related concept of a compactification, which allows one to rigorously take limits of what would otherwise be divergent sequences.
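
A simple example of the latter concept: the sequence 1, 2, 3, \ldots has no limit in the real line \mathbb{R}, but if one adjoins a single point at infinity to form the one-point compactification \mathbb{R} \cup \{\infty\} (which is topologically a circle), the sequence now converges to \infty. More generally, compactness of a metric space guarantees that every sequence in that space has a convergent subsequence, which is often exactly what one needs in order to extract a limiting object in an analytical argument.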

The PCM has an extremely broad scope, covering not just mathematics itself, but also the context in which mathematics is placed. To illustrate this, I will mention Michael Harris’s essay for the Companion, “‘Why mathematics?’, you may ask”.

I’m continuing my series of articles for the Princeton Companion to Mathematics by uploading my article on the Fourier transform. Here, I chose to describe this transform as a means of decomposing general functions into more symmetric functions (such as sinusoids or plane waves), and to discuss a little bit how this transform is connected to differential operators such as the Laplacian. (This is of course only one of the many different uses of the Fourier transform, but again, with only five pages to work with, it’s hard to do justice to every single application. For instance, the connections with additive combinatorics are not covered at all.)
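
With one standard normalisation, the Fourier transform of an (absolutely integrable) function f on \mathbb{R} is given by

\hat f(\xi) = \int_{\mathbb{R}} f(x) e^{-2\pi i x \xi}\, dx,

and the key connection to differential operators is that the transform diagonalises them; for instance, in d dimensions (with x \xi replaced by the dot product x \cdot \xi) one has \widehat{\Delta f}(\xi) = -4\pi^2 |\xi|^2 \hat f(\xi), so that constant-coefficient differential equations become algebraic equations on the frequency side.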

On the official web site of the Companion (which you can access with the user name “Guest” and password “PCM”), there is a more polished version of the same article, after it had gone through a few rounds of the editing process.

I’ll also point out David Ben-Zvi’s Companion article on “moduli spaces”. This concept is deceptively simple – a space whose points are themselves spaces, or “representatives” or “equivalence classes” of such spaces – but it leads to the “correct” way of thinking about many geometric and algebraic objects, and more importantly about families of such objects, without drowning in a mess of coordinate charts and formulae which serve to obscure the underlying geometry.
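
Perhaps the simplest example to keep in mind: the collection of all lines through the origin in \mathbb{R}^2 is itself naturally a geometric object, namely the projective line \mathbb{RP}^1 (which is topologically a circle); each point of this moduli space “is” a line, and a continuously varying family of lines corresponds to a continuous path in the moduli space.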

[Update, Oct 21: categories fixed.]
