You are currently browsing the category archive for the ‘math.MP’ category.
A recurring theme in mathematics is that of duality: a mathematical object $X$ can either be described internally (or in physical space, or locally), by describing what $X$ physically consists of (or what kind of maps exist into $X$), or externally (or in frequency space, or globally), by describing what $X$ globally interacts or resonates with (or what kind of maps exist out of $X$). These two fundamentally opposed perspectives on the object $X$ are often dual to each other in various ways: performing an operation on $X$ may transform it one way in physical space, but in a dual way in frequency space, with the frequency space description often being an “inversion” of the physical space description. In several important cases, one is fortunate enough to have some sort of fundamental theorem connecting the internal and external perspectives. Here are some (closely inter-related) examples of this perspective:
- Vector space duality A vector space $V$ over a field $F$ can be described either by the set of vectors inside $V$, or dually by the set of linear functionals $\lambda: V \to F$ from $V$ to the field $F$ (or equivalently, the set of vectors inside the dual space $V^*$). (If one is working in the category of topological vector spaces, one would work instead with continuous linear functionals; and so forth.) A fundamental connection between the two is given by the Hahn-Banach theorem (and its relatives).
- Vector subspace duality In a similar spirit, a subspace $W$ of $V$ can be described either by listing a basis or a spanning set, or dually by a list of linear functionals that cut out that subspace (i.e. a spanning set for the orthogonal complement $W^\perp$). Again, the Hahn-Banach theorem provides a fundamental connection between the two perspectives.
- Convex duality More generally, a (closed, bounded) convex body $K$ in a vector space $V$ can be described either by listing a set of (extreme) points whose convex hull is $K$, or else by listing a set of (irreducible) linear inequalities that cut out $K$. The fundamental connection between the two is given by the Farkas lemma.
- Ideal-variety duality In a slightly different direction, an algebraic variety $V$ in an affine space $\mathbb{A}^n$ can be viewed either “in physical space” or “internally” as a collection of points in $V$, or else “in frequency space” or “externally” as a collection of polynomials on $\mathbb{A}^n$ whose simultaneous zero locus cuts out $V$. The fundamental connection between the two perspectives is given by the Nullstellensatz, which then leads to many of the fundamental theorems in classical algebraic geometry.
- Hilbert space duality An element $v$ in a Hilbert space $H$ can either be thought of in physical space as a vector in that space, or in momentum space as a covector $w \mapsto \langle w, v \rangle$ on that space. The fundamental connection between the two is given by the Riesz representation theorem for Hilbert spaces.
- Semantic-syntactic duality Much more generally still, a mathematical theory can either be described internally or syntactically via its axioms and theorems, or externally or semantically via its models. The fundamental connection between the two perspectives is given by the Gödel completeness theorem.
- Intrinsic-extrinsic duality A (Riemannian) manifold $M$ can either be viewed intrinsically (using only concepts that do not require an ambient space, such as the Levi-Civita connection), or extrinsically, for instance as the level set of some defining function in an ambient space. Some important connections between the two perspectives include the Nash embedding theorem and the theorema egregium.
- Group duality A group can be described either via presentations (lists of generators, together with relations between them) or representations (realisations of that group in some more concrete group of transformations). A fundamental connection between the two is Cayley’s theorem. Unfortunately, in general it is difficult to build upon this connection (except in special cases, such as the abelian case), and one cannot always pass effortlessly from one perspective to the other.
- Pontryagin group duality A (locally compact Hausdorff) abelian group $G$ can be described either by listing its elements $g \in G$, or by listing the characters $\chi: G \to \mathbb{R}/\mathbb{Z}$ (i.e. continuous homomorphisms from $G$ to the unit circle, or equivalently elements of the Pontryagin dual $\hat G$). The connection between the two is the focus of abstract harmonic analysis.
- Pontryagin subgroup duality A subgroup $H$ of a locally compact abelian group $G$ can be described either by generators in $H$, or by generators in the orthogonal complement $H^\perp \subset \hat G$. One of the fundamental connections between the two is the Poisson summation formula.
- Fourier duality A (sufficiently nice) function $f: G \to \mathbb{C}$ on a locally compact abelian group $G$ (equipped with a Haar measure $\mu$) can either be described in physical space (by its values $f(x)$ at each element $x$ of $G$) or in frequency space (by the values $\hat f(\xi)$ of its Fourier transform at elements $\xi$ of the Pontryagin dual $\hat G$). The fundamental connection between the two is the Fourier inversion formula.
- The uncertainty principle The behaviour of a function $f$ at physical scales above (resp. below) a certain scale $R$ is almost completely controlled by the behaviour of its Fourier transform $\hat f$ at frequency scales below (resp. above) the dual scale $1/R$, and vice versa, thanks to various mathematical manifestations of the uncertainty principle. (The Poisson summation formula can also be viewed as a variant of this principle, using subgroups instead of scales.)
- Stone/Gelfand duality A (locally compact Hausdorff) topological space $X$ can be viewed in physical space (as a collection of points), or dually, via the algebra $C(X)$ of continuous complex-valued functions on that space, or (in the case when $X$ is compact and totally disconnected) via the boolean algebra of clopen sets (or equivalently, the idempotents of $C(X)$). The fundamental connection between the two is given by the Stone representation theorem or the (commutative) Gelfand-Naimark theorem.
I have discussed a fair number of these examples in previous blog posts (indeed, most of the links above are to my own blog). In this post, I would like to discuss the uncertainty principle, that describes the dual relationship between physical space and frequency space. There are various concrete formalisations of this principle, most famously the Heisenberg uncertainty principle and the Hardy uncertainty principle – but in many situations, it is the heuristic formulation of the principle that is more useful and insightful than any particular rigorous theorem that attempts to capture that principle. Unfortunately, it is a bit tricky to formulate this heuristic in a succinct way that covers all the various applications of that principle; the Heisenberg inequality is a good start, but it only captures a portion of what the principle tells us. Consider for instance the following (deliberately vague) statements, each of which can be viewed (heuristically, at least) as a manifestation of the uncertainty principle:
- A function which is band-limited (restricted to low frequencies) is featureless and smooth at fine scales, but can be oscillatory (i.e. containing plenty of cancellation) at coarse scales. Conversely, a function which is smooth at fine scales will be almost entirely restricted to low frequencies.
- A function which is restricted to high frequencies is oscillatory at fine scales, but is negligible at coarse scales. Conversely, a function which is oscillatory at fine scales will be almost entirely restricted to high frequencies.
- Projecting a function to low frequencies corresponds to averaging out (or spreading out) that function at fine scales, leaving only the coarse scale behaviour.
- Projecting a function to high frequencies corresponds to removing the averaged coarse scale behaviour, leaving only the fine scale oscillation.
- The number of degrees of freedom of a function is bounded by the product of its spatial uncertainty and its frequency uncertainty (or more generally, by the volume of the phase space uncertainty). In particular, there are not enough degrees of freedom for a non-trivial function to be simultaneously localised to both very fine scales and very low frequencies.
- To control the coarse scale (or global) averaged behaviour of a function, one essentially only needs to know the low frequency components of the function (and vice versa).
- To control the fine scale (or local) oscillation of a function, one only needs to know the high frequency components of the function (and vice versa).
- Localising a function to a region of physical space will cause its Fourier transform (or inverse Fourier transform) to resemble a plane wave on every dual region of frequency space.
- Averaging a function along certain spatial directions or at certain scales will cause the Fourier transform to become localised to the dual directions and scales. The smoother the averaging, the sharper the localisation.
- The smoother a function is, the more rapidly decreasing its Fourier transform (or inverse Fourier transform) is (and vice versa).
- If a function is smooth or almost constant in certain directions or at certain scales, then its Fourier transform (or inverse Fourier transform) will decay away from the dual directions or beyond the dual scales.
- If a function has a singularity spanning certain directions or certain scales, then its Fourier transform (or inverse Fourier transform) will decay slowly along the dual directions or within the dual scales.
- Localisation operations in position approximately commute with localisation operations in frequency so long as the product of the spatial uncertainty and the frequency uncertainty is significantly larger than one.
- In the high frequency (or large scale) limit, position and frequency asymptotically behave like a pair of classical observables, and partial differential equations asymptotically behave like classical ordinary differential equations. At lower frequencies (or finer scales), the former becomes a “quantum mechanical perturbation” of the latter, with the strength of the quantum effects increasing as one moves to increasingly lower frequencies and finer spatial scales.
- Etc., etc.
- Almost all of the above statements generalise to locally compact abelian groups other than $\mathbb{R}$ or $\mathbb{R}^n$, in which the concept of a direction or scale is replaced by that of a subgroup or an approximate subgroup. (In particular, as we will see below, the Poisson summation formula can be viewed as another manifestation of the uncertainty principle.)
I think of all of the above (closely related) assertions as being instances of “the uncertainty principle”, but it seems difficult to combine them all into a single unified assertion, even at the heuristic level; they seem to be better arranged as a cloud of tightly interconnected assertions, each of which is reinforced by several of the others. The famous Heisenberg inequality $\Delta x \cdot \Delta \xi \gtrsim 1$ is at the centre of this cloud, but is by no means the only aspect of it.
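As a quick numerical illustration of the scale-duality heuristic (my own sketch, not part of the original discussion), one can check with a discrete Fourier transform that squeezing a Gaussian in physical space spreads its transform in frequency space, with the product of the two widths staying essentially constant:

```python
import numpy as np

# Numerical check of scale duality: a Gaussian of spatial width sigma has a
# Fourier transform of frequency width ~ 1/sigma, so the product of the two
# widths is (approximately) independent of sigma.
n = 4096
x = np.linspace(-50, 50, n)
dx = x[1] - x[0]
freqs = np.fft.fftfreq(n, d=dx)

def width_product(sigma):
    f = np.exp(-x**2 / (2 * sigma**2))      # Gaussian localised at scale sigma
    F = np.abs(np.fft.fft(f))               # magnitude of its discrete Fourier transform
    w_x = np.sqrt(np.sum(x**2 * f**2) / np.sum(f**2))      # spatial spread
    w_k = np.sqrt(np.sum(freqs**2 * F**2) / np.sum(F**2))  # frequency spread
    return w_x * w_k

products = [width_product(sigma) for sigma in (0.5, 1.0, 2.0)]
print(products)  # all close to 1/(4*pi): squeezing one side spreads the other
```

For an exact Gaussian the product is exactly $1/4\pi$ (in these "cycles per unit length" conventions); the Heisenberg inequality says no function can do substantially better.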
The uncertainty principle (as interpreted in the above broad sense) is one of the most fundamental principles in harmonic analysis (and more specifically, to the subfield of time-frequency analysis), second only to the Fourier inversion formula (and more generally, Plancherel’s theorem) in importance; understanding this principle is a key piece of intuition in the subject that one has to internalise before one can really get to grips with this subject (and also with closely related subjects, such as semi-classical analysis and microlocal analysis). Like many fundamental results in mathematics, the principle is not actually that difficult to understand, once one sees how it works; and when one needs to use it rigorously, it is usually not too difficult to improvise a suitable formalisation of the principle for the occasion. But, given how vague this principle is, it is difficult to present this principle in a traditional “theorem-proof-remark” manner. Even in the more informal format of a blog post, I was surprised by how challenging it was to describe my own understanding of this piece of mathematics in a linear fashion, despite (or perhaps because of) it being one of the most central and basic conceptual tools in my own personal mathematical toolbox. In the end, I chose to give below a cloud of interrelated discussions about this principle rather than a linear development of the theory, as this seemed to more closely align with the nature of this principle.
This week at UCLA, Pierre-Louis Lions gave one of this year’s Distinguished Lecture Series, on the topic of mean field games. These are a relatively novel class of systems of partial differential equations, that are used to understand the behaviour of multiple agents each individually trying to optimise their position in space and time, but with their preferences being partly determined by the choices of all the other agents, in the asymptotic limit when the number of agents goes to infinity. A good example here is that of traffic congestion: as a first approximation, each agent wishes to get from A to B in the shortest path possible, but the speed at which one can travel depends on the density of other agents in the area. A more light-hearted example is that of a Mexican wave (or audience wave), which can be modeled by a system of this type, in which each agent chooses to stand, sit, or be in an intermediate position based on his or her comfort level, and also on the position of nearby agents.
Under some assumptions, mean field games can be expressed as a coupled system of two equations, a Fokker-Planck type equation evolving forward in time that governs the evolution of the density function of the agents, and a Hamilton-Jacobi (or Hamilton-Jacobi-Bellman) type equation evolving backward in time that governs the computation of the optimal path for each agent. The combination of both forward propagation and backward propagation in time creates some unusual “elliptic” phenomena in the time variable that are not seen in more conventional evolution equations. For instance, for Mexican waves, this model predicts that such waves only form for stadiums exceeding a certain minimum size (and this phenomenon has apparently been confirmed experimentally!).
Due to lack of time and preparation, I was not able to transcribe Lions’ lectures in full detail; but I thought I would describe here a heuristic derivation of the mean field game equations, and mention some of the results that Lions and his co-authors have been working on. (Video of a related series of lectures (in French) by Lions on this topic at the Collège de France is available here.)
To avoid (rather important) technical issues, I will work at a heuristic level only, ignoring issues of smoothness, convergence, existence and uniqueness, etc.
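For orientation, here is the coupled system in a typical formulation (my notation, following the standard Lasry-Lions literature rather than the lectures verbatim): a value function $u$ solved backward in time, coupled to the agent density $m$ solved forward in time.

```latex
% Backward Hamilton-Jacobi-Bellman equation for the value function u:
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m)
% Forward Fokker-Planck equation for the density m of agents:
\partial_t m - \nu \Delta m - \operatorname{div}\!\big( m \, \partial_p H(x, \nabla u) \big) = 0
% with initial density m(0,x) = m_0(x) and terminal condition u(T,x) = g(x, m(T)).
```

The backward/forward coupling is exactly the source of the “elliptic in time” behaviour mentioned above: neither equation can be marched in time on its own.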
The Schrödinger equation is the fundamental equation of motion for (non-relativistic) quantum mechanics, modeling both one-particle systems and $N$-particle systems for $N > 1$. Remarkably, despite being a linear equation, solutions to this equation can be governed by a non-linear equation in the large particle limit $N \to \infty$. In particular, when modeling a Bose-Einstein condensate with a suitably scaled interaction potential in the large particle limit, the solution can be governed by the cubic nonlinear Schrödinger equation $i\partial_t \phi = -\Delta \phi + \mu |\phi|^2 \phi$ (in a suitable normalisation).
I recently attended a talk by Natasa Pavlovic on the rigorous derivation of this type of limiting behaviour, which was initiated by the pioneering work of Hepp and Spohn, and has now attracted a vast recent literature. The rigorous details here are rather sophisticated; but the heuristic explanation of the phenomenon is fairly simple, and actually rather pretty in my opinion, involving the foundational quantum mechanics of $N$-particle systems. I am recording this heuristic derivation here, partly for my own benefit, but perhaps it will be of interest to some readers.
This discussion will be purely formal, in the sense that (important) analytic issues such as differentiability, existence and uniqueness, etc. will be largely ignored.
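Schematically (in a standard mean field normalisation, stated here as background rather than as a summary of the talk), the derivation starts from the $N$-particle Hamiltonian with a weakly coupled pair interaction and a factorised ansatz:

```latex
% Mean field N-particle Hamiltonian with a 1/N-scaled pair interaction:
H_N = \sum_{j=1}^N -\Delta_{x_j} + \frac{1}{N} \sum_{1 \le j < k \le N} V(x_j - x_k)
% Under the factorised ansatz \psi_N \approx \phi^{\otimes N}, the one-particle
% wave function formally obeys the Hartree equation
i \partial_t \phi = -\Delta \phi + (V * |\phi|^2)\, \phi
% and when V concentrates to a point interaction \mu \delta, this becomes the
% cubic nonlinear Schrodinger (Gross-Pitaevskii) equation
i \partial_t \phi = -\Delta \phi + \mu\, |\phi|^2 \phi.
```

The linearity of the $N$-body problem is thus reconciled with the nonlinearity of the limit: the cubic term records the interaction of each particle with the common density $|\phi|^2$ of all the others.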
“Gauge theory” is a term which has connotations of being a fearsomely complicated part of mathematics – for instance, playing an important role in quantum field theory, general relativity, geometric PDE, and so forth. But the underlying concept is really quite simple: a gauge is nothing more than a “coordinate system” that varies depending on one’s “location” with respect to some “base space” or “parameter space”, a gauge transform is a change of coordinates applied to each such location, and a gauge theory is a model for some physical or mathematical system to which gauge transforms can be applied (and is typically gauge invariant, in that all physically meaningful quantities are left unchanged (or transform naturally) under gauge transformations). By fixing a gauge (thus breaking or spending the gauge symmetry), the model becomes something easier to analyse mathematically, such as a system of partial differential equations (in classical gauge theories) or a perturbative quantum field theory (in quantum gauge theories), though the tractability of the resulting problem can be heavily dependent on the choice of gauge that one fixed. Deciding exactly how to fix a gauge (or whether one should spend the gauge symmetry at all) is a key question in the analysis of gauge theories, and one that often requires the input of geometric ideas and intuition into that analysis.
I was asked recently to explain what a gauge theory was, and so I will try to do so in this post. For simplicity, I will focus exclusively on classical gauge theories; quantum gauge theories are the quantization of classical gauge theories and have their own set of conceptual difficulties (coming from quantum field theory) that I will not discuss here. While gauge theories originated from physics, I will not discuss the physical significance of these theories much here, instead focusing just on their mathematical aspects. My discussion will be informal, as I want to try to convey the geometric intuition rather than the rigorous formalism (which can, of course, be found in any graduate text on differential geometry).
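The simplest concrete instance of this setup (electromagnetism, given here as an illustration rather than as an excerpt from the discussion below) already shows all the ingredients: a gauge field, a gauge transform, a gauge-invariant quantity, and a gauge-fixing condition.

```latex
% In electromagnetism the gauge field is a connection 1-form A; a gauge
% transform shifts A by an exact form, leaving the curvature F unchanged:
A \mapsto A + d\chi, \qquad F := dA \mapsto d(A + d\chi) = dA = F.
% Fixing a gauge spends this freedom by imposing a condition on A, e.g.
\text{Coulomb gauge: } \nabla \cdot A = 0, \qquad
\text{Lorenz gauge: } \partial^\mu A_\mu = 0.
```

The physically meaningful object is the curvature $F$ (the electromagnetic field); the choice of representative $A$ is the “coordinate system” that the gauge transform changes.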
My penultimate article for my PCM series is a very short one, on “Hamiltonians“. The PCM has a number of short articles to define terms which occur frequently in the longer articles, but are not substantive enough topics by themselves to warrant a full-length treatment. One of these is the term “Hamiltonian”, which is used in all the standard types of physical mechanics (classical or quantum, microscopic or statistical) to describe the total energy of a system. It is a remarkable feature of the laws of physics that this single object (which is a scalar-valued function in classical physics, and a self-adjoint operator in quantum mechanics) suffices to describe the entire dynamics of a system, although from a mathematical perspective it is not always easy to read off all the analytic aspects of this dynamics just from the form of the Hamiltonian.
In mathematics, Hamiltonians of course arise in the equations of mathematical physics (such as Hamilton’s equations of motion, or Schrödinger’s equations of motion), but also show up in symplectic geometry (as a special case of a moment map) and in microlocal analysis.
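To make the “single object determines the dynamics” point concrete, here are the two standard equations of motion generated by a Hamiltonian (standard facts, not excerpts from the PCM article):

```latex
% Classical mechanics: a scalar Hamiltonian H(q,p) generates Hamilton's equations
\dot q = \frac{\partial H}{\partial p}, \qquad \dot p = -\frac{\partial H}{\partial q}.
% Quantum mechanics: a self-adjoint operator H generates the Schrodinger equation
i \hbar\, \partial_t \psi = H \psi.
```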
For this post, I would also like to highlight an article of my good friend Andrew Granville on one of my own favorite topics, “Analytic number theory“, focusing in particular on the classical problem of understanding the distribution of the primes, via such analytic tools as zeta functions and L-functions, sieve theory, and the circle method.
Einstein’s equation $E = mc^2$ describing the equivalence of mass and energy is arguably the most famous equation in physics. But his beautifully elegant derivation of this formula (here is the English translation) from previously understood laws of physics is considerably less famous. (There is an amusing Far Side cartoon in this regard, with the punchline “squared away”, which you can find on-line by searching hard enough, though I will not link to it directly.)
This topic had come up in recent discussion on this blog, so I thought I would present Einstein’s derivation here. Actually, to be precise, in the paper mentioned above, Einstein uses the postulates of special relativity and other known laws of physics to show the following:
Proposition. (Mass-energy equivalence) If a body at rest emits a total energy of $E$ while remaining at rest, then the mass of that body decreases by $E/c^2$.
Assuming that bodies at rest with zero mass necessarily have zero energy, this implies the famous formula $E = mc^2$ – but only for bodies which are at rest. For moving bodies, there is a similar formula, but one has to first decide what the correct definition of mass is for moving bodies; I will not discuss this issue here, but see for instance the Wikipedia entry on this topic.
Broadly speaking, the derivation of the above proposition proceeds via the following five steps:
- Using the postulates of special relativity, determine how space and time coordinates transform under changes of reference frame (i.e. derive the Lorentz transformations).
- Using 1., determine how the temporal frequency $\nu$ (and wave number $k$) of photons transform under changes of reference frame (i.e. derive the formulae for relativistic Doppler shift).
- Using Planck’s law $E = h\nu$ (and de Broglie’s law $p = \hbar k$) and 2., determine how the energy $E$ (and momentum $p$) of photons transform under changes of reference frame.
- Using the law of conservation of energy (and momentum) and 3., determine how the energy (and momentum) of bodies transform under changes of reference frame.
- Comparing the results of 4. with the classical Newtonian approximations ($E = \frac{1}{2}m|v|^2$ and $p = mv$), deduce the relativistic relationship between mass and energy for bodies at rest (and more generally between mass, velocity, energy, and momentum for moving bodies).
Actually, as it turns out, Einstein’s analysis for bodies at rest only needs to understand changes of reference frame at infinitesimally low velocity, $v \to 0$. However, in order to see enough relativistic effects to deduce the mass-energy equivalence, one needs to obtain formulae which are accurate to second order in $v$ (or more precisely, in $v/c$), as opposed to those in Newtonian physics which are accurate to first order in $v$. Also, to understand the relationship between mass, velocity, energy, and momentum for moving bodies rather than bodies at rest, one needs to consider non-infinitesimal changes of reference frame.
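The second-order bookkeeping can be checked symbolically (my own sketch, using the standard Doppler formulae rather than Einstein's original notation): a body emits energy $E$ as two opposite light pulses of energy $E/2$; viewed from a frame moving at speed $v$, the total emitted energy is $E\gamma$, and the excess $E(\gamma - 1)$ is, to second order, the kinetic energy of a mass $E/c^2$.

```python
import sympy as sp

v, c, E = sp.symbols('v c E', positive=True)
beta = v / c
gamma = 1 / sp.sqrt(1 - beta**2)

# Doppler-shifted energies of the forward and backward pulses (E/2 each
# in the rest frame), as seen from a frame moving at speed v:
E_forward  = (E / 2) * gamma * (1 + beta)
E_backward = (E / 2) * gamma * (1 - beta)

# Total emitted energy in the moving frame simplifies to E*gamma:
E_moving = sp.simplify(E_forward + E_backward)

# The excess over the rest-frame emission E, expanded to second order in v:
extra = sp.series(E_moving - E, v, 0, 3).removeO()
print(extra)  # equals (E/c^2) * v^2 / 2 -- the kinetic energy of a mass E/c^2
```

Since the body's velocity is unchanged by the (symmetric) emission, this loss of kinetic energy in the moving frame can only be accounted for by a decrease of $E/c^2$ in the mass.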
Important note: Einstein’s argument is, of course, a physical argument rather than a mathematical one. While I will use the language and formalism of pure mathematics here, it should be emphasised that I am not exactly giving a formal proof of the above Proposition in the sense of modern mathematics; these arguments are instead more like the classical proofs of Euclid, in that numerous “self evident” assumptions about space, time, velocity, etc. will be made along the way. (Indeed, there is a very strong analogy between Euclidean geometry and the Minkowskian geometry of special relativity.) One can of course make these assumptions more explicit, and this has been done in many other places, but I will avoid doing so here in order not to overly obscure Einstein’s original argument.
I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on phase space. This brief article, which overlaps to some extent with my article on the Schrödinger equation, introduces the concept of phase space, which is used to describe both the positions and momenta of a system in both classical and quantum mechanics, although in the latter one has to accept a certain amount of ambiguity (or non-commutativity, if one prefers) in this description thanks to the uncertainty principle. (Note that positions alone are not sufficient to fully characterise the state of a system; this observation essentially goes all the way back to Zeno with his arrow paradox.)
Phase space is also used in pure mathematics, where it is used to simultaneously describe position (or time) and frequency; thus the term “time-frequency analysis” is sometimes used to describe phase space-based methods in analysis. The counterpart of classical mechanics is then symplectic geometry and Hamiltonian ODE, while the counterpart of quantum mechanics is the theory of linear differential and pseudodifferential operators. The former is essentially the “high-frequency limit” of the latter; this can be made more precise using the techniques of microlocal analysis, semi-classical analysis, and geometric quantisation.
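The classical/quantum correspondence alluded to here can be summarised (standard notation, my own addition) by placing the two evolution laws side by side:

```latex
% Classical observables f on phase space evolve by the Poisson bracket,
\frac{d}{dt} f = \{ f, H \},
% while quantum observables F evolve by the commutator (Heisenberg equation),
\frac{d}{dt} F = \frac{i}{\hbar} [ H, F ],
% and in the semi-classical limit \hbar \to 0 the commutator degenerates to
% the Poisson bracket: \frac{1}{i\hbar}[\cdot,\cdot] \longrightarrow \{\cdot,\cdot\}.
```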
As usual, I will highlight another author’s PCM article in this post, this one being Frank Kelly‘s article “The mathematics of traffic in networks“, a subject which, as a resident of Los Angeles, I can relate to on a personal level :-) . Frank’s article also discusses in detail Braess’s paradox, which is the rather unintuitive fact that adding extra capacity to a network can sometimes increase the overall delay in the network, by inadvertently redirecting more traffic through bottlenecks! If nothing else, this paradox demonstrates that the mathematics of traffic is non-trivial.
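The textbook instance of Braess’s paradox (the standard numbers from the literature, not taken from Frank’s article) can be checked in a few lines:

```python
# Classic Braess network: 4000 drivers travel from A to B.
# Route 1: A -> C (x/100 minutes, x = drivers on the link) then C -> B (45 min).
# Route 2: A -> D (45 min) then D -> B (x/100 minutes).
drivers = 4000

# Without a shortcut, drivers split evenly between the symmetric routes:
split = drivers / 2
time_before = split / 100 + 45            # 65.0 minutes per driver

# Add a zero-cost shortcut C -> D. The route A -> C -> D -> B is now a
# dominant strategy for every driver, so at equilibrium everyone crowds
# onto both congestible links:
time_after = drivers / 100 + drivers / 100   # 80.0 minutes per driver

print(time_before, time_after)  # 65.0 80.0 -- the extra road made things worse
```

The equilibrium after the shortcut is worse for everyone, yet no driver can improve their own time by deviating: a clean illustration of why “more capacity” is not the same as “less delay”.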
I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on the Schrödinger equation – the fundamental equation of motion of quantum particles, possibly in the presence of an external field. My focus here is on the relationship between the Schrödinger equation of motion for wave functions (and the closely related Heisenberg equation of motion for quantum observables), and Hamilton’s equations of motion for classical particles (and the closely related Poisson equation of motion for classical observables). There is also some brief material on semiclassical analysis, scattering theory, and spectral theory, though with only a little more than 5 pages to work with in all, I could not devote much detail to these topics. (In particular, nonlinear Schrödinger equations, a favourite topic of mine, are not covered at all.)
As I said before, I will try to link to at least one other PCM article in every post in this series. Today I would like to highlight Madhu Sudan‘s delightful article on information and coding theory, “Reliable transmission of information“.
[Update, Oct 3: typos corrected.]
[Update, Oct 9: more typos corrected.]
As you may already know, Danica McKellar, the actress and UCLA mathematics alumna, has recently launched her book “Math Doesn’t Suck“, which is aimed at pre-teenage girls and is a friendly introduction to middle-school mathematics, such as the arithmetic of fractions. The book has received quite a bit of publicity, most of it rather favourable, and is selling quite well; at one point, it even made the Amazon top 20 bestseller list, which is a remarkable achievement for a mathematics book. (The current Amazon rank can be viewed in the product details of the Amazon page for this book.)
I’m very happy that the book is successful for a number of reasons. Firstly, I got to know Danica for a few months (she took my Introduction to Topology class way back in 1997, and in fact was the second-best student there; the class web page has long since disappeared, but you can at least see the midterm and final), and it is always very heartening to see a former student put her or his mathematical knowledge to good use :-) . Secondly, Danica is a wonderful role model and it seems that this book will encourage many school-age kids to give maths a chance. But the final reason is that the book is, in fact, rather good; the mathematical content is organised in a logical manner (for instance, it begins with prime factorisation, then covers least common multiples, then addition of fractions), well motivated, and interleaved with some entertaining, insightful, and slightly goofy digressions, anecdotes, and analogies. (To give one example: to motivate why dividing 6 by 1/2 should yield 12, she first discussed why 6 divided by 2 should give 3, by telling a story about having to serve lattes to a whole bunch of actors, where each actor demands two lattes, but one could only carry the weight of six lattes at a time, so that only three actors could be served in one go; she then asked what would happen if each actor instead wanted only half a latte. Danica also gives a very clear explanation of the concept of a variable (such as $x$), by using the familiar concept of a nickname given to someone with a complicated real name as an analogy.)
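The latte story is, of course, the rule that dividing by a fraction multiplies by its reciprocal; a two-line check (my own illustration, not from the book) using Python’s exact fractions:

```python
from fractions import Fraction

lattes = 6
# Each actor wants two lattes: six lattes serve 6 / 2 = 3 actors per trip.
print(lattes / 2)                          # 3.0
# Each actor wants only half a latte: dividing by 1/2 doubles the count,
# since 6 / (1/2) = 6 * 2 = 12.
print(Fraction(lattes) / Fraction(1, 2))   # 12
```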
While I am not exactly in the target audience for this book, I can relate to its pedagogical approach. When I was a kid myself, one of my favourite maths books was a very obscure (and now completely out of print) book called “Creating Calculus“, which introduced the basics of single-variable calculus via concocting a number of slightly silly and rather contrived stories which always involved one or more ants. For instance, to illustrate the concept of a derivative, in one of these stories one of the ants kept walking up a mathematician’s shin while he was relaxing against a tree, but started slipping down at a point where the slope of the shin reached a certain threshold; this got the mathematician interested enough to compute that slope from first principles. The humour in the book was rather corny, involving for instance some truly awful puns, but it was perfect for me when I was 11: it inspired me to play with calculus, which is an important step towards improving one’s understanding of the subject beyond a superficial level. (Two other books in a similarly playful spirit, yet still full of genuine scientific substance, are “Darwin for beginners” and “Mr. Tompkins in paperback“, both of which I also enjoyed very much as a kid. They are of course no substitute for a serious textbook on these subjects, but they complement such treatments excellently.)
Anyway, Danica’s book has already been reviewed in several places, and there’s not much more I can add to what has been said elsewhere. I thought however that I could talk about another of Danica’s contributions to mathematics, namely her paper “Percolation and Gibbs states multiplicity for ferromagnetic Ashkin-Teller models on $\mathbb{Z}^2$” (PDF available here), joint with Brandy Winn and my colleague Lincoln Chayes. (Brandy, incidentally, was the only student in my topology class who did better than Danica; she has recently obtained a PhD in mathematics from U. Chicago, with a thesis in PDE.) This paper is noted from time to time in the above-mentioned publicity, and its main result is sometimes referred to there as the “Chayes-McKellar-Winn theorem”, but as far as I know, no serious effort has been made to explain exactly what this theorem is, or the wider context in which the result is placed :-) . So I’ll give it a shot; this gives me an opportunity to talk about some beautiful topics in mathematical physics, namely statistical mechanics, spontaneous magnetisation, and percolation.
[Update, Aug 23: I added a non-technical "executive summary" of what the Chayes-McKellar-Winn theorem is at the very end of this post.]
On Thursday, Yau continued his lecture series on geometric structures, focusing a bit more on the tools and philosophy that goes into actually building these structures. Much of the philosophy, in its full generality, is still rather vague and not properly formalised, but is nevertheless supported by a large number of rigorously worked out examples and results in special cases. A dominant theme in this talk was the interaction between geometry and physics, in particular general relativity and string theory.
As usual, there are likely to be some inaccuracies in my presentation of Yau’s talk (I am not really an expert in this subject), and corrections are welcome. Yau’s slides for this talk are available here.