You are currently browsing the category archive for the ‘non-technical’ category.

The main purpose of this post is to roll over the discussion from the previous Polymath8 thread, which has become rather full with comments. We are still writing the paper, but it appears to have stabilised in a near-final form (source files available here); the main remaining tasks are proofreading, checking the mathematics, and polishing the exposition. We also have a tentative consensus to submit the paper to Algebra and Number Theory when the proofreading is all complete.

The paper is quite large now (164 pages!) but it is fortunately rather modular, and thus hopefully somewhat readable (particularly regarding the first half of the paper, which does not need any of the advanced exponential sum estimates). The size should not be a major issue for the journal, so I would not seek to artificially shorten the paper at the expense of readability or content.

The main purpose of this post is to roll over the discussion from the previous Polymath8 thread, which has become rather full with comments. As with the previous thread, the main focus of the comments on this thread is the writing up of the results of the Polymath8 “bounded gaps between primes” project; the latest files on this writeup may be found in this directory, with the most recently compiled PDF file (clocking in at about 90 pages so far, with a few sections still to be written!) being found here. There is also still some active discussion on improving the numerical results, with a particular focus on improving the sieving step that converts distribution estimates into weak prime tuples results. (For a discussion of the terminology, and for a general overview of the proof strategy, see this previous progress report on the Polymath8 project.) This post can also contain any other discussion pertinent to any aspect of the Polymath8 project, of course.

There are a few sections that still need to be written for the draft, mostly concerned with the Type I, Type II, and Type III estimates. However, the proofs of these estimates exist already on this blog, so I hope to transcribe them to the paper fairly shortly (say by the end of this week). Barring any unexpected surprises, or major reorganisation of the paper, it seems that the main remaining task in the writing process would be the proofreading and polishing, and turning from the technical mathematical details to expository issues. As always, feedback from casual participants, as well as those who have been closely involved with the project, would be very valuable in this regard. (One small comment, by the way, regarding corrections: as the draft keeps changing with time, referring to a specific line of the paper using page numbers and line numbers can become inaccurate, so if one could try to use section numbers, theorem numbers, or equation numbers as reference instead (e.g. “the third line after (5.35)” instead of “the twelfth line of page 54”) that would make it easier to track down specific portions of the paper.)

Also, we have set up a wiki page for listing the participants of the polymath8 project, their contact information, and grant information (if applicable). We have two lists of participants; one for those who have been making significant contributions to the project (comparable to that of a co-author of a traditional mathematical research paper), and another list for those who have made auxiliary contributions (e.g. typos, stylistic suggestions, or supplying references) that would typically merit inclusion in the Acknowledgments section of a traditional paper. It’s difficult to exactly draw the line between the two types of contributions, but we have relied in the past on self-reporting, which has worked pretty well so far. (By the time this project concludes, I may go through the comments to previous posts and see if any further names should be added to these lists that have not already been self-reported.)

The main objectives of the polymath8 project, initiated back in June, were to understand the recent breakthrough paper of Zhang establishing an infinite number of prime gaps bounded by a fixed constant, and then to lower that constant as much as possible. After a large number of refinements, optimisations, and other modifications to Zhang’s method, we have now (provisionally) lowered the bound substantially below Zhang’s initial value, as well as obtaining a slightly worse bound if one wishes to avoid any reliance on the deep theorems of Deligne on the Weil conjectures.

As has often been the case with other polymath projects, the pace has settled down substantially after the initial frenzy of activity; in particular, the key numerical values and parameters have stabilised over the last few weeks. While there may still be a few small improvements in these parameters that can be wrung out of our methods, I think it is safe to say that we have cleared out most of the “low-hanging fruit” (and even some of the “medium-hanging fruit”), which means that it is time to transition to the next phase of the polymath project, namely the writing phase.

After some discussion at the previous post, we have tentatively decided on writing a single research paper, which contains (in a reasonably self-contained fashion) the details of the strongest result we have (i.e. bounded gaps between primes with the best bound we currently know), together with some variants, such as the bound that one can obtain without invoking Deligne’s theorems. We can of course also include some discussion as to where further improvements could conceivably arise from these methods, although even if one assumes the most optimistic estimates regarding distribution of the primes, we still do not have any way to get past the barrier identified as the limit of this method by Goldston, Pintz, and Yildirim. This research paper does not necessarily represent the only output of the polymath8 project; for instance, as part of the polymath8 project the admissible tuples page was created, which is a repository of narrow prime tuples which can automatically accept (and verify) new submissions. (At an early stage of the project, it was suggested that we set up a computing challenge for mathematically inclined programmers to try to find the narrowest prime tuples of a given width; it might be worth revisiting this idea now that our bound has stabilised and the prime tuples page is up and running.) Other potential outputs include additional expository articles, lecture notes, or perhaps the details of a “minimal proof” of bounded gaps between primes that gives a lousy value of the bound but with as short and conceptual a proof as possible. But it seems to me that these projects do not need to proceed via the traditional research paper route (perhaps ending up on the blog, on the wiki, or on the admissible tuples page instead). These projects might also benefit from the passage of time to lend a bit of perspective and depth, especially given that there are likely to be further advances in this field from outside of the polymath project.

I have taken the liberty of setting up a Dropbox folder containing a skeletal outline of a possible research paper, and anyone who is interested in making significant contributions to the writeup of the paper can contact me to be given write access to that folder. However, I am not firmly wedded to the organisational structure of that paper, and at this stage it is quite easy to move sections around if this would lead to a more readable or more logically organised paper.

I have tried to structure the paper so that the deepest arguments – the ones which rely on Deligne’s theorems – are placed at the end of the paper, so that a reader who wishes to read and understand a proof of bounded gaps that does not rely on Deligne’s theorems can stop reading about halfway through the paper. I have also moved the top-level structure of the argument (deducing bounded gaps from a Dickson-Hardy-Littlewood claim, which in turn is established from a Motohashi-Pintz-Zhang distribution estimate, which is in turn deduced from Type I, Type II, and Type III estimates) to the front of the paper.

Of course, any feedback on the draft paper is encouraged, even from (or especially from!) readers who have been following this project on a casual basis, as this would be valuable in making sure that the paper is written in as accessible a fashion as possible. (Sometimes it is possible to be so close to a project that one loses some sense of perspective, and does not realise that what one is writing might not necessarily be as clear to other mathematicians as it is to the author.)

[*This guest post is authored by Ingrid Daubechies, who is the current president of the International Mathematical Union, and (as she describes below) is heavily involved in planning for a next-generation digital mathematical library that can go beyond the current network of preprint servers (such as the arXiv), journal web pages, article databases (such as MathSciNet), individual author web pages, and general web search engines to create a more integrated and useful mathematical resource. I have lightly edited the post for this blog, mostly by adding additional hyperlinks. – T.*]

This guest blog entry concerns the many roles a World Digital Mathematical Library (WDML) could play for the mathematical community worldwide. We seek input to help sketch how a WDML could be so much more than just a huge collection of digitally available mathematical documents. If this is of interest to you, please read on!

The “we” seeking input are the Committee on Electronic Information and Communication (CEIC) of the International Mathematical Union (IMU), and a special committee of the US National Research Council (NRC), charged by the Sloan Foundation to look into this matter. In the US, mathematicians may know the Sloan Foundation best for the prestigious early-career fellowships it awards annually, but the foundation plays a prominent role in other disciplines as well. For instance, the Sloan Digital Sky Survey (SDSS) has had a profound impact on astronomy, serving researchers in many more ways than even its ambitious original setup foresaw. The report being commissioned by the Sloan Foundation from the NRC study group could possibly be the basis for an equally ambitious program funded by the Sloan Foundation for a WDML with the potential to change the practice of mathematical research as profoundly as the SDSS did in astronomy. But to get there, we must formulate a vision that, like the original SDSS proposal, imagines at least some of those impacts. The members of the NRC committee are extremely knowledgeable, and have been picked judiciously so as to span collectively a wide range of expertise and connections. As president of the IMU, I was asked to co-chair this committee, together with Clifford Lynch, of the Coalition for Networked Information; Peter Olver, chair of the IMU’s CEIC, is also a member of the committee. But each of us is at least a quarter century older than the originators of MathOverflow or the ArXiv when they started. We need you, internet-savvy, imaginative, social-networking, young mathematicians to help us formulate the vision that may inspire the creation of a truly revolutionary WDML!

Some history first. Several years ago, an international initiative was started to create a World Digital Mathematical Library. The website for this library, hosted by the IMU, is now mostly a “ghost” website — nothing has been posted there for the last seven years. [It does provide useful links, however, to many sites that continue to be updated, such as the European Mathematical Information Service, which in turn links to many interesting journals, books and other websites featuring electronically available mathematical publications. So it is still worth exploring …] Many of the efforts towards building (parts of) the WDML as originally envisaged have had to grapple with business interests, copyright agreements, search obstructions, metadata secrecy, … and many an enterprising, idealistic effort has been slowly ground down by this. We are still dealing with these frustrations — as witnessed by, e.g., the CostofKnowledge initiative. They are real, important issues, and will need to be addressed.

The charge of the NRC committee, however, is NOT to focus on issues of copyright or open-access or who bears the cost of publishing, but instead on what could/can be done with documents that are (or once they are) freely electronically accessible, apart from simply finding and downloading them. Earlier this year, I posted a question about one possible use on MathOverflow and then on MathForge, about the possibility of “enriching” a paper with annotations from readers, which other readers could choose to consult (or not). These posts elicited some very useful comments. But this was only one way in which a WDML could be more than just an opportunity to find and download papers. Surely there are many more that you, bloggers and blog-readers, can imagine, suggest, sketch. This is an opportunity: can we — no, YOU! — formulate an ambitious setup that would capture the imagination of sufficiently many of us, that would be workable and that would really make a difference?

Things are pretty quiet here during the holiday season, but one small thing I have been working on recently is a set of notes on special relativity that I will be working through in a few weeks with some bright high school students here at our local math circle. I have only two hours to spend with this group, and it is unlikely that we will reach the end of the notes (in which I derive the famous mass-energy equivalence relation E=mc^2, largely following Einstein’s original derivation as discussed in this previous blog post); instead we will probably spend a fair chunk of time on related topics which do not actually require special relativity *per se*, such as spacetime diagrams, the Doppler shift effect, and an analysis of my airport puzzle. This will be my first time doing something of this sort (in which I will be spending as much time interacting directly with the students as I would lecturing); I’m not sure exactly how it will play out, being a little outside of my usual comfort zone of undergraduate and graduate teaching, but am looking forward to finding out how it goes. (In particular, it may end up that the discussion deviates somewhat from my prepared notes.)

The material covered in my notes is certainly not new, but I ultimately decided that it was worth putting up here in case some readers here had any corrections or other feedback to contribute (which, as always, would be greatly appreciated).

*[Dec 24 and then Jan 21: notes updated, in response to comments.]*

Lars Hörmander, who made fundamental contributions to all areas of partial differential equations, but particularly in developing the analysis of variable-coefficient linear PDE, died last Sunday, aged 81.

I unfortunately never met Hörmander personally, but of course I encountered his work all the time while working in PDE. One of his major contributions to the subject was to systematically develop the calculus of Fourier integral operators (FIOs), which are a substantial generalisation of pseudodifferential operators and which can be used to (approximately) solve linear partial differential equations, or to transform such equations into a more convenient form. Roughly speaking, Fourier integral operators are to linear PDE as canonical transformations are to Hamiltonian mechanics (and one can in fact view FIOs as a quantisation of a canonical transformation). They are a large class of transformations, for instance the Fourier transform, pseudodifferential operators, and smooth changes of the spatial variable are all examples of FIOs, and (as long as certain singular situations are avoided) the composition of two FIOs is again an FIO.

The full theory of FIOs is quite extensive, occupying the entire final volume of Hörmander’s famous four-volume series “The Analysis of Linear Partial Differential Operators”. I am certainly not going to attempt to summarise it here, but I thought I would try to motivate how these operators arise when trying to transform functions. For simplicity we will work with functions on a Euclidean domain (although FIOs can certainly be defined on more general smooth manifolds, and there is an extension of the theory that also works on manifolds with boundary). As this will be a heuristic discussion, we will ignore all the (technical, but important) issues of smoothness or convergence with regards to the functions, integrals and limits that appear below, and be rather vague with terms such as “decaying” or “concentrated”.

A function can be viewed from many different perspectives (reflecting the variety of bases, or approximate bases, that the Hilbert space offers). Most directly, we have the *physical space perspective*, viewing as a function of the physical variable . In many cases, this function will be concentrated in some subregion of physical space. For instance, a gaussian wave packet

where , and are parameters, would be physically concentrated in the ball . Then we have the *frequency space (or momentum space) perspective*, viewing now as a function of the frequency variable . For this discussion, it will be convenient to normalise the Fourier transform using a small constant (which has the physical interpretation of Planck’s constant if one is doing quantum mechanics), thus
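In one standard semiclassical normalisation (the display here is a reconstruction on my part, and the constants may differ from the original convention):

```latex
\hat f(\xi) \;=\; \frac{1}{(2\pi \hbar)^{n/2}} \int_{\mathbf{R}^n} e^{-i x \cdot \xi / \hbar}\, f(x)\, dx.
```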

For instance, for the gaussian wave packet (1), one has

and so we see that is concentrated in frequency space in the ball .
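For concreteness, one standard shape for such a wave packet (with amplitude $A$, spatial centre $x_0$, and frequency centre $\xi_0$; this is a sketch rather than the post’s exact constants) is

```latex
f(x) \;=\; A\, e^{i \xi_0 \cdot x/\hbar}\, e^{-|x - x_0|^2/\hbar},
```

which is concentrated in the ball $B(x_0, \sqrt{\hbar})$ in physical space and, by a gaussian Fourier computation, in the ball $B(\xi_0, \sqrt{\hbar})$ in frequency space.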

However, there is a third (but less rigorous) way to view a function in , which is the *phase space perspective* in which one tries to view as distributed simultaneously in physical space and in frequency space, thus being something like a measure on the phase space . Thus, for instance, the function (1) should heuristically be concentrated on the region in phase space. Unfortunately, due to the uncertainty principle, there is no completely satisfactory way to canonically and rigorously define what the “phase space portrait” of a function should be. (For instance, the Wigner transform of can be viewed as an attempt to describe the distribution of the energy of in phase space, except that this transform can take negative or even complex values; see Folland’s book for further discussion.) Still, it is a very useful heuristic to think of functions as having a phase space portrait, which is something like a non-negative measure on phase space that captures the distribution of functions in both space and frequency, albeit with some “quantum fuzziness” that shows up whenever one tries to inspect this measure at scales of physical space and frequency space that together violate the uncertainty principle. (The score of a piece of music is a good everyday example of a phase space portrait of a function, in this case a sound wave; here, the physical space is the time axis (the horizontal dimension of the score) and the frequency space is the vertical dimension. Here, the time and frequency scales involved are well above the uncertainty principle limit (a typical note lasts many hundreds of cycles, whereas the uncertainty principle kicks in at cycles) and so there is no obstruction here to musical notation being unambiguous.)
Furthermore, if one takes certain asymptotic limits, one can recover a precise notion of a phase space portrait; for instance if one takes the *semiclassical limit* then, under certain circumstances, the phase space portrait converges to a well-defined classical probability measure on phase space; closely related to this is the *high frequency limit* of a fixed function, which among other things defines the wave front set of that function, which can be viewed as another asymptotic realisation of the phase space portrait concept.

If functions in can be viewed as a sort of distribution in phase space, then linear operators should be viewed as various transformations on such distributions on phase space. For instance, a pseudodifferential operator should correspond (as a zeroth approximation) to multiplying a phase space distribution by the symbol of that operator, as discussed in this previous blog post. Note that such operators only change the amplitude of the phase space distribution, but not the support of that distribution.

Now we turn to operators that alter the support of a phase space distribution, rather than the amplitude; we will focus on unitary operators to emphasise the amplitude preservation aspect. These will eventually be key examples of Fourier integral operators. A physical translation should correspond to pushing forward the distribution by the transformation , as can be seen by comparing the physical and frequency space supports of with that of . Similarly, a frequency modulation should correspond to the transformation ; a linear change of variables , where is an invertible linear transformation, should correspond to ; and finally, the Fourier transform should correspond to the transformation .
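These correspondences can be checked numerically on a discretised line; the following sketch (grid sizes and packet parameters are illustrative choices of mine, not taken from the text) verifies that translating a gaussian wave packet moves its physical-space centroid but not its frequency-space centroid, while modulating it does the opposite:

```python
import numpy as np

# Discretised 1-D demonstration: translation acts on the physical-space
# support of a wave packet, modulation on its frequency-space support.
N = 1024
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # angular frequency grid

def centroid(values, grid):
    """Centre of mass of |values|^2 with respect to the given grid."""
    w = np.abs(values) ** 2
    return float(np.sum(grid * w) / np.sum(w))

# Gaussian wave packet concentrated near x = 0 in space, xi = 3 in frequency
f = np.exp(-x ** 2) * np.exp(1j * 3 * x)

# Translation by x1 = 2: heuristically (x, xi) -> (x + 2, xi)
f_translated = np.exp(-(x - 2) ** 2) * np.exp(1j * 3 * (x - 2))

# Modulation by xi1 = 4: heuristically (x, xi) -> (x, xi + 4)
f_modulated = f * np.exp(1j * 4 * x)

print("space centroids:", centroid(f, x), centroid(f_translated, x),
      centroid(f_modulated, x))
print("freq centroids:", centroid(np.fft.fft(f), xi),
      centroid(np.fft.fft(f_translated), xi),
      centroid(np.fft.fft(f_modulated), xi))
```

A linear change of variables or a discrete Fourier transform could be tested the same way, with the centroids trading places in the latter case.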

Based on these examples, one may hope that given any diffeomorphism of phase space, one could associate some sort of unitary (or approximately unitary) operator , which (heuristically, at least) pushes the phase space portrait of a function forward by . However, there is an obstruction to doing so, which can be explained as follows. If pushes phase space portraits by , and pseudodifferential operators multiply phase space portraits by , then this suggests the intertwining relationship

and thus is approximately conjugate to :

The formalisation of this fact in the theory of Fourier integral operators is known as Egorov’s theorem, due to Yu Egorov (and not to be confused with the more widely known theorem of Dmitri Egorov in measure theory).

Applying commutators, we conclude the approximate conjugacy relationship

Now, the pseudodifferential calculus (as discussed in this previous post) tells us (heuristically, at least) that

and

where is the Poisson bracket. Comparing this with (2), we are then led to the compatibility condition

thus needs to preserve (approximately, at least) the Poisson bracket, or equivalently needs to be a symplectomorphism (again, approximately at least).
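Schematically, writing $a(X, \hbar D)$ for the pseudodifferential operator with symbol $a$ (all identities holding only to top order, and with normalisations that are my own), the chain of reasoning above reads

```latex
U^{-1}\, a(X,\hbar D)\, U \;\approx\; (a \circ \Phi)(X,\hbar D)
\qquad \text{(Egorov, schematically)},
\qquad
[a(X,\hbar D),\, b(X,\hbar D)] \;\approx\; \frac{\hbar}{i}\, \{a,b\}(X,\hbar D).
```

Since conjugation by $U$ preserves commutators, comparing the two displays forces $\{a \circ \Phi, b \circ \Phi\} \approx \{a,b\} \circ \Phi$, which is exactly the statement that $\Phi$ (approximately) preserves the Poisson bracket.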

Now suppose that is a symplectomorphism. This is morally equivalent to its graph being a Lagrangian submanifold of the product of the two copies of phase space (where we give the second copy the negative of the usual symplectic form, so that the product carries the difference of the two symplectic forms; this is another instantiation of the closed graph theorem, as mentioned in this previous post). This graph is known as the *canonical relation* for the (putative) FIO associated to this symplectomorphism. To understand what it means for this graph to be Lagrangian, we coordinatise the product of the two phase spaces, and suppose temporarily that this graph was (locally, at least) a smooth graph over the position variables, thus

for some smooth functions . A brief computation shows that the Lagrangian property of is then equivalent to the compatibility conditions

for , where denote the components of . Some Fourier analysis (or Hodge theory) lets us solve these equations as

for some smooth potential function . Thus, we have parameterised our graph as

A reasonable candidate for an operator associated to and in this fashion is the oscillatory integral operator

for some smooth amplitude function (note that the Fourier transform is the special case when and , which helps explain the genesis of the term “Fourier integral operator”). Indeed, if one computes an inner product for gaussian wave packets of the form (1) and localised in phase space near respectively, then a Taylor expansion of around , followed by a stationary phase computation, shows (again heuristically, and assuming is suitably non-degenerate) that has (3) as its canonical relation. (Furthermore, a refinement of this stationary phase calculation suggests that if is normalised to be the *half-density* , then should be approximately unitary.) As such, we view (4) as an example of a Fourier integral operator (assuming various smoothness and non-degeneracy hypotheses on the phase and amplitude which we do not detail here).
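In this notation, the likely shapes of the canonical relation (3) and the operator (4) are as follows (a reconstruction on my part; the signs and normalising constants may differ from the original displays):

```latex
\Lambda \;=\; \{\, (x, \nabla_x \phi(x,y);\; y, -\nabla_y \phi(x,y)) \,\},
\qquad
T f(x) \;=\; \int_{\mathbf{R}^n} a(x,y)\, e^{i \phi(x,y)/\hbar}\, f(y)\, dy,
```

with the Fourier transform recovered (up to normalisation) by taking $\phi(x,y) = -x \cdot y$ and $a$ constant.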

Of course, it may be the case that is not a graph in the coordinates (for instance, the key examples of translation, modulation, and dilation are not of this form), but then it is often a graph in some other pair of coordinates, such as . In that case one can compose the oscillatory integral construction given above with a Fourier transform, giving another class of FIOs of the form

This class of FIOs covers many important cases; for instance, the translation, modulation, and dilation operators considered earlier can be written in this form after some Fourier analysis. Another typical example is the half-wave propagator for some time , which can be written in the form
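One standard way to write this propagator (a sketch in the $\hbar$-normalised conventions used above; the constants, and the sign of the resulting drift, depend on the Fourier convention chosen) is

```latex
e^{i t \sqrt{-\Delta}} f(x) \;=\; \frac{1}{(2\pi \hbar)^{n/2}}
\int_{\mathbf{R}^n} e^{i (x \cdot \xi + t |\xi|)/\hbar}\, \hat f(\xi)\, d\xi,
```

so that the phase $x \cdot \xi + t|\xi|$ generates (in this convention) the phase space map $(x,\xi) \mapsto (x - t \xi/|\xi|, \xi)$: unit-speed transport along the direction determined by the frequency.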

This corresponds to the phase space transformation , which can be viewed as the classical propagator associated to the “quantum” propagator . More generally, propagators for linear Hamiltonian partial differential equations can often be expressed (at least approximately) by Fourier integral operators corresponding to the propagator of the associated *classical* Hamiltonian flow associated to the symbol of the Hamiltonian operator ; this leads to an important mathematical formalisation of the correspondence principle between quantum mechanics and classical mechanics, which is one of the foundations of microlocal analysis and which was extensively developed in Hörmander’s work. (More recently, numerically stable versions of this theory have been developed to allow for rapid and accurate numerical solutions to various linear PDE, for instance through Emmanuel Candès’ theory of curvelets, so the theory that Hörmander built now has some quite significant practical applications in areas such as geology.)

In some cases, the canonical relation may have some singularities (such as fold singularities) which prevent it from being written as graphs in the previous senses, but the theory for defining FIOs even in these cases, and in developing their calculus, is now well established, in large part due to the foundational work of Hörmander.

I recently finished the first draft of the last of my books based on my 2011 blog posts (and also my Google buzzes and Google+ posts from that year), entitled “Spending symmetry“. The PDF of this draft is available here. This is again a rather assorted (and lightly edited) collection of posts (and buzzes, and Google+ posts), though concentrating in the areas of analysis (both standard and nonstandard), logic, and geometry. As always, comments and corrections are welcome.

*[Once again, some advertising on behalf of my department, following on a similar announcement in the previous three years.]*

*The program of study leads to a Masters degree in Mathematics in four years.*

Garth Gaudry, who made many contributions to harmonic analysis and to Australian mathematics, and was also both my undergraduate and masters advisor as well as the head of school during one of my first academic jobs, died yesterday after a long battle with cancer, aged 71.

Garth worked on the interface between real-variable harmonic analysis and abstract harmonic analysis (which, despite their names, are actually two distinct fields, though certainly related to each other). He was one of the first to realise the central importance of Littlewood-Paley theory as a general foundation for both abstract and real-variable harmonic analysis, writing an influential text with Robert Edwards on the topic. He also made contributions to Clifford analysis, which was also the topic of my masters thesis.

But, amongst Australian mathematicians at least, Garth will be remembered for his tireless service to the field, most notably for his pivotal role in founding the Australian Mathematical Sciences Institute (AMSI) and then serving as AMSI’s first director, and then in directing the International Centre of Excellence for Education in Mathematics (ICE-EM), the educational arm of AMSI which, among other things, developed a full suite of maths textbooks and related educational materials covering Years 5-10 (which I reviewed here back in 2008).

I knew Garth ever since I was an undergraduate at Flinders University. He was head of school then (a position roughly equivalent to department chair in the US), but still was able to spare an hour a week to meet with me to discuss real analysis, as I worked my way through Rudin’s “Real and complex analysis” and then Stein’s “Singular integrals”, and then eventually completed a masters thesis under his supervision on Clifford-valued singular integrals. When Princeton accepted my application for graduate study, he convinced me to take the opportunity without hesitation. Without Garth, I certainly wouldn’t be where I am today, and I will always be very grateful for his advisorship. He was a good person, and he will be missed very much by me and by many others.

Bill Thurston, who made fundamental contributions to our understanding of low-dimensional manifolds and related structures, died on Tuesday, aged 65.

Perhaps Thurston’s best known achievement is the proof of the hyperbolisation theorem for Haken manifolds, which showed that 3-manifolds which obeyed a certain number of topological conditions could always be given a hyperbolic geometry (i.e. a Riemannian metric that made the manifold isometric to a quotient of the hyperbolic 3-space). This difficult theorem connecting the topological and geometric structure of 3-manifolds led Thurston to give his influential geometrisation conjecture, which (in principle, at least) completely classifies the topology of an arbitrary compact 3-manifold as a combination of eight model geometries (now known as *Thurston model geometries*). This conjecture has many consequences, including Thurston’s hyperbolisation theorem and (most famously) the Poincaré conjecture. Indeed, by placing that conjecture in the context of a conceptually appealing general framework, of which many other cases could already be verified, Thurston provided one of the strongest pieces of evidence towards the truth of the Poincaré conjecture, until the work of Grisha Perelman in 2002-2003 proved both the Poincaré conjecture and the geometrisation conjecture by developing Hamilton’s Ricci flow methods. (There are now several variants of Perelman’s proof of both conjectures; in the proof of geometrisation by Bessieres, Besson, Boileau, Maillot, and Porti, Thurston’s hyperbolisation theorem is a crucial ingredient, allowing one to bypass the need for the theory of Alexandrov spaces in a key step in Perelman’s argument.)

One of my favourite results of Thurston’s is his elegant method for everting the sphere (smoothly turning a sphere inside out without any folds or singularities). The fact that sphere eversion can be achieved at all is highly unintuitive, and is often referred to as Smale’s paradox, as Stephen Smale was the first to give a proof that such an eversion exists. However, prior to Thurston’s method, the known constructions for sphere eversion were quite complicated. Thurston’s method, relying on corrugating and then twisting the sphere, is sufficiently conceptual and geometric that it can in fact be explained quite effectively in non-technical terms, as was done in the following excellent video entitled “Outside In“, and produced by the Geometry Center:

In addition to his direct mathematical research contributions, Thurston was also an amazing mathematical expositor, having the rare knack of being able to describe the *process* of mathematical thinking in addition to the *results* of that process and the *intuition* underlying it. His wonderful essay “On proof and progress in mathematics“, which I highly recommend, is the quintessential instance of this; more recent examples include his many insightful questions and answers on MathOverflow.

I unfortunately never had the opportunity to meet Thurston in person (although we did correspond a few times online), but I know many mathematicians who have been profoundly influenced by him and his work. His death is a great loss for mathematics.
