Last week, we had Peter Scholze give an interesting distinguished lecture series here at UCLA on “Prismatic Cohomology”, which is a new type of cohomology theory worked out by Scholze and Bhargav Bhatt. (Video of the talks will be available shortly; for now we have some notes taken by two note-takers in the audience on that web page.) My understanding of this (speaking as someone who is rather far removed from this area) is that it is progress towards the “motivic” dream of being able to define cohomology for varieties (or similar objects) defined over an arbitrary commutative ring, and with coefficients in another arbitrary commutative ring. Currently, we have various flavours of cohomology that only work for certain types of domain rings and coefficient rings:
 Singular cohomology, which roughly speaking works when the domain ring is a characteristic zero field such as $\mathbb{R}$ or $\mathbb{C}$, but can allow for arbitrary coefficient rings;
 de Rham cohomology, which roughly speaking works as long as the coefficient ring is the same as the domain ring (or a homomorphic image thereof), as one can only talk about differential forms taking values in a ring if the underlying space is also defined over that ring;
 $\ell$-adic cohomology, which is a remarkably powerful application of étale cohomology, but only works well when the coefficient ring is localised around a prime $\ell$ that is different from the characteristic $p$ of the domain ring; and
 Crystalline cohomology, in which the domain ring is a field $k$ of some finite characteristic $p$, but the coefficient ring can be a slight deformation of $k$, such as the ring $W(k)$ of Witt vectors of $k$.
There are various relationships between these cohomology theories; for instance, de Rham cohomology coincides with singular cohomology for smooth varieties in the limiting case of characteristic zero. The following picture Scholze drew in his first lecture captures these sorts of relationships nicely:
The new prismatic cohomology of Bhatt and Scholze unifies many of these cohomologies in the “neighbourhood” of the characteristic $p$ point in the above diagram, in which the domain ring and the coefficient ring are both thought of as being “close to characteristic $p$” in some sense, so that the $p$-dilates of these rings are either zero, or “small”. For instance, the $p$-adic ring $\mathbb{Z}_p$ is technically of characteristic zero, but $p\mathbb{Z}_p$ is a “small” ideal of $\mathbb{Z}_p$ (it consists of those elements of $\mathbb{Z}_p$ of $p$-adic norm at most $1/p$), so one can think of $\mathbb{Z}_p$ as being “close to characteristic $p$” in some sense. Scholze drew a “zoomed in” version of the previous diagram to informally describe the types of rings for which prismatic cohomology is effective:
To define prismatic cohomology rings one needs a “prism”: roughly speaking, a ring homomorphism between two rings, equipped with a “Frobenius-like” endomorphism obeying some axioms. By tuning these homomorphisms one can recover existing cohomology theories like crystalline or de Rham cohomology as special cases of prismatic cohomology. These specialisations are analogous to how a prism splits white light into various individual colours, giving rise to the terminology “prismatic”, and are depicted by this further diagram of Scholze:
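For readers who want the precise object, here is the definition of a prism as I understand it from the Bhatt–Scholze work; this is a paraphrase from memory rather than the lecture's exact formulation, so consult the original paper for the details:

```latex
% Paraphrased definition (assumption: this matches the Bhatt--Scholze
% conventions only up to minor details).
% A prism is a pair (A, I) where:
%   - A is a commutative ring with a delta-structure, i.e. (for
%     p-torsion-free A) a "Frobenius lift" phi: A -> A satisfying
%     phi(x) = x^p (mod p);
%   - I is an ideal of A defining a Cartier divisor on Spec(A);
%   - A is (derived) (p, I)-adically complete;
%   - p lies in the ideal I + phi(I)A.
\[
  (A, I), \qquad \phi\colon A \to A, \qquad \phi(x) \equiv x^p \pmod p,
  \qquad p \in I + \phi(I)A.
\]
```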
(And yes, Peter confirmed that he and Bhargav were inspired by the Dark Side of the Moon album cover in selecting the terminology.)
There was an abstract definition of prismatic cohomology (as being the essentially unique cohomology arising from prisms that obeyed certain natural axioms), but there was also a more concrete way to view them in terms of coordinates, as a “deformation” of de Rham cohomology. Whereas in de Rham cohomology one worked with derivative operators $\frac{d}{dx}$ that for instance applied to monomials $x^n$ by the usual formula
$$ \frac{d}{dx} x^n = n x^{n-1},$$
prismatic cohomology in coordinates can be computed using a “$q$-derivative” operator that for instance applies to monomials by the formula
$$ \frac{d_q}{d_q x} x^n = [n]_q x^{n-1},$$
where
$$ [n]_q = \frac{q^n - 1}{q - 1} = 1 + q + \dots + q^{n-1}$$
is the “$q$-analogue” of $n$ (a polynomial in $q$ that equals $n$ in the limit $q \to 1$). (The $q$-analogues become more complicated for more general forms than these.) In this more concrete setting, the fact that prismatic cohomology is independent of the choice of coordinates apparently becomes quite a nontrivial theorem.
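As a quick sanity check of the $q$-analogue (my own illustration, not part of the lecture), the following snippet computes $[n]_q$ as the sum $1+q+\dots+q^{n-1}$, verifies the closed form $(q^n-1)/(q-1)$, and confirms numerically that $[n]_q \to n$ as $q \to 1$:

```python
from fractions import Fraction

def q_analogue(n: int, q: Fraction) -> Fraction:
    """The q-analogue [n]_q = 1 + q + ... + q^(n-1) of the integer n."""
    return sum(q**k for k in range(n))

n = 5
q = Fraction(2, 3)
# The closed form (q^n - 1)/(q - 1) agrees with the sum when q != 1.
assert q_analogue(n, q) == (q**n - 1) / (q - 1)

# As q -> 1, [n]_q -> n, recovering the usual derivative of x^n.
q_close = Fraction(10**6 + 1, 10**6)  # q = 1.000001
assert abs(q_analogue(n, q_close) - n) < Fraction(1, 10**4)
print(f"[{n}]_q at q = {float(q_close)}: {float(q_analogue(n, q_close))}")
```

(Using `Fraction` keeps the arithmetic exact, so the closed-form identity can be checked with equality rather than floating-point tolerance.)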
Now that Google Plus is closing, the brief announcements that I used to post over there will now be migrated over to this blog. (Some people have suggested other platforms for this also, such as Twitter, but I think for now I can use my existing blog to accommodate these sorts of short posts.)
 The NSF-CBMS regional research conferences are now requesting proposals for the 2020 conference series. (I was the principal lecturer for one of these conferences back in 2005; it was a very intensive experience, but quite enjoyable, and I am quite pleased with the book that resulted from it.)
 The awardees for the Sloan Fellowships for 2019 have now been announced. (I was on the committee for the mathematics awards. For the usual reasons involving the confidentiality of letters of reference and other sensitive information, I will unfortunately be unable to answer any specific questions about our committee deliberations.)
I have just uploaded to the arXiv my paper “On the universality of the incompressible Euler equation on compact manifolds, II. Non-rigidity of Euler flows”, submitted to Pure and Applied Functional Analysis. This paper continues my attempts to establish “universality” properties of the Euler equations on Riemannian manifolds $(M,g)$, as I conjecture that the freedom to set the metric $g$ ought to allow one to “program” such Euler flows to exhibit a wide range of behaviour, and in particular to achieve finite time blowup (if the dimension is sufficiently large, at least).
In coordinates, the Euler equations read
$$ \partial_t u^k + u^j \nabla_j u^k = -\nabla^k p \qquad (1)$$
$$ \nabla_k u^k = 0,$$
where $p$ is the pressure field and $u$ is the velocity field, and $\nabla$ denotes the Levi-Civita connection, with the usual Penrose abstract index notation conventions; we restrict attention here to the case where $u, p$ are smooth and $M$ is compact, smooth, orientable, connected, and without boundary. Let’s call $u$ an Euler flow on $M$ (for the time interval $[0,T]$) if it solves the above system of equations for some pressure $p$, and an incompressible flow if it just obeys the divergence-free relation $\nabla_k u^k = 0$. Thus every Euler flow is an incompressible flow, but the converse is certainly not true; for instance, the various conservation laws of the Euler equation, such as conservation of energy, will already block most incompressible flows from being an Euler flow, or even being approximated in a reasonably strong topology by such Euler flows.
However, one can ask if an incompressible flow can be extended to an Euler flow by adding some additional dimensions to $M$. In my paper, I formalise this by considering warped products $\tilde M$ of $M$, which (as a smooth manifold) are products $M \times (\mathbb{R}/\mathbb{Z})^m$ of $M$ with a torus, with a metric $\tilde g$ given by
$$ d\tilde g^2 = dg^2 + \sum_{i=1}^m \tilde g_{ii}(x)\, d\theta_i^2$$
for $x \in M$, where $\theta_1,\dots,\theta_m$ are the coordinates of the torus $(\mathbb{R}/\mathbb{Z})^m$, and $\tilde g_{ii}: M \to \mathbb{R}^+$ are smooth positive coefficients for $i = 1,\dots,m$; in order to preserve the incompressibility condition, we also require the volume preservation property
$$ \prod_{i=1}^m \tilde g_{ii} = 1, \qquad (2)$$
though in practice we can quickly dispose of this condition by adding one further “dummy” dimension to the torus. We say that an incompressible flow $u$ is extendible to an Euler flow if there exists a warped product $\tilde M$ extending $M$, and an Euler flow $\tilde u$ on $\tilde M$ that agrees with $u$ on the base $M$ and additionally moves in the torus directions with some “swirl” fields $v^i: [0,T] \times M \to \mathbb{R}$. The situation here is motivated by the familiar situation of studying axisymmetric Euler flows on $\mathbb{R}^3$, which in cylindrical coordinates take the form
$$ u = u^r \partial_r + u^z \partial_z + u^\theta \partial_\theta.$$
The base component
$$ u^r \partial_r + u^z \partial_z$$
of this flow is then a flow on the two-dimensional $r,z$ plane which is not quite incompressible (due to the failure of the volume preservation condition (2) in this case), but still satisfies a system of equations (coupled with a passive scalar field that is basically the square of the swirl $u^\theta$) that is reminiscent of the Boussinesq equations.
On a fixed $d$-dimensional manifold $(M,g)$, let $\mathcal{F}$ denote the space of incompressible flows $u$, equipped with the smooth topology (in spacetime), and let $\mathcal{E} \subset \mathcal{F}$ denote the space of such flows that are extendible to Euler flows. Our main theorem is
Theorem 1
 (i) (Generic inextendibility) Assume $d \geq 3$. Then $\mathcal{E}$ is of the first category in $\mathcal{F}$ (the countable union of nowhere dense sets in $\mathcal{F}$).
 (ii) (Non-rigidity) Assume $M$ is a torus (with an arbitrary metric $g$). Then $\mathcal{E}$ is somewhere dense in $\mathcal{F}$ (that is, the closure of $\mathcal{E}$ has nonempty interior).
More informally, starting with an incompressible flow $u$, one usually cannot extend it to an Euler flow just by extending the manifold, warping the metric, and adding swirl coefficients, even if one is allowed to select the dimension of the extension, as well as the metric and coefficients, arbitrarily. However, many such flows can be perturbed to be extendible in such a manner (though different perturbations will require different extensions; in particular, the dimension of the extension will not be fixed). Among other things, this means that conservation laws such as energy (or momentum, helicity, or circulation) no longer present an obstruction when one is allowed to perform an extension (basically because the swirl components of the extension can exchange energy (or momentum, etc.) with the base components in an essentially arbitrary fashion).
These results fall short of my hopes to use the ability to extend the manifold to create universal behaviour in Euler flows, because of the fact that each flow requires a different extension in order to achieve the desired dynamics. Still it does seem to provide a little bit of support to the idea that highdimensional Euler flows are quite “flexible” in their behaviour, though not completely so due to the generic inextendibility phenomenon. This flexibility reminds me a little bit of the flexibility of weak solutions to equations such as the Euler equations provided by the “principle” of Gromov and its variants (as discussed in these recent notes), although in this case the flexibility comes from adding additional dimensions, rather than by repeatedly adding highfrequency corrections to the solution.
The proof of part (i) of the theorem basically proceeds by a dimension counting argument (similar to that in the proof of Proposition 9 of these recent lecture notes of mine). Heuristically, the point is that an arbitrary incompressible flow $u$ is essentially determined by $d-1$ independent functions of space and time, whereas the warping factors $\tilde g_{ii}$ are functions of space only, the pressure field is one function of space and time, and the swirl fields $v^i$ are technically functions of both space and time, but have the same number of degrees of freedom as a function just of space, because they solve an evolution equation. When $d \geq 3$, this means that there are fewer unknown functions of space and time than prescribed functions of space and time, which is the source of the generic inextendibility. This simple argument breaks down when $d = 2$, but we do not know whether the claim is actually false in this case.
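The heuristic tally can be made concrete with a toy bookkeeping snippet; this is my own illustration of the count sketched above (assuming the flow contributes $d-1$ prescribed spacetime functions against a single genuinely new spacetime unknown, the pressure), not the paper's precise argument:

```python
# Toy version of the dimension-counting heuristic (illustrative only).
def spacetime_function_deficit(d: int) -> int:
    """Prescribed minus unknown functions of space-and-time, heuristically."""
    prescribed = d - 1  # an incompressible flow: d components, 1 divergence constraint
    unknowns = 1        # only the pressure is a genuinely new spacetime function;
                        # warping factors and swirl fields carry only "spatial"
                        # degrees of freedom
    return prescribed - unknowns

for d in (2, 3, 4, 5):
    # A positive deficit signals generic inextendibility; the count
    # balances exactly at d = 2, where the heuristic breaks down.
    print(d, spacetime_function_deficit(d))
```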
The proof of part (ii) proceeds by direct calculation of the effect of the warping factors and swirl velocities, which effectively create a forcing term (of Boussinesq type) in the first equation of (1) that is a combination of functions of the Eulerian spatial coordinates (coming from the warping factors) and the Lagrangian spatial coordinates (which arise from the swirl velocities, which are passively transported by the flow). In a nonempty open subset of $\mathcal{F}$, the combination of these coordinates becomes a nondegenerate set of coordinates for spacetime, and one can then use the Stone-Weierstrass theorem to conclude. The requirement that $M$ be topologically a torus is a technical hypothesis in order to avoid topological obstructions such as the hairy ball theorem, but it may be that this hypothesis can be dropped (and it may in fact be true, in this case at least, that $\mathcal{E}$ is dense in all of $\mathcal{F}$, not just in a nonempty open subset).
Just a quick post to advertise two upcoming events sponsored by institutions I am affiliated with:
 The 2019 National Math Festival will be held in Washington D.C. on May 4 (together with some satellite events at other US cities). This festival will have numerous games, events, films, and other activities, which are all free and open to the public. (I am on the board of trustees of MSRI, which is one of the sponsors of the festival.)
 The Institute for Pure and Applied Mathematics (IPAM) is now accepting applications for its second Industrial Short Course for May 16-17, 2019, with the topic of “Deep Learning and the Latest AI Algorithms”. (I serve on the Scientific Advisory Board of this institute.) This is an intensive course (in particular requiring active participation) aimed at industrial mathematicians, involving both the theory and practice of deep learning and neural networks, taught by Xavier Bresson. (Note: space is very limited, and there is also a registration fee of $2,000 for this course, which is expected to be in high demand.)
[This post is collectively authored by the ICM structure committee, whom I am currently chairing – T.]
The International Congress of Mathematicians (ICM) is widely considered to be the premier conference for mathematicians. It is held every four years; for instance, the 2018 ICM was held in Rio de Janeiro, Brazil, and the 2022 ICM is to be held in Saint Petersburg, Russia. The most highprofile event at the ICM is the awarding of the 10 or so prizes of the International Mathematical Union (IMU) such as the Fields Medal, and the lectures by the prize laureates; but there are also approximately twenty plenary lectures from leading experts across all mathematical disciplines, several public lectures of a less technical nature, about 180 more specialised invited lectures divided into about twenty section panels, each corresponding to a mathematical field (or range of fields), as well as various outreach and social activities, exhibits and satellite programs, and meetings of the IMU General Assembly; see for instance the program for the 2018 ICM for a sample schedule. In addition to these official events, the ICM also provides more informal networking opportunities, in particular allowing mathematicians at all stages of career, and from all backgrounds and nationalities, to interact with each other.
For each Congress, a Program Committee (together with subcommittees for each section) is entrusted with the task of selecting who will give the lectures of the ICM (excluding the lectures by prize laureates, which are selected by separate prize committees); they have also decided how to appropriately subdivide the entire field of mathematics into sections. Given the prestigious nature of invitations from the ICM to present a lecture, this has been an important and challenging task, but one which past Program Committees have managed to fulfill in a largely satisfactory fashion.
Nevertheless, in the last few years there has been substantial discussion regarding ways in which the process for structuring the ICM and inviting lecturers could be further improved, for instance to reflect the fact that the distribution of mathematics across various fields has evolved over time. At the 2018 ICM General Assembly meeting in Rio de Janeiro, a resolution was adopted to create a new Structure Committee to take on some of the responsibilities previously delegated to the Program Committee, focusing specifically on the structure of the scientific program. On the other hand, the Structure Committee is not involved with the format for prize lectures, the selection of prize laureates, or the selection of plenary and sectional lecturers; these tasks are instead the responsibilities of other committees (the local Organizing Committee, the prize committees, and the Program Committee respectively).
The first Structure Committee was constituted on 1 Jan 2019, with the following members:
 Terence Tao [Chair from 15 Feb, 2019]
 Carlos Kenig [IMU President (from 1 Jan 2019), ex officio]
 Nalini Anantharaman
 Alexei Borodin
 Annalisa Buffa
 Hélène Esnault [from 21 Mar, 2019]
 Irene Fonseca
 János Kollár [until 21 Mar, 2019]
 Laci Lovász [Chair until 15 Feb, 2019]
 Terry Lyons
 Stephane Mallat
 Hiraku Nakajima
 Éva Tardos
 Peter Teichner
 Akshay Venkatesh
 Anna Wienhard
As one of our first actions, we on the committee are using this blog post to solicit input from the mathematical community regarding the topics within our remit. Among the specific questions (in no particular order) for which we seek comments are the following:
 Are there suggestions to change the format of the ICM that would increase its value to the mathematical community?
 Are there suggestions to change the format of the ICM that would encourage greater participation and interest in attending, particularly with regards to junior researchers and mathematicians from developing countries?
 What is the correct balance between research and exposition in the lectures? For instance, how strongly should one emphasize the importance of good exposition when selecting plenary and sectional speakers? Should there be “Bourbaki style” expository talks presenting work not necessarily authored by the speaker?
 Is the balance between plenary talks, sectional talks, and public talks at an optimal level? There is only a finite amount of space in the calendar, so any increase in the number or length of one of these types of talks will come at the expense of another.
 The ICM is generally perceived to be more important to pure mathematics than to applied mathematics. In what ways can the ICM be made more relevant and attractive to applied mathematicians, or should one not try to do so?
 Are there structural barriers that cause certain areas or styles of mathematics (such as applied or interdisciplinary mathematics) or certain groups of mathematicians to be underrepresented at the ICM? What, if anything, can be done to mitigate these barriers?
Of course, we do not expect these complex and difficult questions to be resolved within this blog post, and debating these and other issues would likely be a major component of our internal committee discussions. Nevertheless, we would value constructive comments towards the above questions (or on other topics within the scope of our committee) to help inform these subsequent discussions. We therefore welcome and invite such commentary, either as responses to this blog post, or sent privately to one of the members of our committee. We would also be interested in having readers share their personal experiences at past congresses, and how it compares with other major conferences of this type. (But in order to keep the discussion focused and constructive, we request that comments here refrain from discussing topics that are out of the scope of this committee, such as suggesting specific potential speakers for the next congress, which is a task instead for the 2022 ICM Program Committee.)
While talking mathematics with a postdoc here at UCLA (March Boedihardjo) we came across the following matrix problem which we managed to solve, but the proof was cute and the process of discovering it was fun, so I thought I would present the problem here as a puzzle without revealing the solution for now.
The problem involves word maps on a matrix group, which for sake of discussion we will take to be the special orthogonal group $SO(3)$ of $3 \times 3$ real orthogonal matrices of determinant one (one of the smallest matrix groups that contains a copy of the free group, which incidentally is the key observation powering the Banach-Tarski paradox). Given any abstract word of two generators and their inverses (i.e., an element of the free group $F_2$), one can define the word map $w: SO(3) \times SO(3) \to SO(3)$ simply by substituting a pair of matrices in $SO(3)$ into these generators. For instance, the commutator word $g_1 g_2 g_1^{-1} g_2^{-1}$ gives the word map
$$ w(A, B) = A B A^{-1} B^{-1}$$
for $A, B \in SO(3)$. Because $SO(3)$ contains a copy of the free group, we see that the word map is nontrivial (not identically equal to the identity) if and only if the word itself is nontrivial.
Anyway, here is the problem:
Problem. Does there exist a sequence $w_1, w_2, \dots$ of nontrivial word maps that converges uniformly to the identity map?
To put it another way: given any $\varepsilon > 0$, does there exist a nontrivial word $w$ such that $\|w(A,B) - 1\| \leq \varepsilon$ for all $A, B \in SO(3)$, where $\|\cdot\|$ denotes (say) the operator norm, and $1$ denotes the identity matrix in $SO(3)$?
As I said, I don’t want to spoil the fun of working out this problem, so I will leave it as a challenge. Readers are welcome to share their thoughts, partial solutions, or full solutions in the comments below.
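For readers who want to experiment numerically before thinking about the proof, here is a small sketch (my own code, not from the post). It evaluates word maps at a classical pair of rotations by angle $\arccos(3/5)$ about perpendicular axes, a pair well known (e.g. from treatments of the Banach-Tarski paradox) to generate a free subgroup of $SO(3)$:

```python
import numpy as np

# Rotations by arccos(3/5) about the z- and x-axes; these are classically
# known to generate a free subgroup of SO(3).
A = np.array([[3, -4, 0], [4, 3, 0], [0, 0, 5]]) / 5.0  # rotation about z
B = np.array([[5, 0, 0], [0, 3, -4], [0, 4, 3]]) / 5.0  # rotation about x

def word_map(word: str, X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Evaluate a word (a string over 'a','b', with 'A','B' denoting
    inverses) at the pair (X, Y) in SO(3)."""
    letters = {"a": X, "A": X.T, "b": Y, "B": Y.T}  # inverse = transpose in SO(3)
    out = np.eye(3)
    for c in word:
        out = out @ letters[c]
    return out

# The commutator word a b a^-1 b^-1 is nontrivial, so its word map is
# not identically the identity; measure its distance from the identity
# at this particular pair (A, B) in operator norm.
W = word_map("abAB", A, B)
dist = np.linalg.norm(W - np.eye(3), 2)
print(f"||w(A,B) - 1|| = {dist:.3f}")
```

The puzzle, of course, asks about the supremum of this distance over all pairs $(A,B)$, not just one sample pair, so a numerical experiment of this kind can only suggest candidates, not settle the question.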
I have just learned that Jean Bourgain passed away last week in Belgium, aged 64, after a prolonged battle with cancer. He and Eli Stein were the two mathematicians who most influenced my early career; it is something of a shock to find out that they are now both gone, having died within a few days of each other.
Like Eli, Jean remained highly active mathematically, even after his cancer diagnosis. Here is a video profile of him by National Geographic, on the occasion of his 2017 Breakthrough Prize in Mathematics, doing a surprisingly good job of describing in lay terms the sort of mathematical work he did:
When I was a graduate student in Princeton, Tom Wolff came and gave a course on recent progress on the restriction and Kakeya conjectures, starting from the breakthrough work of Jean Bourgain in a now famous 1991 paper in Geom. Funct. Anal. I struggled with that paper for many months; it was by far the most difficult paper I had to read as a graduate student, as Jean would focus on the most essential components of an argument, treating more secondary details (such as rigorously formalising the uncertainty principle) in very brief sentences. This image of my own annotated photocopy of this article may help convey some of the frustration I had when first going through it:
Eventually, though, and with the help of Eli Stein and Tom Wolff, I managed to decode the steps which had mystified me – and my impression of the paper reversed completely. I began to realise that Jean had a certain collection of tools, heuristics, and principles that he regarded as “basic”, such as dyadic decomposition and the uncertainty principle, and by working “modulo” these tools (that is, by regarding any step consisting solely of application of these tools as trivial), one could proceed much more rapidly and efficiently. By reading through Jean’s papers, I was able to add these tools to my own “basic” toolkit, which then became a fundamental starting point for much of my own research. Indeed, a large fraction of my early work could be summarised as “take one of Jean’s papers, understand the techniques used there, and try to improve upon the final results a bit”. In time, I started looking forward to reading the latest paper of Jean. I remember being particularly impressed by his 1999 JAMS paper on global solutions of the energy-critical nonlinear Schrodinger equation for spherically symmetric data. It’s hard to describe (especially in lay terms) the experience of reading through (and finally absorbing) the sections of this paper one by one; the best analogy I can come up with would be watching an expert video game player nimbly navigate his or her way through increasingly difficult levels of some video game, with the end of each level (or section) culminating in a fight with a huge “boss” that was eventually dispatched using an array of special weapons that the player happened to have at hand. (I would eventually end up spending two years with four other coauthors trying to remove that spherical symmetry assumption; we did finally succeed, but it was and still is one of the most difficult projects I have been involved in.)
While I was a graduate student at Princeton, Jean worked at the Institute for Advanced Study which was just a mile away. But I never actually had the courage to set up an appointment with him (which, back then, would be more likely done in person or by phone rather than by email). I remember once actually walking to the Institute and standing outside his office door, wondering if I dared knock on it to introduce myself. (In the end I lost my nerve and walked back to the University.)
I think eventually Tom Wolff introduced the two of us to each other during one of Jean’s visits to Tom at Caltech (though I had previously seen Jean give a number of lectures at various places). I had heard that in his younger years Jean had quite the competitive streak; however, when I met him, he was extremely generous with his ideas, and he had a way of condensing even the most difficult arguments to a few extremely information-dense sentences that captured the essence of the matter, which I invariably found to be particularly insightful (once I had finally managed to understand it). He still retained a certain amount of cocky self-confidence though. I remember posing to him (some time in early 2002, I think) a problem Tom Wolff had once shared with me about trying to prove what is now known as a sum-product estimate for subsets of a finite field of prime order, and telling him that Nets Katz and I would be able to use this estimate for several applications to Kakeya-type problems. His initial reaction was to say that this estimate should easily follow from a Fourier analytic method, and he promised me a proof the following morning. The next day he came up to me and admitted that the problem was more interesting than he had initially expected, and that he would continue to think about it. That was all I heard from him for several months; but one day I received a two-page fax from Jean with a beautiful handwritten proof of the sum-product estimate, which eventually became our joint paper with Nets on the subject (and the only paper I ended up writing with Jean). Sadly, the actual fax itself has been lost despite several attempts from various parties to retrieve a copy, but a LaTeX version of the fax, typed up by Jean’s tireless assistant Elly Gustafsson, can be seen here.
About three years ago, Jean was diagnosed with cancer and began a fairly aggressive treatment. Nevertheless he remained extraordinarily productive mathematically, authoring over thirty papers in the last three years, including such breakthrough results as his solution of the Vinogradov conjecture with Guth and Demeter, or his short note on the Schrodinger maximal function and his paper with Mirek, Stein, and Wróbel on dimension-free estimates for the Hardy-Littlewood maximal function, both of which made progress on problems that had been stuck for over a decade. In May of 2016 I helped organise, and then attended, a conference at the IAS celebrating Jean’s work and impact; by then Jean was not able to easily travel to attend, but he gave a superb special lecture, not announced on the original schedule, via videoconference that was certainly one of the highlights of the meeting. (UPDATE: a video of his talk is available here. Thanks to Brad Rodgers for the link.)
I last met Jean in person in November of 2016, at the award ceremony for his Breakthrough Prize, though we had some email and phone conversations after that date. Here he is with me and Richard Taylor at that event (demonstrating, among other things, that he wears a tuxedo much better than I do):
Jean was a truly remarkable person and mathematician. Certainly the world of analysis is poorer with his passing.
[UPDATE, Dec 31: Here is the initial IAS obituary notice for Jean.]
[UPDATE, Jan 3: See also this MathOverflow question “Jean Bourgain’s Relatively Lesser Known Significant Contributions”.]
This is the eleventh research thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
There are currently two strands of activity. One is writing up the paper describing the combination of theoretical and numerical results needed to obtain the new bound $\Lambda \leq 0.22$. The latest version of the writeup may be found here, in this directory. The theoretical side of things has mostly been written up; the main remaining tasks to do right now are
 giving a more detailed description and illustration of the two major numerical verifications, namely the barrier verification and the Dirichlet series bound, each of which establishes a zero-free region for $H_t$ in part of the required range; and
 giving more detail on the conditional results assuming more numerical verification of RH.
Meanwhile, several of us have been exploring the behaviour of the zeroes of $H_t$ for negative $t$; this does not directly lead to any new progress on bounding $\Lambda$ (though there is a good chance that it may simplify the proof of $\Lambda \leq 0.22$), but there have been some interesting numerical phenomena uncovered, as summarised in this set of slides. One phenomenon is that for large negative $t$, many of the complex zeroes begin to organise themselves near an explicit family of curves.
(An example of the agreement between the zeroes and these curves may be found here.) We now have a (heuristic) theoretical explanation for this: we should have an approximation for $H_t$ by a certain series in this region (where the terms of the series are defined in equations (11), (15), (17) of the writeup), and the above curves arise from (an approximation of) those locations where two adjacent terms in this series have equal magnitude (with the other terms being of lower order).
However, we only have a partial explanation at present of the interesting behaviour of the real zeroes at negative $t$; for instance, the surviving zeroes at extremely negative values of $t$ appear to lie on the curve where a certain explicit quantity is close to a half-integer. The remaining zeroes exhibit a pattern that is approximately 1-periodic in a suitably rescaled coordinate system.
A plot of the zeroes in these coordinates (somewhat truncated due to the numerical range) may be found here.
We do not yet have a complete explanation of the phenomena seen in this picture. It appears that we have an approximation for $H_t$ in this region, expressed as the product of a nonzero multiplier and a certain oscillating series; the derivation of this formula may be found in this wiki page. However, our initial attempts to simplify the above approximation further have proven to be somewhat inaccurate numerically (in particular giving an incorrect prediction for the location of zeroes, as seen in this picture). We are in the process of using numerics to try to resolve the discrepancies (see this page for some code and discussion).
I was deeply saddened to learn that Elias Stein died yesterday, aged 87.
I have talked about some of Eli’s older mathematical work in these blog posts. He continued to be quite active mathematically in recent years, for instance finishing six papers (with various coauthors including Jean Bourgain, Mariusz Mirek, Błażej Wróbel, and Pavel ZorinKranich) in just this year alone. I last met him at Wrocław, Poland last September for a conference in his honour; he was in good health (and good spirits) then. Here is a picture of Eli together with several of his students (including myself) who were at that meeting (taken from the conference web site):
Eli was an amazingly effective advisor; throughout my graduate studies I think he never had fewer than five graduate students, and there was often a line outside his door when he was meeting with students such as myself. (The Mathematics Genealogy Project lists 52 students of Eli, but if anything this is an underestimate.) My weekly meetings with Eli would tend to go something like this: I would report on all the many different things I had tried over the past week, without much success, to solve my current research problem; Eli would listen patiently to everything I said, concentrate for a moment, and then go over to his filing cabinet and fish out a preprint to hand to me, saying “I think the authors in this paper encountered similar problems and resolved it using Method X”. I would then go back to my office and read the preprint, and indeed they had faced something similar and I could often adapt the techniques there to resolve my immediate obstacles (only to encounter further ones for the next week, but that’s the way research tends to go, especially as a graduate student). Amongst other things, these meetings impressed upon me the value of mathematical experience, by being able to make more key progress on a problem in a handful of minutes than I was able to accomplish in a whole week. (There is a well known story about the famous engineer Charles Steinmetz fixing a broken piece of machinery by making a chalk mark; my meetings with Eli often had a similar feel to them.)
Eli’s lectures were always masterpieces of clarity. In one hour, he would set up a theorem, motivate it, explain the strategy, and execute it flawlessly; even after twenty years of teaching my own classes, I have yet to figure out his secret of somehow always being able to arrive at the natural finale of a mathematical presentation at the end of each hour without having to improvise at least a little bit halfway during the lecture. The clear and selfcontained nature of his lectures (and his many books) were a large reason why I decided to specialise as a graduate student in harmonic analysis (though I would eventually return to other interests, such as analytic number theory, many years after my graduate studies).
Looking back at my time with Eli, I now realise that he was extraordinarily patient and understanding with the brash and naive teenager he had to meet with every week. A key turning point in my own career came after my oral qualifying exams, in which I very nearly failed due to my overconfidence and lack of preparation, particularly in my chosen specialty of harmonic analysis. After the exam, he sat down with me and told me, as gently and diplomatically as possible, that my performance was a disappointment, and that I seriously needed to solidify my mathematical knowledge. This turned out to be exactly what I needed to hear; I got motivated to actually work properly so as not to disappoint my advisor again.
So many of us in the field of harmonic analysis were connected to Eli in one way or another; the field always felt to me like a large extended family, with Eli as one of the patriarchs. He will be greatly missed.
[UPDATE: Here is Princeton’s obituary for Elias Stein.]