Just a brief announcement that the AMS is now accepting (until June 30) nominations for the 2020 Joseph L. Doob Prize, which recognizes a single, relatively recent, outstanding research book that makes a seminal contribution to the research literature, reflects the highest standards of research exposition, and promises to have a deep and long-term impact in its area. The book must have been published within the six calendar years preceding the year in which it is nominated. Books may be nominated by members of the Society, by members of the selection committee, by members of AMS editorial committees, or by publishers. (I am currently on the committee for this prize.) A list of previous winners may be found here. The nomination procedure may be found at the bottom of this page.
Joni Teräväinen and I have just uploaded to the arXiv our paper "Value patterns of multiplicative functions and related sequences", submitted to Forum of Mathematics, Sigma. This paper explores how to use recent technology on correlations of multiplicative (or nearly multiplicative) functions, such as the "entropy decrement method", in conjunction with techniques from additive combinatorics, to establish new results on the sign patterns of functions such as the Liouville function $\lambda$. For instance, with regards to length 5 sign patterns

$(\lambda(n+1), \lambda(n+2), \lambda(n+3), \lambda(n+4), \lambda(n+5))$

of the Liouville function, we can now show that at least $24$ of the $32$ possible sign patterns in $\{-1,+1\}^5$ occur with positive upper density. (Conjecturally, all of them do so, and this is known for all shorter sign patterns, but unfortunately $24$ seems to be the limitation of our methods.)
The Liouville function can be written as $\lambda(n) = (-1)^{\Omega(n)}$, where $\Omega(n)$ is the number of prime factors of $n$ (counting multiplicity). One can also consider the variant $\lambda_3(n) = e^{2\pi i \Omega(n)/3}$, which is a completely multiplicative function taking values in the cube roots of unity $\{1, \omega, \omega^2\}$. Here we are able to show that all sign patterns in $\{1, \omega, \omega^2\}^3$ occur with positive lower density as sign patterns of this function. The analogous result for $\lambda$ was already known (see this paper of Matomäki, Radziwiłł, and myself), and in that case it is even known that all sign patterns occur with equal logarithmic density (from this paper of myself and Teräväinen), but these techniques barely fail to handle the $\lambda_3$ case by itself (largely because the "parity" arguments used in the case of the Liouville function no longer control three-point correlations in the $\lambda_3$ case) and an additional additive combinatorial tool is needed. After applying existing technology (such as entropy decrement methods), the problem roughly speaking reduces to locating patterns $x \in B_1, x + r \in B_2, x + 2r \in B_3$ for a certain partition $G = B_1 \cup B_2 \cup B_3$ of a compact abelian group $G$ (think for instance of the unit circle $G = {\bf R}/{\bf Z}$, although the general case is a bit more complicated; in particular, if $G$ is disconnected then there is a certain "coprimality" constraint on $r$, and also we can allow the $x+2r$ to be replaced by any $x+r'$ with $r'-r$ divisible by $3$), with each of the $B_i$ having measure $1/3$. An inequality of Kneser just barely fails to guarantee the existence of such patterns, but by using an inverse theorem for Kneser's inequality in this previous paper of mine we are able to identify precisely the obstruction for this method to work, and rule it out by an ad hoc method.
The same techniques turn out to also make progress on some conjectures of Erdős–Pomerance and Hildebrand regarding patterns of the largest prime factor $P(n)$ of a natural number $n$. For instance, we improve results of Erdős–Pomerance and of Balog demonstrating that the inequalities

$P(n) < P(n+1) < P(n+2)$

and

$P(n) > P(n+1) > P(n+2)$

each hold for infinitely many $n$, by demonstrating the stronger claims that these inequalities each hold for a set of $n$ of positive lower density. As a variant, we also show that we can find a positive density set of $n$ for which the largest prime factors of $n$ and $n+1$ both exceed a fixed power of $n$ (this improves on a previous result of Hildebrand, with a smaller power). A number of other results of this type are also obtained in this paper.
In order to obtain these sorts of results, one needs to extend the entropy decrement technology from the setting of multiplicative functions to that of what we call "weakly stable sets" – sets $A$ of natural numbers which have some multiplicative structure, in the sense that (roughly speaking) there is a set $B$ such that for all small primes $p$, the statements $n \in A$ and $pn \in B$ are roughly equivalent to each other. For instance, if $A$ is a level set $A = \{ n: \Omega(n) \equiv a \pmod{3} \}$, one would take $B = \{ n: \Omega(n) \equiv a+1 \pmod{3} \}$; if instead $A$ is a set of the form $\{ n: P(n) \geq n^\theta \}$, then one can take $B = A$. When one has such a situation, then very roughly speaking, the entropy decrement argument then allows one to estimate a one-parameter correlation such as

${\bf E}_n 1_A(n+1) 1_A(n+2) 1_A(n+3)$

with a two-parameter correlation such as

${\bf E}_n {\bf E}_p 1_B(n+p) 1_B(n+2p) 1_B(n+3p)$

(where we will be deliberately vague as to how we are averaging over $n$ and $p$), and then the use of the "linear equations in primes" technology of Ben Green, Tamar Ziegler, and myself then allows one to replace this average in turn by something like

${\bf E}_n {\bf E}_m 1_B(n+m) 1_B(n+2m) 1_B(n+3m)$

where $m$ is constrained to be not divisible by small primes but is otherwise quite arbitrary. This latter average can then be attacked by tools from additive combinatorics, such as translation to a continuous group model (using for instance the Furstenberg correspondence principle) followed by tools such as Kneser's inequality (or inverse theorems to that inequality).
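As a toy numerical illustration of the kind of sumset lower bound invoked at this last step, the following sketch (entirely my own, not from the paper; the sets and modulus are arbitrary choices) checks the Cauchy–Davenport inequality, the cyclic-group special case of Kneser's theorem, by brute force:

```python
from itertools import product

def sumset(A, B, n):
    """Compute the sumset A + B inside the cyclic group Z/nZ."""
    return {(a + b) % n for a, b in product(A, B)}

# Cauchy-Davenport: for a prime p and nonempty A, B in Z/pZ,
# |A + B| >= min(p, |A| + |B| - 1).  Kneser's theorem generalises this
# to arbitrary abelian groups, with a correction term coming from the
# stabiliser of A + B.
p = 13
A = {0, 1, 3}
B = {0, 2, 5, 7}
S = sumset(A, B, p)
assert len(S) >= min(p, len(A) + len(B) - 1)
```

Of course the actual argument in the paper runs in a continuous group and needs the inverse theorem, not just the inequality itself; this is only a sanity check of the discrete model.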
(This post is mostly intended for my own reference, as I found myself repeatedly looking up several conversions between polynomial bases on various occasions.)
Let ${\mathcal P}_n$ denote the vector space of polynomials of one variable $x$ with real coefficients of degree at most $n$. This is a vector space of dimension $n+1$, and the sequence of these spaces forms a filtration:

${\mathcal P}_0 \subset {\mathcal P}_1 \subset {\mathcal P}_2 \subset \cdots$

A standard basis for these vector spaces is given by the monomials $x^0, x^1, x^2, \dots$: every polynomial in ${\mathcal P}_n$ can be expressed uniquely as a linear combination of the first $n+1$ monomials $x^0, x^1, \dots, x^n$. More generally, if one has any sequence $Q_0, Q_1, Q_2, \dots$ of polynomials, with each $Q_k$ of degree exactly $k$, then an easy induction shows that $Q_0, \dots, Q_n$ forms a basis for ${\mathcal P}_n$.
In particular, if we have two such sequences $Q_0, Q_1, Q_2, \dots$ and $R_0, R_1, R_2, \dots$ of polynomials, with each $Q_k$ of degree exactly $k$ and each $R_k$ of degree exactly $k$, then each $Q_n$ must be expressible uniquely as a linear combination of the polynomials $R_0, \dots, R_n$, thus we have an identity of the form

$Q_n = \sum_{k=0}^n c_{nk} R_k$

for some change of basis coefficients $c_{nk}$. These coefficients describe how to convert a polynomial expressed in the $Q$ basis into a polynomial expressed in the $R$ basis.

Many standard combinatorial quantities $c_{nk}$ involving two natural numbers $n, k$ can be interpreted as such change of basis coefficients. The most familiar example are the binomial coefficients $\binom{n}{k}$, which measure the conversion from the shifted monomial basis $(x+1)^n$ to the monomial basis $x^n$, thanks to (a special case of) the binomial formula:

$(x+1)^n = \sum_{k=0}^n \binom{n}{k} x^k,$

thus for instance

$(x+1)^3 = 1 + 3x + 3x^2 + x^3.$

More generally, for any shift $h$, the conversion from $(x+h)^n$ to $x^n$ is measured by the coefficients $\binom{n}{k} h^{n-k}$, thanks to the general case of the binomial formula.
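These shifted-monomial coefficients are easy to tabulate and check mechanically; the following short sketch (the function name is my own, purely for illustration) compares the closed-form coefficients $\binom{n}{k} h^{n-k}$ against a few hand expansions:

```python
from math import comb

def shifted_monomial_coeffs(n, h):
    """Coefficients [c_0, ..., c_n] of x^k in the expansion of (x + h)^n,
    given by the binomial formula: c_k = C(n, k) * h^(n-k)."""
    return [comb(n, k) * h ** (n - k) for k in range(n + 1)]

# (x + 1)^3 = 1 + 3x + 3x^2 + x^3
assert shifted_monomial_coeffs(3, 1) == [1, 3, 3, 1]
# (x + 2)^3 = 8 + 12x + 6x^2 + x^3
assert shifted_monomial_coeffs(3, 2) == [8, 12, 6, 1]
```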
But there are other bases of interest too. For instance if one uses the falling factorial basis

$(x)_n := x (x-1) \cdots (x-n+1),$

then the conversion from falling factorials to monomials is given by the Stirling numbers of the first kind $s(n,k)$:

$(x)_n = \sum_{k=0}^n s(n,k) x^k,$

thus for instance

$(x)_3 = x(x-1)(x-2) = 2x - 3x^2 + x^3,$

and the conversion back is given by the Stirling numbers of the second kind $S(n,k)$:

$x^n = \sum_{k=0}^n S(n,k) (x)_k,$

thus for instance

$x^3 = (x)_1 + 3 (x)_2 + (x)_3.$
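Since the two Stirling conversions translate between the same pair of bases in opposite directions, the corresponding change-of-basis matrices must be mutually inverse. The following sketch (my own illustration; the function names are not standard library ones) verifies this numerically using the standard recurrences:

```python
def stirling1(n, k):
    """Signed Stirling numbers of the first kind s(n, k), defined by
    x(x-1)...(x-n+1) = sum_k s(n, k) x^k, via the recurrence
    s(n, k) = s(n-1, k-1) - (n-1) s(n-1, k)."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return stirling1(n - 1, k - 1) - (n - 1) * stirling1(n - 1, k)

def stirling2(n, k):
    """Stirling numbers of the second kind S(n, k), defined by
    x^n = sum_k S(n, k) x(x-1)...(x-k+1), via the recurrence
    S(n, k) = k S(n-1, k) + S(n-1, k-1)."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# (x)_3 = 2x - 3x^2 + x^3 and x^3 = (x)_1 + 3(x)_2 + (x)_3:
assert [stirling1(3, k) for k in range(4)] == [0, 2, -3, 1]
assert [stirling2(3, k) for k in range(4)] == [0, 1, 3, 1]

# The two change-of-basis matrices are mutually inverse:
# sum_j S(n, j) s(j, k) = 1 if n == k, and 0 otherwise.
n = 5
for k in range(n + 1):
    total = sum(stirling2(n, j) * stirling1(j, k) for j in range(n + 1))
    assert total == (1 if n == k else 0)
```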
If one uses the binomial functions $\binom{x}{n} = \frac{(x)_n}{n!}$ as a basis instead of the falling factorials, one of course can rewrite these conversions as

$\binom{x}{n} = \sum_{k=0}^n \frac{s(n,k)}{n!} x^k$

and

$x^n = \sum_{k=0}^n S(n,k)\, k! \binom{x}{k},$

thus for instance

$\binom{x}{3} = \frac{1}{3} x - \frac{1}{2} x^2 + \frac{1}{6} x^3$

and

$x^3 = \binom{x}{1} + 6 \binom{x}{2} + 6 \binom{x}{3}.$
As a slight variant, if one instead uses rising factorials

$x^{(n)} := x (x+1) \cdots (x+n-1),$

then the conversion to monomials yields the unsigned Stirling numbers $|s(n,k)|$ of the first kind:

$x^{(n)} = \sum_{k=0}^n |s(n,k)| x^k,$

thus for instance

$x^{(3)} = x(x+1)(x+2) = 2x + 3x^2 + x^3.$
One final basis comes from the polylogarithm functions

$\mathrm{Li}_s(x) := \sum_{m=1}^\infty \frac{x^m}{m^s}.$

For instance one has

$\mathrm{Li}_0(x) = \frac{x}{1-x},$

and more generally one has

$\mathrm{Li}_{-n}(x) = \frac{x A_n(x)}{(1-x)^{n+1}}$

for all natural numbers $n$ and some polynomial $A_n(x)$ of degree $n-1$ (the Eulerian polynomials), which when converted to the monomial basis yields the (shifted) Eulerian numbers $A(n,k)$:

$A_n(x) = \sum_{k=0}^{n-1} A(n,k) x^k.$

For instance

$\mathrm{Li}_{-3}(x) = \frac{x (1 + 4x + x^2)}{(1-x)^4}.$
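The Eulerian numbers appearing here are easy to tabulate; the sketch below (my own illustration, with non-standard function names) computes them via the standard recurrence and cross-checks against brute-force ascent counting over all permutations, anticipating the combinatorial interpretation discussed later in the post:

```python
from itertools import permutations
from math import factorial

def eulerian(n, k):
    """Eulerian number A(n, k), via the standard recurrence
    A(n, k) = (k+1) A(n-1, k) + (n-k) A(n-1, k-1)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k >= n:
        return 0
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

def ascents(perm):
    """Number of indices i with perm[i] < perm[i+1]."""
    return sum(1 for i in range(len(perm) - 1) if perm[i] < perm[i + 1])

# A(n, k) counts permutations of {1,...,n} with exactly k ascents,
# and for fixed n the A(n, k) sum to n!.
n = 4
counts = [0] * n
for perm in permutations(range(1, n + 1)):
    counts[ascents(perm)] += 1
assert counts == [eulerian(n, k) for k in range(n)]
assert sum(counts) == factorial(n)
```

In particular `[eulerian(3, k) for k in range(3)]` recovers the coefficients $1, 4, 1$ of the Eulerian polynomial $A_3(x) = 1 + 4x + x^2$ in the example above.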
These particular coefficients also have useful combinatorial interpretations. For instance:
 The binomial coefficient $\binom{n}{k}$ is of course the number of $k$-element subsets of $\{1,\dots,n\}$.
 The unsigned Stirling numbers $|s(n,k)|$ of the first kind are the number of permutations of $\{1,\dots,n\}$ with exactly $k$ cycles. The signed Stirling numbers $s(n,k)$ are then given by the formula $s(n,k) = (-1)^{n-k} |s(n,k)|$.
 The Stirling numbers $S(n,k)$ of the second kind are the number of ways to partition $\{1,\dots,n\}$ into $k$ nonempty subsets.
 The Eulerian numbers $A(n,k)$ are the number of permutations of $\{1,\dots,n\}$ with exactly $k$ ascents.
These coefficients behave similarly to each other in several ways. For instance, the binomial coefficients $\binom{n}{k}$ obey the well known Pascal identity

$\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$

(with the convention that $\binom{n}{k}$ vanishes outside of the range $0 \leq k \leq n$). In a similar spirit, the unsigned Stirling numbers $|s(n,k)|$ of the first kind obey the identity

$|s(n+1,k)| = n |s(n,k)| + |s(n,k-1)|,$

and the signed counterparts obey the identity

$s(n+1,k) = -n\, s(n,k) + s(n,k-1).$

The Stirling numbers of the second kind obey the identity

$S(n+1,k) = k\, S(n,k) + S(n,k-1),$

and the Eulerian numbers obey the identity

$A(n+1,k) = (k+1) A(n,k) + (n+1-k) A(n,k-1).$
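As a sanity check tying the recurrences to the combinatorial interpretations, the sketch below (my own illustration) grows the unsigned first-kind Stirling numbers from their recurrence and compares against brute-force cycle counting:

```python
from itertools import permutations

def cycle_count(perm):
    """Number of cycles of a permutation of {0,...,n-1} in one-line notation."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return cycles

def unsigned_stirling1(n, k):
    """Unsigned Stirling numbers of the first kind, via
    |s(n, k)| = |s(n-1, k-1)| + (n-1) |s(n-1, k)|."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return unsigned_stirling1(n - 1, k - 1) + (n - 1) * unsigned_stirling1(n - 1, k)

# |s(n, k)| counts permutations of an n-element set with exactly k cycles.
n = 5
counts = [0] * (n + 1)
for perm in permutations(range(n)):
    counts[cycle_count(perm)] += 1
assert counts == [unsigned_stirling1(n, k) for k in range(n + 1)]
```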
I was pleased to learn this week that the 2019 Abel Prize was awarded to Karen Uhlenbeck. Uhlenbeck laid much of the foundations of modern geometric PDE. One of the few papers I have in this area is in fact a joint paper with Gang Tian extending a famous singularity removal theorem of Uhlenbeck for four-dimensional Yang–Mills connections to higher dimensions. In both these papers, it is crucial to be able to construct "Coulomb gauges" for various connections, and there is a clever trick of Uhlenbeck for doing so, introduced in another important paper of hers, which is absolutely critical in my own paper with Tian. Nowadays it would be considered a standard technique, but it was definitely not so at the time that Uhlenbeck introduced it.
Suppose one has a smooth connection $A = A_\alpha$ on a (closed) unit ball $B$ in ${\bf R}^d$ for some $d \geq 2$, taking values in some Lie algebra ${\mathfrak g}$ associated to a compact Lie group $G$. This connection then has a curvature $F = F_{\alpha\beta}$, defined in coordinates by the usual formula

$F_{\alpha\beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha + [A_\alpha, A_\beta]. \qquad (1)$

It is natural to place the curvature in a scale-invariant space such as $L^{d/2}$, and then the natural space for the connection would be the Sobolev space $W^{1,d/2}$. It is easy to see from (1) and Sobolev embedding that if $A$ is bounded in $W^{1,d/2}$, then $F$ will be bounded in $L^{d/2}$. One can then ask the converse question: if $F$ is bounded in $L^{d/2}$, is $A$ bounded in $W^{1,d/2}$? This can be viewed as asking whether the curvature equation (1) enjoys "elliptic regularity".
There is a basic obstruction provided by gauge invariance. For any smooth gauge $U: B \to G$ taking values in the Lie group, one can gauge transform $A_\alpha$ to

$A^U_\alpha := U A_\alpha U^{-1} - (\partial_\alpha U) U^{-1},$

and then a brief calculation shows that the curvature is conjugated to

$F^U_{\alpha\beta} = U F_{\alpha\beta} U^{-1}.$

This gauge symmetry does not affect the $L^{d/2}$ norm of the curvature tensor $F$, but can make the connection extremely large in $W^{1,d/2}$, since there is no control on how wildly $U$ can oscillate in space.
However, one can hope to overcome this problem by gauge fixing: perhaps if $F$ is bounded in $L^{d/2}$, then one can make $A$ bounded in $W^{1,d/2}$ after applying a gauge transformation. The basic and useful result of Uhlenbeck is that this can be done if the $L^{d/2}$ norm of $F$ is sufficiently small (and then the conclusion is that $A$ is small in $W^{1,d/2}$). (For large connections there is a serious issue related to the Gribov ambiguity.) In my (much) later paper with Tian, we adapted this argument, replacing Lebesgue spaces by Morrey space counterparts. (This result was also independently obtained at about the same time by Meyer and Rivière.)
To make the problem elliptic, one can try to impose the Coulomb gauge condition

$\partial^\alpha A_\alpha = 0 \qquad (2)$

(also known as the Lorenz gauge or Hodge gauge in various papers), together with a natural boundary condition on $A$ that will not be discussed further here. This turns (1), (2) into a divergence-curl system that is elliptic at the linear level at least. Indeed if one takes the divergence of (1) using (2) one sees that

$\Delta A_\beta = \partial^\alpha F_{\alpha\beta} - \partial^\alpha [A_\alpha, A_\beta], \qquad (3)$

and if one could somehow ignore the nonlinear term $\partial^\alpha [A_\alpha, A_\beta]$ then we would get the required regularity on $A$ by standard elliptic regularity estimates.
The problem is then how to handle the nonlinear term. If we already knew that $A$ was small in the right norm $W^{1,d/2}$ then one can use Sobolev embedding, Hölder's inequality, and elliptic regularity to show that the second term in (3) is small compared to the first term, and so one could then hope to eliminate it by perturbative analysis. However, proving that $A$ is small in this norm is exactly what we are trying to prove! So this approach seems circular.
Uhlenbeck’s clever way out of this circularity is a textbook example of what is now known as a “continuity” argument. Instead of trying to work just with the original connection $A$, one works with the rescaled connections $A^{(t)}(x) := t A(tx)$ for $0 \leq t \leq 1$, with associated rescaled curvatures $F^{(t)}$. If the original curvature $F$ is small in $L^{d/2}$ norm (e.g. bounded by some small $\epsilon$), then so are all the rescaled curvatures $F^{(t)}$. We want to obtain a Coulomb gauge at time $t=1$; this is difficult to do directly, but it is trivial to obtain a Coulomb gauge at time $t=0$, because the connection $A^{(0)}$ vanishes at this time. On the other hand, once one has successfully obtained a Coulomb gauge at some time $t$ with $A^{(t)}$ small in the natural norm (say bounded by $C\epsilon$ for some constant $C$ which is large in absolute terms, but not so large compared with say $\epsilon^{-1}$), the perturbative argument mentioned earlier (combined with the qualitative hypothesis that $A$ is smooth) actually works to show that a Coulomb gauge can also be constructed and be small for all times sufficiently close to $t$; furthermore, the perturbative analysis actually shows that the nearby gauges enjoy a slightly better bound on the norm, say $C\epsilon/2$ rather than $C\epsilon$. As a consequence of this, the set of times $t$ for which one has a good Coulomb gauge obeying the claimed estimates is both open and closed in $[0,1]$, and also contains $0$. Since the unit interval $[0,1]$ is connected, it must then also contain $1$. This concludes the proof.
One of the lessons I drew from this example is to not be deterred (especially in PDE) by an argument seeming to be circular; if the argument is still sufficiently “nontrivial” in nature, it can often be modified into a usefully noncircular argument that achieves what one wants (possibly under an additional qualitative hypothesis, such as a continuity or smoothness hypothesis).
Last week, we had Peter Scholze give an interesting distinguished lecture series here at UCLA on “Prismatic Cohomology”, which is a new type of cohomology theory worked out by Scholze and Bhargav Bhatt. (Video of the talks will be available shortly; for now we have some notes taken by two note-takers in the audience on that web page.) My understanding of this (speaking as someone that is rather far removed from this area) is that it is progress towards the “motivic” dream of being able to define cohomology for varieties (or similar objects) defined over arbitrary commutative rings $\overline{A}$, and with coefficients in another arbitrary commutative ring $A$. Currently, we have various flavours of cohomology that only work for certain types of domain rings $\overline{A}$ and coefficient rings $A$:
 Singular cohomology, which roughly speaking works when the domain ring $\overline{A}$ is a characteristic zero field such as ${\bf Q}$ or ${\bf C}$, but can allow for arbitrary coefficients $A$;
 de Rham cohomology, which roughly speaking works as long as the coefficient ring $A$ is the same as the domain ring $\overline{A}$ (or a homomorphic image thereof), as one can only talk about $A$-valued differential forms if the underlying space is also defined over $A$;
 $\ell$-adic cohomology, which is a remarkably powerful application of étale cohomology, but only works well when the coefficient ring is localised around a prime $\ell$ that is different from the characteristic $p$ of the domain ring $\overline{A}$; and
 Crystalline cohomology, in which the domain ring is a field $k$ of some finite characteristic $p$, but the coefficient ring $A$ can be a slight deformation of $k$, such as the ring $W(k)$ of Witt vectors of $k$.
There are various relationships between the cohomology theories, for instance de Rham cohomology coincides with singular cohomology for smooth varieties in the characteristic zero limit. The following picture Scholze drew in his first lecture captures these sorts of relationships nicely:
The new prismatic cohomology of Bhatt and Scholze unifies many of these cohomologies in the “neighbourhood” of the point $(p,p)$ in the above diagram, in which the domain ring $\overline{A}$ and the coefficient ring $A$ are both thought of as being “close to characteristic $p$” in some sense, so that the dilates $p\overline{A}$, $pA$ of these rings are either zero, or “small”. For instance, the $p$-adic ring ${\bf Z}_p$ is technically of characteristic $0$, but $p{\bf Z}_p$ is a “small” ideal of ${\bf Z}_p$ (it consists of those elements of ${\bf Z}_p$ of $p$-adic valuation at least $1$), so one can think of ${\bf Z}_p$ as being “close to characteristic $p$” in some sense. Scholze drew a “zoomed in” version of the previous diagram to informally describe the types of rings for which prismatic cohomology is effective:
To define prismatic cohomology rings one needs a “prism”: a ring homomorphism from $A$ to $\overline{A}$ equipped with a “Frobenius-like” endomorphism $\phi$ on $A$ obeying some axioms. By tuning these homomorphisms one can recover existing cohomology theories like crystalline or de Rham cohomology as special cases of prismatic cohomology. These specialisations are analogous to how a prism splits white light into various individual colours, giving rise to the terminology “prismatic”, and depicted by this further diagram of Scholze:
(And yes, Peter confirmed that he and Bhargav were inspired by the Dark Side of the Moon album cover in selecting the terminology.)
There was an abstract definition of prismatic cohomology (as being the essentially unique cohomology arising from prisms that obeyed certain natural axioms), but there was also a more concrete way to view them in terms of coordinates, as a “$q$-deformation” of de Rham cohomology. Whereas in de Rham cohomology one works with derivative operators $\frac{d}{dx}$ that for instance apply to monomials by the usual formula

$\frac{d}{dx} x^n = n x^{n-1},$

prismatic cohomology in coordinates can be computed using a “$q$-derivative” operator $\nabla_q$ that for instance applies to monomials by the formula

$\nabla_q x^n = [n]_q x^{n-1},$

where

$[n]_q := \frac{q^n-1}{q-1} = 1 + q + \dots + q^{n-1}$

is the “$q$-analogue” of $n$ (a polynomial in $q$ that equals $n$ in the limit $q \to 1$). (The analogues become more complicated for more general forms than these.) In this more concrete setting, the fact that prismatic cohomology is independent of the choice of coordinates apparently becomes quite a nontrivial theorem.
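The $q$-derivative rule on monomials is easy to experiment with. The sketch below (my own illustration, using exact rational arithmetic) checks the classical difference-quotient form of the $q$-derivative, $(f(qx)-f(x))/(qx-x)$, against the formula $[n]_q x^{n-1}$, and confirms that $[n]_q$ recovers $n$ at $q = 1$:

```python
from fractions import Fraction

def q_int(n, q):
    """The q-integer [n]_q = 1 + q + ... + q^(n-1), which equals n at q = 1."""
    return sum(q ** i for i in range(n))

def q_derivative(f, x, q):
    """Classical q-derivative (f(qx) - f(x)) / (qx - x), for q != 1 and x != 0."""
    return (f(q * x) - f(x)) / (q * x - x)

# On the monomial x^n the q-derivative gives [n]_q x^(n-1),
# deforming the usual rule d/dx x^n = n x^(n-1).
q = Fraction(2, 3)
x = Fraction(5, 7)
n = 4
assert q_derivative(lambda t: t ** n, x, q) == q_int(n, q) * x ** (n - 1)

# In the limit q -> 1 the q-integer recovers the ordinary integer n.
assert q_int(4, 1) == 4
```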
Now that Google Plus is closing, the brief announcements that I used to post over there will now be migrated over to this blog. (Some people have suggested other platforms for this also, such as Twitter, but I think for now I can use my existing blog to accommodate these sorts of short posts.)
 The NSF-CBMS regional research conferences are now requesting proposals for the 2020 conference series. (I was the principal lecturer for one of these conferences back in 2005; it was a very intensive experience, but quite enjoyable, and I am quite pleased with the book that resulted from it.)
 The awardees for the Sloan Fellowships for 2019 have now been announced. (I was on the committee for the mathematics awards. For the usual reasons involving the confidentiality of letters of reference and other sensitive information, I will unfortunately be unable to answer any specific questions about our committee deliberations.)
I have just uploaded to the arXiv my paper “On the universality of the incompressible Euler equation on compact manifolds, II. Nonrigidity of Euler flows“, submitted to Pure and Applied Functional Analysis. This paper continues my attempts to establish “universality” properties of the Euler equations on Riemannian manifolds $(M,g)$, as I conjecture that the freedom to set the metric $g$ ought to allow one to “program” such Euler flows to exhibit a wide range of behaviour, and in particular to achieve finite time blowup (if the dimension is sufficiently large, at least).
In coordinates, the Euler equations read

$\partial_t u^k + u^j \nabla_j u^k = - \nabla^k p$
$\nabla_j u^j = 0 \qquad (1)$

where $p$ is the pressure field and $u$ is the velocity field, and $\nabla$ denotes the Levi-Civita connection with the usual Penrose abstract index notation conventions; we restrict attention here to the case where $u, p$ are smooth and $M$ is compact, smooth, orientable, connected, and without boundary. Let’s call $u$ an Euler flow on $M$ (for the time interval $[0,T]$) if it solves the above system of equations for some pressure $p$, and an incompressible flow if it just obeys the divergence-free relation $\nabla_j u^j = 0$. Thus every Euler flow is an incompressible flow, but the converse is certainly not true; for instance the various conservation laws of the Euler equation, such as conservation of energy, will already block most incompressible flows from being an Euler flow, or even being approximated in a reasonably strong topology by such Euler flows.
However, one can ask if an incompressible flow can be extended to an Euler flow by adding some additional dimensions to $M$. In my paper, I formalise this by considering warped products $\tilde M$ of $M$ which (as a smooth manifold) are products $\tilde M = M \times ({\bf R}/{\bf Z})^m$ of $M$ with a torus, with a metric $\tilde g$ given by

$d\tilde g^2 = g_{ij}(x)\, dx^i dx^j + \sum_{s=1}^m \tilde g_{ss}(x)\, (d\theta^s)^2$

for $x \in M$, where $\theta^1, \dots, \theta^m$ are the coordinates of the torus $({\bf R}/{\bf Z})^m$, and $\tilde g_{ss}: M \to (0,+\infty)$ are smooth positive coefficients for $s = 1, \dots, m$; in order to preserve the incompressibility condition, we also require the volume preservation property

$\prod_{s=1}^m \tilde g_{ss}(x) = 1, \qquad (2)$

though in practice we can quickly dispose of this condition by adding one further “dummy” dimension to the torus $({\bf R}/{\bf Z})^m$. We say that an incompressible flow $u$ is extendible to an Euler flow if there exists a warped product $\tilde M$ extending $M$, and an Euler flow $\tilde u$ on $\tilde M$ of the form

$\tilde u(t,x,\theta) = u^i(t,x) \frac{d}{dx^i} + \sum_{s=1}^m v^s(t,x) \frac{d}{d\theta^s}$

for some “swirl” fields $v^s: [0,T] \times M \to {\bf R}$. The situation here is motivated by the familiar situation of studying axisymmetric Euler flows $\tilde u$ on ${\bf R}^3$, which in cylindrical coordinates $(r,\theta,z)$ take the form

$\tilde u = u^r \frac{d}{dr} + u^z \frac{d}{dz} + v \frac{d}{d\theta}.$
The base component

$u^r \frac{d}{dr} + u^z \frac{d}{dz}$

of this flow is then a flow on the two-dimensional $(r,z)$ plane which is not quite incompressible (due to the failure of the volume preservation condition (2) in this case) but still satisfies a system of equations (coupled with a passive scalar field that is basically the square of the swirl $v$) that is reminiscent of the Boussinesq equations.
On a fixed $d$-dimensional manifold $(M,g)$, let ${\mathcal F}$ denote the space of incompressible flows $u$, equipped with the smooth topology (in spacetime), and let ${\mathcal E} \subset {\mathcal F}$ denote the space of such flows that are extendible to Euler flows. Our main theorem is

Theorem 1
 (i) (Generic inextendibility) Assume $d \geq 3$. Then ${\mathcal E}$ is of the first category in ${\mathcal F}$ (the countable union of nowhere dense sets in ${\mathcal F}$).
 (ii) (Nonrigidity) Assume that $M$ is topologically a torus $({\bf R}/{\bf Z})^d$ (with an arbitrary metric $g$). Then ${\mathcal E}$ is somewhere dense in ${\mathcal F}$ (that is, the closure of ${\mathcal E}$ has nonempty interior).
More informally, starting with an incompressible flow $u$, one usually cannot extend it to an Euler flow just by extending the manifold, warping the metric, and adding swirl coefficients, even if one is allowed to select the dimension of the extension, as well as the metric and coefficients, arbitrarily. However, many such flows can be perturbed to be extendible in such a manner (though different perturbations will require different extensions, in particular the dimension of the extension will not be fixed). Among other things, this means that conservation laws such as energy (or momentum, helicity, or circulation) no longer present an obstruction when one is allowed to perform an extension (basically this is because the swirl components of the extension can exchange energy (or momentum, etc.) with the base components in a basically arbitrary fashion).
These results fall short of my hopes to use the ability to extend the manifold to create universal behaviour in Euler flows, because of the fact that each flow requires a different extension in order to achieve the desired dynamics. Still it does seem to provide a little bit of support to the idea that high-dimensional Euler flows are quite “flexible” in their behaviour, though not completely so due to the generic inextendibility phenomenon. This flexibility reminds me a little bit of the flexibility of weak solutions to equations such as the Euler equations provided by the “$h$-principle” of Gromov and its variants (as discussed in these recent notes), although in this case the flexibility comes from adding additional dimensions, rather than by repeatedly adding high-frequency corrections to the solution.
The proof of part (i) of the theorem basically proceeds by a dimension counting argument (similar to that in the proof of Proposition 9 of these recent lecture notes of mine). Heuristically, the point is that an arbitrary incompressible flow $u$ is essentially determined by $d-1$ independent functions of space and time, whereas the warping factors $\tilde g_{ss}$ are functions of space only, the pressure field is one function of space and time, and the swirl fields $v^s$ are technically functions of both space and time, but have the same number of degrees of freedom as a function just of space, because they solve an evolution equation. When $d \geq 3$, this means that there are fewer unknown functions of space and time than prescribed functions of space and time, which is the source of the generic inextendibility. This simple argument breaks down when $d = 2$, but we do not know whether the claim is actually false in this case.
The proof of part (ii) proceeds by direct calculation of the effect of the warping factors and swirl velocities, which effectively create a forcing term (of Boussinesq type) in the first equation of (1) that is a combination of functions of the Eulerian spatial coordinates (coming from the warping factors) and the Lagrangian spatial coordinates (which arise from the swirl velocities, which are passively transported by the flow). In a nonempty open subset of ${\mathcal F}$, the combination of these coordinates becomes a nondegenerate set of coordinates for spacetime, and one can then use the Stone–Weierstrass theorem to conclude. The requirement that $M$ be topologically a torus is a technical hypothesis in order to avoid topological obstructions such as the hairy ball theorem, but it may be that the hypothesis can be dropped (and it may in fact be true that ${\mathcal E}$ is dense in all of ${\mathcal F}$, not just in a nonempty open subset).
Just a quick post to advertise two upcoming events sponsored by institutions I am affiliated with:
 The 2019 National Math Festival will be held in Washington D.C. on May 4 (together with some satellite events at other US cities). This festival will have numerous games, events, films, and other activities, which are all free and open to the public. (I am on the board of trustees of MSRI, which is one of the sponsors of the festival.)
 The Institute for Pure and Applied Mathematics (IPAM) is now accepting applications for its second Industrial Short Course for May 16-17, 2019, with the topic of “Deep Learning and the Latest AI Algorithms“. (I serve on the Scientific Advisory Board of this institute.) This is an intensive course (in particular requiring active participation) aimed at industrial mathematicians involving both the theory and practice of deep learning and neural networks, taught by Xavier Bresson. (Note: space is very limited, and there is also a registration fee of $2,000 for this course, which is expected to be in high demand.)
[This post is collectively authored by the ICM structure committee, whom I am currently chairing – T.]
The International Congress of Mathematicians (ICM) is widely considered to be the premier conference for mathematicians. It is held every four years; for instance, the 2018 ICM was held in Rio de Janeiro, Brazil, and the 2022 ICM is to be held in Saint Petersburg, Russia. The most high-profile event at the ICM is the awarding of the 10 or so prizes of the International Mathematical Union (IMU) such as the Fields Medal, and the lectures by the prize laureates; but there are also approximately twenty plenary lectures from leading experts across all mathematical disciplines, several public lectures of a less technical nature, about 180 more specialised invited lectures divided into about twenty section panels, each corresponding to a mathematical field (or range of fields), as well as various outreach and social activities, exhibits and satellite programs, and meetings of the IMU General Assembly; see for instance the program for the 2018 ICM for a sample schedule. In addition to these official events, the ICM also provides more informal networking opportunities, in particular allowing mathematicians at all stages of career, and from all backgrounds and nationalities, to interact with each other.
For each Congress, a Program Committee (together with subcommittees for each section) is entrusted with the task of selecting who will give the lectures of the ICM (excluding the lectures by prize laureates, which are selected by separate prize committees); they also have decided how to appropriately subdivide the entire field of mathematics into sections. Given the prestigious nature of invitations from the ICM to present a lecture, this has been an important and challenging task, but one which past Program Committees have managed to fulfill in a largely satisfactory fashion.
Nevertheless, in the last few years there has been substantial discussion regarding ways in which the process for structuring the ICM and inviting lecturers could be further improved, for instance to reflect the fact that the distribution of mathematics across various fields has evolved over time. At the 2018 ICM General Assembly meeting in Rio de Janeiro, a resolution was adopted to create a new Structure Committee to take on some of the responsibilities previously delegated to the Program Committee, focusing specifically on the structure of the scientific program. On the other hand, the Structure Committee is not involved with the format for prize lectures, the selection of prize laureates, or the selection of plenary and sectional lecturers; these tasks are instead the responsibilities of other committees (the local Organizing Committee, the prize committees, and the Program Committee respectively).
The first Structure Committee was constituted on 1 Jan 2019, with the following members:

 Terence Tao [Chair from 15 Feb, 2019]
 Carlos Kenig [IMU President (from 1 Jan 2019), ex officio]
 Nalini Anantharaman
 Alexei Borodin
 Annalisa Buffa
 Hélène Esnault [from 21 Mar, 2019]
 Irene Fonseca
 János Kollár [until 21 Mar, 2019]
 Laci Lovász [Chair until 15 Feb, 2019]
 Terry Lyons
 Stephane Mallat
 Hiraku Nakajima
 Éva Tardos
 Peter Teichner
 Akshay Venkatesh
 Anna Wienhard
As one of our first actions, we on the committee are using this blog post to solicit input from the mathematical community regarding the topics within our remit. Among the specific questions (in no particular order) for which we seek comments are the following:
 Are there suggestions to change the format of the ICM that would increase its value to the mathematical community?
 Are there suggestions to change the format of the ICM that would encourage greater participation and interest in attending, particularly with regards to junior researchers and mathematicians from developing countries?
 What is the correct balance between research and exposition in the lectures? For instance, how strongly should one emphasize the importance of good exposition when selecting plenary and sectional speakers? Should there be “Bourbaki style” expository talks presenting work not necessarily authored by the speaker?
 Is the balance between plenary talks, sectional talks, and public talks at an optimal level? There is only a finite amount of space in the calendar, so any increase in the number or length of one of these types of talks will come at the expense of another.
 The ICM is generally perceived to be more important to pure mathematics than to applied mathematics. In what ways can the ICM be made more relevant and attractive to applied mathematicians, or should one not try to do so?
 Are there structural barriers that cause certain areas or styles of mathematics (such as applied or interdisciplinary mathematics) or certain groups of mathematicians to be underrepresented at the ICM? What, if anything, can be done to mitigate these barriers?
Of course, we do not expect these complex and difficult questions to be resolved within this blog post, and debating these and other issues would likely be a major component of our internal committee discussions. Nevertheless, we would value constructive comments towards the above questions (or on other topics within the scope of our committee) to help inform these subsequent discussions. We therefore welcome and invite such commentary, either as responses to this blog post, or sent privately to one of the members of our committee. We would also be interested in having readers share their personal experiences at past congresses, and how it compares with other major conferences of this type. (But in order to keep the discussion focused and constructive, we request that comments here refrain from discussing topics that are out of the scope of this committee, such as suggesting specific potential speakers for the next congress, which is a task instead for the 2022 ICM Program Committee.)
While talking mathematics with a postdoc here at UCLA (March Boedihardjo) we came across the following matrix problem which we managed to solve, but the proof was cute and the process of discovering it was fun, so I thought I would present the problem here as a puzzle without revealing the solution for now.
The problem involves word maps on a matrix group, which for sake of discussion we will take to be the special orthogonal group $SO(3)$ of real $3 \times 3$ rotation matrices (one of the smallest matrix groups that contains a copy of the free group, which incidentally is the key observation powering the Banach–Tarski paradox). Given any abstract word $w$ of two generators $a, b$ and their inverses (i.e., an element of the free group $F_2$), one can define the word map $w: SO(3) \times SO(3) \to SO(3)$ simply by substituting a pair of matrices in $SO(3)$ into these generators. For instance, if one has the word $w = a b a^{-1} b^{-1}$, then the corresponding word map is given by

$w(A,B) := A B A^{-1} B^{-1}$

for $A, B \in SO(3)$. Because $SO(3)$ contains a copy of the free group, we see the word map is nontrivial (not equal to the identity) if and only if the word $w$ itself is nontrivial.
Anyway, here is the problem:
Problem. Does there exist a sequence $w_1, w_2, \dots$ of nontrivial word maps that converge uniformly to the identity map?

To put it another way, given any $\epsilon > 0$, does there exist a nontrivial word $w$ such that $\|w(A,B) - 1\| \leq \epsilon$ for all $A, B \in SO(3)$, where $\|\cdot\|$ denotes (say) the operator norm, and $1$ denotes the identity matrix in $SO(3)$?
As I said, I don’t want to spoil the fun of working out this problem, so I will leave it as a challenge. Readers are welcome to share their thoughts, partial solutions, or full solutions in the comments below.
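For readers who want to experiment numerically before attacking the problem, here is a small self-contained sketch (my own illustration, not a solution, and all function names are mine). It evaluates the commutator word on a pair of small rotations and measures the distance to the identity in the Frobenius norm (which is comparable to the operator norm up to constants); it only illustrates the setup, together with the familiar fact that commutators of near-identity rotations are much closer to the identity than their inputs:

```python
from math import cos, sin, sqrt

def rot_x(t):
    """Rotation by angle t about the x-axis."""
    return [[1, 0, 0], [0, cos(t), -sin(t)], [0, sin(t), cos(t)]]

def rot_z(t):
    """Rotation by angle t about the z-axis."""
    return [[cos(t), -sin(t), 0], [sin(t), cos(t), 0], [0, 0, 1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    """For rotation matrices, the transpose is the inverse."""
    return [[A[j][i] for j in range(3)] for i in range(3)]

def dist_to_identity(A):
    """Frobenius-norm distance from A to the identity matrix."""
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    return sqrt(sum((A[i][j] - I[i][j]) ** 2
                    for i in range(3) for j in range(3)))

# Evaluate the word map of w = a b a^{-1} b^{-1} on two small rotations
# about orthogonal axes; the output is much closer to the identity
# than either input.
A, B = rot_x(0.1), rot_z(0.1)
W = mat_mul(mat_mul(A, B), mat_mul(transpose(A), transpose(B)))
assert dist_to_identity(W) < dist_to_identity(A)
```

Note that the problem asks for smallness uniformly over all of $SO(3) \times SO(3)$, not just near the identity, so this experiment is very far from settling the question either way.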