You are currently browsing the category archive for the ‘Mathematics’ category.

Joni Teräväinen and I have just uploaded to the arXiv our paper “Value patterns of multiplicative functions and related sequences“, submitted to Forum of Mathematics, Sigma. This paper explores how to use recent technology on correlations of multiplicative (or nearly multiplicative) functions, such as the “entropy decrement method”, in conjunction with techniques from additive combinatorics, to establish new results on the sign patterns of functions such as the Liouville function {\lambda}. For instance, with regard to length 5 sign patterns

\displaystyle  (\lambda(n+1),\dots,\lambda(n+5)) \in \{-1,+1\}^5

of the Liouville function, we can now show that at least {24} of the {32} possible sign patterns in {\{-1,+1\}^5} occur with positive upper density. (Conjecturally, all of them do so, and this is known for all shorter sign patterns, but unfortunately {24} seems to be the limitation of our methods.)
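As a quick empirical illustration (an added sketch, not part of the paper's argument; the cutoff {N} below is an arbitrary choice, and of course no finite computation can verify a positive density statement), one can sieve the Liouville function and count which length 5 sign patterns occur in an initial segment:

```python
# Sieve lambda(n) = (-1)^Omega(n) for n <= N, then count which of the
# 32 possible length-5 sign patterns actually appear in this range.

def liouville_up_to(N):
    """Return a list lam with lam[n] = lambda(n) for 1 <= n <= N."""
    lam = [1] * (N + 1)
    rem = list(range(N + 1))      # rem[n] holds n with sieved primes divided out
    for p in range(2, N + 1):
        if rem[p] == p:           # p untouched by smaller primes, hence prime
            for m in range(p, N + 1, p):
                while rem[m] % p == 0:
                    rem[m] //= p
                    lam[m] = -lam[m]   # one sign flip per prime factor
    return lam

N = 10**5
lam = liouville_up_to(N)
patterns = {tuple(lam[n + 1:n + 6]) for n in range(1, N - 5)}
print(len(patterns))   # the paper guarantees at least 24 occur with positive density
```

Empirically all {32} patterns show up very early, consistent with the conjecture; the theorem of course only certifies {24} of them with positive upper density.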

The Liouville function can be written as {\lambda(n) = e^{2\pi i \Omega(n)/2}}, where {\Omega(n)} is the number of prime factors of {n} (counting multiplicity). One can also consider the variant {\lambda_3(n) = e^{2\pi i \Omega(n)/3}}, which is a completely multiplicative function taking values in the cube roots of unity {\{1, \omega, \omega^2\}}. Here we are able to show that all {27} sign patterns in {\{1,\omega,\omega^2\}^3} occur with positive lower density as sign patterns {(\lambda_3(n+1), \lambda_3(n+2), \lambda_3(n+3))} of this function. The analogous result for {\lambda} was already known (see this paper of Matomäki, Radziwiłł, and myself), and in that case it is even known that all sign patterns occur with equal logarithmic density {1/8} (from this paper of myself and Teräväinen), but these techniques barely fail to handle the {\lambda_3} case by themselves (largely because the “parity” arguments used in the case of the Liouville function no longer control three-point correlations in the {\lambda_3} case) and an additional additive combinatorial tool is needed. After applying existing technology (such as entropy decrement methods), the problem roughly speaking reduces to locating patterns {a \in A_1, a+r \in A_2, a+2r \in A_3} for a certain partition {G = A_1 \cup A_2 \cup A_3} of a compact abelian group {G} (think for instance of the unit circle {G={\bf R}/{\bf Z}}, although the general case is a bit more complicated; in particular, if {G} is disconnected then there is a certain “coprimality” constraint on {r}, and we can also allow the {A_1,A_2,A_3} to be replaced by any {A_{c_1}, A_{c_2}, A_{c_3}} with {c_1+c_2+c_3} divisible by {3}), with each of the {A_i} having measure {1/3}. An inequality of Kneser just barely fails to guarantee the existence of such patterns, but by using an inverse theorem for Kneser’s inequality in this previous paper of mine we are able to identify precisely the obstruction for this method to work, and rule it out by an ad hoc method.
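One can similarly explore the {\lambda_3} patterns numerically (again purely an added illustration, with an arbitrary cutoff): since {\lambda_3(n) = \omega^{\Omega(n)}}, it suffices to track the residues {\Omega(n) \hbox{ mod } 3}.

```python
# Track lambda_3(n) = omega^{Omega(n)} through the residues Omega(n) mod 3,
# and count which of the 27 possible triples occur at n+1, n+2, n+3.

def big_omega_up_to(N):
    """Return Om with Om[n] = Omega(n), the number of prime factors with multiplicity."""
    Om = [0] * (N + 1)
    rem = list(range(N + 1))
    for p in range(2, N + 1):
        if rem[p] == p:           # p is prime
            for m in range(p, N + 1, p):
                while rem[m] % p == 0:
                    rem[m] //= p
                    Om[m] += 1
    return Om

N = 10**5
Om = big_omega_up_to(N)
triples = {(Om[n + 1] % 3, Om[n + 2] % 3, Om[n + 3] % 3) for n in range(1, N - 3)}
print(len(triples))   # empirically all 27 patterns appear quickly
```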

The same techniques turn out to also make progress on some conjectures of Erdös-Pomerance and Hildebrand regarding patterns of the largest prime factor {P^+(n)} of a natural number {n}. For instance, we improve results of Erdös-Pomerance and of Balog demonstrating that the inequalities

\displaystyle  P^+(n+1) < P^+(n+2) < P^+(n+3)

and
\displaystyle  P^+(n+1) > P^+(n+2) > P^+(n+3)

each hold for infinitely many {n}, by demonstrating the stronger claims that the inequalities

\displaystyle  P^+(n+1) < P^+(n+2) < P^+(n+3) > P^+(n+4)

and
\displaystyle  P^+(n+1) > P^+(n+2) > P^+(n+3) < P^+(n+4)

each hold for a set of {n} of positive lower density. As a variant, we also show that we can find a positive density set of {n} for which

\displaystyle  P^+(n+1), P^+(n+2), P^+(n+3) > n^\gamma

for any fixed {\gamma < e^{-1/3} = 0.7165\dots} (this improves on a previous result of Hildebrand with {e^{-1/3}} replaced by {e^{-1/2} = 0.6065\dots}). A number of other results of this type are also obtained in this paper.
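For what it is worth, these patterns are easy to explore numerically. The following added sketch (with an arbitrary cutoff, and no bearing on the proofs) sieves the largest prime factor {P^+(n)} and estimates the densities of the two four-term patterns above:

```python
# Sieve P^+(n) (largest prime factor) and estimate the density of n with
# P^+(n+1) < P^+(n+2) < P^+(n+3) > P^+(n+4), and of the reversed pattern.

def largest_prime_factor_up_to(N):
    """P[n] = largest prime factor of n (with P[0] = P[1] = 0)."""
    P = [0] * (N + 1)
    for p in range(2, N + 1):
        if P[p] == 0:                      # p is prime
            for m in range(p, N + 1, p):
                P[m] = p                   # last prime to touch m is its largest
    return P

N = 10**5
P = largest_prime_factor_up_to(N)
up_down = sum(1 for n in range(1, N - 4) if P[n+1] < P[n+2] < P[n+3] > P[n+4])
down_up = sum(1 for n in range(1, N - 4) if P[n+1] > P[n+2] > P[n+3] < P[n+4])
print(up_down / N, down_up / N)   # both empirically bounded away from zero
```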

In order to obtain these sorts of results, one needs to extend the entropy decrement technology from the setting of multiplicative functions to that of what we call “weakly stable sets” – sets {A} which have some multiplicative structure, in the sense that (roughly speaking) there is a set {B} such that for all small primes {p}, the statements {n \in A} and {pn \in B} are roughly equivalent to each other. For instance, if {A} is a level set {A = \{ n: \Omega(n) = 0 \hbox{ mod } 3 \}}, one would take {B = \{ n: \Omega(n) = 1 \hbox{ mod } 3 \}}; if instead {A} is a set of the form {\{ n: P^+(n) \geq n^\gamma\}}, then one can take {B=A}. When one has such a situation, then very roughly speaking, the entropy decrement argument allows one to estimate a one-parameter correlation such as

\displaystyle  {\bf E}_n 1_A(n+1) 1_A(n+2) 1_A(n+3)

by a two-parameter correlation such as

\displaystyle  {\bf E}_n {\bf E}_p 1_B(n+p) 1_B(n+2p) 1_B(n+3p)

(where we will be deliberately vague as to how we are averaging over {n} and {p}), and then the “linear equations in primes” technology of Ben Green, Tamar Ziegler, and myself allows one to replace this average in turn by something like

\displaystyle  {\bf E}_n {\bf E}_r 1_B(n+r) 1_B(n+2r) 1_B(n+3r)

where {r} is constrained to be not divisible by small primes but is otherwise quite arbitrary. This latter average can then be attacked by tools from additive combinatorics, such as translation to a continuous group model (using for instance the Furstenberg correspondence principle) followed by tools such as Kneser’s inequality (or inverse theorems to that inequality).

(This post is mostly intended for my own reference, as I found myself repeatedly looking up several conversions between polynomial bases on various occasions.)

Let {\mathrm{Poly}_{\leq n}} denote the vector space of polynomials {P:{\bf R} \rightarrow {\bf R}} of one variable {x} with real coefficients of degree at most {n}. This is a vector space of dimension {n+1}, and the sequence of these spaces forms a filtration:

\displaystyle  \mathrm{Poly}_{\leq 0} \subset \mathrm{Poly}_{\leq 1} \subset \mathrm{Poly}_{\leq 2} \subset \dots

A standard basis for these vector spaces is given by the monomials {x^0, x^1, x^2, \dots}: every polynomial {P(x)} in {\mathrm{Poly}_{\leq n}} can be expressed uniquely as a linear combination of the first {n+1} monomials {x^0, x^1, \dots, x^n}. More generally, if one has any sequence {Q_0(x), Q_1(x), Q_2(x), \dots} of polynomials, with each {Q_n} of degree exactly {n}, then an easy induction shows that {Q_0(x),\dots,Q_n(x)} forms a basis for {\mathrm{Poly}_{\leq n}}.

In particular, if we have two such sequences {Q_0(x), Q_1(x), Q_2(x),\dots} and {R_0(x), R_1(x), R_2(x), \dots} of polynomials, with each {Q_n} of degree {n} and each {R_k} of degree {k}, then {Q_n} must be expressible uniquely as a linear combination of the polynomials {R_0,R_1,\dots,R_n}, thus we have an identity of the form

\displaystyle  Q_n(x) = \sum_{k=0}^n c_{QR}(n,k) R_k(x)

for some change of basis coefficients {c_{QR}(n,k) \in {\bf R}}. These coefficients describe how to convert a polynomial expressed in the {Q_n} basis into a polynomial expressed in the {R_k} basis.
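Since {R_k} has degree exactly {k}, the system relating the two bases is triangular, so the coefficients {c_{QR}(n,k)} can be computed by back-substitution. Here is a small computational companion (an added sketch, using exact arithmetic via `fractions`; polynomials are represented as lists of monomial coefficients, constant term first):

```python
# Compute the change-of-basis coefficients c with Q_n = sum_k c[k] * R_k.
# R_k is assumed to have degree exactly k, so the system is triangular.

from fractions import Fraction

def change_of_basis(Q, Rs):
    q = [Fraction(c) for c in Q]
    n = len(Q) - 1
    c = [Fraction(0)] * (n + 1)
    for k in range(n, -1, -1):
        c[k] = q[k] / Rs[k][k]             # match the degree-k coefficient
        for j, r in enumerate(Rs[k]):
            q[j] -= c[k] * r               # subtract off c[k] * R_k
    return c

monomials = [[1], [0, 1], [0, 0, 1], [0, 0, 0, 1]]      # 1, x, x^2, x^3
falling   = [[1], [0, 1], [0, -1, 1], [0, 2, -3, 1]]    # (x)_0, ..., (x)_3

# (x+1)^3 in the monomial basis: the binomial coefficients 1, 3, 3, 1.
print(change_of_basis([1, 3, 3, 1], monomials))
# x^3 in the falling factorial basis: the Stirling numbers S(3,k) = 0, 1, 3, 1.
print(change_of_basis([0, 0, 0, 1], falling))
```

The two example conversions anticipate the binomial and Stirling examples worked out below.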

Many standard combinatorial quantities {c(n,k)} involving two natural numbers {0 \leq k \leq n} can be interpreted as such change of basis coefficients. The most familiar examples are the binomial coefficients {\binom{n}{k}}, which measure the conversion from the shifted monomial basis {(x+1)^n} to the monomial basis {x^k}, thanks to (a special case of) the binomial formula:

\displaystyle  (x+1)^n = \sum_{k=0}^n \binom{n}{k} x^k,

thus for instance

\displaystyle  (x+1)^3 = \binom{3}{0} x^0 + \binom{3}{1} x^1 + \binom{3}{2} x^2 + \binom{3}{3} x^3

\displaystyle  = 1 + 3x + 3x^2 + x^3.

More generally, for any shift {h}, the conversion from {(x+h)^n} to {x^k} is measured by the coefficients {h^{n-k} \binom{n}{k}}, thanks to the general case of the binomial formula.

But there are other bases of interest too. For instance if one uses the falling factorial basis

\displaystyle  (x)_n := x (x-1) \dots (x-n+1)

then the conversion from falling factorials to monomials is given by the Stirling numbers of the first kind {s(n,k)}:

\displaystyle  (x)_n = \sum_{k=0}^n s(n,k) x^k,

thus for instance

\displaystyle  (x)_3 = s(3,0) x^0 + s(3,1) x^1 + s(3,2) x^2 + s(3,3) x^3

\displaystyle  = 0 + 2 x - 3x^2 + x^3

and the conversion back is given by the Stirling numbers of the second kind {S(n,k)}:

\displaystyle  x^n = \sum_{k=0}^n S(n,k) (x)_k

thus for instance

\displaystyle  x^3 = S(3,0) (x)_0 + S(3,1) (x)_1 + S(3,2) (x)_2 + S(3,3) (x)_3

\displaystyle  = 0 + x + 3 x(x-1) + x(x-1)(x-2).

If one uses the binomial functions {\binom{x}{n} = \frac{1}{n!} (x)_n} as a basis instead of the falling factorials, one of course can rewrite these conversions as

\displaystyle  \binom{x}{n} = \sum_{k=0}^n \frac{1}{n!} s(n,k) x^k

and
\displaystyle  x^n = \sum_{k=0}^n k! S(n,k) \binom{x}{k}

thus for instance

\displaystyle  \binom{x}{3} = 0 + \frac{1}{3} x - \frac{1}{2} x^2 + \frac{1}{6} x^3

and
\displaystyle  x^3 = 0 + \binom{x}{1} + 6 \binom{x}{2} + 6 \binom{x}{3}.

As a slight variant, if one instead uses rising factorials

\displaystyle  (x)^n := x (x+1) \dots (x+n-1)

then the conversion to monomials yields the unsigned Stirling numbers {|s(n,k)|} of the first kind:

\displaystyle  (x)^n = \sum_{k=0}^n |s(n,k)| x^k

thus for instance

\displaystyle  (x)^3 = 0 + 2x + 3x^2 + x^3.

One final basis comes from the polylogarithm functions

\displaystyle  \mathrm{Li}_{-n}(x) := \sum_{j=1}^\infty j^n x^j.

For instance one has

\displaystyle  \mathrm{Li}_1(x) = -\log(1-x)

\displaystyle  \mathrm{Li}_0(x) = \frac{x}{1-x}

\displaystyle  \mathrm{Li}_{-1}(x) = \frac{x}{(1-x)^2}

\displaystyle  \mathrm{Li}_{-2}(x) = \frac{x}{(1-x)^3} (1+x)

\displaystyle  \mathrm{Li}_{-3}(x) = \frac{x}{(1-x)^4} (1+4x+x^2)

\displaystyle  \mathrm{Li}_{-4}(x) = \frac{x}{(1-x)^5} (1+11x+11x^2+x^3)

and more generally one has

\displaystyle  \mathrm{Li}_{-n-1}(x) = \frac{x}{(1-x)^{n+2}} E_n(x)

for all natural numbers {n} and some polynomial {E_n} of degree {n} (the Eulerian polynomials), which when converted to the monomial basis yields the (shifted) Eulerian numbers

\displaystyle  E_n(x) = \sum_{k=0}^n A(n+1,k) x^k.

For instance

\displaystyle  E_3(x) = A(4,0) x^0 + A(4,1) x^1 + A(4,2) x^2 + A(4,3) x^3

\displaystyle  = 1 + 11x + 11x^2 + x^3.
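As a quick numerical sanity check of the identity {\mathrm{Li}_{-n-1}(x) = \frac{x}{(1-x)^{n+2}} E_n(x)} in the case {n=3} (an added illustration, comparing a truncated series against the closed form):

```python
# Compare a truncated series for Li_{-4}(x) with x * E_3(x) / (1-x)^5,
# where E_3(x) = 1 + 11x + 11x^2 + x^3 has the Eulerian numbers A(4,k)
# as coefficients.

def polylog_neg(n, x, terms=2000):
    """Truncation of Li_{-n}(x) = sum_{j >= 1} j^n x^j; needs |x| < 1."""
    return sum(j ** n * x ** j for j in range(1, terms))

x = 0.3
lhs = polylog_neg(4, x)
rhs = x * (1 + 11 * x + 11 * x ** 2 + x ** 3) / (1 - x) ** 5
print(abs(lhs - rhs) < 1e-9)   # True
```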

These particular coefficients also have useful combinatorial interpretations. For instance:

  • The binomial coefficient {\binom{n}{k}} is of course the number of {k}-element subsets of {\{1,\dots,n\}}.
  • The unsigned Stirling numbers {|s(n,k)|} of the first kind are the number of permutations of {\{1,\dots,n\}} with exactly {k} cycles. The signed Stirling numbers {s(n,k)} are then given by the formula {s(n,k) = (-1)^{n-k} |s(n,k)|}.
  • The Stirling numbers {S(n,k)} of the second kind are the number of ways to partition {\{1,\dots,n\}} into {k} non-empty subsets.
  • The Eulerian numbers {A(n,k)} are the number of permutations of {\{1,\dots,n\}} with exactly {k} ascents.

These coefficients behave similarly to each other in several ways. For instance, the binomial coefficients {\binom{n}{k}} obey the well known Pascal identity

\displaystyle  \binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}

(with the convention that {\binom{n}{k}} vanishes outside of the range {0 \leq k \leq n}). In a similar spirit, the unsigned Stirling numbers {|s(n,k)|} of the first kind obey the identity

\displaystyle  |s(n+1,k)| = n |s(n,k)| + |s(n,k-1)|

and the signed counterparts {s(n,k)} obey the identity

\displaystyle  s(n+1,k) = -n s(n,k) + s(n,k-1).

The Stirling numbers of the second kind {S(n,k)} obey the identity

\displaystyle  S(n+1,k) = k S(n,k) + S(n,k-1)

and the Eulerian numbers {A(n,k)} obey the identity

\displaystyle  A(n+1,k) = (k+1) A(n,k) + (n-k+1) A(n,k-1).
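All four recurrences have the same shape, so one can build each triangle from a single helper and check the values quoted earlier in this post (an added reference implementation, with rows and entries {0}-indexed as in the text):

```python
# Build a triangle T[n][k] from T[0] = [1] and the rule
# T[n+1][k] = rule(n, k, T[n][k], T[n][k-1]), with out-of-range entries 0.

def triangle(rule, rows):
    T = [[1]]
    for n in range(rows - 1):
        prev = T[-1]
        get = lambda k: prev[k] if 0 <= k <= n else 0
        T.append([rule(n, k, get(k), get(k - 1)) for k in range(n + 2)])
    return T

binomials = triangle(lambda n, k, a, b: a + b, 5)                        # Pascal
stirling1 = triangle(lambda n, k, a, b: -n * a + b, 5)                   # s(n,k)
stirling2 = triangle(lambda n, k, a, b: k * a + b, 5)                    # S(n,k)
eulerian  = triangle(lambda n, k, a, b: (k + 1) * a + (n - k + 1) * b, 5)

print(binomials[4])       # [1, 4, 6, 4, 1]
print(stirling1[3])       # [0, 2, -3, 1], as in (x)_3 = 2x - 3x^2 + x^3
print(stirling2[3])       # [0, 1, 3, 1]
print(eulerian[4][:4])    # [1, 11, 11, 1], the coefficients of E_3
```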

I was pleased to learn this week that the 2019 Abel Prize was awarded to Karen Uhlenbeck. Uhlenbeck laid much of the foundations of modern geometric PDE. One of the few papers I have in this area is in fact a joint paper with Gang Tian extending a famous singularity removal theorem of Uhlenbeck for four-dimensional Yang-Mills connections to higher dimensions. In both these papers, it is crucial to be able to construct “Coulomb gauges” for various connections, and there is a clever trick of Uhlenbeck for doing so, introduced in another important paper of hers, which is absolutely critical in my own paper with Tian. Nowadays it would be considered a standard technique, but it was definitely not so at the time that Uhlenbeck introduced it.

Suppose one has a smooth connection {A} on a (closed) unit ball {B(0,1)} in {{\bf R}^n} for some {n \geq 1}, taking values in some Lie algebra {{\mathfrak g}} associated to a compact Lie group {G}. This connection then has a curvature {F(A)}, defined in coordinates by the usual formula

\displaystyle F(A)_{\alpha \beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha + [A_\alpha,A_\beta]. \ \ \ \ \ (1)

It is natural to place the curvature in a scale-invariant space such as {L^{n/2}(B(0,1))}, and then the natural space for the connection would be the Sobolev space {W^{n/2,1}(B(0,1))}. It is easy to see from (1) and Sobolev embedding that if {A} is bounded in {W^{n/2,1}(B(0,1))}, then {F(A)} will be bounded in {L^{n/2}(B(0,1))}. One can then ask the converse question: if {F(A)} is bounded in {L^{n/2}(B(0,1))}, is {A} bounded in {W^{n/2,1}(B(0,1))}? This can be viewed as asking whether the curvature equation (1) enjoys “elliptic regularity”.

There is a basic obstruction provided by gauge invariance. For any smooth gauge {U: B(0,1) \rightarrow G} taking values in the Lie group, one can gauge transform {A} to

\displaystyle A^U_\alpha := U^{-1} \partial_\alpha U + U^{-1} A_\alpha U

and then a brief calculation shows that the curvature is conjugated to

\displaystyle F(A^U)_{\alpha \beta} = U^{-1} F(A)_{\alpha \beta} U.

This gauge symmetry does not affect the {L^{n/2}(B(0,1))} norm of the curvature tensor {F(A)}, but can make the connection {A} extremely large in {W^{n/2,1}(B(0,1))}, since there is no control on how wildly {U} can oscillate in space.

However, one can hope to overcome this problem by gauge fixing: perhaps if {F(A)} is bounded in {L^{n/2}(B(0,1))}, then one can make {A} bounded in {W^{n/2,1}(B(0,1))} after applying a gauge transformation. The basic and useful result of Uhlenbeck is that this can be done if the {L^{n/2}} norm of {F(A)} is sufficiently small (and then the conclusion is that {A} is small in {W^{n/2,1}}). (For large connections there is a serious issue related to the Gribov ambiguity.) In my (much) later paper with Tian, we adapted this argument, replacing Lebesgue spaces by Morrey space counterparts. (This result was also independently obtained at about the same time by Meyer and Rivière.)

To make the problem elliptic, one can try to impose the Coulomb gauge condition

\displaystyle \partial^\alpha A_\alpha = 0 \ \ \ \ \ (2)

(also known as the Lorenz gauge or Hodge gauge in various papers), together with a natural boundary condition on {\partial B(0,1)} that will not be discussed further here. This turns (1), (2) into a divergence-curl system that is elliptic at the linear level at least. Indeed if one takes the divergence of (1) using (2) one sees that

\displaystyle \partial^\alpha F(A)_{\alpha \beta} = \Delta A_\beta + \partial^\alpha [A_\alpha,A_\beta] \ \ \ \ \ (3)

and if one could somehow ignore the nonlinear term {\partial^\alpha [A_\alpha,A_\beta]} then we would get the required regularity on {A} by standard elliptic regularity estimates.

The problem is then how to handle the nonlinear term. If we already knew that {A} was small in the right norm {W^{n/2,1}(B(0,1))} then one can use Sobolev embedding, Hölder’s inequality, and elliptic regularity to show that the second term in (3) is small compared to the first term, and so one could then hope to eliminate it by perturbative analysis. However, proving that {A} is small in this norm is exactly what we are trying to prove! So this approach seems circular.

Uhlenbeck’s clever way out of this circularity is a textbook example of what is now known as a “continuity” argument. Instead of trying to work just with the original connection {A}, one works with the rescaled connections {A^{(t)}_\alpha(x) := t A_\alpha(tx)} for {0 \leq t \leq 1}, with associated rescaled curvatures {F(A^{(t)})_{\alpha \beta}(x) = t^2 F(A)_{\alpha \beta}(tx)}. If the original curvature {F(A)} is small in {L^{n/2}} norm (e.g. bounded by some small {\varepsilon>0}), then so are all the rescaled curvatures {F(A^{(t)})}. We want to obtain a Coulomb gauge at time {t=1}; this is difficult to do directly, but it is trivial to obtain a Coulomb gauge at time {t=0}, because the connection vanishes at this time. On the other hand, once one has successfully obtained a Coulomb gauge at some time {t \in [0,1]} with {A^{(t)}} small in the natural norm {W^{n/2,1}} (say bounded by {C \varepsilon} for some constant {C} which is large in absolute terms, but not so large compared with say {1/\varepsilon}), the perturbative argument mentioned earlier (combined with the qualitative hypothesis that {A} is smooth) actually works to show that a Coulomb gauge can also be constructed and be small for all times {t' \in [0,1]} sufficiently close to {t}; furthermore, the perturbative analysis actually shows that the nearby gauges enjoy a slightly better bound on the {W^{n/2,1}} norm, say {C\varepsilon/2} rather than {C\varepsilon}. As a consequence of this, the set of times {t} for which one has a good Coulomb gauge obeying the claimed estimates is both open and closed in {[0,1]}, and also contains {t=0}. Since the unit interval {[0,1]} is connected, it must then also contain {t=1}. This concludes the proof.

One of the lessons I drew from this example is to not be deterred (especially in PDE) by an argument seeming to be circular; if the argument is still sufficiently “nontrivial” in nature, it can often be modified into a usefully non-circular argument that achieves what one wants (possibly under an additional qualitative hypothesis, such as a continuity or smoothness hypothesis).

Last week, we had Peter Scholze give an interesting distinguished lecture series here at UCLA on “Prismatic Cohomology”, which is a new type of cohomology theory worked out by Scholze and Bhargav Bhatt. (Video of the talks will be available shortly; for now we have some notes taken by two notetakers in the audience on that web page.) My understanding of this (speaking as someone that is rather far removed from this area) is that it is progress towards the “motivic” dream of being able to define cohomology {H^i(X/\overline{A}, A)} for varieties {X} (or similar objects) defined over arbitrary commutative rings {\overline{A}}, and with coefficients in another arbitrary commutative ring {A}. Currently, we have various flavours of cohomology that only work for certain types of domain rings {\overline{A}} and coefficient rings {A}:

  • Singular cohomology, which roughly speaking works when the domain ring {\overline{A}} is a characteristic zero field such as {{\bf R}} or {{\bf C}}, but can allow for arbitrary coefficients {A};
  • de Rham cohomology, which roughly speaking works as long as the coefficient ring {A} is the same as the domain ring {\overline{A}} (or a homomorphic image thereof), as one can only talk about {A}-valued differential forms if the underlying space is also defined over {A};
  • {\ell}-adic cohomology, which is a remarkably powerful application of étale cohomology, but only works well when the coefficient ring {A = {\bf Z}_\ell} is localised around a prime {\ell} that is different from the characteristic {p} of the domain ring {\overline{A}}; and
  • Crystalline cohomology, in which the domain ring is a field {k} of some finite characteristic {p}, but the coefficient ring {A} can be a slight deformation of {k}, such as the ring of Witt vectors of {k}.

There are various relationships between the cohomology theories, for instance de Rham cohomology coincides with singular cohomology for smooth varieties in the limiting case {A=\overline{A} = {\bf R}}. The following picture Scholze drew in his first lecture captures these sorts of relationships nicely:


The new prismatic cohomology of Bhatt and Scholze unifies many of these cohomologies in the “neighbourhood” of the point {(p,p)} in the above diagram, in which the domain ring {\overline{A}} and the coefficient ring {A} are both thought of as being “close to characteristic {p}” in some sense, so that the dilates {p\overline{A}, pA} of these rings are either zero, or “small”. For instance, the {p}-adic ring {{\bf Z}_p} is technically of characteristic {0}, but {p {\bf Z}_p} is a “small” ideal of {{\bf Z}_p} (it consists of those elements of {{\bf Z}_p} of {p}-adic norm at most {1/p}), so one can think of {{\bf Z}_p} as being “close to characteristic {p}” in some sense. Scholze drew a “zoomed in” version of the previous diagram to informally describe the types of rings {\overline{A}, A} for which prismatic cohomology is effective:


To define prismatic cohomology rings {H^i_\Delta(X/\overline{A}, A)} one needs a “prism”: a ring homomorphism from {A} to {\overline{A}} equipped with a “Frobenius-like” endomorphism {\phi: A \rightarrow A} obeying some axioms. By tuning these homomorphisms one can recover existing cohomology theories like crystalline or de Rham cohomology as special cases of prismatic cohomology. These specialisations are analogous to how a prism splits white light into various individual colours, giving rise to the terminology “prismatic”, and depicted by this further diagram of Scholze:


(And yes, Peter confirmed that he and Bhargav were inspired by the Dark Side of the Moon album cover in selecting the terminology.)

There was an abstract definition of prismatic cohomology (as being the essentially unique cohomology arising from prisms that obeyed certain natural axioms), but there was also a more concrete way to view them in terms of coordinates, as a “{q}-deformation” of de Rham cohomology. Whereas in de Rham cohomology one worked with derivative operators {d} that for instance applied to monomials {t^n} by the usual formula

\displaystyle d(t^n) = n t^{n-1} dt,

prismatic cohomology in coordinates can be computed using a “{q}-derivative” operator {d_q} that for instance applies to monomials {t^n} by the formula

\displaystyle d_q (t^n) = [n]_q t^{n-1} d_q t

where
\displaystyle [n]_q = \frac{q^n-1}{q-1} = 1 + q + \dots + q^{n-1}

is the “{q}-analogue” of {n} (a polynomial in {q} that equals {n} in the limit {q=1}). (The {q}-analogues become more complicated for more general forms than these.) In this more concrete setting, the fact that prismatic cohomology is independent of the choice of coordinates apparently becomes quite a non-trivial theorem.
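To make the {q}-analogue concrete, here is a tiny added sketch (the post does not pin down the operator, but the standard choice {d_q f(t) = \frac{f(qt)-f(t)}{qt-t}} reproduces the displayed formula on monomials):

```python
# Check that the q-difference quotient sends t^n to [n]_q t^{n-1},
# and that [n]_q degenerates to the classical n at q = 1.

from fractions import Fraction

def q_int(n, q):
    """[n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q ** j for j in range(n))

def dq_of_monomial(n, q, t):
    """The q-derivative (f(qt) - f(t)) / (qt - t) applied to f(t) = t^n."""
    return ((q * t) ** n - t ** n) / ((q - 1) * t)

q, t, n = Fraction(2), Fraction(3), 4
print(dq_of_monomial(n, q, t) == q_int(n, q) * t ** (n - 1))   # True
print(q_int(4, 2), q_int(4, 1))                                # 15 4
```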


Now that Google Plus is closing, the brief announcements that I used to post over there will now be migrated over to this blog.  (Some people have suggested other platforms for this also, such as Twitter, but I think for now I can use my existing blog to accommodate these sorts of short posts.)

  1. The NSF-CBMS regional research conferences are now requesting proposals for the 2020 conference series.  (I was the principal lecturer for one of these conferences back in 2005; it was a very intensive experience, but quite enjoyable, and I am quite pleased with the book that resulted from it.)
  2. The awardees for the Sloan Fellowships for 2019 have now been announced.  (I was on the committee for the mathematics awards.  For the usual reasons involving the confidentiality of letters of reference and other sensitive information, I will unfortunately be unable to answer any specific questions about our committee deliberations.)

I have just uploaded to the arXiv my paper “On the universality of the incompressible Euler equation on compact manifolds, II. Non-rigidity of Euler flows“, submitted to Pure and Applied Functional Analysis. This paper continues my attempts to establish “universality” properties of the Euler equations on Riemannian manifolds {(M,g)}, as I conjecture that the freedom to set the metric {g} ought to allow one to “program” such Euler flows to exhibit a wide range of behaviour, and in particular to achieve finite time blowup (if the dimension is sufficiently large, at least).

In coordinates, the Euler equations read

\displaystyle \partial_t u^k + u^j \nabla_j u^k = - \nabla^k p \ \ \ \ \ (1)


\displaystyle \nabla_k u^k = 0

where {p: [0,T] \rightarrow C^\infty(M)} is the pressure field and {u: [0,T] \rightarrow \Gamma(TM)} is the velocity field, and {\nabla} denotes the Levi-Civita connection with the usual Penrose abstract index notation conventions; we restrict attention here to the case where {u,p} are smooth and {M} is compact, smooth, orientable, connected, and without boundary. Let’s call {u} an Euler flow on {M} (for the time interval {[0,T]}) if it solves the above system of equations for some pressure {p}, and an incompressible flow if it just obeys the divergence-free relation {\nabla_k u^k=0}. Thus every Euler flow is an incompressible flow, but the converse is certainly not true; for instance the various conservation laws of the Euler equation, such as conservation of energy, will already block most incompressible flows from being an Euler flow, or even being approximated in a reasonably strong topology by such Euler flows.

However, one can ask if an incompressible flow can be extended to an Euler flow by adding some additional dimensions to {M}. In my paper, I formalise this by considering warped products {\tilde M} of {M} which (as a smooth manifold) are products {\tilde M = M \times ({\bf R}/{\bf Z})^m} of {M} with a torus, with a metric {\tilde g} given by

\displaystyle d \tilde g^2 = g_{ij}(x) dx^i dx^j + \sum_{s=1}^m \tilde g_{ss}(x) (d\theta^s)^2

for {(x,\theta) \in \tilde M}, where {\theta^1,\dots,\theta^m} are the coordinates of the torus {({\bf R}/{\bf Z})^m}, and {\tilde g_{ss}: M \rightarrow {\bf R}^+} are smooth positive coefficients for {s=1,\dots,m}; in order to preserve the incompressibility condition, we also require the volume preservation property

\displaystyle \prod_{s=1}^m \tilde g_{ss}(x) = 1 \ \ \ \ \ (2)

though in practice we can quickly dispose of this condition by adding one further “dummy” dimension to the torus {({\bf R}/{\bf Z})^m}. We say that an incompressible flow {u} is extendible to an Euler flow if there exists a warped product {\tilde M} extending {M}, and an Euler flow {\tilde u} on {\tilde M} of the form

\displaystyle \tilde u(t,(x,\theta)) = u^i(t,x) \frac{d}{dx^i} + \sum_{s=1}^m \tilde u^s(t,x) \frac{d}{d\theta^s}

for some “swirl” fields {\tilde u^s: [0,T] \times M \rightarrow {\bf R}}. The situation here is motivated by the familiar situation of studying axisymmetric Euler flows {\tilde u} on {{\bf R}^3}, which in cylindrical coordinates take the form

\displaystyle \tilde u(t,(r,z,\theta)) = u^r(t,r,z) \frac{d}{dr} + u^z(t,r,z) \frac{d}{dz} + \tilde u^\theta(t,r,z) \frac{d}{d\theta}.

The base component

\displaystyle u^r(t,r,z) \frac{d}{dr} + u^z(t,r,z) \frac{d}{dz}

of this flow is then a flow on the two-dimensional {r,z} plane which is not quite incompressible (due to the failure of the volume preservation condition (2) in this case) but still satisfies a system of equations (coupled with a passive scalar field {\rho} that is basically the square of the swirl {\tilde u^\theta}) that is reminiscent of the Boussinesq equations.

On a fixed {d}-dimensional manifold {(M,g)}, let {{\mathcal F}} denote the space of incompressible flows {u: [0,T] \rightarrow \Gamma(TM)}, equipped with the smooth topology (in spacetime), and let {{\mathcal E} \subset {\mathcal F}} denote the space of such flows that are extendible to Euler flows. Our main theorem is

Theorem 1

  • (i) (Generic inextendibility) Assume {d \geq 3}. Then {{\mathcal E}} is of the first category in {{\mathcal F}} (the countable union of nowhere dense sets in {{\mathcal F}}).
  • (ii) (Non-rigidity) Assume {M = ({\bf R}/{\bf Z})^d} (with an arbitrary metric {g}). Then {{\mathcal E}} is somewhere dense in {{\mathcal F}} (that is, the closure of {{\mathcal E}} has non-empty interior).

More informally, starting with an incompressible flow {u}, one usually cannot extend it to an Euler flow just by extending the manifold, warping the metric, and adding swirl coefficients, even if one is allowed to select the dimension of the extension, as well as the metric and coefficients, arbitrarily. However, many such flows can be perturbed to be extendible in such a manner (though different perturbations will require different extensions; in particular the dimension of the extension will not be fixed). Among other things, this means that conservation laws such as energy (or momentum, helicity, or circulation) no longer present an obstruction when one is allowed to perform an extension (basically this is because the swirl components of the extension can exchange energy (or momentum, etc.) with the base components in a basically arbitrary fashion).

These results fall short of my hopes to use the ability to extend the manifold to create universal behaviour in Euler flows, because of the fact that each flow requires a different extension in order to achieve the desired dynamics. Still it does seem to provide a little bit of support to the idea that high-dimensional Euler flows are quite “flexible” in their behaviour, though not completely so due to the generic inextendibility phenomenon. This flexibility reminds me a little bit of the flexibility of weak solutions to equations such as the Euler equations provided by the “{h}-principle” of Gromov and its variants (as discussed in these recent notes), although in this case the flexibility comes from adding additional dimensions, rather than by repeatedly adding high-frequency corrections to the solution.

The proof of part (i) of the theorem basically proceeds by a dimension counting argument (similar to that in the proof of Proposition 9 of these recent lecture notes of mine). Heuristically, the point is that an arbitrary incompressible flow {u} is essentially determined by {d-1} independent functions of space and time, whereas the warping factors {\tilde g_{ss}} are functions of space only, the pressure field is one function of space and time, and the swirl fields {\tilde u^s} are technically functions of both space and time, but have the same number of degrees of freedom as a function just of space, because they solve an evolution equation. When {d>2}, this means that there are fewer unknown functions of space and time than prescribed functions of space and time, which is the source of the generic inextendibility. This simple argument breaks down when {d=2}, but we do not know whether the claim is actually false in this case.

The proof of part (ii) proceeds by direct calculation of the effect of the warping factors and swirl velocities, which effectively create a forcing term (of Boussinesq type) in the first equation of (1) that is a combination of functions of the Eulerian spatial coordinates {x^i} (coming from the warping factors) and the Lagrangian spatial coordinates {a^\beta} (which arise from the swirl velocities, which are passively transported by the flow). In a non-empty open subset of {{\mathcal F}}, the combination of these coordinates becomes a non-degenerate set of coordinates for spacetime, and one can then use the Stone-Weierstrass theorem to conclude. The requirement that {M} be topologically a torus is a technical hypothesis in order to avoid topological obstructions such as the hairy ball theorem, but it may be that the hypothesis can be dropped (and it may in fact be true, in the {M = ({\bf R}/{\bf Z})^d} case at least, that {{\mathcal E}} is dense in all of {{\mathcal F}}, not just in a non-empty open subset).


[This post is collectively authored by the ICM structure committee, whom I am currently chairing – T.]

The International Congress of Mathematicians (ICM) is widely considered to be the premier conference for mathematicians.  It is held every four years; for instance, the 2018 ICM was held in Rio de Janeiro, Brazil, and the 2022 ICM is to be held in Saint Petersburg, Russia.  The most high-profile event at the ICM is the awarding of the 10 or so prizes of the International Mathematical Union (IMU) such as the Fields Medal, and the lectures by the prize laureates; but there are also approximately twenty plenary lectures from leading experts across all mathematical disciplines, several public lectures of a less technical nature, about 180 more specialised invited lectures divided into about twenty section panels, each corresponding to a mathematical field (or range of fields), as well as various outreach and social activities, exhibits and satellite programs, and meetings of the IMU General Assembly; see for instance the program for the 2018 ICM for a sample schedule.  In addition to these official events, the ICM also provides more informal networking opportunities, in particular allowing mathematicians at all stages of career, and from all backgrounds and nationalities, to interact with each other.

For each Congress, a Program Committee (together with subcommittees for each section) is entrusted with the task of selecting who will give the lectures of the ICM (excluding the lectures by prize laureates, which are selected by separate prize committees); they also decide how to appropriately subdivide the entire field of mathematics into sections.   Given the prestigious nature of invitations from the ICM to present a lecture, this has been an important and challenging task, but one that past Program Committees have managed to fulfill in a largely satisfactory fashion.

Nevertheless, in the last few years there has been substantial discussion regarding ways in which the process for structuring the ICM and inviting lecturers could be further improved, for instance to reflect the fact that the distribution of mathematics across various fields has evolved over time.   At the 2018 ICM General Assembly meeting in Rio de Janeiro, a resolution was adopted to create a new Structure Committee to take on some of the responsibilities previously delegated to the Program Committee, focusing specifically on the structure of the scientific program.  On the other hand, the Structure Committee is not involved with the format for prize lectures, the selection of prize laureates, or the selection of plenary and sectional lecturers; these tasks are instead the responsibilities of other committees (the local Organizing Committee, the prize committees, and the Program Committee respectively).

The first Structure Committee was constituted on 1 Jan 2019, with the following members:

As one of our first actions, we on the committee are using this blog post to solicit input from the mathematical community regarding the topics within our remit.  Among the specific questions (in no particular order) for which we seek comments are the following:

  1. Are there suggestions to change the format of the ICM that would increase its value to the mathematical community?
  2. Are there suggestions to change the format of the ICM that would encourage greater participation and interest in attending, particularly with regards to junior researchers and mathematicians from developing countries?
  3. What is the correct balance between research and exposition in the lectures?  For instance, how strongly should one emphasize the importance of good exposition when selecting plenary and sectional speakers?  Should there be “Bourbaki style” expository talks presenting work not necessarily authored by the speaker?
  4. Is the balance between plenary talks, sectional talks, and public talks at an optimal level?  There is only a finite amount of space in the calendar, so any increase in the number or length of one of these types of talks will come at the expense of another.
  5. The ICM is generally perceived to be more important to pure mathematics than to applied mathematics.  In what ways can the ICM be made more relevant and attractive to applied mathematicians, or should one not try to do so?
  6. Are there structural barriers that cause certain areas or styles of mathematics (such as applied or interdisciplinary mathematics) or certain groups of mathematicians to be under-represented at the ICM?  What, if anything, can be done to mitigate these barriers?

Of course, we do not expect these complex and difficult questions to be resolved within this blog post, and debating these and other issues would likely be a major component of our internal committee discussions.  Nevertheless, we would value constructive comments towards the above questions (or on other topics within the scope of our committee) to help inform these subsequent discussions.  We therefore welcome and invite such commentary, either as responses to this blog post, or sent privately to one of the members of our committee.  We would also be interested in having readers share their personal experiences at past congresses, and how it compares with other major conferences of this type.   (But in order to keep the discussion focused and constructive, we request that comments here refrain from discussing topics that are out of the scope of this committee, such as suggesting specific potential speakers for the next congress, which is a task instead for the 2022 ICM Program Committee.)

While talking mathematics with a postdoc here at UCLA (March Boedihardjo) we came across the following matrix problem which we managed to solve, but the proof was cute and the process of discovering it was fun, so I thought I would present the problem here as a puzzle without revealing the solution for now.

The problem involves word maps on a matrix group, which for sake of discussion we will take to be the special orthogonal group SO(3) of real 3 \times 3 matrices (one of the smallest matrix groups that contains a copy of the free group, which incidentally is the key observation powering the Banach-Tarski paradox).  Given any abstract word w in two generators x,y and their inverses (i.e., an element of the free group {\bf F}_2), one can define the word map w: SO(3) \times SO(3) \to SO(3) simply by substituting a pair of matrices in SO(3) into these generators.  For instance, if one has the word w = x y x^{-2} y^2 x, then the corresponding word map w: SO(3) \times SO(3) \to SO(3) is given by

\displaystyle w(A,B) := ABA^{-2} B^2 A

for A,B \in SO(3).  Because SO(3) contains a copy of the free group, we see that the word map is non-trivial (i.e., not identically equal to the identity) if and only if the word itself is non-trivial.

Anyway, here is the problem:

Problem. Does there exist a sequence w_1, w_2, \dots of non-trivial word maps w_n: SO(3) \times SO(3) \to SO(3) that converge uniformly to the identity map?

To put it another way, given any \varepsilon > 0, does there exist a non-trivial word w such that \|w(A,B) - 1 \| \leq \varepsilon for all A,B \in SO(3), where \| \| denotes (say) the operator norm, and 1 denotes the identity matrix in SO(3)?
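Since the problem is quite concrete, one can experiment with it numerically.  The following is a minimal sketch (my own code, not part of the puzzle; the letter encoding and the QR-based sampler are ad hoc choices) that evaluates a word map at random elements of SO(3) and measures the operator-norm distance to the identity:

```python
import numpy as np

def random_rotation(rng):
    # sample a (roughly Haar-distributed) element of SO(3) via QR factorization
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))   # normalize the factorization
    if np.linalg.det(q) < 0:      # flip one column to force determinant +1
        q[:, 0] = -q[:, 0]
    return q

def word_map(word, A, B):
    # evaluate a word over the letters x, y, X, Y (with X = x^{-1}, Y = y^{-1});
    # the inverse of a rotation matrix is its transpose
    mats = {'x': A, 'y': B, 'X': A.T, 'Y': B.T}
    result = np.eye(3)
    for letter in word:
        result = result @ mats[letter]
    return result

rng = np.random.default_rng(0)
A, B = random_rotation(rng), random_rotation(rng)
W = word_map('xyXY', A, B)               # the commutator word x y x^{-1} y^{-1}
dist = np.linalg.norm(W - np.eye(3), 2)  # operator-norm distance to the identity
```

For a generic pair (A,B) a short word such as the commutator stays far from the identity; the interesting question is what happens for cleverly chosen long words.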

As I said, I don’t want to spoil the fun of working out this problem, so I will leave it as a challenge. Readers are welcome to share their thoughts, partial solutions, or full solutions in the comments below.

We consider the incompressible Euler equations on the (Eulerian) torus {\mathbf{T}_E := ({\bf R}/{\bf Z})^d}, which we write in divergence form as

\displaystyle  \partial_t u^i + \partial_j(u^j u^i) = - \eta^{ij} \partial_j p \ \ \ \ \ (1)

\displaystyle  \partial_i u^i = 0, \ \ \ \ \ (2)

where {\eta^{ij}} is the (inverse) Euclidean metric. Here we use the summation conventions for indices such as {i,j,l} (reserving the symbol {k} for other purposes), and are retaining the convention from Notes 1 of denoting vector fields using superscripted indices rather than subscripted indices, as we will eventually need to change variables to Lagrangian coordinates at some point. In principle, much of the discussion in this set of notes (particularly regarding the positive direction of Onsager’s conjecture) could also be modified to treat non-periodic solutions that decay at infinity if desired, but some non-trivial technical issues do arise in non-periodic settings for the negative direction.

As noted previously, the kinetic energy

\displaystyle  \frac{1}{2} \int_{\mathbf{T}_E} |u(t,x)|^2\ dx = \frac{1}{2} \int_{\mathbf{T}_E} \eta_{ij} u^i(t,x) u^j(t,x)\ dx

is formally conserved by the flow, where {\eta_{ij}} is the Euclidean metric. Indeed, if one assumes that {u,p} are continuously differentiable in both space and time on {[0,T] \times \mathbf{T}_E}, then one can multiply the equation (1) by {u^l} and contract against {\eta_{il}} to obtain

\displaystyle  \eta_{il} u^l \partial_t u^i + \eta_{il} u^l \partial_j (u^j u^i) = - \eta_{il} u^l \eta^{ij} \partial_j p = - u^j \partial_j p

which rearranges using (2) and the product rule to

\displaystyle  \partial_t (\frac{1}{2} \eta_{ij} u^i u^j) + \partial_j( \frac{1}{2} \eta_{il} u^i u^j u^l ) + \partial_j (u^j p) = 0

and then if one integrates this identity on {[0,T] \times \mathbf{T}_E} and uses Stokes’ theorem, one obtains the required energy conservation law

\displaystyle  \frac{1}{2} \int_{\mathbf{T}_E} \eta_{ij} u^i(T,x) u^j(T,x)\ dx = \frac{1}{2} \int_{\mathbf{T}_E} \eta_{ij} u^i(0,x) u^j(0,x)\ dx. \ \ \ \ \ (3)
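The algebraic heart of this derivation, namely that for divergence-free {u} the contraction of the inertial terms with {u} is a perfect space-time divergence, can be checked symbolically.  Here is a sketch of such a check (my own verification, not from the notes; it works in two dimensions, generating a divergence-free field from a stream function):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
psi = sp.Function('psi')(t, x, y)
# a stream function automatically gives a divergence-free field u = (psi_y, -psi_x)
u = [sp.diff(psi, y), -sp.diff(psi, x)]

# contraction of the inertial terms of (1) with u:  u_i ( d_t u^i + d_j (u^j u^i) )
lhs = sum(u[i] * (sp.diff(u[i], t)
                  + sp.diff(u[0] * u[i], x)
                  + sp.diff(u[1] * u[i], y)) for i in range(2))

# the claimed divergence form:  d_t (|u|^2/2) + d_j (u^j |u|^2/2)
e = (u[0]**2 + u[1]**2) / 2
rhs = sp.diff(e, t) + sp.diff(u[0] * e, x) + sp.diff(u[1] * e, y)

assert sp.simplify(lhs - rhs) == 0   # the two expressions agree identically
```

The pressure term is handled the same way: contracting {-\eta^{ij} \partial_j p} with {\eta_{il} u^l} gives {-u^j \partial_j p = -\partial_j (u^j p)} once {\partial_j u^j = 0}.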

It is then natural to ask whether the energy conservation law continues to hold for lower regularity solutions, in particular weak solutions that only obey (1), (2) in a distributional sense. The above argument no longer works as stated, because {u^i} is not a test function and so one cannot immediately integrate (1) against {u^i}. And indeed, as we shall soon see, it is now known that once the regularity of {u} is low enough, energy can “escape to frequency infinity”, leading to failure of the energy conservation law, a phenomenon known in physics as anomalous energy dissipation.

But what is the precise level of regularity needed for this anomalous energy dissipation to occur? To make this question precise, we need a quantitative notion of regularity. One such measure is given by the Hölder space {C^{0,\alpha}(\mathbf{T}_E \rightarrow {\bf R})} for {0 < \alpha < 1}, defined as the space of continuous functions {f: \mathbf{T}_E \rightarrow {\bf R}} whose norm

\displaystyle  \| f \|_{C^{0,\alpha}(\mathbf{T}_E \rightarrow {\bf R})} := \sup_{x \in \mathbf{T}_E} |f(x)| + \sup_{x,y \in \mathbf{T}_E: x \neq y} \frac{|f(x)-f(y)|}{|x-y|^\alpha}

is finite. The space {C^{0,\alpha}} lies between the space {C^0} of continuous functions and the space {C^1} of continuously differentiable functions, and informally describes a space of functions that is “{\alpha} times differentiable” in some sense. The above derivation of the energy conservation law involved the integral

\displaystyle  \int_{\mathbf{T}_E} \eta_{ik} u^k \partial_j (u^j u^i)\ dx

that roughly speaking measures the fluctuation in energy. Informally, if we could take the derivative in this integrand and somehow “integrate by parts” to split the derivative “equally” amongst the three factors, one would morally arrive at an expression that resembles

\displaystyle  \int_{\mathbf{T}_E} \nabla^{1/3} u \nabla^{1/3} u \nabla^{1/3} u\ dx

which suggests that the integral can be made sense of for {u \in C^0_t C^{0,\alpha}_x} once {\alpha > 1/3}. More precisely, one can make

Conjecture 1 (Onsager’s conjecture) Let {0 < \alpha < 1} and {d \geq 2}, and let {0 < T < \infty}.

  • (i) If {\alpha > 1/3}, then any weak solution {u \in C^0_t C^{0,\alpha}([0,T] \times \mathbf{T}_E \rightarrow {\bf R})} to the Euler equations (in the Leray form {\partial_t u + \partial_j {\mathbb P} (u^j u) = u_0(x) \delta_0(t)}) obeys the energy conservation law (3).
  • (ii) If {\alpha \leq 1/3}, then there exist weak solutions {u \in C^0_t C^{0,\alpha}([0,T] \times \mathbf{T}_E \rightarrow {\bf R})} to the Euler equations (in Leray form) which do not obey energy conservation.

This conjecture was originally arrived at by Onsager by a somewhat different heuristic derivation; see Remark 7. The numerology is also compatible with that arising from the Kolmogorov theory of turbulence (discussed in this previous post), but we will not discuss this interesting connection further here.
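Returning to the Hölder norm used in the conjecture, a small numerical sketch may help make it concrete (my own illustration, on an interval rather than on {\mathbf{T}_E}); it approximates the {C^{0,\alpha}} norm on a finite grid:

```python
import numpy as np

def holder_norm(f_vals, xs, alpha):
    # discrete approximation to sup |f| + sup_{x != y} |f(x)-f(y)| / |x-y|^alpha
    sup_f = np.max(np.abs(f_vals))
    diffs = np.abs(f_vals[:, None] - f_vals[None, :])
    dists = np.abs(xs[:, None] - xs[None, :])
    mask = dists > 0
    seminorm = np.max(diffs[mask] / dists[mask] ** alpha)
    return sup_f + seminorm

xs = np.linspace(0.0, 1.0, 201)
f = np.sqrt(xs)
norm_half = holder_norm(f, xs, 0.5)  # approximately 2.0: pairs (x, 0) attain ratio 1
```

For {f(x) = \sqrt{x}} on {[0,1]} with {\alpha = 1/2}, both the supremum and the seminorm equal {1}, so the discrete norm comes out to {2}, matching the exact value.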

The positive part (i) of the Onsager conjecture was established by Constantin, E, and Titi, building upon earlier partial results by Eyink; the proof is a relatively straightforward application of Littlewood-Paley theory, and they were also able to work in larger function spaces than {C^0_t C^{0,\alpha}_x} (using {L^3_x}-based Besov spaces instead of Hölder spaces, see Exercise 3 below). The negative part (ii) is harder. Discontinuous weak solutions to the Euler equations that did not conserve energy were first constructed by Scheffer, with an alternate construction later given by Shnirelman. De Lellis and Szekelyhidi noticed the resemblance of this problem to that of the Nash-Kuiper theorem in the isometric embedding problem, and began adapting the convex integration technique used in that theorem to construct weak solutions of the Euler equations. This began a long series of papers in which increasingly regular weak solutions that failed to conserve energy were constructed, culminating in a recent paper of Isett establishing part (ii) of the Onsager conjecture in the non-endpoint case {\alpha < 1/3} in three and higher dimensions {d \geq 3}; the endpoint {\alpha = 1/3} remains open. (In two dimensions it may be the case that the positive results extend to a larger range than Onsager's conjecture predicts; see this paper of Cheskidov, Lopes Filho, Nussenzveig Lopes, and Shvydkoy for more discussion.) Further work continues into several variations of the Onsager conjecture, in which one looks at other differential equations, other function spaces, or other criteria for bad behavior than breakdown of energy conservation. See this recent survey of de Lellis and Szekelyhidi for more discussion.
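To supplement the {\nabla^{1/3}} heuristic above with the Littlewood-Paley language used in the Constantin-E-Titi argument (this paraphrase of the standard flux count is mine, not taken from the text): if {P_N} denotes a Littlewood-Paley projection to frequencies comparable to {N}, then the flux of energy past frequency {N} is schematically

\displaystyle  \Pi_N \sim N \| P_N u \|_{L^\infty_x}^3 \lesssim N ( N^{-\alpha} \| u \|_{C^{0,\alpha}} )^3 = N^{1-3\alpha} \| u \|_{C^{0,\alpha}}^3,

which tends to zero as {N \rightarrow \infty} precisely when {\alpha > 1/3}, consistent with the dichotomy in the two parts of the conjecture.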

In these notes we will first establish (i), then discuss the convex integration method in the original context of the Nash-Kuiper embedding theorem. Before tackling the Onsager conjecture (ii) directly, we discuss a related construction of high-dimensional weak solutions in the Sobolev space {L^2_t H^s_x} for {s} close to {1/2}, which is slightly easier to establish, though still rather intricate. Finally, we discuss the modifications of that construction needed to establish (ii), though we shall stop short of a full proof of that part of the conjecture.

We thank Phil Isett for some comments and corrections.


This is the eleventh research thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

There are currently two strands of activity.  One is writing up the paper describing the combination of theoretical and numerical results needed to obtain the new bound \Lambda \leq 0.22.  The latest version of the writeup may be found here, in this directory.  The theoretical side of things has mostly been written up; the main remaining tasks right now are

  1. giving a more detailed description and illustration of the two major numerical verifications, namely the barrier verification that establishes a zero-free region H_t(x+iy) \neq 0 for 0 \leq t \leq 0.2, 0.2 \leq y \leq 1, |x - 6 \times 10^{10} - 83952| \leq 0.5, and the Dirichlet series bound that establishes a zero-free region for t = 0.2, 0.2 \leq y \leq 1, x \geq 6 \times 10^{10} + 83952; and
  2. giving more detail on the conditional results assuming more numerical verification of RH.

Meanwhile, several of us have been exploring the behaviour of the zeroes of H_t for negative t; this does not directly lead to any new progress on bounding \Lambda (though there is a good chance that it may simplify the proof of \Lambda \geq 0), but there have been some interesting numerical phenomena uncovered, as summarised in this set of slides.  One phenomenon is that for large negative t, many of the complex zeroes begin to organise themselves near the curves

\displaystyle y = -\frac{t}{2} \log \frac{x}{4\pi n(n+1)} - 1.

(An example of the agreement between the zeroes and these curves may be found here.)  We now have a (heuristic) theoretical explanation for this; we should have an approximation

\displaystyle H_t(x+iy) \approx B_t(x+iy) \sum_{n=1}^\infty \frac{b_n^t}{n^{s_*}}

in this region (where B_t, b_n^t, n^{s_*} are defined in equations (11), (15), (17) of the writeup), and the above curves arise from (an approximation of) those locations where two adjacent terms \frac{b_n^t}{n^{s_*}}, \frac{b_{n+1}^t}{(n+1)^{s_*}} in this series have equal magnitude (with the other terms being of lower order).
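For readers wishing to reproduce the picture, the curves themselves are trivial to evaluate; the following helper (my own code, simply transcribing the displayed formula) gives the {y}-coordinate of the {n}-th curve:

```python
import numpy as np

def heuristic_curve_y(x, t, n):
    # the n-th curve  y = -(t/2) log( x / (4 pi n (n+1)) ) - 1  near which the
    # complex zeroes of H_t organise for large negative t
    return -(t / 2.0) * np.log(x / (4.0 * np.pi * n * (n + 1))) - 1.0
```

Note that every curve passes through {y = -1} at {x = 4\pi n(n+1)}, independently of {t}.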

However, we only have a partial explanation at present of the interesting behaviour of the real zeroes at negative t; for instance, the surviving zeroes at extremely negative values of t appear to lie on the curves where the quantity N is close to a half-integer, where

\displaystyle \tilde x := x + \frac{\pi t}{4}

\displaystyle N := \sqrt{\frac{\tilde x}{4\pi}}.

The remaining zeroes exhibit a pattern in (N,u) coordinates that is approximately 1-periodic in N, where

\displaystyle u := \frac{4\pi |t|}{\tilde x}.

A plot of the zeroes in these coordinates (somewhat truncated due to the numerical range) may be found here.
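The change of variables is straightforward to script; this sketch (my own helper, transcribing the definitions of \tilde x, N and u above) converts a point (x,t) into the (N,u) coordinates used in the plot:

```python
import numpy as np

def to_Nu(x, t):
    # x_tilde = x + pi t / 4,  N = sqrt(x_tilde / (4 pi)),  u = 4 pi |t| / x_tilde
    x_tilde = x + np.pi * t / 4.0
    N = np.sqrt(x_tilde / (4.0 * np.pi))
    u = 4.0 * np.pi * abs(t) / x_tilde
    return N, u
```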

We do not yet have a total explanation of the phenomena seen in this picture.  It appears that we have an approximation

\displaystyle H_t(x) \approx A_t(x) \sum_{n=1}^\infty \exp( -\frac{|t| \log^2(n/N)}{4(1-\frac{iu}{8\pi})} - \frac{1+i\tilde x}{2} \log(n/N) )

where A_t(x) is the non-zero multiplier

\displaystyle A_t(x) := e^{\pi^2 t/64} M_0(\frac{1+i\tilde x}{2}) N^{-\frac{1+i\tilde x}{2}} \sqrt{\frac{\pi}{1-\frac{iu}{8\pi}}}

and

\displaystyle M_0(s) := \frac{1}{8}\frac{s(s-1)}{2}\pi^{-s/2} \sqrt{2\pi} \exp( (\frac{s}{2}-\frac{1}{2}) \log \frac{s}{2} - \frac{s}{2} ).

The derivation of this formula may be found on this wiki page.  However, our initial attempts to simplify the above approximation further have proven to be somewhat inaccurate numerically (in particular giving an incorrect prediction for the location of zeroes, as seen in this picture).  We are in the process of using numerics to try to resolve the discrepancies (see this page for some code and discussion).