I was deeply saddened to learn that Elias Stein died yesterday, aged 87.

I have talked about some of Eli’s older mathematical work in these blog posts.  He continued to be quite active mathematically in recent years, for instance finishing six papers (with various co-authors including Jean Bourgain, Mariusz Mirek, Błażej Wróbel, and Pavel Zorin-Kranich) in this year alone.  I last met him in Wrocław, Poland last September for a conference in his honour; he was in good health (and good spirits) then.   Here is a picture of Eli together with several of his students (including myself) who were at that meeting (taken from the conference web site):

Eli’s lectures were always masterpieces of clarity.  In one hour, he would set up a theorem, motivate it, explain the strategy, and execute it flawlessly; even after twenty years of teaching my own classes, I have yet to figure out his secret of somehow always being able to arrive at the natural finale of a mathematical presentation at the end of each hour without having to improvise at least a little bit partway through the lecture.  The clear and self-contained nature of his lectures (and his many books) was a large reason why I decided to specialise as a graduate student in harmonic analysis (though I would eventually return to other interests, such as analytic number theory, many years after my graduate studies).

Looking back at my time with Eli, I now realise that he was extraordinarily patient and understanding with the brash and naive teenager he had to meet with every week.  A key turning point in my own career came after my oral qualifying exams, in which I very nearly failed due to my overconfidence and lack of preparation, particularly in my chosen specialty of harmonic analysis.  After the exam, he sat down with me and told me, as gently and diplomatically as possible, that my performance was a disappointment, and that I seriously needed to solidify my mathematical knowledge.  This turned out to be exactly what I needed to hear; I got motivated to actually work properly so as not to disappoint my advisor again.

So many of us in the field of harmonic analysis were connected to Eli in one way or another; the field always felt to me like a large extended family, with Eli as one of the patriarchs.  He will be greatly missed.

[UPDATE: Here is Princeton’s obituary for Elias Stein.]

These lecture notes are a continuation of the 254A lecture notes from the previous quarter.

We consider the Euler equations for incompressible fluid flow on a Euclidean space ${{\bf R}^d}$; we will label ${{\bf R}^d}$ as the “Eulerian space” ${{\bf R}^d_E}$ (or “Euclidean space”, or “physical space”) to distinguish it from the “Lagrangian space” ${{\bf R}^d_L}$ (or “labels space”) that we will introduce shortly (but the reader is free to also ignore the ${E}$ or ${L}$ subscripts if he or she wishes). Elements of Eulerian space ${{\bf R}^d_E}$ will be referred to by symbols such as ${x}$; we use ${dx}$ to denote Lebesgue measure on ${{\bf R}^d_E}$, we use ${x^1,\dots,x^d}$ for the ${d}$ coordinates of ${x}$, and we use indices such as ${i,j,k}$ to index these coordinates (with the usual summation conventions); for instance ${\partial_i}$ denotes partial differentiation along the ${x^i}$ coordinate. (We use superscripts for coordinates ${x^i}$ instead of subscripts ${x_i}$ to be compatible with some differential geometry notation that we will use shortly; in particular, when using the summation notation, we will now be matching subscripts with superscripts for the pair of indices being summed.)

In Eulerian coordinates, the Euler equations read

$\displaystyle \partial_t u + u \cdot \nabla u = - \nabla p \ \ \ \ \ (1)$

$\displaystyle \nabla \cdot u = 0$

where ${u: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_E}$ is the velocity field and ${p: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}}$ is the pressure field. These are functions of time ${t \in [0,T)}$ and of the spatial location ${x \in {\bf R}^d_E}$. We will refer to the coordinates ${(t,x) = (t,x^1,\dots,x^d)}$ as Eulerian coordinates. However, if one reviews the physical derivation of the Euler equations from 254A Notes 0, before one takes the continuum limit, the fundamental unknowns were not the velocity field ${u}$ or the pressure field ${p}$, but rather the trajectories ${(x^{(a)}(t))_{a \in A}}$, which can be thought of as a single function ${x: [0,T) \times A \rightarrow {\bf R}^d_E}$ from the coordinates ${(t,a)}$ (where ${t}$ is a time and ${a}$ is an element of the label set ${A}$) to ${{\bf R}^d}$. The relationship between the trajectories ${x^{(a)}(t) = x(t,a)}$ and the velocity field was given by the informal relationship

$\displaystyle \partial_t x(t,a) \approx u( t, x(t,a) ). \ \ \ \ \ (2)$

We will refer to the coordinates ${(t,a)}$ as (discrete) Lagrangian coordinates for describing the fluid.

In view of this, it is natural to ask whether there is an alternate way to formulate the continuum limit of incompressible inviscid fluids, by using a continuous version ${(t,a)}$ of the Lagrangian coordinates, rather than Eulerian coordinates. This is indeed the case. Suppose for instance one has a smooth solution ${u, p}$ to the Euler equations on a spacetime slab ${[0,T) \times {\bf R}^d_E}$ in Eulerian coordinates; assume furthermore that the velocity field ${u}$ is uniformly bounded. We introduce another copy ${{\bf R}^d_L}$ of ${{\bf R}^d}$, which we call Lagrangian space or labels space; we use symbols such as ${a}$ to refer to elements of this space, ${da}$ to denote Lebesgue measure on ${{\bf R}^d_L}$, and ${a^1,\dots,a^d}$ to refer to the ${d}$ coordinates of ${a}$. We use indices such as ${\alpha,\beta,\gamma}$ to index these coordinates, thus for instance ${\partial_\alpha}$ denotes partial differentiation along the ${a^\alpha}$ coordinate. We will use summation conventions for both the Eulerian coordinates ${i,j,k}$ and the Lagrangian coordinates ${\alpha,\beta,\gamma}$, with an index being summed if it appears as both a subscript and a superscript in the same term. While ${{\bf R}^d_L}$ and ${{\bf R}^d_E}$ are of course isomorphic, we will try to refrain from identifying them, except perhaps at the initial time ${t=0}$ in order to fix the initialisation of Lagrangian coordinates.

Given a smooth and bounded velocity field ${u: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_E}$, define a trajectory map for this velocity to be any smooth map ${X: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E}$ that obeys the ODE

$\displaystyle \partial_t X(t,a) = u( t, X(t,a) ); \ \ \ \ \ (3)$

in view of (2), this describes the trajectory (in ${{\bf R}^d_E}$) of a particle labeled by an element ${a}$ of ${{\bf R}^d_L}$. From the Picard existence theorem and the hypothesis that ${u}$ is smooth and bounded, such a map exists and is unique as long as one specifies the initial location ${X(0,a)}$ assigned to each label ${a}$. Traditionally, one chooses the initial condition

$\displaystyle X(0,a) = a \ \ \ \ \ (4)$

for ${a \in {\bf R}^d_L}$, so that we label each particle by its initial location at time ${t=0}$; we are also free to specify other initial conditions for the trajectory map if we please. Indeed, we have the freedom to “permute” the labels ${a \in {\bf R}^d_L}$ by an arbitrary diffeomorphism: if ${X: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E}$ is a trajectory map, and ${\pi: {\bf R}^d_L \rightarrow{\bf R}^d_L}$ is any diffeomorphism (a smooth map whose inverse exists and is also smooth), then the map ${X \circ \pi: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E}$ is also a trajectory map, albeit one with different initial conditions ${X(0,a)}$.
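As a concrete (and entirely optional) numerical illustration of the trajectory map, here is a short Python sketch, not part of the original notes, that integrates the ODE (3) by a standard Runge-Kutta method for the divergence-free rigid-rotation field ${u(x^1,x^2) = (-x^2,x^1)}$ with the traditional initial condition (4); in this special case the exact trajectory map is rotation by angle ${t}$ about the origin:

```python
import math

# An added illustration (not from the notes): numerically integrate the
# trajectory ODE (3), dX/dt = u(t, X), with the traditional initial
# condition (4), X(0, a) = a, for the divergence-free rigid-rotation
# velocity field u(x1, x2) = (-x2, x1).  The exact trajectory map here
# is rotation by angle t about the origin.

def u(t, x):
    """Velocity field: rigid rotation about the origin (divergence-free)."""
    return (-x[1], x[0])

def rk4_step(t, x, dt):
    """One classical fourth-order Runge-Kutta step for dX/dt = u(t, X)."""
    k1 = u(t, x)
    k2 = u(t + dt / 2, (x[0] + dt / 2 * k1[0], x[1] + dt / 2 * k1[1]))
    k3 = u(t + dt / 2, (x[0] + dt / 2 * k2[0], x[1] + dt / 2 * k2[1]))
    k4 = u(t + dt, (x[0] + dt * k3[0], x[1] + dt * k3[1]))
    return (x[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def trajectory(a, T, n_steps=1000):
    """Approximate X(T, a) subject to the initial condition X(0, a) = a."""
    x, t, dt = a, 0.0, T / n_steps
    for _ in range(n_steps):
        x = rk4_step(t, x, dt)
        t += dt
    return x

# The label a = (1, 0) should flow to approximately (0, 1) at time t = pi/2.
X_T = trajectory((1.0, 0.0), math.pi / 2)
```

Note that relabelling by a diffeomorphism ${\pi}$ corresponds in this sketch to evaluating `trajectory` at the relabelled initial point ${\pi(a)}$ instead of ${a}$.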

Despite the popularity of the initial condition (4), we will try to keep conceptually separate the Eulerian space ${{\bf R}^d_E}$ from the Lagrangian space ${{\bf R}^d_L}$, as they play different physical roles in the interpretation of the fluid; for instance, while the Euclidean metric ${d\eta^2 = dx^1 dx^1 + \dots + dx^d dx^d}$ is an important feature of Eulerian space ${{\bf R}^d_E}$, it is not a geometrically natural structure to use in Lagrangian space ${{\bf R}^d_L}$. We have the following more general version of Exercise 8 from 254A Notes 2:

Exercise 1 Let ${u: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_E}$ be smooth and bounded.

• If ${X_0: {\bf R}^d_L \rightarrow {\bf R}^d_E}$ is a smooth map, show that there exists a unique smooth trajectory map ${X: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E}$ with initial condition ${X(0,a) = X_0(a)}$ for all ${a \in {\bf R}^d_L}$.
• Show that if ${X_0}$ is a diffeomorphism and ${t \in [0,T)}$, then the map ${X(t): a \mapsto X(t,a)}$ is also a diffeomorphism.

Remark 2 The first of the Euler equations (1) can now be written in the form

$\displaystyle \frac{d^2}{dt^2} X(t,a) = - (\nabla p)( t, X(t,a) ) \ \ \ \ \ (5)$

which can be viewed as a continuum limit of Newton’s second law ${m^{(a)} \frac{d^2}{dt^2} x^{(a)}(t) = F^{(a)}(t)}$.

Call a diffeomorphism ${Y: {\bf R}^d_L \rightarrow {\bf R}^d_E}$ (oriented) volume preserving if one has the equation

$\displaystyle \mathrm{det}( \nabla Y )(a) = 1 \ \ \ \ \ (6)$

for all ${a \in {\bf R}^d_L}$, where the total differential ${\nabla Y}$ is the ${d \times d}$ matrix with entries ${\partial_\alpha Y^i}$ for ${\alpha = 1,\dots,d}$ and ${i=1,\dots,d}$, where ${Y^1,\dots,Y^d:{\bf R}^d_L \rightarrow {\bf R}}$ are the components of ${Y}$. (If one wishes, one can also view ${\nabla Y}$ as a linear transformation from the tangent space ${T_a {\bf R}^d_L}$ of Lagrangian space at ${a}$ to the tangent space ${T_{Y(a)} {\bf R}^d_E}$ of Eulerian space at ${Y(a)}$.) Equivalently, ${Y}$ is orientation preserving and one has a Jacobian-free change of variables formula

$\displaystyle \int_{{\bf R}^d_L} f( Y(a) )\ da = \int_{{\bf R}^d_E} f(x)\ dx$

for all ${f \in C_c({\bf R}^d_E \rightarrow {\bf R})}$, which is in turn equivalent to ${Y(E) \subset {\bf R}^d_E}$ having the same Lebesgue measure as ${E}$ for any measurable set ${E \subset {\bf R}^d_L}$.
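To make the volume-preservation condition (6) concrete, here is a small Python sketch (an illustration I am adding, with an arbitrarily chosen shear map) that verifies numerically, via centred finite differences, that the shear ${Y(a^1,a^2) := (a^1 + \sin(a^2), a^2)}$ has ${\mathrm{det}( \nabla Y ) = 1}$ at several sample points; its Jacobian matrix is triangular with unit diagonal, so this map is volume preserving:

```python
import math

# Added illustration of condition (6): the shear map
#   Y(a1, a2) = (a1 + sin(a2), a2)
# has det(grad Y) = 1 everywhere, hence is (oriented) volume preserving.
# We approximate the determinant at sample points by centred differences.

def Y(a):
    """A shear map from Lagrangian to Eulerian coordinates."""
    return (a[0] + math.sin(a[1]), a[1])

def jacobian_det(F, a, h=1e-6):
    """det of the 2x2 matrix of partials dF^i/da^alpha, by centred differences."""
    cols = []
    for al in range(2):  # al indexes the Lagrangian coordinate being varied
        ap = (a[0] + h * (al == 0), a[1] + h * (al == 1))
        am = (a[0] - h * (al == 0), a[1] - h * (al == 1))
        Fp, Fm = F(ap), F(am)
        cols.append(((Fp[0] - Fm[0]) / (2 * h), (Fp[1] - Fm[1]) / (2 * h)))
    # cols[al][i] approximates dF^i/da^alpha; the determinant is
    # transpose-invariant, so we may expand along columns.
    return cols[0][0] * cols[1][1] - cols[0][1] * cols[1][0]

dets = [jacobian_det(Y, (0.7 * i, 0.9 * j)) for i in range(3) for j in range(3)]
```

By contrast, a dilation such as ${(a^1,a^2) \mapsto (2a^1, a^2)}$ would report determinant ${2}$ and fail (6).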

The divergence-free condition ${\nabla \cdot u = 0}$ then can be nicely expressed in terms of volume-preserving properties of the trajectory maps ${X}$, in a manner which confirms the interpretation of this condition as an incompressibility condition on the fluid:

Lemma 3 Let ${u: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_E}$ be smooth and bounded, let ${X_0: {\bf R}^d_L \rightarrow {\bf R}^d_E}$ be a volume-preserving diffeomorphism, and let ${X: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}^d_E}$ be the trajectory map. Then the following are equivalent:

• ${\nabla \cdot u = 0}$ on ${[0,T) \times {\bf R}^d_E}$.
• ${X(t): {\bf R}^d_L \rightarrow {\bf R}^d_E}$ is volume-preserving for all ${t \in [0,T)}$.

Proof: Since ${X_0}$ is orientation-preserving, we see from continuity that ${X(t)}$ is also orientation-preserving. Suppose now that ${X(t)}$ is also volume-preserving; then for any ${f \in C^\infty_c({\bf R}^d_E \rightarrow {\bf R})}$ we have the conservation law

$\displaystyle \int_{{\bf R}^d_L} f( X(t,a) )\ da = \int_{{\bf R}^d_E} f(x)\ dx$

for all ${t \in [0,T)}$. Differentiating in time using the chain rule and (3) we conclude that

$\displaystyle \int_{{\bf R}^d_L} (u(t) \cdot \nabla f)( X(t,a)) \ da = 0$

for all ${t \in [0,T)}$, and hence by change of variables

$\displaystyle \int_{{\bf R}^d_E} (u(t) \cdot \nabla f)(x) \ dx = 0$

which by integration by parts gives

$\displaystyle \int_{{\bf R}^d_E} (\nabla \cdot u(t,x)) f(x)\ dx = 0$

for all ${f \in C^\infty_c({\bf R}^d_E \rightarrow {\bf R})}$ and ${t \in [0,T)}$, so ${u}$ is divergence-free.

To prove the converse implication, it is convenient to introduce the labels map ${A:[0,T) \times {\bf R}^d_E \rightarrow {\bf R}^d_L}$, defined by setting ${A(t): {\bf R}^d_E \rightarrow {\bf R}^d_L}$ to be the inverse of the diffeomorphism ${X(t): {\bf R}^d_L \rightarrow {\bf R}^d_E}$, thus

$\displaystyle A(t, X(t,a)) = a$

for all ${(t,a) \in [0,T) \times {\bf R}^d_L}$. By the implicit function theorem, ${A}$ is smooth, and by differentiating the above equation in time using (3) we see that

$\displaystyle D_t A(t,x) = 0$

where ${D_t}$ is the usual material derivative

$\displaystyle D_t := \partial_t + u \cdot \nabla \ \ \ \ \ (7)$

acting on functions on ${[0,T) \times {\bf R}^d_E}$. If ${u}$ is divergence-free, we have from integration by parts that

$\displaystyle \partial_t \int_{{\bf R}^d_E} \phi(t,x)\ dx = \int_{{\bf R}^d_E} D_t \phi(t,x)\ dx$

for any test function ${\phi: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}}$. In particular, for any ${g \in C^\infty_c({\bf R}^d_L \rightarrow {\bf R})}$, we can calculate

$\displaystyle \partial_t \int_{{\bf R}^d_E} g( A(t,x) )\ dx = \int_{{\bf R}^d_E} D_t (g(A(t,x)))\ dx$

$\displaystyle = \int_{{\bf R}^d_E} 0\ dx$

and hence

$\displaystyle \int_{{\bf R}^d_E} g(A(t,x))\ dx = \int_{{\bf R}^d_E} g(A(0,x))\ dx$

for any ${t \in [0,T)}$. Since ${X_0}$ is volume-preserving, so is ${A(0)}$, thus

$\displaystyle \int_{{\bf R}^d_E} g \circ A(t)\ dx = \int_{{\bf R}^d_L} g\ da.$

Thus ${A(t)}$ is volume-preserving, and hence ${X(t)}$ is also. $\Box$

Exercise 4 Let ${M: [0,T) \rightarrow \mathrm{GL}_d({\bf R})}$ be a continuously differentiable map from the time interval ${[0,T)}$ to the general linear group ${\mathrm{GL}_d({\bf R})}$ of invertible ${d \times d}$ matrices. Establish Jacobi’s formula

$\displaystyle \partial_t \det(M(t)) = \det(M(t)) \mathrm{tr}( M(t)^{-1} \partial_t M(t) )$

and use this and (6) to give an alternate proof of Lemma 3 that does not involve any integration in space.
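For readers who wish to test Jacobi’s formula numerically before proving it, the following Python sketch (my own added illustration; the matrix family is chosen arbitrarily) compares a finite-difference derivative of ${\det(M(t))}$ against the right-hand side of the formula for the ${2 \times 2}$ family with entries ${1+t, t}$ in the first row and ${t^2, 1}$ in the second:

```python
# Added numerical sanity check of Jacobi's formula
#   d/dt det M(t) = det M(t) * tr( M(t)^{-1} M'(t) )
# for the arbitrarily chosen 2x2 family M(t) = [[1+t, t], [t^2, 1]],
# whose determinant is 1 + t - t^3.

def M(t):
    return [[1.0 + t, t], [t * t, 1.0]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace2(A):
    return A[0][0] + A[1][1]

t, h = 0.3, 1e-6
# Left side: centred finite-difference derivative of det M(t).
lhs = (det2(M(t + h)) - det2(M(t - h))) / (2 * h)
# Right side: Jacobi's formula, with M'(t) also taken by centred differences.
Mp = [[(M(t + h)[i][j] - M(t - h)[i][j]) / (2 * h) for j in range(2)]
      for i in range(2)]
rhs = det2(M(t)) * trace2(matmul2(inv2(M(t)), Mp))
```

Both sides agree (up to discretisation error) with the exact derivative ${1 - 3t^2}$.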

Remark 5 One can view the use of Lagrangian coordinates as an extension of the method of characteristics. Indeed, from the chain rule we see that for any smooth function ${f: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}}$ of Eulerian spacetime, one has

$\displaystyle \frac{d}{dt} f(t,X(t,a)) = (D_t f)(t,X(t,a))$

and hence any transport equation that in Eulerian coordinates takes the form

$\displaystyle D_t f = g$

for smooth functions ${f,g: [0,T) \times {\bf R}^d_E \rightarrow {\bf R}}$ of Eulerian spacetime is equivalent to the ODE

$\displaystyle \frac{d}{dt} F = G$

where ${F,G: [0,T) \times {\bf R}^d_L \rightarrow {\bf R}}$ are the smooth functions of Lagrangian spacetime defined by

$\displaystyle F(t,a) := f(t,X(t,a)); \quad G(t,a) := g(t,X(t,a)).$
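As a toy instance of this dictionary, here is a short Python sketch (an added illustration, using a constant one-dimensional velocity ${c}$ rather than a solution of the Euler equations) of the method of characteristics for the transport equation ${D_t f = 0}$: the solution is the initial data pushed forward along the trajectories ${X(t,a) = a + ct}$, that is ${f(t,x) = f_0(x-ct)}$, and one can check numerically that ${D_t f}$ vanishes:

```python
import math

# Added sketch of the method of characteristics for the 1D transport
# equation D_t f = \partial_t f + c \partial_x f = 0 with constant velocity c:
# along each trajectory X(t, a) = a + c t, the pulled-back function
# F(t, a) = f(t, X(t, a)) is constant in t, so f(t, x) = f0(x - c t).

c = 2.0
f0 = lambda x: math.exp(-x * x)  # arbitrary smooth initial data

def f(t, x):
    """Solution by characteristics: trace x back to its label a = x - c*t."""
    return f0(x - c * t)

# Verify D_t f = 0 at a sample point via centred finite differences.
t, x, h = 0.7, 1.3, 1e-5
Dt_f = ((f(t + h, x) - f(t - h, x)) / (2 * h)
        + c * (f(t, x + h) - f(t, x - h)) / (2 * h))
```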

In this set of notes we recall some basic differential geometry notation, particularly with regards to pullbacks and Lie derivatives of differential forms and other tensor fields on manifolds such as ${{\bf R}^d_E}$ and ${{\bf R}^d_L}$, and explore how the Euler equations look in this notation. Our discussion will be entirely formal in nature; we will assume that all functions have enough smoothness and decay at infinity to justify the relevant calculations. (It is possible to work rigorously in Lagrangian coordinates – see for instance the work of Ebin and Marsden – but we will not do so here.) As a general rule, Lagrangian coordinates tend to be somewhat less convenient to use than Eulerian coordinates for establishing the basic analytic properties of the Euler equations, such as local existence, uniqueness, and continuous dependence on the data; however, they are quite good at clarifying the more algebraic properties of these equations, such as conservation laws and the variational nature of the equations. It may well be that in the future we will be able to use the Lagrangian formalism more effectively on the analytic side of the subject also.

Remark 6 One can also write the Navier-Stokes equations in Lagrangian coordinates, but the equations are not expressed in a favourable form in these coordinates, as the Laplacian ${\Delta}$ appearing in the viscosity term becomes replaced with a time-varying Laplace-Beltrami operator. As such, we will not discuss the Lagrangian coordinate formulation of Navier-Stokes here.

Note: this post is not required reading for this course, or for the sequel course in the winter quarter.

In Notes 2, we reviewed the classical construction of Leray of global weak solutions to the Navier-Stokes equations. We did not quite follow Leray’s original proof, in that the notes relied more heavily on the machinery of Littlewood-Paley projections, which have become increasingly common tools in modern PDE. On the other hand, we did use the same “exploiting compactness to pass to weakly convergent subsequences” strategy that is standard in the PDE literature for constructing weak solutions.

As I discussed in a previous post, the manipulation of sequences and their limits is analogous to a “cheap” version of nonstandard analysis in which one uses the Fréchet filter rather than an ultrafilter to construct the nonstandard universe. (The manipulation of generalised functions of Colombeau type can also be comfortably interpreted within this sort of cheap nonstandard analysis.) Augmenting the manipulation of sequences with the right to pass to subsequences whenever convenient is then analogous to a sort of “lazy” nonstandard analysis, in which the implied ultrafilter is never actually constructed as a “completed object”, but is instead lazily evaluated, in the sense that whenever membership of a given subsequence of the natural numbers in the ultrafilter needs to be determined, one either passes to that subsequence (thus placing it in the ultrafilter) or to its complement (thus placing it outside the ultrafilter). This process can be viewed as the initial portion of the transfinite induction that one usually uses to construct ultrafilters (as discussed using a voting metaphor in this post), except that there is generally no need in any given application to perform the induction for any uncountable ordinal (or indeed for most of the countable ordinals also).

On the other hand, it is also possible to work directly in the orthodox framework of nonstandard analysis when constructing weak solutions. This leads to an approach to the subject which is largely equivalent to the usual subsequence-based approach, though there are some minor technical differences (for instance, the subsequence approach occasionally requires one to work with separable function spaces, whereas in the ultrafilter approach the reliance on separability is largely eliminated, particularly if one imposes a strong notion of saturation on the nonstandard universe). The subject acquires a more “algebraic” flavour, as the quintessential analysis operation of taking a limit is replaced with the “standard part” operation, which is an algebra homomorphism. The notion of a sequence is replaced by the distinction between standard and nonstandard objects, and the need to pass to subsequences disappears entirely. Also, the distinction between “bounded sequences” and “convergent sequences” is largely eradicated, particularly when the space that the sequences range in enjoys some compactness properties on bounded sets. In this framework, the notorious non-uniqueness features of weak solutions can be “blamed” on the non-uniqueness of the nonstandard extension of the standard universe (as well as on the multiple possible ways to construct nonstandard mollifications of the original standard PDE). However, many of these changes are largely cosmetic; switching from a subsequence-based theory to a nonstandard analysis-based theory does not seem to bring one significantly closer to, for instance, the global regularity problem for Navier-Stokes, but it could have been an alternate path for the historical development and presentation of the subject.

In any case, I would like to present below the fold this nonstandard analysis perspective, quickly translating the relevant components of real analysis, functional analysis, and distributional theory that we need to this perspective, and then use it to re-prove Leray’s theorem on existence of global weak solutions to Navier-Stokes.

Kaisa Matomäki, Maksym Radziwill, and I just uploaded to the arXiv our paper “Fourier uniformity of bounded multiplicative functions in short intervals on average“. This paper is the outcome of our attempts during the MSRI program in analytic number theory last year to attack the local Fourier uniformity conjecture for the Liouville function ${\lambda}$. This conjecture generalises a landmark result of Matomäki and Radziwill, who show (among other things) that one has the asymptotic

$\displaystyle \int_X^{2X} |\sum_{x \leq n \leq x+H} \lambda(n)|\ dx = o(HX) \ \ \ \ \ (1)$

whenever ${X \rightarrow \infty}$ and ${H = H(X)}$ goes to infinity as ${X \rightarrow \infty}$. Informally, this says that the Liouville function has small mean for almost all short intervals ${[x,x+H]}$. The remarkable thing about this theorem is that there is no lower bound on how ${H}$ goes to infinity with ${X}$; one can take for instance ${H = \log\log\log X}$. This lack of a lower bound was crucial when I applied this result (or more precisely, a generalisation of this result to arbitrary non-pretentious bounded multiplicative functions) a few years ago to solve the Erdös discrepancy problem, as well as a logarithmically averaged two-point Chowla conjecture; for instance, it implies that

$\displaystyle \sum_{n \leq X} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log X).$
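Of course, no finite computation can prove an asymptotic such as (1), but the phenomenon is easy to observe numerically. The following Python sketch (my own added illustration; the cutoffs ${X = 10^4}$ and ${H = 100}$ are arbitrary) sieves the Liouville function ${\lambda(n) = (-1)^{\Omega(n)}}$ and computes the left-hand side of (1) normalised by ${HX}$, which one expects to be roughly of size ${H^{-1/2}}$, i.e. much smaller than ${1}$:

```python
# Added numerical illustration (not a proof!) of the Matomaki-Radziwill
# phenomenon: the Liouville function lambda(n) = (-1)^Omega(n) has small
# mean on most short intervals [x, x+H], even for fairly small H.

def liouville_sieve(N):
    """Return lam[0..N] with lam[n] = (-1)^Omega(n), via a factor-counting sieve."""
    omega = [0] * (N + 1)        # number of prime factors, with multiplicity
    n_left = list(range(N + 1))  # unfactored part of each n
    for p in range(2, N + 1):
        if n_left[p] == p:       # p is prime: no smaller prime divided it
            for m in range(p, N + 1, p):
                while n_left[m] % p == 0:
                    n_left[m] //= p
                    omega[m] += 1
    return [(-1) ** omega[n] for n in range(N + 1)]

X, H = 10_000, 100
lam = liouville_sieve(2 * X + H)
# Average of |sum of lambda over [x, x+H)| for x in [X, 2X), normalised by H*X.
avg_abs_mean = sum(abs(sum(lam[x:x + H])) for x in range(X, 2 * X)) / (H * X)
```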

The local Fourier uniformity conjecture asserts the stronger asymptotic

$\displaystyle \int_X^{2X} \sup_{\alpha \in {\bf R}} |\sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha n)|\ dx = o(HX) \ \ \ \ \ (2)$

under the same hypotheses on ${H}$ and ${X}$. As I worked out in a previous paper, this conjecture would imply a logarithmically averaged three-point Chowla conjecture, implying for instance that

$\displaystyle \sum_{n \leq X} \frac{\lambda(n) \lambda(n+1) \lambda(n+2)}{n} = o(\log X).$

This particular bound also follows from some slightly different arguments of Joni Teräväinen and myself, but the implication would also work for other non-pretentious bounded multiplicative functions, whereas the arguments of Joni and myself rely more heavily on the specific properties of the Liouville function (in particular that ${\lambda(p)=-1}$ for all primes ${p}$).

There is also a higher order version of the local Fourier uniformity conjecture in which the linear phase ${e(-\alpha n)}$ is replaced with a polynomial phase such as ${e(-\alpha_d n^d - \dots - \alpha_1 n - \alpha_0)}$, or more generally a nilsequence ${\overline{F(g(n) \Gamma)}}$; as shown in my previous paper, this conjecture implies (and is in fact equivalent to, after logarithmic averaging) a logarithmically averaged version of the full Chowla conjecture (not just the two-point or three-point versions), as well as a logarithmically averaged version of the Sarnak conjecture.

The main result of the current paper is to obtain some cases of the local Fourier uniformity conjecture:

Theorem 1 The asymptotic (2) is true when ${H = X^\theta}$ for a fixed ${\theta > 0}$.

Previously this was known for ${\theta > 5/8}$ by the work of Zhan (who in fact proved the stronger pointwise assertion ${\sup_{\alpha \in {\bf R}} |\sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha n)|= o(H)}$ for ${X \leq x \leq 2X}$ in this case). In a previous paper with Kaisa and Maksym, we also proved a weak version

$\displaystyle \sup_{\alpha \in {\bf R}} \int_X^{2X} |\sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha n)|\ dx = o(HX) \ \ \ \ \ (3)$

of (2) for any ${H}$ growing arbitrarily slowly with ${X}$; this is stronger than (1) (and is in fact proven by a variant of the method) but significantly weaker than (2), because in the latter the worst-case ${\alpha}$ is permitted to depend on the ${x}$ parameter, whereas in (3) ${\alpha}$ must remain independent of ${x}$.

Unfortunately, the restriction ${H = X^\theta}$ is not strong enough to give applications to Chowla-type conjectures (one would need something more like ${H = \log^\theta X}$ for this). However, it can still be used to control some sums that had not previously been manageable. For instance, a quick application of the circle method lets one use the above theorem to derive the asymptotic

$\displaystyle \sum_{h \leq H} \sum_{n \leq X} \lambda(n) \Lambda(n+h) \Lambda(n+2h) = o( H X )$

whenever ${H = X^\theta}$ for a fixed ${\theta > 0}$, where ${\Lambda}$ is the von Mangoldt function. Amusingly, the seemingly simpler question of establishing the expected asymptotic for

$\displaystyle \sum_{h \leq H} \sum_{n \leq X} \Lambda(n+h) \Lambda(n+2h)$

is only known in the range ${\theta \geq 1/6}$ (from the work of Zaccagnini). Thus we have a rare example of a number theory sum that becomes easier to control when one inserts a Liouville function!

We now give an informal description of the strategy of proof of the theorem (though for numerous technical reasons, the actual proof deviates in some respects from the description given here). If (2) failed, then for many values of ${x \in [X,2X]}$ we would have the lower bound

$\displaystyle |\sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha_x n)| \gg 1$

for some frequency ${\alpha_x \in{\bf R}}$. We informally describe this correlation between ${\lambda(n)}$ and ${e(\alpha_x n)}$ by writing

$\displaystyle \lambda(n) \approx e(\alpha_x n) \ \ \ \ \ (4)$

for ${n \in [x,x+H]}$ (informally, one should view this as asserting that ${\lambda(n)}$ “behaves like” a constant multiple of ${e(\alpha_x n)}$). For sake of discussion, suppose we have this relationship for all ${x \in [X,2X]}$, not just many.

As mentioned before, the main difficulty here is to understand how ${\alpha_x}$ varies with ${x}$. As it turns out, the multiplicativity properties of the Liouville function place a significant constraint on this dependence. Indeed, if we let ${p}$ be a fairly small prime (e.g. of size ${H^\varepsilon}$ for some ${\varepsilon>0}$), then we can use the identity ${\lambda(np) = \lambda(n) \lambda(p) = - \lambda(n)}$ for the Liouville function to conclude (at least heuristically) from (4) that

$\displaystyle \lambda(n) \approx e(\alpha_x n p)$

for ${n \in [x/p, x/p + H/p]}$. (In practice, we will have this sort of claim for many primes ${p}$ rather than all primes ${p}$, after using tools such as the Turán-Kubilius inequality, but we ignore this distinction for this informal argument.)

Now let ${x, y \in [X,2X]}$ and ${p,q \sim P}$ be primes comparable to some fixed range ${P = H^\varepsilon}$ such that

$\displaystyle x/p = y/q + O( H/P). \ \ \ \ \ (5)$

Then we have both

$\displaystyle \lambda(n) \approx e(\alpha_x n p)$

and

$\displaystyle \lambda(n) \approx e(\alpha_y n q)$

on essentially the same range of ${n}$ (two nearby intervals of length ${\sim H/P}$). This suggests that the frequencies ${p \alpha_x}$ and ${q \alpha_y}$ should be close to each other modulo ${1}$, in particular one should expect the relationship

$\displaystyle p \alpha_x = q \alpha_y + O( \frac{P}{H} ) \hbox{ mod } 1. \ \ \ \ \ (6)$

Comparing this with (5) one is led to the expectation that ${\alpha_x}$ should depend inversely on ${x}$ in some sense (for instance one can check that

$\displaystyle \alpha_x = T/x \ \ \ \ \ (7)$

would solve (6) if ${T = O( X / H^2 )}$; by Taylor expansion, this would correspond to a global approximation of the form ${\lambda(n) \approx n^{iT}}$). One now has a problem of an additive combinatorial flavour (or of a “local to global” flavour), namely to leverage the relation (6) to obtain global control on ${\alpha_x}$ that resembles (7).

A key obstacle in solving (6) efficiently is the fact that one only knows that ${p \alpha_x}$ and ${q \alpha_y}$ are close modulo ${1}$, rather than close on the real line. One can start resolving this problem by the Chinese remainder theorem, using the fact that we have the freedom to shift (say) ${\alpha_y}$ by an arbitrary integer. After doing so, one can arrange matters so that one in fact has the relationship

$\displaystyle p \alpha_x = q \alpha_y + O( \frac{P}{H} ) \hbox{ mod } p \ \ \ \ \ (8)$

whenever ${x,y \in [X,2X]}$ and ${p,q \sim P}$ obey (5). (This may force ${\alpha_y}$ to become extremely large, on the order of ${\prod_{p \sim P} p}$, but this will not concern us.)

Now suppose that we have ${y,y' \in [X,2X]}$ and primes ${q,q' \sim P}$ such that

$\displaystyle y/q = y'/q' + O(H/P). \ \ \ \ \ (9)$

For every prime ${p \sim P}$, we can find an ${x}$ such that ${x/p}$ is within ${O(H/P)}$ of both ${y/q}$ and ${y'/q'}$. Applying (8) twice we obtain

$\displaystyle p \alpha_x = q \alpha_y + O( \frac{P}{H} ) \hbox{ mod } p$

and

$\displaystyle p \alpha_x = q' \alpha_{y'} + O( \frac{P}{H} ) \hbox{ mod } p$

and thus by the triangle inequality we have

$\displaystyle q \alpha_y = q' \alpha_{y'} + O( \frac{P}{H} ) \hbox{ mod } p$

for all ${p \sim P}$; hence by the Chinese remainder theorem

$\displaystyle q \alpha_y = q' \alpha_{y'} + O( \frac{P}{H} ) \hbox{ mod } \prod_{p \sim P} p.$

In practice, in the regime ${H = X^\theta}$ that we are considering, the modulus ${\prod_{p \sim P} p}$ is so huge we can effectively ignore it (in the spirit of the Lefschetz principle); so let us pretend that we in fact have

$\displaystyle q \alpha_y = q' \alpha_{y'} + O( \frac{P}{H} ) \ \ \ \ \ (10)$

whenever ${y,y' \in [X,2X]}$ and ${q,q' \sim P}$ obey (9).

Now let ${k}$ be an integer to be chosen later, and suppose we have primes ${p_1,\dots,p_k,q_1,\dots,q_k \sim P}$ such that the difference

$\displaystyle q = |p_1 \dots p_k - q_1 \dots q_k|$

is small but non-zero. If ${k}$ is chosen so that

$\displaystyle P^k \approx \frac{X}{H}$

(where one is somewhat loose about what ${\approx}$ means) then one can then find real numbers ${x_1,\dots,x_k \sim X}$ such that

$\displaystyle \frac{x_j}{p_j} = \frac{x_{j+1}}{q_j} + O( \frac{H}{P} )$

for ${j=1,\dots,k}$, with the convention that ${x_{k+1} = x_1}$. We then have

$\displaystyle p_j \alpha_{x_j} = q_j \alpha_{x_{j+1}} + O( \frac{P}{H} )$

which telescopes to

$\displaystyle p_1 \dots p_k \alpha_{x_1} = q_1 \dots q_k \alpha_{x_1} + O( \frac{P^k}{H} )$

and thus

$\displaystyle q \alpha_{x_1} = O( \frac{P^k}{H} )$

and hence

$\displaystyle \alpha_{x_1} = O( \frac{P^k}{H} ) \approx O( \frac{X}{H^2} ).$

In particular, for each ${x \sim X}$, we expect to be able to write

$\displaystyle \alpha_x = \frac{T_x}{x} + O( \frac{1}{H} )$

for some ${T_x = O( \frac{X^2}{H^2} )}$. This quantity ${T_x}$ can vary with ${x}$; but from (10) and a short calculation we see that

$\displaystyle T_y = T_{y'} + O( \frac{X}{H} )$

whenever ${y, y' \in [X,2X]}$ obey (9) for some ${q,q' \sim P}$.

Now imagine a “graph” in which the vertices are elements ${y}$ of ${[X,2X]}$, and two elements ${y,y'}$ are joined by an edge if (9) holds for some ${q,q' \sim P}$. Because of exponential sum estimates on ${\sum_{q \sim P} q^{it}}$, this graph turns out to essentially be an “expander”, in the sense that any two vertices ${y,y' \in [X,2X]}$ can be connected (in multiple ways) by fairly short paths in this graph (if one allows oneself to modify one of ${y}$ or ${y'}$ by ${O(H)}$). As a consequence, we can assume that the quantity ${T_y}$ is essentially constant in ${y}$ (cf. the application of the ergodic theorem in this previous blog post), thus we now have

$\displaystyle \alpha_x = \frac{T}{x} + O(\frac{1}{H} )$

for most ${x \in [X,2X]}$ and some ${T = O(X^2/H^2)}$. By Taylor expansion, this implies that

$\displaystyle \lambda(n) \approx n^{iT}$

on ${[x,x+H]}$ for most ${x}$, thus

$\displaystyle \int_X^{2X} |\sum_{x \leq n \leq x+H} \lambda(n) n^{-iT}|\ dx \gg HX.$

But this can be shown to contradict the Matomäki-Radziwill theorem (because the multiplicative function ${n \mapsto \lambda(n) n^{-iT}}$ is known to be non-pretentious).

I’ve just uploaded to the arXiv my paper “Embedding the Heisenberg group into a bounded dimensional Euclidean space with optimal distortion“, submitted to Revista Matematica Iberoamericana. This paper concerns the extent to which one can accurately embed the metric structure of the Heisenberg group

$\displaystyle H := \begin{pmatrix} 1 & {\bf R} & {\bf R} \\ 0 & 1 & {\bf R} \\ 0 & 0 & 1 \end{pmatrix}$

into Euclidean space; here we write ${H}$ as ${\{ [x,y,z]: x,y,z \in {\bf R} \}}$ with the notation

$\displaystyle [x,y,z] := \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix}.$
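As a concrete check, the coordinates ${[x,y,z]}$ multiply according to the law ${[x,y,z] \cdot [x',y',z'] = [x+x', y+y', z+z'+xy']}$, as one sees by multiplying the matrices. Here is a quick machine verification (a Python sketch, not part of the paper):

```python
# Quick check (illustrative, not from the paper): the coordinates [x, y, z]
# multiply according to
#   [x, y, z] * [x', y', z'] = [x + x', y + y', z + z' + x*y'],
# as one verifies by multiplying the corresponding unipotent matrices.

def mat(x, y, z):
    """The 3x3 upper-triangular matrix denoted [x, y, z] in the post."""
    return [[1.0, x, z],
            [0.0, 1.0, y],
            [0.0, 0.0, 1.0]]

def matmul(a, b):
    """Plain 3x3 matrix multiplication."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def group_law(p, q):
    """The Heisenberg product in [x, y, z] coordinates."""
    (x, y, z), (xp, yp, zp) = p, q
    return (x + xp, y + yp, z + zp + x * yp)

p, q = (1.5, -2.0, 0.25), (0.5, 3.0, -1.0)
assert matmul(mat(*p), mat(*q)) == mat(*group_law(p, q))
# the group is noncommutative: the z coordinates of p*q and q*p differ
assert group_law(p, q) != group_law(q, p)
```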

Here we give ${H}$ the right-invariant Carnot-Carathéodory metric ${d}$ coming from the right-invariant vector fields

$\displaystyle X := \frac{\partial}{\partial x} + y \frac{\partial}{\partial z}; \quad Y := \frac{\partial}{\partial y}$

but not from the commutator vector field

$\displaystyle Z := [Y,X] = \frac{\partial}{\partial z}.$

This gives ${H}$ the geometry of a Carnot group. As observed by Semmes, it follows from the Carnot group differentiation theory of Pansu that there is no bilipschitz map from ${(H,d)}$ to any Euclidean space ${{\bf R}^D}$ or even to ${\ell^2}$, since such a map must be differentiable almost everywhere in the sense of Carnot groups; in particular, the derivative map must annihilate ${Z}$ almost everywhere, which is incompatible with the map being bilipschitz.

On the other hand, if one snowflakes the Heisenberg group by replacing the metric ${d}$ with ${d^{1-\varepsilon}}$ for some ${0 < \varepsilon < 1}$, then it follows from the general theory of Assouad on embedding snowflaked metrics of doubling spaces that ${(H,d^{1-\varepsilon})}$ may be embedded in a bilipschitz fashion into ${\ell^2}$, or even to ${{\bf R}^{D_\varepsilon}}$ for some ${D_\varepsilon}$ depending on ${\varepsilon}$.

Of course, the distortion of this bilipschitz embedding must degenerate in the limit ${\varepsilon \rightarrow 0}$. From the work of Austin-Naor-Tessera and Naor-Neiman it follows that ${(H,d^{1-\varepsilon})}$ may be embedded into ${\ell^2}$ with a distortion of ${O( \varepsilon^{-1/2} )}$, but no better. The Naor-Neiman paper also embeds ${(H,d^{1-\varepsilon})}$ into a finite-dimensional space ${{\bf R}^D}$ with ${D}$ independent of ${\varepsilon}$, but at the cost of worsening the distortion to ${O(\varepsilon^{-1})}$. They then posed the question of whether this worsening of the distortion is necessary.

The main result of this paper answers this question in the negative:

Theorem 1 There exists an absolute constant ${D}$ such that ${(H,d^{1-\varepsilon})}$ may be embedded into ${{\bf R}^D}$ in a bilipschitz fashion with distortion ${O(\varepsilon^{-1/2})}$ for any ${0 < \varepsilon \leq 1/2}$.

To motivate the proof of this theorem, let us first present a bilipschitz map ${\Phi: {\bf R} \rightarrow \ell^2}$ from the snowflaked line ${({\bf R},d_{\bf R}^{1-\varepsilon})}$ (with ${d_{\bf R}}$ being the usual metric on ${{\bf R}}$) into complex Hilbert space ${\ell^2({\bf C})}$. The map is given explicitly as a Weierstrass type function

$\displaystyle \Phi(x) := \sum_{k \in {\bf Z}} 2^{-\varepsilon k} (\phi_k(x) - \phi_k(0))$

where for each ${k}$, ${\phi_k: {\bf R} \rightarrow \ell^2}$ is the function

$\displaystyle \phi_k(x) := 2^k e^{2\pi i x / 2^k} e_k,$

where ${(e_k)_{k \in {\bf Z}}}$ form an orthonormal basis for ${\ell^2({\bf C})}$. The subtraction of the constant ${\phi_k(0)}$ is purely in order to make the sum convergent as ${k \rightarrow \infty}$. If ${x,y \in {\bf R}}$ are such that ${2^{k_0-2} \leq d_{\bf R}(x,y) \leq 2^{k_0-1}}$ for some integer ${k_0}$, one can easily check the bounds

$\displaystyle |\phi_k(x) - \phi_k(y)| \lesssim d_{\bf R}(x,y)^{(1-\varepsilon)} \min( 2^{-(1-\varepsilon) (k_0-k)}, 2^{-\varepsilon (k-k_0)} )$

with the lower bound

$\displaystyle |\phi_{k_0}(x) - \phi_{k_0}(y)| \gtrsim d_{\bf R}(x,y)^{(1-\varepsilon)}$

at which point one finds that

$\displaystyle d_{\bf R}(x,y)^{1-\varepsilon} \lesssim |\Phi(x) - \Phi(y)| \lesssim \varepsilon^{-1/2} d_{\bf R}(x,y)^{1-\varepsilon}$

as desired.
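One can test this distortion bound numerically. The sketch below (Python; illustrative only, with a truncated ${k}$-range and an arbitrary choice of ${\varepsilon}$) exploits the orthogonality of the ${e_k}$ components to compute ${|\Phi(x)-\Phi(y)|}$ as an ${\ell^2}$ sum, and checks that the ratio against ${d_{\bf R}(x,y)^{1-\varepsilon}}$ stays within constant multiplicative bounds over several scales:

```python
import cmath
import math

# Numerical check (illustrative, not from the paper) of the distortion of
# the Weierstrass-type embedding Phi of the snowflaked line.  Since the e_k
# components are orthogonal, |Phi(x) - Phi(y)| is an l^2 sum over k of
#   2^{(1-eps) k} |e^{2 pi i x / 2^k} - e^{2 pi i y / 2^k}|.
# EPS and the truncation range of k below are arbitrary choices.

EPS = 0.1

def snowflake_dist(x, y, eps=EPS, kmin=-60, kmax=80):
    """Truncated |Phi(x) - Phi(y)| for the snowflake embedding."""
    total = 0.0
    for k in range(kmin, kmax + 1):
        diff = abs(cmath.exp(2j * math.pi * x / 2 ** k)
                   - cmath.exp(2j * math.pi * y / 2 ** k))
        total += (2.0 ** ((1 - eps) * k) * diff) ** 2
    return math.sqrt(total)

# the ratio |Phi(0) - Phi(h)| / h^{1-eps} should stay within constant
# multiplicative bounds over many scales (bounded distortion)
ratios = [snowflake_dist(0.0, h) / h ** (1 - EPS)
          for h in (1e-3, 1e-2, 0.1, 1.0, 10.0, 100.0)]
assert max(ratios) / min(ratios) < 10
```

Note that the scaling symmetry ${x \mapsto 2x}$, ${k \mapsto k+1}$ makes this ratio a function of ${\log_2 h}$ modulo ${1}$ only, which is why the bounded-distortion check above is meaningful.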

The key here was that each function ${\phi_k}$ oscillated at a different spatial scale ${2^k}$, and the functions were all orthogonal to each other (so that the upper bound involved a factor of ${\varepsilon^{-1/2}}$ rather than ${\varepsilon^{-1}}$). One can replicate this example for the Heisenberg group without much difficulty. Indeed, if we let ${\Gamma := \{ [a,b,c]: a,b,c \in {\bf Z} \}}$ be the discrete Heisenberg group, then the nilmanifold ${H/\Gamma}$ is a three-dimensional smooth compact manifold; thus, by the Whitney embedding theorem, it smoothly embeds into ${{\bf R}^6}$. This gives a smooth immersion ${\phi: H \rightarrow {\bf R}^6}$ which is ${\Gamma}$-automorphic in the sense that ${\phi(p\gamma) = \phi(p)}$ for all ${p \in H}$ and ${\gamma \in \Gamma}$. If one then defines ${\phi_k: H \rightarrow \ell^2 \otimes {\bf R}^6}$ to be the function

$\displaystyle \phi_k(p) := 2^k \phi( \delta_{2^{-k}}(p) ) \otimes e_k$

where ${\delta_\lambda: H \rightarrow H}$ is the scaling map

$\displaystyle \delta_\lambda([x,y,z]) := [\lambda x, \lambda y, \lambda^2 z],$

then one can repeat the previous arguments to obtain the required bilipschitz bounds

$\displaystyle d(p,q)^{1-\varepsilon} \lesssim |\Phi(p) - \Phi(q)| \lesssim \varepsilon^{-1/2} d(p,q)^{1-\varepsilon}$

for the function

$\displaystyle \Phi(p) :=\sum_{k \in {\bf Z}} 2^{-\varepsilon k} (\phi_k(p) - \phi_k(0)).$
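One point worth verifying is that the scaling maps ${\delta_\lambda}$ really are automorphisms of the group law ${[x,y,z] \cdot [x',y',z'] = [x+x',y+y',z+z'+xy']}$, which is where the parabolic exponent on the ${z}$ coordinate comes from. A quick Python sketch (not part of the paper):

```python
# Sketch (illustrative, not from the paper): the parabolic dilations
#   delta_lambda([x, y, z]) = [lambda*x, lambda*y, lambda^2 * z]
# are automorphisms of the group law
#   [x, y, z] * [x', y', z'] = [x + x', y + y', z + z' + x*y'].
# Note the z coordinate must scale like lambda^2 because of the x*y' term.

def group_law(p, q):
    (x, y, z), (xp, yp, zp) = p, q
    return (x + xp, y + yp, z + zp + x * yp)

def dilate(lam, p):
    x, y, z = p
    return (lam * x, lam * y, lam ** 2 * z)

p, q, lam = (0.5, 2.0, -1.0), (3.0, -0.25, 4.0), 2.0
assert dilate(lam, group_law(p, q)) == group_law(dilate(lam, p), dilate(lam, q))
```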

To adapt this construction to bounded dimension, the main obstruction was the requirement that the ${\phi_k}$ took values in orthogonal subspaces. But if one works things out carefully, it is enough to require the weaker orthogonality requirement

$\displaystyle B( \phi_{k_0}, \sum_{k>k_0} 2^{-\varepsilon(k-k_0)} \phi_k ) = 0$

for all ${k_0 \in {\bf Z}}$, where ${B(\phi, \psi): H \rightarrow {\bf R}^2}$ is the bilinear form

$\displaystyle B(\phi,\psi) := (X \phi \cdot X \psi, Y \phi \cdot Y \psi ).$

One can then try to construct the ${\phi_k: H \rightarrow {\bf R}^D}$ for bounded dimension ${D}$ by an iterative argument. After some standard reductions, the problem becomes this (roughly speaking): given a smooth, slowly varying function ${\psi: H \rightarrow {\bf R}^{D}}$ whose derivatives obey certain quantitative upper and lower bounds, construct a smooth oscillating function ${\phi: H \rightarrow {\bf R}^{D}}$, whose derivatives also obey certain quantitative upper and lower bounds, which obeys the equation

$\displaystyle B(\phi,\psi) = 0. \ \ \ \ \ (1)$

We view this as an underdetermined system of differential equations for ${\phi}$ (two equations in ${D}$ unknowns; after some reductions, our ${D}$ can be taken to be the explicit value ${36}$). The trivial solution ${\phi=0}$ to this equation will be inadmissible for our purposes due to the lower bounds we will require on ${\phi}$ (in order to obtain the quantitative immersion property mentioned previously, as well as for a stronger “freeness” property that is needed to close the iteration). Because this construction will need to be iterated, it will be essential that the regularity control on ${\phi}$ is the same as that on ${\psi}$; one cannot afford to “lose derivatives” when passing from ${\psi}$ to ${\phi}$.

This problem has some formal similarities with the isometric embedding problem (discussed for instance in this previous post), which can be viewed as the problem of solving an equation of the form ${Q(\phi,\phi) = g}$, where ${(M,g)}$ is a Riemannian manifold and ${Q}$ is the bilinear form

$\displaystyle Q(\phi,\psi)_{ij} = \partial_i \phi \cdot \partial_j \psi.$

The isometric embedding problem also has the key obstacle that naive attempts to solve the equation ${Q(\phi,\phi)=g}$ iteratively can lead to an undesirable “loss of derivatives” that prevents one from iterating indefinitely. This obstacle was famously resolved by the Nash-Moser iteration scheme in which one alternates between perturbatively adjusting an approximate solution to improve the residual error term, and mollifying the resulting perturbation to counteract the loss of derivatives. The current equation (1) differs in some key respects from the isometric embedding equation ${Q(\phi,\phi)=g}$, in particular being linear in the unknown field ${\phi}$ rather than quadratic; nevertheless the key obstacle is the same, namely that naive attempts to solve either equation lose derivatives. Our approach to solving (1) was inspired by the Nash-Moser scheme; in retrospect, I also found similarities with Uchiyama’s constructive proof of the Fefferman-Stein decomposition theorem, discussed in this previous post (and in this recent one).

To motivate this iteration, we first express ${B(\phi,\psi)}$ using the product rule in a form that does not place derivatives directly on the unknown ${\phi}$:

$\displaystyle B(\phi,\psi) = \left( W(\phi \cdot W \psi) - \phi \cdot WW \psi\right)_{W = X,Y} \ \ \ \ \ (2)$

This reveals that one can construct solutions ${\phi}$ to (1) by solving the system of equations

$\displaystyle \phi \cdot W \psi = \phi \cdot WW \psi = 0 \ \ \ \ \ (3)$

for ${W \in \{X, Y \}}$. Because this system is zeroth order in ${\phi}$, this can easily be done by linear algebra (even in the presence of a forcing term ${B(\phi,\psi)=F}$) if one imposes a “freeness” condition (analogous to the notion of a free embedding in the isometric embedding problem) that ${X \psi(p), Y \psi(p), XX \psi(p), YY \psi(p)}$ are linearly independent at each point ${p}$, which (together with some other technical conditions of a similar nature) one then adds to the list of upper and lower bounds required on ${\psi}$ (with a related bound then imposed on ${\phi}$, in order to close the iteration). However, as mentioned previously, there is a “loss of derivatives” problem with this construction: due to the presence of the differential operators ${W}$ in (3), a solution ${\phi}$ constructed by this method can only be expected to have two degrees less regularity than ${\psi}$ at best, which makes this construction unsuitable for iteration.
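To see why the zeroth-order system is solvable by pointwise linear algebra under the freeness condition, one can think of it as follows: at each point ${p}$ one must choose ${\phi(p)}$ orthogonal to four given linearly independent vectors in ${{\bf R}^D}$, which for ${D > 4}$ always admits a nontrivial solution. A toy Python sketch (illustrative only; the names and dimensions here are not from the paper):

```python
# Toy sketch (not from the paper): at each point the system (3) is zeroth
# order in phi, asking for a vector phi(p) in R^D orthogonal to the four
# vectors X psi(p), Y psi(p), XX psi(p), YY psi(p).  Under the "freeness"
# (linear independence) condition one can always produce a nontrivial
# solution, e.g. by projecting a candidate vector off the span of the
# constraints via Gram-Schmidt.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_off(candidate, constraints):
    """Remove from `candidate` its component in span(constraints)."""
    basis = []
    for v in constraints:          # Gram-Schmidt on the constraint vectors
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:
            basis.append([wi / norm for wi in w])
    out = list(candidate)
    for b in basis:                # subtract the projection onto the span
        c = dot(out, b)
        out = [oi - c * bi for oi, bi in zip(out, b)]
    return out

# D = 6 toy example: four independent constraint vectors, one candidate
constraints = [[1, 0, 0, 0, 1, 0],
               [0, 1, 0, 0, 0, 1],
               [0, 0, 1, 0, 1, 1],
               [0, 0, 0, 1, 1, 0]]
phi = project_off([1, 2, 3, 4, 5, 6], constraints)
assert all(abs(dot(phi, v)) < 1e-9 for v in constraints)
assert dot(phi, phi) > 1e-9  # nontrivial, as the lower bounds require
```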

To get around this obstacle (which also prominently appears when solving (linearisations of) the isometric embedding equation ${Q(\phi,\phi)=g}$), we instead first construct a smooth, low-frequency solution ${\phi_{\leq N_0} \colon H \rightarrow {\bf R}^{D}}$ to a low-frequency equation

$\displaystyle B( \phi_{\leq N_0}, P_{\leq N_0} \psi ) = 0 \ \ \ \ \ (4)$

where ${P_{\leq N_0} \psi}$ is a mollification of ${\psi}$ (of Littlewood-Paley type) applied at a small spatial scale ${1/N_0}$ for some ${N_0}$, and then gradually relax the frequency cutoff ${P_{\leq N_0}}$ to deform this low frequency solution ${\phi_{\leq N_0}}$ to a solution ${\phi}$ of the actual equation (1).

We will construct the low-frequency solution ${\phi_{\leq N_0}}$ rather explicitly, using the Whitney embedding theorem to construct an initial oscillating map ${f}$ into a very low dimensional space ${{\bf R}^6}$, composing it with a Veronese type embedding into a slightly larger dimensional space ${{\bf R}^{27}}$ to obtain a required “freeness” property, and then composing further with a slowly varying isometry ${U(p) \colon {\bf R}^{27} \rightarrow {\bf R}^{36}}$ depending on ${P_{\leq N_0}}$ and constructed by a quantitative topological lemma (relying ultimately on the vanishing of the first few homotopy groups of high-dimensional spheres), in order to obtain the required orthogonality (4). (This sort of “quantitative null-homotopy” was first proposed by Gromov, with some recent progress on optimal bounds by Chambers-Manin-Weinberger and by Chambers-Dotterer-Manin-Weinberger, but we will not need these more advanced results here, as one can rely on the classical qualitative vanishing ${\pi_k(S^d)=0}$ for ${k < d}$ together with a compactness argument to obtain (ineffective) quantitative bounds, which suffice for this application).

To perform the deformation of ${\phi_{\leq N_0}}$ into ${\phi}$, we must solve what is essentially the linearised equation

$\displaystyle B( \dot \phi, \psi ) + B( \phi, \dot \psi ) = 0 \ \ \ \ \ (5)$

of (1) when ${\phi}$, ${\psi}$ (viewed as low frequency functions) are both being deformed at some rates ${\dot \phi, \dot \psi}$ (which should be viewed as high frequency functions). To avoid losing derivatives, the magnitude of the deformation ${\dot \phi}$ in ${\phi}$ should not be significantly greater than the magnitude of the deformation ${\dot \psi}$ in ${\psi}$, when measured in the same function space norms.

As before, if one directly solves the difference equation (5) using a naive application of (2) with ${B(\phi,\dot \psi)}$ treated as a forcing term, one will lose at least one derivative of regularity when passing from ${\dot \psi}$ to ${\dot \phi}$. However, observe that (2) (and the symmetry ${B(\phi, \dot \psi) = B(\dot \psi,\phi)}$) can be used to obtain the identity

$\displaystyle B( \dot \phi, \psi ) + B( \phi, \dot \psi ) = \left( W(\dot \phi \cdot W \psi + \dot \psi \cdot W \phi) - (\dot \phi \cdot WW \psi + \dot \psi \cdot WW \phi)\right)_{W = X,Y} \ \ \ \ \ (6)$

and then one can solve (5) by solving the system of equations

$\displaystyle \dot \phi \cdot W \psi = - \dot \psi \cdot W \phi$

for ${W \in \{X,XX,Y,YY\}}$. The key point here is that this system is zeroth order in both ${\dot \phi}$ and ${\dot \psi}$, so one can solve this system without losing any derivatives when passing from ${\dot \psi}$ to ${\dot \phi}$; compare this situation with that of the superficially similar system

$\displaystyle \dot \phi \cdot W \psi = - \phi \cdot W \dot \psi$

that one would obtain from naively linearising (3) without exploiting the symmetry of ${B}$. There is still however one residual “loss of derivatives” problem arising from the presence of a differential operator ${W}$ on the ${\phi}$ term, which prevents one from directly evolving this iteration scheme in time without losing regularity in ${\phi}$. It is here that we borrow the final key idea of the Nash-Moser scheme, which is to replace ${\phi}$ by a mollified version ${P_{\leq N} \phi}$ of itself (where the projection ${P_{\leq N}}$ depends on the time parameter). This creates an error term in (5), but it turns out that this error term is quite small and smooth (being a “high-high paraproduct” of ${\nabla \phi}$ and ${\nabla\psi}$, it ends up being far more regular than either ${\phi}$ or ${\psi}$, even with the presence of the derivatives) and can be iterated away provided that the initial frequency cutoff ${N_0}$ is large and the function ${\psi}$ has a fairly high (but finite) amount of regularity (we will eventually use the Hölder space ${C^{20,\alpha}}$ on the Heisenberg group to measure this).

The celebrated decomposition theorem of Fefferman and Stein shows that every function ${f \in \mathrm{BMO}({\bf R}^n)}$ of bounded mean oscillation can be decomposed in the form

$\displaystyle f = f_0 + \sum_{i=1}^n R_i f_i \ \ \ \ \ (1)$

modulo constants, for some ${f_0,f_1,\dots,f_n \in L^\infty({\bf R}^n)}$, where ${R_i := |\nabla|^{-1} \partial_i}$ are the Riesz transforms. A technical note: a function in BMO is defined only up to constants (as well as up to the usual almost everywhere equivalence); related to this, if ${f_i}$ is an ${L^\infty({\bf R}^n)}$ function, then the Riesz transform ${R_i f_i}$ is well defined as an element of ${\mathrm{BMO}({\bf R}^n)}$, but is also only defined up to constants and almost everywhere equivalence.

The original proof of Fefferman and Stein was indirect (relying for instance on the Hahn-Banach theorem). A constructive proof was later given by Uchiyama, and was in fact the topic of the second post on this blog. A notable feature of Uchiyama’s argument is that the construction is quite nonlinear; the vector-valued function ${(f_0,f_1,\dots,f_n)}$ is defined to take values on a sphere, and the iterative construction to build these functions from ${f}$ involves repeatedly projecting a potential approximant to this function to the sphere (also, the high-frequency components of this approximant are constructed in a manner that depends nonlinearly on the low-frequency components, which is a type of technique that has become increasingly common in analysis and PDE in recent years).

It is natural to ask whether the Fefferman-Stein decomposition (1) can be made linear in ${f}$, in the sense that each of the ${f_i, i=0,\dots,n}$ depend linearly on ${f}$. Strictly speaking this is easily accomplished using the axiom of choice: take a Hamel basis of ${\mathrm{BMO}({\bf R}^n)}$, choose a decomposition (1) for each element of this basis, and then extend linearly to all finite linear combinations of these basis functions, which by definition of a Hamel basis comprise all of ${\mathrm{BMO}({\bf R}^n)}$. But these linear operations have no reason to be continuous as a map from ${\mathrm{BMO}({\bf R}^n)}$ to ${L^\infty({\bf R}^n)}$. So the correct question is whether the decomposition can be made continuously linear (or equivalently, boundedly linear) in ${f}$, that is to say whether there exist continuous linear transformations ${T_i: \mathrm{BMO}({\bf R}^n) \rightarrow L^\infty({\bf R}^n)}$ such that

$\displaystyle f = T_0 f + \sum_{i=1}^n R_i T_i f \ \ \ \ \ (2)$

modulo constants for all ${f \in \mathrm{BMO}({\bf R}^n)}$. Note from the open mapping theorem that one can choose the functions ${f_0,\dots,f_n}$ to depend in a bounded fashion on ${f}$ (thus ${\|f_i\|_{L^\infty} \leq C \|f\|_{BMO}}$ for some constant ${C}$); however, the open mapping theorem does not guarantee linearity. Using a result of Bartle and Graves one can also make the ${f_i}$ depend continuously on ${f}$, but again the dependence is not guaranteed to be linear.

It is generally accepted folklore that continuous linear dependence is impossible, but I had difficulty recently tracking down an explicit proof of this assertion in the literature (if anyone knows of a reference, I would be glad to know of it). The closest I found was a proof of a similar statement in this paper of Bourgain and Brezis, which I was able to adapt to establish the current claim. The basic idea is to average over the symmetries of the decomposition, which in the case of (1) are translation invariance, rotation invariance, and dilation invariance. This effectively makes the operators ${T_0,T_1,\dots,T_n}$ invariant under all these symmetries, which forces them to themselves be linear combinations of the identity and Riesz transform operators; however, no such non-trivial linear combination maps ${\mathrm{BMO}}$ to ${L^\infty}$, and the claim follows. Formal details of this argument (which we phrase in a dual form in order to avoid some technicalities) appear below the fold.

We now turn to the local existence theory for the initial value problem for the incompressible Euler equations

$\displaystyle \partial_t u + (u \cdot \nabla) u = - \nabla p \ \ \ \ \ (1)$

$\displaystyle \nabla \cdot u = 0$

$\displaystyle u(0,x) = u_0(x).$

For sake of discussion we will just work in the non-periodic domain ${{\bf R}^d}$, ${d \geq 2}$, although the arguments here can be adapted without much difficulty to the periodic setting. We will only work with solutions in which the pressure ${p}$ is normalised in the usual fashion:

$\displaystyle p = - \Delta^{-1} \nabla \cdot \nabla \cdot (u \otimes u). \ \ \ \ \ (2)$
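As a concrete instance of the normalisation (2), the classical two-dimensional Taylor-Green field ${u = (\sin x \cos y, -\cos x \sin y)}$ has normalised pressure ${p = \frac{1}{4}(\cos 2x + \cos 2y)}$, and the pair ${(u,p)}$ is a steady solution of the Euler equations. The following Python sketch (a sanity check, not part of these notes) verifies this pointwise with finite differences:

```python
import math

# Sanity check (illustrative, not from the notes): the 2D Taylor-Green field
#   u(x, y) = (sin x cos y, -cos x sin y)
# is divergence-free, and with the normalised pressure
#   p(x, y) = (cos 2x + cos 2y) / 4
# it satisfies the steady Euler equations (u . grad) u = -grad p.
# We verify this pointwise with central finite differences.

def u(x, y):
    return (math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y))

def p(x, y):
    return (math.cos(2 * x) + math.cos(2 * y)) / 4

H = 1e-5  # finite-difference step

def ddx(f, x, y):
    return (f(x + H, y) - f(x - H, y)) / (2 * H)

def ddy(f, x, y):
    return (f(x, y + H) - f(x, y - H)) / (2 * H)

def residual(x, y):
    """Components of (u . grad) u + grad p; both should vanish."""
    u1, u2 = u(x, y)
    r1 = u1 * ddx(lambda a, b: u(a, b)[0], x, y) \
       + u2 * ddy(lambda a, b: u(a, b)[0], x, y) + ddx(p, x, y)
    r2 = u1 * ddx(lambda a, b: u(a, b)[1], x, y) \
       + u2 * ddy(lambda a, b: u(a, b)[1], x, y) + ddy(p, x, y)
    return r1, r2

def divergence(x, y):
    return ddx(lambda a, b: u(a, b)[0], x, y) + ddy(lambda a, b: u(a, b)[1], x, y)

for (x, y) in [(0.3, 1.1), (2.0, -0.7), (4.5, 3.3)]:
    r1, r2 = residual(x, y)
    assert abs(r1) < 1e-6 and abs(r2) < 1e-6
    assert abs(divergence(x, y)) < 1e-6
```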

Formally, the Euler equations (with normalised pressure) arise as the vanishing viscosity limit ${\nu \rightarrow 0}$ of the Navier-Stokes equations

$\displaystyle \partial_t u + (u \cdot \nabla) u = - \nabla p + \nu \Delta u \ \ \ \ \ (3)$

$\displaystyle \nabla \cdot u = 0$

$\displaystyle p = - \Delta^{-1} \nabla \cdot \nabla \cdot (u \otimes u)$

$\displaystyle u(0,x) = u_0(x)$

that was studied in previous notes. However, because most of the bounds established in previous notes, either on the lifespan ${T_*}$ of the solution or on the size of the solution itself, depended on ${\nu}$, it is not immediate how to justify passing to the limit and obtain either a strong well-posedness theory or a weak solution theory for the limiting equation (1). (For instance, weak solutions to the Navier-Stokes equations (or the approximate solutions used to create such weak solutions) have ${\nabla u}$ lying in ${L^2_{t,loc} L^2_x}$ for ${\nu>0}$, but the bound on the norm is ${O(\nu^{-1/2})}$ and so one could lose this regularity in the limit ${\nu \rightarrow 0}$, at which point it is not clear how to ensure that the nonlinear term ${u \otimes u}$ still converges in the sense of distributions to what one expects.)

Nevertheless, by carefully using the energy method (which we will do loosely following an approach of Bertozzi and Majda), it is still possible to obtain local-in-time estimates on (high-regularity) solutions to (3) that are uniform in the limit ${\nu \rightarrow 0}$. Such a priori estimates can then be combined with a number of variants of these estimates to obtain a satisfactory local well-posedness theory for the Euler equations. Among other things, we will be able to establish the Beale-Kato-Majda criterion: smooth solutions to the Euler (or Navier-Stokes) equations can be continued indefinitely unless the integral

$\displaystyle \int_0^{T_*} \| \omega(t) \|_{L^\infty_x( {\bf R}^d \rightarrow \wedge^2 {\bf R}^d )}\ dt$

becomes infinite at the final time ${T_*}$, where ${\omega := \nabla \wedge u}$ is the vorticity field. The vorticity has the important property that it is transported by the Euler flow, and in two spatial dimensions it can be used to establish global regularity for both the Euler and Navier-Stokes equations in these settings. (Unfortunately, in three and higher dimensions the phenomenon of vortex stretching has frustrated all attempts to date to use the vorticity transport property to establish global regularity of either equation in this setting.)

There is a rather different approach to establishing local well-posedness for the Euler equations, which relies on the vorticity-stream formulation of these equations. This will be discussed in a later set of notes.

In the previous set of notes we developed a theory of “strong” solutions to the Navier-Stokes equations. This theory, based around viewing the Navier-Stokes equations as a perturbation of the linear heat equation, has many attractive features: solutions exist locally, are unique, depend continuously on the initial data, have a high degree of regularity, can be continued in time as long as a sufficiently high regularity norm is under control, and tend to enjoy the same sort of conservation laws that classical solutions do. However, it is a major open problem as to whether these solutions can be extended to be (forward) global in time, because the norms that we know how to control globally in time do not have high enough regularity to be useful for continuing the solution. Also, the theory becomes degenerate in the inviscid limit ${\nu \rightarrow 0}$.

However, it is possible to construct “weak” solutions which lack many of the desirable features of strong solutions (notably, uniqueness, propagation of regularity, and conservation laws) but can often be constructed globally in time even when one is unable to do so for strong solutions. Broadly speaking, one usually constructs weak solutions by some sort of “compactness method”, which can generally be described as follows.

1. Construct a sequence of “approximate solutions” to the desired equation, for instance by developing a well-posedness theory for some “regularised” approximation to the original equation. (This theory often follows similar lines to those in the previous set of notes, for instance using such tools as the contraction mapping theorem to construct the approximate solutions.)
2. Establish some uniform bounds (over appropriate time intervals) on these approximate solutions, even in the limit as an approximation parameter is sent to zero. (Uniformity is key; non-uniform bounds are often easy to obtain if one puts enough “mollification”, “hyper-dissipation”, or “discretisation” in the approximating equation.)
3. Use some sort of “weak compactness” (e.g., the Banach-Alaoglu theorem, the Arzela-Ascoli theorem, or the Rellich compactness theorem) to extract a subsequence of approximate solutions that converge (in a topology weaker than that associated to the available uniform bounds) to a limit. (Note that there is no reason a priori to expect such limit points to be unique, or to have any regularity properties beyond that implied by the available uniform bounds.)
4. Show that this limit solves the original equation in a suitable weak sense.

The quality of these weak solutions is very much determined by the type of uniform bounds one can obtain on the approximate solution; the stronger these bounds are, the more properties one can obtain on these weak solutions. For instance, if the approximate solutions enjoy an energy identity leading to uniform energy bounds, then (by using tools such as Fatou’s lemma) one tends to obtain energy inequalities for the resulting weak solution; but if one somehow is able to obtain uniform bounds in a higher regularity norm than the energy then one can often recover the full energy identity. If the uniform bounds are at the regularity level needed to obtain well-posedness, then one generally expects to upgrade the weak solution to a strong solution. (This phenomenon is often formalised through weak-strong uniqueness theorems, which we will discuss later in these notes.) Thus we see that as far as attacking global regularity is concerned, both the theory of strong solutions and the theory of weak solutions encounter essentially the same obstacle, namely the inability to obtain uniform bounds on (exact or approximate) solutions at high regularities (and at arbitrary times).

For simplicity, we will focus our discussion in these notes on finite energy weak solutions on ${{\bf R}^d}$. There is a completely analogous theory for periodic weak solutions on ${{\bf R}^d}$ (or equivalently, weak solutions on the torus ${({\bf R}^d/{\bf Z}^d)}$), which we will leave to the interested reader.

In recent years, a completely different way to construct weak solutions to the Navier-Stokes or Euler equations has been developed that is not based on the above compactness methods, but instead on techniques of convex integration. These will be discussed in a later set of notes.

We now begin the rigorous theory of the incompressible Navier-Stokes equations

$\displaystyle \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p \ \ \ \ \ (1)$

$\displaystyle \nabla \cdot u = 0,$

where ${\nu>0}$ is a given constant (the kinematic viscosity, or viscosity for short), ${u: I \times {\bf R}^d \rightarrow {\bf R}^d}$ is an unknown vector field (the velocity field), and ${p: I \times {\bf R}^d \rightarrow {\bf R}}$ is an unknown scalar field (the pressure field). Here ${I}$ is a time interval, usually of the form ${[0,T]}$ or ${[0,T)}$. We will either be interested in spatially decaying situations, in which ${u(t,x)}$ decays to zero as ${x \rightarrow \infty}$, or ${{\bf Z}^d}$-periodic (or periodic for short) settings, in which one has ${u(t, x+n) = u(t,x)}$ for all ${n \in {\bf Z}^d}$. (One can also require the pressure ${p}$ to be periodic as well; this brings up a small subtlety in the uniqueness theory for these equations, which we will address later in this set of notes.) As is usual, we abuse notation by identifying a ${{\bf Z}^d}$-periodic function on ${{\bf R}^d}$ with a function on the torus ${{\bf R}^d/{\bf Z}^d}$.

In order for the system (1) to even make sense, one requires some level of regularity on the unknown fields ${u,p}$; this turns out to be a relatively important technical issue that will require some attention later in this set of notes, and we will end up transforming (1) into other forms that are more suitable for lower regularity candidate solutions. Our focus here will be on local existence of these solutions in a short time interval ${[0,T]}$ or ${[0,T)}$, for some ${T>0}$. (One could in principle also consider solutions that extend to negative times, but it turns out that the equations are not time-reversible, and the forward evolution is significantly more natural to study than the backwards one.) The study of the Euler equations, in which ${\nu=0}$, will be deferred to subsequent lecture notes.

As the unknown fields involve a time parameter ${t}$, and the first equation of (1) involves time derivatives of ${u}$, the system (1) should be viewed as describing an evolution for the velocity field ${u}$. (As we shall see later, the pressure ${p}$ is not really an independent dynamical field, as it can essentially be expressed in terms of the velocity field without requiring any differentiation or integration in time.) As such, the natural question to study for this system is the initial value problem, in which an initial velocity field ${u_0: {\bf R}^d \rightarrow {\bf R}^d}$ is specified, and one wishes to locate a solution ${(u,p)}$ to the system (1) with initial condition

$\displaystyle u(0,x) = u_0(x) \ \ \ \ \ (2)$

for ${x \in {\bf R}^d}$. Of course, in order for this initial condition to be compatible with the second equation in (1), we need the compatibility condition

$\displaystyle \nabla \cdot u_0 = 0 \ \ \ \ \ (3)$

and one should also impose some regularity, decay, and/or periodicity hypotheses on ${u_0}$ that are compatible with the corresponding level of regularity etc. on the solution ${u}$.

The fundamental questions in the local theory of an evolution equation are that of existence, uniqueness, and continuous dependence. In the context of the Navier-Stokes equations, these questions can be phrased (somewhat broadly) as follows:

• (a) (Local existence) Given suitable initial data ${u_0}$, does there exist a solution ${(u,p)}$ to the above initial value problem that exists for some time ${T>0}$? What can one say about the time ${T}$ of existence? How regular is the solution?
• (b) (Uniqueness) Is it possible to have two solutions ${(u,p), (u',p')}$ of a certain regularity class to the same initial value problem on a common time interval ${[0,T)}$? To what extent does the answer to this question depend on the regularity assumed on one or both of the solutions? Does one need to normalise the solutions beforehand in order to obtain uniqueness?
• (c) (Continuous dependence on data) If one perturbs the initial conditions ${u_0}$ by a small amount, what happens to the solution ${(u,p)}$ and on the time of existence ${T}$? (This question tends to only be sensible once one has a reasonable uniqueness theory.)

The answers to these questions tend to be more complicated than a simple “Yes” or “No”; for instance, they can depend on the precise regularity hypotheses one wishes to impose on the data and on the solution, and even on exactly how one interprets the concept of a “solution”. However, once one settles on such a set of hypotheses, it generally happens that one either gets a “strong” theory (in which one has existence, uniqueness, and continuous dependence on the data), a “weak” theory (in which one has existence of somewhat low-quality solutions, but with only limited uniqueness results (or even some spectacular failures of uniqueness) and almost no continuous dependence on data), or no satisfactory theory whatsoever. In the former case, we say (roughly speaking) that the initial value problem is locally well-posed, and one can then try to build upon the theory to explore more interesting topics such as global existence and asymptotics, classifying potential blowup, rigorous justification of conservation laws, and so forth. With a weak local theory, it becomes much more difficult to address these latter sorts of questions, and there are serious analytic pitfalls that one could fall into if one tries too strenuously to treat weak solutions as if they were strong. (For instance, conservation laws that are rigorously justified for strong, high-regularity solutions may well fail for weak, low-regularity ones.) Also, even if one is primarily interested in solutions at one level of regularity, the well-posedness theory at another level of regularity can be very helpful; for instance, if one is interested in smooth solutions in ${{\bf R}^d}$, it turns out that the well-posedness theory at the critical regularity of ${\dot H^{\frac{d}{2}-1}({\bf R}^d)}$ can be used to establish globally smooth solutions from small initial data. As such, it can become quite important to know what kind of local theory one can obtain for a given equation.

This set of notes will focus on the “strong” theory, in which a substantial amount of regularity is assumed in the initial data and solution, giving a satisfactory (albeit largely local-in-time) well-posedness theory. “Weak” solutions will be considered in later notes.

The Navier-Stokes equations are not the simplest of partial differential equations to study, in part because they are an amalgam of three more basic equations, which behave rather differently from each other (for instance the first equation is nonlinear, while the latter two are linear):

• (a) Transport equations such as ${\partial_t u + (u \cdot \nabla) u = 0}$.
• (b) Diffusion equations (or heat equations) such as ${\partial_t u = \nu \Delta u}$.
• (c) Systems such as ${v = F - \nabla p}$, ${\nabla \cdot v = 0}$, which (for want of a better name) we will call Leray systems.
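For orientation, here is a standard (and, at this point in the notes, purely formal) sketch of why the latter two equations are explicitly solvable. On ${{\bf R}^d}$, the diffusion equation is solved by convolution with the heat kernel, while in the Leray system one can take divergences of the first equation and use the divergence-free condition to isolate the pressure-type term:

```latex
% Diffusion equation \partial_t u = \nu \Delta u with data u(0) = u_0:
u(t) = e^{\nu t \Delta} u_0, \qquad
e^{\nu t \Delta} u_0(x) = \frac{1}{(4\pi \nu t)^{d/2}}
  \int_{\mathbf{R}^d} e^{-|x-y|^2/(4\nu t)}\, u_0(y)\, dy.

% Leray system v = F - \nabla p, \ \nabla \cdot v = 0:
% taking divergences gives \Delta p = \nabla \cdot F, hence (formally)
p = \Delta^{-1} (\nabla \cdot F), \qquad
v = F - \nabla \Delta^{-1} (\nabla \cdot F) =: \mathbb{P} F,
```

where ${\mathbb{P}}$ denotes the Leray projection onto divergence-free vector fields. Making rigorous sense of the formal inverse ${\Delta^{-1}}$ (and of the function spaces in which these formulae hold) is part of the preliminary work alluded to above.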

Accordingly, we will devote some time to getting some preliminary understanding of the linear diffusion and Leray systems before returning to the theory for the Navier-Stokes equation. Transport systems will be discussed further in subsequent notes; in this set of notes, we will instead focus on a more basic example of nonlinear equations, namely the first-order ordinary differential equation

$\displaystyle \partial_t u = F(u) \ \ \ \ \ (4)$

where ${u: I \rightarrow V}$ takes values in some finite-dimensional (real or complex) vector space ${V}$ on some time interval ${I}$, and ${F: V \rightarrow V}$ is a given linear or nonlinear function. (Here, we use “interval” to denote a connected non-empty subset of ${{\bf R}}$; in particular, we allow intervals to be half-infinite or infinite, or to be open, closed, or half-open.) Fundamental results in this area include the Picard existence and uniqueness theorem, the Duhamel formula, and Grönwall’s inequality; they will serve as motivation for the approach to local well-posedness that we will adopt in this set of notes. (There are other ways to construct strong or weak solutions for Navier-Stokes and Euler equations, which we will discuss in later notes.)
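To see the Picard scheme in action numerically, here is a small illustrative sketch (not taken from the notes): we solve the integral-equation form of (4) by iterating the map ${u \mapsto u_0 + \int_0^t F(u(s))\, ds}$ on a grid, with the integral evaluated by the trapezoidal rule. The test equation ${\partial_t u = u}$, ${u(0)=1}$ (with exact solution ${e^t}$) and all numerical parameters are choices made for this example.

```python
import math

def picard_iterate(F, u0, T, n_grid=1000, n_iter=30):
    """Approximate the solution of du/dt = F(u), u(0) = u0 on [0, T]
    by Picard iteration: repeatedly apply the integral operator
    u -> u0 + \int_0^t F(u(s)) ds, discretised by the trapezoidal rule."""
    dt = T / n_grid
    u = [u0] * (n_grid + 1)          # initial guess: the constant function u0
    for _ in range(n_iter):
        Fu = [F(v) for v in u]       # evaluate F along the current iterate
        new = [u0]
        acc = 0.0
        for k in range(n_grid):
            acc += 0.5 * (Fu[k] + Fu[k + 1]) * dt   # trapezoid rule increment
            new.append(u0 + acc)
        u = new
    return u

# Test case: F(u) = u, u(0) = 1, exact solution u(t) = exp(t).
u = picard_iterate(lambda v: v, 1.0, 0.5)
error = abs(u[-1] - math.exp(0.5))
print(error)   # small: only discretisation error remains after iteration
```

On a short time interval the iteration converges geometrically (indeed factorially) fast, which mirrors the contraction-mapping proof of the Picard theorem; the remaining error is purely from the trapezoidal discretisation.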

A key role in our treatment here will be played by the fundamental theorem of calculus (in various forms and variations). Roughly speaking, this theorem, and its variants, allow us to recast differential equations (such as (1) or (4)) as integral equations. Such integral equations are less tractable algebraically than their differential counterparts (for instance, they are not ideal for verifying conservation laws), but are significantly more convenient for well-posedness theory, basically because integration tends to increase the regularity of a function, while differentiation reduces it. (Indeed, the problem of “losing derivatives”, or more precisely “losing regularity”, is a key obstacle that one often has to address when trying to establish well-posedness for PDE, particularly those that are quite nonlinear and with rough initial data, though for nonlinear parabolic equations such as Navier-Stokes the obstacle is not as serious as it is for some other PDE, due to the smoothing effects of the heat equation.)
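To spell out the simplest instance of this recasting (for, say, continuously differentiable ${u}$): by the fundamental theorem of calculus, ${u}$ solves (4) with initial condition ${u(t_0) = u_0}$ if and only if it solves the integral equation

```latex
u(t) = u_0 + \int_{t_0}^t F(u(s))\, ds.
```

Similarly, Duhamel-type formulae recast a forced heat equation ${\partial_t u = \nu \Delta u + N}$, ${u(0) = u_0}$ as ${u(t) = e^{\nu t \Delta} u_0 + \int_0^t e^{\nu (t-s) \Delta} N(s)\, ds}$. Note that the right-hand sides involve ${u}$ only through integration (and smoothing propagators), not differentiation, which is what makes these formulations amenable to fixed point arguments.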

One weakness of the methods deployed here is that the quantitative bounds produced deteriorate to the point of uselessness in the inviscid limit ${\nu \rightarrow 0}$, rendering these techniques unsuitable for analysing the Euler equations in which ${\nu=0}$. However, some of the methods developed in later notes have bounds that remain uniform in the ${\nu \rightarrow 0}$ limit, allowing one to also treat the Euler equations.

In this and subsequent set of notes, we use the following asymptotic notation (a variant of Vinogradov notation that is commonly used in PDE and harmonic analysis). The statement ${X \lesssim Y}$, ${Y \gtrsim X}$, or ${X = O(Y)}$ will be used to denote an estimate of the form ${|X| \leq CY}$ (or equivalently ${Y \geq C^{-1} |X|}$) for some constant ${C}$, and ${X \sim Y}$ will be used to denote the estimates ${X \lesssim Y \lesssim X}$. If the constant ${C}$ depends on other parameters (such as the dimension ${d}$), this will be indicated by subscripts, thus for instance ${X \lesssim_d Y}$ denotes the estimate ${|X| \leq C_d Y}$ for some ${C_d}$ depending on ${d}$.
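As a quick illustration of the subscripted notation (a simple example, not drawn from the notes): by the Cauchy-Schwarz inequality, for any ${x \in {\bf R}^d}$ one has

```latex
\sum_{i=1}^d |x_i| \;\leq\; \sqrt{d}\, \Big( \sum_{i=1}^d |x_i|^2 \Big)^{1/2},
\qquad \hbox{i.e.} \qquad \sum_{i=1}^d |x_i| \lesssim_d |x|,
```

with implied constant ${C_d = \sqrt{d}}$; since this constant grows with ${d}$, one cannot write the dimension-independent estimate ${\sum_{i=1}^d |x_i| \lesssim |x|}$ here.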

In the last week or so there has been some discussion on the internet about a paper (originally authored by Hill and Tabachnikov) that was initially accepted for publication in the Mathematical Intelligencer, but with the editor-in-chief of that journal later deciding against publication; the paper, in significantly revised form (and now authored solely by Hill), was then quickly accepted by one of the editors in the New York Journal of Mathematics, but then was removed from publication after objections from several members of the editorial board of NYJM that the paper had not been properly refereed, nor was it within the scope of the journal; see this statement by Benson Farb, who at the time was on that board, for more details.  Some further discussion of this incident may be found on Tim Gowers’ blog; the most recent version of the paper, as well as a number of prior revisions, are still available on the arXiv here.

For whatever reason, some of the discussion online has focused on the role of Amie Wilkinson, a mathematician from the University of Chicago (and who, incidentally, was a recent speaker here at UCLA in our Distinguished Lecture Series), who wrote an email to the editor-in-chief of the Intelligencer raising some concerns about the content of the paper and suggesting that it be published alongside commentary from other experts in the field.  (This, by the way, is not uncommon practice when dealing with a potentially provocative publication in one field by authors coming from a different field; for instance, when Emmanuel Candès and I published a paper in the Annals of Statistics introducing what we called the “Dantzig selector”, the Annals solicited a number of articles discussing the selector from prominent statisticians, and then invited us to submit a rejoinder.)    It seems that the editors of the Intelligencer decided instead to reject the paper.  The paper then had a complicated interaction with NYJM, but, as stated by Wilkinson in her recent statement on this matter as well as by Farb, this was done without any involvement from Wilkinson.  (It is true that Farb happens to also be Wilkinson’s husband, but I see no reason to doubt their statements on this matter.)

I have not interacted much with the Intelligencer, but I have published a few papers with NYJM over the years; it is an early example of a quality “diamond open access” mathematics journal.  It seems that this incident may have uncovered some issues with their editorial procedure for reviewing and accepting papers, but I am hopeful that they can be addressed to avoid this sort of event occurring again.