
This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture) Let ${E}$ be a subset of ${{\bf R}^3}$ that contains a unit line segment in every direction. Then ${\hbox{dim}(E) = 3}$.

This conjecture is not precisely formulated here, because we have not specified exactly what type of set ${E}$ is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again) Let ${{\cal L}}$ be a family of lines in ${{\bf R}^3}$ that meet ${B(0,1)}$ and contain a line in each direction. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ to ${B(0,2)}$ of every line ${\ell}$ in ${{\cal L}}$. Then ${\hbox{dim}(E) = 3}$.

As the space of all directions in ${{\bf R}^3}$ is two-dimensional, we thus see that ${{\cal L}}$ is an (at least) two-dimensional subset of the four-dimensional space of lines in ${{\bf R}^3}$ (actually, it lies in a compact subset of this space, since we have constrained the lines to meet ${B(0,1)}$). One could then ask if this is the only property of ${{\cal L}}$ that is needed to establish the Kakeya conjecture, that is to say if any subset of ${B(0,2)}$ which contains a two-dimensional family of lines (restricted to ${B(0,2)}$, and meeting ${B(0,1)}$) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in ${B(0,2)}$ (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture) Let ${{\cal L}}$ be a two-dimensional family of lines in ${{\bf R}^3}$ that meet ${B(0,1)}$, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in ${{\cal L}}$. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ of every line ${\ell}$ in ${{\cal L}}$. Then ${\hbox{dim}(E) = 3}$.

Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that point in distinct directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound ${\hbox{dim}(E) \geq 5/2}$ (for various notions of dimension, e.g. Hausdorff dimension) for sets ${E}$ in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the ${5/2}$ barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals ${{\bf R}}$. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over ${{\bf C}}$) Let ${{\cal L}}$ be a four (real) dimensional family of complex lines in ${{\bf C}^3}$ that meet the unit ball ${B(0,1)}$ in ${{\bf C}^3}$, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in ${{\cal L}}$. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ of every complex line ${\ell}$ in ${{\cal L}}$. Then ${E}$ has real dimension ${6}$.

The argument of Wolff can be adapted to the complex case to show that all sets ${E}$ occurring in Conjecture 4 have real dimension at least ${5}$. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample) Let ${H \subset {\bf C}^3}$ be the Heisenberg group

$\displaystyle H = \{ (z_1,z_2,z_3) \in {\bf C}^3: \hbox{Im}(z_1) = \hbox{Im}(z_2 \overline{z_3}) \}$

and let ${{\cal L}}$ be the family of complex lines

$\displaystyle \ell_{s,t,\alpha} := \{ (\overline{\alpha} z + t, z, sz + \alpha): z \in {\bf C} \}$

with ${s,t \in {\bf R}}$ and ${\alpha \in {\bf C}}$. Then ${H}$ is a five (real) dimensional subset of ${{\bf C}^3}$ that contains every line in the four (real) dimensional set ${{\cal L}}$; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in ${{\cal L}}$. In particular, the strong Kakeya conjecture over the complex numbers is false.

This proposition is proven by a routine computation, which we omit here. The group structure on ${H}$ is given by the group law

$\displaystyle (z_1,z_2,z_3) \cdot (w_1,w_2,w_3) = (z_1 + w_1 + z_2 \overline{w_3} - z_3 \overline{w_2}, z_2 +w_2, z_3+w_3),$

giving ${H}$ the structure of a ${2}$-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over ${{\bf R}^2}$. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines ${{\cal L}}$ in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

$\displaystyle \ell_{0,t,0} = \{ (t, z, 0): z \in {\bf C}\}$

with ${t \in {\bf R}}$; multiplying this family of lines on the right by a group element in ${H}$ gives other families of parallel lines, which in fact sweep out all of ${{\cal L}}$.
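The "routine computation" behind Proposition 5 is also easy to spot-check numerically. The following Python sketch (the helper names are my own, chosen for illustration) verifies the two key algebraic facts: every line ${\ell_{s,t,\alpha}}$ lies in ${H}$, and ${H}$ is closed under the group law given above.

```python
import random

def in_H(p, tol=1e-9):
    """Membership test for H: Im(z1) = Im(z2 * conj(z3))."""
    z1, z2, z3 = p
    return abs(z1.imag - (z2 * z3.conjugate()).imag) < tol

def line_point(s, t, alpha, z):
    """The point of ell_{s,t,alpha} with parameter z (s, t real; alpha, z complex)."""
    return (alpha.conjugate() * z + t, z, s * z + alpha)

def heis_mul(p, q):
    """The group law on H from the post."""
    z1, z2, z3 = p
    w1, w2, w3 = q
    return (z1 + w1 + z2 * w3.conjugate() - z3 * w2.conjugate(),
            z2 + w2, z3 + w3)

rng = random.Random(0)
rc = lambda: complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
for _ in range(1000):
    p = line_point(rng.uniform(-2, 2), rng.uniform(-2, 2), rc(), rc())
    q = line_point(rng.uniform(-2, 2), rng.uniform(-2, 2), rc(), rc())
    assert in_H(p)               # every line ell_{s,t,alpha} lies in H
    assert in_H(heis_mul(p, q))  # H is closed under the group law
```

(The first assertion is the identity ${\hbox{Im}(\overline{\alpha} z + t) = \hbox{Im}(z \overline{(sz+\alpha)})}$, both sides being ${\hbox{Im}(\overline{\alpha} z)}$; the second reflects the closure computation sketched in the text.)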

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield ${{\bf R}}$ of ${{\bf C}}$, which induces an involution ${z \mapsto \overline{z}}$ which can then be used to define the Heisenberg group ${H}$ through the formula

$\displaystyle H = \{ (z_1,z_2,z_3) \in {\bf C}^3: z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}.$

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields ${{\bf F}_{q^2}}$ that contain a “half-dimensional” subfield ${{\bf F}_q}$; we leave the details to the interested reader. Morally speaking, if ${{\bf R}}$ in turn contained a subfield of dimension ${1/2}$ (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdos and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
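Since the finite-field construction is left to the reader, here is a minimal sketch of the smallest case ${q = 3}$ (all code names are mine): we realise ${{\bf F}_9 = {\bf F}_3[i]/(i^2+1)}$, take "conjugation" to be the Frobenius ${z \mapsto z^3}$ fixing the half-dimensional subfield ${{\bf F}_3}$, and count the points of the Heisenberg set ${H = \{ z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}}$, which should have ${q^5 = 243}$ points inside the ${q^6}$-point space ${{\bf F}_9^3}$.

```python
from itertools import product

P = 3  # F_9 = F_3[i] with i^2 = -1 (x^2 + 1 is irreducible mod 3)

def sub(x, y):
    return ((x[0] - y[0]) % P, (x[1] - y[1]) % P)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def conj(x):
    # The Frobenius z -> z^3 fixes the subfield F_3 and sends i -> -i,
    # playing the role of complex conjugation over F_{q^2}.
    return (x[0], (-x[1]) % P)

F9 = list(product(range(P), repeat=2))

# H = { (z1, z2, z3) in F_9^3 : z1 - conj(z1) = z2 conj(z3) - z3 conj(z2) }
count = sum(1 for z1, z2, z3 in product(F9, repeat=3)
            if sub(z1, conj(z1)) == sub(mul(z2, conj(z3)), mul(z3, conj(z2))))

# |H| = q^5 = 243: a "five-sixths dimensional" subset of F_9^3, mirroring the
# five real dimensional Heisenberg group inside the six real dimensional C^3.
assert count == 3 ** 5
```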

We thus see that to go beyond the ${5/2}$ dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

• (a) Exploit the distinct directions of the lines in ${{\mathcal L}}$ in a way that goes beyond the Wolff axiom; or
• (b) Exploit the fact that ${{\bf R}}$ does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of ${5/2}$ for Kakeya sets very slightly to ${5/2+10^{-10}}$ (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of ${{\bf F}_p}$, and then pursued route (b) to obtain a corresponding improvement ${5/2+\epsilon}$ to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

1. Assume that the (strong) Kakeya conjecture fails, so that there are sets ${E}$ of the form in Conjecture 3 of dimension ${3-\sigma}$ for some ${\sigma>0}$. Assume that ${E}$ is “optimal”, in the sense that ${\sigma}$ is as large as possible.
2. Use the optimality of ${E}$ (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets ${E}$, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining ${E}$ to “behave like” a putative Heisenberg group counterexample.
3. By playing all these structural properties off of each other, show that ${E}$ can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set ${E}$, for a number of reasons (one of them being that we did not have a good notion of dimension that did everything we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method to obtain a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.

Roth’s theorem on arithmetic progressions asserts that every subset of the integers ${{\bf Z}}$ of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:

Theorem 1 (Roth’s theorem) Let ${G = (G,+)}$ be a compact abelian group, with Haar probability measure ${\mu}$, which is ${2}$-divisible (i.e. the map ${x \mapsto 2x}$ is surjective) and let ${A}$ be a measurable subset of ${G}$ with ${\mu(A) \geq \alpha}$ for some ${0 < \alpha < 1}$. Then we have

$\displaystyle \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) \gg_\alpha 1,$

where ${X \gg_\alpha Y}$ denotes the bound ${X \geq c_\alpha Y}$ for some ${c_\alpha > 0}$ depending only on ${\alpha}$.

This theorem is usually formulated in the case that ${G}$ is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group ${G = {\bf Z}/N{\bf Z}}$ of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of ${2}$-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant ${c_\alpha}$ on ${\alpha}$, but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the ${2}$-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the ${2r}$ shift in that case.
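In the cyclic case ${G = {\bf Z}/N{\bf Z}}$ with ${N}$ odd, the left-hand side of Theorem 1 is a finite double average that can be computed by brute force; the following small sketch (my own illustration, not from the text) does so and checks it against the trivial lower bound ${\alpha/N}$ coming from the ${r=0}$ terms. Theorem 1 is of course the much stronger assertion that one has a lower bound ${c_\alpha}$ that is uniform in ${N}$.

```python
import random

def triple_average(A, N):
    """The double average (1/N^2) sum_{x,r} 1_A(x) 1_A(x+r) 1_A(x+2r),
    i.e. the left-hand side of Theorem 1 for G = Z/NZ."""
    ind = [0] * N
    for a in A:
        ind[a % N] = 1
    total = sum(ind[x] * ind[(x + r) % N] * ind[(x + 2 * r) % N]
                for x in range(N) for r in range(N))
    return total / N ** 2

N = 101  # odd order, so x -> 2x is a bijection and Z/NZ is 2-divisible
random.seed(1)
A = set(random.sample(range(N), N // 2))  # a random set of density ~1/2
alpha = len(A) / N
avg = triple_average(A, N)
# The r = 0 terms alone give the trivial bound alpha/N; for a random dense set
# the average is in fact close to alpha^3.
assert avg >= alpha / N
```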

We can deduce Theorem 1 from the following more general Khintchine-type statement. Let ${\hat G}$ denote the Pontryagin dual of a compact abelian group ${G}$, that is to say the set of all continuous homomorphisms ${\xi: x \mapsto \xi \cdot x}$ from ${G}$ to the (additive) unit circle ${{\bf R}/{\bf Z}}$. Thus ${\hat G}$ is a discrete abelian group, and functions ${f \in L^2(G)}$ have a Fourier transform ${\hat f \in \ell^2(\hat G)}$ defined by

$\displaystyle \hat f(\xi) := \int_G f(x) e^{-2\pi i \xi \cdot x}\ d\mu(x).$

If ${G}$ is ${2}$-divisible, then ${\hat G}$ is ${2}$-torsion-free in the sense that the map ${\xi \mapsto 2 \xi}$ is injective. For any finite set ${S \subset \hat G}$ and any radius ${\rho>0}$, define the Bohr set

$\displaystyle B(S,\rho) := \{ x \in G: \sup_{\xi \in S} \| \xi \cdot x \|_{{\bf R}/{\bf Z}} < \rho \}$

where ${\|\theta\|_{{\bf R}/{\bf Z}}}$ denotes the distance of ${\theta}$ to the nearest integer. We refer to the cardinality ${|S|}$ of ${S}$ as the rank of the Bohr set. We record a simple volume bound on Bohr sets:

Lemma 2 (Volume packing bound) Let ${G}$ be a compact abelian group with Haar probability measure ${\mu}$. For any Bohr set ${B(S,\rho)}$, we have

$\displaystyle \mu( B( S, \rho ) ) \gg_{|S|, \rho} 1.$

Proof: We can cover the torus ${({\bf R}/{\bf Z})^S}$ by ${O_{|S|,\rho}(1)}$ translates ${\theta+Q}$ of the cube ${Q := \{ (\theta_\xi)_{\xi \in S} \in ({\bf R}/{\bf Z})^S: \sup_{\xi \in S} \|\theta_\xi\|_{{\bf R}/{\bf Z}} < \rho/2 \}}$. Then the sets ${\{ x \in G: (\xi \cdot x)_{\xi \in S} \in \theta + Q \}}$ form a cover of ${G}$. But all of these sets lie in a translate of ${B(S,\rho)}$, and the claim then follows from the translation invariance of ${\mu}$. $\Box$
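This covering argument can be sanity-checked concretely in ${G = {\bf Z}/N{\bf Z}}$, where ${\xi \cdot x = \xi x / N \hbox{ mod } 1}$ and ${\mu}$ is normalised counting measure. A quick sketch (my own, with an arbitrarily chosen frequency set): the lemma's pigeonhole gives ${\mu(B(S,\rho)) \geq \lceil 2/\rho \rceil^{-|S|}}$, which the code checks in a slightly weakened form.

```python
N = 1000

def dist_to_int(theta):
    """The distance ||theta||_{R/Z} of theta to the nearest integer."""
    return abs(theta - round(theta))

def bohr_set(S, rho, N):
    """B(S, rho) in G = Z/NZ, where xi . x = xi*x/N mod 1."""
    return [x for x in range(N)
            if max(dist_to_int(xi * x / N) for xi in S) < rho]

S = [1, 57, 333]   # a rank-3 Bohr set; the frequencies are arbitrary
rho = 0.1
B = bohr_set(S, rho, N)
mu_B = len(B) / N
assert 0 in B      # Bohr sets always contain the origin
# Weakened form of the lemma's bound mu(B) >= 1 / ceil(2/rho)^{|S|}:
assert mu_B >= (rho / 2) ** len(S) / 2
```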

Given any Bohr set ${B(S,\rho)}$, we define a normalised “Lipschitz” cutoff function ${\nu_{B(S,\rho)}: G \rightarrow {\bf R}}$ by the formula

$\displaystyle \nu_{B(S,\rho)}(x) = c_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})_+ \ \ \ \ \ (1)$

where ${c_{B(S,\rho)}}$ is the constant such that

$\displaystyle \int_G \nu_{B(S,\rho)}\ d\mu = 1,$

thus

$\displaystyle c_{B(S,\rho)} = \left( \int_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})\ d\mu(x) \right)^{-1}.$

The function ${\nu_{B(S,\rho)}}$ should be viewed as an ${L^1}$-normalised “tent function” cutoff to ${B(S,\rho)}$. Note from Lemma 2 that

$\displaystyle 1 \ll_{|S|,\rho} c_{B(S,\rho)} \ll_{|S|,\rho} 1. \ \ \ \ \ (2)$

We then have the following sharper version of Theorem 1:

Theorem 3 (Roth-Khintchine theorem) Let ${G = (G,+)}$ be a ${2}$-divisible compact abelian group, with Haar probability measure ${\mu}$, and let ${\epsilon>0}$. Then for any measurable function ${f: G \rightarrow [0,1]}$, there exists a Bohr set ${B(S,\rho)}$ with ${|S| \ll_\epsilon 1}$ and ${\rho \gg_\epsilon 1}$ such that

$\displaystyle \int_G \int_G f(x) f(x+r) f(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \ \ \ \ \ (3)$

$\displaystyle \geq (\int_G f\ d\mu)^3 - O(\epsilon)$

where ${*}$ denotes the convolution operation

$\displaystyle f*g(x) := \int_G f(y) g(x-y)\ d\mu(y).$

A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with ${f := 1_A}$ and ${\epsilon}$ equal to a small multiple of ${\alpha^3}$ to conclude that there is a Bohr set ${B(S,\rho)}$ with ${|S| \ll_\alpha 1}$ and ${\rho \gg_\alpha 1}$ such that

$\displaystyle \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \gg \alpha^3.$

But from (2) we have the pointwise bound ${\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \ll_\alpha 1}$, and Theorem 1 follows.
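The two facts about ${\nu_{B(S,\rho)}}$ used in this deduction, namely the normalisation ${\int_G \nu\ d\mu = 1}$ and the boundedness of the convolution ${\nu * \nu}$, can be verified directly in ${{\bf Z}/N{\bf Z}}$. A small sketch (my own illustration; note that ${\nu*\nu \leq \sup \nu}$ pointwise, since ${\nu}$ has mean one):

```python
N = 1000

def dist_to_int(theta):
    return abs(theta - round(theta))

def tent_cutoff(S, rho, N):
    """The L^1-normalised tent function nu_{B(S,rho)} on Z/NZ, as in (1)."""
    raw = [max(0.0, 1 - max(dist_to_int(xi * x / N) for xi in S) / rho)
           for x in range(N)]
    total = sum(raw) / N  # integral against normalised counting measure
    return [v / total for v in raw]

def convolve(f, g, N):
    """f*g(x) = (1/N) sum_y f(y) g(x - y)."""
    return [sum(f[y] * g[(x - y) % N] for y in range(N)) / N
            for x in range(N)]

S, rho = [1, 57], 0.1
nu = tent_cutoff(S, rho, N)
assert abs(sum(nu) / N - 1) < 1e-9   # int nu dmu = 1
nu2 = convolve(nu, nu, N)
assert abs(sum(nu2) / N - 1) < 1e-9  # convolution preserves the mean
assert max(nu2) <= max(nu) + 1e-9    # nu*nu is bounded by sup nu
```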

Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set ${B(S,\rho)}$ to capture all the “large Fourier coefficients” of ${f}$, but such that a certain “dilate” of ${B(S,\rho)}$ does not capture much more Fourier energy of ${f}$ than ${B(S,\rho)}$ itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of ${f}$ into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the ${\nu_{B(S,\rho)}}$ considered above to achieve a similar effect.

Throughout this post, we will work only at the formal level of analysis, ignoring issues of convergence of integrals, justifying differentiation under the integral sign, and so forth. (Rigorous justification of the conservation laws and other identities arising from the formal manipulations below can usually be established in an a posteriori fashion once the identities are in hand, without the need to rigorously justify the manipulations used to come up with these identities).

It is a remarkable fact in the theory of differential equations that many of the ordinary and partial differential equations that are of interest (particularly in geometric PDE, or PDE arising from mathematical physics) admit a variational formulation; thus, a collection ${\Phi: \Omega \rightarrow M}$ of one or more fields on a domain ${\Omega}$ taking values in a space ${M}$ will solve the differential equation of interest if and only if ${\Phi}$ is a critical point of the functional

$\displaystyle J[\Phi] := \int_\Omega L( x, \Phi(x), D\Phi(x) )\ dx \ \ \ \ \ (1)$

involving the fields ${\Phi}$ and their first derivatives ${D\Phi}$, where the Lagrangian ${L: \Sigma \rightarrow {\bf R}}$ is a function on the vector bundle ${\Sigma}$ over ${\Omega \times M}$ consisting of triples ${(x, q, \dot q)}$ with ${x \in \Omega}$, ${q \in M}$, and ${\dot q: T_x \Omega \rightarrow T_q M}$ a linear transformation; we also usually keep the boundary data of ${\Phi}$ fixed in case ${\Omega}$ has a non-trivial boundary, although we will ignore these issues here. (We also ignore the possibility of having additional constraints imposed on ${\Phi}$ and ${D\Phi}$, which require the machinery of Lagrange multipliers to deal with, but which will only serve as a distraction for the current discussion.) It is common to use local coordinates to parameterise ${\Omega}$ as ${{\bf R}^d}$ and ${M}$ as ${{\bf R}^n}$, in which case ${L}$ can be viewed locally as a function on ${{\bf R}^d \times {\bf R}^n \times {\bf R}^{dn}}$.

Example 1 (Geodesic flow) Take ${\Omega = [0,1]}$ and ${M = (M,g)}$ to be a Riemannian manifold, which we will write locally in coordinates as ${{\bf R}^n}$ with metric ${g_{ij}(q)}$ for ${i,j=1,\dots,n}$. A geodesic ${\gamma: [0,1] \rightarrow M}$ is then a critical point (keeping ${\gamma(0),\gamma(1)}$ fixed) of the energy functional

$\displaystyle J[\gamma] := \frac{1}{2} \int_0^1 g_{\gamma(t)}( D\gamma(t), D\gamma(t) )\ dt$

or in coordinates (ignoring coordinate patch issues, and using the usual summation conventions)

$\displaystyle J[\gamma] = \frac{1}{2} \int_0^1 g_{ij}(\gamma(t)) \dot \gamma^i(t) \dot \gamma^j(t)\ dt.$

As discussed in this previous post, both the Euler equations for rigid body motion, and the Euler equations for incompressible inviscid flow, can be interpreted as geodesic flow (though in the latter case, one has to work really formally, as the manifold ${M}$ is now infinite dimensional).

More generally, if ${\Omega = (\Omega,h)}$ is itself a Riemannian manifold, which we write locally in coordinates as ${{\bf R}^d}$ with metric ${h_{ab}(x)}$ for ${a,b=1,\dots,d}$, then a harmonic map ${\Phi: \Omega \rightarrow M}$ is a critical point of the energy functional

$\displaystyle J[\Phi] := \frac{1}{2} \int_\Omega h^{-1}(x) \otimes g_{\Phi(x)}( D\Phi(x), D\Phi(x) )\ d\hbox{vol}_h(x)$

or in coordinates (again ignoring coordinate patch issues)

$\displaystyle J[\Phi] = \frac{1}{2} \int_{{\bf R}^d} h^{ab}(x) g_{ij}(\Phi(x)) (\partial_a \Phi^i(x)) (\partial_b \Phi^j(x))\ \sqrt{\det(h(x))}\ dx.$

If we replace the Riemannian manifold ${\Omega}$ by a Lorentzian manifold, such as Minkowski space ${{\bf R}^{1+3}}$, then the notion of a harmonic map is replaced by that of a wave map, which generalises the scalar wave equation (which corresponds to the case ${M={\bf R}}$).

Example 2 (${N}$-particle interactions) Take ${\Omega = {\bf R}}$ and ${M = {\bf R}^3 \otimes {\bf R}^N}$; then a function ${\Phi: \Omega \rightarrow M}$ can be interpreted as a collection of ${N}$ trajectories ${q_1,\dots,q_N: {\bf R} \rightarrow {\bf R}^3}$ in space, which we give a physical interpretation as the trajectories of ${N}$ particles. If we assign each particle a positive mass ${m_1,\dots,m_N > 0}$, and also introduce a potential energy function ${V: M \rightarrow {\bf R}}$, then it turns out that Newton’s laws of motion ${F=ma}$ in this context (with the force ${F_i}$ on the ${i^{th}}$ particle being given by the conservative force ${-\nabla_{q_i} V}$) are equivalent to the trajectories ${q_1,\dots,q_N}$ being a critical point of the action functional

$\displaystyle J[\Phi] := \int_{\bf R} \sum_{i=1}^N \frac{1}{2} m_i |\dot q_i(t)|^2 - V( q_1(t),\dots,q_N(t) )\ dt.$

Formally, if ${\Phi = \Phi_0}$ is a critical point of a functional ${J[\Phi]}$, this means that

$\displaystyle \frac{d}{ds} J[ \Phi[s] ]|_{s=0} = 0$

whenever ${s \mapsto \Phi[s]}$ is a (smooth) deformation with ${\Phi[0]=\Phi_0}$ (and with ${\Phi[s]}$ respecting whatever boundary conditions are appropriate). Interchanging the derivative and integral, we (formally, at least) arrive at

$\displaystyle \int_\Omega \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}\ dx = 0. \ \ \ \ \ (2)$

Write ${\delta \Phi := \frac{d}{ds} \Phi[s]|_{s=0}}$ for the infinitesimal deformation of ${\Phi_0}$. By the chain rule, ${\frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}}$ can be expressed in terms of ${x, \Phi_0(x), \delta \Phi(x), D\Phi_0(x), D \delta \Phi(x)}$. In coordinates, we have

$\displaystyle \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \delta \Phi^i(x) L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) \ \ \ \ \ (3)$

$\displaystyle + \partial_{x^a} \delta \Phi^i(x) L_{\partial_{x^a} q^i} (x,\Phi_0(x), D\Phi_0(x)),$

where we parameterise ${\Sigma}$ by ${x, (q^i)_{i=1,\dots,n}, (\partial_{x^a} q^i)_{a=1,\dots,d; i=1,\dots,n}}$, and we use subscripts on ${L}$ to denote partial derivatives in the various coefficients. (One can of course work in a coordinate-free manner here if one really wants to, but the notation becomes a little cumbersome due to the need to carefully split up the tangent space of ${\Sigma}$, and we will not do so here.) Thus we can view (2) as an integral identity that asserts the vanishing of a certain integral, whose integrand involves ${x, \Phi_0(x), \delta \Phi(x), D\Phi_0(x), D \delta \Phi(x)}$, where ${\delta \Phi}$ vanishes at the boundary but is otherwise unconstrained.

A general rule of thumb in PDE and calculus of variations is that whenever one has an integral identity of the form ${\int_\Omega F(x)\ dx = 0}$ for some class of functions ${F}$ that vanishes on the boundary, then there must be an associated differential identity ${F = \hbox{div} X}$ that justifies this integral identity through Stokes’ theorem. This rule of thumb helps explain why integration by parts is used so frequently in PDE to justify integral identities. The rule of thumb can fail when one is dealing with “global” or “cohomologically non-trivial” integral identities of a topological nature, such as the Gauss-Bonnet or Kazhdan-Warner identities, but is quite reliable for “local” or “cohomologically trivial” identities, such as those arising from calculus of variations.

In any case, if we apply this rule to (2), we expect that the integrand ${\frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}}$ should be expressible as a spatial divergence. This is indeed the case:

Proposition 1 (Formal) Let ${\Phi = \Phi_0}$ be a critical point of the functional ${J[\Phi]}$ defined in (1). Then for any deformation ${s \mapsto \Phi[s]}$ with ${\Phi[0] = \Phi_0}$, we have

$\displaystyle \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \hbox{div} X \ \ \ \ \ (4)$

where ${X}$ is the vector field that is expressible in coordinates as

$\displaystyle X^a := \delta \Phi^i(x) L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)). \ \ \ \ \ (5)$

Proof: Comparing (4) with (3), we see that the claim is equivalent to the Euler-Lagrange equation

$\displaystyle L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) - \partial_{x^a} L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)) = 0. \ \ \ \ \ (6)$

The same computation, together with an integration by parts, shows that (2) may be rewritten as

$\displaystyle \int_\Omega ( L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) - \partial_{x^a} L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)) ) \delta \Phi^i(x)\ dx = 0.$

Since ${\delta \Phi^i(x)}$ is unconstrained on the interior of ${\Omega}$, the claim (6) follows (at a formal level, at least). $\Box$
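As a quick sanity check (not in the original discussion), one can specialise the Euler-Lagrange equation (6) to the Lagrangian of Example 2: with

$\displaystyle L = \sum_{i=1}^N \frac{1}{2} m_i |\dot q_i|^2 - V(q_1,\dots,q_N)$

one has ${L_{q_i} = -\nabla_{q_i} V}$ and ${L_{\dot q_i} = m_i \dot q_i}$, so (6) becomes

$\displaystyle -\nabla_{q_i} V - \frac{d}{dt}( m_i \dot q_i ) = 0,$

which is Newton’s law ${m_i \ddot q_i = F_i}$ with the conservative force ${F_i = -\nabla_{q_i} V}$, as claimed in Example 2.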

Many variational problems also enjoy one-parameter continuous symmetries: given any field ${\Phi_0}$ (not necessarily a critical point), one can place that field in a one-parameter family ${s \mapsto \Phi[s]}$ with ${\Phi[0] = \Phi_0}$, such that

$\displaystyle J[ \Phi[s] ] = J[ \Phi[0] ]$

for all ${s}$; in particular,

$\displaystyle \frac{d}{ds} J[ \Phi[s] ]|_{s=0} = 0,$

which can be written as (2) as before. Applying the previous rule of thumb, we thus expect another divergence identity

$\displaystyle \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \hbox{div} Y \ \ \ \ \ (7)$

whenever ${s \mapsto \Phi[s]}$ arises from a continuous one-parameter symmetry. This expectation is indeed the case in many examples. For instance, if the spatial domain ${\Omega}$ is the Euclidean space ${{\bf R}^d}$, and the Lagrangian (when expressed in coordinates) has no direct dependence on the spatial variable ${x}$, thus

$\displaystyle L( x, \Phi(x), D\Phi(x) ) = L( \Phi(x), D\Phi(x) ), \ \ \ \ \ (8)$

then we obtain ${d}$ translation symmetries

$\displaystyle \Phi[s](x) := \Phi(x - s e^a )$

for ${a=1,\dots,d}$, where ${e^1,\dots,e^d}$ is the standard basis for ${{\bf R}^d}$. For a fixed ${a}$, the left-hand side of (7) then becomes

$\displaystyle \frac{d}{ds} L( \Phi(x-se^a), D\Phi(x-se^a) )|_{s=0} = -\partial_{x^a} [ L( \Phi(x), D\Phi(x) ) ]$

$\displaystyle = \hbox{div} Y$

where ${Y(x) = - L(\Phi(x), D\Phi(x)) e^a}$. Another common type of symmetry is a pointwise symmetry, in which

$\displaystyle L( x, \Phi[s](x), D\Phi[s](x) ) = L( x, \Phi[0](x), D\Phi[0](x) ) \ \ \ \ \ (9)$

for all ${x}$, in which case (7) clearly holds with ${Y=0}$.

If we subtract (4) from (7), we obtain the celebrated theorem of Noether linking symmetries with conservation laws:

Theorem 2 (Noether’s theorem) Suppose that ${\Phi_0}$ is a critical point of the functional (1), and let ${\Phi[s]}$ be a one-parameter continuous symmetry with ${\Phi[0] = \Phi_0}$. Let ${X}$ be the vector field in (5), and let ${Y}$ be the vector field in (7). Then we have the pointwise conservation law

$\displaystyle \hbox{div}(X-Y) = 0.$

In particular, for one-dimensional variational problems, in which ${\Omega \subset {\bf R}}$, we have the conservation law ${(X-Y)(t) = (X-Y)(0)}$ for all ${t \in \Omega}$ (assuming of course that ${\Omega}$ is connected and contains ${0}$).

Noether’s theorem gives a systematic way to locate conservation laws for solutions to variational problems. For instance, if ${\Omega \subset {\bf R}}$ and the Lagrangian has no explicit time dependence, thus

$\displaystyle L(t, \Phi(t), \dot \Phi(t)) = L(\Phi(t), \dot \Phi(t)),$

then by using the time translation symmetry ${\Phi[s](t) := \Phi(t-s)}$, we have

$\displaystyle Y(t) = - L( \Phi(t), \dot\Phi(t) )$

as discussed previously, whereas we have ${\delta \Phi(t) = - \dot \Phi(t)}$, and hence by (5)

$\displaystyle X(t) := - \dot \Phi^i(t) L_{\dot q^i}(\Phi(t), \dot \Phi(t)),$

and so Noether’s theorem gives conservation of the Hamiltonian

$\displaystyle H(t) := \dot \Phi^i(t) L_{\dot q^i}(\Phi(t), \dot \Phi(t)) - L(\Phi(t), \dot \Phi(t)). \ \ \ \ \ (10)$

For instance, for geodesic flow, the Hamiltonian works out to be

$\displaystyle H(t) = \frac{1}{2} g_{ij}(\gamma(t)) \dot \gamma^i(t) \dot \gamma^j(t),$

so we see that the speed of the geodesic is conserved over time.
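This conservation of speed is easy to observe numerically. The following sketch (a toy example of my own: the metric ${\hbox{diag}(1, 1+q_1^2)}$ on ${{\bf R}^2}$, integrated with a classical Runge-Kutta step) evolves the geodesic equations ${\ddot q^k + \Gamma^k_{ij} \dot q^i \dot q^j = 0}$ and checks that the Hamiltonian (10) stays constant up to integration error.

```python
def geodesic_rhs(state):
    """Geodesic equations on R^2 with metric diag(1, f(q1)), f(q1) = 1 + q1^2:
    q1'' = (1/2) f'(q1) (q2')^2,  q2'' = -(f'/f) q1' q2'."""
    q1, q2, v1, v2 = state
    f, fp = 1.0 + q1 * q1, 2.0 * q1
    return (v1, v2, 0.5 * fp * v2 * v2, -(fp / f) * v1 * v2)

def rk4_step(state, h):
    """One classical fourth-order Runge-Kutta step."""
    shift = lambda s, k, c: tuple(si + c * ki for si, ki in zip(s, k))
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs(shift(state, k1, h / 2))
    k3 = geodesic_rhs(shift(state, k2, h / 2))
    k4 = geodesic_rhs(shift(state, k3, h))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(state, k1, k2, k3, k4))

def hamiltonian(state):
    """H = (1/2) g_ij(gamma) gammadot^i gammadot^j, the conserved 'speed'."""
    q1, q2, v1, v2 = state
    return 0.5 * (v1 * v1 + (1.0 + q1 * q1) * v2 * v2)

state = (0.3, 0.0, 0.4, 0.7)
H0 = hamiltonian(state)
for _ in range(5000):
    state = rk4_step(state, 1e-3)
assert abs(hamiltonian(state) - H0) < 1e-6  # speed conserved along the geodesic
```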

For pointwise symmetries (9), ${Y}$ vanishes, and so Noether’s theorem simplifies to ${\hbox{div} X = 0}$; in the one-dimensional case ${\Omega \subset {\bf R}}$, we thus see from (5) that the quantity

$\displaystyle \delta \Phi^i(t) L_{\dot q^i}(t,\Phi_0(t), \dot \Phi_0(t)) \ \ \ \ \ (11)$

is conserved in time. For instance, for the ${N}$-particle system in Example 2, if we have the translation invariance

$\displaystyle V( q_1 + h, \dots, q_N + h ) = V( q_1, \dots, q_N )$

for all ${q_1,\dots,q_N,h \in {\bf R}^3}$, then we have the pointwise translation symmetry

$\displaystyle q_i[s](t) := q_i(t) + s e^j$

for all ${i=1,\dots,N}$, ${s \in {\bf R}}$ and some ${j=1,\dots,3}$, in which case ${\delta q_i(t) = e^j}$, and the conserved quantity (11) becomes

$\displaystyle \sum_{i=1}^N m_i \dot q_i^j(t);$

as ${j=1,\dots,3}$ was arbitrary, this establishes conservation of the total momentum

$\displaystyle \sum_{i=1}^N m_i \dot q_i(t).$

Similarly, if we have the rotation invariance

$\displaystyle V( R q_1, \dots, Rq_N ) = V( q_1, \dots, q_N )$

for any ${q_1,\dots,q_N \in {\bf R}^3}$ and ${R \in SO(3)}$, then we have the pointwise rotation symmetry

$\displaystyle q_i[s](t) := \exp( s A ) q_i(t)$

for any skew-symmetric real ${3 \times 3}$ matrix ${A}$, in which case ${\delta q_i(t) = A q_i(t)}$, and the conserved quantity (11) becomes

$\displaystyle \sum_{i=1}^N m_i \langle A q_i(t), \dot q_i(t) \rangle;$

since ${A}$ is an arbitrary skew-symmetric matrix, this establishes conservation of the total angular momentum

$\displaystyle \sum_{i=1}^N m_i q_i(t) \wedge \dot q_i(t).$
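Both conservation laws can be watched in a tiny simulation. The sketch below (my own toy model, in the plane rather than ${{\bf R}^3}$ for brevity) evolves particles under a pairwise spring potential ${V = \frac{k}{2} \sum_{i<j} |q_i - q_j|^2}$, which depends only on differences and distances and so is translation- and rotation-invariant; with the symplectic velocity-Verlet scheme, the total momentum and (scalar) angular momentum are preserved essentially to machine precision.

```python
import random

def forces(qs, k=1.0):
    """Pairwise spring forces F_i = -k * sum_{j != i} (q_i - q_j); these sum to
    zero and exert zero total torque, reflecting the two invariances of V."""
    n = len(qs)
    return [(sum(-k * (qs[i][0] - qs[j][0]) for j in range(n) if j != i),
             sum(-k * (qs[i][1] - qs[j][1]) for j in range(n) if j != i))
            for i in range(n)]

def verlet_step(qs, vs, ms, h):
    """One velocity-Verlet step for Newton's equations m_i q_i'' = F_i."""
    fs = forces(qs)
    vh = [(v[0] + h / (2 * m) * f[0], v[1] + h / (2 * m) * f[1])
          for v, f, m in zip(vs, fs, ms)]
    qs = [(q[0] + h * v[0], q[1] + h * v[1]) for q, v in zip(qs, vh)]
    fs = forces(qs)
    vs = [(v[0] + h / (2 * m) * f[0], v[1] + h / (2 * m) * f[1])
          for v, f, m in zip(vh, fs, ms)]
    return qs, vs

def momenta(qs, vs, ms):
    """Total momentum (px, py) and scalar angular momentum sum m (x vy - y vx)."""
    px = sum(m * v[0] for m, v in zip(ms, vs))
    py = sum(m * v[1] for m, v in zip(ms, vs))
    ang = sum(m * (q[0] * v[1] - q[1] * v[0]) for m, q, v in zip(ms, qs, vs))
    return px, py, ang

rng = random.Random(2)
n = 5
ms = [rng.uniform(1, 3) for _ in range(n)]
qs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
vs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
p0 = momenta(qs, vs, ms)
for _ in range(2000):
    qs, vs = verlet_step(qs, vs, ms, 0.01)
p1 = momenta(qs, vs, ms)
assert all(abs(a - b) < 1e-8 for a, b in zip(p0, p1))
```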

Below the fold, I will describe how Noether’s theorem can be used to locate all of the conserved quantities for the Euler equations of inviscid fluid flow, discussed in this previous post, by interpreting that flow as geodesic flow in an infinite dimensional manifold.

I’ve just uploaded to the arXiv the paper “Finite time blowup for an averaged three-dimensional Navier-Stokes equation“, submitted to J. Amer. Math. Soc.. The main purpose of this paper is to formalise the “supercriticality barrier” for the global regularity problem for the Navier-Stokes equation, which roughly speaking asserts that it is not possible to establish global regularity by any “abstract” approach which only uses upper bound function space estimates on the nonlinear part of the equation, combined with the energy identity. This is done by constructing a modification of the Navier-Stokes equations with a nonlinearity that obeys essentially all of the function space estimates that the true Navier-Stokes nonlinearity does, and which also obeys the energy identity, but for which one can construct solutions that blow up in finite time. Results of this type had been previously established by Montgomery-Smith, Gallagher-Paicu, and Li-Sinai for variants of the Navier-Stokes equation without the energy identity, and by Katz-Pavlovic and by Cheskidov for dyadic analogues of the Navier-Stokes equations in five and higher dimensions that obeyed the energy identity (see also the work of Plechac and Sverak and of Hou and Lei that also suggest blowup for other Navier-Stokes type models obeying the energy identity in five and higher dimensions), but to my knowledge this is the first blowup result for a Navier-Stokes type equation in three dimensions that also obeys the energy identity. Intriguingly, the method of proof in fact hints at a possible route to establishing blowup for the true Navier-Stokes equations, which I am now increasingly inclined to believe is the case (albeit for a very small set of initial data).

To state the results more precisely, recall that the Navier-Stokes equations can be written in the form

$\displaystyle \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p$

for a divergence-free velocity field ${u}$ and a pressure field ${p}$, where ${\nu>0}$ is the viscosity, which we will normalise to be one. We will work in the non-periodic setting, so the spatial domain is ${{\bf R}^3}$, and for sake of exposition I will not discuss matters of regularity or decay of the solution (but we will always be working with strong notions of solution here rather than weak ones). Applying the Leray projection ${P}$ to divergence-free vector fields to this equation, we can eliminate the pressure, and obtain an evolution equation

$\displaystyle \partial_t u = \Delta u + B(u,u) \ \ \ \ \ (1)$

purely for the velocity field, where ${B}$ is a certain bilinear operator on divergence-free vector fields (specifically, ${B(u,v) = -\frac{1}{2} P( (u \cdot \nabla) v + (v \cdot \nabla) u)}$). The global regularity problem for Navier-Stokes is then equivalent to the global regularity problem for the evolution equation (1).
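
On the Fourier side, the Leray projection is simply the pointwise orthogonal projection ${\hat u(\xi) \mapsto \hat u(\xi) - \xi (\xi \cdot \hat u(\xi))/|\xi|^2}$ onto the plane perpendicular to the frequency ${\xi}$. A minimal (hypothetical, single-frequency) sketch of its two defining properties: it annihilates gradient fields, whose Fourier coefficients are parallel to ${\xi}$, and fixes divergence-free fields, whose coefficients are orthogonal to ${\xi}$:

```python
def leray(xi, uhat):
    # Leray projection at a single frequency xi: subtract the component
    # of uhat parallel to xi, i.e. apply P(xi) = I - xi xi^T / |xi|^2.
    dot = sum(a * b for a, b in zip(xi, uhat))
    n2 = sum(a * a for a in xi)
    return tuple(u - a * dot / n2 for u, a in zip(uhat, xi))

xi = (1.0, 2.0, -2.0)

# A gradient field has Fourier coefficients parallel to xi; P kills it.
grad_mode = tuple(3.0 * a for a in xi)
assert max(abs(c) for c in leray(xi, grad_mode)) < 1e-12

# A divergence-free field has xi . uhat = 0; P leaves it unchanged.
div_free = (2.0, 1.0, 2.0)          # xi . div_free = 1*2 + 2*1 - 2*2 = 0
assert leray(xi, div_free) == div_free
```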

An important feature of the bilinear operator ${B}$ appearing in (1) is the cancellation law

$\displaystyle \langle B(u,u), u \rangle = 0$

(using the ${L^2}$ inner product on divergence-free vector fields), which leads in particular to the fundamental energy identity

$\displaystyle \frac{1}{2} \int_{{\bf R}^3} |u(T,x)|^2\ dx + \int_0^T \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx dt = \frac{1}{2} \int_{{\bf R}^3} |u(0,x)|^2\ dx.$
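
Indeed, taking the ${L^2}$ inner product of (1) with ${u}$ and using the cancellation law gives

$\displaystyle \frac{d}{dt} \frac{1}{2} \int_{{\bf R}^3} |u(t,x)|^2\ dx = \langle \Delta u, u \rangle + \langle B(u,u), u \rangle = - \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx,$

and integrating this in time from ${0}$ to ${T}$ yields the identity.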

This identity (and its consequences) provides essentially the only known a priori bound on solutions to the Navier-Stokes equations for large data and arbitrary times. Unfortunately, as discussed in this previous post, the quantities controlled by the energy identity are supercritical with respect to scaling, which is the fundamental obstacle that has defeated all attempts to solve the global regularity problem for Navier-Stokes without any additional assumptions on the data or solution (e.g. perturbative hypotheses, or a priori control on a critical norm such as the ${L^\infty_t L^3_x}$ norm).

Our main result is then (slightly informally stated) as follows:

Theorem 1 There exists an averaged version ${\tilde B}$ of the bilinear operator ${B}$, of the form

$\displaystyle \tilde B(u,v) := \int_\Omega m_{3,\omega}(D) Rot_{3,\omega}$

$\displaystyle B( m_{1,\omega}(D) Rot_{1,\omega} u, m_{2,\omega}(D) Rot_{2,\omega} v )\ d\mu(\omega)$

for some probability space ${(\Omega, \mu)}$, some spatial rotation operators ${Rot_{i,\omega}}$ for ${i=1,2,3}$, and some Fourier multipliers ${m_{i,\omega}}$ of order ${0}$, for which one still has the cancellation law

$\displaystyle \langle \tilde B(u,u), u \rangle = 0$

and for which the averaged Navier-Stokes equation

$\displaystyle \partial_t u = \Delta u + \tilde B(u,u) \ \ \ \ \ (2)$

admits solutions that blow up in finite time.

(There are some integrability conditions on the Fourier multipliers ${m_{i,\omega}}$ required in the above theorem in order for the conclusion to be non-trivial, but I am omitting them here for sake of exposition.)

Because spatial rotations and Fourier multipliers of order ${0}$ are bounded on most function spaces, ${\tilde B}$ automatically obeys almost all of the upper bound estimates that ${B}$ does. Thus, this theorem blocks any attempt to prove global regularity for the true Navier-Stokes equations which relies purely on the energy identity and on upper bound estimates for the nonlinearity; one must use some additional structure of the nonlinear operator ${B}$ which is not shared by an averaged version ${\tilde B}$. Such additional structure certainly exists – for instance, the Navier-Stokes equation has a vorticity formulation involving only differential operators rather than pseudodifferential ones, whereas a general equation of the form (2) does not. However, “abstract” approaches to global regularity generally do not exploit such structure, and thus cannot be used to affirmatively answer the Navier-Stokes problem.

It turns out that the particular averaged bilinear operator ${\tilde B}$ that we will use will be a finite linear combination of local cascade operators, which take the form

$\displaystyle C(u,v) := \sum_{n \in {\bf Z}} (1+\epsilon_0)^{5n/2} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \psi_{3,n}$

where ${\epsilon_0>0}$ is a small parameter, ${\psi_1,\psi_2,\psi_3}$ are Schwartz vector fields whose Fourier transform is supported on an annulus, and ${\psi_{i,n}(x) := (1+\epsilon_0)^{3n/2} \psi_i( (1+\epsilon_0)^n x)}$ is an ${L^2}$-rescaled version of ${\psi_i}$ (basically a “wavelet” of wavelength about ${(1+\epsilon_0)^{-n}}$ centred at the origin). Such operators were essentially introduced by Katz and Pavlovic as dyadic models for ${B}$; they have essentially the same scaling property as ${B}$ (except that one can only scale along powers of ${1+\epsilon_0}$, rather than over all positive reals), and in fact they can be expressed as an average of ${B}$ in the sense of the above theorem, as can be shown after a somewhat tedious amount of Fourier-analytic symbol manipulations.

If we consider nonlinearities ${\tilde B}$ which are a finite linear combination of local cascade operators, then the equation (2) more or less collapses to a system of ODEs in certain “wavelet coefficients” of ${u}$. The precise system that shows up depends on what precise combination of local cascade operators one is using. Katz and Pavlovic essentially considered a single cascade operator together with its “adjoint” (needed to preserve the energy identity), and arrived (more or less) at the system of ODEs

$\displaystyle \partial_t X_n = - (1+\epsilon_0)^{2n} X_n + (1+\epsilon_0)^{\frac{5}{2}(n-1)} X_{n-1}^2 - (1+\epsilon_0)^{\frac{5}{2} n} X_n X_{n+1} \ \ \ \ \ (3)$

where ${X_n: [0,T] \rightarrow {\bf R}}$ are scalar fields for each integer ${n}$. (Actually, Katz-Pavlovic worked with a technical variant of this particular equation, but the differences are not so important for this current discussion.) Note that the quadratic terms on the RHS carry a higher exponent of ${1+\epsilon_0}$ than the dissipation term; this reflects the supercritical nature of this evolution (the energy ${\frac{1}{2} \sum_n X_n^2}$ is monotone decreasing in this flow, so the natural size of ${X_n}$ given the control on the energy is ${O(1)}$). There is a slight technical issue with the dissipation if one wishes to embed (3) into an equation of the form (2), but it is minor and I will not discuss it further here.
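
To get some feel for these dynamics, here is a minimal (hypothetical) numerical experiment with a truncated version of (3): eight modes, an arbitrary choice ${\epsilon_0 = 0.2}$, and all the energy initially in the coarsest mode. The total energy ${\frac{1}{2} \sum_n X_n^2}$ is non-increasing (the nonlinear terms cancel in the energy identity), while energy visibly flows to the finer modes:

```python
EPS0 = 0.2        # the parameter epsilon_0 in (3), chosen arbitrarily
N = 8             # truncate to modes n = 1..N, with X_0 = X_{N+1} = 0
L = 1.0 + EPS0

def rhs(x):
    # Right-hand side of the Katz-Pavlovic system (3), with the convention
    # X_0 = X_{N+1} = 0 at the truncation boundaries.
    out = []
    for n in range(1, N + 1):
        xn = x[n - 1]
        xprev = x[n - 2] if n >= 2 else 0.0
        xnext = x[n] if n < N else 0.0
        out.append(-L**(2 * n) * xn
                   + L**(2.5 * (n - 1)) * xprev**2
                   - L**(2.5 * n) * xn * xnext)
    return out

def rk4(x, h):
    # Classical fourth-order Runge-Kutta step.
    k1 = rhs(x)
    k2 = rhs([a + h/2 * b for a, b in zip(x, k1)])
    k3 = rhs([a + h/2 * b for a, b in zip(x, k2)])
    k4 = rhs([a + h * b for a, b in zip(x, k3)])
    return [a + h/6 * (p + 2*q + 2*r + s)
            for a, p, q, r, s in zip(x, k1, k2, k3, k4)]

def energy(x):
    return 0.5 * sum(v * v for v in x)

x = [1.0] + [0.0] * (N - 1)    # all energy initially at the coarsest mode
E0 = energy(x)
peak_x2 = 0.0
for _ in range(5000):
    x = rk4(x, 1e-3)
    peak_x2 = max(peak_x2, x[1])   # track how much energy reaches mode 2
```

With these toy parameters the energy simply drains away monotonically; the genuine blowup analysis of course requires the full infinite system and far more care with the parameters.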

In principle, if the ${X_n}$ mode has size comparable to ${1}$ at some time ${t_n}$, then energy should flow from ${X_n}$ to ${X_{n+1}}$ at a rate comparable to ${(1+\epsilon_0)^{\frac{5}{2} n}}$, so that by time ${t_{n+1} \approx t_n + (1+\epsilon_0)^{-\frac{5}{2} n}}$ or so, most of the energy of ${X_n}$ should have drained into the ${X_{n+1}}$ mode (with hardly any energy dissipated). Since the series ${\sum_{n \geq 1} (1+\epsilon_0)^{-\frac{5}{2} n}}$ is summable, this suggests finite time blowup for this ODE as the energy races ever more quickly to higher and higher modes. Such a scenario was indeed established by Katz and Pavlovic (and refined by Cheskidov) if the dissipation strength ${(1+\epsilon_0)^{2n}}$ was weakened somewhat (the exponent ${2}$ has to be lowered to be less than ${\frac{5}{3}}$). As mentioned above, this is enough to give a version of Theorem 1 in five and higher dimensions.
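
The heuristic blowup time here is just a geometric series, ${\sum_{n \geq 1} (1+\epsilon_0)^{-5n/2} = 1/((1+\epsilon_0)^{5/2}-1)}$, which one can confirm numerically (with an arbitrary choice of ${\epsilon_0}$):

```python
eps0 = 0.2
r = (1 + eps0) ** (-2.5)                  # ratio of successive cascade times
partial = sum(r ** n for n in range(1, 200))   # partial sum of the series
closed = 1.0 / ((1 + eps0) ** 2.5 - 1.0)       # geometric series, closed form
print(partial, closed)
```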

On the other hand, it was shown a few years ago by Barbato, Morandin, and Romito that (3) in fact admits global smooth solutions (at least in the dyadic case ${\epsilon_0=1}$, and assuming non-negative initial data). Roughly speaking, the problem is that as energy is being transferred from ${X_n}$ to ${X_{n+1}}$, energy is also simultaneously being transferred from ${X_{n+1}}$ to ${X_{n+2}}$, and as such the solution races off to higher modes a bit too prematurely, without absorbing all of the energy from lower modes. This weakens the strength of the blowup to the point where the moderately strong dissipation in (3) is enough to kill the high frequency cascade before a true singularity occurs. Because of this, the original Katz-Pavlovic model cannot quite be used to establish Theorem 1 in three dimensions. (Actually, the original Katz-Pavlovic model had some additional dispersive features which allowed for another proof of global smooth solutions, which is an unpublished result of Nazarov.)

To get around this, I had to “engineer” an ODE system with similar features to (3) (namely, a quadratic nonlinearity, a monotone total energy, and the indicated exponents of ${(1+\epsilon_0)}$ for both the dissipation term and the quadratic terms), but for which the cascade of energy from scale ${n}$ to scale ${n+1}$ was not interrupted by the cascade of energy from scale ${n+1}$ to scale ${n+2}$. To do this, I needed to insert a delay in the cascade process (so that after energy was dumped into scale ${n}$, it would take some time before the energy would start to transfer to scale ${n+1}$), but the process also needed to be abrupt (once the process of energy transfer started, it needed to conclude very quickly, before the delayed transfer for the next scale kicked in). It turned out that one could build a “quadratic circuit” out of some basic “quadratic gates” (analogous to how an electrical circuit could be built out of basic gates such as amplifiers or resistors) that achieved this task, leading to an ODE system essentially of the form

$\displaystyle \partial_t X_{1,n} = - (1+\epsilon_0)^{2n} X_{1,n}$

$\displaystyle + (1+\epsilon_0)^{5n/2} (- \epsilon^{-2} X_{3,n} X_{4,n} - \epsilon X_{1,n} X_{2,n} - \epsilon^2 \exp(-K^{10}) X_{1,n} X_{3,n}$

$\displaystyle + K X_{4,n-1}^2)$

$\displaystyle \partial_t X_{2,n} = - (1+\epsilon_0)^{2n} X_{2,n} + (1+\epsilon_0)^{5n/2} (\epsilon X_{1,n}^2 - \epsilon^{-1} K^{10} X_{3,n}^2)$

$\displaystyle \partial_t X_{3,n} = - (1+\epsilon_0)^{2n} X_{3,n} + (1+\epsilon_0)^{5n/2} (\epsilon^2 \exp(-K^{10}) X_{1,n}^2$

$\displaystyle + \epsilon^{-1} K^{10} X_{2,n} X_{3,n} )$

$\displaystyle \partial_t X_{4,n} =- (1+\epsilon_0)^{2n} X_{4,n} + (1+\epsilon_0)^{5n/2} (\epsilon^{-2} X_{3,n} X_{1,n}$

$\displaystyle - (1+\epsilon_0)^{5/2} K X_{4,n} X_{1,n+1})$

where ${K \geq 1}$ is a suitable large parameter and ${\epsilon > 0}$ is a suitable small parameter (much smaller than ${1/K}$). To visualise the dynamics of such a system, I found it useful to describe this system graphically by a “circuit diagram” that is analogous (but not identical) to the circuit diagrams arising in electrical engineering:

The coupling constants here range widely from being very large to very small; in practice, this makes the ${X_{2,n}}$ and ${X_{3,n}}$ modes absorb very little energy, but exert a sizeable influence on the remaining modes. If a lot of energy is suddenly dumped into ${X_{1,n}}$, what happens next is roughly as follows: for a moderate period of time, nothing much happens other than a trickle of energy into ${X_{2,n}}$, which in turn causes a rapid exponential growth of ${X_{3,n}}$ (from a very low base). After this delay, ${X_{3,n}}$ suddenly crosses a certain threshold, at which point it causes ${X_{1,n}}$ and ${X_{4,n}}$ to exchange energy back and forth with extreme speed. The energy from ${X_{4,n}}$ then rapidly drains into ${X_{1,n+1}}$, and the process begins again (with a slight loss in energy due to the dissipation). If one plots the total energy ${E_n := \frac{1}{2} ( X_{1,n}^2 + X_{2,n}^2 + X_{3,n}^2 + X_{4,n}^2 )}$ as a function of time, it looks schematically like this:

As in the previous heuristic discussion, the time between cascades from one frequency scale to the next decays exponentially, leading to blowup at some finite time ${T}$. (One could describe the dynamics here as being similar to the famous “lighting the beacons” scene in the Lord of the Rings movies, except that (a) as each beacon gets ignited, the previous one is extinguished, as per the energy identity; (b) the time between beacon lightings decreases exponentially; and (c) there is no soundtrack.)

There is a real (but remote) possibility that this sort of construction can be adapted to the true Navier-Stokes equations. The basic blowup mechanism in the averaged equation is that of a von Neumann machine, or more precisely a construct (built within the laws of the inviscid evolution ${\partial_t u = \tilde B(u,u)}$) that, after some time delay, manages to suddenly create a replica of itself at a finer scale (and to largely erase its original instantiation in the process). In principle, such a von Neumann machine could also be built out of the laws of the inviscid form of the Navier-Stokes equations (i.e. the Euler equations). In physical terms, one would have to build the machine purely out of an ideal fluid (i.e. an inviscid incompressible fluid). If one could somehow create enough “logic gates” out of ideal fluid, one could presumably build a sort of “fluid computer”, at which point the task of building a von Neumann machine appears to reduce to a software engineering exercise rather than a PDE problem (provided that the gates are suitably stable with respect to perturbations, but (as with actual computers) this can presumably be done by converting the analog signals of fluid mechanics into a more error-resistant digital form). The key thing missing in this program (in both senses of the word) to establish blowup for Navier-Stokes is to construct the logic gates within the laws of ideal fluids. (Compare with the situation for cellular automata such as Conway’s “Game of Life”, in which Turing complete computers, universal constructors, and replicators have all been built within the laws of that game.)

This is the sixth thread for the Polymath8b project to obtain new bounds for the quantity

$\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$

either for small values of ${m}$ (in particular ${m=1,2}$) or asymptotically as ${m \rightarrow \infty}$. The previous thread may be found here. The currently best known bounds on ${H_m}$ can be found at the wiki page (which has recently returned to full functionality, after a partial outage).

The current focus is on improving the upper bound on ${H_1}$ under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from ${H_1 \leq 8}$ to ${H_1 \leq 6}$, which looks to be the limit of the method (see this previous comment for a semi-rigorous reason as to why ${H_1 \leq 4}$ is not possible with this method). With the most general Selberg sieve available, the problem reduces to the following three-dimensional variational one:

Problem 1 Does there exist a (not necessarily convex) polytope ${R \subset [0,1]^3}$ with quantities ${0 \leq \varepsilon_1,\varepsilon_2,\varepsilon_3 \leq 1}$, and a non-trivial square-integrable function ${F: {\bf R}^3 \rightarrow {\bf R}}$ supported on ${R}$ such that

• ${R + R \subset \{ (x,y,z) \in [0,2]^3: \min(x+y,y+z,z+x) \leq 2 \}}$;
• ${\int_0^\infty F(x,y,z)\ dx = 0}$ when ${y+z \geq 1+\varepsilon_1}$;
• ${\int_0^\infty F(x,y,z)\ dy = 0}$ when ${x+z \geq 1+\varepsilon_2}$;
• ${\int_0^\infty F(x,y,z)\ dz = 0}$ when ${x+y \geq 1+\varepsilon_3}$;

and such that we have the inequality

$\displaystyle \int_{y+z \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y,z)\ dx)^2\ dy dz$

$\displaystyle + \int_{z+x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y,z)\ dy)^2\ dz dx$

$\displaystyle + \int_{x+y \leq 1-\varepsilon_3} (\int_{\bf R} F(x,y,z)\ dz)^2\ dx dy$

$\displaystyle > 2 \int_R F(x,y,z)^2\ dx dy dz?$

(Initially it was assumed that ${R}$ was convex, but we have now realised that this is not necessary.)

An affirmative answer to this question will imply ${H_1 \leq 6}$ on GEH. We are “within almost two percent” of this claim; we cannot quite reach ${2}$ yet, but have got as far as ${1.959633}$. However, we have not yet fully optimised ${F}$ in the above problem.

The most promising route so far is to take the symmetric polytope

$\displaystyle R = \{ (x,y,z) \in [0,1]^3: x+y+z \leq 3/2 \}$

with ${F}$ symmetric as well, and ${\varepsilon_1=\varepsilon_2=\varepsilon_3=\varepsilon}$ (we suspect that the optimal ${\varepsilon}$ will be roughly ${1/6}$). (However, it is certainly worth also taking a look at easier model problems, such as the polytope ${{\cal R}'_3 := \{ (x,y,z) \in [0,1]^3: x+y,y+z,z+x \leq 1\}}$, which has no vanishing marginal conditions to contend with; more recently we have been looking at the non-convex polytope ${R = \{x+y,x+z \leq 1 \} \cup \{ x+y,y+z \leq 1 \} \cup \{ x+z,y+z \leq 1\}}$.) Some further details of this particular case are given below the fold.
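
Incidentally, the sumset condition of Problem 1 is automatic for this symmetric ${R}$: if ${p, q \in R}$ then the coordinates of ${p+q}$ sum to at most ${3}$, and since ${(x+y)+(y+z)+(z+x) = 2(x+y+z) \leq 6}$, the minimum of the three pairwise sums is at most ${2}$. A quick (hypothetical) randomised confirmation of this containment:

```python
import random

random.seed(0)

def sample_R():
    # Rejection-sample a point of R = {(x,y,z) in [0,1]^3 : x+y+z <= 3/2}.
    while True:
        p = (random.random(), random.random(), random.random())
        if sum(p) <= 1.5:
            return p

ok = True
for _ in range(100000):
    p, q = sample_R(), sample_R()
    x, y, z = (a + b for a, b in zip(p, q))
    # check the sumset condition min(x+y, y+z, z+x) <= 2 at p + q
    if min(x + y, y + z, z + x) > 2.0:
        ok = False
print(ok)
```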

There should still be some progress to be made in the other regimes of interest – the unconditional bound on ${H_1}$ (currently at ${270}$), and on any further progress in asymptotic bounds for ${H_m}$ for larger ${m}$ – but the current focus is certainly on the bound on ${H_1}$ on GEH, as we seem to be tantalisingly close to an optimal result here.

This is the fifth thread for the Polymath8b project to obtain new bounds for the quantity

$\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$

either for small values of ${m}$ (in particular ${m=1,2}$) or asymptotically as ${m \rightarrow \infty}$. The previous thread may be found here. The currently best known bounds on ${H_m}$ can be found at the wiki page (which has recently returned to full functionality, after a partial outage). In particular, the upper bound for ${H_1}$ has been shaved a little from ${272}$ to ${270}$, and we have very recently achieved the bound ${H_1 \leq 8}$ on the generalised Elliott-Halberstam conjecture GEH, formulated as Conjecture 1 of this paper of Bombieri, Friedlander, and Iwaniec. We also have explicit bounds for ${H_m}$ for ${m \leq 5}$, both with and without the assumption of the Elliott-Halberstam conjecture, as well as slightly sharper asymptotics for the upper bound for ${H_m}$ as ${m \rightarrow \infty}$.

The basic strategy for bounding ${H_m}$ still follows the general paradigm first laid out by Goldston, Pintz, Yildirim: given an admissible ${k}$-tuple ${(h_1,\dots,h_k)}$, one needs to locate a non-negative sieve weight ${\nu: {\bf Z} \rightarrow {\bf R}^+}$, supported on an interval ${[x,2x]}$ for a large ${x}$, such that the ratio

$\displaystyle \frac{\sum_{i=1}^k \sum_n \nu(n) 1_{n+h_i \hbox{ prime}}}{\sum_n \nu(n)} \ \ \ \ \ (1)$

is asymptotically larger than ${m}$ as ${x \rightarrow \infty}$; this will show that ${H_m \leq h_k-h_1}$. Thus one wants to locate a sieve weight ${\nu}$ for which one has good lower bounds on the numerator and good upper bounds on the denominator.

One can modify this paradigm slightly, for instance by adding the additional term ${\sum_n \nu(n) 1_{n+h_1,\dots,n+h_k \hbox{ composite}}}$ to the numerator, or by subtracting the term ${\sum_n \nu(n) 1_{n+h_1,n+h_k \hbox{ prime}}}$ from the numerator (which allows one to reduce the bound ${h_k-h_1}$ to ${\max(h_k-h_2,h_{k-1}-h_1)}$); however, the numerical impact of these tweaks has proven to be negligible thus far.

Despite a number of experiments with other sieves, we are still relying primarily on the Selberg sieve

$\displaystyle \nu(n) := 1_{n=b\ (W)} 1_{[x,2x]}(n) \lambda(n)^2$

where ${\lambda(n)}$ is the divisor sum

$\displaystyle \lambda(n) := \sum_{d_1|n+h_1, \dots, d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R}, \dots, \frac{\log d_k}{\log R})$

where ${R = x^{\theta/2}}$, ${\theta}$ is the level of distribution (${\theta=1/2-}$ if relying on Bombieri-Vinogradov, ${\theta=1-}$ if assuming Elliott-Halberstam, and (in principle) ${\theta = \frac{1}{2} + \frac{13}{540}-}$ if using Polymath8a technology), and ${f: [0,+\infty)^k \rightarrow {\bf R}}$ is a smooth, compactly supported function. Most of the progress has come by enlarging the class of cutoff functions ${f}$ one is permitted to use.
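
To make the divisor sum concrete, here is a (hypothetical) toy computation of ${\lambda(n)}$ for the admissible pair ${(h_1,h_2) = (0,2)}$, using the classical GPY-type cutoff ${f(t_1,t_2) = \max(0, 1-t_1-t_2)}$ and an artificially small value of ${R}$ (the congruence condition ${n = b\ (W)}$ and any optimisation of ${f}$ are omitted). A point ${n}$ with ${n}$ and ${n+2}$ both prime and exceeding ${R}$ receives weight ${\lambda(n) = f(0,0) = 1}$, while numbers whose shifts have many small divisors are damped by Möbius cancellation:

```python
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    # Moebius function via trial-division factorisation.
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n is divisible by a square
            mu = -mu
        p += 1
    if n > 1:
        mu = -mu
    return mu

def f(t1, t2):
    # GPY-style cutoff supported on the simplex t1 + t2 <= 1.
    return max(0.0, 1.0 - t1 - t2)

R = 10.0                          # toy sieve level, far too small in practice
H = (0, 2)                        # the admissible 2-tuple

def lam(n):
    # The Selberg divisor sum lambda(n) from the text, in the case k = 2.
    total = 0.0
    for d1 in divisors(n + H[0]):
        for d2 in divisors(n + H[1]):
            total += (mobius(d1) * mobius(d2)
                      * f(math.log(d1) / math.log(R),
                          math.log(d2) / math.log(R)))
    return total

# (11, 13) is a twin-prime pair beyond R, so lam(11) = f(0,0) = 1 exactly;
# n = 13 has n+2 = 15 = 3*5, and the Moebius signs partially cancel.
print(lam(11), lam(13))
```

The sieve weight is then ${\nu(n) = \lambda(n)^2}$, which is visibly concentrated on the twin-prime configuration in this toy example.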

The baseline bounds for the numerator and denominator in (1) (as established for instance in this previous post) are as follows. If ${f}$ is supported on the simplex

$\displaystyle {\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k < 1 \},$

and we define the mixed partial derivative ${F: [0,+\infty)^k \rightarrow {\bf R}}$ by

$\displaystyle F(t_1,\dots,t_k) = \frac{\partial^k}{\partial t_1 \dots \partial t_k} f(t_1,\dots,t_k)$

then the denominator in (1) is

$\displaystyle \frac{Bx}{W} (I_k(F) + o(1)) \ \ \ \ \ (2)$

where

$\displaystyle B := (\frac{W}{\phi(W) \log R})^k$

and

$\displaystyle I_k(F) := \int_{[0,+\infty)^k} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k.$

Similarly, the numerator of (1) is

$\displaystyle \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_k(F) + o(1)) \ \ \ \ \ (3)$

where

$\displaystyle J_k^{(m)}(F) := \int_{[0,+\infty)^{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.$

Thus, if we let ${M_k}$ be the supremum of the ratio

$\displaystyle \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}$

whenever ${F}$ is supported on ${{\cal R}_k}$ and is non-vanishing, then one can prove ${H_m \leq h_k - h_1}$ whenever

$\displaystyle M_k > \frac{2m}{\theta}.$
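
As a sanity check on these definitions: for the constant function ${F = 1}$ on ${{\cal R}_k}$ one has ${I_k(F) = 1/k!}$ and ${J_k^{(m)}(F) = 2/(k+1)!}$ for each ${m}$, so the ratio is ${2k/(k+1)}$, giving the (weak) lower bound ${M_k \geq 2k/(k+1)}$. A small (hypothetical) midpoint-rule verification for ${k=2}$, where the ratio should be ${4/3}$:

```python
N = 1000
h = 1.0 / N
mids = [(i + 0.5) * h for i in range(N)]   # midpoints of a uniform grid

# I_2(1): area of the simplex t1 + t2 < 1.
I = sum(h * h for t1 in mids for t2 in mids if t1 + t2 < 1.0)

# J_2^(1)(1): integrate the square of the inner integral over t1.
# By symmetry J_2^(2)(1) is the same, so the ratio uses 2*J.
J = 0.0
for t2 in mids:
    inner = sum(h for t1 in mids if t1 + t2 < 1.0)
    J += inner * inner * h

ratio = 2 * J / I          # approximates 2k/(k+1) = 4/3 for k = 2
print(ratio)
```

Of course, the interesting work lies in choosing non-constant ${F}$ (and larger supports ${R}$) to push this ratio up towards ${M_k}$.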

We can improve this baseline in a number of ways. Firstly, with regards to the denominator in (1), if one upgrades the Elliott-Halberstam hypothesis ${EH[\theta]}$ to the generalised Elliott-Halberstam hypothesis ${GEH[\theta]}$ (currently known for ${\theta < 1/2}$, thanks to Motohashi, but conjectured for ${\theta < 1}$), the asymptotic (2) holds under the more general hypothesis that ${F}$ is supported in a polytope ${R}$, as long as ${R}$ obeys the inclusion

$\displaystyle R + R \subset \bigcup_{m=1}^k \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: \ \ \ \ \ (4)$

$\displaystyle t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 2; t_m < 2/\theta \} \cup \frac{2}{\theta} \cdot {\cal R}_k;$

examples of polytopes ${R}$ obeying this constraint include the modified simplex

$\displaystyle {\cal R}'_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 1$

$\displaystyle \hbox{ for all } 1 \leq m \leq k \},$

the prism

$\displaystyle {\cal R}_{k-1} \times [0, 1/\theta),$

the dilated simplex

$\displaystyle \frac{1}{\theta} \cdot {\cal R}_k$

and the truncated simplex

$\displaystyle \frac{k}{k-1} \cdot {\cal R}_k \cap [0,1/\theta)^k.$

See this previous post for a proof of these claims.

With regards to the numerator, the asymptotic (3) is valid whenever, for each ${1 \leq m \leq k}$, the marginals ${\int_0^\infty F(t_1,\ldots,t_k)\ dt_m}$ vanish outside of ${{\cal R}_{k-1}}$. This is automatic if ${F}$ is supported on ${{\cal R}_k}$, or on the slightly larger region ${{\cal R}'_k}$, but is an additional constraint when ${F}$ is supported on one of the other polytopes ${R}$ mentioned above.

More recently, we have obtained a more flexible version of the above asymptotic: if the marginals ${\int_0^\infty F(t_1,\ldots,t_k)\ dt_m}$ vanish outside of ${(1+\varepsilon) \cdot {\cal R}_{k-1}}$ for some ${0 < \varepsilon < 1}$, then the numerator of (1) has a lower bound of

$\displaystyle \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_{k,\varepsilon}(F) + o(1))$

where

$\displaystyle J_{k,\varepsilon}^{(m)}(F) := \int_{(1-\varepsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.$

A proof is given here. Putting all this together, we can conclude

Theorem 1 Suppose we can find ${0 \leq \varepsilon < 1}$ and a function ${F}$ supported on a polytope ${R}$ obeying (4), not identically zero and with all marginals ${\int_0^\infty F(t_1,\ldots,t_k)\ dt_m}$ vanishing outside of ${(1+\varepsilon) \cdot {\cal R}_{k-1}}$, and with

$\displaystyle \frac{\sum_{m=1}^k J_{k,\varepsilon}^{(m)}(F)}{I_k(F)} > \frac{2m}{\theta}.$

Then ${GEH[\theta]}$ implies ${H_m \leq h_k-h_1}$.

In principle, this very flexible criterion for upper bounding ${H_m}$ should lead to better bounds than before, and in particular we have now established ${H_1 \leq 8}$ on GEH.

Another promising direction is to try to improve the analysis at medium ${k}$ (more specifically, in the regime ${k \sim 50}$), which is the regime that currently yields our best bounds without EH or GEH, via numerical quadratic programming. Right now we are only using ${\theta=1/2}$ and using the baseline ${M_k}$ analysis, basically for two reasons:

• We do not have good numerical formulae for integrating polynomials on any region more complicated than the simplex ${{\cal R}_k}$ in medium dimension.
• The estimates ${MPZ^{(i)}[\varpi,\delta]}$ produced by Polymath8a involve a ${\delta}$ parameter, which introduces additional restrictions on the support of ${F}$ (conservatively, it restricts ${F}$ to ${[0,\delta']^k}$ where ${\delta' := \frac{\delta}{1/4+\varpi}}$ and ${\theta = 1/2 + 2 \varpi}$; it should be possible to be looser than this (as was done in Polymath8a) but this has not been fully explored yet). This then triggers the previous obstacle of having to integrate on something other than a simplex.

However, these look like solvable problems, and so I would expect that further unconditional improvement for ${H_1}$ should be possible.

This is the fourth thread for the Polymath8b project to obtain new bounds for the quantity

$\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$

either for small values of ${m}$ (in particular ${m=1,2}$) or asymptotically as ${m \rightarrow \infty}$. The previous thread may be found here. The currently best known bounds on ${H_m}$ are:

• (Maynard) Assuming the Elliott-Halberstam conjecture, ${H_1 \leq 12}$.
• (Polymath8b, tentative) ${H_1 \leq 272}$. Assuming Elliott-Halberstam, ${H_2 \leq 272}$.
• (Polymath8b, tentative) ${H_2 \leq 429{,}822}$. Assuming Elliott-Halberstam, ${H_4 \leq 493{,}408}$.
• (Polymath8b, tentative) ${H_3 \leq 26{,}682{,}014}$. (Presumably a comparable bound also holds for ${H_6}$ on Elliott-Halberstam, but this has not been computed.)
• (Polymath8b) ${H_m \leq \exp( 3.817 m )}$ for sufficiently large ${m}$. Assuming Elliott-Halberstam, ${H_m \ll m e^{2m}}$ for sufficiently large ${m}$.

While the ${H_1}$ bound on the Elliott-Halberstam conjecture has not improved since the start of the Polymath8b project, there is reason to hope that it will soon fall, hopefully to ${8}$. This is because we have begun to exploit more fully the fact that when using “multidimensional Selberg-GPY” sieves of the form

$\displaystyle \nu(n) := \sigma_{f,k}(n)^2$

with

$\displaystyle \sigma_{f,k}(n) := \sum_{d_1|n+h_1,\dots,d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R},\dots,\frac{\log d_k}{\log R}),$

where ${R := x^{\theta/2}}$, it is not necessary for the smooth function ${f: [0,+\infty)^k \rightarrow {\bf R}}$ to be supported on the simplex

$\displaystyle {\cal R}_k := \{ (t_1,\dots,t_k)\in [0,1]^k: t_1+\dots+t_k \leq 1\},$

but can in fact be allowed to range on larger sets. First of all, ${f}$ may instead be supported on the slightly larger polytope

$\displaystyle {\cal R}'_k := \{ (t_1,\dots,t_k)\in [0,1]^k: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 1$

$\displaystyle \hbox{ for all } j=1,\dots,k\}.$

However, it turns out that more is true: given a sufficiently general version of the Elliott-Halberstam conjecture ${EH[\theta]}$ at the given value of ${\theta}$, one may work with functions ${f}$ supported on more general domains ${R}$, so long as the sumset ${R+R := \{ t+t': t,t'\in R\}}$ is contained in the non-convex region

$\displaystyle \bigcup_{j=1}^k \{ (t_1,\dots,t_k)\in [0,\frac{2}{\theta}]^k: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 2 \} \cup \frac{2}{\theta} \cdot {\cal R}_k, \ \ \ \ \ (1)$

and also provided that the restriction

$\displaystyle (t_1,\dots,t_{j-1},t_{j+1},\dots,t_k) \mapsto f(t_1,\dots,t_{j-1},0,t_{j+1},\dots,t_k) \ \ \ \ \ (2)$

is supported on the simplex

$\displaystyle {\cal R}_{k-1} := \{ (t_1,\dots,t_{j-1},t_{j+1},\dots,t_k)\in [0,1]^{k-1}:$

$\displaystyle t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 1\}.$

More precisely, if ${f}$ is a smooth function, not identically zero, with the above properties for some ${R}$, and the ratio

$\displaystyle \sum_{j=1}^k \int_{{\cal R}_{k-1}} f_{1,\dots,j-1,j+1,\dots,k}(t_1,\dots,t_{j-1},0,t_{j+1},\dots,t_k)^2 \ \ \ \ \ (3)$

$\displaystyle dt_1 \dots dt_{j-1} dt_{j+1} \dots dt_k$

$\displaystyle / \int_R f_{1,\dots,k}^2(t_1,\dots,t_k)\ dt_1 \dots dt_k$

is larger than ${\frac{2m}{\theta}}$, then the claim ${DHL[k,m+1]}$ holds (assuming ${EH[\theta]}$), and in particular ${H_m \leq H(k)}$.

I’ll explain why one can do this below the fold. Taking this for granted, we can rewrite this criterion in terms of the mixed derivative ${F := f_{1,\dots,k}}$, the upshot being that if one can find a smooth function ${F}$ supported on ${R}$ that obeys the vanishing marginal conditions

$\displaystyle \int F( t_1,\dots,t_k )\ dt_j = 0$

whenever ${1 \leq j \leq k}$ and ${t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k > 1}$, and the ratio

$\displaystyle \frac{\sum_{j=1}^k J_k^{(j)}(F)}{I_k(F)} \ \ \ \ \ (4)$

is larger than ${\frac{2m}{\theta}}$, where

$\displaystyle I_k(F) := \int_R F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k$

and

$\displaystyle J_k^{(j)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1/\theta} F(t_1,\dots,t_k)\ dt_j)^2 dt_1 \dots dt_{j-1} dt_{j+1} \dots dt_k$

then ${DHL[k,m+1]}$ holds. (To equate these two formulations, it is convenient to assume that ${R}$ is a downset, in the sense that whenever ${(t_1,\dots,t_k) \in R}$, the entire box ${[0,t_1] \times \dots \times [0,t_k]}$ lies in ${R}$, but one can easily enlarge ${R}$ to be a downset without destroying the containment of ${R+R}$ in the non-convex region (1).) One initially requires ${F}$ to be smooth, but a limiting argument allows one to relax to bounded measurable ${F}$. (To approximate a rough ${F}$ by a smooth ${F}$ while retaining the required moment conditions, one can first apply a slight dilation and translation so that the marginals of ${F}$ are supported on a slightly smaller version of the simplex ${{\cal R}_{k-1}}$, and then convolve by a smooth approximation to the identity to make ${F}$ smooth, while keeping the marginals supported on ${{\cal R}_{k-1}}$.)

We are now exploring various choices of ${R}$ to work with, including the prism

$\displaystyle \{ (t_1,\dots,t_k) \in [0,1/\theta]^k: t_1+\dots+t_{k-1} \leq 1 \}$

and the symmetric region

$\displaystyle \{ (t_1,\dots,t_k) \in [0,1/\theta]^k: t_1+\dots+t_k \leq \frac{k}{k-1} \}.$

By suitably subdividing these regions into polytopes, and working with piecewise polynomial functions ${F}$ that are polynomial of a specified degree on each subpolytope, one can phrase the problem of optimising (4) as a quadratic program, which we have managed to work with for ${k=3}$. Extending this program to ${k=4}$, there is a decent chance that we will be able to obtain ${DHL[4,2]}$ on EH.
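As a purely illustrative toy sketch (my own, not the Polymath8b code), here is how optimising a ratio such as (4) over polynomial ${F}$ reduces to a generalized eigenvalue problem, in the simplest case ${k=2}$, ${\theta=1}$, with ${R}$ the simplex ${\{t_1,t_2 \geq 0: t_1+t_2 \leq 1\}}$ and ${F}$ a polynomial of degree at most 2. (The value obtained is only a lower bound for the true optimum, and for ${k=2}$ is in any case too small to give a ${DHL}$ conclusion; the point is only the mechanics of the reduction.)

```python
# Toy sketch (my own illustration, not the Polymath8b code): for k = 2,
# theta = 1 and R the simplex {t1, t2 >= 0, t1 + t2 <= 1}, maximise the
# Rayleigh quotient (J_2^(1)(F) + J_2^(2)(F)) / I_2(F) over polynomials F
# of degree <= 2.  All moment integrals are exact:
#   int_R t1^a t2^b dt1 dt2 = a! b! / (a+b+2)!
#   int_0^{1-t} s^b ds     = (1-t)^{b+1} / (b+1)
from fractions import Fraction
from math import factorial
import numpy as np

basis = [(a, b) for a in range(3) for b in range(3) if a + b <= 2]
n = len(basis)
I = np.zeros((n, n))  # I[i, j] = int_R m_i m_j  (Gram matrix of monomials)
J = np.zeros((n, n))  # J[i, j] = sum of the two marginal Gram forms
for i, (ai, bi) in enumerate(basis):
    for j, (aj, bj) in enumerate(basis):
        I[i, j] = float(Fraction(factorial(ai + aj) * factorial(bi + bj),
                                 factorial(ai + aj + bi + bj + 2)))
        # J^(2): integrate t2 out first, leaving a Beta integral in t1
        J2 = Fraction(factorial(ai + aj) * factorial(bi + bj + 2),
                      factorial(ai + aj + bi + bj + 3)) / ((bi + 1) * (bj + 1))
        # J^(1): the same with the roles of t1 and t2 swapped
        J1 = Fraction(factorial(bi + bj) * factorial(ai + aj + 2),
                      factorial(ai + aj + bi + bj + 3)) / ((ai + 1) * (aj + 1))
        J[i, j] = float(J1 + J2)

# Maximising x^T J x / x^T I x is a generalized eigenvalue problem.
ratio = max(np.linalg.eigvals(np.linalg.solve(I, J)).real)
print(ratio)  # the constant F already gives 4/3; degree 2 does slightly better
```

In the actual project the same linear-algebra reduction is carried out on each subpolytope of a subdivision, producing a (much larger) quadratic program.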

We have also been able to numerically optimise ${M_k}$ quite accurately for medium values of ${k}$ (e.g. ${k \sim 50}$), which has led to improved values of ${H_1}$ without EH. For large ${k}$, we now also have the asymptotic ${M_k=\log k - O(1)}$ with explicit error terms (details here) which have allowed us to slightly improve the ${m=2}$ numerology, and also to get explicit ${m=3}$ numerology for the first time.

(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology.
• Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms.
• Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and which is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct.

With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$ will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is closely analogous to compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses; this is particularly useful for upgrading qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus significantly reducing the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”).
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
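For reference, Łoś's theorem can be stated as follows (a standard formulation, with ${\alpha}$ the underlying nonprincipal ultrafilter and ${\varphi}$ a first-order formula):

```latex
% Łoś's theorem: a first-order formula holds in the ultraproduct
% precisely when it holds for \alpha-most of the factors.
\prod_{{\bf n} \rightarrow \alpha} X_{\bf n} \models
  \varphi\Big( \lim_{{\bf n} \rightarrow \alpha} x^1_{\bf n}, \dots,
               \lim_{{\bf n} \rightarrow \alpha} x^m_{\bf n} \Big)
\quad \Longleftrightarrow \quad
\{ {\bf n} : X_{\bf n} \models \varphi( x^1_{\bf n}, \dots, x^m_{\bf n} ) \}
\in \alpha.
```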

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.

For each natural number ${m}$, let ${H_m}$ denote the quantity

$\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$

where ${p_n}$ denotes the ${n^{th}}$ prime. In other words, ${H_m}$ is the least quantity such that there are infinitely many intervals of length ${H_m}$ that contain ${m+1}$ or more primes. Thus, for instance, the twin prime conjecture is equivalent to the assertion that ${H_1 = 2}$, and the prime tuples conjecture would imply that ${H_m}$ is equal to the diameter of the narrowest admissible tuple of cardinality ${m+1}$ (thus we conjecturally have ${H_1 = 2}$, ${H_2 = 6}$, ${H_3 = 8}$, ${H_4 = 12}$, ${H_5 = 16}$, and so forth; see this web page for further continuation of this sequence).
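The conjectural values quoted above can be checked against a brute-force search for the narrowest admissible tuples. (Recall that a tuple is admissible if, for each prime ${p}$, it omits at least one residue class mod ${p}$; only primes ${p \leq k}$ need to be checked for a ${k}$-tuple.) The following quick script is my own illustration:

```python
# Brute-force check (my own quick script) of the narrowest admissible
# tuples of small cardinality; their diameters are the conjectured H_m.
from itertools import combinations

def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(t):
    # admissible: for every prime p (p <= len(t) suffices), the tuple
    # avoids at least one residue class mod p
    return all(len({x % p for x in t}) < p for p in primes_upto(len(t)))

def narrowest_diameter(k):
    # smallest d such that some admissible k-tuple has diameter d;
    # normalise the tuple to start at 0 and end at d
    d = k - 1
    while True:
        for mid in combinations(range(1, d), k - 2):
            if is_admissible((0,) + mid + (d,)):
                return d
        d += 1

print([narrowest_diameter(k) for k in range(2, 7)])  # [2, 6, 8, 12, 16]
```

The output reproduces the conjectural values ${H_1,\dots,H_5}$ listed above.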

In 2004, Goldston, Pintz, and Yildirim established the bound ${H_1 \leq 16}$ conditional on the Elliott-Halberstam conjecture, which remains unproven. However, no unconditional finiteness of ${H_1}$ was obtained (although they famously obtained the non-trivial bound ${p_{n+1}-p_n = o(\log p_n)}$), and even on the Elliott-Halberstam conjecture no finiteness result on the higher ${H_m}$ was obtained either (although they were able to show ${p_{n+2}-p_n=o(\log p_n)}$ on this conjecture). In the recent breakthrough of Zhang, the unconditional bound ${H_1 \leq 70,000,000}$ was obtained, by establishing a weak partial version of the Elliott-Halberstam conjecture; by refining these methods, the Polymath8 project (which I suppose we could retroactively call the Polymath8a project) then lowered this bound to ${H_1 \leq 4,680}$.

With the very recent preprint of James Maynard, we have the following further substantial improvements:

Theorem 1 (Maynard’s theorem) Unconditionally, we have the following bounds:

• ${H_1 \leq 600}$.
• ${H_m \leq C m^3 e^{4m}}$ for an absolute constant ${C}$ and any ${m \geq 1}$.

If one assumes the Elliott-Halberstam conjecture, we have the following improved bounds:

• ${H_1 \leq 12}$.
• ${H_2 \leq 600}$.
• ${H_m \leq C m^3 e^{2m}}$ for an absolute constant ${C}$ and any ${m \geq 1}$.

The final conclusion ${H_m \leq C m^3 e^{2m}}$ on Elliott-Halberstam is not explicitly stated in Maynard’s paper, but follows easily from his methods, as I will describe below the fold. (At around the same time as Maynard’s work, I had also begun a similar set of calculations concerning ${H_m}$, but was only able to obtain the slightly weaker bound ${H_m \leq C \exp( C m )}$ unconditionally.) In the converse direction, the prime tuples conjecture implies that ${H_m}$ should be comparable to ${m \log m}$. Granville has also obtained the slightly weaker explicit bound ${H_m \leq e^{8m+5}}$ for any ${m \geq 1}$ by a slight modification of Maynard’s argument.

The arguments of Maynard avoid using the difficult partial results on (weakened forms of) the Elliott-Halberstam conjecture that were established by Zhang and then refined by Polymath8; instead, the main input is the classical Bombieri-Vinogradov theorem, combined with a sieve that is closer in spirit to an older sieve of Goldston and Yildirim than to the sieve used later by Goldston, Pintz, and Yildirim, on which almost all subsequent work is based.

The aim of the Polymath8b project is to obtain improved bounds on ${H_1, H_2}$, and higher values of ${H_m}$, either conditional on the Elliott-Halberstam conjecture or unconditional. The likeliest routes for doing this are by optimising Maynard’s arguments and/or combining them with some of the results from the Polymath8a project. This post is intended to be the first research thread for that purpose. To start the ball rolling, I am going to give below a presentation of Maynard’s results, with some minor technical differences (most significantly, I am using the Goldston-Pintz-Yildirim variant of the Selberg sieve, rather than the traditional “elementary Selberg sieve” that is used by Maynard (and also in the Polymath8 project), although it seems that the numerology obtained by both sieves is essentially the same). An alternate exposition of Maynard’s work has just been completed also by Andrew Granville.

If ${f: {\bf R}^n \rightarrow {\bf C}}$ and ${g: {\bf R}^n \rightarrow {\bf C}}$ are two absolutely integrable functions on a Euclidean space ${{\bf R}^n}$, then the convolution ${f*g: {\bf R}^n \rightarrow {\bf C}}$ of the two functions is defined by the formula

$\displaystyle f*g(x) := \int_{{\bf R}^n} f(y) g(x-y)\ dy = \int_{{\bf R}^n} f(x-z) g(z)\ dz.$

A simple application of the Fubini-Tonelli theorem shows that the convolution ${f*g}$ is well-defined almost everywhere, and yields another absolutely integrable function. In the case that ${f=1_F}$, ${g=1_G}$ are indicator functions, the convolution simplifies to

$\displaystyle 1_F*1_G(x) = m( F \cap (x-G) ) = m( (x-F) \cap G ) \ \ \ \ \ (1)$

where ${m}$ denotes Lebesgue measure. One can also define convolution on more general locally compact groups than ${{\bf R}^n}$, but we will restrict attention to the Euclidean case in this post.
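As a toy numerical illustration (my own, not part of the original discussion) of formula (1) in one dimension: the convolution ${1_{[0,1]} * 1_{[0,1]}}$ is the tent function ${x \mapsto \max(0, 1-|x-1|)}$, which a Riemann-sum discretisation reproduces up to an ${O(dx)}$ error:

```python
# Toy numerical check (my own illustration) of formula (1) in 1D:
# (1_[0,1] * 1_[0,1])(x) = m([0,1] ∩ (x - [0,1])) = max(0, 1 - |x - 1|).
import numpy as np

dx = 1e-3
x = np.arange(0.0, 2.0 + dx, dx)
f = (x <= 1.0).astype(float)            # samples of 1_[0,1] on [0, 2]
conv = np.convolve(f, f) * dx           # Riemann-sum approximation of f * f
grid = np.arange(conv.size) * dx        # the full convolution lives on [0, 4]
tent = np.maximum(0.0, 1.0 - np.abs(grid - 1.0))
assert np.max(np.abs(conv - tent)) < 5 * dx
```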

The convolution ${f*g}$ can also be defined by duality by observing the identity

$\displaystyle \int_{{\bf R}^n} f*g(x) h(x)\ dx = \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ f(y) dy g(z) dz$

for any bounded measurable function ${h: {\bf R}^n \rightarrow {\bf C}}$. Motivated by this observation, we may define the convolution ${\mu*\nu}$ of two finite Borel measures on ${{\bf R}^n}$ by the formula

$\displaystyle \int_{{\bf R}^n} h(x)\ d\mu*\nu(x) := \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (2)$

for any bounded (Borel) measurable function ${h: {\bf R}^n \rightarrow {\bf C}}$, or equivalently that

$\displaystyle \mu*\nu(E) = \int_{{\bf R}^n} \int_{{\bf R}^n} 1_E(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (3)$

for all Borel measurable ${E}$. (In another equivalent formulation: ${\mu*\nu}$ is the pushforward of the product measure ${\mu \times \nu}$ with respect to the addition map ${+: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n}$.) This can easily be verified to again be a finite Borel measure.

If ${\mu}$ and ${\nu}$ are probability measures, then the convolution ${\mu*\nu}$ also has a simple probabilistic interpretation: it is the law (i.e. probability distribution) of a random variable of the form ${X+Y}$, where ${X, Y}$ are independent random variables taking values in ${{\bf R}^n}$ with law ${\mu,\nu}$ respectively. Among other things, this interpretation makes it obvious that the support of ${\mu*\nu}$ is the sumset of the supports of ${\mu}$ and ${\nu}$, and that ${\mu*\nu}$ will also be a probability measure.
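This interpretation is easy to test numerically. For instance (a Monte Carlo sketch of my own), taking ${X}$ uniform on a horizontal unit segment and ${Y}$ uniform on a vertical unit segment, the sum ${X+Y}$ is uniformly distributed on the unit square:

```python
# Monte Carlo illustration (my own sketch): if X is uniform on the
# horizontal segment [0,1] x {0} and Y is uniform on the vertical segment
# {0} x [0,1], then X + Y has law mu*nu, the uniform measure on [0,1]^2.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = np.column_stack([rng.random(n), np.zeros(n)])  # samples from mu
Y = np.column_stack([np.zeros(n), rng.random(n)])  # samples from nu
S = X + Y
assert np.allclose(S.mean(axis=0), [0.5, 0.5], atol=0.01)  # uniform mean
assert abs(np.corrcoef(S[:, 0], S[:, 1])[0, 1]) < 0.02     # independent coords
```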

While the above discussion gives a perfectly rigorous definition of the convolution of two measures, it does not always give helpful guidance as to how to compute the convolution of two explicit measures (e.g. the convolution of two surface measures on explicit examples of surfaces, such as the sphere). In simple cases, one can work from first principles directly from the definition (2), (3), perhaps after some application of tools from several variable calculus, such as the change of variables formula. Another technique proceeds by regularisation, approximating the measures ${\mu, \nu}$ involved as the weak limit (or vague limit) of absolutely integrable functions

$\displaystyle \mu = \lim_{\epsilon \rightarrow 0} f_\epsilon; \quad \nu =\lim_{\epsilon \rightarrow 0} g_\epsilon$

(where we identify an absolutely integrable function ${f}$ with the associated absolutely continuous measure ${dm_f(x) := f(x)\ dx}$) which then implies (assuming that the sequences ${f_\epsilon,g_\epsilon}$ are tight) that ${\mu*\nu}$ is the weak limit of the ${f_\epsilon * g_\epsilon}$. The latter convolutions ${f_\epsilon * g_\epsilon}$, being convolutions of functions rather than measures, can be computed (or at least estimated) by traditional integration techniques, at which point the only difficulty is to ensure that one has enough uniformity in ${\epsilon}$ to maintain control of the limit as ${\epsilon \rightarrow 0}$.

A third method proceeds using the Fourier transform

$\displaystyle \hat \mu(\xi) := \int_{{\bf R}^n} e^{-2\pi i x \cdot \xi}\ d\mu(x)$

of ${\mu}$ (and of ${\nu}$). We have

$\displaystyle \widehat{\mu*\nu}(\xi) = \hat{\mu}(\xi) \hat{\nu}(\xi)$

and so one can (in principle, at least) compute ${\mu*\nu}$ by taking Fourier transforms, multiplying them together, and applying the (distributional) inverse Fourier transform. Heuristically, this formula implies that the Fourier transform of ${\mu*\nu}$ should be concentrated in the intersection of the frequency region where the Fourier transform of ${\mu}$ is supported, and the frequency region where the Fourier transform of ${\nu}$ is supported. As the regularity of a measure is related to decay of its Fourier transform, this also suggests that the convolution ${\mu*\nu}$ of two measures will typically be more regular than each of the two original measures, particularly if the Fourier transforms of ${\mu}$ and ${\nu}$ are concentrated in different regions of frequency space (which should happen if the measures ${\mu,\nu}$ are suitably “transverse”). In particular, it can happen that ${\mu*\nu}$ is an absolutely continuous measure, even if ${\mu}$ and ${\nu}$ are both singular measures.
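As a quick sanity check of the convolution identity (in a discrete toy model of my own rather than the continuous setting above): for measures on the cyclic group ${{\bf Z}/N{\bf Z}}$, where the Fourier transform is just the discrete Fourier transform, the identity holds exactly.

```python
# Discrete sanity check (my own illustration) of the identity
# (mu * nu)^ = mu^ · nu^ for measures on the cyclic group Z/NZ,
# where ^ is the discrete Fourier transform and * is cyclic convolution.
import numpy as np

rng = np.random.default_rng(0)
N = 64
mu = rng.random(N); mu /= mu.sum()   # two probability measures on Z/NZ
nu = rng.random(N); nu /= nu.sum()
conv = np.array([sum(mu[k] * nu[(m - k) % N] for k in range(N))
                 for m in range(N)])
assert np.allclose(np.fft.fft(conv), np.fft.fft(mu) * np.fft.fft(nu))
assert abs(conv.sum() - 1.0) < 1e-12  # mu*nu is again a probability measure
```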

Using intuition from microlocal analysis, we can combine our understanding of the spatial and frequency behaviour of convolution to the following heuristic: a convolution ${\mu*\nu}$ should be supported in regions of phase space ${\{ (x,\xi): x \in {\bf R}^n, \xi \in {\bf R}^n \}}$ of the form ${(x,\xi) = (x_1+x_2,\xi)}$, where ${(x_1,\xi)}$ lies in the region of phase space where ${\mu}$ is concentrated, and ${(x_2,\xi)}$ lies in the region of phase space where ${\nu}$ is concentrated. It is a challenge to make this intuition perfectly rigorous, as one has to somehow deal with the obstruction presented by the Heisenberg uncertainty principle, but it can be made rigorous in various asymptotic regimes, for instance using the machinery of wave front sets (which describes the high frequency limit of the phase space distribution).

Let us illustrate these three methods and the final heuristic with a simple example. Let ${\mu}$ be a singular measure on the horizontal unit interval ${[0,1] \times \{0\} = \{ (x,0): 0 \leq x \leq 1 \}}$, given by weighting Lebesgue measure on that interval by some test function ${\phi: {\bf R} \rightarrow {\bf C}}$ supported on ${[0,1]}$:

$\displaystyle \int_{{\bf R}^2} f(x,y)\ d\mu(x,y) := \int_{\bf R} f(x,0) \phi(x)\ dx.$

Similarly, let ${\nu}$ be a singular measure on the vertical unit interval ${\{0\} \times [0,1] = \{ (0,y): 0 \leq y \leq 1 \}}$ given by weighting Lebesgue measure on that interval by another test function ${\psi: {\bf R} \rightarrow {\bf C}}$ supported on ${[0,1]}$:

$\displaystyle \int_{{\bf R}^2} g(x,y)\ d\nu(x,y) := \int_{\bf R} g(0,y) \psi(y)\ dy.$

We can compute the convolution ${\mu*\nu}$ using (2), which in this case becomes

$\displaystyle \int_{{\bf R}^2} h( x, y ) d\mu*\nu(x,y) = \int_{{\bf R}^2} \int_{{\bf R}^2} h(x_1+x_2, y_1+y_2)\ d\mu(x_1,y_1) d\nu(x_2,y_2)$

$\displaystyle = \int_{\bf R} \int_{\bf R} h( x_1, y_2 )\ \phi(x_1) dx_1 \psi(y_2) dy_2$

and we thus conclude that ${\mu*\nu}$ is an absolutely continuous measure on ${{\bf R}^2}$ with density function ${(x,y) \mapsto \phi(x) \psi(y)}$:

$\displaystyle d(\mu*\nu)(x,y) = \phi(x) \psi(y) dx dy. \ \ \ \ \ (4)$

In particular, ${\mu*\nu}$ is supported on the unit square ${[0,1]^2}$, which is of course the sumset of the two intervals ${[0,1] \times\{0\}}$ and ${\{0\} \times [0,1]}$.

We can arrive at the same conclusion from the regularisation method; the computations become lengthier, but more geometric in nature, and emphasise the role of transversality between the two segments supporting ${\mu}$ and ${\nu}$. One can view ${\mu}$ as the weak limit of the functions

$\displaystyle f_\epsilon(x,y) := \frac{1}{\epsilon} \phi(x) 1_{[0,\epsilon]}(y)$

as ${\epsilon \rightarrow 0}$ (where we continue to identify absolutely integrable functions with absolutely continuous measures, and of course we keep ${\epsilon}$ positive). We can similarly view ${\nu}$ as the weak limit of

$\displaystyle g_\epsilon(x,y) := \frac{1}{\epsilon} 1_{[0,\epsilon]}(x) \psi(y).$

Let us first look at the model case when ${\phi=\psi=1_{[0,1]}}$, so that ${f_\epsilon,g_\epsilon}$ are renormalised indicator functions of thin rectangles:

$\displaystyle f_\epsilon = \frac{1}{\epsilon} 1_{[0,1]\times [0,\epsilon]}; \quad g_\epsilon = \frac{1}{\epsilon} 1_{[0,\epsilon] \times [0,1]}.$

By (1), the convolution ${f_\epsilon*g_\epsilon}$ is then given by

$\displaystyle f_\epsilon*g_\epsilon(x,y) := \frac{1}{\epsilon^2} m( E_\epsilon )$

where ${E_\epsilon}$ is the intersection of two rectangles:

$\displaystyle E_\epsilon := ([0,1] \times [0,\epsilon]) \cap ((x,y) - [0,\epsilon] \times [0,1]).$

When ${(x,y)}$ lies in the square ${[\epsilon,1] \times [\epsilon,1]}$, one readily sees (especially if one draws a picture) that ${E_\epsilon}$ consists of an ${\epsilon \times \epsilon}$ square and thus has measure ${\epsilon^2}$; conversely, if ${(x,y)}$ lies outside ${[0,1+\epsilon] \times [0,1+\epsilon]}$, ${E_\epsilon}$ is empty and thus has measure zero. In the intermediate region, ${E_\epsilon}$ will have some measure between ${0}$ and ${\epsilon^2}$. From this we see that ${f_\epsilon*g_\epsilon}$ converges pointwise almost everywhere to ${1_{[0,1] \times [0,1]}}$ while also being dominated by an absolutely integrable function, and so converges weakly to ${1_{[0,1] \times [0,1]}}$, giving a special case of the formula (4).

Exercise 1 Use a similar method to verify (4) in the case that ${\phi, \psi}$ are continuous functions on ${[0,1]}$. (The argument also works for absolutely integrable ${\phi,\psi}$, but one needs to invoke the Lebesgue differentiation theorem to make it run smoothly.)

Now we compute with the Fourier-analytic method. The Fourier transform ${\hat \mu(\xi,\eta)}$ of ${\mu}$ is given by

$\displaystyle \hat \mu(\xi,\eta) =\int_{{\bf R}^2} e^{-2\pi i (x \xi + y \eta)}\ d\mu(x,y)$

$\displaystyle = \int_{\bf R} \phi(x) e^{-2\pi i x \xi}\ dx$

$\displaystyle = \hat \phi(\xi)$

where we abuse notation slightly by using ${\hat \phi}$ to refer to the one-dimensional Fourier transform of ${\phi}$. In particular, ${\hat \mu}$ decays in the ${\xi}$ direction (by the Riemann-Lebesgue lemma) but has no decay in the ${\eta}$ direction, which reflects the horizontally grained structure of ${\mu}$. Similarly we have

$\displaystyle \hat \nu(\xi,\eta) = \hat \psi(\eta),$

so that ${\hat \nu}$ decays in the ${\eta}$ direction. The convolution ${\mu*\nu}$ then has decay in both the ${\xi}$ and ${\eta}$ directions,

$\displaystyle \widehat{\mu*\nu}(\xi,\eta) = \hat \phi(\xi) \hat \psi(\eta)$

and by inverting the Fourier transform we obtain (4).

Exercise 2 Let ${AB}$ and ${CD}$ be two non-parallel line segments in the plane ${{\bf R}^2}$. If ${\mu}$ is the uniform probability measure on ${AB}$ and ${\nu}$ is the uniform probability measure on ${CD}$, show that ${\mu*\nu}$ is the uniform probability measure on the parallelogram ${AB + CD}$ with vertices ${A+C, A+D, B+C, B+D}$. What happens in the degenerate case when ${AB}$ and ${CD}$ are parallel?

Finally, we compare the above answers with what one gets from the microlocal analysis heuristic. The measure ${\mu}$ is supported on the horizontal interval ${[0,1] \times \{0\}}$, and the cotangent bundle at any point on this interval points in the vertical direction. Thus, the wave front set of ${\mu}$ should be supported on those points ${((x_1,x_2),(\xi_1,\xi_2))}$ in phase space with ${x_1 \in [0,1]}$, ${x_2 = 0}$ and ${\xi_1=0}$. Similarly, the wave front set of ${\nu}$ should be supported at those points ${((y_1,y_2),(\xi_1,\xi_2))}$ with ${y_1 = 0}$, ${y_2 \in [0,1]}$, and ${\xi_2=0}$. The convolution ${\mu * \nu}$ should then have wave front set supported on those points ${((x_1+y_1,x_2+y_2), (\xi_1,\xi_2))}$ with ${x_1 \in [0,1]}$, ${x_2 = 0}$, ${\xi_1=0}$, ${y_1=0}$, ${y_2 \in [0,1]}$, and ${\xi_2=0}$, i.e. it should be spatially supported on the unit square and have zero (rescaled) frequency, so the heuristic predicts a smooth function on the unit square, which is indeed what happens. (The situation is slightly more complicated in the non-smooth case ${\phi=\psi=1_{[0,1]}}$, because ${\mu}$ and ${\nu}$ then acquire some additional singularities at the endpoints; namely, the wave front set of ${\mu}$ now also contains those points ${((x_1,x_2),(\xi_1,\xi_2))}$ with ${x_1 \in \{0,1\}}$, ${x_2=0}$, and ${\xi_1,\xi_2}$ arbitrary, and ${\nu}$ similarly contains those points ${((y_1,y_2), (\xi_1,\xi_2))}$ with ${y_1=0}$, ${y_2 \in \{0,1\}}$, and ${\xi_1,\xi_2}$ arbitrary. I’ll leave it as an exercise to the reader to compute what this predicts for the wave front set of ${\mu*\nu}$, and how this compares with the actual wave front set.)

Exercise 3 Let ${\mu}$ be the uniform measure on the unit sphere ${S^{n-1}}$ in ${{\bf R}^n}$ for some ${n \geq 2}$. Use as many of the above methods as possible to establish multiple proofs of the following fact: the convolution ${\mu*\mu}$ is an absolutely continuous multiple ${f(x)\ dx}$ of Lebesgue measure, with ${f(x)}$ supported on the ball ${B(0,2)}$ of radius ${2}$ and obeying the bounds

$\displaystyle |f(x)| \ll \frac{1}{|x|}$

for ${|x| \leq 1}$ and

$\displaystyle |f(x)| \ll (2-|x|)^{(n-3)/2}$

for ${1 \leq |x| \leq 2}$, where the implied constants are allowed to depend on the dimension ${n}$. (Hint: try the ${n=2}$ case first, which is particularly simple due to the fact that the addition map ${+: S^1 \times S^1 \rightarrow {\bf R}^2}$ is mostly a local diffeomorphism. The Fourier-based approach is instructive, but requires either asymptotics of Bessel functions or the principle of stationary phase.)