I’ve just uploaded to the arXiv the paper “Finite time blowup for an averaged three-dimensional Navier-Stokes equation”, submitted to J. Amer. Math. Soc. The main purpose of this paper is to formalise the “supercriticality barrier” for the global regularity problem for the Navier-Stokes equation, which roughly speaking asserts that it is not possible to establish global regularity by any “abstract” approach which only uses upper bound function space estimates on the nonlinear part of the equation, combined with the energy identity. This is done by constructing a modification of the Navier-Stokes equations with a nonlinearity that obeys essentially all of the function space estimates that the true Navier-Stokes nonlinearity does, and which also obeys the energy identity, but for which one can construct solutions that blow up in finite time. Results of this type had been previously established by Montgomery-Smith, Gallagher-Paicu, and Li-Sinai for variants of the Navier-Stokes equation without the energy identity, and by Katz-Pavlovic and by Cheskidov for dyadic analogues of the Navier-Stokes equations in five and higher dimensions that obeyed the energy identity (see also the work of Plechac and Sverak and of Hou and Lei that also suggest blowup for other Navier-Stokes type models obeying the energy identity in five and higher dimensions), but to my knowledge this is the first blowup result for a Navier-Stokes type equation in three dimensions that also obeys the energy identity. Intriguingly, the method of proof in fact hints at a possible route to establishing blowup for the true Navier-Stokes equations, which I am now increasingly inclined to believe is the case (albeit for a very small set of initial data).

To state the results more precisely, recall that the Navier-Stokes equations can be written in the form

\displaystyle  \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p

for a divergence-free velocity field {u} and a pressure field {p}, where {\nu>0} is the viscosity, which we will normalise to be one. We will work in the non-periodic setting, so the spatial domain is {{\bf R}^3}, and for sake of exposition I will not discuss matters of regularity or decay of the solution (but we will always be working with strong notions of solution here rather than weak ones). Applying the Leray projection {P} onto divergence-free vector fields to this equation, we can eliminate the pressure, and obtain an evolution equation

\displaystyle  \partial_t u = \Delta u + B(u,u) \ \ \ \ \ (1)

purely for the velocity field, where {B} is a certain bilinear operator on divergence-free vector fields (specifically, {B(u,v) = -\frac{1}{2} P( (u \cdot \nabla) v + (v \cdot \nabla) u)}). The global regularity problem for Navier-Stokes is then equivalent to the global regularity problem for the evolution equation (1).

An important feature of the bilinear operator {B} appearing in (1) is the cancellation law

\displaystyle  \langle B(u,u), u \rangle = 0

(using the {L^2} inner product on divergence-free vector fields), which leads in particular to the fundamental energy identity

\displaystyle  \frac{1}{2} \int_{{\bf R}^3} |u(T,x)|^2\ dx + \int_0^T \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx dt = \frac{1}{2} \int_{{\bf R}^3} |u(0,x)|^2\ dx.

This identity (and its consequences) provides essentially the only known a priori bound on solutions to the Navier-Stokes equations for large data and arbitrary times. Unfortunately, as discussed in this previous post, the quantities controlled by the energy identity are supercritical with respect to scaling, which is the fundamental obstacle that has defeated all attempts to solve the global regularity problem for Navier-Stokes without any additional assumptions on the data or solution (e.g. perturbative hypotheses, or a priori control on a critical norm such as the {L^\infty_t L^3_x} norm).
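(To recall where this identity comes from: formally taking the {L^2} inner product of (1) with {u}, integrating by parts to write {\langle \Delta u, u \rangle = - \int_{{\bf R}^3} |\nabla u|^2\ dx}, and applying the cancellation law to delete the nonlinear term, one obtains

\displaystyle  \frac{d}{dt} \frac{1}{2} \int_{{\bf R}^3} |u(t,x)|^2\ dx = - \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx + \langle B(u,u), u \rangle = - \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx,

and the energy identity follows by integrating in time from {0} to {T}; these formal manipulations are justified by the regularity and decay hypotheses that are being suppressed in this discussion.)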

Our main result is then (slightly informally stated) as follows

Theorem 1 There exists an averaged version {\tilde B} of the bilinear operator {B}, of the form

\displaystyle  \tilde B(u,v) := \int_\Omega m_{3,\omega}(D) Rot_{3,\omega} B( m_{1,\omega}(D) Rot_{1,\omega} u, m_{2,\omega}(D) Rot_{2,\omega} v )\ d\mu(\omega)

for some probability space {(\Omega, \mu)}, some spatial rotation operators {Rot_{i,\omega}} for {i=1,2,3}, and some Fourier multipliers {m_{i,\omega}} of order {0}, for which one still has the cancellation law

\displaystyle  \langle \tilde B(u,u), u \rangle = 0

and for which the averaged Navier-Stokes equation

\displaystyle  \partial_t u = \Delta u + \tilde B(u,u) \ \ \ \ \ (2)

admits solutions that blow up in finite time.

(There are some integrability conditions on the Fourier multipliers {m_{i,\omega}} required in the above theorem in order for the conclusion to be non-trivial, but I am omitting them here for sake of exposition.)

Because spatial rotations and Fourier multipliers of order {0} are bounded on most function spaces, {\tilde B} automatically obeys almost all of the upper bound estimates that {B} does. Thus, this theorem blocks any attempt to prove global regularity for the true Navier-Stokes equations which relies purely on the energy identity and on upper bound estimates for the nonlinearity; one must use some additional structure of the nonlinear operator {B} which is not shared by an averaged version {\tilde B}. Such additional structure certainly exists – for instance, the Navier-Stokes equation has a vorticity formulation involving only differential operators rather than pseudodifferential ones, whereas a general equation of the form (2) does not. However, “abstract” approaches to global regularity generally do not exploit such structure, and thus cannot be used to affirmatively answer the Navier-Stokes problem.

It turns out that the particular averaged bilinear operator {\tilde B} that we will use will be a finite linear combination of local cascade operators, which take the form

\displaystyle  C(u,v) := \sum_{n \in {\bf Z}} (1+\epsilon_0)^{5n/2} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \psi_{3,n}

where {\epsilon_0>0} is a small parameter, {\psi_1,\psi_2,\psi_3} are Schwartz vector fields whose Fourier transform is supported on an annulus, and {\psi_{i,n}(x) := (1+\epsilon_0)^{3n/2} \psi_i( (1+\epsilon_0)^n x)} is an {L^2}-rescaled version of {\psi_i} (basically a “wavelet” of wavelength about {(1+\epsilon_0)^{-n}} centred at the origin). Such operators were essentially introduced by Katz and Pavlovic as dyadic models for {B}; they have essentially the same scaling property as {B} (except that one can only scale along powers of {1+\epsilon_0}, rather than over all positive reals), and in fact they can be expressed as an average of {B} in the sense of the above theorem, as can be shown after a somewhat tedious amount of Fourier-analytic symbol manipulations.

If we consider nonlinearities {\tilde B} which are a finite linear combination of local cascade operators, then the equation (2) more or less collapses to a system of ODE in certain “wavelet coefficients” of {u}. The precise ODE that shows up depends on what precise combination of local cascade operators one is using. Katz and Pavlovic essentially considered a single cascade operator together with its “adjoint” (needed to preserve the energy identity), and arrived (more or less) at the system of ODE

\displaystyle  \partial_t X_n = - (1+\epsilon_0)^{2n} X_n + (1+\epsilon_0)^{\frac{5}{2}(n-1)} X_{n-1}^2 - (1+\epsilon_0)^{\frac{5}{2} n} X_n X_{n+1} \ \ \ \ \ (3)

where {X_n: [0,T] \rightarrow {\bf R}} are scalar fields for each integer {n}. (Actually, Katz-Pavlovic worked with a technical variant of this particular equation, but the differences are not so important for this current discussion.) Note that the quadratic terms on the RHS carry a higher exponent of {1+\epsilon_0} than the dissipation term; this reflects the supercritical nature of this evolution (the energy {\frac{1}{2} \sum_n X_n^2} is monotone decreasing in this flow, so the natural size of {X_n} given the control on the energy is {O(1)}). There is a slight technical issue with the dissipation if one wishes to embed (3) into an equation of the form (2), but it is minor and I will not discuss it further here.

In principle, if the {X_n} mode has size comparable to {1} at some time {t_n}, then energy should flow from {X_n} to {X_{n+1}} at a rate comparable to {(1+\epsilon_0)^{\frac{5}{2} n}}, so that by time {t_{n+1} \approx t_n + (1+\epsilon_0)^{-\frac{5}{2} n}} or so, most of the energy of {X_n} should have drained into the {X_{n+1}} mode (with hardly any energy dissipated). Since the series {\sum_{n \geq 1} (1+\epsilon_0)^{-\frac{5}{2} n}} is summable, this suggests finite time blowup for this ODE as the energy races ever more quickly to higher and higher modes. Such a scenario was indeed established by Katz and Pavlovic (and refined by Cheskidov) if the dissipation strength {(1+\epsilon_0)^{2n}} was weakened somewhat (the exponent {2} has to be lowered to be less than {\frac{5}{3}}). As mentioned above, this is enough to give a version of Theorem 1 in five and higher dimensions.
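To see this heuristic concretely, here is a quick numerical sketch (my own illustration, not taken from the paper): truncate (3) to finitely many modes and integrate with a stiff solver, watching how far the energy cascades before the dissipation absorbs it. The truncation level, time horizon, and tolerances below are all arbitrary choices.

```python
# Numerical sketch (illustration only): truncate the system (3) to modes
# n = 1..N and track the cascade of energy to higher modes.
import numpy as np
from scipy.integrate import solve_ivp

eps0 = 1.0            # dyadic case, 1 + eps0 = 2
lam = 1.0 + eps0
N = 12                # truncation level; modes beyond N are discarded

def rhs(t, X):
    dX = np.zeros_like(X)
    for j in range(N):
        n = j + 1                       # mode index as in (3); arrays are 0-based
        dX[j] = -lam**(2*n) * X[j]      # dissipation term
        if j >= 1:                      # inflow of energy from the coarser mode
            dX[j] += lam**(2.5*(n-1)) * X[j-1]**2
        if j < N - 1:                   # outflow of energy to the finer mode
            dX[j] -= lam**(2.5*n) * X[j] * X[j+1]
    return dX

X0 = np.zeros(N); X0[0] = 1.0           # all energy initially at the coarsest mode
sol = solve_ivp(rhs, (0.0, 4.0), X0, method="LSODA", rtol=1e-9, atol=1e-12)

E = 0.5 * np.sum(sol.y**2, axis=0)      # total energy, monotone decreasing
print("energy at start / end:", E[0], E[-1])
print("dominant mode at end:", np.argmax(np.abs(sol.y[:, -1])) + 1)
```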

On the other hand, it was shown a few years ago by Barbato, Morandin, and Romito that (3) in fact admits global smooth solutions (at least in the dyadic case {\epsilon_0=1}, and assuming non-negative initial data). Roughly speaking, the problem is that as energy is being transferred from {X_n} to {X_{n+1}}, energy is also simultaneously being transferred from {X_{n+1}} to {X_{n+2}}, and as such the solution races off to higher modes a bit too prematurely, without absorbing all of the energy from lower modes. This weakens the strength of the blowup to the point where the moderately strong dissipation in (3) is enough to kill the high frequency cascade before a true singularity occurs. Because of this, the original Katz-Pavlovic model cannot quite be used to establish Theorem 1 in three dimensions. (Actually, the original Katz-Pavlovic model had some additional dispersive features which allowed for another proof of global smooth solutions, which is an unpublished result of Nazarov.)

To get around this, I had to “engineer” an ODE system with similar features to (3) (namely, a quadratic nonlinearity, a monotone total energy, and the indicated exponents of {(1+\epsilon_0)} for both the dissipation term and the quadratic terms), but for which the cascade of energy from scale {n} to scale {n+1} was not interrupted by the cascade of energy from scale {n+1} to scale {n+2}. To do this, I needed to insert a delay in the cascade process (so that after energy was dumped into scale {n}, it would take some time before the energy would start to transfer to scale {n+1}), but the process also needed to be abrupt (once the process of energy transfer started, it needed to conclude very quickly, before the delayed transfer for the next scale kicked in). It turned out that one could build a “quadratic circuit” out of some basic “quadratic gates” (analogous to how an electrical circuit could be built out of basic components such as amplifiers or resistors) that achieved this task, leading to an ODE system essentially of the form

\displaystyle  \partial_t X_{1,n} = - (1+\epsilon_0)^{2n} X_{1,n} + (1+\epsilon_0)^{5n/2} (- \epsilon^{-2} X_{3,n} X_{4,n} - \epsilon X_{1,n} X_{2,n} - \epsilon^2 \exp(-K^{10}) X_{1,n} X_{3,n} + K X_{4,n-1}^2)

\displaystyle  \partial_t X_{2,n} = - (1+\epsilon_0)^{2n} X_{2,n} + (1+\epsilon_0)^{5n/2} (\epsilon X_{1,n}^2 - \epsilon^{-1} K^{10} X_{3,n}^2)

\displaystyle  \partial_t X_{3,n} = - (1+\epsilon_0)^{2n} X_{3,n} + (1+\epsilon_0)^{5n/2} (\epsilon^2 \exp(-K^{10}) X_{1,n}^2 + \epsilon^{-1} K^{10} X_{2,n} X_{3,n})

\displaystyle  \partial_t X_{4,n} = - (1+\epsilon_0)^{2n} X_{4,n} + (1+\epsilon_0)^{5n/2} (\epsilon^{-2} X_{3,n} X_{1,n} - (1+\epsilon_0)^{5/2} K X_{4,n} X_{1,n+1})

where {K \geq 1} is a suitable large parameter and {\epsilon > 0} is a suitable small parameter (much smaller than {1/K}). To visualise the dynamics of such a system, I found it useful to describe this system graphically by a “circuit diagram” that is analogous (but not identical) to the circuit diagrams arising in electrical engineering:

[Figure: circuit diagram for the quadratic circuit]

The coupling constants here range widely from being very large to very small; in practice, this makes the {X_{2,n}} and {X_{3,n}} modes absorb very little energy, but exert a sizeable influence on the remaining modes. If a lot of energy is suddenly dumped into {X_{1,n}}, what happens next is roughly as follows: for a moderate period of time, nothing much happens other than a trickle of energy into {X_{2,n}}, which in turn causes a rapid exponential growth of {X_{3,n}} (from a very low base). After this delay, {X_{3,n}} suddenly crosses a certain threshold, at which point it causes {X_{1,n}} and {X_{4,n}} to exchange energy back and forth with extreme speed. The energy from {X_{4,n}} then rapidly drains into {X_{1,n+1}}, and the process begins again (with a slight loss in energy due to the dissipation). If one plots the total energy {E_n := \frac{1}{2} ( X_{1,n}^2 + X_{2,n}^2 + X_{3,n}^2 + X_{4,n}^2 )} as a function of time, it looks schematically like this:

[Figure: schematic plot of the energies {E_n} as functions of time]

As in the previous heuristic discussion, the time between cascades from one frequency scale to the next decays exponentially, leading to blowup at some finite time {T}. (One could describe the dynamics here as being similar to the famous “lighting the beacons” scene in the Lord of the Rings movies, except that (a) as each beacon gets ignited, the previous one is extinguished, as per the energy identity; (b) the time between beacon lightings decreases exponentially; and (c) there is no soundtrack.)
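For readers who want to experiment with the circuit dynamics, here is a sketch of the truncated system (again my own illustration, with all parameter values chosen arbitrarily). One caveat: the rigorous argument takes {K} very large and {\epsilon} very small, regimes that double precision cannot represent (already for {K=2} the coupling {\exp(-K^{10})} underflows), so runs at moderate parameters such as these can only caricature the delay-then-fire behaviour described above.

```python
# Sketch of the truncated "quadratic circuit" system (illustrative parameters
# only; the proof needs K huge and epsilon tiny, far beyond double precision).
import numpy as np
from scipy.integrate import solve_ivp

eps0, eps, K, N = 1.0, 0.05, 1.5, 6     # arbitrary moderate choices
lam = 1.0 + eps0
seed = eps**2 * np.exp(-K**10)          # tiny coupling that seeds X_{3,n}

def rhs(t, Y):
    X1, X2, X3, X4 = Y.reshape(4, N)
    d = np.zeros((4, N))
    for j in range(N):
        a = lam**(2.5*(j+1))            # the factor (1+eps0)^{5n/2}
        d[0,j] = a*(-X3[j]*X4[j]/eps**2 - eps*X1[j]*X2[j] - seed*X1[j]*X3[j])
        if j > 0:
            d[0,j] += a*K*X4[j-1]**2    # energy arriving from scale n-1
        d[1,j] = a*(eps*X1[j]**2 - K**10*X3[j]**2/eps)
        d[2,j] = a*(seed*X1[j]**2 + K**10*X2[j]*X3[j]/eps)
        d[3,j] = a*(X3[j]*X1[j]/eps**2)
        if j < N - 1:
            d[3,j] -= a*lam**2.5*K*X4[j]*X1[j+1]   # energy leaving for scale n+1
    d -= lam**(2*(np.arange(N)+1)) * Y.reshape(4, N)   # dissipation at each scale
    return d.ravel()

Y0 = np.zeros(4*N); Y0[0] = 1.0         # dump the initial energy into X_{1,1}
sol = solve_ivp(rhs, (0.0, 5.0), Y0, method="LSODA", rtol=1e-9, atol=1e-12)
E = 0.5*np.sum(sol.y.reshape(4, N, -1)**2, axis=0)
print("final energy per scale:", E[:, -1])
```

(One can check by hand that without the dissipation term the right-hand side exactly conserves {\sum_{i,n} X_{i,n}^2}, mirroring the cancellation law.)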

There is a real (but remote) possibility that this sort of construction can be adapted to the true Navier-Stokes equations. The basic blowup mechanism in the averaged equation is that of a von Neumann machine, or more precisely a construct (built within the laws of the inviscid evolution {\partial_t u = \tilde B(u,u)}) that, after some time delay, manages to suddenly create a replica of itself at a finer scale (and to largely erase its original instantiation in the process). In principle, such a von Neumann machine could also be built out of the laws of the inviscid form of the Navier-Stokes equations (i.e. the Euler equations). In physical terms, one would have to build the machine purely out of an ideal fluid (i.e. an inviscid incompressible fluid). If one could somehow create enough “logic gates” out of ideal fluid, one could presumably build a sort of “fluid computer”, at which point the task of building a von Neumann machine appears to reduce to a software engineering exercise rather than a PDE problem (provided that the gates are suitably stable with respect to perturbations, but (as with actual computers) this can presumably be done by converting the analog signals of fluid mechanics into a more error-resistant digital form). The key thing missing in this program (in both senses of the word) to establish blowup for Navier-Stokes is to construct the logic gates within the laws of ideal fluids. (Compare with the situation for cellular automata such as Conway’s “Game of Life”, in which Turing complete computers, universal constructors, and replicators have all been built within the laws of that game.)

This is the seventh thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page.

The current focus is on improving the upper bound on {H_1} under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from {H_1 \leq 8} to {H_1 \leq 6}. Very recently, we have been able to exploit GEH more fully, leading to a promising new expansion of the sieve support region. The problem now reduces to the following:

Problem 1 Does there exist a (not necessarily convex) polytope {R \subset [0,2]^3} with quantities {0 \leq \varepsilon_1,\varepsilon_2,\varepsilon_3 \leq 1}, and a non-trivial square-integrable function {F: {\bf R}^3 \rightarrow {\bf R}} supported on {R} such that

  • {R + R \subset \{ (x,y,z) \in [0,4]^3: \min(x+y,y+z,z+x) \leq 2 \},}
  • {\int_0^\infty F(x,y,z)\ dx = 0} when {y+z \geq 1+\varepsilon_1};
  • {\int_0^\infty F(x,y,z)\ dy = 0} when {x+z \geq 1+\varepsilon_2};
  • {\int_0^\infty F(x,y,z)\ dz = 0} when {x+y \geq 1+\varepsilon_3};

and such that we have the inequality

\displaystyle  \int_{y+z \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y,z)\ dx)^2\ dy dz

\displaystyle + \int_{z+x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y,z)\ dy)^2\ dz dx

\displaystyle + \int_{x+y \leq 1-\varepsilon_3} (\int_{\bf R} F(x,y,z)\ dz)^2\ dx dy

\displaystyle  > 2 \int_R F(x,y,z)^2\ dx dy dz?

An affirmative answer to this question will imply {H_1 \leq 6} on GEH. We are “within two percent” of this claim; we cannot quite reach {2} yet, but have got as far as {1.962998}. However, we have not yet fully optimised {F} in the above problem. In particular, the simplex

\displaystyle  R = \{ (x,y,z) \in [0,2]^3: x+y+z \leq 3/2 \}

is now available, and should lead to some noticeable improvement in the numerology.
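To get a feel for the numerology, here is a small sketch (my own, not the project's optimisation code) evaluating both sides of the inequality in Problem 1 for the crude trial choice {F = 1} on this simplex, with {\varepsilon_1=\varepsilon_2=\varepsilon_3=1/2}; this choice makes the vanishing-marginal conditions vacuous (on the simplex, {y+z \geq 3/2} forces {x \leq 0}), but as the output shows it falls far short of the target, which is why the optimised piecewise-polynomial choices of {F} are needed.

```python
# Sketch: Monte Carlo evaluation of both sides of Problem 1 for the trial
# F = 1 on R = {x,y,z >= 0 : x+y+z <= 3/2} with eps_1 = eps_2 = eps_3 = 1/2.
import numpy as np
rng = np.random.default_rng(0)
eps = 0.5

# Marginal of F = 1 in x: int F dx = 3/2 - y - z on the region y + z <= 1/2.
y, z = rng.uniform(0.0, 1.0 - eps, size=(2, 10**6))
vals = np.where(y + z <= 1.0 - eps, (1.5 - y - z)**2, 0.0)
lhs = 3 * (1.0 - eps)**2 * vals.mean()   # three identical terms by symmetry

rhs = 2 * (1.5**3) / 6                   # 2 * int_R F^2 = twice the volume of R
print(lhs, rhs)                          # about 0.52 versus 1.125: F = 1 fails badly
```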

There is also a very slim chance that the twin prime conjecture is now provable on GEH. It would require an affirmative solution to the following problem:

Problem 2 Does there exist a (not necessarily convex) polytope {R \subset [0,2]^2} with quantities {0 \leq \varepsilon_1,\varepsilon_2 \leq 1}, and a non-trivial square-integrable function {F: {\bf R}^2 \rightarrow {\bf R}} supported on {R} such that

  • {R + R \subset \{ (x,y) \in [0,4]^2: \min(x,y) \leq 2 \}}

    \displaystyle  = [0,2] \times [0,4] \cup [0,4] \times [0,2],

  • {\int_0^\infty F(x,y)\ dx = 0} when {y \geq 1+\varepsilon_1};
  • {\int_0^\infty F(x,y)\ dy = 0} when {x \geq 1+\varepsilon_2};

and such that we have the inequality

\displaystyle  \int_{y \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y)\ dx)^2\ dy

\displaystyle + \int_{x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y)\ dy)^2\ dx

\displaystyle  > 2 \int_R F(x,y)^2\ dx dy?

We suspect that the answer to this question is negative, but have not formally ruled it out yet.

For the rest of this post, I will justify why positive answers to these sorts of variational problems are sufficient to get bounds on {H_1} (or more generally {H_m}).


This is the sixth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage).

The current focus is on improving the upper bound on {H_1} under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from {H_1 \leq 8} to {H_1 \leq 6}, which looks to be the limit of the method (see this previous comment for a semi-rigorous reason as to why {H_1 \leq 4} is not possible with this method). With the most general Selberg sieve available, the problem reduces to the following three-dimensional variational one:

Problem 1 Does there exist a (not necessarily convex) polytope {R \subset [0,1]^3} with quantities {0 \leq \varepsilon_1,\varepsilon_2,\varepsilon_3 \leq 1}, and a non-trivial square-integrable function {F: {\bf R}^3 \rightarrow {\bf R}} supported on {R} such that

  • {R + R \subset \{ (x,y,z) \in [0,2]^3: \min(x+y,y+z,z+x) \leq 2 \},}
  • {\int_0^\infty F(x,y,z)\ dx = 0} when {y+z \geq 1+\varepsilon_1};
  • {\int_0^\infty F(x,y,z)\ dy = 0} when {x+z \geq 1+\varepsilon_2};
  • {\int_0^\infty F(x,y,z)\ dz = 0} when {x+y \geq 1+\varepsilon_3};

and such that we have the inequality

\displaystyle  \int_{y+z \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y,z)\ dx)^2\ dy dz

\displaystyle + \int_{z+x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y,z)\ dy)^2\ dz dx

\displaystyle + \int_{x+y \leq 1-\varepsilon_3} (\int_{\bf R} F(x,y,z)\ dz)^2\ dx dy

\displaystyle  > 2 \int_R F(x,y,z)^2\ dx dy dz?

(Initially it was assumed that {R} was convex, but we have now realised that this is not necessary.)

An affirmative answer to this question will imply {H_1 \leq 6} on GEH. We are “within almost two percent” of this claim; we cannot quite reach {2} yet, but have got as far as {1.959633}. However, we have not yet fully optimised {F} in the above problem.

The most promising route so far is to take the symmetric polytope

\displaystyle  R = \{ (x,y,z) \in [0,1]^3: x+y+z \leq 3/2 \}

with {F} symmetric as well, and {\varepsilon_1=\varepsilon_2=\varepsilon_3=\varepsilon} (we suspect that the optimal {\varepsilon} will be roughly {1/6}). (However, it is certainly worth also taking a look at easier model problems, such as the polytope {{\cal R}'_3 := \{ (x,y,z) \in [0,1]^3: x+y,y+z,z+x \leq 1\}}, which has no vanishing marginal conditions to contend with; more recently we have been looking at the non-convex polytope {R = \{x+y,x+z \leq 1 \} \cup \{ x+y,y+z \leq 1 \} \cup \{ x+z,y+z \leq 1\}}.) Some further details of this particular case are given below the fold.

There should still be some progress to be made in the other regimes of interest – the unconditional bound on {H_1} (currently at {270}), and on any further progress in asymptotic bounds for {H_m} for larger {m} – but the current focus is certainly on the bound on {H_1} on GEH, as we seem to be tantalisingly close to an optimal result here.


This is the fifth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage). In particular, the upper bound for {H_1} has been shaved a little from {272} to {270}, and we have very recently achieved the bound {H_1 \leq 8} on the generalised Elliott-Halberstam conjecture GEH, formulated as Conjecture 1 of this paper of Bombieri, Friedlander, and Iwaniec. We also have explicit bounds for {H_m} for {m \leq 5}, both with and without the assumption of the Elliott-Halberstam conjecture, as well as slightly sharper asymptotics for the upper bound for {H_m} as {m \rightarrow \infty}.

The basic strategy for bounding {H_m} still follows the general paradigm first laid out by Goldston, Pintz, Yildirim: given an admissible {k}-tuple {(h_1,\dots,h_k)}, one needs to locate a non-negative sieve weight {\nu: {\bf Z} \rightarrow {\bf R}^+}, supported on an interval {[x,2x]} for a large {x}, such that the ratio

\displaystyle  \frac{\sum_{i=1}^k \sum_n \nu(n) 1_{n+h_i \hbox{ prime}}}{\sum_n \nu(n)} \ \ \ \ \ (1)

is asymptotically larger than {m} as {x \rightarrow \infty}; this will show that {H_m \leq h_k-h_1}. Thus one wants to locate a sieve weight {\nu} for which one has good lower bounds on the numerator and good upper bounds on the denominator.

One can modify this paradigm slightly, for instance by adding the additional term {\sum_n \nu(n) 1_{n+h_1,\dots,n+h_k \hbox{ composite}}} to the numerator, or by subtracting the term {\sum_n \nu(n) 1_{n+h_1,n+h_k \hbox{ prime}}} from the numerator (which allows one to reduce the bound {h_k-h_1} to {\max(h_k-h_2,h_{k-1}-h_1)}); however, the numerical impact of these tweaks has proven to be negligible thus far.

Despite a number of experiments with other sieves, we are still relying primarily on the Selberg sieve

\displaystyle  \nu(n) := 1_{n=b\ (W)} 1_{[x,2x]}(n) \lambda(n)^2

where {\lambda(n)} is the divisor sum

\displaystyle  \lambda(n) := \sum_{d_1|n+h_1, \dots, d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R}, \dots, \frac{\log d_k}{\log R})

with {R = x^{\theta/2}}, where {\theta} is the level of distribution ({\theta=1/2-} if relying on Bombieri-Vinogradov, {\theta=1-} if assuming Elliott-Halberstam, and (in principle) {\theta = \frac{1}{2} + \frac{13}{540}-} if using Polymath8a technology), and {f: [0,+\infty)^k \rightarrow {\bf R}} is a smooth, compactly supported function. Most of the progress has come by enlarging the class of cutoff functions {f} one is permitted to use.

The baseline bounds for the numerator and denominator in (1) (as established for instance in this previous post) are as follows. If {f} is supported on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k < 1 \},

and we define the mixed partial derivative {F: [0,+\infty)^k \rightarrow {\bf R}} by

\displaystyle  F(t_1,\dots,t_k) = \frac{\partial^k}{\partial t_1 \dots \partial t_k} f(t_1,\dots,t_k)

then the denominator in (1) is

\displaystyle  \frac{Bx}{W} (I_k(F) + o(1)) \ \ \ \ \ (2)

where

\displaystyle  B := (\frac{W}{\phi(W) \log R})^k

and

\displaystyle  I_k(F) := \int_{[0,+\infty)^k} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k.

Similarly, the numerator of (1) is

\displaystyle  \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_k(F) + o(1)) \ \ \ \ \ (3)

where

\displaystyle  J_k^{(m)}(F) := \int_{[0,+\infty)^{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

Thus, if we let {M_k} be the supremum of the ratio

\displaystyle  \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

whenever {F} is supported on {{\cal R}_k} and is non-vanishing, then one can prove {H_m \leq h_k - h_1} whenever

\displaystyle  M_k > \frac{2m}{\theta}.
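To illustrate how lower bounds on {M_k} are computed in practice, here is a sketch (my own illustration) for the special subclass of functions {F(t_1,\dots,t_k) = P(t_1+\dots+t_k)} with {P} polynomial, which is essentially the class appearing in the original Goldston-Pintz-Yildirim analysis; Maynard's gain comes precisely from leaving this class, so the number printed below undershoots the true {M_k}. For such {F}, writing {Q} for an antiderivative of {P}, one has {\sum_{m=1}^k J_k^{(m)}(F) = k \int_0^1 \frac{s^{k-2}}{(k-2)!} (Q(1)-Q(s))^2\ ds} and {I_k(F) = \int_0^1 \frac{u^{k-1}}{(k-1)!} P(u)^2\ du}, and maximising the ratio over the coefficients of {P} is a generalised eigenvalue problem:

```python
# Sketch: lower bound for M_k from the subclass F = P(t_1 + ... + t_k) with
# P polynomial (this undershoots M_k; the project uses much richer classes).
import numpy as np
from numpy.polynomial import Polynomial
from scipy.linalg import eigh
from math import factorial

k, deg = 3, 6
u = Polynomial([0.0, 1.0])
basis = [u**j for j in range(deg + 1)]            # P_j(u) = u^j

def int01(p):                                     # exact integral of p over [0,1]
    q = p.integ()
    return q(1.0) - q(0.0)

A = np.empty((deg + 1, deg + 1))                  # numerator form, sum of J_k^{(m)}
B = np.empty((deg + 1, deg + 1))                  # denominator form, I_k
for i, Pi in enumerate(basis):
    Qi = Pi.integ()
    for j, Pj in enumerate(basis):
        Qj = Pj.integ()
        A[i, j] = k / factorial(k - 2) * int01(
            u**(k - 2) * (float(Qi(1.0)) - Qi) * (float(Qj(1.0)) - Qj))
        B[i, j] = int01(u**(k - 1) * Pi * Pj) / factorial(k - 1)

lam = eigh(A, B, eigvals_only=True)[-1]           # largest generalised eigenvalue
print("lower bound for M_%d from this subclass: %.6f" % (k, lam))
```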

We can improve this baseline in a number of ways. Firstly, with regards to the denominator in (1), if one upgrades the Elliott-Halberstam hypothesis {EH[\theta]} to the generalised Elliott-Halberstam hypothesis {GEH[\theta]} (currently known for {\theta < 1/2}, thanks to Motohashi, but conjectured for {\theta < 1}), the asymptotic (2) holds under the more general hypothesis that {F} is supported in a polytope {R}, as long as {R} obeys the inclusion

\displaystyle  R + R \subset \bigcup_{m=1}^k \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 2; t_m < 2/\theta \} \cup \frac{2}{\theta} \cdot {\cal R}_k; \ \ \ \ \ (4)

examples of polytopes {R} obeying this constraint include the modified simplex

\displaystyle  {\cal R}'_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 1

\displaystyle \hbox{ for all } 1 \leq m \leq k \},

the prism

\displaystyle  {\cal R}_{k-1} \times [0, 1/\theta)

the dilated simplex

\displaystyle  \frac{1}{\theta} \cdot {\cal R}_k

and the truncated simplex

\displaystyle  \frac{k}{k-1} \cdot {\cal R}_k \cap [0,1/\theta)^k.

See this previous post for a proof of these claims.

With regards to the numerator, the asymptotic (3) is valid whenever, for each {1 \leq m \leq k}, the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {{\cal R}_{k-1}}. This is automatic if {F} is supported on {{\cal R}_k}, or on the slightly larger region {{\cal R}'_k}, but is an additional constraint when {F} is supported on one of the other polytopes {R} mentioned above.

More recently, we have obtained a more flexible version of the above asymptotic: if the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}} for some {0 < \varepsilon < 1}, then the numerator of (1) has a lower bound of

\displaystyle  \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_{k,\varepsilon}(F) + o(1))

where

\displaystyle  J_{k,\varepsilon}^{(m)}(F) := \int_{(1-\varepsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

A proof is given here. Putting all this together, we can conclude

Theorem 1 Suppose we can find {0 \leq \varepsilon < 1} and a function {F} supported on a polytope {R} obeying (4), not identically zero and with all marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanishing outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}}, and with

\displaystyle  \frac{\sum_{m=1}^k J_{k,\varepsilon}^{(m)}(F)}{I_k(F)} > \frac{2m}{\theta}.

Then {GEH[\theta]} implies {H_m \leq h_k-h_1}.

In principle, this very flexible criterion for upper bounding {H_m} should lead to better bounds than before, and in particular we have now established {H_1 \leq 8} on GEH.

Another promising direction is to try to improve the analysis at medium {k} (more specifically, in the regime {k \sim 50}), which is where we are currently at without EH or GEH through numerical quadratic programming. Right now we are only using {\theta=1/2} and using the baseline {M_k} analysis, basically for two reasons:

  • We do not have good numerical formulae for integrating polynomials on any region more complicated than the simplex {{\cal R}_k} in medium dimension.
  • The estimates {MPZ^{(i)}[\varpi,\delta]} produced by Polymath8a involve a {\delta} parameter, which introduces additional restrictions on the support of {F} (conservatively, it restricts {F} to {[0,\delta']^k} where {\delta' := \frac{\delta}{1/4+\varpi}} and {\theta = 1/2 + 2 \varpi}; it should be possible to be looser than this (as was done in Polymath8a) but this has not been fully explored yet). This then triggers the previous obstacle of having to integrate on something other than a simplex.

However, these look like solvable problems, and so I would expect that further unconditional improvement for {H_1} should be possible.

I’m encountering a sporadic bug over the past few months with the way WordPress renders or displays its LaTeX images on this blog (and occasionally on other WordPress blogs).  On most computers, it seems to work fine, but on some computers, the sizes of images are occasionally way off, leading to extremely distorted and fairly unreadable versions of the images appearing in blog posts and comments.  A sample screenshot (with accompanying HTML source), supplied to me by a reader, can be found here (in which an image whose dimensions should be 321 x 59 is instead being displayed at 552 x 20).  Is anyone else encountering this issue?  The problem sometimes can be resolved by refreshing the page, but not always, so it is a bit unclear where the problem is coming from and how one might mitigate it.  (If nothing else, I can add it to the bug collection post, once it can be reliably replicated.)

This is the fourth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}.
  • (Polymath8b, tentative) {H_1 \leq 272}. Assuming Elliott-Halberstam, {H_2 \leq 272}.
  • (Polymath8b, tentative) {H_2 \leq 429{,}822}. Assuming Elliott-Halberstam, {H_4 \leq 493{,}408}.
  • (Polymath8b, tentative) {H_3 \leq 26{,}682{,}014}. (Presumably a comparable bound also holds for {H_6} on Elliott-Halberstam, but this has not been computed.)
  • (Polymath8b) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}. Assuming Elliott-Halberstam, {H_m \ll m e^{2m}} for sufficiently large {m}.

While the {H_1} bound on the Elliott-Halberstam conjecture has not improved since the start of the Polymath8b project, there is reason to hope that it will soon fall, hopefully to {8}. This is because we have begun to exploit more fully the fact that when using “multidimensional Selberg-GPY” sieves of the form

\displaystyle  \nu(n) := \sigma_{f,k}(n)^2

with

\displaystyle  \sigma_{f,k}(n) := \sum_{d_1|n+h_1,\dots,d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R},\dots,\frac{\log d_k}{\log R}),

where {R := x^{\theta/2}}, it is not necessary for the smooth function {f: [0,+\infty)^k \rightarrow {\bf R}} to be supported on the simplex

\displaystyle {\cal R}_k := \{ (t_1,\dots,t_k)\in [0,1]^k: t_1+\dots+t_k \leq 1\},

but can in fact be allowed to range on larger sets. First of all, {f} may instead be supported on the slightly larger polytope

\displaystyle {\cal R}'_k := \{ (t_1,\dots,t_k)\in [0,1]^k: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 1

\displaystyle  \hbox{ for all } j=1,\dots,k\}.

However, it turns out that more is true: given a sufficiently general version of the Elliott-Halberstam conjecture {EH[\theta]} at the given value of {\theta}, one may work with functions {f} supported on more general domains {R}, so long as the sumset {R+R := \{ t+t': t,t'\in R\}} is contained in the non-convex region

\displaystyle  \bigcup_{j=1}^k \{ (t_1,\dots,t_k)\in [0,\frac{2}{\theta}]^k: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 2 \} \cup \frac{2}{\theta} \cdot {\cal R}_k, \ \ \ \ \ (1)

and also provided that the restriction

\displaystyle  (t_1,\dots,t_{j-1},t_{j+1},\dots,t_k) \mapsto f(t_1,\dots,t_{j-1},0,t_{j+1},\dots,t_k) \ \ \ \ \ (2)

is supported on the simplex

\displaystyle {\cal R}_{k-1} := \{ (t_1,\dots,t_{j-1},t_{j+1},\dots,t_k)\in [0,1]^{k-1}: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 1\}.

More precisely, if {f} is a smooth function, not identically zero, with the above properties for some {R}, and the ratio

\displaystyle  \sum_{j=1}^k \int_{{\cal R}_{k-1}} f_{1,\dots,j-1,j+1,\dots,k}(t_1,\dots,t_{j-1},0,t_{j+1},\dots,t_k)^2 \ \ \ \ \ (3)

\displaystyle dt_1 \dots dt_{j-1} dt_{j+1} \dots dt_k

\displaystyle  / \int_R f_{1,\dots,k}^2(t_1,\dots,t_k)\ dt_1 \dots dt_k

is larger than {\frac{2m}{\theta}}, then the claim {DHL[k,m+1]} holds (assuming {EH[\theta]}), and in particular {H_m \leq H(k)}.

I’ll explain why one can do this below the fold. Taking this for granted, we can rewrite this criterion in terms of the mixed derivative {F := f_{1,\dots,k}}, the upshot being that if one can find a smooth function {F} supported on {R} that obeys the vanishing marginal conditions

\displaystyle  \int F( t_1,\dots,t_k )\ dt_j = 0

whenever {1 \leq j \leq k} and {t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k > 1}, and the ratio

\displaystyle  \frac{\sum_{j=1}^k J_k^{(j)}(F)}{I_k(F)} \ \ \ \ \ (4)

is larger than {\frac{2m}{\theta}}, where

\displaystyle  I_k(F) := \int_R F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k

and

\displaystyle  J_k^{(j)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1/\theta} F(t_1,\dots,t_k)\ dt_j)^2 dt_1 \dots dt_{j-1} dt_{j+1} \dots dt_k

then {DHL[k,m+1]} holds. (To equate these two formulations, it is convenient to assume that {R} is a downset, in the sense that whenever {(t_1,\dots,t_k) \in R}, the entire box {[0,t_1] \times \dots \times [0,t_k]} lies in {R}, but one can easily enlarge {R} to be a downset without destroying the containment of {R+R} in the non-convex region (1).) One initially requires {F} to be smooth, but a limiting argument allows one to relax to bounded measurable {F}. (To approximate a rough {F} by a smooth {F} while retaining the required moment conditions, one can first apply a slight dilation and translation so that the marginals of {F} are supported on a slightly smaller version of the simplex {{\cal R}_{k-1}}, and then convolve by a smooth approximation to the identity to make {F} smooth, while keeping the marginals supported on {{\cal R}_{k-1}}.)

We are now exploring various choices of {R} to work with, including the prism

\displaystyle  \{ (t_1,\dots,t_k) \in [0,1/\theta]^k: t_1+\dots+t_{k-1} \leq 1 \}

and the symmetric region

\displaystyle  \{ (t_1,\dots,t_k) \in [0,1/\theta]^k: t_1+\dots+t_k \leq \frac{k}{k-1} \}.

By suitably subdividing these regions into polytopes, and working with piecewise polynomial functions {F} that are polynomial of a specified degree on each subpolytope, one can phrase the problem of optimising (4) as a quadratic program, which we have managed to work with for {k=3}. Extending this program to {k=4}, there is a decent chance that we will be able to obtain {DHL[4,2]} on EH.
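As a cheap sanity check on candidate regions (my own sketch), one can also test the containment of {R+R} in the region (1) by random sampling; here for the prism above with {\theta = 1}. (For the prism the containment can be seen directly, since the first {k-1} coordinates of a sum of two points each total at most {2}, but the same test can be pointed at less obvious candidates.)

```python
# Sketch: randomized test that R + R lies in the region (1), for the prism
# R = {t in [0, 1/theta]^k : t_1 + ... + t_{k-1} <= 1} with theta = 1.
import numpy as np
rng = np.random.default_rng(2)
k, theta, n = 4, 1.0, 10**5

def sample_prism(m):                        # rejection sampling from the prism
    t = rng.uniform(0.0, 1.0/theta, size=(12*m, k))
    t = t[t[:, :k-1].sum(axis=1) <= 1.0]
    return t[:m]

s = sample_prism(n) + sample_prism(n)       # random points of R + R
# Coordinates of s automatically lie in [0, 2/theta], so membership in (1)
# reduces to: some leave-one-out coordinate sum is <= 2, or the total is <= 2/theta.
loo = s.sum(axis=1, keepdims=True) - s
ok = np.any(loo <= 2.0, axis=1) | (s.sum(axis=1) <= 2.0/theta)
print("violations found:", int(np.count_nonzero(~ok)))   # expect 0
```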

We have also been able to numerically optimise {M_k} quite accurately for medium values of {k} (e.g. {k \sim 50}), which has led to improved values of {H_1} without EH. For large {k}, we now also have the asymptotic {M_k=\log k - O(1)} with explicit error terms (details here) which have allowed us to slightly improve the {m=2} numerology, and also to get explicit {m=3} numerology for the first time.


Mertens’ theorems are a set of classical estimates concerning the asymptotic distribution of the prime numbers:

Theorem 1 (Mertens’ theorems) In the asymptotic limit {x \rightarrow \infty}, we have

\displaystyle  \sum_{p\leq x} \frac{\log p}{p} = \log x + O(1), \ \ \ \ \ (1)

\displaystyle  \sum_{p\leq x} \frac{1}{p} = \log \log x + O(1), \ \ \ \ \ (2)

and

\displaystyle  \sum_{p\leq x} \log(1-\frac{1}{p}) = -\log \log x - \gamma + o(1) \ \ \ \ \ (3)

where {\gamma} is the Euler-Mascheroni constant, defined by requiring that

\displaystyle  1 + \frac{1}{2} + \ldots + \frac{1}{n} = \log n + \gamma + o(1) \ \ \ \ \ (4)

in the limit {n \rightarrow \infty}.

The third theorem (3) is usually stated in exponentiated form

\displaystyle  \prod_{p \leq x} (1-\frac{1}{p}) = \frac{e^{-\gamma}+o(1)}{\log x},

but in the logarithmic form (3) we see that it is strictly stronger than (2), in view of the asymptotic {\log(1-\frac{1}{p}) = -\frac{1}{p} + O(\frac{1}{p^2})}.
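As a quick numerical illustration of the exponentiated form (a sketch using sympy's prime iterator; note the convergence is slow, as the {o(1)} error decays only logarithmically):

```python
# Numerical sketch of Mertens' third theorem in exponentiated form:
# prod_{p <= x} (1 - 1/p) * log x should tend to exp(-gamma) ~ 0.561459.
import math
from sympy import primerange

gamma = 0.57721566490153286
for x in (10**3, 10**4, 10**5, 10**6):
    prod = 1.0
    for p in primerange(2, x + 1):
        prod *= 1.0 - 1.0 / p
    print(x, prod * math.log(x), "->", math.exp(-gamma))
```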

Remarkably, these theorems can be proven without the assistance of the prime number theorem

\displaystyle  \sum_{p \leq x} 1 = \frac{x}{\log x} + o( \frac{x}{\log x} ),

which was proven about two decades after Mertens’ work. (But one can certainly use versions of the prime number theorem with good error term, together with summation by parts, to obtain good estimates on the various errors in Mertens’ theorems.) Roughly speaking, the reason for this is that Mertens’ theorems only require control on the Riemann zeta function {\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}} in the neighbourhood of the pole at {s=1}, whereas (as discussed in this previous post) the prime number theorem requires control on the zeta function on (a neighbourhood of) the line {\{ 1+it: t \in {\bf R} \}}. Specifically, Mertens’ theorem is ultimately deduced from the Euler product formula

\displaystyle  \zeta(s) = \prod_p (1-\frac{1}{p^s})^{-1}, \ \ \ \ \ (5)

valid in the region {\hbox{Re}(s) > 1} (which is ultimately a Fourier-Dirichlet transform of the fundamental theorem of arithmetic), and the following crude asymptotics:

Proposition 2 (Simple pole) For {s} sufficiently close to {1} with {\hbox{Re}(s) > 1}, we have

\displaystyle  \zeta(s) = \frac{1}{s-1} + O(1) \ \ \ \ \ (6)

and

\displaystyle  \zeta'(s) = \frac{-1}{(s-1)^2} + O(1).

Proof: For {s} as in the proposition, we have {\frac{1}{n^s} = \frac{1}{t^s} + O(\frac{1}{n^2})} for any natural number {n} and {n \leq t \leq n+1}, and hence

\displaystyle  \frac{1}{n^s} = \int_n^{n+1} \frac{1}{t^s}\ dt + O( \frac{1}{n^2} ).

Summing in {n} and using the identity {\int_1^\infty \frac{1}{t^s}\ dt = \frac{1}{s-1}}, we obtain the first claim. Similarly, we have

\displaystyle  \frac{-\log n}{n^s} = \int_n^{n+1} \frac{-\log t}{t^s}\ dt + O( \frac{\log n}{n^2} ),

and by summing in {n} and using the identity {\int_1^\infty \frac{-\log t}{t^s}\ dt = \frac{-1}{(s-1)^2}} (the derivative of the previous identity) we obtain the claim. \Box

The first two of Mertens’ theorems (1), (2) are relatively easy to prove, and imply the third theorem (3) except with {\gamma} replaced by an unspecified absolute constant. To get the specific constant {\gamma} requires a little bit of additional effort. From (4), one might expect that the appearance of {\gamma} arises from the refinement

\displaystyle  \zeta(s) = \frac{1}{s-1} + \gamma + O(|s-1|) \ \ \ \ \ (7)

that one can obtain to (6). However, it turns out that the connection is not so much with the zeta function, but with the Gamma function, and specifically with the identity {\Gamma'(1) = - \gamma} (which is of course related to (7) through the functional equation for zeta, but can be proven without any reference to zeta functions). More specifically, we have the following asymptotic for the exponential integral:

Proposition 3 (Exponential integral asymptotics) For sufficiently small {\epsilon}, one has

\displaystyle  \int_\epsilon^\infty \frac{e^{-t}}{t}\ dt = \log \frac{1}{\epsilon} - \gamma + O(\epsilon).

A routine integration by parts shows that this asymptotic is equivalent to the identity

\displaystyle  \int_0^\infty e^{-t} \log t\ dt = -\gamma

which is the identity {\Gamma'(1)=-\gamma} mentioned previously.
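(Numerically, the integral in Proposition 3 is the classical exponential integral {E_1(\epsilon)}, available in scipy as special.exp1, so the asymptotic is easy to sanity-check:)

```python
# Check of Proposition 3: E_1(eps) = int_eps^infty e^{-t}/t dt versus
# log(1/eps) - gamma; the difference should shrink linearly in eps.
import numpy as np
from scipy.special import exp1

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(eps, exp1(eps) - (np.log(1.0/eps) - np.euler_gamma))
```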

Proof: We start by using the identity {\frac{1}{i} = \int_0^1 x^{i-1}\ dx} to express the harmonic series {H_n := 1+\frac{1}{2}+\ldots+\frac{1}{n}} as

\displaystyle  H_n = \int_0^1 (1 + x + \ldots + x^{n-1})\ dx

or on summing the geometric series

\displaystyle  H_n = \int_0^1 \frac{1-x^n}{1-x}\ dx.

Since {\int_0^{1-1/n} \frac{dx}{1-x} = \log n}, we thus have

\displaystyle  H_n - \log n = \int_0^1 \frac{1_{[1-1/n,1]}(x) - x^n}{1-x}\ dx;

making the change of variables {x = 1-\frac{t}{n}}, this becomes

\displaystyle  H_n - \log n = \int_0^n \frac{1_{[0,1]}(t) - (1-\frac{t}{n})^n}{t}\ dt.

As {n \rightarrow \infty}, {\frac{1_{[0,1]}(t) - (1-\frac{t}{n})^n}{t}} converges pointwise to {\frac{1_{[0,1]}(t) - e^{-t}}{t}} and is pointwise dominated by {O( e^{-t} )}. Taking limits as {n \rightarrow \infty} using dominated convergence, we conclude that

\displaystyle  \gamma = \int_0^\infty \frac{1_{[0,1]}(t) - e^{-t}}{t}\ dt,

or equivalently

\displaystyle  \int_0^\infty \frac{e^{-t} - 1_{[0,\epsilon]}(t)}{t}\ dt = \log \frac{1}{\epsilon} - \gamma.

The claim then follows by bounding the {\int_0^\epsilon} portion of the integral on the left-hand side. \Box

Below the fold I would like to record how Proposition 2 and Proposition 3 imply Theorem 1; the computations are utterly standard, and can be found in most analytic number theory texts, but I wanted to write them down for my own benefit (I always keep forgetting, in particular, how the third of Mertens’ theorems is proven).


This is the third thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}.
  • (Polymath8b, tentative) {H_1 \leq 330}. Assuming Elliott-Halberstam, {H_2 \leq 330}.
  • (Polymath8b, tentative) {H_2 \leq 484{,}126}. Assuming Elliott-Halberstam, {H_4 \leq 493{,}408}.
  • (Polymath8b) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}. Assuming Elliott-Halberstam, {H_m \ll e^{2m} m \log m} for sufficiently large {m}.

Much of the current focus of the Polymath8b project is on the quantity

\displaystyle M_k = M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

where {F} ranges over square-integrable functions on the simplex

\displaystyle {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}

with {I_k, J_k^{(m)}} being the quadratic forms

\displaystyle I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k

and

\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2

\displaystyle dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.

It was shown by Maynard that one has {H_m \leq H(k)} whenever {M_k > 4m}, where {H(k)} is the narrowest diameter of an admissible {k}-tuple. As discussed in the previous post, we have slight improvements to this implication, but they are currently difficult to implement, due to the need to perform high-dimensional integration. The quantity {M_k} does seem however to be close to the theoretical limit of what the Selberg sieve method can achieve for implications of this type (at the Bombieri-Vinogradov level of distribution, at least); it seems of interest to explore more general sieves, although we have not yet made much progress in this direction.

The best asymptotic bounds for {M_k} we have are

\displaystyle \log k - \log\log\log k + O(1) \leq M_k \leq \frac{k}{k-1} \log k \ \ \ \ \ (1)

 

which we prove below the fold. The upper bound holds for all {k > 1}; the lower bound is only valid for sufficiently large {k}, and gives the upper bound {H_m \ll e^{2m} \log m} on Elliott-Halberstam.

For small {k}, the upper bound is quite competitive, for instance it provides the upper bound in the best values

\displaystyle 1.845 \leq M_4 \leq 1.848

and

\displaystyle 2.001162 \leq M_5 \leq 2.011797

we have for {M_4} and {M_5}. The situation is a little less clear for medium values of {k}, for instance we have

\displaystyle 3.95608 \leq M_{59} \leq 4.148

and so it is not yet clear whether {M_{59} > 4} (which would imply {H_1 \leq 300}). See this wiki page for some further upper and lower bounds on {M_k}.

The best lower bounds are not obtained through the asymptotic analysis, but rather through quadratic programming (extending the original method of Maynard). This has given significant numerical improvements to our best bounds (in particular lowering the {H_1} bound from {600} to {330}), but we have not yet been able to combine this method with the other potential improvements (enlarging the simplex, using MPZ distributional estimates, and exploiting upper bounds on two-point correlations) due to the computational difficulty involved.


(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

  • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
  • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
  • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.

With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly well suited for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.


This is the second thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) {H_1 \leq 600}.
  • (Polymath8b, tentative) {H_2 \leq 484,276}.
  • (Polymath8b, tentative) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}.
  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}, {H_2 \leq 600}, and {H_m \ll m^3 e^{2m}}.

Following the strategy of Maynard, the bounds on {H_m} proceed by combining four ingredients:

  1. Distribution estimates {EH[\theta]} or {MPZ[\varpi,\delta]} for the primes (or related objects);
  2. Bounds for the minimal diameter {H(k)} of an admissible {k}-tuple;
  3. Lower bounds for the optimal value {M_k} to a certain variational problem;
  4. Sieve-theoretic arguments to convert the previous three ingredients into a bound on {H_m}.

Accordingly, the most natural routes to improve the bounds on {H_m} are to improve one or more of the above four ingredients.

Ingredient 1 was studied intensively in Polymath8a. The following results are known or conjectured (see the Polymath8a paper for notation and proofs):

  • (Bombieri-Vinogradov) {EH[\theta]} is true for all {0 < \theta < 1/2}.
  • (Polymath8a) {MPZ[\varpi,\delta]} is true for {\frac{600}{7} \varpi + \frac{180}{7}\delta < 1}.
  • (Polymath8a, tentative) {MPZ[\varpi,\delta]} is true for {\frac{1080}{13} \varpi + \frac{330}{13} \delta < 1}.
  • (Elliott-Halberstam conjecture) {EH[\theta]} is true for all {0 < \theta < 1}.

Ingredient 2 was also studied intensively in Polymath8a, and is more or less a solved problem for the values of {k} of interest (with exact values of {H(k)} for {k \leq 342}, and quite good upper bounds for {H(k)} for {k < 5000}, available at this page). So the main focus currently is on improving Ingredients 3 and 4.

For Ingredient 3, the basic variational problem is to understand the quantity

\displaystyle  M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

for {F: {\cal R}_k \rightarrow {\bf R}} bounded measurable functions, not identically zero, on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}

with {I_k, J_k^{(m)}} being the quadratic forms

\displaystyle  I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k

and

\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2 dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.

Equivalently, one has

\displaystyle  M_k({\cal R}_k) := \sup_F \frac{\int_{{\cal R}_k} F {\cal L}_k F}{\int_{{\cal R}_k} F^2}

where {{\cal L}_k: L^2({\cal R}_k) \rightarrow L^2({\cal R}_k)} is the positive semi-definite bounded self-adjoint operator

\displaystyle  {\cal L}_k F(t_1,\ldots,t_k) = \sum_{m=1}^k \int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_{m-1},s,t_{m+1},\ldots,t_k)\ ds,

so {M_k} is the operator norm of {{\cal L}_k}. Another interpretation of {M_k({\cal R}_k)} is that the probability that a rook moving randomly in the unit cube {[0,1]^k} stays in the simplex {{\cal R}_k} for {n} moves is asymptotically {(M_k({\cal R}_k)/k + o(1))^n}.
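Here is a quick Monte Carlo sketch of that last interpretation (my own illustration, using a simple go-with-the-winners scheme): propagate a population of random rooks, kill those that leave the simplex, and resample the survivors to keep the population size fixed; the per-step survival fraction then stabilises near {M_k({\cal R}_k)/k}.

```python
# Sketch: estimate M_k/k as the per-step survival rate of a random rook in
# the simplex R_k, resampling survivors to keep the population size fixed.
import numpy as np
rng = np.random.default_rng(1)
k, pop, steps = 3, 200_000, 30

t = rng.dirichlet(np.ones(k + 1), size=pop)[:, :k]    # uniform start in R_k
for step in range(steps):
    moved = rng.integers(0, k, size=len(t))           # coordinate the rook moves in
    t[np.arange(len(t)), moved] = rng.uniform(0.0, 1.0, size=len(t))
    alive = t[t.sum(axis=1) <= 1.0]                   # kill rooks that left R_k
    frac = len(alive) / len(t)
    t = alive[rng.integers(0, len(alive), size=pop)]  # resample back to full size
    if step >= steps - 3:                             # report after mixing
        print(step, frac, "~ M_k/k")
```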

We now have a fairly good asymptotic understanding of {M_k({\cal R}_k)}, with the bounds

\displaystyle  \log k - 2 \log\log k -2 \leq M_k({\cal R}_k) \leq \log k + \log\log k + 2

holding for sufficiently large {k}. There is however still room to tighten the bounds on {M_k({\cal R}_k)} for small {k}; I’ll summarise some of the ideas discussed so far below the fold.

For Ingredient 4, the basic tool is this:

Theorem 1 (Maynard) If {EH[\theta]} is true and {M_k({\cal R}_k) > \frac{2m}{\theta}}, then {H_m \leq H(k)}.

Thus, for instance, it is known that {M_{105} > 4} and {H(105)=600}, and this together with the Bombieri-Vinogradov inequality gives {H_1\leq 600}. This result is proven in Maynard’s paper and an alternate proof is also given in the previous blog post.

We have a number of ways to relax the hypotheses of this result, which we also summarise below the fold.

