
Roth’s theorem on arithmetic progressions asserts that every subset of the integers {{\bf Z}} of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:

Theorem 1 (Roth’s theorem) Let {G = (G,+)} be a compact abelian group, with Haar probability measure {\mu}, which is {2}-divisible (i.e. the map {x \mapsto 2x} is surjective) and let {A} be a measurable subset of {G} with {\mu(A) \geq \alpha} for some {0 < \alpha < 1}. Then we have

\displaystyle  \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) \gg_\alpha 1,

where {X \gg_\alpha Y} denotes the bound {X \geq c_\alpha Y} for some {c_\alpha > 0} depending only on {\alpha}.

This theorem is usually formulated in the case that {G} is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group {G = {\bf Z}/N{\bf Z}} of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of {2}-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant {c_\alpha} on {\alpha}, but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the {2}-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the {2r} shift in that case.

We can deduce Theorem 1 from the following more general Khintchine-type statement. Let {\hat G} denote the Pontryagin dual of a compact abelian group {G}, that is to say the set of all continuous homomorphisms {\xi: x \mapsto \xi \cdot x} from {G} to the (additive) unit circle {{\bf R}/{\bf Z}}. Thus {\hat G} is a discrete abelian group, and functions {f \in L^2(G)} have a Fourier transform {\hat f \in \ell^2(\hat G)} defined by

\displaystyle  \hat f(\xi) := \int_G f(x) e^{-2\pi i \xi \cdot x}\ d\mu(x).

If {G} is {2}-divisible, then {\hat G} is {2}-torsion-free in the sense that the map {\xi \mapsto 2 \xi} is injective. For any finite set {S \subset \hat G} and any radius {\rho>0}, define the Bohr set

\displaystyle  B(S,\rho) := \{ x \in G: \sup_{\xi \in S} \| \xi \cdot x \|_{{\bf R}/{\bf Z}} < \rho \}

where {\|\theta\|_{{\bf R}/{\bf Z}}} denotes the distance of {\theta} to the nearest integer. We refer to the cardinality {|S|} of {S} as the rank of the Bohr set. We record a simple volume bound on Bohr sets:

Lemma 2 (Volume packing bound) Let {G} be a compact abelian group with Haar probability measure {\mu}. For any Bohr set {B(S,\rho)}, we have

\displaystyle  \mu( B( S, \rho ) ) \gg_{|S|, \rho} 1.

Proof: We can cover the torus {({\bf R}/{\bf Z})^S} by {O_{|S|,\rho}(1)} translates {\theta+Q} of the cube {Q := \{ (\theta_\xi)_{\xi \in S} \in ({\bf R}/{\bf Z})^S: \sup_{\xi \in S} \|\theta_\xi\|_{{\bf R}/{\bf Z}} < \rho/2 \}}. Then the sets {\{ x \in G: (\xi \cdot x)_{\xi \in S} \in \theta + Q \}} form a cover of {G}. But all of these sets lie in a translate of {B(S,\rho)}, and the claim then follows from the translation invariance of {\mu}. \Box

Given any Bohr set {B(S,\rho)}, we define a normalised “Lipschitz” cutoff function {\nu_{B(S,\rho)}: G \rightarrow {\bf R}} by the formula

\displaystyle  \nu_{B(S,\rho)}(x) = c_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})_+ \ \ \ \ \ (1)

where {c_{B(S,\rho)}} is the constant such that

\displaystyle  \int_G \nu_{B(S,\rho)}\ d\mu = 1,

thus

\displaystyle c_{B(S,\rho)} = \left( \int_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})\ d\mu(x) \right)^{-1}.

The function {\nu_{B(S,\rho)}} should be viewed as an {L^1}-normalised “tent function” cutoff to {B(S,\rho)}. Note from Lemma 2 that

\displaystyle  1 \ll_{|S|,\rho} c_{B(S,\rho)} \ll_{|S|,\rho} 1. \ \ \ \ \ (2)
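To make these definitions concrete, here is a short numerical sketch (in Python) on the cyclic group {G = {\bf Z}/N{\bf Z}} with uniform probability measure, computing the measure of a Bohr set and the cutoff (1); the modulus {N}, frequency set {S}, and radius {\rho} below are arbitrary illustrative choices, not taken from the text:

```python
# Illustrative sketch on G = Z/NZ; here xi . x = xi*x/N mod 1.
import numpy as np

N = 997                          # an odd modulus, so G = Z/NZ is 2-divisible
x = np.arange(N)

def dist_to_int(theta):
    """The distance ||theta||_{R/Z} to the nearest integer."""
    return np.abs(theta - np.round(theta))

S, rho = [1, 57], 0.1            # a rank-2 Bohr set B(S, rho)
sup_vals = np.max([dist_to_int(xi * x / N) for xi in S], axis=0)

mu_B = np.mean(sup_vals < rho)   # Haar (uniform) measure of B(S, rho)
print(mu_B, rho ** len(S))       # Lemma 2: mu_B >>_{|S|,rho} 1; compare rho^|S|

# the L^1-normalised tent cutoff (1): nu = c (1 - sup/rho)_+ with mean 1
nu = np.clip(1 - sup_vals / rho, 0, None)
nu /= nu.mean()                  # enforces int nu dmu = 1, fixing c_{B(S,rho)}
print(nu.mean(), nu.max())       # mean is 1; max equals c_{B(S,rho)}, cf. (2)
```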

We then have the following sharper version of Theorem 1:

Theorem 3 (Roth-Khintchine theorem) Let {G = (G,+)} be a {2}-divisible compact abelian group, with Haar probability measure {\mu}, and let {\epsilon>0}. Then for any measurable function {f: G \rightarrow [0,1]}, there exists a Bohr set {B(S,\rho)} with {|S| \ll_\epsilon 1} and {\rho \gg_\epsilon 1} such that

\displaystyle  \int_G \int_G f(x) f(x+r) f(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \ \ \ \ \ (3)

\displaystyle  \geq (\int_G f\ d\mu)^3 - O(\epsilon)

where {*} denotes the convolution operation

\displaystyle  f*g(x) := \int_G f(y) g(x-y)\ d\mu(y).

A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with {f := 1_A} and {\epsilon} equal to a small multiple of {\alpha^3} to conclude that there is a Bohr set {B(S,\rho)} with {|S| \ll_\alpha 1} and {\rho \gg_\alpha 1} such that

\displaystyle  \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \gg \alpha^3.

But from (2) we have the pointwise bound {\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \ll_\alpha 1}, and Theorem 1 follows.
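As a quick numerical illustration of the quantity appearing in Theorem 1, one can take {G = {\bf Z}/N{\bf Z}} for an odd {N} and a random set {A} of density {\alpha}, for which the double average comes out close to {\alpha^3} (random sets are essentially the extremal examples for Theorem 1). A short sketch, with illustrative parameters:

```python
# Computes the average of 1_A(x) 1_A(x+r) 1_A(x+2r) over x, r in Z/NZ.
import numpy as np

N, alpha = 1999, 0.3                        # illustrative odd N and density
rng = np.random.default_rng(0)
f = (rng.random(N) < alpha).astype(float)   # indicator function of a random A

count = np.mean([np.mean(f * np.roll(f, -r) * np.roll(f, -2 * r))
                 for r in range(N)])        # double average over x and r
print(count, alpha ** 3)                    # both approximately 0.027
```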

Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set {B(S,\rho)} to capture all the “large Fourier coefficients” of {f}, but such that a certain “dilate” of {B(S,\rho)} does not capture much more Fourier energy of {f} than {B(S,\rho)} itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of {f} into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the {\nu_{B(S,\rho)}} considered above to achieve a similar effect.


Throughout this post, we will work only at the formal level of analysis, ignoring issues of convergence of integrals, justifying differentiation under the integral sign, and so forth. (Rigorous justification of the conservation laws and other identities arising from the formal manipulations below can usually be established in an a posteriori fashion once the identities are in hand, without the need to rigorously justify the manipulations used to come up with these identities).

It is a remarkable fact in the theory of differential equations that many of the ordinary and partial differential equations that are of interest (particularly in geometric PDE, or PDE arising from mathematical physics) admit a variational formulation; thus, a collection {\Phi: \Omega \rightarrow M} of one or more fields on a domain {\Omega} taking values in a space {M} will solve the differential equation of interest if and only if {\Phi} is a critical point to the functional

\displaystyle  J[\Phi] := \int_\Omega L( x, \Phi(x), D\Phi(x) )\ dx \ \ \ \ \ (1)

involving the fields {\Phi} and their first derivatives {D\Phi}, where the Lagrangian {L: \Sigma \rightarrow {\bf R}} is a function on the vector bundle {\Sigma} over {\Omega \times M} consisting of triples {(x, q, \dot q)} with {x \in \Omega}, {q \in M}, and {\dot q: T_x \Omega \rightarrow T_q M} a linear transformation; we also usually keep the boundary data of {\Phi} fixed in case {\Omega} has a non-trivial boundary, although we will ignore these issues here. (We also ignore the possibility of having additional constraints imposed on {\Phi} and {D\Phi}, which require the machinery of Lagrange multipliers to deal with, but which will only serve as a distraction for the current discussion.) It is common to use local coordinates to parameterise {\Omega} as {{\bf R}^d} and {M} as {{\bf R}^n}, in which case {L} can be viewed locally as a function on {{\bf R}^d \times {\bf R}^n \times {\bf R}^{dn}}.

Example 1 (Geodesic flow) Take {\Omega = [0,1]} and {M = (M,g)} to be a Riemannian manifold, which we will write locally in coordinates as {{\bf R}^n} with metric {g_{ij}(q)} for {i,j=1,\dots,n}. A geodesic {\gamma: [0,1] \rightarrow M} is then a critical point (keeping {\gamma(0),\gamma(1)} fixed) of the energy functional

\displaystyle  J[\gamma] := \frac{1}{2} \int_0^1 g_{\gamma(t)}( D\gamma(t), D\gamma(t) )\ dt

or in coordinates (ignoring coordinate patch issues, and using the usual summation conventions)

\displaystyle  J[\gamma] = \frac{1}{2} \int_0^1 g_{ij}(\gamma(t)) \dot \gamma^i(t) \dot \gamma^j(t)\ dt.

As discussed in this previous post, both the Euler equations for rigid body motion, and the Euler equations for incompressible inviscid flow, can be interpreted as geodesic flow (though in the latter case, one has to work really formally, as the manifold {M} is now infinite dimensional).

More generally, if {\Omega = (\Omega,h)} is itself a Riemannian manifold, which we write locally in coordinates as {{\bf R}^d} with metric {h_{ab}(x)} for {a,b=1,\dots,d}, then a harmonic map {\Phi: \Omega \rightarrow M} is a critical point of the energy functional

\displaystyle  J[\Phi] := \frac{1}{2} \int_\Omega h(x) \otimes g_{\Phi(x)}( D\Phi(x), D\Phi(x) )\ dh(x)

or in coordinates (again ignoring coordinate patch issues)

\displaystyle  J[\Phi] = \frac{1}{2} \int_{{\bf R}^d} h_{ab}(x) g_{ij}(\Phi(x)) (\partial_a \Phi^i(x)) (\partial_b \Phi^j(x))\ \sqrt{\det(h(x))}\ dx.

If we replace the Riemannian manifold {\Omega} by a Lorentzian manifold, such as Minkowski space {{\bf R}^{1+3}}, then the notion of a harmonic map is replaced by that of a wave map, which generalises the scalar wave equation (which corresponds to the case {M={\bf R}}).

Example 2 ({N}-particle interactions) Take {\Omega = {\bf R}} and {M = {\bf R}^3 \otimes {\bf R}^N}; then a function {\Phi: \Omega \rightarrow M} can be interpreted as a collection of {N} trajectories {q_1,\dots,q_N: {\bf R} \rightarrow {\bf R}^3} in space, which we give a physical interpretation as the trajectories of {N} particles. If we assign each particle a positive mass {m_1,\dots,m_N > 0}, and also introduce a potential energy function {V: M \rightarrow {\bf R}}, then it turns out that Newton’s laws of motion {F=ma} in this context (with the force {F_i} on the {i^{th}} particle being given by the conservative force {-\nabla_{q_i} V}) are equivalent to the trajectories {q_1,\dots,q_N} being a critical point of the action functional

\displaystyle  J[\Phi] := \int_{\bf R} \sum_{i=1}^N \frac{1}{2} m_i |\dot q_i(t)|^2 - V( q_1(t),\dots,q_N(t) )\ dt.

Formally, if {\Phi = \Phi_0} is a critical point of a functional {J[\Phi]}, this means that

\displaystyle  \frac{d}{ds} J[ \Phi[s] ]|_{s=0} = 0

whenever {s \mapsto \Phi[s]} is a (smooth) deformation with {\Phi[0]=\Phi_0} (and with {\Phi[s]} respecting whatever boundary conditions are appropriate). Interchanging the derivative and integral, we (formally, at least) arrive at

\displaystyle  \int_\Omega \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}\ dx = 0. \ \ \ \ \ (2)

Write {\delta \Phi := \frac{d}{ds} \Phi[s]|_{s=0}} for the infinitesimal deformation of {\Phi_0}. By the chain rule, {\frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}} can be expressed in terms of {x, \Phi_0(x), \delta \Phi(x), D\Phi_0(x), D \delta \Phi(x)}. In coordinates, we have

\displaystyle  \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \delta \Phi^i(x) L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) \ \ \ \ \ (3)

\displaystyle  + \partial_{x^a} \delta \Phi^i(x) L_{\partial_{x^a} q^i} (x,\Phi_0(x), D\Phi_0(x)),

where we parameterise {\Sigma} by {x, (q^i)_{i=1,\dots,n}, (\partial_{x^a} q^i)_{a=1,\dots,d; i=1,\dots,n}}, and we use subscripts on {L} to denote partial derivatives with respect to the various coordinates. (One can of course work in a coordinate-free manner here if one really wants to, but the notation becomes a little cumbersome due to the need to carefully split up the tangent space of {\Sigma}, and we will not do so here.) Thus we can view (2) as an integral identity that asserts the vanishing of a certain integral, whose integrand involves {x, \Phi_0(x), \delta \Phi(x), D\Phi_0(x), D \delta \Phi(x)}, where {\delta \Phi} vanishes at the boundary but is otherwise unconstrained.

A general rule of thumb in PDE and calculus of variations is that whenever one has an integral identity of the form {\int_\Omega F(x)\ dx = 0} for some class of functions {F} that vanishes on the boundary, then there must be an associated differential identity {F = \hbox{div} X} that justifies this integral identity through Stokes’ theorem. This rule of thumb helps explain why integration by parts is used so frequently in PDE to justify integral identities. The rule of thumb can fail when one is dealing with “global” or “cohomologically non-trivial” integral identities of a topological nature, such as the Gauss-Bonnet or Kazhdan-Warner identities, but is quite reliable for “local” or “cohomologically trivial” identities, such as those arising from calculus of variations.

In any case, if we apply this rule to (2), we expect that the integrand {\frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0}} should be expressible as a spatial divergence. This is indeed the case:

Proposition 1 (Formal) Let {\Phi = \Phi_0} be a critical point of the functional {J[\Phi]} defined in (1). Then for any deformation {s \mapsto \Phi[s]} with {\Phi[0] = \Phi_0}, we have

\displaystyle  \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \hbox{div} X \ \ \ \ \ (4)

where {X} is the vector field that is expressible in coordinates as

\displaystyle  X^a := \delta \Phi^i(x) L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)). \ \ \ \ \ (5)

Proof: Comparing (4) with (3), we see that the claim is equivalent to the Euler-Lagrange equation

\displaystyle  L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) - \partial_{x^a} L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)) = 0. \ \ \ \ \ (6)

The same computation, together with an integration by parts, shows that (2) may be rewritten as

\displaystyle  \int_\Omega ( L_{q^i}(x,\Phi_0(x), D\Phi_0(x)) - \partial_{x^a} L_{\partial_{x^a} q^i}(x,\Phi_0(x), D\Phi_0(x)) ) \delta \Phi^i(x)\ dx = 0.

Since {\delta \Phi^i(x)} is unconstrained on the interior of {\Omega}, the claim (6) follows (at a formal level, at least). \Box
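As a quick symbolic sanity check of the Euler-Lagrange equation (6), the following sketch (using sympy) treats the one-particle case of Example 2 with an illustrative quadratic potential {V(q) = k q^2/2}; the output is Newton's law {m \ddot q = -kq}:

```python
# Derives the Euler-Lagrange equation for L = m q'^2 / 2 - k q^2 / 2.
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

L = m * q(t).diff(t)**2 / 2 - k * q(t)**2 / 2   # kinetic minus potential energy
print(euler_equations(L, q(t), t))
# [Eq(-k*q(t) - m*Derivative(q(t), (t, 2)), 0)], i.e. Newton's law m q'' = -k q
```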

Many variational problems also enjoy one-parameter continuous symmetries: given any field {\Phi_0} (not necessarily a critical point), one can place that field in a one-parameter family {s \mapsto \Phi[s]} with {\Phi[0] = \Phi_0}, such that

\displaystyle  J[ \Phi[s] ] = J[ \Phi[0] ]

for all {s}; in particular,

\displaystyle  \frac{d}{ds} J[ \Phi[s] ]|_{s=0} = 0,

which can be written as (2) as before. Applying the previous rule of thumb, we thus expect another divergence identity

\displaystyle  \frac{d}{ds} L( x, \Phi[s](x), D\Phi[s](x) )|_{s=0} = \hbox{div} Y \ \ \ \ \ (7)

whenever {s \mapsto \Phi[s]} arises from a continuous one-parameter symmetry. This expectation is indeed the case in many examples. For instance, if the spatial domain {\Omega} is the Euclidean space {{\bf R}^d}, and the Lagrangian (when expressed in coordinates) has no direct dependence on the spatial variable {x}, thus

\displaystyle  L( x, \Phi(x), D\Phi(x) ) = L( \Phi(x), D\Phi(x) ), \ \ \ \ \ (8)

then we obtain {d} translation symmetries

\displaystyle  \Phi[s](x) := \Phi(x - s e^a )

for {a=1,\dots,d}, where {e^1,\dots,e^d} is the standard basis for {{\bf R}^d}. For a fixed {a}, the left-hand side of (7) then becomes

\displaystyle  \frac{d}{ds} L( \Phi(x-se^a), D\Phi(x-se^a) )|_{s=0} = -\partial_{x^a} [ L( \Phi(x), D\Phi(x) ) ]

\displaystyle  = \hbox{div} Y

where {Y(x) = - L(\Phi(x), D\Phi(x)) e^a}. Another common type of symmetry is a pointwise symmetry, in which

\displaystyle  L( x, \Phi[s](x), D\Phi[s](x) ) = L( x, \Phi[0](x), D\Phi[0](x) ) \ \ \ \ \ (9)

for all {x}, in which case (7) clearly holds with {Y=0}.

If we subtract (4) from (7), we obtain the celebrated theorem of Noether linking symmetries with conservation laws:

Theorem 2 (Noether’s theorem) Suppose that {\Phi_0} is a critical point of the functional (1), and let {\Phi[s]} be a one-parameter continuous symmetry with {\Phi[0] = \Phi_0}. Let {X} be the vector field in (5), and let {Y} be the vector field in (7). Then we have the pointwise conservation law

\displaystyle  \hbox{div}(X-Y) = 0.

In particular, for one-dimensional variational problems, in which {\Omega \subset {\bf R}}, we have the conservation law {(X-Y)(t) = (X-Y)(0)} for all {t \in \Omega} (assuming of course that {\Omega} is connected and contains {0}).

Noether’s theorem gives a systematic way to locate conservation laws for solutions to variational problems. For instance, if {\Omega \subset {\bf R}} and the Lagrangian has no explicit time dependence, thus

\displaystyle  L(t, \Phi(t), \dot \Phi(t)) = L(\Phi(t), \dot \Phi(t)),

then by using the time translation symmetry {\Phi[s](t) := \Phi(t-s)}, we have

\displaystyle  Y(t) = - L( \Phi(t), \dot\Phi(t) )

as discussed previously, whereas we have {\delta \Phi(t) = - \dot \Phi(t)}, and hence by (5)

\displaystyle  X(t) := - \dot \Phi^i(t) L_{\dot q^i}(\Phi(t), \dot \Phi(t)),

and so Noether’s theorem gives conservation of the Hamiltonian

\displaystyle  H(t) := \dot \Phi^i(t) L_{\dot q^i}(\Phi(t), \dot \Phi(t))- L(\Phi(t), \dot \Phi(t)). \ \ \ \ \ (10)

For instance, for geodesic flow, the Hamiltonian works out to be

\displaystyle  H(t) = \frac{1}{2} g_{ij}(\gamma(t)) \dot \gamma^i(t) \dot \gamma^j(t),

so we see that the speed of the geodesic is conserved over time.
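This conservation is easy to observe numerically. The sketch below integrates the geodesic equations for an illustrative choice of Riemannian manifold (the round {2}-sphere in {(\theta,\phi)} coordinates, not discussed above) and checks that the Hamiltonian (10), which here is half the squared speed, stays constant along the trajectory:

```python
# Geodesic flow on the round 2-sphere, metric g = diag(1, sin^2 theta).
import numpy as np
from scipy.integrate import solve_ivp

def geodesic(t, y):
    th, ph, dth, dph = y
    # geodesic equations from the Christoffel symbols of this metric
    return [dth, dph,
            np.sin(th) * np.cos(th) * dph**2,
            -2.0 / np.tan(th) * dth * dph]

y0 = [1.0, 0.0, 0.3, 0.7]                  # illustrative initial data
sol = solve_ivp(geodesic, (0, 10), y0, rtol=1e-10, atol=1e-10)

th, ph, dth, dph = sol.y
H = 0.5 * (dth**2 + np.sin(th)**2 * dph**2)   # the Hamiltonian (10)
print(H.max() - H.min())                      # ~1e-9: the speed is conserved
```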

For pointwise symmetries (9), {Y} vanishes, and so Noether’s theorem simplifies to {\hbox{div} X = 0}; in the one-dimensional case {\Omega \subset {\bf R}}, we thus see from (5) that the quantity

\displaystyle  \delta \Phi^i(t) L_{\dot q^i}(t,\Phi_0(t), \dot \Phi_0(t)) \ \ \ \ \ (11)

is conserved in time. For instance, for the {N}-particle system in Example 2, if we have the translation invariance

\displaystyle  V( q_1 + h, \dots, q_N + h ) = V( q_1, \dots, q_N )

for all {q_1,\dots,q_N,h \in {\bf R}^3}, then we have the pointwise translation symmetry

\displaystyle  q_i[s](t) := q_i(t) + s e^j

for all {i=1,\dots,N}, {s \in{\bf R}} and some {j=1,\dots,3}, in which case {\delta q_i(t) = e^j}, and the conserved quantity (11) becomes

\displaystyle  \sum_{i=1}^N m_i \dot q_i^j(t);

as {j=1,\dots,3} was arbitrary, this establishes conservation of the total momentum

\displaystyle  \sum_{i=1}^N m_i \dot q_i(t).

Similarly, if we have the rotation invariance

\displaystyle  V( R q_1, \dots, Rq_N ) = V( q_1, \dots, q_N )

for any {q_1,\dots,q_N \in {\bf R}^3} and {R \in SO(3)}, then we have the pointwise rotation symmetry

\displaystyle  q_i[s](t) := \exp( s A ) q_i(t)

for any skew-symmetric real {3 \times 3} matrix {A}, in which case {\delta q_i(t) = A q_i(t)}, and the conserved quantity (11) becomes

\displaystyle  \sum_{i=1}^N m_i \langle A q_i(t), \dot q_i(t) \rangle;

since {A} is an arbitrary skew-symmetric matrix, this establishes conservation of the total angular momentum

\displaystyle  \sum_{i=1}^N m_i q_i(t) \wedge \dot q_i(t).
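Both conservation laws are easy to check numerically. The following sketch integrates the {N}-particle system of Example 2 with {N=3} unit masses and an illustrative translation- and rotation-invariant potential (Newtonian gravitational attraction), and verifies that the total momentum and total angular momentum stay constant up to integration error:

```python
# Three-body problem with unit masses; V is translation- and rotation-invariant.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    q, v = y[:9].reshape(3, 3), y[9:].reshape(3, 3)
    a = np.zeros_like(q)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = q[j] - q[i]
                a[i] += d / np.linalg.norm(d)**3   # -grad_{q_i} V for unit masses
    return np.concatenate([v.ravel(), a.ravel()])

q0 = np.array([[1.0, 0, 0], [-1.0, 0, 0], [0, 2.0, 0]])   # illustrative data
v0 = np.array([[0, 0.4, 0], [0, -0.4, 0], [0.1, 0, 0.2]])
sol = solve_ivp(rhs, (0, 5), np.concatenate([q0.ravel(), v0.ravel()]),
                rtol=1e-10, atol=1e-10)

q, v = sol.y[:9].reshape(3, 3, -1), sol.y[9:].reshape(3, 3, -1)
P = v.sum(axis=0)                             # total momentum (unit masses)
J = np.cross(q, v, axis=1).sum(axis=0)        # total angular momentum
print(np.ptp(P, axis=1), np.ptp(J, axis=1))   # each component constant to ~1e-9
```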

Below the fold, I will describe how Noether’s theorem can be used to locate all of the conserved quantities for the Euler equations of inviscid fluid flow, discussed in this previous post, by interpreting that flow as geodesic flow in an infinite dimensional manifold.


I’ve just uploaded to the arXiv the paper “Finite time blowup for an averaged three-dimensional Navier-Stokes equation“, submitted to J. Amer. Math. Soc.. The main purpose of this paper is to formalise the “supercriticality barrier” for the global regularity problem for the Navier-Stokes equation, which roughly speaking asserts that it is not possible to establish global regularity by any “abstract” approach which only uses upper bound function space estimates on the nonlinear part of the equation, combined with the energy identity. This is done by constructing a modification of the Navier-Stokes equations with a nonlinearity that obeys essentially all of the function space estimates that the true Navier-Stokes nonlinearity does, and which also obeys the energy identity, but for which one can construct solutions that blow up in finite time. Results of this type had been previously established by Montgomery-Smith, Gallagher-Paicu, and Li-Sinai for variants of the Navier-Stokes equation without the energy identity, and by Katz-Pavlovic and by Cheskidov for dyadic analogues of the Navier-Stokes equations in five and higher dimensions that obeyed the energy identity (see also the work of Plechac and Sverak and of Hou and Lei that also suggest blowup for other Navier-Stokes type models obeying the energy identity in five and higher dimensions), but to my knowledge this is the first blowup result for a Navier-Stokes type equation in three dimensions that also obeys the energy identity. Intriguingly, the method of proof in fact hints at a possible route to establishing blowup for the true Navier-Stokes equations, which I am now increasingly inclined to believe is the case (albeit for a very small set of initial data).

To state the results more precisely, recall that the Navier-Stokes equations can be written in the form

\displaystyle  \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p

for a divergence-free velocity field {u} and a pressure field {p}, where {\nu>0} is the viscosity, which we will normalise to be one. We will work in the non-periodic setting, so the spatial domain is {{\bf R}^3}, and for sake of exposition I will not discuss matters of regularity or decay of the solution (but we will always be working with strong notions of solution here rather than weak ones). Applying the Leray projection {P} onto divergence-free vector fields to this equation, we can eliminate the pressure, and obtain an evolution equation

\displaystyle  \partial_t u = \Delta u + B(u,u) \ \ \ \ \ (1)

purely for the velocity field, where {B} is a certain bilinear operator on divergence-free vector fields (specifically, {B(u,v) = -\frac{1}{2} P( (u \cdot \nabla) v + (v \cdot \nabla) u)}). The global regularity problem for Navier-Stokes is then equivalent to the global regularity problem for the evolution equation (1).

An important feature of the bilinear operator {B} appearing in (1) is the cancellation law

\displaystyle  \langle B(u,u), u \rangle = 0

(using the {L^2} inner product on divergence-free vector fields), which leads in particular to the fundamental energy identity

\displaystyle  \frac{1}{2} \int_{{\bf R}^3} |u(T,x)|^2\ dx + \int_0^T \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx dt = \frac{1}{2} \int_{{\bf R}^3} |u(0,x)|^2\ dx.

This identity (and its consequences) provides essentially the only known a priori bound on solutions to the Navier-Stokes equations for large data and arbitrary times. Unfortunately, as discussed in this previous post, the quantities controlled by the energy identity are supercritical with respect to scaling, which is the fundamental obstacle that has defeated all attempts to solve the global regularity problem for Navier-Stokes without any additional assumptions on the data or solution (e.g. perturbative hypotheses, or a priori control on a critical norm such as the {L^\infty_t L^3_x} norm).

Our main result is then (slightly informally stated) as follows:

Theorem 1 There exists an averaged version {\tilde B} of the bilinear operator {B}, of the form

\displaystyle  \tilde B(u,v) := \int_\Omega m_{3,\omega}(D) Rot_{3,\omega}

\displaystyle B( m_{1,\omega}(D) Rot_{1,\omega} u, m_{2,\omega}(D) Rot_{2,\omega} v )\ d\mu(\omega)

for some probability space {(\Omega, \mu)}, some spatial rotation operators {Rot_{i,\omega}} for {i=1,2,3}, and some Fourier multipliers {m_{i,\omega}} of order {0}, for which one still has the cancellation law

\displaystyle  \langle \tilde B(u,u), u \rangle = 0

and for which the averaged Navier-Stokes equation

\displaystyle  \partial_t u = \Delta u + \tilde B(u,u) \ \ \ \ \ (2)

admits solutions that blow up in finite time.

(There are some integrability conditions on the Fourier multipliers {m_{i,\omega}} required in the above theorem in order for the conclusion to be non-trivial, but I am omitting them here for sake of exposition.)

Because spatial rotations and Fourier multipliers of order {0} are bounded on most function spaces, {\tilde B} automatically obeys almost all of the upper bound estimates that {B} does. Thus, this theorem blocks any attempt to prove global regularity for the true Navier-Stokes equations which relies purely on the energy identity and on upper bound estimates for the nonlinearity; one must use some additional structure of the nonlinear operator {B} which is not shared by an averaged version {\tilde B}. Such additional structure certainly exists – for instance, the Navier-Stokes equation has a vorticity formulation involving only differential operators rather than pseudodifferential ones, whereas a general equation of the form (2) does not. However, “abstract” approaches to global regularity generally do not exploit such structure, and thus cannot be used to affirmatively answer the Navier-Stokes problem.

It turns out that the particular averaged bilinear operator {\tilde B} that we will use will be a finite linear combination of local cascade operators, which take the form

\displaystyle  C(u,v) := \sum_{n \in {\bf Z}} (1+\epsilon_0)^{5n/2} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \psi_{3,n}

where {\epsilon_0>0} is a small parameter, {\psi_1,\psi_2,\psi_3} are Schwartz vector fields whose Fourier transform is supported on an annulus, and {\psi_{i,n}(x) := (1+\epsilon_0)^{3n/2} \psi_i( (1+\epsilon_0)^n x)} is an {L^2}-rescaled version of {\psi_i} (basically a “wavelet” of wavelength about {(1+\epsilon_0)^{-n}} centred at the origin). Such operators were essentially introduced by Katz and Pavlovic as dyadic models for {B}; they have essentially the same scaling property as {B} (except that one can only scale along powers of {1+\epsilon_0}, rather than over all positive reals), and in fact they can be expressed as an average of {B} in the sense of the above theorem, as can be shown after a somewhat tedious amount of Fourier-analytic symbol manipulations.

If we consider nonlinearities {\tilde B} which are a finite linear combination of local cascade operators, then the equation (2) more or less collapses to a system of ODE in certain “wavelet coefficients” of {u}. The precise ODE that shows up depends on what precise combination of local cascade operators one is using. Katz and Pavlovic essentially considered a single cascade operator together with its “adjoint” (needed to preserve the energy identity), and arrived (more or less) at the system of ODE

\displaystyle  \partial_t X_n = - (1+\epsilon_0)^{2n} X_n + (1+\epsilon_0)^{\frac{5}{2}(n-1)} X_{n-1}^2 - (1+\epsilon_0)^{\frac{5}{2} n} X_n X_{n+1} \ \ \ \ \ (3)

where {X_n: [0,T] \rightarrow {\bf R}} are scalar fields for each integer {n}. (Actually, Katz-Pavlovic worked with a technical variant of this particular equation, but the differences are not so important for this current discussion.) Note that the quadratic terms on the RHS carry a higher exponent of {1+\epsilon_0} than the dissipation term; this reflects the supercritical nature of this evolution (the energy {\frac{1}{2} \sum_n X_n^2} is monotone decreasing in this flow, so the natural size of {X_n} given the control on the energy is {O(1)}). There is a slight technical issue with the dissipation if one wishes to embed (3) into an equation of the form (2), but it is minor and I will not discuss it further here.

In principle, if the {X_n} mode has size comparable to {1} at some time {t_n}, then energy should flow from {X_n} to {X_{n+1}} at a rate comparable to {(1+\epsilon_0)^{\frac{5}{2} n}}, so that by time {t_{n+1} \approx t_n + (1+\epsilon_0)^{-\frac{5}{2} n}} or so, most of the energy of {X_n} should have drained into the {X_{n+1}} mode (with hardly any energy dissipated). Since the series {\sum_{n \geq 1} (1+\epsilon_0)^{-\frac{5}{2} n}} is summable, this suggests finite time blowup for this ODE as the energy races ever more quickly to higher and higher modes. Such a scenario was indeed established by Katz and Pavlovic (and refined by Cheskidov) if the dissipation strength {(1+\epsilon_0)^{2n}} was weakened somewhat (the exponent {2} has to be lowered to be less than {\frac{5}{3}}). As mentioned above, this is enough to give a version of Theorem 1 in five and higher dimensions.
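To visualise the cascade mechanism behind this heuristic, here is a rough numerical experiment with a truncation of the inviscid part of (3) (the dissipation term is dropped and only finitely many modes are kept, purely for illustration; the initial data and time span are likewise arbitrary choices). The truncated inviscid system conserves energy exactly, and one can watch the dominant mode race upward:

```python
# Truncated inviscid cascade: dX_n = lam^{2.5(n-1)} X_{n-1}^2 - lam^{2.5 n} X_n X_{n+1}.
import numpy as np
from scipy.integrate import solve_ivp

eps0, N = 1.0, 12                 # dyadic case eps0 = 1, truncated to 12 modes
lam = 1.0 + eps0
n = np.arange(N)

def rhs(t, X):
    dX = np.zeros_like(X)
    dX[1:] += lam ** (2.5 * (n[1:] - 1)) * X[:-1] ** 2   # inflow from mode n-1
    dX[:-1] -= lam ** (2.5 * n[:-1]) * X[:-1] * X[1:]    # outflow to mode n+1
    return dX

X0 = np.zeros(N); X0[0] = 2.0
# a stiff solver is needed: the transfer rates grow geometrically in n
sol = solve_ivp(rhs, (0, 5), X0, method='BDF', rtol=1e-8, atol=1e-10,
                t_eval=[0, 1, 2, 3, 4, 5])

for t, X in zip(sol.t, sol.y.T):
    print(f"t={t:.0f}  dominant mode={np.argmax(np.abs(X))}  "
          f"energy={0.5 * np.sum(X**2):.3f}")   # mode climbs; energy stays 2.0
```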

On the other hand, it was shown a few years ago by Barbato, Morandin, and Romito that (3) in fact admits global smooth solutions (at least in the dyadic case {\epsilon_0=1}, and assuming non-negative initial data). Roughly speaking, the problem is that as energy is being transferred from {X_n} to {X_{n+1}}, energy is also simultaneously being transferred from {X_{n+1}} to {X_{n+2}}, and as such the solution races off to higher modes a bit too prematurely, without absorbing all of the energy from lower modes. This weakens the strength of the blowup to the point where the moderately strong dissipation in (3) is enough to kill the high frequency cascade before a true singularity occurs. Because of this, the original Katz-Pavlovic model cannot quite be used to establish Theorem 1 in three dimensions. (Actually, the original Katz-Pavlovic model had some additional dispersive features which allowed for another proof of global smooth solutions, which is an unpublished result of Nazarov.)

To get around this, I had to “engineer” an ODE system with similar features to (3) (namely, a quadratic nonlinearity, a monotone total energy, and the indicated exponents of {(1+\epsilon_0)} for both the dissipation term and the quadratic terms), but for which the cascade of energy from scale {n} to scale {n+1} was not interrupted by the cascade of energy from scale {n+1} to scale {n+2}. To do this, I needed to insert a delay in the cascade process (so that after energy was dumped into scale {n}, it would take some time before the energy would start to transfer to scale {n+1}), but the process also needed to be abrupt (once the process of energy transfer started, it needed to conclude very quickly, before the delayed transfer for the next scale kicked in). It turned out that one could build a “quadratic circuit” out of some basic “quadratic gates” (analogous to how an electrical circuit could be built out of basic gates such as amplifiers or resistors) that achieved this task, leading to an ODE system essentially of the form

\displaystyle \partial_t X_{1,n} = - (1+\epsilon_0)^{2n} X_{1,n}

\displaystyle  + (1+\epsilon_0)^{5n/2} (- \epsilon^{-2} X_{3,n} X_{4,n} - \epsilon X_{1,n} X_{2,n} - \epsilon^2 \exp(-K^{10}) X_{1,n} X_{3,n}

\displaystyle  + K X_{4,n-1}^2)

\displaystyle  \partial_t X_{2,n} = - (1+\epsilon_0)^{2n} X_{2,n} + (1+\epsilon_0)^{5n/2} (\epsilon X_{1,n}^2 - \epsilon^{-1} K^{10} X_{3,n}^2)

\displaystyle  \partial_t X_{3,n} = - (1+\epsilon_0)^{2n} X_{3,n} + (1+\epsilon_0)^{5n/2} (\epsilon^2 \exp(-K^{10}) X_{1,n}^2

\displaystyle + \epsilon^{-1} K^{10} X_{2,n} X_{3,n} )

\displaystyle  \partial_t X_{4,n} =- (1+\epsilon_0)^{2n} X_{4,n} + (1+\epsilon_0)^{5n/2} (\epsilon^{-2} X_{3,n} X_{1,n}

\displaystyle - (1+\epsilon_0)^{5/2} K X_{4,n} X_{1,n+1})

where {K \geq 1} is a suitable large parameter and {\epsilon > 0} is a suitable small parameter (much smaller than {1/K}). To visualise the dynamics of such a system, I found it useful to describe this system graphically by a “circuit diagram” that is analogous (but not identical) to the circuit diagrams arising in electrical engineering:

[Figure: “circuit diagram” for the above ODE system.]

The coupling constants here range widely from being very large to very small; in practice, this makes the {X_{2,n}} and {X_{3,n}} modes absorb very little energy, but exert a sizeable influence on the remaining modes. If a lot of energy is suddenly dumped into {X_{1,n}}, what happens next is roughly as follows: for a moderate period of time, nothing much happens other than a trickle of energy into {X_{2,n}}, which in turn causes a rapid exponential growth of {X_{3,n}} (from a very low base). After this delay, {X_{3,n}} suddenly crosses a certain threshold, at which point it causes {X_{1,n}} and {X_{4,n}} to exchange energy back and forth with extreme speed. The energy from {X_{4,n}} then rapidly drains into {X_{1,n+1}}, and the process begins again (with a slight loss in energy due to the dissipation). If one plots the total energy {E_n := \frac{1}{2} ( X_{1,n}^2 + X_{2,n}^2 + X_{3,n}^2 + X_{4,n}^2 )} as a function of time, it looks schematically like this:

[Figure: schematic plot of the total energy {E_n} as a function of time, showing a sequence of increasingly rapid cascades.]

As in the previous heuristic discussion, the time between cascades from one frequency scale to the next decays exponentially, leading to blowup at some finite time {T}. (One could describe the dynamics here as being similar to the famous “lighting the beacons” scene in the Lord of the Rings movies, except that (a) as each beacon gets ignited, the previous one is extinguished, as per the energy identity; (b) the time between beacon lightings decreases exponentially; and (c) there is no soundtrack.)

There is a real (but remote) possibility that this sort of construction can be adapted to the true Navier-Stokes equations. The basic blowup mechanism in the averaged equation is that of a von Neumann machine, or more precisely a construct (built within the laws of the inviscid evolution {\partial_t u = \tilde B(u,u)}) that, after some time delay, manages to suddenly create a replica of itself at a finer scale (and to largely erase its original instantiation in the process). In principle, such a von Neumann machine could also be built out of the laws of the inviscid form of the Navier-Stokes equations (i.e. the Euler equations). In physical terms, one would have to build the machine purely out of an ideal fluid (i.e. an inviscid incompressible fluid). If one could somehow create enough “logic gates” out of ideal fluid, one could presumably build a sort of “fluid computer”, at which point the task of building a von Neumann machine appears to reduce to a software engineering exercise rather than a PDE problem (providing that the gates are suitably stable with respect to perturbations, but (as with actual computers) this can presumably be done by converting the analog signals of fluid mechanics into a more error-resistant digital form). The key thing missing in this program (in both senses of the word) to establish blowup for Navier-Stokes is to construct the logic gates within the laws of ideal fluids. (Compare with the situation for cellular automata such as Conway’s “Game of Life“, in which Turing complete computers, universal constructors, and replicators have all been built within the laws of that game.)

This is the sixth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage).

The current focus is on improving the upper bound on {H_1} under the assumption of the generalised Elliott-Halberstam conjecture (GEH) from {H_1 \leq 8} to {H_1 \leq 6}, which looks to be the limit of the method (see this previous comment for a semi-rigorous reason as to why {H_1 \leq 4} is not possible with this method). With the most general Selberg sieve available, the problem reduces to the following three-dimensional variational one:

Problem 1 Does there exist a (not necessarily convex) polytope {R \subset [0,1]^3} with quantities {0 \leq \varepsilon_1,\varepsilon_2,\varepsilon_3 \leq 1}, and a non-trivial square-integrable function {F: {\bf R}^3 \rightarrow {\bf R}} supported on {R} such that

  • {R + R \subset \{ (x,y,z) \in [0,2]^3: \min(x+y,y+z,z+x) \leq 2 \},}
  • {\int_0^\infty F(x,y,z)\ dx = 0} when {y+z \geq 1+\varepsilon_1};
  • {\int_0^\infty F(x,y,z)\ dy = 0} when {x+z \geq 1+\varepsilon_2};
  • {\int_0^\infty F(x,y,z)\ dz = 0} when {x+y \geq 1+\varepsilon_3};

and such that we have the inequality

\displaystyle  \int_{y+z \leq 1-\varepsilon_1} (\int_{\bf R} F(x,y,z)\ dx)^2\ dy dz

\displaystyle + \int_{z+x \leq 1-\varepsilon_2} (\int_{\bf R} F(x,y,z)\ dy)^2\ dz dx

\displaystyle + \int_{x+y \leq 1-\varepsilon_3} (\int_{\bf R} F(x,y,z)\ dz)^2\ dx dy

\displaystyle  > 2 \int_R F(x,y,z)^2\ dx dy dz?

(Initially it was assumed that {R} was convex, but we have now realised that this is not necessary.)

An affirmative answer to this question will imply {H_1 \leq 6} on GEH. We are “within almost two percent” of this claim; we cannot quite reach {2} yet, but have got as far as {1.959633}. However, we have not yet fully optimised {F} in the above problem.

The most promising route so far is to take the symmetric polytope

\displaystyle  R = \{ (x,y,z) \in [0,1]^3: x+y+z \leq 3/2 \}

with {F} symmetric as well, and {\varepsilon_1=\varepsilon_2=\varepsilon_3=\varepsilon} (we suspect that the optimal {\varepsilon} will be roughly {1/6}). (However, it is certainly worth also taking a look at easier model problems, such as the polytope {{\cal R}'_3 := \{ (x,y,z) \in [0,1]^3: x+y,y+z,z+x \leq 1\}}, which has no vanishing marginal conditions to contend with; more recently we have been looking at the non-convex polytope {R = \{x+y,x+z \leq 1 \} \cup \{ x+y,y+z \leq 1 \} \cup \{ x+z,y+z \leq 1\}}.) Some further details of this particular case are given below the fold.
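Incidentally, the sumset condition in Problem 1 is easy to verify for the symmetric polytope: on {R+R} one has {x+y+z \leq 3}, while if all three pairwise sums exceeded {2} then summing them would force {x+y+z > 3}. The following Monte Carlo sketch (with arbitrary sample sizes) double-checks this numerically:

```python
# Checks that R + R lies in { min(x+y, y+z, z+x) <= 2 } for the symmetric R.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(10**6, 3))
R = pts[pts.sum(axis=1) <= 1.5]                # uniform samples from R
i, j = rng.integers(len(R), size=(2, 10**5))
s = R[i] + R[j]                                # random points of R + R
min_pair = np.minimum(np.minimum(s[:, 0] + s[:, 1], s[:, 1] + s[:, 2]),
                      s[:, 2] + s[:, 0])
print(bool(np.all(min_pair <= 2)))             # True
```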

There should still be some progress to be made in the other regimes of interest – the unconditional bound on {H_1} (currently at {270}), and on any further progress in asymptotic bounds for {H_m} for larger {m} – but the current focus is certainly on the bound on {H_1} on GEH, as we seem to be tantalisingly close to an optimal result here.


This is the fifth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page (which has recently returned to full functionality, after a partial outage). In particular, the upper bound for {H_1} has been shaved a little from {272} to {270}, and we have very recently achieved the bound {H_1 \leq 8} on the generalised Elliott-Halberstam conjecture GEH, formulated as Conjecture 1 of this paper of Bombieri, Friedlander, and Iwaniec. We also have explicit bounds for {H_m} for {m \leq 5}, both with and without the assumption of the Elliott-Halberstam conjecture, as well as slightly sharper asymptotics for the upper bound for {H_m} as {m \rightarrow \infty}.

The basic strategy for bounding {H_m} still follows the general paradigm first laid out by Goldston, Pintz, Yildirim: given an admissible {k}-tuple {(h_1,\dots,h_k)}, one needs to locate a non-negative sieve weight {\nu: {\bf Z} \rightarrow {\bf R}^+}, supported on an interval {[x,2x]} for a large {x}, such that the ratio

\displaystyle  \frac{\sum_{i=1}^k \sum_n \nu(n) 1_{n+h_i \hbox{ prime}}}{\sum_n \nu(n)} \ \ \ \ \ (1)

is asymptotically larger than {m} as {x \rightarrow \infty}; this will show that {H_m \leq h_k-h_1}. Thus one wants to locate a sieve weight {\nu} for which one has good lower bounds on the numerator and good upper bounds on the denominator.

One can modify this paradigm slightly, for instance by adding the additional term {\sum_n \nu(n) 1_{n+h_1,\dots,n+h_k \hbox{ composite}}} to the numerator, or by subtracting the term {\sum_n \nu(n) 1_{n+h_1,n+h_k \hbox{ prime}}} from the numerator (which allows one to reduce the bound {h_k-h_1} to {\max(h_k-h_2,h_{k-1}-h_1)}); however, the numerical impact of these tweaks has proven to be negligible thus far.

Despite a number of experiments with other sieves, we are still relying primarily on the Selberg sieve

\displaystyle  \nu(n) := 1_{n=b\ (W)} 1_{[x,2x]}(n) \lambda(n)^2

where {\lambda(n)} is the divisor sum

\displaystyle  \lambda(n) := \sum_{d_1|n+h_1, \dots, d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R}, \dots, \frac{\log d_k}{\log R})

where {R = x^{\theta/2}}, {\theta} is the level of distribution ({\theta=1/2-} if relying on Bombieri-Vinogradov, {\theta=1-} if assuming Elliott-Halberstam, and (in principle) {\theta = \frac{1}{2} + \frac{13}{540}-} if using Polymath8a technology), and {f: [0,+\infty)^k \rightarrow {\bf R}} is a smooth, compactly supported function. Most of the progress has come by enlarging the class of cutoff functions {f} one is permitted to use.
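To make the divisor sum concrete, here is a tiny numerical sketch of the weight {\lambda(n)^2} for the {2}-tuple {(h_1,h_2)=(0,2)}, with the classical (non-smooth) GPY-style cutoff {f(t_1,t_2) = (1-t_1-t_2)_+} and a small level {R=30}; these are illustrative choices, and the {W}-trick factor and the localisation to {[x,2x]} are omitted. The weight is visibly biased towards twin-prime candidates:

```python
# lambda(n) = sum over d1 | n, d2 | n+2 of mu(d1) mu(d2) f(log d1/log R, log d2/log R).
import math
from sympy import divisors, factorint

def mobius(n):
    fac = factorint(n)
    return 0 if any(e > 1 for e in fac.values()) else (-1) ** len(fac)

R = 30.0
def f(t1, t2):
    return max(0.0, 1.0 - t1 - t2)

def lam(n):
    return sum(mobius(d1) * mobius(d2)
               * f(math.log(d1) / math.log(R), math.log(d2) / math.log(R))
               for d1 in divisors(n) for d2 in divisors(n + 2))

for n in [101, 105, 107]:    # (101, 103) and (107, 109) are twin primes
    print(n, lam(n) ** 2)    # weights approximately 1.0, 0.1, 1.0
```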

The baseline bounds for the numerator and denominator in (1) (as established for instance in this previous post) are as follows. If {f} is supported on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k < 1 \},

and we define the mixed partial derivative {F: [0,+\infty)^k \rightarrow {\bf R}} by

\displaystyle  F(t_1,\dots,t_k) = \frac{\partial^k}{\partial t_1 \dots \partial t_k} f(t_1,\dots,t_k)

then the denominator in (1) is

\displaystyle  \frac{Bx}{W} (I_k(F) + o(1)) \ \ \ \ \ (2)

where

\displaystyle  B := (\frac{W}{\phi(W) \log R})^k

and

\displaystyle  I_k(F) := \int_{[0,+\infty)^k} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k.

Similarly, the numerator of (1) is

\displaystyle  \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_k(F) + o(1)) \ \ \ \ \ (3)

where

\displaystyle  J_k^{(m)}(F) := \int_{[0,+\infty)^{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

Thus, if we let {M_k} be the supremum of the ratio

\displaystyle  \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

whenever {F} is supported on {{\cal R}_k} and is non-vanishing, then one can prove {H_m \leq h_k - h_1} whenever

\displaystyle  M_k > \frac{2m}{\theta}.
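For orientation, the simplest trial function {F=1} on {{\cal R}_3} can be worked out exactly; the following sympy sketch reproduces the baseline lower bound {M_3 \geq 3/2} (the true supremum is somewhat larger):

```python
# Computes I_3(F) and J_3^{(3)}(F) exactly for F = 1 on the simplex R_3.
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
F = sp.Integer(1)

I3 = sp.integrate(F**2, (t3, 0, 1 - t1 - t2), (t2, 0, 1 - t1), (t1, 0, 1))
marg = sp.integrate(F, (t3, 0, 1 - t1 - t2))             # the marginal in t_3
J3 = sp.integrate(marg**2, (t2, 0, 1 - t1), (t1, 0, 1))  # J_3^{(3)}(F)

# by symmetry all three marginals contribute equally to the numerator
print(I3, J3, 3 * J3 / I3)   # 1/6, 1/12, 3/2
```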

We can improve this baseline in a number of ways. Firstly, with regards to the denominator in (1), if one upgrades the Elliott-Halberstam hypothesis {EH[\theta]} to the generalised Elliott-Halberstam hypothesis {GEH[\theta]} (currently known for {\theta < 1/2}, thanks to Motohashi, but conjectured for {\theta < 1}), the asymptotic (2) holds under the more general hypothesis that {F} is supported in a polytope {R}, as long as {R} obeys the inclusion

\displaystyle  R + R \subset \bigcup_{m=1}^k \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: \ \ \ \ \ (4)

\displaystyle  t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 2; t_m < 2/\theta \} \cup \frac{2}{\theta} \cdot {\cal R}_k;

examples of polytopes {R} obeying this constraint include the modified simplex

\displaystyle  {\cal R}'_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\dots+t_{m-1}+t_{m+1}+\dots+t_k < 1

\displaystyle \hbox{ for all } 1 \leq m \leq k \},

the prism

\displaystyle  {\cal R}_{k-1} \times [0, 1/\theta)

the dilated simplex

\displaystyle  \frac{1}{\theta} \cdot {\cal R}_k

and the truncated simplex

\displaystyle  \frac{k}{k-1} \cdot {\cal R}_k \cap [0,1/\theta)^k.

See this previous post for a proof of these claims.

With regards to the numerator, the asymptotic (3) is valid whenever, for each {1 \leq m \leq k}, the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {{\cal R}_{k-1}}. This is automatic if {F} is supported on {{\cal R}_k}, or on the slightly larger region {{\cal R}'_k}, but is an additional constraint when {F} is supported on one of the other polytopes {R} mentioned above.

More recently, we have obtained a more flexible version of the above asymptotic: if the marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanish outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}} for some {0 < \varepsilon < 1}, then the numerator of (1) has a lower bound of

\displaystyle  \frac{Bx}{W} \frac{2}{\theta} (\sum_{m=1}^k J^{(m)}_{k,\varepsilon}(F) + o(1))

where

\displaystyle  J_{k,\varepsilon}^{(m)}(F) := \int_{(1-\varepsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \dots dt_{m-1} dt_{m+1} \dots dt_k.

A proof is given here. Putting all this together, we can conclude

Theorem 1 Suppose we can find {0 \leq \varepsilon < 1} and a function {F} supported on a polytope {R} obeying (4), not identically zero and with all marginals {\int_0^\infty F(t_1,\ldots,t_k)\ dt_m} vanishing outside of {(1+\varepsilon) \cdot {\cal R}_{k-1}}, and with

\displaystyle  \frac{\sum_{m=1}^k J_{k,\varepsilon}^{(m)}(F)}{I_k(F)} > \frac{2m}{\theta}.

Then {GEH[\theta]} implies {H_m \leq h_k-h_1}.

In principle, this very flexible criterion for upper bounding {H_m} should lead to better bounds than before, and in particular we have now established {H_1 \leq 8} on GEH.

Another promising direction is to try to improve the analysis at medium {k} (more specifically, in the regime {k \sim 50}), which is where our current unconditional bounds (i.e. without EH or GEH) come from, via numerical quadratic programming. Right now we are only using {\theta=1/2} and the baseline {M_k} analysis, basically for two reasons:

  • We do not have good numerical formulae for integrating polynomials on any region more complicated than the simplex {{\cal R}_k} in medium dimension.
  • The estimates {MPZ^{(i)}[\varpi,\delta]} produced by Polymath8a involve a {\delta} parameter, which introduces additional restrictions on the support of {F} (conservatively, it restricts {F} to {[0,\delta']^k} where {\delta' := \frac{\delta}{1/4+\varpi}} and {\theta = 1/2 + 2 \varpi}; it should be possible to be looser than this (as was done in Polymath8a) but this has not been fully explored yet). This then triggers the previous obstacle of having to integrate on something other than a simplex.

However, these look like solvable problems, and so I would expect that further unconditional improvement for {H_1} should be possible.

This is the fourth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}.
  • (Polymath8b, tentative) {H_1 \leq 272}. Assuming Elliott-Halberstam, {H_2 \leq 272}.
  • (Polymath8b, tentative) {H_2 \leq 429{,}822}. Assuming Elliott-Halberstam, {H_4 \leq 493{,}408}.
  • (Polymath8b, tentative) {H_3 \leq 26{,}682{,}014}. (Presumably a comparable bound also holds for {H_6} on Elliott-Halberstam, but this has not been computed.)
  • (Polymath8b) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}. Assuming Elliott-Halberstam, {H_m \ll m e^{2m}} for sufficiently large {m}.

While the {H_1} bound on the Elliott-Halberstam conjecture has not improved since the start of the Polymath8b project, there is reason to hope that it will soon fall, hopefully to {8}. This is because we have begun to exploit more fully the fact that when using “multidimensional Selberg-GPY” sieves of the form

\displaystyle  \nu(n) := \sigma_{f,k}(n)^2

with

\displaystyle  \sigma_{f,k}(n) := \sum_{d_1|n+h_1,\dots,d_k|n+h_k} \mu(d_1) \dots \mu(d_k) f( \frac{\log d_1}{\log R},\dots,\frac{\log d_k}{\log R}),

where {R := x^{\theta/2}}, it is not necessary for the smooth function {f: [0,+\infty)^k \rightarrow {\bf R}} to be supported on the simplex

\displaystyle {\cal R}_k := \{ (t_1,\dots,t_k)\in [0,1]^k: t_1+\dots+t_k \leq 1\},

but can in fact be allowed to range on larger sets. First of all, {f} may instead be supported on the slightly larger polytope

\displaystyle {\cal R}'_k := \{ (t_1,\dots,t_k)\in [0,1]^k: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 1

\displaystyle  \hbox{ for all } j=1,\dots,k\}.

However, it turns out that more is true: given a sufficiently general version of the Elliott-Halberstam conjecture {EH[\theta]} at the given value of {\theta}, one may work with functions {f} supported on more general domains {R}, so long as the sumset {R+R := \{ t+t': t,t'\in R\}} is contained in the non-convex region

\displaystyle  \bigcup_{j=1}^k \{ (t_1,\dots,t_k)\in [0,\frac{2}{\theta}]^k: t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k \leq 2 \} \cup \frac{2}{\theta} \cdot {\cal R}_k, \ \ \ \ \ (1)

and also provided that the restriction

\displaystyle  (t_1,\dots,t_{j-1},t_{j+1},\dots,t_k) \mapsto f(t_1,\dots,t_{j-1},0,t_{j+1},\dots,t_k) \ \ \ \ \ (2)

is supported on the simplex

\displaystyle {\cal R}_{k-1} := \{ (t_1,\dots,t_{j-1},t_{j+1},\dots,t_k)\in [0,1]^{k-1}:

\displaystyle t_1+\dots+t_{j-1}+t_{j+1}+\dots t_k \leq 1\}.

More precisely, if {f} is a smooth function, not identically zero, with the above properties for some {R}, and the ratio

\displaystyle  \sum_{j=1}^k \int_{{\cal R}_{k-1}} f_{1,\dots,j-1,j+1,\dots,k}(t_1,\dots,t_{j-1},0,t_{j+1},\dots,t_k)^2 \ \ \ \ \ (3)

\displaystyle dt_1 \dots dt_{j-1} dt_{j+1} \dots dt_k

\displaystyle  / \int_R f_{1,\dots,k}^2(t_1,\dots,t_k)\ dt_1 \dots dt_k

is larger than {\frac{2m}{\theta}}, then the claim {DHL[k,m+1]} holds (assuming {EH[\theta]}), and in particular {H_m \leq H(k)}.

I’ll explain why one can do this below the fold. Taking this for granted, we can rewrite this criterion in terms of the mixed derivative {F := f_{1,\dots,k}}, the upshot being that if one can find a smooth function {F} supported on {R} that obeys the vanishing marginal conditions

\displaystyle  \int F( t_1,\dots,t_k )\ dt_j = 0

whenever {1 \leq j \leq k} and {t_1+\dots+t_{j-1}+t_{j+1}+\dots+t_k > 1}, and the ratio

\displaystyle  \frac{\sum_{j=1}^k J_k^{(j)}(F)}{I_k(F)} \ \ \ \ \ (4)

is larger than {\frac{2m}{\theta}}, where

\displaystyle  I_k(F) := \int_R F(t_1,\dots,t_k)^2\ dt_1 \dots dt_k

and

\displaystyle  J_k^{(j)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1/\theta} F(t_1,\dots,t_k)\ dt_j)^2 dt_1 \dots dt_{j-1} dt_{j+1} \dots dt_k

then {DHL[k,m+1]} holds. (To equate these two formulations, it is convenient to assume that {R} is a downset, in the sense that whenever {(t_1,\dots,t_k) \in R}, the entire box {[0,t_1] \times \dots \times [0,t_k]} lies in {R}, but one can easily enlarge {R} to be a downset without destroying the containment of {R+R} in the non-convex region (1).) One initially requires {F} to be smooth, but a limiting argument allows one to relax to bounded measurable {F}. (To approximate a rough {F} by a smooth {F} while retaining the required moment conditions, one can first apply a slight dilation and translation so that the marginals of {F} are supported on a slightly smaller version of the simplex {{\cal R}_{k-1}}, and then convolve by a smooth approximation to the identity to make {F} smooth, while keeping the marginals supported on {{\cal R}_{k-1}}.)

We are now exploring various choices of {R} to work with, including the prism

\displaystyle  \{ (t_1,\dots,t_k) \in [0,1/\theta]^k: t_1+\dots+t_{k-1} \leq 1 \}

and the symmetric region

\displaystyle  \{ (t_1,\dots,t_k) \in [0,1/\theta]^k: t_1+\dots+t_k \leq \frac{k}{k-1} \}.

By suitably subdividing these regions into polytopes, and working with piecewise polynomial functions {F} that are polynomial of a specified degree on each subpolytope, one can phrase the problem of optimising (4) as a quadratic program, which we have managed to work with for {k=3}. Extending this program to {k=4}, there is a decent chance that we will be able to obtain {DHL[4,2]} on EH.
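As a toy instance of this quadratic programming step (for {k=3}, and on the plain simplex {{\cal R}_3} rather than the enlarged regions above), one can optimise the ratio (4) over the two-parameter family {F = a + b(t_1+t_2+t_3)}: maximising a ratio of two quadratic forms in {(a,b)} is a {2 \times 2} generalised eigenvalue problem. The sketch below returns {1 + \sqrt{10}/5 \approx 1.632}, already improving on the constant-{F} baseline of {3/2}:

```python
# Optimises (sum_j J_3^{(j)}(F)) / I_3(F) over F = a + b (t1 + t2 + t3) on R_3.
import sympy as sp

t1, t2, t3, a, b, lam = sp.symbols('t1 t2 t3 a b lam')
F = a + b * (t1 + t2 + t3)

I = sp.integrate(F**2, (t3, 0, 1 - t1 - t2), (t2, 0, 1 - t1), (t1, 0, 1))
marg = sp.integrate(F, (t3, 0, 1 - t1 - t2))
J = 3 * sp.integrate(marg**2, (t2, 0, 1 - t1), (t1, 0, 1))  # symmetric in axes

# halved Hessians of the two quadratic forms in (a, b)
A = sp.Matrix(2, 2, lambda i, j: sp.diff(J, [a, b][i], [a, b][j]) / 2)
B = sp.Matrix(2, 2, lambda i, j: sp.diff(I, [a, b][i], [a, b][j]) / 2)

roots = sp.solve((A - lam * B).det(), lam)   # generalised eigenvalues
print(roots, [sp.N(r) for r in roots])       # largest root: 1 + sqrt(10)/5 ~ 1.632
```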

We have also been able to numerically optimise {M_k} quite accurately for medium values of {k} (e.g. {k \sim 50}), which has led to improved values of {H_1} without EH. For large {k}, we now also have the asymptotic {M_k=\log k - O(1)} with explicit error terms (details here) which have allowed us to slightly improve the {m=2} numerology, and also to get explicit {m=3} numerology for the first time.


(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

  • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
  • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
  • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.

With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which, roughly speaking, asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.


For each natural number {m}, let {H_m} denote the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

where {p_n} denotes the {n}th prime. In other words, {H_m} is the least quantity such that there are infinitely many intervals of length {H_m} that contain {m+1} or more primes. Thus, for instance, the twin prime conjecture is equivalent to the assertion that {H_1 = 2}, and the prime tuples conjecture would imply that {H_m} is equal to the diameter of the narrowest admissible tuple of cardinality {m+1} (thus we conjecturally have {H_1 = 2}, {H_2 = 6}, {H_3 = 8}, {H_4 = 12}, {H_5 = 16}, and so forth; see this web page for a continuation of this sequence).
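
To make the notion of admissibility concrete: a tuple is admissible if, for each prime {p}, it avoids at least one residue class mod {p}, and only primes up to the cardinality of the tuple need to be checked. Here is a quick computational sketch (in Python, assuming sympy is available; the tuples tested are the conjecturally optimal ones listed above, and are not otherwise part of this post):

    from sympy import primerange

    def is_admissible(H):
        # H is admissible iff for every prime p it misses a residue class mod p;
        # a tuple of k elements cannot occupy all p classes when p > k, so only
        # primes p <= k need to be checked.
        k = len(H)
        for p in primerange(2, k + 1):
            if len({h % p for h in H}) == p:
                return False
        return True

    # conjecturally narrowest admissible tuples of cardinality m+1 for m = 1,...,5
    for H in [(0, 2), (0, 2, 6), (0, 2, 6, 8), (0, 2, 6, 8, 12), (0, 4, 6, 10, 12, 16)]:
        print(H, "diameter =", H[-1] - H[0], "admissible:", is_admissible(H))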

In 2004, Goldston, Pintz, and Yildirim established the bound {H_1 \leq 16} conditionally on the Elliott-Halberstam conjecture, which remains unproven. However, no unconditional finiteness of {H_1} was obtained (although they famously obtained the non-trivial bound {p_{n+1}-p_n = o(\log p_n)}), and even on the Elliott-Halberstam conjecture no finiteness result for the higher {H_m} was obtained either (although they were able to show {p_{n+2}-p_n=o(\log p_n)} on this conjecture). In the recent breakthrough of Zhang, the unconditional bound {H_1 \leq 70,000,000} was obtained by establishing a weak partial version of the Elliott-Halberstam conjecture; by refining these methods, the Polymath8 project (which I suppose we could retroactively call the Polymath8a project) then lowered this bound to {H_1 \leq 4,680}.

With the very recent preprint of James Maynard, we have the following further substantial improvements:

Theorem 1 (Maynard’s theorem) Unconditionally, we have the following bounds:

  • {H_1 \leq 600}.
  • {H_m \leq C m^3 e^{4m}} for an absolute constant {C} and any {m \geq 1}.

If one assumes the Elliott-Halberstam conjecture, we have the following improved bounds:

  • {H_1 \leq 12}.
  • {H_2 \leq 600}.
  • {H_m \leq C m^3 e^{2m}} for an absolute constant {C} and any {m \geq 1}.

The final conclusion {H_m \leq C m^3 e^{2m}} on Elliott-Halberstam is not explicitly stated in Maynard’s paper, but follows easily from his methods, as I will describe below the fold. (At around the same time as Maynard’s work, I had also begun a similar set of calculations concerning {H_m}, but was only able to obtain the slightly weaker bound {H_m \leq C \exp( C m )} unconditionally.) In the converse direction, the prime tuples conjecture implies that {H_m} should be comparable to {m \log m}. Granville has also obtained the slightly weaker explicit bound {H_m \leq e^{8m+5}} for any {m \geq 1} by a slight modification of Maynard’s argument.

The arguments of Maynard avoid using the difficult partial results on (weakened forms of) the Elliott-Halberstam conjecture that were established by Zhang and then refined by Polymath8; instead, the main input is the classical Bombieri-Vinogradov theorem, combined with a sieve that is closer in spirit to an older sieve of Goldston and Yildirim than to the sieve used later by Goldston, Pintz, and Yildirim, on which almost all subsequent work is based.

The aim of the Polymath8b project is to obtain improved bounds on {H_1, H_2}, and higher values of {H_m}, either conditionally on the Elliott-Halberstam conjecture or unconditionally. The likeliest routes for doing this are by optimising Maynard’s arguments and/or combining them with some of the results from the Polymath8a project. This post is intended to be the first research thread for that purpose. To start the ball rolling, I am going to give below a presentation of Maynard’s results, with some minor technical differences (most significantly, I am using the Goldston-Pintz-Yildirim variant of the Selberg sieve, rather than the traditional “elementary Selberg sieve” that is used by Maynard (and also in the Polymath8 project), although it seems that the numerology obtained by both sieves is essentially the same). An alternate exposition of Maynard’s work has also just been completed by Andrew Granville.


If {f: {\bf R}^n \rightarrow {\bf C}} and {g: {\bf R}^n \rightarrow {\bf C}} are two absolutely integrable functions on a Euclidean space {{\bf R}^n}, then the convolution {f*g: {\bf R}^n \rightarrow {\bf C}} of the two functions is defined by the formula

\displaystyle  f*g(x) := \int_{{\bf R}^n} f(y) g(x-y)\ dy = \int_{{\bf R}^n} f(x-z) g(z)\ dz.

A simple application of the Fubini-Tonelli theorem shows that the convolution {f*g} is well-defined almost everywhere, and yields another absolutely integrable function. In the case that {f=1_F}, {g=1_G} are indicator functions, the convolution simplifies to

\displaystyle  1_F*1_G(x) = m( F \cap (x-G) ) = m( (x-F) \cap G ) \ \ \ \ \ (1)

where {m} denotes Lebesgue measure. One can also define convolution on more general locally compact groups than {{\bf R}^n}, but we will restrict attention to the Euclidean case in this post.

The convolution {f*g} can also be defined by duality by observing the identity

\displaystyle  \int_{{\bf R}^n} f*g(x) h(x)\ dx = \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ f(y) dy g(z) dz

for any bounded measurable function {h: {\bf R}^n \rightarrow {\bf C}}. Motivated by this observation, we may define the convolution {\mu*\nu} of two finite Borel measures on {{\bf R}^n} by the formula

\displaystyle  \int_{{\bf R}^n} h(x)\ d\mu*\nu(x) := \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (2)

for any bounded (Borel) measurable function {h: {\bf R}^n \rightarrow {\bf C}}, or equivalently that

\displaystyle  \mu*\nu(E) = \int_{{\bf R}^n} \int_{{\bf R}^n} 1_E(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (3)

for all Borel measurable {E}. (In another equivalent formulation: {\mu*\nu} is the pushforward of the product measure {\mu \times \nu} with respect to the addition map {+: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n}.) This can easily be verified to again be a finite Borel measure.

If {\mu} and {\nu} are probability measures, then the convolution {\mu*\nu} also has a simple probabilistic interpretation: it is the law (i.e. probability distribution) of a random variable of the form {X+Y}, where {X, Y} are independent random variables taking values in {{\bf R}^n} with law {\mu,\nu} respectively. Among other things, this interpretation makes it obvious that the support of {\mu*\nu} is the closure of the sumset of the supports of {\mu} and {\nu}, and that {\mu*\nu} will also be a probability measure.
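
As a quick numerical illustration of this probabilistic interpretation (a sketch, assuming numpy is available; nothing later depends on it): the law of {X+Y} for {X,Y} independent and uniform on {[0,1]} is the convolution {1_{[0,1]}*1_{[0,1]}}, i.e. the triangular density on {[0,2]}.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**6
    samples = rng.uniform(0, 1, n) + rng.uniform(0, 1, n)  # X + Y with X, Y ~ U[0,1] independent
    hist, edges = np.histogram(samples, bins=40, range=(0, 2), density=True)
    mid = (edges[:-1] + edges[1:]) / 2
    tent = np.where(mid < 1, mid, 2 - mid)                  # density of 1_[0,1] * 1_[0,1]
    print(np.abs(hist - tent).max())                        # small, up to sampling error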

While the above discussion gives a perfectly rigorous definition of the convolution of two measures, it does not always give helpful guidance as to how to compute the convolution of two explicit measures (e.g. the convolution of two surface measures on explicit examples of surfaces, such as the sphere). In simple cases, one can work from first principles directly from the definition (2), (3), perhaps after some application of tools from several variable calculus, such as the change of variables formula. Another technique proceeds by regularisation, approximating the measures {\mu, \nu} involved as the weak limit (or vague limit) of absolutely integrable functions

\displaystyle  \mu = \lim_{\epsilon \rightarrow 0} f_\epsilon; \quad \nu =\lim_{\epsilon \rightarrow 0} g_\epsilon

(where we identify an absolutely integrable function {f} with the associated absolutely continuous measure {dm_f(x) := f(x)\ dx}) which then implies (assuming that the sequences {f_\epsilon,g_\epsilon} are tight) that {\mu*\nu} is the weak limit of the {f_\epsilon * g_\epsilon}. The latter convolutions {f_\epsilon * g_\epsilon}, being convolutions of functions rather than measures, can be computed (or at least estimated) by traditional integration techniques, at which point the only difficulty is to ensure that one has enough uniformity in {\epsilon} to maintain control of the limit as {\epsilon \rightarrow 0}.

A third method proceeds using the Fourier transform

\displaystyle  \hat \mu(\xi) := \int_{{\bf R}^n} e^{-2\pi i x \cdot \xi}\ d\mu(x)

of {\mu} (and of {\nu}). We have

\displaystyle  \widehat{\mu*\nu}(\xi) = \hat{\mu}(\xi) \hat{\nu}(\xi)

and so one can (in principle, at least) compute {\mu*\nu} by taking Fourier transforms, multiplying them together, and applying the (distributional) inverse Fourier transform. Heuristically, this formula implies that the Fourier transform of {\mu*\nu} should be concentrated in the intersection of the frequency region where the Fourier transform of {\mu} is supported, and the frequency region where the Fourier transform of {\nu} is supported. As the regularity of a measure is related to decay of its Fourier transform, this also suggests that the convolution {\mu*\nu} of two measures will typically be more regular than each of the two original measures, particularly if the Fourier transforms of {\mu} and {\nu} are concentrated in different regions of frequency space (which should happen if the measures {\mu,\nu} are suitably “transverse”). In particular, it can happen that {\mu*\nu} is an absolutely continuous measure, even if {\mu} and {\nu} are both singular measures.

Using intuition from microlocal analysis, we can combine our understanding of the spatial and frequency behaviour of convolution to the following heuristic: a convolution {\mu*\nu} should be supported in regions of phase space {\{ (x,\xi): x \in {\bf R}^n, \xi \in {\bf R}^n \}} of the form {(x,\xi) = (x_1+x_2,\xi)}, where {(x_1,\xi)} lies in the region of phase space where {\mu} is concentrated, and {(x_2,\xi)} lies in the region of phase space where {\nu} is concentrated. It is a challenge to make this intuition perfectly rigorous, as one has to somehow deal with the obstruction presented by the Heisenberg uncertainty principle, but it can be made rigorous in various asymptotic regimes, for instance using the machinery of wave front sets (which describes the high frequency limit of the phase space distribution).

Let us illustrate these three methods and the final heuristic with a simple example. Let {\mu} be a singular measure on the horizontal unit interval {[0,1] \times \{0\} = \{ (x,0): 0 \leq x \leq 1 \}}, given by weighting Lebesgue measure on that interval by some test function {\phi: {\bf R} \rightarrow {\bf C}} supported on {[0,1]}:

\displaystyle  \int_{{\bf R}^2} f(x,y)\ d\mu(x,y) := \int_{\bf R} f(x,0) \phi(x)\ dx.

Similarly, let {\nu} be a singular measure on the vertical unit interval {\{0\} \times [0,1] = \{ (0,y): 0 \leq y \leq 1 \}} given by weighting Lebesgue measure on that interval by another test function {\psi: {\bf R} \rightarrow {\bf C}} supported on {[0,1]}:

\displaystyle  \int_{{\bf R}^2} g(x,y)\ d\nu(x,y) := \int_{\bf R} g(0,y) \psi(y)\ dy.

We can compute the convolution {\mu*\nu} using (2), which in this case becomes

\displaystyle  \int_{{\bf R}^2} h( x, y ) d\mu*\nu(x,y) = \int_{{\bf R}^2} \int_{{\bf R}^2} h(x_1+x_2, y_1+y_2)\ d\mu(x_1,y_1) d\nu(x_2,y_2)

\displaystyle  = \int_{\bf R} \int_{\bf R} h( x_1, y_2 )\ \phi(x_1) dx_1 \psi(y_2) dy_2

and we thus conclude that {\mu*\nu} is an absolutely continuous measure on {{\bf R}^2} with density function {(x,y) \mapsto \phi(x) \psi(y)}:

\displaystyle  d(\mu*\nu)(x,y) = \phi(x) \psi(y) dx dy. \ \ \ \ \ (4)

In particular, {\mu*\nu} is supported on the unit square {[0,1]^2}, which is of course the sumset of the two intervals {[0,1] \times\{0\}} and {\{0\} \times [0,1]}.

We can arrive at the same conclusion from the regularisation method; the computations become lengthier, but more geometric in nature, and emphasise the role of transversality between the two segments supporting {\mu} and {\nu}. One can view {\mu} as the weak limit of the functions

\displaystyle  f_\epsilon(x,y) := \frac{1}{\epsilon} \phi(x) 1_{[0,\epsilon]}(y)

as {\epsilon \rightarrow 0} (where we continue to identify absolutely integrable functions with absolutely continuous measures, and of course we keep {\epsilon} positive). We can similarly view {\nu} as the weak limit of

\displaystyle  g_\epsilon(x,y) := \frac{1}{\epsilon} 1_{[0,\epsilon]}(x) \psi(y).

Let us first look at the model case when {\phi=\psi=1_{[0,1]}}, so that {f_\epsilon,g_\epsilon} are renormalised indicator functions of thin rectangles:

\displaystyle  f_\epsilon = \frac{1}{\epsilon} 1_{[0,1]\times [0,\epsilon]}; \quad g_\epsilon = \frac{1}{\epsilon} 1_{[0,\epsilon] \times [0,1]}.

By (1), the convolution {f_\epsilon*g_\epsilon} is then given by

\displaystyle  f_\epsilon*g_\epsilon(x,y) := \frac{1}{\epsilon^2} m( E_\epsilon )

where {E_\epsilon} is the intersection of two rectangles:

\displaystyle  E_\epsilon := ([0,1] \times [0,\epsilon]) \cap ((x,y) - [0,\epsilon] \times [0,1]).

When {(x,y)} lies in the square {[\epsilon,1] \times [\epsilon,1]}, one readily sees (especially if one draws a picture) that {E_\epsilon} consists of an {\epsilon \times \epsilon} square and thus has measure {\epsilon^2}; conversely, if {(x,y)} lies outside {[0,1+\epsilon] \times [0,1+\epsilon]}, {E_\epsilon} is empty and thus has measure zero. In the intermediate region, {E_\epsilon} will have some measure between {0} and {\epsilon^2}. From this we see that {f_\epsilon*g_\epsilon} converges pointwise almost everywhere to {1_{[0,1] \times [0,1]}} while also being dominated by an absolutely integrable function, and so converges weakly to {1_{[0,1] \times [0,1]}}, giving a special case of the formula (4).
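
For the sceptical reader, this model computation can also be checked numerically; the following sketch (assuming scipy; the grid spacing and value of {\epsilon} are arbitrary choices) approximates {f_\epsilon*g_\epsilon} by a discrete convolution and evaluates it well inside the unit square.

    import numpy as np
    from scipy.signal import fftconvolve

    h, eps = 0.01, 0.05                       # grid spacing and rectangle thickness
    x = np.arange(0, 1.5, h)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = (X < 1) * (Y < eps) / eps             # f_eps = (1/eps) 1_{[0,1] x [0,eps]}
    g = (X < eps) * (Y < 1) / eps             # g_eps = (1/eps) 1_{[0,eps] x [0,1]}
    conv = fftconvolve(f, g) * h * h          # Riemann sum approximation of f_eps * g_eps
    i = int(0.5 / h)                          # index of the point (0.5, 0.5)
    print(conv[i, i])                         # approximately 1, as predicted by (4)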

Exercise 1 Use a similar method to verify (4) in the case that {\phi, \psi} are continuous functions on {[0,1]}. (The argument also works for absolutely integrable {\phi,\psi}, but one needs to invoke the Lebesgue differentiation theorem to make it run smoothly.)

Now we compute with the Fourier-analytic method. The Fourier transform {\hat \mu(\xi,\eta)} of {\mu} is given by

\displaystyle  \hat \mu(\xi,\eta) =\int_{{\bf R}^2} e^{-2\pi i (x \xi + y \eta)}\ d\mu(x,y)

\displaystyle  = \int_{\bf R} \phi(x) e^{-2\pi i x \xi}\ dx

\displaystyle  = \hat \phi(\xi)

where we abuse notation slightly by using {\hat \phi} to refer to the one-dimensional Fourier transform of {\phi}. In particular, {\hat \mu} decays in the {\xi} direction (by the Riemann-Lebesgue lemma) but has no decay in the {\eta} direction, which reflects the horizontally grained structure of {\mu}. Similarly we have

\displaystyle  \hat \nu(\xi,\eta) = \hat \psi(\eta),

so that {\hat \nu} decays in the {\eta} direction. The convolution {\mu*\nu} then has decay in both the {\xi} and {\eta} directions,

\displaystyle  \widehat{\mu*\nu}(\xi,\eta) = \hat \phi(\xi) \hat \psi(\eta)

and by inverting the Fourier transform we obtain (4).

Exercise 2 Let {AB} and {CD} be two non-parallel line segments in the plane {{\bf R}^2}. If {\mu} is the uniform probability measure on {AB} and {\nu} is the uniform probability measure on {CD}, show that {\mu*\nu} is the uniform probability measure on the parallelogram {AB + CD} with vertices {A+C, A+D, B+C, B+D}. What happens in the degenerate case when {AB} and {CD} are parallel?

Finally, we compare the above answers with what one gets from the microlocal analysis heuristic. The measure {\mu} is supported on the horizontal interval {[0,1] \times \{0\}}, and the cotangent bundle at any point on this interval points in the vertical direction. Thus, the wave front set of {\mu} should be supported on those points {((x_1,x_2),(\xi_1,\xi_2))} in phase space with {x_1 \in [0,1]}, {x_2 = 0} and {\xi_1=0}. Similarly, the wave front set of {\nu} should be supported at those points {((y_1,y_2),(\xi_1,\xi_2))} with {y_1 = 0}, {y_2 \in [0,1]}, and {\xi_2=0}. The convolution {\mu * \nu} should then have wave front set supported on those points {((x_1+y_1,x_2+y_2), (\xi_1,\xi_2))} with {x_1 \in [0,1]}, {x_2 = 0}, {\xi_1=0}, {y_1=0}, {y_2 \in [0,1]}, and {\xi_2=0}, i.e. it should be spatially supported on the unit square and have zero (rescaled) frequency, so the heuristic predicts a smooth function on the unit square, which is indeed what happens. (The situation is slightly more complicated in the non-smooth case {\phi=\psi=1_{[0,1]}}, because {\mu} and {\nu} then acquire some additional singularities at the endpoints; namely, the wave front set of {\mu} now also contains those points {((x_1,x_2),(\xi_1,\xi_2))} with {x_1 \in \{0,1\}}, {x_2=0}, and {\xi_1,\xi_2} arbitrary, and {\nu} similarly contains those points {((y_1,y_2), (\xi_1,\xi_2))} with {y_1=0}, {y_2 \in \{0,1\}}, and {\xi_1,\xi_2} arbitrary. I’ll leave it as an exercise to the reader to compute what this predicts for the wave front set of {\mu*\nu}, and how this compares with the actual wave front set.)

Exercise 3 Let {\mu} be the uniform measure on the unit sphere {S^{n-1}} in {{\bf R}^n} for some {n \geq 2}. Use as many of the above methods as possible to establish multiple proofs of the following fact: the convolution {\mu*\mu} is an absolutely continuous multiple {f(x)\ dx} of Lebesgue measure, with {f(x)} supported on the ball {B(0,2)} of radius {2} and obeying the bounds

\displaystyle  |f(x)| \ll \frac{1}{|x|}

for {|x| \leq 1} and

\displaystyle  |f(x)| \ll (2-|x|)^{(n-3)/2}

for {1 \leq |x| \leq 2}, where the implied constants are allowed to depend on the dimension {n}. (Hint: try the {n=2} case first, which is particularly simple due to the fact that the addition map {+: S^1 \times S^1 \rightarrow {\bf R}^2} is mostly a local diffeomorphism. The Fourier-based approach is instructive, but requires either asymptotics of Bessel functions or the principle of stationary phase.)
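
While Exercise 3 is intended to be done by hand, one can at least sanity-check the answer by simulation. The following Monte Carlo sketch (an illustration, not a proof) treats the case {n=3}, where one can in fact compute {f(x) = \frac{1}{8\pi |x|}} on the ball {B(0,2)}, consistent with both stated bounds (note that {(2-|x|)^{(n-3)/2} = 1} when {n=3}).

    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(k):                             # k independent uniform points on S^2
        v = rng.standard_normal((k, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    r = np.linalg.norm(sphere(10**6) + sphere(10**6), axis=1)  # samples of |X + Y|
    hist, edges = np.histogram(r, bins=20, range=(0, 2), density=True)
    mid = (edges[:-1] + edges[1:]) / 2
    f = hist / (4 * np.pi * mid**2)            # radial profile of the density of mu*mu
    print(np.abs(8 * np.pi * mid * f - 1).max())   # close to 0, i.e. f(x) ~ 1/(8 pi |x|)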

The fundamental notions of calculus, namely differentiation and integration, are often viewed as being the quintessential concepts in mathematical analysis, as their standard definitions involve the concept of a limit. However, it is possible to capture most of the essence of these notions by purely algebraic means (almost completely avoiding the use of limits, Riemann sums, and similar devices), which turns out to be useful when trying to generalise these concepts to more abstract situations in which it becomes convenient to permit the underlying number systems involved to be something other than the real or complex numbers, even if this makes many standard analysis constructions unavailable. For instance, the algebraic notion of a derivation often serves as a substitute for the analytic notion of a derivative in such cases, by abstracting out the key algebraic properties of differentiation, namely linearity and the Leibniz rule (also known as the product rule).

Abstract algebraic analogues of integration are less well known, but can still be developed. To motivate such an abstraction, consider the integration functional {I: {\mathcal S}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}} from the space {{\mathcal S}({\bf R} \rightarrow {\bf C})} of complex-valued Schwartz functions {f: {\bf R} \rightarrow {\bf C}} to the complex numbers, defined by

\displaystyle  I(f) := \int_{\bf R} f(x)\ dx

where the integration on the right is the usual Lebesgue integral (or improper Riemann integral) from analysis. This functional obeys two obvious algebraic properties. Firstly, it is linear over {{\bf C}}, thus

\displaystyle  I(cf) = c I(f) \ \ \ \ \ (1)

and

\displaystyle  I(f+g) = I(f) + I(g) \ \ \ \ \ (2)

for all {f,g \in {\mathcal S}({\bf R} \rightarrow {\bf C})} and {c \in {\bf C}}. Secondly, it is translation invariant, thus

\displaystyle  I(\tau_h f) = I(f) \ \ \ \ \ (3)

for all {h \in {\bf R}}, where {\tau_h f(x) := f(x-h)} is the translation of {f} by {h}. Motivated by the uniqueness theory of Haar measure, one might expect that these two axioms already uniquely determine {I} after one sets a normalisation, for instance by requiring that

\displaystyle  I( x \mapsto e^{-\pi x^2} ) = 1. \ \ \ \ \ (4)

This is not quite true as stated (one can modify the proof of the Hahn-Banach theorem, after first applying a Fourier transform, to create pathological translation-invariant linear functionals on {{\mathcal S}({\bf R} \rightarrow {\bf C})} that are not multiples of the standard integration functional), but if one adds a mild analytical axiom, such as continuity of {I} (using the usual Schwartz topology on {{\mathcal S}({\bf R} \rightarrow {\bf C})}), then the above axioms are enough to uniquely pin down the notion of integration. Indeed, if {I: {\mathcal S}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}} is a continuous linear functional that is translation invariant, then from the linearity and translation invariance axioms one has

\displaystyle  I( \frac{\tau_h f - f}{h} ) = 0

for all {f \in {\mathcal S}({\bf R} \rightarrow {\bf C})} and non-zero reals {h}. If {f} is Schwartz, then as {h \rightarrow 0}, one can verify that the Newton quotients {\frac{\tau_h f - f}{h}} converge in the Schwartz topology to the derivative {f'} of {f}, so by the continuity axiom one has

\displaystyle  I(f') = 0.

Next, note that any Schwartz function of integral zero has an antiderivative which is also Schwartz, and so {I} annihilates all zero-integral Schwartz functions, and thus must be a scalar multiple of the usual integration functional. Using the normalisation (4), we see that {I} must therefore be the usual integration functional, giving the claimed uniqueness.

Motivated by the above discussion, we can define the notion of an abstract integration functional {I: X \rightarrow R} taking values in some vector space {R}, and applied to inputs {f} in some other vector space {X} that enjoys a linear action {h \mapsto \tau_h} (the “translation action”) of some group {V}, as being a functional which is both linear and translation invariant, thus one has the axioms (1), (2), (3) for all {f,g \in X}, scalars {c}, and {h \in V}. The previous discussion then considered the special case when {R = {\bf C}}, {X = {\mathcal S}({\bf R} \rightarrow {\bf C})}, {V = {\bf R}}, and {\tau} was the usual translation action.

Once we have performed this abstraction, we can now present analogues of classical integration which bear very little analytic resemblance to the classical concept, but which still have much of the algebraic structure of integration. Consider for instance the situation in which we keep the complex range {R = {\bf C}}, the translation group {V = {\bf R}}, and the usual translation action {h \mapsto \tau_h}, but we replace the space {{\mathcal S}({\bf R} \rightarrow {\bf C})} of Schwartz functions by the space {Poly_{\leq d}({\bf R} \rightarrow {\bf C})} of polynomials {x \mapsto a_0 + a_1 x + \ldots + a_d x^d} of degree at most {d} with complex coefficients, where {d} is a fixed natural number; note that this space is translation invariant, so it makes sense to talk about an abstract integration functional {I: Poly_{\leq d}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}}. Of course, one cannot apply traditional integration concepts to non-zero polynomials, as they are not absolutely integrable. But one can repeat the previous arguments to show that any abstract integration functional must annihilate derivatives of polynomials of degree at most {d}:

\displaystyle  I(f') = 0 \hbox{ for all } f \in Poly_{\leq d}({\bf R} \rightarrow {\bf C}). \ \ \ \ \ (5)

Clearly, every polynomial of degree at most {d-1} is thus annihilated by {I}, which makes {I} a scalar multiple of the functional that extracts the top coefficient {a_d} of a polynomial, thus if one sets a normalisation

\displaystyle  I( x \mapsto x^d ) = c

for some constant {c}, then one has

\displaystyle  I( x \mapsto a_0 + a_1 x + \ldots + a_d x^d ) = c a_d \ \ \ \ \ (6)

for any polynomial {x \mapsto a_0 + a_1 x + \ldots + a_d x^d}. So we see that up to a normalising constant, the operation of extracting the top order coefficient of a polynomial of fixed degree serves as the analogue of integration. In particular, despite the fact that integration is supposed to be the “opposite” of differentiation (as indicated for instance by (5)), we see in this case that integration is basically ({d}-fold) differentiation; indeed, compare (6) with the identity

\displaystyle  (\frac{d}{dx})^d ( a_0 + a_1 x + \ldots + a_d x^d ) = d! a_d.

In particular, we see that, in contrast to the usual Lebesgue integral, the integration functional (6) can be localised to an arbitrary location: one only needs to know the germ of the polynomial {x \mapsto a_0 + a_1 x + \ldots + a_d x^d} at a single point {x_0} in order to determine the value of the functional (6). This localisation property may initially seem at odds with the translation invariance, but the two can be reconciled thanks to the extremely rigid nature of the class {Poly_{\leq d}({\bf R} \rightarrow {\bf C})}, in contrast to the Schwartz class {{\mathcal S}({\bf R} \rightarrow {\bf C})}, which admits bump functions and so can generate local phenomena that can only be detected in small regions of the underlying spatial domain, and which therefore forces any translation-invariant integration functional on such function classes to measure the function at every single point in space.

The reversal of the relationship between integration and differentiation is also reflected in the fact that the abstract integration operation on polynomials interacts with the scaling operation {\delta_\lambda f(x) := f(x/\lambda)} in essentially the opposite way from the classical integration operation. Indeed, for classical integration on {{\bf R}^d}, one has

\displaystyle  \int_{{\bf R}^d} f(x/\lambda)\ dx = \lambda^d \int f(x)\ dx

for Schwartz functions {f \in {\mathcal S}({\bf R}^d \rightarrow {\bf C})}, and so in this case the integration functional {I(f) := \int_{{\bf R}^d} f(x)\ dx} obeys the scaling law

\displaystyle  I( \delta_\lambda f ) = \lambda^d I(f).

In contrast, the abstract integration operation defined in (6) obeys the opposite scaling law

\displaystyle  I( \delta_\lambda f ) = \lambda^{-d} I(f). \ \ \ \ \ (7)
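
Both the translation invariance (3) and this inverse scaling law (7) are easy to verify symbolically; here is a short sympy sketch (with {d=3} and the normalisation {c=1}; the symbol names are of course arbitrary).

    import sympy as sp

    x, h, lam = sp.symbols('x h lam')
    a = sp.symbols('a0:4')                          # coefficients a_0, ..., a_3
    f = sum(a[k] * x**k for k in range(4))
    top = lambda p: sp.Poly(sp.expand(p), x).coeff_monomial(x**3)   # the functional I

    print(sp.simplify(top(f.subs(x, x - h)) - top(f)))                # 0: translation invariance
    print(sp.simplify(top(f.subs(x, x / lam)) - lam**(-3) * top(f)))  # 0: the scaling law (7)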

Remark 1 One way to interpret what is going on is to view the integration operation (6) as a renormalised version of integration. A polynomial {x \mapsto a_0 + a_1 x + \ldots + a_d x^d} is, in general, not absolutely integrable, and the partial integrals

\displaystyle  \int_0^N a_0 + a_1 x + \ldots + a_d x^d\ dx

diverge as {N \rightarrow \infty}. But if one renormalises these integrals by the factor {\frac{1}{N^{d+1}}}, then one recovers convergence,

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{N^{d+1}} \int_0^N a_0 + a_1 x + \ldots + a_d x^d\ dx = \frac{1}{d+1} a_d

thus giving an interpretation of (6) as a renormalised classical integral, with the renormalisation being responsible for the unusual scaling relationship in (7). However, this interpretation is a little artificial, and it seems that it is best to view functionals such as (6) from an abstract algebraic perspective, rather than to try to force an analytic interpretation on them.
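
The renormalised limit in Remark 1 can likewise be confirmed symbolically; a brief sympy sketch (again with {d=3}):

    import sympy as sp

    x, N = sp.symbols('x N', positive=True)
    a = sp.symbols('a0:4')
    d = 3
    f = sum(a[k] * x**k for k in range(d + 1))
    renorm = sp.integrate(f, (x, 0, N)) / N**(d + 1)
    print(sp.limit(renorm, N, sp.oo))              # a3/4, i.e. a_d/(d+1)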

Now we return to the classical Lebesgue integral

\displaystyle  I(f) := \int_{\bf R} f(x)\ dx. \ \ \ \ \ (8)

As noted earlier, this integration functional has a translation invariance associated to translations along the real line {{\bf R}}, as well as a dilation invariance by real dilation parameters {\lambda>0}. However, if we refine the class {{\mathcal S}({\bf R} \rightarrow {\bf C})} of functions somewhat, we can obtain a stronger family of invariances, in which we allow complex translations and dilations. More precisely, let {\mathcal{SE}({\bf C} \rightarrow {\bf C})} denote the space of all functions {f: {\bf C} \rightarrow {\bf C}} which are entire (or equivalently, are given by a Taylor series with an infinite radius of convergence around the origin) and also admit rapid decay in a sectorial neighbourhood of the real line, or more precisely there exists an {\epsilon>0} such that for every {A > 0} there exists {C_A > 0} such that one has the bound

\displaystyle  |f(z)| \leq C_A (1+|z|)^{-A}

whenever {|\hbox{Im}(z)| \leq A + \epsilon |\hbox{Re}(z)|}. For want of a better name, we shall call elements of this space Schwartz entire functions. This is clearly a complex vector space. Typical examples of Schwartz entire functions are the complex gaussians

\displaystyle  f(z) := e^{-\pi (az^2 + 2bz + c)}

where {a,b,c} are complex numbers with {\hbox{Re}(a) > 0}. From the Cauchy integral formula (and its derivatives) we see that if {f} lies in {\mathcal{SE}({\bf C} \rightarrow {\bf C})}, then the restriction of {f} to the real line lies in {{\mathcal S}({\bf R} \rightarrow {\bf C})}; conversely, from analytic continuation we see that every function in {{\mathcal S}({\bf R} \rightarrow {\bf C})} has at most one extension in {\mathcal{SE}({\bf C} \rightarrow {\bf C})}. Thus one can identify {\mathcal{SE}({\bf C} \rightarrow {\bf C})} with a subspace of {{\mathcal S}({\bf R} \rightarrow {\bf C})}, and in particular the integration functional (8) is inherited by {\mathcal{SE}({\bf C} \rightarrow {\bf C})}, and by abuse of notation we denote the resulting functional {I: \mathcal{SE}({\bf C} \rightarrow {\bf C}) \rightarrow {\bf C}} as {I} also. Note, in analogy with the situation with polynomials, that this abstract integration functional is somewhat localised; one only needs to evaluate the function {f} on the real line, rather than the entire complex plane, in order to compute {I(f)}. This is consistent with the rigid nature of Schwartz entire functions, as one can uniquely recover the entire function from its values on the real line by analytic continuation.

Of course, the functional {I: \mathcal{SE}({\bf C} \rightarrow {\bf C}) \rightarrow {\bf C}} remains translation invariant with respect to real translation:

\displaystyle  I(\tau_h f) = I(f) \hbox{ for all } h \in {\bf R}.

However, thanks to contour shifting, we now also have translation invariance with respect to complex translation:

\displaystyle  I(\tau_h f) = I(f) \hbox{ for all } h \in {\bf C},

where of course we continue to define the translation operator {\tau_h} for complex {h} by the usual formula {\tau_h f(x) := f(x-h)}. In a similar vein, we also have the scaling law

\displaystyle  I(\delta_\lambda f) = \lambda I(f)

for any {f \in \mathcal{SE}({\bf C} \rightarrow {\bf C})}, if {\lambda} is a complex number sufficiently close to {1} (where “sufficiently close” depends on {f}, and more precisely depends on the sectoral aperture parameter {\epsilon} associated to {f}); again, one can verify that {\delta_\lambda f} lies in {\mathcal{SE}({\bf C} \rightarrow {\bf C})} for {\lambda} sufficiently close to {1}. These invariances (which relocalise the integration functional {I} onto other contours than the real line {{\bf R}}) are very useful for computing integrals, and in particular for computing gaussian integrals. For instance, the complex translation invariance tells us (after shifting by {b/a}) that

\displaystyle  I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = e^{-\pi (c-b^2/a)} I( z \mapsto e^{-\pi a z^2} )

when {a,b,c \in {\bf C}} with {\hbox{Re}(a) > 0}, and then an application of the complex scaling law (and a continuity argument, observing that there is a compact path connecting {a} to {1} in the right half plane) gives

\displaystyle  I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = a^{-1/2} e^{-\pi (c-b^2/a)} I( z \mapsto e^{-\pi z^2} )

using the branch of {a^{-1/2}} on the right half-plane for which {1^{-1/2} = 1}. Using the normalisation (4) we thus have

\displaystyle  I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = a^{-1/2} e^{-\pi (c-b^2/a)}

giving the usual gaussian integral formula

\displaystyle  \int_{\bf R} e^{-\pi (ax^2 + 2bx + c)}\ dx = a^{-1/2} e^{-\pi (c-b^2/a)}. \ \ \ \ \ (9)

This is a basic illustration of the power that a large symmetry group (in this case, the complex homothety group) can bring to bear on the task of computing integrals.
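
As a numerical sanity check of (9), here is a sketch using scipy (the particular values of {a,b,c} are arbitrary complex numbers with {\hbox{Re}(a)>0}, chosen only for illustration):

    import numpy as np
    from scipy.integrate import quad

    a, b, c = 2 + 1j, 0.3 - 0.5j, 0.1 + 0.2j
    f = lambda x: np.exp(-np.pi * (a * x**2 + 2 * b * x + c))
    re, _ = quad(lambda x: f(x).real, -np.inf, np.inf)
    im, _ = quad(lambda x: f(x).imag, -np.inf, np.inf)
    closed = a**(-0.5) * np.exp(-np.pi * (c - b**2 / a))   # right-hand side of (9)
    print(re + 1j * im, closed)                            # the two values agree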

One can extend this sort of analysis to higher dimensions. For any natural number {n \geq 1}, let {\mathcal{SE}({\bf C}^n \rightarrow {\bf C})} denote the space of all functions {f: {\bf C}^n \rightarrow {\bf C}} which are jointly entire in the sense that {f(z_1,\ldots,z_n)} can be expressed as a Taylor series in {z_1,\ldots,z_n} which is absolutely convergent for all choices of {z_1,\ldots,z_n}, and such that there exists an {\epsilon > 0} such that for any {A>0} there is {C_A>0} for which one has the bound

\displaystyle  |f(z)| \leq C_A (1+|z|)^{-A}

whenever {|\hbox{Im}(z_j)| \leq A + \epsilon |\hbox{Re}(z_j)|} for all {1 \leq j \leq n}, where {z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix}} and {|z| := (|z_1|^2+\ldots+|z_n|^2)^{1/2}}. Again, we call such functions Schwartz entire functions; a typical example is the function

\displaystyle  f(z) := e^{-\pi (z^T A z + 2b^T z + c)}

where {A} is an {n \times n} complex symmetric matrix with positive definite real part, {b} is a vector in {{\bf C}^n}, and {c} is a complex number. We can then define an abstract integration functional {I: \mathcal{SE}({\bf C}^n \rightarrow {\bf C}) \rightarrow {\bf C}} by integration on the real slice {{\bf R}^n}:

\displaystyle  I(f) := \int_{{\bf R}^n} f(x)\ dx

where {dx} is the usual Lebesgue measure on {{\bf R}^n}. By contour shifting in each of the {n} variables {z_1,\ldots,z_n} separately, we see that {I} is invariant with respect to complex translations of each of the {z_j} variables, and is thus invariant under translating the joint variable {z} by {{\bf C}^n}. One can also verify the scaling law

\displaystyle  I(\delta_A f) = \hbox{det}(A) I(f)

for {n \times n} complex matrices {A} sufficiently close to the identity, where {\delta_A f(z) := f(A^{-1} z)}. This can be seen for shear transformations {A} by Fubini’s theorem and the aforementioned translation invariance, while for diagonal transformations near the identity this can be seen from {n} applications of the one-dimensional scaling law, and the general case then follows by composition. Among other things, these laws then easily lead to the higher-dimensional generalisation

\displaystyle  \int_{{\bf R}^n} e^{-\pi (x^T A x + 2 b^T x + c)}\ dx = \hbox{det}(A)^{-1/2} e^{-\pi (c-b^T A^{-1} b)} \ \ \ \ \ (10)

whenever {A} is a complex symmetric matrix with positive definite real part, {b} is a vector in {{\bf C}^n}, and {c} is a complex number, basically by repeating the one-dimensional argument sketched earlier. Here, we choose the branch of {\hbox{det}(A)^{-1/2}} for all matrices {A} in the indicated class for which {\hbox{det}(1)^{-1/2} = 1}.
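
Here is an analogous numerical check of (10) in the simple real case {n=2} (a sketch; the matrix {A} and vector {b} are arbitrary test choices, and {c} is set to zero):

    import numpy as np
    from scipy.integrate import dblquad

    A = np.array([[2.0, 0.5], [0.5, 1.0]])         # symmetric, positive definite
    b = np.array([0.3, -0.2])
    f = lambda y, x: np.exp(-np.pi * (np.array([x, y]) @ A @ np.array([x, y])
                                      + 2 * b @ np.array([x, y])))
    val, _ = dblquad(f, -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
    closed = np.linalg.det(A)**(-0.5) * np.exp(np.pi * b @ np.linalg.inv(A) @ b)  # (10), c = 0
    print(val, closed)                             # the two values agree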

Now we turn to an integration functional suitable for computing complex gaussian integrals such as

\displaystyle  \int_{{\bf C}^n} e^{-2\pi (z^\dagger A z + b^\dagger z + z^\dagger \tilde b + c)}\ dz d\overline{z}, \ \ \ \ \ (11)

where {z} is now a complex variable

\displaystyle  z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix},

{z^\dagger} is the adjoint

\displaystyle  z^\dagger := (\overline{z_1},\ldots, \overline{z_n}),

{A} is a complex {n \times n} matrix with positive definite Hermitian part, {b, \tilde b} are column vectors in {{\bf C}^n}, {c} is a complex number, and {dz d\overline{z} = \prod_{j=1}^n 2 d\hbox{Re}(z_j) d\hbox{Im}(z_j)} is {2^n} times Lebesgue measure on {{\bf C}^n}. (The factors of two here turn out to be a natural normalisation, but they can be ignored on a first reading.) As we shall see later, such integrals are relevant when performing computations on the Gaussian Unitary Ensemble (GUE) in random matrix theory. Note that the integrand here is not complex analytic due to the presence of the complex conjugates. However, this can be dealt with by the trick of replacing the complex conjugate {\overline{z}} by a variable {z^*} which is formally conjugate to {z}, but which is allowed to vary independently of {z}. More precisely, let {\mathcal{SA}({\bf C}^n \times {\bf C}^n \rightarrow {\bf C})} be the space of all functions {f: (z,z^*) \mapsto f(z,z^*)} of two independent {n}-tuples

\displaystyle  z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix}, z^* = \begin{pmatrix} z_1^* \\ \vdots \\ z_n^* \end{pmatrix}

of complex variables, which are jointly entire in all {2n} variables (in the sense defined previously, i.e. there is a joint Taylor series that is absolutely convergent for all independent choices of {z, z^* \in {\bf C}^n}), and such that there is an {\epsilon>0} such that for every {A>0} there is {C_A>0} such that one has the bound

\displaystyle  |f(z,z^*)| \leq C_A (1 + |z|)^{-A}

whenever {|z^* - \overline{z}| \leq A + \epsilon |z|}. We will call such functions Schwartz analytic. Note that the integrand in (11) is Schwartz analytic when {A} has positive definite Hermitian part, if we reinterpret {z^\dagger} as the transpose of {z^*} rather than as the adjoint of {z} in order to make the integrand entire in {z} and {z^*}. We can then define an abstract integration functional {I: \mathcal{SA}({\bf C}^n \times {\bf C}^n \rightarrow {\bf C}) \rightarrow {\bf C}} by the formula

\displaystyle  I(f) := \int_{{\bf C}^n} f(z,\overline{z})\ dz d\overline{z}, \ \ \ \ \ (12)

thus {I} can be localised to the slice {\{ (z,\overline{z}): z \in {\bf C}^n\}} of {{\bf C}^n \times {\bf C}^n} (though, as with previous functionals, one can use contour shifting to relocalise {I} to other slices also.) One can also write this integral as

\displaystyle  I(f) = 2^n \int_{{\bf R}^n \times {\bf R}^n} f(x+iy, x-iy)\ dx dy

and note that the integrand here is a Schwartz entire function on {{\bf C}^n \times {\bf C}^n}, thus linking the Schwartz analytic integral with the Schwartz entire integral. Using this connection, one can verify that this functional {I} is invariant with respect to translating {z} and {z^*} by independent shifts in {{\bf C}^n} (thus giving a {{\bf C}^n \times {\bf C}^n} translation symmetry), and one also has the independent dilation symmetry

\displaystyle  I(\delta_{A,B} f) = \hbox{det}(A) \hbox{det}(B) I(f)

for {n \times n} complex matrices {A,B} that are sufficiently close to the identity, where {\delta_{A,B} f(z,z^*) := f(A^{-1} z, B^{-1} z^*)}. Arguing as before, we can then compute (11) as

\displaystyle  \int_{{\bf C}^n} e^{-2\pi (z^\dagger A z + b^\dagger z + z^\dagger \tilde b + c)}\ dz d\overline{z} = \hbox{det}(A)^{-1} e^{-2\pi (c - b^\dagger A^{-1} \tilde b)}. \ \ \ \ \ (13)

In particular, this gives an integral representation for the determinant-reciprocal {\hbox{det}(A)^{-1}} of a complex {n \times n} matrix with positive definite Hermitian part, in terms of gaussian expressions in which {A} only appears linearly in the exponential:

\displaystyle  \hbox{det}(A)^{-1} = \int_{{\bf C}^n} e^{-2\pi z^\dagger A z}\ dz d\overline{z}.

This formula is then convenient for computing statistics such as

\displaystyle  \mathop{\bf E} \hbox{det}(W_n-E-i\eta)^{-1}

for random matrices {W_n} drawn from the Gaussian Unitary Ensemble (GUE), and some choice of spectral parameter {E+i\eta} with {\eta>0}; we review this computation later in this post. By the trick of matrix differentiation of the determinant (as reviewed in this recent blog post), one can also use this method to compute matrix-valued statistics such as

\displaystyle  \mathop{\bf E} \hbox{det}(W_n-E-i\eta)^{-1} (W_n-E-i\eta)^{-1}.
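
Before proceeding, one can verify the determinant-reciprocal identity numerically in the simplest case {n=1} (a sketch; here the measure {dz d\overline{z}} contributes the factor of {2}, and the value of {A} is an arbitrary complex number with positive real part):

    import numpy as np
    from scipy.integrate import dblquad

    A = 1.5 + 0.7j                                  # positive definite Hermitian part: Re(A) > 0
    f = lambda y, x: 2 * np.exp(-2 * np.pi * A * (x**2 + y**2))  # dz dzbar = 2 dx dy when n = 1
    re, _ = dblquad(lambda y, x: f(y, x).real, -np.inf, np.inf,
                    lambda x: -np.inf, lambda x: np.inf)
    im, _ = dblquad(lambda y, x: f(y, x).imag, -np.inf, np.inf,
                    lambda x: -np.inf, lambda x: np.inf)
    print(re + 1j * im, 1 / A)                      # both equal det(A)^{-1} = 1/A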

However, if one restricts attention to classical integrals over real or complex (and in particular, commuting or bosonic) variables, it does not seem possible to easily eradicate the negative determinant factors in such calculations, which is unfortunate because many statistics of interest in random matrix theory, such as the expected Stieltjes transform

\displaystyle  \mathop{\bf E} \frac{1}{n} \hbox{tr} (W_n-E-i\eta)^{-1},

which is the Stieltjes transform of the density of states, do not come with such compensating determinant factors. However, it turns out (as I learned recently from Peter Sarnak and Tom Spencer) that it is possible to cancel out these negative determinant factors by balancing the bosonic gaussian integrals with an equal number of fermionic gaussian integrals, in which one integrates over a family of anticommuting variables. These fermionic integrals are closer in spirit to the polynomial integral (6) than to Lebesgue type integrals, and in particular obey a scaling law which is inverse to the Lebesgue scaling (for instance, a linear change of fermionic variables {\zeta \mapsto A \zeta} ends up transforming a fermionic integral by {\hbox{det}(A)} rather than {\hbox{det}(A)^{-1}}), which conveniently cancels out the reciprocal determinants in the previous calculations. Furthermore, one can combine the bosonic and fermionic integrals into a unified integration concept, known as the Berezin integral (or Grassmann integral), in which one integrates functions of supervectors (vectors with both bosonic and fermionic components), and which is of particular importance in the theory of supersymmetry in physics. (The prefix “super” in physics means, roughly speaking, that the object or concept that the prefix is attached to contains both bosonic and fermionic aspects.) When one applies this unified integration concept to gaussians, this can lead to quite compact and efficient calculations (provided that one is willing to work with “super”-analogues of various concepts in classical linear algebra, such as the supertrace or superdeterminant).

Abstract integrals of the flavour of (6) arose in quantum field theory, when physicists sought to formally compute integrals of the form

\displaystyle  \int F( x_1, \ldots, x_n, \xi_1, \ldots, \xi_m )\ dx_1 \ldots dx_n d\xi_1 \ldots d\xi_m \ \ \ \ \ (14)

where {x_1,\ldots,x_n} are familiar commuting (or bosonic) variables (which, in particular, can often be localised to be scalar variables taking values in {{\bf R}} or {{\bf C}}), while {\xi_1,\ldots,\xi_m} were more exotic anticommuting (or fermionic) variables, taking values in some vector space of fermions. (As we shall see shortly, one can formalise these concepts by working in a supercommutative algebra.) The integrand {F(x_1,\ldots,x_n,\xi_1,\ldots,\xi_m)} was a formally analytic function of {x_1,\ldots,x_n,\xi_1,\ldots,\xi_m}, in that it could be expanded as a (formal, noncommutative) power series in the variables {x_1,\ldots,x_n,\xi_1,\ldots,\xi_m}. For functions {F(x_1,\ldots,x_n)} that depend only on bosonic variables, it is certainly possible for such analytic functions to be in the Schwartz class and thus fall under the scope of the classical integral, as discussed previously. However, functions {F(\xi_1,\ldots,\xi_m)} that depend on fermionic variables {\xi_1,\ldots,\xi_m} behave rather differently. Indeed, a fermionic variable {\xi} must anticommute with itself, so that {\xi^2 = 0}. In particular, any power series in {\xi} terminates after the linear term in {\xi}, so that a function {F(\xi)} can only be analytic in {\xi} if it is a polynomial of degree at most {1} in {\xi}; more generally, an analytic function {F(\xi_1,\ldots,\xi_m)} of {m} fermionic variables {\xi_1,\ldots,\xi_m} must be a polynomial of degree at most {m}, and an analytic function {F(x_1,\ldots,x_n,\xi_1,\ldots,\xi_m)} of {n} bosonic and {m} fermionic variables can be Schwartz in the bosonic variables but will be polynomial in the fermionic variables. As such, to interpret the integral (14), one can use classical (Lebesgue) integration (or the variants discussed above for integrating Schwartz entire or Schwartz analytic functions) for the bosonic variables, but must use abstract integrals such as (6) for the fermionic variables, leading to the concept of Berezin integration mentioned earlier.
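
To make the fermionic calculus slightly more concrete, here is a minimal sketch of Berezin integration for two anticommuting generators (a toy implementation of mine, not drawn from the references discussed here): elements of the Grassmann algebra are stored as dictionaries from sorted index tuples to coefficients, and the Berezin integral extracts the top coefficient, in analogy with (6). One can then check the scaling law mentioned above: a linear substitution {\xi \mapsto A\xi} multiplies the integral by {\hbox{det}(A)}.

    import numpy as np

    def gmul(f, g):
        # product in the Grassmann algebra on generators 0..m-1; repeated
        # generators are killed by xi_i^2 = 0, and the sign is the parity of
        # the permutation sorting the concatenated index list
        out = {}
        for s, a in f.items():
            for t, b in g.items():
                if set(s) & set(t):
                    continue
                merged = s + t
                sign = 1
                for i in range(len(merged)):
                    for j in range(i + 1, len(merged)):
                        if merged[i] > merged[j]:
                            sign = -sign
                key = tuple(sorted(merged))
                out[key] = out.get(key, 0) + sign * a * b
        return out

    def berezin(f, m):
        # Berezin integral over all m generators: extract the top coefficient
        return f.get(tuple(range(m)), 0)

    A = np.array([[2.0, 3.0], [1.0, 4.0]])
    xi = [{(0,): A[i, 0], (1,): A[i, 1]} for i in range(2)]   # xi_i -> sum_j A_ij xi_j
    print(berezin(gmul(xi[0], xi[1]), 2), np.linalg.det(A))   # both equal det(A) = 5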

In this post I would like to set out some of the basic algebraic formalism of Berezin integration, particularly with regards to integration of gaussian-type expressions, and then show how this formalism can be used to perform computations involving GUE (for instance, one can compute the density of states of GUE by this machinery without recourse to the theory of orthogonal polynomials). The use of supersymmetric gaussian integrals to analyse ensembles such as GUE appears in the work of Efetov (and was also proposed in the slightly earlier works of Parisi-Sourlas and McKane, with a related approach also appearing in the work of Wegner); the material here is adapted from this survey of Mirlin, as well as the later papers of Disertori-Pinson-Spencer and of Disertori.

