
Fifteen years ago, I wrote a paper entitled Global regularity of wave maps. II. Small energy in two dimensions, in which I established global regularity of wave maps from two spatial dimensions to the unit sphere, assuming that the initial data had small energy. Recently, Hao Jia (personal communication) discovered a small gap in the argument that requires a slightly non-trivial fix. The issue does not really affect the subsequent literature, because the main result has since been reproven and extended by methods that avoid the gap (see in particular this subsequent paper of Tataru), but I have decided to describe the gap and its fix on this blog.

I will assume familiarity with the notation of my paper. In Section 10, some complicated spaces {S[k] = S[k]({\bf R}^{1+n})} are constructed for each frequency scale {k}, and then a further space {S(c) = S(c)({\bf R}^{1+n})} is constructed for a given frequency envelope {c} by the formula

\displaystyle  \| \phi \|_{S(c)({\bf R}^{1+n})} := \|\phi \|_{L^\infty_t L^\infty_x({\bf R}^{1+n})} + \sup_k c_k^{-1} \| \phi_k \|_{S[k]({\bf R}^{1+n})} \ \ \ \ \ (1)

where {\phi_k := P_k \phi} is the Littlewood-Paley projection of {\phi} to frequency magnitudes {\sim 2^k}. Then, given a spacetime slab {[-T,T] \times {\bf R}^n}, we define the restrictions

\displaystyle  \| \phi \|_{S(c)([-T,T] \times {\bf R}^n)} := \inf \{ \| \tilde \phi \|_{S(c)({\bf R}^{1+n})}: \tilde \phi \downharpoonright_{[-T,T] \times {\bf R}^n} = \phi \}

where the infimum is taken over all extensions {\tilde \phi} of {\phi} to the Minkowski spacetime {{\bf R}^{1+n}}; similarly one defines

\displaystyle  \| \phi_k \|_{S[k]([-T,T] \times {\bf R}^n)} := \inf \{ \| \tilde \phi_k \|_{S[k]({\bf R}^{1+n})}: \tilde \phi_k \downharpoonright_{[-T,T] \times {\bf R}^n} = \phi_k \}.

The gap in the paper is as follows: it was implicitly assumed that one could restrict (1) to the slab {[-T,T] \times {\bf R}^n} to obtain the equality

\displaystyle  \| \phi \|_{S(c)([-T,T] \times {\bf R}^n)} = \|\phi \|_{L^\infty_t L^\infty_x([-T,T] \times {\bf R}^n)} + \sup_k c_k^{-1} \| \phi_k \|_{S[k]([-T,T] \times {\bf R}^n)}.

(This equality is implicitly used to establish the bound (36) in the paper.) Unfortunately, (1) only gives the lower bound, not the upper bound, and it is the upper bound which is needed here. The problem is that the extensions {\tilde \phi_k} of {\phi_k} that are optimal for computing {\| \phi_k \|_{S[k]([-T,T] \times {\bf R}^n)}} are not necessarily the Littlewood-Paley projections of the extensions {\tilde \phi} of {\phi} that are optimal for computing {\| \phi \|_{S(c)([-T,T] \times {\bf R}^n)}}.
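
(The lower bound is routine: if {\tilde \phi} is any extension of {\phi} to {{\bf R}^{1+n}}, then {P_k \tilde \phi} is an extension of {\phi_k}, so that

\displaystyle  \| \phi_k \|_{S[k]([-T,T] \times {\bf R}^n)} \leq \| P_k \tilde \phi \|_{S[k]({\bf R}^{1+n})} \leq c_k \| \tilde \phi \|_{S(c)({\bf R}^{1+n})}

and similarly {\|\phi \|_{L^\infty_t L^\infty_x([-T,T] \times {\bf R}^n)} \leq \|\tilde \phi \|_{L^\infty_t L^\infty_x({\bf R}^{1+n})}}; taking infima over all extensions {\tilde \phi} then bounds the right-hand side of the previous display by the left-hand side.)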

To remedy the problem, one has to prove an upper bound of the form

\displaystyle  \| \phi \|_{S(c)([-T,T] \times {\bf R}^n)} \lesssim \|\phi \|_{L^\infty_t L^\infty_x([-T,T] \times {\bf R}^n)} + \sup_k c_k^{-1} \| \phi_k \|_{S[k]([-T,T] \times {\bf R}^n)}

for all Schwartz {\phi} (actually we need affinely Schwartz {\phi}, but one can easily normalise to the Schwartz case). Without loss of generality we may normalise the RHS to be {1}. Thus

\displaystyle  \|\phi \|_{L^\infty_t L^\infty_x([-T,T] \times {\bf R}^n)} \leq 1 \ \ \ \ \ (2)

and

\displaystyle  \|P_k \phi \|_{S[k]([-T,T] \times {\bf R}^n)} \leq c_k \ \ \ \ \ (3)

for each {k}, and one has to find a single extension {\tilde \phi} of {\phi} such that

\displaystyle  \|\tilde \phi \|_{L^\infty_t L^\infty_x({\bf R}^{1+n})} \lesssim 1 \ \ \ \ \ (4)

and

\displaystyle  \|P_k \tilde \phi \|_{S[k]({\bf R}^{1+n})} \lesssim c_k \ \ \ \ \ (5)

for each {k}. Achieving a {\tilde \phi} that obeys (4) is trivial (just extend {\phi} by zero), but such extensions do not necessarily obey (5). On the other hand, from (3) we can find extensions {\tilde \phi_k} of {P_k \phi} such that

\displaystyle  \|\tilde \phi_k \|_{S[k]({\bf R}^{1+n})} \lesssim c_k; \ \ \ \ \ (6)

the extension {\tilde \phi := \sum_k \tilde \phi_k} will then obey (5) (here we use Lemma 9 from my paper), but unfortunately is not guaranteed to obey (4) (the {S[k]} norm does control the {L^\infty_t L^\infty_x} norm, but a key point about frequency envelopes for the small energy regularity problem is that the coefficients {c_k}, while bounded, are not necessarily summable).

This can be fixed as follows. For each {k} we introduce a time cutoff {\eta_k} supported on {[-T-2^{-k}, T+2^{-k}]} that equals {1} on {[-T-2^{-k-1},T+2^{-k-1}]} and obeys the usual derivative estimates in between (the {j^{th}} time derivative being of size {O_j(2^{jk})} for each {j}). Later we will prove the truncation estimate

\displaystyle  \| \eta_k \tilde \phi_k \|_{S[k]({\bf R}^{1+n})} \lesssim \| \tilde \phi_k \|_{S[k]({\bf R}^{1+n})}. \ \ \ \ \ (7)

Assuming this estimate, if we set {\tilde \phi := \sum_k \eta_k \tilde \phi_k}, then using Lemma 9 in my paper and (6), (7) (and the local stability of frequency envelopes) we have the required property (5). (There is a technical issue arising from the fact that {\tilde \phi} is not necessarily Schwartz due to slow decay at temporal infinity, but by considering partial sums in the {k} summation and taking limits we can check that {\tilde \phi} is the strong limit of Schwartz functions, which suffices here; we omit the details for the sake of exposition.) So the only issue is to establish (4), that is to say that

\displaystyle  \| \sum_k \eta_k(t) \tilde \phi_k(t) \|_{L^\infty_x({\bf R}^n)} \lesssim 1

for all {t \in {\bf R}}.

For {t \in [-T,T]} this is immediate from (2). Now suppose that {t \in [T+2^{-k_0-1}, T+2^{-k_0}]} for some integer {k_0} (the case when {t \in [-T-2^{-k_0}, -T-2^{-k_0-1}]} is treated similarly). Then we can split

\displaystyle  \sum_k \eta_k(t) \tilde \phi_k(t) = \Phi_1 + \Phi_2 + \Phi_3

where

\displaystyle  \Phi_1 := \sum_{k < k_0} \tilde \phi_k(T)

\displaystyle  \Phi_2 := \sum_{k < k_0} (\tilde \phi_k(t) - \tilde \phi_k(T))

\displaystyle  \Phi_3 := \eta_{k_0}(t) \tilde \phi_{k_0}(t).

The contribution of the {\Phi_3} term is acceptable by (6) and estimate (82) from my paper. The term {\Phi_1} sums to {P_{<k_0} \phi(T)} which is acceptable by (2). So it remains to control the {L^\infty_x} norm of {\Phi_2}. By the triangle inequality and the fundamental theorem of calculus, we can bound

\displaystyle  \| \Phi_2 \|_{L^\infty_x} \leq (t-T) \sum_{k < k_0} \| \partial_t \tilde \phi_k \|_{L^\infty_t L^\infty_x({\bf R}^{1+n})}.

By hypothesis, {t-T \leq 2^{-k_0}}. Using the first term in (79) of my paper and Bernstein’s inequality followed by (6) we have

\displaystyle  \| \partial_t \tilde \phi_k \|_{L^\infty_t L^\infty_x({\bf R}^{1+n})} \lesssim 2^k \| \tilde \phi_k \|_{S[k]({\bf R}^{1+n})} \lesssim 2^k;

and then we are done by summing the geometric series in {k}.
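
Explicitly, combining the last two displays with the bound {t-T \leq 2^{-k_0}} gives

\displaystyle  \| \Phi_2 \|_{L^\infty_x} \lesssim 2^{-k_0} \sum_{k < k_0} 2^k \lesssim 1,

as required.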

It remains to prove the truncation estimate (7). This estimate is similar in spirit to the algebra estimates already in my paper, but unfortunately does not seem to follow immediately from these estimates as written, and so one has to repeat the somewhat lengthy decompositions and case checkings used to prove these estimates. We do this below the fold.


I’ve just posted to the arXiv my paper “Finite time blowup for Lagrangian modifications of the three-dimensional Euler equation“. This paper is loosely in the spirit of other recent papers of mine in which I explore how close one can get to supercritical PDE of physical interest (such as the Euler and Navier-Stokes equations), while still being able to rigorously demonstrate finite time blowup for at least some choices of initial data. Here, the PDE we are trying to get close to is the incompressible inviscid Euler equations

\displaystyle \partial_t u + (u \cdot \nabla) u = - \nabla p

\displaystyle \nabla \cdot u = 0

in three spatial dimensions, where {u} is the velocity vector field and {p} is the pressure field. In vorticity form, and viewing the vorticity {\omega} as a {2}-form (rather than a vector), we can rewrite this system using the language of differential geometry as

\displaystyle \partial_t \omega + {\mathcal L}_u \omega = 0

\displaystyle u = \delta \tilde \eta^{-1} \Delta^{-1} \omega

where {{\mathcal L}_u} is the Lie derivative along {u}, {\delta} is the codifferential (the adjoint of the differential {d}, or equivalently the negative of the divergence operator) that sends {k+1}-vector fields to {k}-vector fields, {\Delta} is the Hodge Laplacian, and {\tilde \eta} is the identification of {k}-vector fields with {k}-forms induced by the Euclidean metric {\tilde \eta}. The equation {u = \delta \tilde \eta^{-1} \Delta^{-1} \omega} can be viewed as the Biot-Savart law recovering velocity from vorticity, expressed in the language of differential geometry.

One can then generalise this system by replacing the operator {\tilde \eta^{-1} \Delta^{-1}} by a more general operator {A} from {2}-forms to {2}-vector fields, giving rise to what I call the generalised Euler equations

\displaystyle \partial_t \omega + {\mathcal L}_u \omega = 0

\displaystyle u = \delta A \omega.

For example, the surface quasi-geostrophic (SQG) equations can be written in this form, as discussed in this previous post. One can view {A \omega} (up to Hodge duality) as a vector potential for the velocity {u}, so it is natural to refer to {A} as a vector potential operator.

The generalised Euler equations carry much of the same geometric structure as the true Euler equations. For instance, the transport equation {\partial_t \omega + {\mathcal L}_u \omega = 0} is equivalent to the Kelvin circulation theorem, which in three dimensions also implies the transport of vortex streamlines and the conservation of helicity. If {A} is self-adjoint and positive definite, then the famous Euler-Poincaré interpretation of the true Euler equations as geodesic flow on an infinite dimensional Riemannian manifold of volume preserving diffeomorphisms (as discussed in this previous post) extends to the generalised Euler equations (with the operator {A} determining the new Riemannian metric to place on this manifold). In particular, the generalised Euler equations have a Lagrangian formulation, and so by Noether’s theorem we expect any continuous symmetry of the Lagrangian to lead to conserved quantities. Indeed, we have a conserved Hamiltonian {\frac{1}{2} \int \langle \omega, A \omega \rangle}, and any spatial symmetry of {A} leads to a conserved impulse (e.g. translation invariance leads to a conserved momentum, and rotation invariance leads to a conserved angular momentum). If {A} behaves like a pseudodifferential operator of order {-2} (as is the case with the true vector potential operator {\tilde \eta^{-1} \Delta^{-1}}), then it turns out that one can use energy methods to recover the same sort of classical local existence theory as for the true Euler equations (up to and including the famous Beale-Kato-Majda criterion for blowup).

The true Euler equations are suspected of admitting smooth localised solutions which blow up in finite time; there is now substantial numerical evidence for this blowup, but it has not been proven rigorously. The main purpose of this paper is to show that such finite time blowup can at least be established for certain generalised Euler equations that are somewhat close to the true Euler equations. This is similar in spirit to my previous paper on finite time blowup on averaged Navier-Stokes equations, with the main new feature here being that the modified equation continues to have a Lagrangian structure and a vorticity formulation, which was not the case with the averaged Navier-Stokes equation. On the other hand, the arguments here are not able to handle the presence of viscosity (basically because they rely crucially on the Kelvin circulation theorem, which is not available in the viscous case).

In fact, three different blowup constructions are presented (for three different choices of vector potential operator {A}). The first is a variant of one discussed previously on this blog, in which a “neck pinch” singularity for a vortex tube is created by using a non-self-adjoint vector potential operator, so that the velocity at the neck of the vortex tube is determined by the circulation of the vorticity somewhat further away from that neck; this, when combined with conservation of circulation, is enough to guarantee finite time blowup. This is a relatively easy construction of finite time blowup, and has the advantage of being rather stable (any initial data flowing through a narrow tube with a large positive circulation will blow up in finite time). On the other hand, it is not so surprising in the non-self-adjoint case that finite time blowup can occur, as there is no conserved energy.

The second blowup construction is based on a connection between the two-dimensional SQG equation and the three-dimensional generalised Euler equations, discussed in this previous post. Namely, any solution to the former can be lifted to a “two and a half-dimensional” solution to the latter, in which the velocity and vorticity are translation-invariant in the vertical direction (but the velocity is still allowed to contain vertical components, so the flow is not completely horizontal). The same embedding also works to lift solutions to generalised SQG equations in two dimensions to solutions to generalised Euler equations in three dimensions. Conveniently, even if the vector potential operator for the generalised SQG equation fails to be self-adjoint, one can ensure that the three-dimensional vector potential operator is self-adjoint. Using this trick, together with a two-dimensional version of the first blowup construction, one can then construct a generalised Euler equation in three dimensions with a vector potential that is both self-adjoint and positive definite, and still admits solutions that blow up in finite time, though the blowup is now that of a vortex sheet creasing on a line, rather than a vortex tube pinching at a point.

This eliminates the main defect of the first blowup construction, but introduces two others. Firstly, the blowup is less stable, as it relies crucially on the initial data being translation-invariant in the vertical direction. Secondly, the solution is not spatially localised in the vertical direction (though it can be viewed as a compactly supported solution on the manifold {{\bf R}^2 \times {\bf R}/{\bf Z}}, rather than {{\bf R}^3}). The third and final blowup construction of the paper addresses the final defect, by replacing vertical translation symmetry with axial rotation symmetry around the vertical axis (basically, replacing Cartesian coordinates with cylindrical coordinates). It turns out that there is a more complicated way to embed two-dimensional generalised SQG equations into three-dimensional generalised Euler equations in which the solutions to the latter are now axially symmetric (but are allowed to “swirl” in the sense that the velocity field can have a non-zero angular component), while still keeping the vector potential operator self-adjoint and positive definite; the blowup is now that of a vortex ring creasing on a circle.

As with the previous papers in this series, these blowup constructions do not directly imply finite time blowup for the true Euler equations, but they do at least provide a barrier to establishing global regularity for these latter equations, in that one is forced to use some property of the true Euler equations that is not shared by these generalisations. They also suggest some possible blowup mechanisms for the true Euler equations (although unfortunately these mechanisms do not seem compatible with the addition of viscosity, so they do not seem to suggest a viable Navier-Stokes blowup mechanism).

I’ve just uploaded to the arXiv my paper “Equivalence of the logarithmically averaged Chowla and Sarnak conjectures“, submitted to the Festschrift “Number Theory – Diophantine problems, uniform distribution and applications” in honour of Robert F. Tichy. This paper is a spinoff of my previous paper establishing a logarithmically averaged version of the Chowla (and Elliott) conjectures in the two-point case. In that paper, the estimate

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+h)}{n} = o( \log x )

as {x \rightarrow \infty} was demonstrated, where {h} was any positive integer and {\lambda} denoted the Liouville function. The proof proceeded using a method I call the “entropy decrement argument”, which ultimately reduced matters to establishing a bound of the form

\displaystyle  \sum_{n \leq x} \frac{|\sum_{h \leq H} \lambda(n+h) e( \alpha h)|}{n} = o( H \log x )

whenever {H} was a slowly growing function of {x}. This was in turn established in a previous paper of Matomaki, Radziwill, and myself, using the recent breakthrough of Matomaki and Radziwill.

It is natural to see to what extent the arguments can be adapted to attack the higher-point cases of the logarithmically averaged Chowla conjecture (ignoring for this post the more general Elliott conjecture for other bounded multiplicative functions than the Liouville function). That is to say, one would like to prove that

\displaystyle  \sum_{n \leq x} \frac{\lambda(n+h_1) \dots \lambda(n+h_k)}{n} = o( \log x )

as {x \rightarrow \infty} for any fixed distinct integers {h_1,\dots,h_k}. As it turns out (and as is detailed in the current paper), the entropy decrement argument extends to this setting (after using some known facts about linear equations in primes), and allows one to reduce the above estimate to an estimate of the form

\displaystyle  \sum_{n \leq x} \frac{1}{n} \| \lambda \|_{U^d[n, n+H]} = o( \log x )

for {H} a slowly growing function of {x} and some fixed {d} (in fact we can take {d=k-1} for {k \geq 3}), where {U^d} is the (normalised) local Gowers uniformity norm. (In the case {k=3}, {d=2}, this becomes the Fourier-uniformity conjecture discussed in this previous post.) If one then applies the (now proven) inverse conjecture for the Gowers norms, this estimate is in turn equivalent to the more complicated looking assertion

\displaystyle  \sum_{n \leq x} \frac{1}{n} \sup |\sum_{h \leq H} \lambda(n+h) F( g^h x )| = o( \log x ) \ \ \ \ \ (1)

where the supremum is over all possible choices of nilsequences {h \mapsto F(g^h x)} of controlled step and complexity (see the paper for definitions of these terms).

The main novelty in the paper (elaborating upon a previous comment I had made on this blog) is to observe that this latter estimate in turn follows from the logarithmically averaged form of Sarnak’s conjecture (discussed in this previous post), namely that

\displaystyle  \sum_{n \leq x} \frac{1}{n} \lambda(n) F( T^n x )= o( \log x )

whenever {n \mapsto F(T^n x)} is a zero entropy (i.e. deterministic) sequence. Morally speaking, this follows from the well-known fact that nilsequences have zero entropy, but the presence of the supremum in (1) means that we need a little bit more; roughly speaking, we need the class of nilsequences of a given step and complexity to have “uniformly zero entropy” in some sense.

On the other hand, it was already known (see previous post) that the Chowla conjecture implied the Sarnak conjecture, and similarly for the logarithmically averaged form of the two conjectures. Putting all these implications together, we obtain the pleasant fact that the logarithmically averaged Sarnak and Chowla conjectures are equivalent, which is the main result of the current paper. There have been a large number of special cases of the Sarnak conjecture worked out (when the deterministic sequence involved came from a special dynamical system), so these results can now also be viewed as partial progress towards the Chowla conjecture (at least with logarithmic averaging). However, my feeling is that the full resolution of these conjectures will not come from these sorts of special cases; instead, conjectures like the Fourier-uniformity conjecture in this previous post look more promising to attack.

It would also be nice to get rid of the pesky logarithmic averaging, but this seems to be an inherent requirement of the entropy decrement argument method, so one would probably have to find a way to avoid that argument if one were to remove the log averaging.

Tamar Ziegler and I have just uploaded to the arXiv two related papers: “Concatenation theorems for anti-Gowers-uniform functions and Host-Kra characteristic factors” and “Polynomial patterns in the primes”, with the former developing a “quantitative Bessel inequality” for local Gowers norms that is crucial in the latter.

We use the term “concatenation theorem” to denote results in which structural control of a function in two or more “directions” can be “concatenated” into structural control in a joint direction. A trivial example of such a concatenation theorem is the following: if a function {f: {\bf Z} \times {\bf Z} \rightarrow {\bf R}} is constant in the first variable (thus {x \mapsto f(x,y)} is constant for each {y}), and also constant in the second variable (thus {y \mapsto f(x,y)} is constant for each {x}), then it is constant in the joint variable {(x,y)}. A slightly less trivial example: if a function {f: {\bf Z} \times {\bf Z} \rightarrow {\bf R}} is affine-linear in the first variable (thus, for each {y}, there exist {\alpha(y), \beta(y)} such that {f(x,y) = \alpha(y) x + \beta(y)} for all {x}) and affine-linear in the second variable (thus, for each {x}, there exist {\gamma(x), \delta(x)} such that {f(x,y) = \gamma(x)y + \delta(x)} for all {y}) then {f} is a quadratic polynomial in {x,y}; in fact it must take the form

\displaystyle f(x,y) = \epsilon xy + \zeta x + \eta y + \theta \ \ \ \ \ (1)

 

for some real numbers {\epsilon, \zeta, \eta, \theta}. (This can be seen for instance by using the affine linearity in {y} to show that the coefficients {\alpha(y), \beta(y)} are also affine linear.)
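
To spell out this last step: since {y \mapsto f(x,y)} is affine for each fixed {x}, the functions

\displaystyle \beta(y) = f(0,y) \quad \hbox{and} \quad \alpha(y) = f(1,y) - f(0,y)

are each affine in {y}, say {\beta(y) = \eta y + \theta} and {\alpha(y) = \epsilon y + \zeta}; substituting back into {f(x,y) = \alpha(y) x + \beta(y)} then yields (1).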

The same phenomenon extends to higher degree polynomials. Given a function {f: G \rightarrow K} from one additive group {G} to another, we say that {f} is of degree less than {d} along a subgroup {H} of {G} if all the {d}-fold iterated differences of {f} along directions in {H} vanish, that is to say

\displaystyle \partial_{h_1} \dots \partial_{h_d} f(x) = 0

for all {x \in G} and {h_1,\dots,h_d \in H}, where {\partial_h} is the difference operator

\displaystyle \partial_h f(x) := f(x+h) - f(x).

(We adopt the convention that the only {f} of degree less than {0} is the zero function.)

We then have the following simple proposition:

Proposition 1 (Concatenation of polynomiality) Let {f: G \rightarrow K} be of degree less than {d_1} along one subgroup {H_1} of {G}, and of degree less than {d_2} along another subgroup {H_2} of {G}, for some {d_1,d_2 \geq 1}. Then {f} is of degree less than {d_1+d_2-1} along the subgroup {H_1+H_2} of {G}.

Note the previous example was basically the case when {G = {\bf Z} \times {\bf Z}}, {H_1 = {\bf Z} \times \{0\}}, {H_2 = \{0\} \times {\bf Z}}, {K = {\bf R}}, and {d_1=d_2=2}.

Proof: The claim is trivial for {d_1=1} or {d_2=1} (in which case {f} is constant along {H_1} or {H_2} respectively), so suppose inductively that {d_1,d_2 \geq 2} and that the claim has already been proven for smaller values of {d_1+d_2}.

We take a shift by a direction {h_1 \in H_1} to obtain

\displaystyle T^{-h_1} f = f + \partial_{h_1} f

where {T^{-h_1} f(x) = f(x+h_1)} is the shift of {f} by {-h_1}. Then we take a further shift by a direction {h_2 \in H_2} to obtain

\displaystyle T^{-h_1-h_2} f = T^{-h_2} f + T^{-h_2} \partial_{h_1} f = f + \partial_{h_2} f + T^{-h_2} \partial_{h_1} f

leading to the cocycle equation

\displaystyle \partial_{h_1+h_2} f = \partial_{h_2} f + T^{-h_2} \partial_{h_1} f.

Since {f} has degree less than {d_1} along {H_1} and degree less than {d_2} along {H_2}, {\partial_{h_1} f} has degree less than {d_1-1} along {H_1} and less than {d_2} along {H_2}, so is degree less than {d_1+d_2-2} along {H_1+H_2} by induction hypothesis. Similarly {\partial_{h_2} f} is also of degree less than {d_1+d_2-2} along {H_1+H_2}. Combining this with the cocycle equation we see that {\partial_{h_1+h_2}f} is of degree less than {d_1+d_2-2} along {H_1+H_2} for any {h_1+h_2 \in H_1+H_2}, and hence {f} is of degree less than {d_1+d_2-1} along {H_1+H_2}, as required. \Box

While this proposition is simple, it already illustrates some basic principles regarding how one would go about proving a concatenation theorem:

  • (i) One should perform induction on the degrees {d_1,d_2} involved, and take advantage of the recursive nature of degree (in this case, the fact that a function is of degree less than {d} along some subgroup {H} of directions iff all of its first derivatives along {H} are of degree less than {d-1}).
  • (ii) Structure is preserved by operations such as addition, shifting, and taking derivatives. In particular, if a function {f} is of degree less than {d} along some subgroup {H}, then any derivative {\partial_k f} of {f} is also of degree less than {d} along {H}, even if {k} does not belong to {H}.

Here is another simple example of a concatenation theorem. Suppose an at most countable additive group {G} acts by measure-preserving shifts {T: g \mapsto T^g} on some probability space {(X, {\mathcal X}, \mu)}; we call the pair {(X,T)} (or more precisely {(X, {\mathcal X}, \mu, T)}) a {G}-system. We say that a function {f \in L^\infty(X)} is a generalised eigenfunction of degree less than {d} along some subgroup {H} of {G}, for some {d \geq 1}, if one has

\displaystyle T^h f = \lambda_h f

almost everywhere for all {h \in H}, and some functions {\lambda_h \in L^\infty(X)} of degree less than {d-1} along {H}, with the convention that a function has degree less than {0} if and only if it is equal to {1}. Thus for instance, a function {f} is a generalised eigenfunction of degree less than {1} along {H} if it is constant on almost every {H}-ergodic component of {X}, and is a generalised eigenfunction of degree less than {2} along {H} if it is an eigenfunction of the shift action on almost every {H}-ergodic component of {X}. A basic example of a higher order eigenfunction is the function {f(x,y) := e^{2\pi i y}} on the skew shift {({\bf R}/{\bf Z})^2} with {{\bf Z}} action given by the generator {T(x,y) := (x+\alpha,y+x)} for some irrational {\alpha}. One can check that {T^h f = \lambda_h f} for every integer {h}, where {\lambda_h: x \mapsto e^{2\pi i \binom{h}{2} \alpha} e^{2\pi i h x}} is a generalised eigenfunction of degree less than {2} along {{\bf Z}}, so {f} is of degree less than {3} along {{\bf Z}}.
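
To verify this claim, note that iterating the generator gives {T^h(x,y) = (x+h\alpha, y+hx+\binom{h}{2}\alpha)}, so that

\displaystyle T^h f(x,y) = e^{2\pi i (y + hx + \binom{h}{2}\alpha)} = \lambda_h(x) f(x,y),

and each {\lambda_h} is an ordinary eigenfunction of the skew shift {T} (with constant eigenvalue {e^{2\pi i h \alpha}}), hence of degree less than {2} along {{\bf Z}}.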

We then have

Proposition 2 (Concatenation of higher order eigenfunctions) Let {(X,T)} be a {G}-system, and let {f \in L^\infty(X)} be a generalised eigenfunction of degree less than {d_1} along one subgroup {H_1} of {G}, and a generalised eigenfunction of degree less than {d_2} along another subgroup {H_2} of {G}, for some {d_1,d_2 \geq 1}. Then {f} is a generalised eigenfunction of degree less than {d_1+d_2-1} along the subgroup {H_1+H_2} of {G}.

The argument is almost identical to that of the previous proposition and is left as an exercise to the reader. The key point is the point (ii) identified earlier: the space of generalised eigenfunctions of degree less than {d} along {H} is preserved by multiplication and shifts, as well as the operation of “taking derivatives” {f \mapsto \lambda_k} even along directions {k} that do not lie in {H}. (To prove this latter claim, one should restrict to the region where {f} is non-zero, and then divide {T^k f} by {f} to locate {\lambda_k}.)

A typical example of this proposition in action is as follows: consider the {{\bf Z}^2}-system given by the {3}-torus {({\bf R}/{\bf Z})^3} with generating shifts

\displaystyle T^{(1,0)}(x,y,z) := (x+\alpha,y,z+y)

\displaystyle T^{(0,1)}(x,y,z) := (x,y+\alpha,z+x)

for some irrational {\alpha}, which can be checked to give a {{\bf Z}^2} action

\displaystyle T^{(n,m)}(x,y,z) := (x+n\alpha, y+m\alpha, z+ny+mx+nm\alpha).

The function {f(x,y,z) := e^{2\pi i z}} can then be checked to be a generalised eigenfunction of degree less than {2} along {{\bf Z} \times \{0\}}, and also less than {2} along {\{0\} \times {\bf Z}}, and less than {3} along {{\bf Z}^2}. One can view this example as the dynamical systems translation of the example (1) (see this previous post for some more discussion of this sort of correspondence).
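
Indeed, {T^{(1,0)} f = e^{2\pi i y} f} and {T^{(0,1)} f = e^{2\pi i x} f}, with the multipliers {e^{2\pi i y}} and {e^{2\pi i x}} invariant under {T^{(1,0)}} and {T^{(0,1)}} respectively, while for the full {{\bf Z}^2} action one has

\displaystyle T^{(n,m)} f = e^{2\pi i (ny + mx + nm\alpha)} f,

with the multiplier here an ordinary eigenfunction of the {{\bf Z}^2} action, and hence of degree less than {2} along {{\bf Z}^2}.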

The main results of our concatenation paper are analogues of these propositions concerning a more complicated notion of “polynomial-like” structure that are of importance in additive combinatorics and in ergodic theory. On the ergodic theory side, the notion of structure is captured by the Host-Kra characteristic factors {Z^{<d}_H(X)} of a {G}-system {X} along a subgroup {H}. These factors can be defined in a number of ways. One is by duality, using the Gowers-Host-Kra uniformity seminorms (defined for instance here) {\| \|_{U^d_H(X)}}. Namely, {Z^{<d}_H(X)} is the factor of {X} defined up to equivalence by the requirement that

\displaystyle \|f\|_{U^d_H(X)} = 0 \iff {\bf E}(f | Z^{<d}_H(X) ) = 0.

An equivalent definition is in terms of the dual functions {{\mathcal D}^d_H(f)} of {f} along {H}, which can be defined recursively by setting {{\mathcal D}^0_H(f) = 1} and

\displaystyle {\mathcal D}^d_H(f) = {\bf E}_h T^h f {\mathcal D}^{d-1}( f \overline{T^h f} )

where {{\bf E}_h} denotes the ergodic average along a Følner sequence in {G} (in fact one can also define these concepts in non-amenable abelian settings as per this previous post). The factor {Z^{<d}_H(X)} can then be alternately defined as the factor generated by the dual functions {{\mathcal D}^d_H(f)} for {f \in L^\infty(X)}.

In the case when {G=H={\bf Z}} and {X} is {G}-ergodic, a deep theorem of Host and Kra shows that the factor {Z^{<d}_H(X)} is equivalent to the inverse limit of nilsystems of step less than {d}. A similar statement holds with {{\bf Z}} replaced by any finitely generated group by Griesmer, while the case of an infinite vector space over a finite field was treated in this paper of Bergelson, Ziegler, and myself. The situation is more subtle when {X} is not {G}-ergodic, or when {X} is {G}-ergodic but {H} is a proper subgroup of {G} acting non-ergodically, when one has to start considering measurable families of directional nilsystems; see for instance this paper of Austin for some of the subtleties involved (for instance, higher order group cohomology begins to become relevant!).

One of our main theorems is then

Proposition 3 (Concatenation of characteristic factors) Let {(X,T)} be a {G}-system, and let {f} be measurable with respect to the factor {Z^{<d_1}_{H_1}(X)} and with respect to the factor {Z^{<d_2}_{H_2}(X)} for some {d_1,d_2 \geq 1} and some subgroups {H_1,H_2} of {G}. Then {f} is also measurable with respect to the factor {Z^{<d_1+d_2-1}_{H_1+H_2}(X)}.

We give two proofs of this proposition in the paper; an ergodic-theoretic proof using the Host-Kra theory of “cocycles of type {<d} (along a subgroup {H})”, which can be used to inductively describe the factors {Z^{<d}_H}, and a combinatorial proof based on a combinatorial analogue of this proposition which is harder to state (but which roughly speaking asserts that a function which is nearly orthogonal to all bounded functions of small {U^{d_1}_{H_1}} norm, and also to all bounded functions of small {U^{d_2}_{H_2}} norm, is also nearly orthogonal to all bounded functions of small {U^{d_1+d_2-1}_{H_1+H_2}} norm). The combinatorial proof parallels the proof of Proposition 2. A key point is that dual functions {F := {\mathcal D}^d_H(f)} obey a property analogous to being a generalised eigenfunction, namely that

\displaystyle T^h F = {\bf E}_k \lambda_{h,k} F_k

where {F_k := T^k F} and {\lambda_{h,k} := {\mathcal D}^{d-1}( T^h f \overline{T^k f} )} is a “structured function of order {d-1}” along {H}. (In the language of this previous paper of mine, this is an assertion that dual functions are uniformly almost periodic of order {d}.) Again, the point (ii) above is crucial, and in particular it is key that any structure that {F} has is inherited by the associated functions {\lambda_{h,k}} and {F_k}. This sort of inheritance is quite easy to accomplish in the ergodic setting, as there is a ready-made language of factors to encapsulate the concept of structure, and the shift-invariance and {\sigma}-algebra properties of factors make it easy to show that just about any “natural” operation one performs on a function measurable with respect to a given factor, returns a function that is still measurable in that factor. In the finitary combinatorial setting, though, encoding the fact (ii) becomes a remarkably complicated notational nightmare, requiring a huge amount of “epsilon management” and “second-order epsilon management” (in which one manages not only scalar epsilons, but also function-valued epsilons that depend on other parameters). In order to avoid all this we were forced to utilise a nonstandard analysis framework for the combinatorial theorems, which made the arguments greatly resemble the ergodic arguments in many respects (though the two settings are still not equivalent, see this previous blog post for some comparisons between the two settings). Unfortunately the arguments are still rather complicated.

For combinatorial applications, dual formulations of the concatenation theorem are more useful. A direct dualisation of the theorem yields the following decomposition theorem: a bounded function which is small in {U^{d_1+d_2-1}_{H_1+H_2}} norm can be split into a component that is small in {U^{d_1}_{H_1}} norm, and a component that is small in {U^{d_2}_{H_2}} norm. (One may wish to understand this type of result by first proving the following baby version: any function that has mean zero on every coset of {H_1+H_2}, can be decomposed as the sum of a function that has mean zero on every {H_1} coset, and a function that has mean zero on every {H_2} coset. This is dual to the assertion that a function that is constant on every {H_1} coset and constant on every {H_2} coset, is constant on every {H_1+H_2} coset.) Combining this with some standard “almost orthogonality” arguments (i.e. Cauchy-Schwarz) gives the following Bessel-type inequality: if one has a lot of subgroups {H_1,\dots,H_k} and a bounded function is small in {U^{2d-1}_{H_i+H_j}} norm for most {i,j}, then it is also small in {U^d_{H_i}} norm for most {i}. (Here is a baby version one may wish to warm up on: if a function {f} has small mean on {({\bf Z}/p{\bf Z})^2} for some large prime {p}, then it has small mean on most of the cosets of most of the one-dimensional subgroups of {({\bf Z}/p{\bf Z})^2}.)
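
For the first baby version mentioned above, one explicit decomposition (in the finite setting, say) is

\displaystyle f = \left( f - {\bf E}( f | x + H_1 ) \right) + {\bf E}( f | x + H_1 ),

where {{\bf E}(f|x+H_1)} denotes the average of {f} on the coset {x+H_1}. The first summand clearly has mean zero on every {H_1} coset, while averaging the second summand over a coset {x+H_2} returns the average of {f} over {x+H_1+H_2}, which vanishes by hypothesis.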

There is also a generalisation of the above Bessel inequality (as well as several of the other results mentioned above) in which the subgroups {H_i} are replaced by more general coset progressions {H_i+P_i} (of bounded rank), so that one has a Bessel inequality controlling “local” Gowers uniformity norms such as {U^d_{P_i}} by “global” Gowers uniformity norms such as {U^{2d-1}_{P_i+P_j}}. This turns out to be particularly useful when attempting to compute polynomial averages such as

\displaystyle \sum_{n \leq N} \sum_{r \leq \sqrt{N}} f(n) g(n+r^2) h(n+2r^2) \ \ \ \ \ (2)

 

for various functions {f,g,h}. After repeated use of the van der Corput lemma, one can control such averages by expressions such as

\displaystyle \sum_{n \leq N} \sum_{h,m,k \leq \sqrt{N}} f(n) f(n+mh) f(n+mk) f(n+m(h+k))

(actually one ends up with more complicated expressions than this, but let’s use this example for sake of discussion). This can be viewed as an average of various {U^2} Gowers uniformity norms of {f} along arithmetic progressions of the form {\{ mh: h \leq \sqrt{N}\}} for various {m \leq \sqrt{N}}. Using the above Bessel inequality, this can be controlled in turn by an average of various {U^3} Gowers uniformity norms along rank two generalised arithmetic progressions of the form {\{ m_1 h_1 + m_2 h_2: h_1,h_2 \le \sqrt{N}\}} for various {m_1,m_2 \leq \sqrt{N}}. But for generic {m_1,m_2}, this rank two progression is close in a certain technical sense to the “global” interval {\{ n: n \leq N \}} (this is ultimately due to the basic fact that two randomly chosen large integers are likely to be coprime, or at least have a small gcd). As a consequence, one can use the concatenation theorems from our first paper to control expressions such as (2) in terms of global Gowers uniformity norms. This is important in number theoretic applications, when one is interested in computing sums such as

\displaystyle \sum_{n \leq N} \sum_{r \leq \sqrt{N}} \mu(n) \mu(n+r^2) \mu(n+2r^2)

or

\displaystyle \sum_{n \leq N} \sum_{r \leq \sqrt{N}} \Lambda(n) \Lambda(n+r^2) \Lambda(n+2r^2)

where {\mu} and {\Lambda} are the Möbius and von Mangoldt functions respectively. This is because we are able to control global Gowers uniformity norms of such functions (thanks to results such as the proof of the inverse conjecture for the Gowers norms, the orthogonality of the Möbius function with nilsequences, and asymptotics for linear equations in primes), but much less control is currently available for local Gowers uniformity norms, even with the assistance of the generalised Riemann hypothesis (see this previous blog post for some further discussion).

By combining these tools and strategies with the “transference principle” approach from our previous paper (as improved using the recent “densification” technique of Conlon, Fox, and Zhao, discussed in this previous post), we are able in particular to establish the following result:

Theorem 4 (Polynomial patterns in the primes) Let {P_1,\dots,P_k: {\bf Z} \rightarrow {\bf Z}} be polynomials of degree at most {d}, whose degree {d} coefficients are all distinct, for some {d \geq 1}. Suppose that {P_1,\dots,P_k} is admissible in the sense that for every prime {p}, there are {n,r} such that {n+P_1(r),\dots,n+P_k(r)} are all coprime to {p}. Then there exist infinitely many pairs {n,r} of natural numbers such that {n+P_1(r),\dots,n+P_k(r)} are prime.

Furthermore, we obtain an asymptotic for the number of such pairs {n,r} in the range {n \leq N}, {r \leq N^{1/d}} (actually for minor technical reasons we reduce the range of {r} to be very slightly less than {N^{1/d}}). In fact one could in principle obtain asymptotics for smaller values of {r}, and relax the requirement that the degree {d} coefficients be distinct to the requirement that no two of the {P_i} differ by a constant, provided one had good enough local uniformity results for the Möbius or von Mangoldt functions. For instance, we can obtain an asymptotic for triplets of the form {n, n+r,n+r^d} unconditionally for {d \leq 5}, and conditionally on GRH for all {d}, using known results on primes in short intervals on average.

The {d=1} case of this theorem was obtained in a previous paper of myself and Ben Green (using the aforementioned conjectures on the Gowers uniformity norm and the orthogonality of the Möbius function with nilsequences, both of which are now proven). For higher {d}, an older result of Tamar and myself was able to tackle the case when {P_1(0)=\dots=P_k(0)=0} (though our results there only give lower bounds on the number of pairs {(n,r)}, and no asymptotics). Both of these results generalise my older theorem with Ben Green on the primes containing arbitrarily long arithmetic progressions. The theorem also extends to multidimensional polynomials, in which case there are some additional previous results; see the paper for more details. We also get a technical refinement of our previous result on narrow polynomial progressions in (dense subsets of) the primes by making the progressions just a little bit narrower in the case when the density of the set one is using is small.


Van Vu and I just posted to the arXiv our paper “sum-free sets in groups” (submitted to Discrete Analysis), as well as a companion survey article (submitted to J. Comb.). Given a subset {A} of an additive group {G = (G,+)}, define the quantity {\phi(A)} to be the cardinality of the largest subset {B} of {A} which is sum-free in {A} in the sense that all the sums {b_1+b_2} with {b_1,b_2} distinct elements of {B} lie outside of {A}. For instance, if {A} is itself a group, then {\phi(A)=1}, since no two elements of {A} can sum to something outside of {A}. More generally, if {A} is the union of {k} groups, then {\phi(A)} is at most {k}, thanks to the pigeonhole principle.

If {G} is the integers, then there are no non-trivial finite subgroups, and one can thus expect {\phi(A)} to start growing with {A}. For instance, one has the following easy result:

Proposition 1 Let {A} be a set of {2^k} natural numbers. Then {\phi(A) > k}.

Proof: We use an argument of Ruzsa, which is based in turn on an older argument of Choi. Let {x_1} be the largest element of {A}, and then recursively, once {x_1,\dots,x_i} has been selected, let {x_{i+1}} be the largest element of {A} not equal to any of the {x_1,\dots,x_i}, such that {x_{i+1}+x_j \not \in A} for all {j=1,\dots,i}, terminating this construction when no such {x_{i+1}} can be located. This gives a sequence {x_1 > x_2 > \dots > x_m} of elements in {A} which are sum-free in {A}, and with the property that for any {y \in A}, either {y} is equal to one of the {x_i}, or else {y + x_i \in A} for some {i} with {x_i > y}. Iterating this, we see that any {y \in A} is of the form {x_{i_1} - x_{i_2} - \dots - x_{i_j}} for some {j \geq 1} and {1 \leq i_1 < i_2 < \dots < i_j \leq m}. The number of such expressions {x_{i_1} - x_{i_2} - \dots - x_{i_j}} is at most {2^{m}-1}, thus {2^k \leq 2^m-1} which implies {m \geq k+1}. Since {\phi(A) \geq m}, the claim follows. \Box
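
As an aside, the greedy selection used in this proof is easy to run numerically. Here is a minimal Python sketch (the function name and interface are purely illustrative, and not taken from the paper):

```python
def greedy_sum_free_in(A):
    """Greedily select x_1 > x_2 > ... from a finite set A of natural numbers
    so that the sum of any two distinct selected elements lies outside A,
    following the Ruzsa/Choi argument in Proposition 1."""
    A = set(A)
    chosen = []
    # Scanning A in decreasing order is equivalent to repeatedly taking the
    # largest eligible element, since eligibility only depends on the larger
    # elements that have already been chosen.
    for x in sorted(A, reverse=True):
        if all(x + y not in A for y in chosen):
            chosen.append(x)
    return chosen

# Example: greedy_sum_free_in(range(1, 17)) returns [16, 15, ..., 8], nine
# elements, consistent with Proposition 1 (which guarantees more than
# log_2(16) = 4 of them).
```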

In particular, we have {\phi(A) \gg \log |A|} for subsets {A} of the integers. It has been possible to improve upon this easy bound, but only with remarkable effort. The best lower bound currently is

\displaystyle \phi(A) \geq \log |A| (\log\log|A|)^{1/2 - o(1)},

a result of Shao (building upon earlier work of Sudakov, Szemeredi, and Vu and of Dousse). In the opposite direction, a construction of Ruzsa gives examples of large sets {A} with {\phi(A) \leq \exp( O( \sqrt{\log |A|} ) )}.

Using the standard tool of Freiman homomorphisms, the above results for the integers extend to other torsion-free abelian groups {G}. In our paper we study the opposite case where {G} is finite (but still abelian). In this paper of Erdös (in which the quantity {\phi(A)} was first introduced), the following question was posed: if {A} is sufficiently large depending on {\phi(A)}, does this imply the existence of two elements {x,y \in A} with {x+y=0}? As it turns out, we were able to find some simple counterexamples to this statement. For instance, if {H} is any finite additive group, then the set {A := \{ 1 \hbox{ mod } 7, 2 \hbox{ mod } 7, 4 \hbox{ mod } 7\} \times H \subset {\bf Z}/7{\bf Z} \times H} has {\phi(A)=3} but with no {x,y \in A} summing to zero; this type of example in fact works with {7} replaced by any larger Mersenne prime, and we also have a counterexample in {{\bf Z}/2^n{\bf Z}} for {n} arbitrarily large. However, in the positive direction, we can show that the answer to Erdös’s question is positive if {|G|} is assumed to have no small prime factors. That is to say,

Theorem 2 For every {k \geq 1} there exists {C \geq 1} such that if {G} is a finite abelian group whose order is not divisible by any prime less than or equal to {C}, and {A} is a subset of {G} with cardinality at least {C} and {\phi(A) \leq k}, then there exist {x,y \in A} with {x+y=0}.

There are two main tools used to prove this result. One is an “arithmetic removal lemma” proven by Král, Serra, and Vena. Note that the condition {\phi(A) \leq k} means that for any distinct {x_1,\dots,x_{k+1} \in A}, at least one of the {x_i+x_j}, {1 \leq i < j \leq k+1}, must also lie in {A}. Roughly speaking, the arithmetic removal lemma allows one to “almost” remove the requirement that {x_1,\dots,x_{k+1}} be distinct, which basically now means that {x \in A \implies 2x \in A} for almost all {x \in A}. This near-dilation symmetry, when combined with the hypothesis that {|G|} has no small prime factors, gives a lot of “dispersion” in the Fourier coefficients of {1_A} which can now be exploited to prove the theorem.

The second tool is the following structure theorem, which is the main result of our paper, and goes a fair ways towards classifying sets {A} for which {\phi(A)} is small:

Theorem 3 Let {A} be a finite subset of an arbitrary additive group {G}, with {\phi(A) \leq k}. Then one can find finite subgroups {H_1,\dots,H_m} with {m \leq k} such that {|A \cap H_i| \gg_k |H_i|} and {|A \backslash (H_1 \cup \dots \cup H_m)| \ll_k 1}. Furthermore, if {m=k}, then the exceptional set {A \backslash (H_1 \cup \dots \cup H_m)} is empty.

Roughly speaking, this theorem shows that the example of the union of {k} subgroups mentioned earlier is more or less the “only” example of sets {A} with {\phi(A) \leq k}, modulo the addition of some small exceptional sets and some refinement of the subgroups to dense subsets.

This theorem has the flavour of other inverse theorems in additive combinatorics, such as Freiman’s theorem, and indeed one can use Freiman’s theorem (and related tools, such as the Balog-Szemeredi theorem) to easily get a weaker version of this theorem. Indeed, if there are no sum-free subsets of {A} of order {k+1}, then a fraction {\gg_k 1} of all pairs {a,b} in {A} must have their sum also in {A} (otherwise one could take {k+1} random elements of {A} and they would be sum-free in {A} with positive probability). From this and the Balog-Szemeredi theorem and Freiman’s theorem (in arbitrary abelian groups, as established by Green and Ruzsa), we see that {A} must be “commensurate” with a “coset progression” {H+P} of bounded rank. One can then eliminate the torsion-free component {P} of this coset progression by a number of methods (e.g. by using variants of the argument in Proposition 1), with the upshot being that one can locate a finite group {H_1} that has large intersection with {A}.

At this point it is tempting to simply remove {H_1} from {A} and iterate. But one runs into a technical difficulty that removing a set such as {H_1} from {A} can alter the quantity {\phi(A)} in unpredictable ways, so one has to still keep {H_1} around when analysing the residual set {A \backslash H_1}. A second difficulty is that the latter set {A \backslash H_1} could be considerably smaller than {A} or {H_1}, but still large in absolute terms, so in particular any error term whose size is only bounded by {\varepsilon |A|} for a small {\varepsilon} could be massive compared with the residual set {A\backslash H_1}, and so such error terms would be unacceptable. One can get around these difficulties if one first performs some preliminary “normalisation” of the group {H_1}, so that the residual set {A \backslash H_1} does not intersect any coset of {H_1} too strongly. The arguments become even more complicated when one starts removing more than one group {H_1,\dots,H_i} from {A} and analyses the residual set {A \backslash (H_1 \cup \dots \cup H_i)}; indeed the “epsilon management” involved became so fearsomely intricate that we were forced to use a nonstandard analysis formulation of the problem in order to keep the complexity of the argument at a reasonable level (cf. my previous blog post on this topic). One drawback of doing so is that we have no effective bounds for the implied constants in our main theorem; it would be of interest to obtain a more direct proof of our main theorem that would lead to effective bounds.

I’ve just uploaded to the arXiv my paper Finite time blowup for high dimensional nonlinear wave systems with bounded smooth nonlinearity, submitted to Comm. PDE. This paper is in the same spirit as (though not directly related to) my previous paper on finite time blowup of supercritical NLW systems, and was inspired by a question posed to me some time ago by Jeffrey Rauch. Here, instead of looking at supercritical equations, we look at an extremely subcritical equation, namely a system of the form

\displaystyle \Box u = f(u) \ \ \ \ \ (1)

 

where {u: {\bf R}^{1+d} \rightarrow {\bf R}^m} is the unknown field, and {f: {\bf R}^m \rightarrow {\bf R}^m} is the nonlinearity, which we assume to have all derivatives bounded. A typical example of such an equation is the higher-dimensional sine-Gordon equation

\displaystyle \Box u = \sin u

for a scalar field {u: {\bf R}^{1+d} \rightarrow {\bf R}}. Here {\Box = -\partial_t^2 + \Delta} is the d’Alembertian operator. We restrict attention here to classical (i.e. smooth) solutions to (1).

We do not assume any Hamiltonian structure, so we do not require {f} to be a gradient {f = \nabla F} of a potential {F: {\bf R}^m \rightarrow {\bf R}}. But even without such Hamiltonian structure, the equation (1) is very well behaved, with many a priori bounds available. For instance, if the initial position {u_0(x) = u(0,x)} and initial velocity {u_1(x) = \partial_t u(0,x)} are smooth and compactly supported, then from finite speed of propagation {u(t)} has uniformly bounded compact support for all {t} in a bounded interval. As the nonlinearity {f} is bounded, this immediately places {f(u)} in {L^\infty_t L^2_x} in any bounded time interval, which by the energy inequality gives an a priori {L^\infty_t H^1_x} bound on {u} in this time interval. Next, from the chain rule we have

\displaystyle \nabla f(u) = (\nabla_{{\bf R}^m} f)(u) \nabla u

which (from the assumption that {\nabla_{{\bf R}^m} f} is bounded) shows that {f(u)} is in {L^\infty_t H^1_x}, which by the energy inequality again now gives an a priori {L^\infty_t H^2_x} bound on {u}.

One might expect that one could keep iterating this and obtain a priori bounds on {u} in arbitrarily smooth norms. In low dimensions such as {d \leq 3}, this is a fairly easy task, since the above estimates and Sobolev embedding already place one in {L^\infty_t L^\infty_x}, and the nonlinear map {f} is easily verified to preserve the space {L^\infty_t H^k_x \cap L^\infty_t L^\infty_x} for any natural number {k}, from which one obtains a priori bounds in any Sobolev space; from this and standard energy methods, one can then establish global regularity for this equation (that is to say, any smooth choice of initial data generates a global smooth solution). However, one starts running into trouble in higher dimensions, in which no {L^\infty_x} bound is available. The main problem is that even a really nice nonlinearity such as {u \mapsto \sin u} is unbounded in higher Sobolev norms. The estimates

\displaystyle |\sin u| \leq |u|

and

\displaystyle |\nabla(\sin u)| \leq |\nabla u|

ensure that the map {u \mapsto \sin u} is bounded in low regularity spaces like {L^2_x} or {H^1_x}, but one already runs into trouble with the second derivative

\displaystyle \nabla^2(\sin u) = (\cos u) \nabla^2 u - (\sin u) \nabla u \nabla u

where there is a troublesome lower order term of size {O( |\nabla u|^2 )} which becomes difficult to control in higher dimensions, preventing the map {u \mapsto \sin u} from being bounded in {H^2_x}. Ultimately, the issue here is that when {u} is not controlled in {L^\infty}, the function {\sin u} can oscillate at a much higher frequency than {u}; for instance, if {u} is the one-dimensional wave {u = A \sin(kx)} for some {k > 0} and {A>1}, then {u} oscillates at frequency {k}, but the function {\sin(u) = \sin(A \sin(kx))} more or less oscillates at the larger frequency {Ak}.
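
One way to quantify this heuristic is through the chain rule: for this example,

\displaystyle \partial_x \sin( A \sin(kx) ) = A k \cos(kx) \cos( A \sin(kx) ),

which is of size comparable to {Ak} on a large portion of the domain.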

In medium dimensions, it is possible to use dispersive estimates for the wave equation (such as the famous Strichartz estimates) to overcome these problems. This line of inquiry was pursued (albeit for slightly different classes of nonlinearity {f} than those considered here) by Heinz-von Wahl, Pecher (in a series of papers), Brenner, and Brenner-von Wahl; to cut a long story short, one of the conclusions of these papers was that one had global regularity for equations such as (1) in dimensions {d \leq 9}. (I reprove this result using modern Strichartz estimate and Littlewood-Paley techniques in an appendix to my paper. The references given also allow for some growth in the nonlinearity {f}, but we will not detail the precise hypotheses used in these papers here.)

In my paper, I complement these positive results with an almost matching negative result:

Theorem 1 If {d \geq 11} and {m \geq 2}, then there exists a nonlinearity {f: {\bf R}^m \rightarrow {\bf R}^m} with all derivatives bounded, and a solution {u} to (1) that is smooth at time zero, but develops a singularity in finite time.

The construction crucially relies on the ability to choose the nonlinearity {f}, and also needs some injectivity properties on the solution {u: {\bf R}^{1+d} \rightarrow {\bf R}^m} (after making a symmetry reduction using an assumption of spherical symmetry to view {u} as a function of {1+1} variables rather than {1+d}) which restricts our counterexample to the {m \geq 2} case. Thus the model case of the higher-dimensional sine-Gordon equation {\Box u =\sin u} is not covered by our arguments. Nevertheless (as with previous finite-time blowup results discussed on this blog), one can view this result as a barrier to trying to prove regularity for equations such as {\Box u = \sin u} in eleven and higher dimensions, as any such argument must somehow use a property of that equation that is not applicable to the more general system (1).

Let us first give some back-of-the-envelope calculations suggesting why there could be finite time blowup in eleven and higher dimensions. For sake of this discussion let us restrict attention to the sine-Gordon equation {\Box u = \sin u}. The blowup ansatz we will use is as follows: for each frequency {N_j} in a sequence {1 < N_1 < N_2 < N_3 < \dots} of large quantities going to infinity, there will be a spacetime “cube” {Q_j = \{ (t,x): t \sim \frac{1}{N_j}; x = O(\frac{1}{N_j})\}} on which the solution {u} oscillates with “amplitude” {N_j^\alpha} and “frequency” {N_j}, where {\alpha>0} is an exponent to be chosen later; this ansatz is of course compatible with the uncertainty principle. Since {N_j^\alpha \rightarrow \infty} as {j \rightarrow \infty}, this will create a singularity at the spacetime origin {(0,0)}. To make this ansatz plausible, we wish to make the oscillation of {u} on {Q_j} driven primarily by the forcing term {\sin u} at {Q_{j-1}}. Thus, by Duhamel’s formula, we expect a relation roughly of the form

\displaystyle u(t,x) \approx \int \frac{\sin((s-t)\sqrt{-\Delta})}{\sqrt{-\Delta}} \sin(1_{Q_{j-1}} u(s)) (x)\ ds

on {Q_j}, where {\frac{\sin((s-t)\sqrt{-\Delta})}{\sqrt{-\Delta}}} is the usual free wave propagator, and {1_{Q_{j-1}}} is the indicator function of {Q_{j-1}}.

On {Q_{j-1}}, {u} oscillates with amplitude {N_{j-1}^\alpha} and frequency {N_{j-1}}, we expect the derivative {\nabla_{t,x} u} to be of size about {N_{j-1}^{\alpha+1}}, and so from the principle of stationary phase we expect {\sin(u)} to oscillate at frequency about {N_{j-1}^{\alpha+1}}. Since the wave propagator {\frac{\sin((s-t)\sqrt{-\Delta})}{\sqrt{-\Delta}}} preserves frequencies, and {u} is supposed to be of frequency {N_j} on {Q_j} we are thus led to the requirement

\displaystyle N_j \approx N_{j-1}^{\alpha+1}. \ \ \ \ \ (2)

 

Next, when restricted to frequencies of order {N_{j}}, the propagator {\frac{\sin((s-t)\sqrt{-\Delta})}{\sqrt{-\Delta}}} “behaves like” {N_{j}^{\frac{d-3}{2}} (s-t)^{\frac{d-1}{2}} A_{s-t}}, where {A_{s-t}} is the spherical averaging operator

\displaystyle A_{s-t} f(x) := \frac{1}{\omega_{d-1}} \int_{S^{d-1}} f(x + (s-t)\theta)\ d\theta

where {d\theta} is surface measure on the unit sphere {S^{d-1}}, and {\omega_{d-1}} is the volume of that sphere. In our setting, {s-t} is comparable to {1/N_{j-1}}, and so we have the informal approximation

\displaystyle u(t,x) \approx N_j^{\frac{d-3}{2}} N_{j-1}^{-\frac{d-1}{2}} \int_{s \sim 1/N_{j-1}} A_{s-t} \sin(u(s))(x)\ ds

on {Q_j}.

Since {\sin(u(s))} is bounded, {A_{s-t} \sin(u(s))} is bounded as well. This gives a (non-rigorous) upper bound

\displaystyle u(t,x) \lessapprox N_j^{\frac{d-3}{2}} N_{j-1}^{-\frac{d-1}{2}} \frac{1}{N_{j-1}}

which when combined with our ansatz that {u} has amplitude about {N_j^\alpha} on {Q_j}, gives the constraint

\displaystyle N_j^\alpha \lessapprox N_j^{\frac{d-3}{2}} N_{j-1}^{-\frac{d-1}{2}} \frac{1}{N_{j-1}}

which on applying (2) gives the further constraint

\displaystyle \alpha(\alpha+1) \leq \frac{d-3}{2} (\alpha+1) - \frac{d-1}{2} - 1

which can be rearranged as

\displaystyle \left(\alpha - \frac{d-5}{4}\right)^2 \leq \frac{d^2-10d-7}{16}.

It is now clear that the optimal choice of {\alpha} is

\displaystyle \alpha = \frac{d-5}{4},

and this blowup ansatz is only self-consistent when

\displaystyle \frac{d^2-10d-7}{16} \geq 0

or equivalently if {d \geq 11}.
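
The algebra in the last few steps is easy to check by hand, but for the skeptical reader here is a quick symbolic verification (a small Python/sympy sketch, not part of any rigorous argument; the variable names are mine):

```python
import sympy as sp

a, d = sp.symbols('alpha d', real=True)

# Constraint obtained from combining the ansatz with (2):
#   alpha*(alpha+1) <= (d-3)/2*(alpha+1) - (d-1)/2 - 1
lhs = a*(a + 1) - (sp.Rational(1, 2)*(d - 3)*(a + 1) - sp.Rational(1, 2)*(d - 1) - 1)

# Completing the square should give (alpha - (d-5)/4)^2 - (d^2 - 10d - 7)/16
completed = (a - (d - 5)/4)**2 - (d**2 - 10*d - 7)/16
print(sp.simplify(lhs - completed))    # 0, so the two forms agree

# Self-consistency of the ansatz requires d^2 - 10d - 7 >= 0
print(sp.solve(d**2 - 10*d - 7, d))    # [5 - 4*sqrt(2), 5 + 4*sqrt(2)]
```

Since {5 + 4\sqrt{2} \approx 10.66}, the first integer dimension for which the ansatz closes up is indeed {d = 11}.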

To turn this ansatz into an actual blowup example, we will construct {u} as the sum of various functions {u_j} that solve the wave equation with forcing term in {Q_{j+1}}, and which concentrate in {Q_j} with the amplitude and frequency indicated by the above heuristic analysis. The remaining task is to show that {\Box u} can be written in the form {f(u)} for some {f} with all derivatives bounded. For this one needs some injectivity properties of {u} (after imposing spherical symmetry to reduce the domain of {u} from {d+1} dimensions to {1+1}). This requires one to construct some solutions to the free wave equation that obey some unusual restrictions on their range (for instance, we will need a solution taking values in the plane {{\bf R}^2} that avoids one quadrant of that plane). In order to do this we take advantage of the very explicit nature of the fundamental solution to the wave equation in odd dimensions (such as {d=11}), particularly under the assumption of spherical symmetry. Specifically, one can show that in odd dimension {d}, any spherically symmetric function {u(t,x) = u(t,r)} of the form

\displaystyle u(t,r) = \left(\frac{1}{r} \partial_r\right)^{\frac{d-1}{2}} (g(t+r) + g(t-r))

for an arbitrary smooth function {g: {\bf R} \rightarrow {\bf R}^m}, will solve the free wave equation; this is ultimately due to iterating the “ladder operator” identity

\displaystyle \left( \partial_{tt} + \partial_{rr} + \frac{d-1}{r} \partial_r \right) \frac{1}{r} \partial_r = \frac{1}{r} \partial_r \left( \partial_{tt} + \partial_{rr} + \frac{d-3}{r} \partial_r \right).

This precise and relatively simple formula for {u} allows one to create “bespoke” solutions {u} that obey various unusual properties, without too much difficulty.
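
One can also confirm this formula directly in a specific odd dimension; the following small sympy sketch (with {d=5}, and an arbitrary smooth profile {g}) checks that the displayed expression is annihilated by the radial wave operator:

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
g = sp.Function('g')

d = 5                                  # any odd dimension works; d=5 keeps the output small
phi = g(t + r) + g(t - r)

u = phi
for _ in range((d - 1)//2):            # apply (1/r) d/dr a total of (d-1)/2 times
    u = sp.diff(u, r)/r

# Radial wave operator: -u_tt + u_rr + (d-1)/r u_r
box_u = -sp.diff(u, t, 2) + sp.diff(u, r, 2) + (d - 1)/r*sp.diff(u, r)
print(sp.simplify(box_u))              # 0
```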

It is not clear to me what to conjecture for {d=10}. The blowup ansatz given above is a little inefficient, in that the frequency {N_{j+1}} component of the solution is only generated from a portion of the {N_j} component, namely the portion close to a certain light cone. In particular, the solution does not saturate the Strichartz estimates that are used to establish the positive results for {d \leq 9}, which helps explain the slight gap between the positive and negative results. It may be that a more complicated ansatz could work to give a negative result in ten dimensions; conversely, it is also possible that one could use more advanced estimates than the Strichartz estimate (that somehow capture the “thinness” of the fundamental solution, and not just its dispersive properties) to stretch the positive results to ten dimensions. Which side the {d=10} case ultimately falls on will come down to some rather delicate numerology.

I’ve just uploaded to the arXiv my paper Finite time blowup for a supercritical defocusing nonlinear wave system, submitted to Analysis and PDE. This paper was inspired by a question asked of me by Sergiu Klainerman recently, regarding whether there were any analogues of my blowup example for Navier-Stokes type equations in the setting of nonlinear wave equations.

Recall that the defocusing nonlinear wave (NLW) equation reads

\displaystyle \Box u = |u|^{p-1} u \ \ \ \ \ (1)

 

where {u: {\bf R}^{1+d} \rightarrow {\bf R}} is the unknown scalar field, {\Box = -\partial_t^2 + \Delta} is the d’Alembertian operator, and {p>1} is an exponent. We can generalise this equation to the defocusing nonlinear wave system

\displaystyle \Box u = (\nabla F)(u) \ \ \ \ \ (2)

 

where {u: {\bf R}^{1+d} \rightarrow {\bf R}^m} is now a system of scalar fields, and {F: {\bf R}^m \rightarrow {\bf R}} is a potential which is homogeneous of degree {p+1} and strictly positive away from the origin; the scalar equation corresponds to the case where {m=1} and {F(u) = \frac{1}{p+1} |u|^{p+1}}. We will be interested in smooth solutions {u} to (2). It is only natural to restrict to the smooth category when the potential {F} is also smooth; unfortunately, if one requires {F} to be homogeneous of order {p+1} all the way down to the origin, then {F} cannot be smooth unless it is identically zero or {p+1} is an odd integer. This is too restrictive for us, so we will only require that {F} be homogeneous away from the origin (e.g. outside the unit ball). In any event it is the behaviour of {F(u)} for large {u} which will be decisive in understanding regularity or blowup for the equation (2).

Formally, solutions to the equation (2) enjoy a conserved energy

\displaystyle E[u] = \int_{{\bf R}^d} \frac{1}{2} \|\partial_t u \|^2 + \frac{1}{2} \| \nabla_x u \|^2 + F(u)\ dx.

Using this conserved energy, it is possible to establish global regularity for the Cauchy problem (2) in the energy-subcritical case when {d \leq 2}, or when {d \geq 3} and {p < 1+\frac{4}{d-2}}. This means that for any smooth initial position {u_0: {\bf R}^d \rightarrow {\bf R}^m} and initial velocity {u_1: {\bf R}^d \rightarrow {\bf R}^m}, there exists a (unique) smooth global solution {u: {\bf R}^{1+d} \rightarrow {\bf R}^m} to the equation (2) with {u(0,x) = u_0(x)} and {\partial_t u(0,x) = u_1(x)}. These classical global regularity results (essentially due to Jörgens) were famously extended to the energy-critical case when {d \geq 3} and {p = 1 + \frac{4}{d-2}} by Grillakis, Struwe, and Shatah-Struwe (though for various technical reasons, the global regularity component of these results was limited to the range {3 \leq d \leq 7}). A key tool used in the energy-critical theory is the Morawetz estimate

\displaystyle \int_0^T \int_{{\bf R}^d} \frac{|u(t,x)|^{p+1}}{|x|}\ dx dt \lesssim E[u]

which can be proven by manipulating the properties of the stress-energy tensor

\displaystyle T_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle - \frac{1}{2} \eta_{\alpha \beta} (\langle \partial^\gamma u, \partial_\gamma u \rangle + F(u))

(with the usual summation conventions involving the Minkowski metric {\eta_{\alpha \beta} dx^\alpha dx^\beta = -dt^2 + |dx|^2}) and in particular exploiting the divergence-free nature of this tensor: {\partial^\beta T_{\alpha \beta} = 0}. See for instance the text of Shatah-Struwe, or my own PDE book, for more details. The energy-critical regularity results have also been extended to slightly supercritical settings in which the potential grows by a logarithmic factor or so faster than the critical rate; see the results of myself and of Roy.

This leaves the question of global regularity for the energy supercritical case when {d \geq 3} and {p > 1+\frac{4}{d-2}}. On the one hand, global smooth solutions are known for small data (if {F} vanishes to sufficiently high order at the origin, see e.g. the work of Lindblad and Sogge), and global weak solutions for large data were constructed long ago by Segal. On the other hand, the solution map, if it exists, is known to be extremely unstable, particularly at high frequencies; see for instance this paper of Lebeau, this paper of Christ, Colliander, and myself, this paper of Brenner and Kumlin, or this paper of Ibrahim, Majdoub, and Masmoudi for various formulations of this instability. In the case of the focusing NLW {-\partial_{tt} u + \Delta u = - |u|^{p-1} u}, one can easily create solutions that blow up in finite time by ODE constructions; for instance, one can take {u(t,x) = c (1-t)^{-\frac{2}{p-1}}} with {c = (\frac{2(p+1)}{(p-1)^2})^{\frac{1}{p-1}}}, which blows up as {t} approaches {1}. However, the situation in the defocusing supercritical case is less clear. The strongest positive results are those of Kenig-Merle and Killip-Visan, which show (under some additional technical hypotheses) that global regularity for such equations holds under the additional assumption that the critical Sobolev norm of the solution stays bounded. Roughly speaking, this shows that “Type II blowup” cannot occur for (2).
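
As an aside, the explicit ODE blowup solution for the focusing equation mentioned above is easy to verify symbolically; here is a quick sympy sketch (with the concrete choice {p=7}, an arbitrary supercritical exponent in three dimensions):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
p = 7                                              # any exponent p > 1 would do for this check

c = sp.Rational(2*(p + 1), (p - 1)**2)**sp.Rational(1, p - 1)
u = c*(1 - t)**sp.Rational(-2, p - 1)

# With no spatial dependence, the focusing NLW -u_tt + Delta u = -|u|^{p-1} u reduces to u_tt = u^p
print(sp.simplify(sp.diff(u, t, 2) - u**p))        # 0, and u blows up as t -> 1
```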

Our main result is that finite time blowup can in fact occur, at least for three-dimensional systems where the number {m} of degrees of freedom is sufficiently large:

Theorem 1 Let {d=3}, {p > 5}, and {m \geq 76}. Then there exists a smooth potential {F: {\bf R}^m \rightarrow {\bf R}}, positive and homogeneous of degree {p+1} away from the origin, and a solution to (2) with smooth initial data that develops a singularity in finite time.

The rather large lower bound of {76} on {m} here is primarily due to our use of the Nash embedding theorem (which is the first time I have actually had to use this theorem in an application!). It can certainly be lowered, but unfortunately our methods do not seem to be able to bring {m} all the way down to {1}, so we do not directly exhibit finite time blowup for the scalar supercritical defocusing NLW. Nevertheless, this result presents a barrier to any attempt to prove global regularity for that equation, in that any such attempt must somehow use a property of the scalar equation which is not available for systems. It is likely that the methods can be adapted to higher dimensions than three, but we take advantage of some special structure of the equations in three dimensions (related to the strong Huygens principle) which does not seem to be available in higher dimensions.

The blowup will in fact be of discrete self-similar type in a backwards light cone, thus {u} will obey a relation of the form

\displaystyle u(e^S t, e^S x) = e^{-\frac{2}{p-1} S} u(t,x)

for some fixed {S>0} (the exponent {-\frac{2}{p-1}} is mandated by dimensional analysis considerations). It would be natural to consider continuously self-similar solutions (in which the above relation holds for all {S}, not just one {S}), and indeed rough self-similar solutions have been constructed in the literature by perturbative methods (see this paper of Planchon, or this paper of Ribaud and Youssfi). However, it turns out that continuously self-similar solutions to a defocusing equation have to obey an additional monotonicity formula which causes them to not exist in three spatial dimensions; this argument is given in my paper. So we have to work just with discretely self-similar solutions.

Because of the discrete self-similarity, the finite time blowup solution will be “locally Type II” in the sense that scale-invariant norms inside the backwards light cone stay bounded as one approaches the singularity. But it will not be “globally Type II” in that scale-invariant norms stay bounded outside the light cone as well; indeed energy will leak from the light cone at every scale. This is consistent with the results of Kenig-Merle and Killip-Visan which preclude “globally Type II” blowup solutions to these equations in many cases.

We now sketch the arguments used to prove this theorem. Usually when studying the NLW, we think of the potential {F} (and the initial data {u_0,u_1}) as being given in advance, and then try to solve for {u} as an unknown field. However, in this problem we have the freedom to select {F}. So we can look at this problem from a “backwards” direction: we first choose the field {u}, and then fit the potential {F} (and the initial data) to match that field.

Now, one cannot write down a completely arbitrary field {u} and hope to find a potential {F} obeying (2), as there are some constraints coming from the homogeneity of {F}. Namely, from the Euler identity

\displaystyle \langle u, (\nabla F)(u) \rangle = (p+1) F(u)

we see that {F(u)} can be recovered from (2) by the formula

\displaystyle F(u) = \frac{1}{p+1} \langle u, \Box u \rangle \ \ \ \ \ (3)

 

so the defocusing nature of {F} imposes a constraint

\displaystyle \langle u, \Box u \rangle > 0.

Furthermore, taking a derivative of (3) we obtain another constraining equation

\displaystyle \langle \partial_\alpha u, \Box u \rangle = \frac{1}{p+1} \partial_\alpha \langle u, \Box u \rangle

that does not explicitly involve the potential {F}. Actually, one can write this equation in the more familiar form

\displaystyle \partial^\beta T_{\alpha \beta} = 0

where {T_{\alpha \beta}} is the stress-energy tensor

\displaystyle T_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle - \frac{1}{2} \eta_{\alpha \beta} (\langle \partial^\gamma u, \partial_\gamma u \rangle + \frac{1}{p+1} \langle u, \Box u \rangle),

now written in a manner that does not explicitly involve {F}.

This reformulation suggests a strategy for locating {u}: first one selects a stress-energy tensor {T_{\alpha \beta}} that is divergence-free and obeys suitable positive definiteness and self-similarity properties, and then locates a self-similar map {u} from the backwards light cone to {{\bf R}^m} that has that stress-energy tensor (one also needs the map {u} (or more precisely the direction component {u/\|u\|} of that map) to be injective up to the discrete self-similarity, in order to define {F(u)} consistently). If the stress-energy tensor were replaced by the simpler “energy tensor”

\displaystyle E_{\alpha \beta} = \langle \partial_\alpha u, \partial_\beta u \rangle

then the question of constructing an (injective) map {u} with the specified energy tensor is precisely the embedding problem that was famously solved by Nash (viewing {E_{\alpha \beta}} as a Riemannian metric on the domain of {u}, which in this case is a backwards light cone quotiented by a discrete self-similarity to make it compact). It turns out that one can adapt the Nash embedding theorem to also work with the stress-energy tensor as well (as long as one also specifies the mass density {M = \|u\|^2}, and as long as a certain positive definiteness property, related to the positive semi-definiteness of Gram matrices, is obeyed). Here is where the dimension {76} shows up:

Proposition 2 Let {M} be a smooth compact Riemannian {4}-manifold, and let {m \geq 76}. Then {M} smoothly isometrically embeds into the sphere {S^{m-1}}.

Proof: The Nash embedding theorem (in the form given in this ICM lecture of Gunther) shows that {M} can be smoothly isometrically embedded into {{\bf R}^{19}}, and thus in {[-R,R]^{19}} for some large {R}. Using an irrational slope, the interval {[-R,R]} can be smoothly isometrically embedded into the {2}-torus {\frac{1}{\sqrt{38}} (S^1 \times S^1)}, and so {[-R,R]^{19}} and hence {M} can be smoothly isometrically embedded in {\frac{1}{\sqrt{38}} (S^1)^{38}}. But from Pythagoras’ theorem, {\frac{1}{\sqrt{38}} (S^1)^{38}} can be identified with a subset of {S^{m-1}} for any {m \geq 76}, and the claim follows. \Box

One can presumably improve upon the bound {76} by being more efficient with the embeddings (e.g. by modifying the proof of Nash embedding to embed directly into a round sphere), but I did not try to optimise the bound here.
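
The bookkeeping behind the number {76} is simple enough to spell out in a few lines; the following trivial Python sketch just records the dimension count and the Pythagoras step (all the numbers come from the proof above):

```python
import math, random

nash_dim = 19                 # Gunther's form of the Nash embedding theorem for a compact 4-manifold
num_circles = 2*nash_dim      # each coordinate interval is wound into a 2-torus, i.e. two circles
ambient_dim = 2*num_circles   # each circle sits in R^2, so (S^1)^38 sits in R^76
print(ambient_dim)            # 76

# Pythagoras step: any point on (1/sqrt(38)) * (S^1)^38 has Euclidean norm 1, so it lies on S^75
r = 1/math.sqrt(num_circles)
angles = [random.uniform(0, 2*math.pi) for _ in range(num_circles)]
point = [f(a)*r for a in angles for f in (math.cos, math.sin)]
print(len(point), sum(x*x for x in point))   # 76 and (up to rounding) 1.0
```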

The remaining task is to construct the stress-energy tensor {T_{\alpha \beta}}. One can reduce to tensors that are invariant with respect to rotations around the spatial origin, but this still leaves a fair number of degrees of freedom (it turns out that there are four fields that need to be specified, which are denoted {M, E_{tt}, E_{tr}, E_{rr}} in my paper). However, a small miracle occurs in three spatial dimensions, in that the divergence-free condition involves only two of the four degrees of freedom (or three out of four, depending on whether one considers a function that is even or odd in {r} to only be half a degree of freedom). This is easiest to illustrate with the scalar NLW (1). Assuming spherical symmetry, this equation becomes

\displaystyle - \partial_{tt} u + \partial_{rr} u + \frac{2}{r} \partial_r u = |u|^{p-1} u.

Making the substitution {\phi := ru}, we can eliminate the lower order term {\frac{2}{r} \partial_r} completely to obtain

\displaystyle - \partial_{tt} \phi + \partial_{rr} \phi= \frac{1}{r^{p-1}} |\phi|^{p-1} \phi.

(This can be compared with the situation in higher dimensions, in which an undesirable zeroth order term {\frac{(d-1)(d-3)}{r^2} \phi} shows up.) In particular, if one introduces the null energy density

\displaystyle e_+ := \frac{1}{2} |\partial_t \phi + \partial_r \phi|^2

and the potential energy density

\displaystyle V := \frac{|\phi|^{p+1}}{(p+1) r^{p-1}}

then one can verify the equation

\displaystyle (\partial_t - \partial_r) e_+ + (\partial_t + \partial_r) V = - \frac{p-1}{r} V

which can be viewed as a transport equation for {e_+} with forcing term depending on {V} (or vice versa), and is thus quite easy to solve explicitly by choosing one of these fields and then solving for the other. As it turns out, once one is in the supercritical regime {p>5}, one can solve this equation while giving {e_+} and {V} the right homogeneity (they have to be homogeneous of order {-\frac{4}{p-1}}, which is greater than {-1} in the supercritical case) and positivity properties, and from this it is possible to prescribe all the other fields one needs to satisfy the conclusions of the main theorem. (It turns out that {e_+} and {V} will be concentrated near the boundary of the light cone, so this is how the solution {u} will concentrate also.)
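
The transport identity above is a routine computation, but it is also easy to confirm symbolically; here is a small sympy sketch (dropping the absolute values, i.e. working with {\phi > 0}, and keeping {p} symbolic):

```python
import sympy as sp

t, r, p = sp.symbols('t r p', positive=True)
phi = sp.Function('phi')(t, r)

e_plus = sp.Rational(1, 2)*(sp.diff(phi, t) + sp.diff(phi, r))**2     # null energy density
V = phi**(p + 1)/((p + 1)*r**(p - 1))                                 # potential energy density

lhs = (sp.diff(e_plus, t) - sp.diff(e_plus, r)) + (sp.diff(V, t) + sp.diff(V, r))

# Impose the reduced equation -phi_tt + phi_rr = phi^p / r^{p-1}
lhs = lhs.subs(sp.diff(phi, t, 2), sp.diff(phi, r, 2) - phi**p/r**(p - 1))

print(sp.simplify(lhs + (p - 1)/r*V))                                 # 0
```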

Kevin Ford, James Maynard, and I have uploaded to the arXiv our preprint “Chains of large gaps between primes“. This paper was announced in our previous paper with Konyagin and Green, which was concerned with the largest gap

\displaystyle  G_1(X) := \max_{p_n, p_{n+1} \leq X} (p_{n+1} - p_n)

between consecutive primes up to {X}, in which we improved the Rankin bound of

\displaystyle  G_1(X) \gg \log X \frac{\log_2 X \log_4 X}{(\log_3 X)^2}

to

\displaystyle  G_1(X) \gg \log X \frac{\log_2 X \log_4 X}{\log_3 X}

for large {X} (where we use the abbreviations {\log_2 X := \log\log X}, {\log_3 X := \log\log\log X}, and {\log_4 X := \log\log\log\log X}). Here, we obtain an analogous result for the quantity

\displaystyle  G_k(X) := \max_{p_n, \dots, p_{n+k} \leq X} \min( p_{n+1} - p_n, p_{n+2}-p_{n+1}, \dots, p_{n+k} - p_{n+k-1} )

which measures how far apart the gaps between chains of {k} consecutive primes can be. Our main result is

\displaystyle  G_k(X) \gg \frac{1}{k^2} \log X \frac{\log_2 X \log_4 X}{\log_3 X}

whenever {X} is sufficiently large depending on {k}, with the implied constant here absolute (and effective). The factor of {1/k^2} is inherent to the method, and related to the basic probabilistic fact that if one selects {k} numbers at random from the unit interval {[0,1]}, then one expects the minimum gap between adjacent numbers to be about {1/k^2} (i.e. smaller than the mean spacing of {1/k} by an additional factor of {1/k}).
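
The {1/k^2} heuristic quoted in parentheses is easy to see numerically; here is a tiny Monte Carlo sketch (the parameters are arbitrary):

```python
import random

def mean_min_gap(k, trials=2000):
    """Average minimum gap between k points drawn uniformly from [0,1]."""
    total = 0.0
    for _ in range(trials):
        pts = sorted(random.random() for _ in range(k))
        total += min(b - a for a, b in zip(pts, pts[1:]))
    return total/trials

for k in (5, 10, 20, 40):
    print(k, mean_min_gap(k)*k**2)   # roughly constant in k, consistent with a ~1/k^2 minimum gap
```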

Our arguments combine those from the previous paper with the matrix method of Maier, who (in our notation) showed that

\displaystyle  G_k(X) \gg_k  \log X \frac{\log_2 X \log_4 X}{(\log_3 X)^2}

for an infinite sequence of {X} going to infinity. (Maier needed to restrict to an infinite sequence to avoid Siegel zeroes, but we are able to resolve this issue by the now standard technique of simply eliminating a prime factor of an exceptional conductor from the sieve-theoretic portion of the argument. As a byproduct, this also makes all of the estimates in our paper effective.)

As its name suggests, the Maier matrix method is usually presented by imagining a matrix of numbers, and using information about the distribution of primes in the columns of this matrix to deduce information about the primes in at least one of the rows of the matrix. We found it convenient to interpret this method in an equivalent probabilistic form as follows. Suppose one wants to find an interval {n+1,\dots,n+y} which contains a block of at least {k} primes, any two of which are separated by at least {g} (ultimately, {y} will be something like {\log X \frac{\log_2 X \log_4 X}{\log_3 X}} and {g} something like {y/k^2}). One can do this by the probabilistic method: pick {n} to be a random large natural number {{\mathbf n}} (with the precise distribution to be chosen later), and try to lower bound the probability that the interval {{\mathbf n}+1,\dots,{\mathbf n}+y} contains at least {k} primes, no two of which are within {g} of each other.

By carefully choosing the residue class of {{\mathbf n}} with respect to small primes, one can eliminate several of the {{\mathbf n}+j} from consideration of being prime immediately. For instance, if {{\mathbf n}} is chosen to be large and even, then the {{\mathbf n}+j} with {j} even have no chance of being prime and can thus be eliminated; similarly if {{\mathbf n}} is large and odd, then {{\mathbf n}+j} cannot be prime for any odd {j}. Using the methods of our previous paper, we can find a residue class {m \hbox{ mod } P} (where {P} is a product of a large number of primes) such that, if one chooses {{\mathbf n}} to be a large random element of {m \hbox{ mod } P} (that is, {{\mathbf n} = {\mathbf z} P + m} for some large random integer {{\mathbf z}}), then the set {{\mathcal T}} of shifts {j \in \{1,\dots,y\}} for which {{\mathbf n}+j} still has a chance of being prime has size comparable to something like {k \log X / \log_2 X}; furthermore this set {{\mathcal T}} is fairly well distributed in {\{1,\dots,y\}} in the sense that it does not concentrate too strongly in any short subinterval of {\{1,\dots,y\}}. The main new difficulty, not present in the previous paper, is to get lower bounds on the size of {{\mathcal T}} in addition to upper bounds, but this turns out to be achievable by a suitable modification of the arguments.

Using a version of the prime number theorem in arithmetic progressions due to Gallagher, one can show that for each remaining shift {j \in {\mathcal T}}, {{\mathbf n}+j} is going to be prime with probability comparable to {\log_2 X / \log X}, so one expects about {k} primes in the set {\{{\mathbf n} + j: j \in {\mathcal T}\}}. An upper bound sieve (e.g. the Selberg sieve) also shows that for any distinct {j,j' \in {\mathcal T}}, the probability that {{\mathbf n}+j} and {{\mathbf n}+j'} are both prime is {O( (\log_2 X / \log X)^2 )}. Using this and some routine second moment calculations, one can then show that with large probability, the set {\{{\mathbf n} + j: j \in {\mathcal T}\}} will indeed contain about {k} primes, no two of which are closer than {g} to each other; with no other numbers in this interval being prime, this gives a lower bound on {G_k(X)}.

I’ve just uploaded two related papers to the arXiv: “The logarithmically averaged Chowla and Elliott conjectures for two-point correlations“, and “The Erdős discrepancy problem“.

This pair of papers is an outgrowth of these two recent blog posts and the ensuing discussion. In the first paper, we establish the following logarithmically averaged version of the Chowla conjecture (in the case {k=2} of two-point correlations (or “pair correlations”)):

Theorem 1 (Logarithmically averaged Chowla conjecture) Let {a_1,a_2} be natural numbers, and let {b_1,b_2} be integers such that {a_1 b_2 - a_2 b_1 \neq 0}. Let {1 \leq \omega(x) \leq x} be a quantity depending on {x} that goes to infinity as {x \rightarrow \infty}. Let {\lambda} denote the Liouville function. Then one has

\displaystyle  \sum_{x/\omega(x) < n \leq x} \frac{\lambda(a_1 n + b_1) \lambda(a_2 n+b_2)}{n} = o( \log \omega(x) ) \ \ \ \ \ (1)

as {x \rightarrow \infty}.

Thus for instance one has

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x). \ \ \ \ \ (2)

For comparison, the non-averaged Chowla conjecture would imply that

\displaystyle  \sum_{n \leq x} \lambda(n) \lambda(n+1) = o(x) \ \ \ \ \ (3)

which is a strictly stronger estimate than (2), and remains open.

The arguments also extend to other completely multiplicative functions than the Liouville function. In particular, one obtains a slightly averaged version of the non-asymptotic Elliott conjecture that was shown in the previous blog post to imply a positive solution to the Erdos discrepancy problem. The averaged version of the conjecture established in this paper is slightly weaker than the one assumed in the previous blog post, but it turns out that the arguments there can be modified without much difficulty to accept this averaged Elliott conjecture as input. In particular, we obtain an unconditional solution to the Erdos discrepancy problem as a consequence; this is detailed in the second paper listed above. In fact we can also handle the vector-valued version of the Erdos discrepancy problem, in which the sequence {f(1), f(2), \dots} takes values in the unit sphere of an arbitrary Hilbert space, rather than in {\{-1,+1\}}.

Estimates such as (2) or (3) are known to be subject to the “parity problem” (discussed numerous times previously on this blog), which roughly speaking means that they cannot be proven solely using “linear” estimates on functions such as the von Mangoldt function. However, it is known that the parity problem can be circumvented using “bilinear” estimates, and this is basically what is done here.

We now describe in informal terms the proof of Theorem 1, focusing on the model case (2) for simplicity. Suppose for contradiction that the left-hand side of (2) was large and (say) positive. Using the multiplicativity {\lambda(pn) = -\lambda(n)}, we conclude that

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+p) 1_{p|n}}{n}

is also large and positive for all primes {p} that are not too large; note here how the logarithmic averaging allows us to leave the constraint {n \leq x} unchanged. Summing in {p}, we conclude that

\displaystyle  \sum_{n \leq x} \frac{ \sum_{p \in {\mathcal P}} \lambda(n) \lambda(n+p) 1_{p|n}}{n}

is large and positive for any given set {{\mathcal P}} of medium-sized primes. By a standard averaging argument, this implies that

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \lambda(n+j) \lambda(n+p+j) 1_{p|n+j} \ \ \ \ \ (4)

is large for many choices of {n}, where {H} is a medium-sized parameter at our disposal to choose, and we take {{\mathcal P}} to be some set of primes that are somewhat smaller than {H}. (A similar approach was taken in this recent paper of Matomaki, Radziwill, and myself to study sign patterns of the Möbius function.) To obtain the required contradiction, one thus wants to demonstrate significant cancellation in the expression (4). As in that paper, we view {n} as a random variable, in which case (4) is essentially a bilinear sum of the random sequence {(\lambda(n+1),\dots,\lambda(n+H))} along a random graph {G_{n,H}} on {\{1,\dots,H\}}, in which two vertices {j, j+p} are connected if they differ by a prime {p} in {{\mathcal P}} that divides {n+j}. A key difficulty in controlling this sum is that for randomly chosen {n}, the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}} need not be independent. To get around this obstacle we introduce a new argument which we call the “entropy decrement argument” (in analogy with the “density increment argument” and “energy increment argument” that appear in the literature surrounding Szemerédi’s theorem on arithmetic progressions, and also reminiscent of the “entropy compression argument” of Moser and Tardos, discussed in this previous post). This argument, which is a simple consequence of the Shannon entropy inequalities, can be viewed as a quantitative version of the standard subadditivity argument that establishes the existence of Kolmogorov-Sinai entropy in topological dynamical systems; it allows one to select a scale parameter {H} (in some suitable range {[H_-,H_+]}) for which the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}} exhibit some weak independence properties (or more precisely, the mutual information between the two random variables is small).

Informally, the entropy decrement argument goes like this: if the sequence {(\lambda(n+1),\dots,\lambda(n+H))} has significant mutual information with {G_{n,H}}, then the entropy of the sequence {(\lambda(n+1),\dots,\lambda(n+H'))} for {H' > H} will grow a little slower than linearly, due to the fact that the graph {G_{n,H}} has zero entropy (knowledge of {G_{n,H}} more or less completely determines the shifts {G_{n+kH,H}} of the graph); this can be formalised using the classical Shannon inequalities for entropy (and specifically, the non-negativity of conditional mutual information). But the entropy cannot drop below zero, so by increasing {H} as necessary, at some point one must reach a metastable region (cf. the finite convergence principle discussed in this previous blog post), within which very little mutual information can be shared between the sequence {(\lambda(n+1),\dots,\lambda(n+H))} and the graph {G_{n,H}}. Curiously, for the application it is not enough to have a purely qualitative version of this argument; one needs a quantitative bound (which gains a factor of a bit more than {\log H} on the trivial bound for mutual information), and this is surprisingly delicate (it ultimately comes down to the fact that the series {\sum_{j \geq 2} \frac{1}{j \log j \log\log j}} diverges, which is only barely true).
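
To get a feel for how barely this series diverges, one can look at a few partial sums numerically (a throwaway Python computation, starting the sum at {j=3} so that {\log\log j} is positive):

```python
import math

def partial_sum(N):
    return sum(1/(j*math.log(j)*math.log(math.log(j))) for j in range(3, N))

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, partial_sum(N))   # the partial sums creep upwards triple-logarithmically slowly
```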

Once one locates a scale {H} with the low mutual information property, one can use standard concentration of measure results such as the Hoeffding inequality to approximate (4) by the significantly simpler expression

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+p+j)}{p}. \ \ \ \ \ (5)

The important thing here is that Hoeffding’s inequality gives exponentially strong bounds on the failure probability, which is needed to counteract the logarithms that are inevitably present whenever trying to use entropy inequalities. The expression (5) can then be controlled in turn by an application of the Hardy-Littlewood circle method and a non-trivial estimate

\displaystyle  \sup_\alpha \frac{1}{X} \int_X^{2X} |\frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(1) \ \ \ \ \ (6)

for averaged short sums of a modulated Liouville function established in another recent paper by Matomäki, Radziwill and myself.

When one uses this method to study more general sums such as

\displaystyle  \sum_{n \leq x} \frac{g_1(n) g_2(n+1)}{n},

one ends up having to consider expressions such as

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} c_p \frac{g_1(n+j) g_2(n+p+j)}{p}.

where {c_p} is the coefficient {c_p := \overline{g_1}(p) \overline{g_2}(p)}. When attacking this sum with the circle method, one soon finds oneself in the situation of wanting to locate the large Fourier coefficients of the exponential sum

\displaystyle  S(\alpha) := \sum_{p \in {\mathcal P}} \frac{c_p}{p} e^{2\pi i \alpha p}.

In many cases (such as in the application to the Erdös discrepancy problem), the coefficient {c_p} is identically {1}, and one can understand this sum satisfactorily using the classical results of Vinogradov: basically, {S(\alpha)} is large when {\alpha} lies in a “major arc” and is small when it lies in a “minor arc”. For more general functions {g_1,g_2}, the coefficients {c_p} are more or less arbitrary; the large values of {S(\alpha)} are no longer confined to the major arc case. Fortunately, even in this general situation one can use a restriction theorem for the primes established some time ago by Ben Green and myself to show that there are still only a bounded number of possible locations {\alpha} (up to the uncertainty mandated by the Heisenberg uncertainty principle) where {S(\alpha)} is large, and we can still conclude by using (6). (Actually, as recently pointed out to me by Ben, one does not need the full strength of our result; one only needs the {L^4} restriction theorem for the primes, which can be proven fairly directly using Plancherel’s theorem and some sieve theory.)
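
For readers who have not played with such exponential sums before, the major arc/minor arc dichotomy in the {c_p \equiv 1} case is already visible numerically; here is a toy computation (the range of primes and the sample frequencies {\alpha} are arbitrary choices of mine):

```python
import math
from cmath import exp

def primes_up_to(n):
    sieve = [True]*(n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            for j in range(i*i, n + 1, i):
                sieve[j] = False
    return [i for i, flag in enumerate(sieve) if flag]

P = [p for p in primes_up_to(10**5) if p > 10**3]      # a set of "medium-sized" primes

def S(alpha):
    return abs(sum(exp(2j*math.pi*alpha*p)/p for p in P))

print(S(0.0), S(0.5), S(1/3))                  # rationals with small denominator: relatively large
print(S(math.sqrt(2) - 1), S(0.1234567))       # generic ("minor arc") alpha: much smaller
```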

It is tempting to also use the method to attack higher order cases of the (logarithmically) averaged Chowla conjecture, for instance one could try to prove the estimate

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1) \lambda(n+2)}{n} = o(\log x).

The above arguments reduce matters to obtaining some non-trivial cancellation for sums of the form

\displaystyle  \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+p+j) \lambda(n+2p+j)}{p}.

A little bit of “higher order Fourier analysis” (as was done for very similar sums in the ergodic theory context by Frantzikinakis-Host-Kra and Wooley-Ziegler) lets one control this sort of sum if one can establish a bound of the form

\displaystyle  \frac{1}{X} \int_X^{2X} \sup_\alpha |\frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(1) \ \ \ \ \ (7)

where {X} goes to infinity and {H} is a very slowly growing function of {X}. This looks very similar to (6), but the fact that the supremum is now inside the integral makes the problem much more difficult. However it looks worth attacking (7) further, as this estimate looks like it should have many nice applications (beyond just the {k=3} case of the logarithmically averaged Chowla or Elliott conjectures, which is already interesting).

For higher {k} than {k=3}, the same line of analysis requires one to replace the linear phase {e(\alpha n)} by more complicated phases, such as quadratic phases {e(\alpha n^2 + \beta n)} or even {k-2}-step nilsequences. Given that (7) is already beyond the reach of current literature, these even more complicated expressions are also unavailable at present, but one can imagine that they will eventually become tractable, in which case we would obtain an averaged form of the Chowla conjecture for all {k}, which would have a number of consequences (such as a logarithmically averaged version of Sarnak’s conjecture, as per this blog post).

It would of course be very nice to remove the logarithmic averaging, and be able to establish bounds such as (3). I did attempt to do so, but I do not see a way to use the entropy decrement argument in a manner that does not require some sort of averaging of logarithmic type, as it requires one to pick a scale {H} that one cannot specify in advance, which is not a problem for logarithmic averages (which are quite stable with respect to dilations) but is problematic for ordinary averages. But perhaps the problem can be circumvented by some clever modification of the argument. One possible approach would be to start exploiting multiplicativity at products of primes, and not just individual primes, to try to keep the scale fixed, but this makes the concentration of measure part of the argument much more complicated as one loses some independence properties (coming from the Chinese remainder theorem) which allowed one to conclude just from the Hoeffding inequality.

Kaisa Matomäki, Maksym Radziwiłł, and I have just uploaded to the arXiv our paper “Sign patterns of the Liouville and Möbius functions“. This paper is somewhat similar to our previous paper in that it is using the recent breakthrough of Matomäki and Radziwiłł on mean values of multiplicative functions to obtain partial results towards the Chowla conjecture. This conjecture can be phrased, roughly speaking, as follows: if {k} is a fixed natural number and {n} is selected at random from a large interval {[1,x]}, then the sign pattern {(\lambda(n), \lambda(n+1),\dots,\lambda(n+k-1)) \in \{-1,+1\}^k} becomes asymptotically equidistributed in {\{-1,+1\}^k} in the limit {x \rightarrow \infty}. This remains open for {k \geq 2}. In fact even the significantly weaker statement that each of the sign patterns in {\{-1,+1\}^k} is attained infinitely often is open for {k \geq 4}. However, in 1986, Hildebrand showed that for {k \leq 3} all sign patterns are indeed attained infinitely often. Our first result is a strengthening of Hildebrand’s, moving a little bit closer to Chowla’s conjecture:

Theorem 1 Let {k \leq 3}. Then each of the sign patterns in {\{-1,+1\}^k} is attained by the Liouville function for a set of natural numbers {n} of positive lower density.

Thus for instance one has {\lambda(n)=\lambda(n+1)=\lambda(n+2)} for a set of {n} of positive lower density. The {k \leq 2} case of this theorem already appears in the original paper of Matomäki and Radziwiłł (and the significantly simpler case of the sign patterns {++} and {--} was treated previously by Harman, Pintz, and Wolke).
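
As a purely numerical aside (and not part of any proof), one can sieve out the Liouville function up to a modest height and look at the empirical frequencies of the eight length-three sign patterns; they all come out close to {1/8}, which is consistent with (though of course far stronger than) the positive lower density guaranteed by the theorem. A quick Python sketch:

```python
from collections import Counter

N = 10**6
Omega = [0]*(N + 1)                   # number of prime factors counted with multiplicity
for p in range(2, N + 1):
    if Omega[p] == 0:                 # p is prime: no smaller prime has marked it yet
        pk = p
        while pk <= N:
            for m in range(pk, N + 1, pk):
                Omega[m] += 1
            pk *= p
lam = [1 - 2*(w % 2) for w in Omega]  # Liouville function lambda(n) = (-1)^Omega(n)

patterns = Counter((lam[n], lam[n + 1], lam[n + 2]) for n in range(1, N - 1))
for pattern, count in sorted(patterns.items()):
    print(pattern, count/(N - 2))     # each frequency is close to 0.125
```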

The basic strategy in all of these arguments is to assume for sake of contradiction that a certain sign pattern occurs extremely rarely, and then exploit the complete multiplicativity of {\lambda} (which implies in particular that {\lambda(2n) = -\lambda(n)}, {\lambda(3n) = -\lambda(n)}, and {\lambda(5n) = -\lambda(n)} for all {n}) together with some combinatorial arguments (vaguely analogous to solving a Sudoku puzzle!) to establish more complex sign patterns for the Liouville function, that are either inconsistent with each other, or with results such as the Matomäki-Radziwiłł result. To illustrate this, let us give some {k=2} examples, arguing a little informally to emphasise the combinatorial aspects of the argument. First suppose that the sign pattern {(\lambda(n),\lambda(n+1)) = (+1,+1)} almost never occurs. The prime number theorem tells us that {\lambda(n)} and {\lambda(n+1)} are each equal to {+1} about half of the time, which by inclusion-exclusion implies that the sign pattern {(\lambda(n),\lambda(n+1))=(-1,-1)} almost never occurs. In other words, we have {\lambda(n+1) = -\lambda(n)} for almost all {n}. But from the multiplicativity property {\lambda(2n)=-\lambda(n)} this implies that one should have

\displaystyle \lambda(2n+2) = -\lambda(2n)

\displaystyle \lambda(2n+1) = -\lambda(2n)

and

\displaystyle \lambda(2n+2) = -\lambda(2n+1)

for almost all {n}. But the above three statements are contradictory, and the claim follows.

Similarly, if we assume that the sign pattern {(\lambda(n),\lambda(n+1)) = (+1,-1)} almost never occurs, then a similar argument to the above shows that for any fixed {h}, one has {\lambda(n)=\lambda(n+1)=\dots=\lambda(n+h)} for almost all {n}. But this means that the mean {\frac{1}{h} \sum_{j=1}^h \lambda(n+j)} is abnormally large for most {n}, which (for {h} large enough) contradicts the results of Matomäki and Radziwiłł. Here we see that the “enemy” to defeat is the scenario in which {\lambda} only changes sign very rarely, in which case one rarely sees the pattern {(+1,-1)}.

It turns out that similar (but more combinatorially intricate) arguments work for sign patterns of length three (but are unlikely to work for most sign patterns of length four or greater). We give here one fragment of such an argument (due to Hildebrand) which hopefully conveys the Sudoku-type flavour of the combinatorics. Suppose for instance that the sign pattern {(\lambda(n),\lambda(n+1),\lambda(n+2)) = (+1,+1,+1)} almost never occurs. Now suppose {n} is a typical number with {\lambda(15n-1)=\lambda(15n+1)=+1}. Since we almost never have the sign pattern {(+1,+1,+1)}, we must (almost always) then have {\lambda(15n) = -1}. By multiplicativity this implies that

\displaystyle (\lambda(60n-4), \lambda(60n), \lambda(60n+4)) = (+1,-1,+1).

We claim that this (almost always) forces {\lambda(60n+5)=-1}. For if {\lambda(60n+5)=+1}, then by the lack of the sign pattern {(+1,+1,+1)}, this (almost always) forces {\lambda(60n+3)=\lambda(60n+6)=-1}, which by multiplicativity forces {\lambda(20n+1)=\lambda(20n+2)=+1}, which by lack of {(+1,+1,+1)} (almost always) forces {\lambda(20n)=-1}, which by multiplicativity contradicts {\lambda(60n)=-1}. Thus we have {\lambda(60n+5)=-1}; a similar argument gives {\lambda(60n-5)=-1} almost always, which by multiplicativity gives {\lambda(12n-1)=\lambda(12n)=\lambda(12n+1)=+1}, a contradiction. Thus we almost never have {\lambda(15n-1)=\lambda(15n+1)=+1}, which by the inclusion-exclusion argument mentioned previously shows that {\lambda(15n+1) = - \lambda(15n-1)} for almost all {n}.

One can continue these Sudoku-type arguments and conclude eventually that {\lambda(3n-1)=-\lambda(3n+1)=\lambda(3n+2)} for almost all {n}. To put it another way, if {\chi_3} denotes the non-principal Dirichlet character of modulus {3}, then {\lambda \chi_3} is almost always constant away from the multiples of {3}. (Conversely, if {\lambda \chi_3} changed sign very rarely outside of the multiples of three, then the sign pattern {(+1,+1,+1)} would never occur.) Fortunately, the main result of Matomäki and Radziwiłł shows that this scenario cannot occur, which establishes that the sign pattern {(+1,+1,+1)} must occur rather frequently. The other sign patterns are handled by variants of these arguments.

Excluding a sign pattern of length three leads to useful implications like “if {\lambda(n-1)=\lambda(n)=+1}, then {\lambda(n+1)=-1}” which turn out to be just barely strong enough to quite rigidly constrain the Liouville function using Sudoku-like arguments. In contrast, excluding a sign pattern of length four only gives rise to implications like “if {\lambda(n-2)=\lambda(n-1)=\lambda(n)=+1}, then {\lambda(n+1)=-1}”, and these seem to be much weaker for this purpose (the hypothesis in these implications just isn’t satisfied nearly often enough). So a different idea seems to be needed if one wishes to extend the above theorem to larger values of {k}.

Our second theorem gives an analogous result for the Möbius function {\mu} (which takes values in {\{-1,0,+1\}} rather than {\{-1,1\}}), but the analysis turns out to be remarkably difficult and we are only able to get up to {k=2}:

Theorem 2 Let {k \leq 2}. Then each of the sign patterns in {\{-1,0,+1\}^k} is attained by the Möbius function for a set {n} of positive lower density.

It turns out that the prime number theorem and elementary sieve theory can be used to handle the {k=1} case and all the {k=2} cases that involve at least one {0}, leaving only the four sign patterns {(\pm 1, \pm 1)} to handle. It is here that the zeroes of the Möbius function cause a significant new obstacle. Suppose for instance that the sign pattern {(+1, -1)} almost never occurs for the Möbius function. The same arguments that were used in the Liouville case then show that {\mu(n)} will be almost always equal to {\mu(n+1)}, provided that {n,n+1} are both square-free. One can try to chain this together as before to create a long string {\mu(n)=\dots=\mu(n+h) \in \{-1,+1\}} where the Möbius function is constant, but this cannot work for any {h} larger than three, because the Möbius function vanishes at every multiple of four.

The constraints we assume on the Möbius function can be depicted using a graph on the squarefree natural numbers, in which any two adjacent squarefree natural numbers are connected by an edge. The main difficulty is then that this graph is highly disconnected due to the multiples of four not being squarefree.

To get around this, we need to enlarge the graph. Note from multiplicativity that if {\mu(n)} is almost always equal to {\mu(n+1)} when {n,n+1} are squarefree, then {\mu(n)} is almost always equal to {\mu(n+p)} when {n,n+p} are squarefree and {n} is divisible by {p}. We can then form a graph on the squarefree natural numbers by connecting {n} to {n+p} whenever {n,n+p} are squarefree and {n} is divisible by {p}. If this graph is “locally connected” in some sense, then {\mu} will be constant on almost all of the squarefree numbers in a large interval, which turns out to be incompatible with the results of Matomäki and Radziwiłł. Because of this, matters are reduced to establishing the connectedness of a certain graph. More precisely, it turns out to be sufficient to establish the following claim:

Theorem 3 For each prime {p}, let {a_p \hbox{ mod } p^2} be a residue class chosen uniformly at random. Let {G} be the random graph whose vertices {V} consist of those integers {n} not equal to {a_p \hbox{ mod } p^2} for any {p}, and whose edges consist of pairs {n,n+p} in {V} with {n = a_p \hbox{ mod } p}. Then with probability {1}, the graph {G} is connected.
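
Before discussing the proof, one can get a rough feel for this graph by simulating a finite truncation of it (this is purely illustrative; the parameters {N} and {P} below are arbitrary, and cutting the primes off at {P} necessarily leaves some vertices isolated, which does not happen in the infinite graph):

```python
import random
from collections import Counter

N, P = 100000, 100
random.seed(0)

primes = [p for p in range(2, P + 1) if all(p % q for q in range(2, int(p**0.5) + 1))]
a = {p: random.randrange(p*p) for p in primes}               # random residue class a_p mod p^2

vertices = [n for n in range(1, N + 1) if all(n % (p*p) != a[p] for p in primes)]
vset = set(vertices)

parent = {n: n for n in vertices}                            # union-find over surviving integers
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

for p in primes:                                             # join n to n+p whenever n = a_p mod p
    for n in range(a[p] % p, N - p + 1, p):
        if n in vset and n + p in vset:
            parent[find(n)] = find(n + p)

sizes = Counter(find(n) for n in vertices)
print(len(vertices), len(sizes), sizes.most_common(1)[0][1]/len(vertices))
# number of vertices, number of components, and the fraction lying in the largest component
```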

We were able to show the connectedness of this graph, though it turned out to be remarkably tricky to do so. Roughly speaking (and suppressing a number of technicalities), the main steps in the argument were as follows.

  • (Early stage) Pick a large number {X} (in our paper we take {X} to be odd, but I’ll ignore this technicality here). Using a moment method to explore neighbourhoods of a single point in {V}, one can show that a vertex {v} in {V} is almost always connected to at least {\log^{10} X} numbers in {[v,v+X^{1/100}]}, using relatively short paths of short diameter. (This is the most computationally intensive portion of the argument.)
  • (Middle stage) Let {X'} be a typical number in {[X/40,X/20]}, and let {R} be a scale somewhere between {X^{1/40}} and {X'}. By using paths {n, n+p_1, n+p_1-p_2, n+p_1-p_2+p_3} involving three primes, and using a variant of Vinogradov’s theorem and some routine second moment computations, one can show that with quite high probability, any “good” vertex in {[v+X'-R, v+X'-0.99R]} is connected to a “good” vertex in {[v+X'-0.01R, v+X'-0.0099 R]} by paths of length three, where the definition of “good” is somewhat technical but encompasses almost all of the vertices in {V}.
  • (Late stage) Combining the two previous results together, we can show that most vertices {v} will be connected to a vertex in {[v+X'-X^{1/40}, v+X']} for any {X'} in {[X/40,X/20]}. In particular, {v} will be connected to a set of {\gg X^{9/10}} vertices in {[v,v+X/20]}. By tracking everything carefully, one can control the length and diameter of the paths used to connect {v} to this set, and one can also control the parity of the elements in this set.
  • (Final stage) Now suppose we have two vertices {v, w} at distance {X} apart. By the previous item, one can connect {v} to a large set {A} of vertices in {[v,v+X/20]}, and one can similarly connect {w} to a large set {B} of vertices in {[w,w+X/20]}. Now, by using a Vinogradov-type theorem and second moment calculations again (and ensuring that the elements of {A} and {B} have opposite parity), one can connect many of the vertices in {A} to many of the vertices in {B} by paths of length three, which then connects {v} to {w}, and gives the claim.

It seems of interest to understand random graphs like {G} further. In particular, the graph {G'} on the integers formed by connecting {n} to {n+p} for all {n} in a randomly selected residue class mod {p} for each prime {p} is particularly interesting (it is to the Liouville function as {G} is to the Möbius function); if one could show some “local expander” properties of this graph {G'}, then one would have a chance of modifying the above methods to attack the first unsolved case of the Chowla conjecture, namely that {\lambda(n)\lambda(n+1)} has asymptotic density zero (perhaps working with logarithmic density instead of natural density to avoids some technicalities).
