
The twin prime conjecture, still unsolved, asserts that there are infinitely many primes $p$ such that $p+2$ is also prime. A more precise form of this conjecture is (a special case of) the Hardy-Littlewood prime tuples conjecture, which asserts that

$$\sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (2\Pi_2 + o(1)) x \ \ \ \ \ (1)$$

as $x \rightarrow \infty$, where $\Lambda$ is the von Mangoldt function and

$$\Pi_2 := \prod_{p > 2} \left(1 - \frac{1}{(p-1)^2}\right) = 0.6601\dots$$

is the twin prime constant.

Because $\Lambda$ is almost entirely supported on the primes, it is not difficult to see that (1) implies the twin prime conjecture.
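
As a quick numerical sanity check, one can approximate the twin prime constant by truncating the product over primes; the short script below is a minimal sketch of such a computation (the truncation point $10^6$ is an arbitrary choice).

```python
# Approximate the twin prime constant Pi_2 = prod_{p > 2} (1 - 1/(p-1)^2)
# by truncating the product at a finite bound (illustrative sketch only).

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

pi2 = 1.0
for p in primes_up_to(10**6):
    if p > 2:
        pi2 *= 1.0 - 1.0 / (p - 1) ** 2

print(pi2)  # approximately 0.66016, the twin prime constant
```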

One can give a heuristic justification of the asymptotic (1) (and hence the twin prime conjecture) via sieve theoretic methods. Recall that the von Mangoldt function $\Lambda$ can be decomposed as a Dirichlet convolution

$$\Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d},$$

where $\mu$ is the Möbius function. Because of this, we can rewrite the left-hand side of (1) as

To compute this double sum, it is thus natural to consider sums such as

or (to simplify things by removing the logarithm)

The prime number theorem in arithmetic progressions suggests that one has an asymptotic of the form

where is the multiplicative function with for even and

for odd. Summing by parts, one then expects

and so we heuristically have

The Dirichlet series

has an Euler product factorisation

for $\hbox{Re}(s) > 1$; comparing this with the Euler product factorisation

$$\zeta(s) = \prod_p \left(1 - \frac{1}{p^s}\right)^{-1}$$

for the Riemann zeta function, and recalling that $\zeta$ has a simple pole of residue $1$ at $s=1$, we see that

has a simple zero at $s=1$ with first derivative

From this and standard multiplicative number theory manipulations, one can calculate the asymptotic

which concludes the heuristic justification of (1).

What prevents us from making the above heuristic argument rigorous, and thus proving (1) and the twin prime conjecture? Note that the variable $d$ in (2) ranges to be as large as $x$. On the other hand, the prime number theorem in arithmetic progressions (3) is not expected to hold for $d$ anywhere near that large (for instance, the left-hand side of (3) vanishes as soon as $d$ exceeds $x$). The best unconditional result known of the type (3) is the Siegel-Walfisz theorem, which allows $d$ to be as large as $\log^{O(1)} x$. Even the powerful generalised Riemann hypothesis (GRH) only lets one prove an estimate of the form (3) for $d$ up to about $x^{1/2}$.

However, because of the averaging effect of the summation in $d$ in (2), we don’t need the asymptotic (3) to be true for *all* $d$ in a particular range; having it true for *almost all* $d$ in that range would suffice. Here the situation is much better; the celebrated Bombieri-Vinogradov theorem (sometimes known as “GRH on the average”) implies, roughly speaking, that the approximation (3) is valid for *almost all* $d \leq x^{1/2-\varepsilon}$ for any fixed $\varepsilon > 0$. While this is not enough to control (2) or (1), the Bombieri-Vinogradov theorem can at least be used to control variants of (1) such as

for various sieve weights whose associated divisor function is supposed to approximate the von Mangoldt function $\Lambda$, although that theorem only lets one do this when the weights are supported on the range $d \leq x^{1/2-\varepsilon}$. This is still enough to obtain some partial results towards (1); for instance, by selecting weights according to the Selberg sieve, one can use the Bombieri-Vinogradov theorem to establish the upper bound

which is off from (1) by a factor of about $4$. See for instance this blog post for details.

It has been difficult to improve upon the Bombieri-Vinogradov theorem in its full generality, although there are various improvements to certain restricted versions of the Bombieri-Vinogradov theorem, for instance in the famous work of Zhang on bounded gaps between primes. Nevertheless, it is believed that the Elliott-Halberstam conjecture (EH) holds, which roughly speaking would mean that (3) now holds for almost all $d \leq x^{1-\varepsilon}$ for any fixed $\varepsilon > 0$. (Unfortunately, the $\varepsilon$ loss here cannot be removed, as investigated in a series of papers by Friedlander, Granville, and also Hildebrand and Maier.) This comes tantalisingly close to having enough distribution to control all of (1). Unfortunately, it still falls short. Using this conjecture in place of the Bombieri-Vinogradov theorem leads to various improvements to sieve theoretic bounds; for instance, the factor of $4$ in (4) can now be improved to $2$.

In two papers from the 1970s (which can be found online here and here respectively, the latter starting on page 255 of the pdf), Bombieri developed what is now known as the *Bombieri asymptotic sieve* to clarify the situation more precisely. First, he showed that on the Elliott-Halberstam conjecture, while one still could not establish the asymptotic (1), one could prove the generalised asymptotic

for all natural numbers $k \geq 2$, where the generalised von Mangoldt functions $\Lambda_k$ are defined by the formula

These functions behave like the von Mangoldt function, but are concentrated on $k$-almost primes (numbers with at most $k$ prime factors) rather than primes. The right-hand side of (5) corresponds to what one would expect if one ran the same heuristics used to justify (1). Sadly, the $k=1$ case of (5), which is just (1), is just barely excluded from Bombieri’s analysis.
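
Concretely, the usual convention matching this description (with $L(n) := \log n$; the notation here is mine) is the Dirichlet convolution $\Lambda_k := \mu * L^k$, that is,

$$\Lambda_k(n) := \sum_{d|n} \mu(d) \left(\log \frac{n}{d}\right)^k,$$

so that in particular $\Lambda_1 = \Lambda$. A standard computation shows that $\Lambda_k$ vanishes on any number with more than $k$ distinct prime factors, which is the sense in which it concentrates on $k$-almost primes.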

More generally, on the assumption of EH, the Bombieri asymptotic sieve provides the asymptotic

for any fixed and any tuple of natural numbers other than , where

is a further generalisation of the von Mangoldt function (now concentrated on -almost primes). By combining these asymptotics with some elementary identities involving the , together with the Weierstrass approximation theorem, Bombieri was able to control a wide family of sums including (1), except for one undetermined scalar . Namely, he was able to show (again on EH) that for any fixed and any continuous function on the simplex that had suitable vanishing at the boundary, the sum

when was even, where the integral on is with respect to the measure (this is Dirac measure in the case ). In particular, we have

and the twin prime conjecture would be proved if one could show that is bounded away from zero, while (1) is equivalent to the assertion that is equal to . Unfortunately, no additional bound beyond the inequalities provided by the Bombieri asymptotic sieve is known, even if one assumes all other major conjectures in number theory than the prime tuples conjecture and its variants (e.g. GRH, GEH, GUE, abc, Chowla, …).

To put it another way, the Bombieri asymptotic sieve is able (on EH) to compute asymptotics for sums

without needing to know the unknown scalar , when is a function supported on almost primes of the form

for and some fixed , with vanishing elsewhere and for some continuous (symmetric) functions obeying some vanishing at the boundary, so long as the parity condition

is obeyed (informally: gives the same weight to products of an odd number of primes as to products of an even number of primes, or to put it another way, is asymptotically orthogonal to the Möbius function ). But when violates the parity condition, the asymptotic involves the unknown . This scalar thus embodies the “parity problem” for the twin prime conjecture (discussed in these previous blog posts).

Because the obstruction to the parity problem is only one-dimensional (on EH), one can replace any parity-violating weight (such as ) with any other parity-violating weight and obtain a logically equivalent estimate. For instance, to prove the twin prime conjecture on EH, it would suffice to show that

for some fixed , or equivalently that there are solutions to the equation in primes with and . (In some cases, this sort of reduction can also be made using other sieves than the Bombieri asymptotic sieve, as was observed by Ng.) As another example, the Bombieri asymptotic sieve can be used to show that the asymptotic (1) is equivalent to the asymptotic

where is the set of numbers that are *rough* in the sense that they have no prime factors less than for some fixed (the function clearly correlates with and so must violate the parity condition). One can replace with similar sieve weights (e.g. a Selberg sieve) that concentrate on almost primes if desired.

As it turns out, if one is willing to strengthen the assumption of the Elliott-Halberstam (EH) conjecture to the assumption of the *generalised Elliott-Halberstam (GEH) conjecture* (as formulated for instance in Claim 2.6 of the Polymath8b paper), one can also swap the factor in the above asymptotics with other parity-violating weights and obtain a logically equivalent estimate, as the Bombieri asymptotic sieve also applies to weights such as under the assumption of GEH. For instance, on GEH one can use two such applications of the Bombieri asymptotic sieve to show that the twin prime conjecture would follow if one could show that there are solutions to the equation

in primes with and , for some . Similarly, on GEH the asymptotic (1) is equivalent to the asymptotic

for some fixed , and similarly with replaced by other sieves. This form of the quantitative twin primes conjecture is appealingly similar to the (special case)

of the Chowla conjecture, for which there has been some recent progress (discussed for instance in these recent posts). Informally, the Bombieri asymptotic sieve lets us (on GEH) view the twin prime conjecture as a sort of Chowla conjecture restricted to almost primes. Unfortunately, the recent progress on the Chowla conjecture relies heavily on the multiplicativity of at small primes, which is completely destroyed by inserting a weight such as , so this does not yet yield a viable path towards the twin prime conjecture even assuming GEH. Still, the similarity is striking, and one can hope that further ways to attack the Chowla conjecture may emerge that could impact the twin prime conjecture. (Alternatively, if one assumes a sufficiently optimistic version of the GEH, one could perhaps relax the notion of “almost prime” to the extent that one could start usefully using multiplicativity at smallish primes, though this seems rather wishful at present, particularly since the most optimistic versions of GEH are known to be false.)

The Bombieri asymptotic sieve is already well explained in the original two papers of Bombieri; there is also a slightly different treatment of the sieve by Friedlander and Iwaniec, as well as a simplified version in the book of Friedlander and Iwaniec (in which the distribution hypothesis is strengthened in order to shorten the arguments). I’ve decided though to write up my own notes on the sieve below the fold; this is primarily for my own benefit, but may be useful to some readers also. I largely follow the treatment of Bombieri, with the one idiosyncratic twist of replacing the usual “elementary” Selberg sieve with the “analytic” Selberg sieve used in particular in many of the breakthrough works on small gaps between primes; I prefer working with the latter due to its Fourier-analytic flavour.

** — 1. Controlling generalised von Mangoldt sums — **

To prove (5), we shall first generalise it, by replacing the sequence by a more general sequence obeying the following axioms:

- (i) (Non-negativity) One has for all .
- (ii) (Crude size bound) One has for all , where is the divisor function.
- (iii) (Size) We have for some constant .
- (iv) (Elliott-Halberstam type conjecture) For any , one has
where is a multiplicative function with for all primes and .

These axioms are a little bit stronger than what is actually needed to make the Bombieri asymptotic sieve work, but we will not attempt to work with the weakest possible axioms here.

We introduce the function

which is analytic for ; in particular it can be evaluated at to yield

There are two model examples of data to keep in mind. The first, discussed in the introduction, is when , then and is as in the introduction; one of course needs EH to justify axiom (iv) in this case. The other is when , in which case and for all . We will later take advantage of the second example to avoid doing some (routine, but messy) main term computations.

The main result of this section is then

Theorem 1 Let be as above. Let be a tuple of natural numbers (independent of ) that is not equal to . Then one has the asymptotic as , where .

Note that this recovers (5) (on EH) as a special case.

We now begin the proof of this theorem. Henceforth we allow implied constants in the or notation to depend on and .

It will be convenient to replace the range by a shorter range by the following standard localisation trick. Let be a large quantity depending on to be chosen later, and let denote the interval . We will show the estimate

from which the original claim follows by a routine summation argument. Observe from axiom (iv) and the triangle inequality that

for any .

Write for the logarithm function , thus for any . Without loss of generality we may assume that ; we then factor , where

This function is just when . When the function is more complicated, but we at least have the following crude bound:

*Proof:* We induct on . The case is obvious, so suppose and the claim has already been proven for . Since , we see from the induction hypothesis and the triangle inequality that

Since by Möbius inversion, the claim follows.

We can write

In the region , we have . Thus

for . The contribution of the error term to (10) is easily seen to be negligible if is large enough, so we may freely replace with with little difficulty.

If we insert this replacement directly into the left-hand side of (10) and rearrange, we get

We can’t quite control this using axiom (iv) because the range of is a bit too big, as explained in the introduction. So let us introduce a truncated function

where is a small quantity to be chosen later, and is a smooth function that equals on and equals on . Suppose one could establish the following two estimates for any fixed :

where is a quantity that depends on but not on . Then on combining the two estimates we would have

One could in principle compute explicitly from the proof of (13), but one can avoid doing so by the following comparison trick. In the special case , standard multiplicative number theory (noting that the Dirichlet series has a pole of order at , with top Laurent coefficient ) gives the asymptotic

which when compared with (14) for (recalling that in this case) gives the formula

Inserting this back into (14) and recalling that can be made arbitrarily small, we obtain (10).

As it turns out, the estimate (13) is easy to establish, but the estimate (12) is not, roughly speaking because the typical number in has too many divisors in the range , each of which gives a contribution to the error term. (In the book of Friedlander and Iwaniec, the estimate (13) is established anyway, but only after assuming a stronger version of (iv), roughly speaking in which is allowed to be as large as .) To resolve this issue, we will insert a preliminary sieve that will remove most of the potential divisors in the range (leaving only about such divisors on the average for typical ), making the analogue of (12) easier to prove (at the cost of making the analogue of (13) more difficult). Namely, if one can find a function for which one has the estimates

for some quantity that depends on but not on , then by repeating the previous arguments we will again be able to establish (10).

The key estimate is (16). As we shall see, when comparing with , the weight will cost us a factor of , but the term in the definitions of and will recover a factor of , which will give the desired bound since we are assuming .

One has some flexibility in how to select the weight : basically any standard sieve that uses divisors of size at most to localise (at least approximately) to numbers that are rough in the sense that they have no (or at least very few) factors less than , will do. We will use the analytic Selberg sieve choice

where is a smooth function supported on that equals on .

It remains to establish the bounds (15), (16), (17). To warm up and introduce the various methods needed, we begin with the standard bound

where denotes the derivative of . Note the loss of that had previously been pointed out. In the arguments that follow I will be a little brief with the details, as they are standard (see e.g. this previous post).

We now prove (19). The left-hand side can be expanded as

where denotes the least common multiple of and . From the support of we see that the summand is only non-vanishing when . We now use axiom (iv) and split the left-hand side into a main term

and an error term that is at most

From axiom (ii) and elementary multiplicative number theory, we have the bound

so from axiom (iv) and Cauchy-Schwarz we see that the error term (20) is acceptable. Thus it will suffice to establish the bound

The summand here is almost, but not quite, multiplicative in . To make it genuinely multiplicative, we perform a (shifted) Fourier expansion

for some rapidly decreasing function (essentially the Fourier transform of ). Thus

and so the left-hand side of (21) can be rearranged using Fubini’s theorem as

We can factorise as an Euler product:

Taking absolute values and using Mertens’ theorem leads to the crude bound

which when combined with the rapid decrease of , allows us to restrict the region of integration in (23) to the square (say) with negligible error. Next, we use the Euler product

for to factorise

where

For with nonnegative real part, one has

and so by the Weierstrass $M$-test, is continuous at . Since

we thus have

Also, since has a pole of order at with residue , we have

and thus

The quantity (23) can thus be written, up to errors of , as

Using the rapid decrease of , we may remove the restriction on , and it will now suffice to prove the identity

But on differentiating and then squaring (22) we have

and the claim follows by integrating in from zero to infinity (noting that vanishes for ).

We have the following variant of (19):

for any . We also have the variant

If in addition has no prime factors less than for some fixed , one has

Roughly speaking, the above estimates assert that is concentrated on those numbers with no prime factors much less than , but factors without such small prime divisors occur with about the same relative density as they do in the integers.

*Proof:* The left-hand side of (24) can be expanded as

If we define

then the previous expression can be written as

while one has

which gives (25) from Axiom (iv). To prove (24), it now suffices to show that

Arguing as before, the left-hand side is

where

From Mertens’ theorem we have

when , so the contribution of the terms where can be absorbed into the error (after increasing that error slightly). For the remaining contributions, we see that

where if does not divide , and

if divides times for some . In the latter case, Taylor expansion gives the bounds

and the claim (28) follows. When and we have

and (27) follows by repeating the previous calculations. Finally, (26) is proven similarly to (24) (using in place of ).

Now we can prove (15), (16), (17). We begin with (15). Using the Leibniz rule applied to the identity and using and Möbius inversion (and the associativity and commutativity of Dirichlet convolution) we see that

Next, by applying the Leibniz rule to for some and using (29) we see that

and hence we have the recursive identity

In particular, from induction we see that is supported on numbers with at most distinct prime factors, and hence is supported on numbers with at most distinct prime factors. In particular, from (18) we see that on the support of . Thus it will suffice to show that

If and , then has at most distinct prime factors , with . If we factor , where is the contribution of those with , and is the contribution of those with , then at least one of the following two statements hold:

- (a) (and hence ) is divisible by a square number of size at least .
- (b) .

The contribution of case (a) is easily seen to be acceptable by axiom (ii). For case (b), we observe from (30) and induction that

and so it will suffice to show that

where ranges over numbers bounded by with at most distinct prime factors, the smallest of which is at most , and consists of those numbers with no prime factor less than or equal to . Applying (26) (with replaced by ) gives the bound

so by (25) it suffices to show that

subject to the same constraints on as before. The contribution of those with distinct prime factors can be bounded by

applying Mertens’ theorem and summing over , one obtains the claim.

Now we show (16). As discussed previously in this section, we can replace by with negligible error. Comparing this with (16) and (11), we see that it suffices to show that

From the support of , the summand on the left-hand side is only non-zero when , which makes , where we use the crucial hypothesis to gain enough powers of to make the argument here work. Applying Lemma 2, we reduce to showing that

We can make the change of variables to flip the sum

and then swap the sums to reduce to showing that

By Lemma 3, it suffices to show that

To prove this, we use the Rankin trick, bounding the implied weight by . We can then bound the left-hand side by the Euler product

which can be bounded by

and the claim follows from Mertens’ theorem.

Finally, we show (17). By (11), the left-hand side expands as

We let be a small constant to be chosen later. We divide the outer sum into two ranges, depending on whether only has prime factors greater than or not. In the former case, we can apply (27) to write this contribution as

plus a negligible error, where the is implicitly restricted to numbers with all prime factors greater than . The main term is messy, but it is of the required form up to an acceptable error, so there is no need to compute it any further. It remains to consider those that have at least one prime factor less than . Here we use (24) instead of (27) as well as Lemma 3 to dominate this contribution by

up to negligible errors, where is now restricted to have at least one prime factor less than . This forces at least one of the factors to be at most . A routine application of Rankin’s trick shows that

and so the total contribution of this case is . Since can be made arbitrarily small, (17) follows.

** — 2. Weierstrass approximation — **

Having proved Theorem 1, we now take linear combinations of this theorem, combined with the Weierstrass approximation theorem, to give the asymptotics (7), (8) described in the introduction.

Let , , , be as in that theorem. It will be convenient to normalise the weights by to make their mean value comparable to . From Theorem 1 and summation by parts we have

whenever does not consist entirely of ones.

We now take a closer look at what happens when does consist entirely of ones. Let denote the -tuple . Convolving the case of (30) with copies of for some and using the Leibniz rule, we see that

and hence

Multiplying by and summing over , and using (31) to control the term, one has

If we define (up to an error of ) by the formula

then an induction shows that

for odd , and

for even . In particular, after adjusting by if necessary, we have since the left-hand sides are non-negative.

If we now define the comparison sequence , standard multiplicative number theory shows that the above estimates also hold when is replaced by ; thus

for both odd and even . The bound (31) also holds for when does not consist entirely of ones, and hence

for any fixed (which may or may not consist entirely of ones).

Next, from induction (on ), the Leibniz rule, and (30), we see that for any and , , the function

is a finite linear combination of functions of the form for tuples that may possibly consist entirely of ones. We thus have

whenever is one of these functions (32). Specialising to the case , we thus have

where . The contribution of those that are powers of primes can be easily seen to be negligible, leading to

where now . The contribution of the case where two of the primes agree can also be seen to be negligible, as can the error when replacing with , and then by symmetry

By linearity, this implies that

for any polynomial that vanishes on the coordinate hyperplanes . The right-hand side can also be evaluated by Mertens’ theorem as

when is odd and

when is even. Using the Weierstrass approximation theorem, we then have

for any continuous function that is compactly supported in the interior of . Computing the right-hand side using Mertens’ theorem as before, we obtain the claimed asymptotics (7), (8).

Remark 4 The Bombieri asymptotic sieve has to use the full power of EH (or GEH); there are constructions due to Ford that show that if one only has a distributional hypothesis up to for some fixed constant , then the asymptotics of sums such as (5), or more generally (9), are not determined by a single scalar parameter , but can also vary in other ways as well. Thus the Bombieri asymptotic sieve really is asymptotic; in order to get type error terms one needs the level of distribution to be asymptotically equal to as . Related to this, the quantitative decay of the error terms in the Bombieri asymptotic sieve is extremely poor; in particular, it depends on the dependence of the implied constant in axiom (iv) on the parameters , for which there is no consensus on what one should conjecturally expect.

I’ve just posted to the arXiv my paper “Finite time blowup for Lagrangian modifications of the three-dimensional Euler equation“. This paper is loosely in the spirit of other recent papers of mine in which I explore how close one can get to supercritical PDE of physical interest (such as the Euler and Navier-Stokes equations), while still being able to rigorously demonstrate finite time blowup for at least some choices of initial data. Here, the PDE we are trying to get close to is the incompressible inviscid Euler equations

in three spatial dimensions, where $u$ is the velocity vector field and $p$ is the pressure field. In vorticity form, and viewing the vorticity $\omega$ as a $2$-form (rather than a vector), we can rewrite this system using the language of differential geometry as

where is the Lie derivative along , is the codifferential (the adjoint of the differential , or equivalently the negative of the divergence operator) that sends -vector fields to -vector fields, is the Hodge Laplacian, and is the identification of -vector fields with -forms induced by the Euclidean metric . The equation can be viewed as the Biot-Savart law recovering velocity from vorticity, expressed in the language of differential geometry.
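
For orientation, the velocity form of the system being referred to can be written in the standard way (this is the usual textbook presentation rather than a quotation of the paper; $u$ denotes the velocity field and $p$ the pressure):

$$\partial_t u + (u \cdot \nabla) u = -\nabla p, \qquad \nabla \cdot u = 0.$$

In the vorticity formulation sketched above, the evolution equation takes the transport form $\partial_t \omega + \mathcal{L}_u \omega = 0$, coupled with a Biot-Savart law recovering $u$ from $\omega$ via the codifferential, the inverse Hodge Laplacian, and the metric identification just described; the generalised Euler equations introduced below keep the transport equation but replace this particular recovery law by a general vector potential operator.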

One can then generalise this system by replacing the operator by a more general operator from -forms to -vector fields, giving rise to what I call the *generalised Euler equations*

For example, the surface quasi-geostrophic (SQG) equations can be written in this form, as discussed in this previous post. One can view (up to Hodge duality) as a vector potential for the velocity , so it is natural to refer to as a vector potential operator.

The generalised Euler equations carry much of the same geometric structure as the true Euler equations. For instance, the transport equation is equivalent to the Kelvin circulation theorem, which in three dimensions also implies the transport of vortex streamlines and the conservation of helicity. If is self-adjoint and positive definite, then the famous Euler-Poincaré interpretation of the true Euler equations as geodesic flow on an infinite dimensional Riemannian manifold of volume preserving diffeomorphisms (as discussed in this previous post) extends to the generalised Euler equations (with the operator determining the new Riemannian metric to place on this manifold). In particular, the generalised Euler equations have a Lagrangian formulation, and so by Noether’s theorem we expect any continuous symmetry of the Lagrangian to lead to conserved quantities. Indeed, we have a conserved Hamiltonian , and any spatial symmetry of leads to a conserved impulse (e.g. translation invariance leads to a conserved momentum, and rotation invariance leads to a conserved angular momentum). If behaves like a pseudodifferential operator of order (as is the case with the true vector potential operator ), then it turns out that one can use energy methods to recover the same sort of classical local existence theory as for the true Euler equations (up to and including the famous Beale-Kato-Majda criterion for blowup).

The true Euler equations are suspected of admitting smooth localised solutions which blow up in finite time; there is now substantial numerical evidence for this blowup, but it has not been proven rigorously. The main purpose of this paper is to show that such finite time blowup can at least be established for certain generalised Euler equations that are somewhat close to the true Euler equations. This is similar in spirit to my previous paper on finite time blowup on averaged Navier-Stokes equations, with the main new feature here being that the modified equation continues to have a Lagrangian structure and a vorticity formulation, which was not the case with the averaged Navier-Stokes equation. On the other hand, the arguments here are not able to handle the presence of viscosity (basically because they rely crucially on the Kelvin circulation theorem, which is not available in the viscous case).

In fact, three different blowup constructions are presented (for three different choices of vector potential operator ). The first is a variant of one discussed previously on this blog, in which a “neck pinch” singularity for a vortex tube is created by using a non-self-adjoint vector potential operator, in which the velocity at the neck of the vortex tube is determined by the circulation of the vorticity somewhat further away from that neck, which when combined with conservation of circulation is enough to guarantee finite time blowup. This is a relatively easy construction of finite time blowup, and has the advantage of being rather stable (any initial data flowing through a narrow tube with a large positive circulation will blow up in finite time). On the other hand, it is not so surprising in the non-self-adjoint case that finite blowup can occur, as there is no conserved energy.

The second blowup construction is based on a connection between the two-dimensional SQG equation and the three-dimensional generalised Euler equations, discussed in this previous post. Namely, any solution to the former can be lifted to a “two and a half-dimensional” solution to the latter, in which the velocity and vorticity are translation-invariant in the vertical direction (but the velocity is still allowed to contain vertical components, so the flow is not completely horizontal). The same embedding also works to lift solutions to generalised SQG equations in two dimensions to solutions to generalised Euler equations in three dimensions. Conveniently, even if the vector potential operator for the generalised SQG equation fails to be self-adjoint, one can ensure that the three-dimensional vector potential operator is self-adjoint. Using this trick, together with a two-dimensional version of the first blowup construction, one can then construct a generalised Euler equation in three dimensions with a vector potential that is both self-adjoint and positive definite, and still admits solutions that blow up in finite time, though the blowup is now that of a vortex sheet creasing on a line, rather than a vortex tube pinching at a point.

This eliminates the main defect of the first blowup construction, but introduces two others. Firstly, the blowup is less stable, as it relies crucially on the initial data being translation-invariant in the vertical direction. Secondly, the solution is not spatially localised in the vertical direction (though it can be viewed as a compactly supported solution on the manifold , rather than ). The third and final blowup construction of the paper addresses the final defect, by replacing vertical translation symmetry with axial rotation symmetry around the vertical axis (basically, replacing Cartesian coordinates with cylindrical coordinates). It turns out that there is a more complicated way to embed two-dimensional generalised SQG equations into three-dimensional generalised Euler equations in which the solutions to the latter are now axially symmetric (but are allowed to “swirl” in the sense that the velocity field can have a non-zero angular component), while still keeping the vector potential operator self-adjoint and positive definite; the blowup is now that of a vortex ring creasing on a circle.

As with the previous papers in this series, these blowup constructions do not *directly* imply finite time blowup for the true Euler equations, but they do at least provide a barrier to establishing global regularity for these latter equations, in that one is forced to use some property of the true Euler equations that is not shared by these generalisations. They also suggest some possible blowup mechanisms for the true Euler equations (although unfortunately these mechanisms do not seem compatible with the addition of viscosity, so they do not seem to suggest a viable Navier-Stokes blowup mechanism).

Note: the following is a record of some whimsical mathematical thoughts and computations I had after doing some grading. It is likely that the sort of problems discussed here are in fact well studied in the appropriate literature; I would appreciate knowing of any links to such.

Suppose one assigns true-false questions on an examination, with the answers randomised so that each question is equally likely to have “true” as the correct answer as “false”, with no correlation between different questions. Suppose that the students taking the examination must answer each question with exactly one of “true” or “false” (they are not allowed to skip any question). Then it is easy to see how to grade the exam: one can simply count how many questions each student answered correctly (i.e. each correct answer scores one point, and each incorrect answer scores zero points), and give that number as the final grade of the examination. More generally, one could assign some score of points to each correct answer and some score (possibly negative) of points to each incorrect answer, giving a total grade of points. As long as , this grade is simply an affine rescaling of the simple grading scheme and would serve just as well for the purpose of evaluating the students, as well as encouraging each student to answer the questions as correctly as possible.

In practice, though, a student will probably not know the answer to each individual question with absolute certainty. One can adopt a probabilistic model, where for a given student and a given question , the student may think that the answer to question is true with probability and false with probability , where is some quantity that can be viewed as a measure of confidence has in the answer (with being confident that the answer is true if is close to , and confident that the answer is false if is close to ); for simplicity let us assume that in ‘s probabilistic model, the answers to each question are independent random variables. Given this model, and assuming that the student wishes to maximise his or her expected grade on the exam, it is an easy matter to see that the optimal strategy for to take is to answer question true if and false if . (If , the student can answer arbitrarily.)
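
To spell out the elementary optimisation being referred to, in notation of my own choosing (the symbols $A$, $B$, $p$ here are illustrative rather than the post's original ones): suppose a correct answer is worth $A$ points and an incorrect answer $B$ points, with $A > B$, and the student believes the answer to a given question is "true" with subjective probability $p$. Then the expected grade is

$$p A + (1-p) B \ \hbox{ if the student answers "true"}, \qquad (1-p) A + p B \ \hbox{ if the student answers "false"},$$

and the first quantity exceeds the second precisely when $p > 1/2$. Thus the expected grade is maximised by answering "true" when $p > 1/2$ and "false" when $p < 1/2$, with either answer optimal when $p = 1/2$.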

[Important note: here we are *not* using the term “confidence” in the technical sense used in statistics, but rather as an informal term for “subjective probability”.]

This is fine as far as it goes, but for the purposes of evaluating how well the student actually knows the material, it provides only a limited amount of information; in particular, we do not get to directly see the student’s subjective probabilities for each question. If for instance the student answered $7$ out of $10$ questions correctly, was it because he or she actually knew the right answer for seven of the questions, or was it because he or she was making educated guesses for the ten questions that turned out to be slightly better than random chance? There seems to be no way to discern this if the only input the student is allowed to provide for each question is the single binary choice of true/false.

But what if the student were able to give probabilistic answers to any given question? That is to say, instead of being forced to answer just “true” or “false” for a given question , the student was allowed to give answers such as “ confident that the answer is true” (and hence confidence the answer is false). Such answers would give more insight as to how well the student actually knew the material; in particular, we would theoretically be able to actually see the student’s subjective probabilities .

But now it becomes less clear what the right grading scheme to pick is. Suppose for instance we wish to extend the simple grading scheme in which a correct answer given in confidence is awarded one point. How many points should one award a correct answer given in confidence? How about an incorrect answer given in confidence (or equivalently, a correct answer given in confidence)?

Mathematically, one could design a grading scheme by selecting some grading function and then awarding a student points whenever they indicate the correct answer with a confidence of . For instance, if the student was confident that the answer was “true” (and hence confident that the answer was “false”), then this grading scheme would award the student points if the correct answer actually was “true”, and points if the correct answer actually was “false”. One can then ask the question of what functions would be “best” for this scheme?

Intuitively, one would expect that should be monotone increasing – one should be rewarded more for being correct with high confidence, than correct with low confidence. On the other hand, some sort of “partial credit” should still be assigned in the latter case. One obvious proposal is to just use a linear grading function – thus for instance a correct answer given with confidence might be worth points. But is this the “best” option?

To make the problem more mathematically precise, one needs an objective criterion with which to evaluate a given grading scheme. One criterion that one could use here is the avoidance of perverse incentives. If a grading scheme is designed badly, a student may end up overstating or understating his or her confidence in an answer in order to optimise the (expected) grade: the optimal level of confidence for a student to report on a question may differ from that student’s subjective confidence . So one could ask to design a scheme so that is always equal to , so that the incentive is for the student to honestly report his or her confidence level in the answer.

This turns out to give a precise constraint on the grading function . If a student thinks that the answer to a question is true with probability and false with probability , and enters in an answer of “true” with confidence (and thus “false” with confidence ), then student would expect a grade of

on average for this question. To maximise this expected grade (assuming differentiability of , which is a reasonable hypothesis for a partial credit grading scheme), one performs the usual maneuvre of differentiating in the independent variable and setting the result to zero, thus obtaining

In order to avoid perverse incentives, the maximum should occur at , thus we should have

for all . This suggests that the function should be constant. (Strictly speaking, it only gives the weaker constraint that is symmetric around ; but if one generalised the problem to allow for multiple-choice questions with more than two possible answers, with a grading scheme that depended only on the confidence assigned to the correct answer, the same analysis would in fact force to be constant in ; we leave this computation to the interested reader.) In other words, should be of the form for some ; by monotonicity we expect to be positive. If we make the normalisation (so that no points are awarded for a split in confidence between true and false) and , one arrives at the grading scheme
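
Here is a hedged sketch of the computation just described, in notation of my own choosing: write $f$ for the grading function, $p$ for the student's subjective probability that the answer is "true", and $q$ for the reported confidence. The expected grade is

$$p f(q) + (1-p) f(1-q),$$

and setting the $q$-derivative to zero gives $p f'(q) = (1-p) f'(1-q)$. Demanding that the maximum occur at $q = p$ for every $p$ yields $p f'(p) = (1-p) f'(1-p)$ for all $p$, which is satisfied when $x f'(x)$ is constant, i.e. $f(x) = A + B \log x$. The normalisation $f(1/2) = 0$ and $f(1) = 1$ then forces

$$f(x) = \log_2(2x) = 1 + \log_2 x.$$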

Thus, if a student believes that an answer is “true” with confidence and “false” with confidence , he or she will be awarded points when the correct answer is “true”, and points if the correct answer is “false”. The following table gives some illustrative values for this scheme:

Confidence that answer is “true” | Points awarded if answer is “true” | Points awarded if answer is “false” |

Note the large penalties for being extremely confident of an answer that ultimately turns out to be incorrect; in particular, answers of $100\%$ confidence should be avoided unless one really is absolutely certain as to the correctness of one’s answer.
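
Assuming the $\log_2(2p)$ normalisation sketched above, values of the kind the table displays can be regenerated numerically; the snippet below is an illustrative sketch (the particular confidence levels chosen are mine, not necessarily those of the original table).

```python
import math

def score(p):
    """Points awarded when the answer assigned confidence p turns out to be correct.

    Under the normalisation f(1/2) = 0 and f(1) = 1, the scheme is f(p) = log2(2p).
    Note f(0) = -infinity: total confidence in a wrong answer is penalised without bound.
    """
    return math.log2(2 * p) if p > 0 else float("-inf")

for confidence in [0.50, 0.60, 0.75, 0.90, 0.99, 1.00]:
    points_if_true = score(confidence)        # the answer "true" was correct
    points_if_false = score(1 - confidence)   # the answer "true" was wrong
    print(f"{confidence:4.0%}  {points_if_true:+7.3f}  {points_if_false:+7.3f}")
```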

The total grade given under such a scheme to a student who answers each question to be “true” with confidence , and “false” with confidence , is

This grade can also be written as

where

is the likelihood of the student ‘s subjective probability model, given the outcome of the correct answers. Thus the grade system here has another natural interpretation, as being an affine rescaling of the log-likelihood. The incentive is thus for the student to maximise the likelihood of his or her own subjective model, which aligns well with standard practices in statistics. From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.
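
In symbols (my notation): if $q_j$ denotes the probability the student assigned to the answer that turned out to be correct for question $j$, then under the $\log_2(2q)$ scheme the total grade over $Q$ questions is

$$\sum_{j=1}^{Q} \log_2(2 q_j) = Q + \log_2 \prod_{j=1}^{Q} q_j,$$

and the product $\prod_j q_j$ is exactly the likelihood of the observed answer key under the student's subjective model, which is the affine rescaling of the log-likelihood referred to above.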

One could propose using the above grading scheme to evaluate predictions to binary events, such as an upcoming election with only two viable candidates, to see in hindsight just how effective each predictor was in calling these events. One difficulty in doing so is that many predictions do not come with explicit probabilities attached to them, and attaching a default confidence level of $100\%$ to any prediction made without any such qualification would result in an automatic grade of $-\infty$ if even one of these predictions turned out to be incorrect. But perhaps if a predictor refuses to attach a confidence level to his or her predictions, one can assign some default level of confidence to these predictions, and then (using some suitable set of predictions from this predictor as “training data”) find the value of that default confidence that maximises this predictor’s grade. This level can then be used going forward as the default level of confidence to apply to any future predictions from this predictor.
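
As a toy illustration of the calibration procedure suggested here (all names and data below are hypothetical), one can choose the default confidence level for a predictor's unqualified predictions by maximising the resulting grade on a training set of past predictions; a minimal sketch:

```python
import math

# Hypothetical training data: True if the unqualified prediction turned out correct.
training_outcomes = [True, True, False, True, True, True, False, True]

def total_grade(p, outcomes):
    """Total grade under the log2(2p) scheme if every prediction gets default confidence p."""
    return sum(math.log2(2 * p) if hit else math.log2(2 * (1 - p)) for hit in outcomes)

# Simple grid search over candidate default confidence levels.
best_p = max((p / 1000 for p in range(1, 1000)),
             key=lambda p: total_grade(p, training_outcomes))
print(best_p)  # maximised at the empirical success rate, here 6/8 = 0.75
```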

The above grading scheme extends easily enough to multiple-choice questions. But one question I had trouble with was how to deal with *uncertainty*, in which the student does not know enough about a question to venture even a probability of being true or false. Here, it is natural to allow a student to leave a question blank (i.e. to answer “I don’t know”); a more advanced option would be to allow the student to enter his or her confidence level as an interval range (e.g. “I am between and confident that the answer is “true””). But now I do not have a good proposal for a grading scheme; once there is uncertainty in the student’s subjective model, the problem of that student maximising his or her expected grade becomes ill-posed due to the “unknown unknowns”, and so the previous criterion of avoiding perverse incentives becomes far less useful.

A *capset* in the vector space ${\bf F}_3^n$ over the finite field ${\bf F}_3$ of three elements is a subset $A$ of ${\bf F}_3^n$ that does not contain any lines $x, x+r, x+2r$, where $x, r \in {\bf F}_3^n$ and $r \neq 0$. A basic problem in additive combinatorics (discussed in one of the very first posts on this blog) is to obtain good upper and lower bounds for the maximal size of a capset in ${\bf F}_3^n$.

Trivially, one has $|A| \leq 3^n$. Using Fourier methods (and the density increment argument of Roth), the bound of $O(3^n/n)$ was obtained by Meshulam, and improved only as late as 2012 to $O(3^n/n^{1+c})$ for some absolute constant $c > 0$ by Bateman and Katz. But in a very recent breakthrough, Ellenberg (and independently Gijswijt) obtained the exponentially superior bound $O(2.756^n)$, using a version of the polynomial method recently introduced by Croot, Lev, and Pach. (In the converse direction, a construction of Edel gives capsets as large as $(2.2174)^n$.) Given the success of the polynomial method in superficially similar problems such as the finite field Kakeya problem (discussed in this previous post), it was natural to wonder whether this method could be applicable to the cap set problem (see for instance this MathOverflow comment of mine on this from 2010), but it took a surprisingly long time before Croot, Lev, and Pach were able to identify the precise variant of the polynomial method that would actually work here.

The proof of the capset bound is very short (Ellenberg’s and Gijswijt’s preprints are both 3 pages long, and Croot-Lev-Pach is 6 pages), but I thought I would present a slight reformulation of the argument which treats the three points on a line in ${\bf F}_3^n$ symmetrically (as opposed to treating the third point differently from the first two, as is done in the Ellenberg and Gijswijt papers; Croot-Lev-Pach also treat the middle point of a three-term arithmetic progression differently from the two endpoints, although this is a very natural thing to do in their context of ${\bf Z}_4^n$). The basic starting point is this: if $A$ is a capset, then one has the identity

for all $x,y,z \in A$, where $\delta$ is the Kronecker delta function, which we view as taking values in ${\bf F}_3$. Indeed, (1) reflects the fact that the equation $x+y+z=0$ has solutions precisely when $x,y,z$ are either all equal, or form a line, and the latter is ruled out precisely when $A$ is a capset.
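
Explicitly, the identity in question can be written as follows (this is the standard symmetrised form; the notation $\delta_a$ for the delta function at a point $a$, and $0^n$ for the origin, is mine): for a capset $A \subseteq {\bf F}_3^n$ one has

$$\delta_{0^n}(x + y + z) = \sum_{a \in A} \delta_a(x)\, \delta_a(y)\, \delta_a(z) \qquad \hbox{for all } x, y, z \in A,$$

since in ${\bf F}_3^n$ the relation $x+y+z=0$ holds precisely when $x,y,z$ form a line or are all equal, and the former possibility is excluded within a capset.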

To exploit (1), we will show that the left-hand side of (1) is "low rank" in some sense, while the right-hand side is "high rank". Recall that a function $(x,y) \mapsto f(x,y)$ taking values in a field $F$ is of *rank one* if it is non-zero and of the form $(x,y) \mapsto a(x) b(y)$ for some $a, b$, and that the rank of a general function of two variables is the least number of rank one functions needed to express it as a linear combination. More generally, if $k \geq 2$, we define the *rank* of a function $(x_1,\dots,x_k) \mapsto f(x_1,\dots,x_k)$ to be the least number of "rank one" functions of the form

$$(x_1,\dots,x_k) \mapsto a(x_i) b(x_1,\dots,x_{i-1},x_{i+1},\dots,x_k)$$

for some $i$ and some functions $a$, $b$, that are needed to generate $f$ as a linear combination. For instance, when $k=3$, the rank one functions take the form $a(x) b(y,z)$, $a(y) b(x,z)$, $a(z) b(x,y)$, and linear combinations of $m$ such rank one functions will give a function of rank at most $m$.

It is a standard fact in linear algebra that the rank of a diagonal matrix is equal to the number of non-zero entries. This phenomenon extends to higher dimensions:

Lemma 1 (Rank of diagonal hypermatrices) Let , let be a finite set, let be a field, and for each , let be a coefficient. Then the rank of the function

*Proof:* We induct on . As mentioned above, the case follows from standard linear algebra, so suppose now that and the claim has already been proven for .

It is clear that the function (2) has rank at most equal to the number of non-zero (since the summands on the right-hand side are rank one functions), so it suffices to establish the lower bound. By deleting from those elements with (which cannot increase the rank), we may assume without loss of generality that all the are non-zero. Now suppose for contradiction that (2) has rank at most , then we obtain a representation

for some sets of cardinalities adding up to at most , and some functions and .

Consider the space of functions that are orthogonal to all the , in the sense that

for all . This space is a vector space whose dimension is at least . A basis of this space generates a coordinate matrix of full rank, which implies that there is at least one non-singular minor. This implies that there exists a function in this space which is nowhere vanishing on some subset of of cardinality at least .

If we multiply (3) by and sum in , we conclude that

where

The right-hand side has rank at most , since the summands are rank one functions. On the other hand, from induction hypothesis the left-hand side has rank at least , giving the required contradiction.

On the other hand, we have the following (symmetrised version of a) beautifully simple observation of Croot, Lev, and Pach:

*Proof:* Using the identity for , we have

The right-hand side is clearly a polynomial of degree in , which is then a linear combination of monomials

with with

In particular, from the pigeonhole principle, at least one of is at most .

Consider the contribution of the monomials for which . We can regroup this contribution as

where ranges over those with , is the monomial

and is some explicitly computable function whose exact form will not be of relevance to our argument. The number of such is equal to , so this contribution has rank at most . The remaining contributions arising from the cases and similarly have rank at most (grouping the monomials so that each monomial is only counted once), so the claim follows.
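
To summarise the mechanism of this proof in formulas (the notation $N$ for the monomial count is mine; the identity itself is the standard starting point of this argument): over ${\bf F}_3$ one has $\delta_0(t) = 1 - t^2$ for every $t \in {\bf F}_3$, so that

$$\delta_{0^n}(x+y+z) = \prod_{i=1}^{n} \bigl(1 - (x_i + y_i + z_i)^2\bigr),$$

a polynomial of degree at most $2n$ in the $3n$ variables. Each monomial therefore has $x$-degree, $y$-degree and $z$-degree summing to at most $2n$, so by pigeonhole at least one of the three degrees is at most $2n/3$; grouping monomials according to which block of variables has low degree yields a rank bound of the shape $3N$, where $N$ is the number of monomials in $n$ variables with each exponent at most $2$ and total degree at most $2n/3$.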

Upon restricting from to , the rank of is still at most . The two lemmas then combine to give the Ellenberg-Gijswijt bound

All that remains is to compute the asymptotic behaviour of . This can be done using the general tool of Cramer’s theorem, but can also be derived from Stirling’s formula (discussed in this previous post). Indeed, if , , for some summing to , Stirling’s formula gives

where is the entropy function

We then have

where is the maximum entropy subject to the constraints

A routine Lagrange multiplier computation shows that the maximum occurs when

and is approximately $\log 2.755$, giving rise to the claimed bound of $O(2.756^n)$.
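
One can reproduce this numerical value by redoing the constrained entropy maximisation directly; the crude grid search below is a minimal sketch, and the constraint $\beta + 2\gamma \leq 2/3$ with $\alpha+\beta+\gamma=1$ (frequencies of exponents $0$, $1$, $2$ in a monomial of degree at most $2n/3$) is my reading of the constraints referred to above.

```python
import math

def entropy(probs):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Maximise h(a, b, c) over a + b + c = 1, subject to b + 2c <= 2/3,
# where a, b, c are the frequencies of exponents 0, 1, 2.
best = 0.0
steps = 2000
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        b, c = i / steps, j / steps
        a = 1.0 - b - c
        if b + 2 * c <= 2.0 / 3.0:
            best = max(best, entropy([a, b, c]))

print(math.exp(best))  # approximately 2.755, matching the O(2.756^n) bound
```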

Remark 3 As noted in the Ellenberg and Gijswijt papers, the above argument extends readily to other fields than ${\bf F}_3$ to control the maximal size of a subset of that has no non-trivial solutions to the equation , where are non-zero constants that sum to zero. Of course one replaces the function in Lemma 2 by in this case.

Remark 4 This symmetrised formulation suggests that one possible way to improve slightly on the numerical quantity would be to find a more efficient way to decompose into rank one functions; however, I was not able to do so (though such improvements are reminiscent of the Strassen type algorithms for fast matrix multiplication).

Remark 5 It is tempting to see if this method can get non-trivial upper bounds for sets with no length progressions, in (say) . One can run the above arguments, replacing the function with

this leads to the bound where

Unfortunately, is asymptotic to and so this bound is in fact slightly worse than the trivial bound ! However, there is a slim chance that there is a more efficient way to decompose into rank one functions that would give a non-trivial bound on . I experimented with a few possible such decompositions but unfortunately without success.

Remark 6 Return now to the capset problem. Since Lemma 1 is valid for any field , one could perhaps hope to get better bounds by viewing the Kronecker delta function as taking values in another field than ${\bf F}_3$, such as the complex numbers ${\bf C}$. However, as soon as one works in a field of characteristic other than $3$, one can adjoin a cube root of unity, and one now has the Fourier decomposition

Moving to the Fourier basis, we conclude from Lemma 1 that the function on now has rank exactly $3^n$, and so one cannot improve upon the trivial bound of $3^n$ by this method using fields of characteristic other than three as the range field. So it seems one has to stick with ${\bf F}_3$ (or the algebraic completion thereof).

Thanks to Jordan Ellenberg and Ben Green for helpful discussions.

I’ve just uploaded to the arXiv my paper “Equivalence of the logarithmically averaged Chowla and Sarnak conjectures“, submitted to the Festschrift “Number Theory – Diophantine problems, uniform distribution and applications” in honour of Robert F. Tichy. This paper is a spinoff of my previous paper establishing a logarithmically averaged version of the Chowla (and Elliott) conjectures in the two-point case. In that paper, the estimate

as was demonstrated, where was any positive integer and denoted the Liouville function. The proof proceeded using a method I call the “entropy decrement argument”, which ultimately reduced matters to establishing a bound of the form

whenever was a slowly growing function of . This was in turn established in a previous paper of Matomaki, Radziwill, and myself, using the recent breakthrough of Matomaki and Radziwill.

It is natural to see to what extent the arguments can be adapted to attack the higher-point cases of the logarithmically averaged Chowla conjecture (ignoring for this post the more general Elliott conjecture for other bounded multiplicative functions than the Liouville function). That is to say, one would like to prove that

as for any fixed distinct integers . As it turns out (and as is detailed in the current paper), the entropy decrement argument extends to this setting (after using some known facts about linear equations in primes), and allows one to reduce the above estimate to an estimate of the form

for a slowly growing function of and some fixed (in fact we can take for ), where is the (normalised) local Gowers uniformity norm. (In the case , , this becomes the Fourier-uniformity conjecture discussed in this previous post.) If one then applied the (now proven) inverse conjecture for the Gowers norms, this estimate is in turn equivalent to the more complicated looking assertion

where the supremum is over all possible choices of *nilsequences* of controlled step and complexity (see the paper for definitions of these terms).

The main novelty in the paper (elaborating upon a previous comment I had made on this blog) is to observe that this latter estimate in turn follows from the logarithmically averaged form of Sarnak’s conjecture (discussed in this previous post), namely that

whenever is a zero entropy (i.e. deterministic) sequence. Morally speaking, this follows from the well-known fact that nilsequences have zero entropy, but the presence of the supremum in (1) means that we need a little bit more; roughly speaking, we need the *class* of nilsequences of a given step and complexity to have “uniformly zero entropy” in some sense.

On the other hand, it was already known (see previous post) that the Chowla conjecture implied the Sarnak conjecture, and similarly for the logarithmically averaged form of the two conjectures. Putting all these implications together, we obtain the pleasant fact that the logarithmically averaged Sarnak and Chowla conjectures are equivalent, which is the main result of the current paper. There have been a large number of special cases of the Sarnak conjecture worked out (when the deterministic sequence involved came from a special dynamical system), so these results can now also be viewed as partial progress towards the Chowla conjecture also (at least with logarithmic averaging). However, my feeling is that the full resolution of these conjectures will not come from these sorts of special cases; instead, conjectures like the Fourier-uniformity conjecture in this previous post look more promising to attack.

It would also be nice to get rid of the pesky logarithmic averaging, but this seems to be an inherent requirement of the entropy decrement argument method, so one would probably have to find a way to avoid that argument if one were to remove the log averaging.

When teaching mathematics, the traditional method of lecturing in front of a blackboard is still hard to improve upon, despite all the advances in modern technology. However, there are some nice things one can do in an electronic medium, such as this blog. Here, I would like to experiment with the ability to animate images, which I think can convey some mathematical concepts in ways that cannot be easily replicated by traditional static text and images. Given that many readers may find these animations annoying, I am placing the rest of the post below the fold.

Over the last few years, a large group of mathematicians have been developing an online database to systematically collect the known facts, numerical data, and algorithms concerning some of the most central types of objects in modern number theory, namely the L-functions associated to various number fields, curves, and modular forms, as well as further data about these modular forms. This of course includes the most famous examples of L-functions and modular forms respectively, namely the Riemann zeta function and the discriminant modular form , but there are countless other examples of both. The connections between these classes of objects lie at the heart of the Langlands programme.

As of today, the “L-functions and modular forms database” is now out of beta, and open to the public; at present the database is mostly geared towards specialists in computational number theory, but will hopefully develop into a more broadly useful resource as time develops. An article by John Cremona summarising the purpose of the database can be found here.

(Thanks to Andrew Sutherland and Kiran Kedlaya for the information.)

In functional analysis, it is common to endow various (infinite-dimensional) vector spaces with a variety of topologies. For instance, a normed vector space can be given the strong topology as well as the weak topology; if the vector space has a predual, it also has a weak-* topology. Similarly, spaces of operators have a number of useful topologies on them, including the operator norm topology, strong operator topology, and the weak operator topology. For function spaces, one can use topologies associated to various modes of convergence, such as uniform convergence, pointwise convergence, locally uniform convergence, or convergence in the sense of distributions. (A small minority of such modes are not topologisable, though, the most common of which is pointwise almost everywhere convergence; see Exercise 8 of this previous post).

Some of these topologies are much stronger than others (in that they contain many more open sets, or equivalently that they have many fewer convergent sequences and nets). However, even the weakest topologies used in analysis (e.g. convergence in distributions) tend to be Hausdorff, since this at least ensures the uniqueness of limits of sequences and nets, which is a fundamentally useful feature for analysis. On the other hand, some Hausdorff topologies used are “better” than others in that many more analysis tools are available for those topologies. In particular, topologies that come from Banach space norms are particularly valued, as such topologies (and their attendant norm and metric structures) grant access to many convenient additional results such as the Baire category theorem, the uniform boundedness principle, the open mapping theorem, and the closed graph theorem.

Of course, most topologies placed on a vector space will not come from Banach space norms. For instance, if one takes the space of continuous functions on that converge to zero at infinity, the topology of uniform convergence comes from a Banach space norm on this space (namely, the uniform norm ), but the topology of pointwise convergence does not; and indeed all the other usual modes of convergence one could use here (e.g. convergence, locally uniform convergence, convergence in measure, etc.) do not arise from Banach space norms.
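To see concretely how these modes of convergence can differ on this space, here is a small numerical sketch (my own illustration, in Python, with an arbitrary choice of "moving bump"): the bumps converge to zero pointwise, since their supports escape to infinity, but their uniform norm stays equal to 1, so the topology of pointwise convergence is genuinely weaker than the uniform topology here.

```python
import numpy as np

def bump(n, x):
    """Triangular bump of height 1 supported on [n, n+2]."""
    return np.maximum(0.0, 1.0 - np.abs(x - (n + 1)))

x = np.linspace(0.0, 50.0, 5001)   # sample grid for computing the sup norm

for n in [1, 5, 10, 20, 40]:
    fn = bump(n, x)
    # Uniform norm: stays equal to 1 for every n.
    # Pointwise: at any fixed point (here x = 2), the value is eventually 0,
    # since the support [n, n+2] moves off to infinity.
    print(n, fn.max(), bump(n, 2.0))
```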

I recently realised (while teaching a graduate class in real analysis) that the closed graph theorem provides a quick explanation for why Banach space topologies are so rare:

Proposition 1 Let be a Hausdorff topological vector space. Then, up to equivalence of norms, there is at most one norm one can place on so that is a Banach space whose topology is at least as strong as . In particular, there is at most one topology stronger than that comes from a Banach space norm.

*Proof:* Suppose one had two norms on such that and were both Banach spaces with topologies stronger than . Now consider the graph of the identity function from the Banach space to the Banach space . This graph is closed; indeed, if is a sequence in this graph that converged in the product topology to , then converges to in norm and hence in , and similarly converges to in norm and hence in . But limits are unique in the Hausdorff topology , so . Applying the closed graph theorem (see also previous discussions on this theorem), we see that the identity map is continuous from to ; similarly for the inverse. Thus the norms are equivalent as claimed.
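To spell out the key step with explicit (and freely chosen) symbols: writing $\|\cdot\|_1, \|\cdot\|_2$ for the two norms and ${\mathcal F}$ for the common Hausdorff topology, the graph of the identity is closed because

$$ x_n \to x \hbox{ in } \|\cdot\|_1 \hbox{ and } x_n \to y \hbox{ in } \|\cdot\|_2 \implies x_n \to x \hbox{ and } x_n \to y \hbox{ in } {\mathcal F} \implies x = y, $$

using the uniqueness of limits in the Hausdorff topology ${\mathcal F}$; the closed graph theorem then makes the identity map continuous in both directions.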

By using various generalisations of the closed graph theorem, one can generalise the above proposition to Fréchet spaces, or even to F-spaces. The proposition can fail if one drops the requirement that the norms be stronger than a specified Hausdorff topology; indeed, if is infinite dimensional, one can use a Hamel basis of to construct a linear bijection on that is unbounded with respect to a given Banach space norm , and which can then be used to give an inequivalent Banach space structure on .

One can interpret Proposition 1 as follows: once one equips a vector space with some “weak” (but still Hausdorff) topology, there is a *canonical* choice of “strong” topology one can place on that space that is stronger than the “weak” topology but arises from a Banach space structure (or at least a Fréchet or F-space structure), provided that at least one such structure exists. In the case of function spaces, one can usually use the topology of convergence in distribution as the “weak” Hausdorff topology for this purpose, since this topology is weaker than almost all of the other topologies used in analysis. This helps justify the common practice of describing a Banach or Fréchet function space just by giving the set of functions that belong to that space (e.g. is the space of Schwartz functions on ) without bothering to specify the precise topology to serve as the “strong” topology, since it is usually understood that one is using the canonical such topology (e.g. the Fréchet space structure on given by the usual Schwartz space seminorms).

Of course, there are still some topological vector spaces which have no "strong topology" arising from a Banach space at all. Consider for instance the space of finitely supported sequences. A weak, but still Hausdorff, topology to place on this space is the topology of pointwise convergence. But there is no norm stronger than this topology that makes this space a Banach space. For, if there were, then letting be the standard basis of , the series would have to converge in , and hence pointwise, to an element of , but the only available pointwise limit for this series lies outside of . But I do not know if there is an easily checkable criterion to test whether a given vector space (equipped with a Hausdorff "weak" topology) can be equipped with a stronger Banach space (or Fréchet space or -space) topology.
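As a concrete illustration of this obstruction, here is a small Python sketch (my own, using exact arithmetic): each partial sum of the series lies in the space of finitely supported sequences, but the individual entries stabilise at nonzero values, so the only candidate pointwise limit has infinitely many nonzero entries and falls outside the space.

```python
from fractions import Fraction

def partial_sum(N):
    """Partial sum of sum_{n=1}^N 2^{-n} e_n, recorded by its nonzero entries."""
    return [Fraction(1, 2**n) for n in range(1, N + 1)]

for N in [1, 3, 6, 10]:
    # Each partial sum has exactly N nonzero entries, hence is finitely supported;
    # but the n-th entry stabilises at 2^{-n} != 0, so the pointwise limit
    # (1/2, 1/4, 1/8, ...) is not finitely supported.
    print(N, partial_sum(N))
```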

Tamar Ziegler and I have just uploaded to the arXiv two related papers: “Concatenation theorems for anti-Gowers-uniform functions and Host-Kra characteristic factors” and “Polynomial patterns in the primes“, with the former developing a “quantitative Bessel inequality” for local Gowers norms that is crucial in the latter.

We use the term “concatenation theorem” to denote results in which structural control of a function in two or more “directions” can be “concatenated” into structural control in a *joint* direction. A trivial example of such a concatenation theorem is the following: if a function is constant in the first variable (thus is constant for each ), and also constant in the second variable (thus is constant for each ), then it is constant in the joint variable . A slightly less trivial example: if a function is affine-linear in the first variable (thus, for each , there exist such that for all ) and affine-linear in the second variable (thus, for each , there exist such that for all ) then is a quadratic polynomial in ; in fact it must take the form

for some real numbers . (This can be seen for instance by using the affine linearity in to show that the coefficients are also affine linear.)
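For the record, the form being asserted here is presumably (with an arbitrary choice of names for the coefficients)

$$ f(x,y) = \alpha x y + \beta x + \gamma y + \delta \qquad \hbox{for some reals } \alpha, \beta, \gamma, \delta, $$

which is indeed affine-linear in each variable separately, with the coefficients of the affine map in one variable depending affinely on the other variable.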

The same phenomenon extends to higher degree polynomials. Given a function from one additive group to another, we say that is of *degree less than * along a subgroup of if all the -fold iterated differences of along directions in vanish, that is to say

for all and , where is the difference operator

(We adopt the convention that the only function of degree less than is the zero function.)
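To make the difference operator concrete, here is a quick Python sketch (an illustration of mine, not from the paper) of the iterated differences, together with a check that a quadratic polynomial on the integers is of degree less than 3 in the above sense:

```python
def diff(f, h):
    """Difference operator along direction h: (D_h f)(x) = f(x + h) - f(x)."""
    return lambda x: f(x + h) - f(x)

def iterated_diff(f, hs):
    """Iterated difference D_{h_k} ... D_{h_1} f for a list of directions hs."""
    for h in hs:
        f = diff(f, h)
    return f

f = lambda x: 5 * x**2 - 3 * x + 7          # a quadratic polynomial on Z

print(iterated_diff(f, [2, -4, 9])(11))     # 0: every 3-fold difference vanishes
print(iterated_diff(f, [2, -4])(11))        # -80: 2-fold differences need not vanish
```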

We then have the following simple proposition:

Proposition 1 (Concatenation of polynomiality) Let be of degree less than along one subgroup of , and of degree less than along another subgroup of , for some . Then is of degree less than along the subgroup of .

Note the previous example was basically the case when , , , , and .

*Proof:* The claim is trivial for or (in which case is constant along or respectively), so suppose inductively and the claim has already been proven for smaller values of .

We take a derivative in a direction along to obtain

where is the shift of by . Then we take a further shift by a direction to obtain

leading to the *cocycle equation*

Since has degree less than along and degree less than along , has degree less than along and less than along , so is of degree less than along by the induction hypothesis. Similarly is also of degree less than along . Combining this with the cocycle equation we see that is of degree less than along for any , and hence is of degree less than along , as required.
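One can also test the proposition numerically. In the Python sketch below (again my own illustration), the function $f(x,y) = x^2 y^2$ on ${\bf Z}^2$ has degree less than 3 along each coordinate subgroup, so the proposition predicts degree less than $3+3-1=5$ along all of ${\bf Z}^2$; the last line shows that this bound cannot be improved for this example.

```python
def diff(f, h):
    """Difference operator on Z^2 along the direction h = (h1, h2)."""
    return lambda v: f((v[0] + h[0], v[1] + h[1])) - f(v)

def iterated_diff(f, hs):
    for h in hs:
        f = diff(f, h)
    return f

f = lambda v: v[0] ** 2 * v[1] ** 2         # degree < 3 along each coordinate axis

print(iterated_diff(f, [(1, 0)] * 3)((4, 7)))    # 0: degree < 3 along Z x {0}
print(iterated_diff(f, [(0, 1)] * 3)((4, 7)))    # 0: degree < 3 along {0} x Z
print(iterated_diff(f, [(1, 1), (2, -1), (0, 3), (1, 0), (-1, 4)])((4, 7)))  # 0: degree < 5 along Z^2
print(iterated_diff(f, [(1, 1)] * 4)((4, 7)))    # 24: degree is not < 4, so 5 is sharp here
```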

While this proposition is simple, it already illustrates some basic principles regarding how one would go about proving a concatenation theorem:

- (i) One should perform induction on the degrees involved, and take advantage of the recursive nature of degree (in this case, the fact that a function is of degree less than along some subgroup of directions iff all of its first derivatives along are of degree less than ).
- (ii) Structure is preserved by operations such as addition, shifting, and taking derivatives. In particular, if a function is of degree less than along some subgroup , then any derivative of is also of degree less than along , *even if does not belong to*.

Here is another simple example of a concatenation theorem. Suppose an at most countable additive group acts by measure-preserving shifts on some probability space ; we call the pair (or more precisely ) a *-system*. We say that a function is a *generalised eigenfunction of degree less than * along some subgroup of and some if one has

almost everywhere for all , and some functions of degree less than along , with the convention that a function has degree less than if and only if it is equal to . Thus for instance, a function is a generalised eigenfunction of degree less than along if it is constant on almost every -ergodic component of , and is a generalised eigenfunction of degree less than along if it is an eigenfunction of the shift action on almost every -ergodic component of . A basic example of a higher order eigenfunction is the function on the *skew shift* with action given by the generator for some irrational . One can check that for every integer , where is a generalised eigenfunction of degree less than along , so is of degree less than along .
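To see the quadratic (degree 2) behaviour of the skew shift concretely, here is a small Python sketch (my own illustration; the irrational $\alpha$ and the base point are arbitrary choices) verifying the closed form $T^n(x,y) = (x + n\alpha,\ y + nx + \binom{n}{2}\alpha) \hbox{ mod } 1$, whose second coordinate is quadratic in $n$:

```python
import math

alpha = math.sqrt(2) - 1           # an irrational rotation number (illustrative choice)

def T(p):
    """Skew shift on the 2-torus: T(x, y) = (x + alpha, y + x) mod 1."""
    x, y = p
    return ((x + alpha) % 1.0, (y + x) % 1.0)

def iterate(p, n):
    for _ in range(n):
        p = T(p)
    return p

x0, y0, n = 0.123, 0.456, 37
print(iterate((x0, y0), n))
# Closed form T^n(x, y) = (x + n*alpha, y + n*x + binom(n,2)*alpha) mod 1:
print(((x0 + n * alpha) % 1.0,
       (y0 + n * x0 + n * (n - 1) / 2 * alpha) % 1.0))
```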

We then have

Proposition 2 (Concatenation of higher order eigenfunctions) Let be a -system, and let be a generalised eigenfunction of degree less than along one subgroup of , and a generalised eigenfunction of degree less than along another subgroup of , for some . Then is a generalised eigenfunction of degree less than along the subgroup of .

The argument is almost identical to that of the previous proposition and is left as an exercise to the reader. The key point is the point (ii) identified earlier: the space of generalised eigenfunctions of degree less than along is preserved by multiplication and shifts, as well as the operation of “taking derivatives” even along directions that do not lie in . (To prove this latter claim, one should restrict to the region where is non-zero, and then divide by to locate .)

A typical example of this proposition in action is as follows: consider the -system given by the -torus with generating shifts

for some irrational , which can be checked to give a action

The function can then be checked to be a generalised eigenfunction of degree less than along , and also less than along , and less than along . One can view this example as the dynamical systems translation of the example (1) (see this previous post for some more discussion of this sort of correspondence).

The main results of our concatenation paper are analogues of these propositions concerning a more complicated notion of “polynomial-like” structure that is of importance in additive combinatorics and in ergodic theory. On the ergodic theory side, the notion of structure is captured by the *Host-Kra characteristic factors* of a -system along a subgroup . These factors can be defined in a number of ways. One is by duality, using the *Gowers-Host-Kra uniformity seminorms* (defined for instance here). Namely, is the factor of defined up to equivalence by the requirement that

An equivalent definition is in terms of the *dual functions* of along , which can be defined recursively by setting and

where denotes the ergodic average along a Følner sequence in (in fact one can also define these concepts in non-amenable abelian settings as per this previous post). The factor can then be alternately defined as the factor generated by the dual functions for .

In the case when and is -ergodic, a deep theorem of Host and Kra shows that the factor is equivalent to the inverse limit of nilsystems of step less than . A similar statement holds, by work of Griesmer, with replaced by any finitely generated group, while the case of an infinite vector space over a finite field was treated in this paper of Bergelson, Ziegler, and myself. The situation is more subtle when is not -ergodic, or when is -ergodic but is a proper subgroup of acting non-ergodically, when one has to start considering measurable families of directional nilsystems; see for instance this paper of Austin for some of the subtleties involved (for instance, higher order group cohomology begins to become relevant!).

One of our main theorems is then

Proposition 3 (Concatenation of characteristic factors) Let be a -system, and let be measurable with respect to the factor and with respect to the factor for some and some subgroups of . Then is also measurable with respect to the factor .

We give two proofs of this proposition in the paper; an ergodic-theoretic proof using the Host-Kra theory of “cocycles of type (along a subgroup )”, which can be used to inductively describe the factors , and a combinatorial proof based on a combinatorial analogue of this proposition which is harder to state (but which roughly speaking asserts that a function which is nearly orthogonal to all bounded functions of small norm, and also to all bounded functions of small norm, is also nearly orthogonal to all bounded functions of small norm). The combinatorial proof parallels the proof of Proposition 2. A key point is that dual functions obey a property analogous to being a generalised eigenfunction, namely that

where and is a “structured function of order ” along . (In the language of this previous paper of mine, this is an assertion that dual functions are uniformly almost periodic of order .) Again, the point (ii) above is crucial, and in particular it is key that any structure that has is inherited by the associated functions and . This sort of inheritance is quite easy to accomplish in the ergodic setting, as there is a ready-made language of factors to encapsulate the concept of structure, and the shift-invariance and -algebra properties of factors make it easy to show that just about any “natural” operation one performs on a function measurable with respect to a given factor, returns a function that is still measurable in that factor. In the finitary combinatorial setting, though, encoding the fact (ii) becomes a remarkably complicated notational nightmare, requiring a huge amount of “epsilon management” and “second-order epsilon management” (in which one manages not only scalar epsilons, but also function-valued epsilons that depend on other parameters). In order to avoid all this we were forced to utilise a nonstandard analysis framework for the combinatorial theorems, which made the arguments greatly resemble the ergodic arguments in many respects (though the two settings are still not equivalent, see this previous blog post for some comparisons between the two settings). Unfortunately the arguments are still rather complicated.

For combinatorial applications, dual formulations of the concatenation theorem are more useful. A direct dualisation of the theorem yields the following decomposition theorem: a bounded function which is small in norm can be split into a component that is small in norm, and a component that is small in norm. (One may wish to understand this type of result by first proving the following baby version: any function that has mean zero on every coset of , can be decomposed as the sum of a function that has mean zero on every coset, and a function that has mean zero on every coset. This is dual to the assertion that a function that is constant on every coset and constant on every coset, is constant on every coset.) Combining this with some standard “almost orthogonality” arguments (i.e. Cauchy-Schwarz) gives the following Bessel-type inequality: if one has a lot of subgroups and a bounded function is small in norm for most , then it is also small in norm for most . (Here is a baby version one may wish to warm up on: if a function has small mean on for some large prime , then it has small mean on most of the cosets of most of the one-dimensional subgroups of .)
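Here is a quick Python sketch (once again just an illustration of mine) of this baby decomposition on the group $({\bf Z}/N)^2$, with $H_1$ and $H_2$ the two coordinate subgroups: subtracting from $f$ its average over each $H_1$-coset produces the first summand, while the averages themselves form the second summand, which inherits mean zero on every $H_2$-coset from the mean-zero hypothesis on the cosets of $H_1+H_2$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 7
f = rng.standard_normal((N, N))    # f[x, y] is a function on (Z/N)^2
f -= f.mean()                      # mean zero on every coset of H1 + H2 = (Z/N)^2

# H1 = {(h, 0)}: its cosets fix the second coordinate y and vary x.
# H2 = {(0, h)}: its cosets fix the first coordinate x and vary y.
f2 = np.tile(f.mean(axis=0, keepdims=True), (N, 1))   # average over each H1-coset
f1 = f - f2

print(np.allclose(f1 + f2, f))            # True: this is a decomposition of f
print(np.allclose(f1.mean(axis=0), 0.0))  # True: f1 has mean zero on every H1-coset
print(np.allclose(f2.mean(axis=1), 0.0))  # True: f2 has mean zero on every H2-coset
```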

There is also a generalisation of the above Bessel inequality (as well as several of the other results mentioned above) in which the subgroups are replaced by more general *coset progressions* (of bounded rank), so that one has a Bessel inequality controlling “local” Gowers uniformity norms such as by “global” Gowers uniformity norms such as . This turns out to be particularly useful when attempting to compute polynomial averages such as

for various functions . After repeated use of the van der Corput lemma, one can control such averages by expressions such as

(actually one ends up with more complicated expressions than this, but let’s use this example for sake of discussion). This can be viewed as an average of various Gowers uniformity norms of along arithmetic progressions of the form for various . Using the above Bessel inequality, this can be controlled in turn by an average of various Gowers uniformity norms along rank two generalised arithmetic progressions of the form for various . But for generic , this rank two progression is close in a certain technical sense to the “global” interval (this is ultimately due to the basic fact that two randomly chosen large integers are likely to be coprime, or at least have a small gcd). As a consequence, one can use the concatenation theorems from our first paper to control expressions such as (2) in terms of *global* Gowers uniformity norms. This is important in number theoretic applications, when one is interested in computing sums such as

or

where and are the Möbius and von Mangoldt functions respectively. This is because we are able to control global Gowers uniformity norms of such functions (thanks to results such as the proof of the inverse conjecture for the Gowers norms, the orthogonality of the Möbius function with nilsequences, and asymptotics for linear equations in primes), but much less control is currently available for local Gowers uniformity norms, even with the assistance of the generalised Riemann hypothesis (see this previous blog post for some further discussion).

By combining these tools and strategies with the “transference principle” approach from our previous paper (as improved using the recent “densification” technique of Conlon, Fox, and Zhao, discussed in this previous post), we are able in particular to establish the following result:

Theorem 4 (Polynomial patterns in the primes) Let be polynomials of degree at most , whose degree coefficients are all distinct, for some . Suppose that is admissible in the sense that for every prime , there are such that are all coprime to . Then there exist infinitely many pairs of natural numbers such that are prime.

Furthermore, we obtain an asymptotic for the number of such pairs in the range , (actually for minor technical reasons we reduce the range of to be very slightly less than ). In fact one could in principle obtain asymptotics for smaller values of , and relax the requirement that the degree coefficients be distinct to the requirement that no two of the differ by a constant, provided one had good enough local uniformity results for the Möbius or von Mangoldt functions. For instance, we can obtain an asymptotic for triplets of the form unconditionally for , and conditionally on GRH for all , using known results on primes in short intervals on average.

The case of this theorem was obtained in a previous paper of myself and Ben Green (using the aforementioned conjectures on the Gowers uniformity norm and the orthogonality of the Möbius function with nilsequences, both of which are now proven). For higher , an older result of Tamar and myself was able to tackle the case when (though our results there only give lower bounds on the number of pairs , and no asymptotics). Both of these results generalise my older theorem with Ben Green on the primes containing arbitrarily long arithmetic progressions. The theorem also extends to multidimensional polynomials, in which case there are some additional previous results; see the paper for more details. We also get a technical refinement of our previous result on narrow polynomial progressions in (dense subsets of) the primes by making the progressions just a little bit narrower in the case when the density of the set one is using is small.

This latter Bessel-type inequality is particularly useful in combinatorial and number-theoretic applications, as it allows one to convert “global” Gowers uniformity norm control (basically, bounds on norms such as ) into “local” Gowers uniformity norm control.
