The twin prime conjecture, still unsolved, asserts that there are infinitely many primes $p$ such that $p+2$ is also prime. A more precise form of this conjecture is a special case of the Hardy-Littlewood prime tuples conjecture, which asserts that
Because the von Mangoldt function $\Lambda$ is almost entirely supported on the primes, it is not difficult to see that (1) implies the twin prime conjecture.
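As a quick numerical sanity check (not part of the original argument; a minimal Python sketch with arbitrary cutoffs), one can compare the correlation sum $\sum_{n \leq x} \Lambda(n)\Lambda(n+2)$ against the Hardy-Littlewood prediction $2\Pi_2 x$, where $\Pi_2 = \prod_{p \geq 3}(1 - \frac{1}{(p-1)^2}) \approx 0.6602$ is the twin prime constant; this is my reading of the asymptotic (1).

```python
import math

def von_mangoldt_table(N):
    """L[n] = Lambda(n) = log p if n is a power of the prime p, and 0 otherwise."""
    L = [0.0] * (N + 1)
    is_comp = [False] * (N + 1)
    for p in range(2, N + 1):
        if not is_comp[p]:
            for m in range(2 * p, N + 1, p):
                is_comp[m] = True
            pk = p
            while pk <= N:
                L[pk] = math.log(p)
                pk *= p
    return L

x = 10**6
L = von_mangoldt_table(x + 2)
S = sum(L[n] * L[n + 2] for n in range(1, x + 1))

# twin prime constant Pi_2 = product over odd primes p of (1 - 1/(p-1)^2), truncated
Pi2 = 1.0
p = 3
while p < 10**5:
    if all(p % q for q in range(2, int(p**0.5) + 1)):
        Pi2 *= 1 - 1 / (p - 1) ** 2
    p += 2

print("sum of Lambda(n)*Lambda(n+2) for n <= x:", round(S))
print("Hardy-Littlewood prediction 2*Pi_2*x:  ", round(2 * Pi2 * x))
```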
One can give a heuristic justification of the asymptotic (1) (and hence the twin prime conjecture) via sieve theoretic methods. Recall that the von Mangoldt function $\Lambda$ can be decomposed as a Dirichlet convolution
$$\Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d}$$
or (to simplify things by removing the logarithm)
for odd. Summing by parts, one then expects
and so we heuristically have
The Dirichlet series
has an Euler product factorisation
for ; comparing this with the Euler product factorisation
for the Riemann zeta function, and recalling that $\zeta(s)$ has a simple pole of residue $1$ at $s=1$, we see that
has a simple zero at $s=1$ with first derivative
From this and standard multiplicative number theory manipulations, one can calculate the asymptotic
which concludes the heuristic justification of (1).
What prevents us from making the above heuristic argument rigorous, and thus proving (1) and the twin prime conjecture? Note that the variable in (2) ranges to be as large as $x$. On the other hand, the prime number theorem in arithmetic progressions (3) is not expected to hold for moduli anywhere that large (for instance, the left-hand side of (3) vanishes as soon as the modulus exceeds $x$). The best unconditional result known of the type (3) is the Siegel-Walfisz theorem, which allows the modulus to be as large as $\log^{O(1)} x$. Even the powerful generalised Riemann hypothesis (GRH) only lets one prove an estimate of the form (3) for moduli up to about $x^{1/2}$.
However, because of the averaging effect of the summation in (2), we don’t need the asymptotic (3) to be true for all moduli in a particular range; having it true for almost all moduli in that range would suffice. Here the situation is much better; the celebrated Bombieri-Vinogradov theorem (sometimes known as “GRH on the average”) implies, roughly speaking, that the approximation (3) is valid for almost all moduli up to $x^{1/2-\varepsilon}$ for any fixed $\varepsilon > 0$. While this is not enough to control (2) or (1), the Bombieri-Vinogradov theorem can at least be used to control variants of (1) such as
for various sieve weights whose associated divisor function is supposed to approximate the von Mangoldt function $\Lambda$, although that theorem only lets one do this when the weights are supported on the range $[1, x^{1/2-\varepsilon}]$. This is still enough to obtain some partial results towards (1); for instance, by selecting weights according to the Selberg sieve, one can use the Bombieri-Vinogradov theorem to establish the upper bound
It has been difficult to improve upon the Bombieri-Vinogradov theorem in its full generality, although there are various improvements to certain restricted versions of the Bombieri-Vinogradov theorem, for instance in the famous work of Zhang on bounded gaps between primes. Nevertheless, it is believed that the Elliott-Halberstam conjecture (EH) holds, which roughly speaking would mean that (3) now holds for almost all moduli up to $x^{1-\varepsilon}$ for any fixed $\varepsilon > 0$. (Unfortunately, the $x^{\varepsilon}$ factor cannot be removed, as investigated in a series of papers by Friedlander, Granville, and also Hildebrand and Maier.) This comes tantalisingly close to having enough distribution to control all of (1). Unfortunately, it still falls short. Using this conjecture in place of the Bombieri-Vinogradov theorem leads to various improvements to sieve theoretic bounds; for instance, the factor of $4$ in (4) can now be improved to $2$.
In two papers from the 1970s (which can be found online here and here respectively, the latter starting on page 255 of the pdf), Bombieri developed what is now known as the Bombieri asymptotic sieve to clarify the situation more precisely. First, he showed that on the Elliott-Halberstam conjecture, while one still could not establish the asymptotic (1), one could prove the generalised asymptotic
These functions behave like the von Mangoldt function, but are concentrated on $k$-almost primes (numbers with at most $k$ prime factors) rather than primes. The right-hand side of (5) corresponds to what one would expect if one ran the same heuristics used to justify (1). Sadly, the $k=1$ case of (5), which is just (1), is just barely excluded from Bombieri’s analysis.
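To make the generalised von Mangoldt functions a little more concrete, here is a small added Python illustration (it assumes the standard definition $\Lambda_k := \mu * \log^k$, which also appears in the notes below) verifying that $\Lambda_2$ vanishes on numbers with more than two distinct prime factors, i.e. it is concentrated on $2$-almost primes.

```python
import math

def mobius(n):
    """Moebius function via trial division."""
    cnt, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # n is not squarefree
            cnt += 1
        d += 1
    if n > 1:
        cnt += 1
    return -1 if cnt % 2 else 1

def omega(n):
    """Number of distinct prime factors of n."""
    cnt, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            cnt += 1
            while n % d == 0:
                n //= d
        d += 1
    return cnt + (1 if n > 1 else 0)

def Lambda2(n):
    """Generalised von Mangoldt function Lambda_2 = mu * log^2 (Dirichlet convolution)."""
    return sum(mobius(d) * math.log(n / d) ** 2 for d in range(1, n + 1) if n % d == 0)

for n in range(2, 300):
    if omega(n) > 2:
        assert abs(Lambda2(n)) < 1e-8, n
print("Lambda_2 vanishes on every n < 300 with three or more distinct prime factors")
```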
for any fixed and any tuple of natural numbers other than , where
is a further generalisation of the von Mangoldt function (now concentrated on -almost primes). By combining these asymptotics with some elementary identities involving the , together with the Weierstrass approximation theorem, Bombieri was able to control a wide family of sums including (1), except for one undetermined scalar . Namely, he was able to show (again on EH) that for any fixed and any continuous function on the simplex that had suitable vanishing at the boundary, the sum
and the twin prime conjecture would be proved if one could show that is bounded away from zero, while (1) is equivalent to the assertion that is equal to . Unfortunately, no additional bound beyond the inequalities provided by the Bombieri asymptotic sieve is known, even if one assumes all other major conjectures in number theory than the prime tuples conjecture and its variants (e.g. GRH, GEH, GUE, abc, Chowla, …).
for and some fixed , with vanishing elsewhere and for some continuous (symmetric) functions obeying some vanishing at the boundary, so long as the parity condition
is obeyed (informally: gives the same weight to products of an odd number of primes as to products of an even number of primes, or to put it another way, is asymptotically orthogonal to the Möbius function ). But when violates the parity condition, the asymptotic involves the unknown . This scalar thus embodies the “parity problem” for the twin prime conjecture (discussed in these previous blog posts).
Because the obstruction to the parity problem is only one-dimensional (on EH), one can replace any parity-violating weight (such as ) with any other parity-violating weight and obtain a logically equivalent estimate. For instance, to prove the twin prime conjecture on EH, it would suffice to show that
for some fixed , or equivalently that there are solutions to the equation in primes with and . (In some cases, this sort of reduction can also be made using other sieves than the Bombieri asymptotic sieve, as was observed by Ng.) As another example, the Bombieri asymptotic sieve can be used to show that the asymptotic (1) is equivalent to the asymptotic
where $A$ is the set of numbers that are rough in the sense that they have no prime factors less than $x^{\varepsilon}$ for some fixed $\varepsilon > 0$ (the function $1_A$ clearly correlates with the Möbius function and so must violate the parity condition). One can replace $1_A$ with similar sieve weights (e.g. a Selberg sieve) that concentrate on almost primes if desired.
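As a crude illustration of such rough-number weights (added; the cutoff $y = 50$ and range $x = 10^6$ are arbitrary small choices, not the $x^{\varepsilon}$ threshold used above), one can check numerically that the density of integers with no prime factor below $y$ matches the Mertens-type product $\prod_{p<y}(1 - 1/p)$:

```python
def small_primes(limit):
    sieve = [True] * limit
    sieve[0:2] = [False, False]
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(limit) if sieve[p]]

x, y = 10**6, 50
primes_below_y = small_primes(y)

def is_rough(n):
    """True if n has no prime factor below y."""
    return all(n % p for p in primes_below_y)

count = sum(1 for n in range(1, x + 1) if is_rough(n))
mertens = 1.0
for p in primes_below_y:
    mertens *= 1 - 1 / p

print("observed density of y-rough numbers up to x:", count / x)
print("Mertens-type product prod_{p<y} (1 - 1/p):  ", mertens)
```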
As it turns out, if one is willing to strengthen the assumption of the Elliott-Halberstam (EH) conjecture to the assumption of the generalised Elliott-Halberstam (GEH) conjecture (as formulated for instance in Claim 2.6 of the Polymath8b paper), one can also swap the factor in the above asymptotics with other parity-violating weights and obtain a logically equivalent estimate, as the Bombieri asymptotic sieve also applies to weights such as under the assumption of GEH. For instance, on GEH one can use two such applications of the Bombieri asymptotic sieve to show that the twin prime conjecture would follow if one could show that there are solutions to the equation
in primes with and , for some . Similarly, on GEH the asymptotic (1) is equivalent to the asymptotic
for some fixed , and similarly with replaced by other sieves. This form of the quantitative twin primes conjecture is appealingly similar to the (special case)
of the Chowla conjecture, for which there has been some recent progress (discussed for instance in these recent posts). Informally, the Bombieri asymptotic sieve lets us (on GEH) view the twin prime conjecture as a sort of Chowla conjecture restricted to almost primes. Unfortunately, the recent progress on the Chowla conjecture relies heavily on the multiplicativity of $\lambda$ at small primes, which is completely destroyed by inserting a weight such as $1_A$, so this does not yet yield a viable path towards the twin prime conjecture even assuming GEH. Still, the similarity is striking, and one can hope that further ways to attack the Chowla conjecture may emerge that could impact the twin prime conjecture. (Alternatively, if one assumes a sufficiently optimistic version of the GEH, one could perhaps relax the notion of “almost prime” to the extent that one could start usefully using multiplicativity at smallish primes, though this seems rather wishful at present, particularly since the most optimistic versions of GEH are known to be false.)
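For what it is worth, the correlation in this special case of the Chowla conjecture is easy to examine numerically (an added illustration with an arbitrary cutoff of $10^6$); the normalised sum $\frac{1}{x}\sum_{n \leq x} \lambda(n)\lambda(n+2)$ is already quite small at modest values of $x$, although of course this proves nothing.

```python
def liouville_table(N):
    """lam[n] = (-1)^(number of prime factors of n, counted with multiplicity)."""
    spf = list(range(N + 1))                  # smallest prime factor sieve
    for p in range(2, int(N**0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0, 1] + [0] * (N - 1)
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

x = 10**6
lam = liouville_table(x + 2)
corr = sum(lam[n] * lam[n + 2] for n in range(1, x + 1)) / x
print("(1/x) * sum_{n<=x} lambda(n)*lambda(n+2) at x = 10^6:", corr)
```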
The Bombieri asymptotic sieve is already well explained in the original two papers of Bombieri; there is also a slightly different treatment of the sieve by Friedlander and Iwaniec, as well as a simplified version in the book of Friedlander and Iwaniec (in which the distribution hypothesis is strengthened in order to shorten the arguments). I’ve decided though to write up my own notes on the sieve below the fold; this is primarily for my own benefit, but may be useful to some readers also. I largely follow the treatment of Bombieri, with the one idiosyncratic twist of replacing the usual “elementary” Selberg sieve with the “analytic” Selberg sieve used in particular in many of the breakthrough works in small gaps between primes; I prefer working with the latter due to its Fourier-analytic flavour.
— 1. Controlling generalised von Mangoldt sums —
To prove (5), we shall first generalise it, by replacing the sequence by a more general sequence obeying the following axioms:
- (i) (Non-negativity) One has for all .
- (ii) (Crude size bound) One has for all , where is the divisor function.
- (iii) (Size) We have for some constant .
- (iv) (Elliott-Halberstam type conjecture) For any , one has
where is a multiplicative function with for all primes and .
These axioms are a little bit stronger than what is actually needed to make the Bombieri asymptotic sieve work, but we will not attempt to work with the weakest possible axioms here.
We introduce the function
which is analytic for ; in particular it can be evaluated at to yield
There are two model examples of data to keep in mind. The first, discussed in the introduction, is when , then and is as in the introduction; one of course needs EH to justify axiom (iv) in this case. The other is when , in which case and for all . We will later take advantage of the second example to avoid doing some (routine, but messy) main term computations.
The main result of this section is then
as , where .
Note that this recovers (5) (on EH) as a special case.
We now begin the proof of this theorem. Henceforth we allow implied constants in the or notation to depend on and .
It will be convenient to replace the range by a shorter range by the following standard localisation trick. Let be a large quantity depending on to be chosen later, and let denote the interval . We will show the estimate
for any .
Write for the logarithm function , thus for any . Without loss of generality we may assume that ; we then factor , where
This function is just when . When the function is more complicated, but we at least have the following crude bound:
Proof: We induct on . The case is obvious, so suppose and the claim has already been proven for . Since , we see from induction hypothesis and the triangle inequality that
Since by Möbius inversion, the claim follows.
We can write
In the region , we have . Thus
for . The contribution of the error term to (10) is easily seen to be negligible if is large enough, so we may freely replace with with little difficulty.
If we insert this replacement directly into the left-hand side of (10) and rearrange, we get
One could in principle compute explicitly from the proof of (13), but one can avoid doing so by the following comparison trick. In the special case , standard multiplicative number theory (noting that the Dirichlet series has a pole of order at , with top Laurent coefficient ) gives the asymptotic
which when compared with (14) for (recalling that in this case) gives the formula
As it turns out, the estimate (13) is easy to establish, but the estimate (12) is not, roughly speaking because the typical number in has too many divisors in the range , each of which gives a contribution to the error term. (In the book of Friedlander and Iwaniec, the estimate (13) is established anyway, but only after assuming a stronger version of (iv), roughly speaking in which is allowed to be as large as .) To resolve this issue, we will insert a preliminary sieve that will remove most of the potential divisors in the range (leaving only about such divisors on the average for typical ), making the analogue of (12) easier to prove (at the cost of making the analogue of (13) more difficult). Namely, if one can find a function for which one has the estimates
for some quantity that depends on but not on , then by repeating the previous arguments we will again be able to establish (10).
The key estimate is (16). As we shall see, when comparing with , the weight will cost us a factor of , but the term in the definitions of and will recover a factor of , which will give the desired bound since we are assuming .
One has some flexibility in how to select the weight : basically any standard sieve that uses divisors of size at most to localise (at least approximately) to numbers that are rough in the sense that they have no (or at least very few) factors less than , will do. We will use the analytic Selberg sieve choice
where denotes the derivative of . Note the loss of that had previously been pointed out. In the arguments that follow I will be a little brief with the details, as they are standard (see e.g. this previous post).
We now prove (19). The left-hand side can be expanded as
where denotes the least common multiple of and . From the support of we see that the summand is only non-vanishing when . We now use axiom (iv) and split the left-hand side into a main term
so from axiom (iv) and Cauchy-Schwarz we see that the error term (20) is acceptable. Thus it will suffice to establish the bound
and so the left-hand side of (21) can be rearranged using Fubini’s theorem as
We can factorise as an Euler product:
Taking absolute values and using Mertens’ theorem leads to the crude bound
which when combined with the rapid decrease of , allows us to restrict the region of integration in (23) to the square (say) with negligible error. Next, we use the Euler product
for to factorise
For with nonnegative real part, one has
and so by the Weierstrass $M$-test, is continuous at . Since
we thus have
Also, since has a pole of order at with residue , we have
The quantity (23) can thus be written, up to errors of , as
Using the rapid decrease of , we may remove the restriction on , and it will now suffice to prove the identity
But on differentiating and then squaring (22) we have
and the claim follows by integrating in from zero to infinity (noting that vanishes for ).
We have the following variant of (19):
Roughly speaking, the above estimates assert that is concentrated on those numbers with no prime factors much less than , but factors without such small prime divisors occur with about the same relative density as they do in the integers.
Proof: The left-hand side of (24) can be expanded as
If we define
then the previous expression can be written as
while one has
From Mertens’ theorem we have
when , so the contribution of the terms where can be absorbed into the error (after increasing that error slightly). For the remaining contributions, we see that
where if does not divide , and
if divides times for some . In the latter case, Taylor expansion gives the bounds
and the claim (28) follows. When and we have
Now we can prove (15), (16), (17). We begin with (15). Using the Leibniz rule applied to the identity and using and Möbius inversion (and the associativity and commutativity of Dirichlet convolution) we see that
Next, by applying the Leibniz rule to for some and using (29) we see that
In particular, from induction we see that is supported on numbers with at most distinct prime factors, and hence is supported on numbers with at most distinct prime factors. In particular, from (18) we see that on the support of . Thus it will suffice to show that
If and , then has at most distinct prime factors , with . If we factor , where is the contribution of those with , and is the contribution of those with , then at least one of the following two statements holds:
- (a) (and hence ) is divisible by a square number of size at least .
- (b) .
The contribution of case (a) is easily seen to be acceptable by axiom (ii). For case (b), we observe from (30) and induction that
and so it will suffice to show that
where ranges over numbers bounded by with at most distinct prime factors, the smallest of which is at most , and consists of those numbers with no prime factor less than or equal to . Applying (26) (with replaced by ) gives the bound
so by (25) it suffices to show that
subject to the same constraints on as before. The contribution of those with distinct prime factors can be bounded by
applying Mertens’ theorem and summing over , one obtains the claim.
From the support of , the summand on the left-hand side is only non-zero when , which makes , where we use the crucial hypothesis to gain enough powers of to make the argument here work. Applying Lemma 2, we reduce to showing that
We can make the change of variables to flip the sum
and then swap the sums to reduce to showing that
By Lemma 3, it suffices to show that
To prove this, we use the Rankin trick, bounding the implied weight by . We can then bound the left-hand side by the Euler product
which can be bounded by
and the claim follows from Mertens’ theorem.
We let be a small constant to be chosen later. We divide the outer sum into two ranges, depending on whether only has prime factors greater than or not. In the former case, we can apply (27) to write this contribution as
plus a negligible error, where the is implicitly restricted to numbers with all prime factors greater than . The main term is messy, but it is of the required form up to an acceptable error, so there is no need to compute it any further. It remains to consider those that have at least one prime factor less than . Here we use (24) instead of (27) as well as Lemma 3 to dominate this contribution by
up to negligible errors, where is now restricted to have at least one prime factor less than . This makes at least one of the factors to be at most . A routine application of Rankin’s trick shows that
and so the total contribution of this case is . Since can be made arbitrarily small, (17) follows.
— 2. Weierstrass approximation —
Let , , , be as in that theorem. It will be convenient to normalise the weights by to make their mean value comparable to . From Theorem 1 and summation by parts we have
We now take a closer look at what happens when does consist entirely of ones. Let denote the -tuple . Convolving the case of (30) with copies of for some and using the Leibniz rule, we see that
Multiplying by and summing over , and using (31) to control the term, one has
If we define (up to an error of ) by the formula
then an induction shows that
for odd , and
for even . In particular, after adjusting by if necessary, we have since the left-hand sides are non-negative.
If we now define the comparison sequence , standard multiplicative number theory shows that the above estimates also hold when is replaced by ; thus
for both odd and even . The bound (31) also holds for when does not consist entirely of ones, and hence
for any fixed (which may or may not consist entirely of ones).
Next, from induction (on ), the Leibniz rule, and (30), we see that for any and , , the function
whenever is one of these functions (32). Specialising to the case , we thus have
where . The contribution of those that are powers of primes can be easily seen to be negligible, leading to
where now . The contribution of the case where two of the primes agree can also be seen to be negligible, as can the error when replacing with , and then by symmetry
By linearity, this implies that
for any polynomial that vanishes on the coordinate hyperplanes . The right-hand side can also be evaluated by Mertens’ theorem as
when is odd and
when is even. Using the Weierstrass approximation theorem, we then have
Remark 4 The Bombieri asymptotic sieve has to use the full power of EH (or GEH); there are constructions due to Ford that show that if one only has a distributional hypothesis up to for some fixed constant , then the asymptotics of sums such as (5), or more generally (9), are not determined by a single scalar parameter , but can also vary in other ways as well. Thus the Bombieri asymptotic sieve really is asymptotic; in order to get type error terms one needs the level of distribution to be asymptotically equal to as . Related to this, the quantitative decay of the error terms in the Bombieri asymptotic sieve is extremely poor; in particular, they depend on the dependence of the implied constant in axiom (iv) on the parameters , for which there is no consensus on what one should conjecturally expect.
A capset in the vector space ${\bf F}_3^n$ over the finite field ${\bf F}_3$ of three elements is a subset $A$ of ${\bf F}_3^n$ that does not contain any lines $x, x+r, x+2r$, where $x, r \in {\bf F}_3^n$ and $r \neq 0$. A basic problem in additive combinatorics (discussed in one of the very first posts on this blog) is to obtain good upper and lower bounds for the maximal size of a capset in ${\bf F}_3^n$.
Trivially, one has $|A| \leq 3^n$. Using Fourier methods (and the density increment argument of Roth), the bound of $|A| \leq O(3^n/n)$ was obtained by Meshulam, and improved only as late as 2012 to $O(3^n/n^{1+c})$ for some absolute constant $c > 0$ by Bateman and Katz. But in a very recent breakthrough, Ellenberg (and independently Gijswijt) obtained the exponentially superior bound $|A| \leq O(2.756^n)$, using a version of the polynomial method recently introduced by Croot, Lev, and Pach. (In the converse direction, a construction of Edel gives capsets as large as $(2.2174)^n$.) Given the success of the polynomial method in superficially similar problems such as the finite field Kakeya problem (discussed in this previous post), it was natural to wonder whether this method could be applicable to the cap set problem (see for instance this MathOverflow comment of mine on this from 2010), but it took a surprisingly long time before Croot, Lev, and Pach were able to identify the precise variant of the polynomial method that would actually work here.
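For very small $n$ one can verify the extremal sizes directly; the following added brute-force sketch (feasible only for $n \leq 2$) checks the capset condition by searching for three distinct points summing to zero, which is equivalent to the definition above.

```python
from itertools import product, combinations

def is_capset(A, n):
    """A contains no line {x, x+r, x+2r} with r != 0; equivalently, no three
    distinct points of A sum to zero coordinatewise mod 3."""
    S = set(A)
    for x, y in combinations(A, 2):
        z = tuple((-x[i] - y[i]) % 3 for i in range(n))
        if z != x and z != y and z in S:
            return False
    return True

for n in (1, 2):
    points = list(product(range(3), repeat=n))
    best = max(k for k in range(1, 3**n + 1)
               for A in combinations(points, k) if is_capset(A, n))
    print(f"n = {n}: largest capset in F_3^{n} has size {best}")
```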
The proof of the capset bound is very short (Ellenberg’s and Gijswijt’s preprints are both 3 pages long, and Croot-Lev-Pach is 6 pages), but I thought I would present a slight reformulation of the argument which treats the three points on a line in ${\bf F}_3^n$ symmetrically (as opposed to treating the third point differently from the first two, as is done in the Ellenberg and Gijswijt papers; Croot-Lev-Pach also treat the middle point of a three-term arithmetic progression differently from the two endpoints, although this is a very natural thing to do in their context of ${\bf Z}_4^n$). The basic starting point is this: if $A$ is a capset, then one has the identity
$$\delta_{0^n}(x+y+z) = \sum_{a \in A} \delta_a(x) \delta_a(y) \delta_a(z) \ \ \ \ \ (1)$$
for all $x, y, z \in A$, where $\delta_a$ denotes the Kronecker delta function at $a$, which we view as taking values in ${\bf F}_3$. Indeed, (1) reflects the fact that the equation $x+y+z=0$ has solutions $x,y,z \in A$ precisely when $x,y,z$ are either all equal, or form a line, and the latter possibility is ruled out precisely when $A$ is a capset.
To exploit (1), we will show that the left-hand side of (1) is “low rank” in some sense, while the right-hand side is “high rank”. Recall that a function taking values in a field is of rank one if it is non-zero and of the form for some , and that the rank of a general function is the least number of rank one functions needed to express as a linear combination. More generally, if , we define the rank of a function to be the least number of “rank one” functions of the form
for some and some functions , , that are needed to generate as a linear combination. For instance, when , the rank one functions take the form , , , and linear combinations of such rank one functions will give a function of rank at most .
It is a standard fact in linear algebra that the rank of a diagonal matrix is equal to the number of non-zero entries. This phenomenon extends to higher dimensions:
Proof: We induct on . As mentioned above, the case follows from standard linear algebra, so suppose now that and the claim has already been proven for .
It is clear that the function (2) has rank at most equal to the number of non-zero (since the summands on the right-hand side are rank one functions), so it suffices to establish the lower bound. By deleting from those elements with (which cannot increase the rank), we may assume without loss of generality that all the are non-zero. Now suppose for contradiction that (2) has rank at most , then we obtain a representation
Consider the space of functions that are orthogonal to all the , in the sense that
for all . This space is a vector space whose dimension is at least . A basis of this space generates a coordinate matrix of full rank, which implies that there is at least one non-singular minor. This implies that there exists a function in this space which is nowhere vanishing on some subset of of cardinality at least .
If we multiply (3) by and sum in , we conclude that
The right-hand side has rank at most , since the summands are rank one functions. On the other hand, from induction hypothesis the left-hand side has rank at least , giving the required contradiction.
On the other hand, we have the following (symmetrised version of a) beautifully simple observation of Croot, Lev, and Pach:
Proof: Using the identity $\delta_0(t) = 1 - t^2$ for $t \in {\bf F}_3$, we have
$$\delta_{0^n}(x+y+z) = \prod_{i=1}^n \left( 1 - (x_i + y_i + z_i)^2 \right).$$
The right-hand side is clearly a polynomial of degree $2n$ in $x, y, z$, which is then a linear combination of monomials
$$x_1^{a_1} \cdots x_n^{a_n} y_1^{b_1} \cdots y_n^{b_n} z_1^{c_1} \cdots z_n^{c_n}$$
with $a_i, b_i, c_i \in \{0,1,2\}$ and
$$a_1 + \cdots + a_n + b_1 + \cdots + b_n + c_1 + \cdots + c_n \leq 2n.$$
In particular, from the pigeonhole principle, at least one of the three partial degrees $a_1 + \cdots + a_n$, $b_1 + \cdots + b_n$, $c_1 + \cdots + c_n$ is at most $2n/3$.
Consider the contribution of the monomials for which $a_1 + \cdots + a_n \leq 2n/3$. We can regroup this contribution as
where ranges over those with , is the monomial
and is some explicitly computable function whose exact form will not be of relevance to our argument. The number of such is equal to , so this contribution has rank at most . The remaining contributions arising from the cases and similarly have rank at most (grouping the monomials so that each monomial is only counted once), so the claim follows.
Upon restricting from to , the rank of is still at most . The two lemmas then combine to give the Ellenberg-Gijswijt bound
All that remains is to compute the asymptotic behaviour of this quantity. This can be done using the general tool of Cramér’s theorem, but can also be derived from Stirling’s formula (discussed in this previous post). Indeed, if $n_1 = \alpha_1 n$, $n_2 = \alpha_2 n$, $n_3 = \alpha_3 n$ for some $\alpha_1, \alpha_2, \alpha_3 \geq 0$ summing to $1$, Stirling’s formula gives
$$\binom{n}{n_1, n_2, n_3} \approx \exp\big( n\, h(\alpha_1, \alpha_2, \alpha_3) \big)$$
where $h$ is the entropy function
$$h(\alpha_1, \alpha_2, \alpha_3) = \alpha_1 \log \frac{1}{\alpha_1} + \alpha_2 \log \frac{1}{\alpha_2} + \alpha_3 \log \frac{1}{\alpha_3}.$$
We then have
where is the maximum entropy subject to the constraints
A routine Lagrange multiplier computation shows that the maximum occurs when
and the maximum entropy is approximately $\log 2.755$, giving rise to the claimed bound of $O(2.756^n)$.
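The Lagrange multiplier computation can also be checked by a crude numerical search (added illustration; the constraint $\alpha_1 + 2\alpha_2 \leq 2/3$ encodes the restriction on the average degree of the monomials):

```python
import math

best, arg = 0.0, (1.0, 0.0, 0.0)
steps = 1000
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        a1, a2 = i / steps, j / steps
        a0 = 1.0 - a1 - a2
        if a1 + 2 * a2 > 2 / 3 + 1e-12:      # degree constraint
            continue
        h = -sum(a * math.log(a) for a in (a0, a1, a2) if a > 0)
        if math.exp(h) > best:
            best, arg = math.exp(h), (a0, a1, a2)

print("optimal (a0, a1, a2) ~", tuple(round(a, 3) for a in arg))
print("exp(max entropy) ~", round(best, 4))   # should come out close to 2.755
```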
Remark 3 As noted in the Ellenberg and Gijswijt papers, the above argument extends readily to other fields than ${\bf F}_3$, to control the maximal size of a subset of ${\bf F}_q^n$ that has no non-trivial solutions to the equation $ax + by + cz = 0$, where $a, b, c$ are non-zero constants that sum to zero. Of course one replaces the function $1 - t^2$ in Lemma 2 by $1 - t^{q-1}$ in this case.
Remark 4 This symmetrised formulation suggests one possible way to improve slightly on the numerical quantity $2.756$: finding a more efficient way to decompose $\delta_{0^n}(x+y+z)$ into rank one functions. However, I was not able to do so (though such improvements are reminiscent of the Strassen type algorithms for fast matrix multiplication).
Remark 5 It is tempting to see if this method can get non-trivial upper bounds for sets with no length progressions, in (say) . One can run the above arguments, replacing the function
this leads to the bound where
Unfortunately, is asymptotic to and so this bound is in fact slightly worse than the trivial bound ! However, there is a slim chance that there is a more efficient way to decompose into rank one functions that would give a non-trivial bound on . I experimented with a few possible such decompositions but unfortunately without success.
Remark 6 Return now to the capset problem. Since Lemma 1 is valid for any field ${\bf F}$, one could perhaps hope to get better bounds by viewing the Kronecker delta function as taking values in another field than ${\bf F}_3$, such as the complex numbers ${\bf C}$. However, as soon as one works in a field of characteristic other than $3$, one can adjoin a cube root of unity $\omega$, and one now has the Fourier decomposition
$$\delta_{0^n}(x+y+z) = \frac{1}{3^n} \sum_{\xi \in {\bf F}_3^n} \omega^{x \cdot \xi} \omega^{y \cdot \xi} \omega^{z \cdot \xi}.$$
Moving to the Fourier basis, we conclude from Lemma 1 that the function $\delta_{0^n}(x+y+z)$ now has rank exactly $3^n$, and so one cannot improve upon the trivial bound of $|A| \leq 3^n$ by this method using fields of characteristic other than three as the range field. So it seems one has to stick with ${\bf F}_3$ (or the algebraic completion thereof).
Thanks to Jordan Ellenberg and Ben Green for helpful discussions.
When teaching mathematics, the traditional method of lecturing in front of a blackboard is still hard to improve upon, despite all the advances in modern technology. However, there are some nice things one can do in an electronic medium, such as this blog. Here, I would like to experiment with the ability to animate images, which I think can convey some mathematical concepts in ways that cannot be easily replicated by traditional static text and images. Given that many readers may find these animations annoying, I am placing the rest of the post below the fold.
In functional analysis, it is common to endow various (infinite-dimensional) vector spaces with a variety of topologies. For instance, a normed vector space can be given the strong topology as well as the weak topology; if the vector space has a predual, it also has a weak-* topology. Similarly, spaces of operators have a number of useful topologies on them, including the operator norm topology, strong operator topology, and the weak operator topology. For function spaces, one can use topologies associated to various modes of convergence, such as uniform convergence, pointwise convergence, locally uniform convergence, or convergence in the sense of distributions. (A small minority of such modes are not topologisable, though, the most common of which is pointwise almost everywhere convergence; see Exercise 8 of this previous post).
Some of these topologies are much stronger than others (in that they contain many more open sets, or equivalently that they have many fewer convergent sequences and nets). However, even the weakest topologies used in analysis (e.g. convergence in distributions) tend to be Hausdorff, since this at least ensures the uniqueness of limits of sequences and nets, which is a fundamentally useful feature for analysis. On the other hand, some Hausdorff topologies used are “better” than others in that many more analysis tools are available for those topologies. In particular, topologies that come from Banach space norms are particularly valued, as such topologies (and their attendant norm and metric structures) grant access to many convenient additional results such as the Baire category theorem, the uniform boundedness principle, the open mapping theorem, and the closed graph theorem.
Of course, most topologies placed on a vector space will not come from Banach space norms. For instance, if one takes the space $C_0({\bf R})$ of continuous functions on ${\bf R}$ that converge to zero at infinity, the topology of uniform convergence comes from a Banach space norm on this space (namely, the uniform norm $\| \cdot \|_\infty$), but the topology of pointwise convergence does not; and indeed all the other usual modes of convergence one could use here (e.g. $L^1$ convergence, locally uniform convergence, convergence in measure, etc.) do not arise from Banach space norms.
I recently realised (while teaching a graduate class in real analysis) that the closed graph theorem provides a quick explanation for why Banach space topologies are so rare:
Proposition 1 Let $(V, \tau)$ be a Hausdorff topological vector space. Then, up to equivalence of norms, there is at most one norm $\| \cdot \|$ one can place on $V$ so that $(V, \| \cdot \|)$ is a Banach space whose topology is at least as strong as $\tau$. In particular, there is at most one topology stronger than $\tau$ that comes from a Banach space norm.
Proof: Suppose one had two norms $\| \cdot \|_1, \| \cdot \|_2$ on $V$ such that $(V, \| \cdot \|_1)$ and $(V, \| \cdot \|_2)$ were both Banach spaces with topologies stronger than $\tau$. Now consider the graph of the identity function $\mathrm{id}: V \to V$ from the Banach space $(V, \| \cdot \|_1)$ to the Banach space $(V, \| \cdot \|_2)$. This graph is closed; indeed, if $(x_n, x_n)$ is a sequence in this graph that converged in the product topology to $(x, y)$, then $x_n$ converges to $x$ in $\| \cdot \|_1$ norm and hence in $\tau$, and similarly $x_n$ converges to $y$ in $\| \cdot \|_2$ norm and hence in $\tau$. But limits are unique in the Hausdorff topology $\tau$, so $x = y$. Applying the closed graph theorem (see also previous discussions on this theorem), we see that the identity map is continuous from $(V, \| \cdot \|_1)$ to $(V, \| \cdot \|_2)$; similarly for the inverse. Thus the norms $\| \cdot \|_1, \| \cdot \|_2$ are equivalent as claimed.
By using various generalisations of the closed graph theorem, one can generalise the above proposition to Fréchet spaces, or even to F-spaces. The proposition can fail if one drops the requirement that the norms be stronger than a specified Hausdorff topology; indeed, if $V$ is infinite dimensional, one can use a Hamel basis of $V$ to construct a linear bijection on $V$ that is unbounded with respect to a given Banach space norm $\| \cdot \|$, and which can then be used to give an inequivalent Banach space structure on $V$.
One can interpret Proposition 1 as follows: once one equips a vector space with some “weak” (but still Hausdorff) topology, there is a canonical choice of “strong” topology one can place on that space that is stronger than the “weak” topology but arises from a Banach space structure (or at least a Fréchet or F-space structure), provided that at least one such structure exists. In the case of function spaces, one can usually use the topology of convergence in distribution as the “weak” Hausdorff topology for this purpose, since this topology is weaker than almost all of the other topologies used in analysis. This helps justify the common practice of describing a Banach or Fréchet function space just by giving the set of functions that belong to that space (e.g. ${\mathcal S}({\bf R}^n)$ is the space of Schwartz functions on ${\bf R}^n$) without bothering to specify the precise topology to serve as the “strong” topology, since it is usually understood that one is using the canonical such topology (e.g. the Fréchet space structure on ${\mathcal S}({\bf R}^n)$ given by the usual Schwartz space seminorms).
Of course, there are still some topological vector spaces which have no “strong topology” arising from a Banach space at all. Consider for instance the space $c_{00}$ of finitely supported sequences. A weak, but still Hausdorff, topology to place on this space is the topology of pointwise convergence. But there is no norm $\| \cdot \|$ stronger than this topology that makes this space a Banach space. For, if there were, then letting $e_1, e_2, e_3, \dots$ be the standard basis of $c_{00}$, the series $\sum_{n=1}^\infty \frac{e_n}{2^n \|e_n\|}$ would have to converge in $\| \cdot \|$, and hence pointwise, to an element of $c_{00}$, but the only available pointwise limit for this series lies outside of $c_{00}$. But I do not know if there is an easily checkable criterion to test whether a given vector space (equipped with a Hausdorff “weak” topology) can be equipped with a stronger Banach space (or Fréchet space or $F$-space) topology.
There is a very nice recent paper by Lemke Oliver and Soundararajan (complete with a popular science article about it by the consistently excellent Erica Klarreich for Quanta) about a surprising (but now satisfactorily explained) bias in the distribution of pairs of consecutive primes when reduced to a small modulus $q$.
This phenomenon is superficially similar to the more well known Chebyshev bias concerning the reduction of a single prime to a small modulus $q$, but is in fact a rather different (and much stronger) bias than the Chebyshev bias, and seems to arise from a completely different source. The Chebyshev bias asserts, roughly speaking, that a randomly selected prime of a large magnitude $x$ will typically (though not always) be slightly more likely to be a quadratic non-residue modulo $q$ than a quadratic residue, but the bias is small (the difference in probabilities is only of the order of $x^{-1/2+o(1)}$ for typical choices of $x$), and certainly consistent with known or conjectured positive results such as Dirichlet’s theorem or the generalised Riemann hypothesis. The reason for the Chebyshev bias can be traced back to the von Mangoldt explicit formula which relates the distribution of the von Mangoldt function $\Lambda$ modulo $q$ with the zeroes of the $L$-functions with period $q$. This formula predicts (assuming some standard conjectures like GRH) that the von Mangoldt function is quite unbiased modulo $q$. The von Mangoldt function is mostly concentrated in the primes, but it also has a medium-sized contribution coming from squares of primes, which are of course all located in the quadratic residues modulo $q$. (Cubes and higher powers of primes also make a small contribution, but these are quite negligible asymptotically.) To balance everything out, the contribution of the primes must then exhibit a small preference towards quadratic non-residues, and this is the Chebyshev bias. (See this article of Rubinstein and Sarnak for a more technical discussion of the Chebyshev bias, and this survey of Granville and Martin for an accessible introduction. The story of the Chebyshev bias is also related to Skewes’ number, once considered the largest explicit constant to naturally appear in a mathematical argument.)
The paper of Lemke Oliver and Soundararajan considers instead the distribution of the pairs $(p_n \bmod q, p_{n+1} \bmod q)$ for small $q$ and for large consecutive primes $p_n, p_{n+1}$, say drawn at random from the primes comparable to some large $x$. For sake of discussion let us just take $q = 3$. Then all primes larger than $3$ are either $1 \bmod 3$ or $2 \bmod 3$; Chebyshev’s bias gives a very slight preference to the latter (of order $x^{-1/2+o(1)}$, as discussed above), but apart from this, we expect the primes to be more or less equally distributed in both classes. For instance, assuming GRH, the probability that $p_n$ lands in $1 \bmod 3$ would be $\frac{1}{2} + O(x^{-1/2+o(1)})$, and similarly for $2 \bmod 3$.
In view of this, one would expect that up to errors of $o(1)$ or so, the pair $(p_n \bmod 3, p_{n+1} \bmod 3)$ should be equally distributed amongst the four options $(1,1)$, $(1,2)$, $(2,1)$, $(2,2)$, thus for instance the probability that this pair is $(1,1)$ would naively be expected to be $\frac{1}{4} + o(1)$, and similarly for the other three tuples. These assertions are not yet proven (although some non-trivial upper and lower bounds for such probabilities can be obtained from recent work of Maynard).
However, Lemke Oliver and Soundararajan argue (backed by both plausible heuristic arguments (based ultimately on the Hardy-Littlewood prime tuples conjecture), as well as substantial numerical evidence) that there is a significant bias away from the tuples $(1,1)$ and $(2,2)$ – informally, adjacent primes don’t like being in the same residue class! For instance, they predict that the probability of attaining $(1,1)$ is in fact
with similar predictions for the other three pairs (in fact they give a somewhat more precise prediction than this). The magnitude of this bias, being comparable to $\frac{\log\log x}{\log x}$, is significantly stronger than the Chebyshev bias of $O(x^{-1/2+o(1)})$.
One consequence of this prediction is that the prime gaps $p_{n+1} - p_n$ are slightly less likely to be divisible by $3$ than naive random models of the primes would predict. Indeed, if the four options $(1,1)$, $(1,2)$, $(2,1)$, $(2,2)$ all occurred with equal probability $\frac{1}{4}$, then $p_{n+1} - p_n$ should equal $0 \bmod 3$ with probability $\frac{1}{2}$, and $1 \bmod 3$ and $2 \bmod 3$ with probability $\frac{1}{4}$ each (as would be the case when taking the difference of two random numbers drawn from those integers not divisible by three); but the Lemke Oliver-Soundararajan bias predicts that the probability of $p_{n+1} - p_n$ being divisible by three should be slightly lower than $\frac{1}{2}$.
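The bias is easy to see in the data. The following added sketch (with an arbitrary cutoff of $10^7$) tabulates the residues mod $3$ of consecutive primes; the repeated patterns $(1,1)$ and $(2,2)$ visibly fall short of the naive $25\%$ each.

```python
def primes_up_to(N):
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(N**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(N + 1) if sieve[p]]

ps = [p for p in primes_up_to(10**7) if p > 3]     # discard the primes 2 and 3
counts = {}
for p, q in zip(ps, ps[1:]):
    key = (p % 3, q % 3)
    counts[key] = counts.get(key, 0) + 1

total = sum(counts.values())
for key in sorted(counts):
    print(key, round(counts[key] / total, 4))      # observed frequency of each pattern
```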
Below the fold we will give a somewhat informal justification of (a simplified version of) this phenomenon, based on the Lemke Oliver-Soundararajan calculation using the prime tuples conjecture.
I’ve been meaning to return to fluids for some time now, in order to build upon my construction two years ago of a solution to an averaged Navier-Stokes equation that exhibited finite time blowup. (I recently spoke on this work in the recent conference in Princeton in honour of Sergiu Klainerman; my slides for that talk are here.)
One of the biggest deficiencies with my previous result is the fact that the averaged Navier-Stokes equation does not enjoy any good equation for the vorticity $\omega = \nabla \times u$, in contrast to the true Navier-Stokes equations which, when written in vorticity-stream formulation, become
$$\partial_t \omega + (u \cdot \nabla) \omega = (\omega \cdot \nabla) u + \nu \Delta \omega$$
$$u = (-\Delta)^{-1} \nabla \times \omega.$$
(Throughout this post we will be working in three spatial dimensions ${\bf R}^3$.) So one of my main near-term goals in this area is to exhibit an equation resembling Navier-Stokes as much as possible which enjoys a vorticity equation, and for which there is finite time blowup.
Heuristically, this task should be easier for the Euler equations (i.e. the zero viscosity case of Navier-Stokes) than the viscous Navier-Stokes equation, as one expects the viscosity to only make it easier for the solution to stay regular. Indeed, morally speaking, the assertion that finite time blowup solutions of Navier-Stokes exist should be roughly equivalent to the assertion that finite time blowup solutions of Euler exist which are “Type I” in the sense that all Navier-Stokes-critical and Navier-Stokes-subcritical norms of this solution go to infinity (which, as explained in the above slides, heuristically means that the effects of viscosity are negligible when compared against the nonlinear components of the equation). In vorticity-stream formulation, the Euler equations can be written as
As discussed in this previous blog post, a natural generalisation of this system of equations is the system
where is a linear operator on divergence-free vector fields that is “zeroth order” in some sense; ideally it should also be invertible, self-adjoint, and positive definite (in order to have a Hamiltonian that is comparable to the kinetic energy ). (In the previous blog post, it was observed that the surface quasi-geostrophic (SQG) equation could be embedded in a system of the form (1).) The system (1) has many features in common with the Euler equations; for instance vortex lines are transported by the velocity field , and Kelvin’s circulation theorem is still valid.
So far, I have not been able to fully achieve this goal. However, I have the following partial result, stated somewhat informally:
Theorem 1 There is a “zeroth order” linear operator (which, unfortunately, is not invertible, self-adjoint, or positive definite) for which the system (1) exhibits smooth solutions that blowup in finite time.
being rescalings of . This operator is still bounded on all spaces , and so is arguably still a zeroth order operator, though not as convincingly as I would like. Another, less significant, issue with the result is that the solution constructed does not have good spatial decay properties, but this is mostly for convenience and it is likely that the construction can be localised to give solutions that have reasonable decay in space. But the biggest drawback of this theorem is the fact that is not invertible, self-adjoint, or positive definite, so in particular there is no non-negative Hamiltonian for this equation. It may be that some modification of the arguments below can fix these issues, but I have so far been unable to do so. Still, the construction does show that the circulation theorem is insufficient by itself to prevent blowup.
We sketch the proof of the above theorem as follows. We use the barrier method, introducing the time-varying hyperboloid domains
for (expressed in cylindrical coordinates ). We will select initial data to be for some non-negative even bump function supported on , normalised so that
in particular is divergence-free supported in , with vortex lines connecting to . Suppose for contradiction that we have a smooth solution to (1) with this initial data; to simplify the discussion we assume that the solution behaves well at spatial infinity (this can be justified with the choice (2) of vorticity-stream operator, but we will not do so here). Since the domains disconnect from at time , there must exist a time which is the first time where the support of touches the boundary of , with supported in .
From (1) we see that the support of is transported by the velocity field . Thus, at the point of contact of the support of with the boundary of , the inward component of the velocity field cannot exceed the inward velocity of . We will construct the functions so that this is not the case, leading to the desired contradiction. (Geometrically, what is going on here is that the operator is pinching the flow to pass through the narrow cylinder , leading to a singularity by time at the latest.)
First we observe from conservation of circulation, and from the fact that is supported in , that the integrals
are constant in both space and time for . From the choice of initial data we thus have
for all and all . On the other hand, if is of the form (2) with for some bump function that only has -components, then is divergence-free with mean zero, and
where . We choose to be supported in the slab for some large constant , and to equal a function depending only on on the cylinder , normalised so that . If , then passes through this cylinder, and we conclude that
for some coefficients . We will not be able to control these coefficients , but fortunately we only need to understand on the boundary , for which . So, if happens to be supported on an annulus , then vanishes on if is large enough. We then have
on the boundary of .
Let be a function of the form
where is a bump function supported on that equals on . We can perform a dyadic decomposition where
where is a bump function supported on with . If we then set
then one can check that for a function that is divergence-free and mean zero, and supported on the annulus , and
so on (where ) we have
One can manually check that the inward velocity of this vector on exceeds the inward velocity of if is large enough, and the claim follows.
Remark 2 The type of blowup suggested by this construction, where a unit amount of circulation is squeezed into a narrow cylinder, is of “Type II” with respect to the Navier-Stokes scaling, because Navier-Stokes-critical norms such (or at least ) look like they stay bounded during this squeezing procedure (the velocity field is of size about in cylinders of radius and length about ). So even if the various issues with are repaired, it does not seem likely that this construction can be directly adapted to obtain a corresponding blowup for a Navier-Stokes type equation. To get a “Type I” blowup that is consistent with Kelvin’s circulation theorem, it seems that one needs to coil the vortex lines around a loop multiple times in order to get increased circulation in a small space. This seems possible to pull off to me – there don’t appear to be any unavoidable obstructions coming from topology, scaling, or conservation laws – but would require a more complicated construction than the one given above.
In this blog post, I would like to specialise the arguments of Bourgain, Demeter, and Guth from the previous post to the two-dimensional case of the Vinogradov main conjecture, namely
This particular case of the main conjecture has a classical proof using some elementary number theory. Indeed, the left-hand side can be viewed as the number of solutions to the system of equations
with . These two equations can combine (using the algebraic identity applied to ) to imply the further equation
which, when combined with the divisor bound, shows that each is associated to at most $N^{o(1)}$ choices of the remaining variables, excluding diagonal cases when two of the variables collide, and this easily yields Theorem 1. However, the Bourgain-Demeter-Guth argument (which, in the two dimensional case, is essentially contained in a previous paper of Bourgain and Demeter) does not require the divisor bound, and extends for instance to the more general case where the variables range in a $1$-separated set of reals between $0$ and $N$.
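For concreteness, here is an added brute-force count at the critical exponent: the sixth moment counts solutions of $j_1+j_2+j_3 = j_4+j_5+j_6$, $j_1^2+j_2^2+j_3^2 = j_4^2+j_5^2+j_6^2$ with all variables in $\{1,\dots,N\}$, and for small $N$ the count grows like $N^3$ times a slowly growing factor, consistent with the $N^{3+\varepsilon}$ bound of Theorem 1. (The specific system written here is my reading of the critical case; the code is illustrative only.)

```python
from collections import defaultdict

def count_critical(N):
    """Count solutions of j1+j2+j3 = j4+j5+j6 and j1^2+j2^2+j3^2 = j4^2+j5^2+j6^2
    with all six variables in {1,...,N}."""
    buckets = defaultdict(int)
    for a in range(1, N + 1):
        for b in range(1, N + 1):
            for c in range(1, N + 1):
                buckets[(a + b + c, a * a + b * b + c * c)] += 1
    return sum(v * v for v in buckets.values())

for N in (10, 20, 40):
    J = count_critical(N)
    print(N, J, round(J / N**3, 2))   # the ratio grows only very slowly with N
```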
In this special case, the Bourgain-Demeter argument simplifies, as the lower dimensional inductive hypothesis becomes a simple almost orthogonality claim, and the multilinear Kakeya estimate needed is also easy (collapsing to just Fubini’s theorem). Also one can work entirely in the context of the Vinogradov main conjecture, and not turn to the increased generality of decoupling inequalities (though this additional generality is convenient in higher dimensions). As such, I am presenting this special case as an introduction to the Bourgain-Demeter-Guth machinery.
We now give the specialisation of the Bourgain-Demeter argument to Theorem 1. It will suffice to establish the bound
for all , (where we keep fixed and send to infinity), as the bound then follows by combining the above bound with the trivial bound . Accordingly, for any and , we let denote the claim that
as . Clearly, for any fixed , holds for some large , and it will suffice to establish
Proposition 2 Let , and let be such that holds. Then there exists such that holds.
Indeed, this proposition shows that for , the infimum of the for which holds is zero.
We prove the proposition below the fold, using a simplified form of the methods discussed in the previous blog post. To simplify the exposition we will be a bit cavalier with the uncertainty principle, for instance by essentially ignoring the tails of rapidly decreasing functions.
Given any finite collection of elements $(f_i)_{i \in I}$ in some Banach space $X$, the triangle inequality tells us that
$$\Big\| \sum_{i \in I} f_i \Big\|_X \leq \sum_{i \in I} \|f_i\|_X.$$
However, when the $f_i$ all “oscillate in different ways”, one expects to improve substantially upon the triangle inequality. For instance, if $X$ is a Hilbert space and the $f_i$ are mutually orthogonal, we have the Pythagorean theorem
$$\Big\| \sum_{i \in I} f_i \Big\|_X = \Big( \sum_{i \in I} \|f_i\|_X^2 \Big)^{1/2}.$$
For comparison, from the Cauchy-Schwarz inequality one only has the trivial bound
$$\Big\| \sum_{i \in I} f_i \Big\|_X \leq |I|^{1/2} \Big( \sum_{i \in I} \|f_i\|_X^2 \Big)^{1/2}$$
for any finite collection $(f_i)_{i \in I}$ in any Banach space $X$, where $|I|$ denotes the cardinality of $I$. Thus orthogonality in a Hilbert space yields “square root cancellation”, saving a factor of $|I|^{1/2}$ or so over the trivial bound coming from the triangle inequality.
More generally, let us somewhat informally say that a collection $(f_i)_{i \in I}$ exhibits decoupling in $L^p$ if one has the Pythagorean-like inequality
$$\Big\| \sum_{i \in I} f_i \Big\|_{L^p} \ll_\varepsilon |I|^\varepsilon \Big( \sum_{i \in I} \|f_i\|_{L^p}^2 \Big)^{1/2}$$
for any $\varepsilon > 0$, thus one obtains almost the full square root cancellation in the $L^p$ norm. The theory of almost orthogonality can then be viewed as the theory of decoupling in Hilbert spaces such as $L^2$. In $L^p$ spaces for $p < 2$ one usually does not expect this sort of decoupling; for instance, if the $f_i$ are disjointly supported one has
$$\Big\| \sum_{i \in I} f_i \Big\|_{L^p} = \Big( \sum_{i \in I} \|f_i\|_{L^p}^p \Big)^{1/p},$$
and the right-hand side can be much larger than $(\sum_{i \in I} \|f_i\|_{L^p}^2)^{1/2}$ when $p < 2$. At the opposite extreme, one usually does not expect to get decoupling in $L^\infty$, since one could conceivably align the $f_i$ to all attain a maximum magnitude at the same location with the same phase, at which point the triangle inequality in $L^\infty$ becomes sharp.
However, in some cases one can get decoupling for certain $p > 2$. For instance, suppose we are in $L^4$, and that the $f_i$ are bi-orthogonal in the sense that the products $f_i f_j$ for distinct $i, j$ are pairwise orthogonal in $L^2$. Then we have
giving decoupling in $L^4$. (Similarly if each of the $f_i$ is orthogonal to all but $O(1)$ of the other $f_j$.) A similar argument also gives decoupling in $L^6$ when one has tri-orthogonality (with the triple products $f_i f_j f_k$ mostly orthogonal to each other), and so forth. As a slight variant, Khintchine’s inequality also indicates that decoupling should occur for any fixed $2 \leq p < \infty$ if one multiplies each of the $f_i$ by an independent random sign $\epsilon_i \in \{-1,+1\}$.
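The random sign heuristic can be illustrated with a toy computation (added; it takes all the $f_i$ equal, so that the sum collapses to a one-dimensional random walk): the triangle inequality bound of size $N$ drops to roughly $\sqrt{N}$ once independent signs are inserted.

```python
import random, statistics

N, trials = 10**4, 200
sizes = []
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(N))
    sizes.append(abs(s))   # = ||sum_i eps_i f_i|| / ||f|| when all f_i equal a fixed f

print("triangle inequality bound:        ", N)
print("typical size with random signs:   ", round(statistics.mean(sizes), 1))
print("square root of N, for comparison: ", round(N ** 0.5, 1))
```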
In recent years, Bourgain and Demeter have been establishing decoupling theorems in $L^p$ spaces for various key exponents of $p$, in the “restriction theory” setting in which the $f_i$ are Fourier transforms of measures supported on different portions of a given surface or curve; this builds upon the earlier decoupling theorems of Wolff. In a recent paper with Guth, they established the following decoupling theorem for the curve $\gamma \subset {\bf R}^k$ parameterised by the polynomial map
$$t \mapsto (t, t^2, \ldots, t^k).$$
For any ball in , let denote the weight
which should be viewed as a smoothed out version of the indicator function of $B$. In particular, the space $L^p(w_B)$ can be viewed as a smoothed out version of the space $L^p(B)$. For future reference we observe a fundamental self-similarity of the curve $\gamma$: any arc $\{ (t, t^2, \ldots, t^k): t \in I \}$ in this curve, with $I$ a compact interval, is affinely equivalent to the standard arc $\{ (t, t^2, \ldots, t^k): t \in [0,1] \}$.
of a finite Borel measure on the arc , where . Then the exhibit decoupling in for any ball of radius .
Orthogonality gives the $p=2$ case of this theorem. The bi-orthogonality type arguments sketched earlier only give decoupling in $L^p$ for a more limited range of exponents; the point here is that we can now get a much larger value of $p$, namely $p = k(k+1)$. The case $k=2$ of this theorem was previously established by Bourgain and Demeter (who obtained in fact an analogous theorem for any curved hypersurface). The exponent $k(k+1)$ (and the radius of the ball) is best possible, as can be seen by the following basic example. If
where is a bump function adapted to , then standard Fourier-analytic computations show that will be comparable to on a rectangular box of dimensions (and thus volume ) centred at the origin, and exhibit decay away from this box, with comparable to
On the other hand, is comparable to on a ball of radius comparable to centred at the origin, so is , which is just barely consistent with decoupling. This calculation shows that decoupling will fail if is replaced by any larger exponent, and also if the radius of the ball is reduced to be significantly smaller than .
This theorem has the following consequence of importance in analytic number theory:
Corollary 2 (Vinogradov main conjecture) Let $s, k, N \geq 1$ be integers, and let $\varepsilon > 0$. Then
$$\int_{[0,1]^k} \Big| \sum_{j=1}^N e( j x_1 + j^2 x_2 + \cdots + j^k x_k ) \Big|^{2s} \, dx_1 \cdots dx_k \ll_{s,k,\varepsilon} N^{s+\varepsilon} + N^{2s - \frac{k(k+1)}{2} + \varepsilon}.$$
Proof: By the Hölder inequality (and the trivial bound of $N$ for the exponential sum), it suffices to treat the critical case $s = \frac{k(k+1)}{2}$, that is to say to show that
We can rescale this as
As the integrand is periodic along the lattice , this is equivalent to
The left-hand side may be bounded by , where and . Since
the claim now follows from the decoupling theorem and a brief calculation.
Using the Plancherel formula, one may equivalently (when $s$ is an integer) write the Vinogradov main conjecture in terms of solutions to the system of equations
$$j_1^i + \cdots + j_s^i = j_{s+1}^i + \cdots + j_{2s}^i \quad (1 \leq i \leq k)$$
with $j_1, \dots, j_{2s} \in \{1, \dots, N\}$,
but we will not use this formulation here.
A history of the Vinogradov main conjecture may be found in this survey of Wooley; prior to the Bourgain-Demeter-Guth theorem, the conjecture was solved completely for $k \leq 3$, or for higher $k$ with $s$ either below or above certain thresholds, with the bulk of recent progress coming from the efficient congruencing technique of Wooley. It has numerous applications to exponential sums, Waring’s problem, and the zeta function; to give just one application, the main conjecture implies the predicted asymptotic for the number of ways to express a large number as the sum of fifth powers (the previous best result required fifth powers). The Bourgain-Demeter-Guth approach to the Vinogradov main conjecture, based on decoupling, is ostensibly very different from the efficient congruencing technique, which relies heavily on the arithmetic structure of the problem, but it appears (as I have been told from second-hand sources) that the two methods are actually closely related, with the former being a sort of “Archimedean” version of the latter (with the intervals in the decoupling theorem being analogous to congruence classes in the efficient congruencing method); hopefully there will be some future work making this connection more precise. One advantage of the decoupling approach is that it generalises to non-arithmetic settings in which the set $\{1,\dots,N\}$ that the frequencies are drawn from is replaced by some other similarly separated set of real numbers. (A random thought – could this allow the Vinogradov-Korobov bounds on the zeta function to extend to Beurling zeta functions?)
Below the fold we sketch the Bourgain-Demeter-Guth argument proving Theorem 1.
I thank Jean Bourgain and Andrew Granville for helpful discussions.
Let $\lambda$ denote the Liouville function. The prime number theorem is equivalent to the estimate
$$\sum_{n \leq x} \lambda(n) = o(x)$$
as $x \to \infty$, that is to say that $\lambda$ exhibits cancellation on large intervals such as $[1, x]$. This result can be improved to give cancellation on shorter intervals. For instance, using the known zero density estimates for the Riemann zeta function, one can establish that
as if for some fixed ; I believe this result is due to Ramachandra (see also Exercise 21 of this previous blog post), and in fact one could obtain a better error term on the right-hand side that for instance gained an arbitrary power of . On the Riemann hypothesis (or the weaker density hypothesis), it was known that the could be lowered to .
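As an added numerical illustration of cancellation in short intervals (the cutoffs $X = 10^6$ and $H \leq 1000$ are arbitrary; the Matomaki-Radziwill theorem discussed in the next paragraph makes this quantitative for any $H$ tending to infinity), one can average $|\sum_{x < n \leq x+H} \lambda(n)| / H$ over $x \leq X$:

```python
def liouville_table(N):
    """lam[n] = Liouville function, via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))
    for p in range(2, int(N**0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0, 1] + [0] * (N - 1)
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

X = 10**6
lam = liouville_table(2 * X)
for H in (10, 100, 1000):
    # average of |sum over a short interval of length H|, over disjoint intervals
    avg = sum(abs(sum(lam[x + 1:x + H + 1])) for x in range(1, X + 1, H)) / (X / H)
    print(f"H = {H:5d}: average short-interval sum, normalised by H: {avg / H:.3f}")
```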
Early this year, there was a major breakthrough by Matomaki and Radziwill, who (among other things) showed that the asymptotic (1) was in fact valid for any $H = H(x) \leq x$ that went to infinity as $x \to \infty$, thus yielding cancellation on extremely short intervals. This has many further applications; for instance, this estimate, or more precisely its extension to other “non-pretentious” bounded multiplicative functions, was a key ingredient in my recent solution of the Erdös discrepancy problem, as well as in obtaining logarithmically averaged cases of Chowla’s conjecture, such as
It is of interest to twist the above estimates by phases such as the linear phase $n \mapsto e(\alpha n)$. In 1937, Davenport showed that
from which one can see that this is another averaged form of Chowla’s conjecture (stronger than the one I was able to prove with Matomaki and Radziwill, but a consequence of the unaveraged Chowla conjecture). If one inserted such a bound into the machinery I used to solve the Erdös discrepancy problem, it should lead to further averaged cases of Chowla’s conjecture, such as
though I have not fully checked the details of this implication. It should also have a number of new implications for sign patterns of the Liouville function, though we have not explored these in detail yet.
One can write (4) equivalently in the form
uniformly for all $x$-dependent phases $\alpha(x)$. In contrast, (3) is equivalent to the subcase of (6) when the linear phase coefficient $\alpha$ is independent of $x$. This dependency of $\alpha$ on $x$ seems to necessitate some highly nontrivial additive combinatorial analysis of the function $x \mapsto \alpha(x)$ in order to establish (4) when $H$ is small. To date, this analysis has proven to be elusive, but I would like to record what one can do with more classical methods like Vaughan’s identity, namely:
The values of in this range are far too large to yield implications such as new cases of the Chowla conjecture, but it appears that the exponent is the limit of “classical” methods (at least as far as I was able to apply them), in the sense that one does not do any combinatorial analysis on the function , nor does one use modern equidistribution results on “Type III sums” that require deep estimates on Kloosterman-type sums. The latter may shave a little bit off of the exponent, but I don’t see how one would ever hope to go below without doing some non-trivial combinatorics on the function . UPDATE: I have come across this paper of Zhan which uses mean-value theorems for L-functions to lower the exponent to .
Let me now sketch the proof of the proposition, omitting many of the technical details. We first remark that known estimates on sums of the Liouville function (or similar functions such as the von Mangoldt function) in short arithmetic progressions, based on zero-density estimates for Dirichlet -functions, can handle the “major arc” case of (4) (or (6)) where is restricted to be of the form for (the exponent here being of the same numerology as the exponent in the classical result of Ramachandra, tied to the best zero density estimates currently available); for instance a modification of the arguments in this recent paper of Koukoulopoulos would suffice. Thus we can restrict attention to “minor arc” values of (or , using the interpretation of (6)).
Next, one breaks up (or the closely related Möbius function) into Dirichlet convolutions using one of the standard identities (e.g. Vaughan’s identity or Heath-Brown’s identity), as discussed for instance in this previous post (which is focused more on the von Mangoldt function, but analogous identities exist for the Liouville and Möbius functions). The exact choice of identity is not terribly important, but the upshot is that can be decomposed into terms, each of which is either of the “Type I” form
for some coefficients that are roughly of logarithmic size on the average, and scales with and , or else of the “Type II” form
for some coefficients that are roughly of logarithmic size on the average, and scales with and . As discussed in the previous post, the exponent is a natural barrier in these identities if one is unwilling to also consider “Type III” type terms which are roughly of the shape of the third divisor function .
A Type I sum makes a contribution to that can be bounded (via Cauchy-Schwarz) in terms of an expression such as
The inner sum exhibits a lot of cancellation unless is within of an integer. (Here, “a lot” should be loosely interpreted as “gaining many powers of over the trivial bound”.) Since is significantly larger than , standard Vinogradov-type manipulations (see e.g. Lemma 13 of these previous notes) show that this bad case occurs for many only when is “major arc”, which is the case we have specifically excluded. This lets us dispose of the Type I contributions.
A Type II sum makes a contribution to roughly of the form
We can break this up into a number of sums roughly of the form
for ; note that the range is non-trivial because is much larger than . Applying the usual bilinear sum Cauchy-Schwarz methods (e.g. Theorem 14 of these notes) we conclude that there is a lot of cancellation unless one has for some . But with , is well below the threshold for the definition of major arc, so we can exclude this case and obtain the required cancellation.