Kaisa Matomäki, Maksym Radziwill, Xuancheng Shao, Joni Teräväinen, and myself have just uploaded to the arXiv our preprint “Singmaster’s conjecture in the interior of Pascal’s triangle“. This paper leverages the theory of exponential sums over primes to make progress on a well-known conjecture of Singmaster, which asserts that any natural number larger than 1 appears at most a bounded number of times in Pascal’s triangle. That is to say, for any integer
, there are at most
solutions to the equation
Our main result settles this conjecture in the “interior” region of the triangle:
Theorem 1 (Singmaster’s conjecture in the interior of the triangle) If and
is sufficiently large depending on
, there are at most two solutions to (1) in the region
and hence at most four in the region
Also, there is at most one solution in the region
To verify Singmaster’s conjecture in full, it thus suffices in view of this result to verify the conjecture in the boundary region
(or equivalently ). The upper bound of two here for the number of solutions in the region (2) is best possible, due to the infinite family of solutions to the equation
coming from
The appearance of the quantity in Theorem 1 may be familiar to readers who are acquainted with Vinogradov’s bounds on exponential sums, which end up being the main new ingredient in our arguments. In principle this threshold could be lowered if we had stronger bounds on exponential sums.
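As a concrete, purely elementary illustration of the quantity being bounded here, the following short script (ours, not from the paper; the function name and cutoffs are arbitrary) counts by brute force how many times a given integer appears in Pascal's triangle. For instance, 3003 is the only number currently known to appear eight or more times.

```python
from math import comb

def pascal_multiplicity(t):
    """Brute-force count of solutions to C(n, m) = t with 1 <= m <= n - 1.
    Since C(n, m) >= n in that range, only rows n <= t can contribute."""
    count = 0
    for n in range(2, t + 1):
        for m in range(1, n // 2 + 1):
            c = comb(n, m)
            if c > t:
                break  # C(n, m) increases in m for m <= n/2
            if c == t:
                count += 1 if 2 * m == n else 2  # also count the mirror entry C(n, n - m)
    return count

print(pascal_multiplicity(3003))  # 3003 = C(3003,1) = C(78,2) = C(15,5) = C(14,6): prints 8
print(pascal_multiplicity(120))   # 120 = C(120,1) = C(16,2) = C(10,3): prints 6
```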
To try to control solutions to (1) we use a combination of “Archimedean” and “non-Archimedean” approaches. In the “Archimedean” approach (following earlier work of Kane on this problem) we view primarily as real numbers rather than integers, and express (1) in terms of the Gamma function as
Proposition 2 Let, and suppose
is sufficiently large depending on
. If
is a solution to (1) in the left half
of Pascal’s triangle, then there is at most one other solution
to this equation in the left half with
Again, the example of (4) shows that a cluster of two solutions is certainly possible; the convexity argument only kicks in once one has a cluster of three or more solutions.
To finish the proof of Theorem 1, one has to show that any two solutions to (1) in the region of interest must be close enough for the above proposition to apply. Here we switch to the “non-Archimedean” approach, in which we look at the
-adic valuations
of the binomial coefficients, defined as the number of times a prime
divides
. From the fundamental theorem of arithmetic, a collision
A key idea in our approach is to view this condition (6) statistically, for instance by viewing as a prime drawn randomly from an interval such as
for some suitably chosen scale parameter
, so that the two sides of (6) now become random variables. It then becomes advantageous to compare correlations between these two random variables and some additional test random variable. For instance, if
and
are far apart from each other, then one would expect the left-hand side of (6) to have a higher correlation with the fractional part
, since this term shows up in the summation on the left-hand side but not the right. Similarly if
and
are far apart from each other (although there are some annoying cases one has to treat separately when there is some “unexpected commensurability”, for instance if
is a rational multiple of
where the rational has bounded numerator and denominator). In order to execute this strategy, it turns out (after some standard Fourier expansion) that one needs to get good control on exponential sums such as
A modification of the arguments also gives similar results for the equation
where
Theorem 3 If and
is sufficiently large depending on
, there are at most two solutions to (7) in the region
Again the upper bound of two is best possible, thanks to identities such as
Kaisa Matomaki, Maksym Radziwill, and I have uploaded to the arXiv our paper “Correlations of the von Mangoldt and higher divisor functions I. Long shift ranges“, submitted to Proceedings of the London Mathematical Society. This paper is concerned with the estimation of correlations such as
for medium-sized and large
, where
is the von Mangoldt function; we also consider variants of this sum in which one of the von Mangoldt functions is replaced with a (higher order) divisor function, but for sake of discussion let us focus just on the sum (1). Understanding this sum is very closely related to the problem of finding pairs of primes that differ by
; for instance, if one could establish a lower bound
then this would easily imply the twin prime conjecture.
The (first) Hardy-Littlewood conjecture asserts an asymptotic
as for any fixed positive
, where the singular series
is an arithmetic factor arising from the irregularity of distribution of
at small moduli, defined explicitly by
when is even, and
when
is odd, where
is (half of) the twin prime constant. See for instance this previous blog post for a heuristic explanation of this conjecture. From the previous discussion we see that (2) for would imply the twin prime conjecture. Sieve theoretic methods are only able to provide an upper bound of the form
.
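For the reader's convenience, the standard shape of the Hardy-Littlewood prediction being discussed (whose normalisation may differ slightly from the one in the paper) is
$$\sum_{n \le X} \Lambda(n) \Lambda(n+h) = \mathfrak{S}(h)\, X + o(X) \qquad (X \to \infty)$$
for any fixed positive even $h$, with singular series
$$\mathfrak{S}(h) = 2\Pi_2 \prod_{p \mid h,\ p > 2} \frac{p-1}{p-2}, \qquad \Pi_2 = \prod_{p > 2}\Big(1 - \frac{1}{(p-1)^2}\Big) \approx 0.6601,$$
and $\mathfrak{S}(h) = 0$ for odd $h$; the sieve-theoretic upper bound alluded to above is of the shape $O(\mathfrak{S}(h) X)$.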
Needless to say, apart from the trivial case of odd , there are no values of
for which the Hardy-Littlewood conjecture is known. However there are some results that say that this conjecture holds “on the average”: in particular, if
is a quantity depending on
that is somewhat large, there are results that show that (2) holds for most (i.e. for
) of the
between
and
. Ideally one would like to get
as small as possible, in particular one can view the full Hardy-Littlewood conjecture as the endpoint case when
is bounded.
The first results in this direction were by van der Corput and by Lavrik, who established such a result with (with a subsequent refinement by Balog); Wolke lowered
to
, and Mikawa lowered
further to
. The main result of this paper is a further lowering of
to
. In fact (as in the preceding works) we get a better error term than
, namely an error of the shape
for any
.
Our arguments initially proceed along standard lines. One can use the Hardy-Littlewood circle method to express the correlation in (2) as an integral involving exponential sums . The contribution of “major arc”
is known by a standard computation to recover the main term
plus acceptable errors, so it is a matter of controlling the “minor arcs”. After averaging in
and using the Plancherel identity, one is basically faced with establishing a bound of the form
for any “minor arc” . If
is somewhat close to a low height rational
(specifically, if it is within
of such a rational with
), then this type of estimate is roughly of comparable strength (by another application of Plancherel) to the best available prime number theorem in short intervals on the average, namely that the prime number theorem holds for most intervals of the form
, and we can handle this case using standard mean value theorems for Dirichlet series. So we can restrict attention to the “strongly minor arc” case where
is far from such rationals.
The next step (following some ideas we found in a paper of Zhan) is to rewrite this estimate not in terms of the exponential sums , but rather in terms of the Dirichlet polynomial
. After a certain amount of computation (including some oscillatory integral estimates arising from stationary phase), one is eventually reduced to the task of establishing an estimate of the form
for any (with
sufficiently large depending on
).
The next step, which is again standard, is the use of the Heath-Brown identity (as discussed for instance in this previous blog post) to split up into a number of components that have a Dirichlet convolution structure. Because the exponent
we are shooting for is less than
, we end up with five types of components that arise, which we call “Type
“, “Type
“, “Type
“, “Type
“, and “Type II”. The “Type II” sums are Dirichlet convolutions involving a factor supported on a range
and are quite easy to deal with; the “Type
” terms are Dirichlet convolutions that resemble (non-degenerate portions of) the
divisor function, formed from convolving together
portions of
. The “Type
” and “Type
” terms can be estimated satisfactorily by standard moment estimates for Dirichlet polynomials; this already recovers the result of Mikawa (and our argument is in fact slightly more elementary in that no Kloosterman sum estimates are required). It is the treatment of the “Type
” and “Type
” sums that require some new analysis, with the Type
terms turning out to be the most delicate. After using an existing moment estimate of Jutila for Dirichlet L-functions, matters reduce to obtaining a family of estimates, a typical one of which (relating to the more difficult Type
sums) is of the form
for “typical” ordinates of size
, where
is the Dirichlet polynomial
(a fragment of the Riemann zeta function). The precise definition of “typical” is a little technical (because of the complicated nature of Jutila’s estimate) and will not be detailed here. Such a claim would follow easily from the Lindelöf hypothesis (which would imply that
) but of course we would like to have an unconditional result.
At this point, having exhausted all the Dirichlet polynomial estimates that are usefully available, we return to “physical space”. Using some further Fourier-analytic and oscillatory integral computations, we can estimate the left-hand side of (3) by an expression that is roughly of the shape
The phase can be Taylor expanded as the sum of
and a lower order term
, plus negligible errors. If we could discard the lower order term then we would get quite a good bound using the exponential sum estimates of Robert and Sargos, which control averages of exponential sums with purely monomial phases, with the averaging allowing us to exploit the hypothesis that
is “typical”. Figuring out how to get rid of this lower order term caused some inefficiency in our arguments; the best we could do (after much experimentation) was to use Fourier analysis to shorten the sums, estimate a one-parameter average exponential sum with a binomial phase by a two-parameter average with a monomial phase, and then use the van der Corput
process followed by the estimates of Robert and Sargos. This rather complicated procedure works up to
it may be possible that some alternate way to proceed here could improve the exponent somewhat.
In a sequel to this paper, we will use a somewhat different method to reduce to a much smaller value of
, but only if we replace the correlations
by either
or
, and also we now only save a
in the error term rather than
.
We have seen in previous notes that the operation of forming a Dirichlet series
or twisted Dirichlet series
is an incredibly useful tool for questions in multiplicative number theory. Such series can be viewed as a multiplicative Fourier transform, since the functions and
are multiplicative characters.
Similarly, it turns out that the operation of forming an additive Fourier series
where lies on the (additive) unit circle
and
is the standard additive character, is an incredibly useful tool for additive number theory, particularly when studying additive problems involving three or more variables taking values in sets such as the primes; the deployment of this tool is generally known as the Hardy-Littlewood circle method. (In the analytic number theory literature, the minus sign in the phase
is traditionally omitted, and what is denoted by
here would be referred to instead by
,
or just
.) We list some of the most classical problems in this area:
- (Even Goldbach conjecture) Is it true that every even natural number
greater than two can be expressed as the sum
of two primes?
- (Odd Goldbach conjecture) Is it true that every odd natural number
greater than five can be expressed as the sum
of three primes?
- (Waring problem) For each natural number
, what is the least natural number
such that every natural number
can be expressed as the sum of
or fewer
powers?
- (Asymptotic Waring problem) For each natural number
, what is the least natural number
such that every sufficiently large natural number
can be expressed as the sum of
or fewer
powers?
- (Partition function problem) For any natural number
, let
denote the number of representations of
of the form
where
and
are natural numbers. What is the asymptotic behaviour of
as
?
The Waring problem and its asymptotic version will not be discussed further here, save to note that the Vinogradov mean value theorem (Theorem 13 from Notes 5) and its variants are particularly useful for getting good bounds on ; see for instance the ICM article of Wooley for recent progress on these problems. Similarly, the partition function problem was the original motivation of Hardy and Littlewood in introducing the circle method, but we will not discuss it further here; see e.g. Chapter 20 of Iwaniec-Kowalski for a treatment.
Instead, we will focus our attention on the odd Goldbach conjecture as our model problem. (The even Goldbach conjecture, which involves only two variables instead of three, is unfortunately not amenable to a circle method approach for a variety of reasons, unless the statement is replaced with something weaker, such as an averaged statement; see this previous blog post for further discussion. On the other hand, the methods here can obtain weaker versions of the even Goldbach conjecture, such as showing that “almost all” even numbers are the sum of two primes; see Exercise 34 below.) In particular, we will establish the following celebrated theorem of Vinogradov:
Theorem 1 (Vinogradov’s theorem) Every sufficiently large odd number
is expressible as the sum of three primes.
Recently, the restriction that be sufficiently large was replaced by Helfgott with
, thus establishing the odd Goldbach conjecture in full. This argument followed the same basic approach as Vinogradov (based on the circle method), but with various estimates replaced by “log-free” versions (analogous to the log-free zero-density theorems in Notes 7), combined with careful numerical optimisation of constants and also some numerical work on the even Goldbach problem and on the generalised Riemann hypothesis. We refer the reader to Helfgott’s text for details.
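As a toy numerical sanity check of the statement being discussed (entirely unrelated to the actual proofs of Vinogradov or Helfgott; the sieve limit below is an arbitrary choice of ours), one can verify the three-primes property directly for small odd numbers:

```python
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [q for q in range(2, n + 1) if sieve[q]]

def is_sum_of_three_primes(x, primes, prime_set):
    # representations are plentiful, so this generator short-circuits quickly
    return any((x - p - q) in prime_set
               for p in primes if p <= x - 4
               for q in primes if q <= x - p - 2)

LIMIT = 5_000
ps = primes_up_to(LIMIT)
pset = set(ps)
bad = [x for x in range(7, LIMIT + 1, 2) if not is_sum_of_three_primes(x, ps, pset)]
print(bad)  # expected: [] (every odd 7 <= x <= 5000 is a sum of three primes)
```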
We will in fact show the more precise statement:
Theorem 2 (Quantitative Vinogradov theorem) Let
be a natural number. Then
The implied constants are ineffective.
We dropped the hypothesis that is odd in Theorem 2, but note that
vanishes when
is even. For odd
, we have
Unfortunately, due to the ineffectivity of the constants in Theorem 2 (a consequence of the reliance on the Siegel-Walfisz theorem in the proof of that theorem), one cannot quantify explicitly what “sufficiently large” means in Theorem 1 directly from Theorem 2. However, there is a modification of this theorem which gives effective bounds; see Exercise 32 below.
Exercise 4 Obtain a heuristic derivation of the main term
using the modified Cramér model (Section 1 of Supplement 4).
To prove Theorem 2, we consider the more general problem of estimating sums of the form
for various integers and functions
, which we will take to be finitely supported to avoid issues of convergence.
Suppose that are supported on
; for simplicity, let us first assume the pointwise bound
for all
. (This simple case will not cover the case in Theorem 2, when
are truncated versions of the von Mangoldt function
, but will serve as a warmup to that case.) Then we have the trivial upper bound
A basic observation is that this upper bound is attainable if all “pretend” to behave like the same additive character
for some
. For instance, if
, then we have
when
, and then it is not difficult to show that
as .
The key to the success of the circle method lies in the converse of the above statement: the only way that the trivial upper bound (2) comes close to being sharp is when all correlate with the same character
, or in other words
are simultaneously large. This converse is largely captured by the following two identities:
Exercise 5 Let
be finitely supported functions. Then for any natural number
, show that
and
The traditional approach to using the circle method to compute sums such as proceeds by invoking (3) to express this sum as an integral over the unit circle, then dividing the unit circle into “major arcs” where
are large but computable with high precision, and “minor arcs” where one has estimates to ensure that
are small in both
and
senses. For functions
of number-theoretic significance, such as truncated von Mangoldt functions, the “major arcs” typically consist of those
that are close to a rational number
with
not too large, and the “minor arcs” consist of the remaining portions of the circle. One then obtains lower bounds on the contributions of the major arcs, and upper bounds on the contribution of the minor arcs, in order to get good lower bounds on
.
This traditional approach is covered in many places, such as this text of Vaughan. We will emphasise in this set of notes a slightly different perspective on the circle method, coming from recent developments in additive combinatorics; this approach does not quite give the sharpest quantitative estimates, but it allows for easier generalisation to more combinatorial contexts, for instance when replacing the primes by dense subsets of the primes, or replacing the equation with some other equation or system of equations.
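For orientation, the basic identities underlying this discussion (essentially the content of Exercise 5, stated here in the sign convention $\hat f(\theta) := \sum_n f(n) e(-n\theta)$ with $e(\theta) := e^{2\pi i \theta}$; the normalisation in the notes themselves may differ slightly) are
$$\sum_{n_1 + n_2 + n_3 = x} f(n_1)\, g(n_2)\, h(n_3) = \int_{\mathbb{R}/\mathbb{Z}} \hat f(\theta)\, \hat g(\theta)\, \hat h(\theta)\, e(x\theta)\, d\theta$$
and the Plancherel identity $\sum_n f(n) \overline{g(n)} = \int_{\mathbb{R}/\mathbb{Z}} \hat f(\theta) \overline{\hat g(\theta)}\, d\theta$, both valid for finitely supported $f, g, h$.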
From Exercise 5 and Hölder’s inequality, we immediately obtain
Corollary 6 Let
be finitely supported functions. Then for any natural number
, we have
Similarly for permutations of the
.
In the case when are supported on
and bounded by
, this corollary tells us that we have
is
whenever one has
uniformly in
, and similarly for permutations of
. From this and the triangle inequality, we obtain the following conclusion: if
is supported on
and bounded by
, and
is Fourier-approximated by another function
supported on
and bounded by
in the sense that
Thus, one possible strategy for estimating the sum is to effectively replace (or “model”)
by a simpler function
which Fourier-approximates
in the sense that the exponential sums
agree up to error
. For instance:
Exercise 7 Let
be a natural number, and let
be a random subset of
, chosen so that each
has an independent probability of
of lying in
.
- (i) If
and
, show that with probability
as
, one has
uniformly in
. (Hint: for any fixed
, this can be accomplished with quite a good probability (e.g.
) using a concentration of measure inequality, such as Hoeffding’s inequality. To obtain the uniformity in
, round
to the nearest multiple of (say)
and apply the union bound).
- (ii) Show that with probability
, one has
representations of the form
with
(with
treated as an ordered triple, rather than an unordered one).
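A toy Monte Carlo illustration of this kind of random model is sketched below; the inclusion probability $1/\log n$ and the comparison point $x^2/\log^3 x$ are assumptions made purely for illustration, and are not claimed to match the exact normalisations in Exercise 7.

```python
import math
import random

def random_cramer_set(N, seed=0):
    """Toy Cramer-type random set: each n in [3, N] is included independently
    with probability 1/log n (an assumed density mimicking the primes)."""
    rng = random.Random(seed)
    return {n for n in range(3, N + 1) if rng.random() < 1.0 / math.log(n)}

def ordered_triple_count(A, x):
    """Number of ordered triples (a, b, c) in A^3 with a + b + c = x."""
    return sum(1 for a in A for b in A if (x - a - b) in A)

N = 20_000
A = random_cramer_set(N)
x = N - 1
print(ordered_triple_count(A, x))        # observed count of representations
print(round(x ** 2 / math.log(x) ** 3))  # crude heuristic of the same order of magnitude
```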
In the case when is something like the truncated von Mangoldt function
, the quantity
is of size
rather than
. This costs us a logarithmic factor in the above analysis; however, we can still conclude that we have the approximation (4) whenever
is another sequence with
such that one has the improved Fourier approximation
uniformly in . (Later on we will obtain a “log-free” version of this implication in which one does not need to gain a factor of
in the error term.)
This suggests a strategy for proving Vinogradov’s theorem: find an approximant to some suitable truncation
of the von Mangoldt function (e.g.
or
) which obeys the Fourier approximation property (5), and such that the expression
is easily computable. It turns out that there are a number of good options for such an approximant
. One of the quickest ways to obtain such an approximation (which is used in Chapter 19 of Iwaniec and Kowalski) is to start with the standard identity
, that is to say
and obtain an approximation by truncating to be less than some threshold
(which, in practice, would be a small power of
):
Thus, for instance, if , the approximant
would be taken to be
One could also use the slightly smoother approximation
in which case we would take
The function is somewhat similar to the continuous Selberg sieve weights studied in Notes 4, with the main difference being that we did not square the divisor sum as we will not need to take
to be non-negative. As long as
is not too large, one can use some sieve-like computations to compute expressions like
quite accurately. The approximation (5) can be justified by using a nice estimate of Davenport that exemplifies the Mobius pseudorandomness heuristic from Supplement 4:
Theorem 8 (Davenport’s estimate) For any
and
, we have
uniformly for all
. The implied constants are ineffective.
This estimate will be proven by splitting into two cases. In the “major arc” case when is close to a rational
with
small (of size
or so), this estimate will be a consequence of the Siegel-Walfisz theorem ( from Notes 2); it is the application of this theorem that is responsible for the ineffective constants. In the remaining “minor arc” case, one proceeds by using a combinatorial identity (such as Vaughan’s identity) to express the sum
in terms of bilinear sums of the form
, and using the Cauchy-Schwarz inequality and the minor arc nature of
to obtain a gain in this case. This will all be done below the fold. We will also use (a rigorous version of) the approximation (6) (or (7)) to establish Vinogradov’s theorem.
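One common form of Vaughan's identity, recorded here for the reader's convenience (stated for the von Mangoldt function; the variant used to decompose the Möbius function in the proof of Theorem 8 is analogous), is that for any parameters $U, V \ge 1$ and any $n > V$,
$$\Lambda(n) = \sum_{\substack{bd = n \\ b \le U}} \mu(b) \log d \;-\; \sum_{\substack{bcd = n \\ b \le U,\ c \le V}} \mu(b)\, \Lambda(c) \;+\; \sum_{\substack{bcd = n \\ b > U,\ c > V}} \mu(b)\, \Lambda(c),$$
with the first two terms producing the “Type I” contributions and the last producing the “Type II” (bilinear) contribution.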
A somewhat different looking approximation for the von Mangoldt function that also turns out to be quite useful is
for some that is not too large compared to
. The methods used to establish Theorem 8 can also establish a Fourier approximation that makes (8) precise, and which can yield an alternate proof of Vinogradov’s theorem; this will be done below the fold.
The approximation (8) can be written in a way that makes it more similar to (7):
Exercise 9 Show that the right-hand side of (8) can be rewritten as
where
Then, show the inequalities
and conclude that
(Hint: for the latter estimate, use Theorem 27 of Notes 1.)
The coefficients in the above exercise are quite similar to optimised Selberg sieve coefficients (see Section 2 of Notes 4).
Another approximation to , related to the modified Cramér random model (see Model 10 of Supplement 4) is
where and
is a slowly growing function of
(e.g.
); a closely related approximation is
for as above and
coprime to
. These approximations (closely related to a device known as the “
-trick”) are not as quantitatively accurate as the previous approximations, but can still suffice to establish Vinogradov’s theorem, and also to count many other linear patterns in the primes or subsets of the primes (particularly if one injects some additional tools from additive combinatorics, and specifically the inverse conjecture for the Gowers uniformity norms); see this paper of Ben Green and myself for more discussion (and this more recent paper of Shao for an analysis of this approach in the context of Vinogradov-type theorems). The following exercise expresses the approximation (9) in a form similar to the previous approximation (8):
Exercise 10 With
as above, show that
for all natural numbers
.
We return to the study of the Riemann zeta function , focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for
, which in turn lead to improved estimates for the error term in the prime number theorem.
In equation (21) of Notes 2 we obtained the somewhat crude estimates
for any and
with
and
. Setting
, we obtained the crude estimate
in this region. In particular, if and
then we had
. Using the functional equation and the Hadamard three lines lemma, we can improve this to
; see Supplement 3.
Now we seek better upper bounds on . We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 34 of Supplement 3:
Proposition 1 Let
with
and
. Then
where
.
Proof: We fix a smooth function with
for
and
for
, and allow implied constants to depend on
. Let
with
. From Exercise 34 of Supplement 3, we have
for some sufficiently large absolute constant . By dyadic decomposition, we thus have
We can absorb the first term in the second using the case of the supremum. Writing
, where
it thus suffices to show that
for each . But from the fundamental theorem of calculus, the left-hand side can be written as
and the claim then follows from the triangle inequality and a routine calculation.
We are thus interested in getting good bounds on the sum . More generally, we consider normalised exponential sums of the form
where is an interval of length at most
for some
, and
is a smooth function. We will assume smoothness estimates of the form
for some , all
, and all
, where
is the
-fold derivative of
; in the case
,
of interest for the Riemann zeta function, we easily verify that these estimates hold with
. (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative
of
directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to
.
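To orient the reader, in the Riemann zeta case the summand is $n^{-it} = e\big(-\tfrac{t}{2\pi}\log n\big)$ (with the convention $e(\theta) = e^{2\pi i\theta}$), so the phase is $F(x) = -\tfrac{t}{2\pi}\log x$ up to an irrelevant constant, and for $j \ge 1$ one computes
$$F^{(j)}(x) = -\frac{t}{2\pi} \cdot \frac{(-1)^{j-1}(j-1)!}{x^j}, \qquad |F^{(j)}(x)| \asymp_j \frac{|t|}{x^j} \quad \text{for } x \asymp N,$$
which is how hypotheses of the shape (3) are verified with the parameter there comparable to $|t|$.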
The trivial bound for (2) is
and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of for (2) for any
if
; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form
when
and
depends on
. We present two such results here, which perform well for small and large values of
respectively:
Theorem 2 Let
, let
be an interval of length at most
, and let
be a smooth function obeying (3) for all
and
.
The factor of can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of
. The estimate (6) is superior to (5) when
for
large, since (after optimising in
) (5) gives a gain of the form
over the trivial bound, while (6) gives
. We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on
, and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.
We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating like a polynomial of degree roughly
; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about
times, which is why the gain is only exponentially small in
. The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of
– by using Hölder’s inequality in combination with the fact that the discrete curve
becomes roughly equidistributed in the box
after taking the sumset of about
copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “
-adic” version of this equidistribution to reduce the claim at a given scale
to a smaller scale
with
, and then proceeding by induction.
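The squaring step alluded to here is the classical Weyl-van der Corput inequality (the "A-process"), one common form of which (the exact weights vary in the literature) reads: for any finite interval $I$ of integers, any $1 \le H \le |I|$, and any real-valued $f$,
$$\Big| \sum_{n \in I} e(f(n)) \Big|^2 \ll \frac{|I|}{H} \sum_{0 \le h < H} \Big| \sum_{n:\ n,\, n+h \in I} e\big(f(n+h) - f(n)\big) \Big|.$$
Each application replaces the phase $f$ by the differenced phase $f(\cdot+h) - f(\cdot)$, lowering the effective "degree" by one at the cost of the averaging in $h$; iterating this roughly $k$ times is what produces the exponentially-small-in-$k$ gain in the van der Corput estimate.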
One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:
Exercise 3 (Subconvexity bound)
- (i) Show that
for all
. (Hint: use the
case of the van der Corput estimate.)
- (ii) For any
, show that
as
(the decay rate in the
is allowed to depend on
).
Exercise 4 Let
be such that
, and let
.
- (i) (Littlewood bound) Use the van der Corput estimate to show that
whenever
.
- (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that
whenever
.
As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region , which in turn leads to the prime number theorem with error term
for . If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region
(which is only slightly wider than the classical zero-free region) and an error term
in the prime number theorem.
Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let
be a non-principal character of modulus
.
- (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that
whenever
and
(Hint: use the Vinogradov estimate and a change of variables to control
for various intervals
of length at most
and residue classes
, in the regime
(say). For
, do not try to capture any cancellation and just use the triangle inequality instead.)
- (ii) Obtain a zero-free region
for
, for some (effective) absolute constant
.
- (iii) Obtain the prime number theorem in arithmetic progressions with error term
whenever
,
,
is primitive, and
depends (ineffectively) on
.
As in all previous posts in this series, we adopt the following asymptotic notation: is a parameter going off to infinity, and all quantities may depend on
unless explicitly declared to be “fixed”. The asymptotic notation
is then defined relative to this parameter. A quantity
is said to be of polynomial size if one has
, and bounded if
. We also write
for
, and
for
.
The purpose of this (rather technical) post is both to roll over the polymath8 research thread from this previous post, and also to record the details of the latest improvement to the Type I estimates (based on exploiting additional averaging and using Deligne’s proof of the Weil conjectures) which lead to a slight improvement in the numerology.
In order to obtain this new Type I estimate, we need to strengthen the previously used properties of “dense divisibility” or “double dense divisibility” as follows.
Definition 1 (Multiple dense divisibility) Let
. For each natural number
, we define a notion of
-tuply
-dense divisibility recursively as follows:
- Every natural number
is
-tuply
-densely divisible.
- If
and
is a natural number, we say that
is
-tuply
-densely divisible if, whenever
are natural numbers with
, and
, one can find a factorisation
with
such that
is
-tuply
-densely divisible and
is
-tuply
-densely divisible.
We let
denote the set of
-tuply
-densely divisible numbers. We abbreviate “
-tuply densely divisible” as “densely divisible”, “
-tuply densely divisible” as “doubly densely divisible”, and so forth; we also abbreviate
as
.
Given any finitely supported sequence and any primitive residue class
, we define the discrepancy
We now recall the key concept of a coefficient sequence, with some slight tweaks in the definitions that are technically convenient for this post.
Definition 2 A coefficient sequence is a finitely supported sequence
that obeys the bounds
for all
, where
is the divisor function.
- (i) A coefficient sequence
is said to be located at scale
for some
if it is supported on an interval of the form
for some
.
- (ii) A coefficient sequence
located at scale
for some
is said to obey the Siegel-Walfisz theorem if one has
for any
, any fixed
, and any primitive residue class
.
- (iii) A coefficient sequence
is said to be smooth at scale
for some
if it takes the form
for some smooth function
supported on an interval of size
and obeying the derivative bounds
for all fixed
(note that the implied constant in the
notation may depend on
).
Note that we allow sequences to be smooth at scale without being located at scale
; for instance if one arbitrarily translates a sequence that is both smooth and located at scale
, it will remain smooth at this scale but may not necessarily be located at this scale any more. Note also that we allow the smoothness scale
of a coefficient sequence to be less than one. This is to allow for the following convenient rescaling property: if
is smooth at scale
,
, and
is an integer, then
is smooth at scale
, even if
is less than one.
Now we adapt the Type I estimate to the -tuply densely divisible setting.
Definition 3 (Type I estimates) Let
,
, and
be fixed quantities, and let
be a fixed natural number. We let
be an arbitrary bounded subset of
, let
, and let
be a primitive congruence class. We say that
holds if, whenever
are quantities with
for some fixed
, and
are coefficient sequences located at scales
respectively, with
obeying a Siegel-Walfisz theorem, we have
for any fixed
. Here, as in previous posts,
denotes the square-free natural numbers whose prime factors lie in
.
The main theorem of this post is then
Theorem 4 (Improved Type I estimate) We have
whenever
and
In practice, the first condition here is dominant. Except for weakening double dense divisibility to quadruple dense divisibility, this improves upon the previous Type I estimate that was established under the stricter hypothesis
As in previous posts, Type I estimates (when combined with existing Type II and Type III estimates) lead to distribution results of Motohashi-Pintz-Zhang type. For any fixed and
, we let
denote the assertion that
for any fixed , any bounded
, and any primitive
, where
is the von Mangoldt function.
Proof: Setting sufficiently close to
, we see from the above theorem that
holds whenever
and
The second condition is implied by the first and can be deleted.
From this previous post we know that (which we define analogously to
from previous sections) holds whenever
while holds with
sufficiently close to
whenever
Again, these conditions are implied by (8). The claim then follows from the Heath-Brown identity and dyadic decomposition as in this previous post.
As before, we let denote the claim that given any admissible
-tuple
, there are infinitely many translates of
that contain at least two primes.
This follows from the Pintz sieve, as discussed below the fold. Combining this with the best known prime tuples, we obtain that there are infinitely many prime gaps of size at most , improving slightly over the previous record of
.
[Note: the content of this post is standard number theoretic material that can be found in many textbooks (I am relying principally here on Iwaniec and Kowalski); I am not claiming any new progress on any version of the Riemann hypothesis here, but am simply arranging existing facts together.]
The Riemann hypothesis is arguably the most important and famous unsolved problem in number theory. It is usually phrased in terms of the Riemann zeta function , defined by
for and extended meromorphically to other values of
, and asserts that the only zeroes of
in the critical strip
lie on the critical line
.
One of the main reasons that the Riemann hypothesis is so important to number theory is that the zeroes of the zeta function in the critical strip control the distribution of the primes. To see the connection, let us perform the following formal manipulations (ignoring for now the important analytic issues of convergence of series, interchanging sums, branches of the logarithm, etc., in order to focus on the intuition). The starting point is the fundamental theorem of arithmetic, which asserts that every natural number has a unique factorisation
into primes. Taking logarithms, we obtain the identity
for any natural number , where
is the von Mangoldt function, thus
when
is a power of a prime
and zero otherwise. If we then perform a “Dirichlet-Fourier transform” by viewing both sides of (1) as coefficients of a Dirichlet series, we conclude that
formally at least. Writing , the right-hand side factors as
whereas the left-hand side is (formally, at least) equal to . We conclude the identity
(formally, at least). If we integrate this, we are formally led to the identity
or equivalently to the exponential identity
which allows one to reconstruct the Riemann zeta function from the von Mangoldt function. (It is an instructive exercise in enumerative combinatorics to try to prove this identity directly, at the level of formal Dirichlet series, using the fundamental theorem of arithmetic of course.) Now, as has a simple pole at
and zeroes at various places
on the critical strip, we expect a Weierstrass factorisation which formally (ignoring normalisation issues) takes the form
(where we will be intentionally vague about what is hiding in the terms) and so we expect an expansion of the form
Note that
and hence on integrating in we formally have
and thus we have the heuristic approximation
Comparing this with (3), we are led to a heuristic form of the explicit formula
When trying to make this heuristic rigorous, it turns out (due to the rough nature of both sides of (4)) that one has to interpret the explicit formula in some suitably weak sense, for instance by testing (4) against the indicator function to obtain the formula
which can in fact be made into a rigorous statement after some truncation (the von Mangoldt explicit formula). From this formula we now see how helpful the Riemann hypothesis will be to control the distribution of the primes; indeed, if the Riemann hypothesis holds, so that for all zeroes
, it is not difficult to use (a suitably rigorous version of) the explicit formula to conclude that
as , giving a near-optimal “square root cancellation” for the sum
. Conversely, if one can somehow establish a bound of the form
for any fixed , then the explicit formula can be used to then deduce that all zeroes
of
have real part at most
, which leads to the following remarkable amplification phenomenon (analogous, as we will see later, to the tensor power trick): any bound of the form
can be automatically amplified to the stronger bound
with both bounds being equivalent to the Riemann hypothesis. Of course, the Riemann hypothesis for the Riemann zeta function remains open; but partial progress on this hypothesis (in the form of zero-free regions for the zeta function) leads to partial versions of the asymptotic (6). For instance, it is known that there are no zeroes of the zeta function on the line , and this can be shown by some analysis (either complex analysis or Fourier analysis) to be equivalent to the prime number theorem
see e.g. this previous blog post for more discussion.
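For reference, the rigorous forms of the identities being described heuristically above (in the classical normalisations, which may differ slightly from those used in this post) are
$$-\frac{\zeta'(s)}{\zeta(s)} = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}, \qquad \zeta(s) = \exp\Big( \sum_{n=2}^\infty \frac{\Lambda(n)}{\log n}\, n^{-s} \Big) \qquad (\mathrm{Re}(s) > 1),$$
while the von Mangoldt explicit formula, valid for non-integer $x > 1$, reads
$$\sum_{n \le x} \Lambda(n) = x - \sum_\rho \frac{x^\rho}{\rho} - \log(2\pi) - \tfrac{1}{2}\log\big(1 - x^{-2}\big),$$
where $\rho$ ranges over the non-trivial zeroes of $\zeta$ (with the sum interpreted as a symmetric limit).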
The main engine powering the above observations was the fundamental theorem of arithmetic, and so one can expect to establish similar assertions in other contexts where some version of the fundamental theorem of arithmetic is available. One of the simplest such variants is to continue working on the natural numbers, but “twist” them by a Dirichlet character . The analogue of the Riemann zeta function is then the Dirichlet L-function. Since the Dirichlet character is multiplicative, the equation (1), which encoded the fundamental theorem of arithmetic, can be twisted by
to obtain
and essentially the same manipulations as before eventually lead to the exponential identity
which is a twisted version of (2), as well as a twisted explicit formula, which heuristically takes the form
for non-principal , where
now ranges over the zeroes of
in the critical strip, rather than the zeroes of
; a more accurate formulation, following (5), would be
(See e.g. Davenport’s book for a more rigorous discussion which emphasises the analogy between the Riemann zeta function and the Dirichlet L-function.) If we assume the generalised Riemann hypothesis, which asserts that all zeroes of
in the critical strip also lie on the critical line, then we obtain the bound
for any non-principal Dirichlet character , again demonstrating a near-optimal square root cancellation for this sum. Again, we have the amplification property that the above bound is implied by the apparently weaker bound
(where denotes a quantity that goes to zero as
for any fixed
). Next, one can consider other number systems than the natural numbers
and integers
. For instance, one can replace the integers
with rings
of integers in other number fields
(i.e. finite extensions of
), such as the quadratic extensions
of the rationals for various square-free integers
, in which case the ring of integers would be the ring of quadratic integers
for a suitable generator
(it turns out that one can take
if
, and
if
). Here, it is not immediately obvious what the analogue of the natural numbers
is in this setting, since rings such as
do not come with a natural ordering. However, we can adopt an algebraic viewpoint to see the correct generalisation, observing that every natural number
generates a principal ideal
in the integers, and conversely every non-trivial ideal
in the integers is associated to precisely one natural number
in this fashion, namely the norm
of that ideal. So one can identify the natural numbers with the ideals of
. Furthermore, with this identification, the prime numbers correspond to the prime ideals, since if
is prime, and
are integers, then
if and only if one of
or
is true. Finally, even in number systems (such as
) in which the classical version of the fundamental theorem of arithmetic fails (e.g.
), we have the fundamental theorem of arithmetic for ideals: every ideal
in a Dedekind domain (which includes the ring
of integers in a number field as a key example) is uniquely representable (up to permutation) as the product of a finite number of prime ideals
(although these ideals might not necessarily be principal). For instance, in
, the principal ideal
factors as the product of four prime (but non-principal) ideals
,
,
,
. (Note that the first two ideals
are actually equal to each other.) Because we still have the fundamental theorem of arithmetic, we can develop analogues of the previous observations relating the Riemann hypothesis to the distribution of primes. The analogue of the Riemann zeta function is now the Dedekind zeta function
where the summation is over all non-trivial ideals in . One can also define a von Mangoldt function
, defined as
when
is a power of a prime ideal
, and zero otherwise; then the fundamental theorem of arithmetic for ideals can be encoded in an analogue of (1) (or (7)),
which leads as before to an exponential identity
and an explicit formula of the heuristic form
in analogy with (5) or (10). Again, a suitable Riemann hypothesis for the Dedekind zeta function leads to good asymptotics for the distribution of prime ideals, giving a bound of the form
where is the conductor of
(which, in the case of number fields, is the absolute value of the discriminant of
) and
is the degree of the extension of
over
. As before, we have the amplification phenomenon that the above near-optimal square root cancellation bound is implied by the weaker bound
where denotes a quantity that goes to zero as
(holding
fixed). See e.g. Chapter 5 of Iwaniec-Kowalski for details.
As was the case with the Dirichlet L-functions, one can twist the Dedekind zeta function example by characters, in this case the Hecke characters; we will not do this here, but see e.g. Section 3 of Iwaniec-Kowalski for details.
Very analogous considerations hold if we move from number fields to function fields. The simplest case is the function field associated to the affine line and a finite field
of some order
. The polynomial functions on the affine line
are just the usual polynomial ring
, which then play the role of the integers
(or
) in previous examples. This ring happens to be a unique factorisation domain, so the situation is closely analogous to the classical setting of the Riemann zeta function. The analogue of the natural numbers are the monic polynomials (since every non-trivial principal ideal is generated by precisely one monic polynomial), and the analogue of the prime numbers are the irreducible monic polynomials. The norm
of a polynomial is the order of
, which can be computed explicitly as
Because of this, we will normalise things slightly differently here and use in place of
in what follows. The (local) zeta function
is then defined as
where ranges over monic polynomials, and the von Mangoldt function
is defined to equal
when
is a power of a monic irreducible polynomial
, and zero otherwise. Note that because
is always a power of
, the zeta function here is in fact periodic with period
. Because of this, it is customary to make a change of variables
, so that
and is the renormalised zeta function
We have the analogue of (1) (or (7) or (11)):
which leads as before to an exponential identity
analogous to (2), (8), or (12). It also leads to the explicit formula
where are the zeroes of the original zeta function
(counting each residue class of the period
just once), or equivalently
where are the reciprocals of the roots of the normalised zeta function
(or to put it another way,
are the factors of this zeta function). Again, to make proper sense of this heuristic we need to sum, obtaining
As it turns out, in the function field setting, the zeta functions are always rational (this is part of the Weil conjectures), and the above heuristic formula is basically exact up to a constant factor, thus
for an explicit integer (independent of
) arising from any potential pole of
at
. In the case of the affine line
, the situation is particularly simple, because the zeta function
is easy to compute. Indeed, since there are exactly
monic polynomials of a given degree
, we see from (14) that
so in fact there are no zeroes whatsoever, and no pole at either, so we have an exact prime number theorem for this function field:
Among other things, this tells us that the number of irreducible monic polynomials of degree is
.
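As a quick numerical sanity check of this count (a throwaway script of ours, not from the post), one can compute the number of monic irreducible polynomials of degree $n$ over $\mathbf{F}_q$ by Möbius inversion of the standard identity $\sum_{d \mid n} d\, \pi_q(d) = q^n$, which is presumably the content of (18) up to notation:

```python
def mobius(n):
    """Mobius function mu(n), computed by trial division."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # n is not squarefree
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def monic_irreducibles(q, n):
    """Number of monic irreducible polynomials of degree n over F_q,
    via Mobius inversion of sum_{d | n} d * pi_q(d) = q^n."""
    return sum(mobius(d) * q ** (n // d) for d in divisors(n)) // n

q = 5
for n in range(1, 8):
    assert sum(d * monic_irreducibles(q, d) for d in divisors(n)) == q ** n

print([monic_irreducibles(2, n) for n in range(1, 9)])  # [2, 1, 2, 3, 6, 9, 18, 30]
```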
We can transition from an algebraic perspective to a geometric one, by viewing a given monic polynomial through its roots, which are a finite set of points in the algebraic closure
of the finite field
(or more suggestively, as points on the affine line
). The number of such points (counting multiplicity) is the degree of
, and from the factor theorem, the set of points determines the monic polynomial
(or, if one removes the monic hypothesis, it determines the polynomial
projectively). These points have an action of the Galois group
. It is a classical fact that this Galois group is in fact a cyclic group generated by a single element, the (geometric) Frobenius map
, which fixes the elements of the original finite field
but permutes the other elements of
. Thus the roots of a given polynomial
split into orbits of the Frobenius map. One can check that the roots consist of a single such orbit (counting multiplicity) if and only if
is irreducible; thus the fundamental theorem of arithmetic can be viewed geometrically as the orbit decomposition of any Frobenius-invariant finite set of points in the affine line.
Now consider the degree finite field extension
of
(it is a classical fact that there is exactly one such extension up to isomorphism for each
); this is a subfield of
of order
. (Here we are performing a standard abuse of notation by overloading the subscripts in the
notation; thus
denotes the field of order
, while
denotes the extension of
of order
, so that we in fact have
if we use one subscript convention on the left-hand side and the other subscript convention on the right-hand side. We hope this overloading will not cause confusion.) Each point
in this extension (or, more suggestively, the affine line
over this extension) has a minimal polynomial – an irreducible monic polynomial whose roots consist of the Frobenius orbit of
. Since the Frobenius action is periodic of period
on
, the degree of this minimal polynomial must divide
. Conversely, every monic irreducible polynomial of degree
dividing
produces
distinct zeroes that lie in
(here we use the classical fact that finite fields are perfect) and hence in
. We have thus partitioned
into Frobenius orbits (also known as closed points), with each monic irreducible polynomial
of degree
dividing
contributing an orbit of size
. From this we conclude a geometric interpretation of the left-hand side of (18):
The identity (18) thus is equivalent to the thoroughly boring fact that the number of -points on the affine line
is equal to
. However, things become much more interesting if one then replaces the affine line
by a more general (geometrically) irreducible curve
defined over
; for instance one could take
to be an elliptic curve
for some suitable , although the discussion here applies to more general curves as well (though to avoid some minor technicalities, we will assume that the curve is projective with a finite number of
-rational points removed). The analogue of
is then the coordinate ring of
(for instance, in the case of the elliptic curve (20) it would be
), with polynomials in this ring producing a set of roots in the curve
that is again invariant with respect to the Frobenius action (acting on the
and
coordinates separately). In general, we do not expect unique factorisation in this coordinate ring (this is basically because Bezout’s theorem suggests that the zero set of a polynomial on
will almost never consist of a single (closed) point). Of course, we can use the algebraic formalism of ideals to get around this, setting up a zeta function
and a von Mangoldt function as before, where
would now run over the non-trivial ideals of the coordinate ring. However, it is more instructive to use the geometric viewpoint, using the ideal-variety dictionary from algebraic geometry to convert algebraic objects involving ideals into geometric objects involving varieties. In this dictionary, a non-trivial ideal would correspond to a proper subvariety (or more precisely, a subscheme, but let us ignore the distinction between varieties and schemes here) of the curve
; as the curve is irreducible and one-dimensional, this subvariety must be zero-dimensional and is thus a (multi-)set of points
in
, or equivalently an effective divisor
of
; this generalises the concept of the set of roots of a polynomial (which corresponds to the case of a principal ideal). Furthermore, this divisor has to be rational in the sense that it is Frobenius-invariant. The prime ideals correspond to those divisors (or sets of points) which are irreducible, that is to say the individual Frobenius orbits, also known as closed points of
. With this dictionary, the zeta function becomes
where the sum is over effective rational divisors of
(with
being the degree of an effective divisor
), or equivalently
The analogue of (19), which gives a geometric interpretation to sums of the von Mangoldt function, becomes
thus this sum is simply counting the number of -points of
. The analogue of the exponential identity (16) (or (2), (8), or (12)) is then
and the analogue of the explicit formula (17) (or (5), (10) or (13)) is
where runs over the (reciprocal) zeroes of
(counting multiplicity), and
is an integer independent of
. (As it turns out,
equals
when
is a projective curve, and more generally equals
when
is a projective curve with
rational points deleted.)
To evaluate , one needs to count the number of effective divisors of a given degree on the curve
. Fortunately, there is a tool that is particularly well-designed for this task, namely the Riemann-Roch theorem. By using this theorem, one can show (when
is projective) that
is in fact a rational function, with a finite number of zeroes, and a simple pole at both
and
, with similar results when one deletes some rational points from
; see e.g. Chapter 11 of Iwaniec-Kowalski for details. Thus the sum in (22) is finite. For instance, for the affine elliptic curve (20) (which is a projective curve with one point removed), it turns out that we have
for two complex numbers depending on
and
.
The Riemann hypothesis for (untwisted) curves – which is the deepest and most difficult aspect of the Weil conjectures for these curves – asserts that the zeroes of lie on the critical line, or equivalently that all the roots
in (22) have modulus
, so that (22) then gives the asymptotic
where the implied constant depends only on the genus of (and on the number of points removed from
). For instance, for elliptic curves we have the Hasse bound
As before, we have an important amplification phenomenon: if we can establish a weaker estimate, e.g.
then we can automatically deduce the stronger bound (23). This amplification is not a mere curiosity; most of the proofs of the Riemann hypothesis for curves proceed via this fact. For instance, by using the elementary method of Stepanov to bound points in curves (discussed for instance in this previous post), one can establish the preliminary bound (24) for large , which then amplifies to the optimal bound (23) for all
(and in particular for
). Again, see Chapter 11 of Iwaniec-Kowalski for details. The ability to convert a bound with
-dependent losses over the optimal bound (such as (24)) into an essentially optimal bound with no
-dependent losses (such as (23)) is important in analytic number theory, since in many applications (e.g. in those arising from sieve theory) one wishes to sum over large ranges of
.
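As a small numerical illustration of the Hasse bound mentioned above (the prime and curve below are arbitrary choices of ours; the discriminant check confirms the curve is non-singular):

```python
def affine_point_count(a, b, p):
    """Number of solutions (x, y) in F_p^2 to y^2 = x^3 + a*x + b."""
    sq_count = {}
    for y in range(p):
        sq_count[y * y % p] = sq_count.get(y * y % p, 0) + 1
    return sum(sq_count.get((x * x * x + a * x + b) % p, 0) for x in range(p))

p, a, b = 101, 2, 3                          # y^2 = x^3 + 2x + 3 over F_101
assert (4 * a ** 3 + 27 * b ** 2) % p != 0   # discriminant nonzero mod p: non-singular
N = affine_point_count(a, b, p) + 1          # add the point at infinity on the projective curve
assert abs(N - (p + 1)) <= 2 * p ** 0.5      # Hasse bound: |N - (p + 1)| <= 2 sqrt(p)
print(N)
```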
Much as the Riemann zeta function can be twisted by a Dirichlet character to form a Dirichlet L-function, one can twist the zeta function on curves by various additive and multiplicative characters. For instance, suppose one has an affine plane curve
and an additive character
, thus
for all
. Given a rational effective divisor
, the sum
is Frobenius-invariant and thus lies in
. By abuse of notation, we may thus define
on such divisors by
and observe that is multiplicative in the sense that
for rational effective divisors
. One can then define
for any non-trivial ideal
by replacing that ideal with the associated rational effective divisor; for instance, if
is a polynomial in the coordinate ring of
, with zeroes at
, then
is
. Again, we have the multiplicativity property
. If we then form the twisted normalised zeta function
then by twisting the previous analysis, we eventually arrive at the exponential identity
in analogy with (21) (or (2), (8), (12), or (16)), where the companion sums are defined by
where the trace of an element
in the plane
is defined by the formula
In particular, is the exponential sum
which is an important type of sum in analytic number theory, containing for instance the Kloosterman sum
as a special case, where . (NOTE: the sign conventions for the companion sum
are not consistent across the literature, sometimes it is
which is referred to as the companion sum.)
If is non-principal (and
is non-linear), one can show (by a suitably twisted version of the Riemann-Roch theorem) that
is a rational function of
, with no pole at
, and one then gets an explicit formula of the form
for the companion sums, where are the reciprocals of the zeroes of
, in analogy to (22) (or (5), (10), (13), or (17)). For instance, in the case of Kloosterman sums, there is an identity of the form
for all and some complex numbers
depending on
, where we have abbreviated
as
. As before, the Riemann hypothesis for
then gives a square root cancellation bound of the form
for the companion sums (and in particular gives the very explicit Weil bound for the Kloosterman sum), but again there is the amplification phenomenon that this sort of bound can be deduced from the apparently weaker bound
As before, most of the known proofs of the Riemann hypothesis for these twisted zeta functions proceed by first establishing this weaker bound (e.g. one could again use Stepanov’s method here for this goal) and then amplifying to the full bound (28); see Chapter 11 of Iwaniec-Kowalski for further details.
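As a concrete numerical check of the Weil bound for Kloosterman sums (a toy script of ours, with arbitrarily chosen parameters; it requires Python 3.8 or later for the modular inverse):

```python
import cmath

def kloosterman(a, b, p):
    """S(a, b; p) = sum over x in F_p^* of e((a x + b x^{-1}) / p); real for odd prime p."""
    s = sum(cmath.exp(2j * cmath.pi * ((a * x + b * pow(x, -1, p)) % p) / p)
            for x in range(1, p))
    return s.real

p = 97
for a in range(1, 6):
    S = kloosterman(a, 1, p)
    assert abs(S) <= 2 * p ** 0.5 + 1e-9  # Weil bound: |S(a, b; p)| <= 2 sqrt(p)
    print(a, round(S, 4))
```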
One can also twist the zeta function on a curve by a multiplicative character by similar arguments, except that instead of forming the sum
of all the components of an effective divisor
, one takes the product
instead, and similarly one replaces the trace
by the norm
Again, see Chapter 11 of Iwaniec-Kowalski for details.
Deligne famously extended the above theory to higher-dimensional varieties than curves, and also to the closely related context of -adic sheaves on curves, giving rise to two separate proofs of the Weil conjectures in full generality. (Very roughly speaking, the former context can be obtained from the latter context by a sort of Fubini theorem type argument that expresses sums on higher-dimensional varieties as iterated sums on curves of various expressions related to
-adic sheaves.) In this higher-dimensional setting, the zeta function formalism is still present, but is much more difficult to use, in large part due to the much less tractable nature of divisors in higher dimensions (they are now combinations of codimension one subvarieties or subschemes, rather than combinations of points). To get around this difficulty, one has to change perspective yet again, from an algebraic or geometric perspective to an
-adic cohomological perspective. (I could imagine that once one is sufficiently expert in the subject, all these perspectives merge back together into a unified viewpoint, but I am certainly not yet at that stage of understanding.) In particular, the zeta function, while still present, plays a significantly less prominent role in the analysis (at least if one is willing to take Deligne’s theorems as a black box); the explicit formula is now obtained via a different route, namely the Grothendieck-Lefschetz fixed point formula. I have written some notes on this material below the fold (based in part on some lectures of Philippe Michel, as well as the text of Iwaniec-Kowalski and also this book of Katz), but I should caution that my understanding here is still rather sketchy and possibly inaccurate in places.
As in previous posts, we use the following asymptotic notation: is a parameter going off to infinity, and all quantities may depend on
unless explicitly declared to be “fixed”. The asymptotic notation
is then defined relative to this parameter. A quantity
is said to be of polynomial size if one has
, and bounded if
. We also write
for
, and
for
.
The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument. In order to state the main result, we need to recall some definitions. If is a bounded subset of
, let
denote the square-free numbers whose prime factors lie in
, and let
denote the product of the primes
in
. Note by the Chinese remainder theorem that the set
of primitive congruence classes
modulo
can be identified with the tuples
of primitive congruence classes modulo
for each
which obey the Chinese remainder theorem
for all coprime , since one can identify
with the tuple
for each
.
If and
is a natural number, we say that
is
-densely divisible if, for every
, one can find a factor of
in the interval
. We say that
is doubly
-densely divisible if, for every
, one can find a factor
of
in the interval
such that
is itself
-densely divisible. We let
denote the set of doubly
-densely divisible natural numbers, and
the set of
-densely divisible numbers.
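To make these definitions concrete, the following small Python sketch (not from the post) tests dense divisibility directly; it assumes the interval convention that n is y-densely divisible when every interval [R/y, R] with 1 <= R <= n contains a divisor of n, which is equivalent to asking that consecutive divisors of n never jump by more than a factor of y.

```python
# A small sketch (not from the post) testing the dense divisibility conditions,
# assuming the convention that n is y-densely divisible when every interval
# [R/y, R] with 1 <= R <= n contains a divisor of n.

def divisors(n):
    """All divisors of n in increasing order (trial division; fine for small n)."""
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    return small + large[::-1]

def densely_divisible(n, y):
    """n is y-densely divisible iff consecutive divisors never jump by more than y."""
    divs = divisors(n)
    return all(divs[i + 1] <= y * divs[i] for i in range(len(divs) - 1))

def doubly_densely_divisible(n, y):
    """Doubly y-densely divisible: for every R one can pick the factor q in
    [R/y, R] so that n/q is still y-densely divisible.  The worst cases occur
    when R sits just below a divisor of n, which is what is checked here."""
    divs = divisors(n)
    for i in range(len(divs) - 1):
        lo, hi = divs[i + 1] / y, divs[i]
        if not any(lo <= q <= hi and densely_divisible(n // q, y) for q in divs):
            return False
    return True

print(densely_divisible(2 * 3 * 5 * 7 * 11, 4))    # True: a smooth, squarefree modulus
print(densely_divisible(10007, 4))                 # False: 10007 is prime
print(doubly_densely_divisible(2**6 * 3**4, 4))    # True
```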
Given any finitely supported sequence and any primitive residue class
, we define the discrepancy
For any fixed , we let
denote the assertion that
for any fixed , any bounded
, and any primitive
, where
is the von Mangoldt function. Importantly, we do not require
or
to be fixed, in particular
could grow polynomially in
, and
could grow exponentially in
, but the implied constant in (1) would still need to be fixed (so it has to be uniform in
and
). (In previous formulations of these estimates, the system of congruence
was also required to obey a controlled multiplicity hypothesis, but we no longer need this hypothesis in our arguments.) In this post we will record the proof of the following result, which is currently the best distribution result produced by the ongoing polymath8 project to optimise Zhang’s theorem on bounded gaps between primes:
This improves upon the previous constraint of (see this previous post), although that latter statement was stronger in that it only required single dense divisibility rather than double dense divisibility. However, thanks to the efficiency of the sieving step of our argument, the upgrade of the single dense divisibility hypothesis to double dense divisibility costs almost nothing with respect to the
parameter (which, using this constraint, gives a value of
as verified in these comments, which then implies a value of
).
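As an aside, the discrepancy introduced above is easy to experiment with numerically. The short Python sketch below (not part of the argument) assumes the usual normalisation, namely the sum of the sequence over the residue class a mod q minus 1/phi(q) times its sum over the residues coprime to q, applied here to the von Mangoldt function restricted to [x, 2x].

```python
# A numerical sketch (not from the post) of the discrepancy, assuming the usual
# normalisation: Delta(alpha; a (q)) = sum_{n = a (q)} alpha(n)
#                                      - (1/phi(q)) * sum_{(n,q)=1} alpha(n),
# applied here to alpha(n) = Lambda(n) restricted to [x, 2x].
import math
from sympy import factorint, totient

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k is a prime power, and 0 otherwise."""
    f = factorint(n)
    return math.log(next(iter(f))) if len(f) == 1 else 0.0

x, q = 10000, 21
lam = {n: von_mangoldt(n) for n in range(x, 2 * x + 1)}   # precompute Lambda on [x, 2x]
phi = int(totient(q))

def discrepancy(a):
    in_class = sum(v for n, v in lam.items() if n % q == a)
    coprime = sum(v for n, v in lam.items() if math.gcd(n, q) == 1)
    return in_class - coprime / phi

for a in (1, 2, 4):                      # primitive residue classes mod 21
    print(f"Delta at a={a} (mod {q}): {discrepancy(a):+9.2f}"
          f"   (main term is roughly {x / phi:.0f})")
```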
This estimate is deduced from three sub-estimates, which require a bit more notation to state. We need a fixed quantity .
Definition 2 A coefficient sequence is a finitely supported sequence
that obeys the bounds
for all
, where
is the divisor function.
- (i) A coefficient sequence
is said to be at scale
for some
if it is supported on an interval of the form
.
- (ii) A coefficient sequence
at scale
is said to obey the Siegel-Walfisz theorem if one has
for any
, any fixed
, and any primitive residue class
.
- (iii) A coefficient sequence
at scale
(relative to this choice of
) is said to be smooth if it takes the form
for some smooth function
supported on
obeying the derivative bounds
for all fixed
(note that the implied constant in the
notation may depend on
).
Definition 3 (Type I, Type II, Type III estimates) Let
,
, and
be fixed quantities. We let
be an arbitrary bounded subset of
, and
a primitive congruence class.
- (i) We say that
holds if, whenever
are quantities with
and
for some fixed
, and
are coefficient sequences at scales
respectively, with
obeying a Siegel-Walfisz theorem, we have
- (ii) We say that
holds if the conclusion (7) of
holds under the same hypotheses as before, except that (6) is replaced with
for some sufficiently small fixed
.
- (iii) We say that
holds if, whenever
are quantities with
and
are coefficient sequences at scales
respectively, with
smooth, we have
Theorem 1 is then a consequence of the following four statements.
Theorem 4 (Type I estimate)
holds whenever
are fixed quantities such that
Theorem 5 (Type II estimate)
holds whenever
are fixed quantities such that
Theorem 6 (Type III estimate)
holds whenever
,
, and
are fixed quantities such that
In particular, if
then all values of
that are sufficiently close to
are admissible.
Lemma 7 (Combinatorial lemma) Let
,
, and
be such that
,
, and
simultaneously hold. Then
holds.
Indeed, if , one checks that the hypotheses for Theorems 4, 5, 6 are obeyed for
sufficiently close to
, at which point the claim follows from Lemma 7.
The proofs of Theorems 4, 5, 6 will be given below the fold, while the proof of Lemma 7 follows from the arguments in this previous post. We remark that in our current arguments, the double dense divisibility is only fully used in the Type I estimates; the Type II and Type III estimates are also valid just with single dense divisibility.
Remark 1 Theorem 6 is vacuously true for
, as the condition (10) cannot be satisfied in this case. If we use this trivial case of Theorem 6, while keeping the full strength of Theorems 4 and 5, we obtain Theorem 1 in the regime
As in previous posts, we use the following asymptotic notation: is a parameter going off to infinity, and all quantities may depend on
unless explicitly declared to be “fixed”. The asymptotic notation
is then defined relative to this parameter. A quantity
is said to be of polynomial size if one has
, and bounded if
. We also write
for
, and
for
.
The purpose of this post is to collect together all the various refinements to the second half of Zhang’s paper that have been obtained as part of the polymath8 project and present them as a coherent argument (though not fully self-contained, as we will need some lemmas from previous posts).
In order to state the main result, we need to recall some definitions.
Definition 1 (Singleton congruence class system) Let
, and let
denote the square-free numbers whose prime factors lie in
. A singleton congruence class system on
is a collection
of primitive residue classes
for each
, obeying the Chinese remainder theorem property
whenever
are coprime. We say that such a system
has controlled multiplicity if the
for any fixed
and any congruence class
with
. Here
is the divisor function.
Next we need a relaxation of the concept of -smoothness.
Definition 2 (Dense divisibility) Let
. A positive integer
is said to be
-densely divisible if, for every
, there exists a factor of
in the interval
. We let
denote the set of
-densely divisible positive integers.
Now we present a strengthened version of the Motohashi-Pintz-Zhang conjecture
, which depends on parameters
and
.
Conjecture 3 (
) Let
, and let
be a congruence class system with controlled multiplicity. Then
for any fixed
, where
is the von Mangoldt function.
The difference between this conjecture and the weaker conjecture is that the modulus
is constrained to be
-densely divisible rather than
-smooth (note that
is no longer constrained to lie in
). This relaxation of the smoothness condition improves the Goldston-Pintz-Yildirim type sieving needed to deduce
from
; see this previous post.
The main result we will establish is
This improves upon previous constraints of (see this blog comment) and
(see Theorem 13 of this previous post), which were also only established for
instead of
. Inserting Theorem 4 into the Pintz sieve from this previous post gives
for
(see this blog comment), which when inserted in turn into newly set up tables of narrow prime tuples gives infinitely many prime gaps of separation at most
.
As in previous posts, we use the following asymptotic notation: is a parameter going off to infinity, and all quantities may depend on
unless explicitly declared to be “fixed”. The asymptotic notation
is then defined relative to this parameter. A quantity
is said to be of polynomial size if one has
, and said to be bounded if
. Another convenient notation: we write
for
. Thus for instance the divisor bound asserts that if
has polynomial size, then the number of divisors of
is
.
This post is intended to highlight a phenomenon unearthed in the ongoing polymath8 project (and is in fact a key component of Zhang’s proof that there are bounded gaps between primes infinitely often), namely that one can get quite good bounds on relatively short exponential sums when the modulus is smooth, through the basic technique of Weyl differencing (ultimately based on the Cauchy-Schwarz inequality, and also related to the van der Corput lemma in equidistribution theory). Improvements in the case of smooth moduli have appeared before in the literature (e.g. in this paper of Heath-Brown, paper of Graham and Ringrose, this later paper of Heath-Brown, this paper of Chang, or this paper of Goldmakher); the arguments here are particularly close to that of the first paper of Heath-Brown. It now also appears that further optimisation of this Weyl differencing trick could lead to noticeable improvements in the numerology for the polymath8 project, so I am devoting this post to explaining this trick further.
To illustrate the method, let us begin with the classical problem in analytic number theory of estimating an incomplete character sum
where is a primitive Dirichlet character of some conductor
,
is an integer, and
is some quantity between
and
. Clearly we have the trivial bound
we also have the classical Pólya-Vinogradov inequality
This latter inequality gives improvements over the trivial bound when is much larger than
, but not for
much smaller than
. The Pólya-Vinogradov inequality can be deduced via a little Fourier analysis from the completed exponential sum bound
for any , where
. (In fact, from the classical theory of Gauss sums, this exponential sum is equal to
for some complex number
of norm
.)
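To get a feel for these bounds, here is a small Python experiment (not from the post) with incomplete sums of the Legendre symbol modulo a prime q. For a range of length well below sqrt(q) log q the trivial bound is already the better of the two stated bounds, in line with the discussion above, while the observed sums are typically much smaller than either.

```python
# A numerical sketch (not from the post): incomplete sums of the Legendre symbol
# chi(n) = (n|q) for a prime q, compared with the trivial bound N and the
# Polya-Vinogradov bound of shape sqrt(q) * log q.
import math

def legendre(n, q):
    """Legendre symbol (n|q) for an odd prime q, via Euler's criterion."""
    n %= q
    if n == 0:
        return 0
    return 1 if pow(n, (q - 1) // 2, q) == 1 else -1

q = 100003                # a prime modulus
N = 2000                  # length of the incomplete sum, much shorter than q
sampled = [abs(sum(legendre(M + n, q) for n in range(1, N + 1)))
           for M in range(0, 10 * N, N)]          # a few starting points M
print(f"largest sampled |sum over (M, M+N]| = {max(sampled)}")
print(f"trivial bound N                     = {N}")
print(f"Polya-Vinogradov scale sqrt(q)log q = {math.sqrt(q) * math.log(q):.0f}")
```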
In the case when is a prime, improving upon the above two inequalities is an important but difficult problem, with only partially satisfactory results so far. To give just one indication of the difficulty, the seemingly modest improvement
to the Pólya-Vinogradov inequality when was a prime required a 14-page paper in Inventiones by Montgomery and Vaughan to prove, and even then it was only conditional on the generalised Riemann hypothesis! See also this more recent paper of Granville and Soundararajan for an unconditional variant of this result in the case that
has odd order.
Another important improvement is the Burgess bound, which in our notation asserts that
for any fixed integer , assuming that
is square-free (for simplicity) and of polynomial size; see this previous post for a discussion of the Burgess argument. This is non-trivial for
as small as
.
In the case when is prime, there has been very little improvement to the Burgess bound (or its Fourier dual, which can give bounds for
as large as
) in the last fifty years; an improvement to the exponents in (3) in this case (particularly anything that gave a power saving for
below
) would in fact be rather significant news in analytic number theory.
However, in the opposite case when is smooth – that is to say, all of its factors are much smaller than
– then one can do better than the Burgess bound in some regimes. This fact has been observed in several places in the literature (in particular, in the papers of Heath-Brown, Graham-Ringrose, Chang, and Goldmakher mentioned previously), but also turns out to (implicitly) be a key insight in Zhang’s paper on bounded prime gaps. In the case of character sums, one such improved estimate (closely related to Theorem 2 of the Heath-Brown paper) is as follows:
Proposition 1 Let
be square-free with a factorisation
and of polynomial size, and let
be integers with
. Then for any primitive character
with conductor
, one has
This proposition is particularly powerful when is smooth, as this gives many factorisations
with the ability to specify
with a fair amount of accuracy. For instance, if
is
-smooth (i.e. all prime factors are at most
), then by the greedy algorithm one can find a divisor
of
with
; if we set
, then
, and the above proposition then gives
which can improve upon the Burgess bound when is small. For instance, if
, then this bound becomes
; in contrast the Burgess bound only gives
for this value of
(using the optimal choice
for
), which is inferior for
.
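The greedy step used above is simple enough to spell out in code. The following Python sketch (with a toy smooth modulus chosen for illustration, not taken from the post) absorbs prime factors of q one at a time as long as the running product stays below the target q^{1/3}; since each prime factor is at most y, the divisor produced lands in the interval (q^{1/3}/y, q^{1/3}] whenever q exceeds the target.

```python
# A sketch (not from the post) of the greedy factorisation step: for a y-smooth
# squarefree q, absorb prime factors (each at most y) while the product stays
# below the target, which leaves a divisor q1 with target/y < q1 <= target
# (provided q itself exceeds the target).
from sympy import factorint

def greedy_divisor(q, target):
    """Greedily build a divisor of the squarefree number q that is at most target."""
    d = 1
    for p in sorted(factorint(q)):     # each prime appears once since q is squarefree
        if d * p <= target:
            d *= p
    return d

y = 30
q = 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 * 29    # a y-smooth squarefree modulus
target = q ** (1 / 3)
q1 = greedy_divisor(q, target)
q2 = q // q1
print(f"q = {q}, target q^(1/3) ~ {target:.1f}")
print(f"q1 = {q1}  (indeed {target / y:.1f} < {q1} <= {target:.1f}),  q2 = {q2}")
```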
The hypothesis that be squarefree may be relaxed, but for applications to the Polymath8 project, it is only the squarefree moduli that are relevant.
Proof: If then the claim follows from the trivial bound (1), while for
the claim follows from (2). Hence we may assume that
We use the method of Weyl differencing, the key point being to difference in multiples of .
Let , thus
. For any
, we have
By the Chinese remainder theorem, we may factor
where are primitive characters of conductor
respectively. As
is periodic of period
, we thus have
and so we can take out of the inner summation of the right-hand side of (4) to obtain
and hence by the triangle inequality
Note how the characters on the right-hand side only have period rather than
. This reduction in the period is ultimately the source of the saving over the Pólya-Vinogradov inequality.
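This period reduction can be seen concretely in a small numerical experiment. The Python sketch below (not part of the proof; the characters are built from primitive roots of two small primes, a construction chosen purely for illustration) verifies that after factoring chi = chi1 chi2 by the Chinese remainder theorem, a shift by a multiple of q1 is only detected by the factor chi2.

```python
# A numerical sketch (not from the post) of the key identity behind the Weyl
# differencing step: if q = q1*q2 is squarefree and chi = chi1*chi2 by the
# Chinese remainder theorem, then chi1 is periodic mod q1, so
#   chi(n + k*q1) = chi1(n) * chi2(n + k*q1),
# i.e. only the chi2 factor (of the smaller period q2) sees the shift.
import cmath

def char_mod_prime(p, g, j):
    """A Dirichlet character mod the prime p: chi(n) = e(j*ind_g(n)/(p-1)),
    where g is a primitive root mod p; it is primitive whenever j != 0 mod p-1."""
    log = {pow(g, k, p): k for k in range(p - 1)}   # discrete log table
    return lambda n: 0 if n % p == 0 else cmath.exp(2j * cmath.pi * j * log[n % p] / (p - 1))

q1, q2 = 11, 13                      # 2 is a primitive root mod both 11 and 13
chi1 = char_mod_prime(q1, 2, 3)
chi2 = char_mod_prime(q2, 2, 5)

def chi(n):
    """The character mod q = q1*q2 obtained by multiplying the two components."""
    return chi1(n) * chi2(n)

for n in (4, 17, 100):
    for k in (1, 2, 7):
        lhs = chi(n + k * q1)
        rhs = chi1(n) * chi2(n + k * q1)
        assert abs(lhs - rhs) < 1e-12
print("chi(n + k*q1) = chi1(n)*chi2(n + k*q1) verified for the sample n, k")
```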
Note that the inner sum vanishes unless , which is an interval of length
by choice of
. Thus by Cauchy-Schwarz one has
We expand the right-hand side as
We first consider the diagonal contribution . In this case we use the trivial bound
for the inner summation, and we soon see that the total contribution here is
.
Now we consider the off-diagonal case; by symmetry we can take . Then the indicator functions
restrict
to the interval
. On the other hand, as a consequence of the Weil conjectures for curves one can show that
for any ; indeed one can use the Chinese remainder theorem and the square-free nature of
to reduce to the case when
is prime, in which case one can apply (for instance) the original paper of Weil to establish this bound, noting also that
and
are coprime since
is squarefree. Applying the method of completion of sums (or the Parseval formula), this shows that
Summing in (using Lemma 5 from this previous post) we see that the total contribution to the off-diagonal case is
which simplifies to . The claim follows.
A modification of the above argument (using more complicated versions of the Weil conjectures) allows one to replace the summand by more complicated summands such as
for some polynomials or rational functions
of bounded degree and obeying a suitable non-degeneracy condition (after restricting of course to those
for which the arguments
are well-defined). We will not detail this here, but instead turn to the question of estimating slightly longer exponential sums, such as
where should be thought of as a little bit larger than
.
One of the most basic methods in additive number theory is the Hardy-Littlewood circle method. This method is based on expressing a quantity of interest to additive number theory, such as the number of representations of an integer
as the sum of three primes
, as a Fourier-analytic integral over the unit circle
involving exponential sums such as
where the sum here ranges over all primes up to , and
. For instance, the expression
mentioned earlier can be written as
The strategy is then to obtain sufficiently accurate bounds on exponential sums such as in order to obtain non-trivial bounds on quantities such as
. For instance, if one can show that
for all odd integers
greater than some given threshold
, this implies that all odd integers greater than
are expressible as the sum of three primes, thus establishing all but finitely many instances of the odd Goldbach conjecture.
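For a toy example of how the exponential sum encodes the counting problem, the following Python sketch (not from the post) computes the number of ordered representations of a small odd number as the sum of three primes in two ways: directly, and via the discrete (FFT) analogue of the integral over the unit circle.

```python
# A toy illustration (not from the post): the number of ordered representations
# of n as p1 + p2 + p3 with primes p_i <= x equals the n-th Fourier coefficient
# of S(x, alpha)^3; here we use the discrete (DFT) form of that orthogonality.
import numpy as np
from sympy import primerange

x, n = 200, 101
primes = list(primerange(2, x + 1))
prime_set = set(primes)

L = 1 << (3 * x).bit_length()           # DFT length exceeding 3x, so no wrap-around
a = np.zeros(L)
a[primes] = 1.0
S = np.fft.fft(a)                       # S[k] = sum_p e(-k p / L) in numpy's convention
counts = np.rint(np.real(np.fft.ifft(S ** 3)))

direct = sum(1 for p1 in primes for p2 in primes if n - p1 - p2 in prime_set)
print(f"n = {n}: circle-method count = {int(counts[n])},  direct count = {direct}")
```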
Remark 1 In practice, it can be more efficient to work with smoother sums than the partial sum (1), for instance by replacing the cutoff
with a smoother cutoff
for a suitable choice of cutoff function
, or by replacing the restriction of the summation to primes by a more analytically tractable weight, such as the von Mangoldt function
. However, these improvements to the circle method are primarily technical in nature and do not have much impact on the heuristic discussion in this post, so we will not emphasise them here. One can also certainly use the circle method to study additive combinations of numbers from other sets than the set of primes, but we will restrict attention to additive combinations of primes for sake of discussion, as it is historically one of the most studied sets in additive number theory.
In many cases, it turns out that one can get fairly precise evaluations on sums such as in the major arc case, when
is close to a rational number
with small denominator
, by using tools such as the prime number theorem in arithmetic progressions. For instance, the prime number theorem itself tells us that
and the prime number theorem in residue classes modulo suggests more generally that
when is small and
is close to
, basically thanks to the elementary calculation that the phase
has an average value of
when
is uniformly distributed amongst the residue classes modulo
that are coprime to
. Quantifying the precise error in these approximations can be quite challenging, though, unless one assumes powerful hypotheses such as the Generalised Riemann Hypothesis.
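The major arc heuristic is easy to test numerically. The Python sketch below (not from the post) compares S(x, a/q) for a few small moduli q against the prediction (mu(q)/phi(q)) times the number of primes up to x, which is what one obtains by averaging the phases over the primitive residue classes modulo q.

```python
# A quick numerical check (not from the post) of the major arc heuristic:
# for alpha = a/q with small q, S(x, a/q) should be close to (mu(q)/phi(q)) * pi(x),
# since the phases e(a*b/q) average to mu(q)/phi(q) over primitive residues b mod q.
import cmath
from sympy import primerange, factorint, totient

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

x = 200000
primes = list(primerange(2, x + 1))

def S(alpha):
    return sum(cmath.exp(2j * cmath.pi * alpha * p) for p in primes)

for q, a in [(3, 1), (4, 1), (5, 2), (6, 1)]:
    val = S(a / q)
    pred = mobius(q) / int(totient(q)) * len(primes)
    print(f"q={q}, a={a}:  S(x,a/q) = {val.real:9.1f} {val.imag:+8.1f}i,"
          f"   prediction = {pred:9.1f}")
```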
In the minor arc case when is not close to a rational
with small denominator, one no longer expects to have such precise control on the value of
, due to the “pseudorandom” fluctuations of the quantity
. Using the standard probabilistic heuristic (supported by results such as the central limit theorem or Chernoff’s inequality) that the sum of
“pseudorandom” phases should fluctuate randomly and be of typical magnitude
, one expects upper bounds of the shape
for “typical” minor arc . Indeed, a simple application of the Plancherel identity, followed by the prime number theorem, reveals that
which is consistent with (though weaker than) the above heuristic. In practice, though, we are unable to rigorously establish bounds anywhere near as strong as (3); upper bounds such as are far more typical.
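The Plancherel computation just mentioned can be checked on a computer in a few lines: since S(x, alpha) has exactly pi(x) unit Fourier coefficients, the mean square of |S| over the circle is exactly pi(x), which by the prime number theorem is about x / log x. The sketch below (not from the post) verifies the discrete analogue via Parseval's identity for the DFT.

```python
# A short check (not from the post) of the Plancherel identity: the mean of
# |S(x, alpha)|^2 over the circle equals pi(x), the number of unit coefficients.
import numpy as np
from sympy import primerange

x = 100000
primes = list(primerange(2, x + 1))
L = 1 << (x + 1).bit_length()
a = np.zeros(L)
a[primes] = 1.0
mean_square = np.mean(np.abs(np.fft.fft(a)) ** 2)     # discrete Parseval
print(f"mean |S|^2 = {mean_square:.1f},  pi(x) = {len(primes)},  x/log x = {x / np.log(x):.1f}")
```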
Because one only expects to have upper bounds on , rather than asymptotics, in the minor arc case, one cannot realistically hope to make much use of phases such as
for the minor arc contribution to integrals such as (2) (at least if one is working with a single, deterministic, value of
, so that averaging in
is unavailable). In particular, from upper bound information alone, it is difficult to avoid the “conspiracy” that the magnitude
oscillates in sympathetic resonance with the phase
, thus essentially eliminating almost all of the possible gain in the bounds that could arise from exploiting cancellation from that phase. Thus, one basically has little option except to use the triangle inequality to control the portion of the integral on the minor arc region
:
Despite this handicap, though, it is still possible to get enough bounds on both the major and minor arc contributions of integrals such as (2) to obtain non-trivial lower bounds on quantities such as , at least when
is large. In particular, this sort of method can be developed to give a proof of Vinogradov’s famous theorem that every sufficiently large odd integer
is the sum of three primes; my own result that all odd numbers greater than
can be expressed as the sum of at most five primes is also proven by essentially the same method (modulo a number of minor refinements, and taking advantage of some numerical work on both the Goldbach problems and on the Riemann hypothesis). It is certainly conceivable that some further variant of the circle method (again combined with a suitable amount of numerical work, such as that of numerically establishing zero-free regions for the Generalised Riemann Hypothesis) can be used to settle the full odd Goldbach conjecture; indeed, under the assumption of the Generalised Riemann Hypothesis, this was already achieved by Deshouillers, Effinger, te Riele, and Zinoviev back in 1997. I am optimistic that an unconditional version of this result will be possible within a few years or so, though I should say that there are still significant technical challenges to doing so, and some clever new ideas will probably be needed to get either the Vinogradov-style argument or numerical verification to work unconditionally for the three-primes problem at medium-sized ranges of
, such as
. (But the intermediate problem of representing all even natural numbers as the sum of at most four primes looks somewhat closer to being feasible, though even this would require some substantially new and non-trivial ideas beyond what is in my five-primes paper.)
However, I (like many other analytic number theorists) am considerably more skeptical that the circle method can be applied to the even Goldbach problem of representing a large even number as the sum
of two primes, or the similar (and marginally simpler) twin prime conjecture of finding infinitely many pairs of twin primes, i.e. finding infinitely many representations
of
as the difference of two primes. At first glance, the situation looks tantalisingly similar to that of the Vinogradov theorem: to settle the even Goldbach problem for large
, one has to find a non-trivial lower bound for the quantity
for sufficiently large , as this quantity
is also the number of ways to represent
as the sum
of two primes
. Similarly, to settle the twin prime problem, it would suffice to obtain a lower bound for the quantity
that goes to infinity as , as this quantity
is also the number of ways to represent
as the difference
of two primes less than or equal to
.
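For concreteness, both binary counts can be computed for small x by the same FFT orthogonality as in the ternary sketch earlier; the snippet below (a toy illustration, not part of the discussion) reads off the Goldbach count as a Fourier coefficient of S^2 and the twin prime count as a Fourier coefficient of |S|^2.

```python
# A toy illustration (not from the post) of the two binary counts above,
# via the same DFT orthogonality as in the ternary example: the Goldbach count
# is a Fourier coefficient of S^2, the twin count a Fourier coefficient of |S|^2.
import numpy as np
from sympy import primerange

x, N = 10000, 9000                       # N is the even number to write as p1 + p2
primes = list(primerange(2, x + 1))
L = 1 << (2 * x).bit_length()            # long enough to avoid wrap-around
a = np.zeros(L)
a[primes] = 1.0
S = np.fft.fft(a)

goldbach = int(np.rint(np.real(np.fft.ifft(S ** 2))[N]))        # ordered pairs p1 + p2 = N
twins = int(np.rint(np.real(np.fft.ifft(np.abs(S) ** 2))[2]))   # pairs with p1 - p2 = 2
print(f"ordered representations of {N} as p1 + p2: {goldbach}")
print(f"twin prime pairs (p, p+2) with p + 2 <= {x}: {twins}")
```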
In principle, one can achieve either of these two objectives by a sufficiently fine level of control on the exponential sums . Indeed, there is a trivial (and uninteresting) way to take any (hypothetical) solution of either the asymptotic even Goldbach problem or the twin prime problem and (artificially) convert it to a proof that “uses the circle method”; one simply begins with the quantity
or
, expresses it in terms of
using (5) or (6), and then uses (5) or (6) again to convert these integrals back into the combinatorial expression of counting solutions to
or
, and then uses the hypothetical solution to the given problem to obtain the required lower bounds on
or
.
Of course, this would not qualify as a genuine application of the circle method by any reasonable measure. One can then ask the more refined question of whether one could hope to get non-trivial lower bounds on or
(or similar quantities) purely from the upper and lower bounds on
or similar quantities (and on various
type norms on such quantities, such as the
bound (4)). Of course, we do not yet know what the strongest possible upper and lower bounds in
are (otherwise we would already have made progress on major conjectures such as the Riemann hypothesis); but we can make plausible heuristic conjectures on such bounds. And this is enough to make the following heuristic conclusions:
- (i) For “binary” problems such as computing (5), (6), the contribution of the minor arcs potentially dominates that of the major arcs (if all one is given about the minor arc sums is magnitude information), in contrast to “ternary” problems such as computing (2), in which it is the major arc contribution which is absolutely dominant.
- (ii) Upper and lower bounds on the magnitude of
are not sufficient, by themselves, to obtain non-trivial bounds on (5), (6) unless these bounds are extremely tight (within a relative error of
or better); but
- (iii) obtaining such tight bounds is a problem of comparable difficulty to the original binary problems.
I will provide some justification for these conclusions below the fold; they are reasonably well known “folklore” to many researchers in the field, but it seems that they are rarely made explicit in the literature (in part because these arguments are, by their nature, heuristic instead of rigorous) and I have been asked about them from time to time, so I decided to try to write them down here.
In view of the above conclusions, it seems that the best one can hope to do by using the circle method for the twin prime or even Goldbach problems is to reformulate such problems into a statement of roughly comparable difficulty to the original problem, even if one assumes powerful conjectures such as the Generalised Riemann Hypothesis (which gives very precise control on major arc exponential sums, but not on minor arc ones). These are not rigorous conclusions – after all, we have already seen that one can always artificially insert the circle method into any viable approach on these problems – but they do strongly suggest that one needs a method other than the circle method in order to fully solve either of these two problems. I do not know what such a method would be, though I can give some heuristic objections to some of the other popular methods used in additive number theory (such as sieve methods, or more recently the use of inverse theorems); this will be done at the end of this post.