as $X \to \infty$ for any distinct natural numbers $h_1,\dots,h_k$, where $\lambda$ denotes the Liouville function. (One could also replace the Liouville function here by the Möbius function and obtain a morally equivalent conjecture.) This conjecture remains open for any $k \geq 2$; for instance the assertion

$$\sum_{n \leq X} \lambda(n) \lambda(n+2) = o(X)$$
is a variant of the twin prime conjecture (though possibly a tiny bit easier to prove), and is subject to the notorious parity barrier (as discussed in this previous post).
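
As a quick illustration of the kind of cancellation being conjectured here, the following self-contained numerical sketch (my own illustration, not from the paper; all names are mine) sieves the Liouville function and evaluates the normalised two-point correlation at shift $2$:

```python
import numpy as np

def liouville_upto(N):
    # lambda(n) = (-1)^Omega(n), where Omega counts prime factors with multiplicity.
    Omega = np.zeros(N + 1, dtype=np.int64)
    for p in range(2, N + 1):
        if Omega[p] == 0:  # p is prime: no smaller prime has marked it yet
            pk = p
            while pk <= N:
                Omega[pk::pk] += 1
                pk *= p
    return (-1) ** Omega

X = 10**6
lam = liouville_upto(X + 2)
corr = np.mean(lam[1:X + 1] * lam[3:X + 3])  # (1/X) sum_{n <= X} lambda(n) lambda(n+2)
print(f"(1/X) sum lambda(n)lambda(n+2) at X = {X}: {corr:+.5f}")  # close to 0
```

The conjecture predicts that this normalised correlation tends to zero as $X \to \infty$; at this modest height it is already quite small.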

Our main result asserts, roughly speaking, that Chowla’s conjecture can be established unconditionally provided one has non-trivial averaging in the parameters. More precisely, one has

Theorem 1 (Chowla on the average). Suppose $H = H(X) \leq X$ is a quantity that goes to infinity as $X \to \infty$ (but it can go to infinity arbitrarily slowly). Then for any fixed $k \geq 1$, we have

$$\sum_{1 \leq h_1,\dots,h_k \leq H} \left| \sum_{n \leq X} \lambda(n+h_1) \cdots \lambda(n+h_k) \right| = o(H^k X).$$

In fact, we can remove one of the averaging parameters and obtain

$$\sum_{1 \leq h_2,\dots,h_k \leq H} \left| \sum_{n \leq X} \lambda(n+h_1) \lambda(n+h_2) \cdots \lambda(n+h_k) \right| = o(H^{k-1} X)$$

for any fixed natural number $h_1$.

Actually we can make the decay rate a bit more quantitative, gaining a small quantitative factor over the trivial bound. The key case is $k=2$; while the unaveraged Chowla conjecture becomes more difficult as $k$ increases, the averaged Chowla conjecture does not increase in difficulty due to the increasing amount of averaging for larger $k$, and we end up deducing the higher $k$ case of the conjecture from the $k=2$ case by an elementary argument.

The proof of the theorem proceeds as follows. By exploiting the Fourier-analytic identity

(related to a standard Fourier-analytic identity for the Gowers $U^2$ norm) it turns out that the $k=2$ case of the above theorem can basically be derived from an estimate of the form

$$\int_X^{2X} \left| \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n) \right|\ dx = o(HX)$$

uniformly for all $\alpha \in \mathbb{R}$. For “major arc” $\alpha$, close to a rational $a/q$ with small denominator $q$, we can establish this bound from a generalisation of a recent result of Matomaki and Radziwill (discussed in this previous post) on averages of multiplicative functions in short intervals. For “minor arc” $\alpha$, we can proceed instead from an argument of Katai and Bourgain-Sarnak-Ziegler (discussed in this previous post).

The argument also extends to other bounded multiplicative functions than the Liouville function. Chowla’s conjecture was generalised by Elliott, who roughly speaking conjectured that the copies of $\lambda$ in Chowla’s conjecture could be replaced by arbitrary bounded multiplicative functions $g$ as long as these functions were far from a twisted Dirichlet character $n \mapsto \chi(n) n^{it}$ in the sense that

$$\inf_{t \in \mathbb{R}} \sum_p \frac{1 - \mathrm{Re}\big( g(p) \overline{\chi(p)} p^{-it} \big)}{p} = \infty \quad \text{for all Dirichlet characters } \chi. \qquad (1)$$

(This type of distance is incidentally now a fundamental notion in the Granville-Soundararajan “pretentious” approach to multiplicative number theory.) During our work on this project, we found that Elliott’s conjecture is not quite true as stated due to a technicality: one can cook up a bounded multiplicative function $g$ which behaves like $n^{it_j}$ on scales $x_j$ for some $x_j$ going to infinity and some slowly varying $t_j$, and such a function will be far from any fixed Dirichlet character whilst still having many large correlations (e.g. the pair correlations will be large). In our paper we propose a technical “fix” to Elliott’s conjecture (replacing (1) by a truncated variant), and show that this repaired version of Elliott’s conjecture is true on the average in much the same way that Chowla’s conjecture is. (If one restricts attention to real-valued multiplicative functions, then this technical issue does not show up, basically because one can assume without loss of generality that $t = 0$ in this case; we discuss this fact in an appendix to the paper.)


One has the classical estimate

$$\zeta(\sigma+it) = O(|t|^{O(1)}) \qquad (1)$$

when $|t| \geq 1$ and $\sigma$ ranges in a compact set. (See e.g. Exercise 37 from Supplement 3.) In view of this, let us define the normalised log-magnitudes $F_T$ for any large $T$ by the formula

$$F_T(\sigma+it) := \frac{\log |\zeta(\sigma + i(T+t))|}{\log T};$$

informally, this is a normalised window into $\log|\zeta|$ near the height $T$. One can rephrase several assertions about the zeta function in terms of the asymptotic behaviour of $F_T$ as $T \to \infty$. For instance:
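
The definition of $F_T$ above is my reconstruction of a stripped formula, so the following sketch should be read as illustrating that normalisation rather than the notes’ exact one; it samples $F_T$ at a few points using mpmath:

```python
import mpmath as mp

mp.mp.dps = 20

def F(T, sigma, t):
    # Normalised log-magnitude of zeta in a unit-scale window at height T.
    return float(mp.log(abs(mp.zeta(mp.mpc(sigma, T + t)))) / mp.log(T))

T = 10**5
for sigma in (-1.0, 0.5, 2.0):
    vals = [round(F(T, sigma, t), 3) for t in (-2, -1, 0, 1, 2)]
    print(f"sigma = {sigma:+.1f}: {vals}")
# Heuristically: values near 0 for sigma = 2 (item (ii) below), near
# 1/2 - sigma = 1.5 for sigma = -1 (item (iii)), and bounded but
# oscillating on the critical line.
```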

- (i) The bound (1) implies that $F_T$ is asymptotically locally bounded from above in the limit $T \to \infty$; thus for any compact set $K$ we have $F_T \leq O_K(1)$ on $K$ for sufficiently large $T$. In fact the implied constant in $O_K(1)$ only depends on the projection of $K$ to the real axis.
- (ii) For $\sigma > 1$, we have the bounds

$$\frac{\zeta(2\sigma)}{\zeta(\sigma)} \leq |\zeta(\sigma+it)| \leq \zeta(\sigma),$$

which imply that $F_T$ converges locally uniformly as $T \to \infty$ to zero in the region $\{\sigma+it : \sigma > 1\}$.

- (iii) The functional equation, together with the symmetry $\zeta(\overline{s}) = \overline{\zeta(s)}$, implies that

$$|\zeta(1-\sigma+i(T+t))| = \left| 2^{1-s} \pi^{-s} \cos\left(\frac{\pi s}{2}\right) \Gamma(s) \right|_{s = \sigma+i(T+t)} |\zeta(\sigma+i(T+t))|,$$

which by Exercise 17 of Supplement 3 shows that

$$F_T(1-\sigma+it) = F_T(\sigma+it) + \sigma - \frac{1}{2} + o(1)$$

as $T \to \infty$, locally uniformly in $\sigma+it$. In particular, when combined with the previous item, we see that $F_T$ converges locally uniformly as $T \to \infty$ to $\frac{1}{2}-\sigma$ in the region $\{\sigma+it : \sigma < 0\}$.

- (iv) From Jensen’s formula (Theorem 16 of Supplement 2) we see that $\log|\zeta|$ is a subharmonic function, and thus $F_T$ is subharmonic as well. In particular we have the mean value inequality

$$F_T(z_0) \leq \frac{1}{\pi r^2} \int_{B(z_0,r)} F_T(z)\ dA(z)$$

for any disk $B(z_0,r)$, where the integral is with respect to area measure. From this and (ii) we conclude that

$$\frac{1}{\pi r^2} \int_{B(z_0,r)} F_T(z)\ dA(z) \geq -o(1)$$

for any disk $B(z_0,r)$ with $\mathrm{Re}(z_0) > 1$ and sufficiently large $T$.
- (v) Combining (iv) with (i), we conclude that $F_T$ is asymptotically locally bounded in $L^1$ in the limit $T \to \infty$; thus for any compact set $K$ we have $\int_K |F_T|\ dA = O_K(1)$ for sufficiently large $T$.

From (v) and the usual Arzela-Ascoli diagonalisation argument, we see that the $F_T$ are asymptotically compact in the topology of distributions: given any sequence $T_n$ tending to infinity, one can extract a subsequence such that the $F_{T_n}$ converge in the sense of distributions. Let us then define a *normalised limit profile* of $\log|\zeta|$ to be a distributional limit $F$ of a sequence of $F_{T_n}$; they are analogous to limiting profiles in PDE, and also to the more recent introduction of “graphons” in the theory of graph limits. Then by taking limits in (i)-(iv) we can say a lot about such normalised limit profiles $F$ (up to almost everywhere equivalence, which is an issue we will address shortly):

- (i) $F$ is bounded from above in the critical strip $\{0 \leq \sigma \leq 1\}$.
- (ii) $F$ vanishes on $\{\sigma \geq 1\}$.
- (iii) We have the functional equation $F(1-\sigma+it) = F(\sigma+it) + \sigma - \frac{1}{2}$ for all $\sigma+it$. In particular $F(\sigma+it) = \frac{1}{2}-\sigma$ for $\sigma \leq 0$.
- (iv) $F$ is subharmonic.

Unfortunately, (i)-(iv) fail to characterise $F$ completely. For instance, one could have $F(\sigma+it) = f(\sigma)$ for any convex function $f(\sigma)$ of $\sigma$ that equals $0$ for $\sigma \geq 1$, equals $\frac{1}{2}-\sigma$ for $\sigma \leq 0$, and obeys the functional equation $f(1-\sigma) = f(\sigma) + \sigma - \frac{1}{2}$, and this would be consistent with (i)-(iv). One can also perturb such examples in a region where $f$ is strictly convex to create further examples of functions obeying (i)-(iv). Note from subharmonicity that the function $\sigma \mapsto \sup_t F(\sigma+it)$ is always going to be convex in $\sigma$; this can be seen as a limiting case of the Hadamard three-lines theorem (Exercise 41 of Supplement 2).

We pause to address one minor technicality. We have defined $F$ as a distributional limit, and as such it is *a priori* only defined up to almost everywhere equivalence. However, due to subharmonicity, there is a unique upper semi-continuous representative of $F$ (taking values in $[-\infty,+\infty)$), defined by the formula

$$F(z_0) := \lim_{r \to 0^+} \frac{1}{\pi r^2} \int_{B(z_0,r)} F(z)\ dA(z)$$

for any $z_0$ (note from subharmonicity that the expression in the limit is monotone nonincreasing as $r \to 0$, and is also continuous in $z_0$). We will now view this upper semi-continuous representative of $F$ as *the* canonical representative of $F$, so that $F$ is now defined everywhere, rather than up to almost everywhere equivalence.

By a classical theorem of Riesz, a function $\phi$ is subharmonic if and only if the distribution $\Delta \phi$ is a non-negative measure, where $\Delta := \frac{\partial^2}{\partial \sigma^2} + \frac{\partial^2}{\partial t^2}$ is the Laplacian in the $\sigma, t$ coordinates. Jensen’s formula (or Green’s theorem), when interpreted distributionally, tells us that

$$\Delta \log|\zeta(\sigma+it)| = 2\pi \sum_\rho \delta_\rho$$

away from the real axis, where $\rho$ ranges over the non-trivial zeroes of $\zeta$ (counted with multiplicity). Thus, if $F$ is a normalised limit profile for $\log|\zeta|$ that is the distributional limit of $F_{T_n}$, then we have

$$\Delta F = 2\pi \mu,$$

where $\mu$ is a non-negative measure which is the limit in the vague topology of the measures

$$\mu_{T_n} := \frac{1}{\log T_n} \sum_\rho \delta_{\rho - iT_n}.$$
Thus $\mu$ can be viewed as a normalised limit profile of the zeroes of the Riemann zeta function.

Using this machinery, we can recover many classical theorems about the Riemann zeta function by “soft” arguments that do not require extensive calculation. Here are some examples:

Theorem 1. The Riemann hypothesis implies the Lindelöf hypothesis.

*Proof:* It suffices to show that any limiting profile $F$ (arising as the limit of some $F_{T_n}$) vanishes on the critical line $\{\sigma = \frac{1}{2}\}$. But if the Riemann hypothesis holds, then the measures $\mu_{T_n}$ are supported on the critical line, so the normalised limit profile measure $\mu$ is also supported on this line. This implies that $F$ is harmonic outside of the critical line. By (ii) and unique continuation for harmonic functions, this implies that $F$ vanishes on the half-space $\{\sigma \geq \frac{1}{2}\}$ (and equals $\frac{1}{2}-\sigma$ on the complementary half-space, by (iii)), giving the claim.

In fact, we have the following sharper statement:

Theorem 2 (Backlund). The Lindelöf hypothesis is equivalent to the assertion that for any fixed $\varepsilon > 0$, the number of zeroes in the region $\{\sigma \geq \frac{1}{2}+\varepsilon,\ T \leq t \leq T+1\}$ is $o(\log T)$ as $T \to \infty$.

*Proof:* If the latter claim holds, then for any sequence $T_n \to \infty$, the measures $\mu_{T_n}$ assign a mass of $o(1)$ to any region of the form $\{\sigma \geq \frac{1}{2}+\varepsilon,\ t_0 \leq t \leq t_0+1\}$ as $n \to \infty$ for any fixed $\varepsilon > 0$ and $t_0$. Thus the normalised limiting profile measure $\mu$ is supported on the critical line, and we can repeat the previous argument.

Conversely, suppose the claim fails; then we can find $\varepsilon > 0$ and a sequence $T_n \to \infty$ such that $\mu_{T_n}$ assigns a mass of at least some fixed $c > 0$ to the region $\{\sigma \geq \frac{1}{2}+\varepsilon,\ 0 \leq t \leq 1\}$. Extracting a normalised limiting profile, we conclude that the normalised limiting profile measure $\mu$ is non-trivial somewhere to the right of the critical line, so the associated subharmonic function $F$ is not harmonic everywhere to the right of the critical line. From the maximum principle and (ii) this implies that $F$ has to be positive somewhere on the critical line, but this contradicts the Lindelöf hypothesis. (One has to take a bit of care in the last step since $F_{T_n}$ only converges to $F$ in the sense of distributions, but it turns out that the subharmonicity of all the functions involved gives enough regularity to justify the argument; we omit the details here.)

Theorem 3 (Littlewood). Assume the Lindelöf hypothesis. Then for any fixed $\delta > 0$, the number of zeroes in the region $\{0 \leq \sigma \leq 1,\ T \leq t \leq T+\delta\}$ is $(\frac{\delta}{2\pi}+o(1)) \log T$ as $T \to \infty$.

*Proof:* By the previous arguments, the only possible normalised limiting profile for $\log|\zeta|$ is $\max(\frac{1}{2}-\sigma, 0)$. Taking distributional Laplacians, we see that the only possible normalised limiting profile for the zeroes is $\frac{1}{2\pi}$ times Lebesgue measure on the critical line. Thus, the number of zeroes in the indicated region, divided by $\log T$, can only converge to $\frac{\delta}{2\pi}$ as $T \to \infty$, and the claim follows.

Even without the Lindelöf hypothesis, we have the following result:

Theorem 4 (Titchmarsh). For any fixed $\delta > 0$, there are $\gg_\delta \log T$ zeroes in the region $\{0 \leq \sigma \leq 1,\ T \leq t \leq T+\delta\}$ for sufficiently large $T$.

Among other things, this theorem recovers a classical result of Littlewood that the gaps between the imaginary parts of the zeroes go to zero, even without assuming unproven conjectures such as the Riemann or Lindelöf hypotheses.

*Proof:* Suppose for contradiction that this were not the case; then we can find $\delta > 0$ and a sequence $T_n \to \infty$ such that $\{0 \leq \sigma \leq 1,\ T_n \leq t \leq T_n+\delta\}$ contains $o(\log T_n)$ zeroes. Passing to a subsequence to extract a limit profile, we conclude that the normalised limit profile measure $\mu$ assigns no mass to the horizontal strip $\{0 < t < \delta\}$. Thus the associated subharmonic function $F$ is actually harmonic on this strip. But by (ii) and unique continuation this forces $F$ to vanish on this strip, contradicting the functional equation (iii).

Exercise 5. Use limiting profiles to obtain the matching upper bound of $O_\delta(\log T)$ for the number of zeroes in $\{0 \leq \sigma \leq 1,\ T \leq t \leq T+\delta\}$ for sufficiently large $T$.

Remark 6. One can remove the need to take limiting profiles in the above arguments if one can come up with quantitative (or “hard”) substitutes for qualitative (or “soft”) results such as the unique continuation property for harmonic functions. This would also allow one to replace the qualitative decay rates $o(1)$ with more quantitative decay rates. Indeed, the classical proofs of the above theorems come with quantitative bounds that are typically of this form (see e.g. the text of Titchmarsh for details).

Exercise 7. Let denote the quantity , where the branch of the argument is taken by using a line segment connecting to (say) , and then to . If we have a sequence producing normalised limit profiles for and the zeroes respectively, show that converges in the sense of distributions to the function , or equivalently

Conclude in particular that if the Lindelöf hypothesis holds, then as .

A little bit more about the normalised limit profiles is known unconditionally, beyond (i)-(iv). For instance, from Exercise 3 of Notes 5 we have $\zeta(\frac{1}{2}+it) = O(|t|^{1/6+o(1)})$, which implies that any normalised limit profile for $\log|\zeta|$ is bounded by $\frac{1}{6}$ on the critical line, beating the bound of $\frac{1}{4}$ coming from convexity and (ii), (iii), and then convexity can be used to further bound the profile away from the critical line also. Some further small improvements of this type are known (coming from various methods for estimating exponential sums), though they fall well short of determining the limit profiles completely at our current level of understanding. Of course, given that we believe the Riemann hypothesis (and hence the Lindelöf hypothesis) to be true, the only actual limit profile that should exist is $\max(\frac{1}{2}-\sigma, 0)$ (in fact this assertion is equivalent to the Lindelöf hypothesis, by the arguments above).

Better control on limiting profiles is available if we do not insist on controlling the zeta function for *all* values of the height parameter $T$, but only for *most* such values, thanks to the existence of several *mean value theorems* for the zeta function, as discussed in Notes 6; we discuss this below the fold.

** — 1. Limiting profiles outside of exceptional sets — **

In order to avoid an excessive number of extractions of subsequences and discardings of exceptional sets, we now move away from the standard sequential notion of a limit, and instead work with the less popular, but equally valid, notion of an *ultrafilter limit*. Recall that an ultrafilter $p$ on a set $X$ is a collection of subsets of $X$ (which we will call the “$p$-large” sets) which are the sets of full measure with regards to some finitely additive $\{0,1\}$-valued probability measure on $X$ (with the power set $2^X$ as the Boolean algebra of measurable sets). We call a subset of $X$ *$p$-small* if it is not $p$-large. Given a function $f: X \to Y$ into a topological space $Y$ and a point $y \in Y$, we say that $f$ *converges to $y$ along $p$* if $f^{-1}(U)$ is $p$-large for every neighbourhood $U$ of $y$, and then we call $y$ a *$p$-limit* of $f$.

Exercise 8. Let $f: X \to Y$ be a function into a topological space $Y$, and let $p$ be an ultrafilter on $X$.

- (i) If $Y$ is compact, show that $f$ has at least one $p$-limit. (A proof sketch is given after this exercise.)
- (ii) If $Y$ is Hausdorff, show that $f$ has at most one $p$-limit.
- (iii) Conversely, if $Y$ fails to be compact (resp. Hausdorff), show that there exists a function $f: X \to Y$ and an ultrafilter $p$ on $X$ such that $f$ has no $p$-limit (resp. more than one $p$-limit).
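
Since the finitely additive measure formulation above makes part (i) quick, here is a proof sketch (my own write-up; the exercise itself is left to the reader in the notes):

```latex
\begin{proof}[Sketch of Exercise 8(i)]
Suppose for contradiction that no $y \in Y$ is a $p$-limit of $f$. Then every
$y \in Y$ has an open neighbourhood $U_y$ such that $f^{-1}(U_y)$ is $p$-small.
By compactness, finitely many of these neighbourhoods cover $Y$, say
$Y = U_{y_1} \cup \dots \cup U_{y_n}$, and hence
\[
  X = f^{-1}(U_{y_1}) \cup \dots \cup f^{-1}(U_{y_n}).
\]
But the associated finitely additive $\{0,1\}$-valued probability measure gives
$X$ measure $1$ while each $f^{-1}(U_{y_i})$ has measure $0$, contradicting
finite subadditivity. Hence at least one $p$-limit exists.
\end{proof}
```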

In particular, given an ultrafilter $p$ on the non-negative reals $[0,+\infty)$ which is non-principal in the sense that all compact subsets of $[0,+\infty)$ are $p$-small, there exists a unique normalised limiting profile $F$ that is the limit of $T \mapsto F_T$ along $p$, and similarly for the zero measures $\mu_T$. Because the distributional topology is second countable, such limiting profiles are also limiting profiles of sequences as in the previous discussion, and so we retain all existing properties of limit profiles such as (i)-(iv). However, in the ultrafilter formalism we can now easily avoid various “small” exceptional sets of heights $T$, in addition to the compact sets that have already been excluded. For instance, let us call an ultrafilter $p$ *generic* if every Lebesgue measurable subset $E$ of $[0,+\infty)$ of zero upper density (thus $E \cap [0,T]$ has Lebesgue measure $o(T)$ as $T \to \infty$) is $p$-small. The existence of generic ultrafilters follows easily from Zorn’s lemma. Define a *generic limit profile* to be a limit profile that arises from a generic ultrafilter; informally, these capture the possible behaviour of the zeta function outside of a set of heights of zero density. To see how these profiles are better than arbitrary limit profiles, we recall from Exercise 2 of Notes 6 that

if the points are $1$-separated elements of $[T, 2T]$ and the coefficients are arbitrary complex numbers. If we set , we can conclude (among other things) that for any constant , one has

for all $t \in [T, 2T]$ outside of a set of measure $o(T)$ (informally: “square root cancellation occurs generically”). Using this, one can for instance show that

for all $t \in [T, 2T]$ outside of a set of measure $o(T)$, which implies that any *generic* limit profile vanishes on the critical line, and thus must be $\max(\frac{1}{2}-\sigma, 0)$; that is to say, the Lindelöf hypothesis is true “generically”.
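
To make the “outside a set of measure $o(T)$” mechanism concrete, here is a toy numerical sketch (all parameters are arbitrary choices of mine) that samples the Dirichlet polynomial $\sum_{n \leq N} n^{-1/2-it}$ at random heights $t \in [T, 2T]$, comparing typical sizes with the trivial bound $\sum_{n \leq N} n^{-1/2} \approx 2\sqrt{N}$ and the root-mean-square prediction $(\sum_{n \leq N} n^{-1})^{1/2} \approx \sqrt{\log N}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, samples = 10**4, 10**6, 300

n = np.arange(1, N + 1)
coeff = n ** -0.5
t = rng.uniform(T, 2 * T, size=samples)
S = np.exp(-1j * np.outer(t, np.log(n))) @ coeff  # S(t) = sum_{n<=N} n^{-1/2-it}

trivial = coeff.sum()               # ~ 2 sqrt(N)
rms = np.sqrt((1.0 / n).sum())      # ~ sqrt(log N)
big = np.mean(np.abs(S) > 10 * rms) # Chebyshev: this fraction should be <~ 1/100
print(f"trivial bound ~ {trivial:.1f}, rms ~ {rms:.2f}")
print(f"median |S(t)| = {np.median(np.abs(S)):.2f}, fraction above 10*rms = {big:.3f}")
```

The Chebyshev step here is exactly the passage from a mean value bound to a statement holding for all heights outside a small exceptional set.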

One can profitably explore the regime between arbitrary non-principal ultrafilters and generic ultrafilters by introducing the intermediate notion of an *$\alpha$-generic ultrafilter* for any $0 \leq \alpha \leq 1$, defined as an ultrafilter $p$ with the property that any Lebesgue measurable subset $E$ of $[0,+\infty)$ of “dimension at most $\alpha$”, in the sense that $E \cap [0,T]$ has measure $O(T^{\alpha+o(1)})$, is $p$-small. One can then interpret many existing mean value theorems on the zeta function (or on other Dirichlet series) as controlling the $\alpha$-generic limit profiles of $\log|\zeta|$, or more generally of the log-magnitude of various Dirichlet series (e.g. for various exponents ). For instance, the previous argument shows that

for all $t \in [T, 2T]$ outside of a set of small measure, which implies that any $\alpha$-generic limit profile is bounded above appropriately on the critical line. One can also recast much of the arguments in Notes 6 in this language (defining limit profiles for various Dirichlet polynomials, and using such profiles and zero-detecting polynomials to establish $\alpha$-generic zero-free regions), although this is mostly just a change of notation and does not seem to yield any major simplifications to these arguments.


than it is to estimate summatory functions such as $\sum_{n \leq x} f(n)$.

(Here we are normalising $f$ to be roughly constant in size, e.g. $f(n) = O(n^{o(1)})$ as $n \to \infty$.) For instance, when $f$ is the von Mangoldt function $\Lambda$, the logarithmic sums $\sum_{n \leq x} \frac{\Lambda(n)}{n}$ can be adequately estimated by Mertens’ theorem, which can be easily proven by elementary means (see Notes 1); but a satisfactory estimate on the summatory function $\sum_{n \leq x} \Lambda(n)$ requires the prime number theorem, which is substantially harder to prove (see Notes 2). (From a complex-analytic or Fourier-analytic viewpoint, the problem is that the logarithmic sums can usually be controlled just from knowledge of the Dirichlet series $\sum_n \frac{f(n)}{n^s}$ for $s$ near $1$; but the summatory functions require control of the Dirichlet series for $s$ on or near a large portion of the line $\{1+it : t \in \mathbb{R}\}$. See Notes 2 for further discussion.)

Viewed conversely, whenever one has a difficult estimate on a summatory function such as $\sum_{n \leq x} f(n)$, one can look to see if there is a “cheaper” version of that estimate that only controls the logarithmic sums $\sum_{n \leq x} \frac{f(n)}{n}$, which is easier to prove than the original, more “expensive” estimate. In this post, we shall do this for two theorems, a classical theorem of Halasz on mean values of multiplicative functions on long intervals, and a much more recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. The two are related; the former theorem is an ingredient in the latter (though in the special case of the Matomaki-Radziwiłł theorem considered here, we will not need Halasz’s theorem directly, instead using a key tool in the *proof* of that theorem).

We begin with Halasz’s theorem. Here is a version of this theorem, due to Montgomery and to Tenenbaum:

Theorem 1 (Halasz-Montgomery-Tenenbaum). Let $f: \mathbb{N} \to \mathbb{C}$ be a multiplicative function with $|f(n)| \leq 1$ for all $n$. Let $x \geq 3$ and $T \geq 1$, and set

$$M := \inf_{|t| \leq T} \sum_{p \leq x} \frac{1 - \mathrm{Re}(f(p) p^{-it})}{p}.$$

Then one has

$$\frac{1}{x} \left| \sum_{n \leq x} f(n) \right| \ll (1+M) e^{-M} + \frac{1}{\sqrt{T}}.$$

Informally, this theorem asserts that $\sum_{n \leq x} f(n)$ is small compared with $x$, unless $f$ “pretends” to be like the character $n \mapsto n^{it}$ on primes for some small $t$. (This is the starting point of the “pretentious” approach of Granville and Soundararajan to analytic number theory, as developed for instance here.) We now give a “cheap” version of this theorem which is significantly weaker (because it settles for controlling logarithmic sums rather than summatory functions, because it requires $f$ to be completely multiplicative instead of multiplicative, because it requires a strong bound on the analogue of the quantity $M$, and because it only gives qualitative decay rather than quantitative estimates), but easier to prove:

Theorem 2 (Cheap Halasz). Let $x$ be an asymptotic parameter going to infinity. Let $f$ be a completely multiplicative function (possibly depending on $x$) such that $|f(n)| \leq 1$ for all $n$, and such that

Note that now that we are content with estimating logarithmic sums, we no longer need to preclude the possibility that $f$ pretends to be like $n^{it}$; see Exercise 11 of Notes 1 for a related observation.

To prove this theorem, we first need a special case of the Turan-Kubilius inequality.

Lemma 3 (Turan-Kubilius). Let $x$ be a parameter going to infinity, and let $w$ be a quantity depending on $x$ such that $w \to \infty$ and $\frac{\log w}{\log x} \to 0$ as $x \to \infty$. Then

Informally, this lemma is asserting that

$$\sum_{p \leq w: p \mid n} 1 \approx \log\log w$$

for most large numbers $n$. Another way of writing this heuristically is in terms of Dirichlet convolutions:

$$1 \approx \frac{1}{\log\log w}\, 1 * 1_{\mathcal{P} \cap [1,w]}.$$

This type of estimate was previously discussed as a tool to establish a criterion of Katai and Bourgain-Sarnak-Ziegler for Möbius orthogonality estimates in this previous blog post. See also Section 5 of Notes 1 for some similar computations.
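
The following small sketch (my own, with arbitrary parameter choices) illustrates the concentration underlying the lemma, sieving $\omega_{\leq w}(n) := \sum_{p \leq w : p \mid n} 1$ for $n \leq x$ and comparing it with $\log\log w$:

```python
import numpy as np

x, w = 10**6, 10**4
omega_w = np.zeros(x + 1, dtype=np.int64)
is_prime = np.ones(w + 1, dtype=bool); is_prime[:2] = False
for p in range(2, w + 1):
    if is_prime[p]:
        is_prime[2 * p::p] = False
        omega_w[p::p] += 1  # each prime p <= w contributes 1 to all its multiples

target = np.log(np.log(w))
dev = np.abs(omega_w[2:] - target)
print(f"log log w        = {target:.3f}")
print(f"mean omega_w(n)  = {omega_w[2:].mean():.3f}")  # close to log log w
print(f"mean |deviation| = {dev.mean():.3f}")          # of size ~ sqrt(log log w),
print(f"sqrt(log log w)  = {np.sqrt(target):.3f}")     # the Turan-Kubilius scale
```

(The mean exceeds $\log\log w$ slightly because of the Mertens constant; the point is that the fluctuations live at the much smaller scale $\sqrt{\log\log w}$.)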

*Proof:* By Cauchy-Schwarz it suffices to show that

Expanding out the square, it suffices to show that

for .

We just show the case, as the cases are similar (and easier). We rearrange the left-hand side as

We can estimate the inner sum as . But a routine application of Mertens’ theorem (handling the diagonal case when $p = q$ separately) shows that

and the claim follows.

Remark 4. As an alternative to the Turan-Kubilius inequality, one can use the Ramaré identity

(see e.g. Section 17.3 of Friedlander-Iwaniec). This identity turns out to give superior quantitative results to the Turan-Kubilius inequality in applications; see the paper of Matomaki and Radziwiłł for an instance of this.
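
The identity itself did not survive in the text above; a commonly used form (my reconstruction from the literature, with $\omega_{[P,Q]}(n)$ denoting the number of distinct prime factors of $n$ in $[P,Q]$) reads:

```latex
% A simple form of the Ramar\'e identity: for every natural number n,
\[
  \sum_{\substack{p \mid n \\ P \le p \le Q}} \frac{1}{\omega_{[P,Q]}(n)}
  \;=\; 1_{\omega_{[P,Q]}(n) \ge 1},
\]
% since the left-hand side consists of exactly omega_{[P,Q]}(n) equal terms.
% In applications one typically uses the variant with 1/(omega_{[P,Q]}(n/p)+1),
% which approximately decouples the prime p from the remaining factor n/p.
```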

We now prove Theorem 2. Let denote the left-hand side of (2); by the triangle inequality we have . By Lemma 3 (for some to be chosen later) and the triangle inequality we have

We rearrange the left-hand side as

We now replace the constraint by . The error incurred in doing so is

which by Mertens’ theorem is . Thus we have

But by definition of , we have , thus

From Mertens’ theorem, the expression in brackets can be rewritten as

and so the real part of this expression is

By (1), Mertens’ theorem and the hypothesis on we have

for any . This implies that we can find going to infinity such that

and thus the expression in brackets has real part . The claim follows.

The Turan-Kubilius argument is certainly not the most efficient way to estimate sums such as . In the exercise below we give a significantly more accurate estimate that works when the function is non-negative.

Exercise 5 (Granville-Koukoulopoulos-Matomaki)

- (i) If is a completely multiplicative function with for all primes , show that

as . (Hint: for the upper bound, expand out the Euler product. For the lower bound, show that , where is the completely multiplicative function with for all primes .)

- (ii) If is multiplicative and takes values in , show that

for all .

Now we turn to a very recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. For sake of illustration we specialise their results to the simpler case of the Liouville function $\lambda$, although their arguments actually work (with some additional effort) for arbitrary multiplicative functions of magnitude at most $1$ that are real-valued (or more generally, stay far from the complex characters $n \mapsto n^{it}$). Furthermore, we give a qualitative form of their estimates rather than a quantitative one:

Theorem 6 (Matomaki-Radziwiłł, special case). Let $X$ be a parameter going to infinity, and let $H = H(X) \leq X$ be a quantity going to infinity as $X \to \infty$. Then for all but $o(X)$ of the integers $x \in [X, 2X]$, one has

$$\sum_{x \leq n \leq x+H} \lambda(n) = o(H). \qquad (4)$$
A simple sieving argument (see Exercise 18 of Supplement 4) shows that one can replace $\lambda$ by the Möbius function $\mu$ and obtain the same conclusion. See this recent note of Matomaki and Radziwiłł for a simple proof of their (quantitative) main theorem in this special case.
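
For intuition on the statement (not a substitute for the proof), here is a sketch that samples random $x \in [X, 2X]$ and computes the normalised short-interval averages $\frac{1}{H}|\sum_{x \leq n \leq x+H} \lambda(n)|$; parameters are illustrative, and the Liouville sieve repeats the one from the first sketch in this document:

```python
import numpy as np

def liouville_upto(N):  # as in the earlier sketch
    Omega = np.zeros(N + 1, dtype=np.int64)
    for p in range(2, N + 1):
        if Omega[p] == 0:
            pk = p
            while pk <= N:
                Omega[pk::pk] += 1
                pk *= p
    return (-1) ** Omega

X, H = 10**6, 100
lam = liouville_upto(2 * X + H)
csum = np.concatenate(([0], np.cumsum(lam[1:])))  # csum[n] = sum_{m<=n} lambda(m)

rng = np.random.default_rng(1)
xs = rng.integers(X, 2 * X, size=2000)
avgs = np.abs(csum[xs + H] - csum[xs - 1]) / H  # |sum_{x<=n<=x+H} lambda(n)| / H
print(f"median normalised average = {np.median(avgs):.3f}")  # typically ~ 1/sqrt(H)
print(f"fraction of x with average > 0.3 = {np.mean(avgs > 0.3):.4f}")
```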

Of course, (4) improves upon the trivial bound of $O(H)$. Prior to this paper, such estimates were only known (using arguments similar to those in Section 3 of Notes 6) when $H$ is a power of $X$ unconditionally, or for $H \geq \log^A X$ for some sufficiently large $A$ if one assumed the Riemann hypothesis. This theorem also represents some progress towards Chowla’s conjecture (discussed in Supplement 4) that

$$\sum_{n \leq X} \lambda(n+h_1) \cdots \lambda(n+h_k) = o(X)$$

as $X \to \infty$ for any fixed distinct $h_1,\dots,h_k$; indeed, it implies that this conjecture holds if one performs a small amount of averaging in the $h_1,\dots,h_k$.

Below the fold, we give a “cheap” version of the Matomaki-Radziwiłł argument. More precisely, we establish

Theorem 7 (Cheap Matomaki-Radziwiłł). Let $x$ be a parameter going to infinity, and let . Then

Note that (5) improves upon the trivial bound of . Again, one can replace $\lambda$ with the Möbius function $\mu$ if desired. Due to the cheapness of Theorem 7, the proof will require few ingredients; the deepest input is the improved zero-free region for the Riemann zeta function due to Vinogradov and Korobov. Other than that, the main tools are the Turan-Kubilius result established above, and some Fourier (or complex) analysis.

** — 1. Proof of theorem — **

We now prove Theorem 7. We first observe that it will suffice to show that

for any smooth supported on (say) and respectively, as the claim follows by taking and to be approximations to and respectively and using the triangle inequality to control the error.

We will need a quantity that goes to infinity reasonably fast; for instance, will suffice. By Lemma 3 and the triangle inequality, we can replace in (5) by while only incurring an acceptable error. Thus our task is now to show that

I will (perhaps idiosyncratically) adopt a Fourier-analytic point of view here, rather than a more traditional complex-analytic point of view (for instance, we will use Fourier transforms as a substitute for Dirichlet series). To bring the Fourier perspective to the forefront, we make the change of variables and , and note that , to rearrange the previous claim as

Introducing the normalised discrete measure

it thus suffices to show that

where now denotes ordinary (Fourier) convolution rather than Dirichlet convolution.

From Mertens’ theorem we see that has total mass ; also, from the triangle inequality (and the hypothesis ) we see that is supported on and obeys the pointwise bound of ; also, the derivative of is bounded by . Thus we see that the trivial bound on is by Young’s inequality. To improve upon this, we use Fourier analysis. By Plancherel’s theorem, we have

where are the Fourier transforms

and

From Plancherel’s theorem and the derivative bound on we have

and

so the contribution of those with or is acceptable. Also, from the definition of we have

and so from the prime number theorem we have when ; since , we see that the contribution of the region is also acceptable. It thus suffices to show that

whenever and . But by definition of , we may expand as

so by smoothed dyadic decomposition (and by choosing with decaying sufficiently slowly) it suffices to show that

whenever for some sufficiently slowly decaying . We replace the summation over primes with a von Mangoldt function weight to rewrite this as

Performing a Fourier expansion of the smooth function , it thus suffices to show the Dirichlet series bound

as and (we use the crude bound to deal with the contribution). But this follows from the Vinogradov-Korobov bounds (which in fact give a bound of as ); see Exercise 43 of Notes 2 combined with Exercise 4(i) of Notes 5.

Remark 8. If one were working with a more general completely multiplicative function than the Liouville function $\lambda$, then one would have to use a duality argument to control the large values of (which could occur at a couple more locations than ), and use some version of Halasz’s theorem to also obtain some non-trivial bounds on at those large values (this would require some hypothesis that does not pretend to be like any of the characters with ). These new ingredients are in a similar spirit to the “log-free density theorem” from Theorem 6 of Notes 7. See the Matomaki-Radziwiłł paper for details (in the non-cheap case).


Theorem 1 (Linnik’s theorem). Let $a\ (q)$ be a primitive residue class. Then $a\ (q)$ contains a prime $p$ with $p \ll q^{O(1)}$.

In fact it is known that one can find a prime with , a result of Xylouris. For sake of comparison, recall from Exercise 65 of Notes 2 that the Siegel-Walfisz theorem gives this theorem with a bound of , and from Exercise 48 of Notes 2 one can obtain a bound of the form $p \ll q^{2+o(1)}$ if one assumes the generalised Riemann hypothesis. The probabilistic random models from Supplement 4 suggest that one should in fact be able to take $p \ll q \log^2 q$.

We will not aim to obtain the optimal exponents for Linnik’s theorem here, and follow the treatment in Chapter 18 of Iwaniec and Kowalski. We will in fact establish the following more quantitative result (a special case of a more powerful theorem of Gallagher), which splits into two cases, depending on whether there is an exceptional zero or not:

Theorem 2 (Quantitative Linnik theorem). Let $a\ (q)$ be a primitive residue class for some $q \geq 2$. For any $x > 1$, let $\psi(x; q, a)$ denote the quantity

$$\psi(x; q, a) := \sum_{\substack{n \leq x \\ n = a\ (q)}} \Lambda(n).$$

Assume that $x \geq q^C$ for some sufficiently large $C$.

- (i) (No exceptional zero) If all the real zeroes of -functions of real characters of modulus are such that , then
for all and some absolute constant .

- (ii) (Exceptional zero) If there is a zero of an -function of a real character of modulus with for some sufficiently small , then
for all and some absolute constant .

The implied constants here are effective.

Note from the Landau-Page theorem (Exercise 54 from Notes 2) that at most one exceptional zero exists (if $\varepsilon$ is small enough). A key point here is that the error term in the exceptional zero case is an *improvement* over the error term when no exceptional zero is present; this compensates for the potential reduction in the main term coming from the exceptional zero term. The splitting into cases depending on whether an exceptional zero exists or not turns out to be an essential technique in many advanced results in analytic number theory (though presumably such a splitting will one day become unnecessary, once the possibility of exceptional zeroes is finally eliminated for good).

Exercise 3Assuming Theorem 2, and assuming for some sufficiently large absolute constant , establish the lower boundwhen there is no exceptional zero, and

when there is an exceptional zero . Conclude that Theorem 2 implies Theorem 1, regardless of whether an exceptional zero exists or not.

Remark 4. The Brun-Titchmarsh theorem (Exercise 33 from Notes 4), in the sharp form of Montgomery and Vaughan, gives that

for any primitive residue class $a\ (q)$ and any $x \geq q$. This is (barely) consistent with the estimate (1). Any lowering of the coefficient $2$ in the Brun-Titchmarsh inequality (with reasonable error terms), in the regime when $x$ is a large power of $q$, would then lead to at least some elimination of the exceptional zero case. However, this has not led to any progress on the Landau-Siegel zero problem (and may well be just a reformulation of that problem). (When $x$ is a relatively small power of $q$, some improvements to Brun-Titchmarsh are possible that are not in contradiction with the presence of an exceptional zero; see this paper of Maynard for more discussion.)

Theorem 2 is deduced in turn from facts about the distribution of zeroes of -functions. Recall from the truncated explicit formula (Exercise 45(iv) of Notes 2) with (say) that

for any non-principal character of modulus , where we assume for some large ; for the principal character one has the same formula with an additional term of on the right-hand side (as is easily deduced from Theorem 21 of Notes 2). Using the Fourier inversion formula

(see Theorem 69 of Notes 1), we thus have

and so it suffices by the triangle inequality (bounding very crudely by , as the contribution of the low-lying zeroes already turns out to be quite dominant) to show that

when no exceptional zero is present, and

when an exceptional zero is present.

To handle the former case (2), one uses two facts about zeroes. The first is the classical zero-free region (Proposition 51 from Notes 2), which we reproduce in our context here:

Proposition 5 (Classical zero-free region). Let $q, T \geq 2$. Apart from a potential exceptional zero $\beta$, all zeroes $\sigma+it$ of $L$-functions $L(\cdot,\chi)$ with $\chi$ of modulus $q$ and $|t| \leq T$ are such that

$$\sigma \leq 1 - \frac{c}{\log(qT)}$$

for some absolute constant $c > 0$.

Using this zero-free region, we have

whenever contributes to the sum in (2), and so the left-hand side of (2) is bounded by

where we recall that is the number of zeroes of any -function of a character of modulus with and (here we use conjugation symmetry to make non-negative, accepting a multiplicative factor of two).

In Exercise 25 of Notes 6, the grand density estimate

$$\sum_{\chi\ (q)} N(\sigma, T, \chi) \ll (qT)^{O(1-\sigma)} \log^{O(1)}(qT) \qquad (4)$$
is proven. If one inserts this bound into the above expression, one obtains a bound for (2) which is of the form

Unfortunately this is off from what we need by a factor of $\log^{O(1)}(qT)$ (and would lead to a weak form of Linnik’s theorem in which $p$ was bounded by $q^{O(\log\log q)}$ rather than by $q^{O(1)}$). In the analogous problem for prime number theorems in short intervals, we could use the Vinogradov-Korobov zero-free region to compensate for this loss, but that region does not help here for the contribution of the low-lying zeroes with $t = O(1)$, which as mentioned before give the dominant contribution. Fortunately, it is possible to remove this logarithmic loss from the zero-density side of things:

Theorem 6 (Log-free grand density estimate). For any $q, T \geq 2$ and $\frac{1}{2} \leq \sigma \leq 1$, one has

$$\sum_{\chi\ (q)} N(\sigma, T, \chi) \ll (qT)^{O(1-\sigma)}.$$

The implied constants are effective.

We prove this estimate below the fold. The proof follows the methods of the previous section, but one inserts various sieve weights to restrict sums over natural numbers to essentially become sums over “almost primes”, as this turns out to remove the logarithmic losses. (More generally, the trick of restricting to almost primes by inserting suitable sieve weights is quite useful for avoiding any unnecessary losses of logarithmic factors in analytic number theory estimates.)
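
As a toy illustration of the general principle just described (sieve weights concentrating mass on almost primes), here is a numerical sketch; the specific weight shape, the cutoff `G`, and the level `R` are my own choices, loosely modelled on a Selberg sieve, and not the weights used later in these notes:

```python
import numpy as np

N, R = 10**5, 100  # sum over n <= N, sieve level R

# Mobius function up to R via a prime sieve.
mu = np.ones(R + 1, dtype=np.float64)
is_prime = np.ones(R + 1, dtype=bool); is_prime[:2] = False
for p in range(2, R + 1):
    if is_prime[p]:
        is_prime[2 * p::p] = False
        mu[p::p] *= -1
        mu[p * p::p * p] = 0

def G(u):  # crude cutoff: 1 at u = 0, vanishing for u >= 1
    return max(1.0 - u, 0.0)

# Selberg-type weight nu(n) = (sum_{d | n} lambda_d)^2, lambda_d = mu(d) G(log d / log R).
sums = np.zeros(N + 1)
for d in range(1, R + 1):
    sums[d::d] += mu[d] * G(np.log(d) / np.log(R))
nu = sums ** 2

# The weight concentrates its mass on numbers without small prime factors:
rough = np.ones(N + 1, dtype=bool)
for p in (2, 3, 5, 7, 11, 13, 17, 19):
    rough[p::p] = False
share = nu[2:][rough[2:]].sum() / nu[2:].sum()
print(f"share of nu-mass carried by 20-rough n <= {N}: {share:.3f}")
```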

Now we turn to the case when there is an exceptional zero (3). The argument used to prove (2) applies here also, but does not gain the factor of in the exponent. To achieve this, we need an additional tool, a version of the Deuring-Heilbronn repulsion phenomenon due to Linnik:

Theorem 8 (Deuring-Heilbronn repulsion phenomenon)Suppose is such that there is an exceptional zero with small. Then all other zeroes of -functions of modulus are such thatIn other words, the exceptional zero enlarges the classical zero-free region by a factor of . The implied constants are effective.

Exercise 9Use Theorem 6 and Theorem 8 to complete the proof of (3), and thus Linnik’s theorem.

Exercise 10. Use Theorem 8 to give an alternate proof of (Tatuzawa’s version of) Siegel’s theorem (Theorem 62 of Notes 2). (Hint: if two characters have different moduli, then they can be made to have the same modulus by multiplying by suitable principal characters.)

Theorem 8 is proven by similar methods to that of Theorem 6, the basic idea being to insert a further weight of (in addition to the sieve weights), the point being that the exceptional zero causes this weight to be quite small on the average. There is a strengthening of Theorem 8 due to Bombieri that is along the lines of Theorem 6, obtaining the improvement

with effective implied constants for any and in the presence of an exceptional zero, where the prime in means that the exceptional zero is omitted (thus if ). Note that the upper bound on falls below one when for a sufficiently small , thus recovering Theorem 8. Bombieri’s theorem can be established by the methods in this set of notes, and will be given as an exercise to the reader.

Remark 11There are a number of alternate ways to derive the results in this set of notes, for instance using the Turan power sums method which is based on studying derivatives such asfor and large , and performing various sorts of averaging in to attenuate the contribution of many of the zeroes . We will not develop this method here, but see for instance Chapter 9 of Montgomery’s book. See the text of Friedlander and Iwaniec for yet another approach based primarily on sieve-theoretic ideas.

Remark 12. When one optimises all the exponents, it turns out that the exponent in Linnik’s theorem is extremely good in the presence of an exceptional zero – indeed Friedlander and Iwaniec showed one can even get a bound of the form for some , which is even stronger than one can obtain from GRH! There are other places in which exceptional zeroes can be used to obtain results stronger than what one can obtain even on the Riemann hypothesis; for instance, Heath-Brown used the hypothesis of an infinite sequence of Siegel zeroes to obtain the twin prime conjecture.

** — 1. Log-free density estimate — **

We now prove Theorem 6. We will make no attempt here to optimise the exponents in this theorem, and so will be quite wasteful in the choices of numerical exponents in the argument that follows in order to simplify the presentation.

By increasing if necessary we may assume that

(say); we may also assume that is larger than any specified absolute constant. We may then replace by in the estimate, thus we wish to show that

Observe that in the regime

the claim already follows from the non-log-free density estimate (4). Thus we may assume that

for some , and the claim is now to show that there are at most zeroes of -functions with , , and a character of modulus . We may assume that , since the case follows from the case (and also essentially follows from the classical zero-free region, in any event).

For minor technical reasons it is convenient to first dispose of the contribution of the principal character. In this case, the zeroes are the same as those of the Riemann zeta function. From the Vinogradov-Korobov zero-free region we conclude there are no zeroes with and . Thus we may restrict attention to non-principal characters .

Suppose we have a zero of a non-principal character of modulus with and . From equation (48) of Notes 2 we then have

(say) for all . One can of course obtain more efficient truncations than this, but as mentioned previously we are not trying to optimise the exponents. If one subtracts the term from the left-hand side, this already gives a zero-detecting polynomial, but it is not tractable to work with because it contains too many terms with small (and is also not concentrated on those that are almost prime). To fix this, we weight the previous Dirichlet polynomial by , where is an arithmetic function supported on to be chosen later obeying the bound . We expand

and hence by (7) and the upper bound on

Since , one sees from the divisor bound and the hypothesis that is large that

If we have , then we can extract the term and obtain a zero-detecting polynomial:

We now select the weights . There are a number of options here; we will use a variant of the “continuous Selberg sieve” from Section 2 of Notes 4. Fix a smooth function that equals on and is supported on ; we allow implied constants to depend on . For any , define

Observe from Möbius inversion that for all . The weight was used as an upper bound Selberg sieve in Notes 4.

We will need the following general bound:

Lemma 13 (Sieve upper bound). Let , and let be a completely multiplicative function such that for all primes . Then

*Proof:* Clearly, we can restrict to those numbers whose prime factors do not exceed , for some large absolute constant .

By a Fourier expansion we can write

for some rapidly decreasing function , and thus the left-hand side of (10) may be written as

where we implicitly restrict to numbers whose prime factors do not exceed (note that this makes the integrand absolutely summable and integrable, so that Fubini’s theorem applies). We may factor this as

By the rapid decrease of , it thus suffices to show that

By Taylor expansion we can bound the left-hand side by

By Mertens theorem we can replace the constraint with . Since , it thus suffices to show that

But we can factor

and the claim follows from Mertens’ theorem.

We record a basic corollary of this estimate:

*Proof:* Writing , we can write the left-hand side of (12) as

Since is supported on and is bounded above by , the contribution of the error is which is acceptable. By Lemma 13 with , the contribution of the main term is , and the claim then follows from Mertens’ theorem.

We will work primarily with the cutoff

the reason for the separate scales and will become clearer later. The function is supported on , equals at , and is bounded by , so from the previous discussion we thus have the zero-detector inequality

whenever with of modulus , , and . Our objective is to show that the number of such zeroes is .

We first control the number of zeroes that are very close together. From equation (48) of Notes 2 with (say), we see that

whenever , , and is non-principal of modulus ; also from equation (45) of Notes 2 we have

From Jensen’s theorem (Theorem 16 of Supplement 2), we conclude that for any given non-principal and any , there are at most zeroes of (counting multiplicity, of course) with and . To prove Theorem 6, it thus suffices by the usual covering argument to establish the bound

whenever one has a sequence of zeroes with a non-principal character of conductor , , and , obeying the separation condition

Note from the existing grand zero-density estimate in (4) that

We write (13) for the zeroes as

and

and is a smooth function supported on which equals on . Note that the term in is .

We use the generalised Bessel inequality (Proposition 2 from Notes 3) with to conclude that

where are complex numbers with . (Strictly speaking, one needs to deal with the issue that the are not finitely supported, but there is enough absolute convergence here that this is a routine matter.) From Corollary 14 and dyadic decomposition we have

(note how the logarithmic factors cancel, which is crucial to obtaining our “log-free” estimate) and so from (17), the inequality and symmetry it suffices to show that

We now estimate the expression

From (9), the factor vanishes unless , and from the support of we see that the inner sum vanishes unless . From Exercise 44 of Notes 2, we then have

so we see from (16) and (6) that the contribution of the error term to (18) is acceptable. For the main term in (21), we see from Corollary 14 that

so (as the main term in (21) is independent of ) the remaining contribution to (18) is bounded by

Making the change of variables , this becomes

The integral is bounded by , and from two integration by parts it is also bounded by

On the other hand, for , the are -separated by hypothesis, and so

and the claim follows.

** — 2. Consequences of an exceptional zero — **

In preparation for proving Theorem 8, we investigate in this section the consequences of a Landau-Siegel zero, that is to say a real character of some modulus with a zero

for some with small. For minor technical reasons we will assume that is a multiple of , so that ; this condition can be easily established by multiplying by a principal character of modulus dividing . (We will not need to assume here that is primitive.)

In Notes 2, we already observed that the presence of an exceptional zero was associated with a small (but positive) value of ; indeed, from Lemmas 57 and 59 of Notes 2 we see that

Also, from the class number formula (equation (56) from Notes 2) we have

For the arguments below, one could also use the slightly weaker estimates in Exercise 67 of Notes 2 or Exercise 57 of Notes 3 and still obtain comparable results. We will however *not* rely on Siegel’s theorem (Theorem 62 of Notes 2) in order to keep all bounds effective.

We now refine this analysis. We begin with a complexified version of Exercise 58 from Notes 2:

Exercise 15. Let be a non-principal character of modulus . Let with and for some . If , show that

for any . (Hint: use the Dirichlet hyperbola method and Exercise 44 from Notes 2.)

If is a non-principal character of modulus with , show that

For technical reasons it will be convenient to work with a completely multiplicative variant of the function . Define the arithmetic function to be the completely multiplicative function such that for all ; this is equal to at square-free numbers, but is a bit larger at other numbers. Observe that is non-negative, and has the factorisation

where is a multiplicative function that vanishes on primes and obeys the bounds

for all and primes . In particular is non-negative and for , since we assumed . From Euler products we see that

so in particular the Dirichlet series is analytic and uniformly bounded on . This implies that

whenever with .

Taking Dirichlet series, we see that

whenever ; more generally, we have

for any character . Now we look at what happens inside the critical strip:

Exercise 16Let be a real character of modulus a multiple of , and let be as above. Let with and for some . If , show thatfor any .

We record a nice corollary of these estimates due to Bombieri, which asserts that the exceptional zero forces the function just constructed to vanish (or equivalently, $\chi$ to become $-1$) on most large primes:

Lemma 17 (Bombieri’s lemma). Let $\chi$ be a real character of modulus $q$ with an exceptional zero at $1 - \frac{\varepsilon}{\log q}$ for some sufficiently small $\varepsilon > 0$. Then for any , we have

Informally, Bombieri’s lemma asserts that $\chi(p) = -1$ for most primes $p$ between and . The exponent of here can be lowered substantially with a more careful analysis, but we will not do so here. For primes much larger than , $\chi(p)$ becomes equidistributed; see Exercise 22 below.

*Proof:* Without loss of generality we may take to be a multiple of . We may assume that , as the claim follows from Mertens’ theorem otherwise; in particular .

By (27) for we have

for any . Since and , we see from (31), (26) that the error term is dominated by the main term, thus

Next, applying (29) with replaced by and subtracting, we have

As , we have by Taylor expansion. As before, the error term can be bounded by the main term and so

Since is non-negative and completely multiplicative, one has

and thus (since )

Since , we have , and the claim follows.

Now we can give a more precise version of (23):

Proposition 18Let be a real character of modulus with an exceptional zero at for some sufficiently small . Thenfor any with .

Observe that (23) is a corollary of the case of this proposition thanks to Mertens’ theorem and the trivial bounds . We thus see from this proposition and Bombieri’s lemma that the exceptional zero controls at primes larger than , but that is additionally sensitive to the values of at primes below this range. For an even more precise formula for , see this paper of Goldfeld and Schinzel, or Exercise 23 below.

*Proof:* By Bombieri’s lemma and Mertens’ theorem, it suffices to prove the asymptotic for .

We begin with the upper bound

Applying (25) with and we have

The left-hand side is non-negative and , so we conclude (using (31)) that

From Euler products and Mertens’ theorem we have

But from Lemma 17 and Mertens’ theorem we see that

and the claim follows.

Now we establish the matching lower bound

Applying (25) with and we have

For , we have , and thus by (31)

Inserting this into (30) and using (31) and we conclude that

and the claim then follows from the preceding calculations.

Remark 19. One particularly striking consequence of an exceptional zero is that the spacing of zeroes of other $L$-functions becomes extremely regular; roughly speaking, for most other characters whose conductor is somewhat (but not too much) larger than the conductor of , the zeroes of (at moderate height) mostly lie on the critical line and are spaced in approximate arithmetic progression; this “alternative hypothesis” is in contradiction to the pair correlation conjecture discussed in Section 4 of Supplement 4. This phenomenon was first discovered by Montgomery and Weinberger and can roughly be explained as follows. By an approximate functional equation similar to Exercise 54 of Supplement 3, one can approximately write as the sum of plus times a gamma factor which oscillates like when . The smallness of on average for medium-sized (as suggested for instance by Bombieri’s lemma) suggests that these sums should be well approximated by much shorter sums, which oscillate quite slowly in . This gives an approximation to that is of the form for slowly varying , which can then be used to place the zeroes of this function in approximate arithmetic progression on the real line.

** — 3. The Deuring-Heilbronn repulsion phenomenon — **

We now prove Theorem 8. Let be such that there is an exceptional zero with small, associated to some quadratic character of modulus :

From the class number bound (equation (56) of Notes 2; one could also use Exercise 67 of Notes 2 for a similar bound) we have

Let , let be a character of modulus (possibly equal to or the principal character), and suppose we have

for some , with . Our task is to show that

(say), since the claim is trivial otherwise. By multiplying by the principal character of modulus if necessary, we may assume as before that is a multiple of , so that we can utilise the multiplicative function from the previous section. By enlarging , we may assume as in Section 1 that

we may also assume that is larger than any specified absolute constant. From the classical zero-free region and the Landau-Page theorem we have

The task (34) is then equivalent to showing that

We recall the sieve cutoffs

from Section 1, which were used in the zero detector. The main difference is that we will “twist” the polynomial by the completely multiplicative function :

Proposition 20 (Zero-detecting polynomial). Let the notation and assumptions be as above.

*Proof:* First suppose that is not equal to or the principal character. Since , we see from (28), (36), (35), (37) that

(say) for any . In particular, as is supported on and one has from the divisor bound, one has

(say), thanks to (35). Since equals for , we thus conclude (39) since is assumed to be large.

Now suppose that is or the principal character, so that . From (27), (36), (35), (37) we then have

for . By a similar calculation to before, we have

where we have used (37) and Proposition 18 in the last line. The claim (40) then follows from Corollary 14.

Using the estimates from the previous section, we can establish the following bound:

The point here is that the sieve weights and are morally restricting to almost primes, and that should be small on such numbers by Bombieri’s lemma. Assuming this proposition, we conclude that the left-hand sides of (39) or (40) are , and (38) follows.

*Proof:* By the Cauchy-Schwarz inequality it suffices to show that

We begin with the second bound (42), which we establish by quite crude estimates. By a Fourier expansion we can write

for some rapidly decreasing function , and thus

Bounding , we thus have

for any . Squaring and using Cauchy-Schwarz, we conclude that

for any . In particular, for , we have

and so we can bound the left-hand side of (42) by

which we bound by

By Mertens’ theorem we have

and the claim follows by taking large enough.

Now we establish (41). By dyadic decomposition it suffices to show that

for all . The left-hand side may be written as

From the hyperbola method we see that

for any , and thus

Since is supported on and is bounded by , the contribution of the is easily seen to be acceptable (using (32), (36)). The contribution of the main term is

but this is acceptable by Lemma 13, (26), and Proposition 18.

The proof of Theorem 8 is now complete.

Exercise 22. Let be a real quadratic character of modulus with a zero at for some small . Show that

and hence

for all and some absolute constant . (

Hint: use (3) and the explicit formula.) Roughly speaking, this exercise asserts that $\chi(p)$ is equidistributed for primes $p$ much larger than .

Exercise 23. Let be a real quadratic character of modulus with a zero at for some small . Show that

if is a sufficiently large absolute constant. (

Hint: use Exercise 81, Lemma 40, and Theorem 41 of Notes 1, as well as Exercise 22.)

Exercise 24 (Bombieri’s zero density estimate). Under the hypotheses of Theorem 8, establish the estimate (5). (Hint: repeat the arguments in Section 1, but now “twisted” by .)


However, it turns out that one can get much better bounds if one settles for estimating sums such as $\sum_{n \leq N} \Lambda(n) n^{-it}$, or more generally finite Dirichlet series (also known as *Dirichlet polynomials*) such as $\sum_{n \leq N} a_n n^{-it}$, for *most* values of $t$ in a given range such as $[T, 2T]$. Equivalently, we will be able to get some control on the *large values* of such Dirichlet polynomials, in the sense that we can control the set of $t$ for which $|\sum_{n \leq N} a_n n^{-it}|$ exceeds a certain threshold, even if we cannot show that this set is empty. These large value theorems are often closely tied with estimates for *mean values* such as $\int_T^{2T} |\sum_{n \leq N} a_n n^{-it}|^2\ dt$ of a Dirichlet series; these latter estimates are thus known as *mean value theorems* for Dirichlet series. Our approach to these theorems will follow the same sort of methods used in Notes 3, in particular relying on the generalised Bessel inequality from those notes.

Our main application of the large value theorems for Dirichlet polynomials will be to control the number of zeroes of the Riemann zeta function (or the Dirichlet $L$-functions $L(\cdot,\chi)$) in various rectangles of the form $\{\sigma+it : \sigma \geq \alpha,\ 0 \leq t \leq T\}$ for various $\frac{1}{2} \leq \alpha \leq 1$ and $T \geq 2$. These rectangles will be larger than the zero-free regions for which we can exclude zeroes completely, but we will often be able to limit the number of zeroes in such rectangles to be quite small. For instance, we will be able to show the following weak form of the Riemann hypothesis: as $T \to \infty$, a proportion $1-o(1)$ of zeroes of the Riemann zeta function in the critical strip with $0 \leq \mathrm{Im}(s) \leq T$ will have real part $\frac{1}{2}+o(1)$. Related to this, the number of zeroes with $\mathrm{Re}(s) \geq \frac{1}{2}+\varepsilon$ and $0 \leq \mathrm{Im}(s) \leq T$ can be shown to be bounded by $O(T^{1-c\varepsilon+o(1)})$ as $T \to \infty$ for any fixed $\varepsilon > 0$.

In the next set of notes we will use refined versions of these theorems to establish Linnik’s theorem on the least prime in an arithmetic progression.

Our presentation here is broadly based on Chapters 9 and 10 in Iwaniec and Kowalski, who give a number of more sophisticated large value theorems than the ones discussed here.

** — 1. Large values of Dirichlet polynomials — **

Our basic estimate on large values is the following estimate.

Theorem 1 ($L^2$ estimate on large values). Let and , and let be a sequence of complex numbers. Let be a -separated set of real numbers in an interval of length at most , thus for all . Let be real numbers. Then

This estimate is closely analogous to the analytic large sieve inequality (Proposition 6 from Notes 3). The factor is needed for technical reasons, but should be ignored at a first reading since it is comparable to one on the range . The bound (1) can be compared against the Cauchy-Schwarz bound

and against the pseudorandomness heuristic that $\sum_{n \leq N} a_n n^{-it}$ should be roughly of size $(\sum_{n \leq N} |a_n|^2)^{1/2}$ for “typical” $t$.

*Proof:* We first observe that to prove the theorem, it suffices to do so in the case . Indeed, if , one can simply increase until it reaches , which does not significantly affect the factor ; conversely, if , one can partition into subsets, each of diameter at most , and the claim then follows from the triangle inequality.

Without loss of generality we may assume the are in increasing order, so in particular for any .

Next, we apply the generalised Bessel inequality (Proposition 2 of Notes 3), using the weight defined as , where is a smooth function supported on which equals one on . This lets us bound the left-hand side of (1) by

for some coefficients with .

By choice of , we have

so it will suffice to show that

We now focus on the expression

When , we can bound this by , which gives an acceptable contribution; now we consider the case . One could estimate (2) in this case using Proposition 6 of Notes 5 and summation by parts to get a bound of here, but one can do better (saving a logarithmic factor) by exploiting the smooth nature of . Namely, by using the Poisson summation formula (Theorem 34 of Supplement 2), we can rewrite (2) as

which we rescale as

Since , one can check that the derivative of the phase is on the support of when , and when is non-zero, while the second derivative is . Meanwhile, the function is compactly supported and has the first two derivatives bounded by . From this and two integrations by parts we obtain the bounds

when and

when , thus on summing

We thus need to show that

But if one bounds we obtain the claim.

There is an integral variant of this large values estimate, although we will not use it much here:

Exercise 2 ($L^2$ mean value theorem). Let $T \geq 2$ and let $(a_n)_{n \leq N}$ be a sequence of complex numbers.

- (i) Show that

$$\int_0^T \Big| \sum_{n \leq N} a_n n^{-it} \Big|^2\ dt \ll (T+N) \sum_{n \leq N} |a_n|^2.$$

- (ii) Show the more precise estimate

$$\int_0^T \Big| \sum_{n \leq N} a_n n^{-it} \Big|^2\ dt = \sum_{n \leq N} (T + O(n)) |a_n|^2$$

for any $T \geq 2$ and $N \geq 1$. (Hint: Reduce to the case when and is supported on the range . Averaging Theorem 1 gives an upper bound of roughly the right order of magnitude; to get the asymptotic, apply Plancherel’s theorem to an expression of the form where is the indicator function convolved by some rapidly decreasing function whose Fourier transform is supported on (say) , and use the previous upper bound to control the error.)
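
As a sanity check on the shape of the estimate in (ii) (under my reconstruction of the displays above), one can compare the integral against $\sum_n (T + O(n))|a_n|^2$ numerically; the quadrature below is crude but adequate, and the parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 500.0, 200
n = np.arange(1, N + 1)
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)

t = np.linspace(0.0, T, 20001)
D = np.exp(-1j * np.outer(t, np.log(n))) @ a   # D(t) = sum_n a_n n^{-it}
lhs = np.sum(np.abs(D) ** 2) * (t[1] - t[0])   # crude quadrature of the integral

main = T * np.sum(np.abs(a) ** 2)              # main term T * sum |a_n|^2
budget = np.sum(n * np.abs(a) ** 2)            # size of the O(n) correction terms
print(f"integral = {lhs:.0f}, main term = {main:.0f}, O(n) budget = {budget:.0f}")
# The discrepancy |integral - main term| should be comparable to, or below, the budget.
```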

We will use the large values estimate in the following way:

Corollary 3. Let , and let be a sequence of complex numbers obeying a bound of the form for all . Let . Then, after deleting at most unit intervals from , we have

for all and all in the remaining portion of .

One can view this corollary as improving upon the trivial bound

coming from mean value theorems on multiplicative functions (see Proposition 21 of Notes 1), if one is allowed to delete some unit intervals from the range of , with the bound improving as one deletes more and more intervals.

*Proof:* Let be the set of all for which

for at least one choice of . By the greedy algorithm, we can cover by the union of intervals of the form , where are -separated points in . By hypothesis, we can find such that

for all , and hence

On the other hand, from mean value theorems on multiplicative functions (see Proposition 21 of Notes 1) we have

and thus by Theorem 1 we have

Comparing the two bounds we see that , and the claim follows.

The above estimate turns out to be rather inefficient if is very small compared with , because the factor becomes large compared with and so it is not even clear that one improves upon the trivial bound (4). However, by multiplying Dirichlet polynomials together one can get a good bound in this regime:

Corollary 4 Let , and let be a sequence of complex numbers obeying a bound of the form for all . Let , and let be a natural number. Then, after deleting at most unit intervals from , we have

*Proof:* We raise the original Dirichlet polynomial to the power to obtain

where is the Dirichlet convolution

From the bounds on we have . Subdividing into dyadic intervals and applying Corollary 3 (with replaced by ) and the triangle inequality, we conclude that upon deleting unit intervals from , we have

for all and all outside these intervals. Applying (6) and taking roots, we obtain the claim.
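
The coefficient bookkeeping in this proof is just iterated Dirichlet convolution. As a concrete illustration (the helper function below is ours, purely for exposition), squaring a Dirichlet polynomial with all coefficients equal to one produces the divisor function as the new coefficient sequence:

```python
from collections import defaultdict

def dirichlet_convolve(a, b, limit):
    """Coefficients of the product of two Dirichlet polynomials:
    (a*b)(n) = sum_{d*m = n} a(d) b(m), truncated at n <= limit."""
    c = defaultdict(float)
    for d, ad in a.items():
        for m, bm in b.items():
            if d * m <= limit:
                c[d * m] += ad * bm
    return dict(c)

N = 30
a = {n: 1.0 for n in range(1, N + 1)}        # the polynomial sum_{n<=N} n^{-s}
b = dirichlet_convolve(a, a, N * N)          # its square has the divisor function as coefficients
print([int(b[n]) for n in range(1, 13)])     # [1, 2, 2, 3, 2, 4, 2, 4, 3, 4, 2, 6]
```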

Exercise 5 Show that if and , then after deleting at most unit intervals from , one has for all in the remaining portion of . Conclude the fourth moment bound (Hint: use the approximate functional equation, Exercise 39 from Supplement 3.)

Remark 6 In 1926, Ingham showed the more precise asymptotic as . The higher moments for have been intensively studied, and are conjectured by Conrey and Gonek (using the random matrix model, see Section 4 of Supplement 4) to be asymptotic to for certain explicit constants , but this remains unproven. It can be shown that the Lindelof hypothesis is equivalent to the assertion that for all and . In a recent paper of Harper, it was shown assuming the Riemann hypothesis that for all and .

It is conjectured by Montgomery that Corollary 4 also holds for real , after replacing the factor by for any fixed ; see this paper of Bourgain for a counterexample showing that such a factor is necessary. (Amusingly, this counterexample relies on the existence of Besicovitch sets of measure zero!) If one had this, then one could optimise (5) in the range for any fixed by choosing so that , arriving (morally, at least) at a bound of the form

thus beating the trivial bound by about . This bound would have many consequences, most notably the *density hypothesis* discussed below. Unfortunately, Montgomery’s conjecture remains open. Nevertheless, one can obtain a weaker version of this bound by choosing natural numbers so that is *close* to , rather than exactly equal to :

Corollary 7 Let be such that for some , and let be a sequence of complex numbers obeying a bound of the form for all . Let . Then, after deleting at most unit intervals from , we have

Note that the bound (8) is not too much worse than (7) in the important regimes when is close to and when is close to .

*Proof:* We can choose a natural number such that , so that . Applying Corollary 4 with replaced by , we conclude (after deleting at most unit intervals from ) that

since , one obtains (8) with in place of . If instead one uses Corollary 4 with rather than , one obtains (after deleting a similar number of unit intervals) that

since , one obtains the remaining case of (8).

Exercise 8If one has the additional hypothesis , show that one can replace the factor in (8) by . Thus we can approach the conjectured estimate (7) in the regime where is very small compared with .

In the regime when is small, one can obtain better bounds by exploiting further estimates on exponential sums such as (2). We will give just one example (due to Montgomery) of such an improvement, referring the reader to Chapter 9 of Iwaniec-Kowalski or Chapter 7 of Montgomery for further examples.

Proposition 9 Let , and let be a sequence of complex numbers obeying a bound of the form for all . Let . Then, after deleting at most unit intervals from , we have

Note that this bound can in fact be superior to (7) if is a little bit less than and is less than .

*Proof:* Let be the set of all for which

for at least one choice of , where is a large constant to be chosen later. By the greedy algorithm as before, we can cover by the union of intervals of the form , where are -separated points in ; we arrange the in increasing order, so that . By hypothesis, we can find such that

and so by the generalised Bessel inequality (with ) we may bound the left-hand side of (10) by

for some with . The diagonal contribution is . For the off-diagonal contributions, we see from Propositions 6, 9 of Notes 5 (dealing with the cases and respectively) and summation by parts that

and so from the bound we can bound the left-hand side of (10) by

Since , we thus have

The second term on the right-hand side may be absorbed into the left-hand side if is large enough, and we conclude that

which gives , and the claim follows.

** — 2. Zero density estimates — **

We now use the large value theorems for Dirichlet polynomials to control zeroes of the Riemann zeta function with large real part . The key connection is that if is a zero of , then this will force some Dirichlet polynomial to be unusually large at that zero, which can be ruled out by the theorems of the previous section (after excluding some unit intervals from the range of ). Dirichlet polynomials with this sort of property are sometimes known as *zero-detecting polynomials*, for this reason.

Naively, one might expect, in view of the identities and for , that Dirichlet polynomials such as or might serve as zero-detecting polynomials. Unfortunately, in the regime , it is difficult to control the tail behaviour of these series. A more efficient choice comes from the following observation. Suppose that for some and , where is large. From Exercise 33 of Supplement 3, we have

for any , where is a smooth function equaling on and zero on , and is a sufficiently large multiple of . Thus

To eliminate the terms coming from small , we multiply both sides by the Dirichlet polynomial for some fixed . This series can be very crudely bounded by , so that (after adjusting ) we have

In particular, we have

where is the sequence

In particular, for large we have

and so serves as a zero-detecting polynomial.

From Möbius inversion, we see that is supported on the range , and is bounded by . We can thus decompose the Dirichlet polynomial as the sum of expressions of the form for . Applying Corollary 7 (or Corollary 3 to deal with the cases when ), we see that for any , and after deleting at most unit intervals from , we have

for all choices of and for all and all in the remaining portion of . Adding a large multiple of to (which does not significantly affect the error ), we can improve this to

(say). Summing this in , we see that

if

and is sufficiently large depending on . Comparing this with (11), we conclude

Proposition 10 Let and , and let be sufficiently large depending on . Then, after deleting at most unit intervals from , there are no zeroes of of the form with and in the remaining portion of .

For any and , let denote the number of zeroes of the Riemann zeta function (counting multiplicity) with and . Recall from Proposition 16 of Notes 2 that there are at most zeroes in any rectangle of the form with . Combining this with the previous proposition, we conclude

Corollary 11 (Zero-density estimate)

*Proof:* By dyadic decomposition (and reflection symmetry) we may replace the constraint in the definition of by . The claim then follows from the previous proposition with and a suitably small choice of .

Exercise 12 (Weak Riemann hypothesis) For any and , show that for all . Conclude in particular that for any , only of the zeroes of the Riemann zeta function in the rectangle have real part greater than or less than .

Remark 13 The sharpest result known in the direction of the weak Riemann hypothesis is by Selberg, who showed that for any and . Thus, “most” zeroes with lie within of the critical line. Improving upon this result seems to be closely related to making progress on the pair correlation conjecture (see Section 4 of Supplement 4).

Corollary 11 is far from the sharpest bound on known, but it will suffice for our applications; see Chapter 10 of Iwaniec and Kowalski for various stronger bounds on , formed by using more advanced large values estimates as well as more complicated zero detecting polynomials. The *density hypothesis* asserts that

for any , , and , thus replacing the exponent in Corollary 11 with . This hypothesis is known to hold for (a result of Huxley, in fact his estimates are even stronger than what the density hypothesis predicts), but is open in general; it is known that the exponent in Corollary 11 can be replaced by (by the work of Ingham and of Huxley), but at the critical value no further improvement is currently known. Of course, the Riemann hypothesis is equivalent to the assertion that for all , which is far stronger than the density hypothesis; in Exercise 15 below we will see that even the Lindelof hypothesis is sufficient to imply the density hypothesis. However, the density hypothesis can be a reasonable substitute for the Riemann hypothesis in some settings (e.g. in establishing prime number theorems in short intervals, as discussed below), and looks more amenable to a purely analytic attack (e.g. through resolution of the exponent pair conjecture, discussed briefly in Notes 5) than the Riemann hypothesis.

In Exercise 32 of Notes 2, it was observed that if there were no zeroes of the Riemann zeta function with real part larger than , then one had the upper bound as for fixed . The following exercise gives a sort of converse to this claim, “modulo the density hypothesis”:

Exercise 15 (Riemann zero usually implies large value of ) Let and , and let . Show that after deleting at most unit intervals from , the following holds: whenever is a zero of the Riemann zeta function with and in the remaining portion of , we have for some with and . (Hint: use Exercise 8 to dispose of the portion of the zero-detecting polynomial with . Then divide out by and conclude that a sum such as is large for some . Then use Lemma 34 from Supplement 3.) Conclude in particular that the Lindelof hypothesis implies the density hypothesis.

Remark 16 One can refine the above arguments to show that the density hypothesis is equivalent to the assertion that for any , one has the Lindelof-type bounds for all and all in the set with at most unit intervals removed, as . An application of Jensen’s formula shows that this claim is equivalent in turn to the assertion that for any and , the set contains zeroes of for all with at most involved. Informally, what this means is that the density hypothesis fails not just through the existence of a sufficient number of “exceptional” zeroes with , but in fact through a sufficient number of *clumps* of exceptional zeroes – such zeroes within a unit distance of each other.

** — 3. Primes in short intervals — **

As an application of the zero density estimates we obtain a prime number theorem in short intervals:

Theorem 17 (Prime number theorem in short intervals) For any fixed , we have whenever and . In particular, there exists a prime between and whenever is sufficiently large depending on .

Theorems of this type were first obtained by Hoheisel in 1930. If the exponent could be lowered to below , this would establish Legendre’s conjecture that there always exists a prime between consecutive squares , at least when is sufficiently large. Unfortunately, we do not know how to do this, even under the assumption of the Riemann hypothesis; the best unconditional result is by Baker, Harman, and Pintz, who established the existence of a prime between and for every sufficiently large . Among other things, this establishes the existence of a prime between adjacent cubes if is large enough.
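
As a numerical sanity check on Theorem 17 (a toy computation using sympy, not a proof; the exponent 0.7 is our choice for illustration, comfortably above the proven thresholds), one can count primes in a single short window and compare with the density predicted by the prime number theorem:

```python
import math
from sympy import primerange

x = 10**8
h = int(x ** 0.7)                     # a short interval; exponent 0.7 chosen for illustration
count = sum(1 for _ in primerange(x, x + h))
print(f"primes in (x, x+h]: {count}")
print(f"h / log x         : {h / math.log(x):.0f}")   # short-interval PNT prediction
```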

*Proof:* From the truncated explicit formula (Theorem 21 from Notes 2) we have

and similarly with replaced by . Subtracting, we conclude that

From the fundamental theorem of calculus we have

and so

By symmetry of the zeroes, it thus suffices to show that

we can bound the left-hand side of (14) by

From the Vinogradov-Korobov zero-free region (Exercise 5 of Notes 5), we see that the integrand vanishes when for any , if is sufficiently large depending on . (Indeed, the Vinogradov-Korobov region gives more vanishing than this, but this is all that we shall need.) Using this and Corollary 11, we may bound the left-hand side of (14) by

and the claim follows by choosing large enough depending on .

Exercise 18 Assuming the density hypothesis, show that the exponent in Theorem 17 can be replaced by ; thus the density hypothesis, though weaker than the Riemann or Lindelof hypotheses, is still strong enough to get a near-miss to the Legendre conjecture.

Exercise 19 Using the Littlewood zero-free region (see Exercise 4 of Notes 5 and subsequent remarks) in place of the Vinogradov-Korobov zero-free region, obtain Theorem 17 with replaced by some absolute constant . (This is close to the original argument of Hoheisel.)

As we have seen several times in previous notes, one can obtain better results on prime number theorems in short intervals if one works *on average* in , rather than demanding a result which is true for all . For instance, we have

Theorem 20 (Prime number theorem on the average) For any fixed , we have whenever and . In particular, there is a prime between and for almost all in , in the sense that the set of exceptions has measure .

*Proof:* By arguing as in (13) we have

where , so it suffices to show that

where the sum is understood to be over zeroes of the zeta function with . Since

it suffices to show that

for each ; shifting by , it suffices to show that

The left-hand side can be expanded as

By symmetry we may bound this by

For , we may bound

while for we have

so on summing using Proposition 16 of Notes 2 we have

and so it will suffice to show that

But this can be done by a repetition of the arguments used to establish (14); indeed, the left-hand side is

and by using the Vinogradov-Korobov zero-free region and Corollary 11 as before, we obtain the claim.

Exercise 21

- (i) Assuming without proof that the exponent in Corollary 11 can be lowered from to , show that the exponent in Theorem 20 may be lowered from to ; this implies in particular that Legendre’s conjecture is true for “almost all” .
- (ii) Assuming the density hypothesis, show that the exponent in Theorem 20 can be lowered all the way to zero.

** — 4. L-function variants — **

We have already seen in previous notes that many results about the Riemann zeta function extend to -functions , with the variable playing a role closely analogous to that of the imaginary ordinate . Certainly, for instance, one can prove zero-density estimates for a single -function for a single Dirichlet character of modulus by much the same methods as given previously, after the usual modification of replacing logarithmic factors such as with instead.

However, one can also prove “grand density theorems” in which one counts zeroes not just of a single -function , but of a whole *family* of -functions , in which ranges in some given family (e.g. *all* Dirichlet characters of a given modulus ). Such theorems are particularly useful when trying to control primes in an arithmetic progression , since Fourier expansion then requires one to consider all characters of modulus simultaneously. (It is also of interest to obtain grand density theorems for all Dirichlet characters of modulus *up to* some threshold , but we will not need to discuss this variant here.) In order to get good results in this regard, one needs a version of the large values theorems in which one averages over characters as well as imaginary ordinates . Here is a typical such result:

Theorem 22 ( estimate on large values with characters) Let and , let be a natural number, and let be a sequence of complex numbers. Let be real numbers in an interval of length , and let be Dirichlet characters of modulus (possibly non-primitive or principal). Assume the following separation condition on the pairs for :

Note that this generalises Theorem 1, which is the case of Theorem 22.

*Proof:* We mimic the proof of Theorem 1. By increasing , we may reduce without loss of generality to the case . As before, we apply the generalised Bessel inequality with the same weight used in Theorem 1, to reduce to showing that

Bounding and using symmetry, it thus suffices to show that

for each .

We now focus on the expression

As before, this expression is bounded by in the diagonal case , which gives an acceptable contribution, so we turn to the case . We split into two subcases, depending on whether or not.

First suppose that . We then split the sum in (17) into sums, each arising from a primitive residue class , on which . Applying a change of variables and then using Poisson summation as in the proof of Theorem 1, one can show that each individual sum is , so the total sum (17) is . Summing over all the with (the are -separated by hypothesis), we obtain an acceptable contribution in this case.

Now suppose that . Again, we can split the sum in (17) into sums, each of which is equal to times

for a primitive residue class . Repeating the analysis of Theorem 1, but now noting that is not required to be bounded below, we can write this latter sum as

Summing in , and noting that has mean zero, we see that the main term here cancels out, and we can bound (17) by . The total number of possible can be bounded by , and so the contribution of this case is acceptable (with a bit of room to spare).

Remark 23If one is willing to lose a factor of , the above proof can be simplified slightly by performing only one integration by parts rather than two in the integrals arising from Poisson summation.

We can now obtain a large values theorem in which the role of the interval is now replaced by the set of pairs , with a Dirichlet character of modulus , and an element of . Inside this set, we consider unit intervals of the form for some Dirichlet character and .

Exercise 24 Let , let be a natural number, and let be a sequence of complex numbers obeying a bound of the form for all . Let .

For any , , and natural number , let denote the combined number of zeroes of all of the -functions with of modulus , and , counting multiplicity of course. We can then repeat the proof of Corollary 11 to give

Exercise 25 (Grand zero-density estimate) Let , , and let be a natural number. Show that

Exercise 26 For any and , let denote the combined number of zeroes of all of the -functions with primitive and of conductor at most , and , counting multiplicity of course. Show that for any , one has (Hint: one will have to develop an analogue of Theorem 22 and Exercise 24 in which one works with primitive characters of conductor at most , rather than all characters of a fixed modulus , and with all references to replaced by .)

As with the density theorems for the zeta function, the exponents here may be improved somewhat, with the conjecturally being reducible to for any ; see Chapter 10 of Iwaniec and Kowalski. (Of course, the generalised Riemann hypothesis asserts that in fact vanishes whenever .)

Given that the density estimates for the Riemann zeta function yield prime number theorems in short intervals, one expects the grand density estimates to yield prime number theorems in sparse (and short) arithmetic progressions. However, there are two technical issues with this. The first is that the analogue of the Vinogradov-Korobov zero-free region is not necessarily wide enough (in some ranges of parameters) for the argument to work. The second is that one may encounter an exceptional (Landau-Siegel) zero. These difficulties can be overcome, leading to Linnik’s theorem (as well as the quantitative refinement of this theorem by Gallagher); this will be the focus of the next set of notes.

Filed under: 254A - analytic prime number theory, math.NT Tagged: Dirichlet polynomials, Dirichlet series, prime number theorem, zero density theorems ]]>

In equation (21) of Notes 2 we obtained the somewhat crude estimates

for any and with and . Setting , we obtained the crude estimate

in this region. In particular, if and then we had . Using the functional equation and the Hadamard three lines lemma, we can improve this to ; see Supplement 3.

Now we seek better upper bounds on . We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 33 of Supplement 3:

Proposition 1 Let with and . Then where .

*Proof:* We fix a smooth function with for and for , and allow implied constants to depend on . Let with . From Exercise 33 of Supplement 3, we have

for some sufficiently large absolute constant . By dyadic decomposition, we thus have

We can absorb the first term in the second using the case of the supremum. Writing , where

it thus suffices to show that

for each . But from the fundamental theorem of calculus, the left-hand side can be written as

and the claim then follows from the triangle inequality and a routine calculation.
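
The kind of cancellation we are after is easy to observe numerically. In the following toy sketch (using the phase n^{-it} = exp(-i t log n), as in the proposition), the dyadic sum is far smaller than its trivial bound:

```python
import numpy as np

# S(t) = sum_{N < n <= 2N} n^{-it}: compare |S| against the trivial bound N
N, t = 10**4, 10**6
n = np.arange(N + 1, 2 * N + 1)
S = np.sum(np.exp(-1j * t * np.log(n)))
print(f"|S| = {abs(S):.1f}  versus trivial bound N = {N}")
```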

We are thus interested in getting good bounds on the sum . More generally, we consider normalised exponential sums of the form

where is an interval of length at most for some , and is a smooth function. We will assume smoothness estimates of the form

for some , all , and all , where is the -fold derivative of ; in the case , of interest for the Riemann zeta function, we easily verify that these estimates hold with . (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative of directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to .

The trivial bound for (2) is

and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of for (2) for any if ; this assertion (a special case of the *exponent pair hypothesis*) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form when and depends on . We present two such results here, which perform well for small and large values of respectively:

Theorem 2 Let , let be an interval of length at most , and let be a smooth function obeying (3) for all and .

- (i) (van der Corput estimate) For any natural number , one has
- (ii) (Vinogradov estimate) If is a natural number and , then
for some absolute constant .

The factor of can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of . The estimate (6) is superior to (5) when for large, since (after optimising in ) (5) gives a gain of the form over the trivial bound, while (6) gives . We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on , and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.

We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating like a polynomial of degree roughly ; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a *bilinear* function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about times, which is why the gain is only exponentially small in . The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of – by using Hölder’s inequality in combination with the fact that the discrete curve becomes roughly equidistributed in the box after taking the sumset of about copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on a “-adic” version of this equidistribution to reduce the claim at a given scale to a smaller scale with , and then proceeding by induction.

One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:

Exercise 3 (Subconvexity bound)

- (i) Show that for all . (Hint: use the case of the van der Corput estimate.)
- (ii) For any , show that as .

Exercise 4 Let be such that , and let .

- (i) (Littlewood bound) Use the van der Corput estimate to show that whenever .
- (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that whenever .

As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region , which in turn leads to the prime number theorem with error term

for . If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region

(which is only slightly wider than the classical zero-free region) and an error term

in the prime number theorem.

Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let be a non-principal character of modulus .

- (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that whenever and . (Hint: use the Vinogradov estimate and a change of variables to control for various intervals of length at most and residue classes , in the regime (say). For , do not try to capture any cancellation and just use the triangle inequality instead.)
- (ii) Obtain a zero-free region for , for some (effective) absolute constant .

- (iii) Obtain the prime number theorem in arithmetic progressions with error term
whenever , , is primitive, and depends (ineffectively) on .

** — 1. Van der Corput estimates — **

In this section we prove Theorem 2(i). To motivate the arguments, we will use an analogy between the sums and the integrals (cf. Exercise 11 from Notes 1). This analogy can be made rigorous by the Poisson summation formula (after applying some smoothing to the integral over to truncate the frequency summation), but we will not need to do so here.

Write . We can control the integral by integration by parts:

If obeys (3) for , we thus have

An analogous argument, using summation by parts instead of integration by parts, controls the sum , but only in the regime when :

Proposition 6 Let and , let be an interval of length at most , and let be a smooth function obeying the bounds for and . If , then

*Proof:* We may assume that for some integers , and after deleting the right endpoint it suffices to show that

From the case of (3) and the mean value theorem one has for all . Thus we have

and we can write

From the cases of (3) and the mean value theorem we see that has size and derivative on . The claim then follows from a summation by parts.

To use this proposition in the regime , we will use the following inequality of van der Corput, which is a basic application of the Cauchy-Schwarz inequality:

Proposition 7 (van der Corput inequality) Let , let be an interval of length , and let be a function. Then

The point of this proposition is that it effectively replaces the phase by the differenced phase for some medium-sized ; it is an oscillatory version of the trivial observation that if is close to constant, then is also close to constant. From the fundamental theorem of calculus, we see that if obeys the estimates (3), then obeys a variant of (3) in which has been replaced by . Since , this reduces , and so one can hope to then iterate this proposition to the point where one can apply Proposition 6.

*Proof:* By rounding down, we may assume that is an integer. For any , we have

and thus on averaging in

There are only values of for which the inner sum is non-vanishing. By the Cauchy-Schwarz inequality, we thus have

and so it will suffice to show that

The left-hand side may be expanded as

The contribution of the diagonal terms is which is acceptable. For the off-diagonal terms, we may use symmetry to restrict to the case , picking up a factor of . After shifting by , we may thus bound this contribution by

Since each occurs times as a difference , the claim follows.
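
The inequality is easy to verify numerically. The sketch below checks one standard form of the Weyl-van der Corput inequality, namely |sum_n a_n|^2 <= ((N+H)/H) sum_{|h|<H} (1 - |h|/H) |sum_{n, n+h in I} a_{n+h} conj(a_n)|; the precise constants here are our own assumption and may differ slightly from the normalisation used above.

```python
import numpy as np

def e(x):
    return np.exp(2j * np.pi * x)

N, H = 500, 20
alpha = np.sqrt(2.0)
a = e(alpha * np.arange(1, N + 1) ** 2 / N)   # a slowly varying quadratic phase

lhs = abs(np.sum(a)) ** 2
rhs = 0.0
for h in range(-H + 1, H):
    k = abs(h)
    inner = np.sum(a[k:] * np.conj(a[:N - k]))   # sum over n with both n, n+|h| in range
    rhs += (1 - k / H) * abs(inner)              # |inner| is the same for h and -h
rhs *= (N + H) / H
print(f"lhs = {lhs:.1f}, rhs = {rhs:.1f}, inequality holds: {lhs <= rhs}")
```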

Exercise 8 (Qualitative Weyl exponential sum estimates) Let be a polynomial with real coefficients .

- (i) If and is irrational, show that as . (Hint: induct on , using geometric series for the base case and Proposition 7 for the induction step.)
- (ii) If and at least one of is irrational, show that as .
- (iii) If all of the are rational, show that converges as to a limit that is not necessarily zero.
One can obtain more quantitative estimates on the decay rate of in terms of how badly approximable by rationals one or more of the coefficients are; see for instance Chapter 8 of Iwaniec-Kowalski for some estimates of this type.
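
For instance, part (i) of the above exercise can be observed numerically for the quadratic phase alpha n^2 with alpha = sqrt(2) (a toy illustration only):

```python
import numpy as np

alpha = np.sqrt(2.0)
for N in [10**3, 10**4, 10**5]:
    n = np.arange(1, N + 1)
    S = np.sum(np.exp(2j * np.pi * alpha * n**2))
    print(f"N = {N:>6}: |S|/N = {abs(S) / N:.4f}")    # tends to 0 as N grows
```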

If we combine one application of Proposition 7 with Proposition 6, we conclude

Proposition 9 Let and , let be an interval of length at most , and let be a smooth function obeying the bounds for and . Then

*Proof:* If then the claim follows from Proposition 6 (or from (4), if ), so we may assume that . We can also assume that , since otherwise the claim follows from the trivial bound (4).

Set , then . By Proposition 7 we have

where . From the fundamental theorem of calculus we have

for . By Proposition 6 we then have

so on summing in

and the claim follows from the choice of .

We can iterate this:

Proposition 10 Let be a sufficiently large constant. Then for any , any , any natural number , any interval of length at most , and any smooth obeying the bounds

We have avoided the use of asymptotic notation for this proposition because we need to use induction on .

*Proof:* We induct on . The case follows from Proposition 9. Now suppose that and the claim has already been proven for .

If then the claim follows from (4) (for large enough), so suppose that . Then the quantity is such that . By Proposition 7 we have

From the fundamental theorem of calculus, obeys the bounds (7) for with replaced by . Thus by the induction hypothesis,

Performing the sum over , we obtain

which by the choice of simplifies to

and the claim follows for large enough.

Now we can prove part (i) of Theorem 2. We can assume that , since the claim follows from (4) otherwise. For any natural number , we can apply Proposition 10 with and conclude that

Taking infima over , it then suffices to show that

whenever . (The case when can be easily derived from the case, after conceding a multiplicative factor of .)

We prove (8) by induction on . The case is clear, so suppose and that (8) has already been proven for . If

then

and the claim follows from the term of the infimum. If instead

then

and the claim follows from the induction hypothesis.

** — 2. Vinogradov estimate — **

We now prove Theorem 2(ii), loosely following the treatment in Iwaniec and Kowalski. We first observe that for bounded values of , part (ii) of this theorem follows from the already proven part (i) (replacing by, say, ), so we may assume now that is larger than any specified absolute constant.

The first step of Vinogradov’s argument is to use a Taylor expansion and a bilinear averaging to reduce the problem to a bilinear exponential sum estimate involving polynomials with medium-sized coefficients. More precisely, we will derive Theorem 2(ii) from

Theorem 11 (Bilinear estimate) Let be a constant, let be a sufficiently large natural number, let , and let be real numbers, with the property that

Let us see how Theorem 2(ii) follows from Theorem 11. By reducing as necessary, we may assume that

for some large constant , as the claim is trivial otherwise.

Set . For any , we have

and hence on averaging in

The condition can only be satisfied if lies within of , and if lies in and is further than from the endpoints of then the constraint may be dropped. It thus suffices to show that

for all in that are further than from the endpoints of .

Fix . From (3) and Taylor expansion we see that

where . Since and , we see from (11) that the error term is (say) if is large enough, so

and it thus suffices to show that

From (3) we have

which (since and is large) implies from (11) that

if (say). The claim now follows from Theorem 11 (with replaced by ).

It remains to prove Theorem 11. We fix and allow all implied constants to depend on . We may assume that

for some large constant (depending on ), as the claim is trivial otherwise.

We write the left-hand side of (10) as

where is the bilinear form

and is the polynomial curve

lies in the box , but occupies only a very sparse subset of this box. However, if one takes iterated sumsets

of this curve, one expects (if is large compared with ) to fill out a much denser subset of this box (or more precisely, of a slightly larger box in which all sides have been multiplied by ), due to the “curvature” of (13). By using the device of Hölder’s inequality, we will be able to estimate the sparse sum with a sum over such a sumset in a box, which will be significantly easier to estimate, provided one can establish the expected density property of the sumset, or at least something reasonably close to that density property. This latter claim will be accomplished by a deep result known as the Vinogradov mean value theorem.

We turn to the details. Let be a large natural number (depending on ) to be chosen later; in fact we will eventually take for a large absolute constant . From Hölder’s inequality we have

We remove the absolute values to write this as

for some coefficients of magnitude . The right-hand side can be rearranged using the bilinearity of as

We collect some terms to obtain the inequality

where for each , is the number of representations of of the form with . Note that is supported in the box

We now use Hölder’s inequality again to spread out the vectors, and also to separate the weight from the phase. Specifically, we have

The quantity has a combinatorial interpretation as the number of solutions to the equation

with ; its estimation is the deepest and most difficult part of Vinogradov’s argument, and will be discussed presently. Leaving this aside for now, we return to (16) and expand out the right-hand side, using the triangle inequality and bilinearity to bound it by

The quantity lies in the box . Furthermore, from (15) and the Cauchy-Schwarz inequality, every has at most representations of the form

Thus we arrive at the inequality

The point here is that the phase is now a linear function of the variables , in contrast to the polynomial function of the variables in the original definition of . In particular, the exponential sum is now fairly easy to estimate:

Lemma 12 If , then we have for some (depending only on ).

*Proof:* We can factor the left-hand side as

Since and we have the trivial bound

it will suffice to show that

for at least choices of , for some .

By (9), we can find at least choices of with and

By summing the geometric series, or by using the trivial bound, we see that

where denotes the distance of to the nearest integer. On any interval of length , we see from the quantitative integral test (Lemma 2 of Notes 1) that

and hence

and the claim follows since , , and for some large .

From this lemma and (18) we have

for some depending only on . To conclude the desired bound , it thus suffices to establish the following result:

Theorem 13 (Vinogradov mean value theorem) Let be natural numbers such that . Then

If we apply this theorem with equal to a sufficiently large multiple of , we obtain the required bound .

Actually, Vinogradov proved a slightly weaker estimate than this; the claim above (in a sharper form) was obtained by later refinements of the argument due to Stechkin and Karatsuba. This result has a number of applications beyond that of controlling the Riemann zeta function; for instance it has applications to the Waring problem of expressing large natural numbers as sums of powers, which we will not discuss further here. The estimate (19) should be compared with the lower bound

where the term comes from the diagonal contribution to (17), and the term comes from (14) and the Cauchy-Schwarz inequality. Informally, (19) is an assertion that the -fold sum of the discrete curve (13) is somewhat close to being uniformly distributed on . The *main conjecture* of Vinogradov asserts the near-optimal bound

for any choice of and . In the recent work of Wooley and Ford-Wooley, an improved version of the congruencing method given below known as *efficient congruencing* has been developed and used to establish the main conjecture (20) in many cases. See this recent ICM proceedings article of Wooley for a survey of the latest developments in this direction.
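
For very small parameters the mean value can be computed by brute force. The sketch below (with our own ad hoc notation for the count) does the case s = k = 2, where only the diagonal solutions {x_1, x_2} = {y_1, y_2} survive, so the count is exactly 2 X^2 - X, matching the diagonal contribution discussed above:

```python
from collections import Counter
from itertools import product

def vinogradov_count(s, k, X):
    """Brute-force count of solutions to sum_i x_i^j = sum_i y_i^j for j = 1..k,
    with s variables per side, all in {1, ..., X}."""
    counts = Counter()
    for xs in product(range(1, X + 1), repeat=s):
        counts[tuple(sum(x**j for x in xs) for j in range(1, k + 1))] += 1
    return sum(c * c for c in counts.values())

for X in [5, 10, 20, 40]:
    print(f"X = {X:>3}: J = {vinogradov_count(2, 2, X):>5},  2X^2 - X = {2 * X * X - X}")
```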

We now turn to the proof of the Vinogradov mean value theorem. Since

and , we have , and so it will suffice to show that

whenever and is the normalised quantity

which can be viewed as a measure of irregularity of distribution of the quantity for .

We have the extremely crude bound

coming from the fact that there are choices of , so that

This already establishes the claim when (say). For , what we will do is establish the recursive inequality

whenever and for some ; iterating this times and then using (23), we will obtain the claim. Note how it is important here that no powers of are lost in the estimate (24). However, we can be quite generous in losing factors such as , , or as these are easily absorbed in the term. To prove (24), we begin with a technical reduction. For a prime , let us first define a restricted version of to be the number of solutions to

with , with having distinct reductions mod , and also having distinct reductions mod . As we are taking to be rather large, one should think of the constraint of having distinct reductions mod as a mild condition, so that is morally of the same size as . This is confirmed by the following lemma:

Lemma 14

The reason why we need the prime to be somewhat comparable to will be clearer later.

*Proof:* By definition, is the number of solutions to

with . If the set has cardinality less than , then there are ways to choose the , and ways to choose , leading to a contribution of at most to . Similarly if has cardinality less than . Thus we may restrict attention to the case when and have cardinality at least . By paying a factor of , we may then restrict to the case where are all distinct, and are all distinct. In particular, the quantity

is non-zero and has magnitude at most . In particular, there are at most primes larger than that divide this quantity. On the other hand, from the prime number theorem we can find distinct primes in the range . We thus see that for each solution to (25) with , distinct, and distinct, there is a such that has distinct reductions mod , and also has distinct reductions mod , so that this tuple contributes to . Thus we have

and the claim follows.

Introducing the normalised quantity

and recalling that , we conclude that

The next step is to analyse the multiplicity properties of the -fold sum . Clearly we have

where the power sums are defined by

We recall a classical relation, known as Newton’s identities (or *Newton-Girard identities*), between these power sums and the elementary symmetric polynomials , defined for by the formula

For instance , , and for all . One can also view the as essentially being the coefficients of the polynomial :

Lemma 15 (Newton’s identities) For any , one has the polynomial identity

Thus for instance

*Proof:* We use the method of generating functions. We rewrite the polynomial identity (27) as

On the other hand, taking logarithmic derivatives we have (as formal power series)

From the geometric series formula we have (as formal power series)

and the claim then follows by equating coefficients.
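
In the concrete form k e_k = sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i (one common way of writing Newton's identities, which we believe is equivalent to the identity above after equating coefficients), the recursion is easy to verify numerically:

```python
import math
from itertools import combinations

def e_k(xs, k):
    """Elementary symmetric polynomial e_k(xs), with e_0 = 1."""
    return sum(math.prod(c) for c in combinations(xs, k)) if k else 1

def p_j(xs, j):
    """Power sum p_j(xs) = sum of x**j."""
    return sum(x**j for x in xs)

xs = [2, 3, 5, 7]
for k in range(1, len(xs) + 1):
    lhs = k * e_k(xs, k)
    rhs = sum((-1) ** (i - 1) * e_k(xs, k - i) * p_j(xs, i) for i in range(1, k + 1))
    print(f"k = {k}: {lhs} == {rhs}")
```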

Corollary 16 If , then the quantity determines the multiset up to permutations. In particular, any element of has at most representations of the form .

*Proof:* The quantity determines the quantities for . By Newton’s identities and induction, this determines the quantities for , and thus determines the polynomial . The claim now follows from the unique factorisation of polynomials.

This particular result turns out to not give particularly good bounds on ; the sums are so sparsely distributed that the number of representations of a given (in, say, the box ) is typically zero. However, if we localise “-adically”, by replacing the integers with the ring , we get a more usable result:

Corollary 17 Let be a prime with . Then any has at most representations of the form with having distinct reductions mod , where by abuse of notation we localise to the ring in the obvious fashion.

To put it another way, this corollary asserts that the map from to is at most -to-one, if one excludes those which do not have distinct reductions mod .

*Proof:* Suppose we have two representations

with and each having distinct reductions mod . By Newton’s identities as before, we have for (note that as there is no difficulty dividing by for ). Therefore we have the polynomial identity

as polynomials over . In particular, each is a root of and thus must equal one of the since the reductions mod are all distinct (and so all but one of the factors will be invertible in ). This implies that is a permutation of , and the claim follows.

Returning to the integers, and specialising to the range of primes produced by Lemma 14, we conclude

Lemma 18 (Linnik’s lemma) Let be a prime with , and let . Then the number of solutions to the system with and , with having distinct reductions modulo , is at most .

A version of this lemma also applies for primes outside of the range , but this is basically the range where the lemma is the most efficient. Indeed, probabilistic heuristics suggest that the number of solutions here should be approximately , which equals in the range .

*Proof:* Each residue class mod consists of residue classes mod . Since

it thus suffices (replacing with different elements of ) to show that for any , the number of solutions to the lifted system

with and is at most . But each residue class mod meets in at most representatives, so the claim follows from Corollary 17, crudely bounding by .

Now we need to consider sums of terms, rather than just terms. Recall that is the number of solutions to

with , with having distinct reductions mod , and also having distinct reductions mod . We also need an even more restricted version of , which is defined similarly to but with the additional constraint that

Probabilistic heuristics suggest that should be about as large as . One side of this prediction is confirmed by the following application of Hölder’s inequality:

Lemma 19 (Hölder inequality) We have In particular, introducing the normalised quantity

*Proof:* It is easiest to prove this by Fourier-analytic means. Observe that we have the Fourier expansion

where

where the asterisk denotes the restriction to those with distinct reductions modulo and is the usual dot product , and

Similarly, we have

where

Since , the claim now follows from Hölder’s inequality.

Exercise 20 Find a non-Fourier-analytic proof of the above lemma, based on the Cauchy-Schwarz inequality, in the case when is a power of two. (Hint: you may wish to generalise the Hölder inequality to one involving the number of solutions to systems where each is drawn from a finite multiset (such quantities are known as *additive energies of order* in the additive combinatorics literature).) For an additional challenge: find a Cauchy-Schwarz proof that works for arbitrary values of .

Next, we use the Linnik lemma and some elementary arguments to bound :

Lemma 21 If , then where . In particular, from (22) and (28) we have

*Proof:* By definition, is the number of solutions to the system

where , with and each having distinct reductions mod , and also

for some . From the binomial theorem, is a linear transform of , so

Writing and for and some , we conclude that

In particular, taking the coefficient

There are choices for , and then by Linnik’s lemma once are chosen, there are at most choices for . Once these are chosen, we still have to select subject to a constraint of the form

for some depending on . The number of solutions to this system is

which by the Cauchy-Schwarz inequality is bounded by . The claim follows.

Combining (26), (29), and (30) we obtain (24) as required.

Filed under: 254A - analytic prime number theory, math.CA, math.NT Tagged: exponential sums, Riemann zeta function, van der Corput lemma, Vinogradov main theorem, Vinogradov-Korobov bound ]]>

for all primes and some fixed (we allow all constants below to depend on ). Let , and for each prime , let be a set of integers, with for . We consider finitely supported sequences of non-negative reals for which we have bounds of the form

for all square-free and some , and some remainder terms . One is then interested in upper and lower bounds on the quantity

The fundamental lemma of sieve theory (Corollary 19 of Notes 4) gives us the bound

This bound is strong when is large, but is not as useful for smaller values of . We now give a sharp bound in this regime. We introduce the functions by

where we adopt the convention . Note that for each one has only finitely many non-zero summands in (6), (7). These functions are closely related to the Buchstab function from Exercise 28 of Supplement 4; indeed from comparing the definitions one has

for all .

Exercise 1 (Alternate definition of ) Show that is continuously differentiable except at , and is continuously differentiable except at where it is continuous, obeying the delay-differential equations

for , with the initial conditions

for and

for . Show that these properties of determine completely.
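
One can also integrate this delay-differential system numerically. The sketch below assumes the standard linear sieve normalisation (as in, e.g., Friedlander-Iwaniec): F(s) = 2 e^gamma / s for 0 < s <= 3, f(s) = 2 e^gamma log(s-1) / s for 2 <= s <= 4 with f vanishing for s <= 2, and the delay-differential equations (s F(s))' = f(s-1) and (s f(s))' = F(s-1) beyond those ranges; if the conventions above differ, the constants should be adjusted accordingly. Both functions approach 1 rapidly as s increases, consistent with the asymptotics discussed after Theorem 2 below.

```python
import math

# Assumed normalisation (standard linear sieve; adjust if conventions differ):
#   F(s) = c/s                 for 0 < s <= 3,     where c = 2 * exp(gamma)
#   f(s) = 0 for s <= 2,       f(s) = c*log(s-1)/s for 2 <= s <= 4
#   (sF)' = f(s-1) for s > 3,  (sf)' = F(s-1) for s > 4
gamma = 0.5772156649015329
c = 2 * math.exp(gamma)
ds = 1e-4
steps = int(8.0 / ds)          # integrate up to s = 8
lag = round(1.0 / ds)          # index offset for the delay s - 1

F = [0.0] * (steps + 1)
f = [0.0] * (steps + 1)
for i in range(1, steps + 1):
    s = i * ds
    # F: closed form up to s = 3, then an Euler step for (sF)' = f(s-1)
    F[i] = c / s if s <= 3 else ((s - ds) * F[i - 1] + ds * f[i - lag]) / s
    # f: zero up to s = 2, closed form up to s = 4, then Euler step for (sf)' = F(s-1)
    if s <= 2:
        f[i] = 0.0
    elif s <= 4:
        f[i] = c * math.log(s - 1) / s
    else:
        f[i] = ((s - ds) * f[i - 1] + ds * F[i - lag]) / s

for s in [2.0, 3.0, 4.0, 6.0, 8.0]:
    i = round(s / ds)
    print(f"s = {s}: F(s) = {F[i]:.4f}, f(s) = {f[i]:.4f}")   # both tend to 1
```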

For future reference, we record the following explicit values of :

We will show

Theorem 2 (Linear sieve) Let the notation and hypotheses be as above, with . Then, for any , one has the upper bound

if is sufficiently large depending on . Furthermore, this claim is sharp in the sense that the quantity cannot be replaced by any smaller quantity, and similarly cannot be replaced by any larger quantity.

Comparing the linear sieve with the fundamental lemma (and also testing using the sequence for some extremely large ), we conclude that we necessarily have the asymptotics

for all ; this can also be proven directly from the definitions of , or from Exercise 1, though it is somewhat challenging to do so; see e.g. Chapter 11 of Friedlander-Iwaniec for details.

Exercise 3 Establish the integral identities and

for . Argue heuristically that these identities are consistent with the bounds in Theorem 2 and the Buchstab identity (Equation (16) from Notes 4).

Exercise 4 Use the Selberg sieve (Theorem 30 from Notes 4) to obtain a slightly weaker version of (12) in the range in which the error term is worsened to , but the main term is unchanged.

We will prove Theorem 2 below the fold. The optimality of is closely related to the parity problem obstruction discussed in Section 5 of Notes 4; a naive application of the parity arguments there only gives the weak bounds and for , but this can be sharpened by a more careful counting of various sums involving the Liouville function .

As an application of the linear sieve (specialised to the ranges in (10), (11)), we will establish a famous theorem of Chen, giving (in some sense) the closest approach to the twin prime conjecture that one can hope to achieve by sieve-theoretic methods:

Theorem 5 (Chen’s theorem) There are infinitely many primes such that is the product of at most two primes.

The same argument gives the version of Chen’s theorem for the even Goldbach conjecture, namely that for all sufficiently large even , there exists a prime between and such that is the product of at most two primes.

The discussion in these notes loosely follows that of Friedlander-Iwaniec (who study sieving problems in more general dimension than ).

** — 1. Optimality — **

We first establish that the quantities appearing in Theorem 2 cannot be improved. We use the parity argument of Selberg, based on weight sequences related to the Liouville function.

We argue for the optimality of ; the argument for is similar and is left as an exercise. Suppose that there is for which the claim in Theorem 2 is not optimal, thus there exists such that

for as in that theorem, with sufficiently large.

We will contradict this claim by specialising to a special case. Let be a large parameter going to infinity, and set . We set , then by Mertens’ theorem we have . We set to be the residue class , thus (3) becomes

and (14) becomes

Now let be a small fixed quantity to be chosen later, set , and let be the sequence

This is clearly finitely supported and non-negative. For any , we have

from the multiplicativity of . If , then , and then by the prime number theorem for the Liouville function (Exercise 41 from Notes 2, combined with Exercise 18 from Supplement 4) we have

(say), and hence the remainder term in (15) is of size

As such, the error term in (16) may be absorbed into the term, and so

Now we count the left-hand side. Observe that is supported on those numbers that are the product of an odd number of primes (possibly with repetition), in which case . To be coprime to , all these primes must be at least ; since we are restricting , we thus must have . The left-hand side of (18) may thus be written as

This expression may be computed using the prime number theorem:

Exercise 6 Show that the expression (19) is equal to .

Since is continuous for , we obtain a contradiction if is sufficiently small.
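
The prime number theorem for the Liouville function invoked above, i.e. the assertion that the summatory function L(x) = sum_{n <= x} lambda(n) is o(x), can also be observed numerically (a toy check; lambda(n) = (-1)^Omega(n) is computed here via sympy's factorisation):

```python
from sympy import factorint

def liouville(n):
    """lambda(n) = (-1)^Omega(n), with Omega counting prime factors with multiplicity."""
    return (-1) ** sum(factorint(n).values())

L = 0
for n in range(1, 10**4 + 1):
    L += liouville(n)
    if n in (10**2, 10**3, 10**4):
        print(f"x = {n:>5}: L(x) = {L:>4}, L(x)/x = {L / n:+.4f}")   # ratio shrinks
```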

Exercise 7 Verify the optimality of in Theorem 2. (Hint: replace by in the above arguments.)

** — 2. The linear sieve — **

We now prove the forward direction of Theorem 2. Again, we focus on the upper bound (12), as the lower bound case is similar.

Fix . Morally speaking, the most natural sieve to use here is the (upper bound) *beta sieve* from Notes 4, with the optimal value of , which for the linear sieve turns out to be . Recall that this sieve is defined as the sum

where is the set of divisors of with , such that

for all odd . From Proposition 14 of Notes 4 this is indeed an upper bound sieve; indeed we have

where is the set of divisors of with , such that

for all odd . Now for the key heuristic point: if lies in the support of , then the sum in (20) mostly vanishes. Indeed, if is such that and for some and odd , then one has for some that is not divisible by any prime less than . On the other hand, from (21), (22) one has

and

which (morally) implies from the sieve of Eratosthenes that is prime, thus and so is not in the support of . As such, we expect the upper bound sieve to be extremely efficient on the support of , which when combined with the analysis of the previous section suggests that this sieve should produce the desired upper bound (12).

One can indeed use this sieve to establish the required bound (12); see Chapter 11 of Friedlander-Iwaniec for details. However, for various technical reasons it will be convenient to modify this sieve slightly, by increasing the parameter to be slightly greater than , and also by using the fundamental lemma to perform a preliminary sifting on the small primes.

We turn to the details. To prove (12) we may of course assume that is suitably small. It will be convenient to worsen the error in (12) a little to , since one can of course remove the logarithm by reducing appropriately.

Set and . By the fundamental lemma of sieve theory (Lemma 17 of Notes 4), one can find combinatorial upper and lower bound sieve coefficients at sifting level supported on divisors of , such that

We will use the upper bound sieve as a preliminary sieve to remove the for ; the lower bound sieve plays only a minor supporting role, mainly to control the difference between the upper bound sieve .

Next, we set , and let be the upper bound beta sieve with parameter on the primes dividing up to level of distribution . In other words, consists of those divisors of with such that

for all odd ; in particular, for all . By Proposition 14 of Notes 4, this is indeed an upper bound sieve for the primes dividing :

Multiplying this with the second inequality in (24) (this is the method of *composition of sieves*), we obtain an upper bound sieve for the primes up to :

Multiplying this by and summing in , we conclude that

Note that each product appears at most once in the above sum, and all such products are squarefree and at most . Applying (3), we thus have

Thus to prove (2), it suffices to show that

for sufficiently large depending on . Factoring out the summation using (23), it thus suffices to show that

Now we eliminate the role of . From (1), (5) and Mertens’ theorem we have

for large enough. Also, if , then and all prime factors of are at least . Thus has at most prime factors, each of which is at least . From (1) we then have

The contribution of the error term is then easily seen to be for large enough, and so we reduce to showing that

One can proceed here by evaluating the left-hand side directly. However, we will proceed instead by using the weight function from the previous section. More precisely, we will evaluate the expression

in two different ways, where is as before (but with the role of now replaced by the function ). Firstly, since , we see from the argument used to establish (17) that

(say). Since each appears at least once, we can thus write (26) as

which upon factoring the sum using (23) and Mertens’ theorem

Thus to verify (25), it will suffice to show that (26) is of the form

for sufficiently large.

To do this, we abbreviate (26) as

By Proposition 14 of Notes 4, we can expand as

where for any , is the collection of all divisors of with such that

for all with the same parity as . For technical reasons, we will also impose the additional inequality

for all with the opposite parity as ; this follows from (30) when , but is an additional constraint when and is even, but in the above identity is odd, so this additional constraint is harmless. For similar reasons we impose the inequality

which follows from (30) or (31) except when , but then this inequality is automatic from our hypothesis , which implies if is chosen small enough.

Inserting the identity (28), we can write (26) as

where

and

We first estimate . By (24), we can write as the sum of

(where we bound by ). The error may be rearranged as

which by (23) is of size for large enough. As for the main term (33), we see from Exercise 6 (and the arguments preceding that exercise) that this term is equal to for sufficiently large. Thus, to obtain the desired approximation for (26), it will suffice to show that

Next, we establish an exponential decay estimate on the :

Lemma 8 For sufficiently large depending on , we have for all and some absolute constant .

*Proof:* (Sketch) Note that if is in , then and all prime factors are at least , thus we may assume without loss of generality that .

We bound

Note that if lies in , then

thanks to (29), (32). From this and the fundamental lemma of sieve theory we see (Exercise!) that

and so it will suffice to show that

By the prime number theorem, the left-hand side is bounded (Exercise!) by as , where

and is the set of points with ,

and such that

for all with the same parity as , and

for all . It thus suffices to prove the bound

for all and some absolute constant .

We use an argument from the book of Harman. Observe that vanishes for , which makes the claim (35) routine for (Exercise!) for sufficiently large. We will now inductively prove (35) for all odd . From the change of variables , we obtain the identity

where when is odd and when is even (Exercise!). In particular, if and (35) was already proven for , then

One can check (Exercise!) that the quantity is maximised at , where its value is less than (in fact it is ) if is small enough. As such, we obtain (35) if is sufficiently close to .

Finally, (35) for even follows from the odd case (with a slightly larger choice of ) by one final application of (36).

Exercise 9 Fill in the steps marked (Exercise!) in the above proof.

In view of this lemma, the total contribution of with for some sufficiently large is acceptable. Thus it suffices to show that

whenever is odd.

By (24), we can write as

plus an error of size

Arguing as in the treatment of the term, we see from (23) that the error term is bounded by

which is as desired for large enough. Thus it suffices to show that

If and appear in the above sum, then we have where has no prime factor less than , has an even number of prime factors, and obeys the bounds

thanks to (29). Note that (29) also gives , and thus (since and ) we see that if is small enough and is large enough. This forces to either equal , or be the product of two primes between and . The contribution of the case is bounded by , which is acceptable. As for the contribution of those that are the product of two primes, the prime number theorem shows that there are values of that can contribute to the sum, and so this contribution to is at most

but by (34) the sum here is for large enough, and the claim follows. This completes the proof of (12).

Exercise 10 Establish the lower bound (13) in Theorem 2. (Note that one can assume without loss of generality that , which will now be needed to ensure (31) when .)

** — 3. Chen’s theorem — **

We now prove Chen’s theorem for twin primes, loosely following the treatment in Chapter 25 of Friedlander-Iwaniec. We will in fact show the slightly stronger statement that

for sufficiently large , where is the set of all numbers that are products of at most two primes, and . Indeed, after removing the (negligible) contribution of those that are powers of primes, this estimate would imply that there are infinitely many primes such that is the product of at most two primes, each of which is at least .
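As a quick numerical sanity check (not part of the proof), one can count by brute force the primes p up to a small cutoff for which p + 2 has at most two prime factors; the cutoff 10^5 below is an illustrative choice only, and the additional lower bound on the size of the prime factors is ignored here.

```python
# Brute-force count of primes p <= x with p + 2 a product of at most two
# primes (counted with multiplicity); x = 10**5 is an illustrative cutoff.

def smallest_prime_factor_sieve(n):
    """spf[k] = smallest prime factor of k, for 2 <= k <= n."""
    spf = list(range(n + 1))
    for i in range(2, int(n**0.5) + 1):
        if spf[i] == i:  # i is prime
            for j in range(i * i, n + 1, i):
                if spf[j] == j:
                    spf[j] = i
    return spf

def big_omega(k, spf):
    """Number of prime factors of k, counted with multiplicity."""
    count = 0
    while k > 1:
        k //= spf[k]
        count += 1
    return count

x = 10**5
spf = smallest_prime_factor_sieve(x + 2)
print(sum(1 for p in range(2, x + 1)
          if spf[p] == p and big_omega(p + 2, spf) <= 2))
```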

Chen’s argument begins with the following simple lower bound sieve for :

Lemma 11 If , then

*Proof:* If has no prime factors less than or equal to , then and the claim follows. If has two or more prime factors less than or equal to , then and the claim follows. Finally, if has exactly one prime factor less than or equal to , then (as ) it must be of the form for some , and the claim again follows.

In view of this sieve (trivially managing the contribution of , and using the restriction of to be coprime to ), it suffices to show that

for sufficiently large , where

and

We thus seek sufficiently good lower bounds on and sufficiently good upper bounds on and .

We begin with . We use the lower bound linear sieve, with equal to the residue class for all , so that is the residue class . We approximate

where is the multiplicative function with and for . From the Bombieri-Vinogradov theorem (Theorem 17 of Notes 2) we have

(say) if for some small fixed . Applying the lower bound linear sieve (13), we conclude that

where

We can compute an asymptotic for :

as , where is the twin prime constant.
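For concreteness, this constant can be approximated by truncating its Euler product over the odd primes; a minimal sketch, with an arbitrary cutoff of 10^6:

```python
# Approximate the twin prime constant Pi_2 = prod_{p odd prime} (1 - 1/(p-1)^2)
# by truncating the product at 10**6 (an arbitrary cutoff); the true value
# is 0.66016... .
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

Pi2 = 1.0
for p in primes_up_to(10**6):
    if p > 2:
        Pi2 *= 1.0 - 1.0 / (p - 1) ** 2
print(Pi2)
```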

From (11) we have . Sending slowly to zero, we conclude that

Now we turn to . Here we use the upper bound linear sieve. Let be as before. For any dividing and , we have

where and are as previously. We apply the upper bound linear sieve (12) with level of distribution , to conclude that

We sum over . Since is at most , and each number less than or equal to has at most prime factors, we have

The error term is thanks to (39). Since , we thus have

for sufficiently large , thanks to Exercise 12. We can compute the sum using Exercise 37 of Notes 1, to obtain

which by (10) and sending slowly gives

A routine computation shows that

Finally, we consider , which is estimated by “switching” the sieve to sift out small divisors of , rather than small divisors of . Removing those with , as well as those that are powers of primes, and then shifting by , we have

where is the finitely supported non-negative sequence

Here we are sifting out the residue classes , so that .

The sequence has good distribution up to level :

Proposition 13 One has

where is as before, and

(say), with as before.

*Proof:* Observe that the quantity in (42) is bounded above by if the summand is to be non-zero. We now use a finer-than-dyadic decomposition trick similar to that used in the proof of the Bombieri-Vinogradov theorem in Notes 3 to approximate as a combination of Dirichlet convolutions. Namely, we set , and partition (plus possibly a little portion to the right of ) into consecutive intervals each of the form for some . We similarly split (plus possibly a tiny portion of ) into intervals each of the form for some . We can thus split as

Observe that for each there are only choices of for which the summand can be non-zero. As such, the contribution of the diagonal case can be easily seen to be absorbed into the error, as can those cases where the product set is not contained completely in . If we let be the set of triplets obeying these properties, we can thus approximate by , where is the Dirichlet convolution

From the general Bombieri-Vinogradov theorem (Theorem 16 of Notes 3) and the Siegel-Walfisz theorem (Exercise 64 of Notes 2) we see that

where

(say) and

This gives the claim with replaced by the quantity ; but by undoing the previous decomposition we see that this quantity is equal to up to an error of (say), and the claim follows.
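To illustrate the decomposition used at the start of this proof: a finer-than-dyadic partition covers a range by multiplicative intervals of the form [N, (1+eta)N), so that only polylogarithmically many intervals arise. The choice eta = 1/log^2 x in the sketch below is purely illustrative.

```python
# Sketch of a finer-than-dyadic partition of [1, x] into intervals
# [N, (1+eta)N); the choice eta = 1/log(x)**2 is illustrative only.
import math

x = 10**6
eta = 1.0 / math.log(x) ** 2

endpoints = [1.0]
while endpoints[-1] < x:
    endpoints.append(endpoints[-1] * (1 + eta))  # may overshoot x slightly

# Roughly log(x)/eta intervals: polylogarithmic in x, compared with
# about log(x) intervals for a standard dyadic decomposition.
print(len(endpoints) - 1)
```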

Applying the upper bound sieve (12) (with sifting level ), we thus have

and hence by (10) and Exercise 12

for sufficiently large.

Note that

From the prime number theorem and Exercise 37 of Notes 1, we thus have

We thus conclude that

where

The left-hand side of (38) is then at least

One can calculate that , and the claim follows.

Exercise 14 Establish Chen’s theorem for the even Goldbach conjecture.

Remark 15 If one is willing to use stronger distributional claims on the primes than are provided by the Bombieri-Vinogradov theorem, then one can use a simpler sieve than Chen’s to establish Chen’s theorem; but the required distributional theorem will then either be conjectural or more difficult to establish than the Bombieri-Vinogradov theorem. See Chapter 25 of Friedlander-Iwaniec for further discussion.

Filed under: 254A - analytic prime number theory, math.NT Tagged: Chen's theorem, linear sieve, sieve theory, twin primes

- Let be the set of natural numbers in . For each prime , let be the union of the residue classes and . Then is the cardinality of the *sifted set* .
- Let be the set of primes in . For each prime , let be the residue class . Then is the cardinality of the *sifted set* .
- Let be the set of primes in . For each prime , let be the residue class . Then is the cardinality of the *sifted set* .
- Let be the set . For each prime , let be the residue class . Then is the cardinality of the *sifted set* .

Exercise 1 Develop similar sifting formulations of the other three Landau problems.

In view of these sieving interpretations of number-theoretic problems, it becomes natural to try to estimate the size of sifted sets for various finite sets of integers, and subsets of integers indexed by primes dividing some squarefree natural number (which, in the above examples, would be the product of all primes up to ). As we see in the above examples, the sets in applications are typically the union of one or more residue classes modulo , but we will work at a more abstract level of generality here by treating as more or less arbitrary sets of integers, without caring too much about the arithmetic structure of such sets.

It turns out to be conceptually more natural to replace sets by functions, and to consider the more general task of estimating sifted *sums*

for some finitely supported sequence of non-negative numbers; the previous combinatorial sifting problem then corresponds to the indicator function case . (One could also use other index sets here than the integers if desired; for much of sieve theory the index set and its subsets are treated as abstract sets, so the exact arithmetic structure of these sets is not of primary importance.)
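A direct transcription of the sifted sum (1) into code may help fix the notation; the weight and sifting data below are invented toy choices.

```python
# Direct transcription of the sifted sum (1): a finitely supported
# non-negative weight a (here a dict n -> a_n) and sets E[p] for each
# prime p dividing P; we sum a_n over those n avoiding every E[p].
def sifted_sum(a, E):
    return sum(a_n for n, a_n in a.items()
               if all(n not in E_p for E_p in E.values()))

# Toy example: indicator weight of [1, 30], sifting out 0 (mod p) for
# p = 2, 3, 5 (so P = 30); the answer is phi(30) = 8.
a = {n: 1 for n in range(1, 31)}
E = {p: {n for n in range(1, 31) if n % p == 0} for p in (2, 3, 5)}
print(sifted_sum(a, E))  # 8
```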

Continuing with twin primes as a running example, we thus have the following sample sieving problem:

Problem 2 (Sieving problem for twin primes) Let , and let denote the number of natural numbers which avoid the residue classes for all primes . In other words, we have where , is the product of all the primes strictly less than (we omit itself for minor technical reasons), and is the union of the residue classes . Obtain upper and lower bounds on which are as strong as possible in the asymptotic regime where goes to infinity and the *sifting level* grows with (ideally we would like to grow as fast as ).

From the preceding discussion we know that the number of twin prime pairs in is equal to , if is not a perfect square; one also easily sees that the number of twin prime pairs in is at least , again if is not a perfect square. Thus we see that a sufficiently good answer to Problem 2 would resolve the twin prime conjecture, particularly if we can get the sifting level to be as large as .
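Here is a tiny numerical instance of Problem 2, assuming (as in the discussion above) that the classes being sifted are 0 and -2 (mod p); the parameters x = 10^4 and z = 20 are illustrative only.

```python
# Compare the sifted count of Problem 2 with the twin prime count in [1, x],
# for illustrative parameters x = 10**4 and z = 20.
def is_prime(n):
    return n > 1 and all(n % q for q in range(2, int(n**0.5) + 1))

x, z = 10**4, 20
small_primes = [p for p in range(2, z) if is_prime(p)]

sifted = sum(1 for n in range(1, x + 1)
             if all(n % p and (n + 2) % p for p in small_primes))
twins = sum(1 for n in range(1, x + 1) if is_prime(n) and is_prime(n + 2))
print(sifted, twins)  # the sifted count dominates the twin count up to O(z)
```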

We return now to the general problem of estimating (1). We may expand

where (with the convention that ). We thus arrive at the Legendre sieve identity

Specialising to the case of an indicator function , we recover the inclusion-exclusion formula

Such exact sieving formulae are already satisfactory for controlling sifted sets or sifted sums when the amount of sieving is relatively small compared to the size of . For instance, let us return to the running example in Problem 2 for some . Observe that each in this example consists of residue classes modulo , where is defined to equal when and when is odd. By the Chinese remainder theorem, this implies that for each , consists of residue classes modulo . Using the basic bound

for any and any residue class , we conclude that

for any , where is the multiplicative function

Since and there are at most primes dividing , we may crudely bound , thus

Also, the number of divisors of is at most . From the Legendre sieve (3), we thus conclude that

We can factorise the main term to obtain

This is compatible with the heuristic

coming from the equidistribution of residues principle (Section 3 of Supplement 4), bearing in mind (from the modified Cramér model, see Section 1 of Supplement 4) that we expect this heuristic to become inaccurate when becomes very large. We can simplify the right-hand side of (7) by recalling the twin prime constant

(see equation (7) from Supplement 4); note that

so from Mertens’ third theorem (Theorem 42 from Notes 1) one has

as . Bounding crudely by , we conclude in particular that

when with . This is somewhat encouraging for the purposes of getting a sufficiently good answer to Problem 2 to resolve the twin prime conjecture, but note that is currently far too small: one needs to get as large as before one is counting twin primes, and currently can only get as large as .
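Numerically, the product and its Mertens-type approximation agree well already at modest sifting levels; the sketch below assumes the asymptotic quoted above has the shape 2 Pi_2 e^{-2 gamma} / log^2 z (a reconstruction from the surrounding discussion), and takes the arbitrary cutoff z = 10^5.

```python
# Compare prod_{p < z} (1 - rho(p)/p), with rho(2) = 1 and rho(p) = 2 for
# odd p, against the Mertens-type prediction 2 * Pi_2 * exp(-2*gamma) / log(z)**2;
# z = 10**5 is an arbitrary illustrative cutoff.
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant
Pi2 = 0.6601618158468696    # twin prime constant

z = 10**5
product = 1.0
for p in range(2, z):
    if all(p % q for q in range(2, int(p**0.5) + 1)):
        product *= 1.0 - (1 if p == 2 else 2) / p

print(product, 2 * Pi2 * math.exp(-2 * gamma) / math.log(z) ** 2)
```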

The problem is that the number of terms in the Legendre sieve (3) basically grows exponentially in , and so the error terms in (4) accumulate to an unacceptable extent once is significantly larger than . An alternative way to phrase this problem is that the estimate (4) is only expected to be truly useful in the regime ; on the other hand, the moduli appearing in (3) can be as large as , which grows exponentially in by the prime number theorem.
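Indeed, the identity (3) can be checked by brute force for tiny sifting levels, and the cost is already visible: for z = 20 the sum below ranges over 2^8 = 256 squarefree divisors of P(z) (all parameters are illustrative).

```python
# Check the Legendre sieve identity sum_{d | P(z)} mu(d) |A_d| = sifted count
# on the twin prime example, with illustrative parameters x = 10**4, z = 20.
from itertools import combinations

x, z = 10**4, 20
primes = [p for p in range(2, z) if all(p % q for q in range(2, p))]

def A_d(subset):
    # |A_d| for d the product of the primes in `subset`: those n in [1, x]
    # with p | n(n+2) for every p in the subset
    return sum(1 for n in range(1, x + 1)
               if all(n % p == 0 or (n + 2) % p == 0 for p in subset))

legendre = sum((-1) ** k * A_d(subset)               # mu(d) = (-1)^k
               for k in range(len(primes) + 1)
               for subset in combinations(primes, k))
direct = sum(1 for n in range(1, x + 1)
             if all(n % p and (n + 2) % p for p in primes))
print(legendre, direct)  # identical
```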

To resolve this problem, it is thus natural to try to *truncate* the Legendre sieve, in such a way that one only uses information about the sums for a relatively small number of divisors of , such as those which are below a certain threshold . This leads to the following general sieving problem:

Problem 3 (General sieving problem) Let be a squarefree natural number, and let be a set of divisors of . For each prime dividing , let be a set of integers, and define for all (with the convention that ). Suppose that is an (unknown) finitely supported sequence of non-negative reals, whose sums

are known for all . What are the best upper and lower bounds one can conclude on the quantity (1)?

Here is a simple example of this type of problem (corresponding to the case , , , , and ):

Exercise 4 Let be a finitely supported sequence of non-negative reals such that , , and . Show that

and give counterexamples to show that these bounds cannot be improved in general, even when is an indicator function sequence.
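Such finite sieving problems can be solved mechanically: once the support of the sequence is fixed, the constraints form a finite linear program, and an off-the-shelf solver handles toy instances such as the following (all data here is invented for illustration).

```python
# Toy sieving problem as a linear program: P = 6, D = {1, 2, 3}, weights
# supported on n = 1..12, with invented constraint data
#   sum_n a_n = 12,  sum_{2|n} a_n = 6,  sum_{3|n} a_n = 4,  a_n >= 0,
# maximising the sifted sum over n coprime to 6.
from math import gcd
from scipy.optimize import linprog

ns = list(range(1, 13))
c = [-1.0 if gcd(n, 6) == 1 else 0.0 for n in ns]  # negate to maximise

A_eq = [
    [1.0] * len(ns),                  # sum over all n
    [float(n % 2 == 0) for n in ns],  # sum over n in E_2
    [float(n % 3 == 0) for n in ns],  # sum over n in E_3
]
b_eq = [12.0, 6.0, 4.0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(ns))
print(-res.fun)  # 6.0: by inclusion-exclusion, 12 - 6 - 4 + min(6, 4) = 6
```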

Problem 3 is an example of a linear programming problem. By using linear programming duality (as encapsulated by results such as the Hahn-Banach theorem, the separating hyperplane theorem, or the Farkas lemma), we can rephrase the above problem in terms of *upper and lower bound sieves*:

Theorem 5 (Dual sieve problem) Let be as in Problem 3. We assume that Problem 3 is *feasible*, in the sense that there exists at least one finitely supported sequence of non-negative reals obeying the constraints in that problem. Define a (normalised) *upper bound sieve* to be a function of the form

for some coefficients , and obeying the pointwise lower bound

for all (in particular is non-negative). Similarly, define a (normalised) *lower bound sieve* to be a function of the form

for some coefficients , and obeying the pointwise upper bound

for all . Thus for instance and are (trivially) upper bound sieves and lower bound sieves respectively.

- (i) The supremal value of the quantity (1), subject to the constraints in Problem 3, is equal to the infimal value of the quantity , as ranges over all upper bound sieves.
- (ii) The infimal value of the quantity (1), subject to the constraints in Problem 3, is equal to the supremal value of the quantity , as ranges over all lower bound sieves.

*Proof:* We prove part (i) only, and leave part (ii) as an exercise. Let be the supremal value of the quantity (1) given the constraints in Problem 3, and let be the infimal value of . We need to show that .

We first establish the easy inequality . If the sequence obeys the constraints in Problem 3, and is an upper bound sieve, then

and hence (by the non-negativity of and )

taking suprema in and infima in , we conclude that .

Now suppose for contradiction that , thus for some real number . We will argue using the hyperplane separation theorem; one can also proceed using one of the other duality results mentioned above. (See this previous blog post for some discussion of the connections between these various forms of linear duality.) Consider the affine functional

on the vector space of finitely supported sequences of reals. On the one hand, since , this functional is positive for every sequence obeying the constraints in Problem 3. Next, let be the space of affine functionals of the form