In analytic number theory, it is a well-known phenomenon that for many arithmetic functions $f$ of interest in number theory, it is significantly easier to estimate logarithmic sums such as $\sum_{n \leq x} \frac{f(n)}{n}$ than it is to estimate summatory functions such as $\sum_{n \leq x} f(n)$. (Here we are normalising $f$ to be roughly constant in size, e.g. $f(n) = O(n^{o(1)})$ as $n \rightarrow \infty$.) For instance, when $f$ is the von Mangoldt function $\Lambda$, the logarithmic sums $\sum_{n \leq x} \frac{\Lambda(n)}{n}$ can be adequately estimated by Mertens’ theorem, which can be easily proven by elementary means (see Notes 1); but a satisfactory estimate on the summatory function $\sum_{n \leq x} \Lambda(n)$ requires the prime number theorem, which is substantially harder to prove (see Notes 2). (From a complex-analytic or Fourier-analytic viewpoint, the problem is that the logarithmic sums $\sum_{n \leq x} \frac{f(n)}{n}$ can usually be controlled just from knowledge of the Dirichlet series $\sum_n \frac{f(n)}{n^s}$ for $s$ near $1$; but the summatory functions require control of the Dirichlet series $\sum_n \frac{f(n)}{n^s}$ for $s$ on or near a large portion of the line $\{ 1+it: t \in {\bf R} \}$. See Notes 2 for further discussion.)
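To see this contrast concretely, here is a quick numerical sketch (an illustration only; the cutoffs and the sieve are chosen for convenience and play no role in the arguments below) comparing the two quantities for the von Mangoldt function: the logarithmic sum minus $\log x$ stabilises quickly, in accordance with Mertens’ theorem, while the normalised summatory function approaches $1$ only at the slower rate governed by the prime number theorem.

```python
# Numerical sketch (illustration only): compare the logarithmically averaged
# von Mangoldt sum, which Mertens' theorem pins down to log x + O(1), with the
# summatory function, whose asymptotic is the prime number theorem.
import math

def von_mangoldt_table(N):
    """Return Lambda(1..N) via a smallest-prime-factor sieve."""
    lam = [0.0] * (N + 1)
    spf = list(range(N + 1))              # smallest prime factor of each n
    for p in range(2, N + 1):
        if spf[p] == p:                   # p is prime
            for m in range(p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    for n in range(2, N + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:                        # n is a prime power p^k
            lam[n] = math.log(p)
    return lam

N = 10**6
lam = von_mangoldt_table(N)
for x in (10**4, 10**5, 10**6):
    log_sum = sum(lam[n] / n for n in range(1, x + 1))
    summatory = sum(lam[n] for n in range(1, x + 1))
    print(f"x={x:>8}: sum Lambda(n)/n - log x = {log_sum - math.log(x):+.4f}, "
          f"(1/x) sum Lambda(n) = {summatory / x:.4f}")
```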
Viewed conversely, whenever one has a difficult estimate on a summatory function such as , one can look to see if there is a “cheaper” version of that estimate that only controls the logarithmic sums
, which is easier to prove than the original, more “expensive” estimate. In this post, we shall do this for two theorems, a classical theorem of Halasz on mean values of multiplicative functions on long intervals, and a much more recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. The two are related; the former theorem is an ingredient in the latter (though in the special case of the Matomaki-Radziwiłł theorem considered here, we will not need Halasz’s theorem directly, instead using a key tool in the proof of that theorem).
We begin with Halasz’s theorem. Here is a version of this theorem, due to Montgomery and to Tenenbaum:
Theorem 1 (Halasz-Montgomery-Tenenbaum) Let $f$ be a multiplicative function with $|f(n)| \leq 1$ for all $n$. Let
and
, and set
Then one has
Informally, this theorem asserts that is small compared with
, unless $f$ “pretends” to be like the character $n \mapsto n^{it}$ on primes for some small $t$. (This is the starting point of the “pretentious” approach of Granville and Soundararajan to analytic number theory, as developed for instance here.) We now give a “cheap” version of this theorem which is significantly weaker (it settles for controlling logarithmic sums rather than summatory functions, requires $f$ to be completely multiplicative instead of merely multiplicative, requires a strong bound on the analogue of the quantity defined in Theorem 1, and gives only qualitative decay rather than quantitative estimates), but is easier to prove:
Theorem 2 (Cheap Halasz) Let $x$ be an asymptotic parameter going to infinity. Let $f$ be a completely multiplicative function (possibly depending on $x$) such that $|f(n)| \leq 1$ for all $n$, such that
Note that now that we are content with estimating logarithmic sums, we no longer need to preclude the possibility that $f$ pretends to be like $n^{it}$; see Exercise 11 of Notes 1 for a related observation.
To prove this theorem, we first need a special case of the Turan-Kubilius inequality.
Lemma 3 (Turan-Kubilius) Let
be a parameter going to infinity, and let
be a quantity depending on
such that
and
as
. Then
Informally, this lemma is asserting that
for most large numbers . Another way of writing this heuristically is in terms of Dirichlet convolutions:
This type of estimate was previously discussed as a tool to establish a criterion of Katai and Bourgain-Sarnak-Ziegler for Möbius orthogonality estimates in this previous blog post. See also Section 5 of Notes 1 for some similar computations.
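The flavour of the Turan-Kubilius inequality can also be seen numerically. The following sketch (an illustration of the general phenomenon only; it does not implement the precise weighted statement of Lemma 3) checks that for typical $n \leq x$ the number of distinct prime factors $\omega(n)$ concentrates around $\log\log x$, with fluctuations of comparable size.

```python
# Illustrative sketch of the Turan-Kubilius phenomenon (not the exact weighted
# statement of Lemma 3): for typical n <= x, the number of distinct prime
# factors omega(n) is close to log log x, with variance of the same order.
import math

def omega_table(N):
    """omega(n) = number of distinct prime factors of n, for n = 0..N."""
    omega = [0] * (N + 1)
    for p in range(2, N + 1):
        if omega[p] == 0:                 # p is prime
            for m in range(p, N + 1, p):
                omega[m] += 1
    return omega

N = 10**6
om = omega_table(N)
mean = sum(om[2:]) / (N - 1)
var = sum((w - mean) ** 2 for w in om[2:]) / (N - 1)
print(f"x = {N}: log log x = {math.log(math.log(N)):.3f}, "
      f"mean omega(n) = {mean:.3f}, variance = {var:.3f}")
```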
Proof: By Cauchy-Schwarz it suffices to show that
Expanding out the square, it suffices to show that
for .
We just show the case, as the
cases are similar (and easier). We rearrange the left-hand side as
We can estimate the inner sum as . But a routine application of Mertens’ theorem (handling the diagonal case when
separately) shows that
and the claim follows.
Remark 4 As an alternative to the Turan-Kubilius inequality, one can use the Ramaré identity
(see e.g. Section 17.3 of Friedlander-Iwaniec). This identity turns out to give quantitative results superior to those of the Turan-Kubilius inequality in applications; see the paper of Matomaki and Radziwiłł for an instance of this.
We now prove Theorem 2. Let denote the left-hand side of (2); by the triangle inequality we have
. By Lemma 3 (for some
to be chosen later) and the triangle inequality we have
We rearrange the left-hand side as
We now replace the constraint by
. The error incurred in doing so is
which by Mertens’ theorem is . Thus we have
But by definition of , we have
, thus
From Mertens’ theorem, the expression in brackets can be rewritten as
and so the real part of this expression is
By (1), Mertens’ theorem and the hypothesis on we have
for any . This implies that we can find
going to infinity such that
and thus the expression in brackets has real part . The claim follows.
The Turan-Kubilius argument is certainly not the most efficient way to estimate sums such as . In the exercise below we give a significantly more accurate estimate that works when
is non-negative.
Exercise 5 (Granville-Koukoulopoulos-Matomaki)
- (i) If
is a completely multiplicative function with
for all primes
, show that
as
. (Hint: for the upper bound, expand out the Euler product. For the lower bound, show that
, where
is the completely multiplicative function with
for all primes
.)
- (ii) If
is multiplicative and takes values in
, show that
for all
.
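To get a concrete feel for the type of estimate in part (i) of the above exercise, here is a numerical sketch (an illustration only: the test function, and the comparison quantity $\exp(\sum_{p \leq x} \frac{f(p)}{p})$, are chosen by hand and need not match the exact formulation of the exercise) for the completely multiplicative function with $f(p) = 1/2$ at every prime; the two quantities grow at the same rate, with a bounded ratio.

```python
# Numerical sketch (illustration only; the test function f and the comparison
# quantity are our own choices): for the completely multiplicative f with
# f(p) = 1/2 at every prime, i.e. f(n) = (1/2)**Omega(n), compare
# sum_{n<=x} f(n)/n with exp(sum_{p<=x} f(p)/p).  Their ratio stays bounded.
import math

N = 10**6
spf = list(range(N + 1))                  # smallest prime factor sieve
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

log_sum = 0.0        # running sum of f(n)/n
prime_sum = 0.0      # running sum of f(p)/p = 1/(2p) over primes p
checkpoints = {10**4, 10**5, 10**6}
for n in range(1, N + 1):
    if n == 1:
        f = 1.0
    else:
        m, k = n, 0
        while m > 1:                      # k = Omega(n), with multiplicity
            p = spf[m]
            while m % p == 0:
                m //= p
                k += 1
        f = 0.5 ** k
    log_sum += f / n
    if n > 1 and spf[n] == n:             # n is prime
        prime_sum += 0.5 / n
    if n in checkpoints:
        print(f"x={n:>8}: sum f(n)/n = {log_sum:.3f}, "
              f"exp(sum f(p)/p) = {math.exp(prime_sum):.3f}, "
              f"ratio = {log_sum / math.exp(prime_sum):.3f}")
```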
Now we turn to a very recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. For sake of illustration we specialise their results to the simpler case of the Liouville function $\lambda$, although their arguments actually work (with some additional effort) for arbitrary multiplicative functions of magnitude at most $1$ that are real-valued (or more generally, stay far from complex characters $n \mapsto n^{it}$). Furthermore, we give a qualitative form of their estimates rather than a quantitative one:
Theorem 6 (Matomaki-Radziwiłł, special case) Let $X$ be a parameter going to infinity, and let $H = H(X)$ be a quantity going to infinity as $X \rightarrow \infty$. Then for all but $o(X)$ of the integers $x \in [X, 2X]$, one has
$\displaystyle \sum_{x \leq n \leq x+H} \lambda(n) = o(H). \ \ \ \ \ (4)$
A simple sieving argument (see Exercise 18 of Supplement 4) shows that one can replace $\lambda$ by the Möbius function $\mu$
and obtain the same conclusion. See this recent note of Matomaki and Radziwiłł for a simple proof of their (quantitative) main theorem in this special case.
Of course, (4) improves upon the trivial bound of . Prior to this paper, such estimates were only known (using arguments similar to those in Section 3 of Notes 6) for
unconditionally, or for
for some sufficiently large
if one assumed the Riemann hypothesis. This theorem also represents some progress towards Chowla’s conjecture (discussed in Supplement 4) that
as $X \rightarrow \infty$ for any fixed distinct $h_1,\dots,h_k$; indeed, it implies that this conjecture holds if one performs a small amount of averaging in the $h_1,\dots,h_k$.
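As a quick sanity check on Theorem 6 (a numerical illustration only, with the parameters $X$ and $H$ chosen by hand rather than supplied by the theorem), one can sample starting points $x \in [X, 2X]$ at random and look at the short-interval averages of the Liouville function; they are typically much smaller than the trivial bound of $1$, and at these modest heights not far from the random-model size $1/\sqrt{H}$.

```python
# Numerical sketch (illustration only) of the Matomaki-Radziwill phenomenon:
# for most starting points x in [X, 2X], the average of the Liouville function
# over the short interval [x, x+H] is small compared with the trivial bound 1.
import math
import random

X, H = 10**6, 1000
N = 2 * X + H
spf = list(range(N + 1))                  # smallest prime factor sieve
for p in range(2, int(N ** 0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p
liouville = [1] * (N + 1)                 # lambda(n) = (-1)**Omega(n)
for n in range(2, N + 1):
    liouville[n] = -liouville[n // spf[n]]

random.seed(0)
samples = [random.randint(X, 2 * X) for _ in range(2000)]
averages = sorted(abs(sum(liouville[x:x + H + 1])) / (H + 1) for x in samples)
print(f"X = {X}, H = {H}")
print(f"median |average|   = {averages[len(averages)//2]:.4f}")
print(f"90th pct |average| = {averages[int(0.9 * len(averages))]:.4f}")
print(f"for comparison, 1/sqrt(H) = {1 / math.sqrt(H):.4f}")
```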
Below the fold, we give a “cheap” version of the Matomaki-Radziwiłł argument. More precisely, we establish
Theorem 7 (Cheap Matomaki-Radziwiłł) Let
be a parameter going to infinity, and let
. Then
Note that (5) improves upon the trivial bound of . Again, one can replace $\lambda$ with $\mu$ if desired. Due to the cheapness of Theorem 7, the proof will require only a few ingredients; the deepest input is the improved zero-free region for the Riemann zeta function due to Vinogradov and Korobov. Other than that, the main tools are the Turan-Kubilius result established above, and some Fourier (or complex) analysis.
In the previous set of notes, we saw how zero-density theorems for the Riemann zeta function, when combined with the zero-free region of Vinogradov and Korobov, could be used to obtain prime number theorems in short intervals. It turns out that a more sophisticated version of this type of argument also works to obtain prime number theorems in arithmetic progressions, in particular establishing the celebrated theorem of Linnik:
Theorem 1 (Linnik’s theorem) Let $a\ (q)$ be a primitive residue class. Then $a\ (q)$ contains a prime $p$ with $p \ll q^{O(1)}$.
In fact it is known that one can find a prime with
, a result of Xylouris. For sake of comparison, recall from Exercise 65 of Notes 2 that the Siegel-Walfisz theorem gives this theorem with a bound of
, and from Exercise 48 of Notes 2 one can obtain a bound of the form
if one assumes the generalised Riemann hypothesis. The probabilistic random models from Supplement 4 suggest that one should in fact be able to take
.
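For small moduli one can compute the worst-case least prime directly. The following sketch (an illustration only: the moduli, the search limit, and the comparison quantities $q^2$ and $q \log^2 q$, the latter being the size suggested by standard probabilistic heuristics, are our own choices) shows that in this range the least prime is far smaller than any large fixed power of $q$.

```python
# Numerical sketch (illustration only): worst-case least prime p(q) over
# primitive residues a (mod q), compared with q^2 and with q log^2 q (the size
# suggested by standard probabilistic heuristics).
import math
from sympy import isprime

def least_prime_in_ap(a, q, limit=10**8):
    """Least prime p = a (mod q), searching up to `limit` (a sketch, not robust)."""
    p = a if a > 1 else a + q
    while p <= limit:
        if isprime(p):
            return p
        p += q
    raise ValueError("no prime found below limit")

for q in (11, 101, 503, 1009):
    worst = max(least_prime_in_ap(a, q) for a in range(1, q) if math.gcd(a, q) == 1)
    print(f"q={q:>5}: worst least prime = {worst:>7}, "
          f"worst/q^2 = {worst / q**2:.3f}, "
          f"worst/(q log^2 q) = {worst / (q * math.log(q)**2):.3f}")
```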
We will not aim to obtain the optimal exponents for Linnik’s theorem here, and follow the treatment in Chapter 18 of Iwaniec and Kowalski. We will in fact establish the following more quantitative result (a special case of a more powerful theorem of Gallagher), which splits into two cases, depending on whether there is an exceptional zero or not:
Theorem 2 (Quantitative Linnik theorem) Let
be a primitive residue class for some
. For any
, let
denote the quantity
Assume that
for some sufficiently large
.
- (i) (No exceptional zero) If all the real zeroes
of
$L$-functions
of real characters
of modulus
are such that
, then
for all
and some absolute constant
.
- (ii) (Exceptional zero) If there is a zero
of an
$L$-function
of a real character
of modulus
with
for some sufficiently small
, then
for all
and some absolute constant
.
The implied constants here are effective.
Note from the Landau-Page theorem (Exercise 54 from Notes 2) that at most one exceptional zero exists (if is small enough). A key point here is that the error term
in the exceptional zero case is an improvement over the error term when no exceptional zero is present; this compensates for the potential reduction in the main term coming from the
term. The splitting into cases depending on whether an exceptional zero exists or not turns out to be an essential technique in many advanced results in analytic number theory (though presumably such a splitting will one day become unnecessary, once the possibility of exceptional zeroes is finally eliminated for good).
Exercise 3 Assuming Theorem 2, and assuming
for some sufficiently large absolute constant
, establish the lower bound
when there is no exceptional zero, and
when there is an exceptional zero
. Conclude that Theorem 2 implies Theorem 1, regardless of whether an exceptional zero exists or not.
Remark 4 The Brun-Titchmarsh theorem (Exercise 33 from Notes 4), in the sharp form of Montgomery and Vaughan, gives that
for any primitive residue class
and any
. This is (barely) consistent with the estimate (1). Any lowering of the coefficient
in the Brun-Titchmarsh inequality (with reasonable error terms), in the regime when
is a large power of
, would then lead to at least some elimination of the exceptional zero case. However, this has not led to any progress on the Landau-Siegel zero problem (and may well be just a reformulation of that problem). (When
is a relatively small power of
, some improvements to Brun-Titchmarsh are possible that are not in contradiction with the presence of an exceptional zero; see this paper of Maynard for more discussion.)
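The Montgomery-Vaughan form of the Brun-Titchmarsh inequality mentioned above, $\pi(x;q,a) \leq \frac{2x}{\phi(q)\log(x/q)}$ for $x > q$, is easy to test numerically; here is a small sketch (an illustration only, with the modulus and residue class chosen arbitrarily), in which the actual count stays comfortably below the bound.

```python
# Numerical check (illustration only) of the Montgomery-Vaughan form of the
# Brun-Titchmarsh inequality, pi(x;q,a) <= 2x / (phi(q) log(x/q)) for x > q:
# count primes p <= x with p = a (mod q) and compare with the bound.
import math
from sympy import primerange, totient

q, a = 101, 7
for x in (10**4, 10**5, 10**6):
    count = sum(1 for p in primerange(2, x + 1) if p % q == a)
    bound = 2 * x / (totient(q) * math.log(x / q))
    print(f"x={x:>8}: pi(x;{q},{a}) = {count:>5}, "
          f"Brun-Titchmarsh bound = {bound:8.1f}, ratio = {count / bound:.3f}")
```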
Theorem 2 is deduced in turn from facts about the distribution of zeroes of $L$-functions. We first need a version of the truncated explicit formula that does not lose unnecessary logarithms:
Exercise 5 (Log-free truncated explicit formula) With the hypotheses as above, show that
for any non-principal character
of modulus
, where we assume
for some large
; for the principal character establish the same formula with an additional term of
on the right-hand side. (Hint: this is almost immediate from Exercise 45(iv) and Theorem 21 of Notes 2, with (say)
, except that there is a factor of
in the error term instead of
when
is extremely large compared to
. However, a closer inspection of the proof (particularly with regards to the truncated Perron formula in Proposition 12 of Notes 2) shows that the
factor can be replaced fairly easily by
. To get rid of the final factor of
, note that the proof of Proposition 12 used the rather crude bound
. If one replaces this crude bound by more sophisticated tools such as the Brun-Titchmarsh inequality, one will be able to remove the factor of
.
Using the Fourier inversion formula
(see Theorem 69 of Notes 1), we thus have
and so it suffices by the triangle inequality (bounding very crudely by
, as the contribution of the low-lying zeroes already turns out to be quite dominant) to show that
when no exceptional zero is present, and
when an exceptional zero is present.
To handle the former case (2), one uses two facts about zeroes. The first is the classical zero-free region (Proposition 51 from Notes 2), which we reproduce in our context here:
Proposition 6 (Classical zero-free region) Let
. Apart from a potential exceptional zero
, all zeroes
of
$L$-functions
with
of modulus
and
are such that
for some absolute constant
.
Using this zero-free region, we have
whenever contributes to the sum in (2), and so the left-hand side of (2) is bounded by
where we recall that is the number of zeroes
of any
$L$-function of a character
of modulus
with
and
(here we use conjugation symmetry to make
non-negative, accepting a multiplicative factor of two).
In Exercise 25 of Notes 6, the grand density estimate
is proven. If one inserts this bound into the above expression, one obtains a bound for (2) which is of the form
Unfortunately this is off from what we need by a factor of (and would lead to a weak form of Linnik’s theorem in which
was bounded by
rather than by
). In the analogous problem for prime number theorems in short intervals, we could use the Vinogradov-Korobov zero-free region to compensate for this loss, but that region does not help here for the contribution of the low-lying zeroes with
, which as mentioned before give the dominant contribution. Fortunately, it is possible to remove this logarithmic loss from the zero-density side of things:
Theorem 7 (Log-free grand density estimate) For any
and
, one has
The implied constants are effective.
We prove this estimate below the fold. The proof follows the methods of the previous section, but one inserts various sieve weights to restrict sums over natural numbers to essentially become sums over “almost primes”, as this turns out to remove the logarithmic losses. (More generally, the trick of restricting to almost primes by inserting suitable sieve weights is quite useful for avoiding any unnecessary losses of logarithmic factors in analytic number theory estimates.)
Now we turn to the case when there is an exceptional zero (3). The argument used to prove (2) applies here also, but does not gain the factor of in the exponent. To achieve this, we need an additional tool, a version of the Deuring-Heilbronn repulsion phenomenon due to Linnik:
Theorem 9 (Deuring-Heilbronn repulsion phenomenon) Suppose
is such that there is an exceptional zero
with
small. Then all other zeroes
of
$L$-functions of modulus
are such that
In other words, the exceptional zero enlarges the classical zero-free region by a factor of
. The implied constants are effective.
Exercise 10 Use Theorem 7 and Theorem 9 to complete the proof of (3), and thus Linnik’s theorem.
Exercise 11 Use Theorem 9 to give an alternate proof of (Tatuzawa’s version of) Siegel’s theorem (Theorem 62 of Notes 2). (Hint: if two characters have different moduli, then they can be made to have the same modulus by multiplying by suitable principal characters.)
Theorem 9 is proven by similar methods to that of Theorem 7, the basic idea being to insert a further weight of (in addition to the sieve weights), the point being that the exceptional zero causes this weight to be quite small on the average. There is a strengthening of Theorem 9 due to Bombieri that is along the lines of Theorem 7, obtaining the improvement
with effective implied constants for any and
in the presence of an exceptional zero, where the prime in
means that the exceptional zero
is omitted (thus
if
). Note that the upper bound on
falls below one when
for a sufficiently small
, thus recovering Theorem 9. Bombieri’s theorem can be established by the methods in this set of notes, and will be given as an exercise to the reader.
Remark 12 There are a number of alternate ways to derive the results in this set of notes, for instance using the Turan power sums method which is based on studying derivatives such as
for
and large
, and performing various sorts of averaging in
to attenuate the contribution of many of the zeroes
. We will not develop this method here, but see for instance Chapter 9 of Montgomery’s book. See the text of Friedlander and Iwaniec for yet another approach based primarily on sieve-theoretic ideas.
Remark 13 When one optimises all the exponents, it turns out that the exponent in Linnik’s theorem is extremely good in the presence of an exceptional zero – indeed Friedlander and Iwaniec showed that one can even get a bound of the form
for some
, which is even stronger than one can obtain from GRH! There are other places in which exceptional zeroes can be used to obtain results stronger than what one can obtain even on the Riemann hypothesis; for instance, Heath-Brown used the hypothesis of an infinite sequence of Siegel zeroes to obtain the twin prime conjecture.
In the previous set of notes, we studied upper bounds on sums such as for
that were valid for all
in a given range, such as
; this led in turn to upper bounds on the Riemann zeta
for
in the same range, and for various choices of
. While some improvement over the trivial bound of
was obtained by these methods, we did not get close to the conjectural bound of
that one expects from pseudorandomness heuristics (assuming that
is not too large compared with
, e.g.
).
However, it turns out that one can get much better bounds if one settles for estimating sums such as , or more generally finite Dirichlet series (also known as Dirichlet polynomials) such as
, for most values of
in a given range such as
. Equivalently, we will be able to get some control on the large values of such Dirichlet polynomials, in the sense that we can control the set of
for which
exceeds a certain threshold, even if we cannot show that this set is empty. These large value theorems are often closely tied with estimates for mean values such as
of a Dirichlet series; these latter estimates are thus known as mean value theorems for Dirichlet series. Our approach to these theorems will follow the same sort of methods used in Notes 3, in particular relying on the generalised Bessel inequality from those notes.
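As a warm-up for the mean value theorems alluded to above, here is a numerical sketch (an illustration only; the coefficients, lengths, and integration grid are chosen by hand) of the classical $L^2$ mean value for Dirichlet polynomials: for coefficients $a_n$, the average of $|\sum_{n \leq N} a_n n^{it}|^2$ over $t \in [0,T]$ is $(1 + O(N/T)) \sum_{n \leq N} |a_n|^2$.

```python
# Numerical sketch (illustration only) of the classical mean value theorem for
# Dirichlet polynomials: for coefficients a_n, the average over t in [0,T] of
# |sum_{n<=N} a_n n^{it}|^2 equals (1 + O(N/T)) * sum_{n<=N} |a_n|^2.
# Here a_n = 1, and the t-integral is approximated by a Riemann sum.
import numpy as np

N, T = 200, 5000
a = np.ones(N)                             # coefficients a_1, ..., a_N
t = np.arange(0.0, T, 0.05)                # integration grid on [0, T)
logs = np.log(np.arange(1, N + 1))

D = np.zeros(t.size, dtype=complex)        # D(t) = sum_n a_n * exp(i t log n)
for k in range(N):
    D += a[k] * np.exp(1j * t * logs[k])

mean_value = np.mean(np.abs(D) ** 2)       # approximates (1/T) int_0^T |D(t)|^2 dt
print(f"N = {N}, T = {T}")
print(f"(1/T) * integral of |D(t)|^2 over [0,T] ~= {mean_value:.1f}")
print(f"sum of |a_n|^2                           = {a.dot(a):.1f}")
```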
Our main application of the large value theorems for Dirichlet polynomials will be to control the number of zeroes of the Riemann zeta function (or the Dirichlet
$L$-functions
) in various rectangles of the form
for various
and
. These rectangles will be larger than the zero-free regions for which we can exclude zeroes completely, but we will often be able to limit the number of zeroes in such rectangles to be quite small. For instance, we will be able to show the following weak form of the Riemann hypothesis: as
, a proportion
of zeroes of the Riemann zeta function in the critical strip with
will have real part
. Related to this, the number of zeroes with
and
can be shown to be bounded by
as
for any
.
In the next set of notes we will use refined versions of these theorems to establish Linnik’s theorem on the least prime in an arithmetic progression.
Our presentation here is broadly based on Chapters 9 and 10 in Iwaniec and Kowalski, who give a number of more sophisticated large value theorems than the ones discussed here.
We return to the study of the Riemann zeta function , focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for
, which in turn lead to improved estimates for the error term in the prime number theorem.
In equation (21) of Notes 2 we obtained the somewhat crude estimates
for any and
with
and
. Setting
, we obtained the crude estimate
in this region. In particular, if and
then we had
. Using the functional equation and the Hadamard three lines lemma, we can improve this to
; see Supplement 3.
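For orientation, one can compare the actual size of $\zeta(\frac{1}{2}+it)$ with the exponents appearing in such bounds. The following sketch (an illustration only; it uses mpmath's zeta as a black box, the sample heights are arbitrary, and the reference exponents $1/4$ for the convexity-type bound and $1/6$ for the van der Corput-type improvement of Exercise 3 below are quoted in their standard form) shows that the true values are far smaller than either bound, consistent with the Lindelöf hypothesis mentioned later in these notes.

```python
# Numerical sketch (illustration only): the actual size of zeta on the critical
# line, compared with the reference quantities t^(1/4) (convexity-type) and
# t^(1/6) (van der Corput-type).  mpmath's zeta is used as a black box and the
# sample heights t are arbitrary.
from mpmath import mp, zeta

mp.dps = 15
for t in (10**2, 10**3, 10**4, 10**5):
    z = abs(zeta(0.5 + 1j * t))
    print(f"t = {t:>7}: |zeta(1/2+it)| = {float(z):8.3f},  "
          f"t^(1/4) = {t**0.25:8.1f},  t^(1/6) = {t**(1/6.0):6.1f}")
```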
Now we seek better upper bounds on . We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 34 of Supplement 3:
Proposition 1 Let
with
and
. Then
where
.
Proof: We fix a smooth function with
for
and
for
, and allow implied constants to depend on
. Let
with
. From Exercise 34 of Supplement 3, we have
for some sufficiently large absolute constant . By dyadic decomposition, we thus have
We can absorb the first term in the second using the case of the supremum. Writing
, where
it thus suffices to show that
for each . But from the fundamental theorem of calculus, the left-hand side can be written as
and the claim then follows from the triangle inequality and a routine calculation.
We are thus interested in getting good bounds on the sum . More generally, we consider normalised exponential sums of the form
where is an interval of length at most
for some
, and
is a smooth function. We will assume smoothness estimates of the form
for some , all
, and all
, where
is the
-fold derivative of
; in the case
,
of interest for the Riemann zeta function, we easily verify that these estimates hold with
. (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative
of
directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to
.
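Before stating the estimates, it may help to see the amount of cancellation present in a concrete case. The following sketch (an illustration only; the notes work with a normalised version of the sum, while here we display the raw sum, and the values of $t$ and $N$ are arbitrary) computes the exponential sum with the zeta-type phase over a dyadic range and compares it with the trivial bound and with square-root cancellation.

```python
# Numerical sketch (illustration only): the exponential sum with the zeta-type
# phase f(n) = -(t / 2 pi) log n over a dyadic interval, compared with the
# trivial bound N and with square-root cancellation sqrt(N).
import cmath
import math

def exp_sum(N, t):
    """sum_{N < n <= 2N} e(f(n)), where e(x) = exp(2 pi i x) and f(n) = -(t/2pi) log n,
    so that each summand is n^{-it}."""
    return sum(cmath.exp(-1j * t * math.log(n)) for n in range(N + 1, 2 * N + 1))

t = 10**6
for N in (10**2, 10**3, 10**4):
    S = abs(exp_sum(N, t))
    print(f"N={N:>6}: |sum| = {S:9.2f}, trivial bound N = {N}, sqrt(N) = {math.sqrt(N):7.1f}")
```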
The trivial bound for (2) is
and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of for (2) for any
if
; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form
when
and
depends on
. We present two such results here, which perform well for small and large values of
respectively:
Theorem 2 Let
, let
be an interval of length at most
, and let
be a smooth function obeying (3) for all
and
.
The factor of can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of
. The estimate (6) is superior to (5) when
for
large, since (after optimising in
) (5) gives a gain of the form
over the trivial bound, while (6) gives
. We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on
, and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.
We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating like a polynomial of degree roughly
; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about
times, which is why the gain is only exponentially small in
. The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of
– by using Hölder’s inequality in combination with the fact that the discrete curve
becomes roughly equidistributed in the box
after taking the sumset of about
copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “
-adic” version of this equidistribution to reduce the claim at a given scale
to a smaller scale
with
, and then proceeding by induction.
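The squaring step in the van der Corput method can be made concrete. One standard form of the underlying inequality (the normalisation used in the notes may differ) is
$\displaystyle \Big|\sum_{n=1}^{N} a_n\Big|^2 \leq \frac{N+H-1}{H} \sum_{|h| < H} \Big(1 - \frac{|h|}{H}\Big) \sum_{n} a_{n+h} \overline{a_n}$
for any $1 \leq H \leq N$, with the convention that $a_n = 0$ outside $[1,N]$; for $a_n = e(f(n))$ the inner sums are exponential sums with the differenced phase $f(n+h)-f(n)$. The following sketch (an illustration only, with arbitrary parameters) checks this numerically for the zeta-type phase.

```python
# Numerical sketch (illustration only) of the Weyl / van der Corput differencing
# step: for a_n = e(f(n)) and 1 <= H <= N,
#   |sum_{n<=N} a_n|^2 <= ((N+H-1)/H) * sum_{|h|<H} (1 - |h|/H) * C(h),
# where C(h) = sum_n a_{n+h} * conj(a_n) is a sum with the differenced phase.
# (One standard form of van der Corput's inequality; normalisations vary.)
import cmath
import math

t, N, H = 10**6, 10**4, 50
a = [cmath.exp(-1j * t * math.log(n)) for n in range(1, N + 1)]   # a_n = n^{-it}

lhs = abs(sum(a)) ** 2
rhs = 0.0
for h in range(-(H - 1), H):
    # correlation C(h) over the n for which both n and n+h lie in [1, N]
    C_h = sum(a[n + h] * a[n].conjugate() for n in range(max(0, -h), N - max(0, h)))
    rhs += (1 - abs(h) / H) * C_h.real    # the h and -h terms are conjugates
rhs *= (N + H - 1) / H
print(f"|S|^2 = {lhs:.3e},  van der Corput bound = {rhs:.3e},  "
      f"trivial bound N^2 = {N**2:.3e}")
```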
One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:
Exercise 3 (Subconvexity bound)
- (i) Show that
for all
. (Hint: use the
case of the van der Corput estimate.)
- (ii) For any
, show that
as
(the decay rate in the
is allowed to depend on
).
Exercise 4 Let
be such that
, and let
.
- (i) (Littlewood bound) Use the van der Corput estimate to show that
whenever
.
- (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that
whenever
.
As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region , which in turn leads to the prime number theorem with error term
for . If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region
(which is only slightly wider than the classical zero-free region) and an error term
in the prime number theorem.
Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let
be a non-principal character of modulus
.
- (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that
whenever
and
(Hint: use the Vinogradov estimate and a change of variables to control
for various intervals
of length at most
and residue classes
, in the regime
(say). For
, do not try to capture any cancellation and just use the triangle inequality instead.)
- (ii) Obtain a zero-free region
for
, for some (effective) absolute constant
.
- (iii) Obtain the prime number theorem in arithmetic progressions with error term
whenever
,
,
is primitive, and
depends (ineffectively) on
.