You are currently browsing the category archive for the ‘254A – analytic prime number theory’ category.
Let us call an arithmetic function
-bounded if we have
for all
. In this section we focus on the asymptotic behaviour of
-bounded multiplicative functions. Some key examples of such functions include:
- The Möbius function
;
- The Liouville function
;
- “Archimedean” characters
(which I call Archimedean because they are pullbacks of a Fourier character
on the multiplicative group
, which has the Archimedean property);
- Dirichlet characters (or “non-Archimedean” characters)
(which are essentially pullbacks of Fourier characters on a multiplicative cyclic group
with the discrete (non-Archimedean) metric);
- Hybrid characters
.
The space of -bounded multiplicative functions is also closed under multiplication and complex conjugation.
Given a multiplicative function , we are often interested in the asymptotics of long averages such as
for large values of , as well as short sums
where and
are both large, but
is significantly smaller than
. (Throughout these notes we will try to normalise most of the sums and integrals appearing here as averages that are trivially bounded by
; note that other normalisations are preferred in some of the literature cited here.) For instance, as we established in Theorem 58 of Notes 1, the prime number theorem is equivalent to the assertion that
as . The Liouville function behaves almost identically to the Möbius function, in that estimates for one function almost always imply analogous estimates for the other:
Exercise 1 Without using the prime number theorem, show that (1) is also equivalent to
as
. (Hint: use the identities
and
.)
Henceforth we shall focus our discussion more on the Liouville function, and turn our attention to averages on shorter intervals. From (2) one has
as if
is such that
for some fixed
. However it is significantly more difficult to understand what happens when
grows much slower than this. By using the techniques based on zero density estimates discussed in Notes 6, it was shown by Motohashi and that one can also establish \eqref. On the Riemann Hypothesis Maier and Montgomery lowered the threshold to
for an absolute constant
(the bound
is more classical, following from Exercise 33 of Notes 2). On the other hand, the randomness heuristics from Supplement 4 suggest that
should be able to be taken as small as
, and perhaps even
if one is particularly optimistic about the accuracy of these probabilistic models. On the other hand, the Chowla conjecture (mentioned for instance in Supplement 4) predicts that
cannot be taken arbitrarily slowly growing in
, due to the conjectured existence of arbitrarily long strings of consecutive numbers where the Liouville function does not change sign (and in fact one can already show from the known partial results towards the Chowla conjecture that (3) fails for some sequence
and some sufficiently slowly growing
, by modifying the arguments in these papers of mine).
The situation is better when one asks to understand the mean value on almost all short intervals, rather than all intervals. There are several equivalent ways to formulate this question:
Exercise 2 Let
be a function of
such that
and
as
. Let
be a
-bounded function. Show that the following assertions are equivalent:
As it turns out the second moment formulation in (iii) will be the most convenient for us to work with in this set of notes, as it is well suited to Fourier-analytic techniques (and in particular the Plancherel theorem).
Using zero density methods, for instance, it was shown by Ramachandra that
whenever and
. With this quality of bound (saving arbitrary powers of
over the trivial bound of
), this is still the lowest value of
one can reach unconditionally. However, in a striking recent breakthrough, it was shown by Matomaki and Radziwill that as long as one is willing to settle for weaker bounds (saving a small power of
or
, or just a qualitative decay of
), one can obtain non-trivial estimates on far shorter intervals. For instance, they show
Theorem 3 (Matomaki-Radziwill theorem for Liouville) For any
, one has
for some absolute constant
.
In fact they prove a slightly more precise result: see Theorem 1 of that paper. In particular, they obtain the asymptotic (4) for any function that goes to infinity as
, no matter how slowly! This ability to let
grow slowly with
is important for several applications; for instance, in order to combine this type of result with the entropy decrement methods from Notes 9, it is essential that
be allowed to grow more slowly than
. See also this survey of Soundararajan for further discussion.
Exercise 4 In this exercise you may use Theorem 3 freely.
- (i) Establish the upper bound
for some absolute constant
and all sufficiently large
. (Hint: if this bound failed, then
would hold for almost all
; use this to create many intervals
for which
is extremely large.)
- (ii) Show that Theorem 3 also holds with
replaced by
, where
is the principal character of period
. (Use the fact that
for all
.) Use this to establish the corresponding lower bound
to (i).
(There is a curious asymmetry to the difficulty level of these bounds; the upper bound in (ii) was established much earlier by Harman, Pintz, and Wolke, but the lower bound in (i) was only established in the Matomaki-Radziwill paper.)
The techniques discussed previously were highly complex-analytic in nature, relying in particular on the fact that functions such as or
have Dirichlet series
,
that extend meromorphically into the critical strip. In contrast, the Matomaki-Radziwill theorem does not rely on such meromorphic continuations, and in fact holds for more general classes of
-bounded multiplicative functions
, for which one typically does not expect any meromorphic continuation into the strip. Instead, one can view the Matomaki-Radziwill theory as following the philosophy of a slightly different approach to multiplicative number theory, namely the pretentious multiplicative number theory of Granville and Soundarajan (as presented for instance in their draft monograph). A basic notion here is the pretentious distance between two
-bounded multiplicative functions
(at a given scale
), which informally measures the extent to which
“pretends” to be like
(or vice versa). The precise definition is
Definition 5 (Pretentious distance) Given two
-bounded multiplicative functions
, and a threshold
, the pretentious distance
between
and
up to scale
is given by the formula
Note that one can also define an infinite version of this distance by removing the constraint
, though in such cases the pretentious distance may then be infinite. The pretentious distance is not quite a metric (because
can be non-zero, and furthermore
can vanish without
being equal), but it is still quite close to behaving like a metric, in particular it obeys the triangle inequality; see Exercise 16 below. The philosophy of pretentious multiplicative number theory is that two
-bounded multiplicative functions
will exhibit similar behaviour at scale
if their pretentious distance
is bounded, but will become uncorrelated from each other if this distance becomes large. A simple example of this philosophy is given by the following “weak Halasz theorem”, proven in Section 2:
Proposition 6 (Logarithmically averaged version of Halasz) Let
be sufficiently large. Then for any
-bounded multiplicative functions
, one has
for an absolute constant
.
In particular, if does not pretend to be
, then the logarithmic average
will be small. This condition is basically necessary, since of course
.
If one works with non-logarithmic averages , then not pretending to be
is insufficient to establish decay, as was already observed in Exercise 11 of Notes 1: if
is an Archimedean character
for some non-zero real
, then
goes to zero as
(which is consistent with Proposition 6), but
does not go to zero. However, this is in some sense the “only” obstruction to these averages decaying to zero, as quantified by the following basic result:
Theorem 7 (Halasz’s theorem) Let
be sufficiently large. Then for any
-bounded multiplicative function
, one has
for an absolute constant
and any
.
Informally, we refer to a -bounded multiplicative function as “pretentious’; if it pretends to be a character such as
, and “non-pretentious” otherwise. The precise distinction is rather malleable, as the precise class of characters that one views as “obstructions” varies from situation to situation. For instance, in Proposition 6 it is just the trivial character
which needs to be considered, but in Theorem 7 it is the characters
with
. In other contexts one may also need to add Dirichlet characters
or hybrid characters such as
to the list of characters that one might pretend to be. The division into pretentious and non-pretentious functions in multiplicative number theory is faintly analogous to the division into major and minor arcs in the circle method applied to additive number theory problems; see Notes 8. The Möbius and Liouville functions are model examples of non-pretentious functions; see Exercise 24.
In the contrapositive, Halasz’ theorem can be formulated as the assertion that if one has a large mean
for some , then one has the pretentious property
for some . This has the flavour of an “inverse theorem”, of the type often found in arithmetic combinatorics.
Among other things, Halasz’s theorem gives yet another proof of the prime number theorem (1); see Section 2.
We now give a version of the Matomaki-Radziwill theorem for general (non-pretentious) multiplicative functions that is formulated in a similar contrapositive (or “inverse theorem”) fashion, though to simplify the presentation we only state a qualitative version that does not give explicit bounds.
Theorem 8 ((Qualitative) Matomaki-Radziwill theorem) Let
, and let
, with
sufficiently large depending on
. Suppose that
is a
-bounded multiplicative function such that
Then one has
for some
.
The condition is basically optimal, as the following example shows:
Exercise 9 Let
be a sufficiently small constant, and let
be such that
. Let
be the Archimedean character
for some
. Show that
Combining Theorem 8 with standard non-pretentiousness facts about the Liouville function (see Exercise 24), we recover Theorem 3 (but with a decay rate of only rather than
). We refer the reader to the original paper of Matomaki-Radziwill (as well as this followup paper with myself) for the quantitative version of Theorem 8 that is strong enough to recover the full version of Theorem 3, and which can also handle real-valued pretentious functions.
With our current state of knowledge, the only arguments that can establish the full strength of Halasz and Matomaki-Radziwill theorems are Fourier analytic in nature, relating sums involving an arithmetic function with its Dirichlet series
which one can view as a discrete Fourier transform of (or more precisely of the measure
, if one evaluates the Dirichlet series on the right edge
of the critical strip). In this aspect, the techniques resemble the complex-analytic methods from Notes 2, but with the key difference that no analytic or meromorphic continuation into the strip is assumed. The key identity that allows us to pass to Dirichlet series is the following variant of Proposition 7 of Notes 2:
Proposition 10 (Parseval type identity) Let
be finitely supported arithmetic functions, and let
be a Schwartz function. Then
where
is the Fourier transform of
. (Note that the finite support of
and the Schwartz nature of
ensure that both sides of the identity are absolutely convergent.)
The restriction that be finitely supported will be slightly annoying in places, since most multiplicative functions will fail to be finitely supported, but this technicality can usually be overcome by suitably truncating the multiplicative function, and taking limits if necessary.
Proof: By expanding out the Dirichlet series, it suffices to show that
for any natural numbers . But this follows from the Fourier inversion formula
applied at
.
For applications to Halasz type theorems, one sets equal to the Kronecker delta
, producing weighted integrals of
of “
” type. For applications to Matomaki-Radziwill theorems, one instead sets
, and more precisely uses the following corollary of the above proposition, to obtain weighted integrals of
of “
” type:
Exercise 11 (Plancherel type identity) If
is finitely supported, and
is a Schwartz function, establish the identity
In contrast, information about the non-pretentious nature of a multiplicative function will give “pointwise” or “
” type control on the Dirichlet series
, as is suggested from the Euler product factorisation of
.
It will be convenient to formalise the notion of ,
, and
control of the Dirichlet series
, which as previously mentioned can be viewed as a sort of “Fourier transform” of
:
Definition 12 (Fourier norms) Let
be finitely supported, and let
be a bounded measurable set. We define the Fourier
norm
the Fourier
norm
and the Fourier
norm
One could more generally define norms for other exponents
, but we will only need the exponents
in this current set of notes. It is clear that all the above norms are in fact (semi-)norms on the space of finitely supported arithmetic functions.
As mentioned above, Halasz’s theorem gives good control on the Fourier norm for restrictions of non-pretentious functions to intervals:
Exercise 13 (Fourier
control via Halasz) Let
be a
-bounded multiplicative function, let
be an interval in
for some
, let
, and let
be a bounded measurable set. Show that
(Hint: you will need to use summation by parts (or an equivalent device) to deal with a
weight.)
Meanwhile, the Plancherel identity in Exercise 11 gives good control on the Fourier norm for functions on long intervals (compare with Exercise 2 from Notes 6):
Exercise 14 (
mean value theorem) Let
, and let
be finitely supported. Show that
Conclude in particular that if
is supported in
for some
and
, then
In the simplest case of the logarithmically averaged Halasz theorem (Proposition 6), Fourier estimates are already sufficient to obtain decent control on the (weighted) Fourier
type expressions that show up. However, these estimates are not enough by themselves to establish the full Halasz theorem or the Matomaki-Radziwill theorem. To get from Fourier
control to Fourier
or
control more efficiently, the key trick is use Hölder’s inequality, which when combined with the basic Dirichlet series identity
The strategy is then to factor (or approximately factor) the original function as a Dirichlet convolution (or average of convolutions) of various components, each of which enjoys reasonably good Fourier
or
estimates on various regions
, and then combine them using the Hölder inequalities (5), (6) and the triangle inequality. For instance, to prove Halasz’s theorem, we will split
into the Dirichlet convolution of three factors, one of which will be estimated in
using the non-pretentiousness hypothesis, and the other two being estimated in
using Exercise 14. For the Matomaki-Radziwill theorem, one uses a significantly more complicated decomposition of
into a variety of Dirichlet convolutions of factors, and also splits up the Fourier domain
into several subregions depending on whether the Dirichlet series associated to some of these components are large or small. In each region and for each component of these decompositions, all but one of the factors will be estimated in
, and the other in
; but the precise way in which this is done will vary from component to component. For instance, in some regions a key factor will be small in
by construction of the region; in other places, the
control will come from Exercise 13. Similarly, in some regions, satisfactory
control is provided by Exercise 14, but in other regions one must instead use “large value” theorems (in the spirit of Proposition 9 from Notes 6), or amplify the power of the standard
mean value theorems by combining the Dirichlet series with other Dirichlet series that are known to be large in this region.
There are several ways to achieve the desired factorisation. In the case of Halasz’s theorem, we can simply work with a crude version of the Euler product factorisation, dividing the primes into three categories (“small”, “medium”, and “large” primes) and expressing as a triple Dirichlet convolution accordingly. For the Matomaki-Radziwill theorem, one instead exploits the Turan-Kubilius phenomenon (Section 5 of Notes 1, or Lemma 2 of Notes 9)) that for various moderately wide ranges
of primes, the number of prime divisors of a large number
in the range
is almost always close to
. Thus, if we introduce the arithmetic functions
and more generally we have a twisted approximation
for multiplicative functions . (Actually, for technical reasons it will be convenient to work with a smoothed out version of these functions; see Section 3.) Informally, these formulas suggest that the “
energy” of a multiplicative function
is concentrated in those regions where
is extremely large in a
sense. Iterations of this formula (or variants of this formula, such as an identity due to Ramaré) will then give the desired (approximate) factorisation of
.
Read the rest of this entry »
In these notes we presume familiarity with the basic concepts of probability theory, such as random variables (which could take values in the reals, vectors, or other measurable spaces), probability, and expectation. Much of this theory is in turn based on measure theory, which we will also presume familiarity with. See for instance this previous set of lecture notes for a brief review.
The basic objects of study in analytic number theory are deterministic; there is nothing inherently random about the set of prime numbers, for instance. Despite this, one can still interpret many of the averages encountered in analytic number theory in probabilistic terms, by introducing random variables into the subject. Consider for instance the form
of the prime number theorem (where we take the limit ). One can interpret this estimate probabilistically as
where is a random variable drawn uniformly from the natural numbers up to
, and
denotes the expectation. (In this set of notes we will use boldface symbols to denote random variables, and non-boldface symbols for deterministic objects.) By itself, such an interpretation is little more than a change of notation. However, the power of this interpretation becomes more apparent when one then imports concepts from probability theory (together with all their attendant intuitions and tools), such as independence, conditioning, stationarity, total variation distance, and entropy. For instance, suppose we want to use the prime number theorem (1) to make a prediction for the sum
After dividing by , this is essentially
With probabilistic intuition, one may expect the random variables to be approximately independent (there is no obvious relationship between the number of prime factors of
, and of
), and so the above average would be expected to be approximately equal to
which by (2) is equal to . Thus we are led to the prediction
The asymptotic (3) is widely believed (it is a special case of the Chowla conjecture, which we will discuss in later notes; while there has been recent progress towards establishing it rigorously, it remains open for now.
How would one try to make these probabilistic intuitions more rigorous? The first thing one needs to do is find a more quantitative measurement of what it means for two random variables to be “approximately” independent. There are several candidates for such measurements, but we will focus in these notes on two particularly convenient measures of approximate independence: the “” measure of independence known as covariance, and the “
” measure of independence known as mutual information (actually we will usually need the more general notion of conditional mutual information that measures conditional independence). The use of
type methods in analytic number theory is well established, though it is usually not described in probabilistic terms, being referred to instead by such names as the “second moment method”, the “large sieve” or the “method of bilinear sums”. The use of
methods (or “entropy methods”) is much more recent, and has been able to control certain types of averages in analytic number theory that were out of reach of previous methods such as
methods. For instance, in later notes we will use entropy methods to establish the logarithmically averaged version
of (3), which is implied by (3) but strictly weaker (much as the prime number theorem (1) implies the bound , but the latter bound is much easier to establish than the former).
As with many other situations in analytic number theory, we can exploit the fact that certain assertions (such as approximate independence) can become significantly easier to prove if one only seeks to establish them on average, rather than uniformly. For instance, given two random variables and
of number-theoretic origin (such as the random variables
and
mentioned previously), it can often be extremely difficult to determine the extent to which
behave “independently” (or “conditionally independently”). However, thanks to second moment tools or entropy based tools, it is often possible to assert results of the following flavour: if
are a large collection of “independent” random variables, and
is a further random variable that is “not too large” in some sense, then
must necessarily be nearly independent (or conditionally independent) to many of the
, even if one cannot pinpoint precisely which of the
the variable
is independent with. In the case of the second moment method, this allows us to compute correlations such as
for “most”
. The entropy method gives bounds that are significantly weaker quantitatively than the second moment method (and in particular, in its current incarnation at least it is only able to say non-trivial assertions involving interactions with residue classes at small primes), but can control significantly more general quantities
for “most”
thanks to tools such as the Pinsker inequality.
In the fall quarter (starting Sep 27) I will be teaching a graduate course on analytic prime number theory. This will be similar to a graduate course I taught in 2015, and in particular will reuse several of the lecture notes from that course, though it will also incorporate some new material (and omit some material covered in the previous course, to compensate). I anticipate covering the following topics:
- Elementary multiplicative number theory
- Complex-analytic multiplicative number theory
- The entropy decrement argument
- Bounds for exponential sums
- Zero density theorems
- Halasz’s theorem and the Matomaki-Radziwill theorem
- The circle method
- (If time permits) Chowla’s conjecture and the Erdos discrepancy problem [Update: I did not end up writing notes on this topic.]
Lecture notes for topics 3, 6, and 8 will be forthcoming.
We have seen in previous notes that the operation of forming a Dirichlet series
or twisted Dirichlet series
is an incredibly useful tool for questions in multiplicative number theory. Such series can be viewed as a multiplicative Fourier transform, since the functions and
are multiplicative characters.
Similarly, it turns out that the operation of forming an additive Fourier series
where lies on the (additive) unit circle
and
is the standard additive character, is an incredibly useful tool for additive number theory, particularly when studying additive problems involving three or more variables taking values in sets such as the primes; the deployment of this tool is generally known as the Hardy-Littlewood circle method. (In the analytic number theory literature, the minus sign in the phase
is traditionally omitted, and what is denoted by
here would be referred to instead by
,
or just
.) We list some of the most classical problems in this area:
- (Even Goldbach conjecture) Is it true that every even natural number
greater than two can be expressed as the sum
of two primes?
- (Odd Goldbach conjecture) Is it true that every odd natural number
greater than five can be expressed as the sum
of three primes?
- (Waring problem) For each natural number
, what is the least natural number
such that every natural number
can be expressed as the sum of
or fewer
powers?
- (Asymptotic Waring problem) For each natural number
, what is the least natural number
such that every sufficiently large natural number
can be expressed as the sum of
or fewer
powers?
- (Partition function problem) For any natural number
, let
denote the number of representations of
of the form
where
and
are natural numbers. What is the asymptotic behaviour of
as
?
The Waring problem and its asymptotic version will not be discussed further here, save to note that the Vinogradov mean value theorem (Theorem 13 from Notes 5) and its variants are particularly useful for getting good bounds on ; see for instance the ICM article of Wooley for recent progress on these problems. Similarly, the partition function problem was the original motivation of Hardy and Littlewood in introducing the circle method, but we will not discuss it further here; see e.g. Chapter 20 of Iwaniec-Kowalski for a treatment.
Instead, we will focus our attention on the odd Goldbach conjecture as our model problem. (The even Goldbach conjecture, which involves only two variables instead of three, is unfortunately not amenable to a circle method approach for a variety of reasons, unless the statement is replaced with something weaker, such as an averaged statement; see this previous blog post for further discussion. On the other hand, the methods here can obtain weaker versions of the even Goldbach conjecture, such as showing that “almost all” even numbers are the sum of two primes; see Exercise 34 below.) In particular, we will establish the following celebrated theorem of Vinogradov:
Theorem 1 (Vinogradov’s theorem) Every sufficiently large odd number
is expressible as the sum of three primes.
Recently, the restriction that be sufficiently large was replaced by Helfgott with
, thus establishing the odd Goldbach conjecture in full. This argument followed the same basic approach as Vinogradov (based on the circle method), but with various estimates replaced by “log-free” versions (analogous to the log-free zero-density theorems in Notes 7), combined with careful numerical optimisation of constants and also some numerical work on the even Goldbach problem and on the generalised Riemann hypothesis. We refer the reader to Helfgott’s text for details.
We will in fact show the more precise statement:
Theorem 2 (Quantitative Vinogradov theorem) Let
be an natural number. Then
The implied constants are ineffective.
We dropped the hypothesis that is odd in Theorem 2, but note that
vanishes when
is even. For odd
, we have
Unfortunately, due to the ineffectivity of the constants in Theorem 2 (a consequence of the reliance on the Siegel-Walfisz theorem in the proof of that theorem), one cannot quantify explicitly what “sufficiently large” means in Theorem 1 directly from Theorem 2. However, there is a modification of this theorem which gives effective bounds; see Exercise 32 below.
Exercise 4 Obtain a heuristic derivation of the main term
using the modified Cramér model (Section 1 of Supplement 4).
To prove Theorem 2, we consider the more general problem of estimating sums of the form
for various integers and functions
, which we will take to be finitely supported to avoid issues of convergence.
Suppose that are supported on
; for simplicity, let us first assume the pointwise bound
for all
. (This simple case will not cover the case in Theorem 2, when
are truncated versions of the von Mangoldt function
, but will serve as a warmup to that case.) Then we have the trivial upper bound
A basic observation is that this upper bound is attainable if all “pretend” to behave like the same additive character
for some
. For instance, if
, then we have
when
, and then it is not difficult to show that
as .
The key to the success of the circle method lies in the converse of the above statement: the only way that the trivial upper bound (2) comes close to being sharp is when all correlate with the same character
, or in other words
are simultaneously large. This converse is largely captured by the following two identities:
Exercise 5 Let
be finitely supported functions. Then for any natural number
, show that
and
The traditional approach to using the circle method to compute sums such as proceeds by invoking (3) to express this sum as an integral over the unit circle, then dividing the unit circle into “major arcs” where
are large but computable with high precision, and “minor arcs” where one has estimates to ensure that
are small in both
and
senses. For functions
of number-theoretic significance, such as truncated von Mangoldt functions, the “major arcs” typically consist of those
that are close to a rational number
with
not too large, and the “minor arcs” consist of the remaining portions of the circle. One then obtains lower bounds on the contributions of the major arcs, and upper bounds on the contribution of the minor arcs, in order to get good lower bounds on
.
This traditional approach is covered in many places, such as this text of Vaughan. We will emphasise in this set of notes a slightly different perspective on the circle method, coming from recent developments in additive combinatorics; this approach does not quite give the sharpest quantitative estimates, but it allows for easier generalisation to more combinatorial contexts, for instance when replacing the primes by dense subsets of the primes, or replacing the equation with some other equation or system of equations.
From Exercise 5 and Hölder’s inequality, we immediately obtain
Corollary 6 Let
be finitely supported functions. Then for any natural number
, we have
Similarly for permutations of the
.
In the case when are supported on
and bounded by
, this corollary tells us that we have
is
whenever one has
uniformly in
, and similarly for permutations of
. From this and the triangle inequality, we obtain the following conclusion: if
is supported on
and bounded by
, and
is Fourier-approximated by another function
supported on
and bounded by
in the sense that
Thus, one possible strategy for estimating the sum is, one can effectively replace (or “model”)
by a simpler function
which Fourier-approximates
in the sense that the exponential sums
agree up to error
. For instance:
Exercise 7 Let
be a natural number, and let
be a random subset of
, chosen so that each
has an independent probability of
of lying in
.
- (i) If
and
, show that with probability
as
, one has
uniformly in
. (Hint: for any fixed
, this can be accomplished with quite a good probability (e.g.
) using a concentration of measure inequality, such as Hoeffding’s inequality. To obtain the uniformity in
, round
to the nearest multiple of (say)
and apply the union bound).
- (ii) Show that with probability
, one has
representations of the form
with
(with
treated as an ordered triple, rather than an unordered one).
In the case when is something like the truncated von Mangoldt function
, the quantity
is of size
rather than
. This costs us a logarithmic factor in the above analysis, however we can still conclude that we have the approximation (4) whenever
is another sequence with
such that one has the improved Fourier approximation
uniformly in . (Later on we will obtain a “log-free” version of this implication in which one does not need to gain a factor of
in the error term.)
This suggests a strategy for proving Vinogradov’s theorem: find an approximant to some suitable truncation
of the von Mangoldt function (e.g.
or
) which obeys the Fourier approximation property (5), and such that the expression
is easily computable. It turns out that there are a number of good options for such an approximant
. One of the quickest ways to obtain such an approximation (which is used in Chapter 19 of Iwaniec and Kowalski) is to start with the standard identity
, that is to say
and obtain an approximation by truncating to be less than some threshold
(which, in practice, would be a small power of
):
Thus, for instance, if , the approximant
would be taken to be
One could also use the slightly smoother approximation
in which case we would take
The function is somewhat similar to the continuous Selberg sieve weights studied in Notes 4, with the main difference being that we did not square the divisor sum as we will not need to take
to be non-negative. As long as
is not too large, one can use some sieve-like computations to compute expressions like
quite accurately. The approximation (5) can be justified by using a nice estimate of Davenport that exemplifies the Mobius pseudorandomness heuristic from Supplement 4:
Theorem 8 (Davenport’s estimate) For any
and
, we have
uniformly for all
. The implied constants are ineffective.
This estimate will be proven by splitting into two cases. In the “major arc” case when is close to a rational
with
small (of size
or so), this estimate will be a consequence of the Siegel-Walfisz theorem ( from Notes 2); it is the application of this theorem that is responsible for the ineffective constants. In the remaining “minor arc” case, one proceeds by using a combinatorial identity (such as Vaughan’s identity) to express the sum
in terms of bilinear sums of the form
, and use the Cauchy-Schwarz inequality and the minor arc nature of
to obtain a gain in this case. This will all be done below the fold. We will also use (a rigorous version of) the approximation (6) (or (7)) to establish Vinogradov’s theorem.
A somewhat different looking approximation for the von Mangoldt function that also turns out to be quite useful is
for some that is not too large compared to
. The methods used to establish Theorem 8 can also establish a Fourier approximation that makes (8) precise, and which can yield an alternate proof of Vinogradov’s theorem; this will be done below the fold.
The approximation (8) can be written in a way that makes it more similar to (7):
Exercise 9 Show that the right-hand side of (8) can be rewritten as
where
Then, show the inequalities
and conclude that
(Hint: for the latter estimate, use Theorem 27 of Notes 1.)
The coefficients in the above exercise are quite similar to optimised Selberg sieve coefficients (see Section 2 of Notes 4).
Another approximation to , related to the modified Cramér random model (see Model 10 of Supplement 4) is
where and
is a slowly growing function of
(e.g.
); a closely related approximation is
for as above and
coprime to
. These approximations (closely related to a device known as the “
-trick”) are not as quantitatively accurate as the previous approximations, but can still suffice to establish Vinogradov’s theorem, and also to count many other linear patterns in the primes or subsets of the primes (particularly if one injects some additional tools from additive combinatorics, and specifically the inverse conjecture for the Gowers uniformity norms); see this paper of Ben Green and myself for more discussion (and this more recent paper of Shao for an analysis of this approach in the context of Vinogradov-type theorems). The following exercise expresses the approximation (9) in a form similar to the previous approximation (8):
Exercise 10 With
as above, show that
for all natural numbers
.
A major topic of interest of analytic number theory is the asymptotic behaviour of the Riemann zeta function in the critical strip
in the limit
. For the purposes of this set of notes, it is a little simpler technically to work with the log-magnitude
of the zeta function. (In principle, one can reconstruct a branch of
, and hence
itself, from
using the Cauchy-Riemann equations, or tools such as the Borel-Carathéodory theorem, see Exercise 40 of Supplement 2.)
One has the classical estimate
(See e.g. Exercise 37 from Supplement 3.) In view of this, let us define the normalised log-magnitudes for any
by the formula
informally, this is a normalised window into near
. One can rephrase several assertions about the zeta function in terms of the asymptotic behaviour of
. For instance:
- (i) The bound (1) implies that
is asymptotically locally bounded from above in the limit
, thus for any compact set
we have
for
and
sufficiently large. In fact the implied constant in
only depends on the projection of
to the real axis.
- (ii) For
, we have the bounds
which implies that
converges locally uniformly as
to zero in the region
.
- (iii) The functional equation, together with the symmetry
, implies that
which by Exercise 17 of Supplement 3 shows that
as
, locally uniformly in
. In particular, when combined with the previous item, we see that
converges locally uniformly as
to
in the region
.
- (iv) From Jensen’s formula (Theorem 16 of Supplement 2) we see that
is a subharmonic function, and thus
is subharmonic as well. In particular we have the mean value inequality
for any disk
, where the integral is with respect to area measure. From this and (ii) we conclude that
for any disk with
and sufficiently large
; combining this with (i) we conclude that
is asymptotically locally bounded in
in the limit
, thus for any compact set
we have
for sufficiently large
.
From (iv) and the usual Arzela-Ascoli diagonalisation argument, we see that the are asymptotically compact in the topology of distributions: given any sequence
tending to
, one can extract a subsequence such that the
converge in the sense of distributions. Let us then define a normalised limit profile of
to be a distributional limit
of a sequence of
; they are analogous to limiting profiles in PDE, and also to the more recent introduction of “graphons” in the theory of graph limits. Then by taking limits in (i)-(iv) we can say a lot about such normalised limit profiles
(up to almost everywhere equivalence, which is an issue we will address shortly):
- (i)
is bounded from above in the critical strip
.
- (ii)
vanishes on
.
- (iii) We have the functional equation
for all
. In particular
for
.
- (iv)
is subharmonic.
Unfortunately, (i)-(iv) fail to characterise completely. For instance, one could have
for any convex function
of
that equals
for
,
for
, and obeys the functional equation
, and this would be consistent with (i)-(iv). One can also perturb such examples in a region where
is strictly convex to create further examples of functions obeying (i)-(iv). Note from subharmonicity that the function
is always going to be convex in
; this can be seen as a limiting case of the Hadamard three-lines theorem (Exercise 41 of Supplement 2).
We pause to address one minor technicality. We have defined as a distributional limit, and as such it is a priori only defined up to almost everywhere equivalence. However, due to subharmonicity, there is a unique upper semi-continuous representative of
(taking values in
), defined by the formula
for any (note from subharmonicity that the expression in the limit is monotone nonincreasing as
, and is also continuous in
). We will now view this upper semi-continuous representative of
as the canonical representative of
, so that
is now defined everywhere, rather than up to almost everywhere equivalence.
By a classical theorem of Riesz, a function is subharmonic if and only if the distribution
is a non-negative measure, where
is the Laplacian in the
coordinates. Jensen’s formula (or Greens’ theorem), when interpreted distributionally, tells us that
away from the real axis, where ranges over the non-trivial zeroes of
. Thus, if
is a normalised limit profile for
that is the distributional limit of
, then we have
where is a non-negative measure which is the limit in the vague topology of the measures
Thus is a normalised limit profile of the zeroes of the Riemann zeta function.
Using this machinery, we can recover many classical theorems about the Riemann zeta function by “soft” arguments that do not require extensive calculation. Here are some examples:
Theorem 1 The Riemann hypothesis implies the Lindelöf hypothesis.
Proof: It suffices to show that any limiting profile (arising as the limit of some
) vanishes on the critical line
. But if the Riemann hypothesis holds, then the measures
are supported on the critical line
, so the normalised limit profile
is also supported on this line. This implies that
is harmonic outside of the critical line. By (ii) and unique continuation for harmonic functions, this implies that
vanishes on the half-space
(and equals
on the complementary half-space, by (iii)), giving the claim.
In fact, we have the following sharper statement:
Theorem 2 (Backlund) The Lindelöf hypothesis is equivalent to the assertion that for any fixed
, the number of zeroes in the region
is
as
.
Proof: If the latter claim holds, then for any , the measures
assign a mass of
to any region of the form
as
for any fixed
and
. Thus the normalised limiting profile measure
is supported on the critical line, and we can repeat the previous argument.
Conversely, suppose the claim fails, then we can find a sequence and
such that
assigns a mass of
to the region
. Extracting a normalised limiting profile, we conclude that the normalised limiting profile measure
is non-trivial somewhere to the right of the critical line, so the associated subharmonic function
is not harmonic everywhere to the right of the critical line. From the maximum principle and (ii) this implies that
has to be positive somewhere on the critical line, but this contradicts the Lindelöf hypothesis. (One has to take a bit of care in the last step since
only converges to
in the sense of distributions, but it turns out that the subharmonicity of all the functions involved gives enough regularity to justify the argument; we omit the details here.)
Theorem 3 (Littlewood) Assume the Lindelöf hypothesis. Then for any fixed
, the number of zeroes in the region
is
as
.
Proof: By the previous arguments, the only possible normalised limiting profile for is
. Taking distributional Laplacians, we see that the only possible normalised limiting profile for the zeroes is Lebesgue measure on the critical line. Thus,
can only converge to
as
, and the claim follows.
Even without the Lindelöf hypothesis, we have the following result:
Theorem 4 (Titchmarsh) For any fixed
, there are
zeroes in the region
for sufficiently large
.
Among other things, this theorem recovers a classical result of Littlewood that the gaps between the imaginary parts of the zeroes goes to zero, even without assuming unproven conjectures such as the Riemann or Lindelöf hypotheses.
Proof: Suppose for contradiction that this were not the case, then we can find and a sequence
such that
contains
zeroes. Passing to a subsequence to extract a limit profile, we conclude that the normalised limit profile measure
assigns no mass to the horizontal strip
. Thus the associated subharmonic function
is actually harmonic on this strip. But by (ii) and unique continuation this forces
to vanish on this strip, contradicting the functional equation (iii).
Exercise 5 Use limiting profiles to obtain the matching upper bound of
for the number of zeroes in
for sufficiently large
.
Remark 6 One can remove the need to take limiting profiles in the above arguments if one can come up with quantitative (or “hard”) substitutes for qualitative (or “soft”) results such as the unique continuation property for harmonic functions. This would also allow one to replace the qualitative decay rates
with more quantitative decay rates such as
or
. Indeed, the classical proofs of the above theorems come with quantitative bounds that are typically of this form (see e.g. the text of Titchmarsh for details).
Exercise 7 Let
denote the quantity
, where the branch of the argument is taken by using a line segment connecting
to (say)
, and then to
. If we have a sequence
producing normalised limit profiles
for
and the zeroes respectively, show that
converges in the sense of distributions to the function
, or equivalently
Conclude in particular that if the Lindelöf hypothesis holds, then
as
.
A little bit more about the normalised limit profiles are known unconditionally, beyond (i)-(iv). For instance, from Exercise 3 of Notes 5 we have
as
, which implies that any normalised limit profile
for
is bounded by
on the critical line, beating the bound of
coming from convexity and (ii), (iii), and then convexity can be used to further bound
away from the critical line also. Some further small improvements of this type are known (coming from various methods for estimating exponential sums), though they fall well short of determining
completely at our current level of understanding. Of course, given that we believe the Riemann hypothesis (and hence the Lindelöf hypothesis) to be true, the only actual limit profile that should exist is
(in fact this assertion is equivalent to the Lindelöf hypothesis, by the arguments above).
Better control on limiting profiles is available if we do not insist on controlling for all values of the height parameter
, but only for most such values, thanks to the existence of several mean value theorems for the zeta function, as discussed in Notes 6; we discuss this below the fold.
In analytic number theory, it is a well-known phenomenon that for many arithmetic functions of interest in number theory, it is significantly easier to estimate logarithmic sums such as
than it is to estimate summatory functions such as
(Here we are normalising to be roughly constant in size, e.g.
as
.) For instance, when
is the von Mangoldt function
, the logarithmic sums
can be adequately estimated by Mertens’ theorem, which can be easily proven by elementary means (see Notes 1); but a satisfactory estimate on the summatory function
requires the prime number theorem, which is substantially harder to prove (see Notes 2). (From a complex-analytic or Fourier-analytic viewpoint, the problem is that the logarithmic sums
can usually be controlled just from knowledge of the Dirichlet series
for
near
; but the summatory functions require control of the Dirichlet series
for
on or near a large portion of the line
. See Notes 2 for further discussion.)
Viewed conversely, whenever one has a difficult estimate on a summatory function such as , one can look to see if there is a “cheaper” version of that estimate that only controls the logarithmic sums
, which is easier to prove than the original, more “expensive” estimate. In this post, we shall do this for two theorems, a classical theorem of Halasz on mean values of multiplicative functions on long intervals, and a much more recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. The two are related; the former theorem is an ingredient in the latter (though in the special case of the Matomaki-Radziwiłł theorem considered here, we will not need Halasz’s theorem directly, instead using a key tool in the proof of that theorem).
We begin with Halasz’s theorem. Here is a version of this theorem, due to Montgomery and to Tenenbaum:
Theorem 1 (Halasz-Montgomery-Tenenbaum) Let
be a multiplicative function with
for all
. Let
and
, and set
Then one has
Informally, this theorem asserts that is small compared with
, unless
“pretends” to be like the character
on primes for some small
. (This is the starting point of the “pretentious” approach of Granville and Soundararajan to analytic number theory, as developed for instance here.) We now give a “cheap” version of this theorem which is significantly weaker (both because it settles for controlling logarithmic sums rather than summatory functions, it requires
to be completely multiplicative instead of multiplicative, it requires a strong bound on the analogue of the quantity
, and because it only gives qualitative decay rather than quantitative estimates), but easier to prove:
Theorem 2 (Cheap Halasz) Let
be an asymptotic parameter goingto infinity. Let
be a completely multiplicative function (possibly depending on
) such that
for all
, such that
Note that now that we are content with estimating exponential sums, we no longer need to preclude the possibility that pretends to be like
; see Exercise 11 of Notes 1 for a related observation.
To prove this theorem, we first need a special case of the Turan-Kubilius inequality.
Lemma 3 (Turan-Kubilius) Let
be a parameter going to infinity, and let
be a quantity depending on
such that
and
as
. Then
Informally, this lemma is asserting that
for most large numbers . Another way of writing this heuristically is in terms of Dirichlet convolutions:
This type of estimate was previously discussed as a tool to establish a criterion of Katai and Bourgain-Sarnak-Ziegler for Möbius orthogonality estimates in this previous blog post. See also Section 5 of Notes 1 for some similar computations.
Proof: By Cauchy-Schwarz it suffices to show that
Expanding out the square, it suffices to show that
for .
We just show the case, as the
cases are similar (and easier). We rearrange the left-hand side as
We can estimate the inner sum as . But a routine application of Mertens’ theorem (handling the diagonal case when
separately) shows that
and the claim follows.
Remark 4 As an alternative to the Turan-Kubilius inequality, one can use the Ramaré identity
(see e.g. Section 17.3 of Friedlander-Iwaniec). This identity turns out to give superior quantitative results than the Turan-Kubilius inequality in applications; see the paper of Matomaki and Radziwiłł for an instance of this.
We now prove Theorem 2. Let denote the left-hand side of (2); by the triangle inequality we have
. By Lemma 3 (for some
to be chosen later) and the triangle inequality we have
We rearrange the left-hand side as
We now replace the constraint by
. The error incurred in doing so is
which by Mertens’ theorem is . Thus we have
But by definition of , we have
, thus
From Mertens’ theorem, the expression in brackets can be rewritten as
and so the real part of this expression is
By (1), Mertens’ theorem and the hypothesis on we have
for any . This implies that we can find
going to infinity such that
and thus the expression in brackets has real part . The claim follows.
The Turan-Kubilius argument is certainly not the most efficient way to estimate sums such as . In the exercise below we give a significantly more accurate estimate that works when
is non-negative.
Exercise 5 (Granville-Koukoulopoulos-Matomaki)
- (i) If
is a completely multiplicative function with
for all primes
, show that
as
. (Hint: for the upper bound, expand out the Euler product. For the lower bound, show that
, where
is the completely multiplicative function with
for all primes
.)
- (ii) If
is multiplicative and takes values in
, show that
for all
.
Now we turn to a very recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. For sake of illustration we specialise their results to the simpler case of the Liouville function , although their arguments actually work (with some additional effort) for arbitrary multiplicative functions of magnitude at most
that are real-valued (or more generally, stay far from complex characters
). Furthermore, we give a qualitative form of their estimates rather than a quantitative one:
Theorem 6 (Matomaki-Radziwiłł, special case) Let
be a parameter going to infinity, and let
be a quantity going to infinity as
. Then for all but
of the integers
, one has
A simple sieving argument (see Exercise 18 of Supplement 4) shows that one can replace by the Möbius function
and obtain the same conclusion. See this recent note of Matomaki and Radziwiłł for a simple proof of their (quantitative) main theorem in this special case.
Of course, (4) improves upon the trivial bound of . Prior to this paper, such estimates were only known (using arguments similar to those in Section 3 of Notes 6) for
unconditionally, or for
for some sufficiently large
if one assumed the Riemann hypothesis. This theorem also represents some progress towards Chowla’s conjecture (discussed in Supplement 4) that
as for any fixed distinct
; indeed, it implies that this conjecture holds if one performs a small amount of averaging in the
.
Below the fold, we give a “cheap” version of the Matomaki-Radziwiłł argument. More precisely, we establish
Theorem 7 (Cheap Matomaki-Radziwiłł) Let
be a parameter going to infinity, and let
. Then
Note that (5) improves upon the trivial bound of . Again, one can replace
with
if desired. Due to the cheapness of Theorem 7, the proof will require few ingredients; the deepest input is the improved zero-free region for the Riemann zeta function due to Vinogradov and Korobov. Other than that, the main tools are the Turan-Kubilius result established above, and some Fourier (or complex) analysis.
In the previous set of notes, we saw how zero-density theorems for the Riemann zeta function, when combined with the zero-free region of Vinogradov and Korobov, could be used to obtain prime number theorems in short intervals. It turns out that a more sophisticated version of this type of argument also works to obtain prime number theorems in arithmetic progressions, in particular establishing the celebrated theorem of Linnik:
Theorem 1 (Linnik’s theorem) Let
be a primitive residue class. Then
contains a prime
with
.
In fact it is known that one can find a prime with
, a result of Xylouris. For sake of comparison, recall from Exercise 65 of Notes 2 that the Siegel-Walfisz theorem gives this theorem with a bound of
, and from Exercise 48 of Notes 2 one can obtain a bound of the form
if one assumes the generalised Riemann hypothesis. The probabilistic random models from Supplement 4 suggest that one should in fact be able to take
.
We will not aim to obtain the optimal exponents for Linnik’s theorem here, and follow the treatment in Chapter 18 of Iwaniec and Kowalski. We will in fact establish the following more quantitative result (a special case of a more powerful theorem of Gallagher), which splits into two cases, depending on whether there is an exceptional zero or not:
Theorem 2 (Quantitative Linnik theorem) Let
be a primitive residue class for some
. For any
, let
denote the quantity
Assume that
for some sufficiently large
.
- (i) (No exceptional zero) If all the real zeroes
of
-functions
of real characters
of modulus
are such that
, then
for all
and some absolute constant
.
- (ii) (Exceptional zero) If there is a zero
of an
-function
of a real character
of modulus
with
for some sufficiently small
, then
for all
and some absolute constant
.
The implied constants here are effective.
Note from the Landau-Page theorem (Exercise 54 from Notes 2) that at most one exceptional zero exists (if is small enough). A key point here is that the error term
in the exceptional zero case is an improvement over the error term when no exceptional zero is present; this compensates for the potential reduction in the main term coming from the
term. The splitting into cases depending on whether an exceptional zero exists or not turns out to be an essential technique in many advanced results in analytic number theory (though presumably such a splitting will one day become unnecessary, once the possibility of exceptional zeroes are finally eliminated for good).
Exercise 3 Assuming Theorem 2, and assuming
for some sufficiently large absolute constant
, establish the lower bound
when there is no exceptional zero, and
when there is an exceptional zero
. Conclude that Theorem 2 implies Theorem 1, regardless of whether an exceptional zero exists or not.
Remark 4 The Brun-Titchmarsh theorem (Exercise 33 from Notes 4), in the sharp form of Montgomery and Vaughan, gives that
for any primitive residue class
and any
. This is (barely) consistent with the estimate (1). Any lowering of the coefficient
in the Brun-Titchmarsh inequality (with reasonable error terms), in the regime when
is a large power of
, would then lead to at least some elimination of the exceptional zero case. However, this has not led to any progress on the Landau-Siegel zero problem (and may well be just a reformulation of that problem). (When
is a relatively small power of
, some improvements to Brun-Titchmarsh are possible that are not in contradiction with the presence of an exceptional zero; see this paper of Maynard for more discussion.)
Theorem 2 is deduced in turn from facts about the distribution of zeroes of -functions. We first need a version of the truncated explicit formula that does not lose unnecessary logarithms:
Exercise 5 (Log-free truncated explicit formula) With the hypotheses as above, show that $$\sum_{n \leq x} \Lambda(n) \chi(n) = - \sum_{\rho: |\hbox{Im}(\rho)| \leq T} \frac{x^\rho}{\rho} + O\left( \frac{x}{T} \log x \right)$$ for any non-principal character $\chi$ of modulus $q$ and any $2 \leq T \leq x$, where we assume $x \geq q^C$ for some large $C$; for the principal character establish the same formula with an additional term of $x$ on the right-hand side. (Hint: this is almost immediate from Exercise 45(iv) and Theorem 21 of Notes 2, except that one picks up a factor of $\log^2 x$ in the error term instead of $\log x$. However, a closer inspection of the proof (particularly with regards to the truncated Perron formula in Proposition 12 of Notes 2) shows that one factor of $\log x$ arises only from the rather crude bound $\Lambda(n) \ll \log x$ for $n \leq x$. If one replaces this crude bound by more sophisticated tools such as the Brun-Titchmarsh inequality, one will be able to remove this factor of $\log x$.)
Using the Fourier inversion formula $$1_{n = a\ (q)} = \frac{1}{\phi(q)} \sum_{\chi\ (q)} \overline{\chi(a)} \chi(n)$$ (see Theorem 69 of Notes 1), we thus have $$\psi(x;q,a) = \frac{x}{\phi(q)} - \frac{1}{\phi(q)} \sum_{\chi\ (q)} \overline{\chi(a)} \sum_{\rho: |\hbox{Im}(\rho)| \leq T} \frac{x^\rho}{\rho} + O\left( \frac{x}{T} \log x \right),$$ and so it suffices by the triangle inequality (bounding $\frac{x^\rho}{\rho}$ very crudely by $x^{\hbox{Re}(\rho)}$, as the contribution of the low-lying zeroes already turns out to be quite dominant) to show that $$\sum_{\chi\ (q)} \sum_{\rho: |\hbox{Im}(\rho)| \leq T} x^{\hbox{Re}(\rho) - 1} \ll \exp\left( - c \frac{\log x}{\log q} \right) \ \ \ \ (2)$$ when no exceptional zero is present, and $$\sum_{\chi\ (q)} \sum_{\rho \neq \beta: |\hbox{Im}(\rho)| \leq T} x^{\hbox{Re}(\rho) - 1} \ll \exp\left( - c \frac{\log x}{\log q} \log \frac{1}{\varepsilon} \right) \ \ \ \ (3)$$ when an exceptional zero is present.
To handle the former case (2), one uses two facts about zeroes. The first is the classical zero-free region (Proposition 51 from Notes 2), which we reproduce in our context here:
Proposition 6 (Classical zero-free region) Let $q, T \geq 2$. Apart from a potential exceptional zero $\beta$, all zeroes $\sigma + it$ of $L$-functions $L(\cdot,\chi)$ with $\chi$ of modulus $q$ and $|t| \leq T$ are such that $$\sigma \leq 1 - \frac{c}{\log(qT)}$$ for some absolute constant $c > 0$.
Using this zero-free region, we have $$x^{\hbox{Re}(\rho) - 1} \leq \exp\left( - c \frac{\log x}{\log(qT)} \right)$$ whenever $\rho$ contributes to the sum in (2), and so the left-hand side of (2) is bounded by $$\ll \sum_{j \geq 1} \exp\left( - c j \frac{\log x}{\log(qT)} \right) N\left( 1 - \frac{(j+1)c}{\log(qT)}, T, q \right),$$ where we recall that $N(\sigma,T,q)$ is the number of zeroes $\sigma' + it$ of any $L$-function of a character $\chi$ of modulus $q$ with $\sigma' \geq \sigma$ and $0 \leq t \leq T$ (here we use conjugation symmetry to make $t$ non-negative, accepting a multiplicative factor of two).
In Exercise 25 of Notes 6, the grand density estimate $$N(\sigma,T,q) \ll (qT)^{O(1-\sigma)} \log^{O(1)}(qT)$$ is proven. If one inserts this bound into the above expression, one obtains a bound for (2) which is of the form $$\ll \exp\left( - c \frac{\log x}{\log(qT)} \right) \log^{O(1)}(qT).$$ Unfortunately this is off from what we need by a factor of $\log^{O(1)}(qT)$ (and would lead to a weak form of Linnik’s theorem in which $p$ was bounded by $q^{O(\log\log q)}$ rather than by $q^{O(1)}$). In the analogous problem for prime number theorems in short intervals, we could use the Vinogradov-Korobov zero-free region to compensate for this loss, but that region does not help here for the contribution of the low-lying zeroes with $t = O(1)$, which as mentioned before give the dominant contribution. Fortunately, it is possible to remove this logarithmic loss from the zero-density side of things:
Theorem 7 (Log-free grand density estimate) For any $q, T \geq 2$ and $\frac{1}{2} \leq \sigma \leq 1$, one has $$N(\sigma, T, q) \ll (qT)^{O(1-\sigma)}.$$ The implied constants are effective.
We prove this estimate below the fold. The proof follows the methods of the previous section, but one inserts various sieve weights to restrict sums over natural numbers to essentially become sums over “almost primes”, as this turns out to remove the logarithmic losses. (More generally, the trick of restricting to almost primes by inserting suitable sieve weights is quite useful for avoiding any unnecessary losses of logarithmic factors in analytic number theory estimates.)
Now we turn to the case when there is an exceptional zero (3). The argument used to prove (2) applies here also, but does not gain the factor of $\log \frac{1}{\varepsilon}$ in the exponent. To achieve this, we need an additional tool, a version of the Deuring-Heilbronn repulsion phenomenon due to Linnik:
Theorem 9 (Deuring-Heilbronn repulsion phenomenon) Suppose $q \geq 2$ is such that there is an exceptional zero $\beta = 1 - \frac{\varepsilon}{\log q}$ with $\varepsilon$ small. Then all other zeroes $\sigma + it$ of $L$-functions of modulus $q$ are such that $$\sigma \leq 1 - c \frac{\log \frac{1}{\varepsilon}}{\log( q (2 + |t|) )}.$$ In other words, the exceptional zero enlarges the classical zero-free region by a factor of $\log \frac{1}{\varepsilon}$. The implied constants are effective.
Exercise 10 Use Theorem 7 and Theorem 9 to complete the proof of (3), and thus Linnik’s theorem.
Exercise 11 Use Theorem 9 to give an alternate proof of (Tatuzawa’s version of) Siegel’s theorem (Theorem 62 of Notes 2). (Hint: if two characters have different moduli, then they can be made to have the same modulus by multiplying by suitable principal characters.)
Theorem 9 is proven by similar methods to that of Theorem 7, the basic idea being to insert a further weight of $1 * \tilde \chi$ (in addition to the sieve weights), the point being that the exceptional zero causes this weight to be quite small on the average. There is a strengthening of Theorem 9 due to Bombieri that is along the lines of Theorem 7, obtaining the improvement $$N'(\sigma, T, q) \ll \varepsilon \left( 1 + \frac{\log T}{\log q} \right) (qT)^{O(1-\sigma)}$$ with effective implied constants for any $\frac{1}{2} \leq \sigma \leq 1$ and $T \geq 2$ in the presence of an exceptional zero, where the prime in $N'(\sigma,T,q)$ means that the exceptional zero $\beta$ is omitted (thus $N'(\sigma,T,q) = N(\sigma,T,q) - 1$ if $\sigma \leq \beta$). Note that the upper bound on $N'(\sigma,T,q)$ falls below one when $1 - \sigma \leq c \frac{\log \frac{1}{\varepsilon}}{\log(qT)}$ for a sufficiently small $c$, thus recovering Theorem 9. Bombieri’s theorem can be established by the methods in this set of notes, and will be given as an exercise to the reader.
Remark 12 There are a number of alternate ways to derive the results in this set of notes, for instance using the Turan power sums method, which is based on studying derivatives such as $$\frac{d^k}{ds^k} \frac{L'}{L}(s,\chi) = (-1)^k k! \sum_\rho \frac{1}{(s-\rho)^{k+1}} + \dots$$ for $\hbox{Re}(s) > 1$ and large $k$, and performing various sorts of averaging in $k$ to attenuate the contribution of many of the zeroes $\rho$. We will not develop this method here, but see for instance Chapter 9 of Montgomery’s book. See the text of Friedlander and Iwaniec for yet another approach based primarily on sieve-theoretic ideas.
Remark 13 When one optimises all the exponents, it turns out that the exponent in Linnik’s theorem is extremely good in the presence of an exceptional zero – indeed Friedlander and Iwaniec showed that one can even get a bound of the form $p \ll q^{2-c}$ for some absolute constant $c > 0$, which is even stronger than one can obtain from GRH! There are other places in which exceptional zeroes can be used to obtain results stronger than what one can obtain even on the Riemann hypothesis; for instance, Heath-Brown used the hypothesis of an infinite sequence of Siegel zeroes to obtain the twin prime conjecture.
In the previous set of notes, we studied upper bounds on sums such as $$\left| \frac{1}{N} \sum_{N \leq n \leq 2N} n^{-it} \right|$$ that were valid for all $t$ in a given range, such as $t \in [T, 2T]$; this led in turn to upper bounds on the Riemann zeta function $\zeta(\sigma+it)$ for $\sigma+it$ in the same range, and for various choices of $\sigma$. While some improvement over the trivial bound of $O(1)$ was obtained by these methods, we did not get close to the conjectural bound of $O(N^{-1/2+o(1)})$ that one expects from pseudorandomness heuristics (assuming that $N$ is not too large compared with $T$, e.g. $N \leq T^{O(1)}$).
However, it turns out that one can get much better bounds if one settles for estimating sums such as $\frac{1}{N} \sum_{N \leq n \leq 2N} n^{-it}$, or more generally finite Dirichlet series (also known as Dirichlet polynomials) such as $\frac{1}{N} \sum_n a_n n^{-it}$ for various coefficients $a_n$, for most values of $t$ in a given range such as $[T, 2T]$. Equivalently, we will be able to get some control on the large values of such Dirichlet polynomials, in the sense that we can control the set of $t$ for which $|\frac{1}{N} \sum_n a_n n^{-it}|$ exceeds a certain threshold, even if we cannot show that this set is empty. These large value theorems are often closely tied with estimates for mean values such as $$\frac{1}{T} \int_T^{2T} \left| \frac{1}{N} \sum_n a_n n^{-it} \right|^2\ dt$$ of a Dirichlet series; these latter estimates are thus known as mean value theorems for Dirichlet series. Our approach to these theorems will follow the same sort of methods used in Notes 3, in particular relying on the generalised Bessel inequality from those notes.
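The phenomenon is already visible numerically. The sketch below (our own illustration; the random $\pm 1$ coefficients stand in for a typical $1$-bounded sequence, and the grid resolution is arbitrary) samples a normalised Dirichlet polynomial on $[T, 2T]$: the mean square hovers near $1/N$, and the set of $t$ where the polynomial is much larger than $N^{-1/2}$ is very sparse.

```python
import numpy as np

# Sample D(t) = (1/N) sum_{N<=n<2N} a_n n^{-it} on a grid of t in [T, 2T]
# and look at its mean square and the rarity of large values.
rng = np.random.default_rng(0)
N, T = 500, 1.0e5
n = np.arange(N, 2 * N)
a = rng.choice([-1.0, 1.0], size=n.size)       # 1-bounded coefficients

t = np.linspace(T, 2 * T, 4000)
phases = np.exp(-1j * np.outer(t, np.log(n)))  # n^{-it} = e^{-it log n}
D = phases @ a / n.size                        # the normalised polynomial

print("mean square:", np.mean(np.abs(D) ** 2), "vs 1/N =", 1 / N)
for V in (2 / np.sqrt(N), 4 / np.sqrt(N)):
    print(f"fraction of t with |D(t)| > {V:.4f}:", np.mean(np.abs(D) > V))
```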
Our main application of the large value theorems for Dirichlet polynomials will be to control the number of zeroes of the Riemann zeta function $\zeta(s)$ (or the Dirichlet $L$-functions $L(s,\chi)$) in various rectangles of the form $\{ \sigma + it: \sigma \geq \alpha, |t| \leq T \}$ for various $\alpha > \frac{1}{2}$ and $T \geq 2$. These rectangles will be larger than the zero-free regions for which we can exclude zeroes completely, but we will often be able to limit the number of zeroes in such rectangles to be quite small. For instance, we will be able to show the following weak form of the Riemann hypothesis: as $T \rightarrow \infty$, a proportion $1 - o(1)$ of zeroes of the Riemann zeta function in the critical strip with $|\hbox{Im}(s)| \leq T$ will have real part $\frac{1}{2} + o(1)$. Related to this, the number of zeroes with $|\hbox{Im}(s)| \leq T$ and $\hbox{Re}(s) \geq \alpha$ can be shown to be bounded by $O( T^{O(1-\alpha)+o(1)} )$ as $T \rightarrow \infty$ for any $\alpha > \frac{1}{2}$.
In the next set of notes we will use refined versions of these theorems to establish Linnik’s theorem on the least prime in an arithmetic progression.
Our presentation here is broadly based on Chapters 9 and 10 in Iwaniec and Kowalski, who give a number of more sophisticated large value theorems than the ones discussed here.
We return to the study of the Riemann zeta function $\zeta(s)$, focusing now on the task of upper bounding the size of this function within the critical strip; as seen in Exercise 43 of Notes 2, such upper bounds can lead to zero-free regions for $\zeta$, which in turn lead to improved estimates for the error term in the prime number theorem.
In equation (21) of Notes 2 we obtained the somewhat crude estimates $$\zeta(s) = \sum_{n \leq N} \frac{1}{n^s} - \frac{N^{1-s}}{1-s} + O\left( \frac{|s|}{\sigma} N^{-\sigma} \right)$$ for any $N \geq 1$ and $s = \sigma + it$ with $\sigma > 0$ and $s \neq 1$. Setting $N \sim 1 + |t|$, we obtained the crude estimate $$\zeta(s) = O\left( (1+|t|)^{1-\sigma} \log(2+|t|) \right)$$ in this region. In particular, if $\sigma = \frac{1}{2}$ and $|t| \geq 1$ then we had $\zeta(s) = O( |t|^{1/2} \log |t| )$. Using the functional equation and the Hadamard three lines lemma, we can improve this to $\zeta(\frac{1}{2}+it) = O( |t|^{1/4 + o(1)} )$; see Supplement 3.
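For orientation, one can evaluate $\zeta$ on the critical line numerically (a quick illustration of ours using mpmath; the heights and the constant-free comparison are arbitrary choices) against the convexity scale $t^{1/4}$ and the van der Corput scale $t^{1/6}$ derived below; at such small heights the asymptotic exponents are of course only loosely visible.

```python
from mpmath import mp, mpc, zeta

# Compare |zeta(1/2+it)| with t^(1/4) (convexity) and t^(1/6) (van der Corput).
mp.dps = 15
for t in [10, 100, 1000, 10000]:
    z = abs(zeta(mpc(0.5, t)))
    print(t, float(z), t ** 0.25, t ** (1 / 6))
```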
Now we seek better upper bounds on $\zeta$. We will reduce the problem to that of bounding certain exponential sums, in the spirit of Exercise 34 of Supplement 3:
Proposition 1 Let $s = \sigma + it$ with $0 \leq \sigma \leq 1$ and $|t| \geq 10$. Then $$\zeta(s) \ll \log(2+|t|) \left( 1 + \sup_{1 \leq M \leq (2+|t|)^C} M^{1-\sigma} |D_M(t)| \right)$$ for a sufficiently large absolute constant $C$, where $$D_M(t) := \sup_{M \leq M' \leq 2M} \frac{1}{M} \left| \sum_{M \leq m \leq M'} e\left( - \frac{t}{2\pi} \log m \right) \right|$$ and $e(\theta) := e^{2\pi i \theta}$.
Proof: We fix a smooth function $\eta: {\bf R} \rightarrow {\bf R}$ with $\eta(x) = 1$ for $x \leq 1$ and $\eta(x) = 0$ for $x \geq 2$, and allow implied constants to depend on $\eta$. Let $s = \sigma + it$ be as in the proposition. From Exercise 34 of Supplement 3, we have $$\zeta(s) = \sum_n \frac{\eta(n/N)}{n^s} + O(1)$$ where $N := (2+|t|)^C$ for some sufficiently large absolute constant $C$. By dyadic decomposition, we thus have $$\zeta(s) \ll 1 + \log(2+|t|) \sup_{1 \leq M \leq N} \left| \sum_{M \leq m \leq 2M} \frac{\eta(m/N)}{m^s} \right|.$$ We can absorb the first term in the second using the $M = 1$ case of the supremum. Writing $m^{-it} = e( - \frac{t}{2\pi} \log m )$, it thus suffices to show that $$\left| \sum_{M \leq m \leq 2M} \frac{\eta(m/N)}{m^s} \right| \ll M^{1-\sigma} |D_M(t)|$$ for each $M$. But from the fundamental theorem of calculus, the left-hand side can be written as $$\left| \frac{\eta(2M/N)}{(2M)^\sigma} \sum_{M \leq m \leq 2M} m^{-it} - \int_M^{2M} \frac{d}{du} \left( \frac{\eta(u/N)}{u^\sigma} \right) \left( \sum_{M \leq m \leq u} m^{-it} \right)\, du \right|$$ and the claim then follows from the triangle inequality and a routine calculation. $\Box$
We are thus interested in getting good bounds on the sums $D_M(t)$. More generally, we consider normalised exponential sums of the form $$\frac{1}{|I|} \left| \sum_{n \in I} e( f(n) ) \right| \ \ \ \ (2)$$ where $I$ is an interval of length at most $N$ for some $N \geq 1$, and $f: I \rightarrow {\bf R}$ is a smooth function. We will assume smoothness estimates of the form $$f^{(j)}(x) = \exp( O(j^2) ) \frac{T}{N^j} \ \ \ \ (3)$$ for some $T > 0$, all $j \geq 1$, and all $x \in I$, where $f^{(j)}$ is the $j$-fold derivative of $f$; in the case $f(x) := -\frac{t}{2\pi} \log x$, $x \sim M$ of interest for the Riemann zeta function, we easily verify that these estimates hold with $N := M$ and $T := \frac{|t|}{2\pi}$. (One can consider exponential sums under more general hypotheses than (3), but the hypotheses here are adequate for our needs.) We do not bound the zeroth derivative $f^{(0)} = f$ of $f$ directly, but it would not be natural to do so in any event, since the magnitude of the sum (2) is unaffected if one adds an arbitrary constant to $f$.
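Here is a quick numerical look (ours; the regime $T \sim N^{3/2}$ and the grid of sizes are arbitrary choices) at the normalised sum (2) with the zeta phase $f(n) = -\frac{t}{2\pi} \log n$, showing a visible power saving over the trivial bound $1$:

```python
import numpy as np

def normalised_sum(N: int, t: float) -> float:
    """(1/N)|sum_{N <= n < 2N} e(f(n))| with f(n) = -(t/(2*pi)) log n."""
    n = np.arange(N, 2 * N)
    f = -(t / (2 * np.pi)) * np.log(n)
    return abs(np.exp(2j * np.pi * f).sum()) / n.size

for N in (10 ** 3, 10 ** 4, 10 ** 5):
    t = float(N) ** 1.5          # a regime T ~ N^(3/2)
    print(N, normalised_sum(N, t), N ** -0.5)
```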
The trivial bound for (2) is $1$, and we will seek to obtain significant improvements to this bound. Pseudorandomness heuristics predict a bound of $O( N^{-1/2+\varepsilon} )$ for (2) for any $\varepsilon > 0$ if $T \leq N^{O(1)}$; this assertion (a special case of the exponent pair hypothesis) would have many consequences (for instance, inserting it into Proposition 1 soon yields the Lindelöf hypothesis), but is unfortunately quite far from resolution with known methods. However, we can obtain weaker gains of the form $N^{-c}$ when $T \sim N^\alpha$ and $c > 0$ depends on $\alpha$. We present two such results here, which perform well for small and large values of $\alpha$ respectively:
Theorem 2 Let $T > 0$ and $N \geq 1$, let $I$ be an interval of length at most $N$, and let $f: I \rightarrow {\bf R}$ be a smooth function obeying (3) for all $j \geq 1$ and $x \in I$.
- (i) (van der Corput estimate) For any natural number $k \geq 2$, one has $$\frac{1}{|I|} \left| \sum_{n \in I} e(f(n)) \right| \ll \exp( O(k) ) \left( \left( \frac{T}{N^k} \right)^{\frac{1}{2^k - 2}} + T^{-\frac{1}{2^k - 2}} \right). \ \ \ \ (5)$$
- (ii) (Vinogradov estimate) If $k \geq 2$ and $N \leq T \leq N^k$, then $$\frac{1}{|I|} \left| \sum_{n \in I} e(f(n)) \right| \ll N^{-c/k^2} \ \ \ \ (6)$$ for some absolute constant $c > 0$.
The factor of $\exp(O(k))$ can be removed by a more careful argument, but we will not need to do so here as we are willing to lose powers of $\log N$. The estimate (6) is superior to (5) when $T \sim N^\alpha$ for $\alpha$ large, since (after optimising in $k$) (5) gives a gain of the form $N^{-c/2^\alpha}$ over the trivial bound, while (6) gives $N^{-c/\alpha^2}$. We have not attempted to obtain completely optimal estimates here, settling for a relatively simple presentation that still gives good bounds on $\zeta$, and there are a wide variety of additional exponential sum estimates beyond the ones given here; see Chapter 8 of Iwaniec-Kowalski, or Chapters 3-4 of Montgomery, for further discussion.
We now briefly discuss the strategies of proof of Theorem 2. Both parts of the theorem proceed by treating the phase $f$ like a polynomial of degree roughly $k$; in the case of (ii), this is done explicitly via Taylor expansion, whereas for (i) it is only at the level of analogy. Both parts of the theorem then try to “linearise” the phase to make it a linear function of the summands (actually in part (ii), it is necessary to introduce an additional variable and make the phase a bilinear function of the summands). The van der Corput estimate achieves this linearisation by squaring the exponential sum about $k$ times, which is why the gain is only exponentially small in $k$. The Vinogradov estimate achieves linearisation by raising the exponential sum to a significantly smaller power – on the order of $k^2$ – by using Hölder’s inequality in combination with the fact that the discrete curve $\{ (n, n^2, \ldots, n^k): 1 \leq n \leq N \}$ becomes roughly equidistributed in the box $\{ (a_1, \ldots, a_k): a_j = O(N^j) \}$ after taking the sumset of about $k^2$ copies of this curve. This latter fact has a precise formulation, known as the Vinogradov mean value theorem, and its proof is the most difficult part of the argument, relying on using a “$p$-adic” version of this equidistribution to reduce the claim at a given scale $N$ to a smaller scale $N/p$ with $p \sim N^{1/k}$, and then proceeding by induction.
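To make the squaring step concrete, recall the standard Weyl-van der Corput inequality (the normalisation and notation here are ours): for any $1 \leq H \leq N$ one has $$\left| \frac{1}{N} \sum_{n \in I} e(f(n)) \right|^2 \ll \frac{1}{H} \sum_{0 \leq h < H} \left| \frac{1}{N} \sum_{n: n, n+h \in I} e( f(n+h) - f(n) ) \right|.$$ The differenced phase $f_h(n) := f(n+h) - f(n) \approx h f'(n)$ obeys estimates of the shape (3) with $T$ replaced by the smaller quantity $hT/N$, so each squaring lowers the effective degree by one at the cost of squaring the estimate; iterating $k-2$ times down to the classical $k=2$ case is what produces gains of size exponential in $k$.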
One can combine Theorem 2 with Proposition 1 to obtain various bounds on the Riemann zeta function:
Exercise 3 (Subconvexity bound)
- (i) Show that $\zeta(\frac{1}{2}+it) \ll (1+|t|)^{1/6} \log(2+|t|)$ for all $t$. (Hint: use the $k=3$ case of the van der Corput estimate.)
- (ii) For any fixed $\frac{1}{2} \leq \sigma \leq 1$, show that $\zeta(\sigma+it) \ll (1+|t|)^{\frac{1-\sigma}{3} + o(1)}$ as $|t| \rightarrow \infty$ (the decay rate in the $o(1)$ is allowed to depend on $\sigma$).
Exercise 4 Let $t$ be a real number with $|t| \geq 100$, and let $L := \log(2+|t|)$.
- (i) (Littlewood bound) Use the van der Corput estimate to show that $\zeta(\sigma+it) \ll L^{O(1)}$ whenever $\sigma \geq 1 - O( \frac{\log L}{L} )$.
- (ii) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that $\zeta(\sigma+it) \ll L^{O(1)}$ whenever $\sigma \geq 1 - O( \frac{1}{L^{2/3} (\log L)^{1/3}} )$.
As noted in Exercise 43 of Notes 2, the Vinogradov-Korobov bound leads to the zero-free region $$\left\{ \sigma + it: \sigma > 1 - \frac{c}{\log^{2/3}(2+|t|) (\log\log(2+|t|))^{1/3}} \right\},$$ which in turn leads to the prime number theorem with error term $$\psi(x) = x + O\left( x \exp\left( - c \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)$$ for $x \geq 100$. If one uses the weaker Littlewood bound instead, one obtains the narrower zero-free region $$\left\{ \sigma + it: \sigma > 1 - \frac{c \log\log(2+|t|)}{\log(2+|t|)} \right\}$$ (which is only slightly wider than the classical zero-free region) and an error term $$\psi(x) = x + O\left( x \exp\left( - c \sqrt{\log x \log\log x} \right) \right)$$ in the prime number theorem.
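These error terms are asymptotic in nature and cannot really be distinguished at computational heights, but for orientation here is a toy comparison (ours, with all constants $c$ crudely set to $1$) of the Chebyshev error $|\psi(x) - x|$ against the two error scales:

```python
import math
from sympy import primerange

# psi(x) = sum over prime powers p^k <= x of log p, computed directly.
x = 10 ** 6
psi = sum(math.floor(math.log(x) / math.log(p)) * math.log(p)
          for p in primerange(2, x + 1))
vk = x * math.exp(-math.log(x) ** 0.6 / math.log(math.log(x)) ** 0.2)
lw = x * math.exp(-math.sqrt(math.log(x) * math.log(math.log(x))))
# At this modest height the true error is smaller than both scales.
print(abs(psi - x), vk, lw)
```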
Exercise 5 (Vinogradov-Korobov in arithmetic progressions) Let $\chi$ be a non-principal character of modulus $q$.
- (i) (Vinogradov-Korobov bound) Use the Vinogradov estimate to show that $L(\sigma+it,\chi) \ll (q(2+|t|))^{o(1)}$ whenever $|t| \geq 100$ and $\sigma \geq 1 - O\left( \frac{1}{\log^{2/3}(q(2+|t|)) (\log\log(q(2+|t|)))^{1/3}} \right)$. (Hint: use the Vinogradov estimate and a change of variables to control $\sum_{n \in I: n = a\ (q)} e( - \frac{t}{2\pi} \log n )$ for various intervals $I$ of length at most $N$ and residue classes $a\ (q)$, in the regime $N \geq q^{100}$ (say). For $N \leq q^{100}$, do not try to capture any cancellation and just use the triangle inequality instead.)
- (ii) Obtain a zero-free region $$\left\{ \sigma + it: \sigma > 1 - \frac{c}{\log^{2/3}(q(2+|t|)) (\log\log(q(2+|t|)))^{1/3}} \right\}$$ for $L(\cdot,\chi)$, for some (effective) absolute constant $c > 0$.
- (iii) Obtain the prime number theorem in arithmetic progressions with error term $$\psi(x;q,a) = \frac{x}{\phi(q)} + O_A\left( x \exp\left( - c_A \frac{\log^{3/5} x}{(\log\log x)^{1/5}} \right) \right)$$ whenever $x \geq 100$, $q \leq \log^A x$, $a\ (q)$ is primitive, and $c_A > 0$ depends (ineffectively) on $A$.
We continue the discussion of sieve theory from Notes 4, but now specialise to the case of the linear sieve in which the sieve dimension $\kappa$ is equal to $1$, which is one of the best understood sieving situations, and one of the rare cases in which the precise limits of the sieve method are known. A bit more specifically, let $z, D \geq 1$ be quantities with $D = z^s$ for some fixed $s > 1$, and let $g$ be a multiplicative function with $$g(p) = \frac{1}{p} + O\left( \frac{C}{p^2} \right)$$ for all primes $p$ and some fixed $C > 0$ (we allow all constants below to depend on $C$). Let $P := \prod_{p < z} p$, and for each prime $p < z$, let $E_p$ be a set of integers, with $E_d := \bigcap_{p | d} E_p$ for square-free $d | P$. We consider finitely supported sequences $(a_n)_{n \in {\bf Z}}$ of non-negative reals for which we have bounds of the form $$\sum_{n \in E_d} a_n = g(d) X + r_d \ \ \ \ (5)$$ for all square-free $d \leq D$ dividing $P$ and some quantity $X > 0$, and some remainder terms $r_d$. One is then interested in upper and lower bounds on the quantity $$\sum_{n \not\in \bigcup_{p < z} E_p} a_n.$$ The fundamental lemma of sieve theory (Corollary 19 of Notes 4) gives us the bound $$\sum_{n \not\in \bigcup_{p < z} E_p} a_n = (1 + O(e^{-s})) X V(z) + O\left( \sum_{d \leq D: d | P} |r_d| \right),$$ where $V(z) := \prod_{p < z} (1 - g(p))$. This bound is strong when $s$ is large, but is not as useful for smaller values of $s$. We now give a sharp bound in this regime. We introduce the functions $F, f: (0,+\infty) \rightarrow {\bf R}$; these are classically defined by certain alternating series of iterated integrals ((6), (7)), with the convention that empty products are equal to $1$, and for each $s$ only finitely many summands in (6), (7) are non-zero, but for our purposes the equivalent description in Exercise 1 below can be taken as the definition. These functions are closely related to the Buchstab function $\omega$ from Exercise 28 of Supplement 4; indeed, comparing the definitions yields a simple identity expressing $F - f$ in terms of $\omega$.
Exercise 1 (Alternate definition of $F, f$) Show that $F$ is continuously differentiable on $(0,+\infty)$, and $f$ is continuously differentiable except at $s = 2$ where it is continuous, obeying the delay-differential equations $$(sF(s))' = f(s-1) \hbox{ for } s > 3, \qquad (sf(s))' = F(s-1) \hbox{ for } s > 2,$$ with the initial conditions $F(s) = \frac{2e^\gamma}{s}$ for $0 < s \leq 3$ and $f(s) = 0$ for $0 < s \leq 2$. Show that these properties determine $F, f$ completely.
For future reference, we record the following explicit values of $F, f$: $$F(s) = \frac{2e^\gamma}{s} \hbox{ for } 0 < s \leq 3 \ \ \ \ (10)$$ and $$f(s) = \frac{2e^\gamma \log(s-1)}{s} \hbox{ for } 2 \leq s \leq 4. \ \ \ \ (11)$$
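Exercise 1 makes $F$ and $f$ easy to compute numerically. The sketch below (our own illustration; the step size and the forward-Euler scheme are arbitrary choices) integrates the delay-differential equations and checks the output against the closed form (11) for $f$ on $[2,4]$:

```python
import numpy as np

# Integrate (sF(s))' = f(s-1) for s > 3 and (sf(s))' = F(s-1) for s > 2,
# with F(s) = 2e^gamma/s on (0,3] and f(s) = 0 on (0,2], by forward Euler.
gamma = 0.5772156649015329
h = 1e-4
s = np.arange(h, 10, h)
sF = np.where(s <= 3, 2 * np.exp(gamma), 0.0)   # sF(s) = 2e^gamma on (0,3]
sf = np.zeros_like(s)                            # sf(s) = 0 on (0,2]
lag = int(round(1 / h))                          # index shift for s -> s-1
for i in range(len(s) - 1):
    if s[i] >= 2:
        sf[i + 1] = sf[i] + h * sF[i - lag] / s[i - lag]   # add F(s-1)
    if s[i] >= 3:
        sF[i + 1] = sF[i] + h * sf[i - lag] / s[i - lag]   # add f(s-1)
F, f = sF / s, sf / s
for s0 in (2.5, 3.0, 3.5, 4.0):
    i = int(round(s0 / h)) - 1   # f's closed form below is valid on [2,4]
    print(s0, F[i], f[i], 2 * np.exp(gamma) * np.log(s0 - 1) / s0)
```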
We will show
Theorem 2 (Linear sieve) Let the notation and hypotheses be as above, with $s := \frac{\log D}{\log z} > 1$. Then, for any $\varepsilon > 0$, one has the upper bound $$\sum_{n \not\in \bigcup_{p < z} E_p} a_n \leq (F(s) + O(\varepsilon)) X V(z) + O\left( \sum_{d \leq D: d | P} |r_d| \right) \ \ \ \ (12)$$ and the lower bound $$\sum_{n \not\in \bigcup_{p < z} E_p} a_n \geq (f(s) - O(\varepsilon)) X V(z) - O\left( \sum_{d \leq D: d | P} |r_d| \right) \ \ \ \ (13)$$ if $z$ is sufficiently large depending on $\varepsilon$. Furthermore, this claim is sharp in the sense that the quantity $F(s)$ cannot be replaced by any smaller quantity, and similarly $f(s)$ cannot be replaced by any larger quantity.
Comparing the linear sieve with the fundamental lemma (and also testing using the sequence $a_n := 1_{1 \leq n \leq x}$ for some extremely large $x$), we conclude that we necessarily have the asymptotics $$1 - O(e^{-s}) \leq f(s) \leq 1 \leq F(s) \leq 1 + O(e^{-s})$$ for all $s \geq 1$; this can also be proven directly from the definitions of $F, f$, or from Exercise 1, though it is somewhat challenging to do so; see e.g. Chapter 11 of Friedlander-Iwaniec for details.
Exercise 3 Establish the integral identities $$s F(s) = 2 e^\gamma + \int_2^{s-1} f(t)\, dt \hbox{ for } s \geq 3$$ and $$s f(s) = \int_1^{s-1} F(t)\, dt \hbox{ for } s \geq 2.$$ Argue heuristically that these identities are consistent with the bounds in Theorem 2 and the Buchstab identity (Equation (16) from Notes 4).
Exercise 4 Use the Selberg sieve (Theorem 30 from Notes 4) to obtain a slightly weaker version of (12) in the range $1 \leq s \leq 3$ in which the error term $O( \sum_{d \leq D: d | P} |r_d| )$ is worsened to $O( \sum_{d \leq D: d | P} \tau_3(d) |r_d| )$, but the main term is unchanged.
We will prove Theorem 2 below the fold. The optimality of $F, f$ is closely related to the parity problem obstruction discussed in Section 5 of Notes 4; a naive application of the parity arguments there only gives weaker versions of these sharpness claims, but this can be sharpened by a more careful counting of various sums involving the Liouville function $\lambda$.
As an application of the linear sieve (specialised to the ranges in (10), (11)), we will establish a famous theorem of Chen, giving (in some sense) the closest approach to the twin prime conjecture that one can hope to achieve by sieve-theoretic methods:
Theorem 5 (Chen’s theorem) There are infinitely many primes $p$ such that $p+2$ is the product of at most two primes.
The same argument gives the version of Chen’s theorem for the even Goldbach conjecture, namely that for all sufficiently large even $N$, there exists a prime $p$ between $1$ and $N$ such that $N - p$ is the product of at most two primes.
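Chen primes are plentiful in practice; here is a quick enumeration (a toy check of ours, with the cutoff chosen arbitrarily) of primes $p$ for which $p+2$ has at most two prime factors:

```python
from sympy import factorint, primerange

def is_P2(n: int) -> bool:
    """True if n has at most two prime factors, counted with multiplicity."""
    return sum(factorint(n).values()) <= 2

x = 10 ** 5
chen = [p for p in primerange(2, x) if is_P2(p + 2)]
print(len(chen), chen[:10])
```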
The discussion in these notes loosely follows that of Friedlander-Iwaniec (who study sieving problems in more general dimension than $\kappa = 1$).