
In the previous set of notes we established the central limit theorem, which we formulate here as follows:

Theorem 1 (Central limit theorem) Let {X_1,X_2,X_3,\dots} be iid copies of a real random variable {X} of mean {\mu} and variance {0 < \sigma^2 < \infty}, and write {S_n := X_1 + \dots + X_n}. Then, for any fixed {a < b}, we have

\displaystyle  {\bf P}( a \leq \frac{S_n - n \mu}{\sqrt{n} \sigma} \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_a^b e^{-t^2/2}\ dt \ \ \ \ \ (1)

as {n \rightarrow \infty}.

This is however not the end of the matter; there are many variants, refinements, and generalisations of the central limit theorem, and the purpose of this set of notes is to present a small sample of these variants.

First of all, the above theorem does not quantify the rate of convergence in (1). We have already addressed this issue to some extent with the Berry-Esséen theorem, which roughly speaking gives a convergence rate of {O(1/\sqrt{n})} uniformly in {a,b} if we assume that {X} has finite third moment. However there are still some quantitative versions of (1) which are not addressed by the Berry-Esséen theorem. For instance one may be interested in bounding the large deviation probabilities

\displaystyle  {\bf P}( |\frac{S_n - n \mu}{\sqrt{n} \sigma}| \geq \lambda ) \ \ \ \ \ (2)

in the setting where {\lambda} grows with {n}. The central limit theorem (1) suggests that this probability should be bounded by something like {O( e^{-\lambda^2/2})}; however, this theorem only kicks in when {n} is very large compared with {\lambda}. For instance, if one uses the Berry-Esséen theorem, one would need {n} as large as {e^{\lambda^2}} or so to reach the desired bound of {O( e^{-\lambda^2/2})}, even under the assumption of finite third moment. Basically, the issue is that convergence-in-distribution results, such as the central limit theorem, only really control the typical behaviour of statistics such as {\frac{S_n-n\mu}{\sqrt{n} \sigma}}; they are much less effective at controlling the very rare outlier events in which the statistic strays far from its typical behaviour. Fortunately, there are large deviation inequalities (or concentration of measure inequalities) that do provide exponential type bounds for quantities such as (2), which are valid for both small and large values of {n}. A basic example of this is the Chernoff bound that made an appearance in Exercise 47 of Notes 4; here we give some further basic inequalities of this type, including versions of the Bennett and Hoeffding inequalities.
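(As a quick illustration, not part of the notes: the following short simulation, under the arbitrary assumption that {X} is uniform on {\{-1,+1\}} so that {\mu=0} and {\sigma=1}, compares the empirical tail probability (2) with the sub-gaussian bound {2e^{-\lambda^2/2}} that Hoeffding-type inequalities give in this bounded setting. The library numpy and the sample sizes are assumptions made purely for the sketch.)

```python
import numpy as np

# Illustrative setup (an assumption, not from the notes): X uniform on {-1,+1},
# so mu = 0 and sigma = 1.  Since X = 2*Bernoulli(1/2) - 1, the sum S_n can be
# sampled directly as 2*Binomial(n, 1/2) - n.
rng = np.random.default_rng(0)
n, trials = 1000, 200_000
S = 2 * rng.binomial(n, 0.5, size=trials) - n
Z = S / np.sqrt(n)

# Compare the empirical tail P(|S_n/sqrt(n)| >= lambda) with the Hoeffding-type
# bound 2*exp(-lambda^2/2), valid here for every n, not just asymptotically.
for lam in [1.0, 2.0, 3.0]:
    empirical = np.mean(np.abs(Z) >= lam)
    bound = 2 * np.exp(-lam ** 2 / 2)
    print(f"lambda={lam}: empirical tail {empirical:.2e}, Hoeffding-type bound {bound:.2e}")
```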

In the other direction, we can also look at the fine scale behaviour of the sums {S_n} by trying to control probabilities such as

\displaystyle  {\bf P}( a \leq S_n \leq a+h ) \ \ \ \ \ (3)

where {h} is now bounded (but {a} can grow with {n}). The central limit theorem predicts that this quantity should be roughly {\frac{h}{\sqrt{2\pi n} \sigma} e^{-(a-n\mu)^2 / 2n \sigma^2}}, but even if one is able to invoke the Berry-Esséen theorem, one cannot quite see this main term because it is dominated by the error term {O(1/n^{1/2})} in Berry-Esséen. There is good reason for this: if for instance {X} takes integer values, then {S_n} also takes integer values, and {{\bf P}( a \leq S_n \leq a+h )} can vanish when {h} is less than {1} and {a} is slightly larger than an integer. However, this turns out to essentially be the only obstruction; if {X} does not lie in a lattice such as {{\bf Z}}, then we can establish a local limit theorem controlling (3), and when {X} does take values in a lattice like {{\bf Z}}, there is a discrete local limit theorem that controls probabilities such as {{\bf P}(S_n = m)}. Both of these limit theorems will be proven by the Fourier-analytic method used in the previous set of notes.
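(As a rough numerical illustration of the local limit prediction, the following sketch uses the arbitrary non-lattice choice of {X} uniform on {[0,1]} (so {\mu=1/2}, {\sigma^2=1/12}) and compares the empirical value of (3) with the predicted main term; numpy and the particular sample sizes are assumptions of the sketch, not part of the notes.)

```python
import numpy as np

# Illustrative non-lattice choice (an assumption): X uniform on [0,1], so
# mu = 1/2 and sigma^2 = 1/12.
rng = np.random.default_rng(1)
n, trials, h = 100, 100_000, 0.5
mu, sigma2 = 0.5, 1.0 / 12.0
S = rng.random((trials, n)).sum(axis=1)

# Window [a, a+h] centred two standard deviations above the mean of S_n.
a = n * mu + 2.0 * np.sqrt(n * sigma2)
empirical = np.mean((S >= a) & (S <= a + h))
# Predicted main term h/(sqrt(2*pi*n)*sigma) * exp(-(a - n*mu)^2 / (2*n*sigma^2)).
predicted = h / np.sqrt(2 * np.pi * n * sigma2) * np.exp(-(a - n * mu) ** 2 / (2 * n * sigma2))
print(empirical, predicted)
```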

We also discuss other limit theorems in which the limiting distribution is something other than the normal distribution. Perhaps the most common examples are the Poisson limit theorems, in which one sums a large number of indicator variables (or approximate indicator variables), each of which is rarely non-zero, but which collectively add up to a random variable of medium-sized mean. In this case, it turns out that the limiting distribution should be a Poisson random variable; this again is an easy application of the Fourier method. Finally, we briefly discuss limit theorems for stable laws other than the normal distribution, which are suitable for summing random variables of infinite variance, such as the Cauchy distribution.
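(A small simulation illustrating the Poisson limit: summing {n} independent indicators, each non-zero with probability {\lambda/n}, and comparing the empirical distribution of the sum with the Poisson({\lambda}) probabilities. The libraries numpy and scipy and the parameter choices are assumptions of the sketch.)

```python
import numpy as np
from scipy.stats import poisson

# Sum of n rare independent indicators, each equal to 1 with probability lam/n
# (parameters are arbitrary).  The sum is Binomial(n, lam/n), which should be
# close in distribution to Poisson(lam) when n is large.
rng = np.random.default_rng(2)
n, lam, trials = 1000, 3.0, 200_000
S = rng.binomial(n, lam / n, size=trials)

for k in range(7):
    print(k, np.mean(S == k), poisson.pmf(k, lam))
```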

Finally, we mention a very important class of generalisations of the CLT (and of the variants of the CLT discussed in this post), in which the hypothesis of joint independence between the variables {X_1,\dots,X_n} is relaxed; for instance, one could assume only that the {X_1,\dots,X_n} form a martingale. Many (though not all) of the proofs of the CLT extend to these more general settings, and this turns out to be important for many applications in which one does not expect joint independence. However, we will not discuss these generalisations in this course, as they are better suited for subsequent courses in this series, when the theory of martingales, conditional expectation, and related tools are developed.

Read the rest of this entry »

Let {X_1,X_2,\dots} be iid copies of an absolutely integrable real scalar random variable {X}, and form the partial sums {S_n := X_1 + \dots + X_n}. As we saw in the last set of notes, the law of large numbers ensures that the empirical averages {S_n/n} converge (both in probability and almost surely) to a deterministic limit, namely the mean {\mu= {\bf E} X} of the reference variable {X}. Furthermore, under some additional moment hypotheses on the underlying variable {X}, we can obtain square root cancellation for the fluctuation {\frac{S_n}{n} - \mu} of the empirical average from the mean. To simplify the calculations, let us first restrict to the case {\mu=0, \sigma^2=1} of mean zero and variance one, thus

\displaystyle  {\bf E} X = 0


\displaystyle  {\bf Var}(X) = {\bf E} X^2 = 1.

Then, as computed in previous notes, the normalised fluctuation {S_n/\sqrt{n}} also has mean zero and variance one:

\displaystyle  {\bf E} \frac{S_n}{\sqrt{n}} = 0

\displaystyle  {\bf Var}(\frac{S_n}{\sqrt{n}}) = {\bf E} (\frac{S_n}{\sqrt{n}})^2 = 1.

This and Chebyshev’s inequality already indicate that the “typical” size of {S_n} is {O(\sqrt{n})}; thus, for instance, {\frac{S_n}{\sqrt{n} \omega(n)}} goes to zero in probability for any {\omega(n)} that goes to infinity as {n \rightarrow \infty}. If we also have a finite fourth moment {{\bf E} |X|^4 < \infty}, then the calculations of the previous notes also give a fourth moment estimate

\displaystyle  {\bf E} (\frac{S_n}{\sqrt{n}})^4 = 3 + O( \frac{{\bf E} |X|^4}{n} ).

From this and the Paley-Zygmund inequality (Exercise 42 of Notes 1) we also get some lower bound for {\frac{S_n}{\sqrt{n}}} of the form

\displaystyle  {\bf P}( |\frac{S_n}{\sqrt{n}}| \geq \varepsilon ) \geq \varepsilon

for some absolute constant {\varepsilon>0} and for {n} sufficiently large; this indicates in particular that {\frac{S_n \omega(n)}{\sqrt{n}}} does not converge in any reasonable sense to something finite for any {\omega(n)} that goes to infinity.

The question remains as to what happens to the ratio {S_n/\sqrt{n}} itself, without multiplying or dividing by any factor {\omega(n)}. A first guess would be that these ratios converge in probability or almost surely, but this is unfortunately not the case:

Proposition 1 Let {X_1,X_2,\dots} be iid copies of an absolutely integrable real scalar random variable {X} with mean zero, variance one, and finite fourth moment, and write {S_n := X_1 + \dots + X_n}. Then the random variables {S_n/\sqrt{n}} do not converge in probability or almost surely to any limit, and neither does any subsequence of these random variables.

Proof: Suppose for contradiction that some sequence {S_{n_j}/\sqrt{n_j}} converged in probability or almost surely to a limit {Y}. By passing to a further subsequence we may assume that the convergence is in the almost sure sense. Since all of the {S_{n_j}/\sqrt{n_j}} have mean zero, variance one, and bounded fourth moment, Theorem 24 of Notes 1 implies that the limit {Y} also has mean zero and variance one. On the other hand, {Y} is a tail random variable and is thus almost surely constant by the Kolmogorov zero-one law from Notes 3. Since constants have variance zero, we obtain the required contradiction. \Box

Nevertheless there is an important limit for the ratio {S_n/\sqrt{n}}, which requires one to replace the notions of convergence in probability or almost sure convergence by the weaker concept of convergence in distribution.

Definition 2 (Vague convergence and convergence in distribution) Let {R} be a locally compact Hausdorff topological space with the Borel {\sigma}-algebra. A sequence of finite measures {\mu_n} on {R} is said to converge vaguely to another finite measure {\mu} if one has

\displaystyle  \int_R G(x)\ d\mu_n(x) \rightarrow \int_R G(x)\ d\mu(x)

as {n \rightarrow \infty} for all continuous compactly supported functions {G: R \rightarrow {\bf R}}. (Vague convergence is also known as weak convergence, although strictly speaking the terminology weak-* convergence would be more accurate.) A sequence of random variables {X_n} taking values in {R} is said to converge in distribution (or converge weakly or converge in law) to another random variable {X} if the distributions {\mu_{X_n}} converge vaguely to the distribution {\mu_X}, or equivalently if

\displaystyle  {\bf E}G(X_n) \rightarrow {\bf E} G(X)

as {n \rightarrow \infty} for all continuous compactly supported functions {G: R \rightarrow {\bf R}}.

One could in principle try to extend this definition beyond the locally compact Hausdorff setting, but certain pathologies can occur when doing so (e.g. failure of the Riesz representation theorem), and we will never need to consider vague convergence in spaces that are not locally compact Hausdorff, so we restrict to this setting for simplicity.

Note that the notion of convergence in distribution depends only on the distribution of the random variables involved. One consequence of this is that convergence in distribution does not produce unique limits: if {X_n} converges in distribution to {X}, and {Y} has the same distribution as {X}, then {X_n} also converges in distribution to {Y}. However, limits are unique up to equivalence in distribution (this is a consequence of the Riesz representation theorem, discussed for instance in this blog post). As a consequence of the insensitivity of convergence in distribution to equivalence in distribution, we may also legitimately talk about convergence in distribution of a sequence of random variables {X_n} to another random variable {X} even when all the random variables {X_1,X_2,\dots} and {X} involved are being modeled by different probability spaces (e.g. each {X_n} is modeled by {\Omega_n}, and {X} is modeled by {\Omega}, with no coupling presumed between these spaces). This is in contrast to the stronger notions of convergence in probability or almost sure convergence, which require all the random variables to be modeled by a common probability space. Also, by an abuse of notation, we can say that a sequence {X_n} of random variables converges in distribution to a probability measure {\mu}, when {\mu_{X_n}} converges vaguely to {\mu}. Thus we can talk about a sequence of random variables converging in distribution to a uniform distribution, a gaussian distribution, etc.

From the dominated convergence theorem (available for both convergence in probability and almost sure convergence) we see that convergence in probability or almost sure convergence implies convergence in distribution. The converse is not true, due to the insensitivity of convergence in distribution to equivalence in distribution; for instance, if {X_1,X_2,\dots} are iid copies of a non-deterministic scalar random variable {X}, then the {X_n} trivially converge in distribution to {X}, but will not converge in probability or almost surely (as one can see from the zero-one law). However, there are some partial converses that relate convergence in distribution to convergence in probability; see Exercise 10 below.

Remark 3 The notion of convergence in distribution is somewhat similar to the notion of convergence in the sense of distributions that arises in distribution theory (discussed for instance in this previous blog post), however strictly speaking the two notions of convergence are distinct and should not be confused with each other, despite the very similar names.

The notion of convergence in distribution simplifies in the case of real scalar random variables:

Proposition 4 Let {X_1,X_2,\dots} be a sequence of scalar random variables, and let {X} be another scalar random variable. Then the following are equivalent:

  • (i) {X_n} converges in distribution to {X}.
  • (ii) {F_{X_n}(t)} converges to {F_X(t)} for each continuity point {t} of {F_X} (i.e. for all real numbers {t \in {\bf R}} at which {F_X} is continuous). Here {F_X(t) := {\bf P}(X \leq t)} is the cumulative distribution function of {X}.

Proof: First suppose that {X_n} converges in distribution to {X}, and {F_X} is continuous at {t}. For any {\varepsilon > 0}, one can find a {\delta} such that

\displaystyle  F_X(t) - \varepsilon \leq F_X(t') \leq F_X(t) + \varepsilon

for every {t' \in [t-\delta,t+\delta]}. One can also find an {N} larger than {|t|+\delta} such that {F_X(-N) \leq \varepsilon} and {F_X(N) \geq 1-\varepsilon}. Thus

\displaystyle  {\bf P} (|X| \geq N ) = O(\varepsilon)


\displaystyle  {\bf P} (|X - t| \leq \delta ) = O(\varepsilon).

Let {G: {\bf R} \rightarrow [0,1]} be a continuous function supported on {[-2N, t]} that equals {1} on {[-N, t-\delta]}. Then by the above discussion we have

\displaystyle  {\bf E} G(X) = F_X(t) + O(\varepsilon)

and hence

\displaystyle  {\bf E} G(X_n) = F_X(t) + O(\varepsilon)

for large enough {n}. In particular

\displaystyle  {\bf P}( X_n \leq t ) \geq F_X(t) - O(\varepsilon).

A similar argument, replacing {G} with a continuous function supported on {[t,2N]} that equals {1} on {[t+\delta,N]} gives

\displaystyle  {\bf P}( X_n > t ) \geq 1 - F_X(t) - O(\varepsilon)

for {n} large enough. Putting the two estimates together gives

\displaystyle  F_{X_n}(t) = F_X(t) + O(\varepsilon)

for {n} large enough; sending {\varepsilon \rightarrow 0}, we obtain the claim.

Conversely, suppose that {F_{X_n}(t)} converges to {F_X(t)} at every continuity point {t} of {F_X}. Let {G: {\bf R} \rightarrow {\bf R}} be a continuous compactly supported function; being continuous and compactly supported, it is uniformly continuous. As {F_X} is monotone increasing, it can only have countably many points of discontinuity. From these two facts one can find, for any {\varepsilon>0}, a simple function {G_\varepsilon(t) = \sum_{i=1}^n c_i 1_{(t_i,t_{i+1}]}} for some {t_1 < \dots < t_{n+1}} that are points of continuity of {F_X}, and real numbers {c_i}, such that {|G(t) - G_\varepsilon(t)| \leq \varepsilon} for all {t}. Thus

\displaystyle  {\bf E} G(X_n) = {\bf E} G_\varepsilon(X_n) + O(\varepsilon)

\displaystyle  = \sum_{i=1}^n c_i(F_{X_n}(t_{i+1}) - F_{X_n}(t_i)) + O(\varepsilon).

Similarly for {X_n} replaced by {X}. Subtracting and taking limit superior, we conclude that

\displaystyle  \limsup_{n \rightarrow \infty} |{\bf E} G(X_n) - {\bf E} G(X)| = O(\varepsilon),

and on sending {\varepsilon \rightarrow 0}, we obtain that {X_n} converges in distribution to {X} as claimed. \Box

The restriction to continuity points {t} in (ii) is necessary. Consider for instance the deterministic random variables {X_n := 1/n}; then {X_n} converges almost surely (and hence in distribution) to {0}, but {F_{X_n}(0) = 0} does not converge to {F_X(0)=1}.

Example 5 For any natural number {n}, let {X_n} be a discrete random variable drawn uniformly from the finite set {\{0/n, 1/n, \dots, (n-1)/n\}}, and let {X} be the continuous random variable drawn uniformly from {[0,1]}. Then {X_n} converges in distribution to {X}. Thus we see that a continuous random variable can emerge as the limit of discrete random variables.

Example 6 For any natural number {n}, let {X_n} be a continuous random variable drawn uniformly from {[0,1/n]}, then {X_n} converges in distribution to the deterministic real number {0}. Thus we see that discrete (or even deterministic) random variables can emerge as the limit of continuous random variables.
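(One can check Example 5 numerically via Proposition 4: the cumulative distribution function of the discrete uniform distribution on {\{0/n,\dots,(n-1)/n\}} differs from {F_X(t) = t} by at most {1/n} on {[0,1)}. The short sketch below, with numpy assumed, simply evaluates this discrepancy for a few values of {n}.)

```python
import numpy as np

# CDF of the uniform distribution on {0/n, 1/n, ..., (n-1)/n}: for 0 <= t < 1
# it equals (floor(n*t) + 1)/n, which is within 1/n of F_X(t) = t.
def F_discrete(t, n):
    return np.clip(np.floor(n * t) + 1, 0, n) / n

for n in [10, 100, 1000]:
    ts = np.linspace(0.05, 0.95, 19)
    print(n, np.max(np.abs(F_discrete(ts, n) - ts)))
```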

Exercise 7 (Portmanteau theorem) Show that the properties (i) and (ii) in Proposition 4 are also equivalent to the following three statements:

  • (iii) One has {\limsup_{n \rightarrow \infty} {\bf P}( X_n \in K ) \leq {\bf P}(X \in K)} for all closed sets {K \subset {\bf R}}.
  • (iv) One has {\liminf_{n \rightarrow \infty} {\bf P}( X_n \in U ) \geq {\bf P}(X \in U)} for all open sets {U \subset {\bf R}}.
  • (v) For any Borel set {E \subset {\bf R}} whose topological boundary {\partial E} is such that {{\bf P}(X \in \partial E) = 0}, one has {\lim_{n \rightarrow \infty} {\bf P}(X_n \in E) = {\bf P}(X \in E)}.

(Note: to prove this theorem, you may wish to invoke Urysohn’s lemma. To deduce (iii) from (i), you may wish to start with the case of compact {K}.)

We can now state the famous central limit theorem:

Theorem 8 (Central limit theorem) Let {X_1,X_2,\dots} be iid copies of a scalar random variable {X} of finite mean {\mu := {\bf E} X} and finite non-zero variance {\sigma^2 := {\bf Var}(X)}. Let {S_n := X_1 + \dots + X_n}. Then the random variables {\frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu)} converge in distribution to a random variable with the standard normal distribution {N(0,1)} (that is to say, a random variable with probability density function {x \mapsto \frac{1}{\sqrt{2\pi}} e^{-x^2/2}}). Thus, by abuse of notation,

\displaystyle  \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \rightarrow N(0,1).

In the normalised case {\mu=0, \sigma^2=1} when {X} has mean zero and unit variance, this simplifies to

\displaystyle  \frac{S_n}{\sqrt{n}} \rightarrow N(0,1).

Using Proposition 4 (and the fact that the cumulative distribution function associated to {N(0,1)} is continuous), the central limit theorem is equivalent to asserting that

\displaystyle  {\bf P}( \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq t ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx

as {n \rightarrow \infty} for any {t \in {\bf R}}, or equivalently that

\displaystyle  {\bf P}( a \leq \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx.

Informally, one can think of the central limit theorem as asserting that {S_n} approximately behaves like it has distribution {N( n \mu, n \sigma^2 )} for large {n}, where {N(\mu,\sigma^2)} is the normal distribution with mean {\mu} and variance {\sigma^2}, that is to say the distribution with probability density function {x \mapsto \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2}}. The integrals {\frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx} can be written in terms of the error function {\hbox{erf}} as {\frac{1}{2} + \frac{1}{2} \hbox{erf}(t/\sqrt{2})}.
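(As a quick sanity check of the erf identity, the following snippet, with scipy assumed, compares {\frac{1}{2} + \frac{1}{2} \hbox{erf}(t/\sqrt{2})} with a standard normal CDF routine at a few values of {t}.)

```python
import numpy as np
from scipy.special import erf
from scipy.stats import norm

# The standard normal CDF via the error function, as in the text:
# Phi(t) = 1/2 + 1/2 * erf(t / sqrt(2)).
for t in [-1.0, 0.0, 1.0, 1.96]:
    print(t, 0.5 + 0.5 * erf(t / np.sqrt(2)), norm.cdf(t))
```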

The central limit theorem is a basic example of the universality phenomenon in probability – many statistics involving a large system of many independent (or weakly dependent) variables (such as the normalised sums {\frac{\sqrt{n}}{\sigma}(\frac{S_n}{n}-\mu)}) end up having a universal asymptotic limit (in this case, the normal distribution), regardless of the precise makeup of the underlying random variable {X} that comprises that system. Indeed, the universality of the normal distribution is such that it arises in many contexts other than the fluctuation of iid random variables; the central limit theorem is merely the first place in probability theory where it makes a prominent appearance.

We will give several proofs of the central limit theorem in these notes; each of these proofs has its advantages and disadvantages, and each can extend to prove many further results beyond the central limit theorem. We first give Lindeberg’s proof of the central limit theorem, based on exchanging (or swapping) each component {X_1,\dots,X_n} of the sum {S_n} in turn. This proof gives an accessible explanation as to why there should be a universal limit for the central limit theorem; one then computes directly with gaussians to verify that it is the normal distribution which is the universal limit. Our second proof is the most popular one taught in probability texts, namely the Fourier-analytic proof based around the concept of the characteristic function {t \mapsto {\bf E} e^{itX}} of a real random variable {X}. Thanks to the powerful identities and other results of Fourier analysis, this gives a quite short and direct proof of the central limit theorem, although the arguments may seem rather magical to readers who are not already familiar with Fourier methods. Finally, we give a proof based on the moment method, in the spirit of the arguments in the previous notes; this argument is more combinatorial, but is straightforward and is particularly robust, in particular being well equipped to handle some dependencies between components; we will illustrate this by proving the Erdos-Kac law in number theory by this method. Some further discussion of the central limit theorem (including some further proofs, such as one based on Stein’s method) can be found in this blog post. Some further variants of the central limit theorem, such as local limit theorems, stable laws, and large deviation inequalities, will be discussed in the next (and final) set of notes.

The following exercise illustrates the power of the central limit theorem, by establishing combinatorial estimates which would otherwise require the use of Stirling’s formula.

Exercise 9 (De Moivre-Laplace theorem) Let {X} be a Bernoulli random variable, taking values in {\{0,1\}} with {{\bf P}(X=0)={\bf P}(X=1)=1/2}, thus {X} has mean {1/2} and variance {1/4}. Let {X_1,X_2,\dots} be iid copies of {X}, and write {S_n := X_1+\dots+X_n}.

  • (i) Show that {S_n} takes values in {\{0,\dots,n\}} with {{\bf P}(S_n=i) = \frac{1}{2^n} \binom{n}{i}}. (This is an example of a binomial distribution.)
  • (ii) Assume Stirling’s formula

    \displaystyle  n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n} \ \ \ \ \ (1)

    where {o(1)} is a function of {n} that goes to zero as {n \rightarrow \infty}. (A proof of this formula may be found in this previous blog post.) Using this formula, and without using the central limit theorem, show that

    \displaystyle  {\bf P}( a \leq 2\sqrt{n} (\frac{S_n}{n} - \frac{1}{2}) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx

    as {n \rightarrow \infty} for any fixed real numbers {a<b}.

The above special case of the central limit theorem was first established by de Moivre and Laplace.
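(One can also verify the de Moivre-Laplace statement numerically, without Stirling’s formula, by comparing exact binomial probabilities with the Gaussian integral. The sketch below, with scipy assumed and the choices {a=-1}, {b=1} made purely for illustration, does this for a few values of {n}.)

```python
import numpy as np
from scipy.stats import binom, norm

# Exact binomial probability of a <= 2*sqrt(n)*(S_n/n - 1/2) <= b, i.e.
# n/2 + a*sqrt(n)/2 <= S_n <= n/2 + b*sqrt(n)/2, against the Gaussian integral.
a, b = -1.0, 1.0                     # arbitrary choice of interval
for n in [100, 1_000, 10_000]:
    lo = np.ceil(n / 2 + a * np.sqrt(n) / 2)
    hi = np.floor(n / 2 + b * np.sqrt(n) / 2)
    exact = binom.cdf(hi, n, 0.5) - binom.cdf(lo - 1, n, 0.5)
    print(n, exact, norm.cdf(b) - norm.cdf(a))
```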

We close this section with some basic facts about convergence of distribution that will be useful in the sequel.

Exercise 10 Let {X_1,X_2,\dots}, {Y_1,Y_2,\dots} be sequences of real random variables, and let {X,Y} be further real random variables.

  • (i) If {X} is deterministic, show that {X_n} converges in distribution to {X} if and only if {X_n} converges in probability to {X}.
  • (ii) Suppose that {X_n} is independent of {Y_n} for each {n}, and {X} independent of {Y}. Show that {X_n+iY_n} converges in distribution to {X+iY} if and only if {X_n} converges in distribution to {X} and {Y_n} converges in distribution to {Y}. (The shortest way to prove this is by invoking the Stone-Weierstrass theorem, but one can also proceed by proving some version of Proposition 4.) What happens if the independence hypothesis is dropped?
  • (iii) If {X_n} converges in distribution to {X}, show that for every {\varepsilon>0} there exists {K>0} such that {{\bf P}( |X_n| \geq K ) < \varepsilon} for all sufficiently large {n}. (That is to say, {X_n} is a tight sequence of random variables.)
  • (iv) Show that {X_n} converges in distribution to {X} if and only if, after extending the probability space model if necessary, one can find copies {Z_1,Z_2,\dots} and {Z} of {X_1,X_2,\dots} and {X} respectively such that {Z_n} converges almost surely to {Z}. (Hint: use the Skorohod representation, Exercise 29 of Notes 0.)
  • (v) If {X_1,X_2,\dots} converges in distribution to {X}, and {F: {\bf R} \rightarrow {\bf R}} is continuous, show that {F(X_1),F(X_2),\dots} converges in distribution to {F(X)}. Generalise this claim to the case when {X} takes values in an arbitrary locally compact Hausdorff space.
  • (vi) (Slutsky’s theorem) If {X_n} converges in distribution to {X}, and {Y_n} converges in probability to a deterministic limit {Y}, show that {X_n+Y_n} converges in distribution to {X+Y}, and {X_n Y_n} converges in distribution to {XY}. (Hint: either use (iv), or else use (iii) to control some error terms.) This statement combines particularly well with (i). What happens if {Y} is not assumed to be deterministic?
  • (vii) (Fatou lemma) If {G: {\bf R} \rightarrow [0,+\infty)} is continuous, and {X_n} converges in distribution to {X}, show that {\liminf_{n \rightarrow \infty} {\bf E} G(X_n) \geq {\bf E} G(X)}.
  • (viii) (Bounded convergence) If {G: {\bf R} \rightarrow {\bf R}} is continuous and bounded, and {X_n} converges in distribution to {X}, show that {\lim_{n \rightarrow \infty} {\bf E} G(X_n) = {\bf E} G(X)}.
  • (ix) (Dominated convergence) If {X_n} converges in distribution to {X}, and there is an absolutely integrable {Y} such that {|X_n| \leq Y} almost surely for all {n}, show that {\lim_{n \rightarrow \infty} {\bf E} X_n = {\bf E} X}.

For future reference we also mention (but will not prove) Prokhorov’s theorem that gives a partial converse to part (iii) of the above exercise:

Theorem 11 (Prokhorov’s theorem) Let {X_1,X_2,\dots} be a sequence of real random variables which is tight (that is, for every {\varepsilon>0} there exists {K>0} such that {{\bf P}(|X_n| \geq K) < \varepsilon} for all sufficiently large {n}). Then there exists a subsequence {X_{n_j}} which converges in distribution to some random variable {X} (which may possibly be modeled by a different probability space model than the {X_1,X_2,\dots}.)

The proof of this theorem relies on the Riesz representation theorem, and is beyond the scope of this course; but see for instance Exercise 29 of this previous blog post. (See also the closely related Helly selection theorem, covered in Exercise 30 of the same post.)

Read the rest of this entry »

One of the major activities in probability theory is studying the various statistics that can be produced from a complex system with many components. One of the simplest possible systems one can consider is a finite sequence {X_1,\dots,X_n} or an infinite sequence {X_1,X_2,\dots} of jointly independent scalar random variables, with the case when the {X_i} are also identically distributed (i.e. the {X_i} are iid) being a model case of particular interest. (In some cases one may consider a triangular array {(X_{n,i})_{1 \leq i \leq n}} of scalar random variables, rather than a finite or infinite sequence.) There are many statistics of such sequences that one can study, but among the most basic are the partial sums

\displaystyle  S_n := X_1 + \dots + X_n.

The first fundamental result about these sums is the law of large numbers (or LLN for short), which comes in two formulations, weak (WLLN) and strong (SLLN). To state these laws, we first must define the notion of convergence in probability.

Definition 1 Let {X_n} be a sequence of random variables taking values in a separable metric space {R = (R,d)} (e.g. the {X_n} could be scalar random variables, taking values in {{\bf R}} or {{\bf C}}), and let {X} be another random variable taking values in {R}. We say that {X_n} converges in probability to {X} if, for every radius {\varepsilon > 0}, one has {{\bf P}( d(X_n,X) > \varepsilon ) \rightarrow 0} as {n \rightarrow \infty}. Thus, if {X_n, X} are scalar, we have {X_n} converging to {X} in probability if {{\bf P}( |X_n-X| > \varepsilon ) \rightarrow 0} as {n \rightarrow \infty} for any given {\varepsilon > 0}.

The measure-theoretic analogue of convergence in probability is convergence in measure.

It is instructive to compare the notion of convergence in probability with almost sure convergence. It is easy to see that {X_n} converges almost surely to {X} if and only if, for every radius {\varepsilon > 0}, one has {{\bf P}( \bigvee_{n \geq N} (d(X_n,X)>\varepsilon) ) \rightarrow 0} as {N \rightarrow \infty}; thus, roughly speaking, convergence in probability is good for controlling how close a single random variable {X_n} is to its putative limiting value {X}, while almost sure convergence is good for controlling how close the entire tail {(X_n)_{n \geq N}} of a sequence of random variables is to its putative limit {X}.

We have the following easy relationships between convergence in probability and almost sure convergence:

Exercise 2 Let {X_n} be a sequence of scalar random variables, and let {X} be another scalar random variable.

  • (i) If {X_n \rightarrow X} almost surely, show that {X_n \rightarrow X} in probability. Give a counterexample to show that the converse does not necessarily hold.
  • (ii) Suppose that {\sum_n {\bf P}( |X_n-X| > \varepsilon ) < \infty} for all {\varepsilon > 0}. Show that {X_n \rightarrow X} almost surely. Give a counterexample to show that the converse does not necessarily hold.
  • (iii) If {X_n \rightarrow X} in probability, show that there is a subsequence {X_{n_j}} of the {X_n} such that {X_{n_j} \rightarrow X} almost surely.
  • (iv) If {X_n,X} are absolutely integrable and {{\bf E} |X_n-X| \rightarrow 0} as {n \rightarrow \infty}, show that {X_n \rightarrow X} in probability. Give a counterexample to show that the converse does not necessarily hold.
  • (v) (Urysohn subsequence principle) Suppose that every subsequence {X_{n_j}} of {X_n} has a further subsequence {X_{n_{j_k}}} that converges to {X} in probability. Show that {X_n} also converges to {X} in probability.
  • (vi) Does the Urysohn subsequence principle still hold if “in probability” is replaced with “almost surely” throughout?
  • (vii) If {X_n} converges in probability to {X}, and {F: {\bf R} \rightarrow {\bf R}} or {F: {\bf C} \rightarrow {\bf C}} is continuous, show that {F(X_n)} converges in probability to {F(X)}. More generally, if for each {i=1,\dots,k}, {X^{(i)}_n} is a sequence of scalar random variables that converge in probability to {X^{(i)}}, and {F: {\bf R}^k \rightarrow {\bf R}} or {F: {\bf C}^k \rightarrow {\bf C}} is continuous, show that {F(X^{(1)}_n,\dots,X^{(k)}_n)} converges in probability to {F(X^{(1)},\dots,X^{(k)})}. (Thus, for instance, if {X_n} and {Y_n} converge in probability to {X} and {Y} respectively, then {X_n + Y_n} and {X_n Y_n} converge in probability to {X+Y} and {XY} respectively.)
  • (viii) (Fatou’s lemma for convergence in probability) If {X_n} are non-negative and converge in probability to {X}, show that {{\bf E} X \leq \liminf_{n \rightarrow \infty} {\bf E} X_n}.
  • (ix) (Dominated convergence in probability) If {X_n} converge in probability to {X}, and one almost surely has {|X_n| \leq Y} for all {n} and some absolutely integrable {Y}, show that {{\bf E} X_n} converges to {{\bf E} X}.

Exercise 3 Let {X_1,X_2,\dots} be a sequence of scalar random variables converging in probability to another random variable {X}.

  • (i) Suppose that there is a random variable {Y} which is independent of {X_i} for each individual {i}. Show that {Y} is also independent of {X}.
  • (ii) Suppose that the {X_1,X_2,\dots} are jointly independent. Show that {X} is almost surely constant (i.e. there is a deterministic scalar {c} such that {X=c} almost surely).

We can now state the weak and strong law of large numbers, in the model case of iid random variables.

Theorem 4 (Law of large numbers, model case) Let {X_1, X_2, \dots} be an iid sequence of copies of an absolutely integrable random variable {X} (thus the {X_i} are independent and all have the same distribution as {X}). Write {\mu := {\bf E} X}, and for each natural number {n}, let {S_n} denote the random variable {S_n := X_1 + \dots + X_n}.

  • (i) (Weak law of large numbers) The random variables {S_n/n} converge in probability to {\mu}.
  • (ii) (Strong law of large numbers) The random variables {S_n/n} converge almost surely to {\mu}.

Informally: if {X_1,\dots,X_n} are iid with mean {\mu}, then {X_1 + \dots + X_n \approx \mu n} for {n} large. Clearly the strong law of large numbers implies the weak law, but the weak law is easier to prove (and has somewhat better quantitative estimates). There are several variants of the law of large numbers, for instance when one drops the hypothesis of identical distribution, or when the random variable {X} is not absolutely integrable, or if one seeks more quantitative bounds on the rate of convergence; we will discuss some of these variants below the fold.
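(A quick simulation of the law of large numbers in action: the running empirical means of iid Exponential(1) samples, an arbitrary choice of distribution with mean {\mu=1}, settle down near {\mu}; numpy is assumed.)

```python
import numpy as np

# Running empirical means S_n/n of iid Exponential(1) samples (mean mu = 1).
rng = np.random.default_rng(3)
X = rng.exponential(scale=1.0, size=100_000)
running_mean = np.cumsum(X) / np.arange(1, X.size + 1)
for n in [10, 100, 1_000, 10_000, 100_000]:
    print(n, running_mean[n - 1])
```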

It is instructive to compare the law of large numbers with what one can obtain from the Kolmogorov zero-one law, discussed in Notes 2. Observe that if the {X_n} are real-valued, then the limit superior {\limsup_{n \rightarrow \infty} S_n/n} and {\liminf_{n \rightarrow \infty} S_n/n} are tail random variables in the sense that they are not affected if one changes finitely many of the {X_n}; in particular, events such as {\limsup_{n \rightarrow \infty} S_n/n > t} are tail events for any {t \in {\bf R}}. From this and the zero-one law we see that there must exist deterministic quantities {-\infty \leq \mu_- \leq \mu_+ \leq +\infty} such that {\limsup_{n \rightarrow \infty} S_n/n = \mu_+} and {\liminf_{n \rightarrow \infty} S_n/n = \mu_-} almost surely. The strong law of large numbers can then be viewed as the assertion that {\mu_- = \mu_+ = \mu} when {X} is absolutely integrable. On the other hand, the zero-one law argument does not require absolute integrability (and one can replace the denominator {n} by other functions of {n} that go to infinity as {n \rightarrow \infty}).

The law of large numbers asserts, roughly speaking, that the theoretical expectation {\mu} of a random variable {X} can be approximated by taking a large number of independent samples {X_1,\dots,X_n} of {X} and then forming the empirical mean {S_n/n = \frac{X_1+\dots+X_n}{n}}. This ability to approximate the theoretical statistics of a probability distribution through empirical data is one of the basic starting points for mathematical statistics, though this is not the focus of the course here. The tendency of statistics such as {S_n/n} to cluster closely around their mean value {\mu} is the simplest instance of the concentration of measure phenomenon, which is of tremendous significance not only within probability, but also in applications of probability to disciplines such as statistics, theoretical computer science, combinatorics, random matrix theory and high dimensional geometry. We will not discuss these topics much in this course, but see this previous blog post for some further discussion.

There are several ways to prove the law of large numbers (in both forms). One basic strategy is to use the moment method – controlling statistics such as {S_n/n} by computing moments such as the mean {{\bf E} S_n/n}, variance {{\bf E} |S_n/n - {\bf E} S_n/n|^2}, or higher moments such as {{\bf E} |S_n/n - {\bf E} S_n/n|^k} for {k = 4, 6, \dots}. The joint independence of the {X_i} make such moments fairly easy to compute, requiring only some elementary combinatorics. A direct application of the moment method typically requires one to make a finite moment assumption such as {{\bf E} |X|^k < \infty}, but as we shall see, one can reduce fairly easily to this case by a truncation argument.

For the strong law of large numbers, one can also use methods relating to the theory of martingales, such as stopping time arguments and maximal inequalities; we present some classical arguments of Kolmogorov in this regard.

Read the rest of this entry »

In the previous set of notes, we constructed the measure-theoretic notion of the Lebesgue integral, and used this to set up the probabilistic notion of expectation on a rigorous footing. In this set of notes, we will similarly construct the measure-theoretic concept of a product measure (restricting to the case of probability measures to avoid unnecessary technicalities), and use this to set up the probabilistic notion of independence on a rigorous footing. (To quote Durrett: “measure theory ends and probability theory begins with the definition of independence.”) We will be able to take virtually any collection of random variables (or probability distributions) and couple them together to be independent via the product measure construction, though for infinite products there is the slight technicality (a requirement of the Kolmogorov extension theorem) that the random variables need to range in standard Borel spaces. This is not the only way to couple together such random variables, but it is the simplest and the easiest to compute with in practice, as we shall see in the next few sets of notes.

Read the rest of this entry »

In Notes 0, we introduced the notion of a measure space {\Omega = (\Omega, {\mathcal F}, \mu)}, which includes as a special case the notion of a probability space. By selecting one such probability space {(\Omega,{\mathcal F},\mu)} as a sample space, one obtains a model for random events and random variables, with random events {E} being modeled by measurable sets {E_\Omega} in {{\mathcal F}}, and random variables {X} taking values in a measurable space {R} being modeled by measurable functions {X_\Omega: \Omega \rightarrow R}. We then defined some basic operations on these random events and variables:

  • Given events {E,F}, we defined the conjunction {E \wedge F}, the disjunction {E \vee F}, and the complement {\overline{E}}. For countable families {E_1,E_2,\dots} of events, we similarly defined {\bigwedge_{n=1}^\infty E_n} and {\bigvee_{n=1}^\infty E_n}. We also defined the empty event {\emptyset} and the sure event {\overline{\emptyset}}, and what it meant for two events to be equal.
  • Given random variables {X_1,\dots,X_n} in ranges {R_1,\dots,R_n} respectively, and a measurable function {F: R_1 \times \dots \times R_n \rightarrow S}, we defined the random variable {F(X_1,\dots,X_n)} in range {S}. (As the special case {n=0} of this, every deterministic element {s} of {S} was also a random variable taking values in {S}.) Given a relation {P: R_1 \times \dots \times R_n \rightarrow \{\hbox{true}, \hbox{false}\}}, we similarly defined the event {P(X_1,\dots,X_n)}. Conversely, given an event {E}, we defined the indicator random variable {1_E}. Finally, we defined what it meant for two random variables to be equal.
  • Given an event {E}, we defined its probability {{\bf P}(E)}.

These operations obey various axioms; for instance, the boolean operations on events obey the axioms of a Boolean algebra, and the probability function {E \mapsto {\bf P}(E)} obeys the Kolmogorov axioms. However, we will not focus on the axiomatic approach to probability theory here, instead basing the foundations of probability theory on the sample space models as discussed in Notes 0. (But see this previous post for a treatment of one such axiomatic approach.)

It turns out that almost all of the other operations on random events and variables we need can be constructed in terms of the above basic operations. In particular, this allows one to safely extend the sample space in probability theory whenever needed, provided one uses an extension that respects the above basic operations; this is an important operation when one needs to add new sources of randomness to an existing system of events and random variables, or to couple together two separate such systems into a joint system that extends both of the original systems. We gave a simple example of such an extension in the previous notes, but now we give a more formal definition:

Definition 1 Suppose that we are using a probability space {\Omega = (\Omega, {\mathcal F}, \mu)} as the model for a collection of events and random variables. An extension of this probability space is a probability space {\Omega' = (\Omega', {\mathcal F}', \mu')}, together with a measurable map {\pi: \Omega' \rightarrow \Omega} (sometimes called the factor map) which is probability-preserving in the sense that

\displaystyle  \mu'( \pi^{-1}(E) ) = \mu(E) \ \ \ \ \ (1)

for all {E \in {\mathcal F}}. (Caution: this does not imply that {\mu(\pi(F)) = \mu'(F)} for all {F \in {\mathcal F}'} – why not?)

An event {E} which is modeled by a measurable subset {E_\Omega} in the sample space {\Omega}, will be modeled by the measurable set {E_{\Omega'} := \pi^{-1}(E_\Omega)} in the extended sample space {\Omega'}. Similarly, a random variable {X} taking values in some range {R} that is modeled by a measurable function {X_\Omega: \Omega \rightarrow R} in {\Omega}, will be modeled instead by the measurable function {X_{\Omega'} := X_\Omega \circ \pi} in {\Omega'}. We also allow the extension {\Omega'} to model additional events and random variables that were not modeled by the original sample space {\Omega} (indeed, this is one of the main reasons why we perform extensions in probability in the first place).

Thus, for instance, the sample space {\Omega'} in Example 3 of the previous post is an extension of the sample space {\Omega} in that example, with the factor map {\pi: \Omega' \rightarrow \Omega} given by the first coordinate projection {\pi(i,j) := i}. One can verify that all of the basic operations on events and random variables listed above are unaffected by the above extension (with one caveat, see remark below). For instance, the conjunction {E \wedge F} of two events can be defined via the original model {\Omega} by the formula

\displaystyle  (E \wedge F)_\Omega := E_\Omega \cap F_\Omega

or via the extension {\Omega'} via the formula

\displaystyle  (E \wedge F)_{\Omega'} := E_{\Omega'} \cap F_{\Omega'}.

The two definitions are consistent with each other, thanks to the obvious set-theoretic identity

\displaystyle  \pi^{-1}( E_\Omega \cap F_\Omega ) = \pi^{-1}(E_\Omega) \cap \pi^{-1}(F_\Omega).

Similarly, the assumption (1) is precisely what is needed to ensure that the probability {\mathop{\bf P}(E)} of an event remains unchanged when one replaces a sample space model with an extension. We leave the verification of preservation of the other basic operations described above under extension as exercises to the reader.

Remark 2 There is one minor exception to this general rule if we do not impose the additional requirement that the factor map {\pi} is surjective. Namely, for non-surjective {\pi}, it can become possible that two events {E, F} are unequal in the original sample space model, but become equal in the extension (and similarly for random variables), although the converse never happens (events that are equal in the original sample space always remain equal in the extension). For instance, let {\Omega} be the discrete probability space {\{a,b\}} with {p_a=1} and {p_b=0}, and let {\Omega'} be the discrete probability space {\{ a'\}} with {p'_{a'}=1}, and non-surjective factor map {\pi: \Omega' \rightarrow \Omega} defined by {\pi(a') := a}. Then the event modeled by {\{b\}} in {\Omega} is distinct from the empty event when viewed in {\Omega}, but becomes equal to that event when viewed in {\Omega'}. Thus we see that extending the sample space by a non-surjective factor map can identify previously distinct events together (though of course, being probability preserving, this can only happen if those two events were already almost surely equal anyway). This turns out to be fairly harmless though; while it is nice to know if two given events are equal, or if they differ by a non-null event, it is almost never useful to know that two events are unequal if they are already almost surely equal. Alternatively, one can add the additional requirement of surjectivity in the definition of an extension, which is also a fairly harmless constraint to impose (this is what I chose to do in this previous set of notes).

Roughly speaking, one can define probability theory as the study of those properties of random events and random variables that are model-independent in the sense that they are preserved by extensions. For instance, the cardinality {|E_\Omega|} of the model {E_\Omega} of an event {E} is not a concept within the scope of probability theory, as it is not preserved by extensions: continuing Example 3 from Notes 0, the event {E} that a die roll {X} is even is modeled by a set {E_\Omega = \{2,4,6\}} of cardinality {3} in the original sample space model {\Omega}, but by a set {E_{\Omega'} = \{2,4,6\} \times \{1,2,3,4,5,6\}} of cardinality {18} in the extension. Thus it does not make sense in the context of probability theory to refer to the “cardinality of an event {E}“.

On the other hand, the supremum {\sup_n X_n} of a collection of random variables {X_n} in the extended real line {[-\infty,+\infty]} is a valid probabilistic concept. This can be seen by manually verifying that this operation is preserved under extension of the sample space, but one can also see this by defining the supremum in terms of existing basic operations. Indeed, note from Exercise 24 of Notes 0 that a random variable {X} in the extended real line is completely specified by the threshold events {(X \leq t)} for {t \in {\bf R}}; in particular, two such random variables {X,Y} are equal if and only if the events {(X \leq t)} and {(Y \leq t)} are surely equal for all {t}. From the identity

\displaystyle  (\sup_n X_n \leq t) = \bigwedge_{n=1}^\infty (X_n \leq t)

we thus see that one can completely specify {\sup_n X_n} in terms of {X_n} using only the basic operations provided in the above list (and in particular using the countable conjunction {\bigwedge_{n=1}^\infty}.) Of course, the same considerations hold if one replaces supremum, by infimum, limit superior, limit inferior, or (if it exists) the limit.

In this set of notes, we will define some further important operations on scalar random variables, in particular the expectation of these variables. In the sample space models, expectation corresponds to the notion of integration on a measure space. As we will need to use both expectation and integration in this course, we will thus begin by quickly reviewing the basics of integration on a measure space, although we will then translate the key results of this theory into probabilistic language.

As the finer details of the Lebesgue integral construction are not the core focus of this probability course, some of the details of this construction will be left to exercises. See also Chapter 1 of Durrett, or these previous blog notes, for a more detailed treatment.

Read the rest of this entry »

Starting this week, I will be teaching an introductory graduate course (Math 275A) on probability theory here at UCLA. While I find myself using probabilistic methods routinely nowadays in my research (for instance, the probabilistic concept of Shannon entropy played a crucial role in my recent paper on the Chowla and Elliott conjectures, and random multiplicative functions similarly played a central role in the paper on the Erdos discrepancy problem), this will actually be the first time I will be teaching a course on probability itself (although I did give a course on random matrix theory some years ago that presumed familiarity with graduate-level probability theory). As such, I will be relying primarily on an existing textbook, in this case Durrett’s Probability: Theory and Examples. I still need to prepare lecture notes, though, and so I thought I would continue my practice of putting my notes online, although in this particular case they will be less detailed or complete than with other courses, as they will mostly be focusing on those topics that are not already comprehensively covered in the text of Durrett. Below the fold are my first such set of notes, concerning the classical measure-theoretic foundations of probability. (I wrote on these foundations also in this previous blog post, but in that post I already assumed that the reader was familiar with measure theory and basic probability, whereas in this course not every student will have a strong background in these areas.)

Note: as this set of notes is primarily concerned with foundational issues, it will contain a large number of pedantic (and nearly trivial) formalities and philosophical points. We dwell on these technicalities in this set of notes primarily so that they are out of the way in later notes, when we work with the actual mathematics of probability, rather than on the supporting foundations of that mathematics. In particular, the excessively formal and philosophical language in this set of notes will not be replicated in later notes.

Read the rest of this entry »

We have seen in previous notes that the operation of forming a Dirichlet series

\displaystyle  {\mathcal D} f(s) := \sum_n \frac{f(n)}{n^s}

or twisted Dirichlet series

\displaystyle  {\mathcal D} \chi f(s) := \sum_n \frac{f(n) \chi(n)}{n^s}

is an incredibly useful tool for questions in multiplicative number theory. Such series can be viewed as a multiplicative Fourier transform, since the functions {n \mapsto \frac{1}{n^s}} and {n \mapsto \frac{\chi(n)}{n^s}} are multiplicative characters.

Similarly, it turns out that the operation of forming an additive Fourier series

\displaystyle  \hat f(\theta) := \sum_n f(n) e(-n \theta),

where {\theta} lies on the (additive) unit circle {{\bf R}/{\bf Z}} and {e(\theta) := e^{2\pi i \theta}} is the standard additive character, is an incredibly useful tool for additive number theory, particularly when studying additive problems involving three or more variables taking values in sets such as the primes; the deployment of this tool is generally known as the Hardy-Littlewood circle method. (In the analytic number theory literature, the minus sign in the phase {e(-n\theta)} is traditionally omitted, and what is denoted by {\hat f(\theta)} here would be referred to instead by {S_f(-\theta)}, {S(f;-\theta)} or just {S(-\theta)}.) We list some of the most classical problems in this area:

  • (Even Goldbach conjecture) Is it true that every even natural number {N} greater than two can be expressed as the sum {p_1+p_2} of two primes?
  • (Odd Goldbach conjecture) Is it true that every odd natural number {N} greater than five can be expressed as the sum {p_1+p_2+p_3} of three primes?
  • (Waring problem) For each natural number {k}, what is the least natural number {g(k)} such that every natural number {N} can be expressed as the sum of {g(k)} or fewer {k^{th}} powers?
  • (Asymptotic Waring problem) For each natural number {k}, what is the least natural number {G(k)} such that every sufficiently large natural number {N} can be expressed as the sum of {G(k)} or fewer {k^{th}} powers?
  • (Partition function problem) For any natural number {N}, let {p(N)} denote the number of representations of {N} of the form {N = n_1 + \dots + n_k} where {k} and {n_1 \geq \dots \geq n_k} are natural numbers. What is the asymptotic behaviour of {p(N)} as {N \rightarrow \infty}?

The Waring problem and its asymptotic version will not be discussed further here, save to note that the Vinogradov mean value theorem (Theorem 13 from Notes 5) and its variants are particularly useful for getting good bounds on {G(k)}; see for instance the ICM article of Wooley for recent progress on these problems. Similarly, the partition function problem was the original motivation of Hardy and Littlewood in introducing the circle method, but we will not discuss it further here; see e.g. Chapter 20 of Iwaniec-Kowalski for a treatment.

Instead, we will focus our attention on the odd Goldbach conjecture as our model problem. (The even Goldbach conjecture, which involves only two variables instead of three, is unfortunately not amenable to a circle method approach for a variety of reasons, unless the statement is replaced with something weaker, such as an averaged statement; see this previous blog post for further discussion. On the other hand, the methods here can obtain weaker versions of the even Goldbach conjecture, such as showing that “almost all” even numbers are the sum of two primes; see Exercise 34 below.) In particular, we will establish the following celebrated theorem of Vinogradov:

Theorem 1 (Vinogradov’s theorem) Every sufficiently large odd number {N} is expressible as the sum of three primes.

Recently, the restriction that {N} be sufficiently large was replaced by Helfgott with {N > 5}, thus establishing the odd Goldbach conjecture in full. This argument followed the same basic approach as Vinogradov (based on the circle method), but with various estimates replaced by “log-free” versions (analogous to the log-free zero-density theorems in Notes 7), combined with careful numerical optimisation of constants and also some numerical work on the even Goldbach problem and on the generalised Riemann hypothesis. We refer the reader to Helfgott’s text for details.

We will in fact show the more precise statement:

Theorem 2 (Quantitative Vinogradov theorem) Let {N \geq 2} be a natural number. Then

\displaystyle  \sum_{a,b,c: a+b+c=N} \Lambda(a) \Lambda(b) \Lambda(c) = G_3(N) \frac{N^2}{2} + O_A( N^2 \log^{-A} N )

for any {A>0}, where

\displaystyle  G_3(N) = \prod_{p|N} (1-\frac{1}{(p-1)^2}) \times \prod_{p \not | N} (1 + \frac{1}{(p-1)^3}). \ \ \ \ \ (1)

The implied constants are ineffective.

We dropped the hypothesis that {N} is odd in Theorem 2, but note that {G_3(N)} vanishes when {N} is even. For odd {N}, we have

\displaystyle  1 \ll G_3(N) \ll 1.
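(For concreteness, the singular series {G_3(N)} can be approximated numerically by truncating the Euler product (1) at a finite cutoff, since the omitted factors are {1+O(1/p^2)}. The helper below is a hypothetical sketch; the library sympy and the cutoff are assumptions made for illustration only.)

```python
from sympy import primerange

# Hypothetical helper: approximate G_3(N) by truncating the Euler product (1)
# at primes below a cutoff; the omitted factors are 1 + O(1/p^2), so the
# truncation error is small for moderate N.
def G3(N, cutoff=10_000):
    prod = 1.0
    for p in primerange(2, cutoff):
        if N % p == 0:
            prod *= 1.0 - 1.0 / (p - 1) ** 2
        else:
            prod *= 1.0 + 1.0 / (p - 1) ** 3
    return prod

print(G3(10**6 + 1))   # an odd N: value comparable to 1
print(G3(10**6))       # an even N: the factor at p = 2 vanishes, so G_3(N) = 0
```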

Exercise 3 Show that Theorem 2 implies Theorem 1.

Unfortunately, due to the ineffectivity of the constants in Theorem 2 (a consequence of the reliance on the Siegel-Walfisz theorem in the proof of that theorem), one cannot quantify explicitly what “sufficiently large” means in Theorem 1 directly from Theorem 2. However, there is a modification of this theorem which gives effective bounds; see Exercise 32 below.

Exercise 4 Obtain a heuristic derivation of the main term {G_3(N) \frac{N^2}{2}} using the modified Cramér model (Section 1 of Supplement 4).

To prove Theorem 2, we consider the more general problem of estimating sums of the form

\displaystyle  \sum_{a,b,c \in {\bf Z}: a+b+c=N} f(a) g(b) h(c)

for various integers {N} and functions {f,g,h: {\bf Z} \rightarrow {\bf C}}, which we will take to be finitely supported to avoid issues of convergence.

Suppose that {f,g,h} are supported on {\{1,\dots,N\}}; for simplicity, let us first assume the pointwise bound {|f(n)|, |g(n)|, |h(n)| \ll 1} for all {n}. (This simple case will not cover the case in Theorem 2, when {f,g,h} are truncated versions of the von Mangoldt function {\Lambda}, but will serve as a warmup to that case.) Then we have the trivial upper bound

\displaystyle  \sum_{a,b,c \in {\bf Z}: a+b+c=N} f(a) g(b) h(c) \ll N^2. \ \ \ \ \ (2)

A basic observation is that this upper bound is attainable if {f,g,h} all “pretend” to behave like the same additive character {n \mapsto e(\theta n)} for some {\theta \in {\bf R}/{\bf Z}}. For instance, if {f(n)=g(n)=h(n) = e(\theta n) 1_{n \leq N}}, then we have {f(a)g(b)h(c) = e(\theta N)} when {a+b+c=N}, and then it is not difficult to show that

\displaystyle  \sum_{a,b,c \in {\bf Z}: a+b+c=N} f(a) g(b) h(c) = (\frac{1}{2}+o(1)) e(\theta N) N^2

as {N \rightarrow \infty}.

The key to the success of the circle method lies in the converse of the above statement: the only way that the trivial upper bound (2) comes close to being sharp is when {f,g,h} all correlate with the same character {n \mapsto e(\theta n)}, or in other words {\hat f(\theta), \hat g(\theta), \hat h(\theta)} are simultaneously large. This converse is largely captured by the following two identities:

Exercise 5 Let {f,g,h: {\bf Z} \rightarrow {\bf C}} be finitely supported functions. Then for any natural number {N}, show that

\displaystyle  \sum_{a,b,c: a+b+c=N} f(a) g(b) h(c) = \int_{{\bf R}/{\bf Z}} \hat f(\theta) \hat g(\theta) \hat h(\theta) e(\theta N)\ d\theta \ \ \ \ \ (3)


\displaystyle  \sum_n |f(n)|^2 = \int_{{\bf R}/{\bf Z}} |\hat f(\theta)|^2\ d\theta.
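(One can sanity-check the identity (3) numerically: for functions supported on {\{0,\dots,L-1\}}, the integral over {{\bf R}/{\bf Z}} can be replaced exactly by an average over {M} equally spaced frequencies as soon as {M} exceeds {3(L-1)}, the largest value of {a+b+c} on the support. The sketch below, with numpy assumed and arbitrary test data, compares the two sides of (3).)

```python
import numpy as np

# Discrete check of (3): with f, g, h supported on {0,...,L-1}, the integral over
# R/Z may be replaced by an average over theta = j/M (j = 0,...,M-1) once
# M > 3(L-1), since then only the a+b+c = N term survives the averaging.
rng = np.random.default_rng(4)
L, N = 20, 31
f, g, h = (rng.standard_normal(L) for _ in range(3))

# Left-hand side: direct sum over a + b + c = N.
lhs = sum(f[a] * g[b] * h[N - a - b]
          for a in range(L) for b in range(L) if 0 <= N - a - b < L)

# Right-hand side: average of fhat*ghat*hhat*e(theta*N) over theta = j/M.
M = 3 * L
theta = np.arange(M) / M
phase = np.exp(-2j * np.pi * np.outer(theta, np.arange(L)))   # e(-n*theta)
fhat, ghat, hhat = phase @ f, phase @ g, phase @ h
rhs = np.mean(fhat * ghat * hhat * np.exp(2j * np.pi * theta * N))
print(lhs, rhs.real)
```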

The traditional approach to using the circle method to compute sums such as {\sum_{a,b,c: a+b+c=N} f(a) g(b) h(c)} proceeds by invoking (3) to express this sum as an integral over the unit circle, then dividing the unit circle into “major arcs” where {\hat f(\theta), \hat g(\theta),\hat h(\theta)} are large but computable with high precision, and “minor arcs” where one has estimates to ensure that {\hat f(\theta), \hat g(\theta),\hat h(\theta)} are small in both {L^\infty} and {L^2} senses. For functions {f,g,h} of number-theoretic significance, such as truncated von Mangoldt functions, the “major arcs” typically consist of those {\theta} that are close to a rational number {\frac{a}{q}} with {q} not too large, and the “minor arcs” consist of the remaining portions of the circle. One then obtains lower bounds on the contributions of the major arcs, and upper bounds on the contribution of the minor arcs, in order to get good lower bounds on {\sum_{a,b,c: a+b+c=N} f(a) g(b) h(c)}.

This traditional approach is covered in many places, such as this text of Vaughan. We will emphasise in this set of notes a slightly different perspective on the circle method, coming from recent developments in additive combinatorics; this approach does not quite give the sharpest quantitative estimates, but it allows for easier generalisation to more combinatorial contexts, for instance when replacing the primes by dense subsets of the primes, or replacing the equation {a+b+c=N} with some other equation or system of equations.

From Exercise 5 and Hölder’s inequality, we immediately obtain

Corollary 6 Let {f,g,h: {\bf Z} \rightarrow {\bf C}} be finitely supported functions. Then for any natural number {N}, we have

\displaystyle  |\sum_{a,b,c: a+b+c=N} f(a) g(b) h(c)| \leq (\sum_n |f(n)|^2)^{1/2} (\sum_n |g(n)|^2)^{1/2}

\displaystyle  \times \sup_\theta |\sum_n h(n) e(n\theta)|.

Similarly for permutations of the {f,g,h}.

In the case when {f,g,h} are supported on {[1,N]} and bounded by {O(1)}, this corollary tells us that {\sum_{a,b,c: a+b+c=N} f(a) g(b) h(c)} is {o(N^2)} whenever one has {\sum_n h(n) e(n\theta) = o(N)} uniformly in {\theta}, and similarly for permutations of {f,g,h}. From this and the triangle inequality, we obtain the following conclusion: if {f} is supported on {[1,N]} and bounded by {O(1)}, and {f} is Fourier-approximated by another function {g} supported on {[1,N]} and bounded by {O(1)} in the sense that

\displaystyle  \sum_n f(n) e(n\theta) = \sum_n g(n) e(n\theta) + o(N)

uniformly in {\theta}, then we have

\displaystyle  \sum_{a,b,c: a+b+c=N} f(a) f(b) f(c) = \sum_{a,b,c: a+b+c=N} g(a) g(b) g(c) + o(N^2). \ \ \ \ \ (4)

Thus, one possible strategy for estimating the sum {\sum_{a,b,c: a+b+c=N} f(a) f(b) f(c)} is to effectively replace (or “model”) {f} by a simpler function {g} which Fourier-approximates {f}, in the sense that the exponential sums {\sum_n f(n) e(n\theta), \sum_n g(n) e(n\theta)} agree up to error {o(N)}. For instance:

Exercise 7 Let {N} be a natural number, and let {A} be a random subset of {\{1,\dots,N\}}, chosen so that each {n \in \{1,\dots,N\}} has an independent probability of {1/2} of lying in {A}.

  • (i) If {f := 1_A} and {g := \frac{1}{2} 1_{[1,N]}}, show that with probability {1-o(1)} as {N \rightarrow \infty}, one has {\sum_n f(n) e(n\theta) = \sum_n g(n) e(n\theta) + o(N)} uniformly in {\theta}. (Hint: for any fixed {\theta}, this can be accomplished with quite a good probability (e.g. {1-o(N^{-2})}) using a concentration of measure inequality, such as Hoeffding’s inequality. To obtain the uniformity in {\theta}, round {\theta} to the nearest multiple of (say) {1/N^2} and apply the union bound).
  • (ii) Show that with probability {1-o(1)}, one has {(\frac{1}{16}+o(1))N^2} representations of the form {N=a+b+c} with {a,b,c \in A} (with {(a,b,c)} treated as an ordered triple, rather than an unordered one).
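
As a quick numerical illustration of part (ii) of the above exercise (a Monte Carlo sketch only, with the arbitrary choices {N=5000} and a fixed random seed; it is of course no substitute for the concentration-of-measure argument), one can count the representations directly by convolving the indicator function of {A} with itself:

```python
import numpy as np

rng = np.random.default_rng(0)                    # fixed seed, for reproducibility only
N = 5000
f = rng.integers(0, 2, size=N + 1).astype(float)  # f[n] = 1 if n lies in the random set A
f[0] = 0.0                                        # only n = 1,...,N are eligible
# ((f*f)*f)[N] counts ordered triples (a,b,c) in A^3 with a+b+c = N
reps = np.convolve(np.convolve(f, f), f)[N]
print(reps, N * N / 16)
```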

In the case when {f} is something like the truncated von Mangoldt function {\Lambda(n) 1_{n \leq N}}, the quantity {\sum_n |f(n)|^2} is of size {O( N \log N)} rather than {O( N )}. This costs us a logarithmic factor in the above analysis; however, we can still conclude that we have the approximation (4) whenever {g} is another sequence with {\sum_n |g(n)|^2 \ll N \log N} such that one has the improved Fourier approximation

\displaystyle  \sum_n f(n) e(n\theta) = \sum_n g(n) e(n\theta) + o(\frac{N}{\log N}) \ \ \ \ \ (5)

uniformly in {\theta}. (Later on we will obtain a “log-free” version of this implication in which one does not need to gain a factor of {\frac{1}{\log N}} in the error term.)

This suggests a strategy for proving Vinogradov’s theorem: find an approximant {g} to some suitable truncation {f} of the von Mangoldt function (e.g. {f(n) = \Lambda(n) 1_{n \leq N}} or {f(n) = \Lambda(n) \eta(n/N)}) which obeys the Fourier approximation property (5), and such that the expression {\sum_{a+b+c=N} g(a) g(b) g(c)} is easily computable. It turns out that there are a number of good options for such an approximant {g}. One of the quickest ways to obtain such an approximation (which is used in Chapter 19 of Iwaniec and Kowalski) is to start with the standard identity {\Lambda = -\mu L * 1}, that is to say

\displaystyle  \Lambda(n) = - \sum_{d|n} \mu(d) \log d,

and obtain an approximation by truncating {d} to be less than some threshold {R} (which, in practice, would be a small power of {N}):

\displaystyle  \Lambda(n) \approx - \sum_{d \leq R: d|n} \mu(d) \log d. \ \ \ \ \ (6)

Thus, for instance, if {f(n) = \Lambda(n) 1_{n \leq N}}, the approximant {g} would be taken to be

\displaystyle  g(n) := - \sum_{d \leq R: d|n} \mu(d) \log d 1_{n \leq N}.

One could also use the slightly smoother approximation

\displaystyle  \Lambda(n) \approx \sum_{d \leq R: d|n} \mu(d) \log \frac{R}{d} \ \ \ \ \ (7)

in which case we would take

\displaystyle  g(n) := \sum_{d \leq R: d|n} \mu(d) \log \frac{R}{d} 1_{n \leq N}.
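
(As a rough numerical illustration — not needed for the rigorous analysis below — the following Python sketch compares the partial sums {\sum_{n \leq N} \Lambda(n)} and {\sum_{n \leq N} g(n)} for the two approximants (6), (7); with the arbitrary choices {N = 20000} and {R=100}, all three quantities come out of roughly comparable size {\approx N}, with the agreement improving as {R} and {N} grow.)

```python
import math

def mobius(n):
    # naive Moebius function; adequate for the small moduli d <= R used here
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def von_mangoldt(n):
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)  # n is prime

N, R = 20000, 100
mob = [0] + [mobius(d) for d in range(1, R + 1)]

def g6(n):  # truncated divisor sum (6)
    return -sum(mob[d] * math.log(d) for d in range(1, R + 1) if n % d == 0)

def g7(n):  # smoothed variant (7)
    return sum(mob[d] * math.log(R / d) for d in range(1, R + 1) if n % d == 0)

print(sum(von_mangoldt(n) for n in range(1, N + 1)))  # ~ N, by the prime number theorem
print(sum(g6(n) for n in range(1, N + 1)))
print(sum(g7(n) for n in range(1, N + 1)))
```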

The function {g} is somewhat similar to the continuous Selberg sieve weights studied in Notes 4, with the main difference being that we did not square the divisor sum as we will not need to take {g} to be non-negative. As long as {R} is not too large, one can use some sieve-like computations to compute expressions like {\sum_{a+b+c=N} g(a)g(b)g(c)} quite accurately. The approximation (5) can be justified by using a nice estimate of Davenport that exemplifies the Möbius pseudorandomness heuristic from Supplement 4:

Theorem 8 (Davenport’s estimate) For any {A>0} and {x \geq 2}, we have

\displaystyle  \sum_{n \leq x} \mu(n) e(\theta n) \ll_A x \log^{-A} x

uniformly for all {\theta \in {\bf R}/{\bf Z}}. The implied constants are ineffective.
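
(Here is a quick, purely illustrative numerical look at this estimate; it proves nothing, but already at the modest and arbitrarily chosen height {x = 2 \times 10^5}, the normalised sums {|\sum_{n \leq x} \mu(n) e(\theta n)|/x} come out small — roughly of size {10^{-2}} or below — both at rational {\theta} and at a “generic” irrational {\theta}.)

```python
import cmath

x = 200_000
# Moebius function up to x via a linear sieve
mu = [1] * (x + 1)
is_comp = [False] * (x + 1)
primes = []
for n in range(2, x + 1):
    if not is_comp[n]:
        primes.append(n)
        mu[n] = -1
    for p in primes:
        if n * p > x:
            break
        is_comp[n * p] = True
        if n % p == 0:
            mu[n * p] = 0
            break
        mu[n * p] = -mu[n]

for theta in (0.0, 1 / 7, 0.5, (5 ** 0.5 - 1) / 2):   # two rationals and a quadratic irrational
    S = sum(mu[n] * cmath.exp(2j * cmath.pi * n * theta) for n in range(1, x + 1))
    print(round(theta, 6), abs(S) / x)
```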

This estimate will be proven by splitting into two cases. In the “major arc” case when {\theta} is close to a rational {a/q} with {q} small (of size {O(\log^{O(1)} x)} or so), this estimate will be a consequence of the Siegel-Walfisz theorem (from Notes 2); it is the application of this theorem that is responsible for the ineffective constants. In the remaining “minor arc” case, one proceeds by using a combinatorial identity (such as Vaughan’s identity) to express the sum {\sum_{n \leq x} \mu(n) e(\theta n)} in terms of bilinear sums of the form {\sum_n \sum_m a_n b_m e(\theta nm)}, and then using the Cauchy-Schwarz inequality and the minor arc nature of {\theta} to obtain a gain in this case. This will all be done below the fold. We will also use (a rigorous version of) the approximation (6) (or (7)) to establish Vinogradov’s theorem.

A somewhat different looking approximation for the von Mangoldt function that also turns out to be quite useful is

\displaystyle  \Lambda(n) \approx \sum_{q \leq Q} \sum_{a \in ({\bf Z}/q{\bf Z})^\times} \frac{\mu(q)}{\phi(q)} e( \frac{an}{q} ) \ \ \ \ \ (8)

for some {Q} that is not too large compared to {N}. The methods used to establish Theorem 8 can also establish a Fourier approximation that makes (8) precise, and which can yield an alternate proof of Vinogradov’s theorem; this will be done below the fold.

The approximation (8) can be written in a way that makes it more similar to (7):

Exercise 9 Show that the right-hand side of (8) can be rewritten as

\displaystyle  \sum_{d \leq Q: d|n} \mu(d) \rho_d

where

\displaystyle  \rho_d := \frac{d}{\phi(d)} \sum_{m \leq Q/d: (m,d)=1} \frac{\mu^2(m)}{\phi(m)}.

Then, show the inequalities

\displaystyle  \sum_{m \leq Q/d} \frac{\mu^2(m)}{\phi(m)} \leq \rho_d \leq \sum_{m \leq Q} \frac{\mu^2(m)}{\phi(m)}

and conclude that

\displaystyle  \log \frac{Q}{d} - O(1) \leq \rho_d \leq \log Q + O(1).

(Hint: for the latter estimate, use Theorem 27 of Notes 1.)
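
(For what it is worth, the coefficients {\rho_d} are easy to tabulate numerically; the following sketch, with the arbitrary choice {Q=1000}, simply displays a few values of {\rho_d} alongside {\log \frac{Q}{d}} and {\log Q}, illustrating the two-sided bound of the exercise.)

```python
import math

def mobius(n):
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def phi(n):
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)

Q = 1000
for d in (1, 2, 6, 30):
    rho = d / phi(d) * sum(mobius(m) ** 2 / phi(m)
                           for m in range(1, Q // d + 1) if math.gcd(m, d) == 1)
    print(d, round(rho, 3), round(math.log(Q / d), 3), round(math.log(Q), 3))
```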

The coefficients {\rho_d} in the above exercise are quite similar to optimised Selberg sieve coefficients (see Section 2 of Notes 4).

Another approximation to {\Lambda}, related to the modified Cramér random model (see Model 10 of Supplement 4) is

\displaystyle  \Lambda(n) \approx \frac{W}{\phi(W)} 1_{(n,W)=1} \ \ \ \ \ (9)

where {W := \prod_{p \leq w} p} and {w} is a slowly growing function of {N} (e.g. {w = \log\log N}); a closely related approximation is

\displaystyle  \frac{\phi(W)}{W} \Lambda(Wn+b) \approx 1 \ \ \ \ \ (10)

for {W,w} as above and {1 \leq b \leq W} coprime to {W}. These approximations (closely related to a device known as the “{W}-trick”) are not as quantitatively accurate as the previous approximations, but can still suffice to establish Vinogradov’s theorem, and also to count many other linear patterns in the primes or subsets of the primes (particularly if one injects some additional tools from additive combinatorics, and specifically the inverse conjecture for the Gowers uniformity norms); see this paper of Ben Green and myself for more discussion (and this more recent paper of Shao for an analysis of this approach in the context of Vinogradov-type theorems). The following exercise expresses the approximation (9) in a form similar to the previous approximation (8):

Exercise 10 With {W} as above, show that

\displaystyle  \frac{W}{\phi(W)} 1_{(n,W)=1} = \sum_{q|W} \sum_{a \in ({\bf Z}/q{\bf Z})^\times} \frac{\mu(q)}{\phi(q)} e( \frac{an}{q} )

for all natural numbers {n}.
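
(This identity is exact, so it can be checked mechanically; the short sketch below does so for the arbitrary small choice {w = 5}, {W = 30}, summing the right-hand side directly as complex exponentials.)

```python
import cmath
from math import gcd

def mobius(n):
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def phi(q):
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

W = 2 * 3 * 5  # i.e. w = 5
for n in range(1, 31):
    lhs = W / phi(W) if gcd(n, W) == 1 else 0.0
    rhs = sum(mobius(q) / phi(q) * cmath.exp(2j * cmath.pi * a * n / q)
              for q in range(1, W + 1) if W % q == 0
              for a in range(1, q + 1) if gcd(a, q) == 1)
    assert abs(lhs - rhs) < 1e-9
print("identity verified for n = 1,...,30")
```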


A major topic of interest of analytic number theory is the asymptotic behaviour of the Riemann zeta function {\zeta} in the critical strip {\{ \sigma+it: 0 < \sigma < 1; t \in {\bf R} \}} in the limit {t \rightarrow +\infty}. For the purposes of this set of notes, it is a little simpler technically to work with the log-magnitude {\log |\zeta|: {\bf C} \rightarrow [-\infty,+\infty]} of the zeta function. (In principle, one can reconstruct a branch of {\log \zeta}, and hence {\zeta} itself, from {\log |\zeta|} using the Cauchy-Riemann equations, or tools such as the Borel-Carathéodory theorem, see Exercise 40 of Supplement 2.)

One has the classical estimate

\displaystyle  \zeta(\sigma+it) = O( t^{O(1)} )

when {\sigma = O(1)} and {t \geq 10} (say), so that

\displaystyle  \log |\zeta(\sigma+it)| \leq O( \log t ). \ \ \ \ \ (1)

(See e.g. Exercise 37 from Supplement 3.) In view of this, let us define the normalised log-magnitudes {F_T: {\bf C} \rightarrow [-\infty,+\infty]} for any {T \geq 10} by the formula

\displaystyle  F_T( \sigma + it ) := \frac{1}{\log T} \log |\zeta( \sigma + i(T + t) )|;

informally, this is a normalised window into {\log |\zeta|} near {iT}. One can rephrase several assertions about the zeta function in terms of the asymptotic behaviour of {F_T}. For instance:

  • (i) The bound (1) implies that {F_T} is asymptotically locally bounded from above in the limit {T \rightarrow \infty}, thus for any compact set {K \subset {\bf C}} we have {F_T(\sigma+it) \leq O_K(1)} for {\sigma+it \in K} and {T} sufficiently large. In fact the implied constant in {K} only depends on the projection of {K} to the real axis.
  • (ii) For {\sigma > 1}, we have the bounds

    \displaystyle  |\zeta(\sigma+it)|, \frac{1}{|\zeta(\sigma+it)|} \leq \zeta(\sigma)

    which implies that {F_T} converges locally uniformly as {T \rightarrow +\infty} to zero in the region {\{ \sigma+it: \sigma > 1, t \in {\bf R} \}}.

  • (iii) The functional equation, together with the symmetry {\zeta(\sigma-it) = \overline{\zeta(\sigma+it)}}, implies that

    \displaystyle  |\zeta(\sigma+it)| = 2^\sigma \pi^{\sigma-1} |\sin \frac{\pi(\sigma+it)}{2}| |\Gamma(1-\sigma-it)| |\zeta(1-\sigma+it)|

    which by Exercise 17 of Supplement 3 shows that

    \displaystyle  F_T( 1-\sigma+it ) = \sigma - \frac{1}{2} + F_T(\sigma+it) + o(1)

    as {T \rightarrow \infty}, locally uniformly in {\sigma+it}. In particular, when combined with the previous item, we see that {F_T(\sigma+it)} converges locally uniformly as {T \rightarrow +\infty} to {\frac{1}{2}-\sigma} in the region {\{ \sigma+it: \sigma < 0, t \in {\bf R}\}}.

  • (iv) From Jensen’s formula (Theorem 16 of Supplement 2) we see that {\log|\zeta|} is a subharmonic function, and thus {F_T} is subharmonic as well. In particular we have the mean value inequality

    \displaystyle  F_T( z_0 ) \leq \frac{1}{\pi r^2} \int_{z: |z-z_0| \leq r} F_T(z)

    for any disk {\{ z: |z-z_0| \leq r \}}, where the integral is with respect to area measure. From this and (ii) we conclude that

    \displaystyle  \int_{z: |z-z_0| \leq r} F_T(z) \geq -O_{z_0,r}(1)

    for any disk with {\hbox{Re}(z_0)>1} and sufficiently large {T}; combining this with (i) we conclude that {F_T} is asymptotically locally bounded in {L^1} in the limit {T \rightarrow \infty}, thus for any compact set {K \subset {\bf C}} we have {\int_K |F_T| \ll_K 1} for sufficiently large {T}.
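
(As a crude numerical illustration of the limits in (ii) and (iii) — crude because the convergence of {F_T} is only logarithmic in {T}, so the {o(1)} errors remain quite visible at any computationally accessible height — one can evaluate {F_T} directly with the mpmath library; the height {T = 10^4} below is an arbitrary choice.)

```python
import mpmath as mp

def F(T, sigma, t=0.0):
    # normalised log-magnitude of zeta in a window at height T
    return float(mp.log(abs(mp.zeta(mp.mpc(sigma, T + t)))) / mp.log(T))

T = 10 ** 4
for sigma in (2.0, 1.5, -0.5, -1.0):
    # compare with the predicted limiting value max(0, 1/2 - sigma)
    print(sigma, round(F(T, sigma), 3), max(0.0, 0.5 - sigma))
```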

From (i), (iv) and the usual Arzela-Ascoli diagonalisation argument, we see that the {F_T} are asymptotically compact in the topology of distributions: given any sequence {T_n} tending to {+\infty}, one can extract a subsequence such that the {F_{T_n}} converge in the sense of distributions. Let us then define a normalised limit profile of {\log|\zeta|} to be a distributional limit {F} of a sequence of {F_{T_n}}; they are analogous to limiting profiles in PDE, and also to the more recent introduction of “graphons” in the theory of graph limits. Then by taking limits in (i)-(iv) we can say a lot about such normalised limit profiles {F} (up to almost everywhere equivalence, which is an issue we will address shortly):

  • (i) {F} is bounded from above in the critical strip {\{ \sigma+it: 0 \leq \sigma \leq 1 \}}.
  • (ii) {F} vanishes on {\{ \sigma+it: \sigma \geq 1\}}.
  • (iii) We have the functional equation {F(1-\sigma+it) = \sigma - \frac{1}{2} + F(\sigma+it)} for all {\sigma+it}. In particular {F(\sigma+it) = \frac{1}{2}-\sigma} for {\sigma<0}.
  • (iv) {F} is subharmonic.

Unfortunately, (i)-(iv) fail to characterise {F} completely. For instance, one could have {F(\sigma+it) = f(\sigma)} for any convex function {f(\sigma)} of {\sigma} that equals {0} for {\sigma \geq 1}, {\frac{1}{2}-\sigma} for {\sigma \leq 0}, and obeys the functional equation {f(1-\sigma) = \sigma - \frac{1}{2}+f(\sigma)}, and this would be consistent with (i)-(iv). One can also perturb such examples in a region where {f} is strictly convex to create further examples of functions obeying (i)-(iv). Note from subharmonicity that the function {\sigma \mapsto \sup_t F(\sigma+it)} is always going to be convex in {\sigma}; this can be seen as a limiting case of the Hadamard three-lines theorem (Exercise 41 of Supplement 2).

We pause to address one minor technicality. We have defined {F} as a distributional limit, and as such it is a priori only defined up to almost everywhere equivalence. However, due to subharmonicity, there is a unique upper semi-continuous representative of {F} (taking values in {[-\infty,+\infty)}), defined by the formula

\displaystyle  F(z_0) = \lim_{r \rightarrow 0^+} \frac{1}{\pi r^2} \int_{B(z_0,r)} F(z)\ dz

for any {z_0 \in {\bf C}} (note from subharmonicity that the expression in the limit is monotone nonincreasing as {r \rightarrow 0}, and is also continuous in {z_0}). We will now view this upper semi-continuous representative of {F} as the canonical representative of {F}, so that {F} is now defined everywhere, rather than up to almost everywhere equivalence.

By a classical theorem of Riesz, a function {F} is subharmonic if and only if the distribution {\Delta F} is a non-negative measure, where {\Delta := \frac{\partial^2}{\partial \sigma^2} + \frac{\partial^2}{\partial t^2}} is the Laplacian in the {\sigma,t} coordinates. Jensen’s formula (or Green’s theorem), when interpreted distributionally, tells us that

\displaystyle  \Delta \log |\zeta| = 2\pi \sum_\rho \delta_\rho

away from the real axis, where {\rho} ranges over the non-trivial zeroes of {\zeta}. Thus, if {F} is a normalised limit profile for {\log |\zeta|} that is the distributional limit of {F_{T_n}}, then we have

\displaystyle  \Delta F = \nu

where {\nu} is a non-negative measure which is the limit in the vague topology of the measures

\displaystyle  \nu_{T_n} := \frac{2\pi}{\log T_n} \sum_\rho \delta_{\rho - iT_n}.

Thus {\nu} is a normalised limit profile of the zeroes of the Riemann zeta function.

Using this machinery, we can recover many classical theorems about the Riemann zeta function by “soft” arguments that do not require extensive calculation. Here are some examples:

Theorem 1 The Riemann hypothesis implies the Lindelöf hypothesis.

Proof: It suffices to show that any limiting profile {F} (arising as the limit of some {F_{T_n}}) vanishes on the critical line {\{1/2+it: t \in {\bf R}\}}. But if the Riemann hypothesis holds, then the measures {\nu_{T_n}} are supported on the critical line {\{1/2+it: t \in {\bf R}\}}, so the normalised limit profile {\nu} is also supported on this line. This implies that {F} is harmonic outside of the critical line. By (ii) and unique continuation for harmonic functions, this implies that {F} vanishes on the half-space {\{ \sigma+it: \sigma \geq \frac{1}{2} \}} (and equals {\frac{1}{2}-\sigma} on the complementary half-space, by (iii)), giving the claim. \Box

In fact, we have the following sharper statement:

Theorem 2 (Backlund) The Lindelöf hypothesis is equivalent to the assertion that for any fixed {\sigma_0 > \frac{1}{2}}, the number of zeroes in the region {\{ \sigma+it: \sigma > \sigma_0, T \leq t \leq T+1 \}} is {o(\log T)} as {T \rightarrow \infty}.

Proof: If the latter claim holds, then for any {T_n \rightarrow \infty}, the measures {\nu_{T_n}} assign a mass of {o(1)} to any region of the form {\{ \sigma+it: \sigma > \sigma_0; t_0 \leq t \leq t_0+1 \}} as {n \rightarrow \infty} for any fixed {\sigma_0>\frac{1}{2}} and {t_0 \in {\bf R}}. Thus the normalised limiting profile measure {\nu} is supported on the critical line, and we can repeat the previous argument.

Conversely, suppose the latter claim fails; then we can find a sequence {T_n} and {\sigma_0>\frac{1}{2}} such that {\nu_{T_n}} assigns a mass of {\gg 1} to the region {\{ \sigma+it: \sigma > \sigma_0; 0\leq t \leq 1 \}}. Extracting a normalised limiting profile, we conclude that the normalised limiting profile measure {\nu} is non-trivial somewhere to the right of the critical line, so the associated subharmonic function {F} is not harmonic everywhere to the right of the critical line. From the maximum principle and (ii) this implies that {F} has to be positive somewhere on the critical line, but this contradicts the Lindelöf hypothesis. (One has to take a bit of care in the last step since {F_{T_n}} only converges to {F} in the sense of distributions, but it turns out that the subharmonicity of all the functions involved gives enough regularity to justify the argument; we omit the details here.) \Box

Theorem 3 (Littlewood) Assume the Lindelöf hypothesis. Then for any fixed {\alpha>0}, the number of zeroes in the region {\{ \sigma+it: T \leq t \leq T+\alpha \}} is {(\frac{\alpha}{2\pi}+o(1)) \log T} as {T \rightarrow +\infty}.

Proof: By the previous arguments, the only possible normalised limiting profile for {\log |\zeta|} is {\max( 0, \frac{1}{2}-\sigma )}. Taking distributional Laplacians, we see that the only possible normalised limiting profile for the zeroes is Lebesgue measure on the critical line. Thus, {\nu_T( \{\sigma+it: 0 \leq t \leq \alpha \} )} can only converge to {\alpha} as {T \rightarrow +\infty}; since {\nu_T} assigns a mass of {\frac{2\pi}{\log T}} to each zero, the claim follows. \Box

Even without the Lindelöf hypothesis, we have the following result:

Theorem 4 (Titchmarsh) For any fixed {\alpha>0}, there are {\gg_\alpha \log T} zeroes in the region {\{ \sigma+it: T \leq t \leq T+\alpha \}} for sufficiently large {T}.

Among other things, this theorem recovers a classical result of Littlewood that the gaps between the imaginary parts of the zeroes goes to zero, even without assuming unproven conjectures such as the Riemann or Lindelöf hypotheses.

Proof: Suppose for contradiction that this were not the case; then we can find {\alpha > 0} and a sequence {T_n \rightarrow \infty} such that {\{ \sigma+it: T_n \leq t \leq T_n+\alpha \}} contains {o(\log T_n)} zeroes. Passing to a subsequence to extract a limit profile, we conclude that the normalised limit profile measure {\nu} assigns no mass to the horizontal strip {\{ \sigma+it: 0 \leq t \leq\alpha \}}. Thus the associated subharmonic function {F} is actually harmonic on this strip. But by (ii) and unique continuation this forces {F} to vanish on this strip, contradicting the functional equation (iii). \Box

Exercise 5 Use limiting profiles to obtain the matching upper bound of {O_\alpha(\log T)} for the number of zeroes in {\{ \sigma+it: T \leq t \leq T+\alpha \}} for sufficiently large {T}.

Remark 6 One can remove the need to take limiting profiles in the above arguments if one can come up with quantitative (or “hard”) substitutes for qualitative (or “soft”) results such as the unique continuation property for harmonic functions. This would also allow one to replace the qualitative decay rates {o(1)} with more quantitative decay rates such as {1/\log \log T} or {1/\log\log\log T}. Indeed, the classical proofs of the above theorems come with quantitative bounds that are typically of this form (see e.g. the text of Titchmarsh for details).

Exercise 7 Let {S(T)} denote the quantity {S(T) := \frac{1}{\pi} \hbox{arg} \zeta(\frac{1}{2}+iT)}, where the branch of the argument is taken by using a line segment connecting {\frac{1}{2}+iT} to (say) {2+iT}, and then to {2}. If we have a sequence {T_n \rightarrow \infty} producing normalised limit profiles {F, \nu} for {\log|\zeta|} and the zeroes respectively, show that {t \mapsto \frac{1}{\log T_n} S(T_n + t)} converges in the sense of distributions to the function {t \mapsto \frac{1}{\pi} \int_{1/2}^1 \frac{\partial F}{\partial t}(\sigma+it)\ d\sigma}, or equivalently

\displaystyle  t \mapsto \frac{1}{2\pi} \frac{\partial}{\partial t} \int_0^1 F(\sigma+it)\ d\sigma.

Conclude in particular that if the Lindelöf hypothesis holds, then {S(T) = o(\log T)} as {T \rightarrow \infty}.

A little bit more about the normalised limit profiles {F} are known unconditionally, beyond (i)-(iv). For instance, from Exercise 3 of Notes 5 we have {\zeta(1/2 + it ) = O( t^{1/6+o(1)} )} as {t \rightarrow +\infty}, which implies that any normalised limit profile {F} for {\log|\zeta|} is bounded by {1/6} on the critical line, beating the bound of {1/4} coming from convexity and (ii), (iii), and then convexity can be used to further bound {F} away from the critical line also. Some further small improvements of this type are known (coming from various methods for estimating exponential sums), though they fall well short of determining {F} completely at our current level of understanding. Of course, given that we believe the Riemann hypothesis (and hence the Lindelöf hypothesis) to be true, the only actual limit profile that should exist is {\max(0,\frac{1}{2}-\sigma)} (in fact this assertion is equivalent to the Lindelöf hypothesis, by the arguments above).

Better control on limiting profiles is available if we do not insist on controlling {\zeta} for all values of the height parameter {T}, but only for most such values, thanks to the existence of several mean value theorems for the zeta function, as discussed in Notes 6; we discuss this below the fold.


In analytic number theory, it is a well-known phenomenon that for many arithmetic functions {f: {\bf N} \rightarrow {\bf C}} of interest in number theory, it is significantly easier to estimate logarithmic sums such as

\displaystyle  \sum_{n \leq x} \frac{f(n)}{n}

than it is to estimate summatory functions such as

\displaystyle  \sum_{n \leq x} f(n).

(Here we are normalising {f} to be roughly constant in size, e.g. {f(n) = O( n^{o(1)} )} as {n \rightarrow \infty}.) For instance, when {f} is the von Mangoldt function {\Lambda}, the logarithmic sums {\sum_{n \leq x} \frac{\Lambda(n)}{n}} can be adequately estimated by Mertens’ theorem, which can be easily proven by elementary means (see Notes 1); but a satisfactory estimate on the summatory function {\sum_{n \leq x} \Lambda(n)} requires the prime number theorem, which is substantially harder to prove (see Notes 2). (From a complex-analytic or Fourier-analytic viewpoint, the problem is that the logarithmic sums {\sum_{n \leq x} \frac{f(n)}{n}} can usually be controlled just from knowledge of the Dirichlet series {\sum_n \frac{f(n)}{n^s}} for {s} near {1}; but the summatory functions require control of the Dirichlet series {\sum_n \frac{f(n)}{n^s}} for {s} on or near a large portion of the line {\{ 1+it: t \in {\bf R} \}}. See Notes 2 for further discussion.)

Viewed conversely, whenever one has a difficult estimate on a summatory function such as {\sum_{n \leq x} f(n)}, one can look to see if there is a “cheaper” version of that estimate that only controls the logarithmic sums {\sum_{n \leq x} \frac{f(n)}{n}}, which is easier to prove than the original, more “expensive” estimate. In this post, we shall do this for two theorems, a classical theorem of Halasz on mean values of multiplicative functions on long intervals, and a much more recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. The two are related; the former theorem is an ingredient in the latter (though in the special case of the Matomaki-Radziwiłł theorem considered here, we will not need Halasz’s theorem directly, instead using a key tool in the proof of that theorem).

We begin with Halasz’s theorem. Here is a version of this theorem, due to Montgomery and to Tenenbaum:

Theorem 1 (Halasz-Montgomery-Tenenbaum) Let {f: {\bf N} \rightarrow {\bf C}} be a multiplicative function with {|f(n)| \leq 1} for all {n}. Let {x \geq 3} and {T \geq 1}, and set

\displaystyle  M := \min_{|t| \leq T} \sum_{p \leq x} \frac{1 - \hbox{Re}( f(p) p^{-it} )}{p}.

Then one has

\displaystyle  \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M} + \frac{1}{\sqrt{T}}.

Informally, this theorem asserts that {\sum_{n \leq x} f(n)} is small compared with {x}, unless {f} “pretends” to be like the character {p \mapsto p^{it}} on primes for some small {t}. (This is the starting point of the “pretentious” approach of Granville and Soundararajan to analytic number theory, as developed for instance here.) We now give a “cheap” version of this theorem which is significantly weaker (because it settles for controlling logarithmic sums rather than summatory functions, requires {f} to be completely multiplicative instead of multiplicative, requires a strong bound on the analogue of the quantity {M}, and only gives qualitative decay rather than quantitative estimates), but easier to prove:

Theorem 2 (Cheap Halasz) Let {x} be an asymptotic parameter going to infinity. Let {f: {\bf N} \rightarrow {\bf C}} be a completely multiplicative function (possibly depending on {x}) with {|f(n)| \leq 1} for all {n}, such that

\displaystyle  \sum_{p \leq x} \frac{1 - \hbox{Re}( f(p) )}{p} \gg \log\log x. \ \ \ \ \ (1)

Then

\displaystyle  \frac{1}{\log x} \sum_{n \leq x} \frac{f(n)}{n} = o(1). \ \ \ \ \ (2)

Note that now that we are content with estimating logarithmic sums, we no longer need to preclude the possibility that {f(p)} pretends to be like {p^{it}}; see Exercise 11 of Notes 1 for a related observation.

To prove this theorem, we first need a special case of the Turan-Kubilius inequality.

Lemma 3 (Turan-Kubilius) Let {x} be a parameter going to infinity, and let {1 < P < x} be a quantity depending on {x} such that {P = x^{o(1)}} and {P \rightarrow \infty} as {x \rightarrow \infty}. Then

\displaystyle  \sum_{n \leq x} \frac{ | \frac{1}{\log \log P} \sum_{p \leq P: p|n} 1 - 1 |}{n} = o( \log x ).

Informally, this lemma is asserting that

\displaystyle  \sum_{p \leq P: p|n} 1 \approx \log \log P

for most large numbers {n}. Another way of writing this heuristically is in terms of Dirichlet convolutions:

\displaystyle  1 \approx 1 * \frac{1}{\log\log P} 1_{{\mathcal P} \cap [1,P]}.

This type of estimate was previously discussed as a tool to establish a criterion of Katai and Bourgain-Sarnak-Ziegler for Möbius orthogonality estimates in this previous blog post. See also Section 5 of Notes 1 for some similar computations.

Proof: By Cauchy-Schwarz it suffices to show that

\displaystyle  \sum_{n \leq x} \frac{ | \frac{1}{\log \log P} \sum_{p \leq P: p|n} 1 - 1 |^2}{n} = o( \log x ).

Expanding out the square, it suffices to show that

\displaystyle  \sum_{n \leq x} \frac{ (\frac{1}{\log \log P} \sum_{p \leq P: p|n} 1)^j}{n} = \log x + o( \log x )

for {j=0,1,2}.

We just show the {j=2} case, as the {j=0,1} cases are similar (and easier). We rearrange the left-hand side as

\displaystyle  \frac{1}{(\log\log P)^2} \sum_{p_1, p_2 \leq P} \sum_{n \leq x: p_1,p_2|n} \frac{1}{n}.

We can estimate the inner sum as {(1+o(1)) \frac{1}{[p_1,p_2]} \log x}. But a routine application of Mertens’ theorem (handling the diagonal case when {p_1=p_2} separately) shows that

\displaystyle  \sum_{p_1, p_2 \leq P} \frac{1}{[p_1,p_2]} = (1+o(1)) (\log\log P)^2

and the claim follows. \Box

Remark 4 As an alternative to the Turan-Kubilius inequality, one can use the Ramaré identity

\displaystyle  \sum_{p \leq P: p|n} \frac{1}{\# \{ p' \leq P: p'|n\}} + 1_{(p,n)=1 \hbox{ for all } p \leq P} = 1

(see e.g. Section 17.3 of Friedlander-Iwaniec). This identity turns out to give superior quantitative results than the Turan-Kubilius inequality in applications; see the paper of Matomaki and Radziwiłł for an instance of this.

We now prove Theorem 2. Let {Q} denote the left-hand side of (2); by the triangle inequality we have {Q=O(1)}. By Lemma 3 (for some {P = x^{o(1)}} to be chosen later) and the triangle inequality we have

\displaystyle  \sum_{n \leq x} \frac{\frac{1}{\log \log P} \sum_{p \leq P: p|n} f(n)}{n} = Q \log x + o( \log x ).

We rearrange the left-hand side as

\displaystyle  \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p} \sum_{m \leq x/p} \frac{f(m)}{m}.

We now replace the constraint {m \leq x/p} by {m \leq x}. The error incurred in doing so is

\displaystyle  O( \frac{1}{\log\log P} \sum_{p \leq P} \frac{1}{p} \sum_{x/P \leq m \leq x} \frac{1}{m} )

which by Mertens’ theorem is {O(\log P) = o( \log x )}. Thus we have

\displaystyle  \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p} \sum_{m \leq x} \frac{f(m)}{m} = Q \log x + o( \log x ).

But by definition of {Q}, we have {\sum_{m \leq x} \frac{f(m)}{m} = Q \log x}, thus

\displaystyle  [1 - \frac{1}{\log\log P} \sum_{p \leq P} \frac{f(p)}{p}] Q = o(1). \ \ \ \ \ (3)

From Mertens’ theorem, the expression in brackets can be rewritten as

\displaystyle  \frac{1}{\log\log P} \sum_{p \leq P} \frac{1 - f(p)}{p} + o(1)

and so the real part of this expression is

\displaystyle  \frac{1}{\log\log P} \sum_{p \leq P} \frac{1 - \hbox{Re} f(p)}{p} + o(1).

By (1), Mertens’ theorem and the hypothesis on {f} we have

\displaystyle  \sum_{p \leq x^\varepsilon} \frac{1 - \hbox{Re} f(p)}{p} \gg \log\log x - O_\varepsilon(1)

for any {\varepsilon > 0}. This implies that we can find {P = x^{o(1)}} going to infinity such that

\displaystyle  \sum_{p \leq P} \frac{1 - \hbox{Re} f(p)}{p} \gg (1-o(1))\log\log P

and thus the expression in brackets has real part {\gg 1-o(1)}. The claim follows.

The Turan-Kubilius argument is certainly not the most efficient way to estimate sums such as {\sum_{n \leq x} \frac{f(n)}{n}}. In the exercise below we give a significantly more accurate estimate that works when {f} is non-negative.

Exercise 5 (Granville-Koukoulopoulos-Matomaki)

  • (i) If {g} is a completely multiplicative function with {g(p) \in \{0,1\}} for all primes {p}, show that

    \displaystyle  (e^{-\gamma}-o(1)) \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1} \leq \sum_{n \leq x} \frac{g(n)}{n} \leq \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1}.

    as {x \rightarrow \infty}. (Hint: for the upper bound, expand out the Euler product. For the lower bound, show that {\sum_{n \leq x} \frac{g(n)}{n} \times \sum_{n \leq x} \frac{h(n)}{n} \ge \sum_{n \leq x} \frac{1}{n}}, where {h} is the completely multiplicative function with {h(p) = 1-g(p)} for all primes {p}.)

  • (ii) If {g} is multiplicative and takes values in {[0,1]}, show that

    \displaystyle  \sum_{n \leq x} \frac{g(n)}{n} \asymp \prod_{p \leq x} (1 - \frac{g(p)}{p})^{-1}

    \displaystyle  \asymp \exp( \sum_{p \leq x} \frac{g(p)}{p} )

    for all {x \geq 1}.
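
As a quick numerical illustration of part (i) of the above exercise (illustrative only; the cutoff {x = 10^6} and the test function are arbitrary choices), take {g} to be the completely multiplicative function with {g(2)=0} and {g(p)=1} for odd primes {p}, i.e. the indicator of the odd numbers; the ratio of the logarithmic sum to the Euler product should then lie between {e^{-\gamma} \approx 0.561} and {1} (and in fact slowly approaches {e^{-\gamma}}):

```python
import math

x = 10 ** 6
log_sum = sum(1.0 / n for n in range(1, x + 1, 2))   # sum of g(n)/n with g = 1_{odd}

# Euler product over the odd primes p <= x of (1 - 1/p)^{-1}
sieve = bytearray([1]) * (x + 1)
sieve[0:2] = b"\x00\x00"
for p in range(2, int(x ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p:x + 1:p] = bytearray(len(range(p * p, x + 1, p)))
euler_product = math.exp(sum(-math.log(1 - 1 / p) for p in range(3, x + 1) if sieve[p]))

print(log_sum / euler_product, math.exp(-0.57721566), 1.0)
```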

Now we turn to a very recent result of Matomaki and Radziwiłł on mean values of multiplicative functions in short intervals. For sake of illustration we specialise their results to the simpler case of the Liouville function {\lambda}, although their arguments actually work (with some additional effort) for arbitrary multiplicative functions of magnitude at most {1} that are real-valued (or more generally, stay far from complex characters {p \mapsto p^{it}}). Furthermore, we give a qualitative form of their estimates rather than a quantitative one:

Theorem 6 (Matomaki-Radziwiłł, special case) Let {X} be a parameter going to infinity, and let {2 \leq h \leq X} be a quantity going to infinity as {X \rightarrow \infty}. Then for all but {o(X)} of the integers {x \in [X,2X]}, one has

\displaystyle  \sum_{x \leq n \leq x+h} \lambda(n) = o( h ).

Equivalently, one has

\displaystyle  \sum_{X \leq x \leq 2X} |\sum_{x \leq n \leq x+h} \lambda(n)|^2 = o( h^2 X ). \ \ \ \ \ (4)

A simple sieving argument (see Exercise 18 of Supplement 4) shows that one can replace {\lambda} by the Möbius function {\mu} and obtain the same conclusion. See this recent note of Matomaki and Radziwiłł for a simple proof of their (quantitative) main theorem in this special case.
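
(The statement (4) is also easy to probe numerically, though of course this proves nothing: with the arbitrary choices {X = 10^5} and {h = 50} in the sketch below, the normalised variance already comes out far below the trivial bound of order {1}, roughly of the size {1/h} one would expect from “random” signs.)

```python
X, h = 10 ** 5, 50
N = 2 * X + h + 1

# Liouville function up to 2X + h via a smallest-prime-factor sieve
spf = [0] * N
for n in range(2, N):
    if spf[n] == 0:               # n is prime
        for m in range(n, N, n):
            if spf[m] == 0:
                spf[m] = n
lam = [1] * N
for n in range(2, N):
    lam[n] = -lam[n // spf[n]]

S = [0] * N                        # partial sums of lambda
for n in range(1, N):
    S[n] = S[n - 1] + lam[n]

total = sum((S[x + h] - S[x - 1]) ** 2 for x in range(X, 2 * X + 1))
print(total / (h * h * X))         # compare with the trivial bound, which is of order 1
```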

Of course, (4) improves upon the trivial bound of {O( h^2 X )}. Prior to this paper, such estimates were only known (using arguments similar to those in Section 3 of Notes 6) for {h \geq X^{1/6+\varepsilon}} unconditionally, or for {h \geq \log^A X} for some sufficiently large {A} if one assumed the Riemann hypothesis. This theorem also represents some progress towards Chowla’s conjecture (discussed in Supplement 4) that

\displaystyle  \sum_{n \leq x} \lambda(n+h_1) \dots \lambda(n+h_k) = o( x )

as {x \rightarrow \infty} for any fixed distinct {h_1,\dots,h_k}; indeed, it implies that this conjecture holds if one performs a small amount of averaging in the {h_1,\dots,h_k}.

Below the fold, we give a “cheap” version of the Matomaki-Radziwiłł argument. More precisely, we establish

Theorem 7 (Cheap Matomaki-Radziwiłł) Let {X} be a parameter going to infinity, and let {1 \leq T \leq X}. Then

\displaystyle  \int_X^{X^A} \left|\sum_{x \leq n \leq e^{1/T} x} \frac{\lambda(n)}{n}\right|^2\frac{dx}{x} = o\left( \frac{\log X}{T^2} \right), \ \ \ \ \ (5)

for any fixed {A>1}.

Note that (5) improves upon the trivial bound of {O( \frac{\log X}{T^2} )}. Again, one can replace {\lambda} with {\mu} if desired. Due to the cheapness of Theorem 7, the proof will require few ingredients; the deepest input is the improved zero-free region for the Riemann zeta function due to Vinogradov and Korobov. Other than that, the main tools are the Turan-Kubilius result established above, and some Fourier (or complex) analysis.


In the previous set of notes, we saw how zero-density theorems for the Riemann zeta function, when combined with the zero-free region of Vinogradov and Korobov, could be used to obtain prime number theorems in short intervals. It turns out that a more sophisticated version of this type of argument also works to obtain prime number theorems in arithmetic progressions, in particular establishing the celebrated theorem of Linnik:

Theorem 1 (Linnik’s theorem) Let {a\ (q)} be a primitive residue class. Then {a\ (q)} contains a prime {p} with {p \ll q^{O(1)}}.

In fact it is known that one can find a prime {p} with {p \ll q^{5}}, a result of Xylouris. For sake of comparison, recall from Exercise 65 of Notes 2 that the Siegel-Walfisz theorem gives this theorem with a bound of {p \ll \exp( q^{o(1)} )}, and from Exercise 48 of Notes 2 one can obtain a bound of the form {p \ll \phi(q)^2 \log^2 q} if one assumes the generalised Riemann hypothesis. The probabilistic random models from Supplement 4 suggest that one should in fact be able to take {p \ll q^{1+o(1)}}.

We will not aim to obtain the optimal exponents for Linnik’s theorem here, and follow the treatment in Chapter 18 of Iwaniec and Kowalski. We will in fact establish the following more quantitative result (a special case of a more powerful theorem of Gallagher), which splits into two cases, depending on whether there is an exceptional zero or not:

Theorem 2 (Quantitative Linnik theorem) Let {a\ (q)} be a primitive residue class for some {q \geq 2}. For any {x > 1}, let {\psi(x;q,a)} denote the quantity

\displaystyle  \psi(x;q,a) := \sum_{n \leq x: n=a\ (q)} \Lambda(n).

Assume that {x \geq q^C} for some sufficiently large {C}.

  • (i) (No exceptional zero) If all the real zeroes {\beta} of {L}-functions {L(\cdot,\chi)} of real characters {\chi} of modulus {q} are such that {1-\beta \gg \frac{1}{\log q}}, then

    \displaystyle  \psi(x;q,a) = \frac{x}{\phi(q)} ( 1 + O( \exp( - c \frac{\log x}{\log q} ) ) + O( \frac{\log^2 q}{q} ) )

    for some absolute constant {c>0}.

  • (ii) (Exceptional zero) If there is a zero {\beta} of an {L}-function {L(\cdot,\chi_1)} of a real character {\chi_1} of modulus {q} with {\beta = 1 - \frac{\varepsilon}{\log q}} for some sufficiently small {\varepsilon>0}, then

    \displaystyle  \psi(x;q,a) = \frac{x}{\phi(q)} ( 1 - \chi_1(a) \frac{x^{\beta-1}}{\beta} \ \ \ \ \ (1)

    \displaystyle + O( \exp( - c \frac{\log x}{\log q} \log \frac{1}{\varepsilon} ) )

    \displaystyle  + O( \frac{\log^2 q}{q} ) )

    for some absolute constant {c>0}.

The implied constants here are effective.

Note from the Landau-Page theorem (Exercise 54 from Notes 2) that at most one exceptional zero exists (if {\varepsilon} is small enough). A key point here is that the error term {O( \exp( - c \frac{\log x}{\log q} \log \frac{1}{\varepsilon} ) )} in the exceptional zero case is an improvement over the error term when no exceptional zero is present; this compensates for the potential reduction in the main term coming from the {\chi_1(a) \frac{x^{\beta-1}}{\beta}} term. The splitting into cases depending on whether an exceptional zero exists or not turns out to be an essential technique in many advanced results in analytic number theory (though presumably such a splitting will one day become unnecessary, once the possibility of exceptional zeroes is finally eliminated for good).

Exercise 3 Assuming Theorem 2, and assuming {x \geq q^C} for some sufficiently large absolute constant {C}, establish the lower bound

\displaystyle  \psi(x;a,q) \gg \frac{x}{\phi(q)}

when there is no exceptional zero, and

\displaystyle  \psi(x;a,q) \gg \varepsilon \frac{x}{\phi(q)}

when there is an exceptional zero {\beta = 1 - \frac{\varepsilon}{\log q}}. Conclude that Theorem 2 implies Theorem 1, regardless of whether an exceptional zero exists or not.

Remark 4 The Brun-Titchmarsh theorem (Exercise 33 from Notes 4), in the sharp form of Montgomery and Vaughan, gives that

\displaystyle  \pi(x; q, a) \leq 2 \frac{x}{\phi(q) \log (x/q)}

for any primitive residue class {a\ (q)} and any {x \geq q}. This is (barely) consistent with the estimate (1). Any lowering of the coefficient {2} in the Brun-Titchmarsh inequality (with reasonable error terms), in the regime when {x} is a large power of {q}, would then lead to at least some elimination of the exceptional zero case. However, this has not led to any progress on the Landau-Siegel zero problem (and may well be just a reformulation of that problem). (When {x} is a relatively small power of {q}, some improvements to Brun-Titchmarsh are possible that are not in contradiction with the presence of an exceptional zero; see this paper of Maynard for more discussion.)

Theorem 2 is deduced in turn from facts about the distribution of zeroes of {L}-functions. Recall from the truncated explicit formula (Exercise 45(iv) of Notes 2) with (say) {T := q^2} that

\displaystyle  \sum_{n \leq x} \Lambda(n) \chi(n) = - \sum_{\hbox{Re}(\rho) > 3/4; |\hbox{Im}(\rho)| \leq q^2; L(\rho,\chi)=0} \frac{x^\rho}{\rho} + O( \frac{x}{q^2} \log^2 q)

for any non-principal character {\chi} of modulus {q}, where we assume {x \geq q^C} for some large {C}; for the principal character one has the same formula with an additional term of {x} on the right-hand side (as is easily deduced from Theorem 21 of Notes 2). Using the Fourier inversion formula

\displaystyle  1_{n = a\ (q)} = \frac{1}{\phi(q)} \sum_{\chi\ (q)} \overline{\chi(a)} \chi(n)

(see Theorem 69 of Notes 1), we thus have

\displaystyle  \psi(x;a,q) = \frac{x}{\phi(q)} ( 1 - \sum_{\chi\ (q)} \overline{\chi(a)} \sum_{\hbox{Re}(\rho) > 3/4; |\hbox{Im}(\rho)| \leq q^2; L(\rho,\chi)=0} \frac{x^{\rho-1}}{\rho}

\displaystyle  + O( \frac{\log^2 q}{q} ) )

and so it suffices by the triangle inequality (bounding {1/\rho} very crudely by {O(1)}, as the contribution of the low-lying zeroes already turns out to be quite dominant) to show that

\displaystyle  \sum_{\chi\ (q)} \sum_{\sigma > 3/4; |t| \leq q^2; L(\sigma+it,\chi)=0} x^{\sigma-1} \ll \exp( - c \frac{\log x}{\log q} ) \ \ \ \ \ (2)

when no exceptional zero is present, and

\displaystyle  \sum_{\chi\ (q)} \sum_{\sigma > 3/4; |t| \leq q^2; L(\sigma+it,\chi)=0; \sigma+it \neq \beta} x^{\sigma-1} \ll \exp( - c \frac{\log x}{\log q} \log \frac{1}{\varepsilon} ) \ \ \ \ \ (3)

when an exceptional zero is present.

To handle the former case (2), one uses two facts about zeroes. The first is the classical zero-free region (Proposition 51 from Notes 2), which we reproduce in our context here:

Proposition 5 (Classical zero-free region) Let {q, T \geq 2}. Apart from a potential exceptional zero {\beta}, all zeroes {\sigma+it} of {L}-functions {L(\cdot,\chi)} with {\chi} of modulus {q} and {|t| \leq T} are such that

\displaystyle  \sigma \leq 1 - \frac{c}{\log qT}

for some absolute constant {c>0}.

Using this zero-free region, we have

\displaystyle  x^{\sigma-1} \ll \log x \int_{1/2}^{1-c/\log q} 1_{\alpha < \sigma} x^{\alpha-1}\ d\alpha

whenever {\sigma} contributes to the sum in (2), and so the left-hand side of (2) is bounded by

\displaystyle  \ll \log x \int_{1/2}^{1 - c/\log q} N( \alpha, q, q^2 ) x^{\alpha-1}\ d\alpha

where we recall that {N(\alpha,q,T)} is the number of zeroes {\sigma+it} of any {L}-function of a character {\chi} of modulus {q} with {\sigma \geq \alpha} and {0 \leq t \leq T} (here we use conjugation symmetry to make {t} non-negative, accepting a multiplicative factor of two).

In Exercise 25 of Notes 6, the grand density estimate

\displaystyle  N(\alpha,q,T) \ll (qT)^{4(1-\alpha)} \log^{O(1)}(qT) \ \ \ \ \ (4)

is proven. If one inserts this bound into the above expression, one obtains a bound for (2) which is of the form

\displaystyle  \ll (\log^{O(1)} q) \exp( - c \frac{\log x}{\log q} ).
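
(To see where this comes from, here is the short back-of-the-envelope computation, made concrete with the somewhat arbitrary assumption that {x \geq q^{24}}, so that {12 \log q \leq \frac{1}{2} \log x}. Substituting {u := 1-\alpha} and inserting (4) with {T = q^2} into the integral above, one obtains

\displaystyle  \ll \log^{O(1)} q \cdot \log x \int_{c/\log q}^{1/2} e^{u(12 \log q - \log x)}\ du \leq \log^{O(1)} q \cdot \log x \int_{c/\log q}^{\infty} e^{-u \log x/2}\ du

\displaystyle  \ll \log^{O(1)} q \cdot \exp( - \frac{c}{2} \frac{\log x}{\log q} ),

which is of the stated form after adjusting {c}.)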

Unfortunately this is off from what we need by a factor of {\log^{O(1)} q} (and would lead to a weak form of Linnik’s theorem in which {p} was bounded by {O( \exp( \log^{O(1)} q ) )} rather than by {q^{O(1)}}). In the analogous problem for prime number theorems in short intervals, we could use the Vinogradov-Korobov zero-free region to compensate for this loss, but that region does not help here for the contribution of the low-lying zeroes with {t = O(1)}, which as mentioned before give the dominant contribution. Fortunately, it is possible to remove this logarithmic loss from the zero-density side of things:

Theorem 6 (Log-free grand density estimate) For any {q, T > 1} and {1/2 \leq \alpha \leq 1}, one has

\displaystyle  N(\alpha,q,T) \ll (qT)^{O(1-\alpha)}.

The implied constants are effective.

We prove this estimate below the fold. The proof follows the methods of the previous section, but one inserts various sieve weights to restrict sums over natural numbers to essentially become sums over “almost primes”, as this turns out to remove the logarithmic losses. (More generally, the trick of restricting to almost primes by inserting suitable sieve weights is quite useful for avoiding any unnecessary losses of logarithmic factors in analytic number theory estimates.)

Exercise 7 Use Theorem 6 to complete the proof of (2).

Now we turn to the case when there is an exceptional zero (3). The argument used to prove (2) applies here also, but does not gain the factor of {\log \frac{1}{\varepsilon}} in the exponent. To achieve this, we need an additional tool, a version of the Deuring-Heilbronn repulsion phenomenon due to Linnik:

Theorem 8 (Deuring-Heilbronn repulsion phenomenon) Suppose {q \geq 2} is such that there is an exceptional zero {\beta = 1 - \frac{\varepsilon}{\log q}} with {\varepsilon} small. Then all other zeroes {\sigma+it} of {L}-functions of modulus {q} are such that

\displaystyle  \sigma \leq 1 - c \frac{\log \frac{1}{\varepsilon}}{\log(q(2+|t|))}.

In other words, the exceptional zero enlarges the classical zero-free region by a factor of {\log \frac{1}{\varepsilon}}. The implied constants are effective.

Exercise 9 Use Theorem 6 and Theorem 8 to complete the proof of (3), and thus Linnik’s theorem.

Exercise 10 Use Theorem 8 to give an alternate proof of (Tatuzawa’s version of) Siegel’s theorem (Theorem 62 of Notes 2). (Hint: if two characters have different moduli, then they can be made to have the same modulus by multiplying by suitable principal characters.)

Theorem 8 is proven by similar methods to that of Theorem 6, the basic idea being to insert a further weight of {1 * \chi_1} (in addition to the sieve weights), the point being that the exceptional zero causes this weight to be quite small on the average. There is a strengthening of Theorem 8 due to Bombieri that is along the lines of Theorem 6, obtaining the improvement

\displaystyle  N'(\alpha,q,T) \ll \varepsilon (1 + \frac{\log T}{\log q}) (qT)^{O(1-\alpha)} \ \ \ \ \ (5)

with effective implied constants for any {1/2 \leq \alpha \leq 1} and {T \geq 1} in the presence of an exceptional zero, where the prime in {N'(\alpha,q,T)} means that the exceptional zero {\beta} is omitted (thus {N'(\alpha,q,T) = N(\alpha,q,T)-1} if {\alpha \leq \beta}). Note that the upper bound on {N'(\alpha,q,T)} falls below one when {\alpha > 1 - c \frac{\log \frac{1}{\varepsilon}}{\log(qT)}} for a sufficiently small {c>0}, thus recovering Theorem 8. Bombieri’s theorem can be established by the methods in this set of notes, and will be given as an exercise to the reader.

Remark 11 There are a number of alternate ways to derive the results in this set of notes, for instance using the Turan power sums method which is based on studying derivatives such as

\displaystyle \left(\frac{L'}{L}\right)^{(k)}(s,\chi) = (-1)^{k+1} \sum_n \frac{\Lambda(n) \chi(n) \log^k n}{n^s}

\displaystyle  \approx (-1)^{k} k! \sum_\rho \frac{1}{(s-\rho)^{k+1}}

for {\hbox{Re}(s)>1} and large {k}, and performing various sorts of averaging in {k} to attenuate the contribution of many of the zeroes {\rho}. We will not develop this method here, but see for instance Chapter 9 of Montgomery’s book. See the text of Friedlander and Iwaniec for yet another approach based primarily on sieve-theoretic ideas.

Remark 12 When one optimises all the exponents, it turns out that the exponent in Linnik’s theorem is extremely good in the presence of an exceptional zero – indeed Friedlander and Iwaniec showed that one can even get a bound of the form {p \ll q^{2-c}} for some {c>0}, which is even stronger than what one can obtain from GRH! There are other places in which exceptional zeroes can be used to obtain results stronger than what one can obtain even on the Riemann hypothesis; for instance, Heath-Brown used the hypothesis of an infinite sequence of Siegel zeroes to obtain the twin prime conjecture.


