
A basic estimate in multiplicative number theory (particularly if one is using the Granville-Soundararajan “pretentious” approach to this subject) is the following inequality of Halasz (formulated here in a quantitative form introduced by Montgomery and Tenenbaum).

Theorem 1 (Halasz inequality) Let {f: {\bf N} \rightarrow {\bf C}} be a multiplicative function bounded in magnitude by {1}, and suppose that {x \geq 3}, {T \geq 1}, and { M \geq 0} are such that

\displaystyle \sum_{p \leq x} \frac{1 - \hbox{Re}(f(p) p^{-it})}{p} \geq M \ \ \ \ \ (1)

for all real numbers {t} with {|t| \leq T}. Then

\displaystyle \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M} + \frac{1}{\sqrt{T}}.

As a qualitative corollary, we conclude (by standard compactness arguments) that if

\displaystyle \sum_{p} \frac{1 - \hbox{Re}(f(p) p^{-it})}{p} = +\infty

for all real {t}, then

\displaystyle \frac{1}{x} \sum_{n \leq x} f(n) = o(1) \ \ \ \ \ (2)

as {x \rightarrow \infty}. In more recent work of Granville and Soundararajan (in this paper), the sharper bound

\displaystyle \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M} + \frac{1}{T} + \frac{\log\log x}{\log x}

is obtained (with a more precise description of the {(1+M) e^{-M}} term).

The usual proofs of Halasz’s theorem are somewhat lengthy (though there has been a recent simplification, in forthcoming work of Granville, Harper, and Soundararajan). Below the fold I would like to give a relatively short proof of the following “cheap” version of the inequality, which has slightly weaker quantitative bounds, but still suffices to give qualitative conclusions such as (2).

Theorem 2 (Cheap Halasz inequality) Let {f: {\bf N} \rightarrow {\bf C}} be a multiplicative function bounded in magnitude by {1}. Let {T \geq 1} and {M \geq 0}, and suppose that {x} is sufficiently large depending on {T,M}. If (1) holds for all {|t| \leq T}, then

\displaystyle \frac{1}{x} \sum_{n \leq x} f(n) \ll (1+M) e^{-M/2} + \frac{1}{T}.

The non-optimal exponent {1/2} can probably be improved a bit by being more careful with the exponents, but I did not try to optimise it here. A similar bound appears in the first paper of Halasz on this topic.

The idea of the argument is to split {f} as a Dirichlet convolution {f = f_1 * f_2 * f_3} where {f_1,f_2,f_3} are the portions of {f} coming from “small”, “medium”, and “large” primes respectively (with the dividing line between the three types of primes being given by various powers of {x}). Using a Perron-type formula, one can express this convolution in terms of the product of the Dirichlet series of {f_1,f_2,f_3} respectively at various complex numbers {1+it} with {|t| \leq T}. One can use {L^2} based estimates to control the Dirichlet series of {f_2,f_3}, while using the hypothesis (1) one can get {L^\infty} estimates on the Dirichlet series of {f_1}. (This is similar to the Fourier-analytic approach to ternary additive problems, such as Vinogradov’s theorem on representing large odd numbers as the sum of three primes.) This idea was inspired by a similar device used in the work of Granville, Harper, and Soundararajan. A variant of this argument also appears in unpublished work of Adam Harper.
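
As a quick numerical sanity check (not part of the argument), one can compare the two sides of Theorem 1 for a concrete multiplicative function such as the Liouville function {\lambda(n) = (-1)^{\Omega(n)}}, for which {\lambda(p)=-1} and the sum (1) at {t=0} is {\sum_{p \leq x} 2/p \approx 2 \log\log x}. The Python sketch below is purely illustrative: it only evaluates the distance sum at {t=0} (whereas the theorem requires a lower bound over all {|t| \leq T}), and the cutoff {x = 10^5} is an arbitrary choice.

```python
import numpy as np

def liouville(x):
    """Values lam[n] = (-1)**Omega(n) of the Liouville function for 0 <= n <= x."""
    lam = np.ones(x + 1, dtype=int)
    remaining = np.arange(x + 1)
    for p in range(2, x + 1):
        if remaining[p] == p:            # never divided by a smaller prime, so p is prime
            for q in range(p, x + 1, p):
                while remaining[q] % p == 0:
                    remaining[q] //= p
                    lam[q] = -lam[q]     # flip the sign once per prime factor (with multiplicity)
    return lam

def primes_up_to(x):
    sieve = np.ones(x + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.flatnonzero(sieve)

x = 10**5
lam = liouville(x)
mean_value = lam[1:].sum() / x                     # (1/x) * sum_{n <= x} lambda(n)

# the distance sum (1) at t = 0: lambda(p) = -1, so each prime contributes (1-(-1))/p = 2/p
M0 = sum(2.0 / p for p in primes_up_to(x))

print("mean value:", mean_value)                   # very small in practice
print("distance sum at t = 0:", M0)                # roughly 2 log log x, i.e. fairly large
print("(1+M) e^{-M} at M = M0:", (1 + M0) * np.exp(-M0))
```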

I thank Andrew Granville for helpful comments which led to significant simplifications of the argument.


In the previous set of notes we established the central limit theorem, which we formulate here as follows:

Theorem 1 (Central limit theorem) Let {X_1,X_2,X_3,\dots} be iid copies of a real random variable {X} of mean {\mu} and variance {0 < \sigma^2 < \infty}, and write {S_n := X_1 + \dots + X_n}. Then, for any fixed {a < b}, we have

\displaystyle {\bf P}( a \leq \frac{S_n - n \mu}{\sqrt{n} \sigma} \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_a^b e^{-t^2/2}\ dt \ \ \ \ \ (1)

as {n \rightarrow \infty}.

This is however not the end of the matter; there are many variants, refinements, and generalisations of the central limit theorem, and the purpose of this set of notes is to present a small sample of these variants.

First of all, the above theorem does not quantify the rate of convergence in (1). We have already addressed this issue to some extent with the Berry-Esséen theorem, which roughly speaking gives a convergence rate of {O(1/\sqrt{n})} uniformly in {a,b} if we assume that {X} has finite third moment. However there are still some quantitative versions of (1) which are not addressed by the Berry-Esséen theorem. For instance one may be interested in bounding the large deviation probabilities

\displaystyle {\bf P}( |\frac{S_n - n \mu}{\sqrt{n} \sigma}| \geq \lambda ) \ \ \ \ \ (2)

in the setting where {\lambda} grows with {n}. Chebyshev’s inequality gives an upper bound of {1/\lambda^2} for this quantity, but one can often do much better than this in practice. For instance, the central limit theorem (1) suggests that this probability should be bounded by something like {O( e^{-\lambda^2/2})}; however, this theorem only kicks in when {n} is very large compared with {\lambda}. For instance, if one uses the Berry-Esséen theorem, one would need {n} as large as {e^{\lambda^2}} or so to reach the desired bound of {O( e^{-\lambda^2/2})}, even under the assumption of finite third moment. Basically, the issue is that convergence-in-distribution results, such as the central limit theorem, only really control the typical behaviour of statistics such as {\frac{S_n-n \mu}{\sqrt{n} \sigma}}; they are much less effective at controlling the very rare outlier events in which the statistic strays far from its typical behaviour. Fortunately, there are large deviation inequalities (or concentration of measure inequalities) that do provide exponential type bounds for quantities such as (2), which are valid for both small and large values of {n}. A basic example of this is the Chernoff bound that made an appearance in Exercise 47 of Notes 4; here we give some further basic inequalities of this type, including versions of the Bennett and Hoeffding inequalities.
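
As a concrete (and purely numerical) illustration of the gap between the Chebyshev bound {1/\lambda^2} and the Gaussian-type prediction {O(e^{-\lambda^2/2})}, one can simulate the normalised sums for a bounded variable. The Python sketch below uses uniform random signs for {X} and {\lambda = 3}; these choices, and the Hoeffding-type bound {2e^{-\lambda^2/2}} quoted in the comments, are illustrative assumptions rather than part of the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, lam = 10_000, 200_000, 3.0

# S_n for iid uniform signs (mean 0, variance 1); note S_n = 2*Binomial(n, 1/2) - n
S = 2 * rng.binomial(n, 0.5, size=trials) - n
Z = S / np.sqrt(n)

print("empirical P(|Z| >= lam):      ", np.mean(np.abs(Z) >= lam))
print("Chebyshev bound 1/lam^2:      ", 1 / lam**2)
print("Hoeffding bound 2 e^{-lam^2/2}:", 2 * np.exp(-lam**2 / 2))
```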

In the other direction, we can also look at the fine scale behaviour of the sums {S_n} by trying to control probabilities such as

\displaystyle {\bf P}( a \leq S_n \leq a+h ) \ \ \ \ \ (3)

where {h} is now bounded (but {a} can grow with {n}). The central limit theorem predicts that this quantity should be roughly {\frac{h}{\sqrt{2\pi n} \sigma} e^{-(a-n\mu)^2 / 2n \sigma^2}}, but even if one is able to invoke the Berry-Esséen theorem, one cannot quite see this main term because it is dominated by the error term {O(1/n^{1/2})} in Berry-Esséen. There is good reason for this: if for instance {X} takes integer values, then {S_n} also takes integer values, and {{\bf P}( a \leq S_n \leq a+h )} can vanish when {h} is less than {1} and {a} is slightly larger than an integer. However, this turns out to essentially be the only obstruction; if {X} does not lie in a lattice such as {{\bf Z}}, then we can establish a local limit theorem controlling (3), and when {X} does take values in a lattice like {{\bf Z}}, there is a discrete local limit theorem that controls probabilities such as {{\bf P}(S_n = m)}. Both of these limit theorems will be proven by the Fourier-analytic method used in the previous set of notes.
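
The lattice case of the local limit theorem is easy to probe numerically, since the distribution of {S_n} can be computed exactly by repeated convolution and compared with the Gaussian prediction {\frac{1}{\sqrt{2\pi n} \sigma} e^{-(m-n\mu)^2 / 2n \sigma^2}} for {{\bf P}(S_n = m)}. In the following sketch the choice of {X} (uniform on {\{0,1,2\}}) and of {n} are illustrative.

```python
import numpy as np

# an illustrative lattice variable: X uniform on {0, 1, 2}, with mean 1 and variance 2/3
p = np.array([1/3, 1/3, 1/3])
mu, sigma2 = 1.0, 2.0 / 3.0

n = 200
dist = np.array([1.0])
for _ in range(n):                     # exact distribution of S_n by repeated convolution
    dist = np.convolve(dist, p)

m = np.arange(dist.size)               # S_n takes values 0, 1, ..., 2n
gaussian = np.exp(-(m - n * mu) ** 2 / (2 * n * sigma2)) / np.sqrt(2 * np.pi * n * sigma2)

for k in (n, n + 5, n + 20):
    print(k, dist[k], gaussian[k])     # P(S_n = k) versus the local limit prediction
```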

We also discuss other limit theorems in which the limiting distribution is something other than the normal distribution. Perhaps the most common example is the Poisson limit theorem, in which one sums a large number of indicator variables (or approximate indicator variables), each of which is rarely non-zero, but which collectively add up to a random variable of medium-sized mean. In this case, it turns out that the limiting distribution should be a Poisson random variable; this again is an easy application of the Fourier method. Finally, we briefly discuss limit theorems for stable laws other than the normal distribution, which are suitable for summing random variables of infinite variance, such as the Cauchy distribution.
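
For the Poisson limit, a minimal numerical illustration is to sum {n} independent indicators that each equal {1} with probability {\lambda/n}, and compare the resulting distribution with the Poisson distribution of mean {\lambda}; the parameters in the sketch below are arbitrary choices.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
n, lam, trials = 1000, 4.0, 200_000

# a sum of many indicator variables, each rarely nonzero, with total mean n * (lam/n) = lam
S = rng.binomial(n, lam / n, size=trials)

for k in range(9):
    empirical = np.mean(S == k)
    poisson = exp(-lam) * lam ** k / factorial(k)
    print(k, round(empirical, 4), round(poisson, 4))
```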

Finally, we mention a very important class of generalisations to the CLT (and to the variants of the CLT discussed in this post), in which the hypothesis of joint independence between the variables {X_1,\dots,X_n} is relaxed, for instance one could assume only that the {X_1,\dots,X_n} form a martingale. Many (though not all) of the proofs of the CLT extend to these more general settings, and this turns out to be important for many applications in which one does not expect joint independence. However, we will not discuss these generalisations in this course, as they are better suited for subsequent courses in this series when the theory of martingales, conditional expectation, and related tools are developed.


Kevin Ford, James Maynard, and I have uploaded to the arXiv our preprint “Chains of large gaps between primes“. This paper was announced in our previous paper with Konyagin and Green, which was concerned with the largest gap

\displaystyle  G_1(X) := \max_{p_n, p_{n+1} \leq X} (p_{n+1} - p_n)

between consecutive primes up to {X}, in which we improved the Rankin bound of

\displaystyle  G_1(X) \gg \log X \frac{\log_2 X \log_4 X}{(\log_3 X)^2}

to

\displaystyle  G_1(X) \gg \log X \frac{\log_2 X \log_4 X}{\log_3 X}

for large {X} (where we use the abbreviations {\log_2 X := \log\log X}, {\log_3 X := \log\log\log X}, and {\log_4 X := \log\log\log\log X}). Here, we obtain an analogous result for the quantity

\displaystyle  G_k(X) := \max_{p_n, \dots, p_{n+k} \leq X} \min( p_{n+1} - p_n, p_{n+2}-p_{n+1}, \dots, p_{n+k} - p_{n+k-1} )

which measures how far apart the gaps between chains of {k} consecutive primes can be. Our main result is

\displaystyle  G_k(X) \gg \frac{1}{k^2} \log X \frac{\log_2 X \log_4 X}{\log_3 X}

whenever {X} is sufficiently large depending on {k}, with the implied constant here absolute (and effective). The factor of {1/k^2} is inherent to the method, and related to the basic probabilistic fact that if one selects {k} numbers at random from the unit interval {[0,1]}, then one expects the minimum gap between adjacent numbers to be about {1/k^2} (i.e. smaller than the mean spacing of {1/k} by an additional factor of {1/k}).
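
The {1/k^2} heuristic for the minimum spacing of {k} uniform points in {[0,1]} is easy to confirm by simulation; of course, this is only a back-of-the-envelope check of the probabilistic fact quoted above, not of anything in the paper, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 20_000

for k in (10, 30, 100):
    pts = np.sort(rng.random((trials, k)), axis=1)
    min_gap = np.diff(pts, axis=1).min(axis=1)    # smallest gap between adjacent points
    print(k, min_gap.mean(), 1 / k**2)            # empirical mean versus the 1/k^2 heuristic
```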

Our arguments combine those from the previous paper with the matrix method of Maier, who (in our notation) showed that

\displaystyle  G_k(X) \gg_k  \log X \frac{\log_2 X \log_4 X}{(\log_3 X)^2}

for an infinite sequence of {X} going to infinity. (Maier needed to restrict to an infinite sequence to avoid Siegel zeroes, but we are able to resolve this issue by the now standard technique of simply eliminating a prime factor of an exceptional conductor from the sieve-theoretic portion of the argument. As a byproduct, this also makes all of the estimates in our paper effective.)

As its name suggests, the Maier matrix method is usually presented by imagining a matrix of numbers, and using information about the distribution of primes in the columns of this matrix to deduce information about the primes in at least one of the rows of the matrix. We found it convenient to interpret this method in an equivalent probabilistic form as follows. Suppose one wants to find an interval {n+1,\dots,n+y} which contains a block of at least {k} primes, each separated from each other by at least {g} (ultimately, {y} will be something like {\log X \frac{\log_2 X \log_4 X}{\log_3 X}} and {g} something like {y/k^2}). One can do this by the probabilistic method: pick {n} to be a random large natural number {{\mathbf n}} (with the precise distribution to be chosen later), and try to lower bound the probability that the interval {{\mathbf n}+1,\dots,{\mathbf n}+y} contains at least {k} primes, no two of which are within {g} of each other.

By carefully choosing the residue class of {{\mathbf n}} with respect to small primes, one can eliminate several of the {{\mathbf n}+j} from consideration of being prime immediately. For instance, if {{\mathbf n}} is chosen to be large and even, then the {{\mathbf n}+j} with {j} even have no chance of being prime and can thus be eliminated; similarly if {{\mathbf n}} is large and odd, then {{\mathbf n}+j} cannot be prime for any odd {j}. Using the methods of our previous paper, we can find a residue class {m \hbox{ mod } P} (where {P} is a product of a large number of primes) such that, if one chooses {{\mathbf n}} to be a large random element of {m \hbox{ mod } P} (that is, {{\mathbf n} = {\mathbf z} P + m} for some large random integer {{\mathbf z}}), then the set {{\mathcal T}} of shifts {j \in \{1,\dots,y\}} for which {{\mathbf n}+j} still has a chance of being prime has size comparable to something like {k \log X / \log_2 X}; furthermore this set {{\mathcal T}} is fairly well distributed in {\{1,\dots,y\}} in the sense that it does not concentrate too strongly in any short subinterval of {\{1,\dots,y\}}. The main new difficulty, not present in the previous paper, is to get lower bounds on the size of {{\mathcal T}} in addition to upper bounds, but this turns out to be achievable by a suitable modification of the arguments.

Using a version of the prime number theorem in arithmetic progressions due to Gallagher, one can show that for each remaining shift {j \in {\mathcal T}}, {{\mathbf n}+j} is going to be prime with probability comparable to {\log_2 X / \log X}, so one expects about {k} primes in the set {\{{\mathbf n} + j: j \in {\mathcal T}\}}. An upper bound sieve (e.g. the Selberg sieve) also shows that for any distinct {j,j' \in {\mathcal T}}, the probability that {{\mathbf n}+j} and {{\mathbf n}+j'} are both prime is {O( (\log_2 X / \log X)^2 )}. Using this and some routine second moment calculations, one can then show that with large probability, the set {\{{\mathbf n} + j: j \in {\mathcal T}\}} will indeed contain about {k} primes, no two of which are closer than {g} to each other; with no other numbers in this interval being prime, this gives a lower bound on {G_k(X)}.

Klaus Roth, who made fundamental contributions to analytic number theory, died this Tuesday, aged 90.

I never met or communicated with Roth personally, but was certainly influenced by his work; he wrote relatively few papers, but they tended to have outsized impact. For instance, he was one of the key people (together with Bombieri) to work on simplifying and generalising the large sieve, taking it from the technically formidable original formulation of Linnik and Rényi to the clean and general almost orthogonality principle that we have today (discussed for instance in these lecture notes of mine). The paper of Roth that had the most impact on my own personal work was his three-page paper proving what is now known as Roth’s theorem on arithmetic progressions:

Theorem 1 (Roth’s theorem on arithmetic progressions) Let {A} be a set of natural numbers of positive upper density (thus {\limsup_{N \rightarrow\infty} |A \cap \{1,\dots,N\}|/N > 0}). Then {A} contains infinitely many arithmetic progressions {a,a+r,a+2r} of length three (with {r} non-zero of course).

At the heart of Roth’s elegant argument was the following (surprising at the time) dichotomy: if {A} had some moderately large density within some arithmetic progression {P}, either one could use Fourier-analytic methods to detect the presence of an arithmetic progression of length three inside {A \cap P}, or else one could locate a long subprogression {P'} of {P} on which {A} had increased density. Iterating this dichotomy by an argument now known as the density increment argument, one eventually obtains Roth’s theorem, no matter which side of the dichotomy actually holds. This argument (and the many descendants of it), based on various “dichotomies between structure and randomness”, became essential in many other results of this type, most famously perhaps in Szemerédi’s proof of his celebrated theorem on arithmetic progressions that generalised Roth’s theorem to progressions of arbitrary length. More recently, my work on the Chowla and Elliott conjectures, which was a crucial component of the solution of the Erdös discrepancy problem, relies on an entropy decrement argument which was directly inspired by the density increment argument of Roth.

The Erdös discrepancy problem also is connected with another well known theorem of Roth:

Theorem 2 (Roth’s discrepancy theorem for arithmetic progressions) Let {f(1),\dots,f(n)} be a sequence in {\{-1,+1\}}. Then there exists an arithmetic progression {a+r, a+2r, \dots, a+kr} in {\{1,\dots,n\}} with {r} positive such that

\displaystyle  |\sum_{j=1}^k f(a+jr)| \geq c n^{1/4}

for an absolute constant {c>0}.

In fact, Roth proved a stronger estimate regarding mean square discrepancy, which I am not writing down here; as with the Roth theorem in arithmetic progressions, his proof was short and Fourier-analytic in nature (although non-Fourier-analytic proofs have since been found, for instance the semidefinite programming proof of Lovasz). The exponent {1/4} is known to be sharp (a result of Matousek and Spencer).

As a particular corollary of the above theorem, for an infinite sequence {f(1), f(2), \dots} of signs, the sums {|\sum_{j=1}^k f(a+jr)|} are unbounded in {a,r,k}. The Erdös discrepancy problem asks whether the same statement holds when {a} is restricted to be zero. (Roth also established discrepancy theorems for other sets, such as rectangles, which will not be discussed here.)
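
As a crude numerical companion to Theorem 2 (and certainly not a substitute for it, since the theorem is a statement about every sign sequence), one can brute-force the largest discrepancy along arithmetic progressions for a few specific sequences and observe that it comfortably exceeds {n^{1/4}}. The choice of sequences and of {n} in the sketch below is arbitrary.

```python
import numpy as np

def max_ap_discrepancy(f):
    """Largest |f(a+r) + f(a+2r) + ... + f(a+kr)| over APs inside {1, ..., n}.

    Here f is a length-n array with f[i-1] holding the value at i.  An AP of
    difference r is a block of consecutive terms within one of the r residue
    classes, so the maximal block sum can be read off from prefix sums.
    """
    n = len(f)
    best = 0
    for r in range(1, n + 1):
        for c in range(r):
            prefix = np.concatenate(([0], np.cumsum(f[c::r])))
            best = max(best, prefix.max() - prefix.min())
    return best

rng = np.random.default_rng(3)
n = 256
tests = {"random signs": rng.choice([-1, 1], size=n),
         "alternating signs": np.array([(-1) ** i for i in range(1, n + 1)])}
for name, f in tests.items():
    print(name, max_ap_discrepancy(f), " versus n^(1/4) =", round(n ** 0.25, 2))
```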

Finally, one has to mention Roth’s most famous result, cited for instance in his Fields medal citation:

Theorem 3 (Roth’s theorem on Diophantine approximation) Let {\alpha} be an irrational algebraic number. Then for any {\varepsilon > 0} there is a quantity {c_{\alpha,\varepsilon} > 0} such that

\displaystyle  |\alpha - \frac{a}{q}| > \frac{c_{\alpha,\varepsilon}}{q^{2+\varepsilon}}

for all rational numbers {\frac{a}{q}} with {q \geq 1}.

From the Dirichlet approximation theorem (or from the theory of continued fractions) we know that the exponent {2+\varepsilon} in the denominator cannot be reduced to {2} or below. A classical and easy theorem of Liouville gives the claim with the exponent {2+\varepsilon} replaced by the degree of the algebraic number {\alpha}; work of Thue and Siegel reduced this exponent, but Roth was the one who obtained the near-optimal result. An important point is that the constant {c_{\alpha,\varepsilon}} is ineffective – it is a major open problem in Diophantine approximation to produce any bound significantly stronger than Liouville’s theorem with effective constants. This is because the proof of Roth’s theorem does not exclude any single rational {a/q} from being close to {\alpha}, but instead very ingeniously shows that one cannot have two different rationals {a/q}, {a'/q'} that are unusually close to {\alpha}, even when the denominators {q,q'} are very different in size. (I refer to this sort of argument as a “dueling conspiracies” argument; they are strangely prevalent throughout analytic number theory.)
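
One can get a feel for the exponent in Theorem 3 by computing continued fraction convergents {a/q} of an algebraic number such as {2^{1/3}} and recording the effective exponent {\theta} defined by {|\alpha - a/q| = q^{-\theta}}: along convergents one always has {\theta \geq 2}, while Roth's theorem says that {\theta} cannot stay above {2+\varepsilon}. The sketch below is purely illustrative (the working precision and the number of convergents are arbitrary, and of course no finite computation says anything about the ineffective constant {c_{\alpha,\varepsilon}}).

```python
from decimal import Decimal, getcontext
from fractions import Fraction
from math import log

getcontext().prec = 60
alpha = Fraction(Decimal(2) ** (Decimal(1) / 3))   # 2^(1/3) to roughly 60 digits

def convergents(x, count):
    """The first `count` continued-fraction convergents a/q of a positive Fraction x."""
    h0, h1 = 1, int(x)          # convergent numerators
    k0, k1 = 0, 1               # convergent denominators
    frac = x - int(x)
    out = [(h1, k1)]
    for _ in range(count - 1):
        frac = 1 / frac
        a = int(frac)
        frac -= a
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append((h1, k1))
    return out

for a, q in convergents(alpha, 16):
    if q == 1:
        continue
    err = abs(alpha - Fraction(a, q))
    # effective exponent theta with |alpha - a/q| = q^(-theta); Roth: theta <= 2 + eps eventually
    print(q, round(log(1 / float(err)) / log(q), 3))
```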

Chantal David, Andrew Granville, Emmanuel Kowalski, Philippe Michel, Kannan Soundararajan, and I are running a program at MSRI in the Spring of 2017 (more precisely, from Jan 17, 2017 to May 26, 2017) in the area of analytic number theory, with the intention of bringing together many of the leading experts in all aspects of the subject and presenting recent work on the many active areas of the subject (e.g. the distribution of the prime numbers, refinements of the circle method, a deeper understanding of the asymptotics of bounded multiplicative functions (and applications to Erdos discrepancy type problems!) and of the “pretentious” approach to analytic number theory, more “analysis-friendly” formulations of the theorems of Deligne and others involving trace functions over fields, and new subconvexity theorems for automorphic forms, to name a few).  As with any other semester MSRI program, there will be a number of workshops, seminars, and similar activities taking place while the members are in residence.  I’m personally looking forward to the program, which should be occurring in the midst of a particularly productive time for the subject.  Needless to say, I (and the rest of the organising committee) plan to be present for most of the program.

Applications for Postdoctoral Fellowships and Research Memberships for this program (and for other MSRI programs in this time period, namely the companion program in Harmonic Analysis and the Fall program in Geometric Group Theory, as well as the complementary program in all other areas of mathematics) remain open until Dec 1.  Applications are open to everyone, but require supporting documentation, such as a CV, statement of purpose, and letters of recommendation from other mathematicians; see the application page for more details.

Let {X_1,X_2,\dots} be iid copies of an absolutely integrable real scalar random variable {X}, and form the partial sums {S_n := X_1 + \dots + X_n}. As we saw in the last set of notes, the law of large numbers ensures that the empirical averages {S_n/n} converge (both in probability and almost surely) to a deterministic limit, namely the mean {\mu= {\bf E} X} of the reference variable {X}. Furthermore, under some additional moment hypotheses on the underlying variable {X}, we can obtain square root cancellation for the fluctuation {\frac{S_n}{n} - \mu} of the empirical average from the mean. To simplify the calculations, let us first restrict to the case {\mu=0, \sigma^2=1} of mean zero and variance one, thus

\displaystyle  {\bf E} X = 0

and

\displaystyle  {\bf Var}(X) = {\bf E} X^2 = 1.

Then, as computed in previous notes, the normalised fluctuation {S_n/\sqrt{n}} also has mean zero and variance one:

\displaystyle  {\bf E} \frac{S_n}{\sqrt{n}} = 0

\displaystyle  {\bf Var}(\frac{S_n}{\sqrt{n}}) = {\bf E} (\frac{S_n}{\sqrt{n}})^2 = 1.

This and Chebyshev’s inequality already indicate that the “typical” size of {S_n} is {O(\sqrt{n})}, thus for instance {\frac{S_n}{\sqrt{n} \omega(n)}} goes to zero in probability for any {\omega(n)} that goes to infinity as {n \rightarrow \infty}. If we also have a finite fourth moment {{\bf E} |X|^4 < \infty}, then the calculations of the previous notes also give a fourth moment estimate

\displaystyle  {\bf E} (\frac{S_n}{\sqrt{n}})^4 = 3 + O( \frac{{\bf E} |X|^4}{n} ).

From this and the Paley-Zygmund inequality (Exercise 44 of Notes 1) we also get some lower bound for {\frac{S_n}{\sqrt{n}}} of the form

\displaystyle  {\bf P}( |\frac{S_n}{\sqrt{n}}| \geq \varepsilon ) \geq \varepsilon

for some absolute constant {\varepsilon>0} and for {n} sufficiently large; this indicates in particular that {\frac{S_n \omega(n)}{\sqrt{n}}} does not converge in any reasonable sense to something finite for any {\omega(n)} that goes to infinity.
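
The second and fourth moment computations above are easy to check by simulation; in the sketch below the choice of {X} (uniform on {[-\sqrt{3},\sqrt{3}]}, which has mean zero, unit variance, and finite fourth moment) and the sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 500, 20_000

# an illustrative X with mean 0, variance 1, finite fourth moment: uniform on [-sqrt(3), sqrt(3)]
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(trials, n))
Z = X.sum(axis=1) / np.sqrt(n)

print("E (S_n/sqrt(n))^2 ~", (Z ** 2).mean())   # should be close to 1
print("E (S_n/sqrt(n))^4 ~", (Z ** 4).mean())   # should be close to 3 + O(E|X|^4 / n)
```
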
The question remains as to what happens to the ratio {S_n/\sqrt{n}} itself, without multiplying or dividing by any factor {\omega(n)}. A first guess would be that these ratios converge in probability or almost surely, but this is unfortunately not the case:

Proposition 1 Let {X_1,X_2,\dots} be iid copies of an absolutely integrable real scalar random variable {X} with mean zero, variance one, and finite fourth moment, and write {S_n := X_1 + \dots + X_n}. Then the random variables {S_n/\sqrt{n}} do not converge in probability or almost surely to any limit, and neither does any subsequence of these random variables.

Proof: Suppose for contradiction that some sequence {S_{n_j}/\sqrt{n_j}} converged in probability or almost surely to a limit {Y}. By passing to a further subsequence we may assume that the convergence is in the almost sure sense. Since all of the {S_{n_j}/\sqrt{n_j}} have mean zero, variance one, and bounded fourth moment, Theorem 25 of Notes 1 implies that the limit {Y} also has mean zero and variance one. On the other hand, {Y} is a tail random variable and is thus almost surely constant by the Kolmogorov zero-one law from Notes 3. Since constants have variance zero, we obtain the required contradiction. \Box

Nevertheless there is an important limit for the ratio {S_n/\sqrt{n}}, which requires one to replace the notions of convergence in probability or almost sure convergence by the weaker concept of convergence in distribution.

Definition 2 (Vague convergence and convergence in distribution) Let {R} be a locally compact Hausdorff topological space with the Borel {\sigma}-algebra. A sequence of finite measures {\mu_n} on {R} is said to converge vaguely to another finite measure {\mu} if one has

\displaystyle  \int_R G(x)\ d\mu_n(x) \rightarrow \int_R G(x)\ d\mu(x)

as {n \rightarrow \infty} for all continuous compactly supported functions {G: R \rightarrow {\bf R}}. (Vague convergence is also known as weak convergence, although strictly speaking the terminology weak-* convergence would be more accurate.) A sequence of random variables {X_n} taking values in {R} is said to converge in distribution (or converge weakly or converge in law) to another random variable {X} if the distributions {\mu_{X_n}} converge vaguely to the distribution {\mu_X}, or equivalently if

\displaystyle  {\bf E}G(X_n) \rightarrow {\bf E} G(X)

as {n \rightarrow \infty} for all continuous compactly supported functions {G: R \rightarrow {\bf R}}.

One could in principle try to extend this definition beyond the locally compact Hausdorff setting, but certain pathologies can occur when doing so (e.g. failure of the Riesz representation theorem), and we will never need to consider vague convergence in spaces that are not locally compact Hausdorff, so we restrict to this setting for simplicity.

Note that the notion of convergence in distribution depends only on the distribution of the random variables involved. One consequence of this is that convergence in distribution does not produce unique limits: if {X_n} converges in distribution to {X}, and {Y} has the same distribution as {X}, then {X_n} also converges in distribution to {Y}. However, limits are unique up to equivalence in distribution (this is a consequence of the Riesz representation theorem, discussed for instance in this blog post). As a consequence of the insensitivity of convergence in distribution to equivalence in distribution, we may also legitimately talk about convergence in distribution of a sequence of random variables {X_n} to another random variable {X} even when all the random variables {X_1,X_2,\dots} and {X} involved are being modeled by different probability spaces (e.g. each {X_n} is modeled by {\Omega_n}, and {X} is modeled by {\Omega}, with no coupling presumed between these spaces). This is in contrast to the stronger notions of convergence in probability or almost sure convergence, which require all the random variables to be modeled by a common probability space. Also, by an abuse of notation, we can say that a sequence {X_n} of random variables converges in distribution to a probability measure {\mu}, when {\mu_{X_n}} converges vaguely to {\mu}. Thus we can talk about a sequence of random variables converging in distribution to a uniform distribution, a gaussian distribution, etc.

From the dominated convergence theorem (available for both convergence in probability and almost sure convergence) we see that convergence in probability or almost sure convergence implies convergence in distribution. The converse is not true, due to the insensitivity of convergence in distribution to equivalence in distribution; for instance, if {X_1,X_2,\dots} are iid copies of a non-deterministic scalar random variable {X}, then the {X_n} trivially converge in distribution to {X}, but will not converge in probability or almost surely (as one can see from the zero-one law). However, there are some partial converses that relate convergence in distribution to convergence in probability; see Exercise 10 below.

Remark 3 The notion of convergence in distribution is somewhat similar to the notion of convergence in the sense of distributions that arises in distribution theory (discussed for instance in this previous blog post), however strictly speaking the two notions of convergence are distinct and should not be confused with each other, despite the very similar names.

The notion of convergence in distribution simplifies in the case of real scalar random variables:

Proposition 4 Let {X_1,X_2,\dots} be a sequence of scalar random variables, and let {X} be another scalar random variable. Then the following are equivalent:

  • (i) {X_n} converges in distribution to {X}.
  • (ii) {F_{X_n}(t)} converges to {F_X(t)} for each continuity point {t} of {F_X} (i.e. for all real numbers {t \in {\bf R}} at which {F_X} is continuous). Here {F_X(t) := {\bf P}(X \leq t)} is the cumulative distribution function of {X}.

Proof: First suppose that {X_n} converges in distribution to {X}, and {F_X} is continuous at {t}. For any {\varepsilon > 0}, one can find a {\delta} such that

\displaystyle  F_X(t) - \varepsilon \leq F_X(t') \leq F_X(t) + \varepsilon

for every {t' \in [t-\delta,t+\delta]}. One can also find an {N} larger than {|t|+\delta} such that {F_X(-N) \leq \varepsilon} and {F_X(N) \geq 1-\varepsilon}. Thus

\displaystyle  {\bf P} (|X| \geq N ) = O(\varepsilon)

and

\displaystyle  {\bf P} (|X - t| \leq \delta ) = O(\varepsilon).

Let {G: {\bf R} \rightarrow [0,1]} be a continuous function supported on {[-2N, t]} that equals {1} on {[-N, t-\delta]}. Then by the above discussion we have

\displaystyle  {\bf E} G(X) = F_X(t) + O(\varepsilon)

and hence

\displaystyle  {\bf E} G(X_n) = F_X(t) + O(\varepsilon)

for large enough {n}. In particular

\displaystyle  {\bf P}( X_n \leq t ) \geq F_X(t) - O(\varepsilon).

A similar argument, replacing {G} with a continuous function supported on {[t,2N]} that equals {1} on {[t+\delta,N]}, gives

\displaystyle  {\bf P}( X_n > t ) \geq 1 - F_X(t) - O(\varepsilon)

for {n} large enough. Putting the two estimates together gives

\displaystyle  F_{X_n}(t) = F_X(t) + O(\varepsilon)

for {n} large enough; sending {\varepsilon \rightarrow 0}, we obtain the claim.
Conversely, suppose that {F_{X_n}(t)} converges to {F_X(t)} at every continuity point {t} of {F_X}. Let {G: {\bf R} \rightarrow {\bf R}} be a continuous compactly supported function; then it is uniformly continuous. As {F_X} is monotone increasing, it can only have countably many points of discontinuity. From these two facts one can find, for any {\varepsilon>0}, a simple function {G_\varepsilon(t) = \sum_{i=1}^m c_i 1_{(t_i,t_{i+1}]}} for some {t_1 < \dots < t_{m+1}} that are points of continuity of {F_X}, and real numbers {c_i}, such that {|G(t) - G_\varepsilon(t)| \leq \varepsilon} for all {t}. Thus

\displaystyle  {\bf E} G(X_n) = {\bf E} G_\varepsilon(X_n) + O(\varepsilon)

\displaystyle  = \sum_{i=1}^m c_i(F_{X_n}(t_{i+1}) - F_{X_n}(t_i)) + O(\varepsilon).

Similarly for {X_n} replaced by {X}. Subtracting and taking limit superior, we conclude that

\displaystyle  \limsup_{n \rightarrow \infty} |{\bf E} G(X_n) - {\bf E} G(X)| = O(\varepsilon),

and on sending {\varepsilon \rightarrow 0}, we obtain that {X_n} converges in distribution to {X} as claimed. \Box
The restriction to continuity points {t} of {F_X} is necessary. Consider for instance the deterministic random variables {X_n = 1/n}; then {X_n} converges almost surely (and hence in distribution) to {0}, but {F_{X_n}(0) = 0} does not converge to {F_X(0)=1}.

Example 5 For any natural number {n}, let {X_n} be a discrete random variable drawn uniformly from the finite set {\{0/n, 1/n, \dots, (n-1)/n\}}, and let {X} be the continuous random variable drawn uniformly from {[0,1]}. Then {X_n} converges in distribution to {X}. Thus we see that a continuous random variable can emerge as the limit of discrete random variables.

Example 6 For any natural number {n}, let {X_n} be a continuous random variable drawn uniformly from {[0,1/n]}, then {X_n} converges in distribution to the deterministic real number {0}. Thus we see that discrete (or even deterministic) random variables can emerge as the limit of continuous random variables.
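
Both examples can be seen concretely by tabulating the cumulative distribution functions; the evaluation points below are arbitrary.

```python
import numpy as np

def cdf_example5(n, t):
    """CDF of the uniform distribution on {0/n, 1/n, ..., (n-1)/n} (Example 5)."""
    return np.clip(np.floor(n * t) + 1, 0, n) / n

t = np.array([0.1, 0.25, 0.5, 0.9])
for n in (5, 50, 500):
    print(n, cdf_example5(n, t))   # approaches F_X(t) = t, the CDF of the uniform distribution on [0,1]

# Example 6: X_n uniform on [0, 1/n] has F_{X_n}(t) = min(n t, 1) for t >= 0, which converges
# to 1 for every t > 0 but stays 0 at t = 0 -- the single discontinuity point of the limiting
# CDF, as allowed by Proposition 4.
```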

Exercise 7 (Portmanteau theorem) Show that the properties (i) and (ii) in Proposition 4 are also equivalent to the following three statements:

  • (iii) One has {\limsup_{n \rightarrow \infty} {\bf P}( X_n \in K ) \leq {\bf P}(X \in K)} for all closed sets {K \subset {\bf R}}.
  • (iv) One has {\liminf_{n \rightarrow \infty} {\bf P}( X_n \in U ) \geq {\bf P}(X \in U)} for all open sets {U \subset {\bf R}}.
  • (v) For any Borel set {E \subset {\bf R}} whose topological boundary {\partial E} is such that {{\bf P}(X \in \partial E) = 0}, one has {\lim_{n \rightarrow \infty} {\bf P}(X_n \in E) = {\bf P}(X \in E)}.

(Note: to prove this theorem, you may wish to invoke Urysohn’s lemma. To deduce (iii) from (i), you may wish to start with the case of compact {K}.)

We can now state the famous central limit theorem:

Theorem 8 (Central limit theorem) Let {X_1,X_2,\dots} be iid copies of a scalar random variable {X} of finite mean {\mu := {\bf E} X} and finite non-zero variance {\sigma^2 := {\bf Var}(X)}. Let {S_n := X_1 + \dots + X_n}. Then the random variables {\frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu)} converge in distribution to a random variable with the standard normal distribution {N(0,1)} (that is to say, a random variable with probability density function {x \mapsto \frac{1}{\sqrt{2\pi}} e^{-x^2/2}}). Thus, by abuse of notation

\displaystyle  \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \rightarrow N(0,1).

In the normalised case {\mu=0, \sigma^2=1} when {X} has mean zero and unit variance, this simplifies to

\displaystyle  \frac{S_n}{\sqrt{n}} \rightarrow N(0,1).

Using Proposition 4 (and the fact that the cumulative distribution function associated to {N(0,1)} is continuous), the central limit theorem is equivalent to asserting that

\displaystyle  {\bf P}( \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq t ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx

as {n \rightarrow \infty} for any {t \in {\bf R}}, or equivalently that

\displaystyle  {\bf P}( a \leq \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx.

Informally, one can think of the central limit theorem as asserting that {S_n} approximately behaves like it has distribution {N( n \mu, n \sigma^2 )} for large {n}, where {N(\mu,\sigma^2)} is the normal distribution with mean {\mu} and variance {\sigma^2}, that is to say the distribution with probability density function {x \mapsto \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2}}. The integrals {\frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx} can be written in terms of the error function {\hbox{erf}} as {\frac{1}{2} + \frac{1}{2} \hbox{erf}(t/\sqrt{2})}.
The central limit theorem is a basic example of the universality phenomenon in probability – many statistics involving a large system of many independent (or weakly dependent) variables (such as the normalised sums {\frac{\sqrt{n}}{\sigma}(\frac{S_n}{n}-\mu)}) end up having a universal asymptotic limit (in this case, the normal distribution), regardless of the precise makeup of the underlying random variable {X} that comprised that system. Indeed, the universality of the normal distribution is such that it arises in many other contexts than the fluctuation of iid random variables; the central limit theorem is merely the first place in probability theory where it makes a prominent appearance.
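
A minimal Monte Carlo check of the central limit theorem, comparing empirical probabilities with {\Phi(b)-\Phi(a)} computed via the error function as above; the skewed choice of {X} (a recentred exponential variable) and the sample sizes are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

def Phi(t):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 + 0.5 * erf(t / sqrt(2))

rng = np.random.default_rng(5)
n, trials = 1000, 20_000

# an illustrative skewed choice of X: exponential with rate 1, recentred to mean 0, variance 1
X = rng.exponential(1.0, size=(trials, n)) - 1.0
Z = X.sum(axis=1) / np.sqrt(n)

for a, b in ((-1.0, 1.0), (0.0, 2.0), (1.5, 3.0)):
    print((a, b), np.mean((a <= Z) & (Z <= b)), Phi(b) - Phi(a))
```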

We will give several proofs of the central limit theorem in these notes; each of these proofs has its advantages and disadvantages, and each can be extended to prove many further results beyond the central limit theorem. We first give Lindeberg’s proof of the central limit theorem, based on exchanging (or swapping) each component {X_1,\dots,X_n} of the sum {S_n} in turn. This proof gives an accessible explanation as to why there should be a universal limit for the central limit theorem; one then computes directly with gaussians to verify that it is the normal distribution which is the universal limit. Our second proof is the most popular one taught in probability texts, namely the Fourier-analytic proof based around the concept of the characteristic function {t \mapsto {\bf E} e^{itX}} of a real random variable {X}. Thanks to the powerful identities and other results of Fourier analysis, this gives a quite short and direct proof of the central limit theorem, although the arguments may seem rather magical to readers who are not already familiar with Fourier methods. Finally, we give a proof based on the moment method, in the spirit of the arguments in the previous notes; this argument is more combinatorial, but is straightforward and is particularly robust, in particular being well equipped to handle some dependencies between components; we will illustrate this by proving the Erdos-Kac law in number theory by this method. Some further discussion of the central limit theorem (including some further proofs, such as one based on Stein’s method) can be found in this blog post. Some further variants of the central limit theorem, such as local limit theorems, stable laws, and large deviation inequalities, will be discussed in the next (and final) set of notes.

The following exercise illustrates the power of the central limit theorem, by establishing combinatorial estimates which would otherwise require the use of Stirling’s formula to establish.

Exercise 9 (De Moivre-Laplace theorem) Let {X} be a Bernoulli random variable, taking values in {\{0,1\}} with {{\bf P}(X=0)={\bf P}(X=1)=1/2}, thus {X} has mean {1/2} and variance {1/4}. Let {X_1,X_2,\dots} be iid copies of {X}, and write {S_n := X_1+\dots+X_n}.

  • (i) Show that {S_n} takes values in {\{0,\dots,n\}} with {{\bf P}(S_n=i) = \frac{1}{2^n} \binom{n}{i}}. (This is an example of a binomial distribution.)
  • (ii) Assume Stirling’s formula

    \displaystyle  n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n} \ \ \ \ \ (1)

    where {o(1)} is a function of {n} that goes to zero as {n \rightarrow \infty}. (A proof of this formula may be found in this previous blog post.) Using this formula, and without using the central limit theorem, show that

    \displaystyle  {\bf P}( a \leq 2\sqrt{n} (\frac{S_n}{n} - \frac{1}{2}) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx

    as {n \rightarrow \infty} for any fixed real numbers {a<b}.

The above special case of the central limit theorem was first established by de Moivre and Laplace.
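
Exercise 9(ii) can also be checked numerically without any simulation, since the binomial probabilities can be computed exactly; the parameters in the sketch below are arbitrary.

```python
from math import ceil, comb, erf, floor, sqrt

def Phi(t):
    """Standard normal CDF."""
    return 0.5 + 0.5 * erf(t / sqrt(2))

n, a, b = 10_000, -1.0, 1.5

# a <= 2 sqrt(n) (S_n/n - 1/2) <= b  is the same as  n/2 + a sqrt(n)/2 <= S_n <= n/2 + b sqrt(n)/2
lo = ceil(n / 2 + a * sqrt(n) / 2)
hi = floor(n / 2 + b * sqrt(n) / 2)
exact = sum(comb(n, i) for i in range(lo, hi + 1)) / 2 ** n   # exact binomial probability

print(exact, Phi(b) - Phi(a))   # the two should agree to within O(1/sqrt(n))
```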

We close this section with some basic facts about convergence of distribution that will be useful in the sequel.

Exercise 10 Let {X_1,X_2,\dots}, {Y_1,Y_2,\dots} be sequences of real random variables, and let {X,Y} be further real random variables.

  • (i) If {X} is deterministic, show that {X_n} converges in distribution to {X} if and only if {X_n} converges in probability to {X}.
  • (ii) Suppose that {X_n} is independent of {Y_n} for each {n}, and {X} independent of {Y}. Show that {X_n+iY_n} converges in distribution to {X+iY} if and only if {X_n} converges in distribution to {X} and {Y_n} converges in distribution to {Y}. (The shortest way to prove this is by invoking the Stone-Weierstrass theorem, but one can also proceed by proving some version of Proposition 4.) What happens if the independence hypothesis is dropped?
  • (iii) If {X_n} converges in distribution to {X}, show that for every {\varepsilon>0} there exists {K>0} such that {{\bf P}( |X_n| \geq K ) < \varepsilon} for all sufficiently large {n}. (That is to say, {X_n} is a tight sequence of random variables.)
  • (iv) Show that {X_n} converges in distribution to {X} if and only if, after extending the probability space model if necessary, one can find copies {Z_1,Z_2,\dots} and {Z} of {X_1,X_2,\dots} and {X} respectively such that {Z_n} converges almost surely to {Z}. (Hint: use the Skorohod representation, Exercise 29 of Notes 0.)
  • (v) If {X_1,X_2,\dots} converges in distribution to {X}, and {F: {\bf R} \rightarrow {\bf R}} is continuous, show that {F(X_1),F(X_2),\dots} converges in distribution to {F(X)}. Generalise this claim to the case when {X} takes values in an arbitrary locally compact Hausdorff space.
  • (vi) (Slutsky’s theorem) If {X_n} converges in distribution to {X}, and {Y_n} converges in probability to a deterministic limit {Y}, show that {X_n+Y_n} converges in distribution to {X+Y}, and {X_n Y_n} converges in distribution to {XY}. (Hint: either use (iv), or else use (iii) to control some error terms.) This statement combines particularly well with (i). What happens if {Y} is not assumed to be deterministic?
  • (vii) (Fatou lemma) If {G: {\bf R} \rightarrow [0,+\infty)} is continuous, and {X_n} converges in distribution to {X}, show that {\liminf_{n \rightarrow \infty} {\bf E} G(X_n) \geq {\bf E} G(X)}.
  • (viii) (Bounded convergence) If {G: {\bf R} \rightarrow {\bf R}} is continuous and bounded, and {X_n} converges in distribution to {X}, show that {\lim_{n \rightarrow \infty} {\bf E} G(X_n) = {\bf E} G(X)}.
  • (ix) (Dominated convergence) If {X_n} converges in distribution to {X}, and there is an absolutely integrable {Y} such that {|X_n| \leq Y} almost surely for all {n}, show that {\lim_{n \rightarrow \infty} {\bf E} X_n = {\bf E} X}.

For future reference we also mention (but will not prove) Prokhorov’s theorem that gives a partial converse to part (iii) of the above exercise:

Theorem 11 (Prokhorov’s theorem) Let {X_1,X_2,\dots} be a sequence of real random variables which is tight (that is, for every {\varepsilon>0} there exists {K>0} such that {{\bf P}(|X_n| \geq K) < \varepsilon} for all sufficiently large {n}). Then there exists a subsequence {X_{n_j}} which converges in distribution to some random variable {X} (which may possibly be modeled by a different probability space model than the {X_1,X_2,\dots}.)

The proof of this theorem relies on the Riesz representation theorem, and is beyond the scope of this course; but see for instance Exercise 29 of this previous blog post. (See also the closely related Helly selection theorem, covered in Exercise 30 of the same post.)

