
In the previous set of notes we established the central limit theorem, which we formulate here as follows:

Theorem 1 (Central limit theorem) Let ${X_1,X_2,X_3,\dots}$ be iid copies of a real random variable ${X}$ of mean ${\mu}$ and variance ${0 < \sigma^2 < \infty}$, and write ${S_n := X_1 + \dots + X_n}$. Then, for any fixed ${a < b}$, we have

$\displaystyle {\bf P}( a \leq \frac{S_n - n \mu}{\sqrt{n} \sigma} \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_a^b e^{-t^2/2}\ dt \ \ \ \ \ (1)$

as ${n \rightarrow \infty}$.

This is however not the end of the matter; there are many variants, refinements, and generalisations of the central limit theorem, and the purpose of this set of notes is to present a small sample of these variants.

First of all, the above theorem does not quantify the rate of convergence in (1). We have already addressed this issue to some extent with the Berry-Esséen theorem, which roughly speaking gives a convergence rate of ${O(1/\sqrt{n})}$ uniformly in ${a,b}$ if we assume that ${X}$ has finite third moment. However there are still some quantitative versions of (1) which are not addressed by the Berry-Esséen theorem. For instance one may be interested in bounding the large deviation probabilities

$\displaystyle {\bf P}( |\frac{S_n - n \mu}{\sqrt{n} \sigma}| \geq \lambda ) \ \ \ \ \ (2)$

in the setting where ${\lambda}$ grows with ${n}$. Chebyshev’s inequality gives an upper bound of ${1/\lambda^2}$ for this quantity, but one can often do much better than this in practice. For instance, the central limit theorem (1) suggests that this probability should be bounded by something like ${O( e^{-\lambda^2/2})}$; however, this theorem only kicks in when ${n}$ is very large compared with ${\lambda}$. For instance, if one uses the Berry-Esséen theorem, one would need ${n}$ as large as ${e^{\lambda^2}}$ or so to reach the desired bound of ${O( e^{-\lambda^2/2})}$, even under the assumption of finite third moment. Basically, the issue is that convergence-in-distribution results, such as the central limit theorem, only really control the typical behaviour of statistics such as ${\frac{S_n-n \mu}{\sqrt{n} \sigma}}$; they are much less effective at controlling the very rare outlier events in which the statistic strays far from its typical behaviour. Fortunately, there are large deviation inequalities (or concentration of measure inequalities) that do provide exponential-type bounds for quantities such as (2), which are valid for both small and large values of ${n}$. A basic example of this is the Chernoff bound that made an appearance in Exercise 47 of Notes 4; here we give some further basic inequalities of this type, including versions of the Bennett and Hoeffding inequalities.
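As a purely illustrative numerical aside (not part of the original notes), one can compare the three quantities discussed above for a model case of bounded summands. For Rademacher signs ${X_i = \pm 1}$, Hoeffding's inequality gives ${{\bf P}(|S_n| \geq t) \leq 2 e^{-t^2/2n}}$, i.e. ${{\bf P}(|S_n/\sqrt{n}| \geq \lambda) \leq 2e^{-\lambda^2/2}$; the sketch below (assuming Python with numpy; all parameter choices are arbitrary) estimates the true probability by Monte Carlo and compares it with the Hoeffding and Chebyshev bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, lam = 100, 100_000, 3.0

# iid Rademacher signs: mean zero, variance one, bounded in [-1, 1]
X = rng.integers(0, 2, size=(trials, n), dtype=np.int8) * 2 - 1
Z = X.sum(axis=1) / np.sqrt(n)            # the normalised sum S_n / sqrt(n)

empirical = np.mean(np.abs(Z) >= lam)     # Monte Carlo estimate of (2)
chebyshev = 1.0 / lam ** 2                # Chebyshev's bound
hoeffding = 2.0 * np.exp(-lam ** 2 / 2)   # Hoeffding's bound for [-1,1] summands

print(empirical, hoeffding, chebyshev)
```

The empirical probability is far below the Chebyshev bound ${1/\lambda^2}$ but of the exponential shape predicted by Hoeffding, illustrating the gap between second-moment and concentration-of-measure bounds.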

In the other direction, we can also look at the fine scale behaviour of the sums ${S_n}$ by trying to control probabilities such as

$\displaystyle {\bf P}( a \leq S_n \leq a+h ) \ \ \ \ \ (3)$

where ${h}$ is now bounded (but ${a}$ can grow with ${n}$). The central limit theorem predicts that this quantity should be roughly ${\frac{h}{\sqrt{2\pi n} \sigma} e^{-(a-n\mu)^2 / 2n \sigma^2}}$, but even if one is able to invoke the Berry-Esséen theorem, one cannot quite see this main term because it is dominated by the error term ${O(1/n^{1/2})}$ in Berry-Esséen. There is good reason for this: if for instance ${X}$ takes integer values, then ${S_n}$ also takes integer values, and ${{\bf P}( a \leq S_n \leq a+h )}$ can vanish when ${h}$ is less than ${1}$ and ${a}$ is slightly larger than an integer. However, this turns out to essentially be the only obstruction; if ${X}$ does not lie in a lattice such as ${{\bf Z}}$, then we can establish a local limit theorem controlling (3), and when ${X}$ does take values in a lattice like ${{\bf Z}}$, there is a discrete local limit theorem that controls probabilities such as ${{\bf P}(S_n = m)}$. Both of these limit theorems will be proven by the Fourier-analytic method used in the previous set of notes.
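To illustrate the discrete local limit theorem numerically (an editorial aside, not part of the notes): for fair Bernoulli summands, ${{\bf P}(S_n = m)}$ is an exact binomial probability, and the prediction is the gaussian density ${\frac{1}{\sqrt{2\pi n}\sigma} e^{-(m - n\mu)^2/2n\sigma^2}}$ with ${\mu = 1/2}$, ${\sigma^2 = 1/4}$. A short Python check (the choices ${n = 1000}$ and the sample points ${m}$ are arbitrary):

```python
import math

n = 1000
mu, var = 0.5, 0.25   # mean and variance of one fair Bernoulli summand

def exact_pmf(m):     # exact P(S_n = m) for S_n ~ Binomial(n, 1/2)
    return math.comb(n, m) / 2 ** n

def local_limit(m):   # gaussian density prediction at the lattice point m
    return math.exp(-(m - n * mu) ** 2 / (2 * n * var)) / math.sqrt(2 * math.pi * n * var)

for m in (500, 510, 530):
    print(m, exact_pmf(m), local_limit(m))
```

The two columns agree to within a fraction of a percent near the centre of the distribution.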

We also discuss other limit theorems in which the limiting distribution is something other than the normal distribution. Perhaps the most common example of these theorems is the Poisson limit theorems, in which one sums a large number of indicator variables (or approximate indicator variables), each of which is rarely non-zero, but which collectively add up to a random variable of medium-sized mean. In this case, it turns out that the limiting distribution should be a Poisson random variable; this again is an easy application of the Fourier method. Finally, we briefly discuss limit theorems for other stable laws than the normal distribution, which are suitable for summing random variables of infinite variance, such as the Cauchy distribution.
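The Poisson limit can also be seen numerically (an aside, using only the Python standard library): a sum of ${n}$ independent Bernoulli(${\lambda/n}$) indicators has an exact binomial distribution, which for large ${n}$ is close to Poisson(${\lambda}$). The parameters below are arbitrary:

```python
import math

n, lam = 5000, 3.0
p = lam / n            # each indicator variable is rarely non-zero

def binom_pmf(k):      # exact P(S_n = k) for a sum of n Bernoulli(p) indicators
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k):    # the Poisson(lam) limit
    return math.exp(-lam) * lam ** k / math.factorial(k)

for k in range(5):
    print(k, binom_pmf(k), poisson_pmf(k))
```

By Le Cam's inequality the total variation distance here is at most ${\lambda^2/n}$, consistent with the close agreement of the two columns.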

Finally, we mention a very important class of generalisations to the CLT (and to the variants of the CLT discussed in this post), in which the hypothesis of joint independence between the variables ${X_1,\dots,X_n}$ is relaxed, for instance one could assume only that the ${X_1,\dots,X_n}$ form a martingale. Many (though not all) of the proofs of the CLT extend to these more general settings, and this turns out to be important for many applications in which one does not expect joint independence. However, we will not discuss these generalisations in this course, as they are better suited for subsequent courses in this series when the theory of martingales, conditional expectation, and related tools are developed.

Let ${X_1,X_2,\dots}$ be iid copies of an absolutely integrable real scalar random variable ${X}$, and form the partial sums ${S_n := X_1 + \dots + X_n}$. As we saw in the last set of notes, the law of large numbers ensures that the empirical averages ${S_n/n}$ converge (both in probability and almost surely) to a deterministic limit, namely the mean ${\mu= {\bf E} X}$ of the reference variable ${X}$. Furthermore, under some additional moment hypotheses on the underlying variable ${X}$, we can obtain square root cancellation for the fluctuation ${\frac{S_n}{n} - \mu}$ of the empirical average from the mean. To simplify the calculations, let us first restrict to the case ${\mu=0, \sigma^2=1}$ of mean zero and variance one, thus

$\displaystyle {\bf E} X = 0$

and

$\displaystyle {\bf Var}(X) = {\bf E} X^2 = 1.$

Then, as computed in previous notes, the normalised fluctuation ${S_n/\sqrt{n}}$ also has mean zero and variance one:

$\displaystyle {\bf E} \frac{S_n}{\sqrt{n}} = 0$

$\displaystyle {\bf Var}(\frac{S_n}{\sqrt{n}}) = {\bf E} (\frac{S_n}{\sqrt{n}})^2 = 1.$

This and Chebyshev’s inequality already indicate that the “typical” size of ${S_n}$ is ${O(\sqrt{n})}$, thus for instance ${\frac{S_n}{\sqrt{n} \omega(n)}}$ goes to zero in probability for any ${\omega(n)}$ that goes to infinity as ${n \rightarrow \infty}$. If we also have a finite fourth moment ${{\bf E} |X|^4 < \infty}$, then the calculations of the previous notes also give a fourth moment estimate

$\displaystyle {\bf E} (\frac{S_n}{\sqrt{n}})^4 = 3 + O( \frac{{\bf E} |X|^4}{n} ).$
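For the model case of Rademacher signs (so that ${{\bf E} X^2 = {\bf E} |X|^4 = 1}$), this fourth moment can be computed in closed form: expanding ${S_n^4}$ over quadruples of indices, the surviving terms are the ${n}$ diagonal terms and the ${3n(n-1)}$ "two pairs" terms, giving ${{\bf E} S_n^4 = 3n^2 - 2n}$. A tiny Python sketch (an editorial aside) records this exact value:

```python
# For Rademacher signs, E S_n^4 = n * E X^4 + 3 n (n-1) (E X^2)^2 = 3n^2 - 2n,
# so the normalised fourth moment E (S_n / sqrt(n))^4 equals 3 - 2/n,
# matching the estimate 3 + O(E|X|^4 / n) above.
def normalised_fourth_moment(n):
    return (3 * n ** 2 - 2 * n) / n ** 2

print(normalised_fourth_moment(10), normalised_fourth_moment(10 ** 6))
```

The value approaches ${3}$, the fourth moment of the standard gaussian, at rate ${O(1/n)}$.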

From this and the Paley-Zygmund inequality (Exercise 42 of Notes 1) we also get some lower bound for ${\frac{S_n}{\sqrt{n}}}$ of the form

$\displaystyle {\bf P}( |\frac{S_n}{\sqrt{n}}| \geq \varepsilon ) \geq \varepsilon$

for some absolute constant ${\varepsilon>0}$ and for ${n}$ sufficiently large; this indicates in particular that ${\frac{S_n \omega(n)}{\sqrt{n}}}$ does not converge in any reasonable sense to something finite for any ${\omega(n)}$ that goes to infinity.

The question remains as to what happens to the ratio ${S_n/\sqrt{n}}$ itself, without multiplying or dividing by any factor ${\omega(n)}$. A first guess would be that these ratios converge in probability or almost surely, but this is unfortunately not the case:

Proposition 1 Let ${X_1,X_2,\dots}$ be iid copies of an absolutely integrable real scalar random variable ${X}$ with mean zero, variance one, and finite fourth moment, and write ${S_n := X_1 + \dots + X_n}$. Then the random variables ${S_n/\sqrt{n}}$ do not converge in probability or almost surely to any limit, and neither does any subsequence of these random variables.

Proof: Suppose for contradiction that some sequence ${S_{n_j}/\sqrt{n_j}}$ converged in probability or almost surely to a limit ${Y}$. By passing to a further subsequence we may assume that the convergence is in the almost sure sense. Since all of the ${S_{n_j}/\sqrt{n_j}}$ have mean zero, variance one, and bounded fourth moment, Theorem 24 of Notes 1 implies that the limit ${Y}$ also has mean zero and variance one. On the other hand, ${Y}$ is a tail random variable and is thus almost surely constant by the Kolmogorov zero-one law from Notes 3. Since constants have variance zero, we obtain the required contradiction. $\Box$

Nevertheless there is an important limit for the ratio ${S_n/\sqrt{n}}$, which requires one to replace the notions of convergence in probability or almost sure convergence by the weaker concept of convergence in distribution.

Definition 2 (Vague convergence and convergence in distribution) Let ${R}$ be a locally compact Hausdorff topological space with the Borel ${\sigma}$-algebra. A sequence of finite measures ${\mu_n}$ on ${R}$ is said to converge vaguely to another finite measure ${\mu}$ if one has

$\displaystyle \int_R G(x)\ d\mu_n(x) \rightarrow \int_R G(x)\ d\mu(x)$

as ${n \rightarrow \infty}$ for all continuous compactly supported functions ${G: R \rightarrow {\bf R}}$. (Vague convergence is also known as weak convergence, although strictly speaking the terminology weak-* convergence would be more accurate.) A sequence of random variables ${X_n}$ taking values in ${R}$ is said to converge in distribution (or converge weakly or converge in law) to another random variable ${X}$ if the distributions ${\mu_{X_n}}$ converge vaguely to the distribution ${\mu_X}$, or equivalently if

$\displaystyle {\bf E}G(X_n) \rightarrow {\bf E} G(X)$

as ${n \rightarrow \infty}$ for all continuous compactly supported functions ${G: R \rightarrow {\bf R}}$.

One could in principle try to extend this definition beyond the locally compact Hausdorff setting, but certain pathologies can occur when doing so (e.g. failure of the Riesz representation theorem), and we will never need to consider vague convergence in spaces that are not locally compact Hausdorff, so we restrict to this setting for simplicity.

Note that the notion of convergence in distribution depends only on the distribution of the random variables involved. One consequence of this is that convergence in distribution does not produce unique limits: if ${X_n}$ converges in distribution to ${X}$, and ${Y}$ has the same distribution as ${X}$, then ${X_n}$ also converges in distribution to ${Y}$. However, limits are unique up to equivalence in distribution (this is a consequence of the Riesz representation theorem, discussed for instance in this blog post). As a consequence of the insensitivity of convergence in distribution to equivalence in distribution, we may also legitimately talk about convergence of distribution of a sequence of random variables ${X_n}$ to another random variable ${X}$ even when all the random variables ${X_1,X_2,\dots}$ and ${X}$ involved are being modeled by different probability spaces (e.g. each ${X_n}$ is modeled by ${\Omega_n}$, and ${X}$ is modeled by ${\Omega}$, with no coupling presumed between these spaces). This is in contrast to the stronger notions of convergence in probability or almost sure convergence, which require all the random variables to be modeled by a common probability space. Also, by an abuse of notation, we can say that a sequence ${X_n}$ of random variables converges in distribution to a probability measure ${\mu}$, when ${\mu_{X_n}}$ converges vaguely to ${\mu}$. Thus we can talk about a sequence of random variables converging in distribution to a uniform distribution, a gaussian distribution, etc..

From the dominated convergence theorem (available for both convergence in probability and almost sure convergence) we see that convergence in probability or almost sure convergence implies convergence in distribution. The converse is not true, due to the insensitivity of convergence in distribution to equivalence in distribution; for instance, if ${X_1,X_2,\dots}$ are iid copies of a non-deterministic scalar random variable ${X}$, then the ${X_n}$ trivially converge in distribution to ${X}$, but will not converge in probability or almost surely (as one can see from the zero-one law). However, there are some partial converses that relate convergence in distribution to convergence in probability; see Exercise 10 below.

Remark 3 The notion of convergence in distribution is somewhat similar to the notion of convergence in the sense of distributions that arises in distribution theory (discussed for instance in this previous blog post), however strictly speaking the two notions of convergence are distinct and should not be confused with each other, despite the very similar names.

The notion of convergence in distribution simplifies in the case of real scalar random variables:

Proposition 4 Let ${X_1,X_2,\dots}$ be a sequence of scalar random variables, and let ${X}$ be another scalar random variable. Then the following are equivalent:

• (i) ${X_n}$ converges in distribution to ${X}$.
• (ii) ${F_{X_n}(t)}$ converges to ${F_X(t)}$ for each continuity point ${t}$ of ${F_X}$ (i.e. for all real numbers ${t \in {\bf R}}$ at which ${F_X}$ is continuous). Here ${F_X(t) := {\bf P}(X \leq t)}$ is the cumulative distribution function of ${X}$.

Proof: First suppose that ${X_n}$ converges in distribution to ${X}$, and ${F_X}$ is continuous at ${t}$. For any ${\varepsilon > 0}$, one can find a ${\delta}$ such that

$\displaystyle F_X(t) - \varepsilon \leq F_X(t') \leq F_X(t) + \varepsilon$

for every ${t' \in [t-\delta,t+\delta]}$. One can also find an ${N}$ larger than ${|t|+\delta}$ such that ${F_X(-N) \leq \varepsilon}$ and ${F_X(N) \geq 1-\varepsilon}$. Thus

$\displaystyle {\bf P} (|X| \geq N ) = O(\varepsilon)$

and

$\displaystyle {\bf P} (|X - t| \leq \delta ) = O(\varepsilon).$

Let ${G: {\bf R} \rightarrow [0,1]}$ be a continuous function supported on ${[-2N, t]}$ that equals ${1}$ on ${[-N, t-\delta]}$. Then by the above discussion we have

$\displaystyle {\bf E} G(X) = F_X(t) + O(\varepsilon)$

and hence

$\displaystyle {\bf E} G(X_n) = F_X(t) + O(\varepsilon)$

for large enough ${n}$. In particular

$\displaystyle {\bf P}( X_n \leq t ) \geq F_X(t) - O(\varepsilon).$

A similar argument, replacing ${G}$ with a continuous function supported on ${[t,2N]}$ that equals ${1}$ on ${[t+\delta,N]}$ gives

$\displaystyle {\bf P}( X_n > t ) \geq 1 - F_X(t) - O(\varepsilon)$

for ${n}$ large enough. Putting the two estimates together gives

$\displaystyle F_{X_n}(t) = F_X(t) + O(\varepsilon)$

for ${n}$ large enough; sending ${\varepsilon \rightarrow 0}$, we obtain the claim.

Conversely, suppose that ${F_{X_n}(t)}$ converges to ${F_X(t)}$ at every continuity point ${t}$ of ${F_X}$. Let ${G: {\bf R} \rightarrow {\bf R}}$ be a continuous compactly supported function; then it is uniformly continuous. As ${F_X}$ is monotone increasing, it can only have countably many points of discontinuity. From these two facts one can find, for any ${\varepsilon>0}$, a simple function ${G_\varepsilon(t) = \sum_{i=1}^n c_i 1_{(t_i,t_{i+1}]}}$ for some ${t_1 < \dots < t_{n+1}}$ that are points of continuity of ${F_X}$, and real numbers ${c_i}$, such that ${|G(t) - G_\varepsilon(t)| \leq \varepsilon}$ for all ${t}$. Thus

$\displaystyle {\bf E} G(X_n) = {\bf E} G_\varepsilon(X_n) + O(\varepsilon)$

$\displaystyle = \sum_{i=1}^n c_i(F_{X_n}(t_{i+1}) - F_{X_n}(t_i)) + O(\varepsilon).$

Similarly for ${X_n}$ replaced by ${X}$. Subtracting and taking limit superior, we conclude that

$\displaystyle \limsup_{n \rightarrow \infty} |{\bf E} G(X_n) - {\bf E} G(X)| = O(\varepsilon),$

and on sending ${\varepsilon \rightarrow 0}$, we obtain that ${X_n}$ converges in distribution to ${X}$ as claimed. $\Box$

The restriction to continuity points of ${t}$ is necessary. Consider for instance the deterministic random variables ${X_n = 1/n}$, then ${X_n}$ converges almost surely (and hence in distribution) to ${0}$, but ${F_{X_n}(0) = 0}$ does not converge to ${F_X(0)=1}$.

Example 5 For any natural number ${n}$, let ${X_n}$ be a discrete random variable drawn uniformly from the finite set ${\{0/n, 1/n, \dots, (n-1)/n\}}$, and let ${X}$ be the continuous random variable drawn uniformly from ${[0,1]}$. Then ${X_n}$ converges in distribution to ${X}$. Thus we see that a continuous random variable can emerge as the limit of discrete random variables.

Example 6 For any natural number ${n}$, let ${X_n}$ be a continuous random variable drawn uniformly from ${[0,1/n]}$, then ${X_n}$ converges in distribution to the deterministic real number ${0}$. Thus we see that discrete (or even deterministic) random variables can emerge as the limit of continuous random variables.
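Example 5 can be checked against criterion (ii) of Proposition 4 directly (an editorial aside, pure Python): the cumulative distribution function of the discrete uniform variable converges pointwise to that of the uniform distribution on ${[0,1]}$, in fact uniformly with gap ${O(1/n)}$. The test points ${k/997}$ below are an arbitrary grid:

```python
import math

def F_n(n, t):   # CDF of the uniform distribution on {0/n, 1/n, ..., (n-1)/n}
    if t < 0:
        return 0.0
    return min(n, math.floor(n * t) + 1) / n

def F(t):        # CDF of the uniform distribution on [0, 1]
    return min(1.0, max(0.0, t))

for n in (10, 100, 1000):
    sup_gap = max(abs(F_n(n, k / 997) - F(k / 997)) for k in range(998))
    print(n, sup_gap)
```

The printed gaps shrink like ${1/n}$, consistent with convergence in distribution.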

Exercise 7 (Portmanteau theorem) Show that the properties (i) and (ii) in Proposition 4 are also equivalent to the following three statements:

• (iii) One has ${\limsup_{n \rightarrow \infty} {\bf P}( X_n \in K ) \leq {\bf P}(X \in K)}$ for all closed sets ${K \subset {\bf R}}$.
• (iv) One has ${\liminf_{n \rightarrow \infty} {\bf P}( X_n \in U ) \geq {\bf P}(X \in U)}$ for all open sets ${U \subset {\bf R}}$.
• (v) For any Borel set ${E \subset {\bf R}}$ whose topological boundary ${\partial E}$ is such that ${{\bf P}(X \in \partial E) = 0}$, one has ${\lim_{n \rightarrow \infty} {\bf P}(X_n \in E) = {\bf P}(X \in E)}$.

(Note: to prove this theorem, you may wish to invoke Urysohn’s lemma. To deduce (iii) from (i), you may wish to start with the case of compact ${K}$.)

We can now state the famous central limit theorem:

Theorem 8 (Central limit theorem) Let ${X_1,X_2,\dots}$ be iid copies of a scalar random variable ${X}$ of finite mean ${\mu := {\bf E} X}$ and finite non-zero variance ${\sigma^2 := {\bf Var}(X)}$. Let ${S_n := X_1 + \dots + X_n}$. Then the random variables ${\frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu)}$ converge in distribution to a random variable with the standard normal distribution ${N(0,1)}$ (that is to say, a random variable with probability density function ${x \mapsto \frac{1}{\sqrt{2\pi}} e^{-x^2/2}}$). Thus, by abuse of notation,

$\displaystyle \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \rightarrow N(0,1).$

In the normalised case ${\mu=0, \sigma^2=1}$ when ${X}$ has mean zero and unit variance, this simplifies to

$\displaystyle \frac{S_n}{\sqrt{n}} \rightarrow N(0,1).$

Using Proposition 4 (and the fact that the cumulative distribution function associated to ${N(0,1)}$ is continuous), the central limit theorem is equivalent to asserting that

$\displaystyle {\bf P}( \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq t ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx$

as ${n \rightarrow \infty}$ for any ${t \in {\bf R}}$, or equivalently that

$\displaystyle {\bf P}( a \leq \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx.$

Informally, one can think of the central limit theorem as asserting that ${S_n}$ approximately behaves like it has distribution ${N( n \mu, n \sigma^2 )}$ for large ${n}$, where ${N(\mu,\sigma^2)}$ is the normal distribution with mean ${\mu}$ and variance ${\sigma^2}$, that is to say the distribution with probability density function ${x \mapsto \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2}}$. The integrals ${\frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx}$ can be written in terms of the error function ${\hbox{erf}}$ as ${\frac{1}{2} + \frac{1}{2} \hbox{erf}(t/\sqrt{2})}$.
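The identity expressing the gaussian integral via the error function can be checked numerically (an aside; pure Python, using `math.erf` from the standard library, with ${t = 1.96}$ an arbitrary test point):

```python
import math

def Phi(t):   # standard normal CDF, written via the error function
    return 0.5 + 0.5 * math.erf(t / math.sqrt(2))

def Phi_midpoint(t, steps=200_000):   # direct midpoint-rule integration of the density
    a = -10.0                         # truncating the lower limit at -10 is harmless
    h = (t - a) / steps
    total = sum(math.exp(-((a + (i + 0.5) * h) ** 2) / 2) for i in range(steps))
    return total * h / math.sqrt(2 * math.pi)

print(Phi(0.0), Phi(1.96), Phi_midpoint(1.96))
```

The two evaluations of the gaussian integral agree to high accuracy, confirming ${\frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\, dx = \frac{1}{2} + \frac{1}{2}\hbox{erf}(t/\sqrt{2})}$.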

The central limit theorem is a basic example of the universality phenomenon in probability – many statistics involving a large system of many independent (or weakly dependent) variables (such as the normalised sums ${\frac{\sqrt{n}}{\sigma}(\frac{S_n}{n}-\mu)}$) end up having a universal asymptotic limit (in this case, the normal distribution), regardless of the precise makeup of the underlying random variable ${X}$ that comprised that system. Indeed, the universality of the normal distribution is such that it arises in many other contexts than the fluctuation of iid random variables; the central limit theorem is merely the first place in probability theory where it makes a prominent appearance.

We will give several proofs of the central limit theorem in these notes; each of these proofs has its advantages and disadvantages, and each can extend to prove many further results beyond the central limit theorem. We first give Lindeberg’s proof of the central limit theorem, based on exchanging (or swapping) each component ${X_1,\dots,X_n}$ of the sum ${S_n}$ in turn. This proof gives an accessible explanation as to why there should be a universal limit for the central limit theorem; one then computes directly with gaussians to verify that it is the normal distribution which is the universal limit. Our second proof is the most popular one taught in probability texts, namely the Fourier-analytic proof based around the concept of the characteristic function ${t \mapsto {\bf E} e^{itX}}$ of a real random variable ${X}$. Thanks to the powerful identities and other results of Fourier analysis, this gives a quite short and direct proof of the central limit theorem, although the arguments may seem rather magical to readers who are not already familiar with Fourier methods. Finally, we give a proof based on the moment method, in the spirit of the arguments in the previous notes; this argument is more combinatorial, but is straightforward and particularly robust, being well equipped to handle some dependencies between components; we will illustrate this by proving the Erdos-Kac law in number theory by this method. Some further discussion of the central limit theorem (including some further proofs, such as one based on Stein’s method) can be found in this blog post. Some further variants of the central limit theorem, such as local limit theorems, stable laws, and large deviation inequalities, will be discussed in the next (and final) set of notes.
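The mechanism behind the Fourier-analytic proof can be previewed numerically (an aside, not from the notes): for Rademacher signs one has ${{\bf E} e^{itX} = \cos t}$, so by independence the characteristic function of ${S_n/\sqrt{n}}$ is ${\cos(t/\sqrt{n})^n}$, which converges to the gaussian characteristic function ${e^{-t^2/2}}$. A pure-Python check at the arbitrary test point ${t = 1}$:

```python
import math

def char_fn(n, t):
    # characteristic function of S_n/sqrt(n) for Rademacher X: E e^{itX} = cos(t),
    # so by independence E e^{it S_n / sqrt(n)} = cos(t / sqrt(n))^n
    return math.cos(t / math.sqrt(n)) ** n

for n in (10, 100, 10_000):
    print(n, char_fn(n, 1.0), math.exp(-0.5))   # e^{-t^2/2} at t = 1
```

The convergence is visibly of speed ${O(1/n)}$, coming from the Taylor expansion ${\cos(t/\sqrt{n}) = 1 - t^2/2n + O(1/n^2)}$.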

The following exercise illustrates the power of the central limit theorem, by establishing combinatorial estimates which would otherwise require the use of Stirling’s formula to establish.

Exercise 9 (De Moivre-Laplace theorem) Let ${X}$ be a Bernoulli random variable, taking values in ${\{0,1\}}$ with ${{\bf P}(X=0)={\bf P}(X=1)=1/2}$, thus ${X}$ has mean ${1/2}$ and variance ${1/4}$. Let ${X_1,X_2,\dots}$ be iid copies of ${X}$, and write ${S_n := X_1+\dots+X_n}$.

• (i) Show that ${S_n}$ takes values in ${\{0,\dots,n\}}$ with ${{\bf P}(S_n=i) = \frac{1}{2^n} \binom{n}{i}}$. (This is an example of a binomial distribution.)
• (ii) Assume Stirling’s formula

$\displaystyle n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n} \ \ \ \ \ (1)$

where ${o(1)}$ is a function of ${n}$ that goes to zero as ${n \rightarrow \infty}$. (A proof of this formula may be found in this previous blog post.) Using this formula, and without using the central limit theorem, show that

$\displaystyle {\bf P}( a \leq 2\sqrt{n} (\frac{S_n}{n} - \frac{1}{2}) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx$

as ${n \rightarrow \infty}$ for any fixed real numbers ${a < b}$.

The above special case of the central limit theorem was first established by de Moivre and Laplace.
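The de Moivre-Laplace limit can be observed directly with exact binomial arithmetic (an editorial aside; pure Python, with ${a = -1}$, ${b = 1}$ as arbitrary endpoints):

```python
import math

def window_prob(n, a, b):
    # exact P( a <= 2*sqrt(n)*(S_n/n - 1/2) <= b ) for n fair coin flips
    lo = math.ceil(n / 2 + a * math.sqrt(n) / 2)
    hi = math.floor(n / 2 + b * math.sqrt(n) / 2)
    return sum(math.comb(n, i) for i in range(lo, hi + 1)) / 2 ** n

def gaussian_window(a, b):
    # the limiting value (2*pi)^{-1/2} * integral_a^b e^{-x^2/2} dx, via erf
    return 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))

for n in (100, 10_000):
    print(n, window_prob(n, -1.0, 1.0), gaussian_window(-1.0, 1.0))
```

For ${n = 10^4}$ the exact binomial probability is already within about half a percent of the gaussian value ${\approx 0.6827}$; the residual discrepancy is a lattice (continuity-correction) effect of size ${O(1/\sqrt{n})}$.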

We close this section with some basic facts about convergence of distribution that will be useful in the sequel.

Exercise 10 Let ${X_1,X_2,\dots}$, ${Y_1,Y_2,\dots}$ be sequences of real random variables, and let ${X,Y}$ be further real random variables.

• (i) If ${X}$ is deterministic, show that ${X_n}$ converges in distribution to ${X}$ if and only if ${X_n}$ converges in probability to ${X}$.
• (ii) Suppose that ${X_n}$ is independent of ${Y_n}$ for each ${n}$, and ${X}$ independent of ${Y}$. Show that ${X_n+iY_n}$ converges in distribution to ${X+iY}$ if and only if ${X_n}$ converges in distribution to ${X}$ and ${Y_n}$ converges in distribution to ${Y}$. (The shortest way to prove this is by invoking the Stone-Weierstrass theorem, but one can also proceed by proving some version of Proposition 4.) What happens if the independence hypothesis is dropped?
• (iii) If ${X_n}$ converges in distribution to ${X}$, show that for every ${\varepsilon>0}$ there exists ${K>0}$ such that ${{\bf P}( |X_n| \geq K ) < \varepsilon}$ for all sufficiently large ${n}$. (That is to say, ${X_n}$ is a tight sequence of random variables.)
• (iv) Show that ${X_n}$ converges in distribution to ${X}$ if and only if, after extending the probability space model if necessary, one can find copies ${Z_1,Z_2,\dots}$ and ${Z}$ of ${X_1,X_2,\dots}$ and ${X}$ respectively such that ${Z_n}$ converges almost surely to ${Z}$. (Hint: use the Skorohod representation, Exercise 29 of Notes 0.)
• (v) If ${X_1,X_2,\dots}$ converges in distribution to ${X}$, and ${F: {\bf R} \rightarrow {\bf R}}$ is continuous, show that ${F(X_1),F(X_2),\dots}$ converges in distribution to ${F(X)}$. Generalise this claim to the case when ${X}$ takes values in an arbitrary locally compact Hausdorff space.
• (vi) (Slutsky’s theorem) If ${X_n}$ converges in distribution to ${X}$, and ${Y_n}$ converges in probability to a deterministic limit ${Y}$, show that ${X_n+Y_n}$ converges in distribution to ${X+Y}$, and ${X_n Y_n}$ converges in distribution to ${XY}$. (Hint: either use (iv), or else use (iii) to control some error terms.) This statement combines particularly well with (i). What happens if ${Y}$ is not assumed to be deterministic?
• (vii) (Fatou lemma) If ${G: {\bf R} \rightarrow [0,+\infty)}$ is continuous, and ${X_n}$ converges in distribution to ${X}$, show that ${\liminf_{n \rightarrow \infty} {\bf E} G(X_n) \geq {\bf E} G(X)}$.
• (viii) (Bounded convergence) If ${G: {\bf R} \rightarrow {\bf R}}$ is continuous and bounded, and ${X_n}$ converges in distribution to ${X}$, show that ${\lim_{n \rightarrow \infty} {\bf E} G(X_n) = {\bf E} G(X)}$.
• (ix) (Dominated convergence) If ${X_n}$ converges in distribution to ${X}$, and there is an absolutely integrable ${Y}$ such that ${|X_n| \leq Y}$ almost surely for all ${n}$, show that ${\lim_{n \rightarrow \infty} {\bf E} X_n = {\bf E} X}$.
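Part (vi), Slutsky's theorem, can be illustrated by simulation (an aside, assuming numpy; all parameters arbitrary): take ${X_n}$ to be a normalised sum of coin flips, which tends to ${N(0,1)}$ in distribution by the central limit theorem, and ${Y_n}$ a variable concentrating at the deterministic value ${2}$; the sum should then approximate ${N(2,1)}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 500, 50_000

# X_n -> N(0,1) in distribution (CLT for Rademacher signs),
# Y_n -> 2 in probability; Slutsky predicts X_n + Y_n -> N(2, 1).
flips = rng.integers(0, 2, size=(trials, n), dtype=np.int8) * 2 - 1
Xn = flips.sum(axis=1) / np.sqrt(n)
Yn = 2.0 + rng.normal(0.0, 1.0, size=trials) / np.sqrt(n)   # concentrates at 2

Z = Xn + Yn
print(Z.mean(), Z.std())
```

The empirical mean and standard deviation of ${Z}$ are close to ${2}$ and ${1}$ respectively, as predicted.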

For future reference we also mention (but will not prove) Prokhorov’s theorem that gives a partial converse to part (iii) of the above exercise:

Theorem 11 (Prokhorov’s theorem) Let ${X_1,X_2,\dots}$ be a sequence of real random variables which is tight (that is, for every ${\varepsilon>0}$ there exists ${K>0}$ such that ${{\bf P}(|X_n| \geq K) < \varepsilon}$ for all sufficiently large ${n}$). Then there exists a subsequence ${X_{n_j}}$ which converges in distribution to some random variable ${X}$ (which may possibly be modeled by a different probability space model than the ${X_1,X_2,\dots}$.)

The proof of this theorem relies on the Riesz representation theorem, and is beyond the scope of this course; but see for instance Exercise 29 of this previous blog post. (See also the closely related Helly selection theorem, covered in Exercise 30 of the same post.)

Van Vu and I have just uploaded to the arXiv our paper A central limit theorem for the determinant of a Wigner matrix, submitted to Adv. Math.. It studies the asymptotic distribution of the determinant ${\det M_n}$ of a random Wigner matrix (such as a matrix drawn from the Gaussian Unitary Ensemble (GUE) or Gaussian Orthogonal Ensemble (GOE)).

Before we get to these results, let us first discuss the simpler problem of studying the determinant ${\det A_n}$ of a random iid matrix ${A_n = (\zeta_{ij})_{1 \leq i,j \leq n}}$, such as a real gaussian matrix (where all entries are independently and identically distributed using the standard real normal distribution ${\zeta_{ij} \equiv N(0,1)_{\bf R}}$), a complex gaussian matrix (where all entries are independently and identically distributed using the standard complex normal distribution ${\zeta_{ij} \equiv N(0,1)_{\bf C}}$, thus the real and imaginary parts are independent with law ${N(0,1/2)_{\bf R}}$), or the random sign matrix (in which all entries are independently and identically distributed according to the Bernoulli distribution ${\zeta_{ij} \equiv \pm 1}$, with a ${1/2}$ chance of either sign). More generally, one can consider a matrix ${A_n}$ in which all the entries ${\zeta_{ij}}$ are independently and identically distributed with mean zero and variance ${1}$.

We can expand ${\det A_n}$ using the Leibniz expansion

$\displaystyle \det A_n = \sum_{\sigma \in S_n} I_\sigma, \ \ \ \ \ (1)$

where ${\sigma: \{1,\ldots,n\} \rightarrow \{1,\ldots,n\}}$ ranges over the permutations of ${\{1,\ldots,n\}}$, and ${I_\sigma}$ is the product

$\displaystyle I_\sigma := \hbox{sgn}(\sigma) \prod_{i=1}^n \zeta_{i\sigma(i)}.$

From the iid nature of the ${\zeta_{ij}}$, we easily see that each ${I_\sigma}$ has mean zero and variance one, and that the ${I_\sigma}$ are pairwise uncorrelated as ${\sigma}$ varies. We conclude that ${\det A_n}$ has mean zero and variance ${n!}$ (an observation first made by Turán). In particular, from Chebyshev’s inequality we see that ${\det A_n}$ is typically of size ${O(\sqrt{n!})}$.
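Turán's variance computation can be checked by simulation for random sign matrices (an editorial aside, assuming numpy; the size ${n = 4}$, for which ${n! = 24}$, is chosen small so that the Monte Carlo error is manageable):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, trials = 4, 200_000

# random sign matrices: iid entries with mean zero and variance one
M = rng.choice([-1.0, 1.0], size=(trials, n, n))
dets = np.linalg.det(M)   # np.linalg.det acts on the whole stack of matrices

print(dets.mean(), dets.var(), math.factorial(n))
```

The empirical mean is near ${0}$ and the empirical variance is near ${n! = 24}$.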

It turns out, though, that this is not quite best possible. This is easiest to explain in the real gaussian case, by performing a computation first made by Goodman. In this case, the distribution of ${\det A_n}$ is clearly symmetric, so we can focus attention on the magnitude ${|\det A_n|}$. We can interpret this quantity geometrically as the volume of an ${n}$-dimensional parallelepiped whose generating vectors ${X_1,\ldots,X_n}$ are independent real gaussian vectors in ${{\bf R}^n}$ (i.e. their coefficients are iid with law ${N(0,1)_{\bf R}}$). Using the classical base-times-height formula, we thus have

$\displaystyle |\det A_n| = \prod_{i=1}^n \hbox{dist}(X_i, V_i) \ \ \ \ \ (2)$

where ${V_i}$ is the ${i-1}$-dimensional linear subspace of ${{\bf R}^n}$ spanned by ${X_1,\ldots,X_{i-1}}$ (note that ${X_1,\ldots,X_n}$, having an absolutely continuous joint distribution, are almost surely linearly independent). Taking logarithms, we conclude

$\displaystyle \log |\det A_n| = \sum_{i=1}^n \log \hbox{dist}(X_i, V_i).$
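The base-times-height identity (2) can be verified numerically (an aside, assuming numpy): a QR factorisation is Gram-Schmidt in matrix form, and the absolute values of the diagonal entries of ${R}$ are exactly the distances from each generating vector to the span of the preceding ones. The size ${n = 6}$ below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.normal(size=(n, n))   # columns X_1, ..., X_n are iid real gaussian vectors

# |R[i,i]| = dist(X_i, V_i), the distance from the i-th column
# to the span V_i of the previous columns
Q, R = np.linalg.qr(A)
dists = np.abs(np.diag(R))

print(abs(np.linalg.det(A)), dists.prod())   # the two quantities agree
```

This is exactly the decomposition used in Goodman's computation, with the ${\chi}$ distributions arising as the laws of the individual ${|R_{ii}|}$.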

Now, we take advantage of a fundamental symmetry property of the Gaussian vector distribution, namely its invariance with respect to the orthogonal group ${O(n)}$. Because of this, we see that if we fix ${X_1,\ldots,X_{i-1}}$ (and thus ${V_i}$), the random variable ${\hbox{dist}(X_i,V_i)}$ has the same distribution as ${\hbox{dist}(X_i,{\bf R}^{i-1})}$, or equivalently the ${\chi}$ distribution

$\displaystyle \chi_{n-i+1} := (\sum_{j=1}^{n-i+1} \xi_{n-i+1,j}^2)^{1/2}$

where ${\xi_{n-i+1,1},\ldots,\xi_{n-i+1,n-i+1}}$ are iid copies of ${N(0,1)_{\bf R}}$. As this distribution does not depend on the ${X_1,\ldots,X_{i-1}}$, we conclude that the law of ${\log |\det A_n|}$ is given by the sum of ${n}$ independent ${\chi}$-variables:

$\displaystyle \log |\det A_n| \equiv \sum_{j=1}^{n} \log \chi_j.$

A standard computation shows that each ${\chi_j^2}$ has mean ${j}$ and variance ${2j}$, and then a Taylor series (or Ito calculus) computation (using concentration of measure tools to control tails) shows that ${\log \chi_j}$ has mean ${\frac{1}{2} \log j - \frac{1}{2j} + O(1/j^{3/2})}$ and variance ${\frac{1}{2j}+O(1/j^{3/2})}$. As such, ${\log |\det A_n|}$ has mean ${\frac{1}{2} \log n! - \frac{1}{2} \log n + O(1)}$ and variance ${\frac{1}{2} \log n + O(1)}$. Applying a suitable version of the central limit theorem, one obtains the asymptotic law

$\displaystyle \frac{\log |\det A_n| - \frac{1}{2} \log n! + \frac{1}{2} \log n}{\sqrt{\frac{1}{2}\log n}} \rightarrow N(0,1)_{\bf R}, \ \ \ \ \ (3)$

where ${\rightarrow}$ denotes convergence in distribution. A bit more informally, we have

$\displaystyle |\det A_n| \approx n^{-1/2} \sqrt{n!} \exp( N( 0, \log n / 2 )_{\bf R} ) \ \ \ \ \ (4)$

when ${A_n}$ is a real gaussian matrix; thus, for instance, the median value of ${|\det A_n|}$ is ${n^{-1/2+o(1)} \sqrt{n!}}$. At first glance, this appears to conflict with the second moment bound ${\mathop{\bf E} |\det A_n|^2 = n!}$ of Turán mentioned earlier, but once one recalls that ${\exp(N(0,t)_{\bf R})}$ has a second moment of ${\exp(2t)}$, we see that the two facts are in fact perfectly consistent; the upper tail of the normal distribution in the exponent in (4) ends up dominating the second moment.
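The mean and variance of ${\log \chi_j}$ used above have exact closed forms via the digamma and trigamma functions: since ${\chi_j^2}$ is a chi-squared variable with ${j}$ degrees of freedom, ${{\bf E} \log \chi_j = \frac{1}{2}(\psi(j/2) + \log 2)}$ and ${{\bf Var} \log \chi_j = \frac{1}{4}\psi'(j/2)}$. The following sketch (assuming scipy is available) checks the ${\frac{1}{2}\log j - \frac{1}{2j}}$ and ${\frac{1}{2j}}$ approximations against these closed forms:

```python
import numpy as np
from scipy.special import digamma, polygamma

# exact moments of log chi_j from the chi^2_j law: E log chi^2_j = psi(j/2) + log 2
# and Var log chi^2_j = psi'(j/2); for log chi_j divide by 2 and 4 respectively
for j in [10, 100, 1000]:
    mean_exact = 0.5 * (digamma(j / 2) + np.log(2))
    var_exact = 0.25 * polygamma(1, j / 2)
    mean_approx = 0.5 * np.log(j) - 1 / (2 * j)
    var_approx = 1 / (2 * j)
    # the errors are in fact O(1/j^2), comfortably inside the O(1/j^{3/2}) claim
    assert abs(mean_exact - mean_approx) <= 2 * j ** -1.5
    assert abs(var_exact - var_approx) <= 2 * j ** -1.5
```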

It turns out that the central limit theorem (3) is valid for any real iid matrix with mean zero, variance one, and an exponential decay condition on the entries; this was first claimed by Girko, though the arguments in that paper appear to be incomplete. Another proof of this result, with more quantitative bounds on the convergence rate, has recently been obtained by Hoi Nguyen and Van Vu. The basic idea in these arguments is to express the sum of logarithms arising from (2) in terms of a martingale and apply the martingale central limit theorem.

If one works with complex gaussian random matrices instead of real gaussian random matrices, the above computations change slightly (one has to replace the real ${\chi}$ distribution with the complex ${\chi}$ distribution, in which the ${\xi_{i,j}}$ are distributed according to the complex gaussian ${N(0,1)_{\bf C}}$ instead of the real one). At the end of the day, one ends up with the law

$\displaystyle \frac{\log |\det A_n| - \frac{1}{2} \log n! + \frac{1}{4} \log n}{\sqrt{\frac{1}{4}\log n}} \rightarrow N(0,1)_{\bf R}, \ \ \ \ \ (5)$

$\displaystyle |\det A_n| \approx n^{-1/4} \sqrt{n!} \exp( N( 0, \log n / 4 )_{\bf R} ) \ \ \ \ \ (6)$

(but note that this new asymptotic is still consistent with Turán’s second moment calculation).

We can now turn to the results of our paper. Here, we replace the iid matrices ${A_n}$ by Wigner matrices ${M_n = (\zeta_{ij})_{1 \leq i,j \leq n}}$, which are defined similarly but are constrained to be Hermitian (or real symmetric), thus ${\zeta_{ij} = \overline{\zeta_{ji}}}$ for all ${i,j}$. Model examples here include the Gaussian Unitary Ensemble (GUE), in which ${\zeta_{ij} \equiv N(0,1)_{\bf C}}$ for ${1 \leq i < j \leq n}$ and ${\zeta_{ij} \equiv N(0,1)_{\bf R}}$ for ${1 \leq i=j \leq n}$, the Gaussian Orthogonal Ensemble (GOE), in which ${\zeta_{ij} \equiv N(0,1)_{\bf R}}$ for ${1 \leq i < j \leq n}$ and ${\zeta_{ij} \equiv N(0,2)_{\bf R}}$ for ${1 \leq i=j \leq n}$, and the symmetric Bernoulli ensemble, in which ${\zeta_{ij} \equiv \pm 1}$ for ${1 \leq i \leq j \leq n}$ (with probability ${1/2}$ of either sign). In all cases, the upper triangular entries of the matrix are assumed to be jointly independent. For a more precise definition of the Wigner matrix ensembles we are considering, see the introduction to our paper.

The determinants ${\det M_n}$ of these matrices still have a Leibniz expansion. However, in the Wigner case, the mean and variance of the ${I_\sigma}$ are slightly different, and what is worse, they are not all pairwise uncorrelated any more. For instance, the mean of ${I_\sigma}$ is still usually zero, but equals ${(-1)^{n/2}}$ in the exceptional case when ${\sigma}$ is a perfect matching (i.e. the union of exactly ${n/2}$ ${2}$-cycles, a possibility that can of course only happen when ${n}$ is even). As such, the mean ${\mathop{\bf E} \det M_n}$ still vanishes when ${n}$ is odd, but for even ${n}$ it is equal to

$\displaystyle (-1)^{n/2} \frac{n!}{(n/2)!2^{n/2}}$

(the fraction here simply being the number of perfect matchings on ${n}$ vertices). Using Stirling’s formula, one then computes that ${|\mathop{\bf E} \det M_n|}$ is comparable to ${n^{-1/4} \sqrt{n!}}$ when ${n}$ is large and even. The second moment calculation is more complicated (and uses facts about the distribution of cycles in random permutations, mentioned in this previous post), but one can compute that ${\mathop{\bf E} |\det M_n|^2}$ is comparable to ${n^{1/2} n!}$ for GUE and ${n^{3/2} n!}$ for GOE. (The discrepancy here comes from the fact that in the GOE case, ${I_\sigma}$ and ${I_\rho}$ can correlate when ${\rho}$ contains reversals of ${k}$-cycles of ${\sigma}$ for ${k \geq 3}$, but this does not happen in the GUE case.) For GUE, much more precise asymptotics for the moments of the determinant are known, starting from the work of Brezin and Hikami, though we do not need these more sophisticated computations here.
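For the symmetric Bernoulli ensemble, the mean formula can be verified exactly for small even ${n}$ by enumerating all sign patterns of the upper triangle; for ${n=4}$ the prediction is ${(-1)^{2} \cdot 4!/(2! \cdot 2^2) = 3}$. A small brute-force sketch:

```python
import itertools
from math import factorial
import numpy as np

n = 4
iu = [(i, j) for i in range(n) for j in range(i, n)]  # 10 upper-triangular slots

# average det M_n over all 2^10 symmetric +-1 sign patterns
total = 0.0
for signs in itertools.product([-1, 1], repeat=len(iu)):
    M = np.zeros((n, n))
    for (i, j), s in zip(iu, signs):
        M[i, j] = M[j, i] = s
    total += np.linalg.det(M)
mean_det = total / 2 ** len(iu)

matchings = factorial(n) // (factorial(n // 2) * 2 ** (n // 2))  # = 3
assert round(mean_det) == (-1) ** (n // 2) * matchings
```

Only the three perfect matchings of ${\{1,2,3,4\}}$ survive the averaging, exactly as the Leibniz-expansion argument predicts.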

Our main results are then as follows.

Theorem 1 Let ${M_n}$ be a Wigner matrix.

• If ${M_n}$ is drawn from GUE, then

$\displaystyle \frac{\log |\det M_n| - \frac{1}{2} \log n! + \frac{1}{4} \log n}{\sqrt{\frac{1}{2}\log n}} \rightarrow N(0,1)_{\bf R}.$

• If ${M_n}$ is drawn from GOE, then

$\displaystyle \frac{\log |\det M_n| - \frac{1}{2} \log n! + \frac{1}{4} \log n}{\sqrt{\log n}} \rightarrow N(0,1)_{\bf R}.$

• The previous two results also hold for more general Wigner matrices, assuming that the real and imaginary parts are independent, a finite moment condition is satisfied, and the entries match moments with those of GOE or GUE to fourth order. (See the paper for a more precise formulation of the result.)

Thus, we informally have

$\displaystyle |\det M_n| \approx n^{-1/4} \sqrt{n!} \exp( N( 0, \log n / 2 )_{\bf R} )$

when ${M_n}$ is drawn from GUE, or from another Wigner ensemble matching GUE to fourth order (and obeying some additional minor technical hypotheses); and

$\displaystyle |\det M_n| \approx n^{-1/4} \sqrt{n!} \exp( N( 0, \log n )_{\bf R} )$

when ${M_n}$ is drawn from GOE, or from another Wigner ensemble matching GOE to fourth order. Again, these asymptotic limiting distributions are consistent with the asymptotic behaviour for the second moments.

The extension from the GUE or GOE case to more general Wigner ensembles is a fairly routine application of the four moment theorem for Wigner matrices, although for various technical reasons we do not quite use the existing four moment theorems in the literature, but adapt them to the log determinant. The main idea is to express the log-determinant as an integral

$\displaystyle \log|\det M_n| = \frac{1}{2} n \log n - n \hbox{Im} \int_0^\infty s(\sqrt{-1}\eta)\ d\eta \ \ \ \ \ (7)$

of the Stieltjes transform

$\displaystyle s(z) := \frac{1}{n} \hbox{tr}( \frac{1}{\sqrt{n}} M_n - z )^{-1}$

of ${M_n}$. Strictly speaking, the integral in (7) is divergent at infinity (and also can be ill-behaved near zero), but this can be addressed by standard truncation and renormalisation arguments (combined with known facts about the least singular value of Wigner matrices), which we omit here. We then use a variant of the four moment theorem for the Stieltjes transform, as used by Erdos, Yau, and Yin (based on a previous four moment theorem for individual eigenvalues introduced by Van Vu and myself). The four moment theorem is proven by the now-standard Lindeberg exchange method, combined with the usual resolvent identities to control the behaviour of the resolvent (and hence the Stieltjes transform) with respect to modifying one or two entries, together with the delocalisation of eigenvector property (which in turn arises from local semicircle laws) to control the error terms.
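The identity (7) can be checked numerically after truncating the integral at a large ${T}$; one concrete way to regularise is to add back the ${n \log T}$ counterterm coming from the ${\hbox{Im}\, s(\sqrt{-1}\eta) \approx 1/\eta}$ behaviour at infinity. A rough sketch (the GOE-type test matrix, logarithmic grid, and truncation are illustrative choices, not the renormalisation used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
G = rng.standard_normal((n, n))
M = (G + G.T) / np.sqrt(2)                 # a GOE-type real symmetric matrix
lam = np.linalg.eigvalsh(M / np.sqrt(n))   # eigenvalues entering s(z)

# Im s(i*eta) on a logarithmic grid, where s(z) = (1/n) tr(M/sqrt(n) - z)^{-1}
u = np.linspace(np.log(1e-8), np.log(1e8), 100001)
eta = np.exp(u)
im_s = np.mean(eta[None, :] / (lam[:, None] ** 2 + eta[None, :] ** 2), axis=0)

# trapezoidal rule for int_0^T Im s(i*eta) d eta (note d eta = eta du), with
# the logarithmic divergence at eta -> infinity cancelled by n*log(T)
f = im_s * eta
du = u[1] - u[0]
integral = du * (f.sum() - 0.5 * (f[0] + f[-1]))
logdet_via_s = 0.5 * n * np.log(n) - n * integral + n * np.log(eta[-1])

assert abs(logdet_via_s - np.linalg.slogdet(M)[1]) < 1e-3
```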

Somewhat surprisingly (to us, at least), it turned out that it was the first part of the theorem (namely, the verification of the limiting law for the invariant ensembles GUE and GOE) that was more difficult than the extension to the Wigner case. Even in an ensemble as highly symmetric as GUE, the rows are no longer independent, and the formula (2) is basically useless for getting any non-trivial control on the log determinant. There is an explicit formula for the joint distribution of the eigenvalues of GUE (or GOE), which does eventually give the cumulants of the log determinant, which then give the required central limit theorem; but this is a lengthy computation, first performed by Delannay and Le Caer.

Following a suggestion of my colleague, Rowan Killip, we give an alternate proof of this central limit theorem in the GUE and GOE cases, by using a beautiful observation of Trotter, namely that the GUE or GOE ensemble can be conjugated into a tractable tridiagonal form. Let me state it just for GUE:

Proposition 2 (Tridiagonal form of GUE) Let ${M'_n}$ be the random tridiagonal real symmetric matrix

$\displaystyle M'_n = \begin{pmatrix} a_1 & b_1 & 0 & \ldots & 0 & 0 \\ b_1 & a_2 & b_2 & \ldots & 0 & 0 \\ 0 & b_2 & a_3 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & a_{n-1} & b_{n-1} \\ 0 & 0 & 0 & \ldots & b_{n-1} & a_n \end{pmatrix}$

where the ${a_1,\ldots,a_n, b_1,\ldots,b_{n-1}}$ are jointly independent real random variables, with ${a_1,\ldots,a_n \equiv N(0,1)_{\bf R}}$ being standard real Gaussians, and each ${b_i}$ having a ${\chi}$-distribution:

$\displaystyle b_i = (\sum_{j=1}^i |z_{i,j}|^2)^{1/2}$

where ${z_{i,j} \equiv N(0,1)_{\bf C}}$ are iid complex gaussians. Let ${M_n}$ be drawn from GUE. Then the joint eigenvalue distribution of ${M_n}$ is identical to the joint eigenvalue distribution of ${M'_n}$.

Proof: Let ${M_n}$ be drawn from GUE. We can write

$\displaystyle M_n = \begin{pmatrix} M_{n-1} & X_n \\ X_n^* & a_n \end{pmatrix}$

where ${M_{n-1}}$ is drawn from the ${(n-1) \times (n-1)}$ GUE, ${a_n \equiv N(0,1)_{\bf R}}$, and ${X_n \in {\bf C}^{n-1}}$ is a random gaussian vector with all entries iid with distribution ${N(0,1)_{\bf C}}$. Furthermore, ${M_{n-1}, X_n, a_n}$ are jointly independent.

We now apply the tridiagonal matrix algorithm. Let ${b_{n-1} := |X_n|}$; then ${b_{n-1}}$ has the ${\chi}$-distribution indicated in the proposition. We then conjugate ${M_n}$ by a unitary matrix ${U}$ that preserves the final basis vector ${e_n}$, and maps ${X_n}$ to ${b_{n-1} e_{n-1}}$. Then we have

$\displaystyle U M_n U^* = \begin{pmatrix} \tilde M_{n-1} & b_{n-1} e_{n-1} \\ b_{n-1} e_{n-1}^* & a_n \end{pmatrix}$

where ${\tilde M_{n-1}}$ is conjugate to ${M_{n-1}}$. Now we make the crucial observation: because ${M_{n-1}}$ is distributed according to GUE (which is a unitarily invariant ensemble), and ${U}$ is a unitary matrix independent of ${M_{n-1}}$, ${\tilde M_{n-1}}$ is also distributed according to GUE, and remains independent of both ${b_{n-1}}$ and ${a_n}$.

We continue this process, expanding ${U M_n U^*}$ as

$\displaystyle \begin{pmatrix} M_{n-2} & X_{n-1} & 0 \\ X_{n-1}^* & a_{n-1} & b_{n-1} \\ 0 & b_{n-1} & a_n \end{pmatrix}.$

Applying a further unitary conjugation that fixes ${e_{n-1}, e_n}$ but maps ${X_{n-1}}$ to ${b_{n-2} e_{n-2}}$, we may replace ${X_{n-1}}$ by ${b_{n-2} e_{n-2}}$ while transforming ${M_{n-2}}$ to another GUE matrix ${\tilde M_{n-2}}$ independent of ${a_n, b_{n-1}, a_{n-1}, b_{n-2}}$. Iterating this process, we eventually obtain a coupling of ${M_n}$ to ${M'_n}$ by unitary conjugations, and the claim follows. $\Box$
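Since Proposition 2 asserts equality of the joint eigenvalue distributions, any spectral statistic — in particular ${\log|\det M_n|}$ itself — must agree in law between the two models. Here is a Monte Carlo sketch of that agreement (the dimension, sample size, and tolerance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 8, 10000

def logdet_gue():
    # GUE: off-diagonal entries N(0,1)_C, diagonal entries N(0,1)_R
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    H = (Z + Z.conj().T) / np.sqrt(2)
    return np.linalg.slogdet(H)[1]

def logdet_tridiag():
    a = rng.standard_normal(n)                      # diagonal: N(0,1)_R
    b = np.sqrt(rng.gamma(shape=np.arange(1, n)))   # b_i^2 = sum of i iid Exp(1)
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    return np.linalg.slogdet(T)[1]

m1 = np.mean([logdet_gue() for _ in range(trials)])
m2 = np.mean([logdet_tridiag() for _ in range(trials)])
assert abs(m1 - m2) < 0.2   # equal in law, so the means agree to sampling error
```

Here the gamma sampler encodes the fact that ${b_i^2 = \sum_{j=1}^i |z_{i,j}|^2}$ is a sum of ${i}$ iid exponentials when the ${z_{i,j}}$ are complex gaussians with ${{\bf E}|z_{i,j}|^2=1}$.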

The determinant of a tridiagonal matrix is not quite as simple as the determinant of a triangular matrix (in which it is simply the product of the diagonal entries), but it is pretty close: the determinant ${D_n}$ of the above matrix is given by solving the recursion

$\displaystyle D_i = a_i D_{i-1} - b_{i-1}^2 D_{i-2}$

with ${D_0=1}$ and ${D_{-1} = 0}$. Thus, instead of the product of a sequence of independent scalar ${\chi}$ distributions as in the gaussian matrix case, the determinant of GUE ends up being controlled by the product of a sequence of independent ${2\times 2}$ matrices whose entries are given by gaussians and ${\chi}$ distributions. In this case, one cannot immediately take logarithms and hope to get something for which the martingale central limit theorem can be applied, but some ad hoc manipulation of these ${2 \times 2}$ matrix products eventually does make this strategy work. (Roughly speaking, one has to work with the logarithm of the Frobenius norm of the matrix first.)

I’ve spent the last week or so reworking the first draft of my universality article for Mathematics Awareness Month, in view of the useful comments and feedback received on that draft here on this blog, as well as elsewhere.  In fact, I ended up rewriting the article from scratch, and expanding it substantially, in order to focus on a more engaging and less technical narrative.  I found that I had to use a substantially different mindset than the one I am used to having for technical expository writing; indeed, the exercise reminded me more of my high school English assignments than of my professional work.  (This is perhaps a bad sign: English was not exactly my strongest subject as a student.)

The piece now has a title: “E pluribus unum: from complexity, universality”.  This is a somewhat US-centric piece of wordplay, but Mathematics Awareness Month is, after all, a US-based initiative, even though awareness of mathematics certainly transcends national boundaries.   Still, it is a trivial matter to modify the title later if a better proposal arises, and I am sure that if I send this text to be published, the editors will have some suggestions in this regard.

By coincidence, I moved up and expanded the other US-centric item – the discussion of the 2008 US presidential elections – to the front of the paper to play the role of the hook.  I’ll try to keep the Commonwealth spelling conventions, though. :-)

I decided to cut out the discussion of the N-body problem for various values of N, in part due to the confusion over the notion of a “solution”; there is a nice mathematical story there, but perhaps one that gets in the way of the main story of universality.

I have added a fair number of relevant images, though some of them will have to be changed in the final version for copyright reasons.  The narrow column format of this blog means that the image placement is not optimal, but I am sure that this can be rectified if this article is published professionally.


The month of April has been designated as Mathematics Awareness Month by the major American mathematics organisations (the AMS, ASA, MAA, and SIAM).  I was approached to write a popular mathematics article for April 2011 (the theme for that month is “Mathematics and Complexity”).  While I have written a fair number of expository articles (including several on this blog) aimed at a mathematical audience, I actually have not had much experience writing articles at the popular mathematics level, and so I found this task to be remarkably difficult.  At this level of exposition, one not only needs to explain the facts, but also to tell a story; I have experience in the former but not in the latter.

I decided to write on the topic of universality – the phenomenon that the macroscopic behaviour of a dynamical system can be largely independent of the precise microscopic structure.   Below the fold is a first draft of the article; I would definitely welcome feedback and corrections.  It does not yet have any pictures, but I plan to rectify that in the final draft.  It also does not have a title, but this will be easy to address later.   But perhaps the biggest thing lacking right now is a narrative “hook”; I don’t yet have any good ideas as to how to make the story of universality compelling to a lay audience.  Any suggestions in this regard would be particularly appreciated.

I have not yet decided where I would try to publish this article; in fact, I might just publish it here on this blog (and eventually, in one of the blog book compilations).

Let ${X_1,X_2,\dots}$ be iid copies of an absolutely integrable real scalar random variable ${X}$, and form the partial sums ${S_n := X_1 + \dots + X_n}$. As we saw in the last set of notes, the law of large numbers ensures that the empirical averages ${S_n/n}$ converge (both in probability and almost surely) to a deterministic limit, namely the mean ${\mu= {\bf E} X}$ of the reference variable ${X}$. Furthermore, under some additional moment hypotheses on the underlying variable ${X}$, we can obtain square root cancellation for the fluctuation ${\frac{S_n}{n} - \mu}$ of the empirical average from the mean. To simplify the calculations, let us first restrict to the case ${\mu=0, \sigma^2=1}$ of mean zero and variance one, thus

$\displaystyle {\bf E} X = 0$

and

$\displaystyle {\bf Var}(X) = {\bf E} X^2 = 1.$

Then, as computed in previous notes, the normalised fluctuation ${S_n/\sqrt{n}}$ also has mean zero and variance one:

$\displaystyle {\bf E} \frac{S_n}{\sqrt{n}} = 0$

$\displaystyle {\bf Var}(\frac{S_n}{\sqrt{n}}) = {\bf E} (\frac{S_n}{\sqrt{n}})^2 = 1.$

This and Chebyshev’s inequality already indicate that the “typical” size of ${S_n}$ is ${O(\sqrt{n})}$; thus for instance ${\frac{S_n}{\sqrt{n} \omega(n)}}$ goes to zero in probability for any ${\omega(n)}$ that goes to infinity as ${n \rightarrow \infty}$. If we also have a finite fourth moment ${{\bf E} |X|^4 < \infty}$, then the calculations of the previous notes also give a fourth moment estimate

$\displaystyle {\bf E} (\frac{S_n}{\sqrt{n}})^4 = 3 + O( \frac{{\bf E} |X|^4}{n} ).$

From this and the Paley-Zygmund inequality (Exercise 42 of Notes 1) we also get some lower bound for ${\frac{S_n}{\sqrt{n}}}$ of the form

$\displaystyle {\bf P}( |\frac{S_n}{\sqrt{n}}| \geq \varepsilon ) \geq \varepsilon$

for some absolute constant ${\varepsilon>0}$ and for ${n}$ sufficiently large; this indicates in particular that ${\frac{S_n \omega(n)}{\sqrt{n}}}$ does not converge in any reasonable sense to something finite for any ${\omega(n)}$ that goes to infinity.
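For Rademacher (${\pm 1}$) signs, which have ${{\bf E}|X|^4 = 1}$, the fourth moment estimate above can be verified exactly: expanding ${{\bf E} S_n^4}$ term by term gives ${{\bf E} S_n^4 = n {\bf E} X^4 + 3n(n-1)}$ for any mean zero, variance one variable, which here is exactly ${3 - 2/n}$ after normalisation. A small brute-force sketch:

```python
import itertools
from fractions import Fraction

n = 10
total = Fraction(0)
for signs in itertools.product([-1, 1], repeat=n):   # all 2^n sign patterns
    total += Fraction(sum(signs)) ** 4
moment = total / (2 ** n * n ** 2)   # E (S_n / sqrt(n))^4, computed exactly

# for Rademacher signs E X^4 = 1, so E S_n^4 = n + 3n(n-1) and the
# normalised fourth moment is exactly 3 - 2/n
assert moment == Fraction(3) - Fraction(2, n)
```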

The question remains as to what happens to the ratio ${S_n/\sqrt{n}}$ itself, without multiplying or dividing by any factor ${\omega(n)}$. A first guess would be that these ratios converge in probability or almost surely, but this is unfortunately not the case:

Proposition 1 Let ${X_1,X_2,\dots}$ be iid copies of an absolutely integrable real scalar random variable ${X}$ with mean zero, variance one, and finite fourth moment, and write ${S_n := X_1 + \dots + X_n}$. Then the random variables ${S_n/\sqrt{n}}$ do not converge in probability or almost surely to any limit, and neither does any subsequence of these random variables.

Proof: Suppose for contradiction that some sequence ${S_{n_j}/\sqrt{n_j}}$ converged in probability or almost surely to a limit ${Y}$. By passing to a further subsequence we may assume that the convergence is in the almost sure sense. Since all of the ${S_{n_j}/\sqrt{n_j}}$ have mean zero, variance one, and bounded fourth moment, Theorem 24 of Notes 1 implies that the limit ${Y}$ also has mean zero and variance one. On the other hand, ${Y}$ is a tail random variable and is thus almost surely constant by the Kolmogorov zero-one law from Notes 3. Since constants have variance zero, we obtain the required contradiction. $\Box$

Nevertheless there is an important limit for the ratio ${S_n/\sqrt{n}}$, which requires one to replace the notions of convergence in probability or almost sure convergence by the weaker concept of convergence in distribution.

Definition 2 (Vague convergence and convergence in distribution) Let ${R}$ be a locally compact Hausdorff topological space with the Borel ${\sigma}$-algebra. A sequence of finite measures ${\mu_n}$ on ${R}$ is said to converge vaguely to another finite measure ${\mu}$ if one has

$\displaystyle \int_R G(x)\ d\mu_n(x) \rightarrow \int_R G(x)\ d\mu(x)$

as ${n \rightarrow \infty}$ for all continuous compactly supported functions ${G: R \rightarrow {\bf R}}$. (Vague convergence is also known as weak convergence, although strictly speaking the terminology weak-* convergence would be more accurate.) A sequence of random variables ${X_n}$ taking values in ${R}$ is said to converge in distribution (or converge weakly or converge in law) to another random variable ${X}$ if the distributions ${\mu_{X_n}}$ converge vaguely to the distribution ${\mu_X}$, or equivalently if

$\displaystyle {\bf E}G(X_n) \rightarrow {\bf E} G(X)$

as ${n \rightarrow \infty}$ for all continuous compactly supported functions ${G: R \rightarrow {\bf R}}$.

One could in principle try to extend this definition beyond the locally compact Hausdorff setting, but certain pathologies can occur when doing so (e.g. failure of the Riesz representation theorem), and we will never need to consider vague convergence in spaces that are not locally compact Hausdorff, so we restrict to this setting for simplicity.

Note that the notion of convergence in distribution depends only on the distribution of the random variables involved. One consequence of this is that convergence in distribution does not produce unique limits: if ${X_n}$ converges in distribution to ${X}$, and ${Y}$ has the same distribution as ${X}$, then ${X_n}$ also converges in distribution to ${Y}$. However, limits are unique up to equivalence in distribution (this is a consequence of the Riesz representation theorem, discussed for instance in this blog post). As a consequence of the insensitivity of convergence in distribution to equivalence in distribution, we may also legitimately talk about convergence in distribution of a sequence of random variables ${X_n}$ to another random variable ${X}$ even when all the random variables ${X_1,X_2,\dots}$ and ${X}$ involved are being modeled by different probability spaces (e.g. each ${X_n}$ is modeled by ${\Omega_n}$, and ${X}$ is modeled by ${\Omega}$, with no coupling presumed between these spaces). This is in contrast to the stronger notions of convergence in probability or almost sure convergence, which require all the random variables to be modeled by a common probability space. Also, by an abuse of notation, we can say that a sequence ${X_n}$ of random variables converges in distribution to a probability measure ${\mu}$, when ${\mu_{X_n}}$ converges vaguely to ${\mu}$. Thus we can talk about a sequence of random variables converging in distribution to a uniform distribution, a gaussian distribution, etc.

From the dominated convergence theorem (available for both convergence in probability and almost sure convergence) we see that convergence in probability or almost sure convergence implies convergence in distribution. The converse is not true, due to the insensitivity of convergence in distribution to equivalence in distribution; for instance, if ${X_1,X_2,\dots}$ are iid copies of a non-deterministic scalar random variable ${X}$, then the ${X_n}$ trivially converge in distribution to ${X}$, but will not converge in probability or almost surely (as one can see from the zero-one law). However, there are some partial converses that relate convergence in distribution to convergence in probability; see Exercise 10 below.

Remark 3 The notion of convergence in distribution is somewhat similar to the notion of convergence in the sense of distributions that arises in distribution theory (discussed for instance in this previous blog post), however strictly speaking the two notions of convergence are distinct and should not be confused with each other, despite the very similar names.

The notion of convergence in distribution simplifies in the case of real scalar random variables:

Proposition 4 Let ${X_1,X_2,\dots}$ be a sequence of scalar random variables, and let ${X}$ be another scalar random variable. Then the following are equivalent:

• (i) ${X_n}$ converges in distribution to ${X}$.
• (ii) ${F_{X_n}(t)}$ converges to ${F_X(t)}$ for each continuity point ${t}$ of ${F_X}$ (i.e. for all real numbers ${t \in {\bf R}}$ at which ${F_X}$ is continuous). Here ${F_X(t) := {\bf P}(X \leq t)}$ is the cumulative distribution function of ${X}$.

Proof: First suppose that ${X_n}$ converges in distribution to ${X}$, and ${F_X}$ is continuous at ${t}$. For any ${\varepsilon > 0}$, one can find a ${\delta}$ such that

$\displaystyle F_X(t) - \varepsilon \leq F_X(t') \leq F_X(t) + \varepsilon$

for every ${t' \in [t-\delta,t+\delta]}$. One can also find an ${N}$ larger than ${|t|+\delta}$ such that ${F_X(-N) \leq \varepsilon}$ and ${F_X(N) \geq 1-\varepsilon}$. Thus

$\displaystyle {\bf P} (|X| \geq N ) = O(\varepsilon)$

and

$\displaystyle {\bf P} (|X - t| \leq \delta ) = O(\varepsilon).$

Let ${G: {\bf R} \rightarrow [0,1]}$ be a continuous function supported on ${[-2N, t]}$ that equals ${1}$ on ${[-N, t-\delta]}$. Then by the above discussion we have

$\displaystyle {\bf E} G(X) = F_X(t) + O(\varepsilon)$

and hence

$\displaystyle {\bf E} G(X_n) = F_X(t) + O(\varepsilon)$

for large enough ${n}$. In particular

$\displaystyle {\bf P}( X_n \leq t ) \geq F_X(t) - O(\varepsilon).$

A similar argument, replacing ${G}$ with a continuous function supported on ${[t,2N]}$ that equals ${1}$ on ${[t+\delta,N]}$ gives

$\displaystyle {\bf P}( X_n > t ) \geq 1 - F_X(t) - O(\varepsilon)$

for ${n}$ large enough. Putting the two estimates together gives

$\displaystyle F_{X_n}(t) = F_X(t) + O(\varepsilon)$

for ${n}$ large enough; sending ${\varepsilon \rightarrow 0}$, we obtain the claim.

Conversely, suppose that ${F_{X_n}(t)}$ converges to ${F_X(t)}$ at every continuity point ${t}$ of ${F_X}$. Let ${G: {\bf R} \rightarrow {\bf R}}$ be a continuous compactly supported function; then it is uniformly continuous. As ${F_X}$ is monotone increasing, it can only have countably many points of discontinuity. From these two facts one can find, for any ${\varepsilon>0}$, a simple function ${G_\varepsilon(t) = \sum_{i=1}^n c_i 1_{(t_i,t_{i+1}]}}$ for some ${t_1 < \dots < t_{n+1}}$ that are points of continuity of ${F_X}$, and real numbers ${c_i}$, such that ${|G(t) - G_\varepsilon(t)| \leq \varepsilon}$ for all ${t}$. Thus

$\displaystyle {\bf E} G(X_n) = {\bf E} G_\varepsilon(X_n) + O(\varepsilon)$

$\displaystyle = \sum_{i=1}^n c_i(F_{X_n}(t_{i+1}) - F_{X_n}(t_i)) + O(\varepsilon).$

Similarly for ${X_n}$ replaced by ${X}$. Subtracting and taking limit superior, we conclude that

$\displaystyle \limsup_{n \rightarrow \infty} |{\bf E} G(X_n) - {\bf E} G(X)| = O(\varepsilon),$

and on sending ${\varepsilon \rightarrow 0}$, we obtain that ${X_n}$ converges in distribution to ${X}$ as claimed. $\Box$

The restriction to continuity points ${t}$ of ${F_X}$ is necessary. Consider for instance the deterministic random variables ${X_n = 1/n}$; then ${X_n}$ converges almost surely (and hence in distribution) to ${0}$, but ${F_{X_n}(0) = 0}$ does not converge to ${F_X(0)=1}$.

Example 5 For any natural number ${n}$, let ${X_n}$ be a discrete random variable drawn uniformly from the finite set ${\{0/n, 1/n, \dots, (n-1)/n\}}$, and let ${X}$ be the continuous random variable drawn uniformly from ${[0,1]}$. Then ${X_n}$ converges in distribution to ${X}$. Thus we see that a continuous random variable can emerge as the limit of discrete random variables.

Example 6 For any natural number ${n}$, let ${X_n}$ be a continuous random variable drawn uniformly from ${[0,1/n]}$, then ${X_n}$ converges in distribution to the deterministic real number ${0}$. Thus we see that discrete (or even deterministic) random variables can emerge as the limit of continuous random variables.
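Example 5 can be checked directly at the level of cumulative distribution functions: the CDF of the discrete uniform variable differs from ${F_X(t)=t}$ by at most ${1/n}$ on ${[0,1]}$. A small sketch (grid size illustrative):

```python
import numpy as np

def F_n(t, n):
    # CDF of X_n, uniform on {0, 1/n, ..., (n-1)/n}, evaluated for t in [0, 1]
    return np.minimum((np.floor(n * t) + 1) / n, 1.0)

ts = np.linspace(0, 1, 100001)
for n in [10, 100, 1000]:
    gap = np.max(np.abs(F_n(ts, n) - ts))   # F_X(t) = t for the uniform limit
    assert gap <= 1 / n + 1e-12             # sup-norm gap shrinks like 1/n
```

Here the convergence is even uniform, which is stronger than the pointwise-at-continuity-points convergence that Proposition 4 requires.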

Exercise 7 (Portmanteau theorem) Show that the properties (i) and (ii) in Proposition 4 are also equivalent to the following three statements:

• (iii) One has ${\limsup_{n \rightarrow \infty} {\bf P}( X_n \in K ) \leq {\bf P}(X \in K)}$ for all closed sets ${K \subset {\bf R}}$.
• (iv) One has ${\liminf_{n \rightarrow \infty} {\bf P}( X_n \in U ) \geq {\bf P}(X \in U)}$ for all open sets ${U \subset {\bf R}}$.
• (v) For any Borel set ${E \subset {\bf R}}$ whose topological boundary ${\partial E}$ is such that ${{\bf P}(X \in \partial E) = 0}$, one has ${\lim_{n \rightarrow \infty} {\bf P}(X_n \in E) = {\bf P}(X \in E)}$.

(Note: to prove this theorem, you may wish to invoke Urysohn’s lemma. To deduce (iii) from (i), you may wish to start with the case of compact ${K}$.)

We can now state the famous central limit theorem:

Theorem 8 (Central limit theorem) Let ${X_1,X_2,\dots}$ be iid copies of a scalar random variable ${X}$ of finite mean ${\mu := {\bf E} X}$ and finite non-zero variance ${\sigma^2 := {\bf Var}(X)}$. Let ${S_n := X_1 + \dots + X_n}$. Then the random variables ${\frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu)}$ converge in distribution to a random variable with the standard normal distribution ${N(0,1)}$ (that is to say, a random variable with probability density function ${x \mapsto \frac{1}{\sqrt{2\pi}} e^{-x^2/2}}$). Thus, by abuse of notation

$\displaystyle \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \rightarrow N(0,1).$

In the normalised case ${\mu=0, \sigma^2=1}$ when ${X}$ has mean zero and unit variance, this simplifies to

$\displaystyle \frac{S_n}{\sqrt{n}} \rightarrow N(0,1).$

Using Proposition 4 (and the fact that the cumulative distribution function associated to ${N(0,1)}$ is continuous), the central limit theorem is equivalent to asserting that

$\displaystyle {\bf P}( \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq t ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx$

as ${n \rightarrow \infty}$ for any ${t \in {\bf R}}$, or equivalently that

$\displaystyle {\bf P}( a \leq \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx.$

Informally, one can think of the central limit theorem as asserting that ${S_n}$ approximately behaves like it has distribution ${N( n \mu, n \sigma^2 )}$ for large ${n}$, where ${N(\mu,\sigma^2)}$ is the normal distribution with mean ${\mu}$ and variance ${\sigma^2}$, that is to say the distribution with probability density function ${x \mapsto \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2}}$. The integrals ${\frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx}$ can be written in terms of the error function ${\hbox{erf}}$ as ${\frac{1}{2} + \frac{1}{2} \hbox{erf}(t/\sqrt{2})}$.
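The error function identity ${\frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx = \frac{1}{2} + \frac{1}{2} \hbox{erf}(t/\sqrt{2})}$ can be sanity-checked by brute-force quadrature (a small sketch; the grid and cutoff are illustrative):

```python
import numpy as np
from math import erf, sqrt

# tabulate the standard normal density on a fine grid and accumulate it,
# truncating the lower tail at -12 (which costs far less than the tolerance)
xs = np.linspace(-12, 12, 2_000_001)
dens = np.exp(-xs ** 2 / 2) / np.sqrt(2 * np.pi)
cdf = np.cumsum(dens) * (xs[1] - xs[0])   # Riemann sums for Phi(t)

for t in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    i = np.searchsorted(xs, t)
    assert abs(cdf[i] - (0.5 + 0.5 * erf(t / sqrt(2)))) < 1e-4
```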

The central limit theorem is a basic example of the universality phenomenon in probability – many statistics involving a large system of many independent (or weakly dependent) variables (such as the normalised sums ${\frac{\sqrt{n}}{\sigma}(\frac{S_n}{n}-\mu)}$) end up having a universal asymptotic limit (in this case, the normal distribution), regardless of the precise makeup of the underlying random variable ${X}$ that comprised that system. Indeed, the universality of the normal distribution is such that it arises in many other contexts than the fluctuation of iid random variables; the central limit theorem is merely the first place in probability theory where it makes a prominent appearance.

We will give several proofs of the central limit theorem in these notes; each of these proofs has its advantages and disadvantages, and each can extend to prove many further results beyond the central limit theorem. We first give Lindeberg’s proof of the central limit theorem, based on exchanging (or swapping) each component ${X_1,\dots,X_n}$ of the sum ${S_n}$ in turn. This proof gives an accessible explanation as to why there should be a universal limit for the central limit theorem; one then computes directly with gaussians to verify that it is the normal distribution which is the universal limit. Our second proof is the most popular one taught in probability texts, namely the Fourier-analytic proof based around the concept of the characteristic function ${t \mapsto {\bf E} e^{itX}}$ of a real random variable ${X}$. Thanks to the powerful identities and other results of Fourier analysis, this gives a quite short and direct proof of the central limit theorem, although the arguments may seem rather magical to readers who are not already familiar with Fourier methods. Finally, we give a proof based on the moment method, in the spirit of the arguments in the previous notes; this argument is more combinatorial, but is straightforward and is particularly robust, in particular being well equipped to handle some dependencies between components; we will illustrate this by proving the Erdos-Kac law in number theory by this method. Some further discussion of the central limit theorem (including some further proofs, such as one based on Stein’s method) can be found in this blog post. Some further variants of the central limit theorem, such as local limit theorems, stable laws, and large deviation inequalities, will be discussed in the next (and final) set of notes.

The following exercise illustrates the power of the central limit theorem, by establishing combinatorial estimates which would otherwise require Stirling’s formula.

Exercise 9 (De Moivre-Laplace theorem) Let ${X}$ be a Bernoulli random variable, taking values in ${\{0,1\}}$ with ${{\bf P}(X=0)={\bf P}(X=1)=1/2}$, thus ${X}$ has mean ${1/2}$ and variance ${1/4}$. Let ${X_1,X_2,\dots}$ be iid copies of ${X}$, and write ${S_n := X_1+\dots+X_n}$.

• (i) Show that ${S_n}$ takes values in ${\{0,\dots,n\}}$ with ${{\bf P}(S_n=i) = \frac{1}{2^n} \binom{n}{i}}$. (This is an example of a binomial distribution.)
• (ii) Assume Stirling’s formula

$\displaystyle n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n} \ \ \ \ \ (1)$

where ${o(1)}$ is a function of ${n}$ that goes to zero as ${n \rightarrow \infty}$. (A proof of this formula may be found in this previous blog post.) Using this formula, and without using the central limit theorem, show that

$\displaystyle {\bf P}( a \leq 2\sqrt{n} (\frac{S_n}{n} - \frac{1}{2}) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx$

as ${n \rightarrow \infty}$ for any fixed real numbers ${a < b}$.

The above special case of the central limit theorem was first established by de Moivre and Laplace.
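For concreteness, one can check the conclusion of this exercise numerically for a moderate value of ${n}$ (a quick sketch; the parameters ${n=1000}$ and ${[a,b]=[-1,1]}$ are arbitrary choices):

```python
import math

def binomial_prob(n, a, b):
    # Exact probability P(a <= 2*sqrt(n)*(S_n/n - 1/2) <= b) for fair coins:
    # the condition on S_n = i is n/2 + a*sqrt(n)/2 <= i <= n/2 + b*sqrt(n)/2.
    lo = math.ceil(n / 2 + a * math.sqrt(n) / 2)
    hi = math.floor(n / 2 + b * math.sqrt(n) / 2)
    return sum(math.comb(n, i) for i in range(lo, hi + 1)) / 2**n

def gaussian_prob(a, b):
    # (1/sqrt(2*pi)) * integral of exp(-x^2/2) over [a,b], via the error function
    return 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))

n, a, b = 1000, -1.0, 1.0
print(binomial_prob(n, a, b))   # already close to the gaussian value below
print(gaussian_prob(a, b))      # ~0.6827
assert abs(binomial_prob(n, a, b) - gaussian_prob(a, b)) < 0.02
```

At ${n = 1000}$ the two quantities already agree to about two decimal places, consistent with the ${O(1/\sqrt{n})}$ rate given by the Berry-Esséen theorem.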

We close this section with some basic facts about convergence in distribution that will be useful in the sequel.

Exercise 10 Let ${X_1,X_2,\dots}$, ${Y_1,Y_2,\dots}$ be sequences of real random variables, and let ${X,Y}$ be further real random variables.

• (i) If ${X}$ is deterministic, show that ${X_n}$ converges in distribution to ${X}$ if and only if ${X_n}$ converges in probability to ${X}$.
• (ii) Suppose that ${X_n}$ is independent of ${Y_n}$ for each ${n}$, and ${X}$ independent of ${Y}$. Show that ${X_n+iY_n}$ converges in distribution to ${X+iY}$ if and only if ${X_n}$ converges in distribution to ${X}$ and ${Y_n}$ converges in distribution to ${Y}$. (The shortest way to prove this is by invoking the Stone-Weierstrass theorem, but one can also proceed by proving some version of Proposition 4.) What happens if the independence hypothesis is dropped?
• (iii) If ${X_n}$ converges in distribution to ${X}$, show that for every ${\varepsilon>0}$ there exists ${K>0}$ such that ${{\bf P}( |X_n| \geq K ) < \varepsilon}$ for all sufficiently large ${n}$. (That is to say, ${X_n}$ is a tight sequence of random variables.)
• (iv) Show that ${X_n}$ converges in distribution to ${X}$ if and only if, after extending the probability space model if necessary, one can find copies ${Z_1,Z_2,\dots}$ and ${Z}$ of ${X_1,X_2,\dots}$ and ${X}$ respectively such that ${Z_n}$ converges almost surely to ${Z}$. (Hint: use the Skorohod representation, Exercise 29 of Notes 0.)
• (v) If ${X_1,X_2,\dots}$ converges in distribution to ${X}$, and ${F: {\bf R} \rightarrow {\bf R}}$ is continuous, show that ${F(X_1),F(X_2),\dots}$ converges in distribution to ${F(X)}$. Generalise this claim to the case when ${X}$ takes values in an arbitrary locally compact Hausdorff space.
• (vi) (Slutsky’s theorem) If ${X_n}$ converges in distribution to ${X}$, and ${Y_n}$ converges in probability to a deterministic limit ${Y}$, show that ${X_n+Y_n}$ converges in distribution to ${X+Y}$, and ${X_n Y_n}$ converges in distribution to ${XY}$. (Hint: either use (iv), or else use (iii) to control some error terms.) This statement combines particularly well with (i). What happens if ${Y}$ is not assumed to be deterministic?
• (vii) (Fatou lemma) If ${G: {\bf R} \rightarrow [0,+\infty)}$ is continuous, and ${X_n}$ converges in distribution to ${X}$, show that ${\liminf_{n \rightarrow \infty} {\bf E} G(X_n) \geq {\bf E} G(X)}$.
• (viii) (Bounded convergence) If ${G: {\bf R} \rightarrow {\bf R}}$ is continuous and bounded, and ${X_n}$ converges in distribution to ${X}$, show that ${\lim_{n \rightarrow \infty} {\bf E} G(X_n) = {\bf E} G(X)}$.
• (ix) (Dominated convergence) If ${X_n}$ converges in distribution to ${X}$, and there is an absolutely integrable ${Y}$ such that ${|X_n| \leq Y}$ almost surely for all ${n}$, show that ${\lim_{n \rightarrow \infty} {\bf E} X_n = {\bf E} X}$.
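Part (vi) can be illustrated numerically (a quick Monte Carlo sketch; the uniform variables, sample sizes, and test point are arbitrary choices): take ${X_n}$ to be a normalised sum, so that ${X_n}$ converges in distribution to ${N(0,1)}$ by the central limit theorem, and ${Y_n}$ an empirical mean, so that ${Y_n}$ converges in probability to the constant ${1/2}$ by the law of large numbers; Slutsky's theorem then predicts that ${X_n+Y_n}$ is asymptotically ${N(1/2,1)}$:

```python
import math, random

random.seed(0)
m, trials = 100, 10000

def sample():
    u = [random.random() for _ in range(m)]
    x = (sum(u) - m / 2) / math.sqrt(m / 12)        # normalised sum: approx N(0,1)
    y = sum(random.random() for _ in range(m)) / m  # empirical mean: approx 1/2
    return x + y

# If X_n + Y_n is approximately N(1/2, 1), the median should be 1/2.
hits = sum(1 for _ in range(trials) if sample() <= 0.5)
phat = hits / trials
print(phat)   # should be close to P(N(1/2,1) <= 1/2) = 1/2
assert abs(phat - 0.5) < 0.03
```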

For future reference we also mention (but will not prove) Prokhorov’s theorem that gives a partial converse to part (iii) of the above exercise:

Theorem 11 (Prokhorov’s theorem) Let ${X_1,X_2,\dots}$ be a sequence of real random variables which is tight (that is, for every ${\varepsilon>0}$ there exists ${K>0}$ such that ${{\bf P}(|X_n| \geq K) < \varepsilon}$ for all sufficiently large ${n}$). Then there exists a subsequence ${X_{n_j}}$ which converges in distribution to some random variable ${X}$ (which may need to be modeled by a different probability space than the ${X_1,X_2,\dots}$).

The proof of this theorem relies on the Riesz representation theorem, and is beyond the scope of this course; but see for instance Exercise 29 of this previous blog post. (See also the closely related Helly selection theorem, covered in Exercise 30 of the same post.)

The Riemann zeta function ${\zeta(s)}$, defined for ${\hbox{Re}(s)>1}$ by

$\displaystyle \zeta(s) := \sum_{n=1}^\infty \frac{1}{n^s} \ \ \ \ \ (1)$

and then continued meromorphically to other values of ${s}$ by analytic continuation, is a fundamentally important function in analytic number theory, as it is connected to the primes ${p=2,3,5,\ldots}$ via the Euler product formula

$\displaystyle \zeta(s) = \prod_p (1 - \frac{1}{p^s})^{-1} \ \ \ \ \ (2)$

(for ${\hbox{Re}(s) > 1}$, at least), where ${p}$ ranges over primes. (The equivalence between (1) and (2) is essentially the generating function version of the fundamental theorem of arithmetic.) The function ${\zeta}$ has a simple pole at ${s=1}$ and a number of zeroes ${\rho}$. A formal application of the factor theorem gives

$\displaystyle \zeta(s) = \frac{1}{s-1} \prod_\rho (s-\rho) \times \ldots \ \ \ \ \ (3)$

where ${\rho}$ ranges over zeroes of ${\zeta}$, and we will be vague about what the ${\ldots}$ factor is, how to make sense of the infinite product, and exactly which zeroes of ${\zeta}$ are involved in the product. Equating (2) and (3) and taking logarithms gives the formal identity

$\displaystyle - \log \zeta(s) = \sum_p \log(1 - \frac{1}{p^s}) = \log(s-1) - \sum_\rho \log(s-\rho) + \ldots; \ \ \ \ \ (4)$

using the Taylor expansion

$\displaystyle \log(1 - \frac{1}{p^s}) = - \frac{1}{p^s} - \frac{1}{2 p^{2s}} - \frac{1}{3p^{3s}} - \ldots \ \ \ \ \ (5)$

and differentiating the above identity in ${s}$ yields the formal identity

$\displaystyle - \frac{\zeta'(s)}{\zeta(s)} = \sum_n \frac{\Lambda(n)}{n^s} = \frac{1}{s-1} - \sum_\rho \frac{1}{s-\rho} + \ldots \ \ \ \ \ (6)$

where ${\Lambda(n)}$ is the von Mangoldt function, defined to be ${\log p}$ when ${n}$ is a power of a prime ${p}$, and zero otherwise. Thus we see that the behaviour of the primes (as encoded by the von Mangoldt function) is intimately tied to the distribution of the zeroes ${\rho}$. For instance, if we knew that the zeroes were far away from the axis ${\hbox{Re}(s)=1}$, then we would heuristically have

$\displaystyle \sum_n \frac{\Lambda(n)}{n^{1+it}} \approx \frac{1}{it}$

for real ${t}$. On the other hand, the integral test suggests that

$\displaystyle \sum_n \frac{1}{n^{1+it}} \approx \frac{1}{it}$

and thus we see that ${\frac{\Lambda(n)}{n}}$ and ${\frac{1}{n}}$ have essentially the same (multiplicative) Fourier transform:

$\displaystyle \sum_n \frac{\Lambda(n)}{n^{1+it}} \approx \sum_n \frac{1}{n^{1+it}}.$

Inverting the Fourier transform (or performing a contour integral closely related to the inverse Fourier transform), one is led to the prime number theorem

$\displaystyle \sum_{n \leq x} \Lambda(n) \approx \sum_{n \leq x} 1.$

In fact, the standard proof of the prime number theorem basically proceeds by making all of the above formal arguments precise and rigorous.
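The prime number theorem in the form ${\psi(x) := \sum_{n \leq x} \Lambda(n) \sim x}$ is easy to test numerically (a quick sketch; the cutoff ${x = 10^5}$ and the ${1\%}$ tolerance are arbitrary choices):

```python
import math

def psi(x):
    # Chebyshev function psi(x) = sum_{n <= x} Lambda(n), where Lambda(p^k) = log p
    # for prime powers and Lambda(n) = 0 otherwise.  Primes via Eratosthenes.
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, math.isqrt(x) + 1):
        if is_prime[p]:
            for q in range(p * p, x + 1, p):
                is_prime[q] = False
    total = 0.0
    for p in range(2, x + 1):
        if is_prime[p]:
            pk = p
            while pk <= x:          # each prime power p^k <= x contributes log p
                total += math.log(p)
                pk *= p
    return total

x = 100000
print(psi(x) / x)   # tends to 1 as x -> infinity
assert abs(psi(x) / x - 1) < 0.01
```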

Unfortunately, we don’t know as much about the zeroes ${\rho}$ of the zeta function (and hence, about the ${\zeta}$ function itself) as we would like. The Riemann hypothesis (RH) asserts that all the zeroes (except for the “trivial” zeroes at the negative even numbers) lie on the critical line ${\hbox{Re}(s)=1/2}$; this hypothesis would make the error terms in the above proof of the prime number theorem significantly more accurate. Furthermore, the stronger GUE hypothesis asserts in addition to RH that the local distribution of these zeroes on the critical line should behave like the local distribution of the eigenvalues of a random matrix drawn from the gaussian unitary ensemble (GUE). I will not give a precise formulation of this hypothesis here, except to say that the adjective “local” in the context of distribution of zeroes ${\rho}$ means something like “at scale ${O(1/\log T)}$ when ${\hbox{Im}(s) = O(T)}$“.

Nevertheless, we do know some reasonably non-trivial facts about the zeroes ${\rho}$ and the zeta function ${\zeta}$, either unconditionally, or assuming RH (or GUE). Firstly, there are no zeroes for ${\hbox{Re}(s)>1}$ (as one can already see from the convergence of the Euler product (2) in this case) or for ${\hbox{Re}(s)=1}$ (this is trickier, relying on (6) and the elementary observation that

$\displaystyle \hbox{Re}( 3\frac{\Lambda(n)}{n^{\sigma}} + 4\frac{\Lambda(n)}{n^{\sigma+it}} + \frac{\Lambda(n)}{n^{\sigma+2it}} ) = 2\frac{\Lambda(n)}{n^\sigma} (1+\cos(t \log n))^2$

is non-negative for ${\sigma > 1}$ and ${t \in {\mathbb R}}$); from the functional equation

$\displaystyle \pi^{-s/2} \Gamma(s/2) \zeta(s) = \pi^{-(1-s)/2} \Gamma((1-s)/2) \zeta(1-s)$

(which can be viewed as a consequence of the Poisson summation formula, see e.g. my blog post on this topic) we know that there are no zeroes for ${\hbox{Re}(s) \leq 0}$ either (except for the trivial zeroes at negative even integers, corresponding to the poles of the Gamma function). Thus all the non-trivial zeroes lie in the critical strip ${0 < \hbox{Re}(s) < 1}$.

We also know that there are infinitely many non-trivial zeroes, and can approximately count how many zeroes there are in any large bounded region of the critical strip. For instance, for large ${T}$, the number of zeroes ${\rho}$ in this strip with ${\hbox{Im}(\rho) = T+O(1)}$ is ${O(\log T)}$. This can be seen by applying (6) to ${s = 2+iT}$ (say); the trivial zeroes at the negative even integers end up giving a contribution of ${O(\log T)}$ to this sum (this is a heavily disguised variant of Stirling’s formula, as one can view the trivial zeroes as essentially being poles of the Gamma function), the ${\frac{1}{s-1}}$ and ${\ldots}$ terms end up being negligible (of size ${O(1)}$), and each non-trivial zero ${\rho}$ contributes a term which has a non-negative real part, and furthermore has size comparable to ${1}$ if ${\hbox{Im}(\rho) = T+O(1)}$. (Here I am glossing over a technical renormalisation needed to make the infinite series in (6) converge properly.) Meanwhile, the left-hand side of (6) is absolutely convergent for ${s=2+iT}$ and of size ${O(1)}$, and the claim follows. A more refined version of this argument shows that the number of non-trivial zeroes with ${0 \leq \hbox{Im}(\rho) \leq T}$ is ${\frac{T}{2\pi} \log \frac{T}{2\pi} - \frac{T}{2\pi} + O(\log T)}$, but we will not need this more precise formula here. (A fair fraction – at least 40%, in fact – of these zeroes are known to lie on the critical line; see this earlier blog post of mine for more discussion.)
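As a sanity check on these counts, one can compare the main term ${\frac{T}{2\pi} \log \frac{T}{2\pi} - \frac{T}{2\pi}}$ against the actual number of zeroes up to a small height (the zero ordinates below are standard tabulated values, quoted to six decimal places rather than computed here):

```python
import math

# Imaginary parts of the first ten non-trivial zeroes of zeta (standard values).
ordinates = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
             37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

T = 50.0
count = sum(1 for t in ordinates if 0 <= t <= T)   # N(T): zeroes with 0 <= Im(rho) <= T
main_term = (T / (2 * math.pi)) * math.log(T / (2 * math.pi)) - T / (2 * math.pi)

print(count, main_term)   # 10 vs ~8.55; the discrepancy is within the O(log T) error
assert count == 10
assert abs(count - main_term) < math.log(T)
```

Even at the very small height ${T = 50}$, the main term is accurate to within the stated ${O(\log T)}$ error.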

Another thing that we happen to know is how the magnitude ${|\zeta(1/2+it)|}$ of the zeta function is distributed as ${t \rightarrow \infty}$; it turns out to be log-normally distributed with log-variance about ${\frac{1}{2} \log \log t}$. More precisely, we have the following result of Selberg:

Theorem 1 Let ${T}$ be a large number, and let ${t}$ be chosen uniformly at random from between ${T}$ and ${2T}$ (say). Then the distribution of ${\frac{1}{\sqrt{\frac{1}{2} \log \log T}} \log |\zeta(1/2+it)|}$ converges (in distribution) to the normal distribution ${N(0,1)}$.

To put it more informally, ${\log |\zeta(1/2+it)|}$ behaves like ${\sqrt{\frac{1}{2} \log \log t} \times N(0,1)}$ plus lower order terms for “typical” large values of ${t}$. (Zeroes ${\rho}$ of ${\zeta}$ are, of course, certainly not typical, but one can show that one can usually stay away from these zeroes.) In fact, Selberg showed a slightly more precise result, namely that for any fixed ${k \geq 1}$, the ${k^{th}}$ moment of ${\frac{1}{\sqrt{\frac{1}{2} \log \log T}} \log |\zeta(1/2+it)|}$ converges to the ${k^{th}}$ moment of ${N(0,1)}$.

Remarkably, Selberg’s result does not need RH or GUE, though it is certainly consistent with such hypotheses. (For instance, the determinant of a GUE matrix asymptotically obeys a remarkably similar log-normal law to that given by Selberg’s theorem.) Indeed, these hypotheses only affect some error terms in ${\log |\zeta(1/2+it)|}$ of magnitude ${O(1)}$, which are thus asymptotically negligible compared to the main term, which has magnitude about ${\sqrt{\log \log T}}$. So Selberg’s result, while very pretty, manages to finesse the question of what the zeroes ${\rho}$ of ${\zeta}$ are actually doing – he makes the primes do most of the work, rather than the zeroes.
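A toy model of this "primes do the work" philosophy (an illustrative caricature, not Selberg's actual argument): the Euler product heuristically suggests ${\log |\zeta(1/2+it)| \approx \sum_p \cos(t \log p)/\sqrt{p}}$, and for a random ${t}$ one can pretend the phases ${t \log p}$ are independent uniform angles. Since ${\hbox{Var}(\cos \theta) = 1/2}$ for a uniform angle ${\theta}$, the predicted variance is ${\frac{1}{2} \sum_{p \leq x} \frac{1}{p} \approx \frac{1}{2} \log\log x}$ by Mertens' theorem, matching the log-variance in Selberg's theorem:

```python
import math, random

random.seed(1)

# Primes up to N via Eratosthenes.
N = 10000
sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for p in range(2, math.isqrt(N) + 1):
    if sieve[p]:
        for q in range(p * p, N + 1, p):
            sieve[q] = False
primes = [p for p in range(2, N + 1) if sieve[p]]

# Predicted variance: Var(cos(theta)) = 1/2, so (1/2) * sum_p 1/p ~ (1/2) log log N.
predicted = 0.5 * sum(1 / p for p in primes)

# Toy model: replace the phases t*log(p) by independent uniform angles.
trials = 2000
samples = [sum(math.cos(random.uniform(0, 2 * math.pi)) / math.sqrt(p)
               for p in primes) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

print(var, predicted)   # both close to (1/2) * sum_{p <= N} 1/p, about 1.24
assert abs(var - predicted) < 0.2
```

The independence of the phases is of course only an assumption of the model; making rigorous sense of it (for ${t}$ drawn uniformly from ${[T,2T]}$) is precisely where the analytic work in Selberg's proof lies.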

Selberg never actually published the above result, but it is reproduced in a number of places (e.g. in this book by Joyner, or this book by Laurincikas). As with many other results in analytic number theory, the actual details of the proof can get somewhat technical; but I would like to record here (partly for my own benefit) an informal sketch of some of the main ideas in the argument.