Let $X_1, X_2, \dots$ be iid copies of an absolutely integrable real scalar random variable $X$, and form the partial sums $S_n := X_1 + \dots + X_n$. As we saw in the last set of notes, the law of large numbers ensures that the empirical averages $S_n/n$ converge (both in probability and almost surely) to a deterministic limit, namely the mean ${\bf E} X$ of the reference variable $X$. Furthermore, under some additional moment hypotheses on the underlying variable $X$, we can obtain square root cancellation for the fluctuation $\frac{S_n}{n} - {\bf E} X$ of the empirical average from the mean. To simplify the calculations, let us first restrict to the case of mean zero and variance one, thus ${\bf E} X = 0$ and ${\bf Var}(X) = 1$.
Then, as computed in previous notes, the normalised fluctuation $\frac{S_n}{\sqrt{n}}$ also has mean zero and variance one:

$${\bf E} \frac{S_n}{\sqrt{n}} = 0, \qquad {\bf Var}\left( \frac{S_n}{\sqrt{n}} \right) = 1.$$
This and Chebyshev’s inequality already indicates that the “typical” size of $S_n$ is $O(\sqrt{n})$, thus for instance $\frac{S_n}{\sqrt{n} \omega(n)}$ goes to zero in probability for any $\omega(n)$ that goes to infinity as $n \to \infty$. If we also have a finite fourth moment ${\bf E} |X|^4 < \infty$, then the calculations of the previous notes also give a fourth moment estimate

$${\bf E} \left| \frac{S_n}{\sqrt{n}} \right|^4 = O( 1 + {\bf E} |X|^4 ).$$
From this and the Paley-Zygmund inequality (Exercise 42 of Notes 1) we also get some lower bound for $\frac{S_n}{\sqrt{n}}$ of the form

$${\bf P}\left( \left| \frac{S_n}{\sqrt{n}} \right| \ge c \right) \ge c$$

for some absolute constant $c > 0$ and for $n$ sufficiently large; this indicates in particular that $\frac{\omega(n) S_n}{\sqrt{n}}$ does not converge in any reasonable sense to something finite for any $\omega(n)$ that goes to infinity.
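The normalisations above are easy to check empirically. The following quick Monte Carlo sketch (the uniform distribution, the sample sizes, and the seed are arbitrary illustrative choices, not anything from the text) rescales a uniform variable to have mean zero and variance one, and confirms that the normalised fluctuation $S_n/\sqrt{n}$ again has mean roughly zero and variance roughly one:

```python
import numpy as np

rng = np.random.default_rng(0)

# X uniform on [-sqrt(3), sqrt(3)] has mean zero and variance one.
n, trials = 1000, 20000
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(trials, n))
Z = X.sum(axis=1) / np.sqrt(n)   # the normalised fluctuation S_n / sqrt(n)

print(Z.mean())   # close to 0
print(Z.var())    # close to 1
```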
The question remains as to what happens to the ratio $\frac{S_n}{\sqrt{n}}$ itself, without multiplying or dividing by any factor $\omega(n)$. A first guess would be that these ratios converge in probability or almost surely, but this is unfortunately not the case:
Proposition 1 Let $X_1, X_2, \dots$ be iid copies of an absolutely integrable real scalar random variable $X$ with mean zero, variance one, and finite fourth moment, and write $S_n := X_1 + \dots + X_n$. Then the random variables $\frac{S_n}{\sqrt{n}}$ do not converge in probability or almost surely to any limit, and neither does any subsequence of these random variables.
Proof: Suppose for contradiction that some sequence $\frac{S_{n_j}}{\sqrt{n_j}}$ converged in probability or almost surely to a limit $Y$. By passing to a further subsequence we may assume that the convergence is in the almost sure sense. Since all of the $\frac{S_{n_j}}{\sqrt{n_j}}$ have mean zero, variance one, and bounded fourth moment, Theorem 24 of Notes 1 implies that the limit $Y$ also has mean zero and variance one. On the other hand, $Y$ is a tail random variable (modifying finitely many of the $X_i$ changes $S_{n_j}$ by a bounded amount, which is negligible after dividing by $\sqrt{n_j}$) and is thus almost surely constant by the Kolmogorov zero-one law from Notes 3. Since constants have variance zero, we obtain the required contradiction. $\Box$
Nevertheless there is an important limit for the ratio $\frac{S_n}{\sqrt{n}}$, which requires one to replace the notions of convergence in probability or almost sure convergence by the weaker concept of convergence in distribution.
Definition 2 (Vague convergence and convergence in distribution) Let $R$ be a locally compact Hausdorff topological space with the Borel $\sigma$-algebra. A sequence of finite measures $\mu_n$ on $R$ is said to converge vaguely to another finite measure $\mu$ if one has

$$\int_R G(x)\ d\mu_n(x) \to \int_R G(x)\ d\mu(x)$$

as $n \to \infty$ for all continuous compactly supported functions $G: R \to {\bf R}$. (Vague convergence is also known as weak convergence, although strictly speaking the terminology weak-* convergence would be more accurate.) A sequence of random variables $X_n$ taking values in $R$ is said to converge in distribution (or converge weakly or converge in law) to another random variable $X$ if the distributions $\mu_{X_n}$ converge vaguely to the distribution $\mu_X$, or equivalently if

$${\bf E} G(X_n) \to {\bf E} G(X)$$

as $n \to \infty$ for all continuous compactly supported functions $G: R \to {\bf R}$.
One could in principle try to extend this definition beyond the locally compact Hausdorff setting, but certain pathologies can occur when doing so (e.g. failure of the Riesz representation theorem), and we will never need to consider vague convergence in spaces that are not locally compact Hausdorff, so we restrict to this setting for simplicity.
Note that the notion of convergence in distribution depends only on the distribution of the random variables involved. One consequence of this is that convergence in distribution does not produce unique limits: if $X_n$ converges in distribution to $X$, and $Y$ has the same distribution as $X$, then $X_n$ also converges in distribution to $Y$. However, limits are unique up to equivalence in distribution (this is a consequence of the Riesz representation theorem, discussed for instance in this blog post). As a consequence of the insensitivity of convergence in distribution to equivalence in distribution, we may also legitimately talk about convergence in distribution of a sequence $X_n$ of random variables to another random variable $X$ even when all the random variables $X_n$ and $X$ involved are being modeled by different probability spaces (e.g. each $X_n$ is modeled by a space $\Omega_n$, and $X$ is modeled by a further space $\Omega$, with no coupling presumed between these spaces). This is in contrast to the stronger notions of convergence in probability or almost sure convergence, which require all the random variables to be modeled by a common probability space. Also, by an abuse of notation, we can say that a sequence $X_n$ of random variables converges in distribution to a probability measure $\mu$, when $\mu_{X_n}$ converges vaguely to $\mu$. Thus we can talk about a sequence of random variables converging in distribution to a uniform distribution, a gaussian distribution, etc.
From the dominated convergence theorem (available for both convergence in probability and almost sure convergence) we see that convergence in probability or almost sure convergence implies convergence in distribution. The converse is not true, due to the insensitivity of convergence in distribution to equivalence in distribution; for instance, if $X_1, X_2, \dots$ are iid copies of a non-deterministic scalar random variable $X$, then the $X_n$ trivially converge in distribution to $X$, but $X_n$ will not converge in probability or almost surely (as one can see from the zero-one law). However, there are some partial converses that relate convergence in distribution to convergence in probability; see Exercise 10 below.
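The iid counterexample can be seen concretely in simulation; here is a minimal sketch (uniform variables and the threshold $1/4$ are arbitrary illustrative choices). Each $X_n$ has exactly the distribution of $X_1$, so convergence in distribution is trivial, yet $|X_n - X_1|$ exceeds $1/4$ with probability $9/16$ no matter how large $n$ is, ruling out convergence in probability:

```python
import numpy as np

rng = np.random.default_rng(1)

# X_n iid uniform on [0,1]: each X_n has the same distribution as X_1,
# so X_n -> X_1 in distribution, but |X_n - X_1| does not go to zero.
trials = 100000
X1 = rng.uniform(size=trials)
Xn = rng.uniform(size=trials)      # an independent copy, standing in for any fixed n
p = float(np.mean(np.abs(Xn - X1) > 0.25))
print(p)   # stays near (3/4)^2 = 9/16 for every n
```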
Remark 3 The notion of convergence in distribution is somewhat similar to the notion of convergence in the sense of distributions that arises in distribution theory (discussed for instance in this previous blog post); however, strictly speaking, the two notions of convergence are distinct and should not be confused with each other, despite the very similar names.
The notion of convergence in distribution simplifies in the case of real scalar random variables:

Proposition 4 Let $X_n$ be a sequence of real random variables, and let $X$ be a further real random variable. Then the following are equivalent:

- (i) $X_n$ converges in distribution to $X$.
- (ii) $F_{X_n}(t)$ converges to $F_X(t)$ for each continuity point $t$ of $F_X$ (i.e. for all real numbers $t$ at which $F_X$ is continuous). Here $F_X(t) := {\bf P}(X \le t)$ is the cumulative distribution function of $X$.
Proof: First suppose that converges in distribution to , and is continuous at . For any , one can find a such that
for every . One can also find an larger than such that and . Thus
Let be a continuous function supported on that equals on . Then by the above discussion we have
for large enough . In particular
A similar argument, replacing with a continuous function supported on that equals on gives
for large enough. Putting the two estimates together gives
for large enough; sending , we obtain the claim.
Conversely, suppose that converges to at every continuity point of . Let be a continuous compactly supported function, then it is uniformly continuous. As is monotone increasing, it can only have countably many points of discontinuity. From these two facts one can find, for any , a simple function for some that are points of continuity of , and real numbers , such that for all . Thus
Similarly for replaced by . Subtracting and taking limit superior, we conclude that
and on sending , we obtain that converges in distribution to as claimed.
The restriction to continuity points of $F_X$ is necessary. Consider for instance the deterministic random variables $X_n := 1/n$; then $X_n$ converges almost surely (and hence in distribution) to $0$, but $F_{X_n}(0) = 0$ does not converge to $F_0(0) = 1$.
Example 5 For any natural number $n$, let $X_n$ be a discrete random variable drawn uniformly from the finite set $\{ \frac{1}{n}, \frac{2}{n}, \dots, \frac{n}{n} \}$, and let $X$ be the continuous random variable drawn uniformly from $[0,1]$. Then $X_n$ converges in distribution to $X$. Thus we see that a continuous random variable can emerge as the limit of discrete random variables.
Example 6 For any natural number $n$, let $X_n$ be a continuous random variable drawn uniformly from $[0, \frac{1}{n}]$; then $X_n$ converges in distribution to the deterministic real number $0$. Thus we see that discrete (or even deterministic) random variables can emerge as the limit of continuous random variables.
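Example 5 can be verified directly in the CDF formulation of Proposition 4; the grid size below is an arbitrary choice made to avoid the evaluation points aligning with the lattice. The discrete variable uniform on $\{1/n, \dots, n/n\}$ has cumulative distribution function $\lfloor nt \rfloor / n$ on $[0,1]$, which converges uniformly to the uniform CDF $F(t) = t$:

```python
import numpy as np

# Example 5 in CDF form: X_n uniform on {1/n, ..., n/n} has
# F_n(t) = floor(n t)/n on [0, 1], converging to F(t) = t.
def F_n(t, n):
    return np.clip(np.floor(n * t) / n, 0.0, 1.0)

t = np.linspace(0.0, 1.0, 777)   # arbitrary grid, chosen to avoid lattice alignment
gaps = {n: float(np.max(np.abs(F_n(t, n) - t))) for n in (10, 100, 1000)}
print(gaps)   # the sup-norm gap shrinks like 1/n
```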
Exercise 7 (Portmanteau theorem) Show that the properties (i) and (ii) in Proposition 4 are also equivalent to the following three statements:

- (iii) One has $\limsup_{n \to \infty} {\bf P}(X_n \in K) \le {\bf P}(X \in K)$ for all closed sets $K \subset {\bf R}$.
- (iv) One has $\liminf_{n \to \infty} {\bf P}(X_n \in U) \ge {\bf P}(X \in U)$ for all open sets $U \subset {\bf R}$.
- (v) For any Borel set $E \subset {\bf R}$ whose topological boundary $\partial E$ is such that ${\bf P}(X \in \partial E) = 0$, one has $\lim_{n \to \infty} {\bf P}(X_n \in E) = {\bf P}(X \in E)$.

(Note: to prove this theorem, you may wish to invoke Urysohn’s lemma. To deduce (iii) from (i), you may wish to start with the case of compact $K$.)
We can now state the famous central limit theorem:
Theorem 8 (Central limit theorem) Let $X_1, X_2, \dots$ be iid copies of a scalar random variable $X$ of finite mean $\mu$ and finite non-zero variance $\sigma^2$. Let $S_n := X_1 + \dots + X_n$. Then the random variables $\frac{S_n - n \mu}{\sqrt{n} \sigma}$ converge in distribution to a random variable with the standard normal distribution $N(0,1)$ (that is to say, a random variable with probability density function $x \mapsto \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$). Thus, by abuse of notation

$$\frac{S_n - n \mu}{\sqrt{n} \sigma} \to N(0,1).$$

In the normalised case $\mu = 0$, $\sigma^2 = 1$ when $X$ has mean zero and unit variance, this simplifies to

$$\frac{S_n}{\sqrt{n}} \to N(0,1).$$
Using Proposition 4 (and the fact that the cumulative distribution function associated to $N(0,1)$ is continuous), the central limit theorem is equivalent to asserting that

$${\bf P}\left( \frac{S_n - n\mu}{\sqrt{n} \sigma} \le t \right) \to \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx$$

as $n \to \infty$ for any $t \in {\bf R}$, or equivalently that

$${\bf P}\left( a \le \frac{S_n - n\mu}{\sqrt{n} \sigma} \le b \right) \to \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\ dx.$$
Informally, one can think of the central limit theorem as asserting that $S_n$ approximately behaves like it has distribution $N(n\mu, n\sigma^2)$ for large $n$, where $N(\mu, \sigma^2)$ is the normal distribution with mean $\mu$ and variance $\sigma^2$, that is to say the distribution with probability density function $x \mapsto \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2}$. The integrals $\frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\ dx$ can be written in terms of the error function $\mathrm{erf}(z) := \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\ dt$ as $\frac{1}{2} ( \mathrm{erf}(b/\sqrt{2}) - \mathrm{erf}(a/\sqrt{2}) )$.
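The CDF form of the theorem is easy to probe numerically; the sketch below (signed Bernoulli steps, $n = 400$, and the sample size are arbitrary illustrative choices) compares the empirical distribution of $S_n/\sqrt{n}$ with the normal CDF, written via the error function as in the text:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def Phi(t):
    # standard normal CDF, expressed via the error function as in the text
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# X = +-1 with probability 1/2 each: mean zero, variance one
n, trials = 400, 50000
S = rng.choice([-1.0, 1.0], size=(trials, n)).sum(axis=1)
for t in (-1.0, 0.0, 1.0):
    print(t, float(np.mean(S / math.sqrt(n) <= t)), Phi(t))
```

The small residual discrepancy at each $t$ comes from the lattice structure of $S_n$ and the Monte Carlo noise, both of which shrink as $n$ and the number of trials grow.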
The central limit theorem is a basic example of the universality phenomenon in probability – many statistics involving a large system of many independent (or weakly dependent) variables (such as the normalised sums $\frac{S_n - n\mu}{\sqrt{n} \sigma}$) end up having a universal asymptotic limit (in this case, the normal distribution), regardless of the precise makeup of the underlying random variable $X$ that comprises that system. Indeed, the universality of the normal distribution is such that it arises in many contexts other than the fluctuation of iid random variables; the central limit theorem is merely the first place in probability theory where it makes a prominent appearance.
We will give several proofs of the central limit theorem in these notes; each of these proofs has its advantages and disadvantages, and can extend to prove many further results beyond the central limit theorem. We first give Lindeberg’s proof of the central limit theorem, based on exchanging (or swapping) each component $X_i$ of the sum $S_n$ in turn. This proof gives an accessible explanation as to why there should be a universal limit for the central limit theorem; one then computes directly with gaussians to verify that it is the normal distribution which is the universal limit. Our second proof is the most popular one taught in probability texts, namely the Fourier-analytic proof based around the concept of the characteristic function $\phi_X(t) := {\bf E} e^{itX}$ of a real random variable $X$. Thanks to the powerful identities and other results of Fourier analysis, this gives a quite short and direct proof of the central limit theorem, although the arguments may seem rather magical to readers who are not already familiar with Fourier methods. Finally, we give a proof based on the moment method, in the spirit of the arguments in the previous notes; this argument is more combinatorial, but is straightforward and is particularly robust, in particular being well equipped to handle some dependencies between components; we will illustrate this by proving the Erdős-Kac law in number theory by this method. Some further discussion of the central limit theorem (including some further proofs, such as one based on Stein’s method) can be found in this blog post. Some further variants of the central limit theorem, such as local limit theorems, stable laws, and large deviation inequalities, will be discussed in the next (and final) set of notes.
The following exercise illustrates the power of the central limit theorem, by establishing combinatorial estimates which would otherwise require the use of Stirling’s formula to establish.

Exercise 9 (De Moivre-Laplace theorem) Let $X$ be a Bernoulli random variable, taking values $0$ and $1$ with probability $1/2$ each, let $X_1, X_2, \dots$ be iid copies of $X$, and write $S_n := X_1 + \dots + X_n$.

- (i) Show that $S_n$ takes values in $\{0, \dots, n\}$ with ${\bf P}(S_n = i) = \frac{1}{2^n} \binom{n}{i}$. (This is an example of a binomial distribution.)
- (ii) Assume Stirling’s formula

$$n! = (1 + o(1)) \sqrt{2\pi n}\ n^n e^{-n}$$

where $o(1)$ is a function of $n$ that goes to zero as $n \to \infty$. (A proof of this formula may be found in this previous blog post.) Using this formula, and without using the central limit theorem, show that

$${\bf P}\left( a \le \frac{S_n - n/2}{\sqrt{n}/2} \le b \right) \to \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\ dx$$

as $n \to \infty$ for any fixed real numbers $a < b$.
The above special case of the central limit theorem was first established by de Moivre and Laplace.
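The de Moivre-Laplace approximation can be checked against exact binomial probabilities; the sketch below (the window $[-1,1]$ and the values of $n$ are arbitrary choices) sums binomial coefficients exactly with Python's arbitrary-precision integers and compares with the gaussian integral:

```python
import math

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def binom_window(n, a, b):
    # exact P(a <= (S_n - n/2)/(sqrt(n)/2) <= b) for S_n ~ Binomial(n, 1/2)
    lo = math.ceil(n / 2 + a * math.sqrt(n) / 2)
    hi = math.floor(n / 2 + b * math.sqrt(n) / 2)
    return sum(math.comb(n, i) for i in range(lo, hi + 1)) / 2**n

target = Phi(1.0) - Phi(-1.0)   # about 0.6827
for n in (100, 1000, 10000):
    print(n, binom_window(n, -1.0, 1.0), target)
```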
We close this section with some basic facts about convergence in distribution that will be useful in the sequel.

Exercise 10 Let $X_n, Y_n$ be sequences of real random variables, and let $X, Y$ be further real random variables.

- (i) If $X$ is deterministic (i.e. almost surely equal to a constant $c$), show that $X_n$ converges in distribution to $X$ if and only if $X_n$ converges in probability to $X$.
- (ii) Suppose that $X_n$ is independent of $Y_n$ for each $n$, and $X$ independent of $Y$. Show that $(X_n, Y_n)$ converges in distribution to $(X, Y)$ if and only if $X_n$ converges in distribution to $X$ and $Y_n$ converges in distribution to $Y$. (The shortest way to prove this is by invoking the Stone-Weierstrass theorem, but one can also proceed by proving some version of Proposition 4.) What happens if the independence hypothesis is dropped?
- (iii) If $X_n$ converges in distribution to $X$, show that for every $\varepsilon > 0$ there exists $M > 0$ such that ${\bf P}(|X_n| \ge M) \le \varepsilon$ for all sufficiently large $n$. (That is to say, $X_n$ is a tight sequence of random variables.)
- (iv) Show that $X_n$ converges in distribution to $X$ if and only if, after extending the probability space model if necessary, one can find copies $X'_n$ and $X'$ of $X_n$ and $X$ respectively such that $X'_n$ converges almost surely to $X'$. (Hint: use the Skorohod representation, Exercise 29 of Notes 0.)
- (v) If $X_n$ converges in distribution to $X$, and $F: {\bf R} \to {\bf R}$ is continuous, show that $F(X_n)$ converges in distribution to $F(X)$. Generalise this claim to the case when $X$ takes values in an arbitrary locally compact Hausdorff space.
- (vi) (Slutsky’s theorem) If $X_n$ converges in distribution to $X$, and $Y_n$ converges in probability to a deterministic limit $c$, show that $X_n + Y_n$ converges in distribution to $X + c$, and $X_n Y_n$ converges in distribution to $cX$. (Hint: either use (iv), or else use (iii) to control some error terms.) This statement combines particularly well with (i). What happens if $c$ is not assumed to be deterministic?
- (vii) (Fatou lemma) If $F: {\bf R} \to [0, +\infty)$ is continuous, and $X_n$ converges in distribution to $X$, show that $\liminf_{n \to \infty} {\bf E} F(X_n) \ge {\bf E} F(X)$.
- (viii) (Bounded convergence) If $F: {\bf R} \to {\bf R}$ is continuous and bounded, and $X_n$ converges in distribution to $X$, show that $\lim_{n \to \infty} {\bf E} F(X_n) = {\bf E} F(X)$.
- (ix) (Dominated convergence) If $X_n$ converges in distribution to $X$, and there is an absolutely integrable $Y$ such that $|X_n| \le Y$ almost surely for all $n$, show that $\lim_{n \to \infty} {\bf E} X_n = {\bf E} X$.
For future reference we also mention (but will not prove) Prokhorov’s theorem that gives a partial converse to part (iii) of the above exercise:
Theorem 11 (Prokhorov’s theorem) Let $X_n$ be a sequence of real random variables which is tight (that is, for every $\varepsilon > 0$ there exists $M > 0$ such that ${\bf P}(|X_n| \ge M) \le \varepsilon$ for all sufficiently large $n$). Then there exists a subsequence $X_{n_j}$ which converges in distribution to some random variable $X$ (which may possibly be modeled by a different probability space model than the $X_n$).
The proof of this theorem relies on the Riesz representation theorem, and is beyond the scope of this course; but see for instance Exercise 29 of this previous blog post. (See also the closely related Helly selection theorem, covered in Exercise 30 of the same post.)
— 1. The Lindeberg approach to the central limit theorem —
We now give the Lindeberg argument establishing the central limit theorem. The proof splits into two unrelated components. The first component is to establish the central limit theorem for a single choice of underlying random variable $X$. The second component is to show that the limiting distribution of $\frac{S_n}{\sqrt{n}}$ is universal in the sense that it does not depend on the choice of underlying random variable. Putting the two components together gives Theorem 8.
We begin with the first component of the argument. One could use the Bernoulli distribution from Exercise 9 as the choice of underlying random variable, but a simpler choice of distribution (in the sense that no appeal to Stirling’s formula is required) is the normal distribution itself. The key computation is:

Lemma 12 (Sum of independent gaussians) Let $X \equiv N(\mu_1, \sigma_1^2)$ and $Y \equiv N(\mu_2, \sigma_2^2)$ be independent gaussian random variables. Then $X + Y \equiv N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$.
This is of course consistent with the additivity of mean and variance for independent random variables, given that random variables with the distribution $N(\mu, \sigma^2)$ have mean $\mu$ and variance $\sigma^2$.
Proof: By subtracting $\mu_1$ and $\mu_2$ from $X$ and $Y$ respectively, we may normalise $\mu_1 = \mu_2 = 0$; by dividing through by $\sqrt{\sigma_1^2 + \sigma_2^2}$ we may also normalise $\sigma_1^2 + \sigma_2^2 = 1$. Thus

$${\bf P}(X \in A) = \frac{1}{\sqrt{2\pi} \sigma_1} \int_A e^{-x^2/2\sigma_1^2}\ dx, \qquad {\bf P}(Y \in B) = \frac{1}{\sqrt{2\pi} \sigma_2} \int_B e^{-y^2/2\sigma_2^2}\ dy$$

for any Borel sets $A, B \subset {\bf R}$. As $X, Y$ are independent, this implies that

$${\bf P}((X,Y) \in E) = \frac{1}{2\pi \sigma_1 \sigma_2} \int_E e^{-x^2/2\sigma_1^2} e^{-y^2/2\sigma_2^2}\ dx\, dy$$

for any Borel set $E \subset {\bf R}^2$ (this follows from the uniqueness of product measure, or equivalently one can use the monotone class lemma starting from the case when $E$ is a finite boolean combination of product sets $A \times B$). In particular, we have

$${\bf P}(X + Y \le t) = \frac{1}{2\pi \sigma_1 \sigma_2} \int_{x + y \le t} e^{-x^2/2\sigma_1^2} e^{-y^2/2\sigma_2^2}\ dx\, dy.$$
We can complete the square using $\sigma_1^2 + \sigma_2^2 = 1$ to write (after some routine algebra)

$$\frac{x^2}{\sigma_1^2} + \frac{y^2}{\sigma_2^2} = (x+y)^2 + \left( \frac{\sigma_2}{\sigma_1} x - \frac{\sigma_1}{\sigma_2} y \right)^2,$$
so on using the change of variables $s := x + y$, $u := \frac{\sigma_2}{\sigma_1} x - \frac{\sigma_1}{\sigma_2} y$ (which has $dx\, dy = \sigma_1 \sigma_2\, ds\, du$) together with the identity $\int_{\bf R} e^{-u^2/2}\ du = \sqrt{2\pi}$, we can write (2) as

$${\bf P}(X + Y \le t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-s^2/2}\ ds,$$

and so $X + Y$ has the cumulative distribution function of $N(0,1)$, giving the claim. $\Box$
In the next section we give an alternate proof of the above lemma using the machinery of characteristic functions. A more geometric argument can be given as follows. With the same normalisations as in the above proof, we can write $\sigma_1 = \cos \theta$ and $\sigma_2 = \sin \theta$ for some $0 \le \theta \le \pi/2$. Then we can write $X = (\cos \theta) G_1$ and $Y = (\sin \theta) G_2$ where $G_1, G_2$ are iid copies of $N(0,1)$. But the joint probability density function $\frac{1}{2\pi} e^{-(x^2+y^2)/2}$ of $(G_1, G_2)$ is rotation invariant, so $(\cos \theta) G_1 + (\sin \theta) G_2$ has the same distribution as $G_1$, and the claim follows.
From the above lemma we see that if $X_1, X_2, \dots$ are iid copies of a normal distribution $N(\mu, \sigma^2)$ of mean $\mu$ and variance $\sigma^2$, then $S_n$ has distribution $N(n\mu, n\sigma^2)$, and hence $\frac{S_n - n\mu}{\sqrt{n} \sigma}$ has distribution exactly equal to $N(0,1)$. Thus the central limit theorem is clearly true in this case.
Exercise 13 (Probabilistic interpretation of convolution) Let $f, g: {\bf R} \to [0, +\infty)$ be measurable functions with $\int_{\bf R} f(x)\ dx = \int_{\bf R} g(x)\ dx = 1$. Define the convolution $f * g$ of $f$ and $g$ to be

$$f * g(x) := \int_{\bf R} f(y) g(x - y)\ dy.$$

Show that if $X, Y$ are independent real random variables with probability density functions $f, g$ respectively, then $X + Y$ has probability density function $f * g$.
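The exercise can be illustrated numerically; in the sketch below (uniform densities and the grid spacing are arbitrary choices) the convolution of two uniform densities on $[0,1]$, approximated by a Riemann sum, reproduces the well-known triangular density of the sum:

```python
import numpy as np

# Exercise 13 numerically: the densities of X, Y uniform on [0, 1] convolve
# to the triangular density of X + Y on [0, 2].
h = 0.001
x = np.arange(0.0, 2.0, h)
f = ((x >= 0) & (x < 1)).astype(float)   # density of the uniform on [0, 1)
conv = np.convolve(f, f) * h             # Riemann-sum approximation of f * f
grid = np.arange(len(conv)) * h
tri = np.where(grid <= 1, grid, np.maximum(2.0 - grid, 0.0))  # exact density of X + Y
err = float(np.max(np.abs(conv - tri)))
print(err)   # O(h) discretisation error
```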
Now we turn to the general case of the central limit theorem. By subtracting $\mu$ from $X$ (and from each of the $X_i$) we may normalise $\mu = 0$; by dividing $X$ (and each of the $X_i$) by $\sigma$ we may also normalise $\sigma = 1$. Thus ${\bf E} X = 0$ and ${\bf E} X^2 = 1$, and our task is to show that

$${\bf E} F\left( \frac{S_n}{\sqrt{n}} \right) \to {\bf E} F(G)$$

as $n \to \infty$, for any continuous compactly supported function $F$, where $G$ is a random variable distributed according to $N(0,1)$ (possibly modeled by a different probability space than the original $X_n$). Since any continuous compactly supported function can be approximated uniformly by smooth compactly supported functions (as can be seen from the Weierstrass or Stone-Weierstrass theorems), it suffices to show this for smooth compactly supported $F$.
Let $Y_1, Y_2, \dots$ be iid copies of $N(0,1)$; by extending the probability space used to model $X_1, X_2, \dots$ (using Proposition 26 from Notes 2), we can model the $X_i$ and $Y_i$ by a common model, in such a way that the combined collection of random variables $X_1, X_2, \dots, Y_1, Y_2, \dots$ are jointly independent. As we have already proved the central limit theorem in the normally distributed case, we already have

$${\bf E} F\left( \frac{Y_1 + \dots + Y_n}{\sqrt{n}} \right) = {\bf E} F(G)$$

for every $n$.
We first establish this claim under the additional simplifying assumption of a finite third moment: ${\bf E} |X|^3 < \infty$. Rather than swap all of the $X_i$ with all of the $Y_i$ at once, let us just swap the final $X_n$ to a $Y_n$, that is to say let us consider the expression

$${\bf E} F\left( \frac{X_1 + \dots + X_{n-1} + Y_n}{\sqrt{n}} \right).$$

Writing $Z := \frac{X_1 + \dots + X_{n-1}}{\sqrt{n}}$, we can write this as

$${\bf E} F\left( Z + \frac{Y_n}{\sqrt{n}} \right),$$

while the original quantity ${\bf E} F(\frac{S_n}{\sqrt{n}})$ can similarly be written as ${\bf E} F( Z + \frac{X_n}{\sqrt{n}} )$.
To compute this expression we use Taylor expansion. As $F$ is smooth and compactly supported, the first three derivatives of $F$ are bounded, leading to the Taylor approximation

$$F\left( Z + \frac{X_n}{\sqrt{n}} \right) = F(Z) + \frac{X_n}{\sqrt{n}} F'(Z) + \frac{X_n^2}{2n} F''(Z) + O\left( \frac{|X_n|^3}{n^{3/2}} \right)$$

where the implied constant depends on $F$. Taking expectations, we conclude that

$${\bf E} F\left( Z + \frac{X_n}{\sqrt{n}} \right) = {\bf E} F(Z) + \frac{1}{\sqrt{n}} {\bf E} X_n F'(Z) + \frac{1}{2n} {\bf E} X_n^2 F''(Z) + O\left( \frac{{\bf E} |X|^3}{n^{3/2}} \right).$$
Now for a key point: as the random variable $Z$ only depends on $X_1, \dots, X_{n-1}$, it is independent of $X_n$, and so we can decouple the expectations to obtain

$${\bf E} F\left( Z + \frac{X_n}{\sqrt{n}} \right) = {\bf E} F(Z) + \frac{{\bf E} X_n}{\sqrt{n}} {\bf E} F'(Z) + \frac{{\bf E} X_n^2}{2n} {\bf E} F''(Z) + O\left( \frac{{\bf E} |X|^3}{n^{3/2}} \right).$$
The same considerations apply after swapping $X_n$ with $Y_n$ (which also has a bounded third moment):

$${\bf E} F\left( Z + \frac{Y_n}{\sqrt{n}} \right) = {\bf E} F(Z) + \frac{{\bf E} Y_n}{\sqrt{n}} {\bf E} F'(Z) + \frac{{\bf E} Y_n^2}{2n} {\bf E} F''(Z) + O\left( \frac{1}{n^{3/2}} \right).$$
But by hypothesis, $X_n$ and $Y_n$ have matching moments to second order: ${\bf E} X_n = {\bf E} Y_n = 0$ and ${\bf E} X_n^2 = {\bf E} Y_n^2 = 1$. Thus on subtraction we have

$${\bf E} F\left( Z + \frac{X_n}{\sqrt{n}} \right) - {\bf E} F\left( Z + \frac{Y_n}{\sqrt{n}} \right) = O\left( \frac{1 + {\bf E} |X|^3}{n^{3/2}} \right).$$
A similar argument (permuting the indices, and replacing some of the $X_i$ with $Y_i$) gives

$${\bf E} F\left( \frac{X_1 + \dots + X_i + Y_{i+1} + \dots + Y_n}{\sqrt{n}} \right) - {\bf E} F\left( \frac{X_1 + \dots + X_{i-1} + Y_i + \dots + Y_n}{\sqrt{n}} \right) = O\left( \frac{1 + {\bf E} |X|^3}{n^{3/2}} \right)$$

for all $1 \le i \le n$. Summing the telescoping series, we conclude that

$${\bf E} F\left( \frac{X_1 + \dots + X_n}{\sqrt{n}} \right) - {\bf E} F\left( \frac{Y_1 + \dots + Y_n}{\sqrt{n}} \right) = O\left( \frac{1 + {\bf E} |X|^3}{n^{1/2}} \right),$$
which gives (4). Note how it was important to Taylor expand to at least third order to obtain a total error bound that went to zero, which explains why it is the first two moments of $X$ (or equivalently, the mean and variance) that play such a decisive role in the central limit theorem.
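The universality that the swapping argument predicts can be seen numerically. In the sketch below (the $\pm 1$ distribution, the test function, and the quadrature parameters are illustrative choices; the test function decays rapidly but is not compactly supported as the theorem requires), ${\bf E} F(S_n/\sqrt{n})$ is computed exactly from binomial weights and compared with the gaussian expectation, and the gap shrinks as $n$ grows:

```python
import math

def f(x):
    # a smooth test function (rapidly decaying rather than compactly supported,
    # which suffices for this illustration)
    return math.exp(-x * x)

def E_f_rademacher(n):
    # exact E f(S_n/sqrt(n)) for X_i = +-1 with probability 1/2 each
    return sum(math.comb(n, k) * f((2 * k - n) / math.sqrt(n))
               for k in range(n + 1)) / 2**n

def E_f_gauss(m=20000, L=8.0):
    # midpoint-rule approximation of E f(G), G ~ N(0,1)
    h = 2 * L / m
    total = 0.0
    for j in range(m):
        u = -L + (j + 0.5) * h
        total += f(u) * math.exp(-u * u / 2) * h
    return total / math.sqrt(2 * math.pi)

g = E_f_gauss()   # equals 1/sqrt(3) for this particular f
for n in (4, 16, 64, 256):
    print(n, abs(E_f_rademacher(n) - g))   # the gap shrinks as n grows
```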
Now we remove the hypothesis of finite third moment. As in the previous set of notes, we use the truncation method, taking advantage of the “room” inherent in the factor in the error term of the above analysis. For technical reasons we have to modify the usual truncation slightly to preserve the mean zero condition. Let . We split , where and with ; we split respectively (we are suppressing the dependence of on and to simplify the notation). The random variables (and ) are chosen to have mean zero, but their variance is not quite one. However, from dominated convergence, the quantity converges to , and the variance converges to , as .
Let be iid copies of . The previous arguments then give
for large enough. We can bound
for large enough . By Lemma 12, is distributed as , and hence
as . We conclude that
Next, we consider the error term
The variable has mean zero, and by dominated convergence, the variance of goes to zero as . The above error term is also mean zero and has the same variance as . In particular, from Cauchy-Schwarz we have
As is smooth and compactly supported, it is Lipschitz continuous, and hence
Taking expectations, and then combining all these estimates, we conclude that
The above argument can be generalised to a central limit theorem for certain triangular arrays, known as the Lindeberg central limit theorem:
Exercise 14 (Lindeberg central limit theorem) Let $m_1, m_2, \dots$ be a sequence of natural numbers going to infinity in $n$. For each natural number $n$, let $X_{n,1}, \dots, X_{n,m_n}$ be jointly independent real random variables of mean zero and finite variance. (We do not require the random variables to be jointly independent in $n$, or even to be modeled by a common probability space.) Let $S_n$ be defined by

$$S_n := X_{n,1} + \dots + X_{n,m_n},$$

and assume that ${\bf Var}(S_n) = \sum_{i=1}^{m_n} {\bf Var}(X_{n,i}) = 1$ for all $n$.
- (i) If one assumes the Lindeberg condition that

$$\sum_{i=1}^{m_n} {\bf E} |X_{n,i}|^2 1_{|X_{n,i}| \ge \varepsilon} \to 0$$

as $n \to \infty$ for any $\varepsilon > 0$, then show that the random variables $S_n$ converge in distribution to a random variable with the normal distribution $N(0,1)$.
- (ii) Show that the Lindeberg condition implies the Feller condition

$$\max_{1 \le i \le m_n} {\bf Var}(X_{n,i}) \to 0 \hbox{ as } n \to \infty.$$
Note that Theorem 8 (after normalising to the mean zero, unit variance case $\mu = 0$, $\sigma = 1$) corresponds to the special case $m_n := n$ and $X_{n,i} := X_i / \sqrt{n}$ of the Lindeberg central limit theorem. It was shown by Feller that if one is in the situation of the above exercise and the Feller condition holds, then the Lindeberg condition is necessary as well as sufficient for $S_n$ to converge in distribution to a random variable with normal distribution $N(0,1)$; the combined result is sometimes known as the Lindeberg-Feller theorem.
Exercise 15 (Weak Berry-Esséen theorem) Let $X_1, X_2, \dots$ be iid copies of a real random variable $X$ of mean zero, unit variance, and finite third moment, and write $S_n := X_1 + \dots + X_n$.
- (i) Show that
whenever $F$ is three times continuously differentiable and compactly supported, with $G$ distributed as $N(0,1)$ and the implied constant in the $O()$ notation absolute.
- (ii) Show that
for any $t \in {\bf R}$, with the implied constant absolute.
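The $O(n^{-1/2})$ scaling of the Kolmogorov distance can be checked exactly for the $\pm 1$ distribution (an arbitrary illustrative choice); the sketch below evaluates the worst-case CDF discrepancy at the jump points of the binomial CDF, where the supremum is attained, and observes that multiplying by $\sqrt{n}$ gives a roughly constant quantity:

```python
import math

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def kolmogorov_rademacher(n):
    # sup_t |P(S_n/sqrt(n) <= t) - Phi(t)| computed exactly for X_i = +-1;
    # the sup is attained at the jump points of the binomial CDF
    probs = [math.comb(n, k) / 2**n for k in range(n + 1)]
    cdf, worst = 0.0, 0.0
    for k in range(n + 1):
        t = (2 * k - n) / math.sqrt(n)
        worst = max(worst, abs(cdf - Phi(t)))   # just below the jump
        cdf += probs[k]
        worst = max(worst, abs(cdf - Phi(t)))   # just after the jump
    return worst

for n in (25, 100, 400, 1600):
    print(n, kolmogorov_rademacher(n) * math.sqrt(n))   # roughly constant
```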
We will strengthen the conclusion of this theorem in Theorem 37 below.
Remark 16 The Lindeberg exchange method explains why the limiting distribution of statistics such as ${\bf E} F( \frac{S_n}{\sqrt{n}} )$ depends primarily on the first two moments of the component random variables $X_i$, if there is a suitable amount of independence between the $X_i$. It turns out that there is an analogous application of the Lindeberg method in random matrix theory, which (very roughly speaking) asserts that appropriate statistics of random matrices depend primarily on the first four moments of the matrix components, if there is a suitable amount of independence between those components. See for instance this survey article of Van Vu and myself for more discussion of this. The Lindeberg method also suggests that the more moments of $X$ one assumes to match with the gaussian variable, the faster the rate of convergence (because one can use higher order Taylor expansions).
We now use the Lindeberg central limit theorem to obtain the converse direction of the Kolmogorov three-series theorem (Exercise 29 of Notes 3).
Exercise 17 (Kolmogorov three-series theorem, converse direction) Let $X_1, X_2, \dots$ be a sequence of jointly independent real random variables, with the property that the series $\sum_{n=1}^\infty X_n$ is almost surely convergent (i.e., the partial sums $\sum_{n=1}^N X_n$ are almost surely convergent as $N \to \infty$), and let $A > 0$.
- (i) Show that $\sum_{n=1}^\infty {\bf P}(|X_n| > A)$ is finite. (Hint: argue by contradiction and use the second Borel-Cantelli lemma.)
- (ii) Show that $\sum_{n=1}^\infty {\bf Var}( X_n 1_{|X_n| \le A} )$ is finite. (Hint: first use (i) and the Borel-Cantelli lemma to reduce to the case where $|X_n| \le A$ almost surely. If $\sum_{n=1}^\infty {\bf Var}(X_n)$ is infinite, use Exercise 14 to show that $\frac{\sum_{n=1}^N (X_n - {\bf E} X_n)}{(\sum_{n=1}^N {\bf Var}(X_n))^{1/2}}$ converges in distribution to a standard normal distribution, and use this to contradict the almost sure convergence of $\sum_{n=1}^N X_n$.)
- (iii) Show that the series $\sum_{n=1}^\infty {\bf E}( X_n 1_{|X_n| \le A} )$ is convergent. (Hint: reduce as before to the case where $|X_n| \le A$ almost surely, and apply the forward direction of the three-series theorem to $X_n - {\bf E} X_n$.)
— 2. The Fourier-analytic approach to the central limit theorem —
Let us now give the standard Fourier-analytic proof of the central limit theorem. Given any real random variable $X$, we introduce the characteristic function $\phi_X: {\bf R} \to {\bf C}$, defined by the formula

$$\phi_X(t) := {\bf E} e^{itX}.$$

Equivalently, $\phi_X$ is the Fourier transform of the probability measure $\mu_X$. One should caution that the term “characteristic function” has several other unrelated meanings in mathematics; particularly confusingly, in real analysis “characteristic function” is used to denote what in probability one would call an “indicator function”. Note that no moment hypotheses are required to define the characteristic function, because the random variable $e^{itX}$ is bounded even when $X$ is not absolutely integrable.

Example 18 If $X \equiv c$ is a deterministic real random variable, then $\phi_X(t) = e^{itc}$; in particular, $|\phi_X(t)| = 1$ for all $t$.
Most of the standard random variables in probability have characteristic functions that are quite simple and explicit. For the purposes of proving the central limit theorem, the most important such explicit form of the characteristic function is of the normal distribution:

Exercise 19 Let $G \equiv N(\mu, \sigma^2)$ be a gaussian random variable. Show that $\phi_G(t) = e^{i\mu t - \sigma^2 t^2/2}$ for all $t \in {\bf R}$; in particular, the standard normal distribution $N(0,1)$ has characteristic function $e^{-t^2/2}$.
We record the explicit characteristic functions of some other standard distributions:
Exercise 20 Let $\lambda > 0$, and let $P$ be a Poisson random variable with intensity $\lambda$, thus $P$ takes values in the non-negative integers with ${\bf P}(P = k) = e^{-\lambda} \frac{\lambda^k}{k!}$. Show that $\phi_P(t) = e^{\lambda (e^{it} - 1)}$ for all $t \in {\bf R}$.
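The closed form in this exercise can be verified numerically to near machine precision by summing the defining series for ${\bf E} e^{itP}$ directly (the values of $\lambda$, $t$, and the truncation point are arbitrary choices; the neglected tail is astronomically small for this $\lambda$):

```python
import cmath
import math

lam, t = 3.0, 1.7

# E e^{itP} for P ~ Poisson(lam), summing the series far past the mean
series = sum(math.exp(-lam) * lam**k / math.factorial(k) * cmath.exp(1j * t * k)
             for k in range(60))
closed_form = cmath.exp(lam * (cmath.exp(1j * t) - 1))
print(abs(series - closed_form))   # near machine precision
```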
Exercise 21 Let $X$ be uniformly distributed in some interval $[a,b]$. Show that $\phi_X(t) = \frac{e^{itb} - e^{ita}}{it(b-a)}$ for all non-zero $t$.
Exercise 22 Let $x_0 \in {\bf R}$ and $\gamma > 0$, and let $X$ be a Cauchy random variable with parameters $x_0, \gamma$, which means that $X$ is a real random variable with probability density function $x \mapsto \frac{1}{\pi} \frac{\gamma}{(x - x_0)^2 + \gamma^2}$. Show that $\phi_X(t) = e^{i t x_0 - \gamma |t|}$ for all $t \in {\bf R}$.
The characteristic function $\phi_X$ is clearly bounded in magnitude by $1$, and equals $1$ at the origin. By the dominated convergence theorem, $\phi_X$ is continuous in $t$.
Exercise 23 (Riemann-Lebesgue lemma) Show that if $X$ is a real random variable that has an absolutely integrable probability density function $f$, then $\phi_X(t) \to 0$ as $t \to \pm \infty$. (Hint: first show the claim when $f$ is a finite linear combination of indicator functions of intervals, then for the general case show that $f$ can be approximated in $L^1$ norm by such finite linear combinations.) Note from Example 18 that the claim can fail if $X$ does not have a probability density function. (In Fourier analysis, this fact is known as the Riemann-Lebesgue lemma.)
Exercise 24 Show that the characteristic function $\phi_X$ of a real random variable is in fact uniformly continuous on its domain.
Formally expanding $e^{itX} = \sum_{k=0}^\infty \frac{(it)^k X^k}{k!}$ and taking expectations, one arrives at the formal identity

$$\phi_X(t) = \sum_{k=0}^\infty \frac{(it)^k}{k!} {\bf E} X^k,$$

which thus interprets the characteristic function of a real random variable as a kind of generating function for the moments. One rigorous version of this identity is as follows.
for all $0 \le j \le k$ and $t \in {\bf R}$. Conclude in particular the partial Taylor expansion

$$\phi_X(t) = \sum_{j=0}^{k} \frac{(it)^j}{j!} {\bf E} X^j + o(|t|^k),$$

where the error $o(|t|^k)$ is a quantity that goes to zero as $t \to 0$, times $|t|^k$.
for all $\lambda > 0$. (Thus for instance a bounded random variable is subgaussian, as is any gaussian random variable.) Rigorously establish (6) in this case, and show that the series converges locally uniformly in $t$.
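The moment interpretation can be sanity-checked numerically: since $\phi_X''(0) = i^2 {\bf E} X^2$, a mean-zero, variance-one variable should have $\phi_X''(0) = -1$. The sketch below (the uniform distribution and step size are arbitrary choices) applies a finite-difference approximation to the explicit characteristic function $\sin(\sqrt{3} t)/(\sqrt{3} t)$ of the uniform distribution on $[-\sqrt{3}, \sqrt{3}]$:

```python
import math

def phi(t):
    # characteristic function of the uniform distribution on [-sqrt(3), sqrt(3)],
    # a mean-zero, variance-one variable (cf. Exercise 21)
    a = math.sqrt(3.0)
    return 1.0 if t == 0 else math.sin(a * t) / (a * t)

# phi''(0) should equal i^2 E X^2 = -1
h = 1e-4
second = (phi(h) - 2 * phi(0) + phi(-h)) / (h * h)
print(second)   # close to -1
```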
Note that the characteristic function $\phi_X$ depends only on the distribution of $X$: if $X$ and $Y$ are equal in distribution, then $\phi_X = \phi_Y$. The converse statement is true also: if $\phi_X = \phi_Y$, then $X$ and $Y$ are equal in distribution. This follows from a more general (and useful) fact, known as Lévy’s continuity theorem.
Theorem 27 (Lévy continuity theorem, special case) Let $X_n$ be a sequence of real random variables, and let $X$ be a further real random variable. Then the following statements are equivalent:

- (i) $\phi_{X_n}$ converges pointwise to $\phi_X$.
- (ii) $X_n$ converges in distribution to $X$.
Now suppose that (i) holds, and we wish to show that (ii) holds. We need to show that
whenever $F: {\bf R} \to {\bf R}$ is a continuous, compactly supported function. As in the Lindeberg argument, it suffices to prove this when $F$ is smooth and compactly supported, in particular $F$ is a Schwartz function (infinitely differentiable, with all derivatives rapidly decreasing). But then we have the Fourier inversion formula

$$F(x) = \int_{\bf R} \hat F(t) e^{itx}\ dt,$$

where the Fourier transform

$$\hat F(t) := \frac{1}{2\pi} \int_{\bf R} F(x) e^{-itx}\ dx$$

is Schwartz, and is in particular absolutely integrable (see e.g. these lecture notes of mine). From the Fubini-Tonelli theorem, we thus have

$${\bf E} F(X_n) = \int_{\bf R} \hat F(t) \phi_{X_n}(t)\ dt,$$

and similarly for ${\bf E} F(X)$. The claim now follows from the Lebesgue dominated convergence theorem. $\Box$
There is one subtlety with the Lévy continuity theorem: it is possible for a sequence $\phi_{X_n}$ of characteristic functions to converge pointwise, but for the limit to not be the characteristic function of any random variable, in which case $X_n$ will not converge in distribution. For instance, if $G_n \equiv N(0, n)$, then $\phi_{G_n}(t) = e^{-n t^2/2}$ converges pointwise to $1_{t = 0}(t)$, but this is clearly not the characteristic function of any random variable (as characteristic functions are continuous). However, this lack of continuity is the only obstruction:
Exercise 29 (Lévy’s continuity theorem, full version) Let $X_n$ be a sequence of real valued random variables. Suppose that $\phi_{X_n}$ converges pointwise to a limit $\phi$. Show that the following are equivalent:

- (i) $\phi$ is continuous at $0$.
- (ii) $X_n$ is a tight sequence (as in Exercise 10(iii)).
- (iii) $\phi$ is the characteristic function of a real valued random variable $X$ (possibly after extending the sample space).
- (iv) $X_n$ converges in distribution to some real valued random variable $X$ (possibly after extending the sample space).
Hint: To get from (ii) to the other conclusions, use Theorem 11 and Theorem 27. To get back to (ii) from (i), use (7) for a suitable Schwartz function $F$. The other implications are easy once Theorem 27 is in hand.
Remark 30 Lévy’s continuity theorem is very similar in spirit to Weyl’s criterion in equidistribution theory.
for some constant depending only on . (Hint: Use (7) for a suitable Schwartz function .) The left-hand side of (8) (as well as higher dimensional analogues of this quantity) is known as the small ball probability of at radius .
In Fourier analysis, we learn that the Fourier transform is a particularly well-suited tool for studying convolutions. The probability theory analogue of this fact is that characteristic functions are a particularly well-suited tool for studying sums of independent random variables. More precisely, we have the following basic identity: if $X, Y$ are independent real random variables, then

$$\phi_{X+Y}(t) = \phi_X(t) \phi_Y(t)$$

for all $t \in {\bf R}$. Also, for any scalar $c$, one has

$$\phi_{cX}(t) = \phi_X(ct),$$
and more generally, for any linear transformation $T$, one has

$$\phi_{TX}(t) = \phi_X(T^* t),$$

an identity that describes the characteristic function of $TX$ in terms of that of $X$.
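The multiplicativity of characteristic functions under independent sums can be checked numerically; the sketch below (uniform variables and the quadrature parameters are arbitrary choices) compares the characteristic function of the triangular distribution of the sum of two independent uniforms on $[0,1]$, computed by quadrature, with the square of the uniform characteristic function from Exercise 21:

```python
import cmath

def phi_uniform01(t):
    # characteristic function of the uniform distribution on [0, 1] (Exercise 21)
    return (cmath.exp(1j * t) - 1) / (1j * t)

def phi_triangle(t, m=20000):
    # numeric E e^{itZ} for Z = X + Y with X, Y iid uniform on [0, 1];
    # Z has the triangular density f(z) = z on [0,1] and 2 - z on [1,2]
    h = 2.0 / m
    total = 0.0 + 0.0j
    for j in range(m):
        z = (j + 0.5) * h
        f = z if z <= 1 else 2.0 - z
        total += f * cmath.exp(1j * t * z) * h
    return total

for t in (0.5, 2.0, 7.0):
    print(t, abs(phi_triangle(t) - phi_uniform01(t) ** 2))   # tiny
```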
We now have enough machinery to give a quick proof of the central limit theorem:
for sufficiently small $t$, or equivalently

$$\phi_X(t) = e^{-t^2/2 + o(|t|^2)}$$

for sufficiently small $t$. Applying (10), we conclude that

$$\phi_{S_n/\sqrt{n}}(t) = \phi_X\left( \frac{t}{\sqrt{n}} \right)^n \to e^{-t^2/2}$$

as $n \to \infty$ for any fixed $t$. But by Exercise 19, $e^{-t^2/2}$ is the characteristic function of the normal distribution $N(0,1)$. The claim now follows from the Lévy continuity theorem. $\Box$
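The key limit in this proof can be watched converge; for the $\pm 1$ distribution (an arbitrary illustrative choice) the characteristic function is $\cos t$, and $(\cos(t/\sqrt{n}))^n$ approaches $e^{-t^2/2}$ as $n$ grows:

```python
import math

# For X = +-1 with probability 1/2 each, phi_X(t) = cos(t); the Fourier proof
# rests on (phi_X(t/sqrt(n)))^n -> e^{-t^2/2}.
t = 1.3   # an arbitrary fixed frequency
target = math.exp(-t * t / 2)
for n in (10, 100, 10000):
    approx = math.cos(t / math.sqrt(n)) ** n
    print(n, abs(approx - target))   # shrinks as n grows
```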
The above machinery extends without difficulty to vector-valued random variables taking values in ${\bf R}^d$. The analogue of the characteristic function is then the function $\phi_X: {\bf R}^d \to {\bf C}$ defined by

$$\phi_X(t) := {\bf E} e^{i t \cdot X}$$

where $\cdot$ denotes the Euclidean inner product on ${\bf R}^d$.
We leave the routine extension of the above results and proofs to the higher dimensional case to the reader. Most interesting is what happens to the central limit theorem:
Exercise 34 (Vector-valued central limit theorem) Let $X = (X_1, \dots, X_d)$ be a random variable taking values in ${\bf R}^d$ with finite second moment. Define the covariance matrix $\Sigma(X)$ to be the $d \times d$ matrix whose $ij$ entry is the covariance ${\bf E} (X_i - {\bf E} X_i)(X_j - {\bf E} X_j)$.
- Show that the covariance matrix is positive semi-definite real symmetric.
- Conversely, given any positive definite $d \times d$ real symmetric matrix $\Sigma$ and $\mu \in {\bf R}^d$, show that the multivariate normal distribution $N(\mu, \Sigma)$, given by the absolutely continuous measure

$$\frac{1}{(2\pi)^{d/2} (\det \Sigma)^{1/2}} e^{- (x - \mu) \cdot \Sigma^{-1} (x - \mu) / 2}\ dx,$$

has mean $\mu$ and covariance matrix $\Sigma$, and has a characteristic function given by

$$\phi(t) = e^{i \mu \cdot t - t \cdot \Sigma t / 2}.$$

How would one define the normal distribution $N(\mu, \Sigma)$ if $\Sigma$ degenerated to be merely positive semi-definite instead of positive definite?
- If $S_n := X^{(1)} + \dots + X^{(n)}$ is the sum of $n$ iid copies $X^{(1)}, \dots, X^{(n)}$ of $X$, show that $\frac{S_n - n {\bf E} X}{\sqrt{n}}$ converges in distribution to $N(0, \Sigma(X))$. (For this exercise, you may assume without proof that the Lévy continuity theorem extends to ${\bf R}^d$.)
Exercise 35 (Complex central limit theorem) Let be a complex random variable of mean , whose real and imaginary parts have variance and covariance . Let be iid copies of . Show that as , the normalised sums converge in distribution to the standard complex gaussian , defined as the measure on with
for Borel , where is Lebesgue measure on (identified with in the usual fashion).
Exercise 36 Use characteristic functions and the truncation argument to give an alternate proof of the Lindeberg central limit theorem (Theorem 14).
A more sophisticated version of the Fourier-analytic method gives a more quantitative form of the central limit theorem, namely the Berry-Esséen theorem.
uniformly for all , where has the distribution of , and the implied constant is absolute.
for all . We may of course assume that , as the claim is trivial otherwise. (In particular, has finite third moment.)
Let be a small absolute constant to be chosen later. Let be a non-negative Schwartz function with total mass whose Fourier transform is supported in ; such a function can be constructed by taking the inverse Fourier transform of a smooth function supported on and equal to one at the origin, and then multiplying that transform by its complex conjugate to make it non-negative.
Let be the smoothed out version of , defined as
(say) for any , where the subscript indicates that the implied constant depends on .
for every , where the subscript means that the implied constant depends on . Indeed, suppose that (14) held. Replacing by and for some large absolute constant and subtracting, we have
for any . From (13) we see that the function is bounded by , and hence by the bounded probability density of
for all real . This implies that
A similar argument gives
It remains to establish (14). We write
(with the expression in the limit being uniformly bounded) and
to conclude (after applying Fubini’s theorem) that
using the compact support of . Hence
By the fundamental theorem of calculus we have . This factor of cancels a similar factor in the denominator to make the expression inside the limit dominated by an absolutely integrable random variable. Thus by the dominated convergence theorem
and so it suffices to show that
From Taylor expansion we have
for any ; taking expectations and using the definition of we have
and in particular
if and is small enough. Applying (10), we conclude that
if . Meanwhile, from Exercise 19 we have . Elementary calculus then gives us
(say) if is small enough. Inserting this bound into (16) we obtain the claim.
Exercise 38 Show that the error terms in Theorem 37 are sharp (up to constants) when is a signed Bernoulli random variable, or more precisely when takes values in with probability for each.
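In the signed Bernoulli case the left-hand side of the Berry-Esséen bound can be computed exactly, since the sum is then a shifted binomial variable. The following sketch (my own; the values of $n$ and the tolerance are arbitrary) verifies that the Kolmogorov distance decays at the rate $1/\sqrt{n}$, consistent with the sharpness claimed in this exercise:

```python
from math import comb, erf, sqrt

def kolmogorov_dist(n):
    """Exact sup_t |P(S_n/sqrt(n) <= t) - Phi(t)| for S_n a sum of n
    random signs, computed from the binomial distribution."""
    # S_n = 2*Binomial(n, 1/2) - n, with atoms at s = -n, -n+2, ..., n.
    pmf = [comb(n, k) / 2**n for k in range(n + 1)]
    Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
    cdf, worst = 0.0, 0.0
    for k in range(n + 1):
        t = (2 * k - n) / sqrt(n)
        # The sup is attained just below or at each atom of the step CDF.
        worst = max(worst, abs(cdf - Phi(t)))     # left limit at the atom
        cdf += pmf[k]
        worst = max(worst, abs(cdf - Phi(t)))     # value at the atom
    return worst

d100, d400 = kolmogorov_dist(100), kolmogorov_dist(400)
# Quadrupling n doubles 1/sqrt(n), so it should roughly halve the distance.
assert 1.5 < d100 / d400 < 2.5
```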
Exercise 39 Let be a sequence of real random variables which converge in distribution to a real random variable , and let be a sequence of real random variables which converge in distribution to a real random variable . Suppose that, for each , and are independent, and suppose also that and are independent. Show that converges in distribution to . (Hint: use the Lévy continuity theorem.)
The following exercise shows that the central limit theorem fails when the variance is infinite.
Exercise 40 Let be iid copies of an absolutely integrable random variable of mean zero.
- (i) In this part we assume that is symmetric, which means that and have the same distribution. Show that for any and ,
(Hint: relate both sides of this inequality to the probability of the event that and , using the symmetry of the situation.)
- (ii) If is symmetric and converges in distribution to a real random variable , show that has finite variance. (Hint: if this is not the case, then will have arbitrarily large variance as increases. On the other hand, can be made arbitrarily small by taking large enough. For a large threshold , apply (i) (with ) to obtain a contradiction.)
- (iii) Generalise (ii) by removing the hypothesis that is symmetric. (Hint: apply the symmetrisation trick of replacing by , where is an independent copy of , and use the previous exercise. One may need to utilise some truncation arguments to show that has infinite variance whenever has infinite variance.)
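The failure of the central limit theorem without finite variance can be seen numerically. The sketch below (an illustration, not part of the exercise; the tail exponent $\alpha = 3/2$ and the sample sizes are arbitrary choices) uses a symmetric Pareto-type variable with finite mean but infinite variance, and observes that the normalised sums spread out as $n$ grows instead of stabilising:

```python
import numpy as np

rng = np.random.default_rng(3)

def heavy_sums(n, trials, alpha=1.5):
    """Partial sums of iid symmetric Pareto variables with tail exponent
    alpha in (1, 2): finite mean (zero by symmetry), infinite variance."""
    S = np.zeros(trials)
    remaining = n
    while remaining > 0:
        step = min(remaining, 500)          # generate in chunks to bound memory
        mag = rng.pareto(alpha, (trials, step)) + 1.0   # P(|X| > x) = x^-alpha, x >= 1
        sgn = rng.choice([-1.0, 1.0], (trials, step))
        S += (sgn * mag).sum(axis=1)
        remaining -= step
    return S

trials = 4000
spread = {n: np.median(np.abs(heavy_sums(n, trials)) / np.sqrt(n))
          for n in (100, 10_000)}

# A CLT would force the normalised spread to stabilise; instead it grows
# roughly like n^(1/alpha - 1/2) = n^(1/6), so a 100-fold increase in n
# should multiply it by about 100^(1/6), roughly 2.15.
assert spread[10_000] > 1.4 * spread[100]
```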
- (i) If is a real random variable of mean zero and variance , and is a real number, show that
(Hint: first establish the Taylor bounds and .)
- (ii) Establish the pointwise inequality
whenever are complex numbers in the disk .
- (iii) Suppose that for each , are jointly independent real random variables of mean zero and finite variance, obeying the uniform bound
for all and some going to zero as , and also obeying the variance bound
as for some . If , use (i) and (ii) to show that
as for any given real .
- (iv) Use (iii) and a truncation argument to give an alternate proof of the Lindeberg central limit theorem (Theorem 14). (Note: one has to address the issue that truncating a random variable may alter its mean slightly.)
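To illustrate the Lindeberg setting numerically (this simulation is my own, with an arbitrarily chosen triangular array), one can take independent but non-identically-distributed bounded variables whose total variance grows linearly; the Lindeberg condition is then automatic, and the normalised sums should still have an approximately gaussian characteristic function:

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 500, 20_000
# Widths of the uniform distributions: bounded above and below, so every
# X_i is bounded and the Lindeberg condition holds once the total
# variance grows linearly in n.
a = 1.0 + 0.5 * np.sin(np.arange(n))
var_total = np.sum(a**2 / 3.0)              # Var Uniform[-a, a] = a^2 / 3

X = rng.uniform(-a, a, size=(trials, n))    # independent, not identically distributed
Z = X.sum(axis=1) / np.sqrt(var_total)      # normalise to unit variance

for t in (0.5, 1.5):
    emp = np.mean(np.exp(1j * t * Z)).real  # imaginary part vanishes by symmetry
    assert abs(emp - np.exp(-t**2 / 2)) < 0.03
```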
— 3. The moment method —
The above Fourier-analytic proof of the central limit theorem is one of the quickest (and slickest) proofs available for this theorem, and is accordingly the “standard” proof given in probability textbooks. However, it relies quite heavily on the Fourier-analytic identities in Exercise 32, which in turn are extremely dependent on both the commutative nature of the situation (as it uses the identity ) and on the independence of the situation (as it uses identities of the form ). As one or both of these factors can be absent when trying to generalise this theorem, it is also important to look for non-Fourier based methods to prove results such as the central limit theorem. These methods often lead to proofs that are lengthier and more technical than the Fourier proofs, but also tend to be more robust.
The most elementary (but still remarkably effective) method available in this regard is the moment method, which we have already used in previous notes. In principle, this method is equivalent to the Fourier method, through the identity (6); but in practice, the moment method proofs tend to look somewhat different than the Fourier-analytic ones, and it is often more apparent how to modify them to non-independent or non-commutative settings.
We first need an analogue of the Lévy continuity theorem. Here we encounter a technical issue: whereas the Fourier phases were bounded, the moment functions become unbounded at infinity. However, one can deal with this issue as long as one has sufficient decay:
- (i) There exist such that for all .
- (ii) There exist such that for all .
- (iii) There exist such that for all .
Furthermore, show that if (i) holds for some , then (ii) holds for depending only on , and similarly for any of the other implications. Variables obeying (i), (ii), or (iii) are called subgaussian. The function is known as the moment generating function of ; it is of course closely related to the characteristic function of .
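One standard formulation of the subgaussian moment condition asks that $\mathbf{E}|X|^k \leq (Ck)^{k/2}$ for all $k \geq 1$ and some constant $C$. The following check (a sketch of mine; the exact normalisations in the conditions above may differ) confirms that the standard gaussian itself obeys such a bound with $C = 1$, using the exact even moments $\mathbf{E} G^{2m} = (2m-1)!!$:

```python
# Check the subgaussian moment bound E G^k <= (C k)^(k/2) with C = 1 for
# the standard gaussian, whose even moments are E G^(2m) = (2m - 1)!!.
def double_factorial(n):
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

for m in range(1, 30):
    k = 2 * m
    moment = double_factorial(k - 1)   # E G^k for even k
    # (2m-1)!! is a product of m factors, each less than 2m = k, so
    # (2m-1)!! < k^m = k^(k/2).
    assert moment <= k ** (k / 2)
```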
Exercise 43 Use the truncation method to show that in order to prove the central limit theorem (Theorem 8), it suffices to do so in the case when the underlying random variable is bounded (and in particular subgaussian).
Once we restrict to the subgaussian case, we have an analogue of the Lévy continuity theorem:
Theorem 44 (Moment continuity theorem) Let be a sequence of real random variables, and let be a subgaussian random variable. Suppose that for every , converges pointwise to . Then converges in distribution to .
Proof: Let ; then by the preceding exercise we have
for some independent of . In particular,
when is sufficiently large depending on . From Taylor’s theorem with remainder (and Stirling’s formula (1)) we conclude
uniformly in , for sufficiently large. Similarly for . Taking limits using (i) we see that
Then letting , keeping fixed, we see that converges pointwise to for each , and the claim now follows from the Lévy continuity theorem.
We remark that in place of Stirling’s formula, one can use the cruder lower bound $m! \geq m^m e^{-m}$, which is immediate from the observation that $\frac{x^m}{m!}$ occurs in the Taylor expansion of $e^x$ (so that $\frac{m^m}{m!} \leq e^m$).
Remark 45 One corollary of Theorem 44 is that the distribution of a subgaussian random variable is uniquely determined by its moments (actually, this could already be deduced from Exercise 26 and Remark 28). This uniqueness can fail for distributions with heavier tails, for much the same reason that a smooth function is not determined by its derivatives at one point if that function is not analytic.
The Fourier inversion formula provides an easy way to recover the distribution from the characteristic function. Recovering a distribution from its moments is more difficult, and sometimes requires tools such as analytic continuation; this problem is known as the inverse moment problem and will not be discussed here.
Exercise 46 (Converse direction of moment continuity theorem) Let be a sequence of uniformly subgaussian random variables (thus there exist such that for all and all ), and suppose converges in distribution to a limit . Show that for any , converges pointwise to .
We now give the moment method proof of the central limit theorem. As discussed above we may assume without loss of generality that is bounded (and in particular subgaussian); we may also normalise to have mean zero and unit variance. By Theorem 44, it suffices to show that
for all , where is a standard gaussian variable.
The moments were already computed in Exercise 36 of Notes 1. So now we need to compute . Using linearity of expectation, we can expand this as
To understand this expression, let us first look at some small values of .
- For , this expression is trivially .
- For , this expression is trivially , thanks to the mean zero hypothesis on .
- For , we can split this expression into the diagonal and off-diagonal components:
Each summand in the first sum is , as has unit variance. Each summand in the second sum is , as the have mean zero and are independent. So the second moment is .
- For , we have a similar expansion
The summands in the latter two sums vanish because of the (joint) independence and mean zero hypotheses. The summands in the first sum need not vanish, but are , so the first term is , which is asymptotically negligible, so the third moment goes to .
- For , the expansion becomes quite complicated:
Again, most terms vanish, except for the first sum, which is and is asymptotically negligible, and the sum , which by the independence and unit variance assumptions works out to . Thus the fourth moment goes to (as it should).
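The pattern in these small cases, with odd moments tending to zero and even moments tending to the gaussian moments $(k-1)!!$, can be confirmed by exact computation in the random sign model, where the sum is a shifted binomial variable. (This verification is my own addition; the choice $n = 2000$ and the tolerance are arbitrary, and the tolerance has to absorb the $O(1/n)$ corrections visible in the expansions above.)

```python
from math import comb, sqrt

def moment(n, k):
    """Exact E (S_n / sqrt(n))^k for S_n a sum of n random signs,
    using S_n = 2*Binomial(n, 1/2) - n."""
    return sum(comb(n, j) / 2**n * ((2 * j - n) / sqrt(n)) ** k
               for j in range(n + 1))

def gauss_moment(k):
    """E G^k for G ~ N(0,1): zero for odd k, (k-1)!! for even k."""
    if k % 2 == 1:
        return 0.0
    out = 1
    for j in range(k - 1, 0, -2):
        out *= j
    return float(out)

n = 2000
for k in range(1, 9):
    # Odd moments vanish exactly by symmetry; even moments match the
    # gaussian moments up to O(1/n) corrections.
    assert abs(moment(n, k) - gauss_moment(k)) < 0.5
```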
Now we tackle the general case. Ordering the indices as for some , with each occurring with multiplicity and using elementary enumerative combinatorics, we see that is the sum of all terms of the form
where , are positive integers adding up to , and is the multinomial coefficient
The total number of such terms depends only on (in fact, it is (exercise!), though we will not need this fact).
As we already saw from the small examples, most of the terms vanish, and many of the other terms are negligible in the limit . Indeed, if any of the are equal to , then every summand in (17) vanishes, by joint independence and the mean zero hypothesis. Thus, we may restrict attention to those expressions (17) for which all the are at least . Since the sum up to , we conclude that is at most .
On the other hand, the total number of summands in (17) is clearly at most (in fact it is ), and the summands are bounded (for fixed ) since is bounded. Thus, if is strictly less than , then the expression in (17) is and goes to zero as . So, asymptotically, the only terms (17) which are still relevant are those for which is equal to . This already shows that goes to zero when is odd. When is even, the only surviving term in the limit is now when and . But then by independence and unit variance, the expectation in (17) is , and so this term is equal to
and the main term is happily equal to the moment as computed in Exercise 36 of Notes 1. (One could also appeal to Lemma 12 here, specialising to the case when is normally distributed, to explain this coincidence.) This concludes the proof of the central limit theorem.
Exercise 47 (Chernoff bound) Let be iid copies of a real random variable of mean zero and unit variance, which is subgaussian in the sense of Exercise 42. Write .
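In the random sign case a Chernoff-type bound can be checked exactly: the moment generating function bound $\mathbf{E} e^{sX} = \cosh s \leq e^{s^2/2}$ yields $\mathbf{P}(S_n \geq \lambda\sqrt{n}) \leq e^{-\lambda^2/2}$, and the exact binomial tail indeed sits below it. (This is an illustration of mine, not the solution to the exercise.)

```python
from math import ceil, comb, exp, sqrt

def tail(n, lam):
    """Exact P(S_n >= lam * sqrt(n)) for a sum of n random signs,
    using S_n = 2*Binomial(n, 1/2) - n."""
    thresh = lam * sqrt(n)                  # need 2j - n >= thresh
    j0 = ceil((n + thresh) / 2)
    return sum(comb(n, j) for j in range(j0, n + 1)) / 2**n

n = 1000
for lam in (1.0, 2.0, 3.0):
    # The subgaussian mgf bound gives exp(-lam^2/2), uniformly in n.
    assert tail(n, lam) <= exp(-lam**2 / 2)
```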
Exercise 48 (Erdös-Kac theorem) For any natural number , let be a natural number drawn uniformly at random from the natural numbers , and let denote the number of distinct prime factors of .
- (i) Show that for any , one has
as if is odd, and
as if is even. (Hint: adapt the arguments in Exercise 16 of Notes 3, estimating by , using Mertens’ theorem and induction on to deal with lower order errors, and treating the random variables as being approximately independent and approximately of mean zero.)
- (ii) Establish the Erdös-Kac theorem
as for any fixed .
Informally, the Erdös-Kac theorem asserts that behaves like for “random” . Note that this refines the Hardy-Ramanujan theorem (Exercise 16 of Notes 3).
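As a closing illustration (my own, not part of the notes; the cutoff $N = 10^6$ and the sample count are arbitrary), one can sample random integers and count their distinct prime factors by trial division; the empirical mean of the count is already close to $\log\log N$ at this modest scale, in line with the Erdös-Kac heuristic (the gap of roughly $0.26$ comes from the Mertens constant):

```python
import math
import random

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        count += 1    # the remaining factor is prime
    return count

random.seed(0)
N = 10**6
samples = [omega(random.randrange(2, N + 1)) for _ in range(10_000)]
mean = sum(samples) / len(samples)
loglog = math.log(math.log(N))   # about 2.63 for N = 10^6

# Convergence in the Erdos-Kac theorem is very slow (the scale is
# log log N), but the mean is already in the right range.
assert abs(mean - loglog) < 1.0
```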