
Let {X_1,X_2,\dots} be iid copies of an absolutely integrable real scalar random variable {X}, and form the partial sums {S_n := X_1 + \dots + X_n}. As we saw in the last set of notes, the law of large numbers ensures that the empirical averages {S_n/n} converge (both in probability and almost surely) to a deterministic limit, namely the mean {\mu= {\bf E} X} of the reference variable {X}. Furthermore, under some additional moment hypotheses on the underlying variable {X}, we can obtain square root cancellation for the fluctuation {\frac{S_n}{n} - \mu} of the empirical average from the mean. To simplify the calculations, let us first restrict to the case {\mu=0, \sigma^2=1} of mean zero and variance one, thus

\displaystyle  {\bf E} X = 0

and

\displaystyle  {\bf Var}(X) = {\bf E} X^2 = 1.

Then, as computed in previous notes, the normalised fluctuation {S_n/\sqrt{n}} also has mean zero and variance one:

\displaystyle  {\bf E} \frac{S_n}{\sqrt{n}} = 0

\displaystyle  {\bf Var}(\frac{S_n}{\sqrt{n}}) = {\bf E} (\frac{S_n}{\sqrt{n}})^2 = 1.

This and Chebyshev’s inequality already indicate that the “typical” size of {S_n} is {O(\sqrt{n})}, thus for instance {\frac{S_n}{\sqrt{n} \omega(n)}} goes to zero in probability for any {\omega(n)} that goes to infinity as {n \rightarrow \infty}. If we also have a finite fourth moment {{\bf E} |X|^4 < \infty}, then the calculations of the previous notes also give a fourth moment estimate

\displaystyle  {\bf E} (\frac{S_n}{\sqrt{n}})^4 = 3 + O( \frac{{\bf E} |X|^4}{n} ).

From this and the Paley-Zygmund inequality (Exercise 42 of Notes 1) we also get some lower bound for {\frac{S_n}{\sqrt{n}}} of the form

\displaystyle  {\bf P}( |\frac{S_n}{\sqrt{n}}| \geq \varepsilon ) \geq \varepsilon

for some absolute constant {\varepsilon>0} and for {n} sufficiently large; this indicates in particular that {\frac{S_n \omega(n)}{\sqrt{n}}} does not converge in any reasonable sense to something finite for any {\omega(n)} that goes to infinity.
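
As a quick numerical sanity check of these moment computations, here is a minimal Monte Carlo sketch (in Python, assuming numpy is available; the rescaled uniform distribution is just one convenient choice of {X} with mean zero, unit variance, and finite fourth moment):

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 500, 20000

    # X uniform on [-sqrt(3), sqrt(3)]: mean 0, variance 1, E|X|^4 = 9/5
    X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(trials, n))
    Z = X.sum(axis=1) / np.sqrt(n)      # samples of the normalised sum S_n / sqrt(n)

    print("mean       ", Z.mean())          # should be close to 0
    print("variance   ", (Z ** 2).mean())   # should be close to 1
    print("4th moment ", (Z ** 4).mean())   # should be close to 3 + (9/5 - 3)/n

For this choice of {X} one has {{\bf E}|X|^4 = 9/5}, so the fourth moment of {S_n/\sqrt{n}} should come out close to {3 + (9/5-3)/n}, in line with the estimate above.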

The question remains as to what happens to the ratio {S_n/\sqrt{n}} itself, without multiplying or dividing by any factor {\omega(n)}. A first guess would be that these ratios converge in probability or almost surely, but this is unfortunately not the case:

Proposition 1 Let {X_1,X_2,\dots} be iid copies of an absolutely integrable real scalar random variable {X} with mean zero, variance one, and finite fourth moment, and write {S_n := X_1 + \dots + X_n}. Then the random variables {S_n/\sqrt{n}} do not converge in probability or almost surely to any limit, and neither does any subsequence of these random variables.

Proof: Suppose for contradiction that some subsequence {S_{n_j}/\sqrt{n_j}} converged in probability or almost surely to a limit {Y}. By passing to a further subsequence we may assume that the convergence is in the almost sure sense. Since all of the {S_{n_j}/\sqrt{n_j}} have mean zero, variance one, and bounded fourth moment, Theorem 24 of Notes 1 implies that the limit {Y} also has mean zero and variance one. On the other hand, modifying finitely many of the {X_i} only changes each {S_{n_j}} by a bounded amount, which is negligible after dividing by {\sqrt{n_j}}; hence {Y} is a tail random variable, and is therefore almost surely constant by the Kolmogorov zero-one law from Notes 3. Since constants have variance zero, we obtain the required contradiction. \Box

Nevertheless there is an important limit for the ratio {S_n/\sqrt{n}}, which requires one to replace the notions of convergence in probability or almost sure convergence by the weaker concept of convergence in distribution.

Definition 2 (Vague convergence and convergence in distribution) Let {R} be a locally compact Hausdorff topological space with the Borel {\sigma}-algebra. A sequence of finite measures {\mu_n} on {R} is said to converge vaguely to another finite measure {\mu} if one has

\displaystyle  \int_R G(x)\ d\mu_n(x) \rightarrow \int_R G(x)\ d\mu(x)

as {n \rightarrow \infty} for all continuous compactly supported functions {G: R \rightarrow {\bf R}}. (Vague convergence is also known as weak convergence, although strictly speaking the terminology weak-* convergence would be more accurate.) A sequence of random variables {X_n} taking values in {R} is said to converge in distribution (or converge weakly or converge in law) to another random variable {X} if the distributions {\mu_{X_n}} converge vaguely to the distribution {\mu_X}, or equivalently if

\displaystyle  {\bf E}G(X_n) \rightarrow {\bf E} G(X)

as {n \rightarrow \infty} for all continuous compactly supported functions {G: R \rightarrow {\bf R}}.

One could in principle try to extend this definition beyond the locally compact Hausdorff setting, but certain pathologies can occur when doing so (e.g. failure of the Riesz representation theorem), and we will never need to consider vague convergence in spaces that are not locally compact Hausdorff, so we restrict to this setting for simplicity.

Note that the notion of convergence in distribution depends only on the distribution of the random variables involved. One consequence of this is that convergence in distribution does not produce unique limits: if {X_n} converges in distribution to {X}, and {Y} has the same distribution as {X}, then {X_n} also converges in distribution to {Y}. However, limits are unique up to equivalence in distribution (this is a consequence of the Riesz representation theorem, discussed for instance in this blog post). As a consequence of the insensitivity of convergence in distribution to equivalence in distribution, we may also legitimately talk about convergence in distribution of a sequence of random variables {X_n} to another random variable {X} even when all the random variables {X_1,X_2,\dots} and {X} involved are being modeled by different probability spaces (e.g. each {X_n} is modeled by {\Omega_n}, and {X} is modeled by {\Omega}, with no coupling presumed between these spaces). This is in contrast to the stronger notions of convergence in probability or almost sure convergence, which require all the random variables to be modeled by a common probability space. Also, by an abuse of notation, we can say that a sequence {X_n} of random variables converges in distribution to a probability measure {\mu}, when {\mu_{X_n}} converges vaguely to {\mu}. Thus we can talk about a sequence of random variables converging in distribution to a uniform distribution, a gaussian distribution, etc.

From the dominated convergence theorem (available for both convergence in probability and almost sure convergence) we see that convergence in probability or almost sure convergence implies convergence in distribution. The converse is not true, due to the insensitivity of convergence in distribution to equivalence in distribution; for instance, if {X_1,X_2,\dots} are iid copies of a non-deterministic scalar random variable {X}, then the {X_n} trivially converge in distribution to {X}, but will not converge in probability or almost surely (as one can see from the zero-one law). However, there are some partial converses that relate convergence in distribution to convergence in probability; see Exercise 10 below.

Remark 3 The notion of convergence in distribution is somewhat similar to the notion of convergence in the sense of distributions that arises in distribution theory (discussed for instance in this previous blog post), however strictly speaking the two notions of convergence are distinct and should not be confused with each other, despite the very similar names.

The notion of convergence in distribution simplifies in the case of real scalar random variables:

Proposition 4 Let {X_1,X_2,\dots} be a sequence of scalar random variables, and let {X} be another scalar random variable. Then the following are equivalent:

  • (i) {X_n} converges in distribution to {X}.
  • (ii) {F_{X_n}(t)} converges to {F_X(t)} for each continuity point {t} of {F_X} (i.e. for all real numbers {t \in {\bf R}} at which {F_X} is continuous). Here {F_X(t) := {\bf P}(X \leq t)} is the cumulative distribution function of {X}.

Proof: First suppose that {X_n} converges in distribution to {X}, and {F_X} is continuous at {t}. For any {\varepsilon > 0}, one can find a {\delta > 0} such that

\displaystyle  F_X(t) - \varepsilon \leq F_X(t') \leq F_X(t) + \varepsilon

for every {t' \in [t-\delta,t+\delta]}. One can also find an {N} larger than {|t|+\delta} such that {F_X(-N) \leq \varepsilon} and {F_X(N) \geq 1-\varepsilon}. Thus

\displaystyle  {\bf P} (|X| \geq N ) = O(\varepsilon)

and

\displaystyle  {\bf P} (|X - t| \leq \delta ) = O(\varepsilon).

Let {G: {\bf R} \rightarrow [0,1]} be a continuous function supported on {[-2N, t]} that equals {1} on {[-N, t-\delta]}. Then by the above discussion we have

\displaystyle  {\bf E} G(X) = F_X(t) + O(\varepsilon)

and hence

\displaystyle  {\bf E} G(X_n) = F_X(t) + O(\varepsilon)

for large enough {n}. In particular

\displaystyle  {\bf P}( X_n \leq t ) \geq F_X(t) - O(\varepsilon).

A similar argument, replacing {G} with a continuous function supported on {[t,2N]} that equals {1} on {[t+\delta,N]} gives

\displaystyle  {\bf P}( X_n > t ) \geq 1 - F_X(t) - O(\varepsilon)

for {n} large enough. Putting the two estimates together gives

\displaystyle  F_{X_n}(t) = F_X(t) + O(\varepsilon)

for {n} large enough; sending {\varepsilon \rightarrow 0}, we obtain the claim.

Conversely, suppose that {F_{X_n}(t)} converges to {F_X(t)} at every continuity point {t} of {F_X}. Let {G: {\bf R} \rightarrow {\bf R}} be a continuous compactly supported function; then it is uniformly continuous. As {F_X} is monotone increasing, it can only have countably many points of discontinuity. From these two facts one can find, for any {\varepsilon>0}, a simple function {G_\varepsilon(t) = \sum_{i=1}^m c_i 1_{(t_i,t_{i+1}]}} for some {t_1 < \dots < t_{m+1}} that are points of continuity of {F_X}, and real numbers {c_i}, such that {|G(t) - G_\varepsilon(t)| \leq \varepsilon} for all {t}. Thus

\displaystyle  {\bf E} G(X_n) = {\bf E} G_\varepsilon(X_n) + O(\varepsilon)

\displaystyle  = \sum_{i=1}^m c_i(F_{X_n}(t_{i+1}) - F_{X_n}(t_i)) + O(\varepsilon).

Similarly for {X_n} replaced by {X}. Subtracting and taking limit superior, we conclude that

\displaystyle  \limsup_{n \rightarrow \infty} |{\bf E} G(X_n) - {\bf E} G(X)| = O(\varepsilon),

and on sending {\varepsilon \rightarrow 0}, we obtain that {X_n} converges in distribution to {X} as claimed. \Box

The restriction to continuity points {t} is necessary. Consider for instance the deterministic random variables {X_n = 1/n}: then {X_n} converges almost surely (and hence in distribution) to {0}, but {F_{X_n}(0) = 0} does not converge to {F_X(0)=1}.

Example 5 For any natural number {n}, let {X_n} be a discrete random variable drawn uniformly from the finite set {\{0/n, 1/n, \dots, (n-1)/n\}}, and let {X} be the continuous random variable drawn uniformly from {[0,1]}. Then {X_n} converges in distribution to {X}. Thus we see that a continuous random variable can emerge as the limit of discrete random variables.
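
One can test this convergence against a single continuous compactly supported function, as in Definition 2; the following sketch (Python, standard library only; the tent function is an arbitrary illustrative choice of test function {G}) compares {{\bf E} G(X_n) = \frac{1}{n} \sum_{k=0}^{n-1} G(k/n)} with {{\bf E} G(X) = \int_0^1 G(x)\ dx}, with no randomness involved:

    def G(x):
        # a "tent" test function supported on [0, 1]; any continuous
        # compactly supported function would do
        return max(0.0, 1.0 - abs(2.0 * x - 1.0))

    def EG_discrete(n):
        # E G(X_n) for X_n uniform on {0, 1/n, ..., (n-1)/n}
        return sum(G(k / n) for k in range(n)) / n

    limit = 0.5   # int_0^1 G(x) dx for the tent function above

    for n in (10, 100, 1000, 10000):
        print(n, EG_discrete(n), "vs limit", limit)

The discrete averages here are just Riemann sums, so their convergence to the integral is immediate; the point of the example is that this is exactly what convergence in distribution asks for.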

Example 6 For any natural number {n}, let {X_n} be a continuous random variable drawn uniformly from {[0,1/n]}, then {X_n} converges in distribution to the deterministic real number {0}. Thus we see that discrete (or even deterministic) random variables can emerge as the limit of continuous random variables.

Exercise 7 (Portmanteau theorem) Show that the properties (i) and (ii) in Proposition 4 are also equivalent to the following three statements:

  • (iii) One has {\limsup_{n \rightarrow \infty} {\bf P}( X_n \in K ) \leq {\bf P}(X \in K)} for all closed sets {K \subset {\bf R}}.
  • (iv) One has {\liminf_{n \rightarrow \infty} {\bf P}( X_n \in U ) \geq {\bf P}(X \in U)} for all open sets {U \subset {\bf R}}.
  • (v) For any Borel set {E \subset {\bf R}} whose topological boundary {\partial E} is such that {{\bf P}(X \in \partial E) = 0}, one has {\lim_{n \rightarrow \infty} {\bf P}(X_n \in E) = {\bf P}(X \in E)}.

(Note: to prove this theorem, you may wish to invoke Urysohn’s lemma. To deduce (iii) from (i), you may wish to start with the case of compact {K}.)

We can now state the famous central limit theorem:

Theorem 8 (Central limit theorem) Let {X_1,X_2,\dots} be iid copies of a scalar random variable {X} of finite mean {\mu := {\bf E} X} and finite non-zero variance {\sigma^2 := {\bf Var}(X)}. Let {S_n := X_1 + \dots + X_n}. Then the random variables {\frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu)} converge in distribution to a random variable with the standard normal distribution {N(0,1)} (that is to say, a random variable with probability density function {x \mapsto \frac{1}{\sqrt{2\pi}} e^{-x^2/2}}). Thus, by abuse of notation

\displaystyle  \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \rightarrow N(0,1).

In the normalised case {\mu=0, \sigma^2=1} when {X} has mean zero and unit variance, this simplifies to

\displaystyle  \frac{S_n}{\sqrt{n}} \rightarrow N(0,1).

Using Proposition 4 (and the fact that the cumulative distribution function associated to {N(0,1)} is continuous), the central limit theorem is equivalent to asserting that

\displaystyle  {\bf P}( \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq t ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx

as {n \rightarrow \infty} for any {t \in {\bf R}}, or equivalently that

\displaystyle  {\bf P}( a \leq \frac{\sqrt{n}}{\sigma} (\frac{S_n}{n} - \mu) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx

as {n \rightarrow \infty} for any fixed real numbers {a \leq b}.

Informally, one can think of the central limit theorem as asserting that {S_n} approximately behaves like it has distribution {N( n \mu, n \sigma^2 )} for large {n}, where {N(\mu,\sigma^2)} is the normal distribution with mean {\mu} and variance {\sigma^2}, that is to say the distribution with probability density function {x \mapsto \frac{1}{\sqrt{2\pi} \sigma} e^{-(x-\mu)^2/2\sigma^2}}. The integrals {\frac{1}{\sqrt{2\pi}} \int_{-\infty}^t e^{-x^2/2}\ dx} can be written in terms of the error function {\hbox{erf}} as {\frac{1}{2} + \frac{1}{2} \hbox{erf}(t/\sqrt{2})}.
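
The following Monte Carlo sketch (Python, assuming numpy; centred exponential variables are just one test distribution of mean zero and unit variance) compares the empirical distribution function of {S_n/\sqrt{n}} with the standard normal distribution function at a few sample points:

    import math
    import numpy as np

    rng = np.random.default_rng(1)
    n, trials = 200, 50000

    # X = Exp(1) - 1 has mean zero and unit variance
    X = rng.exponential(1.0, size=(trials, n)) - 1.0
    Z = X.sum(axis=1) / math.sqrt(n)    # samples of S_n / sqrt(n)

    def Phi(t):
        # standard normal distribution function
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(t, "empirical", (Z <= t).mean(), "normal", Phi(t))

Replacing the exponential samples by any other distribution of mean zero and unit variance should produce essentially the same output for large {n}; this insensitivity to the underlying distribution is the universality phenomenon discussed next.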

The central limit theorem is a basic example of the universality phenomenon in probability – many statistics involving a large system of many independent (or weakly dependent) variables (such as the normalised sums {\frac{\sqrt{n}}{\sigma}(\frac{S_n}{n}-\mu)}) end up having a universal asymptotic limit (in this case, the normal distribution), regardless of the precise makeup of the underlying random variable {X} that comprises that system. Indeed, the universality of the normal distribution is such that it arises in many contexts other than the fluctuation of iid random variables; the central limit theorem is merely the first place in probability theory where it makes a prominent appearance.

We will give several proofs of the central limit theorem in these notes; each of these proofs has its advantages and disadvantages, and each can be extended to prove many further results beyond the central limit theorem. We first give Lindeberg’s proof of the central limit theorem, based on exchanging (or swapping) each component {X_1,\dots,X_n} of the sum {S_n} in turn. This proof gives an accessible explanation as to why there should be a universal limit for the central limit theorem; one then computes directly with gaussians to verify that it is the normal distribution which is the universal limit. Our second proof is the most popular one taught in probability texts, namely the Fourier-analytic proof based around the concept of the characteristic function {t \mapsto {\bf E} e^{itX}} of a real random variable {X}. Thanks to the powerful identities and other results of Fourier analysis, this gives a quite short and direct proof of the central limit theorem, although the arguments may seem rather magical to readers who are not already familiar with Fourier methods. Finally, we give a proof based on the moment method, in the spirit of the arguments in the previous notes; this argument is more combinatorial, but is straightforward and is particularly robust, in particular being well equipped to handle some dependencies between components; we will illustrate this by proving the Erdos-Kac law in number theory by this method. Some further discussion of the central limit theorem (including some further proofs, such as one based on Stein’s method) can be found in this blog post. Some further variants of the central limit theorem, such as local limit theorems, stable laws, and large deviation inequalities, will be discussed in the next (and final) set of notes.

The following exercise illustrates the power of the central limit theorem, by establishing combinatorial estimates which would otherwise require the use of Stirling’s formula to establish.

Exercise 9 (De Moivre-Laplace theorem) Let {X} be a Bernoulli random variable, taking values in {\{0,1\}} with {{\bf P}(X=0)={\bf P}(X=1)=1/2}, thus {X} has mean {1/2} and variance {1/4}. Let {X_1,X_2,\dots} be iid copies of {X}, and write {S_n := X_1+\dots+X_n}.

  • (i) Show that {S_n} takes values in {\{0,\dots,n\}} with {{\bf P}(S_n=i) = \frac{1}{2^n} \binom{n}{i}}. (This is an example of a binomial distribution.)
  • (ii) Assume Stirling’s formula

    \displaystyle  n! = (1+o(1)) \sqrt{2\pi n} n^n e^{-n} \ \ \ \ \ (1)

    where {o(1)} is a function of {n} that goes to zero as {n \rightarrow \infty}. (A proof of this formula may be found in this previous blog post.) Using this formula, and without using the central limit theorem, show that

    \displaystyle  {\bf P}( a \leq 2\sqrt{n} (\frac{S_n}{n} - \frac{1}{2}) \leq b ) \rightarrow \frac{1}{\sqrt{2\pi}} \int_{a}^b e^{-x^2/2}\ dx

    as {n \rightarrow \infty} for any fixed real numbers {a<b}.

The above special case of the central limit theorem was first established by de Moivre and Laplace.
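
This limit can be checked numerically without any simulation (and without invoking Stirling’s formula), simply by summing binomial probabilities; a minimal sketch in Python, using only the standard library:

    import math

    def binomial_window(n, a, b):
        # P( a <= 2 sqrt(n) (S_n/n - 1/2) <= b ) for S_n ~ Binomial(n, 1/2)
        total = 0.0
        for i in range(n + 1):
            z = 2.0 * math.sqrt(n) * (i / n - 0.5)
            if a <= z <= b:
                total += math.comb(n, i) / 2 ** n
        return total

    def gaussian_window(a, b):
        # (1 / sqrt(2 pi)) * int_a^b e^{-x^2/2} dx, via the error function
        return 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))

    a, b = -1.0, 1.0
    for n in (100, 400, 1600):
        print(n, binomial_window(n, a, b), "vs", gaussian_window(a, b))

For {a=-1}, {b=1} the Gaussian value is about {0.6827}; the binomial sums approach it rather slowly, with an error of size comparable to {1/\sqrt{n}} coming from the atoms at the endpoints.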

We close this section with some basic facts about convergence in distribution that will be useful in the sequel.

Exercise 10 Let {X_1,X_2,\dots}, {Y_1,Y_2,\dots} be sequences of real random variables, and let {X,Y} be further real random variables.

  • (i) If {X} is deterministic, show that {X_n} converges in distribution to {X} if and only if {X_n} converges in probability to {X}.
  • (ii) Suppose that {X_n} is independent of {Y_n} for each {n}, and {X} independent of {Y}. Show that {X_n+iY_n} converges in distribution to {X+iY} if and only if {X_n} converges in distribution to {X} and {Y_n} converges in distribution to {Y}. (The shortest way to prove this is by invoking the Stone-Weierstrass theorem, but one can also proceed by proving some version of Proposition 4.) What happens if the independence hypothesis is dropped?
  • (iii) If {X_n} converges in distribution to {X}, show that for every {\varepsilon>0} there exists {K>0} such that {{\bf P}( |X_n| \geq K ) < \varepsilon} for all sufficiently large {n}. (That is to say, {X_n} is a tight sequence of random variables.)
  • (iv) Show that {X_n} converges in distribution to {X} if and only if, after extending the probability space model if necessary, one can find copies {Z_1,Z_2,\dots} and {Z} of {X_1,X_2,\dots} and {X} respectively such that {Z_n} converges almost surely to {Z}. (Hint: use the Skorohod representation, Exercise 29 of Notes 0.)
  • (v) If {X_1,X_2,\dots} converges in distribution to {X}, and {F: {\bf R} \rightarrow {\bf R}} is continuous, show that {F(X_1),F(X_2),\dots} converges in distribution to {F(X)}. Generalise this claim to the case when {X} takes values in an arbitrary locally compact Hausdorff space.
  • (vi) (Slutsky’s theorem) If {X_n} converges in distribution to {X}, and {Y_n} converges in probability to a deterministic limit {Y}, show that {X_n+Y_n} converges in distribution to {X+Y}, and {X_n Y_n} converges in distribution to {XY}. (Hint: either use (iv), or else use (iii) to control some error terms.) This statement combines particularly well with (i). What happens if {Y} is not assumed to be deterministic?
  • (vii) (Fatou lemma) If {G: {\bf R} \rightarrow [0,+\infty)} is continuous, and {X_n} converges in distribution to {X}, show that {\liminf_{n \rightarrow \infty} {\bf E} G(X_n) \geq {\bf E} G(X)}.
  • (viii) (Bounded convergence) If {G: {\bf R} \rightarrow {\bf R}} is continuous and bounded, and {X_n} converges in distribution to {X}, show that {\lim_{n \rightarrow \infty} {\bf E} G(X_n) = {\bf E} G(X)}.
  • (ix) (Dominated convergence) If {X_n} converges in distribution to {X}, and there is an absolutely integrable {Y} such that {|X_n| \leq Y} almost surely for all {n}, show that {\lim_{n \rightarrow \infty} {\bf E} X_n = {\bf E} X}.

For future reference we also mention (but will not prove) Prokhorov’s theorem that gives a partial converse to part (iii) of the above exercise:

Theorem 11 (Prokhorov’s theorem) Let {X_1,X_2,\dots} be a sequence of real random variables which is tight (that is, for every {\varepsilon>0} there exists {K>0} such that {{\bf P}(|X_n| \geq K) < \varepsilon} for all sufficiently large {n}). Then there exists a subsequence {X_{n_j}} which converges in distribution to some random variable {X} (which may possibly be modeled by a different probability space model than the {X_1,X_2,\dots}.)

The proof of this theorem relies on the Riesz representation theorem, and is beyond the scope of this course; but see for instance Exercise 29 of this previous blog post. (See also the closely related Helly selection theorem, covered in Exercise 30 of the same post.)


One of the major activities in probability theory is studying the various statistics that can be produced from a complex system with many components. One of the simplest possible systems one can consider is a finite sequence {X_1,\dots,X_n} or an infinite sequence {X_1,X_2,\dots} of jointly independent scalar random variables, with the case when the {X_i} are also identically distributed (i.e. the {X_i} are iid) being a model case of particular interest. (In some cases one may consider a triangular array {(X_{n,i})_{1 \leq i \leq n}} of scalar random variables, rather than a finite or infinite sequence.) There are many statistics of such sequences that one can study, but among the most basic such statistics are the partial sums

\displaystyle S_n := X_1 + \dots + X_n.

The first fundamental result about these sums is the law of large numbers (or LLN for short), which comes in two formulations, weak (WLLN) and strong (SLLN). To state these laws, we first must define the notion of convergence in probability.

Definition 1 Let {X_n} be a sequence of random variables taking values in a separable metric space {R = (R,d)} (e.g. the {X_n} could be scalar random variables, taking values in {{\bf R}} or {{\bf C}}), and let {X} be another random variable taking values in {R}. We say that {X_n} converges in probability to {X} if, for every radius {\varepsilon > 0}, one has {{\bf P}( d(X_n,X) > \varepsilon ) \rightarrow 0} as {n \rightarrow \infty}. Thus, if {X_n, X} are scalar, we have {X_n} converging to {X} in probability if {{\bf P}( |X_n-X| > \varepsilon ) \rightarrow 0} as {n \rightarrow \infty} for any given {\varepsilon > 0}.

The measure-theoretic analogue of convergence in probability is convergence in measure.

It is instructive to compare the notion of convergence in probability with almost sure convergence. It is easy to see that {X_n} converges almost surely to {X} if and only if, for every radius {\varepsilon > 0}, one has {{\bf P}( \bigvee_{n \geq N} (d(X_n,X)>\varepsilon) ) \rightarrow 0} as {N \rightarrow \infty}; thus, roughly speaking, convergence in probability is good for controlling how close a single random variable {X_n} is to its putative limiting value {X}, while almost sure convergence is good for controlling how close the entire tail {(X_n)_{n \geq N}} of a sequence of random variables is to its putative limit {X}.

We have the following easy relationships between convergence in probability and almost sure convergence:

Exercise 2 Let {X_n} be a sequence of scalar random variables, and let {X} be another scalar random variable.

  • (i) If {X_n \rightarrow X} almost surely, show that {X_n \rightarrow X} in probability. Give a counterexample to show that the converse does not necessarily hold.
  • (ii) Suppose that {\sum_n {\bf P}( |X_n-X| > \varepsilon ) < \infty} for all {\varepsilon > 0}. Show that {X_n \rightarrow X} almost surely. Give a counterexample to show that the converse does not necessarily hold.
  • (iii) If {X_n \rightarrow X} in probability, show that there is a subsequence {X_{n_j}} of the {X_n} such that {X_{n_j} \rightarrow X} almost surely.
  • (iv) If {X_n,X} are absolutely integrable and {{\bf E} |X_n-X| \rightarrow 0} as {n \rightarrow \infty}, show that {X_n \rightarrow X} in probability. Give a counterexample to show that the converse does not necessarily hold.
  • (v) (Urysohn subsequence principle) Suppose that every subsequence {X_{n_j}} of {X_n} has a further subsequence {X_{n_{j_k}}} that converges to {X} in probability. Show that {X_n} also converges to {X} in probability.
  • (vi) Does the Urysohn subsequence principle still hold if “in probability” is replaced with “almost surely” throughout?
  • (vii) If {X_n} converges in probability to {X}, and {F: {\bf R} \rightarrow {\bf R}} or {F: {\bf C} \rightarrow {\bf C}} is continuous, show that {F(X_n)} converges in probability to {F(X)}. More generally, if for each {i=1,\dots,k}, {X^{(i)}_n} is a sequence of scalar random variables that converge in probability to {X^{(i)}}, and {F: {\bf R}^k \rightarrow {\bf R}} or {F: {\bf C}^k \rightarrow {\bf C}} is continuous, show that {F(X^{(1)}_n,\dots,X^{(k)}_n)} converges in probability to {F(X^{(1)},\dots,X^{(k)})}. (Thus, for instance, if {X_n} and {Y_n} converge in probability to {X} and {Y} respectively, then {X_n + Y_n} and {X_n Y_n} converge in probability to {X+Y} and {XY} respectively.)
  • (viii) (Fatou’s lemma for convergence in probability) If {X_n} are non-negative and converge in probability to {X}, show that {{\bf E} X \leq \liminf_{n \rightarrow \infty} {\bf E} X_n}.
  • (ix) (Dominated convergence in probability) If {X_n} converge in probability to {X}, and one almost surely has {|X_n| \leq Y} for all {n} and some absolutely integrable {Y}, show that {{\bf E} X_n} converges to {{\bf E} X}.

Exercise 3 Let {X_1,X_2,\dots} be a sequence of scalar random variables converging in probability to another random variable {X}.

  • (i) Suppose that there is a random variable {Y} which is independent of {X_i} for each individual {i}. Show that {Y} is also independent of {X}.
  • (ii) Suppose that the {X_1,X_2,\dots} are jointly independent. Show that {X} is almost surely constant (i.e. there is a deterministic scalar {c} such that {X=c} almost surely).

We can now state the weak and strong law of large numbers, in the model case of iid random variables.

Theorem 4 (Law of large numbers, model case) Let {X_1, X_2, \dots} be an iid sequence of copies of an absolutely integrable random variable {X} (thus the {X_i} are independent and all have the same distribution as {X}). Write {\mu := {\bf E} X}, and for each natural number {n}, let {S_n} denote the random variable {S_n := X_1 + \dots + X_n}.

  • (i) (Weak law of large numbers) The random variables {S_n/n} converge in probability to {\mu}.
  • (ii) (Strong law of large numbers) The random variables {S_n/n} converge almost surely to {\mu}.

Informally: if {X_1,\dots,X_n} are iid with mean {\mu}, then {X_1 + \dots + X_n \approx \mu n} for {n} large. Clearly the strong law of large numbers implies the weak law, but the weak law is easier to prove (and has somewhat better quantitative estimates). There are several variants of the law of large numbers, for instance when one drops the hypothesis of identical distribution, or when the random variable {X} is not absolutely integrable, or if one seeks more quantitative bounds on the rate of convergence; we will discuss some of these variants below the fold.

It is instructive to compare the law of large numbers with what one can obtain from the Kolmogorov zero-one law, discussed in Notes 2. Observe that if the {X_n} are real-valued, then the limit superior {\limsup_{n \rightarrow \infty} S_n/n} and limit inferior {\liminf_{n \rightarrow \infty} S_n/n} are tail random variables in the sense that they are not affected if one changes finitely many of the {X_n}; in particular, events such as {\limsup_{n \rightarrow \infty} S_n/n > t} are tail events for any {t \in {\bf R}}. From this and the zero-one law we see that there must exist deterministic quantities {-\infty \leq \mu_- \leq \mu_+ \leq +\infty} such that {\limsup_{n \rightarrow \infty} S_n/n = \mu_+} and {\liminf_{n \rightarrow \infty} S_n/n = \mu_-} almost surely. The strong law of large numbers can then be viewed as the assertion that {\mu_- = \mu_+ = \mu} when {X} is absolutely integrable. On the other hand, the zero-one law argument does not require absolute integrability (and one can replace the denominator {n} by other functions of {n} that go to infinity as {n \rightarrow \infty}).

The law of large numbers asserts, roughly speaking, that the theoretical expectation {\mu} of a random variable {X} can be approximated by taking a large number of independent samples {X_1,\dots,X_n} of {X} and then forming the empirical mean {S_n/n = \frac{X_1+\dots+X_n}{n}}. This ability to approximate the theoretical statistics of a probability distribution through empirical data is one of the basic starting points for mathematical statistics, though this is not the focus of the course here. The tendency of statistics such as {S_n/n} to cluster closely around their mean value {\mu} is the simplest instance of the concentration of measure phenomenon, which is of tremendous significance not only within probability, but also in applications of probability to disciplines such as statistics, theoretical computer science, combinatorics, random matrix theory and high dimensional geometry. We will not discuss these topics much in this course, but see this previous blog post for some further discussion.

There are several ways to prove the law of large numbers (in both forms). One basic strategy is to use the moment method – controlling statistics such as {S_n/n} by computing moments such as the mean {{\bf E} S_n/n}, variance {{\bf E} |S_n/n - {\bf E} S_n/n|^2}, or higher moments such as {{\bf E} |S_n/n - {\bf E} S_n/n|^k} for {k = 4, 6, \dots}. The joint independence of the {X_i} makes such moments fairly easy to compute, requiring only some elementary combinatorics. A direct application of the moment method typically requires one to make a finite moment assumption such as {{\bf E} |X|^k < \infty}, but as we shall see, one can reduce fairly easily to this case by a truncation argument.
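
To illustrate the combinatorial flavour of these computations, take the normalised case of Rademacher signs {{\bf P}(X_i = \pm 1) = 1/2}: expanding {{\bf E} S_n^4} and using joint independence, the only surviving terms are the {n} diagonal terms and the {3n(n-1)} pairings, giving {{\bf E} S_n^4 = 3n^2 - 2n}. The following brute-force sketch (Python, standard library only; {n} kept small so that all {2^n} sign patterns can be enumerated exactly) confirms the identity:

    from itertools import product

    def fourth_moment(n):
        # E S_n^4 for S_n = X_1 + ... + X_n with iid Rademacher signs,
        # computed by exact enumeration of all 2^n sign patterns
        total = sum(sum(signs) ** 4 for signs in product((-1, 1), repeat=n))
        return total / 2 ** n

    for n in range(2, 13):
        predicted = 3 * n ** 2 - 2 * n    # n * E X^4 + 3 n (n-1) (E X^2)^2
        print(n, fourth_moment(n), predicted)

Applying Chebyshev’s inequality to this fourth moment gives {{\bf P}(|S_n/n| \geq \varepsilon) \leq (3n^2-2n)/(n^4 \varepsilon^4) = O(1/(n^2 \varepsilon^4))}, which is summable in {n}; by the Borel-Cantelli lemma this is already enough to give the strong law in this bounded case.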

For the strong law of large numbers, one can also use methods relating to the theory of martingales, such as stopping time arguments and maximal inequalities; we present some classical arguments of Kolmogorov in this regard.


We can now turn attention to one of the centerpiece universality results in random matrix theory, namely the Wigner semi-circle law for Wigner matrices. Recall from previous notes that a Wigner Hermitian matrix ensemble is a random matrix ensemble {M_n = (\xi_{ij})_{1 \leq i,j \leq n}} of Hermitian matrices (thus {\xi_{ij} = \overline{\xi_{ji}}}; this includes real symmetric matrices as an important special case), in which the upper-triangular entries {\xi_{ij}}, {i<j} are iid complex random variables with mean zero and unit variance, and the diagonal entries {\xi_{ii}} are iid real variables, independent of the upper-triangular entries, with bounded mean and variance. Particular special cases of interest include the Gaussian Orthogonal Ensemble (GOE), the symmetric random sign matrices (aka symmetric Bernoulli ensemble), and the Gaussian Unitary Ensemble (GUE).

In previous notes we saw that the operator norm of {M_n} was typically of size {O(\sqrt{n})}, so it is natural to work with the normalised matrix {\frac{1}{\sqrt{n}} M_n}. Accordingly, given any {n \times n} Hermitian matrix {M_n}, we can form the (normalised) empirical spectral distribution (or ESD for short)

\displaystyle  \mu_{\frac{1}{\sqrt{n}} M_n} := \frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j(M_n) / \sqrt{n}},

of {M_n}, where {\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)} are the (necessarily real) eigenvalues of {M_n}, counting multiplicity. The ESD is a probability measure, which can be viewed as a distribution of the normalised eigenvalues of {M_n}.

When {M_n} is a random matrix ensemble, then the ESD {\mu_{\frac{1}{\sqrt{n}} M_n}} is now a random measure – i.e. a random variable taking values in the space {\hbox{Pr}({\mathbb R})} of probability measures on the real line. (Thus, the distribution of {\mu_{\frac{1}{\sqrt{n}} M_n}} is a probability measure on probability measures!)

Now we consider the behaviour of the ESD of a sequence of Hermitian matrix ensembles {M_n} as {n \rightarrow \infty}. Recall from Notes 0 that for any sequence of random variables in a {\sigma}-compact metrisable space, one can define notions of convergence in probability and convergence almost surely. Specialising these definitions to the case of random probability measures on {{\mathbb R}}, and to deterministic limits, we see that a sequence of random ESDs {\mu_{\frac{1}{\sqrt{n}} M_n}} converge in probability (resp. converge almost surely) to a deterministic limit {\mu \in \hbox{Pr}({\mathbb R})} (which, confusingly enough, is a deterministic probability measure!) if, for every test function {\varphi \in C_c({\mathbb R})}, the quantities {\int_{\mathbb R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n}} converge in probability (resp. converge almost surely) to {\int_{\mathbb R} \varphi\ d\mu}.

Remark 1 As usual, convergence almost surely implies convergence in probability, but not vice versa. In the special case of random probability measures, there is an even weaker notion of convergence, namely convergence in expectation, defined as follows. Given a random ESD {\mu_{\frac{1}{\sqrt{n}} M_n}}, one can form its expectation {{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n} \in \hbox{Pr}({\mathbb R})}, defined via duality (the Riesz representation theorem) as

\displaystyle  \int_{\mathbb R} \varphi\ d{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n} := {\bf E} \int_{\mathbb R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n};

this probability measure can be viewed as the law of a random eigenvalue {\frac{1}{\sqrt{n}}\lambda_i(M_n)} drawn from a random matrix {M_n} from the ensemble. We then say that the ESDs converge in expectation to a limit {\mu \in \hbox{Pr}({\mathbb R})} if {{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n}} converges in the vague topology to {\mu}, thus

\displaystyle  {\bf E} \int_{\mathbb R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n} \rightarrow \int_{\mathbb R} \varphi\ d\mu

for all {\varphi \in C_c({\mathbb R})}.

In general, these notions of convergence are distinct from each other; but in practice, one often finds in random matrix theory that these notions are effectively equivalent to each other, thanks to the concentration of measure phenomenon.

Exercise 1 Let {M_n} be a sequence of {n \times n} Hermitian matrix ensembles, and let {\mu} be a continuous probability measure on {{\mathbb R}}.

  • Show that {\mu_{\frac{1}{\sqrt{n}} M_n}} converges almost surely to {\mu} if and only if {\mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)} converges almost surely to {\mu(-\infty,\lambda)} for all {\lambda \in {\mathbb R}}.
  • Show that {\mu_{\frac{1}{\sqrt{n}} M_n}} converges in probability to {\mu} if and only if {\mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)} converges in probability to {\mu(-\infty,\lambda)} for all {\lambda \in {\mathbb R}}.
  • Show that {\mu_{\frac{1}{\sqrt{n}} M_n}} converges in expectation to {\mu} if and only if {\mathop{\mathbb E} \mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)} converges to {\mu(-\infty,\lambda)} for all {\lambda \in {\mathbb R}}.

We can now state the Wigner semi-circular law.

Theorem 1 (Semicircular law) Let {M_n} be the top left {n \times n} minors of an infinite Wigner matrix {(\xi_{ij})_{i,j \geq 1}}. Then the ESDs {\mu_{\frac{1}{\sqrt{n}} M_n}} converge almost surely (and hence also in probability and in expectation) to the Wigner semi-circular distribution

\displaystyle  \mu_{sc} := \frac{1}{2\pi} (4-|x|^2)^{1/2}_+\ dx. \ \ \ \ \ (1)

A numerical example of this theorem in action can be seen at the MathWorld entry for this law.
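
Readers who want to experiment directly can compare the ESD with the semi-circular measure numerically; the following sketch (Python, assuming numpy; a real symmetric GOE-type matrix is used as one convenient Wigner ensemble) does this on a few intervals:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000

    # GOE-type Wigner matrix: symmetrise an iid Gaussian matrix so that the
    # off-diagonal entries have variance 1 (the diagonal then has variance 2)
    A = rng.standard_normal((n, n))
    M = (A + A.T) / np.sqrt(2)

    eigs = np.linalg.eigvalsh(M) / np.sqrt(n)   # normalised eigenvalues

    def semicircle_mass(a, b, steps=100000):
        # mass of [a, b] under the semi-circular distribution,
        # computed with a simple midpoint Riemann sum
        dx = (b - a) / steps
        x = a + dx * (np.arange(steps) + 0.5)
        return float(np.sum(np.sqrt(np.maximum(4.0 - x ** 2, 0.0)) / (2.0 * np.pi)) * dx)

    for (a, b) in [(-2, -1), (-1, 0), (0, 1), (1, 2)]:
        empirical = float(np.mean((eigs >= a) & (eigs < b)))
        print((a, b), "ESD", empirical, "semicircle", semicircle_mass(a, b))

Already at {n=2000} the empirical proportions typically agree with the semi-circular mass to about two decimal places.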

The semi-circular law nicely complements the upper Bai-Yin theorem from Notes 3, which asserts that (in the case when the entries have finite fourth moment, at least), the matrices {\frac{1}{\sqrt{n}} M_n} almost surely have operator norm at most {2+o(1)}. Note that the operator norm is the same thing as the largest magnitude of the eigenvalues. Because the semi-circular distribution (1) is supported on the interval {[-2,2]} with positive density on the interior of this interval, Theorem 1 easily supplies the lower Bai-Yin theorem, that the operator norm of {\frac{1}{\sqrt{n}} M_n} is almost surely at least {2-o(1)}, and thus (in the finite fourth moment case) the norm is in fact equal to {2+o(1)}. Indeed, we have just shown that the semi-circular law provides an alternate proof of the lower Bai-Yin bound (Proposition 11 of Notes 3).

As will hopefully become clearer in the next set of notes, the semi-circular law is the noncommutative (or free probability) analogue of the central limit theorem, with the semi-circular distribution (1) taking on the role of the normal distribution. Of course, there is a striking difference between the two distributions, in that the former is compactly supported while the latter is merely subgaussian. One reason for this is that the concentration of measure phenomenon is more powerful in the case of ESDs of Wigner matrices than it is for averages of iid variables; compare the concentration of measure results in Notes 3 with those in Notes 1.

There are several ways to prove (or at least to heuristically justify) the semi-circular law. In this set of notes we shall focus on the two most popular methods, the moment method and the Stieltjes transform method, together with a third (heuristic) method based on Dyson Brownian motion (Notes 3b). In the next set of notes we shall also study the free probability method, and in the set of notes after that we use the determinantal processes method (although this method is initially only restricted to highly symmetric ensembles, such as GUE).


Now that we have developed the basic probabilistic tools that we will need, we now turn to the main subject of this course, namely the study of random matrices. There are many random matrix models (aka matrix ensembles) of interest – far too many to all be discussed in a single course. We will thus focus on just a few simple models. First of all, we shall restrict attention to square matrices {M = (\xi_{ij})_{1 \leq i,j \leq n}}, where {n} is a (large) integer and the {\xi_{ij}} are real or complex random variables. (One can certainly study rectangular matrices as well, but for simplicity we will only look at the square case.) Then, we shall restrict to three main models:

  • Iid matrix ensembles, in which the coefficients {\xi_{ij}} are iid random variables with a single distribution {\xi_{ij} \equiv \xi}. We will often normalise {\xi} to have mean zero and unit variance. Examples of iid models include the Bernoulli ensemble (aka random sign matrices) in which the {\xi_{ij}} are signed Bernoulli variables, the real gaussian matrix ensemble in which {\xi_{ij} \equiv N(0,1)_{\bf R}}, and the complex gaussian matrix ensemble in which {\xi_{ij} \equiv N(0,1)_{\bf C}}.
  • Symmetric Wigner matrix ensembles, in which the upper triangular coefficients {\xi_{ij}}, {j \geq i} are jointly independent and real, but the lower triangular coefficients {\xi_{ij}}, {j<i} are constrained to equal their transposes: {\xi_{ij}=\xi_{ji}}. Thus {M} by construction is always a real symmetric matrix. Typically, the strictly upper triangular coefficients will be iid, as will the diagonal coefficients, but the two classes of coefficients may have a different distribution. One example here is the symmetric Bernoulli ensemble, in which both the strictly upper triangular and the diagonal entries are signed Bernoulli variables; another important example is the Gaussian Orthogonal Ensemble (GOE), in which the upper triangular entries have distribution {N(0,1)_{\bf R}} and the diagonal entries have distribution {N(0,2)_{\bf R}}. (We will explain the reason for this discrepancy later.)
  • Hermitian Wigner matrix ensembles, in which the upper triangular coefficients are jointly independent, with the diagonal entries being real and the strictly upper triangular entries complex, and the lower triangular coefficients {\xi_{ij}}, {j<i} are constrained to equal their adjoints: {\xi_{ij} = \overline{\xi_{ji}}}. Thus {M} by construction is always a Hermitian matrix. This class of ensembles contains the symmetric Wigner ensembles as a subclass. Another very important example is the Gaussian Unitary Ensemble (GUE), in which all off-diagonal entries have distribution {N(0,1)_{\bf C}}, but the diagonal entries have distribution {N(0,1)_{\bf R}}.

Given a matrix ensemble {M}, there are many statistics of {M} that one may wish to consider, e.g. the eigenvalues or singular values of {M}, the trace and determinant, etc. In these notes we will focus on a basic statistic, namely the operator norm

\displaystyle \| M \|_{op} := \sup_{x \in {\bf C}^n: |x|=1} |Mx| \ \ \ \ \ (1)

of the matrix {M}. This is an interesting quantity in its own right, but also serves as a basic upper bound on many other quantities. (For instance, {\|M\|_{op}} is also the largest singular value {\sigma_1(M)} of {M} and thus dominates the other singular values; similarly, all eigenvalues {\lambda_i(M)} of {M} clearly have magnitude at most {\|M\|_{op}}.) Because of this, it is particularly important to get good upper tail bounds

\displaystyle {\bf P}( \|M\|_{op} \geq \lambda ) \leq \ldots

on this quantity, for various thresholds {\lambda}. (Lower tail bounds are also of interest, of course; for instance, they give us confidence that the upper tail bounds are sharp.) Also, as we shall see, the problem of upper bounding {\|M\|_{op}} can be viewed as a non-commutative analogue of upper bounding the quantity {|S_n|} studied in Notes 1. (The analogue of the central limit theorem in Notes 2 is the Wigner semi-circular law, which will be studied in the next set of notes.)

An {n \times n} matrix consisting entirely of {1}s has an operator norm of exactly {n}, as can for instance be seen from the Cauchy-Schwarz inequality. More generally, any matrix whose entries are all uniformly {O(1)} will have an operator norm of {O(n)} (which can again be seen from Cauchy-Schwarz, or alternatively from Schur’s test, or from a computation of the Frobenius norm). However, this argument does not take advantage of possible cancellations in {M}. Indeed, from analogy with concentration of measure, when the entries of the matrix {M} are independent, bounded and have mean zero, we expect the operator norm to be of size {O(\sqrt{n})} rather than {O(n)}. We shall see shortly that this intuition is indeed correct. (One can see, though, that the mean zero hypothesis is important; from the triangle inequality we see that if we add the all-ones matrix (for instance) to a random matrix with mean zero, to obtain a random matrix whose coefficients all have mean {1}, then at least one of the two random matrices necessarily has operator norm at least {n/2}.)
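
As a quick numerical illustration of this {O(\sqrt{n})} heuristic, the following sketch (Python, assuming numpy; iid random sign matrices are used as one example) computes the operator norm for a few sizes and compares it with {2\sqrt{n}}:

    import numpy as np

    rng = np.random.default_rng(3)

    for n in (100, 400, 1600):
        # iid random sign matrix: independent entries equal to +1 or -1
        M = rng.choice([-1.0, 1.0], size=(n, n))
        op_norm = np.linalg.norm(M, 2)    # largest singular value of M
        print(n, "operator norm", op_norm, "2*sqrt(n)", 2 * np.sqrt(n))

Empirically the ratio {\|M\|_{op}/\sqrt{n}} comes out close to {2}; the fine-scale fluctuations around this value are described by the Tracy-Widom law mentioned at the end of this introduction.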

As mentioned before, there is an analogy here with the concentration of measure phenomenon, and many of the tools used in the latter (e.g. the moment method) will also appear here. (Indeed, we will be able to use some of the concentration inequalities from Notes 1 directly to help control {\|M\|_{op}} and related quantities.) Similarly, just as many of the tools from concentration of measure could be adapted to help prove the central limit theorem, several of the tools seen here will be of use in deriving the semi-circular law.

The most advanced knowledge we have on the operator norm is given by the Tracy-Widom law, which not only tells us where the operator norm is concentrated in (it turns out, for instance, that for a Wigner matrix (with some additional technical assumptions), it is concentrated in the range {[2\sqrt{n} - O(n^{-1/6}), 2\sqrt{n} + O(n^{-1/6})]}), but what its distribution in that range is. While the methods in this set of notes can eventually be pushed to establish this result, this is far from trivial, and will only be briefly discussed here. (We may return to the Tracy-Widom law later in this course, though.)


Suppose we have a large number of scalar random variables {X_1,\ldots,X_n}, which each have bounded size on average (e.g. their mean and variance could be {O(1)}). What can one then say about their sum {S_n := X_1+\ldots+X_n}? If each individual summand {X_i} varies in an interval of size {O(1)}, then their sum of course varies in an interval of size {O(n)}. However, a remarkable phenomenon, known as concentration of measure, asserts that assuming a sufficient amount of independence between the component variables {X_1,\ldots,X_n}, this sum sharply concentrates in a much narrower range, typically in an interval of size {O(\sqrt{n})}. This phenomenon is quantified by a variety of large deviation inequalities that give upper bounds (often exponential in nature) on the probability that such a combined random variable deviates significantly from its mean. The same phenomenon applies not only to linear expressions such as {S_n = X_1+\ldots+X_n}, but more generally to nonlinear combinations {F(X_1,\ldots,X_n)} of such variables, provided that the nonlinear function {F} is sufficiently regular (in particular, if it is Lipschitz, either separately in each variable, or jointly in all variables).

The basic intuition here is that it is difficult for a large number of independent variables {X_1,\ldots,X_n} to “work together” to simultaneously pull a sum {X_1+\ldots+X_n} or a more general combination {F(X_1,\ldots,X_n)} too far away from its mean. Independence here is the key; concentration of measure results typically fail if the {X_i} are too highly correlated with each other.

There are many applications of the concentration of measure phenomenon, but we will focus on a specific application which is useful in the random matrix theory topics we will be studying, namely on controlling the behaviour of random {n}-dimensional vectors with independent components, and in particular on the distance between such random vectors and a given subspace.

Once one has a sufficient amount of independence, the concentration of measure tends to be sub-gaussian in nature; thus the probability that one is at least {\lambda} standard deviations from the mean tends to drop off like {C \exp(-c\lambda^2)} for some {C,c > 0}. In particular, one is {O( \log^{1/2} n )} standard deviations from the mean with high probability, and {O( \log^{1/2+\epsilon} n)} standard deviations from the mean with overwhelming probability. Indeed, concentration of measure is our primary tool for ensuring that various events hold with overwhelming probability (other moment methods can give high probability, but have difficulty ensuring overwhelming probability).
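
As a concrete instance of the subspace-distance application mentioned above: if {x} is a random vector with iid mean zero, unit variance entries (random signs, say) and {V} is a fixed {d}-dimensional subspace of {{\bf R}^n}, then {{\bf E} \hbox{dist}(x,V)^2 = n-d}, and concentration of measure predicts that the fluctuations of {\hbox{dist}(x,V)} around {\sqrt{n-d}} are only of size {O(1)}. A minimal sketch (Python, assuming numpy; the subspace is generated at random purely for illustration):

    import numpy as np

    rng = np.random.default_rng(4)
    n, d, trials = 1000, 300, 2000

    # a fixed d-dimensional subspace of R^n, described by an orthonormal basis Q
    Q, _ = np.linalg.qr(rng.standard_normal((n, d)))

    dists = np.empty(trials)
    for t in range(trials):
        x = rng.choice([-1.0, 1.0], size=n)   # random sign vector
        proj = Q @ (Q.T @ x)                  # orthogonal projection onto the subspace
        dists[t] = np.linalg.norm(x - proj)   # distance from x to the subspace

    print("mean distance", dists.mean(), "vs sqrt(n-d)", np.sqrt(n - d))
    print("std of distance", dists.std())     # stays O(1) as n grows

The standard deviation of the distance stays bounded rather than growing with {n}, which is the concentration phenomenon at work.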

This is only a brief introduction to the concentration of measure phenomenon. A systematic study of this topic can be found in this book by Ledoux.


Let X be a real-valued random variable, and let X_1, X_2, X_3, ... be an infinite sequence of independent and identically distributed copies of X. Let \overline{X}_n := \frac{1}{n}(X_1 + \ldots + X_n) be the empirical averages of this sequence. A fundamental theorem in probability theory is the law of large numbers, which comes in both a weak and a strong form:

Weak law of large numbers. Suppose that the first moment {\Bbb E} |X| of X is finite. Then \overline{X}_n converges in probability to {\Bbb E} X, thus \lim_{n \to \infty} {\Bbb P}( |\overline{X}_n - {\Bbb E} X| \geq \varepsilon ) = 0 for every \varepsilon > 0.

Strong law of large numbers. Suppose that the first moment {\Bbb E} |X| of X is finite. Then \overline{X}_n converges almost surely to {\Bbb E} X, thus {\Bbb P}( \lim_{n \to \infty} \overline{X}_n = {\Bbb E} X ) = 1.

[The concepts of convergence in probability and almost sure convergence in probability theory are specialisations of the concepts of convergence in measure and pointwise convergence almost everywhere in measure theory.]

(If one strengthens the first moment assumption to that of finiteness of the second moment {\Bbb E}|X|^2, then we of course have a more precise statement than the (weak) law of large numbers, namely the central limit theorem, but I will not discuss that theorem here.  With even more hypotheses on X, one similarly has more precise versions of the strong law of large numbers, such as the Chernoff inequality, which I will again not discuss here.)

The weak law is easy to prove, but the strong law (which of course implies the weak law, by Egoroff’s theorem) is more subtle, and in fact the proof of this law (assuming just finiteness of the first moment) usually only appears in advanced graduate texts. So I thought I would present a proof here of both laws, which proceeds by the standard techniques of the moment method and truncation. The emphasis in this exposition will be on motivation and methods rather than brevity and strength of results; there do exist proofs of the strong law in the literature that have been compressed down to the size of one page or less, but this is not my goal here.
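
As a small numerical illustration of the strong law under just this first moment hypothesis, the following sketch (in Python, assuming numpy; a Pareto distribution with infinite variance but finite mean, sampled by inverting its distribution function, serves as the test case) tracks the running empirical averages \overline{X}_n:

    import numpy as np

    rng = np.random.default_rng(5)
    N = 10 ** 6

    # Pareto samples with tail exponent 1.5: infinite variance, but finite
    # mean 1.5 / (1.5 - 1) = 3; sampled by inverting the distribution function
    U = 1.0 - rng.random(N)            # uniform on (0, 1]
    X = U ** (-1.0 / 1.5)

    running_means = np.cumsum(X) / np.arange(1, N + 1)
    for n in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
        print(n, running_means[n - 1], "target mean", 3.0)

Because the variance here is infinite, the convergence is noticeably slower than in the finite variance case, but the running averages still settle down to the mean, as the strong law guarantees.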

