
One of the major activities in probability theory is studying the various statistics that can be produced from a complex system with many components. One of the simplest possible systems one can consider is a finite sequence ${X_1,\dots,X_n}$ or an infinite sequence ${X_1,X_2,\dots}$ of jointly independent scalar random variables, with the case when the ${X_i}$ are also identically distributed (i.e. the ${X_i}$ are iid) being a model case of particular interest. (In some cases one may consider a triangular array ${(X_{n,i})_{1 \leq i \leq n}}$ of scalar random variables, rather than a finite or infinite sequence.) There are many statistics of such sequences that one can study, but among the most basic are the partial sums

$\displaystyle S_n := X_1 + \dots + X_n.$

The first fundamental result about these sums is the law of large numbers (or LLN for short), which comes in two formulations, weak (WLLN) and strong (SLLN). To state these laws, we first must define the notion of convergence in probability.

Definition 1 Let ${X_n}$ be a sequence of random variables taking values in a separable metric space ${R = (R,d)}$ (e.g. the ${X_n}$ could be scalar random variables, taking values in ${{\bf R}}$ or ${{\bf C}}$), and let ${X}$ be another random variable taking values in ${R}$. We say that ${X_n}$ converges in probability to ${X}$ if, for every radius ${\varepsilon > 0}$, one has ${{\bf P}( d(X_n,X) > \varepsilon ) \rightarrow 0}$ as ${n \rightarrow \infty}$. Thus, if ${X_n, X}$ are scalar, we have ${X_n}$ converging to ${X}$ in probability if ${{\bf P}( |X_n-X| > \varepsilon ) \rightarrow 0}$ as ${n \rightarrow \infty}$ for any given ${\varepsilon > 0}$.

The measure-theoretic analogue of convergence in probability is convergence in measure.

It is instructive to compare the notion of convergence in probability with almost sure convergence. It is easy to see that ${X_n}$ converges almost surely to ${X}$ if and only if, for every radius ${\varepsilon > 0}$, one has ${{\bf P}( \bigvee_{n \geq N} (d(X_n,X)>\varepsilon) ) \rightarrow 0}$ as ${N \rightarrow \infty}$; thus, roughly speaking, convergence in probability is good for controlling how close a single random variable ${X_n}$ is to its putative limiting value ${X}$, while almost sure convergence is good for controlling how close the entire tail ${(X_n)_{n \geq N}}$ of a sequence of random variables is to its putative limit ${X}$.
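This distinction can be made concrete with the standard dyadic "typewriter" example: a sequence of indicator random variables whose supports sweep repeatedly across the unit interval. The following Python sketch (with an illustrative indexing convention, writing ${n = 2^k + j}$) exhibits a sequence that converges to zero in probability but not almost surely.

```python
def X(n, u):
    # Typewriter sequence on [0,1): write n = 2**k + j with 0 <= j < 2**k;
    # X_n is the indicator of the dyadic interval [j/2**k, (j+1)/2**k).
    k = n.bit_length() - 1
    j = n - (1 << k)
    return 1 if j / 2**k <= u < (j + 1) / 2**k else 0

# P(d(X_n, 0) > eps) = P(X_n = 1) = 2**(-k) -> 0, so X_n -> 0 in probability.
probs = [2.0 ** -(n.bit_length() - 1) for n in range(1, 1025)]

# But for any fixed sample point u, X_n(u) = 1 exactly once in each dyadic
# block {2**k, ..., 2**(k+1) - 1}, so X_n(u) = 1 infinitely often and the
# sequence does not converge to 0 almost surely.
u = 0.3
hits = [n for n in range(1, 1025) if X(n, u) == 1]
```

Here ${{\bf P}(X_n = 1)}$ tends to zero, yet every sample path returns to the value ${1}$ infinitely often, matching the "single variable versus entire tail" contrast described above.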

We have the following easy relationships between convergence in probability and almost sure convergence:

Exercise 2 Let ${X_n}$ be a sequence of scalar random variables, and let ${X}$ be another scalar random variable.

• (i) If ${X_n \rightarrow X}$ almost surely, show that ${X_n \rightarrow X}$ in probability. Give a counterexample to show that the converse does not necessarily hold.
• (ii) Suppose that ${\sum_n {\bf P}( |X_n-X| > \varepsilon ) < \infty}$ for all ${\varepsilon > 0}$. Show that ${X_n \rightarrow X}$ almost surely. Give a counterexample to show that the converse does not necessarily hold.
• (iii) If ${X_n \rightarrow X}$ in probability, show that there is a subsequence ${X_{n_j}}$ of the ${X_n}$ such that ${X_{n_j} \rightarrow X}$ almost surely.
• (iv) If ${X_n,X}$ are absolutely integrable and ${{\bf E} |X_n-X| \rightarrow 0}$ as ${n \rightarrow \infty}$, show that ${X_n \rightarrow X}$ in probability. Give a counterexample to show that the converse does not necessarily hold.
• (v) (Urysohn subsequence principle) Suppose that every subsequence ${X_{n_j}}$ of ${X_n}$ has a further subsequence ${X_{n_{j_k}}}$ that converges to ${X}$ in probability. Show that ${X_n}$ also converges to ${X}$ in probability.
• (vi) Does the Urysohn subsequence principle still hold if “in probability” is replaced with “almost surely” throughout?
• (vii) If ${X_n}$ converges in probability to ${X}$, and ${F: {\bf R} \rightarrow {\bf R}}$ or ${F: {\bf C} \rightarrow {\bf C}}$ is continuous, show that ${F(X_n)}$ converges in probability to ${F(X)}$. More generally, if for each ${i=1,\dots,k}$, ${X^{(i)}_n}$ is a sequence of scalar random variables converging in probability to ${X^{(i)}}$, and ${F: {\bf R}^k \rightarrow {\bf R}}$ or ${F: {\bf C}^k \rightarrow {\bf C}}$ is continuous, show that ${F(X^{(1)}_n,\dots,X^{(k)}_n)}$ converges in probability to ${F(X^{(1)},\dots,X^{(k)})}$. (Thus, for instance, if ${X_n}$ and ${Y_n}$ converge in probability to ${X}$ and ${Y}$ respectively, then ${X_n + Y_n}$ and ${X_n Y_n}$ converge in probability to ${X+Y}$ and ${XY}$ respectively.)
• (viii) (Fatou’s lemma for convergence in probability) If ${X_n}$ are non-negative and converge in probability to ${X}$, show that ${{\bf E} X \leq \liminf_{n \rightarrow \infty} {\bf E} X_n}$.
• (ix) (Dominated convergence in probability) If ${X_n}$ converge in probability to ${X}$, and one almost surely has ${|X_n| \leq Y}$ for all ${n}$ and some absolutely integrable ${Y}$, show that ${{\bf E} X_n}$ converges to ${{\bf E} X}$.

Exercise 3 Let ${X_1,X_2,\dots}$ be a sequence of scalar random variables converging in probability to another random variable ${X}$.

• (i) Suppose that there is a random variable ${Y}$ which is independent of ${X_i}$ for each individual ${i}$. Show that ${Y}$ is also independent of ${X}$.
• (ii) Suppose that the ${X_1,X_2,\dots}$ are jointly independent. Show that ${X}$ is almost surely constant (i.e. there is a deterministic scalar ${c}$ such that ${X=c}$ almost surely).

We can now state the weak and strong law of large numbers, in the model case of iid random variables.

Theorem 4 (Law of large numbers, model case) Let ${X_1, X_2, \dots}$ be an iid sequence of copies of an absolutely integrable random variable ${X}$ (thus the ${X_i}$ are independent and all have the same distribution as ${X}$). Write ${\mu := {\bf E} X}$, and for each natural number ${n}$, let ${S_n}$ denote the random variable ${S_n := X_1 + \dots + X_n}$.

• (i) (Weak law of large numbers) The random variables ${S_n/n}$ converge in probability to ${\mu}$.
• (ii) (Strong law of large numbers) The random variables ${S_n/n}$ converge almost surely to ${\mu}$.

Informally: if ${X_1,\dots,X_n}$ are iid with mean ${\mu}$, then ${X_1 + \dots + X_n \approx \mu n}$ for ${n}$ large. Clearly the strong law of large numbers implies the weak law, but the weak law is easier to prove (and has somewhat better quantitative estimates). There are several variants of the law of large numbers, for instance when one drops the hypothesis of identical distribution, or when the random variable ${X}$ is not absolutely integrable, or if one seeks more quantitative bounds on the rate of convergence; we will discuss some of these variants below the fold.
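As a quick numerical sanity check of the weak law, one can estimate ${{\bf P}( |S_n/n - \mu| > \varepsilon )}$ by Monte Carlo for a few values of ${n}$. The Python sketch below uses Uniform(0,1) samples (so ${\mu = 1/2}$); the seed, sample sizes, threshold ${\varepsilon}$, and trial count are all arbitrary illustrative choices.

```python
import random

random.seed(0)  # fixed seed for reproducibility

def mean_of_n(n):
    # Empirical mean S_n / n of n iid Uniform(0,1) samples (true mean 0.5).
    return sum(random.random() for _ in range(n)) / n

# Monte Carlo estimate of P(|S_n/n - mu| > eps) for several values of n:
eps, trials = 0.05, 400
freq = {n: sum(abs(mean_of_n(n) - 0.5) > eps for _ in range(trials)) / trials
        for n in (10, 100, 1000)}
# The estimated exceedance probabilities decrease towards zero as n grows.
```

For this choice of parameters the estimated probability is large at ${n=10}$, small at ${n=100}$, and essentially zero at ${n=1000}$, consistent with convergence in probability.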

It is instructive to compare the law of large numbers with what one can obtain from the Kolmogorov zero-one law, discussed in Notes 2. Observe that if the ${X_n}$ are real-valued, then the limit superior ${\limsup_{n \rightarrow \infty} S_n/n}$ and ${\liminf_{n \rightarrow \infty} S_n/n}$ are tail random variables in the sense that they are not affected if one changes finitely many of the ${X_n}$; in particular, events such as ${\limsup_{n \rightarrow \infty} S_n/n > t}$ are tail events for any ${t \in {\bf R}}$. From this and the zero-one law we see that there must exist deterministic quantities ${-\infty \leq \mu_- \leq \mu_+ \leq +\infty}$ such that ${\limsup_{n \rightarrow \infty} S_n/n = \mu_+}$ and ${\liminf_{n \rightarrow \infty} S_n/n = \mu_-}$ almost surely. The strong law of large numbers can then be viewed as the assertion that ${\mu_- = \mu_+ = \mu}$ when ${X}$ is absolutely integrable. On the other hand, the zero-one law argument does not require absolute integrability (and one can replace the denominator ${n}$ by other functions of ${n}$ that go to infinity as ${n \rightarrow \infty}$).

The law of large numbers asserts, roughly speaking, that the theoretical expectation ${\mu}$ of a random variable ${X}$ can be approximated by taking a large number of independent samples ${X_1,\dots,X_n}$ of ${X}$ and then forming the empirical mean ${S_n/n = \frac{X_1+\dots+X_n}{n}}$. This ability to approximate the theoretical statistics of a probability distribution through empirical data is one of the basic starting points for mathematical statistics, though this is not the focus of the course here. The tendency of statistics such as ${S_n/n}$ to cluster closely around their mean value ${\mu}$ is the simplest instance of the concentration of measure phenomenon, which is of tremendous significance not only within probability, but also in applications of probability to disciplines such as statistics, theoretical computer science, combinatorics, random matrix theory and high dimensional geometry. We will not discuss these topics much in this course, but see this previous blog post for some further discussion.

There are several ways to prove the law of large numbers (in both forms). One basic strategy is to use the moment method – controlling statistics such as ${S_n/n}$ by computing moments such as the mean ${{\bf E} S_n/n}$, variance ${{\bf E} |S_n/n - {\bf E} S_n/n|^2}$, or higher moments such as ${{\bf E} |S_n/n - {\bf E} S_n/n|^k}$ for ${k = 4, 6, \dots}$. The joint independence of the ${X_i}$ makes such moments fairly easy to compute, requiring only some elementary combinatorics. A direct application of the moment method typically requires one to make a finite moment assumption such as ${{\bf E} |X|^k < \infty}$, but as we shall see, one can reduce fairly easily to this case by a truncation argument.
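To illustrate the moment method in its simplest form: if one additionally assumes finite variance ${\sigma^2 := {\bf E} |X - \mu|^2 < \infty}$, the second moment already gives the weak law. By joint independence the variance of ${S_n}$ is additive, so

$\displaystyle {\bf E} |S_n/n - \mu|^2 = \frac{1}{n^2} \sum_{i=1}^n {\bf E} |X_i - \mu|^2 = \frac{\sigma^2}{n},$

and hence by Chebyshev's inequality

$\displaystyle {\bf P}( |S_n/n - \mu| > \varepsilon ) \leq \frac{\sigma^2}{n \varepsilon^2} \rightarrow 0$

for every fixed ${\varepsilon > 0}$.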

For the strong law of large numbers, one can also use methods relating to the theory of martingales, such as stopping time arguments and maximal inequalities; we present some classical arguments of Kolmogorov in this regard.

Let X be a real-valued random variable, and let $X_1, X_2, X_3, \ldots$ be an infinite sequence of independent and identically distributed copies of X. Let $\overline{X}_n := \frac{1}{n}(X_1 + \ldots + X_n)$ be the empirical averages of this sequence. A fundamental theorem in probability theory is the law of large numbers, which comes in both a weak and a strong form:

Weak law of large numbers. Suppose that the first moment ${\Bbb E} |X|$ of X is finite. Then $\overline{X}_n$ converges in probability to ${\Bbb E} X$, thus $\lim_{n \to \infty} {\Bbb P}( |\overline{X}_n - {\Bbb E} X| \geq \varepsilon ) = 0$ for every $\varepsilon > 0$.

Strong law of large numbers. Suppose that the first moment ${\Bbb E} |X|$ of X is finite. Then $\overline{X}_n$ converges almost surely to ${\Bbb E} X$, thus ${\Bbb P}( \lim_{n \to \infty} \overline{X}_n = {\Bbb E} X ) = 1$.

[The concepts of convergence in probability and almost sure convergence in probability theory are specialisations of the concepts of convergence in measure and pointwise convergence almost everywhere in measure theory.]
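The strong law can also be illustrated numerically along a single sample path: almost sure convergence says that the running means $\overline{X}_n$ settle down along (almost) every individual realisation, not merely at each fixed n. The following Python sketch follows one seeded path of Uniform(0,1) samples; the distribution, seed, and horizon are arbitrary illustrative choices.

```python
import random

random.seed(1)  # one fixed sample path

# Running means \bar X_n along a single path of iid Uniform(0,1) samples:
total, path = 0.0, []
for n in range(1, 200001):
    total += random.random()
    path.append(total / n)

# Almost sure convergence concerns the entire tail of the path: here the
# whole tail beyond n = 100000 stays uniformly close to the mean 0.5.
tail_max_dev = max(abs(x - 0.5) for x in path[100000:])
```

In contrast to the weak-law experiment (which looks at one value of n at a time across many independent runs), here the quantity of interest is the maximal deviation over the whole tail of a single run.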

(If one strengthens the first moment assumption to that of finiteness of the second moment ${\Bbb E}|X|^2$, then we of course have a more precise statement than the (weak) law of large numbers, namely the central limit theorem, but I will not discuss that theorem here.  With even more hypotheses on X, one similarly has more precise versions of the strong law of large numbers, such as the Chernoff inequality, which I will again not discuss here.)

The weak law is easy to prove, but the strong law (which of course implies the weak law, by Egoroff’s theorem) is more subtle, and in fact the proof of this law (assuming just finiteness of the first moment) usually only appears in advanced graduate texts. So I thought I would present a proof here of both laws, which proceeds by the standard techniques of the moment method and truncation. The emphasis in this exposition will be on motivation and methods rather than brevity and strength of results; there do exist proofs of the strong law in the literature that have been compressed down to the size of one page or less, but this is not my goal here.