
One of the major activities in probability theory is studying the various statistics that can be produced from a complex system with many components. One of the simplest possible systems one can consider is a finite sequence $X_1, \dots, X_n$ or an infinite sequence $X_1, X_2, \dots$ of jointly independent scalar random variables, with the case when the $X_i$ are also identically distributed (i.e. the $X_i$ are iid) being a model case of particular interest. (In some cases one may consider a triangular array of scalar random variables, rather than a finite or infinite sequence.) There are many statistics of such sequences that one can study, but one of the most basic such statistics is the partial sum

$$S_n := X_1 + \dots + X_n.$$

The first fundamental result about these sums is the law of large numbers (or LLN for short), which comes in two formulations, weak (WLLN) and strong (SLLN). To state these laws, we first must define the notion of convergence in probability.

Definition 1. Let $X_1, X_2, \dots$ be a sequence of random variables taking values in a separable metric space $R = (R, d)$ (e.g. the $X_n$ could be scalar random variables, taking values in ${\bf R}$ or ${\bf C}$), and let $X$ be another random variable taking values in $R$. We say that $X_n$ converges in probability to $X$ if, for every radius $\varepsilon > 0$, one has ${\bf P}( d(X_n, X) > \varepsilon ) \to 0$ as $n \to \infty$. Thus, if $X_n, X$ are scalar, we have $X_n$ converging to $X$ in probability if ${\bf P}( |X_n - X| > \varepsilon ) \to 0$ as $n \to \infty$ for any given $\varepsilon > 0$.

The measure-theoretic analogue of convergence in probability is convergence in measure.

It is instructive to compare the notion of convergence in probability with almost sure convergence. It is easy to see that $X_n$ converges almost surely to $X$ if and only if, for every radius $\varepsilon > 0$, one has ${\bf P}( \sup_{m \geq n} d(X_m, X) > \varepsilon ) \to 0$ as $n \to \infty$; thus, roughly speaking, convergence in probability is good for controlling how a single random variable $X_n$ is close to its putative limiting value $X$, while almost sure convergence is good for controlling how the entire *tail* $(X_m)_{m \geq n}$ of a sequence of random variables is close to its putative limit $X$.
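This distinction can be made concrete with the standard "typewriter" counterexample: indicators of dyadic subintervals sweeping across $[0,1)$ converge to zero in probability (the interval lengths shrink) but not almost surely (every sample point is hit infinitely often). The following small Python sketch, with all names my own, illustrates this numerically; it is not part of the notes.

```python
import random

# "Typewriter" sequence: sample omega uniformly from [0,1); X_n is the
# indicator of the n-th dyadic interval sweeping across [0,1).  Interval
# lengths shrink, so P(X_n = 1) -> 0 (convergence in probability to 0),
# yet every omega lies in infinitely many of the intervals, so
# X_n(omega) = 1 infinitely often and there is no almost sure convergence.

def typewriter_interval(n):
    # n-th interval: writing n = 2^k + j with 0 <= j < 2^k,
    # take [j/2^k, (j+1)/2^k)
    k = n.bit_length() - 1
    j = n - 2**k
    return j / 2**k, (j + 1) / 2**k

def X(n, omega):
    a, b = typewriter_interval(n)
    return 1 if a <= omega < b else 0

random.seed(0)
omegas = [random.random() for _ in range(10_000)]

# P(X_n = 1) equals the length of the n-th interval, which tends to 0:
for n in (10, 100, 1000):
    p = sum(X(n, w) for w in omegas) / len(omegas)
    print(f"n={n:5d}  estimated P(X_n = 1) = {p:.4f}")

# ...but along any fixed sample path, X_n = 1 at every dyadic level:
omega = omegas[0]
hits = [n for n in range(1, 2000) if X(n, omega) == 1]
print("indices n < 2000 with X_n(omega) = 1:", hits)
```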

We have the following easy relationships between convergence in probability and almost sure convergence:

Exercise 2. Let $X_1, X_2, \dots$ be a sequence of scalar random variables, and let $X$ be another scalar random variable.

- (i) If $X_n \to X$ almost surely, show that $X_n \to X$ in probability. Give a counterexample to show that the converse does not necessarily hold.
- (ii) Suppose that $\sum_{n=1}^\infty {\bf P}( |X_n - X| > \varepsilon ) < \infty$ for all $\varepsilon > 0$. Show that $X_n \to X$ almost surely. Give a counterexample to show that the converse does not necessarily hold.
- (iii) If $X_n \to X$ in probability, show that there is a subsequence $X_{n_1}, X_{n_2}, \dots$ of the $X_n$ such that $X_{n_j} \to X$ almost surely.
- (iv) If $X_n, X$ are absolutely integrable and ${\bf E} |X_n - X| \to 0$ as $n \to \infty$, show that $X_n \to X$ in probability. Give a counterexample to show that the converse does not necessarily hold.
- (v) (Urysohn subsequence principle) Suppose that every subsequence $X_{n_j}$ of $X_n$ has a further subsequence $X_{n_{j_k}}$ that converges to $X$ in probability. Show that $X_n$ also converges to $X$ in probability.
- (vi) Does the Urysohn subsequence principle still hold if “in probability” is replaced with “almost surely” throughout?
- (vii) If $X_n$ converges in probability to $X$, and $F: {\bf R} \to {\bf R}$ or $F: {\bf C} \to {\bf C}$ is continuous, show that $F(X_n)$ converges in probability to $F(X)$. More generally, if for each $i = 1, \dots, k$, $X^{(i)}_n$ is a sequence of scalar random variables that converge in probability to $X^{(i)}$, and $F: {\bf R}^k \to {\bf R}$ or $F: {\bf C}^k \to {\bf C}$ is continuous, show that $F(X^{(1)}_n, \dots, X^{(k)}_n)$ converges in probability to $F(X^{(1)}, \dots, X^{(k)})$. (Thus, for instance, if $X_n$ and $Y_n$ converge in probability to $X$ and $Y$ respectively, then $X_n + Y_n$ and $X_n Y_n$ converge in probability to $X + Y$ and $X Y$ respectively.)
- (viii) (Fatou’s lemma for convergence in probability) If $X_n$ are non-negative and converge in probability to $X$, show that ${\bf E} X \leq \liminf_{n \to \infty} {\bf E} X_n$.
- (ix) (Dominated convergence in probability) If $X_n$ converge in probability to $X$, and one almost surely has $|X_n| \leq Y$ for all $n$ and some absolutely integrable $Y$, show that ${\bf E} X_n$ converges to ${\bf E} X$.

Exercise 3. Let $X_n$ be a sequence of scalar random variables converging in probability to another random variable $X$.

- (i) Suppose that there is a random variable $Y$ which is independent of $X_n$ for each individual $n$. Show that $Y$ is also independent of $X$.
- (ii) Suppose that the $X_n$ are jointly independent. Show that $X$ is almost surely constant (i.e. there is a deterministic scalar $c$ such that $X = c$ almost surely).

We can now state the weak and strong law of large numbers, in the model case of iid random variables.

Theorem 4 (Law of large numbers, model case). Let $X_1, X_2, \dots$ be an iid sequence of copies of an absolutely integrable random variable $X$ (thus the $X_i$ are independent and all have the same distribution as $X$). Write $\mu := {\bf E} X$, and for each natural number $n$, let $\overline{X}_n$ denote the random variable $\overline{X}_n := \frac{1}{n} (X_1 + \dots + X_n)$.

- (i) (Weak law of large numbers) The random variables $\overline{X}_n$ converge in probability to $\mu$.
- (ii) (Strong law of large numbers) The random variables $\overline{X}_n$ converge almost surely to $\mu$.

Informally: if $X_1, X_2, \dots$ are iid with mean $\mu$, then $X_1 + \dots + X_n \approx \mu n$ for $n$ large. Clearly the strong law of large numbers implies the weak law, but the weak law is easier to prove (and has somewhat better quantitative estimates). There are several variants of the law of large numbers, for instance when one drops the hypothesis of identical distribution, or when the random variable $X$ is not absolutely integrable, or if one seeks more quantitative bounds on the rate of convergence; we will discuss some of these variants below the fold.
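As a quick numerical sanity check (my own illustration, not part of the notes), one can watch the sample means of iid Exponential(1) variables, which have mean $\mu = 1$, tighten around $\mu$ as $n$ grows:

```python
import random
import statistics

# Sample means of iid Exponential(1) variables (mean mu = 1): by the law
# of large numbers, batches of sample means should cluster ever more
# tightly around 1 as n grows.
random.seed(1)

def sample_mean(n):
    """Empirical mean of n iid Exponential(1) samples."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

for n in (10, 1000, 100_000):
    means = [sample_mean(n) for _ in range(20)]
    print(f"n={n:6d}  min={min(means):.3f}  max={max(means):.3f}")
```

The printed range of the twenty sample means shrinks visibly as $n$ increases, in line with the theorem.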

It is instructive to compare the law of large numbers with what one can obtain from the Kolmogorov zero-one law, discussed in Notes 2. Observe that if the $X_n$ are real-valued, then the limit superior $\limsup_{n \to \infty} \frac{X_1 + \dots + X_n}{n}$ and limit inferior $\liminf_{n \to \infty} \frac{X_1 + \dots + X_n}{n}$ are tail random variables in the sense that they are not affected if one changes finitely many of the $X_n$; in particular, events such as $\limsup_{n \to \infty} \frac{X_1 + \dots + X_n}{n} > t$ are tail events for any $t \in {\bf R}$. From this and the zero-one law we see that there must exist deterministic quantities $-\infty \leq \mu_- \leq \mu_+ \leq +\infty$ such that $\limsup_{n \to \infty} \frac{X_1 + \dots + X_n}{n} = \mu_+$ and $\liminf_{n \to \infty} \frac{X_1 + \dots + X_n}{n} = \mu_-$ almost surely. The strong law of large numbers can then be viewed as the assertion that $\mu_- = \mu_+ = \mu$ when $X$ is absolutely integrable. On the other hand, the zero-one law argument does not require absolute integrability (and one can replace the denominator $n$ by other functions of $n$ that go to infinity as $n \to \infty$).

The law of large numbers asserts, roughly speaking, that the theoretical expectation ${\bf E} X$ of a random variable $X$ can be approximated by taking a large number $n$ of independent samples $X_1, \dots, X_n$ of $X$ and then forming the empirical mean $\overline{X}_n := \frac{1}{n} (X_1 + \dots + X_n)$. This ability to approximate the theoretical statistics of a probability distribution through empirical data is one of the basic starting points for mathematical statistics, though this is not the focus of the course here. The tendency of statistics such as $\overline{X}_n$ to cluster closely around their mean value $\mu$ is the simplest instance of the concentration of measure phenomenon, which is of tremendous significance not only within probability, but also in applications of probability to disciplines such as statistics, theoretical computer science, combinatorics, random matrix theory and high dimensional geometry. We will not discuss these topics much in this course, but see this previous blog post for some further discussion.

There are several ways to prove the law of large numbers (in both forms). One basic strategy is to use the *moment method* – controlling statistics such as $\overline{X}_n$ by computing moments such as the mean ${\bf E} \overline{X}_n$, variance ${\bf E} |\overline{X}_n - {\bf E} \overline{X}_n|^2$, or higher moments such as ${\bf E} |\overline{X}_n - {\bf E} \overline{X}_n|^k$ for $k = 4, 6, \dots$. The joint independence of the $X_i$ make such moments fairly easy to compute, requiring only some elementary combinatorics. A direct application of the moment method typically requires one to make a finite moment assumption such as ${\bf E} |X|^k < \infty$, but as we shall see, one can reduce fairly easily to this case by a truncation argument.
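For instance (a standard computation, sketched here under the stronger assumption of a finite second moment ${\bf E} |X|^2 < \infty$), the second moment calculation combined with Chebyshev's inequality already yields the weak law:

```latex
% Variance of the empirical mean, using joint independence of the X_i:
\mathop{\bf Var}(\overline{X}_n)
  = \frac{1}{n^2} \sum_{i=1}^n \mathop{\bf Var}(X_i)
  = \frac{\sigma^2}{n},
  \qquad \sigma^2 := \mathop{\bf Var}(X).
% Chebyshev's inequality then gives, for any fixed \varepsilon > 0,
{\bf P}( |\overline{X}_n - \mu| > \varepsilon )
  \leq \frac{\mathop{\bf Var}(\overline{X}_n)}{\varepsilon^2}
  = \frac{\sigma^2}{n \varepsilon^2} \to 0
  \quad \text{as } n \to \infty,
% which is exactly the statement that \overline{X}_n \to \mu in probability.
```

The truncation argument mentioned above is then what removes the second moment hypothesis, leaving only absolute integrability.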

For the strong law of large numbers, one can also use methods relating to the theory of martingales, such as stopping time arguments and maximal inequalities; we present some classical arguments of Kolmogorov in this regard.

Let $X$ be a real-valued random variable, and let $X_1, X_2, X_3, \dots$ be an infinite sequence of independent and identically distributed copies of $X$. Let $\overline{X}_n := \frac{1}{n}(X_1 + \dots + X_n)$ be the empirical averages of this sequence. A fundamental theorem in probability theory is the law of large numbers, which comes in both a weak and a strong form:

Weak law of large numbers. Suppose that the first moment ${\bf E} |X|$ of $X$ is finite. Then $\overline{X}_n$ converges in probability to ${\bf E} X$, thus $\lim_{n \to \infty} {\bf P}( |\overline{X}_n - {\bf E} X| \geq \varepsilon ) = 0$ for every $\varepsilon > 0$.

Strong law of large numbers. Suppose that the first moment ${\bf E} |X|$ of $X$ is finite. Then $\overline{X}_n$ converges almost surely to ${\bf E} X$, thus ${\bf P}( \lim_{n \to \infty} \overline{X}_n = {\bf E} X ) = 1$.

[The concepts of convergence in probability and almost sure convergence in probability theory are specialisations of the concepts of convergence in measure and pointwise convergence almost everywhere in measure theory.]

(If one strengthens the first moment assumption to that of finiteness of the second moment ${\bf E} |X|^2$, then we of course have a more precise statement than the (weak) law of large numbers, namely the central limit theorem, but I will not discuss that theorem here. With even more hypotheses on $X$, one similarly has more precise versions of the strong law of large numbers, such as the Chernoff inequality, which I will again not discuss here.)

The weak law is easy to prove, but the strong law (which of course implies the weak law, by Egoroff’s theorem) is more subtle, and in fact the proof of this law (assuming just finiteness of the first moment) usually only appears in advanced graduate texts. So I thought I would present a proof here of both laws, which proceeds by the standard techniques of the moment method and truncation. The emphasis in this exposition will be on motivation and methods rather than brevity and strength of results; there do exist proofs of the strong law in the literature that have been compressed down to the size of one page or less, but this is not my goal here.
