
We can now turn attention to one of the centerpiece universality results in random matrix theory, namely the Wigner semi-circle law for Wigner matrices. Recall from previous notes that a Wigner Hermitian matrix ensemble is a random matrix ensemble ${M_n = (\xi_{ij})_{1 \leq i,j \leq n}}$ of Hermitian matrices (thus ${\xi_{ij} = \overline{\xi_{ji}}}$; this includes real symmetric matrices as an important special case), in which the upper-triangular entries ${\xi_{ij}}$, ${i<j}$ are iid complex random variables with mean zero and unit variance, and the diagonal entries ${\xi_{ii}}$ are iid real variables, independent of the upper-triangular entries, with bounded mean and variance. Particular special cases of interest include the Gaussian Orthogonal Ensemble (GOE), the symmetric random sign matrices (aka symmetric Bernoulli ensemble), and the Gaussian Unitary Ensemble (GUE).

In previous notes we saw that the operator norm of ${M_n}$ was typically of size ${O(\sqrt{n})}$, so it is natural to work with the normalised matrix ${\frac{1}{\sqrt{n}} M_n}$. Accordingly, given any ${n \times n}$ Hermitian matrix ${M_n}$, we can form the (normalised) empirical spectral distribution (or ESD for short)

$\displaystyle \mu_{\frac{1}{\sqrt{n}} M_n} := \frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j(M_n) / \sqrt{n}},$

of ${M_n}$, where ${\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)}$ are the (necessarily real) eigenvalues of ${M_n}$, counting multiplicity. The ESD is a probability measure, which can be viewed as a distribution of the normalised eigenvalues of ${M_n}$.

When ${M_n}$ is a random matrix ensemble, then the ESD ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ is now a random measure – i.e. a random variable taking values in the space ${\hbox{Pr}({\mathbb R})}$ of probability measures on the real line. (Thus, the distribution of ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ is a probability measure on probability measures!)
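As a concrete illustration of these definitions, here is a short numerical sketch (all names and parameter choices are illustrative, not canonical): it draws one matrix from the symmetric Bernoulli ensemble and integrates a test function against the resulting ESD.

```python
import numpy as np

# A numerical sketch (parameter choices illustrative): draw one matrix from
# the symmetric Bernoulli ensemble and integrate a test function against
# the normalised ESD mu_{M/sqrt(n)} = (1/n) sum_j delta_{lambda_j/sqrt(n)}.
rng = np.random.default_rng(0)

def symmetric_bernoulli(n, rng):
    """One n x n draw: signed Bernoulli entries, symmetrised."""
    signs = rng.choice([-1.0, 1.0], size=(n, n))
    return np.triu(signs) + np.triu(signs, 1).T

def esd_integrate(M, phi):
    """Compute int phi d(mu_{M/sqrt(n)}) = (1/n) sum_j phi(lambda_j/sqrt(n))."""
    n = M.shape[0]
    lam = np.linalg.eigvalsh(M) / np.sqrt(n)
    return phi(lam).mean()

M = symmetric_bernoulli(500, rng)
# Second moment of the ESD; this equals tr(M^2)/n^2, which is exactly 1
# here since every entry is +-1.
m2 = esd_integrate(M, lambda x: x**2)
print(round(m2, 4))
```

Since every entry of this ensemble squares to one, the second moment computed here is exactly ${\hbox{tr}(M_n^2)/n^2 = 1}$, matching the unit variance of the limiting semi-circular distribution below.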

Now we consider the behaviour of the ESD of a sequence of Hermitian matrix ensembles ${M_n}$ as ${n \rightarrow \infty}$. Recall from Notes 0 that for any sequence of random variables in a ${\sigma}$-compact metrisable space, one can define notions of convergence in probability and convergence almost surely. Specialising these definitions to the case of random probability measures on ${{\mathbb R}}$, and to deterministic limits, we see that a sequence of random ESDs ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converges in probability (resp. converges almost surely) to a deterministic limit ${\mu \in \hbox{Pr}({\mathbb R})}$ (which, confusingly enough, is a deterministic probability measure!) if, for every test function ${\varphi \in C_c({\mathbb R})}$, the quantities ${\int_{\mathbb R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n}}$ converge in probability (resp. converge almost surely) to ${\int_{\mathbb R} \varphi\ d\mu}$.

Remark 1 As usual, convergence almost surely implies convergence in probability, but not vice versa. In the special case of random probability measures, there is an even weaker notion of convergence, namely convergence in expectation, defined as follows. Given a random ESD ${\mu_{\frac{1}{\sqrt{n}} M_n}}$, one can form its expectation ${{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n} \in \hbox{Pr}({\mathbb R})}$, defined via duality (the Riesz representation theorem) as

$\displaystyle \int_{\mathbb R} \varphi\ d{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n} := {\bf E} \int_{\mathbb R} \varphi\ d \mu_{\frac{1}{\sqrt{n}} M_n};$

this probability measure can be viewed as the law of a random eigenvalue ${\frac{1}{\sqrt{n}}\lambda_i(M_n)}$ drawn from a random matrix ${M_n}$ from the ensemble. We then say that the ESDs converge in expectation to a limit ${\mu \in \hbox{Pr}({\mathbb R})}$ if ${{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n}}$ converges in the vague topology to ${\mu}$, thus

$\displaystyle {\bf E} \int_{\mathbb R} \varphi\ d \mu_{\frac{1}{\sqrt{n}} M_n} \rightarrow \int_{\mathbb R} \varphi\ d\mu$

for all ${\varphi \in C_c({\mathbb R})}$.

In general, these notions of convergence are distinct from each other; but in practice, one often finds in random matrix theory that these notions are effectively equivalent to each other, thanks to the concentration of measure phenomenon.

Exercise 1 Let ${M_n}$ be a sequence of ${n \times n}$ Hermitian matrix ensembles, and let ${\mu}$ be a continuous probability measure on ${{\mathbb R}}$.

• Show that ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converges almost surely to ${\mu}$ if and only if ${\mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)}$ converges almost surely to ${\mu(-\infty,\lambda)}$ for all ${\lambda \in {\mathbb R}}$.
• Show that ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converges in probability to ${\mu}$ if and only if ${\mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)}$ converges in probability to ${\mu(-\infty,\lambda)}$ for all ${\lambda \in {\mathbb R}}$.
• Show that ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converges in expectation to ${\mu}$ if and only if ${\mathop{\mathbb E} \mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)}$ converges to ${\mu(-\infty,\lambda)}$ for all ${\lambda \in {\mathbb R}}$.

We can now state the Wigner semi-circular law.

Theorem 1 (Semicircular law) Let ${M_n}$ be the top left ${n \times n}$ minors of an infinite Wigner matrix ${(\xi_{ij})_{i,j \geq 1}}$. Then the ESDs ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converge almost surely (and hence also in probability and in expectation) to the Wigner semi-circular distribution

$\displaystyle \mu_{sc} := \frac{1}{2\pi} (4-|x|^2)^{1/2}_+\ dx. \ \ \ \ \ (1)$

A numerical example of this theorem in action can be seen at the MathWorld entry for this law.
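In the same spirit, one can simulate the theorem directly; the sketch below (with illustrative sizes and seed) draws one GOE-type matrix and compares the empirical CDF of its normalised eigenvalues with the CDF of the semi-circular distribution.

```python
import numpy as np

# Simulation sketch of the semicircular law (parameters illustrative):
# compare the empirical CDF of the normalised eigenvalues of a GOE-type
# matrix with the CDF of (1/2pi) sqrt(4 - x^2)_+ dx.
rng = np.random.default_rng(1)
n = 1000

# GOE-type draw: iid N(0,1) strictly above the diagonal (mirrored below),
# N(0,2) on the diagonal.
A = rng.standard_normal((n, n))
M = np.triu(A, 1) + np.triu(A, 1).T + np.sqrt(2.0) * np.diag(rng.standard_normal(n))

lam = np.linalg.eigvalsh(M) / np.sqrt(n)

def semicircle_cdf(x):
    """CDF of the semicircular distribution on [-2, 2]."""
    x = np.clip(x, -2.0, 2.0)
    return 0.5 + x * np.sqrt(4.0 - x**2) / (4.0 * np.pi) + np.arcsin(x / 2.0) / np.pi

grid = np.linspace(-2.5, 2.5, 11)
emp = np.array([(lam < t).mean() for t in grid])
err = np.max(np.abs(emp - semicircle_cdf(grid)))
print(err < 0.05)
```

Already at ${n = 1000}$, the maximal discrepancy on this grid is well below ${0.05}$.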

The semi-circular law nicely complements the upper Bai-Yin theorem from Notes 3, which asserts that (in the case when the entries have finite fourth moment, at least), the matrices ${\frac{1}{\sqrt{n}} M_n}$ almost surely have operator norm at most ${2+o(1)}$. Note that the operator norm is the same thing as the largest magnitude of the eigenvalues. Because the semi-circular distribution (1) is supported on the interval ${[-2,2]}$ with positive density on the interior of this interval, Theorem 1 easily supplies the lower Bai-Yin theorem, that the operator norm of ${\frac{1}{\sqrt{n}} M_n}$ is almost surely at least ${2-o(1)}$, and thus (in the finite fourth moment case) the norm is in fact equal to ${2+o(1)}$. Indeed, we have just shown that the semi-circular law provides an alternate proof of the lower Bai-Yin bound (Proposition 11 of Notes 3).

As will hopefully become clearer in the next set of notes, the semi-circular law is the noncommutative (or free probability) analogue of the central limit theorem, with the semi-circular distribution (1) taking on the role of the normal distribution. Of course, there is a striking difference between the two distributions, in that the former is compactly supported while the latter is merely subgaussian. One reason for this is that the concentration of measure phenomenon is more powerful in the case of ESDs of Wigner matrices than it is for averages of iid variables; compare the concentration of measure results in Notes 3 with those in Notes 1.
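One can make this analogy concrete at the level of moments (a standard computation, not carried out in this excerpt): the even moments of the semi-circular distribution (1) are the Catalan numbers, while those of the standard Gaussian are the double factorials,

$\displaystyle \int_{\mathbb R} x^{2k}\ d\mu_{sc}(x) = \frac{1}{k+1} \binom{2k}{k}, \qquad \frac{1}{\sqrt{2\pi}} \int_{\mathbb R} x^{2k} e^{-x^2/2}\ dx = (2k-1)!!,$

and the moment method proof of Theorem 1 proceeds by matching the former quantities against the normalised trace moments ${\frac{1}{n} {\bf E} \hbox{tr} (\frac{1}{\sqrt{n}} M_n)^{2k}}$.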

There are several ways to prove (or at least to heuristically justify) the semi-circular law. In this set of notes we shall focus on the two most popular methods, the moment method and the Stieltjes transform method, together with a third (heuristic) method based on Dyson Brownian motion (Notes 3b). In the next set of notes we shall also study the free probability method, and in the set of notes after that we use the determinantal processes method (although this method is initially only restricted to highly symmetric ensembles, such as GUE).

Now that we have developed the basic probabilistic tools that we will need, we now turn to the main subject of this course, namely the study of random matrices. There are many random matrix models (aka matrix ensembles) of interest – far too many to all be discussed in a single course. We will thus focus on just a few simple models. First of all, we shall restrict attention to square matrices ${M = (\xi_{ij})_{1 \leq i,j \leq n}}$, where ${n}$ is a (large) integer and the ${\xi_{ij}}$ are real or complex random variables. (One can certainly study rectangular matrices as well, but for simplicity we will only look at the square case.) Then, we shall restrict to three main models:

• Iid matrix ensembles, in which the coefficients ${\xi_{ij}}$ are iid random variables with a single distribution ${\xi_{ij} \equiv \xi}$. We will often normalise ${\xi}$ to have mean zero and unit variance. Examples of iid models include the Bernoulli ensemble (aka random sign matrices) in which the ${\xi_{ij}}$ are signed Bernoulli variables, the real gaussian matrix ensemble in which ${\xi_{ij} \equiv N(0,1)_{\bf R}}$, and the complex gaussian matrix ensemble in which ${\xi_{ij} \equiv N(0,1)_{\bf C}}$.
• Symmetric Wigner matrix ensembles, in which the upper triangular coefficients ${\xi_{ij}}$, ${j \geq i}$ are jointly independent and real, but the lower triangular coefficients ${\xi_{ij}}$, ${j < i}$ are constrained to equal their transposes: ${\xi_{ij}=\xi_{ji}}$. Thus ${M}$ by construction is always a real symmetric matrix. Typically, the strictly upper triangular coefficients will be iid, as will the diagonal coefficients, but the two classes of coefficients may have a different distribution. One example here is the symmetric Bernoulli ensemble, in which both the strictly upper triangular and the diagonal entries are signed Bernoulli variables; another important example is the Gaussian Orthogonal Ensemble (GOE), in which the upper triangular entries have distribution ${N(0,1)_{\bf R}}$ and the diagonal entries have distribution ${N(0,2)_{\bf R}}$. (We will explain the reason for this discrepancy later.)
• Hermitian Wigner matrix ensembles, in which the upper triangular coefficients are jointly independent, with the diagonal entries being real and the strictly upper triangular entries complex, and the lower triangular coefficients ${\xi_{ij}}$, ${j < i}$ are constrained to equal their adjoints: ${\xi_{ij} = \overline{\xi_{ji}}}$. Thus ${M}$ by construction is always a Hermitian matrix. This class of ensembles contains the symmetric Wigner ensembles as a subclass. Another very important example is the Gaussian Unitary Ensemble (GUE), in which all off-diagonal entries have distribution ${N(0,1)_{\bf C}}$, but the diagonal entries have distribution ${N(0,1)_{\bf R}}$.

Given a matrix ensemble ${M}$, there are many statistics of ${M}$ that one may wish to consider, e.g. the eigenvalues or singular values of ${M}$, the trace and determinant, etc. In these notes we will focus on a basic statistic, namely the operator norm

$\displaystyle \| M \|_{op} := \sup_{x \in {\bf C}^n: |x|=1} |Mx| \ \ \ \ \ (1)$

of the matrix ${M}$. This is an interesting quantity in its own right, but also serves as a basic upper bound on many other quantities. (For instance, ${\|M\|_{op}}$ is also the largest singular value ${\sigma_1(M)}$ of ${M}$ and thus dominates the other singular values; similarly, all eigenvalues ${\lambda_i(M)}$ of ${M}$ clearly have magnitude at most ${\|M\|_{op}}$.) Because of this, it is particularly important to get good upper tail bounds

$\displaystyle {\bf P}( \|M\|_{op} \geq \lambda ) \leq \ldots$

on this quantity, for various thresholds ${\lambda}$. (Lower tail bounds are also of interest, of course; for instance, they give us confidence that the upper tail bounds are sharp.) Also, as we shall see, the problem of upper bounding ${\|M\|_{op}}$ can be viewed as a non-commutative analogue of upper bounding the quantity ${|S_n|}$ studied in Notes 1. (The analogue of the central limit theorem in Notes 2 is the Wigner semi-circular law, which will be studied in the next set of notes.)

An ${n \times n}$ matrix consisting entirely of ${1}$s has an operator norm of exactly ${n}$, as can for instance be seen from the Cauchy-Schwarz inequality. More generally, any matrix whose entries are all uniformly ${O(1)}$ will have an operator norm of ${O(n)}$ (which can again be seen from Cauchy-Schwarz, or alternatively from Schur’s test, or from a computation of the Frobenius norm). However, this argument does not take advantage of possible cancellations in ${M}$. Indeed, from analogy with concentration of measure, when the entries of the matrix ${M}$ are independent, bounded and have mean zero, we expect the operator norm to be of size ${O(\sqrt{n})}$ rather than ${O(n)}$. We shall see shortly that this intuition is indeed correct. (One can see, though, that the mean zero hypothesis is important; from the triangle inequality we see that if we add the all-ones matrix (for instance) to a random matrix with mean zero, to obtain a random matrix whose coefficients all have mean ${1}$, then at least one of the two random matrices necessarily has operator norm at least ${n/2}$.)
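The dichotomy in this paragraph is easy to observe numerically; the following sketch (sizes and seed illustrative) also double-checks the identification of the operator norm with the largest singular value mentioned earlier.

```python
import numpy as np

# Sketch (parameters illustrative): the all-ones matrix has operator norm
# exactly n, while a mean-zero random sign matrix of the same size has
# operator norm of order sqrt(n) thanks to cancellation.
rng = np.random.default_rng(3)
n = 400

ones_norm = np.linalg.norm(np.ones((n, n)), 2)   # spectral norm; equals n

signs = rng.choice([-1.0, 1.0], size=(n, n))
sign_norm = np.linalg.norm(signs, 2)             # of order sqrt(n)

# Sanity check echoing the text: the operator norm is the largest
# singular value sigma_1(M).
sigma_1 = np.linalg.svd(signs, compute_uv=False)[0]
assert abs(sign_norm - sigma_1) < 1e-8

print(abs(ones_norm - n) < 1e-6, sign_norm < 4 * np.sqrt(n))
```

Running this with larger ${n}$ shows the ratio ${\|M\|_{op}/\sqrt{n}}$ for the sign matrix stabilising near a constant, in contrast with the linear growth of the all-ones matrix.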

As mentioned before, there is an analogy here with the concentration of measure phenomenon, and many of the tools used in the latter (e.g. the moment method) will also appear here. (Indeed, we will be able to use some of the concentration inequalities from Notes 1 directly to help control ${\|M\|_{op}}$ and related quantities.) Similarly, just as many of the tools from concentration of measure could be adapted to help prove the central limit theorem, several of the tools seen here will be of use in deriving the semi-circular law.

The most advanced knowledge we have on the operator norm is given by the Tracy-Widom law, which not only tells us where the operator norm is concentrated (it turns out, for instance, that for a Wigner matrix (with some additional technical assumptions), it is concentrated in the range ${[2\sqrt{n} - O(n^{-1/6}), 2\sqrt{n} + O(n^{-1/6})]}$), but what its distribution in that range is. While the methods in this set of notes can eventually be pushed to establish this result, this is far from trivial, and will only be briefly discussed here. (We may return to the Tracy-Widom law later in this course, though.)

Suppose we have a large number of scalar random variables ${X_1,\ldots,X_n}$, which each have bounded size on average (e.g. their mean and variance could be ${O(1)}$). What can one then say about their sum ${S_n := X_1+\ldots+X_n}$? If each individual summand ${X_i}$ varies in an interval of size ${O(1)}$, then their sum of course varies in an interval of size ${O(n)}$. However, a remarkable phenomenon, known as concentration of measure, asserts that assuming a sufficient amount of independence between the component variables ${X_1,\ldots,X_n}$, this sum sharply concentrates in a much narrower range, typically in an interval of size ${O(\sqrt{n})}$. This phenomenon is quantified by a variety of large deviation inequalities that give upper bounds (often exponential in nature) on the probability that such a combined random variable deviates significantly from its mean. The same phenomenon applies not only to linear expressions such as ${S_n = X_1+\ldots+X_n}$, but more generally to nonlinear combinations ${F(X_1,\ldots,X_n)}$ of such variables, provided that the nonlinear function ${F}$ is sufficiently regular (in particular, if it is Lipschitz, either separately in each variable, or jointly in all variables).
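As a quick sanity check of this ${O(\sqrt{n})}$ heuristic, here is a simulation sketch (the choice of signed Bernoulli summands, sample sizes, and thresholds is illustrative):

```python
import numpy as np

# Sketch (parameters illustrative): each X_i is a random sign, so S_n could
# a priori lie anywhere in [-n, n], but concentration of measure confines it
# to a window of size O(sqrt(n)) with overwhelming probability.
rng = np.random.default_rng(4)
n, trials = 10_000, 500
S = rng.choice([-1.0, 1.0], size=(trials, n)).sum(axis=1)

# Fraction of trials with |S_n| > 3*sqrt(n); a sub-gaussian tail bound
# predicts this is at most roughly exp(-9/2), i.e. about one percent.
frac = (np.abs(S) > 3 * np.sqrt(n)).mean()
print(frac < 0.02)
```

The empirical standard deviation of ${S_n}$ here is close to ${\sqrt{n} = 100}$, while the a priori range ${[-n,n]}$ has size ${20{,}000}$.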

The basic intuition here is that it is difficult for a large number of independent variables ${X_1,\ldots,X_n}$ to “work together” to simultaneously pull a sum ${X_1+\ldots+X_n}$ or a more general combination ${F(X_1,\ldots,X_n)}$ too far away from its mean. Independence here is the key; concentration of measure results typically fail if the ${X_i}$ are too highly correlated with each other.

There are many applications of the concentration of measure phenomenon, but we will focus on a specific application which is useful in the random matrix theory topics we will be studying, namely on controlling the behaviour of random ${n}$-dimensional vectors with independent components, and in particular on the distance between such random vectors and a given subspace.

Once one has a sufficient amount of independence, the concentration of measure tends to be sub-gaussian in nature; thus the probability that one is at least ${\lambda}$ standard deviations from the mean tends to drop off like ${C \exp(-c\lambda^2)}$ for some ${C,c > 0}$. In particular, one is ${O( \log^{1/2} n )}$ standard deviations from the mean with high probability, and ${O( \log^{1/2+\epsilon} n)}$ standard deviations from the mean with overwhelming probability. Indeed, concentration of measure is our primary tool for ensuring that various events hold with overwhelming probability (other moment methods can give high probability, but have difficulty ensuring overwhelming probability).

This is only a brief introduction to the concentration of measure phenomenon. A systematic study of this topic can be found in this book by Ledoux.

Let X be a real-valued random variable, and let $X_1, X_2, X_3, ...$ be an infinite sequence of independent and identically distributed copies of X. Let $\overline{X}_n := \frac{1}{n}(X_1 + \ldots + X_n)$ be the empirical averages of this sequence. A fundamental theorem in probability theory is the law of large numbers, which comes in both a weak and a strong form:

Weak law of large numbers. Suppose that the first moment ${\Bbb E} |X|$ of X is finite. Then $\overline{X}_n$ converges in probability to ${\Bbb E} X$, thus $\lim_{n \to \infty} {\Bbb P}( |\overline{X}_n - {\Bbb E} X| \geq \varepsilon ) = 0$ for every $\varepsilon > 0$.

Strong law of large numbers. Suppose that the first moment ${\Bbb E} |X|$ of X is finite. Then $\overline{X}_n$ converges almost surely to ${\Bbb E} X$, thus ${\Bbb P}( \lim_{n \to \infty} \overline{X}_n = {\Bbb E} X ) = 1$.

[The concepts of convergence in probability and almost sure convergence in probability theory are specialisations of the concepts of convergence in measure and pointwise convergence almost everywhere in measure theory.]
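Both laws are easy to observe numerically; the following sketch (the exponential distribution and the sample size are illustrative choices) tracks the running averages of iid copies of a variable with ${{\Bbb E} X = 1}$.

```python
import numpy as np

# Sketch of the law of large numbers (parameters illustrative): running
# averages (X_1 + ... + X_n)/n of iid Exp(1) variables, whose mean is 1.
rng = np.random.default_rng(5)
X = rng.exponential(scale=1.0, size=100_000)
running = np.cumsum(X) / np.arange(1, X.size + 1)

# The deviation |bar{X}_n - 1| at n = 100,000 is of rough size 1/sqrt(n).
print(abs(running[-1] - 1.0))
```

Plotting `running` against ${n}$ shows the early averages fluctuating wildly before settling towards the mean, in line with the ${O(1/\sqrt{n})}$ fluctuations predicted by the central limit theorem mentioned above.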

(If one strengthens the first moment assumption to that of finiteness of the second moment ${\Bbb E}|X|^2$, then we of course have a more precise statement than the (weak) law of large numbers, namely the central limit theorem, but I will not discuss that theorem here.  With even more hypotheses on X, one similarly has more precise versions of the strong law of large numbers, such as the Chernoff inequality, which I will again not discuss here.)

The weak law is easy to prove, but the strong law (which of course implies the weak law, by Egoroff’s theorem) is more subtle, and in fact the proof of this law (assuming just finiteness of the first moment) usually only appears in advanced graduate texts. So I thought I would present a proof here of both laws, which proceeds by the standard techniques of the moment method and truncation. The emphasis in this exposition will be on motivation and methods rather than brevity and strength of results; there do exist proofs of the strong law in the literature that have been compressed down to the size of one page or less, but this is not my goal here.