9 January, 2010 in 254A - random matrices, math.PR, math.SP | Tags: Bai-Yin theorem, concentration of measure, epsilon net argument, iid matrices, moment method, operator norm, Wigner matrices | by Terence Tao | 37 comments
Now that we have developed the basic probabilistic tools that we will need, we now turn to the main subject of this course, namely the study of random matrices. There are many random matrix models (aka matrix ensembles) of interest – far too many to all be discussed in a single course. We will thus focus on just a few simple models. First of all, we shall restrict attention to square matrices $M = (\xi_{ij})_{1 \le i,j \le n}$, where $n$ is a (large) integer and the $\xi_{ij}$ are real or complex random variables. (One can certainly study rectangular matrices as well, but for simplicity we will only look at the square case.) Then, we shall restrict to three main models:
- Iid matrix ensembles, in which the coefficients $\xi_{ij}$ are iid random variables with a single distribution $\xi_{ij} \equiv \xi$. We will often normalise $\xi$ to have mean zero and unit variance. Examples of iid models include the Bernoulli ensemble (aka random sign matrices) in which the $\xi_{ij}$ are signed Bernoulli variables, the real gaussian matrix ensemble in which $\xi_{ij} \equiv N(0,1)_{\bf R}$, and the complex gaussian matrix ensemble in which $\xi_{ij} \equiv N(0,1)_{\bf C}$.
- Symmetric Wigner matrix ensembles, in which the upper triangular coefficients $\xi_{ij}$, $i \le j$, are jointly independent and real, but the lower triangular coefficients $\xi_{ij}$, $i > j$, are constrained to equal their transposes: $\xi_{ij} = \xi_{ji}$. Thus by construction $M$ is always a real symmetric matrix. Typically, the strictly upper triangular coefficients will be iid, as will the diagonal coefficients, but the two classes of coefficients may have a different distribution. One example here is the symmetric Bernoulli ensemble, in which both the strictly upper triangular and the diagonal entries are signed Bernoulli variables; another important example is the Gaussian Orthogonal Ensemble (GOE), in which the upper triangular entries have distribution $N(0,1)_{\bf R}$ and the diagonal entries have distribution $N(0,2)_{\bf R}$. (We will explain the reason for this discrepancy later.)
- Hermitian Wigner matrix ensembles, in which the upper triangular coefficients $\xi_{ij}$, $i \le j$, are jointly independent, with the diagonal entries being real and the strictly upper triangular entries complex, and the lower triangular coefficients $\xi_{ij}$, $i > j$, are constrained to equal their adjoints: $\xi_{ij} = \overline{\xi_{ji}}$. Thus by construction $M$ is always a Hermitian matrix. This class of ensembles contains the symmetric Wigner ensembles as a subclass. Another very important example is the Gaussian Unitary Ensemble (GUE), in which all off-diagonal entries have distribution $N(0,1)_{\bf C}$, but the diagonal entries have distribution $N(0,1)_{\bf R}$.
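As a concrete (and entirely optional) illustration, the three ensembles above can be sampled in a few lines of NumPy. The sketch below is not from these notes; the Gaussian constructions use the standard symmetrization trick, normalized so the entry variances match the GOE/GUE conventions just described:

```python
import numpy as np

rng = np.random.default_rng(0)

def iid_sign_matrix(n, rng):
    """Iid Bernoulli ensemble: independent entries equal to +1 or -1."""
    return rng.choice([-1.0, 1.0], size=(n, n))

def goe_matrix(n, rng):
    """GOE: real symmetric; off-diagonal entries N(0,1), diagonal N(0,2)."""
    A = rng.standard_normal((n, n))
    # (A + A^T)/sqrt(2): off-diagonal variance (1+1)/2 = 1, diagonal variance 2.
    return (A + A.T) / np.sqrt(2)

def gue_matrix(n, rng):
    """GUE: Hermitian; off-diagonal entries complex N(0,1)_C, diagonal real N(0,1)."""
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    # (A + A^*)/sqrt(2): off-diagonal E|entry|^2 = 1, diagonal real with variance 1.
    return (A + A.conj().T) / np.sqrt(2)

M = goe_matrix(500, rng)
H = gue_matrix(500, rng)
S = iid_sign_matrix(10, rng)
```

One can check directly that `M` is symmetric with diagonal variance close to 2, and that `H` is Hermitian with an exactly real diagonal.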
Given a matrix ensemble $M$, there are many statistics of $M$ that one may wish to consider, e.g. the eigenvalues or singular values of $M$, the trace and determinant, etc. In these notes we will focus on a basic statistic, namely the operator norm

$\displaystyle \|M\|_{op} := \sup_{x \in {\bf C}^n: |x| = 1} |Mx|$

of the matrix $M$. This is an interesting quantity in its own right, but also serves as a basic upper bound on many other quantities. (For instance, $\|M\|_{op}$ is also the largest singular value of $M$ and thus dominates the other singular values; similarly, all eigenvalues of $M$ clearly have magnitude at most $\|M\|_{op}$.) Because of this, it is particularly important to get good upper tail bounds

$\displaystyle {\bf P}( \|M\|_{op} \ge \lambda )$

on this quantity, for various thresholds $\lambda$. (Lower tail bounds are also of interest, of course; for instance, they give us confidence that the upper tail bounds are sharp.) Also, as we shall see, the problem of upper bounding $\|M\|_{op}$ can be viewed as a non-commutative analogue of upper bounding the quantity $|S_n|$ studied in Notes 1. (The analogue of the central limit theorem in Notes 2 is the Wigner semi-circular law, which will be studied in the next set of notes.)
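Numerically, the two facts in parentheses above are easy to see: the operator norm of a matrix equals its largest singular value, and every eigenvalue is dominated in magnitude by it. A quick sanity check (illustrative only, using a generic Gaussian matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
M = rng.standard_normal((n, n))

# The operator (spectral) norm is the largest singular value sigma_1(M).
op_norm = np.linalg.norm(M, 2)
singular_values = np.linalg.svd(M, compute_uv=False)  # sorted descending

# Every eigenvalue has magnitude at most the operator norm.
eigenvalues = np.linalg.eigvals(M)
max_eig_magnitude = np.abs(eigenvalues).max()
```

Here `op_norm` agrees with `singular_values[0]`, and `max_eig_magnitude` never exceeds it.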
An $n \times n$ matrix consisting entirely of $1$s has an operator norm of exactly $n$, as can for instance be seen from the Cauchy-Schwarz inequality. More generally, any matrix whose entries are all uniformly $O(1)$ will have an operator norm of $O(n)$ (which can again be seen from Cauchy-Schwarz, or alternatively from Schur’s test, or from a computation of the Frobenius norm). However, this argument does not take advantage of possible cancellations in $Mx$. Indeed, from analogy with concentration of measure, when the entries of the matrix are independent, bounded and have mean zero, we expect the operator norm to be of size $O(\sqrt{n})$ rather than $O(n)$. We shall see shortly that this intuition is indeed correct. (One can see, though, that the mean zero hypothesis is important; from the triangle inequality we see that if we add the all-ones matrix (for instance) to a random matrix with mean zero, to obtain a random matrix whose coefficients all have mean $1$, then at least one of the two random matrices necessarily has operator norm at least $n/2$.)
As mentioned before, there is an analogy here with the concentration of measure phenomenon, and many of the tools used in the latter (e.g. the moment method) will also appear here. (Indeed, we will be able to use some of the concentration inequalities from Notes 1 directly to help control $\|M\|_{op}$ and related quantities.) Similarly, just as many of the tools from concentration of measure could be adapted to help prove the central limit theorem, several of the tools seen here will be of use in deriving the semi-circular law.
The most advanced knowledge we have on the operator norm is given by the Tracy-Widom law, which not only tells us where the operator norm is concentrated (it turns out, for instance, that for a Wigner matrix (with some additional technical assumptions), it is concentrated in the range $[2\sqrt{n} - O(n^{-1/6}), 2\sqrt{n} + O(n^{-1/6})]$), but also what its distribution in that range is. While the methods in this set of notes can eventually be pushed to establish this result, this is far from trivial, and will only be briefly discussed here. (We may return to the Tracy-Widom law later in this course, though.)
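This concentration at $2\sqrt{n}$ is already visible at modest matrix sizes. An informal numerical check (a sketch, not a substitute for the Tracy-Widom analysis), sampling a few GOE matrices and comparing the top eigenvalue to $2\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
trials = 20

tops = []
for _ in range(trials):
    A = rng.standard_normal((n, n))
    M = (A + A.T) / np.sqrt(2)   # GOE: off-diagonal N(0,1), diagonal N(0,2)
    tops.append(np.linalg.eigvalsh(M)[-1])  # largest eigenvalue

tops = np.array(tops)
# Mean top eigenvalue should be close to 2*sqrt(n); the trial-to-trial
# fluctuations are small (of order n^(-1/6), the Tracy-Widom scale).
print(tops.mean() / (2 * np.sqrt(n)), tops.std())
```

For $n = 500$ the ratio is very close to $1$, and the standard deviation across trials is well below $1$, consistent with the stated range.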