Let ${X}$ and ${Y}$ be two random variables taking values in the same (discrete) range ${R}$, and let ${E}$ be some subset of ${R}$, which we think of as the set of "bad" outcomes for either ${X}$ or ${Y}$. If ${X}$ and ${Y}$ have the same probability distribution, then clearly

$\displaystyle {\bf P}( X \in E ) = {\bf P}( Y \in E ).$

In particular, if it is rare for ${X}$ to lie in ${E}$, then it is also rare for ${Y}$ to lie in ${E}$.
If ${X}$ and ${Y}$ do not have exactly the same probability distribution, but their probability distributions are close to each other in some sense, then we can expect to have an approximate version of the above statement. For instance, from the definition of the total variation distance ${\delta(X,Y)}$ between two random variables (or more precisely, the total variation distance between the probability distributions of two random variables), we see that

$\displaystyle {\bf P}(Y \in E) - \delta(X,Y) \leq {\bf P}(X \in E) \leq {\bf P}(Y \in E) + \delta(X,Y) \ \ \ \ \ (1)$

for any ${E \subset R}$. In particular, if it is rare for ${Y}$ to lie in ${E}$, and ${X}$ and ${Y}$ are close in total variation, then it is also rare for ${X}$ to lie in ${E}$.
A basic inequality in information theory is Pinsker's inequality

$\displaystyle \delta(X,Y) \leq \sqrt{\frac{1}{2} D_{KL}(X || Y)},$

where the Kullback-Leibler divergence ${D_{KL}(X||Y)}$ is defined by the formula

$\displaystyle D_{KL}(X || Y) := \sum_{x \in R} {\bf P}(X = x) \log \frac{{\bf P}(X = x)}{{\bf P}(Y = x)}.$

(See this previous blog post for a proof of this inequality.) A standard application of Jensen's inequality reveals that ${D_{KL}(X||Y)}$ is non-negative (Gibbs' inequality), and vanishes if and only if ${X}$, ${Y}$ have the same distribution; thus one can think of ${D_{KL}(X||Y)}$ as a measure of how close the distributions of ${X}$ and ${Y}$ are to each other, although one should caution that this is not a symmetric notion of distance, as ${D_{KL}(X||Y) \neq D_{KL}(Y||X)}$ in general. Inserting Pinsker's inequality into (1), we see for instance that

$\displaystyle {\bf P}(X \in E) \leq {\bf P}(Y \in E) + \sqrt{\frac{1}{2} D_{KL}(X || Y)}.$

Thus, if ${X}$ is close to ${Y}$ in the Kullback-Leibler sense, and it is rare for ${Y}$ to lie in ${E}$, then it is rare for ${X}$ to lie in ${E}$ as well.
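
(As a quick numerical sanity check, not part of the original discussion, the short NumPy sketch below samples random pairs of distributions on a small range and verifies Pinsker's inequality as stated above, with natural logarithms throughout.)

```python
import numpy as np

def tv_distance(p, q):
    # Total variation distance: delta(X,Y) = (1/2) * sum_x |P(X=x) - P(Y=x)|.
    return 0.5 * np.abs(p - q).sum()

def kl_divergence(p, q):
    # Kullback-Leibler divergence D_KL(X||Y) in nats; terms with p[x] = 0 contribute 0.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

rng = np.random.default_rng(0)
for _ in range(1000):
    # A random pair of distributions on a range R of cardinality 10.
    p = rng.random(10); p /= p.sum()
    q = rng.random(10); q /= q.sum()
    # Pinsker's inequality: delta(X,Y) <= sqrt(D_KL(X||Y)/2).
    assert tv_distance(p, q) <= np.sqrt(0.5 * kl_divergence(p, q)) + 1e-12
```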
We can specialise this inequality to the case when ${Y}$ is a uniform random variable ${U}$ on a finite range ${R}$ of some cardinality ${N}$, in which case the Kullback-Leibler divergence ${D_{KL}(X||U)}$ simplifies to

$\displaystyle D_{KL}(X || U) = \log N - {\bf H}(X),$

where

$\displaystyle {\bf H}(X) := \sum_{x \in R} {\bf P}(X = x) \log \frac{1}{{\bf P}(X = x)}$

is the Shannon entropy of ${X}$. Again, a routine application of Jensen's inequality shows that ${{\bf H}(X) \leq \log N}$, with equality if and only if ${X}$ is uniformly distributed on ${R}$. The above inequality then becomes

$\displaystyle {\bf P}(X \in E) \leq {\bf P}(U \in E) + \sqrt{\frac{1}{2} ( \log N - {\bf H}(X) )}. \ \ \ \ \ (2)$

Thus, if ${E}$ is a small fraction of ${R}$ (so that it is rare for ${U}$ to lie in ${E}$), and the entropy of ${X}$ is very close to the maximum possible value of ${\log N}$, then it is rare for ${X}$ to lie in ${E}$ also.
The inequality (2) is only useful when the entropy ${{\bf H}(X)}$ is close to ${\log N}$ in the sense that ${{\bf H}(X) = \log N - O(1)}$, otherwise the bound is worse than the trivial bound of ${{\bf P}(X \in E) \leq 1}$. In my recent paper on the Chowla and Elliott conjectures, I ended up using a variant of (2) which was still non-trivial when the entropy ${{\bf H}(X)}$ was allowed to be smaller than ${\log N - O(1)}$. More precisely, I used the following simple inequality, which is implicit in the arguments of that paper but which I would like to make more explicit in this post:
Lemma 1 (Pinsker-type inequality) Let ${X}$ be a random variable taking values in a finite range ${R}$ of cardinality ${N}$, let ${U}$ be a uniformly distributed random variable in ${R}$, and let ${E}$ be a subset of ${R}$. Then

$\displaystyle {\bf P}(X \in E) \leq \frac{(\log N - {\bf H}(X)) + \log 2}{\log 1/{\bf P}(U \in E)}.$
Proof: Consider the conditional entropy ${{\bf H}(X | 1_{X \in E})}$. On the one hand, we have

$\displaystyle {\bf H}(X | 1_{X \in E}) = {\bf H}(X) - {\bf H}(1_{X \in E}) \geq {\bf H}(X) - \log 2$

by Jensen's inequality. On the other hand, one has

$\displaystyle \begin{aligned} {\bf H}(X | 1_{X \in E}) &= {\bf P}(X \in E) {\bf H}(X | X \in E) + {\bf P}(X \not \in E) {\bf H}(X | X \not \in E) \\ &\leq {\bf P}(X \in E) \log |E| + {\bf P}(X \not \in E) \log N \\ &= \log N - {\bf P}(X \in E) \log \frac{N}{|E|} \\ &= \log N - {\bf P}(X \in E) \log \frac{1}{{\bf P}(U \in E)}, \end{aligned}$

where we have again used Jensen's inequality. Putting the two inequalities together, we obtain the claim. $\Box$
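
(For readers who like to experiment, here is a small empirical check of the lemma as stated above; the range size, bad set and sampling scheme are arbitrary illustrative choices, not taken from the paper.)

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                      # cardinality of the range R
E = np.arange(5)            # a "bad" set E of size 5, so P(U in E) = 5/50

for _ in range(1000):
    # A random (generally non-uniform) distribution of X on R = {0, ..., N-1}.
    p = rng.random(N) ** 4  # the exponent skews the distribution away from uniform
    p /= p.sum()
    H_X = -np.sum(p * np.log(p))                 # Shannon entropy of X (nats)
    P_X_in_E = p[E].sum()
    P_U_in_E = len(E) / N
    # Lemma 1: P(X in E) <= (log N - H(X) + log 2) / log(1 / P(U in E)).
    bound = (np.log(N) - H_X + np.log(2)) / np.log(1.0 / P_U_in_E)
    assert P_X_in_E <= bound + 1e-12
```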
Remark 2 As noted in comments, this inequality can be viewed as a special case of the more general inequality

$\displaystyle {\bf P}(X \in E) \leq \frac{D_{KL}(X || Y) + \log 2}{\log 1/{\bf P}(Y \in E)}$

for arbitrary random variables ${X}$, ${Y}$ taking values in the same discrete range ${R}$, which follows from the data processing inequality

$\displaystyle D_{KL}( f(X) || f(Y) ) \leq D_{KL}( X || Y )$

for arbitrary functions ${f}$, applied to the indicator function ${f = 1_E}$. Indeed one has

$\displaystyle \begin{aligned} D_{KL}( 1_E(X) || 1_E(Y) ) &= {\bf P}(X \in E) \log \frac{{\bf P}(X \in E)}{{\bf P}(Y \in E)} + {\bf P}(X \not \in E) \log \frac{{\bf P}(X \not \in E)}{{\bf P}(Y \not \in E)} \\ &\geq {\bf P}(X \in E) \log \frac{1}{{\bf P}(Y \in E)} - h( {\bf P}(X \in E) ) \\ &\geq {\bf P}(X \in E) \log \frac{1}{{\bf P}(Y \in E)} - \log 2, \end{aligned}$

where

$\displaystyle h(u) := u \log \frac{1}{u} + (1-u) \log \frac{1}{1-u}$

is the entropy function.
Thus, for instance, if one has

$\displaystyle {\bf H}(X) \geq \log N - o(K)$

and

$\displaystyle {\bf P}(U \in E) \leq \exp(-K)$

for some ${K}$ much larger than ${1}$ (so that ${\log 2 = o(K)}$), then

$\displaystyle {\bf P}(X \in E) = o(1).$

More informally: if the entropy of ${X}$ is somewhat close to the maximum possible value of ${\log N}$, and it is exponentially rare for a uniform variable to lie in ${E}$, then it is still somewhat rare for ${X}$ to lie in ${E}$. The estimate given is close to sharp in this regime, as can be seen by calculating the entropy of a random variable ${X}$ which is uniformly distributed inside a small set ${E}$ with some probability ${p}$ and uniformly distributed outside of ${E}$ with probability ${1-p}$, for some parameter ${p}$.
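
(To see the near-sharpness concretely, here is a short computation, with arbitrary illustrative parameters, of the entropy of such a mixture together with both sides of the lemma's inequality; the closeness of the two printed numbers reflects the near-sharpness discussed above.)

```python
import numpy as np

N, E_size, p = 10**6, 10, 0.3          # arbitrary illustrative parameters
# X is uniform on E with probability p, and uniform on the complement of E
# with probability 1 - p.
probs_in_E = np.full(E_size, p / E_size)
probs_out_E = np.full(N - E_size, (1 - p) / (N - E_size))
H_X = -(np.sum(probs_in_E * np.log(probs_in_E))
        + np.sum(probs_out_E * np.log(probs_out_E)))

P_U_in_E = E_size / N
lemma_bound = (np.log(N) - H_X + np.log(2)) / np.log(1.0 / P_U_in_E)
print(f"P(X in E) = {p:.3f},  bound from Lemma 1 = {lemma_bound:.3f}")
```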
It turns out that the above lemma combines well with concentration of measure estimates; in my paper, I used one of the simplest such estimates, namely Hoeffding's inequality, but there are of course many other estimates of this type (see e.g. this previous blog post for some others). Roughly speaking, concentration of measure inequalities allow one to make approximations such as

$\displaystyle F(U) \approx {\bf E} F(U)$

with exponentially high probability, where ${U}$ has a uniform distribution and ${F}$ is some reasonable function of ${U}$. Combining this with the above lemma, we can then obtain approximations of the form

$\displaystyle F(X) \approx {\bf E} F(U) \ \ \ \ \ (3)$

with somewhat high probability, if the entropy of ${X}$ is somewhat close to maximum. This observation, combined with an "entropy decrement argument" that allowed one to arrive at a situation in which the relevant random variable ${X}$ did have a near-maximum entropy, is the key new idea in my recent paper; for instance, one can use the approximation (3) to obtain an approximation of the form

$\displaystyle \sum_{j=1}^H \sum_{p \leq H} \lambda(n+j) \lambda(n+j+p) 1_{p | n+j} \approx \sum_{j=1}^H \sum_{p \leq H} \frac{\lambda(n+j) \lambda(n+j+p)}{p}$

for "most" choices of ${n}$ and a suitable choice of ${H}$ (with the latter being provided by the entropy decrement argument), where ${p}$ ranges over primes. The left-hand side is tied to Chowla-type sums such as

$\displaystyle \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n}$

through the multiplicativity of ${\lambda}$, while the right-hand side, being a linear correlation involving two parameters ${j,p}$ rather than just one, has "finite complexity" and can be treated by existing techniques such as the Hardy-Littlewood circle method. One could hope that one could similarly use approximations such as (3) in other problems in analytic number theory or combinatorics.
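
(The following toy sketch, not taken from the paper, illustrates how a Hoeffding-type bound for the uniform variable ${U}$ combines with Lemma 1 to control a non-uniform ${X}$ of near-maximal entropy; the choice of ${F}$, the exponential tilt defining ${X}$, and all numerical parameters are arbitrary.)

```python
import numpy as np
from itertools import product

# U is uniform on R = {-1,+1}^m, F is the normalised sum of coordinates, and
# X is a slightly tilted (non-uniform) distribution on R with near-maximal entropy.
m, t = 16, 0.5
R = np.array(list(product([-1, 1], repeat=m)))       # all 2^m sign patterns
F = R.mean(axis=1)                                    # F(u) = (u_1 + ... + u_m)/m

# Bad set E: F deviates from its mean E F(U) = 0 by more than t.
E = np.abs(F) > t
P_U_in_E = E.mean()
hoeffding = 2 * np.exp(-m * t**2 / 2)                 # Hoeffding bound on P(U in E)

# Tilt the probability of each point by its first coordinate to make X non-uniform.
weights = np.exp(0.2 * R[:, 0])
p = weights / weights.sum()
H_X = -np.sum(p * np.log(p))
lemma_bound = (m * np.log(2) - H_X + np.log(2)) / np.log(1.0 / P_U_in_E)

print(f"P(U in E) = {P_U_in_E:.2e}  (Hoeffding bound {hoeffding:.2e})")
print(f"Lemma 1 bound on P(X in E): {lemma_bound:.3f}; actual P(X in E) = {p[E].sum():.3e}")
```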
Van Vu and I have just uploaded to the arXiv our paper Random matrices: Sharp concentration of eigenvalues, submitted to the Electronic Journal of Probability. As with many of our previous papers, this paper is concerned with the distribution of the eigenvalues ${\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)}$ of a random Wigner matrix ${M_n}$ (such as a matrix drawn from the Gaussian Unitary Ensemble (GUE) or Gaussian Orthogonal Ensemble (GOE)). To simplify the discussion we shall mostly restrict attention to the bulk of the spectrum, i.e. to eigenvalues ${\lambda_i(M_n)}$ with ${\varepsilon n \leq i \leq (1-\varepsilon) n}$ for some fixed ${\varepsilon > 0}$, although analogues of most of the results below have also been obtained at the edge of the spectrum.
If we normalise the entries of the matrix to have mean zero and variance ${1/n}$, then in the asymptotic limit ${n \rightarrow \infty}$, we have the Wigner semicircle law, which asserts that the eigenvalues are asymptotically distributed according to the semicircular distribution ${\rho_{sc}(x)\ dx}$, where

$\displaystyle \rho_{sc}(x) := \frac{1}{2\pi} (4-x^2)_+^{1/2}.$

An essentially equivalent way of saying this is that for large ${n}$, we expect the ${i^{th}}$ eigenvalue ${\lambda_i(M_n)}$ of ${M_n}$ to stay close to the classical location ${\gamma_i \in [-2,2]}$, defined by the formula

$\displaystyle \int_{-2}^{\gamma_i} \rho_{sc}(x)\ dx = \frac{i}{n}.$

In particular, from the Wigner semicircle law it can be shown that asymptotically almost surely, one has

$\displaystyle \lambda_i(M_n) = \gamma_i + o(1) \ \ \ \ \ (1)$

for all ${1 \leq i \leq n}$.
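
(As a quick numerical illustration, not part of the original post, one can compare the ordered eigenvalues of a moderately large GOE-type matrix, with the ${1/n}$-variance normalisation assumed above, against the classical locations ${\gamma_i}$; the half-integer quantile shift in the sketch below is a small convenience that only moves each ${\gamma_i}$ by ${O(1/n)}$.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# GOE-type Wigner matrix, normalised so the off-diagonal entries have variance 1/n.
A = rng.normal(scale=1 / np.sqrt(n), size=(n, n))
M = (A + A.T) / np.sqrt(2)
eigs = np.sort(np.linalg.eigvalsh(M))

def semicircle_cdf(x):
    # Integral of rho_sc(t) = (1/2pi) sqrt(4 - t^2) from -2 to x.
    return ((x / 2) * np.sqrt(4 - x**2) + 2 * np.arcsin(x / 2) + np.pi) / (2 * np.pi)

# Classical locations: invert the semicircle CDF at the quantiles (i - 1/2)/n.
grid = np.linspace(-2, 2, 200001)
gammas = np.interp((np.arange(1, n + 1) - 0.5) / n, semicircle_cdf(grid), grid)

bulk = slice(n // 10, 9 * n // 10)     # restrict attention to the bulk of the spectrum
print("max deviation |lambda_i - gamma_i| in the bulk:",
      np.abs(eigs - gammas)[bulk].max().round(4))
```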
In the modern study of the spectrum of Wigner matrices (and in particular as a key tool in establishing universality results), it has become of interest to improve the error term in (1) as much as possible. A typical early result in this direction was by Bai, who used the Stieltjes transform method to obtain polynomial convergence rates of the shape ${\lambda_i(M_n) = \gamma_i + O(n^{-c})}$ for some absolute constant ${c > 0}$; see also the subsequent papers of Alon-Krivelevich-Vu and of Meckes, who were able to obtain such convergence rates (with exponentially high probability) by using concentration of measure tools, such as Talagrand's inequality. On the other hand, in the case of the GUE ensemble it is known (by this paper of Gustavsson) that ${\lambda_i(M_n)}$ has variance comparable to ${\frac{\log n}{n^2}}$ in the bulk, so that the optimal error term in (1) should be about ${\frac{\sqrt{\log n}}{n}}$. (One may think that if one wanted bounds on (1) that were uniform in ${i}$, one would need to enlarge the error term further, but this does not appear to be the case, due to strong correlations between the ${\lambda_i(M_n)}$; note for instance this recent result of Ben Arous and Bourgade that the largest gap between eigenvalues in the bulk is typically of order ${\frac{\sqrt{\log n}}{n}}$.)
A significant advance in this direction was achieved by Erdos, Schlein, and Yau in a series of papers where they used a combination of Stieltjes transform and concentration of measure methods to obtain local semicircle laws which showed, among other things, that one had asymptotics of the form

$\displaystyle N_I = (1+o(1)) n \int_I \rho_{sc}(x)\ dx$

with exponentially high probability for intervals ${I}$ in the bulk that were as short as ${n^{-1+\varepsilon}}$ for some ${\varepsilon > 0}$, where ${N_I := |\{ 1 \leq i \leq n: \lambda_i(M_n) \in I \}|}$ is the number of eigenvalues in ${I}$. These asymptotics are consistent with a good error term in (1), and are already sufficient for many applications, but do not quite imply a strong concentration result for individual eigenvalues ${\lambda_i(M_n)}$ (basically because they do not preclude long-range or "secular" shifts in the spectrum that involve large blocks of eigenvalues at mesoscopic scales). Nevertheless, this was rectified in a subsequent paper of Erdos, Yau, and Yin, which roughly speaking obtained a bound of the form

$\displaystyle \lambda_i(M_n) = \gamma_i + O\left( \frac{\log^{O(\log\log n)} n}{n} \right)$

in the bulk with exponentially high probability, for Wigner matrices obeying some exponential decay conditions on the entries. This was achieved by a rather delicate high moment calculation, in which the contribution of the diagonal entries of the resolvent (whose average forms the Stieltjes transform) was shown to mostly cancel each other out.
As the GUE computations show, this concentration result is sharp up to the quasilogarithmic factor ${\log^{O(\log\log n)} n}$. The main result of this paper is to improve the concentration result to one more in line with the GUE case, namely

$\displaystyle \lambda_i(M_n) = \gamma_i + O\left( \frac{\log^{O(1)} n}{n} \right)$

with exponentially high probability (see the paper for a more precise statement of results). The one catch is that an additional hypothesis is required, namely that the entries of the Wigner matrix have vanishing third moment. We also obtain similar results for the edge of the spectrum (but with a different scaling).
Our arguments are rather different from those of Erdos, Yau, and Yin, and thus provide an alternate approach to establishing eigenvalue concentration. The main tool is the Lindeberg exchange strategy, which is also used to prove the Four Moment Theorem (although we do not directly invoke the Four Moment Theorem in our analysis). The main novelty is that this exchange strategy is now used to establish large deviation estimates (i.e. exponentially small tail probabilities) rather than universality of the limiting distribution. Roughly speaking, the basic point is as follows. The Lindeberg exchange strategy seeks to compare a function ${F(X_1,\ldots,X_n)}$ of many independent random variables ${X_1,\ldots,X_n}$ with the same function ${F(Y_1,\ldots,Y_n)}$ of a different set of random variables ${Y_1,\ldots,Y_n}$ (which match moments with the original set of variables to some order, such as to second or fourth order) by exchanging the random variables one at a time. Typically, one tries to upper bound expressions such as

$\displaystyle {\bf E} \varphi( F(X_1,\ldots,X_{n-1},X_n) ) - {\bf E} \varphi( F(X_1,\ldots,X_{n-1},Y_n) )$

for various smooth test functions ${\varphi}$, by performing a Taylor expansion in the variable being swapped and taking advantage of the matching moment hypotheses. In previous implementations of this strategy, ${\varphi}$ was a bounded test function, which allowed one to get control of the bulk of the distribution of ${F(X_1,\ldots,X_n)}$, and in particular in controlling probabilities such as

$\displaystyle {\bf P}( a \leq F(X_1,\ldots,X_n) \leq b )$

for various thresholds ${a}$ and ${b}$, but did not give good control on the tail as the error terms tended to be polynomially decaying in ${n}$ rather than exponentially decaying. However, it turns out that one can modify the exchange strategy to deal with moments such as

$\displaystyle {\bf E} |F(X_1,\ldots,X_n)|^k$

for various moderately large ${k}$ (e.g. of size comparable to ${\log n}$), obtaining results such as

$\displaystyle {\bf E} |F(X_1,\ldots,X_n)|^k = (1+o(1)) {\bf E} |F(Y_1,\ldots,Y_n)|^k$

after performing all the relevant exchanges. As such, one can then use large deviation estimates on ${F(Y_1,\ldots,Y_n)}$ to deduce large deviation estimates on ${F(X_1,\ldots,X_n)}$.
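
(As a toy illustration of the kind of moment comparison being targeted, and not the spectral statistics actually treated in the paper, one can let the normalised sum of independent variables stand in for ${F}$ and compare a moderately high moment under random signs and under matching Gaussians; all parameters below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 20000
k = int(np.log(n))        # a moderately large moment exponent, of size comparable to log n

# F(X_1,...,X_n) is the normalised sum; the X_i are random signs and the Y_i are
# standard Gaussians, which match the signs to second (indeed third) order.
F_X = rng.choice([-1.0, 1.0], size=(trials, n)).sum(axis=1) / np.sqrt(n)
F_Y = rng.normal(size=(trials, n)).sum(axis=1) / np.sqrt(n)

print("k =", k)
print("E|F(X_1,...,X_n)|^k ~", np.mean(np.abs(F_X) ** k).round(3))
print("E|F(Y_1,...,Y_n)|^k ~", np.mean(np.abs(F_Y) ** k).round(3))
```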
In this paper we also take advantage of a simplification, first noted by Erdos, Yau, and Yin, that Four Moment Theorems become somewhat easier to prove if one works with resolvents ${(M_n - z)^{-1}}$ (and the closely related Stieltjes transform

$\displaystyle s_n(z) := \frac{1}{n} \hbox{tr}( M_n - z )^{-1}$

) rather than with individual eigenvalues, as the Taylor expansions of resolvents are very simple (essentially being a Neumann series). The relationship between the Stieltjes transform and the location of individual eigenvalues can be seen by taking advantage of the identity

$\displaystyle |\{ 1 \leq i \leq n: \lambda_i(M_n) \leq E \}| = \frac{n}{\pi} \lim_{\eta \rightarrow 0^+} \int_{-\infty}^E \hbox{Im}\ s_n(x + \sqrt{-1}\eta)\ dx$

for any energy level ${E}$, which can be verified from elementary calculus. (In practice, we would keep ${\eta}$ bounded away from zero and truncate the integral near infinity to avoid some divergences, but this is a minor technicality.) As such, a concentration result for the Stieltjes transform can be used to establish an analogous concentration result for the eigenvalue counting functions ${|\{ 1 \leq i \leq n: \lambda_i(M_n) \leq E \}|}$, which in turn can be used to deduce concentration results for individual eigenvalues ${\lambda_i(M_n)}$ by some basic combinatorial manipulations.
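
(A quick numerical check of the identity as reconstructed above, approximating the limit by a small fixed ${\eta}$ and the integral by a Riemann sum; the matrix size, energy level and truncation are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
A = rng.normal(scale=1 / np.sqrt(n), size=(n, n))
M = (A + A.T) / np.sqrt(2)              # GOE-type Wigner matrix with variance-1/n entries
eigs = np.linalg.eigvalsh(M)

E, eta = 0.5, 1e-3                      # energy level, and a small eta standing in for the limit
xs = np.linspace(-10.0, E, 200001)      # truncated range of integration (-infinity -> -10)

# Im s_n(x + i*eta) = (1/n) * sum_i eta / ((lambda_i - x)^2 + eta^2).
im_s = np.zeros_like(xs)
for lam in eigs:
    im_s += eta / ((xs - lam) ** 2 + eta ** 2)
im_s /= n

count_via_stieltjes = (n / np.pi) * np.sum(im_s) * (xs[1] - xs[0])
print("eigenvalue count |{i : lambda_i <= E}|:", int(np.sum(eigs <= E)))
print("via the Stieltjes transform identity  :", round(count_via_stieltjes, 2))
```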
We can now turn attention to one of the centerpiece universality results in random matrix theory, namely the Wigner semi-circle law for Wigner matrices. Recall from previous notes that a Wigner Hermitian matrix ensemble is a random matrix ensemble ${M_n = (\xi_{ij})_{1 \leq i,j \leq n}}$ of Hermitian matrices (thus ${\xi_{ij} = \overline{\xi_{ji}}}$; this includes real symmetric matrices as an important special case), in which the upper-triangular entries ${\xi_{ij}}$, ${i < j}$ are iid complex random variables with mean zero and unit variance, and the diagonal entries ${\xi_{ii}}$ are iid real variables, independent of the upper-triangular entries, with bounded mean and variance. Particular special cases of interest include the Gaussian Orthogonal Ensemble (GOE), the symmetric random sign matrices (aka symmetric Bernoulli ensemble), and the Gaussian Unitary Ensemble (GUE).
In previous notes we saw that the operator norm of ${M_n}$ was typically of size ${O(\sqrt{n})}$, so it is natural to work with the normalised matrix ${\frac{1}{\sqrt{n}} M_n}$. Accordingly, given any ${n \times n}$ Hermitian matrix ${M_n}$, we can form the (normalised) empirical spectral distribution (or ESD for short)

$\displaystyle \mu_{\frac{1}{\sqrt{n}} M_n} := \frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j(M_n)/\sqrt{n}}$

of ${M_n}$, where ${\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)}$ are the (necessarily real) eigenvalues of ${M_n}$, counting multiplicity. The ESD is a probability measure, which can be viewed as a distribution of the normalised eigenvalues ${\frac{1}{\sqrt{n}} \lambda_j(M_n)}$ of ${M_n}$.

When ${M_n}$ is a random matrix ensemble, then the ESD ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ is now a random measure, i.e. a random variable taking values in the space ${\hbox{Pr}({\bf R})}$ of probability measures on the real line. (Thus, the distribution of ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ is a probability measure on probability measures!)
Now we consider the behaviour of the ESDs ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ of a sequence ${M_n}$ of Hermitian matrix ensembles as ${n \rightarrow \infty}$. Recall from Notes 0 that for any sequence of random variables in a ${\sigma}$-compact metrisable space, one can define notions of convergence in probability and convergence almost surely. Specialising these definitions to the case of random probability measures on ${{\bf R}}$, and to deterministic limits, we see that a sequence of random ESDs ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converge in probability (resp. converge almost surely) to a deterministic limit ${\mu \in \hbox{Pr}({\bf R})}$ (which, confusingly enough, is a deterministic probability measure!) if, for every test function ${\varphi \in C_c({\bf R})}$, the quantities ${\int_{\bf R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n}}$ converge in probability (resp. converge almost surely) to ${\int_{\bf R} \varphi\ d\mu}$.
Remark 1 As usual, convergence almost surely implies convergence in probability, but not vice versa. In the special case of random probability measures, there is an even weaker notion of convergence, namely convergence in expectation, defined as follows. Given a random ESD ${\mu_{\frac{1}{\sqrt{n}} M_n}}$, one can form its expectation ${{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n} \in \hbox{Pr}({\bf R})}$, defined via duality (the Riesz representation theorem) as

$\displaystyle \int_{\bf R} \varphi\ d{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n} := {\bf E} \int_{\bf R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n};$

this probability measure can be viewed as the law of a random normalised eigenvalue ${\frac{1}{\sqrt{n}} \lambda_i(M_n)}$ (with the index ${i}$ chosen uniformly at random) of a random matrix ${M_n}$ drawn from the ensemble. We then say that the ESDs converge in expectation to a limit ${\mu \in \hbox{Pr}({\bf R})}$ if ${{\bf E}\mu_{\frac{1}{\sqrt{n}} M_n}}$ converges in the vague topology to ${\mu}$, thus

$\displaystyle {\bf E} \int_{\bf R} \varphi\ d\mu_{\frac{1}{\sqrt{n}} M_n} \rightarrow \int_{\bf R} \varphi\ d\mu$

for all ${\varphi \in C_c({\bf R})}$.
In general, these notions of convergence are distinct from each other; but in practice, one often finds in random matrix theory that these notions are effectively equivalent to each other, thanks to the concentration of measure phenomenon.
Exercise 1 Let ${M_n}$ be a sequence of ${n \times n}$ Hermitian matrix ensembles, and let ${\mu}$ be a continuous probability measure on ${{\bf R}}$.

- Show that ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converges almost surely to ${\mu}$ if and only if ${\mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)}$ converges almost surely to ${\mu(-\infty,\lambda)}$ for all ${\lambda \in {\bf R}}$.
- Show that ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converges in probability to ${\mu}$ if and only if ${\mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)}$ converges in probability to ${\mu(-\infty,\lambda)}$ for all ${\lambda \in {\bf R}}$.
- Show that ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converges in expectation to ${\mu}$ if and only if ${{\bf E} \mu_{\frac{1}{\sqrt{n}} M_n}(-\infty,\lambda)}$ converges to ${\mu(-\infty,\lambda)}$ for all ${\lambda \in {\bf R}}$.
We can now state the Wigner semi-circular law.
Theorem 1 (Semicircular law) Let ${M_n}$ be the top left ${n \times n}$ minors of an infinite Wigner matrix ${(\xi_{ij})_{i,j \geq 1}}$. Then the ESDs ${\mu_{\frac{1}{\sqrt{n}} M_n}}$ converge almost surely (and hence also in probability and in expectation) to the Wigner semi-circular distribution

$\displaystyle \mu_{sc} := \frac{1}{2\pi} (4-|x|^2)_+^{1/2}\ dx. \ \ \ \ \ (1)$
A numerical example of this theorem in action can be seen at the MathWorld entry for this law.
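
(For readers who wish to reproduce such a numerical example themselves, here is a minimal NumPy sketch, using a symmetric random sign matrix as the Wigner ensemble; it compares the ESD of ${\frac{1}{\sqrt{n}} M_n}$ with the semicircular density on a handful of bins.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Symmetric random sign (symmetric Bernoulli) Wigner matrix.
A = rng.choice([-1.0, 1.0], size=(n, n))
M = np.triu(A) + np.triu(A, 1).T            # symmetrise, keeping iid upper-triangular entries
eigs = np.linalg.eigvalsh(M) / np.sqrt(n)   # normalised eigenvalues

# Compare the ESD with the semicircular density (1/2pi) sqrt(4 - x^2) bin by bin.
bins = np.linspace(-2, 2, 21)
hist, _ = np.histogram(eigs, bins=bins, density=True)
centers = (bins[:-1] + bins[1:]) / 2
rho_sc = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)

for c, h, r in zip(centers, hist, rho_sc):
    print(f"x = {c:+.1f}   ESD ~ {h:.3f}   semicircle density = {r:.3f}")
```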
The semi-circular law nicely complements the upper Bai-Yin theorem from Notes 3, which asserts that (in the case when the entries have finite fourth moment, at least) the matrices ${\frac{1}{\sqrt{n}} M_n}$ almost surely have operator norm at most ${2+o(1)}$. Note that the operator norm is the same thing as the largest magnitude of the eigenvalues. Because the semi-circular distribution (1) is supported on the interval ${[-2,2]}$ with positive density on the interior of this interval, Theorem 1 easily supplies the lower Bai-Yin theorem, that the operator norm of ${\frac{1}{\sqrt{n}} M_n}$ is almost surely at least ${2-o(1)}$, and thus (in the finite fourth moment case) the norm is in fact equal to ${2+o(1)}$. Indeed, we have just shown that the semi-circular law provides an alternate proof of the lower Bai-Yin bound (Proposition 11 of Notes 3).
As will hopefully become clearer in the next set of notes, the semi-circular law is the noncommutative (or free probability) analogue of the central limit theorem, with the semi-circular distribution (1) taking on the role of the normal distribution. Of course, there is a striking difference between the two distributions, in that the former is compactly supported while the latter is merely subgaussian. One reason for this is that the concentration of measure phenomenon is more powerful in the case of ESDs of Wigner matrices than it is for averages of iid variables; compare the concentration of measure results in Notes 3 with those in Notes 1.
There are several ways to prove (or at least to heuristically justify) the semi-circular law. In this set of notes we shall focus on the two most popular methods, the moment method and the Stieltjes transform method, together with a third (heuristic) method based on Dyson Brownian motion (Notes 3b). In the next set of notes we shall also study the free probability method, and in the set of notes after that we use the determinantal processes method (although this method is initially only restricted to highly symmetric ensembles, such as GUE).
Suppose we have a large number ${n}$ of scalar random variables ${X_1,\ldots,X_n}$, which each have bounded size on average (e.g. their mean and variance could be ${O(1)}$). What can one then say about their sum ${S_n := X_1 + \ldots + X_n}$? If each individual summand ${X_i}$ varies in an interval of size ${O(1)}$, then their sum of course varies in an interval of size ${O(n)}$. However, a remarkable phenomenon, known as concentration of measure, asserts that assuming a sufficient amount of independence between the component variables ${X_1,\ldots,X_n}$, this sum sharply concentrates in a much narrower range, typically in an interval of size ${O(\sqrt{n})}$. This phenomenon is quantified by a variety of large deviation inequalities that give upper bounds (often exponential in nature) on the probability that such a combined random variable deviates significantly from its mean. The same phenomenon applies not only to linear expressions such as ${S_n = X_1 + \ldots + X_n}$, but more generally to nonlinear combinations ${F(X_1,\ldots,X_n)}$ of such variables, provided that the nonlinear function ${F}$ is sufficiently regular (in particular, if it is Lipschitz, either separately in each variable, or jointly in all variables).
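
(A quick numerical illustration of this contrast, not part of the original notes: for sums of independent random signs, the a priori range has size ${2n}$, yet the observed spread is only of size ${O(\sqrt{n})}$.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10_000, 2000

# n independent random signs: the sum could a priori be anywhere in [-n, n],
# but it concentrates in a window of size O(sqrt(n)) around its mean 0.
signs = rng.integers(0, 2, size=(trials, n), dtype=np.int8) * 2 - 1
sums = signs.sum(axis=1, dtype=np.int64)

print("a priori range of the sum  :", 2 * n)
print("observed standard deviation:", round(sums.std(), 1), "(sqrt(n) =", round(np.sqrt(n), 1), ")")
print("largest observed |sum|     :", np.abs(sums).max())
```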
The basic intuition here is that it is difficult for a large number of independent variables ${X_1,\ldots,X_n}$ to "work together" to simultaneously pull a sum ${X_1 + \ldots + X_n}$ or a more general combination ${F(X_1,\ldots,X_n)}$ too far away from its mean. Independence here is the key; concentration of measure results typically fail if the ${X_i}$ are too highly correlated with each other.
There are many applications of the concentration of measure phenomenon, but we will focus on a specific application which is useful in the random matrix theory topics we will be studying, namely on controlling the behaviour of random ${n}$-dimensional vectors with independent components, and in particular on the distance between such random vectors and a given subspace.
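
(As a preview of that application, here is a sketch, not taken from the notes, with iid standard Gaussian components and a coordinate subspace for simplicity: the distance from a random vector in ${{\bf R}^n}$ to a fixed ${d}$-dimensional subspace concentrates tightly around ${\sqrt{n-d}}$.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 1000, 300, 2000

# Random vectors in R^n with iid standard Gaussian components (mean 0, variance 1).
# Their distance to the span of the first d coordinate vectors is the norm of the
# remaining n - d coordinates, and it concentrates around sqrt(n - d).
X = rng.normal(size=(trials, n))
dist = np.linalg.norm(X[:, d:], axis=1)

print("sqrt(n - d)            :", round(np.sqrt(n - d), 2))
print("mean observed distance :", round(dist.mean(), 2))
print("std of the distance    :", round(dist.std(), 3), "(O(1), independent of n)")
```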
Once one has a sufficient amount of independence, the concentration of measure tends to be sub-gaussian in nature; thus the probability that one is at least ${\lambda}$ standard deviations from the mean tends to drop off like ${C e^{-c\lambda^2}}$ for some ${C, c > 0}$. In particular, one is ${O(\sqrt{\log n})}$ standard deviations from the mean with high probability, and ${O(\log^{1/2+\varepsilon} n)}$ standard deviations from the mean with overwhelming probability, for any fixed ${\varepsilon > 0}$. Indeed, concentration of measure is our primary tool for ensuring that various events hold with overwhelming probability (other moment methods can give high probability, but have difficulty ensuring overwhelming probability).
This is only a brief introduction to the concentration of measure phenomenon. A systematic study of this topic can be found in this book by Ledoux.