
Suppose we have a large number of scalar random variables {X_1,\ldots,X_n}, which each have bounded size on average (e.g. their mean and variance could be {O(1)}). What can one then say about their sum {S_n := X_1+\ldots+X_n}? If each individual summand {X_i} varies in an interval of size {O(1)}, then their sum of course varies in an interval of size {O(n)}. However, a remarkable phenomenon, known as concentration of measure, asserts that assuming a sufficient amount of independence between the component variables {X_1,\ldots,X_n}, this sum sharply concentrates in a much narrower range, typically in an interval of size {O(\sqrt{n})}. This phenomenon is quantified by a variety of large deviation inequalities that give upper bounds (often exponential in nature) on the probability that such a combined random variable deviates significantly from its mean. The same phenomenon applies not only to linear expressions such as {S_n = X_1+\ldots+X_n}, but more generally to nonlinear combinations {F(X_1,\ldots,X_n)} of such variables, provided that the nonlinear function {F} is sufficiently regular (in particular, if it is Lipschitz, either separately in each variable, or jointly in all variables).
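
The {O(\sqrt{n})} scaling is easy to see numerically. The following is a minimal simulation sketch of our own (not from the original post), using Rademacher signs {X_i = \pm 1}, for which {S_n} has mean zero and standard deviation exactly {\sqrt{n}}:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (100, 10_000, 1_000_000):
    # Use the identity S_n = 2*B - n, where B ~ Binomial(n, 1/2) counts
    # the +1's among n independent Rademacher signs; this keeps memory
    # usage small even for large n.
    s = 2.0 * rng.binomial(n, 0.5, size=20_000) - n
    print(f"n = {n:>9,}: empirical std of S_n = {s.std():9.1f},  sqrt(n) = {n ** 0.5:9.1f}")
```

Even at {n = 10^6}, where the sum could in principle range over an interval of length {2 \times 10^6}, the empirical spread stays near {\sqrt{n} = 1000}.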

The basic intuition here is that it is difficult for a large number of independent variables {X_1,\ldots,X_n} to “work together” to simultaneously pull a sum {X_1+\ldots+X_n} or a more general combination {F(X_1,\ldots,X_n)} too far away from its mean. Independence here is the key; concentration of measure results typically fail if the {X_i} are too highly correlated with each other.
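
To make the role of independence concrete, here is a small sketch of our own contrasting independent signs with perfectly correlated ones (all {X_i} equal to a single random sign), in which case the sum fluctuates on the scale {n} rather than {\sqrt{n}}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10_000, 20_000

# Independent case: S_n = 2*B - n with B ~ Binomial(n, 1/2).
indep = 2.0 * rng.binomial(n, 0.5, size=trials) - n
# Perfectly correlated case: X_1 = ... = X_n, so S_n = n * X_1.
corr = n * rng.choice([-1.0, 1.0], size=trials)

print(f"independent signs: std(S_n) = {indep.std():8.1f}  (~ sqrt(n) = {n ** 0.5:.0f})")
print(f"identical signs  : std(S_n) = {corr.std():8.1f}  (~ n = {n})")
```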

There are many applications of the concentration of measure phenomenon, but we will focus on a specific application which is useful in the random matrix theory topics we will be studying, namely on controlling the behaviour of random {n}-dimensional vectors with independent components, and in particular on the distance between such random vectors and a given subspace.
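
As a preview of that application, here is a hedged numerical sketch of our own (the Gaussian model and the particular subspace are assumptions made purely for illustration): for a random vector {X \in {\bf R}^n} with iid standard Gaussian entries and a fixed {d}-dimensional subspace {V}, the distance {\hbox{dist}(X,V)} concentrates around {\sqrt{n-d}} with fluctuations of size {O(1)}:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, trials = 1_000, 100, 10_000

# Take V = span(e_1, ..., e_d); the distance from X to V is then simply
# the Euclidean norm of the last n - d coordinates of X.
X = rng.standard_normal((trials, n))
dist = np.linalg.norm(X[:, d:], axis=1)

print(f"mean distance = {dist.mean():7.2f},  sqrt(n - d) = {(n - d) ** 0.5:7.2f}")
print(f"std  distance = {dist.std():7.3f}   (stays O(1) as n grows)")
```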

Once one has a sufficient amount of independence, the concentration of measure tends to be sub-gaussian in nature; thus the probability that one is at least {\lambda} standard deviations from the mean tends to drop off like {C \exp(-c\lambda^2)} for some {C,c > 0}. In particular, one is {O( \log^{1/2} n )} standard deviations from the mean with high probability, and {O( \log^{1/2+\epsilon} n)} standard deviations from the mean with overwhelming probability. Indeed, concentration of measure is our primary tool for ensuring that various events hold with overwhelming probability (other tools, such as the moment method, can give high probability, but have difficulty ensuring overwhelming probability).
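
To spell out the arithmetic behind those two assertions (a routine computation which we insert here for concreteness): applying the sub-gaussian bound with {\lambda = A \log^{1/2} n} gives a failure probability of at most {C \exp( - c A^2 \log n ) = C n^{-cA^2}}, which decays polynomially in {n} once {A} is large enough (high probability), while taking {\lambda = \log^{1/2+\epsilon} n} gives a failure probability of at most {C \exp( - c \log^{1+2\epsilon} n )}, which decays faster than any fixed power of {n} (overwhelming probability).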

This is only a brief introduction to the concentration of measure phenomenon. A systematic study of this topic can be found in the book The concentration of measure phenomenon by Ledoux.
