In the theory of discrete random matrices (e.g. matrices whose entries are random signs $\pm 1$), one often encounters the problem of understanding the distribution of the random variable $\hbox{dist}(X,V)$, where $X = (\xi_1,\ldots,\xi_n) \in \{-1,+1\}^n$ is an $n$-dimensional random sign vector (so $X$ is uniformly distributed in the discrete cube $\{-1,+1\}^n$), and $V$ is some $d$-dimensional subspace of ${\bf R}^n$ for some $0 \leq d \leq n$.
It is not hard to compute the second moment of this random variable. Indeed, if $P$ denotes the orthogonal projection matrix from ${\bf R}^n$ to the orthogonal complement $V^\perp$ of $V$, then one observes that

$\displaystyle \hbox{dist}(X,V)^2 = \|PX\|^2 = X \cdot PX,$

and hence, on taking expectations, ${\bf E}\, \hbox{dist}(X,V)^2 = \hbox{tr}\, P = n-d$.
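Spelled out, the expectation computation here is the following routine calculation, writing $P = (p_{ij})_{1 \leq i,j \leq n}$ and using the independence of the signs $\xi_i$:

```latex
\[ {\bf E}\, \hbox{dist}(X,V)^2
   = {\bf E}\, X \cdot PX
   = \sum_{i=1}^n \sum_{j=1}^n p_{ij}\, {\bf E}\, \xi_i \xi_j
   = \sum_{i=1}^n p_{ii}
   = \hbox{tr}\, P
   = n-d, \]
```

since ${\bf E}\, \xi_i \xi_j = \delta_{ij}$ for independent random signs, and the orthogonal projection $P$ onto the $(n-d)$-dimensional space $V^\perp$ has trace $n-d$.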
In fact, one has sharp concentration around this value, in the sense that $\hbox{dist}(X,V) = \sqrt{n-d}+O(1)$ with high probability. More precisely, we have

Proposition 1 (Large deviation of dist(X,V)) For any $\lambda > 0$, one has

$\displaystyle {\bf P}( |\hbox{dist}(X,V) - \sqrt{n-d}| \geq \lambda ) \leq C e^{-c\lambda^2}$

for some absolute constants $C, c > 0$.
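The concentration asserted by Proposition 1 is easy to observe numerically. The following sketch (with illustrative parameter choices not taken from the post) samples random sign vectors and measures their distance to a randomly chosen $d$-dimensional subspace:

```python
# Monte Carlo check: dist(X, V) for random sign vectors X in {-1,+1}^n should
# concentrate near sqrt(n - d), with O(1) fluctuations, for a d-dim subspace V.
import numpy as np

rng = np.random.default_rng(0)
n, d, trials = 100, 20, 2000

# A random d-dimensional subspace V of R^n: orthonormal basis via QR.
Q, _ = np.linalg.qr(rng.standard_normal((n, d)))   # columns span V

X = rng.choice([-1.0, 1.0], size=(trials, n))      # random sign vectors
proj = X @ Q                                       # coefficients of the projection onto V
# dist(X,V)^2 = |X|^2 - |proj_V X|^2; note |X|^2 = n exactly for sign vectors.
dist = np.sqrt(np.sum(X * X, axis=1) - np.sum(proj * proj, axis=1))

print(f"sqrt(n-d)  = {np.sqrt(n - d):.3f}")
print(f"mean dist  = {dist.mean():.3f}")
print(f"std  dist  = {dist.std():.3f}")
```

The empirical mean lands very close to $\sqrt{n-d}$, and the standard deviation stays bounded (order $1$) rather than growing with $n$, in line with the proposition.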
In fact the constants $C, c$ are very civilised; for large $\lambda$ one can basically take $C=4$ and $c=1/16$, for instance. This type of concentration, particularly for subspaces $V$ of moderately large codimension $n-d$, is fundamental to much of my work on random matrices with Van Vu, starting with our first paper (in which this proposition first appears). (For subspaces of small codimension (such as hyperplanes) one has to use other tools to get good results, such as inverse Littlewood-Offord theory or the Berry-Esséen central limit theorem, but that is another story.)
Proposition 1 is an easy consequence of the second moment computation and Talagrand's inequality, which among other things provides a sharp concentration result for convex Lipschitz functions on the cube $\{-1,+1\}^n$; since $x \mapsto \hbox{dist}(x,V)$ is indeed a convex $1$-Lipschitz function, this inequality can be applied immediately. The proof of Talagrand's inequality is short and can be found in several textbooks (e.g. Alon and Spencer), but I thought I would reproduce the argument here (specialised to the convex case), mostly to force myself to learn the proof properly.

Note that the concentration of $\hbox{dist}(X,V)$ obtained by Talagrand's inequality is much stronger than what one would get from more elementary tools such as Azuma's inequality or McDiarmid's inequality, which would only give concentration of about $O(\sqrt{n})$ or so (which is in fact trivial, since the cube $\{-1,+1\}^n$ has diameter $2\sqrt{n}$); the point is that Talagrand's inequality is very effective at exploiting the convexity of the problem, as well as the Lipschitz nature of the function in all ($\ell^2$) directions, whereas Azuma's inequality can only easily take advantage of the Lipschitz nature of the function in coordinate directions. On the other hand, Azuma's inequality works just as well if the $\ell^2$ metric is replaced with the larger $\ell^1$ metric, and one can conclude that the $\ell^1$ distance between $X$ and $V$ concentrates around its median to a width $O(\sqrt{n})$, which is a more non-trivial fact than the corresponding $\ell^2$ concentration bound given by that inequality. (The computation of the median of the $\ell^1$ distance is more complicated than for the $\ell^2$ distance, though, and depends on the orientation of $V$.)
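For orientation, the shape of the statement being invoked is the convex case of Talagrand's inequality; the explicit constants below are one common normalisation (consistent with the civilised constants quoted for Proposition 1), not the only one in the literature. If $F: {\bf R}^n \to {\bf R}$ is convex and $1$-Lipschitz in the $\ell^2$ metric, $X$ is uniformly distributed in $\{-1,+1\}^n$, and ${\bf M} F(X)$ denotes a median of $F(X)$, then

```latex
\[ {\bf P}\bigl( |F(X) - {\bf M} F(X)| \geq \lambda \bigr) \leq 4 \exp(-\lambda^2/16)
   \qquad \hbox{for all } \lambda > 0. \]
```

Applying this with $F(x) = \hbox{dist}(x,V)$, and then using the second moment computation to move the median to $\sqrt{n-d}+O(1)$, yields Proposition 1.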
Remark 1 If one makes the coordinates of $X$ iid Gaussian variables rather than random signs, then Proposition 1 is much easier to prove; the probability distribution of a Gaussian vector is rotation-invariant, so one can rotate $V$ to be, say, ${\bf R}^d \times \{0\}^{n-d}$, at which point $\hbox{dist}(X,V)^2$ is clearly the sum of $n-d$ independent squares of Gaussians (i.e. a chi-square distribution with $n-d$ degrees of freedom), and the claim follows from direct computation (or one can use the Chernoff inequality). The Gaussian counterpart of Talagrand's inequality is more classical, being essentially due to Lévy, and will also be discussed later in this post.
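The chi-square description in Remark 1 can be checked numerically as well. In this sketch (illustrative parameters), the subspace is deliberately taken in "general position" rather than as a coordinate subspace, to illustrate the rotation-invariance: $\hbox{dist}(X,V)^2$ should still match the chi-square distribution with $n-d$ degrees of freedom, which has mean $n-d$ and variance $2(n-d)$.

```python
# For Gaussian X, dist(X, V)^2 is chi-square with n - d degrees of freedom,
# regardless of the orientation of the d-dimensional subspace V.
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 100, 20, 5000

Q, _ = np.linalg.qr(rng.standard_normal((n, d)))   # orthonormal basis of a random V
X = rng.standard_normal((trials, n))               # iid standard Gaussian coordinates
proj = X @ Q                                       # projection coefficients onto V
dist_sq = np.sum(X * X, axis=1) - np.sum(proj * proj, axis=1)  # dist(X, V)^2

print(f"empirical mean = {dist_sq.mean():.2f}  (chi-square mean = {n - d})")
print(f"empirical var  = {dist_sq.var():.2f}  (chi-square var  = {2 * (n - d)})")
```

Both empirical moments agree with the chi-square values, as the rotation argument predicts.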