One of the basic tools in modern combinatorics is the probabilistic method, introduced by Erdős, in which a deterministic solution to a given problem is shown to exist by constructing a random candidate for a solution, and showing that this candidate satisfies all the requirements of the problem with positive probability. When the problem requires a real-valued statistic to be suitably large or suitably small, the following trivial observation is often employed:
Proposition 1 (Comparison with mean) Let $X$ be a random real-valued variable, whose mean (or first moment) ${\bf E} X$ is finite. Then $X \geq {\bf E} X$ with positive probability, and $X \leq {\bf E} X$ with positive probability.
This proposition is usually applied in conjunction with a computation of the first moment ${\bf E} X$, in which case this version of the probabilistic method becomes an instance of the first moment method. (For comparison with other moment methods, such as the second moment method, exponential moment method, and zeroth moment method, see Chapter 1 of my book with Van Vu. For a general discussion of the probabilistic method, see the book by Alon and Spencer of the same name.)
As a typical example in random matrix theory, if one wanted to understand how small or how large the operator norm $\|A\|_{op}$ of a random matrix $A$ could be, one might first try to compute the expected operator norm ${\bf E} \|A\|_{op}$ and then apply Proposition 1; see this previous blog post for examples of this strategy (and related strategies, based on comparing $\|A\|_{op}$ with more tractable expressions such as the moments $\hbox{tr}\, A^k$). (In this blog post, all matrices are complex-valued.)
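The moment comparison alluded to here rests on the elementary inequalities $\|A\|_{op} \leq (\hbox{tr}\, A^{2k})^{1/(2k)} \leq n^{1/(2k)} \|A\|_{op}$ for Hermitian $n \times n$ matrices $A$. The following numpy sketch illustrates them numerically; the ensemble, dimension, and moment order here are invented for illustration and are not taken from the references above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 6   # illustrative dimension and moment order (assumptions)

# A Hermitian random matrix with Gaussian entries, normalised so that
# the operator norm is O(1) (a GUE-type ensemble).
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
A = (G + G.conj().T) / np.sqrt(2 * n)

op_norm = np.linalg.norm(A, 2)  # largest singular value = ||A||_op
moment_proxy = np.trace(np.linalg.matrix_power(A, 2 * k)).real ** (1 / (2 * k))

# ||A||_op <= (tr A^{2k})^{1/(2k)} <= n^{1/(2k)} ||A||_op
print(op_norm, moment_proxy, n ** (1 / (2 * k)) * op_norm)
```

Larger $k$ sharpens the $n^{1/(2k)}$ factor at the cost of computing a higher moment, which is the basic trade-off of the moment method.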
Recently, in their proof of the Kadison-Singer conjecture (and also in their earlier paper on Ramanujan graphs), Marcus, Spielman, and Srivastava introduced a striking new variant of the first moment method, suited in particular for controlling the operator norm $\|A\|_{op}$ of a Hermitian positive semi-definite matrix $A$. Such matrices have non-negative real eigenvalues, and so $\|A\|_{op}$ in this case is just the largest eigenvalue $\lambda_1(A)$ of $A$. Traditionally, one tries to control the eigenvalues through averaged statistics such as moments $\hbox{tr}\, A^k = \sum_{i=1}^n \lambda_i(A)^k$ or Stieltjes transforms $\hbox{tr}\, (A-z)^{-1}$; again, see this previous blog post. Here we use $A - z$ as short-hand for $A - z I_n$, where $I_n$ is the $n \times n$ identity matrix. Marcus, Spielman, and Srivastava instead rely on the interpretation of the eigenvalues $\lambda_1(A),\dots,\lambda_n(A)$ of $A$ as the roots of the characteristic polynomial $p_A(z) := \det(z - A)$ of $A$, thus

$\displaystyle \|A\|_{op} = \hbox{maxroot}(p_A) \ \ \ \ \ (1)$

where $\hbox{maxroot}(p)$ is the largest real root of a non-zero polynomial $p$. (In our applications, we will only ever apply $\hbox{maxroot}$ to polynomials that have at least one real root, but for sake of completeness let us set $\hbox{maxroot}(p) = -\infty$ if $p$ has no real roots.)
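Identity (1) is easy to test numerically. In the following numpy sketch (toy dimension, and a hypothetical helper `maxroot` implementing the convention just described), the largest real root of the characteristic polynomial agrees with the largest eigenvalue of a positive semi-definite matrix.

```python
import numpy as np

def maxroot(coeffs, tol=1e-6):
    """Largest real root of a non-zero polynomial given by its coefficients;
    returns -inf if there are no real roots, matching the convention above."""
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    return real.max() if real.size else -np.inf

rng = np.random.default_rng(1)
n = 6
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X @ X.conj().T                 # Hermitian positive semi-definite

p_A = np.poly(A)                   # coefficients of det(zI - A)
print(maxroot(p_A), np.linalg.eigvalsh(A).max())   # the two sides of (1)
```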
Prior to the work of Marcus, Spielman, and Srivastava, I think it is safe to say that the conventional wisdom in random matrix theory was that the representation (1) of the operator norm was not particularly useful, due to the highly non-linear nature of both the characteristic polynomial map $A \mapsto p_A$ and the maximum root map $p \mapsto \hbox{maxroot}(p)$. (Although, as pointed out to me by Adam Marcus, some related ideas have occurred in graph theory rather than random matrix theory, for instance in the theory of the matching polynomial of a graph.) For instance, a fact as basic as the triangle inequality $\|A+B\|_{op} \leq \|A\|_{op} + \|B\|_{op}$ is extremely difficult to establish through (1). Nevertheless, it turns out that for certain special types of random matrices $A$ (particularly those in which a typical instance $A$ of this ensemble has a simple relationship to “adjacent” matrices in this ensemble), the polynomials $p_A$ enjoy an extremely rich structure (in particular, they lie in families of real stable polynomials, and hence enjoy good combinatorial interlacing properties) that can be surprisingly useful. In particular, Marcus, Spielman, and Srivastava established the following nonlinear variant of Proposition 1:
Proposition 2 (Comparison with mean) Let $m, n \geq 1$. Let $A$ be a random matrix, which is the sum $A = A_1 + \dots + A_m$ of independent Hermitian rank one $n \times n$ matrices $A_i$, each taking a finite number of values. Then $\hbox{maxroot}(p_A) \geq \hbox{maxroot}({\bf E} p_A)$ with positive probability, and $\hbox{maxroot}(p_A) \leq \hbox{maxroot}({\bf E} p_A)$ with positive probability.
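Here is a brute-force sanity check of Proposition 2 on a toy ensemble (all parameters below are hypothetical): since each $A_i$ takes finitely many values, one can enumerate all outcomes, average the characteristic polynomials coefficientwise to obtain ${\bf E} p_A$, and confirm that $\hbox{maxroot}({\bf E} p_A)$ is sandwiched between the smallest and largest values of $\hbox{maxroot}(p_A)$.

```python
import itertools
import numpy as np

def maxroot(coeffs, tol=1e-6):
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    return real.max() if real.size else -np.inf

# Each A_i = v v^T, with v drawn uniformly from two fixed real vectors.
rng = np.random.default_rng(2)
n, m = 3, 4
values = [[rng.standard_normal(n) for _ in range(2)] for _ in range(m)]

polys = []
for pick in itertools.product(range(2), repeat=m):   # all 2^m outcomes
    A = sum(np.outer(values[i][b], values[i][b]) for i, b in enumerate(pick))
    polys.append(np.poly(A))                         # p_A for this outcome
mean_poly = np.mean(polys, axis=0)                   # E p_A, coefficientwise

mu = maxroot(mean_poly)
outcomes = [maxroot(p) for p in polys]
print(min(outcomes) <= mu <= max(outcomes))          # True, as Proposition 2 predicts
```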
We prove this proposition below the fold. The hypothesis that each $A_i$ only takes finitely many values is technical and can likely be relaxed substantially, but we will not need to do so here. Despite the superficial similarity with Proposition 1, the proof of Proposition 2 is quite nonlinear; in particular, one needs the interlacing properties of real stable polynomials to proceed. Another key ingredient in the proof is the observation that while the determinant $\det(A)$ of a matrix $A$ generally behaves in a nonlinear fashion on the underlying matrix $A$, it becomes (affine-)linear when one considers rank one perturbations, and so $p_A$ depends in an affine-multilinear fashion on the $A_1,\dots,A_m$.
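(To see the affine dependence in a single rank one direction, one can invoke the standard matrix determinant lemma: for an invertible matrix $B$, a vector $u$, and a scalar parameter $t$,

$\displaystyle \det(B + t\, uu^*) = \det(B) \bigl(1 + t\, u^* B^{-1} u\bigr),$

which is visibly affine in $t$; iterating this over the $m$ rank one summands yields the affine-multilinear dependence of $p_A$ on $A_1,\dots,A_m$.)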
More precisely, we have the following deterministic formula, also proven below the fold:
Proposition 3 (Deterministic multilinearisation formula) Let $A = A_1 + \dots + A_m$ be the sum of deterministic rank one $n \times n$ matrices $A_i$. Then we have

$\displaystyle p_A(z) = \mu[A_1,\dots,A_m](z) \ \ \ \ \ (2)$

for all $z \in {\bf C}$, where the mixed characteristic polynomial $\mu[A_1,\dots,A_m](z)$ of any $n \times n$ matrices $A_1,\dots,A_m$ (not necessarily rank one) is given by the formula

$\displaystyle \mu[A_1,\dots,A_m](z) := \Big(\prod_{i=1}^m (1 - \partial_{z_i})\Big) \det\Big( z + \sum_{i=1}^m z_i A_i \Big) \Big|_{z_1 = \dots = z_m = 0}. \ \ \ \ \ (3)$
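The formula (3) can be tested symbolically. Here is a small sympy sketch, with hypothetical $2 \times 2$ test matrices, that implements the differential operator in (3) literally and confirms the identity (2) for rank one summands:

```python
import sympy as sp

def mixed_char_poly(As, z):
    """Mixed characteristic polynomial mu[A_1,...,A_m](z), following (3)."""
    m, n = len(As), As[0].rows
    zs = sp.symbols(f"z1:{m + 1}")
    d = (z * sp.eye(n) + sum((zi * Ai for zi, Ai in zip(zs, As)),
                             sp.zeros(n, n))).det()
    for zi in zs:                      # apply prod_i (1 - d/dz_i)
        d = d - sp.diff(d, zi)
    return sp.expand(d.subs({zi: 0 for zi in zs}))

z = sp.symbols("z")
A1 = sp.Matrix([[1], [2]]) * sp.Matrix([[1, 2]])   # rank one test matrices
A2 = sp.Matrix([[0], [1]]) * sp.Matrix([[0, 1]])
p_A = (z * sp.eye(2) - (A1 + A2)).det()
print(sp.simplify(mixed_char_poly([A1, A2], z) - p_A))   # 0, i.e. (2) holds
```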
Among other things, this formula gives a useful representation of the mean characteristic polynomial ${\bf E} p_A$:

Corollary 4 (Random multilinearisation formula) Let $A = A_1 + \dots + A_m$ be the sum of jointly independent rank one $n \times n$ matrices $A_i$. Then we have

$\displaystyle {\bf E} p_A(z) = \mu[{\bf E} A_1,\dots,{\bf E} A_m](z) \ \ \ \ \ (4)$

for all $z \in {\bf C}$.
Proof: For fixed $z$, the expression $\det( z + \sum_{i=1}^m z_i A_i )$ is a polynomial combination of the $z_i A_i$, while the differential operator $\prod_{i=1}^m (1 - \partial_{z_i})$ is a linear combination of differential operators $\partial_{z_{i_1}} \dots \partial_{z_{i_k}}$ for $1 \leq i_1 < \dots < i_k \leq m$. As a consequence, we may expand (3) as a linear combination of terms, each of which is a multilinear combination of $A_{i_1},\dots,A_{i_k}$ for some $1 \leq i_1 < \dots < i_k \leq m$. Taking expectations of both sides of (2) and using the joint independence of the $A_i$, we obtain the claim. $\Box$
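One can also sanity-check Corollary 4 directly on toy data (again invented for illustration): below, $A_1$ is equally likely to be one of two rank one matrices and $A_2$ is deterministic. Note that the mean ${\bf E} A_1$ has rank two, so the right-hand side of (4) genuinely uses the mixed characteristic polynomial of matrices that are not rank one.

```python
import sympy as sp

def mixed_char_poly(As, z):          # same implementation of (3) as above
    m, n = len(As), As[0].rows
    zs = sp.symbols(f"z1:{m + 1}")
    d = (z * sp.eye(n) + sum((zi * Ai for zi, Ai in zip(zs, As)),
                             sp.zeros(n, n))).det()
    for zi in zs:
        d = d - sp.diff(d, zi)
    return sp.expand(d.subs({zi: 0 for zi in zs}))

z = sp.symbols("z")
B = sp.Matrix([[1], [0]]) * sp.Matrix([[1, 0]])    # the two values of A_1
C = sp.Matrix([[1], [1]]) * sp.Matrix([[1, 1]])
A2 = sp.Matrix([[0], [1]]) * sp.Matrix([[0, 1]])   # deterministic A_2

E_pA = sp.expand(sum((z * sp.eye(2) - (V + A2)).det() for V in (B, C)) / 2)
rhs = mixed_char_poly([(B + C) / 2, A2], z)        # mu[E A_1, E A_2]
print(sp.simplify(E_pA - rhs))                     # 0, i.e. (4) holds
```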
In view of Proposition 2, we can now hope to control the operator norm $\|A\|_{op}$ of certain special types of random matrices $A$ (and specifically, the sum of independent Hermitian positive semi-definite rank one matrices) by first controlling the mean ${\bf E} p_A$ of the random characteristic polynomial $p_A$. Pursuing this philosophy, Marcus, Spielman, and Srivastava establish the following result, which they then use to prove the Kadison-Singer conjecture:
Theorem 5 (Marcus-Spielman-Srivastava theorem) Let $m, n \geq 1$. Let $u_1,\dots,u_m$ be jointly independent random vectors in ${\bf C}^n$, with each $u_i$ taking a finite number of values. Suppose that we have the normalisation

$\displaystyle {\bf E} \sum_{i=1}^m u_i u_i^* = 1$

where we are using the convention that $1$ is the $n \times n$ identity matrix $I_n$ whenever necessary. Suppose also that we have the smallness condition

$\displaystyle {\bf E} \|u_i\|^2 \leq \epsilon$

for some $\epsilon > 0$ and all $i = 1,\dots,m$. Then one has

$\displaystyle \Big\| \sum_{i=1}^m u_i u_i^* \Big\|_{op} \leq (1 + \sqrt{\epsilon})^2 \ \ \ \ \ (5)$

with positive probability.
Note that the upper bound in (5) must be at least $1$ (by taking the $u_i$ to be deterministic, in which case $\sum_{i=1}^m u_i u_i^* = 1$ exactly) and also must be at least $\epsilon$ (by taking the $u_i$ to always have magnitude $\epsilon^{1/2}$, so that the positive semi-definite sum already contains a summand $u_i u_i^*$ of operator norm $\epsilon$). Thus the bound in (5) is asymptotically tight both in the regime $\epsilon \rightarrow \infty$ and in the regime $\epsilon \rightarrow 0$; the latter regime will be particularly useful for applications to Kadison-Singer. It should also be noted that if one uses more traditional random matrix theory methods (based on tools such as Proposition 1, as well as more sophisticated variants of these tools, such as the concentration of measure results of Rudelson and Ahlswede-Winter), one obtains a bound of the shape

$\displaystyle \Big\| \sum_{i=1}^m u_i u_i^* \Big\|_{op} = O(\log n)$

with high probability, which is insufficient for the application to the Kadison-Singer problem; see this article of Tropp. Thus, Theorem 5 obtains a sharper bound, at the cost of trading in “high probability” for “positive probability”.
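To see the “positive probability” phenomenon numerically, here is a toy Monte Carlo sketch of the setting of Theorem 5 (the ensemble and all parameters are invented for illustration): the deterministic vectors $v_i$ consist of $k$ copies of the standard basis of ${\bf C}^n$ scaled by $1/\sqrt{k}$, and each $u_i$ independently equals $\sqrt{2}\, v_i$ or $0$ with probability $1/2$ each, so that ${\bf E} \sum_i u_i u_i^* = 1$ and ${\bf E} \|u_i\|^2 = 1/k =: \epsilon$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, trials = 8, 16, 2000
eps = 1 / k
vs = [e / np.sqrt(k) for _ in range(k) for e in np.eye(n)]  # sum_i v_i v_i^T = I

best = np.inf
for _ in range(trials):
    # u_i = sqrt(2) v_i with probability 1/2, and 0 otherwise
    A = sum((2 * np.outer(v, v) for v in vs if rng.random() < 0.5),
            np.zeros((n, n)))
    best = min(best, np.linalg.norm(A, 2))
print(best, (1 + np.sqrt(eps)) ** 2)   # some outcome falls below the MSS bound
```

In this toy example the random matrix is diagonal, so the check is elementary; the content of Theorem 5 is that the same bound persists for arbitrary finite-valued ensembles obeying the normalisation and smallness conditions.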
In the paper of Marcus, Spielman and Srivastava, Theorem 5 is used to deduce a conjecture of Weaver, which was already known to imply the Kadison-Singer conjecture; actually, a slight modification of their argument gives the paving conjecture of Anderson, from which the original Kadison-Singer conjecture may be readily deduced. We give these implications below the fold. (See also this survey article for some background on the Kadison-Singer problem.)
Let us now summarise how Theorem 5 is proven. In the spirit of semi-definite programming, we rephrase the above theorem in terms of the rank one Hermitian positive semi-definite matrices $A_i := u_i u_i^*$ (note that $\hbox{tr}\, A_i = \|u_i\|^2$):
Theorem 6 (Marcus-Spielman-Srivastava theorem again) Let $A_1,\dots,A_m$ be jointly independent random rank one Hermitian positive semi-definite $n \times n$ matrices, each taking a finite number of values, such that the sum $A := A_1 + \dots + A_m$ has mean ${\bf E} A = 1$ and such that ${\bf E}\, \hbox{tr}\, A_i \leq \epsilon$ for some $\epsilon > 0$ and all $i = 1,\dots,m$. Then one has

$\displaystyle \|A\|_{op} \leq (1 + \sqrt{\epsilon})^2$

with positive probability.
In view of (1) and Proposition 2, this theorem follows from the following control on the mean characteristic polynomial:
Theorem 7 (Control of mean characteristic polynomial) Let $A_1,\dots,A_m$ be jointly independent random rank one Hermitian positive semi-definite $n \times n$ matrices, each taking a finite number of values, such that the sum $A := A_1 + \dots + A_m$ has mean ${\bf E} A = 1$ and such that ${\bf E}\, \hbox{tr}\, A_i \leq \epsilon$ for some $\epsilon > 0$ and all $i = 1,\dots,m$. Then one has

$\displaystyle \hbox{maxroot}({\bf E} p_A) \leq (1 + \sqrt{\epsilon})^2.$
This result is proven using the multilinearisation formula (Corollary 4) and some convexity properties of real stable polynomials; we give the proof below the fold.
Thanks to Adam Marcus, Assaf Naor and Sorin Popa for many useful explanations on various aspects of the Kadison-Singer problem.