
This is the second thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) {H_1 \leq 600}.
  • (Polymath8b, tentative) {H_2 \leq 484,276}.
  • (Polymath8b, tentative) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}.
  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}, {H_2 \leq 600}, and {H_m \ll m^3 e^{2m}}.

Following the strategy of Maynard, the bounds on {H_m} proceed by combining four ingredients:

  1. Distribution estimates {EH[\theta]} or {MPZ[\varpi,\delta]} for the primes (or related objects);
  2. Bounds for the minimal diameter {H(k)} of an admissible {k}-tuple;
  3. Lower bounds for the optimal value {M_k} to a certain variational problem;
  4. Sieve-theoretic arguments to convert the previous three ingredients into a bound on {H_m}.

Accordingly, the most natural routes to improve the bounds on {H_m} are to improve one or more of the above four ingredients.

Ingredient 1 was studied intensively in Polymath8a. The following results are known or conjectured (see the Polymath8a paper for notation and proofs):

  • (Bombieri-Vinogradov) {EH[\theta]} is true for all {0 < \theta < 1/2}.
  • (Polymath8a) {MPZ[\varpi,\delta]} is true for {\frac{600}{7} \varpi + \frac{180}{7}\delta < 1}.
  • (Polymath8a, tentative) {MPZ[\varpi,\delta]} is true for {\frac{1080}{13} \varpi + \frac{330}{13} \delta < 1}.
  • (Elliott-Halberstam conjecture) {EH[\theta]} is true for all {0 < \theta < 1}.

Ingredient 2 was also studied intensively in Polymath8a, and is more or less a solved problem for the values of {k} of interest (with exact values of {H(k)} for {k \leq 342}, and quite good upper bounds for {H(k)} for {k < 5000}, available at this page). So the main focus currently is on improving Ingredients 3 and 4.

For Ingredient 3, the basic variational problem is to understand the quantity

\displaystyle  M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

where {F: {\cal R}_k \rightarrow {\bf R}} ranges over bounded measurable functions, not identically zero, on the simplex

\displaystyle  {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}

with {I_k, J_k^{(m)}} being the quadratic forms

\displaystyle  I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k

and

\displaystyle  J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2 dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.

Equivalently, one has

\displaystyle  M_k({\cal R}_k) := \sup_F \frac{\int_{{\cal R}_k} F {\cal L}_k F}{\int_{{\cal R}_k} F^2}

where {{\cal L}_k: L^2({\cal R}_k) \rightarrow L^2({\cal R}_k)} is the positive semi-definite bounded self-adjoint operator

\displaystyle  {\cal L}_k F(t_1,\ldots,t_k) = \sum_{m=1}^k \int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_{m-1},s,t_{m+1},\ldots,t_k)\ ds,

so {M_k} is the operator norm of {{\cal L}_k}. Another interpretation of {M_k({\cal R}_k)} is that the probability that a rook moving randomly in the unit cube {[0,1]^k} stays in the simplex {{\cal R}_k} for {n} moves is asymptotically {(M_k({\cal R}_k)/k + o(1))^n}.
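This rook interpretation is easy to test by simulation. Here is a minimal Monte Carlo sketch (in Python; the walk counts and move counts are ad hoc choices): walks start uniformly in the simplex, each move resamples one uniformly chosen coordinate, walks leaving {{\cal R}_k} are killed, and the conditional per-move survival rate should approach {M_k({\cal R}_k)/k}.

    import numpy as np

    rng = np.random.default_rng(0)

    def rook_survival_rate(k, n_moves=10, n_walks=500_000):
        """Crude Monte Carlo estimate of the per-move survival probability
        of the random rook in R_k; this should tend to M_k / k."""
        # a Dirichlet(1,...,1) sample with k+1 parameters, with the last
        # coordinate dropped, is uniform on the solid simplex R_k
        pts = rng.dirichlet(np.ones(k + 1), size=n_walks)[:, :-1]
        alive = np.ones(n_walks, dtype=bool)
        survival = []
        for _ in range(n_moves):
            coords = rng.integers(0, k, size=n_walks)
            pts[np.arange(n_walks), coords] = rng.random(n_walks)  # rook move
            alive &= pts.sum(axis=1) <= 1.0  # once outside R_k, stay dead
            survival.append(alive.mean())
        return survival[-1] / survival[-2]  # conditional survival per move

    for k in (2, 3, 4):
        print(k, k * rook_survival_rate(k))  # rough, noisy estimate of M_k

The estimate is high-variance (the survivors thin out geometrically), so this is only a sanity check on the interpretation, not a serious lower-bound method.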

We now have a fairly good asymptotic understanding of {M_k({\cal R}_k)}, with the bounds

\displaystyle  \log k - 2 \log\log k -2 \leq M_k({\cal R}_k) \leq \log k + \log\log k + 2

holding for sufficiently large {k}. There is however still room to tighten the bounds on {M_k({\cal R}_k)} for small {k}; I’ll summarise some of the ideas discussed so far below the fold.
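As a toy illustration of the variational problem (not one of the optimisation strategies pursued in the project), the constant test function {F \equiv 1} can be evaluated in closed form: {I_k(1) = 1/k!} and {J_k^{(m)}(1) = 2/(k+1)!} for each {m}, giving only {M_k({\cal R}_k) \geq 2k/(k+1)}, far short of the logarithmic growth above. The sketch below checks a Monte Carlo evaluation of this Rayleigh quotient against the closed form.

    import numpy as np

    rng = np.random.default_rng(1)

    def quotient_constant_F(k, n_samples=100_000):
        """Monte Carlo value of sum_m J_k^{(m)}(F) / I_k(F) for F = 1.
        Exactly, this equals 2k/(k+1), a weak lower bound for M_k."""
        # uniform samples on the solid simplex R_{k-1}
        pts = rng.dirichlet(np.ones(k), size=n_samples)[:, :-1]
        s = pts.sum(axis=1)
        # vol(R_{k-1}) / vol(R_k) = k, and the k values of m contribute
        # equally, so the quotient is k^2 * E[(1 - s)^2] over R_{k-1}
        return k * k * np.mean((1.0 - s) ** 2)

    for k in (2, 10, 50):
        print(k, quotient_constant_F(k), 2 * k / (k + 1))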

For Ingredient 4, the basic tool is this:

Theorem 1 (Maynard) If {EH[\theta]} is true and {M_k({\cal R}_k) > \frac{2m}{\theta}}, then {H_m \leq H(k)}.

Thus, for instance, it is known that {M_{105} > 4} and {H(105)=600}, and this together with the Bombieri-Vinogradov theorem gives {H_1\leq 600}. This result is proven in Maynard’s paper and an alternate proof is also given in the previous blog post.
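In code, this deduction is a one-liner; the sketch below recovers {H_1 \leq 600} from the quoted values (the value 4.003 standing in for a lower bound on {M_{105}} is an illustrative placeholder; all that is needed is {M_{105} > 4}).

    def H_m_bound(m, theta, table):
        """Theorem 1: if EH[theta] holds and M_k > 2m/theta then H_m <= H(k).
        `table` holds triples (k, lower bound for M_k, H(k))."""
        eligible = [H_k for (k, M_k, H_k) in table if M_k > 2 * m / theta]
        return min(eligible) if eligible else None

    # Bombieri-Vinogradov gives EH[theta] for every theta < 1/2, so the
    # requirement is in effect M_k > 4m; 4.003 is a placeholder lower
    # bound for M_105, and H(105) = 600
    table = [(105, 4.003, 600)]
    print(H_m_bound(1, 0.4999, table))  # 600, i.e. H_1 <= 600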

We have a number of ways to relax the hypotheses of this result, which we also summarise below the fold.


For each natural number {m}, let {H_m} denote the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

where {p_n} denotes the {n^{th}} prime. In other words, {H_m} is the least quantity such that there are infinitely many intervals of length {H_m} that contain {m+1} or more primes. Thus, for instance, the twin prime conjecture is equivalent to the assertion that {H_1 = 2}, and the prime tuples conjecture would imply that {H_m} is equal to the diameter of the narrowest admissible tuple of cardinality {m+1} (thus we conjecturally have {H_1 = 2}, {H_2 = 6}, {H_3 = 8}, {H_4 = 12}, {H_5 = 16}, and so forth; see this web page for further continuation of this sequence).
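For concreteness, recall that a tuple of distinct integers is admissible if for every prime {p} it avoids at least one residue class mod {p}; only primes {p \leq k} need to be checked, since {k} numbers cannot cover all {p} classes when {p > k}. This is easy to verify by machine, as in the following sketch:

    def primes_up_to(n):
        """Sieve of Eratosthenes."""
        sieve = [True] * (n + 1)
        for p in range(2, n + 1):
            if sieve[p]:
                yield p
                for q in range(p * p, n + 1, p):
                    sieve[q] = False

    def is_admissible(tup):
        """A k-tuple is admissible iff it misses a residue class mod p for
        every prime p <= k."""
        return all(len({h % p for h in tup}) < p
                   for p in primes_up_to(len(tup)))

    # the conjectural values of H_1,...,H_5 above come from these tuples:
    for tup in [(0, 2), (0, 2, 6), (0, 2, 6, 8), (0, 2, 6, 8, 12),
                (0, 4, 6, 10, 12, 16)]:
        print(tup, is_admissible(tup), "diameter", max(tup) - min(tup))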

In 2004, Goldston, Pintz, and Yildirim established the bound {H_1 \leq 16} conditional on the Elliott-Halberstam conjecture, which remains unproven. However, no unconditional finiteness of {H_1} was obtained (although they famously obtained the non-trivial bound {p_{n+1}-p_n = o(\log p_n)}), and even on the Elliott-Halberstam conjecture no finiteness result on the higher {H_m} was obtained either (although they were able to show {p_{n+2}-p_n=o(\log p_n)} on this conjecture). In the recent breakthrough of Zhang, the unconditional bound {H_1 \leq 70,000,000} was obtained, by establishing a weak partial version of the Elliott-Halberstam conjecture; by refining these methods, the Polymath8 project (which I suppose we could retroactively call the Polymath8a project) then lowered this bound to {H_1 \leq 4,680}.

With the very recent preprint of James Maynard, we have the following further substantial improvements:

Theorem 1 (Maynard’s theorem) Unconditionally, we have the following bounds:

  • {H_1 \leq 600}.
  • {H_m \leq C m^3 e^{4m}} for an absolute constant {C} and any {m \geq 1}.

If one assumes the Elliott-Halberstam conjecture, we have the following improved bounds:

  • {H_1 \leq 12}.
  • {H_2 \leq 600}.
  • {H_m \leq C m^3 e^{2m}} for an absolute constant {C} and any {m \geq 1}.

The final conclusion {H_m \leq C m^3 e^{2m}} on Elliott-Halberstam is not explicitly stated in Maynard’s paper, but follows easily from his methods, as I will describe below the fold. (At around the same time as Maynard’s work, I had also begun a similar set of calculations concerning {H_m}, but was only able to obtain the slightly weaker bound {H_m \leq C \exp( C m )} unconditionally.) In the converse direction, the prime tuples conjecture implies that {H_m} should be comparable to {m \log m}. Granville has also obtained the slightly weaker explicit bound {H_m \leq e^{8m+5}} for any {m \geq 1} by a slight modification of Maynard’s argument.

The arguments of Maynard avoid using the difficult partial results on (weakened forms of) the Elliott-Halberstam conjecture that were established by Zhang and then refined by Polymath8; instead, the main input is the classical Bombieri-Vinogradov theorem, combined with a sieve that is closer in spirit to an older sieve of Goldston and Yildirim than to the sieve used later by Goldston, Pintz, and Yildirim, on which almost all subsequent work is based.

The aim of the Polymath8b project is to obtain improved bounds on {H_1, H_2}, and higher values of {H_m}, either conditionally on the Elliott-Halberstam conjecture or unconditionally. The likeliest routes for doing this are by optimising Maynard’s arguments and/or combining them with some of the results from the Polymath8a project. This post is intended to be the first research thread for that purpose. To start the ball rolling, I am going to give below a presentation of Maynard’s results, with some minor technical differences (most significantly, I am using the Goldston-Pintz-Yildirim variant of the Selberg sieve, rather than the traditional “elementary Selberg sieve” that is used by Maynard (and also in the Polymath8 project), although it seems that the numerology obtained by both sieves is essentially the same). An alternate exposition of Maynard’s work has also just been completed by Andrew Granville.


It’s time to (somewhat belatedly) roll over the previous thread on writing the first paper from the Polymath8 project, as this thread is overflowing with comments.  We are getting near the end of writing this large (173 pages!) paper, establishing a bound of 4,680 on the gap between primes, with only a few sections left to thoroughly proofread (and the last section should probably be removed, with appropriate changes elsewhere, in view of the more recent progress by Maynard).  As before, one can access the working copy of the paper at this subdirectory, as well as the rest of the directory, and the plan is to submit the paper to Algebra and Number Theory (and the arXiv) once there is consensus to do so.  Even before submission, this paper has already had some impact; Andrew Granville’s exposition of the bounded gaps between primes story for the Bulletin of the AMS follows several of the Polymath8 arguments in deriving the result.

After this paper is done, there is interest in continuing onwards with other Polymath8-related topics, and perhaps it is time to start planning for them.  First of all, we have an invitation from the Newsletter of the European Mathematical Society to discuss our experiences and impressions with the project.  I think it would be interesting to collect some impressions or thoughts (both positive and negative) from people who were highly active in the research and/or writing aspects of the project, as well as from more casual participants who were following the progress more quietly.  This project seemed to attract a bit more attention than most other polymath projects (with the possible exception of the very first project, Polymath1).  I think there are several reasons for this; the project builds upon a recent breakthrough (Zhang’s paper) that attracted an impressive amount of attention and publicity; the objective is quite easy to describe, when compared against other mathematical research objectives; and one could summarise the current state of progress by a single natural number H, which implied by infinite descent that the project was guaranteed to terminate at some point, but also made it possible to set up a “scoreboard” that could be quickly and easily updated.  From the research side, another appealing feature of the project was that – in the early stages of the project, at least – it was quite easy to grab a new world record by means of making a small observation, which made it fit very well with the polymath spirit (in which the emphasis is on lots of small contributions by many people, rather than a few big contributions by a small number of people).  Indeed, when the project first arose spontaneously as a blog post of Scott Morrison over at the Secret Blogging Seminar, I was initially hesitant to get involved, but soon found the “game” of shaving a few thousands or so off of H to be rather fun and addictive, and with a much greater sense of instant gratification than traditional research projects, which often take months before a satisfactory conclusion is reached.  Anyway, I would welcome other thoughts or impressions on the project in the comments below.  (I think that the pace of comments regarding proofreading of the paper has slowed down enough that this post can accommodate both types of comments comfortably.)

Then of course there is the “Polymath 8b” project in which we build upon the recent breakthroughs of James Maynard, which have simplified the route to bounded gaps between primes considerably, bypassing the need for any Elliott-Halberstam type distribution results beyond the Bombieri-Vinogradov theorem.  James has kindly shown me an advance copy of the preprint, which should be available on the arXiv in a matter of days; it looks like he has made a modest improvement to the previously announced results, improving k_0 a bit to 105 (which then improves H to the nice round number of 600).  He also has a companion result on bounding gaps p_{n+m}-p_n between non-consecutive primes for any m (not just m=1), with a bound of the shape H_m := \liminf_{n \to \infty} p_{n+m}-p_n \ll m^3 e^{4m}, which is in fact the first time that the finiteness of this limit inferior has been demonstrated.  I plan to discuss these results (from a slightly different perspective than Maynard) in a subsequent blog post kicking off the Polymath8b project, once Maynard’s paper has been uploaded.  It should be possible to shave the value of H = H_1 down further (or to get better bounds for H_m for larger m), both unconditionally and under assumptions such as the Elliott-Halberstam conjecture, by performing more numerical or theoretical optimisation on the variational problem Maynard is faced with, and by using the improved distributional estimates provided by our existing paper; again, I plan to discuss these issues in a subsequent post.  (James, by the way, has expressed interest in participating in this project, which should be very helpful.)

Classical probability theory (discussed for instance in this previous blog post) is founded on the notion of a probability space {(\Omega, {\cal E}, {\bf P})} – a space {\Omega} (the sample space) equipped with a {\sigma}-algebra {{\cal E}} (the event space), together with a countably additive probability measure {{\bf P}: {\cal E} \rightarrow [0,1]} that assigns a real number in the interval {[0,1]} to each event.

One can generalise the concept of a probability space to a finitely additive probability space, in which the event space {{\cal E}} is now only a Boolean algebra rather than a {\sigma}-algebra, and the probability measure {{\bf P}} is now only finitely additive instead of countably additive, thus {{\bf P}( E \vee F ) = {\bf P}(E) + {\bf P}(F)} when {E,F} are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting.

In this post I would like to describe a further weakening of probability theory, which I will call qualitative probability theory, in which one does not assign a precise numerical probability value {{\bf P}(E)} to each event, but instead merely records whether this probability is zero, one, or something in between. Thus {{\bf P}} is now a function from {{\cal E}} to the set {\{0, I, 1\}}, where {I} is a new symbol that replaces all the elements of the open interval {(0,1)}. In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as quantitative probability theory, to distinguish it from its qualitative counterpart.)
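To make the loss of structure concrete, here is a toy model (my own illustration, not standard notation) of the value set {\{0,I,1\}}: complements are determined, but the “sum” of two disjoint events each of probability {I} is genuinely ambiguous, which is one reason quantitative notions such as expectation are unavailable.

    ZERO, I, ONE = "0", "I", "1"  # "I" stands for "somewhere in (0,1)"

    def complement(v):
        """P(not E) is determined by P(E)."""
        return {ZERO: ONE, I: I, ONE: ZERO}[v]

    def disjoint_union(u, v):
        """The SET of possible values of P(E or F) for disjoint E, F;
        the qualitative data does not always pin down the answer."""
        if u == ZERO:
            return {v}
        if v == ZERO:
            return {u}
        if ONE in (u, v):
            # an almost sure event cannot be disjoint from an event of
            # positive probability
            raise ValueError("inconsistent qualitative data")
        return {I, ONE}  # I + I can land anywhere in (0, 1]

    print(disjoint_union(I, I))  # {'I', '1'}: genuinely ambiguous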

The main reason I want to introduce this weak notion of probability theory is that it becomes suited to talk about random variables living inside algebraic varieties, even if these varieties are defined over fields other than {{\bf R}} or {{\bf C}}. In algebraic geometry one often talks about a “generic” element of a variety {V} defined over a field {k}, which does not lie in any specified variety of lower dimension defined over {k}. Once {V} has positive dimension, such generic elements do not exist as classical, deterministic {k}-points {x} in {V}, since of course any such point lies in the {0}-dimensional subvariety {\{x\}} of {V}. There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field {k} to a sufficiently transcendental extension {\tilde k}, in order to locate a sufficient number of generic points in {V(\tilde k)}. Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in {V} as being associated to the zero ideal in the function ring of {V}. However I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a random variable {{\bf x}} taking values in {V}, but which lies in any given lower-dimensional subvariety of {V} with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over {{\bf R}} or {{\bf C}}) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”.

It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable {{\bf x}} is the set of all predicates {\phi({\bf x})} that are almost surely obeyed by {{\bf x}}. In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the sample space of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory.


One of the basic tools in modern combinatorics is the probabilistic method, introduced by Erdos, in which a deterministic solution to a given problem is shown to exist by constructing a random candidate for a solution, and showing that this candidate solves all the requirements of the problem with positive probability. When the problem requires a real-valued statistic {X} to be suitably large or suitably small, the following trivial observation is often employed:

Proposition 1 (Comparison with mean) Let {X} be a real-valued random variable whose mean (or first moment) {\mathop{\bf E} X} is finite. Then

\displaystyle  X \leq \mathop{\bf E} X

with positive probability, and

\displaystyle  X \geq \mathop{\bf E} X

with positive probability.

This proposition is usually applied in conjunction with a computation of the first moment {\mathop{\bf E} X}, in which case this version of the probabilistic method becomes an instance of the first moment method. (For comparison with other moment methods, such as the second moment method, exponential moment method, and zeroth moment method, see Chapter 1 of my book with Van Vu. For a general discussion of the probabilistic method, see the book by Alon and Spencer of the same name.)
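As a toy instance of the first moment method (a standard example, independent of the matrix material below): a uniformly random 2-colouring of the vertices of a graph with {m} edges cuts each edge with probability {1/2}, so the cut size {X} has {\mathop{\bf E} X = m/2}, and Proposition 1 guarantees a colouring cutting at least {m/2} edges; random search finds one quickly.

    import itertools
    import random

    vertices = range(6)
    edges = list(itertools.combinations(vertices, 2))  # K_6, m = 15 edges

    # E[cut size] = 15/2, so some colouring cuts at least 8 edges, and a
    # random colouring does so with positive probability
    while True:
        colour = [random.randrange(2) for _ in vertices]
        cut = sum(1 for u, v in edges if colour[u] != colour[v])
        if cut >= len(edges) / 2:
            break
    print(colour, cut)  # a colouring of K_6 cutting at least 8 of 15 edges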

As a typical example in random matrix theory, if one wanted to understand how small or how large the operator norm {\|A\|_{op}} of a random matrix {A} could be, one might first try to compute the expected operator norm {\mathop{\bf E} \|A\|_{op}} and then apply Proposition 1; see this previous blog post for examples of this strategy (and related strategies, based on comparing {\|A\|_{op}} with more tractable expressions such as the moments {\hbox{tr} A^k}). (In this blog post, all matrices are complex-valued.)

Recently, in their proof of the Kadison-Singer conjecture (and also in their earlier paper on Ramanujan graphs), Marcus, Spielman, and Srivastava introduced a striking new variant of the first moment method, suited in particular for controlling the operator norm {\|A\|_{op}} of a Hermitian positive semi-definite matrix {A}. Such matrices have non-negative real eigenvalues, and so {\|A\|_{op}} in this case is just the largest eigenvalue {\lambda_1(A)} of {A}. Traditionally, one tries to control the eigenvalues through averaged statistics such as moments {\hbox{tr} A^k = \sum_i \lambda_i(A)^k} or Stieltjes transforms {\hbox{tr} (A-z)^{-1} = \sum_i (\lambda_i(A)-z)^{-1}}; again, see this previous blog post. Here we use {z} as short-hand for {zI_d}, where {I_d} is the {d \times d} identity matrix. Marcus, Spielman, and Srivastava instead rely on the interpretation of the eigenvalues {\lambda_i(A)} of {A} as the roots of the characteristic polynomial {p_A(z) := \hbox{det}(z-A)} of {A}, thus

\displaystyle  \|A\|_{op} = \hbox{maxroot}( p_A ) \ \ \ \ \ (1)

where {\hbox{maxroot}(p)} is the largest real root of a non-zero polynomial {p}. (In our applications, we will only ever apply {\hbox{maxroot}} to polynomials that have at least one real root, but for sake of completeness let us set {\hbox{maxroot}(p)=-\infty} if {p} has no real roots.)
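Numerically, the representation (1) is straightforward to realise; here is a minimal sketch (with an ad hoc tolerance for deciding which roots count as real):

    import numpy as np

    def maxroot(coeffs):
        """Largest real root of a nonzero polynomial (numpy convention:
        highest-degree coefficient first); -inf if no real roots."""
        roots = np.roots(coeffs)
        real = roots[np.abs(roots.imag) < 1e-9].real
        return real.max() if real.size else -np.inf

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])   # Hermitian PSD, eigenvalues 1 and 3
    print(maxroot(np.poly(A)))   # 3.0, via the characteristic polynomial
    print(np.linalg.norm(A, 2))  # 3.0, the operator norm directly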

Prior to the work of Marcus, Spielman, and Srivastava, I think it is safe to say that the conventional wisdom in random matrix theory was that the representation (1) of the operator norm {\|A\|_{op}} was not particularly useful, due to the highly non-linear nature of both the characteristic polynomial map {A \mapsto p_A} and the maximum root map {p \mapsto \hbox{maxroot}(p)}. (Although, as pointed out to me by Adam Marcus, some related ideas have occurred in graph theory rather than random matrix theory, for instance in the theory of the matching polynomial of a graph.) For instance, a fact as basic as the triangle inequality {\|A+B\|_{op} \leq \|A\|_{op} + \|B\|_{op}} is extremely difficult to establish through (1). Nevertheless, it turns out that for certain special types of random matrices {A} (particularly those in which a typical instance {A} of this ensemble has a simple relationship to “adjacent” matrices in this ensemble), the polynomials {p_A} enjoy an extremely rich structure (in particular, they lie in families of real stable polynomials, and hence enjoy good combinatorial interlacing properties) that can be surprisingly useful. In particular, Marcus, Spielman, and Srivastava established the following nonlinear variant of Proposition 1:

Proposition 2 (Comparison with mean) Let {m,d \geq 1}. Let {A} be a random matrix, which is the sum {A = \sum_{i=1}^m A_i} of independent Hermitian rank one {d \times d} matrices {A_i}, each taking a finite number of values. Then

\displaystyle  \hbox{maxroot}(p_A) \leq \hbox{maxroot}( \mathop{\bf E} p_A )

with positive probability, and

\displaystyle  \hbox{maxroot}(p_A) \geq \hbox{maxroot}( \mathop{\bf E} p_A )

with positive probability.

We prove this proposition below the fold. The hypothesis that each {A_i} only takes finitely many values is technical and can likely be relaxed substantially, but we will not need to do so here. Despite the superficial similarity with Proposition 1, the proof of Proposition 2 is quite nonlinear; in particular, one needs the interlacing properties of real stable polynomials to proceed. Another key ingredient in the proof is the observation that while the determinant {\hbox{det}(A)} of a matrix {A} generally behaves in a nonlinear fashion on the underlying matrix {A}, it becomes (affine-)linear when one considers rank one perturbations, and so {p_A} depends in an affine-multilinear fashion on the {A_1,\ldots,A_m}. More precisely, we have the following deterministic formula, also proven below the fold:

Proposition 3 (Deterministic multilinearisation formula) Let {A} be the sum of deterministic rank one {d \times d} matrices {A_1,\ldots,A_m}. Then we have

\displaystyle  p_A(z) = \mu[A_1,\ldots,A_m](z) \ \ \ \ \ (2)

for all {z \in {\bf C}}, where the mixed characteristic polynomial {\mu[A_1,\ldots,A_m](z)} of any {d \times d} matrices {A_1,\ldots,A_m} (not necessarily rank one) is given by the formula

\displaystyle  \mu[A_1,\ldots,A_m](z) \ \ \ \ \ (3)

\displaystyle  = (\prod_{i=1}^m (1 - \frac{\partial}{\partial z_i})) \hbox{det}( z + \sum_{i=1}^m z_i A_i ) |_{z_1=\ldots=z_m=0}.
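Formula (3) can be implemented verbatim in a computer algebra system; the following sketch (using sympy) does so and confirms the identity (2) on random rank one matrices.

    import numpy as np
    import sympy as sp

    z = sp.symbols('z')

    def mixed_char_poly(mats):
        """Mixed characteristic polynomial (3): apply prod_i (1 - d/dz_i)
        to det(z + sum_i z_i A_i), then set z_1 = ... = z_m = 0."""
        m, d = len(mats), mats[0].shape[0]
        zs = sp.symbols(f'z1:{m + 1}')
        M = z * sp.eye(d)
        for zi, Ai in zip(zs, mats):
            M = M + zi * Ai
        expr = M.det()
        for zi in zs:
            expr = expr - sp.diff(expr, zi)  # the operators commute
        return sp.expand(expr.subs({zi: 0 for zi in zs}))

    # check Proposition 3 on random integer rank one 2x2 matrices
    rng = np.random.default_rng(2)
    mats = []
    for _ in range(3):
        v = sp.Matrix(rng.integers(-3, 4, size=2).tolist())
        mats.append(v * v.T)  # rank (at most) one
    A = mats[0] + mats[1] + mats[2]
    p_A = ((z * sp.eye(2) - A).det()).expand()
    print(sp.simplify(p_A - mixed_char_poly(mats)))  # 0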

Among other things, this formula gives a useful representation of the mean characteristic polynomial {\mathop{\bf E} p_A}:

Corollary 4 (Random multilinearisation formula) Let {A} be the sum of jointly independent rank one {d \times d} matrices {A_1,\ldots,A_m}. Then we have

\displaystyle  \mathop{\bf E} p_A(z) = \mu[ \mathop{\bf E} A_1, \ldots, \mathop{\bf E} A_m ](z) \ \ \ \ \ (4)

for all {z \in {\bf C}}.

Proof: For fixed {z}, the expression {\hbox{det}( z + \sum_{i=1}^m z_i A_i )} is a polynomial combination of the {z_i A_i}, while the differential operator {(\prod_{i=1}^m (1 - \frac{\partial}{\partial z_i}))} is a linear combination of differential operators {\frac{\partial^j}{\partial z_{i_1} \ldots \partial z_{i_j}}} for {1 \leq i_1 < \ldots < i_j \leq m}. As a consequence, we may expand (3) as a linear combination of terms, each of which is a multilinear combination of {A_{i_1},\ldots,A_{i_j}} for some {1 \leq i_1 < \ldots < i_j \leq m}. Taking expectations of both sides of (2) and using the joint independence of the {A_i}, we obtain the claim. \Box
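Corollary 4 is just as easy to confirm on a toy example (reusing z and mixed_char_poly from the sketch above): take each {A_i = v_iv_i^*} with {v_i} uniform over two vectors, so that {\mathop{\bf E} p_A} can be computed by brute force over the four outcomes.

    import itertools
    import sympy as sp

    choices = [(sp.Matrix([1, 0]), sp.Matrix([0, 1])),
               (sp.Matrix([1, 1]), sp.Matrix([1, -1]))]

    # left side of (4): E p_A, averaged over the four equally likely outcomes
    lhs = sp.Integer(0)
    for vs in itertools.product(*choices):
        A = sp.zeros(2, 2)
        for v in vs:
            A = A + v * v.T
        lhs = lhs + sp.Rational(1, 4) * (z * sp.eye(2) - A).det()

    # right side of (4): mu at the mean matrices E A_i (not rank one)
    means = [(v1 * v1.T + v2 * v2.T) / 2 for v1, v2 in choices]
    print(sp.simplify(sp.expand(lhs) - mixed_char_poly(means)))  # 0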

In view of Proposition 2, we can now hope to control the operator norm {\|A\|_{op}} of certain special types of random matrices {A} (and specifically, the sum of independent Hermitian positive semi-definite rank one matrices) by first controlling the mean {\mathop{\bf E} p_A} of the random characteristic polynomial {p_A}. Pursuing this philosophy, Marcus, Spielman, and Srivastava establish the following result, which they then use to prove the Kadison-Singer conjecture:

Theorem 5 (Marcus-Spielman-Srivastava theorem) Let {m,d \geq 1}. Let {v_1,\ldots,v_m \in {\bf C}^d} be jointly independent random vectors in {{\bf C}^d}, with each {v_i} taking a finite number of values. Suppose that we have the normalisation

\displaystyle  \mathop{\bf E} \sum_{i=1}^m v_i v_i^* = 1

where we are using the convention that {1} is the {d \times d} identity matrix {I_d} whenever necessary. Suppose also that we have the smallness condition

\displaystyle  \mathop{\bf E} \|v_i\|^2 \leq \varepsilon

for some {\varepsilon>0} and all {i=1,\ldots,m}. Then one has

\displaystyle  \| \sum_{i=1}^m v_i v_i^* \|_{op} \leq (1+\sqrt{\varepsilon})^2 \ \ \ \ \ (5)

with positive probability.

Note that the upper bound in (5) must be at least {1} (by taking {v_i} to be deterministic) and also must be at least {\varepsilon} (by taking the {v_i} to always have magnitude at least {\sqrt{\varepsilon}}). Thus the bound in (5) is asymptotically tight both in the regime {\varepsilon\rightarrow 0} and in the regime {\varepsilon \rightarrow \infty}; the latter regime will be particularly useful for applications to Kadison-Singer. It should also be noted that if one uses more traditional random matrix theory methods (based on tools such as Proposition 1, as well as more sophisticated variants of these tools, such as the concentration of measure results of Rudelson and Ahlswede-Winter), one obtains a bound of {\| \sum_{i=1}^m v_i v_i^* \|_{op} \ll_\varepsilon \log d} with high probability, which is insufficient for the application to the Kadison-Singer problem; see this article of Tropp. Thus, Theorem 5 obtains a sharper bound, at the cost of trading in “high probability” for “positive probability”.

In the paper of Marcus, Spielman and Srivastava, Theorem 5 is used to deduce a conjecture {KS_2} of Weaver, which was already known to imply the Kadison-Singer conjecture; actually, a slight modification of their argument gives the paving conjecture of Kadison and Singer, from which the original Kadison-Singer conjecture may be readily deduced. We give these implications below the fold. (See also this survey article for some background on the Kadison-Singer problem.)

Let us now summarise how Theorem 5 is proven. In the spirit of semi-definite programming, we rephrase the above theorem in terms of the rank one Hermitian positive semi-definite matrices {A_i := v_iv_i^*}:

Theorem 6 (Marcus-Spielman-Srivastava theorem again) Let {A_1,\ldots,A_m} be jointly independent random rank one Hermitian positive semi-definite {d \times d} matrices such that the sum {A :=\sum_{i=1}^m A_i} has mean

\displaystyle  \mathop{\bf E} A = I_d

and such that

\displaystyle  \mathop{\bf E} \hbox{tr} A_i \leq \varepsilon

for some {\varepsilon>0} and all {i=1,\ldots,m}. Then one has

\displaystyle  \| A \|_{op} \leq (1+\sqrt{\varepsilon})^2

with positive probability.

In view of (1) and Proposition 2, this theorem follows from the following control on the mean characteristic polynomial:

Theorem 7 (Control of mean characteristic polynomial) Let {A_1,\ldots,A_m} be jointly independent random rank one Hermitian positive semi-definite {d \times d} matrices such that the sum {A :=\sum_{i=1}^m A_i} has mean

\displaystyle  \mathop{\bf E} A = 1

and such that

\displaystyle  \mathop{\bf E} \hbox{tr} A_i \leq \varepsilon

for some {\varepsilon>0} and all {i=1,\ldots,m}. Then one has

\displaystyle  \hbox{maxroot}(\mathop{\bf E} p_A) \leq (1 +\sqrt{\varepsilon})^2.

This result is proven using the multilinearisation formula (Corollary 4) and some convexity properties of real stable polynomials; we give the proof below the fold.
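As a toy sanity check of Theorem 7 (an illustrative ensemble, not part of the proof): take {d=2}, {m=4}, and let each {v_i} be uniform over {\{e_1/\sqrt{2}, e_2/\sqrt{2}\}}, so that {\mathop{\bf E} A = I_2} and {\mathop{\bf E} \hbox{tr} A_i = 1/2 = \varepsilon}. Enumerating the {2^4} outcomes gives {\mathop{\bf E} p_A} exactly, and its largest root sits comfortably below {(1+\sqrt{1/2})^2} (this reuses maxroot from the earlier sketch).

    import itertools
    import numpy as np

    d, m, eps = 2, 4, 0.5
    vecs = [np.array([1.0, 0.0]) / np.sqrt(2),
            np.array([0.0, 1.0]) / np.sqrt(2)]

    mean_p = np.zeros(d + 1)
    for js in itertools.product(range(2), repeat=m):
        A = sum(np.outer(vecs[j], vecs[j]) for j in js)
        mean_p += np.poly(A) / 2 ** m  # E p_A, coefficient by coefficient

    print(maxroot(mean_p))          # 1.5 = maxroot(E p_A)
    print((1 + np.sqrt(eps)) ** 2)  # 2.914..., the Theorem 7 bound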

Thanks to Adam Marcus, Assaf Naor and Sorin Popa for many useful explanations on various aspects of the Kadison-Singer problem.

