It’s time to (somewhat belatedly) roll over the previous thread on writing the first paper from the Polymath8 project, as that thread is overflowing with comments.  We are getting near the end of writing this large (173 pages!) paper, establishing a bound of 4,680 on the gap between primes, with only a few sections left to thoroughly proofread (and the last section should probably be removed, with appropriate changes elsewhere, in view of the more recent progress by Maynard).  As before, one can access the working copy of the paper at this subdirectory, as well as the rest of the directory, and the plan is to submit the paper to Algebra and Number Theory (and the arXiv) once there is consensus to do so.  Even before being submitted, this paper has already had some impact; Andrew Granville’s exposition of the bounded gaps between primes story for the Bulletin of the AMS follows several of the Polymath8 arguments in deriving the result.

After this paper is done, there is interest in continuing onwards with other Polymath8-related topics, and perhaps it is time to start planning for them.  First of all, we have an invitation from the Newsletter of the European Mathematical Society to discuss our experiences and impressions with the project.  I think it would be interesting to collect some impressions or thoughts (both positive and negative) from people who were highly active in the research and/or writing aspects of the project, as well as from more casual participants who were following the progress more quietly.  This project seemed to attract a bit more attention than most other polymath projects (with the possible exception of the very first project, Polymath1).  I think there are several reasons for this: the project builds upon a recent breakthrough (Zhang’s paper) that attracted an impressive amount of attention and publicity; the objective is quite easy to describe, when compared against other mathematical research objectives; and one could summarise the current state of progress by a single natural number H, which by infinite descent implied that the project was guaranteed to terminate at some point, but also made it possible to set up a “scoreboard” that could be quickly and easily updated.  From the research side, another appealing feature of the project was that – in the early stages of the project, at least – it was quite easy to grab a new world record by means of making a small observation, which made it fit very well with the polymath spirit (in which the emphasis is on lots of small contributions by many people, rather than a few big contributions by a small number of people).  Indeed, when the project first arose spontaneously as a blog post of Scott Morrison over at the Secret Blogging Seminar, I was initially hesitant to get involved, but soon found the “game” of shaving a few thousand or so off of H to be rather fun and addictive, and with a much greater sense of instant gratification than traditional research projects, which often take months before a satisfactory conclusion is reached.  Anyway, I would welcome other thoughts or impressions on the project in the comments below.  (I think that the pace of comments regarding proofreading of the paper has slowed down enough that this post can accommodate both types of comments comfortably.)

Then of course there is the “Polymath8b” project in which we build upon the recent breakthroughs of James Maynard, which have simplified the route to bounded gaps between primes considerably, bypassing the need for any Elliott-Halberstam type distribution results beyond the Bombieri-Vinogradov theorem.  James has kindly shown me an advance copy of the preprint, which should be available on the arXiv in a matter of days; it looks like he has made a modest improvement to the previously announced results, improving k_0 a bit to 105 (which then improves H to the nice round number of 600).  He also has a companion result on bounding gaps p_{n+m}-p_n between non-consecutive primes for any m (not just m=1), with a bound of the shape H_m := \liminf_{n \to \infty} (p_{n+m}-p_n) \ll m^3 e^{4m}, which is in fact the first time that the finiteness of this limit inferior has been demonstrated.  I plan to discuss these results (from a slightly different perspective than Maynard) in a subsequent blog post kicking off the Polymath8b project, once Maynard’s paper has been uploaded.  It should be possible to shave the value of H = H_1 down further (or to get better bounds for H_m for larger m), both unconditionally and under assumptions such as the Elliott-Halberstam conjecture, both by performing more numerical or theoretical optimisation on the variational problem Maynard is faced with, and by using the improved distributional estimates provided by our existing paper; again, I plan to discuss these issues in a subsequent post.  (James, by the way, has expressed interest in participating in this project, which should be very helpful.)

The classical foundations of probability theory (discussed for instance in this previous blog post) rest on the notion of a probability space {(\Omega, {\cal E}, {\bf P})} – a space {\Omega} (the sample space) equipped with a {\sigma}-algebra {{\cal E}} (the event space), together with a countably additive probability measure {{\bf P}: {\cal E} \rightarrow [0,1]} that assigns a real number in the interval {[0,1]} to each event.

One can generalise the concept of a probability space to a finitely additive probability space, in which the event space {{\cal E}} is now only a Boolean algebra rather than a {\sigma}-algebra, and the measure {{\bf P}} is now only finitely additive instead of countably additive, thus {{\bf P}( E \vee F ) = {\bf P}(E) + {\bf P}(F)} when {E,F} are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting.

In this post I would like to describe a further weakening of probability theory, which I will call qualitative probability theory, in which one does not assign a precise numerical probability value {{\bf P}(E)} to each event, but instead merely records whether this probability is zero, one, or something in between. Thus {{\bf P}} is now a function from {{\cal E}} to the set {\{0, I, 1\}}, where {I} is a new symbol that replaces all the elements of the open interval {(0,1)}. In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as quantitative probability theory, to distinguish it from its qualitative counterpart.)
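To give a flavour of how independence survives in this setting, observe that if {E} and {F} are independent events in a quantitative model, then the qualitative value of {{\bf P}(E \wedge F)} is already determined by the qualitative values of {{\bf P}(E)} and {{\bf P}(F)}. A minimal sketch of the induced product rule on {\{0,I,1\}} (my own summary of this elementary observation, rather than anything drawn from a reference):

```latex
% Product rule for independent events E, F in qualitative probability:
%   P(E \wedge F) = P(E) \cdot P(F), where multiplication on {0, I, 1} obeys
\[
0 \cdot x = 0, \qquad 1 \cdot x = x, \qquad I \cdot I = I
\qquad \text{for all } x \in \{0, I, 1\},
\]
% since the product of two real numbers in the open interval (0,1)
% again lies in (0,1).
```

(Note, by contrast, that the probability of a union is not determined qualitatively: two events of probability {I} can union to an event of probability either {I} or {1}.)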

The main reason I want to introduce this weak notion of probability theory is that it is well suited to talking about random variables living inside algebraic varieties, even if these varieties are defined over fields other than {{\bf R}} or {{\bf C}}. In algebraic geometry one often talks about a “generic” element of a variety {V} defined over a field {k}, which does not lie in any specified variety of lower dimension defined over {k}. Once {V} has positive dimension, such generic elements do not exist as classical, deterministic {k}-points {x} in {V}, since of course any such point lies in the {0}-dimensional subvariety {\{x\}} of {V}. There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field {k} to a sufficiently transcendental extension {\tilde k}, in order to locate a sufficient number of generic points in {V(\tilde k)}. Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in {V} as being associated to the zero ideal in the function ring of {V}. However I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a random variable {{\bf x}} taking values in {V}, but which lies in any given lower-dimensional subvariety of {V} with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over {{\bf R}} or {{\bf C}}) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”.

It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable {{\bf x}} is the set of all predicates {\phi({\bf x})} that are almost surely obeyed by {{\bf x}}. In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the sample space of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory.


One of the basic tools in modern combinatorics is the probabilistic method, introduced by Erdos, in which a deterministic solution to a given problem is shown to exist by constructing a random candidate for a solution, and showing that this candidate solves all the requirements of the problem with positive probability. When the problem requires a real-valued statistic {X} to be suitably large or suitably small, the following trivial observation is often employed:

Proposition 1 (Comparison with mean) Let {X} be a random real-valued variable, whose mean (or first moment) {\mathop{\bf E} X} is finite. Then

\displaystyle  X \leq \mathop{\bf E} X

with positive probability, and

\displaystyle  X \geq \mathop{\bf E} X

with positive probability.

This proposition is usually applied in conjunction with a computation of the first moment {\mathop{\bf E} X}, in which case this version of the probabilistic method becomes an instance of the first moment method. (For comparison with other moment methods, such as the second moment method, exponential moment method, and zeroth moment method, see Chapter 1 of my book with Van Vu. For a general discussion of the probabilistic method, see the book by Alon and Spencer of the same name.)

As a typical example in random matrix theory, if one wanted to understand how small or how large the operator norm {\|A\|_{op}} of a random matrix {A} could be, one might first try to compute the expected operator norm {\mathop{\bf E} \|A\|_{op}} and then apply Proposition 1; see this previous blog post for examples of this strategy (and related strategies, based on comparing {\|A\|_{op}} with more tractable expressions such as the moments {\hbox{tr} A^k}). (In this blog post, all matrices are complex-valued.)
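As a toy illustration of Proposition 1 in this setting, one can estimate the mean operator norm of a random sign matrix by Monte Carlo and observe that both tail events occur with positive frequency. A minimal sketch (the dimension, sample count, and seed are arbitrary choices of mine):

```python
import numpy as np

# Monte Carlo illustration of Proposition 1 for the operator norm of a
# random sign matrix: both events ||A||_op <= E||A||_op and
# ||A||_op >= E||A||_op occur with positive probability.
rng = np.random.default_rng(0)
d, samples = 50, 2000
norms = np.array([
    np.linalg.norm(rng.choice([-1.0, 1.0], size=(d, d)), ord=2)
    for _ in range(samples)
])
mean = norms.mean()
print(f"estimated E||A||_op ~ {mean:.2f}")
print(f"fraction with ||A||_op <= mean: {(norms <= mean).mean():.2f}")
print(f"fraction with ||A||_op >= mean: {(norms >= mean).mean():.2f}")
```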

Recently, in their proof of the Kadison-Singer conjecture (and also in their earlier paper on Ramanujan graphs), Marcus, Spielman, and Srivastava introduced a striking new variant of the first moment method, suited in particular for controlling the operator norm {\|A\|_{op}} of a Hermitian positive semi-definite matrix {A}. Such matrices have non-negative real eigenvalues, and so {\|A\|_{op}} in this case is just the largest eigenvalue {\lambda_1(A)} of {A}. Traditionally, one tries to control the eigenvalues through averaged statistics such as moments {\hbox{tr} A^k = \sum_i \lambda_i(A)^k} or Stieltjes transforms {\hbox{tr} (A-z)^{-1} = \sum_i (\lambda_i(A)-z)^{-1}}; again, see this previous blog post. Here we use {z} as short-hand for {zI_d}, where {I_d} is the {d \times d} identity matrix. Marcus, Spielman, and Srivastava instead rely on the interpretation of the eigenvalues {\lambda_i(A)} of {A} as the roots of the characteristic polynomial {p_A(z) := \hbox{det}(z-A)} of {A}, thus

\displaystyle  \|A\|_{op} = \hbox{maxroot}( p_A ) \ \ \ \ \ (1)

where {\hbox{maxroot}(p)} is the largest real root of a non-zero polynomial {p}. (In our applications, we will only ever apply {\hbox{maxroot}} to polynomials that have at least one real root, but for sake of completeness let us set {\hbox{maxroot}(p)=-\infty} if {p} has no real roots.)
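As a quick numerical sanity check of (1) (purely illustrative, and no part of the arguments below), one can compare the operator norm of a Hermitian positive semi-definite matrix with the largest real root of its characteristic polynomial:

```python
import numpy as np

# Verify (1) numerically: for Hermitian PSD A, ||A||_op = maxroot(p_A).
rng = np.random.default_rng(1)
d = 6
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = B @ B.conj().T                       # Hermitian positive semi-definite
op_norm = np.linalg.norm(A, ord=2)       # largest singular value = lambda_1
char_poly = np.poly(A)                   # coefficients of det(z - A)
maxroot = max(r.real for r in np.roots(char_poly) if abs(r.imag) < 1e-8)
print(np.isclose(op_norm, maxroot))      # True
```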

Prior to the work of Marcus, Spielman, and Srivastava, I think it is safe to say that the conventional wisdom in random matrix theory was that the representation (1) of the operator norm {\|A\|_{op}} was not particularly useful, due to the highly non-linear nature of both the characteristic polynomial map {A \mapsto p_A} and the maximum root map {p \mapsto \hbox{maxroot}(p)}. (Although, as pointed out to me by Adam Marcus, some related ideas have occurred in graph theory rather than random matrix theory, for instance in the theory of the matching polynomial of a graph.) For instance, a fact as basic as the triangle inequality {\|A+B\|_{op} \leq \|A\|_{op} + \|B\|_{op}} is extremely difficult to establish through (1). Nevertheless, it turns out that for certain special types of random matrices {A} (particularly those in which a typical instance {A} of this ensemble has a simple relationship to “adjacent” matrices in this ensemble), the polynomials {p_A} enjoy an extremely rich structure (in particular, they lie in families of real stable polynomials, and hence enjoy good combinatorial interlacing properties) that can be surprisingly useful. In particular, Marcus, Spielman, and Srivastava established the following nonlinear variant of Proposition 1:

Proposition 2 (Comparison with mean) Let {m,d \geq 1}. Let {A} be a random matrix, which is the sum {A = \sum_{i=1}^m A_i} of independent Hermitian rank one {d \times d} matrices {A_i}, each taking a finite number of values. Then

\displaystyle  \hbox{maxroot}(p_A) \leq \hbox{maxroot}( \mathop{\bf E} p_A )

with positive probability, and

\displaystyle  \hbox{maxroot}(p_A) \geq \hbox{maxroot}( \mathop{\bf E} p_A )

with positive probability.

We prove this proposition below the fold. The hypothesis that each {A_i} only takes finitely many values is technical and can likely be relaxed substantially, but we will not need to do so here. Despite the superficial similarity with Proposition 1, the proof of Proposition 2 is quite nonlinear; in particular, one needs the interlacing properties of real stable polynomials to proceed. Another key ingredient in the proof is the observation that while the determinant {\hbox{det}(A)} of a matrix {A} generally behaves in a nonlinear fashion on the underlying matrix {A}, it becomes (affine-)linear when one considers rank one perturbations, and so {p_A} depends in an affine-multilinear fashion on the {A_1,\ldots,A_m}. More precisely, we have the following deterministic formula, also proven below the fold:

Proposition 3 (Deterministic multilinearisation formula) Let {A} be the sum of deterministic rank one {d \times d} matrices {A_1,\ldots,A_m}. Then we have

\displaystyle  p_A(z) = \mu[A_1,\ldots,A_m](z) \ \ \ \ \ (2)

for all {z \in {\bf C}}, where the mixed characteristic polynomial {\mu[A_1,\ldots,A_m](z)} of any {d \times d} matrices {A_1,\ldots,A_m} (not necessarily rank one) is given by the formula

\displaystyle  \mu[A_1,\ldots,A_m](z) = (\prod_{i=1}^m (1 - \frac{\partial}{\partial z_i})) \hbox{det}( z + \sum_{i=1}^m z_i A_i ) |_{z_1=\ldots=z_m=0}. \ \ \ \ \ (3)

Among other things, this formula gives a useful representation of the mean characteristic polynomial {\mathop{\bf E} p_A}:

Corollary 4 (Random multilinearisation formula) Let {A} be the sum of jointly independent rank one {d \times d} matrices {A_1,\ldots,A_m}. Then we have

\displaystyle  \mathop{\bf E} p_A(z) = \mu[ \mathop{\bf E} A_1, \ldots, \mathop{\bf E} A_m ](z) \ \ \ \ \ (4)

for all {z \in {\bf C}}.

Proof: For fixed {z}, the expression {\hbox{det}( z + \sum_{i=1}^m z_i A_i )} is a polynomial combination of the {z_i A_i}, while the differential operator {(\prod_{i=1}^m (1 - \frac{\partial}{\partial z_i}))} is a linear combination of differential operators {\frac{\partial^j}{\partial z_{i_1} \ldots \partial z_{i_j}}} for {1 \leq i_1 < \ldots < i_j \leq m}. As a consequence, we may expand (3) as a linear combination of terms, each of which is a multilinear combination of {A_{i_1},\ldots,A_{i_j}} for some {1 \leq i_1 < \ldots < i_j \leq m}. Taking expectations of both sides of (2) and using the joint independence of the {A_i}, we obtain the claim. \Box
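As a sanity check, one can also verify the multilinearisation formula symbolically for small random rank one matrices. A minimal sketch, with the arbitrary choices {d=m=2} and small integer vectors:

```python
import numpy as np
import sympy as sp

# Symbolic check of (2), (3): apply prod_i (1 - d/dz_i) to
# det(z + z_1 A_1 + z_2 A_2), set z_1 = z_2 = 0, and compare with the
# characteristic polynomial of A = A_1 + A_2.
rng = np.random.default_rng(2)
d, m = 2, 2
A_list = []
for _ in range(m):
    v = sp.Matrix(rng.integers(-3, 4, size=(d, 1)).tolist())
    A_list.append(v * v.T)               # rank one (real symmetric)

z = sp.symbols('z')
zs = sp.symbols(f'z1:{m + 1}')           # the variables z_1, ..., z_m
M = z * sp.eye(d) + sum((zi * Ai for zi, Ai in zip(zs, A_list)),
                        sp.zeros(d, d))
expr = M.det()
for zi in zs:                            # apply (1 - d/dz_i) for each i
    expr = expr - sp.diff(expr, zi)
mu = sp.expand(expr.subs({zi: 0 for zi in zs}))

A = sum(A_list, sp.zeros(d, d))
p_A = sp.expand((z * sp.eye(d) - A).det())
print(sp.simplify(mu - p_A) == 0)        # True, as (2) predicts
```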

In view of Proposition 2, we can now hope to control the operator norm {\|A\|_{op}} of certain special types of random matrices {A} (and specifically, the sum of independent Hermitian positive semi-definite rank one matrices) by first controlling the mean {\mathop{\bf E} p_A} of the random characteristic polynomial {p_A}. Pursuing this philosophy, Marcus, Spielman, and Srivastava establish the following result, which they then use to prove the Kadison-Singer conjecture:

Theorem 5 (Marcus-Spielman-Srivastava theorem) Let {m,d \geq 1}. Let {v_1,\ldots,v_m \in {\bf C}^d} be jointly independent random vectors in {{\bf C}^d}, with each {v_i} taking a finite number of values. Suppose that we have the normalisation

\displaystyle  \mathop{\bf E} \sum_{i=1}^m v_i v_i^* = 1

where we are using the convention that {1} is the {d \times d} identity matrix {I_d} whenever necessary. Suppose also that we have the smallness condition

\displaystyle  \mathop{\bf E} \|v_i\|^2 \leq \epsilon

for some {\epsilon>0} and all {i=1,\ldots,m}. Then one has

\displaystyle  \| \sum_{i=1}^m v_i v_i^* \|_{op} \leq (1+\sqrt{\epsilon})^2 \ \ \ \ \ (5)

with positive probability.

Note that the upper bound in (5) must be at least {1} (by taking {v_i} to be deterministic) and also must be at least {\epsilon} (by taking the {v_i} to always have magnitude at least {\sqrt{\epsilon}}). Thus the bound in (5) is asymptotically tight both in the regime {\epsilon\rightarrow 0} and in the regime {\epsilon \rightarrow \infty}; the latter regime will be particularly useful for applications to Kadison-Singer. It should also be noted that if one uses more traditional random matrix theory methods (based on tools such as Proposition 1, as well as more sophisticated variants of these tools, such as the concentration of measure results of Rudelson and Ahlswede-Winter), one obtains a bound of {\| \sum_{i=1}^m v_i v_i^* \|_{op} \ll_\epsilon \log d} with high probability, which is insufficient for the application to the Kadison-Singer problem; see this article of Tropp. Thus, Theorem 5 obtains a sharper bound, at the cost of trading in “high probability” for “positive probability”.

In the paper of Marcus, Spielman and Srivastava, Theorem 5 is used to deduce a conjecture {KS_2} of Weaver, which was already known to imply the Kadison-Singer conjecture; actually, a slight modification of their argument gives the paving conjecture of Kadison and Singer, from which the original Kadison-Singer conjecture may be readily deduced. We give these implications below the fold. (See also this survey article for some background on the Kadison-Singer problem.)

Let us now summarise how Theorem 5 is proven. In the spirit of semi-definite programming, we rephrase the above theorem in terms of the rank one Hermitian positive semi-definite matrices {A_i := v_iv_i^*}:

Theorem 6 (Marcus-Spielman-Srivastava theorem again) Let {A_1,\ldots,A_m} be jointly independent random rank one Hermitian positive semi-definite {d \times d} matrices such that the sum {A :=\sum_{i=1}^m A_i} has mean

\displaystyle  \mathop{\bf E} A = I_d

and such that

\displaystyle  \mathop{\bf E} \hbox{tr} A_i \leq \epsilon

for some {\epsilon>0} and all {i=1,\ldots,m}. Then one has

\displaystyle  \| A \|_{op} \leq (1+\sqrt{\epsilon})^2

with positive probability.

In view of (1) and Proposition 2, this theorem follows from the following control on the mean characteristic polynomial:

Theorem 7 (Control of mean characteristic polynomial) Let {A_1,\ldots,A_m} be jointly independent random rank one Hermitian positive semi-definite {d \times d} matrices such that the sum {A :=\sum_{i=1}^m A_i} has mean

\displaystyle  \mathop{\bf E} A = 1

and such that

\displaystyle  \mathop{\bf E} \hbox{tr} A_i \leq \epsilon

for some {\epsilon>0} and all {i=1,\ldots,m}. Then one has

\displaystyle  \hbox{maxroot}(\mathop{\bf E} p_A) \leq (1 +\sqrt{\epsilon})^2.

This result is proven using the multilinearisation formula (Corollary 4) and some convexity properties of real stable polynomials; we give the proof below the fold.

Thanks to Adam Marcus, Assaf Naor and Sorin Popa for many useful explanations on various aspects of the Kadison-Singer problem.


I’ve just finished the first draft of my book “Expansion in finite simple groups of Lie type“, which is based on the lecture notes for my graduate course on this topic that were previously posted on this blog.  It also contains some newer material, such as the notes on Lie algebras and Lie groups that I posted most recently here.

As always, corrections or comments are greatly appreciated (and errata will be collected at this page).

Let {F} be a field. A definable set over {F} is a set of the form

\displaystyle  \{ x \in F^n | \phi(x) \hbox{ is true} \} \ \ \ \ \ (1)

where {n} is a natural number, and {\phi(x)} is a predicate involving the ring operations {+,\times} of {F}, the equality symbol {=}, an arbitrary number of constants and free variables in {F}, the quantifiers {\forall, \exists}, boolean operators such as {\vee,\wedge,\neg}, and parentheses and colons, where the quantifiers are always understood to be over the field {F}. Thus, for instance, the set of quadratic residues

\displaystyle  \{ x \in F | \exists y: x = y \times y \}

is definable over {F}, and any algebraic variety over {F} is also a definable set over {F}. Henceforth we will abbreviate “definable over {F}” simply as “definable”.

If {F} is a finite field, then every subset of {F^n} is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that {E \subset F^n} is a definable set of complexity at most {M} if {n \leq M}, and {E} can be written in the form (1) for some predicate {\phi} of length at most {M} (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in {n} dimensions of degree {d} would be a definable set of complexity {O_{n,d}(1)}. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large.
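As a quick numerical illustration (and a preview of the cardinality dichotomy in Proposition 2 below), the quadratic residue example above has cardinality exactly {\frac{p+1}{2} = \frac{1}{2}|F| + O(1)} over {F = {\bf F}_p}, corresponding to “measure” {c = 1/2} and “dimension” {d = 1}. A toy sketch, with arbitrarily chosen primes:

```python
# Count the definable set { x in F_p : exists y, x = y*y } for a few primes;
# the count is exactly (p+1)/2, i.e. c = 1/2 and d = 1 in the notation of
# Proposition 2 below.
for p in [101, 1009, 10007]:
    residues = {(y * y) % p for y in range(p)}
    print(p, len(residues), (p + 1) // 2)
```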

In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:

Lemma 1 (Algebraic regularity lemma) Let {F} be a finite field, let {V,W} be definable non-empty sets of complexity at most {M}, and let {E \subset V \times W} also be definable with complexity at most {M}. Assume that the characteristic of {F} is sufficiently large depending on {M}. Then we may partition {V = V_1 \cup \ldots \cup V_m} and {W = W_1 \cup \ldots \cup W_n} with {m,n = O_M(1)}, with the following properties:

  • (Definability) Each of the {V_1,\ldots,V_m,W_1,\ldots,W_n} are definable of complexity {O_M(1)}.
  • (Size) We have {|V_i| \gg_M |V|} and {|W_j| \gg_M |W|} for all {i=1,\ldots,m} and {j=1,\ldots,n}.
  • (Regularity) We have

    \displaystyle  |E \cap (A \times B)| = d_{ij} |A| |B| + O_M( |F|^{-1/4} |V| |W| ) \ \ \ \ \ (2)

    for all {i=1,\ldots,m}, {j=1,\ldots,n}, {A \subset V_i}, and {B\subset W_j}, where {d_{ij}} is a rational number in {[0,1]} with numerator and denominator {O_M(1)}.

My original proof of this lemma was quite complicated, based on an explicit calculation of the “square”

\displaystyle  \mu(w,w') := \{ v \in V: (v,w), (v,w') \in E \}

of {E} using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)

Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields {F} (although its content is trivial if the field is not sufficiently large depending on the complexity bound {M}).

Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:

Proposition 2 Let {F} be a finite field, and let {M > 0}.

  • (Discretised cardinality) If {E} is a non-empty definable set of complexity at most {M}, then one has

    \displaystyle  |E| = c |F|^d + O_M( |F|^{d-1/2} ) \ \ \ \ \ (3)

    where {d = O_M(1)} is a natural number, and {c} is a positive rational number with numerator and denominator {O_M(1)}. In particular, we have {|F|^d \ll_M |E| \ll_M |F|^d}.

  • (Definable cardinality) Assume {|F|} is sufficiently large depending on {M}. If {V, W}, and {E \subset V \times W} are definable sets of complexity at most {M}, so that {E_w := \{ v \in V: (v,w) \in E \}} can be viewed as a definable subset of {V} that is definably parameterised by {w \in W}, then for each natural number {d = O_M(1)} and each positive rational {c} with numerator and denominator {O_M(1)}, the set

    \displaystyle  \{ w \in W: |E_w| = c |F|^d + O_M( |F|^{d-1/2} ) \} \ \ \ \ \ (4)

    is definable with complexity {O_M(1)}, where the implied constants in the asymptotic notation used to define (4) are the same as those appearing in (3). (Informally: the “dimension” {d} and “measure” {c} of {E_w} depend definably on {w}.)

We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)

The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most {M} and cardinality {|F|^{1/2}}, if {|F|} is sufficiently large depending on {M}. If {E \subset V} are definable sets of complexity at most {M}, it shows that {|E| = (c+ O_M(|F|^{-1/2})) |V|} for some rational {0\leq c \leq 1} with numerator and denominator {O_M(1)}; furthermore, if {c=0}, we may improve this bound to {|E| = O_M( |F|^{-1} |V|)}. In particular, we obtain the following “self-improving” properties:

  • If {E \subset V} are definable of complexity at most {M} and {|E| \leq \epsilon |V|} for some {\epsilon>0}, then (if {\epsilon} is sufficiently small depending on {M} and {F} is sufficiently large depending on {M}) this forces {|E| = O_M( |F|^{-1} |V| )}.
  • If {E \subset V} are definable of complexity at most {M} and {||E| - c |V|| \leq \epsilon |V|} for some {\epsilon>0} and positive rational {c}, then (if {\epsilon} is sufficiently small depending on {M,c} and {F} is sufficiently large depending on {M,c}) this forces {|E| = c |V| + O_M( |F|^{-1/2} |V| )}.

It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to {E}) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.


I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory“, submitted to the new journal “EMS surveys in the mathematical sciences“.  This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”.   There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, also come into play.

The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, and in Stepanov’s method for giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the nullstellensatz of Alon is another relatively early use of the polynomial method.  More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdos distance problem in the plane, and can be used to give a short proof of the Szemeredi-Trotter theorem.  One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.

This is something of a first draft, so comments and suggestions are even more welcome than usual.  (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.)

Once again it is time to roll over the previous discussion thread, which has become rather full with comments.  The paper is nearly finished (see also the working copy at this subdirectory, as well as the rest of the directory), but several people are carefully proofreading various sections of the paper.  Once all the people doing so have signed off on it, I think we will be ready to submit (there appears to be no objection to the plan to submit to Algebra and Number Theory).

Another thing to discuss is an invitation to Polymath8 to write a feature article (up to 8000 words or 15 pages) for the Newsletter of the European Mathematical Society on our experiences with this project.  It is perhaps premature to actually start writing this article before the main research paper is finalised, but we can at least plan how to write such an article.  One suggestion, proposed by Emmanuel, is to have individual participants each contribute a brief account of their interaction with the project, which we would compile together with some additional text summarising the project as a whole (and maybe some speculation for any lessons we can apply here for future polymath projects).   Certainly I plan to have a separate blog post collecting feedback on this project once the main writing is done.

The main purpose of this post is to roll over the discussion from the previous Polymath8 thread, which has become rather full with comments.  We are still writing the paper, but it appears to have stabilised in a near-final form (source files available here); the main remaining tasks are proofreading, checking the mathematics, and polishing the exposition.  We also have a tentative consensus to submit the paper to Algebra and Number Theory when the proofreading is all complete.

The paper is quite large now (164 pages!) but it is fortunately rather modular, and thus hopefully somewhat readable (particularly regarding the first half of the paper, which does not need any of the advanced exponential sum estimates).  The size should not be a major issue for the journal, so I would not seek to artificially shorten the paper at the expense of readability or content.

Define a partition of {1} to be a finite or infinite multiset {\Sigma} of real numbers in the interval {I := (0,1]} (that is, an unordered set of real numbers in {I}, possibly with multiplicity) whose total sum is {1}: {\sum_{t \in \Sigma}t = 1}. For instance, {\{1/2,1/4,1/8,1/16,\ldots\}} is a partition of {1}. Such partitions arise naturally when trying to decompose a large object into smaller ones, for instance (a computational sketch of the first two examples follows the list):

  1. (Prime factorisation) Given a natural number {n}, one can decompose it into prime factors {n = p_1 \ldots p_k} (counting multiplicity), and then the multiset

    \displaystyle  \Sigma_{PF}(n) := \{ \frac{\log p_1}{\log n}, \ldots,\frac{\log p_k}{\log n} \}

    is a partition of {1}.

  2. (Cycle decomposition) Given a permutation {\sigma \in S_n} on {n} labels {\{1,\ldots,n\}}, one can decompose {\sigma} into cycles {C_1,\ldots,C_k}, and then the multiset

    \displaystyle  \Sigma_{CD}(\sigma) := \{ \frac{|C_1|}{n}, \ldots, \frac{|C_k|}{n} \}

    is a partition of {1}.

  3. (Normalisation) Given a multiset {\Gamma} of positive real numbers whose sum {S := \sum_{x\in \Gamma}x} is finite and non-zero, the multiset

    \displaystyle  \Sigma_N( \Gamma) := \frac{1}{S} \cdot \Gamma = \{ \frac{x}{S}: x \in \Gamma \}

    is a partition of {1}.
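Here is the computational sketch promised above of the first two constructions (illustrative code only; the inputs are arbitrary choices):

```python
import math
import random
import sympy

def prime_partition(n):
    """The multiset Sigma_PF(n) = { log p / log n : p | n, with multiplicity }."""
    factors = sympy.factorint(n)         # {prime: multiplicity}
    return sorted((math.log(p) / math.log(n)
                   for p, mult in factors.items() for _ in range(mult)),
                  reverse=True)

def cycle_partition(n, rng=random):
    """The multiset Sigma_CD(sigma) = { |C| / n : C a cycle of a random sigma }."""
    perm = list(range(n))
    rng.shuffle(perm)
    seen, parts = set(), []
    for start in range(n):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:             # walk the cycle containing `start`
            seen.add(j)
            j = perm[j]
            length += 1
        parts.append(length / n)
    return sorted(parts, reverse=True)

print(prime_partition(2 ** 10 * 3 ** 5 * 7))   # entries sum to 1
print(cycle_partition(100))                    # entries sum to 1
```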

In the spirit of the universality phenomenon, one can ask what a “typical” partition should look like; thus one seeks a natural probability distribution on the space of all partitions, analogous to (say) the gaussian distributions on the real line, or GUE distributions on point processes on the line, and so forth. It turns out that there is one natural such distribution which is related to all three examples above, known as the Poisson-Dirichlet distribution. To describe this distribution, we first have to deal with the problem that it is not immediately obvious how to cleanly parameterise the space of partitions, given that the cardinality of the partition can be finite or infinite, that multiplicity is allowed, and that we would like to identify two partitions that are permutations of each other.

One way to proceed is to view a random partition {\Sigma} as a type of point process on the interval {I}, with the constraint that {\sum_{x \in \Sigma} x = 1}, in which case one can study statistics such as the counting functions

\displaystyle  N_{[a,b]} := |\Sigma \cap [a,b]| = \sum_{x \in\Sigma} 1_{[a,b]}(x)

(where the cardinality here counts multiplicity). This can certainly be done, although in the case of the Poisson-Dirichlet process, the formulae for the joint distribution of such counting functions are moderately complicated. Another way to proceed is to order the elements of {\Sigma} in decreasing order

\displaystyle  t_1 \geq t_2 \geq t_3 \geq \ldots \geq 0,

with the convention that one pads the sequence {t_n} by an infinite number of zeroes if {\Sigma} is finite; this identifies the space of partitions with an infinite dimensional simplex

\displaystyle  \{ (t_1,t_2,\ldots) \in [0,1]^{\bf N}: t_1 \geq t_2 \geq \ldots; \sum_{n=1}^\infty t_n = 1 \}.

However, it turns out that the process of ordering the elements is not “smooth” (basically because functions such as {(x,y) \mapsto \max(x,y)} and {(x,y) \mapsto \min(x,y)} are not smooth) and the formulae for the joint distribution in the case of the Poisson-Dirichlet process are again complicated.

It turns out that there is a better (or at least “smoother”) way to enumerate the elements {u_1,(1-u_1)u_2,(1-u_1)(1-u_2)u_3,\ldots} of a partition {\Sigma} than the ordered method, although it is random rather than deterministic. This procedure (which I learned from this paper of Donnelly and Grimmett) works as follows.

  1. Given a partition {\Sigma}, let {u_1} be an element of {\Sigma} chosen at random, with each element {t\in \Sigma} having a probability {t} of being chosen as {u_1} (so if {t \in \Sigma} occurs with multiplicity {m}, the net probability that {t} is chosen as {u_1} is actually {mt}). Note that this is well-defined since the elements of {\Sigma} sum to {1}.
  2. Now suppose {u_1} is chosen. If {\Sigma \backslash \{u_1\}} is empty, we set {u_2,u_3,\ldots} all equal to zero and stop. Otherwise, let {u_2} be an element of {\frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})} chosen at random, with each element {t \in \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})} having a probability {t} of being chosen as {u_2}. (For instance, if {u_1} occurred with some multiplicity {m>1} in {\Sigma}, then {u_2} can equal {\frac{u_1}{1-u_1}} with probability {(m-1)u_1/(1-u_1)}.)
  3. Now suppose {u_1,u_2} are both chosen. If {\Sigma \backslash \{u_1,u_2\}} is empty, we set {u_3, u_4, \ldots} all equal to zero and stop. Otherwise, let {u_3} be an element of {\frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})} chosen at random, with each element {t \in \frac{1}{1-u_1-u_2} \cdot (\Sigma\backslash \{u_1,u_2\})} having a probability {t} of being chosen as {u_3}.
  4. We continue this process indefinitely to create elements {u_1,u_2,u_3,\ldots \in [0,1]}.

We denote the random sequence {Enum(\Sigma) := (u_1,u_2,\ldots) \in [0,1]^{\bf N}} formed from a partition {\Sigma} in the above manner as the random normalised enumeration of {\Sigma}; this is a random variable in the infinite unit cube {[0,1]^{\bf N}}, and can be defined recursively by the formula

\displaystyle  Enum(\Sigma) = (u_1, Enum(\frac{1}{1-u_1} \cdot (\Sigma\backslash \{u_1\})))

with {u_1} drawn randomly from {\Sigma}, with each element {t \in \Sigma} chosen with probability {t}, except when {\Sigma =\{1\}} in which case we instead have

\displaystyle  Enum(\{1\}) = (1, 0,0,\ldots).

Note that one can recover {\Sigma} from any of its random normalised enumerations {Enum(\Sigma) := (u_1,u_2,\ldots)} by the formula

\displaystyle  \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\} \ \ \ \ \ (1)

with the convention that one discards any zero elements on the right-hand side. Thus {Enum} can be viewed as a (stochastic) parameterisation of the space of partitions by the unit cube {[0,1]^{\bf N}}, which is a simpler domain to work with than the infinite-dimensional simplex mentioned earlier.
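A minimal sketch of this enumeration procedure, together with the reconstruction formula (1), for a finite example partition (the partition chosen is arbitrary):

```python
import random

def enum(sigma, rng=random):
    """Random normalised enumeration: repeatedly pick an element with
    probability equal to its (rescaled) value, remove it, and rescale."""
    sigma = list(sigma)
    us = []
    while sigma:
        u = rng.choices(sigma, weights=sigma)[0]   # P(pick t) = t
        us.append(u)
        sigma.remove(u)
        sigma = [t / (1 - u) for t in sigma]
    return us

def reconstruct(us):
    """Recover Sigma from Enum(Sigma) via formula (1)."""
    parts, remaining = [], 1.0
    for u in us:
        parts.append(remaining * u)
        remaining *= 1 - u
    return sorted(parts, reverse=True)

sigma = [0.5, 0.25, 0.125, 0.125]
us = enum(sigma)
print(us)
print(reconstruct(us))    # recovers sigma (up to rounding)
```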

Note that this random enumeration procedure can also be adapted to the three models described earlier:

  1. Given a natural number {n}, one can randomly enumerate its prime factors {n =p'_1 p'_2 \ldots p'_k} by letting each prime factor {p} of {n} be equal to {p'_1} with probability {\frac{\log p}{\log n}}, then once {p'_1} is chosen, let each remaining prime factor {p} of {n/p'_1} be equal to {p'_2} with probability {\frac{\log p}{\log n/p'_1}}, and so forth.
  2. Given a permutation {\sigma\in S_n}, one can randomly enumerate its cycles {C'_1,\ldots,C'_k} by letting each cycle {C} in {\sigma} be equal to {C'_1} with probability {\frac{|C|}{n}}, and once {C'_1} is chosen, letting each remaining cycle {C} be equal to {C'_2} with probability {\frac{|C|}{n-|C'_1|}}, and so forth. Alternatively, one can traverse the elements of {\{1,\ldots,n\}} in random order, then let {C'_1} be the first cycle one encounters when performing this traversal, let {C'_2} be the next cycle (not equal to {C'_1}) one encounters when performing this traversal, and so forth.
  3. Given a multiset {\Gamma} of positive real numbers whose sum {S := \sum_{x\in\Gamma} x} is finite, we can randomly enumerate the elements {x'_1,x'_2,\ldots} of this multiset by letting each {x \in \Gamma} have a {\frac{x}{S}} probability of being set equal to {x'_1}, and then once {x'_1} is chosen, letting each remaining {x \in \Gamma\backslash \{x'_1\}} have a {\frac{x}{S-x'_1}} probability of being set equal to {x'_2}, and so forth.

We then have the following result:

Proposition 1 (Existence of the Poisson-Dirichlet process) There exists a random partition {\Sigma} whose random enumeration {Enum(\Sigma) = (u_1,u_2,\ldots)} has the uniform distribution on {[0,1]^{\bf N}}, thus {u_1,u_2,\ldots} are independently and identically distributed copies of the uniform distribution on {[0,1]}.

A random partition {\Sigma} with this property will be called the Poisson-Dirichlet process. This process, first introduced by Kingman, can be described explicitly using (1) as

\displaystyle  \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\},

where {u_1,u_2,\ldots} are iid copies of the uniform distribution on {[0,1]}, although it is not immediately obvious from this definition that {Enum(\Sigma)} is indeed uniformly distributed on {[0,1]^{\bf N}}. We prove this proposition below the fold.

An equivalent definition of a Poisson-Dirichlet process is a random partition {\Sigma} with the property that

\displaystyle  (u_1, \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})) \equiv (U, \Sigma) \ \ \ \ \ (2)

where {u_1} is a random element of {\Sigma} with each {t \in\Sigma} having a probability {t} of being equal to {u_1}, {U} is a uniform variable on {[0,1]} that is independent of {\Sigma}, and {\equiv} denotes equality of distribution. This can be viewed as a sort of stochastic self-similarity property of {\Sigma}: if one randomly removes one element from {\Sigma} and rescales, one gets a new copy of {\Sigma}.

It turns out that each of the three ways to generate partitions listed above can lead to the Poisson-Dirichlet process, either directly or in a suitable limit. We begin with the third way, namely by normalising a Poisson process to have sum {1}:

Proposition 2 (Poisson-Dirichlet processes via Poisson processes) Let {a>0}, and let {\Gamma_a} be a Poisson process on {(0,+\infty)} with intensity function {t \mapsto \frac{1}{t} e^{-at}}. Then the sum {S :=\sum_{x \in \Gamma_a} x} is almost surely finite, and the normalisation {\Sigma_N(\Gamma_a) = \frac{1}{S} \cdot \Gamma_a} is a Poisson-Dirichlet process.

Again, we prove this proposition below the fold. Now we turn to the second way (a topic, incidentally, that was briefly touched upon in this previous blog post):

Proposition 3 (Large cycles of a typical permutation) For each natural number {n}, let {\sigma} be a permutation drawn uniformly at random from {S_n}. Then the random partition {\Sigma_{CD}(\sigma)} converges in the limit {n \rightarrow\infty} to a Poisson-Dirichlet process {\Sigma} in the following sense: given any fixed sequence of intervals {[a_1,b_1],\ldots,[a_k,b_k] \subset I} (independent of {n}), the joint discrete random variable {(N_{[a_1,b_1]}(\Sigma_{CD}(\sigma)),\ldots,N_{[a_k,b_k]}(\Sigma_{CD}(\sigma)))} converges in distribution to {(N_{[a_1,b_1]}(\Sigma),\ldots,N_{[a_k,b_k]}(\Sigma))}.
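As a quick Monte Carlo sanity check of this proposition on a single statistic, one can compare the probability that a partition contains a part of size at least {1/2} (which, for the Poisson-Dirichlet process, works out to {\log 2 \approx 0.693}). This computation is my own illustration, with arbitrary parameters, and plays no role in the proof below:

```python
import math
import random

def has_big_cycle(n, rng=random):
    """Does a uniform random permutation of n labels have a cycle of
    length at least n/2?"""
    perm = list(range(n))
    rng.shuffle(perm)
    seen = set()
    for start in range(n):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        if 2 * length >= n:
            return True
    return False

def has_big_part_pd(rng=random):
    """Does the stick-breaking partition {u1, (1-u1)u2, ...} have a part
    of size at least 1/2?  Only parts produced while the remaining mass
    is at least 1/2 can qualify, so the loop terminates almost surely."""
    remaining = 1.0
    while remaining >= 0.5:
        part = remaining * rng.random()
        if part >= 0.5:
            return True
        remaining -= part
    return False

trials = 20000
print(sum(has_big_cycle(200) for _ in range(trials)) / trials)
print(sum(has_big_part_pd() for _ in range(trials)) / trials)
print(math.log(2))        # both estimates should be near log 2 ~ 0.6931
```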

Finally, we turn to the first way:

Proposition 4 (Large prime factors of a typical number) Let {x > 0}, and let {N_x} be a random natural number chosen according to one of the following three rules:

  1. (Uniform distribution) {N_x} is drawn uniformly at random from the natural numbers in {[1,x]}.
  2. (Shifted uniform distribution) {N_x} is drawn uniformly at random from the natural numbers in {[x,2x]}.
  3. (Zeta distribution) Each natural number {n} has a probability {\frac{1}{\zeta(s)}\frac{1}{n^s}} of being equal to {N_x}, where {s := 1 + \frac{1}{\log x}} and {\zeta(s):=\sum_{n=1}^\infty \frac{1}{n^s}}.

Then {\Sigma_{PF}(N_x)} converges as {x \rightarrow \infty} to a Poisson-Dirichlet process {\Sigma} in the same fashion as in Proposition 3.

The process {\Sigma_{PF}(N_x)} was first studied by Billingsley (and later by Knuth-Trabb Pardo and by Vershik), but the formulae were initially rather complicated; the proposition above is due to Donnelly and Grimmett, although the third case of the proposition is substantially easier and appears in the earlier work of Lloyd. We prove the proposition below the fold.

The previous two propositions suggest an interesting analogy between large random integers and large random permutations; see this ICM article of Vershik and this non-technical article of Granville (which, incidentally, was once adapted into a play) for further discussion.

As a sample application, consider the problem of estimating the number {\pi(x,x^{1/u})} of integers up to {x} which are not divisible by any prime larger than {x^{1/u}} (i.e. they are {x^{1/u}}-smooth), where {u>0} is a fixed real number. This is essentially (modulo some inessential technicalities concerning the distinction between the intervals {[x,2x]} and {[1,x]}) the probability that {\Sigma_{PF}(N_x)} avoids {[1/u,1]}, which by the above proposition converges to the probability {\rho(u)} that {\Sigma} avoids {[1/u,1]}. Below the fold we will show that this function is given by the Dickman function, defined by setting {\rho(u)=1} for {u \leq 1} and {u\rho'(u) = -\rho(u-1)} for {u > 1}, thus recovering the classical result of Dickman that {\pi(x,x^{1/u}) = (\rho(u)+o(1))x}.
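A crude numerical check of this claim (my own illustration: the cutoff {x = 10^6}, the choice {u = 3}, and the Euler step size are all arbitrary, and the empirical count converges only slowly to {\rho(u)}):

```python
def dickman(u, h=1e-4):
    """Crude Euler integration of u rho'(u) = -rho(u-1), with rho = 1 on [0,1]."""
    if u <= 1:
        return 1.0
    lag = int(round(1 / h))              # grid steps in one unit interval
    rho = [1.0]                          # rho on the grid 1, 1+h, 1+2h, ...
    for k in range(int(round((u - 1) / h))):
        t = 1 + k * h
        past = 1.0 if k <= lag else rho[k - lag]   # this is rho(t - 1)
        rho.append(rho[-1] - h * past / t)
    return rho[-1]

def smooth_proportion(x, u):
    """Proportion of integers in [2, x] with no prime factor above x^(1/u)."""
    gpf = [0] * (x + 1)                  # gpf[n] = greatest prime factor of n
    for p in range(2, x + 1):
        if gpf[p] == 0:                  # p is prime; mark all its multiples
            for mult in range(p, x + 1, p):
                gpf[mult] = p
    bound = x ** (1.0 / u)
    return sum(1 for n in range(2, x + 1) if gpf[n] <= bound) / x

x, u = 10 ** 6, 3.0
print(smooth_proportion(x, u))           # empirical proportion
print(dickman(u))                        # rho(3) ~ 0.0486
```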

I thank Andrew Granville and Anatoly Vershik for showing me the nice link between prime factors and the Poisson-Dirichlet process. The material here is standard, and (like many of the other notes on this blog) was primarily written for my own benefit, but it may be of interest to some readers. In preparing this article I found this exposition by Kingman to be helpful.

Note: this article will emphasise the computations rather than rigour, and in particular will rely on informal use of infinitesimals to avoid dealing with stochastic calculus or other technicalities. We adopt the convention that we will neglect higher order terms in infinitesimal calculations, e.g. if {dt} is infinitesimal then we will abbreviate {dt + o(dt)} simply as {dt}.


Emmanuel Breuillard, Ben Green, Bob Guralnick, and I have just uploaded to the arXiv our joint paper “Expansion in finite simple groups of Lie type“. This long-delayed paper (announced way back in 2010!) is a followup to our previous paper in which we showed that, with one possible exception, generic pairs of elements of a simple algebraic group (over an uncountable field) generated a free group which was strongly dense in the sense that any nonabelian subgroup of this group was Zariski dense. The main result of this paper is to establish the analogous result for finite simple groups of Lie type (as defined in the previous blog post) and bounded rank, namely that almost all pairs {a,b} of elements of such a group generate a Cayley graph which is a (two-sided) expander, with expansion constant bounded below by a quantity depending on the rank of the group. (Informally, this means that the random walk generated by {a,b} spreads out in logarithmic time to be essentially uniformly distributed across the group, as opposed for instance to being largely trapped in an algebraic subgroup. Thus if generic elements did not generate a strongly dense group, one would probably expect expansion to fail.)

There are also some related results established in the paper. Firstly, as we discovered after writing our first paper, there was one class of algebraic groups for which our demonstration of strongly dense subgroups broke down, namely the {Sp_4} groups in characteristic three. In the current paper we provide in a pair of appendices a new argument that covers this case (or more generally, {Sp_4} in odd characteristic), by first reducing to the case of affine groups {k^2 \rtimes SL_2(k)} (which can be found inside {Sp_4} as a subgroup) and then using a ping-pong argument (in a p-adic metric) in the latter context.

Secondly, we show that the distinction between one-sided expansion and two-sided expansion (see this set of lecture notes of mine for definitions) is erased in the context of Cayley graphs of bounded degree, in the sense that such graphs are one-sided expanders if and only if they are two-sided expanders (perhaps with slightly different expansion constants). The argument turns out to be an elementary combinatorial one, based on the “pivot” argument discussed in these lecture notes of mine.

Now to the main result of the paper, namely the expansion of random Cayley graphs. This result had previously been established for {SL_2} by Bourgain and Gamburd, and Ben, Emmanuel and I had used the Bourgain-Gamburd method to achieve the same result for Suzuki groups. For the other finite simple groups of Lie type, expander graphs had been constructed by Kassabov, Lubotzky, and Nikolov, but they required more than two generators, which were placed deterministically rather than randomly. (Here, I am skipping over a large number of other results on expanding Cayley graphs; see this survey of Lubotzky for a fairly recent summary of developments.) The current paper also uses the “Bourgain-Gamburd machine”, as discussed in these lecture notes of mine, to demonstrate expansion. This machine shows how expansion of a Cayley graph follows from three basic ingredients, which we state informally as follows (a toy numerical illustration of expansion follows the list):

  • Non-concentration (A random walk in this graph does not concentrate in a proper subgroup);
  • Product theorem (A medium-sized subset of this group which is not trapped in a proper subgroup will expand under multiplication); and
  • Quasirandomness (The group has no small non-trivial linear representations).
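Before turning to how these ingredients are established, here is the toy numerical illustration promised above: for a random pair of generators of the small group {SL_2({\bf F}_5)} (of order 120), one can build the Cayley graph directly and inspect the spectral gap of its normalised adjacency operator. (This is purely illustrative, as the theorem concerns the regime in which the field grows with the rank bounded; the prime, seed, and right-multiplication convention are arbitrary choices of mine.)

```python
import itertools
import numpy as np

p = 5
# All of SL_2(F_5): 2x2 matrices (a,b;c,d) over F_5 with determinant 1.
elements = [g for g in itertools.product(range(p), repeat=4)
            if (g[0] * g[3] - g[1] * g[2]) % p == 1]
index = {g: i for i, g in enumerate(elements)}

def mul(g, h):
    a, b, c, d = g
    e, f, gg, hh = h
    return ((a * e + b * gg) % p, (a * f + b * hh) % p,
            (c * e + d * gg) % p, (c * f + d * hh) % p)

def inv(g):
    a, b, c, d = g                       # det = 1, so the inverse is (d,-b;-c,a)
    return (d, (-b) % p, (-c) % p, a)

rng = np.random.default_rng(3)
x, y = (elements[i] for i in rng.choice(len(elements), size=2, replace=False))
gens = [x, inv(x), y, inv(y)]

n = len(elements)
A = np.zeros((n, n))
for i, g in enumerate(elements):         # normalised adjacency operator
    for s in gens:
        A[i, index[mul(g, s)]] += 1.0 / len(gens)

eigs = np.sort(np.abs(np.linalg.eigvals(A)))
print(n, eigs[-1], eigs[-2])  # 120, 1.0, and (for typical pairs) a gap below 1
```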

Quasirandomness of arbitrary finite simple groups of Lie type was established many years ago (predating, in fact, the introduction of the term “quasirandomness” by Gowers for this property) by Landazuri-Seitz and Seitz-Zalesskii, and the product theorem was already established by Pyber-Szabo and independently by Breuillard, Green, and myself. So the main problem is to establish non-concentration: that for a random Cayley graph on a finite simple group {G} of Lie type, random walks did not concentrate in proper subgroups.

The first step was to classify the proper subgroups of {G}. Fortunately, these are all known; in particular, such groups are either contained in proper algebraic subgroups of the algebraic group containing {G} (or a bounded cover thereof) with bounded complexity, or else arise (up to conjugacy) from a version {G(F')} of the same group {G =G(F)} associated to a proper subfield {F'} of the field {F}; this follows for instance from the work of Larsen and Pink, but can also be deduced using the classification of finite simple groups, together with some work of Aschbacher, Liebeck-Seitz, and Nori. We refer to the two types of subgroups here as “structural subgroups” and “subfield subgroups”.

To preclude concentration in a structural subgroup, we use our previous result that generic elements of an algebraic group generate a strongly dense subgroup, and so do not concentrate in any algebraic subgroup. To translate this result from the algebraic group setting to the finite group setting, we need a Schwartz-Zippel lemma for finite simple groups of Lie type. This is straightforward for Chevalley groups, but turns out to be a bit trickier for the Steinberg and Suzuki-Ree groups, and we have to go back to the Chevalley-type parameterisation of such groups in terms of (twisted) one-parameter subgroups, that can be found for instance in the text of Carter; this “twisted Schwartz-Zippel lemma” may possibly have further application to analysis on twisted simple groups of Lie type. Unfortunately, the Schwartz-Zippel estimate becomes weaker in twisted settings, and particularly in the case of triality groups {{}^3 D_4(q)}, which require a somewhat ad hoc additional treatment that relies on passing to a simpler subgroup present in a triality group, namely a central product of two different {SL_2}‘s.

To rule out concentration in a conjugate of a subfield group, we repeat an argument we introduced in our Suzuki paper and pass to a matrix model and analyse the coefficients of the characteristic polynomial of words in this Cayley graph, to prevent them from concentrating in a subfield. (Note that these coefficients are conjugation-invariant.)
