
Apoorva Khare and I have updated our paper “On the sign patterns of entrywise positivity preservers in fixed dimension“, announced at this post from last month. The quantitative results are now sharpened using a new monotonicity property of ratios {s_{\lambda}(u)/s_{\mu}(u)} of Schur polynomials, namely that such ratios are monotone non-decreasing in each coordinate of {u} if {u} is in the positive orthant, and the partition {\lambda} is larger than the partition {\mu}. (This monotonicity was also independently observed by Rachid Ait-Haddou, using the theory of blossoms.) In the revised version of the paper we give two proofs of this monotonicity. The first relies on a deep positivity result of Lam, Postnikov, and Pylyavskyy, which uses a representation-theoretic positivity result of Haiman to show that the polynomial combination

\displaystyle s_{(\lambda \wedge \nu) / (\mu \wedge \rho)} s_{(\lambda \vee \nu) / (\mu \vee \rho)} - s_{\lambda/\mu} s_{\nu/\rho} \ \ \ \ \ (1)

of skew-Schur polynomials is Schur-positive for any partitions {\lambda,\mu,\nu,\rho} (using the convention that the skew-Schur polynomial {s_{\lambda/\mu}} vanishes if {\mu} is not contained in {\lambda}, and where {\lambda \wedge \nu} and {\lambda \vee \nu} denote the pointwise min and max of {\lambda} and {\nu} respectively). It is fairly easy to derive the monotonicity of {s_\lambda(u)/s_\mu(u)} from this, by using the expansion

\displaystyle s_\lambda(u_1,\dots, u_n) = \sum_k u_1^k s_{\lambda/(k)}(u_2,\dots,u_n)

of Schur polynomials into skew-Schur polynomials (as was done in this previous post).
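
As a quick numerical illustration of this monotonicity (an informal sketch, not taken from the paper; the partitions, base point, and increments below are arbitrary choices), one can evaluate the Schur polynomials via the bialternant formula {s_\lambda(u) = \det(u_i^{\lambda_j+n-j})/\det(u_i^{n-j})} and watch the ratio as one coordinate of {u} increases:

```python
# Sketch: observe numerically that s_lambda(u)/s_mu(u) is non-decreasing in
# each coordinate of u in the positive orthant when lambda contains mu.
import numpy as np

def schur(lam, u):
    """Evaluate s_lam(u) via the bialternant formula."""
    n = len(u)
    num = np.linalg.det(np.array([[ui ** (lam[j] + n - 1 - j) for j in range(n)]
                                  for ui in u]))
    den = np.linalg.det(np.array([[ui ** (n - 1 - j) for j in range(n)]
                                  for ui in u]))
    return num / den

lam, mu = (3, 2, 0), (2, 1, 0)        # lambda is larger than mu
u = np.array([1.3, 0.7, 0.2])         # a point in the positive orthant
for h in (0.0, 0.1, 0.2, 0.3):
    v = u.copy(); v[1] += h           # increase the second coordinate
    print(h, schur(lam, v) / schur(mu, v))   # values should be non-decreasing
```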

The second proof of monotonicity avoids representation theory by a more elementary argument establishing the weaker claim that the above expression (1) is non-negative on the positive orthant. In fact we prove a more general determinantal log-supermodularity claim which may be of independent interest:

Theorem 1 Let {A} be any {n \times n} totally positive matrix (thus, every minor has a non-negative determinant). Then for any {k}-tuples {I_1,I_2,J_1,J_2} of increasing elements of {\{1,\dots,n\}}, one has

\displaystyle \det( A_{I_1 \wedge I_2, J_1 \wedge J_2} ) \det( A_{I_1 \vee I_2, J_1 \vee J_2} ) - \det(A_{I_1,J_1}) \det(A_{I_2,J_2}) \geq 0

where {A_{I,J}} denotes the {k \times k} minor formed from the rows in {I} and columns in {J}.

For instance, if {A} is the matrix

\displaystyle A = \begin{pmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{pmatrix}

for some real numbers {a,\dots,p}, one has

\displaystyle a h - de\geq 0

(corresponding to the case {k=1}, {I_1 = (1), I_2 = (2), J_1 = (4), J_2 = (1)}), or

\displaystyle \det \begin{pmatrix} a & c \\ i & k \end{pmatrix} \det \begin{pmatrix} f & h \\ n & p \end{pmatrix} - \det \begin{pmatrix} e & h \\ i & l \end{pmatrix} \det \begin{pmatrix} b & c \\ n & o \end{pmatrix} \geq 0

(corresponding to the case {k=2}, {I_1 = (2,3)}, {I_2 = (1,4)}, {J_1 = (1,4)}, {J_2 = (2,3)}). It turns out that this claim can be proven relatively easily by an induction argument, relying on the Dodgson and Karlin identities from this previous post; the difficulties are largely notational in nature. Combining this result with the Jacobi-Trudi identity for skew-Schur polynomials (discussed in this previous post) gives the non-negativity of (1); it can also be used to directly establish the monotonicity of ratios {s_\lambda(u)/s_\mu(u)} by applying the theorem to a generalised Vandermonde matrix.
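
As an illustrative sanity check of Theorem 1 (a quick numerical sketch, not part of the proof), one can test the inequality on the Pascal matrix {\binom{i+j}{i}}, a standard example of a totally positive matrix, running over all pairs of {2}-tuples of rows and columns:

```python
# Sketch: verify the inequality of Theorem 1 for all 2x2 minors of the
# 4x4 Pascal matrix A[i][j] = C(i+j, i), a standard totally positive matrix.
import itertools, math
import numpy as np

n, k = 4, 2
A = np.array([[math.comb(i + j, i) for j in range(n)] for i in range(n)], dtype=float)

def minor(I, J):
    return np.linalg.det(A[np.ix_(I, J)])

for I1, I2, J1, J2 in itertools.product(itertools.combinations(range(n), k), repeat=4):
    gap = (minor(np.minimum(I1, I2), np.minimum(J1, J2))
           * minor(np.maximum(I1, I2), np.maximum(J1, J2))
           - minor(I1, J1) * minor(I2, J2))
    assert gap >= -1e-9, (I1, I2, J1, J2)   # the inequality of Theorem 1
print("inequality verified for all 2x2 minors")
```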

(Log-supermodularity also arises as the natural hypothesis for the FKG inequality, though I do not know of any interesting application of the FKG inequality in this current setting.)

Suppose we have an {n \times n} matrix {M} that is expressed in block-matrix form as

\displaystyle  M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}

where {A} is an {(n-k) \times (n-k)} matrix, {B} is an {(n-k) \times k} matrix, {C} is a {k \times (n-k)} matrix, and {D} is a {k \times k} matrix for some {1 < k < n}. If {A} is invertible, we can use the technique of Schur complementation to express the inverse of {M} (if it exists) in terms of the inverse of {A} (and the other components {B,C,D}, of course). Indeed, to solve the equation

\displaystyle  M \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix},

where {x, a} are {(n-k) \times 1} column vectors and {y,b} are {k \times 1} column vectors, we can expand this out as a system

\displaystyle  Ax + By = a

\displaystyle  Cx + Dy = b.

Using the invertibility of {A}, we can write the first equation as

\displaystyle  x = A^{-1} a - A^{-1} B y \ \ \ \ \ (1)

and substituting this into the second equation yields

\displaystyle  (D - C A^{-1} B) y = b - C A^{-1} a

and thus (assuming that {D - CA^{-1} B} is invertible)

\displaystyle  y = - (D - CA^{-1} B)^{-1} CA^{-1} a + (D - CA^{-1} B)^{-1} b

and then inserting this back into (1) gives

\displaystyle  x = (A^{-1} + A^{-1} B (D - CA^{-1} B)^{-1} C A^{-1}) a - A^{-1} B (D - CA^{-1} B)^{-1} b.

Comparing this with

\displaystyle  \begin{pmatrix} x \\ y \end{pmatrix} = M^{-1} \begin{pmatrix} a \\ b \end{pmatrix},

we have managed to express the inverse of {M} as

\displaystyle  M^{-1} =

\displaystyle  \begin{pmatrix} A^{-1} + A^{-1} B (D - CA^{-1} B)^{-1} C A^{-1} & - A^{-1} B (D - CA^{-1} B)^{-1} \\ - (D - CA^{-1} B)^{-1} CA^{-1} & (D - CA^{-1} B)^{-1} \end{pmatrix}. \ \ \ \ \ (2)
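
As a quick numerical check of (2) (an illustrative sketch; the dimensions and the random matrix are arbitrary), one can compare the block formula against a direct inversion:

```python
# Sketch: check the block-inversion formula (2) on a random matrix.
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
M = rng.standard_normal((n, n))
A, B = M[:n-k, :n-k], M[:n-k, n-k:]
C, D = M[n-k:, :n-k], M[n-k:, n-k:]

Ai = np.linalg.inv(A)
Si = np.linalg.inv(D - C @ Ai @ B)       # inverse of the Schur complement
blocks = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                   [-Si @ C @ Ai,              Si]])
assert np.allclose(blocks, np.linalg.inv(M))   # agrees with formula (2)
```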

One can consider the inverse problem: given the inverse {M^{-1}} of {M}, does one have a nice formula for the inverse {A^{-1}} of the minor {A}? Trying to recover this directly from (2) looks somewhat messy. However, one can proceed as follows. Let {U} denote the {n \times k} matrix

\displaystyle  U := \begin{pmatrix} 0 \\ I_k \end{pmatrix}

(with {I_k} the {k \times k} identity matrix), and let {V} be its transpose:

\displaystyle  V := \begin{pmatrix} 0 & I_k \end{pmatrix}.

Then for any scalar {t} (which we identify with {t} times the identity matrix), one has

\displaystyle  M + UtV = \begin{pmatrix} A & B \\ C & D+t \end{pmatrix},

and hence by (2)

\displaystyle  (M+UtV)^{-1} =

\displaystyle \begin{pmatrix} A^{-1} + A^{-1} B (D + t - CA^{-1} B)^{-1} C A^{-1} & - A^{-1} B (D + t - CA^{-1} B)^{-1} \\ - (D + t - CA^{-1} B)^{-1} CA^{-1} & (D + t - CA^{-1} B)^{-1} \end{pmatrix},

noting that the inverses here will exist for {t} large enough. Taking limits as {t \rightarrow \infty}, we conclude that

\displaystyle  \lim_{t \rightarrow \infty} (M+UtV)^{-1} = \begin{pmatrix} A^{-1} & 0 \\ 0 & 0 \end{pmatrix}.

On the other hand, by the Woodbury matrix identity (discussed in this previous blog post), we have

\displaystyle  (M+UtV)^{-1} = M^{-1} - M^{-1} U (t^{-1} + V M^{-1} U)^{-1} V M^{-1}

and hence on taking limits and comparing with the preceding identity, one has

\displaystyle  \begin{pmatrix} A^{-1} & 0 \\ 0 & 0 \end{pmatrix} = M^{-1} - M^{-1} U (V M^{-1} U)^{-1} V M^{-1}.

This achieves the aim of expressing the inverse {A^{-1}} of the minor in terms of the inverse of the full matrix. Taking traces and rearranging, we conclude in particular that

\displaystyle  \mathrm{tr} A^{-1} = \mathrm{tr} M^{-1} - \mathrm{tr} (V M^{-2} U) (V M^{-1} U)^{-1}. \ \ \ \ \ (3)

In the {k=1} case, this can be simplified to

\displaystyle  \mathrm{tr} A^{-1} = \mathrm{tr} M^{-1} - \frac{e_n^T M^{-2} e_n}{e_n^T M^{-1} e_n} \ \ \ \ \ (4)

where {e_n} is the {n^{th}} basis column vector.
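
Both the matrix identity and the trace identities (3), (4) are easy to test numerically; the following is an illustrative sketch on a random matrix (for which the various inverses exist almost surely):

```python
# Sketch: check the minor-inversion identity and the trace identities (3), (4).
import numpy as np

rng = np.random.default_rng(1)
n, k = 7, 3
M = rng.standard_normal((n, n))
Minv = np.linalg.inv(M)

U = np.vstack([np.zeros((n - k, k)), np.eye(k)])
V = U.T
A = M[:n-k, :n-k]
# the minor-inversion identity: the top left block of the right side is A^{-1}
rhs = Minv - Minv @ U @ np.linalg.inv(V @ Minv @ U) @ V @ Minv
assert np.allclose(rhs[:n-k, :n-k], np.linalg.inv(A))
# the trace identity (3)
assert np.isclose(np.trace(np.linalg.inv(A)),
                  np.trace(Minv) - np.trace(V @ Minv @ Minv @ U
                                            @ np.linalg.inv(V @ Minv @ U)))
# the k=1 case (4), with e the n-th basis vector
e = np.zeros(n); e[-1] = 1.0
assert np.isclose(np.trace(np.linalg.inv(M[:n-1, :n-1])),
                  np.trace(Minv) - (e @ Minv @ Minv @ e) / (e @ Minv @ e))
```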

We can apply this identity to understand how the spectrum of an {n \times n} random matrix {M} relates to that of its top left {(n-1) \times (n-1)} minor {A}. Subtracting any complex multiple {z} of the identity from {M} (and hence from {A}), we can relate the Stieltjes transform {s_M(z) := \frac{1}{n} \mathrm{tr}(M-z)^{-1}} of {M} with the Stieltjes transform {s_A(z) := \frac{1}{n-1} \mathrm{tr}(A-z)^{-1}} of {A}:

\displaystyle  s_A(z) = \frac{n}{n-1} s_M(z) - \frac{1}{n-1} \frac{e_n^T (M-z)^{-2} e_n}{e_n^T (M-z)^{-1} e_n} \ \ \ \ \ (5)

At this point we begin to proceed informally. Assume for sake of argument that the random matrix {M} is Hermitian, with distribution that is invariant under conjugation by the unitary group {U(n)}; for instance, {M} could be drawn from the Gaussian Unitary Ensemble (GUE), or alternatively {M} could be of the form {M = U D U^*} for some real diagonal matrix {D} and {U} a unitary matrix drawn randomly from {U(n)} using Haar measure. To fix normalisations we will assume that the eigenvalues of {M} are typically of size {O(1)}. Then {A} is also Hermitian and {U(n)}-invariant. Furthermore, the law of {e_n^T (M-z)^{-1} e_n} will be the same as the law of {u^* (M-z)^{-1} u}, where {u} is now drawn uniformly from the unit sphere (independently of {M}). Diagonalising {M} into eigenvalues {\lambda_j} and eigenvectors {v_j}, we have

\displaystyle u^* (M-z)^{-1} u = \sum_{j=1}^n \frac{|u^* v_j|^2}{\lambda_j - z}.

One can think of {u} as a random (complex) Gaussian vector, divided by the magnitude of that vector (which, by the Chernoff inequality, will concentrate to {\sqrt{n}}). Thus the coefficients {u^* v_j} with respect to the orthonormal basis {v_1,\dots,v_n} can be thought of as independent (complex) Gaussian variables, divided by that magnitude. Using this and the Chernoff inequality again, we see (at least for {z} at distance {\sim 1} from the real axis) that one has the concentration of measure

\displaystyle  u^* (M-z)^{-1} u \approx \frac{1}{n} \sum_{j=1}^n \frac{1}{\lambda_j - z}

and thus

\displaystyle  e_n^T (M-z)^{-1} e_n \approx \frac{1}{n} \mathrm{tr} (M-z)^{-1} = s_M(z)

(that is to say, the diagonal entries of {(M-z)^{-1}} are roughly constant). Similarly we have

\displaystyle  e_n^T (M-z)^{-2} e_n \approx \frac{1}{n} \mathrm{tr} (M-z)^{-2} = \frac{d}{dz} s_M(z).

Inserting this into (5) and discarding terms of size {O(1/n^2)}, we thus conclude the approximate relationship

\displaystyle  s_A(z) \approx s_M(z) + \frac{1}{n} ( s_M(z) - s_M(z)^{-1} \frac{d}{dz} s_M(z) ).

This can be viewed as a difference equation for the Stieltjes transform of top left minors of {M}. Iterating this equation, and formally replacing the difference equation by a differential equation in the large {n} limit, we see that when {n} is large and {k \approx e^{-t} n} for some {t \geq 0}, one expects the top left {k \times k} minor {A_k} of {M} to have Stieltjes transform

\displaystyle  s_{A_k}(z) \approx s( t, z ) \ \ \ \ \ (6)

where {s(t,z)} solves the Burgers-type equation

\displaystyle  \partial_t s(t,z) = s(t,z) - s(t,z)^{-1} \frac{d}{dz} s(t,z) \ \ \ \ \ (7)

with initial data {s(0,z) = s_M(z)}.

Example 1 If {M} is a constant multiple {M = cI_n} of the identity, then {s_M(z) = \frac{1}{c-z}}. One checks that {s(t,z) = \frac{1}{c-z}} is a steady state solution to (7), which is unsurprising given that all minors of {M} are also {c} times the identity.

Example 2 If {M} is GUE normalised so that each entry has variance {\sigma^2/n}, then by the semi-circular law (see previous notes) one has {s_M(z) \approx \frac{-z + \sqrt{z^2-4\sigma^2}}{2\sigma^2} = -\frac{2}{z + \sqrt{z^2-4\sigma^2}}} (using an appropriate branch of the square root). One can then verify the self-similar solution

\displaystyle  s(t,z) = \frac{-z + \sqrt{z^2 - 4\sigma^2 e^{-t}}}{2\sigma^2 e^{-t}} = -\frac{2}{z + \sqrt{z^2 - 4\sigma^2 e^{-t}}}

to (7), which is consistent with the fact that a top {k \times k} minor of {M} also has the law of GUE, with each entry having variance {\sigma^2 / n \approx \sigma^2 e^{-t} / k} when {k \approx e^{-t} n}.
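
This self-similarity is easy to probe by simulation. The following Monte Carlo sketch (sizes and the test point {z} are illustrative choices; at this test point the principal branch of the square root is the appropriate one) compares the empirical Stieltjes transform of a top left minor of a GUE matrix with the self-similar solution:

```python
# Sketch: compare the empirical Stieltjes transform of a top k x k minor of
# GUE (entry variance sigma^2/n) with the self-similar solution of Example 2,
# where e^{-t} = alpha = k/n.
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 1000, 1.0
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = sigma * (G + G.conj().T) / np.sqrt(4 * n)    # GUE, entry variance sigma^2/n

k = n // 2
alpha = k / n                                    # alpha = e^{-t}
A = M[:k, :k]
z = 0.5 + 0.5j                                   # test point off the real axis
emp = np.mean(1.0 / (np.linalg.eigvalsh(A) - z))          # s_{A_k}(z)
pred = (-z + np.sqrt(z**2 - 4 * sigma**2 * alpha)) / (2 * sigma**2 * alpha)
print(emp, pred)   # should agree up to fluctuations of size O(1/n)
```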

One can justify the approximation (6) given a sufficiently good well-posedness theory for the equation (7). We will not do so here, but will note that (as with the classical inviscid Burgers equation) the equation can be solved exactly (formally, at least) by the method of characteristics. For any initial position {z_0}, we consider the characteristic flow {t \mapsto z(t)} formed by solving the ODE

\displaystyle  \frac{d}{dt} z(t) = s(t,z(t))^{-1} \ \ \ \ \ (8)

with initial data {z(0) = z_0}, ignoring for this discussion the problems of existence and uniqueness. Then from the chain rule, the equation (7) implies that

\displaystyle  \frac{d}{dt} s( t, z(t) ) = s(t,z(t))

and thus {s(t,z(t)) = e^t s(0,z_0)}. Inserting this back into (8) we see that

\displaystyle  z(t) = z_0 + s(0,z_0)^{-1} (1-e^{-t})

and thus (7) may be solved implicitly via the equation

\displaystyle  s(t, z_0 + s(0,z_0)^{-1} (1-e^{-t}) ) = e^t s(0, z_0) \ \ \ \ \ (9)

for all {t} and {z_0}.

Remark 3 In practice, the equation (9) may stop working when {z_0 + s(0,z_0)^{-1} (1-e^{-t})} crosses the real axis, as (7) does not necessarily hold in this region. It is a cute exercise (ultimately coming from the Cauchy-Schwarz inequality) to show that this crossing always happens, for instance if {z_0} has positive imaginary part then {z_0 + s(0,z_0)^{-1}} necessarily has negative or zero imaginary part.

Example 4 Suppose we have {s(0,z) = \frac{1}{c-z}} as in Example 1. Then (9) becomes

\displaystyle  s( t, z_0 + (c-z_0) (1-e^{-t}) ) = \frac{e^t}{c-z_0}

for any {t,z_0}, which after making the change of variables {z = z_0 + (c-z_0) (1-e^{-t}) = c - e^{-t} (c - z_0)} becomes

\displaystyle  s(t, z ) = \frac{1}{c-z}

as in Example 1.

Example 5 Suppose we have

\displaystyle  s(0,z) = \frac{-z + \sqrt{z^2-4\sigma^2}}{2\sigma^2} = -\frac{2}{z + \sqrt{z^2-4\sigma^2}}.

as in Example 2. Then (9) becomes

\displaystyle  s(t, z_0 - \frac{z_0 + \sqrt{z_0^2-4\sigma^2}}{2} (1-e^{-t}) ) = e^t \frac{-z_0 + \sqrt{z_0^2-4\sigma^2}}{2\sigma^2}.

If we write

\displaystyle  z := z_0 - \frac{z_0 + \sqrt{z_0^2-4\sigma^2}}{2} (1-e^{-t})

\displaystyle  = \frac{(1+e^{-t}) z_0 - (1-e^{-t}) \sqrt{z_0^2-4\sigma^2}}{2}

one can calculate that

\displaystyle  z^2 - 4 \sigma^2 e^{-t} = (\frac{(1-e^{-t}) z_0 - (1+e^{-t}) \sqrt{z_0^2-4\sigma^2}}{2})^2

and hence

\displaystyle  \frac{-z + \sqrt{z^2 - 4\sigma^2 e^{-t}}}{2\sigma^2 e^{-t}} = e^t \frac{-z_0 + \sqrt{z_0^2-4\sigma^2}}{2\sigma^2}

which gives

\displaystyle  s(t,z) = \frac{-z + \sqrt{z^2 - 4\sigma^2 e^{-t}}}{2\sigma^2 e^{-t}}. \ \ \ \ \ (10)
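
The key algebraic step here, that {z^2 - 4\sigma^2 e^{-t}} is a perfect square under this change of variables, can be verified symbolically; the following sympy sketch (with {q} standing for {e^{-t}}) checks it:

```python
# Sketch: verify symbolically that z^2 - 4 sigma^2 e^{-t} equals the square of
# ((1-e^{-t}) z_0 - (1+e^{-t}) sqrt(z_0^2 - 4 sigma^2))/2, as claimed above.
import sympy as sp

z0, sigma, q = sp.symbols('z0 sigma q', positive=True)   # q stands for e^{-t}
w = sp.sqrt(z0**2 - 4 * sigma**2)
z = ((1 + q) * z0 - (1 - q) * w) / 2
lhs = z**2 - 4 * sigma**2 * q
rhs = (((1 - q) * z0 - (1 + q) * w) / 2)**2
print(sp.simplify(sp.expand(lhs - rhs)))   # prints 0
```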

One can recover the spectral measure {\mu} from the Stieltjes transform {s(z)} as the weak limit of {x \mapsto \frac{1}{\pi} \mathrm{Im} s(x+i\varepsilon)} as {\varepsilon \rightarrow 0}; we write this informally as

\displaystyle  d\mu(x) = \frac{1}{\pi} \mathrm{Im} s(x+i0^+)\ dx.

In this informal notation, we have for instance that

\displaystyle  \delta_c(x) = \frac{1}{\pi} \mathrm{Im} \frac{1}{c-x-i0^+}\ dx

which can be interpreted as the fact that the Cauchy distributions {\frac{1}{\pi} \frac{\varepsilon}{(c-x)^2+\varepsilon^2}} converge weakly to the Dirac mass at {c} as {\varepsilon \rightarrow 0}. Similarly, the spectral measure associated to (10) is the semicircular measure {\frac{1}{2\pi \sigma^2 e^{-t}} (4 \sigma^2 e^{-t}-x^2)_+^{1/2}}.

If we let {\mu_t} be the spectral measure associated to {s(t,\cdot)}, then the curve {e^{-t} \mapsto \mu_t} from {(0,1]} to the space of measures is the high-dimensional limit {n \rightarrow \infty} of a Gelfand-Tsetlin pattern (discussed in this previous post), if the pattern is randomly generated amongst all matrices {M} with spectrum asymptotic to {\mu_0} as {n \rightarrow \infty}. For instance, if {\mu_0 = \delta_c}, then the curve is {\alpha \mapsto \delta_c}, corresponding to a pattern that is entirely filled with {c}'s. If instead {\mu_0 = \frac{1}{2\pi \sigma^2} (4\sigma^2-x^2)_+^{1/2}} is a semicircular distribution, then the pattern is

\displaystyle  \alpha \mapsto \frac{1}{2\pi \sigma^2 \alpha} (4\sigma^2 \alpha -x^2)_+^{1/2},

thus at height {\alpha} from the top, the pattern is semicircular on the interval {[-2\sigma \sqrt{\alpha}, 2\sigma \sqrt{\alpha}]}. The interlacing property of Gelfand-Tsetlin patterns translates to the claim that {\alpha \mu_\alpha(-\infty,\lambda)} (resp. {\alpha \mu_\alpha(\lambda,\infty)}) is non-decreasing (resp. non-increasing) in {\alpha} for any fixed {\lambda}. In principle one should be able to establish these monotonicity claims directly from the PDE (7) or from the implicit solution (9), but it was not clear to me how to do so.

An interesting example of such a limiting Gelfand-Tsetlin pattern occurs when {\mu_0 = \frac{1}{2} \delta_{-1} + \frac{1}{2} \delta_1}, which corresponds to {M} being {2P-I}, where {P} is an orthogonal projection to a random {n/2}-dimensional subspace of {{\bf C}^n}. Here we have

\displaystyle  s(0,z) = \frac{1}{2} \frac{1}{-1-z} + \frac{1}{2} \frac{1}{1-z} = \frac{z}{1-z^2}

and so (9) in this case becomes

\displaystyle  s(t, z_0 + \frac{1-z_0^2}{z_0} (1-e^{-t}) ) = \frac{e^t z_0}{1-z_0^2}

A tedious calculation then gives the solution

\displaystyle  s(t,z) = \frac{(2e^{-t}-1)z + \sqrt{z^2 - 4e^{-t}(1-e^{-t})}}{2e^{-t}(1-z^2)}. \ \ \ \ \ (11)

For {\alpha = e^{-t} > 1/2}, there are simple poles at {z=-1,+1}, and the associated measure is

\displaystyle  \mu_\alpha = \frac{2\alpha-1}{2\alpha} \delta_{-1} + \frac{2\alpha-1}{2\alpha} \delta_1 + \frac{1}{2\pi \alpha(1-x^2)} (4\alpha(1-\alpha)-x^2)_+^{1/2}\ dx.

This reflects the interlacing property, which forces {\frac{2\alpha-1}{2\alpha} \alpha n} of the {\alpha n} eigenvalues of the {\alpha n \times \alpha n} minor to be equal to {-1} (resp. {+1}). For {\alpha = e^{-t} \leq 1/2}, the poles disappear and one just has

\displaystyle  \mu_\alpha = \frac{1}{2\pi \alpha(1-x^2)} (4\alpha(1-\alpha)-x^2)_+^{1/2}\ dx.

For {\alpha=1/2}, one has an inverse semicircle distribution

\displaystyle  \mu_{1/2} = \frac{1}{\pi} (1-x^2)_+^{-1/2}\ dx.

There is presumably a direct geometric explanation of this fact (basically describing the singular values of the product of two random orthogonal projections to half-dimensional subspaces of {{\bf C}^n}), but I do not know of one off-hand.
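
One can at least confirm the prediction numerically; here is a Monte Carlo sketch (matrix size and binning are arbitrary illustrative choices) comparing the eigenvalues of the top left minor of {2P-I} with the density {\frac{1}{\pi}(1-x^2)_+^{-1/2}}:

```python
# Sketch: for M = 2P - I with P a projection onto a random (n/2)-dimensional
# subspace of C^n, compare the eigenvalue histogram of the top (n/2) x (n/2)
# minor with the predicted density mu_{1/2}(x) = (1/pi) (1-x^2)^{-1/2}.
import numpy as np

rng = np.random.default_rng(3)
n = 800
G = rng.standard_normal((n, n // 2)) + 1j * rng.standard_normal((n, n // 2))
Q, _ = np.linalg.qr(G)                  # orthonormal basis of a random subspace
M = 2 * Q @ Q.conj().T - np.eye(n)      # M = 2P - I
eigs = np.linalg.eigvalsh(M[:n // 2, :n // 2])

counts, edges = np.histogram(eigs, bins=20, range=(-0.99, 0.99))
mids = (edges[:-1] + edges[1:]) / 2
emp = counts / (len(eigs) * (edges[1] - edges[0]))   # empirical density
pred = 1.0 / (np.pi * np.sqrt(1 - mids**2))
print(np.c_[mids, emp, pred])           # empirical vs predicted density
```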

The evolution of {s(t,z)} can also be understood using the {R}-transform and {S}-transform from free probability. Formally, let {z(t,s)} be the inverse of {s(t,z)}, thus

\displaystyle  s(t,z(t,s)) = s

for all {t,s}, and then define the {R}-transform

\displaystyle  R(t,s) := z(t,-s) - \frac{1}{s}.

The equation (9) may be rewritten as

\displaystyle  z( t, e^t s ) = z(0,s) + s^{-1} (1-e^{-t})

and hence

\displaystyle  R(t, -e^t s) = R(0, -s)

or equivalently

\displaystyle  R(t,s) = R(0, e^{-t} s). \ \ \ \ \ (12)

See these previous notes for a discussion of free probability topics such as the {R}-transform.

Example 6 If {s(t,z) = \frac{1}{c-z}} then the {R} transform is {R(t,s) = c}.

Example 7 If {s(t,z)} is given by (10), then the {R} transform is

\displaystyle  R(t,s) = \sigma^2 e^{-t} s.

Example 8 If {s(t,z)} is given by (11), then the {R} transform is

\displaystyle  R(t,s) = \frac{-1 + \sqrt{1 + 4 s^2 e^{-2t}}}{2 s e^{-t}}.
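
Example 7 (and the relation (12) in that case) can be checked symbolically: from (10), {s = s(t,z)} solves the quadratic {\sigma^2 e^{-t} s^2 + zs + 1 = 0}, so the inverse function is {z(t,s) = -(1+\sigma^2 e^{-t} s^2)/s}. A short sympy sketch (with {q} standing for {e^{-t}}):

```python
# Sketch: from z(t,s) = -(1 + q sigma^2 s^2)/s (inverting (10)), verify that
# R(t,s) = z(t,-s) - 1/s = sigma^2 q s, which also exhibits R(t,s) = R(0, q s).
import sympy as sp

s, sigma, q = sp.symbols('s sigma q', positive=True)   # q stands for e^{-t}
z = -(1 + q * sigma**2 * s**2) / s
R = z.subs(s, -s) - 1 / s
print(sp.simplify(R))   # prints q*s*sigma**2, i.e. sigma^2 e^{-t} s
```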

This simple relationship (12) is essentially due to Nica and Speicher (thanks to Dima Shlyakhtenko for this reference). It has the remarkable consequence that when {\alpha = 1/m} is the reciprocal of a natural number {m}, then {\mu_{1/m}} is the free arithmetic mean of {m} copies of {\mu}, that is to say {\mu_{1/m}} is the free convolution {\mu \boxplus \dots \boxplus \mu} of {m} copies of {\mu}, pushed forward by the map {\lambda \mapsto \lambda/m}. In terms of random matrices, this is asserting that the top {n/m \times n/m} minor of a random matrix {M} has spectral measure approximately equal to that of an arithmetic mean {\frac{1}{m} (M_1 + \dots + M_m)} of {m} independent copies of {M}, so that the process of taking top left minors is in some sense a continuous analogue of the process of taking freely independent arithmetic means. There ought to be a geometric proof of this assertion, but I do not know of one. In the limit {m \rightarrow \infty} (or {\alpha \rightarrow 0}), the {R}-transform becomes linear and the spectral measure becomes semicircular, which is of course consistent with the free central limit theorem.

In a similar vein, if one defines the function

\displaystyle  \omega(t,z) := \alpha \int_{\bf R} \frac{zx}{1-zx}\ d\mu_\alpha(x) = e^{-t} (- 1 - z^{-1} s(t, z^{-1}))

and inverts it to obtain a function {z(t,\omega)} with

\displaystyle  \omega(t, z(t,\omega)) = \omega

for all {t, \omega}, then the {S}-transform {S(t,\omega)} is defined by

\displaystyle  S(t,\omega) := \frac{1+\omega}{\omega} z(t,\omega).

Writing

\displaystyle  s(t,z) = - z^{-1} ( 1 + e^t \omega(t, z^{-1}) )

for any {t}, {z}, we have

\displaystyle  z_0 + s(0,z_0)^{-1} (1-e^{-t}) = z_0 \frac{\omega(0,z_0^{-1})+e^{-t}}{\omega(0,z_0^{-1})+1}

and so (9) becomes

\displaystyle  - z_0^{-1} \frac{\omega(0,z_0^{-1})+1}{\omega(0,z_0^{-1})+e^{-t}} (1 + e^{t} \omega(t, z_0^{-1} \frac{\omega(0,z_0^{-1})+1}{\omega(0,z_0^{-1})+e^{-t}}))

\displaystyle = - e^t z_0^{-1} (1 + \omega(0, z_0^{-1}))

which simplifies to

\displaystyle  \omega(t, z_0^{-1} \frac{\omega(0,z_0^{-1})+1}{\omega(0,z_0^{-1})+e^{-t}}) = \omega(0, z_0^{-1});

replacing {z_0} by {z(0,\omega)^{-1}} we obtain

\displaystyle  \omega(t, z(0,\omega) \frac{\omega+1}{\omega+e^{-t}}) = \omega

and thus

\displaystyle  z(0,\omega)\frac{\omega+1}{\omega+e^{-t}} = z(t, \omega)

and hence

\displaystyle  S(0, \omega) = \frac{\omega+e^{-t}}{\omega+1} S(t, \omega).

One can compute {\frac{\omega+1}{\omega+e^{-t}}} to be the {S}-transform of the measure {(1-\alpha) \delta_0 + \alpha \delta_1}; from the link between {S}-transforms and free products (see e.g. these notes of Guionnet), we conclude that {(1-\alpha)\delta_0 + \alpha \mu_\alpha} is the free product of {\mu_1} and {(1-\alpha) \delta_0 + \alpha \delta_1}. This is consistent with the random matrix theory interpretation, since {(1-\alpha)\delta_0 + \alpha \mu_\alpha} is also the spectral measure of {PMP}, where {P} is the orthogonal projection to the span of the first {\alpha n} basis elements, so in particular {P} has spectral measure {(1-\alpha) \delta_0 + \alpha \delta_1}. If {M} is unitarily invariant then (by a fundamental result of Voiculescu) it is asymptotically freely independent of {P}, so the spectral measure of {PMP = P^{1/2} M P^{1/2}} is asymptotically the free product of that of {M} and of {P}.

Szemerédi’s theorem asserts that all subsets of the natural numbers of positive density contain arbitrarily long arithmetic progressions.  Roth’s theorem is the special case when one considers arithmetic progressions of length three.  Both theorems have many important proofs using tools from additive combinatorics, (higher order) Fourier analysis, (hyper) graph regularity theory, and ergodic theory.  However, the original proof by Endre Szemerédi, while extremely intricate, was purely combinatorial (and in particular “elementary”) and almost entirely self-contained, except for an invocation of the van der Waerden theorem.  It is also notable for introducing a prototype of what is now known as the Szemerédi regularity lemma.

Back in 2005, I rewrote Szemerédi’s original proof in order to understand it better; however, my rewrite ended up being about the same length as the original argument and was probably only usable by myself.  In 2012, after Szemerédi was awarded the Abel prize, I revisited this argument with the intention of trying to write up a more readable version of the proof, but ended up just presenting some ingredients of the argument in a blog post, rather than trying to rewrite the whole thing.  In that post, I suspected that the cleanest way to write up the argument would be through the language of nonstandard analysis (perhaps in an iterated hyperextension that could handle various hierarchies of infinitesimals), but was unable to actually achieve any substantial simplifications by passing to the nonstandard world.

A few weeks ago, I participated in a week-long workshop at the American Institute of Mathematics on “Nonstandard methods in combinatorial number theory”, and spent some time in a working group with Shabnam Akhtari, Irfan Alam, Renling Jin, Steven Leth, Karl Mahlburg, Paul Potgieter, and Henry Towsner to try to obtain a manageable nonstandard version of Szemerédi’s original proof.  We didn’t end up being able to do so – in fact there are now signs that perhaps nonstandard analysis is not the optimal framework in which to place this argument – but we did at least clarify the existing standard argument, to the point that I was able to go back to my original rewrite of the proof and present it in a more civilised form, which I am now uploading here as an unpublished preprint.   There are now a number of simplifications to the proof.  Firstly, one no longer needs the full strength of the regularity lemma; only the simpler “weak” regularity lemma of Frieze and Kannan is required.  Secondly, the proof has been “factored” into a number of stand-alone propositions of independent interest, in particular involving just (families of) one-dimensional arithmetic progressions rather than the complicated-looking multidimensional arithmetic progressions that occur so frequently in the original argument of Szemerédi.  Finally, the delicate manipulations of densities and epsilons via double counting arguments in Szemerédi’s original paper have been abstracted into a certain key property of families of arithmetic progressions that I call the “double counting property”.

The factoring mentioned above is particularly simple in the case of proving Roth’s theorem, which is now presented separately in the above writeup.  Roth’s theorem seeks to locate a length three progression {(P(1),P(2),P(3)) = (a, a+r, a+2r)} in which all three elements lie in a single set.  This will be deduced from an easier variant of the theorem in which one locates (a family of) length three progressions in which just the first two elements {P(1), P(2)} of the progression lie in a good set (and some other properties of the family are also required).  This is in turn derived from an even easier variant in which now just the first element of the progression is required to be in the good set.

More specifically, Roth’s theorem is now deduced from

Theorem 1.5.  Let {L} be a natural number, and let {S} be a set of integers of upper density at least {1-\frac{1}{10L}}.  Then, whenever {S} is partitioned into finitely many colour classes, there exists a colour class {A} and a family {(P_l(1),P_l(2),P_l(3))_{l=1}^L} of 3-term arithmetic progressions with the following properties:

  1. For each {l}, {P_l(1)} and {P_l(2)} lie in {A}.
  2. For each {l}, {P_l(3)} lies in {S}.
  3. The {P_l(3)} for {l=1,\dots,L} are in arithmetic progression.

The situation in this theorem is depicted by the following diagram, in which elements of A are in blue and elements of S are in grey:

Theorem 1.5 is deduced in turn from the following easier variant:

Theorem 1.6.  Let {L} be a natural number, and let {S} be a set of integers of upper density at least {1-\frac{1}{10L}}.  Then, whenever {S} is partitioned into finitely many colour classes, there exists a colour class {A} and a family {(P_l(1),P_l(2),P_l(3))_{l=1}^L} of 3-term arithmetic progressions with the following properties:

  1. For each {l}, {P_l(1)} lies in {A}.
  2. For each {l}, {P_l(2)} and {P_l(3)} lie in {S}.
  3. The {P_l(2)} for {l=1,\dots,L} are in arithmetic progression.

The situation here is described by the figure below.

Theorem 1.6 is easy to prove.  To derive Theorem 1.5 from Theorem 1.6, or to derive Roth’s theorem from Theorem 1.5, one uses double counting arguments, van der Waerden’s theorem, and the weak regularity lemma, largely as described in this previous blog post; see the writeup for the full details.  (I would be interested in seeing a shorter proof of Theorem 1.5, though, that did not go through these arguments and did not use the more powerful theorems of Roth or Szemerédi.)

 

Fix a non-negative integer {k}. Define a (weak) integer partition of length {k} to be a tuple {\lambda = (\lambda_1,\dots,\lambda_k)} of non-increasing non-negative integers {\lambda_1 \geq \dots \geq \lambda_k \geq 0}. (Here our partitions are “weak” in the sense that we allow some parts of the partition to be zero. Henceforth we will omit the modifier “weak”, as we will not need to consider the more usual notion of “strong” partitions.) To each such partition {\lambda}, one can associate a Young diagram consisting of {k} left-justified rows of boxes, with the {i^{th}} row containing {\lambda_i} boxes. A semi-standard Young tableau (or Young tableau for short) {T} of shape {\lambda} is a filling of these boxes by integers in {\{1,\dots,k\}} that is weakly increasing along rows (moving rightwards) and strictly increasing along columns (moving downwards). The collection of such tableaux will be denoted {{\mathcal T}_\lambda}. The weight {|T|} of a tableau {T} is the tuple {(n_1,\dots,n_k)}, where {n_i} is the number of occurrences of the integer {i} in the tableau. For instance, if {k=3} and {\lambda = (6,4,2)}, an example of a Young tableau of shape {\lambda} would be

\displaystyle  \begin{tabular}{|c|c|c|c|c|c|} \hline 1 & 1 & 1 & 2 & 3 & 3 \\ \cline{1-6} 2 & 2 & 2 & 3 \\ \cline{1-4} 3 & 3 \\ \cline{1-2} \end{tabular}

The weight here would be {|T| = (3,4,5)}.

To each partition {\lambda} one can associate the Schur polynomial {s_\lambda(u_1,\dots,u_k)} on {k} variables {u = (u_1,\dots,u_k)}, which we will define as

\displaystyle  s_\lambda(u) := \sum_{T \in {\mathcal T}_\lambda} u^{|T|}

using the multinomial convention

\displaystyle (u_1,\dots,u_k)^{(n_1,\dots,n_k)} := u_1^{n_1} \dots u_k^{n_k}.

Thus for instance the Young tableau {T} given above would contribute a term {u_1^3 u_2^4 u_3^5} to the Schur polynomial {s_{(6,4,2)}(u_1,u_2,u_3)}. In the case of partitions of the form {(n,0,\dots,0)}, the Schur polynomial {s_{(n,0,\dots,0)}} is just the complete homogeneous symmetric polynomial {h_n} of degree {n} on {k} variables:

\displaystyle  s_{(n,0,\dots,0)}(u_1,\dots,u_k) := \sum_{n_1,\dots,n_k \geq 0: n_1+\dots+n_k = n} u_1^{n_1} \dots u_k^{n_k},

thus for instance

\displaystyle  s_{(3,0)}(u_1,u_2) = u_1^3 + u_1^2 u_2 + u_1 u_2^2 + u_2^3.
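
The definition can be implemented verbatim (if inefficiently) by brute force; the following sketch enumerates all tableaux of a given shape, sums the monomials, and checks the {h_3} example above:

```python
# Sketch: evaluate a Schur polynomial directly from the tableau definition by
# enumerating all valid fillings (exponential in the number of boxes, so only
# suitable for tiny shapes).
from itertools import product

def schur_poly(lam, u):
    k = len(u)
    boxes = [(i, j) for i in range(len(lam)) for j in range(lam[i])]
    total = 0
    for filling in product(range(1, k + 1), repeat=len(boxes)):
        T = dict(zip(boxes, filling))
        if any(j + 1 < lam[i] and T[(i, j)] > T[(i, j + 1)]
               for (i, j) in boxes):
            continue    # rows must be weakly increasing
        if any(i + 1 < len(lam) and j < lam[i + 1] and T[(i, j)] >= T[(i + 1, j)]
               for (i, j) in boxes):
            continue    # columns must be strictly increasing
        monomial = 1
        for v in filling:
            monomial *= u[v - 1]
        total += monomial
    return total

u1, u2 = 2.0, 3.0
print(schur_poly((3, 0), (u1, u2)))                # s_{(3,0)}(u1,u2)
print(u1**3 + u1**2 * u2 + u1 * u2**2 + u2**3)     # h_3(u1,u2); should match
```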

Schur polynomials are ubiquitous in the algebraic combinatorics of “type {A} objects” such as the symmetric group {S_k}, the general linear group {GL_k}, or the unitary group {U_k}. For instance, one can view {s_\lambda} as the character of an irreducible polynomial representation of {GL_k({\bf C})} associated with the partition {\lambda}. However, we will not focus on these interpretations of Schur polynomials in this post.

This definition of Schur polynomials allows for a way to describe the polynomials recursively. If {k > 1} and {T} is a Young tableau of shape {\lambda = (\lambda_1,\dots,\lambda_k)}, taking values in {\{1,\dots,k\}}, one can form a sub-tableau {T'} of some shape {\lambda' = (\lambda'_1,\dots,\lambda'_{k-1})} by removing all the appearances of {k} (which, among other things, necessarily deletes the {k^{th}} row). For instance, with {T} as in the previous example, the sub-tableau {T'} would be

\displaystyle  \begin{tabular}{|c|c|c|c|} \hline 1 & 1 & 1 & 2 \\ \cline{1-4} 2 & 2 & 2 \\ \cline{1-3} \end{tabular}

and the reduced partition {\lambda'} in this case is {(4,3)}. As Young tableaux are required to be strictly increasing down columns, we can see that the reduced partition {\lambda'} must intersperse the original partition {\lambda} in the sense that

\displaystyle  \lambda_{i+1} \leq \lambda'_i \leq \lambda_i \ \ \ \ \ (1)

for all {1 \leq i \leq k-1}; we denote this interspersion relation as {\lambda' \prec \lambda} (though we caution that this is not intended to be a partial ordering). In the converse direction, if {\lambda' \prec \lambda} and {T'} is a Young tableau with shape {\lambda'} with entries in {\{1,\dots,k-1\}}, one can form a Young tableau {T} with shape {\lambda} and entries in {\{1,\dots,k\}} by appending to {T'} an entry of {k} in all the boxes that appear in the {\lambda} shape but not the {\lambda'} shape. This one-to-one correspondence leads to the recursion

\displaystyle  s_\lambda(u) = \sum_{\lambda' \prec \lambda} s_{\lambda'}(u') u_k^{|\lambda| - |\lambda'|} \ \ \ \ \ (2)

where {u = (u_1,\dots,u_k)}, {u' = (u_1,\dots,u_{k-1})}, and the size {|\lambda|} of a partition {\lambda = (\lambda_1,\dots,\lambda_k)} is defined as {|\lambda| := \lambda_1 + \dots + \lambda_k}.

One can use this recursion (2) to prove some further standard identities for Schur polynomials, such as the determinant identity

\displaystyle  s_\lambda(u) V(u) = \det( u_i^{\lambda_j+k-j} )_{1 \leq i,j \leq k} \ \ \ \ \ (3)

for {u=(u_1,\dots,u_k)}, where {V(u)} denotes the Vandermonde determinant

\displaystyle  V(u) := \prod_{1 \leq i < j \leq k} (u_i - u_j), \ \ \ \ \ (4)

or the Jacobi-Trudi identity

\displaystyle  s_\lambda(u) = \det( h_{\lambda_j - j + i}(u) )_{1 \leq i,j \leq k}, \ \ \ \ \ (5)

with the convention that {h_d(u) = 0} if {d} is negative. Thus for instance

\displaystyle s_{(1,1,0,\dots,0)}(u) = h_1^2(u) - h_0(u) h_2(u) = \sum_{1 \leq i < j \leq k} u_i u_j.

We review the (standard) derivation of these identities via (2) below the fold. Among other things, these identities show that the Schur polynomials are symmetric, which is not immediately obvious from their definition.
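
As a quick consistency check of (3) and (5) (an illustrative numerical sketch; the partition and the random evaluation point are arbitrary), one can compare the two determinant formulas directly:

```python
# Sketch: evaluate s_lambda(u) both by the bialternant identity (3) and by
# the Jacobi-Trudi identity (5) at a random positive point, and compare.
import numpy as np
from itertools import combinations_with_replacement

def h(d, u):
    """Complete homogeneous symmetric polynomial h_d(u) (h_d = 0 for d < 0)."""
    if d < 0:
        return 0.0
    return sum(np.prod([u[i] for i in c])
               for c in combinations_with_replacement(range(len(u)), d))

rng = np.random.default_rng(4)
k = 3
lam = (4, 2, 1)
u = rng.uniform(0.5, 1.5, size=k)

V = np.prod([u[i] - u[j] for i in range(k) for j in range(i + 1, k)])
bialternant = np.linalg.det(np.array([[u[i] ** (lam[j] + k - 1 - j)
                                       for j in range(k)] for i in range(k)])) / V
jacobi_trudi = np.linalg.det(np.array([[h(lam[j] - j + i, u)
                                        for j in range(k)] for i in range(k)]))
print(bialternant, jacobi_trudi)   # the two values should agree
```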

One can also iterate (2) to write

\displaystyle  s_\lambda(u) = \sum_{() = \lambda^0 \prec \lambda^1 \prec \dots \prec \lambda^k = \lambda} \prod_{j=1}^k u_j^{|\lambda^j| - |\lambda^{j-1}|} \ \ \ \ \ (6)

where the sum is over all tuples {\lambda^1,\dots,\lambda^k}, in which each {\lambda^j} is a partition of length {j} that intersperses the next partition {\lambda^{j+1}}, with {\lambda^k} set equal to {\lambda}. We will call such a tuple an integral Gelfand-Tsetlin pattern based at {\lambda}.

One can generalise (6) by introducing the skew Schur functions

\displaystyle  s_{\lambda/\mu}(u) := \sum_{\mu = \lambda^i \prec \dots \prec \lambda^k = \lambda} \prod_{j=i+1}^k u_j^{|\lambda^j| - |\lambda^{j-1}|} \ \ \ \ \ (7)

for {u = (u_{i+1},\dots,u_k)}, whenever {\lambda} is a partition of length {k} and {\mu} a partition of length {i} for some {0 \leq i \leq k}, thus the Schur polynomial {s_\lambda} is also the skew Schur polynomial {s_{\lambda /()}} with {i=0}. (One could relabel the variables here to be something like {(u_1,\dots,u_{k-i})} instead, but this labeling seems slightly more natural, particularly in view of identities such as (8) below.)

By construction, we have the decomposition

\displaystyle  s_{\lambda/\nu}(u_{i+1},\dots,u_k) = \sum_\mu s_{\mu/\nu}(u_{i+1},\dots,u_j) s_{\lambda/\mu}(u_{j+1},\dots,u_k) \ \ \ \ \ (8)

whenever {0 \leq i \leq j \leq k}, and {\nu, \mu, \lambda} are partitions of lengths {i,j,k} respectively. This gives another recursive way to understand Schur polynomials and skew Schur polynomials. For instance, one can use it to establish the generalised Jacobi-Trudi identity

\displaystyle  s_{\lambda/\mu}(u) = \det( h_{\lambda_j - j - \mu_i + i}(u) )_{1 \leq i,j \leq k}, \ \ \ \ \ (9)

with the convention that {\mu_i = 0} for {i} larger than the length of {\mu}; we do this below the fold.

The Schur polynomials (and skew Schur polynomials) are “discretised” (or “quantised”) in the sense that their parameters {\lambda, \mu} are required to be integer-valued, and their definition similarly involves summation over a discrete set. It turns out that there are “continuous” (or “classical”) analogues of these functions, in which the parameters {\lambda,\mu} now take real values rather than integers, and are defined via integration rather than summation. One can view these continuous analogues as a “semiclassical limit” of their discrete counterparts, in a manner that can be made precise using the machinery of geometric quantisation, but we will not do so here.

The continuous analogues can be defined as follows. Define a real partition of length {k} to be a tuple {\lambda = (\lambda_1,\dots,\lambda_k)} where {\lambda_1 \geq \dots \geq \lambda_k \geq 0} are now real numbers. We can define the relation {\lambda' \prec \lambda} of interspersion between a length {k-1} real partition {\lambda' = (\lambda'_1,\dots,\lambda'_{k-1})} and a length {k} real partition {\lambda = (\lambda_1,\dots,\lambda_{k})} precisely as before, by requiring that the inequalities (1) hold for all {1 \leq i \leq k-1}. We can then define the continuous Schur functions {S_\lambda(x)} for {x = (x_1,\dots,x_k) \in {\bf R}^k} recursively by defining

\displaystyle  S_{()}() = 1

and

\displaystyle  S_\lambda(x) = \int_{\lambda' \prec \lambda} S_{\lambda'}(x') \exp( (|\lambda| - |\lambda'|) x_k )\ d\lambda' \ \ \ \ \ (10)

for {k \geq 1} and {\lambda} of length {k}, where {x' := (x_1,\dots,x_{k-1})} and the integral is with respect to {k-1}-dimensional Lebesgue measure, and {|\lambda| = \lambda_1 + \dots + \lambda_k} as before. Thus for instance

\displaystyle  S_{(\lambda_1)}(x_1) = \exp( \lambda_1 x_1 )

and

\displaystyle  S_{(\lambda_1,\lambda_2)}(x_1,x_2) = \int_{\lambda_2}^{\lambda_1} \exp( \lambda'_1 x_1 + (\lambda_1+\lambda_2-\lambda'_1) x_2 )\ d\lambda'_1.

More generally, we can define the continuous skew Schur functions {S_{\lambda/\mu}(x)} for {\lambda} of length {k}, {\mu} of length {j \leq k}, and {x = (x_{j+1},\dots,x_k) \in {\bf R}^{k-j}} recursively by defining

\displaystyle  S_{\mu/\mu}() = 1

and

\displaystyle  S_{\lambda/\mu}(x) = \int_{\lambda' \prec \lambda} S_{\lambda'/\mu}(x') \exp( (|\lambda| - |\lambda'|) x_k )\ d\lambda'

for {k > j}. Thus for instance

\displaystyle  S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1,\mu_2)}(x_3) = 1_{\lambda_3 \leq \mu_2 \leq \lambda_2 \leq \mu_1 \leq \lambda_1} \exp( x_3 (\lambda_1+\lambda_2+\lambda_3 - \mu_1 - \mu_2 ))

and

\displaystyle  S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1)}(x_2, x_3) = \int_{\lambda_3 \leq \lambda'_2 \leq \lambda_2, \mu_1} \int_{\mu_1, \lambda_2 \leq \lambda'_1 \leq \lambda_1}

\displaystyle \exp( x_2 (\lambda'_1+\lambda'_2 - \mu_1) + x_3 (\lambda_1+\lambda_2+\lambda_3 - \lambda'_1 - \lambda'_2))\ d\lambda'_1 d\lambda'_2.

By expanding out the recursion, one obtains the analogue

\displaystyle  S_\lambda(x) = \int_{\lambda^1 \prec \dots \prec \lambda^k = \lambda} \exp( \sum_{j=1}^k x_j (|\lambda^j| - |\lambda^{j-1}|))\ d\lambda^1 \dots d\lambda^{k-1},

of (6), and more generally one has

\displaystyle  S_{\lambda/\mu}(x) = \int_{\mu = \lambda^i \prec \dots \prec \lambda^k = \lambda} \exp( \sum_{j=i+1}^k x_j (|\lambda^j| - |\lambda^{j-1}|))\ d\lambda^{i+1} \dots d\lambda^{k-1}.

We will call the tuples {(\lambda^1,\dots,\lambda^k)} in the first integral real Gelfand-Tsetlin patterns based at {\lambda}. The analogue of (8) is then

\displaystyle  S_{\lambda/\nu}(x_{i+1},\dots,x_k) = \int S_{\mu/\nu}(x_{i+1},\dots,x_j) S_{\lambda/\mu}(x_{j+1},\dots,x_k)\ d\mu

where the integral is over all real partitions {\mu} of length {j}, with Lebesgue measure.

By approximating various integrals by their Riemann sums, one can relate the continuous Schur functions to their discrete counterparts by the limiting formula

\displaystyle  N^{-k(k-1)/2} s_{\lfloor N \lambda \rfloor}( \exp[ x/N ] ) \rightarrow S_\lambda(x) \ \ \ \ \ (11)

as {N \rightarrow \infty} for any length {k} real partition {\lambda = (\lambda_1,\dots,\lambda_k)} and any {x = (x_1,\dots,x_k) \in {\bf R}^k}, where

\displaystyle  \lfloor N \lambda \rfloor := ( \lfloor N \lambda_1 \rfloor, \dots, \lfloor N \lambda_k \rfloor )

and

\displaystyle  \exp[x/N] := (\exp(x_1/N), \dots, \exp(x_k/N)).

More generally, one has

\displaystyle  N^{j(j-1)/2-k(k-1)/2} s_{\lfloor N \lambda \rfloor / \lfloor N \mu \rfloor}( \exp[ x/N ] ) \rightarrow S_{\lambda/\mu}(x)

as {N \rightarrow \infty} for any length {k} real partition {\lambda}, any length {j} real partition {\mu} with {0 \leq j \leq k}, and any {x = (x_{j+1},\dots,x_k) \in {\bf R}^{k-j}}.
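
These limits are also easy to observe numerically. The following sketch (with arbitrary choices of {\lambda} and {x}) tests (11) in the {k=2} case, where both sides can be written in closed form (the {k=2} continuous Schur function was computed explicitly above):

```python
# Sketch: illustrate the limit (11) for k = 2, where
#   s_{(m1,m2)}(v1,v2) = (v1^{m1+1} v2^{m2} - v1^{m2} v2^{m1+1})/(v1 - v2)
# by the bialternant formula, and
#   S_{(l1,l2)}(x1,x2) = (e^{l1 x1 + l2 x2} - e^{l2 x1 + l1 x2})/(x1 - x2)
# by evaluating the integral given earlier in closed form.
import numpy as np

lam = (1.7, 0.4)
x = (0.9, -0.3)

def s_discrete(m, v):
    (m1, m2), (v1, v2) = m, v
    return (v1 ** (m1 + 1) * v2 ** m2 - v1 ** m2 * v2 ** (m1 + 1)) / (v1 - v2)

def S_cont(lam, x):
    (l1, l2), (x1, x2) = lam, x
    return (np.exp(l1 * x1 + l2 * x2) - np.exp(l2 * x1 + l1 * x2)) / (x1 - x2)

for N in (10, 100, 1000, 10000):
    m = (int(N * lam[0]), int(N * lam[1]))           # floor(N lambda)
    v = (np.exp(x[0] / N), np.exp(x[1] / N))         # exp[x/N]
    print(N, s_discrete(m, v) / N, "->", S_cont(lam, x))   # N^{-1} s -> S
```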

As a consequence of these limiting formulae, one expects all of the discrete identities above to have continuous counterparts. This is indeed the case; below the fold we shall prove the discrete and continuous identities in parallel. These are not new results by any means, but I was not able to locate a good place in the literature where they are explicitly written down, so I thought I would try to do so here (primarily for my own internal reference, but perhaps the calculations will be worthwhile to some others also).

