
Apoorva Khare and I have updated our paper “On the sign patterns of entrywise positivity preservers in fixed dimension“, announced at this post from last month. The quantitative results are now sharpened using a new monotonicity property of ratios ${s_{\lambda}(u)/s_{\mu}(u)}$ of Schur polynomials, namely that such ratios are monotone non-decreasing in each coordinate of ${u}$ when ${u}$ is in the positive orthant and the partition ${\lambda}$ is larger than ${\mu}$ (that is, ${\mu_i \leq \lambda_i}$ for each ${i}$). (This monotonicity was also independently observed by Rachid Ait-Haddou, using the theory of blossoms.) In the revised version of the paper we give two proofs of this monotonicity. The first relies on a deep positivity result of Lam, Postnikov, and Pylyavskyy, which uses a representation-theoretic positivity result of Haiman to show that the polynomial combination

$\displaystyle s_{(\lambda \wedge \nu) / (\mu \wedge \rho)} s_{(\lambda \vee \nu) / (\mu \vee \rho)} - s_{\lambda/\mu} s_{\nu/\rho} \ \ \ \ \ (1)$

of skew-Schur polynomials is Schur-positive for any partitions ${\lambda,\mu,\nu,\rho}$ (using the convention that the skew-Schur polynomial ${s_{\lambda/\mu}}$ vanishes if ${\mu}$ is not contained in ${\lambda}$, and where ${\lambda \wedge \nu}$ and ${\lambda \vee \nu}$ denotes the pointwise min and max of ${\lambda}$ and ${\nu}$ respectively). It is fairly easy to derive the monotonicity of ${s_\lambda(u)/s_\mu(u)}$ from this, by using the expansion

$\displaystyle s_\lambda(u_1,\dots, u_n) = \sum_k u_1^k s_{\lambda/(k)}(u_2,\dots,u_n)$

of Schur polynomials into skew-Schur polynomials (as was done in this previous post).

The second proof of monotonicity avoids representation theory by a more elementary argument establishing the weaker claim that the above expression (1) is non-negative on the positive orthant. In fact we prove a more general determinantal log-supermodularity claim which may be of independent interest:

Theorem 1 Let ${A}$ be any ${n \times n}$ totally positive matrix (thus, every minor of ${A}$ is non-negative). Then for any increasing ${k}$-tuples ${I_1,I_2,J_1,J_2}$ of elements of ${\{1,\dots,n\}}$, one has

$\displaystyle \det( A_{I_1 \wedge I_2, J_1 \wedge J_2} ) \det( A_{I_1 \vee I_2, J_1 \vee J_2} ) - \det(A_{I_1,J_1}) \det(A_{I_2,J_2}) \geq 0$

where ${A_{I,J}}$ denotes the ${k \times k}$ minor formed from the rows in ${I}$ and columns in ${J}$.

For instance, if ${A}$ is the matrix

$\displaystyle A = \begin{pmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{pmatrix}$

for some real numbers ${a,\dots,p}$, one has

$\displaystyle a h - de\geq 0$

(corresponding to the case ${k=1}$, ${I_1 = (1), I_2 = (2), J_1 = (4), J_2 = (1)}$), or

$\displaystyle \det \begin{pmatrix} a & c \\ i & k \end{pmatrix} \det \begin{pmatrix} f & h \\ n & p \end{pmatrix} - \det \begin{pmatrix} e & h \\ i & l \end{pmatrix} \det \begin{pmatrix} b & c \\ n & o \end{pmatrix} \geq 0$

(corresponding to the case ${k=2}$, ${I_1 = (2,3)}$, ${I_2 = (1,4)}$, ${J_1 = (1,4)}$, ${J_2 = (2,3)}$). It turns out that this claim can be proven relatively easily by an induction argument, relying on the Dodgson and Karlin identities from this previous post; the difficulties are largely notational in nature. Combining this result with the Jacobi-Trudi identity for skew-Schur polynomials (discussed in this previous post) gives the non-negativity of (1); it can also be used to directly establish the monotonicity of ratios ${s_\lambda(u)/s_\mu(u)}$ by applying the theorem to a generalised Vandermonde matrix.
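To make the ${k=2}$ instance above concrete, here is a small numerical sanity check (my own illustration, not from the paper). A Cauchy matrix ${(1/(x_i+y_j))_{i,j}}$ with increasing positive parameters is a standard example of a totally positive matrix, so Theorem 1 predicts that the displayed combination of ${2 \times 2}$ minors is non-negative; we can verify this with exact rational arithmetic.

```python
from fractions import Fraction

def minor(A, rows, cols):
    # determinant of the 2x2 submatrix of A with the given (0-based) rows/columns
    a = A[rows[0]][cols[0]]; b = A[rows[0]][cols[1]]
    c = A[rows[1]][cols[0]]; d = A[rows[1]][cols[1]]
    return a * d - b * c

# A 4x4 Cauchy matrix 1/(x_i + y_j) with increasing positive parameters
# is totally positive (all its minors are positive).
x = [1, 2, 3, 4]; y = [1, 3, 5, 7]
A = [[Fraction(1, xi + yj) for yj in y] for xi in x]

# Theorem 1 with k=2, I1=(2,3), I2=(1,4), J1=(1,4), J2=(2,3)
# (1-based in the text; 0-based below).
I1, I2, J1, J2 = (1, 2), (0, 3), (0, 3), (1, 2)
Imin = tuple(min(a, b) for a, b in zip(I1, I2))   # rows (1,3)
Imax = tuple(max(a, b) for a, b in zip(I1, I2))   # rows (2,4)
Jmin = tuple(min(a, b) for a, b in zip(J1, J2))   # cols (1,3)
Jmax = tuple(max(a, b) for a, b in zip(J1, J2))   # cols (2,4)

lhs = minor(A, Imin, Jmin) * minor(A, Imax, Jmax)
rhs = minor(A, I1, J1) * minor(A, I2, J2)
print(lhs - rhs >= 0)  # True: log-supermodularity of the minors
```

Of course this checks only one matrix and one choice of index tuples, but the same loop over all tuples gives a useful regression test for the identity.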

(Log-supermodularity also arises as the natural hypothesis for the FKG inequality, though I do not know of any interesting application of the FKG inequality in this current setting.)

Fix a non-negative integer ${k}$. Define a (weak) integer partition of length ${k}$ to be a tuple ${\lambda = (\lambda_1,\dots,\lambda_k)}$ of non-increasing non-negative integers ${\lambda_1 \geq \dots \geq \lambda_k \geq 0}$. (Here our partitions are “weak” in the sense that we allow some parts of the partition to be zero. Henceforth we will omit the modifier “weak”, as we will not need to consider the more usual notion of “strong” partitions.) To each such partition ${\lambda}$, one can associate a Young diagram consisting of ${k}$ left-justified rows of boxes, with the ${i^{th}}$ row containing ${\lambda_i}$ boxes. A semi-standard Young tableau (or Young tableau for short) ${T}$ of shape ${\lambda}$ is a filling of these boxes by integers in ${\{1,\dots,k\}}$ that is weakly increasing along rows (moving rightwards) and strictly increasing along columns (moving downwards). The collection of such tableaux will be denoted ${{\mathcal T}_\lambda}$. The weight ${|T|}$ of a tableau ${T}$ is the tuple ${(n_1,\dots,n_k)}$, where ${n_i}$ is the number of occurrences of the integer ${i}$ in the tableau. For instance, if ${k=3}$ and ${\lambda = (6,4,2)}$, an example of a Young tableau of shape ${\lambda}$ would be

$\displaystyle \begin{tabular}{|c|c|c|c|c|c|} \hline 1 & 1 & 1 & 2 & 3 & 3 \\ \cline{1-6} 2 & 2 & 2 &3\\ \cline{1-4} 3 & 3\\ \cline{1-2} \end{tabular}$

The weight here would be ${|T| = (3,4,5)}$.

To each partition ${\lambda}$ one can associate the Schur polynomial ${s_\lambda(u_1,\dots,u_k)}$ on ${k}$ variables ${u = (u_1,\dots,u_k)}$, which we will define as

$\displaystyle s_\lambda(u) := \sum_{T \in {\mathcal T}_\lambda} u^{|T|}$

using the multinomial convention

$\displaystyle (u_1,\dots,u_k)^{(n_1,\dots,n_k)} := u_1^{n_1} \dots u_k^{n_k}.$

Thus for instance the Young tableau ${T}$ given above would contribute a term ${u_1^3 u_2^4 u_3^5}$ to the Schur polynomial ${s_{(6,4,2)}(u_1,u_2,u_3)}$. In the case of partitions of the form ${(n,0,\dots,0)}$, the Schur polynomial ${s_{(n,0,\dots,0)}}$ is just the complete homogeneous symmetric polynomial ${h_n}$ of degree ${n}$ on ${k}$ variables:

$\displaystyle s_{(n,0,\dots,0)}(u_1,\dots,u_k) := \sum_{n_1,\dots,n_k \geq 0: n_1+\dots+n_k = n} u_1^{n_1} \dots u_k^{n_k},$

thus for instance

$\displaystyle s_{(3,0)}(u_1,u_2) = u_1^3 + u_1^2 u_2 + u_1 u_2^2 + u_2^3.$
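Since the tableau definition is completely combinatorial, small Schur polynomials can be computed by brute force. The following Python sketch (my own illustration, using only the standard library) enumerates semi-standard Young tableaux row by row and recovers the expansion of ${s_{(3,0)}(u_1,u_2)}$ just displayed.

```python
import itertools
from collections import Counter

def ssyt(shape, k):
    """Enumerate semi-standard Young tableaux of the given shape with entries in 1..k."""
    nrows = len(shape)
    def fill(tab):
        r = len(tab)
        if r == nrows:
            yield [row[:] for row in tab]
            return
        for row in itertools.combinations_with_replacement(range(1, k + 1), shape[r]):
            # rows are weakly increasing by construction; enforce strict column increase
            if r == 0 or all(row[c] > tab[r - 1][c] for c in range(shape[r])):
                tab.append(list(row))
                yield from fill(tab)
                tab.pop()
    yield from fill([])

def schur_monomials(shape, k):
    # the Schur polynomial s_shape(u_1,...,u_k) as a dict {weight tuple: coefficient}
    poly = Counter()
    for T in ssyt(shape, k):
        weight = [0] * k
        for row in T:
            for entry in row:
                weight[entry - 1] += 1
        poly[tuple(weight)] += 1
    return poly

p = schur_monomials((3,), 2)   # s_{(3,0)}(u1,u2), dropping the zero part
print(sorted(p.items()))       # [((0, 3), 1), ((1, 2), 1), ((2, 1), 1), ((3, 0), 1)]
```

As a further check, the number of tableaux of shape ${(6,4,2)}$ with entries in ${\{1,2,3\}}$ produced this way is ${27}$, matching the dimension ${s_{(6,4,2)}(1,1,1)}$ given by the Weyl dimension formula.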

Schur polynomials are ubiquitous in the algebraic combinatorics of “type ${A}$ objects” such as the symmetric group ${S_k}$, the general linear group ${GL_k}$, or the unitary group ${U_k}$. For instance, one can view ${s_\lambda}$ as the character of an irreducible polynomial representation of ${GL_k({\bf C})}$ associated with the partition ${\lambda}$. However, we will not focus on these interpretations of Schur polynomials in this post.

This definition of Schur polynomials allows for a way to describe the polynomials recursively. If ${k > 1}$ and ${T}$ is a Young tableau of shape ${\lambda = (\lambda_1,\dots,\lambda_k)}$, taking values in ${\{1,\dots,k\}}$, one can form a sub-tableau ${T'}$ of some shape ${\lambda' = (\lambda'_1,\dots,\lambda'_{k-1})}$ by removing all the appearances of ${k}$ (which, among other things, necessarily deletes the ${k^{th}}$ row). For instance, with ${T}$ as in the previous example, the sub-tableau ${T'}$ would be

$\displaystyle \begin{tabular}{|c|c|c|c|} \hline 1 & 1 & 1 & 2 \\ \cline{1-4} 2 & 2 & 2 \\ \cline{1-3} \end{tabular}$

and the reduced partition ${\lambda'}$ in this case is ${(4,3)}$. As Young tableaux are required to be strictly increasing down columns, we can see that the reduced partition ${\lambda'}$ must intersperse the original partition ${\lambda}$ in the sense that

$\displaystyle \lambda_{i+1} \leq \lambda'_i \leq \lambda_i \ \ \ \ \ (1)$

for all ${1 \leq i \leq k-1}$; we denote this interspersion relation as ${\lambda' \prec \lambda}$ (though we caution that this is not intended to be a partial ordering). In the converse direction, if ${\lambda' \prec \lambda}$ and ${T'}$ is a Young tableau with shape ${\lambda'}$ with entries in ${\{1,\dots,k-1\}}$, one can form a Young tableau ${T}$ with shape ${\lambda}$ and entries in ${\{1,\dots,k\}}$ by appending to ${T'}$ an entry of ${k}$ in all the boxes that appear in the ${\lambda}$ shape but not the ${\lambda'}$ shape. This one-to-one correspondence leads to the recursion

$\displaystyle s_\lambda(u) = \sum_{\lambda' \prec \lambda} s_{\lambda'}(u') u_k^{|\lambda| - |\lambda'|} \ \ \ \ \ (2)$

where ${u = (u_1,\dots,u_k)}$, ${u' = (u_1,\dots,u_{k-1})}$, and the size ${|\lambda|}$ of a partition ${\lambda = (\lambda_1,\dots,\lambda_k)}$ is defined as ${|\lambda| := \lambda_1 + \dots + \lambda_k}$.
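The recursion (2) translates directly into code. Here is a short sympy sketch (an illustration of mine, assuming sympy is available) that implements it and checks it against the expansion of ${s_{(3,0)}(u_1,u_2)}$ given earlier, as well as the tableau count for ${\lambda = (6,4,2)}$.

```python
import itertools
from sympy import symbols, expand, Integer

def schur(lam, u):
    """Schur polynomial via the branching recursion (2):
       s_lam(u_1..u_k) = sum over lam' interspersing lam of
       s_lam'(u_1..u_{k-1}) * u_k^(|lam| - |lam'|)."""
    k = len(lam)
    if k == 0:
        return Integer(1)
    # lam'_i ranges over lam_{i+1} <= lam'_i <= lam_i; non-increasingness of lam'
    # is automatic since lam'_{i+1} <= lam_{i+1} <= lam'_i
    ranges = [range(lam[i + 1], lam[i] + 1) for i in range(k - 1)]
    total = Integer(0)
    for lam_p in itertools.product(*ranges):
        total += schur(lam_p, u[:-1]) * u[-1] ** (sum(lam) - sum(lam_p))
    return expand(total)

u1, u2, u3 = symbols('u1 u2 u3', positive=True)
s30 = schur((3, 0), (u1, u2))
print(s30)  # the four monomials of h_3(u1, u2)
s642 = schur((6, 4, 2), (u1, u2, u3))
print(s642.subs({u1: 1, u2: 1, u3: 1}))  # number of tableaux of shape (6,4,2)
```

Setting ${u_1=u_2=u_3=1}$ counts the integral Gelfand-Tsetlin patterns based at ${(6,4,2)}$, i.e. the semi-standard tableaux of that shape.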

One can use this recursion (2) to prove some further standard identities for Schur polynomials, such as the determinant identity

$\displaystyle s_\lambda(u) V(u) = \det( u_i^{\lambda_j+k-j} )_{1 \leq i,j \leq k} \ \ \ \ \ (3)$

for ${u=(u_1,\dots,u_k)}$, where ${V(u)}$ denotes the Vandermonde determinant

$\displaystyle V(u) := \prod_{1 \leq i < j \leq k} (u_i - u_j), \ \ \ \ \ (4)$

or the Jacobi-Trudi identity

$\displaystyle s_\lambda(u) = \det( h_{\lambda_j - j + i}(u) )_{1 \leq i,j \leq k}, \ \ \ \ \ (5)$

with the convention that ${h_d(u) = 0}$ if ${d}$ is negative. Thus for instance

$\displaystyle s_{(1,1,0,\dots,0)}(u) = h_1^2(u) - h_0(u) h_2(u) = \sum_{1 \leq i < j \leq k} u_i u_j.$

We review the (standard) derivation of these identities via (2) below the fold. Among other things, these identities show that the Schur polynomials are symmetric, which is not immediately obvious from their definition.
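Both determinant identities are easy to spot-check symbolically. The sympy sketch below (my own illustration) verifies for ${\lambda = (2,1,0)}$ that the bialternant formula (3) and the Jacobi-Trudi formula (5) produce the same polynomial.

```python
import itertools
from sympy import symbols, Matrix, Mul, expand, Integer

u = symbols('u1 u2 u3')
k, lam = 3, (2, 1, 0)

def h(d, v):
    # complete homogeneous symmetric polynomial of degree d in the variables v
    if d < 0:
        return Integer(0)
    return sum(Mul(*c) for c in itertools.combinations_with_replacement(v, d))

# Jacobi-Trudi (5): s_lam(u) = det( h_{lam_j - j + i}(u) )
s_jt = Matrix(k, k, lambda i, j: h(lam[j] - (j + 1) + (i + 1), u)).det()

# bialternant identity (3): s_lam(u) V(u) = det( u_i^{lam_j + k - j} )
gv = Matrix(k, k, lambda i, j: u[i] ** (lam[j] + k - (j + 1))).det()
V = Mul(*[u[i] - u[j] for i in range(k) for j in range(i + 1, k)])

print(expand(gv - V * s_jt))  # 0: the two identities are consistent
```

(The ${(i+1)}$, ${(j+1)}$ shifts simply convert the 1-based indices of (3) and (5) to Python's 0-based matrix entries.)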

One can also iterate (2) to write

$\displaystyle s_\lambda(u) = \sum_{() = \lambda^0 \prec \lambda^1 \prec \dots \prec \lambda^k = \lambda} \prod_{j=1}^k u_j^{|\lambda^j| - |\lambda^{j-1}|} \ \ \ \ \ (6)$

where the sum is over all tuples ${\lambda^1,\dots,\lambda^k}$, where each ${\lambda^j}$ is a partition of length ${j}$ that intersperses the next partition ${\lambda^{j+1}}$, with ${\lambda^k}$ set equal to ${\lambda}$. We will call such a tuple an integral Gelfand-Tsetlin pattern based at ${\lambda}$.

One can generalise (6) by introducing the skew Schur functions

$\displaystyle s_{\lambda/\mu}(u) := \sum_{\mu = \lambda^i \prec \dots \prec \lambda^k = \lambda} \prod_{j=i+1}^k u_j^{|\lambda^j| - |\lambda^{j-1}|} \ \ \ \ \ (7)$

for ${u = (u_{i+1},\dots,u_k)}$, whenever ${\lambda}$ is a partition of length ${k}$ and ${\mu}$ a partition of length ${i}$ for some ${0 \leq i \leq k}$, thus the Schur polynomial ${s_\lambda}$ is also the skew Schur polynomial ${s_{\lambda /()}}$ with ${i=0}$. (One could relabel the variables here to be something like ${(u_1,\dots,u_{k-i})}$ instead, but this labeling seems slightly more natural, particularly in view of identities such as (8) below.)

By construction, we have the decomposition

$\displaystyle s_{\lambda/\nu}(u_{i+1},\dots,u_k) = \sum_\mu s_{\mu/\nu}(u_{i+1},\dots,u_j) s_{\lambda/\mu}(u_{j+1},\dots,u_k) \ \ \ \ \ (8)$

whenever ${0 \leq i \leq j \leq k}$, and ${\nu, \mu, \lambda}$ are partitions of lengths ${i,j,k}$ respectively. This gives another recursive way to understand Schur polynomials and skew Schur polynomials. For instance, one can use it to establish the generalised Jacobi-Trudi identity

$\displaystyle s_{\lambda/\mu}(u) = \det( h_{\lambda_j - j - \mu_i + i}(u) )_{1 \leq i,j \leq k}, \ \ \ \ \ (9)$

with the convention that ${\mu_i = 0}$ for ${i}$ larger than the length of ${\mu}$; we do this below the fold.
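The definition (7) and the generalised Jacobi-Trudi identity (9) can likewise be compared by machine. In the sympy sketch below (an illustration under my own naming), the skew Schur polynomial ${s_{(2,1,0)/(1)}(u_2,u_3)}$ is computed both by summing over chains of interspersed partitions and by the determinant (9); both give ${(u_2+u_3)^2}$.

```python
import itertools
from sympy import symbols, Matrix, Mul, expand, Integer

def h(d, v):
    # complete homogeneous symmetric polynomial of degree d in the variables v
    if d < 0:
        return Integer(0)
    return sum(Mul(*c) for c in itertools.combinations_with_replacement(v, d))

def skew_schur_chains(lam, mu, u):
    """Skew Schur polynomial via (7): sum over chains mu = lam^i < ... < lam^k = lam."""
    k, i = len(lam), len(mu)
    if k == i:
        return Integer(1) if tuple(lam) == tuple(mu) else Integer(0)
    # sum over partitions lam' of length k-1 interspersing lam, recursing down to mu
    ranges = [range(lam[t + 1], lam[t] + 1) for t in range(k - 1)]
    total = Integer(0)
    for lam_p in itertools.product(*ranges):
        total += skew_schur_chains(lam_p, mu, u[:-1]) * u[-1] ** (sum(lam) - sum(lam_p))
    return expand(total)

u2, u3 = symbols('u2 u3')
lam, mu, u = (2, 1, 0), (1,), (u2, u3)
k = len(lam)
mu_pad = tuple(mu) + (0,) * (k - len(mu))   # convention mu_i = 0 beyond the length of mu

chains = skew_schur_chains(lam, mu, u)
# generalised Jacobi-Trudi (9): det( h_{lam_j - j - mu_i + i}(u) )
jt = Matrix(k, k, lambda i, j: h(lam[j] - (j + 1) - mu_pad[i] + (i + 1), u)).det()
print(expand(chains - jt))  # 0
```

Here the chain sum has only four Gelfand-Tsetlin patterns, so the agreement can also be confirmed by hand.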

The Schur polynomials (and skew Schur polynomials) are “discretised” (or “quantised”) in the sense that their parameters ${\lambda, \mu}$ are required to be integer-valued, and their definition similarly involves summation over a discrete set. It turns out that there are “continuous” (or “classical”) analogues of these functions, in which the parameters ${\lambda,\mu}$ now take real values rather than integers, and are defined via integration rather than summation. One can view these continuous analogues as a “semiclassical limit” of their discrete counterparts, in a manner that can be made precise using the machinery of geometric quantisation, but we will not do so here.

The continuous analogues can be defined as follows. Define a real partition of length ${k}$ to be a tuple ${\lambda = (\lambda_1,\dots,\lambda_k)}$ where ${\lambda_1 \geq \dots \geq \lambda_k \geq 0}$ are now real numbers. We can define the relation ${\lambda' \prec \lambda}$ of interspersion between a length ${k-1}$ real partition ${\lambda' = (\lambda'_1,\dots,\lambda'_{k-1})}$ and a length ${k}$ real partition ${\lambda = (\lambda_1,\dots,\lambda_{k})}$ precisely as before, by requiring that the inequalities (1) hold for all ${1 \leq i \leq k-1}$. We can then define the continuous Schur functions ${S_\lambda(x)}$ for ${x = (x_1,\dots,x_k) \in {\bf R}^k}$ recursively by defining

$\displaystyle S_{()}() = 1$

and

$\displaystyle S_\lambda(x) = \int_{\lambda' \prec \lambda} S_{\lambda'}(x') \exp( (|\lambda| - |\lambda'|) x_k ) \ \ \ \ \ (10)$

for ${k \geq 1}$ and ${\lambda}$ of length ${k}$, where ${x' := (x_1,\dots,x_{k-1})}$ and the integral is with respect to ${k-1}$-dimensional Lebesgue measure, and ${|\lambda| = \lambda_1 + \dots + \lambda_k}$ as before. Thus for instance

$\displaystyle S_{(\lambda_1)}(x_1) = \exp( \lambda_1 x_1 )$

and

$\displaystyle S_{(\lambda_1,\lambda_2)}(x_1,x_2) = \int_{\lambda_2}^{\lambda_1} \exp( \lambda'_1 x_1 + (\lambda_1+\lambda_2-\lambda'_1) x_2 )\ d\lambda'_1.$
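For ${x_1 \neq x_2}$ the last integral can be evaluated in closed form, ${S_{(\lambda_1,\lambda_2)}(x_1,x_2) = (e^{\lambda_1 x_1 + \lambda_2 x_2} - e^{\lambda_2 x_1 + \lambda_1 x_2})/(x_1 - x_2)}$, a fact we can confirm with a short sympy sketch (my own check, with the specific values ${\lambda = (2,1)}$ chosen arbitrarily).

```python
from sympy import symbols, exp, integrate

x1, x2, t = symbols('x1 x2 t')
l1, l2 = 2, 1   # a concrete real partition (l1 >= l2 >= 0)

# the two-variable case of the recursion (10)
S = integrate(exp(t * x1 + (l1 + l2 - t) * x2), (t, l2, l1))

# closed form for x1 != x2, obtained by integrating exp(t*(x1-x2)) directly
closed = (exp(l1 * x1 + l2 * x2) - exp(l2 * x1 + l1 * x2)) / (x1 - x2)

# compare numerically at a sample point (sympy may return a Piecewise for S,
# which the substitution resolves)
diff = (S - closed).subs({x1: 0.7, x2: -0.3})
print(abs(float(diff)) < 1e-9)  # True
```

Note that the closed form is visibly symmetric in ${x_1, x_2}$, consistent with the symmetry of the continuous Schur functions.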

More generally, we can define the continuous skew Schur functions ${S_{\lambda/\mu}(x)}$ for ${\lambda}$ of length ${k}$, ${\mu}$ of length ${j \leq k}$, and ${x = (x_{j+1},\dots,x_k) \in {\bf R}^{k-j}}$ recursively by defining

$\displaystyle S_{\mu/\mu}() = 1$

and

$\displaystyle S_{\lambda/\mu}(x) = \int_{\lambda' \prec \lambda} S_{\lambda'/\mu}(x') \exp( (|\lambda| - |\lambda'|) x_k )$

for ${k > j}$. Thus for instance

$\displaystyle S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1,\mu_2)}(x_3) = 1_{\lambda_3 \leq \mu_2 \leq \lambda_2 \leq \mu_1 \leq \lambda_1} \exp( x_3 (\lambda_1+\lambda_2+\lambda_3 - \mu_1 - \mu_2 ))$

and

$\displaystyle S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1)}(x_2, x_3) = \int_{\lambda_3 \leq \lambda'_2 \leq \lambda_2, \mu_1} \int_{\mu_1, \lambda_2 \leq \lambda'_1 \leq \lambda_1} \exp( x_2 (\lambda'_1+\lambda'_2 - \mu_1) + x_3 (\lambda_1+\lambda_2+\lambda_3 - \lambda'_1 - \lambda'_2))\ d\lambda'_1 d\lambda'_2.$

By expanding out the recursion, one obtains the analogue

$\displaystyle S_\lambda(x) = \int_{\lambda^1 \prec \dots \prec \lambda^k = \lambda} \exp( \sum_{j=1}^k x_j (|\lambda^j| - |\lambda^{j-1}|))\ d\lambda^1 \dots d\lambda^{k-1},$

of (6), and more generally one has

$\displaystyle S_{\lambda/\mu}(x) = \int_{\mu = \lambda^i \prec \dots \prec \lambda^k = \lambda} \exp( \sum_{j=i+1}^k x_j (|\lambda^j| - |\lambda^{j-1}|))\ d\lambda^{i+1} \dots d\lambda^{k-1}.$

We will call the tuples ${(\lambda^1,\dots,\lambda^k)}$ in the first integral real Gelfand-Tsetlin patterns based at ${\lambda}$. The analogue of (8) is then

$\displaystyle S_{\lambda/\nu}(x_{i+1},\dots,x_k) = \int S_{\mu/\nu}(x_{i+1},\dots,x_j) S_{\lambda/\mu}(x_{j+1},\dots,x_k)\ d\mu$

where the integral is over all real partitions ${\mu}$ of length ${j}$, with Lebesgue measure.

By approximating various integrals by their Riemann sums, one can relate the continuous Schur functions to their discrete counterparts by the limiting formula

$\displaystyle N^{-k(k-1)/2} s_{\lfloor N \lambda \rfloor}( \exp[ x/N ] ) \rightarrow S_\lambda(x) \ \ \ \ \ (11)$

as ${N \rightarrow \infty}$ for any length ${k}$ real partition ${\lambda = (\lambda_1,\dots,\lambda_k)}$ and any ${x = (x_1,\dots,x_k) \in {\bf R}^k}$, where

$\displaystyle \lfloor N \lambda \rfloor := ( \lfloor N \lambda_1 \rfloor, \dots, \lfloor N \lambda_k \rfloor )$

and

$\displaystyle \exp[x/N] := (\exp(x_1/N), \dots, \exp(x_k/N)).$

More generally, one has

$\displaystyle N^{j(j-1)/2-k(k-1)/2} s_{\lfloor N \lambda \rfloor / \lfloor N \mu \rfloor}( \exp[ x/N ] ) \rightarrow S_{\lambda/\mu}(x)$

as ${N \rightarrow \infty}$ for any length ${k}$ real partition ${\lambda}$, any length ${j}$ real partition ${\mu}$ with ${0 \leq j \leq k}$, and any ${x = (x_{j+1},\dots,x_k) \in {\bf R}^{k-j}}$.
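The scaling limit (11) is easy to observe numerically. In the sketch below (my own illustration) we take ${k=2}$, where ${s_{(a,b)}(p,q) = (pq)^b h_{a-b}(p,q)}$ and ${S_{(\lambda_1,\lambda_2)}(x_1,x_2)}$ has the closed form obtained by evaluating the one-dimensional integral above, and watch the normalised discrete Schur polynomial converge as ${N}$ grows.

```python
import math

def schur_2(a, b, p, q):
    # s_{(a,b)}(p,q) = (pq)^b * h_{a-b}(p,q), via the geometric sum
    return (p * q) ** b * sum(p ** j * q ** (a - b - j) for j in range(a - b + 1))

def S_cont(l1, l2, x1, x2):
    # continuous Schur function S_{(l1,l2)}(x1,x2), for x1 != x2
    return (math.exp(l1 * x1 + l2 * x2) - math.exp(l2 * x1 + l1 * x2)) / (x1 - x2)

l1, l2 = 2.0, 1.0
x1, x2 = 0.5, -0.25
target = S_cont(l1, l2, x1, x2)
for N in (10, 100, 1000, 10000):
    # for k=2 the normalisation in (11) is N^{-k(k-1)/2} = N^{-1}
    approx = schur_2(math.floor(N * l1), math.floor(N * l2),
                     math.exp(x1 / N), math.exp(x2 / N)) / N
    print(N, approx, target)
```

The printed values approach the target at the ${O(1/N)}$ rate one expects from a Riemann sum approximation.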

As a consequence of these limiting formulae, one expects all of the discrete identities above to have continuous counterparts. This is indeed the case; below the fold we shall prove the discrete and continuous identities in parallel. These are not new results by any means, but I was not able to locate a good place in the literature where they are explicitly written down, so I thought I would try to do so here (primarily for my own internal reference, but perhaps the calculations will be worthwhile to some others also).

Apoorva Khare and I have just uploaded to the arXiv our paper “On the sign patterns of entrywise positivity preservers in fixed dimension“. This paper explores the relationship between positive definiteness of Hermitian matrices, and entrywise operations on these matrices. The starting point for this theory is the Schur product theorem, which asserts that if ${A = (a_{ij})_{1 \leq i,j \leq N}}$ and ${B = (b_{ij})_{1 \leq i,j \leq N}}$ are two ${N \times N}$ Hermitian matrices that are positive semi-definite, then their Hadamard product

$\displaystyle A \circ B := (a_{ij} b_{ij})_{1 \leq i,j \leq N}$

is also positive semi-definite. (One should caution that the Hadamard product is not the same as the usual matrix product.) To prove this theorem, first observe that the claim is easy when ${A = {\bf u} {\bf u}^*}$ and ${B = {\bf v} {\bf v}^*}$ are rank one positive semi-definite matrices, since in this case ${A \circ B = ({\bf u} \circ {\bf v}) ({\bf u} \circ {\bf v})^*}$ is also a rank one positive semi-definite matrix. The general case then follows by noting from the spectral theorem that a general positive semi-definite matrix can be expressed as a non-negative linear combination of rank one positive semi-definite matrices, and using the bilinearity of the Hadamard product and the fact that the set of positive semi-definite matrices forms a convex cone. A modification of this argument also lets one replace “positive semi-definite” by “positive definite” in the statement of the Schur product theorem.
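The Schur product theorem is simple enough to test directly. The following Python sketch (my own illustration, using only the standard library) builds two Gram matrices ${M M^T}$, which are automatically positive semi-definite, forms their Hadamard product, and certifies positive semi-definiteness by checking that every principal minor is non-negative.

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; exact integer arithmetic
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def gram(M):
    # M M^T is positive semi-definite for any real matrix M
    n = len(M)
    return [[sum(M[i][t] * M[j][t] for t in range(len(M[0]))) for j in range(n)]
            for i in range(n)]

def is_psd(A):
    # a real symmetric matrix is PSD iff every principal minor is non-negative
    n = len(A)
    return all(det([[A[i][j] for j in S] for i in S]) >= 0
               for r in range(1, n + 1) for S in combinations(range(n), r))

A = gram([[1, 2, 0], [0, 1, -1], [3, 1, 1]])
B = gram([[2, -1, 1], [1, 0, 1], [0, 1, 4]])
had = [[A[i][j] * B[i][j] for j in range(3)] for i in range(3)]
print(is_psd(A), is_psd(B), is_psd(had))  # True True True
```

(For positive semi-definiteness one really does need all principal minors, not just the leading ones; the leading-minor test of Sylvester only characterises positive definiteness.)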

One corollary of the Schur product theorem is that any polynomial ${P(z) = c_0 + c_1 z + \dots + c_d z^d}$ with non-negative coefficients ${c_n \geq 0}$ is entrywise positivity preserving on the space ${{\mathbb P}_N({\bf C})}$ of ${N \times N}$ positive semi-definite Hermitian matrices, in the sense that for any matrix ${A = (a_{ij})_{1 \leq i,j \leq N}}$ in ${{\mathbb P}_N({\bf C})}$, the entrywise application

$\displaystyle P[A] := (P(a_{ij}))_{1 \leq i,j \leq N}$

of ${P}$ to ${A}$ is also positive semi-definite. (As before, one should caution that ${P[A]}$ is not the application ${P(A)}$ of ${P}$ to ${A}$ by the usual functional calculus.) Indeed, one can expand

$\displaystyle P[A] = c_0 A^{\circ 0} + c_1 A^{\circ 1} + \dots + c_d A^{\circ d},$

where ${A^{\circ i}}$ is the Hadamard product of ${i}$ copies of ${A}$, and the claim now follows from the Schur product theorem and the fact that ${{\mathbb P}_N({\bf C})}$ is a convex cone.

A slight variant of this argument, already observed by Pólya and Szegő in 1925, shows that if ${I}$ is any subset of ${{\bf C}}$ and

$\displaystyle f(z) = \sum_{n=0}^\infty c_n z^n \ \ \ \ \ (1)$

is a power series with non-negative coefficients ${c_n \geq 0}$ that is absolutely and uniformly convergent on ${I}$, then ${f}$ will be entrywise positivity preserving on the set ${{\mathbb P}_N(I)}$ of positive definite matrices with entries in ${I}$. (In the case that ${I}$ is of the form ${I = [0,\rho]}$, such functions are precisely the absolutely monotonic functions on ${I}$.)

In the work of Schoenberg and of Rudin, we have a converse: if ${f: (-1,1) \rightarrow {\bf C}}$ is a function that is entrywise positivity preserving on ${{\mathbb P}_N((-1,1))}$ for all ${N}$, then it must be of the form (1) with ${c_n \geq 0}$. Variants of this result, with ${(-1,1)}$ replaced by other domains, appear in the work of Horn, Vasudeva, and Guillot-Khare-Rajaratnam.

This gives a satisfactory classification of functions ${f}$ that are entrywise positivity preservers in all dimensions ${N}$ simultaneously. However, the question remains as to what happens if one fixes the dimension ${N}$, in which case one may have a larger class of entrywise positivity preservers. For instance, in the trivial case ${N=1}$, a function ${f}$ is entrywise positivity preserving on ${{\mathbb P}_1((0,\rho))}$ if and only if ${f}$ is non-negative on ${(0,\rho)}$. For higher ${N}$, there is a necessary condition of Horn (refined slightly by Guillot-Khare-Rajaratnam) which asserts (at least in the case of smooth ${f}$) that all derivatives of ${f}$ at zero up to ${(N-1)^{th}}$ order must be non-negative in order for ${f}$ to be entrywise positivity preserving on ${{\mathbb P}_N((0,\rho))}$ for some ${0 < \rho < \infty}$. In particular, if ${f}$ is of the form (1), then ${c_0,\dots,c_{N-1}}$ must be non-negative. In fact, a stronger assertion can be made, namely that the first ${N}$ non-zero coefficients in (1) (if they exist) must be positive, or equivalently any negative term in (1) must be preceded (though not necessarily immediately) by at least ${N}$ positive terms. If ${f}$ of the form (1) is entrywise positivity preserving on the larger set ${{\mathbb P}_N((0,+\infty))}$, one can furthermore show that any negative term in (1) must also be followed (though not necessarily immediately) by at least ${N}$ positive terms.

The main result of this paper is that these sign conditions are the only constraints for entrywise positivity preserving power series. More precisely:

Theorem 1 For each ${n}$, let ${\epsilon_n \in \{-1,0,+1\}}$ be a sign.

• Suppose that any negative sign ${\epsilon_M = -1}$ is preceded by at least ${N}$ positive signs (thus there exists ${0 \leq n_0 < \dots < n_{N-1}< M}$ with ${\epsilon_{n_0} = \dots = \epsilon_{n_{N-1}} = +1}$). Then, for any ${0 < \rho < \infty}$, there exists a convergent power series (1) on ${(0,\rho)}$, with each ${c_n}$ having the sign of ${\epsilon_n}$, which is entrywise positivity preserving on ${{\mathbb P}_N((0,\rho))}$.
• Suppose in addition that any negative sign ${\epsilon_M = -1}$ is followed by at least ${N}$ positive signs (thus there exists ${M < n_N < \dots < n_{2N-1}}$ with ${\epsilon_{n_N} = \dots = \epsilon_{n_{2N-1}} = +1}$). Then there exists a convergent power series (1) on ${(0,+\infty)}$, with each ${c_n}$ having the sign of ${\epsilon_n}$, which is entrywise positivity preserving on ${{\mathbb P}_N((0,+\infty))}$.

One can ask the same question with ${(0,\rho)}$ or ${(0,+\infty)}$ replaced by other domains such as ${(-\rho,\rho)}$, or the complex disk ${D(0,\rho)}$, but it turns out that there are far fewer entrywise positivity preserving functions in those cases basically because of the non-trivial zeroes of Schur polynomials in these ranges; see the paper for further discussion. We also have some quantitative bounds on how negative some of the coefficients can be compared to the positive coefficients, but they are a bit technical to state here.

The heart of the proofs of these results is an analysis of the determinants ${\mathrm{det}( P[ {\bf u} {\bf u}^* ] )}$ of polynomials ${P}$ applied entrywise to rank one matrices ${{\bf u} {\bf u}^*}$; the positivity of these determinants can be used (together with a continuity argument) to establish the positive definiteness of ${P[{\bf u} {\bf u}^*]}$ for various ranges of ${P}$ and ${{\bf u}}$. Using the Cauchy-Binet formula, one can rewrite such determinants as linear combinations of squares of magnitudes of generalised Vandermonde determinants

$\displaystyle \mathrm{det}( u_i^{n_j} )_{1 \leq i,j \leq N},$

where ${{\bf u} = (u_1,\dots,u_N)}$ and the signs of the coefficients in the linear combination are determined by the signs of the coefficients of ${P}$. The task is then to find upper and lower bounds for the magnitudes of such generalised Vandermonde determinants. These determinants oscillate in sign, which makes the problem look difficult; however, an algebraic miracle intervenes, namely the factorisation

$\displaystyle \mathrm{det}( u_i^{n_j} )_{1 \leq i,j \leq N} = \pm V({\bf u}) s_\lambda({\bf u})$

of the generalised Vandermonde determinant into the ordinary Vandermonde determinant

$\displaystyle V({\bf u}) = \prod_{1 \leq i < j \leq N} (u_j - u_i)$

and a Schur polynomial ${s_\lambda}$ applied to ${{\bf u}}$, where the weight ${\lambda}$ of the Schur polynomial is determined by ${n_1,\dots,n_N}$ in a simple fashion. The problem then boils down to obtaining upper and lower bounds for these Schur polynomials. Because we are restricting attention to matrices taking values in ${(0,\rho)}$ or ${(0,+\infty)}$, the entries of ${{\bf u}}$ can be taken to be non-negative. One can then take advantage of the total positivity of the Schur polynomials to compare these polynomials with a monomial, at which point one can obtain good criteria for ${P[A]}$ to be positive definite when ${A}$ is a rank one positive definite matrix ${A = {\bf u} {\bf u}^*}$.
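This “algebraic miracle” can be observed directly in sympy; the sketch below (my own illustration, with the exponents ${(n_1,n_2,n_3) = (5,2,0)}$ chosen arbitrarily) checks the factorisation and then notes that, on the positive orthant, the Schur factor is strictly positive, so the generalised Vandermonde determinant carries the same sign as the ordinary one.

```python
import itertools
from sympy import symbols, Matrix, Mul, expand, Integer

u = symbols('u1 u2 u3', positive=True)
N = 3
n = (5, 2, 0)                                        # decreasing exponents n_1 > n_2 > n_3
lam = tuple(n[j] - (N - 1 - j) for j in range(N))    # the associated partition (3, 1, 0)

def h(d, v):
    # complete homogeneous symmetric polynomial of degree d in the variables v
    if d < 0:
        return Integer(0)
    return sum(Mul(*c) for c in itertools.combinations_with_replacement(v, d))

gv = Matrix(N, N, lambda i, j: u[i] ** n[j]).det()   # generalised Vandermonde determinant
s = Matrix(N, N, lambda i, j: h(lam[j] - (j + 1) + (i + 1), u)).det()  # s_lam via Jacobi-Trudi
Vf = Mul(*[u[i] - u[j] for i in range(N) for j in range(i + 1, N)])    # one choice of sign for V

print(expand(gv - Vf * s))   # 0: det(u_i^{n_j}) = +-V(u) s_lam(u), with + for this ordering
# on the positive orthant, s_lam is a sum of monomials with positive coefficients:
print(s.subs({u[0]: 3, u[1]: 2, u[2]: 1}) > 0)   # True
```

The ${\pm}$ sign in the factorisation simply records which ordering convention one adopts for the Vandermonde product.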

If we allow the exponents ${n_1,\dots,n_N}$ to be real numbers rather than integers (thus replacing polynomials or power series by Puiseux series or Hahn series), then we lose the above algebraic miracle, but we can replace it with a geometric miracle, namely the Harish-Chandra-Itzykson-Zuber identity, which I discussed in this previous blog post. This factors the above generalised Vandermonde determinant as the product of the ordinary Vandermonde determinant and an integral of a positive quantity over the unitary group, which one can again compare with a monomial after some fairly elementary estimates.

It remains to understand what happens for more general positive semi-definite matrices ${A}$. Here we use a trick of FitzGerald and Horn to amplify the rank one case to the general case, by expressing a general positive semi-definite matrix ${A}$ as a linear combination of a rank one matrix ${{\bf u} {\bf u}^*}$ and another positive semi-definite matrix ${B}$ that vanishes on the last row and column (and is thus effectively a positive semi-definite ${(N-1) \times (N-1)}$ matrix). Using the fundamental theorem of calculus to continuously deform the rank one matrix ${{\bf u} {\bf u}^*}$ to ${A}$ in the direction ${B}$, one can then obtain positivity results for ${P[A]}$ from positivity results for ${P[{\bf u} {\bf u}^*]}$ combined with an induction hypothesis on ${N}$.