Fix a non-negative integer {k}. Define a (weak) integer partition of length {k} to be a tuple {\lambda = (\lambda_1,\dots,\lambda_k)} of non-increasing non-negative integers {\lambda_1 \geq \dots \geq \lambda_k \geq 0}. (Here our partitions are “weak” in the sense that we allow some parts of the partition to be zero. Henceforth we will omit the modifier “weak”, as we will not need to consider the more usual notion of “strong” partitions.) To each such partition {\lambda}, one can associate a Young diagram consisting of {k} left-justified rows of boxes, with the {i^{th}} row containing {\lambda_i} boxes. A semi-standard Young tableau (or Young tableau for short) {T} of shape {\lambda} is a filling of these boxes by integers in {\{1,\dots,k\}} that is weakly increasing along rows (moving rightwards) and strictly increasing along columns (moving downwards). The collection of such tableaux will be denoted {{\mathcal T}_\lambda}. The weight {|T|} of a tableau {T} is the tuple {(n_1,\dots,n_k)}, where {n_i} is the number of occurrences of the integer {i} in the tableau. For instance, if {k=3} and {\lambda = (6,4,2)}, an example of a Young tableau of shape {\lambda} would be

\displaystyle  \begin{tabular}{|c|c|c|c|c|c|} \hline 1 & 1 & 1 & 2 & 3 & 3 \\ \cline{1-6} 2 & 2 & 2 &3\\ \cline{1-4} 3 & 3\\ \cline{1-2} \end{tabular}

The weight here would be {|T| = (3,4,5)}.
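To make the combinatorics concrete, here is a short Python sketch (the function names are ours, chosen for illustration) that enumerates the semi-standard Young tableaux of a given shape and computes their weights; in particular it confirms that the example tableau above has weight {(3,4,5)}.

```python
from itertools import product

def ssyt(shape, k):
    """Enumerate semistandard Young tableaux of the given shape with entries in 1..k:
    rows weakly increase rightwards, columns strictly increase downwards."""
    rows = [r for r in shape if r > 0]
    def fill(i, acc):
        if i == len(rows):
            yield [row[:] for row in acc]
            return
        for row in product(range(1, k + 1), repeat=rows[i]):
            if any(row[j] > row[j + 1] for j in range(len(row) - 1)):
                continue  # must weakly increase along the row
            if i > 0 and any(acc[i - 1][j] >= row[j] for j in range(len(row))):
                continue  # must strictly increase down each column
            acc.append(list(row))
            yield from fill(i + 1, acc)
            acc.pop()
    yield from fill(0, [])

def weight(T, k):
    """The weight |T| = (n_1,...,n_k), counting occurrences of each entry."""
    n = [0] * k
    for row in T:
        for v in row:
            n[v - 1] += 1
    return tuple(n)

example = [[1, 1, 1, 2, 3, 3], [2, 2, 2, 3], [3, 3]]
assert weight(example, 3) == (3, 4, 5)
assert any(T == example for T in ssyt((6, 4, 2), 3))
```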

To each partition {\lambda} one can associate the Schur polynomial {s_\lambda(u_1,\dots,u_k)} on {k} variables {u = (u_1,\dots,u_k)}, which we will define as

\displaystyle  s_\lambda(u) := \sum_{T \in {\mathcal T}_\lambda} u^{|T|}

using the multinomial convention

\displaystyle (u_1,\dots,u_k)^{(n_1,\dots,n_k)} := u_1^{n_1} \dots u_k^{n_k}.

Thus for instance the Young tableau {T} given above would contribute a term {u_1^3 u_2^4 u_3^5} to the Schur polynomial {s_{(6,4,2)}(u_1,u_2,u_3)}. In the case of partitions of the form {(n,0,\dots,0)}, the Schur polynomial {s_{(n,0,\dots,0)}} is just the complete homogeneous symmetric polynomial {h_n} of degree {n} on {k} variables:

\displaystyle  s_{(n,0,\dots,0)}(u_1,\dots,u_k) := \sum_{n_1,\dots,n_k \geq 0: n_1+\dots+n_k = n} u_1^{n_1} \dots u_k^{n_k},

thus for instance

\displaystyle  s_{(3,0)}(u_1,u_2) = u_1^3 + u_1^2 u_2 + u_1 u_2^2 + u_2^3.
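For one-row shapes the tableaux are just weakly increasing fillings, i.e. multisets of entries, which is exactly how the identification with {h_n} arises. A minimal Python illustration of this (names ours):

```python
from itertools import combinations_with_replacement

def schur_row(n, u):
    """s_{(n,0,...,0)}(u): a one-row tableau is a weakly increasing filling,
    i.e. a multiset of n entries, so the sum of monomials is h_n(u)."""
    total = 0
    for filling in combinations_with_replacement(u, n):
        term = 1
        for v in filling:
            term *= v
        total += term
    return total

# s_{(3,0)}(u1,u2) = u1^3 + u1^2 u2 + u1 u2^2 + u2^3, evaluated at (u1,u2) = (2,3)
assert schur_row(3, (2, 3)) == 2**3 + 2**2 * 3 + 2 * 3**2 + 3**3
```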

Schur polynomials are ubiquitous in the algebraic combinatorics of “type {A} objects” such as the symmetric group {S_k}, the general linear group {GL_k}, or the unitary group {U_k}. For instance, one can view {s_\lambda} as the character of an irreducible polynomial representation of {GL_k({\bf C})} associated with the partition {\lambda}. However, we will not focus on these interpretations of Schur polynomials in this post.

This definition of Schur polynomials allows for a way to describe the polynomials recursively. If {k > 1} and {T} is a Young tableau of shape {\lambda = (\lambda_1,\dots,\lambda_k)}, taking values in {\{1,\dots,k\}}, one can form a sub-tableau {T'} of some shape {\lambda' = (\lambda'_1,\dots,\lambda'_{k-1})} by removing all the appearances of {k} (which, among other things, necessarily deletes the {k^{th}} row). For instance, with {T} as in the previous example, the sub-tableau {T'} would be

\displaystyle  \begin{tabular}{|c|c|c|c|} \hline 1 & 1 & 1 & 2 \\ \cline{1-4} 2 & 2 & 2 \\ \cline{1-3} \end{tabular}

and the reduced partition {\lambda'} in this case is {(4,3)}. As Young tableaux are required to be strictly increasing down columns, we can see that the reduced partition {\lambda'} must intersperse the original partition {\lambda} in the sense that

\displaystyle  \lambda_{i+1} \leq \lambda'_i \leq \lambda_i \ \ \ \ \ (1)

for all {1 \leq i \leq k-1}; we denote this interspersion relation as {\lambda' \prec \lambda} (though we caution that this is not intended to be a partial ordering). In the converse direction, if {\lambda' \prec \lambda} and {T'} is a Young tableau with shape {\lambda'} with entries in {\{1,\dots,k-1\}}, one can form a Young tableau {T} with shape {\lambda} and entries in {\{1,\dots,k\}} by appending to {T'} an entry of {k} in all the boxes that appear in the {\lambda} shape but not the {\lambda'} shape. This one-to-one correspondence leads to the recursion

\displaystyle  s_\lambda(u) = \sum_{\lambda' \prec \lambda} s_{\lambda'}(u') u_k^{|\lambda| - |\lambda'|} \ \ \ \ \ (2)

where {u = (u_1,\dots,u_k)}, {u' = (u_1,\dots,u_{k-1})}, and the size {|\lambda|} of a partition {\lambda = (\lambda_1,\dots,\lambda_k)} is defined as {|\lambda| := \lambda_1 + \dots + \lambda_k}.
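The branching recursion (2) translates directly into a short recursive evaluator; here is a Python sketch (names ours) which checks one small case, including the symmetry of the result in the variables (which, as noted below, is not obvious from the definition).

```python
from itertools import product

def schur_rec(lam, u):
    """Evaluate s_lam(u) via the branching rule (2): sum over interspersing
    partitions lam' of s_{lam'}(u_1,...,u_{k-1}) * u_k^(|lam| - |lam'|)."""
    k = len(lam)
    if k == 0:
        return 1
    total = 0
    # lam'_i ranges over [lam_{i+1}, lam_i]; such tuples are automatically non-increasing
    ranges = [range(lam[i + 1], lam[i] + 1) for i in range(k - 1)]
    for lamp in product(*ranges):
        total += schur_rec(lamp, u[:-1]) * u[-1] ** (sum(lam) - sum(lamp))
    return total

# s_{(2,1)}(u1,u2) = u1^2 u2 + u1 u2^2, so the value at (2,3) is 30;
# note the (not yet obvious) symmetry under swapping the variables
assert schur_rec((2, 1), (2, 3)) == 30 == schur_rec((2, 1), (3, 2))
```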

One can use this recursion (2) to prove some further standard identities for Schur polynomials, such as the determinant identity

\displaystyle  s_\lambda(u) V(u) = \det( u_i^{\lambda_j+k-j} )_{1 \leq i,j \leq k} \ \ \ \ \ (3)

for {u=(u_1,\dots,u_k)}, where {V(u)} denotes the Vandermonde determinant

\displaystyle  V(u) := \prod_{1 \leq i < j \leq k} (u_i - u_j), \ \ \ \ \ (4)

or the Jacobi-Trudi identity

\displaystyle  s_\lambda(u) = \det( h_{\lambda_j - j + i}(u) )_{1 \leq i,j \leq k}, \ \ \ \ \ (5)

with the convention that {h_d(u) = 0} if {d} is negative. Thus for instance

\displaystyle s_{(1,1,0,\dots,0)}(u) = h_1^2(u) - h_0(u) h_2(u) = \sum_{1 \leq i < j \leq k} u_i u_j.
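This instance of the Jacobi-Trudi identity is easy to test numerically; the Python fragment below (names ours) evaluates the {2 \times 2} determinant {h_1^2 - h_0 h_2} at a sample point in three variables and compares it with {\sum_{1 \leq i < j \leq k} u_i u_j}.

```python
from itertools import combinations, combinations_with_replacement

def h(n, u):
    """Complete homogeneous symmetric polynomial h_n(u), with h_n = 0 for n < 0:
    the sum over all multisets of n of the variables."""
    if n < 0:
        return 0
    total = 0
    for multiset in combinations_with_replacement(u, n):
        term = 1
        for v in multiset:
            term *= v
        total += term
    return total

u = (2, 3, 5)
jt = h(1, u) ** 2 - h(0, u) * h(2, u)           # the determinant h_1^2 - h_0 h_2
e2 = sum(a * b for a, b in combinations(u, 2))  # sum_{i<j} u_i u_j
assert jt == e2 == 31
```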

We review the (standard) derivation of these identities via (2) below the fold. Among other things, these identities show that the Schur polynomials are symmetric, which is not immediately obvious from their definition.

One can also iterate (2) to write

\displaystyle  s_\lambda(u) = \sum_{() = \lambda^0 \prec \lambda^1 \prec \dots \prec \lambda^k = \lambda} \prod_{j=1}^k u_j^{|\lambda^j| - |\lambda^{j-1}|} \ \ \ \ \ (6)

where the sum is over all tuples {\lambda^1,\dots,\lambda^k}, where each {\lambda^j} is a partition of length {j} that intersperses the next partition {\lambda^{j+1}}, with {\lambda^k} set equal to {\lambda}. We will call such a tuple an integral Gelfand-Tsetlin pattern based at {\lambda}.
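The Gelfand-Tsetlin expansion (6) can likewise be checked by brute force; the sketch below (names ours) enumerates integral Gelfand-Tsetlin patterns and sums the monomials. Setting all variables to {1} simply counts the patterns, which by the correspondence above also counts the tableaux.

```python
from itertools import product

def gt_patterns(lam):
    """Enumerate integral Gelfand-Tsetlin patterns based at lam, i.e. tuples
    (lam^1, ..., lam^k) with lam^k = lam and each lam^j interspersing lam^{j+1}."""
    if len(lam) == 1:
        yield (lam,)
        return
    ranges = [range(lam[i + 1], lam[i] + 1) for i in range(len(lam) - 1)]
    for lamp in product(*ranges):
        for pat in gt_patterns(tuple(lamp)):
            yield pat + (lam,)

def schur_via_gt(lam, u):
    """Evaluate s_lam(u) by (6), summing u_j^(|lam^j| - |lam^{j-1}|) over patterns."""
    total = 0
    for pat in gt_patterns(lam):
        sizes = [0] + [sum(row) for row in pat]  # |lam^0| = 0, |lam^1|, ..., |lam^k|
        term = 1
        for j in range(len(u)):
            term *= u[j] ** (sizes[j + 1] - sizes[j])
        total += term
    return total

assert schur_via_gt((2, 1), (2, 3)) == 30        # = u1^2 u2 + u1 u2^2 at (2,3)
# at u = (1,1,1) the sum just counts the patterns (equivalently, the tableaux)
assert schur_via_gt((6, 4, 2), (1, 1, 1)) == len(list(gt_patterns((6, 4, 2)))) == 27
```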

One can generalise (6) by introducing the skew Schur functions

\displaystyle  s_{\lambda/\mu}(u) := \sum_{\mu = \lambda^i \prec \dots \prec \lambda^k = \lambda} \prod_{j=i+1}^k u_j^{|\lambda^j| - |\lambda^{j-1}|} \ \ \ \ \ (7)

for {u = (u_{i+1},\dots,u_k)}, whenever {\lambda} is a partition of length {k} and {\mu} a partition of length {i} for some {0 \leq i \leq k}, thus the Schur polynomial {s_\lambda} is also the skew Schur polynomial {s_{\lambda /()}} with {i=0}. (One could relabel the variables here to be something like {(u_1,\dots,u_{k-i})} instead, but this labeling seems slightly more natural, particularly in view of identities such as (8) below.)

By construction, we have the decomposition

\displaystyle  s_{\lambda/\nu}(u_{i+1},\dots,u_k) = \sum_\mu s_{\mu/\nu}(u_{i+1},\dots,u_j) s_{\lambda/\mu}(u_{j+1},\dots,u_k) \ \ \ \ \ (8)

whenever {0 \leq i \leq j \leq k}, and {\nu, \mu, \lambda} are partitions of lengths {i,j,k} respectively. This gives another recursive way to understand Schur polynomials and skew Schur polynomials. For instance, one can use it to establish the generalised Jacobi-Trudi identity

\displaystyle  s_{\lambda/\mu}(u) = \det( h_{\lambda_j - j - \mu_i + i}(u) )_{1 \leq i,j \leq k}, \ \ \ \ \ (9)

with the convention that {\mu_i = 0} for {i} larger than the length of {\mu}; we do this below the fold.

The Schur polynomials (and skew Schur polynomials) are “discretised” (or “quantised”) in the sense that their parameters {\lambda, \mu} are required to be integer-valued, and their definition similarly involves summation over a discrete set. It turns out that there are “continuous” (or “classical”) analogues of these functions, in which the parameters {\lambda,\mu} now take real values rather than integers, and are defined via integration rather than summation. One can view these continuous analogues as a “semiclassical limit” of their discrete counterparts, in a manner that can be made precise using the machinery of geometric quantisation, but we will not do so here.

The continuous analogues can be defined as follows. Define a real partition of length {k} to be a tuple {\lambda = (\lambda_1,\dots,\lambda_k)} where {\lambda_1 \geq \dots \geq \lambda_k \geq 0} are now real numbers. We can define the relation {\lambda' \prec \lambda} of interspersion between a length {k-1} real partition {\lambda' = (\lambda'_1,\dots,\lambda'_{k-1})} and a length {k} real partition {\lambda = (\lambda_1,\dots,\lambda_{k})} precisely as before, by requiring that the inequalities (1) hold for all {1 \leq i \leq k-1}. We can then define the continuous Schur functions {S_\lambda(x)} for {x = (x_1,\dots,x_k) \in {\bf R}^k} recursively by defining

\displaystyle  S_{()}() = 1


\displaystyle  S_\lambda(x) = \int_{\lambda' \prec \lambda} S_{\lambda'}(x') \exp( (|\lambda| - |\lambda'|) x_k )\ d\lambda' \ \ \ \ \ (10)

for {k \geq 1} and {\lambda} of length {k}, where {x' := (x_1,\dots,x_{k-1})} and the integral is with respect to {k-1}-dimensional Lebesgue measure, and {|\lambda| = \lambda_1 + \dots + \lambda_k} as before. Thus for instance

\displaystyle  S_{(\lambda_1)}(x_1) = \exp( \lambda_1 x_1 )


\displaystyle  S_{(\lambda_1,\lambda_2)}(x_1,x_2) = \int_{\lambda_2}^{\lambda_1} \exp( \lambda'_1 x_1 + (\lambda_1+\lambda_2-\lambda'_1) x_2 )\ d\lambda'_1.
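In this two-variable example the exponent is linear in {\lambda'_1}, so the integral can be evaluated in closed form as {\frac{\exp(\lambda_1 x_1 + \lambda_2 x_2) - \exp(\lambda_2 x_1 + \lambda_1 x_2)}{x_1-x_2}} (for {x_1 \neq x_2}). A quick numerical sanity check in Python (quadrature scheme and names ours):

```python
import math

def S2(l1, l2, x1, x2, n=20000):
    """S_{(l1,l2)}(x1,x2): midpoint-rule quadrature of the defining integral above."""
    step = (l1 - l2) / n
    total = 0.0
    for i in range(n):
        lp = l2 + (i + 0.5) * step
        total += math.exp(lp * x1 + (l1 + l2 - lp) * x2)
    return total * step

def S2_closed(l1, l2, x1, x2):
    """Closed form obtained by integrating the linear exponent (assumes x1 != x2)."""
    return (math.exp(l1 * x1 + l2 * x2) - math.exp(l2 * x1 + l1 * x2)) / (x1 - x2)

assert abs(S2(2.0, 0.5, 0.7, -0.3) - S2_closed(2.0, 0.5, 0.7, -0.3)) < 1e-6
```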

More generally, we can define the continuous skew Schur functions {S_{\lambda/\mu}(x)} for {\lambda} of length {k}, {\mu} of length {j \leq k}, and {x = (x_{j+1},\dots,x_k) \in {\bf R}^{k-j}} recursively by defining

\displaystyle  S_{\mu/\mu}() = 1


\displaystyle  S_{\lambda/\mu}(x) = \int_{\lambda' \prec \lambda} S_{\lambda'/\mu}(x') \exp( (|\lambda| - |\lambda'|) x_k )\ d\lambda'

for {k > j}. Thus for instance

\displaystyle  S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1,\mu_2)}(x_3) = 1_{\lambda_3 \leq \mu_2 \leq \lambda_2 \leq \mu_1 \leq \lambda_1} \exp( x_3 (\lambda_1+\lambda_2+\lambda_3 - \mu_1 - \mu_2 ))


\displaystyle  S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1)}(x_2, x_3) = \int_{\lambda_3 \leq \lambda'_2 \leq \lambda_2, \mu_1} \int_{\mu_1, \lambda_2 \leq \lambda'_1 \leq \lambda_1}

\displaystyle \exp( x_2 (\lambda'_1+\lambda'_2 - \mu_1) + x_3 (\lambda_1+\lambda_2+\lambda_3 - \lambda'_1 - \lambda'_2))\ d\lambda'_1 d\lambda'_2.

By expanding out the recursion, one obtains the analogue

\displaystyle  S_\lambda(x) = \int_{\lambda^1 \prec \dots \prec \lambda^k = \lambda} \exp( \sum_{j=1}^k x_j (|\lambda^j| - |\lambda^{j-1}|))\ d\lambda^1 \dots d\lambda^{k-1},

of (6) (with the convention {|\lambda^0| := 0}), and more generally one has

\displaystyle  S_{\lambda/\mu}(x) = \int_{\mu = \lambda^i \prec \dots \prec \lambda^k = \lambda} \exp( \sum_{j=i+1}^k x_j (|\lambda^j| - |\lambda^{j-1}|))\ d\lambda^{i+1} \dots d\lambda^{k-1}.

We will call the tuples {(\lambda^1,\dots,\lambda^k)} in the first integral real Gelfand-Tsetlin patterns based at {\lambda}. The analogue of (8) is then

\displaystyle  S_{\lambda/\nu}(x_{i+1},\dots,x_k) = \int S_{\mu/\nu}(x_{i+1},\dots,x_j) S_{\lambda/\mu}(x_{j+1},\dots,x_k)\ d\mu

where the integral is over all real partitions {\mu} of length {j}, with Lebesgue measure.

By approximating various integrals by their Riemann sums, one can relate the continuous Schur functions to their discrete counterparts by the limiting formula

\displaystyle  N^{-k(k-1)/2} s_{\lfloor N \lambda \rfloor}( \exp[ x/N ] ) \rightarrow S_\lambda(x) \ \ \ \ \ (11)

as {N \rightarrow \infty} for any length {k} real partition {\lambda = (\lambda_1,\dots,\lambda_k)} and any {x = (x_1,\dots,x_k) \in {\bf R}^k}, where

\displaystyle  \lfloor N \lambda \rfloor := ( \lfloor N \lambda_1 \rfloor, \dots, \lfloor N \lambda_k \rfloor )


\displaystyle  \exp[x/N] := (\exp(x_1/N), \dots, \exp(x_k/N)).

More generally, one has

\displaystyle  N^{j(j+1)/2-k(k-1)/2} s_{\lfloor N \lambda \rfloor / \lfloor N \mu \rfloor}( \exp[ x/N ] ) \rightarrow S_{\lambda/\mu}(x)

as {N \rightarrow \infty} for any length {k} real partition {\lambda}, any length {j} real partition {\mu} with {0 \leq j \leq k}, and any {x = (x_{j+1},\dots,x_k) \in {\bf R}^{k-j}}.
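The convergence in (11) can be observed numerically in the case {k=2}; the sketch below (with an ad hoc choice of {N} and tolerance) compares the rescaled discrete Schur polynomial, computed from the branching sum {s_{(a,b)}(u_1,u_2) = \sum_{b \leq c \leq a} u_1^c u_2^{a+b-c}}, against the closed form of {S_{(\lambda_1,\lambda_2)}} computed earlier.

```python
import math

def s2(a, b, u1, u2):
    """s_{(a,b)}(u1,u2) = sum over b <= c <= a of u1^c u2^(a+b-c) (branching rule)."""
    return sum(u1 ** c * u2 ** (a + b - c) for c in range(b, a + 1))

def S2(l1, l2, x1, x2):
    """Closed form of the continuous Schur function S_{(l1,l2)}(x1,x2), x1 != x2."""
    return (math.exp(l1 * x1 + l2 * x2) - math.exp(l2 * x1 + l1 * x2)) / (x1 - x2)

l1, l2, x1, x2 = 1.5, 0.25, 0.8, -0.4
target = S2(l1, l2, x1, x2)
N = 4000   # for k = 2 the prefactor N^(-k(k-1)/2) is just 1/N
approx = s2(math.floor(N * l1), math.floor(N * l2),
            math.exp(x1 / N), math.exp(x2 / N)) / N
assert abs(approx - target) < 1e-2
```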

As a consequence of these limiting formulae, one expects all of the discrete identities above to have continuous counterparts. This is indeed the case; below the fold we shall prove the discrete and continuous identities in parallel. These are not new results by any means, but I was not able to locate a good place in the literature where they are explicitly written down, so I thought I would try to do so here (primarily for my own internal reference, but perhaps the calculations will be worthwhile to some others also).

— 1. Proofs of identities —

We first prove the determinant identity (3), by induction on {k}. The case {k=0} is trivial (one could also use {k=1} as the base case if desired); now suppose {k \geq 1} and the claim has already been proven for {k-1}. Writing {u = (u',u_k)} with {u' = (u_1,\dots,u_{k-1})}, we have from (4) that

\displaystyle  V(u) = V(u') \prod_{i=1}^{k-1} (u_i - u_k) \ \ \ \ \ (12)

so by (2) it will suffice to show that

\displaystyle  \sum_{\lambda' \prec \lambda} \det( u_i^{\lambda'_j+k-1-j} )_{1 \leq i,j \leq k-1} u_k^{|\lambda| - |\lambda'|} \prod_{i=1}^{k-1} (u_i - u_k) = \det( u_i^{\lambda_j+k-j} )_{1 \leq i,j \leq k}.

By continuity we may assume {u_k} is non-zero. Both sides are homogeneous in {u} of degree {|\lambda|+k(k-1)/2}, so without loss of generality we may normalise {u_k=1}, thus we need to show

\displaystyle  \sum_{\lambda' \prec \lambda} \det( u_i^{\lambda'_j+k-1-j} )_{1 \leq i,j \leq k-1} \prod_{i=1}^{k-1} (u_i - 1) = \det( u_i^{\lambda_j+k-j} )_{1 \leq i,j \leq k} \ \ \ \ \ (13)

where the bottom row of the matrix on the right-hand side consists entirely of {1}s.

The sum {\sum_{\lambda' \prec \lambda}} can be factored into {k-1} sums {\sum_{\lambda_{j+1} \leq \lambda'_j \leq \lambda_j}} for {j=1,\dots,k-1}. By the multilinearity of the determinant, the left-hand side of (13) may thus be written as

\displaystyle  \det( (u_i-1) \sum_{\lambda_{j+1} \leq \lambda'_j \leq \lambda_j} u_i^{\lambda'_j+k-1-j} )_{1 \leq i,j \leq k-1}.

This telescopes to

\displaystyle  \det( u_i^{\lambda_j+k-j} - u_i^{\lambda_{j+1}+k-(j+1)} )_{1 \leq i,j \leq k-1}.

By multilinearity, this expands out to an alternating sum of {2^{k-1}} terms; however, all but {k} of these terms will vanish due to having two columns identical. The {k} terms that survive are of the form

\displaystyle  (-1)^{k-a} \det( u_i^{\lambda_j+k-j} )_{1 \leq i \leq k-1; j \in \{1,\dots,k\} \backslash \{a\}}

for {a=1,\dots,k} (where we enumerate {\{1,\dots,k\} \backslash \{a\}} in increasing order); but this sums to {\det( u_i^{\lambda_j+k-j} )_{1 \leq i,j \leq k}} after performing cofactor expansion on the bottom row of the latter determinant. This proves (3).
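One can also test (3) directly for small {k} in exact integer arithmetic; the following Python fragment (with a hand-rolled {3 \times 3} determinant, and the Schur polynomial evaluated via the branching rule (2)) checks it at a sample point.

```python
from itertools import product

def schur(lam, u):
    """Evaluate s_lam(u) via the branching rule (2)."""
    k = len(lam)
    if k == 0:
        return 1
    total = 0
    ranges = [range(lam[i + 1], lam[i] + 1) for i in range(k - 1)]
    for lamp in product(*ranges):
        total += schur(lamp, u[:-1]) * u[-1] ** (sum(lam) - sum(lamp))
    return total

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

lam, u, k = (3, 1, 0), (2, 3, 5), 3
V = (u[0] - u[1]) * (u[0] - u[2]) * (u[1] - u[2])      # Vandermonde determinant (4)
M = [[u[i] ** (lam[j] + k - 1 - j) for j in range(k)]  # exponents lambda_j + k - j,
     for i in range(k)]                                # with j here 0-indexed
assert schur(lam, u) * V == det3(M)
```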

The continuous analogue of (3) is

\displaystyle  S_\lambda(x) V(x) = \det( \exp( x_i \lambda_j ) )_{1 \leq i,j \leq k}

and can either be proven from (3) and (11), or by mimicking the proof of (3) (replacing sums by integrals). We do the latter, leaving the former as an exercise for the reader. (This identity is also discussed at this MathOverflow question of mine, where it was noted that it essentially appears in this paper of Shatashvili; Apoorva Khare and I also used it in this recent paper.) Again we induct on {k}; the {k=0} case is trivial, so suppose {k \geq 1} and the claim has already been proven for {k-1}. Since

\displaystyle  S_\lambda(x) = \int_{\lambda' \prec \lambda} S_{\lambda'}(x') \exp( x_k (|\lambda| - |\lambda'|) )\ d\lambda'

it will suffice by (10) and (12) to prove that

\displaystyle  \int_{\lambda' \prec \lambda} \det( \exp( x_i \lambda'_j ) )_{1 \leq i,j \leq k-1} \exp( x_k (|\lambda| - |\lambda'|)) \prod_{i=1}^{k-1} (x_i - x_k)\ d\lambda'

\displaystyle = \det( \exp( x_i \lambda_j ) )_{1 \leq i,j \leq k}.

If we shift all of the {x_i} by the same shift {h}, both sides of this identity multiply by {\exp( h |\lambda| )}, so we may normalise {x_k=0}. Our task is now to show that

\displaystyle  \int_{\lambda' \prec \lambda} \det( \exp( x_i \lambda'_j ) )_{1 \leq i,j \leq k-1} \prod_{i=1}^{k-1} x_i\ d\lambda'

\displaystyle  = \det( \exp( x_i \lambda_j ) )_{1 \leq i,j \leq k}, \ \ \ \ \ (14)

where the matrix on the right-hand side has a bottom row consisting entirely of {1}s.

The integral {\int_{\lambda' \prec \lambda}\ d\lambda'} can be factored into {k-1} integrals {\int_{\lambda_{j+1}}^{\lambda_j}\ d\lambda'_j} for {j=1,\dots,k-1}. By the multilinearity of the determinant, the left-hand side of (14) may thus be written as

\displaystyle  \det( x_i \int_{\lambda_{j+1}}^{\lambda_j} \exp( x_i \lambda'_j )\ d\lambda'_j )_{1 \leq i,j \leq k-1}.

By the fundamental theorem of calculus, this evaluates to

\displaystyle  \det( \exp( x_i \lambda_j ) - \exp( x_i \lambda_{j+1} ) )_{1 \leq i,j \leq k-1}.

Again, this expands to {2^{k-1}} terms, all but {k} of which vanish, and the remaining {k} terms form the cofactor expansion of the right-hand side of (14).

Remark 1 Comparing (13) with (14) we obtain a relation between the discrete and continuous Schur functions, namely that

\displaystyle  s_\lambda(\exp[x]) V(\exp[x]) = S_{(\lambda_1+k-1,\lambda_2+k-2,\dots,\lambda_k)}(x) V(x)

for any integer partition {\lambda} and any {x \in {\bf R}^k}. One can use this identity to obtain an alternate proof of the limiting relation (11).

Now we turn to (5), which can be proven by a similar argument to (3). Again, the base case {k=0} (or {k=1}, if one prefers) is trivial, so suppose {k \geq 1} and the claim has already been proven for {k-1}. By (2) it will suffice to show that

\displaystyle  \sum_{\lambda' \prec \lambda} \det( h_{\lambda'_j - j + i}(u') )_{1 \leq i,j \leq k-1} u_k^{|\lambda|-|\lambda'|} = \det( h_{\lambda_j - j + i}(u) )_{1 \leq i,j \leq k}. \ \ \ \ \ (15)

Both sides are homogeneous of degree {|\lambda|}, so as before we may normalise {u_k=1}. Factoring the left-hand side summation into {k-1} summations and using multilinearity as before, the left-hand side may be written as

\displaystyle  \det( \sum_{\lambda_{j+1} \leq \lambda'_j \leq \lambda_j} h_{\lambda'_j - j + i}(u') )_{1 \leq i,j \leq k-1}.

Now one observes the identities

\displaystyle  h_{\lambda_j - j + i}(u) = \sum_{\lambda'_j \leq \lambda_j} h_{\lambda'_j - j + i}(u')

and similarly

\displaystyle  h_{\lambda_{j+1} - (j+1) + i}(u) = \sum_{\lambda'_j < \lambda_{j+1}} h_{\lambda'_j - j + i}(u')

(where {\lambda'_j} is understood to range over the integers), hence on subtracting

\displaystyle  h_{\lambda_j - j+i}(u) - h_{\lambda_{j+1} - (j+1) + i}(u) = \sum_{\lambda_{j+1} \leq \lambda'_j \leq \lambda_j} h_{\lambda'_j - j + i}(u')

and so the above determinant may be written as

\displaystyle  \det( h_{\lambda_j - j+i}(u) - h_{\lambda_{j+1} - (j+1) + i}(u) )_{1 \leq i,j \leq k-1}.

Again, this expands into {2^{k-1}} terms, all but {k} of which vanish, and which can be collected by cofactor expansion to become the determinant of the {k \times k} matrix whose top {k-1} rows are {(h_{\lambda_j - j+i}(u))_{1 \leq i \leq k-1; 1 \leq j \leq k}}, and whose bottom row {(1)_{1 \leq j \leq k}} consists entirely of {1}s.

Now we use the identity

\displaystyle  1 = \sum_{S \subset \{1,\dots,k-1\}} (-1)^{|S|} h_{d-|S|}(u) \prod_{i \in S} u_i

for any {d \geq 0}; here we continue to assume the normalisation {u_k = 1}. To verify this identity, we observe that the {u^n} coefficient of the right-hand side is equal to

\displaystyle  \sum_{S \subset \{1 \leq i \leq k-1: n_i \neq 0\}} (-1)^{|S|}

if {|n| \leq d}, and zero otherwise; but from the binomial theorem we see that this coefficient is {1} when {n=0} and {0} otherwise, giving the claim. Using this identity with {d = \lambda_j - j + k}, we can write the bottom row {(1)_{1 \leq j \leq k}} as the sum of {(h_{\lambda_j-j+k}(u))} plus a linear combination of {(h_{\lambda_j-j+i}(u))} for {i=1,\dots,k-1}, so after some row operations we conclude (15). The generalised Jacobi-Trudi identity (9) is proven similarly (keeping {\mu} fixed, and inducting on the length of {\lambda}); we leave this to the interested reader.
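The alternating-sign identity used above can also be checked mechanically; the following sketch (names ours) verifies it for small {d} with the normalisation {u_k=1} in force, as in the argument above.

```python
from itertools import combinations, combinations_with_replacement

def h(n, u):
    """h_n(u): the sum of all degree-n monomials in the variables u (0 for n < 0)."""
    if n < 0:
        return 0
    total = 0
    for multiset in combinations_with_replacement(u, n):
        term = 1
        for v in multiset:
            term *= v
        total += term
    return total

u = (2, 3, 5, 1)   # k = 4, normalised so that u_k = 1
for d in range(4):
    total = 0
    for r in range(len(u)):        # r = |S|, with S ranging over subsets of {1,...,k-1}
        for S in combinations(u[:-1], r):
            term = (-1) ** r * h(d - r, u)
            for v in S:
                term *= v
            total += term
    assert total == 1              # the identity 1 = sum_S (-1)^|S| h_{d-|S|}(u) prod u_i
```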

The continuous analogue of the Jacobi-Trudi identity (5) is a little less intuitive. The analogue of the complete homogeneous polynomials

\displaystyle  h_n(u_1,\dots,u_k) = \sum_{n_1+\dots+n_k=n: n_1,\dots,n_k \geq 0} u_1^{n_1} \dots u_k^{n_k}

for {n \geq 0} an integer, will be the functions

\displaystyle  H_t(x_1,\dots,x_k) := \int_{t_1+\dots+t_k=t: t_1,\dots,t_k \geq 0} \exp( t_1 x_1 + \dots + t_k x_k)\ dt_1 \dots dt_{k-1}

for {t \geq 0} a real number. Thus for example {H_t(x_1) = \exp(tx_1)} when {k=1}, and {H_t(x_1,x_2) = \frac{\exp(tx_1)-\exp(tx_2)}{x_1-x_2}} when {k=2} (so that {H_t(x,0) = \frac{\exp(tx) - 1}{x}}). By rescaling one may write

\displaystyle  H_t(x_1,\dots,x_k)

\displaystyle = t^{k-1} \int_{t_1+\dots+t_k=1: t_1,\dots,t_k \geq 0} \exp( t_1 t x_1 + \dots + t_k t x_k)\ dt_1 \dots dt_{k-1},

at which point it is clear that these expressions are smooth in {t} for any {t \geq 0}, so we may form derivatives {H^{(j)}_t(x) = \frac{d^j}{dt^j} H_t(x)} for any non-negative integer {j} and any {t \geq 0}; here our differentiation will always be in the {t} variable rather than the {x} variables. The analogue of (5) is then

\displaystyle  S_\lambda(x) = \det( H^{(i-1)}_{\lambda_j}(x) )_{1 \leq i,j \leq k}, \ \ \ \ \ (16)

thus for instance

\displaystyle  S_{(\lambda_1)}(x_1) = H_{\lambda_1}(x_1)


\displaystyle  S_{(\lambda_1,\lambda_2)}(x_1,x_2) = H_{\lambda_1}(x_1,x_2) H^{(1)}_{\lambda_2}(x_1,x_2) - H_{\lambda_2}(x_1,x_2) H^{(1)}_{\lambda_1}(x_1,x_2)

and so forth.
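As a numerical sanity check of (16) in the case {k=2}, the sketch below (names and quadrature parameters ours) verifies the closed form {H_t(x_1,x_2) = \frac{\exp(tx_1)-\exp(tx_2)}{x_1-x_2}} against the defining integral, and then compares {S_{(\lambda_1,\lambda_2)}} (computed from the defining integral (10)) with the {2 \times 2} determinant, using a finite difference for the {t}-derivative.

```python
import math

def H_quad(t, x1, x2, n=20000):
    """H_t(x1,x2) = integral over 0 <= t1 <= t of exp(t1*x1 + (t-t1)*x2) dt1,
    computed by the midpoint rule."""
    step = t / n
    total = 0.0
    for i in range(n):
        t1 = (i + 0.5) * step
        total += math.exp(t1 * x1 + (t - t1) * x2)
    return total * step

def H(t, x1, x2):
    """Closed form of H_t(x1,x2), assuming x1 != x2."""
    return (math.exp(t * x1) - math.exp(t * x2)) / (x1 - x2)

def dH(t, x1, x2, eps=1e-5):
    """t-derivative H^(1)_t(x1,x2), by a central finite difference."""
    return (H(t + eps, x1, x2) - H(t - eps, x1, x2)) / (2 * eps)

def S(l1, l2, x1, x2, n=20000):
    """S_{(l1,l2)}(x1,x2) by midpoint quadrature of the defining integral (10)."""
    step = (l1 - l2) / n
    total = 0.0
    for i in range(n):
        lp = l2 + (i + 0.5) * step
        total += math.exp(lp * x1 + (l1 + l2 - lp) * x2)
    return total * step

l1, l2, x1, x2 = 1.8, 0.4, 0.5, -0.7
assert abs(H_quad(l1, x1, x2) - H(l1, x1, x2)) < 1e-6
det = H(l1, x1, x2) * dH(l2, x1, x2) - H(l2, x1, x2) * dH(l1, x1, x2)
assert abs(S(l1, l2, x1, x2) - det) < 1e-5
```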

As before, we may prove (16) by induction on {k}. The cases {k=0,1} are easy, so let us suppose {k \geq 2} and that the claim already holds for {k-1} (actually the inductive argument will also work for {k=1} if one pays careful attention to the conventions). By (10), it suffices to show that

\displaystyle  \int_{\lambda' \prec \lambda} \det( H^{(i-1)}_{\lambda'_j}(x') )_{1 \leq i,j \leq k-1} \exp(x_k (|\lambda|-|\lambda'|))\ d\lambda'

\displaystyle  = \det( H^{(i-1)}_{\lambda_j}(x) )_{1 \leq i,j \leq k} \ \ \ \ \ (17)

whenever {x = (x_1,\dots,x_k) \in {\bf R}^k}, {\lambda} is a real partition of length {k}, and {x' := (x_1,\dots,x_{k-1})}. Shifting all the {x_j} by {h} will multiply each {H_t(x)} by {\exp( ht)}, and (after some application of the Leibniz rule and row operations) can be seen to multiply both sides here by {\exp(h|\lambda|)}; thus we may normalise {x_k=0}. We can then factor the integral and use multilinearity of the determinant to write the left-hand side of (17) as

\displaystyle  \det( \int_{\lambda_{j+1}}^{\lambda_j} H^{(i-1)}_{\lambda'_j}(x')\ d\lambda'_j )_{1 \leq i,j \leq k-1}.

From construction we see that

\displaystyle  H_t(x) = \int_0^t H_{t'}(x')\ dt'

for any {t \geq 0}, and hence

\displaystyle  H_{t_2}(x) - H_{t_1}(x) = \int_{t_1}^{t_2} H_t(x')\ dt

for any {t_2 \geq t_1 \geq 0}; actually with the convention that {H_t = 0} for negative {t}, this identity holds for all {t_2 \geq t_1}. Shifting {t_1,t_2,t} by {h} and then differentiating repeatedly at {h=0}, we conclude that

\displaystyle  H^{(i-1)}_{t_2}(x) - H^{(i-1)}_{t_1}(x) = \int_{t_1}^{t_2} H^{(i-1)}_t(x')\ dt

for any natural number {i}. Thus we can rewrite the preceding determinant as

\displaystyle  \det( H^{(i-1)}_{\lambda_j}(x) - H^{(i-1)}_{\lambda_{j+1}}(x) )_{1 \leq i,j \leq k-1}.

Performing the now familiar manoeuvre of expanding out into {2^{k-1}} terms, observing that all but {k} of them vanish, and interpreting the surviving terms as cofactors, this is the determinant of the {k \times k} matrix whose top {k-1} rows are {( H^{(i-1)}_{\lambda_j}(x))_{1 \leq i \leq k-1; 1 \leq j \leq k}}, and whose bottom row is {(1)_{1 \leq j \leq k}}.

Next, we observe from definition that

\displaystyle  H_t(x_1,\dots,x_k) = \int_0^t H_{t'}(x_2,\dots,x_k) \exp( (t-t') x_1 )\ dt'

for any {t \geq 0} and {(x_1,\dots,x_k) \in {\bf R}^k}, and hence by the fundamental theorem of calculus

\displaystyle  (\frac{d}{dt} - x_1) H_t(x_1,\dots,x_k) = H_t(x_2,\dots,x_k).

Iterating this identity we conclude that

\displaystyle  (\frac{d}{dt}-x_{k-1}) \dots (\frac{d}{dt}-x_1) H_t(x_1,\dots,x_k) = H_t(x_k)

and in particular when {x_k=0} we have

\displaystyle  (\frac{d}{dt}-x_{k-1}) \dots (\frac{d}{dt}-x_1) H_t(x) = 1.

Thus we can write {1} as {H^{(k-1)}_t(x)} plus a linear combination of the {H^{(i-1)}_t(x)} for {i=1,\dots,k-1}, where the coefficients are independent of {t}. This allows us to write the bottom row {(1)_{1 \leq j \leq k}} as {(H^{(k-1)}_{\lambda_j}(x))_{1 \leq j \leq k}} plus a linear combination of the {(H^{(i-1)}_{\lambda_j}(x))_{1 \leq j \leq k}} for {i=1,\dots,k-1}, and (17) follows.
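A quick finite-difference check of the identity {(\frac{d}{dt} - x_1) H_t(x_1,x_2) = H_t(x_2)} used above, with {H_t(x_2) = \exp(tx_2)} in a single variable (step size ours):

```python
import math

def H2(t, x1, x2):
    """Closed form of H_t(x1,x2), assuming x1 != x2."""
    return (math.exp(t * x1) - math.exp(t * x2)) / (x1 - x2)

t, x1, x2, eps = 0.9, 0.8, -0.5, 1e-6
# (d/dt - x1) H_t(x1,x2), with the derivative taken by a central finite difference
lhs = (H2(t + eps, x1, x2) - H2(t - eps, x1, x2)) / (2 * eps) - x1 * H2(t, x1, x2)
rhs = math.exp(t * x2)   # H_t(x2) for a single variable is exp(t*x2)
assert abs(lhs - rhs) < 1e-6
```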

A similar argument gives the more general Jacobi-Trudi identity

\displaystyle  S_{\lambda/\mu}(x) = \det( ( H_{\lambda_j-\mu_i}(x) )_{1 \leq j \leq k; 1 \leq i \leq k'}, (H^{(i-1)}_{\lambda_j}(x))_{1 \leq j \leq k; 1 \leq i \leq k-k'} ),

whenever {\lambda} is a real partition of length {k}, {\mu} is a real partition of length {0 \leq k' \leq k}, {x = (x_{k'+1},\dots,x_k) \in {\bf R}^{k-k'}}, and one adopts the convention that {H_t} (and its first {k-1} derivatives) vanish for {t < 0}. Thus for instance

\displaystyle  S_{(\lambda_1,\lambda_2)/(\mu_1)}(x_2) = \det \begin{pmatrix} H_{\lambda_1-\mu_1}(x_2) & H_{\lambda_1}(x_2) \\ H_{\lambda_2 - \mu_1}(x_2) & H_{\lambda_2}(x_2) \end{pmatrix},

\displaystyle  S_{(\lambda_1,\lambda_2,\lambda_3)/(\mu_1)}(x_2,x_3) = \det \begin{pmatrix} H_{\lambda_1-\mu_1}(x_2,x_3) & H_{\lambda_1}(x_2,x_3) & H^{(1)}_{\lambda_1}(x_2,x_3) \\ H_{\lambda_2 - \mu_1}(x_2,x_3) & H_{\lambda_2}(x_2,x_3) & H^{(1)}_{\lambda_2}(x_2,x_3) \\ H_{\lambda_3 - \mu_1}(x_2,x_3) & H_{\lambda_3}(x_2,x_3) & H^{(1)}_{\lambda_3}(x_2,x_3) \end{pmatrix},

and so forth.
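The {2 \times 2} instance above can be cross-checked against the direct description {S_{(\lambda_1,\lambda_2)/(\mu_1)}(x_2) = 1_{\lambda_2 \leq \mu_1 \leq \lambda_1} \exp( x_2 (\lambda_1+\lambda_2 - \mu_1 ))} from earlier in the post; a small Python sketch (names ours) doing so in the three interspersion regimes:

```python
import math

def H1(t, x):
    """H_t in a single variable: exp(t*x) for t >= 0, and 0 for t < 0."""
    return math.exp(t * x) if t >= 0 else 0.0

def skew_det(l1, l2, m1, x):
    """The 2x2 determinant det[[H_{l1-m1}, H_{l1}], [H_{l2-m1}, H_{l2}]] at x."""
    return H1(l1 - m1, x) * H1(l2, x) - H1(l1, x) * H1(l2 - m1, x)

def skew_direct(l1, l2, m1, x):
    """S_{(l1,l2)/(m1)}(x) = exp(x*(l1+l2-m1)) when l2 <= m1 <= l1, else 0."""
    return math.exp(x * (l1 + l2 - m1)) if l2 <= m1 <= l1 else 0.0

# m1 interspersing, below, and above the partition (l1, l2)
for (l1, l2, m1) in [(2.0, 0.5, 1.2), (2.0, 0.5, 0.2), (2.0, 0.5, 2.5)]:
    assert abs(skew_det(l1, l2, m1, 0.7) - skew_direct(l1, l2, m1, 0.7)) < 1e-9
```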

Exercise 2 If {\lambda,\mu} are real partitions of length {k} with positive entries, and {k' \geq k}, show that

\displaystyle  \det( H_{\lambda_i-\mu_j}(x))_{1 \leq i,j \leq k} = \lim_{\nu \rightarrow 0} \frac{1}{V(\nu)} S_{(\lambda,\nu)/\mu}(x)

for any {x \in {\bf R}^{k'-k}}, where {\nu} ranges over real partitions of length {k'-k} with distinct entries, and {(\lambda,\nu)} is the length {k'} partition formed by concatenating {\lambda} and {\nu} (this will also be a partition if {\nu} is sufficiently small).

(Sep 14: updated with several suggestions and corrections supplied by Darij Grinberg.)