The determinant $\det_n(A)$ of an $n \times n$ matrix $A$ (with coefficients in an arbitrary field) obeys many useful identities, starting of course with the fundamental multiplicativity

$$\det_n(AB) = \det_n(A) \det_n(B)$$

for $n \times n$ matrices $A, B$. This multiplicativity can in turn be used to establish many further identities; in particular, as shown in this previous post, it implies the Schur determinant identity

$$\det_{n+k} \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det_n(A) \det_k(D - C A^{-1} B) \quad (1)$$

whenever $A$ is an invertible $n \times n$ matrix, $B$ is an $n \times k$ matrix, $C$ is a $k \times n$ matrix, and $D$ is a $k \times k$ matrix. The matrix $D - C A^{-1} B$ is known as the Schur complement of the block $A$.
I only recently discovered that this identity in turn immediately implies what I always found to be a somewhat curious identity, namely the Dodgson condensation identity (also known as the Desnanot-Jacobi identity)

$$\det_n(M) \det_{n-2}(M^{1,n}_{1,n}) = \det_{n-1}(M^1_1) \det_{n-1}(M^n_n) - \det_{n-1}(M^1_n) \det_{n-1}(M^n_1)$$

for any $n \geq 3$ and any $n \times n$ matrix $M$, where $M^i_j$ denotes the $(n-1) \times (n-1)$ matrix formed from $M$ by removing the $i^{\mathrm{th}}$ row and $j^{\mathrm{th}}$ column, and similarly $M^{i,i'}_{j,j'}$ denotes the $(n-2) \times (n-2)$ matrix formed from $M$ by removing the $i^{\mathrm{th}}$ and $(i')^{\mathrm{th}}$ rows and the $j^{\mathrm{th}}$ and $(j')^{\mathrm{th}}$ columns. Thus for instance when $n = 3$ we obtain

$$e \det_3 \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = \det_2 \begin{pmatrix} e & f \\ h & i \end{pmatrix} \det_2 \begin{pmatrix} a & b \\ d & e \end{pmatrix} - \det_2 \begin{pmatrix} d & e \\ g & h \end{pmatrix} \det_2 \begin{pmatrix} b & c \\ e & f \end{pmatrix}$$

for any scalars $a, b, c, d, e, f, g, h, i$. (Charles Dodgson, better known by his pen name Lewis Carroll, is of course also known for writing "Alice in Wonderland" and "Through the Looking Glass".)
The derivation is not new; it is for instance noted explicitly in this paper of Brualdi and Schneider, though I do not know if this is the earliest place in the literature where it can be found. (EDIT: Apoorva Khare has pointed out to me that the original arguments of Dodgson can be interpreted as implicitly following this derivation.) I thought it worth presenting the short derivation here, though.
Firstly, by swapping the first and $(n-1)^{\mathrm{th}}$ rows, and similarly for the columns, it is easy to see that the Dodgson condensation identity is equivalent to the variant

$$\det_n(M) \det_{n-2}(M^{n-1,n}_{n-1,n}) = \det_{n-1}(M^{n-1}_{n-1}) \det_{n-1}(M^n_n) - \det_{n-1}(M^{n-1}_n) \det_{n-1}(M^n_{n-1}). \quad (2)$$

Now write

$$M = \begin{pmatrix} A & B_1 & B_2 \\ C_1 & d_{11} & d_{12} \\ C_2 & d_{21} & d_{22} \end{pmatrix}$$

where $A$ is an $(n-2) \times (n-2)$ matrix, $B_1, B_2$ are $(n-2) \times 1$ column vectors, $C_1, C_2$ are $1 \times (n-2)$ row vectors, and $d_{11}, d_{12}, d_{21}, d_{22}$ are scalars. If $A$ is invertible, we may apply the Schur determinant identity (1) repeatedly to conclude that

$$\det_n(M) = \det_{n-2}(A) \det_2 \begin{pmatrix} d_{11} - C_1 A^{-1} B_1 & d_{12} - C_1 A^{-1} B_2 \\ d_{21} - C_2 A^{-1} B_1 & d_{22} - C_2 A^{-1} B_2 \end{pmatrix},$$

$$\det_{n-1}(M^{n-1}_{n-1}) = \det_{n-2}(A) (d_{22} - C_2 A^{-1} B_2),$$

$$\det_{n-1}(M^{n-1}_n) = \det_{n-2}(A) (d_{21} - C_2 A^{-1} B_1),$$

$$\det_{n-1}(M^n_{n-1}) = \det_{n-2}(A) (d_{12} - C_1 A^{-1} B_2),$$

$$\det_{n-1}(M^n_n) = \det_{n-2}(A) (d_{11} - C_1 A^{-1} B_1),$$

together with $\det_{n-2}(M^{n-1,n}_{n-1,n}) = \det_{n-2}(A)$, and the claim (2) then follows by a brief calculation (and the explicit form $\det_2 \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$ of the $2 \times 2$ determinant). To remove the requirement that $A$ be invertible, one can use a limiting argument, noting that one can work without loss of generality in an algebraically closed field, and in such a field, the set of invertible matrices is dense in the Zariski topology. (In the case when the scalars are reals or complexes, one can just use density in the ordinary topology instead if desired.)
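As a quick numerical sanity check of the above (this snippet is my own illustration, not part of the derivation; the helper `minor` is an ad hoc convenience), one can test the condensation identity on a random matrix:

```python
# Checking the Dodgson condensation identity on a random 5x5 matrix (illustration only).
import numpy as np

def minor(M, rows, cols):
    """Remove the given rows and columns (0-indexed) from M."""
    keep_r = [i for i in range(M.shape[0]) if i not in rows]
    keep_c = [j for j in range(M.shape[1]) if j not in cols]
    return M[np.ix_(keep_r, keep_c)]

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))

lhs = np.linalg.det(M) * np.linalg.det(minor(M, {0, n - 1}, {0, n - 1}))
rhs = (np.linalg.det(minor(M, {0}, {0})) * np.linalg.det(minor(M, {n - 1}, {n - 1}))
       - np.linalg.det(minor(M, {0}, {n - 1})) * np.linalg.det(minor(M, {n - 1}, {0})))
print(np.isclose(lhs, rhs))  # True, up to floating point error
```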
The same argument gives the more general determinant identity of Sylvester

$$\det_n(M) \det_{n-k}(M^S_S)^{k-1} = \det_k \left( \left( \det_{n-k+1}(M^{S \setminus \{i\}}_{S \setminus \{j\}}) \right)_{i,j \in S} \right)$$

whenever $n > k \geq 1$, $S$ is a $k$-element subset of $\{1, \dots, n\}$, and $M^S_S$ denotes the matrix formed from $M$ by removing the rows associated to $S$ and the columns associated to $S$. (The Dodgson condensation identity is basically the $k = 2$ case of this identity.)
A closely related proof of (2) proceeds by elementary row and column operations. Observe that if one adds some multiple of one of the first $n-2$ rows of $M$ to one of the last two rows of $M$, then the left and right sides of (2) do not change. If the minor $A$ is invertible, this allows one to reduce to the case where the components $C_1, C_2$ of the matrix vanish. Similarly, using elementary column operations instead of row operations we may assume that $B_1, B_2$ vanish. All matrices involved are now block-diagonal and the identity follows from a routine computation.
The latter approach can also prove the cute identity

$$\det_2 \begin{pmatrix} \det_n(X_1, Y_1, A) & \det_n(X_1, Y_2, A) \\ \det_n(X_2, Y_1, A) & \det_n(X_2, Y_2, A) \end{pmatrix} = \det_n(X_1, X_2, A) \det_n(Y_1, Y_2, A)$$

for any $n \geq 2$, any $n \times 1$ column vectors $X_1, X_2, Y_1, Y_2$, and any $n \times (n-2)$ matrix $A$ (where $\det_n(X, Y, A)$ denotes the determinant of the $n \times n$ matrix with columns $X$, $Y$, followed by those of $A$), which can for instance be found in page 7 of this text of Karlin. Observe that both sides of this identity are unchanged if one adds some multiple of any column of $A$ to one of $X_1, X_2, Y_1, Y_2$; for generic $A$, this allows one to reduce $X_1, X_2, Y_1, Y_2$ to have only the first two entries allowed to be non-zero, at which point the determinants split into $2 \times 2$ and $(n-2) \times (n-2)$ determinants and we can reduce to the $n = 2$ case (eliminating the role of $A$). One can now either proceed by a direct computation, or by observing that the left-hand side is quadrilinear in $X_1, X_2, Y_1, Y_2$ and antisymmetric in $X_1, X_2$ and in $Y_1, Y_2$, which forces it to be a scalar multiple of $\det_2(X_1, X_2) \det_2(Y_1, Y_2)$, at which point one can test the identity at a single point (e.g. $X_1 = Y_1 = e_1$ and $X_2 = Y_2 = e_2$ for the standard basis $e_1, e_2$) to conclude the argument. (One can also derive this identity from the Sylvester determinant identity, but I think the calculations are a little messier if one goes by that route. Conversely, one can recover the Dodgson condensation identity from Karlin's identity by setting $X_1 = Y_1 = e_{n-1}$, $X_2 = Y_2 = e_n$ (for instance) and then permuting some rows and columns.)
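Karlin's identity is equally easy to test numerically; the following sketch (my own, with ad hoc conventions for assembling the columns) checks it for random data:

```python
# Checking Karlin's identity for random data (illustration only).
import numpy as np

rng = np.random.default_rng(1)
n = 5
X1, X2, Y1, Y2 = (rng.standard_normal(n) for _ in range(4))
A = rng.standard_normal((n, n - 2))

# det_n(U, V, A): determinant of the matrix with columns U, V and then the columns of A
d = lambda U, V: np.linalg.det(np.column_stack([U, V, A]))

lhs = d(X1, Y1) * d(X2, Y2) - d(X1, Y2) * d(X2, Y1)
rhs = d(X1, X2) * d(Y1, Y2)
print(np.isclose(lhs, rhs))  # True
```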
In one of the earliest posts on this blog, I talked about the ability to "arbitrage" a disparity of symmetry in an inequality, and in particular to "amplify" such an inequality into a stronger one. (The principle can apply to other mathematical statements than inequalities, with the "hypothesis" and "conclusion" of that statement generally playing the role of the "right-hand side" and "left-hand side" of an inequality, but for sake of discussion I will restrict attention here to inequalities.) One can formalise this principle as follows. Many inequalities in analysis can be expressed in the form

$$A(f) \leq B(f) \quad (1)$$

for all $f$ in some space $X$ (in many cases $X$ will be a function space, and $f$ a function in that space), where $A(f)$ and $B(f)$ are some functionals of $f$ (that is to say, real-valued functions of $f$). For instance, $B(f)$ might be some function space norm of $f$ (e.g. an $L^p$ norm), and $A(f)$ might be some function space norm of some transform of $f$. In addition, we assume we have some group $G$ of symmetries $g: X \to X$ acting on the underlying space $X$. For instance, if $X$ is a space of functions on some spatial domain, the group might consist of translations (e.g. $gf(x) := f(x - x_0)$ for some shift $x_0$), or perhaps dilations with some normalisation (e.g. $gf(x) := \frac{1}{\lambda^\alpha} f(\frac{x}{\lambda})$ for some dilation factor $\lambda > 0$ and some normalisation exponent $\alpha$, which can be thought of as the dimensionality of length one is assigning to $f$). If we have

$$A(gf) = A(f)$$

for all symmetries $g \in G$ and all $f \in X$, we say that $A$ is invariant with respect to the symmetries in $G$; otherwise, it is not.
Suppose we know that the inequality (1) holds for all $f \in X$, but that there is an imbalance of symmetry: either $A$ is $G$-invariant and $B$ is not, or vice versa. Suppose first that $A$ is $G$-invariant and $B$ is not. Substituting $f$ by $gf$ in (1) and taking infima, we can then amplify (1) to the stronger inequality

$$A(f) \leq \inf_{g \in G} B(gf).$$

In particular, it is often the case that there is a way to send $g$ off to infinity in such a way that the functional $B(gf)$ has a limit $B_\infty(f)$, in which case we obtain the amplification

$$A(f) \leq B_\infty(f) \quad (2)$$

of (1). Note that these amplified inequalities will now be $G$-invariant on both sides (assuming that the way in which we take limits as $g \to \infty$ is itself $G$-invariant, which it often is in practice). Similarly, if $B$ is $G$-invariant but $A$ is not, we may instead amplify (1) to

$$\sup_{g \in G} A(gf) \leq B(f)$$

and in particular (if $A(gf)$ has a limit $A_\infty(f)$ as $g \to \infty$)

$$A_\infty(f) \leq B(f). \quad (3)$$

If neither $A$ nor $B$ has a $G$-symmetry, one can still use the $G$-symmetry by replacing $f$ by $gf$ and taking a limit to conclude that

$$A_\infty(f) \leq B_\infty(f),$$

though now this inequality is not obviously stronger than the original inequality (1) (for instance it could well be trivial). In some cases one can also average over $G$ instead of taking a limit as $g \to \infty$, thus averaging a non-invariant inequality into an invariant one.
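To see the abstract recipe in action, here is a small numerical sketch (a toy example of my own, in the spirit of the amplification post linked above, rather than anything taken from the text): one starts with the elementary inequality $\langle f, g \rangle \leq \frac{1}{2}(\|f\|^2 + \|g\|^2)$, whose left-hand side is invariant under the scaling symmetry $(f, g) \mapsto (t f, g/t)$ but whose right-hand side is not, and amplifies by taking the infimum over the symmetry; the result is the Cauchy-Schwarz inequality $\langle f, g \rangle \leq \|f\| \|g\|$.

```python
# Amplifying <f,g> <= (||f||^2 + ||g||^2)/2 over the scaling (f,g) -> (t f, g/t).
import numpy as np

rng = np.random.default_rng(2)
f, g = rng.standard_normal(10), rng.standard_normal(10)

A = f @ g                                                          # invariant left-hand side
B = lambda t: 0.5 * (np.dot(t * f, t * f) + np.dot(g / t, g / t)) # non-invariant right-hand side

ts = np.exp(np.linspace(-5, 5, 10001))  # sweep over the symmetry group
B_amp = min(B(t) for t in ts)           # inf over the symmetry

# the infimum is attained at t^2 = ||g||/||f|| and equals ||f|| ||g||, so the
# amplified, fully scale-invariant inequality is Cauchy-Schwarz
print(A, B_amp, np.linalg.norm(f) * np.linalg.norm(g))
```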
As discussed in the previous post, this use of amplification gives rise to a general principle about inequalities: the most efficient inequalities are those in which the left-hand side and right-hand side enjoy the same symmetries. It is certainly possible to have true inequalities that have an imbalance of symmetry, but as shown above, such inequalities can always be amplified to more efficient and more symmetric inequalities. In the case when limits such as $A_\infty$ and $B_\infty$ exist, the limiting functionals $A_\infty$ and $B_\infty$ are often simpler in form, or more tractable analytically, than their non-limiting counterparts $A$ and $B$ (this is one of the main reasons why we take limits at infinity in the first place!), and so in many applications there is really no reason to use the weaker and more complicated inequality (1), when stronger, simpler, and more symmetric inequalities such as (2), (3) are available. Among other things, this explains why many of the most useful and natural inequalities one sees in analysis are dimensionally consistent.
One often tries to prove inequalities (1) by directly chaining together simpler inequalities. For instance, one might attempt to prove (1) by first bounding $A(f)$ by some auxiliary quantity $C(f)$, and then bounding $C(f)$ by $B(f)$, thus obtaining (1) by chaining together two inequalities

$$A(f) \leq C(f) \leq B(f). \quad (4)$$

A variant of the above principle then asserts that when proving inequalities by such direct methods, one should, whenever possible, try to maintain the symmetries that are present in both sides of the inequality. Why? Well, suppose that we ignored this principle and tried to prove (1) by establishing (4) for some $C$ that is not $G$-invariant. Assuming for sake of argument that (4) were actually true, we could amplify the first half $A(f) \leq C(f)$ of this inequality to conclude that

$$A(f) \leq \inf_{g \in G} C(gf)$$

and also amplify the second half $C(f) \leq B(f)$ of the inequality to conclude that

$$\sup_{g \in G} C(gf) \leq B(f),$$

and hence (4) amplifies to

$$A(f) \leq \inf_{g \in G} C(gf) \leq \sup_{g \in G} C(gf) \leq B(f). \quad (5)$$
Let's say for sake of argument that all the quantities involved here are positive numbers (which is often the case in analysis). Then we see in particular that

$$\frac{\sup_{g \in G} C(gf)}{\inf_{g \in G} C(gf)} \leq \frac{B(f)}{A(f)}. \quad (6)$$

Informally, (6) asserts that in order for the strategy (4) used to prove (1) to work, the extent to which $C$ fails to be $G$-invariant cannot exceed the amount of "room" present in (1). In particular, when dealing with those "extremal" $f$ for which the left and right-hand sides of (1) are comparable to each other, one can only have a bounded amount of non-$G$-invariance in the functional $C$. If $C$ fails so badly to be $G$-invariant that one does not expect the left-hand side of (6) to be at all bounded in such extremal situations, then the strategy of proving (1) using the intermediate quantity $C$ is doomed to failure – even if one has already produced some clever proof of one of the two inequalities $A(f) \leq C(f)$ or $C(f) \leq B(f)$ needed to make this strategy work. And even if it did work, one could amplify (4) to a simpler inequality

$$A(f) \leq C_\infty(f) \leq B(f) \quad (7)$$

(assuming that the appropriate limit $C_\infty(f) := \lim_{g \to \infty} C(gf)$ existed), which would likely also be easier to prove (one can take whatever proofs one had in mind of the inequalities in (4), conjugate them by $g$, and take a limit as $g \to \infty$ to extract a proof of (7)).
Here are some simple (but somewhat contrived) examples to illustrate these points. Suppose one wishes to prove the inequality

$$xy \leq x^2 + y^2 \quad (8)$$

for all $x, y > 0$. Both sides of this inequality are invariant with respect to interchanging $x$ with $y$, so the principle suggests that when proving this inequality directly, one should only use sub-inequalities that are also invariant with respect to this interchange. However, in this particular case there is enough "room" in the inequality that it is possible (though somewhat unnatural) to violate this principle. For instance, one could decide (for whatever reason) to start with the inequality

$$0 \leq (x - 2y)^2 = x^2 - 4xy + 4y^2$$

to conclude that

$$xy \leq \frac{1}{4} x^2 + y^2,$$

and then use the obvious inequality $\frac{1}{4} x^2 + y^2 \leq x^2 + y^2$ to conclude the proof. Here, the intermediate quantity $\frac{1}{4} x^2 + y^2$ is not invariant with respect to interchange of $x$ and $y$, but the failure is fairly mild (interchanging $x$ and $y$ only modifies the quantity $\frac{1}{4} x^2 + y^2$ by a multiplicative factor of $4$ at most), and disappears completely in the most extremal case $x = y$, which helps explain why one could get away with using this quantity in the proof here. But it would be significantly harder (though still not impossible) to use non-symmetric intermediaries to prove the sharp version

$$xy \leq \frac{1}{2}(x^2 + y^2)$$

of (8) (that is to say, the arithmetic mean-geometric mean inequality). Try it!
Similarly, consider the task of proving the triangle inequality

$$|z + w| \leq |z| + |w| \quad (9)$$

for complex numbers $z, w$. One could try to leverage the triangle inequality $|x + y| \leq |x| + |y|$ for real numbers $x, y$ by using the crude estimate

$$|z + w| \leq |\mathrm{Re}(z + w)| + |\mathrm{Im}(z + w)|$$

and then use the real triangle inequality to obtain

$$|\mathrm{Re}(z + w)| \leq |\mathrm{Re}(z)| + |\mathrm{Re}(w)|$$

and

$$|\mathrm{Im}(z + w)| \leq |\mathrm{Im}(z)| + |\mathrm{Im}(w)|$$

and then finally use the inequalities

$$|\mathrm{Re}(z)|, |\mathrm{Im}(z)| \leq |z| \quad (10)$$

$$|\mathrm{Re}(w)|, |\mathrm{Im}(w)| \leq |w| \quad (11)$$

but when one puts this all together at the end of the day, one loses a factor of two:

$$|z + w| \leq 2(|z| + |w|).$$

One can "blame" this loss on the fact that while the original inequality (9) was invariant with respect to the phase rotation $(z, w) \mapsto (e^{i\theta} z, e^{i\theta} w)$, the intermediate expressions we tried to use when proving it were not, leading to inefficient estimates. One can try to be smarter than this by using Pythagoras' theorem $|z|^2 = |\mathrm{Re}(z)|^2 + |\mathrm{Im}(z)|^2$; this reduces the loss from $2$ to $\sqrt{2}$, but does not eliminate it completely, which is to be expected as one is still using non-invariant estimates in the proof. But one can remove the loss completely by using amplification; see the previous blog post for details (we also give a reformulation of this amplification below).
Here is a slight variant of the above example. Suppose that you had just learned in class to prove the triangle inequality

$$\left(\sum_{n=1}^\infty |a_n + b_n|^2\right)^{1/2} \leq \left(\sum_{n=1}^\infty |a_n|^2\right)^{1/2} + \left(\sum_{n=1}^\infty |b_n|^2\right)^{1/2} \quad (12)$$

for (say) real square-summable sequences $(a_n)_{n=1}^\infty$, $(b_n)_{n=1}^\infty$, and was tasked to conclude the corresponding inequality

$$\left(\sum_{n=-\infty}^\infty |a_n + b_n|^2\right)^{1/2} \leq \left(\sum_{n=-\infty}^\infty |a_n|^2\right)^{1/2} + \left(\sum_{n=-\infty}^\infty |b_n|^2\right)^{1/2} \quad (13)$$

for doubly infinite square-summable sequences $(a_n)_{n=-\infty}^\infty$, $(b_n)_{n=-\infty}^\infty$. The quickest way to do this is of course to exploit a bijection between the natural numbers and the integers, but let us say for sake of argument that one was unaware of such a bijection. One could then proceed instead by splitting the integers into the positive integers and the non-positive integers, and use (12) on each component separately; this is very similar to the strategy of proving (9) by splitting a complex number into real and imaginary parts, and will similarly lose a factor of $2$ or $\sqrt{2}$. In this case, one can "blame" this loss on the abandonment of translation invariance: both sides of the inequality (13) are invariant with respect to shifting the sequences $(a_n)_{n=-\infty}^\infty$, $(b_n)_{n=-\infty}^\infty$ by some shift $h$ to arrive at $(a_{n-h})_{n=-\infty}^\infty$, $(b_{n-h})_{n=-\infty}^\infty$, but the intermediate quantities caused by splitting the integers into two subsets are not invariant. Another way of thinking about this is that the splitting of the integers gives a privileged role to the origin $n = 0$, whereas the inequality (13) treats all values of $n$ equally thanks to the translation invariance, and so using such a splitting is unnatural and not likely to lead to optimal estimates. On the other hand, one can deduce (13) from (12) by sending this symmetry to infinity; indeed, after applying a shift to (12) we see that

$$\left(\sum_{n=-N}^\infty |a_n + b_n|^2\right)^{1/2} \leq \left(\sum_{n=-N}^\infty |a_n|^2\right)^{1/2} + \left(\sum_{n=-N}^\infty |b_n|^2\right)^{1/2}$$

for any $N$, and on sending $N \to \infty$ we obtain (13) (one could invoke the monotone convergence theorem here to justify the limit, though in this case it is simple enough that one can just use first principles).
Note that the principle of preserving symmetry only applies to direct approaches to proving inequalities such as (1). There is a complementary approach, discussed for instance in this previous post, which is to spend the symmetry to place the variable $f$ "without loss of generality" in a "normal form", "convenient coordinate system", or a "good gauge". Abstractly: suppose that there is some subset $Y$ of $X$ with the property that every $f \in X$ can be expressed in the form $f = g f'$ for some $g \in G$ and $f' \in Y$ (that is to say, $X = GY$). Then, if one wishes to prove an inequality (1) for all $f \in X$, and one knows that both sides $A(f), B(f)$ of this inequality are $G$-invariant, then it suffices to check (1) just for those $f$ in $Y$, as this together with the $G$-invariance will imply the same inequality (1) for all $f$ in $X$. By restricting to those $f$ in $Y$, one has given up (or spent) the $G$-invariance, as the set $Y$ will typically not be preserved by the group action $G$. But by the same token, by eliminating the invariance, one also eliminates the prohibition on using non-invariant proof techniques, and one is now free to use a wider range of inequalities in order to try to establish (1). Of course, such inequalities should make crucial use of the restriction $f \in Y$, for if they did not, then the arguments would work in the more general setting $f \in X$, and then the previous principle would again kick in and warn us that the use of non-invariant inequalities would be inefficient. Thus one should "spend" the symmetry wisely to "buy" a restriction $f \in Y$ that will be of maximal utility in calculations (for instance by setting as many annoying factors and terms in one's analysis to be $0$ or $1$ as possible).
As a simple example of this, let us revisit the complex triangle inequality (9). As already noted, both sides of this inequality are invariant with respect to the phase rotation symmetry $(z, w) \mapsto (e^{i\theta} z, e^{i\theta} w)$. This seems to limit one to using phase-rotation-invariant techniques to establish the inequality, in particular ruling out the use of real and imaginary parts as discussed previously. However, we can instead spend the phase rotation symmetry to restrict to a special class of $z$ and $w$. It turns out that the most efficient way to spend the symmetry is to achieve the normalisation of $z + w$ being a nonnegative real; this is of course possible since any complex number can be turned into a nonnegative real by multiplying by an appropriate phase $e^{i\theta}$. Once $z + w$ is a nonnegative real, the imaginary part disappears and we have

$$|z + w| = \mathrm{Re}(z) + \mathrm{Re}(w),$$

and the triangle inequality (9) is now an immediate consequence of (10), (11). (But note that if one had unwisely spent the symmetry to normalise, say, $z$ to be a non-negative real, then one is no closer to establishing (9) than before one had spent the symmetry.)
Apoorva Khare and I have just uploaded to the arXiv our paper "On the sign patterns of entrywise positivity preservers in fixed dimension". This paper explores the relationship between positive definiteness of Hermitian matrices, and entrywise operations on these matrices. The starting point for this theory is the Schur product theorem, which asserts that if $A = (a_{jk})_{1 \leq j,k \leq N}$ and $B = (b_{jk})_{1 \leq j,k \leq N}$ are two $N \times N$ Hermitian matrices that are positive semi-definite, then their Hadamard product

$$A \circ B := (a_{jk} b_{jk})_{1 \leq j,k \leq N}$$

is also positive semi-definite. (One should caution that the Hadamard product is not the same as the usual matrix product.) To prove this theorem, first observe that the claim is easy when $A = u u^*$ and $B = v v^*$ are rank one positive semi-definite matrices, since in this case $A \circ B = (u \circ v)(u \circ v)^*$ is also a rank one positive semi-definite matrix. The general case then follows by noting from the spectral theorem that a general positive semi-definite matrix can be expressed as a non-negative linear combination of rank one positive semi-definite matrices, and using the bilinearity of the Hadamard product and the fact that the set of positive semi-definite matrices form a convex cone. A modification of this argument also lets one replace "positive semi-definite" by "positive definite" in the statement of the Schur product theorem.
One corollary of the Schur product theorem is that any polynomial $P(z) = c_0 + c_1 z + \dots + c_d z^d$ with non-negative coefficients $c_0, \dots, c_d \geq 0$ is entrywise positivity preserving on the space $\mathbb{P}_N(\mathbb{C})$ of $N \times N$ positive semi-definite Hermitian matrices, in the sense that for any matrix $A = (a_{jk})$ in $\mathbb{P}_N(\mathbb{C})$, the entrywise application

$$P[A] := (P(a_{jk}))_{1 \leq j,k \leq N}$$

of $P$ to $A$ is also positive semi-definite. (As before, one should caution that $P[A]$ is not the application $P(A)$ of $P$ to $A$ by the usual functional calculus.) Indeed, one can expand

$$P[A] = c_0 A^{\circ 0} + c_1 A^{\circ 1} + \dots + c_d A^{\circ d},$$

where $A^{\circ k}$ is the Hadamard product of $k$ copies of $A$ (with $A^{\circ 0}$ the all-ones matrix, which is positive semi-definite), and the claim now follows from the Schur product theorem and the fact that $\mathbb{P}_N(\mathbb{C})$ is a convex cone.
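Both the Schur product theorem and this corollary are easy to test numerically; here is a quick sanity check (my own illustration; note that for numpy arrays the operations below act entrywise, which is exactly the entrywise calculus being discussed):

```python
# Sanity check: Hadamard products and entrywise polynomials preserve positive semi-definiteness.
import numpy as np

rng = np.random.default_rng(3)
N = 6

def random_psd(N):
    G = rng.standard_normal((N, N))
    return G @ G.T  # positive semi-definite by construction

A, B = random_psd(N), random_psd(N)

# Schur product theorem: the entrywise product A * B is again PSD
print(np.linalg.eigvalsh(A * B).min() >= -1e-9)  # True

# entrywise application P[A] of a polynomial with non-negative coefficients
PA = 1 + 2 * A + 0.5 * A**3  # ** is entrywise here, so this is P[A], not P(A)
print(np.linalg.eigvalsh(PA).min() >= -1e-9)     # True
```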
A slight variant of this argument, already observed by Pólya and Szegő in 1925, shows that if $I$ is any subset of $\mathbb{C}$ and

$$f(z) = \sum_{n=0}^\infty c_n z^n \quad (1)$$

is a power series with non-negative coefficients $c_n \geq 0$ that is absolutely and uniformly convergent on $I$, then $f$ will be entrywise positivity preserving on the set $\mathbb{P}_N(I)$ of positive definite matrices with entries in $I$. (In the case that $I$ is of the form $I = [0, \rho)$, such functions are precisely the absolutely monotonic functions on $I$.)
In the work of Schoenberg and of Rudin, we have a converse: if $f$ is a function that is entrywise positivity preserving on $\mathbb{P}_N((-1,1))$ for all $N$, then it must be of the form (1) with all coefficients $c_n$ non-negative. Variants of this result, with $(-1,1)$ replaced by other domains, appear in the work of Horn, Vasudeva, and Guillot-Khare-Rajaratnam.
This gives a satisfactory classification of functions that are entrywise positivity preservers in all dimensions $N$ simultaneously. However, the question remains as to what happens if one fixes the dimension $N$, in which case one may have a larger class of entrywise positivity preservers. For instance, in the trivial case $N = 1$, a function would be entrywise positivity preserving on $[0, \rho)$ if and only if it is non-negative on $[0, \rho)$. For higher $N$, there is a necessary condition of Horn (refined slightly by Guillot-Khare-Rajaratnam) which asserts (at least in the case of smooth $f$) that all derivatives of $f$ at zero up to $(N-1)^{\mathrm{th}}$ order must be non-negative in order for $f$ to be entrywise positivity preserving on $\mathbb{P}_N([0, \rho))$ for some $0 < \rho < \infty$. In particular, if $f$ is of the form (1), then $c_0, c_1, \dots, c_{N-1}$ must be non-negative. In fact, a stronger assertion can be made, namely that the first $N$ non-zero coefficients in (1) (if they exist) must be positive, or equivalently any negative term in (1) must be preceded (though not necessarily immediately) by at least $N$ positive terms. If $f$ is of the form (1) and is entrywise positivity preserving on the larger set $\mathbb{P}_N([0, \infty))$, one can furthermore show that any negative term in (1) must also be followed (though not necessarily immediately) by at least $N$ positive terms.
The main result of this paper is that these sign conditions are the only constraints for entrywise positivity preserving power series. More precisely:

Theorem 1 For each $n \geq 0$, let $\epsilon_n \in \{-1, 0, +1\}$ be a sign.

- Suppose that any negative sign $\epsilon_n = -1$ is preceded by at least $N$ positive signs (thus there exist $0 \leq n_1 < \dots < n_N < n$ with $\epsilon_{n_1} = \dots = \epsilon_{n_N} = +1$). Then, for any $0 < \rho < \infty$, there exists a convergent power series (1) on $[0, \rho)$, with each coefficient $c_n$ having the sign of $\epsilon_n$, which is entrywise positivity preserving on $\mathbb{P}_N([0, \rho))$.
- Suppose in addition that any negative sign $\epsilon_n = -1$ is followed by at least $N$ positive signs (thus there exist $n < n'_1 < \dots < n'_N$ with $\epsilon_{n'_1} = \dots = \epsilon_{n'_N} = +1$). Then there exists a convergent power series (1) on $[0, \infty)$, with each coefficient $c_n$ having the sign of $\epsilon_n$, which is entrywise positivity preserving on $\mathbb{P}_N([0, \infty))$.
One can ask the same question with $[0, \rho)$ or $[0, \infty)$ replaced by other domains such as $(-\rho, \rho)$, or the complex disk $D(0, \rho)$, but it turns out that there are far fewer entrywise positivity preserving functions in those cases, basically because of the non-trivial zeroes of Schur polynomials in these ranges; see the paper for further discussion. We also have some quantitative bounds on how negative some of the coefficients can be compared to the positive coefficients, but they are a bit technical to state here.
The heart of the proofs of these results is an analysis of the determinants of polynomials $P[u u^*]$ applied entrywise to rank one matrices $u u^*$; the positivity of these determinants can be used (together with a continuity argument) to establish the positive definiteness of $P[u u^*]$ for various ranges of $P$ and $u$. Using the Cauchy-Binet formula, one can rewrite such determinants as linear combinations of squares of magnitudes of generalised Vandermonde determinants

$$\det(u_i^{n_j})_{1 \leq i,j \leq N},$$

where $u = (u_1, \dots, u_N)$, the exponents $n_1 < \dots < n_N$ range over various $N$-tuples, and the signs of the coefficients in the linear combination are determined by the signs of the coefficients of $P$. The task is then to find upper and lower bounds for the magnitudes of such generalised Vandermonde determinants. These determinants oscillate in sign, which makes the problem look difficult; however, an algebraic miracle intervenes, namely the factorisation

$$\det(u_i^{n_j})_{1 \leq i,j \leq N} = s_\lambda(u_1, \dots, u_N) \prod_{1 \leq i < j \leq N} (u_j - u_i)$$

of the generalised Vandermonde determinant into the ordinary Vandermonde determinant $\prod_{1 \leq i < j \leq N} (u_j - u_i)$ and a Schur polynomial $s_\lambda$ applied to $u$, where the weight $\lambda$ of the Schur polynomial is determined by $n_1, \dots, n_N$ in a simple fashion. The problem then boils down to obtaining upper and lower bounds for these Schur polynomials. Because we are restricting attention to matrices taking values in $[0, \rho)$ or $[0, \infty)$, the entries of $u$ can be taken to be non-negative. One can then take advantage of the total positivity of the Schur polynomials to compare these polynomials with a monomial, at which point one can obtain good criteria for $P[A]$ to be positive definite when $A$ is a rank one positive definite matrix $A = u u^*$.
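The algebraic miracle can be seen concretely in small cases; the following sympy sketch (an illustration of mine, with an arbitrary choice of exponents) divides a $3 \times 3$ generalised Vandermonde determinant by the ordinary Vandermonde determinant and recovers a polynomial with non-negative coefficients, the Schur polynomial:

```python
# Factorising a generalised Vandermonde determinant (illustration only).
import sympy as sp

u = sp.symbols('u1 u2 u3')
n_exp = [0, 2, 5]  # an arbitrary choice of exponents n_1 < n_2 < n_3

gen_vdm = sp.Matrix(3, 3, lambda i, j: u[i] ** n_exp[j]).det()
vdm = sp.Mul(*[u[j] - u[i] for i in range(3) for j in range(i + 1, 3)])

# the quotient is exactly the Schur polynomial s_lambda(u1, u2, u3), a polynomial
# with non-negative coefficients whose weight is determined by the exponents
schur = sp.cancel(gen_vdm / vdm)
print(sp.expand(schur))
```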
If we allow the exponents $n_j$ to be real numbers rather than integers (thus replacing polynomials or power series by Puiseux series or Hahn series), then we lose the above algebraic miracle, but we can replace it with a geometric miracle, namely the Harish-Chandra-Itzykson-Zuber identity, which I discussed in this previous blog post. This factors the above generalised Vandermonde determinant as the product of the ordinary Vandermonde determinant and an integral of a positive quantity over the orthogonal group, which one can again compare with a monomial after some fairly elementary estimates.
It remains to understand what happens for more general positive semi-definite matrices $A$. Here we use a trick of FitzGerald and Horn to amplify the rank one case to the general case, by expressing a general positive semi-definite matrix $A$ as a linear combination of a rank one matrix $u u^*$ and another positive semi-definite matrix $B$ that vanishes on the last row and column (and is thus effectively a positive definite $(N-1) \times (N-1)$ matrix). Using the fundamental theorem of calculus to continuously deform the rank one matrix $u u^*$ to $A$ in the direction $B$, one can then obtain positivity results for $P[A]$ from positivity results for $P[u u^*]$ combined with an induction hypothesis on $N$.
Joni Teräväinen and I have just uploaded to the arXiv our paper "The structure of logarithmically averaged correlations of multiplicative functions, with applications to the Chowla and Elliott conjectures", submitted to Duke Mathematical Journal. This paper builds upon my previous paper in which I introduced an "entropy decrement method" to prove the two-point (logarithmically averaged) cases of the Chowla and Elliott conjectures. A bit more specifically, I showed that

$$\lim_{m \to \infty} \frac{1}{\log \omega_m} \sum_{x_m / \omega_m \leq n \leq x_m} \frac{g_1(n + h_1) g_2(n + h_2)}{n} = 0$$

whenever $x_m$ and $\omega_m$ were sequences going to infinity, $h_1, h_2$ were distinct integers, and $g_1, g_2: \mathbb{N} \to \mathbb{C}$ were $1$-bounded multiplicative functions which were non-pretentious in the sense that

$$\liminf_{X \to \infty} \inf_{|t| \leq X} \sum_{p \leq X} \frac{1 - \mathrm{Re}(g_j(p) \overline{\chi(p)} p^{-it})}{p} = \infty \quad (1)$$

for all Dirichlet characters $\chi$ and for $j = 1, 2$. Thus, for instance, one had the logarithmically averaged two-point Chowla conjecture

$$\sum_{n \leq x} \frac{\lambda(n) \lambda(n + h)}{n} = o(\log x)$$

for any fixed non-zero $h$, where $\lambda$ was the Liouville function.
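For readers who like to see such statements in action, here is a small numerical illustration (mine, not from the paper; the height is far too small to be more than suggestive): sieve the Liouville function up to a modest height and compare the logarithmically averaged two-point sum against $\log x$.

```python
# Sieving the Liouville function and testing the two-point sum (illustration only).
import math

N = 200_000
Omega = [0] * (N + 1)  # Omega[n] = number of prime factors of n, counted with multiplicity
for p in range(2, N + 1):
    if Omega[p] == 0:  # p has no smaller prime factor, hence is prime
        pk = p
        while pk <= N:
            for m in range(pk, N + 1, pk):
                Omega[m] += 1
            pk *= p
lam = [0] + [(-1) ** Omega[n] for n in range(1, N + 1)]  # Liouville function

h = 1
S = sum(lam[n] * lam[n + h] / n for n in range(1, N - h + 1))
print(S, math.log(N))  # |S| is a small fraction of log N, consistent with o(log x)
```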
One would certainly like to extend these results to higher order correlations than the two-point correlations. This looks to be difficult (though perhaps not completely impossible if one allows for logarithmic averaging): in a previous paper I showed that achieving this in the context of the Liouville function would be equivalent to resolving the logarithmically averaged Sarnak conjecture, as well as establishing logarithmically averaged local Gowers uniformity of the Liouville function. However, in this paper we are able to avoid having to resolve these difficult conjectures to obtain partial results towards the (logarithmically averaged) Chowla and Elliott conjecture. For the Chowla conjecture, we can obtain all odd order correlations, in that

$$\sum_{n \leq x} \frac{\lambda(n + h_1) \cdots \lambda(n + h_k)}{n} = o(\log x) \quad (2)$$

for all odd $k$ and all integers $h_1, \dots, h_k$ (which, in the odd order case, are no longer required to be distinct). (Superficially, this looks like we have resolved "half of the cases" of the logarithmically averaged Chowla conjecture; but it seems the odd order correlations are significantly easier than the even order ones. For instance, because of the Katai-Bourgain-Sarnak-Ziegler criterion, one can basically deduce the odd order cases of (2) from the even order cases, after allowing for some dilations in the arguments $h_1, \dots, h_k$.)
For the more general Elliott conjecture, we can show that

$$\sum_{n \leq x} \frac{g_1(n + h_1) \cdots g_k(n + h_k)}{n} = o(\log x)$$

for any $k$, any integers $h_1, \dots, h_k$ and any bounded multiplicative functions $g_1, \dots, g_k$, unless the product $g_1 \cdots g_k$ weakly pretends to be a Dirichlet character $\chi$ in the sense that

$$\sum_{p \leq X} \frac{1 - \mathrm{Re}(g_1 \cdots g_k(p) \overline{\chi(p)})}{p} = o(\log \log X).$$

This can be seen to imply (2) as a special case. Even when $g_1 \cdots g_k$ does pretend to be a Dirichlet character $\chi$, we can still say something: if the limits

$$f(a) := \lim_{m \to \infty} \frac{1}{\log x_m} \sum_{n \leq x_m} \frac{g_1(n + a h_1) \cdots g_k(n + a h_k)}{n}$$

exist for each $a$ (which can be guaranteed if we pass to a suitable subsequence), then $f$ is the uniform limit of periodic functions $f_i$, each of which is $\chi$-isotypic in the sense that

$$f_i(ab) = f_i(a) \chi(b)$$

whenever $a, b$ are integers with $b$ coprime to the periods of $\chi$ and $f_i$. This does not pin down the value of any single correlation $f(a)$, but does put significant constraints on how these correlations may vary with $a$.
Among other things, this allows us to show that all $16$ possible length four sign patterns $(\lambda(n+1), \lambda(n+2), \lambda(n+3), \lambda(n+4)) \in \{-1, +1\}^4$ of the Liouville function occur with positive density, and all $65$ possible length four sign patterns $(\mu(n+1), \mu(n+2), \mu(n+3), \mu(n+4))$ of the Möbius function occur with the conjectured logarithmic density. (In a previous paper with Matomaki and Radziwill, we obtained comparable results for length three patterns of Liouville and length two patterns of Möbius.)
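As a quick empirical complement (again my own illustration, reusing `lam` and `N` from the sieve above): all $16$ length four Liouville sign patterns already show up well before this modest height.

```python
# All 16 length-four sign patterns of the Liouville function occur (empirical check,
# reusing lam and N from the sieve above).
patterns = {tuple(lam[n + i] for i in range(4)) for n in range(1, N - 3)}
print(len(patterns) == 16)  # True
```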
To describe the argument, let us focus for simplicity on the case of the Liouville correlations

$$f(a) := \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{\lambda(n) \lambda(n + a) \cdots \lambda(n + (k-1)a)}{n}, \quad (3)$$

assuming for sake of discussion that all limits exist. (In the paper, we instead use the device of generalised limits, as discussed in this previous post.) The idea is to combine together two rather different ways to control this function $f$. The first proceeds by the entropy decrement method mentioned earlier, which roughly speaking works as follows. Firstly, we pick a prime $p$ and observe that $\lambda(pn) = -\lambda(n)$ for any $n$, which allows us to rewrite (3) as

$$(-1)^k f(a) = \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{\lambda(pn) \lambda(pn + pa) \cdots \lambda(pn + (k-1)pa)}{n}.$$

Making the change of variables $n' = pn$, we obtain

$$(-1)^k f(a) = \lim_{x \to \infty} \frac{1}{\log x} \sum_{n' \leq px} \frac{\lambda(n') \lambda(n' + pa) \cdots \lambda(n' + (k-1)pa)}{n'} p 1_{p | n'}.$$

The difference between the ranges $n' \leq px$ and $n' \leq x$ is negligible in the limit (here is where we crucially rely on the log-averaging), hence

$$(-1)^k f(a) = \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{\lambda(n) \lambda(n + pa) \cdots \lambda(n + (k-1)pa)}{n} p 1_{p | n}$$

and thus by (3) we have

$$(-1)^k f(a) = f(pa) + \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{\lambda(n) \lambda(n + pa) \cdots \lambda(n + (k-1)pa)}{n} (p 1_{p | n} - 1).$$

The entropy decrement argument can be used to show that the latter limit is small for most $p$ (roughly speaking, this is because the factors $p 1_{p | n} - 1$ behave like independent random variables as $p$ varies, so that concentration of measure results such as Hoeffding's inequality can apply, after using entropy inequalities to decouple somewhat these random variables from the $\lambda$ factors). We thus obtain the approximate isotopy property

$$(-1)^k f(a) \approx f(pa) \quad (4)$$

for most $a$ and $p$.
On the other hand, by the Furstenberg correspondence principle (as discussed in these previous posts), it is possible to express $f(a)$ as a multiple correlation

$$f(a) = \int_X g(x) g(T^a x) \cdots g(T^{(k-1)a} x) \, d\mu(x)$$

for some probability space $(X, \mu)$ equipped with a measure-preserving invertible map $T: X \to X$ and some $g \in L^\infty(X)$. Using results of Bergelson-Host-Kra, Leibman, and Le, this allows us to obtain a decomposition of the form

$$f(a) = f_1(a) + f_2(a) \quad (5)$$

where $f_1$ is a nilsequence, and $f_2$ goes to zero in density (even along the primes, or constant multiples of the primes). The original work of Bergelson-Host-Kra required ergodicity on $X$, which is very definitely a hypothesis that is not available here; however, the later work of Leibman removed this hypothesis, and the work of Le refined the control on $f_2$ so that one still has good control when restricting to primes, or constant multiples of primes.
Ignoring the small error $f_2(a)$, we can now combine (5) with (4) to conclude that

$$(-1)^k f_1(a) \approx f_1(pa)$$

for most $a$ and $p$. Using the equidistribution theory of nilsequences (as developed in this previous paper of Ben Green and myself), one can break up $f_1$ further into a periodic piece $f_0$ and an "irrational" or "minor arc" piece $f_3$. The contribution of the minor arc piece $f_3$ can be shown to mostly cancel itself out after dilating by primes $p$ and averaging, thanks to Vinogradov-type bilinear sum estimates (transferred to the primes). So we end up with

$$(-1)^k f(a) \approx f_0(pa),$$

which already shows (heuristically, at least) the claim that $f$ can be approximated by periodic functions $f_0$ which are isotopic in the sense that

$$(-1)^k f_0(a) \approx f_0(pa)$$

for most $a$ and $p$. But if $k$ is odd, one can use Dirichlet's theorem on primes in arithmetic progressions to restrict to primes $p$ that are $1$ modulo the period of $f_0$, and conclude now that $f_0$ vanishes identically, which (heuristically, at least) gives (2).
The same sort of argument works to give the more general bounds on correlations of bounded multiplicative functions. But for the specific task of proving (2), we initially used a slightly different argument that avoids using the ergodic theory machinery of Bergelson-Host-Kra, Leibman, and Le, but replaces it instead with the Gowers uniformity norm theory used to count linear equations in primes. Basically, by averaging (4) in $p$ using the "$W$-trick", as well as known facts about the Gowers uniformity of the von Mangoldt function, one can obtain an approximation of the form

$$(-1)^k f(a) \approx \frac{1}{\phi(W)} \sum_{1 \leq b < W: (b, W) = 1} f(ba),$$

where $b$ ranges over a large range of integers coprime to some primorial $W$. On the other hand, by iterating (4) we have

$$f(a) \approx f(pqa)$$

for most semiprimes $pq$, and by again averaging over semiprimes one can obtain an approximation of the form

$$f(a) \approx \frac{1}{\phi(W)} \sum_{1 \leq b < W: (b, W) = 1} f(ba).$$

For $k$ odd, one can combine the two approximations to conclude that $f(a) = 0$. (This argument is not given in the current paper, but we plan to detail it in a subsequent one.)
The complete homogeneous symmetric polynomial $h_d(x_1, \dots, x_n)$ of $n$ variables $x_1, \dots, x_n$ and degree $d$ can be defined as

$$h_d(x_1, \dots, x_n) := \sum_{1 \leq i_1 \leq i_2 \leq \dots \leq i_d \leq n} x_{i_1} \cdots x_{i_d},$$

thus for instance

$$h_1(x_1, \dots, x_n) = x_1 + \dots + x_n$$

and

$$h_2(x_1, \dots, x_n) = \sum_{i=1}^n x_i^2 + \sum_{1 \leq i < j \leq n} x_i x_j.$$

One can also define all the complete homogeneous symmetric polynomials of $n$ variables simultaneously by means of the generating function

$$\sum_{d=0}^\infty h_d(x_1, \dots, x_n) t^d = \frac{1}{(1 - t x_1) \cdots (1 - t x_n)}. \quad (1)$$

We will think of the variables $x_1, \dots, x_n$ as taking values in the real numbers. When one does so, one might observe that the degree two polynomial $h_2$ is a positive definite quadratic form, since it has the sum of squares representation

$$h_2(x_1, \dots, x_n) = \frac{1}{2} \sum_{i=1}^n x_i^2 + \frac{1}{2} \left( \sum_{i=1}^n x_i \right)^2.$$

In particular, $h_2(x_1, \dots, x_n) > 0$ unless $x_1 = \dots = x_n = 0$. This can be compared against the superficially similar quadratic form

$$\sum_{i=1}^n x_i^2 + \sum_{1 \leq i < j \leq n} \epsilon_{ij} x_i x_j,$$

where $\epsilon_{ij} = \pm 1$ are independent randomly chosen signs. The Wigner semicircle law says that for large $n$, the eigenvalues of this form will be mostly distributed in the interval $[-\sqrt{n}, \sqrt{n}]$ using the semicircle distribution, so in particular the form is quite far from being positive definite despite the presence of the first $n$ positive terms. Thus the positive definiteness is coming from the finer algebraic structure of $h_2$, and not just from the magnitudes of its coefficients.
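This contrast is easy to see numerically (a quick illustration of my own): the matrix of the quadratic form $h_2$ is $\frac{1}{2}(I + J)$ with $J$ the all-ones matrix, whose least eigenvalue is $\frac{1}{2}$, while the random-sign form has its spectrum spread out over roughly $[-\sqrt{n}, \sqrt{n}]$.

```python
# Spectra of the h_2 form versus the random-sign form (illustration only).
import numpy as np

rng = np.random.default_rng(4)
n = 500

Q = 0.5 * (np.eye(n) + np.ones((n, n)))  # h_2(x) = x^T Q x
print(np.linalg.eigvalsh(Q).min())       # 0.5: positive definite

E = np.triu(rng.choice([-0.5, 0.5], size=(n, n)), 1)
R = np.eye(n) + E + E.T                  # the random-sign quadratic form
evals = np.linalg.eigvalsh(R)
print(evals.min(), evals.max())          # roughly -sqrt(n) and sqrt(n): far from PSD
```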
One could ask whether the same positivity holds for other degrees than two. For odd degrees, the answer is clearly no, since $h_d(-x_1, \dots, -x_n) = -h_d(x_1, \dots, x_n)$ in that case. But one could hope for instance that

$$h_4(x_1, \dots, x_n) = \sum_{1 \leq i \leq j \leq k \leq l \leq n} x_i x_j x_k x_l$$

also has a sum of squares representation that demonstrates positive definiteness. This turns out to be true, but is remarkably tedious to establish directly. Nevertheless, we have a nice result of Hunter that gives positive definiteness for all even degrees $d$. In fact, a modification of his argument gives a little bit more:
Theorem 1 Let $n \geq 1$, let $d \geq 0$ be even, and let $x_1, \dots, x_n$ be reals.

- (i) (Positive definiteness) One has $h_d(x_1, \dots, x_n) \geq 0$, with strict inequality unless $x_1 = \dots = x_n = 0$.
- (ii) (Schur convexity) One has $h_d(x_1, \dots, x_n) \geq h_d(y_1, \dots, y_n)$ whenever $(x_1, \dots, x_n)$ majorises $(y_1, \dots, y_n)$, with equality if and only if $(y_1, \dots, y_n)$ is a permutation of $(x_1, \dots, x_n)$.
- (iii) (Schur-Ostrowski criterion for Schur convexity) For any $1 \leq i < j \leq n$, one has $(x_i - x_j) \left( \frac{\partial h_d}{\partial x_i} - \frac{\partial h_d}{\partial x_j} \right)(x_1, \dots, x_n) \geq 0$, with strict inequality unless $x_i = x_j$.
Proof: We induct on $d$ (allowing $n$ to be arbitrary). The claim is trivially true for $d = 0$, and easily verified for $d = 2$, so suppose that $d \geq 4$ and the claims (i), (ii), (iii) have already been proven for $d - 2$ (and for arbitrary $n$).

If we apply the differential operator $(x_i - x_j) \left( \frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j} \right)$ to $\frac{1}{(1 - t x_1) \cdots (1 - t x_n)}$ using the product rule, one obtains after a brief calculation

$$(x_i - x_j) \left( \frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j} \right) \frac{1}{(1 - t x_1) \cdots (1 - t x_n)} = \frac{t^2 (x_i - x_j)^2}{(1 - t x_i)(1 - t x_j)} \cdot \frac{1}{(1 - t x_1) \cdots (1 - t x_n)}.$$

Using (1) and extracting the coefficient of $t^d$, we obtain the identity

$$(x_i - x_j) \left( \frac{\partial h_d}{\partial x_i} - \frac{\partial h_d}{\partial x_j} \right)(x_1, \dots, x_n) = (x_i - x_j)^2 h_{d-2}(x_1, \dots, x_n, x_i, x_j). \quad (2)$$

The claim (iii) then follows from (i) and the induction hypothesis.
To obtain (ii), we use the more general statement (known as the Schur-Ostrowski criterion) that (ii) is implied from (iii) if we replace $h_d$ by an arbitrary symmetric, continuously differentiable function. To establish this criterion, we induct on $n$ (this argument can be made independently of the existing induction on $d$). If $(y_1, \dots, y_n)$ is majorised by $(x_1, \dots, x_n)$, it lies in the permutahedron of $(x_1, \dots, x_n)$. If $(y_1, \dots, y_n)$ lies on a face of this permutahedron, then after permuting both the $x_i$ and $y_i$ we may assume that $(y_1, \dots, y_m)$ is majorised by $(x_1, \dots, x_m)$, and $(y_{m+1}, \dots, y_n)$ is majorised by $(x_{m+1}, \dots, x_n)$, for some $1 \leq m < n$, and the claim then follows from two applications of the induction hypothesis. If instead $(y_1, \dots, y_n)$ lies in the interior of the permutahedron, one can follow it to the boundary by using one of the vector fields $(y_i - y_j)(e_i - e_j)$ (along which the function is non-decreasing, by (iii)), and the claim follows from the boundary case.
Finally, to obtain (i), we observe that $(x_1, \dots, x_n)$ majorises $(\bar{x}, \dots, \bar{x})$, where $\bar{x} := \frac{x_1 + \dots + x_n}{n}$ is the arithmetic mean of $x_1, \dots, x_n$. But $h_d(\bar{x}, \dots, \bar{x})$ is clearly a positive multiple of $\bar{x}^d \geq 0$, and the claim now follows from (ii). $\Box$
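The identity (2), which carries the whole induction, can also be confirmed symbolically in small cases; here is a sympy sketch (my own check, for $n = 3$ and $d = 4$):

```python
# Symbolic verification of identity (2) for n = 3, d = 4 (illustration only).
import sympy as sp

t = sp.symbols('t')
x = sp.symbols('x1 x2 x3')

def h(d, vars_):
    # complete homogeneous symmetric polynomial, read off from the generating function (1)
    gen = sp.Mul(*[1 / (1 - t * v) for v in vars_])
    return sp.series(gen, t, 0, d + 1).removeO().coeff(t, d)

d = 4
hd = h(d, x)
i, j = 0, 1
lhs = (x[i] - x[j]) * (sp.diff(hd, x[i]) - sp.diff(hd, x[j]))
rhs = (x[i] - x[j]) ** 2 * h(d - 2, list(x) + [x[i], x[j]])
print(sp.expand(lhs - rhs) == 0)  # True
```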
If the variables $x_1, \dots, x_n$ are restricted to be nonnegative, the same argument gives Schur convexity for odd degrees $d$ also.
Hunter's proof of positive definiteness is arranged a little differently than the one above, but still relies ultimately on the identity (2). I wonder if there is a genuinely different way to establish positive definiteness that does not go through this identity.