
I have just uploaded to the arXiv my paper “A Maclaurin type inequality”. This paper concerns a variant of the Maclaurin inequality for the elementary symmetric means

\displaystyle  s_k(y) := \frac{1}{\binom{n}{k}} \sum_{1 \leq i_1 < \dots < i_k \leq n} y_{i_1} \dots y_{i_k}

of {n} real numbers {y_1,\dots,y_n}. This inequality asserts that

\displaystyle  s_\ell(y)^{1/\ell} \leq s_k(y)^{1/k}

whenever {1 \leq k \leq \ell \leq n} and {y_1,\dots,y_n} are non-negative. It can be proven as a consequence of the Newton inequality

\displaystyle  s_{k-1}(y) s_{k+1}(y) \leq s_k(y)^2

valid for all {1 \leq k < n} and arbitrary real {y_1,\dots,y_n} (in particular, here the {y_i} are allowed to be negative). Note that the {k=1, n=2} case of this inequality is just the arithmetic mean-geometric mean inequality

\displaystyle  y_1 y_2 \leq (\frac{y_1+y_2}{2})^2;

the general case of this inequality can be deduced from this special case by a number of standard manipulations (the most non-obvious of which is the operation of differentiating the real-rooted polynomial {\prod_{i=1}^n (z-y_i)} to obtain another real-rooted polynomial, thanks to Rolle’s theorem; the key point is that this operation preserves all the elementary symmetric means up to {s_{n-1}}). One can think of Maclaurin’s inequality as providing a refined version of the arithmetic mean-geometric mean inequality on {n} variables (which corresponds to the case {k=1}, {\ell=n}).
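Both inequalities are easy to sanity-check numerically. The following sketch (the helper name `sym_mean` is just illustrative, not from the paper) verifies Maclaurin's inequality on a random non-negative tuple and Newton's inequality on a random signed tuple:

```python
from itertools import combinations
from math import comb, prod
import random

def sym_mean(y, k):
    """Elementary symmetric mean s_k(y): the k-th elementary
    symmetric polynomial of y, normalized by C(n, k)."""
    n = len(y)
    return sum(prod(c) for c in combinations(y, k)) / comb(n, k)

random.seed(0)
n = 6

# Maclaurin: s_l(y)^{1/l} <= s_k(y)^{1/k} for k <= l and non-negative y
y = [random.uniform(0, 2) for _ in range(n)]
means = [sym_mean(y, k) ** (1 / k) for k in range(1, n + 1)]
assert all(means[i] + 1e-12 >= means[i + 1] for i in range(n - 1))

# Newton: s_{k-1} s_{k+1} <= s_k^2 even for signed y (here s_0 = 1)
z = [random.uniform(-2, 2) for _ in range(n)]
s = [sym_mean(z, k) for k in range(n + 1)]
assert all(s[k - 1] * s[k + 1] <= s[k] ** 2 + 1e-12 for k in range(1, n))
```

The brute-force sum over {\binom{n}{k}} subsets is perfectly adequate for the small values of {n} used in such checks.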

Whereas Newton’s inequality works for arbitrary real {y_i}, the Maclaurin inequality breaks down once one or more of the {y_i} are permitted to be negative. A key example occurs when {n} is even, half of the {y_i} are equal to {+1}, and half are equal to {-1}. Here, one can verify that the elementary symmetric means {s_k(y)} vanish for odd {k} and are equal to { (-1)^{k/2} \frac{\binom{n/2}{k/2}}{\binom{n}{k}}} for even {k}. In particular, some routine estimation then gives the order of magnitude bound

\displaystyle  |s_k(y)|^{\frac{1}{k}} \asymp \frac{k^{1/2}}{n^{1/2}} \ \ \ \ \ (1)

for {0 < k \leq n} even, thus giving a significant violation of the Maclaurin inequality even after putting absolute values around the {s_k(y)}. In particular, vanishing of one {s_k(y)} does not imply vanishing of all subsequent {s_\ell(y)}.
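Both the vanishing of the odd means and the order of magnitude (1) can be confirmed directly on this example; a short numerical sketch (helper name illustrative):

```python
from itertools import combinations
from math import comb, prod

def sym_mean(y, k):
    """Elementary symmetric mean s_k(y) = e_k(y) / C(n, k)."""
    n = len(y)
    return sum(prod(c) for c in combinations(y, k)) / comb(n, k)

n = 12
y = [1] * (n // 2) + [-1] * (n // 2)  # half +1, half -1

ratios = []
for k in range(1, n + 1):
    s = sym_mean(y, k)
    if k % 2 == 1:
        assert abs(s) < 1e-12  # odd symmetric means vanish
    else:
        expected = (-1) ** (k // 2) * comb(n // 2, k // 2) / comb(n, k)
        assert abs(s - expected) < 1e-12
        # |s_k|^{1/k} should be comparable to (k/n)^{1/2}, as in (1)
        ratios.append(abs(s) ** (1 / k) / (k / n) ** 0.5)

assert all(0.5 < rho < 2 for rho in ratios)
```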

On the other hand, it was observed by Gopalan and Yehudayoff that if two consecutive values {s_k(y), s_{k+1}(y)} are small, then this makes all subsequent values {s_\ell(y)} small as well. More precise versions of this statement were subsequently observed by Meka-Reingold-Tal and Doron-Hatami-Hoza, who obtained estimates of the shape

\displaystyle  |s_\ell(y)|^{\frac{1}{\ell}} \ll \ell^{1/2} \max (|s_k(y)|^{\frac{1}{k}}, |s_{k+1}(y)|^{\frac{1}{k+1}}) \ \ \ \ \ (2)

whenever {1 \leq k \leq \ell \leq n} and {y_1,\dots,y_n} are real (but possibly negative). For instance, setting {k=1, \ell=n} we obtain the inequality

\displaystyle  |y_1 \dots y_n|^{1/n} \ll n^{1/2} \max( |s_1(y)|, |s_2(y)|^{1/2})

which can be established by combining the arithmetic mean-geometric mean inequality

\displaystyle  (y_1 \dots y_n)^{2/n} \leq \frac{y_1^2 + \dots + y_n^2}{n}

with the Newton identity

\displaystyle  y_1^2 + \dots + y_n^2 = n^2 s_1(y)^2 - n(n-1) s_2(y).

As with the proof of the Newton inequalities, the general case of (2) can be obtained from this special case after some standard manipulations (including the differentiation operation mentioned previously).
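The Newton identity (and the way it combines with the arithmetic mean-geometric mean inequality) can also be checked numerically; a quick sketch, with the helper name `sym_mean` illustrative:

```python
from itertools import combinations
from math import comb, prod
import random

def sym_mean(y, k):
    """Elementary symmetric mean s_k(y) = e_k(y) / C(n, k)."""
    n = len(y)
    return sum(prod(c) for c in combinations(y, k)) / comb(n, k)

random.seed(1)
n = 7
y = [random.uniform(-3, 3) for _ in range(n)]
s1, s2 = sym_mean(y, 1), sym_mean(y, 2)

# Newton identity: y_1^2 + ... + y_n^2 = n^2 s_1^2 - n(n-1) s_2
power_sum = sum(t * t for t in y)
assert abs(power_sum - (n ** 2 * s1 ** 2 - n * (n - 1) * s2)) < 1e-9

# AM-GM step: |y_1 ... y_n|^{1/n} is at most the quadratic mean
geom = abs(prod(y)) ** (1 / n)
assert geom <= (power_sum / n) ** 0.5 + 1e-9
```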

However, comparing the bound (2) against the bounds (1) from the key example reveals a mismatch: the right-hand side of (2) exceeds the left-hand side by a factor of about {k^{1/2}}. The main result of the paper rectifies this by establishing the optimal (up to constants) improvement

\displaystyle  |s_\ell(y)|^{\frac{1}{\ell}} \ll \frac{\ell^{1/2}}{k^{1/2}} \max (|s_k(y)|^{\frac{1}{k}}, |s_{k+1}(y)|^{\frac{1}{k+1}}) \ \ \ \ \ (3)

of (2). This answers a question posed on MathOverflow.
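One can check numerically that (3) is indeed tight on the key example: taking {k} odd (so that {s_k(y)} vanishes) and {\ell} even, both sides are of order {(\ell/n)^{1/2}}. A sketch (helper name illustrative):

```python
from itertools import combinations
from math import comb, prod

def sym_mean(y, k):
    """Elementary symmetric mean s_k(y) = e_k(y) / C(n, k)."""
    n = len(y)
    return sum(prod(c) for c in combinations(y, k)) / comb(n, k)

n = 16
y = [1] * (n // 2) + [-1] * (n // 2)
s = [sym_mean(y, k) for k in range(n + 1)]

ratios = []
for k in (1, 3, 5):                   # odd k: s_k = 0
    for l in range(k + 1, n + 1, 2):  # even l > k
        lhs = abs(s[l]) ** (1 / l)
        rhs = (l / k) ** 0.5 * max(abs(s[k]) ** (1 / k),
                                   abs(s[k + 1]) ** (1 / (k + 1)))
        ratios.append(rhs / lhs)

# the two sides of (3) agree up to a bounded constant on this example
assert all(0.3 < rho < 3 for rho in ratios)
```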

Unlike the previous arguments, we do not rely primarily on the arithmetic mean-geometric mean inequality. Instead, our primary tool is a new inequality

\displaystyle  \sum_{m=0}^\ell \binom{\ell}{m} |s_m(y)| r^m \geq (1+ |s_\ell(y)|^{2/\ell} r^2)^{\ell/2}, \ \ \ \ \ (4)

valid for all {1 \leq \ell \leq n} and {r>0}. Roughly speaking, the bound (3) would follow from (4) by setting {r \asymp (k/\ell)^{1/2} |s_\ell(y)|^{-1/\ell}}, provided that we can show that the {m=k,k+1} terms of the left-hand side dominate the sum in this regime. This can be done, after a technical step of passing to tuples {y} which nearly optimize the required inequality (3).
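The inequality (4) itself is straightforward to test numerically on random signed tuples; a sketch (helper name illustrative):

```python
from itertools import combinations
from math import comb, prod
import random

def sym_mean(y, k):
    """Elementary symmetric mean s_k(y) = e_k(y) / C(n, k)."""
    n = len(y)
    return sum(prod(c) for c in combinations(y, k)) / comb(n, k)

random.seed(2)
n, checked = 6, 0
for _ in range(200):
    y = [random.uniform(-2, 2) for _ in range(n)]
    r = random.uniform(0.1, 3.0)
    for l in range(1, n + 1):
        s = [abs(sym_mean(y, m)) for m in range(l + 1)]
        # (4): sum_m C(l,m) |s_m| r^m >= (1 + |s_l|^{2/l} r^2)^{l/2}
        lhs = sum(comb(l, m) * s[m] * r ** m for m in range(l + 1))
        rhs = (1 + s[l] ** (2 / l) * r * r) ** (l / 2)
        assert lhs >= rhs - 1e-9
        checked += 1
assert checked == 200 * n
```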

We sketch the proof of the inequality (4) as follows. One can use some standard manipulations to reduce to the case where {\ell=n} and {|s_n(y)|=1}, and after replacing {r} with {1/r} one is left with establishing the inequality

\displaystyle  \sum_{m=0}^n \binom{n}{m} |s_m(y)| r^{n-m} \geq (1+r^2)^{n/2}.

Note that equality is attained in the previously discussed example with half of the {y_i} equal to {+1} and the other half equal to {-1}, thanks to the binomial theorem.
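Indeed, in that example only the even terms survive, and the sum collapses via the binomial theorem; a quick check:

```python
from math import comb

n, r = 10, 1.7  # n even: half of the y_i equal +1, half equal -1
# |s_m| = C(n/2, m/2) / C(n, m) for even m, and 0 for odd m, so
# sum_m C(n,m) |s_m| r^{n-m} = sum_j C(n/2, j) (r^2)^{n/2 - j} = (1 + r^2)^{n/2}
lhs = sum(comb(n, m) * comb(n // 2, m // 2) / comb(n, m) * r ** (n - m)
          for m in range(0, n + 1, 2))
assert abs(lhs - (1 + r * r) ** (n / 2)) < 1e-9
```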

To prove this inequality, we consider the polynomial

\displaystyle  \prod_{j=1}^n (z - y_j) = \sum_{m=0}^n (-1)^m \binom{n}{m} s_m(y) z^{n-m}.

Evaluating this polynomial at {z = ir}, where {i} is the imaginary unit (so that {|ir - y_j| = (y_j^2+r^2)^{1/2}}), taking absolute values, using the triangle inequality, and then taking logarithms, we conclude that

\displaystyle  \frac{1}{2} \sum_{j=1}^n \log(y_j^2 + r^2) \leq \log(\sum_{m=0}^n \binom{n}{m} |s_m(y)| r^{n-m}).

A convexity argument gives the lower bound

\displaystyle  \log(y_j^2 + r^2) \geq \log(1+r^2) + \frac{2}{1+r^2} \log |y_j|

while the normalization {|s_n(y)|=1} gives

\displaystyle  \sum_{j=1}^n \log |y_j| = 0

so that summing the convexity bound over {j} gives {\frac{1}{2} \sum_{j=1}^n \log(y_j^2 + r^2) \geq \frac{n}{2} \log(1+r^2)}, and the claim follows upon exponentiating.
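Both steps of this final argument (the tangent-line bound coming from convexity, and the normalized inequality itself) can be checked numerically; a sketch, with the helper name `sym_mean` illustrative:

```python
from itertools import combinations
from math import comb, prod, log
import random

def sym_mean(y, k):
    """Elementary symmetric mean s_k(y) = e_k(y) / C(n, k)."""
    n = len(y)
    return sum(prod(c) for c in combinations(y, k)) / comb(n, k)

random.seed(3)

# tangent-line bound: log(t^2 + r^2) >= log(1 + r^2) + 2 log|t| / (1 + r^2)
for _ in range(1000):
    t, r = random.uniform(0.05, 5.0), random.uniform(0.1, 5.0)
    assert log(t * t + r * r) >= log(1 + r * r) + 2 * log(t) / (1 + r * r) - 1e-12

# normalized inequality: rescale y so that |s_n(y)| = 1, i.e. sum_j log|y_j| = 0
n, r = 6, 1.3
y = [random.choice([-1, 1]) * random.uniform(0.2, 2) for _ in range(n)]
c = abs(prod(y)) ** (-1 / n)
y = [c * t for t in y]
lhs = sum(comb(n, m) * abs(sym_mean(y, m)) * r ** (n - m) for m in range(n + 1))
assert lhs >= (1 + r * r) ** (n / 2) - 1e-9
```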
