
Let {G} be a finite set of order {N}; in applications {G} will typically be something like a finite abelian group, such as the cyclic group {{\bf Z}/N{\bf Z}}. Let us define a {1}-bounded function to be a function {f: G \rightarrow {\bf C}} such that {|f(n)| \leq 1} for all {n \in G}. There are many seminorms {\| \|} of interest that one can place on functions {f: G \rightarrow {\bf C}}, which are bounded by {1} on {1}-bounded functions; examples include the Gowers uniformity seminorms {\| \|_k} for {k \geq 1} (which are genuine norms for {k \geq 2}). All seminorms in this post will be implicitly assumed to obey this property.

In additive combinatorics, a significant role is played by inverse theorems, which abstractly take the following form for certain choices of seminorm {\| \|}, some parameters {\eta, \varepsilon>0}, and some class {{\mathcal F}} of {1}-bounded functions:

Theorem 1 (Inverse theorem template) If {f} is a {1}-bounded function with {\|f\| \geq \eta}, then there exists {F \in {\mathcal F}} such that {|\langle f, F \rangle| \geq \varepsilon}, where {\langle,\rangle} denotes the usual inner product

\displaystyle  \langle f, F \rangle := {\bf E}_{n \in G} f(n) \overline{F(n)}.

Informally, one should think of {\eta} as being somewhat small but fixed independently of {N}, {\varepsilon} as being somewhat smaller but depending only on {\eta} (and on the seminorm), and {{\mathcal F}} as representing the “structured functions” for these choices of parameters. There is some flexibility in exactly how to choose the class {{\mathcal F}} of structured functions, but intuitively an inverse theorem should become more powerful when this class is small. Accordingly, let us define the {(\eta,\varepsilon)}-entropy of the seminorm {\| \|} to be the least cardinality of {{\mathcal F}} for which such an inverse theorem holds. Seminorms with low entropy are ones for which inverse theorems can be expected to be a useful tool. This concept arose in some discussions I had with Ben Green many years ago, but never appeared in print, so I decided to record some observations we had on this concept here on this blog.

Lebesgue norms {\| f\|_{L^p} := ({\bf E}_{n \in G} |f(n)|^p)^{1/p}} for {1 < p < \infty} have exponentially large entropy (and so inverse theorems are not expected to be useful in this case):

Proposition 2 ({L^p} norm has exponentially large inverse entropy) Let {1 < p < \infty} and {0 < \eta < 1}. Then the {(\eta,\eta^p/4)}-entropy of {\| \|_{L^p}} is at most {(1+8/\eta^p)^N}. Conversely, for any {\varepsilon>0}, the {(\eta,\varepsilon)}-entropy of {\| \|_{L^p}} is at least {\exp( c \varepsilon^2 N)} for some absolute constant {c>0}.

Proof: If {f} is {1}-bounded with {\|f\|_{L^p} \geq \eta}, then we have

\displaystyle  |\langle f, |f|^{p-2} f \rangle| \geq \eta^p

and hence by the triangle inequality we have

\displaystyle  |\langle f, F \rangle| \geq \eta^p/2

where {F} is either the real or imaginary part of {|f|^{p-2} f}, which takes values in {[-1,1]}. If we let {\tilde F} be {F} rounded to the nearest multiple of {\eta^p/4}, then by the triangle inequality again we have

\displaystyle  |\langle f, \tilde F \rangle| \geq \eta^p/4.

There are at most {1+8/\eta^p} possible values for each {\tilde F(n)}, and hence at most {(1+8/\eta^p)^N} possible choices for {\tilde F}. This gives the first claim.
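As a quick numerical sanity check of this discretisation argument, here is a minimal Python sketch (the test function and the parameters are arbitrary choices of mine, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, eta = 1000, 3.0, 0.5

# an arbitrary 1-bounded test function with L^p norm at least eta
f = rng.uniform(0.5, 1.0, N) * np.exp(2j * np.pi * rng.uniform(size=N))
assert np.mean(np.abs(f) ** p) ** (1 / p) >= eta

F = np.abs(f) ** (p - 2) * f   # the dual function |f|^{p-2} f
# take whichever of the real and imaginary parts correlates better with f
G = max(F.real, F.imag, key=lambda G: abs(np.mean(f * G)))
step = eta ** p / 4
G_rounded = step * np.round(G / step)   # round to multiples of eta^p/4

print(abs(np.mean(f * G_rounded)))   # at least eta^p/4 = 0.03125
```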

Now suppose that there is an {(\eta,\varepsilon)}-inverse theorem for some {{\mathcal F}} of cardinality {M}. If we let {f} be a random sign function (so the {f(n)} are independent random variables taking values in {-1,+1} with equal probability), then there is a random {F \in {\mathcal F}} such that

\displaystyle  |\langle f, F \rangle| \geq \varepsilon

and hence by the pigeonhole principle there is a deterministic {F \in {\mathcal F}} such that

\displaystyle  {\bf P}( |\langle f, F \rangle| \geq \varepsilon ) \geq 1/M.

On the other hand, from the Hoeffding inequality one has

\displaystyle  {\bf P}( |\langle f, F \rangle| \geq \varepsilon ) \ll \exp( - c \varepsilon^2 N )

for some absolute constant {c}, hence

\displaystyle  M \geq \exp( c \varepsilon^2 N )

as claimed. \Box
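The concentration of {\langle f, F \rangle} for random signs that drives this lower bound is easy to observe numerically. A minimal Monte Carlo sketch (the fixed function {F} and the parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps, trials = 1000, 0.1, 10000

F = np.ones(N)   # an arbitrary fixed 1-bounded function
f = rng.choice([-1.0, 1.0], size=(trials, N))   # random sign functions
corr = np.abs(f @ F) / N   # |<f, F>| for each sampled f

# Hoeffding's bound here is 2*exp(-eps^2 * N / 2) ~ 0.013;
# the observed frequency should be comfortably below that
print(np.mean(corr >= eps))
```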

Most seminorms of interest in additive combinatorics, such as the Gowers uniformity norms, are bounded by some finite {L^p} norm thanks to Hölder’s inequality, so from the above proposition and the obvious monotonicity properties of entropy, we conclude that all Gowers norms on finite abelian groups {G} have at most exponential inverse theorem entropy. But we can do significantly better than this:

  • For the {U^1} seminorm {\|f\|_{U^1(G)} := |{\bf E}_{n \in G} f(n)|}, one can simply take {{\mathcal F} = \{1\}} to consist of the constant function {1}, and the {(\eta,\eta)}-entropy is clearly equal to {1} for any {0 < \eta < 1}.
  • For the {U^2} norm, the standard Fourier-analytic inverse theorem asserts that if {\|f\|_{U^2(G)} \geq \eta} then {|\langle f, e(\xi \cdot) \rangle| \geq \eta^2} for some Fourier character {\xi \in \hat G}. Thus the {(\eta,\eta^2)}-entropy is at most {N} (a numerical illustration of this case is sketched just after this list).
  • For the {U^k({\bf Z}/N{\bf Z})} norm on cyclic groups for {k > 2}, the inverse theorem proved by Green, Ziegler, and myself gives an {(\eta,\varepsilon)}-inverse theorem for some {\varepsilon \gg_{k,\eta} 1} and {{\mathcal F}} consisting of nilsequences {n \mapsto F(g(n) \Gamma)} for some filtered nilmanifold {G/\Gamma} of degree {k-1} in a finite collection of cardinality {O_{\eta,k}(1)}, some polynomial sequence {g: {\bf Z} \rightarrow G} (which, as subsequently observed by Candela and Sisask (see also Manners), one can choose to be {N}-periodic), and some Lipschitz function {F: G/\Gamma \rightarrow {\bf C}} of Lipschitz norm {O_{\eta,k}(1)}. By the Arzela-Ascoli theorem, the number of possible {F} (up to uniform errors of size at most {\varepsilon/2}, say) is {O_{\eta,k}(1)}. By standard arguments one can also ensure that the coefficients of the polynomial {g} are {O_{\eta,k}(1)}, and then by periodicity there are only {O(N^{O_{\eta,k}(1)})} such polynomials. As a consequence, the {(\eta,\varepsilon)}-entropy is of polynomial size {O_{\eta,k}( N^{O_{\eta,k}(1)} )} (a fact that seems to have first been implicitly observed in Lemma 6.2 of this paper of Frantzikinakis; thanks to Ben Green for this reference). One can obtain more precise dependence on {\eta,k} using the quantitative version of this inverse theorem due to Manners; back of the envelope calculations using Section 5 of that paper suggest to me that one can take {\varepsilon = \eta^{O_k(1)}} to be polynomial in {\eta} and the entropy to be of the order {O_k( N^{\exp(\exp(\eta^{-O_k(1)}))} )}, or alternatively one can reduce the entropy to {O_k( \exp(\exp(\eta^{-O_k(1)})) N^{\eta^{-O_k(1)}})} at the cost of degrading {\varepsilon} to {1/\exp\exp( O(\eta^{-O(1)}))}.
  • If one replaces the cyclic group {{\bf Z}/N{\bf Z}} by a vector space {{\bf F}_p^n} over some fixed finite field {{\bf F}_p} of prime order (so that {N=p^n}), then the inverse theorem of Ziegler and myself (available in both high and low characteristic) allows one to obtain an {(\eta,\varepsilon)}-inverse theorem for some {\varepsilon \gg_{k,\eta} 1} and {{\mathcal F}} the collection of non-classical degree {k-1} polynomial phases from {{\bf F}_p^n} to {S^1}, which one can normalize to equal {1} at the origin, and then by the classification of such polynomials one can calculate that the {(\eta,\varepsilon)} entropy is of quasipolynomial size {\exp( O_{p,k}(n^{k-1}) ) = \exp( O_{p,k}( \log^{k-1} N ) )} in {N}. By using the recent work of Gowers and Milicevic, one can make the dependence on {p,k} here more precise, but we will not perform these calculations here.
  • For the {U^3(G)} norm on an arbitrary finite abelian group, the recent inverse theorem of Jamneshan and myself gives (after some calculations) a bound of the polynomial form {O( q^{O(n^2)} N^{\exp(\eta^{-O(1)})})} on the {(\eta,\varepsilon)}-entropy for some {\varepsilon \gg \eta^{O(1)}}, which one can improve slightly to {O( q^{O(n^2)} N^{\eta^{-O(1)}})} if one degrades {\varepsilon} to {1/\exp(\eta^{-O(1)})}, where {q} is the maximal order of an element of {G}, and {n} is the rank (the number of elements needed to generate {G}). This bound is polynomial in {N} in the cyclic group case and quasipolynomial in general.
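To illustrate the {U^2} case concretely: on a cyclic group one has the identity {\|f\|_{U^2}^4 = \sum_{\xi} |\hat f(\xi)|^4}, and since {\sum_\xi |\hat f(\xi)|^2 \leq 1} for {1}-bounded {f}, the largest Fourier coefficient must be at least {\|f\|_{U^2}^2}. A minimal numerical sketch of this (the test function is an arbitrary choice):

```python
import numpy as np

N = 128
n = np.arange(N)
# arbitrary test function: a linear phase plus a quadratic phase
f = 0.6 * np.exp(2j * np.pi * 7 * n / N) + 0.4 * np.exp(2j * np.pi * n**2 / N)
f = f / np.max(np.abs(f))   # normalise to be 1-bounded

fhat = np.fft.fft(f) / N   # \hat f(xi) = E_n f(n) e(-n xi / N)
u2 = np.sum(np.abs(fhat) ** 4) ** 0.25   # U^2 norm via the Fourier identity

# the inverse theorem: some character correlates with f at level >= u2^2
print(u2 ** 2, np.max(np.abs(fhat)))
```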

For general finite abelian groups {G}, we do not yet have an inverse theorem of comparable power to the ones mentioned above that give polynomial or quasipolynomial upper bounds on the entropy. However, there is a cheap argument that at least gives some subexponential bounds:

Proposition 3 (Cheap subexponential bound) Let {k \geq 2} and {0 < \eta < 1/2}, and suppose that {G} is a finite abelian group of order {N}. Then the {(\eta,c_k \eta^{O_k(1)})}-entropy of {\| \|_{U^k(G)}} is at most {O( \exp( \eta^{-O_k(1)} N^{1 - \frac{k+1}{2^k-1}} ))}.

Proof: (Sketch) We use a standard random sampling argument, of the type used for instance by Croot-Sisask or Briet-Gopi (thanks to Ben Green for this latter reference). We can assume that {N \geq \eta^{-C_k}} for some sufficiently large {C_k>0}, since otherwise the claim follows from Proposition 2.

Let {A} be a random subset of {G} with the events {n \in A} being iid with probability {0 < p < 1} to be chosen later, conditioned to the event {|A| \leq 2pN}. Let {f} be a {1}-bounded function. By a standard second moment calculation, we see that with probability at least {1/2}, we have

\displaystyle  \|f\|_{U^k(G)}^{2^k} = {\bf E}_{n, h_1,\dots,h_k \in G} f(n) \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|} \frac{1}{p} 1_A f(n + \omega \cdot h)

\displaystyle + O((\frac{1}{N^{k+1} p^{2^k-1}})^{1/2}).

Thus, by the triangle inequality, if we choose {p := C \eta^{-2^{k+1}/(2^k-1)} / N^{\frac{k+1}{2^k-1}}} for some sufficiently large {C = C_k > 0}, then for any {1}-bounded {f} with {\|f\|_{U^k(G)} \geq \eta/2}, one has with probability at least {1/2} that

\displaystyle  |{\bf E}_{n, h_1,\dots,h_k \in G} f(n) \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|} \frac{1}{p} 1_A f(n + \omega \cdot h)|

\displaystyle \geq \eta^{2^k}/2^{2^k+1}.

We can write the left-hand side as {|\langle f, F \rangle|} where {F} is the randomly sampled dual function

\displaystyle  F(n) := {\bf E}_{h_1,\dots,h_k \in G} \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|+1} \frac{1}{p} 1_A f(n + \omega \cdot h).

Unfortunately, {F} is not {1}-bounded in general, but we have

\displaystyle  \|F\|_{L^2(G)}^2 \leq {\bf E}_{n, h_1,\dots,h_k ,h'_1,\dots,h'_k \in G}

\displaystyle  \prod_{\omega \in \{0,1\}^k \backslash \{0\}} \frac{1}{p} 1_A(n + \omega \cdot h) \frac{1}{p} 1_A(n + \omega \cdot h')

and the right-hand side can be shown to be {1+o(1)} on the average, so we can condition on the event that the right-hand side is {O(1)} without significant loss in failure probability.

If we then let {\tilde f_A} be {1_A f} rounded to the nearest Gaussian integer multiple of {\eta^{2^k}/2^{2^{10k}}} in the unit disk, one has from the triangle inequality that

\displaystyle  |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

where {\tilde F} is the discretised randomly sampled dual function

\displaystyle  \tilde F(n) := {\bf E}_{h_1,\dots,h_k \in G} \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|+1} \frac{1}{p} \tilde f_A(n + \omega \cdot h).

For any given {A}, there are at most {2Np} places {n} where {\tilde f_A(n)} can be non-zero, and in those places there are {O_k( \eta^{-2^{k}})} possible values for {\tilde f_A(n)}. Thus, if we let {{\mathcal F}_A} be the collection of all possible {\tilde F} associated to a given {A}, the cardinality of this set is {O( \exp( \eta^{-O_k(1)} N^{1 - \frac{k+1}{2^k-1}} ) )}, and for any {f} with {\|f\|_{U^k(G)} \geq \eta/2}, we have

\displaystyle  \sup_{\tilde F \in {\mathcal F}_A} |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

with probability at least {1/2}.

Now we remove the failure probability by independent resampling. By rounding to the nearest Gaussian integer multiple of {c_k \eta^{2^k}} in the unit disk for a sufficiently small {c_k>0}, one can find a family {{\mathcal G}} of cardinality {O( \eta^{-O_k(N)})} consisting of {1}-bounded functions {\tilde f} of {U^k(G)} norm at least {\eta/2} such that for every {1}-bounded {f} with {\|f\|_{U^k(G)} \geq \eta} there exists {\tilde f \in {\mathcal G}} such that

\displaystyle  \|f-\tilde f\|_{L^\infty(G)} \leq \eta^{2^k}/2^{2^k+3}.

Now, let {A_1,\dots,A_M} be independent samples of {A} for some {M} to be chosen later. By the preceding discussion, we see that with probability at least {1 - 2^{-M}}, we have

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle \tilde f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

for any given {\tilde f \in {\mathcal G}}, so by the union bound, if we choose {M = \lfloor C N \log \frac{1}{\eta} \rfloor} for a large enough {C = C_k}, we can find {A_1,\dots,A_M} such that

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle \tilde f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

for all {\tilde f \in {\mathcal G}}, and hence by the triangle inequality

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+3}.

Taking {{\mathcal F}} to be the union of the {{\mathcal F}_{A_j}} (applying some truncation and rescaling to these {L^2}-bounded functions to make them {L^\infty}-bounded, and then {1}-bounded), we obtain the claim. \Box
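To see the sampling mechanism in action, here is a minimal Python sketch for the easiest case {k=2} on a cyclic group; the test function, the density {p}, and the sizes are arbitrary choices of mine, and the dual function is computed by brute force rather than efficiently:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
n = np.arange(N)
f = np.exp(2j * np.pi * 3 * n / N)   # test function; here ||f||_{U^2}^4 = 1

p = 0.4                              # sampling density
g = (rng.random(N) < p) * f / p      # the sampled function (1/p) 1_A f

def dual2(u):
    # F(n) = E_{h1,h2} u(n+h1) u(n+h2) conj(u(n+h1+h2))
    F = np.zeros(N, dtype=complex)
    for h1 in range(N):
        for h2 in range(N):
            F += np.roll(u, -h1) * np.roll(u, -h2) \
                 * np.conj(np.roll(u, -(h1 + h2)))
    return F / N**2

exact = np.mean(f * np.conj(dual2(f))).real     # <f, F> = ||f||_{U^2}^4
sampled = np.mean(f * np.conj(dual2(g))).real   # dual sampled in three factors
print(exact, sampled)   # the two quantities should be close
```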

One way to obtain lower bounds on the inverse theorem entropy is to produce a collection of almost orthogonal functions with large norm. More precisely:

Proposition 4 Let {\| \|} be a seminorm, let {0 < \varepsilon \leq \eta < 1}, and suppose that one has a collection {f_1,\dots,f_M} of {1}-bounded functions such that {\|f_i\| \geq \eta} for all {i=1,\dots,M}, and such that for each {i} one has {|\langle f_i, f_j \rangle| \leq \varepsilon^2/2} for all but at most {L} choices of {j \in \{1,\dots,M\}}. Then the {(\eta, \varepsilon)}-entropy of {\| \|} is at least {\varepsilon^2 M / 2L}.

Proof: Suppose we have an {(\eta,\varepsilon)}-inverse theorem with some family {{\mathcal F}}. Then for each {i=1,\dots,M} there is {F_i \in {\mathcal F}} such that {|\langle f_i, F_i \rangle| \geq \varepsilon}. By the pigeonhole principle, there is thus {F \in {\mathcal F}} such that {|\langle f_i, F \rangle| \geq \varepsilon} for all {i} in a subset {I} of {\{1,\dots,M\}} of cardinality at least {M/|{\mathcal F}|}:

\displaystyle  |I| \geq M / |{\mathcal F}|.

We can sum this to obtain

\displaystyle  |\sum_{i \in I} c_i \langle f_i, F \rangle| \geq |I| \varepsilon

for some complex numbers {c_i} of unit magnitude. By Cauchy-Schwarz, this implies

\displaystyle  \| \sum_{i \in I} c_i f_i \|_{L^2(G)}^2 \geq |I|^2 \varepsilon^2

and hence by the triangle inequality

\displaystyle  \sum_{i,j \in I} |\langle f_i, f_j \rangle| \geq |I|^2 \varepsilon^2.

On the other hand, by hypothesis we can bound the left-hand side by {|I| (L + \varepsilon^2 |I|/2)}. Rearranging, we conclude that

\displaystyle  |I| \leq 2 L / \varepsilon^2

and hence

\displaystyle  |{\mathcal F}| \geq \varepsilon^2 M / 2L

giving the claim. \Box

Thus for instance:

  • For the {U^2(G)} norm, one can take {f_1,\dots,f_M} to be the family of linear exponential phases {n \mapsto e(\xi \cdot n)} with {M = N} and {L=1}, and obtain a linear lower bound of {\varepsilon^2 N/2} for the {(\eta,\varepsilon)}-entropy, thus matching the upper bound of {N} up to constants when {\varepsilon} is fixed (see the numerical sketch after this list).
  • For the {U^k({\bf Z}/N{\bf Z})} norm, a similar calculation using polynomial phases of degree {k-1}, combined with the Weyl sum estimates, gives a lower bound of {\gg_{k,\varepsilon} N^{k-1}} for the {(\eta,\varepsilon)}-entropy for any fixed {\eta,\varepsilon}; by considering nilsequences as well, together with nilsequence equidistribution theory, one can replace the exponent {k-1} here by some quantity that goes to infinity as {\eta \rightarrow 0}, though I have not attempted to calculate the exact rate.
  • For the {U^k({\bf F}_p^n)} norm, another similar calculation using polynomial phases of degree {k-1} should give a lower bound of {\gg_{p,k,\eta,\varepsilon} \exp( c_{p,k,\eta,\varepsilon} n^{k-1} )} for the {(\eta,\varepsilon)}-entropy, though I have not fully performed the calculation.
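As a sanity check on the first item in this list, one can verify numerically that distinct linear phases on {{\bf Z}/N{\bf Z}} are exactly orthogonal, so the hypothesis of Proposition 4 indeed holds with {L=1}; a minimal sketch:

```python
import numpy as np

N = 64
n = np.arange(N)
# row xi of this matrix is the linear phase n -> e(xi n / N)
chars = np.exp(2j * np.pi * np.outer(n, n) / N)

gram = chars @ chars.conj().T / N   # Gram matrix of inner products <f_i, f_j>
print(np.max(np.abs(gram - np.eye(N))))   # ~1e-13: off-diagonal terms vanish
```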

We close with one final example. Suppose {G} is a product {G = A \times B} of two sets {A,B} of cardinality {\asymp \sqrt{N}}, and we consider the Gowers box norm

\displaystyle  \|f\|_{\Box^2(G)}^4 := {\bf E}_{a,a' \in A; b,b' \in B} f(a,b) \overline{f}(a,b') \overline{f}(a',b) f(a',b').

One possible choice of class {{\mathcal F}} here is the family of indicators {1_{U \times V}} of “rectangles” {U \times V} with {U \subset A}, {V \subset B} (cf. this previous blog post on cut norms). By standard calculations, one can use this class to show that the {(\eta, \eta^4/10)}-entropy of {\| \|_{\Box^2(G)}} is {O( \exp( O(\sqrt{N}) ) )}, and a variant of the proof of the second part of Proposition 2 shows that this is the correct order of growth in {N}. In contrast, a modification of Proposition 3 only gives an upper bound of the form {O( \exp( O( N^{2/3} ) ) )} (the bottleneck is ensuring that the randomly sampled dual functions stay bounded in {L^2}), which shows that while this cheap bound is not optimal, it can still broadly give the correct “type” of bound (specifically, intermediate growth between polynomial and exponential).

In orthodox first-order logic, variables and expressions are only allowed to take one value at a time; a variable {x}, for instance, is not allowed to equal {+3} and {-3} simultaneously. We will call such variables completely specified. If one really wants to deal with multiple values of objects simultaneously, one is encouraged to use the language of set theory and/or logical quantifiers to do so.

However, the ability to allow expressions to become only partially specified is undeniably convenient, and also rather intuitive. A classic example here is that of the quadratic formula:

\displaystyle  \hbox{If } x,a,b,c \in {\bf R} \hbox{ with } a \neq 0, \hbox{ then }

\displaystyle  ax^2+bx+c=0 \hbox{ if and only if } x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}. \ \ \ \ \ (1)

Strictly speaking, the expression {x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}} is not well-formed according to the grammar of first-order logic; one should instead use something like

\displaystyle x = \frac{-b - \sqrt{b^2-4ac}}{2a} \hbox{ or } x = \frac{-b + \sqrt{b^2-4ac}}{2a}

or

\displaystyle x \in \left\{ \frac{-b - \sqrt{b^2-4ac}}{2a}, \frac{-b + \sqrt{b^2-4ac}}{2a} \right\}

or

\displaystyle x = \frac{-b + \epsilon \sqrt{b^2-4ac}}{2a} \hbox{ for some } \epsilon \in \{-1,+1\}

in order to strictly adhere to this grammar. But none of these three reformulations are as compact or as conceptually clear as the original one. In a similar spirit, a mathematical English sentence such as

\displaystyle  \hbox{The sum of two odd numbers is an even number} \ \ \ \ \ (2)

is also not a first-order sentence; one would instead have to write something like

\displaystyle  \hbox{For all odd numbers } x, y, \hbox{ the number } x+y \hbox{ is even} \ \ \ \ \ (3)

or

\displaystyle  \hbox{For all odd numbers } x,y \hbox{ there exists an even number } z \ \ \ \ \ (4)

\displaystyle  \hbox{ such that } x+y=z

instead. These reformulations are not all that hard to decipher, but they do have the aesthetically displeasing effect of cluttering an argument with temporary variables such as {x,y,z} which are used once and then discarded.

Another example of partially specified notation is the innocuous {\ldots} notation. For instance, the assertion

\displaystyle \pi=3.14\ldots,

when written formally using first-order logic, would become something like

\displaystyle \pi = 3 + \frac{1}{10} + \frac{4}{10^2} + \sum_{n=3}^\infty \frac{a_n}{10^n} \hbox{ for some sequence } (a_n)_{n=3}^\infty

\displaystyle  \hbox{ with } a_n \in \{0,1,2,3,4,5,6,7,8,9\} \hbox{ for all } n,

which is not exactly an elegant reformulation. Similarly with statements such as

\displaystyle \tan x = x + \frac{x^3}{3} + \ldots \hbox{ for } |x| < \pi/2

or

\displaystyle \tan x = x + \frac{x^3}{3} + O(|x|^5) \hbox{ for } |x| < \pi/2.

Below the fold I’ll try to assign a formal meaning to partially specified expressions such as (1), for instance allowing one to condense (2), (3), (4) to just

\displaystyle  \hbox{odd} + \hbox{odd} = \hbox{even}.

When combined with another common (but often implicit) extension of first-order logic, namely the ability to reason using ambient parameters, we become able to formally introduce asymptotic notation such as the big-O notation {O()} or the little-o notation {o()}. We will explain how to do this at the end of this post.
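As a crude computational analogue of what follows, one can already model a partially specified expression as the set of its possible values, with operations applied elementwise; this toy Python sketch (entirely my own illustration, and only a finite stand-in for the actual formalism below the fold) renders “odd + odd = even” as a set inclusion:

```python
# a partially specified integer is modelled as the set of its possible values
def add(xs, ys):
    return {x + y for x in xs for y in ys}

odd = {n for n in range(-20, 21) if n % 2 == 1}    # finite stand-in for "odd"
even = {n for n in range(-40, 41) if n % 2 == 0}   # finite stand-in for "even"

# "odd + odd = even" becomes: every possible value of odd + odd is even
print(add(odd, odd) <= even)   # True
```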


A popular way to visualise relationships between some finite number of sets is via Venn diagrams, or more generally Euler diagrams. In these diagrams, a set is depicted as a two-dimensional shape such as a disk or a rectangle, and the various Boolean relationships between these sets (e.g., that one set is contained in another, or that the intersection of two of the sets is equal to a third) are represented by the Boolean algebra of these shapes; Venn diagrams correspond to the case where the sets are in “general position” in the sense that all non-trivial Boolean combinations of the sets are non-empty. For instance, to depict the general situation of two sets {A,B} together with their intersection {A \cap B} and union {A \cup B}, one might use a Venn diagram such as

[Figure: Venn diagram of two sets {A,B} in general position]

(where we have given each region depicted a different color, and moved the edges of each region a little away from each other in order to make them all visible separately), but if one wanted to instead depict a situation in which the intersection {A \cap B} was empty, one could use an Euler diagram such as

[Figure: Euler diagram of two disjoint sets {A,B}]

One can use the area of various regions in a Venn or Euler diagram as a heuristic proxy for the cardinality {|A|} (or measure {\mu(A)}) of the set {A} corresponding to such a region. For instance, the above Venn diagram can be used to intuitively justify the inclusion-exclusion formula

\displaystyle  |A \cup B| = |A| + |B| - |A \cap B|

for finite sets {A,B}, while the above Euler diagram similarly justifies the special case

\displaystyle  |A \cup B| = |A| + |B|

for finite disjoint sets {A,B}.
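Both counting identities are of course easy to confirm programmatically; a minimal sketch with arbitrary finite sets:

```python
A = {1, 2, 3, 4, 5}
B = {4, 5, 6, 7}
C = {8, 9}          # disjoint from A

# inclusion-exclusion: |A u B| = |A| + |B| - |A n B|
assert len(A | B) == len(A) + len(B) - len(A & B)

# the disjoint special case: |A u C| = |A| + |C|
assert len(A | C) == len(A) + len(C)
```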

While Venn and Euler diagrams are traditionally two-dimensional in nature, there is nothing preventing one from using one-dimensional diagrams such as

[Figure: one-dimensional Venn diagram]

or even three-dimensional diagrams such as this one from Wikipedia:

[Figure: three-dimensional Venn diagram, from Wikipedia]

Of course, in such cases one would use length or volume as a heuristic proxy for cardinality or measure, rather than area.

With the addition of arrows, Venn and Euler diagrams can also accommodate (to some extent) functions between sets. Here for instance is a depiction of a function {f: A \rightarrow B}, the image {f(A)} of that function, and the image {f(A')} of some subset {A'} of {A}:

[Figure: a function {f: A \rightarrow B}, with the images {f(A)} and {f(A')}]

Here one can illustrate surjectivity of {f: A \rightarrow B} by having {f(A)} fill out all of {B}; one can similarly illustrate injectivity of {f} by giving {f(A)} exactly the same shape (or at least the same area) as {A}. So here for instance might be how one would illustrate an injective function {f: A \rightarrow B}:

[Figure: an injective function {f: A \rightarrow B}]

Cartesian product operations can be incorporated into these diagrams by appropriate combinations of one-dimensional and two-dimensional diagrams. Here for instance is a diagram that illustrates the identity {(A \cup B) \times C = (A \times C) \cup (B \times C)}:

[Figure: diagram of the identity {(A \cup B) \times C = (A \times C) \cup (B \times C)}]

In this blog post I would like to propose a similar family of diagrams to illustrate relationships between vector spaces (over a fixed base field {k}, such as the reals) or abelian groups, rather than sets. The categories of ({k}-)vector spaces and abelian groups are quite similar in many ways; the former consists of modules over a base field {k}, while the latter consists of modules over the integers {{\bf Z}}; also, both categories are basic examples of abelian categories. The notion of a dimension in a vector space is analogous in many ways to that of cardinality of a set; see this previous post for an instance of this analogy (in the context of Shannon entropy). (UPDATE: I have learned that an essentially identical notation has also been proposed in an unpublished manuscript of Ravi Vakil.)


In everyday usage, we rely heavily on percentages to quantify probabilities and proportions: we might say that a prediction is {50\%} accurate or {80\%} accurate, that there is a {2\%} chance of dying from some disease, and so forth. However, for those without extensive mathematical training, it can sometimes be difficult to assess whether a given percentage amounts to a “good” or “bad” outcome, because this depends very much on the context of how the percentage is used. For instance:

  • (i) In a two-party election, an outcome of say {51\%} to {49\%} might be considered close, but {55\%} to {45\%} would probably be viewed as a convincing mandate, and {60\%} to {40\%} would likely be viewed as a landslide.
  • (ii) Similarly, if one were to poll an upcoming election, a poll of {51\%} to {49\%} would be too close to call, {55\%} to {45\%} would be an extremely favorable result for the candidate, and {60\%} to {40\%} would mean that it would be a major upset if the candidate lost the election.
  • (iii) On the other hand, a medical operation that only had a {51\%}, {55\%}, or {60\%} chance of success would be viewed as being incredibly risky, especially if failure meant death or permanent injury to the patient. Even an operation that was {90\%} or {95\%} likely to be non-fatal (i.e., a {10\%} or {5\%} chance of death) would not be conducted lightly.
  • (iv) A weather prediction of, say, {30\%} chance of rain during a vacation trip might be sufficient cause to pack an umbrella, even though it is more likely than not that rain would not occur. On the other hand, if the prediction was for an {80\%} chance of rain, and it ended up that the skies remained clear, this does not seriously damage the accuracy of the prediction – indeed, such an outcome would be expected in one out of every five such predictions.
  • (v) Even extremely tiny percentages of toxic chemicals in everyday products can be considered unacceptable. For instance, EPA rules require action to be taken when the percentage of lead in drinking water exceeds {0.0000015\%} (15 parts per billion). At the opposite extreme, recycling contamination rates as high as {10\%} are often considered acceptable.

Because of all the very different ways in which percentages could be used, I think it may make sense to propose an alternate system of units to measure one class of probabilities, namely the probabilities of avoiding some highly undesirable outcome, such as death, accident or illness. The units I propose are that of “nines”, which are already commonly used to measure availability of some service or purity of a material, but can be equally used to measure the safety (i.e., lack of risk) of some activity. Informally, nines measure how many consecutive appearances of the digit {9} are in the probability of successfully avoiding the negative outcome, thus

  • {90\%} success = one nine of safety
  • {99\%} success = two nines of safety
  • {99.9\%} success = three nines of safety
and so forth. Using the mathematical device of logarithms, one can also assign a fractional number of nines of safety to a general probability:

Definition 1 (Nines of safety) An activity (affecting one or more persons, over some given period of time) that has a probability {p} of the “safe” outcome and probability {1-p} of the “unsafe” outcome will have {k} nines of safety against the unsafe outcome, where {k} is defined by the formula

\displaystyle  k = -\log_{10}(1-p) \ \ \ \ \ (1)

(where {\log_{10}} is the logarithm to base ten), or equivalently

\displaystyle  p = 1 - 10^{-k}. \ \ \ \ \ (2)

Remark 2 Because of the various uncertainties in measuring probabilities, as well as the inaccuracies in some of the assumptions and approximations we will be making later, we will not attempt to measure the number of nines of safety beyond the first decimal point; thus we will round to the nearest tenth of a nine of safety throughout this post.
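In code, the two directions of this definition are one-liners; a minimal sketch (the function names are my own, and the rounding follows Remark 2):

```python
import math

def nines(p):
    """Nines of safety for a success probability p, per Definition 1."""
    return math.inf if p == 1 else round(-math.log10(1 - p), 1)

def success_rate(k):
    """Success probability corresponding to k nines of safety."""
    return 1 - 10 ** (-k)

print(nines(0.999))      # 3.0
print(success_rate(2))   # 0.99
```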

Here is a conversion table between percentage rates of success (the safe outcome), failure (the unsafe outcome), and the number of nines of safety one has:

Success rate {p} Failure rate {1-p} Number of nines {k}
{0\%} {100\%} {0.0}
{50\%} {50\%} {0.3}
{75\%} {25\%} {0.6}
{80\%} {20\%} {0.7}
{90\%} {10\%} {1.0}
{95\%} {5\%} {1.3}
{97.5\%} {2.5\%} {1.6}
{98\%} {2\%} {1.7}
{99\%} {1\%} {2.0}
{99.5\%} {0.5\%} {2.3}
{99.75\%} {0.25\%} {2.6}
{99.8\%} {0.2\%} {2.7}
{99.9\%} {0.1\%} {3.0}
{99.95\%} {0.05\%} {3.3}
{99.975\%} {0.025\%} {3.6}
{99.98\%} {0.02\%} {3.7}
{99.99\%} {0.01\%} {4.0}
{100\%} {0\%} infinite

Thus, if one has no nines of safety whatsoever, one is guaranteed to fail; but each nine of safety one has reduces the failure rate by a factor of {10}. In an ideal world, one would have infinitely many nines of safety against any risk, but in practice there are no {100\%} guarantees against failure, and so one can only expect a finite amount of nines of safety in any given situation. Realistically, one should thus aim to have as many nines of safety as one can reasonably expect to have, but not to demand an infinite amount.

Remark 3 The number of nines of safety against a certain risk is not absolute; it will depend not only on the risk itself, but also on (a) the number of people exposed to the risk, and (b) the length of time one is exposed to the risk. Exposing more people or increasing the duration of exposure will reduce the number of nines, and conversely exposing fewer people or reducing the duration will increase the number of nines; see Proposition 7 below for a rough rule of thumb in this regard.

Remark 4 Nines of safety are a logarithmic scale of measurement, rather than a linear scale. Other familiar examples of logarithmic scales of measurement include the Richter scale of earthquake magnitude, the pH scale of acidity, the decibel scale of sound level, octaves in music, and the magnitude scale for stars.

Remark 5 One way to think about nines of safety is via the Swiss cheese model that was created recently to describe pandemic risk management. In this model, each nine of safety can be thought of as a slice of Swiss cheese, with holes occupying {10\%} of that slice. Having {k} nines of safety is then analogous to standing behind {k} such slices of Swiss cheese. In order for a risk to actually impact you, it must pass through each of these {k} slices. A fractional nine of safety corresponds to a fractional slice of Swiss cheese that covers the amount of space given by the above table. For instance, {0.6} nines of safety corresponds to a fractional slice that covers about {75\%} of the given area (leaving {25\%} uncovered).

Now to give some real-world examples of nines of safety. Using data for deaths in the US in 2019 (without attempting to account for factors such as age and gender), a random US citizen will have had the following amount of safety from dying from some selected causes in that year:

Cause of death Mortality rate per {100,\! 000} (approx.) Nines of safety
All causes {870} {2.0}
Heart disease {200} {2.7}
Cancer {180} {2.7}
Accidents {52} {3.3}
Drug overdose {22} {3.7}
Influenza/Pneumonia {15} {3.8}
Suicide {14} {3.8}
Gun violence {12} {3.9}
Car accident {11} {4.0}
Murder {5} {4.3}
Airplane crash {0.14} {5.9}
Lightning strike {0.006} {7.2}

The safety of air travel is particularly remarkable: a given hour of flying in general aviation has a fatality rate of {0.00001}, or about {5} nines of safety, while for the major carriers the fatality rate drops down to {0.00000005}, or about {7.3} nines of safety.

Of course, in 2020, COVID-19 deaths became significant. In this year in the US, the mortality rate for COVID-19 (as the underlying or contributing cause of death) was {91.5} per {100,\! 000}, corresponding to {3.0} nines of safety, which was less safe than all other causes of death except for heart disease and cancer. At this time of writing, data for all of 2021 is of course not yet available, but it seems likely that the safety level would be even lower for this year.

Some further illustrations of the concept of nines of safety:

  • Each round of Russian roulette has a success rate of {5/6}, providing only {0.8} nines of safety. Of course, the safety will decrease with each additional round: one has only {0.5} nines of safety after two rounds, {0.4} nines after three rounds, and so forth. (See also Proposition 7 below.)
  • The ancient Roman punishment of decimation, by definition, provided exactly one nine of safety to each soldier being punished.
  • Rolling a {1} on a {20}-sided die is a risk that carries about {1.3} nines of safety.
  • Rolling a double one (“snake eyes“) from two six-sided dice carries about {1.6} nines of safety.
  • One has about {2.6} nines of safety against the risk of someone randomly guessing your birthday on the first attempt.
  • A null hypothesis has {1.3} nines of safety against producing a {p = 0.05} statistically significant result, and {2.0} nines against producing a {p=0.01} statistically significant result. (However, one has to be careful when reversing the conditional; a {p=0.01} statistically significant result does not necessarily have {2.0} nines of safety against the null hypothesis. In Bayesian statistics, the precise relationship between the two risks is given by Bayes’ theorem.)
  • If a poker opponent is dealt a five-card hand, one has {5.8} nines of safety against that opponent being dealt a royal flush, {4.8} against a straight flush or higher, {3.6} against four-of-a-kind or higher, {2.8} against a full house or higher, {2.4} against a flush or higher, {2.1} against a straight or higher, {1.5} against three-of-a-kind or higher, {1.1} against two pairs or higher, and just {0.3} against one pair or higher. (This data was converted from this Wikipedia table.)
  • A {k}-digit PIN number (or a {k}-digit combination lock) carries {k} nines of safety against each attempt to randomly guess the PIN. A length {k} password that allows for numbers, upper and lower case letters, and punctuation carries about {2k} nines of safety against a single guess. (For the reduction in safety caused by multiple guesses, see Proposition 7 below.)

Here is another way to think about nines of safety:

Proposition 6 (Nines of safety extend expected onset of risk) Suppose a certain risky activity has {k} nines of safety. If one repeatedly indulges in this activity until the risk occurs, then the expected number of trials before the risk occurs is {10^k}.

Proof: The probability that the risk is activated after exactly {n} trials is {(1-10^{-k})^{n-1} 10^{-k}}, which is a geometric distribution of parameter {10^{-k}}. The claim then follows from the standard properties of that distribution. \Box
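This is easy to confirm by simulation; a minimal sketch (the number of nines and the number of experiments are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
k = 1.0            # nines of safety of the activity
q = 10.0 ** (-k)   # failure probability per trial

# number of trials until the first failure, across many experiments
onsets = rng.geometric(q, size=100_000)
print(onsets.mean(), 10 ** k)   # both close to 10
```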

Thus, for instance, if one performs some risky activity daily, then the expected length of time before the risk occurs is given by the following table:

Daily nines of safety Expected onset of risk
{0} One day
{0.8} One week
{1.5} One month
{2.6} One year
{2.9} Two years
{3.3} Five years
{3.6} Ten years
{3.9} Twenty years
{4.3} Fifty years
{4.6} A century

Or, if one wants to convert the yearly risks of dying from a specific cause into expected years before that cause of death would occur (assuming for sake of discussion that no other cause of death exists):

Yearly nines of safety Expected onset of risk
{0} One year
{0.3} Two years
{0.7} Five years
{1} Ten years
{1.3} Twenty years
{1.7} Fifty years
{2.0} A century

These tables suggest a relationship between the amount of safety one would have in a short timeframe, such as a day, and a longer time frame, such as a year. Here is an approximate formalisation of that relationship:

Proposition 7 (Repeated exposure reduces nines of safety) If a risky activity with {k} nines of safety is (independently) repeated {m} times, then (assuming {k} is large enough depending on {m}), the repeated activity will have approximately {k - \log_{10} m} nines of safety. Conversely: if the repeated activity has {k'} nines of safety, the individual activity will have approximately {k' + \log_{10} m} nines of safety.

Proof: An activity with {k} nines of safety will be safe with probability {1-10^{-k}}, hence safe with probability {(1-10^{-k})^m} if repeated independently {m} times. For {k} large, we can approximate

\displaystyle  (1 - 10^{-k})^m \approx 1 - m 10^{-k} = 1 - 10^{-(k - \log_{10} m)}

giving the former claim. The latter claim follows from inverting the former. \Box
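One can check the quality of this approximation directly; a minimal sketch comparing the exact number of nines after repetition with the rule of thumb (the values of {k} and {m} are arbitrary):

```python
import math

def repeated_nines_exact(k, m):
    """Exact nines of safety after m independent repetitions."""
    return -math.log10(1 - (1 - 10.0 ** (-k)) ** m)

k, m = 4.0, 365   # e.g. a daily risk with 4 nines, repeated for a year
print(repeated_nines_exact(k, m))   # ~1.446
print(k - math.log10(m))            # rule of thumb: ~1.438
```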

Remark 8 The hypothesis of independence here is key. If there is a lot of correlation between the risks between different repetitions of the activity, then there can be much less reduction in safety caused by that repetition. As a simple example, suppose that {90\%} of a workforce are trained to perform some task flawlessly no matter how many times they repeat the task, but the remaining {10\%} are untrained and will always fail at that task. If one selects a random worker and asks them to perform the task, one has {1.0} nines of safety against the task failing. If one took that same random worker and asked them to perform the task {m} times, the above proposition might suggest that the number of nines of safety would drop to approximately {1.0 - \log_{10} m}; but in this case there is perfect correlation, and in fact the number of nines of safety remains steady at {1.0} since it is the same {10\%} of the workforce that would fail each time.

Because of this caveat, one should view the above proposition as only a crude first approximation that can be used as a simple rule of thumb, but should not be relied upon for more precise calculations.

One can repeat a risk either in time (extending the time of exposure to the risk, say from a day to a year), or in space (by exposing the risk to more people). The above proposition then gives an additive conversion law for nines of safety in either case. Here are some conversion tables for time:

From/to Daily Weekly Monthly Yearly
Daily 0 -0.8 -1.5 -2.6
Weekly +0.8 0 -0.6 -1.7
Monthly +1.5 +0.6 0 -1.1
Yearly +2.6 +1.7 +1.1 0

From/to Yearly Per 5 yr Per decade Per century
Yearly 0 -0.7 -1.0 -2.0
Per 5 yr +0.7 0 -0.3 -1.3
Per decade +1.0 +0.3 0 -1.0
Per century +2.0 +1.3 +1.0 0

For instance, as mentioned before, the yearly amount of safety against cancer is about {2.7}. Using the above table (and making the somewhat unrealistic hypothesis of independence), we then predict the daily amount of safety against cancer to be about {2.7 + 2.6 = 5.3} nines, the weekly amount to be about {2.7 + 1.7 = 4.4} nines, and the amount of safety over five years to drop to about {2.7 - 0.7 = 2.0} nines.

Now we turn to conversions in space. If one knows the level of safety against a certain risk for an individual, and then one (independently) exposes a group of such individuals to that risk, then the reduction in nines of safety when considering the possibility that at least one group member experiences this risk is given by the following table:

Group Reduction in safety
You ({1} person) {0}
You and your partner ({2} people) {-0.3}
You and your parents ({3} people) {-0.5}
You, your partner, and three children ({5} people) {-0.7}
An extended family of {10} people {-1.0}
A class of {30} people {-1.5}
A workplace of {100} people {-2.0}
A school of {1,\! 000} people {-3.0}
A university of {10,\! 000} people {-4.0}
A town of {100,\! 000} people {-5.0}
A city of {1} million people {-6.0}
A state of {10} million people {-7.0}
A country of {100} million people {-8.0}
A continent of {1} billion people {-9.0}
The entire planet {-9.8}

For instance, in a given year (and making the somewhat implausible assumption of independence), you might have {2.7} nines of safety against cancer, but you and your partner collectively only have about {2.7 - 0.3 = 2.4} nines of safety against this risk, your family of five might only have about {2.7 - 0.7 = 2} nines of safety, and so forth. By the time one gets to a group of {1,\! 000} people, it actually becomes very likely that at least one member of the group will die of cancer in that year. (Here the precise conversion table breaks down, because a negative number of nines such as {2.7 - 3.0 = -0.3} is not possible, but one should interpret a prediction of a negative number of nines as an assertion that failure is very likely to happen. Also, in practice the reduction in safety is less than this rule predicts, due to correlations such as risk factors that are common to the group being considered that are incompatible with the assumption of independence.)

In the opposite direction, any reduction in exposure (either in time or space) to a risk will increase one’s safety level, as per the following table:

Reduction in exposure Additional nines of safety
{\div 1} {0}
{\div 2} {+0.3}
{\div 3} {+0.5}
{\div 5} {+0.7}
{\div 10} {+1.0}
{\div 100} {+2.0}

For instance, a five-fold reduction in exposure will reclaim about {0.7} additional nines of safety.

Here is a slightly different way to view nines of safety:

Proposition 9 Suppose that a group of {m} people are independently exposed to a given risk. If there are at most

\displaystyle  \log_{10} \frac{1}{1-2^{-1/m}}

nines of individual safety against that risk, then there is at least a {50\%} chance that one member of the group is affected by the risk.

Proof: If individually there are {k} nines of safety, then the probability that all the members of the group avoid the risk is {(1-10^{-k})^m}. Since the inequality

\displaystyle  (1-10^{-k})^m \leq \frac{1}{2}

is equivalent to

\displaystyle  k \leq \log_{10} \frac{1}{1-2^{-1/m}},

the claim follows. \Box
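The threshold in Proposition 9 is a one-line computation; this minimal sketch reproduces a few rows of the table below:

```python
import math

def individual_nines_needed(m):
    """Nines of individual safety at which a group of m people has
    a 50% chance that someone is affected (Proposition 9)."""
    return math.log10(1 / (1 - 2 ** (-1 / m)))

for m in [1, 2, 10, 100, 10**6]:
    print(m, round(individual_nines_needed(m), 1))   # 0.3, 0.5, 1.2, 2.2, 6.2
```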

Thus, for a group to collectively avoid a risk with at least a {50\%} chance, one needs the following level of individual safety:

Group Individual safety level required
You ({1} person) {0.3}
You and your partner ({2} people) {0.5}
You and your parents ({3} people) {0.7}
You, your partner, and three children ({5} people) {0.9}
An extended family of {10} people {1.2}
A class of {30} people {1.6}
A workplace of {100} people {2.2}
A school of {1,\! 000} people {3.2}
A university of {10,\! 000} people {4.2}
A town of {100,\! 000} people {5.2}
A city of {1} million people {6.2}
A state of {10} million people {7.2}
A country of {100} million people {8.2}
A continent of {1} billion people {9.2}
The entire planet {10.0}

For large {m}, the level {k} of nines of individual safety required to protect a group of size {m} with probability at least {50\%} is approximately {\log_{10} \frac{m}{\ln 2} \approx (\log_{10} m) + 0.2}.

Precautions that can work to prevent a certain risk from occurring will add additional nines of safety against that risk, even if the precaution is not {100\%} effective. Here is the precise rule:

Proposition 10 (Precautions add nines of safety) Suppose an activity carries {k} nines of safety against a certain risk, and a separate precaution can independently protect against that risk with {l} nines of safety (that is to say, the probability that the protection is effective is {1 - 10^{-l}}). Then applying that precaution increases the number of nines in the activity from {k} to {k+l}.

Proof: The probability that the precaution fails and the risk then occurs is {10^{-l} \times 10^{-k} = 10^{-(k+l)}}. The claim now follows from Definition 1. \Box

In particular, we can repurpose the table at the start of this post as a conversion chart for effectiveness of a precaution:

Effectiveness Failure rate Additional nines provided
{0\%} {100\%} {+0.0}
{50\%} {50\%} {+0.3}
{75\%} {25\%} {+0.6}
{80\%} {20\%} {+0.7}
{90\%} {10\%} {+1.0}
{95\%} {5\%} {+1.3}
{97.5\%} {2.5\%} {+1.6}
{98\%} {2\%} {+1.7}
{99\%} {1\%} {+2.0}
{99.5\%} {0.5\%} {+2.3}
{99.75\%} {0.25\%} {+2.6}
{99.8\%} {0.2\%} {+2.7}
{99.9\%} {0.1\%} {+3.0}
{99.95\%} {0.05\%} {+3.3}
{99.975\%} {0.025\%} {+3.6}
{99.98\%} {0.02\%} {+3.7}
{99.99\%} {0.01\%} {+4.0}
{100\%} {0\%} infinite

Thus for instance a precaution that is {80\%} effective will add {0.7} nines of safety, a precaution that is {99.8\%} effective will add {2.7} nines of safety, and so forth. The mRNA COVID vaccines by Pfizer and Moderna have somewhere between {88\% - 96\%} effectiveness against symptomatic COVID illness, providing about {0.9-1.4} nines of safety against that risk, and over {95\%} effectiveness against severe illness, thus adding at least {1.3} nines of safety in this regard.

A slight variant of the above rule can be stated using the concept of relative risk:

Proposition 11 (Relative risk and nines of safety) Suppose an activity carries {k} nines of safety against a certain risk, and an action multiplies the chance of failure by some relative risk {R}. Then the action removes {\log_{10} R} nines of safety (if {R > 1}) or adds {-\log_{10} R} nines of safety (if {R<1}) to the original activity.

Proof: The additional action adjusts the probability of failure from {10^{-k}} to {R \times 10^{-k} = 10^{-(k - \log_{10} R)}}. The claim now follows from Definition 1. \Box

Here is a conversion chart between relative risk and change in nines of safety:

Relative risk Change in nines of safety
{0.01} {+2.0}
{0.02} {+1.7}
{0.05} {+1.3}
{0.1} {+1.0}
{0.2} {+0.7}
{0.5} {+0.3}
{1} {0}
{2} {-0.3}
{5} {-0.7}
{10} {-1.0}
{20} {-1.3}
{50} {-1.7}
{100} {-2.0}

Some examples:

  • Smoking increases the fatality rate of lung cancer by a factor of about {20}, thus removing about {1.3} nines of safety from this particular risk; it also increases the fatality rates of several other diseases, though not to quite as dramatic an extent.
  • Seatbelts reduce the fatality rate in car accidents by a factor of about two, adding about {0.3} nines of safety. Airbags achieve a reduction of about {30-50\%}, adding about {0.2-0.3} additional nines of safety.
  • As far as transmission of COVID is concerned, it seems that constant use of face masks reduces transmission by a factor of about five (thus adding about {0.7} nines of safety), and similarly for constant adherence to social distancing; whereas for instance a {30\%} compliance with mask usage reduced transmission by about {10\%} (adding only {0.05} or so nines of safety).

The effect of combining multiple (independent) precautions together is cumulative; one can achieve quite a high level of safety by stacking together several precautions that individually have relatively low levels of effectiveness. Again, see the “Swiss cheese model” referred to in Remark 5. For instance, if face masks add {0.7} nines of safety against contracting COVID, social distancing adds another {0.7} nines, and the vaccine provides another {1.0} nines of safety, implementing all three mitigation methods would (assuming independence) add a net of {2.4} nines of safety against contracting COVID.
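In terms of raw probabilities, stacking independent precautions simply multiplies the failure probabilities together, which is why the nines add; a minimal check of the example just given:

```python
import math

# failure probabilities of three independent layers
# (0.7, 0.7 and 1.0 nines of safety respectively)
masks, distancing, vaccine = 10**-0.7, 10**-0.7, 10**-1.0

combined_failure = masks * distancing * vaccine
print(-math.log10(combined_failure))   # 2.4 nines of safety in total
```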

In summary, when debating the value of a given risk mitigation measure, the correct question to ask is not quite “Is it certain to work?” or “Can it fail?”, but rather “How many extra nines of safety does it add?”.

As one final comparison between nines of safety and other standard risk measures, we give the following proposition regarding large deviations from the mean.

Proposition 12 Let {X} be a normally distributed random variable of standard deviation {\sigma}, and let {\lambda > 0}. Then the “one-sided risk” of {X} exceeding its mean {{\bf E} X} by at least {\lambda \sigma} (i.e., {X \geq {\bf E} X + \lambda \sigma}) carries

\displaystyle  -\log_{10} \frac{1 - \mathrm{erf}(\lambda/\sqrt{2})}{2}

nines of safety, and the “two-sided risk” of {X} deviating (in either direction) from its mean by at least {\lambda \sigma} (i.e., {|X-{\bf E} X| \geq \lambda \sigma}) carries

\displaystyle  -\log_{10} (1 - \mathrm{erf}(\lambda/\sqrt{2}))

nines of safety, where {\mathrm{erf}} is the error function.

Proof: This is a routine calculation using the cumulative distribution function of the normal distribution. \Box
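Proposition 12 translates directly into code via the error function in the Python standard library; this minimal sketch reproduces the table below:

```python
import math

def one_sided_nines(lam):
    return -math.log10((1 - math.erf(lam / math.sqrt(2))) / 2)

def two_sided_nines(lam):
    return -math.log10(1 - math.erf(lam / math.sqrt(2)))

for lam in range(7):   # 0 to 6 standard deviations
    print(lam, round(one_sided_nines(lam), 1), round(two_sided_nines(lam), 1))
```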

Here is a short table illustrating this proposition:

Number {\lambda} of deviations from the mean One-sided nines of safety Two-sided nines of safety
{0} {0.3} {0.0}
{1} {0.8} {0.5}
{2} {1.6} {1.3}
{3} {2.9} {2.6}
{4} {4.5} {4.2}
{5} {6.5} {6.2}
{6} {9.0} {8.7}

Thus, for instance, the risk of a five sigma event (deviating by more than five standard deviations from the mean in either direction) should carry {6.2} nines of safety assuming a normal distribution, and so one would ordinarily feel extremely safe against the possibility of such an event, unless one started doing hundreds of thousands of trials. (However, we caution that this conclusion relies heavily on the assumption that one has a normal distribution!)

See also this older essay I wrote on anonymity on the internet, using bits as a measure of anonymity in much the same way that nines are used here as a measure of safety.

In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms {\| \|_{U^k}} for {k=1,2,\dots}. For finitely supported functions {f: {\bf Z} \rightarrow {\bf C}}, one can define the (non-normalised) Gowers norm {\|f\|_{\tilde U^k({\bf Z})}} by the formula

\displaystyle  \|f\|_{\tilde U^k({\bf Z})}^{2^k} := \sum_{n,h_1,\dots,h_k \in {\bf Z}} \prod_{\omega_1,\dots,\omega_k \in \{0,1\}} {\mathcal C}^{\omega_1+\dots+\omega_k} f(n+\omega_1 h_1 + \dots + \omega_k h_k)

where {{\mathcal C}} denotes complex conjugation, and then for any discrete interval {[N] = \{1,\dots,N\}} and any function {f: [N] \rightarrow {\bf C}} we can define the (normalised) Gowers norm

\displaystyle  \|f\|_{U^k([N])} := \| f 1_{[N]} \|_{\tilde U^k({\bf Z})} / \|1_{[N]} \|_{\tilde U^k({\bf Z})}

where {f 1_{[N]}: {\bf Z} \rightarrow {\bf C}} is the extension of {f} by zero to all of {{\bf Z}}. Thus for instance

\displaystyle  \|f\|_{U^1([N])} = |\mathop{\bf E}_{n \in [N]} f(n)|

(which technically makes {\| \|_{U^1([N])}} a seminorm rather than a norm), and one can calculate

\displaystyle  \|f\|_{U^2([N])} \asymp (N \int_0^1 |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)|^4\ d\alpha)^{1/4} \ \ \ \ \ (1)

where {e(\theta) := e^{2\pi i \theta}}, and we use the averaging notation {\mathop{\bf E}_{n \in A} f(n) = \frac{1}{|A|} \sum_{n \in A} f(n)}.

The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials {P_1,\dots,P_m: {\bf Z}^d \rightarrow {\bf Z}} and functions {f_1,\dots,f_m: [N] \rightarrow {\bf C}}, we define the multilinear form

\displaystyle  \Lambda^{P_1,\dots,P_m}(f_1,\dots,f_m) := \sum_{n \in {\bf Z}^d} \prod_{j=1}^m f_j 1_{[N]}(P_j(n)) / \sum_{n \in {\bf Z}^d} \prod_{j=1}^m 1_{[N]}(P_j(n))

(assuming that the denominator is finite and non-zero). Thus for instance

\displaystyle  \Lambda^{\mathrm{n}}(f) = \mathop{\bf E}_{n \in [N]} f(n)

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}}(f,g) = (\mathop{\bf E}_{n \in [N]} f(n)) (\mathop{\bf E}_{n \in [N]} g(n))

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(f,g,h) \asymp \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [-N,N]} f(n) g(n+r) h(n+2r)

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+\mathrm{r}^2}(f,g,h) \asymp \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [-N^{1/2},N^{1/2}]} f(n) g(n+r) h(n+r^2)

where we view {\mathrm{n}, \mathrm{r}} as formal (indeterminate) variables, and {f,g,h: [N] \rightarrow {\bf C}} are understood to be extended by zero to all of {{\bf Z}}. These forms are used to count patterns in various sets; for instance, the quantity {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(1_A,1_A,1_A)} is closely related to the number of length three arithmetic progressions contained in {A}. Let us informally say that a form {\Lambda^{P_1,\dots,P_m}(f_1,\dots,f_m)} is controlled by the {U^k[N]} norm if the form is small whenever {f_1,\dots,f_m: [N] \rightarrow {\bf C}} are {1}-bounded functions with at least one of the {f_j} small in {U^k[N]} norm. This definition was made more precise by Gowers and Wolf, who then defined the true complexity of a form {\Lambda^{P_1,\dots,P_m}} to be the least {s} such that {\Lambda^{P_1,\dots,P_m}} is controlled by the {U^{s+1}[N]} norm. For instance,
  • {\Lambda^{\mathrm{n}}} and {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}}} have true complexity {0};
  • {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}, \mathrm{n} + \mathrm{2r}}} has true complexity {1};
  • {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}, \mathrm{n} + \mathrm{2r}, \mathrm{n} + \mathrm{3r}}} has true complexity {2};
  • The form {\Lambda^{\mathrm{n}, \mathrm{n}+2}} (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).
Roughly speaking, patterns of complexity {1} or less are amenable to being studied by classical Fourier analytic tools (the Hardy-Littlewood circle method); patterns of higher complexity can be handled (in principle, at least) by the methods of higher order Fourier analysis; and patterns of infinite complexity are out of range of both methods and are generally quite difficult to study. See these recent slides of myself (or this video of the lecture) for some further discussion.
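To make the counting interpretation of these forms concrete, here is a minimal sketch computing the cyclic analogue of {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(1_A,1_A,1_A)} (working in {{\bf Z}/N{\bf Z}} rather than {[N]} to avoid boundary normalisations; the test set is an arbitrary choice of mine):

```python
import numpy as np

N = 101
A = {n * n % N for n in range(N)}   # arbitrary test set: the squares mod N
ind = np.array([float(n in A) for n in range(N)])

# cyclic analogue of Lambda^{n, n+r, n+2r}(1_A, 1_A, 1_A): the density of
# pairs (n, r) for which n, n+r, n+2r all lie in A
count = sum(ind[n] * ind[(n + r) % N] * ind[(n + 2 * r) % N]
            for n in range(N) for r in range(N))
print(count / N**2)   # roughly density^3 when A behaves "randomly"
```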

Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials {P_1,\dots,P_m}; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.

The {U^1} (semi-)norm is so weak that it barely controls any averages at all. For instance the average

\displaystyle  \Lambda^{2\mathrm{n}}(f) = \mathop{\bf E}_{n \in [N], \hbox{ even}} f(n)

is not controlled by the {U^1[N]} semi-norm: it is perfectly possible for a {1}-bounded function {f: [N] \rightarrow {\bf C}} to have vanishing {U^1([N])} norm and yet attain a large value of {\Lambda^{2\mathrm{n}}(f)} (consider for instance the parity function {f(n) := (-1)^n}).
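A quick numerical confirmation with the parity function:

```python
import numpy as np

N = 1000
n = np.arange(1, N + 1)
f = (-1.0) ** n   # the parity function on [N]

U1 = abs(f.mean())                 # U^1 norm: exactly zero here
Lambda_2n = f[n % 2 == 0].mean()   # average over even n: equals 1
print(U1, Lambda_2n)
```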

Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the {U^1} and {U^2} norms, which I will call the {U^{1^+}} (or “profinite {U^1}“) norm:

\displaystyle  \| f\|_{U^{1^+}[N]} := \frac{1}{N} \sup_P |\sum_{n \in P} f(n)| = \sup_P | \mathop{\bf E}_{n \in [N]} f 1_P(n)|

where {P} ranges over all arithmetic progressions in {[N]}. This can easily be seen to be a norm on functions {f: [N] \rightarrow {\bf C}} that controls the {U^1[N]} norm. It is also basically controlled by the {U^2[N]} norm for {1}-bounded functions {f}; indeed, if {P} is an arithmetic progression in {[N]} of some spacing {q \geq 1}, then we can write {P} as the intersection of an interval {I} with a residue class modulo {q}, and from Fourier expansion we have

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \sup_\alpha |\mathop{\bf E}_{n \in [N]} f 1_I(n) e(\alpha n)|.

If we let {\psi} be a standard bump function supported on {[-1,1]} with total mass one, and {\delta>0} is a parameter, then

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_I(n) e(\alpha n)

\displaystyle \ll |\mathop{\bf E}_{n \in [-N,2N]; h, k \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N})

\displaystyle  1_I(n+h+k) f(n+h+k) e(\alpha(n+h+k))|

\displaystyle  \ll |\mathop{\bf E}_{n \in [-N,2N]; h, k \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+k) f(n+h+k) e(\alpha(n+h+k))|

\displaystyle + \delta

(extending {f} by zero outside of {[N]}), as can be seen by using the triangle inequality and the estimate

\displaystyle  \mathop{\bf E}_{h \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+h+k) - \mathop{\bf E}_{h \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+k)

\displaystyle \ll (1 + \mathrm{dist}(n+k, I) / \delta N)^{-2}.

After some Fourier expansion of {\frac{1}{\delta} \psi(\frac{h}{\delta N})} we now have

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \frac{1}{\delta} \sup_{\alpha,\beta} |\mathop{\bf E}_{n \in [N]; h, k \in [-N,N]} e(\beta h + \alpha (n+h+k)) 1_I(n+k) f(n+h+k)| + \delta.

Writing {\beta h + \alpha(n+h+k)} as a linear combination of {n, n+h, n+k} and using the Gowers–Cauchy–Schwarz inequality, we conclude

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \frac{1}{\delta} \|f\|_{U^2([N])} + \delta

hence on optimising in {\delta} we have

\displaystyle  \| f\|_{U^{1^+}[N]} \ll \|f\|_{U^2[N]}^{1/2}.

Forms which are controlled by the {U^{1^+}} norm (but not {U^1}) would then have their true complexity adjusted to {0^+} with this insertion.
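
For small {N}, the {U^{1^+}[N]} norm can be computed by brute force, simply enumerating all arithmetic progressions in {[N]}. The following Python sketch (my own naive enumeration, adequate for small {N}) illustrates that the parity function from before has {U^{1^+}} norm {1/2} despite its vanishing {U^1} norm, while a random sign pattern has small {U^{1^+}} norm:

```python
import random

def u1plus(f):
    """Brute force ||f||_{U^{1+}[N]} = (1/N) sup_P |sum_{n in P} f(n)|,
    with P ranging over arithmetic progressions in [N] = {1,...,N}."""
    N = len(f)
    best = 0.0
    for a in range(1, N + 1):             # first element of P
        for q in range(1, N + 1):         # spacing of P
            s = 0.0
            for n in range(a, N + 1, q):
                s += f[n - 1]
                best = max(best, abs(s))  # initial segments are progressions too
    return best / N

N = 200
parity = [(-1) ** n for n in range(1, N + 1)]
signs = [random.choice([-1, 1]) for _ in range(N)]

print(u1plus(parity))  # 0.5: take P to be the even numbers in [N]
print(u1plus(signs))   # small (random walks have small partial sums)
```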

The {U^{1^+}} norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+\mathrm{r}^2}(f,g,h)} had true complexity {0^+} in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function {h}; for the first two functions {f,g} one needs to localize the {U^{1^+}} norm to intervals of length {\sim \sqrt{N}}. But I will ignore this technical point to keep the exposition simple.] The weaker claim that {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}^2}(f,g)} has true complexity {0^+} is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).

The well-known inverse theorem for the {U^2} norm tells us that if a {1}-bounded function {f} has {U^2[N]} norm at least {\eta} for some {0 < \eta < 1}, then there is a Fourier phase {n \mapsto e(\alpha n)} such that

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \gg \eta^2;

this follows easily from (1) and Plancherel’s theorem. Conversely, from the Gowers–Cauchy–Schwarz inequality one has

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \ll \|f\|_{U^2[N]}.

For {U^1[N]} one has a trivial inverse theorem; by definition, the {U^1[N]} norm of {f} is at least {\eta} if and only if

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n)| \geq \eta.

Thus the frequency {\alpha} appearing in the {U^2} inverse theorem can be taken to be zero when working instead with the {U^1} norm.

For {U^{1^+}} one has the intermediate situation in which the frequency {\alpha} is not taken to be zero, but is instead major arc. Indeed, suppose that {f} is {1}-bounded with {\|f\|_{U^{1^+}[N]} \geq \eta}, thus

\displaystyle  |\mathop{\bf E}_{n \in [N]} 1_P(n) f(n)| \geq \eta

for some progression {P}. This forces the spacing {q} of this progression to be {\ll 1/\eta}. We write the above inequality as

\displaystyle  |\mathop{\bf E}_{n \in [N]} 1_{n=b\ (q)} 1_I(n) f(n)| \geq \eta

for some residue class {b\ (q)} and some interval {I}. By Fourier expansion and the triangle inequality we then have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) 1_I(n) f(n)| \geq \eta

for some integer {a}. Convolving {1_I} by {\psi_\delta: n \mapsto \frac{1}{N\delta} \psi(\frac{n}{N\delta})} for {\delta} a small multiple of {\eta} and {\psi} a Schwartz function of unit mass with Fourier transform supported on {[-1,1]}, we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) (1_I * \psi_\delta)(n) f(n)| \gg \eta.

The Fourier transform {\xi \mapsto \sum_n 1_I * \psi_\delta(n) e(- \xi n)} of {1_I * \psi_\delta} is bounded by {O(N)} and supported on {[-\frac{1}{\delta N},\frac{1}{\delta N}]}, thus by Fourier expansion and the triangle inequality we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) e(-\xi n) f(n)| \gg \eta^2

for some {\xi \in [-\frac{1}{\delta N},\frac{1}{\delta N}]}, so in particular {\xi = O(\frac{1}{\eta N})}. Thus we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \gg \eta^2 \ \ \ \ \ (2)

for some {\alpha} of the major arc form {\alpha = \frac{a}{q} + O(\frac{1}{\eta N})} with {1 \leq q \leq 1/\eta}. Conversely, for {\alpha} of this form, some routine summation by parts gives the bound

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \ll \frac{q}{\eta} \|f\|_{U^{1^+}[N]} \ll \frac{1}{\eta^2} \|f\|_{U^{1^+}[N]}

so if (2) holds for a {1}-bounded {f} then one must have {\|f\|_{U^{1^+}[N]} \gg \eta^4}.

Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes {{\mathcal F}} of functions (where each class of functions {{\mathcal F}} induces a dual norm {\| f \|_{{\mathcal F}^*} := \sup_{\phi \in {\mathcal F}} \mathop{\bf E}_{n \in[N]} f(n) \overline{\phi(n)}}):

Here I have included the three classes of functions that one can choose from for the {U^3} inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.

The Gowers norms have counterparts for measure-preserving systems {(X,T,\mu)}, known as Host-Kra seminorms. The {U^1(X)} norm can be defined for {f \in L^\infty(X)} as

\displaystyle  \|f\|_{U^1(X)} := \lim_{N \rightarrow \infty} \int_X |\mathop{\bf E}_{n \in [N]} T^n f|\ d\mu

and the {U^2} norm can be defined as

\displaystyle  \|f\|_{U^2(X)}^4 := \lim_{N \rightarrow \infty} \mathop{\bf E}_{n \in [N]} \| T^n f \overline{f} \|_{U^1(X)}^2.

The {U^1(X)} seminorm is orthogonal to the invariant factor {Z^0(X)} (generated by the (almost everywhere) invariant measurable subsets of {X}) in the sense that a function {f \in L^\infty(X)} has vanishing {U^1(X)} seminorm if and only if it is orthogonal to all {Z^0(X)}-measurable (bounded) functions. Similarly, the {U^2(X)} norm is orthogonal to the Kronecker factor {Z^1(X)}, generated by the eigenfunctions of {X} (that is to say, those {f} obeying an identity {Tf = \lambda f} for some {T}-invariant {\lambda}); for ergodic systems, it is the largest factor isomorphic to rotation on a compact abelian group. In analogy to the Gowers {U^{1^+}[N]} norm, one can then define the Host-Kra {U^{1^+}(X)} seminorm by

\displaystyle  \|f\|_{U^{1^+}(X)} := \sup_{q \geq 1} \frac{1}{q} \lim_{N \rightarrow \infty} \int_X |\mathop{\bf E}_{n \in [N]} T^{qn} f|\ d\mu;

it is orthogonal to the profinite factor {Z^{0^+}(X)}, generated by the periodic sets of {X} (or equivalently, by those eigenfunctions whose eigenvalue is a root of unity); for ergodic systems, it is the largest factor isomorphic to rotation on a profinite abelian group.

I’m collecting in this blog post a number of simple group-theoretic lemmas, all of the following flavour: if {H} is a subgroup of some product {G_1 \times \dots \times G_k} of groups, then one of three things has to happen:

  • ({H} too small) {H} is contained in some proper subgroup {G'_1 \times \dots \times G'_k} of {G_1 \times \dots \times G_k}, or the elements of {H} are constrained to some sort of equation that the full group {G_1 \times \dots \times G_k} does not satisfy.
  • ({H} too large) {H} contains some non-trivial normal subgroup {N_1 \times \dots \times N_k} of {G_1 \times \dots \times G_k}, and as such actually arises by pullback from some subgroup of the quotient group {G_1/N_1 \times \dots \times G_k/N_k}.
  • (Structure) There is some useful structural relationship between {H} and the groups {G_1,\dots,G_k}.
These sorts of lemmas show up often in ergodic theory, when the equidistribution of some orbit is governed by some unspecified subgroup {H} of a product group {G_1 \times \dots \times G_k}, and one needs to know further information about this subgroup in order to take the analysis further. In some cases only two of the above three options are relevant. In the cases where {H} is too “small” or too “large” one can reduce the groups {G_1,\dots,G_k} to something smaller (either a subgroup or a quotient) and in applications one can often proceed in this case by some induction on the “size” of the groups {G_1,\dots,G_k} (for instance, if these groups are Lie groups, one can often perform an induction on dimension), so it is often the structured case which is the most interesting case to deal with.

It is perhaps easiest to explain the flavour of these lemmas with some simple examples, starting with the {k=1} case where we are just considering subgroups {H} of a single group {G}.

Lemma 1 Let {H} be a subgroup of a group {G}. Then exactly one of the following hold:
  • (i) ({H} too small) There exists a non-trivial group homomorphism {\eta: G \rightarrow K} into a group {K = (K,\cdot)} such that {\eta(h)=1} for all {h \in H}.
  • (ii) ({H} normally generates {G}) {G} is generated as a group by the conjugates {gHg^{-1}} of {H}.

Proof: Let {G'} be the group normally generated by {H}, that is to say the group generated by the conjugates {gHg^{-1}} of {H}. This is a normal subgroup of {G} containing {H} (indeed it is the smallest such normal subgroup). If {G'} is all of {G} we are in option (ii); otherwise we can take {K} to be the quotient group {K := G/G'} and {\eta} to be the quotient map. Finally, if (i) holds, then all of the conjugates {gHg^{-1}} of {H} lie in the kernel of {\eta}, and so (ii) cannot hold. \Box

Here is a “dual” to the above lemma:

Lemma 2 Let {H} be a subgroup of a group {G}. Then exactly one of the following hold:
  • (i) ({H} too large) {H} is the pullback {H = \pi^{-1}(H')} of some subgroup {H'} of {G/N} for some non-trivial normal subgroup {N} of {G}, where {\pi: G \rightarrow G/N} is the quotient map.
  • (ii) ({H} is core-free) {H} does not contain any non-trivial conjugacy class {\{ ghg^{-1}: g \in G \}}.

Proof: Let {N} be the normal core of {H}, that is to say the intersection of all the conjugates {gHg^{-1}} of {H}. This is the largest normal subgroup of {G} that is contained in {H}. If {N} is non-trivial, we can quotient it out and end up with option (i). If instead {N} is trivial, then there is no non-trivial element {h} that lies in the core, hence no non-trivial conjugacy class lies in {H} and we are in option (ii). Finally, if (i) holds, then every conjugacy class of an element of {N} is contained in {N} and hence in {H}, so (ii) cannot hold. \Box
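
To see Lemma 2 in action on a small example, here is a short Python sketch of my own (permutations of {\{0,1,2\}} represented as tuples) that computes normal cores in {S_3} by brute force:

```python
from itertools import permutations

def compose(p, q):                # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

G = list(permutations(range(3)))  # S_3

def core(H):
    """Normal core of H: the intersection of all conjugates g H g^{-1}."""
    C = set(H)
    for g in G:
        C &= {compose(compose(g, h), inverse(g)) for h in H}
    return C

e = (0, 1, 2)
H1 = {e, (1, 0, 2)}               # the order-2 subgroup generated by (01)
H2 = {e, (1, 2, 0), (2, 0, 1)}    # A_3

print(core(H1))  # {(0,1,2)}: trivial core, so H1 is core-free (option (ii))
print(core(H2))  # all of A_3: non-trivial core, so H2 is a pullback (option (i))
```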

For subgroups of nilpotent groups, we have a nice dichotomy that detects properness of a subgroup through abelian representations:

Lemma 3 Let {H} be a subgroup of a nilpotent group {G}. Then exactly one of the following hold:
  • (i) ({H} too small) There exists a non-trivial group homomorphism {\eta: G \rightarrow K} into an abelian group {K = (K,+)} such that {\eta(h)=0} for all {h \in H}.
  • (ii) {H=G}.

Informally: if {h} is a variable ranging in a subgroup {H} of a nilpotent group {G}, then either {h} is unconstrained (in the sense that it really ranges in all of {G}), or it obeys some abelian constraint {\eta(h)=0}.

Proof: By definition of nilpotency, the lower central series

\displaystyle  G_2 := [G,G], G_3 := [G,G_2], \dots

eventually becomes trivial.

Since {G_2} is a normal subgroup of {G}, {HG_2} is also a subgroup of {G}. Suppose first that {HG_2} is a proper subgroup of {G}; then the quotient map {\eta \colon G \rightarrow G/HG_2} is a non-trivial homomorphism to an abelian group {G/HG_2} that annihilates {H}, and we are in option (i). Thus we may assume that {HG_2 = G}, and thus

\displaystyle  G_2 = [G,G] = [G, HG_2].

Note that modulo the normal group {G_3}, {G_2} commutes with {G}, hence

\displaystyle  [G, HG_2] \subset [G,H] G_3 \subset H G_3

and thus

\displaystyle  G = H G_2 \subset H H G_3 = H G_3.

We conclude that {HG_3 = G}. One can continue this argument by induction to show that {H G_i = G} for every {i}; taking {i} large enough we end up in option (ii). Finally, it is clear that (i) and (ii) cannot both hold. \Box

Remark 4 When the group {G} is locally compact and {H} is closed, one can take the homomorphism {\eta} in Lemma 3 to be continuous, and by using Pontryagin duality one can also take the target group {K} to be the unit circle {{\bf R}/{\bf Z}}. Thus {\eta} is now a character of {G}. Similar considerations hold for some of the later lemmas in this post. Discrete versions of the above lemma, in which the group {H} is replaced by some orbit of a polynomial map on a nilmanifold, were obtained by Leibman and are important in the equidistribution theory of nilmanifolds; see this paper of Ben Green and myself for further discussion.

Here is an analogue of Lemma 3 for special linear groups, due to Serre (IV-23):

Lemma 5 Let {p \geq 5} be a prime, and let {H} be a closed subgroup of {SL_2({\bf Z}_p)}, where {{\bf Z}_p} is the ring of {p}-adic integers. Then exactly one of the following hold:
  • (i) ({H} too small) There exists a proper subgroup {H'} of {SL_2({\mathbf F}_p)} such that {h \hbox{ mod } p \in H'} for all {h \in H}.
  • (ii) {H=SL_2({\bf Z}_p)}.

Proof: It is a standard fact that the reduction of {SL_2({\bf Z}_p)} mod {p} is {SL_2({\mathbf F}_p)}, hence (i) and (ii) cannot both hold.

Suppose that (i) fails, then for every {g \in SL_2({\bf Z}_p)} there exists {h \in H} such that {h = g \hbox{ mod } p}, which we write as

\displaystyle  h = g + O(p).

We now claim inductively that for any {j \geq 0} and {g \in SL_2({\bf Z}_p)}, there exists {h \in H} with {h = g + O(p^{j+1})}; taking limits as {j \rightarrow \infty} using the closed nature of {H} will then place us in option (ii).

The case {j=0} is already handled, so now suppose {j=1}. If {g \in SL_2({\bf Z}_p)}, we see from the {j=0} case that we can write {g = hg'} where {h \in H} and {g' = 1+O(p)}. Thus to establish the {j=1} claim it suffices to do so under the additional hypothesis that {g = 1+O(p)}.

First suppose that {g = 1 + pX + O(p^2)} for some {X \in M_2({\bf Z}_p)} with {X^2=0 \hbox{ mod } p}. By the {j=0} case, we can find {h \in H} of the form {h = 1 + X + pY + O(p^2)} for some {Y \in M_2({\bf Z}_p)}. Raising to the {p^{th}} power and using {X^2=0} and {p \geq 5 > 3}, we note that

\displaystyle h^p = 1 + \binom{p}{1} X + \binom{p}{1} pY + \binom{p}{2} X pY + \binom{p}{2} pY X + \binom{p}{3} X pY X + O(p^2)

\displaystyle  = 1 + pX + O(p^2),

giving the claim in this case.

Any {2 \times 2} matrix of trace zero with coefficients in {{\mathbf F}_p} is a linear combination of {\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}}, {\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}}, {\begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}} and is thus a sum of matrices that square to zero. Hence, if {g \in SL_2({\bf Z}_p)} is of the form {g = 1 + O(p)}, then {g = 1 + pY + O(p^2)} for some matrix {Y} of trace zero, and thus one can write {g} (up to {O(p^2)} errors) as the finite product of matrices of the form {1 + pY + O(p^2)} with {Y^2=0}. By the previous arguments, such a matrix {1+pY + O(p^2)} lies in {H} up to {O(p^2)} errors, and hence {g} does also. This completes the proof of the {j=1} case.

Now suppose {j \geq 2} and the claim has already been proven for {j-1}. Arguing as before, it suffices to close the induction under the additional hypothesis that {g = 1 + O(p^j)}, thus we may write {g = 1 + p^j X + O(p^{j+1})}. By induction hypothesis, we may find {h \in H} with {h = 1 + p^{j-1} X + O(p^j)}. But then {h^p = 1 + p^j X + O(p^{j+1})}, and we are done. \Box
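
The key computation in the {j=1} step is easy to test numerically. Here is a small Python sketch of my own (exact integer arithmetic mod {p^2}, with an arbitrarily chosen {Y}) verifying that {h^p = 1 + pX + O(p^2)} when {h = 1 + X + pY} and {X^2 = 0}:

```python
p = 5
M = p * p                             # we work modulo p^2

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % M for j in range(2)]
            for i in range(2)]

def power(A, e):
    R = [[1, 0], [0, 1]]
    for _ in range(e):
        R = mul(R, A)
    return R

I = [[1, 0], [0, 1]]
X = [[0, 1], [0, 0]]                  # satisfies X^2 = 0 exactly
Y = [[3, 1], [4, 2]]                  # an arbitrary integer matrix

h = [[(I[i][j] + X[i][j] + p * Y[i][j]) % M for j in range(2)] for i in range(2)]

print(power(h, p))                    # h^p mod p^2 ...
print([[(I[i][j] + p * X[i][j]) % M for j in range(2)] for i in range(2)])
                                      # ... agrees with 1 + pX mod p^2
```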

We note a generalisation of Lemma 3 that involves two groups {G_1,G_2} rather than just one:

Lemma 6 Let {H} be a subgroup of a product {G_1 \times G_2} of two nilpotent groups {G_1, G_2}. Then exactly one of the following hold:
  • (i) ({H} too small) There exist group homomorphisms {\eta_1: G'_1 \rightarrow K}, {\eta_2: G_2 \rightarrow K} into an abelian group {K = (K,+)}, with {\eta_2} non-trivial, such that {\eta_1(h_1) + \eta_2(h_2)=0} for all {(h_1,h_2) \in H}, where {G'_1 := \{ h_1: (h_1,h_2) \in H \hbox{ for some } h_2 \in G_2 \}} is the projection of {H} to {G_1}.
  • (ii) {H = G'_1 \times G_2} for some subgroup {G'_1} of {G_1}.

Proof: Consider the group {\{ h_2 \in G_2: (1,h_2) \in H \}}. This is a subgroup of {G_2}. If it is all of {G_2}, then {H} must be a Cartesian product {H = G'_1 \times G_2} and option (ii) holds. So suppose that this group is a proper subgroup of {G_2}. Applying Lemma 3, we obtain a non-trivial group homomorphism {\eta_2: G_2 \rightarrow K} into an abelian group {K = (K,+)} such that {\eta_2(h_2)=0} whenever {(1,h_2) \in H}. For any {h_1} in the projection {G'_1} of {H} to {G_1}, there is thus a unique quantity {\eta_1(h_1) \in K} such that {\eta_1(h_1) + \eta_2(h_2) = 0} whenever {(h_1,h_2) \in H}. One easily checks that {\eta_1} is a homomorphism, so we are in option (i).

Finally, it is clear that (i) and (ii) cannot both hold, since (i) places a non-trivial constraint on the second component {h_2} of an element {(h_1,h_2) \in H} of {H} for any fixed choice of {h_1}. \Box

We also note a similar variant of Lemma 5, which is Lemme 10 of this paper of Serre:

Lemma 7 Let {p \geq 5} be a prime, and let {H} be a closed subgroup of {SL_2({\bf Z}_p) \times SL_2({\bf Z}_p)}. Then exactly one of the following hold:
  • (i) ({H} too small) There exists a proper subgroup {H'} of {SL_2({\mathbf F}_p) \times SL_2({\mathbf F}_p)} such that {h \hbox{ mod } p \in H'} for all {h \in H}.
  • (ii) {H=SL_2({\bf Z}_p) \times SL_2({\bf Z}_p)}.

Proof: As in the proof of Lemma 5, (i) and (ii) cannot both hold. Suppose that (i) does not hold, then for any {g \in SL_2({\bf Z}_p)} there exists {h_1 \in H} such that {h_1 = (g+O(p), 1 + O(p))}. Similarly, there exists {h_0 \in H} with {h_0 = (1+O(p), 1+O(p))}. Taking commutators of {h_1} and {h_0}, we can find {h_2 \in H} with {h_2 = (g+O(p), 1+O(p^2))}. Continuing to take commutators with {h_0} and extracting a limit (using compactness and the closed nature of {H}), we can find {h_\infty \in H} with {h_\infty = (g+O(p),1)}. Thus, the closed subgroup {\{ g \in SL_2({\bf Z}_p): (g,1) \in H \}} of {SL_2({\bf Z}_p)} does not obey conclusion (i) of Lemma 5, and must therefore obey conclusion (ii); that is to say, {H} contains {SL_2({\bf Z}_p) \times \{1\}}. Similarly {H} contains {\{1\} \times SL_2({\bf Z}_p)}; multiplying, we end up in conclusion (ii). \Box

The most famous result of this type is of course the Goursat lemma, which we phrase here in a somewhat idiosyncratic manner to conform to the pattern of the other lemmas in this post:

Lemma 8 (Goursat lemma) Let {H} be a subgroup of a product {G_1 \times G_2} of two groups {G_1, G_2}. Then one of the following hold:
  • (i) ({H} too small) {H} is contained in {G'_1 \times G'_2} for some subgroups {G'_1}, {G'_2} of {G_1, G_2} respectively, with either {G'_1 \subsetneq G_1} or {G'_2 \subsetneq G_2} (or both).
  • (ii) ({H} too large) There exist normal subgroups {N_1, N_2} of {G_1, G_2} respectively, not both trivial, such that {H = \pi^{-1}(H')} arises from a subgroup {H'} of {G_1/N_1 \times G_2/N_2}, where {\pi: G_1 \times G_2 \rightarrow G_1/N_1 \times G_2/N_2} is the quotient map.
  • (iii) (Isomorphism) There is a group isomorphism {\phi: G_1 \rightarrow G_2} such that {H = \{ (g_1, \phi(g_1)): g_1 \in G_1\}} is the graph of {\phi}. In particular, {G_1} and {G_2} are isomorphic.

Here we almost have a trichotomy, because option (iii) is incompatible with both option (i) and option (ii). However, it is possible for options (i) and (ii) to simultaneously hold.

Proof: If either of the projections {\pi_1: H \rightarrow G_1}, {\pi_2: H \rightarrow G_2} from {H} to the factor groups {G_1,G_2} (thus {\pi_1(h_1,h_2)=h_1} and {\pi_2(h_1,h_2)=h_2}) fails to be surjective, then we are in option (i). Thus we may assume that these maps are surjective.

Next, if either of the maps {\pi_1: H \rightarrow G_1}, {\pi_2: H \rightarrow G_2} fails to be injective, then at least one of the kernels {N_1 \times \{1\} := \mathrm{ker} \pi_2}, {\{1\} \times N_2 := \mathrm{ker} \pi_1} is non-trivial. We can then descend down to the quotient {G_1/N_1 \times G_2/N_2} and end up in option (ii).

The only remaining case is when the group homomorphisms {\pi_1, \pi_2} are both bijections, hence are group isomorphisms. If we set {\phi := \pi_2 \circ \pi_1^{-1}} we end up in case (iii). \Box

We can combine the Goursat lemma with Lemma 3 to obtain a variant:

Corollary 9 (Nilpotent Goursat lemma) Let {H} be a subgroup of a product {G_1 \times G_2} of two nilpotent groups {G_1, G_2}. Then one of the following hold:
  • (i) ({H} too small) There exists {i=1,2} and a non-trivial group homomorphism {\eta_i: G_i \rightarrow K} into an abelian group {K = (K,+)} such that {\eta_i(h_i)=0} for all {(h_1,h_2) \in H}.
  • (ii) ({H} too large) There exist normal subgroups {N_1, N_2} of {G_1, G_2} respectively, not both trivial, such that {H = \pi^{-1}(H')} arises from a subgroup {H'} of {G_1/N_1 \times G_2/N_2}.
  • (iii) (Isomorphism) There is a group isomorphism {\phi: G_1 \rightarrow G_2} such that {H = \{ (g_1, \phi(g_1)): g_1 \in G_1\}} is the graph of {\phi}. In particular, {G_1} and {G_2} are isomorphic.

Proof: If Lemma 8(i) holds, then by applying Lemma 3 we arrive at our current option (i). The other options are unchanged from Lemma 8, giving the claim. \Box

Now we present a lemma involving three groups {G_1,G_2,G_3} that is known in ergodic theory contexts as the “Furstenberg-Weiss argument”, as an argument of this type arose in this paper of Furstenberg and Weiss, though it may also appear implicitly in other contexts. It has the remarkable feature of being able to enforce the abelian nature of one of the groups once the other options of the lemma are excluded.

Lemma 10 (Furstenberg-Weiss lemma) Let {H} be a subgroup of a product {G_1 \times G_2 \times G_3} of three groups {G_1, G_2, G_3}. Then one of the following hold:
  • (i) ({H} too small) There is some proper subgroup {G'_3} of {G_3} and some {i=1,2} such that {h_3 \in G'_3} whenever {(h_1,h_2,h_3) \in H} and {h_i = 1}.
  • (ii) ({H} too large) There exists a non-trivial normal subgroup {N_3} of {G_3} with {G_3/N_3} abelian, such that {H = \pi^{-1}(H')} arises from a subgroup {H'} of {G_1 \times G_2 \times G_3/N_3}, where {\pi: G_1 \times G_2 \times G_3 \rightarrow G_1 \times G_2 \times G_3/N_3} is the quotient map.
  • (iii) {G_3} is abelian.

Proof: If the group {\{ h_3 \in G_3: (1,h_2,h_3) \in H \hbox{ for some } h_2 \in G_2 \}} is a proper subgroup of {G_3}, then we are in option (i) (with {i=1}), so we may assume that

\displaystyle \{ h_3 \in G_3: (1,h_2,h_3) \in H \hbox{ for some } h_2 \in G_2 \} = G_3.

Similarly we may assume that

\displaystyle \{ h_3 \in G_3: (h_1,1,h_3) \in H \hbox{ for some } h_1 \in G_1 \} = G_3.

Now let {g_3,g'_3} be any two elements of {G_3}. By the above assumptions, we can find {h_1 \in G_1, h_2 \in G_2} such that

\displaystyle (1, h_2, g_3) \in H

and

\displaystyle (h_1,1, g'_3) \in H.

Taking commutators to eliminate the {h_1,h_2} terms, we conclude that

\displaystyle  (1, 1, [g_3,g'_3]) \in H.

Thus the group {\{ h_3 \in G_3: (1,1,h_3) \in H \}} contains every commutator {[g_3,g'_3]}, and thus contains the entire group {[G_3,G_3]} generated by these commutators. If {G_3} fails to be abelian, then {[G_3,G_3]} is a non-trivial normal subgroup of {G_3}, and {H} now arises from {G_1 \times G_2 \times G_3/[G_3,G_3]} in the obvious fashion, placing one in option (ii). Hence the only remaining case is when {G_3} is abelian, giving us option (iii). \Box
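
The commutator step here is a purely mechanical identity, which one can confirm exhaustively in a small product group. The following Python sketch of my own checks {[(1,h_2,g_3), (h_1,1,g'_3)] = (1,1,[g_3,g'_3])} over all choices from {S_3 \times S_3 \times S_3}:

```python
from itertools import permutations

def compose(p, q):                     # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def comm(a, b):                        # [a,b] = a b a^{-1} b^{-1}
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

def pmul(a, b):                        # componentwise product in G1 x G2 x G3
    return tuple(compose(x, y) for x, y in zip(a, b))

def pinv(a):
    return tuple(inverse(x) for x in a)

def pcomm(a, b):
    return pmul(pmul(a, b), pmul(pinv(a), pinv(b)))

S3 = list(permutations(range(3)))
e = (0, 1, 2)

print(all(pcomm((e, h2, g3), (h1, e, g3p)) == (e, e, comm(g3, g3p))
          for h1 in S3 for h2 in S3 for g3 in S3 for g3p in S3))   # True
```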

As before, we can combine this with previous lemmas to obtain a variant in the nilpotent case:

Lemma 11 (Nilpotent Furstenberg-Weiss lemma) Let {H} be a subgroup of a product {G_1 \times G_2 \times G_3} of three nilpotent groups {G_1, G_2, G_3}. Then one of the following hold:
  • (i) ({H} too small) There exists {i=1,2} and group homomorphisms {\eta_i: G'_i \rightarrow K}, {\eta_3: G_3 \rightarrow K} for some abelian group {K = (K,+)}, with {\eta_3} non-trivial, such that {\eta_i(h_i) + \eta_3(h_3) = 0} whenever {(h_1,h_2,h_3) \in H}, where {G'_i} is the projection of {H} to {G_i}.
  • (ii) ({H} too large) There exists a non-trivial normal subgroup {N_3} of {G_3}, such that {H = \pi^{-1}(H')} arises from a subgroup {H'} of {G_1 \times G_2 \times G_3/N_3}.
  • (iii) {G_3} is abelian.

Informally, this lemma asserts that if {(h_1,h_2,h_3)} is a variable ranging in some subgroup {H} of {G_1 \times G_2 \times G_3}, then either (i) there is a non-trivial abelian equation that constrains {h_3} in terms of either {h_1} or {h_2}; (ii) {h_3} is not fully determined by {h_1} and {h_2}; or (iii) {G_3} is abelian.

Proof: Applying Lemma 10, we are already done if conclusions (ii) or (iii) of that lemma hold, so suppose instead that conclusion (i) holds for say {i=1}. Then the group {\{ (h_1,h_3) \in G_1 \times G_3: (h_1,h_2,h_3) \in H \hbox{ for some } h_2 \in G_2 \}} is not of the form {G'_1 \times G_3}, since it only contains those {(1,h_3)} with {h_3 \in G'_3}. Applying Lemma 6, we obtain group homomorphisms {\eta_1: G'_1 \rightarrow K}, {\eta_3: G_3 \rightarrow K} into an abelian group {K= (K,+)}, with {\eta_3} non-trivial, such that {\eta_1(h_1) + \eta_3(h_3) = 0} whenever {(h_1,h_2,h_3) \in H}, placing us in option (i). \Box

The Furstenberg-Weiss argument is often used (though not precisely in this form) to establish that certain key structure groups arising in ergodic theory are abelian; see for instance Proposition 6.3(1) of this paper of Host and Kra for an example.

One can get more structural control on {H} in the Furstenberg-Weiss lemma in option (iii) if one also broadens options (i) and (ii):

Lemma 12 (Variant of Furstenberg-Weiss lemma) Let {H} be a subgroup of a product {G_1 \times G_2 \times G_3} of three groups {G_1, G_2, G_3}. Then one of the following hold:
  • (i) ({H} too small) There is some proper subgroup {G'_{ij}} of {G_i \times G_j} for some {1 \leq i < j \leq 3} such that {(h_i,h_j) \in G'_{ij}} whenever {(h_1,h_2,h_3) \in H}. (In other words, the projection of {H} to {G_i \times G_j} is not surjective.)
  • (ii) ({H} too large) There exist normal subgroups {N_1, N_2, N_3} of {G_1, G_2, G_3} respectively, not all trivial, such that {H = \pi^{-1}(H')} arises from a subgroup {H'} of {G_1/N_1 \times G_2/N_2 \times G_3/N_3}, where {\pi: G_1 \times G_2 \times G_3 \rightarrow G_1/N_1 \times G_2/N_2 \times G_3/N_3} is the quotient map.
  • (iii) {G_1,G_2,G_3} are abelian and isomorphic. Furthermore, there exist isomorphisms {\phi_1: G_1 \rightarrow K}, {\phi_2: G_2 \rightarrow K}, {\phi_3: G_3 \rightarrow K} to an abelian group {K = (K,+)} such that

    \displaystyle  H = \{ (g_1,g_2,g_3) \in G_1 \times G_2 \times G_3: \phi_1(g_1) + \phi_2(g_2) + \phi_3(g_3) = 0 \}.

The ability to encode an abelian additive relation in terms of group-theoretic properties is vaguely reminiscent of the group configuration theorem.

Proof: We apply Lemma 10. Option (i) of that lemma implies option (i) of the current lemma, and similarly for option (ii), so we may assume without loss of generality that {G_3} is abelian. By permuting we may also assume that {G_1,G_2} are abelian, and will use additive notation for these groups.

We may assume that the projections of {H} to {G_1 \times G_2} and {G_3} are surjective, else we are in option (i). The group {\{ g_3 \in G_3: (1,1,g_3) \in H\}} is then a normal subgroup of {G_3}; we may assume it is trivial, otherwise we can quotient it out and be in option (ii). Thus {H} can be expressed as a graph {\{ (h_1,h_2,\phi(h_1,h_2)): h_1 \in G_1, h_2 \in G_2\}} for some map {\phi: G_1 \times G_2 \rightarrow G_3}. As {H} is a group, {\phi} must be a homomorphism, and we can write it as {\phi(h_1,h_2) = -\phi_1(h_1) - \phi_2(h_2)} for some homomorphisms {\phi_1: G_1 \rightarrow G_3}, {\phi_2: G_2 \rightarrow G_3}. Thus elements {(h_1,h_2,h_3)} of {H} obey the constraint {\phi_1(h_1) + \phi_2(h_2) + h_3 = 0}.

If {\phi_1} or {\phi_2} fails to be injective, then we can quotient out by their kernels and end up in option (ii). If {\phi_1} fails to be surjective, then the projection of {H} to {G_2 \times G_3} also fails to be surjective (since for {(h_1,h_2,h_3) \in H}, {\phi_2(h_2) + h_3} is now constrained to lie in the range of {\phi_1}) and we are in option (i). Similarly if {\phi_2} fails to be surjective. Thus we may assume that the homomorphisms {\phi_1,\phi_2} are bijective and thus group isomorphisms. Setting {\phi_3} to the identity, we arrive at option (iii). \Box

Combining this lemma with Lemma 3, we obtain a nilpotent version:

Corollary 13 (Variant of nilpotent Furstenberg-Weiss lemma) Let {H} be a subgroup of a product {G_1 \times G_2 \times G_3} of three nilpotent groups {G_1, G_2, G_3}. Then one of the following hold:
  • (i) ({H} too small) There are homomorphisms {\eta_i: G_i \rightarrow K}, {\eta_j: G_j \rightarrow K} to some abelian group {K =(K,+)} for some {1 \leq i < j \leq 3}, with {\eta_i, \eta_j} not both trivial, such that {\eta_i(h_i) + \eta_j(h_j) = 0} whenever {(h_1,h_2,h_3) \in H}.
  • (ii) ({H} too large) There exist normal subgroups {N_1, N_2, N_3} of {G_1, G_2, G_3} respectively, not all trivial, such that {H = \pi^{-1}(H')} arises from a subgroup {H'} of {G_1/N_1 \times G_2/N_2 \times G_3/N_3}, where {\pi: G_1 \times G_2 \times G_3 \rightarrow G_1/N_1 \times G_2/N_2 \times G_3/N_3} is the quotient map.
  • (iii) {G_1,G_2,G_3} are abelian and isomorphic. Furthermore, there exist isomorphisms {\phi_1: G_1 \rightarrow K}, {\phi_2: G_2 \rightarrow K}, {\phi_3: G_3 \rightarrow K} to an abelian group {K = (K,+)} such that

    \displaystyle  H = \{ (g_1,g_2,g_3) \in G_1 \times G_2 \times G_3: \phi_1(g_1) + \phi_2(g_2) + \phi_3(g_3) = 0 \}.

Here is another variant of the Furstenberg-Weiss lemma, attributed to Serre by Ribet (see Lemma 3.3):

Lemma 14 (Serre’s lemma) Let {H} be a subgroup of a finite product {G_1 \times \dots \times G_k} of groups {G_1,\dots,G_k} with {k \geq 2}. Then one of the following hold:
  • (i) ({H} too small) There is some proper subgroup {G'_{ij}} of {G_i \times G_j} for some {1 \leq i < j \leq k} such that {(h_i,h_j) \in G'_{ij}} whenever {(h_1,\dots,h_k) \in H}.
  • (ii) ({H} too large) One has {H = G_1 \times \dots \times G_k}.
  • (iii) One of the {G_i} has a non-trivial abelian quotient {G_i/N_i}.

Proof: The claim is trivial for {k=2} (and we don’t need (iii) in this case), so suppose that {k \geq 3}. We can assume that each {G_i} is a perfect group, {G_i = [G_i,G_i]}, otherwise the abelianisation {G_i/[G_i,G_i]} is a non-trivial abelian quotient and we are in option (iii). Similarly, we may assume that all the projections of {H} to {G_i \times G_j}, {1 \leq i < j \leq k} are surjective, otherwise we are in option (i).

We now claim that for any {1 \leq j < k} and any {g_k \in G_k}, one can find {(h_1,\dots,h_k) \in H} with {h_i=1} for {1 \leq i \leq j} and {h_k = g_k}. For {j=1} this follows from the surjectivity of the projection of {H} to {G_1 \times G_k}. Now suppose inductively that {1 < j < k} and the claim has already been proven for {j-1}. Since {G_k} is perfect, it suffices to establish this claim for {g_k} of the form {g_k = [g'_k, g''_k]} for some {g'_k, g''_k \in G_k}. By induction hypothesis, we can find {(h'_1,\dots,h'_k) \in H} with {h'_i = 1} for {1 \leq i < j} and {h'_k = g'_k}. By surjectivity of the projection of {H} to {G_j \times G_k}, one can find {(h''_1,\dots,h''_k) \in H} with {h''_j = 1} and {h''_k=g''_k}. Taking commutators of these two elements, we obtain the claim.

Setting {j = k-1}, we conclude that {H} contains {1 \times \dots \times 1 \times G_k}. Similarly for permutations. Multiplying these together we see that {H} contains all of {G_1 \times \dots \times G_k}, and we are in option (ii). \Box

In this previous blog post I noted the following easy application of Cauchy-Schwarz:

Lemma 1 (Van der Corput inequality) Let {v,u_1,\dots,u_n} be unit vectors in a Hilbert space {H}. Then

\displaystyle  (\sum_{i=1}^n |\langle v, u_i \rangle_H|)^2 \leq \sum_{1 \leq i,j \leq n} |\langle u_i, u_j \rangle_H|.

Proof: The left-hand side may be written as {|\langle v, \sum_{i=1}^n \epsilon_i u_i \rangle_H|^2} for some unit complex numbers {\epsilon_i}. By Cauchy-Schwarz we have

\displaystyle  |\langle v, \sum_{i=1}^n \epsilon_i u_i \rangle_H|^2 \leq \langle \sum_{i=1}^n \epsilon_i u_i, \sum_{j=1}^n \epsilon_j u_j \rangle_H

and the claim now follows from the triangle inequality. \Box
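
As a quick numerical sanity check of this inequality, one can test it on random unit vectors (a minimal numpy sketch of my own, with complex Gaussian vectors normalised to unit length):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 8, 20

def unit(z):
    return z / np.linalg.norm(z)

v = unit(rng.standard_normal(d) + 1j * rng.standard_normal(d))
u = [unit(rng.standard_normal(d) + 1j * rng.standard_normal(d)) for _ in range(n)]

lhs = sum(abs(np.vdot(v, ui)) for ui in u) ** 2           # (sum_i |<v,u_i>|)^2
rhs = sum(abs(np.vdot(ui, uj)) for ui in u for uj in u)   # sum_{i,j} |<u_i,u_j>|
print(lhs <= rhs, lhs, rhs)
```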

As a corollary, correlation becomes transitive in a statistical sense (even though it is not transitive in an absolute sense):

Corollary 2 (Statistical transitivity of correlation) Let {v,u_1,\dots,u_n} be unit vectors in a Hilbert space {H} such that {|\langle v,u_i \rangle_H| \geq \delta} for all {i=1,\dots,n} and some {0 < \delta \leq 1}. Then we have {|\langle u_i, u_j \rangle_H| \geq \delta^2/2} for at least {\delta^2 n^2/2} of the pairs {(i,j) \in \{1,\dots,n\}^2}.

Proof: From the lemma, we have

\displaystyle  \sum_{1 \leq i,j \leq n} |\langle u_i, u_j \rangle_H| \geq \delta^2 n^2.

The contribution of those {i,j} with {|\langle u_i, u_j \rangle_H| < \delta^2/2} is at most {\delta^2 n^2/2}, and all the remaining summands are at most {1}, giving the claim. \Box

One drawback with this corollary is that it does not tell us which pairs {u_i,u_j} correlate. In particular, if the vector {v} also correlates with a separate collection {w_1,\dots,w_n} of unit vectors, the pairs {(i,j)} for which {u_i,u_j} correlate may have no intersection whatsoever with the pairs in which {w_i,w_j} correlate (except of course on the diagonal {i=j} where they must correlate).

While working on an ongoing research project, I recently found that there is a very simple way to get around the latter problem by exploiting the tensor power trick:

Corollary 3 (Simultaneous statistical transitivity of correlation) Let {v, u^k_i} be unit vectors in a Hilbert space {H} for {i=1,\dots,n} and {k=1,\dots,K} such that {|\langle v, u^k_i \rangle_H| \geq \delta_k} for all {i=1,\dots,n}, {k=1,\dots,K} and some {0 < \delta_k \leq 1}. Then there are at least {(\delta_1 \dots \delta_K)^2 n^2/2} pairs {(i,j) \in \{1,\dots,n\}^2} such that {\prod_{k=1}^K |\langle u^k_i, u^k_j \rangle_H| \geq (\delta_1 \dots \delta_K)^2/2}. In particular (by Cauchy-Schwarz) we have {|\langle u^k_i, u^k_j \rangle_H| \geq (\delta_1 \dots \delta_K)^2/2} for all {k}.

Proof: Apply Corollary 2 to the unit vectors {v^{\otimes K}} and {u^1_i \otimes \dots \otimes u^K_i}, {i=1,\dots,n} in the tensor power Hilbert space {H^{\otimes K}}. \Box
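
In computations one never needs to form the tensor powers explicitly, since {\langle v^{\otimes K}, u^1_i \otimes \dots \otimes u^K_i \rangle_{H^{\otimes K}} = \prod_{k=1}^K \langle v, u^k_i \rangle_H}. Here is a small numpy sketch of my own illustrating the corollary (real vectors for simplicity, with the {u^k_i} built as perturbations of {v} so that the correlation hypothesis holds with explicit {\delta_k}):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, K = 10, 30, 3

def unit(z):
    return z / np.linalg.norm(z)

v = unit(rng.standard_normal(d))
u = [[unit(v + 0.3 * rng.standard_normal(d)) for _ in range(n)] for _ in range(K)]

delta = [min(abs(np.dot(v, u[k][i])) for i in range(n)) for k in range(K)]
D = np.prod(delta)   # delta_1 ... delta_K

# Corollary 3: at least D^2 n^2 / 2 pairs (i,j) should satisfy
# prod_k |<u^k_i, u^k_j>| >= D^2 / 2.
pairs = sum(1 for i in range(n) for j in range(n)
            if np.prod([abs(np.dot(u[k][i], u[k][j])) for k in range(K)])
            >= D * D / 2)
print(pairs, ">=", D * D * n * n / 2)   # the claimed lower bound holds
```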

It is surprisingly difficult to obtain even a qualitative version of the above conclusion (namely, if {v} correlates with all of the {u^k_i}, then there are many pairs {(i,j)} for which {u^k_i} correlates with {u^k_j} for all {k} simultaneously) without some version of the tensor power trick. For instance, even the powerful Szemerédi regularity lemma, when applied to the set of pairs {i,j} for which one has correlation of {u^k_i}, {u^k_j} for a single {k}, does not seem to be sufficient. However, there is a reformulation of the argument using the Schur product theorem as a substitute for (or really, a disguised version of) the tensor power trick. For simplicity of notation let us just work with real Hilbert spaces to illustrate the argument. We start with the identity

\displaystyle  \langle u^k_i, u^k_j \rangle_H = \langle v, u^k_i \rangle_H \langle v, u^k_j \rangle_H + \langle \pi(u^k_i), \pi(u^k_j) \rangle_H

where {\pi} is the orthogonal projection to the complement of {v}. This implies a Gram matrix inequality

\displaystyle  (\langle u^k_i, u^k_j \rangle_H)_{1 \leq i,j \leq n} \succ (\langle v, u^k_i \rangle_H \langle v, u^k_j \rangle_H)_{1 \leq i,j \leq n} \succ 0

for each {k} where {A \succ B} denotes the claim that {A-B} is positive semi-definite. By the Schur product theorem, we conclude that

\displaystyle  (\prod_{k=1}^K \langle u^k_i, u^k_j \rangle_H)_{1 \leq i,j \leq n} \succ (\prod_{k=1}^K \langle v, u^k_i \rangle_H \langle v, u^k_j \rangle_H)_{1 \leq i,j \leq n}

and hence for a suitable choice of signs {\epsilon_1,\dots,\epsilon_n},

\displaystyle  \sum_{1 \leq i, j \leq n} \epsilon_i \epsilon_j \prod_{k=1}^K \langle u^k_i, u^k_j \rangle_H \geq \delta_1^2 \dots \delta_K^2 n^2.

One now argues as in the proof of Corollary 2.

A separate application of tensor powers to amplify correlations was also noted in this previous blog post giving a cheap version of the Kabatjanskii-Levenstein bound, but this seems to not be directly related to this current application.

The (classical) Möbius function {\mu: {\bf N} \rightarrow {\bf Z}} is the unique function that obeys the classical Möbius inversion formula:

Proposition 1 (Classical Möbius inversion) Let {f,g: {\bf N} \rightarrow A} be functions from the natural numbers to an additive group {A}. Then the following two claims are equivalent:
  • (i) {f(n) = \sum_{d|n} g(d)} for all {n \in {\bf N}}.
  • (ii) {g(n) = \sum_{d|n} \mu(n/d) f(d)} for all {n \in {\bf N}}.

There is a generalisation of this formula to (finite) posets, due to Hall, in which one sums over chains {n_0 > \dots > n_k} in the poset:

Proposition 2 (Poset Möbius inversion) Let {{\mathcal N}} be a finite poset, and let {f,g: {\mathcal N} \rightarrow A} be functions from that poset to an additive group {A}. Then the following two claims are equivalent:
  • (i) {f(n) = \sum_{d \leq n} g(d)} for all {n \in {\mathcal N}}, where {d} is understood to range in {{\mathcal N}}.
  • (ii) {g(n) = \sum_{k=0}^\infty (-1)^k \sum_{n = n_0 > n_1 > \dots > n_k} f(n_k)} for all {n \in {\mathcal N}}, where in the inner sum {n_0,\dots,n_k} are understood to range in {{\mathcal N}} with the indicated ordering.
(Note from the finite nature of {{\mathcal N}} that the inner sum in (ii) is vacuous for all but finitely many {k}.)

Comparing Proposition 2 with Proposition 1, it is natural to refer to the function {\mu(d,n) := \sum_{k=0}^\infty (-1)^k \sum_{n = n_0 > n_1 > \dots > n_k = d} 1} as the Möbius function of the poset; the condition (ii) can then be written as

\displaystyle  g(n) = \sum_{d \leq n} \mu(d,n) f(d).

Proof: If (i) holds, then we have

\displaystyle  g(n) = f(n) - \sum_{d<n} g(d) \ \ \ \ \ (1)

for any {n \in {\mathcal N}}. Iterating this we obtain (ii). Conversely, from (ii) and separating out the {k=0} term, and grouping all the other terms based on the value of {d:=n_1}, we obtain (1), and hence (i). \Box

In fact it is not completely necessary that the poset {{\mathcal N}} be finite; an inspection of the proof shows that it suffices that every element {n} of the poset has only finitely many predecessors {\{ d \in {\mathcal N}: d < n \}}.

It is not difficult to see that Proposition 2 includes Proposition 1 as a special case, after verifying the combinatorial fact that the quantity

\displaystyle  \sum_{k=0}^\infty (-1)^k \sum_{d=n_k | n_{k-1} | \dots | n_1 | n_0 = n} 1

is equal to {\mu(n/d)} when {d} divides {n}, and vanishes otherwise.
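
This combinatorial fact is easy to confirm by machine. Here is a short Python sketch of my own, computing the poset Möbius function via the recursion obtained by grouping chains according to their second element, and comparing against a naive implementation of the classical Möbius function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mu_poset(d, n):
    """Mobius function of the divisor poset, from the chain definition:
    group the chains n = n0 > n1 > ... > nk = d by the element n1."""
    if d == n:
        return 1
    return -sum(mu_poset(d, m) for m in range(d, n) if n % m == 0 and m % d == 0)

def mu_classical(n):
    count, m, p = 0, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # n is not squarefree
            count += 1
        p += 1
    if m > 1:
        count += 1
    return -1 if count % 2 else 1

print(all(mu_poset(d, n) == mu_classical(n // d)
          for n in range(1, 60) for d in range(1, n + 1) if n % d == 0))  # True
```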

I recently discovered that Proposition 2 can also lead to a useful variant of the inclusion-exclusion principle. The classical version of this principle can be phrased in terms of indicator functions: if {A_1,\dots,A_\ell} are subsets of some set {X}, then

\displaystyle  \prod_{j=1}^\ell (1-1_{A_j}) = \sum_{k=0}^\ell (-1)^k \sum_{1 \leq j_1 < \dots < j_k \leq \ell} 1_{A_{j_1} \cap \dots \cap A_{j_k}}.

In particular, if there is a finite measure {\nu} on {X} for which {A_1,\dots,A_\ell} are all measurable, we have

\displaystyle  \nu(X \backslash \bigcup_{j=1}^\ell A_j) = \sum_{k=0}^\ell (-1)^k \sum_{1 \leq j_1 < \dots < j_k \leq \ell} \nu( A_{j_1} \cap \dots \cap A_{j_k} ).

One drawback of this formula is that there are exponentially many terms on the right-hand side: {2^\ell} of them, in fact. However, in many cases of interest there are “collisions” between the intersections {A_{j_1} \cap \dots \cap A_{j_k}} (for instance, perhaps many of the pairwise intersections {A_i \cap A_j} agree), in which case there is an opportunity to collect terms and hopefully achieve some cancellation. It turns out that it is possible to use Proposition 2 to do this, in which one only needs to sum over chains in the resulting poset of intersections:

Proposition 3 (Hall-type inclusion-exclusion principle) Let {A_1,\dots,A_\ell} be subsets of some set {X}, and let {{\mathcal N}} be the finite poset formed by intersections of some of the {A_i} (with the convention that {X} is the empty intersection), ordered by set inclusion. Then for any {E \in {\mathcal N}}, one has

\displaystyle  1_E \prod_{F \subsetneq E} (1 - 1_F) = \sum_{k=0}^\ell (-1)^k \sum_{E = E_0 \supsetneq E_1 \supsetneq \dots \supsetneq E_k} 1_{E_k} \ \ \ \ \ (2)

where {F, E_0,\dots,E_k} are understood to range in {{\mathcal N}}. In particular (setting {E} to be the empty intersection) if the {A_j} are all proper subsets of {X} then we have

\displaystyle  \prod_{j=1}^\ell (1-1_{A_j}) = \sum_{k=0}^\ell (-1)^k \sum_{X = E_0 \supsetneq E_1 \supsetneq \dots \supsetneq E_k} 1_{E_k}. \ \ \ \ \ (3)

In particular, if there is a finite measure {\nu} on {X} for which {A_1,\dots,A_\ell} are all measurable, we have

\displaystyle  \nu(X \backslash \bigcup_{j=1}^\ell A_j) = \sum_{k=0}^\ell (-1)^k \sum_{X = E_0 \supsetneq E_1 \supsetneq \dots \supsetneq E_k} \nu(E_k).

Using the Möbius function {\mu} on the poset {{\mathcal N}}, one can write these formulae as

\displaystyle  1_E \prod_{F \subsetneq E} (1 - 1_F) = \sum_{F \subseteq E} \mu(F,E) 1_F,

\displaystyle  \prod_{j=1}^\ell (1-1_{A_j}) = \sum_F \mu(F,X) 1_F

and

\displaystyle  \nu(X \backslash \bigcup_{j=1}^\ell A_j) = \sum_F \mu(F,X) \nu(F).

Proof: It suffices to establish (2) (to derive (3) from (2) observe that all the {F \subsetneq X} are contained in one of the {A_j}, so the effect of {1-1_F} may be absorbed into {1 - 1_{A_j}}). Applying Proposition 2, this is equivalent to the assertion that

\displaystyle  1_E = \sum_{F \subseteq E} 1_F \prod_{G \subsetneq F} (1 - 1_G)

for all {E \in {\mathcal N}}. But this amounts to the assertion that for each {x \in E}, there is precisely one {F \subseteq E} in {{\mathcal N}} with the property that {x \in F} and {x \not \in G} for any {G \subsetneq F} in {{\mathcal N}}, namely one can take {F} to be the intersection of all {G \subseteq E} in {{\mathcal N}} such that {G} contains {x}. \Box

Example 4 If {A_1,A_2,A_3 \subsetneq X} with {A_1 \cap A_2 = A_1 \cap A_3 = A_2 \cap A_3 = A_*}, and {A_1,A_2,A_3,A_*} are all distinct, then we have for any finite measure {\nu} on {X} that makes {A_1,A_2,A_3} measurable that

\displaystyle  \nu(X \backslash (A_1 \cup A_2 \cup A_3)) = \nu(X) - \nu(A_1) - \nu(A_2) - \nu(A_3) - \nu(A_*) + 3 \nu(A_*) \ \ \ \ \ (4)

due to the four chains {X \supsetneq A_1}, {X \supsetneq A_2}, {X \supsetneq A_3}, {X \supsetneq A_*} of length one, and the three chains {X \supsetneq A_1 \supsetneq A_*}, {X \supsetneq A_2 \supsetneq A_*}, {X \supsetneq A_3 \supsetneq A_*} of length two. Note that this expansion just has six terms in it, as opposed to the {2^3=8} given by the usual inclusion-exclusion formula, though of course one can reduce the number of terms by combining the {\nu(A_*)} factors. This may not seem particularly impressive, especially if one views the term {3 \nu(A_*)} as really being three terms instead of one, but if we add a fourth set {A_4 \subsetneq X} with {A_i \cap A_j = A_*} for all {1 \leq i < j \leq 4}, the formula now becomes

\displaystyle  \nu(X \backslash (A_1 \cup A_2 \cup A_3 \cup A_4)) = \nu(X) - \nu(A_1) - \nu(A_2) - \nu(A_3) - \nu(A_4) - \nu(A_*) + 4 \nu(A_*) \ \ \ \ \ (5)

and we begin to see more cancellation as we now have just seven terms (or ten if we count {4 \nu(A_*)} as four terms) instead of {2^4 = 16} terms.
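
One can test these counts on concrete sets. Here is a tiny Python sketch of my own, with {X = \{0,\dots,9\}}, {\nu} counting measure, and three sets sharing a common pairwise intersection {A_*} as in (4):

```python
# Three sets whose pairwise intersections all equal A_* = {0}.
X = set(range(10))
Astar = {0}
A1 = Astar | {1, 2}
A2 = Astar | {3, 4}
A3 = Astar | {5, 6}

lhs = len(X - (A1 | A2 | A3))   # nu(X \ (A1 u A2 u A3)) for counting measure

# Chain formula (4): one length-one chain per poset element below X,
# plus the three length-two chains X > A_i > A_*.
rhs = len(X) - len(A1) - len(A2) - len(A3) - len(Astar) + 3 * len(Astar)
print(lhs, rhs)   # 3 3
```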

Example 5 (Variant of Legendre sieve) If {q_1,\dots,q_\ell > 1} are natural numbers, and {a_1,a_2,\dots} is some sequence of complex numbers with only finitely many terms non-zero, then by applying the above proposition to the sets {A_j := q_j {\bf N}} and with {\nu} equal to counting measure weighted by the {a_n} we obtain a variant of the Legendre sieve

\displaystyle  \sum_{n: (n,q_1 \dots q_\ell) = 1} a_n = \sum_{k=0}^\ell (-1)^k \sum_{1 |' d_1 |' \dots |' d_k} \sum_{n: d_k |n} a_n

where {d_1,\dots,d_k} range over the set {{\mathcal N}} formed by taking least common multiples of the {q_j} (with the understanding that the empty least common multiple is {1}), and {d |' n} denotes the assertion that {d} divides {n} but is strictly less than {n}. I am curious to know whether this version of the Legendre sieve already appears in the literature (and similarly for the other applications of Proposition 2 given here).

If the poset {{\mathcal N}} has bounded depth then the number of terms in Proposition 3 can end up being just polynomially large in {\ell} rather than exponentially large. Indeed, if all chains {X \supsetneq E_1 \supsetneq \dots \supsetneq E_k} in {{\mathcal N}} have length {k} at most {k_0} then the number of terms here is at most {1 + \ell + \dots + \ell^{k_0}}. (The examples (4), (5) are ones in which the depth is equal to two.) I hope to report in a later post on how this version of inclusion-exclusion with polynomially many terms can be useful in an application.

Actually in our application we need an abstraction of the above formula, in which the indicator functions are replaced by more abstract idempotents:

Proposition 6 (Hall-type inclusion-exclusion principle for idempotents) Let {A_1,\dots,A_\ell} be pairwise commuting elements of some ring {R} with identity, which are all idempotent (thus {A_j A_j = A_j} for {j=1,\dots,\ell}). Let {{\mathcal N}} be the finite poset formed by products of the {A_i} (with the convention that {1} is the empty product), ordered by declaring {E \leq F} when {EF = E} (note that all the elements of {{\mathcal N}} are idempotent so this is a partial ordering). Then for any {E \in {\mathcal N}}, one has

\displaystyle  E \prod_{F < E} (1-F) = \sum_{k=0}^\ell (-1)^k \sum_{E = E_0 > E_1 > \dots > E_k} E_k. \ \ \ \ \ (6)

where {F, E_0,\dots,E_k} are understood to range in {{\mathcal N}}. In particular (setting {E=1}) if all the {A_j} are not equal to {1} then we have

\displaystyle  \prod_{j=1}^\ell (1-A_j) = \sum_{k=0}^\ell (-1)^k \sum_{1 = E_0 > E_1 > \dots > E_k} E_k.

Morally speaking this proposition is equivalent to the previous one after applying a “spectral theorem” to simultaneously diagonalise all of the {A_j}, but it is quicker to just adapt the previous proof to establish this proposition directly. Using the Möbius function {\mu} for {{\mathcal N}}, we can rewrite these formulae as

\displaystyle  E \prod_{F < E} (1-F) = \sum_{F \leq E} \mu(F,E) F

and

\displaystyle  \prod_{j=1}^\ell (1-A_j) = \sum_F \mu(F,1) F.

Proof: Again it suffices to verify (6). Using Proposition 2 as before, it suffices to show that

\displaystyle  E = \sum_{F \leq E} F \prod_{G < F} (1 - G) \ \ \ \ \ (7)

for all {E \in {\mathcal N}} (all sums and products are understood to range in {{\mathcal N}}). We can expand

\displaystyle  E = E \prod_{G < E} (G + (1-G)) = \sum_{{\mathcal A}} (\prod_{G \in {\mathcal A}} G) (\prod_{G < E: G \not \in {\mathcal A}} (1-G)) \ \ \ \ \ (8)

where {{\mathcal A}} ranges over all subsets of {\{ G \in {\mathcal N}: G \leq E \}} that contain {E}. For such an {{\mathcal A}}, if we write {F := \prod_{G \in {\mathcal A}} G}, then {F} is the greatest lower bound of {{\mathcal A}}, and we observe that {F (\prod_{G < E: G \not \in {\mathcal A}} (1-G))} vanishes whenever {{\mathcal A}} fails to contain some {G \in {\mathcal N}} with {F \leq G \leq E}. Thus the only {{\mathcal A}} that give non-zero contributions to (8) are the intervals of the form {\{ G \in {\mathcal N}: F \leq G \leq E\}} for some {F \leq E} (which then forms the greatest lower bound for that interval), and the claim (7) follows (after noting that {F (1-G) = F (1-FG)} for any {F,G \in {\mathcal N}}). \Box

Consider a disk {D(z_0,r) := \{ z: |z-z_0| < r \}} in the complex plane. If one applies an affine-linear map {f(z) = az+b} to this disk, one obtains

\displaystyle  f(D(z_0,r)) = D(f(z_0), |f'(z_0)| r).

For maps that are merely holomorphic instead of affine-linear, one has some variants of this assertion, which I am recording here mostly for my own reference:

Theorem 1 (Holomorphic images of disks) Let {D(z_0,r)} be a disk in the complex plane, and {f: D(z_0,r) \rightarrow {\bf C}} be a holomorphic function with {f'(z_0) \neq 0}.
  • (i) (Open mapping theorem or inverse function theorem) {f(D(z_0,r))} contains a disk {D(f(z_0),\varepsilon)} for some {\varepsilon>0}. (In fact there is even a holomorphic right inverse of {f} from {D(f(z_0), \varepsilon)} to {D(z_0,r)}.)
  • (ii) (Bloch theorem) {f(D(z_0,r))} contains a disk {D(w, c |f'(z_0)| r)} for some absolute constant {c>0} and some {w \in {\bf C}}. (In fact there is even a holomorphic right inverse of {f} from {D(w, c |f'(z_0)| r)} to {D(z_0,r)}.)
  • (iii) (Koebe quarter theorem) If {f} is injective, then {f(D(z_0,r))} contains the disk {D(f(z_0), \frac{1}{4} |f'(z_0)| r)}.
  • (iv) If {f} is a polynomial of degree {n}, then {f(D(z_0,r))} contains the disk {D(f(z_0), \frac{1}{n} |f'(z_0)| r)}.
  • (v) If one has a bound of the form {|f'(z)| \leq A |f'(z_0)|} for all {z \in D(z_0,r)} and some {A>1}, then {f(D(z_0,r))} contains the disk {D(f(z_0), \frac{c}{A} |f'(z_0)| r)} for some absolute constant {c>0}. (In fact there is holomorphic right inverse of {f} from {D(f(z_0), \frac{c}{A} |f'(z_0)| r)} to {D(z_0,r)}.)

Parts (i), (ii), (iii) of this theorem are standard, as indicated by the given links. I found part (iv) as (a consequence of) Theorem 2 of this paper of Degot, who remarks that it “seems not already known in spite of its simplicity”; an equivalent form of this result also appears in Lemma 4 of this paper of Miller. The proof is simple:

Proof: (Proof of (iv)) Let {w \in D(f(z_0), \frac{1}{n} |f'(z_0)| r)}, then we have a lower bound for the log-derivative of {f(z)-w} at {z_0}:

\displaystyle  \frac{|f'(z_0)|}{|f(z_0)-w|} > \frac{n}{r}

(with the convention that the left-hand side is infinite when {f(z_0)=w}). But by the fundamental theorem of algebra we have

\displaystyle  \frac{f'(z_0)}{f(z_0)-w} = \sum_{j=1}^n \frac{1}{z_0-\zeta_j}

where {\zeta_1,\dots,\zeta_n} are the roots of the polynomial {f(z)-w} (counting multiplicity). By the pigeonhole principle, there must therefore exist a root {\zeta_j} of {f(z) - w} such that

\displaystyle  \frac{1}{|z_0-\zeta_j|} > \frac{1}{r}

and hence {\zeta_j \in D(z_0,r)}. Thus {f(D(z_0,r))} contains {w}, and the claim follows. \Box

The constant {\frac{1}{n}} in (iv) is completely sharp: if {f(z) = z^n} and {z_0} is non-zero then {f(D(z_0,|z_0|))} contains the disk

\displaystyle D(f(z_0), \frac{1}{n} |f'(z_0)| r) = D( z_0^n, |z_0|^n)

but avoids the origin, thus does not contain any disk of the form {D( z_0^n, |z_0|^n+\varepsilon)}. This example also shows that despite parts (ii), (iii) of the theorem, one cannot hope for a general inclusion of the form

\displaystyle  f(D(z_0,r)) \supset D(f(z_0), c |f'(z_0)| r )

for an absolute constant {c>0}.
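
Part (iv) is also easy to probe numerically. The following numpy sketch of my own (random complex coefficients, arbitrarily chosen parameters) samples points {w} from the disk {D(f(z_0), \frac{1}{n} |f'(z_0)| r)} and confirms that {f(z)-w} always has a root in {D(z_0,r)}:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(3)
n = 6
f = Polynomial(rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1))
z0, r = 0.3 + 0.2j, 1.5
rho = abs(f.deriv()(z0)) * r / n       # the radius in part (iv)

ok = True
for _ in range(2000):
    # sample w uniformly from the disk D(f(z0), rho)
    w = f(z0) + rho * np.sqrt(rng.random()) * np.exp(2j * np.pi * rng.random())
    ok = ok and np.min(np.abs((f - w).roots() - z0)) < r
print(ok)   # True: f(z) - w always has a root in D(z0, r)
```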

Part (v) is implicit in the standard proof of Bloch’s theorem (part (ii)), and is easy to establish:

Proof: (Proof of (v)) From the Cauchy inequalities one has {f''(z) = O(\frac{A}{r} |f'(z_0)|)} for {z \in D(z_0,r/2)}, hence by Taylor’s theorem with remainder {f(z) = f(z_0) + f'(z_0) (z-z_0) (1 + O( A \frac{|z-z_0|}{r} ) )} for {z \in D(z_0, r/2)}. By Rouche’s theorem, this implies that the function {f(z)-w} has a unique zero in {D(z_0, 2cr/A)} for any {w \in D(f(z_0), cr|f'(z_0)|/A)}, if {c>0} is a sufficiently small absolute constant. The claim follows. \Box

Note that part (v) implies part (i). A standard point picking argument also lets one deduce part (ii) from part (v):

Proof: (Proof of (ii)) By shrinking {r} slightly if necessary we may assume that {f} extends analytically to the closure of the disk {D(z_0,r)}. Let {c} be the constant in (v) with {A=2}; we will prove (ii) with {c} replaced by {c/2}. If we have {|f'(z)| \leq 2 |f'(z_0)|} for all {z \in D(z_0,r/2)} then we are done by (v), so we may assume without loss of generality that there is {z_1 \in D(z_0,r/2)} such that {|f'(z_1)| > 2 |f'(z_0)|}. If {|f'(z)| \leq 2 |f'(z_1)|} for all {z \in D(z_1,r/4)} then by (v) we have

\displaystyle  f( D(z_0, r) ) \supset f( D(z_1,r/2) ) \supset D( f(z_1), \frac{c}{2} |f'(z_1)| \frac{r}{2} )

\displaystyle \supset D( f(z_1), \frac{c}{2} |f'(z_0)| r )

and we are again done. Hence we may assume without loss of generality that there is {z_2 \in D(z_1,r/4)} such that {|f'(z_2)| > 2 |f'(z_1)|}. Iterating this procedure in the obvious fashion we either are done, or obtain a Cauchy sequence {z_0, z_1, \dots} in {D(z_0,r)} such that {f'(z_j)} goes to infinity as {j \rightarrow \infty}, which contradicts the analytic nature of {f} (and hence continuous nature of {f'}) on the closure of {D(z_0,r)}. This gives the claim. \Box

Here is another classical result stated by Alexander (and then proven by Kakeya and by Szego, but also implied by a classical theorem of Grace and Heawood) that is broadly compatible with parts (iii), (iv) of the above theorem:

Proposition 2 Let {D(z_0,r)} be a disk in the complex plane, and {f: D(z_0,r) \rightarrow {\bf C}} be a polynomial of degree {n \geq 1} with {f'(z) \neq 0} for all {z \in D(z_0,r)}. Then {f} is injective on {D(z_0, r\sin\frac{\pi}{n})}.

The radius {\sin \frac{\pi}{n}} is best possible, for the polynomial {f(z) = z^n} has {f'} non-vanishing on {D(1,1)}, but one has {f(\cos(\pi/n) e^{i \pi/n}) = f(\cos(\pi/n) e^{-i\pi/n})}, and {\cos(\pi/n) e^{i \pi/n}, \cos(\pi/n) e^{-i\pi/n}} lie on the boundary of {D(1,\sin \frac{\pi}{n})}.

If one narrows {\sin \frac{\pi}{n}} slightly to {\sin \frac{\pi}{2n}} then one can quickly prove this proposition as follows. Suppose for contradiction that there exist distinct {z_1, z_2 \in D(z_0, r\sin\frac{\pi}{2n})} with {f(z_1)=f(z_2)}, thus if we let {\gamma} be the line segment contour from {z_1} to {z_2} then {\int_\gamma f'(z)\ dz = 0}. However, by assumption we may factor {f'(z) = c (z-\zeta_1) \dots (z-\zeta_{n-1})} where all the {\zeta_j} lie outside of {D(z_0,r)}. Elementary trigonometry then tells us that the argument of {z-\zeta_j} only varies by less than {\frac{\pi}{n}} as {z} traverses {\gamma}, hence the argument of {f'(z)} only varies by less than {\pi}. Thus {f'(z)} takes values in an open half-plane avoiding the origin and so it is not possible for {\int_\gamma f'(z)\ dz} to vanish.

To recover the best constant of {\sin \frac{\pi}{n}} requires some effort. By taking contrapositives and applying an affine rescaling and some trigonometry, the proposition can be deduced from the following result, known variously as the Grace-Heawood theorem or the complex Rolle theorem.

Proposition 3 (Grace-Heawood theorem) Let {f: {\bf C} \rightarrow {\bf C}} be a polynomial of degree {n \geq 1} such that {f(1)=f(-1)}. Then {f'} has a zero in the closure of {D( 0, \cot \frac{\pi}{n} )}.

This is in turn implied by a remarkable and powerful theorem of Grace (which we shall prove shortly). Given two polynomials {f,g} of degree at most {n}, define the apolar form {(f,g)_n} by

\displaystyle  (f,g)_n := \sum_{k=0}^n (-1)^k f^{(k)}(0) g^{(n-k)}(0). \ \ \ \ \ (1)

Theorem 4 (Grace’s theorem) Let {C} be a circle or line in {{\bf C}}, dividing {{\bf C} \backslash C} into two open connected regions {\Omega_1, \Omega_2}. Let {f,g} be two polynomials of degree at most {n \geq 1}, with all the zeroes of {f} lying in {\Omega_1} and all the zeroes of {g} lying in {\Omega_2}. Then {(f,g)_n \neq 0}.

(Contrapositively: if {(f,g)_n=0}, then the zeroes of {f} cannot be separated from the zeroes of {g} by a circle or line.)

Indeed, a brief calculation reveals the identity

\displaystyle  f(1) - f(-1) = (f', g)_{n-1}

where {g} is the degree {n-1} polynomial

\displaystyle  g(z) := \frac{1}{n!} ((z+1)^n - (z-1)^n).

The zeroes of {g} are {i \cot \frac{\pi j}{n}} for {j=1,\dots,n-1}, so the Grace-Heawood theorem follows by applying Grace’s theorem with {C} equal to the boundary of {D(0, \cot \frac{\pi}{n})}.
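
This identity can be verified symbolically for any fixed degree. Here is a short sympy sketch of my own, with the apolar form implemented directly from (1) and generic coefficients {a_0,\dots,a_n}:

```python
import sympy as sp

z = sp.symbols('z')
n = 5
a = sp.symbols('a0:%d' % (n + 1))            # generic coefficients a_0, ..., a_n
f = sum(a[k] * z**k for k in range(n + 1))

def apolar(F, G, m):
    """(F,G)_m = sum_{k=0}^m (-1)^k F^(k)(0) G^(m-k)(0), as in (1)."""
    return sum((-1) ** k * sp.diff(F, z, k).subs(z, 0)
               * sp.diff(G, z, m - k).subs(z, 0) for k in range(m + 1))

g = ((z + 1) ** n - (z - 1) ** n) / sp.factorial(n)

lhs = f.subs(z, 1) - f.subs(z, -1)
rhs = apolar(sp.diff(f, z), g, n - 1)
print(sp.expand(lhs - rhs))                  # 0
```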

The same method of proof gives the following nice consequence:

Theorem 5 (Perpendicular bisector theorem) Let {f: {\bf C} \rightarrow {\bf C}} be a polynomial such that {f(z_1)=f(z_2)} for some distinct {z_1,z_2}. Then the zeroes of {f'} cannot all lie on one side of the perpendicular bisector of {z_1,z_2}. For instance, if {f(1)=f(-1)}, then the zeroes of {f'} cannot all lie in the halfplane {\{ z: \mathrm{Re} z > 0 \}} or the halfplane {\{ z: \mathrm{Re} z < 0 \}}.

I’d be interested in seeing a proof of this latter theorem that did not proceed via Grace’s theorem.

Now we give a proof of Grace’s theorem. The case {n=1} can be established by direct computation, so suppose inductively that {n>1} and that the claim has already been established for {n-1}. Given the involvement of circles and lines it is natural to suspect that a Möbius transformation symmetry is involved. This is indeed the case and can be made precise as follows. Let {V_n} denote the vector space of polynomials {f} of degree at most {n}, then the apolar form is a bilinear form {(,)_n: V_n \times V_n \rightarrow {\bf C}}. Each translation {z \mapsto z+a} on the complex plane induces a corresponding map on {V_n}, mapping each polynomial {f} to its shift {\tau_a f(z) := f(z-a)}. We claim that the apolar form is invariant with respect to these translations:

\displaystyle  ( \tau_a f, \tau_a g )_n = (f,g)_n.

Taking derivatives in {a}, it suffices to establish the skew-adjointness relation

\displaystyle  (f', g)_n + (f,g')_n = 0

but this is clear from the alternating form of (1).

Next, we see that the inversion map {z \mapsto 1/z} also induces a corresponding map on {V_n}, mapping each polynomial {f \in V_n} to its inversion {\iota f(z) := z^n f(1/z)}. From (1) we see that this map also (projectively) preserves the apolar form:

\displaystyle  (\iota f, \iota g)_n = (-1)^n (f,g)_n.

More generally, the group of Möbius transformations on the Riemann sphere acts projectively on {V_n}, with each Möbius transformation {T: {\bf C} \rightarrow {\bf C}} mapping each {f \in V_n} to {Tf(z) := g_T(z) f(T^{-1} z)}, where {g_T} is the unique (up to constants) rational function that makes this a map from {V_n} to {V_n} (its divisor is {n(T \infty) - n(\infty)}). Since the Möbius transformations are generated by translations and inversion, we see that the action of Möbius transformations projectively preserves the apolar form; also, we see this action of {T} on {V_n} also moves the zeroes of each {f \in V_n} by {T} (viewing polynomials of degree less than {n} in {V_n} as having zeroes at infinity). In particular, the hypotheses and conclusions of Grace’s theorem are preserved by this Möbius action. We can then apply such a transformation to move one of the zeroes of {f} to infinity (thus making {f} a polynomial of degree {n-1}), so that {C} must now be a circle, with the zeroes of {g} inside the circle and the remaining zeroes of {f} outside the circle. But then

\displaystyle  (f,g)_n = (f, g')_{n-1}.

By the Gauss-Lucas theorem, the zeroes of {g'} are also inside {C}. The claim now follows from the induction hypothesis.

A family {A_1,\dots,A_r} of sets for some {r \geq 1} is a sunflower if there is a core set {A_0} contained in each of the {A_i} such that the petal sets {A_i \backslash A_0, i=1,\dots,r} are disjoint. If {k,r \geq 1}, let {\mathrm{Sun}(k,r)} denote the smallest natural number with the property that any family of {\mathrm{Sun}(k,r)} distinct sets of cardinality at most