You are currently browsing the category archive for the ‘expository’ category.

A popular way to visualise relationships between some finite number of sets is via Venn diagrams, or more generally Euler diagrams. In these diagrams, a set is depicted as a two-dimensional shape such as a disk or a rectangle, and the various Boolean relationships between these sets (e.g., that one set is contained in another, or that the intersection of two of the sets is equal to a third) are represented by the Boolean algebra of these shapes; Venn diagrams correspond to the case where the sets are in “general position” in the sense that all non-trivial Boolean combinations of the sets are non-empty. For instance, to depict the general situation of two sets together with their intersection and union, one might use a Venn diagram such as

(where we have given each region depicted a different color, and moved the edges of each region a little away from each other in order to make them all visible separately), but if one wanted to instead depict a situation in which the intersection was empty, one could use an Euler diagram such as

One can use the area of various regions in a Venn or Euler diagram as a heuristic proxy for the cardinality (or measure) of the set corresponding to such a region. For instance, the above Venn diagram can be used to intuitively justify the inclusion-exclusion formula

for finite sets , while the above Euler diagram similarly justifies the special case for finite *disjoint* sets .
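As a quick sanity check, the two counting identities above can be verified directly on small concrete sets (a minimal Python sketch; the particular sets are arbitrary choices of mine):

```python
# Inclusion-exclusion for two finite sets: |A ∪ B| = |A| + |B| - |A ∩ B|.
A = {1, 2, 3, 4}
B = {3, 4, 5}
assert len(A | B) == len(A) + len(B) - len(A & B)

# Disjoint special case: the intersection term vanishes and
# cardinality becomes simply additive.
C = {10, 11}
assert A & C == set()
assert len(A | C) == len(A) + len(C)
print("inclusion-exclusion checks pass")
```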

While Venn and Euler diagrams are traditionally two-dimensional in nature, there is nothing preventing one from using one-dimensional diagrams such as

or even three-dimensional diagrams such as this one from Wikipedia:

Of course, in such cases one would use length or volume as a heuristic proxy for cardinality or measure, rather than area.

With the addition of arrows, Venn and Euler diagrams can also accommodate (to some extent) functions between sets. Here for instance is a depiction of a function , the image of that function, and the image of some subset of :

Here one can illustrate surjectivity of by having fill out all of ; one can similarly illustrate injectivity of by giving exactly the same shape (or at least the same area) as . So here for instance might be how one would illustrate an injective function :

Cartesian product operations can be incorporated into these diagrams by appropriate combinations of one-dimensional and two-dimensional diagrams. Here for instance is a diagram that illustrates the identity :

In this blog post I would like to propose a similar family of diagrams to illustrate relationships between *vector spaces* (over a fixed base field , such as the reals) or *abelian groups*, rather than sets. The categories of (-)vector spaces and abelian groups are quite similar in many ways; the former consists of modules over a base field , while the latter consists of modules over the integers ; also, both categories are basic examples of abelian categories. The notion of a dimension in a vector space is analogous in many ways to that of cardinality of a set; see this previous post for an instance of this analogy (in the context of Shannon entropy). (UPDATE: I have learned that an essentially identical notation has also been proposed in an unpublished manuscript of Ravi Vakil.)

In everyday usage, we rely heavily on percentages to quantify probabilities and proportions: we might say that a prediction is accurate or accurate, that there is a chance of dying from some disease, and so forth. However, for those without extensive mathematical training, it can sometimes be difficult to assess whether a given percentage amounts to a “good” or “bad” outcome, because this depends very much on the context of how the percentage is used. For instance:

- (i) In a two-party election, an outcome of say to might be considered close, but to would probably be viewed as a convincing mandate, and to would likely be viewed as a landslide.
- (ii) Similarly, if one were to poll an upcoming election, a poll of to would be too close to call, to would be an extremely favorable result for the candidate, and to would mean that it would be a major upset if the candidate lost the election.
- (iii) On the other hand, a medical operation that only had a , , or chance of success would be viewed as being incredibly risky, especially if failure meant death or permanent injury to the patient. Even an operation that was or likely to be non-fatal (i.e., a or chance of death) would not be conducted lightly.
- (iv) A weather prediction of, say, chance of rain during a vacation trip might be sufficient cause to pack an umbrella, even though it is more likely than not that rain would not occur. On the other hand, if the prediction was for an chance of rain, and it ended up that the skies remained clear, this does not seriously damage the accuracy of the prediction – indeed, such an outcome would be expected in one out of every five such predictions.
- (v) Even extremely tiny percentages of toxic chemicals in everyday products can be considered unacceptable. For instance, EPA rules require action to be taken when the percentage of lead in drinking water exceeds (15 parts per billion). At the opposite extreme, recycling contamination rates as high as are often considered acceptable.

Because of all the very different ways in which percentages could be used, I think it may make sense to propose an alternate system of units to measure one class of probabilities, namely the probabilities of avoiding some highly undesirable outcome, such as death, accident or illness. The units I propose are that of “nines”, which are already commonly used to measure *availability* of some service or *purity* of a material, but can be equally used to measure the *safety* (i.e., lack of risk) of some activity. Informally, nines measure how many consecutive appearances of the digit 9 there are in the probability of successfully avoiding the negative outcome, thus

- 90% success = one nine of safety
- 99% success = two nines of safety
- 99.9% success = three nines of safety

Definition 1 (Nines of safety). An activity (affecting one or more persons, over some given period of time) that has a probability of the “safe” outcome and probability of the “unsafe” outcome will have nines of safety against the unsafe outcome, where is defined by the formula (where is the logarithm to base ten), or equivalently

Remark 2. Because of the various uncertainties in measuring probabilities, as well as the inaccuracies in some of the assumptions and approximations we will be making later, we will not attempt to measure the number of nines of safety beyond the first decimal point; thus we will round to the nearest tenth of a nine of safety throughout this post.
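In code, Definition 1 is a one-liner; the following Python sketch (the helper name `nines` is my own) also respects the rounding convention of Remark 2:

```python
import math

def nines(p_safe: float) -> float:
    """Nines of safety for a success probability p_safe = 1 - q,
    i.e. minus the base-ten logarithm of the failure probability q."""
    q = 1.0 - p_safe
    if q <= 0:
        return math.inf  # guaranteed safety: infinitely many nines
    return -math.log10(q)

# Rounded to the nearest tenth, per Remark 2:
print(round(nines(0.9), 1))    # one nine of safety
print(round(nines(0.99), 1))   # two nines of safety
print(round(nines(0.999), 1))  # three nines of safety
```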

Here is a conversion table between percentage rates of success (the safe outcome), failure (the unsafe outcome), and the number of nines of safety one has:

Success rate | Failure rate | Number of nines |

infinite |

Thus, if one has no nines of safety whatsoever, one is guaranteed to fail; but each nine of safety one has reduces the failure rate by a factor of ten. In an ideal world, one would have infinitely many nines of safety against any risk, but in practice there are no guarantees against failure, and so one can only expect a finite number of nines of safety in any given situation. Realistically, one should thus aim to have as many nines of safety as one can reasonably expect to have, but not to demand an infinite amount.

Remark 3. The number of nines of safety against a certain risk is not absolute; it will depend not only on the risk itself, but also on (a) the number of people exposed to the risk, and (b) the length of time one is exposed to the risk. Exposing more people or increasing the duration of exposure will reduce the number of nines, and conversely exposing fewer people or reducing the duration will increase the number of nines; see Proposition 7 below for a rough rule of thumb in this regard.

Remark 4. Nines of safety are a logarithmic scale of measurement, rather than a linear scale. Other familiar examples of logarithmic scales of measurement include the Richter scale of earthquake magnitude, the pH scale of acidity, the decibel scale of sound level, octaves in music, and the magnitude scale for stars.

Remark 5. One way to think about nines of safety is via the Swiss cheese model that was created recently to describe pandemic risk management. In this model, each nine of safety can be thought of as a slice of Swiss cheese, with holes occupying of that slice. Having nines of safety is then analogous to standing behind such slices of Swiss cheese. In order for a risk to actually impact you, it must pass through each of these slices. A fractional nine of safety corresponds to a fractional slice of Swiss cheese that covers the amount of space given by the above table. For instance, nines of safety corresponds to a fractional slice that covers about of the given area (leaving uncovered).

We now give some real-world examples of nines of safety. Using data for deaths in the US in 2019 (without attempting to account for factors such as age and gender), a random US citizen will have had the following amount of safety from dying from some selected causes in that year:

Cause of death | Mortality rate per (approx.) | Nines of safety |

All causes | ||

Heart disease | ||

Cancer | ||

Accidents | ||

Drug overdose | ||

Influenza/Pneumonia | ||

Suicide | ||

Gun violence | ||

Car accident | ||

Murder | ||

Airplane crash | ||

Lightning strike |

The safety of air travel is particularly remarkable: a given hour of flying in general aviation has a fatality rate of , or about nines of safety, while for the major carriers the fatality rate drops down to , or about nines of safety.

Of course, in 2020, COVID-19 deaths became significant. In this year in the US, the mortality rate for COVID-19 (as the underlying or contributing cause of death) was per , corresponding to nines of safety, which was less safe than all other causes of death except for heart disease and cancer. At this time of writing, data for all of 2021 is of course not yet available, but it seems likely that the safety level would be even lower for this year.

Some further illustrations of the concept of nines of safety:

- Each round of Russian roulette has a success rate of , providing only nines of safety. Of course, the safety will decrease with each additional round: one has only nines of safety after two rounds, nines after three rounds, and so forth. (See also Proposition 7 below.)
- The ancient Roman punishment of decimation, by definition, provided exactly one nine of safety to each soldier being punished.
- Rolling a on a -sided die is a risk that carries about nines of safety.
- Rolling a double one (“snake eyes“) from two six-sided dice carries about nines of safety.
- One has about nines of safety against the risk of someone randomly guessing your birthday on the first attempt.
- A null hypothesis has nines of safety against producing a statistically significant result, and nines against producing a statistically significant result. (However, one has to be careful when reversing the conditional; a statistically significant result does not necessarily have nines of safety against the null hypothesis. In Bayesian statistics, the precise relationship between the two risks is given by Bayes’ theorem.)
- If a poker opponent is dealt a five-card hand, one has nines of safety against that opponent being dealt a royal flush, against a straight flush or higher, against four-of-a-kind or higher, against a full house or higher, against a flush or higher, against a straight or higher, against three-of-a-kind or higher, against two pairs or higher, and just against one pair or higher. (This data was converted from this Wikipedia table.)
- A -digit PIN number (or a -digit combination lock) carries nines of safety against each attempt to randomly guess the PIN. A length password that allows for numbers, upper and lower case letters, and punctuation carries about nines of safety against a single guess. (For the reduction in safety caused by multiple guesses, see Proposition 7 below.)
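Several of the dice, birthday, and PIN examples above reduce to the same computation: the number of nines of safety against a single uniformly random guess out of N equally likely possibilities is log10(N). A small Python sketch (the numbers below are illustrative, not taken from the text):

```python
import math

def guess_nines(num_possibilities: int) -> float:
    """Nines of safety against a single uniformly random guess succeeding,
    when there are num_possibilities equally likely options."""
    # Failure probability is 1/N, so the nines count is -log10(1/N) = log10(N).
    return math.log10(num_possibilities)

# Hypothetical illustrations: a 4-digit PIN, and rolling a given face of a die.
assert abs(guess_nines(10**4) - 4.0) < 1e-9   # four nines per guess
assert round(guess_nines(6), 1) == 0.8        # a single six-sided die roll
```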

Here is another way to think about nines of safety:

Proposition 6 (Nines of safety extend expected onset of risk). Suppose a certain risky activity has nines of safety. If one repeatedly indulges in this activity until the risk occurs, then the expected number of trials before the risk occurs is .

*Proof:* The probability that the risk is activated after exactly trials is , which is a geometric distribution of parameter . The claim then follows from the standard properties of that distribution.
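Proposition 6 can also be checked empirically. The following Python sketch (parameters chosen purely for illustration) simulates repeated trials at one nine of safety and compares the sample mean to the prediction that k nines of safety gives an expected 10^k trials before onset:

```python
import random

def expected_trials(k: float) -> float:
    """Expected number of trials before the risk occurs, given k nines
    of safety, i.e. a per-trial failure probability q = 10**(-k)."""
    return 10.0 ** k

# Monte Carlo check with one nine of safety (q = 0.1):
random.seed(0)
q = 0.1
runs = 100_000
total = 0
for _ in range(runs):
    n = 1
    while random.random() >= q:   # keep going until the risk is activated
        n += 1
    total += n
mean = total / runs
assert abs(mean - expected_trials(1.0)) < 0.2   # sample mean close to 10
```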

Thus, for instance, if one performs some risky activity daily, then the expected length of time before the risk occurs is given by the following table:

Daily nines of safety | Expected onset of risk |

One day | |

One week | |

One month | |

One year | |

Two years | |

Five years | |

Ten years | |

Twenty years | |

Fifty years | |

A century |

Or, if one wants to convert the yearly risks of dying from a specific cause into expected years before that cause of death would occur (assuming for sake of discussion that no other cause of death exists):

Yearly nines of safety | Expected onset of risk |

One year | |

Two years | |

Five years | |

Ten years | |

Twenty years | |

Fifty years | |

A century |

These tables suggest a relationship between the amount of safety one would have over a short time frame, such as a day, and a longer time frame, such as a year. Here is an approximate formalisation of that relationship:

Proposition 7 (Repeated exposure reduces nines of safety). If a risky activity with nines of safety is (independently) repeated times, then (assuming is large enough depending on ), the repeated activity will have approximately nines of safety. Conversely: if the repeated activity has nines of safety, the individual activity will have approximately nines of safety.

*Proof:* An activity with nines of safety will be safe with probability , hence safe with probability if repeated independently times. For large, we can approximate

Remark 8. The hypothesis of independence here is key. If there is a lot of correlation between the risks between different repetitions of the activity, then there can be much less reduction in safety caused by that repetition. As a simple example, suppose that of a workforce are trained to perform some task flawlessly no matter how many times they repeat the task, but the remaining are untrained and will always fail at that task. If one selects a random worker and asks them to perform the task, one has nines of safety against the task failing. If one took that same random worker and asked them to perform the task times, the above proposition might suggest that the number of nines of safety would drop to approximately ; but in this case there is perfect correlation, and in fact the number of nines of safety remains steady at since it is the same of the workforce that would fail each time.

Because of this caveat, one should view the above proposition as only a crude first approximation that can be used as a simple rule of thumb, but should not be relied upon for more precise calculations.
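Under the independence hypothesis, one can gauge how sharp the rule of thumb in Proposition 7 is by comparing it against the exact computation (a Python sketch with illustrative parameters of my own choosing):

```python
import math

def repeated_nines(k: float, m: int) -> float:
    """Exact nines of safety after m independent repetitions of an
    activity carrying k nines of safety."""
    q = 10.0 ** (-k)                        # single-trial failure probability
    return -math.log10(1.0 - (1.0 - q) ** m)

# Rule of thumb k - log10(m) versus the exact value, for k = 3, m = 30:
exact = repeated_nines(3.0, 30)
approx = 3.0 - math.log10(30)
assert abs(exact - approx) < 0.01           # agree to within a hundredth of a nine
```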

One can repeat a risk either in time (extending the time of exposure to the risk, say from a day to a year), or in space (by exposing the risk to more people). The above proposition then gives an additive conversion law for nines of safety in either case. Here are some conversion tables for time:

From/to | Daily | Weekly | Monthly | Yearly |

Daily | 0 | -0.8 | -1.5 | -2.6 |

Weekly | +0.8 | 0 | -0.6 | -1.7 |

Monthly | +1.5 | +0.6 | 0 | -1.1 |

Yearly | +2.6 | +1.7 | +1.1 | 0 |

From/to | Yearly | Per 5 yr | Per decade | Per century |

Yearly | 0 | -0.7 | -1.0 | -2.0 |

Per 5 yr | +0.7 | 0 | -0.3 | -1.3 |

Per decade | +1.0 | +0.3 | 0 | -1.0 |

Per century | +2.0 | +1.3 | +1.0 | 0 |

For instance, as mentioned before, the yearly amount of safety against cancer is about . Using the above table (and making the somewhat unrealistic hypothesis of independence), we then predict the daily amount of safety against cancer to be about nines, the weekly amount to be about nines, and the amount of safety over five years to drop to about nines.

Now we turn to conversions in space. If one knows the level of safety against a certain risk for an individual, and then one (independently) exposes a group of such individuals to that risk, then the reduction in nines of safety when considering the possibility that at least one group member experiences this risk is given by the following table:

Group | Reduction in safety |

You ( person) | |

You and your partner ( people) | |

You and your parents ( people) | |

You, your partner, and three children ( people) | |

An extended family of people | |

A class of people | |

A workplace of people | |

A school of people | |

A university of people | |

A town of people | |

A city of million people | |

A state of million people | |

A country of million people | |

A continent of billion people | |

The entire planet |

For instance, in a given year (and making the somewhat implausible assumption of independence), you might have nines of safety against cancer, but you and your partner collectively only have about nines of safety against this risk, your family of five might only have about nines of safety, and so forth. By the time one gets to a group of people, it actually becomes very likely that at least one member of the group will die of cancer in that year. (Here the precise conversion table breaks down, because a negative number of nines such as is not possible, but one should interpret a prediction of a negative number of nines as an assertion that failure is very likely to happen. Also, in practice the reduction in safety is less than this rule predicts, due to correlations between members of the group, such as shared risk factors, that are incompatible with the assumption of independence.)

In the opposite direction, any reduction in exposure (either in time or space) to a risk will increase one’s safety level, as per the following table:

Reduction in exposure | Additional nines of safety |

For instance, a five-fold reduction in exposure will reclaim about additional nines of safety.

Here is a slightly different way to view nines of safety:

Proposition 9. Suppose that a group of people are independently exposed to a given risk. If there are at most nines of individual safety against that risk, then there is at least a chance that at least one member of the group is affected by the risk.

*Proof:* If individually there are nines of safety, then the probability that all the members of the group avoid the risk is . Since the inequality

Thus, for a group to collectively avoid a risk with at least a chance, one needs the following level of individual safety:

Group | Individual safety level required |

You ( person) | |

You and your partner ( people) | |

You and your parents ( people) | |

You, your partner, and three children ( people) | |

An extended family of people | |

A class of people | |

A workplace of people | |

A school of people | |

A university of people | |

A town of people | |

A city of million people | |

A state of million people | |

A country of million people | |

A continent of billion people | |

The entire planet |

For large , the level of nines of individual safety required to protect a group of size with probability at least is approximately .

Precautions that can work to prevent a certain risk from occurring will add additional nines of safety against that risk, even if the precaution is not perfectly effective. Here is the precise rule:

Proposition 10 (Precautions add nines of safety). Suppose an activity carries nines of safety against a certain risk, and a separate precaution can independently protect against that risk with nines of safety (that is to say, the probability that the protection is effective is ). Then applying that precaution increases the number of nines in the activity from to .

*Proof:* The probability that the precaution fails *and* the risk then occurs is . The claim now follows from Definition 1.
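In code, Proposition 10 is the statement that independent failure probabilities multiply, and hence nines of safety add (a Python sketch with illustrative values):

```python
import math

def nines_of(q: float) -> float:
    """Nines of safety for a failure probability q."""
    return -math.log10(q)

# Activity with k nines of safety, plus an independent precaution with
# l nines of effectiveness: both must fail for the risk to occur, so
# the combined failure probability is the product of the two.
k, l = 1.5, 1.0
q_combined = 10.0 ** (-k) * 10.0 ** (-l)
assert abs(nines_of(q_combined) - (k + l)) < 1e-9   # nines simply add
```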

In particular, we can repurpose the table at the start of this post as a conversion chart for effectiveness of a precaution:

Effectiveness | Failure rate | Additional nines provided |

infinite |

Thus for instance a precaution that is effective will add nines of safety, a precaution that is effective will add nines of safety, and so forth. The mRNA COVID vaccines by Pfizer and Moderna have somewhere between effectiveness against symptomatic COVID illness, providing about nines of safety against that risk, and over effectiveness against severe illness, thus adding at least nines of safety in this regard.

A slight variant of the above rule can be stated using the concept of relative risk:

Proposition 11 (Relative risk and nines of safety). Suppose an activity carries nines of safety against a certain risk, and an action multiplies the chance of failure by some relative risk . Then the action removes nines of safety (if ) or adds nines of safety (if ) to the original activity.

*Proof:* The additional action adjusts the probability of failure from to . The claim now follows from Definition 1.
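Equivalently, a relative risk of r shifts the nines count by -log10(r) (a Python sketch; the halving/doubling figures below are generic illustrations, not data from the text):

```python
import math

def nines_change(relative_risk: float) -> float:
    """Change in nines of safety from an action that multiplies the
    failure probability by relative_risk (negative = less safe)."""
    return -math.log10(relative_risk)

# Halving a risk (relative risk 1/2) adds about 0.3 nines of safety;
# doubling it (relative risk 2) removes about 0.3 nines.
assert round(nines_change(0.5), 1) == 0.3
assert round(nines_change(2.0), 1) == -0.3
```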

Here is a conversion chart between relative risk and change in nines of safety:

Relative risk | Change in nines of safety |

Some examples:

- Smoking increases the fatality rate of lung cancer by a factor of about , thus removing about nines of safety from this particular risk; it also increases the fatality rates of several other diseases, though not to quite as dramatic an extent.
- Seatbelts reduce the fatality rate in car accidents by a factor of about two, adding about nines of safety. Airbags achieve a reduction of about , adding about additional nines of safety.
- As far as transmission of COVID is concerned, it seems that constant use of face masks reduces transmission by a factor of about five (thus adding about nines of safety), and similarly for constant adherence to social distancing; whereas for instance a compliance with mask usage reduced transmission by about (adding only or so nines of safety).

The effect of combining multiple (independent) precautions together is cumulative; one can achieve quite a high level of safety by stacking together several precautions that individually have relatively low levels of effectiveness. Again, see the “Swiss cheese model” referred to in Remark 5. For instance, if face masks add nines of safety against contracting COVID, social distancing adds another nines, and the vaccine provides another nine of safety, implementing all three mitigation methods would (assuming independence) add a net of nines of safety against contracting COVID.

In summary, when debating the value of a given risk mitigation measure, the correct question to ask is not quite “Is it certain to work?” or “Can it fail?”, but rather “How many extra nines of safety does it add?”.

As one final comparison between nines of safety and other standard risk measures, we give the following proposition regarding large deviations from the mean.

Proposition 12. Let be a normally distributed random variable of standard deviation , and let . Then the “one-sided risk” of exceeding its mean by at least (i.e., ) carries nines of safety, and the “two-sided risk” of deviating (in either direction) from its mean by at least (i.e., ) carries nines of safety, where is the error function.

*Proof:* This is a routine calculation using the cumulative distribution function of the normal distribution.
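The quantities in Proposition 12 can be evaluated numerically from the normal tail probabilities; here is a Python sketch using the complementary error function (the helper names are my own):

```python
import math

def one_sided_nines(t: float) -> float:
    """Nines of safety against a normal variable exceeding its mean
    by at least t standard deviations (upper tail only)."""
    tail = 0.5 * math.erfc(t / math.sqrt(2.0))
    return -math.log10(tail)

def two_sided_nines(t: float) -> float:
    """Nines of safety against deviating from the mean by at least
    t standard deviations in either direction."""
    tail = math.erfc(t / math.sqrt(2.0))
    return -math.log10(tail)

# Nines of safety against a two-sided "five sigma" event:
print(round(two_sided_nines(5.0), 1))
```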

Here is a short table illustrating this proposition:

Number of deviations from the mean | One-sided nines of safety | Two-sided nines of safety |

Thus, for instance, the risk of a five sigma event (deviating by more than five standard deviations from the mean in either direction) should carry nines of safety assuming a normal distribution, and so one would ordinarily feel extremely safe against the possibility of such an event, unless one started doing hundreds of thousands of trials. (However, we caution that this conclusion relies *heavily* on the assumption that one has a normal distribution!)

See also this older essay I wrote on anonymity on the internet, using bits as a measure of anonymity in much the same way that nines are used here as a measure of safety.

In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms for . For finitely supported functions , one can define the (non-normalised) Gowers norm by the formula

where denotes complex conjugation; then, on any discrete interval and for any function , we can define the (normalised) Gowers norm where is the extension of by zero to all of . Thus for instance (which technically makes a seminorm rather than a norm), and one can calculate where , and we use the averaging notation .

The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials and functions , we define the multilinear form

(assuming that the denominator is finite and non-zero). Thus for instance where we view as formal (indeterminate) variables, and are understood to be extended by zero to all of . These forms are used to count patterns in various sets; for instance, the quantity is closely related to the number of length three arithmetic progressions contained in . Let us informally say that a form is *controlled* by the norm if the form is small whenever are -bounded functions with at least one of the small in norm. This definition was made more precise by Gowers and Wolf, who then defined the *true complexity* of a form to be the least such that is controlled by the norm. For instance,

- and have true complexity ;
- has true complexity ;
- has true complexity ;
- The form (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).
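For the reader's convenience, the unnormalised Gowers norm alluded to above has the following standard definition in conventional notation (this is the textbook formula, included here for reference, with $\mathcal{C}$ denoting complex conjugation and $|\omega| = \omega_1 + \cdots + \omega_k$):

```latex
% Standard definition of the (non-normalised) Gowers uniformity norm
% for finitely supported f on the integers.
\[
  \|f\|_{\tilde U^k(\mathbf{Z})}^{2^k}
  \;=\;
  \sum_{x, h_1, \dots, h_k \in \mathbf{Z}}
  \prod_{\omega \in \{0,1\}^k}
  \mathcal{C}^{|\omega|}
  f\bigl(x + \omega_1 h_1 + \cdots + \omega_k h_k\bigr)
\]
```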

Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials ; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.

The (semi-)norm is so weak that it barely controls any averages at all. For instance the average

is not controlled by the semi-norm: it is perfectly possible for a -bounded function to even have vanishing norm but have large value of (consider for instance the parity function ).

Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the and norms, which I will call the (or “profinite ”) norm:

where ranges over all arithmetic progressions in . This can easily be seen to be a norm on functions that controls the norm. It is also basically controlled by the norm for -bounded functions ; indeed, if is an arithmetic progression in of some spacing , then we can write as the intersection of an interval with a residue class modulo , and from Fourier expansion we have

If we let be a standard bump function supported on with total mass and is a parameter, then (extending by zero outside of ), as can be seen by using the triangle inequality and the estimate

After some Fourier expansion of we now have

Writing as a linear combination of and using the Gowers–Cauchy–Schwarz inequality, we conclude

hence on optimising in we have

Forms which are controlled by the norm (but not ) would then have their true complexity adjusted to with this insertion.

The norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form had true complexity in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function ; for the first two functions one needs to localize the norm to intervals of length . But I will ignore this technical point to keep the exposition simple.] The weaker claim that has true complexity is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).

The well known inverse theorem for the norm tells us that if a -bounded function has norm at least for some , then there is a Fourier phase such that

this follows easily from (1) and Plancherel’s theorem. Conversely, from the Gowers–Cauchy–Schwarz inequality one has

For one has a trivial inverse theorem; by definition, the norm of is at least if and only if

Thus the frequency appearing in the inverse theorem can be taken to be zero when working instead with the norm.

For one has the intermediate situation in which the frequency is not taken to be zero, but is instead major arc. Indeed, suppose that is -bounded with , thus

for some progression . This forces the spacing of this progression to be . We write the above inequality as

for some residue class and some interval . By Fourier expansion and the triangle inequality we then have

for some integer . Convolving by for a small multiple of and a Schwartz function of unit mass with Fourier transform supported on , we have

The Fourier transform of is bounded by and supported on , thus by Fourier expansion and the triangle inequality we have

for some , so in particular . Thus we have

for some of the major arc form with . Conversely, for of this form, some routine summation by parts gives the bound

so if (2) holds for a -bounded then one must have .

Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes of functions (where each class of functions induces a dual norm ):

Here I have included the three classes of functions that one can choose from for the inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.

The Gowers norms have counterparts for measure-preserving systems , known as *Host-Kra seminorms*. The norm can be defined for as

*invariant factor* (generated by the (almost everywhere) invariant measurable subsets of ) in the sense that a function has vanishing seminorm if and only if it is orthogonal to all -measurable (bounded) functions. Similarly, the norm is orthogonal to the

*Kronecker factor*, generated by the eigenfunctions of (that is to say, those obeying an identity for some -invariant ); for ergodic systems, it is the largest factor isomorphic to rotation on a compact abelian group. In analogy to the Gowers norm, one can then define the Host-Kra seminorm by ; it is orthogonal to the

*profinite factor*, generated by the periodic sets of (or equivalently, by those eigenfunctions whose eigenvalue is a root of unity); for ergodic systems, it is the largest factor isomorphic to rotation on a profinite abelian group.

I’m collecting in this blog post a number of simple group-theoretic lemmas, all of the following flavour: if is a subgroup of some product of groups, then one of three things has to happen:

- ( too small) is contained in some proper subgroup of , or the elements of are constrained to some sort of equation that the full group does not satisfy.
- ( too large) contains some non-trivial normal subgroup of , and as such actually arises by pullback from some subgroup of the quotient group .
- (Structure) There is some useful structural relationship between and the groups .

It is perhaps easiest to explain the flavour of these lemmas with some simple examples, starting with the case where we are just considering subgroups of a single group .

Lemma 1. Let be a subgroup of a group . Then exactly one of the following holds:

- (i) ( too small) There exists a non-trivial group homomorphism into a group such that for all .
- (ii) ( normally generates ) is generated as a group by the conjugates of .

*Proof:* Let be the group normally generated by , that is to say the group generated by the conjugates of . This is a normal subgroup of containing (indeed it is the smallest such normal subgroup). If is all of we are in option (ii); otherwise we can take to be the quotient group and to be the quotient map. Finally, if (i) holds, then all of the conjugates of lie in the kernel of , and so (ii) cannot hold.

Here is a “dual” to the above lemma:

Lemma 2. Let be a subgroup of a group . Then exactly one of the following holds:

- (i) ( too large) is the pullback of some subgroup of for some non-trivial normal subgroup of , where is the quotient map.
- (ii) ( is core-free) does not contain any non-trivial conjugacy class .

*Proof:* Let be the normal core of , that is to say the intersection of all the conjugates of . This is the largest normal subgroup of that is contained in . If is non-trivial, we can quotient it out and end up with option (i). If instead is trivial, then there is no non-trivial element that lies in the core, hence no non-trivial conjugacy class lies in and we are in option (ii). Finally, if (i) holds, then every conjugacy class of an element of is contained in and hence in , so (ii) cannot hold.
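Both lemmas are easy to explore computationally in a small finite group. The following sketch (my own illustration, not from the post) takes the subgroup generated by a single transposition in the symmetric group on four letters, and computes its normal closure (Lemma 1) and its normal core (Lemma 2):

```python
from itertools import permutations

# Permutations of {0,1,2,3} represented as tuples; G = S_4.

def compose(p, q):          # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def closure(gens):          # subgroup generated by gens (finite group, so BFS suffices)
    e = tuple(range(4))
    elems, frontier = {e}, {e}
    while frontier:
        new = {compose(a, g) for a in frontier for g in gens} - elems
        elems |= new
        frontier = new
    return elems

G = set(permutations(range(4)))
H = closure([(1, 0, 2, 3)])          # subgroup generated by the transposition (0 1)

# Lemma 1: normal closure of H = group generated by all conjugates g h g^{-1}.
conjugates = {compose(g, compose(h, inverse(g))) for g in G for h in H}
normal_closure = closure(list(conjugates))

# Lemma 2: normal core of H = intersection of all conjugates g H g^{-1}.
core = set(G)
for g in G:
    core &= {compose(g, compose(h, inverse(g))) for h in H}

print(len(normal_closure))   # 24: transpositions normally generate S_4, so option (ii) of Lemma 1
print(len(core))             # 1: H is core-free in S_4, so option (ii) of Lemma 2
```

Here H lands in the "normally generates" case of Lemma 1 but in the "core-free" case of Lemma 2, illustrating that the two dichotomies are independent of each other.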

For subgroups of nilpotent groups, we have a nice dichotomy that detects properness of a subgroup through abelian representations:

Lemma 3. Let be a subgroup of a nilpotent group . Then exactly one of the following holds:

- (i) ( too small) There exists a non-trivial group homomorphism into an abelian group such that for all .
- (ii) .

Informally: if is a variable ranging in a subgroup of a nilpotent group , then either is unconstrained (in the sense that it really ranges in all of ), or it obeys some abelian constraint .

*Proof:* By definition of nilpotency, the lower central series of terminates at the trivial group after finitely many steps.

Since is a normal subgroup of , is also a subgroup of . Suppose first that is a proper subgroup of ; then the quotient map is a non-trivial homomorphism to an abelian group that annihilates , and we are in option (i). Thus we may assume that , and thus

Note that, modulo the normal group , commutes with ; hence , and thus . We conclude that . One can continue this argument by induction to show that for every ; taking large enough, we end up in option (ii). Finally, it is clear that (i) and (ii) cannot both hold.
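As a toy instance of Lemma 3 (my own example, not from the post), one can take the discrete Heisenberg group mod 3, with elements (a, b, c) multiplying as upper unitriangular matrices. The abelianization map is (a, b, c) ↦ (a, b); a subgroup that fills the abelianization must, by the nilpotency argument above, be the whole group, while a proper subgroup is killed by some nontrivial character of the abelianization:

```python
# Discrete Heisenberg group over Z/3: elements (a, b, c), |G| = 27.
P = 3

def mul(x, y):
    a, b, c = x
    d, e, f = y
    return ((a + d) % P, (b + e) % P, (c + f + a * e) % P)

def closure(gens):          # subgroup generated by gens
    e = (0, 0, 0)
    elems, frontier = {e}, {e}
    while frontier:
        new = {mul(x, g) for x in frontier for g in gens} - elems
        elems |= new
        frontier = new
    return elems

# x and y together fill the abelianization; the commutator [x, y] = (0, 0, 1)
# is recovered, so the subgroup they generate is all of G: option (ii).
full = closure([(1, 0, 0), (0, 1, 0)])
print(len(full))                 # 27

# x alone generates a proper subgroup; the character (a, b, c) -> b mod 3 is a
# nontrivial homomorphism to an abelian group vanishing on it: option (i).
H = closure([(1, 0, 0)])
print(len(H))                    # 3
print({b for (a, b, c) in H})    # {0}: the character annihilates H
```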

Remark 4When the group is locally compact and is closed, one can take the homomorphism in Lemma 3 to be continuous, and by using Pontryagin duality one can also take the target group to be the unit circle . Thus is now a character of . Similar considerations hold for some of the later lemmas in this post. Discrete versions of this above lemma, in which the group is replaced by some orbit of a polynomial map on a nilmanifold, were obtained by Leibman and are important in the equidistribution theory of nilmanifolds; see this paper of Ben Green and myself for further discussion.

Here is an analogue of Lemma 3 for special linear groups, due to Serre (IV-23):

Lemma 5. Let be a prime, and let be a closed subgroup of , where is the ring of -adic integers. Then exactly one of the following holds:

- (i) ( too small) There exists a proper subgroup of such that for all .
- (ii) .

*Proof:* It is a standard fact that the reduction of mod is , hence (i) and (ii) cannot both hold.

Suppose that (i) fails, then for every there exists such that , which we write as

We now claim inductively that for any and , there exists with ; taking limits as using the closed nature of will then place us in option (ii). The case is already handled, so now suppose . If , we see from the case that we can write where and . Thus to establish the claim it suffices to do so under the additional hypothesis that .

First suppose that for some with . By the case, we can find of the form for some . Raising to the power and using and , we note that

giving the claim in this case. Any matrix of trace zero with coefficients in is a linear combination of , , and is thus a sum of matrices that square to zero. Hence, if is of the form , then for some matrix of trace zero, and thus one can write (up to errors) as a finite product of matrices of the form with . By the previous arguments, such a matrix lies in up to errors, and hence does also. This completes the proof of the case.

Now suppose and the claim has already been proven for . Arguing as before, it suffices to close the induction under the additional hypothesis that , thus we may write . By induction hypothesis, we may find with . But then , and we are done.

We note a generalisation of Lemma 3 that involves two groups rather than just one:

Lemma 6. Let be a subgroup of a product of two nilpotent groups . Then exactly one of the following holds:

- (i) ( too small) There exist group homomorphisms , into an abelian group , with non-trivial, such that for all , where is the projection of to .
- (ii) for some subgroup of .

*Proof:* Consider the group . This is a subgroup of . If it is all of , then must be a Cartesian product and option (ii) holds. So suppose that this group is a proper subgroup of . Applying Lemma 3, we obtain a non-trivial group homomorphism into an abelian group such that whenever . For any in the projection of to , there is thus a unique quantity such that whenever . One easily checks that is a homomorphism, so we are in option (i).

Finally, it is clear that (i) and (ii) cannot both hold, since (i) places a non-trivial constraint on the second component of an element of for any fixed choice of .

We also note a similar variant of Lemma 5, which is Lemme 10 of this paper of Serre:

Lemma 7. Let be a prime, and let be a closed subgroup of . Then exactly one of the following holds:

- (i) ( too small) There exists a proper subgroup of such that for all .
- (ii) .

*Proof:* As in the proof of Lemma 5, (i) and (ii) cannot both hold. Suppose that (i) does not hold, then for any there exists such that . Similarly, there exists with . Taking commutators of and , we can find with . Continuing to take commutators with and extracting a limit (using compactness and the closed nature of ), we can find with . Thus, the closed subgroup of does not obey conclusion (i) of Lemma 5, and must therefore obey conclusion (ii); that is to say, contains . Similarly contains ; multiplying, we end up in conclusion (ii).

The most famous result of this type is of course the Goursat lemma, which we phrase here in a somewhat idiosyncratic manner to conform to the pattern of the other lemmas in this post:

Lemma 8 (Goursat lemma). Let be a subgroup of a product of two groups . Then one of the following holds:

- (i) ( too small) is contained in for some subgroups , of respectively, with either or (or both).
- (ii) ( too large) There exist normal subgroups of respectively, not both trivial, such that arises from a subgroup of , where is the quotient map.
- (iii) (Isomorphism) There is a group isomorphism such that is the graph of . In particular, and are isomorphic.

Here we almost have a trichotomy, because option (iii) is incompatible with both option (i) and option (ii). However, it is possible for options (i) and (ii) to simultaneously hold.

*Proof:* If either of the projections , from to the factor groups (thus and ) fails to be surjective, then we are in option (i). Thus we may assume that these maps are surjective.

Next, if either of the maps , fail to be injective, then at least one of the kernels , is non-trivial. We can then descend down to the quotient and end up in option (ii).

The only remaining case is when the group homomorphisms are both bijections, hence are group isomorphisms. If we set we end up in case (iii).
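One can watch this trichotomy play out in a small abelian example. The following sketch (my own, not from the post, with the cyclic group of order four standing in for both factors) enumerates the subgroups of the product and classifies each by the three Goursat cases:

```python
from itertools import product

# G1 = G2 = Z/4; subgroups of G1 x G2 generated by pairs of elements.
N = 4
G = list(product(range(N), range(N)))

def closure(gens):
    elems, frontier = {(0, 0)}, {(0, 0)}
    while frontier:
        new = {((a + c) % N, (b + d) % N)
               for (a, b) in frontier for (c, d) in gens} - elems
        elems |= new
        frontier = new
    return frozenset(elems)

subgroups = {closure(list(gens)) for gens in product(G, repeat=2)}

graphs = 0
for H in subgroups:
    proj1 = {a for (a, b) in H}
    proj2 = {b for (a, b) in H}
    ker1 = {b for (a, b) in H if a == 0}   # second coordinates over the identity of G1
    if len(proj1) < N or len(proj2) < N:
        continue                            # case (i): a projection is not surjective
    if len(ker1) > 1:
        continue                            # case (ii): nontrivial normal subgroup inside H
    graphs += 1                             # case (iii): graph of an automorphism of Z/4

print(graphs)   # 2: the automorphisms of Z/4 are multiplication by 1 or by 3
```

The two surviving subgroups are exactly the graphs {(a, a)} and {(a, 3a)}, matching the count of automorphisms of the cyclic group of order four.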

We can combine the Goursat lemma with Lemma 3 to obtain a variant:

Corollary 9 (Nilpotent Goursat lemma). Let be a subgroup of a product of two nilpotent groups . Then one of the following holds:

- (i) ( too small) There exists and a non-trivial group homomorphism such that for all .
- (ii) ( too large) There exist normal subgroups of respectively, not both trivial, such that arises from a subgroup of .
- (iii) (Isomorphism) There is a group isomorphism such that is the graph of . In particular, and are isomorphic.

*Proof:* If Lemma 8(i) holds, then by applying Lemma 3 we arrive at our current option (i). The other options are unchanged from Lemma 8, giving the claim.

Now we present a lemma involving three groups that is known in ergodic theory contexts as the “Furstenberg-Weiss argument”, as an argument of this type arose in this paper of Furstenberg and Weiss, though it may also appear implicitly in other contexts. It has the remarkable feature of being able to enforce the abelian nature of one of the groups once the other options of the lemma are excluded.

Lemma 10 (Furstenberg-Weiss lemma). Let be a subgroup of a product of three groups . Then one of the following holds:

- (i) ( too small) There is some proper subgroup of and some such that whenever and .
- (ii) ( too large) There exists a non-trivial normal subgroup of with abelian, such that arises from a subgroup of , where is the quotient map.
- (iii) is abelian.

*Proof:* If the group is a proper subgroup of , then we are in option (i) (with ), so we may assume that

As before, we can combine this with previous lemmas to obtain a variant in the nilpotent case:

Lemma 11 (Nilpotent Furstenberg-Weiss lemma). Let be a subgroup of a product of three nilpotent groups . Then one of the following holds:

- (i) ( too small) There exists and group homomorphisms , for some abelian group , with non-trivial, such that whenever , where is the projection of to .
- (ii) ( too large) There exists a non-trivial normal subgroup of , such that arises from a subgroup of .
- (iii) is abelian.

Informally, this lemma asserts that if is a variable ranging in some subgroup , then either (i) there is a non-trivial abelian equation that constrains in terms of either or ; (ii) is not fully determined by and ; or (iii) is abelian.

*Proof:* Applying Lemma 10, we are already done if conclusions (ii) or (iii) of that lemma hold, so suppose instead that conclusion (i) holds for say . Then the group is not of the form , since it only contains those with . Applying Lemma 6, we obtain group homomorphisms , into an abelian group , with non-trivial, such that whenever , placing us in option (i).

The Furstenberg-Weiss argument is often used (though not precisely in this form) to establish that certain key structure groups arising in ergodic theory are abelian; see for instance Proposition 6.3(1) of this paper of Host and Kra for an example.

One can get more structural control on in the Furstenberg-Weiss lemma in option (iii) if one also broadens options (i) and (ii):

Lemma 12 (Variant of Furstenberg-Weiss lemma). Let be a subgroup of a product of three groups . Then one of the following holds:

- (i) ( too small) There is some proper subgroup of for some such that whenever . (In other words, the projection of to is not surjective.)
- (ii) ( too large) There exist normal subgroups of respectively, not all trivial, such that arises from a subgroup of , where is the quotient map.
- (iii) are abelian and isomorphic. Furthermore, there exist isomorphisms , , to an abelian group such that

The ability to encode an abelian additive relation in terms of group-theoretic properties is vaguely reminiscent of the group configuration theorem.

*Proof:* We apply Lemma 10. Option (i) of that lemma implies option (i) of the current lemma, and similarly for option (ii), so we may assume without loss of generality that is abelian. By permuting we may also assume that are abelian, and will use additive notation for these groups.

We may assume that the projections of to and are surjective, else we are in option (i). The group is then a normal subgroup of ; we may assume it is trivial, otherwise we can quotient it out and be in option (ii). Thus can be expressed as a graph for some map . As is a group, must be a homomorphism, and we can write it as for some homomorphisms , . Thus elements of obey the constraint .

If or fails to be injective, then we can quotient out by their kernels and end up in option (ii). If fails to be surjective, then the projection of to also fails to be surjective (since for , is now constrained to lie in the range of ) and we are in option (i). Similarly if fails to be surjective. Thus we may assume that the homomorphisms are bijective and thus group isomorphisms. Setting to the identity, we arrive at option (iii).

Combining this lemma with Lemma 3, we obtain a nilpotent version:

Corollary 13 (Variant of nilpotent Furstenberg-Weiss lemma). Let be a subgroup of a product of three nilpotent groups . Then one of the following holds:

- (i) ( too small) There are homomorphisms , to some abelian group for some , with not both trivial, such that whenever .
- (ii) ( too large) There exist normal subgroups of respectively, not all trivial, such that arises from a subgroup of , where is the quotient map.
- (iii) are abelian and isomorphic. Furthermore, there exist isomorphisms , , to an abelian group such that

Here is another variant of the Furstenberg-Weiss lemma, attributed to Serre by Ribet (see Lemma 3.3):

Lemma 14 (Serre’s lemma). Let be a subgroup of a finite product of groups with . Then one of the following holds:

- (i) ( too small) There is some proper subgroup of for some such that whenever .
- (ii) ( too large) One has .
- (iii) One of the has a non-trivial abelian quotient .

*Proof:* The claim is trivial for (and we don’t need (iii) in this case), so suppose that . We can assume that each is a perfect group, ; otherwise we can quotient out by the commutator and arrive at option (iii). Similarly, we may assume that all the projections of to , are surjective, otherwise we are in option (i).

We now claim that for any and any , one can find with for and . For this follows from the surjectivity of the projection of to . Now suppose inductively that and the claim has already been proven for . Since is perfect, it suffices to establish this claim for of the form for some . By induction hypothesis, we can find with for and . By surjectivity of the projection of to , one can find with and . Taking commutators of these two elements, we obtain the claim.

Setting , we conclude that contains . Similarly for permutations. Multiplying these together we see that contains all of , and we are in option (ii).

In this previous blog post I noted the following easy application of Cauchy-Schwarz:

Lemma 1 (Van der Corput inequality). Let be unit vectors in a Hilbert space . Then

*Proof:* The left-hand side may be written as for some unit complex numbers . By Cauchy-Schwarz we have

As a corollary, correlation becomes transitive in a statistical sense (even though it is not transitive in an absolute sense):

Corollary 2 (Statistical transitivity of correlation). Let be unit vectors in a Hilbert space such that for all and some . Then we have for at least of the pairs .

*Proof:* From the lemma, we have

One drawback with this corollary is that it does not tell us *which* pairs correlate. In particular, if the vector also correlates with a separate collection of unit vectors, the pairs for which correlate may have no intersection whatsoever with the pairs for which correlate (except of course on the diagonal where they must correlate).
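For concreteness, here is a quick numerical check (my own, not from the post) of the van der Corput inequality, in the strong form that the square of the sum of the absolute correlations of a unit vector v with unit vectors v_1, …, v_n is bounded by the sum of all absolute inner products between the v_i:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 20

# random unit vectors in R^d
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
vs = rng.standard_normal((n, d))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)

lhs = np.abs(v @ vs.T).sum() ** 2    # (sum_i |<v, v_i>|)^2
rhs = np.abs(vs @ vs.T).sum()        # sum_{i,j} |<v_i, v_j>| (diagonal included)
print(lhs <= rhs + 1e-9)             # True
```

The proof via Cauchy-Schwarz shows this holds for every choice of vectors, not just random ones; the random instance here is merely a sanity check.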

While working on an ongoing research project, I recently found that there is a very simple way to get around this problem by exploiting the tensor power trick:

Corollary 3 (Simultaneous statistical transitivity of correlation). Let be unit vectors in a Hilbert space for and such that for all , and some . Then there are at least pairs such that . In particular (by Cauchy-Schwarz) we have for all .

*Proof:* Apply Corollary 2 to the unit vectors and , in the tensor power Hilbert space .
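The mechanism driving this proof is simply that inner products multiply under tensor products. A two-factor sanity check (my own, using `numpy.kron` for the tensor product of vectors):

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(d):
    x = rng.standard_normal(d)
    return x / np.linalg.norm(x)

u1, w1 = unit(5), unit(5)
u2, w2 = unit(7), unit(7)

# <u1 (x) u2, w1 (x) w2> = <u1, w1> <u2, w2>
lhs = np.kron(u1, u2) @ np.kron(w1, w2)
rhs = (u1 @ w1) * (u2 @ w2)
print(abs(lhs - rhs) < 1e-12)   # True
```

In particular, the tensored vectors in the proof of Corollary 3 are again unit vectors, and their pairwise correlations are the products of the individual correlations, which is exactly what makes Corollary 2 applicable in the tensor power space.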

It is surprisingly difficult to obtain even a qualitative version of the above conclusion (namely, if correlates with all of the , then there are many pairs for which correlates with for all simultaneously) without some version of the tensor power trick. For instance, even the powerful Szemerédi regularity lemma, when applied to the set of pairs for which one has correlation of , for a single , does not seem to be sufficient. However, there is a reformulation of the argument using the Schur product theorem as a substitute for (or really, a disguised version of) the tensor power trick. For simplicity of notation let us just work with real Hilbert spaces to illustrate the argument. We start with the identity

where is the orthogonal projection to the complement of . This implies a Gram matrix inequality for each , where denotes the claim that is positive semi-definite. By the Schur product theorem, we conclude that , and hence for a suitable choice of signs , . One now argues as in the proof of Corollary 2. A separate application of tensor powers to amplify correlations was also noted in this previous blog post giving a cheap version of the Kabatjanskii-Levenstein bound, but this does not seem to be directly related to the current application.
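The Schur product theorem invoked above is itself easy to test numerically. The following check (my own) verifies that the entrywise (Hadamard) product of two random positive semi-definite Gram matrices has no negative eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# two PSD matrices, built as Gram matrices X X^T
X = rng.standard_normal((n, n)); A = X @ X.T
Y = rng.standard_normal((n, n)); B = Y @ Y.T

had = A * B                       # Hadamard (entrywise) product
eigs = np.linalg.eigvalsh(had)
print(eigs.min() > -1e-9)         # True: PSD up to rounding error
```

Conceptually the theorem is the tensor power trick in disguise: the Hadamard product of two Gram matrices is a principal submatrix of the Gram matrix of the tensor products, hence again positive semi-definite.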

The (classical) Möbius function is the unique function that obeys the classical Möbius inversion formula:

Proposition 1 (Classical Möbius inversion). Let be functions from the natural numbers to an additive group . Then the following two claims are equivalent:

- (i) for all .
- (ii) for all .
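A direct computational check of this equivalence (my own sketch, with integer-valued functions standing in for the additive group): define g, form f by summing g over divisors, and recover g by Möbius inversion.

```python
def mobius(n):
    """Classical Mobius function via trial division."""
    mu, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # a squared prime factor: mu vanishes
            mu = -mu
        p += 1
    if m > 1:                     # one remaining prime factor
        mu = -mu
    return mu

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def g(n):                         # arbitrary test function
    return n * n + 1

def f(n):                         # direction (i): f(n) = sum_{d|n} g(d)
    return sum(g(d) for d in divisors(n))

def g_rec(n):                     # direction (ii): g(n) = sum_{d|n} mu(n/d) f(d)
    return sum(mobius(n // d) * f(d) for d in divisors(n))

print(all(g_rec(n) == g(n) for n in range(1, 50)))   # True
```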

There is a generalisation of this formula to (finite) posets, due to Hall, in which one sums over chains in the poset:

Proposition 2 (Poset Möbius inversion). Let be a finite poset, and let be functions from that poset to an additive group . Then the following two claims are equivalent: (Note from the finite nature of that the inner sum in (ii) is vacuous for all but finitely many .)

- (i) for all , where is understood to range in .
- (ii) for all , where in the inner sum are understood to range in with the indicated ordering.

Comparing Proposition 2 with Proposition 1, it is natural to refer to the function as the Möbius function of the poset; the condition (ii) can then be written as

*Proof:* If (i) holds, then we have for any . Iterating this we obtain (ii). Conversely, from (ii) and separating out the term, and grouping all the other terms based on the value of , we obtain (1), and hence (i).

In fact it is not completely necessary that the poset be finite; an inspection of the proof shows that it suffices that every element of the poset has only finitely many predecessors .
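The chain-sum description of the poset Möbius function can be implemented directly. The following sketch (my own) computes it on the divisor lattice of 30 by recursing on the second element of each chain, and recovers the classical Möbius function:

```python
from functools import lru_cache

# Poset: divisors of 30, ordered by divisibility.
N = 30
elems = [d for d in range(1, N + 1) if N % d == 0]   # [1, 2, 3, 5, 6, 10, 15, 30]

def less(x, y):
    return x != y and y % x == 0

@lru_cache(maxsize=None)
def mu(x, y):
    """Sum of (-1)^k over chains x = x_0 < x_1 < ... < x_k = y."""
    if x == y:
        return 1                  # the empty chain
    # group chains by their second element x_1 = z with x < z <= y
    return -sum(mu(z, y) for z in elems if less(x, z) and (z == y or less(z, y)))

# mu(1, d) should equal the classical Mobius function of d
print([mu(1, d) for d in elems])   # [1, -1, -1, -1, 1, 1, 1, -1]
```

This equality between the chain sum and the classical Möbius function is exactly the combinatorial fact, due to Hall, alluded to in the text.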

It is not difficult to see that Proposition 2 includes Proposition 1 as a special case, after verifying the combinatorial fact that the quantity

is equal to when divides , and vanishes otherwise. I recently discovered that Proposition 2 can also lead to a useful variant of the inclusion-exclusion principle. The classical version of this principle can be phrased in terms of indicator functions: if are subsets of some set , then

In particular, if there is a finite measure on for which are all measurable, we have

One drawback of this formula is that there are exponentially many terms on the right-hand side: of them, in fact. However, in many cases of interest there are “collisions” between the intersections (for instance, perhaps many of the pairwise intersections agree), in which case there is an opportunity to collect terms and hopefully achieve some cancellation. It turns out that it is possible to use Proposition 2 to do this, in which one only needs to sum over chains in the resulting poset of intersections:

Proposition 3 (Hall-type inclusion-exclusion principle). Let be subsets of some set , and let be the finite poset formed by intersections of some of the (with the convention that is the empty intersection), ordered by set inclusion. Then for any , one has where are understood to range in . In particular (setting to be the empty intersection) if the are all proper subsets of then we have In particular, if there is a finite measure on for which are all measurable, we have

Using the Möbius function on the poset , one can write these formulae as

and
*Proof:* It suffices to establish (2) (to derive (3) from (2) observe that all the are contained in one of the , so the effect of may be absorbed into ). Applying Proposition 2, this is equivalent to the assertion that

Example 4. If with , and are all distinct, then we have for any finite measure on that makes measurable that due to the four chains , , , of length one, and the three chains , , of length two. Note that this expansion just has six terms in it, as opposed to the given by the usual inclusion-exclusion formula, though of course one can reduce the number of terms by combining the factors. This may not seem particularly impressive, especially if one views the term as really being three terms instead of one, but if we add a fourth set with for all , the formula now becomes and we begin to see more cancellation as we now have just seven terms (or ten if we count as four terms) instead of terms.
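A concrete instance of this collision phenomenon (my own small example, in the spirit of Example 4): three four-element sets whose pairwise intersections all coincide with one common set E. With counting measure, the chain formula then reads m(A1 ∪ A2 ∪ A3) = m(A1) + m(A2) + m(A3) − 2 m(E), six terms instead of the seven of ordinary inclusion-exclusion:

```python
# Three sets with all pairwise (and triple) intersections equal to E = {0, 1}.
A1 = {0, 1, 2, 3}
A2 = {0, 1, 4, 5}
A3 = {0, 1, 6, 7}
E = A1 & A2
assert E == A1 & A3 == A2 & A3 == A1 & A2 & A3   # the "collision" of intersections

lhs = len(A1 | A2 | A3)                           # counting measure of the union
rhs = len(A1) + len(A2) + len(A3) - 2 * len(E)    # chain formula
print(lhs == rhs)                                 # True: 8 on both sides
```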

Example 5 (Variant of Legendre sieve). If are natural numbers, and is some sequence of complex numbers with only finitely many terms non-zero, then by applying the above proposition to the sets and with equal to counting measure weighted by the we obtain a variant of the Legendre sieve where range over the set formed by taking least common multiples of the (with the understanding that the empty least common multiple is ), and denotes the assertion that divides but is strictly less than . I am curious to know whether this version of the Legendre sieve already appears in the literature (and similarly for the other applications of Proposition 2 given here).

If the poset has bounded depth then the number of terms in Proposition 3 can end up being just polynomially large in rather than exponentially large. Indeed, if all chains in have length at most then the number of terms here is at most . (The examples (4), (5) are ones in which the depth is equal to two.) I hope to report in a later post on how this version of inclusion-exclusion with polynomially many terms can be useful in an application.

Actually in our application we need an abstraction of the above formula, in which the indicator functions are replaced by more abstract idempotents:

Proposition 6 (Hall-type inclusion-exclusion principle for idempotents). Let be pairwise commuting elements of some ring with identity, which are all idempotent (thus for ). Let be the finite poset formed by products of the (with the convention that is the empty product), ordered by declaring when (note that all the elements of are idempotent so this is a partial ordering). Then for any , one has where are understood to range in . In particular (setting ) if all the are not equal to then we have

Morally speaking this proposition is equivalent to the previous one after applying a “spectral theorem” to simultaneously diagonalise all of the , but it is quicker to just adapt the previous proof to establish this proposition directly. Using the Möbius function for , we can rewrite these formulae as

and
*Proof:* Again it suffices to verify (6). Using Proposition 2 as before, it suffices to show that

Consider a disk in the complex plane. If one applies an affine-linear map to this disk, one obtains

For maps that are merely holomorphic instead of affine-linear, one has some variants of this assertion, which I am recording here mostly for my own reference:

Theorem 1 (Holomorphic images of disks). Let be a disk in the complex plane, and be a holomorphic function with .

- (i) (Open mapping theorem or inverse function theorem) contains a disk for some . (In fact there is even a holomorphic right inverse of from to .)
- (ii) (Bloch theorem) contains a disk for some absolute constant and some . (In fact there is even a holomorphic right inverse of from to .)
- (iii) (Koebe quarter theorem) If is injective, then contains the disk .
- (iv) If is a polynomial of degree , then contains the disk .
- (v) If one has a bound of the form for all and some , then contains the disk for some absolute constant . (In fact there is holomorphic right inverse of from to .)

Parts (i), (ii), (iii) of this theorem are standard, as indicated by the given links. I found part (iv) as (a consequence of) Theorem 2 of this paper of Degot, who remarks that it “seems not already known in spite of its simplicity”; an equivalent form of this result also appears in Lemma 4 of this paper of Miller. The proof is simple:

*Proof:* (Proof of (iv)) Let , then we have a lower bound for the log-derivative of at :

The constant in (iv) is completely sharp: if and is non-zero then contains the disk

but avoids the origin, thus does not contain any disk of the form . This example also shows that despite parts (ii), (iii) of the theorem, one cannot hope for a general inclusion of the form for an absolute constant . Part (v) is implicit in the standard proof of Bloch’s theorem (part (ii)), and is easy to establish:

*Proof:* (Proof of (v)) From the Cauchy inequalities one has for , hence by Taylor’s theorem with remainder for . By Rouché’s theorem, this implies that the function has a unique zero in for any , if is a sufficiently small absolute constant. The claim follows.

Note that part (v) implies part (i). A standard point picking argument also lets one deduce part (ii) from part (v):

*Proof:* (Proof of (ii)) By shrinking slightly if necessary we may assume that extends analytically to the closure of the disk . Let be the constant in (v) with ; we will prove (ii) with replaced by . If we have for all then we are done by (v), so we may assume without loss of generality that there is such that . If for all then by (v) we have

Here is another classical result stated by Alexander (and then proven by Kakeya and by Szego, but also implied by a classical theorem of Grace and Heawood) that is broadly compatible with parts (iii), (iv) of the above theorem:

Proposition 2. Let be a disk in the complex plane, and be a polynomial of degree with for all . Then is injective on .

The radius is best possible, for the polynomial has non-vanishing on , but one has , and lie on the boundary of .

If one narrows slightly to then one can quickly prove this proposition as follows. Suppose for contradiction that there exist distinct with , thus if we let be the line segment contour from to then . However, by assumption we may factor where all the lie outside of . Elementary trigonometry then tells us that the argument of only varies by less than as traverses , hence the argument of only varies by less than . Thus takes values in an open half-plane avoiding the origin and so it is not possible for to vanish.

To recover the best constant of requires some effort. By taking contrapositives and applying an affine rescaling and some trigonometry, the proposition can be deduced from the following result, known variously as the Grace-Heawood theorem or the complex Rolle theorem.

Proposition 3 (Grace-Heawood theorem). Let be a polynomial of degree such that . Then contains a zero in the closure of .

This is in turn implied by a remarkable and powerful theorem of Grace (which we shall prove shortly). Given two polynomials of degree at most , define the *apolar* form by

Theorem 4 (Grace’s theorem). Let be a circle or line in , dividing into two open connected regions . Let be two polynomials of degree at most , with all the zeroes of lying in and all the zeroes of lying in . Then .

(Contrapositively: if , then the zeroes of cannot be separated from the zeroes of by a circle or line.)

Indeed, a brief calculation reveals the identity

where is the degree polynomial . The zeroes of are for , so the Grace-Heawood theorem follows by applying Grace’s theorem with equal to the boundary of . The same method of proof gives the following nice consequence:

Theorem 5 (Perpendicular bisector theorem). Let be a polynomial such that for some distinct . Then the zeroes of cannot all lie on one side of the perpendicular bisector of . For instance, if , then the zeroes of cannot all lie in the halfplane or the halfplane .

I’d be interested in seeing a proof of this latter theorem that did not proceed via Grace’s theorem.
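Grace's theorem can be tested numerically once a normalization of the apolar form is fixed. In the sketch below (my own, using the common normalization Ap(p, q) = Σ_k (−1)^k p_k q_{n−k} / C(n, k) for polynomials of degree n, which may differ from the text's convention by a constant factor), the form is nonzero for a pair whose zero sets are separated by the unit circle, and vanishes for the apolar pair z² − 1, z² + 1, whose zeros ±1 and ±i cannot be separated by any circle or line:

```python
from math import comb

def apolar(p, q, n):
    """Apolar form of p, q given as coefficient lists [a_0, ..., a_n]."""
    return sum((-1) ** k * p[k] * q[n - k] / comb(n, k) for k in range(n + 1))

# zeros of p inside the unit circle, zeros of q outside: Ap must be nonzero
p = [0.25, -1.0, 1.0]     # (z - 1/2)^2
q = [9.0, -6.0, 1.0]      # (z - 3)^2
print(apolar(p, q, 2))    # 6.25, nonzero as Grace's theorem predicts

# an apolar pair: z^2 - 1 and z^2 + 1
print(apolar([-1.0, 0.0, 1.0], [1.0, 0.0, 1.0], 2))   # 0.0
```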

Now we give a proof of Grace’s theorem. The case can be established by direct computation, so suppose inductively that and that the claim has already been established for . Given the involvement of circles and lines it is natural to suspect that a Möbius transformation symmetry is involved. This is indeed the case and can be made precise as follows. Let denote the vector space of polynomials of degree at most , then the apolar form is a bilinear form . Each translation on the complex plane induces a corresponding map on , mapping each polynomial to its shift . We claim that the apolar form is invariant with respect to these translations:

Taking derivatives in , it suffices to establish the skew-adjointness relation , but this is clear from the alternating form of (1). Next, we see that the inversion map also induces a corresponding map on , mapping each polynomial to its inversion . From (1) we see that this map also (projectively) preserves the apolar form:

More generally, the group of Möbius transformations on the Riemann sphere acts projectively on , with each Möbius transformation mapping each to , where is the unique (up to constants) rational function that makes this a map from to (its divisor is ). Since the Möbius transformations are generated by translations and inversion, we see that the action of Möbius transformations projectively preserves the apolar form; also, we see that this action of on also moves the zeroes of each by (viewing polynomials of degree less than in as having zeroes at infinity). In particular, the hypotheses and conclusions of Grace’s theorem are preserved by this Möbius action. We can then apply such a transformation to move one of the zeroes of to infinity (thus making a polynomial of degree ), so that must now be a circle, with the zeroes of inside the circle and the remaining zeroes of outside the circle. But then . By the Gauss-Lucas theorem, the zeroes of are also inside . The claim now follows from the induction hypothesis.

A family of sets for some is a sunflower if there is a *core set* contained in each of the such that the *petal sets* are disjoint. If , let denote the smallest natural number with the property that any family of distinct sets of cardinality at most contains distinct elements that form a sunflower. The celebrated Erdös-Rado theorem asserts that is finite; in fact Erdös and Rado gave the bounds

The *sunflower conjecture* asserts in fact that the upper bound can be improved to . This remains open at present despite much effort (including a Polymath project); after a long series of improvements to the upper bound, the best general bound known currently is for all , established in 2019 by Rao (building upon a recent breakthrough a month previously of Alweiss, Lovett, Wu, and Zhang). Here we remove the easy cases or in order to make the logarithmic factor a little cleaner.
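The sunflower condition itself is cheap to verify by brute force on small families. The following sketch (my own, working directly from the definition) uses the fact that sets form a sunflower exactly when every pairwise intersection equals the common core:

```python
from itertools import combinations

def is_sunflower(sets):
    """r sets form a sunflower iff every pairwise intersection equals the core."""
    core = frozenset.intersection(*sets)
    return all(a & b == core for a, b in combinations(sets, 2))

def find_sunflower(family, r):
    """First r-subset of the family forming a sunflower, or None."""
    for combo in combinations(family, r):
        if is_sunflower(combo):
            return combo
    return None

# the six 2-element subsets of {1, 2, 3, 4}
family = [frozenset(s) for s in
          [{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}]]
flower = find_sunflower(family, 3)
print(sorted(sorted(s) for s in flower))   # [[1, 2], [1, 3], [1, 4]], core {1}
```

Of course, this exhaustive search is exponential in the family size; the content of the Erdös-Rado theorem and its refinements is that a sunflower is guaranteed to exist once the family is large enough.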

Rao’s argument used the Shannon noiseless coding theorem. It turns out that the argument can be arranged in the very slightly different language of Shannon entropy, and I would like to present it here. The argument proceeds by locating the core and petals of the sunflower separately (this strategy is also followed in Alweiss-Lovett-Wu-Zhang). In both cases the following definition will be key. In this post all random variables, such as random sets, will be understood to be discrete random variables taking values in a finite range. We always use boldface symbols to denote random variables, and non-boldface for deterministic quantities.

Definition 1 (Spread set). Let . A random set is said to be -spread if one has for all sets . A family of sets is said to be -spread if is non-empty and the random variable is -spread, where is drawn uniformly from .

The core can then be selected greedily in such a way that the remainder of a family becomes spread:

Lemma 2 (Locating the core). Let be a family of subsets of a finite set , each of cardinality at most , and let . Then there exists a “core” set of cardinality at most such that the set has cardinality at least , and such that the family is -spread. Furthermore, if and the are distinct, then .

*Proof:* We may assume is non-empty, as the claim is trivial otherwise. For any , define the quantity

Let be the set (3). Since , is non-empty. It remains to check that the family is -spread. But for any and drawn uniformly at random from one has

Observe that , and the probability is only non-zero when are disjoint, so that . The claim follows.

In view of the above lemma, the bound (2) will then follow from

Proposition 3 (Locating the petals). Let be natural numbers, and suppose that for a sufficiently large constant . Let be a finite family of subsets of a finite set , each of cardinality at most , which is -spread. Then there exist such that is disjoint.

Indeed, to prove (2), we assume that is a family of sets of cardinality greater than for some ; by discarding redundant elements and sets we may assume that is finite and that all the are contained in a common finite set . Apply Lemma 2 to find a set of cardinality such that the family is -spread. By Proposition 3 we can find such that are disjoint; since these sets have cardinality , this implies that the are distinct. Hence form a sunflower as required.

Remark 4. Proposition 3 is easy to prove if we strengthen the condition on to . In this case, we have for every , hence by the union bound we see that for any with there exists such that is disjoint from the set , which has cardinality at most . Iterating this, we obtain the conclusion of Proposition 3 in this case. This recovers a bound of the form , and by pursuing this idea a little further one can recover the original upper bound (1) of Erdős and Rado.
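The greedy selection in the proof of Lemma 2 can be sketched in code, under the natural reading that one chooses the core Y maximizing R^{|Y|} times the number of sets containing Y (exponential-time, for illustration only; the names are my own):

```python
from itertools import combinations

def locate_core(family, R):
    # Greedily pick the core Y maximizing R^{|Y|} * #{A in family : Y <= A};
    # the shifted family {A \ Y : A in family, Y <= A} is then R-spread.
    universe = sorted(set().union(*family))
    candidates = (frozenset(S) for size in range(len(universe) + 1)
                  for S in combinations(universe, size))
    core = max(candidates,
               key=lambda S: (R ** len(S)) * sum(S <= A for A in family))
    return set(core), [A - core for A in family if core <= A]
```

On the family {1,2}, {1,3}, {1,4} with R = 2, the element 1 is chosen as the core, and the shifted family consists of the disjoint singletons {2}, {3}, {4}.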

It remains to prove Proposition 3. In fact we can locate the petals one at a time, placing each petal inside a random set.

Proposition 5 (Locating a single petal). Let the notation and hypotheses be as in Proposition 3. Let be a random subset of , such that each lies in with an independent probability of . Then with probability greater than , contains one of the .

To see that Proposition 5 implies Proposition 3, we randomly partition into by placing each into one of the , chosen uniformly and independently at random. By Proposition 5 and the union bound, we see that with positive probability, it is simultaneously true for all that each contains one of the . Selecting one such for each , we obtain the required disjoint petals.
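The random partition used here is easy to simulate; each class of the partition behaves like a random subset of density 1/r (an illustrative sketch):

```python
import random

def random_partition(universe, r, rng=random):
    # Assign each element independently and uniformly to one of r classes,
    # producing a partition V_1, ..., V_r of the universe.
    classes = [set() for _ in range(r)]
    for x in universe:
        classes[rng.randrange(r)].add(x)
    return classes
```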

We will prove Proposition 5 by gradually increasing the density of the random set and arranging the sets to get quickly absorbed by this random set. The key iteration step is

Proposition 6 (Refinement inequality). Let and . Let be a random subset of a finite set which is -spread, and let be a random subset of independent of , such that each lies in with an independent probability of . Then there exists another -spread random subset of whose support is contained in the support of , such that and

Note that a direct application of the first moment method gives only the bound

but the point is that by switching from to an equivalent we can replace the factor by a quantity significantly smaller than .

One can iterate the above proposition, repeatedly replacing with (noting that this preserves the -spread nature of ) to conclude

Corollary 7 (Iterated refinement inequality). Let , , and . Let be a random subset of a finite set which is -spread, and let be a random subset of independent of , such that each lies in with an independent probability of . Then there exists another random subset of with support contained in the support of , such that

Now we can prove Proposition 5. Let be a parameter to be chosen later. Applying Corollary 7 with drawn uniformly at random from the , and setting , or equivalently , we have

In particular, if we set , so that , then by choice of we have , hence . In particular, with probability at least , there must exist such that , giving the proposition.

It remains to establish Proposition 6. This is the difficult step, and requires a clever way to find a variant of that has better containment properties in than does. The main trick is to make a conditional copy of that is conditionally independent of subject to the constraint . The point here is that this constraint implies the inclusions

and . Because of the -spread hypothesis, it is hard for to contain any fixed large set. If we could apply this observation in the contrapositive to , we could hope to get a good upper bound on the size of , and hence on , thanks to (4). One can also hope to improve such an upper bound by also employing (5), since it is also hard for the random set to contain a fixed large set. There are however difficulties with implementing this approach, due to the fact that the random sets are coupled with in a moderately complicated fashion. In Rao’s argument a somewhat complicated encoding scheme was created to give information-theoretic control on these random variables; below the fold we accomplish a similar effect by using Shannon entropy inequalities in place of explicit encoding. A certain amount of information-theoretic sleight of hand is required to decouple certain random variables to the extent that the Shannon inequalities can be effectively applied. The argument bears some resemblance to the “entropy compression method” discussed in this previous blog post; there may be a way to express the argument below more explicitly in terms of that method. (There is also some kinship with the method of dependent random choice, which is used for instance to establish the Balog-Szemerédi-Gowers lemma, and was also translated into information-theoretic language in these unpublished notes of Van Vu and myself.)

At the most recent MSRI board of trustees meeting on Mar 7 (conducted online, naturally), Nicholas Jewell (a Professor of Biostatistics and Statistics at Berkeley, also affiliated with the Berkeley School of Public Health and the London School of Hygiene and Tropical Medicine) gave a presentation on the current coronavirus epidemic entitled “2019-2020 Novel Coronavirus outbreak: mathematics of epidemics, and what it can and cannot tell us”.
The presentation (updated with Mar 18 data), hosted by David Eisenbud (the director of MSRI), together with a question and answer session, is now on Youtube:

(I am on this board, but could not make it to this particular meeting; I caught up on the presentation later, and thought it would be of interest to several readers of this blog.) While there is some mathematics in the presentation, it is relatively non-technical.

In the modern theory of additive combinatorics, a large role is played by the *Gowers uniformity norms* , where , is a finite abelian group, and is a function (one can also consider these norms in finite approximate groups such as instead of finite groups, but we will focus on the group case here for simplicity). These norms can be defined by the formula

where we use the averaging notation

for any non-empty finite set (with denoting the cardinality of ), and is the multiplicative discrete derivative operator

One reason why these norms play an important role is that they control various multilinear averages. We give two sample examples here:

We establish these claims a little later in this post.
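As a sanity check on the definition, the U² norm on Z/NZ can be computed by brute force for tiny N (O(N³) time; the helper is my own). The constant function 1 and any nonzero additive character both have U² norm exactly 1.

```python
import cmath

def gowers_u2(f, N):
    # ||f||_{U^2}^4 = E_{x,h1,h2} f(x) conj(f(x+h1)) conj(f(x+h2)) f(x+h1+h2),
    # averaged over x, h1, h2 in Z/NZ.
    total = 0
    for x in range(N):
        for h1 in range(N):
            for h2 in range(N):
                total += (f(x)
                          * f((x + h1) % N).conjugate()
                          * f((x + h2) % N).conjugate()
                          * f((x + h1 + h2) % N))
    return abs(total / N ** 3) ** 0.25
```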

In some more recent literature (e.g., this paper of Conlon, Fox, and Zhao), the role of Gowers norms has been replaced by generalisations of the *cut norm*, a concept originating from graph theory. In this blog post, it will be convenient to define these cut norms in the language of probability theory (using boldface to denote random variables).

Definition 2 (Cut norm). Let be independent random variables with ; to avoid minor technicalities we assume that these random variables are discrete and take values in a finite set. Given a random variable of these independent random variables, we define the *cut norm*

where the supremum ranges over all choices of random variables that are -bounded (thus surely), and such that does not depend on .

If , we abbreviate as .

Strictly speaking, the cut norm is only a cut semi-norm when , but we will abuse notation by referring to it as a norm nevertheless.

Example 3. If is a bipartite graph, and , are independent random variables chosen uniformly from respectively, then

where the supremum ranges over all -bounded functions , . The right-hand side is essentially the cut norm of the graph , as defined for instance by Frieze and Kannan.
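For tiny bipartite graphs the cut norm of Example 3 can be computed exactly by brute force: the objective is bilinear in a and b, so the supremum over 1-bounded real multipliers is attained at ±1-valued functions (exponential time; the names are my own).

```python
from itertools import product

def cut_norm(f, X, Y):
    # sup over 1-bounded a : X -> [-1,1] and b : Y -> [-1,1] of
    # |E_{x,y} a(x) f(x,y) b(y)|; by bilinearity it suffices to try a, b = +-1.
    best = 0.0
    for a in product([-1, 1], repeat=len(X)):
        for b in product([-1, 1], repeat=len(Y)):
            avg = sum(a[i] * f(x, y) * b[j]
                      for i, x in enumerate(X)
                      for j, y in enumerate(Y)) / (len(X) * len(Y))
            best = max(best, abs(avg))
    return best
```

For instance, the constant function 1 on a 2×2 grid has cut norm 1, while the indicator of a single edge has cut norm 1/4.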

The cut norm is basically an expectation when :

Example 4. If , we see from the definition that

If , one easily checks that

where is the conditional expectation of to the -algebra generated by all the variables other than , i.e., the -algebra generated by . In particular, if are independent random variables drawn uniformly from respectively, then

Here are some basic properties of the cut norm:

Lemma 5 (Basic properties of cut norm). Let be independent discrete random variables, and a function of these variables.

- (i) (Permutation invariance) The cut norm is invariant with respect to permutations of the , or permutations of the .
- (ii) (Conditioning) One has
where on the right-hand side we view, for each realisation of , as a function of the random variables alone, thus the right-hand side may be expanded as

- (iii) (Monotonicity) If , we have
- (iv) (Multiplicative invariances) If is a -bounded function that does not depend on one of the , then
In particular, if we additionally assume , then

- (v) (Cauchy-Schwarz) If , one has
where is a copy of that is independent of and is the random variable

- (vi) (Averaging) If and , where is another random variable independent of , and is a random variable depending on both and , then

*Proof:* The claims (i), (ii) are clear from expanding out all the definitions. The claim (iii) also easily follows from the definitions (the left-hand side involves a supremum over a more general class of multipliers , while the right-hand side omits the multiplier), as does (iv) (the multiplier can be absorbed into one of the multipliers in the definition of the cut norm). The claim (vi) follows by expanding out the definitions, and observing that all of the terms in the supremum appearing in the left-hand side also appear as terms in the supremum on the right-hand side. It remains to prove (v). By definition, the left-hand side is the supremum over all quantities of the form

where the are -bounded functions of that do not depend on . We average out in the direction (that is, we condition out the variables ), and pull out the factor (which does not depend on ), to write this as

which by Cauchy-Schwarz is bounded by

which can be expanded using the copy as

Expanding

and noting that each is -bounded and independent of for , we obtain the claim.

Now we can relate the cut norm to Gowers uniformity norms:

Lemma 6. Let be a finite abelian group, let be independent random variables uniformly drawn from for some , and let . Then

If is additionally assumed to be -bounded, we have the converse inequalities

*Proof:* Applying Lemma 5(v) times, we can bound

where are independent copies of that are also independent of . The expression inside the norm can also be written as

so by Example 4 one can write (6) as

which after some change of variables simplifies to

which by Cauchy-Schwarz is bounded by

which one can rearrange as

giving (2). A similar argument bounds

by

which gives (3).

For (4), we can reverse the above steps and expand as

which we can write as

for some -bounded function . This can in turn be expanded as

for some -bounded functions that do not depend on . By Example 4, this can be written as

which by several applications of Lemma 5(iii) and then Lemma 5(iv) can be bounded by

giving (4). A similar argument gives (5).

Now we can prove Proposition 1. We begin with part (i). By permutation we may assume , then by translation we may assume . Replacing by and by , we can write the left-hand side of (1) as

where

is a -bounded function that does not depend on . Taking to be independent random variables drawn uniformly from , the left-hand side of (1) can then be written as

which by Example 4 is bounded in magnitude by

After many applications of Lemma 5(iii), (iv), this is bounded by

By Lemma 5(ii) we may drop the variable, and then the claim follows from Lemma 6.

For part (ii), we replace by and by to write the left-hand side as

the point here is that the first factor does not involve , the second factor does not involve , and the third factor has no quadratic terms in . Letting be independent variables drawn uniformly from , we can use Example 4 to bound this in magnitude by

which by Lemma 5(i),(iii),(iv) is bounded by

and then by Lemma 5(v) we may bound this by

which by Example 4 is

Now the expression inside the expectation is the product of four factors, each of which is or applied to an affine form where depends on and is one of , , , . With probability , the four different values of are distinct, and then by part (i) we have

When they are not distinct, we can instead bound this quantity by . Taking expectations in , we obtain the claim.

The analogue of the inverse theorem for cut norms is the following claim (which I learned from Ben Green):

Lemma 7 (-type inverse theorem). Let be independent random variables drawn from a finite abelian group , and let be -bounded. Then we have

where is the group of homomorphisms from to , and .

*Proof:* Suppose first that for some , then by definition

for some -bounded . By Fourier expansion, the left-hand side is also

where . From Plancherel’s theorem we have

hence by Hölder’s inequality one has for some , and hence

Conversely, suppose (7) holds. Then there is such that

which on substitution and Example 4 implies

The term splits into the product of a factor not depending on , and a factor not depending on . Applying Lemma 5(iii), (iv) we conclude that

The claim follows.
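The Fourier-analytic ingredients of the proof (Fourier expansion and Plancherel's theorem) are easy to verify numerically on Z/NZ; the helpers below are illustrative and use my own naming.

```python
import cmath

def fourier_coeff(f, xi, N):
    # \hat f(xi) = E_{x in Z/NZ} f(x) e(-x*xi/N), with e(t) = exp(2*pi*i*t).
    return sum(f(x) * cmath.exp(-2j * cmath.pi * x * xi / N)
               for x in range(N)) / N

def plancherel_sides(f, N):
    # Plancherel: sum_xi |\hat f(xi)|^2 equals E_x |f(x)|^2.
    lhs = sum(abs(fourier_coeff(f, xi, N)) ** 2 for xi in range(N))
    rhs = sum(abs(f(x)) ** 2 for x in range(N)) / N
    return lhs, rhs
```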

The higher order inverse theorems are much less trivial (and the optimal quantitative bounds are not currently known). However, there is a useful *degree lowering* argument, due to Peluse and Prendiville, that can allow one to lower the order of a uniformity norm in some cases. We give a simple version of this argument here:

Lemma 8 (Degree lowering argument, special case). Let be a finite abelian group, let be a non-empty finite set, and let be a function of the form for some -bounded functions indexed by . Suppose that

for some and . Then one of the following claims holds (with implied constants allowed to depend on ):

- (i) (Degree lowering) one has .
- (ii) (Non-zero frequency) There exist and non-zero such that

There are more sophisticated versions of this argument in which the frequency is “minor arc” rather than “zero frequency”, and then the Gowers norms are localised to suitable large arithmetic progressions; this is implicit in the above-mentioned paper of Peluse and Prendiville.

*Proof:* One can write

and hence we conclude that

for a set of tuples of density . Applying Lemma 6 and Lemma 7, we see that for each such tuple, there exists such that

where is drawn uniformly from .

Let us adopt the convention that vanishes for not in , then from Lemma 5(ii) we have

where are independent random variables drawn uniformly from and also independent of . By repeated application of Lemma 5(iii) we then have

Expanding out and using Lemma 5(iv) repeatedly we conclude that

From definition of we then have

By Lemma 5(vi), we see that the left-hand side is less than

where is drawn uniformly from , independently of . By repeated application of Lemma 5(i), (v), we conclude that

where are independent copies of that are also independent of , . By Lemma 5(ii) and Example 4 we conclude that

with probability .

The left-hand side can be rewritten as

where is the additive version of , thus

Translating , we can simplify this a little to

If the frequency is ever non-vanishing in the event (9) then conclusion (ii) applies. We conclude that

with probability . In particular, by the pigeonhole principle, there exist such that

with probability . Expanding this out, we obtain a representation of the form

holding with probability , where the are functions that do not depend on the coordinate. From (8) we conclude that

for of the tuples . Thus by Lemma 5(ii)

By repeated application of Lemma 5(iii) we then have

and then by repeated application of Lemma 5(iv)

and then the conclusion (i) follows from Lemma 6.

As an application of degree lowering, we give an inverse theorem for the average in Proposition 1(ii), first established by Bourgain-Chang and later reproved by Peluse (by different methods from those given here):

Proposition 9. Let be a cyclic group of prime order. Suppose that one has -bounded functions such that

for some . Then either , or one has

We remark that a modification of the arguments below also gives .

*Proof:* The left-hand side of (10) can be written as

where is the *dual function*

By Cauchy-Schwarz one thus has

and hence by Proposition 1, we either have (in which case we are done) or

Writing with , we conclude that either , or that

for some and non-zero . The left-hand side can be rewritten as

where and . We can rewrite this in turn as

which is bounded by

where are independent random variables drawn uniformly from . Applying Lemma 5(v), we conclude that

However, a routine Gauss sum calculation reveals that the left-hand side is for some absolute constant because is non-zero, so that . The only remaining case to consider is when

Repeating the above arguments we then conclude that

and then

The left-hand side can be computed to equal , and the claim follows.
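The “routine Gauss sum calculation” invoked in the proof above can be checked numerically: for an odd prime p and a residue a not divisible by p, the quadratic Gauss sum has magnitude exactly √p (a sketch for illustration; the name is my own).

```python
import cmath
import math

def gauss_sum(a, p):
    # G(a) = sum_{x mod p} e(a*x^2/p); for p an odd prime and p not dividing a,
    # the classical evaluation gives |G(a)| = sqrt(p).
    return sum(cmath.exp(2j * cmath.pi * a * x * x / p) for x in range(p))
```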

This argument was given for the cyclic group setting, but the argument can also be applied to the integers (see Peluse-Prendiville) and can also be used to establish an analogue over the reals (that was first obtained by Bourgain).
