
This post is an unofficial sequel to one of my first blog posts from 2007, which was entitled “Quantum mechanics and Tomb Raider”.

One of the oldest and most famous allegories is Plato’s allegory of the cave. This allegory centers around a group of people chained to a wall in a cave, who cannot see themselves or each other, but only the two-dimensional shadows of themselves cast on the wall in front of them by some light source they cannot directly see. Because of this, they identify reality with this two-dimensional representation, and have significant conceptual difficulties in trying to view themselves (or the world as a whole) as three-dimensional, until they are freed from the cave and able to venture into the sunlight.

There is a similar conceptual difficulty when trying to understand Einstein’s theory of special relativity (and more so for general relativity, but let us focus on special relativity for now). We are very much accustomed to thinking of reality as a three-dimensional space endowed with a Euclidean geometry that we traverse in time, but in order to have the clearest view of the universe of special relativity it is better to think of reality instead as a four-dimensional spacetime endowed with a Minkowski geometry, which mathematically is similar to a (four-dimensional) Euclidean space but with a crucial change of sign in the underlying metric. Indeed, whereas the distance {ds} between two points in Euclidean space {{\bf R}^3} is given by the three-dimensional Pythagorean theorem

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2

under some standard Cartesian coordinate system {(x,y,z)} of that space, and the distance {ds} in a four-dimensional Euclidean space {{\bf R}^4} would be similarly given by

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2 + du^2

under a standard four-dimensional Cartesian coordinate system {(x,y,z,u)}, the spacetime interval {ds} in Minkowski space is given by

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2

(though in many texts the opposite sign convention {ds^2 = -dx^2 -dy^2 - dz^2 + c^2dt^2} is preferred) in spacetime coordinates {(x,y,z,t)}, where {c} is the speed of light. The geometry of Minkowski space is then quite similar algebraically to the geometry of Euclidean space (with the sign change replacing the traditional trigonometric functions {\sin, \cos, \tan}, etc. by their hyperbolic counterparts {\sinh, \cosh, \tanh}, and with various factors involving “{c}” inserted in the formulae), but also has some qualitative differences to Euclidean space, most notably a causality structure connected to light cones that has no obvious counterpart in Euclidean space.
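
To make the algebraic analogy of the previous paragraph slightly more concrete, here is one standard illustration (an addition of mine, not part of the original discussion): a rotation by an angle {\theta} in the {(x,u)} plane of the Euclidean space {{\bf R}^4} takes the form

\displaystyle  x' = x \cos \theta + u \sin \theta; \quad u' = -x \sin \theta + u \cos \theta,

which preserves {dx^2 + du^2}, while the analogous “rotation” in the {(x,t)} plane of Minkowski space (a Lorentz boost by a velocity {v = c \tanh \phi}) takes the form

\displaystyle  x' = x \cosh \phi - ct \sinh \phi; \quad ct' = -x \sinh \phi + ct \cosh \phi,

which preserves {dx^2 - c^2 dt^2}, with the hyperbolic functions replacing the trigonometric ones exactly as described above.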

That said, the analogy between Minkowski space and four-dimensional Euclidean space is strong enough that it serves as a useful conceptual aid when first learning special relativity; for instance the excellent introductory text “Spacetime physics” by Taylor and Wheeler very much adopts this view. On the other hand, this analogy doesn’t directly address the conceptual problem mentioned earlier of viewing reality as a four-dimensional spacetime in the first place, rather than as a three-dimensional space that objects move around in as time progresses. Of course, part of the issue is that we aren’t good at directly visualizing four dimensions in the first place. This latter problem can at least be easily addressed by removing one or two spatial dimensions from this framework – and indeed many relativity texts start with the simplified setting of only having one spatial dimension, so that spacetime becomes two-dimensional and can be depicted with relative ease by spacetime diagrams – but still there is conceptual resistance to the idea of treating time as another spatial dimension, since we clearly cannot “move around” in time as freely as we can in space, nor do we seem able to easily “rotate” between the spatial and temporal axes, the way that we can between the three coordinate axes of Euclidean space.

With this in mind, I thought it might be worth attempting a Plato-type allegory to reconcile the spatial and spacetime views of reality, in a way that can be used to describe (analogues of) some of the less intuitive features of relativity, such as time dilation, length contraction, and the relativity of simultaneity. I have (somewhat whimsically) decided to place this allegory in a Tolkienesque fantasy world (similarly to how my previous allegory to describe quantum mechanics was phrased in a world based on the computer game “Tomb Raider”). This is something of an experiment, and (like any other analogy) the allegory will not be able to perfectly capture every aspect of the phenomenon it is trying to represent, so any feedback to improve the allegory would be appreciated.


If {\lambda>0}, a Poisson random variable {{\bf Poisson}(\lambda)} with mean {\lambda} is a random variable taking values in the natural numbers with probability distribution

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) = k) = e^{-\lambda} \frac{\lambda^k}{k!}.

One is often interested in bounding upper tail probabilities

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u))

for {u \geq 0}, or lower tail probabilities

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u))

for {-1 < u \leq 0}. A standard tool for this is Bennett’s inequality:

Proposition 1 (Bennett’s inequality) One has

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp(-\lambda h(u))

for {u \geq 0} and

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \leq \exp(-\lambda h(u))

for {-1 < u \leq 0}, where

\displaystyle  h(u) := (1+u) \log(1+u) - u.

From the Taylor expansion {h(u) = \frac{u^2}{2} + O(u^3)} for {u=O(1)} we conclude Gaussian type tail bounds in the regime {u = o(1)} (and in particular when {u = O(1/\sqrt{\lambda})}), in the spirit of the Chernoff, Bernstein, and Hoeffding inequalities; but in the regime where {u} is large and positive one obtains a slight gain over these other classical bounds (of {\exp(- \lambda u \log u)} type, rather than {\exp(-\lambda u)}).
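
As a quick numerical illustration of Proposition 1 and the regimes just discussed, here is a short Python sketch comparing the Bennett bound against the exact upper tail computed by scipy; the parameter values {\lambda = 100} and {u = 0.5} are arbitrary choices of mine, made purely for illustration.

import math
from scipy.stats import poisson

def h(u):
    return (1 + u) * math.log(1 + u) - u

lam, u = 100.0, 0.5
threshold = math.ceil(lam * (1 + u))
exact = poisson.sf(threshold - 1, lam)   # P( Poisson(lam) >= lam(1+u) )
bennett = math.exp(-lam * h(u))          # the bound exp(-lam h(u)) from Proposition 1
print(exact, bennett)                    # the exact tail should lie below the Bennett bound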

Proof: We use the exponential moment method. For any {t \geq 0}, we have from Markov’s inequality that

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq e^{-t \lambda(1+u)} {\bf E} \exp( t {\bf Poisson}(\lambda) ).

A standard computation shows that the moment generating function of the Poisson distribution is given by

\displaystyle  {\bf E} \exp( t {\bf Poisson}(\lambda) ) = \exp( (e^t - 1) \lambda )

and hence

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp( (e^t - 1)\lambda - t \lambda(1+u) ).

For {u \geq 0}, it turns out that the right-hand side is optimized by setting {t = \log(1+u)}, in which case the right-hand side simplifies to {\exp(-\lambda h(u))}. This proves the first inequality; the second inequality is proven similarly (but now {u} and {t} are non-positive rather than non-negative). \Box
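
For readers who like to verify the optimization step symbolically, here is a small sympy sketch of the computation just performed (the symbols are generic, and this is merely a check of the algebra above).

import sympy as sp

t, u, lam = sp.symbols('t u lam', positive=True)
exponent = (sp.exp(t) - 1) * lam - t * lam * (1 + u)      # exponent in the exponential moment bound
t_star = sp.solve(sp.diff(exponent, t), t)[0]             # critical point in t
h = (1 + u) * sp.log(1 + u) - u
print(sp.simplify(t_star - sp.log(1 + u)))                # 0, i.e. the optimum is t = log(1+u)
print(sp.simplify(exponent.subs(t, t_star) + lam * h))    # 0, i.e. the optimized exponent is -lam h(u)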

Remark 2 Bennett’s inequality also applies for (suitably normalized) sums of bounded independent random variables. In some cases there are direct comparison inequalities available to relate those variables to the Poisson case. For instance, suppose {S = X_1 + \dots + X_n} is the sum of independent Boolean variables {X_1,\dots,X_n \in \{0,1\}} of total mean {\sum_{j=1}^n {\bf E} X_j = \lambda} and with {\sup_i {\bf P}(X_i=1) \leq \varepsilon} for some {0 < \varepsilon < 1}. Then for any natural number {k}, we have

\displaystyle  {\bf P}(S=k) = \sum_{1 \leq i_1 < \dots < i_k \leq n} {\bf P}(X_{i_1}=1) \dots {\bf P}(X_{i_k}=1) \prod_{i \neq i_1,\dots,i_k} {\bf P}(X_i=0)

\displaystyle  \leq \frac{1}{k!} (\sum_{i=1}^n \frac{{\bf P}(X_i=1)}{{\bf P}(X_i=0)})^k \times \prod_{i=1}^n {\bf P}(X_i=0)

\displaystyle  \leq \frac{1}{k!} (\frac{\lambda}{1-\varepsilon})^k \prod_{i=1}^n \exp( - {\bf P}(X_i = 1))

\displaystyle  \leq e^{-\lambda} \frac{\lambda^k}{(1-\varepsilon)^k k!}

\displaystyle  \leq e^{\frac{\varepsilon}{1-\varepsilon} \lambda} {\bf P}( \mathbf{Poisson}(\frac{\lambda}{1-\varepsilon}) = k).

As such, for {\varepsilon} small, one can efficiently control the tail probabilities of {S} in terms of the tail probability of a Poisson random variable of mean close to {\lambda}; this is of course very closely related to the well known fact that the Poisson distribution emerges as the limit of sums of many independent boolean variables, each of which is non-zero with small probability. See this paper of Bentkus and this paper of Pinelis for some further useful (and less obvious) comparison inequalities of this type.
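
The comparison inequality of Remark 2 is easy to test numerically; here is a short Python sketch which computes the exact distribution of {S} (a Poisson-binomial distribution) by dynamic programming and checks it against the Poisson bound. The Bernoulli parameters below are arbitrary choices of mine, used only for illustration.

import math
from scipy.stats import poisson

p = [0.05, 0.02, 0.08, 0.01, 0.04] * 20   # parameters of 100 independent Boolean variables
lam = sum(p)                              # total mean
eps = max(p)                              # sup_i P(X_i = 1)
pmf = [1.0]                               # exact pmf of S = X_1 + ... + X_n, built up one variable at a time
for pi in p:
    new = [0.0] * (len(pmf) + 1)
    for k, q in enumerate(pmf):
        new[k] += q * (1 - pi)
        new[k + 1] += q * pi
    pmf = new
for k in range(15):
    bound = math.exp(eps / (1 - eps) * lam) * poisson.pmf(k, lam / (1 - eps))
    assert pmf[k] <= bound * (1 + 1e-9)   # the comparison inequality from Remark 2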

In this note I wanted to record the observation that one can improve the Bennett bound by a small polynomial factor once one leaves the Gaussian regime {u = O(1/\sqrt{\lambda})}, in particular gaining a factor of {1/\sqrt{\lambda}} when {u \sim 1}. This observation is not difficult and is implicit in the literature (one can extract it for instance from the much more general results of this paper of Talagrand, and the basic idea already appears in this paper of Glynn), but I was not able to find a clean version of this statement in the literature, so I am placing it here on my blog. (But if a reader knows of a reference that basically contains the bound below, I would be happy to know of it.)

Proposition 3 (Improved Bennett’s inequality) One has

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}

for {u \geq 0} and

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}

for {-1 < u \leq 0}.

Proof: We begin with the first inequality. We may assume that {u \geq 1/\sqrt{\lambda}}, since otherwise the claim follows from the usual Bennett inequality. We expand out the left-hand side as

\displaystyle  e^{-\lambda} \sum_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.

Observe that for {k \geq \lambda(1+u)} that

\displaystyle  \frac{\lambda^{k+1}}{(k+1)!} \leq \frac{1}{1+u} \frac{\lambda^{k}}{k!} .

Thus the sum is dominated by the first term times a geometric series {\sum_{j=0}^\infty \frac{1}{(1+u)^j} = 1 + \frac{1}{u}}. We can thus bound the left-hand side by

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.

By the Stirling approximation, this is

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{1}{\sqrt{k}} \frac{(e\lambda)^k}{k^k}.

The expression inside the supremum is decreasing in {k} for {k > \lambda}, thus we can bound it by

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},

which simplifies to

\displaystyle  \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}

after a routine calculation.

Now we turn to the second inequality. As before we may assume that {u \leq -1/\sqrt{\lambda}}. We first dispose of a degenerate case in which {\lambda(1+u) < 1}. Here the left-hand side is just

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) = 0 ) = e^{-\lambda}

and the right-hand side is comparable to

\displaystyle  e^{-\lambda} \exp( - \lambda (1+u) \log (1+u) + \lambda(1+u) ) / \sqrt{1 + \lambda u^2 (1+u)}.

Since {\lambda(1+u) \log(1+u)} is negative (so the exponential factor is at least one) and {0 < \lambda u^2 (1+u) < 1} (so the denominator is comparable to one), we see that the right-hand side is {\gg e^{-\lambda}}, and the estimate holds in this case.

It remains to consider the regime where {u \leq -1/\sqrt{\lambda}} and {\lambda(1+u) \geq 1}. The left-hand side expands as

\displaystyle  e^{-\lambda} \sum_{k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.

The sum is dominated by its largest term (the one with {k} maximal) times a geometric series {\sum_{j=0}^\infty (1+u)^{j} = \frac{1}{|u|}}. The maximal {k} is comparable to {\lambda(1+u)}, so we can bound the left-hand side by

\displaystyle  \ll e^{-\lambda} \frac{1}{|u|} \sup_{\lambda(1+u) \ll k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.

Using the Stirling approximation as before we can bound this by

\displaystyle  \ll e^{-\lambda} \frac{1}{|u|} \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},

which simplifies to

\displaystyle  \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}

after a routine calculation. \Box

The same analysis can be reversed to show that the bounds given above are basically sharp up to constants, at least when {\lambda} (and {\lambda(1+u)}) are large.
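
Here is a quick Python sketch comparing the two bounds with the exact tail, to illustrate the gain of roughly {1/\sqrt{\lambda}} when {u \sim 1}; the values {\lambda = 1000} and {u = 1} are arbitrary choices of mine, and of course Proposition 3 only controls the tail up to an unspecified implied constant.

import math
from scipy.stats import poisson

def h(u):
    return (1 + u) * math.log(1 + u) - u

lam, u = 1000.0, 1.0
exact = poisson.sf(math.ceil(lam * (1 + u)) - 1, lam)      # P( Poisson(lam) >= lam(1+u) )
bennett = math.exp(-lam * h(u))                            # Proposition 1
improved = bennett / math.sqrt(1 + lam * min(u, u * u))    # Proposition 3, without the implied constant
print(exact, improved, bennett)                            # exact and improved should agree up to constants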

This is a spinoff from the previous post. In that post, we remarked that whenever one receives a new piece of information {E}, the prior odds {\mathop{\bf P}( H_1 ) / \mathop{\bf P}( H_0 )} between an alternative hypothesis {H_1} and a null hypothesis {H_0} is updated to a posterior odds {\mathop{\bf P}( H_1|E ) / \mathop{\bf P}( H_0|E )}, which can be computed via Bayes’ theorem by the formula

\displaystyle  \frac{\mathop{\bf P}( H_1|E )}{\mathop{\bf P}(H_0|E)} = \frac{\mathop{\bf P}(H_1)}{\mathop{\bf P}(H_0)} \times \frac{\mathop{\bf P}(E|H_1)}{\mathop{\bf P}(E|H_0)}

where {\mathop{\bf P}(E|H_1)} is the likelihood of this information {E} under the alternative hypothesis {H_1}, and {\mathop{\bf P}(E|H_0)} is the likelihood of this information {E} under the null hypothesis {H_0}. If there are no other hypotheses under consideration, then the two posterior probabilities {\mathop{\bf P}( H_1|E )}, {\mathop{\bf P}( H_0|E )} must add up to one, and so can be recovered from the posterior odds {o := \frac{\mathop{\bf P}( H_1|E )}{\mathop{\bf P}(H_0|E)}} by the formulae

\displaystyle  \mathop{\bf P}(H_1|E) = \frac{o}{1+o}; \quad \mathop{\bf P}(H_0|E) = \frac{1}{1+o}.

This gives a straightforward way to update one’s prior probabilities, and I thought I would present it in the form of a worksheet for ease of calculation:

A PDF version of the worksheet and instructions can be found here. One can fill in this worksheet in the following order:

  1. In Box 1, one enters in the precise statement of the null hypothesis {H_0}.
  2. In Box 2, one enters in the precise statement of the alternative hypothesis {H_1}. (This step is very important! As discussed in the previous post, Bayesian calculations can become extremely inaccurate if the alternative hypothesis is vague.)
  3. In Box 3, one enters in the prior probability {\mathop{\bf P}(H_0)} (or the best estimate thereof) of the null hypothesis {H_0}.
  4. In Box 4, one enters in the prior probability {\mathop{\bf P}(H_1)} (or the best estimate thereof) of the alternative hypothesis {H_1}. If only two hypotheses are being considered, we of course have {\mathop{\bf P}(H_1) = 1 - \mathop{\bf P}(H_0)}.
  5. In Box 5, one enters in the ratio {\mathop{\bf P}(H_1)/\mathop{\bf P}(H_0)} between Box 4 and Box 3.
  6. In Box 6, one enters in the precise new information {E} that one has acquired since the prior state. (As discussed in the previous post, it is important that all relevant information {E} – both supporting and invalidating the alternative hypothesis – is reported accurately. If one cannot be certain that key information has not been withheld, then Bayesian calculations become highly unreliable.)
  7. In Box 7, one enters in the likelihood {\mathop{\bf P}(E|H_0)} (or the best estimate thereof) of the new information {E} under the null hypothesis {H_0}.
  8. In Box 8, one enters in the likelihood {\mathop{\bf P}(E|H_1)} (or the best estimate thereof) of the new information {E} under the alternative hypothesis {H_1}. (This can be difficult to compute, particularly if {H_1} is not specified precisely.)
  9. In Box 9, one enters in the ratio {\mathop{\bf P}(E|H_1)/\mathop{\bf P}(E|H_0)} between Box 8 and Box 7.
  10. In Box 10, one enters in the product of Box 5 and Box 9.
  11. (Assuming there are no other hypotheses than {H_0} and {H_1}) In Box 11, enter in {1} divided by {1} plus Box 10.
  12. (Assuming there are no other hypotheses than {H_0} and {H_1}) In Box 12, enter in Box 10 divided by {1} plus Box 10. (Alternatively, one can enter in {1} minus Box 11.)

To illustrate this procedure, let us consider a standard Bayesian update problem. Suppose that, at a given point in time, {2\%} of the population is infected with COVID-19. In response to this, a company mandates COVID-19 testing of its workforce, using a cheap COVID-19 test. This test has a {20\%} chance of a false negative (testing negative when one has COVID) and a {5\%} chance of a false positive (testing positive when one does not have COVID). An employee {X} takes the mandatory test, which turns out to be positive. What is the probability that {X} actually has COVID?

We can fill out the entries in the worksheet one at a time:

  • Box 1: The null hypothesis {H_0} is that {X} does not have COVID.
  • Box 2: The alternative hypothesis {H_1} is that {X} does have COVID.
  • Box 3: In the absence of any better information, the prior probability {\mathop{\bf P}(H_0)} of the null hypothesis is {98\%}, or {0.98}.
  • Box 4: Similarly, the prior probability {\mathop{\bf P}(H_1)} of the alternative hypothesis is {2\%}, or {0.02}.
  • Box 5: The prior odds {\mathop{\bf P}(H_1)/\mathop{\bf P}(H_0)} are {0.02/0.98 \approx 0.02}.
  • Box 6: The new information {E} is that {X} has tested positive for COVID.
  • Box 7: The likelihood {\mathop{\bf P}(E|H_0)} of {E} under the null hypothesis is {5\%}, or {0.05} (the false positive rate).
  • Box 8: The likelihood {\mathop{\bf P}(E|H_1)} of {E} under the alternative is {80\%}, or {0.8} (one minus the false negative rate).
  • Box 9: The likelihood ratio {\mathop{\bf P}(E|H_1)/\mathop{\bf P}(E|H_0)} is {0.8 / 0.05 = 16}.
  • Box 10: The product of Box 5 and Box 9 is approximately {0.32}.
  • Box 11: The posterior probability {\mathop{\bf P}(H_0|E)} is approximately {1/(1+0.32) \approx 75\%}.
  • Box 12: The posterior probability {\mathop{\bf P}(H_1|E)} is approximately {0.32/(1+0.32) \approx 25\%}.

The filled worksheet looks like this:

Perhaps surprisingly, despite the positive COVID test, the employee {X} only has a {25\%} chance of actually having COVID! This is due to the relatively large false positive rate of this cheap test, and is an illustration of the base rate fallacy in statistics.
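
For readers who would rather script the worksheet than fill it in by hand, here is a minimal Python sketch of the same computation (the function name bayes_update is simply my own label for it), checked against the COVID example above.

def bayes_update(prior_H0, prior_H1, likelihood_E_given_H0, likelihood_E_given_H1):
    prior_odds = prior_H1 / prior_H0                                   # Box 5
    likelihood_ratio = likelihood_E_given_H1 / likelihood_E_given_H0   # Box 9
    o = prior_odds * likelihood_ratio                                  # Box 10: the posterior odds
    return 1 / (1 + o), o / (1 + o)                                    # Boxes 11 and 12

post_H0, post_H1 = bayes_update(0.98, 0.02, 0.05, 0.80)
print(post_H0, post_H1)   # approximately 0.75 and 0.25, matching the worksheet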

We remark that if we switch the roles of the null hypothesis and alternative hypothesis, then some of the odds in the worksheet change, but the ultimate conclusions remain unchanged:

So the question of which hypothesis to designate as the null hypothesis and which one to designate as the alternative hypothesis is largely a matter of convention.

Now let us take a superficially similar situation in which a mother observes her daughter exhibiting COVID-like symptoms, to the point where she estimates the probability of her daughter having COVID at {50\%}. She then administers the same cheap COVID-19 test as before, which returns positive. What is the posterior probability of her daughter having COVID?

One can fill out the worksheet much as before, but now with the prior probability of the alternative hypothesis raised from {2\%} to {50\%} (and the prior probability of the null hypothesis dropping from {98\%} to {50\%}). One now gets that the probability that the daughter has COVID has increased all the way to {94\%}:

Thus we see that prior probabilities can make a significant impact on the posterior probabilities.

Now we use the worksheet to analyze an infamous probability puzzle, the Monty Hall problem. Let us use the formulation given in that Wikipedia page:

Problem 1 Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

For this problem, the precise formulation of the null hypothesis and the alternative hypothesis become rather important. Suppose we take the following two hypotheses:

  • Null hypothesis {H_0}: The car is behind door number 1, and no matter what door you pick, the host will randomly reveal another door that contains a goat.
  • Alternative hypothesis {H_1}: The car is behind door number 2 or 3, and no matter what door you pick, the host will randomly reveal another door that contains a goat.
Assuming the prizes are distributed randomly, we have {\mathop{\bf P}(H_0)=1/3} and {\mathop{\bf P}(H_1)=2/3}. The new information {E} is that, after door 1 is selected, door 3 is revealed and shown to be a goat. After some thought, we conclude that {\mathop{\bf P}(E|H_0)} is equal to {1/2} (the host has a fifty-fifty chance of revealing door 3 instead of door 2) but that {\mathop{\bf P}(E|H_1)} is also equal to {1/2} (if the car is behind door 2, the host must reveal door 3, whereas if the car is behind door 3, the host cannot reveal door 3). Filling in the worksheet, we see that the new information does not in fact alter the odds, and the probability that the car is not behind door 1 remains at 2/3, so it is advantageous to switch.

However, consider the following different set of hypotheses:

  • Null hypothesis {H'_0}: The car is behind door number 1, and if you pick the door with the car, the host will reveal another door to entice you to switch. Otherwise, the host will not reveal a door.
  • Alternative hypothesis {H'_1}: The car is behind door number 2 or 3, and if you pick the door with the car, the host will reveal another door to entice you to switch. Otherwise, the host will not reveal a door.

Here we still have {\mathop{\bf P}(H'_0)=1/3} and {\mathop{\bf P}(H'_1)=2/3}, but while {\mathop{\bf P}(E|H'_0)} remains equal to {1/2}, {\mathop{\bf P}(E|H'_1)} has dropped to zero (since if the car is not behind door 1, the host will not reveal a door). So now {\mathop{\bf P}(H'_0|E)} has increased all the way to {1}, and it is not advantageous to switch! This dramatically illustrates the importance of specifying the hypotheses precisely. The worksheet is now filled out as follows:

Finally, we consider another famous probability puzzle, the Sleeping Beauty problem. Again we quote the problem as formulated on the Wikipedia page:

Problem 2 Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:
  • If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  • If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
  • In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”

Here the situation can be confusing because there are key portions of this experiment in which the observer is unconscious, but nevertheless Bayesian probability continues to operate regardless of whether the observer is conscious. To make this issue more precise, let us assume that the awakenings mentioned in the problem always occur at 8am, so in particular at 7am, Sleeping Beauty will always be unconscious.

Here, the null and alternative hypotheses are easy to state precisely:

  • Null hypothesis {H_0}: The coin landed tails.
  • Alternative hypothesis {H_1}: The coin landed heads.

The subtle thing here is to work out what the correct prior state is (in most other applications of Bayesian probability, this state is obvious from the problem). It turns out that the most reasonable choice of prior state is “unconscious at 7am, on either Monday or Tuesday, with an equal chance of each”. (Note that whatever the outcome of the coin flip is, Sleeping Beauty will be unconscious at 7am Monday and unconscious again at 7am Tuesday, so it makes sense to give each of these two states an equal probability.) The new information is then

  • New information {E}: One hour after the prior state, Sleeping Beauty is awakened.

With this formulation, we see that {\mathop{\bf P}(H_0)=\mathop{\bf P}(H_1)=1/2}, {\mathop{\bf P}(E|H_0)=1}, and {\mathop{\bf P}(E|H_1)=1/2}, so on working through the worksheet one eventually arrives at {\mathop{\bf P}(H_1|E)=1/3}, so that Sleeping Beauty should only assign a probability of {1/3} to the event that the coin landed heads.
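
As a sanity check, plugging these numbers into the bayes_update sketch from the COVID example reproduces the same answer.

post_H0, post_H1 = bayes_update(0.5, 0.5, 1.0, 0.5)   # priors 1/2 and 1/2; likelihoods 1 and 1/2
print(post_H1)                                        # 1/3, the credence for heads derived above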

There are arguments advanced in the literature to adopt the position that {\mathop{\bf P}(H_1|E)} should instead be equal to {1/2}, but I do not see a way to interpret them in this Bayesian framework without a substantial alteration to either the notion of the prior state, or by not presenting the new information {E} properly.

If one has multiple pieces of information {E_1, E_2, \dots} that one wishes to use to update one’s priors, one can do so by filling out one copy of the worksheet for each new piece of information, or by using a multi-row version of the worksheet using such identities as

\displaystyle  \frac{\mathop{\bf P}( H_1|E_1,E_2 )}{\mathop{\bf P}(H_0|E_1,E_2)} = \frac{\mathop{\bf P}(H_1)}{\mathop{\bf P}(H_0)} \times \frac{\mathop{\bf P}(E_1|H_1)}{\mathop{\bf P}(E_1|H_0)} \times \frac{\mathop{\bf P}(E_2|H_1,E_1)}{\mathop{\bf P}(E_2|H_0,E_1)}.

We leave the details of these variants of the Bayesian update problem to the interested reader. The only thing I will note though is that if a key piece of information {E} is withheld from the person filling out the worksheet, for instance if that person relies exclusively on a news source that only reports information that supports the alternative hypothesis {H_1} and omits information that debunks it, then the outcome of the worksheet is likely to be highly inaccurate, and one should only perform a Bayesian analysis when one has a high confidence that all relevant information (both favorable and unfavorable to the alternative hypothesis) is being reported to the user.

An unusual lottery result made the news recently: on October 1, 2022, the PCSO Grand Lotto in the Philippines, which draws six numbers from {1} to {55} at random, managed to draw the numbers {9, 18, 27, 36, 45, 54} (though the balls were actually drawn in the order {9, 45,36, 27, 18, 54}). In other words, they drew exactly six multiples of nine from {1} to {55}. In addition, a total of {433} tickets were bought with this winning combination, whose owners then had to split the {236} million peso jackpot (about {4} million USD) among themselves. This raised enough suspicion that there were calls for an inquiry into the Philippine lottery system, including from the minority leader of the Senate.

Whenever an event like this happens, journalists often contact mathematicians to ask the question: “What are the odds of this happening?”, and in fact I myself received one such inquiry this time around. This is a number that is not too difficult to compute – in this case, the probability of the lottery producing the six numbers {9, 18, 27, 36, 45, 54} in some order turns out to be {1} in {\binom{55}{6} = 28,989,675} – and such a number is often dutifully provided to such journalists, who in turn report it as some sort of quantitative demonstration of how remarkable the event was.

But on the previous draw of the same lottery, on September 28, 2022, the unremarkable sequence of numbers {11, 26, 33, 45, 51, 55} were drawn (again in a different order), and no tickets ended up claiming the jackpot. The probability of the lottery producing the six numbers {11, 26, 33, 45, 51, 55} is also {1} in {\binom{55}{6} = 28,989,675} – just as likely or as unlikely as the October 1 numbers {9, 18, 27, 36, 45, 54}. Indeed, the whole point of drawing the numbers randomly is to make each of the {28,989,675} possible outcomes (whether they be “unusual” or “unremarkable”) equally likely. So why is it that the October 1 lottery attracted so much attention, but the September 28 lottery did not?

Part of the explanation surely lies in the unusually large number ({433}) of lottery winners on October 1, but I will set that aspect of the story aside until the end of this post. The more general points that I want to make with these sorts of situations are:

  1. The question “what are the odds of this happening” is often easy to answer mathematically, but it is not the correct question to ask.
  2. The question “what is the probability that an alternative hypothesis is the truth” is (one of) the correct questions to ask, but is very difficult to answer (it involves both mathematical and non-mathematical considerations).
  3. The answer to the first question is one of the quantities needed to calculate the answer to the second, but it is far from the only such quantity. Most of the other quantities involved cannot be calculated exactly.
  4. However, by making some educated guesses, one can still sometimes get a very rough gauge of which events are “more surprising” than others, in that they would lead to relatively higher answers to the second question.

To explain these points it is convenient to adopt the framework of Bayesian probability. In this framework, one imagines that there are competing hypotheses to explain the world, and that one assigns a probability to each such hypothesis representing one’s belief in the truth of that hypothesis. For simplicity, let us assume that there are just two competing hypotheses to be entertained: the null hypothesis {H_0}, and an alternative hypothesis {H_1}. For instance, in our lottery example, the two hypotheses might be:

  • Null hypothesis {H_0}: The lottery is run in a completely fair and random fashion.
  • Alternative hypothesis {H_1}: The lottery is rigged by some corrupt officials for their personal gain.

At any given point in time, a person would have a probability {{\bf P}(H_0)} assigned to the null hypothesis, and a probability {{\bf P}(H_1)} assigned to the alternative hypothesis; in this simplified model where there are only two hypotheses under consideration, these probabilities must add to one, but of course if there were additional hypotheses beyond these two then this would no longer be the case.

Bayesian probability does not provide a rule for calculating the initial (or prior) probabilities {{\bf P}(H_0)}, {{\bf P}(H_1)} that one starts with; these may depend on the subjective experiences and biases of the person considering the hypothesis. For instance, one person might have quite a bit of prior faith in the lottery system, and assign the probabilities {{\bf P}(H_0) = 0.99} and {{\bf P}(H_1) = 0.01}. Another person might have quite a bit of prior cynicism, and perhaps assign {{\bf P}(H_0)=0.5} and {{\bf P}(H_1)=0.5}. One cannot use purely mathematical arguments to determine which of these two people is “correct” (or whether they are both “wrong”); it depends on subjective factors.

What Bayesian probability does do, however, is provide a rule to update these probabilities {{\bf P}(H_0)}, {{\bf P}(H_1)} in view of new information {E} to provide posterior probabilities {{\bf P}(H_0|E)}, {{\bf P}(H_1|E)}. In our example, the new information {E} would be the fact that the October 1 lottery numbers were {9, 18, 27, 36, 45, 54} (in some order). The update is given by the famous Bayes theorem

\displaystyle  {\bf P}(H_0|E) = \frac{{\bf P}(E|H_0) {\bf P}(H_0)}{{\bf P}(E)}; \quad {\bf P}(H_1|E) = \frac{{\bf P}(E|H_1) {\bf P}(H_1)}{{\bf P}(E)},

where {{\bf P}(E|H_0)} is the probability that the event {E} would have occurred under the null hypothesis {H_0}, and {{\bf P}(E|H_1)} is the probability that the event {E} would have occurred under the alternative hypothesis {H_1}. Let us divide the second equation by the first to cancel the {{\bf P}(E)} denominator, and obtain

\displaystyle  \frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) } = \frac{ {\bf P}(H_1) }{ {\bf P}(H_0) } \times \frac{ {\bf P}(E | H_1)}{{\bf P}(E | H_0)}. \ \ \ \ \ (1)

One can interpret {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} as the prior odds of the alternative hypothesis, and {\frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) } } as the posterior odds of the alternative hypothesis. The identity (1) then says that in order to compute the posterior odds {\frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) }} of the alternative hypothesis in light of the new information {E}, one needs to know three things:
  1. The prior odds {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} of the alternative hypothesis;
  2. The probability {\mathop{\bf P}(E|H_0)} that the event {E} occurs under the null hypothesis {H_0}; and
  3. The probability {\mathop{\bf P}(E|H_1)} that the event {E} occurs under the alternative hypothesis {H_1}.

As previously discussed, the prior odds {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} of the alternative hypothesis are subjective and vary from person to person; in the example earlier, the person with substantial faith in the lottery may only give prior odds of {\frac{0.01}{0.99} \approx 0.01} (99 to 1 against) of the alternative hypothesis, whereas the cynic might give odds of {\frac{0.5}{0.5}=1} (even odds). The probability {{\bf P}(E|H_0)} is the quantity that can often be calculated by straightforward mathematics; as discussed before, in this specific example we have

\displaystyle  \mathop{\bf P}(E|H_0) = \frac{1}{\binom{55}{6}} = \frac{1}{28,989,675}.

But this still leaves one crucial quantity that is unknown: the probability {{\bf P}(E|H_1)}. This is incredibly difficult to compute, because it requires a precise theory for how events would play out under the alternative hypothesis {H_1}, and in particular is very sensitive as to what the alternative hypothesis {H_1} actually is.

For instance, suppose we replace the alternative hypothesis {H_1} by the following very specific (and somewhat bizarre) hypothesis:

  • Alternative hypothesis {H'_1}: The lottery is rigged by a cult that worships the multiples of {9}, and views October 1 as their holiest day. On this day, they will manipulate the lottery to only select those balls that are multiples of {9}.

Under this alternative hypothesis {H'_1}, we have {{\bf P}(E|H'_1)=1}. So, when {E} happens, the odds of this alternative hypothesis {H'_1} will increase by the dramatic factor of {\frac{{\bf P}(E|H'_1)}{{\bf P}(E|H_0)} = 28,989,675}. So, for instance, someone who already was entertaining odds of {\frac{0.01}{0.99}} of this hypothesis {H'_1} would now have these odds multiply dramatically to {\frac{0.01}{0.99} \times 28,989,675 \approx 290,000}, so that the probability of {H'_1} would have jumped from a mere {1\%} to a staggering {99.9997\%}. This is about as strong a shift in belief as one could imagine. However, this hypothesis {H'_1} is so specific and bizarre that one’s prior odds of this hypothesis would be nowhere near as large as {\frac{0.01}{0.99}} (unless substantial prior evidence of this cult and its hold on the lottery system existed, of course). A more realistic prior odds for {H'_1} would be something like {\frac{10^{-10^{10}}}{1-10^{-10^{10}}}} – which is so minuscule that even multiplying it by a factor such as {28,989,675} barely moves the needle.

Remark 1 The contrast between alternative hypothesis {H_1} and alternative hypothesis {H'_1} illustrates a common demagogical rhetorical technique when an advocate is trying to convince an audience of an alternative hypothesis, namely to use suggestive language (“I’m just asking questions here”) rather than precise statements in order to leave the alternative hypothesis deliberately vague. In particular, the advocate may take advantage of the freedom to use a broad formulation of the hypothesis (such as {H_1}) in order to maximize the audience’s prior odds of the hypothesis, simultaneously with a very specific formulation of the hypothesis (such as {H'_1}) in order to maximize the probability of the actual event {E} occurring under this hypothesis. (A related technique is to be deliberately vague about the hypothesized competency of some suspicious actor, so that this actor could be portrayed as being extraordinarily competent when convenient to do so, while simultaneously being portrayed as extraordinarily incompetent when that instead is the more useful hypothesis.) This can lead to wildly inaccurate Bayesian updates of this vague alternative hypothesis, and so precise formulation of such hypotheses is important if one is to approach a topic from anything remotely resembling a scientific standpoint. [EDIT: as pointed out to me by a reader, this technique is a Bayesian analogue of the motte and bailey fallacy.]

At the opposite extreme, consider instead the following hypothesis:

  • Alternative hypothesis {H''_1}: The lottery is rigged by some corrupt officials, who on October 1 decide to randomly determine the winning numbers in advance, share these numbers with their collaborators, and then manipulate the lottery to choose those numbers that they selected.

If these corrupt officials are indeed choosing their predetermined winning numbers randomly, then the probability {{\bf P}(E|H''_1)} would in fact be just the same probability {\frac{1}{\binom{55}{6}} = \frac{1}{28,989,675}} as {{\bf P}(E|H_0)}, and in this case the seemingly unusual event {E} would in fact have no effect on the odds of the alternative hypothesis, because it was just as unlikely for the alternative hypothesis to generate this multiples-of-nine pattern as for the null hypothesis to. In fact, one would imagine that these corrupt officials would avoid “suspicious” numbers, such as the multiples of {9}, and only choose numbers that look random, in which case {{\bf P}(E|H''_1)} would in fact be less than {{\bf P}(E|H_0)} and so the event {E} would actually lower the odds of the alternative hypothesis in this case. (In fact, one can sometimes use this tendency of fraudsters to not generate truly random data as a statistical tool to detect such fraud; violations of Benford’s law for instance can be used in this fashion, though only in situations where the null hypothesis is expected to obey Benford’s law, as discussed in this previous blog post.)

Now let us consider a third alternative hypothesis:

  • Alternative hypothesis {H'''_1}: On October 1, the lottery machine developed a fault and now only selects numbers that exhibit unusual patterns.

Setting aside the question of precisely what faulty mechanism could induce this sort of effect, it is not clear at all how to compute {{\bf P}(E|H'''_1)} in this case. Using the principle of indifference as a crude rule of thumb, one might expect

\displaystyle  {\bf P}(E|H'''_1) \approx \frac{1}{\# \{ \hbox{unusual patterns}\}}

where the denominator is the number of patterns among the possible {\binom{55}{6}} lottery outcomes that are “unusual”. Among such patterns would presumably be the multiples-of-9 pattern {9,18,27,36,45,54}, but one could easily come up with other patterns that are equally “unusual”, such as consecutive strings {11, 12, 13, 14, 15, 16}, or the first few primes {2, 3, 5, 7, 11, 13}, or the first few squares {1, 4, 9, 16, 25, 36}, and so forth. How many such unusual patterns are there? This is too vague a question to answer with any degree of precision, but as one illustrative statistic, the Online Encyclopedia of Integer Sequences (OEIS) currently hosts about {350,000} sequences. Not all of these would begin with six distinct numbers from {1} to {55}, and several of these sequences might generate the same set of six numbers, but this does suggest that patterns that one would deem to be “unusual” could number in the thousands, tens of thousands, or more. Using this guess, we would then expect the event {E} to boost the odds of this hypothesis {H'''_1} by perhaps a thousandfold or so, which is moderately impressive. But subsequent information can counteract this effect. For instance, on October 3, the same lottery produced the numbers {8, 10, 12, 14, 26, 51}, which exhibit no unusual properties (no search results in the OEIS, for instance); if we denote this event by {E'}, then we have {{\bf P}(E'|H'''_1) \approx 0} and so this new information {E'} should drive the odds for this alternative hypothesis {H'''_1} way down again.

Remark 2 This example demonstrates another demagogical rhetorical technique that one sometimes sees (particularly in political or other emotionally charged contexts), which is to cherry-pick the information presented to their audience by informing them of events {E} which have a relatively high probability of occurring under their alternative hypothesis, but withholding information about other relevant events {E'} that have a relatively low probability of occurring under their alternative hypothesis. When confronted with such new information {E'}, a common defense of a demagogue is to modify the alternative hypothesis {H_1} to a more specific hypothesis {H'_1} that can “explain” this information {E'} (“Oh, clearly we heard about {E'} because the conspiracy in fact extends to the additional organizations {X, Y, Z} that reported {E'}”), taking advantage of the vagueness discussed in Remark 1.

Let us consider a superficially similar hypothesis:

  • Alternative hypothesis {H''''_1}: On October 1, a divine being decided to send a sign to humanity by placing an unusual pattern in a lottery.

Here we (literally) stay agnostic on the prior odds of this hypothesis, and do not address the theological question of why a divine being should choose to use the medium of a lottery to send their signs. At first glance, the probability {{\bf P}(E|H''''_1)} here should be similar to the probability {{\bf P}(E|H'''_1)}, and so perhaps one could use this event {E} to improve the odds of the existence of a divine being by a factor of a thousand or so. But note carefully that the hypothesis {H''''_1} did not specify which lottery the divine being chose to use. The PCSO Grand Lotto is just one of a dozen lotteries run by the Philippine Charity Sweepstakes Office (PCSO), and of course there are over a hundred other countries and thousands of states within these countries, each of which often run their own lotteries. Taking into account these thousands or tens of thousands of additional lotteries to choose from, the probability {{\bf P}(E|H''''_1)} now drops by several orders of magnitude, and is now basically comparable to the probability {{\bf P}(E|H_0)} coming from the null hypothesis. As such one does not expect the event {E} to have a significant impact on the odds of the hypothesis {H''''_1}, despite the small-looking nature {\frac{1}{28,989,675}} of the probability {{\bf P}(E|H_0)}.

In summary, we have failed to locate any alternative hypothesis {H_1} which

  1. Has some non-negligible prior odds of being true (and in particular is not excessively specific, as with hypothesis {H'_1});
  2. Has a significantly higher probability of producing the specific event {E} than the null hypothesis; AND
  3. Does not struggle to also produce other events {E'} that have since been observed.
One needs all three of these factors to be present in order to significantly weaken the plausibility of the null hypothesis {H_0}; in the absence of these three factors, a moderately small numerical value of {{\bf P}(E|H_0)}, such as {\frac{1}{28,989,675}} does not actually do much to affect this plausibility. In this case one needs to lay out a reasonably precise alternative hypothesis {H_1} and make some actual educated guesses towards the competing probability {{\bf P}(E|H_1)} before one can lead to further conclusions. However, if {{\bf P}(E|H_0)} is insanely small, e.g., less than {10^{-1000}}, then the possibility of a previously overlooked alternative hypothesis {H_1} becomes far more plausible; as per the famous quote of Arthur Conan Doyle’s Sherlock Holmes, “When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”

We now return to the fact that for this specific October 1 lottery, there were {433} tickets that managed to select the winning numbers. Let us call this event {F}. In view of this additional information, we should now consider the ratio of the probabilities {{\bf P}(E \& F|H_1)} and {{\bf P}(E \& F|H_0)}, rather than the ratio of the probabilities {{\bf P}(E|H_1)} and {{\bf P}(E|H_0)}. If we augment the null hypothesis to

  • Null hypothesis {H'_0}: The lottery is run in a completely fair and random fashion, and the purchasers of lottery tickets also select their numbers in a completely random fashion.

then {{\bf P}(E \& F|H'_0)} is indeed of the “insanely improbable” category mentioned previously. I was not able to get official numbers on how many tickets are purchased per lottery, but let us say for the sake of argument that it is 1 million (the conclusion will not be extremely sensitive to this choice). Then the expected number of tickets that would have the winning numbers would be

\displaystyle  \frac{1 \hbox{ million}}{28,989,675} \approx 0.03

(which is broadly consistent, by the way, with the jackpot being reached every {30} draws or so), and standard probability theory suggests that the number of winners should now follow a Poisson distribution with this mean {\lambda = 0.03}. The probability of obtaining {433} winners would now be

\displaystyle  {\bf P}(F|H'_0) = \frac{\lambda^{433} e^{-\lambda}}{433!} \approx 10^{-1600}

and of course {{\bf P}(E \& F|H'_0)} would be even smaller than this. So this clearly demands some sort of explanation. But in actuality, many purchasers of lottery tickets do not select their numbers completely randomly; they often have some “lucky” numbers (e.g., based on birthdays or other personally significant dates) that they prefer to use, or choose numbers according to a simple pattern rather than go to the trouble of trying to make them truly random. So if we modify the null hypothesis to

  • Null hypothesis {H''_0}: The lottery is run in a completely fair and random fashion, but a significant fraction of the purchasers of lottery tickets only select “unusual” numbers.

then it can now become quite plausible that a highly unusual set of numbers such as {9,18,27,36,45,54} could be selected by as many as {433} purchasers of tickets; for instance, if {10\%} of the 1 million ticket holders chose to select their numbers according to some sort of pattern, then only {0.4\%} of those holders would have to pick {9,18,27,36,45,54} in order for the event {F} to hold (given {E}), and this is not extremely implausible. Given that this reasonable version of the null hypothesis already gives a plausible explanation for {F}, there does not seem to be a pressing need to locate an alternate hypothesis {H_1} that gives some other explanation (cf. Occam’s razor). [UPDATE: Indeed, given the actual layout of the tickets of this lottery, the numbers {9,18,27,36,45,54} form a diagonal, and so all that is needed in order for the modified null hypothesis {H''_0} to explain the event {F} is to postulate that a significant fraction of ticket purchasers decided to lay out their numbers in a simple geometric pattern, such as a row or diagonal.]
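
For those who wish to reproduce the {10^{-1600}}-type figure mentioned above, here is a short Python sketch; the one million ticket count is the same illustrative guess as in the text.

import math

tickets = 1_000_000
lam = tickets / math.comb(55, 6)    # expected number of jackpot winners; about 0.03
log10_prob = (433 * math.log(lam) - lam - math.lgamma(434)) / math.log(10)
print(lam, log10_prob)              # log10 of P( Poisson(lam) = 433 ), on the order of -1600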

Remark 3 In view of the above discussion, one can propose a systematic way to evaluate (in as objective a fashion as possible) rhetorical claims in which an advocate is presenting evidence to support some alternative hypothesis:
  1. State the null hypothesis {H_0} and the alternative hypothesis {H_1} as precisely as possible. In particular, avoid conflating an extremely broad hypothesis (such as the hypothesis {H_1} in our running example) with an extremely specific one (such as {H'_1} in our example).
  2. With the hypotheses precisely stated, give an honest estimate to the prior odds of this formulation of the alternative hypothesis.
  3. Consider if all the relevant information {E} (or at least a representative sample thereof) has been presented to you before proceeding further. If not, consider gathering more information {E'} from further sources.
  4. Estimate how likely the information {E} was to have occurred under the null hypothesis.
  5. Estimate how likely the information {E} was to have occurred under the alternative hypothesis (using exactly the same wording of this hypothesis as you did in previous steps).
  6. If the second estimate is significantly larger than the first, then you have cause to update your prior odds of this hypothesis (though if those prior odds were already vanishingly unlikely, this may not move the needle significantly). If not, the argument is unconvincing and no significant adjustment to the odds (except perhaps in a downwards direction) needs to be made.

Let {M_{n \times m}({\bf Z})} denote the space of {n \times m} matrices with integer entries, and let {GL_n({\bf Z})} be the group of invertible {n \times n} matrices with integer entries. The Smith normal form takes an arbitrary matrix {A \in M_{n \times m}({\bf Z})} and factorises it as {A = UDV}, where {U \in GL_n({\bf Z})}, {V \in GL_m({\bf Z})}, and {D} is a rectangular diagonal matrix, by which we mean that the principal {\min(n,m) \times \min(n,m)} minor is diagonal, with all other entries zero. Furthermore the diagonal entries of {D} are {\alpha_1,\dots,\alpha_k,0,\dots,0} for some {0 \leq k \leq \min(n,m)} (which is also the rank of {A}), where the numbers {\alpha_1,\dots,\alpha_k} (known as the invariant factors) are positive integers with {\alpha_1 | \dots | \alpha_k}. The invariant factors are uniquely determined; but there can be some freedom to modify the invertible matrices {U,V}. The Smith normal form can be computed easily; for instance, in SAGE, it can be computed by calling the {{\tt smith\_form()}} function from the matrix class. The Smith normal form is also available for principal ideal domains other than the integers, but we will only focus on the integer case here. For the purposes of this post, we will view the Smith normal form as a primitive operation on matrices that can be invoked as a “black box”.
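
For concreteness, here is a minimal Sage sketch of this black-box call (the matrix is an arbitrary small example of mine). One caveat: as I understand Sage's conventions, {{\tt smith\_form()}} returns a triple {(D,U,V)} normalised so that {D = UAV}, so the {U, V} it returns are the inverses of the {U, V} appearing in the factorisation {A = UDV} used in this post.

# run inside a Sage session
A = matrix(ZZ, [[2, 4], [6, 8]])
D, U, V = A.smith_form()    # Sage's convention: D == U*A*V
print(D)                    # rectangular diagonal; the invariant factors here should be 2 and 4
print(U * A * V == D)       # True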

In this post I would like to record how to use the Smith normal form to computationally manipulate two closely related classes of objects:

  • Subgroups {\Gamma \leq {\bf Z}^d} of a standard lattice {{\bf Z}^d} (or lattice subgroups for short);
  • Closed subgroups {H \leq ({\bf R}/{\bf Z})^d} of a standard torus {({\bf R}/{\bf Z})^d} (or closed torus subgroups for short).
(This arose for me due to the need to actually perform (with a collaborator) some numerical calculations with a number of lattice subgroups and closed torus subgroups.) It’s possible that all of these operations are already encoded in some existing object classes in a computational algebra package; I would be interested to know of such packages and classes for lattice subgroups or closed torus subgroups in the comments.

The above two classes of objects are isomorphic to each other by Pontryagin duality: if {\Gamma \leq {\bf Z}^d} is a lattice subgroup, then the orthogonal complement

\displaystyle  \Gamma^\perp := \{ x \in ({\bf R}/{\bf Z})^d: \langle x, \xi \rangle = 0 \forall \xi \in \Gamma \}

is a closed torus subgroup (with {\langle,\rangle: ({\bf R}/{\bf Z})^d \times {\bf Z}^d \rightarrow {\bf R}/{\bf Z}} the usual Fourier pairing); conversely, if {H \leq ({\bf R}/{\bf Z})^d} is a closed torus subgroup, then

\displaystyle  H^\perp := \{ \xi \in {\bf Z}^d: \langle x, \xi \rangle = 0 \forall x \in H \}

is a lattice subgroup. These two operations invert each other: {(\Gamma^\perp)^\perp = \Gamma} and {(H^\perp)^\perp = H}.

Example 1 The orthogonal complement of the lattice subgroup

\displaystyle  2{\bf Z} \times \{0\} = \{ (2n,0): n \in {\bf Z}\} \leq {\bf Z}^2

is the closed torus subgroup

\displaystyle  (\frac{1}{2}{\bf Z}/{\bf Z}) \times ({\bf R}/{\bf Z}) = \{ (x,y) \in ({\bf R}/{\bf Z})^2: 2x=0\} \leq ({\bf R}/{\bf Z})^2

and conversely.

Let us focus first on lattice subgroups {\Gamma \leq {\bf Z}^d}. As all such subgroups are finitely generated abelian groups, one way to describe a lattice subgroup is to specify a set {v_1,\dots,v_n \in \Gamma} of generators of {\Gamma}. Equivalently, we have

\displaystyle  \Gamma = A {\bf Z}^n

where {A \in M_{d \times n}({\bf Z})} is the matrix whose columns are {v_1,\dots,v_n}. Applying the Smith normal form {A = UDV}, we conclude that

\displaystyle  \Gamma = UDV{\bf Z}^n = UD{\bf Z}^n

so in particular {\Gamma} is isomorphic (with respect to the automorphism group {GL_d({\bf Z})} of {{\bf Z}^d}) to {D{\bf Z}^n}. In particular, we see that {\Gamma} is a free abelian group of rank {k}, where {k} is the rank of {D} (or {A}). This representation also allows one to trim the representation {A {\bf Z}^n} down to {U D'{\bf Z}^k}, where {D' \in M_{d \times k}} is the matrix formed from the {k} left columns of {D}; the columns of {UD'} then give a basis for {\Gamma}. Let us call this a trimmed representation of {A{\bf Z}^n}.

Example 2 Let {\Gamma \leq {\bf Z}^3} be the lattice subgroup generated by {(1,3,1)}, {(2,-2,2)}, {(3,1,3)}, thus {\Gamma = A {\bf Z}^3} with {A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & -2 & 1 \\ 1 & 2 & 3 \end{pmatrix}}. A Smith normal form for {A} is given by

\displaystyle  A = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 3 & -2 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}

so {A{\bf Z}^3} is a rank two lattice with a basis of {(3,1,3) \times 1 = (3,1,3)} and {(1,0,1) \times 8 = (8,0,8)} (and the invariant factors are {1} and {8}). The trimmed representation is

\displaystyle  A {\bf Z}^3 = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 8 \\ 0 & 0 \end{pmatrix} {\bf Z}^2 = \begin{pmatrix} 3 & 8 \\ 1 & 0 \\ 3 & 8 \end{pmatrix} {\bf Z}^2.

There are other Smith normal forms for {A}, giving slightly different representations here, but the rank and invariant factors will always be the same.
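
In Sage one can extract a trimmed representation along these lines as follows (a sketch only; Sage may return a different but equivalent choice of {U, V}, and hence a different basis than the one displayed in Example 2, generating the same subgroup).

# run inside a Sage session
A = matrix(ZZ, [[1, 2, 3], [3, -2, 1], [1, 2, 3]])           # the matrix from Example 2
D, U, V = A.smith_form()                                     # Sage's convention: D == U*A*V
k = D.rank()
print([D[i, i] for i in range(min(D.nrows(), D.ncols()))])   # invariant factors (with zeroes): [1, 8, 0]
basis = (A * V).matrix_from_columns(list(range(k)))          # A*V equals U'D in the post's convention A = U'DV'
print(basis)                                                 # the columns give a basis of A*Z^3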

By the above discussion we can represent a lattice subgroup {\Gamma \leq {\bf Z}^d} by a matrix {A \in M_{d \times n}({\bf Z})} for some {n}; this representation is not unique, but we will address this issue shortly. For now, we focus on the question of how to use such data representations of subgroups to perform basic operations on lattice subgroups. There are some operations that are very easy to perform using this data representation:

  • (Applying a linear transformation) if {T \in M_{d' \times d}({\bf Z})}, so that {T} is also a linear transformation from {{\bf Z}^d} to {{\bf Z}^{d'}}, then {T} maps lattice subgroups to lattice subgroups, and clearly maps the lattice subgroup {A{\bf Z}^n} to {(TA){\bf Z}^n} for any {A \in M_{d \times n}({\bf Z})}.
  • (Sum) Given two lattice subgroups {A_1 {\bf Z}^{n_1}, A_2 {\bf Z}^{n_2} \leq {\bf Z}^d} for some {A_1 \in M_{d \times n_1}({\bf Z})}, {A_2 \in M_{d \times n_2}({\bf Z})}, the sum {A_1 {\bf Z}^{n_1} + A_2 {\bf Z}^{n_2}} is equal to the lattice subgroup {A {\bf Z}^{n_1+n_2}}, where {A = (A_1 A_2) \in M_{d \times n_1 + n_2}({\bf Z})} is the matrix formed by concatenating the columns of {A_1} with the columns of {A_2}.
  • (Direct sum) Given two lattice subgroups {A_1 {\bf Z}^{n_1} \leq {\bf Z}^{d_1}}, {A_2 {\bf Z}^{n_2} \leq {\bf Z}^{d_2}}, the direct sum {A_1 {\bf Z}^{n_1} \times A_2 {\bf Z}^{n_2}} is equal to the lattice subgroup {A {\bf Z}^{n_1+n_2}}, where {A = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} \in M_{d_1+d_2 \times n_1 + n_2}({\bf Z})} is the block matrix formed by taking the direct sum of {A_1} and {A_2}.
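
These three operations are one-liners in Sage; here is a quick sketch (the matrices below are arbitrary small examples of mine).

# run inside a Sage session
A1 = matrix(ZZ, [[2, 0], [0, 4]])       # generates 2Z x 4Z inside Z^2
A2 = matrix(ZZ, [[3], [6]])             # generates the multiples of (3,6) inside Z^2
T = matrix(ZZ, [[1, 1], [0, 1]])
print(T * A1)                           # generators of the image of A1*Z^2 under T
print(A1.augment(A2))                   # generators of the sum A1*Z^2 + A2*Z^1 inside Z^2
print(block_diagonal_matrix([A1, A2]))  # generators of the direct sum inside Z^2 x Z^2 = Z^4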

One can also use Smith normal form to detect when one lattice subgroup {B {\bf Z}^m \leq {\bf Z}^d} is a subgroup of another lattice subgroup {A {\bf Z}^n \leq {\bf Z}^d}. Using Smith normal form factorization {A = U D V}, with invariant factors {\alpha_1|\dots|\alpha_k}, the relation {B {\bf Z}^m \leq A {\bf Z}^n} is equivalent after some manipulation to

\displaystyle  U^{-1} B {\bf Z}^m \leq D {\bf Z}^n.

The group {U^{-1} B {\bf Z}^m} is generated by the columns of {U^{-1} B}, so this gives a test to determine whether {B {\bf Z}^{m} \leq A {\bf Z}^{n}}: the {i^{th}} row of {U^{-1} B} must be divisible by {\alpha_i} for {i=1,\dots,k}, and all other rows must vanish.

Example 3 To test whether the lattice subgroup {\Gamma'} generated by {(1,1,1)} and {(0,2,0)} is contained in the lattice subgroup {\Gamma = A{\bf Z}^3} from Example 2, we write {\Gamma'} as {B {\bf Z}^2} with {B = \begin{pmatrix} 1 & 0 \\ 1 & 2 \\ 1 & 0\end{pmatrix}}, and observe that

\displaystyle  U^{-1} B = \begin{pmatrix} 1 & 2 \\ -2 & -6 \\ 0 & 0 \end{pmatrix}.

The first row is of course divisible by {1}, and the last row vanishes as required, but the second row is not divisible by {8}, so {\Gamma'} is not contained in {\Gamma} (but {4\Gamma'} is); also a similar computation verifies that {\Gamma} is conversely contained in {\Gamma'}.
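Here is a minimal sketch of this containment test in Python (assuming SymPy), specialized to Example 3, with the matrix {U} and the invariant factors from Example 2 hard-coded:

```python
# Containment test B*Z^m <= A*Z^n via the Smith normal form data of A.
from sympy import Matrix

U = Matrix([[3, 1, 1], [1, 0, 0], [3, 1, 0]])   # from the Smith form of A in Example 2
alphas = [1, 8]                                  # invariant factors of A
B = Matrix([[1, 0], [1, 2], [1, 0]])             # generators (1,1,1), (0,2,0) of Gamma'

def contained(B, U, alphas):
    M = U.inv() * B          # here equal to [[1, 2], [-2, -6], [0, 0]]
    k = len(alphas)
    for i in range(M.rows):
        for entry in M.row(i):
            if i < k and entry % alphas[i] != 0:
                return False          # row i must be divisible by alpha_i
            if i >= k and entry != 0:
                return False          # remaining rows must vanish
    return True

print(contained(B, U, alphas))       # False: the second row is not divisible by 8
print(contained(4 * B, U, alphas))   # True: 4*Gamma' is contained in A*Z^3
```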

One can now test whether {B{\bf Z}^m = A{\bf Z}^n} by testing whether {B{\bf Z}^m \leq A{\bf Z}^n} and {A{\bf Z}^n \leq B{\bf Z}^m} simultaneously hold (there may be more efficient ways to do this, but this is already computationally manageable in many applications). This in principle addresses the issue of non-uniqueness of representation of a subgroup {\Gamma} in the form {A{\bf Z}^n}.

Next, we consider the question of representing the intersection {A{\bf Z}^n \cap B{\bf Z}^m} of two subgroups {A{\bf Z}^n, B{\bf Z}^m \leq {\bf Z}^d} in the form {C{\bf Z}^p} for some {p} and {C \in M_{d \times p}({\bf Z})}. We can write

\displaystyle  A{\bf Z}^n \cap B{\bf Z}^m = \{ Ax: Ax = By \hbox{ for some } x \in {\bf Z}^n, y \in {\bf Z}^m \}

\displaystyle  = (A 0) \{ z \in {\bf Z}^{n+m}: (A B) z = 0 \}

where {(A B) \in M_{d \times n+m}({\bf Z})} is the matrix formed by concatenating {A} and {B}, and similarly for {(A 0) \in M_{d \times n+m}({\bf Z})} (here we use the change of variable {z = \begin{pmatrix} x \\ -y \end{pmatrix}}). We apply the Smith normal form to {(A B)} to write

\displaystyle  (A B) = U D V

where {U \in GL_d({\bf Z})}, {D \in M_{d \times n+m}({\bf Z})}, {V \in GL_{n+m}({\bf Z})} with {D} of rank {k}. We can then write

\displaystyle  \{ z \in {\bf Z}^{n+m}: (A B) z = 0 \} = V^{-1} \{ w \in {\bf Z}^{n+m}: Dw = 0 \}

\displaystyle  = V^{-1} (\{0\}^k \times {\bf Z}^{n+m-k})

(making the change of variables {w = Vz}). Thus we can write {A{\bf Z}^n \cap B{\bf Z}^m = C {\bf Z}^{n+m-k}} where {C \in M_{d \times n+m-k}({\bf Z})} consists of the right {n+m-k} columns of {(A 0) V^{-1} \in M_{d \times n+m}({\bf Z})}.

Example 4 With the lattice {A{\bf Z}^3} from Example 2, we shall compute the intersection of {A{\bf Z}^3} with the subgroup {{\bf Z}^2 \times \{0\}}, which one can also write as {B{\bf Z}^2} with {B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}}. We obtain a Smith normal form

\displaystyle  (A B) = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 3 & -2 & 1 & 0 & 1 \\ 1 & 2 & 3 & 1 & 0 \\ 1 & 2 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \end{pmatrix}

so {k=3}. We have

\displaystyle  (A 0) V^{-1} = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 3 & 0 & -8 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}

and so we can write {A{\bf Z}^3 \cap B{\bf Z}^2 = C{\bf Z}^2} where

\displaystyle  C = \begin{pmatrix} 0 & 0 \\ 0 & -8 \\ 0 & 0 \end{pmatrix}.

One can trim this representation if desired, for instance by deleting the first column of {C} (and replacing {{\bf Z}^2} with {{\bf Z}}). Thus the intersection of {A{\bf Z}^3} with {{\bf Z}^2 \times \{0\}} is the rank one subgroup generated by {(0,-8,0)}.
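As a brute-force cross-check of this example (in plain Python), every element of {A{\bf Z}^3} with vanishing third coordinate found in a finite search is indeed a multiple of {(0,-8,0)}:

```python
# Brute-force check of Example 4: A*Z^3 has trimmed basis (3,1,3), (8,0,8).
def lattice_element(a, b):
    return (3 * a + 8 * b, a, 3 * a + 8 * b)

for a in range(-20, 21):
    for b in range(-20, 21):
        v = lattice_element(a, b)
        if v[2] == 0:                             # v also lies in Z^2 x {0}
            assert v[0] == 0 and v[1] % 8 == 0    # v is a multiple of (0,-8,0)

assert lattice_element(-8, 3) == (0, -8, 0)       # the generator itself is attained
print("intersection check passed")
```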
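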

A similar calculation allows one to represent the pullback {T^{-1} (A {\bf Z}^n) \leq {\bf Z}^{d'}} of a subgroup {A{\bf Z}^n \leq {\bf Z}^d} via a linear transformation {T \in M_{d \times d'}({\bf Z})}, since

\displaystyle T^{-1} (A {\bf Z}^n) = \{ x \in {\bf Z}^{d'}: Tx = Ay \hbox{ for some } y \in {\bf Z}^n \}

\displaystyle  = (I 0) \{ z \in {\bf Z}^{d'+n}: (T A) z = 0 \}

where {(I 0) \in M_{d' \times d'+n}({\bf Z})} is the concatenation of the {d' \times d'} identity matrix {I} and the {d' \times n} zero matrix. Applying the Smith normal form to write {(T A) = UDV} with {D} of rank {k}, the same argument as before allows us to write {T^{-1}(A{\bf Z}^n) = C {\bf Z}^{d'+n-k}} where {C \in M_{d' \times d'+n-k}({\bf Z})} consists of the right {d'+n-k} columns of {(I 0) V^{-1} \in M_{d' \times d'+n}({\bf Z})}.

Among other things, this allows one to describe lattices given by systems of linear equations and congruences in the {A{\bf Z}^n} format. Indeed, the set of lattice vectors {x \in {\bf Z}^d} that solve the system of congruences

\displaystyle  \alpha_i | x \cdot v_i \ \ \ \ \ (1)

for {i=1,\dots,k}, some natural numbers {\alpha_i}, and some lattice vectors {v_i \in {\bf Z}^d}, together with an additional system of equations

\displaystyle  x \cdot w_j = 0 \ \ \ \ \ (2)

for {j=1,\dots,l} and some lattice vectors {w_j \in {\bf Z}^d}, can be written as {T^{-1}(A {\bf Z}^k)} where {T \in M_{k+l \times d}({\bf Z})} is the matrix with rows {v_1,\dots,v_k,w_1,\dots,w_l}, and {A \in M_{k+l \times k}({\bf Z})} is the diagonal matrix with diagonal entries {\alpha_1,\dots,\alpha_k}. Conversely, any subgroup {A{\bf Z}^n} can be described in this form by first using the trimmed representation {A{\bf Z}^n = UD'{\bf Z}^k}, at which point membership of a lattice vector {x \in {\bf Z}^d} in {A{\bf Z}^n} is seen to be equivalent to the congruences

\displaystyle  \alpha_i | U^{-1} x \cdot e_i

for {i=1,\dots,k} (where {k} is the rank, {\alpha_1,\dots,\alpha_k} are the invariant factors, and {e_1,\dots,e_d} is the standard basis of {{\bf Z}^d}) together with the equations

\displaystyle  U^{-1} x \cdot e_j = 0

for {j=k+1,\dots,d}. Thus one can obtain a representation in the form (1), (2) with {l=d-k}, taking {v_1,\dots,v_k,w_1,\dots,w_{d-k}} to be the rows of {U^{-1}} in order.

Example 5 With the lattice subgroup {A{\bf Z}^3} from Example 2, we have {U^{-1} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & -3 & 1 \\ 1 & 0 & -1 \end{pmatrix}}, and so {A{\bf Z}^3} consists of those triples {(x_1,x_2,x_3)} which obey the (redundant) congruence

\displaystyle  1 | x_2,

the congruence

\displaystyle  8 | -3x_2 + x_3

and the identity

\displaystyle  x_1 - x_3 = 0.

Conversely, one can use the above procedure to convert the above system of congruences and identities back into a form {A' {\bf Z}^{n'}} (though depending on which Smith normal form one chooses, the end result may be a different representation of the same lattice group {A{\bf Z}^3}).
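The system of congruences and identities in this example gives a very cheap membership test for {A{\bf Z}^3}; here is a minimal sketch in plain Python, tried on vectors from the preceding examples:

```python
# Membership in A*Z^3 via the congruence description of Example 5.
def in_lattice(x1, x2, x3):
    # the congruence 1 | x2 is vacuous; the others are 8 | -3*x2 + x3 and x1 = x3
    return (-3 * x2 + x3) % 8 == 0 and x1 == x3

print(in_lattice(3, 1, 3))    # True: a generator of A*Z^3
print(in_lattice(8, 0, 8))    # True: the other trimmed generator
print(in_lattice(1, 1, 1))    # False: a generator of Gamma' from Example 3
print(in_lattice(0, -8, 0))   # True: the intersection generator from Example 4
```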

Now we apply Pontryagin duality. We claim the identity

\displaystyle  (A{\bf Z}^n)^\perp = \{ x \in ({\bf R}/{\bf Z})^d: A^Tx = 0 \}

for any {A \in M_{d \times n}({\bf Z})} (where {A^T \in M_{n \times d}({\bf Z})} induces a homomorphism from {({\bf R}/{\bf Z})^d} to {({\bf R}/{\bf Z})^n} in the obvious fashion). This can be verified by direct computation when {A} is a (rectangular) diagonal matrix, and the general case then easily follows from a Smith normal form computation (one can presumably also derive it from the category-theoretic properties of Pontryagin duality, although I will not do so here). So closed torus subgroups that are defined by a system of linear equations (over {{\bf R}/{\bf Z}}, with integer coefficients) are represented in the form {(A{\bf Z}^n)^\perp} of an orthogonal complement of a lattice subgroup. Using the trimmed form {A{\bf Z}^n = U D' {\bf Z}^k}, we see that

\displaystyle  (A{\bf Z}^n)^\perp = \{ x \in ({\bf R}/{\bf Z})^d: (UD')^T x = 0 \}

\displaystyle  = (U^{-1})^T \{ y \in ({\bf R}/{\bf Z})^d: (D')^T y = 0 \}

\displaystyle  = (U^{-1})^T (\frac{1}{\alpha_1} {\bf Z}/{\bf Z} \times \dots \times \frac{1}{\alpha_k} {\bf Z}/{\bf Z} \times ({\bf R}/{\bf Z})^{d-k}),

giving an explicit representation “in coordinates” of such a closed torus subgroup. In particular we can read off the isomorphism class of a closed torus subgroup as the product of a finite number of cyclic groups and a torus:

\displaystyle (A{\bf Z}^n)^\perp \equiv ({\bf Z}/\alpha_1 {\bf Z}) \times \dots \times ({\bf Z}/\alpha_k{\bf Z}) \times ({\bf R}/{\bf Z})^{d-k}.

Example 6 The orthogonal complement of the lattice subgroup {A{\bf Z}^3} from Example 2 is the closed torus subgroup

\displaystyle  (A{\bf Z}^3)^\perp = \{ (x_1,x_2,x_3) \in ({\bf R}/{\bf Z})^3: x_1 + 3x_2 + x_3

\displaystyle  = 2x_1 - 2x_2 + 2x_3 = 3x_1 + x_2 + 3x_3 = 0 \};

using the trimmed representation of {(A{\bf Z}^3)^\perp}, one can simplify this a little to

\displaystyle  (A{\bf Z}^3)^\perp = \{ (x_1,x_2,x_3) \in ({\bf R}/{\bf Z})^3: 3x_1 + x_2 + 3x_3

\displaystyle  = 8 x_1 + 8x_3 = 0 \}

and one can also write this as the image of the group {\{ 0\} \times (\frac{1}{8}{\bf Z}/{\bf Z}) \times ({\bf R}/{\bf Z})} under the torus isomorphism

\displaystyle  (y_1,y_2,y_3) \mapsto (y_3, y_1 - 3y_2, y_2 - y_3).

In other words, one can write

\displaystyle  (A{\bf Z}^3)^\perp = \{ (y,0,-y) + (0,-\frac{3a}{8},\frac{a}{8}): y \in {\bf R}/{\bf Z}; a \in {\bf Z}/8{\bf Z} \}

so that {(A{\bf Z}^3)^\perp} is isomorphic to {{\bf R}/{\bf Z} \times {\bf Z}/8{\bf Z}}.
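Here is a quick exact-arithmetic check in Python (using the standard fractions module) that this parametrization does indeed solve the three defining equations of {(A{\bf Z}^3)^\perp} modulo {1}:

```python
# Check that (y, -3a/8, a/8 - y) solves the defining equations of (A*Z^3)^perp mod 1.
from fractions import Fraction

def is_integer(q):
    return q.denominator == 1

for a in range(8):                       # a ranges over Z/8Z
    for y_num in range(12):              # sample values y = y_num/12 in R/Z
        y = Fraction(y_num, 12)
        x1, x2, x3 = y, Fraction(-3 * a, 8), Fraction(a, 8) - y
        assert is_integer(x1 + 3 * x2 + x3)          # x1 + 3 x2 + x3 = 0 mod 1
        assert is_integer(2 * x1 - 2 * x2 + 2 * x3)  # second equation
        assert is_integer(3 * x1 + x2 + 3 * x3)      # third equation
print("parametrization check passed")
```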

We can now dualize all of the previous computable operations on subgroups of {{\bf Z}^d} to produce computable operations on closed subgroups of {({\bf R}/{\bf Z})^d}. For instance:

  • To form the intersection or sum of two closed torus subgroups {(A_1 {\bf Z}^{n_1})^\perp, (A_2 {\bf Z}^{n_2})^\perp \leq ({\bf R}/{\bf Z})^d}, use the identities

    \displaystyle  (A_1 {\bf Z}^{n_1})^\perp \cap (A_2 {\bf Z}^{n_2})^\perp = (A_1 {\bf Z}^{n_1} + A_2 {\bf Z}^{n_2})^\perp

    and

    \displaystyle  (A_1 {\bf Z}^{n_1})^\perp + (A_2 {\bf Z}^{n_2})^\perp = (A_1 {\bf Z}^{n_1} \cap A_2 {\bf Z}^{n_2})^\perp

    and then calculate the sum or intersection of the lattice subgroups {A_1 {\bf Z}^{n_1}, A_2 {\bf Z}^{n_2}} by the previous methods. Similarly, the operation of direct sum of two closed torus subgroups dualises to the operation of direct sum of two lattice subgroups.
  • To determine whether one closed torus subgroup {(A_1 {\bf Z}^{n_1})^\perp \leq ({\bf R}/{\bf Z})^d} is contained in (or equal to) another closed torus subgroup {(A_2 {\bf Z}^{n_2})^\perp \leq ({\bf R}/{\bf Z})^d}, simply use the preceding methods to check whether the lattice subgroup {A_2 {\bf Z}^{n_2}} is contained in (or equal to) the lattice subgroup {A_1 {\bf Z}^{n_1}}.
  • To compute the pull back {T^{-1}( (A{\bf Z}^n)^\perp )} of a closed torus subgroup {(A{\bf Z}^n)^\perp \leq ({\bf R}/{\bf Z})^d} via a linear transformation {T \in M_{d' \times d}({\bf Z})}, use the identity

    \displaystyle T^{-1}( (A{\bf Z}^n)^\perp ) = (T^T A {\bf Z}^n)^\perp.

    Similarly, to compute the image {T( (B {\bf Z}^m)^\perp )} of a closed torus subgroup {(B {\bf Z}^m)^\perp \leq ({\bf R}/{\bf Z})^{d'}}, use the identity

    \displaystyle T( (B{\bf Z}^m)^\perp ) = ((T^T)^{-1} B {\bf Z}^m)^\perp.

Example 7 Suppose one wants to compute the sum of the closed torus subgroup {(A{\bf Z}^3)^\perp} from Example 6 with the closed torus subgroup {\{0\}^2 \times {\bf R}/{\bf Z}}. This latter group is the orthogonal complement of the lattice subgroup {{\bf Z}^2 \times \{0\}} considered in Example 4. Thus we have {(A{\bf Z}^3)^\perp + (\{0\}^2 \times {\bf R}/{\bf Z}) = (C{\bf Z}^2)^\perp} where {C} is the matrix from Example 4; discarding the zero column, we thus have

\displaystyle (A{\bf Z}^3)^\perp + (\{0\}^2 \times {\bf R}/{\bf Z}) = \{ (x_1,x_2,x_3): -8x_2 = 0 \}.

Let {G} be a finite set of order {N}; in applications {G} will typically be something like a finite abelian group, such as the cyclic group {{\bf Z}/N{\bf Z}}. Let us define a {1}-bounded function to be a function {f: G \rightarrow {\bf C}} such that {|f(n)| \leq 1} for all {n \in G}. There are many seminorms {\| \|} of interest that one can place on functions {f: G \rightarrow {\bf C}} and that are bounded by {1} on {1}-bounded functions, such as the Gowers uniformity seminorms {\| \|_k} for {k \geq 1} (which are genuine norms for {k \geq 2}). All seminorms in this post will be implicitly assumed to obey this property.

In additive combinatorics, a significant role is played by inverse theorems, which abstractly take the following form for certain choices of seminorm {\| \|}, some parameters {\eta, \varepsilon>0}, and some class {{\mathcal F}} of {1}-bounded functions:

Theorem 1 (Inverse theorem template) If {f} is a {1}-bounded function with {\|f\| \geq \eta}, then there exists {F \in {\mathcal F}} such that {|\langle f, F \rangle| \geq \varepsilon}, where {\langle,\rangle} denotes the usual inner product

\displaystyle  \langle f, F \rangle := {\bf E}_{n \in G} f(n) \overline{F(n)}.

Informally, one should think of {\eta} as being somewhat small but fixed independently of {N}, {\varepsilon} as being somewhat smaller but depending only on {\eta} (and on the seminorm), and {{\mathcal F}} as representing the “structured functions” for these choices of parameters. There is some flexibility in exactly how to choose the class {{\mathcal F}} of structured functions, but intuitively an inverse theorem should become more powerful when this class is small. Accordingly, let us define the {(\eta,\varepsilon)}-entropy of the seminorm {\| \|} to be the least cardinality of {{\mathcal F}} for which such an inverse theorem holds. Seminorms with low entropy are ones for which inverse theorems can be expected to be a useful tool. This concept arose in some discussions I had with Ben Green many years ago, but never appeared in print, so I decided to record some observations we had on this concept here on this blog.

Lebesgue norms {\| f\|_{L^p} := ({\bf E}_{n \in G} |f(n)|^p)^{1/p}} for {1 < p < \infty} have exponentially large entropy (and so inverse theorems are not expected to be useful in this case):

Proposition 2 ({L^p} norm has exponentially large inverse entropy) Let {1 < p < \infty} and {0 < \eta < 1}. Then the {(\eta,\eta^p/4)}-entropy of {\| \|_{L^p}} is at most {(1+8/\eta^p)^N}. Conversely, for any {\varepsilon>0}, the {(\eta,\varepsilon)}-entropy of {\| \|_{L^p}} is at least {\exp( c \varepsilon^2 N)} for some absolute constant {c>0}.

Proof: If {f} is {1}-bounded with {\|f\|_{L^p} \geq \eta}, then we have

\displaystyle  |\langle f, |f|^{p-2} f \rangle| \geq \eta^p

and hence by the triangle inequality we have

\displaystyle  |\langle f, F \rangle| \geq \eta^p/2

where {F} is either the real or imaginary part of {|f|^{p-2} f}, which takes values in {[-1,1]}. If we let {\tilde F} be {F} rounded to the nearest multiple of {\eta^p/4}, then by the triangle inequality again we have

\displaystyle  |\langle f, \tilde F \rangle| \geq \eta^p/4.

There are only at most {1+8/\eta^p} possible values for each value {\tilde F(n)} of {\tilde F}, and hence at most {(1+8/\eta^p)^N} possible choices for {\tilde F}. This gives the first claim.

Now suppose that there is an {(\eta,\varepsilon)}-inverse theorem for some {{\mathcal F}} of cardinality {M}. If we let {f} be a random sign function (so the {f(n)} are independent random variables taking values in {-1,+1} with equal probability), then there is a random {F \in {\mathcal F}} such that

\displaystyle  |\langle f, F \rangle| \geq \varepsilon

and hence by the pigeonhole principle there is a deterministic {F \in {\mathcal F}} such that

\displaystyle  {\bf P}( |\langle f, F \rangle| \geq \varepsilon ) \geq 1/M.

On the other hand, from the Hoeffding inequality one has

\displaystyle  {\bf P}( |\langle f, F \rangle| \geq \varepsilon ) \ll \exp( - c \varepsilon^2 N )

for some absolute constant {c}, hence

\displaystyle  M \geq \exp( c \varepsilon^2 N )

as claimed. \Box
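Here is a toy numerical illustration of the first half of this argument in Python (assuming NumPy); the parameters {N}, {p}, {\eta} are arbitrary choices for the demonstration:

```python
# Demo of the rounding argument in Proposition 2: a rounded real or imaginary
# part of |f|^{p-2} f already correlates with f at level eta^p/4.
import numpy as np

rng = np.random.default_rng(0)
N, p, eta = 1000, 3.0, 0.5
f = rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N)
f = f / np.maximum(np.abs(f), 1.0)                    # make f 1-bounded

assert np.mean(np.abs(f) ** p) ** (1 / p) >= eta      # ||f||_{L^p} >= eta for this sample

F = np.abs(f) ** (p - 2) * f
step = eta ** p / 4
for part in (F.real, F.imag):
    G = np.round(part / step) * step                  # round to multiples of eta^p/4
    corr = abs(np.mean(f * np.conj(G)))
    print(corr, corr >= eta ** p / 4)
# at least one of the two printed correlations exceeds eta^p/4, as in the proof
```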

Most seminorms of interest in additive combinatorics, such as the Gowers uniformity norms, are bounded by some finite {L^p} norm thanks to Hölder’s inequality, so from the above proposition and the obvious monotonicity properties of entropy, we conclude that all Gowers norms on finite abelian groups {G} have at most exponential inverse theorem entropy. But we can do significantly better than this:

  • For the {U^1} seminorm {\|f\|_{U^1(G)} := |{\bf E}_{n \in G} f(n)|}, one can simply take {{\mathcal F} = \{1\}} to consist of the constant function {1}, and the {(\eta,\eta)}-entropy is clearly equal to {1} for any {0 < \eta < 1}.
  • For the {U^2} norm, the standard Fourier-analytic inverse theorem asserts that if {\|f\|_{U^2(G)} \geq \eta} then {|\langle f, e(\xi \cdot) \rangle| \geq \eta^2} for some Fourier character {\xi \in \hat G}. Thus the {(\eta,\eta^2)}-entropy is at most {N}.
  • For the {U^k({\bf Z}/N{\bf Z})} norm on cyclic groups for {k > 2}, the inverse theorem proved by Green, Ziegler, and myself gives an {(\eta,\varepsilon)}-inverse theorem for some {\varepsilon \gg_{k,\eta} 1} and {{\mathcal F}} consisting of nilsequences {n \mapsto F(g(n) \Gamma)} for some filtered nilmanifold {G/\Gamma} of degree {k-1} in a finite collection of cardinality {O_{\eta,k}(1)}, some polynomial sequence {g: {\bf Z} \rightarrow G} (which, as subsequently observed by Candela-Sisask (see also Manners), can be chosen to be {N}-periodic), and some Lipschitz function {F: G/\Gamma \rightarrow {\bf C}} of Lipschitz norm {O_{\eta,k}(1)}. By the Arzela-Ascoli theorem, the number of possible {F} (up to uniform errors of size at most {\varepsilon/2}, say) is {O_{\eta,k}(1)}. By standard arguments one can also ensure that the coefficients of the polynomial {g} are {O_{\eta,k}(1)}, and then by periodicity there are only {O(N^{O_{\eta,k}(1)})} such polynomials. As a consequence, the {(\eta,\varepsilon)}-entropy is of polynomial size {O_{\eta,k}( N^{O_{\eta,k}(1)} )} (a fact that seems to have first been implicitly observed in Lemma 6.2 of this paper of Frantzikinakis; thanks to Ben Green for this reference). One can obtain more precise dependence on {\eta,k} using the quantitative version of this inverse theorem due to Manners; back of the envelope calculations using Section 5 of that paper suggest to me that one can take {\varepsilon = \eta^{O_k(1)}} to be polynomial in {\eta} and the entropy to be of the order {O_k( N^{\exp(\exp(\eta^{-O_k(1)}))} )}, or alternatively one can reduce the entropy to {O_k( \exp(\exp(\eta^{-O_k(1)})) N^{\eta^{-O_k(1)}})} at the cost of degrading {\varepsilon} to {1/\exp\exp( O(\eta^{-O(1)}))}.
  • If one replaces the cyclic group {{\bf Z}/N{\bf Z}} by a vector space {{\bf F}_p^n} over some fixed finite field {{\bf F}_p} of prime order (so that {N=p^n}), then the inverse theorem of Ziegler and myself (available in both high and low characteristic) allows one to obtain an {(\eta,\varepsilon)}-inverse theorem for some {\varepsilon \gg_{k,\eta} 1} and {{\mathcal F}} the collection of non-classical degree {k-1} polynomial phases from {{\bf F}_p^n} to {S^1}, which one can normalize to equal {1} at the origin, and then by the classification of such polynomials one can calculate that the {(\eta,\varepsilon)}-entropy is of quasipolynomial size {\exp( O_{p,k}(n^{k-1}) ) = \exp( O_{p,k}( \log^{k-1} N ) )} in {N}. By using the recent work of Gowers and Milicevic, one can make the dependence on {p,k} here more precise, but we will not perform these calculations here.
  • For the {U^3(G)} norm on an arbitrary finite abelian group, the recent inverse theorem of Jamneshan and myself gives (after some calculations) a bound of the polynomial form {O( q^{O(n^2)} N^{\exp(\eta^{-O(1)})})} on the {(\eta,\varepsilon)}-entropy for some {\varepsilon \gg \eta^{O(1)}}, which one can improve slightly to {O( q^{O(n^2)} N^{\eta^{-O(1)}})} if one degrades {\varepsilon} to {1/\exp(\eta^{-O(1)})}, where {q} is the maximal order of an element of {G}, and {n} is the rank (the number of elements needed to generate {G}). This bound is polynomial in {N} in the cyclic group case and quasipolynomial in general.

For general finite abelian groups {G}, we do not yet have an inverse theorem of comparable power to the ones mentioned above that give polynomial or quasipolynomial upper bounds on the entropy. However, there is a cheap argument that at least gives some subexponential bounds:

Proposition 3 (Cheap subexponential bound) Let {k \geq 2} and {0 < \eta < 1/2}, and suppose that {G} is a finite abelian group of order {N \geq \eta^{-C_k}} for some sufficiently large {C_k}. Then the {(\eta,c_k \eta^{O_k(1)})}-entropy of {\| \|_{U^k(G)}} is at most {O( \exp( \eta^{-O_k(1)} N^{1 - \frac{k+1}{2^k-1}} ))}.

Proof: (Sketch) We use a standard random sampling argument, of the type used for instance by Croot-Sisask or Briet-Gopi (thanks to Ben Green for this latter reference). We can assume that {N \geq \eta^{-C_k}} for some sufficiently large {C_k>0}, since otherwise the claim follows from Proposition 2.

Let {A} be a random subset of {G} with the events {n \in A} being iid with probability {0 < p < 1} to be chosen later, conditioned to the event {|A| \leq 2pN}. Let {f} be a {1}-bounded function. By a standard second moment calculation, we see that with probability at least {1/2}, we have

\displaystyle  \|f\|_{U^k(G)}^{2^k} = {\bf E}_{n, h_1,\dots,h_k \in G} f(n) \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|} \frac{1}{p} 1_A f(n + \omega \cdot h)

\displaystyle + O((\frac{1}{N^{k+1} p^{2^k-1}})^{1/2}).

Thus, by the triangle inequality, if we choose {p := C \eta^{-2^{k+1}/(2^k-1)} / N^{\frac{k+1}{2^k-1}}} for some sufficiently large {C = C_k > 0}, then for any {1}-bounded {f} with {\|f\|_{U^k(G)} \geq \eta/2}, one has with probability at least {1/2} that

\displaystyle  |{\bf E}_{n, h_1,\dots,h_k \in G} f(n) \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|} \frac{1}{p} 1_A f(n + \omega \cdot h)|

\displaystyle \geq \eta^{2^k}/2^{2^k+1}.

We can write the left-hand side as {|\langle f, F \rangle|} where {F} is the randomly sampled dual function

\displaystyle  F(n) := {\bf E}_{h_1,\dots,h_k \in G} \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|+1} \frac{1}{p} 1_A f(n + \omega \cdot h).

Unfortunately, {F} is not {1}-bounded in general, but we have

\displaystyle  \|F\|_{L^2(G)}^2 \leq {\bf E}_{n, h_1,\dots,h_k ,h'_1,\dots,h'_k \in G}

\displaystyle  \prod_{\omega \in \{0,1\}^k \backslash \{0\}} \frac{1}{p} 1_A(n + \omega \cdot h) \frac{1}{p} 1_A(n + \omega \cdot h')

and the right-hand side can be shown to be {1+o(1)} on the average, so we can condition on the event that the right-hand side is {O(1)} without significant loss in failure probability.

If we then let {\tilde f_A} be {1_A f} rounded to the nearest Gaussian integer multiple of {\eta^{2^k}/2^{2^{10k}}} in the unit disk, one has from the triangle inequality that

\displaystyle  |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

where {\tilde F} is the discretised randomly sampled dual function

\displaystyle  \tilde F(n) := {\bf E}_{h_1,\dots,h_k \in G} \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|+1} \frac{1}{p} \tilde f_A(n + \omega \cdot h).

For any given {A}, there are at most {2pN} places {n} where {\tilde f_A(n)} can be non-zero, and in those places there are {O_k( \eta^{-2^{k}})} possible values for {\tilde f_A(n)}. Thus, if we let {{\mathcal F}_A} be the collection of all possible {\tilde F} associated to a given {A}, the cardinality of this set is {O( \exp( \eta^{-O_k(1)} N^{1 - \frac{k+1}{2^k-1}} ) )}, and for any {f} with {\|f\|_{U^k(G)} \geq \eta/2}, we have

\displaystyle  \sup_{\tilde F \in {\mathcal F}_A} |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{k+2}

with probability at least {1/2}.

Now we remove the failure probability by independent resampling. By rounding to the nearest Gaussian integer multiple of {c_k \eta^{2^k}} in the unit disk for a sufficiently small {c_k>0}, one can find a family {{\mathcal G}} of cardinality {O( \eta^{-O_k(N)})} consisting of {1}-bounded functions {\tilde f} of {U^k(G)} norm at least {\eta/2} such that for every {1}-bounded {f} with {\|f\|_{U^k(G)} \geq \eta} there exists {\tilde f \in {\mathcal G}} such that

\displaystyle  \|f-\tilde f\|_{L^\infty(G)} \leq \eta^{2^k}/2^{k+3}.

Now, let {A_1,\dots,A_M} be independent samples of {A} for some {M} to be chosen later. By the preceding discussion, we see that with probability at least {1 - 2^{-M}}, we have

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle \tilde f, \tilde F \rangle| \geq \eta^{2^k}/2^{k+2}

for any given {\tilde f \in {\mathcal G}}, so by the union bound, if we choose {M = \lfloor C N \log \frac{1}{\eta} \rfloor} for a large enough {C = C_k}, we can find {A_1,\dots,A_M} such that

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle \tilde f, \tilde F \rangle| \geq \eta^{2^k}/2^{k+2}

for all {\tilde f \in {\mathcal G}}, and hence by the triangle inequality

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{k+3}.

Taking {{\mathcal F}} to be the union of the {{\mathcal F}_{A_j}} (applying some truncation and rescaling to these {L^2}-bounded functions to make them {L^\infty}-bounded, and then {1}-bounded), we obtain the claim. \Box
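Here is a toy numerical illustration of this sampling idea in Python (assuming NumPy), in the simplest case {k=2} and {G = {\bf Z}/N{\bf Z}}, with {f} a linear phase so that {\|f\|_{U^2(G)}^4} is exactly {1}:

```python
# Randomly sampled Gowers-type average versus the exact U^2 average (k = 2).
import numpy as np

rng = np.random.default_rng(1)
N, p = 60, 0.5
n = np.arange(N)
f = np.exp(2j * np.pi * 3 * n / N)     # a linear phase, so ||f||_{U^2}^4 = 1
A = rng.random(N) < p                  # random sampling set of density about p
g = (A / p) * f                        # the reweighted restriction (1/p) 1_A f

def u2_type_average(f0, f1, f2, f3):
    # E_{n,h1,h2} f0(n) conj(f1(n+h1)) conj(f2(n+h2)) f3(n+h1+h2)
    total = 0.0
    for h1 in range(N):
        for h2 in range(N):
            total += np.mean(f0 * np.conj(np.roll(f1, -h1))
                             * np.conj(np.roll(f2, -h2)) * np.roll(f3, -h1 - h2))
    return total / N ** 2

exact = u2_type_average(f, f, f, f)    # this is ||f||_{U^2}^4
sampled = u2_type_average(f, g, g, g)  # shifted copies replaced by (1/p) 1_A f
print(abs(exact), abs(sampled))        # the two values should be close
```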

One way to obtain lower bounds on the inverse theorem entropy is to produce a collection of almost orthogonal functions with large norm. More precisely:

Proposition 4 Let {\| \|} be a seminorm, let {0 < \varepsilon \leq \eta < 1}, and suppose that one has a collection {f_1,\dots,f_M} of {1}-bounded functions with {\|f_i\| \geq \eta} for all {i=1,\dots,M}, such that for each {i} one has {|\langle f_i, f_j \rangle| \leq \varepsilon^2/2} for all but at most {L} choices of {j \in \{1,\dots,M\}}. Then the {(\eta, \varepsilon)}-entropy of {\| \|} is at least {\varepsilon^2 M / 2L}.

Proof: Suppose we have an {(\eta,\varepsilon)}-inverse theorem with some family {{\mathcal F}}. Then for each {i=1,\dots,M} there is {F_i \in {\mathcal F}} such that {|\langle f_i, F_i \rangle| \geq \varepsilon}. By the pigeonhole principle, there is thus {F \in {\mathcal F}} such that {|\langle f_i, F \rangle| \geq \varepsilon} for all {i} in a subset {I} of {\{1,\dots,M\}} of cardinality at least {M/|{\mathcal F}|}:

\displaystyle  |I| \geq M / |{\mathcal F}|.

We can sum this to obtain

\displaystyle  |\sum_{i \in I} c_i \langle f_i, F \rangle| \geq |I| \varepsilon

for some complex numbers {c_i} of unit magnitude. By Cauchy-Schwarz, this implies

\displaystyle  \| \sum_{i \in I} c_i f_i \|_{L^2(G)}^2 \geq |I|^2 \varepsilon^2

and hence by the triangle inequality

\displaystyle  \sum_{i,j \in I} |\langle f_i, f_j \rangle| \geq |I|^2 \varepsilon^2.

On the other hand, by hypothesis we can bound the left-hand side by {|I| (L + \varepsilon^2 |I|/2)}. Rearranging, we conclude that

\displaystyle  |I| \leq 2 L / \varepsilon^2

and hence

\displaystyle  |{\mathcal F}| \geq \varepsilon^2 M / 2L

giving the claim. \Box

Thus for instance:

  • For the {U^2(G)} norm, one can take {f_1,\dots,f_M} to be the family of linear exponential phases {n \mapsto e(\xi \cdot n)} with {M = N} and {L=1}, and obtain a linear lower bound of {\varepsilon^2 N/2} for the {(\eta,\varepsilon)}-entropy, thus matching the upper bound of {N} up to constants when {\varepsilon} is fixed.
  • For the {U^k({\bf Z}/N{\bf Z})} norm, a similar calculation using polynomial phases of degree {k-1}, combined with the Weyl sum estimates, gives a lower bound of {\gg_{k,\varepsilon} N^{k-1}} for the {(\eta,\varepsilon)}-entropy for any fixed {\eta,\varepsilon}; by considering nilsequences as well, together with nilsequence equidistribution theory, one can replace the exponent {k-1} here by some quantity that goes to infinity as {\eta \rightarrow 0}, though I have not attempted to calculate the exact rate.
  • For the {U^k({\bf F}_p^n)} norm, another similar calculation using polynomial phases of degree {k-1} should give a lower bound of {\gg_{p,k,\eta,\varepsilon} \exp( c_{p,k,\eta,\varepsilon} n^{k-1} )} for the {(\eta,\varepsilon)}-entropy, though I have not fully performed the calculation.

We close with one final example. Suppose {G} is a product {G = A \times B} of two sets {A,B} of cardinality {\asymp \sqrt{N}}, and we consider the Gowers box norm

\displaystyle  \|f\|_{\Box^2(G)}^4 := {\bf E}_{a,a' \in A; b,b' \in B} f(a,b) \overline{f}(a,b') \overline{f}(a',b) f(a',b').

One possible choice of class {{\mathcal F}} here is the collection of indicators {1_{U \times V}} of “rectangles” {U \times V} with {U \subset A}, {V \subset B} (cf. this previous blog post on cut norms). By standard calculations, one can use this class to show that the {(\eta, \eta^4/10)}-entropy of {\| \|_{\Box^2(G)}} is {O( \exp( O(\sqrt{N}) ) )}, and a variant of the proof of the second part of Proposition 2 shows that this is the correct order of growth in {N}. In contrast, a modification of Proposition 3 only gives an upper bound of the form {O( \exp( O( N^{2/3} ) ) )} (the bottleneck is ensuring that the randomly sampled dual functions stay bounded in {L^2}), which shows that while this cheap bound is not optimal, it can still broadly give the correct “type” of bound (specifically, intermediate growth between polynomial and exponential).

In orthodox first-order logic, variables and expressions are only allowed to take one value at a time; a variable {x}, for instance, is not allowed to equal {+3} and {-3} simultaneously. We will call such variables completely specified. If one really wants to deal with multiple values of objects simultaneously, one is encouraged to use the language of set theory and/or logical quantifiers to do so.

However, the ability to allow expressions to become only partially specified is undeniably convenient, and also rather intuitive. A classic example here is that of the quadratic formula:

\displaystyle  \hbox{If } x,a,b,c \in {\bf R} \hbox{ with } a \neq 0, \hbox{ then }

\displaystyle  ax^2+bx+c=0 \hbox{ if and only if } x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}. \ \ \ \ \ (1)

Strictly speaking, the expression {x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}} is not well-formed according to the grammar of first-order logic; one should instead use something like

\displaystyle x = \frac{-b - \sqrt{b^2-4ac}}{2a} \hbox{ or } x = \frac{-b + \sqrt{b^2-4ac}}{2a}

or

\displaystyle x \in \left\{ \frac{-b - \sqrt{b^2-4ac}}{2a}, \frac{-b + \sqrt{b^2-4ac}}{2a} \right\}

or

\displaystyle x = \frac{-b + \epsilon \sqrt{b^2-4ac}}{2a} \hbox{ for some } \epsilon \in \{-1,+1\}

in order to strictly adhere to this grammar. But none of these three reformulations are as compact or as conceptually clear as the original one. In a similar spirit, a mathematical English sentence such as

\displaystyle  \hbox{The sum of two odd numbers is an even number} \ \ \ \ \ (2)

is also not a first-order sentence; one would instead have to write something like

\displaystyle  \hbox{For all odd numbers } x, y, \hbox{ the number } x+y \hbox{ is even} \ \ \ \ \ (3)

or

\displaystyle  \hbox{For all odd numbers } x,y \hbox{ there exists an even number } z \ \ \ \ \ (4)

\displaystyle  \hbox{ such that } x+y=z

instead. These reformulations are not all that hard to decipher, but they do have the aesthetically displeasing effect of cluttering an argument with temporary variables such as {x,y,z} which are used once and then discarded.

Another example of partially specified notation is the innocuous {\ldots} notation. For instance, the assertion

\displaystyle \pi=3.14\ldots,

when written formally using first-order logic, would become something like

\displaystyle \pi = 3 + \frac{1}{10} + \frac{4}{10^2} + \sum_{n=3}^\infty \frac{a_n}{10^n} \hbox{ for some sequence } (a_n)_{n=3}^\infty

\displaystyle  \hbox{ with } a_n \in \{0,1,2,3,4,5,6,7,8,9\} \hbox{ for all } n,

which is not exactly an elegant reformulation. Similarly with statements such as

\displaystyle \tan x = x + \frac{x^3}{3} + \ldots \hbox{ for } |x| < \pi/2

or

\displaystyle \tan x = x + \frac{x^3}{3} + O(|x|^5) \hbox{ for } |x| < \pi/2.

Below the fold I’ll try to assign a formal meaning to partially specified expressions such as (1), for instance allowing one to condense (2), (3), (4) to just

\displaystyle  \hbox{odd} + \hbox{odd} = \hbox{even}.

When combined with another common (but often implicit) extension of first-order logic, namely the ability to reason using ambient parameters, we become able to formally introduce asymptotic notation such as the big-O notation {O()} or the little-o notation {o()}. We will explain how to do this at the end of this post.

Read the rest of this entry »

A popular way to visualise relationships between some finite number of sets is via Venn diagrams, or more generally Euler diagrams. In these diagrams, a set is depicted as a two-dimensional shape such as a disk or a rectangle, and the various Boolean relationships between these sets (e.g., that one set is contained in another, or that the intersection of two of the sets is equal to a third) are represented by the Boolean algebra of these shapes; Venn diagrams correspond to the case where the sets are in “general position” in the sense that all non-trivial Boolean combinations of the sets are non-empty. For instance, to depict the general situation of two sets {A,B} together with their intersection {A \cap B} and union {A \cup B}, one might use a Venn diagram such as

[Figure: Venn diagram of two sets {A} and {B} in general position]

(where we have given each region depicted a different color, and moved the edges of each region a little away from each other in order to make them all visible separately), but if one wanted to instead depict a situation in which the intersection {A \cap B} was empty, one could use an Euler diagram such as

[Figure: Euler diagram of two disjoint sets {A} and {B}]

One can use the area of various regions in a Venn or Euler diagram as a heuristic proxy for the cardinality {|A|} (or measure {\mu(A)}) of the set {A} corresponding to such a region. For instance, the above Venn diagram can be used to intuitively justify the inclusion-exclusion formula

\displaystyle  |A \cup B| = |A| + |B| - |A \cap B|

for finite sets {A,B}, while the above Euler diagram similarly justifies the special case

\displaystyle  |A \cup B| = |A| + |B|

for finite disjoint sets {A,B}.

While Venn and Euler diagrams are traditionally two-dimensional in nature, there is nothing preventing one from using one-dimensional diagrams such as

[Figure: a one-dimensional Venn diagram]

or even three-dimensional diagrams such as this one from Wikipedia:

[Figure: a three-dimensional Venn diagram (from Wikipedia)]

Of course, in such cases one would use length or volume as a heuristic proxy for cardinality or measure, rather than area.

With the addition of arrows, Venn and Euler diagrams can also accommodate (to some extent) functions between sets. Here for instance is a depiction of a function {f: A \rightarrow B}, the image {f(A)} of that function, and the image {f(A')} of some subset {A'} of {A}:

[Figure: a function {f: A \rightarrow B}, together with the images {f(A)} and {f(A')}]

Here one can illustrate surjectivity of {f: A \rightarrow B} by having {f(A)} fill out all of {B}; one can similarly illustrate injectivity of {f} by giving {f(A)} exactly the same shape (or at least the same area) as {A}. So here for instance might be how one would illustrate an injective function {f: A \rightarrow B}:

[Figure: an injective function {f: A \rightarrow B}]

Cartesian product operations can be incorporated into these diagrams by appropriate combinations of one-dimensional and two-dimensional diagrams. Here for instance is a diagram that illustrates the identity {(A \cup B) \times C = (A \times C) \cup (B \times C)}:

[Figure: a diagram illustrating the identity {(A \cup B) \times C = (A \times C) \cup (B \times C)}]

In this blog post I would like to propose a similar family of diagrams to illustrate relationships between vector spaces (over a fixed base field {k}, such as the reals) or abelian groups, rather than sets. The categories of ({k}-)vector spaces and abelian groups are quite similar in many ways; the former consists of modules over a base field {k}, while the latter consists of modules over the integers {{\bf Z}}; also, both categories are basic examples of abelian categories. The notion of a dimension in a vector space is analogous in many ways to that of cardinality of a set; see this previous post for an instance of this analogy (in the context of Shannon entropy). (UPDATE: I have learned that an essentially identical notation has also been proposed in an unpublished manuscript of Ravi Vakil.)

Read the rest of this entry »

In everyday usage, we rely heavily on percentages to quantify probabilities and proportions: we might say that a prediction is {50\%} accurate or {80\%} accurate, that there is a {2\%} chance of dying from some disease, and so forth. However, for those without extensive mathematical training, it can sometimes be difficult to assess whether a given percentage amounts to a “good” or “bad” outcome, because this depends very much on the context of how the percentage is used. For instance:

  • (i) In a two-party election, an outcome of say {51\%} to {49\%} might be considered close, but {55\%} to {45\%} would probably be viewed as a convincing mandate, and {60\%} to {40\%} would likely be viewed as a landslide.
  • (ii) Similarly, if one were to poll an upcoming election, a poll of {51\%} to {49\%} would be too close to call, {55\%} to {45\%} would be an extremely favorable result for the candidate, and {60\%} to {40\%} would mean that it would be a major upset if the candidate lost the election.
  • (iii) On the other hand, a medical operation that only had a {51\%}, {55\%}, or {60\%} chance of success would be viewed as being incredibly risky, especially if failure meant death or permanent injury to the patient. Even an operation that was {90\%} or {95\%} likely to be non-fatal (i.e., a {10\%} or {5\%} chance of death) would not be conducted lightly.
  • (iv) A weather prediction of, say, {30\%} chance of rain during a vacation trip might be sufficient cause to pack an umbrella, even though it is more likely than not that rain would not occur. On the other hand, if the prediction was for an {80\%} chance of rain, and it ended up that the skies remained clear, this does not seriously damage the accuracy of the prediction – indeed, such an outcome would be expected in one out of every five such predictions.
  • (v) Even extremely tiny percentages of toxic chemicals in everyday products can be considered unacceptable. For instance, EPA rules require action to be taken when the percentage of lead in drinking water exceeds {0.0000015\%} (15 parts per billion). At the opposite extreme, recycling contamination rates as high as {10\%} are often considered acceptable.

Because of all the very different ways in which percentages could be used, I think it may make sense to propose an alternate system of units to measure one class of probabilities, namely the probabilities of avoiding some highly undesirable outcome, such as death, accident or illness. The units I propose are that of “nines“, which are already commonly used to measure availability of some service or purity of a material, but can be equally used to measure the safety (i.e., lack of risk) of some activity. Informally, nines measure how many consecutive appearances of the digit {9} are in the probability of successfully avoiding the negative outcome, thus

  • {90\%} success = one nine of safety
  • {99\%} success = two nines of safety
  • {99.9\%} success = three nines of safety
and so forth.

Using the mathematical device of logarithms, one can also assign a fractional number of nines of safety to a general probability:

Definition 1 (Nines of safety) An activity (affecting one or more persons, over some given period of time) that has a probability {p} of the “safe” outcome and probability {1-p} of the “unsafe” outcome will have {k} nines of safety against the unsafe outcome, where {k} is defined by the formula

\displaystyle  k = -\log_{10}(1-p) \ \ \ \ \ (1)

(where {\log_{10}} is the logarithm to base ten), or equivalently

\displaystyle  p = 1 - 10^{-k}. \ \ \ \ \ (2)

Remark 2 Because of the various uncertainties in measuring probabilities, as well as the inaccuracies in some of the assumptions and approximations we will be making later, we will not attempt to measure the number of nines of safety beyond the first decimal point; thus we will round to the nearest tenth of a nine of safety throughout this post.
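For reference, here is a minimal Python helper implementing Definition 1 (with the rounding convention of Remark 2); the sample values can be checked against the conversion table below:

```python
# Nines of safety from a success probability, and back again.
import math

def nines(p):
    """Nines of safety for success probability p, as in (1), rounded to one decimal."""
    return math.inf if p == 1 else round(-math.log10(1 - p), 1)

def success_rate(k):
    """Success probability corresponding to k nines of safety, as in (2)."""
    return 1 - 10 ** (-k)

print(nines(0.95))        # 1.3
print(nines(0.999))       # 3.0
print(success_rate(2.0))  # 0.99
```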

Here is a conversion table between percentage rates of success (the safe outcome), failure (the unsafe outcome), and the number of nines of safety one has:

Success rate {p} Failure rate {1-p} Number of nines {k}
{0\%} {100\%} {0.0}
{50\%} {50\%} {0.3}
{75\%} {25\%} {0.6}
{80\%} {20\%} {0.7}
{90\%} {10\%} {1.0}
{95\%} {5\%} {1.3}
{97.5\%} {2.5\%} {1.6}
{98\%} {2\%} {1.7}
{99\%} {1\%} {2.0}
{99.5\%} {0.5\%} {2.3}
{99.75\%} {0.25\%} {2.6}
{99.8\%} {0.2\%} {2.7}
{99.9\%} {0.1\%} {3.0}
{99.95\%} {0.05\%} {3.3}
{99.975\%} {0.025\%} {3.6}
{99.98\%} {0.02\%} {3.7}
{99.99\%} {0.01\%} {4.0}
{100\%} {0\%} infinite

Thus, if one has no nines of safety whatsoever, one is guaranteed to fail; but each nine of safety one has reduces the failure rate by a factor of {10}. In an ideal world, one would have infinitely many nines of safety against any risk, but in practice there are no {100\%} guarantees against failure, and so one can only expect a finite amount of nines of safety in any given situation. Realistically, one should thus aim to have as many nines of safety as one can reasonably expect to have, but not to demand an infinite amount.

Remark 3 The number of nines of safety against a certain risk is not absolute; it will depend not only on the risk itself, but (a) the number of people exposed to the risk, and (b) the length of time one is exposed to the risk. Exposing more people or increasing the duration of exposure will reduce the number of nines, and conversely exposing fewer people or reducing the duration will increase the number of nines; see Proposition 7 below for a rough rule of thumb in this regard.

Remark 4 Nines of safety are a logarithmic scale of measurement, rather than a linear scale. Other familiar examples of logarithmic scales of measurement include the Richter scale of earthquake magnitude, the pH scale of acidity, the decibel scale of sound level, octaves in music, and the magnitude scale for stars.

Remark 5 One way to think about nines of safety is via the Swiss cheese model that was created recently to describe pandemic risk management. In this model, each nine of safety can be thought of as a slice of Swiss cheese, with holes occupying {10\%} of that slice. Having {k} nines of safety is then analogous to standing behind {k} such slices of Swiss cheese. In order for a risk to actually impact you, it must pass through each of these {k} slices. A fractional nine of safety corresponds to a fractional slice of Swiss cheese that covers the amount of space given by the above table. For instance, {0.6} nines of safety corresponds to a fractional slice that covers about {75\%} of the given area (leaving {25\%} uncovered).

Now to give some real-world examples of nines of safety. Using data for deaths in the US in 2019 (without attempting to account for factors such as age and gender), a random US citizen will have had the following amount of safety from dying from some selected causes in that year:

Cause of death Mortality rate per {100,\! 000} (approx.) Nines of safety
All causes {870} {2.0}
Heart disease {200} {2.7}
Cancer {180} {2.7}
Accidents {52} {3.3}
Drug overdose {22} {3.7}
Influenza/Pneumonia {15} {3.8}
Suicide {14} {3.8}
Gun violence {12} {3.9}
Car accident {11} {4.0}
Murder {5} {4.3}
Airplane crash {0.14} {5.9}
Lightning strike {0.006} {7.2}

The safety of air travel is particularly remarkable: a given hour of flying in general aviation has a fatality rate of {0.00001}, or about {5} nines of safety, while for the major carriers the fatality rate drops down to {0.0000005}, or about {6.3} nines of safety.

Of course, in 2020, COVID-19 deaths became significant. In this year in the US, the mortality rate for COVID-19 (as the underlying or contributing cause of death) was {91.5} per {100,\! 000}, corresponding to {3.0} nines of safety, which was less safe than all other causes of death except for heart disease and cancer. At this time of writing, data for all of 2021 is of course not yet available, but it seems likely that the safety level would be even lower for this year.

Some further illustrations of the concept of nines of safety:

  • Each round of Russian roulette has a success rate of {5/6}, providing only {0.8} nines of safety. Of course, the safety will decrease with each additional round: one has only {0.5} nines of safety after two rounds, {0.4} nines after three rounds, and so forth. (See also Proposition 7 below.)
  • The ancient Roman punishment of decimation, by definition, provided exactly one nine of safety to each soldier being punished.
  • Rolling a {1} on a {20}-sided die is a risk that carries about {1.3} nines of safety.
  • Rolling a double one (“snake eyes“) from two six-sided dice carries about {1.6} nines of safety.
  • One has about {2.6} nines of safety against the risk of someone randomly guessing your birthday on the first attempt.
  • A null hypothesis has {1.3} nines of safety against producing a {p = 0.05} statistically significant result, and {2.0} nines against producing a {p=0.01} statistically significant result. (However, one has to be careful when reversing the conditional; a {p=0.01} statistically significant result does not necessarily have {2.0} nines of safety against the null hypothesis. In Bayesian statistics, the precise relationship between the two risks is given by Bayes’ theorem.)
  • If a poker opponent is dealt a five-card hand, one has {5.8} nines of safety against that opponent being dealt a royal flush, {4.8} against a straight flush or higher, {3.6} against four-of-a-kind or higher, {2.8} against a full house or higher, {2.4} against a flush or higher, {2.1} against a straight or higher, {1.5} against three-of-a-kind or higher, {1.1} against two pairs or higher, and just {0.3} against one pair or higher. (This data was converted from this Wikipedia table.)
  • A {k}-digit PIN number (or a {k}-digit combination lock) carries {k} nines of safety against each attempt to randomly guess the PIN. A length {k} password that allows for numbers, upper and lower case letters, and punctuation carries about {2k} nines of safety against a single guess. (For the reduction in safety caused by multiple guesses, see Proposition 7 below.)

Here is another way to think about nines of safety:

Proposition 6 (Nines of safety extend expected onset of risk) Suppose a certain risky activity has {k} nines of safety. If one repeatedly indulges in this activity until the risk occurs, then the expected number of trials before the risk occurs is {10^k}.

Proof: The probability that the risk is activated after exactly {n} trials is {(1-10^{-k})^{n-1} 10^{-k}}, which is a geometric distribution of parameter {10^{-k}}. The claim then follows from the standard properties of that distribution. \Box
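Here is a short Python sketch of this conversion, which is how the rows of the table below can be reproduced:

```python
# Expected onset of risk (in days) from a daily number of nines of safety.
def expected_onset_in_days(k):
    return 10 ** k      # mean of a geometric distribution with parameter 10^(-k)

for k in (0, 0.8, 1.5, 2.6, 3.3, 4.6):
    days = expected_onset_in_days(k)
    print(f"{k} daily nines: about {days:.0f} day(s), or {days / 365.25:.2f} years")
```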

Thus, for instance, if one performs some risky activity daily, then the expected length of time before the risk occurs is given by the following table:

Daily nines of safety Expected onset of risk
{0} One day
{0.8} One week
{1.5} One month
{2.6} One year
{2.9} Two years
{3.3} Five years
{3.6} Ten years
{3.9} Twenty years
{4.3} Fifty years
{4.6} A century

Or, if one wants to convert the yearly risks of dying from a specific cause into expected years before that cause of death would occur (assuming for sake of discussion that no other cause of death exists):

Yearly nines of safety Expected onset of risk
{0} One year
{0.3} Two years
{0.7} Five years
{1} Ten years
{1.3} Twenty years
{1.7} Fifty years
{2.0} A century

These tables suggest a relationship between the amount of safety one would have in a short timeframe, such as a day, and a longer time frame, such as a year. Here is an approximate formalisation of that relationship:

Proposition 7 (Repeated exposure reduces nines of safety) If a risky activity with {k} nines of safety is (independently) repeated {m} times, then (assuming {k} is large enough depending on {m}), the repeated activity will have approximately {k - \log_{10} m} nines of safety. Conversely: if the repeated activity has {k'} nines of safety, the individual activity will have approximately {k' + \log_{10} m} nines of safety.

Proof: An activity with {k} nines of safety will be safe with probability {1-10^{-k}}, hence safe with probability {(1-10^{-k})^m} if repeated independently {m} times. For {k} large, we can approximate

\displaystyle  (1 - 10^{-k})^m \approx 1 - m 10^{-k} = 1 - 10^{-(k - \log_{10} m)}

giving the former claim. The latter claim follows from inverting the former. \Box
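Here is a quick Python comparison of the exact and approximate safety levels under repetition; as expected, the approximation starts to degrade once {m 10^{-k}} is no longer small:

```python
# Exact versus approximate nines of safety after m independent repetitions.
import math

def exact_repeated_nines(k, m):
    return -math.log10(1 - (1 - 10 ** (-k)) ** m)

def approx_repeated_nines(k, m):
    return k - math.log10(m)

for k, m in [(3.0, 7), (3.0, 30), (3.0, 365)]:
    print(k, m, round(exact_repeated_nines(k, m), 2), round(approx_repeated_nines(k, m), 2))
```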

Remark 8 The hypothesis of independence here is key. If there is a lot of correlation between the risks between different repetitions of the activity, then there can be much less reduction in safety caused by that repetition. As a simple example, suppose that {90\%} of a workforce are trained to perform some task flawlessly no matter how many times they repeat the task, but the remaining {10\%} are untrained and will always fail at that task. If one selects a random worker and asks them to perform the task, one has {1.0} nines of safety against the task failing. If one took that same random worker and asked them to perform the task {m} times, the above proposition might suggest that the number of nines of safety would drop to approximately {1.0 - \log_{10} m}; but in this case there is perfect correlation, and in fact the number of nines of safety remains steady at {1.0} since it is the same {10\%} of the workforce that would fail each time.

Because of this caveat, one should view the above proposition as only a crude first approximation that can be used as a simple rule of thumb, but should not be relied upon for more precise calculations.

One can repeat a risk either in time (extending the time of exposure to the risk, say from a day to a year), or in space (by exposing the risk to more people). The above proposition then gives an additive conversion law for nines of safety in either case. Here are some conversion tables for time:

From/to Daily Weekly Monthly Yearly
Daily 0 -0.8 -1.5 -2.6
Weekly +0.8 0 -0.6 -1.7
Monthly +1.5 +0.6 0 -1.1
Yearly +2.6 +1.7 +1.1 0

From/to Yearly Per 5 yr Per decade Per century
Yearly 0 -0.7 -1.0 -2.0
Per 5 yr +0.7 0 -0.3 -1.3
Per decade +1.0 +0.3 0 -1.0
Per century +2.0 +1.3 +1.0 0

For instance, as mentioned before, the yearly amount of safety against cancer is about {2.7}. Using the above table (and making the somewhat unrealistic hypothesis of independence), we then predict the daily amount of safety against cancer to be about {2.7 + 2.6 = 5.3} nines, the weekly amount to be about {2.7 + 1.7 = 4.4} nines, and the amount of safety over five years to drop to about {2.7 - 0.7 = 2.0} nines.

Now we turn to conversions in space. If one knows the level of safety against a certain risk for an individual, and then one (independently) exposes a group of such individuals to that risk, then the reduction in nines of safety when considering the possibility that at least one group member experiences this risk is given by the following table:

Group Reduction in safety
You ({1} person) {0}
You and your partner ({2} people) {-0.3}
You and your parents ({3} people) {-0.5}
You, your partner, and three children ({5} people) {-0.7}
An extended family of {10} people {-1.0}
A class of {30} people {-1.5}
A workplace of {100} people {-2.0}
A school of {1,\! 000} people {-3.0}
A university of {10,\! 000} people {-4.0}
A town of {100,\! 000} people {-5.0}
A city of {1} million people {-6.0}
A state of {10} million people {-7.0}
A country of {100} million people {-8.0}
A continent of {1} billion people {-9.0}
The entire planet {-9.8}

For instance, in a given year (and making the somewhat implausible assumption of independence), you might have {2.7} nines of safety against cancer, but you and your partner collectively only have about {2.7 - 0.3 = 2.4} nines of safety against this risk, your family of five might only have about {2.7 - 0.7 = 2} nines of safety, and so forth. By the time one gets to a group of {1,\! 000} people, it actually becomes very likely that at least one member of the group will die of cancer in that year. (Here the precise conversion table breaks down, because a negative number of nines such as {2.7 - 3.0 = -0.3} is not possible, but one should interpret a prediction of a negative number of nines as an assertion that failure is very likely to happen. Also, in practice the reduction in safety is less than this rule predicts, due to correlations such as risk factors that are common to the group being considered that are incompatible with the assumption of independence.)

In the opposite direction, any reduction in exposure (either in time or space) to a risk will increase one’s safety level, as per the following table:

Reduction in exposure Additional nines of safety
{\div 1} {0}
{\div 2} {+0.3}
{\div 3} {+0.5}
{\div 5} {+0.7}
{\div 10} {+1.0}
{\div 100} {+2.0}

For instance, a five-fold reduction in exposure will reclaim about {0.7} additional nines of safety.

Here is a slightly different way to view nines of safety:

Proposition 9 Suppose that a group of {m} people are independently exposed to a given risk. If there are at most

\displaystyle  \log_{10} \frac{1}{1-2^{-1/m}}

nines of individual safety against that risk, then there is at least a {50\%} chance that one member of the group is affected by the risk.

Proof: If individually there are {k} nines of safety, then the probability that all the members of the group avoid the risk is {(1-10^{-k})^m}. Since the inequality

\displaystyle  (1-10^{-k})^m \leq \frac{1}{2}

is equivalent to

\displaystyle  k \leq \log_{10} \frac{1}{1-2^{-1/m}},

the claim follows. \Box

Thus, for a group to collectively avoid a risk with at least a {50\%} chance, one needs the following level of individual safety:

Group Individual safety level required
You ({1} person) {0.3}
You and your partner ({2} people) {0.5}
You and your parents ({3} people) {0.7}
You, your partner, and three children ({5} people) {0.9}
An extended family of {10} people {1.2}
A class of {30} people {1.6}
A workplace of {100} people {2.2}
A school of {1,\! 000} people {3.2}
A university of {10,\! 000} people {4.2}
A town of {100,\! 000} people {5.2}
A city of {1} million people {6.2}
A state of {10} million people {7.2}
A country of {100} million people {8.2}
A continent of {1} billion people {9.2}
The entire planet {10.0}

For large {m}, the level {k} of nines of individual safety required to protect a group of size {m} with probability at least {50\%} is approximately {\log_{10} \frac{m}{\ln 2} \approx (\log_{10} m) + 0.2}.
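Here is a short Python check of the exact threshold in Proposition 9 against this approximation; the {m=1} row shows that the approximation is only accurate for large {m}:

```python
# Individual safety level needed so that a group of m avoids the risk with probability 1/2.
import math

def exact_threshold(m):
    return math.log10(1 / (1 - 2 ** (-1 / m)))

def approx_threshold(m):
    return math.log10(m / math.log(2))

for m in (1, 10, 1000, 10 ** 6):
    print(m, round(exact_threshold(m), 2), round(approx_threshold(m), 2))
```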

Precautions that can work to prevent a certain risk from occurring will add additional nines of safety against that risk, even if the precaution is not {100\%} effective. Here is the precise rule:

Proposition 10 (Precautions add nines of safety) Suppose an activity carries {k} nines of safety against a certain risk, and a separate precaution can independently protect against that risk with {l} nines of safety (that is to say, the probability that the protection is effective is {1 - 10^{-l}}). Then applying that precaution increases the number of nines in the activity from {k} to {k+l}.

Proof: The probability that the precaution fails and the risk then occurs is {10^{-l} \times 10^{-k} = 10^{-(k+l)}}. The claim now follows from Definition 1. \Box

In particular, we can repurpose the table at the start of this post as a conversion chart for effectiveness of a precaution:

Effectiveness Failure rate Additional nines provided
{0\%} {100\%} {+0.0}
{50\%} {50\%} {+0.3}
{75\%} {25\%} {+0.6}
{80\%} {20\%} {+0.7}
{90\%} {10\%} {+1.0}
{95\%} {5\%} {+1.3}
{97.5\%} {2.5\%} {+1.6}
{98\%} {2\%} {+1.7}
{99\%} {1\%} {+2.0}
{99.5\%} {0.5\%} {+2.3}
{99.75\%} {0.25\%} {+2.6}
{99.8\%} {0.2\%} {+2.7}
{99.9\%} {0.1\%} {+3.0}
{99.95\%} {0.05\%} {+3.3}
{99.975\%} {0.025\%} {+3.6}
{99.98\%} {0.02\%} {+3.7}
{99.99\%} {0.01\%} {+4.0}
{100\%} {0\%} infinite

Thus for instance a precaution that is {80\%} effective will add {0.7} nines of safety, a precaution that is {99.8\%} effective will add {2.7} nines of safety, and so forth. The mRNA COVID vaccines by Pfizer and Moderna have somewhere between {88\% - 96\%} effectiveness against symptomatic COVID illness, providing about {0.9-1.4} nines of safety against that risk, and over {95\%} effectiveness against severe illness, thus adding at least {1.3} nines of safety in this regard.

A slight variant of the above rule can be stated using the concept of relative risk:

Proposition 11 (Relative risk and nines of safety) Suppose an activity carries {k} nines of safety against a certain risk, and an action multiplies the chance of failure by some relative risk {R}. Then the action removes {\log_{10} R} nines of safety (if {R > 1}) or adds {-\log_{10} R} nines of safety (if {R<1}) to the original activity.

Proof: The additional action adjusts the probability of failure from {10^{-k}} to {R \times 10^{-k} = 10^{-(k - \log_{10} R)}}. The claim now follows from Definition 1. \Box

Here is a conversion chart between relative risk and change in nines of safety:

Relative risk | Change in nines of safety
{0.01} | {+2.0}
{0.02} | {+1.7}
{0.05} | {+1.3}
{0.1} | {+1.0}
{0.2} | {+0.7}
{0.5} | {+0.3}
{1} | {0}
{2} | {-0.3}
{5} | {-0.7}
{10} | {-1.0}
{20} | {-1.3}
{50} | {-1.7}
{100} | {-2.0}

Some examples:

  • Smoking increases the fatality rate of lung cancer by a factor of about {20}, thus removing about {1.3} nines of safety from this particular risk; it also increases the fatality rates of several other diseases, though not to quite as dramatic an extent.
  • Seatbelts reduce the fatality rate in car accidents by a factor of about two, adding about {0.3} nines of safety. Airbags achieve a reduction of about {30-50\%}, adding about {0.2-0.3} additional nines of safety.
  • As far as transmission of COVID is concerned, it seems that constant use of face masks reduces transmission by a factor of about five (thus adding about {0.7} nines of safety), and similarly for constant adherence to social distancing; whereas for instance a {30\%} compliance with mask usage reduced transmission by about {10\%} (adding only {0.05} or so nines of safety).

The effect of combining multiple (independent) precautions together is cumulative; one can achieve quite a high level of safety by stacking together several precautions that individually have relatively low levels of effectiveness. Again, see the “Swiss cheese model” referred to in Remark 5. For instance, if face masks add {0.7} nines of safety against contracting COVID, social distancing adds another {0.7} nines, and the vaccine provides another {1.0} nine of safety, implementing all three mitigation methods would (assuming independence) add a net of {2.4} nines of safety against contracting COVID.
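To illustrate this additivity and Proposition 11 numerically, here is a small Python sketch (using the illustrative figures from the discussion above, and assuming independence throughout):

```python
import math

def apply_relative_risk(k: float, R: float) -> float:
    """Proposition 11: a relative risk R changes k nines of safety into k - log10(R)."""
    return k - math.log10(R)

# Stacking independent precautions: the nines simply add.
masks, distancing, vaccine = 0.7, 0.7, 1.0
print("combined additional nines:", masks + distancing + vaccine)        # 2.4

# Equivalently, multiply the failure probabilities of the individual layers.
failure = 10**-masks * 10**-distancing * 10**-vaccine
print("recomputed from failure probabilities:", -math.log10(failure))    # 2.4

# Relative risk examples from the bullet points above (starting from a baseline of 0 nines):
print("smoking (R = 20):  ", apply_relative_risk(0.0, 20))   # -1.3, i.e. removes about 1.3 nines
print("seatbelts (R = 0.5):", apply_relative_risk(0.0, 0.5)) # +0.3, i.e. adds about 0.3 nines
```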

In summary, when debating the value of a given risk mitigation measure, the correct question to ask is not quite “Is it certain to work?” or “Can it fail?”, but rather “How many extra nines of safety does it add?”.

As one final comparison between nines of safety and other standard risk measures, we give the following proposition regarding large deviations from the mean.

Proposition 12 Let {X} be a normally distributed random variable of standard deviation {\sigma}, and let {\lambda > 0}. Then the “one-sided risk” of {X} exceeding its mean {{\bf E} X} by at least {\lambda \sigma} (i.e., {X \geq {\bf E} X + \lambda \sigma}) carries

\displaystyle  -\log_{10} \frac{1 - \mathrm{erf}(\lambda/\sqrt{2})}{2}

nines of safety, the “two-sided risk” of {X} deviating (in either direction) from its mean by at least {\lambda \sigma} (i.e., {|X-{\bf E} X| \geq \lambda \sigma}) carries

\displaystyle  -\log_{10} (1 - \mathrm{erf}(\lambda/\sqrt{2}))

nines of safety, where {\mathrm{erf}} is the error function.

Proof: This is a routine calculation using the cumulative distribution function of the normal distribution. \Box

Here is a short table illustrating this proposition:

Number {\lambda} of deviations from the mean | One-sided nines of safety | Two-sided nines of safety
{0} | {0.3} | {0.0}
{1} | {0.8} | {0.5}
{2} | {1.6} | {1.3}
{3} | {2.9} | {2.6}
{4} | {4.5} | {4.2}
{5} | {6.5} | {6.2}
{6} | {9.0} | {8.7}

Thus, for instance, the risk of a five sigma event (deviating by more than five standard deviations from the mean in either direction) should carry {6.2} nines of safety assuming a normal distribution, and so one would ordinarily feel extremely safe against the possibility of such an event, unless one started doing hundreds of thousands of trials. (However, we caution that this conclusion relies heavily on the assumption that one has a normal distribution!)
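For completeness, here is how one could reproduce this table in Python (purely illustrative; note that {\mathrm{erf}} is available in the standard library as math.erf):

```python
import math

def one_sided_nines(lam: float) -> float:
    """Nines of safety against a normal variable exceeding its mean by lam sigma."""
    return -math.log10((1 - math.erf(lam / math.sqrt(2))) / 2)

def two_sided_nines(lam: float) -> float:
    """Nines of safety against deviating from the mean by at least lam sigma in either direction."""
    return -math.log10(1 - math.erf(lam / math.sqrt(2)))

for lam in range(7):
    print(f"{lam} sigma: one-sided {one_sided_nines(lam):4.1f} nines,"
          f" two-sided {two_sided_nines(lam):4.1f} nines")
# (for much larger lam, use math.erfc(lam / math.sqrt(2)) to avoid cancellation)
```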

See also this older essay I wrote on anonymity on the internet, using bits as a measure of anonymity in much the same way that nines are used here as a measure of safety.

In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms {\| \|_{U^k}} for {k=1,2,\dots}. For finitely supported functions {f: {\bf Z} \rightarrow {\bf C}}, one can define the (non-normalised) Gowers norm {\|f\|_{\tilde U^k({\bf Z})}} by the formula

\displaystyle  \|f\|_{\tilde U^k({\bf Z})}^{2^k} := \sum_{n,h_1,\dots,h_k \in {\bf Z}} \prod_{\omega_1,\dots,\omega_k \in \{0,1\}} {\mathcal C}^{\omega_1+\dots+\omega_k} f(n+\omega_1 h_1 + \dots + \omega_k h_k)

where {{\mathcal C}} denotes complex conjugation, and then on any discrete interval {[N] = \{1,\dots,N\}} and any function {f: [N] \rightarrow {\bf C}} we can define the (normalised) Gowers norm

\displaystyle  \|f\|_{U^k([N])} := \| f 1_{[N]} \|_{\tilde U^k({\bf Z})} / \|1_{[N]} \|_{\tilde U^k({\bf Z})}

where {f 1_{[N]}: {\bf Z} \rightarrow {\bf C}} is the extension of {f} by zero to all of {{\bf Z}}. Thus for instance

\displaystyle  \|f\|_{U^1([N])} = |\mathop{\bf E}_{n \in [N]} f(n)|

(which technically makes {\| \|_{U^1([N])}} a seminorm rather than a norm), and one can calculate

\displaystyle  \|f\|_{U^2([N])} \asymp (N \int_0^1 |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)|^4\ d\alpha)^{1/4} \ \ \ \ \ (1)

where {e(\theta) := e^{2\pi i \theta}}, and we use the averaging notation {\mathop{\bf E}_{n \in A} f(n) = \frac{1}{|A|} \sum_{n \in A} f(n)}.
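As a concrete illustration of these definitions (and nothing more; it is not used later in the post), here is a brute-force Python sketch of the normalised {U^1} and {U^2} norms for functions supported on a small interval {[N]}, evaluating the defining sum directly; the helper names are my own. For larger {N} one would of course work on the Fourier side as in (1) rather than with this {O(N^3)} loop.

```python
import random

def u2_unnormalised(f: dict) -> float:
    """Non-normalised Gowers U^2 norm of a finitely supported f: Z -> C, given as a
    dict {n: f(n)} (zero outside its keys).  Evaluates by brute force the sum over
    n, h1, h2 of f(n) conj(f(n+h1)) conj(f(n+h2)) f(n+h1+h2), then takes a 4th root."""
    pts = list(f)
    total = 0
    for n in pts:
        for a in pts:          # a = n + h1
            for b in pts:      # b = n + h2
                d = a + b - n  # d = n + h1 + h2
                if d in f:
                    total += f[n] * f[a].conjugate() * f[b].conjugate() * f[d]
    return abs(total) ** 0.25

def u1_norm(f: dict, N: int) -> float:
    return abs(sum(f.get(n, 0) for n in range(1, N + 1))) / N

def u2_norm(f: dict, N: int) -> float:
    ones = {n: 1 for n in range(1, N + 1)}
    return u2_unnormalised(f) / u2_unnormalised(ones)

N = 40
const = {n: 1 for n in range(1, N + 1)}                       # constant function on [N]
signs = {n: random.choice([-1, 1]) for n in range(1, N + 1)}  # random signs on [N]
print(u1_norm(const, N), u2_norm(const, N))  # both equal to 1
print(u1_norm(signs, N), u2_norm(signs, N))  # small: roughly N^(-1/2) and N^(-1/4) respectively
```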

The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials {P_1,\dots,P_m: {\bf Z}^d \rightarrow {\bf Z}} and functions {f_1,\dots,f_m: [N] \rightarrow {\bf C}}, we define the multilinear form

\displaystyle  \Lambda^{P_1,\dots,P_m}(f_1,\dots,f_m) := \sum_{n \in {\bf Z}^d} \prod_{j=1}^m f_j 1_{[N]}(P_j(n)) / \sum_{n \in {\bf Z}^d} \prod_{j=1}^m 1_{[N]}(P_j(n))

(assuming that the denominator is finite and non-zero). Thus for instance

\displaystyle  \Lambda^{\mathrm{n}}(f) = \mathop{\bf E}_{n \in [N]} f(n)

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}}(f,g) = (\mathop{\bf E}_{n \in [N]} f(n)) (\mathop{\bf E}_{n \in [N]} g(n))

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(f,g,h) \asymp \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [-N,N]} f(n) g(n+r) h(n+2r)

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+\mathrm{r}^2}(f,g,h) \asymp \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [-N^{1/2},N^{1/2}]} f(n) g(n+r) h(n+r^2)

where we view {\mathrm{n}, \mathrm{r}} as formal (indeterminate) variables, and {f,g,h: [N] \rightarrow {\bf C}} are understood to be extended by zero to all of {{\bf Z}}. These forms are used to count patterns in various sets; for instance, the quantity {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(1_A,1_A,1_A)} is closely related to the number of length three arithmetic progressions contained in {A}. Let us informally say that a form {\Lambda^{P_1,\dots,P_m}(f_1,\dots,f_m)} is controlled by the {U^k[N]} norm if the form is small whenever {f_1,\dots,f_m: [N] \rightarrow {\bf C}} are {1}-bounded functions with at least one of the {f_j} small in {U^k[N]} norm. This definition was made more precise by Gowers and Wolf, who then defined the true complexity of a form {\Lambda^{P_1,\dots,P_m}} to be the least {s} such that {\Lambda^{P_1,\dots,P_m}} is controlled by the {U^{s+1}[N]} norm. For instance,
  • {\Lambda^{\mathrm{n}}} and {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}}} have true complexity {0};
  • {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}, \mathrm{n} + \mathrm{2r}}} has true complexity {1};
  • {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}, \mathrm{n} + \mathrm{2r}, \mathrm{n} + \mathrm{3r}}} has true complexity {2};
  • The form {\Lambda^{\mathrm{n}, \mathrm{n}+2}} (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).
Roughly speaking, patterns of complexity {1} or less are amenable to being studied by classical Fourier analytic tools (the Hardy-Littlewood circle method); patterns of higher complexity can be handled (in principle, at least) by the methods of higher order Fourier analysis; and patterns of infinite complexity are out of range of both methods and are generally quite difficult to study. See these recent slides of mine (or this video of the lecture) for some further discussion.
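To make the counting interpretation of these forms concrete, here is a brute-force Python sketch (my own illustration) of {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(1_A,1_A,1_A)} for a couple of test sets {A}; note how the even numbers, being additively structured, contain about twice as many three-term progressions as a random set of the same density.

```python
import random

def ap3_form(A: set, N: int) -> float:
    """Brute-force evaluation of Lambda^{n, n+r, n+2r}(1_A, 1_A, 1_A): the number of
    pairs (n, r) with n, n+r, n+2r all in A, divided by the corresponding count for
    A = [N].  (Here r may be zero or negative, so trivial and reversed progressions
    are included.)"""
    def count(S: set) -> int:
        return sum(1 for n in range(1, N + 1) for r in range(-N, N + 1)
                   if n in S and n + r in S and n + 2 * r in S)
    return count(A) / count(set(range(1, N + 1)))

N = 200
evens = {n for n in range(1, N + 1) if n % 2 == 0}
rand = {n for n in range(1, N + 1) if random.random() < 0.5}
print(ap3_form(evens, N))  # exactly 0.25 for this N
print(ap3_form(rand, N))   # roughly 1/8 (plus the trivial r = 0 contribution), as predicted
                           # for a random set of density 1/2
```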

Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials {P_1,\dots,P_m}; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.

The {U^1} (semi-)norm is so weak that it barely controls any averages at all. For instance the average

\displaystyle  \Lambda^{2\mathrm{n}}(f) = \mathop{\bf E}_{n \in [N], \hbox{ even}} f(n)

is not controlled by the {U^1[N]} semi-norm: it is perfectly possible for a {1}-bounded function {f: [N] \rightarrow {\bf C}} to have vanishing {U^1([N])} norm and yet a large value of {\Lambda^{2\mathrm{n}}(f)} (consider for instance the parity function {f(n) := (-1)^n}).

Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the {U^1} and {U^2} norms, which I will call the {U^{1^+}} (or “profinite {U^1}”) norm:

\displaystyle  \| f\|_{U^{1^+}[N]} := \frac{1}{N} \sup_P |\sum_{n \in P} f(n)| = \sup_P | \mathop{\bf E}_{n \in [N]} f 1_P(n)|

where {P} ranges over all arithmetic progressions in {[N]}. This can easily be seen to be a norm on functions {f: [N] \rightarrow {\bf C}} that controls the {U^1[N]} norm. It is also basically controlled by the {U^2[N]} norm for {1}-bounded functions {f}; indeed, if {P} is an arithmetic progression in {[N]} of some spacing {q \geq 1}, then we can write {P} as the intersection of an interval {I} with a residue class modulo {q}, and from Fourier expansion we have

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \sup_\alpha |\mathop{\bf E}_{n \in [N]} f 1_I(n) e(\alpha n)|.

If we let {\psi} be a standard bump function supported on {[-1,1]} with total mass one, and let {\delta>0} be a parameter, then

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_I(n) e(\alpha n)

\displaystyle \ll |\mathop{\bf E}_{n \in [-N,2N]; h, k \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N})

\displaystyle  1_I(n+h+k) f(n+h+k) e(\alpha(n+h+k))|

\displaystyle  \ll |\mathop{\bf E}_{n \in [-N,2N]; h, k \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+k) f(n+h+k) e(\alpha(n+h+k))|

\displaystyle + \delta

(extending {f} by zero outside of {[N]}), as can be seen by using the triangle inequality and the estimate

\displaystyle  \mathop{\bf E}_{h \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+h+k) - \mathop{\bf E}_{h \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+k)

\displaystyle \ll (1 + \mathrm{dist}(n+k, I) / \delta N)^{-2}.

After some Fourier expansion of {\frac{1}{\delta} \psi(\frac{h}{\delta N})} we now have

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \frac{1}{\delta} \sup_{\alpha,\beta} |\mathop{\bf E}_{n \in [N]; h, k \in [-N,N]} e(\beta h + \alpha (n+h+k))

\displaystyle 1_I(n+k) f(n+h+k)| + \delta.

Writing {\beta h + \alpha(n+h+k)} as a linear combination of {n, n+h, n+k} and using the Gowers–Cauchy–Schwarz inequality, we conclude

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \frac{1}{\delta} \|f\|_{U^2([N])} + \delta

hence on optimising in {\delta} we have

\displaystyle  \| f\|_{U^{1^+}[N]} \ll \|f\|_{U^2[N]}^{1/2}.

Forms which are controlled by the {U^{1^+}} norm (but not {U^1}) would then have their true complexity adjusted to {0^+} with this insertion.
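Here is a brute-force Python sketch (purely illustrative) of this {U^{1^+}} norm, applied to the parity function from the earlier example: that function has negligible {U^1[N]} norm, but its {U^{1^+}[N]} norm equals {1/2}, as witnessed by the progression of even (or odd) numbers.

```python
def u1_plus_norm(f, N: int) -> float:
    """Brute-force U^{1+}[N] norm: (1/N) sup_P |sum_{n in P} f(n)|, with P ranging
    over all arithmetic progressions in [N], enumerated by first element a and
    spacing q while taking a running maximum over all lengths."""
    best = 0.0
    for a in range(1, N + 1):
        for q in range(1, N + 1):
            partial, n = 0, a
            while n <= N:
                partial += f(n)
                best = max(best, abs(partial))
                n += q
    return best / N

def u1_norm(f, N: int) -> float:
    return abs(sum(f(n) for n in range(1, N + 1))) / N

N = 100
parity = lambda n: (-1) ** n
print(u1_norm(parity, N))       # 0.0
print(u1_plus_norm(parity, N))  # 0.5, attained on the even (or odd) numbers in [N]
```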

The {U^{1^+}} norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+\mathrm{r}^2}(f,g,h)} had true complexity {0^+} in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function {h}; for the first two functions {f,g} one needs to localize the {U^{1^+}} norm to intervals of length {\sim \sqrt{N}}. But I will ignore this technical point to keep the exposition simple.] The weaker claim that {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}^2}(f,g)} has true complexity {0^+} is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).

The well known inverse theorem for the {U^2} norm tells us that if a {1}-bounded function {f} has {U^2[N]} norm at least {\eta} for some {0 < \eta < 1}, then there is a Fourier phase {n \mapsto e(\alpha n)} such that

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \gg \eta^2;

this follows easily from (1) and Plancherel’s theorem. Conversely, from the Gowers–Cauchy–Schwarz inequality one has

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \ll \|f\|_{U^2[N]}.

For {U^1[N]} one has a trivial inverse theorem; by definition, the {U^1[N]} norm of {f} is at least {\eta} if and only if

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n)| \geq \eta.

Thus the frequency {\alpha} appearing in the {U^2} inverse theorem can be taken to be zero when working instead with the {U^1} norm.

For {U^{1^+}} one has the intermediate situation in which the frequency {\alpha} is not taken to be zero, but is instead major arc. Indeed, suppose that {f} is {1}-bounded with {\|f\|_{U^{1^+}[N]} \geq \eta}, thus

\displaystyle  |\mathop{\bf E}_{n \in [N]} 1_P(n) f(n)| \geq \eta

for some progression {P}. This forces the spacing {q} of this progression to be {\ll 1/\eta}. We write the above inequality as

\displaystyle  |\mathop{\bf E}_{n \in [N]} 1_{n=b\ (q)} 1_I(n) f(n)| \geq \eta

for some residue class {b\ (q)} and some interval {I}. By Fourier expansion and the triangle inequality we then have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) 1_I(n) f(n)| \geq \eta

for some integer {a}. Convolving {1_I} by {\psi_\delta: n \mapsto \frac{1}{N\delta} \psi(\frac{n}{N\delta})} for {\delta} a small multiple of {\eta} and {\psi} a Schwartz function of unit mass with Fourier transform supported on {[-1,1]}, we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) (1_I * \psi_\delta)(n) f(n)| \gg \eta.

The Fourier transform {\xi \mapsto \sum_n 1_I * \psi_\delta(n) e(- \xi n)} of {1_I * \psi_\delta} is bounded by {O(N)} and supported on {[-\frac{1}{\delta N},\frac{1}{\delta N}]}, thus by Fourier expansion and the triangle inequality we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) e(-\xi n) f(n)| \gg \eta^2

for some {\xi \in [-\frac{1}{\delta N},\frac{1}{\delta N}]}, so in particular {\xi = O(\frac{1}{\eta N})}. Thus we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \gg \eta^2 \ \ \ \ \ (2)

for some {\alpha} of the major arc form {\alpha = \frac{a}{q} + O(\frac{1}{\eta N})} with {1 \leq q \leq 1/\eta}. Conversely, for {\alpha} of this form, some routine summation by parts gives the bound

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \ll \frac{q}{\eta} \|f\|_{U^{1^+}[N]} \ll \frac{1}{\eta^2} \|f\|_{U^{1^+}[N]}

so if (2) holds for a {1}-bounded {f} then one must have {\|f\|_{U^{1^+}[N]} \gg \eta^4}.
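This major arc behaviour can also be seen numerically with the same brute-force evaluation of the {U^{1^+}[N]} norm as before: a phase {e(\alpha n)} with {\alpha} within {O(1/N)} of a rational {a/q} with small denominator has {U^{1^+}[N]} norm comparable to {1/q}, whereas a phase whose frequency is far from all such rationals has much smaller norm. The following self-contained Python sketch (with frequencies chosen purely for illustration) demonstrates this:

```python
import cmath

def u1_plus_norm(f, N: int) -> float:
    """(1/N) times the sup over arithmetic progressions P in [N] of |sum_{n in P} f(n)|."""
    best = 0.0
    for a in range(1, N + 1):
        for q in range(1, N + 1):
            partial, n = 0, a
            while n <= N:
                partial += f(n)
                best = max(best, abs(partial))
                n += q
    return best / N

def phase(alpha: float):
    return lambda n: cmath.exp(2j * cmath.pi * alpha * n)

N = 300
major = phase(1 / 3 + 0.3 / N)  # major arc: a/q + O(1/N) with q = 3
minor = phase(0.3819660113)     # close to 1/phi^2, which is badly approximable by rationals
print(u1_plus_norm(major, N))   # comparable to 1/3
print(u1_plus_norm(minor, N))   # much smaller (roughly N^(-1/2) for this choice)
```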

Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes {{\mathcal F}} of functions (where each class of functions {{\mathcal F}} induces a dual norm {\| f \|_{{\mathcal F}^*} := \sup_{\phi \in {\mathcal F}} \mathop{\bf E}_{n \in [N]} f(n) \overline{\phi(n)}}):

Here I have included the three classes of functions that one can choose from for the {U^3} inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.

The Gowers norms have counterparts for measure-preserving systems {(X,T,\mu)}, known as Host-Kra seminorms. The {U^1(X)} norm can be defined for {f \in L^\infty(X)} as

\displaystyle  \|f\|_{U^1(X)} := \lim_{N \rightarrow \infty} \int_X |\mathop{\bf E}_{n \in [N]} T^n f|\ d\mu

and the {U^2} norm can be defined as

\displaystyle  \|f\|_{U^2(X)}^4 := \lim_{N \rightarrow \infty} \mathop{\bf E}_{n \in [N]} \| T^n f \overline{f} \|_{U^1(X)}^2.

The {U^1(X)} seminorm is orthogonal to the invariant factor {Z^0(X)} (generated by the (almost everywhere) invariant measurable subsets of {X}) in the sense that a function {f \in L^\infty(X)} has vanishing {U^1(X)} seminorm if and only if it is orthogonal to all {Z^0(X)}-measurable (bounded) functions. Similarly, the {U^2(X)} norm is orthogonal to the Kronecker factor {Z^1(X)}, generated by the eigenfunctions of {X} (that is to say, those {f} obeying an identity {Tf = \lambda f} for some {T}-invariant {\lambda}); for ergodic systems, it is the largest factor isomorphic to rotation on a compact abelian group. In analogy to the Gowers {U^{1^+}[N]} norm, one can then define the Host-Kra {U^{1^+}(X)} seminorm by

\displaystyle  \|f\|_{U^{1^+}(X)} := \sup_{q \geq 1} \frac{1}{q} \lim_{N \rightarrow \infty} \int_X |\mathop{\bf E}_{n \in [N]} T^{qn} f|\ d\mu;

it is orthogonal to the profinite factor {Z^{0^+}(X)}, generated by the periodic sets of {X} (or equivalently, by those eigenfunctions whose eigenvalue is a root of unity); for ergodic systems, it is the largest factor isomorphic to rotation on a profinite abelian group.
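As a quick toy example of these definitions: take {X = {\bf Z}/2{\bf Z}} with the uniform probability measure and shift {Tx := x+1}, and let {f(x) := (-1)^x}. Then {\mathop{\bf E}_{n \in [N]} T^n f} converges to zero, so that {\|f\|_{U^1(X)} = 0}; on the other hand {T^{2n} f = f} for every {n}, so the {q=2} term of the supremum already gives {\|f\|_{U^{1^+}(X)} \geq \frac{1}{2} \int_X |f|\ d\mu = \frac{1}{2}}. This is of course the ergodic theory analogue of the parity function example on {[N]} discussed earlier.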
