You are currently browsing the category archive for the ‘math.PR’ category.

If {\lambda>0}, a Poisson random variable {{\bf Poisson}(\lambda)} with mean {\lambda} is a random variable taking values in the natural numbers with probability distribution

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) = k) = e^{-\lambda} \frac{\lambda^k}{k!}.

One is often interested in bounding upper tail probabilities

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u))

for {u \geq 0}, or lower tail probabilities

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u))

for {-1 < u \leq 0}. A standard tool for this is Bennett’s inequality:

Proposition 1 (Bennett’s inequality) One has

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp(-\lambda h(u))

for {u \geq 0} and

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \leq \exp(-\lambda h(u))

for {-1 < u \leq 0}, where

\displaystyle  h(u) := (1+u) \log(1+u) - u.

From the Taylor expansion {h(u) = \frac{u^2}{2} + O(u^3)} for {u=O(1)} we conclude Gaussian type tail bounds in the regime {u = o(1)} (and in particular when {u = O(1/\sqrt{\lambda})}), in the spirit of the Chernoff, Bernstein, and Hoeffding inequalities; but in the regime where {u} is large and positive one obtains a slight gain over these other classical bounds (of {\exp(- \lambda u \log u)} type, rather than {\exp(-\lambda u)}).

Proof: We use the exponential moment method. For any {t \geq 0}, we have from Markov’s inequality that

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq e^{-t \lambda(1+u)} {\bf E} \exp( t {\bf Poisson}(\lambda) ).

A standard computation shows that the moment generating function of the Poisson distribution is given by

\displaystyle  {\bf E} \exp( t {\bf Poisson}(\lambda) ) = \exp( (e^t - 1) \lambda )

and hence

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp( (e^t - 1)\lambda - t \lambda(1+u) ).

For {u \geq 0}, it turns out that the right-hand side is optimized by setting {t = \log(1+u)}, in which case the right-hand side simplifies to {\exp(-\lambda h(u))}. This proves the first inequality; the second inequality is proven similarly (but now {u} and {t} are non-positive rather than non-negative). \Box
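As a quick numerical sanity check of Proposition 1 (my own aside, not part of the argument above), the following few lines of Python compare the exact upper tail of a Poisson random variable with the Bennett bound {\exp(-\lambda h(u))}; it assumes the scipy library is available.

```python
import math
from scipy.stats import poisson

def h(u):
    # the exponent in Bennett's inequality: h(u) = (1+u) log(1+u) - u
    return (1 + u) * math.log(1 + u) - u

lam = 100.0
for u in [0.1, 0.5, 1.0, 2.0]:
    k = math.ceil(lam * (1 + u))
    exact = poisson.sf(k - 1, lam)          # P(Poisson(lam) >= lam(1+u))
    bennett = math.exp(-lam * h(u))
    print(f"u={u}: exact tail = {exact:.3e}, Bennett bound = {bennett:.3e}")
```

In each case the exact tail is indeed smaller than the Bennett bound, with the bound becoming increasingly conservative as {u} grows (a phenomenon quantified by Proposition 3 below).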

Remark 2 Bennett’s inequality also applies for (suitably normalized) sums of bounded independent random variables. In some cases there are direct comparison inequalities available to relate those variables to the Poisson case. For instance, suppose {S = X_1 + \dots + X_n} is the sum of independent Boolean variables {X_1,\dots,X_n \in \{0,1\}} of total mean {\sum_{j=1}^n {\bf E} X_j = \lambda} and with {\sup_i {\bf P}(X_i = 1) \leq \varepsilon} for some {0 < \varepsilon < 1}. Then for any natural number {k}, we have

\displaystyle  {\bf P}(S=k) = \sum_{1 \leq i_1 < \dots < i_k \leq n} {\bf P}(X_{i_1}=1) \dots {\bf P}(X_{i_k}=1) \prod_{i \neq i_1,\dots,i_k} {\bf P}(X_i=0)

\displaystyle  \leq \frac{1}{k!} (\sum_{i=1}^n \frac{{\bf P}(X_i=1)}{{\bf P}(X_i=0)})^k \times \prod_{i=1}^n {\bf P}(X_i=0)

\displaystyle  \leq \frac{1}{k!} (\frac{\lambda}{1-\varepsilon})^k \prod_{i=1}^n \exp( - {\bf P}(X_i = 1))

\displaystyle  \leq e^{-\lambda} \frac{\lambda^k}{(1-\varepsilon)^k k!}

\displaystyle  \leq e^{\frac{\varepsilon}{1-\varepsilon} \lambda} {\bf P}( \mathbf{Poisson}(\frac{\lambda}{1-\varepsilon}) = k).

As such, for {\varepsilon} small, one can efficiently control the tail probabilities of {S} in terms of the tail probability of a Poisson random variable of mean close to {\lambda}; this is of course very closely related to the well known fact that the Poisson distribution emerges as the limit of sums of many independent boolean variables, each of which is non-zero with small probability. See this paper of Bentkus and this paper of Pinelis for some further useful (and less obvious) comparison inequalities of this type.
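As a numerical illustration of this comparison (again my own check, not from the post), one can compute the exact distribution of such a sum {S} by convolving the individual Bernoulli probability mass functions and compare it against the bound {e^{\frac{\varepsilon}{1-\varepsilon} \lambda} {\bf P}( \mathbf{Poisson}(\frac{\lambda}{1-\varepsilon}) = k)}; the particular success probabilities below are of course just an illustrative choice.

```python
import math
import numpy as np

# independent Boolean variables with small, unequal success probabilities (illustrative choice)
probs = np.array([0.01, 0.02, 0.015, 0.03, 0.005] * 40)   # n = 200 variables
lam = probs.sum()                                          # total mean lambda
eps = probs.max()                                          # sup_i P(X_i = 1)

# exact PMF of S = X_1 + ... + X_n by repeated convolution of the individual PMFs
pmf = np.array([1.0])
for p in probs:
    pmf = np.convolve(pmf, [1 - p, p])

mu = lam / (1 - eps)
for k in range(6):
    bound = math.exp(eps * lam / (1 - eps)) * math.exp(-mu) * mu**k / math.factorial(k)
    print(f"k={k}: P(S={k}) = {pmf[k]:.4e}   Poisson comparison bound = {bound:.4e}")
```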

In this note I wanted to record the observation that one can improve the Bennett bound by a small polynomial factor once one leaves the Gaussian regime {u = O(1/\sqrt{\lambda})}, in particular gaining a factor of {1/\sqrt{\lambda}} when {u \sim 1}. This observation is not difficult and is implicitly in the literature (one can extract it for instance from the much more general results of this paper of Talagrand, and the basic idea already appears in this paper of Glynn), but I was not able to find a clean version of this statement in the literature, so I am placing it here on my blog. (But if a reader knows of a reference that basically contains the bound below, I would be happy to know of it.)

Proposition 3 (Improved Bennett’s inequality) One has

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}

for {u \geq 0} and

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}

for {-1 < u \leq 0}.

Proof: We begin with the first inequality. We may assume that {u \geq 1/\sqrt{\lambda}}, since otherwise the claim follows from the usual Bennett inequality. We expand out the left-hand side as

\displaystyle  e^{-\lambda} \sum_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.

Observe that for {k \geq \lambda(1+u)} that

\displaystyle  \frac{\lambda^{k+1}}{(k+1)!} \leq \frac{1}{1+u} \frac{\lambda^{k}}{k!} .

Thus the sum is dominated by the first term times a geometric series {\sum_{j=0}^\infty \frac{1}{(1+u)^j} = 1 + \frac{1}{u}}. We can thus bound the left-hand side by

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.

By the Stirling approximation, this is

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{1}{\sqrt{k}} \frac{(e\lambda)^k}{k^k}.

The expression inside the supremum is decreasing in {k} for {k > \lambda}, thus we can bound it by

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},

which simplifies to

\displaystyle  \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}

after a routine calculation.

Now we turn to the second inequality. As before we may assume that {u \leq -1/\sqrt{\lambda}}. We first dispose of a degenerate case in which {\lambda(1+u) < 1}. Here the left-hand side is just

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) = 0 ) = e^{-\lambda}

and, since {1 + \lambda u^2 (1+u) \asymp 1} in this regime, the right-hand side is comparable to

\displaystyle  e^{-\lambda} \exp( - \lambda (1+u) \log (1+u) + \lambda(1+u) ).

Since {-\lambda(1+u) \log(1+u)} and {\lambda(1+u)} are both non-negative, we see that the right-hand side is {\gg e^{-\lambda}}, and the estimate holds in this case.

It remains to consider the regime where {u \leq -1/\sqrt{\lambda}} and {\lambda(1+u) \geq 1}. The left-hand side expands as

\displaystyle  e^{-\lambda} \sum_{k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.

The sum is dominated by its largest term (the one with {k} closest to {\lambda(1+u)}) times a geometric series {\sum_{j=-\infty}^0 \frac{1}{(1+u)^j} = \frac{1}{|u|}}. The maximal {k} is comparable to {\lambda(1+u)}, so we can bound the left-hand side by

\displaystyle  \ll e^{-\lambda} \frac{1}{|u|} \sup_{\lambda(1+u) \ll k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.

Using the Stirling approximation as before we can bound this by

\displaystyle  \ll e^{-\lambda} \frac{1}{|u|} \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},

which simplifies to

\displaystyle  \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}

after a routine calculation. \Box

The same analysis can be reversed to show that the bounds given above are basically sharp up to constants, at least when {\lambda} (and {\lambda(1+u)}) are large.
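To see the polynomial gain numerically (again a quick check of my own, assuming scipy is available), one can compare the exact upper tail at {u=1} with the classical Bennett bound as {\lambda} grows; the ratio decays like {1/\sqrt{\lambda}}, in line with Proposition 3 and the sharpness remark above.

```python
import math
from scipy.stats import poisson

def h(u):
    return (1 + u) * math.log(1 + u) - u

u = 1.0
for lam in [10, 100, 1000]:
    exact = poisson.sf(math.ceil(lam * (1 + u)) - 1, lam)   # P(Poisson(lam) >= 2*lam)
    bennett = math.exp(-lam * h(u))
    ratio = exact / bennett
    print(f"lam={lam}: exact/Bennett = {ratio:.4f},  sqrt(lam) * ratio = {math.sqrt(lam) * ratio:.3f}")
```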

An unusual lottery result made the news recently: on October 1, 2022, the PCSO Grand Lotto in the Philippines, which draws six numbers from {1} to {55} at random, managed to draw the numbers {9, 18, 27, 36, 45, 54} (though the balls were actually drawn in the order {9, 45,36, 27, 18, 54}). In other words, they drew exactly six multiples of nine from {1} to {55}. In addition, a total of {433} tickets were bought with this winning combination, whose owners then had to split the {236} million peso jackpot (about {4} million USD) among themselves. This raised enough suspicion that there were calls for an inquiry into the Philippine lottery system, including from the minority leader of the Senate.

Whenever an event like this happens, journalists often contact mathematicians to ask the question: “What are the odds of this happening?”, and in fact I myself received one such inquiry this time around. This is a number that is not too difficult to compute – in this case, the probability of the lottery producing the six numbers {9, 18, 27, 36, 45, 54} in some order turns out to be {1} in {\binom{55}{6} = 28,989,675} – and such a number is often dutifully provided to such journalists, who in turn report it as some sort of quantitative demonstration of how remarkable the event was.

But on the previous draw of the same lottery, on September 28, 2022, the unremarkable sequence of numbers {11, 26, 33, 45, 51, 55} was drawn (again in a different order), and no tickets ended up claiming the jackpot. The probability of the lottery producing the six numbers {11, 26, 33, 45, 51, 55} is also {1} in {\binom{55}{6} = 28,989,675} – just as likely or as unlikely as the October 1 numbers {9, 18, 27, 36, 45, 54}. Indeed, the whole point of drawing the numbers randomly is to make each of the {28,989,675} possible outcomes (whether they be “unusual” or “unremarkable”) equally likely. So why is it that the October 1 lottery attracted so much attention, but the September 28 lottery did not?

Part of the explanation surely lies in the unusually large number ({433}) of lottery winners on October 1, but I will set that aspect of the story aside until the end of this post. The more general points that I want to make with these sorts of situations are:

  1. The question “what are the odds of this happening?” is often easy to answer mathematically, but it is not the correct question to ask.
  2. The question “what is the probability that an alternative hypothesis is the truth” is (one of) the correct questions to ask, but is very difficult to answer (it involves both mathematical and non-mathematical considerations).
  3. The answer to the first question is one of the quantities needed to calculate the answer to the second, but it is far from the only such quantity. Most of the other quantities involved cannot be calculated exactly.
  4. However, by making some educated guesses, one can still sometimes get a very rough gauge of which events are “more surprising” than others, in that they would lead to relatively higher answers to the second question.

To explain these points it is convenient to adopt the framework of Bayesian probability. In this framework, one imagines that there are competing hypotheses to explain the world, and that one assigns a probability to each such hypothesis representing one’s belief in the truth of that hypothesis. For simplicity, let us assume that there are just two competing hypotheses to be entertained: the null hypothesis {H_0}, and an alternative hypothesis {H_1}. For instance, in our lottery example, the two hypotheses might be:

  • Null hypothesis {H_0}: The lottery is run in a completely fair and random fashion.
  • Alternative hypothesis {H_1}: The lottery is rigged by some corrupt officials for their personal gain.

At any given point in time, a person would have a probability {{\bf P}(H_0)} assigned to the null hypothesis, and a probability {{\bf P}(H_1)} assigned to the alternative hypothesis; in this simplified model where there are only two hypotheses under consideration, these probabilities must add to one, but of course if there were additional hypotheses beyond these two then this would no longer be the case.

Bayesian probability does not provide a rule for calculating the initial (or prior) probabilities {{\bf P}(H_0)}, {{\bf P}(H_1)} that one starts with; these may depend on the subjective experiences and biases of the person considering the hypothesis. For instance, one person might have quite a bit of prior faith in the lottery system, and assign the probabilities {{\bf P}(H_0) = 0.99} and {{\bf P}(H_1) = 0.01}. Another person might have quite a bit of prior cynicism, and perhaps assign {{\bf P}(H_0)=0.5} and {{\bf P}(H_1)=0.5}. One cannot use purely mathematical arguments to determine which of these two people is “correct” (or whether they are both “wrong”); it depends on subjective factors.

What Bayesian probability does do, however, is provide a rule to update these probabilities {{\bf P}(H_0)}, {{\bf P}(H_1)} in view of new information {E} to provide posterior probabilities {{\bf P}(H_0|E)}, {{\bf P}(H_1|E)}. In our example, the new information {E} would be the fact that the October 1 lottery numbers were {9, 18, 27, 36, 45, 54} (in some order). The update is given by the famous Bayes theorem

\displaystyle  {\bf P}(H_0|E) = \frac{{\bf P}(E|H_0) {\bf P}(H_0)}{{\bf P}(E)}; \quad {\bf P}(H_1|E) = \frac{{\bf P}(E|H_1) {\bf P}(H_1)}{{\bf P}(E)},

where {{\bf P}(E|H_0)} is the probability that the event {E} would have occurred under the null hypothesis {H_0}, and {{\bf P}(E|H_1)} is the probability that the event {E} would have occurred under the alternative hypothesis {H_1}. Let us divide the second equation by the first to cancel the {{\bf P}(E)} denominator, and obtain

\displaystyle  \frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) } = \frac{ {\bf P}(H_1) }{ {\bf P}(H_0) } \times \frac{ {\bf P}(E | H_1)}{{\bf P}(E | H_0)}. \ \ \ \ \ (1)

One can interpret {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} as the prior odds of the alternative hypothesis, and {\frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) } } as the posterior odds of the alternative hypothesis. The identity (1) then says that in order to compute the posterior odds {\frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) }} of the alternative hypothesis in light of the new information {E}, one needs to know three things:
  1. The prior odds {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} of the alternative hypothesis;
  2. The probability {\mathop{\bf P}(E|H_0)} that the event {E} occurs under the null hypothesis {H_0}; and
  3. The probability {\mathop{\bf P}(E|H_1)} that the event {E} occurs under the alternative hypothesis {H_1}.

As previously discussed, the prior odds {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} of the alternative hypothesis are subjective and vary from person to person; in the example earlier, the person with substantial faith in the lottery may only give prior odds of {\frac{0.01}{0.99} \approx 0.01} (99 to 1 against) of the alternative hypothesis, whereas the cynic might give odds of {\frac{0.5}{0.5}=1} (even odds). The probability {{\bf P}(E|H_0)} is the quantity that can often be calculated by straightforward mathematics; as discussed before, in this specific example we have

\displaystyle  \mathop{\bf P}(E|H_0) = \frac{1}{\binom{55}{6}} = \frac{1}{28,989,675}.

But this still leaves one crucial quantity that is unknown: the probability {{\bf P}(E|H_1)}. This is incredibly difficult to compute, because it requires a precise theory for how events would play out under the alternative hypothesis {H_1}, and in particular is very sensitive as to what the alternative hypothesis {H_1} actually is.

For instance, suppose we replace the alternative hypothesis {H_1} by the following very specific (and somewhat bizarre) hypothesis:

  • Alternative hypothesis {H'_1}: The lottery is rigged by a cult that worships the multiples of {9}, and views October 1 as their holiest day. On this day, they will manipulate the lottery to only select those balls that are multiples of {9}.

Under this alternative hypothesis {H'_1}, we have {{\bf P}(E|H'_1)=1}. So, when {E} happens, the odds of this alternative hypothesis {H'_1} will increase by the dramatic factor of {\frac{{\bf P}(E|H'_1)}{{\bf P}(E|H_0)} = 28,989,675}. So, for instance, someone who already was entertaining odds of {\frac{0.01}{0.99}} of this hypothesis {H'_1} would now have these odds multiply dramatically to {\frac{0.01}{0.99} \times 28,989,675 \approx 290,000}, so that the probability of {H'_1} would have jumped from a mere {1\%} to a staggering {99.9997\%}. This is about as strong a shift in belief as one could imagine. However, this hypothesis {H'_1} is so specific and bizarre that one’s prior odds of this hypothesis would be nowhere near as large as {\frac{0.01}{0.99}} (unless substantial prior evidence of this cult and its hold on the lottery system existed, of course). A more realistic prior odds for {H'_1} would be something like {\frac{10^{-10^{10}}}{1-10^{-10^{10}}}} – which is so minuscule that even multiplying it by a factor such as {28,989,675} barely moves the needle.
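For concreteness, here is the Bayesian update (1) for the hypothesis {H'_1} carried out explicitly in a few lines of Python (using, of course, the purely illustrative prior odds from the discussion above).

```python
from math import comb

prior_odds = 0.01 / 0.99                    # illustrative prior odds of H'_1
bayes_factor = comb(55, 6)                  # P(E|H'_1) / P(E|H_0) = 1 / (1 / binom(55,6))
posterior_odds = prior_odds * bayes_factor
posterior_prob = posterior_odds / (1 + posterior_odds)

print("binom(55,6)      =", comb(55, 6))    # 28,989,675
print("posterior odds  ~=", round(posterior_odds))
print("posterior prob  ~=", f"{posterior_prob:.4%}")
```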

Remark 1 The contrast between alternative hypothesis {H_1} and alternative hypothesis {H'_1} illustrates a common demagogical rhetorical technique when an advocate is trying to convince an audience of an alternative hypothesis, namely to use suggestive language (“I’m just asking questions here”) rather than precise statements in order to leave the alternative hypothesis deliberately vague. In particular, the advocate may take advantage of the freedom to use a broad formulation of the hypothesis (such as {H_1}) in order to maximize the audience’s prior odds of the hypothesis, simultaneously with a very specific formulation of the hypothesis (such as {H'_1}) in order to maximize the probability of the actual event {E} occurring under this hypothesis. (A related technique is to be deliberately vague about the hypothesized competency of some suspicious actor, so that this actor could be portrayed as being extraordinarily competent when convenient to do so, while simultaneously being portrayed as extraordinarily incompetent when that instead is the more useful hypothesis.) This can lead to wildly inaccurate Bayesian updates of this vague alternative hypothesis, and so precise formulation of such hypotheses is important if one is to approach a topic from anything remotely resembling a scientific approach. [EDIT: as pointed out to me by a reader, this technique is a Bayesian analogue of the motte and bailey fallacy.]

At the opposite extreme, consider instead the following hypothesis:

  • Alternative hypothesis {H''_1}: The lottery is rigged by some corrupt officials, who on October 1 decide to randomly determine the winning numbers in advance, share these numbers with their collaborators, and then manipulate the lottery to choose those numbers that they selected.

If these corrupt officials are indeed choosing their predetermined winning numbers randomly, then the probability {{\bf P}(E|H''_1)} would in fact be just the same probability {\frac{1}{\binom{55}{6}} = \frac{1}{28,989,675}} as {{\bf P}(E|H_0)}, and in this case the seemingly unusual event {E} would in fact have no effect on the odds of the alternative hypothesis, because it was just as unlikely for the alternative hypothesis to generate this multiples-of-nine pattern as for the null hypothesis to. In fact, one would imagine that these corrupt officials would avoid “suspicious” numbers, such as the multiples of {9}, and only choose numbers that look random, in which case {{\bf P}(E|H''_1)} would in fact be less than {{\bf P}(E|H_0)} and so the event {E} would actually lower the odds of the alternative hypothesis in this case. (In fact, one can sometimes use this tendency of fraudsters to not generate truly random data as a statistical tool to detect such fraud; violations of Benford’s law for instance can be used in this fashion, though only in situations where the null hypothesis is expected to obey Benford’s law, as discussed in this previous blog post.)

Now let us consider a third alternative hypothesis:

  • Alternative hypothesis {H'''_1}: On October 1, the lottery machine developed a fault and now only selects numbers that exhibit unusual patterns.

Setting aside the question of precisely what faulty mechanism could induce this sort of effect, it is not clear at all how to compute {{\bf P}(E|H'''_1)} in this case. Using the principle of indifference as a crude rule of thumb, one might expect

\displaystyle  {\bf P}(E|H'''_1) \approx \frac{1}{\# \{ \hbox{unusual patterns}\}}

where the denominator is the number of patterns among the possible {\binom{55}{6}} lottery outcomes that are “unusual”. Among such patterns would presumably be the multiples-of-9 pattern {9,18,27,36,45,54}, but one could easily come up with other patterns that are equally “unusual”, such as the consecutive string {11, 12, 13, 14, 15, 16}, or the first few primes {2, 3, 5, 7, 11, 13}, or the first few squares {1, 4, 9, 16, 25, 36}, and so forth. How many such unusual patterns are there? This is too vague a question to answer with any degree of precision, but as one illustrative statistic, the Online Encyclopedia of Integer Sequences (OEIS) currently hosts about {350,000} sequences. Not all of these would begin with six distinct numbers from {1} to {55}, and several of these sequences might generate the same set of six numbers, but this does suggest that patterns that one would deem to be “unusual” could number in the thousands, tens of thousands, or more. Using this guess, we would then expect the event {E} to boost the odds of this hypothesis {H'''_1} by perhaps a thousandfold or so, which is moderately impressive. But subsequent information can counteract this effect. For instance, on October 3, the same lottery produced the numbers {8, 10, 12, 14, 26, 51}, which exhibit no unusual properties (no search results in the OEIS, for instance); if we denote this event by {E'}, then we have {{\bf P}(E'|H'''_1) \approx 0} and so this new information {E'} should drive the odds for this alternative hypothesis {H'''_1} way down again.

Remark 2 This example demonstrates another demagogical rhetorical technique that one sometimes sees (particularly in political or other emotionally charged contexts), which is to cherry-pick the information presented to their audience by informing them of events {E} which have a relatively high probability of occurring under their alternative hypothesis, but withholding information about other relevant events {E'} that have a relatively low probability of occurring under their alternative hypothesis. When confronted with such new information {E'}, a common defense of a demagogue is to modify the alternative hypothesis {H_1} to a more specific hypothesis {H'_1} that can “explain” this information {E'} (“Oh, clearly we heard about {E'} because the conspiracy in fact extends to the additional organizations {X, Y, Z} that reported {E'}”), taking advantage of the vagueness discussed in Remark 1.

Let us consider a superficially similar hypothesis:

  • Alternative hypothesis {H''''_1}: On October 1, a divine being decided to send a sign to humanity by placing an unusual pattern in a lottery.

Here we (literally) stay agnostic on the prior odds of this hypothesis, and do not address the theological question of why a divine being should choose to use the medium of a lottery to send their signs. At first glance, the probability {{\bf P}(E|H''''_1)} here should be similar to the probability {{\bf P}(E|H'''_1)}, and so perhaps one could use this event {E} to improve the odds of the existence of a divine being by a factor of a thousand or so. But note carefully that the hypothesis {H''''_1} did not specify which lottery the divine being chose to use. The PCSO Grand Lotto is just one of a dozen lotteries run by the Philippine Charity Sweepstakes Office (PCSO), and of course there are over a hundred other countries and thousands of states within these countries, each of which often run their own lotteries. Taking into account these thousands or tens of thousands of additional lotteries to choose from, the probability {{\bf P}(E|H''''_1)} now drops by several orders of magnitude, and is now basically comparable to the probability {{\bf P}(E|H_0)} coming from the null hypothesis. As such one does not expect the event {E} to have a significant impact on the odds of the hypothesis {H''''_1}, despite the small-looking nature {\frac{1}{28,989,675}} of the probability {{\bf P}(E|H_0)}.

In summary, we have failed to locate any alternative hypothesis {H_1} which

  1. Has some non-negligible prior odds of being true (and in particular is not excessively specific, as with hypothesis {H'_1});
  2. Has a significantly higher probability of producing the specific event {E} than the null hypothesis; AND
  3. Does not struggle to also produce other events {E'} that have since been observed.
One needs all three of these factors to be present in order to significantly weaken the plausibility of the null hypothesis {H_0}; in the absence of these three factors, a moderately small numerical value of {{\bf P}(E|H_0)}, such as {\frac{1}{28,989,675}} does not actually do much to affect this plausibility. In this case one needs to lay out a reasonably precise alternative hypothesis {H_1} and make some actual educated guesses towards the competing probability {{\bf P}(E|H_1)} before one can lead to further conclusions. However, if {{\bf P}(E|H_0)} is insanely small, e.g., less than {10^{-1000}}, then the possibility of a previously overlooked alternative hypothesis {H_1} becomes far more plausible; as per the famous quote of Arthur Conan Doyle’s Sherlock Holmes, “When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”

We now return to the fact that for this specific October 1 lottery, there were {433} tickets that managed to select the winning numbers. Let us call this event {F}. In view of this additional information, we should now consider the ratio of the probabilities {{\bf P}(E \& F|H_1)} and {{\bf P}(E \& F|H_0)}, rather than the ratio of the probabilities {{\bf P}(E|H_1)} and {{\bf P}(E|H_0)}. If we augment the null hypothesis to

  • Null hypothesis {H'_0}: The lottery is run in a completely fair and random fashion, and the purchasers of lottery tickets also select their numbers in a completely random fashion.

Then {{\bf P}(E \& F|H'_0)} is indeed of the “insanely improbable” category mentioned previously. I was not able to get official numbers on how many tickets are purchased per lottery, but let us say for sake of argument that it is 1 million (the conclusion will not be extremely sensitive to this choice). Then the expected number of tickets that would have the winning numbers would be

\displaystyle  \frac{1 \hbox{ million}}{28,989,675} \approx 0.03

(which is broadly consistent, by the way, with the jackpot being reached every {30} draws or so), and standard probability theory suggests that the number of winners should now follow a Poisson distribution with this mean {\lambda = 0.03}. The probability of obtaining {433} winners would now be

\displaystyle  {\bf P}(F|H'_0) = \frac{\lambda^{433} e^{-\lambda}}{433!} \approx 10^{-1600}

and of course {{\bf P}(E \& F|H'_0)} would be even smaller than this. So this clearly demands some sort of explanation. But in actuality, many purchasers of lottery tickets do not select their numbers completely randomly; they often have some “lucky” numbers (e.g., based on birthdays or other personally significant dates) that they prefer to use, or choose numbers according to a simple pattern rather than go to the trouble of trying to make them truly random. So if we modify the null hypothesis to

  • Null hypothesis {H''_0}: The lottery is run in a completely fair and random fashion, but a significant fraction of the purchasers of lottery tickets only select “unusual” numbers.

then it can now become quite plausible that a highly unusual set of numbers such as {9,18,27,36,45,54} could be selected by as many as {433} purchasers of tickets; for instance, if {10\%} of the 1 million ticket holders chose to select their numbers according to some sort of pattern, then only {0.4\%} of those holders would have to pick {9,18,27,36,45,54} in order for the event {F} to hold (given {E}), and this is not extremely implausible. Given that this reasonable version of the null hypothesis already gives a plausible explanation for {F}, there does not seem to be a pressing need to locate an alternate hypothesis {H_1} that gives some other explanation (cf. Occam’s razor). [UPDATE: Indeed, given the actual layout of the tickets of this lottery, the numbers {9,18,27,36,45,54} form a diagonal, and so all that is needed in order for the modified null hypothesis {H''_0} to explain the event {F} is to postulate that a significant fraction of ticket purchasers decided to lay out their numbers in a simple geometric pattern, such as a row or diagonal.]
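Since probabilities of this size cannot be evaluated directly in floating point, here is a quick log-space computation (a sanity check of my own) of the Poisson probability of {433} winners, together with the back-of-the-envelope count used for the modified null hypothesis {H''_0}; the figure of 1 million tickets is the same assumption made above.

```python
import math
from math import comb, lgamma, log

lam = 1_000_000 / comb(55, 6)   # expected number of winning tickets (~0.0345, rounded to 0.03 above)
k = 433
# log10 of P(Poisson(lam) = k) = (k log(lam) - lam - log(k!)) / log(10)
log10_p = (k * log(lam) - lam - lgamma(k + 1)) / log(10)
print(f"log10 P(Poisson({lam:.3f}) = {k}) ~ {log10_p:.0f}")   # astronomically small, on the 10^-1600 scale

# modified null hypothesis H''_0: if 10% of the 1 million tickets use "patterned" numbers,
# then only 0.4% of those pattern players need to pick the diagonal to yield roughly 433 winners
print("expected patterned winners:", 0.10 * 0.004 * 1_000_000)
```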

Remark 3 In view of the above discussion, one can propose a systematic way to evaluate (in as objective a fashion as possible) rhetorical claims in which an advocate is presenting evidence to support some alternative hypothesis:
  1. State the null hypothesis {H_0} and the alternative hypothesis {H_1} as precisely as possible. In particular, avoid conflating an extremely broad hypothesis (such as the hypothesis {H_1} in our running example) with an extremely specific one (such as {H'_1} in our example).
  2. With the hypotheses precisely stated, give an honest estimate to the prior odds of this formulation of the alternative hypothesis.
  3. Consider if all the relevant information {E} (or at least a representative sample thereof) has been presented to you before proceeding further. If not, consider gathering more information {E'} from further sources.
  4. Estimate how likely the information {E} was to have occurred under the null hypothesis.
  5. Estimate how likely the information {E} was to have occurred under the alternative hypothesis (using exactly the same wording of this hypothesis as you did in previous steps).
  6. If the second estimate is significantly larger than the first, then you have cause to update your prior odds of this hypothesis (though if those prior odds were already vanishingly unlikely, this may not move the needle significantly). If not, the argument is unconvincing and no significant adjustment to the odds (except perhaps in a downwards direction) needs to be made.

Let {G} be a finite set of order {N}; in applications {G} will be typically something like a finite abelian group, such as the cyclic group {{\bf Z}/N{\bf Z}}. Let us define a {1}-bounded function to be a function {f: G \rightarrow {\bf C}} such that {|f(n)| \leq 1} for all {n \in G}. There are many seminorms {\| \|} of interest that one places on functions {f: G \rightarrow {\bf C}} that are bounded by {1} on {1}-bounded functions, such as the Gowers uniformity seminorms {\| \|_k} for {k \geq 1} (which are genuine norms for {k \geq 2}). All seminorms in this post will be implicitly assumed to obey this property.

In additive combinatorics, a significant role is played by inverse theorems, which abstractly take the following form for certain choices of seminorm {\| \|}, some parameters {\eta, \varepsilon>0}, and some class {{\mathcal F}} of {1}-bounded functions:

Theorem 1 (Inverse theorem template) If {f} is a {1}-bounded function with {\|f\| \geq \eta}, then there exists {F \in {\mathcal F}} such that {|\langle f, F \rangle| \geq \varepsilon}, where {\langle,\rangle} denotes the usual inner product

\displaystyle  \langle f, F \rangle := {\bf E}_{n \in G} f(n) \overline{F(n)}.

Informally, one should think of {\eta} as being somewhat small but fixed independently of {N}, {\varepsilon} as being somewhat smaller but depending only on {\eta} (and on the seminorm), and {{\mathcal F}} as representing the “structured functions” for these choices of parameters. There is some flexibility in exactly how to choose the class {{\mathcal F}} of structured functions, but intuitively an inverse theorem should become more powerful when this class is small. Accordingly, let us define the {(\eta,\varepsilon)}-entropy of the seminorm {\| \|} to be the least cardinality of {{\mathcal F}} for which such an inverse theorem holds. Seminorms with low entropy are ones for which inverse theorems can be expected to be a useful tool. This concept arose in some discussions I had with Ben Green many years ago, but never appeared in print, so I decided to record some observations we had on this concept here on this blog.

Lebesgue norms {\| f\|_{L^p} := ({\bf E}_{n \in G} |f(n)|^p)^{1/p}} for {1 < p < \infty} have exponentially large entropy (and so inverse theorems are not expected to be useful in this case):

Proposition 2 ({L^p} norm has exponentially large inverse entropy) Let {1 < p < \infty} and {0 < \eta < 1}. Then the {(\eta,\eta^p/4)}-entropy of {\| \|_{L^p}} is at most {(1+8/\eta^p)^N}. Conversely, for any {\varepsilon>0}, the {(\eta,\varepsilon)}-entropy of {\| \|_{L^p}} is at least {\exp( c \varepsilon^2 N)} for some absolute constant {c>0}.

Proof: If {f} is {1}-bounded with {\|f\|_{L^p} \geq \eta}, then we have

\displaystyle  |\langle f, |f|^{p-2} f \rangle| \geq \eta^p

and hence by the triangle inequality we have

\displaystyle  |\langle f, F \rangle| \geq \eta^p/2

where {F} is either the real or imaginary part of {|f|^{p-2} f}, which takes values in {[-1,1]}. If we let {\tilde F} be {F} rounded to the nearest multiple of {\eta^p/4}, then by the triangle inequality again we have

\displaystyle  |\langle f, \tilde F \rangle| \geq \eta^p/4.

There are only at most {1+8/\eta^p} possible values for each value {\tilde F(n)} of {\tilde F}, and hence at most {(1+8/\eta^p)^N} possible choices for {\tilde F}. This gives the first claim.

Now suppose that there is an {(\eta,\varepsilon)}-inverse theorem for some {{\mathcal F}} of cardinality {M}. If we let {f} be a random sign function (so the {f(n)} are independent random variables taking values in {-1,+1} with equal probability), then there is a random {F \in {\mathcal F}} such that

\displaystyle  |\langle f, F \rangle| \geq \varepsilon

and hence by the pigeonhole principle there is a deterministic {F \in {\mathcal F}} such that

\displaystyle  {\bf P}( |\langle f, F \rangle| \geq \varepsilon ) \geq 1/M.

On the other hand, from the Hoeffding inequality one has

\displaystyle  {\bf P}( |\langle f, F \rangle| \geq \varepsilon ) \ll \exp( - c \varepsilon^2 N )

for some absolute constant {c}, hence

\displaystyle  M \geq \exp( c \varepsilon^2 N )

as claimed. \Box
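Here is a small Monte Carlo illustration (mine, not from the argument above) of the Hoeffding-type concentration used in the lower bound: for a random sign function {f} and the fixed {1}-bounded function {F=1}, the inner product {\langle f, F \rangle} is just an average of {N} independent signs, and the event {|\langle f, F \rangle| \geq \varepsilon} is exponentially rare in {\varepsilon^2 N}.

```python
import numpy as np

N, eps, trials = 1000, 0.1, 200_000
rng = np.random.default_rng(1)
# the sum of N independent +/-1 signs, encoded via a Binomial(N, 1/2) count of the +1's
plus_ones = rng.binomial(N, 0.5, size=trials)
inner = np.abs(2 * plus_ones - N) / N            # |<f, F>| with F = 1
print("empirical P(|<f,F>| >= eps)       :", (inner >= eps).mean())
print("Hoeffding bound 2 exp(-eps^2 N/2) :", 2 * np.exp(-eps**2 * N / 2))
```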

Most seminorms of interest in additive combinatorics, such as the Gowers uniformity norms, are bounded by some finite {L^p} norm thanks to Hölder’s inequality, so from the above proposition and the obvious monotonicity properties of entropy, we conclude that all Gowers norms on finite abelian groups {G} have at most exponential inverse theorem entropy. But we can do significantly better than this:

  • For the {U^1} seminorm {\|f\|_{U^1(G)} := |{\bf E}_{n \in G} f(n)|}, one can simply take {{\mathcal F} = \{1\}} to consist of the constant function {1}, and the {(\eta,\eta)}-entropy is clearly equal to {1} for any {0 < \eta < 1}.
  • For the {U^2} norm, the standard Fourier-analytic inverse theorem asserts that if {\|f\|_{U^2(G)} \geq \eta} then {|\langle f, e(\xi \cdot) \rangle| \geq \eta^2} for some Fourier character {\xi \in \hat G}. Thus the {(\eta,\eta^2)}-entropy is at most {N}. (A small numerical illustration of this inverse theorem is sketched just after this list.)
  • For the {U^k({\bf Z}/N{\bf Z})} norm on cyclic groups for {k > 2}, the inverse theorem proved by Green, Ziegler, and myself gives an {(\eta,\varepsilon)}-inverse theorem for some {\varepsilon \gg_{k,\eta} 1} and {{\mathcal F}} consisting of nilsequences {n \mapsto F(g(n) \Gamma)} for some filtered nilmanifold {G/\Gamma} of degree {k-1} in a finite collection of cardinality {O_{\eta,k}(1)}, some polynomial sequence {g: {\bf Z} \rightarrow G} (which, as subsequently observed by Candela and Sisask (see also Manners), one can choose to be {N}-periodic), and some Lipschitz function {F: G/\Gamma \rightarrow {\bf C}} of Lipschitz norm {O_{\eta,k}(1)}. By the Arzela-Ascoli theorem, the number of possible {F} (up to uniform errors of size at most {\varepsilon/2}, say) is {O_{\eta,k}(1)}. By standard arguments one can also ensure that the coefficients of the polynomial {g} are {O_{\eta,k}(1)}, and then by periodicity there are only {O(N^{O_{\eta,k}(1)})} such polynomials. As a consequence, the {(\eta,\varepsilon)}-entropy is of polynomial size {O_{\eta,k}( N^{O_{\eta,k}(1)} )} (a fact that seems to have first been implicitly observed in Lemma 6.2 of this paper of Frantzikinakis; thanks to Ben Green for this reference). One can obtain more precise dependence on {\eta,k} using the quantitative version of this inverse theorem due to Manners; back of the envelope calculations using Section 5 of that paper suggest to me that one can take {\varepsilon = \eta^{O_k(1)}} to be polynomial in {\eta} and the entropy to be of the order {O_k( N^{\exp(\exp(\eta^{-O_k(1)}))} )}, or alternatively one can reduce the entropy to {O_k( \exp(\exp(\eta^{-O_k(1)})) N^{\eta^{-O_k(1)}})} at the cost of degrading {\varepsilon} to {1/\exp\exp( O(\eta^{-O(1)}))}.
  • If one replaces the cyclic group {{\bf Z}/N{\bf Z}} by a vector space {{\bf F}_p^n} over some fixed finite field {{\bf F}_p} of prime order (so that {N=p^n}), then the inverse theorem of Ziegler and myself (available in both high and low characteristic) allows one to obtain an {(\eta,\varepsilon)}-inverse theorem for some {\varepsilon \gg_{k,\eta} 1} and {{\mathcal F}} the collection of non-classical degree {k-1} polynomial phases from {{\bf F}_p^n} to {S^1}, which one can normalize to equal {1} at the origin, and then by the classification of such polynomials one can calculate that the {(\eta,\varepsilon)}-entropy is of quasipolynomial size {\exp( O_{p,k}(n^{k-1}) ) = \exp( O_{p,k}( \log^{k-1} N ) )} in {N}. By using the recent work of Gowers and Milicevic, one can make the dependence on {p,k} here more precise, but we will not perform these calculations here.
  • For the {U^3(G)} norm on an arbitrary finite abelian group, the recent inverse theorem of Jamneshan and myself gives (after some calculations) a bound of the polynomial form {O( q^{O(n^2)} N^{\exp(\eta^{-O(1)})})} on the {(\eta,\varepsilon)}-entropy for some {\varepsilon \gg \eta^{O(1)}}, which one can improve slightly to {O( q^{O(n^2)} N^{\eta^{-O(1)}})} if one degrades {\varepsilon} to {1/\exp(\eta^{-O(1)})}, where {q} is the maximal order of an element of {G}, and {n} is the rank (the number of elements needed to generate {G}). This bound is polynomial in {N} in the cyclic group case and quasipolynomial in general.
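Here is the promised small numerical illustration (my own, not from the discussion above) of the {U^2} inverse theorem on {{\bf Z}/N{\bf Z}}, using the standard identity {\|f\|_{U^2}^4 = \sum_{\xi} |\hat f(\xi)|^4}: for any {1}-bounded {f}, the largest Fourier coefficient is at least {\|f\|_{U^2}^2}.

```python
import numpy as np

N = 128
rng = np.random.default_rng(0)
n = np.arange(N)
# a 1-bounded test function: a linear phase plus complex noise, clipped to the unit disk
f = np.exp(2j * np.pi * 7 * n / N) + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
f /= np.maximum(1.0, np.abs(f))

fhat = np.fft.fft(f) / N                      # \hat f(xi) = E_n f(n) e(-xi n / N) = <f, e(xi ./N)>
U2 = np.sum(np.abs(fhat) ** 4) ** 0.25        # ||f||_{U^2} via the Fourier identity

print("||f||_U2             =", round(U2, 4))
print("||f||_U2^2           =", round(U2 ** 2, 4))
print("max_xi |<f,e(xi .)>| =", round(float(np.abs(fhat).max()), 4))   # always >= ||f||_U2^2
```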

For general finite abelian groups {G}, we do not yet have an inverse theorem of comparable power to the ones mentioned above that give polynomial or quasipolynomial upper bounds on the entropy. However, there is a cheap argument that at least gives some subexponential bounds:

Proposition 3 (Cheap subexponential bound) Let {k \geq 2} and {0 < \eta < 1/2}, and suppose that {G} is a finite abelian group of order {N \geq \eta^{-C_k}} for some sufficiently large {C_k}. Then the {(\eta,c_k \eta^{O_k(1)})}-entropy of {\| \|_{U^k(G)}} is at most {O( \exp( \eta^{-O_k(1)} N^{1 - \frac{k+1}{2^k-1}} ))}.

Proof: (Sketch) We use a standard random sampling argument, of the type used for instance by Croot-Sisask or Briet-Gopi (thanks to Ben Green for this latter reference). We can assume that {N \geq \eta^{-C_k}} for some sufficiently large {C_k>0}, since otherwise the claim follows from Proposition 2.

Let {A} be a random subset of {G} with the events {n \in A} being iid with probability {0 < p < 1} to be chosen later, conditioned to the event {|A| \leq 2pN}. Let {f} be a {1}-bounded function. By a standard second moment calculation, we see that with probability at least {1/2}, we have

\displaystyle  \|f\|_{U^k(G)}^{2^k} = {\bf E}_{n, h_1,\dots,h_k \in G} f(n) \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|} \frac{1}{p} 1_A f(n + \omega \cdot h)

\displaystyle + O((\frac{1}{N^{k+1} p^{2^k-1}})^{1/2}).

Thus, by the triangle inequality, if we choose {p := C \eta^{-2^{k+1}/(2^k-1)} / N^{\frac{k+1}{2^k-1}}} for some sufficiently large {C = C_k > 0}, then for any {1}-bounded {f} with {\|f\|_{U^k(G)} \geq \eta/2}, one has with probability at least {1/2} that

\displaystyle  |{\bf E}_{n, h_1,\dots,h_k \in G} f(n) \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|} \frac{1}{p} 1_A f(n + \omega \cdot h)|

\displaystyle \geq \eta^{2^k}/2^{2^k+1}.

We can write the left-hand side as {|\langle f, F \rangle|} where {F} is the randomly sampled dual function

\displaystyle  F(n) := {\bf E}_{h_1,\dots,h_k \in G} \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|+1} \frac{1}{p} 1_A f(n + \omega \cdot h).

Unfortunately, {F} is not {1}-bounded in general, but we have

\displaystyle  \|F\|_{L^2(G)}^2 \leq {\bf E}_{n, h_1,\dots,h_k ,h'_1,\dots,h'_k \in G}

\displaystyle  \prod_{\omega \in \{0,1\}^k \backslash \{0\}} \frac{1}{p} 1_A(n + \omega \cdot h) \frac{1}{p} 1_A(n + \omega \cdot h')

and the right-hand side can be shown to be {1+o(1)} on the average, so we can condition on the event that the right-hand side is {O(1)} without significant loss in failure probability.

If we then let {\tilde f_A} be {1_A f} rounded to the nearest Gaussian integer multiple of {\eta^{2^k}/2^{2^{10k}}} in the unit disk, one has from the triangle inequality that

\displaystyle  |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

where {\tilde F} is the discretised randomly sampled dual function

\displaystyle  \tilde F(n) := {\bf E}_{h_1,\dots,h_k \in G} \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|+1} \frac{1}{p} \tilde f_A(n + \omega \cdot h).

For any given {A}, there are at most {2pN} places {n} where {\tilde f_A(n)} can be non-zero, and in those places there are {O_k( \eta^{-2^{k}})} possible values for {\tilde f_A(n)}. Thus, if we let {{\mathcal F}_A} be the collection of all possible {\tilde F} associated to a given {A}, the cardinality of this set is {O( \exp( \eta^{-O_k(1)} N^{1 - \frac{k+1}{2^k-1}} ) )}, and for any {f} with {\|f\|_{U^k(G)} \geq \eta/2}, we have

\displaystyle  \sup_{\tilde F \in {\mathcal F}_A} |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

with probability at least {1/2}.

Now we remove the failure probability by independent resampling. By rounding to the nearest Gaussian integer multiple of {c_k \eta^{2^k}} in the unit disk for a sufficiently small {c_k>0}, one can find a family {{\mathcal G}} of cardinality {O( \eta^{-O_k(N)})} consisting of {1}-bounded functions {\tilde f} of {U^k(G)} norm at least {\eta/2} such that for every {1}-bounded {f} with {\|f\|_{U^k(G)} \geq \eta} there exists {\tilde f \in {\mathcal G}} such that

\displaystyle  \|f-\tilde f\|_{L^\infty(G)} \leq \eta^{2^k}/2^{2^k+3}.

Now, let {A_1,\dots,A_M} be independent samples of {A} for some {M} to be chosen later. By the preceding discussion, we see that with probability at least {1 - 2^{-M}}, we have

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle \tilde f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

for any given {\tilde f \in {\mathcal G}}, so by the union bound, if we choose {M = \lfloor C N \log \frac{1}{\eta} \rfloor} for a large enough {C = C_k}, we can find {A_1,\dots,A_M} such that

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle \tilde f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

for all {\tilde f \in {\mathcal G}}, and hence by the triangle inequality

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+3}.

Taking {{\mathcal F}} to be the union of the {{\mathcal F}_{A_j}} (applying some truncation and rescaling to these {L^2}-bounded functions to make them {L^\infty}-bounded, and then {1}-bounded), we obtain the claim. \Box

One way to obtain lower bounds on the inverse theorem entropy is to produce a collection of almost orthogonal functions with large norm. More precisely:

Proposition 4 Let {\| \|} be a seminorm, let {0 < \varepsilon \leq \eta < 1}, and suppose that one has a collection {f_1,\dots,f_M} of {1}-bounded functions with {\|f_i\| \geq \eta} for all {i=1,\dots,M}, such that for each {i} one has {|\langle f_i, f_j \rangle| \leq \varepsilon^2/2} for all but at most {L} choices of {j \in \{1,\dots,M\}}. Then the {(\eta, \varepsilon)}-entropy of {\| \|} is at least {\varepsilon^2 M / 2L}.

Proof: Suppose we have an {(\eta,\varepsilon)}-inverse theorem with some family {{\mathcal F}}. Then for each {i=1,\dots,M} there is {F_i \in {\mathcal F}} such that {|\langle f_i, F_i \rangle| \geq \varepsilon}. By the pigeonhole principle, there is thus {F \in {\mathcal F}} such that {|\langle f_i, F \rangle| \geq \varepsilon} for all {i} in a subset {I} of {\{1,\dots,M\}} of cardinality at least {M/|{\mathcal F}|}:

\displaystyle  |I| \geq M / |{\mathcal F}|.

We can sum this to obtain

\displaystyle  |\sum_{i \in I} c_i \langle f_i, F \rangle| \geq |I| \varepsilon

for some complex numbers {c_i} of unit magnitude. By Cauchy-Schwarz, this implies

\displaystyle  \| \sum_{i \in I} c_i f_i \|_{L^2(G)}^2 \geq |I|^2 \varepsilon^2

and hence by the triangle inequality

\displaystyle  \sum_{i,j \in I} |\langle f_i, f_j \rangle| \geq |I|^2 \varepsilon^2.

On the other hand, by hypothesis we can bound the left-hand side by {|I| (L + \varepsilon^2 |I|/2)}. Rearranging, we conclude that

\displaystyle  |I| \leq 2 L / \varepsilon^2

and hence

\displaystyle  |{\mathcal F}| \geq \varepsilon^2 M / 2L

giving the claim. \Box

Thus for instance:

  • For the {U^2(G)} norm, one can take {f_1,\dots,f_M} to be the family of linear exponential phases {n \mapsto e(\xi \cdot n)} with {M = N} and {L=1}, and obtain a linear lower bound of {\varepsilon^2 N/2} for the {(\eta,\varepsilon)}-entropy, thus matching the upper bound of {N} up to constants when {\varepsilon} is fixed.
  • For the {U^k({\bf Z}/N{\bf Z})} norm, a similar calculation using polynomial phases of degree {k-1}, combined with the Weyl sum estimates, gives a lower bound of {\gg_{k,\varepsilon} N^{k-1}} for the {(\eta,\varepsilon)}-entropy for any fixed {\eta,\varepsilon}; by considering nilsequences as well, together with nilsequence equidistribution theory, one can replace the exponent {k-1} here by some quantity that goes to infinity as {\eta \rightarrow 0}, though I have not attempted to calculate the exact rate.
  • For the {U^k({\bf F}_p^n)} norm, another similar calculation using polynomial phases of degree {k-1} should give a lower bound of {\gg_{p,k,\eta,\varepsilon} \exp( c_{p,k,\eta,\varepsilon} n^{k-1} )} for the {(\eta,\varepsilon)}-entropy, though I have not fully performed the calculation.

We close with one final example. Suppose {G} is a product {G = A \times B} of two sets {A,B} of cardinality {\asymp \sqrt{N}}, and we consider the Gowers box norm

\displaystyle  \|f\|_{\Box^2(G)}^4 := {\bf E}_{a,a' \in A; b,b' \in B} f(a,b) \overline{f}(a,b') \overline{f}(a',b) f(a',b').

One possible choice of class {{\mathcal F}} here is the collection of indicators {1_{U \times V}} of “rectangles” {U \times V} with {U \subset A}, {V \subset B} (cf. this previous blog post on cut norms). By standard calculations, one can use this class to show that the {(\eta, \eta^4/10)}-entropy of {\| \|_{\Box^2(G)}} is {O( \exp( O(\sqrt{N}) ) )}, and a variant of the proof of the second part of Proposition 2 shows that this is the correct order of growth in {N}. In contrast, a modification of Proposition 3 only gives an upper bound of the form {O( \exp( O( N^{2/3} ) ) )} (the bottleneck is ensuring that the randomly sampled dual functions stay bounded in {L^2}), which shows that while this cheap bound is not optimal, it can still broadly give the correct “type” of bound (specifically, intermediate growth between polynomial and exponential).

In everyday usage, we rely heavily on percentages to quantify probabilities and proportions: we might say that a prediction is {50\%} accurate or {80\%} accurate, that there is a {2\%} chance of dying from some disease, and so forth. However, for those without extensive mathematical training, it can sometimes be difficult to assess whether a given percentage amounts to a “good” or “bad” outcome, because this depends very much on the context of how the percentage is used. For instance:

  • (i) In a two-party election, an outcome of say {51\%} to {49\%} might be considered close, but {55\%} to {45\%} would probably be viewed as a convincing mandate, and {60\%} to {40\%} would likely be viewed as a landslide.
  • (ii) Similarly, if one were to poll an upcoming election, a poll of {51\%} to {49\%} would be too close to call, {55\%} to {45\%} would be an extremely favorable result for the candidate, and {60\%} to {40\%} would mean that it would be a major upset if the candidate lost the election.
  • (iii) On the other hand, a medical operation that only had a {51\%}, {55\%}, or {60\%} chance of success would be viewed as being incredibly risky, especially if failure meant death or permanent injury to the patient. Even an operation that was {90\%} or {95\%} likely to be non-fatal (i.e., a {10\%} or {5\%} chance of death) would not be conducted lightly.
  • (iv) A weather prediction of, say, {30\%} chance of rain during a vacation trip might be sufficient cause to pack an umbrella, even though it is more likely than not that rain would not occur. On the other hand, if the prediction was for an {80\%} chance of rain, and it ended up that the skies remained clear, this does not seriously damage the accuracy of the prediction – indeed, such an outcome would be expected in one out of every five such predictions.
  • (v) Even extremely tiny percentages of toxic chemicals in everyday products can be considered unacceptable. For instance, EPA rules require action to be taken when the percentage of lead in drinking water exceeds {0.0000015\%} (15 parts per billion). At the opposite extreme, recycling contamination rates as high as {10\%} are often considered acceptable.

Because of all the very different ways in which percentages could be used, I think it may make sense to propose an alternate system of units to measure one class of probabilities, namely the probabilities of avoiding some highly undesirable outcome, such as death, accident or illness. The units I propose are that of “nines”, which are already commonly used to measure availability of some service or purity of a material, but can be equally used to measure the safety (i.e., lack of risk) of some activity. Informally, nines measure how many consecutive appearances of the digit {9} are in the probability of successfully avoiding the negative outcome, thus

  • {90\%} success = one nine of safety
  • {99\%} success = two nines of safety
  • {99.9\%} success = three nines of safety
and so forth. Using the mathematical device of logarithms, one can also assign a fractional number of nines of safety to a general probability:

Definition 1 (Nines of safety) An activity (affecting one or more persons, over some given period of time) that has a probability {p} of the “safe” outcome and probability {1-p} of the “unsafe” outcome will have {k} nines of safety against the unsafe outcome, where {k} is defined by the formula

\displaystyle  k = -\log_{10}(1-p) \ \ \ \ \ (1)

(where {\log_{10}} is the logarithm to base ten), or equivalently

\displaystyle  p = 1 - 10^{-k}. \ \ \ \ \ (2)

Remark 2 Because of the various uncertainties in measuring probabilities, as well as the inaccuracies in some of the assumptions and approximations we will be making later, we will not attempt to measure the number of nines of safety beyond the first decimal point; thus we will round to the nearest tenth of a nine of safety throughout this post.

Here is a conversion table between percentage rates of success (the safe outcome), failure (the unsafe outcome), and the number of nines of safety one has:

Success rate {p} Failure rate {1-p} Number of nines {k}
{0\%} {100\%} {0.0}
{50\%} {50\%} {0.3}
{75\%} {25\%} {0.6}
{80\%} {20\%} {0.7}
{90\%} {10\%} {1.0}
{95\%} {5\%} {1.3}
{97.5\%} {2.5\%} {1.6}
{98\%} {2\%} {1.7}
{99\%} {1\%} {2.0}
{99.5\%} {0.5\%} {2.3}
{99.75\%} {0.25\%} {2.6}
{99.8\%} {0.2\%} {2.7}
{99.9\%} {0.1\%} {3.0}
{99.95\%} {0.05\%} {3.3}
{99.975\%} {0.025\%} {3.6}
{99.98\%} {0.02\%} {3.7}
{99.99\%} {0.01\%} {4.0}
{100\%} {0\%} infinite
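The conversion (1) is easy to carry out mechanically; the following few lines of Python (a convenience of mine, not from the text) reproduce some rows of the above table and a couple of the examples appearing below.

```python
import math

def nines(p_success):
    """Nines of safety for a success probability p, i.e. -log10(1 - p)."""
    return -math.log10(1 - p_success)

for p in [0.5, 0.75, 0.9, 0.95, 0.99, 0.999, 0.9999]:
    print(f"success rate {p:.4%}  ->  {nines(p):.1f} nines of safety")

print("Russian roulette (one round):", round(nines(5 / 6), 1), "nines")
print("random birthday guess       :", round(nines(364 / 365), 1), "nines")
```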

Thus, if one has no nines of safety whatsoever, one is guaranteed to fail; but each nine of safety one has reduces the failure rate by a factor of {10}. In an ideal world, one would have infinitely many nines of safety against any risk, but in practice there are no {100\%} guarantees against failure, and so one can only expect a finite amount of nines of safety in any given situation. Realistically, one should thus aim to have as many nines of safety as one can reasonably expect to have, but not to demand an infinite amount.

Remark 3 The number of nines of safety against a certain risk is not absolute; it will depend not only on the risk itself, but (a) the number of people exposed to the risk, and (b) the length of time one is exposed to the risk. Exposing more people or increasing the duration of exposure will reduce the number of nines, and conversely exposing fewer people or reducing the duration will increase the number of nines; see Proposition 7 below for a rough rule of thumb in this regard.

Remark 4 Nines of safety are a logarithmic scale of measurement, rather than a linear scale. Other familiar examples of logarithmic scales of measurement include the Richter scale of earthquake magnitude, the pH scale of acidity, the decibel scale of sound level, octaves in music, and the magnitude scale for stars.

Remark 5 One way to think about nines of safety is via the Swiss cheese model that was created recently to describe pandemic risk management. In this model, each nine of safety can be thought of as a slice of Swiss cheese, with holes occupying {10\%} of that slice. Having {k} nines of safety is then analogous to standing behind {k} such slices of Swiss cheese. In order for a risk to actually impact you, it must pass through each of these {k} slices. A fractional nine of safety corresponds to a fractional slice of Swiss cheese that covers the amount of space given by the above table. For instance, {0.6} nines of safety corresponds to a fractional slice that covers about {75\%} of the given area (leaving {25\%} uncovered).

Now to give some real-world examples of nines of safety. Using data for deaths in the US in 2019 (without attempting to account for factors such as age and gender), a random US citizen will have had the following amount of safety from dying from some selected causes in that year:

Cause of death Mortality rate per {100,\! 000} (approx.) Nines of safety
All causes {870} {2.0}
Heart disease {200} {2.7}
Cancer {180} {2.7}
Accidents {52} {3.3}
Drug overdose {22} {3.7}
Influenza/Pneumonia {15} {3.8}
Suicide {14} {3.8}
Gun violence {12} {3.9}
Car accident {11} {4.0}
Murder {5} {4.3}
Airplane crash {0.14} {5.9}
Lightning strike {0.006} {7.2}

The safety of air travel is particularly remarkable: a given hour of flying in general aviation has a fatality rate of {0.00001}, or about {5} nines of safety, while for the major carriers the fatality rate drops down to {0.00000005}, or about {7.3} nines of safety.

Of course, in 2020, COVID-19 deaths became significant. In this year in the US, the mortality rate for COVID-19 (as the underlying or contributing cause of death) was {91.5} per {100,\! 000}, corresponding to {3.0} nines of safety, which was less safe than all other causes of death except for heart disease and cancer. At this time of writing, data for all of 2021 is of course not yet available, but it seems likely that the safety level would be even lower for this year.

Some further illustrations of the concept of nines of safety:

  • Each round of Russian roulette has a success rate of {5/6}, providing only {0.8} nines of safety. Of course, the safety will decrease with each additional round: one has only {0.5} nines of safety after two rounds, {0.4} nines after three rounds, and so forth. (See also Proposition 7 below.)
  • The ancient Roman punishment of decimation, by definition, provided exactly one nine of safety to each soldier being punished.
  • Rolling a {1} on a {20}-sided die is a risk that carries about {1.3} nines of safety.
  • Rolling a double one (“snake eyes“) from two six-sided dice carries about {1.6} nines of safety.
  • One has about {2.6} nines of safety against the risk of someone randomly guessing your birthday on the first attempt.
  • A null hypothesis has {1.3} nines of safety against producing a {p = 0.05} statistically significant result, and {2.0} nines against producing a {p=0.01} statistically significant result. (However, one has to be careful when reversing the conditional; a {p=0.01} statistically significant result does not necessarily have {2.0} nines of safety against the null hypothesis. In Bayesian statistics, the precise relationship between the two risks is given by Bayes’ theorem.)
  • If a poker opponent is dealt a five-card hand, one has {5.8} nines of safety against that opponent being dealt a royal flush, {4.8} against a straight flush or higher, {3.6} against four-of-a-kind or higher, {2.8} against a full house or higher, {2.4} against a flush or higher, {2.1} against a straight or higher, {1.5} against three-of-a-kind or higher, {1.1} against two pairs or higher, and just {0.3} against one pair or higher. (This data was converted from this Wikipedia table.)
  • A {k}-digit PIN number (or a {k}-digit combination lock) carries {k} nines of safety against each attempt to randomly guess the PIN. A length {k} password that allows for numbers, upper and lower case letters, and punctuation carries about {2k} nines of safety against a single guess. (For the reduction in safety caused by multiple guesses, see Proposition 7 below.)

Here is another way to think about nines of safety:

Proposition 6 (Nines of safety extend expected onset of risk) Suppose a certain risky activity has {k} nines of safety. If one repeatedly indulges in this activity until the risk occurs, then the expected number of trials before the risk occurs is {10^k}.

Proof: The probability that the risk is first activated on exactly the {n}-th trial is {(1-10^{-k})^{n-1} 10^{-k}}; that is, the number of trials follows a geometric distribution of parameter {10^{-k}}. The claim then follows from the standard properties of that distribution. \Box

Thus, for instance, if one performs some risky activity daily, then the expected length of time before the risk occurs is given by the following table:

Daily nines of safety Expected onset of risk
{0} One day
{0.8} One week
{1.5} One month
{2.6} One year
{2.9} Two years
{3.3} Five years
{3.6} Ten years
{3.9} Twenty years
{4.3} Fifty years
{4.6} A century

Or, if one wants to convert the yearly risks of dying from a specific cause into expected years before that cause of death would occur (assuming for sake of discussion that no other cause of death exists):

Yearly nines of safety Expected onset of risk
{0} One year
{0.3} Two years
{0.7} Five years
{1} Ten years
{1.3} Twenty years
{1.7} Fifty years
{2.0} A century
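
Both tables are just Proposition 6 in disguise: with {k} nines of safety one expects about {10^k} trials before onset. A quick Python check (a sketch only; the rounding matches the tables above):

import math

def expected_trials(nines):
    # Expected number of independent trials before the risk first occurs (Proposition 6).
    return 10 ** nines

for nines in [0.8, 1.5, 2.6, 4.6]:
    print(nines, round(expected_trials(nines)))
# 0.8 -> about 6 (a week of daily trials), 1.5 -> about 32 (a month),
# 2.6 -> about 398 (a year), 4.6 -> about 39811 (a century)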

These tables suggest a relationship between the amount of safety one would have in a short timeframe, such as a day, and a longer time frame, such as a year. Here is an approximate formalisation of that relationship:

Proposition 7 (Repeated exposure reduces nines of safety) If a risky activity with {k} nines of safety is (independently) repeated {m} times, then (assuming {k} is large enough depending on {m}), the repeated activity will have approximately {k - \log_{10} m} nines of safety. Conversely: if the repeated activity has {k'} nines of safety, the individual activity will have approximately {k' + \log_{10} m} nines of safety.

Proof: An activity with {k} nines of safety will be safe with probability {1-10^{-k}}, hence safe with probability {(1-10^{-k})^m} if repeated independently {m} times. For {k} large, we can approximate

\displaystyle  (1 - 10^{-k})^m \approx 1 - m 10^{-k} = 1 - 10^{-(k - \log_{10} m)}

giving the former claim. The latter claim follows from inverting the former. \Box
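
For those who prefer to check Proposition 7 numerically, here is a minimal Python sketch comparing the exact number of nines of the repeated activity with the approximation {k - \log_{10} m}; the same computation reproduces the daily/weekly/monthly/yearly offsets in the conversion tables below ({\log_{10} 7 \approx 0.8}, {\log_{10} 30 \approx 1.5}, {\log_{10} 365 \approx 2.6}):

import math

def repeated_nines(k, m):
    # Exact nines of safety of an activity with k nines, independently repeated m times.
    p_fail = 1 - (1 - 10 ** (-k)) ** m
    return -math.log10(p_fail)

k = 4.0
for m in [7, 30, 365]:
    print(m, round(repeated_nines(k, m), 2), round(k - math.log10(m), 2))
# For k large the two columns agree closely, as claimed in Proposition 7.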

Remark 8 The hypothesis of independence here is key. If there is a lot of correlation between the risks across different repetitions of the activity, then there can be much less reduction in safety caused by that repetition. As a simple example, suppose that {90\%} of a workforce are trained to perform some task flawlessly no matter how many times they repeat the task, but the remaining {10\%} are untrained and will always fail at that task. If one selects a random worker and asks them to perform the task, one has {1.0} nines of safety against the task failing. If one took that same random worker and asked them to perform the task {m} times, the above proposition might suggest that the number of nines of safety would drop to approximately {1.0 - \log_{10} m}; but in this case there is perfect correlation, and in fact the number of nines of safety remains steady at {1.0} since it is the same {10\%} of the workforce that would fail each time.

Because of this caveat, one should view the above proposition as only a crude first approximation that can be used as a simple rule of thumb, but should not be relied upon for more precise calculations.

One can repeat a risk either in time (extending the time of exposure to the risk, say from a day to a year), or in space (by exposing more people to the risk). The above proposition then gives an additive conversion law for nines of safety in either case. Here are some conversion tables for time:

From/to Daily Weekly Monthly Yearly
Daily 0 -0.8 -1.5 -2.6
Weekly +0.8 0 -0.6 -1.7
Monthly +1.5 +0.6 0 -1.1
Yearly +2.6 +1.7 +1.1 0

From/to Yearly Per 5 yr Per decade Per century
Yearly 0 -0.7 -1.0 -2.0
Per 5 yr +0.7 0 -0.3 -1.3
Per decade +1.0 +0.3 0 -1.0
Per century +2.0 +1.3 +1.0 0

For instance, as mentioned before, the yearly amount of safety against cancer is about {2.7}. Using the above table (and making the somewhat unrealistic hypothesis of independence), we then predict the daily amount of safety against cancer to be about {2.7 + 2.6 = 5.3} nines, the weekly amount to be about {2.7 + 1.7 = 4.4} nines, and the amount of safety over five years to drop to about {2.7 - 0.7 = 2.0} nines.

Now we turn to conversions in space. If one knows the level of safety against a certain risk for an individual, and then one (independently) exposes a group of such individuals to that risk, then the reduction in nines of safety when considering the possibility that at least one group member experiences this risk is given by the following table:

Group Reduction in safety
You ({1} person) {0}
You and your partner ({2} people) {-0.3}
You and your parents ({3} people) {-0.5}
You, your partner, and three children ({5} people) {-0.7}
An extended family of {10} people {-1.0}
A class of {30} people {-1.5}
A workplace of {100} people {-2.0}
A school of {1,\! 000} people {-3.0}
A university of {10,\! 000} people {-4.0}
A town of {100,\! 000} people {-5.0}
A city of {1} million people {-6.0}
A state of {10} million people {-7.0}
A country of {100} million people {-8.0}
A continent of {1} billion people {-9.0}
The entire planet {-9.8}

For instance, in a given year (and making the somewhat implausible assumption of independence), you might have {2.7} nines of safety against cancer, but you and your partner collectively only have about {2.7 - 0.3 = 2.4} nines of safety against this risk, your family of five might only have about {2.7 - 0.7 = 2} nines of safety, and so forth. By the time one gets to a group of {1,\! 000} people, it actually becomes very likely that at least one member of the group will die of cancer in that year. (Here the precise conversion table breaks down, because a negative number of nines such as {2.7 - 3.0 = -0.3} is not possible, but one should interpret a prediction of a negative number of nines as an assertion that failure is very likely to happen. Also, in practice the reduction in safety is less than this rule predicts, due to correlations, such as risk factors common to the whole group being considered, that are incompatible with the assumption of independence.)

In the opposite direction, any reduction in exposure (either in time or space) to a risk will increase one’s safety level, as per the following table:

Reduction in exposure Additional nines of safety
{\div 1} {0}
{\div 2} {+0.3}
{\div 3} {+0.5}
{\div 5} {+0.7}
{\div 10} {+1.0}
{\div 100} {+2.0}

For instance, a five-fold reduction in exposure will reclaim about {0.7} additional nines of safety.

Here is a slightly different way to view nines of safety:

Proposition 9 Suppose that a group of {m} people are independently exposed to a given risk. If there are at most

\displaystyle  \log_{10} \frac{1}{1-2^{-1/m}}

nines of individual safety against that risk, then there is at least a {50\%} chance that at least one member of the group is affected by the risk.

Proof: If individually there are {k} nines of safety, then the probability that all the members of the group avoid the risk is {(1-10^{-k})^m}. Since the inequality

\displaystyle  (1-10^{-k})^m \leq \frac{1}{2}

is equivalent to

\displaystyle  k \leq \log_{10} \frac{1}{1-2^{-1/m}},

the claim follows. \Box

Thus, for a group to collectively avoid a risk with at least a {50\%} chance, one needs the following level of individual safety:

Group Individual safety level required
You ({1} person) {0.3}
You and your partner ({2} people) {0.5}
You and your parents ({3} people) {0.7}
You, your partner, and three children ({5} people) {0.9}
An extended family of {10} people {1.2}
A class of {30} people {1.6}
A workplace of {100} people {2.2}
A school of {1,\! 000} people {3.2}
A university of {10,\! 000} people {4.2}
A town of {100,\! 000} people {5.2}
A city of {1} million people {6.2}
A state of {10} million people {7.2}
A country of {100} million people {8.2}
A continent of {1} billion people {9.2}
The entire planet {10.0}

For large {m}, the level {k} of nines of individual safety required to protect a group of size {m} with probability at least {50\%} is approximately {\log_{10} \frac{m}{\ln 2} \approx (\log_{10} m) + 0.2}.
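
Here is a minimal Python sketch (for illustration only) that reproduces the individual safety levels in the table above, together with the {\log_{10}(m/\ln 2)} approximation just mentioned:

import math

def individual_nines_needed(m):
    # Nines of individual safety at which a group of m independent people has exactly a
    # 50% chance of collectively avoiding the risk (Proposition 9).
    return math.log10(1 / (1 - 2 ** (-1 / m)))

for m in [1, 2, 10, 100, 1_000_000]:
    print(m, round(individual_nines_needed(m), 1), round(math.log10(m / math.log(2)), 1))
# The middle column matches the table; the last column is the log10(m / ln 2) approximation.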

Precautions that can work to prevent a certain risk from occurring will add additional nines of safety against that risk, even if the precaution is not {100\%} effective. Here is the precise rule:

Proposition 10 (Precautions add nines of safety) Suppose an activity carries {k} nines of safety against a certain risk, and a separate precaution can independently protect against that risk with {l} nines of safety (that is to say, the probability that the protection is effective is {1 - 10^{-l}}). Then applying that precaution increases the number of nines in the activity from {k} to {k+l}.

Proof: The probability that the precaution fails and the risk then occurs is {10^{-l} \times 10^{-k} = 10^{-(k+l)}}. The claim now follows from Definition 1. \Box

In particular, we can repurpose the table at the start of this post as a conversion chart for effectiveness of a precaution:

Effectiveness Failure rate Additional nines provided
{0\%} {100\%} {+0.0}
{50\%} {50\%} {+0.3}
{75\%} {25\%} {+0.6}
{80\%} {20\%} {+0.7}
{90\%} {10\%} {+1.0}
{95\%} {5\%} {+1.3}
{97.5\%} {2.5\%} {+1.6}
{98\%} {2\%} {+1.7}
{99\%} {1\%} {+2.0}
{99.5\%} {0.5\%} {+2.3}
{99.75\%} {0.25\%} {+2.6}
{99.8\%} {0.2\%} {+2.7}
{99.9\%} {0.1\%} {+3.0}
{99.95\%} {0.05\%} {+3.3}
{99.975\%} {0.025\%} {+3.6}
{99.98\%} {0.02\%} {+3.7}
{99.99\%} {0.01\%} {+4.0}
{100\%} {0\%} infinite

Thus for instance a precaution that is {80\%} effective will add {0.7} nines of safety, a precaution that is {99.8\%} effective will add {2.7} nines of safety, and so forth. The mRNA COVID vaccines by Pfizer and Moderna have somewhere between {88\% - 96\%} effectiveness against symptomatic COVID illness, providing about {0.9-1.4} nines of safety against that risk, and over {95\%} effectiveness against severe illness, thus adding at least {1.3} nines of safety in this regard.
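
The conversion in this chart is just {-\log_{10}} of the failure rate, as in Proposition 10. A minimal Python sketch (the effectiveness figures below are the illustrative ones from this paragraph):

import math

def added_nines(effectiveness):
    # Additional nines of safety provided by a precaution with the given effectiveness;
    # the failure rate is 1 - effectiveness (Proposition 10).
    return -math.log10(1 - effectiveness)

for e in [0.80, 0.88, 0.95, 0.96, 0.998]:
    print(f"{e:.1%}", round(added_nines(e), 1))
# 80% -> 0.7, 88% -> 0.9, 95% -> 1.3, 96% -> 1.4, 99.8% -> 2.7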

A slight variant of the above rule can be stated using the concept of relative risk:

Proposition 11 (Relative risk and nines of safety) Suppose an activity carries {k} nines of safety against a certain risk, and an action multiplies the chance of failure by some relative risk {R}. Then the action removes {\log_{10} R} nines of safety (if {R > 1}) or adds {-\log_{10} R} nines of safety (if {R<1}) to the original activity.

Proof: The additional action adjusts the probability of failure from {10^{-k}} to {R \times 10^{-k} = 10^{-(k - \log_{10} R)}}. The claim now follows from Definition 1. \Box

Here is a conversion chart between relative risk and change in nines of safety:

Relative risk Change in nines of safety
{0.01} {+2.0}
{0.02} {+1.7}
{0.05} {+1.3}
{0.1} {+1.0}
{0.2} {+0.7}
{0.5} {+0.3}
{1} {0}
{2} {-0.3}
{5} {-0.7}
{10} {-1.0}
{20} {-1.3}
{50} {-1.7}
{100} {-2.0}

Some examples:

  • Smoking increases the fatality rate of lung cancer by a factor of about {20}, thus removing about {1.3} nines of safety from this particular risk; it also increases the fatality rates of several other diseases, though not to quite as dramatic an extent.
  • Seatbelts reduce the fatality rate in car accidents by a factor of about two, adding about {0.3} nines of safety. Airbags achieve a reduction of about {30-50\%}, adding about {0.2-0.3} additional nines of safety.
  • As far as transmission of COVID is concerned, it seems that constant use of face masks reduces transmission by a factor of about five (thus adding about {0.7} nines of safety), and similarly for constant adherence to social distancing; whereas for instance a {30\%} compliance with mask usage reduced transmission by about {10\%} (adding only {0.05} or so nines of safety).

The effect of combining multiple (independent) precautions together is cumulative; one can achieve quite a high level of safety by stacking together several precautions that individually have relatively low levels of effectiveness. Again, see the “Swiss cheese model” referred to in Remark 5. For instance, if face masks add {0.7} nines of safety against contracting COVID, social distancing adds another {0.7} nines, and the vaccine provides another {1.0} nines of safety, implementing all three mitigation methods would (assuming independence) add a net of {2.4} nines of safety against contracting COVID.

In summary, when debating the value of a given risk mitigation measure, the correct question to ask is not quite “Is it certain to work?” or “Can it fail?”, but rather “How many extra nines of safety does it add?”.

As one final comparison between nines of safety and other standard risk measures, we give the following proposition regarding large deviations from the mean.

Proposition 12 Let {X} be a normally distributed random variable of standard deviation {\sigma}, and let {\lambda > 0}. Then the “one-sided risk” of {X} exceeding its mean {{\bf E} X} by at least {\lambda \sigma} (i.e., {X \geq {\bf E} X + \lambda \sigma}) carries

\displaystyle  -\log_{10} \frac{1 - \mathrm{erf}(\lambda/\sqrt{2})}{2}

nines of safety, the “two-sided risk” of {X} deviating (in either direction) from its mean by at least {\lambda \sigma} (i.e., {|X-{\bf E} X| \geq \lambda \sigma}) carries

\displaystyle  -\log_{10} (1 - \mathrm{erf}(\lambda/\sqrt{2}))

nines of safety, where {\mathrm{erf}} is the error function.

Proof: This is a routine calculation using the cumulative distribution function of the normal distribution. \Box

Here is a short table illustrating this proposition:

Number {\lambda} of deviations from the mean One-sided nines of safety Two-sided nines of safety
{0} {0.3} {0.0}
{1} {0.8} {0.5}
{2} {1.6} {1.3}
{3} {2.9} {2.6}
{4} {4.5} {4.2}
{5} {6.5} {6.2}
{6} {9.0} {8.7}

Thus, for instance, the risk of a five sigma event (deviating by more than five standard deviations from the mean in either direction) should carry {6.2} nines of safety assuming a normal distribution, and so one would ordinarily feel extremely safe against the possibility of such an event, unless one started doing hundreds of thousands of trials. (However, we caution that this conclusion relies heavily on the assumption that one has a normal distribution!)
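
For completeness, here is a minimal Python sketch (standard library only) that reproduces the table above from Proposition 12, using the complementary error function {\mathrm{erfc} = 1 - \mathrm{erf}}:

import math

def one_sided_nines(lam):
    # Nines of safety against exceeding the mean by at least lam standard deviations.
    return -math.log10(0.5 * math.erfc(lam / math.sqrt(2)))

def two_sided_nines(lam):
    # Nines of safety against deviating from the mean (in either direction) by at least lam standard deviations.
    return -math.log10(math.erfc(lam / math.sqrt(2)))

for lam in range(7):
    print(lam, round(one_sided_nines(lam), 1), round(two_sided_nines(lam), 1))
# Compare with the one-sided and two-sided columns of the table above.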

See also this older essay I wrote on anonymity on the internet, using bits as a measure of anonymity in much the same way that nines are used here as a measure of safety.

Asgar Jamneshan and I have just uploaded to the arXiv our paper “Foundational aspects of uncountable measure theory: Gelfand duality, Riesz representation, canonical models, and canonical disintegration“. This paper arose from our longer-term project to systematically develop “uncountable” ergodic theory – ergodic theory in which the groups acting are not required to be countable, the probability spaces one acts on are not required to be standard Borel, or Polish, and the compact groups that arise in the structural theory (e.g., the theory of group extensions) are not required to be separable. One of the motivations of doing this is to allow ergodic theory results to be applied to ultraproducts of finite dynamical systems, which can then hopefully be transferred to establish combinatorial results with good uniformity properties. An instance of this is the uncountable Mackey-Zimmer theorem, discussed in this companion blog post.

In the course of this project, we ran into the obstacle that many foundational results, such as the Riesz representation theorem, often require one or more of these countability hypotheses when encountered in textbooks. Other technical issues also arise in the uncountable setting, such as the need to distinguish the Borel {\sigma}-algebra from the (two different types of) Baire {\sigma}-algebra. As such we needed to spend some time reviewing and synthesizing the known literature on some foundational results of “uncountable” measure theory, which led to this paper. As such, most of the results of this paper are already in the literature, either explicitly or implicitly, in one form or another (with perhaps the exception of the canonical disintegration, which we discuss below); we view the main contribution of this paper as presenting the results in a coherent and unified fashion. In particular we found that the language of category theory was invaluable in clarifying and organizing all the different results. In subsequent work we (and some other authors) will use the results in this paper for various applications in uncountable ergodic theory.

The foundational results covered in this paper can be divided into a number of subtopics (Gelfand duality, Baire {\sigma}-algebras and Riesz representation, canonical models, and canonical disintegration), which we discuss further below the fold.

Dimitri Shlyakhtenko and I have uploaded to the arXiv our paper Fractional free convolution powers. For me, this project (which we started during the 2018 IPAM program on quantitative linear algebra) was motivated by a desire to understand the behavior of the minor process applied to a large random Hermitian {N \times N} matrix {A_N}, in which one takes the successive upper left {n \times n} minors {A_n} of {A_N} and computes their eigenvalues {\lambda_1(A_n) \leq \dots \leq \lambda_n(A_n)} in non-decreasing order. These eigenvalues are related to each other by the Cauchy interlacing inequalities

\displaystyle  \lambda_i(A_{n+1}) \leq \lambda_i(A_n) \leq \lambda_{i+1}(A_{n+1})

for {1 \leq i \leq n < N}, and are often arranged in a triangular array known as a Gelfand-Tsetlin pattern, as discussed in these previous blog posts.

When {N} is large and the matrix {A_N} is a random matrix with empirical spectral distribution converging to some compactly supported probability measure {\mu} on the real line, then under suitable hypotheses (e.g., unitary conjugation invariance of the random matrix ensemble {A_N}), a “concentration of measure” effect occurs, with the spectral distribution of the minors {A_n} for {n = \lfloor N/k\rfloor} for any fixed {k \geq 1} converging to a specific measure {k^{-1}_* \mu^{\boxplus k}} that depends only on {\mu} and {k}. The reason for this notation is that there is a surprising description of this measure {k^{-1}_* \mu^{\boxplus k}} when {k} is a natural number, namely it is the free convolution {\mu^{\boxplus k}} of {k} copies of {\mu}, pushed forward by the dilation map {x \mapsto k^{-1} x}. For instance, if {\mu} is the Wigner semicircular measure {d\mu_{sc} = \frac{1}{2\pi} (4-x^2)^{1/2}_+\ dx}, then {k^{-1}_* \mu_{sc}^{\boxplus k} = k^{-1/2}_* \mu_{sc}}. At the random matrix level, this reflects the fact that the minor of a GUE matrix is again a GUE matrix (up to a renormalizing constant).

As first observed by Bercovici and Voiculescu and developed further by Nica and Speicher, among other authors, the notion of a free convolution power {\mu^{\boxplus k}} of {\mu} can be extended to non-integer {k \geq 1}, thus giving the notion of a “fractional free convolution power”. This notion can be defined in several different ways. One of them proceeds via the Cauchy transform

\displaystyle  G_\mu(z) := \int_{\bf R} \frac{d\mu(x)}{z-x}

of the measure {\mu}, and {\mu^{\boxplus k}} can be defined by solving the Burgers-type equation

\displaystyle  (k \partial_k + z \partial_z) G_{\mu^{\boxplus k}}(z) = \frac{\partial_z G_{\mu^{\boxplus k}}(z)}{G_{\mu^{\boxplus k}}(z)} \ \ \ \ \ (1)

with initial condition {G_{\mu^{\boxplus 1}} = G_\mu} (see this previous blog post for a derivation). This equation can be solved explicitly using the {R}-transform {R_\mu} of {\mu}, defined by solving the equation

\displaystyle  \frac{1}{G_\mu(z)} + R_\mu(G_\mu(z)) = z

for sufficiently large {z}, in which case one can show that

\displaystyle  R_{\mu^{\boxplus k}}(z) = k R_\mu(z).

(In the case of the semicircular measure {\mu_{sc}}, the {R}-transform is simply the identity: {R_{\mu_{sc}}(z)=z}.)
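
As a quick numerical illustration of these formulas, one can check that the Cauchy transform of the semicircular measure obeys {\frac{1}{G_{\mu_{sc}}(z)} + G_{\mu_{sc}}(z) = z}, which is precisely the assertion {R_{\mu_{sc}}(z) = z}. Here is a small Python sketch (not from the paper; it assumes scipy is available, and uses the standard closed form {G_{\mu_{sc}}(z) = (z - \sqrt{z^2-4})/2} as a cross-check):

import numpy as np
from scipy.integrate import quad

# Density of the semicircular measure on [-2, 2].
f = lambda x: np.sqrt(max(4 - x * x, 0.0)) / (2 * np.pi)

def G(z):
    # Cauchy transform G(z) = int f(x) / (z - x) dx, evaluated at real z outside the support.
    val, _ = quad(lambda x: f(x) / (z - x), -2, 2)
    return val

for z in [2.5, 3.0, 10.0]:
    g = G(z)
    print(z, g, 1 / g + g, (z - np.sqrt(z * z - 4)) / 2)
    # The third column recovers z, and the last column agrees with the numerical G(z).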

Nica and Speicher also gave a free probability interpretation of the fractional free convolution power: if {A} is a noncommutative random variable in a noncommutative probability space {({\mathcal A},\tau)} with distribution {\mu}, and {p} is a real projection operator free of {A} with trace {1/k}, then the “minor” {[pAp]} of {A} (viewed as an element of a new noncommutative probability space {({\mathcal A}_p, \tau_p)} whose elements are minors {[pXp]}, {X \in {\mathcal A}} with trace {\tau_p([pXp]) := k \tau(pXp)}) has the law of {k^{-1}_* \mu^{\boxplus k}} (we give a self-contained proof of this in an appendix to our paper). This suggests that the minor process (or fractional free convolution) can be studied within the framework of free probability theory.

One of the known facts about integer free convolution powers {\mu^{\boxplus k}} is monotonicity of the free entropy

\displaystyle  \chi(\mu) = \int_{\bf R} \int_{\bf R} \log|s-t|\ d\mu(s) d\mu(t) + \frac{3}{4} + \frac{1}{2} \log 2\pi

and free Fisher information

\displaystyle  \Phi(\mu) = \frac{4\pi^2}{3} \int_{\bf R} \left(\frac{d\mu}{dx}\right)^3\ dx

which were introduced by Voiculescu as free probability analogues of the classical probability concepts of differential entropy and classical Fisher information. (Here we correct a small typo in the normalization constant of Fisher entropy as presented in Voiculescu’s paper.) Namely, it was shown by Shlyakhtenko that the quantity {\chi(k^{-1/2}_* \mu^{\boxplus k})} is monotone non-decreasing for integer {k}, and the Fisher information {\Phi(k^{-1/2}_* \mu^{\boxplus k})} is monotone non-increasing for integer {k}. This is the free probability analogue of the corresponding monotonicities for differential entropy and classical Fisher information that were established by Artstein, Ball, Barthe, and Naor, answering a question of Shannon.

Our first main result is to extend the monotonicity results of Shlyakhtenko to fractional {k \geq 1}. We give two proofs of this fact, one using free probability machinery, and a more self-contained (but less motivated) proof using integration by parts and contour integration. The free probability proof relies on the concept of the free score {J(X)} of a noncommutative random variable, which is the analogue of the classical score. The free score, also introduced by Voiculescu, can be defined by duality as measuring the perturbation with respect to semicircular noise, or more precisely

\displaystyle  \frac{d}{d\varepsilon} \tau( Z P( X + \varepsilon Z) )|_{\varepsilon=0} = \tau( J(X) P(X) )

whenever {P} is a polynomial and {Z} is a semicircular element free of {X}. If {X} has an absolutely continuous law {\mu = f\ dx} for a sufficiently regular {f}, one can calculate {J(X)} explicitly as {J(X) = 2\pi Hf(X)}, where {Hf} is the Hilbert transform of {f}, and the Fisher information is given by the formula

\displaystyle  \Phi(X) = \tau( J(X)^2 ).
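
As a sanity check on these normalizations (this is my own quick verification, not from the paper): for the standard semicircular law one has {J(X) = X}, so {\Phi(X) = \tau(X^2) = 1}, and the density formula above should therefore also return {1}. A small Python sketch assuming scipy is available:

import numpy as np
from scipy.integrate import quad

# Density of the standard semicircular law on [-2, 2] (unit variance).
f = lambda x: np.sqrt(max(4 - x * x, 0.0)) / (2 * np.pi)

variance, _ = quad(lambda x: x * x * f(x), -2, 2)
fisher, _ = quad(lambda x: (4 * np.pi ** 2 / 3) * f(x) ** 3, -2, 2)
print(variance, fisher)  # both are very close to 1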

One can also define a notion of relative free score {J(X:B)} relative to some subalgebra {B} of noncommutative random variables.

The free score interacts very well with the free minor process {X \mapsto [pXp]}, in particular by standard calculations one can establish the identity

\displaystyle  J( [pXp] : [pBp] ) = k {\bf E}( [p J(X:B) p] | [pXp], [pBp] )

whenever {X} is a noncommutative random variable, {B} is an algebra of noncommutative random variables, and {p} is a real projection of trace {1/k} that is free of both {X} and {B}. The monotonicity of free Fisher information then follows from an application of Pythagoras’s theorem (which implies in particular that conditional expectation operators are contractions on {L^2}). The monotonicity of free entropy then follows from an integral representation of free entropy as an integral of free Fisher information along the free Ornstein-Uhlenbeck process (or equivalently, free Fisher information is essentially the rate of change of free entropy with respect to perturbation by semicircular noise). The argument also shows when equality holds in the monotonicity inequalities; this occurs precisely when {\mu} is a semicircular measure up to affine rescaling.

After an extensive amount of calculation of all the quantities that were implicit in the above free probability argument (in particular computing the various terms involved in the application of Pythagoras’ theorem), we were able to extract a self-contained proof of monotonicity that relied on differentiating the quantities in {k} and using the differential equation (1). It turns out that if {d\mu = f\ dx} for sufficiently regular {f}, then there is an identity

\displaystyle  \partial_k \Phi( k^{-1/2}_* \mu^{\boxplus k} ) = -\frac{1}{2\pi^2} \lim_{\varepsilon \rightarrow 0} \sum_{\alpha,\beta = \pm} \int_{\bf R} \int_{\bf R} f(x) f(y) K(x+i\alpha \varepsilon, y+i\beta \varepsilon)\ dx dy \ \ \ \ \ (2)

where {K} is the kernel

\displaystyle  K(z,w) := \frac{1}{G(z) G(w)} (\frac{G(z)-G(w)}{z-w} + G(z) G(w))^2

and {G(z) := G_\mu(z)}. It is not difficult to show that {K(z,\overline{w})} is a positive semi-definite kernel, which gives the required monotonicity. It would be interesting to obtain some more insightful interpretation of the kernel {K} and the identity (2).

These monotonicity properties hint at the minor process {A \mapsto [pAp]} being associated to some sort of “gradient flow” in the {k} parameter. We were not able to formalize this intuition; indeed, it is not clear what a gradient flow on a varying noncommutative probability space {({\mathcal A}_p, \tau_p)} even means. However, after substantial further calculation we were able to formally describe the minor process as the Euler-Lagrange equation for an intriguing Lagrangian functional that we conjecture to have a random matrix interpretation. We first work in “Lagrangian coordinates”, defining the quantity {\lambda(s,y)} on the “Gelfand-Tsetlin pyramid”

\displaystyle  \Delta = \{ (s,y): 0 < s < 1; 0 < y < s \}

by the formula

\displaystyle  \mu^{\boxplus 1/s}((-\infty,\lambda(s,y)/s])=y/s,

which is well defined if the density of {\mu} is sufficiently well behaved. The random matrix interpretation of {\lambda(s,y)} is that it is the asymptotic location of the {\lfloor yN\rfloor^{th}} eigenvalue of the {\lfloor sN \rfloor \times \lfloor sN \rfloor} upper left minor of a random {N \times N} matrix {A_N} with asymptotic empirical spectral distribution {\mu} and with unitarily invariant distribution, thus {\lambda} is in some sense a continuum limit of Gelfand-Tsetlin patterns. Thus for instance the Cauchy interlacing laws in this asymptotic limit regime become

\displaystyle  0 \leq \partial_s \lambda \leq \partial_y \lambda.

After a lengthy calculation (involving extensive use of the chain rule and product rule), the equation (1) is equivalent to the Euler-Lagrange equation

\displaystyle  \partial_s L_{\lambda_s}(\partial_s \lambda, \partial_y \lambda) + \partial_y L_{\lambda_y}(\partial_s \lambda, \partial_y \lambda) = 0

where {L} is the Lagrangian density

\displaystyle  L(\lambda_s, \lambda_y) := \log \lambda_y + \log \sin( \pi \frac{\lambda_s}{\lambda_y} ).

Thus the minor process is formally a critical point of the integral {\int_\Delta L(\partial_s \lambda, \partial_y \lambda)\ ds dy}. The quantity {\partial_y \lambda} measures the mean eigenvalue spacing at some location of the Gelfand-Tsetlin pyramid, and the ratio {\frac{\partial_s \lambda}{\partial_y \lambda}} measures mean eigenvalue drift in the minor process. This suggests that this Lagrangian density is some sort of measure of entropy of the asymptotic microscale point process emerging from the minor process at this spacing and drift. There is work of Metcalfe demonstrating that this point process is given by the Boutillier bead model, so we conjecture that this Lagrangian density {L} somehow measures the entropy density of this process.

Asgar Jamneshan and I have just uploaded to the arXiv our paper “An uncountable Moore-Schmidt theorem“. This paper revisits a classical theorem of Moore and Schmidt in measurable cohomology of measure-preserving systems. To state the theorem, let {X = (X,{\mathcal X},\mu)} be a probability space, and {\mathrm{Aut}(X, {\mathcal X}, \mu)} be the group of measure-preserving automorphisms of this space, that is to say the invertible bimeasurable maps {T: X \rightarrow X} that preserve the measure {\mu}: {T_* \mu = \mu}. To avoid some ambiguity later in this post when we introduce abstract analogues of measure theory, we will refer to measurable maps as concrete measurable maps, and measurable spaces as concrete measurable spaces. (One could also call {X = (X,{\mathcal X}, \mu)} a concrete probability space, but we will not need to do so here as we will not be working explicitly with abstract probability spaces.)

Let {\Gamma = (\Gamma,\cdot)} be a discrete group. A (concrete) measure-preserving action of {\Gamma} on {X} is a group homomorphism {\gamma \mapsto T^\gamma} from {\Gamma} to {\mathrm{Aut}(X, {\mathcal X}, \mu)}, thus {T^1} is the identity map and {T^{\gamma_1} \circ T^{\gamma_2} = T^{\gamma_1 \gamma_2}} for all {\gamma_1,\gamma_2 \in \Gamma}. A large portion of ergodic theory is concerned with the study of such measure-preserving actions, especially in the classical case when {\Gamma} is the integers (with the additive group law).

Let {K = (K,+)} be a compact Hausdorff abelian group, which we can endow with the Borel {\sigma}-algebra {{\mathcal B}(K)}. A (concrete measurable) {K}-valued cocycle is a collection {\rho = (\rho_\gamma)_{\gamma \in \Gamma}} of concrete measurable maps {\rho_\gamma: X \rightarrow K} obeying the cocycle equation

\displaystyle \rho_{\gamma_1 \gamma_2}(x) = \rho_{\gamma_1} \circ T^{\gamma_2}(x) + \rho_{\gamma_2}(x) \ \ \ \ \ (1)

for {\mu}-almost every {x \in X}. (Here we are glossing over a measure-theoretic subtlety that we will return to later in this post – see if you can spot it before then!) Cocycles arise naturally in the theory of group extensions of dynamical systems; in particular (and ignoring the aforementioned subtlety), each cocycle induces a measure-preserving action {\gamma \mapsto S^\gamma} on {X \times K} (which we endow with the product of {\mu} with Haar probability measure on {K}), defined by

\displaystyle S^\gamma( x, k ) := (T^\gamma x, k + \rho_\gamma(x) ).

This connection with group extensions was the original motivation for our study of measurable cohomology, but is not the focus of the current paper.

A special case of a {K}-valued cocycle is a (concrete measurable) {K}-valued coboundary, in which {\rho_\gamma} for each {\gamma \in \Gamma} takes the special form

\displaystyle \rho_\gamma(x) = F \circ T^\gamma(x) - F(x)

for {\mu}-almost every {x \in X}, where {F: X \rightarrow K} is some measurable function; note that (ignoring the aforementioned subtlety), every function of this form is automatically a concrete measurable {K}-valued cocycle. One of the first basic questions in measurable cohomology is to try to characterize which {K}-valued cocycles are in fact {K}-valued coboundaries. This is a difficult question in general. However, there is a general result of Moore and Schmidt that at least allows one to reduce to the model case when {K} is the unit circle {\mathbf{T} = {\bf R}/{\bf Z}}, by taking advantage of the Pontryagin dual group {\hat K} of characters {\hat k: K \rightarrow \mathbf{T}}, that is to say the collection of continuous homomorphisms {\hat k: k \mapsto \langle \hat k, k \rangle} to the unit circle. More precisely, we have

Theorem 1 (Countable Moore-Schmidt theorem) Let {\Gamma} be a discrete group acting in a concrete measure-preserving fashion on a probability space {X}. Let {K} be a compact Hausdorff abelian group. Assume the following additional hypotheses:

  • (i) {\Gamma} is at most countable.
  • (ii) {X} is a standard Borel space.
  • (iii) {K} is metrisable.

Then a {K}-valued concrete measurable cocycle {\rho = (\rho_\gamma)_{\gamma \in \Gamma}} is a concrete coboundary if and only if for each character {\hat k \in \hat K}, the {\mathbf{T}}-valued cocycles {\langle \hat k, \rho \rangle = ( \langle \hat k, \rho_\gamma \rangle )_{\gamma \in \Gamma}} are concrete coboundaries.

The hypotheses (i), (ii), (iii) are saying in some sense that the data {\Gamma, X, K} are not too “large”; in all three cases they are saying in some sense that the data are only “countably complicated”. For instance, (iii) is equivalent to {K} being second countable, and (ii) is equivalent to {X} being modeled by a complete separable metric space. It is because of this restriction that we refer to this result as a “countable” Moore-Schmidt theorem. This theorem is a useful tool in several other applications, such as the Host-Kra structure theorem for ergodic systems; I hope to return to these subsequent applications in a future post.

Let us very briefly sketch the main ideas of the proof of Theorem 1. Ignore for now issues of measurability, and pretend that something that holds almost everywhere in fact holds everywhere. The hard direction is to show that if each {\langle \hat k, \rho \rangle} is a coboundary, then so is {\rho}. By hypothesis, we then have an equation of the form

\displaystyle \langle \hat k, \rho_\gamma(x) \rangle = \alpha_{\hat k} \circ T^\gamma(x) - \alpha_{\hat k}(x) \ \ \ \ \ (2)

for all {\hat k, \gamma, x} and some functions {\alpha_{\hat k}: X \rightarrow {\mathbf T}}, and our task is then to produce a function {F: X \rightarrow K} for which

\displaystyle \rho_\gamma(x) = F \circ T^\gamma(x) - F(x)

for all {\gamma,x}.

Comparing the two equations, the task would be easy if we could find an {F: X \rightarrow K} for which

\displaystyle \langle \hat k, F(x) \rangle = \alpha_{\hat k}(x) \ \ \ \ \ (3)

for all {\hat k, x}. However there is an obstruction to this: the left-hand side of (3) is additive in {\hat k}, so the right-hand side would have to be also in order to obtain such a representation. In other words, for this strategy to work, one would have to first establish the identity

\displaystyle \alpha_{\hat k_1 + \hat k_2}(x) - \alpha_{\hat k_1}(x) - \alpha_{\hat k_2}(x) = 0 \ \ \ \ \ (4)

for all {\hat k_1, \hat k_2, x}. On the other hand, the good news is that if we somehow manage to obtain the equation (4), then we can obtain a function {F} obeying (3), thanks to Pontryagin duality, which gives a one-to-one correspondence between {K} and the homomorphisms of the (discrete) group {\hat K} to {\mathbf{T}}.

Now, it turns out that one cannot derive the equation (4) directly from the given information (2). However, the left-hand side of (2) is additive in {\hat k}, so the right-hand side must be also. Manipulating this fact, we eventually arrive at

\displaystyle (\alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2}) \circ T^\gamma(x) = (\alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2})(x).

In other words, we don’t get to show that the left-hand side of (4) vanishes, but we do at least get to show that it is {\Gamma}-invariant. Now let us assume for sake of argument that the action of {\Gamma} is ergodic, which (ignoring issues about sets of measure zero) basically asserts that the only {\Gamma}-invariant functions are constant. So now we get a weaker version of (4), namely

\displaystyle \alpha_{\hat k_1 + \hat k_2}(x) - \alpha_{\hat k_1}(x) - \alpha_{\hat k_2}(x) = c_{\hat k_1, \hat k_2} \ \ \ \ \ (5)

for some constants {c_{\hat k_1, \hat k_2} \in \mathbf{T}}.

Now we need to eliminate the constants. This can be done by the following group-theoretic projection. Let {L^0({\bf X} \rightarrow {\bf T})} denote the space of concrete measurable maps {\alpha} from {{\bf X}} to {{\bf T}}, up to almost everywhere equivalence; this is an abelian group where the various terms in (5) naturally live. Inside this group we have the subgroup {{\bf T}} of constant functions (up to almost everywhere equivalence); this is where the right-hand side of (5) lives. Because {{\bf T}} is a divisible group, there is an application of Zorn’s lemma (a good exercise for those who are not acquainted with these things) to show that there exists a retraction {w: L^0({\bf X} \rightarrow {\bf T}) \rightarrow {\bf T}}, that is to say a group homomorphism that is the identity on the subgroup {{\bf T}}. We can use this retraction, or more precisely the complement {\alpha \mapsto \alpha - w(\alpha)}, to eliminate the constant in (5). Indeed, if we set

\displaystyle \tilde \alpha_{\hat k}(x) := \alpha_{\hat k}(x) - w(\alpha_{\hat k})

then from (5) we see that

\displaystyle \tilde \alpha_{\hat k_1 + \hat k_2}(x) - \tilde \alpha_{\hat k_1}(x) - \tilde \alpha_{\hat k_2}(x) = 0

while from (2) one has

\displaystyle \langle \hat k, \rho_\gamma(x) \rangle = \tilde \alpha_{\hat k} \circ T^\gamma(x) - \tilde \alpha_{\hat k}(x)

and now the previous strategy works with {\alpha_{\hat k}} replaced by {\tilde \alpha_{\hat k}}. This concludes the sketch of proof of Theorem 1.

In making the above argument rigorous, the hypotheses (i)-(iii) are used in several places. For instance, to reduce to the ergodic case one relies on the ergodic decomposition, which requires the hypothesis (ii). Also, most of the above equations only hold outside of a set of measure zero, and one needs the hypothesis (i) and the hypothesis (iii) (which is equivalent to {\hat K} being at most countable) to avoid the problem that an uncountable union of sets of measure zero could have positive measure (or fail to be measurable at all).

My co-author Asgar Jamneshan and I are working on a long-term project to extend many results in ergodic theory (such as the aforementioned Host-Kra structure theorem) to “uncountable” settings in which hypotheses analogous to (i)-(iii) are omitted; thus we wish to consider actions of uncountable groups, on spaces that are not standard Borel, and cocycles taking values in groups that are not metrisable. Such uncountable contexts naturally arise when trying to apply ergodic theory techniques to combinatorial problems (such as the inverse conjecture for the Gowers norms), as one often relies on the ultraproduct construction (or something similar) to generate an ergodic theory translation of these problems, and these constructions usually give “uncountable” objects rather than “countable” ones. (For instance, the ultraproduct of finite groups is a hyperfinite group, which is usually uncountable.) This paper marks the first step in this project by extending the Moore-Schmidt theorem to the uncountable setting.

If one simply drops the hypotheses (i)-(iii) and tries to prove the Moore-Schmidt theorem, several serious difficulties arise. We have already mentioned the loss of the ergodic decomposition and the possibility that one has to control an uncountable union of null sets. But there is in fact a more basic problem when one deletes (iii): the addition operation {+: K \times K \rightarrow K}, while still continuous, can fail to be measurable as a map from {(K \times K, {\mathcal B}(K) \otimes {\mathcal B}(K))} to {(K, {\mathcal B}(K))}! Thus for instance the sum of two measurable functions {F: X \rightarrow K} need not remain measurable, which makes even the very definition of a measurable cocycle or measurable coboundary problematic (or at least unnatural). This phenomenon is known as the Nedoma pathology. A standard example arises when {K} is the uncountable torus {{\mathbf T}^{{\bf R}}}, endowed with the product topology. Crucially, the Borel {\sigma}-algebra {{\mathcal B}(K)} generated by this uncountable product is not the product {{\mathcal B}(\mathbf{T})^{\otimes {\bf R}}} of the factor Borel {\sigma}-algebras (the discrepancy ultimately arises from the fact that topologies permit uncountable unions, but {\sigma}-algebras do not); relating to this, the product {\sigma}-algebra {{\mathcal B}(K) \otimes {\mathcal B}(K)} is not the same as the Borel {\sigma}-algebra {{\mathcal B}(K \times K)}, but is instead a strict sub-algebra. If the group operations on {K} were measurable, then the diagonal set

\displaystyle K^\Delta := \{ (k,k') \in K \times K: k = k' \} = \{ (k,k') \in K \times K: k - k' = 0 \}

would be measurable in {{\mathcal B}(K) \otimes {\mathcal B}(K)}. But it is an easy exercise in manipulation of {\sigma}-algebras to show that if {(X, {\mathcal X}), (Y, {\mathcal Y})} are any two measurable spaces and {E \subset X \times Y} is measurable in {{\mathcal X} \otimes {\mathcal Y}}, then the fibres {E_x := \{ y \in Y: (x,y) \in E \}} of {E} are contained in some countably generated subalgebra of {{\mathcal Y}}. Thus if {K^\Delta} were {{\mathcal B}(K) \otimes {\mathcal B}(K)}-measurable, then all the points of {K} would lie in a single countably generated {\sigma}-algebra. But the cardinality of such an algebra is at most {2^{\aleph_0}} while the cardinality of {K} is {2^{2^{\aleph_0}}}, and Cantor’s theorem then gives a contradiction.

To resolve this problem, we give {K} a coarser {\sigma}-algebra than the Borel {\sigma}-algebra, namely the Baire {\sigma}-algebra {{\mathcal B}^\otimes(K)}, thus coarsening the measurable space structure on {K = (K,{\mathcal B}(K))} to a new measurable space {K_\otimes := (K, {\mathcal B}^\otimes(K))}. In the case of compact Hausdorff abelian groups, {{\mathcal B}^{\otimes}(K)} can be defined as the {\sigma}-algebra generated by the characters {\hat k: K \rightarrow {\mathbf T}}; for more general compact abelian groups, one can define {{\mathcal B}^{\otimes}(K)} as the {\sigma}-algebra generated by all continuous maps into metric spaces. This {\sigma}-algebra is equal to {{\mathcal B}(K)} when {K} is metrisable but can be smaller for other {K}. With this measurable structure, {K_\otimes} becomes a measurable group; it seems that once one leaves the metrisable world, {K_\otimes} is a superior (or at least equally good) space to work with than {K} for analysis, as it avoids the Nedoma pathology. (For instance, from Plancherel’s theorem, we see that if {m_K} is the Haar probability measure on {K}, then {L^2(K,m_K) = L^2(K_\otimes,m_K)} (thus, every {K}-measurable set is equivalent modulo {m_K}-null sets to a {K_\otimes}-measurable set), so there is no damage to Plancherel caused by passing to the Baire {\sigma}-algebra.)

Passing to the Baire {\sigma}-algebra {K_\otimes} fixes the most severe problems with an uncountable Moore-Schmidt theorem, but one is still faced with an issue of having to potentially take an uncountable union of null sets. To avoid this sort of problem, we pass to the framework of abstract measure theory, in which we remove explicit mention of “points” and can easily delete all null sets at a very early stage of the formalism. In this setup, the category of concrete measurable spaces is replaced with the larger category of abstract measurable spaces, which we formally define as the opposite category of the category of {\sigma}-algebras (with Boolean algebra homomorphisms). Thus, we define an abstract measurable space to be an object of the form {{\mathcal X}^{\mathrm{op}}}, where {{\mathcal X}} is an (abstract) {\sigma}-algebra and {\mathrm{op}} is a formal placeholder symbol that signifies use of the opposite category, and an abstract measurable map {T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Y}^{\mathrm{op}}} is an object of the form {(T^*)^{\mathrm{op}}}, where {T^*: {\mathcal Y} \rightarrow {\mathcal X}} is a Boolean algebra homomorphism and {\mathrm{op}} is again used as a formal placeholder; we call {T^*} the pullback map associated to {T}.  [UPDATE: It turns out that this definition of a measurable map led to technical issues.  In a forthcoming revision of the paper we also impose the requirement that the abstract measurable map be \sigma-complete (i.e., it respects countable joins).] The composition {S \circ T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Z}^{\mathrm{op}}} of two abstract measurable maps {T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Y}^{\mathrm{op}}}, {S: {\mathcal Y}^{\mathrm{op}} \rightarrow {\mathcal Z}^{\mathrm{op}}} is defined by the formula {S \circ T := (T^* \circ S^*)^{\mathrm{op}}}, or equivalently {(S \circ T)^* = T^* \circ S^*}.

Every concrete measurable space {X = (X,{\mathcal X})} can be identified with an abstract counterpart {{\mathcal X}^{op}}, and similarly every concrete measurable map {T: X \rightarrow Y} can be identified with an abstract counterpart {(T^*)^{op}}, where {T^*: {\mathcal Y} \rightarrow {\mathcal X}} is the pullback map {T^* E := T^{-1}(E)}. Thus the category of concrete measurable spaces can be viewed as a subcategory of the category of abstract measurable spaces. The advantage of working in the abstract setting is that it gives us access to more spaces that could not be directly defined in the concrete setting. Most importantly for us, we have a new abstract space, the opposite measure algebra {X_\mu} of {X}, defined as {({\bf X}/{\bf N})^{op}} where {{\bf N}} is the ideal of null sets in {{\bf X}}. Informally, {X_\mu} is the space {X} with all the null sets removed; there is a canonical abstract embedding map {\iota: X_\mu \rightarrow X}, which allows one to convert any concrete measurable map {f: X \rightarrow Y} into an abstract one {[f]: X_\mu \rightarrow Y}. One can then define the notion of an abstract action, abstract cocycle, and abstract coboundary by replacing every occurrence of the category of concrete measurable spaces with their abstract counterparts, and replacing {X} with the opposite measure algebra {X_\mu}; see the paper for details. Our main theorem is then

Theorem 2 (Uncountable Moore-Schmidt theorem) Let {\Gamma} be a discrete group acting abstractly on a {\sigma}-finite measure space {X}. Let {K} be a compact Hausdorff abelian group. Then a {K_\otimes}-valued abstract measurable cocycle {\rho = (\rho_\gamma)_{\gamma \in \Gamma}} is an abstract coboundary if and only if for each character {\hat k \in \hat K}, the {\mathbf{T}}-valued cocycles {\langle \hat k, \rho \rangle = ( \langle \hat k, \rho_\gamma \rangle )_{\gamma \in \Gamma}} are abstract coboundaries.

With the abstract formalism, the proof of the uncountable Moore-Schmidt theorem is almost identical to the countable one (in fact we were able to make some simplifications, such as avoiding the use of the ergodic decomposition). A key tool is what we call a “conditional Pontryagin duality” theorem, which asserts that if one has an abstract measurable map {\alpha_{\hat k}: X_\mu \rightarrow {\bf T}} for each {\hat k \in \hat K} obeying the identity { \alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2} = 0} for all {\hat k_1,\hat k_2 \in \hat K}, then there is an abstract measurable map {F: X_\mu \rightarrow K_\otimes} such that {\alpha_{\hat k} = \langle \hat k, F \rangle} for all {\hat k \in \hat K}. This is derived from the usual Pontryagin duality and some other tools, most notably the completeness of the {\sigma}-algebra of {X_\mu}, and the Sikorski extension theorem.

We feel that it is natural to stay within the abstract measure theory formalism whenever dealing with uncountable situations. However, it is still an interesting question as to when one can guarantee that the abstract objects constructed in this formalism are representable by concrete analogues. The basic questions in this regard are:

  • (i) Suppose one has an abstract measurable map {f: X_\mu \rightarrow Y} into a concrete measurable space. Does there exist a representation of {f} by a concrete measurable map {\tilde f: X \rightarrow Y}? Is it unique up to almost everywhere equivalence?
  • (ii) Suppose one has a concrete cocycle that is an abstract coboundary. When can it be represented by a concrete coboundary?

For (i) the answer is somewhat interesting (as I learned after posing this MathOverflow question):

  • If {Y} does not separate points, or is not compact metrisable or Polish, there can be counterexamples to uniqueness. If {Y} is not compact or Polish, there can be counterexamples to existence.
  • If {Y} is a compact metric space or a Polish space, then one always has existence and uniqueness.
  • If {Y} is a compact Hausdorff abelian group, one always has existence.
  • If {X} is a complete measure space, then one always has existence (from a theorem of Maharam).
  • If {X} is the unit interval with the Borel {\sigma}-algebra and Lebesgue measure, then one has existence for all compact Hausdorff {Y} assuming the continuum hypothesis (from a theorem of von Neumann) but existence can fail under other extensions of ZFC (from a theorem of Shelah, using the method of forcing).
  • For more general {X}, existence for all compact Hausdorff {Y} is equivalent to the existence of a lifting from the {\sigma}-algebra {\mathcal{X}/\mathcal{N}} to {\mathcal{X}} (or, in the language of abstract measurable spaces, the existence of an abstract retraction from {X} to {X_\mu}).
  • It is a long-standing open question (posed for instance by Fremlin) whether it is relatively consistent with ZFC that existence holds whenever {Y} is compact Hausdorff.

Our understanding of (ii) is much less complete:

  • If {K} is metrisable, the answer is “always” (which among other things establishes the countable Moore-Schmidt theorem as a corollary of the uncountable one).
  • If {\Gamma} is at most countable and {X} is a complete measure space, then the answer is again “always”.

In view of the answers to (i), I would not be surprised if the full answer to (ii) was also sensitive to axioms of set theory. However, such set theoretic issues seem to be almost completely avoided if one sticks with the abstract formalism throughout; they only arise when trying to pass back and forth between the abstract and concrete categories.

I’ve just uploaded to the arXiv my paper “Almost all Collatz orbits attain almost bounded values“, submitted to the proceedings of the Forum of Mathematics, Pi. In this paper I returned to the topic of the notorious Collatz conjecture (also known as the {3x+1} conjecture), which I previously discussed in this blog post. This conjecture can be phrased as follows. Let {{\bf N}+1 = \{1,2,\dots\}} denote the positive integers (with {{\bf N} =\{0,1,2,\dots\}} the natural numbers), and let {\mathrm{Col}: {\bf N}+1 \rightarrow {\bf N}+1} be the map defined by setting {\mathrm{Col}(N)} equal to {3N+1} when {N} is odd and {N/2} when {N} is even. Let {\mathrm{Col}_{\min}(N) := \inf_{n \in {\bf N}} \mathrm{Col}^n(N)} be the minimal element of the Collatz orbit {N, \mathrm{Col}(N), \mathrm{Col}^2(N),\dots}. Then we have

Conjecture 1 (Collatz conjecture) One has {\mathrm{Col}_{\min}(N)=1} for all {N \in {\bf N}+1}.

Establishing the conjecture for all {N} remains out of reach of current techniques (for instance, as discussed in the previous blog post, it is basically at least as difficult as Baker’s theorem, all known proofs of which are quite difficult). However, the situation is more promising if one is willing to settle for results which only hold for “most” {N} in some sense. For instance, it is a result of Krasikov and Lagarias that

\displaystyle  \{ N \leq x: \mathrm{Col}_{\min}(N) = 1 \} \gg x^{0.84}

for all sufficiently large {x}. In another direction, it was shown by Terras that for almost all {N} (in the sense of natural density), one has {\mathrm{Col}_{\min}(N) < N}. This was then improved by Allouche to {\mathrm{Col}_{\min}(N) < N^\theta} for almost all {N} and any fixed {\theta > 0.869}, and extended later by Korec to cover all {\theta > \frac{\log 3}{\log 4} \approx 0.7924}. In this paper we obtain the following further improvement (at the cost of weakening natural density to logarithmic density):

Theorem 2 Let {f: {\bf N}+1 \rightarrow {\bf R}} be any function with {\lim_{N \rightarrow \infty} f(N) = +\infty}. Then we have {\mathrm{Col}_{\min}(N) < f(N)} for almost all {N} (in the sense of logarithmic density).

Thus for instance one has {\mathrm{Col}_{\min}(N) < \log\log\log\log N} for almost all {N} (in the sense of logarithmic density).
The difficulty here is that one usually only expects to establish “local-in-time” results that control the evolution {\mathrm{Col}^n(N)} for times {n} that only get as large as a small multiple {c \log N} of {\log N}; the aforementioned results of Terras, Allouche, and Korec, for instance, are of this type. However, to get {\mathrm{Col}^n(N)} all the way down to {f(N)} one needs something more like an “(almost) global-in-time” result, where the evolution remains under control for so long that the orbit has nearly reached the bounded state {N=O(1)}.
However, as observed by Bourgain in the context of nonlinear Schrödinger equations, one can iterate “almost sure local wellposedness” type results (which give local control for almost all initial data from a given distribution) into “almost sure (almost) global wellposedness” type results if one is fortunate enough to draw one’s data from an invariant measure for the dynamics. To illustrate the idea, let us take Korec’s aforementioned result that if {\theta > \frac{\log 3}{\log 4}} and one picks at random an integer {N} from a large interval {[1,x]}, then in most cases, the orbit of {N} will eventually move into the interval {[1,x^{\theta}]}. Similarly, if one picks an integer {M} at random from {[1,x^\theta]}, then in most cases, the orbit of {M} will eventually move into {[1,x^{\theta^2}]}. It is then tempting to concatenate the two statements and conclude that for most {N} in {[1,x]}, the orbit will eventually move into {[1,x^{\theta^2}]}. Unfortunately, this argument does not quite work, because by the time the orbit from a randomly drawn {N \in [1,x]} reaches {[1,x^\theta]}, the distribution of the final value is unlikely to be close to being uniformly distributed on {[1,x^\theta]}, and in particular could potentially concentrate almost entirely in the exceptional set of {M \in [1,x^\theta]} that do not make it into {[1,x^{\theta^2}]}. The point here is that the uniform measure on {[1,x]} is not transported by Collatz dynamics to anything resembling the uniform measure on {[1,x^\theta]}.
So, one now needs to locate a measure which has better invariance properties under the Collatz dynamics. It turns out to be technically convenient to work with a standard acceleration of the Collatz map known as the Syracuse map {\mathrm{Syr}: 2{\bf N}+1 \rightarrow 2{\bf N}+1}, defined on the odd numbers {2{\bf N}+1 = \{1,3,5,\dots\}} by setting {\mathrm{Syr}(N) = (3N+1)/2^a}, where {2^a} is the largest power of {2} that divides {3N+1}. (The advantage of using the Syracuse map over the Collatz map is that it performs precisely one multiplication of {3} at each iteration step, which makes the map better behaved when performing “{3}-adic” analysis.)
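
For concreteness, here is a minimal Python sketch of the Collatz and Syracuse maps as just defined (function names are purely illustrative):

def collatz(n):
    # One step of the Collatz map: 3n+1 for odd n, n/2 for even n.
    return 3 * n + 1 if n % 2 == 1 else n // 2

def syracuse(n):
    # One step of the Syracuse map on odd n: divide 3n+1 by the largest power of 2 dividing it.
    assert n % 2 == 1
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

# The Syracuse orbit of 27 (famous for its long Collatz orbit) eventually reaches 1;
# its first few terms after 27 are 41, 31, 47, 71, ...
orbit = [27]
while orbit[-1] != 1:
    orbit.append(syracuse(orbit[-1]))
print(len(orbit), min(orbit))
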
When viewed {3}-adically, we soon see that iterations of the Syracuse map become somewhat irregular. Most obviously, {\mathrm{Syr}(N)} is never divisible by {3}. A little less obviously, {\mathrm{Syr}(N)} is twice as likely to equal {2} mod {3} as it is to equal {1} mod {3}. This is because for a randomly chosen odd {\mathbf{N}}, the number of times {\mathbf{a}} that {2} divides {3\mathbf{N}+1} can be seen to have a geometric distribution of mean {2} – it equals any given value {a \in{\bf N}+1} with probability {2^{-a}}. Such a geometric random variable is twice as likely to be odd as to be even, which is what gives the above irregularity. There are similar irregularities modulo higher powers of {3}. For instance, one can compute that for large random odd {\mathbf{N}}, {\mathrm{Syr}^2(\mathbf{N}) \hbox{ mod } 9} will take the residue classes {0,1,2,3,4,5,6,7,8 \hbox{ mod } 9} with probabilities

\displaystyle  0, \frac{8}{63}, \frac{16}{63}, 0, \frac{11}{63}, \frac{4}{63}, 0, \frac{2}{63}, \frac{22}{63}

respectively. More generally, for any {n}, {\mathrm{Syr}^n(\mathbf{N}) \hbox{ mod } 3^n} will be distributed according to the law of a random variable {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} on {{\bf Z}/3^n{\bf Z}} that we call a Syracuse random variable, and which can be described explicitly as

\displaystyle  \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = 2^{-\mathbf{a}_1} + 3^1 2^{-\mathbf{a}_1-\mathbf{a}_2} + \dots + 3^{n-1} 2^{-\mathbf{a}_1-\dots-\mathbf{a}_n} \hbox{ mod } 3^n, \ \ \ \ \ (1)

where {\mathbf{a}_1,\dots,\mathbf{a}_n} are iid copies of a geometric random variable of mean {2}.
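These probabilities can be reproduced by direct computation from (1) with {n=2}, using the fact that {2^{-a} \hbox{ mod } 9} depends only on {a \hbox{ mod } 6}; here is a minimal Python sketch of such a computation (purely illustrative, and not taken from the paper).

```python
from fractions import Fraction
from collections import defaultdict

# P(a ≡ r mod 6) for a geometric variable with P(a = k) = 2^{-k}, k >= 1:
# sum_{j>=0} 2^{-(r+6j)} = 2^{-r} * 64/63, i.e. 32/63, 16/63, ..., 1/63 for r = 1,...,6.
p_mod6 = {r: Fraction(2 ** (6 - r), 63) for r in range(1, 7)}
assert sum(p_mod6.values()) == 1

inv2 = pow(2, -1, 9)  # 2^{-1} mod 9 (equals 5); needs Python >= 3.8
law = defaultdict(Fraction)
for r1, p1 in p_mod6.items():        # r1 = a_1 mod 6
    for r2, p2 in p_mod6.items():    # r2 = a_2 mod 6
        # formula (1) with n = 2:  2^{-a_1} + 3 * 2^{-a_1-a_2} mod 9
        value = (pow(inv2, r1, 9) + 3 * pow(inv2, r1 + r2, 9)) % 9
        law[value] += p1 * p2

print([str(law[v]) for v in range(9)])
# ['0', '8/63', '16/63', '0', '11/63', '4/63', '0', '2/63', '22/63'], matching the list above
```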
In view of this, any proposed “invariant” (or approximately invariant) measure (or family of measures) for the Syracuse dynamics should take this {3}-adic irregularity of distribution into account. It turns out that one can use the Syracuse random variables {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} to construct such a measure, but only if these random variables stabilise in the limit {n \rightarrow \infty} in a certain total variation sense. More precisely, in the paper we establish the estimate

\displaystyle  \sum_{Y \in {\bf Z}/3^n{\bf Z}} | \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^n{\bf Z})=Y) - 3^{m-n} \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^m{\bf Z})=Y \hbox{ mod } 3^m)| \ \ \ \ \ (2)

\displaystyle  \ll_A m^{-A}

for any {1 \leq m \leq n} and any {A > 0}. This type of stabilisation is plausible from entropy heuristics – the tuple {(\mathbf{a}_1,\dots,\mathbf{a}_n)} of geometric random variables that generates {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} has Shannon entropy {n \log 4}, which is significantly larger than the total entropy {n \log 3} of the uniform distribution on {{\bf Z}/3^n{\bf Z}}, so we expect a lot of “mixing” and “collision” to occur when converting the tuple {(\mathbf{a}_1,\dots,\mathbf{a}_n)} to {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}; these heuristics can be supported by numerics (which I was able to work out up to about {n=10} before running into memory and CPU issues), but it turns out to be surprisingly delicate to make this precise.
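As a quick sanity check on the entropy figure, the constant {\log 4} is just the Shannon entropy of a single geometric factor; a short illustrative computation (not from the paper):

```python
import math

# Shannon entropy of one geometric variable with P(a = k) = 2^{-k}, k >= 1:
# H = sum_k 2^{-k} * k * log 2 = 2 log 2 = log 4,
# so the tuple (a_1, ..., a_n) has entropy n log 4, versus n log 3 for the uniform law on Z/3^n Z.
H = sum(2.0 ** (-k) * k * math.log(2) for k in range(1, 200))
print(H, math.log(4), math.log(3))
```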
A first hint of how to proceed comes from the elementary number theory observation (easily proven by induction) that the rational numbers

\displaystyle  2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{n-1} 2^{-a_1-\dots-a_n}

are all distinct as {(a_1,\dots,a_n)} vary over tuples in {({\bf N}+1)^n}. Unfortunately, the process of reducing mod {3^n} creates a lot of collisions (as must happen from the pigeonhole principle); however, by a simple “Lefschetz principle” type argument one can at least show that the reductions

\displaystyle  2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{m-1} 2^{-a_1-\dots-a_m} \hbox{ mod } 3^n \ \ \ \ \ (3)

are mostly distinct for “typical” {a_1,\dots,a_m} (as drawn using the geometric distribution) as long as {m} is a bit smaller than {\frac{\log 3}{\log 4} n} (basically because the rational number appearing in (3) then typically takes a form like {M/2^{2m}} with {M} an integer between {0} and {3^n}). This analysis of the component (3) of (1) is already enough to get quite a bit of spreading on { \mathbf{Syrac}({\bf Z}/3^n{\bf Z})} (roughly speaking, when the argument is optimised, it shows that this random variable cannot concentrate in any subset of {{\bf Z}/3^n{\bf Z}} of density less than {n^{-C}} for some large absolute constant {C>0}). To get from this to a stabilisation property (2) we have to exploit the mixing effects of the remaining portion of (1) that does not come from (3). After some standard Fourier-analytic manipulations, matters then boil down to obtaining non-trivial decay of the characteristic function of {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}, and more precisely in showing that

\displaystyle  \mathbb{E} e^{-2\pi i \xi \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) / 3^n} \ll_A n^{-A} \ \ \ \ \ (4)

for any {A > 0} and any {\xi \in {\bf Z}/3^n{\bf Z}} that is not divisible by {3}.
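For small values of {n} one can at least get an empirical feel for (4) by Monte Carlo: sample the geometric variables, form (1), and average the character. A rough sketch along these lines (the parameter choices below are arbitrary and purely illustrative):

```python
import cmath
import random

def sample_syrac(n, rng):
    """One draw of Syrac(Z/3^n Z) via formula (1), with a_1,...,a_n iid, P(a = k) = 2^{-k}."""
    mod = 3 ** n
    inv2 = pow(2, -1, mod)          # 2 is invertible mod 3^n; needs Python >= 3.8
    total, s = 0, 0
    for i in range(n):
        a = 1
        while rng.random() < 0.5:   # geometric: P(a = k) = 2^{-k}, k >= 1
            a += 1
        s += a
        total = (total + pow(3, i, mod) * pow(inv2, s, mod)) % mod
    return total

rng = random.Random(0)
n, samples, xi = 6, 200_000, 1      # xi = 1 is not divisible by 3
mod = 3 ** n
avg = sum(cmath.exp(-2j * cmath.pi * xi * sample_syrac(n, rng) / mod)
          for _ in range(samples)) / samples
print(abs(avg))  # empirical |E e^{-2 pi i xi Syrac / 3^n}|; by (4) this should decay as n grows
```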
If the random variable (1) were the sum of independent terms, one could express this characteristic function as something like a Riesz product, which would be straightforward to estimate well. Unfortunately, the terms in (1) are loosely coupled together, and so the characteristic function does not immediately factor into a Riesz product. However, if one groups adjacent terms in (1) together, one can rewrite it (assuming {n} is even for sake of discussion) as

\displaystyle  (2^{\mathbf{a}_2} + 3) 2^{-\mathbf{b}_1} + (2^{\mathbf{a}_4}+3) 3^2 2^{-\mathbf{b}_1-\mathbf{b}_2} + \dots

\displaystyle  + (2^{\mathbf{a}_n}+3) 3^{n-2} 2^{-\mathbf{b}_1-\dots-\mathbf{b}_{n/2}} \hbox{ mod } 3^n

where {\mathbf{b}_j := \mathbf{a}_{2j-1} + \mathbf{a}_{2j}}. The point here is that after conditioning on the {\mathbf{b}_1,\dots,\mathbf{b}_{n/2}} to be fixed, the random variables {\mathbf{a}_2, \mathbf{a}_4,\dots,\mathbf{a}_n} remain independent (though the distribution of each {\mathbf{a}_{2j}} depends on the value that we conditioned {\mathbf{b}_j} to), and so the above expression is a conditional sum of independent random variables. This lets one express the characteristic function of (1) as an averaged Riesz product. One can use this to establish the bound (4) as long as one can show that the expression

\displaystyle  \frac{\xi 3^{2j-2} (2^{-\mathbf{b}_1-\dots-\mathbf{b}_j+1} \mod 3^n)}{3^n}

is not close to an integer for a moderately large number ({\gg A \log n}, to be precise) of indices {j = 1,\dots,n/2}. (Actually, for technical reasons we have to also restrict to those {j} for which {\mathbf{b}_j=3}, but let us ignore this detail here.) To put it another way, if we let {B} denote the set of pairs {(j,l)} for which

\displaystyle  \frac{\xi 3^{2j-2} (2^{-l+1} \mod 3^n)}{3^n} \in [-\varepsilon,\varepsilon] + {\bf Z},

we have to show that (with overwhelming probability) the random walk

\displaystyle (1,\mathbf{b}_1), (2, \mathbf{b}_1 + \mathbf{b}_2), \dots, (n/2, \mathbf{b}_1+\dots+\mathbf{b}_{n/2})

(which we view as a two-dimensional renewal process) contains at least a few points lying outside of {B}.
A little bit of elementary number theory and combinatorics allows one to describe the set {B} as the union of “triangles” with a certain non-zero separation between them. If the triangles were all fairly small, then one expects the renewal process to visit at least one point outside of {B} after passing through any given such triangle, and it then becomes relatively easy to show that the renewal process usually has the required number of points outside of {B}. The most difficult case is when the renewal process passes through a particularly large triangle in {B}. However, it turns out that large triangles enjoy particularly good separation properties, and in particular after passing through a large triangle one is likely to encounter nothing but small triangles for a while. After making these heuristics more precise, one is finally able to get enough points on the renewal process outside of {B} that one can finish the proof of (4), and thus Theorem 2.

William Banks, Kevin Ford, and I have just uploaded to the arXiv our paper “Large prime gaps and probabilistic models“. In this paper we introduce a random model to help understand the connection between two well known conjectures regarding the primes {{\mathcal P} := \{2,3,5,\dots\}}, the Cramér conjecture and the Hardy-Littlewood conjecture:

Conjecture 1 (Cramér conjecture) If {x} is a large number, then the largest prime gap {G_{\mathcal P}(x) := \sup_{p_n, p_{n+1} \leq x} p_{n+1}-p_n} in {[1,x]} is of size {\asymp \log^2 x}. (Granville refines this conjecture to {\gtrsim \xi \log^2 x}, where {\xi := 2e^{-\gamma} = 1.1229\dots}. Here we use the asymptotic notation {X \gtrsim Y} for {X \geq (1-o(1)) Y}, {X \sim Y} for {X \gtrsim Y \gtrsim X}, {X \gg Y} for {X \geq C^{-1} Y}, and {X \asymp Y} for {X \gg Y \gg X}.)

Conjecture 2 (Hardy-Littlewood conjecture) If {\mathcal{H} := \{h_1,\dots,h_k\}} are fixed distinct integers, then the number of numbers {n \in [1,x]} with {n+h_1,\dots,n+h_k} all prime is {({\mathfrak S}(\mathcal{H}) +o(1)) \int_2^x \frac{dt}{\log^k t}} as {x \rightarrow \infty}, where the singular series {{\mathfrak S}(\mathcal{H})} is defined by the formula

\displaystyle {\mathfrak S}(\mathcal{H}) := \prod_p \left( 1 - \frac{|{\mathcal H} \hbox{ mod } p|}{p}\right) (1-\frac{1}{p})^{-k}.

(One can view these conjectures as modern versions of two of the classical Landau problems, namely Legendre’s conjecture and the twin prime conjecture respectively.)
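For concreteness, the singular series is easy to approximate numerically by truncating the Euler product over primes; for the twin prime tuple {\{0,2\}} this gives twice the twin prime constant, roughly {1.3203}, while inadmissible tuples give {0}. A small illustrative sketch (the truncation point is arbitrary):

```python
from sympy import primerange

def singular_series(H, prime_limit=10 ** 6):
    """Truncated Euler product for the singular series S(H) of the tuple H."""
    k = len(H)
    S = 1.0
    for p in primerange(2, prime_limit):
        residues = len({h % p for h in H})        # |H mod p|
        S *= (1 - residues / p) * (1 - 1 / p) ** (-k)
    return S

print(singular_series([0, 2]))   # ~ 1.3203, twice the twin prime constant
print(singular_series([0, 1]))   # = 0: the tuple {0,1} covers both residue classes mod 2
```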

A well known connection between the Hardy-Littlewood conjecture and prime gaps was made by Gallagher. Among other things, Gallagher showed that if the Hardy-Littlewood conjecture was true, then the prime gaps {p_{n+1}-p_n} with {n \leq x} were asymptotically distributed according to an exponential distribution of mean {\log x}, in the sense that

\displaystyle | \{ n: p_n \leq x, p_{n+1}-p_n \geq \lambda \log x \}| = (e^{-\lambda}+o(1)) \frac{x}{\log x} \ \ \ \ \ (1)

 

as {x \rightarrow \infty} for any fixed {\lambda \geq 0}. Roughly speaking, the way this is established is by using the Hardy-Littlewood conjecture to control the mean values of {\binom{|{\mathcal P} \cap (p_n, p_n + \lambda \log x)|}{k}} for fixed {k,\lambda}, where {p_n} ranges over the primes in {[1,x]}. The relevance of these quantities arises from the Bonferroni inequalities (or “Brun pure sieve“), which can be formulated as the assertion that

\displaystyle 1_{N=0} \leq \sum_{k=0}^K (-1)^k \binom{N}{k}

when {K} is even and

\displaystyle 1_{N=0} \geq \sum_{k=0}^K (-1)^k \binom{N}{k}

when {K} is odd, for any natural number {N}; setting {N := |{\mathcal P} \cap (p_n, p_n + \lambda \log x)|} and taking means, one then gets upper and lower bounds for the probability that the interval {(p_n, p_n + \lambda \log x)} is free of primes. The most difficult step is to control the mean values of the singular series {{\mathfrak S}(\mathcal{H})} as {{\mathcal H}} ranges over {k}-tuples in a fixed interval such as {[0, \lambda \log x]}.
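The Bonferroni inequalities themselves are elementary (the partial alternating sum equals {(-1)^K \binom{N-1}{K}} for {N \geq 1}), and can be checked by brute force; an illustrative snippet:

```python
from math import comb

# Check: 1_{N=0} <= sum_{k=0}^K (-1)^k C(N,k) for K even, and >= for K odd.
for N in range(0, 15):
    indicator = 1 if N == 0 else 0
    for K in range(0, 9):
        partial = sum((-1) ** k * comb(N, k) for k in range(K + 1))
        assert (indicator <= partial) if K % 2 == 0 else (indicator >= partial), (N, K)
print("Bonferroni inequalities hold for all N < 15, K < 9")
```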

Heuristically, if one extrapolates the asymptotic (1) to the regime {\lambda \asymp \log x}, one is then led to Cramér’s conjecture, since the right-hand side of (1) falls below {1} when {\lambda} is significantly larger than {\log x}. However, this is not a rigorous derivation of Cramér’s conjecture from the Hardy-Littlewood conjecture, since Gallagher’s computations only establish (1) for fixed choices of {\lambda}, which is only enough to establish the far weaker bound {G_{\mathcal P}(x) / \log x \rightarrow \infty}, which was already known (see this previous paper for a discussion of the best known unconditional lower bounds on {G_{\mathcal P}(x)}). An inspection of the argument shows that if one wished to extend (1) to parameter choices {\lambda} that were allowed to grow with {x}, then one would need as input a stronger version of the Hardy-Littlewood conjecture in which the length {k} of the tuple {{\mathcal H} = (h_1,\dots,h_k)}, as well as the magnitudes of the shifts {h_1,\dots,h_k}, were also allowed to grow with {x}. Our initial objective in this project was then to quantify exactly what strengthening of the Hardy-Littlewood conjecture would be needed to rigorously imply Cramér’s conjecture. The precise results are technical, but roughly we show results of the following form:

Theorem 3 (Large gaps from Hardy-Littlewood, rough statement)

  • If the Hardy-Littlewood conjecture is uniformly true for {k}-tuples of length {k \ll \frac{\log x}{\log\log x}}, and with shifts {h_1,\dots,h_k} of size {O( \log^2 x )}, with a power savings in the error term, then {G_{\mathcal P}(x) \gg \frac{\log^2 x}{\log\log x}}.
  • If the Hardy-Littlewood conjecture is “true on average” for {k}-tuples of length {k \ll \frac{y}{\log x}} and shifts {h_1,\dots,h_k} of size {y} for all {\log x \leq y \leq \log^2 x \log\log x}, with a power savings in the error term, then {G_{\mathcal P}(x) \gg \log^2 x}.

In particular, we can recover Cramér’s conjecture given a sufficiently powerful version of the Hardy-Littlewood conjecture “on the average”.

Our proof of this theorem proceeds more or less along the same lines as Gallagher’s calculation, but now with {k} allowed to grow slowly with {x}. Again, the main difficulty is to accurately estimate average values of the singular series {{\mathfrak S}({\mathcal H})}. Here we found it useful to switch to a probabilistic interpretation of this series. For technical reasons it is convenient to work with a truncated, unnormalised version

\displaystyle V_{\mathcal H}(z) := \prod_{p \leq z} \left( 1 - \frac{|{\mathcal H} \hbox{ mod } p|}{p} \right)

of the singular series, for a suitable cutoff {z}; it turns out that when studying prime tuples of size {t}, the most convenient cutoff {z(t)} is the “Pólya magic cutoff“, defined as the largest prime for which

\displaystyle \prod_{p \leq z(t)}(1-\frac{1}{p}) \geq \frac{1}{\log t} \ \ \ \ \ (2)

 

(this is well defined for {t \geq e^2}); by Mertens’ theorem, we have {z(t) \sim t^{1/e^\gamma}}. One can interpret {V_{\mathcal H}(z)} probabilistically as

\displaystyle V_{\mathcal H}(z) = \mathbf{P}( {\mathcal H} \subset \mathcal{S}_z )

where {\mathcal{S}_z \subset {\bf Z}} is the randomly sifted set of integers formed by removing one residue class {a_p \hbox{ mod } p} uniformly at random for each prime {p \leq z}. The Hardy-Littlewood conjecture can be viewed as an assertion that the primes {{\mathcal P}} behave in some approximate statistical sense like the random sifted set {\mathcal{S}_z}, and one can prove the above theorem by using the Bonferroni inequalities both for the primes {{\mathcal P}} and for the random sifted set, and comparing the two (using an even {K} for the sifted set and an odd {K} for the primes in order to be able to combine the two together to get a useful bound).
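The Pólya magic cutoff is straightforward to compute numerically from (2), which also lets one see the Mertens asymptotic {z(t) \sim t^{1/e^\gamma}} in action; a small illustrative sketch (the prime bound used here is arbitrary):

```python
import math
from sympy import primerange

EULER_GAMMA = 0.5772156649015329

def polya_magic_cutoff(t, prime_limit=10 ** 7):
    """Largest prime z with prod_{p <= z} (1 - 1/p) >= 1/log t, as in (2)."""
    assert t >= math.e ** 2
    target = 1.0 / math.log(t)
    product, z = 1.0, None
    for p in primerange(2, prime_limit):
        product *= 1 - 1 / p
        if product < target:
            return z
        z = p
    raise ValueError("prime_limit too small for this t")

for t in (1e6, 1e9, 1e12):
    print(t, polya_magic_cutoff(t), round(t ** math.exp(-EULER_GAMMA)))  # z(t) vs t^{1/e^gamma}
```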

The proof of Theorem 3 ended up not using any properties of the set of primes {{\mathcal P}} other than that this set obeyed some form of the Hardy-Littlewood conjectures; the theorem remains true (with suitable notational changes) if this set were replaced by any other set. In order to convince ourselves that our theorem was not vacuous due to our version of the Hardy-Littlewood conjecture being too strong to be true, we then started exploring the question of coming up with random models of {{\mathcal P}} which obeyed various versions of the Hardy-Littlewood and Cramér conjectures.

This line of inquiry was started by Cramér, who introduced what we now call the Cramér random model {{\mathcal C}} of the primes, in which each natural number {n \geq 3} is selected for membership in {{\mathcal C}} with an independent probability of {1/\log n}. This model matches the primes well in some respects; for instance, it almost surely obeys the “Riemann hypothesis”

\displaystyle | {\mathcal C} \cap [1,x] | = \int_2^x \frac{dt}{\log t} + O( x^{1/2+o(1)})

and Cramér also showed that the largest gap {G_{\mathcal C}(x)} was almost surely {\sim \log^2 x}. On the other hand, it does not obey the Hardy-Littlewood conjecture; more precisely, it obeys a simplified variant of that conjecture in which the singular series {{\mathfrak S}({\mathcal H})} is absent.
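The {\sim \log^2 x} behaviour of the largest gap in the Cramér model is easy to observe experimentally; a quick illustrative simulation (range and seed chosen arbitrarily):

```python
import math
import random

def largest_gap_cramer(x, seed=0):
    """Largest gap between consecutive elements of the Cramér random set in [3, x],
    where each n >= 3 is included independently with probability 1/log n."""
    rng = random.Random(seed)
    prev, largest = None, 0
    for n in range(3, x + 1):
        if rng.random() < 1.0 / math.log(n):
            if prev is not None:
                largest = max(largest, n - prev)
            prev = n
    return largest

x = 10 ** 6
print(largest_gap_cramer(x), math.log(x) ** 2)  # the two numbers should be of comparable size
```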

Granville proposed a refinement {{\mathcal G}} to Cramér’s random model {{\mathcal C}} in which one first sieves out (in each dyadic interval {[x,2x]}) all residue classes {0 \hbox{ mod } p} for {p \leq A} for a certain threshold {A = \log^{1-o(1)} x = o(\log x)}, and then places each surviving natural number {n} in {{\mathcal G}} with an independent probability {\frac{1}{\log n} \prod_{p \leq A} (1-\frac{1}{p})^{-1}}. One can verify that this model obeys the Hardy-Littlewood conjectures, and Granville showed that the largest gap {G_{\mathcal G}(x)} in this model was almost surely {\gtrsim \xi \log^2 x}, leading to his conjecture that this bound also was true for the primes. (Interestingly, this conjecture is not yet borne out by numerics; calculations of prime gaps up to {10^{18}}, for instance, have shown that {G_{\mathcal P}(x)} never exceeds {0.9206 \log^2 x} in this range. This is not necessarily a conflict, however; Granville’s analysis relies on inspecting gaps in an extremely sparse region of natural numbers that are more devoid of primes than average, and this region is not well explored by existing numerics. See this previous blog post for more discussion of Granville’s argument.)

However, Granville’s model does not produce a power savings in the error term of the Hardy-Littlewood conjectures, mostly due to the need to truncate the singular series at the logarithmic cutoff {A}. After some experimentation, we were able to produce a tractable random model {{\mathcal R}} for the primes which obeyed the Hardy-Littlewood conjectures with power savings, and which reproduced Granville’s gap prediction of {\gtrsim \xi \log^2 x} (we also get an upper bound of {\lesssim \xi \log^2 x \frac{\log\log x}{2 \log\log\log x}} for both models, though we expect the lower bound to be closer to the truth); to us, this strengthens the case for Granville’s version of Cramér’s conjecture. The model can be described as follows. We select one residue class {a_p \hbox{ mod } p} uniformly at random for each prime {p}, and as before we let {S_z} be the sifted set of integers formed by deleting the residue classes {a_p \hbox{ mod } p} with {p \leq z}. We then set

\displaystyle {\mathcal R} := \{ n \geq e^2: n \in S_{z(n)}\}

with {z(t)} Pólya’s magic cutoff (this is the cutoff that gives {{\mathcal R}} a density consistent with the prime number theorem or the Riemann hypothesis). As stated above, we are able to show that almost surely one has

\displaystyle \xi \log^2 x \lesssim {\mathcal G}_{\mathcal R}(x) \lesssim \xi \log^2 x \frac{\log\log x}{2 \log\log\log x} \ \ \ \ \ (3)

 

and that the Hardy-Littlewood conjectures hold with power savings for {k} up to {\log^c x} for any fixed {c < 1} and for shifts {h_1,\dots,h_k} of size {O(\log^c x)}. This is unfortunately a tiny bit weaker than what Theorem 3 requires (which more or less corresponds to the endpoint {c=1}), although there is a variant of Theorem 3 that can use this input to produce a lower bound on gaps in the model {{\mathcal R}} (but it is weaker than the one in (3)). In fact we prove a more precise almost sure asymptotic formula for {{\mathcal G}_{\mathcal R}(x) } that involves the optimal bounds for the linear sieve (or interval sieve), in which one deletes one residue class modulo {p} from an interval {[0,y]} for all primes {p} up to a given threshold. The lower bound in (3) relates to the case of deleting the {0 \hbox{ mod } p} residue classes from {[0,y]}; the upper bound comes from the delicate analysis of the linear sieve by Iwaniec. Improving on either of the two bounds looks to be quite a difficult problem.

The probabilistic analysis of {{\mathcal R}} is somewhat more complicated than that of {{\mathcal C}} or {{\mathcal G}} as there is now non-trivial coupling between the events {n \in {\mathcal R}} as {n} varies, although moment methods such as the second moment method are still viable and allow one to verify the Hardy-Littlewood conjectures by a lengthy but fairly straightforward calculation. To analyse large gaps, one has to understand the statistical behaviour of a random linear sieve in which one starts with an interval {[0,y]} and randomly deletes a residue class {a_p \hbox{ mod } p} for each prime {p} up to a given threshold. For very small {p} this is handled by the deterministic theory of the linear sieve as discussed above. For medium sized {p}, it turns out that there is good concentration of measure thanks to tools such as Bennett’s inequality or Azuma’s inequality, as one can view the sieving process as a martingale or (approximately) as a sum of independent random variables. For larger primes {p}, in which only a small number of survivors are expected to be sieved out by each residue class, a direct combinatorial calculation of all possible outcomes (involving the random graph that connects interval elements {n \in [0,y]} to primes {p} if {n} falls in the random residue class {a_p \hbox{ mod } p}) turns out to give the best results.
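The random interval sieve just described is also easy to experiment with directly; the following illustrative sketch (with small, arbitrarily chosen parameters) exhibits the concentration of the survivor count across independent runs:

```python
import random
import statistics
from sympy import primerange

def survivors(y, z, rng):
    """Delete one uniformly random residue class a_p mod p from [0, y] for each prime p <= z;
    return the number of surviving integers."""
    alive = bytearray([1]) * (y + 1)
    for p in primerange(2, z + 1):
        a = rng.randrange(p)
        for n in range(a, y + 1, p):
            alive[n] = 0
    return sum(alive)

rng = random.Random(1)
y, z, runs = 20000, 200, 200
counts = [survivors(y, z, rng) for _ in range(runs)]
print(statistics.mean(counts), statistics.stdev(counts))  # survivor count concentrates around its mean
```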

In a recent post I discussed how the Riemann zeta function {\zeta} can be locally approximated by a polynomial, in the sense that for randomly chosen {t \in [T,2T]} one has an approximation

\displaystyle  \zeta(\frac{1}{2} + it - \frac{2\pi i z}{\log T}) \approx P_t( e^{2\pi i z/N} ) \ \ \ \ \ (1)

where {N} grows slowly with {T}, and {P_t} is a polynomial of degree {N}. Assuming the Riemann hypothesis (as we will throughout this post), the zeroes of {P_t} should all lie on the unit circle, and one should then be able to write {P_t} as a scalar multiple of the characteristic polynomial of (the inverse of) a unitary matrix {U = U_t \in U(N)}, which we normalise as

\displaystyle  P_t(Z) = \exp(A_t) \mathrm{det}(1 - ZU). \ \ \ \ \ (2)

Here {A_t} is some quantity depending on {t}. We view {U} as a random element of {U(N)}; in the limit {T \rightarrow \infty}, the GUE hypothesis is equivalent to {U} becoming equidistributed with respect to Haar measure on {U(N)} (also known as the Circular Unitary Ensemble, CUE; it is to the unit circle what the Gaussian Unitary Ensemble (GUE) is to the real line). One can also view {U} as analogous to the “geometric Frobenius” operator in the function field setting, though unfortunately it is difficult at present to make this analogy any more precise (due, among other things, to the lack of a sufficiently satisfactory theory of the “field of one element”).

Taking logarithmic derivatives of (2), we have

\displaystyle  -\frac{P'_t(Z)}{P_t(Z)} = \mathrm{tr}( U (1-ZU)^{-1} ) = \sum_{j=1}^\infty Z^{j-1} \mathrm{tr} U^j \ \ \ \ \ (3)

and hence on taking logarithmic derivatives of (1) in the {z} variable we (heuristically) have

\displaystyle  -\frac{2\pi i}{\log T} \frac{\zeta'}{\zeta}( \frac{1}{2} + it - \frac{2\pi i z}{\log T}) \approx \frac{2\pi i}{N} \sum_{j=1}^\infty e^{2\pi i jz/N} \mathrm{tr} U^j.

Morally speaking, we have

\displaystyle  - \frac{\zeta'}{\zeta}( \frac{1}{2} + it - \frac{2\pi i z}{\log T}) = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^{1/2+it}} e^{2\pi i z (\log n/\log T)}

so on comparing coefficients we expect to interpret the moments {\mathrm{tr} U^j} of {U} as a finite Dirichlet series:

\displaystyle  \mathrm{tr} U^j \approx \frac{N}{\log T} \sum_{T^{(j-1)/N} < n \leq T^{j/N}} \frac{\Lambda(n)}{n^{1/2+it}}. \ \ \ \ \ (4)

To understand the distribution of {U} in the unitary group {U(N)}, it suffices to understand the distribution of the moments

\displaystyle  {\bf E}_t \prod_{j=1}^k (\mathrm{tr} U^j)^{a_j} (\overline{\mathrm{tr} U^j})^{b_j} \ \ \ \ \ (5)

where {{\bf E}_t} denotes averaging over {t \in [T,2T]}, and {k, a_1,\dots,a_k, b_1,\dots,b_k \geq 0}. The GUE hypothesis asserts that in the limit {T \rightarrow \infty}, these moments converge to their CUE counterparts

\displaystyle  {\bf E}_{\mathrm{CUE}} \prod_{j=1}^k (\mathrm{tr} U^j)^{a_j} (\overline{\mathrm{tr} U^j})^{b_j} \ \ \ \ \ (6)

where {U} is now drawn uniformly in {U(N)} with respect to the CUE ensemble, and {{\bf E}_{\mathrm{CUE}}} denotes expectation with respect to that measure.

The moment (6) vanishes unless one has the homogeneity condition

\displaystyle  \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j. \ \ \ \ \ (7)

This follows from the fact that for any phase {\theta \in {\bf R}}, {e(\theta) U} has the same distribution as {U}, where we use the number theory notation {e(\theta) := e^{2\pi i\theta}}.

In the case when the degree {\sum_{j=1}^k j a_j} is low, we can use representation theory to establish the following simple formula for the moment (6), as evaluated by Diaconis and Shahshahani:

Proposition 1 (Low moments in CUE model) If

\displaystyle  \sum_{j=1}^k j a_j \leq N, \ \ \ \ \ (8)

then the moment (6) vanishes unless {a_j=b_j} for all {j}, in which case it is equal to

\displaystyle  \prod_{j=1}^k j^{a_j} a_j!. \ \ \ \ \ (9)

Another way of viewing this proposition is that for {U} distributed according to CUE, the random variables {\mathrm{tr} U^j} are distributed like independent complex random variables of mean zero and variance {j}, as long as one only considers moments obeying (8). This identity definitely breaks down for larger values of {a_j}, so one only obtains central limit theorems in certain limiting regimes, notably when one only considers a fixed number of {j}‘s and lets {N} go to infinity. (The paper of Diaconis and Shahshahani writes {\sum_{j=1}^k a_j + b_j} in place of {\sum_{j=1}^k j a_j}, but I believe this to be a typo.)
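Before turning to the proof, one can test the proposition (and the formula (13) below) numerically by sampling Haar-random unitary matrices, for instance via the standard QR construction; a Monte Carlo sketch (matrix size and sample count chosen arbitrarily):

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample a Haar-distributed (CUE) unitary matrix via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # fix the column phases so the law is exactly Haar

rng = np.random.default_rng(0)
N, samples, jmax = 8, 4000, 12
second_moments = np.zeros(jmax)
for _ in range(samples):
    eigs = np.linalg.eigvals(haar_unitary(N, rng))
    for j in range(1, jmax + 1):
        second_moments[j - 1] += abs(np.sum(eigs ** j)) ** 2   # |tr U^j|^2
second_moments /= samples
for j in range(1, jmax + 1):
    print(j, round(second_moments[j - 1], 2), min(j, N))  # estimate vs the exact value min(j, N)
```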

Proof: Let {D} be the left-hand side of (8). We may assume that (7) holds since we are done otherwise, hence

\displaystyle  D = \sum_{j=1}^k j a_j = \sum_{j=1}^k j b_j.

Our starting point is Schur-Weyl duality. Namely, we consider the {n^D}-dimensional complex vector space

\displaystyle  ({\bf C}^n)^{\otimes D} = {\bf C}^n \otimes \dots \otimes {\bf C}^n.

This space has an action of the product group {S_D \times GL_n({\bf C})}: the symmetric group {S_D} acts by permutation on the {D} tensor factors, while the general linear group {GL_n({\bf C})} acts diagonally on the {{\bf C}^n} factors, and the two actions commute with each other. Schur-Weyl duality gives a decomposition

\displaystyle  ({\bf C}^n)^{\otimes D} \equiv \bigoplus_\lambda V^\lambda_{S_D} \otimes V^\lambda_{GL_n({\bf C})} \ \ \ \ \ (10)

where {\lambda} ranges over Young tableaux of size {D} with at most {n} rows, {V^\lambda_{S_D}} is the {S_D}-irreducible unitary representation corresponding to {\lambda} (which can be constructed for instance using Specht modules), and {V^\lambda_{GL_n({\bf C})}} is the {GL_n({\bf C})}-irreducible polynomial representation corresponding with highest weight {\lambda}.

Let {\pi \in S_D} be a permutation consisting of {a_j} cycles of length {j} (this is uniquely determined up to conjugation), and let {g \in GL_n({\bf C})}. The pair {(\pi,g)} then acts on {({\bf C}^n)^{\otimes D}}, with the action on basis elements {e_{i_1} \otimes \dots \otimes e_{i_D}} given by

\displaystyle  g e_{\pi(i_1)} \otimes \dots \otimes g e_{\pi(i_D)}.

The trace of this action can then be computed as

\displaystyle  \sum_{i_1,\dots,i_D \in \{1,\dots,n\}} g_{\pi(i_1),i_1} \dots g_{\pi(i_D),i_D}

where {g_{i,j}} is the {ij} matrix coefficient of {g}. Breaking up into cycles and summing, this is just

\displaystyle  \prod_{j=1}^k \mathrm{tr}(g^j)^{a_j}.

But we can also compute this trace using the Schur-Weyl decomposition (10), yielding the identity

\displaystyle  \prod_{j=1}^k \mathrm{tr}(g^j)^{a_j} = \sum_\lambda \chi_\lambda(\pi) s_\lambda(g) \ \ \ \ \ (11)

where {\chi_\lambda: S_D \rightarrow {\bf C}} is the character on {S_D} associated to {V^\lambda_{S_D}}, and {s_\lambda: GL_n({\bf C}) \rightarrow {\bf C}} is the character on {GL_n({\bf C})} associated to {V^\lambda_{GL_n({\bf C})}}. As is well known, {s_\lambda(g)} is just the Schur polynomial of weight {\lambda} applied to the (algebraic, generalised) eigenvalues of {g}. We can specialise to unitary matrices (taking {n=N}) to conclude that

\displaystyle  \prod_{j=1}^k \mathrm{tr}(U^j)^{a_j} = \sum_\lambda \chi_\lambda(\pi) s_\lambda(U)

and similarly

\displaystyle  \prod_{j=1}^k \mathrm{tr}(U^j)^{b_j} = \sum_\lambda \chi_\lambda(\pi') s_\lambda(U)

where {\pi' \in S_D} consists of {b_j} cycles of length {j} for each {j=1,\dots,k}. On the other hand, the characters {s_\lambda} are an orthonormal system on {L^2(U(N))} with the CUE measure. Thus we can write the expectation (6) as

\displaystyle  \sum_\lambda \chi_\lambda(\pi) \overline{\chi_\lambda(\pi')}. \ \ \ \ \ (12)

Now recall that {\lambda} ranges over all the Young tableaux of size {D} with at most {N} rows. But by (8) we have {D \leq N}, and so the condition of having {N} rows is redundant. Hence {\lambda} now ranges over all Young tableaux of size {D}, which as is well known enumerates all the irreducible representations of {S_D}. One can then use the standard orthogonality properties of characters to show that the sum (12) vanishes if {\pi}, {\pi'} are not conjugate, and is equal to {D!} divided by the size of the conjugacy class of {\pi} (or equivalently, by the size of the centraliser of {\pi}) otherwise. But the latter expression is easily computed to be {\prod_{j=1}^k j^{a_j} a_j!}, giving the claim. \Box

Example 2 We illustrate the identity (11) when {D=3}, {n \geq 3}. The Schur polynomials are given as

\displaystyle  s_{3}(g) = \sum_i \lambda_i^3 + \sum_{i<j} (\lambda_i^2 \lambda_j + \lambda_i \lambda_j^2) + \sum_{i<j<k} \lambda_i \lambda_j \lambda_k

\displaystyle  s_{2,1}(g) = \sum_{i \neq j} \lambda_i^2 \lambda_j + 2 \sum_{i<j<k} \lambda_i \lambda_j \lambda_k

\displaystyle  s_{1,1,1}(g) = \sum_{i<j<k} \lambda_i \lambda_j \lambda_k

where {\lambda_1,\dots,\lambda_n} are the (generalised) eigenvalues of {g}, and the formula (11) in this case becomes

\displaystyle  \mathrm{tr}(g^3) = s_{3}(g) - s_{2,1}(g) + s_{1,1,1}(g)

\displaystyle  \mathrm{tr}(g^2) \mathrm{tr}(g) = s_{3}(g) - s_{1,1,1}(g)

\displaystyle  \mathrm{tr}(g)^3 = s_{3}(g) + 2 s_{2,1}(g) + s_{1,1,1}(g).

The functions {s_{1,1,1}, s_{2,1}, s_3} are orthonormal on {U(n)}, so the three functions {\mathrm{tr}(g^3), \mathrm{tr}(g^2) \mathrm{tr}(g), \mathrm{tr}(g)^3} are pairwise orthogonal, and their {L^2} norms are {\sqrt{3}}, {\sqrt{2}}, and {\sqrt{6}} respectively, reflecting the size in {S_3} of the centralisers of the permutations {(123)}, {(12)}, and {\mathrm{id}} respectively. If {n} is instead set to say {2}, then the {s_{1,1,1}} terms now disappear (the Young tableau here has too many rows), and the three quantities here now have some non-trivial covariance.
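One can also verify the three instances of (11) above numerically for a random matrix, using the expressions of the degree three Schur polynomials in terms of power sums; an illustrative check:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.linalg.eigvals(rng.standard_normal((4, 4)))   # eigenvalues of a random g, with n = 4 >= 3
p1, p2, p3 = (np.sum(lam ** k) for k in (1, 2, 3))     # power sums tr(g), tr(g^2), tr(g^3)

# Degree-3 Schur polynomials written in terms of power sums:
s3 = (p1 ** 3 + 3 * p1 * p2 + 2 * p3) / 6              # s_3 = h_3
s21 = (p1 ** 3 - p3) / 3                               # s_{2,1}
s111 = (p1 ** 3 - 3 * p1 * p2 + 2 * p3) / 6            # s_{1,1,1} = e_3

print(np.isclose(p3, s3 - s21 + s111))                 # tr(g^3)
print(np.isclose(p2 * p1, s3 - s111))                  # tr(g^2) tr(g)
print(np.isclose(p1 ** 3, s3 + 2 * s21 + s111))        # tr(g)^3
```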

Example 3 Consider the moment {{\bf E}_{\mathrm{CUE}} |\mathrm{tr} U^j|^2}. For {j \leq N}, the above proposition shows us that this moment is equal to {j}. What happens for {j>N}? The formula (12) computes this moment as

\displaystyle  \sum_\lambda |\chi_\lambda(\pi)|^2

where {\pi} is a cycle of length {j} in {S_j}, and {\lambda} ranges over all Young tableaux with size {j} and at most {N} rows. The Murnaghan-Nakayama rule tells us that {\chi_\lambda(\pi)} vanishes unless {\lambda} is a hook (all but one of the non-zero rows consisting of just a single box; this also can be interpreted as an exterior power representation on the space {{\bf C}^j_{\sum=0}} of vectors in {{\bf C}^j} whose coordinates sum to zero), in which case it is equal to {\pm 1} (depending on the parity of the number of non-zero rows). As such we see that this moment is equal to {N}. Thus in general we have

\displaystyle  {\bf E}_{\mathrm{CUE}} |\mathrm{tr} U^j|^2 = \min(j,N). \ \ \ \ \ (13)

Now we discuss what is known for the analogous moments (5). Here we shall be rather non-rigorous, in particular ignoring an annoying “Archimedean” issue that the product of the ranges {T^{(j-1)/N} < n \leq T^{j/N}} and {T^{(k-1)/N} < n \leq T^{k/N}} is not quite the range {T^{(j+k-1)/N} < n \leq T^{(j+k)/N}} but instead leaks into the adjacent range {T^{(j+k-2)/N} < n \leq T^{(j+k-1)/N}}. This issue can be addressed by working in a “weak” sense in which parameters such as {j,k} are averaged over fairly long scales, or by passing to a function field analogue of these questions, but we shall simply ignore the issue completely and work at a heuristic level only. For similar reasons we will ignore some technical issues arising from the sharp cutoff of {t} to the range {[T,2T]} (it would be slightly better technically to use a smooth cutoff).

One can morally expand out (5) using (4) as

\displaystyle  (\frac{N}{\log T})^{J+K} \sum_{n_1,\dots,n_J,m_1,\dots,m_K} \frac{\Lambda(n_1) \dots \Lambda(n_J) \Lambda(m_1) \dots \Lambda(m_K)}{n_1^{1/2} \dots n_J^{1/2} m_1^{1/2} \dots m_K^{1/2}} \times \ \ \ \ \ (14)

\displaystyle  \times {\bf E}_t (m_1 \dots m_K / n_1 \dots n_J)^{it}

where {J := \sum_{j=1}^k a_j}, {K := \sum_{j=1}^k b_j}, and the integers {n_i,m_i} are in the ranges

\displaystyle  T^{(j-1)/N} < n_{a_1 + \dots + a_{j-1} + i} \leq T^{j/N}

for {j=1,\dots,k} and {1 \leq i \leq a_j}, and

\displaystyle  T^{(j-1)/N} < m_{b_1 + \dots + b_{j-1} + i} \leq T^{j/N}

for {j=1,\dots,k} and {1 \leq i \leq b_j}. Morally, the expectation here is negligible unless

\displaystyle  m_1 \dots m_K = (1 + O(1/T)) n_1 \dots n_J \ \ \ \ \ (15)

in which case the expectation oscillates with magnitude one. In particular, if (7) fails (with some room to spare) then the moment (5) should be negligible, which is consistent with the analogous behaviour for the moments (6). Now suppose that (8) holds (with some room to spare). Then {n_1 \dots n_J} is significantly less than {T}, so the