
This is a spinoff from the previous post. In that post, we remarked that whenever one receives a new piece of information {E}, the prior odds {\mathop{\bf P}( H_1 ) / \mathop{\bf P}( H_0 )} between an alternative hypothesis {H_1} and a null hypothesis {H_0} is updated to a posterior odds {\mathop{\bf P}( H_1|E ) / \mathop{\bf P}( H_0|E )}, which can be computed via Bayes’ theorem by the formula

\displaystyle  \frac{\mathop{\bf P}( H_1|E )}{\mathop{\bf P}(H_0|E)} = \frac{\mathop{\bf P}(H_1)}{\mathop{\bf P}(H_0)} \times \frac{\mathop{\bf P}(E|H_1)}{\mathop{\bf P}(E|H_0)}

where {\mathop{\bf P}(E|H_1)} is the likelihood of this information {E} under the alternative hypothesis {H_1}, and {\mathop{\bf P}(E|H_0)} is the likelihood of this information {E} under the null hypothesis {H_0}. If there are no other hypotheses under consideration, then the two posterior probabilities {\mathop{\bf P}( H_1|E )}, {\mathop{\bf P}( H_0|E )} must add up to one, and so can be recovered from the posterior odds {o := \frac{\mathop{\bf P}( H_1|E )}{\mathop{\bf P}(H_0|E)}} by the formulae

\displaystyle  \mathop{\bf P}(H_1|E) = \frac{o}{1+o}; \quad \mathop{\bf P}(H_0|E) = \frac{1}{1+o}.

This gives a straightforward way to update one’s prior probabilities, and I thought I would present it in the form of a worksheet for ease of calculation:

A PDF version of the worksheet and instructions can be found here. One can fill in this worksheet in the following order:

  1. In Box 1, one enters in the precise statement of the null hypothesis {H_0}.
  2. In Box 2, one enters in the precise statement of the alternative hypothesis {H_1}. (This step is very important! As discussed in the previous post, Bayesian calculations can become extremely inaccurate if the alternative hypothesis is vague.)
  3. In Box 3, one enters in the prior probability {\mathop{\bf P}(H_0)} (or the best estimate thereof) of the null hypothesis {H_0}.
  4. In Box 4, one enters in the prior probability {\mathop{\bf P}(H_1)} (or the best estimate thereof) of the alternative hypothesis {H_1}. If only two hypotheses are being considered, we of course have {\mathop{\bf P}(H_1) = 1 - \mathop{\bf P}(H_0)}.
  5. In Box 5, one enters in the ratio {\mathop{\bf P}(H_1)/\mathop{\bf P}(H_0)} between Box 4 and Box 3.
  6. In Box 6, one enters in the precise new information {E} that one has acquired since the prior state. (As discussed in the previous post, it is important that all relevant information {E} – both supporting and invalidating the alternative hypothesis – is reported accurately. If one cannot be certain that key information has not been withheld, then Bayesian calculations become highly unreliable.)
  7. In Box 7, one enters in the likelihood {\mathop{\bf P}(E|H_0)} (or the best estimate thereof) of the new information {E} under the null hypothesis {H_0}.
  8. In Box 8, one enters in the likelihood {\mathop{\bf P}(E|H_1)} (or the best estimate thereof) of the new information {E} under the alternative hypothesis {H_1}. (This can be difficult to compute, particularly if {H_1} is not specified precisely.)
  9. In Box 9, one enters in the ratio {\mathop{\bf P}(E|H_1)/\mathop{\bf P}(E|H_0)} between Box 8 and Box 7.
  10. In Box 10, one enters in the product of Box 5 and Box 9.
  11. (Assuming there are no other hypotheses than {H_0} and {H_1}) In Box 11, enter in {1} divided by {1} plus Box 10.
  12. (Assuming there are no other hypotheses than {H_0} and {H_1}) In Box 12, enter in Box 10 divided by {1} plus Box 10. (Alternatively, one can enter in {1} minus Box 11.)
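
For those who would rather let a computer handle the arithmetic, here is a minimal Python sketch of Boxes 5 through 12 (the function and argument names are my own, and the sketch assumes, as in the worksheet, that {H_0} and {H_1} are the only hypotheses under consideration):

    def bayes_worksheet(prior_H0, prior_H1, likelihood_E_given_H0, likelihood_E_given_H1):
        """Carry out Boxes 5 through 12, assuming H_0 and H_1 are the only hypotheses."""
        prior_odds = prior_H1 / prior_H0                                    # Box 5
        likelihood_ratio = likelihood_E_given_H1 / likelihood_E_given_H0    # Box 9
        posterior_odds = prior_odds * likelihood_ratio                      # Box 10
        posterior_H0 = 1 / (1 + posterior_odds)                             # Box 11: posterior probability of H_0
        posterior_H1 = posterior_odds / (1 + posterior_odds)                # Box 12: posterior probability of H_1
        return posterior_H0, posterior_H1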

To illustrate this procedure, let us consider a standard Bayesian update problem. Suppose that, at a given point in time, {2\%} of the population is infected with COVID-19. In response to this, a company mandates COVID-19 testing of its workforce, using a cheap COVID-19 test. This test has a {20\%} chance of a false negative (testing negative when one has COVID) and a {5\%} chance of a false positive (testing positive when one does not have COVID). An employee {X} takes the mandatory test, which turns out to be positive. What is the probability that {X} actually has COVID?

We can fill out the entries in the worksheet one at a time:

  • Box 1: The null hypothesis {H_0} is that {X} does not have COVID.
  • Box 2: The alternative hypothesis {H_1} is that {X} does have COVID.
  • Box 3: In the absence of any better information, the prior probability {\mathop{\bf P}(H_0)} of the null hypothesis is {98\%}, or {0.98}.
  • Box 4: Similarly, the prior probability {\mathop{\bf P}(H_1)} of the alternative hypothesis is {2\%}, or {0.02}.
  • Box 5: The prior odds {\mathop{\bf P}(H_1)/\mathop{\bf P}(H_0)} are {0.02/0.98 \approx 0.02}.
  • Box 6: The new information {E} is that {X} has tested positive for COVID.
  • Box 7: The likelihood {\mathop{\bf P}(E|H_0)} of {E} under the null hypothesis is {5\%}, or {0.05} (the false positive rate).
  • Box 8: The likelihood {\mathop{\bf P}(E|H_1)} of {E} under the alternative is {80\%}, or {0.8} (one minus the false negative rate).
  • Box 9: The likelihood ratio {\mathop{\bf P}(E|H_1)/\mathop{\bf P}(E|H_0)} is {0.8 / 0.05 = 16}.
  • Box 10: The product of Box 5 and Box 9 is approximately {0.32}.
  • Box 11: The posterior probability {\mathop{\bf P}(H_0|E)} is approximately {1/(1+0.32) \approx 75\%}.
  • Box 12: The posterior probability {\mathop{\bf P}(H_1|E)} is approximately {0.32/(1+0.32) \approx 25\%}.
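
Feeding these entries into the bayes_worksheet sketch from earlier reproduces the same conclusion (up to rounding):

    # Boxes 3, 4, 7, 8 for the COVID testing example
    posterior_H0, posterior_H1 = bayes_worksheet(0.98, 0.02, 0.05, 0.80)
    print(posterior_H0, posterior_H1)   # approximately 0.754 and 0.246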

The filled worksheet looks like this:

Perhaps surprisingly, despite the positive COVID test, the employee {X} only has a {25\%} chance of actually having COVID! This is due to the low base rate of infection combined with the relatively large false positive rate of this cheap test, and is an illustration of the base rate fallacy in statistics.

We remark that if we switch the roles of the null hypothesis and alternative hypothesis, then some of the odds in the worksheet change, but the ultimate conclusions remain unchanged:

So the question of which hypothesis to designate as the null hypothesis and which one to designate as the alternative hypothesis is largely a matter of convention.

Now let us take a superficially similar situation in which a mother observes her daughter exhibiting COVID-like symptoms, to the point where she estimates the probability of her daughter having COVID at {50\%}. She then administers the same cheap COVID-19 test as before, which returns positive. What is the posterior probability of her daughter having COVID?

One can fill out the worksheet much as before, but now with the prior probability of the alternative hypothesis raised from {2\%} to {50\%} (and the prior probability of the null hypothesis dropping from {98\%} to {50\%}). One now gets that the probability that the daughter has COVID has increased all the way to {94\%}:
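
In the earlier sketch, this amounts to changing only the prior entries in Boxes 3 and 4:

    posterior_H0, posterior_H1 = bayes_worksheet(0.50, 0.50, 0.05, 0.80)
    print(posterior_H1)   # approximately 0.941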

Thus we see that prior probabilities can make a significant impact on the posterior probabilities.

Now we use the worksheet to analyze an infamous probability puzzle, the Monty Hall problem. Let us use the formulation given in that Wikipedia page:

Problem 1 Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

For this problem, the precise formulation of the null hypothesis and the alternative hypothesis become rather important. Suppose we take the following two hypotheses:

  • Null hypothesis {H_0}: The car is behind door number 1, and no matter what door you pick, the host will randomly reveal another door that contains a goat.
  • Alternative hypothesis {H_1}: The car is behind door number 2 or 3, and no matter what door you pick, the host will randomly reveal another door that contains a goat.
Assuming the prizes are distributed randomly, we have {\mathop{\bf P}(H_0)=1/3} and {\mathop{\bf P}(H_1)=2/3}. The new information {E} is that, after door 1 is selected, door 3 is revealed and shown to be a goat. After some thought, we conclude that {\mathop{\bf P}(E|H_0)} is equal to {1/2} (the host has a fifty-fifty chance of revealing door 3 instead of door 2) but that {\mathop{\bf P}(E|H_1)} is also equal to {1/2} (if the car is behind door 2, the host must reveal door 3, whereas if the car is behind door 3, the host cannot reveal door 3). Filling in the worksheet, we see that the new information does not in fact alter the odds, and the probability that the car is not behind door 1 remains at 2/3, so it is advantageous to switch.

However, consider the following different set of hypotheses:

  • Null hypothesis {H'_0}: The car is behind door number 1, and if you pick the door with the car, the host will reveal another door to entice you to switch. Otherwise, the host will not reveal a door.
  • Alternative hypothesis {H'_1}: The car is behind door number 2 or 3, and if you pick the door with the car, the host will reveal another door to entice you to switch. Otherwise, the host will not reveal a door.

Here we still have {\mathop{\bf P}(H'_0)=1/3} and {\mathop{\bf P}(H'_1)=2/3}, but while {\mathop{\bf P}(E|H'_0)} remains equal to {1/2}, {\mathop{\bf P}(E|H'_1)} has dropped to zero (since if the car is not behind door 1, the host will not reveal a door). So now {\mathop{\bf P}(H'_0|E)} has increased all the way to {1}, and it is not advantageous to switch! This dramatically illustrates the importance of specifying the hypotheses precisely. The worksheet is now filled out as follows:
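
To illustrate the sensitivity to the host's protocol numerically, here is a small Monte Carlo sketch (my own illustration, not part of the worksheet itself) that conditions on the observed event {E} (the contestant picks door 1 and the host opens door 3 to reveal a goat) under each of the two host behaviours described above:

    import random

    def monty_hall_trial(host_always_reveals):
        """Simulate one game with the contestant picking door 1; return (car_door, revealed_door)."""
        car = random.randint(1, 3)
        if host_always_reveals:
            # Hypotheses H_0/H_1: the host always opens a goat door other than door 1, at random if there is a choice.
            revealed = random.choice([d for d in (2, 3) if d != car])
        else:
            # Hypotheses H'_0/H'_1: the host only opens a door (to entice a switch) when door 1 hides the car.
            revealed = random.choice([2, 3]) if car == 1 else None
        return car, revealed

    def prob_car_behind_door1_given_E(host_always_reveals, trials=10**5):
        matches = wins = 0
        for _ in range(trials):
            car, revealed = monty_hall_trial(host_always_reveals)
            if revealed == 3:                     # condition on the observed event E
                matches += 1
                wins += (car == 1)
        return wins / matches

    print(prob_car_behind_door1_given_E(True))    # approximately 1/3: switching wins two thirds of the time
    print(prob_car_behind_door1_given_E(False))   # approximately 1: switching never wins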

Finally, we consider another famous probability puzzle, the Sleeping Beauty problem. Again we quote the problem as formulated on the Wikipedia page:

Problem 2 Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:
  • If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  • If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
  • In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”

Here the situation can be confusing because there are key portions of this experiment in which the observer is unconscious, but nevertheless Bayesian probability continues to operate regardless of whether the observer is conscious. To make this issue more precise, let us assume that the awakenings mentioned in the problem always occur at 8am, so in particular at 7am, Sleeping Beauty will always be unconscious.

Here, the null and alternative hypotheses are easy to state precisely:

  • Null hypothesis {H_0}: The coin landed tails.
  • Alternative hypothesis {H_1}: The coin landed heads.

The subtle thing here is to work out what the correct prior state is (in most other applications of Bayesian probability, this state is obvious from the problem). It turns out that the most reasonable choice of prior state is “unconscious at 7am, on either Monday or Tuesday, with an equal chance of each”. (Note that whatever the outcome of the coin flip is, Sleeping Beauty will be unconscious at 7am Monday and unconscious again at 7am Tuesday, so it makes sense to give each of these two states an equal probability.) The new information is then

  • New information {E}: One hour after the prior state, Sleeping Beauty is awakened.

With this formulation, we see that {\mathop{\bf P}(H_0)=\mathop{\bf P}(H_1)=1/2}, {\mathop{\bf P}(E|H_0)=1}, and {\mathop{\bf P}(E|H_1)=1/2}, so on working through the worksheet one eventually arrives at {\mathop{\bf P}(H_1|E)=1/3}, so that Sleeping Beauty should only assign a probability of {1/3} to the event that the coin landed heads.
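
As a sanity check of this framing, here is a short simulation sketch; the main modelling assumption (as discussed above) is that the prior state samples the day, Monday or Tuesday, uniformly and independently of the fair coin:

    import random

    def sleeping_beauty(trials=10**5):
        awakened = heads_and_awakened = 0
        for _ in range(trials):
            heads = random.random() < 0.5                 # the fair coin toss
            day = random.choice(["Monday", "Tuesday"])    # the 7am prior state
            awake = (day == "Monday") or (not heads)      # heads: awakened Monday only; tails: both days
            if awake:                                     # condition on the new information E
                awakened += 1
                heads_and_awakened += heads
        return heads_and_awakened / awakened

    print(sleeping_beauty())   # approximately 1/3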

There are arguments advanced in the literature to adopt the position that {\mathop{\bf P}(H_1|E)} should instead be equal to {1/2}, but I do not see a way to interpret them in this Bayesian framework without a substantial alteration either to the notion of the prior state, or to the way the new information {E} is presented.

If one has multiple pieces of information {E_1, E_2, \dots} that one wishes to use to update one’s priors, one can do so by filling out one copy of the worksheet for each new piece of information, or by using a multi-row version of the worksheet using such identities as

\displaystyle  \frac{\mathop{\bf P}( H_1|E_1,E_2 )}{\mathop{\bf P}(H_0|E_1,E_2)} = \frac{\mathop{\bf P}(H_1)}{\mathop{\bf P}(H_0)} \times \frac{\mathop{\bf P}(E_1|H_1)}{\mathop{\bf P}(E_1|H_0)} \times \frac{\mathop{\bf P}(E_2|H_1,E_1)}{\mathop{\bf P}(E_2|H_0,E_1)}.
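
For instance, a multi-row update simply multiplies the successive likelihood ratios together, as in the following sketch (the example of two repeated test results is hypothetical, and assumes for illustration that the two results are conditionally independent given each hypothesis):

    def chained_posterior_odds(prior_odds, likelihood_pairs):
        """Multiply the prior odds by P(E_i|H_1, earlier info) / P(E_i|H_0, earlier info) for each piece of information."""
        odds = prior_odds
        for p_given_H1, p_given_H0 in likelihood_pairs:
            odds *= p_given_H1 / p_given_H0
        return odds

    # Two positive results from the cheap COVID test of the earlier example, treated as independent:
    odds = chained_posterior_odds(0.02 / 0.98, [(0.8, 0.05), (0.8, 0.05)])
    print(odds / (1 + odds))   # approximately 0.84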

We leave the details of these variants of the Bayesian update problem to the interested reader. The only thing I will note though is that if a key piece of information {E} is withheld from the person filling out the worksheet, for instance if that person relies exclusively on a news source that only reports information that supports the alternative hypothesis {H_1} and omits information that debunks it, then the outcome of the worksheet is likely to be highly inaccurate, and one should only perform a Bayesian analysis when one has a high confidence that all relevant information (both favorable and unfavorable to the alternative hypothesis) is being reported to the user.

An unusual lottery result made the news recently: on October 1, 2022, the PCSO Grand Lotto in the Philippines, which draws six numbers from {1} to {55} at random, managed to draw the numbers {9, 18, 27, 36, 45, 54} (though the balls were actually drawn in the order {9, 45, 36, 27, 18, 54}). In other words, they drew exactly six multiples of nine from {1} to {55}. In addition, a total of {433} tickets were bought with this winning combination, whose owners then had to split the {236} million peso jackpot (about {4} million USD) among themselves. This raised enough suspicion that there were calls for an inquiry into the Philippine lottery system, including from the minority leader of the Senate.

Whenever an event like this happens, journalists often contact mathematicians to ask the question: “What are the odds of this happening?”, and in fact I myself received one such inquiry this time around. This is a number that is not too difficult to compute – in this case, the probability of the lottery producing the six numbers {9, 18, 27, 36, 45, 54} in some order turns out to be {1} in {\binom{55}{6} = 28,989,675} – and such a number is often dutifully provided to such journalists, who in turn report it as some sort of quantitative demonstration of how remarkable the event was.

But on the previous draw of the same lottery, on September 28, 2022, the unremarkable sequence of numbers {11, 26, 33, 45, 51, 55} was drawn (again in a different order), and no tickets ended up claiming the jackpot. The probability of the lottery producing the six numbers {11, 26, 33, 45, 51, 55} is also {1} in {\binom{55}{6} = 28,989,675} – just as likely or as unlikely as the October 1 numbers {9, 18, 27, 36, 45, 54}. Indeed, the whole point of drawing the numbers randomly is to make each of the {28,989,675} possible outcomes (whether they be “unusual” or “unremarkable”) equally likely. So why is it that the October 1 lottery attracted so much attention, but the September 28 lottery did not?

Part of the explanation surely lies in the unusually large number ({433}) of lottery winners on October 1, but I will set that aspect of the story aside until the end of this post. The more general points that I want to make with these sorts of situations are:

  1. The question “what are the odds of this happening?” is often easy to answer mathematically, but it is not the correct question to ask.
  2. The question “what is the probability that an alternative hypothesis is the truth” is (one of) the correct questions to ask, but is very difficult to answer (it involves both mathematical and non-mathematical considerations).
  3. The answer to the first question is one of the quantities needed to calculate the answer to the second, but it is far from the only such quantity. Most of the other quantities involved cannot be calculated exactly.
  4. However, by making some educated guesses, one can still sometimes get a very rough gauge of which events are “more surprising” than others, in that they would lead to relatively higher answers to the second question.

To explain these points it is convenient to adopt the framework of Bayesian probability. In this framework, one imagines that there are competing hypotheses to explain the world, and that one assigns a probability to each such hypothesis representing one’s belief in the truth of that hypothesis. For simplicity, let us assume that there are just two competing hypotheses to be entertained: the null hypothesis {H_0}, and an alternative hypothesis {H_1}. For instance, in our lottery example, the two hypotheses might be:

  • Null hypothesis {H_0}: The lottery is run in a completely fair and random fashion.
  • Alternative hypothesis {H_1}: The lottery is rigged by some corrupt officials for their personal gain.

At any given point in time, a person would have a probability {{\bf P}(H_0)} assigned to the null hypothesis, and a probability {{\bf P}(H_1)} assigned to the alternative hypothesis; in this simplified model where there are only two hypotheses under consideration, these probabilities must add to one, but of course if there were additional hypotheses beyond these two then this would no longer be the case.

Bayesian probability does not provide a rule for calculating the initial (or prior) probabilities {{\bf P}(H_0)}, {{\bf P}(H_1)} that one starts with; these may depend on the subjective experiences and biases of the person considering the hypothesis. For instance, one person might have quite a bit of prior faith in the lottery system, and assign the probabilities {{\bf P}(H_0) = 0.99} and {{\bf P}(H_1) = 0.01}. Another person might have quite a bit of prior cynicism, and perhaps assign {{\bf P}(H_0)=0.5} and {{\bf P}(H_1)=0.5}. One cannot use purely mathematical arguments to determine which of these two people is “correct” (or whether they are both “wrong”); it depends on subjective factors.

What Bayesian probability does do, however, is provide a rule to update these probabilities {{\bf P}(H_0)}, {{\bf P}(H_1)} in view of new information {E} to provide posterior probabilities {{\bf P}(H_0|E)}, {{\bf P}(H_1|E)}. In our example, the new information {E} would be the fact that the October 1 lottery numbers were {9, 18, 27, 36, 45, 54} (in some order). The update is given by the famous Bayes theorem

\displaystyle  {\bf P}(H_0|E) = \frac{{\bf P}(E|H_0) {\bf P}(H_0)}{{\bf P}(E)}; \quad {\bf P}(H_1|E) = \frac{{\bf P}(E|H_1) {\bf P}(H_1)}{{\bf P}(E)},

where {{\bf P}(E|H_0)} is the probability that the event {E} would have occurred under the null hypothesis {H_0}, and {{\bf P}(E|H_1)} is the probability that the event {E} would have occurred under the alternative hypothesis {H_1}. Let us divide the second equation by the first to cancel the {{\bf P}(E)} denominator, and obtain

\displaystyle  \frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) } = \frac{ {\bf P}(H_1) }{ {\bf P}(H_0) } \times \frac{ {\bf P}(E | H_1)}{{\bf P}(E | H_0)}. \ \ \ \ \ (1)

One can interpret {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} as the prior odds of the alternative hypothesis, and {\frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) } } as the posterior odds of the alternative hypothesis. The identity (1) then says that in order to compute the posterior odds {\frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) }} of the alternative hypothesis in light of the new information {E}, one needs to know three things:
  1. The prior odds {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} of the alternative hypothesis;
  2. The probability {\mathop{\bf P}(E|H_0)} that the event {E} occurs under the null hypothesis {H_0}; and
  3. The probability {\mathop{\bf P}(E|H_1)} that the event {E} occurs under the alternative hypothesis {H_1}.

As previously discussed, the prior odds {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} of the alternative hypothesis are subjective and vary from person to person; in the example earlier, the person with substantial faith in the lottery may only give prior odds of {\frac{0.01}{0.99} \approx 0.01} (99 to 1 against) of the alternative hypothesis, whereas the cynic might give odds of {\frac{0.5}{0.5}=1} (even odds). The probability {{\bf P}(E|H_0)} is the quantity that can often be calculated by straightforward mathematics; as discussed before, in this specific example we have

\displaystyle  \mathop{\bf P}(E|H_0) = \frac{1}{\binom{55}{6}} = \frac{1}{28,989,675}.

But this still leaves one crucial quantity that is unknown: the probability {{\bf P}(E|H_1)}. This is incredibly difficult to compute, because it requires a precise theory for how events would play out under the alternative hypothesis {H_1}, and in particular is very sensitive as to what the alternative hypothesis {H_1} actually is.

For instance, suppose we replace the alternative hypothesis {H_1} by the following very specific (and somewhat bizarre) hypothesis:

  • Alternative hypothesis {H'_1}: The lottery is rigged by a cult that worships the multiples of {9}, and views October 1 as their holiest day. On this day, they will manipulate the lottery to only select those balls that are multiples of {9}.

Under this alternative hypothesis {H'_1}, we have {{\bf P}(E|H'_1)=1}. So, when {E} happens, the odds of this alternative hypothesis {H'_1} will increase by the dramatic factor of {\frac{{\bf P}(E|H'_1)}{{\bf P}(E|H_0)} = 28,989,675}. So, for instance, someone who already was entertaining odds of {\frac{0.01}{0.99}} of this hypothesis {H'_1} would now have these odds multiply dramatically to {\frac{0.01}{0.99} \times 28,989,675 \approx 290,000}, so that the probability of {H'_1} would have jumped from a mere {1\%} to a staggering {99.9997\%}. This is about as strong a shift in belief as one could imagine. However, this hypothesis {H'_1} is so specific and bizarre that one’s prior odds of this hypothesis would be nowhere near as large as {\frac{0.01}{0.99}} (unless substantial prior evidence of this cult and its hold on the lottery system existed, of course). A more realistic prior odds for {H'_1} would be something like {\frac{10^{-10^{10}}}{1-10^{-10^{10}}}} – which is so minuscule that even multiplying it by a factor such as {28,989,675} barely moves the needle.
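
As a quick numerical check of these figures (a sketch only; the prior odds of {0.01/0.99} are the same illustrative ones used above):

    from math import comb

    outcomes = comb(55, 6)                    # 28,989,675 equally likely draws
    prior_odds = 0.01 / 0.99                  # illustrative prior odds for H'_1
    posterior_odds = prior_odds * outcomes    # since P(E|H'_1) = 1 and P(E|H_0) = 1/outcomes
    print(outcomes)                                   # 28989675
    print(posterior_odds / (1 + posterior_odds))      # approximately 0.999997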

Remark 1 The contrast between alternative hypothesis {H_1} and alternative hypothesis {H'_1} illustrates a common demagogical rhetorical technique when an advocate is trying to convince an audience of an alternative hypothesis, namely to use suggestive language (“I’m just asking questions here”) rather than precise statements in order to leave the alternative hypothesis deliberately vague. In particular, the advocate may take advantage of the freedom to use a broad formulation of the hypothesis (such as {H_1}) in order to maximize the audience’s prior odds of the hypothesis, simultaneously with a very specific formulation of the hypothesis (such as {H'_1}) in order to maximize the probability of the actual event {E} occurring under this hypothesis. (A related technique is to be deliberately vague about the hypothesized competency of some suspicious actor, so that this actor could be portrayed as being extraordinarily competent when convenient to do so, while simultaneously being portrayed as extraordinarily incompetent when that instead is the more useful hypothesis.) This can lead to wildly inaccurate Bayesian updates of this vague alternative hypothesis, and so precise formulation of such a hypothesis is important if one is to approach a topic from anything remotely resembling a scientific approach. [EDIT: as pointed out to me by a reader, this technique is a Bayesian analogue of the motte and bailey fallacy.]

At the opposite extreme, consider instead the following hypothesis:

  • Alternative hypothesis {H''_1}: The lottery is rigged by some corrupt officials, who on October 1 decide to randomly determine the winning numbers in advance, share these numbers with their collaborators, and then manipulate the lottery to choose those numbers that they selected.

If these corrupt officials are indeed choosing their predetermined winning numbers randomly, then the probability {{\bf P}(E|H''_1)} would in fact be just the same probability {\frac{1}{\binom{55}{6}} = \frac{1}{28,989,675}} as {{\bf P}(E|H_0)}, and in this case the seemingly unusual event {E} would in fact have no effect on the odds of the alternative hypothesis, because it was just as unlikely for the alternative hypothesis to generate this multiples-of-nine pattern as for the null hypothesis to. In fact, one would imagine that these corrupt officials would avoid “suspicious” numbers, such as the multiples of {9}, and only choose numbers that look random, in which case {{\bf P}(E|H''_1)} would in fact be less than {{\bf P}(E|H_0)} and so the event {E} would actually lower the odds of the alternative hypothesis in this case. (In fact, one can sometimes use this tendency of fraudsters to not generate truly random data as a statistical tool to detect such fraud; violations of Benford’s law for instance can be used in this fashion, though only in situations where the null hypothesis is expected to obey Benford’s law, as discussed in this previous blog post.)

Now let us consider a third alternative hypothesis:

  • Alternative hypothesis {H'''_1}: On October 1, the lottery machine developed a fault and now only selects numbers that exhibit unusual patterns.

Setting aside the question of precisely what faulty mechanism could induce this sort of effect, it is not clear at all how to compute {{\bf P}(E|H'''_1)} in this case. Using the principle of indifference as a crude rule of thumb, one might expect

\displaystyle  {\bf P}(E|H'''_1) \approx \frac{1}{\# \{ \hbox{unusual patterns}\}}

where the denominator is the number of patterns among the possible {\binom{55}{6}} lottery outcomes that are “unusual”. Among such patterns would presumably be the multiples-of-9 pattern {9,18,27,36,45,54}, but one could easily come up with other patterns that are equally “unusual”, such as consecutive strings such as {11, 12, 13, 14, 15, 16}, or the first few primes {2, 3, 5, 7, 11, 13}, or the first few squares {1, 4, 9, 16, 25, 36}, and so forth. How many such unusual patterns are there? This is too vague a question to answer with any degree of precision, but as one illustrative statistic, the Online Encyclopedia of Integer Sequences (OEIS) currently hosts about {350,000} sequences. Not all of these would begin with six distinct numbers from {1} to {55}, and several of these sequences might generate the same set of six numbers, but this does suggests that patterns that one would deem to be “unusual” could number in the thousands, tens of thousands, or more. Using this guess, we would then expect the event {E} to boost the odds of this hypothesis {H'''_1} by perhaps a thousandfold or so, which is moderately impressive. But subsequent information can counteract this effect. For instance, on October 3, the same lottery produced the numbers {8, 10, 12, 14, 26, 51}, which exhibit no unusual properties (no search results in the OEIS, for instance); if we denote this event by {E'}, then we have {{\bf P}(E'|H'''_1) \approx 0} and so this new information {E'} should drive the odds for this alternative hypothesis {H'''_1} way down again.

Remark 2 This example demonstrates another demagogical rhetorical technique that one sometimes sees (particularly in political or other emotionally charged contexts), which is to cherry-pick the information presented to their audience by informing them of events {E} which have a relatively high probability of occurring under their alternative hypothesis, but withholding information about other relevant events {E'} that have a relatively low probability of occurring under their alternative hypothesis. When confronted with such new information {E'}, a common defense of a demagogue is to modify the alternative hypothesis {H_1} to a more specific hypothesis {H'_1} that can “explain” this information {E'} (“Oh, clearly we heard about {E'} because the conspiracy in fact extends to the additional organizations {X, Y, Z} that reported {E'}“), taking advantage of the vagueness discussed in Remark 1.

Let us consider a superficially similar hypothesis:

  • Alternative hypothesis {H''''_1}: On October 1, a divine being decided to send a sign to humanity by placing an unusual pattern in a lottery.

Here we (literally) stay agnostic on the prior odds of this hypothesis, and do not address the theological question of why a divine being should choose to use the medium of a lottery to send their signs. At first glance, the probability {{\bf P}(E|H''''_1)} here should be similar to the probability {{\bf P}(E|H'''_1)}, and so perhaps one could use this event {E} to improve the odds of the existence of a divine being by a factor of a thousand or so. But note carefully that the hypothesis {H''''_1} did not specify which lottery the divine being chose to use. The PCSO Grand Lotto is just one of a dozen lotteries run by the Philippine Charity Sweepstakes Office (PCSO), and of course there are over a hundred other countries and thousands of states within these countries, each of which often run their own lotteries. Taking into account these thousands or tens of thousands of additional lotteries to choose from, the probability {{\bf P}(E|H''''_1)} now drops by several orders of magnitude, and is now basically comparable to the probability {{\bf P}(E|H_0)} coming from the null hypothesis. As such one does not expect the event {E} to have a significant impact on the odds of the hypothesis {H''''_1}, despite the small-looking nature {\frac{1}{28,989,675}} of the probability {{\bf P}(E|H_0)}.

In summary, we have failed to locate any alternative hypothesis {H_1} which

  1. Has some non-negligible prior odds of being true (and in particular is not excessively specific, as with hypothesis {H'_1});
  2. Has a significantly higher probability of producing the specific event {E} than the null hypothesis; AND
  3. Does not struggle to also produce other events {E'} that have since been observed.
One needs all three of these factors to be present in order to significantly weaken the plausibility of the null hypothesis {H_0}; in the absence of these three factors, a moderately small numerical value of {{\bf P}(E|H_0)}, such as {\frac{1}{28,989,675}} does not actually do much to affect this plausibility. In this case one needs to lay out a reasonably precise alternative hypothesis {H_1} and make some actual educated guesses towards the competing probability {{\bf P}(E|H_1)} before one can lead to further conclusions. However, if {{\bf P}(E|H_0)} is insanely small, e.g., less than {10^{-1000}}, then the possibility of a previously overlooked alternative hypothesis {H_1} becomes far more plausible; as per the famous quote of Arthur Conan Doyle’s Sherlock Holmes, “When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”

We now return to the fact that for this specific October 1 lottery, there were {433} tickets that managed to select the winning numbers. Let us call this event {F}. In view of this additional information, we should now consider the ratio of the probabilities {{\bf P}(E \& F|H_1)} and {{\bf P}(E \& F|H_0)}, rather than the ratio of the probabilities {{\bf P}(E|H_1)} and {{\bf P}(E|H_0)}. If we augment the null hypothesis to

  • Null hypothesis {H'_0}: The lottery is run in a completely fair and random fashion, and the purchasers of lottery tickets also select their numbers in a completely random fashion.

Then {{\bf P}(E \& F|H'_0)} is indeed of the “insanely improbable” category mentioned previously. I was not able to get official numbers on how many tickets are purchased per lottery, but let us say for sake of argument that it is 1 million (the conclusion will not be extremely sensitive to this choice). Then the expected number of tickets that would have the winning numbers would be

\displaystyle  \frac{1 \hbox{ million}}{28,989,675} \approx 0.03

(which is broadly consistent, by the way, with the jackpot being reached every {30} draws or so), and standard probability theory suggests that the number of winners should now follow a Poisson distribution with this mean {\lambda = 0.03}. The probability of obtaining {433} winners would now be

\displaystyle  {\bf P}(F|H'_0) = \frac{\lambda^{433} e^{-\lambda}}{433!} \approx 10^{-1600}

and of course {{\bf P}(E \& F|H'_0)} would be even smaller than this. So this clearly demands some sort of explanation. But in actuality, many purchasers of lottery tickets do not select their numbers completely randomly; they often have some “lucky” numbers (e.g., based on birthdays or other personally significant dates) that they prefer to use, or choose numbers according to a simple pattern rather than go to the trouble of trying to make them truly random. So if we modify the null hypothesis to

  • Null hypothesis {H''_0}: The lottery is run in a completely fair and random fashion, but a significant fraction of the purchasers of lottery tickets only select “unusual” numbers.

then it can now become quite plausible that a highly unusual set of numbers such as {9,18,27,36,45,54} could be selected by as many as {433} purchasers of tickets; for instance, if {10\%} of the 1 million ticket holders chose to select their numbers according to some sort of pattern, then only {0.4\%} of those holders would have to pick {9,18,27,36,45,54} in order for the event {F} to hold (given {E}), and this is not extremely implausible. Given that this reasonable version of the null hypothesis already gives a plausible explanation for {F}, there does not seem to be a pressing need to locate an alternate hypothesis {H_1} that gives some other explanation (cf. Occam’s razor). [UPDATE: Indeed, given the actual layout of the tickets of this lottery, the numbers {9,18,27,36,45,54} form a diagonal, and so all that is needed in order for the modified null hypothesis {H''_0} to explain the event {F} is to postulate that a significant fraction of ticket purchasers decided to lay out their numbers in a simple geometric pattern, such as a row or diagonal.]
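
For readers who wish to reproduce these figures, here is a short sketch (it makes the same working assumption of one million tickets sold as in the text, and works with logarithms to avoid numerical underflow):

    from math import comb, lgamma, log

    lam = 10**6 / comb(55, 6)     # expected number of winning tickets under H'_0, about 0.03
    k = 433
    log10_prob = (k * log(lam) - lam - lgamma(k + 1)) / log(10)   # log10 of the Poisson probability of 433 winners
    print(round(log10_prob))      # about -1590, i.e. a probability of the rough order 10^{-1600} quoted above

    # Under the modified null hypothesis H''_0: if 10% of the million ticket holders pick "patterned" numbers,
    # the fraction of those holders that would need to pick 9, 18, 27, 36, 45, 54 to produce 433 winners is
    print(433 / (0.1 * 10**6))    # 0.00433, i.e. about 0.4%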

Remark 3 In view of the above discussion, one can propose a systematic way to evaluate (in as objective a fashion as possible) rhetorical claims in which an advocate is presenting evidence to support some alternative hypothesis:
  1. State the null hypothesis {H_0} and the alternative hypothesis {H_1} as precisely as possible. In particular, avoid conflating an extremely broad hypothesis (such as the hypothesis {H_1} in our running example) with an extremely specific one (such as {H'_1} in our example).
  2. With the hypotheses precisely stated, give an honest estimate to the prior odds of this formulation of the alternative hypothesis.
  3. Consider if all the relevant information {E} (or at least a representative sample thereof) has been presented to you before proceeding further. If not, consider gathering more information {E'} from further sources.
  4. Estimate how likely the information {E} was to have occurred under the null hypothesis.
  5. Estimate how likely the information {E} was to have occurred under the alternative hypothesis (using exactly the same wording of this hypothesis as you did in previous steps).
  6. If the second estimate is significantly larger than the first, then you have cause to update your prior odds of this hypothesis (though if those prior odds were already vanishingly unlikely, this may not move the needle significantly). If not, the argument is unconvincing and no significant adjustment to the odds (except perhaps in a downwards direction) needs to be made.

In everyday usage, we rely heavily on percentages to quantify probabilities and proportions: we might say that a prediction is {50\%} accurate or {80\%} accurate, that there is a {2\%} chance of dying from some disease, and so forth. However, for those without extensive mathematical training, it can sometimes be difficult to assess whether a given percentage amounts to a “good” or “bad” outcome, because this depends very much on the context of how the percentage is used. For instance:

  • (i) In a two-party election, an outcome of say {51\%} to {49\%} might be considered close, but {55\%} to {45\%} would probably be viewed as a convincing mandate, and {60\%} to {40\%} would likely be viewed as a landslide.
  • (ii) Similarly, if one were to poll an upcoming election, a poll of {51\%} to {49\%} would be too close to call, {55\%} to {45\%} would be an extremely favorable result for the candidate, and {60\%} to {40\%} would mean that it would be a major upset if the candidate lost the election.
  • (iii) On the other hand, a medical operation that only had a {51\%}, {55\%}, or {60\%} chance of success would be viewed as being incredibly risky, especially if failure meant death or permanent injury to the patient. Even an operation that was {90\%} or {95\%} likely to be non-fatal (i.e., a {10\%} or {5\%} chance of death) would not be conducted lightly.
  • (iv) A weather prediction of, say, {30\%} chance of rain during a vacation trip might be sufficient cause to pack an umbrella, even though it is more likely than not that rain would not occur. On the other hand, if the prediction was for an {80\%} chance of rain, and it ended up that the skies remained clear, this does not seriously damage the accuracy of the prediction – indeed, such an outcome would be expected in one out of every five such predictions.
  • (v) Even extremely tiny percentages of toxic chemicals in everyday products can be considered unacceptable. For instance, EPA rules require action to be taken when the percentage of lead in drinking water exceeds {0.0000015\%} (15 parts per billion). At the opposite extreme, recycling contamination rates as high as {10\%} are often considered acceptable.

Because of all the very different ways in which percentages could be used, I think it may make sense to propose an alternate system of units to measure one class of probabilities, namely the probabilities of avoiding some highly undesirable outcome, such as death, accident or illness. The units I propose are that of “nines“, which are already commonly used to measure availability of some service or purity of a material, but can be equally used to measure the safety (i.e., lack of risk) of some activity. Informally, nines measure how many consecutive appearances of the digit {9} are in the probability of successfully avoiding the negative outcome, thus

  • {90\%} success = one nine of safety
  • {99\%} success = two nines of safety
  • {99.9\%} success = three nines of safety
and so forth. Using the mathematical device of logarithms, one can also assign a fractional number of nines of safety to a general probability:

Definition 1 (Nines of safety) An activity (affecting one or more persons, over some given period of time) that has a probability {p} of the “safe” outcome and probability {1-p} of the “unsafe” outcome will have {k} nines of safety against the unsafe outcome, where {k} is defined by the formula

\displaystyle  k = -\log_{10}(1-p) \ \ \ \ \ (1)

(where {\log_{10}} is the logarithm to base ten), or equivalently

\displaystyle  p = 1 - 10^{-k}. \ \ \ \ \ (2)
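
In code, this conversion is a one-line formula in each direction; here is a minimal sketch (the function names are my own):

    import math

    def nines_of_safety(p):
        """Number of nines of safety for a probability p of the safe outcome (p must be strictly less than 1)."""
        return -math.log10(1 - p)

    def safe_probability(k):
        """Probability of the safe outcome corresponding to k nines of safety."""
        return 1 - 10**(-k)

    print(round(nines_of_safety(0.98), 1))   # 1.7, matching the conversion table below
    print(safe_probability(3))               # 0.999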

Remark 2 Because of the various uncertainties in measuring probabilities, as well as the inaccuracies in some of the assumptions and approximations we will be making later, we will not attempt to measure the number of nines of safety beyond the first decimal point; thus we will round to the nearest tenth of a nine of safety throughout this post.

Here is a conversion table between percentage rates of success (the safe outcome), failure (the unsafe outcome), and the number of nines of safety one has:

Success rate {p} Failure rate {1-p} Number of nines {k}
{0\%} {100\%} {0.0}
{50\%} {50\%} {0.3}
{75\%} {25\%} {0.6}
{80\%} {20\%} {0.7}
{90\%} {10\%} {1.0}
{95\%} {5\%} {1.3}
{97.5\%} {2.5\%} {1.6}
{98\%} {2\%} {1.7}
{99\%} {1\%} {2.0}
{99.5\%} {0.5\%} {2.3}
{99.75\%} {0.25\%} {2.6}
{99.8\%} {0.2\%} {2.7}
{99.9\%} {0.1\%} {3.0}
{99.95\%} {0.05\%} {3.3}
{99.975\%} {0.025\%} {3.6}
{99.98\%} {0.02\%} {3.7}
{99.99\%} {0.01\%} {4.0}
{100\%} {0\%} infinite

Thus, if one has no nines of safety whatsoever, one is guaranteed to fail; but each nine of safety one has reduces the failure rate by a factor of {10}. In an ideal world, one would have infinitely many nines of safety against any risk, but in practice there are no {100\%} guarantees against failure, and so one can only expect a finite amount of nines of safety in any given situation. Realistically, one should thus aim to have as many nines of safety as one can reasonably expect to have, but not to demand an infinite amount.

Remark 3 The number of nines of safety against a certain risk is not absolute; it will depend not only on the risk itself, but (a) the number of people exposed to the risk, and (b) the length of time one is exposed to the risk. Exposing more people or increasing the duration of exposure will reduce the number of nines, and conversely exposing fewer people or reducing the duration will increase the number of nines; see Proposition 7 below for a rough rule of thumb in this regard.

Remark 4 Nines of safety are a logarithmic scale of measurement, rather than a linear scale. Other familiar examples of logarithmic scales of measurement include the Richter scale of earthquake magnitude, the pH scale of acidity, the decibel scale of sound level, octaves in music, and the magnitude scale for stars.

Remark 5 One way to think about nines of safety is via the Swiss cheese model that was created recently to describe pandemic risk management. In this model, each nine of safety can be thought of as a slice of Swiss cheese, with holes occupying {10\%} of that slice. Having {k} nines of safety is then analogous to standing behind {k} such slices of Swiss cheese. In order for a risk to actually impact you, it must pass through each of these {k} slices. A fractional nine of safety corresponds to a fractional slice of Swiss cheese that covers the amount of space given by the above table. For instance, {0.6} nines of safety corresponds to a fractional slice that covers about {75\%} of the given area (leaving {25\%} uncovered).

Now to give some real-world examples of nines of safety. Using data for deaths in the US in 2019 (without attempting to account for factors such as age and gender), a random US citizen will have had the following amount of safety from dying from some selected causes in that year:

Cause of death Mortality rate per {100,\! 000} (approx.) Nines of safety
All causes {870} {2.0}
Heart disease {200} {2.7}
Cancer {180} {2.7}
Accidents {52} {3.3}
Drug overdose {22} {3.7}
Influenza/Pneumonia {15} {3.8}
Suicide {14} {3.8}
Gun violence {12} {3.9}
Car accident {11} {4.0}
Murder {5} {4.3}
Airplane crash {0.14} {5.9}
Lightning strike {0.006} {7.2}

The safety of air travel is particularly remarkable: a given hour of flying in general aviation has a fatality rate of {0.00001}, or about {5} nines of safety, while for the major carriers the fatality rate drops down to {0.0000005}, or about {6.3} nines of safety.

Of course, in 2020, COVID-19 deaths became significant. In this year in the US, the mortality rate for COVID-19 (as the underlying or contributing cause of death) was {91.5} per {100,\! 000}, corresponding to {3.0} nines of safety, which was less safe than all other causes of death except for heart disease and cancer. At this time of writing, data for all of 2021 is of course not yet available, but it seems likely that the safety level would be even lower for this year.

Some further illustrations of the concept of nines of safety:

  • Each round of Russian roulette has a success rate of {5/6}, providing only {0.8} nines of safety. Of course, the safety will decrease with each additional round: one has only {0.5} nines of safety after two rounds, {0.4} nines after three rounds, and so forth. (See also Proposition 7 below.)
  • The ancient Roman punishment of decimation, by definition, provided exactly one nine of safety to each soldier being punished.
  • Rolling a {1} on a {20}-sided die is a risk that carries about {1.3} nines of safety.
  • Rolling a double one (“snake eyes“) from two six-sided dice carries about {1.6} nines of safety.
  • One has about {2.6} nines of safety against the risk of someone randomly guessing your birthday on the first attempt.
  • A null hypothesis has {1.3} nines of safety against producing a {p = 0.05} statistically significant result, and {2.0} nines against producing a {p=0.01} statistically significant result. (However, one has to be careful when reversing the conditional; a {p=0.01} statistically significant result does not necessarily have {2.0} nines of safety against the null hypothesis. In Bayesian statistics, the precise relationship between the two risks is given by Bayes’ theorem.)
  • If a poker opponent is dealt a five-card hand, one has {5.8} nines of safety against that opponent being dealt a royal flush, {4.8} against a straight flush or higher, {3.6} against four-of-a-kind or higher, {2.8} against a full house or higher, {2.4} against a flush or higher, {2.1} against a straight or higher, {1.5} against three-of-a-kind or higher, {1.1} against two pairs or higher, and just {0.3} against one pair or higher. (This data was converted from this Wikipedia table.)
  • A {k}-digit PIN number (or a {k}-digit combination lock) carries {k} nines of safety against each attempt to randomly guess the PIN. A length {k} password that allows for numbers, upper and lower case letters, and punctuation carries about {2k} nines of safety against a single guess. (For the reduction in safety caused by multiple guesses, see Proposition 7 below.)
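
Several of the figures in this list (for instance the twenty-sided die, snake eyes, the birthday guess, and a four-digit PIN) can be checked directly from Definition 1:

    import math

    for p in (19/20, 35/36, 364/365, 1 - 10**-4):
        print(round(-math.log10(1 - p), 1))   # prints 1.3, 1.6, 2.6, 4.0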

Here is another way to think about nines of safety:

Proposition 6 (Nines of safety extend expected onset of risk) Suppose a certain risky activity has {k} nines of safety. If one repeatedly indulges in this activity until the risk occurs, then the expected number of trials before the risk occurs is {10^k}.

Proof: The probability that the risk is activated after exactly {n} trials is {(1-10^{-k})^{n-1} 10^{-k}}, which is a geometric distribution of parameter {10^{-k}}. The claim then follows from the standard properties of that distribution. \Box

Thus, for instance, if one performs some risky activity daily, then the expected length of time before the risk occurs is given by the following table:

Daily nines of safety Expected onset of risk
{0} One day
{0.8} One week
{1.5} One month
{2.6} One year
{2.9} Two years
{3.3} Five years
{3.6} Ten years
{3.9} Twenty years
{4.3} Fifty years
{4.6} A century

Or, if one wants to convert the yearly risks of dying from a specific cause into expected years before that cause of death would occur (assuming for sake of discussion that no other cause of death exists):

Yearly nines of safety Expected onset of risk
{0} One year
{0.3} Two years
{0.7} Five years
{1} Ten years
{1.3} Twenty years
{1.7} Fifty years
{2.0} A century

These tables suggest a relationship between the amount of safety one would have in a short timeframe, such as a day, and a longer time frame, such as a year. Here is an approximate formalisation of that relationship:

Proposition 7 (Repeated exposure reduces nines of safety) If a risky activity with {k} nines of safety is (independently) repeated {m} times, then (assuming {k} is large enough depending on {m}), the repeated activity will have approximately {k - \log_{10} m} nines of safety. Conversely: if the repeated activity has {k'} nines of safety, the individual activity will have approximately {k' + \log_{10} m} nines of safety.

Proof: An activity with {k} nines of safety will be safe with probability {1-10^{-k}}, hence safe with probability {(1-10^{-k})^m} if repeated independently {m} times. For {k} large, we can approximate

\displaystyle  (1 - 10^{-k})^m \approx 1 - m 10^{-k} = 1 - 10^{-(k - \log_{10} m)}

giving the former claim. The latter claim follows from inverting the former. \Box
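
Here is a quick numerical comparison of the exact and approximate formulas in this proposition (the values of {k} and {m} are chosen purely for illustration):

    import math

    # Exact versus approximate nines of safety after m independent repetitions of an activity with k nines:
    k, m = 4.0, 30
    exact = -math.log10(1 - (1 - 10**-k)**m)
    approx = k - math.log10(m)
    print(round(exact, 2), round(approx, 2))   # both about 2.52: the approximation is accurate when k is large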

Remark 8 The hypothesis of independence here is key. If there is a lot of correlation between the risks between different repetitions of the activity, then there can be much less reduction in safety caused by that repetition. As a simple example, suppose that {90\%} of a workforce are trained to perform some task flawlessly no matter how many times they repeat the task, but the remaining {10\%} are untrained and will always fail at that task. If one selects a random worker and asks them to perform the task, one has {1.0} nines of safety against the task failing. If one took that same random worker and asked them to perform the task {m} times, the above proposition might suggest that the number of nines of safety would drop to approximately {1.0 - \log_{10} m}; but in this case there is perfect correlation, and in fact the number of nines of safety remains steady at {1.0} since it is the same {10\%} of the workforce that would fail each time.

Because of this caveat, one should view the above proposition as only a crude first approximation that can be used as a simple rule of thumb, but should not be relied upon for more precise calculations.

One can repeat a risk either in time (extending the time of exposure to the risk, say from a day to a year), or in space (by exposing the risk to more people). The above proposition then gives an additive conversion law for nines of safety in either case. Here are some conversion tables for time:

From/to Daily Weekly Monthly Yearly
Daily 0 -0.8 -1.5 -2.6
Weekly +0.8 0 -0.6 -1.7
Monthly +1.5 +0.6 0 -1.1
Yearly +2.6 +1.7 +1.1 0

From/to Yearly Per 5 yr Per decade Per century
Yearly 0 -0.7 -1.0 -2.0
Per 5 yr +0.7 0 -0.3 -1.3
Per decade +1.0 +0.3 0 -1.0
Per century +2.0 +1.3 +1.0 0

For instance, as mentioned before, the yearly amount of safety against cancer is about {2.7}. Using the above table (and making the somewhat unrealistic hypothesis of independence), we then predict the daily amount of safety against cancer to be about {2.7 + 2.6 = 5.3} nines, the weekly amount to be about {2.7 + 1.7 = 4.4} nines, and the amount of safety over five years to drop to about {2.7 - 0.7 = 2.0} nines.

Now we turn to conversions in space. If one knows the level of safety against a certain risk for an individual, and then one (independently) exposes a group of such individuals to that risk, then the reduction in nines of safety when considering the possibility that at least one group member experiences this risk is given by the following table:

Group Reduction in safety
You ({1} person) {0}
You and your partner ({2} people) {-0.3}
You and your parents ({3} people) {-0.5}
You, your partner, and three children ({5} people) {-0.7}
An extended family of {10} people {-1.0}
A class of {30} people {-1.5}
A workplace of {100} people {-2.0}
A school of {1,\! 000} people {-3.0}
A university of {10,\! 000} people {-4.0}
A town of {100,\! 000} people {-5.0}
A city of {1} million people {-6.0}
A state of {10} million people {-7.0}
A country of {100} million people {-8.0}
A continent of {1} billion people {-9.0}
The entire planet {-9.8}

For instance, in a given year (and making the somewhat implausible assumption of independence), you might have {2.7} nines of safety against cancer, but you and your partner collectively only have about {2.7 - 0.3 = 2.4} nines of safety against this risk, your family of five might only have about {2.7 - 0.7 = 2} nines of safety, and so forth. By the time one gets to a group of {1,\! 000} people, it actually becomes very likely that at least one member of the group will die of cancer in that year. (Here the precise conversion table breaks down, because a negative number of nines such as {2.7 - 3.0 = -0.3} is not possible, but one should interpret a prediction of a negative number of nines as an assertion that failure is very likely to happen. Also, in practice the reduction in safety is less than this rule predicts, due to correlations such as risk factors that are common to the group being considered that are incompatible with the assumption of independence.)

In the opposite direction, any reduction in exposure (either in time or space) to a risk will increase one’s safety level, as per the following table:

Reduction in exposure Additional nines of safety
{\div 1} {0}
{\div 2} {+0.3}
{\div 3} {+0.5}
{\div 5} {+0.7}
{\div 10} {+1.0}
{\div 100} {+2.0}

For instance, a five-fold reduction in exposure will reclaim about {0.7} additional nines of safety.

Here is a slightly different way to view nines of safety:

Proposition 9 Suppose that a group of {m} people are independently exposed to a given risk. If there are at most

\displaystyle  \log_{10} \frac{1}{1-2^{-1/m}}

nines of individual safety against that risk, then there is at least a {50\%} chance that one member of the group is affected by the risk.

Proof: If individually there are {k} nines of safety, then the probability that all the members of the group avoid the risk is {(1-10^{-k})^m}. Since the inequality

\displaystyle  (1-10^{-k})^m \leq \frac{1}{2}

is equivalent to

\displaystyle  k \leq \log_{10} \frac{1}{1-2^{-1/m}},

the claim follows. \Box

Thus, for a group to collectively avoid a risk with at least a {50\%} chance, one needs the following level of individual safety:

Group Individual safety level required
You ({1} person) {0.3}
You and your partner ({2} people) {0.5}
You and your parents ({3} people) {0.7}
You, your partner, and three children ({5} people) {0.9}
An extended family of {10} people {1.2}
A class of {30} people {1.6}
A workplace of {100} people {2.2}
A school of {1,\! 000} people {3.2}
A university of {10,\! 000} people {4.2}
A town of {100,\! 000} people {5.2}
A city of {1} million people {6.2}
A state of {10} million people {7.2}
A country of {100} million people {8.2}
A continent of {1} billion people {9.2}
The entire planet {10.0}

For large {m}, the level {k} of nines of individual safety required to protect a group of size {m} with probability at least {50\%} is approximately {\log_{10} \frac{m}{\ln 2} \approx (\log_{10} m) + 0.2}.

Precautions that can work to prevent a certain risk from occurring will add additional nines of safety against that risk, even if the precaution is not {100\%} effective. Here is the precise rule:

Proposition 10 (Precautions add nines of safety) Suppose an activity carries {k} nines of safety against a certain risk, and a separate precaution can independently protect against that risk with {l} nines of safety (that is to say, the probability that the protection is effective is {1 - 10^{-l}}). Then applying that precaution increases the number of nines in the activity from {k} to {k+l}.

Proof: The probability that the precaution fails and the risk then occurs is {10^{-l} \times 10^{-k} = 10^{-(k+l)}}. The claim now follows from Definition 1. \Box

In particular, we can repurpose the table at the start of this post as a conversion chart for effectiveness of a precaution:

Effectiveness Failure rate Additional nines provided
{0\%} {100\%} {+0.0}
{50\%} {50\%} {+0.3}
{75\%} {25\%} {+0.6}
{80\%} {20\%} {+0.7}
{90\%} {10\%} {+1.0}
{95\%} {5\%} {+1.3}
{97.5\%} {2.5\%} {+1.6}
{98\%} {2\%} {+1.7}
{99\%} {1\%} {+2.0}
{99.5\%} {0.5\%} {+2.3}
{99.75\%} {0.25\%} {+2.6}
{99.8\%} {0.2\%} {+2.7}
{99.9\%} {0.1\%} {+3.0}
{99.95\%} {0.05\%} {+3.3}
{99.975\%} {0.025\%} {+3.6}
{99.98\%} {0.02\%} {+3.7}
{99.99\%} {0.01\%} {+4.0}
{100\%} {0\%} infinite

Thus for instance a precaution that is {80\%} effective will add {0.7} nines of safety, a precaution that is {99.8\%} effective will add {2.7} nines of safety, and so forth. The mRNA COVID vaccines by Pfizer and Moderna have somewhere between {88\% - 96\%} effectiveness against symptomatic COVID illness, providing about {0.9-1.4} nines of safety against that risk, and over {95\%} effectiveness against severe illness, thus adding at least {1.3} nines of safety in this regard.

A slight variant of the above rule can be stated using the concept of relative risk:

Proposition 11 (Relative risk and nines of safety) Suppose an activity carries {k} nines of safety against a certain risk, and an action multiplies the chance of failure by some relative risk {R}. Then the action removes {\log_{10} R} nines of safety (if {R > 1}) or adds {-\log_{10} R} nines of safety (if {R<1}) to the original activity.

Proof: The additional action adjusts the probability of failure from {10^{-k}} to {R \times 10^{-k} = 10^{-(k - \log_{10} R)}}. The claim now follows from Definition 1. \Box

Here is a conversion chart between relative risk and change in nines of safety:

Relative risk Change in nines of safety
{0.01} {+2.0}
{0.02} {+1.7}
{0.05} {+1.3}
{0.1} {+1.0}
{0.2} {+0.7}
{0.5} {+0.3}
{1} {0}
{2} {-0.3}
{5} {-0.7}
{10} {-1.0}
{20} {-1.3}
{50} {-1.7}
{100} {-2.0}

Some examples:

  • Smoking increases the fatality rate of lung cancer by a factor of about {20}, thus removing about {1.3} nines of safety from this particular risk; it also increases the fatality rates of several other diseases, though not to quite as dramatic an extent.
  • Seatbelts reduce the fatality rate in car accidents by a factor of about two, adding about {0.3} nines of safety. Airbags achieve a reduction of about {30-50\%}, adding about {0.2-0.3} additional nines of safety.
  • As far as transmission of COVID is concerned, it seems that constant use of face masks reduces transmission by a factor of about five (thus adding about {0.7} nines of safety), and similarly for constant adherence to social distancing; whereas for instance a {30\%} compliance with mask usage reduces transmission by about {10\%} (adding only {0.05} or so nines of safety).

The effect of combining multiple (independent) precautions together is cumulative; one can achieve quite a high level of safety by stacking together several precautions that individually have relatively low levels of effectiveness. Again, see the “Swiss cheese model” referred to in Remark 5. For instance, if face masks add {0.7} nines of safety against contracting COVID, social distancing adds another {0.7} nines, and the vaccine provides another {1.0} nines of safety, implementing all three mitigation methods would (assuming independence) add a net of {2.4} nines of safety against contracting COVID.
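
In code, the stacking computation is just a sum of the individual contributions (a sketch only, assuming, as in the text, that the three mitigations act independently and taking the quoted figures at face value):

```python
import math

def nines_from_relative_risk(R):
    """Change in nines of safety from an action with relative risk R (Proposition 11)."""
    return -math.log10(R)

masks      = nines_from_relative_risk(1 / 5)  # five-fold reduction: ~0.7 nines
distancing = nines_from_relative_risk(1 / 5)  # five-fold reduction: ~0.7 nines
vaccine    = 1.0                              # taken here to be ~1.0 nine

print(f"combined mitigation: about +{masks + distancing + vaccine:.1f} nines")  # ~2.4
```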

In summary, when debating the value of a given risk mitigation measure, the correct question to ask is not quite “Is it certain to work?” or “Can it fail?”, but rather “How many extra nines of safety does it add?”.

As one final comparison between nines of safety and other standard risk measures, we give the following proposition regarding large deviations from the mean.

Proposition 12 Let {X} be a normally distributed random variable of standard deviation {\sigma}, and let {\lambda > 0}. Then the “one-sided risk” of {X} exceeding its mean {{\bf E} X} by at least {\lambda \sigma} (i.e., {X \geq {\bf E} X + \lambda \sigma}) carries

\displaystyle  -\log_{10} \frac{1 - \mathrm{erf}(\lambda/\sqrt{2})}{2}

nines of safety, the “two-sided risk” of {X} deviating (in either direction) from its mean by at least {\lambda \sigma} (i.e., {|X-{\bf E} X| \geq \lambda \sigma}) carries

\displaystyle  -\log_{10} (1 - \mathrm{erf}(\lambda/\sqrt{2}))

nines of safety, where {\mathrm{erf}} is the error function.

Proof: This is a routine calculation using the cumulative distribution function of the normal distribution. \Box

Here is a short table illustrating this proposition:

Number {\lambda} of deviations from the mean One-sided nines of safety Two-sided nines of safety
{0} {0.3} {0.0}
{1} {0.8} {0.5}
{2} {1.6} {1.3}
{3} {2.9} {2.6}
{4} {4.5} {4.2}
{5} {6.5} {6.2}
{6} {9.0} {8.7}

Thus, for instance, the risk of a five sigma event (deviating by more than five standard deviations from the mean in either direction) should carry {6.2} nines of safety assuming a normal distribution, and so one would ordinarily feel extremely safe against the possibility of such an event, unless one started doing hundreds of thousands of trials. (However, we caution that this conclusion relies heavily on the assumption that one has a normal distribution!)
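
The table above can be reproduced with a few lines of Python (a minimal sketch, using math.erfc, i.e. {\mathrm{erfc}(x) = 1 - \mathrm{erf}(x)}, to avoid cancellation for large {\lambda}):

```python
import math

def one_sided_nines(lam):
    """Nines of safety against X exceeding its mean by at least lam standard deviations."""
    return math.log10(2.0 / math.erfc(lam / math.sqrt(2)))

def two_sided_nines(lam):
    """Nines of safety against X deviating from its mean (in either direction) by at least lam sigma."""
    return math.log10(1.0 / math.erfc(lam / math.sqrt(2)))

for lam in range(7):
    print(f"{lam} sigma: one-sided {one_sided_nines(lam):.1f}, two-sided {two_sided_nines(lam):.1f}")
```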

See also this older essay I wrote on anonymity on the internet, using bits as a measure of anonymity in much the same way that nines are used here as a measure of safety.

After some discussion with the applied math research groups here at UCLA (in particular the groups led by Andrea Bertozzi and Deanna Needell), one of the members of these groups, Chris Strohmeier, has produced a proposal for a Polymath project to crowdsource in a single repository (a) a collection of public data sets relating to the COVID-19 pandemic, (b) requests for such data sets, (c) requests for data cleaning of such sets, and (d) submissions of cleaned data sets.  (The proposal can be viewed as a PDF, and is also available on Overleaf).  As mentioned in the proposal, this database would be slightly different in focus than existing data sets such as the COVID-19 data sets hosted on Kaggle, with a focus on producing high quality cleaned data sets.  (Another relevant data set that I am aware of is the SafeGraph aggregated foot traffic data, although this data set, while open, is not quite public as it requires a non-commercial agreement to execute.  Feel free to mention further relevant data sets in the comments.)

This seems like a very interesting and timely proposal to me and I would like to open it up for discussion, for instance by proposing some seed requests for data and data cleaning and to discuss possible platforms that such a repository could be built on.  In the spirit of “building the plane while flying it”, one could begin by creating a basic github repository as a prototype and use the comments in this blog post to handle requests, and then migrate to a more high quality platform once it becomes clear what direction this project might move in.  (For instance one might eventually move beyond data cleaning to more sophisticated types of data analysis.)

UPDATE, Mar 25: a prototype page for such a clearinghouse is now up at this wiki page.

UPDATE, Mar 27: the data cleaning aspect of this project largely duplicates the existing efforts at the United against COVID-19 project, so we are redirecting requests of this type to that project (and specifically to their data discourse page).  The polymath proposal will now refocus on crowdsourcing a list of public data sets relating to the COVID-19 pandemic.

 

At the most recent MSRI board of trustees meeting on Mar 7 (conducted online, naturally), Nicolas Jewell (a Professor of Biostatistics and Statistics at Berkeley, also affiliated with the Berkeley School of Public Health and the London School of Hygiene and Tropical Medicine), gave a presentation on the current coronavirus epidemic entitled “2019-2020 Novel Coronavirus outbreak: mathematics of epidemics, and what it can and cannot tell us”.  The presentation (updated with Mar 18 data), hosted by David Eisenbud (the director of MSRI), together with a question and answer session, is now on Youtube:

(I am on this board, but could not make it to this particular meeting; I caught up on the presentation later, and thought it would be of interest to several readers of this blog.)  While there is some mathematics in the presentation, it is relatively non-technical.

Note: the following is a record of some whimsical mathematical thoughts and computations I had after doing some grading. It is likely that the sort of problems discussed here are in fact well studied in the appropriate literature; I would appreciate knowing of any links to such.

Suppose one assigns {N} true-false questions on an examination, with the answers randomised so that each question is equally likely to have “true” as the correct answer as “false”, with no correlation between different questions. Suppose that the students taking the examination must answer each question with exactly one of “true” or “false” (they are not allowed to skip any question). Then it is easy to see how to grade the exam: one can simply count how many questions each student answered correctly (i.e. each correct answer scores one point, and each incorrect answer scores zero points), and give that number {k} as the final grade of the examination. More generally, one could assign some score of {A} points to each correct answer and some score (possibly negative) of {B} points to each incorrect answer, giving a total grade of {A k + B(N-k)} points. As long as {A > B}, this grade is simply an affine rescaling of the simple grading scheme {k} and would serve just as well for the purpose of evaluating the students, as well as encouraging each student to answer the questions as correctly as possible.

In practice, though, a student will probably not know the answer to each individual question with absolute certainty. One can adopt a probabilistic model, where for a given student {S} and a given question {n}, the student {S} may think that the answer to question {n} is true with probability {p_{S,n}} and false with probability {1-p_{S,n}}, where {0 \leq p_{S,n} \leq 1} is some quantity that can be viewed as a measure of confidence {S} has in the answer (with {S} being confident that the answer is true if {p_{S,n}} is close to {1}, and confident that the answer is false if {p_{S,n}} is close to {0}); for simplicity let us assume that in {S}‘s probabilistic model, the answers to each question are independent random variables. Given this model, and assuming that the student {S} wishes to maximise his or her expected grade on the exam, it is an easy matter to see that the optimal strategy for {S} to take is to answer question {n} true if {p_{S,n} > 1/2} and false if {p_{S,n} < 1/2}. (If {p_{S,n}=1/2}, the student {S} can answer arbitrarily.)

[Important note: here we are not using the term “confidence” in the technical sense used in statistics, but rather as an informal term for “subjective probability”.]

This is fine as far as it goes, but for the purposes of evaluating how well the student actually knows the material, it provides only a limited amount of information, in particular we do not get to directly see the student’s subjective probabilities {p_{S,n}} for each question. If for instance {S} answered {7} out of {10} questions correctly, was it because he or she actually knew the right answer for seven of the questions, or was it because he or she was making educated guesses for the ten questions that turned out to be slightly better than random chance? There seems to be no way to discern this if the only input the student is allowed to provide for each question is the single binary choice of true/false.

But what if the student were able to give probabilistic answers to any given question? That is to say, instead of being forced to answer just “true” or “false” for a given question {n}, the student was allowed to give answers such as “{60\%} confident that the answer is true” (and hence {40\%} confidence the answer is false). Such answers would give more insight as to how well the student actually knew the material; in particular, we would theoretically be able to actually see the student’s subjective probabilities {p_{S,n}}.

But now it becomes less clear what the right grading scheme to pick is. Suppose for instance we wish to extend the simple grading scheme in which a correct answer given with {100\%} confidence is awarded one point. How many points should one award a correct answer given with {60\%} confidence? How about an incorrect answer given with {60\%} confidence (or equivalently, a correct answer given with {40\%} confidence)?

Mathematically, one could design a grading scheme by selecting some grading function {f: [0,1] \rightarrow {\bf R}} and then awarding a student {f(p)} points whenever they indicate the correct answer with a confidence of {p}. For instance, if the student was {60\%} confident that the answer was “true” (and hence {40\%} confident that the answer was “false”), then this grading scheme would award the student {f(0.6)} points if the correct answer actually was “true”, and {f(0.4)} points if the correct answer actually was “false”. One can then ask the question of what functions {f} would be “best” for this scheme?

Intuitively, one would expect that {f} should be monotone increasing – one should be rewarded more for being correct with high confidence, than correct with low confidence. On the other hand, some sort of “partial credit” should still be assigned in the latter case. One obvious proposal is to just use a linear grading function {f(p) = p} – thus for instance a correct answer given with {60\%} confidence might be worth {0.6} points. But is this the “best” option?

To make the problem more mathematically precise, one needs an objective criterion with which to evaluate a given grading scheme. One criterion that one could use here is the avoidance of perverse incentives. If a grading scheme is designed badly, a student may end up overstating or understating his or her confidence in an answer in order to optimise the (expected) grade: the optimal level of confidence {q_{S,n}} for a student {S} to report on a question may differ from that student’s subjective confidence {p_{S,n}}. So one could ask to design a scheme so that {q_{S,n}} is always equal to {p_{S,n}}, so that the incentive is for the student to honestly report his or her confidence level in the answer.

This turns out to give a precise constraint on the grading function {f}. If a student {S} thinks that the answer to a question {n} is true with probability {p_{S,n}} and false with probability {1-p_{S,n}}, and enters in an answer of “true” with confidence {q_{S,n}} (and thus “false” with confidence {1-q_{S,n}}), then the student would expect a grade of

\displaystyle p_{S,n} f( q_{S,n} ) + (1-p_{S,n}) f(1 - q_{S,n})

on average for this question. To maximise this expected grade (assuming differentiability of {f}, which is a reasonable hypothesis for a partial credit grading scheme), one performs the usual manoeuvre of differentiating in the independent variable {q_{S,n}} and setting the result to zero, thus obtaining

\displaystyle p_{S,n} f'( q_{S,n} ) - (1-p_{S,n}) f'(1 - q_{S,n}) = 0.

In order to avoid perverse incentives, the maximum should occur at {q_{S,n} = p_{S,n}}, thus we should have

\displaystyle p f'(p) - (1-p) f'(1-p) = 0

for all {0 \leq p \leq 1}. This suggests that the function {p \mapsto p f'(p)} should be constant. (Strictly speaking, it only gives the weaker constraint that {p \mapsto p f'(p)} is symmetric around {p=1/2}; but if one generalised the problem to allow for multiple-choice questions with more than two possible answers, with a grading scheme that depended only on the confidence assigned to the correct answer, the same analysis would in fact force {p f'(p)} to be constant in {p}; we leave this computation to the interested reader.) In other words, {f(p)} should be of the form {A \log p + B} for some {A,B}; by monotonicity we expect {A} to be positive. If we make the normalisation {f(1/2)=0} (so that no points are awarded for a {50-50} split in confidence between true and false) and {f(1)=1}, one arrives at the grading scheme

\displaystyle f(p) := \log_2(2p).

Thus, if a student believes that an answer is “true” with confidence {p} and “false” with confidence {1-p}, he or she will be awarded {\log_2(2p)} points when the correct answer is “true”, and {\log_2(2(1-p))} points if the correct answer is “false”. The following table gives some illustrative values for this scheme:

Confidence that answer is “true” Points awarded if answer is “true” Points awarded if answer is “false”
{0\%} {-\infty} {1.000}
{1\%} {-5.644} {0.9855}
{2\%} {-4.644} {0.9709}
{5\%} {-3.322} {0.9260}
{10\%} {-2.322} {0.8480}
{20\%} {-1.322} {0.6781}
{30\%} {-0.737} {0.4854}
{40\%} {-0.322} {0.2630}
{50\%} {0.000} {0.000}
{60\%} {0.2630} {-0.322}
{70\%} {0.4854} {-0.737}
{80\%} {0.6781} {-1.322}
{90\%} {0.8480} {-2.322}
{95\%} {0.9260} {-3.322}
{98\%} {0.9709} {-4.644}
{99\%} {0.9855} {-5.644}
{100\%} {1.000} {-\infty}

Note the large penalties for being extremely confident of an answer that ultimately turns out to be incorrect; in particular, answers of {100\%} confidence should be avoided unless one really is absolutely certain as to the correctness of one’s answer.
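
For concreteness, here is a short Python sketch of this scoring rule (the helper name is ad hoc); it reproduces the table entries above:

```python
import math

def score(confidence_in_true, answer_is_true):
    """Points awarded under f(p) = log2(2p), where p is the confidence placed
    on whichever answer turns out to be correct."""
    p = confidence_in_true if answer_is_true else 1.0 - confidence_in_true
    return -math.inf if p == 0.0 else math.log2(2.0 * p)

for c in [0.0, 0.1, 0.4, 0.5, 0.6, 0.9, 1.0]:
    print(f"confidence {c:4.0%}: true -> {score(c, True):+7.3f}, false -> {score(c, False):+7.3f}")
```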

The total grade given under such a scheme to a student {S} who answers each question {n} to be “true” with confidence {p_{S,n}}, and “false” with confidence {1-p_{S,n}}, is

\displaystyle \sum_{n: \hbox{ ans is true}} \log_2(2 p_{S,n} ) + \sum_{n: \hbox{ ans is false}} \log_2(2(1-p_{S,n})).

This grade can also be written as

\displaystyle N + \frac{1}{\log 2} \log {\mathcal L}

where

\displaystyle {\mathcal L} := \prod_{n: \hbox{ ans is true}} p_{S,n} \times \prod_{n: \hbox{ ans is false}} (1-p_{S,n})

is the likelihood of the student {S}‘s subjective probability model, given the outcome of the correct answers. Thus the grade system here has another natural interpretation, as being an affine rescaling of the log-likelihood. The incentive is thus for the student to maximise the likelihood of his or her own subjective model, which aligns well with standard practices in statistics. From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.

One could propose using the above grading scheme to evaluate predictions of binary events, such as an upcoming election with only two viable candidates, to see in hindsight just how effective each predictor was in calling these events. One difficulty in doing so is that many predictions do not come with explicit probabilities attached to them, and attaching a default confidence level of {100\%} to any prediction made without any such qualification would result in an automatic grade of {-\infty} if even one of these predictions turned out to be incorrect. But perhaps if a predictor refuses to attach a confidence level to his or her predictions, one can assign some default level {p} of confidence to these predictions, and then (using some suitable set of predictions from this predictor as “training data”) find the value of {p} that maximises this predictor’s grade. This level can then be used going forward as the default level of confidence to apply to any future predictions from this predictor.
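
Here is a small sketch of this fitting procedure on some hypothetical training data (a record of which predictions turned out to be correct); a short calculus exercise shows that the optimal default confidence is just the predictor’s empirical accuracy, which the grid search below recovers.

```python
import math

def total_grade(outcomes, p):
    """Total grade when every prediction is assigned the same default confidence p."""
    return sum(math.log2(2.0 * (p if correct else 1.0 - p)) for correct in outcomes)

# Hypothetical training data: True means the (unqualified) prediction was correct.
outcomes = [True, True, False, True, True, True, False, True, True, True]

best_p = max((i / 1000 for i in range(1, 1000)), key=lambda p: total_grade(outcomes, p))
print(best_p, sum(outcomes) / len(outcomes))  # both come out to 0.8
```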

The above grading scheme extends easily enough to multiple-choice questions. But one question I had trouble with was how to deal with uncertainty, in which the student does not know enough about a question to venture even a probability of being true or false. Here, it is natural to allow a student to leave a question blank (i.e. to answer “I don’t know”); a more advanced option would be to allow the student to enter his or her confidence level as an interval range (e.g. “I am between {50\%} and {70\%} confident that the answer is “true””). But now I do not have a good proposal for a grading scheme; once there is uncertainty in the student’s subjective model, the problem of that student maximising his or her expected grade becomes ill-posed due to the “unknown unknowns”, and so the previous criterion of avoiding perverse incentives becomes far less useful.

I recently learned about a curious operation on square matrices known as sweeping, which is used in numerical linear algebra (particularly in applications to statistics), as a useful and more robust variant of the usual Gaussian elimination operations seen in undergraduate linear algebra courses. Given an {n \times n} matrix {A := (a_{ij})_{1 \leq i,j \leq n}} (with, say, complex entries) and an index {1 \leq k \leq n}, with the entry {a_{kk}} non-zero, the sweep {\hbox{Sweep}_k[A] = (\hat a_{ij})_{1 \leq i,j \leq n}} of {A} at {k} is the matrix given by the formulae

\displaystyle  \hat a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}

\displaystyle  \hat a_{ik} := \frac{a_{ik}}{a_{kk}}

\displaystyle  \hat a_{kj} := \frac{a_{kj}}{a_{kk}}

\displaystyle  \hat a_{kk} := \frac{-1}{a_{kk}}

for all {i,j \in \{1,\dots,n\} \backslash \{k\}}. Thus for instance if {k=1}, and {A} is written in block form as

\displaystyle  A = \begin{pmatrix} a_{11} & X \\ Y & B \end{pmatrix} \ \ \ \ \ (1)

for some {1 \times n-1} row vector {X}, {n-1 \times 1} column vector {Y}, and {n-1 \times n-1} minor {B}, one has

\displaystyle  \hbox{Sweep}_1[A] = \begin{pmatrix} -1/a_{11} & X / a_{11} \\ Y/a_{11} & B - a_{11}^{-1} YX \end{pmatrix}. \ \ \ \ \ (2)

The inverse sweep operation {\hbox{Sweep}_k^{-1}[A] = (\check a_{ij})_{1 \leq i,j \leq n}} is given by a nearly identical set of formulae:

\displaystyle  \check a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}

\displaystyle  \check a_{ik} := -\frac{a_{ik}}{a_{kk}}

\displaystyle  \check a_{kj} := -\frac{a_{kj}}{a_{kk}}

\displaystyle  \check a_{kk} := \frac{-1}{a_{kk}}

for all {i,j \in \{1,\dots,n\} \backslash \{k\}}. One can check that these operations invert each other. Actually, each sweep turns out to have order {4}, so that {\hbox{Sweep}_k^{-1} = \hbox{Sweep}_k^3}: an inverse sweep performs the same operation as three forward sweeps. Sweeps also preserve the space of symmetric matrices (allowing one to cut down computational run time in that case by a factor of two), and behave well with respect to principal minors; a sweep of a principal minor is a principal minor of a sweep, after adjusting indices appropriately.
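
Here is a short NumPy sketch of the sweep and inverse sweep as defined by the formulae above, together with a numerical check that an inverse sweep agrees with three forward sweeps:

```python
import numpy as np

def sweep(A, k):
    """Sweep of the square matrix A at index k (0-based)."""
    A = np.asarray(A, dtype=float)
    akk = A[k, k]
    out = A - np.outer(A[:, k], A[k, :]) / akk  # a_ij - a_ik a_kj / a_kk
    out[:, k] = A[:, k] / akk                   # a_ik / a_kk
    out[k, :] = A[k, :] / akk                   # a_kj / a_kk
    out[k, k] = -1.0 / akk
    return out

def inverse_sweep(A, k):
    """Inverse sweep: identical except for the signs of row and column k."""
    A = np.asarray(A, dtype=float)
    akk = A[k, k]
    out = A - np.outer(A[:, k], A[k, :]) / akk
    out[:, k] = -A[:, k] / akk
    out[k, :] = -A[k, :] / akk
    out[k, k] = -1.0 / akk
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
assert np.allclose(inverse_sweep(sweep(A, 1), 1), A)                      # inverses of each other
assert np.allclose(sweep(sweep(sweep(A, 1), 1), 1), inverse_sweep(A, 1))  # order four
```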

Remarkably, the sweep operators all commute with each other: {\hbox{Sweep}_k \hbox{Sweep}_l = \hbox{Sweep}_l \hbox{Sweep}_k}. If {1 \leq k \leq n} and we perform the first {k} sweeps (in any order) to a matrix

\displaystyle  A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}

with {A_{11}} a {k \times k} minor, {A_{12}} a {k \times n-k} matrix, {A_{21}} a {n-k \times k} matrix, and {A_{22}} a {n-k \times n-k} matrix, one obtains the new matrix

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_k[A] = \begin{pmatrix} -A_{11}^{-1} & A_{11}^{-1} A_{12} \\ A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{pmatrix}.

Note the appearance of the Schur complement in the bottom right block. Thus, for instance, one can essentially invert a matrix {A} by performing all {n} sweeps:

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_n[A] = -A^{-1}.

If a matrix has the form

\displaystyle  A = \begin{pmatrix} B & X \\ Y & a \end{pmatrix}

for a {n-1 \times n-1} minor {B}, {n-1 \times 1} column vector {X}, {1 \times n-1} row vector {Y}, and scalar {a}, then performing the first {n-1} sweeps gives

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_{n-1}[A] = \begin{pmatrix} -B^{-1} & B^{-1} X \\ Y B^{-1} & a - Y B^{-1} X \end{pmatrix}

and all the components of this matrix are usable for various numerical linear algebra applications in statistics (e.g. in least squares regression). Given that sweeps behave well with inverses, it is perhaps not surprising that sweeps also behave well under determinants: the determinant of {A} can be factored as the product of the entry {a_{kk}} and the determinant of the {n-1 \times n-1} matrix formed from {\hbox{Sweep}_k[A]} by removing the {k^{th}} row and column. As a consequence, one can compute the determinant of {A} fairly efficiently (so long as the sweep operations don’t come close to dividing by zero) by sweeping the matrix for {k=1,\dots,n} in turn, and multiplying together the {kk^{th}} entry of the matrix just before the {k^{th}} sweep for {k=1,\dots,n} to obtain the determinant.
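
Continuing the sketch above (and reusing the hypothetical sweep helper defined there), one can verify both identities numerically:

```python
import numpy as np
# assumes the sweep() helper from the previous sketch

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

S = A.copy()
det = 1.0
for k in range(5):
    det *= S[k, k]   # kk entry just before the k-th sweep
    S = sweep(S, k)

assert np.allclose(S, -np.linalg.inv(A))   # Sweep_1 ... Sweep_n [A] = -A^{-1}
assert np.isclose(det, np.linalg.det(A))   # product of pivots gives det A
```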

It turns out that there is a simple geometric explanation for these seemingly magical properties of the sweep operation. Any {n \times n} matrix {A} creates a graph {\hbox{Graph}[A] := \{ (X, AX): X \in {\bf R}^n \}} (where we think of {{\bf R}^n} as the space of column vectors). This graph is an {n}-dimensional subspace of {{\bf R}^n \times {\bf R}^n}. Conversely, most subspaces of {{\bf R}^n \times {\bf R}^n} arise as graphs; there are some that fail the vertical line test, but these form a set of positive codimension.

We use {e_1,\dots,e_n,f_1,\dots,f_n} to denote the standard basis of {{\bf R}^n \times {\bf R}^n}, with {e_1,\dots,e_n} the standard basis for the first factor of {{\bf R}^n} and {f_1,\dots,f_n} the standard basis for the second factor. The operation of sweeping the {k^{th}} entry then corresponds to a ninety degree rotation {\hbox{Rot}_k: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n \times {\bf R}^n} in the {e_k,f_k} plane, that sends {f_k} to {e_k} (and {e_k} to {-f_k}), keeping all other basis vectors fixed: thus we have

\displaystyle  \hbox{Graph}[ \hbox{Sweep}_k[A] ] = \hbox{Rot}_k \hbox{Graph}[A]

for generic {n \times n} {A} (more precisely, those {A} with non-vanishing entry {a_{kk}}). For instance, if {k=1} and {A} is of the form (1), then {\hbox{Graph}[A]} is the set of tuples {(r,R,s,S) \in {\bf R} \times {\bf R}^{n-1} \times {\bf R} \times {\bf R}^{n-1}} obeying the equations

\displaystyle  a_{11} r + X R = s

\displaystyle  Y r + B R = S.

The image of {(r,R,s,S)} under {\hbox{Rot}_1} is {(s, R, -r, S)}. Since we can write the above system of equations (for {a_{11} \neq 0}) as

\displaystyle  \frac{-1}{a_{11}} s + \frac{X}{a_{11}} R = -r

\displaystyle  \frac{Y}{a_{11}} s + (B - a_{11}^{-1} YX) R = S

we see from (2) that {\hbox{Rot}_1 \hbox{Graph}[A]} is the graph of {\hbox{Sweep}_1[A]}. Thus the sweep operation is a multidimensional generalisation of the high school geometry fact that the line {y = mx} in the plane becomes {y = \frac{-1}{m} x} after applying a ninety degree rotation.

It is then an instructive exercise to use this geometric interpretation of the sweep operator to recover all the remarkable properties about these operations listed above. It is also useful to compare the geometric interpretation of sweeping as rotation of the graph to that of Gaussian elimination, which instead shears and reflects the graph by various elementary transformations (this is what is going on geometrically when one performs Gaussian elimination on an augmented matrix). Rotations are less distorting than shears, so one can see geometrically why sweeping can produce fewer numerical artefacts than Gaussian elimination.

Given two unit vectors {v,w} in a real inner product space, one can define the correlation between these vectors to be their inner product {\langle v, w \rangle}, or in more geometric terms, the cosine of the angle {\angle(v,w)} subtended by {v} and {w}. By the Cauchy-Schwarz inequality, this is a quantity between {-1} and {+1}, with the extreme positive correlation {+1} occurring when {v,w} are identical, the extreme negative correlation {-1} occurring when {v,w} are diametrically opposite, and the zero correlation {0} occurring when {v,w} are orthogonal. This notion is closely related to the notion of correlation between two non-constant square-integrable real-valued random variables {X,Y}, which is the same as the correlation between two unit vectors {v,w} lying in the Hilbert space {L^2(\Omega)} of square-integrable random variables, with {v} being the normalisation of {X} defined by subtracting off the mean {\mathbf{E} X} and then dividing by the standard deviation of {X}, and similarly for {w} and {Y}.

One can also define correlation for complex (Hermitian) inner product spaces by taking the real part {\hbox{Re} \langle , \rangle} of the complex inner product to recover a real inner product.

While reading the (highly recommended) recent popular maths book “How not to be wrong“, by my friend and co-author Jordan Ellenberg, I came across the (important) point that correlation is not necessarily transitive: if {X} correlates with {Y}, and {Y} correlates with {Z}, then this does not imply that {X} correlates with {Z}. A simple geometric example is provided by the three unit vectors

\displaystyle  u := (1,0); v := (\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}); w := (0,1)

in the Euclidean plane {{\bf R}^2}: {u} and {v} have a positive correlation of {\frac{1}{\sqrt{2}}}, as do {v} and {w}, but {u} and {w} are not correlated with each other. Or: for a typical undergraduate course, it is generally true that good exam scores are correlated with a deep understanding of the course material, and memorising from flash cards is correlated with good exam scores, but this does not imply that memorising flash cards is correlated with deep understanding of the course material.

However, there are at least two situations in which some partial version of transitivity of correlation can be recovered. The first is in the “99%” regime in which the correlations are very close to {1}: if {u,v,w} are unit vectors such that {u} is very highly correlated with {v}, and {v} is very highly correlated with {w}, then this does imply that {u} is very highly correlated with {w}. Indeed, from the identity

\displaystyle  \| u-v \| = 2^{1/2} (1 - \langle u,v\rangle)^{1/2}

(and similarly for {v-w} and {u-w}) and the triangle inequality

\displaystyle  \|u-w\| \leq \|u-v\| + \|v-w\|,

we see that

\displaystyle  (1 - \langle u,w \rangle)^{1/2} \leq (1 - \langle u,v\rangle)^{1/2} + (1 - \langle v,w\rangle)^{1/2}. \ \ \ \ \ (1)

Thus, for instance, if {\langle u, v \rangle \geq 1-\varepsilon} and {\langle v,w \rangle \geq 1-\varepsilon}, then {\langle u,w \rangle \geq 1-4\varepsilon}. This is of course closely related to (though slightly weaker than) the triangle inequality for angles:

\displaystyle  \angle(u,w) \leq \angle(u,v) + \angle(v,w).

Remark 1 (Thanks to Andrew Granville for conversations leading to this observation.) The inequality (1) also holds for sub-unit vectors, i.e. vectors {u,v,w} with {\|u\|, \|v\|, \|w\| \leq 1}. This comes by extending {u,v,w} in directions orthogonal to all three original vectors and to each other in order to make them unit vectors, enlarging the ambient Hilbert space {H} if necessary. More concretely, one can apply (1) to the unit vectors

\displaystyle  (u, \sqrt{1-\|u\|^2}, 0, 0), (v, 0, \sqrt{1-\|v\|^2}, 0), (w, 0, 0, \sqrt{1-\|w\|^2})

in {H \times {\bf R}^3}.

But even in the “{1\%}” regime in which correlations are very weak, there is still a version of transitivity of correlation, known as the van der Corput lemma, which basically asserts that if a unit vector {v} is correlated with many unit vectors {u_1,\dots,u_n}, then many of the pairs {u_i,u_j} will then be correlated with each other. Indeed, from the Cauchy-Schwarz inequality

\displaystyle  |\langle v, \sum_{i=1}^n u_i \rangle|^2 \leq \|v\|^2 \| \sum_{i=1}^n u_i \|^2

we see that

\displaystyle  (\sum_{i=1}^n \langle v, u_i \rangle)^2 \leq \sum_{1 \leq i,j \leq n} \langle u_i, u_j \rangle. \ \ \ \ \ (2)

Thus, for instance, if {\langle v, u_i \rangle \geq \varepsilon} for at least {\varepsilon n} values of {i=1,\dots,n}, then (after removing those indices {i} for which {\langle v, u_i \rangle < \varepsilon}) {\sum_{i,j} \langle u_i, u_j \rangle} must be at least {\varepsilon^4 n^2}, which implies that {\langle u_i, u_j \rangle \geq \varepsilon^4/2} for at least {\varepsilon^4 n^2/2} pairs {(i,j)}. Or as another example: if a random variable {X} exhibits at least {1\%} positive correlation with {n} other random variables {Y_1,\dots,Y_n}, then if {n > 10,000}, at least two distinct {Y_i,Y_j} must have positive correlation with each other (although this argument does not tell you which pair {Y_i,Y_j} are so correlated). Thus one can view this inequality as a sort of “pigeonhole principle” for correlation.
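
A quick numerical sanity check of inequality (2) with random unit vectors (a sketch only; the dimension, number of vectors, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def unit(x):
    return x / np.linalg.norm(x)

d, n = 50, 20
v = unit(rng.standard_normal(d))
u = [unit(rng.standard_normal(d)) for _ in range(n)]

lhs = sum(float(np.dot(v, ui)) for ui in u) ** 2
rhs = sum(float(np.dot(ui, uj)) for ui in u for uj in u)
assert lhs <= rhs + 1e-9   # inequality (2)
print(lhs, rhs)
```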

A similar argument (multiplying each {u_i} by an appropriate sign {\pm 1}) shows the related van der Corput inequality

\displaystyle  (\sum_{i=1}^n |\langle v, u_i \rangle|)^2 \leq \sum_{1 \leq i,j \leq n} |\langle u_i, u_j \rangle|, \ \ \ \ \ (3)

and this inequality is also true for complex inner product spaces. (Also, the {u_i} do not need to be unit vectors for this inequality to hold.)

Geometrically, the picture is this: if {v} positively correlates with all of the {u_1,\dots,u_n}, then the {u_1,\dots,u_n} are all squashed into a somewhat narrow cone centred at {v}. The cone is still wide enough to allow a few pairs {u_i, u_j} to be orthogonal (or even negatively correlated) with each other, but (when {n} is large enough) it is not wide enough to allow all of the {u_i,u_j} to be so widely separated. Remarkably, the bound here does not depend on the dimension of the ambient inner product space; while increasing the number of dimensions should in principle add more “room” to the cone, this effect is counteracted by the fact that in high dimensions, almost all pairs of vectors are close to orthogonal, and the exceptional pairs that are even weakly correlated to each other become exponentially rare. (See this previous blog post for some related discussion; in particular, Lemma 2 from that post is closely related to the van der Corput inequality presented here.)

A particularly common special case of the van der Corput inequality arises when {v} is a unit vector fixed by some unitary operator {T}, and the {u_i} are shifts {u_i = T^i u} of a single unit vector {u}. In this case, the inner products {\langle v, u_i \rangle} are all equal, and we arrive at the useful van der Corput inequality

\displaystyle  |\langle v, u \rangle|^2 \leq \frac{1}{n^2} \sum_{1 \leq i,j \leq n} |\langle T^i u, T^j u \rangle|. \ \ \ \ \ (4)

(In fact, one can even remove the absolute values from the right-hand side, by using (2) instead of (3).) Thus, to show that {v} has negligible correlation with {u}, it suffices to show that the shifts of {u} have negligible correlation with each other.

Here is a basic application of the van der Corput inequality:

Proposition 2 (Weyl equidistribution estimate) Let {P: {\bf Z} \rightarrow {\bf R}/{\bf Z}} be a polynomial with at least one non-constant coefficient irrational. Then one has

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e( P(n) ) = 0,

where {e(x) := e^{2\pi i x}}.

Note that this assertion implies the more general assertion

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e( kP(n) ) = 0

for any non-zero integer {k} (simply by replacing {P} by {kP}), which by the Weyl equidistribution criterion is equivalent to the sequence {P(1), P(2),\dots} being asymptotically equidistributed in {{\bf R}/{\bf Z}}.

Proof: We induct on the degree {d} of the polynomial {P}, which must be at least one. If {d} is equal to one, the claim is easily established from the geometric series formula, so suppose that {d>1} and that the claim has already been proven for {d-1}. If the top coefficient {a_d} of {P(n) = a_d n^d + \dots + a_0} is rational, say {a_d = \frac{p}{q}}, then by partitioning the natural numbers into residue classes modulo {q}, we see that the claim follows from the induction hypothesis; so we may assume that the top coefficient {a_d} is irrational.

In order to use the van der Corput inequality as stated above (i.e. in the formalism of inner product spaces) we will need a non-principal ultrafilter {p} (see e.g. this previous blog post for basic theory of ultrafilters); we leave it as an exercise to the reader to figure out how to present the argument below without the use of ultrafilters (or similar devices, such as Banach limits). The ultrafilter {p} defines an inner product {\langle, \rangle_p} on bounded complex sequences {z = (z_1,z_2,z_3,\dots)} by setting

\displaystyle  \langle z, w \rangle_p := \hbox{st} \lim_{N \rightarrow p} \frac{1}{N} \sum_{n=1}^N z_n \overline{w_n}.

Strictly speaking, this inner product is only positive semi-definite rather than positive definite, but one can quotient out by the null vectors to obtain a positive-definite inner product. To establish the claim, it will suffice to show that

\displaystyle  \langle 1, e(P) \rangle_p = 0

for every non-principal ultrafilter {p}.

Note that the space of bounded sequences (modulo null vectors) admits a shift {T}, defined by

\displaystyle  T (z_1,z_2,\dots) := (z_2,z_3,\dots).

This shift becomes unitary once we quotient out by null vectors, and the constant sequence {1} is clearly a unit vector that is invariant with respect to the shift. So by the van der Corput inequality, we have

\displaystyle  |\langle 1, e(P) \rangle_p|^2 \leq \frac{1}{n^2} \sum_{1 \leq i,j \leq n} |\langle T^i e(P), T^j e(P) \rangle_p|

for any {n \geq 1}. But we may rewrite {\langle T^i e(P), T^j e(P) \rangle_p = \langle 1, e(T^j P - T^i P) \rangle_p}. Then observe that if {i \neq j}, {T^j P - T^i P} is a polynomial of degree {d-1} whose degree {d-1} coefficient is irrational, so by induction hypothesis we have {\langle T^i e(P), T^j e(P) \rangle_p = 0} for {i \neq j}. For {i=j} we of course have {\langle T^i e(P), T^j e(P) \rangle_p = 1}, and so

\displaystyle  |\langle 1, e(P) \rangle_p|^2 \leq \frac{1}{n^2} \times n

for any {n}. Letting {n \rightarrow \infty}, we obtain the claim. \Box
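
As a numerical illustration of Proposition 2, here is a small sketch using the polynomial {P(n) = \sqrt{2} n^2} (whose top coefficient is irrational); the exponential averages visibly decay towards zero as {N} grows:

```python
import cmath
import math

def e(x):
    """e(x) = exp(2 pi i x)."""
    return cmath.exp(2j * math.pi * x)

def weyl_average(N):
    # P(n) = sqrt(2) * n^2 has an irrational non-constant coefficient
    return sum(e(math.sqrt(2) * n * n) for n in range(1, N + 1)) / N

for N in [10**2, 10**3, 10**4, 10**5]:
    print(N, abs(weyl_average(N)))
```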

A remarkable phenomenon in probability theory is that of universality – that many seemingly unrelated probability distributions, which ostensibly involve large numbers of unknown parameters, can end up converging to a universal law that may only depend on a small handful of parameters. One of the most famous examples of the universality phenomenon is the central limit theorem; another rich source of examples comes from random matrix theory, which is one of the areas of my own research.

Analogous universality phenomena also show up in empirical distributions – the distributions of a statistic {X} from a large population of “real-world” objects. Examples include Benford’s law, Zipf’s law, and the Pareto distribution (of which the Pareto principle or 80-20 law is a special case). These laws govern the asymptotic distribution of many statistics {X} which

  • (i) take values as positive numbers;
  • (ii) range over many different orders of magnitude;
  • (iii) arise from a complicated combination of largely independent factors (with different samples of {X} arising from different independent factors); and
  • (iv) have not been artificially rounded, truncated, or otherwise constrained in size.

Examples here include the population of countries or cities, the frequency of occurrence of words in a language, the mass of astronomical objects, or the net worth of individuals or corporations. The laws are then as follows:

  • Benford’s law: For {k=1,\ldots,9}, the proportion of {X} whose first digit is {k} is approximately {\log_{10} \frac{k+1}{k}}. Thus, for instance, {X} should have a first digit of {1} about {30\%} of the time, but a first digit of {9} only about {5\%} of the time.
  • Zipf’s law: The {n^{th}} largest value of {X} should obey an approximate power law, i.e. it should be approximately {C n^{-\alpha}} for the first few {n=1,2,3,\ldots} and some parameters {C, \alpha > 0}. In many cases, {\alpha} is close to {1}.
  • Pareto distribution: The proportion of {X} with at least {m} digits (before the decimal point), where {m} is above the median number of digits, should obey an approximate exponential law, i.e. be approximately of the form {c 10^{-m/\alpha}} for some {c, \alpha > 0}. Again, in many cases {\alpha} is close to {1}.

Benford’s law and Pareto distribution are stated here for base {10}, which is what we are most familiar with, but the laws hold for any base (after replacing all the occurrences of {10} in the above laws with the new base, of course). The laws tend to break down if the hypotheses (i)-(iv) are dropped. For instance, if the statistic {X} concentrates around its mean (as opposed to being spread over many orders of magnitude), then the normal distribution tends to be a much better model (as indicated by such results as the central limit theorem). If instead the various samples of the statistics are highly correlated with each other, then other laws can arise (for instance, the eigenvalues of a random matrix, as well as many empirically observed matrices, are correlated to each other, with the behaviour of the largest eigenvalues being governed by laws such as the Tracy-Widom law rather than Zipf’s law, and the bulk distribution being governed by laws such as the semicircular law rather than the normal or Pareto distributions).

To illustrate these laws, let us take as a data set the populations of 235 countries and regions of the world in 2007 (using the CIA world factbook); I have put the raw data here. This is a relatively small sample (cf. my previous post), but is already enough to discern these laws in action. For instance, here is how the data set tracks with Benford’s law (rounded to three significant figures):

{k} Countries Number Benford prediction
1 Angola, Anguilla, Aruba, Bangladesh, Belgium, Botswana, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Chile, China, Christmas Island, Cook Islands, Cuba, Czech Republic, Ecuador, Estonia, Gabon, (The) Gambia, Greece, Guam, Guatemala, Guinea-Bissau, India, Japan, Kazakhstan, Kiribati, Malawi, Mali, Mauritius, Mexico, (Federated States of) Micronesia, Nauru, Netherlands, Niger, Nigeria, Niue, Pakistan, Portugal, Russia, Rwanda, Saint Lucia, Saint Vincent and the Grenadines, Senegal, Serbia, Swaziland, Syria, Timor-Leste (East-Timor), Tokelau, Tonga, Trinidad and Tobago, Tunisia, Tuvalu, (U.S.) Virgin Islands, Wallis and Futuna, Zambia, Zimbabwe 59 ({25.1\%}) 71 ({30.1\%})
2 Armenia, Australia, Barbados, British Virgin Islands, Cote d’Ivoire, French Polynesia, Ghana, Gibraltar, Indonesia, Iraq, Jamaica, (North) Korea, Kosovo, Kuwait, Latvia, Lesotho, Macedonia, Madagascar, Malaysia, Mayotte, Mongolia, Mozambique, Namibia, Nepal, Netherlands Antilles, New Caledonia, Norfolk Island, Palau, Peru, Romania, Saint Martin, Samoa, San Marino, Sao Tome and Principe, Saudi Arabia, Slovenia, Sri Lanka, Svalbard, Taiwan, Turks and Caicos Islands, Uzbekistan, Vanuatu, Venezuela, Yemen 44 ({18.7\%}) 41 ({17.6\%})
3 Afghanistan, Albania, Algeria, (The) Bahamas, Belize, Brunei, Canada, (Rep. of the) Congo, Falkland Islands (Islas Malvinas), Iceland, Kenya, Lebanon, Liberia, Liechtenstein, Lithuania, Maldives, Mauritania, Monaco, Morocco, Oman, (Occupied) Palestinian Territory, Panama, Poland, Puerto Rico, Saint Kitts and Nevis, Uganda, United States of America, Uruguay, Western Sahara 29 ({12.3\%}) 29 ({12.5\%})
4 Argentina, Bosnia and Herzegovina, Burma (Myanmar), Cape Verde, Cayman Islands, Central African Republic, Colombia, Costa Rica, Croatia, Faroe Islands, Georgia, Ireland, (South) Korea, Luxembourg, Malta, Moldova, New Zealand, Norway, Pitcairn Islands, Singapore, South Africa, Spain, Sudan, Suriname, Tanzania, Ukraine, United Arab Emirates 27 ({11.4\%}) 22 ({9.7\%})
5 (Macao SAR) China, Cocos Islands, Denmark, Djibouti, Eritrea, Finland, Greenland, Italy, Kyrgyzstan, Montserrat, Nicaragua, Papua New Guinea, Slovakia, Solomon Islands, Togo, Turkmenistan 16 ({6.8\%}) 19 ({7.9\%})
6 American Samoa, Bermuda, Bhutan, (Dem. Rep. of the) Congo, Equatorial Guinea, France, Guernsey, Iran, Jordan, Laos, Libya, Marshall Islands, Montenegro, Paraguay, Sierra Leone, Thailand, United Kingdom 17 ({7.2\%}) 16 ({6.7\%})
7 Bahrain, Bulgaria, (Hong Kong SAR) China, Comoros, Cyprus, Dominica, El Salvador, Guyana, Honduras, Israel, (Isle of) Man, Saint Barthelemy, Saint Helena, Saint Pierre and Miquelon, Switzerland, Tajikistan, Turkey 17 ({7.2\%}) 14 ({5.8\%})
8 Andorra, Antigua and Barbuda, Austria, Azerbaijan, Benin, Burundi, Egypt, Ethiopia, Germany, Haiti, Holy See (Vatican City), Northern Mariana Islands, Qatar, Seychelles, Vietnam 15 ({6.4\%}) 12 ({5.1\%})
9 Belarus, Bolivia, Dominican Republic, Fiji, Grenada, Guinea, Hungary, Jersey, Philippines, Somalia, Sweden 11 ({4.5\%}) 11 ({4.6\%})
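
The Benford prediction column is just {235 \log_{10} \frac{k+1}{k}}, rounded to the nearest integer; in code (a minimal sketch with the sample size hard-coded):

```python
import math

sample_size = 235
for k in range(1, 10):
    p = math.log10((k + 1) / k)   # Benford proportion for leading digit k
    print(f"digit {k}: {p:5.1%} of {sample_size} ~ {p * sample_size:4.1f} countries")
```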

Here is how the same data tracks Zipf’s law for the first twenty values of {n}, with the parameters {C \approx 1.28 \times 10^9} and {\alpha \approx 1.03} (selected by log-linear regression), again rounding to three significant figures:

{n} Country Population Zipf prediction Deviation from prediction
1 China 1,330,000,000 1,280,000,000 {+4.1\%}
2 India 1,150,000,000 626,000,000 {+83.5\%}
3 USA 304,000,000 412,000,000 {-26.3\%}
4 Indonesia 238,000,000 307,000,000 {-22.5\%}
5 Brazil 196,000,000 244,000,000 {-19.4\%}
6 Pakistan 173,000,000 202,000,000 {-14.4\%}
7 Bangladesh 154,000,000 172,000,000 {-10.9\%}
8 Nigeria 146,000,000 150,000,000 {-2.6\%}
9 Russia 141,000,000 133,000,000 {+5.8\%}
10 Japan 128,000,000 120,000,000 {+6.7\%}
11 Mexico 110,000,000 108,000,000 {+1.7\%}
12 Philippines 96,100,000 98,900,000 {-2.9\%}
13 Vietnam 86,100,000 91,100,000 {-5.4\%}
14 Ethiopia 82,600,000 84,400,000 {-2.1\%}
15 Germany 82,400,000 78,600,000 {+4.8\%}
16 Egypt 81,700,000 73,500,000 {+11.1\%}
17 Turkey 71,900,000 69,100,000 {+4.1\%}
18 Congo 66,500,000 65,100,000 {+2.2\%}
19 Iran 65,900,000 61,600,000 {+6.9\%}
20 Thailand 65,500,000 58,400,000 {+12.1\%}

As one sees, Zipf’s law is not particularly precise at the extreme edge of the statistics (when {n} is very small), but becomes reasonably accurate (given the small sample size, and given that we are fitting twenty data points using only two parameters) for moderate sizes of {n}.
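
Here is a sketch of the log-linear regression used to select {C} and {\alpha} (fitted here to the rounded top-twenty populations from the table above, rather than to the raw data set; it recovers roughly the same parameters):

```python
import numpy as np

# rounded top-20 populations from the table, in units of people
pop = np.array([1330, 1150, 304, 238, 196, 173, 154, 146, 141, 128,
                110, 96.1, 86.1, 82.6, 82.4, 81.7, 71.9, 66.5, 65.9, 65.5]) * 1e6
n = np.arange(1, len(pop) + 1)

# fit log10(pop) ~ log10(C) - alpha * log10(n) by least squares
slope, intercept = np.polyfit(np.log10(n), np.log10(pop), 1)
alpha, C = -slope, 10 ** intercept
print(f"alpha ~ {alpha:.2f}, C ~ {C:.2e}")   # roughly 1.03 and 1.28e9
```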

This data set has too few scales in base {10} to illustrate the Pareto distribution effectively – over half of the country populations are either seven or eight digits in that base. But if we instead work in base {2}, then country populations range over a decent number of scales (the majority of countries have population between {2^{23}} and {2^{32}}), and we begin to see the law emerge; here {m} is now the number of digits in binary, and the best-fit parameters are {\alpha \approx 1.18} and {c \approx 1.7 \times 2^{26} / 235}:

{m} Countries with {\geq m} binary digit populations Number Pareto prediction
31 China, India 2 1
30 2 2
29 “, United States of America 3 5
28 “, Indonesia, Brazil, Pakistan, Bangladesh, Nigeria, Russia 9 8
27 “, Japan, Mexico, Philippines, Vietnam, Ethiopia, Germany, Egypt, Turkey 17 15
26 “, (Dem. Rep. of the) Congo, Iran, Thailand, France, United Kingdom, Italy, South Africa, (South) Korea, Burma (Myanmar), Ukraine, Colombia, Spain, Argentina, Sudan, Tanzania, Poland, Kenya, Morocco, Algeria 36 27
25 “, Canada, Afghanistan, Uganda, Nepal, Peru, Iraq, Saudi Arabia, Uzbekistan, Venezuela, Malaysia, (North) Korea, Ghana, Yemen, Taiwan, Romania, Mozambique, Sri Lanka, Australia, Cote d’Ivoire, Madagascar, Syria, Cameroon 58 49
24 “, Netherlands, Chile, Kazakhstan, Burkina Faso, Cambodia, Malawi, Ecuador, Niger, Guatemala, Senegal, Angola, Mali, Zambia, Cuba, Zimbabwe, Greece, Portugal, Belgium, Tunisia, Czech Republic, Rwanda, Serbia, Chad, Hungary, Guinea, Belarus, Somalia, Dominican Republic, Bolivia, Sweden, Haiti, Burundi, Benin 91 88
23 “, Austria, Azerbaijan, Honduras, Switzerland, Bulgaria, Tajikistan, Israel, El Salvador, (Hong Kong SAR) China, Paraguay, Laos, Sierra Leone, Jordan, Libya, Papua New Guinea, Togo, Nicaragua, Eritrea, Denmark, Slovakia, Kyrgyzstan, Finland, Turkmenistan, Norway, Georgia, United Arab Emirates, Singapore, Bosnia and Herzegovina, Croatia, Central African Republic, Moldova, Costa Rica 123 159

Thus, with each new scale, the number of countries introduced increases by a factor of a little less than {2}, on the average. This approximate doubling of countries with each new scale begins to falter at about the population {2^{23}} (i.e. at around {4} million), for the simple reason that one has begun to run out of countries. (Note that the median-population country in this set, Singapore, has a population with {23} binary digits.)

These laws are not merely interesting statistical curiosities; for instance, Benford’s law is often used to help detect fraudulent statistics (such as those arising from accounting fraud), as many such statistics are invented by choosing digits at random, and will therefore deviate significantly from Benford’s law. (This is nicely discussed in Robert Matthews’ New Scientist article “The power of one“; this article can also be found on the web at a number of other places.) In a somewhat analogous spirit, Zipf’s law and the Pareto distribution can be used to mathematically test various models of real-world systems (e.g. formation of astronomical objects, accumulation of wealth, population growth of countries, etc.), without necessarily having to fit all the parameters of that model with the actual data.

Being empirically observed phenomena rather than abstract mathematical facts, Benford’s law, Zipf’s law, and the Pareto distribution cannot be “proved” the same way a mathematical theorem can be proved. However, one can still support these laws mathematically in a number of ways, for instance showing how these laws are compatible with each other, and with other plausible hypotheses on the source of the data. In this post I would like to describe a number of ways (both technical and non-technical) in which one can do this; these arguments do not fully explain these laws (in particular, the empirical fact that the exponent {\alpha} in Zipf’s law or the Pareto distribution is often close to {1} is still quite a mysterious phenomenon), and do not always have the same universal range of applicability as these laws seem to have, but I hope that they do demonstrate that these laws are not completely arbitrary, and ought to have a satisfactory basis of mathematical support. Read the rest of this entry »

The U.S. presidential election is now only a few weeks away.  The politics of this election are of course interesting and important, but I do not want to discuss these topics here (there is not exactly a shortage of other venues for such a discussion), and would request that readers refrain from doing so in the comments to this post.  However, I thought it would be apropos to talk about some of the basic mathematics underlying electoral polling, and specifically to explain the fact, which can be highly unintuitive to those not well versed in statistics, that polls can be accurate even when sampling only a tiny fraction of the entire population.

Take for instance a nationwide poll of U.S. voters on which presidential candidate they intend to vote for.  A typical poll will ask a number n of randomly selected voters for their opinion; a typical value here is n = 1000.  In contrast, the total voting-eligible population of the U.S. – let’s call this set X – is about 200 million.  (The actual turnout in the election is likely to be closer to 100 million, but let’s ignore this fact for the sake of discussion.)  Thus, such a poll would sample about 0.0005% of the total population X – an incredibly tiny fraction.  Nevertheless, the margin of error (at the 95% confidence level) for such a poll, if conducted under idealised conditions (see below), is about 3%.  In other words, if we let p denote the proportion of the entire population X that will vote for a given candidate A, and let \overline{p} denote the proportion of the polled voters that will vote for A, then the event \overline{p}-0.03 \leq p \leq \overline{p}+0.03 will occur with probability at least 0.95.  Thus, for instance (and oversimplifying a little – see below), if the poll reports that 55% of respondents would vote for A, then the true percentage of the electorate that would vote for A has at least a 95% chance of lying between 52% and 58%.  Larger polls will of course give a smaller margin of error; for instance the margin of error for an (idealised) poll of 2,000 voters is about 2%.

I’ll give a rigorous proof of a weaker version of the above statement (giving a margin of error of about 7%, rather than 3%) in an appendix at the end of this post.  But the main point of my post here is a little different, namely to address the common misconception that the accuracy of a poll is a function of the relative sample size rather than the absolute sample size, which would suggest that a poll involving only 0.0005% of the population could not possibly have a margin of error as low as 3%.  I also want to point out some limitations of the mathematical analysis; depending on the methodology and the context, some polls involving 1000 respondents may have a much higher margin of error than the idealised rate of 3%.
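
For reference, the 3% and 2% figures quoted above are consistent with the standard worst-case normal approximation {1.96 \sqrt{p(1-p)/n} \leq 0.98/\sqrt{n}} for the margin of error; here is a minimal sketch of that formula (note that the population size does not enter at all):

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case (p = 1/2) margin of error at the 95% confidence level
    for an idealised simple random sample of size n."""
    return z * 0.5 / math.sqrt(n)

for n in [100, 500, 1000, 2000, 10000]:
    print(f"n = {n:>6d}: margin of error ~ {margin_of_error(n):.1%}")
```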

Read the rest of this entry »
